manila-10.0.0/doc/requirements.txt

# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
openstackdocstheme>=1.31.2 # Apache-2.0
reno>=2.5.0 # Apache-2.0
doc8>=0.6.0 # Apache-2.0
sphinx!=1.6.6,!=1.6.7,!=2.1.0,>=1.6.2 # BSD
mock>=2.0.0 # BSD
os-api-ref>=1.4.0 # Apache-2.0
ddt>=1.0.1 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
oslotest>=3.2.0 # Apache-2.0

manila-10.0.0/doc/source/cli/index.rst

..
      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

Command Line Interface
----------------------

.. toctree::
   :maxdepth: 1

   manila
   manila-manage
   manila-status

manila-10.0.0/doc/source/cli/manila.rst

.. ###################################################
.. ##  WARNING  ######################################
.. ##############  WARNING  ##########################
.. ##########################  WARNING  ##############
.. ######################################  WARNING  ##
.. ###################################################
.. ###################################################
.. ##
.. This file is tool-generated. Do not edit manually.
.. http://docs.openstack.org/contributor-guide/
.. doc-tools/cli-reference.html
.. ##
.. ##  WARNING  ######################################
.. ##############  WARNING  ##########################
.. ##########################  WARNING  ##############
.. ######################################  WARNING  ##
.. ###################################################

========================================================
Shared File Systems service (manila) command-line client
========================================================

The manila client is the command-line interface (CLI) for the Shared File
Systems service (manila) API and its extensions.

This chapter documents :command:`manila` version ``1.16.0``.

For help on a specific :command:`manila` command, enter:

.. code-block:: console

   $ manila help COMMAND

.. _manila_command_usage:

manila usage
~~~~~~~~~~~~

..
code-block:: console usage: manila [--version] [-d] [--os-cache] [--os-reset-cache] [--os-user-id ] [--os-username ] [--os-password ] [--os-tenant-name ] [--os-project-name ] [--os-tenant-id ] [--os-project-id ] [--os-user-domain-id ] [--os-user-domain-name ] [--os-project-domain-id ] [--os-project-domain-name ] [--os-auth-url ] [--os-region-name ] [--os-token ] [--bypass-url ] [--service-type ] [--service-name ] [--share-service-name ] [--endpoint-type ] [--os-share-api-version ] [--os-cacert ] [--retries ] [--os-cert ] ... **Subcommands:** ``absolute-limits`` Print a list of absolute limits for a user. ``access-allow`` Allow access to the share. ``access-deny`` Deny access to a share. ``access-list`` Show access list for share. ``api-version`` Display the API version information. ``availability-zone-list`` List all availability zones. ``create`` Creates a new share (NFS, CIFS, CephFS, GlusterFS or HDFS). ``credentials`` Show user credentials returned from auth. ``delete`` Remove one or more shares. ``endpoints`` Discover endpoints that get returned from the authenticate services. ``extend`` Increases the size of an existing share. ``extra-specs-list`` Print a list of current 'share types and extra specs' (Admin Only). ``force-delete`` Attempt force-delete of share, regardless of state (Admin only). ``list`` List NAS shares with filters. ``manage`` Manage share not handled by Manila (Admin only). ``message-delete`` Remove one or more messages. ``message-list`` Lists all messages. ``message-show`` Show message's details. ``metadata`` Set or delete metadata on a share. ``metadata-show`` Show metadata of given share. ``metadata-update-all`` Update all metadata of a share. ``migration-cancel`` Cancels migration of a given share when copying (Admin only, Experimental). ``migration-complete`` Completes migration for a given share (Admin only, Experimental). ``migration-get-progress`` Gets migration progress of a given share when copying (Admin only, Experimental). ``migration-start`` Migrates share to a new host (Admin only, Experimental). ``pool-list`` List all backend storage pools known to the scheduler (Admin only). ``quota-class-show`` List the quotas for a quota class. ``quota-class-update`` Update the quotas for a quota class (Admin only). ``quota-defaults`` List the default quotas for a tenant. ``quota-delete`` Delete quota for a tenant/user. The quota will revert back to default (Admin only). ``quota-show`` List the quotas for a tenant/user. ``quota-update`` Update the quotas for a tenant/user (Admin only). ``rate-limits`` Print a list of rate limits for a user. ``reset-state`` Explicitly update the state of a share (Admin only). ``reset-task-state`` Explicitly update the task state of a share (Admin only, Experimental). ``revert-to-snapshot`` Revert a share to the specified snapshot. ``security-service-create`` Create security service used by tenant. ``security-service-delete`` Delete one or more security services. ``security-service-list`` Get a list of security services. ``security-service-show`` Show security service. ``security-service-update`` Update security service. ``service-disable`` Disables 'manila-share' or 'manila-scheduler' services (Admin only). ``service-enable`` Enables 'manila-share' or 'manila-scheduler' services (Admin only). ``service-list`` List all services (Admin only). ``share-export-location-list`` List export locations of a given share. ``share-export-location-show`` Show export location of the share. 
``share-group-create`` Creates a new share group (Experimental). ``share-group-delete`` Remove one or more share groups (Experimental). ``share-group-list`` List share groups with filters (Experimental). ``share-group-reset-state`` Explicitly update the state of a share group (Admin only, Experimental). ``share-group-show`` Show details about a share group (Experimental). ``share-group-snapshot-create`` Creates a new share group snapshot (Experimental). ``share-group-snapshot-delete`` Remove one or more share group snapshots (Experimental). ``share-group-snapshot-list`` List share group snapshots with filters (Experimental). ``share-group-snapshot-list-members`` List members of a share group snapshot (Experimental). ``share-group-snapshot-reset-state`` Explicitly update the state of a share group snapshot (Admin only, Experimental). ``share-group-snapshot-show`` Show details about a share group snapshot (Experimental). ``share-group-snapshot-update`` Update a share group snapshot (Experimental). ``share-group-type-access-add`` Adds share group type access for the given project (Admin only). ``share-group-type-access-list`` Print access information about a share group type (Admin only). ``share-group-type-access-remove`` Removes share group type access for the given project (Admin only). ``share-group-type-create`` Create a new share group type (Admin only). ``share-group-type-delete`` Delete a specific share group type (Admin only). ``share-group-type-key`` Set or unset group_spec for a share group type (Admin only). ``share-group-type-list`` Print a list of available 'share group types'. ``share-group-type-specs-list`` Print a list of 'share group types specs' (Admin Only). ``share-group-update`` Update a share group (Experimental). ``share-instance-export-location-list`` List export locations of a given share instance. ``share-instance-export-location-show`` Show export location for the share instance. ``share-instance-force-delete`` Force-delete the share instance, regardless of state (Admin only). ``share-instance-list`` List share instances (Admin only). ``share-instance-reset-state`` Explicitly update the state of a share instance (Admin only). ``share-instance-show`` Show details about a share instance (Admin only). ``share-network-create`` Create description for network used by the tenant. ``share-network-delete`` Delete one or more share networks. ``share-network-list`` Get a list of network info. ``share-network-security-service-add`` Associate security service with share network. ``share-network-security-service-list`` Get list of security services associated with a given share network. ``share-network-security-service-remove`` Dissociate security service from share network. ``share-network-show`` Get a description for network used by the tenant. ``share-network-update`` Update share network data. ``share-replica-create`` Create a share replica (Experimental). ``share-replica-delete`` Remove one or more share replicas (Experimental). ``share-replica-list`` List share replicas (Experimental). ``share-replica-promote`` Promote specified replica to 'active' replica_state (Experimental). ``share-replica-reset-replica-state`` Explicitly update the 'replica_state' of a share replica (Experimental). ``share-replica-reset-state`` Explicitly update the 'status' of a share replica (Experimental). ``share-replica-resync`` Attempt to update the share replica with its 'active' mirror (Experimental). ``share-replica-show`` Show details about a replica (Experimental). 
``share-server-delete`` Delete one or more share servers (Admin only). ``share-server-details`` Show share server details (Admin only). ``share-server-list`` List all share servers (Admin only). ``share-server-show`` Show share server info (Admin only). ``show`` Show details about a NAS share. ``shrink`` Decreases the size of an existing share. ``snapshot-access-allow`` Allow read only access to a snapshot. ``snapshot-access-deny`` Deny access to a snapshot. ``snapshot-access-list`` Show access list for a snapshot. ``snapshot-create`` Add a new snapshot. ``snapshot-delete`` Remove one or more snapshots. ``snapshot-export-location-list`` List export locations of a given snapshot. ``snapshot-export-location-show`` Show export location of the share snapshot. ``snapshot-force-delete`` Attempt force-deletion of one or more snapshots. Regardless of the state (Admin only). ``snapshot-instance-export-location-list`` List export locations of a given snapshot instance. ``snapshot-instance-export-location-show`` Show export location of the share instance snapshot. ``snapshot-instance-list`` List share snapshot instances. ``snapshot-instance-reset-state`` Explicitly update the state of a share snapshot instance. ``snapshot-instance-show`` Show details about a share snapshot instance. ``snapshot-list`` List all the snapshots. ``snapshot-manage`` Manage share snapshot not handled by Manila (Admin only). ``snapshot-rename`` Rename a snapshot. ``snapshot-reset-state`` Explicitly update the state of a snapshot (Admin only). ``snapshot-show`` Show details about a snapshot. ``snapshot-unmanage`` Unmanage one or more share snapshots (Admin only). ``type-access-add`` Adds share type access for the given project (Admin only). ``type-access-list`` Print access information about the given share type (Admin only). ``type-access-remove`` Removes share type access for the given project (Admin only). ``type-create`` Create a new share type (Admin only). ``type-delete`` Delete one or more specific share types (Admin only). ``type-key`` Set or unset extra_spec for a share type (Admin only). ``type-list`` Print a list of available 'share types'. ``unmanage`` Unmanage share (Admin only). ``update`` Rename a share. ``bash-completion`` Print arguments for bash_completion. Prints all of the commands and options to stdout so that the manila.bash_completion script doesn't have to hard code them. ``help`` Display help about this program or one of its subcommands. ``list-extensions`` List all the os-api extensions that are available. .. _manila_command_options: manila optional arguments ~~~~~~~~~~~~~~~~~~~~~~~~~ ``--version`` show program's version number and exit ``-d, --debug`` Print debugging output. ``--os-cache`` Use the auth token cache. Defaults to ``env[OS_CACHE]``. ``--os-reset-cache`` Delete cached password and auth token. ``--os-user-id `` Defaults to env [OS_USER_ID]. ``--os-username `` Defaults to ``env[OS_USERNAME]``. ``--os-password `` Defaults to ``env[OS_PASSWORD]``. ``--os-tenant-name `` Defaults to ``env[OS_TENANT_NAME]``. ``--os-project-name `` Another way to specify tenant name. This option is mutually exclusive with --os-tenant-name. Defaults to ``env[OS_PROJECT_NAME]``. ``--os-tenant-id `` Defaults to ``env[OS_TENANT_ID]``. ``--os-project-id `` Another way to specify tenant ID. This option is mutually exclusive with --os-tenant-id. Defaults to ``env[OS_PROJECT_ID]``. ``--os-user-domain-id `` OpenStack user domain ID. Defaults to ``env[OS_USER_DOMAIN_ID]``. 
``--os-user-domain-name `` OpenStack user domain name. Defaults to ``env[OS_USER_DOMAIN_NAME]``. ``--os-project-domain-id `` Defaults to ``env[OS_PROJECT_DOMAIN_ID]``. ``--os-project-domain-name `` Defaults to ``env[OS_PROJECT_DOMAIN_NAME]``. ``--os-auth-url `` Defaults to ``env[OS_AUTH_URL]``. ``--os-region-name `` Defaults to ``env[OS_REGION_NAME]``. ``--os-token `` Defaults to ``env[OS_TOKEN]``. ``--bypass-url `` Use this API endpoint instead of the Service Catalog. Defaults to ``env[OS_MANILA_BYPASS_URL]``. ``--service-type `` Defaults to compute for most actions. ``--service-name `` Defaults to ``env[OS_MANILA_SERVICE_NAME]``. ``--share-service-name `` Defaults to ``env[OS_MANILA_SHARE_SERVICE_NAME]``. ``--endpoint-type `` Defaults to ``env[OS_MANILA_ENDPOINT_TYPE]`` or publicURL. ``--os-share-api-version `` Accepts 1.x to override default to ``env[OS_SHARE_API_VERSION]``. ``--os-cacert `` Specify a CA bundle file to use in verifying a TLS (https) server certificate. Defaults to ``env[OS_CACERT]``. ``--retries `` Number of retries. ``--os-cert `` Defaults to ``env[OS_CERT]``. .. _manila_absolute-limits: manila absolute-limits ---------------------- .. code-block:: console usage: manila absolute-limits Print a list of absolute limits for a user. .. _manila_access-allow: manila access-allow ------------------- .. code-block:: console usage: manila access-allow [--access-level ] Allow access to the share. **Positional arguments:** ```` Name or ID of the NAS share to modify. ```` Access rule type (only "ip", "user"(user or group), "cert" or "cephx" are supported). ```` Value that defines access. **Optional arguments:** ``--access-level , --access_level `` Share access level ("rw" and "ro" access levels are supported). Defaults to rw. .. _manila_access-deny: manila access-deny ------------------ .. code-block:: console usage: manila access-deny Deny access to a share. **Positional arguments:** ```` Name or ID of the NAS share to modify. ```` ID of the access rule to be deleted. .. _manila_access-list: manila access-list ------------------ .. code-block:: console usage: manila access-list [--columns ] Show access list for share. **Positional arguments:** ```` Name or ID of the share. **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "access_type,access_to". .. _manila_api-version: manila api-version ------------------ .. code-block:: console usage: manila api-version Display the API version information. .. _manila_availability-zone-list: manila availability-zone-list ----------------------------- .. code-block:: console usage: manila availability-zone-list [--columns ] List all availability zones. **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "id,name". .. _manila_create: manila create ------------- .. code-block:: console usage: manila create [--snapshot-id ] [--name ] [--metadata [ [ ...]]] [--share-network ] [--description ] [--share-type ] [--public] [--availability-zone ] [--share-group ] Creates a new share (NFS, CIFS, CephFS, GlusterFS or HDFS). **Positional arguments:** ```` Share protocol (NFS, CIFS, CephFS, GlusterFS or HDFS). ```` Share size in GiB. **Optional arguments:** ``--snapshot-id , --snapshot_id `` Optional snapshot ID to create the share from. (Default=None) ``--name `` Optional share name. (Default=None) ``--metadata [ [ ...]]`` Metadata key=value pairs (Optional, Default=None). ``--share-network , --share_network `` Optional network info ID or name. 
``--description `` Optional share description. (Default=None) ``--share-type , --share_type , --volume-type , --volume_type `` Optional share type. Use of optional volume type is deprecated. (Default=None) ``--public`` Level of visibility for share. Defines whether other tenants are able to see it or not. ``--availability-zone , --availability_zone , --az `` Availability zone in which share should be created. ``--share-group , --share_group , --group `` Optional share group name or ID in which to create the share (Experimental, Default=None). .. _manila_credentials: manila credentials ------------------ .. code-block:: console usage: manila credentials Show user credentials returned from auth. .. _manila_delete: manila delete ------------- .. code-block:: console usage: manila delete [--share-group ] [ ...] Remove one or more shares. **Positional arguments:** ```` Name or ID of the share(s). **Optional arguments:** ``--share-group , --share_group , --group `` Optional share group name or ID which contains the share (Experimental, Default=None). .. _manila_endpoints: manila endpoints ---------------- .. code-block:: console usage: manila endpoints Discover endpoints that get returned from the authenticate services. .. _manila_extend: manila extend ------------- .. code-block:: console usage: manila extend Increases the size of an existing share. **Positional arguments:** ```` Name or ID of share to extend. ```` New size of share, in GiBs. .. _manila_extra-specs-list: manila extra-specs-list ----------------------- .. code-block:: console usage: manila extra-specs-list [--columns ] Print a list of current 'share types and extra specs' (Admin Only). **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "id,name". .. _manila_force-delete: manila force-delete ------------------- .. code-block:: console usage: manila force-delete [ ...] Attempt force-delete of share, regardless of state (Admin only). **Positional arguments:** ```` Name or ID of the share(s) to force delete. .. _manila_list: manila list ----------- .. code-block:: console usage: manila list [--all-tenants [<0|1>]] [--name ] [--status ] [--share-server-id ] [--metadata [ [ ...]]] [--extra-specs [ [ ...]]] [--share-type ] [--limit ] [--offset ] [--sort-key ] [--sort-dir ] [--snapshot ] [--host ] [--share-network ] [--project-id ] [--public] [--share-group ] [--columns ] List NAS shares with filters. **Optional arguments:** ``--all-tenants [<0|1>]`` Display information from all tenants (Admin only). ``--name `` Filter results by name. ``--status `` Filter results by status. ``--share-server-id , --share-server_id , --share_server-id , --share_server_id `` Filter results by share server ID (Admin only). ``--metadata [ [ ...]]`` Filters results by a metadata key and value. OPTIONAL: Default=None. ``--extra-specs [ [ ...]], --extra_specs [ [ ...]]`` Filters results by an extra specs key and value of share type that was used for share creation. OPTIONAL: Default=None. ``--share-type , --volume-type , --share_type , --share-type-id , --volume-type-id , --share-type_id , --share_type-id , --share_type_id , --volume_type , --volume_type_id `` Filter results by a share type id or name that was used for share creation. ``--limit `` Maximum number of shares to return. OPTIONAL: Default=None. ``--offset `` Set offset to define start point of share listing. OPTIONAL: Default=None. 
``--sort-key , --sort_key `` Key to be sorted, available keys are ('id', 'status', 'size', 'host', 'share_proto', 'availability_zone', 'user_id', 'project_id', 'created_at', 'updated_at', 'display_name', 'name', 'share_type_id', 'share_type', 'share_network_id', 'share_network', 'snapshot_id', 'snapshot'). OPTIONAL: Default=None. ``--sort-dir , --sort_dir `` Sort direction, available values are ('asc', 'desc'). OPTIONAL: Default=None. ``--snapshot `` Filter results by snapshot name or id, that was used for share. ``--host `` Filter results by host. ``--share-network , --share_network `` Filter results by share-network name or id. ``--project-id , --project_id `` Filter results by project id. Useful with set key '--all-tenants'. ``--public`` Add public shares from all tenants to result. ``--share-group , --share_group , --group `` Filter results by share group name or ID (Experimental, Default=None). ``--columns `` Comma separated list of columns to be displayed example --columns "export_location,is public". .. _manila_list-extensions: manila list-extensions ---------------------- .. code-block:: console usage: manila list-extensions List all the os-api extensions that are available. .. _manila_manage: manila manage ------------- .. code-block:: console usage: manila manage [--name ] [--description ] [--share_type ] [--driver_options [ [ ...]]] [--public] Manage share not handled by Manila (Admin only). **Positional arguments:** ```` manage-share service host: some.host@driver#pool. ```` Protocol of the share to manage, such as NFS or CIFS. ```` Share export path, NFS share such as: 10.0.0.1:/example_path, CIFS share such as: \\\\10.0.0.1\\example_cifs_share. **Optional arguments:** ``--name `` Optional share name. (Default=None) ``--description `` Optional share description. (Default=None) ``--share_type , --share-type `` Optional share type assigned to share. (Default=None) ``--driver_options [ [ ...]], --driver-options [ [ ...]]`` Driver option key=value pairs (Optional, Default=None). ``--public`` Level of visibility for share. Defines whether other tenants are able to see it or not. Available only for microversion >= 2.8. .. _manila_message-delete: manila message-delete ---------------------- .. code-block:: console usage: manila message-delete [ ...] Remove one or more messages. **Positional arguments:** ```` ID of the message(s). .. _manila_message-list: manila message-list ---------------------- .. code-block:: console usage: manila message-list [--resource_id ] [--resource_type ] [--action_id ] [--detail_id ] [--request_id ] [--level ] [--limit ] [--offset ] [--sort-key ] [--sort-dir ] [--columns ] Lists all messages. **Optional arguments:** ``--resource_id , --resource-id , --resource `` Filters results by a resource uuid. (Default=None). ``--resource_type , --resource-type `` Filters results by a resource type. (Default=None). Example: "manila message-list --resource_type share" ``--action_id , --action-id , --action `` Filters results by action id. (Default=None). ``--detail_id , --detail-id , --detail `` Filters results by detail id. (Default=None). ``--request_id , --request-id , --request `` Filters results by request id. (Default=None). ``--level , --message_level , --message-level `` Filters results by the message level. (Default=None). Example: "manila message-list --level ERROR". ``--limit `` Maximum number of messages to return. (Default=None) ``--offset `` Start position of message listing. 
``--sort-key , --sort_key `` Key to be sorted, available keys are ('id', 'project_id', 'request_id', 'resource_type', 'action_id', 'detail_id', 'resource_id', 'message_level', 'expires_at', 'request_id', 'created_at'). (Default=desc). ``--sort-dir , --sort_dir `` Sort direction, available values are ('asc', 'desc'). OPTIONAL: Default=None. ``--columns `` Comma separated list of columns to be displayed example --columns "resource_id, user_message". .. _manila_message-show: manila message-show ---------------------- .. code-block:: console usage: manila message-show Show details about a message. **Positional arguments:** ```` ID of the message. .. _manila_metadata: manila metadata --------------- .. code-block:: console usage: manila metadata [ ...] Set or delete metadata on a share. **Positional arguments:** ```` Name or ID of the share to update metadata on. ```` Actions: 'set' or 'unset'. ```` Metadata to set or unset (key is only necessary on unset). .. _manila_metadata-show: manila metadata-show -------------------- .. code-block:: console usage: manila metadata-show Show metadata of given share. **Positional arguments:** ```` Name or ID of the share. .. _manila_metadata-update-all: manila metadata-update-all -------------------------- .. code-block:: console usage: manila metadata-update-all [ ...] Update all metadata of a share. **Positional arguments:** ```` Name or ID of the share to update metadata on. ```` Metadata entry or entries to update. .. _manila_migration-cancel: manila migration-cancel ----------------------- .. code-block:: console usage: manila migration-cancel Cancels migration of a given share when copying (Admin only, Experimental). **Positional arguments:** ```` Name or ID of share to cancel migration. .. _manila_migration-complete: manila migration-complete ------------------------- .. code-block:: console usage: manila migration-complete Completes migration for a given share (Admin only, Experimental). **Positional arguments:** ```` Name or ID of share to complete migration. .. _manila_migration-get-progress: manila migration-get-progress ----------------------------- .. code-block:: console usage: manila migration-get-progress Gets migration progress of a given share when copying (Admin only, Experimental). **Positional arguments:** ```` Name or ID of the share to get share migration progress information. .. _manila_migration-start: manila migration-start ---------------------- .. code-block:: console usage: manila migration-start [--force_host_assisted_migration ] --preserve-metadata --preserve-snapshots --writable --nondisruptive [--new_share_network ] [--new_share_type ] Migrates share to a new host (Admin only, Experimental). **Positional arguments:** ```` Name or ID of share to migrate. ```` Destination host where share will be migrated to. Use the format 'host@backend#pool'. **Optional arguments:** ``--force_host_assisted_migration , --force-host-assisted-migration `` Enforces the use of the host-assisted migration approach, which bypasses driver optimizations. Default=False. ``--preserve-metadata , --preserve_metadata `` Enforces migration to preserve all file metadata when moving its contents. If set to True, host-assisted migration will not be attempted. ``--preserve-snapshots , --preserve_snapshots `` Enforces migration of the share snapshots to the destination. If set to True, host-assisted migration will not be attempted. ``--writable `` Enforces migration to keep the share writable while contents are being moved. 
If set to True, host-assisted migration will not be attempted. ``--nondisruptive `` Enforces migration to be nondisruptive. If set to True, host-assisted migration will not be attempted. ``--new_share_network , --new-share-network `` Specify the new share network for the share. Do not specify this parameter if the migrating share has to be retained within its current share network. ``--new_share_type , --new-share-type `` Specify the new share type for the share. Do not specify this parameter if the migrating share has to be retained with its current share type. .. _manila_pool-list: manila pool-list ---------------- .. code-block:: console usage: manila pool-list [--host ] [--backend ] [--pool ] [--columns ] [--detail] [--share-type ] List all backend storage pools known to the scheduler (Admin only). **Optional arguments:** ``--host `` Filter results by host name. Regular expressions are supported. ``--backend `` Filter results by backend name. Regular expressions are supported. ``--pool `` Filter results by pool name. Regular expressions are supported. ``--columns `` Comma separated list of columns to be displayed example --columns "name,host". ``--detail, --detailed`` Show detailed information about pools. (Default=False) ``--share-type , --share_type , --share-type-id , --share_type_id `` Filter results by share type name or ID. (Default=None)Available only for microversion >= 2.23. .. _manila_quota-class-show: manila quota-class-show ----------------------- .. code-block:: console usage: manila quota-class-show List the quotas for a quota class. **Positional arguments:** ```` Name of quota class to list the quotas for. .. _manila_quota-class-update: manila quota-class-update ------------------------- .. code-block:: console usage: manila quota-class-update [--shares ] [--snapshots ] [--gigabytes ] [--snapshot-gigabytes ] [--share-networks ] [--share-groups ] [--share-group-snapshots ] Update the quotas for a quota class (Admin only). **Positional arguments:** ```` Name of quota class to set the quotas for. **Optional arguments:** ``--shares `` New value for the "shares" quota. ``--snapshots `` New value for the "snapshots" quota. ``--gigabytes `` New value for the "gigabytes" quota. ``--snapshot-gigabytes , --snapshot_gigabytes `` New value for the "snapshot_gigabytes" quota. ``--share-networks , --share_networks `` New value for the "share_networks" quota. ``--share-groups , --share_groups `` New value for the "share_groups" quota. ``--share-group-snapshots , --share_group_snapshots `` New value for the "share_group_snapshots" quota. .. _manila_quota-defaults: manila quota-defaults --------------------- .. code-block:: console usage: manila quota-defaults [--tenant ] List the default quotas for a tenant. **Optional arguments:** ``--tenant `` ID of tenant to list the default quotas for. .. _manila_quota-delete: manila quota-delete ------------------- .. code-block:: console usage: manila quota-delete [--tenant ] [--user ] [--share-type ] Delete quota for a tenant/user. The quota will revert back to default (Admin only). **Optional arguments:** ``--tenant `` ID of tenant to delete quota for. ``--user `` ID of user to delete quota for. ``--share-type , --share_type `` UUID or name of a share type to set the quotas for. Optional. Mutually exclusive with '--user-id'. Available only for microversion >= 2.39 .. _manila_quota-show: manila quota-show ----------------- .. 
code-block:: console usage: manila quota-show [--tenant ] [--user ] [--share-type ] [--detail] List the quotas for a tenant/user. **Optional arguments:** ``--tenant `` ID of tenant to list the quotas for. ``--user `` ID of user to list the quotas for. ``--share-type , --share_type `` UUID or name of a share type to set the quotas for. Optional. Mutually exclusive with '--user-id'. Available only for microversion >= 2.39 ``--detail`` Optional flag to indicate whether to show quota in detail. Default false, available only for microversion >= 2.25. .. _manila_quota-update: manila quota-update ------------------- .. code-block:: console usage: manila quota-update [--user ] [--shares ] [--snapshots ] [--gigabytes ] [--snapshot-gigabytes ] [--share-networks ] [--share-groups ] [--share-group-snapshots ] [--share-type ] [--force] Update the quotas for a tenant/user (Admin only). **Positional arguments:** ```` UUID of tenant to set the quotas for. **Optional arguments:** ``--user `` ID of user to set the quotas for. ``--shares `` New value for the "shares" quota. ``--snapshots `` New value for the "snapshots" quota. ``--gigabytes `` New value for the "gigabytes" quota. ``--snapshot-gigabytes , --snapshot_gigabytes `` New value for the "snapshot_gigabytes" quota. ``--share-networks , --share_networks `` New value for the "share_networks" quota. ``--share-groups , --share_groups `` New value for the "share_groups" quota. ``--share-group-snapshots , --share_group_snapshots `` New value for the "share_group_snapshots" quota. ``--share-type , --share_type `` UUID or name of a share type to set the quotas for. Optional. Mutually exclusive with '--user-id'. Available only for microversion >= 2.39 ``--force`` Whether force update the quota even if the already used and reserved exceeds the new quota. .. _manila_rate-limits: manila rate-limits ------------------ .. code-block:: console usage: manila rate-limits [--columns ] Print a list of rate limits for a user. **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "verb,uri,value". .. _manila_reset-state: manila reset-state ------------------ .. code-block:: console usage: manila reset-state [--state ] Explicitly update the state of a share (Admin only). **Positional arguments:** ```` Name or ID of the share to modify. **Optional arguments:** ``--state `` Indicate which state to assign the share. Options include available, error, creating, deleting, error_deleting. If no state is provided, available will be used. .. _manila_reset-task-state: manila reset-task-state ----------------------- .. code-block:: console usage: manila reset-task-state [--task-state ] Explicitly update the task state of a share (Admin only, Experimental). **Positional arguments:** ```` Name or ID of the share to modify. **Optional arguments:** ``--task-state , --task_state , --state `` Indicate which task state to assign the share. Options include migration_starting, migration_in_progress, migration_completing, migration_success, migration_error, migration_cancelled, migration_driver_in_progress, migration_driver_phase1_done, data_copying_starting, data_copying_in_progress, data_copying_completing, data_copying_completed, data_copying_cancelled, data_copying_error. If no value is provided, None will be used. .. _manila_revert-to-snapshot: manila revert-to-snapshot ------------------------- .. code-block:: console usage: manila revert-to-snapshot Revert a share to the specified snapshot. 
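
For example, assuming the share has a snapshot named ``my-share-snap-1`` (an
illustrative name), the share can be rolled back to that snapshot with:

.. code-block:: console

   $ manila revert-to-snapshot my-share-snap-1
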
**Positional arguments:** ```` Name or ID of the snapshot to restore. The snapshot must be the most recent one known to manila. .. _manila_security-service-create: manila security-service-create ------------------------------ .. code-block:: console usage: manila security-service-create [--dns-ip ] [--server ] [--domain ] [--user ] [--password ] [--name ] [--description ] Create security service used by tenant. **Positional arguments:** ```` Security service type: 'ldap', 'kerberos' or 'active_directory'. **Optional arguments:** ``--dns-ip `` DNS IP address used inside tenant's network. ``--server `` Security service IP address or hostname. ``--domain `` Security service domain. ``--user `` Security service user or group used by tenant. ``--password `` Password used by user. ``--name `` Security service name. ``--description `` Security service description. .. _manila_security-service-delete: manila security-service-delete ------------------------------ .. code-block:: console usage: manila security-service-delete [ ...] Delete one or more security services. **Positional arguments:** ```` Name or ID of the security service(s) to delete. .. _manila_security-service-list: manila security-service-list ---------------------------- .. code-block:: console usage: manila security-service-list [--all-tenants [<0|1>]] [--share-network ] [--status ] [--name ] [--type ] [--user ] [--dns-ip ] [--server ] [--domain ] [--detailed [<0|1>]] [--offset ] [--limit ] [--columns ] Get a list of security services. **Optional arguments:** ``--all-tenants [<0|1>]`` Display information from all tenants (Admin only). ``--share-network , --share_network `` Filter results by share network id or name. ``--status `` Filter results by status. ``--name `` Filter results by name. ``--type `` Filter results by type. ``--user `` Filter results by user or group used by tenant. ``--dns-ip , --dns_ip `` Filter results by DNS IP address used inside tenant's network. ``--server `` Filter results by security service IP address or hostname. ``--domain `` Filter results by domain. ``--detailed [<0|1>]`` Show detailed information about filtered security services. ``--offset `` Start position of security services listing. ``--limit `` Number of security services to return per request. ``--columns `` Comma separated list of columns to be displayed example --columns "name,type". .. _manila_security-service-show: manila security-service-show ---------------------------- .. code-block:: console usage: manila security-service-show Show security service. **Positional arguments:** ```` Security service name or ID to show. .. _manila_security-service-update: manila security-service-update ------------------------------ .. code-block:: console usage: manila security-service-update [--dns-ip ] [--server ] [--domain ] [--user ] [--password ] [--name ] [--description ] Update security service. **Positional arguments:** ```` Security service name or ID to update. **Optional arguments:** ``--dns-ip `` DNS IP address used inside tenant's network. ``--server `` Security service IP address or hostname. ``--domain `` Security service domain. ``--user `` Security service user or group used by tenant. ``--password `` Password used by user. ``--name `` Security service name. ``--description `` Security service description. .. _manila_service-disable: manila service-disable ---------------------- .. code-block:: console usage: manila service-disable Disables 'manila-share' or 'manila-scheduler' services (Admin only). 
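
For example, to disable the 'manila-share' service on a particular backend host
(the host name below follows the documented 'example_host@example_backend'
pattern and is only a placeholder):

.. code-block:: console

   $ manila service-disable example_host@example_backend manila-share
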
**Positional arguments:** ```` Host name as 'example_host@example_backend'. ```` Service binary, could be 'manila-share' or 'manila-scheduler'. .. _manila_service-enable: manila service-enable --------------------- .. code-block:: console usage: manila service-enable Enables 'manila-share' or 'manila-scheduler' services (Admin only). **Positional arguments:** ```` Host name as 'example_host@example_backend'. ```` Service binary, could be 'manila-share' or 'manila-scheduler'. .. _manila_service-list: manila service-list ------------------- .. code-block:: console usage: manila service-list [--host ] [--binary ] [--status ] [--state ] [--zone ] [--columns ] List all services (Admin only). **Optional arguments:** ``--host `` Name of host. ``--binary `` Service binary. ``--status `` Filter results by status. ``--state `` Filter results by state. ``--zone `` Availability zone. ``--columns `` Comma separated list of columns to be displayed example --columns "id,host". .. _manila_share-export-location-list: manila share-export-location-list --------------------------------- .. code-block:: console usage: manila share-export-location-list [--columns ] List export locations of a given share. **Positional arguments:** ```` Name or ID of the share. **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "id,host,status". .. _manila_share-export-location-show: manila share-export-location-show --------------------------------- .. code-block:: console usage: manila share-export-location-show Show export location of the share. **Positional arguments:** ```` Name or ID of the share. ```` ID of the share export location. .. _manila_share-group-create: manila share-group-create ------------------------- .. code-block:: console usage: manila share-group-create [--name ] [--description ] [--share-types ] [--share-group-type ] [--share-network ] [--source-share-group-snapshot ] [--availability-zone ] Creates a new share group (Experimental). **Optional arguments:** ``--name `` Optional share group name. (Default=None) ``--description `` Optional share group description. (Default=None) ``--share-types , --share_types `` Comma-separated list of share types. (Default=None) ``--share-group-type , --share_group_type , --type `` Share group type name or ID of the share group to be created. (Default=None) ``--share-network , --share_network `` Specify share network name or id. ``--source-share-group-snapshot , --source_share_group_snapshot `` Optional share group snapshot name or ID to create the share group from. (Default=None) ``--availability-zone , --availability_zone , --az `` Optional availability zone in which group should be created. (Default=None) .. _manila_share-group-delete: manila share-group-delete ------------------------- .. code-block:: console usage: manila share-group-delete [--force] [ ...] Remove one or more share groups (Experimental). **Positional arguments:** ```` Name or ID of the share_group(s). **Optional arguments:** ``--force`` Attempt to force delete the share group (Default=False) (Admin only). .. _manila_share-group-list: manila share-group-list ----------------------- .. code-block:: console usage: manila share-group-list [--all-tenants [<0|1>]] [--name ] [--status ] [--share-server-id ] [--share-group-type ] [--snapshot ] [--host ] [--share-network ] [--project-id ] [--limit ] [--offset ] [--sort-key ] [--sort-dir ] [--columns ] List share groups with filters (Experimental). 
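
As an illustration, the listing can be filtered by name and trimmed to selected
columns (the group name and column selection below are placeholders):

.. code-block:: console

   $ manila share-group-list --name my-share-group --columns "id,name,status"
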
**Optional arguments:** ``--all-tenants [<0|1>]`` Display information from all tenants (Admin only). ``--name `` Filter results by name. ``--status `` Filter results by status. ``--share-server-id , --share-server_id , --share_server-id , --share_server_id `` Filter results by share server ID (Admin only). ``--share-group-type , --share-group-type-id , --share_group_type , --share_group_type_id `` Filter results by a share group type ID or name that was used for share group creation. ``--snapshot `` Filter results by share group snapshot name or ID that was used to create the share group. ``--host `` Filter results by host. ``--share-network , --share_network `` Filter results by share-network name or ID. ``--project-id , --project_id `` Filter results by project ID. Useful with set key '--all-tenants'. ``--limit `` Maximum number of share groups to return. (Default=None) ``--offset `` Start position of share group listing. ``--sort-key , --sort_key `` Key to be sorted, available keys are ('id', 'name', 'status', 'host', 'user_id', 'project_id', 'created_at', 'availability_zone', 'share_network', 'share_network_id', 'share_group_type', 'share_group_type_id', 'source_share_group_snapshot_id'). Default=None. ``--sort-dir , --sort_dir `` Sort direction, available values are ('asc', 'desc'). OPTIONAL: Default=None. ``--columns `` Comma separated list of columns to be displayed example --columns "id,name". .. _manila_share-group-reset-state: manila share-group-reset-state ------------------------------ .. code-block:: console usage: manila share-group-reset-state [--state ] Explicitly update the state of a share group (Admin only, Experimental). **Positional arguments:** ```` Name or ID of the share group to modify. **Optional arguments:** ``--state `` Indicate which state to assign the share group. Options include available, error, creating, deleting, error_deleting. If no state is provided, available will be used. .. _manila_share-group-show: manila share-group-show ----------------------- .. code-block:: console usage: manila share-group-show Show details about a share group (Experimental). **Positional arguments:** ```` Name or ID of the share group. .. _manila_share-group-snapshot-create: manila share-group-snapshot-create ---------------------------------- .. code-block:: console usage: manila share-group-snapshot-create [--name ] [--description ] Creates a new share group snapshot (Experimental). **Positional arguments:** ```` Name or ID of the share group. **Optional arguments:** ``--name `` Optional share group snapshot name. (Default=None) ``--description `` Optional share group snapshot description. (Default=None) .. _manila_share-group-snapshot-delete: manila share-group-snapshot-delete ---------------------------------- .. code-block:: console usage: manila share-group-snapshot-delete [--force] [ ...] Remove one or more share group snapshots (Experimental). **Positional arguments:** ```` Name or ID of the share group snapshot(s) to delete. **Optional arguments:** ``--force`` Attempt to force delete the share group snapshot(s) (Default=False) (Admin only). .. _manila_share-group-snapshot-list: manila share-group-snapshot-list -------------------------------- .. code-block:: console usage: manila share-group-snapshot-list [--all-tenants [<0|1>]] [--name ] [--status ] [--share-group-id ] [--limit ] [--offset ] [--sort-key ] [--sort-dir ] [--detailed DETAILED] [--columns ] List share group snapshots with filters (Experimental). 
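
For instance, the output can be restricted to share group snapshots in the
``available`` status and limited to a few columns (the column selection below
is illustrative):

.. code-block:: console

   $ manila share-group-snapshot-list --status available --columns "id,name,status"
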
**Optional arguments:** ``--all-tenants [<0|1>]`` Display information from all tenants (Admin only). ``--name `` Filter results by name. ``--status `` Filter results by status. ``--share-group-id , --share_group_id `` Filter results by share group ID. ``--limit `` Maximum number of share group snapshots to return. (Default=None) ``--offset `` Start position of share group snapshot listing. ``--sort-key , --sort_key `` Key to be sorted, available keys are ('id', 'name', 'status', 'host', 'user_id', 'project_id', 'created_at', 'share_group_id'). Default=None. ``--sort-dir , --sort_dir `` Sort direction, available values are ('asc', 'desc'). OPTIONAL: Default=None. ``--detailed DETAILED`` Show detailed information about share group snapshots. ``--columns `` Comma separated list of columns to be displayed example --columns "id,name". .. _manila_share-group-snapshot-list-members: manila share-group-snapshot-list-members ---------------------------------------- .. code-block:: console usage: manila share-group-snapshot-list-members [--columns ] List members of a share group snapshot (Experimental). **Positional arguments:** ```` Name or ID of the share group snapshot. **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "id,name". .. _manila_share-group-snapshot-reset-state: manila share-group-snapshot-reset-state --------------------------------------- .. code-block:: console usage: manila share-group-snapshot-reset-state [--state ] Explicitly update the state of a share group snapshot (Admin only, Experimental). **Positional arguments:** ```` Name or ID of the share group snapshot. **Optional arguments:** ``--state `` Indicate which state to assign the share group snapshot. Options include available, error, creating, deleting, error_deleting. If no state is provided, available will be used. .. _manila_share-group-snapshot-show: manila share-group-snapshot-show -------------------------------- .. code-block:: console usage: manila share-group-snapshot-show Show details about a share group snapshot (Experimental). **Positional arguments:** ```` Name or ID of the share group snapshot. .. _manila_share-group-snapshot-update: manila share-group-snapshot-update ---------------------------------- .. code-block:: console usage: manila share-group-snapshot-update [--name ] [--description ] Update a share group snapshot (Experimental). **Positional arguments:** ```` Name or ID of the share group snapshot to update. **Optional arguments:** ``--name `` Optional new name for the share group snapshot. (Default=None) ``--description `` Optional share group snapshot description. (Default=None) .. _manila_share-group-type-access-add: manila share-group-type-access-add ---------------------------------- .. code-block:: console usage: manila share-group-type-access-add Adds share group type access for the given project (Admin only). **Positional arguments:** ```` Share group type name or ID to add access for the given project. ```` Project ID to add share group type access for. .. _manila_share-group-type-access-list: manila share-group-type-access-list ----------------------------------- .. code-block:: console usage: manila share-group-type-access-list Print access information about a share group type (Admin only). **Positional arguments:** ```` Filter results by share group type name or ID. .. _manila_share-group-type-access-remove: manila share-group-type-access-remove ------------------------------------- .. 
code-block:: console usage: manila share-group-type-access-remove Removes share group type access for the given project (Admin only). **Positional arguments:** ```` Share group type name or ID to remove access for the given project. ```` Project ID to remove share group type access for. .. _manila_share-group-type-create: manila share-group-type-create ------------------------------ .. code-block:: console usage: manila share-group-type-create [--is_public ] Create a new share group type (Admin only). **Positional arguments:** ```` Name of the new share group type. ```` Comma-separated list of share type names or IDs. **Optional arguments:** ``--is_public , --is-public `` Make type accessible to the public (default true). .. _manila_share-group-type-delete: manila share-group-type-delete ------------------------------ .. code-block:: console usage: manila share-group-type-delete Delete a specific share group type (Admin only). **Positional arguments:** ```` Name or ID of the share group type to delete. .. _manila_share-group-type-key: manila share-group-type-key --------------------------- .. code-block:: console usage: manila share-group-type-key [ [ ...]] Set or unset group_spec for a share group type (Admin only). **Positional arguments:** ```` Name or ID of the share group type. ```` Actions: 'set' or 'unset'. ```` Group specs to set or unset (key is only necessary on unset). .. _manila_share-group-type-list: manila share-group-type-list ---------------------------- .. code-block:: console usage: manila share-group-type-list [--all] [--columns ] Print a list of available 'share group types'. **Optional arguments:** ``--all`` Display all share group types (Admin only). ``--columns `` Comma separated list of columns to be displayed example --columns "id,name". .. _manila_share-group-type-specs-list: manila share-group-type-specs-list ---------------------------------- .. code-block:: console usage: manila share-group-type-specs-list [--columns ] Print a list of 'share group types specs' (Admin Only). **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "id,name". .. _manila_share-group-update: manila share-group-update ------------------------- .. code-block:: console usage: manila share-group-update [--name ] [--description ] Update a share group (Experimental). **Positional arguments:** ```` Name or ID of the share group to update. **Optional arguments:** ``--name `` Optional new name for the share group. (Default=None) ``--description `` Optional share group description. (Default=None) .. _manila_share-instance-export-location-list: manila share-instance-export-location-list ------------------------------------------ .. code-block:: console usage: manila share-instance-export-location-list [--columns ] List export locations of a given share instance. **Positional arguments:** ```` Name or ID of the share instance. **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "id,host,status". .. _manila_share-instance-export-location-show: manila share-instance-export-location-show ------------------------------------------ .. code-block:: console usage: manila share-instance-export-location-show Show export location for the share instance. **Positional arguments:** ```` Name or ID of the share instance. ```` ID of the share instance export location. .. _manila_share-instance-force-delete: manila share-instance-force-delete ---------------------------------- .. 
code-block:: console usage: manila share-instance-force-delete [ ...] Force-delete the share instance, regardless of state (Admin only). **Positional arguments:** ```` Name or ID of the instance(s) to force delete. .. _manila_share-instance-list: manila share-instance-list -------------------------- .. code-block:: console usage: manila share-instance-list [--share-id ] [--columns ] List share instances (Admin only). **Optional arguments:** ``--share-id , --share_id `` Filter results by share ID. ``--columns `` Comma separated list of columns to be displayed example --columns "id,host,status". .. _manila_share-instance-reset-state: manila share-instance-reset-state --------------------------------- .. code-block:: console usage: manila share-instance-reset-state [--state ] Explicitly update the state of a share instance (Admin only). **Positional arguments:** ```` Name or ID of the share instance to modify. **Optional arguments:** ``--state `` Indicate which state to assign the instance. Options include available, error, creating, deleting, error_deleting, migrating,migrating_to. If no state is provided, available will be used. .. _manila_share-instance-show: manila share-instance-show -------------------------- .. code-block:: console usage: manila share-instance-show Show details about a share instance (Admin only). **Positional arguments:** ```` Name or ID of the share instance. .. _manila_share-network-create: manila share-network-create --------------------------- .. code-block:: console usage: manila share-network-create [--neutron-net-id ] [--neutron-subnet-id ] [--name ] [--description ] Create description for network used by the tenant. **Optional arguments:** ``--neutron-net-id , --neutron-net_id , --neutron_net_id , --neutron_net-id `` Neutron network ID. Used to set up network for share servers. ``--neutron-subnet-id , --neutron-subnet_id , --neutron_subnet_id , --neutron_subnet-id `` Neutron subnet ID. Used to set up network for share servers. This subnet should belong to specified neutron network. ``--name `` Share network name. ``--description `` Share network description. .. _manila_share-network-delete: manila share-network-delete --------------------------- .. code-block:: console usage: manila share-network-delete [ ...] Delete one or more share networks. **Positional arguments:** ```` Name or ID of share network(s) to be deleted. .. _manila_share-network-list: manila share-network-list ------------------------- .. code-block:: console usage: manila share-network-list [--all-tenants [<0|1>]] [--project-id ] [--name ] [--created-since ] [--created-before ] [--security-service ] [--neutron-net-id ] [--neutron-subnet-id ] [--network-type ] [--segmentation-id ] [--cidr ] [--ip-version ] [--offset ] [--limit ] [--columns ] Get a list of network info. **Optional arguments:** ``--all-tenants [<0|1>]`` Display information from all tenants (Admin only). ``--project-id , --project_id `` Filter results by project ID. ``--name `` Filter results by name. ``--created-since , --created_since `` Return only share networks created since given date. The date is in the format 'yyyy-mm-dd'. ``--created-before , --created_before `` Return only share networks created until given date. The date is in the format 'yyyy-mm-dd'. ``--security-service , --security_service `` Filter results by attached security service. ``--neutron-net-id , --neutron_net_id , --neutron_net-id , --neutron-net_id `` Filter results by neutron net ID. 
``--neutron-subnet-id , --neutron_subnet_id , --neutron-subnet_id , --neutron_subnet-id `` Filter results by neutron subnet ID. ``--network-type , --network_type `` Filter results by network type. ``--segmentation-id , --segmentation_id `` Filter results by segmentation ID. ``--cidr `` Filter results by CIDR. ``--ip-version , --ip_version `` Filter results by IP version. ``--offset `` Start position of share networks listing. ``--limit `` Number of share networks to return per request. ``--columns `` Comma separated list of columns to be displayed example --columns "id". .. _manila_share-network-security-service-add: manila share-network-security-service-add ----------------------------------------- .. code-block:: console usage: manila share-network-security-service-add Associate security service with share network. **Positional arguments:** ```` Share network name or ID. ```` Security service name or ID to associate with. .. _manila_share-network-security-service-list: manila share-network-security-service-list ------------------------------------------ .. code-block:: console usage: manila share-network-security-service-list [--columns ] Get list of security services associated with a given share network. **Positional arguments:** ```` Share network name or ID. **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "id,name". .. _manila_share-network-security-service-remove: manila share-network-security-service-remove -------------------------------------------- .. code-block:: console usage: manila share-network-security-service-remove Dissociate security service from share network. **Positional arguments:** ```` Share network name or ID. ```` Security service name or ID to dissociate. .. _manila_share-network-show: manila share-network-show ------------------------- .. code-block:: console usage: manila share-network-show Get a description for network used by the tenant. **Positional arguments:** ```` Name or ID of the share network to show. .. _manila_share-network-update: manila share-network-update --------------------------- .. code-block:: console usage: manila share-network-update [--neutron-net-id ] [--neutron-subnet-id ] [--name ] [--description ] Update share network data. **Positional arguments:** ```` Name or ID of share network to update. **Optional arguments:** ``--neutron-net-id , --neutron-net_id , --neutron_net_id , --neutron_net-id `` Neutron network ID. Used to set up network for share servers. This option is deprecated and will be rejected in newer releases of OpenStack Manila. ``--neutron-subnet-id , --neutron-subnet_id , --neutron_subnet_id , --neutron_subnet-id `` Neutron subnet ID. Used to set up network for share servers. This subnet should belong to specified neutron network. ``--name `` Share network name. ``--description `` Share network description. .. _manila_share-replica-create: manila share-replica-create --------------------------- .. code-block:: console usage: manila share-replica-create [--availability-zone ] [--share-network ] Create a share replica (Experimental). **Positional arguments:** ```` Name or ID of the share to replicate. **Optional arguments:** ``--availability-zone , --availability_zone , --az `` Optional Availability zone in which replica should be created. ``--share-network , --share_network `` Optional network info ID or name. .. _manila_share-replica-delete: manila share-replica-delete --------------------------- .. 
code-block:: console usage: manila share-replica-delete [--force] [ ...] Remove one or more share replicas (Experimental). **Positional arguments:** ```` ID of the share replica. **Optional arguments:** ``--force`` Attempt to force deletion of a replica on its backend. Using this option will purge the replica from Manila even if it is not cleaned up on the backend. Defaults to False. .. _manila_share-replica-list: manila share-replica-list ------------------------- .. code-block:: console usage: manila share-replica-list [--share-id ] [--columns ] List share replicas (Experimental). **Optional arguments:** ``--share-id , --share_id , --si `` List replicas belonging to share. ``--columns `` Comma separated list of columns to be displayed example --columns "replica_state,id". .. _manila_share-replica-promote: manila share-replica-promote ---------------------------- .. code-block:: console usage: manila share-replica-promote Promote specified replica to 'active' replica_state (Experimental). **Positional arguments:** ```` ID of the share replica. .. _manila_share-replica-reset-replica-state: manila share-replica-reset-replica-state ---------------------------------------- .. code-block:: console usage: manila share-replica-reset-replica-state [--replica-state ] Explicitly update the 'replica_state' of a share replica (Experimental). **Positional arguments:** ```` ID of the share replica to modify. **Optional arguments:** ``--replica-state , --replica_state , --state `` Indicate which replica_state to assign the replica. Options include in_sync, out_of_sync, active, error. If no state is provided, out_of_sync will be used. .. _manila_share-replica-reset-state: manila share-replica-reset-state -------------------------------- .. code-block:: console usage: manila share-replica-reset-state [--state ] Explicitly update the 'status' of a share replica (Experimental). **Positional arguments:** ```` ID of the share replica to modify. **Optional arguments:** ``--state `` Indicate which state to assign the replica. Options include available, error, creating, deleting, error_deleting. If no state is provided, available will be used. .. _manila_share-replica-resync: manila share-replica-resync --------------------------- .. code-block:: console usage: manila share-replica-resync Attempt to update the share replica with its 'active' mirror (Experimental). **Positional arguments:** ```` ID of the share replica to resync. .. _manila_share-replica-show: manila share-replica-show ------------------------- .. code-block:: console usage: manila share-replica-show Show details about a replica (Experimental). **Positional arguments:** ```` ID of the share replica. .. _manila_share-server-delete: manila share-server-delete -------------------------- .. code-block:: console usage: manila share-server-delete [ ...] Delete one or more share servers (Admin only). **Positional arguments:** ```` ID of the share server(s) to delete. .. _manila_share-server-details: manila share-server-details --------------------------- .. code-block:: console usage: manila share-server-details Show share server details (Admin only). **Positional arguments:** ```` ID of share server. .. _manila_share-server-list: manila share-server-list ------------------------ .. code-block:: console usage: manila share-server-list [--host ] [--status ] [--share-network ] [--project-id ] [--columns ] List all share servers (Admin only). **Optional arguments:** ``--host `` Filter results by name of host. ``--status `` Filter results by status. 
``--share-network `` Filter results by share network. ``--project-id `` Filter results by project ID. ``--columns `` Comma separated list of columns to be displayed example --columns "id,host,status". .. _manila_share-server-show: manila share-server-show ------------------------ .. code-block:: console usage: manila share-server-show Show share server info (Admin only). **Positional arguments:** ```` ID of share server. .. _manila_show: manila show ----------- .. code-block:: console usage: manila show Show details about a NAS share. **Positional arguments:** ```` Name or ID of the NAS share. .. _manila_shrink: manila shrink ------------- .. code-block:: console usage: manila shrink Decreases the size of an existing share. **Positional arguments:** ```` Name or ID of share to shrink. ```` New size of share, in GiBs. .. _manila_snapshot-access-allow: manila snapshot-access-allow ---------------------------- .. code-block:: console usage: manila snapshot-access-allow Allow read only access to a snapshot. **Positional arguments:** ```` Name or ID of the share snapshot to allow access to. ```` Access rule type (only "ip", "user"(user or group), "cert" or "cephx" are supported). ```` Value that defines access. .. _manila_snapshot-access-deny: manila snapshot-access-deny --------------------------- .. code-block:: console usage: manila snapshot-access-deny [ ...] Deny access to a snapshot. **Positional arguments:** ```` Name or ID of the share snapshot to deny access to. ```` ID(s) of the access rule(s) to be deleted. .. _manila_snapshot-access-list: manila snapshot-access-list --------------------------- .. code-block:: console usage: manila snapshot-access-list [--columns ] Show access list for a snapshot. **Positional arguments:** ```` Name or ID of the share snapshot to list access of. **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "access_type,access_to". .. _manila_snapshot-create: manila snapshot-create ---------------------- .. code-block:: console usage: manila snapshot-create [--force ] [--name ] [--description ] Add a new snapshot. **Positional arguments:** ```` Name or ID of the share to snapshot. **Optional arguments:** ``--force `` Optional flag to indicate whether to snapshot a share even if it's busy. (Default=False) ``--name `` Optional snapshot name. (Default=None) ``--description `` Optional snapshot description. (Default=None) .. _manila_snapshot-delete: manila snapshot-delete ---------------------- .. code-block:: console usage: manila snapshot-delete [ ...] Remove one or more snapshots. **Positional arguments:** ```` Name or ID of the snapshot(s) to delete. .. _manila_snapshot-export-location-list: manila snapshot-export-location-list ------------------------------------ .. code-block:: console usage: manila snapshot-export-location-list [--columns ] List export locations of a given snapshot. **Positional arguments:** ```` Name or ID of the snapshot. **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "id,path". .. _manila_snapshot-export-location-show: manila snapshot-export-location-show ------------------------------------ .. code-block:: console usage: manila snapshot-export-location-show Show export location of the share snapshot. **Positional arguments:** ```` Name or ID of the snapshot. ```` ID of the share snapshot export location. .. _manila_snapshot-force-delete: manila snapshot-force-delete ---------------------------- .. 
code-block:: console usage: manila snapshot-force-delete [ ...] Attempt force-deletion of one or more snapshots. Regardless of the state (Admin only). **Positional arguments:** ```` Name or ID of the snapshot(s) to force delete. .. _manila_snapshot-instance-export-location-list: manila snapshot-instance-export-location-list --------------------------------------------- .. code-block:: console usage: manila snapshot-instance-export-location-list [--columns ] List export locations of a given snapshot instance. **Positional arguments:** ```` Name or ID of the snapshot instance. **Optional arguments:** ``--columns `` Comma separated list of columns to be displayed example --columns "id,path,is_admin_only". .. _manila_snapshot-instance-export-location-show: manila snapshot-instance-export-location-show --------------------------------------------- .. code-block:: console usage: manila snapshot-instance-export-location-show Show export location of the share instance snapshot. **Positional arguments:** ```` ID of the share snapshot instance. ```` ID of the share snapshot instance export location. .. _manila_snapshot-instance-list: manila snapshot-instance-list ----------------------------- .. code-block:: console usage: manila snapshot-instance-list [--snapshot ] [--columns ] [--detailed ] List share snapshot instances. **Optional arguments:** ``--snapshot `` Filter results by share snapshot ID. ``--columns `` Comma separated list of columns to be displayed example --columns "id". ``--detailed `` Show detailed information about snapshot instances. (Default=False) .. _manila_snapshot-instance-reset-state: manila snapshot-instance-reset-state ------------------------------------ .. code-block:: console usage: manila snapshot-instance-reset-state [--state ] Explicitly update the state of a share snapshot instance. **Positional arguments:** ```` ID of the snapshot instance to modify. **Optional arguments:** ``--state `` Indicate which state to assign the snapshot instance. Options include available, error, creating, deleting, error_deleting. If no state is provided, available will be used. .. _manila_snapshot-instance-show: manila snapshot-instance-show ----------------------------- .. code-block:: console usage: manila snapshot-instance-show Show details about a share snapshot instance. **Positional arguments:** ```` ID of the share snapshot instance. .. _manila_snapshot-list: manila snapshot-list -------------------- .. code-block:: console usage: manila snapshot-list [--all-tenants [<0|1>]] [--name ] [--status ] [--share-id ] [--usage [any|used|unused]] [--limit ] [--offset ] [--sort-key ] [--sort-dir ] [--columns ] List all the snapshots. **Optional arguments:** ``--all-tenants [<0|1>]`` Display information from all tenants (Admin only). ``--name `` Filter results by name. ``--status `` Filter results by status. ``--share-id , --share_id `` Filter results by source share ID. ``--usage [any|used|unused]`` Either filter or not snapshots by its usage. OPTIONAL: Default=any. ``--limit `` Maximum number of share snapshots to return. OPTIONAL: Default=None. ``--offset `` Set offset to define start point of share snapshots listing. OPTIONAL: Default=None. ``--sort-key , --sort_key `` Key to be sorted, available keys are ('id', 'status', 'size', 'share_id', 'user_id', 'project_id', 'progress', 'name', 'display_name'). Default=None. ``--sort-dir , --sort_dir `` Sort direction, available values are ('asc', 'desc'). OPTIONAL: Default=None. 
``--columns `` Comma separated list of columns to be displayed example --columns "id,name". .. _manila_snapshot-manage: manila snapshot-manage ---------------------- .. code-block:: console usage: manila snapshot-manage [--name ] [--description ] [--driver_options [ [ ...]]] Manage share snapshot not handled by Manila (Admin only). **Positional arguments:** ```` Name or ID of the share. ```` Provider location of the snapshot on the backend. **Optional arguments:** ``--name `` Optional snapshot name (Default=None). ``--description `` Optional snapshot description (Default=None). ``--driver_options [ [ ...]], --driver-options [ [ ...]]`` Optional driver options as key=value pairs (Default=None). .. _manila_snapshot-rename: manila snapshot-rename ---------------------- .. code-block:: console usage: manila snapshot-rename [--description ] [] Rename a snapshot. **Positional arguments:** ```` Name or ID of the snapshot to rename. ```` New name for the snapshot. **Optional arguments:** ``--description `` Optional snapshot description. (Default=None) .. _manila_snapshot-reset-state: manila snapshot-reset-state --------------------------- .. code-block:: console usage: manila snapshot-reset-state [--state ] Explicitly update the state of a snapshot (Admin only). **Positional arguments:** ```` Name or ID of the snapshot to modify. **Optional arguments:** ``--state `` Indicate which state to assign the snapshot. Options include available, error, creating, deleting, error_deleting. If no state is provided, available will be used. .. _manila_snapshot-show: manila snapshot-show -------------------- .. code-block:: console usage: manila snapshot-show Show details about a snapshot. **Positional arguments:** ```` Name or ID of the snapshot. .. _manila_snapshot-unmanage: manila snapshot-unmanage ------------------------ .. code-block:: console usage: manila snapshot-unmanage [ ...] Unmanage one or more share snapshots (Admin only). **Positional arguments:** ```` Name or ID of the snapshot(s). .. _manila_type-access-add: manila type-access-add ---------------------- .. code-block:: console usage: manila type-access-add Adds share type access for the given project (Admin only). **Positional arguments:** ```` Share type name or ID to add access for the given project. ```` Project ID to add share type access for. .. _manila_type-access-list: manila type-access-list ----------------------- .. code-block:: console usage: manila type-access-list Print access information about the given share type (Admin only). **Positional arguments:** ```` Filter results by share type name or ID. .. _manila_type-access-remove: manila type-access-remove ------------------------- .. code-block:: console usage: manila type-access-remove Removes share type access for the given project (Admin only). **Positional arguments:** ```` Share type name or ID to remove access for the given project. ```` Project ID to remove share type access for. .. _manila_type-create: manila type-create ------------------ .. code-block:: console usage: manila type-create [--snapshot_support ] [--create_share_from_snapshot_support ] [--revert_to_snapshot_support ] [--mount_snapshot_support ] [--extra-specs [ [ ...]]] [--is_public ] Create a new share type (Admin only). **Positional arguments:** ```` Name of the new share type. ```` Required extra specification. Valid values are 'true'/'1' and 'false'/'0'. 
**Optional arguments:** ``--snapshot_support , --snapshot-support `` Boolean extra spec used for filtering of back ends by their capability to create share snapshots. ``--create_share_from_snapshot_support , --create-share-from-snapshot-support `` Boolean extra spec used for filtering of back ends by their capability to create shares from snapshots. ``--revert_to_snapshot_support , --revert-to-snapshot-support `` Boolean extra spec used for filtering of back ends by their capability to revert shares to snapshots. (Default is False). ``--mount_snapshot_support , --mount-snapshot-support `` Boolean extra spec used for filtering of back ends by their capability to mount share snapshots. (Default is False). ``--extra-specs [ [ ...]], --extra_specs [ [ ...]]`` Extra specs key and value of share type that will be used for share type creation. OPTIONAL: Default=None. example --extra-specs thin_provisioning=' True', replication_type=readable. ``--is_public , --is-public `` Make type accessible to the public (default true). .. _manila_type-delete: manila type-delete ------------------ .. code-block:: console usage: manila type-delete [ ...] Delete one or more specific share types (Admin only). **Positional arguments:** ```` Name or ID of the share type(s) to delete. .. _manila_type-key: manila type-key --------------- .. code-block:: console usage: manila type-key [ [ ...]] Set or unset extra_spec for a share type (Admin only). **Positional arguments:** ```` Name or ID of the share type. ```` Actions: 'set' or 'unset'. ```` Extra_specs to set or unset (key is only necessary on unset). .. _manila_type-list: manila type-list ---------------- .. code-block:: console usage: manila type-list [--all] [--columns ] Print a list of available 'share types'. **Optional arguments:** ``--all`` Display all share types (Admin only). ``--columns `` Comma separated list of columns to be displayed example --columns "id,name". .. _manila_unmanage: manila unmanage --------------- .. code-block:: console usage: manila unmanage Unmanage share (Admin only). **Positional arguments:** ```` Name or ID of the share(s). .. _manila_update: manila update ------------- .. code-block:: console usage: manila update [--name ] [--description ] [--is-public ] Rename a share. **Positional arguments:** ```` Name or ID of the share to rename. **Optional arguments:** ``--name `` New name for the share. ``--description `` Optional share description. (Default=None) ``--is-public , --is_public `` Public share is visible for all tenants. manila-10.0.0/doc/source/cli/manila-status.rst0000664000175000017500000000334513656750227021225 0ustar zuulzuul00000000000000============= manila-status ============= Synopsis ======== :: manila-status [] Description =========== :program:`manila-status` is a tool that provides routines for checking the status of a Manila deployment. Options ======= The standard pattern for executing a :program:`manila-status` command is:: manila-status [] Run without arguments to see a list of available command categories:: manila-status Categories are: * ``upgrade`` Detailed descriptions are below. You can also run with a category argument such as ``upgrade`` to see a list of all commands in that category:: manila-status upgrade These sections describe the available categories and arguments for :program:`manila-status`. Upgrade ~~~~~~~ .. _manila-status-checks: ``manila-status upgrade check`` Performs a release-specific readiness check before restarting services with new code. 
This command expects to have complete configuration and access to databases and services. **Return Codes** .. list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - All upgrade readiness checks passed successfully and there is nothing to do. * - 1 - At least one check encountered an issue and requires further investigation. This is considered a warning but the upgrade may be OK. * - 2 - There was an upgrade status check failure that needs to be investigated. This should be considered something that stops an upgrade. * - 255 - An unexpected error occurred. **History of Checks** **8.0.0 (Stein)** * Placeholder to be filled in with checks as they are added in Stein. manila-10.0.0/doc/source/cli/manila-manage.rst0000664000175000017500000000560313656750227021131 0ustar zuulzuul00000000000000============= manila-manage ============= ------------------------------------- control and manage shared filesystems ------------------------------------- :Author: openstack@lists.launchpad.net :Date: 2014-06-11 :Copyright: OpenStack LLC :Version: 2014.2 :Manual section: 1 :Manual group: shared filesystems SYNOPSIS ======== manila-manage [] DESCRIPTION =========== manila-manage controls shared filesystems service. More information about OpenStack Manila is at https://wiki.openstack.org/wiki/Manila OPTIONS ======= The standard pattern for executing a manila-manage command is: ``manila-manage []`` For example, to obtain a list of all hosts: ``manila-manage host list`` Run without arguments to see a list of available command categories: ``manila-manage`` Categories are shell, logs, service, db, host, version and config. Detailed descriptions are below. These sections describe the available categories and arguments for manila-manage. Manila Db ~~~~~~~~~ ``manila-manage db version`` Print the current database version. ``manila-manage db sync`` Sync the database up to the most recent version. This is the standard way to create the db as well. ``manila-manage db downgrade `` Downgrade database to given version. ``manila-manage db stamp `` Stamp database with given version. ``manila-manage db revision `` Generate new migration. ``manila-manage db purge `` Purge deleted rows older than a given age from manila database tables. If age_in_days is not given or is specified as 0 all available rows will be deleted. Manila Logs ~~~~~~~~~~~ ``manila-manage logs errors`` Displays manila errors from log files. ``manila-manage logs syslog `` Displays manila alerts from syslog. Manila Shell ~~~~~~~~~~~~ ``manila-manage shell bpython`` Starts a new bpython shell. ``manila-manage shell ipython`` Starts a new ipython shell. ``manila-manage shell python`` Starts a new python shell. ``manila-manage shell run`` Starts a new shell using python. ``manila-manage shell script `` Runs the named script from the specified path with flags set. Manila Host ~~~~~~~~~~~ ``manila-manage host list`` Returns list of running manila hosts. Manila Config ~~~~~~~~~~~~~ ``manila-manage config list`` Returns list of currently set config options and its values. Manila Service ~~~~~~~~~~~~~~ ``manila-manage service list`` Returns list of manila services. Manila Version ~~~~~~~~~~~~~~ ``manila-manage version list`` Returns list of versions. FILES ===== The manila-manage.conf file contains configuration information in the form of python-gflags. 
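For example, to run one of the commands described above against an explicit configuration file, an invocation can look like the following (an illustrative sketch only: ``--config-file`` is the standard oslo.config option, and the path shown is an assumption rather than something documented on this page)::

    manila-manage --config-file /etc/manila/manila.conf db version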
BUGS ==== * Manila is sourced in Launchpad so you can view current bugs at `OpenStack Manila `__ manila-10.0.0/doc/source/reference/0000775000175000017500000000000013656750362017073 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/reference/index.rst0000664000175000017500000000121013656750227020726 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Reference --------- .. toctree:: :maxdepth: 1 glossary manila-10.0.0/doc/source/reference/glossary.rst0000664000175000017500000000717113656750227021476 0ustar zuulzuul00000000000000======== Glossary ======== .. glossary:: manila OpenStack project to provide "Shared Filesystems as a service". manila-api Service that provides a stable RESTful API. The service authenticates and routes requests throughout the Shared Filesystem service. There is :term:`python-manilaclient` to interact with the API. python-manilaclient Command line interface to interact with :term:`manila` via :term:`manila-api` and also a Python module to interact programmatically with :term:`manila`. manila-scheduler Responsible for scheduling/routing requests to the appropriate :term:`manila-share` service. It does that by picking one back-end while filtering all except one back-end. manila-share Responsible for managing Shared File Service devices, specifically the back-end devices. DHSS Acronym for 'driver handles share servers'. It defines two different share driver modes when they either do handle share servers or not. Each driver is allowed to work only in one mode at once. Requirement is to support, at least, one mode. replication_type Type of replication supported by a share driver. If the share driver supports replication it will report a valid value to the :term:`manila-scheduler`. The value of this capability can be one of :term:`readable`, :term:`writable` or :term:`dr`. readable A type of replication supported by :term:`manila` in which there is one :term:`active` replica (also referred to as `primary` share) and one or more non-active replicas (also referred to as `secondary` shares). All share replicas have at least one export location and are mountable. However, the non-active replicas cannot be written to until after promotion. writable A type of replication supported by :term:`manila` in which all share replicas are writable. There is no requirement of a promotion since replication is synchronous. All share replicas have one or more export locations each and are mountable. dr Acronym for `Disaster Recovery`. It is a type of replication supported by :term:`manila` in which there is one :term:`active` replica (also referred to as `primary` share) and one or more non-active replicas (also referred to as `secondary` shares). Only the `active` replica has one or more export locations and can be mounted. The non-active replicas are inaccessible until after promotion. active In :term:`manila`, an `active` replica refers to a share that can be written to. In `readable` and `dr` styles of replication, there is only one `active` replica at any given point in time. 
Thus, it may also be referred to as the `primary` share. In `writable` style of replication, all replicas are writable and there may be no distinction of a `primary` share. replica_state An attribute of the Share Instance (Share Replica) model in :term:`manila`. If the value is :term:`active`, it refers to the type of the replica. If the value is one of `in_sync` or `out_of_sync`, it refers to the state of consistency of data between the :term:`active` replica and the share replica. If the value is `error`, a potentially irrecoverable error may have occurred during the update of data between the :term:`active` replica and the share replica. replication_change State of a non-active replica when it is being promoted to become the :term:`active` replica. recovery point objective Abbreviated as ``RPO``, recovery point objective is a target window of time between which a storage backend may guarantee that data is consistent between a primary and a secondary replica. This window is **not** managed by :term:`manila`. manila-10.0.0/doc/source/conf.py0000664000175000017500000001770713656750227016450 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # manila documentation build configuration file, created by # sphinx-quickstart on Sat May 1 15:17:47 2010. # # This file is execfile()d with the current directory set # to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import eventlet import sys import os # NOTE(dims): monkey patch subprocess to prevent failures in latest eventlet # See https://github.com/eventlet/eventlet/issues/398 try: eventlet.monkey_patch(subprocess=True) except TypeError: pass # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../../')) sys.path.insert(0, os.path.abspath('../')) sys.path.insert(0, os.path.abspath('./')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. # They can be extensions coming with Sphinx (named 'sphinx.ext.*') # or your custom ones. 
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.coverage', 'sphinx.ext.ifconfig', 'sphinx.ext.graphviz', 'openstackdocstheme', 'oslo_config.sphinxconfiggen', 'oslo_policy.sphinxext', 'oslo_policy.sphinxpolicygen', ] config_generator_config_file = ( '../../etc/oslo-config-generator/manila.conf') sample_config_basename = '_static/manila' policy_generator_config_file = ( '../../etc/manila/manila-policy-generator.conf') sample_policy_basename = '_static/manila' # openstackdocstheme options repository_name = 'openstack/manila' bug_project = 'manila' bug_tag = 'docs' todo_include_todos = True # Add any paths that contain templates here, relative to this directory. templates_path = [] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2010-present, Manila contributors' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. unused_docs = [ 'api_ext/rst_extension_template', 'installer', ] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = [] # The reST default role (used for this markup: `text`) to use # for all documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = False # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['manila.'] # -- Options for man page output ---------------------------------------------- # Grouping the document tree for man pages. # List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual' man_pages = [ ('cli/manila-manage', 'manila-manage', u'Cloud controller fabric', [u'OpenStack'], 1), ('cli/manila-status', 'manila-status', u'Cloud controller fabric', [u'OpenStack'], 1), ] # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. html_theme_options = { "show_other_versions": "True", } # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. 
# html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". # html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_use_modindex = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'maniladoc' # -- Options for LaTeX output ------------------------------------------------- # The paper size ('letter' or 'a4'). # latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). # latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', 'doc-manila.tex', u'Manila Developer Documentation', u'Manila contributors', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # Additional stuff for the LaTeX preamble. # latex_preamble = '' # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_use_modindex = True latex_domain_indices = False latex_elements = { 'makeindex': '', 'printindex': '', 'preamble': r'\setcounter{tocdepth}{3}', 'maxlistdepth': 10, } manila-10.0.0/doc/source/install/0000775000175000017500000000000013656750362016603 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/install/install-share-ubuntu.rst0000664000175000017500000000517513656750227023433 0ustar zuulzuul00000000000000.. _share-node-install-ubuntu: Install and configure a share node running Ubuntu ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure a share node for the Shared File Systems service. For simplicity, this configuration references one storage node with the generic driver managing the share servers. The generic backend manages share servers using compute, networking and block services for provisioning shares. Note that installation and configuration vary by distribution. This section describes the instructions for a share node running Ubuntu. Install and configure components -------------------------------- #. Install the packages: .. 
code-block:: console # apt-get install manila-share python-pymysql #. Edit the ``/etc/manila/manila.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. code-block:: ini [database] ... connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila Replace ``MANILA_DBPASS`` with the password you chose for the Shared File Systems database. .. include:: common/share-node-common-configuration.rst Two driver modes ---------------- .. include:: common/share-node-share-server-modes.rst Choose one of the following options to configure the share driver: .. include:: common/dhss-false-mode-intro.rst Prerequisites ------------- .. note:: Perform these steps on the storage node. #. Install the supporting utility packages: * Install LVM and NFS server packages: .. code-block:: console # apt-get install lvm2 nfs-kernel-server .. include:: common/dhss-false-mode-configuration.rst .. include:: common/dhss-true-mode-intro.rst Prerequisites ------------- Before you proceed, verify operation of the Compute, Networking, and Block Storage services. This option requires implementation of Networking option 2 and installation of some Networking service components on the storage node. * Install the Networking service components: .. code-block:: console # apt-get install neutron-plugin-linuxbridge-agent .. include:: common/dhss-true-mode-configuration.rst Finalize installation --------------------- #. Prepare manila-share as a start/stop service. Start the Shared File Systems service, including its dependencies: .. code-block:: console # service manila-share restart #. By default, the Ubuntu packages create an SQLite database. Because this configuration uses an SQL database server, remove the SQLite database file: .. code-block:: console # rm -f /var/lib/manila/manila.sqlite manila-10.0.0/doc/source/install/install-share-debian.rst0000664000175000017500000000464513656750227023324 0ustar zuulzuul00000000000000.. _share-node-install-debian: Install and configure a share node running Debian ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure a share node for the Shared File Systems service. For simplicity, this configuration references one storage node with the generic driver managing the share servers. The generic backend manages share servers using compute, networking and block services for provisioning shares. Note that installation and configuration vary by distribution. This section describes the instructions for a share node running a Debian distribution. Install and configure components -------------------------------- #. Install the packages: .. code-block:: console # apt-get install manila-share python-pymysql #. Edit the ``/etc/manila/manila.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. code-block:: ini [database] ... connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila Replace ``MANILA_DBPASS`` with the password you chose for the Shared File Systems database. .. include:: common/share-node-common-configuration.rst Two driver modes ---------------- .. include:: common/share-node-share-server-modes.rst Choose one of the following options to configure the share driver: .. include:: common/dhss-false-mode-intro.rst Prerequisites ------------- .. note:: Perform these steps on the storage node. #. Install the supporting utility packages: * Install LVM and NFS server packages: .. 
code-block:: console # apt-get install lvm2 nfs-kernel-server .. include:: common/dhss-false-mode-configuration.rst .. include:: common/dhss-true-mode-intro.rst Prerequisites ------------- Before you proceed, verify operation of the Compute, Networking, and Block Storage services. This option requires implementation of Networking option 2 and installation of some Networking service components on the storage node. * Install the Networking service components: .. code-block:: console # apt-get install neutron-plugin-linuxbridge-agent .. include:: common/dhss-true-mode-configuration.rst Finalize installation --------------------- #. Prepare manila-share as a start/stop service. Start the Shared File Systems service, including its dependencies: .. code-block:: console # service manila-share restart manila-10.0.0/doc/source/install/install-controller-ubuntu.rst0000664000175000017500000000324413656750227024507 0ustar zuulzuul00000000000000.. _manila-controller-ubuntu: Install and configure controller node on Ubuntu ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Shared File Systems service, code-named manila, on the controller node that runs Ubuntu. This service requires at least one additional share node that manages file storage back ends. .. include:: common/controller-node-prerequisites.rst Install and configure components -------------------------------- #. Install the packages: .. code-block:: console # apt-get install manila-api manila-scheduler python-manilaclient #. Edit the ``/etc/manila/manila.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. code-block:: ini [database] ... connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila Replace ``MANILA_DBPASS`` with the password you chose for the Shared File Systems database. .. include:: common/controller-node-common-configuration.rst #. Populate the Shared File Systems database: .. code-block:: console # su -s /bin/sh -c "manila-manage db sync" manila .. note:: Ignore any deprecation messages in this output. Finalize installation --------------------- #. Restart the Shared File Systems services: .. code-block:: console # service manila-scheduler restart # service manila-api restart #. By default, the Ubuntu packages create an SQLite database. Because this configuration uses an SQL database server, you can remove the SQLite database file: .. code-block:: console # rm -f /var/lib/manila/manila.sqlite manila-10.0.0/doc/source/install/install-controller-obs.rst0000664000175000017500000000275013656750227023751 0ustar zuulzuul00000000000000.. _manila-controller-obs: Install and configure controller node on openSUSE and SUSE Linux Enterprise ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Shared File Systems service, code-named manila, on the controller node that runs openSUSE and SUSE Linux Enterprise. This service requires at least one additional share node that manages file storage back ends. .. include:: common/controller-node-prerequisites.rst Install and configure components -------------------------------- #. Install the packages: .. code-block:: console # zypper install openstack-manila-api openstack-manila-scheduler python-manilaclient #. Edit the ``/etc/manila/manila.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. code-block:: ini [database] ... 
connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila Replace ``MANILA_DBPASS`` with the password you chose for the Shared File Systems database. .. include:: common/controller-node-common-configuration.rst Finalize installation --------------------- #. Start the Shared File Systems services and configure them to start when the system boots: .. code-block:: console # systemctl enable openstack-manila-api.service openstack-manila-scheduler.service # systemctl start openstack-manila-api.service openstack-manila-scheduler.service
manila-10.0.0/doc/source/install/figures/hwreqs.png [binary PNG image data for the hardware requirements figure omitted]
G{C>ž‡‡ŠŸ¬Å§™\ͬa`¯NXI[äဠ6„ûæðzÑ Ÿr†jÇßý!þñîÿ4ûezCèݹ ÛY.ðÁ6°§&dqÅíE}:‹v-¹ÓÑÃ?%­Þ²W³]¦Ý=^ð¡?½ÿùa¡ OQz¾8sôõ†nv1¾¨ÇçÍüfªðÄ[«3Ì?îòX~ç³Î9†úE·0Û¦e§­[x´Ø¼ë€Ï⮺èlh,Ðp-|,ÿ©Ñ¼cëf¢O·v>óDúéær:³É¨Kã œÝxGº~/å»ëŽ­k+ŸFÖnšéÏP$ÆW/ÏJÐQÕ™O#8¾#2¸i^|zQŸ®š«ÍÇ^þD[ßs^ïNA½O{vl©Ù%?7õ|QŽmì9š§)Aûä¶ðFÄÏÿôvA“ ~}æî+íš™ö»ßWb÷Rí>¿Tð=kT|*Ä —ôk¶îñ iU}ŸVħ£miýN³:mÂØe~;XÁÍHʬRxæFÛÎÝ[7Ï6™Í7lÝsP[ÀDá.Ìã‘ÿ¸e}ã&ÁFõÙ{®„jðøZ”Þî»im>¥ *•òh:Yìš-˜÷)¿.<oÔFëë€ÁÝ'Ãûßþ!â=J“‹Ø‹¦Aþ'çÓõjO,Þ£94¤7).,4*ÔF>å$dJÊÝ¥ ¤¦ÿ§ï(W]Þ§ñ©Ë.&*ÚëV -ÕÁ ”H€Afô“œ³_dl®~³ó¼ðdz]ûIlá¢I‚QÁŸ J¦¦ ëè©¢L”çíR×dpO!Y¿I·*'°@Ÿ‘hþ±+ó¨¶A¿Ï3Ûå¯mži½&nÄî.ŽÍûö+ ÊÄ—8ë˜Z”¿tzYýXë3šnþ`ô½¢çÀèR/O§Ÿòrºù×FÓ/ð†•<7§µ?^¨I|Z—âO;¢ãk0S¸i« ŸFx|­&ï ž¡Í#9&ûâS¾3IŸ²!˜÷©.$—-£Àn×¼ÑýÛâµÛµwv³†uK%£-³‘B²äÓRðõ£Zñ©¢Z0C$e6I{ñ⬠…¹¹§ó–ÿ1÷“Ì#Yâß3æ“ñ«eàÌúÒs{ÂÿàjñÆg¿höUœ}Ñ­\¤q#~–/y-+ëhê#¾<ˆ·‘ÁM?ÖÃúª;ÝŒ'زtºñù' ?ÝqhΑ _0M¬‘´®,>­Ý4+~ŽjÌø̃ktZvQ_}6=#müÓ<¼$pÓ¼&ɹøM£üþ·ó ]ž§mØE_Ëë—”½àV”NëÉnš¡±ï%Ÿ†Eµ¢Ì:í¢É§aKÚ^úM!˜ÕÆ|8òqp§Ž‚U 筫߸É÷¸¾F|%Ô;¯¾@ñô6øj†ÃY7hÎ~ùpü¹q§²gÛæÿ,œ5s1ê&®Ä—8o£&åèÇú’ê7øuÜYé†öG5xÒmÛºÕ3øü£/à)úÓÇMëhòii]Žfú3USÆ×``#ÒzÒ. ãk(M.Góš2&ÓOòÍÄ‚IHyvmÞ˜i/,ȬӰYÇÅëvÆî:p”®‹î¾Cµ=зãŠM{èYCývî ådö©ìÌÍXðÓ÷¿›“Å'(¸(,gôŠC9úíØ°nüùH¨ß¸;è'é¦CUt.K·¬ìS§–Ιõ)Ú’V¤Y…ôë3hˆ é–ƒ§øõÅà‡OY¾¤u(Eë «õL^Žf¸©±r|õ„©üuYÚ…2¾F‚GË·´\L9šË1¹F¥"ÊÒ:Ô1¹T¡Áý(G3d—|†ei"ŸÞͪ–ÏûíCžC FH¥»a,Ýt̤Iõß;öDqC8“ò4½ =-WÐë4­]²pýÖÕ«vž;ìŠóûùðä„…wjR|¬š\/Q‰³Zù€ÕªÀí¢¹7ù€Ö]q:§vmÚøû¢ŸgÎÊË;}`PÐâA<©ÔM/ˆ·‘Á'ýVÌŸ»tãò?· ~Å0´ï"Э®¤[º9ì§wlܰ`ñ¬™ òós) ó8£Bú‘œ9Ž¦ØŸ~·‘EYÞø”UHZt9 ƒÖAT[6©Oš!¡_Ë¢Uü»íB_#È£btê„!lî”´ñüå|Ò\ŽÉž0]—£µäÓò Uјr´ ‘Oµî©Êf#º¶ l6™v¸é®\וhÐÇ"có“.:Ö£úB„çÜワ3÷;ñ{Ç3{µnÛ¹k§„:Iõc’,fKù%¶(¤FLØ/‡Ã^P›w*/çÔñÝ[6oÚºnÕNDóS …,]™k]PŽ„ÙŠ×‚OúAhs¾™Nmç¯zœuFÛÎ]ºÆ'&5ˆMˆO²X¬Etƒ _\N?aãÐ-÷TîéS'voÙ´uÛú5{ÑiN©íŠ~ä#|^1YÀWFç‡OY¤u€`Ië«ô•Ì'ÍŠ3ÈñUGÎàñ5R<ÊæºT%µ¸ÙeeFû¤y•“… ”Å}á8Xä§ï.xÑw&-¾müIò©ñ˜F¼Dƒù”íÍHOeD» fF¤NØ ó‹ui©——iÍ.hׇ#·ÎK.>Ó“q¼Ç4¤)Dó0¤](§:‚\”ǃ°¾’B±þÉ> טuAY×(#*"AÒ/pX ¡ßÈÔ´U¡ö˜–6¾uàU–ÒŸ²IëÀ`d*Chxu>SJšù„¦Ü ÃhI™:a.[σyöªÍ[·ïÔ«y»3þÊöã£ÚÖ¸„„ޏä{]ݶvõÇ:À{†Ñ<ÌvV š…ÙG£²Wš•ê…Óðƒª¼Ž—ûK£RÒMMOçQ .ôqµ§.üêqœe2>µ–ù@éÂ2.Ýyx]S ]HÖMVˆ®¤ LM2ÏüŒÏ{¦™'’A§•¤Ÿo” £ùÇ¥ª—á±ÊwuaÜñͧ,TÒºbh £uÅU”BÒ¬b˜ ¥YÄy´âþT š;\Î#k‘9&Îi·ÒÕñu’ÎB÷”¶]º^ AyZÅ] 9…¡4¹%«ÍJš[)WUf¥@0DPnimùV¦cßÃØ}òÍ‘¶©3l£(Ì1°óüôÀßž ÁžZe]XÖk=oM>ë¸+Om2q¤öX?ˆ…V]“¬?\ˆŠHô Ö°é¾IpÙ¼ Çþ{ZXZ¾XµÁ¥òç,HÒ:08æu`Õ”JÒ, ˜Ü_Â_£Á£t§ZÐÜYèÜÇ×:>[,u¶­]5­ËÙ}{ã§‚ø^m:vn´gÛ–­Z‚Èü‘|\#YjU¢Y©~"(Ûl÷äã“Ñ”í1Õ¥¾ {å/„µñÝšeÖ˺hVA1t Åü6ÃמBrm³SæƒBA«¬°ÅA„‡§€Ì4úƒ…˨I?ßP‡L?j©´É¥PoRLÊãR_ó]1w*àSV"iíêií»HCîHšù†1,šE“G°Qöìe•¦ù #¢p¢-îžûÝ7ç6lR?©ç¹ƒ¾ÆðR‡È9uòß>ûÏ+ž2ð:,šØŽ²EUiš•ml”N³‘)i³Ù,êN_ Ñ(ë àK/a¡ºÄËŠãpgØs=âa³LtM(…=jª(R{Ævè&ºÌª¶bSö!!>ú§B]ˆæ¹²‚¤ŸoäC¢Ÿfïs {f´„dv¡>eIk¢à=„DkïE+iæÎiV ÜÉ߇€inp½ÁW¥hLÃ#”ÖPšù…Æg—H¤ˆ›íƒ¸ýöýa3’G¡ŸjÀ…M©Šš‰Jé¸T€É†ööLkS© «à̾5 ]¬‚­óß$I?J“¾éÂÖQT¥’4¥Ÿd$Þ‡Gýu.°‹”M²ŠÝ –O™KÒÚ?­Á=Úi$Í*¦YUåÑŸ¾Dô‰~±¨Ð²Ý?îù1Zü¹—:fÚ ¶i¼¾ïÉ”VJ\ìbìòÚŒ¿!-oÎÉ?yÁÿïÿô Ë´hƒþ E_ÉôkƒŠ6¼˜J§™á= ­@N<ë×!•TåeÏ^?©Óåޏ¶”[ài e$µpû)Ê™èðnnþñ˫Zç%ŸV5ŠÈöDêÀ£ÑÄ#ܺF¤¦Å¬z"Ëá5؉>®—9bü¤~ŠËù;c\ÌYY™k©{å›¶GèU‰€!T+AÙËB$‰€D@" ¨ŒLøUubñƒòó´ôTøˆ/ £R'^ïR]_Bi¨i¼ñÕye|¼éò×SR•¤’WÐ0JP¦M° ‰€D@" T2£S' áQÉÍ0¤z³0oÔ ‚Ü]¿ÖÏSÓÆ}e¢if±ÉLÒÎÎÍUç±Mj¯§ñ8ëf ”Yxpñm?t,´‰†VòRG@U6c¶YÿêY*r2ŸD@" H Dî¬æ°8˜) 1°ØJ)ÊfûͲß1?V¦p«¨VkýºïØ,·>iÔøô;UÕõeÍCÌ4N(Ši$éÿ¡ánùük®‰ëйçP«Åz! 
¶D¹M €SX®­Ás¡#ÀnοŸÊ?ù˯¾Êýð3<ÛÞÚ ¨¯~KAÙ22^" H$QDÀ¨OÅQl²ßª ø¯‡²¦M6Y•~Sm©zË0:uâU0Ø®Û,Ã#Öú©íÚ´ýI§%ÏÚ¾}÷§¬1–Ñ.—H°Z̮争jr.{¯µ2 ¤a5ët®++;G±;œ&“Iä:®·Oæe½9 8W¦;Yod®¶q†ø˜«¶½— —H$‰€D RÐüB”]vÑ ×^å)iã¾Ç.jƒå3Õå<ƒ^Z\.×]­;¶ËÅÿ ýºŸ¡œÛ«£èܶ™É ©Pm’`vbö°e÷A±xͶ„¥ëwþ½AbÃûï}*õÖ÷_JûQX¦†Y†0O[àɬ‰€D@" ø@@Q7èw`LAÙg˜bK]f?}ôhE?u:Ð(»DLlL³æ ë%¦ÞrϵƒD·3Z)$—†xâCœš7J®k‰›yß³¶{’2^­Õº—F*ô_RP;™S" H$À%¦{AŸª*=|$c4…9Ó‡¯½–ÿÁäç(ÈÍ}-&6NíÖ¾¥xæž«DË&õýd•·tˆÓÓw_iîÞ¾¥Él±L¹çÉqC‰+ŽZ),ã+Eº"Ôñ õ,åP‘“ù$‰€D@" ð‰€ÅjòôìŽ} l”Eh Û¥oßfIÉõïiÑ(Ù5êú‹ElŒ´õ °—Äkäõƒà'bâþ{ù=÷4@²Z)ëÁÜg"üuOòSPQµ¼ ’‰%‰€D@"–¢5"¼m·ž)³3Xž×pôs“¼™_P¡÷ xÇ }λø ì@R÷Þ¿ 2K!ˆ„ˆñƒqrró&mžEÄ·öÉ{ŠjÕ°gZµ¸:™E" H$‘F #müÓ<"]O”ËŸ¯×§:œê×ÅgÍä×fâš5kS¿Nݤ;ûŸy†is‹ ;ö‹y+Âv±[¦;ýô¬ûÀ‘,ñÃüÕôòXæS?âhµZÆœ5xp²QÞó¥Ñ°ÔÚ™L ʵ“î²×‰€D@" ˆ<Šò‡G%x\ó’‚›®MŽï}Ñ !°Îˆ£w #ÂŽýGÄ“>§ròËGauÊÅá¯_eyÖ½ïðq1ã÷•‚Þ+ŒÄDâ»õê; eg)(‡²”CMf‘H$‰€D b°CŸ[PV…âM£¬ ÊqÉ ¤Ÿd¸€«¸àR,]·!ZФďR‡–Ĩ:°ÑJh ð“‹8ÏkÌ $“‚²¬üÝ ÛvÃ_áòžD@" H$µ‹{vX5{ÍæÓ°,¨û‚¶¤¤µ~'=uo1"TÖQ¡}r¼Ébižœ”`ˆ 8T©nØ)nº´¿VÕéÜñáwóĶ}‡EÛæD“útÏ\­Ù&fÎ_#Nçæ‹îí[ˆÛ/("Ž‚ðØ{¯†ÙtQÚW?þI´oÕD\;øøz.]ÇÚ­ûĬÅëµÄ­›6—ŸßKsÝVQÝ,yþª-âÇkwQ¹eŸÛ³CIãB¼¢ë8â™›—×Eg;gˆÅÕÚlR£\kI/;.H$U Ñ©†ð¨Jm ·-7ß|33e¡^ŽÃäÖ*Sô¤ BM§G¬ÙdÁ¦{ñ†¨V7ìÌ…v»8«s-įK׋í0Åø „PjZ¯Ý¡ÅóÏÉS¹â£óÅgw£o"ö>!æ­Ü¢ Ì{Û÷ÒÒÑ n¢8§k[íwŽ­ÄÊM»ÅeçõÔW^wlÝT¬Ø¸ ÚDÚE‚rÙ:˜y÷câ¼ÞÄÐݵ²øÇ_Ýz¢kŸ­ÕѾEcñì›ÓmŸ;· ß…xº\NniMA9l\õöV‡sFzê(#Ú)5ÊF (ËH$‰@¸(ª î¬jÜ{ÙlU¾÷€æš‡ßx#¿)´ñÐµÊæÂü¼YÐ°­Ðî«6ïýz´×ª¥¶—l+˜Cè¡M3*Y‹ÂFhŸ)4OzÿíX¸z›8’uJ»9&+ (3ð OšFÙ:xÔ ‹¼ÂBñü´oÄûßÌ…vÍ<Ã_ÝÌÇ@¿Ç ÉuD|lŒØ A9Ü@‰§½°€.úˆ³Žy¸Eתü5Ž!kõdg%‰€D@"PÅxÇ–² 2Ú66¶Êu fi—K7M`>¸oÏ»ÃiÚó†pª-{¸ˆMœÙ¦¹aøÛ¤A]q,ë´»Øã'K®ã ˜Ö‡Fù•'o+9ž¸MKÛ¿{{›9‚öÇûœÊÖÁ¸v-‰q÷^#îÅvÒ«·î…Íóê ëf>†Ã'²µ3í™ó EËÆ%B½v#„?Ä‘xf?¶ Ù¥†Ì"å“Ù$‰€D@"  Å$¦ë)U—¸©øšfúáZµà÷5B¸ c]8aéºí¢O·v¥vm×BÓïf™vÆwpWÑ»skqä_—n€Ø!²¡]Þ¶·È.™^š@Ìøc¥f†¡ûw.[G¡Ý)–añ Ò>™¦±ÀüÕ­7bÎÒâ„å ×Ú4Õo…|&ŽŠ¢æ/úeæB¢ãryµ5£”k+åe¿%‰€D@"%,få wUª¸Æf›N»d o.\¹æÌËÉ)8°{7Ö¾íT¹ .”@솙¢±Ù…^Æà>]µË‰ïÏŸÌ\PÊ«D¯N­5/ßÁŸñ£/},žycº¦AÖóö‡™¶^¦·:œ.§˜ùÇjäý\<ýúgš·‰ÁçtÑŠðW74i$b¬ñw¿s—oW^x–Û¶ZoC°gâGO=úʼnC‡èHš8×*ay´-­ÿˆñ“ú‹]ÙôTÅË H$‰@%#02uÂ\6»ó 湦…)i[!«ud¿ÌŠùê)iã~Á%w£ncMë5lØê¦‘¿Ü²Iøgî¹Ê¤/ÆÃ=CB^~¡ˆ+Òôz+›“ÄÇY…Å̵oÁ»Ão&a6—×CVT7µÒf˜‰xËLK¨ŸüÁ÷®Ì#'rf|òÞµwïÞ…ü\З‹£æF¦¤Íf'±¨oh8-OÉpJ“y%‰€D@" ¥ÈÈ£F“¢ºµÊ.áºÔµÉôï[€#ÿä±cYÎ=íÀ±,‘ñÕ\•BŸ‘ÁŸÌz¸9I¨B2ó[-Ÿ‚nEuÇX±=‹›åˆq#~þ\<Bòqä%¾ÔÚëZå@‹«ÞéՂűürVmÊV•2³D@" H$eX>ï·_x”¯)¿ z (FkýQD—^.ÊXµp.}´Q¡@CUo õŒO¬“+â“z¬Þ¼GíØ¦©À¸%ƒ?hnñæg¹vf;7®kî÷_ÿ„ô'qäà ùgµB›Œ~Š>ƒ†ÜÍ3xêCžC RP9™O" H$€X6wÎÁ¾ ¹Zâ°X,¦<1¿ãš_·ua™³e϶-‡ìûê6iÑãU[-GŽg+q±VÑ ^"L¤Õ(0Ò]ÀmÆÂÄïç­ÿýy‰š}:7‹"'ÿñã N¸($ÓÏnrA­²”B0!l•t0•É´‰€D@" Ô^`û¢*œE&ªòÀ5÷>ýÒwï¿H»Yj–é³ZeÊ&¦5K®Þ¹eãóç_võpUuž¿dݬy3»¸-36þ ¸\k%fH»*ÜÖဟd¸€ƒUEaæîÝsüýº5oÓ®clBB½˜˜Ø:Ð,ëé¥vàç,,,8]—{ü྽›×/[´:?7— ö( ó\Öä¢vÙ'ø8ÔÚ٘ʢ$‰€D@"6£S' a!SÒÆÏ »°*\¼{ŒÁÆ#ï°‰ŠPvˆ#{ºfddPèш# G=uqÔÁAeîægÅ¡ Ò”_j£ CÓ eBêy ʺɅ§m25ʵ*Àë'`ôz1*œŽKr8èɼ‰€D@" 0—ª¤U£åĉæÏyÂr#h•Û››¶½ýþz¾`  HÁN7É  9 Ñn³ \KA¹hq1¢6žvÈŒinÁ3O.à£P]ëB¸²˜”u$äY" H$•‰€¢Ö S‚WŸx"odjú[ßl„ž0&´Mý>Ã6Š‚®-¥€§ Ê’©QÖej•=åÚ Y&. 
<ë #âFa™Ze¼&vº¬çE” Á" å`“é%‰€D@"  DKü›9Žœ‡ŠµÊíÇa |…;]¤ÿ_ }iv!M/JeOÓ âDÁX?ˆ!5òL#…d€N¨ ³°pð‘y%‰€D@"júÎ|eA5>ýN—Ëõãá¡(Ö¾S'<»?©YçAÍ1*õx”µO®2 _ýÐ…e ƺp¬ ȵÒÜ8¤FÙpHe‰€D@" T„ÀÔ )ÿÆ‚«;a§| 4ËUudØlê@›MÑ5¡ÖC-©§p\5ÉÄÁ3OœŠbä_ —]LT´GG ’ˆ ÊÓ§O7ÿ¶vûYØ×½êRû`ßmº{iî­Ñ˜.~·8Ó<ïáÓÌHUuýÕ3N¿–é…øèOƒ¦n¨òÏσ¶‰Â•<Å–º´¤åUãjÌøI}œ.çp´¦­ª¨-ð&¢‹¦rAòä;χ¢&>Ðj¢[Bk6=»éë*7.DºÃïØÆnƒVy´Êé¬KU•É#ÆOZ\F ¨<üÆ1ù‡O^Šqi$ÃÖŠª4†<q&Òý¹|Uq¨Š8¬uŸ0™~m”4ûÍGá¢>t z>"öœ:Õ*ö½köê-céþ¥¨ÝÊ<Ü‹ñpëݨð¬@º<5¦œ2½_L%>~á‰øóƒ”žÃ}+l×bÀŸf¶4øàÛƒtçSiaäø‰„êúÂár´† K¨Ê!4&GÀí’Ï•òI|ª>™Ž´/À猟ž.Z5ú(cÔ(~úxÈHÿtÄ+©Š´iü’ØsäˆÉ½ /Ä+ªã»‘ÏØdL¶íûë³ÏÖK4'<[xääCÐ&X¸3_ÝD5¹N‚ c(þ×ιHÍ:ëÒvæ³;ž>¹#ǧ½!-/d<û,ý(Ë`yȶMnUà°/ÄÎ1|ñ.‚püºjµ,ʰ=»Ç vËb$ÕÇ_y%>÷xέäF@`¨(ÊvU1ßVF{Õ¾ÿBoØ>Žöüj©cþþݱcODµ²2‰@C€¶.‡xï°ó°!Æ.“¢ÞWÓ7©lŒ±Mjït8–`jÜ!ºÖ8 ÿŒ{¿’ûŸy†rn¯Ž¢sÛfÂlÒþ•Ýä*Q¿Óå[v‹×lK×ïT!sNçÍS'þó×*ÑÀJl„Q‹c##(OœØ¸ W`®érp©Ä§DV]¥‘’vô!ãK+EQŸœš6žvy2H$U©†â%ù ´Ë]ѤØÀà£*Ò´ÙŒQ)iƒœÂù‹êTéN˜¡=nѨ¾zï_™[6©_#ûld§ö>!Þÿfž3óh^-ê¼S>Dù˜gÔÎP¥åÚIÙk‰@ðí/+Áö2ø†Ê*€ÀÓ“''e²O‡–ó2hëFƒO§VfÕÄ&Pqgºëï©w›MÊ¿,11¢{ûVêÈë+±1³­q8:DÆWsÕ ;ö»T‡ã²i/بY®•²Q‚²ü~QãØDv¨:!ð¦í‘ìii©‹–<êé—ì·§g?˜’Ö¶:á$Û*¨,^|æ™S-,ç_ !9]ULµý:aÊêk%Ö« ɨߺaÍÒY–˜ØœfÐ$K!9xŠpRAÜZ4J&‹e:l¼“Q ñ­}AU6ãkÐæp;nÈ4Íf›sÀµõ¢©Ï§ünƒd~‰€D 2Øl¿Y2í Þ@éûíms±ž ‰@ØlÓ—ojIÃJâR½Ž9aTý2SiGy$¶[>\Иpÿ_IMrˆt¤°Ls•´}—œ`Š‹bxÐÅ^­ 0•eD‡ Ñ(°oyÈåtÍ3i’4"2‚*² ‰@Èt.x+Ê»caÒ£ÑZÅnÈ"%5néxÔ®Àþrǽ˜víºÖOH¸ ÷¢a“üëÒ bÛ^:ù‰~ˆtÝÄ8šÍ¦ßtS§rò.ʨºýUHá#¡u›ŽÃŽò^í4ÁðR÷”‹|°Š3MŠò^õÉ$‰€àh¿ŸÝîZ©yÄð“.Ø[#m»B›Ü‹§›W¦—HJ#@ï \ˆ[:Vþ  nš6çø:u“/°ÂÓ]ÀèenÙú]¢_öF糌¥ëvˆng´I‰Ø\° âH<-fËEhq–‚rô ßF~&Yoœ9á›ê—Y$, b›ã4Æ) ¶…·Â»tº.cJœ"µÉá!)sK°>HýµÀž(^”p„…€§ ;ÉI †ùIÞ´+Sääˆsº­]¦öøÓŠÓ¹âüÞDœÕŠ!M#`m1}ÖR±rónMMÄ~ÝŰs{hÚè3Z6×.rJtðèIñæg¿ÐXthÕ[Uü¹a§¸éÒþZ9k·î³¯û­›6—Ÿß Bt Msí«nf\?È?-\#NžÎmš6£n¼X$ÆÇje†ú‡þ¦‰g^~A+”AA™¶öµ&кÓiVÃݧ l2ž2h©”¯ÙϪ5èËŽJ"„€f¾¤ˆ]Ø ¯»¡U¨ª«ö¿™’šºßÐreaZˆ¶Y> ¯Húª…ý7°Ë”A¨°£ßäx³É\¿nb¼aZOjz{uj%âcY¼ÿ°PÄB8þëåÅѬSâà±'&óVlk·íw_=H\=èlñÕ¯Š#'²5ax!„X=,ß´KB¨>£Ec-jÃÎLQh·‹³:·Ñ„æOZ$Ú5o(½mên#ÖKÒÒù«{û¾Ãâ£óE›fPÿ…¢Ú®¬·—x"4Äoâ¾Ì§\ Î.»˜¨¸œ“ÂmjØeÌÅ8Xì·!2¿D@"PŒ€*Ž@÷ËͰ÷so£02H$a"€ ‚öáÝG-Ña©ÑVáò(Sp£¦ÓŠ#Ön/ÈÉÎÉ´á›8NM;|×U¢8!Ndçh‚ïc·]Û5‡–·¹X±©DtY·m¯&T7m˜$x4kTO»? gñÃüÕ‚Â,5È+6”£ûÂÄ ´(Œ÷†@ìéëy÷câÃÈ^¨ç:e_¨.ñðĉ_¿9n\Ñ4ÒgŽÐnpâ¶¾¹°¢ @t€MÛÿAH¾ëLè³ÈTÒrÿ“YÄŒ{7í÷Î16ÛX6{wam¢ñ\o X%¡œ‚íX×·lÏdzâ9rêT«ºçp!4à<<¼‘Ó³à¢ën Gw—êL,+ô,Qð‰ƒåS|3l¡ºÔ³Å5ÛG‘îèèñiàü$ùÔMžr5‰OéQš»—ö;æ÷…¸s"»P n±Åd~¨ì»§2ÂhtaY˜W-ø}m~ ¯ÙŽ œ›_{ã}â‘[/u··EãúšWŠÙKÖ‹¿@^¶~§¶€OOpv—6b „ëîí[ˆŽmš .Ú‹±šEãúuḠ›9 ßý{ ½Ì·tÝvѧ[;·ö»Ðî«·î]Û6×ì“Ô]#ba]QÝ]1)àb¾_ж>Xx¸eÏ!”ÛBxx"pݬæ/šõÃ"4WÇZï²<ˆ€_¡3À2‚J6òéWBH^ ådtvpp¨| bàu¡Su,Æ kLP…z$ÎxöÙ“ªRÅd‹”Ìê0È~›ißR¡§'ON‚Ü„têµ8ç m? 
¿kpMà×ÄÄ;Ü‹ él¿}_Þ{ú]ª%—jR£ðÕêÞYºŒ‚ü5èýLAî&dÔŽBzåÁn[ø4~’|ª?)«7ŸÂ­Ûmèá|¼‚9WB`þ cò1ðê¹—óG~9©í¾A[ãTWÍNDͦ~¸òrrì™»wÏYº~§J‡PÊ»E„8Ñ«g çŠ•›÷ˆ”·¾{Ú”ûÆ¡ýDÛf ÅkŸÎNú·xù??Â$"WËž”º%òŠ]ÍÑsƆ™œÛ»«pºœbæ«Å3o|.ž~ý3ÍãÄàsºh÷ýÕÝÿÌöbpßnâ›ß–‹qo} ¡|ý‡ˆq<~äÈÿN>L…žŽsXåÖÆÌáMW‚DìñW^‰Ï9žû.©: ¾²XêßõŽíÁÓ,ÆfSMiið´2×ÿ=l›<ãMÛ3ûÊV‘X7§ÂO2&–ÍgäïÇl¯&çÚs8U_Q¹Y§œ—àùìˆþ­[/¶ÝËO=•Ã<ÜÅÐqÚÙíõ”ÚiAQ7UØ9=q :ºSˆçâæQÿHÿWuÝ }Lêä.p7‰jWL†¾ÅùÚªB¦`¶Æ ›O}˜P•Å"Ò| ?I>-Kò¿kŸ6·¶üz¿cÿg0ò{fƒÚø­™ÄÙ·ìÀ—­–N§ œŒò½/côDV/Z^OI_¶°Ö7Š„Ü^Y;þøñÛoùð ÷¾þ=î™{®2y.’#6„ Îî,x” œÕYœ×«ÍJ-¾c: ÖcnºD3ÉÈ+(„׉¸R ë¾uh©âê$ÄŠwÆÞU*ŽÞ5þ9ê/š¦Ú¤˜vÅsß÷W7xKÜ:|€¸.æèÎŽma\¨¡ Ð!€ŸKu9sçÍüæß(‡ØgóP‹®^ù ú:k ·×Á0vî±Ü1ŒZCˆ8",ïxÇ6ªhº†F›ŒÃ'°‹‘æÜ»}<¢G•m_A¶£L4>Ä34åØqv¬¥îcoÚáâ ”ó›%Ó1¯aÏÜßÓžytêÄ«œÂõ$ÉÞðƒIu±ÙbyjŠmì.¦×µ‚xÒa¿z4¿íQ×f<µË`Ä4QqºZæØs^Ãó†UºJh¿×益·œWE8jO¦ª,Ð…d¦/ÞÅp!¯in‘騗 äC|Ž‘þ”« Ô-,©géfˆ†4Oã~È"h–²Æ$Ôt ªsYƒGÿ­›¤(ûŽL„ÊèKÚוupî¿’[°ùDW”W÷öÀ|铨&I/¾ùÈ#E% ñ@êänQ˜†úÎEu‘ŽFT'õû0¹I7 [=o™³ )iàc´ÕõöÃo¼ÑÓ³eÒº‚s¶ßžžŠöߊ~Ð|c…Ù*Æ9"‹`Ã>­Ôj_T“£LÚch|DU,sU—½ÊÊè{ɈíFÐûE¸| 0ói ÞyðvÁ³ºA1)“¦>ŸŠÉCQðxNCæSÔqêø;úv&Θ_«ËÐÉoš[Z}?émṺòéôéÓͳ×ny¯Àè{tÅÌ÷‚|)ò[ëìÄø¯ir¼€¸ÁHÛ ¼¿çM:Å×úæóÕžO‹ßõèƇ› aZòâFb²^Zé™0 ×ÁLh£ÐœhTÇUâ¨,áûÜž}ìXö²9¿|`ºô²1_ÍUG^?X EXöÕx÷þÊ£pKA5œàËd¢¢ºy?ÜMK($·DzÄÚÅ ^8¼w/¿Z_ ËĻ֣&´¿<}!KÆö`n_É´xPè\íÂ$þ“á!${f2›”©ü Aö|Ïø¢kÅi·þþz¼waPK†=è=j£0Ä4š=³*zâ^OO{f¼TvªÎxL.ÂÛc9ÊÈÁõ0Yý@J„÷¢@Mo¾#{ ÄU­p}§¨›Ðî.¨ƒký£ªêŽ¼ÚŽL8ó¥”Å×ùzžg5Þ4¯p¾´†r7'Ï{úu¦8ÔeÀ/ Z€?§Y&=ÍÈñiàîψŒºö#‡:ÿŠ¾Ý§§+é¿r©º÷ÈGp†þw`ÙxÇJÄ,Sü•œ‰²V‹ßv{Uu=_x$ÂjQx4=½©]µóÞÕˆ™#GqE> Êûw¬%FÓ†ŠmQÉåÿ¢LSý$Ë[hËF\wÊ?”ýLùT¥cѹ2íéï«ç€['`Lá ŸÓ.惶¥KÅÌ,Àç¤l>þžò|ÊÏŠÅÚ ‚É›ŠÙë-Muˆ}ÃâSðÊ-˜ˆ½©\C`½4€/’_Jx½Þÿ’ç44>/=:~D[‡àY̆0ž z^…G©E ü¤·C?WW>ýuí–`rö:úa¯><¾ÁsŽñJôÄäw>ƇïÙG Éâ” u4h’Œ¨980ÁöLSQ@ÚÁ§Þú ÜÎÐâUuƒ·ûQŒãÖzQ¬¯2«ìÚ»‘…d*iø-X½dÁ†õË–|±aÇ>uòß»Â1Ã@yµ&§ÉÌp·mkWg,žƒTŠp%¾ÄCE ‚A lAƒ®I;«U³‚G¥[}%‡Zh;ïAóá­¬§jÆ­õ"ÞÔ ¤B¥{&„ЃjÏ9à˜ä~ —-{¤íåFx2Š„@“y@FZê¥Ô®bòö:ÿºvEu ˆÎG 6Qi ËBQÇÚfZÚø¾-­ç'ãE|6…{ä³Áþy|q+pGï•­—¿3RR ü Ÿˆ't.4æßŒ´¥Ÿã™6Ãöì–6îd<ðyX/—ÚdÍnNUÓµ{ŠzfI}ÐŽsa֢ܗœ8±Œß]õ,Ô9\1›®ji¹ 1#-å<æ×ZMC”6[¢Ž+ÅåèÏ­¼çÒ&¼"7_½ùCÈþéîB/*åoEwEð™›Ìƒmq^¯'{žÕª˜Lôš@âc›ÔÞkBDƒ´Î€«6‘PM¦k€[ÿXK½–¨ã<7-=ë0¢/¤%Ët9ÍWЕâɯ ÜMˆöµ¦@º0ùTt0™•kZZR:âÙém2L¡ð$©®çÊótIëſȇxž9ò-x»€®LÍDóñSI%WÕ•OaǨ­çÀF #’WõÏEظ–ã7'¹B9mÏ{{Œ—ß´°\Ðx]k±Æ´Ãx[€ñé‘Q©/v*AÃ÷UMáSÏޱ¥qb8S¦ÅZÿGÏ{Q¿Vúg.5.E½ Ñ­‚›G!º àWfžóüüý¢E?ÿðÞþCÇ ÒþõúÁ·óÄFlîA—g2” @<ˆ ñ!NûÏÿsîì—ç|û%ÇOâKœ%x%Ð|e 8¥1 ›³— |Õ|˜ êñ¼ð5ŚТô¦&e¤Œ£ €x'=u/Ì0>Âå3.á†ó—Œ/ìùWbð©Ká7cÂØeîûåcaWÅ‹E"½ƒoä_üûç4, dœÍv10M€çï`CKkç[áñb"Ê|_«ÚÅ5à>ë3º`å¯L§£àrôuðRÛ?5-õ'=íÔ´q3G¦¦Íƒð1¨0Çuâÿ£ßãBÄ{ϧð“"úÀ¿E!Ã6n“~ͳÙRo–Ë~œÝnA{= Ój¨å† §è¦%ø$»ñdVÁiÎOç.A`ëYgÙk»°Ç¡­¿—Ñ׿9íη†}*‚Áí¿ø˜9¡BùÚKf:h¿-Ó¾`4:S2Á0¨/åd„¶›6ùÇË»rB˜|ªÌ†™ÅŒ¢B$×±¾•m X÷xø¹Yv¦×nˆ¿Ëi¿ k"ð˜è}¡—åiï¯Çs®n|šÙü„*öIƘ匫[¢ 5A™àóZëý‡0|›6º™LïgâÛÓarAZ½RQ û#­O†^NMãSxFjœŸëú(fÅô¸¾fFﯯ³Q[ã–-4Û‹¸&\SKv»åkƉƒ‚\>šr¡{ ëÚe‹7îܺiòùï{ÛKÖí°Â›‹Û2''%bXçk®v§fÊÁ‘«¹€ƒdU¸oçŽ?ü4cFöñ㔑øu‚xWâKœ‰· A"UA:¥L Y0tk쳊³ I È}çÙg³`Ë[*il¼ØáíëfN‘À-m=ãK]+j–‰u_bšðËû* 'ŠÂ<é~Ž›oZ[t+ü¿Å.äþþmò+v§ýqn£! Þª8ì}`ê1 Ø^ÙgE0¿ì-7;9¿|"e=z2jèÎeï™-¦ËÆÑ¦ñ×Õ[nFi·ã^hV[8ÇJÒåRM«=\…z¯þûK/M¤}õ鬂¿ ²§X§¿l‘? 
lYT !>Þô÷¼\õjÐêr~ªŸ2räט ”Ê èg[-3lÄ= aû1ÉÚ…‡¢DP6¸/žõu­³›PPuz$—O± ÒG-øî°ncw:Iï‚r€øc­i¼ñ(Æ£ì~uãÓŒQ£òñ /®ž>–‹ ¾øbâ ó*þ-r±xDš]Nõ Ø¢(€ÏâŸÿµ1P¯è\ø”æ(§í?£óØfMI›2!ezEýÖïGj2kQL³ªc"‹s}ÃGz}5øÌgQ”©I¦pG!Ù‚C{ÎÊrýüùÇßÇÅ%ÌîÚ§oÇæmÚµKH¬ IY1#]í øLÉc=«°°àt~nÎɃ{woÛ°|éú‚¼<â—íqð7q­u‚2ƹXPy!øúCô?¬À‡1ša;*ëA±£¯J1pë/Á`íðL[PS*BS"…HDúë‹&œc ÊšãYlwÈß KG9{Bû†AÅ>‚Ÿz0%í-;\‡N®\ç-(~Š¿*Ð7Îø6s{ÈÐÓãµÝœ(pp)œ¢Ñ¶Rø1{õ–)@ë~h§!ÓlÌHÞÅõq|ÿÓž¬{Bƒ¤™å‚yûí >DïÎ>™OÍÓ.(=PÓ)˜jhŸ{‹ËÛâ´ž¨Äóñ–*\¯=øÜ;³Ð>ÀU‚ÂCUu[áR_&ŠKãg)Ï`h_< ®×KƒhCX| ž‰Ê [0WØ|Š’‹ÊÂÕrµQøÔbQžv:ÔàÓCh~Br Iàí/§NûmFÚ8a9àˆµ ÍlÅtL0¨eÒ‚>þ¹ÌÊr=.suçSºÌ:Uø#^g“ÉÓ&¤Ž¤ßî4šÌÒ3ÆÚP@@)Qke¾ßø£æ“ã…_í=Œ³~/.??·pÕ‚ykp¬C<]ýñÅe­ cyèøðIütA9 ×T êeâË´ÞÆfD×Ä ¼5Z­Ð³Ãí¿—V¸e—ËÁwô›ñðNh>Æ¿úÄœé” Xi6’ æüR7Š˜œöÖ¸<ê¾§ªµk|„tÇ•¹€`·‚ ØJµÂ6ïÑ2·Ý?5ÍWJÚ1’ œù؃zy¸ ªàâíôÔÝ𵋖: xt® 9À0Á\bjѪöRÉa"ÑžO¾IUË Å¥âG‘K:ÇýŒ‡1ÂÓŠMèÝÂ¥M8±Š$»¾ÆHÔ¸ŽLø-p»3@»ãÐ$¿oúÄÓ?u Ø–m‹¿ßðôñî~GÚ=À¦Óy⟠[:_’%<e'Rç•Pt…‰SKmŠU|#})[geýÆsÿt u‡Ï§.ò¨;ÐþÕ¥4b„Éå ›Oñ$ì$EÁÑE“jwMÆ^T>MŠ·®‡YËZÌMZÁc!Ä‹¯aB0Ÿ KaŸ¬Bs_ð¬7ÇÄx6Ö+|fRÕ•Oñ…,1ëdÁL i?Ÿ1!5Å<Œ*Ï÷8L6›U^5(‡œµžT^èƒ=ã)à1žæTzXqè‚4ÓéiqYk‡¿â!PÃB2•”§tA™Bò)Ä“ø_âY+å_†5WƒL&Ó(#:¬ÏÚÂ)‹šª€´U±ëN…vphÜ8çXî°½]G¯˜^+й‰¼ðìç( Êóú=ϳSuÝ£ÿ怇§e›u¡_î¬Z£Vew;»Áeï´MÕ4TŒGyš€ŽOž€ æfÂѶ—šÐ4ƒi Hl×Ò**½a¸Ó0®l § ø„æ¬Æ´rTõRF@8Û¤ß@AZ¹ø ØUã®f€¯â„`7„§A@IDAT|¤m¢ûýöâö0°0"‰Ë´~CŽ© ï#½«Ž)~‰žÖ¡ØïÖ¯u2?ׂ·>BüÚii©ÅËõ!¼P^ó’µï}ô=ì€wIt6”¸¦5ЦŠdÌCSHWmüdpü–h T“òÀ´ )1æÆ\<ó ݶᅌ<ÝQξ¸&õ:Ò÷®ž–½R¬Mu7tp'6 n”žÕz‹E@¨s6ô[Ðr¹!î;hÜRyoäøz Õ¾•X0.C}«ñv釺za‹åVúË#RÓvASÙ/ÿYèˤن2ÞÖÊ÷øûÚ÷QÖÝÐø¬Aôv }¨{â0IP¶%Zûé‹6FO¿+X?@½‡á‰ésȉõ…%þñ ÛßB°áTEð9Ñü+|ÜFQâœê#ý«¨û Vë«ÿú=±÷0Ýu5fp³gú"ä0üî6AØVë×i˜˜@M?'pEwm] áìFm‡ÐF¯4™ÅO É ³ô/bË6” Ðti[XÇZcZêØêià!ä]´o4£ß¥¶°æ­~­£ŒýÐÔ|ÿÑg£Üó@Û“ì3¶®ôî„q0m8}a~=h^+\Ž¥ ÍVЦâ¯zFœaz2—?‘o0Ï•BâÓñiéà±qè3µÙxV¾á’€÷-xŽÈOé<âë9 ˜|Žgòf”™ M*Ý>&€/Æ“òFbæŸÊbZ]ùþm.¡~Š~ç€?·s¾O`\úÅ,,ßë~Î9pØ 7ðyMŽƒ÷?^'ÁÐXÿ \NÏ>îueÁÁïšÂ§èÇ{Àà^àå^¼t•|×Ë[¼g\UàQÏöÔk¼†4“ ÎTJQABM2^s‚ÇxÞ§V™éy0èç¢_5ó/ßù <ó –˜B05Æä{ÊQÔ*óà5ãu!YÏ‹¨šø^{|x†¢—ðê5^S„Ûc#4ÊAµ»®ÅYbzâ±þ4>¨v¯†üKÃüÙÂdéç)$—)ü{lq òv†ÕGAùnxQ.±X,U´Afècáî~jµðˆ÷?aVð ʃ6¦ E!c³k…b¦¶a%bú m÷k‚¤¢®*ŒÓ6))JhO²n¼h†!ÍcˆìPt£ô_f°±?¡¯MPÎõ¨û \[чƒã‡êB2sÅ™àUùúKý‡‘î†xkîÔ4 ÎÈǧ<îa»ð¢³£ícÀ'™¦¢€Å?ð„§Ü„:~D›¢ÿÐÚ«M&Ë…†§{æâ¹Wë㥻ˆqÊ[àO/üîô£± è~ÐÓŠ­ž>г¥Ž™B×aoéƒÁÃ’h¾}ž\›i¸b§DLÆnÆ A¼tˆT_J×RÁ¯*°=nX|ª˜¡­W~À3v&(·£·¹Àÿ]Hö×û@ñÇÄøN</ieiu¨×¡> øJ[ÀÆxüT¶ Õ•OáªÚ#là/ x¾»N‡ƒ·_vªöµº+Jz¸€tјá%"pL±âà*ü†Ò`pȧêħ?›Ñ^5c<Öüí—=—}6*û7¿”Tv¢T?aMóIáÎS³Lí1µ¦xε÷Á!œ–9àwM?Êö™8ðýH\ˆqÊÆQV“L\kEÀÞoƒŸ‡Cvºß(!™Àa\­Ü@ÿ¿J¡9¶¬6±¢VqCŒ<³9?£Ø…›žÞ—¦J¿Ï3W;+§MML‰Î£þ1c­ÌÛS^:Ï l3œ–ZÞMIÉ„ÀîóáÕ½oÄ6©·ÏÛ€®ƒœ1yÖrfž•Upý¸í•9q…βø1ÕÆ£Æ§oÅUS̤®uµnò…l~¶T\GBPþ™Úå–.‰ÅžܵŠ­;ƒâAzæåæ6˜’šº?j+£/´+”445a¾P·Æ …OÉ‹Ö=GZ¼•–²§ìón$Ÿ’]‹¹…h°ìóÈ>WÄOL㪠Ÿ‚GïÄ×¹€í±ë?Þ´=ƒ¯Eªò`jz+̈_Á¤üFL®ŸÇDúŸžý£™ÛQGZ[Åco(Î8ì 3ÏôF]×$>…fz*q1jç/ãËÚOxQ·Äº’§õu%þÒ×{Tâñà$µÈº&™ñ”]ô—µ*PvÐNpuÍ2'¼æÁø'¿È[-¾ô|‚‡b17ý2²|ÐjTàF»c;X(ëÒ^]Ý|óÍ|hd³¯0+»Á3«¶õ°}ÁAhfhÿ•€<ÓÊ몃^ìsؼØ5¹²[&ù4| Àüä~¡RLÖ^Ú—0"aÂò4Ư ÅôØ´´”×=nÉËj†'¹Øýöm|EìŠñv¶IQ>1¹Ô_¹@5ëJ°ÍÕažuÁ¹¶ É:†º Ì³.ó쯧•ç0à -¬®¦*¬Ê=2þGúp¬üêäp85ÁŸz¿–B²@A\ZÍæ½ðå ›eqfh£`ì½ÐŒYªpvÊt,€É‹Ú#×ÿI!9P+;)wЬAò©qD€ñN¸PÄkÒ1¸¾nq¹¶8Kx ·Hã13YLßW£,©2àW ((zf:æÂØû¬Ó¥~@í&¿ë¬‰¦ÁoÇÏîZ(¶EÿLå°¦tÀdž¬úu­wл“~§º¥W§îʘ<™í×”|w>žÒ,&.6ÃWyÇïúðµ×Üý­IéÁߊ1Ö À—àêOß@žOtšÏvsôÜÞÒÒ²«Íví±#”1ÓM-n¡¦±ŠhkýçÛܽîQ&ÞJÅ"Þñ“\ÞòƒíGŒO¿>'^ÃÀ<‹µÌæ<Oÿha—.Äx?¥È[òH>-I¨1±–¤qöìvxq\átº.×>©Ž.LbgÁÇòØwlcw…Z¾ÌWu(^hÉÅâoÓ§Cr@—˜¸8.œ­5!¹AjMyh¡N½zN,Rxû— Ն͛“-4Öàíšœ¾L÷kòOØ¢++p¬†L ‹¦"a@ZUVÿŽ´½ÜH¸ zZMÊþ·mã¶„Ñ%™µmé£=‡ÒLN,ÈŠ9ÐÔÒ»¿Õñ¼ðˆÄö¸’OçKU-6Úð4Þ6¬IŠI=cÚÎú…ªÚOÙ.‰€D@"P„­Qþö¾>ŠòüÿÙ+' 9 á>å¾A.¹Åz+ÚjÕ X«µ­ÕŠ€~‚ÒzÔÖÚñ¨J=ê­ (("·Ü÷M¸Â¹³IvwþÏ3›w3Iv7{ÌìNvŸ÷ó™Ù™wÞãûçgžyæ}ÂÑhwuÒj¸ÿwÇx_`Ð|x&>¹q %Z…Ç e<ÕÅrê ™À÷WœêÓª×Âg*ÐâaVY>o3Œ€~ˆEY?rK Ð(‹*ò »Í¶ü?/ÎÝ‚R¸N¡€iZ¿Ÿ…ËüÝ÷ÇÙMfóD|ÌiK¼aôù$?ËiôÙ •ožù¤­²jù»›¿;Eœ‰p«áçÏ ‘¬š˜ëjä×ζòøZçjodãkÖó_F€4˜•¦Íœ·’ À'ì±>$”cãˆ×%vÚø¸‡©¥Á`’â)É †8‹%˜»MÑW6kU•£ 
¨L*.³% To4ÎÙªì«<]üê¢E/–ck]1îÃØrãÝwÿ1>&+éa³Åô;æ ÀoÖrë«G7nÿ×Úµ_—Vó&”æ°P`kæº[€k_ëp&þºãN‡ã«h.¯F@'¯(Ïš»Šú‚³_Œ¡µ—$,&Ìc¾óáÇ&&§¤þË!I™=;µ–]ÑÁЫS$ÄÅz)":•Y+`÷‘Ó°yoŽ´çH®Ád4ž±Z˧¾ý¼oR¸Âa¡”ù»÷ñYWÇÅÅ¿nw8Z1oµ¯Çº¼ †³E—úàÕ—–cN.v\äO‹ð¸~*ÊÌumjëýS‹ëz¾ƒÇW±«ËNÆW[ÏÙF ÔÒT0jì½TÀÖÕ?¼Mk‰qªË‚KÌÝxòw M’dµHMœzóã¤+{Zg¤‚Å´'ˆ‡ê×nÂðܳƒ¡k»–püìÅ„RkÕ]ýFŒ.Ù¾vÍ[jEYæïW3æüÞl¶¼••‘¼¹¹¤êòvâl~B…î¸bÐë® kÉC$¯üõ¿rì;I¿uÍÊwÅ Á®}”Sª†¹ölµ¸ö¡*_²ðøê JÕyêrÈøJ³ýFNhµmõ÷gü¨š³2Œ@#D î ÊA<îç>þû¸øÄy»·7<ñËɦÎÙ-!l¡k2áó§{'›/£ÑôÂýO>ý{¬:×P$™?ª—êgÞ|ƒ\É]ïtÝã™q¸Ðâwþ(•` Z+ʵñï›> -Éÿ•-é›Gb,A´ ÌÁ5“p"¼7“Éòç{þ0ã,Ñ»²\•âl™?ªêeÞ,¾­•¼ÑuO×?žé›²ì[þä¢7Þ¢h2×þ Y'o˜¸æñµüUrç󸾇Ù@ºÈç0Œ@h­(SùäO›Ý%5£Uë—[·H•î¹~ ðœüD€pËÊH•âŽ["ž®9TÕǼùI–";ñÖªy ÐõOr€‡ÈŸäBkþ\­ š DÑ42×.¸Þ1×<¾ÌTýÃ0¾ÖoïaÝ! ÆÚ“¥ŠÊ&«'½jŽzíµÓÑ1³Å‡˜è ž“ÿnS®b¢ ³úv}K ÕàÐ]cdþ¨ªys‘oûˆ·;¯j¤ëŸäÏŠÇE¸`hÅŸosæb®ýAËKÞr-s†MáñÕ þ ñøêOÓ8/#À„ oÒ^,Ud5&«½jNlšžv?Í’@>aœG€ð#cãâÁR_­¬ó2Tó8_âLÁÉ$Zò'ªöeÍ\û‚’yBÄ5¯>òáO6Á]ÆWšÅyF Œ­({h; âš7òšŸ D}®Mç!?ïöÂçYnqÛÔGUã¬6®2T>ÕüùAŽ—¬N ÍœòàÅW™Âã†.D.sí…³@ùÌu`Èœá©dMæñ50 =ž‚ñÕcÝ|€`ô‡Y´´Hâµ` ×<«õ &‚ó$«­ÐiÑöZeÚìv4ÙÀdÒꙢVu>ý¡ù¦ Ï„¤¤ x¹¾Ðü¼´¨•dþ¨üpñ†ó4ÎoT**«ä)1pKPå¨u²àäˤé⬸TâR‹?-Âãz‰ v®±ÿA'½É©¯\Øq™3<7ìã«rZn­„ø8êŠ>’àÎëøºY}€Â­`¢­eÒL¨ly 7ÇÄ´NŠ•0˜ˆ.4–}9§á/VÞᎉƒÝÒŸ{þ|øíOp4÷ÄÆ˜¡W§6p÷uÃ!Æ¢dn›áv'e¡†Vk[Ì@ ªr›1ðNþŒÆ¶TÖR§òÏWno7ìg~}3d¤%ûÕ‹Ê*;,[»Öï: …%e²¢ÜºE ÜsÝÈÄê>_¹ËÞ-—I>‰iÉM`ÂÐ+àʾ]üª'Ì2o((x>¹^|hÁ_½æ9$ìê+ë +×uÚRëoc–S¹ÖÅøŒœ^¸T ÷:q**m”]Э çÕ—¯™ÿüò ŠåmÂ’æ•¿ãê!€3׺F´ø#s×ÀøªÅì}á2F x‚ÖúÜXªhùÕ ÙbižÒ4‘ö‡=‘¶lÝ΃›œ½X4SÁ/P9.)³Â+¬€.m[ÂÈ~Ú+T¾€„a¾ù…™˜W|ÐGøz dáK¹˜ÇÅŸÁhȤz| ŠË çôEÈH¯Q¸±OðÈC…úí‡àË7A¿®m!!^û¨$gÏ[šcS•ÓÄÆjñG0ÔOÉaåº~#köD‚œjĵ‹3D+lã«79ý~Ó~¸¢c+h™Þ´†PÅ=À>÷ÖWÐ<5 ¼u¬¼>yîÐ[eº²Og¸jP¸p©>ÿa+|õã6øõmã”Y4ÛÖp|Õ¬Í\0#Àhƒ€»›§_5‘¥Ja­çR¹¤„ÓbÁWè‰q ðaOg.\‚ßÿüjHoÚÄk[ÐO îš4Lì;µií[5ƒ#¨¨é%ž"™:AÍc~ÉüQù¡äÍnwÀ;‹×Áø!=ë4Çù÷ÞLäœq{Œv8~ö`èïk†÷†Qý»â 8è㜫‡õ¬åÆa6šdËT·v™0nH¨¬²AnÞeåªy€ð$yÀ2IÉ2¢6þ49,\7ÔÀHS ¹–9«¾~B>¾6$§ÛösYƒÝñ¼dõ°VTÁ4´woß š¥$Aÿnma0޹ÊDVfœ–úwo‹²Ùøv/OyXÓmÇWMÛÎ…3Œ€ºƒ&KUmk)ÄÂê!+Ê’$ÿW·å–vß £ mf3°;j[/¼Wf­€“èŠÑ¡u†·l¡?æÄ•8˜«ÑQ–íÓ´²´lÝ.ÙÕâŠYµêt8$äË8M¼Ð6-uÓ±3å]dö–*P1Þqð¬Ûq>û~«|=tÌ ·Õò åzüQxÜ©³çÓ‡šZ§°qÝPÇ"ENâº!ܯá¬ÚêñÕ“œ ¹Ä€Ä“þÓvÝDrJ.ô ë-Ë/’å”\¥~Ú›cvó–]ýcÚŒ¯ê·“KdM EV‹D7±„Ô¿ÕŸÎ|ôTXôõ:HIJ€!ðcõ§ýx¢›¦ÀÙŸSÊ+—Y]~CyU9ž{þ2¬Úzf?ð3Ù%BYèK‹–ɾâbßtÍ _ñWžø…Ø%¯ ŠJ‘§DhQífñöWk`oµzòÈ>0z€óFK®4|óX++ÁfsÀ”k†B>ø.3õø“Ãã:¿Í_«ƒÚü 9×þv£±Ë)ö×#×þbQ_\3´éøêIN ÑÅéO¯|äêÎ??úNÞŽî÷L¾ÒµŸ6.£œíÕQÞGî3þñ‰ëøÌû¯r¢D>êÇNçAQi9>@'AÏŽµ ]'i´ám|¥‡Y»Ý$½>wÆfªçbF@'h¡( +¤B‰“:tÙ}3þ‡îý°e¿|ðö‰C`ì îòögßo‘}\ÿtïuºšùBÑ %ÞŠÝoª]^ƒ !«Ó»KÖù$–ãL«?âÉ»\M›$Àon‡þ‹X½ý œ¿X·MŒ³~Ô/6¹I<”Yi" gÚ»tE÷Š¿ÝdE‰ÜnæýæÙÚµóÐ)XðÉJHĆèohR­Ñ±Æ÷’ré®k¢N±v—GWûÿÛ‡P\êäø/Þ._Ô@ýÊi\û®àŠÖÕ shÆWorÚ±uKxþÑ;äÞüó£ïÑ}ªtm› 7Á¥ÈèPZ^!祙mn;Ρ\“åmÐ.Dh ¾ñªø±_|¼b3¼´è[øëcSü¾ÄU€:J¼]%†øaÖU/o0Œ@èÐê†Lƒ‹Û&ô]ô­Fúh¤úÉQ"ë¥å÷Àº‡á±»¯ÁÙÈ­T—I‰µZ Ô¢Lm+ÂôNžË—1#eþLJßÉïôéÒF>7>6F¶$“Bì.µi‘†þÆvØ{4?&j äƒLé“ïÜ}pê;œÍ¤5–iBËÕ…*Êr³”ò!¶åýÐ4‚îRH¹v×öÑÇ_ä#K©I<}Ù(äTð«ÖÊrDÙ2Zþø*§fœJ3§{ó&§ô¦‡d•do>ÐÄÙ/”²¯ìGlŒƒ,eÁZt•:}á2´kE߆,)±®©4t³5uò#À„­eÑ™ â¢Â@פ ™Ê ÿUšþèv´^–ã‡'4M¥ŽºòSÖÚ’¤uù2¤Ð-L}ìNçü¥›áK‹¾'ï› mZ¦¹ö·D—ŠX/ÓóõîÜFÎÿÎ’u8íßèÕ-Ì•õü™mèŸN–£ëÅ–½Çäv6~¬†2ù šÞû®½·¡á£uå¯qÈ©Ü/-¸Ö¢L$ø*§Ý;´’] <DÛþ´'þýñ÷ð³Ñý Ý¥Îá,CuS¾9 9½XP"+Ѥ0·Hs?“FÝsÕùß8dB¾r)Œ#à 5eO–*Ou6Šýä/Kß¡ÐÔaÊ´`æ½Ê¿¼­4w©Hd9¦DV)eв7”~;e¢ìjñÎâµòl”?ß4O©ùp¨ ¨ ž}ó+0›Lò”w¢²·)窓‡–ÓÐbNÙ×k£Z6k øÅÕ8¶þÏ¿³Tv¢·;Ù-Ó!Æ\sKZoóh¡Ù/z¢ìß:~®“xí$dˆA fT °K [ª\.Ö Íi3õ3¯7tÜëÉ|0(Z£ E $tSA Šñ# ²B)ƒÄÜ„þ´è …Ô¨ƒþÔ„†ä°¡ãUªþIZr­eÙ^‘FNiÚÆYÜ Ï\n­Â9Ìk??‹ßpbF@­(ë¡ÜFÀI‰î}™ÝåÕÕ>«+:¸1Ú!@ou’C:q‡vá’F "`E9"iåN5f´ë&‚fc†ˆÛ΄~˜ /þ\;#BXQ!Ø\#.Ñ3W†« \/#)hñ0)Øp?HC hE™-U‘vIp"ÚÑ3#²‹Ü)F€`F@m‚V”ÙR¥6%\#À0Œ#À0Œ€ ÈNÁ%²T±µ*8 ùlF@…Ç:{þ Å.ÞdF€`0 ¼¢†FSe%†2¦ù‘9E6›]ž‡52zãì…×aŸI}ò·/¶˜Sä irʳ‘smrO†Úõ¢¡ ôr|õ¶ƒ°vû!8uþέk†öuÚ-c ïR1<÷Öb¹™f³ S$ÁÕÃ{ÁÐ^å}¿{ñ=¸vD˜8´§«+y÷‘SðÞìÚÇê"0óŸŸB~A1` …› ¸óê!@‰´ãà XðÉ0f@7˜‚ACD:y6_æ´IB,<ÿè0k¦š¥h‹ßnØWöé 
wO¾Rœ¢¯u”†Ç­²Ùàßn‚}9§áRQ)¤5mCzv€ÆôÇ(™[]!Ž)èEÛÌt¸}â`Èl–"çåƒðì÷B:ž#Ò“ÿø†÷î,Gûx­.Ñ*§òÃ,Ø Ìñê"Ê¥1Œ€Þˆ Eyî#ðÁ7aò¨¾ðÀMcÀZQ ÇÎäˆG¤B.–À©s— «yš«Öíø c6CäµW§Ö²uÙu°z£*ÊÛžtù0oÝŸý»¶ëéj¥>ZájNx7N"¿”ˆO_Ò t¯¡·±‹/ÙÃGK®µ,Û/Ü"TNý€33Œ@ä!´ë…/V*›ÝfµVUÑ€^ã,",ÏçÉ5uláµFz[V^‰¯~/Êç^³¼æ÷Ake•äpH%Z¶Ã.9JÃÅõk ú”oÞ{ òQQ&ËâÕÃjüÄÉí¢O—6²»EßnmaÑ’uhY¶AŒ¥æ’î‚®Uø±ß¡ç +¾QØ‚Öå_NËÑG9\‰x³ÙlåáªßS½áæúB~!4E_tòKö”ŠË¬ðÖ«áøÙ<¸€ßLÙ·–»§óµ?T\‡s|%l#UNµ_Ãu]r½Œ#à5Z…çùš[¶vTZ+.–†EQNNŒ—ÛJJ°·D¯êÉ=ƒ>j›YûÕ<ùº*“Ãá@?ä:¿² PPT*Ùm¶¼Z;Uþ㨲_oÔâ^Æàõèåò-.,.“}Óš&Á ï,…JüŒ”dú¸opOçG˜t¾Éd”Ý/¶¢‚LÛv»Cæ8œŠ2ñVYQQ@íÃTûÂrîý†ƒb3TëpsÝÝlʬÞe”à"Çé ½;·†”¤ÄZð\*“„Ã)¥>q­l°ÿÛa_©É‘*§Z¯þÓÍg0Œ@8ÐJQ¦\(’µ¬4¯¸¼ÂPf­úb=”)=%Qžå‚f©è…7WOég£ûCvËôz‡S’àúO*ÓÑÜÛr‹´ë%‚¦.¸&·šõâÀñ³òÃŒ;j’âá—רw(µZaÎAôæ©N¹¤SÈí*\rê+×õ:ãۥ̇u|¥æFªœz_Ãð0ëÛ¥Á¹F@mÔðQ®Û&¥@&ÇÉÃw¢ÅÖ°ûHí®êž¨Å ú±ŽÅÖî8$O F¯lÏ^,²2ú’胰ýÇΛ÷Áé —aÙº]pâìEèÙ©/§k’‡p$<Ïçž\‹(ñV£>WyT~¸xSvdÜ Ð$>g¾Ø"ïÞ´7-ŠÙò´a4u-c1Ͼcg _Ï+Ó³ðãÌ*yjÀ=Ú)…|[ðFò€•˲kÞZ6ˆ"h*¢hŠª\u‡›ë¾]ÛÈJí»‹×®ç€ü]sp¶“#§Î‹¶z\·HoŠé&Ê{ðƒÀ“@~´b³ü0Õ%;ÓãyZÐkgØþ°¯J #MN½¯ô0«Å­OÞf} ´¢L–*a­RtI(´¶ïÙ¼áˆÃn/ܼ7G òЬÚoNÝý[{Éó°>ýïOa>ΛLÓÀù’®ÙOžËõÓï·ÂÜ׿„·”³‘8mY¸áˆ)Õ’ÏÈÙV`­fsä2©|ª'\¼‰Ñëö[Æ Båذû(Ð<Éë(½d-¦DÖfe¢%úØ^íwnÓRy(äÛ„#ÉÉVN“° îh­mòAS\Ó‡yÞu5d6O……Ÿ®‚§^ýþýñ÷øPêüÈÏ84Oöô[®‹Ùˆç¬„ß]Ñ¢üè]¦” GÒ˜kåuöñUàIrªñø* ã5#À4‚vá›6sÞJê'>]­î/•IþôUN .ô]‹ ·Þug»®Ýo{ìîk䩾p_XRYyÄ£û²ð'Q4¿òŠÐ»ŽÔmãá“çà¥Eß@Þ™Ó¯~ñŸÇãäïJõUà¢ÆƒH-þn¼ïÁG›·Êz8ܼaßu¼?¸ÿ㟼ÿv†L¥äÓ£6nqš6kî*:€ߎ¡uuÒ%ר¤ÈÓ¾â¦Eîø²­É5iDgCµÖ˜ëZœaŸt5¾† c­êÜi8¾jÕt.—`4B h‹2¸·T‘ÅÃV½P,ÚÊuK/wØløÃAóÞ†+Q° •dj+È[Í~n„Nw‘ÿãŸ}ˆe Œ o5“‹?ª‡ê 7ojv.Ôe Þèú'9Àúé«5’ ·ü…8<®î¸&ër ²Fo©$ûËu€×¢‹³êëH7ãk€ýÑÅi‚»Œ¯ºè/7‚`|C xE¹~=dÕ¤œ´aRÈi´¢¬¬¸dÛÚU‹Îäú!ªaùÄb£+n„ß[^¸|ùÍ{GøŠWøjaZ‹?ª‡êcÞ¿Öotý“`Idý'¹pËŸ×aŸx>ŸÉ\û •oýåÚ·RkåªÅáñµ<ÿÜù2¾†øa6ðNñ™Œ#4ZÎzAÖ2²œÑ@^†Kùöu?nOÍÈø·o‚Ï@ºçú†pZ°"‘¥ƒq Õl8}ìÈ[k—-^ƒ '\ _ÂY-%‹’•çâêKNMûîûóæÈ—_%oG÷íú‚®<æP&yðÌ_hÃã2×¾Ù@ž€¹n \‡kq†yx|õ”/»•Üù:¾Ê³²ÆûRçaÆ‹€Vв°( E¹!ŠÇ%våçƒjùÙýìlþeiÊÕCž˜“{ÈgŽÜÈ¢{êèÁw¿ùð¿äßJx EKX”ÝØÞzü-}ÿ®™ò º^îfÞUÉÛѽ»¾\ù^÷NÞÈ¢¬5îè)z&sí-?ö…ëzœasy|õƒ3‘UÉ_ãkhfEsyÍ0a@ è/^Œ{/µ{ëêÞ¦uD 1¹wÐBuÉ˱ûrí¶ª3MÒZtÙ¸çXÜñ³é[9C~¡N>†ÑžhÖmNÂ?l•¾\µÍPXTR¸sÚ¿¯^òÅRĆ>þ"· ¡lÑë{ºqj‘jñwdÏ®“ɔߴY¯{r˜·:ˆ×ç­¸hóªïÞÝøÝ7ë0k1.…¸4È_2U§Vßþ¢|® ÅKnæÚ 8u©ÅuÝrýü_‹3<—ÇW¬Ïÿã«2êCÓ9 #À„5´RO–*ñz,ŸTXd¿èÖn;´cûáaW_;Æn·Ús$7?â‘’âc¥”¦‰†8‹…nQ•(\4E£ øå¿q)>uäÐòõË–,)--¾ˆ`¢¥T’µp»˜»å¿µû6ÿt`ø¤É“±}™74í»áíø}«7|»tUyy))ÈdE&îBÉŸàÑ—5sí J˜GG\»å ›Èã«.ÝqÆñÕC+y7#Àè ­•Q²pXp‰Ã%—¦¸¤T¯“ª÷Ñ1K—^}ÛµíÒ­[|bRZlB|²Ùd¦)æ¢*Ùì6ke¹µ°¼´$ƒRì9°së!€,ÇJk$)[Jk²–Sˆ4È_·>ºdwîÚ3>±IzL|\SäøŒª„¼UT”•¡N|éÄ¡íÞq K?=$úÍŸ‡©Ü°(Msí¼jsíC•Þ²4ÈžÌãk5‚jޝa’Qo×cPâì­iä@VOúÊ_(åbù/Ó~Ù·•‹}¸ÄÿÂU7]çÐv¤&² Q"\h³…6ôáY#IQ&K2­éÃ:F¸R~-“àŠêsË*ó;q9€ÇéˆnÜÌŸSIœ¿ð„Çe®ñâm éMVä ûÃã«“TŠ=¯Î–ñ/#Àè­eº±(-žô_9¸“"H¡³d«®©=BÙŠîŠø$p!¬HÖHRŠÉ")‹0:NyÅ75IÌŸo°ªÊŸ¡qEôÌóf¯ôÐ%æÚ0uv«Êu²ýýËœù†˜úœ…çaÖ·Þr.F€P­ej¬r0ÛÂÊLŠ lñÀ5Y$•VI¡(‹5ޏDxPRä¤Ó",’ôú^,´Ÿ° …’ŒÕÈIpFÄ6óWƒÀEi­ÒrK’a–³ÉàIQý¶Ìu5`Õ+ƒ­õĵàI´MÊ0ºA hEÙGKu¸î`NVIRÉZªt¹Jr´)ÊuoÀÂrLkRŽÅ‚›aIÌŸgØ ½óç¹õõ0×õ1{ôÊ5s&ª¿Ö+gõ[Ê{F@w­(ûa© )ÂôšP(Æ´ÛѤ$ \êæô_`EÛzI¢MÌ_mFTãÂãÚí&éõ¹36×®"äÿ˜k÷«ÆµûâƒÚËœ¹‡OÏœ¹o1ïe] ¼¢x7hà"E™Rº„r,Ö¸+êaBI êÎúüeþêó¢ : Ë\kÄuýbUÛÜՇRù¤buô0[¿—¼‡`TE œŠ²²#4€‰AL¹Ÿ·ÌŸš<é;<.s­&ס)‹9Sg>̪ÜC.Ž`zQ”E{xÍ0Ú à)‚¦6µq©Œ@$# ï‡ÙHFžûÆ„V”C9WÈ„…óf?úZ¹FF€`F q# †¢Ì–ªÆ} pëF€`F€`Ü ´¢Ì–*7¨ò.F€`F€`F@Њr£G€;Àè «7F¸=Œ#À0QŠ+ÊQJÞ´%Þ $¹ÉÑ» •o’ÃqÒVYµüÝ¿Íߊ&Έ;ÿùÓ_x\æºú Vëêr5XñøZT5ÇW?ÌÖé9ÿe`‡¢,”cãˆ×%vÚø¸‡©¥Á`’â)É †8‹%x·` ù֪*GAQ™T\f5LÆg¦ÏžwÎVeÿ[åéâW-z±›DJW¸“ñî»ÿ“•ô°Ùbúó äÍd±Ì!Þ¬åÖWnÜþ¯µk¿.E„ÂNþ‰ É\ב¶FÀ5¯u8•Ü=¾êïaVt“׌# 2du*M›5w€úÆÐÚKæ1ßùðc“SRÿå¤ÌžZKƒ®è`èÕ) âb½‡Ê¬°ûÈiؼ7GÚs$Çt㫵|êÛ/ÌûðßB©l2÷>>ë길ø×íG+æ­6°uy3 g‹ .?ôÁ«/-Çœ6\ì¸4ÈŸ2U»êýc®ÀR-®¨ÆŸÃ<¾úˆV]î_u £>ö–³1Œ@°ÒT0jl,àôÖÕ?¬ðR âT——˜»ÿðäïš$/Èj‘š8õæ1ÆIWö6´ÎH‹9n/­Ó!Âðܳƒ¡k»–püìÅ„RkÕ]ýFŒ.Ù¾vYCíŠ!ó÷«s~o6[ÞÊÊHaÞÜ\uy;q6?¡Âw\1hˆu׆µäŽ!’WþP¦î¥Œ(SoÓ:ĉ¹öpµ¸ö¡*_²ðøê JÕyêrÈøfõ£·œ•`‚E 
hE™d?”丟?úøïâ“æìÞº}¼1#-9Ø>DôùéM›À°Þy—Šál~áľWŽDeùÇŸ°Ó^•-A‘oÂ÷?ùôïM&ó Ì›oÈ*y»PP:®{ÿÖÝ?­¾ËÄGþÂxf®}£·V®`¸®UP`”J2¯~b¨äΟñ5Œ2êg9;#À‹@Њr ¨5ˆßxßôI)éÍ_!eë›GL¦¨sCn.÷‡ §þÝÛÎ],”Îåï9hè–Öæ`nÊ–û’üÞ+ówÏf\c‰‰{“yó?%oyE壳;uÙ}`ÇÖãXŠP”Ýò7`ä¸aø}kþÖ5+—øWcP¹™ë à ”ë ª¤Sy| @:]ɯã++Ê*ÏE0­5U*Ÿü)b³³»¤f´jýrë©Ò=× ž“ŸnY©R|BâÂñãoKÄÓ5çê¡ú˜7?ÉRd'ÞZ5OºþIð9â“\¸åÂãª"—"hŠ(šŠ¦)7̵ŽÀ¶ýå:°Z\gñøê‚"ø ¿ÆWÉpháÄ0€ÛµJ½¦²…_rüÐk¯Žæ³wLbбhmÈV©:+†p›rõ}™Õ·ëÃÕøjÅ¡ÌÕCõ1o_ ÄÛ× 5ÒõOr€%ÅãBþú$Zñ‡E×$Š ©ˆ¢YsÀ¹Å\×E$Àÿ!äZæ ›Iׯò¥<ÍŸñU‹‡Ye[x›`ôƒ@Ð7i/–*²“Õ,—Ħéi÷Ó, ³[ê§÷°%„á÷6ŸðÕÊ:/óGõ0oÁ_(‚7’,Þ\hÉ_íF{ É\×F+¨!âšÇ× Xr²à.ã«ûð^F€ÐA+Ê,U4ˆ krÜÈk~6õ¹f4œîh„ "%IjqÛÔG(Ì1á¬6®2T>Õü©s‘8q44sʃ¬( «²ÚüùÓ`æÚ´|Ì«1×2gغ~x|õ‘_³…`|õµ)œ`t€@Њ2¸·TQ¹¤ÀÅà×<«õ &Bó$‡*•[+=Våí˜Ç“ܨ¨¬ ¸á戶»GÂ3!)iÖ¤Åë{™?*?Ô¼i‹\xK¼‘<`KÈ¢LòQ? ;uö|z E +×ÞdÑÛ1€ ‡œúʵ?ýPä ûøªhKÄm î4_#3î#ÉЫ_-Y<¨lYQ6ÇÄ´NŠ•0˜ˆj–³Â’2Xøé*È+(†ç½ÃÕ‡]‡OÁ»KÖBIYÐÔs¿¾m,d6K‘{;æ* Ê*;,[»Öï: ÔyNÎ)pÏu# ?Ø:y6ž{k±\ŠÑh€¤„xèÑ¡Ü}Ý•@ÿÕH”…"VX­m±<¹Jre8ù3ÛR=X)sºMûrNÃ_¬†!=;Âë¶2o((ØH¡(×ã/Äáq5åÚ7ÞdÑÛ1_ÉmHN?_¹¾Ý°[.Ž|SÓ’›À„¡WÀ•}»øZ…×|¾ríµÏœi8¾z®:ð#ž®…ÀKÔæL_ÆWz˜µÛMÒësglÖ¦\*#Àè²L¨hùÕ ÙbižÒ4‘ö«’rNçÁso.¢Rk­òªl6øÏ—k KÛ–ðÔ¯®‡&ñ±ðÎâµroÇD!å•ðùÊ-â¯ÛõÂÏVÂ[Àu#ûÀ܇n†ßÞ9úvméµçƒ~à¦ÑðÄ/¯…‰ÃzÁ†]G`ó¾cnË t'†ù6Œ†L<_X$ÕÂ×Å•OõÚÆPœ÷í†=°ð³U€ÑµBQ]Ðu<`A¤( × j| 74!r5åÚ7ÞdÑÛ1þ¡“ç”'_ä¯m˜=õšÏÚ´Hƒÿ-ße墚 ×>qí-.ÎðTMÆWÿ›Ôðž®…†Ï OކÆWùaÖaŸžÖq­Œ#J´Ò.¨\²”ÑbA%&1Îb©Q‚ìáé —a^AQ¡é7óß•7ž2zv¬ù± ¨RðfÝ¢ÚÍâí¯ÖÀÞjËÖdtÅ= ›(¾]¿KÞ&_é!=;@ZŸÕN8#Ý4Îj/—Y]¾šåjZ–•¨F”„ËŒüùCH¸VrãM½{iÑ2|ð¬y3C.M1(ï¯<ñ‹ZýõUNKʬðÁ7?áƒl%Øl˜rÍPÕ¾#¨Õ ÷(µ¸åÐ:dãk>üWy-\HNÔp| Aë¹ F€P ­2ÀÝXª„R¡ÄÉ ]€5ø~Z"ú$“•H¤ t¥ ÞŽì—ƒz´CŸçrxþ¥0ï¡[äóâcäµø!Ku™µÆ/zhïNÐ_Û~øíÆZõRþßÞ9Qvÿ¸„ÊõKï.Ãò˜zóQ”šk%Þj”«vyj´)Âʨõ€N¼ÃR·7Yôvì7·›Ý«·„ó á¶ ƒqö—ú—†¯rJî:ó~s  B;‚Ÿ¬„DüP¶÷†ßÕ¯ÕÓÕ¹=¦uµÂšñÕS#|¿ïï*w`Ü!@­‰—0tã³ÙìPˆ.”.\.–×iɉ²«§côÕ;Ý\q†švMÞ¦ÿfSmc }ðC_Óï=š+—K¾ÃPY®›O>XýCuwÈjÇðD’kµªÐ¢LµÚ)å(åCl;ûÚð¸!ç:P9MÀa’ËøØÙ’,Ël¢ÓÍJyQø+§$ó½:µÆ2M(§5ke™An ~ÕÂZYŽ(;È&òéPbí! 
ïfHF h‹rà„tïÚ6’ãäiŸ&íßý´Z£rÛ,%IžþÉÓ1Ñ Þ(öh/þÖ[÷îÜÚ´Lƒw–¬ÃiÈ†È 0¹u_Iå çò eK}L¸§¬ëѡƅC™/¸m­-IZ—\ï#àl·òAáqÕîEФ2Ì›½Ò}Ù¡ã:X9m‰®OJ«ºýñUNmè×N–£ëÅ–½Çä‡àìVÍê§Ö·\Y¸eÙ¤H:Ý‹LÐÃ,'F€ˆ ´V”C "ÍSüÀMcà-œS—üŽiVŠé·Œ‘Ûàí˜h$ÍŸy÷uÃÅ_·ëßN™(»Zдsô•<¥T´7¯þpPœDm D_ÕÔg¿è!ñš9A³ºRŠrèšäM½-¼¢ck±éq틜•Á³o~%¿¢)%ïDåA^”=VÆ¢-f£Dî0#ÐHZQnØRårÁP’IÃ{-ÊÔoxyôvyš8zE«LÞŽ)óyÛ&«´ð5.FŸæØ‹üXœ“™ fÞ+þò:DÌüÕÏBT“jÕ„ÖH4ÔÜqãM½óµ ÉéM8u$-!JZr­eÙªÃãîZP½.`ZQÖ“¥J‰K]%Ù×cÊ| m'¹ñlè>Î05°œÖ`Á[Œ#À0úC x+YªÂh­Ò¤Ü"F 8(<îÔÙóW ŸÍ0Œ#À0Á"´E9ØðùŒ#P9<.ÈTÆ×>ÂÿF@ÐìÝn’^Ÿ;c³ÚÃm`í`EY;l¹dF 0(<.'F€Ð-ü0«[j¸aŒ€êð YuH¹@F@—¸‹ ©Ë†r£Ý#À³º§ˆÈ¨…@D(Ê„‚pbÜ!@ÓøYÌf·QÜÜåÄ}n"h†¼›,§!‡¼QUÈrÚ¨èâÆ2Qƒ€ŠrX,Uvö½¥ä`%eåЪy*\7¢¯~Öf·Ãþcgåh[QÃdwtæ??…‚âRøÓn‚Œ´$¹§Äÿ¯Ÿ{½k"toß >_¹¶î?.‡$P¬ÞvÖn?§Î_ÂiüÌоUs˜†ój{›iAœËku`9UÇÆP Ëic`‰ÛÈ0þ"´¢.KÕö'a㞣ðÐmc¡i“8|ê¼܃Ø{ô |±j++Êþ^ :ÏÿÑŠðð|jå†]Gàƒo6ÂäQ}å 4ÖŠJ8v&•dŸÐS/Ë©zX6–’XN SÜNF€ð e_*Ñ"ÏqTzšchjš"àQ:š{Þ_¶Jʬ0óÕOàŠŽYpפa@!¥?ÀýÇÏæCjR‚¬@‰pÕ”ŸB]ç”À´@þæŽq`1™à£›dkd‹´dÞ§ŒØ]®ƒÂÞ~³~ìË9#1›ŒÚ:¼õ*8w±þ»t=ž—ÖÏdŒö5Lu-ŸÈ?#0ºW9Úâî#¹>=}þÃV¼çc–@IDATèÖ.ß2ôqÕIÁ`EŠ ð¸,§âŠS­‘Q%§ª¡Æ1Œ€ž~å0õ®w—6pöb¼ùù¨à»Z‘Þ4:µÉ€ +}ïÏFÀU>ÚîpÀ«~‡ÑúªäÕZgÀxÞITš)•Zañêí°7'®ÙZ¦7…d &Ò¯[;xüžIЧk6|¼b3”Y+äü_¢µ:Æb†ù܆Šz+yßÔ›GËëE_¯ƒø8 º\ YèòÁ7äýüô 2èŠÈÃO`·;¼VXRE%åнš¯™uxÂãª"—"hŠ(š¡ì2Ëi(Ñ]Q#§ô0A´á¿r¸Œ€~…¢,iÑýNmZÈþ©‡ÐåâéŸÃgßoò‡LIJ„TT’éã¾ÎÙ-!³Y 9u.¢2}ËøA@Vä{&_ q±°_Ï‹d­°¡R|-ŒÜãcQÑNÀíîÐ<5 zwj#+ÛŽ“³+yQ¯ýCÀ†Êñ-c¢¯r|·iŸ×“ÏçÉÇ;⑎“&rá©¿ASEÓS6Õ÷³œÊjɵ–eû}=D œºÅ@‹‡Y·ñNF€;A»^+Õ‚y³WzêÍn³Z«ªh@7xÊÈ~úˆë™o‚¥kw·ö@“Ä8˜8´g½¢H¹¢Ô¦Ú=CÞn‘ — KäýôCVèØ‹ëÿîù°lÝN8©ÈõöC>Þ£C&ü°e?΢`ú`ŒbJŸ“•õw¯•ÿÓ)ä¹ ÝDÔLÖÊ*Éáj: fáÕeÙ%G©¼ÚTzx¹fxoä{ íÙÁc1ô6€RYy¥Ç<á:@¼Ùl¶ò×ß@ôL-¹ŽV9 ×Z¯^£‘"§Z¯âËç1Œ@hZQVX©Ü)ʲµ£ÒZq¹ °TuE™ "åö&´4’Õø(.0´>€d¦t•ÞYÍåíó—Š¡'ú/‹¤œ^ Ó²r÷ö™ðø/¯…btÍxâïÿYeeíߟ¬„C'ÎÁTÌ{¡Å™R\¬ÎßáL ZûÕJv›-ÏÕ( 6Uö ZñhséAhÝŽÃðÙÊ-ò”oîÊIOI”g¹Ø}äôêÜÚ]–°í#Þ*+* ª ËGØ£¨Xk®£QNCÀµæã«âñk3äTëñÕ/@93#À„ à]/ÈRUßZE¸P$kYi^qy…AøøªÑ[²èî?v.•ž£§Ñj{ Ý,šÊE“1¹;\ÆcäÑ­Ç43Ær´:_@yåæýòtcž”(²•–¡GÐÜžË7î®Õd:¿¾ÖÙ¯«\§ÅìœÃ¹CV4Iˆƒ¥h‰¦úiÞXj›Ú‰p$<«**Î`ÙJ¬ÕªJ.“ÊW›·`hF¬o?~Ú“ÂÂ_·Lš3y,~x¹vÇ!ùMqNþì4}\8“àäÛá’ŶÜ< ;uöüA!j«¦\G«œúÊu€+e^“ñ5Àv¹N‹9Õp|uáÄŒ# ‚¶(»é¢R ¯®'ܙվ㻜†!^^™»)Ëã®K8CÅ7ëv}¸En Ãzw†IW:g8èƒú­E«ãŒ|Œ³^´†G¦Œ——ÿ|µžþ÷§²ÿ2͆ЧK¶ÛòIÑš8¬—¬d-]» FèíZ5så¥6¾ÀYv:%û.“rL3^?ætœ§wÑ×ëæ¥D3_\ñàͪ» ÑÛp>÷$ùx(ñ–ë òÇU•Ÿœžþ¨š¼Ù6ùô~ݲ¡+ÎhqðøYÅMÝ} Èánœcy‹Ìy§6-a@÷vÏÑú€àäë’e×.¼Eý! 
ëª[+®£UN}åZpîÇÚÅž£ÙøêG{wÆfðF€ˆ‚öž6kî*BçSCkLd¥ŽÅ…üÒpiKƯžœózï®m“¾c|Ðuby®TŽóãz Q†Öäø¸Ù—Xœ`Å™/è50êP &›Í.ŸkÂéßDÚyè$¼‡ÓÉ=yïuÖ´ TTVÁkŸ®B‹u<üòú"ΰáôõÔ6WÆ6^ýßwÒîç.½ñÜÓTá%\hÚš’ƒnœÁ¦Zü=ðÔ3k{un“¦6oÁ6ÒŸó×A¬OœûS®¿y‰·]O½ùç9Sñ\ô‚ó¸ÔãÏLù[U½üÊ ×Ñ&§¾r]¨†wÔâ ³k:¾6ÜõrèIN_§Íœ÷õ?ê¯\#Àè P½Ö‘ŃRÚl¸TÑúä‘CË÷É5>éœ9÷©’¼)¢ 8{}p§L4SE]Êõ¶éõ¡RI8ÝÜeü@Ü+Èýãø™‹ò<Ìâ8­©]ÞÚ¦ÌëÏ6áG8^<{æ<ð%œæþå)¯(KæêÑ‚7O•k±ßyhQ²ïe ÞHð,—\à¶ÚüyjEЬE3d\{“…H“S¹®ÅrªéøêébÒb¿žä´ÁñÕ ™ÑåP‹7²Z@Ëe2Œ@h¡(Ss„’,‚ÊuK/wØl-ÍA¾»5‘»Æ„¡½àë5;á•WàìðcÂò´rZ÷‰p#üpº‹ü¿øìC¬ðʲšÕ»ø£z¨¾ÆÎ›šàø[–à®’<Ÿ^7È ®µà¯^)‚¦‡(šÉu¸ä4D\»8«¾Ž"f|­wá†p‡à.ãk{ÅU1Œ@°8¿B ¢”£ÆöÀÓOo]ýà E1dÆ¥²i¾5Z⪪*ééûrRF«ÁyøqUÿîíj›zñ`cI4•Ü^á*ü`lN ×6Dßþóåé δq`û湇vo߃x•âbÅ…”.ºyª•\üY­¥Æ„¤¤¼¸¦éã;ojão9‚·­«W¾q*çÐQ<Ÿ\e qqËÊÔ½x P¦Þ¦µÆ)b¹‡œúËu€Üº8Ãó#n| “ OÜù2¾†XFƒîÀ0#´¢L r%™ZC¹ÌIAŽ¡åÜ©…)ÍšIVClw õ,õìÔÆàεórR @–Ä·ì?f8}ìÈ[ßþÑ—x˜¢jE‹LôôJV­T‹?t¸Ø²M¶¡Äfìϼù±’·£ûv}±aùÒñl¡${ä/Ä7aæÚwJ=æ ”kz?P‹3ÌÊã«w¼¼UrçëøbõÚ~>È0Ú"´¢ì¥y4˜“kÕAʲ¼;°/7%½™•å®»Ÿ”Z5O1¤ãGqœÜ#@þŽ >Yé Kò©£ß]öÁ¢÷0'Y#Iá¢H*ô_ _–ZüÞ½3'#+K*µ›z3oˆxIÉÛѽ»¾\ùÅÇßà)BIöÊ߀‘ã†ásfþÖ5+—4PZ‡™ë †ë ª­Å–Ããk`*¹óg|eE9°ùF ‘" ¥¢LˆÁ\(ÌTŸ‰”e»­êL“´]6î9wüìEŠñaHÃÈk45[´'šƒuÛ“4ôåªm†Â¢’ÂÖü}õ’/–"6¬BiTÛíB -þŽìÙ•c2™rã›6ëµqOó¦D ·ëóV\´yÕwïnüî›uxX(É òG ²ÚJ2EÐ8jlû-«8V§Ùâ/s-ða­×>Tå-K-Î0#¯ÞЪ>VŸ;ÿÇ×0<ÌúÐ3ÎÂ0Z @­V‰”cÒziª82S4š..µz»I||bÒ°«¯Ó®[Q&“9g¨’âc¥”¦‰†8‹E˶aô—(\4E£ 4O²Ýn+:…³$¬_¶dIiiñElñe\h:1²HRøjšN«Á<ò—˜˜Ôlø¤É“Ûtê2‘yC'q7¼?°oõ†o—®*//%™¸"ÎBÉVW“p:«•ô§³[³×µÅ\» ð¾¡#®=r†= ±–Ç×:Tºã.ŒãkÖñ_F€Ð+A+£d©¢Î-˜7[¾×é(Y8äMpˆ à)Õkšg™öÅábéÒ«o»¶]ºuCÝ9-6!>Ùl2“‚Êd2m«+´£¢jDÅ]Ƨƒ;‡äæ iÂHsÖÊrkayiI>¥Øs`çÖCX!ù!+­‘¤p ßd²&kávÅÊ©AþºõÐ%»sמñ‰MÒcâãš"oÄgT%ä­¢¢¬¼uâK'8ph÷ŽãqCZ†“?¬Þ™<Ì£,Óºqpm0Ä¡X¶¢ã[¨ ”K1NH8[A.î&Ü5K:ãºAνŒ¯šqâkÁ:_}m:çc0"´ŸƒC2̪n¿;E™fb ‹'Y>…R.ö‘o-íÇ%•‹}¸Äm²”ÐBIœãü§á¯Ñh4 píß° ¹þ¢Ëù«“SÓGS•UÖŠÊÍ?®xY£êñ~/'Â…R|éfOØ”ãBÖHR”É݂֤°Ó1­,ÉX´+ ®¨>Á…Ø'ó‡ÊüN\àqz ¢wXøÃzÕôÌŸ?˜^uÍuZFËŒnýΦŽÙm¶âªÊòCq I½ð¯¡¼¤øÂöu?¾MÇ4JzãºAÎ]Œ¯ñáK±zãÌ—6sF€ÐA+Ê8éºPjÝu‹)¥Å“þ+wRp‘­¸¦öeK(f¸KûDAD,11Ũ0“ÕÛpùü¹åÍ3[ Ã͘ظ¸l´xw@E~«F-¸V¤ k$)Åd‘ áEŠ §¼â&€›š¤Fß&½÷½PUù SxÜFÁµµ¼ÔŠò(3ƒo}’rsÿ­Kï~¤(îïß*»Ý'gN?å;u~çT•k¿k¯}B£à¬v“ÃòOOœ…®”`G xE¹áº•ƒ¹ØVfRe‹®É"©´J EY¬®)È1qqyf³…eˆKN.²UV}Õ$%õVúß¾{ÏŸŸ=uüÇâ‚RVÕJ„%å@NJ°Ò¢L¯ïÅBû »P(ÉXœgôGlë’?gsCú« Ž*xÎà|¾ÒÞÔðKÕê’ë²âbˆ‹O¬À÷är[t1Œcâ‡âÆÞwáLî£6›ÈÕLšp­BOT”Øfùt«gaz˜Uárá"FÀ_B¡(S›ÄNkº‘¢G¯îI$e¹®’LVê)ÈX—œðãÂs–ØØNô§U«¶püÀîW»vƼ¦3F]wÓ/¾~ï?ÏÈ™Õý¸l„²LÑ¢T)¸¨Û Ï¥5 þ<7_ó#êòÞиºç:.11ßü´&V[wéb;wæäsèŽñ1J*ÏI]ÆÝtû„o?~ÿmXW—ku©{ÎÔéfÀ¥¨ÎYfOdÀ•¢L­«;˜“Õƒ@R–©ÂåB(É¡W”O›-4w?NÏÑ*3nÍ·_íWQù§¤ÔÔ…´/.>áÖkîºûÓoÞ_´šþ«˜›ºƒ9á#\1H9‹ŠÕúU”îùó«7êfn ümò£ËºæåðœKQîÐ9fÅÇïýԵπ—⟢>âÇÀ¿¹òê¾X÷í—Gý賯Yõʵ®9ó\ò©ÏYxf5‚‰‹ew„RQõ …aR…bLk±r%™‡®¹&3·’Miä‚QôÕ¢×?øåfN2˜Œ7ÑþÌÖm_½îî_ùzÑ›Ç鿊©î`NÿV´­—$Ú¤;þ ®ù[8oöà£K®cbcr FÓ@êOó™4ådÑŽ•Ëžuó]×àÛŸþø?¶S¯ÞoYMW­úì3š·Zí¤g®uÉ™ÚPžž9  ;| #À„ 5e,UÊ~ÑÀEŠ2-¤t åX¬qWhSlLÌQ0‘aÍÛfs[\ÉþÈSåCfKòpÔ[[à¾ôVmÚ~|ÏofŒx÷ŸóÕ¾ &”Ä îü§Ï_Ýñ§˜þÀ¥+®Í11‡ªgnD9m’)ÏÉɱ1æû ãüÈ>îÔ©kßEpoò¤Uo¿Mo®ÔNzçZWœ© ~€å鳻ŧ1Œ€–7¨D–ª­UÊzi–¡<‡|m2ÇìÅ-¥Œ¨/»a›ä6¼ý çÌ1†Ÿ ¦r:†ºÅ¦&~4gÎÈü¬f;b@Wb¤çm]ð§2ðÚXùóçÚ ;×øÖçpœš:ÞßxnÎ.”Û_ &I>n4ŒéœÕéߨ8/ðéíœÆÄuØ9ÓoÜx:Ö˜8óG&9/#Àhˆ@Њ²†m yÑ1‰Æý¢R ŒGв+-˜3kÞ|ïA-Ù©ÄJÒØ3¶Üo™óJ²+o0j -œÜ"`”G:*¶aÁ3³>ÇWROÖì“î6ûÙ÷¦½öšÓ§ªæo1Œ#À0 "Àв¢<õTž ùÎ]RÒ#sþ"Y/²¼6ï©O”7aI‚ÑÖª¢}öYrÉp—È„™|:¢}!&¸ÉÉfz:-"hŠ(š–¡—ó,æ—¢Œ´ê¶ ±{ßú¼îÚ/IwJ'óÿñ…(¨»$®I–Óšª&îðâ}ü0Ë×#5Ð`ÈIÀ´YóÖà«Ú´Ëh2N|í™™+‡åÍé3çýßὈ®ÄNü †c&3L!«3f }B!4Þùè»ÅÇ5¹ƒaw£!;šÔ-/Zþ#ZÅàÎâÆa›d[òÎ_æÀ¾ÓëP§•>Z€C?§Íœ·’ªE%rlªWµJ”OôÙóÊðª‘#ÄYš6ýÇœßRäJW¢<Óg=û7 ¤ßºv`«Ålžò¯93HÑf9uS{ƒå´6ü`¢ ea¥Z0o¶|#nìpN5o!HÒTê‡ÁhxtáÜY¯¸ëÓÔ™óî6¤·Pé“?ˆÄmŒq =³òÃ^8~|•ážß?uSlBÜÿÃRºà"%'Æ9R’ q )ÑQ™¬UUŽ‚¢2©¨ÔŠ ©ÚªìO¿õ—9Ÿ" ¬0kxUL›5wߌ¡ucOSgÍ݃ŠòÔ“Å0¤ú!µ^·ð¡v–¤¹5 ¥’Ýö‡7þ<çm:•å´±År*à5#À0Îù‹ƒÂÁ!fUŠ2º^ìG+”Ü%ü¨»'p^v֢鳞C7 ÇûÉÑC`›ñ›¡gÆÜ:âzKÌUÔ²ûf5O±ØútichÚ$Ü.¢=É …%e°óÐ)øqËþN§ó þ7mæÜµÖ²KSÞ}ùå³ï$ ÚÑâþ{DŸðwãE"+Ê›Ô3º}çµggÍÇùó þá¤XI²'¢¼¾vïgÞ‚!ëSPN³œÖƒ™å´$¼ƒ`¢à­› 
½ p‰d‡ëƒ>´{T”©»è³¼¬¼ÂÚ•äµ»h‰‰‹„7à>wO¾fM½Ñ4ªW@%9BÐQ§„áBøNf‹yX|“ôÍ÷>þTo¬®¥ ßt¨ÓÒð”Báq§Îž?(<µ7’Z%Ãöš–IQö˜ðmר •޽5r7åtË©GØäq‹åÔ3>|„`¢ˆQpÕ¢Ë,\Š2º9öh \ã¢çþüµw'W”—¼/%%ÆÃ“÷^g¸²Ogüž¨³£ü0áC8!^¦&‰ñ±q KþÈ#™KT_—rx\‡}~”_^»o0)eG_¯™ñzzý¹9û¶}ûýU•ÖÒÿ²œ6€VÃ,§uÁ¿ü0[ÞÃD*Q­¸#õÕy3O¢ûE¡ó˜ÔüáYsۻˇû;r§ˆ½|9'..!i˜Ål”½s"de¤z8…w»C€ðztʦŒ„¤æbÂ5z¯M Ë!rÝ]*®}’1¶Æ¢,zôÑGž\›\rº}ûcl|“®³‰åÔ…¤ï,§5XñÃl ¼ÅD:Ñ«Œx`¿0#ÿØõâp•Á8Rl+Öd+¦3ÍÍwï¦ß‚AM¹f¨‘•dJ~lnS®bB3ü•¿|læ-x*áË6y?0l +ùðºõãmà<]^8çñêÈ¥Æá7ñßï;ÒÕMC=Èé–S7`ù²‹å´%~˜õårá<Œ@D Àвq>†5b·äÜ)Ê„Ív‹Kb³ŒÌ?ÒAÃ{w§ñ:?Âý¼çàé„/_Ÿàèî•"hº+:lû J?e›}˜›†°œº%Ø],§Á"Èç3Œ@cB@ E$¢,UDžA’\Š2þ•çTVJV*¬ÉñCÇNêe4šÚãì&­}’÷圆ÕÛ° §uƒ¯×î„Òò  Ú>}á²¼­ÖáG8"]n|à×äN8³UY-€#¯œµ¢KøhŒØ®^G¥œÖ#XNë\ü—`?ZQŽDKUl‹äÍè§,k„øZ·Û#Ï=×\+Ý€ÉÚƒK\f»öãIµÆ)àYßÌ9ÍŠK­õ ¡›àª-£#ø4ãû¥t_*.…Å?n‡â2+N3-ÉÛ¹*+ÊÔ+'Ž)))å:üK8³¢LÀpª‡€d4ý vzP”£NN•cË©¸:xÍ0Œ@à­(^µ~ÏüÇo[!$—?g•Õ¡´*fhõ”åø˜ØØÉ ±’ZSÀmÚ“ÝÛgAR¢tLŸ<E%å0°{» Ê7h0Ý1áHÁYŒ&ù±ÎÑwrx\Ÿ®Ë ½:n÷Έ|´Æ *'²œVƒÁrª¸*x“`? ‹ '7àG}ÊZöOv8ý”?Çlâu.áF®±&‹9#¥i¢*VO¬¶ì;·M,·¨¤¬Þþj5ɽm3›AFj’¼_üœ»Xÿ]ºNχŒ´d¸óša ¯|°î¿qtl!g]¾qlÙ{ fܽ²ŒŠ„a¦§«ÝÑH‹ IøÜ~ûív ͽ/ zû€±£qu]Éé†]G`éÚ]P‚oczth?Ÿ4 Öã>’C!“Ôþ—ÿû t@™½aLú+oHN#¨–SBAåD³œF *^«‰P˜ðƒ¾µŠ®Ulfdé$EÝ/$U”d*ß±3PYU}»dÓ_ø~Ó^8Š®7â³KÛ–°qw޼_ü,úzÄÇYàÑ»®ÆàRáƒo6@‹ôdÀé¯`+*Ü"mÜuÚµj&+Év»¶î?ƒ{v/ß°ò Šá‘)àšá=±œ4yîùK°dÍèÛ5&ê ëwŹ֫·ÄÀ!Ý`Üà@ÊÀÎç\Ç‚Ú •'±Æ_™¯Ñ ÀtžL4Q4U(QEॲJ´Ä 9Ɖm\ëBN ‹ËàÅkaD¿Îðàmce¿þÕÛÉ óÉsùp4÷¼Üäsù…ø°zºµ£çC_å´¡1‚Êb9%ÔMô0«Å­º­äÒF@ ‚VBÈR%¬Uj4H/eXŒ)kÐ.%; £KnŸ‡æÌï„m#åMX«de¹²ÜZPPXJª]Љ,L}:gCl R?ƒz´‡1ûº} S§…˜ŽÑGuäÏ<¢oWHJˆ…¡½;Á©s— •ÞÁ½:¶ƒ'(Z“ŠàLÞe—b¼çh.Øöj_`Ô&Œ¸TX‚ç•bžŽÐ¯›SI?pü¬lqþŵÃa*Õã†ÈÑ‚å2ÅÏ€îíad¿.²:;3á9j¤‚â2‡½Ê–‡eÑõ)0W£èè-#Â"h "MFÓ÷bŸY¯›öÚkô+®—².9Ý}$Ò’¡·¶¸N€žZÃö' >ØÒTk´MiÛþã’”ˆÄNEÙw9õÖ‚³j´r壛ôÙ¼YÉ튯}…Þ¥]K¹Üõ;CÞå"X‹~•uÓÎÃ'e÷¨ä;“]«ý+ëæóõ?áG8ÚíU'6._FÙk_‹ˆˆ|×Oð©8gx¸a̘9$DâÚ¡uØä”\/¢;Ô÷›ö¡l“g›9rÊé—Œ­B—ì–°xÍvùŒˆêéœz#&,§ õÖü0«–\# wj´2½·4 í{þO*Æ÷¶.÷ ‹%öVl†|ãŵ—*\lwlÿüÌÅBãú]õ?xÃã &úr}_ΗB+N3 ›¼ùÜ[‹á½¥ë`hµÅ‰vÆ¡Â;ý–1è\3ÿù)üöùEðÑòŸÐÃy6)Ý4+F›éÐ2Ýé"A¯s¢ gôpfÂßÝGNÁŸß^ÿy*Ú¹pãUåcd‘¦0Þ]²æ¾þ%tÆ §Éy_ÑQžaã߯„ö¨¸;—4áG8æ=º+ |¸…G®3*~8<®_4/œ7sº_ÈòxýgtnY}ÝÐõV9íݹ<‹ÅW8ù£/üþôÊG@§"Ñ÷ô ¬|Ëãœz#\u°œ (T[óìjPrAŒ€î¨Ñ˜l*Î]ºŠNÅÀ#chiiÚìgï’Ž÷¨_x3Þ†_:ÃÍ\Hûl† 9ÿ6ûù#œÝ4-­Ã“÷M6 ËîW%•[+qv œ`ÃC*¯¨”Ð4n&kE•¬|×=Ÿ¬`4‹½æu—H1§WŤ¸“(Êߟÿ³ÄQR\¼ïݿοË"géB\Èù›Hªp+“2…e>O°Q€ H„oÚ¬g_•$Ço¨oø@÷>öó>ÜÔ•œR!š¥†\œMžä´¡1‚å4PÄÝŸ§…Œº¯‰÷2Œ@¸`‹r Ôr¿©ÿ¯žœ}žbÃ…”· \È5úâÓ÷_«¨°ÿýƒåµC;{S’±nyžã`”d*Ó¢K3pxR’é<²2{:—Žû’/­²ª²àÇÅŸÌÁsWÙZkaUÆMN"‰4•X Æ7Å|¢ºõ¶‡§¯Ou%§D(%™úçIÖ#XNÅÕÁkF€`üC@ E™"ØÑ‘©Úýâ Ñ9ƒÁünÓë\2ã–ãB_¹•]8sæÂª¯>ûç墲Œ®Cß^áÇ9¹A€ð!œ/„­pÊe³O–ßK®„/áLŠ2'FÀ+4û>¶m”3I“’’üKÜf9õŠšoYN}És1Œ@d"ø;Àj<¶®þa-‘ ³WÇŽ;!9`ý3 Ý{ôðö®Mëi ÂO,æËyÊsÚ•Õ®S‡ý'óR·ï?æ0 96%)âb‚sM º#%Ñpä‡ùîâ5Žu;J ‹/ÿô½§söí¥/ÉÝ¢¡,G¢<`ÔØ{±ÿ€rõ6­9ù†À 1ãª0šæM”Û`4´·æŸ5//|†„ŒÒšå”ò!±œz‰eÔ36|„ˆ4œsŒEZ¯TîÏksfmÁP¹?à<­W¡Ô“Ôäa¬b.¤Ì‘L aiÌ;{Þõ¥¿3a`Å€“ðã´Œ÷—m”’b% ug±Ð;*“µªJ¢à,4ï4ºp¶ ‡vnÿtÊ¥«˸Ð|sôBî,äzn©÷?ešZ}tZÊ}½ÙÓPaî8ôgS®Ý»wö·XË©p²œúgeˆG j•6™úô³“0¤ÝRçy†âsy';,^¸nÀôÁP2.4‰0-ô‘_\âq‰ÉîÒ­U§½z6INi›Ÿl6™cqT&›ÝfÅi…%Å…çîÛ³ýø½ÇRŒÉŠ\P½²Lð‘ëù˜â³ §`Ñ3Ì›½2زô|>}´ˆ®SÑ/wçºÏ?¼wï^ze9õ‘8–Ó†BÃÉk”‹ÃX7Œç`;¬(ûÁ Þ„wáM¸—|ŠO¾þÜì—q›_º “rL 3-I¸Ð¾8\h* zåKþàÂ'<špŠ.Y‡i~£d5&…˜Ü,H9ÖdÚGó‘’Lù9©€ÞØeoìcU(N·E<8ç… {UE>_%R#1Ҕמõ9n²œzgåÔ;>|”`¢ ]/¢ÅREׇÁ`|§ zG¾V †Çî|ì±7?xé%Rð(ц”@r %nÔ¤(Ó špÊr4)ÉØm96BI&˜aˆ>„¤…¬Ê¤ “…žŽñG|‚ª‰"hFAZ0çñ ÓgÍ}#ÀÏ îâ…7gÀ€iŸmݺ®+J,§NÜý²œºC…÷1Œ@T#´¢ì ä«KI¶X97#óWjÝì8•÷ NgÑï·Í›Ä§þ{:º “"HJ ¹ ÂG )ÉBQ–}˜ñ¿P”ÅwEl¢/%q&|„¢L˜N´ÒLÿé!ƒ•dSàÍ/H%ö‡ð›‚¦¸të7©Í=[·ÂÛX"Ë©{XYNÝãÂ{F€-ÁÁ%–*iáôéUÓŸž÷ˆÃ_É IpÿÔÏ,z}þÓkð¿RÊr­ý0Yõ¢AA–áQülÄÃ)Ä´N´Ð6)Ф$S^qãÆÍèK×n7I¯Ï±9úz|ÿ=cÆåé3罈Ñ\* îùiO=»tás3)v´¸•µ,§NØ6,§N<ø—`å(ñµgf-ž:kî'x»½ï¹èa\0gÎG}æÌ¹”=ºÁ(Ý/„Ë…Òí"Zeq&|hбøOØQž¨Orx\"õ`@RJìËE÷¡E¹^Ui`t¼ŠEÝ‚‹ò:$™¥·,§&ÂF‰Ë© Ký~˜­ 
ïa"ºApòC¼ñ·P&M¯vÏØÏÀ"þº±e™¬Çt#&%™”cZSŠVE™úNøÐX(Çâ¦LkNƒÄr)°pýâã—N{úÙðJû/9~„{3~Ðx~Ìø1ÉrêW!‡,§îñqíå‡Y¼ÁD<|C€â…3gž>{Þ“„äßòé’4- ËÌ™E¯Ê]ŠðI“b»ôè;ÞbŽi0³ ’¡¹!Š• I2Ø øAd.Þ‘,¶®øøå—Éo”’¸I;ÿñ¯ÚlR»@½—·ð™™?L›õìBœS™¾#  ìŸÓæüù§…sž<…YN=Èrêåî(Ç•0ð6# °¢ Ë ž™ùÚôÙÏÞ7ááhY޵UÂW¿||Ö•ï¼0/wÌm·¥vèÐãqKŒùA‡,f“#%9QJi’`Ä»³ë`Õö4TT¤‚’2GAQ©¡ÊfÿCZ“´²žú¿–üfšG™¬ñœ4@`á¼ÙOhP¬î‹LI2?^P\9 -ÊÙxù5—ª*—Üõë_~ÿßÿ.c9uOË©{\x/#ÀD'j(ÊQg©¢KÅ€¦á_Ï™·­Ê¾Q’ìÍ%‡ÔÂÆ%wÿæ÷ÿŸš†–fCÓA=Ú†öî]Ú¶4šŒQ1;WCRD &;>=:q6î:’°iï±?¦%¦?pÿ㳦¼õÂ<|M.+Ël]nI>îÏÿéOÅSgλÓÒJ‡äˆE9íÓ$ýý;¦ÿ~AÓæio³œº…‘åÔ-,¼“`¢¨µnªD¶áW3žA‹²ã;‡ÍFs&ƒ%6VÊj–ê¸ÿÆQ¦¬ŒT•ª‰ÜbN_¸ o}±Ú~æbØm¶ßüóœÿ`o£úÃ> l³ŠG+ðZs Ús·Ã ½ã°“û-FŠu´j–*±œú†-ËimœXFkãÁÿHF€Íœ³KÆ7ç?½µ¤àÒƒx–ðæ =:dž¸÷:V’}Ä•&/ÄÍh2›Ü÷ØSã W\¢÷!N2ÄïÏú!góŽ€,§oüeÎGåå/¢,ËiÇÖ,§Þq«u”å´ü‡`¢1CuYµ®väºg{A§n=§¶l–jþÝ]WClŒ-ªµS÷™MFèÛ%Û°ëÐ)©¼Ê~}FǶoÙ±ƒ¦íŠJŒ­kV.¡EMâ(‚æÀQcÛoYýÃ15Ëme¹ä´Âf=Ò¹{¯©-›§š½s¢åÔ?öXNkð0rÜ0|–ÏW[Nkjà-F€Ð lQŒ ÂnÀ1¸$ ~ÕÀ`Œ}à¦Ñ¬$†§Œ½GÍ8%3#ûÉj|ùú Ϻ§QMEͺ‡#õ-9xåØßãÌç1Ü8Е䧇 –St‹zvÖtZ„‘OcF„@ЊYªhiD}¶©ò«\,D¶&·l™Ú$9éžÁW´7°OrpÐ~„£Åbþuß1c°4º>£×#88kŸM4£(ŠfõuC×Ëií+!è,§ACÈ0Œ@#B hE9 -U¤¸ kr|ŸÑ£Æâ¼£q4»E(Ò¾œÓ°z[xÜWCQ7áè ¾{ï«qfE9VäÕV9ý~Ó>8rŠ"f‡>±œ†s®‘`" eÙJ}–*¡(Ç¥¤¥£y’q 8Õ®’œÓyðÐüw ¸”Ütk'º ®Úr öÎþy«ÃSqjÕí©|ÚO8ž±1±£ñ/áuŠ2…Ç:{þ ƒSÀ(eMäô“ï6ÁßÞûÖmWü´âˆÁ&–Ó`äóF€àåàêoŒg‹×¹äŸo4›3S’@Íy’7íÉîí³ )QžqNŒBQG ' O0[áùôÚ<ê®Q9<®Ã>?üøšÊ)0Í{àž\j±ÁrªªÁ—ɳÁcÈ%0RD8ùŽY©èL–N .±&£ƒîÅ«fõt ßÁ–}Çà¶ ƒ±x€’² xû«Õp$÷´Íl©Iò~ñ³a×Xºvæ³âÔt­à瓆ÁzÜG7Ø÷_Qœ9_þï7СuÜ0¦?Ô­c÷á\X¾qä^¸mZ¤Á¤+{£¢Þʧº¿Y¿ KÊ!»E:L¿õ*HŒM xMxžC:@8ÞÔ TM¢$qxÜ`‰Ö\N?¥åп[[¹­d=~ÙzYf®ìÓâ,À¨®~°œº ˆˆ ùaä9¹ÇGD‡¸Œ#àRB8ù‡Ý„I£‡ KUUeIQiyÍÑ¿²êåÞwì TVUÑtiò±ï7í…£èŠq#*¸ä–°qwŽëœÂâ2xgñZѯ3NÚÔÕ'3Ý3+û³›¥»Å¡9 ’µ.ꨞë?ût¡¼ u"~vtéåò:}Cº*Yùï”/ ÈÔGo¸ÓI¯ŠºšÃÙÕEŠë4ÃÂ? €@‡hq‹ržµT雬-›áüùù?½ñÅ¡gÔ¿ôÆÒí×Û’éÏï­ÐÌàAÛ¿€=Ïš>‡ö4Ÿ®ß¼3ÛÏ6ìZï"öí--ʉ+¿±s¸rúÉŸ3ÿ7ïU£ý?üø33âÒí­_{–¡éí¸÷ß—Jßè̃O¼hz/(2ÿ9ì´FËîÞµ‹yç£5aq9[Š£ßnÛ‹O=ñ‚d:ç,2Êð³£Ëœ^§¯,þkæ¯8шö 2¦ßá½2ËOä:(}ôÓéÀl”°áÄuJ°D:ž@.Z”;ÞY7¿Æá—o Yèé­µµuk–/—î¼ËœönÉôÊ[5CN:v·.'Û?Ó2¼BZ–õ†¡wþ¶+8ýÒñGgúC꘭uõ©Ì—s8vë!=‹ÌñûšÇŸ{=Ó #|ÊžeÔ'Óf¡Ü<¨7jÿdíÊQ¸£ßeceŸ(? Öoª5O½¼ØhðþÒ›•–àTKN?ÓÇZ×}òɯ×}ü±Ž§Î¡y‹òîP;»ÄèÌÔ\ð3“óët˶ºL—§Ó³F»èDïÌ5sõZnsåz̾¸N›û6r pð¶7‰üzt”èŸqÃùô.¿­ëÇ«V|tò©C‡-]ù÷ˆ<0ÃF#þûCG·Ð›ê¾yþPsh¯î;=—?á.\¼ÌÌ~ñMóÜœ7ä¤ã¤?ä&óÏCN4GÖËD¤¬?,ø‹ùýsÎ|A÷(ê’±B3ä®ûçäá$çŸ18ÓW¹¡2ê%¸}àw ÌcÏ,2ó¾-­c‡˜o bäéx¦±²Hp wýÏ‘zi€ çn† =Ù4çܵ®èOyø©`SíÖÚ§}xÌæ >“ÍÚ,—”9s{¹,óbZôܼßëœË“Õ§gžvΰã^}vþ²\æÛNójµëôe¹™öok>1ß¾ðŸvëvT$ýõç¾ü¶ÌoI׋Ó]®ÃÃéž¹—ë´~JZP­!gŸ÷éöi®¯ÓT‰C@ •ô …éÀtX¸®2÷”Y; ÷‘ùðüÊWÏ8ý¼¯>ùsGÙ’áçÚðf<Ù—“ië¶úÌÍsûÊLNÒµKLÕæýöÑ0Ïz™À{Ï2+[ûQkÀ¬A8ÝžÇïïµÉ5>ãÞþàC·ø•Ç>?ç‰9ŗ2o”9 –÷— û()OÌÓÝ5Uña$ëL»Úü:ÕkAú×ï¼·!L®Ó†T؆´_æEUYçSzsÕ§}þ©‹ž}úí¬Íy5ssœ`vËrÜ,´¡k·î[L×_üË’îó´2pδïrc“æž<¬£¹“ŽØ¡}¢š+ÛJt¬e77HÖî*Sž,[½Ö,{gñ½Ïüþÿž”:èØZµ2k÷ íÏ¡JgjÀs†iGö#¤¬¦Ùt¤CÛü:ÕkaQá:íH!êŠlöZä0ä«ÃÊ F¿öìü;[”QÇ9X¿€÷üÖ›"£+–¾÷q²®îÞ}úñ¹?¿]ûÙFÛ¥0&])ºIkmÃAhÇ9íÜÕT‡€["}­ÿìëæáÙ/»›·l•›"'=÷ÇÇŸ’R4HÖ¡;¶È\/³v» P„–LCþù¼³ñŒ×ž›wGKòé@Çr¶ðÍâ:m! 
‡#€@§hqôV¯,•®°Ó Œ=vjU|y§Piü$ÔL›wµr7™õ–wJçCdîÙ£wïÞg]tÉ׎þÜ ³ddªi‘ ô±Ì2–±†Ë-6—2:ä$š“e–q’e8éE]¿zùòyÏÏ~|Öúµkõ.Eí—¬sØ¢œwò¬Y³"sßXò£ž½ºü‹ßtíÚí'zR¿Ç~¾°¨¨WAAawiYÓÉ!ù5É#}Óõõu›ë¶nùì£W.Y¼ðÅ¿lÛ²Eƒ6틬Ë0@»\„£^ä ÔÓo¾ÿÿô‡gíÆÔ«rÒ‹ruâQëÍI¹Ôø-ÉÚ—<ž«|Ûq>\§Í|s¸N‡“Æ¡Ar?ÇÙÓ+Ëh<%{@ 3´8Pî³xµ«Z7J@î (M8 à´K€¶xj°•8  u{×­[k·¼¾à™—_7Fƒ 5]˜VVófÒ % \ÔMG²¨“YoÔÓ¡5Pn¨Ë…zæÕä\p•|BO«,ËY¬€šŸÜÐ÷ÊzÚ\§ªÐô‰ë´IVö~¹qó(Iú@“’“:´@‹eß· =U™v¦F~i_( ™Ó¡Eš^ù0(Ö€O'ý’ ƒg ‹dÖáã´‹†:(ïj‰×Új¬?2´{˺Ô×ê©­õy$—'Δּ3ä¦JýÑ™óI~¡sÖ™óŒÛw†\§M²e½¹N÷°+W^$Ãnž#7N—— ÐIZ(«KßèWg®N>?Vþd|½¼Ì—@9l!ÕÖÑp=ürÑ@YƒäB™Ã@Y»gd·(çC˲ºè~ë‰ÐHb –ÕJg]×/æ0H•M*™8±—Ùœú…œéŠ= ÿ§5θº*þ«ÖÈ·ç^›\§û~£ÂkëtßFæ†Ûoï¶qöÛäãKûEμ¿‘¤ìBN$“@Ù÷ÿ%Uê'¾mSîðNdÓ”S ¿„5¸Óõ°[} ëX®:‡}˜³eÙœ7Sø¬­{¡‘.‡³ê>M£éójr›R3¥[ÄÑ‘¨=;—7ñåâ¾O–ëtß6Ù{¸N³5²ÖKª«cW®ýl:ÉZ÷uýÎËÚÍ*tb|hÕl«·O»Vè¬A±Îú#Dç=ƒä|4¿€Ã€%lYƒã0@Ö 9嶺üIýë’õ«¬gG×TÆïm+aã²Ë.Sû|š¸N÷ýnsîæ8ž¨‘Ÿï?’ÿy‡öü|ÉØŒP ƒ¶Ö|Õ3œÃ/d]†ÛòÙ{Ï/a ŠÃÖãp_k¾7í:ï+Ç?lê¸qú4Â6›¤Oô“òà"ö¦·–ÿ±Í >øe_\§»¿ᵨËðÍûëTî¿yH>4/U'*¦ìÎÅ+èì­¸Ý4iRõ›ê—gËÍI¿ºEþ¯fìXÙ Ÿ¦ð YϹլ;¨~ùê~o•Gÿ^áßÖWO÷>ÿ¦öiËM¸Ãg§:ãN”§ÊÍ•¡ ò÷ô}Uñ•»nm\>×éîàyîÎÁ+ÈgV ÞJË×Ëÿq‹õ‹X‘­±ŸJ?ÌÎÙ½{ƾwÛ˜1ú¶Ì¤D*Yÿ3Ùß=Ü.%ýzÒãÓ‘?òî-õ$¿ä!5v팪xŸð|æÒ÷çGW§”Êu:V~¾¥u‘ëô­X7ïÜìn®Ï]ïÿ?2¦3~äÐïž~r$Dœèû#ô^&ÈsœÜÌ·/C¹Ë^k}gqÅ„¡ÖçÊ“ÁÊk™È+y$ãjù"~Sº±/¶Q3»½œüŽ›’¦J}¦^ŸtRÊÔsÆžPÐ¥‹ÕÇ„@> ÈÓAík2/&HΧ·sE@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ž€mË¢GUL’Ò_“2qÖõ—Â{´eùí¡,gÌ&ëìj©Ëòˆ™=­²lQ{¨u@@Ø] ÕeߟÙeUrÕUƺkŒ3GYkãìÇ(¯–Â7ï^ÎÿJåî(÷#3ž±æCñ¸g@lÀ½¾?b[çà @@Ž!ЪrI¼òBa˜!á@kí“2?äy˜VV¶®cð´^-GM˜Ð;Ø|Ý9÷]™/’+¤´âšDÅœÖ+•œ@@š*ÐjrIEâZ˜;¬uoI êÕÕUñg›Z©|KWZž8'0fŠœ÷`ë™j*ãwç›ç‹ €´7HkTh{ìî²Æ>bb}.©©¼~ik”ÓYò\ôܼå§÷í_Ø`Ë Òº|ý×· ZIDATiçž·aѳó^ê,çÇy € €Q ç-ÊÚÝÂ9û„dühu¢üÛÒÝBºå25E@‚d[¯ú•€ ¶¥FSÔHƒ €´Ž€—ËlõÆ=Éo†v·0±#.'H>0ÝŒ—¸ÉQ‹Õq‡çeBj@@œä4PÖÑ-27îIŸä¿tKNj˜g™¨›¼)£Õ13ZHž?§‹ €´œÊ:œ´Š>É{-{{ÕO3Cêµ,+ŽF@h¦@Îe}˜Èöq’íCͬ ‡e H üzf\³¶³Š € Ð69 ”õ‰{ú0'¹mªÞ¹KQGõÜñ$ÃÎ}²œ € Ðr(˹£OÜãa"¹y—3Žâ™qÍM–ä‚ € pÑHÛhRy$uI°ºÑDì< }Ì· ®L € €m,³@YÆMî!ußÜÆõïÔʼn©zª+ € €@ ä²ëEWâ@@h=åÖ³%g@@,@ Üß<ªŽ € ÐzÊ­gKÎ € €X€@¹¿yT@@ õr6êEëU‘œ›"p…[ß UwI`Ì92žõÉÆšÅ2¿RØÕûÕ”qãÖ6%Ò € €ì Ey—E‡]»rüøÃ’ÉúçgG:ã½j¬W)Aò*çÌšúîuöÄò¤â%ñÄSyrªœ& €J€åõví]YߟU°jË’ßJ`ünÿèQßòýÛv¤zlïÔlio£â“NH™úSÛ[½¨ €C ÜÁ?k‚÷Ï–Ssöø¬ ¹Á³¯,N;{•ì$3Yæ<3mFeü¾0qq<ñ3ãÙG¬s'˜ÀŒ6Öm¨ITœzÓ¤I=ÖmJÞeŒ»XºuĬu³cE‘k§Ž÷ixìžË°,)ç ò„Áuò§‹xu¢b¦¦»Â¯úÇTÒÝ!«§Ê¾ZcììnÑn7Üí_·^÷—ÆÇ\p¬¾ ucÕ§¾äÅ /Òõç›ÀýØs´äý¸ñ¢w×TŽ}3<Î7Lêÿ|`]¹Ôu µæMg#×Ψ,[¨itjjùžõžv&=NÊ:F›ñ¼û§ßZ>;“‰ü³?—ížæ6pg9c‡Ë!5î1/<Ý/û[éÍUç¥ÓÉŸK).O¼¯ùFcæ{ÓüøKºÎ„ €W€®׿ť»Àü“d²ð¾ªøÊÆ2+©H\%A²tɰ÷w‰ÅN4ž©6Ε—Ä«nÜuœëgÒÎ7Î\ìY÷C¯‹½D÷I,­Ó®Ÿz—ÄböBçì‘É-Á/w·ûZi¼r´t™`­3æ¤ÌqÆ<£©FúŽ• Y`ûnÔÆÎ’`±Ÿß’Úü¤“Œ5â=eå²îÙH±³çH‹y¯ µm¡É?’ºÝ(û¾"ìa.H=¨Çè¤ÇI,:Brù~Ę« ¼¨¦Yn‚ôü‘þí}4Í”ïLðC©Ç¸Hûrè²t<¦}Á5öï"f{@*VçE½µQûM©åÐt2]£ÇÞc,®“€~C·X·¡:iÎzU÷1!€ €ÀÁ Eùà¿-¬Apš5öƒýe"u\‚ã3*ËïÙ‘öÞÒxU7 ã³fͺó²Ë.Kg¶[w|aŸ^}§\}u¦o³´tž)ì¹Ò :H[A5Miy"8÷§’xå iqþk測$8§eÕT–WíØ¼<Ü$ÓÚ¼rF¢\[¶uzçªxå’:g>Y1ábyý„n”–æHïß¿m̘MúzdEÕŒtàfÚˆ½|ú­ñù™mñÊÉüÏ=yraX_é—]Ù1Ò*û†¦)©®þYù÷Ï‚dÝËˉM-_Z²c‘nÑïL++[§ùÜpûí•6l»&’_’—5ÝÅ.¨©Œ—i:WTMµ&¨Öu­³´ž×òë$lM×íL € Ð>hQnïC³k!Ò—® ®Wc”øw.iŽ4Æ››ÎZoŽ–=ç¿ý×cvm·„A§n“VÚS¤¡wkJÝ#ñou–n ctŸçyŸ×eötç¡’g_Ϲ}Ý 6XZšwÛwo¢b™”´Ô¹@[nwLvM$ëi%^)­Êõý½/<¦p¦PZÑMÕÖ…Û$Ÿµ÷Ý2.ÓC·Õ”–&%0VŽ=~Gš&–oV‡A²wÇ7ÖÊ’µ.°=ôuS]¤[Å[š>œ$ej¾f‰ €íW€åöûÞ4©fÒÚ»T~G»-È$1áÞS‘‰¤j¥‰T~ÕgïuW'}‘¥ñ×íüx2ZFv‰ ¥Œ­Öóî ·gúGswà¾n —µ±­…&)¯¢vs¸-{)LI‹ênõÈì·®Nú"﬇À[²Óu)·Þ÷/Ûûج„˜nÛËÁÙ¤¶Pk²¦–/éäH#SÓ]ö:Fre €´#¬À¤ÕŠª4Y fc¿®Rñ‘ñªËä _5t þY_Z‚—Aæ¹Ã4AÊ] -œuÉ~‡- ·í¹”üm‰.zfí}~ùŸ÷Ü¿çëšòò5RÖæ e/}¯ì¹_bùW¤øß³·_åOê_—¬DÌ;ÙÛ›µ.­Ùz“]Ø-ÝJ"sÿòÞ¹ò;âNÍ/Wå¨Ë¾ÎEn”ÜT»/¶#€ p0”¦~ÊžZ9v±¦wKÃðrÃ^¿ÂHì7gŸ|ÜíN‘J¥>7#Q±½»…µ“¥½ùÇÒ'ö-wôa›UŸ\h‚`”4ÓNÖî ûªÊ€h|ÞêdÕ«É”û©ŒdQÙ7vMêöAÖ&¿<½²|VCÇIà7%0®Xú2/í;ë‘OR ŽN›‚‚i‰1KL4ò IWǫʺG»Vךm}êSI_Z‚ß7Žø]CùÈ6õ"¶~Sýô|eõ-eKdd‰+¥5¹«ÆÈä“£ò›ãÒàyt½l6¥ºŠí¹ÓÏ4˜† € pPè£|PØs[¨ÜPwƒ¼‘WÈ 
{WÖ¥ê—Ï}cI}:•z]º2Œ K’Êî•Î5Ò‚y—Y±vƒÉä>!7îU„iZú¾ L‘½TZ•7ÉHsV¥[W¿$íܨ†Òë¶‚>½~"Ý9“î 3W¥ÔÊ]¯¥Mý0ÝWã{W—ÊnÃkSµ+M*-ý‰]ßH4zQc»Û”Iî…Òõb©K¯–V$ÖËÈeÒmdx?v…Ÿ«ò›ãÒPýkÆŽÝ }b&‰í£%å‰Kn®úzC騆 €X@Z5ŸÑ¹ŸB»«zsL¯»óή×TUÉ{ûžt¿ïk—å›d‰X‰?q ŒÑ­)Gj%UUý´ÿtCé¯õï:DëÛоæl“‘1.—`s±«].öç«òÔ¥¡sÓǨUCû؆ €\ 8^ù¸ÌŒ›Ã÷Q‚¾E2·¸;B«Ô®³Ò@Y>ƒo·ëJR9@è0ܪ¸¯3“?ó¯–¹ÿ¾ö³½YÚ¸¦YGr € €@‹r(K-–ËdGŽš0¡w‹jÄÁŒ£xf\1i’€<ä=ÏÚ_7)1‰@@ýälÔ‹ˆ™ RUÁ–@oFzp?å²{?ê(¦'Gfï')»wÌ¨Š¿ «:3!€ €-ÈY‹ò´Ê²E2ÔØ‡rãÖw[\+2ÁÄQ<3®x € €´¹@ÎåLͽG¼‹düÜsÚüL:Qê§Žò`Ž{:Ñiq* € €@‡Èi < 6à^Çv…<übJ‰_]Ô¡$ÚIeÕMýÔQ=ÛIµ¨ € w9 ”}Ä6,–y°I®}@ZE?7ï”›xÂ/qËø‰ãÏ&M2@@\ ´J +R¾Öî.kd‚Ø—×ø¥[rYéΘW¦^\÷-ëÙëäIzwwÆóäœ@@Ž"iŠ.zvÞK§{Þµáëj/9íìóÞ^ôܼå­QVgÈ3Ó'9ØòˆœË?Küc‚äÎð®r € ÐÑZ¥E9D‘G0_(ë3$`h­}R懼"ïÓÊÊÖ…iòu©ã$oÎ}WoÜÓ>ÉbQ\“¨˜“¯&œ7 € ОZ5PÖõý™]V%W]%#¹Æ8s”„Œæð±³nµ¾¹=a´E]œ1Ý3O0”‡‰è8É:œŽn¡7îÑ'¹-ÞÊ@@š&Ðêrv5FUL’Ò_“mÇH Ü_ ?Ö%PÞ¤û–s]®iaœä|x×9G@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@°’¸¥iGUL’Ò_“|ŽqÖõ—Â{4”§gÌCÓ3²÷ŒWÆ|7{[¸Þ¡Ò;»Ñ³ÊX»ÂFÍì¿üµð.鯗”'þæ¬w͌ĸߵ³zR@@ ¯Ú¤E¹´bâ—œK]çœy:Ú=òûieeëòZ]NþZÿ®C¶¦·\¸àc½GfT–ÿ2ßM8@@@@ÈGßwÒm˜ @@Ž+ó®¥7W¤Ýo Œ<µ*¾¼ãÒœš—LœØËlJ½f#¶¤úÖò§N-(@@ §-¿¾??êÒn²°®JÎ:ó[ÉAö6Ö;ž€üàyªãÕš#€ Ðö´(·½yÎK¬K%o•Vät¤{ä¬ieeëÂnš4éu›êç[í­²í‡áö&.‡G¬{6;muUüáì׬wy´?¹ç¶Ô†'œ3§ÆþBZ•·zžùßêÊø-ÅñÄÏä±Úoh:Í?“6¹q’±æbim>Dž#øº57VûñWu¿NzŒñÌmàÎrÆ—M”ù˜Nžî—ýMÓHëõ cl•¬ž)åHkøSÎF'Ϩ,[¨û›2Ɉ+Åig¯’G~AZÒ×ÉŸJâՉЙzì~Õ?¦’îY=UöÕJY³»E»Ýp·ÝzÝ/FÃ\ «/HŸð1ò££¿¬¿äÅ /Òõç›ÀýXZë–¼7^ôîšÊ±o†Ç9ã†Y瞬+7Ο7\›]÷¦–ïYïigÒ㤬c$ÿ¹Ï»ú­å³µ,ä‡Ou›’wã.–²búWƒXQäÚ©ãÆ}ªû÷g]zsÕyétòçRÇCä}}_‰ÆÌ÷¦ùñ—t @vÈe׋W$k™r%àì ˆ–4–>'IàéÙ-6·¡tE‡v™¯ûƒTÝɺ?’N{ŒžrÉGµF K8Ó8W5²¢êòÉ·ŒÞÔ-Úýß$˜Úâ¬wE·X·¡E½‹&nÏÛõ“´‡…åÔ¥6>.å)Ö³Wš˜÷O,¿¤Üszó]˜FÊî'AòRVõþÕFí7%PšN¦kÂ4Ñh¡¯ ­^b‹ì—åœ·Ø =!Ü¿¿ei¼r´Œº2Aº…ÌŠs’z—Èò=n¤?áX ’%¶ïFmì,ùÀK þù-©ÍO†—€³§œ×d9ܳ‘b/fÏ‘à¿WÚ¶P‚äyÖÝ(û¾"ìa.H=ÖG“¸„ü@ø~Ę« ¼¨¦Yn‚ô|ýs å;üPê1.ÒÅþƒ»,Éœ¾ay$?–ñÔó‹Ù åΑÉ-Á/Ãýû³.8¼‡v›¹N|7èûªó‘æ¬?jvåà €¨@ÎZ”kz³Sjªâ¥ûË.bÜqÒò»A[J{×õ×o•#>•–Ïã²÷Ëø$¿™QY~ÿŽm¥Uwp¸Ñl> ÛÖK‹£tuNo[]³ÕõQãÏNés¢1ïKÓüò7v쿦8^yV}2(“×#vl“…] ­Ðº-3WTMµ&¨_kK·¬ÿ4|-Ýî”@~ñ¨ zïÙJ¦É^Jp:NZ­gÔT–k«´NË·/$ŒM¦µ5xåŒDùU;¶½sU¼rI3Œ¬˜p±l{B·KKs¤w‚ò£aF:p3mÄ^>ýÖø|M#­Ö“¥ÕzîèÉ“ §\}u]æ8gŠ¢1;FZe3%ÕÕ?0+ÿþY¬ûoÙ?±©åËŽX¤[ô;áùÞpûí•6l»&’_’|>ÒþâàŸ+-ñƒÂ–øÒòD"pîOÚ"/×ß_µ>Yk¥õ¼60i·¯÷u{ü‹ €*ËeD‚€nŸIÕmÖ¬YÒ¨¹÷´£Õ´»´%ï컬©"Î=Zÿì/ùdµgïÝ{=íÒ_”Vç¿ßw˸L7„0…| æH«¨¶ˆîœ$Ý[;_ÈŠŒœ±LZµ¥5vû$ÁeL¸oW$~*ö¯Ò&)];\ÄÕ™^aš}-¯óïÿn¯:JÀ_ïû—í}lVf˜nÛÓ@N%©-Ôš¬©åK:é~ÒÈdm¡´(oµžwo˜Jê§ÓÝWøúöÕÌ¿{GÖ>V@@àhQ>¬ö˜´Ÿ){W‚µ×“)w}Cõ“òiÝ\T}ë˜ÝZu]„-ž™Ãä»ódeÇŸï3-˜é ‘®9r/Ù+Uâ?1»\ég{´L¿›½­±õUÉ÷äfBwf,¹¼:1îÑé•e¯§ÒöØÆŽÉÞWS^¾FÎs²do×%Ž—zî¾ï*R©ûà bÞ Ó5{)­Ùz“]x¼¶ìK{®”™±ÌUùbô¶´œñÌÚê[ËŸÊžkü> ËßßRn”¹ërµ¿òØ €@GÈjQkÙiHÿÍašÃôD°nåΣGú‰ÓÓéˆËAaçÎ+¾oéƒ|µ³ÁS²ÜêÅ"÷L»eìòÒŸL:Zb¢QA`~$I/Ü«ÕÓº²â›«>à›½:=i° R—JGgþÎ>'ë—ø¾›«eìܾcå¼S>÷ÖÜ¿¼7ß$ƒÛJ*&–FŠ–×¥6ýPn <Ý‹™¯î™~_¯»ÇŠÞÝœªÝ–Neº*¼W\1a¨u©«wk¦Þ×Á;¶Kà7%0®Xúì.í;ë‘OR ŽN›‚‚i‰1KL4ò ÔñŠâxUY÷h×êZ³­O}*éKpý¾pÄïö“õ~wKKxlý¦úéòƒ¡²ú–²%2²Ä•ÒšÜÕFcdÎQù¢ñy«“U¯Ê¢ŸÊµVÙ7vMêöAÖ&¿<½²|Ö~+&è{ÙlJu•<Εkõ™p3K@Ø[ g-Ê2ê@\罋`Ks¤KÅø¦Œþ 7ý-ˆD½¡Ò’ûe¹yìíÒŠD½I¦Þ59#fcCgTÅ_سÒí`’ŒFQ½:•Xç\ò9öóO9~Z˜Nnb»GÖÏ—ýK¾óÆ»pÿe—]–.Œõü9.)#A<_—Üø‘´&_o­÷_Óýx“G?¹Ë¿þ3k¼›Ó&]-ý“×—zLê6VÙaYû[ôéõO†œ“àzæªÔ‚Z¹Ëîµ´©Ïüp«ñÇiëö¥2„ÛðÚTíJé0"}ª]ßH4z‘ö'Þ_ÞûÛ/õ”Ñ:¤¿s2xUÜ×Ë…2é1¼Æ›©®ÊÏüX)²—JKø&¹Îæ¬J%¶®~IÚ¹Qû«cöþš±c7Èœ“$Gå}ý°ä檯gïg@ZA@n.zFçVÈ:o³lŽ©´{Úµ@— ÁÉh ÝŠË+Ý•ë~ÆÌ÷çïó/ ÛóÚ÷~ÍCoÆ+ñï8\×›;éM‡£Ç?¢¹ÇëqzÎ%UUý4¯†ò¹Ö¿ëGº¡}ÍÙ¦ÃéI°¹XÕ.×TUÙX>¹*»÷Äú^6V^cû´o¸Z5–†} €ä»@ƒEsP¤5ð’Ù2LÕéÍ9žcö L¸ñ±´ÿÛÞ{›·Eƒ« ë·m.ðb_œZ96ä5/'ŽÒ@9í‚›f$*2cT#‚ €K ÁVÇf¢³—'¯õiÖ±Ô°€5ÚBùqÃ;ÙŠ € К9 ”¥5ù}¾êh×¶5+œ/yowtä|wŽD‘‹sï^{LRÛ!}wW9yç[ò@’÷=îžrõÕuá9²D@8¸9{àȾNÃ÷Gl“}·é<ʯú— Î8ûäãÖLÙ×y°½GÏÈÒM’7zQoþt¿ü­<8eN@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ “ ü„ [.eTÁIEND®B`‚manila-10.0.0/doc/source/install/figures/hwreqs.graffle0000664000175000017500000000767213656750227023124 0ustar 
manila-10.0.0/doc/source/install/figures/hwreqs.svg0000664000175000017500000012130313656750227022301 0ustar zuulzuul00000000000000 Produced by OmniGraffle 6.5.2 2016-04-26 14:57:28 +0000Canvas 1Layer 1Controller NodeCompute Node 11-2CPUBlock Storage Node 1Object Storage Node 1Object Storage Node 2Hardware RequirementsCore componentOptional component8 GBRAM100 GBStorage2-4+CPU8+ GBRAM100+ GBStorage1-2CPU4 GBRAM2NIC2NIC1NIC1NIC4+ GBRAM1-2CPU1NIC100+ GBStorage100+ GBStorage/dev/sdb/dev/sdb/dev/sdc/dev/sdb/dev/sdc1-2CPU4+ GBRAM100+ GBStorage/dev/sdc
manila-10.0.0/doc/source/install/common/0000775000175000017500000000000013656750362020073 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/install/common/share-node-common-configuration.rst0000664000175000017500000000462513656750227027014 0ustar zuulzuul000000000000004. Complete the rest of the configuration in ``manila.conf``.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. code-block:: ini

        [DEFAULT]
        ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` section, set the following config values:

     .. code-block:: ini

        [DEFAULT]
        ...
        default_share_type = default_share_type
        rootwrap_config = /etc/manila/rootwrap.conf

     .. important::

        The ``default_share_type`` option specifies the default share type
        to be used when shares are created without specifying the share type
        in the request. The default share type that is specified in the
        configuration file has to be created with the necessary required
        extra-specs (such as ``driver_handles_share_servers``) set
        appropriately with reference to the driver mode used.
This is explained in further steps. * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. code-block:: ini [DEFAULT] ... auth_strategy = keystone [keystone_authtoken] ... memcached_servers = controller:11211 www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = manila password = MANILA_PASS Replace ``MANILA_PASS`` with the password you chose for the ``manila`` user in the Identity service. * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: .. code-block:: ini [DEFAULT] ... my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the management network interface on your share node, typically 10.0.0.41 for the first node in the example architecture shown below: .. figure:: figures/hwreqs.png :alt: Hardware requirements **Hardware requirements** * In the ``[oslo_concurrency]`` section, configure the lock path: .. code-block:: ini [oslo_concurrency] ... lock_path = /var/lib/manila/tmp manila-10.0.0/doc/source/install/common/controller-node-common-configuration.rst0000664000175000017500000000444313656750227030073 0ustar zuulzuul000000000000003. Complete the rest of the configuration in ``manila.conf``: * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: .. code-block:: ini [DEFAULT] ... transport_url = rabbit://openstack:RABBIT_PASS@controller Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` account in ``RabbitMQ``. * In the ``[DEFAULT]`` section, set the following config values: .. code-block:: ini [DEFAULT] ... default_share_type = default_share_type share_name_template = share-%s rootwrap_config = /etc/manila/rootwrap.conf api_paste_config = /etc/manila/api-paste.ini .. important:: The ``default_share_type`` option specifies the default share type to be used when shares are created without specifying the share type in the request. The default share type that is specified in the configuration file has to be created with the necessary required extra-specs (such as ``driver_handles_share_servers``) set appropriately with reference to the driver mode used. This is further explained in the section discussing the setup and configuration of the share node. * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. code-block:: ini [DEFAULT] ... auth_strategy = keystone [keystone_authtoken] ... memcached_servers = controller:11211 www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = manila password = MANILA_PASS Replace ``MANILA_PASS`` with the password you chose for the ``manila`` user in the Identity service. * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the management interface IP address of the controller node: .. code-block:: ini [DEFAULT] ... my_ip = 10.0.0.11 * In the ``[oslo_concurrency]`` section, configure the lock path: .. code-block:: ini [oslo_concurrency] ... lock_path = /var/lock/manila manila-10.0.0/doc/source/install/common/dhss-true-mode-configuration.rst0000664000175000017500000000561413656750227026340 0ustar zuulzuul00000000000000Configure components -------------------- #. 
Edit the ``/etc/manila/manila.conf`` file and complete the following actions: * In the ``[DEFAULT]`` section, enable the generic driver and the NFS protocol: .. code-block:: ini [DEFAULT] ... enabled_share_backends = generic enabled_share_protocols = NFS .. note:: Back end names are arbitrary. As an example, this guide uses the name of the driver. * In the ``[neutron]``, ``[nova]``, and ``[cinder]`` sections, enable authentication for those services: .. code-block:: ini [neutron] ... url = http://controller:9696 www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS [nova] ... www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = nova password = NOVA_PASS [cinder] ... www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = cinder password = CINDER_PASS * In the ``[generic]`` section, configure the generic driver: .. code-block:: ini [generic] share_backend_name = GENERIC share_driver = manila.share.drivers.generic.GenericShareDriver driver_handles_share_servers = True service_instance_flavor_id = 100 service_image_name = manila-service-image service_instance_user = manila service_instance_password = manila interface_driver = manila.network.linux.interface.BridgeInterfaceDriver .. note:: You can also use SSH keys instead of password authentication for service instance credentials. .. important:: The ``service_image_name``, ``service_instance_flavor_id``, ``service_instance_user`` and ``service_instance_password`` are with reference to the service image that is used by the driver to create share servers. A sample service image for use with the ``generic`` driver is available in the ``manila-image-elements`` project. Its creation is explained in the post installation steps (See: :ref:`post-install`). manila-10.0.0/doc/source/install/common/dhss-false-mode-configuration.rst0000664000175000017500000000633113656750227026450 0ustar zuulzuul000000000000002. Create the LVM physical volume ``/dev/sdc``: .. code-block:: console # pvcreate /dev/sdc Physical volume "/dev/sdc" successfully created #. Create the LVM volume group ``manila-volumes``: .. code-block:: console # vgcreate manila-volumes /dev/sdc Volume group "manila-volumes" successfully created The Shared File Systems service creates logical volumes in this volume group. #. Only instances can access Shared File Systems service volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the ``/dev`` directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them which can cause a variety of problems with both the underlying operating system and project volumes. You must reconfigure LVM to scan only the devices that contain the ``cinder-volume`` and ``manila-volumes`` volume groups. 
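
   Before editing the filter, it can help to confirm which block devices
   actually hold those volume groups. This is only an optional sanity
   check; the device names used in this guide (``/dev/sdb`` and
   ``/dev/sdc``) are examples, and the layout on your storage node may
   differ:

   .. code-block:: console

      # lsblk
      # pvs
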
Edit the ``/etc/lvm/lvm.conf`` file and complete the following actions: * In the ``devices`` section, add a filter that accepts the ``/dev/sdb`` and ``/dev/sdc`` devices and rejects all other devices: .. code-block:: ini devices { ... filter = [ "a/sdb/", "a/sdc/", "r/.*/"] .. warning:: If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the ``/dev/sda`` device contains the operating system: .. code-block:: ini filter = [ "a/sda/", "a/sdb/", "a/sdc/", "r/.*/"] Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the ``/etc/lvm/lvm.conf`` file on those nodes to include only the operating system disk. For example, if the ``/dev/sda`` device contains the operating system: .. code-block:: ini filter = [ "a/sda/", "r/.*/"] Configure components -------------------- #. Edit the ``/etc/manila/manila.conf`` file and complete the following actions: * In the ``[DEFAULT]`` section, enable the LVM driver and the NFS protocol: .. code-block:: ini [DEFAULT] ... enabled_share_backends = lvm enabled_share_protocols = NFS .. note:: Back end names are arbitrary. As an example, this guide uses the name of the driver. * In the ``[lvm]`` section, configure the LVM driver: .. code-block:: ini [lvm] share_backend_name = LVM share_driver = manila.share.drivers.lvm.LVMShareDriver driver_handles_share_servers = False lvm_share_volume_group = manila-volumes lvm_share_export_ips = MANAGEMENT_INTERFACE_IP_ADDRESS Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the management network interface on your storage node. The value of this option can be a comma-separated string of one or more IP addresses. In the example architecture shown below, the address would be 10.0.0.41: .. figure:: figures/hwreqs.png :alt: Hardware requirements **Hardware requirements** manila-10.0.0/doc/source/install/common/controller-node-prerequisites.rst0000664000175000017500000002314513656750227026642 0ustar zuulzuul00000000000000Prerequisites ------------- Before you install and configure the Shared File Systems service, you must create a database, service credentials, and `API endpoints`. #. To create the database, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console $ mysql -u root -p * Create the `manila` database: .. code-block:: console CREATE DATABASE manila; * Grant proper access to the ``manila`` database: .. code-block:: console GRANT ALL PRIVILEGES ON manila.* TO 'manila'@'localhost' \ IDENTIFIED BY 'MANILA_DBPASS'; GRANT ALL PRIVILEGES ON manila.* TO 'manila'@'%' \ IDENTIFIED BY 'MANILA_DBPASS'; Replace ``MANILA_DBPASS`` with a suitable password. * Exit the database access client. #. Source the ``admin`` credentials to gain access to admin CLI commands: .. code-block:: console $ . admin-openrc.sh #. To create the service credentials, complete these steps: * Create a ``manila`` user: ..
code-block:: console $ openstack user create --domain default --password-prompt manila User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | e0353a670a9e496da891347c589539e9 | | enabled | True | | id | 83a3990fc2144100ba0e2e23886d8acc | | name | manila | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ * Add the ``admin`` role to the ``manila`` user: .. code-block:: console $ openstack role add --project service --user manila admin .. note:: This command provides no output. * Create the ``manila`` and ``manilav2`` service entities: .. code-block:: console $ openstack service create --name manila \ --description "OpenStack Shared File Systems" share +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Shared File Systems | | enabled | True | | id | 82378b5a16b340aa9cc790cdd46a03ba | | name | manila | | type | share | +-------------+----------------------------------+ .. code-block:: console $ openstack service create --name manilav2 \ --description "OpenStack Shared File Systems V2" sharev2 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Shared File Systems V2 | | enabled | True | | id | 30d92a97a81a4e5d8fd97a32bafd7b88 | | name | manilav2 | | type | sharev2 | +-------------+----------------------------------+ .. note:: The Shared File Systems services require two service entities. #. Create the Shared File Systems service API endpoints: .. code-block:: console $ openstack endpoint create --region RegionOne \ share public http://controller:8786/v1/%\(tenant_id\)s +--------------+-----------------------------------------+ | Field | Value | +--------------+-----------------------------------------+ | enabled | True | | id | 0bd2bbf8d28b433aaea56a254c69f69d | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 82378b5a16b340aa9cc790cdd46a03ba | | service_name | manila | | service_type | share | | url | http://controller:8786/v1/%(tenant_id)s | +--------------+-----------------------------------------+ $ openstack endpoint create --region RegionOne \ share internal http://controller:8786/v1/%\(tenant_id\)s +--------------+-----------------------------------------+ | Field | Value | +--------------+-----------------------------------------+ | enabled | True | | id | a2859b5732cc48b5b083dd36dafb6fd9 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 82378b5a16b340aa9cc790cdd46a03ba | | service_name | manila | | service_type | share | | url | http://controller:8786/v1/%(tenant_id)s | +--------------+-----------------------------------------+ $ openstack endpoint create --region RegionOne \ share admin http://controller:8786/v1/%\(tenant_id\)s +--------------+-----------------------------------------+ | Field | Value | +--------------+-----------------------------------------+ | enabled | True | | id | f7f46df93a374cc49c0121bef41da03c | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 82378b5a16b340aa9cc790cdd46a03ba | | service_name | manila | | service_type | share | | url | http://controller:8786/v1/%(tenant_id)s | +--------------+-----------------------------------------+ .. 
code-block:: console $ openstack endpoint create --region RegionOne \ sharev2 public http://controller:8786/v2/%\(tenant_id\)s +--------------+-----------------------------------------+ | Field | Value | +--------------+-----------------------------------------+ | enabled | True | | id | d63cc0d358da4ea680178657291eddc1 | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 30d92a97a81a4e5d8fd97a32bafd7b88 | | service_name | manilav2 | | service_type | sharev2 | | url | http://controller:8786/v2/%(tenant_id)s | +--------------+-----------------------------------------+ $ openstack endpoint create --region RegionOne \ sharev2 internal http://controller:8786/v2/%\(tenant_id\)s +--------------+-----------------------------------------+ | Field | Value | +--------------+-----------------------------------------+ | enabled | True | | id | afc86e5f50804008add349dba605da54 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 30d92a97a81a4e5d8fd97a32bafd7b88 | | service_name | manilav2 | | service_type | sharev2 | | url | http://controller:8786/v2/%(tenant_id)s | +--------------+-----------------------------------------+ $ openstack endpoint create --region RegionOne \ sharev2 admin http://controller:8786/v2/%\(tenant_id\)s +--------------+-----------------------------------------+ | Field | Value | +--------------+-----------------------------------------+ | enabled | True | | id | e814a0cec40546e98cf0c25a82498483 | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 30d92a97a81a4e5d8fd97a32bafd7b88 | | service_name | manilav2 | | service_type | sharev2 | | url | http://controller:8786/v2/%(tenant_id)s | +--------------+-----------------------------------------+ .. note:: The Shared File Systems services require endpoints for each service entity. manila-10.0.0/doc/source/install/common/dhss-true-mode-using-shared-file-systems.rst0000664000175000017500000004153113656750227030502 0ustar zuulzuul00000000000000Creating shares with Shared File Systems Option 2 (DHSS = True) --------------------------------------------------------------- Before being able to create a share, manila with the generic driver and the DHSS (``driver_handles_share_servers``) mode enabled requires the definition of at least an image, a network and a share-network for being used to create a share server. For that `back end` configuration, the share server is an instance where NFS shares are served. .. note:: This configuration automatically creates a cinder volume for every share. The cinder volumes are attached to share servers according to the definition of a share network. #. Source the admin credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc.sh #. Create a default share type with DHSS enabled. A default share type will allow you to create shares with this driver, without having to specify the share type explicitly during share creation. .. 
code-block:: console $ manila type-create default_share_type True +----------------------+--------------------------------------+ | Property | Value | +----------------------+--------------------------------------+ | required_extra_specs | driver_handles_share_servers : True | | Name | default_share_type | | Visibility | public | | is_default | - | | ID | 8a35da28-0f74-490d-afff-23664ecd4f01 | | optional_extra_specs | snapshot_support : True | +----------------------+--------------------------------------+ Set this default share type in ``manila.conf`` under the ``[DEFAULT]`` section and restart the ``manila-api`` service before proceeding. Unless you do so, the default share type will not be effective. .. note:: Creating and configuring a default share type is optional. If you wish to use the shared file system service with a variety of share types, where each share creation request could specify a type, please refer to the Share types usage documentation `here `_. #. Create a manila share server image in the Image service. You may skip this step and use any existing image. However, for mounting a share, the service image must contain the NFS packages as appropriate for the operating system. Whatever image you choose to be the service image, be sure to set the configuration values ``service_image_name``, ``service_instance_flavor_id``, ``service_instance_user`` and ``service_instance_password`` in ``manila.conf``. .. note:: Any changes made to ``manila.conf`` while the ``manila-share`` service is running will require a restart of the service to be effective. .. note:: As an alternative to specifying a plain-text ``service_instance_password`` in your configuration, a key-pair may be specified with options ``path_to_public_key`` and ``path_to_private_key`` to configure and allow password-less SSH access between the `share node` and the share server/s created. .. code-block:: console $ curl https://tarballs.openstack.org/manila-image-elements/images/manila-service-image-master.qcow2 | \ glance image-create \ --name "manila-service-image" \ --disk-format qcow2 \ --container-format bare \ --visibility public --progress % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3008k 100 3008k 0 0 1042k 0 0:00:02 0:00:02 --:--:-- 1041k +------------------+----------------------------------------------------------------------------------+ | Property | Value | +------------------+----------------------------------------------------------------------------------+ | checksum | 48a08e746cf0986e2bc32040a9183445 | | container_format | bare | | created_at | 2016-01-26T19:52:24Z | | direct_url | rbd://3c3a4cbc-7331-4fc1-8cbb-79213b9cebff/images/ff97deff-b184-47f8-827c- | | | 16c349c82720/snap | | disk_format | qcow2 | | id | 1fc7f29e-8fe6-44ef-9c3c-15217e83997c | | locations | [{"url": "rbd://3c3a4cbc-7331-4fc1-8cbb-79213b9cebff/images/ff97deff-b184-47f8 | | | -827c-16c349c82720/snap", "metadata": {}}] | | min_disk | 0 | | min_ram | 0 | | name | manila-service-image | | owner | e2c965830ecc4162a002bf16ddc91ab7 | | protected | False | | size | 306577408 | | status | active | | tags | [] | | updated_at | 2016-01-26T19:52:28Z | | virtual_size | None | | visibility | public | +------------------+----------------------------------------------------------------------------------+ #. List available networks in order to get id and subnets of the private network: .. 
code-block:: console $ neutron net-list +--------------------------------------+---------+----------------------------------------------------+ | id | name | subnets | +--------------------------------------+---------+----------------------------------------------------+ | 0e62efcd-8cee-46c7-b163-d8df05c3c5ad | public | 5cc70da8-4ee7-4565-be53-b9c011fca011 10.3.31.0/24 | | 7c6f9b37-76b4-463e-98d8-27e5686ed083 | private | 3482f524-8bff-4871-80d4-5774c2730728 172.16.1.0/24 | +--------------------------------------+---------+----------------------------------------------------+ #. Source the ``demo`` credentials to perform the following steps as a non-administrative project: .. code-block:: console $ . demo-openrc.sh .. code-block:: console $ manila share-network-create --name demo-share-network1 \ --neutron-net-id PRIVATE_NETWORK_ID \ --neutron-subnet-id PRIVATE_NETWORK_SUBNET_ID +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | name | demo-share-network1 | | segmentation_id | None | | created_at | 2016-01-26T20:03:41.877838 | | neutron_subnet_id | 3482f524-8bff-4871-80d4-5774c2730728 | | updated_at | None | | network_type | None | | neutron_net_id | 7c6f9b37-76b4-463e-98d8-27e5686ed083 | | ip_version | None | | cidr | None | | project_id | e2c965830ecc4162a002bf16ddc91ab7 | | id | 58b2f0e6-5509-4830-af9c-97f525a31b14 | | description | None | +-------------------+--------------------------------------+ Create a share -------------- #. Create an NFS share using the share network. Since a default share type has been created and configured, it need not be specified in the request. .. code-block:: console $ manila create NFS 1 --name demo-share1 --share-network demo-share-network1 +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | None | | share_type_name | default_share_type | | description | None | | availability_zone | None | | share_network_id | 58b2f0e6-5509-4830-af9c-97f525a31b14 | | share_group_id | None | | host | None | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 016ca18f-bdd5-48e1-88c0-782e4c1aa28c | | size | 1 | | name | demo-share1 | | share_type | 8a35da28-0f74-490d-afff-23664ecd4f01 | | created_at | 2016-01-26T20:08:50.502877 | | export_location | None | | share_proto | NFS | | project_id | 48e8c35b2ac6495d86d4be61658975e7 | | metadata | {} | +-----------------------------+--------------------------------------+ #. After some time, the share status should change from ``creating`` to ``available``: .. code-block:: console $ manila list +--------------------------------------+-------------+------+-------------+-----------+-----------+------------------------+-----------------------------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+-------------+------+-------------+-----------+-----------+------------------------+-----------------------------+-------------------+ | 5f8a0574-a95e-40ff-b898-09fd8d6a1fac | demo-share1 | 1 | NFS | available | False | default_share_type | storagenode@generic#GENERIC | nova | +--------------------------------------+-------------+------+-------------+-----------+-----------+------------------------+-----------------------------+-------------------+ #. 
Determine export IP address of the share: .. code-block:: console $ manila show demo-share1 +-----------------------------+------------------------------------------------------------------------------------+ | Property | Value | +-----------------------------+------------------------------------------------------------------------------------+ | status | available | | share_type_name | default_share_type | | description | None | | availability_zone | nova | | share_network_id | 58b2f0e6-5509-4830-af9c-97f525a31b14 | | share_group_id | None | | export_locations | | | | path = 10.254.0.6:/shares/share-0bfd69a1-27f0-4ef5-af17-7cd50bce6550 | | | id = e525cbca-b3cc-4adf-a1cb-b1bf48fa2422 | | | preferred = False | | host | storagenode@generic#GENERIC | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 5f8a0574-a95e-40ff-b898-09fd8d6a1fac | | size | 1 | | name | demo-share1 | | share_type | 8a35da28-0f74-490d-afff-23664ecd4f01 | | has_replicas | False | | replication_type | None | | created_at | 2016-03-30T19:10:33.000000 | | share_proto | NFS | | project_id | 48e8c35b2ac6495d86d4be61658975e7 | | metadata | {} | +-----------------------------+------------------------------------------------------------------------------------+ Allow access to the share ------------------------- #. Configure access to the new share before attempting to mount it via the network. The compute instance (whose IP address is referenced by the INSTANCE_IP below) must have network connectivity to the network specified in the share network. .. code-block:: console $ manila access-allow demo-share1 ip INSTANCE_IP +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | share_id | 5f8a0574-a95e-40ff-b898-09fd8d6a1fac | | access_type | ip | | access_to | 10.0.0.46 | | access_level | rw | | state | new | | id | aefeab01-7197-44bf-ad0f-d6ca6f99fc96 | +--------------+--------------------------------------+ Mount the share on a compute instance ------------------------------------- #. Log into your compute instance and create a folder where the mount will be placed: .. code-block:: console $ mkdir ~/test_folder #. Mount the NFS share in the compute instance using the export location of the share: .. code-block:: console $ mount -vt nfs 10.254.0.6:/shares/share-0bfd69a1-27f0-4ef5-af17-7cd50bce6550 ~/test_folder manila-10.0.0/doc/source/install/common/dhss-true-mode-intro.rst0000664000175000017500000000157613656750227024627 0ustar zuulzuul00000000000000Shared File Systems Option 2: Driver support for share servers management ------------------------------------------------------------------------- For simplicity, this configuration references the same storage node as the one used for the Block Storage service. .. note:: This guide describes how to configure the Shared File Systems service to use the ``generic`` driver with the driver handles share server mode (DHSS) enabled. This driver requires Compute service (nova), Image service (glance) and Networking service (neutron) for creating and managing share servers; and Block storage service (cinder) for creating shares. The information used for creating share servers is configured as share networks. Generic driver with DHSS enabled also requires the tenant's private network (where the compute instances are running) to be attached to a public router. 
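One way to confirm that requirement before configuring the back end is to check that the project router reports an external gateway on the public network and owns a port on the tenant's private subnet. The router name ``demo-router`` below is only an assumed example; substitute the router that serves your project network.

.. code-block:: console

   $ openstack router show demo-router
   $ openstack port list --router demo-router

The first command should show ``external_gateway_info`` populated with the public network, and the second should list a port whose fixed IP belongs to the private subnet referenced by the share network.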
manila-10.0.0/doc/source/install/common/dhss-false-mode-using-shared-file-systems.rst0000664000175000017500000002456513656750227030625 0ustar zuulzuul00000000000000Creating shares with Shared File Systems Option 1 (DHSS = False) ---------------------------------------------------------------- Create a share type ------------------- Disable DHSS (``driver_handles_share_servers``) before creating a share using the LVM driver. #. Source the admin credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc #. Create a default share type with DHSS disabled. A default share type will allow you to create shares with this driver, without having to specify the share type explicitly during share creation. .. code-block:: console $ manila type-create default_share_type False +----------------------+--------------------------------------+ | Property | Value | +----------------------+--------------------------------------+ | required_extra_specs | driver_handles_share_servers : False | | Name | default_share_type | | Visibility | public | | is_default | - | | ID | 3df065c8-6ca4-4b80-a5cb-e633c0439097 | | optional_extra_specs | snapshot_support : True | +----------------------+--------------------------------------+ Set this default share type in ``manila.conf`` under the ``[DEFAULT]`` section and restart the ``manila-api`` service before proceeding. Unless you do so, the default share type will not be effective. .. note:: Creating and configuring a default share type is optional. If you wish to use the shared file system service with a variety of share types, where each share creation request could specify a type, please refer to the Share types usage documentation `here `_. Create a share -------------- #. Source the ``demo`` credentials to perform the following steps as a non-administrative project: .. code-block:: console $ . demo-openrc #. Create an NFS share. Since a default share type has been created and configured, it need not be specified in the request. .. code-block:: console $ manila create NFS 1 --name share1 +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | creating | | share_type_name | default_share_type | | description | None | | availability_zone | None | | share_network_id | None | | share_group_id | None | | host | | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 55c401b3-3112-4294-aa9f-3cc355a4e361 | | size | 1 | | name | share1 | | share_type | 3df065c8-6ca4-4b80-a5cb-e633c0439097 | | has_replicas | False | | replication_type | None | | created_at | 2016-03-30T19:10:33.000000 | | share_proto | NFS | | project_id | 3a46a53a377642a284e1d12efabb3b5a | | metadata | {} | +-----------------------------+--------------------------------------+ #. After some time, the share status should change from ``creating`` to ``available``: .. 
code-block:: console $ manila list +--------------------------------------+--------+------+-------------+-----------+-----------+--------------------+-----------------------------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+--------+------+-------------+-----------+-----------+--------------------+-----------------------------+-------------------+ | 55c401b3-3112-4294-aa9f-3cc355a4e361 | share1 | 1 | NFS | available | False | default_share_type | storage@lvm#lvm-single-pool | nova | +--------------------------------------+--------+------+-------------+-----------+-----------+--------------------+-----------------------------+-------------------+ #. Determine export IP address of the share: .. code-block:: console $ manila show share1 +-----------------------------+------------------------------------------------------------------------------------+ | Property | Value | +-----------------------------+------------------------------------------------------------------------------------+ | status | available | | share_type_name | default_share_type | | description | None | | availability_zone | nova | | share_network_id | None | | share_group_id | None | | export_locations | | | | path = 10.0.0.41:/var/lib/manila/mnt/share-8e13a98f-c310-41df-ac90-fc8bce4910b8 | | | id = 3c8d0ada-cadf-48dd-85b8-d4e8c3b1e204 | | | preferred = False | | host | storage@lvm#lvm-single-pool | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 55c401b3-3112-4294-aa9f-3cc355a4e361 | | size | 1 | | name | share1 | | share_type | c6dfcfc6-9920-420e-8b0a-283d578efef5 | | has_replicas | False | | replication_type | None | | created_at | 2016-03-30T19:10:33.000000 | | share_proto | NFS | | project_id | 3a46a53a377642a284e1d12efabb3b5a | | metadata | {} | +-----------------------------+------------------------------------------------------------------------------------+ Allow access to the share ------------------------- #. Configure access to the new share before attempting to mount it via the network. The compute instance (whose IP address is referenced by the INSTANCE_IP below) must have network connectivity to the network specified in the share network. .. code-block:: console $ manila access-allow share1 ip INSTANCE_IP +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | share_id | 55c401b3-3112-4294-aa9f-3cc355a4e361 | | access_type | ip | | access_to | 10.0.0.46 | | access_level | rw | | state | new | | id | f88eab01-7197-44bf-ad0f-d6ca6f99fc96 | +--------------+--------------------------------------+ Mount the share on a compute instance ------------------------------------- #. Log into your compute instance and create a folder where the mount will be placed: .. code-block:: console $ mkdir ~/test_folder #. Mount the NFS share in the compute instance using the export location of the share: .. 
code-block:: console # mount -vt nfs 10.0.0.41:/var/lib/manila/mnt/share-8e13a98f-c310-41df-ac90-fc8bce4910b8 ~/test_folder manila-10.0.0/doc/source/install/common/dhss-false-mode-intro.rst0000664000175000017500000000076113656750227024735 0ustar zuulzuul00000000000000Shared File Systems Option 1: No driver support for share servers management ---------------------------------------------------------------------------- For simplicity, this configuration references the same storage node configuration for the Block Storage service. However, the LVM driver requires a separate empty local block storage device to avoid conflict with the Block Storage service. The instructions use ``/dev/sdc``, but you can substitute a different value for your particular node. manila-10.0.0/doc/source/install/common/share-node-share-server-modes.rst0000664000175000017500000000367013656750227026371 0ustar zuulzuul00000000000000The share node can support two modes, with and without the handling of share servers. The mode depends on driver support. Option 1 -------- Deploying the service without driver support for share server management. In this mode, the service does not do anything related to networking. The operator must ensure network connectivity between instances and the NAS protocol based server. This tutorial demonstrates setting up the LVM driver which creates LVM volumes on the share node and exports them with the help of an NFS server that is installed locally on the share node. It therefore requires LVM and NFS packages as well as an additional disk for the ``manila-share`` LVM volume group. This driver mode may be referred to as ``driver_handles_share_servers = False`` mode, or simply ``DHSS=False`` mode. Option 2 -------- Deploying the service with driver support for share server management. In this mode, the service runs with a back end driver that creates and manages share servers. This tutorial demonstrates setting up the ``Generic`` driver. This driver requires Compute service (nova), Image service (glance) and Networking service (neutron) for creating and managing share servers; and Block storage service (cinder) for creating shares. The information used for creating share servers is configured with the help of share networks. This driver mode may be referred to as ``driver_handles_share_servers = True`` mode, or simply ``DHSS=True`` mode. .. warning:: When running the generic driver in ``DHSS=True`` driver mode, the share service should be run on the same node as the networking service. However, such a service may not be able to run the LVM driver that runs in ``DHSS=False`` driver mode effectively, due to a bug in some distributions of Linux. For more information, see LVM Driver section in the `Configuration Reference Guide `_. manila-10.0.0/doc/source/install/get-started-with-shared-file-systems.rst0000664000175000017500000000325113656750227026420 0ustar zuulzuul00000000000000================ Service Overview ================ The OpenStack Shared File Systems service (manila) provides file storage to a virtual machine. The Shared File Systems service provides an abstraction for managing and provisioning of file shares. The service also enables management of share types as well as share snapshots if a driver supports them. The Shared File Systems service consists of the following components: manila-api A WSGI app that authenticates and routes requests to the Shared File Systems service. 
manila-data A standalone service whose purpose is to process data operations such as copying, share migration or backup. manila-scheduler Schedules and routes requests to the appropriate share service. The scheduler uses configurable filters and weighers to route requests. The Filter Scheduler is the default and enables filters on various attributes of back ends, such as, Capacity, Availability Zone and other capabilities. manila-share Manages back-end devices that provide shared file systems. A manila-share service talks to back-end devices by using share back-end drivers as interfaces. A share driver may operate in one of two modes, with or without handling of share servers. Share servers export file shares via share networks. When share servers are not managed by a driver within the shared file systems service, networking requirements should be handled out of band of the shared file systems service. Messaging queue Routes information between the Shared File Systems processes. For more information, see `Configuration Reference Guide `_. manila-10.0.0/doc/source/install/install-controller-node.rst0000664000175000017500000000130013656750227024101 0ustar zuulzuul00000000000000.. _manila-controller: Install and configure controller node ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Shared File Systems service, code-named manila, on the controller node. This service requires at least one additional share node that manages file storage back ends. This section assumes that you already have a working OpenStack environment with at least the following components installed: Compute, Image Service, Identity. Note that installation and configuration vary by distribution. .. toctree:: :maxdepth: 1 install-controller-obs.rst install-controller-rdo.rst install-controller-ubuntu.rst install-controller-debian.rst manila-10.0.0/doc/source/install/next-steps.rst0000664000175000017500000000067713656750227021461 0ustar zuulzuul00000000000000.. _next-steps: ========== Next steps ========== Your OpenStack environment now includes the Shared File Systems service. To add more services, see the `additional documentation on installing OpenStack services `_ Continue to evaluate the Shared File Systems service by creating the service image and running the service with the correct driver mode that you chose while configuring the share node. manila-10.0.0/doc/source/install/post-install.rst0000664000175000017500000000201413656750227021763 0ustar zuulzuul00000000000000.. _post-install: Creating and using shared file systems ====================================== Depending on the option chosen while installing the share node (Option with share server management and one without); the steps to create and use your shared file systems will vary. When the Shared File Systems service handles the creation and management of share servers, you would need to specify the ``share network`` with the request to create a share. Either modes will vary in their respective share type definition. When using the driver mode with automatic handling of share servers, a service image is needed as specified in your configuration. The instructions below enumerate the steps for both driver modes. Follow what is appropriate for your installation. .. include:: common/dhss-false-mode-using-shared-file-systems.rst .. 
include:: common/dhss-true-mode-using-shared-file-systems.rst For more information about how to manage shares, see the `OpenStack End User Guide `_ manila-10.0.0/doc/source/install/install-share-rdo.rst0000664000175000017500000000562013656750227022670 0ustar zuulzuul00000000000000.. _share-node-install-rdo: Install and configure a share node running Red Hat Enterprise Linux and CentOS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure a share node for the Shared File Systems service. For simplicity, this configuration references one storage node with the generic driver managing the share servers. The generic backend manages share servers using compute, networking and block services for provisioning shares. Note that installation and configuration vary by distribution. This section describes the instructions for a share node running Red Hat Enterprise Linux or CentOS. Install and configure components -------------------------------- #. Install the packages: .. code-block:: console # yum install openstack-manila-share python2-PyMySQL #. Edit the ``/etc/manila/manila.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. code-block:: ini [database] ... connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila Replace ``MANILA_DBPASS`` with the password you chose for the Shared File Systems database. .. include:: common/share-node-common-configuration.rst Two driver modes ---------------- .. include:: common/share-node-share-server-modes.rst Choose one of the following options to configure the share driver: .. include:: common/dhss-false-mode-intro.rst Prerequisites ------------- .. note:: Perform these steps on the storage node. #. Install the supporting utility packages: * Install LVM and NFS server packages: .. code-block:: console # yum install lvm2 nfs-utils nfs4-acl-tools portmap targetcli * Start the LVM metadata service and configure it to start when the system boots: .. code-block:: console # systemctl enable lvm2-lvmetad.service target.service # systemctl start lvm2-lvmetad.service target.service .. include:: common/dhss-false-mode-configuration.rst .. include:: common/dhss-true-mode-intro.rst Prerequisites ------------- Before you proceed, verify operation of the Compute, Networking, and Block Storage services. This options requires implementation of Networking option 2 and requires installation of some Networking service components on the storage node. * Install the Networking service components: .. code-block:: console # yum install openstack-neutron openstack-neutron-linuxbridge ebtables .. include:: common/dhss-true-mode-configuration.rst Finalize installation --------------------- #. Prepare manila-share as start/stop service. Start the Shared File Systems service including its dependencies and configure them to start when the system boots: .. code-block:: console # systemctl enable openstack-manila-share.service # systemctl start openstack-manila-share.service manila-10.0.0/doc/source/install/install-share-obs.rst0000664000175000017500000000466613656750227022700 0ustar zuulzuul00000000000000.. _share-node-install-obs: Install and configure a share node running openSUSE and SUSE Linux Enterprise ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure a share node for the Shared File Systems service. Note that installation and configuration vary by distribution. 
This section describes the instructions for a share node running openSUSE and SUSE Linux Enterprise. Install and configure components -------------------------------- #. Install the packages: .. code-block:: console # zypper install openstack-manila-share python-PyMySQL #. Edit the ``/etc/manila/manila.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. code-block:: ini [database] ... connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila Replace ``MANILA_DBPASS`` with the password you chose for the Shared File Systems database. .. include:: common/share-node-common-configuration.rst Two driver modes ---------------- .. include:: common/share-node-share-server-modes.rst Choose one of the following options to configure the share driver: .. include:: common/dhss-false-mode-intro.rst Prerequisites ------------- .. note:: Perform these steps on the storage node. #. Install the supporting utility packages: * Install LVM and NFS server packages: .. code-block:: console # zypper install lvm2 nfs-kernel-server .. include:: common/dhss-false-mode-configuration.rst .. include:: common/dhss-true-mode-intro.rst Prerequisites ------------- Before you proceed, verify operation of the Compute, Networking, and Block Storage services. This options requires implementation of Networking option 2 and requires installation of some Networking service components on the storage node. * Install the Networking service components: .. code-block:: console # zypper install --no-recommends openstack-neutron-linuxbridge-agent .. include:: common/dhss-true-mode-configuration.rst Finalize installation --------------------- #. Prepare manila-share as start/stop service. Start the Shared File Systems service including its dependencies and configure them to start when the system boots: .. code-block:: console # systemctl enable openstack-manila-share.service tgtd.service # systemctl start openstack-manila-share.service tgtd.service manila-10.0.0/doc/source/install/install-share-node.rst0000664000175000017500000000160413656750227023027 0ustar zuulzuul00000000000000.. _share-node-install: Install and configure a share node ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure a share node for the Shared File Systems service. .. note:: The manila-share process can run in two modes, with and without handling of share servers. Some drivers may support either modes; while some may only support one of the two modes. See the `Configuration Reference `_ to determine if the driver you choose supports the driver mode desired. This tutorial describes setting up each driver mode using an example driver for the mode. Note that installation and configuration vary by distribution. .. toctree:: :maxdepth: 1 install-share-obs.rst install-share-rdo.rst install-share-ubuntu.rst install-share-debian.rst manila-10.0.0/doc/source/install/verify.rst0000664000175000017500000000214713656750227020645 0ustar zuulzuul00000000000000.. _verify: Verify operation ~~~~~~~~~~~~~~~~ Verify operation of the Shared File Systems service. .. note:: Perform these commands on the controller node. #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc.sh #. List service components to verify successful launch of each process: .. 
code-block:: console $ manila service-list +------------------+----------------+------+---------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+----------------+------+---------+-------+----------------------------+-----------------+ | manila-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None | | manila-share | share1@generic | nova | enabled | up | 2014-10-18T01:30:57.000000 | None | +------------------+----------------+------+---------+-------+----------------------------+-----------------+ manila-10.0.0/doc/source/install/index.rst0000664000175000017500000000532413656750227020450 0ustar zuulzuul00000000000000===================== Installation Tutorial ===================== .. toctree:: :maxdepth: 2 get-started-with-shared-file-systems.rst install-controller-node.rst install-share-node.rst verify.rst post-install.rst next-steps.rst The OpenStack Shared File Systems service (manila) provides coordinated access to shared or distributed file systems. The method in which the share is provisioned and consumed is determined by the Shared File Systems driver, or drivers in the case of a multi-backend configuration. There are a variety of drivers that support NFS, CIFS, HDFS, GlusterFS, CEPHFS, MAPRFS and other protocols as well. The Shared File Systems API and scheduler services typically run on the controller nodes. Depending upon the drivers used, the share service can run on controllers, compute nodes, or storage nodes. .. important:: For simplicity, this guide describes configuring the Shared File Systems service to use one of either: * the ``generic`` back end with the ``driver_handles_share_servers`` mode (DHSS) enabled that uses the `Compute service` (`nova`), `Image service` (`glance`), `Networking service` (`neutron`) and `Block storage service` (`cinder`); or, * the ``LVM`` back end with ``driver_handles_share_servers`` mode (DHSS) disabled. The storage protocol used and referenced in this guide is ``NFS``. As stated above, the Shared File System service supports different storage protocols depending on the back end chosen. For the ``generic`` back end, networking service configuration requires the capability of networks being attached to a public router in order to create share networks. If using this back end, ensure that Compute, Networking and Block storage services are properly working before you proceed. For networking service, ensure that option 2 (deploying the networking service with support for self-service networks) is properly configured. This installation tutorial also assumes that installation and configuration of OpenStack packages, Network Time Protocol, database engine and message queue has been completed as per the instructions in the `OpenStack Installation Guide. `_. The `Identity Service` (`keystone`) has to be pre-configured with suggested client environment scripts. For more information on various Shared File Systems storage back ends, see the `Shared File Systems Configuration Reference. `_. To learn more about installation dependencies noted above, see the `OpenStack Installation Guide. `_ manila-10.0.0/doc/source/install/install-controller-rdo.rst0000664000175000017500000000322113656750227023744 0ustar zuulzuul00000000000000.. 
_manila-controller-rdo: Install and configure controller node on Red Hat Enterprise Linux and CentOS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Shared File Systems service, code-named manila, on the controller node that runs Red Hat Enterprise Linux or CentOS. This service requires at least one additional share node that manages file storage back ends. .. include:: common/controller-node-prerequisites.rst Install and configure components -------------------------------- #. Install the packages: .. code-block:: console # yum install openstack-manila python-manilaclient #. Edit the ``/etc/manila/manila.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. code-block:: ini [database] ... connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila Replace ``MANILA_DBPASS`` with the password you chose for the Shared File Systems database. .. include:: common/controller-node-common-configuration.rst #. Populate the Shared File Systems database: .. code-block:: console # su -s /bin/sh -c "manila-manage db sync" manila .. note:: Ignore any deprecation messages in this output. Finalize installation --------------------- #. Start the Shared File Systems services and configure them to start when the system boots: .. code-block:: console # systemctl enable openstack-manila-api.service openstack-manila-scheduler.service # systemctl start openstack-manila-api.service openstack-manila-scheduler.service manila-10.0.0/doc/source/install/install-controller-debian.rst0000664000175000017500000000270413656750227024407 0ustar zuulzuul00000000000000.. _manila-controller-debian: Install and configure controller node on Debian ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Shared File Systems service, code-named manila, on the controller node that runs a Debian distribution. This service requires at least one additional share node that manages file storage back ends. .. include:: common/controller-node-prerequisites.rst Install and configure components -------------------------------- #. Install the packages: .. code-block:: console # apt-get install manila-api manila-scheduler python-manilaclient #. Edit the ``/etc/manila/manila.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. code-block:: ini [database] ... connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila Replace ``MANILA_DBPASS`` with the password you chose for the Shared File Systems database. .. include:: common/controller-node-common-configuration.rst #. Populate the Shared File Systems database: .. code-block:: console # su -s /bin/sh -c "manila-manage db sync" manila .. note:: Ignore any deprecation messages in this output. Finalize installation --------------------- #. Restart the Shared File Systems services: .. 
code-block:: console # service manila-scheduler restart # service manila-api restart manila-10.0.0/doc/source/images/0000775000175000017500000000000013656750362016402 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/images/rpc/0000775000175000017500000000000013656750362017166 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/images/rpc/arch.svg0000664000175000017500000004405013656750227020627 0ustar zuulzuul00000000000000 Page-1 Box.8 Compute Compute Box.2 Volume Storage VolumeStorage Box Auth Manager Auth Manager Box.4 Cloud Controller CloudController Box.3 API Server API Server Box.6 Object Store ObjectStore Box.7 Node Controller NodeController Dynamic connector Dynamic connector.11 Dynamic connector.12 http http Circle Manila-Manage Manila-Manage Circle.15 Euca2ools Euca2ools Dynamic connector.16 Dynamic connector.17 Sheet.15 Project User Role Network VPN ProjectUserRoleNetworkVPN Sheet.16 VM instance Security group Volume Snapshot VM image IP address... VM instanceSecurity groupVolumeSnapshotVM imageIP addressSSH keyAvailability zone Box.20 Network Controller Network Controller Box.5 Storage Controller Storage Controller Dot & arrow Dot & arrow.14 Dynamic connector.13 Sheet.22 AMQP AMQP Sheet.23 AMQP AMQP Sheet.24 AMQP AMQP Sheet.25 REST REST Sheet.26 local method local method Sheet.27 local method local method Sheet.28 local method local method manila-10.0.0/doc/source/images/rpc/flow2.svg0000664000175000017500000005775013656750227020756 0ustar zuulzuul00000000000000 Page-1 Rounded rectangle ATM switch name: control_exchange (type: topic) Sheet.3 Sheet.4 Sheet.5 Sheet.6 Sheet.7 Sheet.8 name: control_exchange(type: topic) Sheet.9 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.17 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.25 Sheet.26 key: topic key: topic Sheet.27 key: topic.host key: topic.host Sheet.28 Rectangle Topic Consumer Topic Consumer Rectangle.30 Topic Consumer Topic Consumer Sheet.31 Sheet.32 Sheet.33 Rectangle.34 Sheet.36 Worker (e.g. compute) Worker(e.g. compute) Rectangle.57 Sheet.57 Invoker (e.g. api) Invoker(e.g. 
api) Rectangle.55 Topic Publisher Topic Publisher Sheet.59 Sheet.61 RabbitMQ Node RabbitMQ Node Sheet.62 Sheet.63 rpc.cast(topic) rpc.cast(topic) Sheet.64 Sheet.65 manila-10.0.0/doc/source/images/rpc/arch.png0000664000175000017500000006410213656750227020614 0ustar zuulzuul00000000000000
[arch.png: binary PNG image data - RPC architecture diagram corresponding to arch.svg]
•W\“knÞre³ðÆëæý½?Ýš'?û݇0Èßã¾ß8œ‰D¾~úE¿À}`½:€À@ÇQ¸;ö,׎uºüììÆÏßñÞM‰¢Áp‘¿ûÛÞqÓØ ܺT^Ñz›7­â œÍéfäoÑÈaɲ/„Fzù×ò\èÀä ÜÉYÉ ×i{¸>x„&É ÷±¸€°­:¯‚¢W̾`«*¸é¶¶iÃTC¨šuL%[!µ¼n€ÀŒáµôXÄ~KKàµt•‹†ÙT~t®ÑÀÅL·¥´¦öjÉ–â`¢®¥»k{w/מ@ิ.gê€Z…À•´Ò px=ï$'‡Àu*qÚO­â‹ù\ô‹Ó®Þ¥ÀÕµò‚È*€ÀLìõ´\;" p\ZWtѰ¼7=XºÊ¦1óT;¸ª•9³ ÊY p‹\KrM!p!ö,Õj{׊€Àd„¹Ãµ‘Ä=ŠÀ!p@Vî¥÷ç%7‡À pAª2Hu@ಅ[ªíŒ®­Ó×.™oI“ø ·_Ò† pÓxm»§¶Càz¨œKRvWf9+¿.j‡ë4¸ð8€ ¸¶–#@à¸^#`å€TimTÍõVäˆËk2_3mÆ‘÷ÅÌÉ¢¿¼M\|?ùå¶8€I@*2Èõ*pŸ¶À~êKͺ¯yä¾¥æ‰Ç?ƒ¨M€Ài_IØ› E»\’Ý’Wœ¾d¦WLbÞ’‰è•MÅ…¢&êuË—ÀÙé#‡u?*脳AÂ_ÈÑk_D3«üÜ;ÿ§æÅ—]™Ùï/BÎCãŽGì”À‰¸íÞ{CsÛ«_µ†7¼qG¦DNå[/qU'FkäÈ}ž±¦"勞¸²™^qrXòÊtŒÀͶSõ¦WtŸ4¡Bîˆ\Ðüg;|>û¹O4¯¸bö ç!Œ±À퉄À}ól}UÖäZ°R'Ódž¼Ïµ,ßõÎC‹\ƒF#_ëdÉ«aZôJhY³hKhUœ®J›ßÎlÇŸ^ •è@àƒ,ãrÂó ܧ!pïÜ¿/VÒdšÌ“eüy™kûò©êºeBÓºÝn·'¿‡l3MÍJjÕy'KžÀ5´} g®ìš=ófÚ¼[?×…À•8@àƒ Â8˜¶ÀÅ š.ûYš(ý¦Ö?ò[«ó¥)öÝï¹e5z§Ñ=Ñ“n ò¾Ý:VäBr&ë :_¶«ëÛeìwMú]'¹\Ù Ú¢•³67秊=Óyófù'‹ozœÀÍ›«Âg÷ÍE p0Q¸{jÛý”"ý œÈ™HO§ÍŽ"j²¼íg¦<=•0]F$LåI§É5g?ûëèv­ä…Næ'-#ò¦òh¿‹Èâ4õËyƒr6gÞϘ¦Õ¼FÛÜûE7¿dR’øÛZð§ûyà¼ýÍÇlgžA €À“‚ŒòŽîG͈)t¿õUù/¼ÿ¾O4O}ãë=û"4"d6YŠ iÄ+.Ч2–$Z:Me+´Ž6ßj¿¼nNåÔöëÓþ¾z¸]»‹Ùê—ƒª„\ p@çH8'oÊæîúwÿ'›¿ó…Ó=û*IFàâ–µ"'kí.³ûëVàDÔ´©Ö§iM¸Ë.{YÏ2˜U[ôG¨ p}k†xëòÉæç¾òÖNæ%ITZÚn?§}õäs}›FÎðM´À‡À´GFŸJ1ûÝË­èÛC÷#ûâé¯÷|þk¶ÐhŠÛ<jBµÓ{8~¿M¨qÍÃÒ´Ú«te² pÀNŠÙ/מ‹¾ ¿°üåæ×ÿóŸö|þ«(u’NäH>[)Ò¦J‰Ú«ÀùÛU±´rfåQ$øg›bu¿VNu`0H‰ÛIÜ8é÷ççú»D†´oX»J *V*GqÒÔ‹Àévõ½/Šþ2þHU‘3ýü°qÛh³é9Ì´œIçÑ·×¾é b@à8€N%îžÚö=ë1¸ÜWR»Dl´XR3£4CJ3ÁOŽ+óüÜqqÓ42¦‚'Û’mЬÅE í|Yßßn(°ü.í~§I¸J ,Ö¢©Iª•l|‚ØÍØ´ €À!pɸ ßòî¦ÃÙ(s•D(B7îd)‘o=]Ë+YÉ3yÞB·hrÅUL~¹ŠKÊ[µÉ{ V óÞ~tÝŠùNófy¢}€ÀÙÄ-œ¤­±géÔߨÀ½íž/ p\âÃÅ Û¡³%¯¼å 1·¦ —–éRárïL%†ª©‹Zð*:TÌwj¸ùEtb·ÀÅ py‰“º¨‡jñ‚Ü-þýÌ_ƒÚ‹À æá"Q³rHˆ¼Ú¤91«t*p1Ó ž昕j¡ª4ÚŠ%÷]ê\ø€ÀÅ÷y‘>%¡BÏÒǤ“‚Õ~?`°·kéÉ/î>ôäßýkŸâ1.ñá¢EågÍ© +yN˜Ši œm ÕH›‰Þu"p%·;¢ËâçxÒ´Ãlþ‘L«à4“Ì[ûöÝ¿ñ0B…Àµ}À¬ö5‹¸œ¸¢ÀÙe¬ÀUL_ºÓ„º`G¿2ê¸õò&CíE’lÔMò8ùy•:¸´›?:Ù'Ð1{˜Ó~fùÀÉ+ «ö!pU#ju#þþêeozŽ‹¸Ÿ$팋piÞ$oeJÖí&’4«¶[>4ä®Àɵ-ÿØÉ?biäHF×MAÒI7©¸1|à-x©K5@à:è,l3«‡°ÐU¦tšbó4I³«Ÿ!Ý&Í •Àñ›p5a¨îO“qvZd®7q“k×ïJJà›Vô?h½½7 û>.Þ¼“¶R¨i[Þ¬ÛIž”DÐäæ¨Í­¾à©†êÚåýR;"LòÙÞÈCÒH`p§uN;)¡•iýS†ÀMÆC/çš[ó\Ì€Àµ§“¶´ÞýB×¶|ŽÝ¦FøBQ=]&T‹P²/Ù0xóÿë´«…ÜÚ:uH8Ù_h›qûêDà´ËG’„ʶ{•TÆ.§3À…–·ÒfßË _ÖÕi¶^¢nG>ÛÚ‰ b¾À%õ…U¹±Ÿåšõ›Yí22Mþó뚪 Ù®z}ë::O£òì+Iàüfaû{j?_Ù‡~Ÿ^Ò !p0Të¥\(b'pzãÔzŠ>º|R3 0Xayé´9S…J›Z%Rê:a—ñD…"p¶O­®×é¾âN#üz³Âf[´éX»vL¤À¹6]‡Í¯Öwó¥Ë7˜.£^‹\¤€À oªÜØãF¡vÒ„·¼Ü$µI%®Wn¼IQ? ³Ý$zí»¦÷к¯øÝ%Bç¯Óé¾B§bê­×ASöw·±&Yà ^n·†'qõ~ò¬ùÌtò·7´‰ÔoÆ”›žþG¬}Þtƒp ëëMÒ YHýiz3µ{Žëw‡À¤#pzMvK’=»Ðöü~Hàìç^öeïzoò»pºÝnåuâΛŸ÷§yó ¾ ™Á¹@I¬‚[Ïæ…Ëë:Þr…˜ï[ˆù v€©8?‡öY‘›šüÇl›Wõì÷g³ÿ½úµÐ¶ýˆ›6Ó†¶©ÿE“F`0×I8™'×_–N»ˆÄuáМqÜOæCç•´ª˜Ä¼y“þ£jKb¹ée“¼·d¦×Í6ên™²W¸^·[6ßY÷¯ÑÂy.|˜v³ò"76¹ñ…:òÊ O§k'¿ÿ\¨ITƒê ®¹V«8øÛÔ$ÀÈÂ!p0§ÿD…®1M1¢‚§ƒ BR¥ÓÓ¸^öe.NÎä~bûæM³À5PÕ€-Ö[ô>k ¬ª•(SkQ«8˜åJf•´E›óÍ%ô­˜æ]].o¤nÑFã¸ðnj’qƒi8mFí$œöe³²ç§!éTàl×Ð:ÝîËï3«÷#»¾í‹7õ8m¦ÔfИõŠ^©«†ª¸"ô%/‚Wr«áQ ̳ͯé«"p€À¥ƒüW«iCÒ¬…ŠÀ ^àTf´X»J ¶;D¨"K''"¥ûŠ[§Û}ù§£Nýõõ÷¡ µ³õviWN8©Ë9I›±Ñ7·Ýo_‹ú» pý‹ü7eyCà`šÎJöóGp†– à uy°]0ìrV¦âºItº¯¸Z¨Ú=$ôû¤Q&lnÞ­S67o¤mÑ5ÉÚ óŽFŒÀå¼m«¨éúE3½húÊéþª\ø€Àô#p7üßoÝ?zAäo$yà:\/oRäm¾7'_e7ÍNÏkäÍ-SÔ¨›·íœÙ¶íOW4Ó‹^ä­ä÷Ë@àz¸èï¿"÷Ž^‰îûû*p€À!p¬¸Ï§rÿçd@àúíϦe®âúÁ +¹.€À!p€À%`G|Ù¬å2bÌ.§‰=8Cà`„*k%HíR?=Ȱ*# p—î&ß&ï[\i«|hÙ¸}pQ7<’ ÇKÄM3±kn%ifõeÏVsð+*è²2ÏO •ú-*À‡P!pñ—ªW}Á¯„Ðð*3ŒÔբ׈@r´7äœDÚÚEÖlNmZµÛC5MUúdyM¦)ë„mÊO_þ8ëïÁ²`…ÍMóë—¼å«Àø œFÚTÎD¤$ÊÄà7¡¶+y£Ï5ЧËÉ:~m™æ÷¹Cà8®«Û¦SÛÔé×uÓrý\HMpÁì×~'丑léZöFLÄKÞÛ,ê¡BÔ¡æO-:­ge04MûÜÙZ‡‡À¥óp±5F+^’\[kÔ/_p•üÚ¥±§Âè bѽŸ1’·ZRKë®rA—î࿵ \R=A].´Œˆ_wQI£>! 
pᇌVThØ f¾–®ª÷SÁ[0Ó^ÿ»†-½å–Y#•\gH³e\-S€C§b(Ò¤5-4¡ p\z–urä•ȪÖiô(pu·Í†)¯¥ÂVðGººÈß|§õY¸o¬ë{f£lé 'púY–±ëH¨6­†Nšdeš4™úM¨²­Q$ FàƒIïW4Q8¿/ZÉô{[ì#g›G+FÒCëj_9mJ@àºCó½‰Ä©t "j2Ý šˆ™ >P!SÁÓþl:*U…0.J§£Ru=‘:;º@à¸ô%®aåÍÍËi³©¦12—Œ`͇ú«ùË:1+˜(`ÕáGq¹é¸Î$Î^Ð>j~j6•³ÓTÂtàƒ¦ 5ÚÔ"£”78˜Š>pcøÐ+0x¸ôÙ²#O»Y¯—¤£ê÷†ÀÜèx}+r!L£ÀÝñÞ­s†‹üÝ8@à8€®Ù°áe·È¹ £Aþþ‡À!p0÷þ p€Àù˜Tª˜Y‹~ ®¸eÓ+nÔjÞ¤©h‰/›+¦Yà4•ÇBŸÛ)éO}Ÿ°l#fú¢+«esÇ-Ú俜€ÀÜK9Þ ^Òb .ê¼™§’Îå’Óem-Õ†ÙÎB'gJ}•ß/ç—ó¾§¢"¸øNEWýaQ÷ám¿äO@à8€¡ œ™Š¹¢‘™º¥[Ð:¥n~Á«ÜМ±/¸mTŒÀUÌô’'“Vàì4-¿•}?ïw›1ë•ÍöÊfß%¯JDÃIÜ‚ù}rZÇUÿ.H pÀ(ÎJ[É«˜P7•FŠl5†²‘¯Àå-YjxߣÑNàüõ´¾ª¾Oø ¡ß%\ÓôëM¯˜ïµH’a@à8€‘œ)._54¼Ú§%/J—sËÕMß¹$ÓR\U/ –†À-ºm.øe½Ì2=+øûêRতX=n0h«øƒ ¼æÌœ–¼‘³rÌòk.ЯnMÎôGËÅ4—& œÙN5®IÓ¯ïêïÛ|Î'Ü¢™,úýí8`X·®X¼“åªz6ïĦä¢_Ë™A%•AO¼tú|§ÈzÌïW5ßuuƒŠ¤i­ø‘ÃÀ~ëÚ\L™/@à8€Q \¡ÝtÛçÍLËë@ýl×sëØèZÁÌ[ýiæåì¶ý÷Þ´œ÷ýJq9çÌ÷,Øïúüïíï×|ç‚ÿ÷@à8€¡ \ÆŠ3v) pÙx0’“ 8@à@à¦^àÞôæBS$†ËÛÞqô$p;Dâ`dìã<8€IãÿQSú‹i3þ‘IEND®B`‚manila-10.0.0/doc/source/images/rpc/state.png0000664000175000017500000011321713656750227021021 0ustar zuulzuul00000000000000‰PNG  IHDRÛˆ `JmsRGB®ÎégAMA± üa cHRMz&€„ú€èu0ê`:˜pœºQ< pHYsÃÃÇo¨d•øIDATx^í½˜Eö÷Ÿ]`v‘ØDfâîBˆàƒ»[°Áƒ… IAƒ†¸NŒ$$Áƒ\qA–eq]Yvÿïó¼¿ß{þç[ÝÕ·ºnÛÌô¹sç„={çzßêoõ§Î©ªsþРAbóýۦѶ þÀÿ9ÿsnÝÿãWãåäÜÜÚŸ%÷¥¤¤¤¤êS ü¿ÿýŸÀŸëГ!K»6nLcÆÞB_|ù} û öµßÜçðš/¾üÒ±/üö9ß·/ø5)ØçüÕ°Ïø½Ùö9?–À>ã×TÙ>ã÷Û?øñDö~Û§YöúôSÇþþ÷Oé“ ûäïô·@û„ÿ„>þ›iãûŽ}ô±mÓ‡ÙGôÁ‡Aö!mý Û¶lý€²m+mÞn›¶l¡M›ClÓfÚĶ1Ð6цá¶~ÃFŠ· üš ´n}¼­]·ž’Úšµë¨6,éñŽnÝúõÜ&°èvAÛE¶ñF>lçiÓ&>¯°àó¤‰Í¬[K[¶neÍÁüÚÛúÁ Òó‡}”¥ÿ¸?|ô±¶LÑ}(sk÷3§ï9–Ý?ûíßtÊ)§Ðÿý¿ÿ—þçþ§F쨱ï›xÙyô?5øÛÒlÃÓ9€þýûï4zÌXÚeׯ´MÃF[µS«h´í¶tÛm·Óo½M÷?ø]3ê&º–íºQ£éºFÓÈÇ(Ãß׺Y=wÍõ7Ò5#o «¯EWÁ®Ågî¸özçq×®ÆkÙôûðÞkù3<Ã÷ÝbêX²Í÷~ã³ÔwĘ>óV+Žvå5ávÅÕ#)c×ñߎ]~ÕµvÙˆkvé•W{vÉWìâË+»l„g]z%]xé»ärºvñeÊ.(»”οȱó.¼DÙ¹”);û¼‹èìá*;ãœóéôsΣÓÎN§žy|úÙtâ©g*;þ”Ó験N£cN(º÷þ4d¯ýhðžûÒî{ìCƒ†íC» Ý[Ù€!{*ë?xê·û0ê7h(õÝͱ>‡Pg¿AÊzôÛº÷HÝú PÖµ÷êÒ«?uîÑWY§î}”uìÖ›­—gºö$Xû.=©]çžµéØ`%º*kÝ®³ÏZµíD­ÚtTÖ²MjQÒžŠŠÛSóÖíÕyÅyî­Î»ÖAWÖ¬[ïþÔuÒƒ5Ó“µÓ«ÿ ÖÓîÊúî6Di­?knkoÀà=ià½X“{±>÷V½B·Ð/t¼Ç¾²¶R­Ã ýývìîrßÐv0÷—C¸ßÀå>t÷¥Ò£Ž£Ã>^Ý¢¿yvÿÍvô '+;æÄS”{Ò©êúܾc'5p{â©§ÙžñìIþûɧŸõì)þ[Ù3Ùöô³Ë)Êžyö9‚½¿f-¸ê*5XfùsôìòçÃí9~.Æ–?ÿ…Ùšuëèá Π­«VÐê)+[3õaZ;ͱ l]ÛÊ·°Ù>ž6ží!ú„íïlŸ&° ×á1Øß”á3ÇÓGl°meÛäÚ¾]çÚÚ©ãé½)ãiËsOÓÉûïEソ†n¿ã.ZõÊkt+ó\ÕάŠƒÀ·Ý^N /qÁè@R€4 åɨT-`{• X´€*À0^wÃÍÊF²]Óu3l,Ý0z,Ý8ú–`»™°nâ÷Y6 Ÿ™À®¿q4™6òF>&Ã0˜ˆ²0GmšøH¹k í+î«Öl—¸VA9ÎW1¤çÊ.¾`¾R™ó%WÐ_îÙye—Ðð‹à‹éð™ç^@§Ÿ èžK'ŸÁÀ=íL:á”3踓àjвê"À=Œ/¡€=8°¸èàäö@ê© `¸â‚ÃÅMÁ•­¯ìW Ø^ýw÷ 뀕/ l€«lÏ~>Èâ‚‹ oæ" ¸òE™ €mÛ©»g²Åí»LCVÁÕ,àªÍ„¬¬†¬lã¢â@ÀšpÅßIL Ó°Û?ýµ1Áâ^d bÀæý`·àA[3ÇviÎ fÛµ9×MÃWà5áëد†/À«á« ð–tx»1x»+ð*øò@JƒZx»°F`]{õóÐÕàí3p°Þ=¾¼{zàÅ1 ¼\Bÿlî{Сt^ ߃x€ªÁ«¡«Á è¢ïyvì ºG’2€÷õ7ߢvØ>þäSlO+sÀ«¡ ððõÀ[Iø¶—_q%Ãöï ¾ÊºQ–ä¤Íyhøé´å¥èÝGî£Õ®½Ï·k'Àî¥ ï¥l›Ù¶Nrì#¶]û„o“Ù8~]¶}ÌiûˆÿÞʶ…móÄqü½ãh=ÛZ¶5î§÷ÙV»¶ñé'èø}ö¤Õッœ 8CsçÑíåwPCŽ{ž-\Þ7ø$–úVÃ3ã¡2ˆ]¯Ïç¹´–«ËÐ °¹`U@s Ý<ö6}k9¹­œÆÞ~ÝR~'ÝZ~ÝÊ£Ÿá± »·ì–Ûï¤$6ö¶;È´18ÃFßz;[¶Ý|ËítwÆnå¿]÷f nÀ ‚MÐ&ÚÐF0dzǀÄñððM6ÓÏ@û:†¶ãY_v`}5]Ì^3¼e€ðpïÙç±·{.C÷ìsé”3Îq€Ë°=–=Û£àÑr'/eØšž¬öb•'ëBV{²/6²Ž'뇬öbƒ kz±Ú“5AkBV{³ ´ìÉjSÀ €,€éɺ^¬òdK2ž,@«M{µa-€ ƒ7«Íôjñ·íÙ.‰gUÔQàµçùêãÔÐ5‡®aÊóu¡k·ªÐl5p‹;Àãí¬<^Xtµ·«¡«=]íí†Cw˜‚î@ޏÀÓÕÞ.€]7 º®†.`k×]î‹èº¯¿ñ¦[×® [üm^oe½Ý÷Ö¬¡‹/¹TÁž°ÜèV¸€ígD›^|ŽÞ¼ÿNeï²­f{ÿÁ;i ÛZ¶ ÝI¾ƒ6³muí£ñwÐÇÊÊ铆×ýM½îÏœ÷ßI±}øðüÙwÒ¶®mxè.Z˶†íýî¢ÕlﲽŶññ%tÜ^C<ظ°W^}§f›d`;†cÌ÷Üÿ `Ok‡M•@ û[‡Ž•Gk„ŠMÈÂk½\V+àyû÷Ðw£;ï¹îº÷~ºû¾Ô1Œ{ࡌÝÏ[vï}RÝÃÇÙÝã Ûîw?ùì^O°áxƒ¬ü®{)Èn¿ëRvçÝžÝÆƒ ˜@ÜÂØX(dŒÜNcø¦¡ýnëß»†¹‚8›†7Âý5Bü8¯o_zÅÕTÆð<ê~C÷ü‹”§{ ‡•Oâò±'qùø“¹sŸ¨<[À!c€ÈjÐ"d¦ÃgÑÓõ¼Y+T Èšž¬&ðdáÅšž¬.¶=Ù(ȶC¨˜-È“Õáâ(u@ÛÁ¬ Û0¯Öönël“†œÃ¼ÝXè&ðtn®Œ§kC×ôtáí†yº¯éé¸vˆY{¹ˆ² ¼ àšÐ5ÃË€®^Ö!fô x¹ÚÓÝŸ§Y‚¼Ü0è¢*/×îk ¶ÕE}áâÇhácK|¶è±¥[lÚ’e´˜í1mK§Ç,[²ìqrì Ÿ½ýλtÁ…ñÚ‹iéãOdÙ²'žd¢œŽ?éÂDvÃMåôø<8Ц ޽»ú=ºÿÔ£iÃò§éÕ»ÆÐëlo²½u÷z—m5Ûû÷Œ¡µ®m¼w Á¶°m½w4}ÈöÛÇ®»ì >¦ <ÃýîMã8èüJ~ÏeªÏ‚¥ÍlÙ6°­uí}¾]}ÏXz‡í-ØÝcé ¶µ-¤c† 
Ê‚ímÌ:Ìá*ÏöÛ4T?°ô<$º¾Ö­ÛábxaðÐི7ßr›ò^á¹*ÀÞ3Žw¿ë}>L÷?ü=8~=ôÈ$?q2=2qJ¨Ÿ0™l{xÂ$Jb=2‘¿#ÚpaöçýÙx~<ÛðÛ`zðp/lÓƒsàßþÜf0èhO˜Š¸Þÿ-| ƒçŽèÎÎæ»õ9ÆœñE,¸óË.£sp¹Ü¸˜·=’çlÑÁ5láÍpˆ37«çc½Y†¬.òd“†‹MÈêùX´nØXÏÉú¼Xöd=È"dÌÖ†½ØÌDz»s²²6l[sè¦çf1?ëÌÑf,ȳ5çiµGk†’õ¼­ Þ¸ùÚ°yÛ¤s¶IÂÈA¯©®§ øfæw3¡å$áe ÝÀùݘ¹]g^—çÝ+^ótý¡å t1õ¡½\3´¬×&àÓ)®†®örõœ®ZtƒBË º.pÛ¶í;Л“|Æß?%ÄplÚ&MA“¦NÏû$Ëð{´M˜<•”MšâÙ#ü7l<ÿvsà  ö€@Ãÿ‡'p»MÈÿÁñ40ϺãáßÅxáh{xÉôÜÀ!|„¨]xº˜ó=ŸÅ§€Ë ¨°p !eÌ Î#jÌ×\z´Ze.~r>9ž¬š“ \ø”™—Å…)i¸Ø^ôä-|` Èš Õs³6dMв°ÖîÜldh1?èBÖ„­éÕâoÛ³pMÐÆÁ6Éâ¨4a›t7jqUeÃËt¹Ü¤sº•nfN·3‡˜ÍUx“„—íùܨ…TAó¹fhYC×ôrET™Ð²ééb.7h>!e½ ÐÕ°}iå*z˜e&òµΊßQÎËdßõ×¢‰|M²mÒ”©|-ƒe®i“§N£V¼D§žvºšOÅýÉ|ýÓ¦¯™€@›ä^;•¯¿Sgø ×ñ/¯¤;ßÞ_²ˆž»æzm…k+¯»„V]w1½ÆöÆueô6Û;lï,£÷¯/£ulë•]D]»óÖN¾ˆð:ç=xÿÅüYóg:¶ší‘Ó[lo°½6òz…mÕµ—ÒK®½È·ïTL§Ò¾=³`‹P2v«T ¶mпW^y…ÆŽuÜfx³ðÆ&†W/€&3¸¿³*x„†QFbu-´,{$†‘Ùœ¤fîð=¦Íæû¡æŽ1rÄ1j›Y1—möšÉ6Cÿ6ü>˜9ˆ° h‡),>{à‡<ƒ~Êô,kx;gÏŸ à~püD^U>žÛÁËíó€ÂùX†ðò<·«‹2V-c…2V>q̉lÝ0²Zeì®06ÃÅzu1F÷•ñd£BÅldõÜl`moVÃV×\iìz²²­ [Ó«5CÊæœ­½Ù %yºUY•µBÙöD“ÌÏVÆëMÓÓÅ C·²àÍZÉlyºA ©lo7ÉB*ÓÓÕÐżn§kÏçjOW‡–õêå /ÀÕÐÅÊe¸X¹lÎç*ؾþ†òlŸqO§=ÀýÛ±q˜‚C´§áüÆÑCD­èm"ºèÚCãù­Œ£lËyáÒI'Ÿ¢ òð# sAϰ7x…”5Œñœ<2‰†=ÿ‹tç!{Ò{‹çÓ³—Ÿ¯ì9¶ØV\y>½|åpe¯±½1b8½ÅöÎUÃi5Û¶µW;¶Þµu|{çðó²€ë-¯]Y{Õ¹ê=úýk®>V»öß¾uÕyôÛk#Σ•l/³½tÅô"ÛólËÙÞš1™íÕ•Ûé=5WkZj°u²\Àرd¶ün´§ÂCƒ' ÏЄ,B],X¸X…°"Ú³Eüw€Ùaˆ û ø³²Œ¿ߣm>‡BBÍ™Ì[°ˆÑ±¹|¬æ†gæÌÏÝTÌ]À€_à@}Žß RÆ×æ{ƒ›mº‚õle¨(€‚ôtƒÚôÀÊмð”ïcè"T OáfÌý"´ ‹©°’yø…«ÕÊðn;ù45w‹ER•òö7|¼—»§:žldõåÁZž¬³…Ç1„µ7 ïU Y¬6Ö+Žu¸Ø…«†¬ Û0Èšaä°ÅQa° ›¿­Œ— èÆmª H“¾6(tÔÓ5·©ßÊž.¬ºÐ [ÁœY¹ìé†Íéš ©¢<Ý •Ë&t5põ|®oXhÙžÏ5P™[…^eضi×A-TºýN¬}Ý̓imX s/Ýé3ž†â¾o¯E¹›§ª0¥§¦õøú`š9ÛSO?CÇaîVÝ»}klT0ûùçŸé—_~Qã1Óìçü]{€Úð›ÊL«CO]t–²gØ–³=ñYôâÅgÐ ¶Ul¯]r½ÁöÖ¥gÐ;lï±½»ì ZkØþûN^jŽƒ‚ò³Ïá÷œî½ï…½ÇöÎeg*{vé™ôÛ+—œI/±½ÈöBÙÙôÛ³lO³½1õ:¸{§ôa‹p³öl5h·Y°h1g‰.ü¯ Ú›…wŠy€G¨‹ó¤>Oà/]ö¤2{¢÷ã üD¦&ÿ sþðúƒiÃn¢©KÅQæ,2¸Îâ÷žu¯³à¶ÑoKèQžkðLžanš ws¾Ïcs Íž»aÊc7¼qí…ÒïÚð¬µ7­ÀìxÐÓÊhg´7<^\ßã| ¬<†çsuHádìóÅþ]¬R>•÷â»=ša{ø1Ç«…QûTÊsµ‡©½…¥û÷É&kÐêùX2Îòb­9Ù ÈjV-€²æfM/V‡Žd]ÐbKö€íhÛä¾íhÛL¶Ù}ÛÒÜþmiAÿ6´¸ -PBO ,¡gØžg{‘í%Øn~sèÁ¾cÔÇ9š_a½÷Ÿg[ÎöÌnmè ¶%ùû´¡G´¥yÚQE¿v4£_{šÆ6…mrßöôÌÈ+iðÎÛÑjžŸMͳÅÅYÃÖ e¯Ü-Àkz¶˜;„W…‹=.ú‹ÁV¸-¼R€«ãÙe?Å+Þ¸ mÆÍ{ÐöM3Ü×+Hß?\AvÈM³øsyžkÓnʧ{Vã9«ó2¯]¼ä~:‹ÐYã–Ñ"†s ¹v@lxäü7Ž1€¦ÑuCø˜®Ÿ–ñ¦Mï™%Ø3Âê¼h/e:ì­ÃÚ:¤m„¯á ÃëE{cŽó:#Á»õÂɼp ó·˜Ç*eìÏ=çü2µ:ûo±çsEH^±/‡ºö>à`­“Âc¯¬o…±µO66…áÅš«‹mÈš€ÕòfMÀzûgyÛˆ Zu߬ Z:6XàosÎÖLf[{ßmÜV š‚­½ 9)lÃæuƒ¶)Å­`Vóº!ž®½ *lë/›Ðm®B̼’‹Ûp~]è&™ÓÕÚ3W.csXr ]½rÙIŒ‘YD6Ÿ«CËI¡ûêk¯S›¶í•—¹×~*Û‰5`ûDûx°2{PÔâ%ò Ãjì>’A× ¡gï¾|[L¥GÃvt–á»AðKÎ$ˆU¾A€7Ÿ?‚÷ïÃl¸ßÅ‹@¯i¾­¸ëVz¨[+z„mB÷V4™mÛŒî-©¢gKš×³-ìÙœ–ôjNK{7§§Ø–³=Ïö¢¶>ü7Û˜ƒTÇóùçÎñiÃñàXGóó/¸¯Åëñ÷r¶gØžb[ƶ¸w-ìUDó{µ¤9½ZÑÌ­hjÖ4™m"Û„î­é©«/¥Ai˜luv$¶€, Ð…´>ز àÕbÑôjNÅÜ,¹ÃË$Ú'°_Ì2½q;èvÖè=¨Á°±4“—Ž/Söû‡t΃ ìYch˜áÙžý ƒøAñði)/Y÷™÷%ʳ~ÎfØž}¿)¼ö¸[Êž¶2Ò¯Y{ÄÎí º~Hׄ®ZF_23QièêT¸µçr1Ÿû ö¤m;âÝ“#M{óÔll3‚ñ ÀüôÓOô¯ý+ÖØ: ´4dE¦+6„¨Û…€íQØ àrö+Ãð]H ûé§ŸªÛ Ãs0ÎYì!vMÁ¶h{zéîÛh|÷b†X1옦ô,¦él3z¶¦Š^­ôönAKû´ e}[ÐSlËÙžg{Ñ0€ÇöÉ'ÎwÃN8Ù¬¸/öã÷²½À¶œíY¶§Ø–±-æÏ\Ø»%ÍïÝŠæô.¦™½ŠiZ¯šÂ6©g c =} Ãö¯ª[Ö^y¬AkÞâ5¼z‘”Bvæjç0$î¤Ó‚gžuŽò@‡Ý8‹–1÷0C¾{Ž¥Y:YÅØÀçfÙ“¸F‚:6|Îð‡ž¦Ycß1 ߇i8Cwèè ZöÄÃt¿æœ±Ÿë!þ{8=€½]³ñ{Ò¹ÃùsôwïÁïå=`ËžxÈ-¯ÒõžcH/a(/yüA+D=œÆ©ýiÎãgŸãxÖÊιŸ3iÔ°?¸mˆÇÏ¡{°?î^ç÷{vÖ½î|ò½t¦jŸ³½çœBó&^OƒÝןv'‰M¢«ÿv¿f¢o>s¿)Oåù\,ªz˜·La%ó=ÜA±…°Å¶ ‘¼ž-ÂÈz²K!Œœ™³=Jyµ{ì{{´û:iÕÅ*ëS5]0ò}wЄþhÂÀ4™m ÛŒÝ:ÐìAh.Û|¶ElKÙg{rpzvp{znH{zqH;Ïp\~˜9&€vté¡ øúxq‹û/å÷±½0´=-g{fHzŠmÙîh Û"¶»w¤9l³ÙfêHSwëHSØ&²={Õ4hçí«[ Z¬à@“@Vô-Œâ‹?ÿ(Ø*HžM÷²×÷ØÒ•7:lô7Ïç\½ôüñœçs<‡çÆà9dI™c<Çpu=ÛYH[Æp~p¸ûZô°ç1T‘ÎlÝ< °Åßïtólþ›a;Ç8ÃÓ5Zݦ (ã‡ÓýðzÙ#nÐ`Ý8Ë qg6;ðЛg)Ø"ôÜ Á9t/BÏ÷Ã¥QÓùoÛ³z# áßuæ½1cÑÖ½Ü&3OuÿÆçœEw²G;aä`þPL×N|”Æ_Ë÷w¿ŽÆÏŸH×ìÞ€v»f‚·¢Ú[1Íí¬çm1_ØbE8­!y²]!CU¶—ÒYœGùdÎ(užó¥Ÿp:ð!êÇù£ÅÃ}Øq'!ãœcÇŸÌùÕOƤLòˆÈ¿OÆšäf϶û8'ÿÕmv¦—¾‡&ïÙ›¦ìÕ‡¦±Í`›ÍV±WošÏ¶pß^´h¿´”í ¶gèAϲ=ÇöÛ‹®>êßqá>ž}töã/Èïc{m¹kOíϟ϶”m1¾wþÞ}hÛì½ûÒL¶élSÙ–aGh׫[:¶CéyZg®Ö1Ó£Õ^-Gy°e 
ž– Û3îyŒ3pù¾sÔôryr^χÏãÇÎã…î-¯Š{R›~Œ_ïx²Ú ~:Û‡ø}x\ÁÖ¬÷·ãÙe€Þ4 ž¬c÷ŸË¿ç܇<Ø"Ç—²7«<Ýœ,*÷ Ïx¤øýj~xÙÎ<ï}Jæðòâ¥÷Ó™|ÿLÌû2lG!Œ|à µÈjò¨!Ô`È 4ÉÝà àNÉ ExÐî#';Í٣ݽÁ`ºfÂ'„|çéÜ6§ÓmJVÛ’Ô*gwkæpgó½NÔ^ó€KéNÈqåQ|ÌGŒPûŸ±@ ¦ô~Ûá\Ä@­FfØb52’¢rÄÑœÈâölU ÝS-¢P@Xîâ´!kÏÇjÀêäj!”Îiì.~±A‹û€­ Z Xó_ ÚªÂÖm]‡mu@[™B¡ƒ†o×^Á5¯¼W×-AU ‹ÜËAsºæ|®™þўϵ3QÍçš ¨Ì\Ë€-æl'MžÁ •ÎàLoئwª²#E¿åçü¸iÇœˆí|Èw–²ãNÂŽ†ñ)X#]ë¹ÊN<:vÒiçñš›‰4€³g=þijœiî|Ÿzæœ çε~aˆ]Ä;vúÙ;ãœ2Þf˜m=2•FvhF«&?HÓJ3Øf6Œ*ØæÎvÄ0ZÄöØ‘Ci)ÛlO3”ž=f½pÌîô±»Ó ¶—ÙVº·øÛ´•Çñ}¶—Ü[ü Ó¿ÈÖ;DÙSüùO5”–±-f[tä0ZÀ6ÿˆ=¨¢tš ;jzþž[iX“]*[s~Öôlí•Ǧk‡™õÞÚø"ä ˜³luù4þ°Óïv¶ú<¦<@„v*€%`«oêOxé°±†ªë騩.ž5z˜¨02«»sJ:€ÕQìI k/|2·ïØs²aõ­2NYg^Î4@׆¬¾àúnz·AÛ€‚’]íà ›Ë òr£ `7É*åÊìÅÕ°¬l"Œ$û{õqÄ.¦ra‹ðruC̺òPVV*-«ÁU äé†:À|®žÓ5QÅUÒÅ tƒ•¯¼Jºtåõ<^Ðx™ Êá| @rbšÓÎ÷ìÔ3xwi ½S¿³±6õs.æ<éh_ªì,mÃ/ãH×e¼‹d: ¶/Wz¿¯– ¿Û¯äŠc#‚í"~ܵóË0]åØcQ¦ß&MM#»•Ðë3'ÓÜ“£ylóO•Ò¢S£ÇØ–²=~ú¡ôô™‡Ò³lÏŸy0½pöÁôÒ9Ó˰s¦W\{•om8¿†íUËðØJ¿ïEþ,ØógJËÙža{âŒCiÙi‡Ñbלz8Á ?™–k×JÁ9èí³µAë‡møÊcäåÅ< Î…=]XŒ‡X 3q2oû™ænû©¸CÁäô»°í‡W#c› æ]yNÔYå„}šÊåçÎÃßOó¨ ºI=ÇÞ'ƒt&@ªá ¡àa¼:™ÃÂ÷±—ê@f8 Uyz žé†‚gަ!ðÌ®já“{ßY億ϺO/‚rW'³×zïÙüž³1ËÛ•¦ßHƒùuðXñ;”'«à Ggàw²¿`¯FÌÞ*‡‰ç3`çOÅÞ*¿ï:ž‹U{xïVm2è:Úü»èTþûÔ;œ=»\G»5ØFŒw@ÝvÃö4ºeöxºrÐhЈ‡½-?zîäiÓU–*$·ÐÛ~B¾…³I¡BÈXU®ækyqÔYÃ1BÎ!¡Ó9”Ä!ä#ŽUé÷äEC8 #¼Z\$PîÎ\üXæ.áÙš‚¬>6ÃÆ  É…BZ´BYï¿­lH¹6a«çq«;Ÿ[éÌTls]œ'5ïÝ 4f…!{»Pü|nøV!³œŸéå"‚µÏþ¼ðç„y-Ætî¿×ÓYç]ÎÀ¼Ô3@Áιé+3Æ`<—á7ܵó|Ê|ç+øa±$Û%×*›:cí¹ÏÁôÜ +y?þuTvùõYv1ƒ¶K®E¶]:âÒvÿíÙU7rÁ”ŒÍ¬X@7öåT”ófÒ¢óN¦EçŸL"-½àDzœíÉ‹Nâ}·'ñÞÖ“è…‹Oâ=·'ÒË—žH«.;‘^½üeo\~½iÙ[|?Ë®àÇì5~ìµ+N Wø³V^Οɉ¼¯v-çïÄw?uáI´ vñiôäˆ èe.”ðÐͼ³ãýUò]öT70©Eh5lõ*csÕ±¹òž.âȃŒ\½wÜ5Ž7I»[ø‚„*k” %ß®`Ï Ûb¦EÓnbð¡Ù¡£i:öÜ«dõ=w3Mã9gÞôåqªçÜç$Øžz#oñ9÷•x;³Ÿûxµ¹O¿‰A9”ó–g‘’æU^©'oëQɾÇe@ÊàÉ “ô{à5ïÎ[zœyWxì Sµ¢ØõPïrVÏ»óLwîõLº½ªñ›O½ƒf*oµœNáÏ9ù6cKÏC×ÐÀƒèŠœ=´ÓÆžÂï;…FO€.Ø€\ñ€j[JÒÙ_˫٫Ŗd‘R«9£Z\Ïu‰Q «‘» -”W{ª³ ™£àÅ_aÅ$VQb{@«kÊš{eò=%¢¨Ê¢§¨ùØÌ¼l°[%к^N˜§k¯B¶H%õlmàjO×®kg› ªåÝV&åc\è8-à†>ˆôvC<ݨUÌæ¼ne¶ UÖÓ +ë´ˆJ×ÐM²?×^µŒ…ŒÈ]Œb¥G©¦•–<þ4à—ò€~);/Ë[ {œ¯­Ž-Zò„ß–>ÉÓ\ڞ⿟âi0Ç–,{Ú1þ\ØA¼ðjëóµøÇžx†§ÖžõÛ“|ߵǟ\ÎŽQ€=ŹöÄSÏñúמæ[×nß½mzå%zsÆz›íл3{íý™hÍlÇ6̯lÛf×¶ðmmådzl.?`[ø±Íl]Ãw¬­˜@kgñ÷²­Ö6w-Ÿ6Ƽ†ëÖQÕ ÆöJ [íáfÁ^Nl‘a(hN6hŽ5i¢¼å6'²JjÁ‹rJ†w;€LGH¼ ænuö(^V®>M%}ûSo¾5 …Yß~¨RÖ 'ÏHg´oN‡·-¢ÃÛð-ÛmÙÚÑQíZÐQí[ÒÑZÒ1ZÑ1]ŠéØ.%t\W¶nméøîméÄîíèÄí]ë@'õèÀÖ3ó÷ñüø xÎz ÇŸq\÷ötl·vtL×¶t4ÏQlGv*¦#:¶¦Ò­è°ö-”Õ³=bZþü \4þnh5tÁVƒ6(rlá1¡ä.ê'›y‘½¹[7œ¬’[0`œ=·\q‚a¥S6šéãR.: H»È£¿y0·’…Þ§êK»È 0zÍD7¡„——Ûj@šæesòK8©ͼɘCÅ^W/?2IC£v\HâÂÅQÛwªëÉV²qÞ®^¡,°m¢Ò=Á6ÍÐrä¼® [7 ºÕñt3íàj òtÃc˜†ô"ª°âõðtÑ·Z¶ó-‡%Ä0kçÚ¨€F—ð³ ØåûT­\£^îQœz†ª_0$9ö¤Syõòiʰž†Ê`0ìLJ! 
®#¦rÆÙªl' {÷mCòm˜Â2íÌsÏçh[°am‰iH1k*˜%1$ð± ÓiAˆVÖLè†z¶&hMàF•ÓC¢«¹€ü5#¹–-Wy¸7ߢL¡®êmåwó®®þÃó¸ xdðÎÀ`¾PdÙeóbËä™eóŒryÙ%ò0·i™b¦4žY&ÏG³<ƒÒñ0Ù(¹Ò§ETæÖÅõÕÀu=N]O“çRaH®êØ24501ÇZÎ…çafÝZU¯VšGqymå*|ïÔ¯½EE°âZ^u‚PÝÅhXoñ9å ^îªSNi‘ø…á÷Ø™¢xAƒV­×^µTà@/ 2KøÙusÍ Tfa»’ ݪе‹ûºAÀÅcaЭpߪN¸° mØcj~÷*Ç®¼z¤Jû§{íõ7©‹ý 7UÉVËó¸¨8s;Ã@¹[•‡ârP¨ú  È£.cp!øì¯Z˜e Àgc̹b`ƒh¶ï¨úÂnýáËù|àœ]Ê‹ŸP>  .Æ<³’ÛQ^.<\ÓËÕn—[ÈÀ„«âáfÁð¬ l/q­[¸€îU®—{ ]„–‘DåG3\à•</¶£¾N(垜Ш‚™öø<ϯ 7 ;ÜÂ3ÔVÎ+£m»=m|¿2ÎÿiÚm|lÀn¹Ý1GYtA ϰTÀäß ÅÀR“s–0 H`&V;Àt¡éž ¬†]zåÕj‹¬ì²lWò2ü+”§ŠÕ‰( Âk=—‚µ®§òÂ'U^Õ©åïÇä…gOÖ-§ ë&­PÛ{´½8¬Õ«!‹ CÈjoÖž“õÂÅ)A¶²žkÜëí9Û$àÍÅ®#h…²éÝÆy¹q ’T 2Aä‘F…–ÓžÓÍš×Ý¥)íÀ¦CËq!æªC·˜uHÝ ð²™ÊLŠ6Ÿ›$´UÜ@ïËÕÙ§]„•aa^n®ÃÊQ^nU<\x¿u!¬lÁö µh&1l´€­ Ü,O—½.x^ðÄ­häΜ.¼6„;'x½Ø.ˆ•áoÃð|˜á3´aŽR ¥W'Ä0C˺As¹I¼\3¬l{¹Gò¢I3´¬½Ü¸yÜ °2`[aå$!e¼&Wó¸ÿøìsÚ¦a#jðÇmÒç_$„­Y Û [ ] „8¯ºŽ=^/¼8 _ /x|×ks=@<æ™BÕ¡Tó ´ioÑñu¨Õ‹ï¿z$8žŒañ¶?9[ 2æ„e¯S°¼ °d/Óó4˜XÕ{±ámÂãD’E çl"ð8•]À^'àÉ+€ag²÷ ;œÆE-YPÀ“@°€çq O¬@!vä-D‘Rñp^à˜¢“ Ó ,ŒÅvb¸îÃpÅêbTîÑ€EŽc¬4Öó²ðd{ôãm=.dmІAÖ^üäÛÊÃéK\«N¨8Î+­Êóq«“x¶Y¯I0‡W)ÈN„V„>l×Þ“[EékÚÓõy»|ÓòvƒÃËÙž®^D¥“cØeýª2ŸknJ²€*,†®&×,l`{¹¸Ž$®^¬VNÛg–?¯öhyûµô¾-÷y:“š³ïË4ÿçêÏQßñ8–¨ï|™÷˜e[fÚË+ùoðOí¥—[áÙJþ{%½øìezAÛ þ{ÅK޽ø={a…²ç^xQÙrop~ö¹Œ¡aO/Žž~Ö±§žY®ìɧŸUžpíq®l¤ÒRº…T-.| « !C–*r  Ús‚•Ð#S¸^«Wæd±r’møëáêíU*ϲQŒ^m‘2s-»E ô¶(sï0êå¯&ŸajµyÀÖ)ßjsÎ2†LciV¹×ˆ«à‘¼¥ª†½Öq¦·”9·Ó²WÓs’Ô2¶m¯¨¯Œa{šß&ó}¿©ýßqÆ‹ÇGYÜû+ù¼}Œê¾»›@߆¶Ãdn#6_ÛMá¶ô™ÓææyÐçÌwÞ íAÏzŸ=ú€ê'³`¼/6Gm=DC?DT6—·Er?…!}«®i­ûµ“­.“c]å)à¤ó_—ó}e|½|îy¾~Úæ^Sk+®±{þE¾î&4ïš­¯Ý o_äë}–1 ÀÛV0+çkvЪXØ¢®y=5nZ$&m  ˆD¢ Œ¸êš@àÆÂ$@µ¿VžË””¶¶ ˆDuSÿçÿüúòË/éÛo¿¥Ÿ~ú‰~þùgŸýòË/ê>nMûõ×_é·ß~£~øvmÒ\y¶w [žëØÖMÑHg—ó& ˆ*§4ak/œ ‡­»J`[¹“%â–ö ˆDuSiÃÖn6l±ÅÅXq,°­›¢‘Î.çM4  TN¹€­Nû˜[lÏÁV ܼ„mi9­X±Â±Š2™OFáûÊX÷2ª(/¥ÜŽeÝ+÷ÞÊ|¼VÚV4 ¨C¨UØb{M^-(VTPYw0¥å+¨¼4lº—U(@'}}¥V‡ÕÀ…-Ú£&Ûçj…qîìö{Þ|=^[ ZƒÐaUŽ?éûëjûDõ9é¿É®s{ݪklZ°Å6#»A¨g«TÔlË*Ê©4¤aÑá|©Jzg5˜¨ãÏ‹Ž€È{¶5Ñöï-«È ”‚Ú"îù$íWÛíŸäû³t\É IußÕŽIŽ?ÉyÈÅkjC³¹øò™ù9pˆƒí¿ÿýoúÏþC¸Z Øê´‘addX ƒmY…¾ð2£l?õkœPo68CŸW^«6nMï«´\_¬K©Ü}†¯ó¹ü|™f¶<)ÕYËÌï±ŽÏ Q[Þ‡7².7?ßx‚ãG' üýú½ #çwñçzÇb·¡þíგ°Îìx–U"{þ#ÚÏùí|~JÍö÷Ã7îùHÏ.ªý+Ѿæù)/uÛÚh#_òãÞ!áùG;T–aïOâùúôÇç£Bÿ¶J(,ìÏ0µe<§ú´§¿BÛ×m·œõ_5àÉ\WÌk‰7Àú}î€)´}cT±×¼?¬%Õw‚ã¯Ï(Øbk@Û«W¯,àÚ[LØêr}~Ïöó/¼|¿:¥a g«O¸îH|_B³=Os^5îyç‚ [/|ly¶ŽÐmfî;‚ÎtîlO¹ÔçUãóLØ;ï÷žŒ:þÈßïv„Ì…ÈùžìÑ|lís£>3{@cÎ \*9ÊŒ8ÿ NfTÂn?o@äN¨¶q|qÏë A¬BÛ?AûúÏOh1ßí^8m=Åé7Éñ'¹ØÅÁ: Ʀ^جÁVu<[;2¡Ú' ¸nÿ3χ۞êx"Ú7×ý×§W÷øÌöŠû}v»µo|¨<âúÕ¿éÛ?°É:?•Œ°$Ñj]zMl5hKJJh·Ýv£ÓN;ÍÜ8ظ>Ø~[N{˜5gºÕ o¬æ]Ôãž÷F†ñ°õD[{Ò„U¸ìß0º·aëûü€¶ˆ¼Ø-B_û}ÑI:œÎ”ñp²Ã¶ú‚P؆Î7Æ´Ÿ _í噃µ¨ó—V‘íÕ¾8vÛËÇÀÂ{ŒÛ7A ¬â`wŠ{øó¦çyª|ÄDoˆgœÝò^gk3¾}sÚο¯ý¾øöƒmäõ%ª™ÇïNùúW¢ã¯äÀ»Àà[ ÚvíÚÑ¡‡J£F¢qã¸tëwxÀõÁ¶is•Ê×®ôÏ>ËTý±a‹äü+󶾎m\ƒæü’Ã6+ó¢^w`×Qì AfJ îâî{>tž<¾ýâÎOÜó5[ß8ùîÙfgß`ÂÑO’ÁB°^’µ õ¬âß ÛxýE¾?d°•XÆ_’ö­:lc~_l"O0X¬Ôµ À¡ãN8FŽI3fÌ eË–ÑsÏ=G«V­¢åË—+àÁõÄMàÂöâË商­ž“Ó!B= Ãlø.¸*d¶5Å¡Âa±Ju¶€0WìÈÚnëø³abüþØÎ¢aZõ9[s¤[å‹jTd#`ŽÎ¼ fŸ 0rÖgd{ZÑadÃc2Û?AûÚçÇf¯òuÎ…ÿ7†|¿qaŠóLã.xqï{>K†¤¿3ªÿÅWÜ*hÿ4ŒÓvvXÛ5ñ·o®û¯Ý>ö´OÜïKÒ¾U†­=°¯O ôwüqç·ÐŸòlÕ Ãb©0ØêJpNÙòlu©9†Ê9™0r°g䯄ƒBMYÛO|‹¬ mÀÅپЕó"*sЪ{q4¾'è5˜ÛÕŸ…ïHò~%ÆØã÷/PRÇo¶›»Ö^ˆ–ùÕƒ­W¶ñçßl#»ýôÖÿùñoã2ßãœC3ÔüýYaÊ öOÚ¾¶~y1œí ø¸ùA}þ¨·÷þ¸çíÅyÁaäHý&ðdìöÑç'iÿ kß$ïÒ_’÷Û×§ , ³ö¢‡ý>'*`/@L’Or|¡¿Ï\ôsý¨Î1ÖGØšù‘åFvÃÈðlÍ‚õl?gÀrtÃü° SÖÌó>ÁYs il)tA9°•R²ó,픬j¦ï×ô±ÄEjúxäûr«³¸­?I`Û¸2°-»ìJÒöòªWò+©EÄÈÚ¿ä^2KIǬ^Ç´· H{V¯=ëJûÉu¤~œç = l„®êJG–㬿Yνœ{Ñ@~k fa{ÙÏ«…wûòʺãÙŠó[Èr~êðù Ý6¢ç(“ÏMŠê° Üñ©Qتðñ¥Nù¢K¯ —V®ª3adéÄÒ‰E¢Ñ€h ªH ¶Ï¿ø}ÞE ¤>û\VÙ%Ž lE¸U®¼O´# Ô% ¤ ۳ϻÐnÖjdíÑ*ï–MÂÈÒYêRg‘c½ŠDUÕ@Z°E)sëOÀ>ÛϽð±·Yæl¥eÏÓTµcÊûä¢.(, ¤ Ûs/¸8"ƒ”F¾CÈ0_Ùª8TÑG‹/×™JrýùµÑ‰ôþᚬ1[¿S¾³°.Pr>å|ŠR…­›ÔB§lô…‘Q•àÂK.§ .¾LÝ‚æl«œÀ>¡‡TùÌF…#öäE ç7JG•ß!š Ôm ¤ ÛsÏ¿ˆ”e¥kô`뀶*°5³;…U†©N=ͨÏ÷<ðz³IêUÆ ì-R¯3»RN\Êó25! 
ä¡Ò„íð¬0òçFÕ7Œ È^tiÕ`«GvaiÎìÇ«ZO3êó£ ÄÕ«Œ™ÆÕ³tr«J½Î¸v”çë¶ çOÎ_!j -ؾ¸âel‡_˜1_=[3Œ|ÁÅ'W=ŒžS4¾Þc’0rlCëAÆÕ«Œi%­)õ:eÔ§%y^4"È; ¤ [=_ èfÁö‚²Ëèü²KÕ¼-lÅËÙI-ì9Û yÆÄ ¼«XO3-Ø¢ÂIh!ô¬Î0)½±LêuŠPˆ€ü&Ñu!j W°uæl0²òl±8Š«nÙ’,ª l“Ô{LRO³J° ¨·k׫ŒPÜ*h©×™ä"W"°ºÏ'9yMœÖåyÑH}Ó@Z°]ñÒËtþE—Ðy]ìY`ùBk‹Þú“©Ýèxp¹®§ýùIêA&©W'0©×iÖì¬JNÜêÂ4îýr‘ŒÓ°‚=ZÔ²u¬ÐçlE•¤´™´™h@4PˆH ¶/qÙ)UËb•$êŠ`Øf<[Àö #K9ï6ŸbG—ß$ Ô®RƒíÊU™Úðnø@ÏV÷R϶vO¾t>iÑ€h@4P3H¶Š£ ZÍS ¶_Ð¥W\Eó‹.áp2lå*ñlEè5#tigigÑ€h 65lᤂŸ`©æéçþ9[†í•WÑ%WŒP·°•¯¼šÇad7¹YyÇ ùÆezªÍ*ß-Ñ€h@4H ¶pR5KÁSØç_ ¤pçò«®¡ËF\M—¸FY~ÃÖÍZÛÚr’B µy|òÝù×ÉåœÈ9 Ô¾Rƒ-;©à'Xª- ¶W\u­ó¢+ñ¢ØIöW¬X‘Iâ¯+ÝTTP9?®ÊÜy¯õ§ô3“*”—†{§â³*ï¨÷°«§«¿Å2¯5Ž/ª^­á5ã·+ãïör9‡Tò#¨}AI§–s  ˆ²5:l™£Êqe'öó/¾ÌlýÁ׌$WÛ*;ŒÌp+5VçÕ‡UpQ u f*ð'þ¯$hUîe·V¬Y>À³LC¨ú}|_1®^­ª¿[^ê­Îu€mS†oUòKç—Î/ ˆjSiÁܼòêë<Žâï/²a{=]]ÍÀe[õêkþ9ÛïÍóÜÌz±š %¶õd”“†ƒÊá}¦ øÐÒyaeõbëÕ&+±'°• Fm^0ä»E¢ªi 5Ø27•ãÊ hl¿4<[÷ªëFÑ•ü¢׎T·¯¼úº[Û³ô{­ª¢Žá-&‚m,Å’kØÂsŽÜóÎ ØVMèrv ˆjSiÁöÀ–ªíªk¯·`Ëä½zä ¸Ú^}í lm0ºž`bÏ6 ž¬ Ëf.¬„÷‡hƒßï *íÙòñÅ­bÎ~Þ9Vs^6I=ÞÚ”|·\ÐD¢Ñ@îæl_}íuºÚå(na_~ùUfÎö ¾síõ7)à^3òFe¯¾þ¦/Œl.@Â\ey9æ,6eìÕêEC»ó¢æB$$;T[Îó¥‰a«ç‚zªx¿»P)¾žnp=\{SX½Z-Nûù¬P¾Ed2+Z.ì¢Ñ@]Ð@Zží«¯¿Á,u Ãß>ØâÎÈFÓu£nöìµ7ü°M¿Á’…fÓÿ^¿´©h@4  d4l_cØŽ¼!ÃQ0õ˯ ÏwFÝ4–FÞ8Ú³×s[6¶ÐÈI—Ž/ ˆDµ¥´` n^opôúÇ0l¿Î„‘qçÆÑciÔÍl7Qöú›oåq)em‰R¾W´' šÒ„í š£¸eûÊ„-îÜ4æ®cxño¾-°•ª?RõG4  ¼Ò‚íì¤ÂqÕ,ÅíW_“ñl¿úúk=ö6ºIÛ˜[éÍ·ÞØJ'+øNVh#tù=âuŠ*¯Ô`ûÖÛÌÑ[=–ÞÌLµ`û ¹¥œF+»]Ù›o lE´•­´™´™h@4P×4lÁMÍP‡§åôõ7†gû5»¹·Ü~'½íÏÞ~ç]ñlųÏV4  ¼Ò‚í[ÌM“£àê×ß|› #ƒ¼·–ßåÚ|{'½ýÎj­t²‚ïdum.Ç+^£h } ¤[8©KÁQ‡©ßøaû-Ý~ÇÝÊnsíwë'lã2I‰ÐÓº´©´©h@4P›H ¶à¦æ¨fê7ßž-È{ÇÝã莻îUVÎöîê÷óγ•ÜÃÒ!k³CÊw‹þD…©´`ûîê÷<Ž*ž2W¿ùö»Lä½ëžû”ÝyC—íÝ÷,ØÖf=Û„õbýé3éǹ^Êÿé„þòxqõpÑÉìdvY>鈅Ùå¼Êy ¶Rƒí{ï)†Â4S¿5a‹;÷Ü÷€²»Çݯlõûküžm­Ö³uNt”g믗ëæR6r/;Àµ‹Ågç/+d¤ž­tÈÂîr~åüŠ SiÁv5;©÷Œs8ª™úíw†g‹;÷=ø°²q®åXÄÛ ÔŽÒ‚íæ-[<†VÌÏLOÿüçOØâÎüGѼ iî‚GiîüGië l¶[Ñ€h@4PðH ¶[¶~ ø© LýçO&lùΣ‹£ù 9ÆàÝú¡ÀVF™µ3Ê”v—v ˆjRiÁvë(~j[°p1ýôÓ¿2ž-î,Z²Ô±Ç–ÒÂÅKèÃ>ÏVF´?¢­É-ß% ä§Ò‚í~D2?ÁP°ö¯°Å¥Ëž %l-}œ[ö8}ôñß ¶ÎV£ìtZøqÏ›$_ëÝ&©Z$=?;ºœ9/¢ÚÕ@Z°…“ºxé2æ(ìqÅÔýëçŒgû¯Ÿ¦ÇŸ|ŠíiZÆ·KŸx’>þÛ'[9.©EÜóI:C>ÔÛ K7™äøå5µÛá¥ý¥ýEµ£´`û1;©K‚–1C—=¦>E?3_·i؈üq›†êÎSÏùäïV‰=³ì òRê4‰T®jÅr¾b/­¢¿„™\¢œKö©×'L2¡ß‹ïÍxqÆç‡ÕÛõаg[ÊÕBêÙ*ØF<é9FÕÛ­¡öÑ4 ¶aõ~÷š)+3µýù k§#ÈHÚ]4 È¥Ò‚íߨIuú4=É·O2Sþå¶|çÙçž§g–ÞSö÷O?Í»z¶^nd hly©+¨z»^^c#”¬ P`Þ-É×Àz>̲ëãª÷¹ÀUÇjP0KüùëñVn bŠ0 ¶Iêýzm鯀V.p¹¼ÀÉg‹¾òEiÁö“¿JÏ<û=ýìru ¦þò˯Øþ°}î…iùó/({ö¹èÓ|æ‡mÔ³¨B[ˆ ºž-`äÁÆõ†“×»u:Md½]c€°Â­½ë}>ŽÝöðå„^,lãêý|¿„£åB˜/B9Ñb®5lá¤ÂqCaàé/¿š°å;/¾ô²²V¼¤ìŸ}žwõlÃa OпÊeМ¬ß³¬N½ÛÀ6À VGB² °Ø$8Ž\wù|¹‹Dii -ØÂI}þÅÊ4Kýõ·Œgû+Ãö啯ÐK/¯rlå*úìó/ò®žm(l0™a[oÎÖš #›žd•ÂÈ!õvÍãƒÇj{¶ ¸Ô é,ÏVÏ«†„«ùsÂÃÈqõ~ýÏg…°² J4 (P ¤[8©+^ZI+^v \ýõ7¶|gå+¯*{yÕ+Ê>ÿâË<ªg\oÖ ûš ˜°ÍG×®ÅkôÖ=Û \âž÷/2kÚZÐ ª·k†·Y÷5æB/õ;ì0x9ÏG'†m‚z¼1avûû+xÁ˜ÌÙŠç–ç Ÿ#ZÊg ¤ÛÏ>ÿ\9«/1dÁÑ•l¿™°ÅW_{ƒ^yíuzåUØkôÅ—~ئßP5[Ï6ýãÏuç©Ýö‘9Û\Ÿ_ùüº×'åœê9K ¶Ÿñ­b~ÂK™©¿ýûß™0òo¿ý›^ã-zí7éµ×ߤW_ƒ¾üê«œí³­éz¶uM µÕ>R÷W.¦u­¯ÈñŠfÓÐ@Z°ýâ˯”ã †ÂÀÔû`ËwÞxëmÇÞ|‹^gûꫯsÛ4G>C:™h@4  ¤¡´` 'Ž+ –Âþýïß3ž-Èûö;ïÒ[oÃÞ¡7Ù¾þúm.HCœòr‘ ˆ EiÁNê›ì´*cŽ‚©ÿþÝÛßéÝÕïÑ;°wWÓÛlß|#°-!Éï‹¢h@4 ×@j°e'Ž«2æ(xú»¶|°}÷½÷=è~óí·âÙŠg+[D¢Ñ@Ák -Ø~ÍNêÛï0d]ç\ýý÷ÿdÂÈ ï{ï¯ñl5C÷›o¿ØJ'+øN&£}ñøD¢´`ûÍ7ß:N«k`éïÿ±`ûþ𵤠àýö;­tB鄢рh ð5l9"¼Úp\ßc®þÇ[¾³výZ»n½²5lß}ÿ}^z¶¾m1\ÁÇÎg,£ð;†œc9Ç¢Ñ@šH ¶ßrDxÍÚuc–ú`«’Z¸{‚^ã[lÄý2çI-ª –*æ Nó¤D}V>Ô³­©ß*ßSýÊ´„LKˆòRiÁösNs¬Z0Cu¢(_R d‹ºúºhĵ#銫®¥ËÙð†ÆM‹2 V/¶†êµÚí »nD=[ʱœS ª÷麻–ðÃê½zïwËùécɪéë}¶óâu dP" ä¿Ò‚íÊU¯Ò¥W^M—ÁF\«x ¾zÅã?øð#ÚcßiÈ^ûÑî{ì£lÁÂÅùWÏ6ʳ©gë3“ËØNGWï5«Ü^@¹?ñló¿SÉ…OΑh@4`k -ØÎ[° Ù“²í6t/fé¾´õƒ3°ÅÁ{ì¿û0ê7h(Íå7ù<Û|¨gÛ˜Dû‘°Œ+AçVÓñyªÛ¼ É…T.¤¢Ñ@e5lçÌ[@½쮬ïnCO·lýÀÛAÃö¦ƒ÷P í3pÍÿhþÕ³ …m|=[­tÀÊv@y½hF4P?4lçSÏ~»Q¯þƒ<àfÁV{µ qïƒiÎ<¶qõbkª^klãŽ/gš]\Þ_uÇvv½h»ï3ÔrxÝYéÄõ£Ëy–ó,È ¤Ûйó©Gßpû ì÷lA^íÕ´½úïΰ]Gõlñ’»ÈÉ-Â1GÕ³5ŸS¡`c1UÔ"'_ØØ¦æZ¸eÎb+pƒêÙÊêC 7‹D¢¼Ö@Z°=guï3@™öp·lÝš #¶ˆ-#| Øöì7ˆ*,ئ?:«Ýz­éÿžü½Éo–s$ 
ˆ²5lg1l»öêGÝz÷÷€»y‹[ÌÕjÐöà˜3Üaß©GfµU¯UD&Ñ€h@4 ÈÕjäYs©KÏ~Ê\„”a‹ð1@Û]àÙsçå ¶"v»h@4  ä‹ÒòlgΞCº÷¡Î=ú*à"œìƒ-îh¯ íÊDFì9Wžm¾4°‡tvÑ€h@4 H¶»õ&˜î¦Í[2s¶¶=ú:^m—ž}¶)†Í¥3Kg ˆDù«´`;cÖjߥ§‚-<\x·Y°Å¢¨î_î“» 2bÏâÙæ¯8¤ãʹ ˆDéh -ØNŸUAí:÷ð7¶Ê«Øæõué\ét.iGiGÑ€h@k MضíÜÚuÉ7 ¶ðj1WÛ™CÈpgÎÏV:£tFÑ€h@4PøH ¶3gS›NÝÈîÆM›3s¶›¶lqæjÙ«h;tíŰ“7ad»êNîÅôœ8£¢L¼\™» ˆD®t`[DÓf΢â]¨¤cW¸~Øòj)åÕò\-&v1Á;#`«3DÕlÉ:Iº‘ûAMᘥ å‹ò_©ÁvÆ,jݾ3•tèJm:ÂÃíAY°íÒË-¼ZXlk¹ž­*$PVFaõhíª?¶WQïVuëýå\²¯ÜüŒ˜Ï×é$‘¾1“ÒŸÙ—r²¢‚*Äs¡À=MþƒFÎQJ¶ÍZд³Ù³íên¶g˰…W Ðb5ÕŒÙyUÏÖç$îîˆ7»mæ9ý¼/ YïÖ®”F¶ ¨ã º~œ¯½qûxU-­ÀV`+ ÔºÒ€m“æ­l[·ïê·‹š¿5`ÛHíêÒƒçk»õQ Åä.–0çS=ÛØyžÇë)P÷ Ï2ªÞm@mZåéj¼E²ªú}Ž×‘\€‡yæÒáj½ÃÉ_¼0Ñ@ýÔ@:°-æ9Û9 Ûî.p»©pò†M›ô)¶j¾¶ko[Ðxú¬ÙyUÏ6¶ âæWcêÝÆÁ6öó]FÂÖ1ÂÚâÙ `e% ÔºÒ€m³miúÌyìÕöfØöàÛîlÝhÃF Û†l± ¹}çž.l-Ï6®^l Ô³†mBXÖ®/ëRãŽ0µjÏÚaÞÈÏ×%¶Yõr[£D Œ¨ëçˆZλœwÑ@ík Ø6oÙÔ Ø~Tܾ—\¶Û4Ü–6mÚ¢¼Z¬BnÛ©»ZE5÷ ™aäÐz±æ¢%ÀÃ]ˆd.Ró–v(¶œç3³<;jõ/,JT–—]ó6ó9Qõn•Ø­T+p|ÖöŸðÏ·CÄN(Û\=ulÒÙj¿³É9s ¨¿H¶E­:1lò¶Ÿ Û> ]·»ëÙnK láÙò|-` Ж°ë;Í‚múBŒ ýÖߟ~[K[J›ŠD¢0 ¤ÛÖ]¶‹¨¸Óî [önÝp²ãÙº°Ýèz¶XÐb5VUå*7²Ô³ÑË…O4  ä‹Ò€m‹Ö]ØvÜÚZµsBÉY°U!d Ûv]r Û|i`9éì¢Ñ€h@4&l[wäÀ¶mOjÙ¦­÷{¶›2Ï×¶fЦr&Œ\y¶"n·h@4  ä‹Ò€mQëÎ4mÖ£Ôªý†lo¶îÔ¢„a»Á #7R›nQ¥ MG^ªÌ!äVí: le9~­/ÇÏ—Ž(Ç!P ¶Ò€mó–™›ó²}²=¨¨¸+[— lÿ¸[äpÄ|m+öj[¶íDS§‹g+¬°;˜œ_9¿¢Ñ4l›µlÇÜœKÍ‹{PóV]¨¯NnÖº#Ãvc&©Å†›•WëÀ¶3µhÄØŠg'Þ½h@4 ¨H¶M‹ÚДi³©YK†,{¹ÍZu ¦ àuë}°Ý¤`Ûº}Z¶3eζˆLFö2² ˆê»Ò€m“æ­iòÔ™Ô¤E{jÜ¢ ß¶¥¦|»ný†L=[,MÖ°EYÂÈÒùê{ç“ß/}@4P4 l›µ¤IS¦+ÐîÚ¼XYã¢âlØ¢,BÈ ¶âÙJèH¼zÑ€h@4PO4ls‰½‰“§Ñ.\ýGY³V ÜVÙ°EYÁaä’4Å#ëT‡åœÂЩvPñ&+]b&U¢™ÄBå/6S#åè2Ÿ‘縞œ|UןQµœk9×¢ÚÕ@*°mZD&M¥š´¤›¶dØò-ÛÚõë3adlºU°e¯ -²` !8Àõç6ÿ#÷¯¯€UÕÆ®çª€k&â¬7[»'B:‚´¿h@4 (\ ¤ Û¿6nAmÒ‚vjÚ‚aÛ‚Ö® ­òlÛtÈZ [O6 ‚Mi¹YÐÝ_Y'« NT½Yñf%œ% ˆD9Ò@:°mΞíúkã"†mQlu™A›Ø:Þ±ª„“U^.¦ÞlŽXFª…;R•s+çV4 Hª´aû—ÆÍ=àf{¶jq”ãÕÂÌ9[F6KƘþ0²éÅâ$Uõq ôjÍr{®—ëû>®ŒjE¢Ñ€h Èl5pý°å ­ÚaË[{Î6Q=Ù„a`õY!çØz³9hणyŒE¢Ñ@áj MØþe׿Ðj«lEd…+29·rnE¢ú®´`ûÈÄÉ”¶ðjƒV#×÷“!¿_.H¢Ñ€h 056lwܵY´g«S5 l SPr¡ó* ˆ²5&lwÜ¥)s›FFæ(ÀV<[éŒÒE¢Ñ@}Ò@Ú°Ýaç¦ ¶°@ØjÐ6/nŸµ¹>5¼üV¹ÐˆD¢ú£4a Ð*ÛÅîšuëŒ R¼Ùôj¶õGdrA‘s- Ôw ä ¶n(lZ­t¾úÞùä÷K Ô ¤ Û?ïÔÄñnÙÖ¬ ðl5h¶õGdrA‘s- Ôw 䶸°Åœ­x¶Òñê{Ç“ß/}@4P¿4+ظ[ÉH%ißD¢Ñ€h€5 °•Ž A4  ˆr¬mŽXBEõ+T$ç[ηh@4¤Z­$µ1ÊI4  Ô' lų•ð‘h@4  äX5 [I×(#Ùú4’•ß*z ˆ´Ò‚íø “ÕÞZ¬BÖ–µÙ„­ì³Ê…H4  Ô ¤ [´[¶Ò±êKÇ’ß)Z ˆL Ô8l%©…P.B¢Ñ€h ¾i mØþé¯ êÙê•ÈRÏV:[}ëlò{Eó¢ú«…­YõG`[E'9÷¢Ñ@}Ó@®` ï64]£ì³•ŽVß:šü^Ѽh ~k@`›ã½UÒÁêw“ó/ç_4 €j¶²ÏVD'Ñ€h@4P5 °ÏV2LjD¢Ñ@Ž5 °Íq'Áu/« +VPy©Œz“¶™¼N´" Ô lS†mYE9•Vñ3\­\<êÊÅCŽS´*H®Z-¶Á¦NŸE›eÂ¥åÊ»ÓVQÖÝy®{Uàñ †‘zžæ½Ö·²ŠÌûËKK×W”E‡H’~~Øñ™Çh§ZZž9>Wÿ¦²îÎ S°-s«þU·t‚ä@ÚJÚJ4 ȵò ¶¥>¯àô<=ˆê¾‚žYÓÄë=@7HZ ³Ÿß€ámz­¾ãs?'ʳU -/õÀý°µïg~t†\wù|јh@4+ älµ‡ix‡>Øj°u¡åÁïµ=X¼.Ϋ5aõù!Þ«ö ‡-Ã?æX²ÂÈ|ü[éü¹êüò¹¢-Ñ@Íi -Ø>21SõG§l ­úF†'šñò¼°ª^0dÂ4)lƒ–û|å)G_¬g+°•Ž]s[ÚZÚZ4_ÈØÚ`4úګŒñ<ÍùOM…m³¼I7¼ì†¡=AÆÁ6îø<Ø@6ÂÝúxüžªs,Ú;Ï6¿:‡\¬ä|ˆDii `ëοfHñb¡rw;Œ¹hácw¡ÀåÛ2c‡¡ËËB·°5ßñùú»œc4ŽÏÜ®ã[D•½2Ù\Àe. 
2?;3/í,¦’P²tø´:¼|ŽhI4P;È+ئ/‚øÐmúßY;'R~‡´»h@4 È_ 䶺ˆ|%çlÓm${‹ˆ0Ýö•ö”ö ˆDÉ5P°°$´•´•h@4 È­҄펻4£vnª LJ×+‘“ZHÉO* ˆD¨mžT¡æv„*í+í+ TV[­Œ¢E¢Ñ€h ÇØæ¸+;ú‘×ˈY4  žÒ‚í„ISè/»6'=o‹¹ÛZ],b-<±Ê9•s* ÔU Ô(l±(ª¨¤ƒªøSû ¤*Y¨@<` 3‰D¢Ñ@5Pa‹¢$½¨«£D9nñpD¢º¤ú[+£ªwkåNö§S̤[ôÒBr ÈLJI+£.2iÅ¡*Ž”ê’ÐäXåÂ( Ôg ¤ Û¿6.òæm1w›5g[»ad»jOvÙ_—;†U¢Ï®ÀfÞb»‚z½WÂN2˜ ˆê½j¶˜¯m^Ü^ÍÛ¦LŸI›ÕÌIª kVòÁßFaw=+åbeÝidUž€Z¼Žœ]Œ >îä·‹w# ÔG ä¶zUr–g[а•ùßš4‰‡ í, ÔA Ôl×mب¼ZÓ‚=Ûz³^ãVõy¼ÏïeÚõní0°½€*®Þ,>/¾^Üñ˨·>Žzå7‹îE…­Ô`;y*ýµI 5g«míºõ´MÃFÔàÛ4¤Ú‡­;«B»®¡Þ-þÖóª¡`³°»~_T½Y»^mvY`+•¾¨Èù•ó+ÈÖ@öYëv¤­Fçlë`ØA+-Ñ€h@4PH¶zE2¼[¿g»~ƒYmaˆG.rE¢Ñ@2 Ô(l›¶jK0¶ÓfÔÜjdñleQ…h@4  Ô’Ò„íNM[ÒNÏwð˜†î¤|„-ò!ë}¸’׸òá-ö2(=¦ ,*ßžÒfÒf¢‚Ð@Âqf˜†î¤)yæÙª¤™\ÈÈ¥“ZÄü¼ª@¥¹!ÅK­>ïÂ6+ÛVŽ/*˜qîì6ˆ{Þ|}²L`ùwžµ1XŒÏd–}üIß_WÛ§VûEŽõ/¿-ÿú£}NÒ‚-¸ ‡u°Leï6˳ÍØ–U„ÀÅÆw‘ª¤wV€‰:þ¼èpˆ pA‡šh û÷f§ÛôwÀ¸ç“´_m·’ïÏÒq%/ôÕ}T;&9þ$çA^“ÿp‘sä?G5 [µ7qf×Ãȱg³êNuàeFÙ~8†Õ›Õ'6ôùª<¦çš©ð£S*f<çsÙë-3ëÙf¼`|¿L¼cÒ»¢¶¼Øz¹ ŽÇøûõ{+øøt%"ïXìÁGÕÓI:Þ£e•ÅÇžÿˆös~;ŸR³ýýç'îùHÏ.ªý+ѾæùQõ”Ít¡ü|mÈmç ž­Ãªx¶º…Á6‰çëÓŸŠˆT¨ÐJÒÈQPÿ6ÛÏŽ,évôµCD½éê¾?ðúcþþ˜OlÿÇûÃôŸTRo»VÃÑiÃvWL˺ÞíÚu2¹‘áæ"y2LC׆­¬”î¤|_wȸz³qÏ;äpÏVÃÖ [ž­s!1Þ¯Äk˜÷—äó{Ê¥Tjt:|žy±‰«—wü‘¿ßíh™¼ÎÎqg{ °µÏõûõùÃo6KVj„qþ0œ¢ÚϹ%XÇ÷|l"Û?AûúÏOh2ŽìC|ÉÆêz¦q©w»ÐGœ~ãt×~q…BâêM§ñþ¸ßõcû”þé/Û9zÛ5!¨1Ø®aØþÕå‘ñÂ…îD®^UÏ6,tWo6îyrI`ëuÊØÚ#q³ƒÆuVçâï÷ülØú>? -B?î÷›µ{ÝP¯ç'šgœÎšñpü×ôªªÛP¯,¦ýìÁ‹ýû➯6l"Bé¾ï7Û_¸p<ÏŸÛ7A I6–qP‹{øó™ˆPX-ç$Çx|±íWýzÓ‘ý7Q½êøßÛÈþ¥ÿ¸þèøkÉû“@˜Úl7„7ÝyQs!ZæwV¶:|\yØÆŸÿ¨öÓ[{üçÇ¿ËlsçšaðàïϪGì[¤â‚:iûÚúE=å¬Á[ø4CôùOxü¡ uâÞ÷¼½8ÏZß`†ÎÃÆ­šŽk?ßó™ÅŒ&`#ëMWóýñµ¬Ã/æIú¨þÍE™1ý»:Ç7’ç£a]s°]»Îóh5p òlã:\ŽŸ÷ ÚX¬’åçø8êªpØÆ‡ìêêïK÷¸¥ª×žÒ~Õk¿Üzrrl¹Ùúƒ4Ç(æÓ©Ý¹[¿g[G`&ÿ–†²Zÿ‹ëö…ÂÞâ#ç³rçSÚ¯rí%úªýöJͳMÛviF¦å£g+¢¬}QÊ9s  šÒ†mSTÑs½Û,Ï ýóÎM=à>21`”„gÅk Ô/ „nK I#ú¨_ú(ó]£°hÿ´S\Øø‰“¥êO©ÐF¡ò{ij ˆÒÔ@š°mÖº5ãy[x·°,ÏV`+âMS¼òY¢'Ñ€h ®h Fa»ÃÎFÞ‰ÃÈ| “0²t”ºÒQä8E«¢Ñ@u4P³° š³mV$óJ ˆD¢‚Ö@ Á¶­ÁÖŸ]ŠØ£mî<Û&ÍZ8 lU´ÈJ&`1×™PrýùÕÉ{et- ˆêžÒ‚í”é3©yq{jÞº=ÏÛbî¶­Û°QçFlQˆ '¶héÙ#“¦R“æ­|£™*'°O8*¬|f£ºwR¥#Ê9 ˆDù¥taÛÁ…-—¡ëÁv›†ÛÒ®·÷—Æ%´cãbuû—&%4aÒtjZÔ:1lÍìNa9N«SO3êóõsåœb/“®1¬¬¿öme’ñ‡Õ;õ¾ß­ÐS×êuJÇϯŽ/çC·h f5l‹hÊôYTTÜ‘½ÛŽ Z@·#­Wží¶Ô`›†ÛÑÚu¹Žm{.¯×NÝÂ&NžÉ°-I [-ްDèöãU­§õùQ… |%ܡlj»ÐëuÆý~y¾f;¿´·´·h æ4*lK:S[óbÜvaØn¢mmǰm´½‚í.Í;ÓÎÍ`”Mœ2‹šµh›lU _½ØìŠ.IÂÈQ° ­7Po3®zŠOèõ ^§tìšëØÒÖÒÖ¢üÒ@:°mÁžíljѦ+[7jÙ¶;µlד6lÜL ™³ 6úoºÝÄo{Ò®E=غÓ.EݶԼe»HØf•ã¹ÙÄ«b=Í´`‹E_‰CÈA°-°zÒùó«óËùó!¨9 ¤[^P<•aÛ²m†l/jÕ¾µîПa»•nû'†í¶fØnæJ}¨qËÞ Ý^ ¼“¦ÌaضO ¶IêIV§h\½Yûûýaáø“Zèõ:ŽW¯ºÏÇ·³\`¤D¢šÖ@:°mISgT0d{+ÈwÜJ:¦›> FÛíØîÀ°ÝBM[÷§&­úrá[¸“¦¶B¶þdêz:¡Û\×ÓŒþü$õ&ízºÉ=[€¨€ëuf„]]˜Æ½_."5}‘ï͉â5l›4lç0hûQq§AÔ¦ójÛuO†í‡´ív¡Û [©YÉnp[²‡ëÁÖïÙÒIKîŽ*èm/¤ö‘ßßA¥¤D…¡´`;mæ\öh* mßm?Ú¸ù#Úvû¿¶;*Ø6/DM‹[x¶#gÏÙÖua¥U÷VêuF«ëz–ãŠÒÑ@:°mEÓfÎs½Ú¡Ô®ë>Ô¡Ç´ióÇ ÛLØÂ³í§æm±Hjâ”ÙY«‘åĦsb¥¥E¢Ñ@þh -ØNgضé¼;µí²GlFÞBÍŠ¨ER»²W»Kó®¼Ï[Út>L{þˆ]Î…œ Ñ€h ¶4láÙ–tİÊa佨C÷ý9Œ Ï–ÃÈÎjd,êË^m/öj»©=·AI-j«!ä{¥ŠD¢Ñ@®4l[ry.•t ¼Û6JÞ‹aë.röÙnæyÚÊ£UI-8ƒÔ„É3²Ò5æê‡ÊçJ' ˆD¢ÚÒ@°mÌ«‘§ÍÄjä¾j‘”³"ÙØú“É å@v§¦í8mcÎ<-«Am5„|¯tBÑ€h@4 È•Ò-'µPûl{QkNhQÌ[€J: ¤ ›¶R#Ž ¹‘Èþ…‹ì¸k+RU˜Ô¹úqò¹ÒqD¢Ñ€h 4 l›¶È å¤jlÅ©Þ ·8é3UZ;åõviA;pm[_=Û„%òò¡Ñjãìª?µq òrÑ ˆDUÓ@:°-ât³8/r׺*ð®ß¸‰aË…þ¸SÏvÇ]²nñø?ïÜŒÆOœL›Õ)Ï6I!ƒ\‰1(OtÚßU›¿/íß"ŸWµ‹‚´›´›h } ¤[”ØkQÒ‰«ýÀ:óߪ?(±çÀv{³ÍèÏ;7ul§¦°µ“9åΪ'«ÓrzD§êWûAý7{ÍN²‰ *+3ëÑòýîN£ÆÖ‹µS)ºÕ…|U€ì×T"ûSÜñéc,/+£ ¯²‘UÕÈûÍNªK³Bl=Þ$¿ÏkGJÓýþí/7ýÎ+m*m*¨;H¶Í]϶“\®i[T¢ëÙ6l:°eoÕ¦<Û¦ÏV¶¼Ôót@d€˜•Øß¬êã^ðüt˜Þ 4Pê=™ûq… ì(ÏϨã¯4p£Ïl¬t¥¥ê7ë諯ë (ü¿×®JýûüWí¬_‚ö— Cݹ0ȹ’s%HWéÂÖlóâ Ünñx¶®gû§šÌ[N0&\Ì k!”–»06ËÑîkmØúþ “^"N…¹-`ÛÇçóÌMÏÖ.>o†u]:lÃ_uÛ_:oºWÚSÚS4Pw4 Ø•p™­’ž­Óh¾ª9h•¨Â¤æãuº …Jsá`QÎaçÌ+ î¾ÏÏ,¦ò×·ÉZ püÙaä(؆_’zºæk0·«+äý:4µM‡¿«Ùþra¨;9Wr®Déj V`«W$Û ¤r}rƒÂ´¹þÎÊ|~¾_e~‹¼6ÝŽ*í)í)¨Û¨7°M«Îl®ŸïÇ—«ß-Ÿ[·/ rþäü‰’i ÞÀV‘LÒNÒN¢Ñ€h } l%MdÊà%ô/Ò¦Ò¦¢Ük Fa»ã®Í}Y¤jzÎV•{AIK‹D¢l lųÏV4  ˆr¬mŽXFx2Ê ˆD¢­ÀVF´¢Ñ€h@4c Ô lQý†z¶f!‚º>úó*÷¸•„Òü=RÏVFÆiêI>Kô$¨Y lk»k.SH=Ûšír1’ö 
ˆÒÒ@ÞÀ6¶ÞjPºF3pL=V/i„Q¹Æ«k|Ž?]¤?£~ÎLi§cÔ°õ§NÌNëhŸ@©g+:­N-Ÿ#Z äŸò¶‡¨ðz«Iêņz¶qUkܼÆYyíJ>:7²~œï›Åü°‡¬Ù)’ÔÛ•z¶ù׉äÂ&çD4 ˆÓ@ÍÁvÝzúKã"Ò{mƒæl#«Ò$¬]üÜ_¦ºŽU5(ª^«ž<·kÈZ“êžg\…r{RÏV:l\‡•çE#¢º©ºÛ„õb£`ëšw«þ.5 Ò'ð|•ÈÀ¶¼œ‹×zÒÎ!õlëf'Jz~åur~EõWu¶ÉêÅú€eÕ›(B*4QrËÏéY¯5±gëÖƒµ¿Û󀥞­\pêïGνœûúª¼mÒz«v½Û,2ªÞ¬ Eëâ耞Q(=ª^®¥†$xç:»Úü°ÆãÎgz‹¯øµþôR϶¾v6ùÝÑ@ýÕ@­Àó¶°G&Ö>Ûêv¤\nªî±ÉûëïEBνœ{Ñ@õ5 °ÍqÖ¤"•z¶ÕsÒ¶–×I[‹D5­mžÀ¶¦O¼|Ÿ\lD¢Ñ@Íi@`+°•œ¨¢Ñ€h@4c lsÜÀ2r¬¹‘£´µ´µh@4¯Ø leD+ ˆD9Ö€À6Ç œ¯£,9.ñD¢Ñ@Íi@`+°•­h@4  äX5Ûµœù¯MZxù‘eŸmͨdô*m- ˆjWÛJŽfj»^®t˜Úí0ÒþÒþ¢Ñ@U4w°5Ó®à‚vV¥Àz³º"r«TŠ\ÝÇKÛhTúA*Å2 Ðé3é½zºn®d}^ªÅ˜z¹ªñí×T¡òOUN¢¼G:¿h@4 Èo älàŒ2w3@hCëͺ S¹Š"fÙ¾$õbÍ\ÇA~¢Køùs-«ãàÊ\P%£'rÑÌœ9?UÑ@­Àuma|¹‘Ú²A§¸xxÞ(计mÃÖS·`½~,²ž®{L‘Åéï¹rä«rå=ÒùE¢Ñ@~k žÁÖªò“&lÖÛ•‘ßBΜÑ€h ¨QØîÔ´¥·"9Û³ ªW딣Ӟgd½Ù„ž­Ö5=Ux¶ö­¿<^ÿ²U³apûõ¹8iò™r1 ˆDuKy[ˆÇ®Wë û,RRÏ›cÎ×]ð™ ŸëòrÌgêÑúàèûüÌbª¬ybïýÙ!âØz»2's¸¢Ñ€h Þi ï`›ËÑšÔ‹­[#Á\jA>[´  Ô¤RƒíŒYÔ²m'jѦ#•tP¶~ÃFÚ¦a#jðÇm’ZÄ…‘sùÃ¥^¬t¬\êK>[ô% Di V`‹LR° “§Rã¦Eõ.œ R:¥h@4 ¨_ØÊ܉ vD¢Ñ€h ǨQØîܬ•J϶~êd/ç[4 ¨ÏÈl1o Ëš³ØJg«ÏM~»è_4P5P+°ÅB)ØD™³•ÐMŽC7rq«¿79÷rîóI[¹Ø ðE¢Ñ€h ǨQØîÒ¼5éPrÚž­]µ§ú#'{Ue èc@ÒŒ|Ë$•~ûȨ¹ú“6”6 Ô  lq² T{¤S!äó}i«} éuÆôÛ'ûؤÞoz竾\„äwŠfêƒò¶þ䜫Ø*QUïVÁ¤Œ«ÿ„¥SŒ«7k=_^šml½[ª¡°ù~ýÛÍ4“ª6o`;ï ¯×ë F¢ÚÇ«줳4½sï÷—›õ€cKRï7+§ûþõˆëCg”ß(Ð ®ò ¶6¤X ØÆÕ»µëßÚŸg§kô×›EØØ¬ ”FNR‚OC-(Œýý®È4ðŒrvYÀ°™¤^¯Y8kPÀƒ ìø<ó»öÍ6hP]ï·zõˆåBT¸"9·rn ]µ[ÌÛÂ&N™ferè™Æ‡t#aâyyð`ÈeÒ¬$¦zûºÀÏJRï6ä3“ц£Z,„zp=ëØ÷+Ï5ÄO¡q’6×ÈE[4 ÈG äl-‘ÀËó<ÛjÂ6®Þl®a÷ý:T\-ØV§^¯íÙgÏ lå"–19&Ñe]Ð@ÍÁvýÚµ¨˜ôŠä Ï6«*`‹’y.ˆ²ëÅúëÝÆÁ ºÞ,>Ëï•Ùaì$õn£ÂȉêÝÆÂV{ÿ!åý¬9n»^¯/$m~—åÅë²…v9ôýî9òC«Þouë×…%Ç(~Ñ€h HyÛLY§æl|½X}ñ7·Ý¨ÇŒÅ>fH7²Þ¬µ@h/òmÿ‰¬wk‡Àõoñÿ†ðï~ö|mlÃëõ&ió5˜ÛÕµq IÞ¯ækCëüU³±\Ää"& ÔU älëj#æËqK½^¹å‹å8D‹¢¿¶ ¶ÕÔÑH½^¹¸ÕÊ1ŠNë«j¶˜·…e¯F!ÖW!Êïí‹D…¬mx¶…,Rùmr ˆêºj ¶ëx5rã%ÞŠdñl¥óÔõÎ#Ç/ ˆ’j@`+ž­Tû ˆD¢k V`‹ý¶°IS¦[¤d””t”$¯­ˆD¢º£mŽG3ÒêNgs%çJ4 È•j¶MZ¶ñæmÓölÓ¯×ZXõls% ù\¹8‰D¢x  lq²Ó¯×Ÿ9HdùZÏV:D|‡6’6 ˆr¡¼ƒ­Ô³uÊÐeÒ#&«g› qÈgÊEG4  ¤£¼‚­Ô³5r W¡ž­tŠt:…´£´£h@4¶j¶Øo ›4Õ^,õl‘È?¨ð|Ú'^>O.&¢Ñ€h æ4g°µ~¸Ô³•½o²Z\4  €j¶M[µ%½"9ȳ•z¶NY<ÛšmÊÈ^ÚZ4 ¨ äl¥ž­SÇ7cÙõl¥cÔDÇï‰Dij ¯`›æ“Ï’Ž" ˆDù¢mÌ䋘ä8äÂ& ˆ‚5P+°Å¼-lrÖjdªU4  ˆ O[ñle¥£h@4  äX5 Ûf­Û‘^‘,žmáÜd4.çT4  äAY`+Q:¢h@4 ¨Ï6Ç¡ƒú(*ùÍr1 ˆD~ l¶2W# ˆD9Ö@­Àó¶°ÉÓfPã¦Es’ŠETÖ]Fµ2ª ˆD¢Œ ¶eµ[’.+ådŽGK"f¹ ‰D¢ü×@ÍÁvÃFj^Üžô")Û³Õõ[ËËËt…8»—Q…™ÎP—¡ÐìçÜ×ét‡^ÜòRÏ“.-wÓ"Ÿã«§»Âÿýú¹¨z³¶™z´øŽÚHGÌÿŽ(çHΑh °57°…Ð@eÀd×·µ½Fõz¸ü¡ž-`l€V »´<öL}E|U‡\!à1€<¤Þ¬¶Y¹€öDίœ_Ñ@2 äl}‰÷Í 8!ž«í5F…‘õsÊCU°äú¹š `¬DS•ÇóŒ­A€2™ ¥¤D¢BÔ@Z°6cµj×™Z¶íD-ÚtT¶ž#ÇÛ4lD þ¸MCZFÖžm(lM0F̃FÁÖñbÀª¿K o7EØ:¡pñj ±ÃÈoˆDUÑ@­Àó¶°)Öjd„…ÃaÛ€i…y ë 5«¯=¾ªP1<Ô ÿ÷e/n2<_ý]±ž­–¶¿[J̪óªt4y\ Eõ[y[sA‘®žåùQsÕ¿€)`ñ‘ñ¾,ïR…¢õ-ƒÔÞ¦ªÎÀ¯·ëí®ð Ì­?Þâ+ëø¥ÃÕï'ç_οh ~j o`+¬Ÿ”ó.ç]4 ¨H¶m9[më7n ž³ #ׇ—ß(Ñ€h@4Pÿ4*lÝR[™Ÿ•ùYÑ€h@4 04*lų­£5¡Ë9 ˆDñH¶âÙÆ7¸ˆRÚH4  Ô? 
¤Û]›6'µÏV<Ûú' ¹hÈ9 ˆDñlß{ï=Z²d M™2%˦Nªíi‹/¦µk×Ò?ü@‰`‹ ÈtQTÒAåH†M™>“7+œª?"¸xÁII‰DõQ€-@ûÕW_Ñÿû_ÏþóŸÿì§Ÿ~¢ü‰¡ú#}ÿýôÍ7ßÒgŸ}Në×o¤¥K—*Ø6Öžmx¹‘J'¥ÓK¸0À¶‰ÀVÈBÑ€h@4Pàl'Mšäƒìo¿ý›~ùõWúùç_èË/¿f¸~Aÿô3úøã¿Ó†›éõ7Þ¦M›¶Ð„‰ü° ##g#öµnßÅËéðNåØsÓæ-kIdn’ Éa\Kí/£ûú8º—ß,º¯¯а՞ì¿ÿÍ ýåWú׿~¦þó'úûß?£>ú„¶~ðv+½óîûôê«o*Ï6Û"gΖ=[Ó6è}¶Û4Ü–)½‰ÚtìA%ºSqûnʦϜMÍŠZÕâÅ> c®ê«ÐåwËE^4 ¨M ˜°ýý÷ßé×_ó@ûý÷?ÒGB[¶|H6l¦5k6ÐÛo¿G¯¼ò­[·![ŽOcn¶îÀŽk{®kl6ÚŽ6nÚLºõ¥ö]ûP;e½iÆì¹Ô¼UqÍÁÖJÇX^š Û°z¶Õ®·+¯¹ó,m-m- ä™Ò€m“æ-hú¬ÙTÜ™ÖN]=Û°‰3H5Ú–4Úv{Ú¸y uî½uê5À³YsæS‹â¶5$ ;rv9®žmõvksd%ß-#{Ñ€h@4P;H¶-ZÒôŠ9Ô¦G‰»öðn7°3Ûp[†í¶Ûÿ‰6mù€z FÝû¥ný†(«˜÷(µjÓ¾f`T­žn%êÙFV%JXoW„^;B—v—v ˆjSiÀ¶i«V4cî\jׇ£ÃÚz÷QÎlÃí¶£ÛýéÏ´yë‡Ôgð>Ô{÷½©× =•Í}t1·ëT°MXo·6O¶|·\lD¢Ñ@íh Ø6/)¦Y æS§A©ã ޳uÜmmܺ…m¿=`»máVý÷8€ú ÝúÙWÙ¼…ñ‚©.lu)»°âëU}ïó¦*ƒg¬FŽ«g›F½Ý€²*Õ'Åå¥Ó×N§—v—v Ô¼Ò€mQ»¶4{ÑBê²×Œí9„6}°•ýùOÔ`û?ï¨`;`¯ƒ¸R¿aû+›·h ¯P®)ØrãújÝ2𸼟J¶jÖêz¶©ÕÛØÖL$#ÏGÈÅ­æ/nÒæÒæù¤¶¿ÿþ‚a V&ûí÷œðâú⋯èÿøBmÿ±W#·èÔž*–,¦îìEÝtÿÞüÑ´íÎÀvàÞÓ€=b;PÙüÅK`+É'ȱˆE¢Ñ@õ5`Ãö¿ÿýÿhØÐ¡4üÜséú‘#={èÁé‰Ç§¿ýíÓ,ضìÒ‘æ<þõ*Ý—m?ÇÛ—6ü!m·ã~ØögÈjØVÿJ'6 ˆDù¯ Ø^pþùT1{6=ù䓞½üòË´zõê@ضêÖ‰æ>±„úqõ9’o{±¿Û¿0l·ó…‘yÞs·n¹¤Cg +JÈS4  ˆ Zi„‘lŸdØ´G¨ný°uH `ÈögÈúHÕÔjdrA YFöù?²—s$ç¨>k 0²ölXe‡ï¯BÊžg»íöÎÖŸ¾Cö¡>¼õ§÷ ½¨×n¼õgÁbjݶƒ@@Ä—¡èÿ¤¤ŸˆDuXi„‘[vî@s–=F=Þ‡zh;ho^ õ!m»#/j´“Ô¢çÀ=¨‡—Ôb°JjѲ¤¨ (µ‘ªÀVúôÑ@k 0r‹íhöc ©ë>Ã2¶÷PÚô!¯FÆÖ¤kÜ´y+ué3(“²±çBºÆ¢V%"°Xb l¥H? °Ò#7oÛ†f=º€:æÔÇÚvß6må}¶â¤N!‚-Ô±G?UŒÀ±>4³‚ ´¬ÁBIN¤¹WÊïU¾ó#mey©ÚÓ\QÖ=ùû¶ÉÛ*‰Žå5Òž¢¼Ò@lo½åµÍgåÊ•ž½ýöÛ\ùgCàjäfÅÅ4sÞªž»Yåš¶Ò]š¹§#;‰ÀV`+ð °Ò€mcçz¶­9ÍqkÎQ¡ŒÿVÅãQÏöÛ4TwŠ]ضlÛ‰`S¶x³ï"¬! çKù¾b\½Ù¸ç…{¶¶^øØòlïWËÜw€™p¶§\J¥†˜ðy&ìãêåÆäïw«¾Oµ±sÜÙhlíscý~uÝ6Ë \ŽÄ¶r¡-à m]õÆä¸^¿h7Ø6eØÎœE­ÚwÎ8®ü·ÛFضn×EA¶E›ŽÊ¦NgØò›mØ.¬Ñ o¬å]Ôãžwß—¶^õŸØÚs¸&¬âª5ðNmØú>? Œzüq¿ß¬Ýë†z=o<Ñ<3 ì $2Q¿g¯?çO`›^'• ž´¥h îk Ø6WaDˆµÓŠ[[ž®õælU¹ÀÖÅÈ|~Ï0¶XéŽc¿>ÖQžyÎa'öLèÝ %'^‘,ž­x¶ ¼N\?”çóU#©Â¶­¶ëmت02^TÏVÆö¤ø_‰z´NÖø #œjzea'K…‘­í@¦§ KÓ³ÄEŠ뺙d° ?þÈöIìÙV}ÎÖô”+½K`+°ØŠ X©ÂÖòlC`ÛI¦æl½0r°gä «Æ„aã´zN1ãyYó·Ö¤=§ ˜•ó"ª Ï-I½[ó5˜ÛÕŸ…ߘäýÉŽß¿HIµŸÙnîXüs!Z¦«[>ØŠ‡‘¯†—h³64`Â5lùåWúç?¢~ø‘¾ûî{úàƒU Ûuë6Ò{ï­¥7ßz—V®zïo  'ðë~`^ºaädž-`똶ù!&ß‚¦ì9ÊÚ8iùülý‡DÇ+ž­x5ìÕ$êòû ºhØþþûïh¿ÿþU4þ믿eÐnQ`}ïýµôöÛ«éµ×ߢ—W¾ ÛDž-Wù öló¶‘!d½­E2K¥ß)¶é·©\¼¥MEy£ÀváÂ…ôÕW_ѯ¿þJÿú×ÏìÙþ“~üñŸÊ»ýûß?¥?þ„>úøoôá‡ÑV.Þ³™Ó¿ûî{4þ|åÙîšØ³íà,r`›Ÿž­Œ@kiÐ#°Í›‹‚ôZêÆ‚î€-R1¸ '5€Ó~•‚mIÇ®ÔI-ò8Œ,šZºÐl úB#ýª–ú•[GÞˆC.r ˆDéh Ø&#kØ*à²M›1;;©…ÀV`+ ˆD¦4`ë­FNêÙ¶0m:#&yJ;ŠD¢üÖ@°­œgËy¶ù- é´r~D¢Ñ@º¨Yت{]TAØ4.±—Uˆ ÀB"Øt+í)í) ÔE Ô(lÛvæÂñ \ÔµÅÊdÔ³mÒ¬fëÙÖÅ“$Ç,Ñ€h@4P·5P3°å:{7m¦]{S»Î=•¼3fÏáâñ-e!€xó¢Ñ€h@4PЂ-¶ôüç?ÿ 5¤u4·þÄÎÙ6l´mÚ¼…:÷@º÷£ŽÝûR‡n}hæì¹Ô¬E«‚n`ÖíѨœ?9¢Ñ@‚-@Û·o_êÔ©“²®]»Ò{ìA\p7NA8¶fy=üí"h´Ýö´iËVêÞ0uë;ˆºôÙ:÷@³ç̧¢V%[ÑŠD¢Ñ@Ak ʳ=á„謳΢Q£F©ÔŒ«V­R  ólCa»íö¦-[? 
¾ƒ÷¦Þƒö¤^÷ ý‡RÅüG©EqÛ‚nà4FDò2² ˆDu[as¶:”|íµ×Ò¢E‹èý÷ß÷@‹LR>϶‰Sõ'¶ÛýiÚúÁG´Û^Ñ€=ö§þC÷UàûèbjÕ¦½ÀVF´¢Ñ€h@4PÐаýî»ï¸Á¿|)áÁ“]²d‰´A°Eµ¼PØnÿçië‡ÓÐý§Ý÷9”¡{0 Üó@š¿h ·ëXÐ ,£Ñº=•ó'çO4 HC¶?þø£Ê Õ4”ÞûïÿK¸5‡ç «BìÙFÂöO;ü…>øèo´çÁGѰŽ !û•*è.X¼ŒJx¿m?D>C:„h@4  ä«4láÕêÐ0@gmrØîøWú€kôíuè± Ü£i¨ ÜG{œÚp’‹|m9.鸢Ñ€h@4†4l,€ª½X Xó6¹gëÁö(†íá4xßRR°í(°MãDÊgÈA4  ä¯4lÀŠð1àŠÛ K[#ïqБhí톑ųÏ^†ˆD¢×€†mLãO[½@jð¾‡Ñ ½¡jUòî©N"²™Œ¶ów´-çFÎh f4 a hVÕbHm‡}¶|Hý‡íOýxÛO¿!û¨­?ã'M£ÓÎ<—Z–´7¬µ,n§öß*kÝÆgE|‰0š·,ö[‹bÎFe>Ưá×µâ×·v> Ÿ›e%ü˜ûý­ÚtP[‘œûúµîq¸Ç‚ÏSßÝ¢55-j©ÒM6iÞBåxnÜ´ˆ +ðßüXÓ¢VÔ ¯Ã1èïw?Çâü|F+õzçýÍÕj31iÑ€h@4 °5pÜ 'Ñ]ãîßúÓh»?q:©ÍÔk·=¨ç€a*¡EþCèì .¡‡™Ló²=êÙìy höÜ4kî|šÅY¦‚mÍªÈØLþégÎæ[ýø~^¿Ÿ¥?Sýmšó}Áß‹Ïp ßá~ÏôYª*a“1–cOîêôNŸÉÏÏš£lŽ ïãcQV1—f¨ç*2ïç÷M™>“¦LÓ6ƒ&O ±©Óir„Mâç²l ?c§L#ŸMæû16aòTª´MšB {„ÿNÅ&N¦GblŸÕWåeÖ¦Syš]óþvÞã|Go.š€Â ޵éÔƒ+u§’]©5o[BmÞVœ›²{Å-JÆãVíø9~Mq~-¿§-§¶’Ž=øñnêý­Úu¢–m:R¿¯yq{jÞÖŽš¹Ö´U[г&-Ûi[”PœíZTL¶íÒ¼5EÙÎÍZ‘m;5mIqö×&-È´¿4.¢8Ûq׿TYÛa—fgÞ¹)UÖþ´Sª¬mÿׯgög~ݺít›¢ýõyÁùÃyÕçºÐúÑšÒ·Jo®A¶µ`]æÂ‚¾Ë|Ì<®°¿íß’ä~P?²ûNÐ}»/%é7A}%®ïÔf?IÒ'*ÛªúzóXÌþÕô5m¬¯av?ˆê ÐOP_Ò¿P¢¶ïïÈá4ö–[õtljܤ)ÍáôŒºAPYou«©Œá© šœ ;T R¥ú`m:u1ç5™×ñß^§àÏt>Ë6@’MÕà43fBV54.JÆÅ àmÙ¶£ ÝΔœk¥ ÛY½¯-âðr‘ Zû¢Z¿æy¯nħ6=Û«®Å\½CñÕƒ-þh´-€{ÏΧÎ:Î+"Ÿ U@´ kÜÚ@´ïë÷"Ì«¬}¸!”ë3„ TMó]tJN¾`0h½0„]\,\ùb¡Íó*TØØ±$¼&É"ÎÛ º@ąǶÉÃɹ‚mXXÙÙW6”NÖ0Ø:Ó3iõm²þTYØ¢/˜ÀÕ¡å\M¯Ôl<ìH^³4Wñ\5V”g–VƒÀpyÇŒK6nÊK[¿aÅÛF~M:¶Ž?'U[¿ÖåÀÖògfÙ:~Ìgëù~ú¶†?3ã…XLPf´Iv›»ç%JºKKÓµñ9©ô¡”úM`_±ûOŽúN•úI]î‘ý@_‹œkWàu1äú›XÃÌ1Ô“­I=f¬â¨áÑú=[s?×·i¨VO‰IˆD¢Ñ€h ¹ÀÏ ýÑÿ?œÒéú¹Â°þIEND®B`‚manila-10.0.0/doc/source/images/rpc/flow2.png0000664000175000017500000007367213656750227020744 0ustar zuulzuul00000000000000‰PNG  IHDR°W›c¢ÑsRGB®ÎégAMA± üa cHRMz&€„ú€èu0ê`:˜pœºQ< pHYsÊ&ó?w#IDATx^í½ ¸$E™¶Íõ!CÓô¡±›foA•Æ@<Ã("È¢¢ö0(=à‚,Úþ.lƒàö3ŠŠð1Š Ò #8 à†¢ ² (´è àÖàŠÒ"*îùÅSœ÷dVFUeUETÝu]yÕ9™‘‘‘O¼™ñÞõƲÚj|PPPPPPPPPPPPPPPPPPPPPPPP`ì˜p ,vÛ©ÚfÏž}ÇZk­u¾ýÏ÷cº°¡6€ ä`³fͺqbbâÒ©²ï¾Ž}+‡(€(€(€Ù+ h=rbýõož1c­ßpЋyÓq'Ú.ºô3­o64À°l ?8ç¼ Š3Ï>·õ?êØ%Úló-~å~”|xÝu×ý˜{ïOfßzq(€(€(€c¥À ÷y“‹²þ|Ñ!/ÿ­`õ‡eClÀFØî¼ûÞâÝï;«X°ð+Õˆ¨ìXµûÜ,     d«À®³fMÜ¿øˆWÿFÎ à ¸cØ60~6pÙg¯*•}âŸx™kÍfdÛ¢Qp@@@ÑUÀ9*oxÚv ¼mùÝ€ëGY€‘ñƒêœ:ïÖÞwæüqΜ¹ßq-ßüÑmý¸3@@@ì˜3gÞÅ/yÙ¢•÷Ý¿x^±lÀ¦màšën)6Þx“ŸÓ¥8»¦£    Àh*à~]ÿÈ›?ùÑn¡ç<¢;Ø6€ Œ¶ ܳâbó-¶|Àµ‚ÛŽfKÈ]¡     d¡ÀÌ™k¿jÿ_ø+œÏÑv>©_êÀzµ /™3wî]ã6‘E—V!µTÑ2o[êþ®[žãŸ_÷7ù¯ªw¨WÎú\Î3˜ÖÃMiP«À®óJ·aÛ^[Îdž°ñ°Mî45Cñ`[«ü¯¶ÂÝÂ"·MNm»FÜ’¢Ý–¾Óoòo¯]Îúh†ð˜òG˜IPP 3Ü"ö?ÐØ&Ïñp<©gêÀš°ÍR¯¥Ö2kò†]\ìüa‚ë„óFâ.¸ @èTºãÈ6áÈ’v„ ŒŸ h‰µµ×žõK×îÔuí´iåôì(×.÷†(€(ÐwfÈù`×ñs< êÀš°¢°·ÓlÇ’q     üME{?¿‡›pbÈgÀ°ñ³ýºÖZk=Lí- QPP P`ÖìÙWœsÞŒ}eGlÀ°®m`ëm¶]éš&“‰ó2Ø8H…(€(€S`ÆZ3gþZkú5¿¨ uNcØ@S6°äÇýi5Ö|'íl”l”L$B@@Ç+°«~5oÊ!œalÀÆÓ4‹½f³§¡R€’‰D(€(€(ðxpЋÁáO‡“z§Þ±l )`lG.Û‘\$F@@¿)püQÇ.ùSS ùà cØ60¾6àš–‚6J6J&E(°Ø¥™ˆHG@ fΜùÁ·¿ëtÆ¿2q 6€ `Ø@Ï6°ÞìÙ¸Öqþh´}½ ¶¯òŽUæËÜÝNŽÕs³(€ã­À¬‰‰»®¸úºž".ãq¡î©{l0`£} ?Z*Ö(Àb"(€ã¥ÀÌ™ëƒU2üáé²Ï^UÌž=ûކ¬€mHH²AÈDǵ)Ç•|°%l`Úø°«ޓì@Ÿ.†(0j °8œ@6€ `MÙ;P/€`epD`úØq1@¡+Àâ¸6帒¶„ `ì@›u€`úÈq1@$`q8l`6pßý+[Ýæb·K.ÿB_Òþ÷§>ŸïeWD§½´ƒòþwùæ6K<;Ц€`Ø>r\ P XàeðÂ5°³“ÞúŽbóÍç{>ûÙQÛ¬Y³ŠwÞ¹6í;ìX¬±ÆµétÝ6Ú¨Xýõ£Ò*½Òn·`ATz7lTº7Þ¸£2l°Á¼bé…—ô<;ð žAv M; À°}ä¸  @ °€Å [®3Þ¶ö¦ãN*N9å”"ö³îº³Š+VÔ&¿þúë‹9sæÔ¦S‚#Ž8¢X´hQTZ%Úd“MŠeË–E¥w/ô¨t–á°Ãgž}.›D‹Ùh!¹Üfô˜# À°=>DœŽ(¡ìxCPIýÊØÇø€Í°¡ìO‘W¸lç÷˜5 À°=>DœŽ(¡,3(€á:ãmk,›aÙÏ"°™ÂgmËèôóÑ"o@‘W€o¨h¢!&l(ÆXväÔÎn€`ïèÌd*S³ŒNCB’  @& °ÀG |;éÕX6“fqPÅ`XvPO×A-XÀ¤W0á|l(ÆXv´ZÏžï€`›Ø+5îÚ³E’   ä¢ |ÄÀi°“^m€`siTΑØ£ŽYRì²ën¥Û!‡.ndVí}_p@¡­×wÓ0Îox ìÄ€l–Ë   @ °€É0o®9~vÀ°i´zÉ”b¤ö›w­(®þÊM­Í`Öþ¿ñ¶;NAàÇ>~i#y ºMj`“1j ‚(€Q€?tCÍõ°1Ù À¤QËç"# °þ{ÿ„{[+[Öø ÷AWð‚ï÷ô`¡óýó,¿¦ ¹_í›ÏƒÚï’*|~jÛš’{r×§ßö™lþ,pѯš|±-ßX6Ù†p8{€=ûœó‹-¶|òt7cýíGT½ŠÞZ}ûÝC0ÓêÿTÛ!v8ݰ®*HªúäP”¸€<,›úuX #Õžr–m°ìм´ 0Ö«Hi¬ öîÀ*ES}® §ÿ÷VitLiìü”»°i=Œý,Í<—ù]ý¼y£À8*ÀŽ$}Ôgª6À°ãØÆ¶¹ç±XÁ§à4ìþ«}ê.¬ýX¥óÓèøiï9£µÏØvÝ”S|'°ãó6˜ïnU;@`žwÊ4zv À° 
6]£ÕØlÙ¸Ø:€õ¡€…Ç`ôï€ý:懠;z üQ§)Ú À¡‰Kù’l0±“u+¶.ÃUX%›²iS6_{hR3šÌ,ç¼X`'EØ¡L£g—,›s[Ù‡²5ÀÚxVÌjØ ØÖ‘ÕlÃj4¦UZ›yØO¯|üè­¥Oµ-i¸ ññÎ>5Ô’O‚ ° VJÆEÒ`>Nvô@!Õ›r·­°ì°]×à/þ‹›G寫­¶ÄýŠ£¯y]äÛöò™\°ð+S·VOÕŒÂá,Ä6þÕÆÀÚDNöíÏ,æëçÎXœšF ì2gD“½çöO¶ÚŽcÎìT­°ã ©5ê”gtí€`‡ål`ÝVØæ`öÊ©}3†U¦.#f*ï‘^™³XEP«ÖeÕ~MÊäGbýYˆ©Š¸ê»lØpŸþ/K›ZÛÀñÉð¥Ø >â—`ØdׇK­¡¥<£ •ƒ¬[€–_¬²:˜]ê"³û «l]\w©;gñÔyYl·ï™²1°Ýæ•Úyl–Ÿé)l¦—h±X€ucŠRkÔ)ÏèÖ À謯 *{¿ƒÙ3ܾm‡UÎÈë* «.£»ºm¤vßð¸etF¥`#­}’°#P‰ Ý Ào,60@`Øaù1ÀljãeCé4Ž÷f·-Êa ì¨@g“÷Àëm0øë°ƒ×|”¯À°ÀËá¥É†Ÿ¼òŒÒ°yìÏî_Yüî__Uüa÷=£¶×XãÏ8¸rÑÌe©lv8ù€ÚÉ߉Œ—-óǽãi ¶ÿïÄüÞ‰ì(#ƪ÷ÀŽO]âNX€`‡bÏßo½Ær;ì°Ã£¸ˆÏºëÎ*V¬XQ›òúë¯/æÌ™S›N Ž8âˆbÑ¢EQi•h“M6)–-[•^uóé´ ÏzÖîCµ•ÅÞäG€ß¨¥}ÔéàÖ*)\ÿÝ¡ÖGøîXkæÌ?°¬º“Âyæ+Àv®gT+À°Cü¦ëlÓÍ6/n[~÷ØÙØ<#°?¿ûÞâO[m3=ƒï¨iÝý¸ìSËï¤66v¡"°O}ÚvD`3ü1–ìø ;>u=ˆ;`Ø„ ¥l©€¦‰üÒ†d6&NYD`Óé°Ãgž}îÀßWݾGÖ›=Ûõ îy}ÓFý€.ÆÀ^é õHw^*kƆz^5‘Ó ›öû¾ê9`}ēΠ€Mºz²+ ÀÌ!¼Ï!{Ý’73fÌhu=ÛjëmÙ 1ïÖ¡Ï`ØNº1°½ûu뢬ZNçb­‹\Ú‰Þ¯•ƒë‘¼Z7ëмªœ…øê¯ÜThkê]8Îù°QÏÈH$j `ÝOVû¸íT¶l5ÐĽ~Xv ðÒ /)æn°A \×\sÍbÓM7?õüýË®¤ãì´èÞX€íµ ïìü2€uÀªes–ºcu–[c©W¸œäÛvò´*òj=€=ûœó‹-¶|ò*Û ÿö¶´¡£Ú^°˜}Þiõ+Õ‘MÜ‚{ò­s&Ù²Õà®ì€`ûÚøÞpËòb¯çì= «ÿøÿXÜtÓMÅøÃâ”SN)fÍšÕ:¦¨¬¢³÷¬x ¯åU' Çû`X¶V¼ƒ, `°®˜Zëu²ƒÓû•´€ ËÒw€ýØÇ/mµU§½çŒé6JðuÈ¡‹i³zèIÀöë±á|®_]wÝâëÇÖ™S?:è¥ßë€`ûÒøª»°&«±îÂsçÎ-þó?ÿ³®þöío»xå+_9 ¸ŠÒæ4Î-GpL¥Ì, ÀöÚ„wv¾{Ñn«­³³úž: €Ýe×Ýjaõ›w­(‘Õæƒ®Þ¹ßÿуÓûtLin¼íÎUÚ_ýoçûóD„Q^óó×ßÊ_­´ŠÛ5-?•-|÷[9²*ÛÊÊÙdÀöýù½ ‚®ßxãâî»ïfËL¶ùçqæÌuÇQ›lˆ,¯‹.ýL±™›aV¿Vk;ú裋ï}ï{Å~ô£ÊMKt(:kçìô;W\}]_àº÷LžO À°lómY†9&°‚?un7ù àSi‘ô xõ¿ÀRíƒÆÌZ÷㣎YÒJ§ÿ ,í|ÓùÙÕyjý1·–¿µ;º–m:_éu¾öé;æ·SäÊkßÐJcÇmŸÊ§ý!à6ÙÞ°>±Ã.2›/¸7°MDq‡mÊ\€í@ÂFìN·Ì„ß]xÇw,®½öÚâÇ?þqôvÞyç9øÝldzÑK åÛdƒI^½×u°,ÛHó•{&ÉlD†ï@A`ØX(€4€Xú¬ã] ¡ÔòX]ßÎÑ5•·A¯ Úÿ_@ê«ÎÓq‹Üª,~ú&Þ÷Uy4 °—;æþ@PþX–‡äo °ÝCº ŸôÖw¬Ò]øŒ3Î(~ò“Ÿtµýà?(N8á„UÆÇª;²®Óφ”¼»·n´`X–VØ)0[¡µè§ £ °9‹œúÝ}cÖïflê_ËRÁ«€×º[ú:˜îæ=_wNÃË5 °ì8Øyì=°ÝÁ‹º k9ëú{Øa‡µ†$üô§?íy[¾|¹[ûñ°é¼Õ-Y³×5ˆï®.­ À°±-ÔH§K`Õ½7Œž†ï˲ˆ¥Uµ.ÄU«ý‚9eYä¶® q¯«ó-:\ îGÀŽô³ÝŸ›`ØþXVž¹°Aºõjù×í¶Û®øüç?_ÜÿýoÊw§vš¾Ö³vß³¸æº[Ùf~ì‡#Òiž, ÀæÙ^6\êäVﶺIœü.¸ö.ô»ǬßmXùÙ$O!û‘]+['[ÖÝÙ° [x³;Úå½vóï$ë;IÜKZ€íÅ~Fí\6`Õ]xÖ¬‰Pjœ·¿ýíŸåoú¼sÎ9…f3žŽöþë«›1İ,;j-iW÷“ÀÚ2:‚;ëâ«1£6–Ô&M²cŠ,úÝŠëVùÛ¹–Öþ÷#²Jg“+tª ¬­gëÇÕß6áÛ•ü¤IwÅïôùª[»ü¯w›¾ë>—¹êvŸùu™Ä`Ø;—4l=ÀjÖ-¶Ør _øÂßúÖ·ŠŸýìgÛ´Ï›ßüæéñ¶é·¿ët¢±‚, À°ãÒ¶½Ï,V°hàg³ ”~ÄÕŽ…Ù2€µq¨:߯¡ú3[¾þuuMýïOÚäç£s4I“ò©«ý6Ñ“/Ì6 q§½jºIOâúw€àQp~Îs;N÷vV¥«¿Bu Ewuí #29t v«’ ^™9€`#ìql’°õ{Îy´àuÍ5×,N?ýôâç?ÿùÐ6­»ï¾ûNôÆájô­ÒÊq²nÂ*‡_~¥ñË%}tܺ+?âªc!ø›6óÝDÞ®ue¾~ã[K^°å¥êÎmMØÁR3¬qÿ`;Ø›o¾¹¸æškнèEÅ.»ìÒúû—¿üåÐ6­+¨žz; W¼û}g±‰:G, ÀŽ{ËÛº6Ñwt¿ ÕÏ·a€=ÕÙ“8©í§ ²bÚï|àˆ¶ËÜ"£1cFý.¿vÞdIæe]ƒËÊ[–Î@ÝÏV÷b‘Óª¼}8ö6Œ‡Å•F¡Ž–FÓ¸¬&J`¯¿þúbÇw¬Ü¾øÅ/ö ÜŸúÔ§ŠÍ7ß¼ÐwŽð®ºs[v gŸS€í`o¹å–âÖ[o-–.]ÚêÊûŠW¼¢¸çž{Їzhh›ÊôœçÀ¶㪼Û>;5fWÝŒ·öýÜç>w:¢*Õþƒ>¸•Æ6t}0¶nË~~Ú×T´·é|Øæ›h¶9€ýæ7¿YÜyçÅç>÷¹bçw.<ðÀâ»ßýnñðÃmûÎw¾Ó‚ê©6‹¹n|¬–ê¶Ñå¼z{©ÓhœV½N9唨muÖií®Køá‡ÿ÷_›Nùì°ÃÅvÛm•VéçÎ[,^¼8*½ž±º²vS†§»ÙÏÕ ¤Î®R9¾ÞìÙLšo°È1T€`e=u!¨€ à| 50Ü)BªãÖ¥Ø&J2£´®ÀÚo똀UNˆö[”Ó®¡ ˜,O;ÇÀ×®­ýþ„P:ן@©,]Ó/·ÑW›¹Øò÷£¿¼¸–ŸÒZwa~U¦ÉŠ÷ëV`ê«`SðiÝ‚°ª5èJïŸãlxì‚ .(n¿ývvŒI¶Hêº_}õÕÅ¿ÿû¿°Z«õÿ÷‹³Ï>»ØØM§nÅZ;ö׿þõÐ6E‡÷ØciÝév)®¸úºlœâTœó&Ê1®«nìê»ýó¿ÖxÚc—¼±xÅâWFç{è+/–¼ñ¸¨ô‡º8*]§e^9Í,ÀÔ`Øh€<†e0¦ýIZ8“þ×~[†Ç,\ç„KdžùØXÛï«Uví°|:Oê_§l‚¦²tÊËÊ–Ó"°a寭L?Ú¬ãeiÃ¥}Â7ëÖÆÅú‘MA­¢®6†ViüãoxÃZçÙ>`õ· µéHi¿ò»–IœoØîVÝ„}á ‹¹®{î&ë­÷8€½ë®»ZÙ×¼æ5­nÅŸùÌgŠGyd¨ÛÅ_Üzè‡.m‹yyqçÝ÷²t‚Æ`›€ò¨_ [#¶ñfº]†ìßÝM>[ÃèB\fHí&¨%wp1‹wpÊã’ÖM:ÕIÞuå`§6«*@m°6ópÀæ0öÕÊÀvòHÅ¥`ëÂ0ûùϾ8ä ƒŠM]·Å÷¯¾zq—ÁM&&JV3«ñµ×^[<ûÙÏ.ž÷¼çµ¢³¿ùÍo†¶iíÚO<±5.V«ewM9/ìà´nªÎÈ'¾Îظ¶·¡T,+SŠêB<*k]†c–ì©zΚØv݇u}v `ȩ߭8„UØRóe)YØzÇÌØ£Ýr9®:øs†T¸Ùåjö{ßû^¡õZ?úÑ[n¹eñ–·¼¥µnìoûÛ¡l¿ûÝïŠÿøÇÅ„oA,³×ÛASÀNë¦êŒ|âë €mMã²`ØžV]dýñ¤qf7üT«”Ùy1]©Ý¬ÂdW—ë`4³ªñª~·b«åw,ji“:•E`•ŸØŽi²'ÆÀvb¶ù§m`¶â‘ìŠêìQn|ù[ÖZ«®¶u°+V¬(4±ÒñÇ_lºé¦Å%—\2p€ýãÿXüõ¯mõFmÀØ`ÛÃÐ7ïZ1’ï‘q`v ~;à÷wSÏq*]ˆj­ýº˜óbNýËj«Ýå¾w›€1¥ëV€i3 Û,Ä>€ZVÇì¸Òû3 
ûc`-??}ÙÒ<ýÓÚi¾×2¶ñg²I€ýóf›¿ßoÿâWn–ÛŸÐr-aâ§»gêf`ïw¯ëºã~ùË_nyµIœ4Öº[V{ß}÷?úÑ -½³Ï>û´º/_¾¼PT´ŸÛþð‡¸>ôÐCÅ5×\Óš-€,5帰՚_ý•›Š-¶|òHì÷ô`q¿½­ÐwS¶“C>lt3}ªKÙMàÈ¿ÀäŒký~ÁÂg¬dËKƒ'm±å#n™»£­¥}®»7týáf#€õ£ f¯tÿ/v[‹-÷zsc°íƦª±faÓïB¬(¬‰5X,Ë·*m§€Ùïôl¯ÑãÏo`íòW·þèïþõUÅ/™éV3î^tégºr$C€ýÄ'>Ql·îº…u!Ö=_î¶M×^»xÑGßøÆ7Zã\ëV]xúÓŸ—]vY±ÕV[¯{ÝëŠx xôÑGÝ®ùË_Šßÿþ÷Åm·ÝV|éK_j½;ØÁë€M` 6›Äo»³ãún2ßÔó`£Ûi×y§‘`Ѥˇ-O æE[ [­@°æˆ:}ÔmKÿºÚjû4$t7Ù¢usbxŽ èz·´E¿akù—F9úuM¶‰'`Õ<ú°þaŠÌþÆ-ñ [FcXΗ&)Úm÷=»Z:¦lc^ùÊé®ÄÙ׬±F¡H¬º?mà ‹sÏ=7`­?ûÙÏŠw¾ó-ýÈG>Ò‚Í^7ëŸÿüç¼Þ{ï½Å7ÞØšL €¸ší°éì(E~‡õ޵ë°ÑítS}AެD`}§³ìo²+Üvš;¶m®fÀÞ-¼°Í?uƒXÿ]ò‘Ÿ}n1Œñ²š¨hÇvîxÙ˜ªu`ΟßêJ¬1°êB¼•‹:»V¤5+ñs×Y§8dß}Ûv!¶¬ìƒ>ØŠÜrÈ!Å®»îÚêú/íf¸ª»ðÊ•+[á¯}íkl"c¥Øx€=ûœó‹}_pÀ*?|i½Õ]vÝ­µ}ìã—¶Ž)Ê©ÿÃñ³:×" §½çŒÇ×¹Ú¯sÝÛµõí_OùÛµ´_ãÖ Ô”¯Ê¢oS:uöRûý2é~,¿0í°Á³©ë°Ñí4- k`ë6èb|‡‹ÊéöMädZ£ °ýŠ|¦’/Ûü“6h€îb¼æŒâѽ¤x¨Ën½Ý:[r@÷ú§½;Z6¦ `5“ºÛ2:»Ë6ÛLGf·wûéOºµŒNÙØ2€Õ7šøª«®*vÚi§Ö²?ÿùÏ M¼³ýéOj«º!ëºßüæ7ØDÀ•l}äÛ„ öÔWûL;ÁŸ Qûì¸A¥Ò F-­öûãiõw4 @:fI+­Ž[‚WËS׳4ÄVV¥QY¬<þ5üò+_K«² ný{ëö–Úylt; ÀFKE¾¬&=r‘ɋݶ,ãmE'ÀìÅféâÛo3`‰À:»£ßvÖiþîy:hïŽV_ý¿vQIEF{Ýþê¢Ý¼Cþ2wƒâ·¯=¦øÅ-ËÒÅøÝï;«8ÌÏuüªöÖ[o-^÷ªW/]sÍU–Ñùÿ^ûÚVVãb÷Þe—®V«í½ï}o±Ùf›ïÿû ÁiÕfWu¾ÿþû[3+š›2ÀÞãf­¾óî{£ë!¶¾ROG¶>kp((¬RíTZÄT p-½@W›ý¯¼ªf8.ëB¬¼”§oOÐÚgçøy†eð¶  S·×NËÀF·þl´T$¬Qàbw¼»ž±UãG»qær>Ç9à+•MÙÔØ|öxêæ5Š›º¡æüì7Uv÷ËBáÞ©ã“O}W<µدýë…ºoâÖS,Ú,Ä;l±E«{ñf3g¶ÆÆF`Õ8Üt®"± .lu¬ú›"®ú<òÈ#­ÙµŠMu'žß7‘Ê€Í`ìBÜ55—Í8ÿÖš±ø—+/¼d Ëïš^æ® Bu{ÑEOÊSVØSN9¥xµ‹H¿ÿ O(ÙË:Ø÷žvZ«û±³Ë–-ku+^´hQñ“Ÿü¤ÕUX‘U:E^SX?~“›ÈKðæÿÀÖw«µÓQHgPhÝmý|°‚G¥ñ7?Žëܰûp6lsvÀFûl´T$D6 tâ@OÍL|¹;GËìLä$, Àæd¯ý.ë°ÆÀjfb-³ó7)J“ÙÄñNáU׬XM’¤5]ýì-·ÜRÌuÑוŠÂºåun¾ùæVdÔ_6«™ˆmûçý÷/6[o½â‡?üáô>;öḘ;wn¡ñ²ZÓUß)¬f~>Ç­ ¬1ÇU½&ØæÀ¡‰çcØyø i“5ù]€ëÖˆµ ™tnØý·›l8fÖïVleõ×y­ëBìѶÖýº>Ýj°ÑR‘zXuÖr:«ç¶Ö†íª>sX­Õ¸ãŽ;VÎ×ÿáÚ°e0)Ϙt©LÞdåålWÆÜÀIƒØ?mµMk9_\wKtô³I'«xí`5þô{îY¸)ÓQØ*€ýðÿý¿Å¾ÏzV«û±f$Ö¶è/(¶]}õâÍn<­íó¿7ÝtÓbùòå«DhS‰ÀÞàÆ2kŒ±[¤½¶+8 ÀúÏx õÇœ*u-¶¬Mždy¨û°Ò”­¿Nòä_WQ\ýÈâçg]–µO׳ÿ­‹²öé•Q_ë>ìÛõ»ÛùšJçÚdTM¾ãRÈ €n˜Øh©HˆìÔ²9roéÉÄ[ØD °'Ÿ|r±ù曯²½á oˆ^Ʀn]×ðøsŸûܶÀk ¨2éÜÔµ®<lOÀªyô`ÿè&ˆúõ»Nê°rêä`vÒmØw»‰À `µìÞn–b­»©ƒ¹»îºëqد~õ«Åönü¬@ws7ñÓIo|cqß}÷µöCnŸŽÝpà ƒXìí·ß¾Jt6€¸VE[ÙÿØØn½&°J6R+ƒ-IcåèùËãØqƒT™;GûÊ"µÚoËî”Ý·Í2ì/£cÝ–•_x-¢)[¶ŒNØ ÚÊÎ2:Í·g™åÀfVa7Q¬ ±ƒÖ»¦ÖzmºÝù.=ì= °‚UÁ¢™€VÌ?øÁ(xì`u­Y‚lÏf42ô`¿ßþÅÃn¦ßŸÈ ³Ý¬€uS× Xt½~­µŠ“O8a€ä.˜7¯u\ó<ê¶ã\·ão\l·õÖÅR÷ÿ•nÛËõ˜øéOºÊ&€U×eA«¿¥0Vp&Í4QS]v#°©A㨕§löà~ÜcÙ¸Ù~\'·<‰ÀF»l´T$D6 8j¡ÛŠ)~F`–ê¾kQXl³:¦ÈŠÒÀêÁ¯Ž]pÁÓ€®Žùù©›°Î Ï3€}Ï{ÞÓ:VÔÊ«ì\•Ëʯ2i«榎ëÇ·5òCFŠF>Œ25 °ZÓõgnÜcnW]y{Øc]àS\W`Ea7sÑT= а 4÷r³ a57û´Ö•=ðïÿ¾5‹±Àö™=ÿüó°š9Û6ÔS³+2Nà¤óXº×=·®À~Ô³“ó;I À–Û.Ý’°ÑR‘òU`$V0Žqõ££T9yÚwðÁ·6?‚¬ÀÒò¼ê¼Ã?¼œ:×ïB¬tÚ§ãÊS0kÇõ·ÎÕy:®¿}ÈÕÿÖ]YßMj]>lóp“Û‰ó—SÚ^ö+_ùJk)è+”ž~úé-€=ö¯(Þåþ×~m¯wº½ën|yÉrJ¿C«Ù‡µ)«uh5”¿¥°~]_ãÆ?k6âílßzï°l“ï ÔDIš¼iT»÷R'lt; ÀFKEBÈW‘XëBlQÔX€õáQÐiÚ`}˜ ÁÐàÔöûyj_¬Z´ ¨ë€³éã ¬ã>R€­ˆ^öž{î)žé–ØYæ`í·mçÀó?]Ï‹\4Öàõƒn¹Wp@kbµçï¶[±½ƒÙdµÏ«9¤¹Õ¶É&›šéX³‡[êëÀšs¬¥uî‘næ½8üœ[ÿ ¢Q>°Ñ¾- k8Ã'ñIPlÖ"¨6™“éŒX ­€Ð&†ò»(úÐ*HÖµ§á¸Øp ¬žA±uö£°ÃžªA€•6|بîνì»ßýîVôÕºoíUã]õ¿Àv¯í·o­kp*ÝnþüU VéwvãiuÌök_ûZ+šn¹,P’”PWÔU¬ °Ñ®…%²Ú¼èÔ$Djœ+‘Äd»ÔQ‰Ù¬ÁbÙäJݬÀÔ ¶]V°©´êâk]‰«´` fØÑ{6‰ÀÖ;¦1«Ù„ýu`5A“&qRöÛßþv1×ìJ¡Ç»ñ°tÑTÁ«ûù½5‰“އQÔ—í·_k'‹ÒìîöÔ§¶Ò*«µeµ®l¸°õu댓-±Îl€=?;J^6á*Ê`«ºÒ–uó ÇÀ†Ø2Ø,‹Àú×4еµ_ÛE`mìl»ÙŒ‡5‹1ØæŸP¶Þ9k°þð‡‹]¶Ù¦Øbà +ö»ßýnñ//|akYRÁì?<ñ‰…ÆÈ–EQ_ºï¾«¬ÆÁj&ã­6Ø øÖ·¾ÕØ›nº©¬–mßùÎw A´ Z³+Z{ã7×^{mñ¥/}©ÉýÜç>Wl´ÑF­žêÎ;HG]×{·›©Zcy]®Uoïh„F½ØÛ|;MŽ(P£›°‰Œ$ÀZw`Á¢f¶IšlV›ÄÉ&`R:üj»¬ÆÍ„Z:û¿ÀÚLÉþ¤OÚNâ4ŒudØæŸP¶ÞY-Ø}èCÅÎn©›=\w`EJ7qcZ«"°Ø×ºI›þÿ5לؽ]úw^qï½÷–n°ŠÒªû±f0~ë‰'¶"º:G«õaÕõ¸lK`à œÎLâTo½Àç¢ï m€m¾&G`óµ‘XƒE«èªºüú«ÿm&`›ý×¢«ek³Û´–·òh×…ØŸMXùª{³[×¹éÉšªò`›€ØzרsÏ=·ø‡­¶*vwàªñ«Ö¸À^zé¥Ånüª?îU]˺ÿÚ¾—<ÿùÅÓÝ;ZGV3ë™ðÓÀ ŽË¶Tvé…—T.¡ÀÖÛÞ áƒëQ'½ÚÛ|;MŽ(ÀækYl,ØY×Þvé=m×­·êÜ^"¥º^LÙbï³×tló0[ï°úû;íTœŒMU”´ `ï¼óÎb+×UXi¬û°ºoàºkáª.Àg½ï}­¥­ªŽ `5îVÙªm]ˆïYñ@+ªzЋ^R¬¹æŒV÷ävÛ1¯c«ñ8mW\}ݦªîz)ÎŽÆlóí49¢›¯ Œ4Àö vãr>Ûü 
ÀÖ;y>À~ò“Ÿ,žìÍ"\=Øõ 8jÒ&¥}µ[ëuûµ×.Nq“9½þ裋ïÿû]mXgÕDQUÛ ö¾ûWo×éÅ^ÿ´w-¸Ô>y«mŠÝvßs¬6ý ÇsõÏ95£Û|;MŽ(Àæk¬ëB8. JâÁ=¨l½ÓŽ}µ[õdAµˆªºïïÀtÑ>û·Ýv[kVaÁãÙgœQì?kÖtº÷»õ^wËZ=ÍÁçåîu®ÃZ·_»lÙ²âþçÚnÚÄI‘Øsλ ‰å4¨ŠÂŽãØM7Û€%;²Qxvpí7WB)˜Ä)aS`Ø‚lóO(Û9À^wÝuŖ믿J·`Áì8@Ýa³ÍZ³ûª{ïv®ë°º ëØen<ëóvÞ¹5^õ¤ãŽkÍ(¼·ƒÞ|ä#¥cX«Æ¶Úþ7Þ¸5ƒ±`¹Ý6,€ £Y]ú™Ò±°l½ýD£œl€m¾&G ›¯ °,ۇ瀭wŽËf!ÖøÔý]W`ëBüz×%XKãÜá¶Ü>mÓM§'yÒ¾7ß¼µü&Wúú׿^Ì9³5{ñþÏ~vk_§›öË_þr¡1¶í¶TÖwÀ5ñIo}G±Ý‚íÇrb"°õÏ\NÀFYW­O6º¡>Ã¥œˆNMB¨V€lÂÖÀ°lP¶Þ™®ZöŸÜLÝê ¬ šÖtã\7uccßï¾²Wz3ïäÖˆÕ¬þdKW¬@VÝŽÛMÆTvL{Í5×´–î©ÛR\vœ~¶þ™gûÈýÞØè†Ú5«É·åƒ½*Àöª`ŸÏŸl"uC½Þ9ã>ž4ÇûWݹM/ý^?“ÇÇ)ÀÖ;ÓUû™Ï|¦ØÊAë]S³k]ÖW¼øÅÅÖnÝV¬@VÑØ/|á {ß\pÁÅ® ñë\WâßüæŽßGØ«¯¾º¸ãŽ;j7¶¾Ž lZõ1Ⱥ‡k°Ñ®- k`ÇÁDØ|'ƒ`›BØzgº `o½õÖâ𗽬xßê/££1°Ï~úÓ‹Í\ãÿüÿ¨œ%X“8)‚«IÚÍ&\vL{ÕUW·ß~{íÀÖ×ñ Á€M«>Y÷ãp-6º`£¥"!‹ ¬À°î1 ;õ.`ëiØÍÜMZFçæ›on­á*€½é¦›Š9.⮫ÈèÅ_Üš¸j;ú5¯i-§³½‹â^vÙemÓ†y`µF³ºÇl_ûÚ×ZݘµôΗ¾ô¥Ö¹í6Ú¨5C0˺ÔÛASðÀNë¦êŒ|âë €v´Øh©HÀblÆciŒÀª»ºG-e¡%aæÎÝ z«;à\¼xqqýõ×·V2½ï}ï+6ž3§5U*iVàºåmt\³Ïu3ÈåûϸÊ9ýèG AgU>Ø+¯¼²U†˜ €wÀû +l:uÑïºÇüØh×¢ €pW;•-[ &£­¥}BçF¬¶mCy‘Mª %›ªm£\D`ãœéûî_Yuì’bu7I“@v]7~õôÓOo¬ºè~âŸX`ë–·±ã{º®Æk2'ÅU>¶ÿE{ï]ìºí¶•Kä`¯¸âŠäÆnD`ãêºßÐÀ¦Qý®çqÍ€nÉ›ØÉ9sæ>ú¦ãN*ØòÒàe‡¼¼˜={¶[¤€ D*À°‘¦2ÉØÎœénY^<óY{´ VÛ;ìP|êSŸjM¤dX-—»yæ™Åþ® ñKÀ¾óïœ>O»Áÿù?Å[O8¡4/uýýüç?ßêλ°Õu¿€M£úU¿ãž/í:4° >cå¸Û\Ž÷Ùg¯`£¶À~õ‰O,ît³€²å¥êÎmMÌBÌÓ0¥Û3½ôÂK ÷Ë÷4ÈvØa…f!®[—5<®ñ«›ÎšÕšÉx²G»|”æ…Ï}n«kñwL³‡ç `?ûÙ϶Ƶv²1¶»únÒA`‡_MÖ'y±l—ûÐø¾ Ø.Ÿšq>m ‚lMQ¾ƒÂœ6¶Á€í¾U·â%o<®xÂT·âu€¾ûÝï®]—5\·UKï,qck '»uaŸñ¤'{ì´Sk؛ݦÿÃs `Íll÷õݨ°Ã¯ƒ¦ê’|_—D`£h€¥ qôã’wÂÉ&Šïfï9ÕÛ2¶<5põwdv@)ÀöîLkß½ž³÷t4vë­·.þû¿ÿ»X¾|yÔ¦_î v[vGÀªmîšk¶ÆÆ jßâÖŠ]âf,öóÀ*2ûÕ¯~µ£ €í½¾{…vøuÐkr~u°ÑÞ À°ÑK¾ 绢yË·þ(y¢ °Í9Ó]ú™b£7™Ù—¼ä%-¸ÔøØ˜MA)Úúr²êR,xÕöèTWâË/¿|:¬þvº±ŒNsuÞ ÈŒ+ÀªËÜî{ì½=m»Ånn¼yÝ9»=k÷â©OÝ®6òÙqÇŠíl•Vé•v—gî•þ)O}ZTºNËð,W½ïÆÖ†qÝØ°,ý¸ä›€Í·î(y °ÍÂŒºŸôÖwLw+žá–É9ñÄ‹Ûo¿=z{ûÉ'Ož˜Xb•ÝeË-[3+/¬ÖŽ]¶lYÇÛlw ã °š%UKPÅÚ¬fúÖZÊuéÏ:ë¬bÂ=/uét|¿ýö+öÚk¯¨´J¿á†gœqFTz÷šJ×iž÷¼} ­EÝ© +=Ýà°,ý¸ä›€Í·î(y °ý™;ï¾·Øg¿ý§£±O~ò“ ­íª¥rb¶ÿéŸZc`- Û뺿öå/o/€ýä'?Y|ùË_îx`ûSç±À0Î{Ê)§±Ÿu×U¬X±¢6¹ÖdžãÖaŽùqÄÅ¢E‹b’¶Òl²É&-(ù`c>–á°Ã° ·¡= €`Ø \N`s©©<ÊÉKcªžØþÂŒºMn²éfÓ »Ï>ûW]uUkýØv[À^©õgÿîïZç `/½ôÒB0ÚÍöÅ/~±øÜç>×ÊGŽ·ÆñÆéz³6ó ·Ö2+¥Ø<œŠ.J À°Mù¢nʌնíÂ9e °yŒ.!§S€í Fbaîíï:½˜1c­,ª[ñÑG]h ªÍX7qYk‰ÝäPçœsNëumÀ^}õÕ]mŠÞxàÓ`­ˆqì½®7›`ØN¢ÀìÈ6Õî7ÉÕæõxw“¬ÛÛûxXíYÃËè87aµÉm‰Óû¤Û'aÇ4[–ìÀMxà ž†FAè‡>ô¡âÖ[o}Ü&€uÓmOƒk˜Nçj¦cER;ÝŽ=öØBKþLklâ°ñq¼. À°cêy4Ûl¦Q\¶ù‡!ÕØTk&Ïr°ìР튫¯+¶ØòÉÓ ûÌg>³øÂ¾Ph9ÛqËê(â*põ÷ÛßXÍZ|å•WFoZ£v«­¶š¾®–þÉivÓQ]€`ót,5 ÀÊ,‰À&øpZ‘Ø„+'â°ìÐÖ@ìÝï;«XËM̤Hèžð„⨣Žj-»£™†ë6ìE]T\qŵÛùçŸ_ì¶ÛnÓàºÕÖÛZògT€0·û`X6C¯!Í"' °ß¼kEqõWn*Ýn¼íÎFÚŸÓÞsF¡-·6@å%›æÕR°ýPu|ó`Ø$½{Vk÷= Á1ŽC:3U°,;¾ÎGÃwž=À*Šj‘Y®ßV阺+Úêw?V÷aEzíu[¶ü´©µlÃOBÂÙ° WN†E`Ød¡N‘ÒÃõ4„jÆàÓN;íq{Á—_~yk[ºti±çž{NŸ³Ùf›K/¼$Ù{LÍ™dyX€ÍÐkH³ÈY¬ÀU ©±²ûø¥­nÆ~÷bë~¬}J£o¥±÷uÙUzK«c)w/`Ó| úQ*¶ªŽož,›<Üi†àgì°Ó4”n·Ýv­nÃ×_}1oÞ¼â£ýhk&âC=t•eqN:åÅ}÷¯Lþþ )] €`Øñu>¾ólöìsÎ_Fmb#ª"©ú߀Ôë¸àOû|€-Ë/¥÷~X¶á'!áìØ„+'â°l6€wÎy´&`R·bm/vKìÌ;·Ðz®YÛÿ¿´ÐZ³)7Ú”íÑ€`Ø ½†æ‹|¹Ër¢Çl³تq±TE[ `ÃnÅ:nûüêå`X6»¦²î`UŽm§ V0 ÀfÚ“¨a€í‡}’'   ¤« ÀŽ:0¦v, À¦Û&°dݬŠxÛ´ ÏH¬º§Ü ¸—¶€à“Æ¥PFO€í¥æÜÎí€`ØÑkK»¸£^V—;Ímç.XøŒ•¼‡;[3¶‹'†SPPÀ`ókø†ÝðrýÞlFûê×S¼é¸“Fb;ü•GFÝÇnnù§ãŽ;.Ž^]ªu×U¬X±¢6ýõ×__Ì™3§6qÄE'ð¸É&›Ë–-‹ÊÛ½S£ÒuZ†ý÷? 
Øgßý£4NÁ¦f¬µÖœpÛ©lm5XÙ€N÷Í7ïw¼“{{'C?v|üpõõ_4>·Ë¢À``ókø¾y׊ÒÙkm¾QíÚÕo—^xI60RDKÞx\qðK‰ºìë_ÿú(ÈS"ö1©ž÷¼}ØÑ„á^öÎK¸oÎ°Žƒ`ãã¦p•ù®ênÁP AØüVëá… À÷ºÂüGyvÈAk9.× ŸrÊ)l‡QàÃ[\œyö¹ÙÌ¿ÞìÙ¸&J>Ÿö ôÒ…XA} ]ˆókÃõÎ`ÇçõÀŽO]s§T€\ã'è|ö ,ìàê¬×ºâü¿Õû»wÚ…€`ƒ8ØKu °¯û¸âŽô$N£üþ`û° ójì0ÕçÚ#«@¿V ‹[#$ðòÿ·ýê«cepgût^x¼*?åkù)ïNA»VÙ¹–oÙ}ؾª² `Ï>çüUîÿ¿0Ϫku°U‡š~ÿG>®ŽÊêÆÀVÕƒéPV¯í´òíÅ®­ï°>ªl¦Óú&ý`~`Ø‘mH»»±nÖ‡W]u «÷õ¾/8 ÕóG›þfIoïM¶»‡&dzØk-Ý2Ó}ªnú °—T£§¿Ýe‹þímÓP鳯Ñí;äÐÅÓ §þÿØÇ/måU–Ÿ`ÄŽéÛ Ñ EÓòk´X#mçêZ–ÖöÙ·Òú°¥ýÚç§Óq5þ*‡îÝîÑ.¼¿ØkuÒ…XÐlúÛ·éëÿoõ£zïÙêÆ¿×²zuô5xViUvÿ¦›lÅ~œ°û°ò•ý ˜öæ`5­ À¦ë ¥dݬ ºÐ+mßÖÚ/µö£¡Ú¿ýkú]1ù5 °êNîÛÅP š‹–+ÀbM* ˜àãÀ 2 |ÔðùðþŠŽ­4À³ÍТzR??5²>)ÿx;€µ¼…ÔõTf+ŸòôlS¾:Ç J eÿ+ÿ¸Ò”u!6³k*]̵bÖÊéGAu©æœè>Cm”οg¥±rÚUõꃿiaeÐw;­ ¸­ÎCgI×6•g*-P­r>X–Fw–ºÿ&zÔ¤ï¶ö|ûm–ßë©ÓÞIaÿýQ×CÈ®e=Œü¿ÛõбÞAaYuOv_M ÷iâ ì2gG“=Ú§÷I¶OÂŽi¶ìTÅ`ýˆk]÷×2€õ'¿ÑóaΠÑïz;öÔ¢ÅeNÉ5дÈbÙ}ÕÝK;¨ÄùåÔ¶»VUCiPêë¡H¦µ(¸E·ý ªº‰…÷VW¯~Ôi‚´Ê`åõÛî)Þqøõ>Ç{`Ø1õ7úyÛ}ز¶/|ÿtÛ;É~(ö{MÙ¦öC§ßþË—ðÛ.k³ýFjüžPa¤ØztY™ý¼•¿ßãË÷]š~ç°ý|,ÒÊ€M«>r/ ›ÀZwRW£o¿Ñá3`ýÆÐþŽébB™ß`•A°ß˜ÖAY;X éíæZU«E¼CMüFÛ¢´¾cPU^»NÀʉ°kÊPÄÕÀ†d???ºmðêwV^áýÐ-ý(, Àæî8$Xþ¾lD†mML¡v=nÂ,ý¨ßnXo›`ý^Ÿö°µËÖþ‡?vZï û¡ÖÚËðä¦áUù° >M}*Û'aÇ4[6€ »ÙÖE-c¶ÛÉ%ê"°a¾~ÃÝ4Àvz­v‚i•þÒ®ÿ»‰ÀZ”ÔóN"°ö ºEýîÂu‘Þ~8äÙ °ì˜úý¼í¡l¯½“ *ChŒÀú?x‡C”ÂbõC§?–×~@·<Âo?ßýl?‹´ò`ÓªÜKÀ&°þxÉ^#°Ö@ù ?.Fp呟.lüôë¬\8È~¹ éõó+ƒñË¢­Ý\«ª¡-ƒI¥µ_¹mŒ°4²û·c¡–ÊË[Õ5Ü Óÿ%½€5'ÂïöFÃÃ:ìf¶é~:'äýxè`ØÜ‡ËŸÀ†½ˆ:íä÷2 ëwuö{ñX{À&há#P$v*1¡[`XëòcѶ^#°l6&Æ,в_hCèôÇÓXÃæ«4~7ܘ¬?3¯5øeÛ͵Ú“?æÇô°û õÖÿv_VkèUVëŠU×…ØÎ±zí`ý.ÍV^ÿÚþ¬Ê~7e ±™Hi¿t`Ø„|€Q)J_Ö"˜6„£ìÝPÖS§ÓÞI–¯M¤h?H†ícÙØN"°ízY•uQî×».Ä£òøÅݧ©â`°~Ô΃ð×Z›W–?  Ò‡i•¦l=а¡QžjØêf1,k üò”å«F¶lLmx­²{WÞ~¶ÝL‡VŽ˜kÕ5´yüùù•éîÓ9a÷®˜zõÏ+[ãÖ/³ŸŸ@?t˜B`– (ÕE; ëtáøà €`ãšgRu @ßÖ~ôõÛ-½í‡Înz …sØ{Øz6ùËÌYÛc˺…c`;Øp¥ðýO¶Ë#i´l´T$ŒP€Àƒ„QÑ:ü•Ü"²ƒ˜XcT4Lñ>X6¢m&Ig ô`õ.ñgæµ^1ö#c7=†|€ gö'ä ׉gÐج«{ »û½{ØÎŒÔq °q:‘*N€]e™šþ˜2©ñ­Úr™×föï«]¶H3üRX6®y&U `õþ´µVÕã¥lÎNz …=x”gY)ëj«È©]ÓïÍSÖ¨®·õìÒõTfÿ^ŽY± ‰ö„Iœ:°òÌ“°™W`bÅ`Ø‘Ø&Rò>ÜC°lb~À°‹s¥+À¼ 10€‡wÔ ï€íÑò3:€Í¨²2(* À°îWõA6Ø\k¼õ`Ø |ƒAq…»˜|Û^>l¦íÛ‹Ùçwn¯z~wL‰û¥ Ào™6ü€pž À°ýjÐ3Í€ã6€Íô©¥Ø(0dX€cç<°ìÛýÔ.ÀŽqÀ¦ö8RÈC€]`5qCnà„.ƒ½Ül$¶¼,›‡{0°R°ì YÛ2—ÏdCy‘  @ œ‘pÙZ´™3×yð¶åwu4RÓøÛ,·šÕQ[|H[ï¯.-ÇÝv6À°mÜÒ¿ À6°Ç;sïuB°ôŸJˆ(€¦ûhk*}M¯¯éôS²~”GP³ÌŽ­ñ—Š”#_HÀ.^¼¸X¶lYÔ¶îºë_|qmÚ³Î:«˜˜˜¨M§ëî·ß~Å^{í•Vé7ÜpÃâŒ3ΈJïÞ©Qé:-Ãóž·OqæÙç&ó>ª{×›=û§Å|ZÙZX¶)€­56   ÀH)0î{ö9ç·ÖwÊô¿Ö¥«sÔu|ØåÑõ¥Ó î—ëä ©íêné…—»ï±gô¶å–[ÏÜm÷Úô»>s7÷ oY›N×~úÂ…ÅSžú´¨´J¿å“·*vÞå™Qé7Û|ó¨t–áY®W\}]6Ïí"°,ý¸P<Æ`‰´.²Š6î²ënÝú֦Ȭ"•ûø¥«8:Ç Në\EJuŽÒ‡cG혎ëoßÑWÞšÐ!tþ«Êcét]+g˜g]™Ô}Øï6­û´üÂòëº&TòcõÚ´ °Ñ. 
[binary PNG image data omitted (end of preceding image file)]
manila-10.0.0/doc/source/images/rpc/hds_network.jpg
[binary JPEG image data omitted]
?©ø6×þE¯ÛËþíwÿKþ:×òïÒOþh¿û¸¿÷„m}¿æâÿÝ£ÿ¿9ý#×òéý²PQàÏùtßû|ÿÒ ªþ ?áñÿðRßú:¯á%ðËÿ˜ŠþýÿˆGá×ý_ü*Ì¿ù´ÿ+?â=ø»ÿE®7ÿrþwü>?þ [ÿGUâÏü$¾óGüB?¿è˜ÂÿáVeÿÍ¡ÿïÅßú-q¿øC“ÿó¸?áñÿðRßú:¯á%ðËÿ˜Š?âøuÿDÆÿ ³/þmø~.ÿÑkÿŸÿÁÿÿ‚–ÿÑÕx³ÿ /†_üÄQÿïú&0¿øU™óhÄ{ñwþ‹\oþäÿüîø|ü·þŽ«ÅŸøI|2ÿæ"ø„~Ñ1…ÿ¬Ëÿ›Cþ#ß‹¿ôZãð‡'ÿçpÃãÿॿôu^,ÿÂKá—ÿ1Ä#ðëþ‰Œ/þf_üÚñü]ÿ¢×ÿ„9?ÿ;ƒþÿ-ÿ£ªñgþ_ ¿ùˆ£þ!‡_ôLað«2ÿæÐÿˆ÷âïý¸ßü!ÉÿùÜðøÿø)oýW‹?ð’øeÿÌEñü:ÿ¢c ÿ…Y—ÿ6‡üG¿èµÆÿáOÿÎàÿ‡ÇÿÁKèê¼Yÿ„—Ã/þb(ÿˆGá×ý_ü*Ì¿ù´?â=ø»ÿE®7ÿrþwü>?þ [ÿGUâÏü$¾óGüB?¿è˜ÂÿáVeÿÍ¡ÿïÅßú-q¿øC“ÿó¸?áñÿðRßú:¯á%ðËÿ˜Š?âøuÿDÆÿ ³/þmø~.ÿÑkÿŸÿÁÿÿ‚–ÿÑÕx³ÿ /†_üÄQÿïú&0¿øU™óhÄ{ñwþ‹\oþäÿüîø|ü·þŽ«ÅŸøI|2ÿæ"ø„~Ñ1…ÿ¬Ëÿ›Cþ#ß‹¿ôZãð‡'ÿçqí¿ðUŠßþ2üÿ‚i|@ø™â­CÅž-ñwìÕâx‡U¼[[UÔuÛïÉgyª;M·²Ò­®î­të yä³±·ób³·GD˜ñ|/Êòü£:ñ—aiáp¸N"Ãa°ô¡Í'N„0\ð¥í*Jueʤå9ÊÎrkv}YÞkŸðç„Y¦q«Çc¸G‹ÅV¨¡WS2têVöTaN„'8R§:táxÂ)ü(ü^¯ØOçÓ÷óþ Îÿ“ÌøÛÿfwñ'ÿV§Àúüéÿ$VWÿeN ÿU9ÙýGôIÿ“Ù™ê÷†Ï뎿ô@( å›þ Áÿ+ý¸gÿÛûãßÂ/ƒß´ˆ|ðçÂ_ð«?áð½‡| }i¥ÿo|øsâm[ʺÖ<-©jRý·]Öu=AþÑ{7—%ÛÇ—E×ÞxuÁyïdY®m‘añ¹†+ûOë™â1°•OaœfjWM:k’…t×,Ôw“mÿxÛâïˆÜ/âwdyâ²ì«ý‹õ\<.[R~³ÃùN2¿,ñ*µ¥í18ŠÕ_=IYͨÚ*1_¿ðøÿø)oýW‹?ð’øeÿÌE}ÿüB?¿è˜ÂÿáVeÿͧåñü]ÿ¢×ÿ„9?ÿ;ƒþÿ-ÿ£ªñgþ_ ¿ùˆ£þ!‡_ôLað«2ÿæÐÿˆ÷âïý¸ßü!ÉÿùÜðøÿø)oýW‹?ð’øeÿÌEñü:ÿ¢c ÿ…Y—ÿ6‡üG¿èµÆÿáOÿÎàÿ‡ÇÿÁKèê¼Yÿ„—Ã/þb(ÿˆGá×ý_ü*Ì¿ù´?â=ø»ÿE®7ÿrþwü>?þ [ÿGUâÏü$¾óGüB?¿è˜ÂÿáVeÿÍ¡ÿïÅßú-q¿øC“ÿó¸?áñÿðRßú:¯á%ðËÿ˜Š?âøuÿDÆÿ ³/þmø~.ÿÑkÿŸÿÁÿÿ‚–ÿÑÕx³ÿ /†_üÄQÿïú&0¿øU™óhÄ{ñwþ‹\oþäÿüîø|ü·þŽ«ÅŸøI|2ÿæ"ø„~Ñ1…ÿ¬Ëÿ›Cþ#ß‹¿ôZãð‡'ÿçpÃãÿॿôu^,ÿÂKá—ÿ1Ä#ðëþ‰Œ/þf_üÚñü]ÿ¢×ÿ„9?ÿ;ƒþÿ-ÿ£ªñgþ_ ¿ùˆ£þ!‡_ôLað«2ÿæÐÿˆ÷âïý¸ßü!ÉÿùÜðøÿø)oýW‹?ð’øeÿÌEñü:ÿ¢c ÿ…Y—ÿ6‡üG¿èµÆÿáOÿÎàÿ‡ÇÿÁKèê¼Yÿ„—Ã/þb(ÿˆGá×ý_ü*Ì¿ù´?â=ø»ÿE®7ÿrþw]x‹öºý£ÿjOø$oíO®|zø©®|BÕ4ÚWàO†ô»«Ë- F0hW¶·zÅÖ—,>Òt[{ÛIuKKKãôW;nm ‘ ˜£Ûò”8S‡¸gÅn¡‘å”p«ðîw‰«N½njð”iFªxšµ¥ ªr”/vROv}Ö+Žx³Œü ãLOgXŒÒ¶‹¸o Fu)á°ü¸jyÑ”pt0ñ© V„*Z¤gïÂ-[•[ðF¿r?™O¬?`¿ù>oØÃþÎÃöuÿÕ¿àêù~8ÿ’+Œ?ì–âýTâÏ·ðÓþN?‡ÿö[p§þ¯pú!ëŸòÖ?ì)¨é\ÕþqŸëá—@P@Š?ðr/ü £öÿÿ»UÿÖÃø@ÒuP@P@P@|¿@P@Èçücÿ'™ðKþÌïá·þ­OŽý—ôwÿ’+4ÿ²§ÿªœÿ;þ–ßòqò_û"rßý^ñ!ð^ƒ üý~ ü,øËñãá¦ñ÷ö•ý¥ôÝcÄÿ²ÿìÍâíKUÓ~ø7à÷‡µ´Sö ý¢­|=©i>*Õü!ªëŒú'¯‡vWšE¿Ä‹¨e—ûi4¹µ=oÁ]Oâ4ø=ûXü$Óî _7Áx×Äž)Ôþ|sø{ipºÎƒà½#Åw>ñöîœúuþ©q±à¯?<Åñ‡…¼k_<ÌøÃƒq•éasJy³Ž#4ËkÎÏÛQÅ6§ËQF£Ã'8á¹ÿÙ1ÕYá±2õ¸káÿޏ Ó#ÃpÎOáÿˆx -|vI[!SÂd™Æ•×Õñ$¥MNŒ§F8Ʃϩÿ·ákJN—¡êRx;ó<q‡ŒZò\ƒŠË28J¤éP§Z®†-£8By–kõyFXœEG*~΄¥8S«Vž‡/5Zóý“+àÿ¾Ü#✻q-Hѧ[WCŠÅçéÎ¥<Ÿ#úÔ% Š…_kŠ„)Ô­B…\f+Ÿ–†Ÿ~Ø¿±Àÿø(¿ì¹üþ Qðµ<ñóÀúí¦ŸûB~ÇžþÄÐô¿ˆškKñUžá yôßxoâþ‰kŽ|ão iü=oªè^-Óµ?M¤§† ÷ø7Å4ò¬ã‰Í²:Ž%B¥Zµð˜ì¾rä–/-úÄœ°xÚ6’ÎöÔÕ*þÖŒ©Vu‰áo>ÜW<áü»‘q5Ô£M*0Øì³6§iqõH(æv'š2§ˆ:“ú½W_ ì11¯B?Ÿž<ø¹û3~Æ_5¯Ùgá·ìëðGöÃøð¾òOþÙ´Ç7ñî¯à‹O‰ûPx»ö{ý™¼-à¿ø0h7¿ RVÑüSñÃUÔ¯µ˜ülo´Û? Ëšé?¢dùø³šcñØ ï0á ËêË „yd£O2ÇâTT£Ï‰‡,E ӭеx|4gK J•Zξ&?’qSáß.W–f|9•qÿˆÙ­c3çP•lŸ+Â9Êöx:œÐ7R\>~Åbñ’§_Z½ 2Ã`æûïØ&_Ú;â×ìÇãØGIñ$Ÿ³wíwñWøEâÿøÃ\»ñ׈?a¯žðÞ­ñÅ^ñWŒ.c¶ñ~k? 
t-wÇŸ üa® WÅ…ô;ß ëº«k:ׇ4XrÃxÄç— ñö*®y‚§­Ès˜ÒQÅãc u%†ÃÔiÉÊ8º´ªaëûZ˜,w»[WR•hmð«…|aá쟌|,ÁPáœÆ®e‡Êø£‡g^SÀeÓjPÆbè§(KF½,tiá• 9ŽZܰøJ…*Ô*~Äüfý¯?àšßðK/‰zOì9ðëö,ðŸíAyáE·ýªþ+x®o]kz>±¯é6:¥¦ƒ,þ,ðOŒádøò÷AÔÇŠuÿG¨|<ð‚´¯øLÓµay©ê:.…ð™>U⌘œÇ8Ägu2¼¯ R¤0ŠU14òØbÔyèàp8L<àŸ±Œ©,V6~Ò¼#*s›ÄÖ|‡éüAžxOôyÁe?…áª9ÞyŒ¥J¦:Q¥‚«œTÀ¹{,Fg™ãñp©(ýbq¬ðYm?g†©8U§N8,:öæ/Ûþ á‹þ7ý›?jÏø&UöŠŸ±çíK¬Ë§ügðÖ£¨ÃaàÙ^+mX×µ?Ú3ëZ…¼þø_£Cáøsâ©ï#À>9´ÑtßÙXØë7žÑ4á_x›ÃÌÛ0á¾3XìË ‚Ž"Ÿ°¯Uâ1ØL]:.®a1u¥)UÀcu)ÎtaFµ,V’ tëeÇ pg‹YSÆ>K,Éñ¹”ðµ~µ… °¹f?W¨ãÞ;B Ó.~Þu:tñ1zø,bIR«‡üÿ´ý¯¿aKMjãá·ÂïØ‡À¿d‹kÅðÝÇÇO‰~%ø£¡~Õ¿¬4ùžÇRøÓà?øoÅþÑ~è>!v½¿øwàè|)«K©ø~/øƒÅ ¢_êw^ÓEÈ2Ÿøó.©Å9æÏ¥õŒÂ¾)'^(«ïÒËñ¸nyFµ\4hС‰‡Õ=‹­Bx£îoØÏþõáˆ_´Æ¹ñÇ:¸ñÏüÃß ´Ú?Àu«Û/jÿ¼ â)|Um/ÁŠ:†•6›§øKâ?Á½Á>*Óþ9ëž}>É|+¥è¥¤~ ×|oöO ü®eã7pþI›pÞs…>:˱‘Ë¡™F?ªO (JrÌåG•Q©ŠPP–FŒ0˜ˆbðØ¿dáN­ Ÿs“ý¸3Šø!ãÇN¯†9¾_<Ú®O,Mo¯ÓÆB¤iÃ%†#žXŠX)Uu!Œ”ñ1øJ˜fÛªµhb©{ÿÂÏø)—ü?öÓñå×ì‘âïØá×€¿cßêÓ|:øûDÞÁ¡øO]פšDÑtˆQèžð_…~ üÑÖÇIºø£ðŸU–×GñT¶–'ġᾕ§< ú÷ƒž&bxª•nÏ«F®y‚£í𸶣 æx(5žÕEFÆa[´”cˆ¡%VQu)b*Ïð?¤/ƒ8>¯‡â®ÃN fX«c°”êÓɳ)©N—°”ܪG/Ç(ÏÙBr”p¸˜J„g5ð´iýiâïùB·ÂûHGÄýR6µôØOùffuk‹Ûèã‰X,­ä¾7c0¸³œ5|E*Uñõrº*3šU15hæø ]XQ†ótðÔ+V•£6Úº¿ï?FÌ¿‹ñs‡qxl-zø\®†w‰ÌqéÊTpt1i¡SQ.Zq­ŒÅPÃÓæwJ‰E;I¯íj¿…Oôè( €54?ù hÿöÓÿô®ÿ=Û_CÖ!á«ã<Ó´Ÿø&ÇÏx³âŸÅ‰î·7ˆ|?ûhþð†¯ñ7ÄÞ3·¼×5¼K¨üñ‡‚ôR×ÂþÔ.5OáÿÄ4ø>æÿA𮵡é~ô8KÅÞ"ນ¯ñunk< 1ptñu¯ŽÁæT%<#VR”òú¯Om'^T"©ÏÏB\«ÊãÏøKÄjYð#-Èéæup¸œÆ¶ƒYfa“â—=\~/¡Bžm‡NÿW„p°ÅMÕ§‹öx¨ó¿…<7ûc~Þ:ÔåøzŸ°†,ÿb­BWÒ<)ñ.ãÆ?tOÛã^ÒYO²ý /~%iþ/±ð_‡5MV)_ÆúGÀY¼ ©x:!ý›¡ë:Ôv’ÜXØþ“p÷‰¼]“ÔâŒwæc‚Åä9> /–SÃ5í0ˇƒîñPåT¹áˆ¯Jƒ†#,UYÏÊx‡‹|à ¥ÁY_‡Oe]G€âŽ!̤±YÍld_²ÆK*ÅUŒÿ{‚šœ«û)á0Õñ1©„ÁCF<\þ¯ý‡¿àÞ"ø‡ûoxãá§ÅÿÏãØ³ág‚| ûFxKö‘°dðÆ—ûEþψ®~éòjzéñxOâóøCźÆ[k;_§‚|Cªè±x~/ø>VùúÞ7g_ æY^k†=Àc?²£7‡¦°ö^Öó*ôc͇Xœ¨Ê”èE,=zõ°¸ŠTg„Ztþ¯ôkáüïrlï"ÆJ¿…¹®_ý»*QÅUx¤åì*a²|6"\¸·ƒÌ!‰…jx©ÉâðØ\>; _ }:ªýµá/ø)‡ü'ã¿Ä}CöCñ7ì#ð·Ãß±¦©ªÿÂðãö‹½²ðÖ˜úÛùŸÙÖŸäд¿hÞ6øcáM_U2êø·añRëÇˤÝé¾8×´ÏKu¨eüÍüMβ ñÄóÜÆY­H}—KŽŽk‰ÀÙÕöØz°«áªÎ«‚ÀÓŠö´yT*Ž•ýž'Åo8sŠ©øiO†2xäTªeæY¼2ì²Y2MQú¾*Jž2:ŽT3δš¡ˆçuUzJ¾&ŸÁ??à”úìñóãïj/øÛ[ÿ‚jüðŸ„þ*ü2ø“¥O¤]|iøÑ©üBñï…< ûh° ,×\øá7íF>5‚Ê×H»øq£xÛÄ—þÕou+7Ðáÿ¸‚=W'­…–mÅ“¯†ÀäxÉRçúÂĹRRÆÒ‡+Äc0õ:tRxÊ•é’Ì-\W `iâ)O„¥WÅaã5*´)cS,êÁk]áqÏšÎJ”šV³ÜßCü¿GÇY•\-zx }~Ã`±s§(ÐÅWËãžËN„ÚJ¤°Ë…öÜ·Puáù®—ôG_ÌÇöhP@Gƒ?äeÓíóÿH.¨üÉ< àoüMñ¯„þøE»ñŒüqâ#¾ЬB­[]×/¡Ó´ËšW޼û«ˆÕ縖+kx÷Ïs4PG$‹þc±¸\·ŠÌ1µ¡‡Áà°õqXšó¿-*`êT›²mÚ1vŒS”£äÒâÖ[–ãsŒÃ•e¸yâó ÇCƒÃS·=|N&¤iQ§ÚŒy§$œ¤ã+ÊrŒSkô#ÄÞ.ÿ‚~Ëž<Õÿg ŸÙøþÜž,øw©MàÿÚoö‚›ãg>xFø§ÜGŒ¾~̺¡³—X_†×Ñ&â‹9šêŸiÚç‡l|3}¦\j–šá¹.sâ‰xŒ~k‘æðàžÂÖ«†Ë%,¯ ™c³,E%g,Dq.Ê0æÿh•±ÃҨㅧOZ…lD?¦x‡¼&ðk •ä|MTñ#±Øz8ÌêÎñÙ>Y“ak»ÆI`ÕÜêr²ÃFxºô”±µªà0Øœ6§{ðÓþ ]âÚö£ø þÎ^)ñ7?b¿ÚSž$ø¡áÏŽºî§Ïã_ÃWGÒþ2|øçcáøí´(>,xCTñ‡4ÿ‡zµ­–‰áωöþ#²¸²Hmt w\¸â‡‹¹¿ Cˆ2.:ÁÓÄq&MF3ʱ8JjŽ?öÓ„(J§$!J„e°ÅºÔiS„°ÐÅRt(ã0ê…oJ§€YÕá>'ðÇ1«…àþ"ÄNž{ƒÇÕxŒw ý^œêbaKÚÔ©[8NLÃâ+Ö© m\eŠÄåø·‰Ã~‘øŸöžÿ‚|øÙâoø'Û~ÉzoÄ¿xäü,øçûW¾ƒ¦kwžøš“½·ˆ¼9cñ&›OŒ:®½áÏ,üGñÃGBÓü{s6áxu=CIÔ4 /ó¬Ÿþ#ˆ”³n Ë3üË ‡ÂÊJ• 6kŒÉð¸¼DR›Àexl$¡‡•JTÜS©‰•89Jœkb¥Vs’ýsˆ?â_|#¯‘p¦s¹>;ŽŒlV3"Ëø‡ÂͺqÍ3¬f>1P¥^´fãK µaVt0P£ q—Áÿ´Ïü‡Çl­Á~ñÔ·ÿðO¯‰_ ümû@ÙþÕþ8¼·×4Ù“áoÃK âÖ‰ñ‡ÅÚtvzn¿¥h~Öt_àç‹ï¥²¿ñæ¨Ã ø“Y¿Õ´øÞãÞàï1Ù^_™åüaJ¾aÀaæòºüІ/Š…HÑþÍÌ¥ËÓ”%.ycgMT…*5Õu_ìÕO–ñ èÉ–çY¶Mšø[ ”åy¦.šÎð¾ÖXœV”±ÛXx;Te†à|+“ÂúÞ¹„çoÂÖ>/‘|2>¿.¥ã7d•x¦a2|F"1Æä¼0²œ¨â°|®tã[‰§:¸WЧ',4qÄJ¯5)×­„N£ùþq[èïÂVéÒtðù§·RНŒ…Ò©…”S¯K©}sÚQ§†J¬éüŸý0u¸Ï_!Ǽ«±qÄb³¸Ö®ªâòE†pœ°¹}LSœ«ÒÇBvÂÖźÿÙþÇ[¢Ö?ðœEðâo ßø ›­ÃÞ5ŸMŽOÙ}}\Œø “ýgŸ`ñy•:o‹àùdØ/«¬"´––.Œcˆ©‹¥M':4)Êj¥:xêÕ,ëü Ãèë™ñ'ú—Ì05j«,Àøƒ"Ì~·,{—²†>¶_ˆ”ð´°ëJJž&º¯JåJµl·Jë õçì ÿÚðWÂþÑ_¶7üÏ@þ þÈ> ñ„üKà}rãWñß‹<¨Å¥êž0Ò5 k-gÇÿ 5½NëGÓþØøn¨ü]ñ»§h±Ù4¶³xs^ø¾-ñâ¾7 Ë0Ü3Nyvu¢å›Öpö’Ë%:WÀJ¤\jÔÅ8º´ñ.T0²¦”V*£–ô^ú/arÞ*ÎqœiV–mÃyV!G ÃÆ§±ŽuAVúÞk SSÃÑÁFJ…\¨£‰ÆÂ³”傤£Œú[ᦟÿ§ÿ‚ÖøSÄ?þüÕÿàž?´¦™ k> øâ_ø{Àþñ§§ØBóÙê¾(ðw/ÓÀu [(!Ô<ð³Ç1ëýŸ†nu+¯øÞÇZÓu_x[çs:>2ðNY–ñn;ˆ3z˜lD ëáq9¦33Yt«É:³l»í0”–&êtýª¡RKRx|DéÂ__“b>$g9Çeœ)ÑÆa!V8\v$Ëòg›Ç 
,M|‡7Ë],}ya,êÍVt*Œ%‹£O„§Z¬?˜ox7âWŠ¿gߎ>¶ðÇ?€þ3ºðÄïXËs6‘5ôvðê^ñŸ„®/a¶¾Ô<ñÃ7š_Œ¼ªÝZÛË{¡j°$È.­®BÿHøwÇn:Èa˜FÃæXY¬.m‚„› W/4kQRnUÅC÷”®ã%[çRxyÍÿx·á¦3Ã(©”Ê¥\^Oާ,nC˜ÔŠRÄàœÜ'CáÒúö¥¨â£8ʆ)S¥ONœTà¡?òk¿ðJïû4ý{ÿV&£^_ÿÉMâwý•4?õ¹âŸü‘¾ ÿÙ‰ÿÕ¥Cò‚¿Q?? Ïø7+IÕ&ý­¾=k±iײhšwì™ã]#PÕÒÖfÓlµ]oâgÂk½M»½T6Ö÷Ú­®®\éֲȳÞA£êrÛ¤‰crÑþô‰«Ip~QAÔ‚­S‰pÕiÒrJ¤éQÊóhV©_šP¥:ô#RI5 U¦¤ÓœoýQôH¡Z^ gؘҨðô¸;B­u :4ëb3¼Š¥ S¨—$jV†:P“R©¥Õ9µýi×ñÁþ……ü@Ápÿå(¿´÷ýÑ_ýg„µýçà¿ü›NÿºÇþ¯óSü¹úEÿÉäãû·¿õ•ÈÏ3OþÎ?±w€¾x‡ö–øauûI~Ô|%cñWáÇì¶þ;Öþø à×À[ûÙí|=ñ·ö†ñ/…¡ßêß/­¾ü%ðýΞÞ"Ò-5yüO©ép¥ôþð3Ž2ân+âÊüÀŠ9n+çþßâŠØzX¸áÜ'ìkQÁQ¬¥J^ί>ï·Åb¡QЩ†Ãajâ§õY‡œÀ¼ †ñÅL&'7Åç|Ÿê¯áñ•°Ū”þ±‡ÄfUðîˆ*´=ž.~ÿÕ°X*´V&Ž3£¤ÿþÍ^ ý§þi_¿àŸÿ|_§ëÚþ|1ý¢¿bù|AwñÄ^?uí?Á>8|ñ^¬#ñŒ>^øçR°ð÷Äo øºmľþÒ>':Õ·„ôfþÔæÄqwxqŸàrþ8Ìiñ æîQÂq,ºŽ€«”éâpØHû9F”¥ Ö§'Z¬ðÓxŒ5j’¡WÌ/ð‹ü+™æ¾e¸K²ÂxW̰Y¥ ºupxÌ|½¬%^1©KV+BÊqÂc0ôa‰¡˜KõÛâ%‡üþµ¡|ø%ñ÷àF‡ûpþÚ_ôoxûB¼ðï…¼[¤xÀÉ=݆¥ãIm~"ÛÝx_Á~µ×!»ð§Ñ4iüyñ[´¼¾–ÞÓLÓ¯ßÃ_™Õã|Tâ™å¼'ÆäYm/iRŒ0¸Êùt0˜rQXÌÛ‚¾"¥jÒpJ„XF¥HÑ¡NIT­/Ù¨ø}áGÜO8㼫.âŒâ¿²£ˆž7/ÂæÕ1ù•X¹¼¿"˳+a(áðñU$ñ5#B¬èÒž#Z.T°ðùSþ ÿÕøuñKá'Âßø(Gü_ÃÞ0øð£â¼5àŒ_²Æœu{^ðޱãOYø*ÏÆ? ômNëTñ…u øßUÓôo‰ÿ Ÿ[Ôü†.í|oàü9áí#Q¿ñFAâ/øwÅáÎ;Åbó,¾5áOWZ¦?†¥^Ζg€ÇTæÄâ°Ž2IШê^”gN,>.3ÉÅ^ø{âßSã 08›6–¥\¿G+Áãká¯ù.k–RäÁàqÑœgFž*ŒiZ´©Ö­_:§É^2ñ7ì!û$ü@Öÿeí[öw·/Åo†÷‰áÚ»ã]߯ïü*ø[ð÷â\QA7Š>~Ï6~Í©ø¯ÄþšdÒøã¿Ø[Çãϼ£ËâO‹ÿi/øE4¸4[=GÀž°×¾!øSÆ:n“§Ãã‡Z¯-®¿w¢ÿÂGâ%”ø¥›ð¦aœpç‰q„±Ùf®/šàèB”shœªÑ¡Ó,4ç§õ*Ч…Š«N¶Nž" 3ï²9Êx{‹ü•Xå¹Ö:Ž4ȳ TëÏ!J°¡ˆÄÎu§_N–WVKûJ…J¸Ù<=\>?V®¢·í_Å/Á¿à—¾8пc~Ìiû\üTµÐ4KÚSâ–·áoxÃUørÁ? 1˜ÌøWâ&{J q^o,ó”G.–"œ*:~Veˆ*®´0r ѧ,= .ÃxËÅ96;Œ°|C,5 έ%,Û‚©˜Æ„¤ñ2ÊòÌ'& ¥,;Œ£*Rˆ”'C EXÊ‘ú–oŒú;ðGe~f'J†'_"ËsYD±PŠÂG;αΦeF¾.3„ãZ2ÄK ”ñXº˜Jeøgÿý‰¼{ÿÝý¬¦ýŸ|]­êž:øMñ/DÔþ!~Ë?õ«K+-[Ç>Ò.-m¼cðëÆÿÙvzv…'ÅŸ„׺†›»q ÙÙØx¯Âdž|oáùõMKCÓWðÅ,G:œ=ÄiË<ÃÑ•|1Fµ0Ôíí¡Rœ)¬n5Qû(ÅWÃóÕöq–´ê~ô€ðK ÀÊ—p¥°á¬^"8\Ç/s«ˆþÄÆÖ¿ÕêR«QÔ¬òÜcN”}¼äð¸¿gGÛN8¼=*_Q|"ÿ”<~×ÿöv³ÿþ˜õJúŒÛþNß ÿÙ-žÿéè‘É…ã¿û-¸cÿQ꟔ú‰ø‰öüãIÕ5¯Û¿ö5³ÑôëíRîÚƒà^­5¶Ÿk5äñizį ëºæ£$Vé#¥Ž¢iº†¯©Ý2ˆltÛ»Û—ŽÞÞYäøö­*<Åó­R øk;¤¥RJukåØŠ)§&“ZÕ)Ò§çRq„S”’yáu ظ U+Ný¾¿fÜèšO‰­ü5û/ü×çðÞ¿Ú´/C£|høÍ¨Ë¢kV§‹'VKf°Ô`?ë¬î&ø«û À:áþ}…j¸ibxƒ4¡E r× ëd¹55ZŒ¾ÍZN\ôåÒqLÿ>¾”بà|Uá|lðÔ1Âp¦KŠ–|6*8~#â²Ãb!öèWPtªÇíS”—Sþ û8Z|wÑ?gø,—ìÃa{âï‚~-øá´w‡tyµK¯…^ðëþ%ͦéñ\E¥h?|C¬xÇáwÆ´¶{-7Á¦ø†âÅ´í?ÅzÆŸù‡YÕo |A̲^$—°¡Œœò¬Ã[HÒ®ªªù~e*µ—Õ+©óʬ¤£õ|bÄÎþͶx¹Ã˜ü(Éø“ƒ¡õ¼N_NæU‚ÃûÕ+á¥Aá³\š4)'Ã:|‘£:Ÿ[ËÞ•g{ÿðCŸÙžÛÂ>#ñ‡üËãÆ¬¿ ¿fßÙ“Àÿ•¢ø‹Q_ëšo|KoțûÁ¾ðmæ½ý© =¦¡â[í;IÓ&¿»ÑüAegúOoÃä¿ê† µN;6xlFcìå«—ЭK‡SqºŽ#^•Òùã…„êÊ1|<çøïÑÃlËÄëþe‡Ä`ò¼‰c0™Oµ„èË1Íq8zØ S§rÊx\¿ [N¼Ü}œñ•iÑ„§<6.¿.?à§Wú×ÿn½{öĻֵ|ÿ‚…ü3øeñ‹öKø“¨Y\ipÛü<ðG4/ ø£ösÕt‹‰§ƒÂ_¾ øšJóÆž·î®ßÄòx–ú4ÔfÕ ³ò<Åà²|nyÙžywãVGëQ•Øœ<2«iÕQ”jQ§_ëñ„Suðø™TzaŸ/¿ô¦Àf£8b0Ø<Ê®-Ñža*´á:8Џ_ì¹Ô›Œpج)-q±æýQÿ‚&|WÑ?bWÖÿh¿Ú3â/ü*πߴǎ~~Ê¿ |;©ÛHÓü_øß¬xª »oiV­å˜<ðcÃð•IãßÜ´>Ð4Ïj†âöK½.âÌoãö;šO"᜿ <lj(â+c\p•|FS5<#§IJn¦1ž-Ó·5:m5T¦åËôWË3<’Ÿñžm§”pv# C-çÇÔ† ÌhâéÊž:5k8AQË•JØVüµq9§Õé¹Ô¥V0ü­ÿ‚‰þÀž?ÿ‚q~Öÿ¾ÞhzõïìÝñŸÇÞ*ø«û-üSšÆò} öÏⳫx³Å?uÌ|Nøq¯ÜkfËM½½mgÅž“Iñ]¼ T†Ç_8ß <º|«K‹ÂU¯ŠÉå7k†ÄNXŒV-µÏ‰¡^uqWs©‡«%òádÞ_JO qðÍ¡â.WB¾+Ž¡…Áq iÆue€Åá)à ‚ÇJ1OÙà±XXPÂJVTèâ¨EÎ|øÚq_Ñ—üáæŸÿÄý–|QûJþ×Þ8·ø%©~ÖŸþ|5øQà[Ý´VZ–µ{«hŸ õ¿x~ÊXõˆ5 bçÆZç‰|S‹ | ðŸÃ:Ï‹"Ë8w‡(jÏ'XÚrÄàá*õ1xÜG³–' …öiû\>ž2•Z|ЩWÚ¸þîŒjTý èéÁ˜¯øK:âÞ/ÄÿaÓâ—VŽ1© -[…öÐÁãqÞÖQT1xêÙ„á 5T*Q ¨©Ú®"t©#ÿ<1ñ¿áÇÆ?Œ? 
¿j}.ÿFý©|3ñ ÅZ×Ç!ªüoâÿøƒTñEçÆ?ßI ¼~ ðGÅëFëÆþñöUÍ–¨Ú}°·“LžÎÛ÷3¼“2àÌ·/ÊÕ>+&£&iŒ¿{[”çS(ÊÓ•,Ê£©‹ŒÚåIÕÃ]J„£柤' q&Mâ.q›gr¯‹Àñ"Xü“3œ?q<aN²¸Ê¥Ù5%K:J\ó¥NŽ2ΨN_Õ'üãMø}áßÙ“Uÿ‚h~Ø5O x«þ uð«ã·‰¾|®-|[áïZÿÃûo‡ž+ñÆ©™sàÿ‹V𾝬ü1°Öb²Óõãà}gUÐåŸ[¾½±¹üÆÜ~ˆxª´²,3Åχ²Õ‡Ï³,eR›”1jšU]4áìðqÂÏ'yV¬ð÷q®_ê_£nYšp—áãÄøÕ€§Å™ÃÅp¾Qœ(Öä©€u\¨Æ¬£SÚæ”p•1Ô°pMG ‡Ž.Êx×ÍüÊ꟱Ïíû>|ºý‚¼wàms]øõà}zßÀž¶Ð4[Ƶøãà4Ù|9ø¯ðÙ6mÖ<1ãÅgu©ËÿÅ#â;_økÄ«¤êžÔm­ÿqð¿òœ×‚¨Ë2ÇapXÎÁSÂfѯRT0˜:q¥…Ì[NTq#N3”b’Æ*´cisÿ3ø×áV}‘x‰†Q–c³,¿Œ³*¸ì†Xj5+º˜ì¬«ãr§(Ũâ0¸©Ö•8Îm¼¾T1ž•½Ÿõ ûY|_„ðK»ÿø"_ÁÏ‹–¶ŸðP?ÚöYø‡ñ‹Lðe†¨þOĈ|;ãÿø³ã/Ák=~Y!µðÑøçá{ˆ~Ë{—gâ»- źÛZµŽ·3-q†uˆã~*Îø’†_^¾W…• ΄©û£V† "¥€U¥Q¸Ó§€£†¦•L5jq¿²«„åö “’œT)©^ßæ–{ÁüWã,W æ^/‰ëæIN­\ÓŒ¬åG‡«;{z8þu‰§‰“Q•9º•eZœ¿¥ŸðWÚ¾oÚ3Æÿ³/ìg§¡üBø&–™§xwâçí7c<ÚŽ³ñGö™ºøugàŸŠ |!©ÎÒøA¼½¥‡ÄÏ]Í}©|@ø‹j–‚ÛC¹¼Ö¿ž<+áXñfœy‚…|Ÿ!Àæøú¹^„^*®1âÁůvž †¯Ч8Å׆ ”¿uVpþµñÇŽeÂ~äžæ5p¼AÅ9žA•ÐÎñµ'9ÇC/XGÂiûõ±¸ÌfkV¥E)G S1­ßP§Sªñwü¡[áý¤#âþ©Zýo ÿ'‹6ÿ²ÿ«™ƒcÿåòû:9§þ³Ñ>ý™f‹µ¯Æ üø;á鵿kó ¯ï^=Â^·šÖ<_âD)‹Kðþ‹é%ÍÄ„Íws-¦“¦A{¬ê:u…×Úq'å\+”âsŒß8jµ8&lV"IºX\5=êW¬ÓQŠ÷a:µ% TêN?œðg|qŸ`ø ÂËŒÅKš­FšÃàp‘”U|v6­¹hápñ’s“÷ªMÓ¡F51iRŸ÷£û#þÉ? aÿƒvß¾#j—º„–º¯Å‰wÖñÃ≾-†ßÉ“PºTy—Lðö›¾kO øvÖy¬ô›rn5 J÷WÖuà~5ãL×óyæyŒ•*4Ô©eØ rn† åuN ¥í+T²–'(©×¨—»N”(Ñ¥þ¥øqáÖGá®AO%Ê"ëb+8WͳZ°QÅf˜ÕWZ¢NJŽ’r† Jž“w•Zõ1ŠßG×È P¦‡ÿ!­þšþ•Ã@ÆßÁÚ áÏì×ÿÌøÝñâ¶— \x#Rý®i¿j~$×ííåáÝÇŒ~+ø¿KÓ> Ù\Ý–ú¤Ö±ë©ÃÙxRûÄO,¥þÙβȼÉp]ZñÆSáNÆÓÃД“Ì#„Êð•jà'ëQW§ÌéRÚx¨aÓº¹þnpçe<#ôŽâ,Ó;£…–]WŽxÃ-­‹ÄÆ2YTñùÞ>Òœæœh¼5nHׯ½<\S‹NÇÆÿðU?Ø?ưOí­ñi“JÕuÙÃö«ø™ãŽß>!ù7·šEŸŠþ#ê7~+øŸð+^Ö墇ƞñeƹâXÞÞÞê>*øi©Xjv×—Ú‡‡üSm¤|€\g„X"XËûÕ*Q«VÞæûï¥G‡x÷˜a[Ÿ*1W‚¯‡n |ã%O ‰£8`ç5ËJ–"…oÞc½ŸðNßø7þ AûÝülý«þ Kðâ·íáñ'áw†Zn»¥›ýSÁZ‡ˆÿá Ò>ê:ï„®.mvÏ AâO|`øºŒ–«á‡,Ÿð 3RÒµm<|OŒ\P¸ßŠ0R–gC&úÖSˆ–?1Äû)cžÓMÕÃІ•(N§9R¯^ S¨ÿIú=ðSðׂsN*âÊÐÉq1×õ­FçÄ6ŸtÍVü.¡âO ü_±Ôáñç†üI:Õlµ“m¹´¸†/ÝüÎ2|eøº4ðøŒš2Âf˜^eí.sic¤¥iºY‡4±ç(¥{\:ÿwi/ý"8ˆ2¯slË7•\Vˆ¥~Iå~ÊX téáá–ÅÆô•|©BJ´á)Jpö¹i‹‹—õÇû|.øuñ{ö ñŸüöŸø»{áŸÚ#ö©ýž¾)|vðwÃx­¥þßøðŸÅ:¶‚ÿ í|B^[hañÿ¬uŸŽÃ;¹WXÔ¼%Šÿµítû >ð¿óo‹ØÜñ†mšd8Y×Àå˜|6Ìðð•L-|l'<o«ãy}¦4ã&êÊXJ|õi´ÿŒ$˸š¯ ãr¼câ5Žeñ„êâ1XªóJ„°í_ë4ñ|ð«‡ÄAÊzU#Z3p—1ýfþÒß4¿ø)—üGÅÿ°/Ú_Åßø(üÓÄÿ|e©XG©yÖh/†ßõ)bøOqâ“'ö/Œ4ù¼âïüÔ|K5ú7‡þ*ij¾5}&êÅçºþÍ3‡…ãñÆG–Ï •>%­ŽÊý¥)Ç Z¶½Uj\¼”ç^5a^¶›¾8¥J®šåÿO2NŽ7ÃÚ^q6qOžÇƒhe¹ß±­NxÜ>†Ä`pø¨ÅOÚV§† ˜l>.ª¶6xZ¯2¬ùÿž?ØöBø“ûsþÐ>ø5áÿ ø¯ÃúV½|xŸ^Ñ/´]kà–‡¤_y4Óüo¦ê6«7‡|m§½½ï‡ô¯ êðÁuâ£mcµm>Ó{öqâg`83ýmÂã(b#‹Ã5•àH}b¾g*~î­ÍÎÂÕ’úý¯ì)BsNW§ÏþpÿƒL[›üÏÿ‚kxcÅú§íQðçâ熼emð×À?³½Ô´ÆZ„ïƒþüø~¿Ûž8ÖüK ðÅq¥x‡ÃÐÞøV-!faâ3®6HßM—Pšëoó¼—-àŒáæ~ÇO5ÀÕÀåøU87Åâi7† ZqK ùqÏ“T!AU‹öžÍKø7Á^â<çÄ®Y/Ö05²,ʆgšã:Š9v],e,L/ JXØóå‘ÂI©bjb] ÇÙ:Ò‡Ï?µßíEÿ ñûe|eý·­¼ Âß|TÒ¼'à‡¾ XÊkú§Ã†cSÒ|ñâtî¨.~$xÓCº‹S¹Óí¡†×Á¾—CðP¹×'Ðæ×/~CÀþ ÅðæM_<ÇV¯N¿RÃÔ§–Ê*œ(`¨J´°˜ŠñnRxœM:δ#îû T&¥VsTþûé-â>‹¸‡ ÃYf W Â5ñt«g›«S˜âa‡†? 
†’Q„pX:¸xáêNÓúÖ*ŒªÓ”h›«÷§üËþJì—ÿh÷ý”¿õ×kè¼)ÿ‘gÿÙyÅú‘@ù/ÿäuÁök¸+ÿQq&ü/þ ½ãÛ·âLš—ˆXðoìãà+Ødø¡ñ"ÞÞ8佸E†ê‡¾ šíZ ßëPIÚ."·Ô-¼)¦\Ŭj–³Ïw¡iÞ¾%x‚à\³–“£ŠÏñ´ßön_)6¡Ü^?{ÑÂÒ’|r„ñUbèÒ”TkU£‡ƒžf>'g<ÕÖ#¹uX¼ã5„u&”f²¼¾U=Éã«Á§:Š5a¡%ˆ¯ JxjîÂ~ðoßxOáÃ_ Xx/áÇ€´{}ÁÞÒÕŦ—¦[dî–Y^[‹íFöf–÷SÔ¯g¹¾Ôu ‹›ë뛛˛›™¿…3,Ç›ã±Yžeˆ©‹ÇckJ¾'U®z•%ä’Œ#”)Ó„cN8Æ8ÆŒWúu“ånA–`rlŸ K–eØxa°xJ)¨R¥—“”ç9ÉÊ¥Zµ%:µªÎujÎu')=ªâ=  €:È˦ÿÛçþ]Pñyÿ ñ—ÁMöѼðÅMJ.ø•ð÷_ðÇÁO]Ÿ+Wð_Ìl÷Ö~ÔÝ“þ­{ÅÞ—Ä:^âK)!Ö’òÚ×ÃZ\ØñEÜ3ÿløïÎ1\íòÜExá08ÚU³œJ1Å`%îÂ¥hÃÞ­K‹ö¥FW¤”ž*jøXJ?æçÑ3áüˆ¿UÎp˜Yãó,º¾‡s L#9àsH>z”pòŸ¹‡­˜`^*„qµiJ1ÁRvÆÔŒ¿þ1þÉ?`/Ž?ÿcOŠöú‹êÿ õígWømãmFÜAÆ¿‚^)×µ=WÀ_t©Ó6º…Ö© Ìú'Îk—о!é>!Ò59#½U/8Ÿ›p•:4s.uhÕÃÖ¯ƒ¯^¥zèSZÉJu§C5wõˆ{ZŽ/MIý&¸+1ÈxóÄÍb1?ÆŽ"†.|õ!†Ì0ØjX\VYR³ºŒ£O N›q_Tª¨ÑRXJ®?×Gì3ðÃâì7ÿËÕ|qñ_Aø9ûgÁ@î~%hÿ±Ï„¾%jZ~¤|{×þx¯Qø# Ï¦Ë ±hþ!Ôíü§øGŒ¼I…â®0•ªœ+á²Lòÿ­Ð‡´–.xyVÅc«ûJjRž ûÈÂZÂ1£ˆÄF^έ×õÑãƒñ¼áô1åZ˜\gfóo¨bj{(`)âá†Àå˜gJ«ŒiãñÖ£:‘÷jNxŒ.QöÔ,ÿ‰o„Úlú‚í<)«i> ð÷Œü¨k>ø›á¿[ÝÚxÛÃÿtRêËâF•ãkMC•·ŠâñzjÓk+¨*]Ís?Úv\FOõ?†XŒ›Àü?ý‡háh`©á±Ÿ*­K1¤¿áF8˜­«Ï*µäÚ^ÒaZ R©øÆŒ'á#ÿÁ(¾!üdÒ¼!ûax³ö[Óþ2|4øqw¬Þé¾8ð§Á‹Š3ëß³¿ÄèZ9&ƒÀþ(ñ_ÃËø¯OŠ+«áoˆ´TÒ¡½šÚÎä/±F'޳¼nEÔÂGBÊÑŠž¾mmc'I«ÂTëÖ£UI·lMzXºÐç¦Ôß÷ç˜\ûá å¼ORT±óÁâjåØiÎTó 6E:©åôë)5R5pÔ1Rú¦—áêrV‹‚þBü á߈>#ñü+Ô~xŸÃ¿4O7Ãü¸Ñî“Æ¾ø£¦Ý¦“ªxã@E’÷í‘êXþÉ)EªéW~«dòØÞÁ3ep§eIÃn!¥ˆÃáhSÃ_3§*„2¼NšúÝ ÜÏ÷Té4çFSåö¸iQ­ËQçø}ŸðbøN¾ÄÖÆ[%­ S©S;Á⪵€ÅaùbýµZéªxˆSçö8Øb0ò|ô¤c?µöñÇöGÿ‚'kŸ²_‚üM§|Býµbý•N¿ñá÷‡µki¾3Û~ÉV?4[Ú/‡V:KË«ëÑ|ø[ñK_øá+û}FÎFxum{ÀóM¬i±hçø¯‰3Œ—?ñŸýB·ú³[<ÁÔÆÓÃÆpž# Ñ¥Š®ÛPö8œÒžŠ&éÔöµ¥MKÚEÌÿG8?‡ø…|*Ë8WûSþ¹ÐáœÂŽ]W*u)ás:”ñð8d“©íðy%\^*ÑZJŽ5\}”£Lþ6ü)öoZxu|×-|A•…­´ g»þÕRX"Ñít{+8Ú[‡»ó­à±³¶„ÊîñÛÅò¿½°˜¼l¿ŽÁ×Ã<²xJxœ>"”¡ "Á{%RXKÝ…:£i+òÆZ¨¤íþZã°®6Å噆³ªxúØ<^´*TÇ<ÅW•*´jCß«W,EâÒçJG'$ßõïûtüø§ÿÿ‚J]~Êžø©¦|Mÿ‚„ÿÁ>µÙ×ãOÆ‚ñ—Ljþ x7Á—>4ðçÀZÌÖúÏŠ"8L],TéÓ¼mNY~ò…;)ÐÃW¡îÇÝGú©’dyÆoáuâŒÁÓâ,wO'ͱª)âð³Çà+àiV«iÞ¶"„²ÄÖ»§ŠÆa±VœÓ™ü¸~ÉÿŽc,*2„åG F“æÄfS£Í—ÓÂßN¼mFÓ•iV¦¥ýÿÁÇÚÍÆ±û1üøð{â{|Fø3û|hø-§ÿÁA¼棫øÿÁqx»áL0þÉŸ~.ê“Ƕ¡àd×äþÚñ·&¥þ#ñ?‚|G©Cjt9®lÿ‰x7–ÏrŒïˆ¨R£•b3Úøªî4ãKCVS­‡J2^Ê8L7ƒ«^’mQÂ.i¯g¿úIâWœSð»>á®Ä×Äç¸NÃ`°±i×Íq8 1§‡Å·(KÛϘå¸L†»Iâ1íÆut_ˆß°÷†>7x¿ö²ø §~Îa×âí¯Ä_ë¾ÔŠK&“¡Ç ÞÇ«k ñ9“þ(íDµ¿¾ñdnÞ]߇âÔ,ŒsµÒÛÍý½Æ˜¬— ¹å^ iå3˱q0M*•ý¼:40Íßý®­iSޝ†»§;ÅEÉšÞà¸Ç3C…–}O6Ââ0udèá¾­UVÄbqŠ-?¨ÐìñÑnÓÃ*´­'5 tßðTŸÚ·áWíçÿø…ûJüÐ`Ò>øáÅ¿ì¿eã«{ĺOÚ[TøaãïjMñÎÒH ±µð£k‡~ÞË.¯¬x“º\:íæ¡§é×7‡ô¯Ä~Ü+š`©æSˆ¯*9ng‡ž_„À¸Ëý²X|EŸÚNWäöXyÃÃ4¤êº˜™ÞŒ_éO¥od™•\«‚0¸hâs|—O6Çfjqÿ„øâ°˜Š_ØÊ¾ÓÛb©ÔÁfÅ'AQÁÓj¥ITT>Žÿ‚„ÿÉ®ÿÁ+¿ìÓõïýXš~‹À?òSxÿeMý@ùŠòFø/ÿdN'ÿV•dOÙâïí¥ñ“Fø9ð‹KŽKëˆÿµ|UâHK†<ákˆ!ÕhŸþ i«•d°_xׯW6ÑÃâO‰þ2û+cò\³é%ÇñŽe‘a³n©›`ñ½½˜ÃYkΦ§LhBOõY§Oé*c*u$ŸÁy?dß| ý¾õ¿ÛÂêÿÆŸ³wíãáφRx[â,wVÐ|ñoá·€4ÿÚ|&mf).!²ðçŽ|£é¾<øR³Ï†¥r…zxÊi:øxå9¢ÃÕ¥›aãOšš£Bt)CZšjxjô+IƆR_eÁ~Û|°ý ?९ãMGNµÑ£ºÖ¼?5‰úO¤ࣀÀð•F¾>¦&–i“Qœ°)S© 4Þ–#*²–žòÂÓ’’äÄÁ¿ú'ðNe,Ó4ãÌC¯†Êèàëä™tS8f˜šõhÔÆTkjØLhBw„±µ`á/iƒ©ø™ÿ·ø§oÿ2ý¶5¯Žº¬#ñ¯Åoé?þxÛM’kx÷ö8ñ6‹ioû2jÿ µÑuO x{Á6©áM}ô™.íl>!iÞ)·½½»Ô&’æn¿£ÕLªŽWŸeê‡RÌ\Æxû:òÀ¡…Œ#%û<.%bé׃èâ*þñ¥Z’8>–TsÌFwÂù³©OÖÊ¥G(­†ŸµÂÃ2©RXœlªNTý®;ð°µ¹q8J¹Máë³úÿƒ|þ$êt/\üdñî‹ðóàgí+ñÁß¾i>.½¸Óî>'~ÓŸðŽøŸ]Ôt¿‡H@‰¥oxy´¯_“¶»„4¹õm:[(þé[(Äc2,.>׈0¸|U\[¢”ªKÚÒ§Šå¼¹½¤1šÁ¡UjŠ0­NSú¯¢N?ÂåüOŽÅMPáLn+C±-Áb3ØËØW«‚çjžÊ¦ŠšÿxÅO‡¤çSV0þnh¯ÙÇÿðO_Ú[âßì}ñ2-jé<=âßxûàÄíKñãào¼M©kÞøˆ5«„5ïéWœþø¯»O&‹ã}2én&¸¶Ô4ûëϱðа‡ G†[£‡Í2Iâ&¨®XKÅb*bV2 éÔ©Jµj˜|O*~Í,4æï]Ÿ}(¸5ʸÊ\gˆÅdœIK NX—ÍRvgÂRÁË/©+5F•|6–/Ï(ûYKJœm…“Ö‡ü#áljdOÙÆ?þ?x§ÃžÒ¿h¿Š?ô¯Ù«À_µ(<7§]üB×­µO‡^ Ö¬®n ½ÕtÍoãN³âË ZŦéoªŸxu|C$º òIoùö[žq&_•å4£ŠÅä˜\U ÇN£©V¬á_ê ‘58eÐ¥RµIÅÊ0«Š¯MòJ[þ×ô^á|ç†x?5ÎóÚÓÀà8“‚Äå9v-ÆŒhÑ¡ ˜oíI{F*™¼ëÑÃÑ„”%RŽ U{Hb(ÛøèÖ`x“ âÖâG:ˆúÞK]FQÃÕáöݪhÉ%ªaé,&. 
]cèb¥QÎs•ZŸÖïüÖã[ý†|_ûþÐ?á5ÇíéáoÚ3Mý<)³q£|_›áü~Ã~$|5 =­å–¢êÞ*µñG…;>i5kx£L󬯞ý¿ž¼z¯”c8¾Ùpö¸ü]NŽ}^Šæ¤«*‰a¡UÆé×ÃÑ«J–"£Ò ® )*”ù#ýeô\ÃgÙÔyÕOc•fY½jü-†Ä¾Jï*MãjQSiýWˆ£Z¾ŒUæèc±‹¥YT—òOqð'âÇì×ãŸ~È<)>…ñ¯à&±gð¿Yдë+¶µñnŸfÇÀ¼AÖ·àïŠ~Lñ/…5-PÝû.Kx5M6þÊßúÂn*Àgü–¡‡Åd:^e‡æŒ‚¢©PÅ4Ú塊ÃRhÔiª«I7ì$ÏåOø5áoó‰Ô§‰Å`¸«0Ågy6-Âu^)æX—_Œ’“ž'¯<7±NU¥Aá+É/¬Á?ëûÄ_~1þÉðFOþÁß ~(Åáø(÷ísð7ö†ñ×Á„÷ú–¡7‹5ÝWMÐt_|eø]ð™mØÛh>2Ñ~]¶ƒ¦]ǪXYXü^ñ+ø–ÆíŒÒKò‡‰¼EC‹øÇ3Ìòê1–_…§GCJŸ+Äap’T>½ˆ¨£ð×ÄVq£Vª‹ŽXJ2´¢‘ýÕà¿âxÃÜ—&Î+Ê®:¶#2Äá+ÖæX\n>ý—„¤äýü.§ˆ¡AÉO ~&ŒäÏãápÐux>ÏÀÖhë¦ØèÚFж²&©eqdF—6‡}`±‹ˆµë F ´ÝVÊH…äz¼7POÚCŠþÝá\VKˆáœ›‘Jœ2U–ÐX4œ¡B5Nt«µhƾp<_7½ð«Îù¹™þkñƈðœeÄ8>'…YñÍñRÌIx‹Ã¯Å8¯õ[m#KÓôM/ÄZx“R×ôi´?æNáÚ|OâÖežð¥i`¸w%ΣšÔÆ(¹S®¥Z>ß…äqQ¥›ÕXÿa r*9dœåý*ŸÚwŵx/ÀlŸ†8ë Ë‹x‡'‘ÒËå5ØWxlÇê)Êuò /+úÍHs¼FuÎ+ÛV¥ð‹þPñû_ÿÙØ~ÏÿúcÕ+öÌÛþNß ÿÙ-žÿéèÍùü˜^;ÿ²Û†?õ©ù‰àÿø£âм;àè:ŸŠ<_âÝgOð÷†¼;¢ÚÉ{ªëZÖ«sžŸ§XZÄ Íqus,q¢Œ(ÎçeEf¤âñxlÆ×§†ÂahÔ¯‰ÄV’…*4iEÎ¥IÉ裦ßÜ®ô?À`1¹¦7 –åØjØÜ~;K „ÂaàêVÄb+MS¥Jœ#¬¥9É%Ó«i&Ïî;þ ÿßðÇìðüø£Æ–ú/‰?jßé¡/Œs_í¸¸Ú­<« 5e`*;ÇF“Æâ©¨ýj²öq”ðÔh¹~–“žO$òIï_’½P@~(ÿÁÈ¿ò‚ÛÿþíWÿ[à]IÔP@P@P@òýP@#ŸðqüžgÁ/û3¿†ßúµ>8Wö_ÑßþH¬ÓþÊœoþªrCüïú[ÉÇÉì‰Ëõ{ćÄÿ°Oüöˆý€®µm#À_Øž:øYâKÿíOü+ñ¡¾:!Ö 0Û?ˆ<5©éóèxc_šÚÚÞÒîæ{¥êVÐÀ5Mú{M>âËíxçÃLƒŽ¡J®7Û`sL=?e‡Í0ŠÛÙ^RT14¦œ1XxÊRœbÝ:´äåì«SŒêF›øeã'øcRµ µaó<“WÛâòL{¨¨:ü±ƒÄàëÒj® (B0œâªÑ«ÇÛáêÊ)S›öÿÿ‚¤~з÷…4¿ƒþ(°ð¯ÂOÙÃGÕ´Íe¾ü2·¿²Ò|ey ^CÃ|P×onå¹ñ†á{èlµMÂzu‡…ü&·a§ëšÏ†õ}OHÐ.4’á_xs ÇÐ̳,v#?Äaj*¸j5ðô°™|*ÁóS©W ˜‰â'NIJ1©ˆö «Î„ôKï¸ãé=ÅÜU–br|›-Âp®JT1˜Œ6.¶?5ÅÆ­éRÁÒÂÓ­ã9ÒÁýiEÚ–&Ÿ¼åàŸÿloŠ|¬|)o ü"øÑðW\ñ-Ž/>~Ñ ü)ñ§áL>Ò`û>“ãÝ#ÂÞ.´¹]ÅúlB5‡YÐî´én~Ïgý¢—¿a²û?ÝñO‡ü;ŵð¸ìž/ š`”c…Íò¬KÁf4a ûHAVQ© ªuV•IД¦èJ›œù¿/฻€°¸Ü·)­Çd™‹œ±ÙwƒŽe“â'Vš£V¤°ò*´åV’:ê…zPÄÂ㉅eJŸ'’ü~ø×ñ+ö›øŸañ[ãˆGˆu¿ xlx#áχtí/HðÇÃß„^Y!”ø3áWÃÿ Xé^ðfs-½´šµÖŸ¦¶¿â'´°>$Öµìí?ì³ÂÞðï âq9†Ìnk‹çXŒß6Äý{1œj5*‘U¹)BÒI:²§J5+4•YÍF)_x³ÅÜwƒÁåY\¿-ÈðÍár ‡²Ü¢œéE”ÞU­V«£ãBkÎŽ9:éÊsrýmý–¿à»¿´ßì÷ð§LøAãxöƒðÿ„,­-¾ë?î5‹ox\é-çxzÛRÖm%º_é^¹†ÅôHîí,¼Aaoi•¿‰£·ƒN];âx»ÀÞâ<}lÓ/Æ×È1xªŽ®.žK€­VNó­ #©†• ÕåUÓÄ*S›çö*nrŸéô™âÞʰù.k—ax«‚¥8 ¸¬]li‡¡Õ<=L|ic!‰¡F<° «a]ztÒ§õ‰S8Süïý³ÿmÚöóø£¥üLøýâ[¨<%k§|4ømá;+½ áÃ; LƺµÏ†|=y©k×^(ñ P['‰|iâMc\ñ¡+¤é÷º?…¡±ðå—Ñð/…¼?ÀÓž3 *Ù–oV›¥<Ëq•r·=< k“ –^ÒNu«É^¿²n™ò&øÝÅ~&Bž >OQª«Ã&ËåVPÄV…ýlÇUª˜Ú”ný”<>–¢Ã{hª§¢x/þ /ñËÂþð_‡|OàßÙçãn­ð«A›Ãþ#üøðçâ÷ÅO‚Úƒ|?Ãø¯HºÖ´Ý/O»H/4Ý3W—\ÓìžÎÊÊ+Q¤ÙÛéѬßÂ~Í3Q(ÕhË.¥‡qt+*°§7Šsž2£¥KÚâ*{*|¿»¾ÿƒ‹k-á¾™ kÿ ~øûâ¦`úv‹ñ{ÅgˆašXå¶0\ßkÞðö±¢[^j×Í“êøwZðžv¶¥$ÒCJ%‹òl×èñÃø¼|±9fsŽÊ°u&ç<½áéã£M6›§…ÄT­F¥*kU^8¹Æêó’V?xÈþ–œU€ÊáƒÎx{,Ï3 4£N–j±u²ÙUq‹J¶; Kˆ£^¬œÞxJÎÔâåuøkâß‹¿> |lñ?í#㯈þ-ñ?ÇŸxŸNñv­ñNëR6'µÕt)# [xf]4ë_h Š-|áÏ [i:7…íá_ì«8.eº¸¸ýS†8‡xS(¯“åØ8Ö¡Œ‹Že[©âqŸ49,d8Ó.INÜ0ðS©ËIJ¥IOðÞ5ñ7‹¸ï>Ãqo˜K‰Ëçäø|¶UpxL™Â¬kFYtUYÖ¥_ÚÂIâçZ¦.¤©Òç®ãFŒiýâïø)¯í7â]ÆèpüøWñƒâ>ŒÞø—ûSüøðÛá¿íUãí*}=4‹ôÖþ7h >&°Õ/ô¨¢°>&ð¯ü#¾*Ó#Š \Ò/íà»ä'à—ÊuaN¶‡Ëkb#Š­‘ÐÍêG(©VBS£:SÄ·û°ŸÖ½¬#î¢V>þŸÒKÄHS¡R®…qyÎ <‰q9çÔhTÖ¤iâ!^ž1©/~¥5ö5'ïT¥&Ýÿ94mJðî™i£h–úf—b­ªlŠ?2Gži’^[‹›‰eº»º™¤¹»ºšk«™e¸šYõ<¿/ÀåX,>]–áhà°8Jj– ‡‚§J”.äùb·”ç)T©9^u*Ju*JSœ¤ÿÍslË<Ìqy¶qÄf9–:«­‹ÆâêJ­zÕ,¢¹¥-¡N…*Tâ£NS£J¥B?¹ß~|Oý¤à•³¯Áσþ¼ñg޼aÿñíž›§Û)Kk;uø%h÷úÞµ|TÁ¤xFµ_ë½áK[ (¤–F-±ò\Ã;Ëx{Äþ ÍólL0¸,'‡øÔ©-e9<æJ0øª×­+B(^S›Iiv¿yʸo9âß8S Èpu1Ù–?ÅlÎ*PV…8.‹«ˆÄT·- .©^½KB8¶ÝìŸôÙû~Ä¿ ÿ`„2|6ðÄ~(øâ¤Óï¾2|W–÷5Û8¥i:< Ò6à­îní´=&9äY®oµ ®õ ëÛ™ÿ—8û3.;;¹ŠO —á]Jy^[sC BM^u$’öغü±–"³I6£Nš…(B+ûkÂÏ òo 2/ìüXÌÛ©UÎó‰Ã–¦;¾Zt`Ût08g9Ç ‡M´¥*µe:õjMýq_~žP@jhòÑÿì)§ÿé\4þwŸ·§üŸ7íŸÿgaûEêßñ£œÿ$WÿÙ-Ãÿú©Âä‰òqü@ÿ²ÛŠÿõ{?Aeø.í/û7|$²ø-âïxö€ð…ôëK‡—¥Õ-|Aá´ƒ ¾Ó.µ[ž?x{÷v–W5ýœÞž–°ÚXøšÖÒ×M‡OüóŒ<áþ'Ì*æ¸e|‡Ѝêã =<^ VOš¥ªJ¦Tq[n¬©b#Jr÷Ýi)άø}ô•⾠ʨdyž]†âœ³J42÷ŠÅUÀæ8J-,/סG FŒRWÂJµ8%J8ctéþr~ØßµçÇOÛ·ã%‡ÆÚÄ–š¾£ám/RÐ>xöw/Ã_„ú±q Æ·oà \êµÌz߉žÓO>,ñ޽«kž,×âÓtÍ.m^i:6…¦ýx[ÃÜ 9ã0²¯˜æÕiºṞªšR·=< qPÃB¥—´“•ZòWƒ¯ìÛò~&xÛÅž&S§—ãc†Ê2*5Uxdùsªáˆ­ û*ÙŽ&¬ÝLeJ7~Ê 
40Еª,7¶Šª{‚¿à¤_´ƒü9à}7PðßÀ/‰¾3øIá´ð‡Á>ü5ø¥ñÇà—…íå7VÃo‰.Ñ5WOÑô«ß*óJÓu¤×¬´÷µ³µ¶4ëK{(óÍü%á\×1Æf”jg/1Xæ_Ø9Ô(æ »R®±4'GMƼ—5xR(V¨åV¬gVR›× ñçŽ2<£’b)pÿàr‰QžN¸£)Y¥|¦Xh¸a¥‚ÄÃ…¬§…‹äÃT¯*õpô”hÑœ(Â4×ÅúÄ/ˆ¾ø±©|{´ø…ãKŸŽš×ŽGĽwã£â ÛψzÏáh¼Gâ'u¸óí"·¶ÓôÍ:Õm´MEµµðþ¦XhV–Út_K“ð_ dy-nÀåt?³1pœqÔ± ëǺTç üÖ¾0ØisèÖÿõ+ÄÿdžÎuVškïèÚö‹þ›yyk¥ê:ŒZOŠ´mêîÇ1h±5¬V?ãþ޹|s­—çÙ†)ó< L5mHE»ºt1’­BQ‚ÕA×£ˆ¨•¹çRIÊ_ÐY_Òç‰ðÙlpù¯ å9®g |‘̨ãqu*’JÑ«‰ÀC‰ŒêKIUXlN”¥ÍìéÑ‹Qá·‰~7üeñ§ÆÝkö“ñÄÿxƒãæ¿â?ÆWÿ§Ô†Ÿâ›moDx¿áš,Ze…4o Á½„¼9á‹-#AðÝŒ o¤éÖÛçi¿\áÞáÎÉ+d8 +`±qœscT15³7RœÞ:N…XºmÂ4£N)ÅÉS¥i9~ž%qwñ(Í3:”3éË(†[*˜<6L©TU©Ç,‚©:”e©T•yÕ«‰«5Z½NH(ýoâ_ø)Ÿí)¬éž-½ðÞ™ð+áGÆ_ˆ–³XüGýª>~Ïÿ ~þÔž<¶¿ÓßHÖŸ\øÃ£xiu{=_[Ò_û>ëÅ>µÐ<]apÜx^ѯ¢Ží~:~ ðs•JT±C†Êëb#‰­‘PÎjÇ(«R 8¹Ñ*˜‰8Ù(Í⽬U¹jE¤×èTþ’> Æ4kVÂðž3:ÃagƒÃñ>+‡¨Ë?¡F¤d¦©b)×¥„‚›“”鬰œ¯ÏJJRRùköUøóñ3ö,ø…ታ¾ºÞ×ü0—ŸeU’ÿEñ¨L.5ÆuÔÌT¿a>=ÿÁÁ_µ·Åo†ÿ€þø/á§ìõâŸh£FñŸÅßE­ê?ãß ÙÜj?ïu»él<«7ËÓ4ý_V³ñž±áø{ßjZ^´šV«¤þ=‡ú:dTñÞ×Äž'.Œù£‚Ž C(¦Ÿ³«ŽR©)ZÓ•,¸¿rTåi/è,_Òï‰êåžÃ™.6•7 æ3Æc1XHͦ½­ ±Â”á(ÝJœkf8˜)/ÞF¤[ƒüoýœ¾6|Jý”Ò|ýž›Z±ñ7‰¾~Ì¿ ¼ð+á7Äoé7 w£jÿ´éÖ·Þ=‡G¾Hµ; x‹W½ð`ÖmtÝzãÃwæ£j:ÉeÞp†ƒÅâgœgqË¢¡—à³ÌÁc²ü"ù£NŽ8z0tc/yP­íh9{Ò§&®}îoô„ãüϘ`0páþ–o'S6Ìxg*–Yšæ5%IÖÄcåŠÄÕX‰ÃÜxªÇ£x´bÚ>]¯ÕOÃé«ãüëâíéûUþÉ:”—þø5àïø'ß쑨ü_ø«ö5–ÏÃZKøw_’ Dk,õøŒFÖš›þ‘ö8ÍÆ»k>§KÏó~UÇÙwp·bjû£¨üBñ-î—£\øÓâ]ÇØ5}uôMÃIÓ¼/á­Jðåßp„ù7B¾'R9ÖqŠ£S [_xz8ZÊÕp¸\$§Z1hû˜ŠÕe:•¡ziR¥:”§ùoо;qˆõ0Ø<%)ðïàq±”2ü6.UqxŒu sPÆã±Ð§‡”燗ï0¸z0§GRÕdë×§J¼=¶oø(n•­kñülñïìsû/|Lý±,ôI‹ö›ñž‹ã­þ,¹ðõµµ–‡ãŒ|/ãO ü1ø·ñC±²Óì´Ox‡K·ÔtûM:ÊÌÅ=”r[MçVðvž˜>âÜ÷…òìÕÿ†Uƒj¾pw‹§†›­B¥ªr©Nœªæ6¥zµ18…m׌¨aÞûŸø/ïÂ<bÿ‚wü&½ý³Ç…ÓÂRüe¹Ô|1g·g©i¾5Óm ÓtŸhLz…¦ƒðÏÁ^•µ 7FøU¡éZ¶±¥Cà»×Ö ×b×¼OªxÖ÷Å^%ñwŠõÝoöðÆrÆpÝL2Í(f±öÆ'¶:¤,é4éµ,40³^Ó Uðµ[­ ²¯)U—ó÷xÓÆ|SÆ9wÒż“‘ÊK‡ðx’ž,¥Qµ^2U“†2®6›öY•JÔ•µÿ ¿ƒ5ý.ãX¹¹½Õ<7©=ÍПåâ N–®K„ãž%Ãp½zŽur4éM:s›J1ÄóFœiTm9Sx)SœÓZug+Ÿsÿ ØêEð˃qœk†£PâiF½6ªÓ„aGSÉ:Ó­IEÆc Ô ãJ…Z0Ÿ„þÅ¿µïÆØcãN£ñÇáOˆ¯5ÿxËY×µ¿Œ)ñQÕ8j-Õ[\ñn­ñ;Sžõ5wĺ޸í®ÂQöèu‹ Yc{9ãÓüí6¬Çx_Â8ΧÂpÀ¼.„Þ' Š£$ó 8ùEFx÷ˆ©ºØŠÑJÕU*U(¨ÑŒ)Ó¥ARø<³Æ¾=ËøÞ¯ÔÌÖ;2ÅSXt{m7O†Ûöjžð•N§Â(åÔg,E,Lg˜Ã0’jy—Ö6¥‹š|’s§*2 £…t~«Qó­8òVãõ™BY½zqÂVÁΜÞQS*ƒNžNðJªqÀS’öTêÃS–5b>»R¦"^µñö¿ÓGÂ| ý”¿gÿ†?±_¾.h·>øÛ¬ü$ÕüâŠ~=ðN ]µ¯…~ø™ñÅ>$Ö~|ñä¶Öü ðæßD’çJµ´Ð­uë-9´û”‡ƒPÅ,¿ÄaÄöC•ÍK’W’ÃÑ„`œ)Ó­^ªN¤aO÷1•8P«J‹•,=J•—ÝTúDTÁ<×1á_øW…ø£;§(æ|I†„±xš“©%Rµl>¥ TéN­Uíç Õ14kb+âébjAIüA§éö:M…–—¦Z[Øiºm¥½…¤I­¤Iµ­´1…Ž( …(£@B€¯Ù°øzL= .<> J = 0TéQ£J *Tá£S„cE$£’?±x¼V?‰Çcq±XÌez¸¬V+RUkâ15êJ­jõªM¹T«V¤¥9ÎMÊR“mÝŸ¼¿?fÏ‹?µ—ÿø$GÁOƒ>:Eû)ø†yî.í´/ hŸîƯâÏjkˤøsEŠxä½»1Mq<òÚiz]¦¡¬ê:nwø¾IÄYW f+ç9½a„ÃñEÆ1JU±5å—¯e…ÃSm{\Eg¡¨Æ*ujJiÔ©è¾$áïò¯ø{‡ð¿YÇbø+)JMà ƒÃC5—·Çck(ËØa0êIÔŸ,§)Jh®"­*Sþ›ÿcÿÙá7ì;ðfÃáÂÈ!Õ5ýJ ÿ‹¦´H5ï‰þ/·¶1Ü];ïš];Âzd³]ÛøSÃQ\Ëi¤ØM#<º†©}¬kzÏòßñ®kÆùÅLÇ9SÂÒ•Jyf]·C…r¼a•LED£,V%ÅN½D¬¡FhÒþÝðÛÃŒ“Ã^£”etã[Z4ªç9¼é¨âs\taiU–²•,%)JqÀàÔå 5);Ê®"®#[ézøÓô0 € þ ?à¸ò”_Ú{þè¯þ³ÇÂZþóð_þM§ ÿÝcÿWù©þ\ý"ÿäòqýÛßúÊäg§~Æ_ðX+¿‚ÿ®ÿeÚ»àO…?kÿÙ©¬ÿ³´Oøê= S»Ð4ˆ® ¿´ð­þ•ã Äþñ·ƒ,/­–}C×´ûk¿Ë* ?Ym+MÒ´;_ ¼ÀñNa<÷%Ǭ9«%STÁckÇU‰—²œ+a1r|®­zJ¬j8)ʇ·JÒú >‘YŸeTøgˆ²Éq/QŒ¨àšÄB–c—a§¤°qöôêPÇàbœÕ-wBtUGN¯«B–3ÿÿ‚¡øëöíð7†?g? 
|4ð·ìçûx7PÐ5H>ø*én¥ñõÿ„.¡¾ðu¿µ-3Jð¾€<)©ZXkÞøSáï ZhPx›NÓµŸë^+:G‡í4_;„<£–æÐÏx³5\CŽ¥UbiáTkK ,\eÏN7‰›¯”d”ãJ¥:TÜÕë{x7Lõ¸ÿé/ˆÎ2)ðÇdrá<²µƒ­u0ñÆÇ(:sÁeØL8ár¸ÎTå^•jõU9[ õZ‰UøðzMY¢ŸUðׄ>(|9ñ‡üVß 5ÛÛk;¯ü6Õ®5kiega,piÖ¶ö±}‡øaͳ¨ñ6K›æ+Äj¶?+P•,W4U9ÔÄa¥*\ÕgI{:’…jp¬¬ñëMsŸð‡YžCÃ’àÞ"Èrž8áDÔ°ù^vêB¶ ’n­:XLdc[’…*ÏÚÒ…\=Z˜y^8ZØzoð_Ÿ´ÄÿÚ?âþ%øòÿBðõ¯ÁÍo ~Ï ¾hKðÿá'ìíá{«‹;ÍFÏáG…l¯o¯ôÿx‡PÓí5Oø÷Ä÷ˆ|s¯^Amk&¿m iÚ>‡¦ožå9ŒÅçÜV/‰3ü|jÛfüµ%Ɉ‚§ˆ… ;u#IW¦½IT©^«¢ÞU Jœ¹¸çÆlÿ‹²ìå¸p®W:0YCÏF ¦£«„©‰ÅERiaª¿mFia¨,JŽ*TeŠŒ+G÷ á¿üWÁ¾.ø_àï~ÝŸ±¿ÃÚÓÆÿ .-5‡µý;Á7— â; U´´ñ&£¤xÛÁ¾*³ð· q'Û¼oàÃk5é‘™9×µ««]>ûÇßn´›‹Úø¶óLÐôß øfëQÑ|áoŸøÎûÅ__áç„Yob%šã1k8ÎåNt©Wt.UËUaiNujJ½H¹Sž*sŒJ”)SŒêûOÏüZñó8ñ# /À>á¸V§^¶bž#™Õ¡.jZ¡J8j3Q­KNœáð…j•ëJeÕ·íñáßK øßöý?gÚgöƒðg‡´ï øgöø„¿|=ãiº ½½Ÿ†,þ?éŸüaá=#öˆ²ðÕ•ºiúJ|AGÔ#²2¬º¤—Wz…Ýé‰ð‹…̱™—q.qÁÏ1¿×pyo-\ ù¹›öZèÎnT£í*G Í(ácB F/ãæ+“åù?ðoø…§•幆oÍC3§Ëìâ¾³‰TqQÅ9Bœa^^ÊŒñŠ0–:Xš‘”çòŸŠhÿŽþ8ý¥mkïüGÔfý 4Eðîà?h:†4O„ž ðeì×þ ø]ð‡ÁÚ\#Að7à Mqpl¼/oôšô÷WÚ§Žu?kÚ¦««ß{¼5á pî1ÂJ…LãQ©C8Ì3Y*ø¬uÒç«BñQT(ʯjÓª©Ö­^µjTªCæ8ÇÆŽ4âìÓ'ÇÇK‡ðœ9ˆ£Šáì§#‹Ã`rÌF>Ά'–No‰…ì«ßN„ªÐÃá¨P¯^•OÞýþ ñðÓ\Ó¼ñ;ã¯ìðŸâ·ímðÃEŸHð'ÆHæðž—%›HíuöÄZÿÃïxïá…­åó SHðÆ«¬YÜ^ ïíZÈ\Çagù&eô}ÇÑÆâ qGÔò¼j•,FLqÂͧ<5J¸I{<Æ•Òjéá“J1Ÿ<¢êË÷œŸé]–b2ì"ã þÐβÙG…ÆeÓÁÏ SN.4ñ”¨ãàëeùdâêaªã%ç*\š£ߎµçí ûCþÕs~Ùß|u-ŸÇ?ûÇáÕï‚VÿÃÞø3á_ jrë>ð‡ÂÍ:ëSÕõ K¶Öç¸ñ·}{«jZ¿Š¼Oyw«kW’Àºf›¥þ£Á^pÿå˜ìXC9ÅæØg„ͱ˜ºŒ18Y_›CåUa°’mJ¥5V¥JÕ# •jKÙP/ÄüFñ¿Šøû:Ës 5'ÃØ ‡~E—`13”ð˜èròf8œZ…ŒÇÁE•WF•,=)Õ¥BŒ=¾&Uþ•›þ #¦Øx“^øëàÏØëöYðwíâk‡ÕµÚzÓÃ^+ÔmŸÆóBÉsñŽÓö½ñWü(Ø>8\^·ü$.<'x/ÖÖÒÔ8o†²ŽÊée.êøZr•J’œ½¦#ˆšŠ©‰ÅV²ukÔPŠr´aB©BtéÇñN0ã,ÿŽóºùÿã>·Ž«Ñ¥GÙapXJrœ¨à°Tj†“œåóJu*N¥zõ*â*Õ«?ן:>­â/ø$íUáýL¿ÖµÝwöÂýœt}FÒ­'¿Õ5m[SÓ/ì´í3M°µŽ[›ÛûûÉáµ³´·ŠIîn%ŽcyTü>yZ–ÅnÄW© 4(p—V­Z¬£ T©R©Ô©Rrj0„!)ÊMF1M¶’?Jáªñ^q® F®#‰ãΡ‡ÃÑ„ªÖ¯^µ)Ó¥F•8':•jÔ”aNNSœ”b›iÐ/ü¿þ £þÄž¶ø½ñ‹JÒu¯ÚÓÅúsc-oªX|ð¾«gIá­áLÖSøûS·–hü[âK–;kYφt+ƒ¤¦©©ø·ùóÅjñv*y6KZ­ÂÔ\ÍsSžqˆ§6Ö&´]¦°tÚ‹Âa¦“r_Z¯jèÓÃXøà•ÁCˆ¸F¿ci>H·Ôø{ Z <Kšœ³±rŽ?MµÉà°³t"¶7õ©™™Ý™˜³3ÌÌÇ,ÌÇ$±$’IÉ<šüPþ€ ( € üQÿƒ‘å·ÿýÚ¯þ¶Àºþ“¨ € ( € ( € (åú( € øöµýž?e_Ú«â¦mâŸØïZý¬¿h?ü6ðÿ†oñ¿Æ_ <-ðóáý¶µ¬j¾_‰$ÒüqáhkzŸˆ¼K¨xkI°Ð|_ñÄP‹ûÛm  èwš¾‹ö|;â𦠮]fÿPÁÖÅOVÔ2ÌW6&¥*'WÚcpXšªô°Ôcɪk“™AJSrüï‹|(à:̨füUjf| <ºŽ#ûS:Àò`¨×ÄâiÑöYvcƒ¡.ZøÌDý¤éJ«öœ²›„!ü{/ü»ÁK$‹ü—á,‘+¸ŠGÿ‚¦üx‰¤Œ1íðT‚6eÃ>ÂJïldûßñ<óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸÿ¯ðwý!‹áþ-CãÇÿ0ôÄhñ/þŠOüÃäüêø—OÿèŽÿÍ‡Š¿ùøðêÿÒ¾ÿâÔ><óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸÿ¯ðwý!‹áþ-CãÇÿ0ôÄhñ/þŠOüÃäüêø—OÿèŽÿÍ‡Š¿ùøðêÿÒ¾ÿâÔ><óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸÿ¯ðwý!‹áþ-CãÇÿ0ôÄhñ/þŠOüÃäüêø—OÿèŽÿÍ‡Š¿ùøðêÿÒ¾ÿâÔ><óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸÿ¯ðwý!‹áþ-CãÇÿ0ôÄhñ/þŠOüÃäüêø—OÿèŽÿÍ‡Š¿ùøðêÿÒ¾ÿâÔ><óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸÿ¯ðwý!‹áþ-CãÇÿ0ôÄhñ/þŠOüÃäüêø—OÿèŽÿÍ‡Š¿ùøðêÿÒ¾ÿâÔ><óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸ‡ß³ý´±WÃk/…w¿±ÜŸ²OÂøÀÞIâÿ |}—ö‰øu£x×ÅööZcYxÏÅþ,¹Ò~ ø"?\èú…£ë7~ŸÀë2éú%æ» kúæ‡c¯|Wq&uÅ8å™g¸ß¯cU xe[êø\/î(ʤ©ÃÙ`èaè¾YU¨ùÝ>wÍg&’Kô~àþàŒ±äü1—feÒÅVƱ˜f˜ì^cÄnñ%oÇW©‰ÅVöT3ŠT){Zõg?gF•:Pæå§A(¯8ÿ‡Wø;þÅðÿ¡ñãÿ˜zêÿˆÑâ_ýŸù‡È?ùÔpÿĺx7ÿDwþlJq…8sÔœ¥ÉN„ohÅ+#ö< –à°™v —°Áàp¸|=JžË …£ = n¥YN­OgFœ!ÏVs©%Îr•ÛöZç:‚€ (  Q‡IÕí5 „–H`3ïXB™–ÚhÐîŠHi ºð9 æKXÿ‚MÿÁ?¤óî¾xþ »û@ø~ßQ»ÒŒþ鿳Ôþ ½¼Óîo,µð÷‰¾!ø+áæ›ã6ÊúÆ{uïËâ ï­íµIäŠàCû×üLGÿЯ…¿ð‹6ÿçÙü¹ÿ“áÇý¸ÛÿYÿCfü:öKÿ£Iÿ‚Úà'ìQÿËŠ?âb8×þ…|-ÿ„Y·ÿ>Ãþ%'Ãúq·þ²/þ†Ãþ?û%ÿѤÿÁm?ðö(ÿåÅñ1kÿB¾ÿÂ,ÛÿŸaÿ“áÇý¸ÛÿYÿCaÿŸý’ÿèÒà¶Ÿø ûòâø˜Ž5ÿ¡_ ámÿϰÿ‰Iðãþ‡\mÿ‡,‹ÿ¡°ÿ‡OþÉôi?ð[OüýŠ?ùqGüLGÿЯ…¿ð‹6ÿçØĤøqÿC®6ÿÖEÿÐØçÿd¿ú4Ÿø-§þ~Åü¸£þ&#èWÂßøE›óì?âR|8ÿ¡×áË"ÿèl?áÓÿ²_ýOüÓÿ?bþ\QÿÆ¿ô+áoü"Í¿ùöñ)>Ðë¿ðå‘ô6ðéÿÙ/þ'þ iÿ€Ÿ±Gÿ.(ÿ‰ˆã_úð·þfßüûø”Ÿ?èuÆßørÈ¿úøtÿì—ÿF“ÿ´ÿÀOØ£ÿ—ÄÄq¯ý ø[ÿ³oþ}‡üJO‡ô:ãoü9d_ý ‡ü:öKÿ£Iÿ‚Úà'ìQÿËŠ?âb8×þ…|-ÿ„Y·ÿ>Ãþ%'Ãúq·þ²/þ†Ãþ?û%ÿѤÿÁm?ðö(ÿåÅñ1kÿB¾ÿÂ,ÛÿŸaÿ“áÇý¸ÛÿYÿCaÿŸý’ÿèÒà¶Ÿø ûòâø˜Ž5ÿ¡_ ámÿϰÿ‰Iðãþ‡\mÿ‡,‹ÿ¡²{oø$×ì“=Ä?ì£ÿ®µI¥Ž7¹¹´ý‹~Ïn®ÁZiþÏ©O?•;äò`š] ùq;aIÿÆ¿ô+áoü"Í¿ùöñ)>Ðë¿ðå‘ô6~õþÊëðSÁŸ¼+ð‡àÞ›ã½+Pø)á=/ágˆ­¾5øbÛÃ|?£<·)Ò<=ã{1 øzTÓ5Õ®5/QѬÂ>%…Ρáë½JÚ9®äüƒ>â 
ȳ,~etéO1ÆO[ …u¡‚†&táJU(ЫZ¼¢Ý:q4êTŸ*·=´?~á~ʸK'Êò|¶5kSÊrèex|v5aêfU0tëT¯Uñ40øhÊ*­YÏ’*T¹Ÿ7³æÔú ¼CéB€ ( ÆÛ›öý‰hïÚ_âgÅX~ÞŸ~7x£Nð^«ã¯‡Ÿ²îŸð»Xм!g£xGð…×SÕ!Ñ<me¡x£â:xƒY–[cKңЦ…­ÿ[ḣ…2<A—`2Ø<Ö}\n0©‰—Ö±˜ŒmOk:¦“µ\LãZ0µ5.i'9~ ÆŸG~ ã®&̸«7Í8§˜fŸSúÅ»”ÑÁCê9~.¥ìiârLexóPÁÒN|EKÕ”åH8Â?çÿd¿ú4Ÿø-§þ~Åü¸¯þ&#èWÂßøE›óìùoø”Ÿ?èuÆßørÈ¿úøtÿì—ÿF“ÿ´ÿÀOØ£ÿ—ÄÄq¯ý ø[ÿ³oþ}‡üJO‡ô:ãoü9d_ý ‡ü:öKÿ£Iÿ‚Úà'ìQÿËŠ?âb8×þ…|-ÿ„Y·ÿ>Ãþ%'Ãúq·þ²/þ†Ãþ?û%ÿѤÿÁm?ðö(ÿåÅñ1kÿB¾ÿÂ,ÛÿŸaÿ“áÇý¸ÛÿYÿCaÿŸý’ÿèÒà¶Ÿø ûòâø˜Ž5ÿ¡_ ámÿϰÿ‰Iðãþ‡\mÿ‡,‹ÿ¡°ÿ‡OþÉôi?ð[OüýŠ?ùqGüLGÿЯ…¿ð‹6ÿçØĤøqÿC®6ÿÖEÿÐØçÿd¿ú4Ÿø-§þ~Åü¸£þ&#èWÂßøE›óì?âR|8ÿ¡×áË"ÿèl?áÓÿ²_ýOüÓÿ?bþ\QÿÆ¿ô+áoü"Í¿ùöñ)>Ðë¿ðå‘ô6ðéÿÙ/þ'þ iÿ€Ÿ±Gÿ.(ÿ‰ˆã_úð·þfßüûø”Ÿ?èuÆßørÈ¿úøtÿì—ÿF“ÿ´ÿÀOØ£ÿ—ÄÄq¯ý ø[ÿ³oþ}‡üJO‡ô:ãoü9d_ý ‡ü:öKÿ£Iÿ‚Úà'ìQÿËŠ?âb8×þ…|-ÿ„Y·ÿ>Ãþ%'Ãúq·þ²/þ†Ãþ?û%ÿѤÿÁm?ðö(ÿåÅñ1kÿB¾ÿÂ,ÛÿŸaÿ“áÇý¸ÛÿYÿCgêÏìðsöaý’ôoà÷‚| ûOø+Æ¿üIeñSþý´|)àX5·Ö|¡.•q¬|2×ü  ÃàgWðÖ›­»êÚn‹¯ë>)Ð’èê—pYiÇ=Ççücâ{Ƹ¬6/2§ÁTÃ`ë`9r˜c0Ôëá«Õj”ñ ÅÊ¢”áÅJ0j+š «Ÿªø}á? xq‚Æ`rй–eK˜áóN|ö¦_Œ«†Æa¨OF®árì )J4êNÓpX¹7 ‘NÇèô’I4,®ÒI#3É#±gwc–fc’X“’O$ן§  € ( € üQÿƒ‘å·ÿýÚ¯þ¶Àºþ“¨ € ( € ( € (åú( € ñø'¶±uâ +ö¸Ö5 I©?í¿ñÃEº¼f’Iï-<¥xÁ>#1?Ùžðæ…¡Ú¢íŽ .ÒÑV1@¡P@P@P@|éû`Aisû&~ÓðßÙÁ¨Y¿ìóñŸíW*Z ˜“áω¡”+íÆä+"·u-ö³âO…µí^öwi&¼Ôõhú…ýÔÒ9/$·wË#±,Î嘒Mz½P@xÏí®êžýž¾&Öc»÷º( € ò¿ø&~¥‰¿c†ÿ'²‚ßÄ_5ÿŠŸ|i|‹ º×<]â_ŠÞ3›TÕu ˜ ·k©pÚišzÊ…t½MÒ4/+LÒlmáûÒ€ ( € ( € ( ƒ?ॺ²øWöBñÇÄ{(.ê¿ü?ãûOÉðïá µO7„~ h0i?ðŽÞGmqáCrº½Ü×ÓiÚh°Á'¿à±¾ÿ‚ŽþÇ¿¿kÿ‹ß |1ûx[àÇïü ñ…—ÄOŽZGŠ<7¤ÿÂ'ào…~/ŸÅ>"øâ/|!Ò¼0³_|M_Ë£ê:k­¥Ö޳¶±4º˜°±ýƒö¹ý”n¾ ê¿´·í;û=\þÎúÄ6šßǸ>4ü7›à¾usªéº½¶«ñJ?·ôû‰õÍgGÑ¡†ï]†Iu]WMÓ‘ZîúÖ@4ü)ûL~Ï_|[ãχ? ~;üø‘ñ?á–.£ã߆þ øµàOxÓÁ0©Å/¼9¢kz–·àûI®^c{¯i¶pG$ÉÌB°åF›ÿÖSIÿ‚ZÍâÙÿáV£«ÁI~>üPøusðOöÉðíðÓàèøw¯jšL'ð÷Ɔ_¯¼ñ½õK{¤Ö4ãÁ-áqõ/ j:µÞ©¢Þ’ú§íGû2Íñ}ÿg¸¿h¯2ü}yñü]ø~ÿÐGfu ü4_/—` óçElÁ¹lB7ÐíñÓÀß³À_Œ¿´gÄÆÕá÷À߆~4ø©ãÐì—QÖçðÿ¼?âJÏE°’{X¯u‹ë{³Òíg»´·žþ{xî.íai.#übø%ÿ‹ý±>$7ì¯ñCÆŸðGÏ^ýkí{Àö þ?ü*øÝà¯ÚGÆþðÇÄý?ûkÀ?~,þÏŸ |Þ-ø}ðþ÷ÃψüOâ¯ßÛø7HóÓUkGNѵ0¤¾Á^>x—ö–ý¿¾|{¿øOû*øgö"øÁðwàîñsâ÷íàÿh_øq x†öHb¸ŽÓCÖ|c®hÚv­u%¼ÐΖö7´2Å*¡Iˆ^/ý¥¿g/‡Ö>Ôü{ñÿàŸ‚4ß‹z–£ð·PñÅ_øjÇâVŸ£è+â^ûÀzνeoã+=+à ¾#Ô®|;&£ Ž‚Ë¬]¤ý•ÿäØgû ß?õ_xz€=æ€?“¿µWüïþ 1ñwö´ñçì!ûZ|!ÿ‚}~ųWÇ~Ο þÙÿµìgûþÐ?¿à¸VŸ ¾¿ìÙã«ÿè´Â]CIÕü;ûLøêþßFðW‹¼;ð»Â÷ºˆ¼1ãoø‚âÇ—Ú'…®u5Ô´»ö𿆠¶ÖM°—ìÁÿÀý—ÿiß ÿfüý±?d¯Šß|'ãÙóLý±¾ƒzWÇÿi¶j÷7¿ u»?x¾ÃVs£[OªÚÛk2hÖèXÇq¨ÜÚY\|ãâïø9‹öðvñÛÅWŸn-gÃ_³íâ_ÙÃöñ„þxwÅøEªè>"Ó¼!¥|Kñ׋4oŠ^Ð>xûÅwš‡‡>=öµÄOjÞ×­ãøwl!°}D鿃ð\/ØÏã—ísðËöBðÆûCx{Yøùáø»öhøßñ àÞ¡àÙãö›Ñüa¨êzåÏÀïëz´>!ñU’XhºÕΙ¯ßø/Dð·ˆ#Ó“þý{TmoÃ+®|õ©ÿÁȲ¶‡ûLø@ýš¿oßè߱߯?|"ý¤5_‡Ÿ³÷…|W¢|*¶ð&ª4KÏŠž'ñ5Å¡ák†ZÖ¡o¯.‡,úÜ~=k ø—XÖ<£è¶6ú…àÚÁ[¿e [ö€ý€g߈¿uø)'Âÿ|^ý~$øK@ðãü1ƒÁÞ ð.¥ñQ“â æ¿ã ÆÞÕ®ôm.æÊÓGÓü ¯ÞÙë é> C𠯶ùëöšÿ‚Üü)øMeÿDðGÃO?´wÄŒÿðL¿‡žñÄ{]Á>Ô|ª]üQð^¥âo x³H½‹â„zôß ¼ ªë¿5ÝsÃþ¿ðÏ…¬5}GAÒŒš°éìQÿ#ø%ûoøŸã/à xãŸÀ_³í߆cøÁû8þÓßàø]ñ«ÁšO´çÕ|â©ü?aâhZ·…|QaÜizÆâmV4‰¬¥Ôc°‹WÑdÔ€?A¨óþ Eÿ Ø×þÏãö}ÿÓ¨è ( € øwþ ?ÿ&oñSþÃõvü9 Ø*(ð7þ ûÿ@ý¨¿à—¿¿gþÉÿ >|Wø‰ñ·ö…µø;7„¾*xcÇ~+‹Pµ½ð_‰5û?é¾øðÿUoêZÆ‘c¦éë5Þ±ÏÚšÒßGžò{vPgöšÿ‚×jv_°çüóöÖýtO†ž'ðçíçûmþÌß³_Ž4_‰Ö>#ñŸ|;ñ[Kø›Åé­áø)¬þ)|9ñ§€'ðœ:®§&·á¥¸°Ôî$ðæ­ky§ÜB÷¯íCÿyÿ‚q~Æ_¿áJ~Ñÿµ…|ñJ #L×õ¿ØxWâGÄ ÿhzÓÚ®“¬|G¸øià¿éß ´ÍF;û «Kïˆ7¾¶›OÔ4ýI%6Ö·3{F¹ûy~É>øÝû3~η¿´{¿‹_¶?„|EãïÙŸDðöƒãh|áo ÜøßXñ.‰ñÂÞÖ~iúBøRÒmoN»ñ‹4d×lüŸì/í)nm£˜æŸÚOþ ûþΞý·†§ñ?QñÄ¿Ø+Ã>¿øáðãKøañŽêïCñWÄýçQø;áq¯Ùü=¸ðÞ©µ²Ó¦ñ‡µ}_Ã~ Žíµ/êÞÓmnîàù÷öIÿ‚ùþÄ_?c/Ù«öœøÿñOIø¯üqø•áÙÛ]Ñ.>üs ðßí3âØø¤ü9´ñv©ðäiÒxb×EÔ¬ïåø­{¨§Â»X оñÌéú‚Z€}3­Áf?à™^ý™|ûaëßµƒ´ŸÙëâW‹¼SàO‡3½ð¿Ä¨u¯ˆ>*ðN·wáßi~øh|ÿ OÅ£EÕ¬¦†îû@ðV¡§ilu(nåÓ5=:òì°ø[ÿYÿ‚}|lý›¾2~Öß ÿiO ø×àGìó¤êÚçÆÿé>ø‚ÿÁmÿà–¾/|øðÿöÃð»ñKãæácá7‡ßÃ_´{7á?é7Ðiz¯Äø¯T·Ñ<'á+MVæÞò "ÚêúáµwYkEôO éšÖ¯o¥ê÷60éw€Ž3þÞðq·Ã_†þ ý«~)Á6dω¿|A7„5¯~Ë¿³¯‰þ4ë_·W„<ãí: ieÓ¯µOø'Ä^)Ò-µ[Iu}Ã>ÕuKËëZVca¬Þè௵'üGöý‰4o†:‡íkñÎÏàf»ñwöž(ðoÃ_ø?Çž(ø¿6•sm÷jŸ ~øcÇ:ÒbÒ®}+V¾½Ñ#Ò¬µ› KK:ƒÝØ\ÇÇü‡þ ç¢þÇW?·ì¿´×…5oÙÃ\Ò¼1ª|]ð‡> xæÄÚÖµ¦øzÇÃzáâ6…âí]gI‚÷EÕü!e©éPjV7ú¥­žŸu 
Ó€g|)ÿ‚¼Á7¾8ü|ñGìÁðŸö«ð7¾8øKDñOˆ¯|¥hÞ:Hµ½'ÁÚ…ï‹fðŠï|)i࿊:–‘¬^ê:oÃox¯S†ËFÖnþÆmôFK`€ÿbÏø8³ö9ý©<]û{x†çá×Ã?ÙQñÏŽ<#âãð«ãýõß?f_†:O† ñ§ÆOYÿ²øgX‹ÅZܰéÿ . ³ø¢úÛ\Ià—žÓUk`¿>ÿÁ^à›Ÿ|7ñ{Æ_ kkøð¯á—ÆŒ~/Õôßø/Âß~|bÐÿá"øw©xƒÄ>8ðLJ4ˆ5½sO)Ï‚a¼ŸÇ×%ƒÃ&ðæâYáÒd©û&Áaà›?·Ä™þþ̵?„þ"|PM"óÄ>Ô|+ñ+á·ˆ|G¢iÉ,Ú†«à»о ðCxîÂÆÖ ‹ëËŸv+m6ÚçR™’ÂÚ{ˆÀ?<{ÿúý±|-ÿëÿ‚§þ×:Ã_Ù¢o‰°÷ügQýþè—žø¥'‚?xkáŽþ+øÃÆú4o üDøã+íû4]jþ%‡À ¼/ãÿé~ ²¹ƒS„xŸYÓ ÐÔéz”-«Ë6—¨6¼}ÿHý€þ|!ýœþ?xÇö—ðUŸÁOÚËǺÀ47Æ*ðW|oâG¿†ÃG¾×|'á½nËÀÑZ\i:µ§ˆõˆRxSCð…Ùx³RÑnôÛØ`÷oÙsö¬øûi|Ñ>?þÌ¿`ø¡ðƒÄš·‰ôM Æ–¾ñ_†­uMGÁÞ ÔUý…ÿä~ÝŸöy_µßþ¤Ððû:|ø½cûÿÁ.>ü²Ôí¼ÿºýŸõïØãv¯¡»Ãÿž³ð/þ a¯üFÖþ'j̪æyæý—üWñCÀ‚Ø«³xnËY¼¶A&™0œì]KâO€~~Ë߶ׂ “PÖõMPG}©_Ý\ÅŠùsûÿÉžÿÁ¥?öÿ¶‡þ¯O‰”òÅÚ;CñÏ„¾~Ð>øû>þÊ:ŸÁ/ø,Ÿ‚?i_ÚWáŸ~þÔž*ý±?fÍ7Oø?‡+Ó­€?´?xËþ Su¥þØ^5ý¢~~ßµ¯üs[ø)ñ—Sø;ð#ö_´øý©~Úþø’K1àoxÛHø¤ö?µkßüÔ|F¾6Ñ| ßÚ·‰äÓ´ÏZÝ ¥´˜ù_ðçÆÙ·à·‰?e—ÿ‚~Ö_ðPí#öœñgÇO†7Äø$¯Ä+޾øÁž Õ_‹>ø‹ üLð¿†|oàgšëLñßâŒ$ŠÆKÝ{Ãú¶“­çŒ´°®¼Eû1|øßñ›þôñïÅÿ„Þø•âï…_ ¡Ô>ë¾4ðþâÿ†ZÜ?²WÅÿÂKà µ;{†ðŸŠ$ñ/ü©Ëâ캳¿…´kµ‹X$‚`#ý¡tKÏÁ¾|Føû8·ŠOüÂ7_ ÿiÏÚûà—í1ûZÙ|JÖ Õ¾1ëz-´“x‹]þ×ñeÆŸ=†”£N·ƒÃø€Ä¿eÍ?Àe?ø6£ö\øËf¾5Õ¾ÿÁHk/´WÂ?é×òMáÛë‰Ç-~xÿFÖmcƒQÑî¼ ­øbÓXðµÐ¿Ñï<+©Ÿ j¶ïd×zb}]ÿ;ð^Ÿû8Á]¿iù¾5_~Ê?³÷ì³ñSþ ÿðïà¿ì£«þÒß±ŸÄïÚ/àÔtMÑ>$| ý›¼?ðkľ Ò>üYƒÆßð”köšK¨½†³hÚfŸb5ëÕÀ?£ÿÙSᧉ~Á ü9ðÃÅ^:ñWÄ›ÿ~Â4}3Æ>7ø}⟅>+Ô¼žñ´ÿ­u¿‡6¹ºñoƒ®ô/Ïá¿_•uHt¸„Öö›…¬ DþÌŸòm¿³ßýÿ„ÿúhîP@à_µoüšçí'ÿd ãþ«¿PÔŸ²¿ü›ìãÿdáþ«ïP¼ÐñÙû jß¶—üâ/íû1ëðNŸÚ×öÝý‘¾2~Ò¾8ý¥?e±w‚aøÁ¯éq|C°Ñt{¯üPð­Ý¾¡áGÒô xVÆûPÔŸN[]{Nñ§¤Xx“A×tíFÔßíéðþ ÿpÿ‚J|~Ô>3þÍÞø)ñ3Lý¨¼û@~ɲ µÔñ|fÕþxø‚ÊûÁµÝOÄÍá-KÇÚž•âs­xsAMÁ:¥Ýÿ….,µ*ÓQñ…¦é@_ÄIjø,'üŸþ 1ñC@ý‚kÏØÃáüÛÆ>:øçû@|Gý®þ/Áó{â½~/‡Ö þZÞêSjt»­ká½¾‡wâ="Ö ý3_mZ÷NÒ,ôv[ð<ûþÔvÿðHïø8óáÝçì»ñú‰Ÿà ¼aðcÀ·?þ"Å㟌~ ¼øðÏUð÷Š~xb_ ®½ñÃ7qZ꺎®xRÇWÒî"¶¿»³ºt‚âDûâ_ìÍû@ÉûGÿÁ¦šÎ‰û?|c—Aý›¾øŸÃÿ´­¦|*ñ´šOÀK·ýšgÙh¿/ít³ø]rúÎâ ÛNñ´Ú¯©éZÆ›&îÆö€?.¿d/Ú'ãoÃÏÙþýŸ>þ´ßíS©þÕ·íÙðgᯌg‡¶ß|;Έ|?¨|;Õô¯Žvzn«Š|àïè¾&ðÿŒ4O`kˆ|ßès_hWºzOvõ.¡û~Ù?ðN]Gþ Ëý¤µÙ‡ãí9§þà þ7ü/ýª>þÌÞoŒ>ë¿´‡uùmâ±ðžƒtÒø›Nð‹|Hñ•w¬i7rx^Ï;~Ê×^ðýî Ú|ý¿lÿàèøŸö>øçðSTý·¿co h³_‡~&xFïE¶ñþ¡­þÍ¿!Ð?cÏÛÿÀ^8ÿ‚P|lðÇ…¿h¿‚Ú_„¿ðËÂ_4ß¿ðTÏZïˆþ5|jÒ#Ô4/^é–ß¼qa©üCðŽ“ðûPðônâŸxÖóLÔ4½B;]M]kžþèóþ Eÿ Ø×þÏãö}ÿÓ¨è ( € øwþ ?ÿ&oñSþÃõvü9 Ø*(ùµÿƒ‚€?ÿàƒ€Œƒÿ²ýAAñL9Pó©ÿ<øñþ Áû~þÎ?°—…ôKøaOÚ›þ Ãû$ÿÁD¿eVˆ2hß¼£øÄ ?h?‚z;R4Øõ/‰þ×l4Ëf6Ú„ôÿcŸRñˆnÔÐ~:ɯþÈ_ðR¿ø-¯„?koÛoãì1áÛOS±ñ7Ã{oþÅ~ýª,n_‚>%ðÿÄ-6Óá?|Sãÿ‡>3:.½á=Æ_¬´ Ä>Ó.µCV²×µ}6ãÁv÷:p¼üNð¿…¿à˜_ÿàÚOÚg┟´¿ìoû?ü-ý¥~ø³â÷Æ_„:®…ñá øÑáoGð»DøÉðóÁ7>9¹ð­k¥|JÓ,´ÿ -Õî²4kqG¦®© êºEˆ~xº÷ö¥ý ¿àî|1ðGä›ã/ì1á;…ž×¼âxûÆš³û*|Q²øy­iþÖl,¼[hßt'Ð|WáUÑì]ø«ñà7íKÿzÿƒx>øwTƒÇP|+ÿ‚þʳwíá cÃ:þ›m¢øº Ä~-ð6«‰4k vÞçCÕák‹][H¸±Ô’'ºóZkxÀ?iÿà°RÛþÊßðYoø$ßüCöƒð§ˆïÿ`/‚?þ2ü.ñÇŽ4?ëþ>ðŸìùñgÄþñå—„¼{â¯øcIÖ®ôK=Jóľ“JÖmô¹'ƒþ»í-eÕ|9¦ÚÈùâkøjÿÁÎÿðQÿÙãÃ>*Ñÿ`ß?°ß ~øÿYðWˆ¾øsã÷Å ü&𥯊~!x?@ñ.•¡êµ¦“¨xSÆÓjìÚd3?Žìïµ±¯jv°o~Öžð·„àˆðlޱáIеkÛ_öÕlµ-2ÆÞÎúÓQøà_ˆ~8ñµä0F’ÅqâŸÚZx[•X>£«ÚÛß\™'†7PïJ€ üËý°?äõ?àÿõÏö¸ÿÕWá:úR€ ( €ñ.¹¬èš qÏwâ-[G³Ð4ë{KS´…À>QÕ?àèo€Þ1øSáO~ͳ7í'ñ·þ 3â¯øDü:ÿ°”ßþ"ø?]ðŸï¯të?XxÓâ ç†/ô 'Â~'Y{Oé©®_ºA¦Ï¯h~´—X¹Ñ@<â߯}3þ éÿêß·ßü“Âz×Â?Ù÷öˆÿ‚vø/á¯Â‹¶¾ñ—Æï†ßþ5éðÖóâÁ¨üeàßÞ[ê˨øoâLVºí‡†4«{Oñµ…êivZW‹µe³ü\øóðóÆòÁà»?µ]ŸÃÏü(ý—l¯ø)wÂÿ‹Ÿ²Oƒ|aá‹ÿÝê_ ›ö‰ŽòOˆzO‚µ {It=Åšo‰<)¤iþU¬6/„.ôû5†“k3€~ðþÞþð·Ãïø+?üíkàIð´nƒûEø Æ=ÆßMÞ ð×ÁŸ„:†"û,qìÑ4‹M[‚ÃO¶¶VÔ(Ô]ͼó[á¿ÅÏ|8øÿjþÈ<½Ö<)ûCø»ãíýûCx_áîµáOÙϯ|Ô#ñ%½l5×ÑáþÇ¿]SF½Ó>ѬAs«ézÞ•ªi_i÷kp @þÖß²¯ÅŠði7ìYáهᾫâ™<+ðïöYý þ+ü1ø{¥ÜM¯üDð_Ù/¼UñBxt]¿ñðñ‰ì~$k˽ÕûAáûí`G<úzPkñ_ö‘øÿkÿ‚­ÿÁuïø&>…âÏi±^¹ãψ_´ÿÄ?á~øgà/ ½;áÑÒþxÃYñ†|=§ÅªËeáox.ÓÂZTúމe¨xª7Ã÷W°ëzÃÛ~Fübÿ”"ÿÁÃ?öœ-oÿWoÂjûÏöѼøûÿÁm?áª>.þØßàžß>=Á<~øà¿ísáÏÙsŸµ7„&Ôü-¤ü4‹Åÿ³åæ™ão‡?´Ÿjúƹám{ÇO£èÖÞ!f¼Ð¬ÚUÒük7ÚÀ>yý f¿…^ÿ‚jÁ< á½Gã¯ÄO‚à¸~ñÖ™eûQ|ðßÀ¿jž ø¤/ôiÖŸ 4 k]Ñô†Þ*½´ñ¿á˜ü×XÒ¼M5펅‰w¦ÜÞ€¢4Am VöÐÅooqà F‘C 1"ÇQETŽ8ãUŽ4E ˆªª€(óãþ ¯ÿ&ñÛþº|*ÿÕÕðæ€=²€ ( € ( Åø9þPQûÿݪÿëaü  é:€ ( € ( € ( €>_ € ( ÿ‚mȱû\Ùû~Óúuðí~P@P@P@ó¿í{ÿ&›ûPÙ»ükÿÕkâjùßödÿ“mýžÿì‡ü'ÿÔ @ p € ( 
ý©á–çöbý£míãig¸øñzbA¹å–_‡Þ!Ž8Ðwgv £¹ PÔ_²äoìËû:Ã*”–/?ã‘Ttøáõu8Èʰ àö v € ( €>ý‡?àžßÿ`%ý¥àç‰þ'ø”~Ôß´j/ˆ?ð³5¯ k'GñÿÄAd5½Áßð‹x+Á¿Ùþµû ?Ùz~·ÿ µé>×â ì®À»¨ € (óŸþ 5isw£þÇŸf…æû7íáð î}ƒ>U´:<Ù›Ñ#Ü»`sÒ€=†( €>ÿ‚Ž#Éû|SHÑävÖ>aK±ÇÆÏ‡$áTp'€ è(ö€ ( € ( € ( €?3¿kôvý´¿àžN¨å#ö¶ÞáITßð³Â7°]Ĺ#q ÐÒ4P@ãðJ´xÿ`/ÙÝ$G×Gñ®QÔ£ üOñ±VŒ‚Èäzý € ( € ( € (óÛþ ¨'ìñÕ#G‘ÌŸ ðˆ¥˜ããOÖ8Uœ($àp'@Ó@P@Pâü‹ÿ((ý¿ÿîÕõ°þÐô@P@P@P@/Ð@PÏ~ñÆû!x—âºøÛBñŽ¥ðkâçÄ}[âÞãŸx/ŦøsâÝ{@Ьüsáoˆžð^™­øºË@×u­ü[àÏhþÖ´ y5oøwÇW~ þÅðu6eÿ‚§þÄË$2|Cø“¾)7Ùû/þÕR¦ôb­¶H¾ .m ÙøKÀsx‹Ç>öO xLð—‡4 è±<?†tM+ÃúL9•áÓ4k4ëžB‘ãµ¶‰ÈÈ,@ÍmÐ@P{»K[û[›ëk{Û+Ûy­/,îáŽâÖîÖâ6†âÚæÞex§·ž'x¦†TxåÙYX‚ó®…â¯Ú“önðv›ðËáoÂO~ÑŸ¼¦éø]sâ_7 >"øsÂZ]„¼oqwðËÆÞñt^´ŽM?þ2Ó®´sPЬ´}7ÅZ©â+}_Ç~!‹þÃöòÿ£øiÿ‰µ¦ô6xÃÁ^;øñáßxDøaªÝx›á·ÂOø®ûÇå|m¨xXð´þ<ø‹ãMGÂþ´Õ5]Ãúö±¦x+žРѼ9>·­ëúLjœž9ñ§‰u],Cáë-GÃ>ðþ½?Šá/ðhÑÔP@P@~(ÿÁÈ¿ò‚ÛÿþíWÿ[à]IÔP@P@P@òýP@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@Pâü‹ÿ((ý¿ÿîÕõ°þÐô@P@P@P@/Ð@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@~(ÿÁÈ¿ò‚ÛÿþíWÿ[à]IÔP@P@P@ó+#20!•а=C)ÁÜŠJ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € (ñOþFB?à„¿·ëÿ 7쮀óÕ?l/»c"÷Ï<Æ@?¤Ê( € ( € ( € ùÿÅsiºÝì[HŠy ݹƆá™ð¾ÑIæCÏ$ÄOpH?@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@~9ÿÁË:sXÁnb[÷ý™ïÊÆÿ¶ìö.{ƒ,£<ƒ)ýP@P@P@P3âuË #Ú—ÖÛžÖCÀ|žÞCÐ$¸RóŠ­†E` ž ­¦’ ˆÞ¢b’G"•ta؃ùƒÐ‚$h*( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( €; xfMbá.îP®™€¹lµºùô&<ñ<€Œ.Qå?ÿàèÐüŸöåý™@`?l?Ùôà: ýÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨Xÿ‚ Á(5µjÿ‚“þÀ):±ÝÃûdþΉ:ŽÊÌ~"‘,`òE`¹b› @<êóþ Aÿʼn‰²ÿ‚šÿÁ=îãÏOÛ'öw·›ÕOÄY"8î|àOeì3?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@Eÿ(ÿ‚iHÁ_þ Mÿø€d|òþÚ_³‰QîD?æ|øB} ³Kÿ‚‘Á)íÙ&Ôÿট° ã®Ù¢ý±ÿgh­ÃÑÜüHóf^‡@3Ã+/PAþ ½ÿ®…(¿à¥ðOØ£B¤qþØß³š"(à*"üF ªí@ˆŸðqçü'öøãÿcý²~üý¸dŒ|Qÿ óÿ×ï…¿´·Áˆ;ñö'íWð7ÄZÏö„|'ã]_Ä¿öG‡ôW]Ôÿ³ôû°húf¡©ÝyVVW3ÆÿÙmanila-10.0.0/doc/source/images/rpc/flow1.png0000664000175000017500000012002613656750227020725 0ustar 
[binary PNG image data omitted]
+c…yŠ'¼Ö¼ÖAmÀI—€žìÀœaàžÅ‰“n)=‹ð?ötâwù€'åa»”'eàÛ|M¨¬žöxÉ8ê'€*uÅÿR'œ'`ŒßÍðeÀ« ¡•^ôØùÁ*6 ]*ëH¨ȧ;%xZÓô±NOðx¦-\͹s*{üÖ~ce‡ýb»®xi!T¹7XÈ/ñîš¿a_¹ñ%|ßâÁ•‹À  rß½Ìá”ãu Çm ˜máÊ©²¶,¶)Ïr­pa±_M;2Ùÿ»DÚ&ÀÊßv"Ê…}‹oÓ!#ÑaˆíüP*PAj7­Ü\æMï °ò`0ß@ù¬À,¶XM˜ç…°RLXáuçugðoôÀËh9[¤€¢xd± v¬À¥í9Ø´AÑ„vD ”oª=Üö®ß&'¶8—™,d3åIW£@5+¡½q0½µ2nÕ¬‡<0d¼ªxQ¥ y@`{Ðzz`«¹ÂÙÛ‡K€e```ˆ´ `³g DPcQ»S Ò ÛÕœìÉÕc‡ ÀâöäQò›e›Wʰ!:‚SgT ß T°PÄ/ÌA`Õ|ƒ$ã DEBí ûö1Ç`óÝåì"5Zè GŠm€m€m€m€[ Âç,%RPÂw‘ÅtšÀ¶4íÖZÖ¶mq~Ôå¯\õžjjÚpî#›rÐOò¨ T€ P”)9ã(jc‹ååœxmëmám5a°†ÿõsÐûF’<ö˜OH/< *ûØQäðÉöÓ† oõlCþ[ïœZöy‡íæ¾&ÀÚÀ-u‘ãI½Íúaüo¶AxYíúËqEó¼pqz~ãº?æ/ZªÚ·oÿNÊú:V‡ P*@¨ȃë­·ÞÇËV¼Nˆå ¤ll±´:–ä7€¡À`ЄNs?äà!Bj‘ð·vk#&‹²½—Øxöƒ6ñ€JÙ€FO&r²VÀÞQÉ#ã|í}äüpnRÔûbŽ)Û°¯2\Î3xFU.´Øh£–ä¡ä9P*@¨ )S ©©éÍ[‹á•1ÄrêïEã5à5¨¶ øA˜_1€ÎôÔ"h%¿9‹±xrQ/?H(”zWKŠãÚ¡º•<°6<ãö8_ü†|Í~Y©[9PÍâäW8§ oÔ!ÄMSSÖݱ:T€ P*RÌ|&Ów‡Ü55ÙQ™Á-5•bEjS ÃFÍštÓmXzߨØbi®+“‰ÇRB‹D~Àkþ†cÀc*á¹åÂŒËA¸ßx× €µ½¹fø²Y`Ë-ÍS`³0VØOÓSOÿég?¯­wâÞT€ P*Po0}÷ÀzW"äñ¬X`ZÀõÇyð“/šùÁߪõ®p?zæØØ*µÓS4‰¼®ðdÚU?3óÔjkO2…zV°ö¸[S—J“B•«¿í•Îʽ¶é¦›}¬»É^ùê*y6T€ Pb)ð³n΂²–—9­9Î#k ž­ëYǦõ×_ÿ“7V¯‰Åû’ƒ‹õ$„± ÄÓ‚¼§~cc²¦‡SÊ0=˜]™àHBmÍcûzÚD™Ûí`ûx~c`m¬²lßãkn/°µ‚y=Úôâ¥+TÛ¶í>¨g§Æc;)Àåeœd ‰ÑŠ¡%ãiUà&]1¤<|òt.y¸‘œ&Û0—¸¨‡ÑÃcÆÔ•ºÖ» Ø¡Á¨üæ7nTÂoÍ1«2‰¶±Ÿ~øµZš€(cN~ÈcBf¥Iœ0;1ÊCÙ¢a=°rNß86þè–cH½PWÙo²y|¹fâ‘®÷5 sü‘çþ|ƒ 6IçÄBâRàh]0#êâQNj¶,5aàµÄÃÂüÀ«9­¥‘ã;hñf€#n3¹ŒGÅ$ÇÁ¾ƒJà˜(׬ B†Íö5ƒóÈ¢79áKž¹Ã Ä¢óaŒæ%± ° ¸¶¿I’t€9{ò$@›½&ªx@áEèõ{醲úìr”•^Ô¡>2q<¹Ø_<ºø6aÛüÊÄŠ× l/£ƒ}¤þö’>(ÛÌß+Müäª}’ùÉÓ¡© áÛd®,V…až^¬S®úlÍ¡t®…0ZÑU)æKµhÈf¨€¿ááßñÿË©À«™×ÑJ'‡ Óï8ø ÇÀ85!Ö®»ì“jÁY¹ð l¸á†OL¾ý^†s"¶¶ÈÛ€x#]`Êv³:™‘Ëù–Ëc¯Ÿ[KYIí{Ò©Ã>éСÃèð=÷HP±A]œ V+µ‡²í`—Š2ZÑE%æIµ|æƒâJ]cx6ÍO¥·aöÍSË›3³,©› ר›éa-ß;‘êfWUåúvë¾ù_’2txzïØŠÕ¦•–²A{±¬v¾",¼±•&…JÛýƒõÄ7ؠßuïÓXUÄ’R \$œågGãÙuƒÅŽ „ýèò=,Ž;Ñ,ÛúE Š3ÇF}ìáze(çfûÙõ–<8ÊDx¿œ«-iFFÚN%F+º´æIµ~KÜCóæÇÿå&°bpxØÐÜH(É`Mñ°¨°È[ÍÛ¨T_$Vî ôº}çý¸ù„Óf±>Å^ï|^ïJ“(É5G¨n¹¥fÊýž×öâ¢WZΡÃÛö›ú³?M½{fEå7جH°;Ë©Eˆ¼.Cáäxš(@ `Q¢a³âØ¥øˆ=-ÃáÄá"uÅþ°OMˆ;¿a|›6®l7µ0m\±›q,ÓF7ÏAŽkB,£S°‚A øy`åf“…|—['Vn8ß0BÉM&aĵ¬<¸è ºòÝÞ±c§›/¼ḩi1ŒX| ¯+¯+Û@~ÚÀ~ûú¨]»vÇd´Û+Zµý<•° Mï" Ñ=S#ñfÊoa`­’gҚͺúÙÓ¨£ ÚvÝÊ«ÌK°8GÛiã§YI[¹hwUNÏ×nüP;„¸Ò©cÜ ¸yÃŒYð{¨ø¬ Îö›9»îaT9½œù?­¯wì¸hâ“ÿE3?&¯%¯%ÛÛ@mQ;›l¶Ù5ùïss†€6Ó›»Î “¨+°6ÀÙÐXI¨Jyý¢ûÌü~ö§}.â1•:ØÛñ»ù[5+Ç}Ź„oóÃhÅÜÜ2Å=4bsbÜüxXà&ï+þ–·_2©“y¢ ¹Y$¦ÛýÖj•ý°MÂ1dÌþï©ùÐBä͘YWûd\1ÚqãÆ:=rÒɧsL,'ô‰|BŸ8Œh–I8cH¶ ¬\õžÚsÀÀ?cèI1ºÅÜœ¥í•0´£ËEÚ‰wÑ]ר¼´¬¬ÒQ ™ òf 7n@y[d{tÔ)yÃ,n^œ?ÅT ¹mÛvÀx¹dÌ•Š0›l¸Ã#©7ÛÛ@=Ú uøÙ#?Ç2k:\ø-†|1{Á|œµ =³çT‘è>‰ 4gÕd ¤JÔž„Ð ¬Év;ŠÐV ÇGyrØ–òÛì@ÙÏ,u¶áÙŽ`ˆË¹ùÍ–lÎfŒº˜‘Š8–œ/àV43ÏÁ¯®ŒVÌǽ³РàA`Þ¤aDÁ #ðŠïj˱‰7Iv¸F¥zá¸öZWa΃yó¡ÀÀ¶mÛ^˜ml\ïZ¤!ß?üS}zcÍD ØØØ2Ú¶ì¹õ§ò\Çóкîºm.Ó÷ÍGƳРÆLaQdHš¹ßoaÊŒ+o½êÅhŸ®(Ë¥T€ D @SË ¼Ü8E'.`¨,‚ P*P'zèã6Ïõ:Uƒ‡Yñ*VsñfŠcEþwC[Í1«Ý§ {ˆÑŠÕ^1îG¨ T€ P*@¨ðQ Lž½;X!¼a–xLòb nÊ$?8£“TœÇ¢T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@²¥f²Ä¬‘X£Úõƒüi\¾Áµþẏë±SOŒçD¨ T€ P*@¨@¶˜Ø)ª%°,ƒÒÉuyÔù±ÖŸëeO É([Î9 ,Ë>Aú L€¸,K1Ðõd¬|r¼J»'—¸\±Ê‹Èݨ T€ P*@¨ˆFF@£™+€–jÁKj—À¢®¨»í!Ø•Mû¼]ÎSÎ¥`£èfë‰ÿQOW‡~f+=Ê. °ÑÜo,… P*@¨ T€ P0=žâíÂö7µ@À(Œ‡Ò®J+žNûØá~òÈy üƒôP,ŒÐMÀú™^mè ÇÂ6—]ÇrðK€uQ“y¨ T€ P*@¨ȼ•Bvb]ËOŒ$¶ÜEpXxÅ»X),Xàß~Àˆ}P‘Çï­%Ï@‡–#×úË1ýÆ`Äd*@¨ T€ P*@²¯@%€d• —…Wp ¸BÂß~h,¼¥È¥N~ g×yû`_¿Ïà–ãË6ñÈ ØÉ8Tsó8ø=hœ®@.ÒQG”û¢‡‹ÇWê(:—+?`ECeáºVú˜šË5ªBŒëŽrQ~¹ëpHn¦T€ P*@¨ T€ +P `ìl(¨È˜N'ù߆#6”!$^H? 3ëc§L˘S‹ýÌcHMh´Ï»âœÌºúåµëPNy€¿htuL€5ÁÙé.WwÓãk†?ãøåZÎü®È[`%4Z¼Ý¢{¥1»AçÍíT€ P*@¨ T€ P_Ê, ­\¸+àÄöžJˆ­ .°Mâ¥5Ë7Ë’ú¬ c?+¨ËÁ£k1„‘pi?¯°„ðJÝý€1È jŠïš×Xì_Î{ëWžÒlzžMá-5?~!ã2íÝð|Ëu2C›åŇ‹—™·% T€ P*@¨ TÀYs¢ s<¨ ÅõãR•Bfzý<£(Ëï)P†mæ' €Øc6tÉ1ÍߣX¿Y“mm€Åv9WÔW>•Bš¡¿ý)§c'×Owüæ:.Û å†\ÛóQ*@¨ T€ P*@JKµˆ·RÀ€UnÌi9ÙüÂL`]AªRH3Ž+^a3Œ6 €•²m “ú›žÙ¨6hvg?€5½ž•V4)w [³ Ýý®m%·Œ –°nÞjT€ P*@¨ T€ Pšðsìd9,àV Æ/i{é*¬ r". 
e{/£X©«Zë «®aÁ~ç\î"ú,òÊØ`yÁàZGó8¶fâ ÷{Ñ€ýl€5C½åøö·‹—¹æÌ¨ T€ P*@¨(Ž•ÆÀMš$³Ú¢ ¤°Ø´¬Z[êÊ£ °™!ÛA­¬ÀÔ;Jˆw¹Y‘ƒ€:*€E9rýý¾ƒÎ‘Û© T€ P*@¨ Î Tòxú·xò[.&,ÀúÍÈä03ÇVFå…hâݼÊùØã8ýàÐo$¿‹ ùì ”üò–Xä5Ëñ«k±„é^.„Øö¸;78ž@Ê2¡O Í/ô¶ÀŠçPŽgžc%€­ä5Fˬ@¥ ÀæùJ=±ÝbƒÇ3Ëô»¶²]ô“o¿õnÍòÌk„kúúy³íubåÜñM€åó† P*@¨ T€ PÈ(UZÒÛí°RäÔ˜Þ=ä±Ãgíß¶ìS præñä8€¨rkŠ"O¹°]ñúí}Ê•´Íe ÑÇ–ü8<ÀeÐRE~ÚÛ ÀåúI]pü \#S3¿kk–#ûà8ø›ë¿©ÌíT€ P*@¨ T€ PŒ( Þj?¯fFNÕ¤T€ P*@¨ T€ P")PnÝ"iÀs¥T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨ T€ P*@¨@±øÿw.G?ÓæIEND®B`‚manila-10.0.0/doc/source/images/rpc/flow1.svg0000664000175000017500000010610213656750227020737 0ustar zuulzuul00000000000000 Page-1 Rounded rectangle ATM switch name: control_exchange (type: topic) Sheet.3 Sheet.4 Sheet.5 Sheet.6 Sheet.7 Sheet.8 name: control_exchange(type: topic) Sheet.9 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.17 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.25 Sheet.26 key: topic key: topic Sheet.27 key: topic.host key: topic.host Sheet.28 Rectangle Topic Consumer Topic Consumer Rectangle.30 Topic Consumer Topic Consumer Sheet.31 Sheet.32 Sheet.33 Rectangle.34 Rectangle.35 Direct Publisher DirectPublisher Sheet.36 Worker (e.g. compute) Worker(e.g. compute) ATM switch.37 name: msg_id (type: direct) Sheet.38 Sheet.39 Sheet.40 Sheet.41 Sheet.42 Sheet.43 name: msg_id(type: direct) Sheet.44 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.52 key: msg_id key: msg_id Sheet.53 Sheet.54 Rectangle.57 Rectangle.56 Direct Consumer DirectConsumer Sheet.57 Invoker (e.g. api) Invoker(e.g. api) Rectangle.55 Topic Publisher Topic Publisher Sheet.59 Sheet.60 Sheet.61 RabbitMQ Node RabbitMQ Node Sheet.62 Sheet.64 rpc.call (topic.host) rpc.call(topic.host) Sheet.63 Sheet.66 Sheet.67 Sheet.68 manila-10.0.0/doc/source/images/rpc/rabt.png0000664000175000017500000012764413656750227020642 0ustar zuulzuul00000000000000‰PNG  IHDR°š6¯»sRGB®ÎégAMA± üa cHRMz&€„ú€èu0ê`:˜pœºQ< pHYsÊ&ó?¯ IDATx^í½ ¸U™ÿŸÿ H ›dWV‡‘5ƒ"Š‘‘AЈ#‚,FAAq (ÃOˆ"£ˆ@•%È&‚D–@v1 (*:êàŒ3sþõmî{9÷¤ªëTuuuu÷§Ÿç<÷vÕ©sN}ë­®÷SïYÆãƒ(€(€(€(€(€(€(€(€(€(€(€(€(€(€(€(€(€(€(€(€(€(€(0d LLÎwÚH:.ù;+Ioó¶Ù>þ¾¢Z 6€ `åm`ÿ‘g͡޳fȽœ.      @ÖM2:q•Uæ¿ì_7ßêM/*}äÈ™Ûa§]F¿Ûvþ¾¬ °lèÜvßs¯?sìñnÿú‹éùêe–ùÏ +¯|mòlÜêÅ*@@@·ÉJ+­tÓJ+¯ü'9_z¥{þ÷/‘ÐÀ°l §6ðèâçÝYç\à·¯yÍkþV]uõï¼oÆÿˆ#ˆ#ˆ `Ø6Ðï6 ±[OÙöw«®ºêG†ï‰Î£    À€+°Újk|íÈó§~wXh?N76€ `Ø€o;î<íw&LØwÀ㜠    É„½u··ý§§À°l`Ðl@<­·þ†¿Mžê[ Ï“½§g:=©}VɤµåÇç´žòÛ „>=5*G¨C­ô`×~МÎGÀ°l@6°pÑ#n­×½î©ä¡Ê2;Ý÷,¦•„WAï̈æQ~{‘Ð'ˆȂ(ÐÇ $“6ýâÆy €WÖsİlhÐ’pZ×¼Ù4PP†[e—}Í‘ûíÿÞÿ BA„À°l`l`ò:ëý!yòo5ÜOÎPPúSñË/?áw÷?òä@¿q‡Œs<°lˆ³‹/½ÒM\e•ùýùئÕ(€(€(0Ä Œ?þ˜<äÏ8=qN:¡6€ `ƒaZëO9~‹4I¶IWƒ¶ @• ,õªWýí©ç^`™yÀ°l`(màÚæ¹ '>\å³uHË`‡ôÂ7ô´؆^š…+@–¨EÙ¨Ça;Ø606 eä4œ¦ã*°Ø@“`›t5h T©“8ဂÊ9`ÇØ6PÖŽ9öx—Wl;‡(0¨ °ÀG¯ZÔ Öe, ÀvÕ›Y7)}qÅ5ôÀ>ñô n½õ7Mú­ñ¿ëÿ*çÎ<ë¼Vùæ*ËíeYlÅwÅ¡À +À½|`Q7öW§ °,ÛUfh6´«í¦n?pÒnÿV°]½7)K€èöC‰ò±±¦Ø À°]õaØ‘ñªy«ªº+…Ý‹õýü‹.mEV-Ÿþ†¶›Ö…ØòŸ|Êl§nÆMùíiG¯v«‘ŠÕ9&)»ÏôÈr²ê¢üö×aÐõ™™ØŸXà"æÁBìdl€`Ø®ºElÀ nÕý÷#GÌtoÇÞ-ŽÑÿf›SåñS˜G]‡µÍïBlåpàŒÖ±Uw[îö3  ;-)3wL¶2ÝÜÕÛ‚ÂQ ^YIV%>l_½ìö‚òÑa¶€`»ê°9+¸ ÁRÑRm³ˆ©V¿U~ÔUÛü®W«n3›aï´–C$ °ÀÀ0?0ñÜO<éÔÖ’ ¤t òîå·Þz«[uÕUc³Î_¦ŽÙ³g»)S¦D·©h~\ô˜/ùËn©¥–ÂÞ¸çºfïx×»ûnâA|¶tûœz°Ó’Šo†P ! 
¼,Ûí‡å×kcŠV±Hלl>ûØ9sæ¸ý8¨¯C÷Kbòø4GÆÔù2~Ïê}†ôê™ À6çæ£%½S`zRõŒÞUß?5°Í|0\{ýì/Ç•W€-æèT O=÷¢;þs_p‚Tƒ×iÓ¦¹›nºÉ}öÙníµ×ݾáF»‹/½’‡iMÓª¯u]å°,[ìw€-ïCT|äº)åÕ °ŸúÌ n½õ7“¶›º½{âéxövùÙ;(; ppµëœì¬Š).B¶˜£S%„̹ð7yò:£€ºÁ¸ .¸Àýò—¿M?þ¸;î¸ãÀ0šo×·ìî.z„‡i—¦U^ë:Ë`X¶Øï:á,ô.Km{ò)³[ÏYýõ³?rÄÌV4¶Îßña¬k öÖµÖr<òi€5¸cÛm[Qv¶7O¶˜£SÅCE“3 B-â*8¤>ýôÓ™é§?ý©{×»Þ5zÌøñãÝ‘3q.~ž‡* ;ÆX€-ö»ÀVæLLJR7à*?µ¬"¯8#ê™*ÈU´Vï{xñ˜cοèR7÷ªë[Û•G)Ì#_ByÒö«Líóý •©dÛÔµÙ@ÛÚâcÛÒêÕ6ÛæYç-q¾Ú§ˆ³êËj{¾PX;À°7ˆ0ÀVù;_¼,¶˜£Óɶ`óÃŽ3¾õ ƒr÷Þ{¯{æ™g¢ÒUW]å¶ÞzëÑ2&­¶šûÒigD=p;i;ÇÖg'j À°l±û€-î;d¡.À‹++íå‚jXEXP!8†¿ÇÊ'ÐU·bEfõWß}¸Ô6°å³.ÉþxZ몬¿Êûöwì=ú×ñÚî×­mJ¶Mûm›ýµèqX¯±Ö~Õé—á×¥ãunVN‘îô•u< ÀöUÄ€­ø§¾`ql1G§ì·&aòǹn¿ýöî†np¿úÕ¯J¥¯}íknÒ¤I£ «™‹5ƒqÙöq\=vP‡Î, À»ŸØ‚ŽCvö¾XAš€--b‚d¥Õw. jýnÇ…V–¾‡jûbVÀçGPU¦¶L+ŠÖcpmuYR ¸ë÷ À°le¿Åƒ_[ÌÑ) z›»Ùæ[¾1M SðY\ýã~øawÄG¸e–Yf´üéûì˲;CÞ¥€`Øb¿ële¾N߬EDÛ=ãAM‹Òê9¯ía £¥*3ܦˆ«¶¥E|cV Ù.Jkõ( ÎuŒ ×fYÖßjÛÁuQ¨H~€`Ç›‘Ü3+ûIà‚ØbŽNì±&YÚs¯½GÁRyôÑG»Å‹»gŸ}¶Ò´`Á·çž{¾RW2>V3k†ãØö’¯;vÐ ]X€-v?°•99} °‚:T»Éš¬›q˜'ܰz6øÝŒÃ.È1]ˆ‹¬µ3ìzlÝ¡c¢ÃÝ|¦u`§%eæŽÉV¦džj>Zûõ–ĘÄið'°êRâY‰%*ñÉQ€-æèäýx 5¹’&Yù1vÓ§Ow÷ÝwŸ{î¹çºš.¾øb·á†ŽÖ«Ž5Óq^›Ù_­ ôZO€`‹ÝÓle®R߬E'³ºõêw= `Eõ»Ç¬ÊT½²~bAm]M‹gÕkçd伕(«`Öœ<ë`‰ü–Šü°y·Vw÷°ÅvprÖ98MªdàºÙf›¹¹sçºçŸOf ®1xâ‰c–ÝÙ~ÇÝó²CÒµ€`Øb¿ële~F߬žíêÖ+x O€j݃Óf*Në‰MëBìû![÷bËãOeÛÊ¬Ž™i€íà~ ;ø‘W›Q€íàF©àP¶˜£“°š[C֞蘴zÉ¢ìXû®«sÊ[N¨ϱ^¬¬|U· ÀvhKl¤€l1G'íG{åUVi⪫®êŽ9æ÷ÄO¸^x¡QéüóÏw“'OÙ 7Ú¸'¨n<ô(s¬ °,[ìw€tò³ ÀòL)vÿT¥W/6ß´#s°l¤©de`#`;ÿ¡6€=ýôÓÝ¡‡êÖYg÷å/¹Q+ ~æ™gÜ'>ñ‰1Li¦d͘\Õˆr:·§N5`X¶Ø}ÀF: ùÙØ!™k¡ÓçTÚñ,“)•šLÉÆ¤Öý—1°ùO„næ`‹9:í"°_ýêWÝí·ßæ·Ï>û¸í¶ÛÎýèG?r¿ýío•zè¡VûF- ÕÌÉŒíܺñP/Z& À°Åîe¶2/€`K¿`Xöåñ¯J|r`‹9:1;þ|wçwºóÎ;Ïm¾ùæîŸþéŸÜ£>ê~÷»ß5*]ýõnÊ”)¯L>•Ì ¬™”‹ù;·¡*5`X¶Ø= ÀVæ*°li€­`úÓŸºË/¿<3UÑüìg?ë¾öµ¯õ¤¶Ó€leŽR°Å"{×]w¹… ºý×uk®¹¦;þøã[Ëéüþ÷¿oT:í´ÓÜjÞò?šQY3+W U”Õ¹ÅjÀ°l±û €-å>¤À°¥}¶f€=øàƒ[cÞ,éøßœUBìÖ[oívÛm·Jˬ²}EË`+{p”*€-æè”Ø{î¹§‘ýð‡?ì^ÿú×»ï|ç;îÅ_lTzòÉ'ÝÌ™3ÇŒÝÿ€ƒÜý¤41Ù˜ÌÃZég–®ÏŒb6Ýz° Ø[o½Õ}ìck¥SN9e еhíøÃÑ|:Æ\u»ûå^pÁ}Ðl¥?ô… `;(Ø,Ä6‰“µ.ÄŠÀÀÞwß}îþûïwsæÌq›l²‰;äCÜ/ùËFA¬ ú²Ë.sm´Ñ(ÈNž¼Ž›sá%8}ðÆ€m°7ß|³‹Igœq†K~£òª¼¢ùËsøá‡»7Þ8ºMEó«ME9î¸ãœzkô£Ãlm` »uÀöÁs§Šû¿W«~ï3«²è~]F§]VÀªè¬ºÿúÝŽ}@Õ~Û§®ÂÖÙÏ£íJµ‚YåÓ¶÷¼ç=­ÿ«î¶ÜÍ1[Õ]S®¶7û³ŸýÌ=øàƒîóŸÿ¼[k­µÜ©§žêþøÇ?6.|òÉn„ £ »ë[vw·-XÔ×ÎjÚ&—ÀfßÓoO–Úq§£Òö;ìè6J`16ÿÔ7oŸ¼ô‰Ï¯r‹³ívSÝ&oxct›¦$ãÙ7Þx“èüjS™cN9ýŒ¾þM`Ëù5À°eMmFràœ¼ƒ§UÙm`ÐVš–Ú&è4@4U´FÛô7̬«•¡cú) ÀæÝZÝÝÀv`™}ÝÊ+;­kX؇~Ø)R;cÆŒVDöÊ+¯tÿñÿѨôøã»~ðƒ£«·¤‡vËî4Ô©`;¿§›ü‚‚¶U}Øîú–À6ôYSõoQ¯"°lœYXEDÓÆÆZ´ÕXu/ö#ž\?âꬺ낫ÞnFI»Yv—vz¢‹Þ¼ðÉQ€íÜÊêB¬ñ®;&Ëèì¶Â n÷e–q'œpB*Àêþzì±Çœ–µÙyçÝ{ìá´VëŸþô§F%ÙÝi§ÆŒeÙÎí§j'€mÞ5©úS^µ×€m´«À°e tjràqy°mVPš°ávå 6Ìã¬À€Í3Mö·S€íÜ vÞ¼yîð$b¹ök^ã®K^0%7©{ßrËå¬"O<ñDk|ìúë¯ï>ùÉOºgŸ}ÖýùÏnTúÖ·¾5ºìÎøñã‰Ä6̹`;¿§ÄáÒ€m´ŸÀ6ìÓ­ßÇ.D`£ €-°F`Ø(Û$SØÎµ`·L–Ê9*»—Fàµ(Àþâ¿hõ¨8öØc[ «.ÈùË_“þû¿ÿ»Ú#·pÑ#}=þ­[ã^• Àv~O÷êÚQoo®Ûh7 €`»j l€µ®¾áÌÃYc`ýñ¬yc`Ãý:–1°]µõ*€íÜa ö€w½Ë¾ÔR­È«¥"XìâÅ‹ÝSO=å4ƒñAä¦Nê~ò“Ÿ¸ÿüÏÿìYú¯ÿú/÷?ÿó?î÷¿ÿ}ë7€íÜvº,l3¯K7®5eVs­ØF»5,ÛU`Û¬ ÒfÖ_u 'cR›uXû,¶µ›…XùäHúåúCusüjewi lW} `;w€Òº¿nÅÝs>À&c`?üá·k]ˆ}€Õ;Ï<óŒ»úê«[¿ûí·Ÿ{òÉ'k…Ø—^z©®Š¼Þ}÷Ý­¶|ãß`êT°ÝÓs¯ºÞ}ê3'Ы ¡öÝ h`íÕÌZuÕI/m¾Õ›^$ ¶t!îá:°š8ÃêCž"°‚K-¥“¶ÔU>[n'\VÇ…Çj)•«”¶¾l Ù­2ØÞ>8†`å¬vÚ6m§yd«±E`³ï\~y·ó[´fög!¶IœÚì¯~õ+÷ÜsϹ³Î:Ë­½öڭ߀_|Ñ .»•þú׿º¿ýíoîÿþïÿœ úG?ú‘ÓÄTlg€Ô Ü/€íìú^×[ƒØœ1ðçXä>`+óC&&%å.[R°6•© ið5ت mT’làœ6‰S·À±)å°•Ü¥ v€¼n¿ãÎAlÀª»ïW_Ý=<…½7ù+ˆÕ¤N›'ëªrÀ­e²´ŒN€}þùçÝÓO?í>ñ‰O¸ 7ÜÐ]vÙeN Yu2pUwa½D»ùæ›Ø>‰H°ý°·/¼¿˜ú[Äbóek#li"Û§îÀ¾­)ïÛß±·óÛ¨6ù0luøÇ…åêø^ÙpÝõ°•ù!leRRP] °lÀØ´:º°3ßY—ñ÷s=ì+ÎnYˆm°çwžÛf…Ƭû±Cu;­´’KúY¹£˜ýtÒ¸,À b-ýÛ¿ý››‹º2¶3hM9mÄ $}˜| õWy Nmâ'Ál8†V)ˆ´zôÖ8[A®_¯žV¶Õmmñ!Ö@Tu¨m:^ÿûõ§AºÎS`kõ ‚ëÉ^ÕÀVæÅ°•IIA( 6W À–Š `/Hœ7î[‰‘ÉŽªH*«ÊòªhS#ËXn¹ån9ò£G·Òõî¬s.p›m¾…ûáo‹vÞÚ¬ºö¾eÊ·Úßý;á„Fg!ž3gŽÛ"[EaÕ•ø(ýíoë¤õYMy“M6q?øÁZÕ¬$pÕGãg5³± ºöâK¯tÇî Ñ׫WNr]õí f î~‚Ì0) £žKD[ ê÷Ô®yXÎ}/n;¾5« ±Êñ£±*?Œ¦Zû-Úk­6 
‡›vNuÙkê`ó\üèýl´Td¬AI¹“аli€Ý5y°®õòÀÿ›+J*«ÊòªjWãÊYj©¥~³Á†·&2"½¬Á¤ÕVsÛï°s4åìÅ_ì–IÖ…õVãKwK^Þ\1…=þ“Ÿ, °›•î¸ã·ë®»º·½ímîç?ÿùˆµˆ«Àö׿þµÓR=še¸ŸVÑr9kO^§ ºnMp„›Ð¶s€•M CPÕõµ®»Õ´hªQõÐ"´El# `³ 6Üž5‹²Ñaí楃ØÊP€­LJ ª@¶iÝn©=]êB<+1|%>9 Ð…øg÷ÑÅÏ·@èÄ“N-Cy{Ï=÷¸Cÿå_Ü\0fØË/¿¼…ÕÙM_÷ºB«±®~z÷ž{º+®¸bÌ6íÿîw¿Ûš­ø¸ãŽku5VWa¥?ÿùÏ-xÕò<ý°O=÷b+B¾ë[v]ƒV À.9£ªò"ÀDÞW44¸Sd3Œ¤À*â*pô“?c°UWÜ¢ÝqØÎ^Bµg¶2W €­LJ ª@(€šT”ªù$aªi·$ŽÉ­k­U*ª7H€7èçÀVsÏ”-€}ÙQºmÁ"·õ”mÇtû‹u‚bV“"iÖàû5ÆÔÖštõM~ïÜÞ Èj=guV4tñâÅ­É”—Ï<óL 4µ¬&qx†iÇÍ6s›'ã_Óö© M 5mÚ´VwaëvÜ/«kóþ|ÈMHºZ°†‰À¾âðí ~ü褭—êwÇínkÝ}õ×ïNó»’°êvœÖ¥¹Š.ÄYÑæ˜¶B¶¬±ÄqleRRP D¬ê_Ae­"`]ɃϘ†`ŒŠÊ õ¿º‡YM¾²Ûn»®¬º+¯¢¹ÚïGWCÈU×aåñ»«.‹[ý>Àª £<þ~†>øàÑ2T^¸OÛçíÊ(×EóÒ…¸¾;9­&¶sÇ¿S€Õ=£et>ºì²îë „¸Ï>£ûøã»6Þ¸§>¸*Ÿ´ÉšQ£û™=÷Üs[?YºóÎ;[ëo³ÿûeb9Ä÷?òdkl¬º}ûÝŒØÎí¸Ÿ€£ÚÚ«n¸lñ{€­Ì`+“’‚*P€Àü {>¨…QƒS'€µcüq«!À†ß•Ê/86¶:Už"Ú/@UÒwýõÛeଲ­ «îp¢'ƒX+SëŸ[§0Zæx¶‚[´ƒ"ØÎº*ö±Çs‡Î˜áþ5ÕÓ“ôÁþg÷ù$*{T³!¼Î>餄þüç?w³Ž?¾²Ç.·Ü˜|ŠÂžsÎ9£°j+XMKZºGmx衇ZËüh¦sA´ŽÓo×Í7ßì~ô£¹ë®»®5ÑÔ7¾ñÑî¼ =<&Ña °þ´¶'bì¡“¯É<¬|P  À ê, ò|P,°á,Ä*Ë" y«È«¢ªÙpb§´1°á6u-¶®Ï6£²Ej­<‹À–ϪŽ`{{“°;\UìwÜÑŠ¾¾˜Ü÷–Yf̤M‚XE^ ^ýHª@v³¤{pâ5ŒB¬Ea-Ÿ¬¢ºi €íÜ€4ÄúÓØÞú!ÔŽ]R `Õm`fU èå2:‚>E*˜iKÚt °6V6`ˆê:l ê±X‹®úKÙÿ~6ì]˜Æ–ÀVu×”+€íÜáŠXE/ï½÷^§ ›áüÙÏ~æ|ðA§ett¯(ú©ñ®ï¿+{r0ÆUðzú¿˜=UDuû7¾q À x…ýæ7¿Ù:Fp¬ß¶Å‹§&¶s;^ÐèO`Ëù… W `§UÙm ×ÛnýÖxÌš…ØŸ *` •×À:kâ°]íº ° ¿åjl۹Õ°ÇÌœéVL"ªÿïÿý¿\€7ož›œ¬õv¼fEOµ=XEc7J¢¹gΞÝ:îöÛooý†T³R?v!Ö2HZËW³Û2¢á0Ú[£ÓAU(PŸ¬M´q¤¹—i4)º©±d6{±º+ªªïykù”Weøãgc Ú–ÿñËQY~Ty€»OO´–áòÉQ€íÜaÍØóÎ;ϽaÍ5ÝQãÇ»½“t '䬢´®²ÊèÄLмžö…/¸'Ÿ|²m2€Õ„NZSVßüãsÛm·µ~CÔÝ8+õ ÀjYO:Õiæaý¦*1 qçv<ŒàÂ9c7²W R©ÉY—wfCZ÷\ûN˜dPk@)Ÿœ,ù<€µ®Ã~]þ,ÄáMiPë×oåø]‘`óì•ý# °;p!Àª»ð{öØÃm“táÕXÔÖøÕd’¥<€U·âØ|óÑ®Àþ’:Y]m»€u¿¥—nÁïgœ±D7a›DNÚ.5u§ko˜ç9ì·Á†§® ÀvnÇÀ« °¸D(0¼ À¶ë>ìGbÙT”Óïæëï·r4a’à2œˆÉ¢°iÇøeêØ°MŠÈÆlS9ªWíd‡Çh_Z»bǯV‘¯Kc`‡÷N,xælçNk°ëMšäN_j©±Ý€#öÀ½ör_K"µÖ}XÑTÍ0¬q²íºþjß¾{îé>òþ÷»E‹¥æýÉO~ÒŠÀ>úè£mSSÖ_"gÒ¤ÕR¡Õ¢¯D`;·áaÎÛ![Ði ; ˜°U@e<2f=Úvz°½ý%`;wàB€=à]ï* °Ÿ8ôP÷É$bkðzH2v~°Zçõ3Ççžx≎’ÆÖ†½6ÒîË^¬ÀõÈ™Ç䫯üÿr7j?i}\à¬ó{ ‡CC"°½õC¨z©›D;×â°½¼mÇ`;wÐB€½æškÜÚ €¾ä-ksHY=ô_þ%u ì¿'-½s„Qx=)9vêV[µfVäM_÷º¶cWÛkµ}·ÜrK `¨y© ëÀjmYsÝzʶQ0KâWìXθðÕù½†Ã¡!Û[?„ÚQ — °l)€`{yÛ°U8¨i“8…QX­íºo¤û¿ímnáÂ…£Ëè|÷»ßu;­´Ò(ì^œŒcÝ{ç[Kì¬@­º d¿þõ¯w±°*7/5`ýërÿ#O¶fÞs¯wºe–ÏØdÒ™vv ÀtUñÛE/Û [™21)iNe¥Q Ô  À°5ÜhUWA¶sg7 `¯½öÚ%¢°êüïI$vÓ5ÖpW_}µ»á†Ü”d¼¬ UûÔexû6r<ð@k¢¥ì·_kMX‹ÂæMÀÔnÿÍ7ßÜŠÀj¢¨¼Ô4€õ쬱±D`‰Àcÿ– «†ležÅºII‹++‚P X€­áF«º ¶s§/k÷NŸ>:ök ˆî›t ¬>œ¤Í“èêÖI×`›¥8yâ»ÍV[ÍÝ}÷Ý£³_yå•n‹‘q±Û®¸¢»ä’KÚÎ °‚Ó˜¤‰âîºë.7þüÖÒ[`Í®|Ýu×µàûßøÆh$TÝ}{åøÎ½êúÖìÄûpPÏÚЫsϪ—lç÷tÓ®)íéî5`+ó,ØÊ¤¤ º`X¶®»­ÂzØÎ£,€ìÙXX-£³G²´ŽºîÕ¯vêR,UäUÿoóÚ×¶à0œ%øï×[¯™½"I»o·]î,ÂY³ ßtÓM­¬f)ŽIý°8öKÚ/Ûù=] —†leN[™”T—,Û$€‘þ̺Œ¿Ÿë`;wÔ²VÌ÷¾ûÝ­(¬­«m9ø`7yùåÝw(ÕDOïLÖn=çk_K½fÍšå4”@WÑØ+®¸¢Ô}öãÿ¸°÷Þ{oT`;·‹^AÛ¿×®W63ìõ°•y1leRRP ˆMgç•À°¥ë.Mâ4+1X%>9 °;»íVë,+ ûOÉr8'œpÂè,ÄšTéÉdM«%púéd­g­õ𖼓’è­@7™ÃýcÅÍÊÛnû7ÞØXilê‡.ÄÃîx§?Ûù=] —†le®[™”T fåN*À°lw[ÝE°;j°Ÿþô§Ýí·ßÞ7zçw¶ÆjÆá÷ÙÇÿÿþ¿1«‰”’úÀZÑvKÛì³ûî-xÄ*rû“Ÿü$w)œ°<_ÀjŒml`;·^€۟׭¶BÌB\±ÏÀV,(Åu¤Ëú®Å×wÕŒlG7gǰ;»;íò£í¸ãŽnîܹcV³ /“LâäG`°yËÙØþo}ë[n§‘Éœ4+ñû÷Ý7úX+CmÀ ¨cÛ¹môØþ¼n½°ê`;v"ÆÐ-€UŒ4øÈ~ªüDìÔ¤Ædž‘j>7'é–$ÚpëZk•ŠêÅÂùº¦±Ú°ÕÜ3eK`;wvµNéÞÓ÷…ØW%ù¾÷½ÏÝvÛm-X¼çž{Ü'fÎt\pÁhâ"ÛZ6Y+V3+ ;)鎬¯ÀoÝi§ÖÌÀYPl+(-’š> 18“8aÿ† »†t!.ëA,q\7v檫Nzió­Þô"i°5XúÕ¯þKe–ørAQ«Œã«ª€í=XÆh§ùتîšrå°Õ9ZÒe“7¼qd'$3ŸtÒI-€Uwau¾ï¾ûZËØä­ÅîÿØ‘Gºc“Ù‹5™ÓQ À{ôÑcʘºÉ&nƒUWmÕ•Vöõ×_ߊÀ |‹$¶:û¨ ˆÀöß5«Ë6¨'Ý6ØrþCÊQÝØYº>Øîàÿ®%ö$ÿ©ÊO4ÀVV) ÀvhL³’ã•øä(ÀVÿPøÒig¸‰IÄtäÇØ½á opçŸþ€}à\‘¤uX'ÌF¬µd7œ8Ñ]tÑE£e`·L"¿Ç~üã©åÀÞqÇ®hjò:°85D`±êÆMS¶2W €ý=÷cÙߨŸ¬¸¢»ûˆ#H¬º‰ß’8äÉK‹*€|°ÝyØ<ºøywÈaGŒB¬~˜÷Þ{o§ål-“vÞb wÝÈÚ±êN¼cÉÇuT«¬í6Þ¸µ,ÏÉ6M$–¯ÙÕ¾¢ €íŽ”}ÈçG¶¿®WÞõd÷¯'é0äg`ØÒÑòAØ©‚Òðhìqù¿Ñ9ØH©Øî:G·-XävÚe×Q]6éþûÑ~´5U݉‹¤Ù³g»m’%yîX‡=2ù¾ÛÖ[»7Nž¬—@N“<½ïÿq‰r¯»îºÀ FË$E€5“±Ê¹úê«[ãm-¼pÑ#¥X8çÕÛ[½¦Øé`k ÀF: ùÙX¶´?0«{$×ÙIº™4\‘8ß›äÿ6Fç`#¥`ëqÌ.¾ôJ·Îº¯…¾I“&9é¢E‹ %óƤ×Â;“åt 
dç.½´›¤ù#“d_‰Êj’§d:x÷Öm¶SÞµ×^ÛØyóæ•Jl}vkOYùØìkõáÃŽt;î´sTÚ~‡ÝvÛMÊ«2‹æ/sÌÔ7oï¦l³]t›ŠæW›ÊóË®,í´vjïUÀF: ùÙX¶ôoÁÀD`óïr Û© °õƒ‰–Ý9ð}3>ö½ï}o+2Zb•×@vß$êª.ÅX¥Í“íw¿ûÝÑò `¢e]ˆë·•2Î9›}ä ÅÚÿg$“±%“¥u+¿Ê-ZÇá‡î6NƻǶ©h~•[ô˜ãŽ;ÎíÀA¥Ö26^õ1l§žÄèñ“ÿ’u•~˜…xH €­ô¾¡°>U€lä…`{%×Þ0Ïm±å›FAv¹å–sG'Ëãh)œ¢i«õÖk5€U÷bu5¶r4nUXM"U6°½³•";Û`]äGcÅWM–¦Šýͯr‹£!S¦L‰mRk˜B‘ü*¸è1sæÌ`#Ÿ·d+¥ À–2œä Iš“w°º ø @`#¯Û{(9ëœ ÜÊ+¯2 ²ë¯¿¾;ûì³ A¬f!öV «q²§vZ«Øo¼Ñ•Mlïm%dXÖ·(Œ°‘N²Õ© À–µ·(€–”^u·² æ8`#m€m”h|¬Àcéd2¦äÒµÒÎ;ï쮺ê*w÷Ýwç¦m7Úh Àž—ÌF¼Á„ îÌd¬¬ŽW9ŠÀ B;I6 ñ¾ðf!n¨SÀ°l±ßuºG: ½ÉÀ6ôYóBµHžß§J+`«T“²jQ`zR‹ —OŽl1G§Èq™¼Z’f·í5 ‡¯J@tÆŒ­µ[.\˜™ `Õux›d<컦MsZûÕŽ1€½þúë]'é²Ë.s{î¹ç+³)¯¶š|—9WŽéŽí°,[ìÞ`í*°lYš˜»D'زòr ôP¶˜£Stͽêz·ÉÞ8 Š’hê¿øÅÖú±iI»G½}ÚkºóÎ;o‰Cï{xñ(ܦ=ß´_Û-Ÿ}÷ó¦¬åï0wû9 ÀV|çP ²l1G§Û?àEËW·Þ}÷;`ÌøØýöÛ¯µTŽ–Þ±¤åo° ë§9sæ¸vÚiôøñÉ8×#gã4î¶h[Èß [`X¶Ø½ÀVæå°#ãUÛ¬"³ên¬<ÖíX]‚í*ðÕ¾œ1¦K²ŸGðv!¶r­L•ÑOÏe¶²û‚P`ð`‹9:M}¨›ï[¾iD—[n9wÔQG¹Ûn»­•.½ôÒÀj,¬Ò÷¾÷='Ðõǹîú–ÝYgf{`X¶Øï:[™¯Àæ¬àT€éGO T}€Ìi»m ó„«¥÷tŒ¹Ú¦ú,ií`+»)¨˜‘´}f·¿¶¦°Å¦? Î:ç‚d ë*£ ûú׿ÞqÆ£«Hì'?ùI·ZÒExäaá6Ühcwñ¥WöÕƒ®éס—í`X¶Øï:[™ËÀæ¬àÕSë&¬íŸ¹þsD]‚ýˆk°*³ß"®ás² ;-)svžu+ÓÍy™Ø5)0+©G‰OŽl1G§—`[·ºÿªðÒK/ýÊdL[oí&Mšä6Ûl³Ñm&Ll£-—|ýa+, À»WØÊ\%6`ÓÆÆ `m{Àêù«<'Ÿ2»õÌVð À.aÇ fåN*ÀVvÿSP °‘"°Å~‚¸…‹q{¼m¯1ãcGÞpº÷àC,‹3Ý…Óì€`Øb¿ël¤ÃŸ €í` NÛ¬º °ù†8’€–ŠŒMQ€¼l1G§ŸÖÚªÞë×[¿²[OÙÖÝ8oQ×…W]s€`‹ý®°‘C~66`%}û;öó >ÿ¢K[Ïg›98mœ¬­ûjy¬MàÔ>еyä{¾•ÅçˆØ­’ò®ˆ/“œ(ÐUØHyØbŽN??hûp\k€`‹Ýël¤ÃŸ €ÍXƒUY„ LÃq±XÁœ‹Õ1úßß`µ|Ž•ã—ÛOËéô `óÍš(PŸl¤Öl1GD¯¦Û À°Å~§ØH‡!?;°Të>3¤¶„NÚr;ÜZ¾pâ'©öù€ªÞV~¹þŒÄMn©}lþ FŽÁW€¼Æl1G§´q¸¯) À°Å~ØH‡!?[§ûõ¤Š·ÕÌÒõ¦çZÖØA׀ͿÁÈ1ø °‘×€-æè ú„óë{`X¶Ø} ÀF: ùÙ:ØñIŽèC,;Às6ø>›ƒ‘cð`#¯1[ÌÑðЫé6À°l±ß)6ÒaÈÏÖ)Àª†b‡`Õõ8œè©éÏ*ÚÀæß`ä|ØÈk Àstªø‘¦ 4ï¦ °,[ì7€tò³U°!ÄÀvóùÐ䲨üŒƒ¯yØbŽN“üi×’etÚÛ€¤ØÏ­·ÞêV]uÕØì®h~\ô˜Ù³g»)S¦D·©h~\ô˜9sæ¸ý8¨¯Ç(°‘C~¶ªÖ‡ØoÛØa}–°ù79_6ò°@ϰ>,õ¼åì­=y·ýŽ;tzÃ7+|~l>ûØ/ùËn5Ö,|-šdŸº_ÛXœ¤›Iipk¢ß‹j8/)ë¯ïØûÝ}ý‚dPŸ5UŸ鸓m `#//;øûÄÓ/´Ö›«úaÓIyMkO'çÒ´c]ü¼Ór ƒžæ\ø½ÂçÀV°ŠÀîú–Ý _‹&Ùç~ÿ| ö[IšFj”º&OñÑ£õüjÚoþ ´§W«n‚>(ÐØÈ«À>À6mJþpöAyørÍ¿—Øî,]ˆ#¸d‹U`b’Q³·|9º7ÿ·µŠç_¯Vo­Ôõ‚ 4A6ò*°Íz0Ü÷ðâÖìƒú[ÅAe°ÍºÆU]WÊ)~]X6í¾a l¤ÃPO¶M’jæ'iúHu,Ë蔵¼Ésò`ób è‡O†Ë'G¶¸ÜMpèFt€mÖ5î¦ýP6“8ùˆZt<+“8á2ôX­ÿ*xÄÚ€`ËšåÔäÀãò`ób? 4Pºöü‹.u8ÃݾðþV„q»©Û·"ƒ¡Ã­õÏ´OIùÃH¤ŽU:Vyì»Eí¸4Gþ#GÌ-[õ„y/Ú¬^Õ–¡1]vnú{æYç)+OíW¹‰™Œ¶SÛÔ;Vm ÏQI+Ó-lWY€Uv>víL›´óó¯…òùzøÇ¤k¿µ;íZK?_Oå÷¯®§ÎÕtÓù‡šûvaúè¯éªý²'k‡êó÷„ƒÿº·b?Eg.šŸYˆ›c[D`ãœìŸ´dbКZVÏ=#Ö[ƒÑ¤g Ï‚î߯ú}î…%°½P:Q CêXƒ(=ô0PÒ•ÿ`P„ Ä"zøog0nÓ÷ppƒ]Á“Ê·vXÙUiPíC›Ž4hÛdD*ÓÎGÛìüüòò4Ð9ûº(¿àÊïlí–N_Ú¦ï:^í Ï¿ ÀšF*Oç£òU®½P°2í» i¥¶H»žþ~ÓÚÚmeù×Zyt ³êW=>ší¨N@ €ýënZùúIwåQ9a8.Ýw\êÒ€ÍÇ÷¢Q[–Ñéð!Ìáy Ô°þsKÏ%óE˜x°ûÏ6ïV`?  
À¨u¬ÿ4b²œWåõ¡Iù zBPòAØàËòÔùõ„y  Ã6i»~\Ã( å XÛÃó3Pk§AVâ,°2@óÛ+€S[Cø- :ŸæÃmöR@šhŸ¯Ú¾D0صsô#Åjsx­Ãöj¿aµ·ä~¾0:­c|½­ÍþK‰°!ŒѼÝwr:Ñ€`Óì‡l£¢ÚÖ^ŠÆüÆèy§gKšÏ gõæ2nçç„p¬2ÃÞ`*3ì‘fu[[ü:Ò¶¥íO›oÃÊÕ¾:Á€mô}HãP Y Ô °þhVdÐ~ð-ªéÿ€†•|¾aäTõY²Hg°†›vQÌ,è ·§•nk°i°Ÿšú E#°ëÒÈ×,ŒìÚÛê4xÏj—Ú–uÍôðJ{›=¤t¡ãGdÓW럧¾·{±ãܧ™ À°l³|ˆÖÔ°öÜK†ÚŒ=#ì%ªþú¾„¾Û%늬m>”ÚK[+Ñš6Ä*|¶Y4ëeõXï%¿Þ°ý~»Ã €ùöâßzÕñL`#î²  ¼¬@“ÖÿA·ëjʬýXûbÿÇþ(À`†+ Âñ¡ººFáC¿S€Õ›_ÿ!kcƒý¨pÚÛòp›9ÖÅ8Œ”‡¶‘6F:ÖVÈ×LxÕu`X¶«ÞÏĤôÜY_ ¶ €µæy/µ­×“Á¨þ†={ìÙeÏ{©>¿üžc~$4`õ{fÉÔƒV+×êõŸgaï(Ëã?휬gÝzÆ°ï ²£À0+Ð$€µ·–ö㘩+°Ut Më†líÌê^vSîV6œ`BBédŸ¢Øðø¬‡•½aNÓ7-*Û×:«Ì"kç!è6@ »Kµkg·Ò”Û;À`X¶«ϺIé‹+®¡€yNÚ3%|akÏz{¾Ä¼\ {ùv °aO¡¬zýçfØ+Iõ†¾W¯ž‹lÅwÅ¡À +Ð$€ 4UD`íÍjø61„ýÈg½q´—ÿ YD2|›öV6`}èŠénN®¤cl›ó`NëbŽ ÇÏ$Û›_¤­ þ[jÿ­px]C€ Ä6ζÀú]¡u|ÚX¤°vŽy³Q¡½ƒÐN´`X¶«M߬ùí"°±Ã…bVeY¤6œ…¿[kí·«ÿ×Þ·ëAÕÉïoÞ±lWïM GÁR IkãRíÇߺv:Ö‡:+;Ïi?ìáÄEþnØ>?ÊiÝp´Íïúš¡ipéo]ÚAh8žEmð5+°áùXöpß[„ÚŸü!l—= cÆÀ¦u'/3Ö«”u]®ÊuŽùÉ{ ³¿ZP`X¶«>LßlÌØ<€õŸz®„QÕp›öë»=ƒ,·6ob&¶«÷…£@[f&{•øä(PÀ*¢þh¦m³5^íA£ïat,íÇ8m›ÊÔÖf ™¼uåW:>+R«íÆ´²b5P=*ǯ#íX¿ýª/+‚œwl;HòËõ#ÖYš‡×Æ–ö ßjÇ\3ÕçkĮ̂ïa¹þ6ëÖå·É¢ÅáWµGÛtLÞ( ²Z¨¬SO€`»êõ-ÀZWÚ4ÈÔsÞ#ip®8 Ÿß~¯¤´ãíåx»žU±]ˆÛ½¨Wù°Ó]gçYü´$ÃÍy™Ø5)0+©G‰OC¶N'•ºú0ª¼vyöÖ"ËuONQåyQVyû`X¶«nQ_¬=,ªgˆÁ©½tµïö¢Zõ».ñ–Uï Úû±îÌö×ÀÔ/3«÷VZ½>°ZûÕ^V£mþ‹ÛØɵÉT €íêo…T€¬®,Nwy§»Jílr$ÿèÿo3VYg¯ÊJ›É¸W“SôJê{ß°,é”ËÖ×k½¬ì9iCÂÙ÷„þГp¿ŽŸ¥á6¿ ÁlØóÇ@Sõ¨|Á¬_¦¾‡óT蘴zÃGaûÃcTn/fâ×ïs9³Ë< €­XPŠë¾l¤Æl3À².аÅÍíÍkøw»Ïúç\—ÎÔÓÌû €`ØHç \¶¾X~»{÷ÛÝ+€Ý*©øŠröÎQ(P¹l¤¤lï~¬yP¢=6P¯ °,é”ËÀ&cHù]/§A¯¶œ©s tG6RW¶Ü-(tÃúÏX€tÊe`ØÒÀ–»é8j°`#¯'ÛN8àÄ5ÃÊÙ À°‘ÎA¹l, À–»w8 Z °‘†À–s„tÃúÏX€tÊe`X¶Ü½ÃQ(À±¶ÿœpÀ‰k† ”³€`‹x…ó°,[ø¶áxE"°‘ÖÀ¶w„5-½¿¾àÒ.oÁtôŒ×­:× €`ØHç \6€`ËÝ;…D`‹Ø›íkQ­7ˆKË”…!­I®=—U–i—~eëâ¸Î YvÊ”)ùÔêå˜={¶+rÌœ9sÜþÔ×÷ù1ǯõ'õâ›Og °léß½`ìÌü–8zF²eNÅeR tU"°‘ò°Ù …¼C¼Å\/à¡Ûí“&E8—~D°Ñ^Ü iuN˜0QRtZj©WEçU¹Eó—=¦È9Ô‘÷ÃŽ,í´6Á6ØH‡!?ÛÄ$ËÍù٠嘥ëÓ;¡ Ý}–°…î 2¨lä…`³mT7Xÿ¡¥mJM}5­}8£ÑÀßÔëH»ºë(¡/úú6ÀF: ½ÉÀIT·W«n‚>(ÐØÈ«Àf;rŠª¬9:únÑ ýoÑYAc8ÖÓºÏ>ñô ­.ÈÊ{ÃMw´à׎ #“áþ´H¯êñÛä;aYí³<‚I«[Ã6[U¯å >-Âë—«óó»\§E±àÀšdl¤ÃЛl,[Öòf$æv!žÖ…neÌq(ÀFÚ›îH &«úkŽ–¦ Mÿ¨ ò®Ƃ:‹ÔZYÖVß ú4ÎVåÛx[F}Ç›Z9Y#eµOå[dÖÚ,°ÔùùmЪòUVZž0ÂkçqþE—¶ÎAÇúí³63Ž`i°Ðì‘l¤“Ðûl,[Ö زÊq\ϘžÔ¬Ä'G6`Ó& Jë¢+8 †ÑZƒCƒ8A^15ØÓß°¾´I| NsÂÓÚ§rÔ¾"Mõ»C«þ¼.Óaùyc\Ó^À6€ 4ɈÀ6ÚU`ز:59ðмƒ‰Àæ)Ä~h lº#™”~$3t¾vŠFj{žYçc]‹ íoÎsúÒ6ë\ÂíiÝŠÃ<~ù1p“'ïœØì`Ø@7m€m sòJ“X¶« ÀvU^ Gî(À¦;†YKÀdM’äõ ljƬuMV~?qÜXý"öB^ìxɰÝñ/**uÖö;îܺF¤ÁÖ@/ì+²™BŰ…ä"3 4C6~ l»¬ö)‚i ëw'ŽéBœ6޶Œc™°êʬ¶Ùx[+·Ó.Ä6n·Ý29ŒÊØ1Ç`7uÚÛ $£[%Ûg‘†Bƒ™½°D¶ªS' t¨? ±*MÍ.œ„þìÄþäOz³èÏjNâdãTý5VU¾?nÖ&vj·kVûT·€ÕÚl4ùåÛ,ÆV¾ÀTíö5dƒ`¿\ëJ­ógb@¤NÑ Ý›uÖÙouå£ï·ó©¢½l‡Ž‡£@+ÀöñÅ£éëÛ`ÃIlIœ4P5õNΕE! .³–ѱH©¿Ô_¿ÕÝ`³Ú.Ñ“6ù’¶ÙìɱËè¨>i ýïGŸýÙ˜«p4) ngiã¸ÍfôÒ(œ.ö“ž~ÿ°•ù@“’r—-©¬6 B `+‘"P nØl h×­7Œ¾ÊL›ØX‹|äE@Tv§ËΤµOmiW¶ïü­_å¦Õiݪclò¨Ø@V¿ዯNÎ)L¸ª^%;nÚÛ‹.BH•å_ŸÒ šU—’-Ç•vM `m¼»µWÇøùóÚjZ©>•) Ú‚o+SíÒÿeÇëÖi“le®[™”TÓ“2fç•Àæ)Äþ:˜•T¦Ä€eFÒ!Y½N‡—ºš ýwQ>h¦Íˆmk×3Œš†ûiy=2Òº[Ýþ Éúßïõ`pÎDöŒǪ ¨ý‰ÓÚÙfZL•éÏ$®ïíÚêÖIâ¡u=Ø¡½ô<ñI«r'`y톶Qlä¥'ÛLG@âº`ÅmÀº­êo%µH¥Ô’"‡@릂™Ž » ç]—4€ÍëoÏŸvÃõáÐ@=¯M~”WçëkÖÓV‹ÀJŸ4x`#Àƒ— €¼kÚÏgÀöóÕÒ¶°‘€-î$Ç:‹äC[l ^°®±ú޽´.»>ÀÚÿa]¬AZltÓâfuÉ í!`ý±´MX‡ô±nÚ:¬.ù¼lìà]Ó~>£(€Ý*9Ã+úù,iû@)ÀF^N¶^ Aol {6`à$U¤Ñ»&±òÙR5áøÓ˜k—ÁTy!Ôª,´³"°! 
úßÓºEÇt!ni«_‡u…ö#Õläxð²°ƒwMûùŒ¢¶ŸO¶žlä5`»çLÇ8»äAl :H›ÄÉÆvÔ†Õp^[þ)-Škc`ýñ¡áõKå4 Ûc+еò¨q¿Íþ9Z”Ø—+h7p×__$‡ÛbÚÎŒŽNÓ®ÉvÎ$N‘C~66_#rÔ§[ŸÖÔT‘l¤luÎs“4ÚÆu#a4Ôïòj]ŠÓ–ÎQ9ÇÔòf!–Æ‚F›V·¿ÝŸÝXÇùkcy•?ƒž£ãµú´ß€7m¦c•Ù.«¶¨ÎvmµÈ¯_§¯•ߦ4}›f‹l¤ÃŸ €Í׈õ)ÀÖ§55U¤)$ Ø4Í™¤=ØdY˜…³ç†“9 ´e´%`ÒêÊêFk ™×>«#`}W½6ÆÖ/Ç„Z¤7míèðU†•›]ËH[(m[»¶¶«ÓÎÉ–*J[c6O¿º÷°‘C~66_#rÔ§[ŸÖÔT‘l¤,°P·³H}Ø\Óm \º¦Žöf­£îa¯€tò³°ù‘£>Øú´¦¦Š`#…`‰aw^9îßl)º#‡lïì€tò³°ù‘£>Øú´¦¦Š`#…`{ç4Nh 4ÏÔ9­ën×*­+oõ{l¤ÃŸ €Í׈õ)ÀÖ§55U¤)$Û<zØIΛİ:m€tò³ML²ÜœŸ(P‹l-2SI• °‘j°8Šu:ŠÔ…½aØ@Ól€tȆý¥@ÀªÛ€  4A6ò*°8“Ms&i6‰ `uÚé0 úK(€–œÝúëÂrkØÈ« Àâ(Öé(Rö† `M³6Òa  ô—l]/Z›(0}$!FŽ,ÎdÓœIÚƒMbØ@6Àâ*¡À@*059«CóÎŒlžBìG*Àâ(Öé(Rö† `M³¶Î MBš`kšjP JXœÉ¦9“´›Ä°:m€­Ò« ,è/Øþº^´Z °8Šu:ŠÔ…½aØ@Ól€Å!BáU€ÞkÏ™÷±,ÎdÓœIÚƒMbØ@6Àö±CÓQ CØäpè…,ŽbŽ"uaoÃfs¯ºÞí¹×Þîýø»ÿ‘'ݰ?œ/[™÷11)iNe¥Q Ô [ƒÈTU+Àýà`ÒFì´ßlàÆy Zàšüf¦ &:ÁÒSϽÈþ¾96 ÀVæY¬›”´¸²Ò(jP€­Adª@ª`›ãDõ›ƒN{±l`IX¸è7}Ÿ}Ç€ë{ìá6Ø`ƒÑm“V[ÍuÎ@lC €­Ì³`+“’‚êR€­KiêA ` À:·uÞƀë›ßüf÷½ï}Ïýò—¿l¥Ïþón„ £y¶Þf[wí óÙƒ,[™SÀV&%Õ¥[—ÒÔ£ÀÌ$“ŸØÎWœ4Ć׮‡v„?~ü(˜nºé¦î /tO?ýôéÁtÿò/ÿ2t±e|lïl€­ÌU`+“’‚*P`zRÆì¼rØ<…Ø_§³’Ê”ø°D7zÝîz瘣}÷´tñó­ñ¬ךü̶’º ŸsÎ9î™gžÉM·Ür‹û‡ø‡ÑcÀŒíÞõjw/°•¹JleRRP ÌHÊÈT €­@iЍL6RJ"°½q˜ tÇúÓ4ÓñŸûÂp]{íµÝìٳݯ~õ«ÂéÛßþöãc¿tÚ¼x«ñÅé0äg`ó5"G} °õiMM)ÀF Àö§ üpݰúmàÄ“Nuš€É"®“&Mr'œp‚[¼x±{öÙg;J*GåYÙ›m¾¥Ó<\çî_g6ÒaÈÏÀækDŽúˆØ­’ö\Q_›¨ Ú*ÀFÛ}ç±þ¶¯œy¶[{ò:Þr8ÜÑGíž|òI÷ÜsÏU–}ôQwä‘Gºe–Yf´®=÷z§ÓÌÆØP÷l€tò³°ù‘£>¢¶¾æP ä+ÀækÔÊÀvÏ)ÂáD[l ¿m`Î…—8EB-**°<æ˜cœ@óùçŸïZZ¸p¡ÛsÏ=ÇŒ=rfRo2ަØH‡!?›¯9êS€­OkjªH6RH¶zgM±þ¶uÝõÁU,rÈ!îþûïï´¦ñܹsÝf›m6fýXucƾªµ/6ÒaÈÏÀækDŽú`ëÓšš*R€€­Ö±DOl ¿m@‘N‹¸êïþûïïî¾ûn÷ë_ÿºgé”SN3>vÃ6v_z% [ÑDOl¤ÃŸ €Í׈õ)ÀÖ§55U¤)$ÛßÎ6°ÄõêµI“^ž¤iõÕWw?úÑÜo~ó›F¤ÇÜuÔQcÖ›Ýõ-»»Û,d;Y6ÒaÈÏÀækDŽú`ëÓšš*R€€­Öù&Ðèo0€1c†[c5ܧ?ýéÖÒ8/¼ðB#ÒOúS÷ö·¿}L”øÃŽ`|l ÀF: ùÙØ|ÈQŸ½Ø[Æ›¤›IC¡Á7§¾ª>l¤’l;ÛÀ×¨Ö `¿úÕ¯º›o¾Ù}ðƒlìœ9sÜoûÛÆ¤+¯¼ÒM™2Å›y¢c|l9[`#†üll¾Fä¨Oú6™© ¸:Òðh\óã*´i6RL¶œÃ4 60˜6àìí·ßîæÏŸï®½öZ·ûî»»]vÙÅÝzë­îw¿û]cÒ7¿ùM·š·6­–ûÑìÉØg¼}°‘C~¶‰I–ÄãƒP ';Mðú“Wtwqi€5¸u­µZ/*’_ë>ùÉOŽ»ýŽ;»ç-d#º°‘C²%¾átù‡¤¡Ñ`õ Í, `×M*¬ @Cm¬àæ‘G! 
°wl»-[áÝZ´(vðr ‹kŒ ÄÛ@;€Õú¬÷ÜsOk\ìë_ÿzwâ‰'ºßÿþ÷Iò—öÙgŸ1ãcßÿ¹ûym²lQÏ¡žü LLXà%ñih4øz…Ö°Ó’ +ë6À´°Þª%Š`ã[ ­°Á·€ÕDJêJ|ðÁ·@öŠ+®p/¾øbc’fOÞf›mÆŒ=þs_pO=÷" ›²l 硆C47ŠëÂ÷¿ŸÞ˜ÜÓX ¹Þs*4-–(p÷€ºK;=¹”øä(À¾Ctq±xˆØE‹¹ûî»Ïi2¥wÜÑí±Ç­õbÿð‡?4&{î¹cÆÇNf|l*À°Ít•|€}ðÇ?¦7æ÷ÆÔpÑ‘—UìÔIJͳn"°lXÝø.lž½²D6ޱР|( °÷ß¿ûÙÏ~æ¾ño¸õÖ[Ïyä‘îé§Ÿvüã‘~ýë_»ãŽ;nÌøØ­§l뮽aÑØ‘h,ÛL—€í^ð¨›~}™²»°Q† À°¥ÞްQ÷W×2°qùg>ÿ·ãN;G§m·WåÍ_ô˜íwØÑm·]|›ŠæW{Êó•ÿ&NtÄ$3€sÜ}Z…N1{òÉ'»üà­¬ìƒ>è~øa÷©O}ª²gœq†ûÿøÆ¤Çܽ÷½ï3>vú>û2>6¹ÿØ®¹ À°PäÁ, ÀFÞ,MÊÀÆ9Æ“×YÇ}ç;ßi­ “’k•ÏÊ*š_Ç9FÎtr­£ÛT4¿ÚSô˜Ù³g· ¿ è Œ8;F§|ÚìE]ä¶Zw]7áU¯jMà”°Š>hùþçv[mµ•ûáèþô§?5&©m;í´Ó(È.3~| à†y|,Û$¯ä•¶°l– À°lwZÅu°ù­œ~ìâÅ‹]ìGpYäS4¿Ê.rŒ&œYuÕU£›T4¿ .zŒ €³?À³>ÒV“"½wút·ùòË»ùɽý¾å–k °=ö˜SÄóꫯvS§Nuïz×»ÜC=ÔˆPë…Ü:Éïš~G”vH–ÝV;`+v,**€`+2¥¶Å°,[ÇVqlœc Àæ³/gKà ýrÞ!À~ñsŸs믴’;}©¥ôÖ¨•bö‰'žp¿øÅ/ÜÙgŸíÖ_}÷ÙÏ~Öýö·¿uþóŸ‘﵃ ¶_®QÕí`+s,&&%U6  ÀVf™m `X¶Ž;­â:Ø8è`تfÊ‹»÷êÖ)Ø7¬¹¦»n\Ëì“O>é•ýØÇ>æÖ^{mwÁ¸¿üå/=Mýë_ÝÿýßÿµÖµ`—³*~¼cqë&'½¸ª`تl©]9, ÀÖq§U\çD°lÝ E}q÷fÕ:…ûÍo~ÓMIº¼–‰À `Ÿzê©ÖìÄ÷Þ{o«K±ºÿä'?qÿùŸÿYkz饗ÜÿüÏÿ¸ÿþïÿv·Ýv›ÓR;,[‘kÀÂ¥X€Yˆ1œR†SfÊ몎aâŠ%‹`ãœd€­”(/îÞ«[§´1°»n½µ»Â‹Â^œLâ´~2¦\ðÎB¬g£µ.Ä>À>óÌ3îW¿ú•»æškÜÖI¹þð‡[p+°ìvúÛßþæþ÷ÿש]×{Ýu×°ÌB\Ò{H= €…CJq‹á”2œª`´L9]Ø™ÉO«ŸØ8'€`ë)ê‹»7«Ö) `¿÷½ï¹ÍV\ѽäAìÃÉÿ;­°‚Û+™Ñ÷–[nq¶ŒN,À>÷ÜsîùçŸw§žzªÛpà ÝI'äþð‡?8uí­:)Ú*pÕÚ´wÜq‡»é¦›Xoù*ÆÀVæ*°pH)éÀNO,{vžuebcјKš ASæ§Ün»íæ¾öµ¯•º°eÀ°_ŽéÀÎJ V‰[ÉÌš,[5(Q^o5O÷¬et4 ±Mäô¹W¿Ú”t+Ð~'I&KT}ü#i­[`ó›ß¸ŸÿüçnæÌ™n“M6q—]v™û¯ÿú¯J’«"¯>úh ^Õm€k{le® À–âœ.ìŒÄ²s'J€¸jzu²”ˆå«!@j›·Û`yðÁ;¥n×SUùleŽRs¢X6|Øw/5]§,€UwÛMWYÅ=72 ñ®»ìâ6Ÿ0Á)ûb’>¹ì²n—-·,°/¼ð‚ûÝï~ç-ZäöÚk/·Ç{¸x #ˆÕ8WMÒ$@þéOêîºë.Ö‹ºúvÀ–rÒ`ØRüÀÖl8X«sŠÊÔVyEÊ1.rL/ó°•=8JÀÆ9Ý,Ûtð¢}q÷ržNY+<ú¨£ÜQãÇ.£óýïßmºÆî¼dL¬&wzgÒ¥øÌ3ÏŒk]ˆ˜°‚X¥K.¹¤Õ­ø£ýh @IMж \5žVkÑjŒ.ÛÞ6ØRîë1‡ßSÿ‹Ò|ë÷¼ç=}dª‹z °[%–œÌqPÍ'™:{Ú-ÉÃàÖµÖ*Eòu ž°ª[†«Hì)§œ2Úþ° ±¾«»±¢¥‚Nߨ©5M‹¦*¯ŽWÝ ÖmYÛÏÐi‘ິ‰­€­æž)[ çô°lø°?î^jºNíööÛowë'QØ—YÆxâ‰î¾ûîs .tû¼å-î³ ¼Þ›<÷7}ÝëJ¬Öd Ó¿øE7yòdwúé§;i^Ò8W%Á±&’z衇،¨+ز^CÛã†6öÆ4_ÜçùÅòÙ•b}ä²ù´.»@Ë–Qçq½ØJï„~X]t¯Ÿú.à5ƒð!Sùl|¬ [ûô] FT Ž•O7†þZغ4[]utYîÔÀØJoÂ…°qN7 À6¼h_ܽœ§S;€¬žvÚi­Ô°Špj-Õµ“îÄê^¼E²W^ye+úÙnâ0›°Ú¦2=ôÐÖøX]U×à0 ZõÑÚ²šÑXǰñö@¶°ëuÀPìå—_>åŸë·¢óߨ-ª;lS§>{·Ž`Ð…Ø.nžYëƒ"©28?«·':ÖŒPoTÂȪÿ†….Ä­ßÕY#©²_åA-€srX6|Øw/5]§<€¬n’<“}€ýÙÏ~æÞ¿ï¾îô¤+±–ÛÙ#Yãµ(Àªq»¤ ˜vØa7=™LJ€jÑVÝ™ŠÌêX-ÏÀ·C¶2€ 8ÄzKš¯ÎS£ï 6) %ÿÝt‰ ,H•5¬õÕ±´0ë»x˜ ]v؆¬šiëª.´ŒÏòùýéµÍ 3,'4€-ò(`㜀m:xѾ¸{9O§€Õª ,hu!¶u`µ®ëFÉÌÄ «(ìUW]Õ8W?]qÅ-·ëûùçŸß{òÉ'·ÆºþùÏn›ýõ¯ ÀFtN»þl¯.Äi`æšüýÖ32+¸åOëíàÕzr*€¥rÂh® ØÐCåULe)¿þêØ¬RŠ@f7ó° Ø<`MQ3ôp0¸o|lÔ-Ø(™Æ`ãœ^€ÍöÇÝKM×)`5)’f öVëÀþýzë¹ù‰Óøõ$½÷ÝïÎعsç¶À3L;n±…ûç·¿=uŸòÞ}÷ÝnÝu×±`ËÛé0äg#pHØ•7 2é»üúp¬jZoKÈ`ZM0éBÜ2ÖIÊ]F'߬ äè÷1°z¢7þ< •á© òµ{ËŽ­%›jXläýÀÆ9=,Ûtð¢}q÷ržNìç?ÿywH2K±Ö‡œDcï¼óN÷‹_üÂ=ùä“­®½O?ý´{æ™gZ‘R9¬’¼›'“4Í™3Ç=ÿüó£iÇÍ6sãÿîïÜ7Þ8f»åÑX\¬¡`Ë_6ÒaÈÏÀkóÖ¤l¸š‰Í£#ˆõZqÕþ0²² À¶…É´Yˆ­p8ËX ÀÚx×°k±ÿfÆ&yÒ`¬þú“Eùohºl&qÊ"t3çô°lø°?î^jºN¬¢²’а{ðþû§¬ vÊë_ßšµXiï¤Ë±¬&wÚaÓMÝÌdûÛ“1¯ú&-é#€ #·Œ-gƒle^llâЧ3„«²l8!e³D`³`Íú¤ÛtÙö7P{›ÎBœ–Ï¢°~¿x?*+˜µ.ö×k«ÚŠþöËXX6ê&ìZ&6Îá`ئƒí‹»—ótê`~øa÷{ìá’>k-ˆ´Ür­®ÆaöÝÓ¦¹‹—^º5^Ö’²“'MrIO´ØžwÞyîÙgŸ“4þV‚-[ÎØÊ\Œ‰II‰éVóI Z÷–ä>PzðÇ?nôr0Yc`å‹û>z‘lÚ2šÆ#,Ëâ"°D`ÛÞ0Š~ÊH,µ$­<5•Á…ß}#¤ªÛ7k lí—§Õ©má¦ÚÖ9°ÕüЗ-€sxX6|Øw/5]§NVÏæW\±¦]vY÷ÙO}j ÀžpÜqî£ ˜úð‚ì~I×âÅ#ÑYEf³6ÜÀ–³A¶¬ÑÝãú`å[—_ßذ·¥1‚qDZM°l¶aÛßôÙ‡í¼z9‰“º hÜa%Ÿ~[U7Úa.'`í žþ–ýÈ+³Ç²è‡ãØ8‡€`›^ÃÔ¾9^┺qάžç$Ô‡Õº°“'Nt>úhk ì·“hêîÉwXDb'¼úÕî£I´VùC°}_ó“Ò‚SK[«¬¿Íþg¸ßtßvØfz+ý°aoLlHŠXýŽø½-íëB¬ý6Ó°•©úýe8ýãýãšÈ½Øiw˜¦.·®µV£» 4Ñú­MÀ¼nÕáÏ)) çì°l7`‰2ãî¿P§ý8¨5Tf™eÆ»éûìë¾ræÙîÑÅÉ$H%—Qñ«`™9Ó›Œ…µ(ìç>ýiwûí·»-VYŽèAªàõíIÇzÈÍúÌgZ°«¨­²ÁZûU@)ÍŸ?¿5á£&… [ܦØH‡¡ælý°~o̬ž“òÏ´>Ô†ßÓ&c²ub³z\j¿®~Úžµ¯I¼ÀÖ¼ŒN“.~¿¶Åö»ãÆý¿‘— «~b§¤šnû¯:6ÎÙ`Ø*àˆ2âî·< `±~Ú~Ç݉'ên[°¨4ÌV°‚ÕÕ’nÀXmÝlíµÝfk®ÙŠÊZ„Õàõç?ÿù(„êÿYÇïVK¢±~Þc“ïŸJ†¬À VÓ’`÷±Çk±Ö©U·CMütÇw¸Ÿüä'nrZËöºë®sçž{yºÇì_¸èwÈaG”Ö?¦Žªó°Íô]ú 
`ûÕoJ»»°SË>4Ϻ‰ÀÏ¥¢åØk’‡úZãÆ%C~ÆU¯y¶Ê~O6Ρ`تfÊ‹»÷ÒtÊXf7ØpãH]{üB0UÀ ÷˜:Õ]1¬Û%0{^2+±¯{&‘×ÇÜýò—¿\"mÿÆ7¶&q²ü¶,Ï=÷ÜÓÊ+UV“C¥¥^¬àU/ô·Ÿì€m¦[À>Rʯn ”iG—6ʰX¶Ô&€]1yP¯:nܳ#XÍ`GªIƒ¥–Zê7rôät²5Xî5¯q‹/Χ¸‘r¤‹|ŠæWÙEŽÑ„«®ºjt“ŠæWÁE¹þúëÝJ+­ŒÝqï¶I«­6&òFbÃï‚RAoL7ã<€½å–[ÜÁï}o«[Þ}÷Ý׊pþìg?s>ø Ó,ÄrÚ°š8§Ö-Ò]–¼K‚9]ˆÈRÙË›É&qºzܸ/Ž<<4èšOM °qÎ6 Àv\wŸÅê”°‚(ÍHüÔs/–~1ìÇ;Ìm–D@”6&5`ÕÅwó¤Ë¾ßuXãaßöæ7ç« V«¥uŽýèGSóÀ RÛ¥:&qzìÉçÝvS·wŸuRáHzSzþì÷Ï `¿E°Ž{À½#Ñ0¹U:.§Õoʸqû9¹o4Oʃ?þqßù·½ô­û­n€í»ŽÑ) `Ó¢°Š¬î˜Àé±GÕZFG>æk¬áÔ]Ø¢¯»%‘×óÎ:«µÞ«­ {z¿ï·_k[Ùd«zóRk‘XÍú£uSò°•¹J°ZËzdvÒ+Ó‚bí˜ Ì#9øàƒ£øC/²Uº° †ÍγîJ»}ÖÙ…XF"ˆyƒ!#1CÓq‚_ß(wÛm·Ö›˧ïaWdÝÚ®r|hÖqÚî—§òµÍ¯'­{»cõfEŽÕfk¿ºKûu©]:Nos¶uÆ»°³ƒUâÀVâH°lSœmÚQ¬¦i™µŒÎ[¼±°šœIÑTEXL–ÊÙmÊ÷Žwvç%pjðzTi=þãdÉÖ…Õ1ŠÂj¢§¼I˜²ök *ù-<ð@nª`ûÑ>ØÊ\¥Xë‰é÷ÊÔÿEýá²Û€mk»3’½sò¬»o6Ƹd°–Oݕ̸}c¶.È‚LƒIm3µh¯àQ ( 4øÔ÷pRmK„oåµz¬·_‡A§¶)OÃÚ¯m*˺Lûoxì8«í±7OÑ›9ÌÀæÝZÝÝO6Î`Ø~tÌisÜýÝn'u÷½óÎ;Ý·¿ýíѱ°ï[f7!IЦ Xç&ëÂ~rÙeGáõôñãÝÁÿø­õ^ý4idžc“cNf÷Ç~¿é¦›ZÏyÁiLÒ ú»îºËÝqÇN@éøýèGîºë®sçž{nËÏPV{`+ó3 `ýȦùèE}ãƈɓåk°C °lí@LÀFiõ@Ð6?’Ž›µ<F•oà˜VW€õo Exõàña4¬ÇÚï×æÑªrô–¸S(-r<[Ùƒ£TAlœƒ À°ÃêÜÛygE`€oI"­šˆI“8}âŸpïK uÃ$ë/±#˜Ýc›mRáôÃ3f¸M Wܵ“5bŸ±Ðêç3€]´h‘‹Ilûßy¶”ûvÐÀ¬È2×zD†=1}?ÚàTÛìÿ°Ëo°òç}Ÿ^.ùýˆòÇÔÀ¶ëù©ö…½?ý2¬·§~'tNa¬ˆO_6o—ºGE`·J,9ù]¯æSgâ€Íz;nO{âo³lZ×â"¾ £´aYªÏ7r¯›CÇùÝÃpYC,r[Í=S¶€MCSýV­š¬cYäSôuƒÜq§‡6ê3l`Ø/çÛ`/ºè¢VÖŸ…øÊ+¯t;o¹¥Û-Y"ç;É3õï“Û‚Ê´Ùo¼ñF7yd†â£’ˆí§ΛI8«{©.§3&Íþ­`ËzK7ðëÃîA¿‡¤|ßü®È¾ïíûñ!K¤}·!~ HYPÌê³^—i=?ýà™õþTY~ïPcëÉ©ýáÉ"~}™¼½ØÊ¬_Õ °1¡ûªVUFaÆ£rËt!. °vùÒí34ëB\Æð:9€­ôÖ)\ À°q6Ð/F;;»žíváÂ…­(ìj÷wîÄOt÷Ýwßè,Ägœq†[1‰Æ RÛAéÔM6i­«Éž6Hf2î`ï¹ç›X¶°ƒPü€Ø´^ŠEÖ÷“l[;€3„Aªpè¢ß+3ìù™Õá6®;qT'>Ö±l‰Yˆm§volÒ¥Pø0z›õ7#3ó7 ~;²ÆÀXëzÐÎèØâ¿ÔƒpçìÒ…8?K6ΖÌfëôºµ'·Æ7ÞØ}öÙ­%o4V(€½øâ‹[ûC€}ðÁs—´Ñ’7'tR+‚«±³ïL¢¶_ýêW£Žó—ËÑøUùwß}wt`Ø|–XÝc~ò#Ÿ1Øpò' ýó{<ÆD`•'¬?N$Óï•XÇbYÔVe¤±F7 µ]™l €µn½2e`—2€ß¯Ý¶ù³ çl¸”õg·È¬?¦V€«ýá›—°Ž4à »§µÕê$[ÃOzƒ«`ãœi€<ãî•~×é+gž=:©‘ž¿z}íµ×ެ"ž»î°C*À bó’ží“’H­f1¾7I›®µÖÇüÛÉ'»n¸!³,í“/ (-’˜Ä)݆éB\™“2P+è“?µŽkl6XóÛ­Ü<€•ŸîO´v!ÇÔúœàGWÃ^˜Æ4leö_ob{û¾i AQ0«‡™šö§½igH6™’_†cøŭ7H€Õ9úÝ–­K±ß—Ÿl…ÜGE°qN9 Àö;˜Ñþ¸{]:ݶ`‘Ûvêö£ »l2^õðÃo-}#€7ož[°`Á˜.Äyë±úû÷ß{o÷õ‘õbwJ¢°sæÌ³ž«ºï¶í¶™k¼^ýõ-€UŠ$€í²{2P›·¾j ¤ A 'VX 6…Ýóg¤ —ôô£¡l…wEc`ËhƒœÓBÝ‚M@ÖåXF®'n³2²ŒIùU‡•vOȪ#loxœö«nAwÚ9ªÝiÇt»c`+¼qJÀÆ9µ, ÆÝ+ƒ¤Óœ /qk¬±æ(Èjb³ÓN;­õœ×DM66f=V?Ï¥—^:º$f5Þy«­Æ¬ç*€ÌV¬I£ÒÊ6€U÷æ" €`K¸ E:€õÇŸZ*œ IÁ/?àå÷¾”Ý`õ[ãó† {4ß<`­üv3 °EL<'o¯¶Û°Fù,±D[áS¢(6Î)`ØA3Î%î¾—NO=÷¢;þs_pË%Ý~åˆ*m•ç÷¾÷½Q€Y‹5ÌóƤ밺k,¬ÖÝy‹-Z^åÛ.;;Ù®—¤m’ï+&`«mú¾ö„ ­‰šü²¯»îºVVNhÑÀ°eý…ºë%Àâûw×÷õí%ÀªÛÀ¬ªŒ›.ÄõN/oT¶ª»¦\9lœó À°€^ܽ2,:uÎnå•WÙµ×^Ûýû¿ÿ{kll‘tÎ9ç,²Ÿ_j©¼ j¥ýD5ñËüÁ~ÐXM&U&ÝtÓMNKñ„Ï=÷ÜÑs–kž']ˆËùÝ> €è%ÀN«¸ÛÀ4u¸5ÿÑK¸¢îîß<l·íË`ãœr€VçžóÎþÐøXÁÏÒK/= o~ó›[ËîÜ{ï½…Ò7¿ùÍ%@Ö¢²›®²Š»ñÆGËSùØ[n¹¥T`Ç^S¶·~HVíl÷}ð¦pKâ¾þ.ìôäQ‰OŽ,›†¦ê–¨W‹|ŠsóÍ7»wÚÙHq6ˆNÍÕIãc÷xÛ^£ûªdá|à­è¨ÆÑIgŸ}¶{Ýĉ£< bOO"²í³Ïh9×\sM `u•I,ÛÎ Àvh§S“ãÍ+ƒ,ð\ ž»°yöÊþØ8§˜l>ʰq¶ˆ®Ns¯ºÞmò†7Ž‚ì„düê§>õ©ÖÚ±E’Mâd<½¤±°É,È7ÜpC«œ«¯¾º°?N&¶)›èBüŠm¦KÀ°uX& À°uÜi×ÀÆ9Ó, xÆÝ+èô’ûÒig¸•V^ydßð†7¸9s渻ï¾;*m»ÑF£; b°ÿ´Ì2îÐ:þª«®j¬ ´“ÄØ—m€­Ø±¨¨8€­È”ÚÀ°lwZÅu°qNù†mì¶ÛnªÛy—]¢Ò«_ýê¨|VÞRIÁزËó÷¿µ[VkMF¶_ù5®/6¿ò=fË-·royëît!Nh oð4xtñóîșǸW½ê•ñ±o}ë[Ýõ×_Ÿ ±>ÀjyE_úЇÜwÜ1`‘í$]|ñÅNcv“ÇJ+ «°;À°™Û”Á΃Ôº×q{f×ÀÆ9Î =âÔ=06]úýDçU™—^Q,™cŠÖqI‰6=Fc‡Õiæ¼ãî½~×é¶‹ÜN»ì:f|ì‡Õ2:Z'- `ç$@¹ÛòË»7ß¼5[°ŸïÊ+¯lE`ÃeÓÁìô¢ÍàõøÏ}ahïE¶·~HVí,[‡e%K¶Ž;­â:Øáp¢ûh?vÚï6pñ¥Wºu_¿Þ(0&¿½îÔSOMØ·ØÂ½.?û•¯|%u¿¬À¶hš={¶›_.€€-æû{ÃMw´ºÍªû¬9‡CmûÈ3[)ܯï!ðúãj‚iy|çÓÊ×_µGU†uãÕ¶äò·êSÕ¯ÛoÛüè¦"r&¦Ê—6VTgǨ ÕqþE—f:ž…>4«r©( ¬ÓœYZÛ§6«µ%Öù ë ÏSc¶˜­ÅjO¾ÁÔUKÛl±å›FtÅWtÇsÌ€UWâï~÷»îÝï~÷˜5]œyŒÓDQØFœm°‘C~¶Jv$˜µ.¬¡dµ–ù&V(GÀn•yE¡bÉŒÝS€Ô€spÌ4€Õ_?òªý‰ŠlúÉÔhÛe–óFhU—Ö"ª:΢ÃV†ZyÂè¯ö‡kåtÇ8Á!ÀZ”Ø?–1°Ål-Fwò ¾¦gs[Éëüú׿ÞÍš5«=þøãÇtÞzʶNCaÅì€tò³U°ùU’2ˆXôC&)ÀF^ ¶˜£cQİk°  ч\ûߨúpg]‹8œeVå« Ö~¾ ª5^Uðª6i9 `CX`‹ÙZ› ï`k«ñ±‚¬¥—^z4"ëÿ?aÂD÷¥ÓÎ\K.QÀF: ùÙØ|ÈQŸl}ZSSE 
°‘B°Å_H«~wa`Ì$ErÖå8œè)Fʬ•+õ!Zí±q¸i‘å¼ö¼ûçž6Ž€-fk1º“g¸4]¸è·ÇÛö3>v¿¢»pIpµû€tò³°ù‘£>Øú´¦¦Š`#…`‹9Àþ$N6I“uµµnºi3üú aà*hL^EA5f´Ê.Äayþy´é2¶-„Ó´(.c`‹ÙpŠ^Y6 —^ïÿÀ‡œÆÉb'Û é0äg`ó5"G} °õiMM)ÀF Às~ÂetÂï6y‘àVP§¿iÝqxŠà†3ÛøÓ´±ªæ¨–‰ÀZ”ÕÚ¤º­[³MÚ¤r-ù³Ç8È!ÀZ·d§±Íö˜òÉSÌNÑ ½°x`#†üll¾Fä¨O¶>­©©"ØH!Øx'G¡º ûë®*š*@ó£—ŠÀ¸¦­kå"Ó¢žayiÑδõeý(«þ!XíDªmþñiãpâEÆÁÚ:¯~[µMå(éÿ´6ád³?ôB/l z`#†üll¾Fä¨O¶>­©©"ØH!Øê¡³( Æ”Y6¢®á²=Y‘ã²up\oì ÝÑÈ·6ÒaÈÏÀækDŽú`ëÓšš*R€€Íwnºá ýHn7êˆ-ÓºöZ·gë-¨µ¨©¢¹Y©ÝxÝØ6¯7vˆîèŽ ¼Ôšá9ydÊoàÓ™“Óå<ù @#`qhDØHµØúØ&v¥µ.À‚TµºF˱`ë·€ͱzm€tȆý¥@À®ËÛ«þºªÞZ6ò°õ:J8¦è `Ø@³l€tȆý¥@ÀN£Û@]Õo-yØf9R8¶\lÀêµ6Òa  ô—l]/Z›(0}$!FŽl½ŽŽ)zcØ6Ð,`q•P` ˜šœÕ¡ygF6O!ö£@`›åHáØr=°l i6 õ©óÚ¤Éàš21]^[Ãýlš„5)ÀÖ$4Õ @• °8ËE=òc3ØÀðØ€&tKž9K¬+Ú€fVמî;`«ô*( úK¶¿®­E–ìð8¢ýâLÒNlh– (›·\‹S(Ð l¥7t¤Y9'=1Ùÿ"ågª„>ýx×4 Íl³Ew®6PÞ-4ÐtµëúÚnÿí ïou™Õß°Œ´mþ5StŒ•QôzZ}VŽ-i¥rl[V™íÎÉöåé’VvÖ¹øç À6àÞû&ŒOšðp¾úœœS üö¡OïïZ€(P‡lyg¹¨cJ~´ÆºkÛMÝÞ}䈙NTJúßj¿m·<ç_téèOuƒÕ18c´ å¬iœ§_nx=í«_ý<ª[e´P•¯±¤Vþª}ÖnÛîC¥öûçä×qæYçÙ'¸( ›a~+ÓÚbçMâ:žÚÔ(€(€C®Û]‡`A_l >0p´É„!|ù°ªk#øòAS&ÈÓvíü¨ "µÍÊõ'-ÒqÊgSg l6`Ü`U·ÝvŒþªL+ׇTw+×ÚÛ>}·óˆµÍPC}7}LCµ€r‡‚ÓG@@:`ës®cEòqM°r6 £žiÛ|} (PH¶\dîX€¡[N!åb[uÛ€àÍŸWõ‡ÛoþxQ}÷#§‚Î0’ªnÆiåú³«.¿lAŸÊñ—»É[FGyÕ¶p‰œ°=ª×oEdÃ:ý1¯Õ6˲AÍ»Nªß?WµÏŸYßÕžP¼r›²€ío¥m<0iÅÜ”–l”l#¤Áe•½7Õ“!…0…þjD°2bÒ©I’Q)ÉPí#COëú«ãtŒ},šV†ò„X+W7Œ}ü6¨åñ?Ú¦Kç¶¿¿®­­LÈhŠI;°El ZÈêj\f"§A¾6le.Å $¿[=-äOûùéÚî3€í¯Òç`ݺt~!À Ê`·N’ÆÇÊ€mœì)°hÛÌø <÷i³þ† ì¬ÀTuø€jeÐZ;üÌàØºøðÛ%¹(¶`«uÙÉãܰl ¿l@Ý{å3øÝšý.ÆŠœZd6í¯ äkÀöƒ·Ò˜6ÊŸöƒPj˜ñ@´Ê d•=¶¬rC~\Àj›ÿÑw¿{þ÷¿ .}×ÿáÄPaXArVt5Œ¸†m ¡xÈ/%§o °ýå²ɹa‹Ø@õ6 ˆUw_·ªn̶̎`Ö¶§ý »Mêõ`ñ‰ (  Uèû›/¯¿ TÙ'-¯Hâ ,©, bÙqY½&C€ ™CœàÃtØÝY|¡mÖáy¬ý¤@,ÀúaFb­}öæF†¦·£úëÂíÖ5! B-¯Êõ“òú†Ë[›~²´Û ÀVï0ª“Çya+Ø60ˆ6ÀÖètôU¡_?mFufaP*ô½­¦ÀVÿ+éÿ|³zMúå™ÏoŠZÙÆú®Äl¼nüêÿ«Ã¤*P`}ƒ6ã²î»EÖnŠ°ï½•!#Ôÿ~òß°uª,é :¤œv `±6Àâ TÀï )`´€‘ßë2ôùU… ù « »%gõš4_>-²ë7´ò}¸Ö6ËÃ0‚¼ß³—X34…îýèlÚYi”Õ…XûìMÍòž˜iÝŠÓn.£‹“ëä‘[Á°A´Ǩ òåmÂ&ýoC}ÿ=mük5µjÃíYA'‹Ì†[•c]‡Ã™~OÏ4È-xêdïGʬ½ »Hë¯<úXÞ´YŠ}CÇÙªm~ÄUå„ Àö£Õu¹Í,é :¤œv `±6ÀvÙѼâ ågË··ˆ¦¦ÔR~z8+q+>»Àj{ØÓ¦Àž-F‘-›c™Ã š HÓÖˆ²·&>dúåøoUÂÙÍBƒ“1†ogüáÚ-]ˆ£.ñðe`qòb<òa+Ø60ˆ6ÀŸïÓáÛr:aIkÝÓzGZ/ʰúÐGo¤q@V÷d¿.¶Ã Ïáé „“9•Ñ©Š2ÊÔË1}ª‹C:ˆ)ç„]û60,ËÁÔe÷Zr§®ºê¨€íS¦·Í¶ÀS´2HÕþð3-Ù .½~ *k§´^“>؆KfZÙþÊ& šù3°½µjG¨JG¿‘:°³^Ù€–‚é÷å`´ Ž’ixò)³Ýù]Ú3ˆT[´¶l¯®iÕõ°UyCUŽg¸¦@R ™ÖSÙØX`ëöë‹—µ¼aD7,Û¢Á–€*ådQ`€`‹ªAÊæšdë­¿›{Õõ£°¥h¬°ImÌk‹­ÑjùôÝÚ¼ãËîÏÒJkËJײå6í8v€œæžšº!§ +¬¢Å*—™†«P’2P©Àø &>·pÑ#ãˆ4Í1¢=À6Ð;H-Eû ¾B€­Ë¦Úi¾¨«MݨçâK¯tWYe~#ŸÒ4 PPP`¬K½êU{ê¹Øß÷ÎÉî†CF™\Olà¥V”ÒTªÛ­¾ ¾gÖ VÙ´q²~7Y‹Ú*Ÿ{ßË—øíô÷û‘ß"×Cåúu„«óð»ëÕeuûõª,µ]å èÓÚ¡üVŸé¥•î÷®Ùv.×Þ0ÏM˜8ñaü@@@>P`¹åVx,°SĹ&/öÒ/6` jípi[òÓì x!è*¿ ÏÔêÿ1³µMÇꯒ±ñU>ËBžŽi×ýWàhuøõèÃïvn~÷bM¶¤²´O€šÉU;¥‡i ü‚Ø,­¬ :ÎoC¿ØDZ;ç\x‰[i¥•nêƒG6MD@@Xyå•ï-%èg‡…¶aØÀàÛ€`,Œ:¦u‹5ðôaTpæÃ§AåO¿ÐIý58ÔwÖ§ßVÁ¡Ý( þ²ì/¬× 4`UOø[–em¶h²µÙ¬¶Ú9¶ëBÜ]±³4?ñ¤SÝrË-÷5<@@@>P`ÂÊ+_{Ö9Ð…˜.ÄØ60P6`p.ù’^‚Ïì|ð £¹‚!®¥A°êóSDg”µ9 Dó6Œ†ú@í·Ç‡Ú0ʶkXö#GÎü[ò¸>®Ù4PPP Q`ÆÞÓÿñOD£?Å5æ“ >•ŒXFÓÀ- `ý|úßïšlÝuõ7vɵ5­Íi³·Z]g+Ëo‡ýoà¼(ê°ìFoòb¢ûT<@´ŽS¸ÆS¿žÛÖ^Ã5ŶևêÖÞýªÑ0¶{õå—_á/ÃäØr®€606Û…Xö ¨§iMƒÕ¼l^43Ææ² » Àf•å·€}ÉÝÿÈ“nÙe—ýã0>ü9g@ÁT@k( ä TõfW‹ÛGp®Eù ¹š}Q³0Æ8XäÇŸëÌuHƒÎv i,…“3I kº„c` ³fúµã”/mÆcÛ﫵HjXwÐf•^Ó´6«ë±µ¯Vƒ2‰Ó—N;í¸âŠç¹Àé£ §&çrë"°Ë{çsà Ð)r*e?~üg;ü£/ ‚ÃÊ9^Ø6`6àÏÈë¤^èªË°’?q“ß 8´#›uXeÚŒ¾!èÚ̽‚;åIƒ¼FÓÆZûüY‹ŽU¹‚i+Ë–ÉQ9~·j[VÈŸ9Ù Ü7M«p‚¨~½ï¶Ýnªº¿­Ì³“c«€4 BñéÒZoª¦V) è«±ôU ú]róSÔŽ)z¬•mÇ¥uûõoTåK»qÓ¶é1²¼«7øûÇ/¿ü„ß©+U¿:!´hÁ°Ð²"ˆ“jKËøÇØüÉœü}ÍÜÙLÂiëÀj¿« 1œImÊÖ¡2ü™‚õ»ï~»mW§Ú•!VYvNa{³´*29USïÍç-p'NüÅà?â‡ê Îôþr+(ˆåÓlŒh©ú.èSdVIÿëfi÷Ñ…´côWoDõ7€÷Éï—¡2}°Õ>ÕÕ_«#ìœäí ½²Se¯H¢°ÇÌøà!nª£A»€l(jY³úf•c3 §íOëŽ\´=ƒ’¿NýtŽ“×YïÉ3w«^=w©·+ 0<®:Y-Š–8HC0‹ž{Ïó .šaÔRÓ‡GƒÆ¬Éì"¬O+p†Êë—ouú«¶ ¨-Ÿòh›_O¬ˆhYTñ+¯òÚ§n[°ˆ(,Ë©`ØÀÀØ€u«´.ÇþxRETcÊô<ŠÖúkäöãùιð·ÒJ+Ý4 Ïóa>-zVwõ; r•…ßêZ?¤% ü$¾ÿ±mþ8Ríow³XÄÖ/§ƒP9×Åáw«+¬€Rc.pÚS7ÝlóžzîEœ4ÀÆÂ¥t²€K€š5ÁR»}ýp´9VÏNêèæ±.3yòº¿Jžëx>’µù 
¤ùîæ7[¯IùÂ1Ý[5„Ðz5ú‹¨ žv¬ê÷ƒbê]¶ÉßïÀ,Ÿ•áž‹XÚf]©­÷¦ßë3M¬>ýµÿ­Çh¨™_vØ3}Ô†AšG¨Èuïi^3\¿Úæ_Lß ³–Ú±ˆ«u3–Ñ¥EvóNVå[´TUFÀ*ŸoØlžÊì÷Ú×¾öcûî·ÿ‹Ýt(›® Ø6€ `½°½ Õ‹ÚäqϺ¯ƒçó¤ùîæ‡[ïI}ýãP 4ó³m(`ìÐ?•gð*¿_uªmJú ;Ñœ’ï®üúÙCÈ i ¼Ó¼ã|˜LØ´à˜Á©öYO«SõQi|‘v-ÏâxFi·ÌŰLJݼ1³¡2Lßu|,ÀúÝØZ›´êª“ÎýÜçOúÏ^8Ô‰S‹ `Ø6Ð-Ð Z½¨mâ³—6u¬€üô0ê—æ»teU–“ÕÓ±]ƒåsgùûi-ˆõ‡þYŠªËÎÅÔ†+¦X¿·¨Î7äµÏÿ¤c7„s줭ÚR†™:6 X²›®4±‹+Cˆý¤ÝL±ÇZ¾Ð€ÒÞ’È0ý(°³¿-Ö‹¶üƒ§Àøbçù±cþÔ-'‚rqP±lÀê²E^ߺÛÛ~¿úšk~yðٜшaS›ýî·Y=CùfŠÎlœ¼iþ¸ÚàoO+#«w¨íYàèuL6lÏ$*Ë×Ò‚t¾ŽloË´·'vÁ±2ýõßDX¿që&`ßý‹®O¥ïá›ÿ´­NÕgýÞÓ"°fø~7¿œ´FÛ²º?÷Pzªn‚É[êÛqçi¿{tñó3®.g‰zp̱lh† ,\ôH²<Ò†¿]a…j³•6tM´H ïCË?öSVC ðÌwoMM+cÐV:‡Z†“Þ°]3óü‚eôa˜\GY—^3lƒUí³¾õò÷ûŠÛŶ~ñÖ‚¼õ’d¹¶8°Áv£è ‘Ÿ/4&íóaÕº+°Øs¾- mŽ &ì»ÖZ¯ûflÄk†3Æuà:`Ø6oŠºÎúâÉ[uÒ¤g’‡øVCû žOƒ&¿ZDùË:Nþ±?þ3¶Œpþ›ð8ñCÈæ“«>}:Àæu!çã±q¹V¿ÚÐIÏM6ÖZº/ìBP¦ A°?Õ7ʪÀ1ïMOV»ÓÞT•9GŽ|ÖÕrm¼É‹á"÷8QùN¡6€ `õÚÀWÎ<Û­¶Új/Nœ8ñìä=qðÓœa¢À´èòÅж´avþØÒ0dÐ+ˆ)ùãY-XÕ®£ÍÂë/oiÐjpgeZy~WàNÖ_ZS ›ÖµÚ‡hÓÍïÝ) ý@ž‡µ]ÇØG<ã×vf!îñ-© ^”"M²h«ŒÔºË b¦ðŽ­§,ÀÒ}8Vaò™SW^yå{×XcÍ?~äÈ™»qÞ¢²,µ‚ `Ø6ÐPO¡ý8è/Ë/¿Â_p½4yp­Ëã{¨°Èg跇˾ÈÿõƒKáP>Ëoðj@kÇØðÀvk3 ÛøPùþ~»¬LÛrucö#¢i5µÑ?Ö3\ÂÇ/GF‘¶ÌŽÚâçóóøu„m×qþ •Ÿ½*cìõÉÚ4Õ´#\§ê1§v“ØÛ‘˜¶êŒßõ9æò €)°IòÏq‰ƒð 9 ›oõ¦wßs¯?sìñn‡vù«¾“ÐÀ°l 6`Ï=svÚùþ :Æ_ö¯ê)”<›MÒê<®‡V¬áD~rLïÇ´1¯e»ÄªÎvþy–^¸°yç›·ßtKk—ŽM;7†(vz9P ë ÈQ˜–¤ý“4k$½md›¶“ÐÀ°l JðŸ7ÓGž3»þ´£‚~P èŒÁiçŽQµ.¾ióâ4M“² ]åy0D±J5) PPPPP` ° OËž¤­øaÝ{Ó&]*[v·kÀJÿ°Kq·Ï›òQPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP†M[”>f};_›qrØô”ó-{Ýåü9@@@@@ŠX>)G€¡Eæý¤õâ´n_••ï’Ttø2kФ¿E?lÓB:ùh=B›W¯Ö/´5ùìü츲ºë8+£ÝK[Š¢®åt~e®{ÝÉ‹(€(€(€(€(0 &YIðQ¨BùêX«KÃÏáÉ¥¬ÖëóõÐ÷˜àÕ?. Ô¥§AòûkÚñyœÖ;g•‘vÞvŒ_ìyÅœ{»<l§ r<          @KXA†þ·4m‚ °1ìôS'Àª½´´vçEt ð‘vþ1çîGR³"Ž‚WƒTo)XðµƒÐ´ö˜¾v|¤°1W“<(€(€(€(€(€SÀذqê^l0TE´®N€m't,Àêœíüôí>‚E©u!NXÁtŒžº&1ùÂö˜¾êšÜ¼ØÆÝ†4PPPPP Fv«ãÛuÅÕ~Eo‚"¥vÝC€Æ§ê°ãTFÚ'-kÑe+Ï8ûeø€çGcÛéiçfãgÓÖÀ2&²š¡imñõµÿÓ^<ĬÎÃ4λ®ÖiîÛ‚E¿óº۵ʻ¦1öL@@@@@V `ÃÉŒ|P ÇÏZwäP6ƒ*A§uÏõÕþv]~ÕVœ§ß7¬+ Ò ¤ÒÆûúàíkQS›ÕZç£2mÜjV¤9¯k¯Ò×Ú;k³_¯uUNkwÀ¦]‹èf½ ÇõZ~¿KtÚ˜`Ù•éâ_KìyðíÉ©¡         ø ´X¿+« ÍÿX÷ZˆÉ´‰ŒÒ¢ŒþMÁŠÊб>ü¤M^äŽÊP~g &X #ifÑA‹ÀúFu—¶Ox¬]Z4SÇØ~ƒ»<€M¹4«´óµÏ²à°^û®k^»4͔Ǹ›Îj¯?AUØ}÷_&(¿¶…/ÂóömÅÊ´¶³¡¬sg;          À(ନ`ÂŒ…$Y|à e2ð #h>À†0ãÃrxœ•—Å6öTyüO»(£lÖå õ£™á1~„Öö¥l»Yí°óÎçð¸°^]G‹„úšficZ¦é¬º ,× ðiv⃯Í-¬sL³%;@@@@@h)``•Ö­VpQf­Ð¬¤mÏ*Óö‡ÀfÀ™Íz> U °Ò)+jõø]¬«Ø,=²L7­^ƒK:³´±ã³º ûÐé·!ë……åI»>6Æ7kÝ4]¹eQPPPP†\?2h]j53èÈŒP6‹ÞêoÀ†ÑW+Ë &ŒðåEL­>Ù˜ª6¯mþøØª¶Ó¬tõg>ž6"t–6i ^ç0OLd9­\ÓH×ÚìÎÿkà òÛ–ÓG@@@@áT @;•5ð R$U€a€FqCPÍ[+×À*„–X€õ#·UlÚdNÖµ8î´óô'eÊøÛq¾-©¾pMà!»=9]@@@@@_‚ ¡]´OТ(€(€(€(€(€}ª€œú¸ëÖéÕ°±mÏƒÌØrÒòå•]G6¯ œ_™ce{U¿10N+W/jô†          @* pM‹8ÊÉ×v%ßá#xú®ýÖ Ùò«Û®ÿ±®ÇV¦_nÀj›€.­ ¡ÌYP¢c÷ÉFä Ú­µßÎYÝ™­^¬ïaç°LƒC;6ŒjçÁ£¬Æjf•¡ÓÑ9…Úøã—õ¿¿_åéÓîüBMUF– Ä´!Ï.T¾ÎÑ×Ú·3ßž”ÏÎAuÛ>ÿœÍòÊmZ:Ôï(€(€(€(€(€mHƒ*sòmR%¿Ëe˜ß¾ ˜¦ÀU½‚ í·ò*þ¤Q!ÀÚ˜Uƒ}×ñY‘3šÃq½a¤ÏQíW»>Jj“¬YëGùtžvŒÊÌܬˡsÐY9ºúîw±¼úÛ Vý±³~—pibßîüÂ6L›.þ¹Å´!Æ.¤•Á©þ·ë§òý—¦¥¯µo¶ß€6«\£ÙZ¯&,˺ölG@@@@ˆP@PNdÀ™vxÀú3‡yT¾áþ`Ó Ú %ë”BX Çõ¦lڸ߬(i À¦A ݱX¿œÜÓ@^0&¨•Fú„K¿¼¼6(¯iFÑ­œ˜6X=íìÂÚêÛ†¶©üðÚèÅ‚o§i܇ö•U®¶§Eû#n²          4A‹´ùm18оp‚,€mI!L†€B…uµ¶Ùßv3‡e âböIØœ”·S€UäWeøHkC<¦]‹p[xiçgÑgëÚÝîÚ¤Ù`^;cÚVFÚ¶´²´MÉ¿þ¶Ío¯ßÕ:<¬6°MøÕ¡ (€(€(€(€(PR0²•;‚‹Æ•Xæ Nýî¹i«|Ú¦¬Óœ–ìà ¾Ó–ª`U‡ ±×+t½lܱ¹ÌƒS›—§€•]†×>ìö À–¼é9 PPPPúU!ÀœøÀèOF”%›‘ %`MµÉjp)pµI¥ü2 ` " ¬ŠD?cÖô·ëam ë1¬]v.YçjBª´ _bäµ!Í®Âm>¬†Q~ßFt=ýïþqj‡?Xç’V®¶3 qÖÝÃv@@@@èlÆ_|´¨¦J€µñ©¯únÑ3•kìG- ¨ú-…Q¶P^ìnY´Tû²"t?Vg8³­mWû¬»°Õ.;£ó,2‰“ÍÎì·=m[x}ÂÙ™ývø/¬\{¡ }Y‘Hp;_•é}^b#°þuöÏ#,_í°Ù”­l‚(ëC|V¹í&(ëƒÛ•&¢         duÙLë’ZT-ƒÊpF[Õ™µ,Ž_‡ŽëEwϬxóڢ㲎-ª]^þ¼ëÓ®­±mÌ;ß¼6äƒío§wl[Óê ‡“Ŷ‘|(€(€(€(€(€ P mÒ£ªšeã3}h˜–ž¬ªNÊA4ÌîPPPPPP ÏP4T~7>ÖåÓïìwçíF”‰¡ê–œ× ÕPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP KÿäE­mžWòƒIEND®B`‚manila-10.0.0/doc/source/images/rpc/rabt.svg0000664000175000017500000010200713656750227020637 0ustar zuulzuul00000000000000 Page-1 
Rounded rectangle ATM switch name: control_exchange (type: topic) Sheet.3 Sheet.4 Sheet.5 Sheet.6 Sheet.7 Sheet.8 name: control_exchange(type: topic) Sheet.17 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.9 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.25 Sheet.27 key: topic key: topic Sheet.28 key: topic.host key: topic.host Sheet.26 Rectangle Topic Consumer Topic Consumer Rectangle.30 Topic Consumer Topic Consumer Sheet.31 Sheet.32 Sheet.33 Rectangle.34 Rectangle.35 Direct Publisher DirectPublisher Sheet.36 Worker (e.g. compute) Worker(e.g. compute) ATM switch.37 name: msg_id (type: direct) Sheet.38 Sheet.39 Sheet.40 Sheet.41 Sheet.42 Sheet.43 name: msg_id(type: direct) Sheet.44 Rectangle Rectangle.10 Rectangle.11 Rectangle.12 Rectangle.13 Rectangle.14 Rectangle.15 Sheet.52 key: msg_id key: msg_id Sheet.53 Sheet.54 Rectangle.57 Rectangle.58 Direct Consumer DirectConsumer Sheet.59 Invoker (e.g. api) Invoker(e.g. api) Rectangle.55 Topic Publisher Topic Publisher Sheet.56 Sheet.60 Sheet.62 RabbitMQ Node (single virtual host context) RabbitMQ Node(single virtual host context) manila-10.0.0/doc/source/user/0000775000175000017500000000000013656750362016113 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/user/create-and-manage-shares.rst0000664000175000017500000012055713656750227023373 0ustar zuulzuul00000000000000.. _share: ============= Manage shares ============= A share is provided by file storage. You can give access to a share to instances. To create and manage shares, you use ``manila`` client commands. Create a share network ~~~~~~~~~~~~~~~~~~~~~~ #. Create a share network. .. code-block:: console $ manila share-network-create \ --name mysharenetwork \ --description "My Manila network" \ --neutron-net-id dca0efc7-523d-43ef-9ded-af404a02b055 \ --neutron-subnet-id 29ecfbd5-a9be-467e-8b4a-3415d1f82888 +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | name | mysharenetwork | | segmentation_id | None | | created_at | 2016-03-24T14:13:02.888816 | | neutron_subnet_id | 29ecfbd5-a9be-467e-8b4a-3415d1f82888 | | updated_at | None | | network_type | None | | neutron_net_id | dca0efc7-523d-43ef-9ded-af404a02b055 | | ip_version | None | | cidr | None | | project_id | 907004508ef4447397ce6741a8f037c1 | | id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | description | My Manila network | +-------------------+--------------------------------------+ #. List share networks. .. code-block:: console $ manila share-network-list +--------------------------------------+----------------+ | id | name | +--------------------------------------+----------------+ | c895fe26-92be-4152-9e6c-f2ad230efb13 | mysharenetwork | +--------------------------------------+----------------+ Create a share ~~~~~~~~~~~~~~ #. Create a share. .. 
code-block:: console $ manila create NFS 1 \ --name myshare \ --description "My Manila share" \ --share-network mysharenetwork \ --share-type default +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | creating | | share_type_name | default | | description | My Manila share | | availability_zone | None | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | share_server_id | None | | share_group_id | None | | host | | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 1 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+--------------------------------------+ #. Show a share. .. code-block:: console $ manila show myshare +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | available | | share_type_name | default | | description | My Manila share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = False | | | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = True | | | id = 6921e862-88bc-49a5-a2df-efeed9acd583 | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | share_group_id | None | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 1 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+ #. List shares. .. code-block:: console $ manila list +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+ | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | myshare | 1 | NFS | available | False | default | nosb-devstack@london#LONDON | nova | +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+ #. List share export locations. .. 
code-block:: console $ manila share-export-location-list myshare +--------------------------------------+--------------------------------------------------------+-----------+ | ID | Path | Preferred | +--------------------------------------+--------------------------------------------------------+-----------+ | 6921e862-88bc-49a5-a2df-efeed9acd583 | 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | False | | b6bd76ce-12a2-42a9-a30a-8a43b503867d | 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | False | +--------------------------------------+--------------------------------------------------------+-----------+ Allow read-write access ~~~~~~~~~~~~~~~~~~~~~~~ #. Allow access. .. code-block:: console $ manila access-allow myshare ip 10.0.0.0/24 +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | share_id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | access_type | ip | | access_to | 10.0.0.0/24 | | access_level | rw | | state | new | | id | 0c8470ca-0d77-490c-9e71-29e1f453bf97 | +--------------+--------------------------------------+ #. List access. .. code-block:: console $ manila access-list myshare +--------------------------------------+-------------+-------------+--------------+--------+ | id | access_type | access_to | access_level | state | +--------------------------------------+-------------+-------------+--------------+--------+ | 0c8470ca-0d77-490c-9e71-29e1f453bf97 | ip | 10.0.0.0/24 | rw | active | +--------------------------------------+-------------+-------------+--------------+--------+ The access is created. Allow read-only access ~~~~~~~~~~~~~~~~~~~~~~ #. Allow access. .. code-block:: console $ manila access-allow myshare ip 20.0.0.0/24 --access-level ro +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | share_id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | access_type | ip | | access_to | 20.0.0.0/24 | | access_level | ro | | state | new | | id | f151ad17-654d-40ce-ba5d-98a5df67aadc | +--------------+--------------------------------------+ #. List access. .. code-block:: console $ manila access-list myshare +--------------------------------------+-------------+-------------+--------------+--------+ | id | access_type | access_to | access_level | state | +--------------------------------------+-------------+-------------+--------------+--------+ | 0c8470ca-0d77-490c-9e71-29e1f453bf97 | ip | 10.0.0.0/24 | rw | active | | f151ad17-654d-40ce-ba5d-98a5df67aadc | ip | 20.0.0.0/24 | ro | active | +--------------------------------------+-------------+-------------+--------------+--------+ The access is created. Deny access ~~~~~~~~~~~ #. Deny access. .. code-block:: console $ manila access-deny myshare 0c8470ca-0d77-490c-9e71-29e1f453bf97 $ manila access-deny myshare f151ad17-654d-40ce-ba5d-98a5df67aadc #. List access. .. code-block:: console $ manila access-list myshare +----+-------------+-----------+--------------+-------+ | id | access type | access to | access level | state | +----+-------------+-----------+--------------+-------+ +----+-------------+-----------+--------------+-------+ The access is removed. Create snapshot ~~~~~~~~~~~~~~~ #. Create a snapshot. .. 
code-block:: console $ manila snapshot-create --name mysnapshot --description "My Manila snapshot" myshare +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | status | creating | | share_id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | description | My Manila snapshot | | created_at | 2016-03-24T14:40:30.000000 | | size | 1 | | share_proto | NFS | | id | e744ca47-0931-4e81-9d9f-2ead7d7c1640 | | project_id | 907004508ef4447397ce6741a8f037c1 | | share_size | 1 | | name | mysnapshot | +-------------+--------------------------------------+ #. List snapshots. .. code-block:: console $ manila snapshot-list +--------------------------------------+--------------------------------------+-----------+------------+------------+ | ID | Share ID | Status | Name | Share Size | +--------------------------------------+--------------------------------------+-----------+------------+------------+ | e744ca47-0931-4e81-9d9f-2ead7d7c1640 | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | available | mysnapshot | 1 | +--------------------------------------+--------------------------------------+-----------+------------+------------+ Create share from snapshot ~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Create a share from a snapshot. .. code-block:: console $ manila create NFS 1 \ --snapshot-id e744ca47-0931-4e81-9d9f-2ead7d7c1640 \ --share-network mysharenetwork \ --name mysharefromsnap +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | creating | | share_type_name | default | | description | None | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | share_server_id | None | | share_group_id | None | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | e744ca47-0931-4e81-9d9f-2ead7d7c1640 | | is_public | False | | task_state | None | | snapshot_support | True | | id | e73ebcd3-4764-44f0-9b42-fab5cf34a58b | | size | 1 | | name | mysharefromsnap | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:41:36.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+--------------------------------------+ #. List shares. .. code-block:: console $ manila list +--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+ | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | myshare | 1 | NFS | available | False | default | nosb-devstack@london#LONDON | nova | | e73ebcd3-4764-44f0-9b42-fab5cf34a58b | mysharefromsnap | 1 | NFS | available | False | default | nosb-devstack@london#LONDON | nova | +--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+ #. Show the share created from snapshot. .. 
code-block:: console $ manila show mysharefromsnap +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | available | | share_type_name | default | | description | None | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-4c00cb49-51d9-478e-abc1-d1853efaf6d3 | | | preferred = False | | | is_admin_only = False | | | id = 5419fb40-04b9-4a52-b08e-19aa1ce13a5c | | | share_instance_id = 4c00cb49-51d9-478e-abc1-d1853efaf6d3 | | | path = 10.0.0.3:/share-4c00cb49-51d9-478e-abc1-d1853efaf6d3 | | | preferred = False | | | is_admin_only = True | | | id = 26f55e4c-6edc-4e55-8c55-c62b7db1aa9f | | | share_instance_id = 4c00cb49-51d9-478e-abc1-d1853efaf6d3 | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | share_group_id | None | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | e744ca47-0931-4e81-9d9f-2ead7d7c1640 | | is_public | False | | task_state | None | | snapshot_support | True | | id | e73ebcd3-4764-44f0-9b42-fab5cf34a58b | | size | 1 | | name | mysharefromsnap | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:41:36.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+ Delete share ~~~~~~~~~~~~ #. Delete a share. .. code-block:: console $ manila delete mysharefromsnap #. List shares. .. code-block:: console $ manila list +--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+ | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | myshare | 1 | NFS | available | False | default | nosb-devstack@london#LONDON | nova | | e73ebcd3-4764-44f0-9b42-fab5cf34a58b | mysharefromsnap | 1 | NFS | deleting | False | default | nosb-devstack@london#LONDON | nova | +--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+ The share is being deleted. Delete snapshot ~~~~~~~~~~~~~~~ #. List snapshots before deleting. .. code-block:: console $ manila snapshot-list +--------------------------------------+--------------------------------------+-----------+------------+------------+ | ID | Share ID | Status | Name | Share Size | +--------------------------------------+--------------------------------------+-----------+------------+------------+ | e744ca47-0931-4e81-9d9f-2ead7d7c1640 | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | available | mysnapshot | 1 | +--------------------------------------+--------------------------------------+-----------+------------+------------+ #. Delete a snapshot. .. code-block:: console $ manila snapshot-delete mysnapshot #. List snapshots after deleting. .. 
code-block:: console $ manila snapshot-list +----+----------+--------+------+------------+ | ID | Share ID | Status | Name | Share Size | +----+----------+--------+------+------------+ +----+----------+--------+------+------------+ The snapshot is deleted. Extend share ~~~~~~~~~~~~ #. Extend share. .. code-block:: console $ manila extend myshare 2 #. Show the share while it is being extended. .. code-block:: console $ manila show myshare +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | extending | | share_type_name | default | | description | My Manila share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = False | | | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = True | | | id = 6921e862-88bc-49a5-a2df-efeed9acd583 | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | share_group_id | None | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 1 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+ #. Show the share after it is extended. .. 
code-block:: console $ manila show myshare +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | available | | share_type_name | default | | description | My Manila share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = False | | | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = True | | | id = 6921e862-88bc-49a5-a2df-efeed9acd583 | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | share_group_id | None | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 2 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+ Shrink share ~~~~~~~~~~~~ #. Shrink a share. .. code-block:: console $ manila shrink myshare 1 #. Show the share while it is being shrunk. .. code-block:: console $ manila show myshare +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | shrinking | | share_type_name | default | | description | My Manila share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = False | | | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = True | | | id = 6921e862-88bc-49a5-a2df-efeed9acd583 | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | share_group_id | None | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 2 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+ #. Show the share after it is being shrunk. .. 
code-block:: console $ manila show myshare +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | available | | share_type_name | default | | description | My Manila share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = False | | | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = True | | | id = 6921e862-88bc-49a5-a2df-efeed9acd583 | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | share_group_id | None | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 1 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+ manila-10.0.0/doc/source/user/index.rst0000664000175000017500000000130013656750227017746 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. User ---- .. toctree:: :maxdepth: 1 API create-and-manage-shares troubleshooting-asynchronous-failures manila-10.0.0/doc/source/user/troubleshooting-asynchronous-failures.rst0000664000175000017500000006763013656750227026451 0ustar zuulzuul00000000000000===================================== Troubleshooting asynchronous failures ===================================== The Shared File Systems service performs many user actions asynchronously. For example, when a new share is created, the request is immediately acknowledged with a response containing the metadata of the share. Users can then query the resource and check the ``status`` attribute of the share. Usually an ``...ing`` status indicates that actions are performed asynchronously. For example, a new share's ``status`` attribute is set to ``creating`` by the service. If these asynchronous operations fail, the resource's status will be set to ``error``. More information about the error can be obtained with the help of the CLI client. Scenario ~~~~~~~~ In this example, the user wants to create a share to host software libraries on several virtual machines. The example deliberately introduces two share creation failures to illustrate how to use the command line to retrieve user support messages. #. 
In order to create a share, you need to specify the share type that meets your requirements. Cloud administrators create share types; see these available share types: .. code-block:: console clouduser1@client:~$ manila type-list +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs | Description | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | dhss_false | public | YES | driver_handles_share_servers : False | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | | 277c1089-127f-426e-9b12-711845991ea1 | dhss_true | public | - | driver_handles_share_servers : True | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ In this example, two share types are available. #. To use a share type that specifies driver_handles_share_servers=True capability, you must create a "share network" on which to export the share. .. code-block:: console clouduser1@client:~$ openstack subnet list +--------------------------------------+---------------------+--------------------------------------+---------------------+ | ID | Name | Network | Subnet | +--------------------------------------+---------------------+--------------------------------------+---------------------+ | 78c6ac57-bba7-4922-ab81-16cde31c2d06 | private-subnet | 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 | 10.0.0.0/26 | | a344682c-718d-4825-a87a-3622b4d3a771 | ipv6-private-subnet | 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 | fd36:18fc:a8e9::/64 | +--------------------------------------+---------------------+--------------------------------------+---------------------+ #. Create a "share network" from a private tenant network: .. code-block:: console clouduser1@client:~$ manila share-network-create --name mynet --neutron-net-id 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 --neutron-subnet-id 78c6ac57-bba7-4922-ab81-16cde31c2d06 +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | network_type | None | | name | mynet | | segmentation_id | None | | created_at | 2018-10-09T21:32:22.485399 | | neutron_subnet_id | 78c6ac57-bba7-4922-ab81-16cde31c2d06 | | updated_at | None | | mtu | None | | gateway | None | | neutron_net_id | 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 | | ip_version | None | | cidr | None | | project_id | cadd7139bc3148b8973df097c0911016 | | id | 0b0fc320-d4b5-44a1-a1ae-800c56de550c | | description | None | +-------------------+--------------------------------------+ clouduser1@client:~$ manila share-network-list +--------------------------------------+-------+ | id | name | +--------------------------------------+-------+ | 6c7ef9ef-3591-48b6-b18a-71a03059edd5 | mynet | +--------------------------------------+-------+ #. Create the share: .. 
code-block:: console clouduser1@client:~$ manila create nfs 1 --name software_share --share-network mynet --share-type dhss_true +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | status | creating | | share_type_name | dhss_true | | description | None | | availability_zone | None | | share_network_id | 6c7ef9ef-3591-48b6-b18a-71a03059edd5 | | share_server_id | None | | share_group_id | None | | host | | | revert_to_snapshot_support | False | | access_rules_status | active | | snapshot_id | None | | create_share_from_snapshot_support | False | | is_public | False | | task_state | None | | snapshot_support | False | | id | 243f3a51-0624-4bdd-950e-7ed190b53b67 | | size | 1 | | source_share_group_snapshot_member_id | None | | user_id | 61aef4895b0b41619e67ae83fba6defe | | name | software_share | | share_type | 277c1089-127f-426e-9b12-711845991ea1 | | has_replicas | False | | replication_type | None | | created_at | 2018-10-09T21:12:21.000000 | | share_proto | NFS | | mount_snapshot_support | False | | project_id | cadd7139bc3148b8973df097c0911016 | | metadata | {} | +---------------------------------------+--------------------------------------+ #. View the status of the share: .. code-block:: console clouduser1@client:~$ manila list +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | 243f3a51-0624-4bdd-950e-7ed190b53b67 | software_share | 1 | NFS | error | False | dhss_true | | None | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ In this example, an error occurred during the share creation. #. To view the generated user message, use the ``message-list`` command. Use ``--resource-id`` to filter messages for a specific share resource. .. code-block:: console clouduser1@client:~$ manila message-list +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | 7d411c3c-46d9-433f-9e21-c04ca30b209c | SHARE | 243f3a51-0624-4bdd-950e-7ed190b53b67 | 001 | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | 008 | 2018-10-09T21:12:21.000000 | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ In User Message column, you can see that the Shared File System service failed to create the share because of a capabilities mismatch. #. 
To view more information, use the ``message-show`` command, followed by the ID of the message from the message-list command: .. code-block:: console clouduser1@client:~$ manila message-show 7d411c3c-46d9-433f-9e21-c04ca30b209c +---------------+----------------------------------------------------------------------------------------------------------+ | Property | Value | +---------------+----------------------------------------------------------------------------------------------------------+ | request_id | req-0a875292-6c52-458b-87d4-1f945556feac | | detail_id | 008 | | expires_at | 2018-11-08T21:12:21.000000 | | resource_id | 243f3a51-0624-4bdd-950e-7ed190b53b67 | | user_message | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | | created_at | 2018-10-09T21:12:21.000000 | | message_level | ERROR | | id | 7d411c3c-46d9-433f-9e21-c04ca30b209c | | resource_type | SHARE | | action_id | 001 | +---------------+----------------------------------------------------------------------------------------------------------+ As the cloud user, you know the related specs your share type has, so you can review the share types available. The difference between the two share types is the value of driver_handles_share_servers: .. code-block:: console clouduser1@client:~$ manila type-list +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs | Description | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | dhss_false | public | YES | driver_handles_share_servers : False | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | | 277c1089-127f-426e-9b12-711845991ea1 | dhss_true | public | - | driver_handles_share_servers : True | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ #. Create a share with the other available share type: .. 
code-block:: console clouduser1@client:~$ manila create nfs 1 --name software_share --share-network mynet --share-type dhss_false +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | status | creating | | share_type_name | dhss_false | | description | None | | availability_zone | None | | share_network_id | 6c7ef9ef-3591-48b6-b18a-71a03059edd5 | | share_group_id | None | | revert_to_snapshot_support | False | | access_rules_status | active | | snapshot_id | None | | create_share_from_snapshot_support | True | | is_public | False | | task_state | None | | snapshot_support | True | | id | 2d03d480-7cba-4122-ac9d-edc59c8df698 | | size | 1 | | source_share_group_snapshot_member_id | None | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | name | software_share | | share_type | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | | has_replicas | False | | replication_type | None | | created_at | 2018-10-09T21:24:40.000000 | | share_proto | NFS | | mount_snapshot_support | False | | project_id | cadd7139bc3148b8973df097c0911016 | | metadata | {} | +---------------------------------------+--------------------------------------+ In this example, the second share creation attempt fails. #. View the user support message: .. code-block:: console clouduser1@client:~$ manila list +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | 2d03d480-7cba-4122-ac9d-edc59c8df698 | software_share | 1 | NFS | error | False | dhss_false | | nova | | 243f3a51-0624-4bdd-950e-7ed190b53b67 | software_share | 1 | NFS | error | False | dhss_true | | None | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ clouduser1@client:~$ manila message-list +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ed7e02a2-0cdb-4ff9-b64f-e4d2ec1ef069 | SHARE | 2d03d480-7cba-4122-ac9d-edc59c8df698 | 002 | create: Driver does not expect share-network to be provided with current configuration. | 003 | 2018-10-09T21:24:40.000000 | | 7d411c3c-46d9-433f-9e21-c04ca30b209c | SHARE | 243f3a51-0624-4bdd-950e-7ed190b53b67 | 001 | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. 
| 008 | 2018-10-09T21:12:21.000000 | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ You can see that the service does not expect a share network for the share type used. Without consulting the administrator, you can discover that the administrator has not made available a storage back end that supports exporting shares directly on to your private neutron network. #. Create the share without the ``--share-network`` parameter: .. code-block:: console clouduser1@client:~$ manila create nfs 1 --name software_share --share-type dhss_false +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | status | creating | | share_type_name | dhss_false | | description | None | | availability_zone | None | | share_network_id | None | | share_group_id | None | | revert_to_snapshot_support | False | | access_rules_status | active | | snapshot_id | None | | create_share_from_snapshot_support | True | | is_public | False | | task_state | None | | snapshot_support | True | | id | 4d3d7fcf-5fb7-4209-90eb-9e064659f46d | | size | 1 | | source_share_group_snapshot_member_id | None | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | name | software_share | | share_type | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | | has_replicas | False | | replication_type | None | | created_at | 2018-10-09T21:25:40.000000 | | share_proto | NFS | | mount_snapshot_support | False | | project_id | cadd7139bc3148b8973df097c0911016 | | metadata | {} | +---------------------------------------+--------------------------------------+ #. To ensure that the share was created successfully, use the `manila list` command: .. code-block:: console clouduser1@client:~$ manila list +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+ | 4d3d7fcf-5fb7-4209-90eb-9e064659f46d | software_share | 1 | NFS | available | False | dhss_false | | nova | | 2d03d480-7cba-4122-ac9d-edc59c8df698 | software_share | 1 | NFS | error | False | dhss_false | | nova | | 243f3a51-0624-4bdd-950e-7ed190b53b67 | software_share | 1 | NFS | error | False | dhss_true | | None | +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+ #. Delete shares that failed to be created and corresponding support messages: .. 
code-block:: console clouduser1@client:~$ manila delete 2d03d480-7cba-4122-ac9d-edc59c8df698 243f3a51-0624-4bdd-950e-7ed190b53b67 clouduser1@client:~$ manila message-list +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ed7e02a2-0cdb-4ff9-b64f-e4d2ec1ef069 | SHARE | 2d03d480-7cba-4122-ac9d-edc59c8df698 | 002 | create: Driver does not expect share-network to be provided with current configuration. | 003 | 2018-10-09T21:24:40.000000 | | 7d411c3c-46d9-433f-9e21-c04ca30b209c | SHARE | 243f3a51-0624-4bdd-950e-7ed190b53b67 | 001 | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | 008 | 2018-10-09T21:12:21.000000 | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ clouduser1@client:~$ manila message-delete ed7e02a2-0cdb-4ff9-b64f-e4d2ec1ef069 7d411c3c-46d9-433f-9e21-c04ca30b209c clouduser1@client:~$ manila message-list +----+---------------+-------------+-----------+--------------+-----------+------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +----+---------------+-------------+-----------+--------------+-----------+------------+ +----+---------------+-------------+-----------+--------------+-----------+------------+ manila-10.0.0/doc/source/user/API.rst0000664000175000017500000000040413656750227017254 0ustar zuulzuul00000000000000========================== Manila API Documentation ========================== A full specification of the API for interacting with the shared file systems service may be found in the `API reference `_. manila-10.0.0/doc/source/index.rst0000664000175000017500000000675213656750227017010 0ustar zuulzuul00000000000000.. Copyright 2010-2012 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =================================================== OpenStack Shared Filesystems (manila) documentation =================================================== What is Manila? --------------- Manila is the OpenStack Shared Filesystems service for providing Shared Filesystems as a service. 
Some of the goals of Manila are to be/have: * **Component based architecture**: Quickly add new behaviors * **Highly available**: Scale to very serious workloads * **Fault-Tolerant**: Isolated processes avoid cascading failures * **Recoverable**: Failures should be easy to diagnose, debug, and rectify * **Open Standards**: Be a reference implementation for a community-driven api For end users ------------- As an end user of Manila, you'll use Manila to create a remote file system with either tools or the API directly: `python-manilaclient `_, or by directly using the `REST API `_. Tools for using Manila ~~~~~~~~~~~~~~~~~~~~~~ Contents: .. toctree:: :maxdepth: 1 user/index Using the Manila API ~~~~~~~~~~~~~~~~~~~~ All features of Manila are exposed via a REST API that can be used to build more complicated logic or automation with Manila. This can be consumed directly or via various SDKs. The following resources can help you get started consuming the API directly: * `Manila API `_ * :doc:`Manila microversion history ` For operators ------------- This section has details for deploying and maintaining Manila services. Installing Manila ~~~~~~~~~~~~~~~~~ Manila can be configured standalone using the configuration setting ``auth_strategy = noauth``, but in most cases you will want to at least have the `Keystone `_ Identity service and other `OpenStack services `_ installed. .. toctree:: :maxdepth: 1 install/index Administrating Manila ~~~~~~~~~~~~~~~~~~~~~ Contents: .. toctree:: :maxdepth: 1 admin/index Reference ~~~~~~~~~ Contents: .. toctree:: :maxdepth: 1 configuration/index cli/index Additional resources ~~~~~~~~~~~~~~~~~~~~ * `Manila release notes `_ For contributors ---------------- If you are a ``new contributor`` :doc:`start here `. .. toctree:: :maxdepth: 1 contributor/index API Microversions Additional reference ~~~~~~~~~~~~~~~~~~~~ Contents: .. toctree:: :maxdepth: 1 reference/index .. only:: html Additional reference ~~~~~~~~~~~~~~~~~~~~ Contents: * :ref:`genindex` manila-10.0.0/doc/source/contributor/0000775000175000017500000000000013656750362017507 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/contributor/development.environment.rst0000664000175000017500000001315013656750227025126 0ustar zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Setting Up a Development Environment ==================================== This page describes how to setup a working Python development environment that can be used in developing manila on Ubuntu, Fedora or Mac OS X. These instructions assume you're already familiar with git. Refer to `Getting the code`_ for additional information. .. _Getting the code: http://wiki.openstack.org/GettingTheCode Following these instructions will allow you to run the manila unit tests. 
If you want to be able to run manila (i.e., create NFS/CIFS shares), you will also need to install dependent projects: nova, neutron, cinder and glance. For this purpose 'devstack' project can be used (A documented shell script to build complete OpenStack development environments). You can check out `Setting up a development environment with devstack`_ for instructions on how to enable manila on devstack. .. _Setting up a development environment with devstack: https://docs.openstack.org/manila/latest/contributor/development-environment-devstack.html Virtual environments -------------------- Manila development uses `virtualenv `__ to track and manage Python dependencies while in development and testing. This allows you to install all of the Python package dependencies in a virtual environment or "virtualenv" (a special subdirectory of your manila directory), instead of installing the packages at the system level. .. note:: Virtualenv is useful for running the unit tests, but is not typically used for full integration testing or production usage. Linux Systems ------------- .. note:: This section is tested for manila on Ubuntu and Fedora-based distributions. Feel free to add notes and change according to your experiences or operating system. Install the prerequisite packages. - On Ubuntu/Debian:: sudo apt-get install python-dev libssl-dev python-pip \ libmysqlclient-dev libxml2-dev libxslt-dev libpq-dev git \ git-review libffi-dev gettext graphviz libjpeg-dev - On Fedora 21/RHEL7/Centos7:: sudo yum install python-devel openssl-devel python-pip mysql-devel \ libxml2-devel libxslt-devel postgresql-devel git git-review \ libffi-devel gettext graphviz gcc libjpeg-turbo-devel \ python-tox python3-devel python3 .. note:: If using RHEL and yum reports "No package python-pip available" and "No package git-review available", use the EPEL software repository. Instructions can be found at ``_. - On Fedora 22 and higher:: sudo dnf install python-devel openssl-devel python-pip mysql-devel \ libxml2-devel libxslt-devel postgresql-devel git git-review \ libffi-devel gettext graphviz gcc libjpeg-turbo-devel \ python-tox python3-devel python3 .. note:: Additionally, if using Fedora 23, ``redhat-rpm-config`` package should be installed so that development virtualenv can be built successfully. Mac OS X Systems ---------------- Install virtualenv:: sudo easy_install virtualenv Check the version of OpenSSL you have installed:: openssl version If you have installed OpenSSL 1.0.0a, which can happen when installing a MacPorts package for OpenSSL, you will see an error when running ``manila.tests.auth_unittest.AuthTestCase.test_209_can_generate_x509``. The stock version of OpenSSL that ships with Mac OS X 10.6 (OpenSSL 0.9.8l) or Mac OS X 10.7 (OpenSSL 0.9.8r) works fine with manila. Getting the code ---------------- Grab the code:: git clone https://opendev.org/openstack/manila cd manila Running unit tests ------------------ The preferred way to run the unit tests is using ``tox``. Tox executes tests in isolated environment, by creating separate virtualenv and installing dependencies from the ``requirements.txt`` and ``test-requirements.txt`` files, so the only package you install is ``tox`` itself:: sudo pip install tox Run the unit tests with:: tox -e py{python-version} Example:: tox -epy36 See :doc:`unit_tests` for more details. .. 
_virtualenv: Manually installing and using the virtualenv -------------------------------------------- You can also manually install the virtual environment:: tox -epy36 --notest This will install all of the Python packages listed in the ``requirements.txt`` file into your virtualenv. To activate the Manila virtualenv you can run:: $ source .tox/py36/bin/activate To exit your virtualenv, just type:: $ deactivate Or, if you prefer, you can run commands in the virtualenv on a case by case basis by running:: $ tox -e venv -- Contributing Your Work ---------------------- Once your work is complete you may wish to contribute it to the project. Manila uses the Gerrit code review system. For information on how to submit your branch to Gerrit, see GerritWorkflow_. .. _GerritWorkflow: https://docs.openstack.org/infra/manual/developers.html#development-workflow manila-10.0.0/doc/source/contributor/driver_requirements.rst0000664000175000017500000002247313656750227024347 0ustar zuulzuul00000000000000.. Copyright (c) 2015 Hitachi Data Systems All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Manila minimum requirements and features since Mitaka ===================================================== In order for a driver to be accepted into manila code base, there are certain minimum requirements and features that must be met, in order to ensure interoperability and standardized manila functionality among cloud providers. At least one driver mode (:term:`DHSS` true/false) -------------------------------------------------- Driver modes determine if the driver is managing network resources (:term:`DHSS` = true) in an automated way, in order to segregate tenants and private networks by making use of manila Share Networks, or if it is up to the administrator to manually configure all networks (:term:`DHSS` = false) and be responsible for segregation, if that is desired. At least one driver mode must be supported. In :term:`DHSS` = true mode, Share Server entities are used, so the driver must implement functions that setup and teardown such servers. At least one file system sharing protocol ----------------------------------------- In order to serve shares as a shared file system service, the driver must support at least one file system sharing protocol, which can be a new protocol or one of the currently supported protocols. The current list of supported protocols is as follows: - NFS - CIFS - GlusterFS - HDFS - MapRFS - CephFS Access rules ------------ Access rules control how shares are accessible, by whom, and what the level of access is. Access rule operations include allowing access and denying access to a given share. The authentication type should be based on IP, User and/or Certificate. Drivers must support read-write and read-only access levels for each supported protocol, either through individual access rules or separate export locations. Shares ------ Share servicing is the core functionality of a shared file system service, so a driver must be able to create and delete shares. 
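For illustration only, the sketch below shows the two mandatory share
servicing calls on a driver subclass. The class itself is hypothetical; the
base class and method signatures reflect the in-tree driver interface
(``manila.share.driver.ShareDriver``) at the time of writing and should be
verified against the current code. All backend-specific provisioning logic is
intentionally omitted::

    from manila.share import driver


    class ExampleShareDriver(driver.ShareDriver):
        """Illustrative NFS driver running in DHSS=False mode."""

        def __init__(self, *args, **kwargs):
            # The first positional argument tells manila whether this
            # driver handles share servers (DHSS); this sketch does not.
            super(ExampleShareDriver, self).__init__(False, *args, **kwargs)

        def create_share(self, context, share, share_server=None):
            # Provision the filesystem on the backend (not shown) and
            # return its export location so manila can expose it to users.
            return '192.0.2.10:/shares/%s' % share['id']

        def delete_share(self, context, share, share_server=None):
            # Remove the backend filesystem; this should tolerate shares
            # that were already cleaned up out-of-band.
            pass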
Share extending --------------- In order to best satisfy cloud service requirements, shares must be elastic, so drivers must implement a share extend function that allows shares' size to be increased. Capabilities ------------ In order for manila to function accordingly to the driver being used, the driver must provide a set of information to manila, known as capabilities, as follows: - share_backend_name: a name for the backend; - driver_handles_share_servers: driver mode, whether this driver instance handles share servers, possible values are true or false; - vendor_name: driver vendor name; - driver_version: current driver instance version; - storage_protocol: list of shared file system protocols supported by this driver instance; - total_capacity_gb: total amount of storage space provided, in GB; - free_capacity_gb: amount of storage space available for use, in GB; - reserved_percentage: percentage of total storage space to be kept from being used. Certain features, if supported by drivers, need to be reported in order to function correctly in manila, such as: - dedupe: whether the backend supports deduplication; - compression: whether the backend supports compressed shares; - thin_provisioning: whether the backend is overprovisioning shares; - pools: list of storage pools managed by this driver instance; - qos: whether the backend supports quality of service for shares; - replication_domain: string specifying a common group name for all backends that can replicate between each other; - replication_type: string specifying the type of replication supported by the driver. Can be one of ('readable', 'writable' or 'dr'). .. note:: for more information please see https://docs.openstack.org/manila/latest/admin/capabilities_and_extra_specs.html Continuous Integration systems ------------------------------ Every driver vendor must supply a CI system that tests its drivers continuously for each patch submitted to OpenStack gerrit. This allows for better QA and quicker response and notification for driver vendors when a patch submitted affects an existing driver. The CI system must run all applicable tempest tests, test all patches Jenkins has posted +1 and post its test results. .. note:: for more information please see http://docs.openstack.org/infra/system-config/third_party.html Unit tests ---------- All drivers submitted must be contemplated with unit tests covering at least 90% of the code, preferably 100% if possible. Unit tests must use mock framework and be located in-tree using a structure that mirrors the functional code, such as directory names and filenames. See template below: :: manila/[tests/]path/to/brand/new/[test_]driver.py Documentation ------------- Drivers submitted must provide and maintain related documentation on openstack-manuals, containing instructions on how to properly install and configure. The intended audience for this manual is cloud operators and administrators. Also, driver maintainers must update the manila share features support mapping documentation found at https://docs.openstack.org/manila/latest/admin/share_back_ends_feature_support_mapping.html Manila optional requirements and features since Mitaka ====================================================== Additional to the minimum required features supported by manila, other optional features can be supported by drivers as they are already supported in manila and can be accessed through the API. 
Snapshots --------- Share Snapshots allow for data respective to a particular point in time to be saved in order to be used later. In manila API, share snapshots taken can only be restored by creating new shares from them, thus the original share remains unaffected. If Snapshots are supported by drivers, they must be crash-consistent. Managing/Unmanaging shares -------------------------- If :term:`DHSS` = false mode is used, then drivers may implement a function that supports reading existing shares in the backend that were not created by manila. After the previously existing share is registered in manila, it is completely controlled by manila and should not be handled externally anymore. Additionally, a function that de-registers such shares from manila but do not delete from backend may also be supported. Share shrinking --------------- Manila API supports share shrinking, thus a share can be shrunk in a similar way it can be extended, but the driver is responsible for making sure no data is compromised. Share ensuring -------------- In some situations, such as when the driver is restarted, manila attempts to perform maintenance on created shares, on the purpose of ensuring previously created shares are available and being serviced correctly. The driver can implement this function by checking shares' status and performing maintenance operations if needed, such as re-exporting. Manila experimental features since Mitaka ========================================= Some features are initially released as experimental and can be accessed by including specific additional HTTP Request headers. Those features are not recommended for production cloud environments while in experimental stage. Share Migration --------------- Shares can be migrated between different backends and pools. Manila implements migration using an approach that works for any manufacturer, but driver vendors can implement a better optimized migration function for when migration involves backends or pools related to the same vendor. Share Groups (since Ocata) -------------------------- The share groups provides the ability to manage a group of shares together. This feature is implemented at the manager level, every driver gets this feature by default. If a driver wants to override the default behavior to support additional functionalities such as consistent group snapshot, the driver vendors may report this capability as a group capability, such as: Ordered writes, Consistent snapshots, Group replication. .. note:: for more information please see `group capabilities `_ Share Replication ----------------- Replicas of shares can be created for either data protection (for disaster recovery) or for load sharing. In order to utilize this feature, drivers must report the ``replication_type`` they support as a capability and implement necessary methods. More details can be found at: https://docs.openstack.org/manila/latest/admin/shared-file-systems-share-replication.html Update "used_size" of shares ---------------------------- Drivers can update, for all the shares created on a particular backend, the consumed space in GiB. While the polling interval for drivers to update this information is configurable, drivers can choose to submit cached information as necessary, but specify a time at which this information was "gathered_at". 
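To make the expected data concrete, the sketch below reports per-share usage
in GiB together with the ``gathered_at`` timestamp mentioned above. The hook
name ``update_share_usage_size`` is believed to match the in-tree driver
interface, but the name and exact payload should be verified against the
current ``ShareDriver`` base class; the values shown are placeholders::

    from oslo_utils import timeutils


    class UsageReportingMixin(object):
        """Sketch of the per-share usage reporting described above."""

        def update_share_usage_size(self, context, shares):
            # ``shares`` are the shares manila is polling; a real driver
            # would read (possibly cached) statistics from the backend.
            gathered_at = timeutils.utcnow()
            return [
                {
                    'id': share['id'],
                    'used_size': 0.5,            # consumed space, in GiB
                    'gathered_at': gathered_at,  # when the data was gathered
                }
                for share in shares
            ]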
manila-10.0.0/doc/source/contributor/pool-aware-manila-scheduler.rst0000664000175000017500000001750313656750227025530 0ustar zuulzuul00000000000000Pool-Aware Scheduler Support ============================ https://blueprints.launchpad.net/manila/+spec/dynamic-storage-pools Manila currently sees each share backend as a whole, even if the backend consists of several smaller pools with totally different capabilities and capacities. Extending manila to support storage pools within share backends will make manila scheduling decisions smarter as it now knows the full set of capabilities of a backend. Problem Description ------------------- The provisioning decisions in manila are based on the statistics reported by backends. Any backend is assumed to be a single discrete unit with a set of capabilities and single capacity. In reality this assumption is not true for many storage providers, as their storage can be further divided or partitioned into pools to offer completely different sets of capabilities and capacities. That is, there are storage backends which are a combination of storage pools rather than a single homogeneous entity. Usually shares/snapshots can't be placed across pools on such backends. In the current implementation, an attempt is made to map a single backend to a single storage controller, and the following problems may arise: * After the scheduler selects a backend on which to place a new share, the backend may have to make a second decision about where to place the share within that backend. This logic is driver-specific and hard for admins to deal with. * The capabilities that the backend reports back to the scheduler may not apply universally. A single backend may support both SATA and SSD-based storage, but perhaps not at the same time. Backends need a way to express exactly what they support and how much space is consumed out of each type of storage. Therefore, it is important to extend manila so that it is aware of storage pools within each backend and can use them as the finest granularity for resource placement. Proposed change --------------- A pool-aware scheduler will address the need for supporting multiple pools from one storage backend. Terminology ----------- Pool A logical concept to describe a set of storage resources that can be used to serve core manila requests, e.g. shares/snapshots. This notion is almost identical to manila Share Backend, for it has similar attributes (capacity, capability). The difference is that a Pool may not exist on its own; it must reside in a Share Backend. One Share Backend can have multiple Pools but Pools do not have sub-Pools (meaning even if they have them, sub-Pools do not get to exposed to manila, yet). Each Pool has a unique name in the Share Backend namespace, which means a Share Backend cannot have two pools using same name. Design ------ The workflow in this change is simple: 1) Share Backends report how many pools and what those pools look like and are capable of to scheduler; 2) When request comes in, scheduler picks a pool that fits the need best to serve the request, it passes the request to the backend where the target pool resides; 3) Share driver gets the message and lets the target pool serve the request as scheduler instructed. To support placing resources (share/snapshot) onto a pool, these changes will be made to specific components of manila: 1. Share Backends reporting capacity/capabilities at pool level; 2. 
Scheduler filtering/weighing based on pool capacity/capability and placing shares/snapshots to a pool of a certain backend; 3. Record which backend and pool a resource is located on. Data model impact ----------------- No DB schema change involved, however, the host field of Shares table will now include pool information but no DB migration is needed. Original host field of Shares: ``HostX@BackendY`` With this change: ``HostX@BackendY#pool0`` REST API impact --------------- With pool support added to manila, there is an awkward situation where we require admin to input the exact location for shares to be imported, which must have pool info. But there is no way to find out what pools are there for backends except looking at the scheduler log. That causes a poor user experience and thus is a problem from the User's Point of View. This change simply adds a new admin-api extension to allow admin to fetch all the pool information from scheduler cache (memory), which closes the gap for end users. This extension provides two level of pool information: names only or detailed information: Pool name only: GET http://MANILA_API_ENDPOINT/v1/TENANT_ID/scheduler-stats/pools Detailed Pool info: GET http://MANILA_API_ENDPOINT/v1/TENANT_ID/scheduler-stats/pools/detail Security impact --------------- N/A Notifications impact -------------------- Host attribute of shares now includes pool information in it, consumer of notification can now extend to extract pool information if needed. Admin impact ------------ Administrators now need to suffix commands with ``#pool`` to manage shares. Other end user impact --------------------- No impact visible to the end user directly, but administrators now need to prefix commands that refer to the backend host with the concatenation of the hashtag (``#``) sign and the name of the pool (e.g. ``#poolName``) to manage shares. Other impacts might include scenarios where if a backend does not expose pools, the backend name is used as the pool name. For instance, ``HostX@BackendY#BackendY`` would be used when the driver does not expose pools. Performance Impact ------------------ The size of RPC message for each share stats report will be bigger than before (linear to the number of pools a backend has). It should not really impact the RPC facility in terms of performance and even if it did, pure text compression should easily mitigate this problem. Developer impact ---------------- For those share backends that would like to expose internal pools to manila for more flexibility, developers should update their drivers to include all pool capacities and capabilities in the share stats it reports to scheduler. Share backends without multiple pools do not need to change their implementation. 
Below is an example of new stats message having multiple pools: :: { 'share_backend_name': 'My Backend', #\ 'vendor_name': 'OpenStack', # backend level 'driver_version': '1.0', # mandatory/fixed 'storage_protocol': 'NFS/CIFS', #- stats&capabilities 'active_shares': 10, #\ 'IOPS_provisioned': 30000, # optional custom 'fancy_capability_1': 'eat', # stats & capabilities 'fancy_capability_2': 'drink', #/ 'pools': [ {'pool_name': '1st pool', #\ 'total_capacity_gb': 500, # mandatory stats for 'free_capacity_gb': 230, # pools 'allocated_capacity_gb': 270, # | 'qos': True, # | 'reserved_percentage': 0, #/ 'dying_disks': 100, #\ 'super_hero_1': 'spider-man', # optional custom 'super_hero_2': 'flash', # stats & capabilities 'super_hero_3': 'neoncat' #/ }, {'pool_name': '2nd pool', 'total_capacity_gb': 1024, 'free_capacity_gb': 1024, 'allocated_capacity_gb': 0, 'qos': False, 'reserved_percentage': 0, 'dying_disks': 200, 'super_hero_1': 'superman', 'super_hero_2': ' ', 'super_hero_2': 'Hulk', } ] } Documentation Impact -------------------- Documentation impact for changes in manila are introduced by the API changes. Also, doc changes are needed to append pool names to host names. Driver changes may also introduce new configuration options which would lead to Doc changes. manila-10.0.0/doc/source/contributor/unit_tests.rst0000664000175000017500000000474613656750227022455 0ustar zuulzuul00000000000000Unit Tests ========== Manila contains a suite of unit tests, in the manila/tests directory. Any proposed code change will be automatically rejected by the OpenStack Jenkins server if the change causes unit test failures. Running the tests ----------------- To run all unit tests simply run:: tox This will create a virtual environment, load all the packages from test-requirements.txt and run all unit tests as well as run flake8 and hacking checks against the code. You may run individual test targets, for example only unit tests, by running:: tox -e py3 Note that you can inspect the tox.ini file to get more details on the available options and what the test run does by default. Running a subset of tests ------------------------- Instead of running all tests, you can specify an individual directory, file, class, or method that contains test code. To run the tests in the ``manila/tests/scheduler`` directory:: tox -epy3 -- manila.tests.scheduler To run the tests in the `ShareManagerTestCase` class in ``manila/tests/share/test_manager.py``:: tox -epy3 -- manila.tests.share.test_manager.ShareManagerTestCase To run the `ShareManagerTestCase::test_share_manager_instance` test method in ``manila/tests/share/test_manager.py``:: tox -epy3 -- manila.tests.share.test_manager.ShareManagerTestCase.test_share_manager_instance For more information on these options and details about stestr, please see the `stestr documentation `_. Database Setup -------------- Some unit tests will use a local database. You can use ``tools/test-setup.sh`` to set up your local system the same way as it's setup in the CI environment. Gotchas ------- **Running Tests from Shared Folders** If you are running the unit tests from a shared folder, you may see tests start to fail or stop completely as a result of Python lockfile issues [#f3]_. You can get around this by manually setting or updating the following line in ``manila/tests/conf_fixture.py``:: FLAGS['lock_path'].SetDefault('/tmp') Note that you may use any location (not just ``/tmp``!) as long as it is not a shared folder. .. rubric:: Footnotes .. 
[#f1] See :doc:`development.environment` for more details about the use of virtualenv. .. [#f2] There is an effort underway to use a fake DB implementation for the unit tests. See https://lists.launchpad.net/openstack/msg05604.html .. [#f3] See Vish's comment in this bug report: https://bugs.launchpad.net/manila/+bug/882933 manila-10.0.0/doc/source/contributor/rpc.rst0000664000175000017500000003212313656750227021026 0ustar zuulzuul00000000000000.. Copyright (c) 2010 Citrix Systems, Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. AMQP and manila =============== AMQP is the messaging technology chosen by the OpenStack cloud. The AMQP broker, either RabbitMQ or Qpid, sits between any two manila components and allows them to communicate in a loosely coupled fashion. More precisely, manila components (the compute fabric of OpenStack) use Remote Procedure Calls (RPC hereinafter) to communicate to one another; however such a paradigm is built atop the publish/subscribe paradigm so that the following benefits can be achieved: * Decoupling between client and servant (such as the client does not need to know where the servant's reference is). * Full a-synchronism between client and servant (such as the client does not need the servant to run at the same time of the remote call). * Random balancing of remote calls (such as if more servants are up and running, one-way calls are transparently dispatched to the first available servant). Manila uses direct, fanout, and topic-based exchanges. The architecture looks like the one depicted in the figure below: .. image:: /images/rpc/arch.png :width: 60% .. Manila implements RPC (both request+response, and one-way, respectively nicknamed 'rpc.call' and 'rpc.cast') over AMQP by providing an adapter class which take cares of marshaling and unmarshaling of messages into function calls. Each manila service (for example Compute, Volume, etc.) create two queues at the initialization time, one which accepts messages with routing keys 'NODE-TYPE.NODE-ID' (for example compute.hostname) and another, which accepts messages with routing keys as generic 'NODE-TYPE' (for example compute). The former is used specifically when Manila-API needs to redirect commands to a specific node like 'euca-terminate instance'. In this case, only the compute node whose host's hypervisor is running the virtual machine can kill the instance. The API acts as a consumer when RPC calls are request/response, otherwise is acts as publisher only. Manila RPC Mappings ------------------- The figure below shows the internals of a message broker node (referred to as a RabbitMQ node in the diagrams) when a single instance is deployed and shared in an OpenStack cloud. Every manila component connects to the message broker and, depending on its personality (for example a compute node or a network node), may use the queue either as an Invoker (such as API or Scheduler) or a Worker (such as Compute, Volume or Network). 
Invokers and Workers do not actually exist in the manila object model, but we are going to use them as an abstraction for sake of clarity. An Invoker is a component that sends messages in the queuing system via two operations: 1) rpc.call and ii) rpc.cast; a Worker is a component that receives messages from the queuing system and reply accordingly to rcp.call operations. Figure 2 shows the following internal elements: * Topic Publisher: A Topic Publisher comes to life when an rpc.call or an rpc.cast operation is executed; this object is instantiated and used to push a message to the queuing system. Every publisher connects always to the same topic-based exchange; its life-cycle is limited to the message delivery. * Direct Consumer: A Direct Consumer comes to life if (an only if) a rpc.call operation is executed; this object is instantiated and used to receive a response message from the queuing system; Every consumer connects to a unique direct-based exchange via a unique exclusive queue; its life-cycle is limited to the message delivery; the exchange and queue identifiers are determined by a UUID generator, and are marshaled in the message sent by the Topic Publisher (only rpc.call operations). * Topic Consumer: A Topic Consumer comes to life as soon as a Worker is instantiated and exists throughout its life-cycle; this object is used to receive messages from the queue and it invokes the appropriate action as defined by the Worker role. A Topic Consumer connects to the same topic-based exchange either via a shared queue or via a unique exclusive queue. Every Worker has two topic consumers, one that is addressed only during rpc.cast operations (and it connects to a shared queue whose exchange key is 'topic') and the other that is addressed only during rpc.call operations (and it connects to a unique queue whose exchange key is 'topic.host'). * Direct Publisher: A Direct Publisher comes to life only during rpc.call operations and it is instantiated to return the message required by the request/response operation. The object connects to a direct-based exchange whose identity is dictated by the incoming message. * Topic Exchange: The Exchange is a routing table that exists in the context of a virtual host (the multi-tenancy mechanism provided by Qpid or RabbitMQ); its type (such as topic vs. direct) determines the routing policy; a message broker node will have only one topic-based exchange for every topic in manila. * Direct Exchange: This is a routing table that is created during rpc.call operations; there are many instances of this kind of exchange throughout the life-cycle of a message broker node, one for each rpc.call invoked. * Queue Element: A Queue is a message bucket. Messages are kept in the queue until a Consumer (either Topic or Direct Consumer) connects to the queue and fetch it. Queues can be shared or can be exclusive. Queues whose routing key is 'topic' are shared amongst Workers of the same personality. .. image:: /images/rpc/rabt.png :width: 60% .. RPC Calls --------- The diagram below shows the message flow during an rpc.call operation: 1. A Topic Publisher is instantiated to send the message request to the queuing system; immediately before the publishing operation, a Direct Consumer is instantiated to wait for the response message. 2. Once the message is dispatched by the exchange, it is fetched by the Topic Consumer dictated by the routing key (such as 'topic.host') and passed to the Worker in charge of the task. 3. 
Once the task is completed, a Direct Publisher is allocated to send the response message to the queuing system. 4. Once the message is dispatched by the exchange, it is fetched by the Direct Consumer dictated by the routing key (such as 'msg_id') and passed to the Invoker. .. image:: /images/rpc/flow1.png :width: 60% .. RPC Casts --------- The diagram below shows the message flow during an rpc.cast operation: 1. A Topic Publisher is instantiated to send the message request to the queuing system. 2. Once the message is dispatched by the exchange, it is fetched by the Topic Consumer dictated by the routing key (such as 'topic') and passed to the Worker in charge of the task. .. image:: /images/rpc/flow2.png :width: 60% .. AMQP Broker Load ---------------- At any given time the load of a message broker node running either Qpid or RabbitMQ is a function of the following parameters: * Throughput of API calls: The number of API calls (more precisely rpc.call ops) being served by the OpenStack cloud dictates the number of direct-based exchanges, related queues and direct consumers connected to them. * Number of Workers: There is one queue shared amongst workers with the same personality; however there are as many exclusive queues as the number of workers; the number of workers also dictates the number of routing keys within the topic-based exchange, which is shared amongst all workers. The figure below shows the status of a RabbitMQ node after manila components' bootstrap in a test environment. Exchanges and queues being created by manila components are: * Exchanges 1. manila (topic exchange) * Queues 1. compute.phantom (phantom is hostname) 2. compute 3. network.phantom (phantom is hostname) 4. network 5. share.phantom (phantom is hostname) 6. share 7. scheduler.phantom (phantom is hostname) 8. scheduler .. image:: /images/rpc/state.png :width: 60% .. RabbitMQ Gotchas ---------------- Manila uses Kombu to connect to the RabbitMQ environment. Kombu is a Python library that in turn uses AMQPLib, a library that implements the standard AMQP 0.8 at the time of writing. When using Kombu, Invokers and Workers need the following parameters in order to instantiate a Connection object that connects to the RabbitMQ server (please note that most of the following material can also be found in the Kombu documentation; it has been summarized and revised here for the sake of clarity): * Hostname: The hostname of the AMQP server. * Userid: A valid username used to authenticate to the server. * Password: The password used to authenticate to the server. * Virtual_host: The name of the virtual host to work with. This virtual host must exist on the server, and the user must have access to it. Default is "/". * Port: The port of the AMQP server. Default is 5672 (amqp). The following parameters have default values: * Insist: Insist on connecting to a server. In a configuration with multiple load-sharing servers, the Insist option tells the server that the client is insisting on a connection to the specified server. Default is False. * Connect_timeout: The timeout in seconds before the client gives up connecting to the server. The default is no timeout. * SSL: Use SSL to connect to the server. The default is False. More precisely, Consumers need the following parameters: * Connection: The above mentioned Connection object. * Queue: Name of the queue. * Exchange: Name of the exchange the queue binds to. * Routing_key: The interpretation of the routing key depends on the value of the exchange_type attribute.
* Direct exchange: If the routing key property of the message and the routing_key attribute of the queue are identical, then the message is forwarded to the queue. * Fanout exchange: Messages are forwarded to the queues bound to the exchange, even if the binding does not have a key. * Topic exchange: If the routing key property of the message matches the routing key of the queue according to a primitive pattern matching scheme, then the message is forwarded to the queue. The message routing key then consists of words separated by dots (".", like domain names), and two special characters are available; star ("*") and hash ("#"). The star matches any word, and the hash matches zero or more words. For example "*.stock.#" matches the routing keys "usd.stock" and "eur.stock.db" but not "stock.nasdaq". * Durable: This flag determines the durability of both exchanges and queues; durable exchanges and queues remain active when a RabbitMQ server restarts. Non-durable exchanges/queues (transient exchanges/queues) are purged when a server restarts. It is worth noting that AMQP specifies that durable queues cannot bind to transient exchanges. Default is True. * Auto_delete: If set, the exchange is deleted when all queues have finished using it. Default is False. * Exclusive: Exclusive queues (i.e. non-shared) may only be consumed from by the current connection. When exclusive is on, this also implies auto_delete. Default is False. * Exchange_type: AMQP defines several default exchange types (routing algorithms) that cover most of the common messaging use cases. * Auto_ack: Acknowledgment is handled automatically once messages are received. By default auto_ack is set to False, and the receiver is required to manually handle acknowledgment. * No_ack: It disables acknowledgment on the server side. This is different from auto_ack in that acknowledgment is turned off altogether. This functionality increases performance but at the cost of reliability. Messages can get lost if a client dies before it can deliver them to the application. * Auto_declare: If this is True and the exchange name is set, the exchange will be automatically declared at instantiation. Auto declare is on by default. Publishers specify most of the parameters of Consumers (though they do not specify a queue name), but they can also specify the following: * Delivery_mode: The default delivery mode used for messages. The value is an integer. The following delivery modes are supported by RabbitMQ: * 1 or "transient": The message is transient, which means it is stored in memory only and is lost if the server dies or restarts. * 2 or "persistent": The message is persistent, which means the message is stored both in memory and on disk, and therefore preserved if the server dies or restarts. The default value is 2 (persistent). During a send operation, Publishers can override the delivery mode of messages so that, for example, transient messages can be sent over a durable queue. manila-10.0.0/doc/source/contributor/api_microversion_history.rst0000664000175000017500000000011013656750227025362 0ustar zuulzuul00000000000000.. include:: ../../../manila/api/openstack/rest_api_version_history.rst manila-10.0.0/doc/source/contributor/scheduler.rst0000664000175000017500000001371113656750227022222 0ustar zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Scheduler ========= The :mod:`manila.scheduler.manager` Module ------------------------------------------ .. automodule:: manila.scheduler.manager :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.base_handler` Module ----------------------------------------------- .. automodule:: manila.scheduler.base_handler :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.host_manager` Module ----------------------------------------------- .. automodule:: manila.scheduler.host_manager :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.rpcapi` Module ----------------------------------------- .. automodule:: manila.scheduler.rpcapi :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.scheduler_options` Module ---------------------------------------------------- .. automodule:: manila.scheduler.scheduler_options :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.drivers.filter` Module ------------------------------------------------- .. automodule:: manila.scheduler.drivers.filter :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.drivers.base` Module ----------------------------------------------- .. automodule:: manila.scheduler.drivers.base :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.drivers.chance` Module ------------------------------------------------- .. automodule:: manila.scheduler.drivers.chance :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.drivers.simple` Module ------------------------------------------------- .. automodule:: manila.scheduler.drivers.simple :noindex: :members: :undoc-members: :show-inheritance: Scheduler Filters ================= The :mod:`manila.scheduler.filters.availability_zone` Filter ------------------------------------------------------------ .. automodule:: manila.scheduler.filters.availability_zone :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.filters.base` Filter ----------------------------------------------- .. automodule:: manila.scheduler.filters.base :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.filters.base_host` Filter ---------------------------------------------------- .. automodule:: manila.scheduler.filters.base_host :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.filters.capabilities` Filter ------------------------------------------------------- .. automodule:: manila.scheduler.filters.capabilities :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.filters.capacity` Filter --------------------------------------------------- .. automodule:: manila.scheduler.filters.capacity :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.filters.extra_specs_ops` Filter ---------------------------------------------------------- .. 
automodule:: manila.scheduler.filters.extra_specs_ops :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.filters.ignore_attempted_hosts` Filter ----------------------------------------------------------------- .. automodule:: manila.scheduler.filters.ignore_attempted_hosts :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.filters.json` Filter ----------------------------------------------- .. automodule:: manila.scheduler.filters.json :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.filters.retry` Filter ------------------------------------------------ .. automodule:: manila.scheduler.filters.retry :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.filters.share_replication` Filter ------------------------------------------------------------ .. automodule:: manila.scheduler.filters.share_replication :noindex: :members: :undoc-members: :show-inheritance: Scheduler Weighers ================== The :mod:`manila.scheduler.weighers.base` Weigher ------------------------------------------------- .. automodule:: manila.scheduler.weighers.base :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.weighers.base_host` Weigher ------------------------------------------------------ .. automodule:: manila.scheduler.weighers.base_host :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.weighers.capacity` Weigher ----------------------------------------------------- .. automodule:: manila.scheduler.weighers.capacity :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.scheduler.weighers.pool` Weigher ------------------------------------------------- .. automodule:: manila.scheduler.weighers.pool :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/contributor/adding_release_notes.rst0000664000175000017500000001541213656750227024402 0ustar zuulzuul00000000000000.. _adding_release_notes: Release Notes ============= What are release notes? ~~~~~~~~~~~~~~~~~~~~~~~ Release notes are important for change management within manila. Since manila follows a release cycle with milestones, release notes provide a way for the community and users to quickly grasp what changes occurred within a development milestone. To the OpenStack release management and documentation teams, release notes are a way to compile changes per milestone. These notes are published on the `OpenStack Releases website `_. Automated tooling is built around ``releasenotes`` and they get appropriately handled per release milestone, including any back-ports to stable releases. What needs a release note? ~~~~~~~~~~~~~~~~~~~~~~~~~~ * Changes that impact an upgrade, most importantly, those that require a deployer to take some action while upgrading * API changes * New APIs * Changes to the response schema of existing APIs * Changes to request/response headers * Non-trivial API changes such as response code changes from 2xx to 4xx * Deprecation of APIs or response fields * Removal of APIs * A new feature is implemented, such as a new core feature in manila, driver support for an existing manila feature or a new driver * An existing feature is deprecated * An existing feature is removed * Behavior of an existing feature has changed in a discernible way to an end user or administrator * Backend driver interface changes * A security bug is fixed * New configuration option is added What does not need a release note? 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * A code change that doesn't change the general behavior of any feature such as code refactor or logging changes. One case of this could be the exercise that all drivers went through by removing ``allow_access`` and ``deny_access`` interfaces in favor of an ``update_access`` interface as suggested in the Mitaka release. * Tempest or unit test coverage enhancement * Changes to response message with API failure codes 4xx and 5xx * Any change submitted with a justified TrivialFix flag added in the commit message * Adding or changing documentation within in-tree documentation guides How do I add a release note? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We use `Reno `_ to create and manage release notes. The new subcommand combines a random suffix with a "slug" value to make the new file with a unique name that is easy to identify again later. To create a release note for your change, use: .. code-block:: console $ reno new slug-goes-here If reno is not installed globally on your system, you can use it from venv of your manila's tox. Prior to running the above command, run: .. code-block:: console $ source .tox/py3/bin/activate Or directly as a one-liner, with: .. code-block:: console $ tox -e venv -- reno new slug-goes-here .. note:: When you are adding a bug-fix reno, name your file using the template: "bug--slug-goes-here". Then add the notes in ``yaml`` format in the file created. Pay attention to the type of section. The following are general sections to use: prelude General comments about the change. The prelude from all notes in a release are combined, in note order, to produce a single prelude introducing the release. features New features introduced issues A list of known issues with respect to the change being introduced. For example, if the new feature in the change is experimental or known to not work in some cases, it should be mentioned here. upgrade A list of upgrade notes in the release. Any removals that affect upgrades are to be noted here. deprecations Any features, APIs, configuration options that the change has deprecated. Deprecations are not removals. Deprecations suggest that there will be support for a certain timeline. Deprecation should allow time for users to make necessary changes for the removal to happen in a future release. It is important to note the timeline of deprecation in this section. critical A list of *fixed* critical bugs (descriptions only). security A list of *fixed* security issues (descriptions only). fixes A list of other *fixed* bugs (descriptions only). other Other notes that are important but do not fall into any of the given categories. :: --- prelude: > Replace this text with content to appear at the top of the section for this change. features: - List new features here, or remove this section. issues: - List known issues here, or remove this section. upgrade: - List upgrade notes here, or remove this section. deprecations: - List deprecation notes here, or remove this section critical: - Add critical notes here, or remove this section. security: - Add security notes here, or remove this section. fixes: - Add normal bug fixes here, or remove this section. other: - Add other notes here, or remove this section. Dos and Don'ts ~~~~~~~~~~~~~~ * Release notes need to be succinct. 
Short and unambiguous descriptions are preferred * Write in past tense, unless you are writing an imperative statement * Do not have blank sections in the file * Do not include code or links * Avoid special rst formatting unless absolutely necessary * Always prefer including a release note in the same patch * Release notes are not a replacement for developer/user/admin documentation * Release notes are not a way of conveying behavior of any features or usage of any APIs * Limit a release note to fewer than 2-3 lines per change per section * OpenStack prefers atomic changes. So remember that your change may need the fewest sections possible * General writing guidelines can be found `here `_ * Proofread your note. Pretend you are a user or a deployer who is reading the note after a milestone or a release has been cut Examples ~~~~~~~~ The following need only be considered as directions for formatting. They are **not** fixes or features in manila. * *fix-failing-automount-23aef89a7e98c8.yaml* .. code-block:: yaml --- deprecations: - displaying mount options via the array listing API is deprecated. fixes: - users can mount shares on debian systems with kernel version 32.2.41.* with share-mount API * *add-librsync-backup-plugin-for-m-bkup-41cad17c1498a3.yaml* .. code-block:: yaml --- features: - librsync support added for NFS incremental backup upgrade: - Copy new rootwrap.d/librsync.filters file into /etc/manila/rootwrap.d directory. issues: - librsync has not been tested thoroughly in all operating systems that manila is qualified for. m-bkup is an experimental feature. manila-10.0.0/doc/source/contributor/threading.rst0000664000175000017500000000523013656750227022206 0ustar zuulzuul00000000000000Threading model =============== All OpenStack services use *green thread* model of threading, implemented through using the Python `eventlet `_ and `greenlet `_ libraries. Green threads use a cooperative model of threading: thread context switches can only occur when specific eventlet or greenlet library calls are made (e.g., sleep, certain I/O calls). From the operating system's point of view, each OpenStack service runs in a single thread. The use of green threads reduces the likelihood of race conditions, but does not completely eliminate them. In some cases, you may need to use the ``@utils.synchronized(...)`` decorator to avoid races. In addition, since there is only one operating system thread, a call that blocks that main thread will block the entire process. Yielding the thread in long-running tasks ----------------------------------------- If a code path takes a long time to execute and does not contain any methods that trigger an eventlet context switch, the long-running thread will block any pending threads. This scenario can be avoided by adding calls to the eventlet sleep method in the long-running code path. The sleep call will trigger a context switch if there are pending threads, and using an argument of 0 will avoid introducing delays in the case that there is only a single green thread:: from eventlet import greenthread ... greenthread.sleep(0) In current code, time.sleep(0) does the same thing as greenthread.sleep(0) if time module is patched through eventlet.monkey_patch(). To be explicit, we recommend contributors to use ``greenthread.sleep()`` instead of ``time.sleep()``. MySQL access and eventlet ------------------------- There are some MySQL DB API drivers for oslo.db, like `PyMySQL`_, MySQL-python, etc. 
PyMySQL is the default MySQL DB API driver for oslo.db, and it works well with eventlet. MySQL-python uses an external C library for accessing the MySQL database. Since eventlet cannot use monkey-patching to intercept blocking calls in a C library, queries to the MySQL database will block the main thread of a service. The Diablo release contained a thread-pooling implementation that did not block, but this implementation resulted in a `bug`_ and was removed. See this `mailing list thread`_ for a discussion of this issue, including a discussion of the `impact on performance`_. .. _bug: https://bugs.launchpad.net/manila/+bug/838581 .. _mailing list thread: https://lists.launchpad.net/openstack/msg08118.html .. _impact on performance: https://lists.launchpad.net/openstack/msg08217.html .. _PyMySQL: https://wiki.openstack.org/wiki/PyMySQL_evaluation manila-10.0.0/doc/source/contributor/addmethod.openstackapi.rst0000664000175000017500000000540713656750227024660 0ustar zuulzuul00000000000000.. Copyright 2010-2011 OpenStack LLC All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Adding a Method to the OpenStack Manila API =========================================== The interface to manila is a RESTful API. REST stands for Representational State Transfer and provides an architecture "style" for distributed systems using HTTP for transport. Figure out a way to express your request and response in terms of resources that are being created, modified, read, or destroyed. Manila's API aims to conform to the `guidelines `_ set by OpenStack API SIG. Routing ------- To map URLs to controllers+actions, manila uses the Routes package. See the `routes package documentation `_ for more information. URLs are mapped to "action" methods on "controller" classes in ``manila/api//router.py``. These are two methods of the routes package that are used to perform the mapping and the routing: - mapper.connect() lets you map a single URL to a single action on a controller. - mapper.resource() connects many standard URLs to actions on a controller. Controllers and actions ----------------------- Controllers live in ``manila/api/v1`` and ``manila/api/v2``. See ``manila/api/v1/shares.py`` for an example. Action methods take parameters that are sucked out of the URL by mapper.connect() or .resource(). The first two parameters are self and the WebOb request, from which you can get the req.environ, req.body, req.headers, etc. Actions return a dictionary, and wsgi.Controller serializes that to JSON. Faults ------ If you need to return a non-200, you should return faults.Fault(webob.exc .HTTPNotFound()) replacing the exception as appropriate. Evolving the API ---------------- The ``v1`` version of the manila API has been deprecated. The ``v2`` version of the API supports micro versions. So all changes to the v2 API strive to maintain stability at any given API micro version, so consumers can safely rely on a specific micro version of the API never to change the request and response semantics. 
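For example (the endpoint, project ID, token and version value below are placeholders), a client opts into a specific micro version by sending the ``X-OpenStack-Manila-API-Version`` header with each request, and can then rely on the request and response semantics of that version:

.. code-block:: console

    $ curl -s http://controller:8786/v2/$PROJECT_ID/shares/detail \
          -H "X-Auth-Token: $TOKEN" \
          -H "X-OpenStack-Manila-API-Version: 2.51"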
Read more about :doc:`API Microversions ` to understand how stability and backwards compatibility are maintained. manila-10.0.0/doc/source/contributor/share.rst0000664000175000017500000000226413656750227021347 0ustar zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Shared Filesystems ================== The :mod:`manila.share.manager` Module -------------------------------------- .. automodule:: manila.share.manager :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.share.driver` Module ------------------------------------- .. automodule:: manila.share.driver :noindex: :members: :undoc-members: :show-inheritance: :exclude-members: FakeAOEDriver manila-10.0.0/doc/source/contributor/share_replication.rst0000664000175000017500000003331113656750227023735 0ustar zuulzuul00000000000000.. Copyright (c) 2016 Goutham Pacha Ravi Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================= Share Replication ================= As of the Mitaka release of OpenStack, :term:`manila` supports replication of shares between different pools for drivers that operate with ``driver_handles_share_servers=False`` mode. These pools may be on different backends or within the same backend. This feature can be used as a disaster recovery solution or as a load sharing mirroring solution depending upon the replication style chosen, the capability of the driver and the configuration of backends. This feature assumes and relies on the fact that share drivers will be responsible for communicating with ALL storage controllers necessary to achieve any replication tasks, even if that involves sending commands to other storage controllers in other Availability Zones (or AZs). End users would be able to create and manage their replicas, alongside their shares and snapshots. Storage availability zones and replication domains ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Replication is supported within the same availability zone, but in an ideal solution, an Availability Zone should be perceived as a single failure domain. So this feature provides the most value in an inter-AZ replication use case. The ``replication_domain`` option is a backend specific StrOpt option to be used within ``manila.conf``. The value can be any ASCII string. Two backends that can replicate between each other would have the same ``replication_domain``. 
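As a purely illustrative ``manila.conf`` sketch (the section names, driver path and option values are hypothetical), two back ends that are allowed to replicate between each other could be configured as follows:

.. code-block:: ini

    [london]
    share_backend_name = LONDON
    share_driver = manila.share.drivers.example.driver.ExampleShareDriver
    driver_handles_share_servers = False
    replication_domain = replication_domain_1
    backend_availability_zone = manila-zone-1

    [paris]
    share_backend_name = PARIS
    share_driver = manila.share.drivers.example.driver.ExampleShareDriver
    driver_handles_share_servers = False
    replication_domain = replication_domain_1
    backend_availability_zone = manila-zone-2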
This comes from the premise that manila expects Share Replication to be performed between backends that have similar characteristics. When scheduling new replicas, the scheduler takes into account the ``replication_domain`` option to match similar backends. It also ensures that only one replica can be scheduled per pool. When backends report multiple pools, manila would allow for replication between two pools on the same backend. The ``replication_domain`` option is meant to be used in conjunction with the ``storage_availability_zone`` (or back end specific ``backend_availability_zone``) option to utilize this solution for Data Protection/Disaster Recovery. Replication types ~~~~~~~~~~~~~~~~~ When creating a share that is meant to have replicas in the future, the user will use a ``share_type`` with an extra_spec, :term:`replication_type` set to a valid replication type that manila supports. Drivers must report the replication type that they support as the :term:`replication_type` capability during the ``_update_share_stats()`` call. Three types of replication are currently supported: **writable** Synchronously replicated shares where all replicas are writable. Promotion is not supported and not needed. **readable** Mirror-style replication with a primary (writable) copy and one or more secondary (read-only) copies which can become writable after a promotion. **dr (for Disaster Recovery)** Generalized replication with secondary copies that are inaccessible until they are promoted to become the ``active`` replica. .. note:: The term :term:`active` replica refers to the ``primary`` share. In :term:`writable` style of replication, all replicas are :term:`active`, and there could be no distinction of a ``primary`` share. In :term:`readable` and :term:`dr` styles of replication, a ``secondary`` replica may be referred to as ``passive``, ``non-active`` or simply ``replica``. Health of a share replica ~~~~~~~~~~~~~~~~~~~~~~~~~ Apart from the ``status`` attribute, share replicas have the :term:`replica_state` attribute to denote the state of the replica. The ``primary`` replica will have it's :term:`replica_state` attribute set to :term:`active`. A ``secondary`` replica may have one of the following values as its :term:`replica_state`: **in_sync** The replica is up to date with the active replica (possibly within a backend specific :term:`recovery point objective`). **out_of_sync** The replica has gone out of date (all new replicas start out in this :term:`replica_state`). **error** When the scheduler failed to schedule this replica or some potentially irrecoverable damage occurred with regard to updating data for this replica. Manila requests periodic update of the :term:`replica_state` of all non-active replicas. The update occurs with respect to an interval defined through the ``replica_state_update_interval`` option in ``manila.conf``. Administrators have an option of initiating a ``resync`` of a secondary replica (for :term:`readable` and :term:`dr` types of replication). This could be performed before a planned failover operation in order to have the most up-to-date data on the replica. Promotion ~~~~~~~~~ For :term:`readable` and :term:`dr` styles, we refer to the task of switching a ``non-active`` replica with the :term:`active` replica as `promotion`. For the :term:`writable` style of replication, promotion does not make sense since all replicas are :term:`active` (or writable) at all given points of time. 
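From the user's point of view (the IDs below are placeholders), promotion is requested on the replica that should become the new :term:`active` copy:

.. code-block:: console

    $ manila share-replica-list --share-id <share-id>
    $ manila share-replica-promote <replica-id>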
The ``status`` attribute of the non-active replica being promoted will be set to :term:`replication_change` during its promotion. This has been classified as a ``busy`` state and hence API interactions with the share are restricted while one of its replicas is in this state. Promotion of replicas with :term:`replica_state` set to ``error`` may not be fully supported by the backend. However, manila allows the action as an administrator feature and such an attempt may be honored by backends if possible. When multiple replicas exist, multiple replication relationships between shares may need to be redefined at the backend during the promotion operation. If the driver fails at this stage, the replicas may be left in an inconsistent state. The share manager will set all replicas to have the ``status`` attribute set to ``error``. Recovery from this state would require administrator intervention. Snapshots ~~~~~~~~~ If the driver supports snapshots, the replication of a snapshot is expected to be initiated simultaneously with the creation of the snapshot on the :term:`active` replica. Manila tracks snapshots across replicas as separate snapshot instances. The aggregate snapshot object itself will be in ``creating`` state until it is ``available`` across all of the share's replicas that have their :term:`replica_state` attribute set to :term:`active` or ``in_sync``. Therefore, for a driver that supports snapshots, the definition of being ``in_sync`` with the primary is not only that data is ensured (within the :term:`recovery point objective`), but also that any 'available' snapshots on the primary are ensured on the replica as well. If the snapshots cannot be ensured, the :term:`replica_state` *must* be reported to manila as being ``out_of_sync`` until the snapshots have been replicated. When a snapshot instance has its ``status`` attribute set to ``creating`` or ``deleting``, manila will poll the respective drivers for a status update. As described earlier, the parent snapshot itself will be ``available`` only when its instances across the :term:`active` and ``in_sync`` replicas of the share are ``available``. The polling interval will be the same as ``replica_state_update_interval``. Access Rules ~~~~~~~~~~~~ Access rules are not meant to be different across the replicas of the share. Manila expects drivers to handle these access rules effectively depending on the style of replication supported. For example, the :term:`dr` style of replication does mean that the non-active replicas are inaccessible, so if read-write rules are expected, then the rules should be applied on the :term:`active` replica only. Similarly, drivers that support :term:`readable` replication type should apply any read-write rules as read-only for the non-active replicas. Drivers will receive all the access rules in ``create_replica``, ``delete_replica`` and ``update_replica_state`` calls and have ample opportunity to reconcile these rules effectively across replicas. Understanding Replication Workflows ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Creating a share that supports replication ------------------------------------------ Administrators can create a share type with extra-spec :term:`replication_type`, matching the style of replication the desired backend supports. Users can use the share type to create a new share that allows/supports replication. A replicated share always starts out with one replica, the ``primary`` share itself. 
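For example (the type name, replication style and share parameters below are only illustrative), an administrator could prepare such a share type and a user could then create a share with it:

.. code-block:: console

    $ manila type-create replicated-nfs false
    $ manila type-key replicated-nfs set replication_type=readable
    $ manila create NFS 1 --name my-share --share-type replicated-nfs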
The :term:`manila-scheduler` service will filter and weigh available pools to find a suitable pool for the share being created. In particular, * The ``CapabilityFilter`` will match the :term:`replication_type` extra_spec in the request share_type with the ``replication_type`` capability reported by a pool. * The ``ShareReplicationFilter`` will further ensure that the pool has a non-empty ``replication_domain`` capability being reported as well. * The ``AvailabilityZoneFilter`` will ensure that the availability_zone requested matches with the pool's availability zone. Creating a replica ------------------ The user has to specify the share name/id of the share that is supposed to be replicated and optionally an availability zone for the replica to exist in. The replica inherits the parent share's share_type and associated extra_specs. Scheduling of the replica is similar to that of the share. * The `ShareReplicationFilter` will ensure that the pool is within the same ``replication_domain`` as the :term:`active` replica and also ensures that the pool does not already have a replica for that share. Drivers supporting :term:`writable` style **must** set the :term:`replica_state` attribute to :term:`active` when the replica has been created and is ``available``. Deleting a replica ------------------ Users can remove replicas that have their `status` attribute set to ``error``, ``in_sync`` or ``out_of_sync``. They could even delete an :term:`active` replica as long as there is another :term:`active` replica (as could be the case with `writable` replication style). Before the ``delete_replica`` call is made to the driver, an update_access call is made to ensure access rules are safely removed for the replica. Administrators may also ``force-delete`` replicas. Any driver exceptions will only be logged and not re-raised; the replica will be purged from manila's database. Promoting a replica ------------------- Users can promote replicas that have their :term:`replica_state` attribute set to ``in_sync``. Administrators can attempt to promote replicas that have their :term:`replica_state` attribute set to ``out_of_sync`` or ``error``. During a promotion, if the driver raises an exception, all replicas will have their `status` attribute set to `error` and recovery from this state will require administrator intervention. Resyncing a replica ------------------- Prior to a planned failover, an administrator could attempt to update the data on the replica. The ``update_replica_state`` call will be made during such an action, giving drivers an opportunity to push the latest updates from the `active` replica to the secondaries. Creating a snapshot ------------------- When a user takes a snapshot of a share that has replicas, manila creates as many snapshot instances as there are share replicas. These snapshot instances all begin with their `status` attribute set to `creating`. The driver is expected to create the snapshot of the ``active`` replica and then begin to replicate this snapshot as soon as the :term:`active` replica's snapshot instance is created and becomes ``available``. Deleting a snapshot ------------------- When a user deletes a snapshot, the snapshot instances corresponding to each replica of the share have their ``status`` attribute set to ``deleting``. Drivers must update their secondaries as soon as the :term:`active` replica's snapshot instance is deleted. 
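From the user's perspective (the IDs and names below are placeholders), snapshots of a replicated share are requested in exactly the same way as for any other share; manila then tracks one snapshot instance per replica as described above:

.. code-block:: console

    $ manila snapshot-create <share-id> --name my-snapshot
    $ manila snapshot-list --share-id <share-id>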
Driver Interfaces ~~~~~~~~~~~~~~~~~ As part of the ``_update_share_stats()`` call, the base driver reports the ``replication_domain`` capability. Drivers are expected to update the :term:`replication_type` capability. Drivers must implement the methods enumerated below in order to support replication. ``promote_replica``, ``update_replica_state`` and ``update_replicated_snapshot`` need not be implemented by drivers that support the :term:`writable` style of replication. The snapshot methods ``create_replicated_snapshot``, ``delete_replicated_snapshot`` and ``update_replicated_snapshot`` need not be implemented by a driver that does not support snapshots. Each driver request is made on a specific host. Create/delete operations on secondary replicas are always made on the destination host. Create/delete operations on snapshots are always made on the :term:`active` replica's host. ``update_replica_state`` and ``update_replicated_snapshot`` calls are made on the host that the replica or snapshot resides on. Share Replica interfaces: ------------------------- .. autoclass:: manila.share.driver.ShareDriver :members: create_replica, delete_replica, promote_replica, update_replica_state Replicated Snapshot interfaces: ------------------------------- .. autoclass:: manila.share.driver.ShareDriver :noindex: :members: create_replicated_snapshot, delete_replicated_snapshot, update_replicated_snapshot manila-10.0.0/doc/source/contributor/launchpad.rst0000664000175000017500000000315513656750227022204 0ustar zuulzuul00000000000000Project hosting with Launchpad ============================== `Launchpad`_ hosts the manila project. The manila project homepage on Launchpad is http://launchpad.net/manila. Launchpad credentials --------------------- Creating a login on Launchpad is important even if you don't use the Launchpad site itself, since Launchpad credentials are used for logging in on several OpenStack-related sites. These sites include: * `Wiki`_ * Gerrit (see :doc:`gerrit`) Mailing list ------------ The mailing list email is ``openstack@lists.launchpad.net``. This is a common mailing list across the OpenStack projects. To participate in the mailing list: #. Join the `Manila Team`_ on Launchpad. #. Subscribe to the list on the `OpenStack Team`_ page on Launchpad. The mailing list archives are at https://lists.launchpad.net/openstack. Bug tracking ------------ Report manila bugs at https://bugs.launchpad.net/manila Feature requests (Blueprints) ----------------------------- Manila uses Launchpad Blueprints to track feature requests. Blueprints are at https://blueprints.launchpad.net/manila. Technical support (Answers) --------------------------- Manila uses Launchpad Answers to track manila technical support questions. The manila Answers page is at https://answers.launchpad.net/manila. Note that the `OpenStack Forums`_ (which are not hosted on Launchpad) can also be used for technical support requests. .. _Launchpad: http://launchpad.net .. _Wiki: http://wiki.openstack.org .. _Manila Team: https://launchpad.net/~manila .. _OpenStack Team: https://launchpad.net/~openstack .. _OpenStack Forums: http://forums.openstack.org/ manila-10.0.0/doc/source/contributor/manila-review-policy.rst0000664000175000017500000001164313656750227024303 0ustar zuulzuul00000000000000.. _manila-review-policy: Manila team code review policy ============================== Peer code review and the OpenStack Way ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Manila adheres to the `OpenStack code review policy and guidelines `_. 
Similar to other projects hosted on `opendev.org `_, all of manila's code is curated and maintained by a small group of individuals called the "core team". The `primary core team `_ consists of members from diverse affiliations. There are special core teams such as the `manila release core team `_ and the `manila stable maintenance core team `_ that have specific roles as the names suggest. To make a code change in openstack/manila or any of the associated code repositories (openstack/manila-image-elements, openstack/manila-specs, openstack/manila-tempest-plugin, openstack/manila-test-image, openstack/manila-ui and openstack/python-manilaclient), contributors need to follow the :ref:`Code Submission Process ` and upload their code on the `OpenStack Gerrit `_ website. They can then seek reviews by adding individual members of the `manila core team `_ or alert the entire core team by inviting the Gerrit group "manila-core" to the review. Anyone with a membership to the OpenStack Gerrit system may review the code change. However, only the core team can accept and merge the code change. Reviews from contributors outside the core team are encouraged. Reviewing code meticulously and often is a pre-requisite for contributors aspiring to join the core reviewer team. One or more core reviewers will take cognizance of the contribution and provide feedback, or accept the code. For the submission to be accepted, it will need a minimum of one Code-Review:+2 and one Workflow:+1 votes, along with getting a Verified:+1 vote from the CI system. If no core reviewer pays attention to a code submission, feel free to remind the team on the #openstack-manila IRC channel on irc.freenode.com. [#]_ [#]_ Core code review guidelines ~~~~~~~~~~~~~~~~~~~~~~~~~~~ By convention rather than rule, we require that a minimum of two code reviewers provide a Code-Review:+2 vote on each code submission before it is given a Workflow:+1 vote. Having two core reviewers approve a change adds diverse perspective, and is extremely valuable in case of: - Feature changes in the manila service stack - Changes to configuration options - Addition of new tests or significant test bug-fixes in manila-tempest-plugin - New features to manila-ui, manila-test-image, manila-image-elements - Bug fixes Trivial changes --------------- Trivial changes are: - Continuous Integration (CI) system break-fixes that are simple, i.e.: - No job or test is being deleted - Change does not break third-party CI - Documentation changes, especially typographical fixes and grammar corrections. - Automated changes generated by tooling - translations, lower-requirements changes, etc. We do not need two core reviewers to approve trivial changes. Affiliation of core reviewers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Previously, the manila core team informally enforced a code review convention that each code change be reviewed and merged by reviewers of different affiliations. This was followed because the OpenStack Technical Committee used the diversity of affiliation of the core reviewer team as a metric for maturity of the project. However, since the Rocky release cycle, the TC has changed its view on the subject [#]_ [#]_. We believe this is a step in the right direction. While there is no strict requirement that two core reviewers accepting a code change have different affiliations. Other things being equal, we will continue to informally encourage organizational diversity by having core reviewers from different organizations. 
Core reviewers have the professional responsibility of avoiding conflicts of interest. Vendor code and review ~~~~~~~~~~~~~~~~~~~~~~ All code in the manila repositories is open-source and anyone can submit changes to these repositories as long as they seek to improve the code base. Manila supports over 30 vendor storage systems, and many of these vendors participate in the development and maintenance of their drivers. To the extent possible, core reviewers will seek out driver maintainer feedback on code changes pertaining to vendor integrations. References ~~~~~~~~~~ .. [#] Getting started with IRC: https://docs.openstack.org/contributors/common/irc.html .. [#] IRC guidelines: https://docs.openstack.org/infra/manual/irc.html .. [#] TC Report 18-28: https://anticdent.org/tc-report-18-28.html .. [#] TC vote to remove team diversity tags: https://review.opendev.org/#/c/579870/ manila-10.0.0/doc/source/contributor/fakes.rst0000664000175000017500000000343413656750227021336 0ustar zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Fake Drivers ============ When the real thing isn't available and you have some development to do these fake implementations of various drivers let you get on with your day. The :mod:`fake_compute` Module ------------------------------ .. automodule:: manila.tests.fake_compute :noindex: :members: :undoc-members: :show-inheritance: The :mod:`fake_driver` Module ----------------------------- .. automodule:: manila.tests.fake_driver :noindex: :members: :undoc-members: :show-inheritance: The :mod:`fake_network` Module ------------------------------ .. automodule:: manila.tests.fake_service_instance :noindex: :members: :undoc-members: :show-inheritance: The :mod:`fake_utils` Module ---------------------------- .. automodule:: manila.tests.fake_utils :noindex: :members: :undoc-members: :show-inheritance: The :mod:`fake_volume` Module ------------------------------ .. automodule:: manila.tests.fake_volume :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/contributor/gerrit.rst0000664000175000017500000000106313656750227021535 0ustar zuulzuul00000000000000.. _code-reviews-with-gerrit: Code Reviews with Gerrit ======================== Manila uses the `Gerrit`_ tool to review proposed code changes. The review site is https://review.opendev.org. Gerrit is a complete replacement for Github pull requests. `All Github pull requests to the manila repository will be ignored`. See the `Development Workflow`_ for more detailed documentation on how to work with Gerrit. .. _Gerrit: http://code.google.com/p/gerrit .. _Development Workflow: https://docs.openstack.org/infra/manual/developers.html#development-workflow manila-10.0.0/doc/source/contributor/share_migration.rst0000664000175000017500000004115713656750227023424 0ustar zuulzuul00000000000000.. 
Copyright (c) 2016 Hitachi Data Systems Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =============== Share Migration =============== As of the Ocata release of OpenStack, :term:`manila` supports migration of shares across different pools through an experimental API. Since it was first introduced, several enhancements have been made through the subsequent releases while still in experimental state. This developer document reflects the latest version of the experimental Share Migration API. Feature definition ~~~~~~~~~~~~~~~~~~ The Share Migration API is an administrator-only experimental API that allows the invoker to select a destination pool to migrate a share to, while still allowing clients to access the source share instance during the migration. Migration of data is generally expected to be disruptive for users accessing the source, because at some point it will cease to exist. For this reason, the share migration feature is implemented in a 2-phase approach, for the purpose of controlling the timing of that expected disruption of migrating shares. The first phase of migration is when operations that take the longest are performed, such as data copying or replication. After the first pass of data copying is complete, it is up to the administrator to trigger the second phase, often referred to as switchover phase, which may perform operations such as a last sync and deleting the source share instance. During the data copy phase, users remain connected to the source, and may have to reconnect after the switchover phase. In order to migrate a share, manila may employ one of two mechanisms which provide different capabilities and affect how the disruption occurs with regards to user access during data copy phase and disconnection during switchover phase. Those two mechanisms are: **driver-assisted migration** This mechanism uses the underlying driver running in the manila-share service node to coordinate the migration. The migration itself is performed directly on the storage. In order for this mechanism to be used, it requires the driver to implement this functionality, while also requiring that the driver which manages the destination pool is compatible with driver-assisted migration. Typically, drivers would be able to assist migration of shares within storage systems from the same vendor. It is likely that this will be the most efficient and reliable mechanism to migrate a given share, as the storage back end may be able to migrate the share while remaining writable, preserving all file system metadata, snapshots, and possibly perform this operation non-disruptively. When this mechanism cannot be used, the host-assisted migration will be attempted. **host-assisted migration** This mechanism uses the Data Service (manila-data) to copy the source share's data to a new destination share created in the given destination pool. 
For this mechanism to work, it is required that the Data Service is properly configured in the cloud environment and the migration operation for the source share's protocol and access rule type combination is supported by the Data Service. This is the most suited mechanism to migrate shares when the two pools are from different storage vendors. Given that this mechanism is a rough copy of files and the back ends are unaware that their share contents are being copied over, the optimizations found in the driver-assisted migration are not present here, thus the source share remains read-only, snapshots cannot be transferred, some file system metadata such as permissions and ownership may be lost, and users are expected to be disconnected by the end of migration. Note that during a share migration, access rules cannot be added or removed. As of Ocata release, this feature allows several concurrent migrations (driver-assisted or host-assisted) to be performed, having a best-effort type of scalability. API description ~~~~~~~~~~~~~~~ The migration of a share is started by invoking the ``migration_start`` API. The parameters are: **share** The share to be migrated. This parameter is mandatory. **destination** The destination pool in ``host@backend#pool`` representation. This parameter is mandatory. **force_host_assisted_migration** Forces the host-assisted mechanism to be used, thus using the Data Service to copy data across back ends. This parameter value defaults to `False`. When set to `True`, it skips the driver-assisted approach which would otherwise be attempted first. This parameter is optional. **preserve_metadata** Specifies whether migration should enforce the preservation of all file system metadata. When this behavior is expected (i.e, this parameter is set to `True`) and drivers are not capable of ensuring preservation of file system metadata, migration will result in an error status. As of Ocata release, host-assisted migration cannot provide any guarantees of preserving file system metadata. This parameter is mandatory. **preserve_snapshots** Specifies whether migration should enforce the preservation of all existing snapshots at the destination. In other words, the existing snapshots must be migrated along with the share data. When this behavior is expected (i.e, this parameter is set to `True`) and drivers are not capable of migrating the snapshots, migration will result in an error status. As of Ocata release, host-assisted migration cannot provide this capability. This parameter is mandatory. **nondisruptive** Specifies whether migration should only be performed without disrupting clients during migration. For such, it is also expected that the export location does not change. When this behavior is expected (i.e, this parameter is set to `True`) and drivers are not capable of allowing the share to remain accessible through the two phases of the migration, migration will result in an error status. As of Ocata release, host-assisted migration cannot provide this capability. This parameter is mandatory. **writable** Specifies whether migration should only be performed if the share can remain writable. When this behavior is expected (i.e, this parameter is set to `True`) and drivers are not capable of allowing the share to remain writable, migration will result in an error status. If drivers are not capable of performing a nondisruptive migration, manila will ensure that the share will remain writable through the data copy phase of migration. 
However, during the switchover phase the share will be re-exported at the destination, causing the share to be rendered inaccessible for the duration of this phase. As of Ocata release, host-assisted migration cannot provide this capability. This parameter is mandatory. **new_share_type** If willing to retype the share so it can be allocated in the desired destination pool, the invoker may supply a new share type to be used. This is often suited when the share is to be migrated to a pool which operates in the opposite driver mode. This parameter is optional. **new_share_network** If willing to change the share's share-network so it can be allocated in the desired destination pool, the invoker may supply a new share network to be used. This is often suited when the share is to be migrated to a pool which operates in a different availability zone or managed by a driver that handles share servers. This parameter is optional. After started, a migration may be cancelled through the ``migration_cancel`` API, have its status obtained through the ``migration_get_progress`` API, and completed through the ``migration_complete`` API after reaching a certain state (see ``Workflows`` section below). Workflows ~~~~~~~~~ Upon invoking ``migration_start``, several validations will be performed by the API layer, such as: * If supplied API parameters are valid. * If the share does not have replicas. * If the share is not member of a share group. * If the access rules of the given share are not in error status. * If the driver-assisted parameters specified do not conflict with `force_host_assisted_migration` parameter. * If `force_host_assisted_migration` parameter is set to True while snapshots do not exist. * If share status is `available` and is not busy with other tasks. * If the destination pool chosen to migrate the share to exists and is running. * If share service or Data Service responsible for performing the migration exists and is running. * If the combination of share network and share type resulting is compatible with regards to driver modes. If any of the above validations fail, the API will return an error. Otherwise, the `task_state` field value will transition to `migration_starting` and the share's status will transition to `migrating`. Past this point, all validations, state transitions and errors will not produce any notifications to the user. Instead, the given share's `task_state` field value will transition to `migration_error`. Following API validation, the scheduler will validate if the supplied destination is compatible with the desired share type according to the pool's capabilities. If this validation fails, the `task_state` field value will transition to `migration_error`. The scheduler then invokes the source share pool's manager to proceed with the migration, transitioning the `task_state` field value to `migration_in_progress`. If `force-host-assisted-migration` API parameter is not set, then a driver-assisted migration will be attempted first. Note that whichever mechanism is employed, there will be a new share instance created in the database, referred to as the "destination instance", with a status field value `migrating_to`. This share instance will not have its export location displayed during migration and will prevail instead of the original instance database entry when migration is complete. Driver-assisted migration data copy phase ----------------------------------------- A share server will be created as needed at the destination pool. 
Then, the share server details are provided to the driver to report the set of migration capabilities for this destination. If the API parameters `writable`, `nondisruptive`, `preserve_metadata` and `preserve_snapshots` are satisfied by the reported migration capabilities, the `task_state` field value transitions to `migration_driver_starting` and the driver is invoked to start the migration. The driver's migration_start method should start a job in the storage back end and return, allowing the `task_state` field value to transition to `migration_driver_in_progress`. If any of the API parameters described previously are not satisfied, or the driver raises an exception in `migration_start`, the driver-assisted migration ends setting the `task_state` field value to `migration_error`, all allocated resources will be cleaned up and migration will proceed to the host-assisted migration mechanism. Once the `migration_start` driver method succeeds, a periodic task that checks for shares with `task_state` field value `migration_driver_in_progress` will invoke the driver's `migration_continue` method, responsible for executing the next steps of migration until the data copy phase is completed, transitioning the `task_state` field value to `migration_driver_phase1_done`. If this step fails, the `task_state` field value transitions to `migration_error` and all allocated resources will be cleaned up. Host-assisted migration data copy phase --------------------------------------- A new share will be created at the destination pool and the source share's access rules will be changed to read-only. The `task_state` field value transitions to `data_copying_starting` and the Data Service is then invoked to mount both shares and copy data from the source to the destination. In order for the Data Service to mount the shares, it will ask the storage driver to allow access to the node where the Data Service is running. It will then attempt to mount the shares via their respective administrator-only export locations that are served in the administrator network when available, otherwise the regular export locations will be used. In order for the access and mount procedures to succeed, the administrator-only export location must be reachable from the Data Service and the access parameter properly configured in the Data Service configuration file. For instance, a NFS share should require an IP configuration, whereas a CIFS share should require a username credential. Those parameters should be previously set in the Data Service configuration file by the administrator. The data copy routine runs commands as root user for the purpose of setting the correct file metadata to the newly created files at the destination share. It can optionally verify the integrity of all files copied through a configuration parameter. Once copy is completed, the shares are unmounted, their access from the Data Service are removed and the `task_state` field value transitions to `data_copying_completed`, allowing the switchover phase to be invoked. Share migration switchover phase -------------------------------- When invoked, the `task_state` field value transitions to `migration_completing`. Whichever migration mechanism is used, the source share instance is deleted and the access rules are applied to the destination share instance. In the driver-assisted migration, the driver is first invoked to perform a final sync. 
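For reference, the two phases described in this section map onto the experimental CLI as a start call, optional progress polling, and an explicit completion call once the first phase is done. The sequence below is only a sketch: the share name and destination pool are made up, and the option names reflect recent python-manilaclient releases, so check ``manila help migration-start`` on your deployment:

.. code-block:: console

   $ manila migration-start demo-share ubuntu@generic2#pool0 \
       --writable False --nondisruptive False \
       --preserve-metadata False --preserve-snapshots False
   $ manila migration-get-progress demo-share
   $ manila migration-complete demo-share

Internally, the completion call drives the remainder of the switchover described here.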
The last step is to update the share model's optional capability fields, such as `create_share_from_snapshot_support`, `revert_to_snapshot_support` and `mount_snapshot_support`, according to the `new_share_type`, if it had been specified when the migration was initiated. At last, the `task_state` field value transitions to `migration_success`. If the `nondisruptive` driver-assisted capability is not supported or the host-assisted migration mechanism is used, the export location will change and clients will need to remount the share. Driver interfaces ~~~~~~~~~~~~~~~~~ All drivers that implement the driver-assisted migration mechanism should be able to perform all required steps from the source share instance back end within the implementation of the interfaces listed in the section below. Those steps include: * Validating compatibility and connectivity between the source and destination back end; * Start the migration job in the storage back end. Return after the job request has been submitted; * Subsequent invocations to the driver to monitor the job status, cancel it and obtain its progress in percentage value; * Complete migration by performing a last sync if necessary and delete the original share from the source back end. For host-assisted migration, drivers may override some methods defined in the base class in case it is necessary to support it. Additional notes ~~~~~~~~~~~~~~~~ * In case of an error in the storage back end during the execution of the migration job, the driver should raise an exception within the ``migration_continue`` method. * If the manila-share service is restarted during a migration, in case it is a driver-assisted migration, the driver's ``migration_continue`` will be invoked continuously with an interval configured in the share manager service (``migration_driver_continue_interval``). The invocation will stop when the driver finishes the data copy phase. In case of host-assisted migration, the migration job is disrupted only if the manila-data service is restarted. In such event, the migration has to be restarted from the beginning. * To be compatible with host-assisted migration, drivers must also support the ``update_access`` interface, along with its `recovery mode` mechanism. Share Migration driver-assisted interfaces: ------------------------------------------- .. autoclass:: manila.share.driver.ShareDriver :noindex: :members: migration_check_compatibility, migration_start, migration_continue, migration_complete, migration_cancel, migration_get_progress Share Migration host-assisted interfaces: ----------------------------------------- .. autoclass:: manila.share.driver.ShareDriver :noindex: :members: connection_get_info manila-10.0.0/doc/source/contributor/user_messages.rst0000664000175000017500000000365313656750227023115 0ustar zuulzuul00000000000000User Messages ============= User messages are a way to inform users about the state of asynchronous operations. One example would be notifying the user of why a share provisioning request failed. These messages can be requested via the `/messages` API. All user visible messages must be defined in the permitted messages module in order to prevent sharing sensitive information with users. 
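From the consumer side, messages can be read back through the ``/messages`` API mentioned above or with the command-line client; the commands below are shown for illustration and assume a reasonably recent python-manilaclient (exact output columns may vary):

.. code-block:: console

   $ manila message-list
   $ manila message-show <message_id>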
Example message generation::

    from manila import context
    from manila.message import api as message_api
    from manila.message import message_field

    self.message_api = message_api.API()

    context = context.RequestContext()
    project_id = '6c430ede-9476-4128-8838-8d3929ced223'
    share_id = 'f292cc0c-54a7-4b3b-8174-d2ff82d87008'

    self.message_api.create(
        context,
        message_field.Actions.CREATE,
        project_id,
        resource_type=message_field.Resource.SHARE,
        resource_id=share_id,
        detail=message_field.Detail.NO_VALID_HOST)

Will produce the following::

    GET /v2/6c430ede-9476-4128-8838-8d3929ced223/messages

    {
        "messages": [
            {
                "id": "5429fffa-5c76-4d68-a671-37a8e24f37cf",
                "action_id": "001",
                "detail_id": "002",
                "user_message": "create: No storage could be allocated for this share request. Trying again with a different size or share type may succeed.",
                "message_level": "ERROR",
                "resource_type": "SHARE",
                "resource_id": "f292cc0c-54a7-4b3b-8174-d2ff82d87008",
                "created_at": "2015-08-27T09:49:58-05:00",
                "expires_at": "2015-09-26T09:49:58-05:00",
                "request_id": "req-936666d2-4c8f-4e41-9ac9-237b43f8b848"
            }
        ]
    }

The Message API Module
----------------------

.. automodule:: manila.message.api
    :noindex:
    :members:
    :undoc-members:

The Permitted Messages Module
-----------------------------

.. automodule:: manila.message.message_field
    :noindex:
    :members:
    :undoc-members:
    :show-inheritance:

manila-10.0.0/doc/source/contributor/commit_message_tags.rst0000664000175000017500000000520313656750227024253 0ustar zuulzuul00000000000000.. _commit_message_tags:

Using Commit Message Tags in Manila
===================================

When writing git commit messages for code submissions into manila, it can be useful to provide tags in the message both for human consumption and for linking to other external resources, such as Launchpad. Each tag should be placed on a separate line. The following tags are used in manila.

- **APIImpact** - Use this tag when the code change modifies a public HTTP API interface. This tag indicates that the patch creates, changes, or deletes a public API interface or changes its behavior. The tag may be followed by a reason beginning on the next line. If you are touching manila's API layer and you are unsure whether your change has an impact on the API, use this tag anyway.

- **Change-Id** - This tag is automatically generated by a Gerrit hook and is a unique hash that describes the change. This hash should not be changed when rebasing, as it is used by Gerrit to keep track of the change.

- **Closes-Bug: | Partial-Bug: | Related-Bug:** *<#launchpad_bug_id>* - These tags are used when the change closes, partially closes, or relates to the bug referenced by the Launchpad bug ID, respectively. This will automatically generate a link to the bug in Launchpad for easy access for reviewers.

- **DocImpact** - Use this tag when the code change requires changes or updates to documentation in order to be understood. This tag can also be used if the documentation is provided along with the patch itself. This will also generate a Launchpad bug in manila for triaging and tracking. Refer to the section on :ref:`documenting_your_work` to understand where to add documentation.

- **Implements: | Partially Implements:** *blueprint <blueprint-name>* - Use this tag when a change implements or partially implements the given blueprint in Launchpad. This will automatically generate a link to the blueprint in Gerrit for easy access for reviewers.
- **TrivialFix** - This tag is used for a trivial issue, such as a typo, an unclear log message, or a simple code refactor that does not change existing behavior which does not require the creation of a separate bug or blueprint in Launchpad. Make sure that the **Closes-Bug**, **Partial-Bug**, **Related-Bug**, **blueprint**, and **Change-id** tags are at the very end of the commit message. The Gerrit hooks will automatically put the hash at the end of the commit message. For more information on tags and some examples of good commit messages, refer to the GitCommitMessages_ documentation. .. _GitCommitMessages: https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references manila-10.0.0/doc/source/contributor/development-environment-devstack.rst0000664000175000017500000002077713656750227026744 0ustar zuulzuul00000000000000.. Copyright 2016 Red Hat, Inc. All Rights Reserved. not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Setting up a development environment with devstack ================================================== This page describes how to setup a working development environment that can be used in deploying ``manila`` and ``manila-ui`` on latest releases of Ubuntu, Fedora or CentOS. These instructions assume you are already familiar with git. We recommend using devstack to develop and test code changes to ``manila`` and/or ``manila-ui``, in order to simply evaluate the manila and/or project. Devstack is a shell script to build complete OpenStack development environments on a virtual machine. If you are not familiar with devstack, these pages can give you context: * `Testing Changes with DevStack `_ * `Devstack project documentation `_ Be aware that ``manila`` and ``manila-ui`` are not enabled in devstack by default; you will need to add a few lines to the devstack ``local.conf`` file to let devstack deploy and configure ``manila`` and ``manila-ui`` on your virtual machine. .. note:: If you do not intend to deploy with the OpenStack Dashboard (horizon) service, you can ignore instructions about enabling ``manila-ui``. Getting devstack ---------------- Start by cloning the devstack repository:: git clone https://opendev.org/openstack/devstack Change to devstack directory:: cd devstack/ You're now on ``master`` branch of devstack, switch to the branch you want to test or develop against. Sample local.conf files that get you started -------------------------------------------- Now that you have cloned the devstack repository, you need to configure devstack before deploying it. This is done with a ``local.conf`` file. For manila, the local.conf file can also determine which back end(s) are set up. .. caution:: When using devstack with the below configurations, be aware that you will be setting up fake storage. The `LVM`, `Generic`, `ZFSOnLinux` drivers have not been developed for production use. They exist to provide a vanilla development and testing environment for manila contributors. DHSS=False (`driver_handles_share_servers=False`) mode: ````````````````````````````````````````````````````````` This is the easier mode for new contributors. 
Manila share back-end drivers that operate in ``driver_handles_share_servers=False`` mode do not allow creating shares on private project networks. On the resulting stack, all manila shares created by you are exported on the host network and hence are accessible to any compute resource (e.g.: virtual machine, baremetal, container) that is able to reach the devstack host. * :download:`LVM driver ` * :download:`ZFSOnLinux driver ` * :download:`CEPHFS driver ` DHSS=True (`driver_handles_share_servers=True`) mode: ``````````````````````````````````````````````````````` You may use the following setups if you are familiar with manila, and would like to test with the project (tenant) isolation that manila provides on the network and data path. Manila share back-end drivers that operate in ``driver_handles_share_servers=True`` mode create shares on isolated project networks if told to do so. On the resulting stack, when creating a share, you must specify a share network to export the share to, and the share will be accessible to any compute resource (e.g.: Virtual machine, baremetal, containers) that is able to reach the share network you indicated. Typically, new contributors take a while to understand OpenStack networking, and we recommend that you familiarize yourself with the ``DHSS=False`` mode setup before attempting ``DHSS=True``. * :download:`Generic driver ` * :download:`Container driver ` Building your devstack ---------------------- * Copy the appropriate sample local.conf file into the devstack folder on your virtual machine, make sure to name it ``local.conf`` * Make sure to read inline comments and customize values where necessary * If you would like to run minimal services in your stack, or allow devstack to bootstrap tempest testing framework for you, see :ref:`more-customization` * Finally, run the ``stack.sh`` script from within the devstack directory. We recommend that your run this inside a screen or tmux session because it could take a while:: ./stack.sh * After the script completes, you should have manila services running. You can verify that the services are running with the following commands:: $ systemctl status devstack@m-sch $ systemctl status devstack@m-shr $ systemctl status devstack@m-dat * By default, devstack sets up manila-api behind apache. The service name is ``httpd`` on Red Hat based systems and ``apache2`` on Debian based systems. * You may also use your "demo" credentials to invoke the command line clients:: $ source DEVSTACK_DIR/openrc admin demo $ manila service-list * The logs are accessible through ``journalctl``. The following commands let you query logs. You may use the ``-f`` option to tail these logs:: $ journalctl -a -o short-precise --unit devstack@m-sch $ journalctl -a -o short-precise --unit devstack@m-shr $ journalctl -a -o short-precise --unit devstack@m-dat * If running behind apache, the manila-api logs will be in ``/var/log/httpd/manila_api.log`` (Red Hat) or in ``/var/log/apache2/manila_api.log`` (Debian). * Manila UI will now be available through OpenStack Horizon; look for the Shares tab under Project > Share. .. 
_more-customization: More devstack customizations ---------------------------- Testing branches and changes submitted for review ````````````````````````````````````````````````` To test a patch in review:: enable_plugin manila https://opendev.org/openstack/manila If the ref is from review.opendev.org, it is structured as:: refs/changes/// For example, if you want to test patchset 4 of https://review.opendev.org/#/c/614170/, you can provide this in your ``local.conf``:: enable_plugin manila https://opendev.org/openstack/manila refs/changes/70/614170/4 ref can also simply be a stable branch name, for example:: enable_plugin manila https://opendev.org/openstack/manila stable/train Limiting the services enabled in your stack ```````````````````````````````````````````` Manila needs only a message queue (rabbitmq) and a database (mysql, postgresql) to operate. Additionally, keystone service provides project administration if necessary, all other OpenStack services are not necessary to set up a basic test system. [#f1]_ [#f2]_ You can add the following to your ``local.conf`` to deploy your stack in a minimal fashion. This saves you a lot of time and resources, but could limit your testing:: ENABLED_SERVICES=key,mysql,rabbit,tempest,manila,m-api,m-sch,m-shr,m-dat Optionally, you can deploy with Manila, Nova, Neutron, Glance and Tempest:: ENABLED_SERVICES=key,mysql,rabbit,tempest,g-api,g-reg ENABLED_SERVICES+=n-api,n-cpu,n-cond,n-sch,n-crt,n-cauth,n-obj,placement-api,placement-client ENABLED_SERVICES+=q-svc,q-dhcp,q-meta,q-l3,q-agt ENABLED_SERVICES+=tempest You can also enable ``tls-proxy`` with ``ENABLED_SERVICES`` to allow devstack to use Apache and setup a TLS proxy to terminate TLS connections. Using tls-proxy secures all OpenStack service API endpoints and inter-service communication on your devstack. Bootstrapping Tempest ````````````````````` Add the following options in your ``local.conf`` to set up tempest:: ENABLE_ISOLATED_METADATA=True TEMPEST_USE_TEST_ACCOUNTS=True TEMPEST_ALLOW_TENANT_ISOLATION=False TEMPEST_CONCURRENCY=8 .. [#f1] The Generic driver cannot be run without deploying Cinder, Nova, Glance and Neutron. .. [#f2] You must enable Horizon to use manila-ui. Horizon will not work well when Nova, Cinder, Glance and Neutron are not enabled. manila-10.0.0/doc/source/contributor/driver_filter_goodness_weigher.rst0000664000175000017500000002611213656750227026516 0ustar zuulzuul00000000000000.. _driver_filter_goodness_weigher: ========================================================== Configure and use driver filter and weighing for scheduler ========================================================== OpenStack manila enables you to choose a share back end based on back-end specific properties by using the DriverFilter and GoodnessWeigher for the scheduler. The driver filter and weigher scheduling can help ensure that the scheduler chooses the best back end based on requested share properties as well as various back-end specific properties. What is driver filter and weigher and when to use it ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver filter and weigher give you the ability to more finely control how manila scheduler chooses the best back end to use when handling a share provisioning request. One example scenario where using the driver filter and weigher can be if a back end that utilizes thin-provisioning is used. The default filters use the ``free capacity`` property to determine the best back end, but that is not always perfect. 
If a back end has the ability to provide a more accurate back-end specific value, it can be used as part of the weighing process to find the best possible host for a new share. Some more examples of the use of these filters could be with respect to back end specific limitations. For example, some back ends may be limited by the number of shares that can be created on them, or by the minimum or maximum size allowed per share or by the fact that provisioning beyond a particular capacity affects their performance. The driver filter and weigher can provide a way for these limits to be accounted for during scheduling. Defining your own filter and goodness functions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can define your own filter and goodness functions through the use of various capabilities that manila exposes. Capabilities exposed include information about the share request being made, ``share_type`` settings, and back-end specific information about drivers. All of these allow for a lot of control over how the ideal back end for a share request will be decided. The ``filter_function`` option is a string defining a function that will determine whether a back end should be considered as a potential candidate by the scheduler. The ``goodness_function`` option is a string defining a function that will rate the quality of the potential host (0 to 100, 0 lowest, 100 highest). .. important:: The driver filter and weigher will use default values for filter and goodness functions for each back end if you do not define them yourself. If complete control is desired then a filter and goodness function should be defined for each of the back ends in the ``manila.conf`` file. Supported operations in filter and goodness functions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Below is a table of all the operations currently usable in custom filter and goodness functions created by you: +--------------------------------+-------------------------+ | Operations | Type | +================================+=========================+ | +, -, \*, /, ^ | standard math | +--------------------------------+-------------------------+ | not, and, or, &, \|, ! | logic | +--------------------------------+-------------------------+ | >, >=, <, <=, ==, <>, != | equality | +--------------------------------+-------------------------+ | +, - | sign | +--------------------------------+-------------------------+ | x ? a : b | ternary | +--------------------------------+-------------------------+ | abs(x), max(x, y), min(x, y) | math helper functions | +--------------------------------+-------------------------+ .. caution:: Syntax errors in filter or goodness strings are thrown at a share creation time. Available capabilities when creating custom functions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ There are various properties that can be used in either the ``filter_function`` or the ``goodness_function`` strings. The properties allow access to share info, qos settings, extra specs, and so on. 
The following capabilities are currently available for use: Host capabilities for a back end -------------------------------- host The host's name share\_backend\_name The share back end name vendor\_name The vendor name driver\_version The driver version storage\_protocol The storage protocol qos Boolean signifying whether QoS is supported total\_capacity\_gb The total capacity in gibibytes allocated\_capacity\_gb The allocated capacity in gibibytes free\_capacity\_gb The free capacity in gibibytes reserved\_percentage The reserved storage percentage driver\_handles\_share\_server The driver mode used by this host thin\_provisioning Whether or not thin provisioning is supported by this host updated Last time this host's stats were updated dedupe Whether or not dedupe is supported by this host compression Whether or not compression is supported by this host snapshot\_support Whether or not snapshots are supported by this host replication\_domain The replication domain of this host replication\_type The replication type supported by this host provisioned\_capacity\_gb The provisioned capacity of this host in gibibytes pools This host's storage pools max\_over\_subscription\_ratio This hosts's over subscription ratio for thin provisioning Capabilities specific to a back end ----------------------------------- These capabilities are determined by the specific back end you are creating filter and goodness functions for. Some back ends may not have any capabilities available here. Requested share capabilities ---------------------------- availability\_zone\_id ID of the availability zone of this share share\_network\_id ID of the share network used by this share share\_server\_id ID of the share server of this share host Host name of this share is\_public Whether or not this share is public snapshot\_support Whether or not snapshots are supported by this share status Status for the requested share share\_type\_id The share type ID share\_id The share ID user\_id The share's user ID project\_id The share's project ID id The share instance ID replica\_state The share's replication state replication\_type The replication type supported by this share snapshot\_id The ID of the snapshot of which this share was created from size The size of the share in gibibytes share\_proto The protocol of this share metadata General share metadata The most used capability from this list will most likely be the ``size``. Extra specs for the requested share type ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ View the available properties for share types by running: .. code-block:: console $ manila extra-specs-list Driver filter and weigher usage examples ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Below are examples for using the filter and weigher separately, together, and using driver-specific properties. Example ``manila.conf`` file configuration for customizing the filter function: .. code-block:: ini [default] enabled_backends = generic1, generic2 [generic1] share_driver = manila.share.drivers.generic.GenericShareDriver share_backend_name = GENERIC1 filter_function = "share.size < 10" [generic2] share_driver = manila.share.drivers.generic.GenericShareDriver share_backend_name = GENERIC2 filter_function = "share.size >= 10" The above example will filter share to different back ends depending on the size of the requested share. Shares with a size less than 10 GB are sent to generic1 and shares with a size greater than or equal to 10 GB are sent to generic2. 
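The conditions in a filter function can also be combined using the logic operations listed earlier. The following fragment is purely illustrative: the thresholds are arbitrary and it assumes the back end reports a ``free_capacity_gb`` value:

.. code-block:: ini

   [generic1]
   share_driver = manila.share.drivers.generic.GenericShareDriver
   share_backend_name = GENERIC1
   # Illustrative only: accept shares smaller than 20 GB, and only while
   # the back end still reports more than 100 GB of free capacity.
   filter_function = "(share.size < 20) and (stats.free_capacity_gb > 100)"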
Example ``manila.conf`` file configuration for customizing the goodness function:

.. code-block:: ini

    [default]
    enabled_backends = generic1, generic2

    [generic1]
    share_driver = manila.share.drivers.generic.GenericShareDriver
    share_backend_name = GENERIC1
    goodness_function = "(share.size < 5) ? 100 : 50"

    [generic2]
    share_driver = manila.share.drivers.generic.GenericShareDriver
    share_backend_name = GENERIC2
    goodness_function = "(share.size >= 5) ? 100 : 25"

The above example will determine the goodness rating of a back end based on the requested share's size. The example shows how the ternary if statement can be used in a filter or goodness function. If a requested share is of size 10 GB then generic1 is rated as 50 and generic2 is rated as 100. In this case generic2 wins. If a requested share is of size 3 GB then generic1 is rated 100 and generic2 is rated 25. In this case generic1 would win.

Example ``manila.conf`` file configuration for customizing both the filter and goodness functions:

.. code-block:: ini

    [default]
    enabled_backends = generic1, generic2

    [generic1]
    share_driver = manila.share.drivers.generic.GenericShareDriver
    share_backend_name = GENERIC1
    filter_function = "stats.total_capacity_gb < 500"
    goodness_function = "(share.size < 25) ? 100 : 50"

    [generic2]
    share_driver = manila.share.drivers.generic.GenericShareDriver
    share_backend_name = GENERIC2
    filter_function = "stats.total_capacity_gb >= 500"
    goodness_function = "(share.size >= 25) ? 100 : 75"

The above example combines the techniques from the first two examples. The best back end is now decided based on the total capacity of the back end and the requested share's size.

Example ``manila.conf`` file configuration for accessing driver specific properties:

.. code-block:: ini

    [default]
    enabled_backends = example1, example2, example3

    [example1]
    share_driver = manila.share.drivers.example.ExampleShareDriver
    share_backend_name = EXAMPLE1
    filter_function = "share.size < 5"
    goodness_function = "(capabilities.provisioned_capacity_gb < 30) ? 100 : 50"

    [example2]
    share_driver = manila.share.drivers.example.ExampleShareDriver
    share_backend_name = EXAMPLE2
    filter_function = "share.size < 5"
    goodness_function = "(capabilities.provisioned_capacity_gb < 80) ? 100 : 50"

    [example3]
    share_driver = manila.share.drivers.example.ExampleShareDriver
    share_backend_name = EXAMPLE3
    goodness_function = "55"

The above is an example of how back-end specific capabilities can be used in the filter and goodness functions. In this example, the driver has a ``provisioned_capacity_gb`` capability that is being used to determine which back end gets used during a share request. In the above example, ``example1`` and ``example2`` will handle share requests for all shares with a size less than 5 GB. ``example1`` will have priority until the provisioned capacity of all shares on it hits 30 GB. After that, ``example2`` will have priority until the provisioned capacity of all shares on it hits 80 GB. ``example3`` will collect all shares greater than or equal to 5 GB as well as all shares once ``example1`` and ``example2`` lose priority.

manila-10.0.0/doc/source/contributor/documenting_your_work.rst0000664000175000017500000002077513656750227024700 0ustar zuulzuul00000000000000.. _documenting_your_work:

=====================
Documenting your work
=====================

As with most OpenStack services and libraries, manila suffers from appearing very complicated to understand, develop, deploy, administer and use.
As OpenStack developers working on manila, our responsibility goes beyond introducing new features and maintaining existing features. We ought to provide adequate documentation for the benefit of all kinds of audiences. The guidelines below will explain how you can document (or maintain documentation for) new (or existing) features and bug fixes in the core manila project and other projects that are part of the manila suite. Where to add documentation? ~~~~~~~~~~~~~~~~~~~~~~~~~~~ OpenStack User Guide -------------------- - Any documentation targeted at end users of manila in OpenStack needs to go here. This contains high level information about any feature as long as it is available on ``python-manilaclient`` and/or ``manila-ui``. - If you develop an end user facing feature, you need to provide an overview, use cases and example work-flows as part of this documentation. - The source files for the user guide live in manila's code tree. - **Link**: `User guide `_ OpenStack Administrator Guide ----------------------------- - Documentation for administrators of manila deployments in OpenStack clouds needs to go here. - Document instructions for administrators to perform necessary set up for utilizing a feature, along with managing and troubleshooting manila when the feature is used. - Relevant configuration options may be mentioned here briefly. - The source files for the administrator guide live in manila's code tree. - **Link**: `Administrator guide `_ OpenStack Configuration Reference --------------------------------- - Instructions regarding configuration of different manila back ends need to be added in this document. - The configuration reference also contains sections where manila's configuration options are auto-documented. - It contains sample configuration files for using manila with various configuration options. - If you are a driver maintainer, please ensure that your driver and all of its relevant configuration is documented here. - The source files for the configuration guide live in manila's code tree. - **Link**: `Manila release configuration reference `_ OpenStack Installation Tutorial ------------------------------- - Instructions regarding setting up manila on OpenStack need to be documented here. - This tutorial covers step-by-step deployment of OpenStack services using a functional example architecture suitable for new users of OpenStack with sufficient Linux experience. - The instructions are written with reference to different distributions. - The source files for this tutorial live in manila's code tree. - **Link**: `Draft installation tutorial `_ OpenStack API Reference ----------------------- - When you add or change a REST API in manila, you will need to add or edit descriptions of the API, request and response parameters, microversions and expected HTTP response codes as part of the API reference. - For releases prior to Newton, the API reference was maintained in `Web Application Description Language (WADL) `_ in the `api-site `_ project. - Since the Newton release, manila's API reference is maintained in-tree in custom YAML/JSON format files. - **Link**: `REST API reference of the Shared File Systems Project v2.0 `_ Manila Developer Reference -------------------------- - When working on a feature in manila, provide judicious inline documentation in the form of comments and docstrings. Code is our best developer reference. - Driver entry-points must be documented with docstrings explaining the expected behavior from a driver routine. 
- Apart from inline documentation, further developer facing documentation will be necessary when you are introducing changes that will affect vendor drivers, consumers of the manila database and when building a utility in manila that can be consumed by other developers. - The developer reference for manila is maintained in-tree. - Feel free to use it as a sandbox for other documentation that does not live in manila's code-tree. - **Link**: `Manila developer reference `_ OpenStack Security Guide ------------------------ - Any feature that has a security impact needs to be documented here. - In general, administrators will follow the guidelines regarding best practices of setting up their manila deployments with this guide. - Any changes to ``policy.json`` based authorization, share network related security, ``access`` to manila resources, tenant and user related information needs to be documented here. - **Link**: `Security guide `_ - **Repository**: The security guide is maintained within the `OpenStack Security-doc project `_ OpenStack Command Line Reference -------------------------------- - Help text provided in the ``python-manilaclient`` is extracted into this document automatically. - No manual corrections are allowed on this repository; make necessary corrections in the ``python-manilaclient`` repository." - **Link**: `Manila CLI reference `_. Important things to note ~~~~~~~~~~~~~~~~~~~~~~~~ - When implementing a new feature, use appropriate Commit Message Tags (:ref:`commit_message_tags`). - Using the ``DocImpact`` flag in particular will create a ``[doc]`` bug under the `manila project in launchpad `_. When your code patch merges, assign this bug to yourself and track your documentation changes with it. - When writing documentation outside of manila, use either a commit message header that includes the word ``Manila`` or set the topic of the change-set to ``manila-docs``. This will make it easy for manila reviewers to find your patches to aid with a technical content review. - When writing documentation in user/admin/config/api/install guides, *always* refer to the project with its service name: ``Shared File Systems service`` and not the service type (``share``) or the project name (``manila``). - Follow documentation styles prescribed in the `OpenStack Documentation Contributor Guide `_. Pay heed to the `RST formatting conventions `_ and `Writing style `_. - Use CamelCase to spell out `OpenStack` and sentence casing to spell out service types, ex: `Shared File Systems service` and lower case to spell out project names, ex: `manila` (except when the project name is in the beginning of a sentence or a title). - **ALWAYS** use a first party driver when documenting a feature in the user or administrator guides. Provide cross-references to configuration reference sections to lead readers to detailed setup instructions for these drivers. - The manila developer reference, the OpenStack user guide, administrator reference, API reference and security guide are always *current*, i.e, get built with every commit in the respective codebase. Therefore, documentation added here need not be backported to previous releases. - You may backport changes to some documentation such as the configuration reference and the installation guide. - **Important "documentation" that isn't really documentation** - ``specs`` and ``release notes`` are *NOT* documentation. A specification document is written to initiate a dialogue and gather feedback regarding the design of a feature. 
Neither developers nor users will regard a specification document as official documentation after a feature has been implemented. Release notes (:ref:`adding_release_notes`) allow for gathering release summaries and they are not used to understand, configure, use or troubleshoot any manila feature. - **Less is not more, more is more** - Always add detail when possible. The health and maturity of our community is reflected in our documentation. manila-10.0.0/doc/source/contributor/tempest_tests.rst0000664000175000017500000001314213656750227023145 0ustar zuulzuul00000000000000Tempest Tests ============= Manila stores tempest tests as plugin under ``manila_tempest_tests`` directory. It contains functional and scenario tests. Installation of plugin to tempest --------------------------------- Tempest plugin installation is common for all its plugins and detailed information can be found in its `docs`_. In simple words: if you have installed manila project on the same machine as tempest, then tempest will find it. In case the plugin is not installed (see the verification steps below), you can clone and install it yourself. .. code-block:: console $ git clone https://opendev.org/openstack/manila-tempest-plugin $ pip install -e manila-tempest-plugin .. _docs: https://docs.openstack.org/tempest/latest/plugin.html#using-plugins Verifying installation ---------------------- To verify that the plugin is installed on your system, run the following command and find "manila_tests" in its output. .. code-block:: console $ tempest list-plugins Alternatively, or to double-check, list all the tests available on the system and find manila tests in it. .. code-block:: console $ tempest run -l Configuration of manila-related tests in tempest.conf ----------------------------------------------------- All config options for manila are defined in ``manila_tempest_tests/config.py`` module. They can be set/redefined in ``tempest.conf`` file. Here is a configuration example: .. code-block:: ini [service_available] manila = True [share] # Capabilities capability_storage_protocol = NFS capability_snapshot_support = True capability_create_share_from_snapshot_support = True backend_names = Backendname1,BackendName2 backend_replication_type = readable # Enable/Disable test groups multi_backend = True multitenancy_enabled = True enable_protocols = nfs,cifs,glusterfs,cephfs enable_ip_rules_for_protocols = nfs enable_user_rules_for_protocols = cifs enable_cert_rules_for_protocols = glusterfs enable_cephx_rules_for_protocols = cephfs username_for_user_rules = foouser enable_ro_access_level_for_protocols = nfs run_quota_tests = True run_extend_tests = True run_shrink_tests = True run_snapshot_tests = True run_replication_tests = True run_migration_tests = True run_manage_unmanage_tests = True run_manage_unmanage_snapshot_tests = True .. note:: None of existing share drivers support all features. So, make sure that share backends really support features you enable in config. Running tests ------------- To run tests, it is required to install `pip`_, `tox`_ and `virtualenv`_ packages on host machine. Then run following command from tempest root directory: .. code-block:: console $ tempest run -r manila_tempest_tests.tests.api or to run only scenario tests: .. code-block:: console $ tempest run -r manila_tempest_tests.tests.scenario .. _pip: https://pypi.org/project/pip/ .. _tox: https://pypi.org/project/tox/ .. 
_virtualenv: https://pypi.org/project/virtualenv Running a subset of tests based on test location ------------------------------------------------ Instead of running all tests, you can specify an individual directory, file, class, or method that contains test code. To run the tests in the ``manila_tempest_tests/tests/api/admin`` directory: .. code-block:: console $ tempest run -r manila_tempest_tests.tests.api.admin To run the tests in the ``manila_tempest_tests/tests/api/admin/test_admin_actions.py`` module: .. code-block:: console $ tempest run -r manila_tempest_tests.tests.api.admin.test_admin_actions To run the tests in the `AdminActionsTest` class in ``manila_tempest_tests/tests/api/admin/test_admin_actions.py`` module: .. code-block:: console $ tempest run -r manila_tempest_tests.tests.api.admin.test_admin_actions.AdminActionsTest To run the `AdminActionsTest.test_reset_share_state` test method in ``manila_tempest_tests/tests/api/admin/test_admin_actions.py`` module: .. code-block:: console $ tempest run -r manila_tempest_tests.tests.api.admin.test_admin_actions.AdminActionsTest.test_reset_share_state Running a subset of tests based on service involvement ------------------------------------------------------ To run the tests that require only `manila-api` service running: .. code-block:: console $ tempest run -r \ \(\?\=\.\*\\\[\.\*\\bapi\\b\.\*\\\]\) \ \(\^manila_tempest_tests.tests.api\) To run the tests that require all manila services running, but intended to test API behaviour: .. code-block:: console $ tempest run -r \ \(\?\=\.\*\\\[\.\*\\b\(api\|api_with_backend\)\\b\.\*\\\]\) \ \(\^manila_tempest_tests.tests.api\) To run the tests that require all manila services running, but intended to test back-end (manila-share) behaviour: .. code-block:: console $ tempest run -r \ \(\?\=\.\*\\\[\.\*\\bbackend\\b\.\*\\\]\) \ \(\^manila_tempest_tests.tests.api\) Running a subset of positive or negative tests ---------------------------------------------- To run only positive tests, use following command: .. code-block:: console $ tempest run -r \ \(\?\=\.\*\\\[\.\*\\bpositive\\b\.\*\\\]\) \ \(\^manila_tempest_tests.tests.api\) To run only negative tests, use following command: .. code-block:: console $ tempest run -r \ \(\?\=\.\*\\\[\.\*\\bnegative\\b\.\*\\\]\) \ \(\^manila_tempest_tests.tests.api\) To run only positive API tests, use following command: .. code-block:: console $ tempest run -r \ \(\?\=\.\*\\\[\.\*\\bpositive\\b\.\*\\\]\) \ \(\?\=\.\*\\\[\.\*\\bapi\\b\.\*\\\]\) \ \(\^manila_tempest_tests.tests.api\) manila-10.0.0/doc/source/contributor/api_microversion_dev.rst0000664000175000017500000002535313656750227024457 0ustar zuulzuul00000000000000API Microversions ================= Background ---------- Manila uses a framework we called 'API Microversions' for allowing changes to the API while preserving backward compatibility. The basic idea is that a user has to explicitly ask for their request to be treated with a particular version of the API. So breaking changes can be added to the API without breaking users who don't specifically ask for it. This is done with an HTTP header ``X-OpenStack-Manila-API-Version`` which is a monotonically increasing semantic version number starting from ``1.0``. If a user makes a request without specifying a version, they will get the ``DEFAULT_API_VERSION`` as defined in ``manila/api/openstack/api_version_request.py``. This value is currently ``2.0`` and is expected to remain so for quite a long time. 
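As a concrete illustration, a raw API request can pin a microversion simply by sending this header; the endpoint, project ID and token below are placeholders, and the requested version is arbitrary:

.. code-block:: console

   $ curl -s http://controller:8786/v2/$PROJECT_ID/shares \
       -H "X-Auth-Token: $TOKEN" \
       -H "X-OpenStack-Manila-API-Version: 2.7"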
The Nova project was the first to implement microversions. For full details please read Nova's `Kilo spec for microversions `_ When do I need a new Microversion? ---------------------------------- A microversion is needed when the contract to the user is changed. The user contract covers many kinds of information such as: - the Request - the list of resource urls which exist on the server Example: adding a new shares/{ID}/foo which didn't exist in a previous version of the code - the list of query parameters that are valid on urls Example: adding a new parameter ``is_yellow`` servers/{ID}?is_yellow=True - the list of query parameter values for non free form fields Example: parameter filter_by takes a small set of constants/enums "A", "B", "C". Adding support for new enum "D". - new headers accepted on a request - the Response - the list of attributes and data structures returned Example: adding a new attribute 'locked': True/False to the output of shares/{ID} - the allowed values of non free form fields Example: adding a new allowed ``status`` to shares/{ID} - the list of status codes allowed for a particular request Example: an API previously could return 200, 400, 403, 404 and the change would make the API now also be allowed to return 409. - changing a status code on a particular response Example: changing the return code of an API from 501 to 400. - new headers returned on a response The following flow chart attempts to walk through the process of "do we need a microversion". .. graphviz:: digraph states { label="Do I need a microversion?" silent_fail[shape="diamond", style="", label="Did we silently fail to do what is asked?"]; ret_500[shape="diamond", style="", label="Did we return a 500 before?"]; new_error[shape="diamond", style="", label="Are we changing what status code is returned?"]; new_attr[shape="diamond", style="", label="Did we add or remove an attribute to a payload?"]; new_param[shape="diamond", style="", label="Did we add or remove an accepted query string parameter or value?"]; new_resource[shape="diamond", style="", label="Did we add or remove a resource url?"]; no[shape="box", style=rounded, label="No microversion needed"]; yes[shape="box", style=rounded, label="Yes, you need a microversion"]; no2[shape="box", style=rounded, label="No microversion needed, it's a bug"]; silent_fail -> ret_500[label="no"]; silent_fail -> no2[label="yes"]; ret_500 -> no2[label="yes [1]"]; ret_500 -> new_error[label="no"]; new_error -> new_attr[label="no"]; new_error -> yes[label="yes"]; new_attr -> new_param[label="no"]; new_attr -> yes[label="yes"]; new_param -> new_resource[label="no"]; new_param -> yes[label="yes"]; new_resource -> no[label="no"]; new_resource -> yes[label="yes"]; {rank=same; yes new_attr} {rank=same; no2 ret_500} {rank=min; silent_fail} } **Footnotes** [1] - When fixing 500 errors that previously caused stack traces, try to map the new error into the existing set of errors that API call could previously return (400 if nothing else is appropriate). Changing the set of allowed status codes from a request is changing the contract, and should be part of a microversion. The reason why we are so strict on contract is that we'd like application writers to be able to know, for sure, what the contract is at every microversion in manila. If they do not, they will need to write conditional code in their application to handle ambiguities. When in doubt, consider application authors. 
If it would work with no client side changes on both manila versions, you probably don't need a microversion. If, on the other hand, there is any ambiguity, a microversion is probably needed. In Code ------- In ``manila/api/openstack/wsgi.py`` we define an ``@api_version`` decorator which is intended to be used on top-level Controller methods. It is not appropriate for lower-level methods. Some examples: Adding a new API method ~~~~~~~~~~~~~~~~~~~~~~~ In the controller class:: @wsgi.Controller.api_version("2.4") def my_api_method(self, req, id): .... This method would only be available if the caller had specified an ``X-OpenStack-Manila-API-Version`` of >= ``2.4``. If they had specified a lower version (or not specified it and received the default of ``2.1``) the server would respond with ``HTTP/404``. Removing an API method ~~~~~~~~~~~~~~~~~~~~~~ In the controller class:: @wsgi.Controller.api_version("2.1", "2.4") def my_api_method(self, req, id): .... This method would only be available if the caller had specified an ``X-OpenStack-Manila-API-Version`` of <= ``2.4``. If ``2.5`` or later is specified the server will respond with ``HTTP/404``. Changing a method's behaviour ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In the controller class:: @wsgi.Controller.api_version("2.1", "2.3") def my_api_method(self, req, id): .... method_1 ... @wsgi.Controller.api_version("2.4") # noqa def my_api_method(self, req, id): .... method_2 ... If a caller specified ``2.1``, ``2.2`` or ``2.3`` (or received the default of ``2.1``) they would see the result from ``method_1``, ``2.4`` or later ``method_2``. It is vital that the two methods have the same name, so the second of them will need ``# noqa`` to avoid failing flake8's ``F811`` rule. The two methods may be different in any kind of semantics (schema validation, return values, response codes, etc) A method with only small changes between versions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A method may have only small changes between microversions, in which case you can decorate a private method:: @api_version("2.1", "2.4") def _version_specific_func(self, req, arg1): pass @api_version(min_version="2.5") # noqa def _version_specific_func(self, req, arg1): pass def show(self, req, id): .... common stuff .... self._version_specific_func(req, "foo") .... common stuff .... A change in schema only ~~~~~~~~~~~~~~~~~~~~~~~ If there is no change to the method, only to the schema that is used for validation, you can add a version range to the ``validation.schema`` decorator:: @wsgi.Controller.api_version("2.1") @validation.schema(dummy_schema.dummy, "2.3", "2.8") @validation.schema(dummy_schema.dummy2, "2.9") def update(self, req, id, body): .... This method will be available from version ``2.1``, validated according to ``dummy_schema.dummy`` from ``2.3`` to ``2.8``, and validated according to ``dummy_schema.dummy2`` from ``2.9`` onward. When not using decorators ~~~~~~~~~~~~~~~~~~~~~~~~~ When you don't want to use the ``@api_version`` decorator on a method or you want to change behaviour within a method (say it leads to simpler or simply a lot less code) you can directly test for the requested version with a method as long as you have access to the api request object (commonly called ``req``). Every API method has an api_version_request object attached to the req object and that can be used to modify behaviour based on its value:: def index(self, req): req_version = req.api_version_request if req_version.matches("2.1", "2.5"): ....stuff.... 
elif req_version.matches("2.6", "2.10"): ....other stuff.... elif req_version > api_version_request.APIVersionRequest("2.10"): ....more stuff..... The first argument to the matches method is the minimum acceptable version and the second is maximum acceptable version. A specified version can be null:: null_version = APIVersionRequest() If the minimum version specified is null then there is no restriction on the minimum version, and likewise if the maximum version is null there is no restriction the maximum version. Alternatively a one sided comparison can be used as in the example above. Other necessary changes ----------------------- If you are adding a patch which adds a new microversion, it is necessary to add changes to other places which describe your change: * Update ``REST_API_VERSION_HISTORY`` in ``manila/api/openstack/api_version_request.py`` * Update ``_MAX_API_VERSION`` in ``manila/api/openstack/api_version_request.py`` * Add a verbose description to ``manila/api/openstack/rest_api_version_history.rst``. There should be enough information that it could be used by the docs team for release notes. * Update the expected versions in affected tests. Allocating a microversion ------------------------- If you are adding a patch which adds a new microversion, it is necessary to allocate the next microversion number. Except under extremely unusual circumstances and this would have been mentioned in the blueprint for the change, the minor number of ``_MAX_API_VERSION`` will be incremented. This will also be the new microversion number for the API change. It is possible that multiple microversion patches would be proposed in parallel and the microversions would conflict between patches. This will cause a merge conflict. We don't reserve a microversion for each patch in advance as we don't know the final merge order. Developers may need over time to rebase their patch calculating a new version number as above based on the updated value of ``_MAX_API_VERSION``. Testing Microversioned API Methods ---------------------------------- Testing a microversioned API method is very similar to a normal controller method test, you just need to add the ``X-OpenStack-Manila-API-Version`` header, for example:: req = fakes.HTTPRequest.blank('/testable/url/endpoint') req.headers = {'X-OpenStack-Manila-API-Version': '2.2'} req.api_version_request = api_version.APIVersionRequest('2.6') controller = controller.TestableController() res = controller.index(req) ... assertions about the response ... manila-10.0.0/doc/source/contributor/services.rst0000664000175000017500000000457113656750227022073 0ustar zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _service_manager_driver: Services, Managers and Drivers ============================== The responsibilities of Services, Managers, and Drivers, can be a bit confusing to people that are new to manila. 
This document attempts to outline the division of responsibilities to make understanding the system a little bit easier. Currently, Managers and Drivers are specified by flags and loaded using utils.load_object(). This method allows for them to be implemented as singletons, classes, modules or objects. As long as the path specified by the flag leads to an object (or a callable that returns an object) that responds to getattr, it should work as a manager or driver. The :mod:`manila.service` Module -------------------------------- .. automodule:: manila.service :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.manager` Module -------------------------------- .. automodule:: manila.manager :noindex: :members: :undoc-members: :show-inheritance: Implementation-Specific Drivers ------------------------------- A manager will generally load a driver for some of its tasks. The driver is responsible for specific implementation details. Anything running shell commands on a host, or dealing with other non-python code should probably be happening in a driver. Drivers should minimize touching the database, although it is currently acceptable for implementation specific data. This may be reconsidered at some point. It usually makes sense to define an Abstract Base Class for the specific driver (i.e. VolumeDriver), to define the methods that a different driver would need to implement. manila-10.0.0/doc/source/contributor/samples/0000775000175000017500000000000013656750362021153 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/contributor/samples/generic_local.conf0000664000175000017500000000326413656750227024615 0ustar zuulzuul00000000000000###################################################################### # This local.conf sets up Devstack with manila enabling the Generic # driver that uses Cinder to provide back-end storage and Nova to # serve storage virtual machines (share servers) in the tenant's domain. 
# This driver operates in driver_handles_share_services=True mode ####################################################################### [[local|localrc]] ADMIN_PASSWORD=secret DATABASE_PASSWORD=$ADMIN_PASSWORD RABBIT_PASSWORD=$ADMIN_PASSWORD SERVICE_PASSWORD=$ADMIN_PASSWORD DEST=/opt/stack DATA_DIR=/opt/stack/data LOGFILE=/opt/stack/devstacklog.txt # Enabling manila services LIBS_FROM_GIT=python-manilaclient enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-ui https://opendev.org/openstack/manila-ui enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin # Generic Back end config options SHARE_DRIVER=manila.share.drivers.generic.GenericShareDriver MANILA_ENABLED_BACKENDS=tokyo,shanghai MANILA_BACKEND1_CONFIG_GROUP_NAME=tokyo MANILA_BACKEND2_CONFIG_GROUP_NAME=shanghai MANILA_SHARE_BACKEND1_NAME=TOKYO MANILA_SHARE_BACKEND2_NAME=SHANGHAI MANILA_OPTGROUP_tokyo_driver_handles_share_servers=True MANILA_OPTGROUP_shanghai_driver_handles_share_servers=True MANILA_OPTGROUP_tokyo_connect_share_server_to_tenant_network=True MANILA_OPTGROUP_shanghai_connect_share_server_to_tenant_network=True MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='snapshot_support=True create_share_from_snapshot_support=True' MANILA_CONFIGURE_DEFAULT_TYPES=True # Storage Virtual Machine settings for the generic driver MANILA_SERVICE_IMAGE_ENABLED=True MANILA_USE_SERVICE_INSTANCE_PASSWORD=Truemanila-10.0.0/doc/source/contributor/samples/cephfs_local.conf0000664000175000017500000000247413656750227024453 0ustar zuulzuul00000000000000###################################################################### # This local.conf sets up Devstack with manila enabling the CEPHFS # driver which operates in driver_handles_share_services=False # mode. 
Pay attention to the storage protocol configuration to run # the cephfs driver with either the native CEPHFS protocol or the NFS # protocol ####################################################################### [[local|localrc]] ADMIN_PASSWORD=secret DATABASE_PASSWORD=$ADMIN_PASSWORD RABBIT_PASSWORD=$ADMIN_PASSWORD SERVICE_PASSWORD=$ADMIN_PASSWORD DEST=/opt/stack DATA_DIR=/opt/stack/data LOGFILE=/opt/stack/devstacklog.txt # Enabling manila services LIBS_FROM_GIT=python-manilaclient enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-ui https://opendev.org/openstack/manila-ui enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin # Enabling ceph enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph ENABLE_CEPH_MANILA=True # IMPORTANT - Comment out / remove the following line to use # the CEPH driver with the native CEPHFS protocol MANILA_CEPH_DRIVER=cephfsnfs # CEPHFS backend options MANILA_SERVICE_IMAGE_ENABLED=False MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='snapshot_support=False' MANILA_CONFIGURE_DEFAULT_TYPES=Truemanila-10.0.0/doc/source/contributor/samples/lvm_local.conf0000664000175000017500000000260113656750227023771 0ustar zuulzuul00000000000000################################################################ # This local.conf sets up Devstack with manila enabling the LVM # driver which operates in driver_handles_share_services=False # mode ################################################################ [[local|localrc]] ADMIN_PASSWORD=secret DATABASE_PASSWORD=$ADMIN_PASSWORD RABBIT_PASSWORD=$ADMIN_PASSWORD SERVICE_PASSWORD=$ADMIN_PASSWORD DEST=/opt/stack DATA_DIR=/opt/stack/data LOGFILE=/opt/stack/devstacklog.txt # Enabling manila services LIBS_FROM_GIT=python-manilaclient enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-ui https://opendev.org/openstack/manila-ui enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin # LVM Backend config options MANILA_SERVICE_IMAGE_ENABLED=False SHARE_DRIVER=manila.share.drivers.lvm.LVMShareDriver MANILA_ENABLED_BACKENDS=chicago,denver MANILA_BACKEND1_CONFIG_GROUP_NAME=chicago MANILA_BACKEND2_CONFIG_GROUP_NAME=denver MANILA_SHARE_BACKEND1_NAME=CHICAGO MANILA_SHARE_BACKEND2_NAME=DENVER MANILA_OPTGROUP_chicago_driver_handles_share_servers=False MANILA_OPTGROUP_denver_driver_handles_share_servers=False SHARE_BACKING_FILE_SIZE=32000M MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='snapshot_support=True create_share_from_snapshot_support=True revert_to_snapshot_support=True mount_snapshot_support=True' MANILA_CONFIGURE_DEFAULT_TYPES=True manila-10.0.0/doc/source/contributor/samples/zfsonlinux_local.conf0000664000175000017500000000264113656750227025416 0ustar zuulzuul00000000000000###################################################################### # This local.conf sets up Devstack with manila enabling the ZFSOnLinux # driver which operates in driver_handles_share_services=False # mode ####################################################################### [[local|localrc]] ADMIN_PASSWORD=secret DATABASE_PASSWORD=$ADMIN_PASSWORD RABBIT_PASSWORD=$ADMIN_PASSWORD SERVICE_PASSWORD=$ADMIN_PASSWORD DEST=/opt/stack DATA_DIR=/opt/stack/data LOGFILE=/opt/stack/devstacklog.txt # Enabling manila services LIBS_FROM_GIT=python-manilaclient enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-ui https://opendev.org/openstack/manila-ui enable_plugin manila-tempest-plugin 
https://opendev.org/openstack/manila-tempest-plugin # ZfsOnLinux Back end config options MANILA_SERVICE_IMAGE_ENABLED=False SHARE_DRIVER=manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver MANILA_ENABLED_BACKENDS=bangalore,mumbai MANILA_BACKEND1_CONFIG_GROUP_NAME=bangalore MANILA_BACKEND2_CONFIG_GROUP_NAME=mumbai MANILA_SHARE_BACKEND1_NAME=BANGALORE MANILA_SHARE_BACKEND2_NAME=MUMBAI MANILA_OPTGROUP_bangalore_driver_handles_share_servers=False MANILA_OPTGROUP_mumbai_driver_handles_share_servers=False MANILA_REPLICA_STATE_UPDATE_INTERVAL=60 MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='snapshot_support=True create_share_from_snapshot_support=True replication_type=readable' MANILA_CONFIGURE_DEFAULT_TYPES=True manila-10.0.0/doc/source/contributor/samples/container_local.conf0000664000175000017500000000246313656750227025163 0ustar zuulzuul00000000000000###################################################################### # This local.conf sets up Devstack with manila enabling the Container # driver that uses Docker and operates in # driver_handles_share_services=True mode ####################################################################### [[local|localrc]] ADMIN_PASSWORD=secret DATABASE_PASSWORD=$ADMIN_PASSWORD RABBIT_PASSWORD=$ADMIN_PASSWORD SERVICE_PASSWORD=$ADMIN_PASSWORD DEST=/opt/stack DATA_DIR=/opt/stack/data LOGFILE=/opt/stack/devstacklog.txt # Enabling manila services LIBS_FROM_GIT=python-manilaclient enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-ui https://opendev.org/openstack/manila-ui enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin # Container Backend config options MANILA_SERVICE_IMAGE_ENABLED=False SHARE_DRIVER=manila.share.drivers.container.driver.ContainerShareDriver MANILA_ENABLED_BACKENDS=vienna,prague MANILA_BACKEND1_CONFIG_GROUP_NAME=vienna MANILA_BACKEND2_CONFIG_GROUP_NAME=prague MANILA_SHARE_BACKEND1_NAME=VIENNA MANILA_SHARE_BACKEND2_NAME=PRAGUE MANILA_OPTGROUP_vienna_driver_handles_share_servers=True MANILA_OPTGROUP_prague_driver_handles_share_servers=True MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='snapshot_support=false' MANILA_CONFIGURE_DEFAULT_TYPES=True manila-10.0.0/doc/source/contributor/contributing.rst0000664000175000017500000002620713656750227022757 0ustar zuulzuul00000000000000============================ So You Want to Contribute... ============================ For general information on contributing to OpenStack, check out the `contributor guide `_ to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. Below will cover the more project specific information you need to get started with Manila (Shared File System service). Where is the code? 
~~~~~~~~~~~~~~~~~~ manila | The OpenStack Shared File System Service | code: https://opendev.org/openstack/manila | docs: https://docs.openstack.org/manila/ | api-ref: https://docs.openstack.org/api-ref/shared-file-system | release model: https://releases.openstack.org/reference/release_models.html#cycle-with-rc | Launchpad: https://launchpad.net/manila python-manilaclient | Python client library for the OpenStack Shared File System Service API; includes standalone CLI shells and OpenStack client plugin and shell | code: https://opendev.org/openstack/python-manilaclient | docs: https://docs.openstack.org/python-manilaclient | release model: https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary | Launchpad: https://launchpad.net/python-manilaclient manila-ui | OpenStack dashboard plugin for the Shared File System Service | code: https://opendev.org/openstack/manila-ui | docs: https://docs.openstack.org/manila-ui | release model: https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary | Launchpad: https://launchpad.net/manila-ui manila-tempest-plugin | An OpenStack test integration (tempest) plugin containing API and scenario tests for the Shared File System Service | code: https://opendev.org/openstack/manila-tempest-plugin | release model: https://releases.openstack.org/reference/release_models.html#cycle-automatic | Launchpad: https://launchpad.net/manila manila-image-elements | A Disk Image Builder project with scripts to build a bootable Linux image for testing and use by some Shared File System Service storage drivers including the Generic Driver | code: https://opendev.org/openstack/manila-tempest-plugin | release model: no releases | Launchpad: https://launchpad.net/manila manila-test-image | A project with scripts to create a Buildroot based image to create a small bootable Linux image, primarily for the purposes of testing Manila | code: https://opendev.org/openstack/manila-image-elements | images: https://tarballs.opendev.org/openstack/manila-image-elements/ | release model: no releases | Launchpad: https://launchpad.net/manila-image-elements manila-specs | Design Specifications for the Shared File System service | code: https://opendev.org/openstack/manila-specs | release model: no releases | Launchpad: https://launchpad.net/manila See the ``CONTRIBUTING.rst`` file in each code repository for more information about contributing to that specific deliverable. Additionally, you should look over the docs links above; most components have helpful developer information specific to that deliverable. Manila and its associated projects follow a coordinated release alongside other OpenStack projects. Development cycles are code named. See the `OpenStack Releases website`_ for names and schedules of the current, past and future development cycles. Communication ~~~~~~~~~~~~~ IRC --- The team uses `IRC `_ extensively for communication and coordination of project activities. The IRC channel is ``#openstack-manila`` on Freenode. Contributors work in various timezones across the world; so many of them run IRC Bouncers and appear to be always online. If you ping someone, or raise a question on the IRC channel, someone will get back to you when they are back on their computer. Additionally, the IRC channel is logged, so if you ask a question when no one is around, you can `check the log `_ to see if it has been answered. 
Team Meetings ------------- We host a one-hour IRC based community meeting every Thursday at 1500 UTC on ``#openstack-meeting-alt`` channel. See the `OpenStack meetings page `_ for the most up-to-date meeting information and for downloading the ICS file to integrate this slot with your calendar. The community meeting is a good opportunity to gather the attention of multiple contributors synchronously. If you wish to do so, add a meeting topic along with your IRC nick to the `Meeting agenda `_. Mailing List ------------ In addition to IRC, the team uses the `OpenStack Discuss Mailing List`_ for development discussions. This list is meant for communication about all things developing OpenStack; so we also use this list to engage with contributors across projects, and make any release cycle announcements. Since it is a wide distribution list, the use of subject line tags is encouraged to make sure you reach the right people. Prefix the subject line with ``[manila]`` when sending email that concern Manila on this list. Other Communication Avenues --------------------------- Contributors gather at least once per release at the `OpenDev Project Team Gathering `_ to discuss plans for an upcoming development cycle. This is usually where developers pool ideas and brainstorm features and bug fixes. We have had both virtual, and in-person Project Technical Gathering events in the past. Before every such event, we gather opinions from the community via IRC Meetings and the Mailing list on planning these Project Technical Gatherings. We make extensive use of `Etherpads `_. You can find some of them that the team used in the past `in the project Wiki `_. To share code snippets or logs, we use `PasteBin `_. .. _contacting-the-core-team: Contacting the Core Team ~~~~~~~~~~~~~~~~~~~~~~~~ When you contribute patches, your change will need to be approved by one or more `maintainers (collectively known as the "Core Team") `_. We're always looking for more maintainers! If you're looking to help maintain Manila, express your interest to the existing core team. We have mentored many individuals for one or more development cycles and added them to the core team. Any new core reviewer needs to be nominated to the team by an existing core reviewer by making a proposal on `OpenStack Discuss Mailing List`_. Other maintainers and contributors can then express their approval or disapproval by responding to the proposal. If there is a decision, the project team lead will add the concerned individual to the core reviewers team. An example proposal is `here. `_ New Feature Planning ~~~~~~~~~~~~~~~~~~~~ If you'd like to propose a new feature, do so by `creating a blueprint on Launchpad. `_ For significant changes we might require a design specification. Feature changes that need a specification include: -------------------------------------------------- - Adding new API methods - Substantially modifying the behavior of existing API methods - Adding a new database resource or modifying existing resources - Modifying a share back end driver interface, thereby affecting all share back end drivers What doesn't need a design specification: ----------------------------------------- - Making trivial (backwards compatible) changes to the behavior of an existing API method. Examples include adding a new field to the response schema of an existing method, or introducing a new query parameter. See :doc:`api_microversion_dev` on how Manila APIs are versioned. 
- Adding new share back end drivers or modifying share drivers, without affecting the share back end driver interface - Adding or changing tests After filing a blueprint, if you're in doubt whether to create a design specification, contact the maintainers. Design specifications are tracked in the `Manila Specifications `_ repository and are published on the `OpenStack Project Specifications website. `_ Refer to the `specification template `_ to structure your design spec. Specifications and new features have deadlines. Usually, specifications for an upcoming release are frozen midway into the release development cycle. To determine the exact deadlines, see the published release calendars by navigating to the specific release from the `OpenStack releases website`_. Task Tracking ~~~~~~~~~~~~~ - We track our bugs in Launchpad: https://bugs.launchpad.net/manila If you're looking for some smaller, easier work item to pick up and get started on, search for the 'low-hanging-fruit' tag - We track future features as blueprints on Launchpad: https://blueprints.launchpad.net/manila - Unimplemented specifications are tracked here: https://specs.openstack.org/openstack/manila-specs/#unimplemented-specs These specifications need a new owner. If you're interested to pick them up and drive them to completion, you can update the corresponding blueprint and get in touch with the project maintainers for help Reporting a Bug ~~~~~~~~~~~~~~~ You found an issue and want to make sure we are aware of it? You can do so on `Launchpad `_. Getting Your Patch Merged ~~~~~~~~~~~~~~~~~~~~~~~~~ When you submit your change through Gerrit, a number of automated Continuous Integration tests are run on your change. A change must receive a +1 vote from the `OpenStack CI system `_ in order for it to be merge-worthy. If these tests are failing and you can't determine why, contact the maintainers. See the :doc:`manila-review-policy` to understand our code review conventions. Generally, reviewers look at new code submissions pro-actively; if you do not have sufficient attention to your change, or are looking for help, do not hesitate to jump into the team's IRC channel, or bring our attention to your issue during a community meeting. The core team would prefer to have an open discussion instead of a one-on-one/private chat. Project Team Lead Duties ~~~~~~~~~~~~~~~~~~~~~~~~ A `project team lead `_ is elected from the project contributors each cycle. Manila Project specific responsibilities for a lead are listed in the :doc:`project-team-lead`. .. _OpenStack Releases website: .. _OpenStack Discuss Mailing List: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss .. _Manila Project Team Lead guide: ../project-team-lead.rst .. _API Microversions: ../api_microversion_dev.rst manila-10.0.0/doc/source/contributor/auth.rst0000664000175000017500000000263313656750227021206 0ustar zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. Copyright 2014 Mirantis, Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. .. _auth: Authentication and Authorization ================================ The :mod:`manila.quota` Module ------------------------------ .. automodule:: manila.quota :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.policy` Module ------------------------------- .. automodule:: manila.policy :noindex: :members: :undoc-members: :show-inheritance: System limits ------------- The following limits need to be defined and enforced: * Maximum cumulative size of shares and snapshots (GB) * Total number of shares * Total number of snapshots * Total number of share networks manila-10.0.0/doc/source/contributor/i18n.rst0000664000175000017500000000274013656750227021023 0ustar zuulzuul00000000000000Internationalization ==================== Manila uses `gettext `_ so that user-facing strings appear in the appropriate language in different locales. Beginning with the Pike series, OpenStack no longer supports log translation. It is not useful to add translation instructions to new code, and the instructions can be removed from old code. Other user-facing strings, e.g. in exception messages, should be translated. To use gettext, make sure that the strings passed to the logger are wrapped in a ``_()`` function call. For example:: msg = _("Share group %s not found.") % share_group_id raise exc.HTTPNotFound(explanation=msg) Do not use ``locals()`` for formatting messages because: 1. It is not as clear as using explicit dicts. 2. It could produce hidden errors during refactoring. 3. Changing the name of a variable causes a change in the message. 4. It creates a lot of otherwise unused variables. If you do not follow the project conventions, your code may cause the LocalizationTestCase.test_multiple_positional_format_placeholders test to fail in manila/tests/test_localization.py. The ``_()`` function is brought into the global scope by doing:: from manila.openstack.common import gettextutils gettextutils.install("manila") These lines are needed in any toplevel script before any manila modules are imported. If this code is missing, it may result in an error that looks like:: NameError: name '_' is not defined manila-10.0.0/doc/source/contributor/index.rst0000664000175000017500000000424313656750227021353 0ustar zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Contributor/Developer Guide =========================== In this section you will find information helpful for contributing to manila. Basic Information ----------------- .. toctree:: :maxdepth: 3 contributing Programming HowTos and Tutorials -------------------------------- .. 
toctree:: :maxdepth: 3 development.environment development-environment-devstack unit_tests tempest_tests addmethod.openstackapi documenting_your_work adding_release_notes commit_message_tags guru_meditation_report user_messages ganesha Background Concepts for manila ------------------------------ .. toctree:: :maxdepth: 3 architecture threading i18n rpc driver_requirements pool-aware-manila-scheduler Other Resources --------------- .. toctree:: :maxdepth: 3 launchpad gerrit manila-review-policy project-team-lead API Reference ------------- .. toctree:: :maxdepth: 3 Manila API v2 Reference api_microversion_dev api_microversion_history experimental_apis Module Reference ---------------- .. toctree:: :maxdepth: 3 intro services database share share_hooks auth scheduler fakes manila share_replication driver_filter_goodness_weigher share_migration .. only:: html Indices and tables ------------------ * :ref:`genindex` * :ref:`search` manila-10.0.0/doc/source/contributor/share_hooks.rst0000664000175000017500000000654113656750227022554 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Manila share driver hooks ========================= Manila share driver hooks are designed to provide additional possibilities for each :term:`manila-share` service; such as any kind of notification and additional actions before and after share driver calls. Possibilities ------------- - Perform actions before some share driver method calls. - Perform actions after some share driver method calls with results of driver call and preceding hook call. - Call additional 'periodic' hook each 'N' ticks. - Possibility to update results of driver's action by post-running hook. Features -------- - Errors in hook execution can be suppressed. - Any hook can be disabled. - Any amount of hook instances can be run at once for each manila-share service. Limitations ----------- - Hooks approach is not asynchronous. That is, if we run hooks, and especially, more than one hook instance, then all of them will be executed in one thread. Implementation in share drivers ------------------------------- Share drivers can [re]define method `get_periodic_hook_data` that runs with each execution of 'periodic' hook and receives list of shares (as parameter) with existing access rules. So, each share driver, for each of its shares can add/update some information that will be used then in the periodic hook. What is required for writing new 'hook' implementation? ------------------------------------------------------- All implementations of 'hook' interface are expected to be in 'manila/share/hooks'. Each implementation should inherit class 'manila.share.hook:HookBase' and redefine its abstract methods. How to use 'hook' implementations? ---------------------------------- Just set config option 'hook_drivers' in driver's config group. For example:: [MY_DRIVER] hook_drivers=path.to:FooClass,path.to:BarClass Then all classes defined above will be initialized. In the same config group, any config option of hook modules can be redefined too. .. 
note:: More info about common config options for hooks can be found in module `manila.share.hook` Driver methods that are wrapped with hooks ------------------------------------------ - allow_access - create_share_instance - create_snapshot - delete_share_instance - delete_share_server - delete_snapshot - deny_access - extend_share - init_host - manage_share - publish_service_capabilities - shrink_share - unmanage_share - create_share_replica - promote_share_replica - delete_share_replica - update_share_replica - create_replicated_snapshot - delete_replicated_snapshot - update_replicated_snapshot Above list with wrapped methods can be extended in future. The :mod:`manila.share.hook.py` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.hook :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/contributor/experimental_apis.rst0000664000175000017500000000651613656750227023762 0ustar zuulzuul00000000000000Experimental APIs ================= Background ---------- Manila uses API microversions to allow natural evolution of its REST APIs over time. But microversions alone cannot solve the question of how to ship APIs that are experimental in nature, are expected to change at any time, and could even be removed entirely without a typical deprecation period. In conjunction with microversions, manila has added a facility for marking individual REST APIs as experimental. To call an experimental API, clients must include a specific HTTP header, ``X-OpenStack-Manila-API-Experimental``, with a value of ``True``. If a user calls an experimental API without including the experimental header, the server would respond with ``HTTP/404``. This forces the client to acknowledge the experimental status of the API and prevents anyone from building an application around a manila feature without realizing the feature could change significantly or even disappear. On the other hand, if a request is made to a non-experimental manila API with ``X-OpenStack-Manila-API-Experimental: True``, the server would respond as if the header had not been included. This is a convenience mechanism, as it allows the client to specify both the requested API version as well as the experimental header (if desired) in one place instead of having to set the headers separately for each API call (although that would be fine, too). When do I need to set an API experimental? ------------------------------------------ An API should be marked as experimental if any of the following is true: - the API is not yet considered a stable, core API - the API is expected to change in later releases - the API could be removed altogether if a feature is redesigned - the API controls a feature that could change or be removed When do I need to remove the experimental annotation from an API? ----------------------------------------------------------------- When the community is satisfied that an experimental feature and its APIs have had sufficient time to gather and incorporate user feedback to consider it stable, which could be one or more OpenStack release cycles, any relevant APIs must be re-released with a microversion bump and without the experimental flag. The maturation period can vary between features, but experimental is NOT a stable state, and an experimental feature should not be left in that state any longer than necessary. 
Because experimental APIs have no conventional deprecation period, the manila core team may optionally choose to remove any experimental versions of an API at the same time that a microversioned stable version is added.

In Code
-------

The ``@api_version`` decorator defined in ``manila/api/openstack/wsgi.py``, which is used for specifying API versions on top-level Controller methods, also allows for tagging an API as experimental. For example:

In the controller class::

    @wsgi.Controller.api_version("2.4", experimental=True)
    def my_api_method(self, req, id):
        ....

This method would only be available if the caller had specified an ``X-OpenStack-Manila-API-Version`` of >= ``2.4`` and had also included ``X-OpenStack-Manila-API-Experimental: True``. If they had specified a lower version (or not specified it and received a lower default version), or if they had failed to include the experimental header, the server would respond with ``HTTP/404``.

manila-10.0.0/doc/source/contributor/guru_meditation_report.rst0000664000175000017500000001042013656750227025030 0ustar zuulzuul00000000000000.. Copyright (c) 2017 Fiberhome Telecommunication Technologies Co.,LTD All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Guru Meditation Reports
=======================

Manila contains a mechanism whereby developers and system administrators can generate a report about the state of a running Manila executable. This report is called a *Guru Meditation Report* (*GMR* for short).

Generating a GMR
----------------

A *GMR* can be generated by sending the *SIGUSR1/SIGUSR2* signal to any Manila process with support (see below). The *GMR* will then output to standard error for that particular process.

For example, suppose that ``manila-api`` has process id ``8675``, and was run with ``2>/var/log/manila/manila-api-err.log``. Then, ``kill -SIGUSR1 8675`` will trigger the Guru Meditation report to be printed to ``/var/log/manila/manila-api-err.log``. These reports can also be saved to a well-known directory for later analysis by the sysadmin or by automated bug analysis tools; to configure this, add the following section to manila.conf::

    [oslo_reports]
    log_dir = '/path/to/logs/dir'

There is another way to trigger report generation: the user can add the following configuration to Manila's conf file::

    [oslo_reports]
    file_event_handler=['The path to a file to watch for changes to trigger '
                        'the reports, instead of signals. Setting this option '
                        'disables the signal trigger for the reports.']
    file_event_handler_interval=['How many seconds to wait between polls when '
                                 'file_event_handler is set, default value '
                                 'is 1']

A *GMR* can then be generated by "touch"ing the file which was specified in file_event_handler. The *GMR* will then output to standard error for that particular process.

For example, suppose that ``manila-api`` was run with ``2>/var/log/manila/manila-api-err.log``, and the file path is ``/tmp/guru_report``. Then, ``touch /tmp/guru_report`` will trigger the Guru Meditation report to be printed to ``/var/log/manila/manila-api-err.log``.
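To recap the two trigger mechanisms side by side, here is a short console sketch reusing the illustrative values from the examples above (process id ``8675``, watched file ``/tmp/guru_report``, and stderr redirected to ``/var/log/manila/manila-api-err.log``; substitute your own values):

.. code-block:: console

    # Signal-based trigger (default behaviour)
    $ kill -SIGUSR1 8675

    # File-based trigger; assumes [oslo_reports] file_event_handler points
    # at this path, which also disables the signal trigger
    $ touch /tmp/guru_report

    # In both cases the report is written to the process's stderr
    $ less /var/log/manila/manila-api-err.log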
Structure of a GMR ------------------ The *GMR* is designed to be extensible; any particular executable may add its own sections. However, the base *GMR* consists of several sections: Package Shows information about the package to which this process belongs, including version information Threads Shows stack traces and thread ids for each of the threads within this process Green Threads Shows stack traces for each of the green threads within this process (green threads don't have thread ids) Configuration Lists all the configuration options currently accessible via the CONF object for the current process Adding Support for GMRs to New Executables ------------------------------------------ Adding support for a *GMR* to a given executable is fairly easy. First import the module (currently residing in oslo.reports), as well as the Manila version module: .. code-block:: python from oslo_reports import guru_meditation_report as gmr from manila import version Then, register any additional sections (optional): .. code-block:: python TextGuruMeditation.register_section('Some Special Section', some_section_generator) Finally (under main), before running the "main loop" of the executable (usually ``service.server(server)`` or something similar), register the *GMR* hook: .. code-block:: python TextGuruMeditation.setup_autorun(version) Extending the GMR ----------------- As mentioned above, additional sections can be added to the GMR for a particular executable. For more information, see the inline documentation about oslo.reports: `oslo.reports `_ manila-10.0.0/doc/source/contributor/database.rst0000664000175000017500000000366313656750227022015 0ustar zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. The Database Layer ================== The :mod:`manila.db.api` Module ------------------------------- .. automodule:: manila.db.api :noindex: :members: :undoc-members: :show-inheritance: The Sqlalchemy Driver --------------------- The :mod:`manila.db.sqlalchemy.api` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.db.sqlalchemy.api :noindex: The :mod:`manila.db.sqlalchemy.models` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.db.sqlalchemy.models :noindex: :members: :undoc-members: :show-inheritance: Tests ----- Tests are lacking for the db api layer and for the sqlalchemy driver. Failures in the drivers would be detected in other test cases, though. DB migration revisions ---------------------- If a DB schema needs to be updated, a new DB migration file needs to be added in ``manila/db/migrations/alembic/versions``. To create such a file it's possible to use ``manila-manage db revision`` or the corresponding tox command:: tox -e dbrevision "change_foo_table" In addition every migration script must be tested. See examples in ``manila/tests/db/migrations/alembic/migrations_data_checks.py``. 
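For orientation, a revision module created this way under ``manila/db/migrations/alembic/versions`` typically has the following shape. This is a minimal, hypothetical sketch: the revision identifiers and the ``foo`` column are invented for illustration only and are not part of manila's real migration history::

    """add_foo_column_to_shares

    Revision ID: abcdef123456
    Revises: 123456abcdef
    Create Date: 2020-01-01 00:00:00.000000

    """

    from alembic import op
    import sqlalchemy as sa

    # Revision identifiers, used by Alembic (placeholder values shown here).
    revision = 'abcdef123456'
    down_revision = '123456abcdef'


    def upgrade():
        # Add a nullable column so existing rows remain valid.
        op.add_column('shares',
                      sa.Column('foo', sa.String(255), nullable=True))


    def downgrade():
        # Remove the column added in upgrade().
        op.drop_column('shares', 'foo')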
manila-10.0.0/doc/source/contributor/project-team-lead.rst0000664000175000017500000001720413656750227023542 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Manila Project Team Lead guide ============================== A `project team lead `_ for Manila is elected from the project contributors. A candidate for PTL needn't be a core reviewer on the team, but, must be a contributor, and be familiar with the project to lead the project through its release process. If you would like to be a core reviewer begin by :ref:`contacting-the-core-team`. All the responsibilities below help us in maintaining the project. A project team lead can perform any of these or delegate tasks to other contributors. General Responsibilities ------------------------ * Ensure manila meetings have a chair * https://opendev.org/opendev/irc-meetings/src/branch/master/meetings/manila-team-meeting.yaml * Update the team people wiki * https://wiki.openstack.org/wiki/Manila#People Release cycle activities ------------------------ * Get acquainted with the release schedule and set Project specific milestones in the `OpenStack Releases repository `_ * Example: https://releases.openstack.org/victoria/schedule.html * Ensure the Manila `Cross Project Liaisons `_ are aware of their duties and are plugged into the respective areas * Acknowledge `community wide cycle goals `_ and find leaders and coordinate with the goal liaisons * Plan team activities such as: * ``Documentation day/s`` to groom documentation bugs and re-write release cycle docs * ``Bug Triage day/s`` to ensure the bug backlog is well groomed * ``Bug Squash day/s`` to close bugs * ``Collaborative Review meeting/s`` to perform a high-touch review of a code submission over a synchronous call * Milestone driven work: * ``Milestone-1``: - Request a release for the python-manilaclient and manila-ui - Retarget any bugs whose fixes missed Milestone-1 * ``Milestone-2``: - Retarget any bugs whose fixes missed Milestone-2 - Create a review priority etherpad and share it with the community and have reviewers sign up * ``Milestone-3``: - Groom the release notes for python-manilaclient and add a 'prelude' section describing the most important changes in the release - Request a final cycle release for python-manilaclient - Retarget any bugs whose fixes missed Milestone-3 - Grant/Deny any Feature Freeze Exception Requests - Update task trackers for Community Wide Goals - Write the cycle-highlights in marketing-friendly sentences and propose to the openstack/releases repo. Usually based on reno prelude but made more readable and friendly * Example: https://review.opendev.org/717801/ - Create the launchpad series and milestones for the next cycle in manila, python-manilaclient and manila-ui. 
Examples: * manila: https://launchpad.net/manila/ussuri * python-manilaclient: https://launchpad.net/python-manilaclient/ussuri * manila-ui: https://launchpad.net/manila-ui/ussuri * ``Before RC-1``: - Groom the release notes for manila-ui and add a 'prelude' section describing the most important changes in the release - Request a final cycle release for manila-ui - Groom the release notes for manila, add a 'prelude' section describing the most important changes in the release - Mark bugs as {release}-rc-potential bugs in launchpad, ensure they are targeted and addressed by RC * ``RC-1``: - Request an RC-1 release for manila - Request a final cycle tagged release for manila-tempest-plugin - Ensure all blueprints for the release have been marked "Implemented" or are re-targeted * ``After RC-1``: - Close the currently active series on Launchpad for manila, python-manilaclient and manila-ui and set the "Development Focus" to the next release. Alternatively, you can switch this on the series page by setting the next release to "active development" - Set the last series status in each of these projects to "current stable branch release" - Set the previous release's series status to "supported" - Move any Unimplemented specs in `the specs repo `_ to "Unimplemented" - Create a new specs directory in the specs repo for the next cycle so people can start proposing new specs * You should NOT plan to have more than one RC. RC2 should only happen if there was a mistake and something was missed for RC-1, or a new regression was discovered * Periodically during the release: * ``Every Week``: - Coordinate the weekly Community Meeting agenda - Coordinate with the Bug Czar and ensure bugs are properly triaged - Check whether any bug-fixes must be back-ported to older stable releases * ``Every 3 weeks``: - Ensure stable branch releases are proposed in case there are any release worthy changes. If there are only documentation or CI/test related fixes, no release for that branch is necessary * To request a release of any manila deliverable: * ``git checkout {branch-to-release-from}`` * ``git log --no-merges {last tag}..`` * Examine commits that will go into the release and use them to decide whether the release is a major, minor, or revision bump according to semver * Then, propose the release with version according to semver x.y.z * X - backward-incompatible changes * Y - features * Z - bug fixes * Use the ``new-release`` command to generate the release * https://releases.openstack.org/reference/using.html#using-new-release-command Project Team Gathering ---------------------- * Create etherpads for PTG planning, cycle retrospective and PTG discussions and announce the Planning etherpad to the community members via the Manila community meeting as well as the `OpenStack Discuss Mailing List` * `Example PTG Planning Etherpad `_ * `Example Retrospective Etherpad `_ * `Example PTG Discussions Etherpad `_ * If the PTG is a physical event, gather an estimate of attendees and request the OpenDev Foundation staff for appropriate meeting space. Ensure the sessions are remote attendee friendly. Coordinate A/V logistics * Set discussion schedule and find an owner to run each proposed discussion at the PTG * All sessions must be recorded; nominate note takers for each discussion * Sign up for group photo at the PTG (if applicable) * After the event, send PTG session summaries and the meeting recording to the `OpenStack Discuss Mailing List` Summit ------ * Prepare the project update presentation.
Enlist help of others * Prepare the on-boarding session materials. Enlist help of others .. _OpenStack Discuss Mailing List: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss .. _contacting the core team: contributing#contacting-the-core-team manila-10.0.0/doc/source/contributor/intro.rst0000664000175000017500000000361013656750227021374 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Introduction to the Shared File Systems service =========================================================== Manila is the file share service project for OpenStack. Manila provides the management of file shares for example, NFS and CIFS as a core service to OpenStack. Manila works with a variety of proprietary backend storage arrays and appliances, with open source distributed filesystems, as well as with a base Linux NFS or Samba server. There are a number of concepts that will help in better understanding of the solutions provided by manila. One aspect can be to explore the different service possibilities provided by manila. Manila, depending on the driver, requires the user by default to create a share network using neutron-net-id and neutron-subnet-id (GlusterFS native driver does not require it). After creation of the share network, the user can proceed to create the shares. Users in manila can configure multiple back-ends just like Cinder. Manila has a share server assigned to every tenant. This is the solution for all back-ends except for GlusterFS. The customer in this scenario is prompted to create a share server using neutron net-id and subnet-id before even trying to create a share. The current low-level services available in manila are: - :term:`manila-api` - :term:`manila-scheduler` - :term:`manila-share` manila-10.0.0/doc/source/contributor/ganesha.rst0000664000175000017500000002701613656750227021655 0ustar zuulzuul00000000000000.. Copyright 2015 Red Hat, Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Ganesha Library =============== The Ganesha Library provides base classes that can be used by drivers to provision shares via NFS (NFSv3 and NFSv4), utilizing the NFS-Ganesha NFS server. Supported operations -------------------- - Allow NFS Share access - Only IP access type is supported. 
- Deny NFS Share access Supported manila drivers ------------------------ - CephFS driver uses ``ganesha.GaneshaNASHelper2`` library class - GlusterFS driver uses ``ganesha.GaneshaNASHelper`` library class Requirements ------------ - Preferred: `NFS-Ganesha `_ v2.4 or later, which allows dynamic update of access rules. Use with manila's ``ganesha.GaneshaNASHelper2`` class as described later in :ref:`using_ganesha_library`. (or) `NFS-Ganesha `_ v2.5.4 or later that allows dynamic update of access rules, and can make use of highly available Ceph RADOS (distributed object storage) as its shared storage for NFS client recovery data, and exports. Use with Ceph v12.2.2 or later, and ``ganesha.GaneshaNASHelper2`` library class in manila Queens release or later. - For use with limitations documented in :ref:`ganesha_known_issues`: `NFS-Ganesha `_ v2.1 to v2.3. Use with manila's ``ganesha.GaneshaNASHelper`` class as described later in :ref:`using_ganesha_library`. NFS-Ganesha configuration ------------------------- The library has just modest requirements against general NFS-Ganesha (in the following: Ganesha) configuration; a best effort was made to remain agnostic towards it as much as possible. This section describes the few requirements. Note that Ganesha's concept of storage backend modules is called FSAL ("File System Abstraction Layer"). The FSAL the driver intends to leverage needs to be enabled in Ganesha config. Beyond that (with default manila config) the following line is needed to be present in the Ganesha config file (that defaults to /etc/ganesha/ganesha.conf): ``%include /etc/ganesha/export.d/INDEX.conf`` The above paths can be customized through manila configuration as follows: - `ganesha_config_dir` = toplevel directory for Ganesha configuration, defaults to /etc/ganesha - `ganesha_config_path` = location of the Ganesha config file, defaults to ganesha.conf in `ganesha_config_dir` - `ganesha_export_dir` = directory where manila generated config bits are stored, defaults to `export.d` in `ganesha_config_dir`. The following line is required to be included (with value expanded) in the Ganesha config file (at `ganesha_config_path`): ``%include /INDEX.conf`` In versions 2.5.4 or later, Ganesha can store NFS client recovery data in Ceph RADOS, and also read exports stored in Ceph RADOS. These features are useful to make Ganesha server that has access to a Ceph (luminous or later) storage backend, highly available. The Ganesha library class `GaneshaNASHelper2` (in manila Queens or later) allows you to store Ganesha exports directly in a shared storage, RADOS objects, by setting the following manila config options in the driver section: - `ganesha_rados_store_enable` = 'True' to persist Ganesha exports and export counter in Ceph RADOS objects - `ganesha_rados_store_pool_name` = name of the Ceph RADOS pool to store Ganesha exports and export counter objects - `ganesha_rados_export_index` = name of the Ceph RADOS object used to store a list of export RADOS object URLs (defaults to 'ganesha-export-index') Check out the `cephfs_driver` documentation for an example driver section that uses these options. To allow Ganesha to read from RADOS objects add the below code block in ganesha's configuration file, substituting values per your setup. .. 
code-block:: console # To read exports from RADOS objects RADOS_URLS { ceph_conf = "/etc/ceph/ceph.conf"; userid = "admin"; } # Replace with actual pool name, and export index object %url rados:/// # To store client recovery data in the same RADOS pool NFSv4 { RecoveryBackend = "rados_kv"; } RADOS_KV { ceph_conf = "/etc/ceph/ceph.conf"; userid = "admin"; # Replace with actual pool name pool = ; } For a fresh setup, make sure to create the Ganesha export index object as an empty object before starting the Ganesha server. .. code-block:: console echo | sudo rados -p ${GANESHA_RADOS_STORE_POOL_NAME} put ganesha-export-index - Further Ganesha related manila configuration -------------------------------------------- There are further Ganesha related options in manila (which affect the behavior of Ganesha, but do not affect how to set up the Ganesha service itself). These are: - `ganesha_service_name` = name of the system service representing Ganesha, defaults to ganesha.nfsd - `ganesha_db_path` = location of on-disk database storing permanent Ganesha state, e.g. an export ID counter to generate export IDs for shares (or) When `ganesha_rados_store_enabled` is set to True, the ganesha export counter is stored in a Ceph RADOS object instead of in a SQLite database local to the manila driver. The counter can be optionally configured with, `ganesha_rados_export_counter` = name of the Ceph RADOS object used as the Ganesha export counter (defaults to 'ganesha-export-counter') - `ganesha_export_template_dir` = directory from where Ganesha loads export customizations (cf. "Customizing Ganesha exports"). .. _using_ganesha_library: Using Ganesha Library in drivers -------------------------------- A driver that wants to use the Ganesha Library has to inherit from ``driver.GaneshaMixin``. The driver has to contain a subclass of ``ganesha.GaneshaNASHelper2``, instantiate it along with the driver instance and delegate ``update_access`` method to it (when appropriate, i.e., when ``access_proto`` is NFS). .. note:: You can also subclass ``ganesha.GaneshaNASHelper``. It works with NFS-Ganesha v2.1 to v2.3 that doesn't support dynamic update of exports. To update access rules without having to restart NFS-Ganesha server, the class manipulates exports created per share access rule (rather than per share) introducing limitations documented in :ref:`ganesha_known_issues`. In the following we explain what has to be implemented by the ``ganesha.GaneshaNASHelper2`` subclass (to which we refer as "helper class"). Ganesha exports are described by so-called *Ganesha export blocks* (introduced in the 2.* release series), that is, snippets of Ganesha config specifying key-pair values. The Ganesha Library generates sane default export blocks for the exports it manages, with one thing left blank, the so-called *FSAL subblock*. The helper class has to implement the ``_fsal_hook`` method which returns the FSAL subblock (in Python represented as a dict with string keys and values). It has one mandatory key, ``Name``, to which the value should be the name of the FSAL (eg.: ``{"Name": "CEPH"}``). Further content of it is optional and FSAL specific. Customizing Ganesha exports --------------------------- As noted, the Ganesha Library provides sane general defaults. However, the driver is allowed to: - customize defaults - allow users to customize exports The config format for Ganesha Library is called *export block template*. 
They are syntactically either Ganesha export blocks, (please consult the Ganesha documentation about the format), or isomorphic JSON (as Ganesha export blocks are by-and-large equivalent to arrayless JSON), with two special placeholders for values: ``@config`` and ``@runtime``. ``@config`` means a value that shall be filled from manila config, and ``@runtime`` means a value that's filled at runtime with dynamic data. As an example, we show the library's defaults in JSON format (also valid Python literal): :: { "EXPORT": { "Export_Id": "@runtime", "Path": "@runtime", "FSAL": { "Name": "@config" }, "Pseudo": "@runtime", "SecType": "sys", "Tag": "@runtime", "CLIENT": { "Clients": "@runtime", "Access_Type": "RW" }, "Squash": "None" } } The Ganesha Library takes these values from *manila/share/drivers/ganesha/conf/00-base-export-template.conf* where the same data is stored in Ganesha conf format (also supplied with comments). For customization, the driver has to extend the ``_default_config_hook`` method as follows: - take the result of the super method (a dict representing an export block template) - set up another export block dict that include your custom values, either by - using a predefined export block dict stored in code - loading a predefined export block from the manila source tree - loading an export block from an user exposed location (to allow user configuration) - merge the two export block dict using the ``ganesha_utils.patch`` method - return the result With respect to *loading export blocks*, that can be done through the utility method ``_load_conf_dir``. Known Restrictions ------------------ - The library does not support network segmented multi-tenancy model but instead works over a flat network, where the tenants share a network. .. _ganesha_known_issues: Known Issues ------------ Following issues concern only users of `ganesha.GaneshaNASHelper` class that works with NFS-Ganesha v2.1 to v2.3. - The export location for shares of a driver that uses the Ganesha Library will be of the format ``:/share-``. However, this is incomplete information, because it pertains only to NFSv3 access, which is partially broken. NFSv4 mounts work well but the actual NFSv4 export paths differ from the above. In detail: - The export location is usable only for NFSv3 mounts. - The export location works only for the first access rule that's added for the given share. Tenants that should be allowed to access according to a further access rule will be refused (cf. https://bugs.launchpad.net/manila/+bug/1513061). - The share is, however, exported through NFSv4, just on paths that differ from the one indicated by the export location, namely at: ``:/share---``, where ```` ranges over the ID-s of access rules of the share (and the export with ```` is accessible according to the access rule of that ID). - NFSv4 access also works with pseudofs. That is, the tenant can do a v4 mount of``:/`` and access the shares allowed for her at the respective ``share---`` subdirectories. The :mod:`manila.share.drivers.ganesha` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.ganesha :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/contributor/manila.rst0000664000175000017500000000362013656750227021503 0ustar zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Common and Misc Libraries ========================= Libraries common throughout manila or just ones that haven't yet been categorized in depth. The :mod:`manila.context` Module -------------------------------- .. automodule:: manila.context :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.exception` Module ---------------------------------- .. automodule:: manila.exception :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.test` Module ----------------------------- .. automodule:: manila.test :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.utils` Module ------------------------------ .. automodule:: manila.utils :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.wsgi` Module ----------------------------- .. automodule:: manila.wsgi :noindex: :members: :undoc-members: :show-inheritance: Tests ----- The :mod:`test_exception` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.tests.test_exception :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/contributor/architecture.rst0000664000175000017500000000600413656750227022723 0ustar zuulzuul00000000000000.. Copyright 2010-2011 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. Copyright 2014 Mirantis, Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Manila System Architecture ========================== The Shared File Systems service is intended to be ran on one or more nodes. Manila uses a sql-based central database that is shared by all manila services in the system. The amount and depth of the data fits into a sql database quite well. For small deployments this seems like an optimal solution. For larger deployments, and especially if security is a concern, manila will be moving towards multiple data stores with some kind of aggregation system. Components ---------- Below you will a brief explanation of the different components. :: /- ( LDAP ) [ Auth Manager ] --- | \- ( DB ) | | | [ Web Dashboard ]- manilaclient -[ manila-api ] -- < AMQP > -- [ manila-scheduler ] -- [ manila-share ] -- ( shared filesystem ) | | | | | < REST > * DB: sql database for data storage. Used by all components (LINKS NOT SHOWN) * Web Dashboard: external component that talks to the api, implemented as a plugin to the OpenStack Dashboard (Horizon) project, source is `here `_. * :term:`manila-api` * Auth Manager: component responsible for users/projects/and roles. 
Can backend to DB or LDAP. This is not a separate binary, but rather a python class that is used by most components in the system. * :term:`manila-scheduler` * :term:`manila-share` Further Challenges ------------------ * More efficient share/snapshot size calculation * Create a notion of "attached" shares with automation of mount operations * Allow admin-created share-servers and share-networks to be used by multiple tenants * Support creation of new subnets for share servers (to connect VLANs with VXLAN/GRE/etc) * Gateway mediated networking model with NFS-Ganesha * Add support for more backends manila-10.0.0/doc/source/admin/0000775000175000017500000000000013656750362016225 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/admin/shared-file-systems-security-services.rst0000664000175000017500000002060313656750227026336 0ustar zuulzuul00000000000000.. _shared_file_systems_security_services: ================= Security services ================= A security service stores client configuration information used for authentication and authorization (AuthN/AuthZ). For example, a share server will be the client for an existing service such as LDAP, Kerberos, or Microsoft Active Directory. You can associate a share with one to three security service types: - ``ldap``: LDAP. - ``kerberos``: Kerberos. - ``active_directory``: Microsoft Active Directory. You can configure a security service with these options: - A DNS IP address. - An IP address or host name. - A domain. - A user or group name. - The password for the user, if you specify a user name. You can add the security service to the :ref:`share network `. To create a security service, specify the security service type, a description of a security service, DNS IP address used inside project's network, security service IP address or host name, domain, security service user or group used by project, and a password for the user. The share name is optional. Create a ``ldap`` security service: .. code-block:: console $ manila security-service-create ldap --dns-ip 8.8.8.8 --server 10.254.0.3 --name my_ldap_security_service +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | status | new | | domain | None | | password | None | | name | my_ldap_security_service | | dns_ip | 8.8.8.8 | | created_at | 2015-09-25T10:19:06.019527 | | updated_at | None | | server | 10.254.0.3 | | user | None | | project_id | 20787a7ba11946adad976463b57d8a2f | | type | ldap | | id | 413479b2-0d20-4c58-a9d3-b129fa592d8e | | description | None | +-------------+--------------------------------------+ To create ``kerberos`` security service, run: .. code-block:: console $ manila security-service-create kerberos --server 10.254.0.3 --user demo --password secret --name my_kerberos_security_service --description "Kerberos security service" +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | status | new | | domain | None | | password | secret | | name | my_kerberos_security_service | | dns_ip | None | | created_at | 2015-09-25T10:26:03.211849 | | updated_at | None | | server | 10.254.0.3 | | user | demo | | project_id | 20787a7ba11946adad976463b57d8a2f | | type | kerberos | | id | 7f46a447-2534-453d-924d-bd7c8e63bbec | | description | Kerberos security service | +-------------+--------------------------------------+ To see the list of created security service use :command:`manila security-service-list`: .. 
code-block:: console $ manila security-service-list +--------------------------------------+------------------------------+--------+----------+ | id | name | status | type | +--------------------------------------+------------------------------+--------+----------+ | 413479b2-0d20-4c58-a9d3-b129fa592d8e | my_ldap_security_service | new | ldap | | 7f46a447-2534-453d-924d-bd7c8e63bbec | my_kerberos_security_service | new | kerberos | +--------------------------------------+------------------------------+--------+----------+ You can add a security service to the existing :ref:`share network `, which is not yet used (a ``share network`` not associated with a share). Add a security service to the share network with ``share-network-security-service-add`` specifying share network and security service. The command returns information about the security service. You can see view new attributes and ``share_networks`` using the associated share network ID. .. code-block:: console $ manila share-network-security-service-add share_net2 my_ldap_security_service $ manila security-service-show my_ldap_security_service +----------------+-------------------------------------------+ | Property | Value | +----------------+-------------------------------------------+ | status | new | | domain | None | | password | None | | name | my_ldap_security_service | | dns_ip | 8.8.8.8 | | created_at | 2015-09-25T10:19:06.000000 | | updated_at | None | | server | 10.254.0.3 | | share_networks | [u'6d36c41f-d310-4aff-a0c2-ffd870e91cab'] | | user | None | | project_id | 20787a7ba11946adad976463b57d8a2f | | type | ldap | | id | 413479b2-0d20-4c58-a9d3-b129fa592d8e | | description | None | +----------------+-------------------------------------------+ It is possible to see the list of security services associated with a given share network. List security services for ``share_net2`` share network with: .. code-block:: console $ manila share-network-security-service-list share_net2 +--------------------------------------+--------------------------+--------+------+ | id | name | status | type | +--------------------------------------+--------------------------+--------+------+ | 413479b2-0d20-4c58-a9d3-b129fa592d8e | my_ldap_security_service | new | ldap | +--------------------------------------+--------------------------+--------+------+ You also can dissociate a security service from the share network and confirm that the security service now has an empty list of share networks: .. code-block:: console $ manila share-network-security-service-remove share_net2 my_ldap_security_service $ manila security-service-show my_ldap_security_service +----------------+--------------------------------------+ | Property | Value | +----------------+--------------------------------------+ | status | new | | domain | None | | password | None | | name | my_ldap_security_service | | dns_ip | 8.8.8.8 | | created_at | 2015-09-25T10:19:06.000000 | | updated_at | None | | server | 10.254.0.3 | | share_networks | [] | | user | None | | project_id | 20787a7ba11946adad976463b57d8a2f | | type | ldap | | id | 413479b2-0d20-4c58-a9d3-b129fa592d8e | | description | None | +----------------+--------------------------------------+ The Shared File Systems service allows you to update a security service field using :command:`manila security-service-update` command with optional arguments such as ``--dns-ip``, ``--server``, ``--domain``, ``--user``, ``--password``, ``--name``, or ``--description``. 
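For example, to change the DNS IP address and the description of the ``my_ldap_security_service`` created earlier (the new values below are purely illustrative):

.. code-block:: console

   $ manila security-service-update my_ldap_security_service \
       --dns-ip 8.8.4.4 \
       --description "LDAP security service for my project"

Running :command:`manila security-service-show my_ldap_security_service` afterwards should show the updated fields.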
To remove a security service not associated with any share networks run: .. code-block:: console $ manila security-service-delete my_ldap_security_service manila-10.0.0/doc/source/admin/shared-file-systems-multi-backend.rst0000664000175000017500000000376413656750227025376 0ustar zuulzuul00000000000000.. _shared_file_systems_multi_backend: =========================== Multi-storage configuration =========================== The Shared File Systems service can provide access to one or more file storage back ends. In general, the workflow with multiple back ends looks similar to the Block Storage service one. Using ``manila.conf``, you can spawn multiple share services. To do it, you should set the `enabled_share_backends` flag in the ``manila.conf`` file. This flag defines the comma-separated names of the configuration stanzas for the different back ends. One name is associated to one configuration group for a back end. The following example runs three configured share services: .. code-block:: ini [DEFAULT] enabled_share_backends=backendEMC1,backendGeneric1,backendNetApp [backendGeneric1] share_driver=manila.share.drivers.generic.GenericShareDriver share_backend_name=one_name_for_two_backends service_instance_user=ubuntu_user service_instance_password=ubuntu_user_password service_image_name=ubuntu_image_name path_to_private_key=/home/foouser/.ssh/id_rsa path_to_public_key=/home/foouser/.ssh/id_rsa.pub [backendEMC1] share_driver=manila.share.drivers.emc.driver.EMCShareDriver share_backend_name=backendEMC2 emc_share_backend=vnx emc_nas_server=1.1.1.1 emc_nas_password=password emc_nas_login=user emc_nas_server_container=server_3 emc_nas_pool_name="Pool 2" [backendNetApp] share_driver = manila.share.drivers.netapp.common.NetAppDriver driver_handles_share_servers = True share_backend_name=backendNetApp netapp_login=user netapp_password=password netapp_server_hostname=1.1.1.1 netapp_root_volume_aggregate=aggr01 To spawn separate groups of share services, you can use separate configuration files. If it is necessary to control each back end in a separate way, you should provide a single configuration file per each back end. .. toctree:: shared-file-systems-scheduling.rst shared-file-systems-services-manage.rst manila-10.0.0/doc/source/admin/shared-file-systems-manage-and-unmanage-share.rst0000664000175000017500000002357113656750227027536 0ustar zuulzuul00000000000000.. _shared_file_systems_manage_and_unmanage_share: ========================= Manage and unmanage share ========================= To ``manage`` a share means that an administrator, rather than a share driver, manages the storage lifecycle. This approach is appropriate when an administrator already has the custom non-manila share with its size, shared file system protocol, and export path, and an administrator wants to register it in the Shared File System service. To ``unmanage`` a share means to unregister a specified share from the Shared File Systems service. Administrators can revert an unmanaged share to managed status if needed. .. _unmanage_share: Unmanage a share ---------------- .. note:: The ``unmanage`` operation is not supported for shares that were created on top of share servers and created with share networks until Shared File Systems API version ``2.49`` (Stein/Manila 8.0.0 release). .. important:: Shares that have dependent snapshots or share replicas cannot be removed from the Shared File Systems service unless the snapshots have been removed or unmanaged and the share replicas have been removed. 
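A quick way to verify that a share has no such dependencies is to list its snapshots and replicas before unmanaging it. A sketch, using the ``share_for_docs`` share from the examples below and assuming the ``--share-id`` filters accept the share name or ID:

.. code-block:: console

   $ manila snapshot-list --share-id share_for_docs
   $ manila share-replica-list --share-id share_for_docs

Both listings should be empty before you proceed.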
Unmanaging a share removes it from the management of the Shared File Systems service without deleting the share. It is a non-disruptive operation and existing clients are not disconnected, and the functionality is aimed at aiding infrastructure operations and maintenance workflows. To unmanage a share, run the :command:`manila unmanage ` command. Then try to print the information about the share. The returned result should indicate that Shared File Systems service won't find the share: .. code-block:: console $ manila unmanage share_for_docs $ manila show share_for_docs ERROR: No share with a name or ID of 'share_for_docs' exists. .. _manage_share: Manage a share -------------- .. note:: The ``manage`` operation is not supported for shares that are exported on share servers via share networks until Shared File Systems API version ``2.49`` (Stein/Manila 8.0.0 release). To register the non-managed share in the File System service, run the :command:`manila manage` command: .. code-block:: console manila manage [--name ] [--description ] [--share_type ] [--share-server-id ] [--driver_options [ [ ...]]] The positional arguments are: - service_host. The manage-share service host in ``host@backend#POOL`` format, which consists of the host name for the back end, the name of the back end, and the pool name for the back end. - protocol. The Shared File Systems protocol of the share to manage. Valid values are NFS, CIFS, GlusterFS, HDFS or MAPRFS. - export_path. The share export path in the format appropriate for the protocol: - NFS protocol. 10.0.0.1:/foo_path. - CIFS protocol. \\\\10.0.0.1\\foo_name_of_cifs_share. - HDFS protocol. hdfs://10.0.0.1:foo_port/foo_share_name. - GlusterFS. 10.0.0.1:/foo_volume. - MAPRFS. maprfs:///share-0 -C -Z -N foo. The optional arguments are: - name. The name of the share that is being managed. - share_type. The share type of the share that is being managed. If not specified, the service will try to manage the share with the configured default share type. - share_server_id. must be provided to manage shares within share networks. This argument can only be used with File Systems API version ``2.49`` (Stein/Manila 8.0.0 release) and beyond. - driver_options. An optional set of one or more key and value pairs that describe driver options. As a result, a special share type named ``for_managing`` was used in example. To manage share, run: .. code-block:: console $ manila manage \ manila@paris#shares \ nfs \ 1.0.0.4:/shares/manila_share_6d2142d8_2b9b_4405_867f_8a48094c893f \ --name share_for_docs \ --description "We manage share." \ --share_type for_managing +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | manage_starting | | share_type_name | for_managing | | description | We manage share. 
| | availability_zone | None | | share_network_id | None | | share_server_id | None | | share_group_id | None | | host | manila@paris#shares | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | ddfb1240-ed5e-4071-a031-b842035a834a | | size | None | | name | share_for_docs | | share_type | 14ee8575-aac2-44af-8392-d9c9d344f392 | | has_replicas | False | | replication_type | None | | created_at | 2016-03-25T15:22:43.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+--------------------------------------+ Check that the share is available: .. code-block:: console $ manila show share_for_docs +----------------------+--------------------------------------------------------------------------+ | Property | Value | +----------------------+--------------------------------------------------------------------------+ | status | available | | share_type_name | for_managing | | description | We manage share. | | availability_zone | None | | share_network_id | None | | export_locations | | | | path = 1.0.0.4:/shares/manila_share_6d2142d8_2b9b_4405_867f_8a48094c893f | | | preferred = False | | | is_admin_only = False | | | id = d4d048bf-4159-4a94-8027-e567192b8d30 | | | share_instance_id = 4c8e3887-4f9a-4775-bab4-e5840a09c34e | | | path = 2.0.0.3:/shares/manila_share_6d2142d8_2b9b_4405_867f_8a48094c893f | | | preferred = False | | | is_admin_only = True | | | id = 1dd4f0a3-778d-486a-a851-b522f6e7cf5f | | | share_instance_id = 4c8e3887-4f9a-4775-bab4-e5840a09c34e | | share_server_id | None | | share_group_id | None | | host | manila@paris#shares | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | ddfb1240-ed5e-4071-a031-b842035a834a | | size | 1 | | name | share_for_docs | | share_type | 14ee8575-aac2-44af-8392-d9c9d344f392 | | has_replicas | False | | replication_type | None | | created_at | 2016-03-25T15:22:43.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +----------------------+--------------------------------------------------------------------------+ manila-10.0.0/doc/source/admin/zfs_on_linux_driver.rst0000664000175000017500000001356113656750227023055 0ustar zuulzuul00000000000000.. Copyright (c) 2016 Mirantis Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ZFS (on Linux) Driver ===================== Manila ZFSonLinux share driver uses ZFS filesystem for exporting NFS shares. Written and tested using Linux version of ZFS. Requirements ------------ * 'NFS' daemon that can be handled via "exportfs" app. * 'ZFS' filesystem packages, either Kernel or FUSE versions. * ZFS zpools that are going to be used by Manila should exist and be configured as desired. Manila will not change zpool configuration. * For remote ZFS hosts according to manila-share service host SSH should be installed. 
* For ZFS hosts that support replication: * SSH access for each other should be passwordless. * Service IP addresses should be available by ZFS hosts for each other. Supported Operations -------------------- The following operations are supported: * Create NFS Share * Delete NFS Share * Manage NFS Share * Unmanage NFS Share * Allow NFS Share access * Only IP access type is supported for NFS * Both access levels are supported - 'RW' and 'RO' * Deny NFS Share access * Create snapshot * Delete snapshot * Manage snapshot * Unmanage snapshot * Create share from snapshot * Extend share * Shrink share * Replication (experimental): * Create/update/delete/promote replica operations are supported * Share migration (experimental) Possibilities ------------- * Any amount of ZFS zpools can be used by share driver. * Allowed to configure default options for ZFS datasets that are used for share creation. * Any amount of nested datasets is allowed to be used. * All share replicas are read-only, only active one is RW. * All share replicas are synchronized periodically, not continuously. So, status 'in_sync' means latest sync was successful. Time range between syncs equals to value of config global opt 'replica_state_update_interval'. * Driver is able to use qualified extra spec 'zfsonlinux:compression'. It can contain any value that is supported by used ZFS app. But if it is disabled via config option with value 'compression=off', then it will not be used. Restrictions ------------ The ZFSonLinux share driver has the following restrictions: * Only IP access type is supported for NFS. * Only FLAT network is supported. * 'Promote share replica' operation will switch roles of current 'secondary' replica and 'active'. It does not make more than one active replica available. * 'SaMBa' based sharing is not yet implemented. * 'Thick provisioning' is not yet implemented. Known problems -------------- * 'Promote share replica' operation will make ZFS filesystem that became secondary as RO only on NFS level. On ZFS level system will stay mounted as was - RW. Backend Configuration --------------------- The following parameters need to be configured in the manila configuration file for the ZFSonLinux driver: * share_driver = manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver * driver_handles_share_servers = False * replication_domain = custom_str_value_as_domain_name * if empty, then replication will be disabled * if set then will be able to be used as replication peer for other backend with same value. * zfs_share_export_ip = * zfs_service_ip = * zfs_zpool_list = zpoolname1,zpoolname2/nested_dataset_for_zpool2 * can be one or more zpools * can contain nested datasets * zfs_dataset_creation_options = * readonly,quota,sharenfs and sharesmb options will be ignored * zfs_dataset_name_prefix = * Prefix to be used in each dataset name. * zfs_dataset_snapshot_name_prefix = * Prefix to be used in each dataset snapshot name. * zfs_use_ssh = * set 'False' if ZFS located on the same host as 'manila-share' service * set 'True' if 'manila-share' service should use SSH for ZFS configuration * zfs_ssh_username = * required for replication operations * required for SSH'ing to ZFS host if 'zfs_use_ssh' is set to 'True' * zfs_ssh_user_password = * password for 'zfs_ssh_username' of ZFS host. 
* used only if 'zfs_use_ssh' is set to 'True' * zfs_ssh_private_key_path = * used only if 'zfs_use_ssh' is set to 'True' * zfs_share_helpers = NFS=manila.share.drivers.zfsonlinux.utils.NFSviaZFSHelper * Approach for setting up helpers is similar to various other share driver * At least one helper should be used. * zfs_replica_snapshot_prefix = * Prefix to be used in dataset snapshot names that are created by 'update replica' operation. * zfs_migration_snapshot_prefix = * Prefix to be used in dataset snapshot names that are created for 'migration' operation. Restart of :term:`manila-share` service is needed for the configuration changes to take effect. The :mod:`manila.share.drivers.zfsonlinux.driver` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.zfsonlinux.driver :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.share.drivers.zfsonlinux.utils` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.zfsonlinux.utils :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/shared-file-systems-scheduling.rst0000664000175000017500000000300113656750227024764 0ustar zuulzuul00000000000000.. _shared_file_systems_scheduling: ========== Scheduling ========== The Shared File Systems service uses a scheduler to provide unified access for a variety of different types of shared file systems. The scheduler collects information from the active shared services, and makes decisions such as what shared services will be used to create a new share. To manage this process, the Shared File Systems service provides Share types API. A share type is a list from key-value pairs called extra-specs. The scheduler uses required and un-scoped extra-specs to look up the shared service most suitable for a new share with the specified share type. For more information about extra-specs and their type, see `Capabilities and Extra-Specs `_ section in developer documentation. The general scheduler workflow: #. Share services report information about their existing pool number, their capacities, and their capabilities. #. When a request on share creation arrives, the scheduler picks a service and pool that best serves the request, using share type filters and back end capabilities. If back end capabilities pass through, all filters request the selected back end where the target pool resides. #. The share driver receives a reply on the request status, and lets the target pool serve the request as the scheduler instructs. The scoped and un-scoped share types are available for the driver implementation to use as needed. manila-10.0.0/doc/source/admin/shared-file-systems-share-server-management.rst0000664000175000017500000001637013656750227027374 0ustar zuulzuul00000000000000.. _shared_file_systems_share_server_management: ============= Share servers ============= A share server is a resource created by the Shared File Systems service when the driver is operating in the `driver_handles_share_servers = True` mode. A share server exports users' shares, manages their exports and access rules. Share servers are abstracted away from end users. Drivers operating in `driver_handles_share_servers = True` mode manage the lifecycle of these share servers automatically. Administrators can however remove the share servers from the management of the Shared File Systems service without destroying them. They can also bring in existing share servers under the Shared File Systems service. 
They can list all available share servers and update their status attribute. They can delete an specific share server if it has no dependent shares. ======================= Share server management ======================= To ``manage`` a share server means that when the driver is operating in the ``driver_handles_share_servers = True`` mode, the administrator can bring a pre-existing share server under the management of the Shared File Systems service. To ``unmanage`` means that the administrator is able to unregister an existing share server from the Shared File Systems service without deleting it from the storage back end. To be unmanaged, the referred share server cannot have any shares known to the Shared File Systems service. Manage a share server --------------------- To bring a share server under the Shared File System service, use the :command:`manila share-server-manage` command: .. code-block:: console manila share-server-manage [--driver_options [ [ ...]]] The positional arguments are: - host. The manage-share service host in ``host@backend`` format, which consists of the host name for the back end and the name of the back end. - share_network. The share network where the share server is contained. - identifier. The identifier of the share server on the back end storage. The ``driver_options`` is an optional set of one or more driver-specific metadata items as key and value pairs. The specific key-value pairs necessary vary from driver to driver. Consult the driver-specific documentation to determine if any specific parameters must be supplied. Ensure that the share type has the ``driver_handles_share_servers = True`` extra-spec. If using an OpenStack Networking (Neutron) based plugin, ensure that: - There are some ports created, which correspond to the share server interfaces. - The correct IP addresses are allocated to these ports. - ``manila:share`` is set as the owner of these ports. To manage a share server, run: .. code-block:: console $ manila share-server-manage \ manila@paris \ share_net_test \ backend_server_1 \ +--------------------+------------------------------------------+ | Property | Value | +--------------------+------------------------------------------+ | id | 441d806f-f0e0-4c90-b7e2-a553c6aa76b2 | | project_id | 907004508ef4447397ce6741a8f037c1 | | updated_at | None | | status | manage_starting | | host | manila@paris | | share_network_name | share_net_test | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | created_at | 2019-04-25T18:25:23.000000 | | backend_details | {} | | is_auto_deletable | False | | identifier | backend_server_1 | +--------------------+------------------------------------------+ .. note:: The ``is_auto_deletable`` property is used by the Shared File Systems service to identify a share server that can be deleted by internal routines. The service can automatically delete share servers if there are no shares associated with them. To delete a share server when the last share is deleted, set the option: ``delete_share_server_with_last_share``. If a scheduled cleanup is desired instead, ``automatic_share_server_cleanup`` and ``unused_share_server_cleanup_interval`` options can be set. Only one of the cleanup methods can be used at one time. Any share server that has a share unmanaged from it cannot be automatically deleted by the Shared File Systems service. The same is true for share servers that have been managed into the service. Cloud administrators can delete such share servers manually if desired. 
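The cleanup behavior referred to above is configured in ``manila.conf``. A minimal sketch, shown under ``[DEFAULT]`` for simplicity, with illustrative values (the cleanup interval is assumed to be expressed in minutes); enable only one of the two methods at a time:

.. code-block:: ini

   [DEFAULT]
   # Method 1: delete a share server as soon as its last share is deleted
   delete_share_server_with_last_share = True

   # Method 2 (do not combine with method 1): periodically clean up
   # share servers that no longer have any shares
   # automatic_share_server_cleanup = True
   # unused_share_server_cleanup_interval = 10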
Unmanage a share server ----------------------- To ``unmanage`` a share server, run :command:`manila share-server-unmanage `. .. code-block:: console $ manila share-server-unmanage 441d806f-f0e0-4c90-b7e2-a553c6aa76b2 $ manila share-server-show 441d806f-f0e0-4c90-b7e2-a553c6aa76b2 ERROR: Share server 441d806f-f0e0-4c90-b7e2-a553c6aa76b2 could not be found. Reset the share server state ---------------------------- As administrator you are able to reset a share server state. To reset the state of a share server, run :command:`manila share-server-reset-state --state `. The positional arguments are: - share-server. The share server name or id. - state. The state to be assigned to the share server. The options are: - ``active`` - ``error`` - ``deleting`` - ``creating`` - ``managing`` - ``unmanaging`` - ``unmanage_error`` - ``manage_error`` List share servers ------------------ To list share servers, run :command:`manila share-server-list` command: .. code-block:: console manila share-server-list [--host ] [--status ] [--share-network ] [--project-id ] [--columns ] All the arguments above are optional. They can ben used to filter share servers. The options to filter: - host. Shows all the share servers pertaining to the specified host. - status. Shows all the share servers that are in the specified status. - share_network. Shows all the share servers that pertain in the same share network. - project_id. Shows all the share servers pertaining to the same project. - columns. The administrator specifies which columns to display in the result of the list operation. .. code-block:: console $ manila share-server-list +--------------------------------------+--------------+--------+----------------+----------------------------------+------------+ | Id | Host | Status | Share Network | Project Id | Updated_at | +--------------------------------------+--------------+--------+----------------+----------------------------------+------------+ | 441d806f-f0e0-4c90-b7e2-a553c6aa76b2 | manila@paris | active | share_net_test | fd6d30efa5ff4c99834dc0d13f96e8eb | None | +--------------------------------------+--------------+--------+----------------+----------------------------------+------------+ manila-10.0.0/doc/source/admin/hpe_3par_driver.rst0000664000175000017500000002665213656750227022046 0ustar zuulzuul00000000000000.. Copyright 2015 Hewlett Packard Development Company, L.P. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. HPE 3PAR Driver for OpenStack Manila ==================================== The HPE 3PAR manila driver provides NFS and CIFS shared file systems to OpenStack using HPE 3PAR's File Persona capabilities. .. note:: In OpenStack releases prior to Mitaka this driver was called the HP 3PAR driver. The Liberty configuration reference can be found at: http://docs.openstack.org/liberty/config-reference/content/hp-3par-share-driver.html For information on HPE 3PAR Driver for OpenStack Manila, refer to `content kit page `_. 
Supported Operations -------------------- The following operations are supported with HPE 3PAR File Persona: - Create/delete NFS and CIFS shares * Shares are not accessible until access rules allow access - Allow/deny NFS share access * IP access rules are required for NFS share access - Allow/deny CIFS share access * CIFS shares require user access rules. * User access requires a 3PAR local or AD user (LDAP is not yet supported) - Create/delete snapshots - Create shares from snapshots Share networks are not supported. Shares are created directly on the 3PAR without the use of a share server or service VM. Network connectivity is setup outside of manila. Requirements ------------ On the system running the manila share service: - python-3parclient 4.2.0 or newer from PyPI. On the HPE 3PAR array: - HPE 3PAR Operating System software version 3.2.1 MU3 or higher - The array class and hardware configuration must support File Persona Pre-Configuration on the HPE 3PAR --------------------------------- - HPE 3PAR File Persona must be initialized and started (:code:`startfs`) - A File Provisioning Group (FPG) must be created for use with manila - A Virtual File Server (VFS) must be created for the FPG - The VFS must be configured with an appropriate share export IP address - A local user in the Administrators group is needed for CIFS shares Backend Configuration --------------------- The following parameters need to be configured in the manila configuration file for the HPE 3PAR driver: - `share_backend_name` = - `share_driver` = manila.share.drivers.hpe.hpe_3par_driver.HPE3ParShareDriver - `driver_handles_share_servers` = False - `hpe3par_fpg` = - `hpe3par_share_ip_address` = - `hpe3par_san_ip` = - `hpe3par_api_url` = <3PAR WS API Server URL> - `hpe3par_username` = <3PAR username with the 'edit' role> - `hpe3par_password` = <3PAR password for the user specified in hpe3par_username> - `hpe3par_san_login` = - `hpe3par_san_password` = - `hpe3par_debug` = - `hpe3par_cifs_admin_access_username` = - `hpe3par_cifs_admin_access_password` = - `hpe3par_cifs_admin_access_domain` = - `hpe3par_share_mount_path` = The `hpe3par_share_ip_address` must be a valid IP address for the configured FPG's VFS. This IP address is used in export locations for shares that are created. Networking must be configured to allow connectivity from clients to shares. `hpe3par_cifs_admin_access_username` and `hpe3par_cifs_admin_access_password` must be provided to delete nested CIFS shares. If they are not, the share contents will not be deleted. `hpe3par_cifs_admin_access_domain` and `hpe3par_share_mount_path` can be provided for additional configuration. Restart of :term:`manila-share` service is needed for the configuration changes to take effect. Backend Configuration for AD user --------------------------------- The following parameters need to be configured through HPE 3PAR CLI to access file share using AD. 
Set authentication parameters:: $ setauthparam ldap-server IP_ADDRESS_OF_AD_SERVER $ setauthparam binding simple $ setauthparam user-attr AD_DOMAIN_NAME\\ $ setauthparam accounts-dn CN=Users,DC=AD,DC=DOMAIN,DC=NAME $ setauthparam account-obj user $ setauthparam account-name-attr sAMAccountName $ setauthparam memberof-attr memberOf $ setauthparam super-map CN=AD_USER_GROUP,DC=AD,DC=DOMAIN,DC=NAME Verify new authentication parameters set as expected:: $ showauthparam Verify AD users set as expected:: $ checkpassword AD_USER Command result should show ``user AD_USER is authenticated and authorized`` message on successful configuration. Add 'ActiveDirectory' in authentication providers list:: $ setfs auth ActiveDirectory Local Verify authentication provider list shows 'ActiveDirectory':: $ showfs -auth Set/Add AD user on FS:: $ setfs ad –passwd PASSWORD AD_USER AD_DOMAIN_NAME Verify FS user details:: $ showfs -ad Example of using AD user to access CIFS share --------------------------------------------- Pre-requisite: - Share type should be configured for 3PAR backend Create a CIFS file share with 2GB of size:: $ manila create --name FILE_SHARE_NAME --share-type SHARE_TYPE CIFS 2 Check file share created as expected:: $ manila show FILE_SHARE_NAME Configuration to provide share access to AD user:: $ manila access-allow FILE_SHARE_NAME user AD_DOMAIN_NAME\\\\AD_USER --access-level rw Check users permission set as expected:: $ manila access-list FILE_SHARE_NAME The AD_DOMAIN_NAME\\AD_USER must be listed in access_to column and should show active in its state column as result of this command. Network Approach ---------------- Connectivity between the storage array (SSH/CLI and WSAPI) and the manila host is required for share management. Connectivity between the clients and the VFS is required for mounting and using the shares. This includes: - Routing from the client to the external network - Assigning the client an external IP address (e.g., a floating IP) - Configuring the manila host networking properly for IP forwarding - Configuring the VFS networking properly for client subnets Share Types ----------- When creating a share, a share type can be specified to determine where and how the share will be created. If a share type is not specified, the `default_share_type` set in the manila configuration file is used. Manila requires that the share type includes the `driver_handles_share_servers` extra-spec. This ensures that the share will be created on a backend that supports the requested driver_handles_share_servers (share networks) capability. For the HPE 3PAR driver, this must be set to False. Another common manila extra-spec used to determine where a share is created is `share_backend_name`. When this extra-spec is defined in the share type, the share will be created on a backend with a matching share_backend_name. The HPE 3PAR driver automatically reports capabilities based on the FPG used for each backend. Share types with extra specs can be created by an administrator to control which share types are allowed to use FPGs with or without specific capabilities. The following extra-specs are used with the capabilities filter and the HPE 3PAR driver: - `hpe3par_flash_cache` = ' True' or ' False' - `thin_provisioning` = ' True' or ' False' - `dedupe` = ' True' or ' False' `hpe3par_flash_cache` will be reported as True for backends that have 3PAR's Adaptive Flash Cache enabled. `thin_provisioning` will be reported as True for backends that use thin provisioned volumes. 
FPGs that use fully provisioned volumes will report False. Backends that use thin provisioning also support manila's over-subscription feature. `dedupe` will be reported as True for backends that use deduplication technology. Scoped extra-specs are used to influence vendor-specific implementation details. Scoped extra-specs use a prefix followed by a colon. For HPE 3PAR these extra-specs have a prefix of `hpe3par`. For HP 3PAR these extra-specs have a prefix of `hp3par`. The following HPE 3PAR extra-specs are used when creating CIFS (SMB) shares: - `hpe3par:smb_access_based_enum` = true or false - `hpe3par:smb_continuous_avail` = true or false - `hpe3par:smb_cache` = off, manual, optimized or auto `smb_access_based_enum` (Access Based Enumeration) specifies if users can see only the files and directories to which they have been allowed access on the shares. The default is `false`. `smb_continuous_avail` (Continuous Availability) specifies if SMB3 continuous availability features should be enabled for this share. If not specified, the default is `true`. This setting will be ignored with hp3parclient 3.2.1 or earlier. `smb_cache` specifies client-side caching for offline files. Valid values are: * `off`: The client must not cache any files from this share. The share is configured to disallow caching. * `manual`: The client must allow only manual caching for the files open from this share. * `optimized`: The client may cache every file that it opens from this share. Also, the client may satisfy the file requests from its local cache. The share is configured to allow automatic caching of programs and documents. * `auto`: The client may cache every file that it opens from this share. The share is configured to allow automatic caching of documents. * If this is not specified, the default is `manual`. The following HPE 3PAR extra-specs are used when creating NFS shares: - `hpe3par:nfs_options` = Comma separated list of NFS export options The NFS export options have the following limitations: * `ro` and `rw` are not allowed (manila will determine the read-only option) * `no_subtree_check` and `fsid` are not allowed per HPE 3PAR CLI support * `(in)secure` and `(no_)root_squash` are not allowed because the HPE 3PAR driver controls those settings All other NFS options are forwarded to the HPE 3PAR as part of share creation. The HPE 3PAR will do additional validation at share creation time. Refer to HPE 3PAR CLI help for more details. Delete Nested Shares -------------------- When a nested share is deleted (nested shares will be created when ``hpe_3par_fstore_per_share`` is set to ``False``), the file tree also attempts to be deleted. With NFS shares, there is no additional configuration that needs to be done. For CIFS shares, ``hpe3par_cifs_admin_access_username`` and ``hpe3par_cifs_admin_access_password`` must be provided. If they are omitted, the original functionality is honored and the file tree remains untouched. ``hpe3par_cifs_admin_access_domain`` and ``hpe3par_share_mount_path`` can also be specified to create further customization. The :mod:`manila.share.drivers.hpe.hpe_3par_driver` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
automodule:: manila.share.drivers.hpe.hpe_3par_driver :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/export_location_metadata.rst0000664000175000017500000000343313656750227024033 0ustar zuulzuul00000000000000Export Location Metadata ======================== Manila shares can have one or more export locations. The exact number depends on the driver and the storage controller, and there is no preference for more or fewer export locations. Usually drivers create an export location for each physical network interface through which the share can be accessed. Because not all export locations have the same qualities, Manila allows drivers to add additional keys to the dict returned for each export location when a share is created. The share manager stores these extra keys and values in the database and they are available to the API service, which may expose them through the REST API or use them for filtering. Metadata Keys ------------- Only keys defined in this document are valid. Arbitrary driver-defined keys are not allowed. The following keys are defined: * `is_admin_only` - May be True or False. Defaults to False. Indicates that the export location exists for administrative purposes. If is_admin_only=True, then the export location is hidden from non-admin users calling the REST API. Also, these export locations are assumed to be reachable directly from the admin network, which is important for drivers that support share servers and which have some export locations only accessible to tenants. * `preferred` - May be True or False. Defaults to False. Indicates that clients should prefer to mount this export location over other export locations that are not preferred. This may be used by drivers which have fast/slow paths to indicate to clients which paths are faster. It could be used to indicate a path is preferred for another reason, as long as the reason isn't one that changes over the life of the manila-share service. This key is always visible through the REST API. manila-10.0.0/doc/source/admin/shared-file-systems-share-networks.rst0000664000175000017500000001510113656750227025617 0ustar zuulzuul00000000000000.. _shared_file_systems_share_networks: ============== Share networks ============== Share network is an entity that encapsulates interaction with the OpenStack Networking service. If the share driver that you selected runs in a mode requiring Networking service interaction, specify the share network when creating a new share network. How to create share network ~~~~~~~~~~~~~~~~~~~~~~~~~~~ To list networks in a project, run: .. code-block:: console $ openstack network list +--------------+---------+--------------------+ | ID | Name | Subnets | +--------------+---------+--------------------+ | bee7411d-... | public | 884a6564-0f11-... | | | | e6da81fa-5d5f-... | | 5ed5a854-... | private | 74dcfb5a-b4d7-... | | | | cc297be2-5213-... | +--------------+---------+--------------------+ A share network stores network information that share servers can use where shares are hosted. You can associate a share with a single share network. When you create or update a share, you can optionally specify the ID of a share network through which instances can access the share. For more information about supported plug-ins for share networks, see :ref:`shared_file_systems_network_plugins`. A share network has these attributes: - The IP block in Classless Inter-Domain Routing (CIDR) notation from which to allocate the network. - The IP version of the network. 
- The network type, which is `vlan`, `vxlan`, `gre`, or `flat`. If the network uses segmentation, a segmentation identifier. For example, VLAN, VXLAN, and GRE networks use segmentation. To create a share network with private network and subnetwork, run: .. code-block:: console $ manila share-network-create --neutron-net-id 5ed5a854-21dc-4ed3-870a-117b7064eb21 \ --neutron-subnet-id 74dcfb5a-b4d7-4855-86f5-a669729428dc --name my_share_net --description "My first share network" +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | name | my_share_net | | segmentation_id | None | | created_at | 2015-09-24T12:06:32.602174 | | neutron_subnet_id | 74dcfb5a-b4d7-4855-86f5-a669729428dc | | updated_at | None | | network_type | None | | neutron_net_id | 5ed5a854-21dc-4ed3-870a-117b7064eb21 | | ip_version | None | | cidr | None | | project_id | 20787a7ba11946adad976463b57d8a2f | | id | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a | | description | My first share network | +-------------------+--------------------------------------+ The ``segmentation_id``, ``cidr``, ``ip_version``, and ``network_type`` share network attributes are automatically set to the values determined by the network provider. To check the network list, run: .. code-block:: console $ manila share-network-list +--------------------------------------+--------------+ | id | name | +--------------------------------------+--------------+ | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a | my_share_net | +--------------------------------------+--------------+ If you configured the generic driver with ``driver_handles_share_servers = True`` (with the share servers) and already had previous operations in the Shared File Systems service, you can see ``manila_service_network`` in the neutron list of networks. This network was created by the generic driver for internal use. .. code-block:: console $ openstack network list +--------------+------------------------+--------------------+ | ID | Name | Subnets | +--------------+------------------------+--------------------+ | 3b5a629a-e...| manila_service_network | 4f366100-50... | | bee7411d-... | public | 884a6564-0f11-... | | | | e6da81fa-5d5f-... | | 5ed5a854-... | private | 74dcfb5a-b4d7-... | | | | cc297be2-5213-... | +--------------+------------------------+--------------------+ You also can see detailed information about the share network including ``network_type``, and ``segmentation_id`` fields: .. 
code-block:: console $ openstack network show manila_service_network +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | nova | | created_at | 2016-12-13T09:31:30Z | | description | | | id | 3b5a629a-e7a1-46a3-afb2-ab666fb884bc | | ipv4_address_scope | None | | ipv6_address_scope | None | | mtu | 1450 | | name | manila_service_network | | port_security_enabled | True | | project_id | f6ac448a469b45e888050cf837b6e628 | | provider:network_type | vxlan | | provider:physical_network | None | | provider:segmentation_id | 73 | | revision_number | 7 | | router:external | Internal | | shared | False | | status | ACTIVE | | subnets | 682e3329-60b0-440f-8749-83ef53dd8544 | | tags | [] | | updated_at | 2016-12-13T09:31:36Z | +---------------------------+--------------------------------------+ You also can add and remove the security services from the share network. For more detail, see :ref:`shared_file_systems_security_services`. manila-10.0.0/doc/source/admin/shared-file-systems-share-management.rst0000664000175000017500000000240213656750227026057 0ustar zuulzuul00000000000000.. _shared_file_systems_share_management: ================ Share management ================ A share is a remote, mountable file system. You can mount a share to and access a share from several hosts by several users at a time. You can create a share and associate it with a network, list shares, and show information for, update, and delete a specified share. You can also create snapshots of shares. To create a snapshot, you specify the ID of the share that you want to snapshot. The shares are based on of the supported Shared File Systems protocols: * *NFS*. Network File System (NFS). * *CIFS*. Common Internet File System (CIFS). * *GLUSTERFS*. Gluster file system (GlusterFS). * *HDFS*. Hadoop Distributed File System (HDFS). * *CEPHFS*. Ceph File System (CephFS). * *MAPRFS*. MapR File System (MAPRFS). The Shared File Systems service provides set of drivers that enable you to use various network file storage devices, instead of the base implementation. That is the real purpose of the Shared File Systems service in production. .. toctree:: shared-file-systems-crud-share.rst shared-file-systems-manage-and-unmanage-share.rst shared-file-systems-manage-and-unmanage-snapshot.rst shared-file-systems-share-resize.rst shared-file-systems-quotas.rst manila-10.0.0/doc/source/admin/gpfs_driver.rst0000664000175000017500000000675113656750227021302 0ustar zuulzuul00000000000000.. Copyright 2015 IBM Corp. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. GPFS Driver =========== GPFS driver uses IBM General Parallel File System (GPFS), a high-performance, clustered file system, developed by IBM, as the storage backend for serving file shares to the manila clients. 
Supported shared filesystems ---------------------------- - NFS (access by IP) Supported Operations -------------------- - Create NFS Share - Delete NFS Share - Create Share Snapshot - Delete Share Snapshot - Create Share from a Share Snapshot - Allow NFS Share access * Currently only 'rw' access level is supported - Deny NFS Share access Requirements ------------ - Install GPFS with server license, version >= 2.0, on the storage backend. - Install Kernel NFS or Ganesha NFS server on the storage backend servers. - If using Ganesha NFS, currently NFS Ganesha v1.5 and v2.0 are supported. - Create a GPFS cluster and create a filesystem on the cluster, that will be used to create the manila shares. - Enable quotas for the GPFS file system (`mmchfs -Q yes`). - Establish network connection between the manila host and the storage backend. Manila driver configuration setting ----------------------------------- The following parameters in the manila configuration file need to be set: - `share_driver` = manila.share.drivers.ibm.gpfs.GPFSShareDriver - `gpfs_share_export_ip` = - If the backend GPFS server is not running on the manila host machine, the following options are required to SSH to the remote GPFS backend server: - `gpfs_ssh_login` = and one of the following settings is required to execute commands over SSH: - `gpfs_ssh_private_key` = - `gpfs_ssh_password` = The following configuration parameters are optional: - `gpfs_mount_point_base` = - `gpfs_nfs_server_type` = - `gpfs_nfs_server_list` = - `gpfs_ssh_port` = - `knfs_export_options` = Restart of :term:`manila-share` service is needed for the configuration changes to take effect. Known Restrictions ------------------ - The driver does not support a segmented-network multi-tenancy model but instead works over a flat network where the tenants share a network. - While using remote GPFS node, with Ganesha NFS, 'gpfs_ssh_private_key' for remote login to the GPFS node must be specified and there must be a passwordless authentication already setup between the manila share service and the remote GPFS node. The :mod:`manila.share.drivers.ibm.gpfs` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.ibm.gpfs :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/glusterfs_driver.rst0000664000175000017500000001702113656750227022351 0ustar zuulzuul00000000000000.. Copyright 2015 Red Hat, Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. GlusterFS driver ================ GlusterFS driver uses GlusterFS, an open source distributed file system, as the storage backend for serving file shares to manila clients. Supported shared filesystems ---------------------------- - NFS (access by IP) Supported Operations -------------------- - Create share - Delete share - Allow share access (rw) - Deny share access - With volume layout: - Create snapshot - Delete snapshot - Create share from snapshot Requirements ------------ - Install glusterfs-server package, version >= 3.5.x, on the storage backend. 
- Install NFS-Ganesha, version >=2.1, if using NFS-Ganesha as the NFS server for the GlusterFS backend. - Install glusterfs and glusterfs-fuse package, version >=3.5.x, on the manila host. - Establish network connection between the manila host and the storage backend. Manila driver configuration setting ----------------------------------- The following parameters in the manila's configuration file need to be set: - `share_driver` = manila.share.drivers.glusterfs.GlusterfsShareDriver The following configuration parameters are optional: - `glusterfs_nfs_server_type` = - `glusterfs_share_layout` = ; cf. :ref:`glusterfs_layouts` - `glusterfs_path_to_private_key` = - `glusterfs_server_password` = If Ganesha NFS server is used (``glusterfs_nfs_server_type = Ganesha``), then by default the Ganesha server is supposed to run on the manila host and is managed by local commands. If it's deployed somewhere else, then it's managed via ssh, which can be configured by the following parameters: - `glusterfs_ganesha_server_ip` - `glusterfs_ganesha_server_username` - `glusterfs_ganesha_server_password` In lack of ``glusterfs_ganesha_server_password`` ssh access will fall back to key based authentication, using the key specified by ``glusterfs_path_to_private_key``, or, in lack of that, a key at one of the OpenSSH-style default key locations (*~/.ssh/id_{r,d,ecd}sa*). Layouts have also their set of parameters, see :ref:`glusterfs_layouts` about that. .. _glusterfs_layouts: Layouts ------- New in Liberty, multiple share layouts can be used with glusterfs driver. A layout is a strategy of allocating storage from GlusterFS backends for shares. Currently there are two layouts implemented: - `directory mapped layout` (or `directory layout`, or `dir layout` for short): a share is backed by top-level subdirectories of a given GlusterFS volume. Directory mapped layout is the default and backward compatible with Kilo. The following setting explicitly specifies its usage: ``glusterfs_share_layout = layout_directory.GlusterfsDirectoryMappedLayout``. Options: - `glusterfs_target`: address of the volume that hosts the directories. If it's of the format `:/`, then the manila host is expected to be part of the GlusterFS cluster of the volume and GlusterFS management happens through locally calling the ``gluster`` utility. If it's of the format `@:/`, then we ssh to `@` to execute ``gluster`` (`` is supposed to have administrative privileges on ``). - `glusterfs_mount_point_base` = (optional; defaults to *$state_path*\ ``/mnt``, where *$state_path* defaults to ``/var/lib/manila``) Limitations: - directory layout does not support snapshot operations. - `volume mapped layout` (or `volume layout`, or `vol layout` for short): a share is backed by a whole GlusterFS volume. Volume mapped layout is new in Liberty. It can be chosen by setting ``glusterfs_share_layout = layout_volume.GlusterfsVolumeMappedLayout``. Options (required): - `glusterfs_servers` - `glusterfs_volume_pattern` Volume mapped layout is implemented as a common backend of the glusterfs and glusterfs-native drivers; see the description of these options in :doc:`glusterfs_native_driver`: :ref:`gluster_native_manila_conf`. Gluster NFS with volume mapped layout ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A special configuration choice is :: glusterfs_nfs_server_type = Gluster glusterfs_share_layout = layout_volume.GlusterfsVolumeMappedLayout that is, Gluster NFS used to export whole volumes. 
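Put together with the volume layout options described above, a complete backend section for this configuration choice might look like the following sketch; the backend name, server address and volume name pattern are illustrative placeholders only:

.. code-block:: ini

   [glusterfsvol1]
   share_backend_name = GLUSTERFSVOL1
   share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver
   driver_handles_share_servers = False
   glusterfs_nfs_server_type = Gluster
   glusterfs_share_layout = layout_volume.GlusterfsVolumeMappedLayout
   # GlusterFS cluster managed over SSH (placeholder address)
   glusterfs_servers = root@203.0.113.20
   # Matches pre-created volumes such as manila-share-volume-10G-1
   glusterfs_volume_pattern = manila-share-volume-#{size}G-\d+$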
All other GlusterFS backend configurations (including GlusterFS set up with glusterfs-native) require the ``nfs.export-volumes = off`` GlusterFS setting. Gluster NFS with volume layout requires ``nfs.export-volumes = on``. ``nfs.export-volumes`` is a *cluster-wide* setting, so a given GlusterFS cluster cannot host a share backend with Gluster NFS + volume layout and other share backend configurations at the same time. There is another caveat with ``nfs.export-volumes``: setting it to ``on`` without enough care is a security risk, as the default access control for the volume exports is "allow all". For this reason, while the ``nfs.export-volumes = off`` setting is automatically set by manila for all other share backend configurations, ``nfs.export-volumes = on`` is *not* set by manila in case of a Gluster NFS with volume layout setup. It's left to the GlusterFS admin to make this setting in conjunction with the associated safeguards (that is, for those volumes of the cluster which are not used by manila, access restrictions have to be manually configured through the ``nfs.rpc-auth-{allow,reject}`` options). Known Restrictions ------------------ - The driver does not support network segmented multi-tenancy model, but instead works over a flat network, where the tenants share a network. - If NFS Ganesha is the NFS server used by the GlusterFS backend, then the shares can be accessed by NFSv3 and v4 protocols. However, if Gluster NFS is used by the GlusterFS backend, then the shares can only be accessed by NFSv3 protocol. - All manila shares, which map to subdirectories within a GlusterFS volume, are currently created within a single GlusterFS volume of a GlusterFS storage pool. - The driver does not provide read-only access level for shares. - Assume that share S is exported through Gluster NFS, and tenant machine T has mounted S. If at this point access of T to S is revoked through `access-deny`, the pre-existing mount will be still usable and T will still be able to access the data in S as long as that mount is in place. (This violates the principle *Access deny should always result in immediate loss of access to the share*, see http://lists.openstack.org/pipermail/openstack-dev/2015-July/069109.html.) The :mod:`manila.share.drivers.glusterfs` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.glusterfs :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/capabilities_and_extra_specs.rst0000664000175000017500000003435113656750227024640 0ustar zuulzuul00000000000000.. _capabilities_and_extra_specs: Capabilities and Extra-Specs ============================ Manila Administrators create share types with extra-specs to allow users to request a type of share to create. The Administrator chooses a name for the share type and decides how to communicate the significance of the different share types in terms that the users should understand or need to know. By design, most of the details of a share type (the extra- specs) are not exposed to users -- only Administrators. Share Types ----------- Refer to the manila client command-line help for information on how to create a share type and set "extra-spec" key/value pairs for a share type. Extra-Specs ----------- There are 3 types of extra-specs: required, scoped, and un-scoped. Manila *requires* the driver_handles_share_servers extra-spec. *Scoped* extra-specs use a prefix followed by a colon to define a namespace for scoping the extra-spec. 
A prefix could be a vendor name or acronym and is a hint that this extra-spec key/value only applies to that vendor's driver. Scoped extra-specs are not used by the scheduler to determine where a share is created (except for the special `capabilities` prefix). It is up to each driver implementation to determine how to use scoped extra-specs and to document them. The prefix "capabilities" is a special prefix to indicate extra-specs that are treated like un-scoped extra-specs. In the CapabilitiesFilter the "capabilities:" is stripped from the key and then the extra-spec key and value are used as an un-scoped extra-spec. *Un-scoped* extra-specs have a key that either starts with "capabilities:" or does not contain a colon. When the CapabilitiesFilter is enabled (it is enabled by default), the scheduler will only create a share on a backend that reports capabilities that match the share type's un-scoped extra-spec keys. The CapabilitiesFilter uses the following for matching operators: * No operator This defaults to doing a python ==. Additionally it will match boolean values. * **<=, >=, ==, !=** This does a float conversion and then uses the python operators as expected. * **** This either chooses a host that has partially matching string in the capability or chooses a host if it matches any value in a list. For example, if " sse4" is used, it will match a host that reports capability of "sse4_1" or "sse4_2". * **** This chooses a host that has one of the items specified. If the first word in the string is , another and value pair can be concatenated. Examples are " 3", " 3 5", and " 1 3 7". This is for string values only. * **** This chooses a host that matches a boolean capability. An example extra-spec value would be " True". * **=** This does a float conversion and chooses a host that has equal to or greater than the resource specified. This operator behaves this way for historical reasons. * **s==, s!=, s>=, s>, s<=, s<** The "s" indicates it is a string comparison. These choose a host that satisfies the comparison of strings in capability and specification. For example, if "capabilities:replication_type s== dr", a host that reports replication_type of "dr" will be chosen. For vendor-specific capabilities (which need to be visible to the CapabilityFilter), it is recommended to use the vendor prefix followed by an underscore. This is not a strict requirement, but will provide a consistent look along-side the scoped extra-specs and will be a clear indicator of vendor capabilities vs. common capabilities. Common Capabilities ------------------- For capabilities that apply to multiple backends a common capability can be created. Like all other backend reported capabilities, these capabilities can be used verbatim as extra_specs in share types used to create shares. * `driver_handles_share_servers` is a special, required, user-visible common capability. Added in Kilo. * `dedupe` - indicates that a backend/pool can provide shares using some deduplication technology. The default value of the dedupe capability (if a driver doesn't report it) is False. In Liberty, drivers cannot report to the scheduler that they support both dedupe and non-deduped share. For each pool it's either always on or always off, even if the drivers can technically support both dedupe and non-deduped in a pool. Since Mitaka, the logic is changed to allow a driver to report dedupe=[True, False] if it can support both dedupe and non-deduped in a pool. 
Administrators can make a share type use deduplication by setting this extra-spec to ' True'. Administrators can prevent a share type from using deduplication by setting this extra-spec to ' False'. Added in Liberty. * `compression` - indicates that a backend/pool can provide shares using some compression technology. The default value of the compression capability (if a driver doesn't report it) is False. In Liberty, drivers cannot report to the scheduler that they support both compression and non-compression. For each pool it's either always on or always off, even if the drivers can technically support both compression and non-compression in a pool. Since Mitaka, the logic is changed to allow a driver to report compression=[True, False] if it can support both compression and non-compression in a pool. Administrators can make a share type use compression by setting this extra-spec to ' True'. Administrators can prevent a share type from using compression by setting this extra-spec to ' False'. Added in Liberty. * `thin_provisioning` - shares will not be space guaranteed and overprovisioning will be enabled. This capability defaults to False. Backends/pools that support thin provisioning must report True for this capability. Administrators can make a share type use thin provisioned shares by setting this extra-spec to ' True'. If a driver reports thin_provisioning=False (the default) then it's assumed that the driver is doing thick provisioning and overprovisioning is turned off. This was added in Liberty. In Liberty and Mitaka, the driver was required to configure one pool for thin and another pool for thick and report thin_provisioning as either True or False even if an array can technically support both thin and thick provisioning in a pool. In Newton, the logic is changed to allow a driver to report thin_provisioning=[True, False] if it can support both thin and thick provisioning in a pool. To provision a thick share on a back end that supports both thin and thick provisioning, set one of the following in extra specs: :: {'thin_provisioning': 'False'} {'thin_provisioning': ' False'} {'capabilities:thin_provisioning': 'False'} {'capabilities:thin_provisioning': ' False'} * `qos` - indicates that a backend/pool can provide shares using some QoS (Quality of Service) specification. The default value of the qos capability (if a driver doesn't report it) is False. Administrators can make a share type use QoS by setting this extra-spec to ' True' and also setting the relevant QoS-related extra specs for the drivers being used. Administrators can prevent a share type from using QoS by setting this extra-spec to ' False'. Different drivers have different ways of specifying QoS limits (or guarantees) and this extra spec merely allows the scheduler to filter by pools that either have or don't have QoS support enabled. Added in Mitaka. * `replication_type` - indicates the style of replication supported for the backend/pool. This extra_spec will have a string value and could be one of :term:`writable`, :term:`readable` or :term:`dr`. `writable` replication type involves synchronously replicated shares where all replicas are writable. Promotion is not supported and not needed. `readable` and `dr` replication types involve a single `active` or `primary` replica and one or more `non-active` or secondary replicas per share. 
In `readable` type of replication, `non-active` replicas have one or more export_locations and can thus be mounted and read while the `active` replica is the only one that can be written into. In `dr` style of replication, only the `active` replica can be mounted, read from and written into. Added in Mitaka. * `snapshot_support` - indicates whether snapshots are supported for shares created on the pool/backend. When administrators do not set this capability as an extra-spec in a share type, the scheduler can place new shares of that type in pools without regard for whether snapshots are supported, and those shares will not support snapshots. * `create_share_from_snapshot_support` - indicates whether a backend can create a new share from a snapshot. When administrators do not set this capability as an extra-spec in a share type, the scheduler can place new shares of that type in pools without regard for whether creating shares from snapshots is supported, and those shares will not support creating shares from snapshots. * `revert_to_snapshot_support` - indicates that a driver is capable of reverting a share in place to its most recent snapshot. When administrators do not set this capability as an extra-spec in a share type, the scheduler can place new shares of that type in pools without regard for whether reverting shares to snapshots is supported, and those shares will not support reverting shares to snapshots. * `ipv4_support` - indicates whether a back end can create a share that can be accessed via IPv4 protocol. If administrators do not set this capability as an extra-spec in a share type, the scheduler can place new shares of that type in pools without regard for whether IPv4 is supported. * `ipv6_support` - indicates whether a back end can create a share that can be accessed via IPv6 protocol. If administrators do not set this capability as an extra-spec in a share type, the scheduler can place new shares of that type in pools without regard for whether IPv6 is supported. Reporting Capabilities ---------------------- Drivers report capabilities as part of the updated stats (e.g. capacity) for their backend/pools. This is how a backend/pool advertizes its ability to provide a share that matches the capabilities requested in the share type extra-specs. Developer impact ---------------- Developers should update their drivers to include all backend and pool capacities and capabilities in the share stats it reports to scheduler. Below is an example having multiple pools. 
"my" is used as an example vendor prefix: :: { 'driver_handles_share_servers': 'False', #\ 'share_backend_name': 'My Backend', # backend level 'vendor_name': 'MY', # mandatory/fixed 'driver_version': '1.0', # stats & capabilities 'storage_protocol': 'NFS_CIFS', #/ #\ 'my_capability_1': 'custom_val', # "my" optional vendor 'my_capability_2': True, # stats & capabilities #/ 'pools': [ {'pool_name': 'thin-dedupe-compression pool', #\ 'total_capacity_gb': 500, # mandatory stats for 'free_capacity_gb': 230, # pools 'reserved_percentage': 0, #/ #\ 'dedupe': True, # common capabilities 'compression': True, # 'snapshot_support': True, # 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': True, 'qos': True, # this backend supports QoS 'thin_provisioning': True, # 'max_over_subscription_ratio': 10, # (mandatory for thin) 'provisioned_capacity_gb': 270, # (mandatory for thin) # # 'replication_type': 'dr', # this backend supports # replication_type 'dr' #/ 'my_dying_disks': 100, #\ 'my_super_hero_1': 'Hulk', # "my" optional vendor 'my_super_hero_2': 'Spider-Man', # stats & capabilities #/ #\ # can replicate to other 'replication_domain': 'asgard', # backends in # replication_domain 'asgard' #/ 'ipv4_support': True, 'ipv6_support': True, }, {'pool_name': 'thick pool', 'total_capacity_gb': 1024, 'free_capacity_gb': 1024, 'qos': False, 'snapshot_support': True, 'create_share_from_snapshot_support': False, # this pool does not # allow creating # shares from # snapshots 'revert_to_snapshot_support': True, 'reserved_percentage': 0, 'dedupe': False, 'compression': False, 'thin_provisioning': False, 'replication_type': None, 'my_dying_disks': 200, 'my_super_hero_1': 'Batman', 'my_super_hero_2': 'Robin', 'ipv4_support': True, 'ipv6_support': True, }, ] } Work Flow --------- 1) Share Backends report how many pools and what those pools look like and are capable of to scheduler; 2) When request comes in, scheduler picks a pool that fits the need best to serve the request, it passes the request to the backend where the target pool resides; 3) Share driver gets the message and lets the target pool serve the request as scheduler instructed. Share type extra-specs (scoped and un-scoped) are available for the driver implementation to use as-needed. manila-10.0.0/doc/source/admin/emc_isilon_driver.rst0000664000175000017500000000605613656750227022462 0ustar zuulzuul00000000000000.. Copyright (c) 2015 EMC Corporation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Isilon Driver ============= The EMC manila driver framework (EMCShareDriver) utilizes EMC storage products to provide shared filesystems to OpenStack. The EMC manila driver is a plugin based driver which is designed to use different plugins to manage different EMC storage products. The Isilon manila driver is a plugin for the EMC manila driver framework which allows manila to interface with an Isilon backend to provide a shared filesystem. The EMC driver framework with the Isilon plugin is referred to as the "Isilon Driver" in this document. 
This Isilon Driver interfaces with an Isilon cluster via the REST Isilon Platform API (PAPI) and the RESTful Access to Namespace API (RAN). Requirements ------------ - Isilon cluster running OneFS 7.2 or higher Supported Operations -------------------- The following operations are supported on an Isilon cluster: * Create CIFS/NFS Share * Delete CIFS/NFS Share * Allow CIFS/NFS Share access * Only IP access type is supported for NFS and CIFS * Only RW access supported * Deny CIFS/NFS Share access * Create snapshot * Delete snapshot * Create share from snapshot * Extend share Backend Configuration --------------------- The following parameters need to be configured in the manila configuration file for the Isilon driver: * share_driver = manila.share.drivers.dell_emc.driver.EMCShareDriver * driver_handles_share_servers = False * emc_share_backend = isilon * emc_nas_server = * emc_nas_server_port = * emc_nas_login = * emc_nas_password = * emc_nas_root_dir = Restart of :term:`manila-share` service is needed for the configuration changes to take effect. Restrictions ------------ The Isilon driver has the following restrictions: - Only IP access type is supported for NFS and CIFS. - Only FLAT network is supported. The :mod:`manila.share.drivers.dell_emc.driver` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.dell_emc.driver :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.share.drivers.dell_emc.plugins.isilon.isilon` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.dell_emc.plugins.isilon.isilon :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/shared-file-systems-share-types.rst0000664000175000017500000001616613656750227025123 0ustar zuulzuul00000000000000.. _shared_file_systems_share_types: =========== Share types =========== A share type enables you to filter or choose back ends before you create a share and to set data for the share driver. A share type behaves in the same way as a Block Storage volume type behaves. In the Shared File Systems configuration file ``manila.conf``, the administrator can set the share type used by default for the share creation and then create a default share type. To create a share type, use :command:`manila type-create` command as: .. code-block:: console manila type-create [--snapshot_support ] [--is_public ] where the ``name`` is the share type name, ``--is_public`` defines the level of the visibility for the share type, ``snapshot_support`` and ``spec_driver_handles_share_servers`` are the extra specifications used to filter back ends. Administrators can create share types with these extra specifications for the back ends filtering: - ``driver_handles_share_servers``. Required. Defines the driver mode for share server lifecycle management. Valid values are ``true``/``1`` and ``false``/``0``. Set to True when the share driver can manage, or handle, the share server lifecycle. Set to False when an administrator, rather than a share driver, manages the bare metal storage with some net interface instead of the presence of the share servers. - ``snapshot_support``. Filters back ends by whether they do or do not support share snapshots. Default is ``True``. Set to True to find back ends that support share snapshots. Set to False to find back ends that do not support share snapshots. .. 
note:: The extra specifications set in the share types are operated in the :ref:`shared_file_systems_scheduling`. Administrators can also set additional extra specifications for a share type for the following purposes: - *Filter back ends*. Unqualified extra specifications written in this format: ``extra_spec=value``. For example, **netapp_raid_type=raid4**. - *Set data for the driver*. Qualified extra specifications always written with the prefix with a colon, except for the special ``capabilities`` prefix, in this format: ``vendor:extra_spec=value``. For example, **netapp:thin_provisioned=true**. The scheduler uses the special capabilities prefix for filtering. The scheduler can only create a share on a back end that reports capabilities matching the un-scoped extra-spec keys for the share type. For details, see `Capabilities and Extra-Specs `_. Each driver implementation determines which extra specification keys it uses. For details, see the documentation for the driver. An administrator can use the ``policy.json`` file to grant permissions for share type creation with extra specifications to other roles. You set a share type to private or public and :ref:`manage the access` to the private share types. By default a share type is created as publicly accessible. Set ``--is_public`` to ``False`` to make the share type private. Share type operations --------------------- To create a new share type you need to specify the name of the new share type. You also require an extra spec ``driver_handles_share_servers``. The new share type can also be public. .. code-block:: console $ manila type-create netapp1 False --is_public True $ manila type-list +-----+--------+-----------+-----------+-----------------------------------+-----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +-----+--------+-----------+-----------+-----------------------------------+-----------------------+ | c0..| netapp1| public | - | driver_handles_share_servers:False| snapshot_support:True | +-----+--------+-----------+-----------+-----------------------------------+-----------------------+ You can set or unset extra specifications for a share type using **manila type-key set ** command. Since it is up to each driver what extra specification keys it uses, see the documentation for the specified driver. .. code-block:: console $ manila type-key netapp1 set thin_provisioned=True It is also possible to view a list of current share types and extra specifications: .. code-block:: console $ manila extra-specs-list +-------------+---------+-------------------------------------+ | ID | Name | all_extra_specs | +-------------+---------+-------------------------------------+ | c0086582-...| netapp1 | snapshot_support : True | | | | thin_provisioned : True | | | | driver_handles_share_servers : True | +-------------+---------+-------------------------------------+ Use :command:`manila type-key unset ` to unset an extra specification. The public or private share type can be deleted with the :command:`manila type-delete ` command. .. _share_type_access: Share type access ----------------- You can manage access to a private share type for different projects. Administrators can provide access, remove access, and retrieve information about access for a specified private share. Create a private type: .. 
code-block:: console $ manila type-create my_type1 True --is_public False +----------------------+--------------------------------------+ | Property | Value | +----------------------+--------------------------------------+ | required_extra_specs | driver_handles_share_servers : True | | Name | my_type1 | | Visibility | private | | is_default | - | | ID | 06793be5-9a79-4516-89fe-61188cad4d6c | | optional_extra_specs | snapshot_support : True | +----------------------+--------------------------------------+ .. note:: If you run :command:`manila type-list` only public share types appear. To see private share types, run :command:`manila type-list` with ``--all`` optional argument. Grant access to created private type for a demo and alt_demo projects by providing their IDs: .. code-block:: console $ manila type-access-add my_type1 d8f9af6915404114ae4f30668a4f5ba7 $ manila type-access-add my_type1 e4970f57f1824faab2701db61ee7efdf To view information about access for a private share, type ``my_type1``: .. code-block:: console $ manila type-access-list my_type1 +----------------------------------+ | Project_ID | +----------------------------------+ | d8f9af6915404114ae4f30668a4f5ba7 | | e4970f57f1824faab2701db61ee7efdf | +----------------------------------+ After granting access to the share, the target project can see the share type in the list, and create private shares. To deny access for a specified project, use :command:`manila type-access-remove ` command. manila-10.0.0/doc/source/admin/shared-file-systems-services-manage.rst0000664000175000017500000000137013656750227025717 0ustar zuulzuul00000000000000.. _shared_file_systems_services_manage.rst: ====================== Manage shares services ====================== The Shared File Systems service provides API that allows to manage running share services (`Share services API `_). Using the :command:`manila service-list` command, it is possible to get a list of all kinds of running services. To select only share services, you can pick items that have field ``binary`` equal to ``manila-share``. Also, you can enable or disable share services using raw API requests. Disabling means that share services are excluded from the scheduler cycle and new shares will not be placed on the disabled back end. However, shares from this service stay available. manila-10.0.0/doc/source/admin/group_capabilities_and_extra_specs.rst0000664000175000017500000001045013656750227026046 0ustar zuulzuul00000000000000.. _group_capabilities_and_extra_specs: Group Capabilities and group-specs ================================== Manila Administrators create share group types with `share types `_ and group-specs to allow users to request a group type of share group to create. The Administrator chooses a name for the share group type and decides how to communicate the significance of the different share group types in terms that the users should understand or need to know. By design, most of the details of a share group type (the extra- specs) are not exposed to users -- only Administrators. Share group Types ----------------- Refer to the manila client command-line help for information on how to create a share group type and set "share types", "group-spec" key/value pairs for a share group type. Group-Specs ----------- The group specs contains the group capabilities, similar to snapshot_support in share types. Users know what a group can do from group specs. The group specs is an exact match requirement in share group filter (such as ConsistentSnapshotFilter). 
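As a quick illustration, a share group type is created and given a group-spec with the manila client's share group type commands. In the sketch below, the group type name and the share type name ``default`` are assumptions made only for the example:

.. code-block:: console

   $ manila share-group-type-create group_type1 default
   $ manila share-group-type-key group_type1 set consistent_snapshot_support=host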
When the ConsistentSnapshotFilter is enabled (it is enabled by default), the scheduler will only create a share group on a backend that reports capabilities that match the share group type's group-spec keys. Common Group Capabilities ------------------------- For group capabilities that apply to multiple backends a common capability can be created. Like all other backend reported group capabilities, these group capabilities can be used verbatim as group_specs in share group types used to create share groups. * `consistent_snapshot_support` - indicates that a backend can enable you to create snapshots at the exact same point in time from multiple shares. The default value of the consistent_snapshot_support capability (if a driver doesn't report it) is None. Administrators can make a share group type use consistent snapshot support by setting this group-spec to 'host'. Reporting Group Capabilities ---------------------------- Drivers report group capabilities as part of the updated stats (e.g. capacity) and filled in 'share_group_stats' node for their back end. This is how a backend advertizes its ability to provide a share that matches the group capabilities requested in the share group type group-specs. Developer impact ---------------- Developers should update their drivers to include all backend and pool capacities and capabilities in the share stats it reports to scheduler. Below is an example having multiple pools. "my" is used as an example vendor prefix: :: { 'driver_handles_share_servers': 'False', #\ 'share_backend_name': 'My Backend', # backend level 'vendor_name': 'MY', # mandatory/fixed 'driver_version': '1.0', # stats & capabilities 'storage_protocol': 'NFS_CIFS', #/ #\ 'my_capability_1': 'custom_val', # "my" optional vendor 'my_capability_2': True, # stats & capabilities #/ 'share_group_stats': { #\ 'my_group_capability_1': 'custom_val', # "my" optional vendor 'my_group_capability_2': True, # stats & group capabilities #/ 'consistent_snapshot_support': 'host', #\ # common group capabilities #/ }, ] } Work Flow --------- 1) Share Backends report how many pools and what those pools look like and are capable of to group scheduler; 2) When request comes in, scheduler picks a backend that fits the need best to serve the request, it passes the request to the backend where the target pool resides; 3) Share driver gets the message and lets the target pool serve the request as group scheduler instructed. Share group type group-specs (scoped and un-scoped) are available for the driver implementation to use as-needed. manila-10.0.0/doc/source/admin/huawei_nas_driver.rst0000664000175000017500000002245213656750227022462 0ustar zuulzuul00000000000000.. Copyright (c) 2015 Huawei Technologies Co., Ltd. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Huawei Driver ============= Huawei NAS Driver is a plugin based the OpenStack manila service. The Huawei NAS Driver can be used to provide functions such as the share and snapshot for virtual machines(instances) in OpenStack. 
Huawei NAS Driver enables the OceanStor V3 series V300R002 storage system to provide only network filesystems for OpenStack. Requirements ------------ - The OceanStor V3 series V300R002 storage system. - The following licenses should be activated on V3 for File: * CIFS * NFS * HyperSnap License (for snapshot) Supported Operations -------------------- The following operations is supported on V3 storage: - Create CIFS/NFS Share - Delete CIFS/NFS Share - Allow CIFS/NFS Share access * IP and USER access types are supported for NFS(ro/rw). * Only USER access type is supported for CIFS(ro/rw). - Deny CIFS/NFS Share access - Create snapshot - Delete snapshot - Manage CIFS/NFS share - Support pools in one backend - Extend share - Shrink share - Support multi RestURLs() - Support multi-tenancy - Ensure share - Create share from snapshot - Support QoS Pre-Configurations on Huawei ---------------------------- 1. Create a driver configuration file. The driver configuration file name must be the same as the manila_huawei_conf_file item in the manila_conf configuration file. 2. Configure Product. Product indicates the storage system type. For the OceanStor V3 series V300R002 storage systems, the driver configuration file is as follows: :: V3 x.x.x.x abc;CTE0.A.H1 https://x.x.x.x:8088/deviceManager/rest/; https://x.x.x.x:8088/deviceManager/rest/ xxxxxxxxx xxxxxxxxx xxxxxxxxx 64 3 60 x.x.x.x xxxxxxxxx xxxxxxxxx - `Product` is a type of a storage product. Set it to `V3`. - `LogicalPortIP` is an IP address of the logical port. - `Port` is a port name list of bond port or ETH port, used to create vlan and logical port. Multi Ports can be configured in (separated by ";"). If is not configured, then will choose an online port on the array. - `RestURL` is an access address of the REST interface. Multi RestURLs can be configured in (separated by ";"). When one of the RestURL failed to connect, driver will retry another automatically. - `UserName` is a user name of an administrator. - `UserPassword` is a password of an administrator. - `StoragePool` is a name of a storage pool to be used. - `SectorSize` is the size of the disk blocks, optional value can be "4", "8", "16", "32" or "64", and the units is KB. If "sectorsize" is configured in both share_type and xml file, the value of sectorsize in the share_type will be used. If "sectorsize" is configured in neither share_type nor xml file, huawei storage backends will provide a default value(64) when creating a new share. - `WaitInterval` is the interval time of querying the file system status. - `Timeout` is the timeout period for waiting command execution of a device to complete. - `NFSClient\IP` is the backend IP in admin network to use for mounting NFS share. - `CIFSClient\UserName` is the backend user name in admin network to use for mounting CIFS share. - `CIFSClient\UserPassword` is the backend password in admin network to use for mounting CIFS share. Backend Configuration --------------------- Modify the `manila.conf` manila configuration file and add share_driver and manila_huawei_conf_file items. Example for configuring a storage system: - `share_driver` = manila.share.drivers.huawei.huawei_nas.HuaweiNasDriver - `manila_huawei_conf_file` = /etc/manila/manila_huawei_conf.xml - `driver_handles_share_servers` = True or False .. note:: - If `driver_handles_share_servers` is True, the driver will choose a port in to create vlan and logical port for each tenant network. And the share type with the DHSS extra spec should be set to True when creating shares. 
- If `driver_handles_share_servers` is False, then will use the IP in . Also the share type with the DHSS extra spec should be set to False when creating shares. Restart of manila-share service is needed for the configuration changes to take effect. Share Types ----------- When creating a share, a share type can be specified to determine where and how the share will be created. If a share type is not specified, the `default_share_type` set in the manila configuration file is used. Manila requires that the share type includes the `driver_handles_share_servers` extra-spec. This ensures that the share will be created on a backend that supports the requested driver_handles_share_servers (share networks) capability. For the Huawei driver, this must be set to False. To create a share on a backend with a specific type of disks, include the `huawei_disk_type` extra-spec in the share type. Valid values for this extra-spec are 'ssd', 'sas', 'nl_sas' or 'mix'. This share will be created on a backend with a matching disk type. Another common manila extra-spec used to determine where a share is created is `share_backend_name`. When this extra-spec is defined in the share type, the share will be created on a backend with a matching share_backend_name. Manila "share types" may contain qualified extra-specs, -extra-specs that have significance for the backend driver and the CapabilityFilter. This commit makes the Huawei driver report the following boolean capabilities: - capabilities:dedupe - capabilities:compression - capabilities:thin_provisioning - capabilities:huawei_smartcache * huawei_smartcache:cachename - capabilities:huawei_smartpartition * huawei_smartpartition:partitionname - capabilities:qos * qos:maxIOPS * qos:minIOPS * qos:minbandwidth * qos:maxbandwidth * qos:latency * qos:iotype - capabilities:huawei_sectorsize The scheduler will choose a host that supports the needed capability when the CapabilityFilter is used and a share type uses one or more of the following extra-specs: - capabilities:dedupe=' True' or ' False' - capabilities:compression=' True' or ' False' - capabilities:thin_provisioning=' True' or ' False' - capabilities:huawei_smartcache=' True' or ' False' * huawei_smartcache:cachename=test_cache_name - capabilities:huawei_smartpartition=' True' or ' False' * huawei_smartpartition:partitionname=test_partition_name - capabilities:qos=' True' or ' False' * qos:maxIOPS=100 * qos:minIOPS=10 * qos:maxbandwidth=100 * qos:minbandwidth=10 * qos:latency=10 * qos:iotype=0 - capabilities:huawei_sectorsize=' True' or ' False' * huawei_sectorsize:sectorsize=4 - huawei_disk_type='ssd' or 'sas' or 'nl_sas' or 'mix' `thin_provisioning` will be reported as [True, False] for Huawei backends. `dedupe` will be reported as [True, False] for Huawei backends. `compression` will be reported as [True, False] for Huawei backends. `huawei_smartcache` will be reported as [True, False] for Huawei backends. Adds SSDs into a high-speed cache pool and divides the pool into multiple cache partitions to cache hotspot data in random and small read I/Os. `huawei_smartpartition` will be reported as [True, False] for Huawei backends. Add share to the smartpartition named 'test_partition_name'. Allocates cache resources based on service characteristics, ensuring the quality of critical services. `qos` will be reported as True for backends that use QoS (Quality of Service) specification. `huawei_sectorsize` will be reported as [True, False] for Huawei backends. 
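For example, a share type that places shares on a Huawei backend with SSD disks could be sketched as follows; the share type name and backend name are placeholders for illustration:

.. code-block:: console

   $ manila type-create huawei_ssd_type False
   $ manila type-key huawei_ssd_type set share_backend_name=HUAWEI_NFS_1
   $ manila type-key huawei_ssd_type set huawei_disk_type=ssd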
`huawei_disk_type` will be reported as "ssd", "sas", "nl_sas" or "mix" for Huawei backends. Restrictions ------------ The Huawei driver has the following restrictions: - IP and USER access types are supported for NFS. - Only LDAP domain is supported for NFS. - Only USER access type is supported for CIFS. - Only AD domain is supported for CIFS. The :mod:`manila.share.drivers.huawei.huawei_nas` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.huawei.huawei_nas :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/glusterfs_native_driver.rst0000664000175000017500000001452113656750227023721 0ustar zuulzuul00000000000000.. Copyright 2015 Red Hat, Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. GlusterFS Native driver ======================= GlusterFS Native driver uses GlusterFS, an open source distributed file system, as the storage backend for serving file shares to manila clients. A manila share is a GlusterFS volume. This driver uses flat-network (share-server-less) model. Instances directly talk with the GlusterFS backend storage pool. The instances use 'glusterfs' protocol to mount the GlusterFS shares. Access to each share is allowed via TLS Certificates. Only the instance which has the TLS trust established with the GlusterFS backend can mount and hence use the share. Currently only 'rw' access is supported. Network Approach ---------------- L3 connectivity between the storage backend and the host running the manila share service should exist. Supported shared filesystems ---------------------------- - GlusterFS (share protocol: ``glusterfs``, access by TLS certificates (``cert`` access type)) Multi-tenancy model ------------------- The driver does not support network segmented multi-tenancy model. Instead multi-tenancy is supported using tenant specific TLS certificates. Supported Operations -------------------- - Create share - Delete share - Allow share access (rw) - Deny share access - Create snapshot - Delete snapshot - Create share from snapshot Requirements ------------ - Install glusterfs-server package, version >= 3.6.x, on the storage backend. - Install glusterfs and glusterfs-fuse package, version >=3.6.x, on the manila host. - Establish network connection between the manila host and the storage backend. .. _gluster_native_manila_conf: Manila driver configuration setting ----------------------------------- The following parameters in manila's configuration file need to be set: - `share_driver` = manila.share.drivers.glusterfs.glusterfs_native.GlusterfsNativeShareDriver - `glusterfs_servers` = List of GlusterFS servers which provide volumes that can be used to create shares. The servers are expected to be of distinct Gluster clusters (ie. should not be gluster peers). Each server should be of the form ``[@]``. The optional ``@`` part of the server URI indicates SSH access for cluster management (see related optional parameters below). If it is not given, direct command line management is performed (ie. 
manila host is assumed to be part of the GlusterFS cluster the server belongs to). - `glusterfs_volume_pattern` = Regular expression template used to filter GlusterFS volumes for share creation. The regex template can contain the #{size} parameter which matches a number (sequence of digits) and the value shall be interpreted as size of the volume in GB. Examples: ``manila-share-volume-\d+$``, ``manila-share-volume-#{size}G-\d+$``; with matching volume names, respectively: *manila-share-volume-12*, *manila-share-volume-3G-13*". In latter example, the number that matches ``#{size}``, that is, 3, is an indication that the size of volume is 3G. The following configuration parameters are optional: - `glusterfs_mount_point_base` = - `glusterfs_path_to_private_key` = - `glusterfs_server_password` = Host and backend configuration ------------------------------ - SSL/TLS should be enabled on the I/O path for GlusterFS servers and volumes involved (ie. ones specified in ``glusterfs_servers``), as described in https://docs.gluster.org/en/latest/Administrator%20Guide/SSL/. (Enabling SSL/TLS for the management path is also possible but not recommended currently.) - The manila host should be also configured for GlusterFS SSL/TLS (ie. `/etc/ssl/glusterfs.{pem,key,ca}` files has to be deployed as the above document specifies). - There is a further requirement for the CA-s used: the set of CA-s involved should be consensual, ie. `/etc/ssl/glusterfs.ca` should be identical across all the servers and the manila host. - There is a further requirement for the common names (CN-s) of the certificates used: the certificates of the servers should have a common name starting with `glusterfs-server`, and the certificate of the host should have common name starting with `manila-host`. - To support snapshots, bricks that consist the GlusterFS volumes used by manila should be thinly provisioned LVM ones (cf. https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Snapshots/). Known Restrictions ------------------ - GlusterFS volumes are not created on demand. A pre-existing set of GlusterFS volumes should be supplied by the GlusterFS cluster(s), conforming to the naming convention encoded by ``glusterfs_volume_pattern``. However, the GlusterFS endpoint is allowed to extend this set any time (so manila and GlusterFS endpoints are expected to communicate volume supply/demand out-of-band). ``glusterfs_volume_pattern`` can include a size hint (with ``#{size}`` syntax), which, if present, requires the GlusterFS end to indicate the size of the shares in GB in the name. (On share creation, manila picks volumes *at least* as big as the requested one.) - Certificate setup (aka trust setup) between instance and storage backend is out of band of manila. - For manila to use GlusterFS volumes, the name of the trashcan directory in GlusterFS volumes must not be changed from the default. The :mod:`manila.share.drivers.glusterfs.glusterfs_native.GlusterfsNativeShareDriver` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.glusterfs.glusterfs_native :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/cephfs_driver.rst0000664000175000017500000004471513656750227021615 0ustar zuulzuul00000000000000.. Copyright 2016 Red Hat, Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============= CephFS driver ============= The CephFS driver enables manila to export shared filesystems backed by Ceph's File System (CephFS) using either the Ceph network protocol or NFS protocol. Guests require a native Ceph client or an NFS client in order to mount the filesystem. When guests access CephFS using the native Ceph protocol, access is controlled via Ceph's cephx authentication system. If a user requests share access for an ID, Ceph creates a corresponding Ceph auth ID and a secret key, if they do not already exist, and authorizes the ID to access the share. The client can then mount the share using the ID and the secret key. To learn more about configuring Ceph clients to access the shares created using this driver, please see the Ceph documentation (http://docs.ceph.com/docs/master/cephfs/). If you choose to use the kernel client rather than the FUSE client, the share size limits set in manila may not be obeyed. And when guests access CephFS through NFS, an NFS-Ganesha server mediates access to CephFS. The driver enables access control by managing the NFS-Ganesha server's exports. Supported Operations ~~~~~~~~~~~~~~~~~~~~ The following operations are supported with CephFS backend: - Create/delete share - Allow/deny CephFS native protocol access to share * Only ``cephx`` access type is supported for CephFS native protocol. * ``read-only`` access level is supported in Newton or later versions of manila. * ``read-write`` access level is supported in Mitaka or later versions of manila. (or) Allow/deny NFS access to share * Only ``ip`` access type is supported for NFS protocol. * ``read-only`` and ``read-write`` access levels are supported in Pike or later versions of manila. - Extend/shrink share - Create/delete snapshot - Create/delete consistency group (CG) - Create/delete CG snapshot .. warning:: CephFS currently supports snapshots as an experimental feature, therefore the snapshot support with the CephFS Native driver is also experimental and should not be used in production environments. For more information, see (http://docs.ceph.com/docs/master/cephfs/experimental-features/#snapshots). Prerequisites ~~~~~~~~~~~~~ .. important:: A manila share backed by CephFS is only as good as the underlying filesystem. Take care when configuring your Ceph cluster, and consult the latest guidance on the use of CephFS in the Ceph documentation ( http://docs.ceph.com/docs/master/cephfs/) For CephFS native shares ------------------------ - Mitaka or later versions of manila. - Jewel or later versions of Ceph. - A Ceph cluster with a filesystem configured ( http://docs.ceph.com/docs/master/cephfs/createfs/) - ``ceph-common`` package installed in the servers running the :term:`manila-share` service. - Ceph client installed in the guest, preferably the FUSE based client, ``ceph-fuse``. - Network connectivity between your Ceph cluster's public network and the servers running the :term:`manila-share` service. - Network connectivity between your Ceph cluster's public network and guests. See :ref:`security_cephfs_native`. For CephFS NFS shares --------------------- - Pike or later versions of manila. 
- Kraken or later versions of Ceph. - 2.5 or later versions of NFS-Ganesha. - A Ceph cluster with a filesystem configured ( http://docs.ceph.com/docs/master/cephfs/createfs/) - ``ceph-common`` package installed in the servers running the :term:`manila-share` service. - NFS client installed in the guest. - Network connectivity between your Ceph cluster's public network and the servers running the :term:`manila-share` service. - Network connectivity between your Ceph cluster's public network and NFS-Ganesha server. - Network connectivity between your NFS-Ganesha server and the manila guest. .. _authorize_ceph_driver: Authorizing the driver to communicate with Ceph ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Run the following commands to create a Ceph identity for a driver instance to use: .. code-block:: console read -d '' MON_CAPS << EOF allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create" EOF ceph auth get-or-create client.manila -o manila.keyring \ mds 'allow *' \ osd 'allow rw' \ mon "$MON_CAPS" ``manila.keyring``, along with your ``ceph.conf`` file, will then need to be placed on the server running the :term:`manila-share` service. .. important:: To communicate with the Ceph backend, a CephFS driver instance (represented as a backend driver section in manila.conf) requires its own Ceph auth ID that is not used by other CephFS driver instances running in the same controller node. In the server running the :term:`manila-share` service, you can place the ``ceph.conf`` and ``manila.keyring`` files in the /etc/ceph directory. Set the same owner for the :term:`manila-share` process and the ``manila.keyring`` file. Add the following section to the ``ceph.conf`` file. .. code-block:: ini [client.manila] client mount uid = 0 client mount gid = 0 log file = /opt/stack/logs/ceph-client.manila.log admin socket = /opt/stack/status/stack/ceph-$name.$pid.asok keyring = /etc/ceph/manila.keyring It is advisable to modify the Ceph client's admin socket file and log file locations so that they are co-located with manila services's pid files and log files respectively. Enabling snapshot support in Ceph backend ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Enable snapshots in Ceph if you want to use them in manila: .. code-block:: console ceph mds set allow_new_snaps true --yes-i-really-mean-it .. warning:: Note that the snapshot support for the CephFS driver is experimental and is known to have several caveats for use. Only enable this and the equivalent ``manila.conf`` option if you understand these risks. See (http://docs.ceph.com/docs/master/cephfs/experimental-features/#snapshots) for more details. Configuring CephFS backend in manila.conf ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Configure CephFS native share backend in manila.conf ---------------------------------------------------- Add CephFS to ``enabled_share_protocols`` (enforced at manila api layer). In this example we leave NFS and CIFS enabled, although you can remove these if you will only use CephFS: .. code-block:: ini enabled_share_protocols = NFS,CIFS,CEPHFS Create a section like this to define a CephFS native backend: .. 
code-block:: ini

    [cephfsnative1]
    driver_handles_share_servers = False
    share_backend_name = CEPHFSNATIVE1
    share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_protocol_helper_type = CEPHFS
    cephfs_auth_id = manila
    cephfs_cluster_name = ceph
    cephfs_enable_snapshots = False

Set ``driver_handles_share_servers`` to ``False`` as the driver does not manage
the lifecycle of ``share-servers``. To let the driver perform snapshot related
operations, set ``cephfs_enable_snapshots`` to ``True``. For the driver backend
to expose shares via the native Ceph protocol, set
``cephfs_protocol_helper_type`` to ``CEPHFS``.

Then edit ``enabled_share_backends`` to point to the driver's backend section
using the section name. In this example we are also including another backend
("generic1"); you would include whatever other backends you have configured.

.. note::

   For the Mitaka, Newton, and Ocata releases, the ``share_driver`` path was
   ``manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver``.

.. code-block:: ini

    enabled_share_backends = generic1, cephfsnative1


Configure CephFS NFS share backend in manila.conf
-------------------------------------------------

Add NFS to ``enabled_share_protocols`` if it's not already there:

.. code-block:: ini

    enabled_share_protocols = NFS,CIFS,CEPHFS

Create a section to define a CephFS NFS share backend:

.. code-block:: ini

    [cephfsnfs1]
    driver_handles_share_servers = False
    share_backend_name = CEPHFSNFS1
    share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
    cephfs_protocol_helper_type = NFS
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_auth_id = manila
    cephfs_cluster_name = ceph
    cephfs_enable_snapshots = False
    cephfs_ganesha_server_is_remote = False
    cephfs_ganesha_server_ip = 172.24.4.3

The following options are set in the driver backend section above:

* ``driver_handles_share_servers`` to ``False`` as the driver does not manage
  the lifecycle of ``share-servers``.

* ``cephfs_protocol_helper_type`` to ``NFS`` to allow NFS protocol access to
  the CephFS backed shares.

* ``cephfs_auth_id`` to the Ceph auth ID created in
  :ref:`authorize_ceph_driver`.

* ``cephfs_ganesha_server_is_remote`` to ``False`` if the NFS-Ganesha server
  is co-located with the :term:`manila-share` service. If the NFS-Ganesha
  server is remote, then set the option to ``True``, and set other options
  such as ``cephfs_ganesha_server_ip``, ``cephfs_ganesha_server_username``,
  and ``cephfs_ganesha_server_password`` (or
  ``cephfs_ganesha_path_to_private_key``) to allow the driver to manage the
  NFS-Ganesha export entries over SSH.

* ``cephfs_ganesha_server_ip`` to the Ganesha server IP address. It is
  recommended to set this option even if the Ganesha server is co-located
  with the :term:`manila-share` service.

With NFS-Ganesha (v2.5.4 or later) and Ceph (v12.2.2 or later), the driver
(Queens or later) can store NFS-Ganesha exports and the export counter in
Ceph RADOS objects. This is useful for highly available NFS-Ganesha
deployments to store their configuration efficiently in an already available
distributed storage system. Set additional options in the NFS driver section
to enable the driver to do this.

.. code-block:: ini

    [cephfsnfs1]
    ganesha_rados_store_enable = True
    ganesha_rados_store_pool_name = cephfs_data
    driver_handles_share_servers = False
    share_backend_name = CEPHFSNFS1
    share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
    cephfs_protocol_helper_type = NFS
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_auth_id = manila
    cephfs_cluster_name = ceph
    cephfs_enable_snapshots = False
    cephfs_ganesha_server_is_remote = False
    cephfs_ganesha_server_ip = 172.24.4.3

The following Ganesha library related options (see manila's Ganesha library
documentation for more details) are set in the driver backend section above:

* ``ganesha_rados_store_enable`` to ``True`` for persisting Ganesha exports
  and the export counter in Ceph RADOS objects.

* ``ganesha_rados_store_pool_name`` to the Ceph RADOS pool that stores
  Ganesha exports and export counter objects. If you want to use one of the
  backend CephFS's RADOS pools, then using CephFS's data pool is preferred
  over using its metadata pool.

Edit ``enabled_share_backends`` to point to the driver's backend section
using the section name, ``cephfsnfs1``.

.. code-block:: ini

    enabled_share_backends = generic1, cephfsnfs1


Creating shares
~~~~~~~~~~~~~~~

Create CephFS native share
--------------------------

The default share type may have ``driver_handles_share_servers`` set to
``True``. Configure a share type suitable for CephFS native share:

.. code-block:: console

    manila type-create cephfsnativetype false
    manila type-key cephfsnativetype set vendor_name=Ceph storage_protocol=CEPHFS

Then create yourself a share:

.. code-block:: console

    manila create --share-type cephfsnativetype --name cephnativeshare1 cephfs 1

Note the export location of the share:

.. code-block:: console

    manila share-export-location-list cephnativeshare1

The export location of the share contains the Ceph monitor (mon) addresses and
ports, and the path to be mounted. It is of the form,
``{mon ip addr:port}[,{mon ip addr:port}]:{path to be mounted}``

Create CephFS NFS share
-----------------------

Configure a share type suitable for CephFS NFS share:

.. code-block:: console

    manila type-create cephfsnfstype false
    manila type-key cephfsnfstype set vendor_name=Ceph storage_protocol=NFS

Then create a share:

.. code-block:: console

    manila create --share-type cephfsnfstype --name cephnfsshare1 nfs 1

Note the export location of the share:

.. code-block:: console

    manila share-export-location-list cephnfsshare1

The export location of the share contains the IP address of the NFS-Ganesha
server and the path to be mounted. It is of the form,
``{NFS-Ganesha server address}:{path to be mounted}``


Allowing access to shares
~~~~~~~~~~~~~~~~~~~~~~~~~

Allow access to CephFS native share
-----------------------------------

Allow Ceph auth ID ``alice`` access to the share using ``cephx`` access type.

.. code-block:: console

    manila access-allow cephnativeshare1 cephx alice

Note the access status, and the access/secret key of ``alice``.

.. code-block:: console

    manila access-list cephnativeshare1

.. note::

   In the Mitaka release, the secret key is not exposed by any manila API.
   The Ceph storage admin needs to pass the secret key to the guest out of
   band of manila. You can refer to the link below to see how the storage
   admin could obtain the secret key of an ID.
   http://docs.ceph.com/docs/jewel/rados/operations/user-management/#get-a-user

   Alternatively, the cloud admin can create Ceph auth IDs for each of the
   tenants. The users can then request manila to authorize the pre-created
   Ceph auth IDs, whose secret keys are already shared with them out of band
   of manila, to access the shares.

   The following is a command that the cloud admin could run from the server
   running the :term:`manila-share` service to create a Ceph auth ID and get
   its keyring file.

   .. code-block:: console

      ceph --name=client.manila --keyring=/etc/ceph/manila.keyring auth \
      get-or-create client.alice -o alice.keyring

   For more details, please see the Ceph documentation.
   http://docs.ceph.com/docs/jewel/rados/operations/user-management/#add-a-user

Allow access to CephFS NFS share
--------------------------------

Allow a guest access to the share using ``ip`` access type.

.. code-block:: console

    manila access-allow cephnfsshare1 ip 172.24.4.225


Mounting CephFS shares
~~~~~~~~~~~~~~~~~~~~~~

Mounting CephFS native share using FUSE client
----------------------------------------------

Using the secret key of the authorized ID ``alice``, create a keyring file,
``alice.keyring``, like:

.. code-block:: ini

    [client.alice]
            key = AQA8+ANW/4ZWNRAAOtWJMFPEihBA1unFImJczA==

Using the mon IP addresses from the share's export location, create a
configuration file, ``ceph.conf``, like:

.. code-block:: ini

    [client]
            client quota = true
            mon host = 192.168.1.7:6789, 192.168.1.8:6789, 192.168.1.9:6789

Finally, mount the filesystem, substituting the filenames of the keyring and
configuration files you just created, and substituting the path to be mounted
from the share's export location:

.. code-block:: console

    sudo ceph-fuse ~/mnt \
    --id=alice \
    --conf=./ceph.conf \
    --keyring=./alice.keyring \
    --client-mountpoint=/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c

Mount CephFS NFS share using NFS client
---------------------------------------

In the guest, mount the share using the NFS client and the share's export
location.

.. code-block:: console

    sudo mount -t nfs 172.24.4.3:/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e /mnt/nfs/


Known restrictions
~~~~~~~~~~~~~~~~~~

- A CephFS driver instance, represented as a backend driver section in
  manila.conf, requires a Ceph auth ID unique to the backend Ceph Filesystem.
  Using a non-unique Ceph auth ID will result in the driver unintentionally
  evicting other CephFS clients using the same Ceph auth ID to connect to the
  backend.

- The snapshot support of the driver is disabled by default. The
  ``cephfs_enable_snapshots`` configuration option needs to be set to ``True``
  to allow snapshot operations. Snapshot support will also need to be enabled
  on the backend CephFS storage.

- Snapshots are read-only. A user can read a snapshot's contents from the
  ``.snap/{manila-snapshot-id}_{unknown-id}`` folder within the mounted share.

Restrictions with CephFS native share backend
---------------------------------------------

- To restrict share sizes, CephFS uses quotas that are enforced on the client
  side. The CephFS FUSE clients are relied on to respect quotas.

Mitaka release only

- The secret key of a Ceph auth ID required to mount a share is not exposed
  to a user by a manila API. To work around this, the storage admin would
  need to pass the key out of band of manila, or the user would need to use
  the Ceph ID and key already created and shared with her by the cloud admin.


Security
~~~~~~~~

- Each share's data is mapped to a distinct Ceph RADOS namespace. A guest is
  restricted to access only that particular RADOS namespace.
http://docs.ceph.com/docs/master/cephfs/file-layouts/ - An additional level of resource isolation can be provided by mapping a share's contents to a separate RADOS pool. This layout would be preferred only for cloud deployments with a limited number of shares needing strong resource separation. You can do this by setting a share type specification, ``cephfs:data_isolated`` for the share type used by the cephfs driver. .. code-block:: console manila type-key cephfstype set cephfs:data_isolated=True .. _security_cephfs_native: Security with CephFS native share backend ----------------------------------------- As the guests need direct access to Ceph's public network, CephFS native share backend is suitable only in private clouds where guests can be trusted. The :mod:`manila.share.drivers.cephfs.driver` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.cephfs.driver :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/hitachi_hnas_driver.rst0000664000175000017500000005215113656750227022760 0ustar zuulzuul00000000000000.. Copyright 2016 Hitachi Data Systems, Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ======================================================= Hitachi NAS Platform File Services Driver for OpenStack ======================================================= ------------------ Driver Version 3.0 ------------------ Hitachi NAS Platform Storage Requirements ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This Hitachi NAS Platform File Services Driver for OpenStack provides support for Hitachi NAS Platform (HNAS) models 3080, 3090, 4040, 4060, 4080 and 4100 with NAS OS 12.2 or higher. Before configuring the driver, ensure the HNAS has at least: - 1 storage pool (span) configured. - 1 EVS configured. - 1 file system in this EVS, created without replication target option and should be in mounted state. It is recommended to disable auto-expansion, because the scheduler uses the current free space reported by the file system when creating shares. - 1 Management User configured with "supervisor" permission level. - Hitachi NAS Management interface should be reachable from manila-share node. Also, if the driver is going to create CIFS shares, either LDAP servers or domains must be configured previously in HNAS to provide the users and groups. Supported Operations ~~~~~~~~~~~~~~~~~~~~ The following operations are supported in this version of Hitachi NAS Platform File Services Driver for OpenStack: - Create and delete CIFS and NFS shares; - Extend and shrink shares; - Manage rules to shares (allow/deny access); - Allow and deny share access; - ``IP`` access type supported for ``NFS`` shares; - ``User`` access type supported for ``CIFS`` shares; - Both ``RW`` and ``RO`` access level are supported for NFS and CIFS shares; - Manage and unmanage shares; - Create and delete snapshots; - Create shares from snapshots. 
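As a quick, illustrative workflow, the commands below exercise a few of these
operations from the manila CLI once the driver has been configured as
described in the sections that follow. The share names, sizes, and client IP
are placeholders, and the ``hitachi`` share type is the example type created
in Step 3 below:

.. code-block:: console

   $ # Create a 1 GiB NFS share and allow a client IP to access it read/write
   $ manila create NFS 1 --name hnas_share1 --share-type hitachi
   $ manila access-allow hnas_share1 ip 10.0.1.100

   $ # Snapshot the share, then extend it to 2 GiB
   $ manila snapshot-create hnas_share1 --name hnas_share1_snap1
   $ manila extend hnas_share1 2
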
Driver Configuration ~~~~~~~~~~~~~~~~~~~~ This document contains the installation and user guide of the Hitachi NAS Platform File Services Driver for OpenStack. Although mentioning some Shared File Systems service operations and HNAS commands, both are not in the scope of this document. Please refer to their own guides for details. Before configuring the driver, make sure that the nodes running the manila-share service have access to the HNAS management port, and compute and network nodes have access to the data ports (EVS IPs or aggregations). The driver configuration can be summarized in the following steps: #. Configure HNAS parameters on ``manila.conf``; #. Prepare the network ensuring all OpenStack-HNAS connections mentioned above; #. Configure/create share type; #. Restart the services; #. Configure OpenStack networks. Step 1 - HNAS Parameters Configuration ************************************** The following parameters need to be configured in the [DEFAULT] section of ``/etc/manila/manila.conf``: +----------------------------+------------------------------------------------+ | **Option** | **Description** | +============================+================================================+ | enabled_share_backends | Name of the section on ``manila.conf`` used to | | | specify a backend. For example: | | | *enabled_share_backends = hnas1* | +----------------------------+------------------------------------------------+ | enabled_share_protocols | Specify a list of protocols to be allowed for | | | share creation. This driver version supports | | | NFS and/or CIFS. | +----------------------------+------------------------------------------------+ The following parameters need to be configured in the [backend] section of ``/etc/manila/manila.conf``: +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | **Option** | **Description** | +=================================================+=====================================================================================================+ | share_backend_name | A name for the backend. | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | share_driver | Python module path. For this driver **this must be**: | | | *manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver* | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | driver_handles_share_servers | Driver working mode. For this driver **this must be**: | | | *False*. | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_ip | HNAS management interface IP for communication between manila-share node and HNAS. | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_user | This field is used to provide user credential to HNAS. Provided management user must have | | | "supervisor" level. | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_password | This field is used to provide password credential to HNAS. 
| | | Either hitachi_hnas_password or hitachi_hnas_ssh_private_key must be set. | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_ssh_private_key | Set this parameter with RSA/DSA private key path to allow the driver to connect into HNAS. | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_evs_id | ID from EVS which this backend is assigned to (ID can be listed by CLI "evs list" | | | or EVS Management in HNAS Interface). | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_evs_ip | EVS IP for mounting shares (this can be listed by CLI "evs list" or EVS Management in HNAS | | | interface). | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_file_system_name | Name of the file system in HNAS, located in the specified EVS. | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_cluster_admin_ip0* | If HNAS is in a multi-farm (one SMU managing multiple HNAS) configuration, set this parameter with | | | the IP of the cluster's admin node. | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_stalled_job_timeout* | Tree-clone-job commands are used to create snapshots and create shares from snapshots. | | | This parameter sets a timeout (in seconds) to wait for jobs to complete. Default value is | | | 30 seconds. | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_driver_helper* | Python module path for the driver helper. For this driver, it should use (default value): | | | *manila.share.drivers.hitachi.hnas.ssh.HNASSSHBackend* | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ | hitachi_hnas_allow_cifs_snapshot_while_mounted* | By default, CIFS snapshots are not allowed to be taken while the share has clients connected | | | because point-in-time replica cannot be guaranteed for all files. This parameter can be set | | | to *True* to allow snapshots to be taken while the share has clients connected. **WARNING**: | | | Setting this parameter to *True* might cause inconsistent snapshots on CIFS shares. Default | | | value is *False*. | +-------------------------------------------------+-----------------------------------------------------------------------------------------------------+ \* Non mandatory parameters. Below is an example of a valid configuration of HNAS driver: .. code-block:: ini [DEFAULT]`` ... enabled_share_backends = hitachi1 enabled_share_protocols = CIFS,NFS ... 
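# Illustrative note (not part of a minimal configuration): the optional
# parameters described in the table above, such as the ones below, can also
# be added to the backend section that follows; the values shown here are
# only placeholders.
# hitachi_hnas_cluster_admin_ip0 = 172.24.44.16
# hitachi_hnas_stalled_job_timeout = 60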
[hitachi1] share_backend_name = HITACHI1 share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver driver_handles_share_servers = False hitachi_hnas_ip = 172.24.44.15 hitachi_hnas_user = supervisor hitachi_hnas_password = supervisor hitachi_hnas_evs_id = 1 hitachi_hnas_evs_ip = 10.0.1.20 hitachi_hnas_file_system_name = FS-Manila Step 2 - Prepare the Network **************************** In the driver mode used by Hitachi NAS Platform File Services Driver for OpenStack, driver_handles_share_servers (DHSS) as False, the driver does not handle network configuration, it is up to the administrator to configure it. It is mandatory that HNAS management interface is reachable from a manila-share node through admin network, while the selected EVS data interface is reachable from OpenStack Cloud, such as through neutron flat networking. Here is a step-by-step of an example configuration: | **Manila-Share Node:** | **eth0**: Admin Network, can ping HNAS management interface. | **eth1**: Data Network, can ping HNAS EVS IP (data interface). This interface is only required if you plan to use Share Migration. | **Network Node and Compute Nodes:** | **eth0**: Admin Network, can ping HNAS management interface. | **eth1**: Data Network, can ping HNAS EVS IP (data interface). The following image represents the described scenario: .. image:: /images/rpc/hds_network.jpg :width: 60% Run in **Network Node**: .. code-block:: console $ sudo ifconfig eth1 0 $ sudo ovs-vsctl add-br br-eth1 $ sudo ovs-vsctl add-port br-eth1 eth1 $ sudo ifconfig eth1 up Edit */etc/neutron/plugins/ml2/ml2_conf.ini* (default directory), change the following settings as follows in their respective tags: .. code-block:: ini [ml2] type_drivers = flat,vlan,vxlan,gre mechanism_drivers = openvswitch [ml2_type_flat] flat_networks = physnet1,physnet2 [ml2_type_vlan] network_vlan_ranges = physnet1:1000:1500,physnet2:2000:2500 [ovs] bridge_mappings = physnet1:br-ex,physnet2:br-eth1 You may have to repeat the last line above in another file in the Compute Node, if it exists is located in: */etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini*. Create a route in HNAS to the tenant network. Please make sure multi-tenancy is enabled and routes are configured per EVS. Use the command "route-net-add" in HNAS console, where the network parameter should be the tenant's private network, while the gateway parameter should be the flat network gateway and the "console-context --evs" parameter should be the ID of EVS in use, such as in the following example: .. code-block:: console $ console-context --evs 3 route-net-add --gateway 192.168.1.1 10.0.0.0/24 Step 3 - Share Type Configuration ********************************* Shared File Systems service requires that the share type includes the driver_handles_share_servers extra-spec. This ensures that the share will be created on a backend that supports the requested driver_handles_share_servers capability. For the Hitachi NAS Platform File Services Driver for OpenStack this must be set to False. .. code-block:: console $ manila type-create hitachi False Additionally, the driver also reports the following common capabilities that can be specified in the share type: +----------------------------+----------------------------------------------+ | **Capability** | **Description** | +============================+==============================================+ | thin_provisioning = True | All shares created on HNAS are always thin | | | provisioned. 
So, if you set it, the value | | | **must be**: *True*. | +----------------------------+----------------------------------------------+ | dedupe = True/False | HNAS supports deduplication on its file | | | systems and the driver will report | | | *dedupe=True* if it is enabled on the file | | | system being used. To use it, go to HNAS and | | | enable the feature on the file system used. | +----------------------------+----------------------------------------------+ To specify a common capability on the share type, use the *type-key* command, for example: .. code-block:: console $ manila type-key hitachi set dedupe=True Step 4 - Restart the Services ***************************** Restart all Shared File Systems services (manila-share, manila-scheduler and manila-api) and neutron services (neutron-\*). This step is specific to your environment. If you are running in devstack for example, you have to log into screen (``screen -r``), stop the process (``Ctrl^C``) and run it again. If you are running it in a distro like RHEL or SUSE, a service command (for example *service manila-api restart*) is used to restart the service. Step 5 - Configure OpenStack Networks ************************************* In Neutron Controller it is necessary to create a network, a subnet and to add this subnet interface to a router: Create a network to the given tenant (demo), providing the DEMO_ID (this can be fetched using *keystone tenant-list*), a name for the network, the name of the physical network over which the virtual network is implemented and the type of the physical mechanism by which the virtual network is implemented: .. code-block:: console $ neutron net-create --tenant-id hnas_network --provider:physical_network=physnet2 --provider:network_type=flat Create a subnet to same tenant (demo), providing the DEMO_ID (this can be fetched using *keystone tenant-list*), the gateway IP of this subnet, a name for the subnet, the network ID created on previously step (this can be fetched using *neutron net-list*) and CIDR of subnet: .. code-block:: console $ neutron subnet-create --tenant-id --gateway --name hnas_subnet Finally, add the subnet interface to a router, providing the router ID and subnet ID created on previously step (can be fetched using *neutron subnet-list*): .. code-block:: console $ neutron router-interface-add Manage and Unmanage Shares ~~~~~~~~~~~~~~~~~~~~~~~~~~ Manila has the ability to manage and unmanage shares. If there is a share in the storage and it is not in OpenStack, you can manage that share and use it as a manila share. Hitachi NAS Platform File Services Driver for OpenStack use virtual-volumes (V-VOLs) to create shares. Only V-VOLs with a quota limit can be used by the driver, also, they must be created or moved inside the directory '/shares/' and exported (as NFS or CIFS shares). The unmanage operation only unlinks the share from OpenStack, preserving all data in the share. To **manage** shares use: .. code-block:: console $ manila manage [--name ] [--description ] [--share_type ] [--driver_options [ [ ...]]] Where: +------------------+----------------------------------------------------------+ | Parameter | Description | +==================+==========================================================+ | | Manila host, backend and share name. For example | | service_host | ubuntu\@hitachi1#HITACHI1. 
The available hosts can be | | | listed with the command: *manila pool-list* (admin only).| +------------------+---------------------+------------------------------------+ | protocol | NFS or CIFS protocols are currently supported. | +------------------+----------------------------------------------------------+ | export_path | The export path of the share. | | | For example: *172.24.44.31:/shares/some_share_id* | +------------------+----------------------------------------------------------+ To **unmanage** a share use: .. code-block:: console $ manila unmanage Where: +------------------+---------------------------------------------------------+ | Parameter | Description | +==================+=========================================================+ | share_id | Manila ID of the share to be unmanaged. This list can | | | be fetched with: *manila list*. | +------------------+---------------------+-----------------------------------+ Additional Notes ~~~~~~~~~~~~~~~~ - HNAS has some restrictions about the number of EVSs, file systems, virtual-volumes and simultaneous SSC connections. Check the manual specification for your system. - Shares and snapshots are thin provisioned. It is reported to manila only the real used space in HNAS. Also, a snapshot does not initially take any space in HNAS, it only stores the difference between the share and the snapshot, so it grows when share data is changed. - Admins should manage the tenant's quota (*manila quota-update*) to control the backend usage. - By default, CIFS snapshots are disabled when the share is mounted, since it uses tree-clone to create snapshots and does not guarantee point-in-time replicas when the source directory tree is changing, also, changing permissions to *read-only* does not affect already mounted shares. So, enable it if your source directory can be static while taking snapshots. Currently, it affects only CIFS protocol. For more information check the tree-clone feature in HNAS with *man tree-clone*. The :mod:`manila.share.drivers.hitachi.hnas.driver` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.hitachi.hnas.driver :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/shared-file-systems-quotas.rst0000664000175000017500000001636413656750227024173 0ustar zuulzuul00000000000000.. _shared_file_systems_quotas: ================= Quotas and limits ================= Limits ~~~~~~ Limits are the resource limitations that are allowed for each project. An administrator can configure limits in the ``manila.conf`` file. Users can query their rate and absolute limits. To see the absolute limits, run: .. code-block:: console $ manila absolute-limits +----------------------------+-------+ | Name | Value | +----------------------------+-------+ | maxTotalShareGigabytes | 1000 | | maxTotalShareNetworks | 10 | | maxTotalShareSnapshots | 50 | | maxTotalShares | 50 | | maxTotalSnapshotGigabytes | 1000 | | totalShareGigabytesUsed | 1 | | totalShareNetworksUsed | 2 | | totalShareSnapshotsUsed | 1 | | totalSharesUsed | 1 | | totalSnapshotGigabytesUsed | 1 | +----------------------------+-------+ Rate limits control the frequency at which users can issue specific API requests. Administrators use rate limiting to configure limits on the type and number of API calls that can be made in a specific time interval. For example, a rate limit can control the number of ``GET`` requests processed during a one-minute period. 
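Each rate limit rule is written as a tuple of an HTTP verb, a human-readable
URI, a URI regular expression, the maximum number of requests, and a time
interval. As an illustration (this particular rule is an example only and is
not part of the default configuration), a rule intended to allow at most ten
share-creation requests per minute could look like:

.. code-block:: ini

   (POST, "*/shares", ^/shares, 10, MINUTE)
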
To set the API rate limits, modify the ``etc/manila/api-paste.ini`` file, which is a part of the WSGI pipeline and defines the actual limits. You need to restart ``manila-api`` service after you edit the ``etc/manila/api-paste.ini`` file. .. code-block:: ini [filter:ratelimit] paste.filter_factory = manila.api.v1.limits:RateLimitingMiddleware.factory limits = (POST, "*/shares", ^/shares, 120, MINUTE);(PUT, "*/shares", .*, 120, MINUTE);(DELETE, "*", .*, 120, MINUTE) Also, add the ``ratelimit`` to ``noauth``, ``keystone``, ``keystone_nolimit`` parameters in the ``[composite:openstack_share_api]`` and ``[composite:openstack_share_api_v2]`` groups. .. code-block:: ini [composite:openstack_share_api] use = call:manila.api.middleware.auth:pipeline_factory noauth = cors faultwrap ssl ratelimit sizelimit noauth api keystone = cors faultwrap ssl ratelimit sizelimit authtoken keystonecontext api keystone_nolimit = cors faultwrap ssl ratelimit sizelimit authtoken keystonecontext api [composite:openstack_share_api_v2] use = call:manila.api.middleware.auth:pipeline_factory noauth = cors faultwrap ssl ratelimit sizelimit noauth apiv2 keystone = cors faultwrap ssl ratelimit sizelimit authtoken keystonecontext apiv2 keystone_nolimit = cors faultwrap ssl ratelimit sizelimit authtoken keystonecontext apiv2 To see the rate limits, run: .. code-block:: console $ manila rate-limits +--------+------------+-------+--------+--------+----------------------+ | Verb | URI | Value | Remain | Unit | Next_Available | +--------+------------+-------+--------+--------+----------------------+ | DELETE | "*" | 120 | 120 | MINUTE | 2015-10-20T15:17:20Z | | POST | "*/shares" | 120 | 120 | MINUTE | 2015-10-20T15:17:20Z | | PUT | "*/shares" | 120 | 120 | MINUTE | 2015-10-20T15:17:20Z | +--------+------------+-------+--------+--------+----------------------+ Quotas ~~~~~~ Quota sets provide quota management support. To list the quotas for a project or user, use the :command:`manila quota-show` command. If you specify the optional ``--user`` parameter, you get the quotas for this user in the specified project. If you omit this parameter, you get the quotas for the specified project. .. note:: The Shared File Systems service does not perform mapping of usernames and project names to IDs. Provide only ID values to get correct setup of quotas. Setting it by names you set quota for nonexistent project/user. In case quota is not set explicitly by project/user ID, The Shared File Systems service just applies default quotas. .. code-block:: console $ manila quota-show --tenant %project_id% --user %user_id% +-----------------------+-----------------------------------+ | Property | Value | +-----------------------+-----------------------------------+ | id | d99c76b43b1743fd822d26ccc915989c | | gigabytes | 1000 | | snapshot_gigabytes | 1000 | | snapshots | 50 | | shares | 50 | | share_networks | 10 | | share_groups | 50 | | share_group_snapshots | 50 | +-----------------------+-----------------------------------+ There are default quotas for a project that are set from the ``manila.conf`` file. To list the default quotas for a project, use the :command:`manila quota-defaults` command: .. 
code-block:: console $ manila quota-defaults --tenant %project_id% +-----------------------+------------------------------------+ | Property | Value | +-----------------------+------------------------------------+ | id | 1cc2154937bd40f4815d5f168d372263 | | gigabytes | 1000 | | snapshot_gigabytes | 1000 | | snapshots | 50 | | shares | 50 | | share_networks | 10 | | share_groups | 50 | | share_group_snapshots | 50 | +-----------------------+------------------------------------+ The administrator can update the quotas for a specific project, or for a specific user by providing both the ``--tenant`` and ``--user`` optional arguments. It is possible to update the ``shares``, ``snapshots``, ``gigabytes``, ``snapshot-gigabytes``, ``share-networks``, ``share_groups``, ``share_group_snapshots`` and ``share-type`` quotas. .. code-block:: console $ manila quota-update %project_id% --user %user_id% --shares 49 --snapshots 49 As administrator, you can also permit or deny the force-update of a quota that is already used, or if the requested value exceeds the configured quota limit. To force-update a quota, use ``force`` optional key. .. code-block:: console $ manila quota-update %project_id% --shares 51 --snapshots 51 --force The administrator can also update the quotas for a specific share type. Share Type quotas cannot be set for individual users within a project. They can only be applied across all users of a particular project. .. code-block:: console $ manila quota-update %project_id% --share-type %share_type_id% To revert quotas to default for a project or for a user, delete quotas: .. code-block:: console $ manila quota-delete --tenant %project_id% --user-id %user_id% To revert quotas to default, use the specific project or share type. Share Type quotas can not be reverted for individual users within a project. They can only be reverted across all users of a particular project. .. code-block:: console $ manila quota-delete --tenant %project_id% --share-type %share_type_id% manila-10.0.0/doc/source/admin/shared-file-systems-manage-and-unmanage-snapshot.rst0000664000175000017500000001070413656750227030265 0ustar zuulzuul00000000000000.. _shared_file_systems_manage_and_unmanage_snapshot: ================================== Manage and unmanage share snapshot ================================== To ``manage`` a share snapshot means that an administrator, rather than a share driver, manages the storage lifecycle. This approach is appropriate when an administrator manages share snapshots outside of the Shared File Systems service and wants to register it with the service. To ``unmanage`` a share snapshot means to unregister a specified share snapshot from the Shared File Systems service. Administrators can revert an unmanaged share snapshot to managed status if needed. .. _unmanage_share_snapshot: Unmanage a share snapshot ------------------------- The ``unmanage`` operation is not supported for shares that were created on top of share servers and created with share networks. The Share service should have the option ``driver_handles_share_servers = False`` set in the ``manila.conf`` file. To unmanage managed share snapshot, run the :command:`manila snapshot-unmanage ` command. Then try to print the information about the share snapshot. The returned result should indicate that Shared File Systems service won't find the share snapshot: .. 
code-block:: console $ manila snapshot-unmanage my_test_share_snapshot $ manila snapshot-show my_test_share_snapshot ERROR: No sharesnapshot with a name or ID of 'my_test_share_snapshot' exists. .. _manage_share_snapshot: Manage a share snapshot ----------------------- To register the non-managed share snapshot in the File System service, run the :command:`manila snapshot-manage` command: .. code-block:: console manila snapshot-manage [--name ] [--description ] [--driver_options [ [ ...]]] The positional arguments are: - share. Name or ID of the share. - provider_location. Provider location of the share snapshot on the backend. The ``driver_options`` is an optional set of one or more key and value pairs that describe driver options. To manage share snapshot, run: .. code-block:: console $ manila snapshot-manage \ 9ba52cc6-c97e-4b40-8653-4bcbaaf9628d \ 4d1e2863-33dd-4243-bf39-f7354752097d \ --name my_test_share_snapshot \ --description "My test share snapshot" \ +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | status | manage_starting | | share_id | 9ba52cc6-c97e-4b40-8653-4bcbaaf9628d | | user_id | d9f4003655c94db5b16c591920be1f91 | | description | My test share snapshot | | created_at | 2016-07-25T04:49:42.600980 | | size | None | | share_proto | NFS | | provider_location | 4d1e2863-33dd-4243-bf39-f7354752097d | | id | 89c663b5-026d-45c7-a43b-56ef0ba0faab | | project_id | aaa33a0ca4324965a3e65ae47e864e94 | | share_size | 1 | | name | my_test_share_snapshot | +-------------------+--------------------------------------+ Check that the share snapshot is available: .. code-block:: console $ manila snapshot-show my_test_share_snapshot +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | status | available | | share_id | 9ba52cc6-c97e-4b40-8653-4bcbaaf9628d | | user_id | d9f4003655c94db5b16c591920be1f91 | | description | My test share snapshot | | created_at | 2016-07-25T04:49:42.000000 | | size | 1 | | share_proto | NFS | | provider_location | 4d1e2863-33dd-4243-bf39-f7354752097d | | id | 89c663b5-026d-45c7-a43b-56ef0ba0faab | | project_id | aaa33a0ca4324965a3e65ae47e864e94 | | share_size | 1 | | name | my_test_share_snapshot | +-------------------+--------------------------------------+ manila-10.0.0/doc/source/admin/share_back_ends_feature_support_mapping.rst0000664000175000017500000014632213656750227027104 0ustar zuulzuul00000000000000.. Copyright 2015 Mirantis Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Manila share features support mapping ===================================== Here we provide information on support of different share features by different share drivers. Column values contain the OpenStack release letter when a feature was added to the driver. Column value "?" means that this field requires an update with current information. Column value "-" means that this feature is not currently supported. 
Mapping of share drivers and share features support --------------------------------------------------- +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Driver name | create delete share | manage unmanage share | extend share | shrink share | create delete snapshot | create share from snapshot | manage unmanage snapshot | revert to snapshot | mountable snapshot | +========================================+=======================+=======================+==========================+==========================+========================+============================+==========================+====================+====================+ | ZFSonLinux | M | N | M | M | M | M | N | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Container | N | \- | N | \- | \- | \- | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Generic (Cinder as back-end) | J | K | L | L | J | J | M | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | NetApp Clustered Data ONTAP | J | L | L | L | J | J | N | O | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | EMC VMAX | O | \- | O | \- | O | O | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | EMC VNX | J | \- | \- | \- | J | J | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | EMC Unity | N | U | N | S | N | N | U | S | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | EMC Isilon | K | \- | M | \- | K | K | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | GlusterFS | J | \- | directory layout (T) | directory layout (T) | volume layout (L) | 
volume layout (L) | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | GlusterFS-Native | J | \- | \- | \- | K | L | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | HDFS | K | \- | M | \- | K | K | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Hitachi HNAS | L | L | L | M | L | L | O | O | O | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Hitachi HSP | N | N | N | N | \- | \- | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | HPE 3PAR | K | \- | \- | \- | K | K | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Huawei | K | L | L | L | K | M | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | IBM GPFS | K | O | L | \- | K | K | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | INFINIDAT | Q | \- | Q | \- | Q | Q | \- | Q | Q | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | INSPUR AS13000 | R | \- | R | \- | R | R | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | INSPUR InStorage | T | \- | T | \- | \- | \- | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Infortrend 
| T | T | T | T | \- | \- | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | LVM | M | \- | M | \- | M | M | \- | O | O | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Quobyte | K | \- | M | M | \- | \- | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Windows SMB | L | L | L | L | L | L | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Oracle ZFSSA | K | N | M | M | K | K | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | CephFS | M | \- | M | M | M | \- | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | Tegile | M | \- | M | M | M | M | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | NexentaStor4 | N | \- | N | \- | N | N | \- | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | NexentaStor5 | N | T | N | N | N | N | \- | T | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | MapRFS | O | O | O | O | O | O | O | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ | QNAP | O | O | O | \- | O | O | O | \- | \- | +----------------------------------------+-----------------------+-----------------------+--------------------------+--------------------------+------------------------+----------------------------+--------------------------+--------------------+--------------------+ Mapping of share drivers and share 
access rules support ------------------------------------------------------- +----------------------------------------+--------------------------------------------------------------------------+------------------------------------------------------------------------+ | | Read & Write | Read Only | + Driver name +--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | | IPv4 | IPv6 | USER | Cert | CephX | IPv4 | IPv6 | USER | Cert | CephX | +========================================+==============+==============+================+============+==============+==============+==============+================+============+============+ | ZFSonLinux | NFS (M) | \- | \- | \- | \- | NFS (M) | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | Container | \- | \- | CIFS (N) | \- | \- | \- | \- | CIFS (N) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | Generic (Cinder as back-end) | NFS,CIFS (J) | \- | \- | \- | \- | NFS (K) | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | NetApp Clustered Data ONTAP | NFS (J) | NFS (Q) | CIFS (J) | \- | \- | NFS (K) | NFS (Q) | CIFS (M) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | EMC VMAX | NFS (O) | NFS (R) | CIFS (O) | \- | \- | NFS (O) | NFS (R) | CIFS (O) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | EMC VNX | NFS (J) | NFS (Q) | CIFS (J) | \- | \- | NFS (L) | NFS (Q) | CIFS (L) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | EMC Unity | NFS (N) | NFS (Q) | CIFS (N) | \- | \- | NFS (N) | NFS (Q) | CIFS (N) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | EMC Isilon | NFS,CIFS (K) | \- | CIFS (M) | \- | \- | NFS (M) | \- | CIFS (M) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | GlusterFS | NFS (J) | \- | \- | \- | \- | \- | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | GlusterFS-Native | \- | \- | \- | J | \- | \- | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | HDFS | \- | \- | HDFS(K) | \- | 
\- | \- | \- | HDFS(K) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | Hitachi HNAS | NFS (L) | \- | CIFS (N) | \- | \- | NFS (L) | \- | CIFS (N) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | Hitachi HSP | NFS (N) | \- | \- | \- | \- | NFS (N) | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | HPE 3PAR | NFS,CIFS (K) | \- | CIFS (K) | \- | \- | \- | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | Huawei | NFS (K) | \- |NFS (M),CIFS (K)| \- | \- | NFS (K) | \- |NFS (M),CIFS (K)| \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | LVM | NFS (M) | NFS (P) | CIFS (M) | \- | \- | NFS (M) | NFS (P) | CIFS (M) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | Quobyte | NFS (K) | \- | \- | \- | \- | NFS (K) | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | Windows SMB | \- | \- | CIFS (L) | \- | \- | \- | \- | CIFS (L) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | IBM GPFS | NFS (K) | \- | \- | \- | \- | NFS (K) | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | INFINIDAT | NFS (Q) | \- | \- | \- | \- | NFS (Q) | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | INSPUR AS13000 | NFS (R) | \- | CIFS (R) | \- | \- | NFS (R) | \- | CIFS (R) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | INSPUR InStorage | NFS (T) | \- | CIFS (T) | \- | \- | NFS (T) | \- | CIFS (T) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | Infortrend | NFS (T) | \- | CIFS (T) | \- | \- | NFS (T) | \- | CIFS (T) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | Oracle ZFSSA | NFS,CIFS(K) | \- | \- 
| \- | \- | \- | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | CephFS | NFS (P) | NFS (T) | \- | \- | CEPHFS (M) | NFS (P) | NFS (T) | \- | \- | CEPHFS (N) | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | Tegile | NFS (M) | \- |NFS (M),CIFS (M)| \- | \- | NFS (M) | \- |NFS (M),CIFS (M)| \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | NexentaStor4 | NFS (N) | \- | \- | \- | \- | NFS (N) | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | NexentaStor5 | NFS (N) | T | \- | \- | \- | NFS (N) | T | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | MapRFS | \- | \- | MapRFS(O) | \- | \- | \- | \- | MapRFS(O) | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ | QNAP | NFS (O) | \- | \- | \- | \- | NFS (O) | \- | \- | \- | \- | +----------------------------------------+--------------+--------------+----------------+------------+--------------+--------------+--------------+----------------+------------+------------+ Mapping of share drivers and security services support ------------------------------------------------------ +----------------------------------------+------------------+-----------------+------------------+ | Driver name | Active Directory | LDAP | Kerberos | +========================================+==================+=================+==================+ | ZFSonLinux | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | Container | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | Generic (Cinder as back-end) | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | NetApp Clustered Data ONTAP | J | J | J | +----------------------------------------+------------------+-----------------+------------------+ | EMC VMAX | O | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | EMC VNX | J | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | EMC Unity | N | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | EMC Isilon | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | GlusterFS | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | GlusterFS-Native | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | HDFS | \- | 
\- | \- | +----------------------------------------+------------------+-----------------+------------------+ | Hitachi HNAS | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | Hitachi HSP | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | HPE 3PAR | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | Huawei | M | M | \- | +----------------------------------------+------------------+-----------------+------------------+ | LVM | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | Quobyte | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | Windows SMB | L | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | IBM GPFS | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | INFINIDAT | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | INSPUR AS13000 | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | INSPUR InStorage | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | Infortrend | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | Oracle ZFSSA | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | CephFS | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | Tegile | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | NexentaStor4 | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | NexentaStor5 | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | MapRFS | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ | QNAP | \- | \- | \- | +----------------------------------------+------------------+-----------------+------------------+ Mapping of share drivers and common capabilities ------------------------------------------------ More information: :ref:`capabilities_and_extra_specs` +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Driver name | DHSS=True | DHSS=False | dedupe | compression | thin_provisioning | thick_provisioning | qos | create share from snapshot | revert to snapshot | mountable snapshot | ipv4_support | ipv6_support | +========================================+===========+============+========+=============+===================+====================+=====+============================+====================+====================+==============+==============+ | ZFSonLinux | \- | M | M | M | M | \- | \- | M | \- | \- | P | \- | 
+----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Container | N | \- | \- | \- | \- | N | \- | \- | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Generic (Cinder as back-end) | J | K | \- | \- | \- | L | \- | J | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | NetApp Clustered Data ONTAP | J | K | M | M | M | L | P | J | O | \- | P | Q | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | EMC VMAX | O | \- | \- | \- | \- | \- | \- | O | \- | \- | P | R | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | EMC VNX | J | \- | \- | \- | \- | L | \- | J | \- | \- | P | Q | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | EMC Unity | N | T | \- | \- | N | \- | \- | N | S | \- | P | Q | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | EMC Isilon | \- | K | \- | \- | \- | L | \- | K | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | GlusterFS | \- | J | \- | \- | \- | L | \- | volume layout (L) | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | GlusterFS-Native | \- | J | \- | \- | \- | L | \- | L | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | HDFS | \- | K | \- | \- | \- | L | \- | K | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Hitachi HNAS | \- | L | N | \- | L | \- | \- | L | O | O | P | \- | 
+----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Hitachi HSP | \- | N | \- | \- | N | \- | \- | \- | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | HPE 3PAR | L | K | L | \- | L | L | \- | K | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Huawei | M | K | L | L | L | L | M | M | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | INFINIDAT | \- | Q | \- | \- | Q | Q | \- | Q | Q | Q | Q | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Infortrend | \- | T | \- | \- | \- | \- | \- | \- | \- | \- | T | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | LVM | \- | M | \- | \- | \- | M | \- | K | O | O | P | P | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Quobyte | \- | K | \- | \- | \- | L | \- | M | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Windows SMB | L | L | \- | \- | \- | L | \- | \- | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | IBM GPFS | \- | K | \- | \- | \- | L | \- | L | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Oracle ZFSSA | \- | K | \- | \- | \- | L | \- | K | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | CephFS | \- | M | \- | \- | \- | M | \- | \- | \- | \- | P | \- | 
+----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | Tegile | \- | M | M | M | M | \- | \- | M | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | NexentaStor4 | \- | N | N | N | N | N | \- | N | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | NexentaStor5 | \- | N | \- | N | N | N | \- | N | T | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | MapRFS | \- | N | \- | \- | \- | N | \- | O | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | QNAP | \- | O | Q | Q | O | Q | \- | O | \- | \- | P | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | INSPUR AS13000 | \- | R | \- | \- | R | \- | \- | R | \- | \- | R | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ | INSPUR InStorage | \- | T | \- | \- | \- | T | \- | \- | \- | \- | T | \- | +----------------------------------------+-----------+------------+--------+-------------+-------------------+--------------------+-----+----------------------------+--------------------+--------------------+--------------+--------------+ .. note:: The common capability reported by back ends differs from some names seen in the above table: * `DHSS` is reported as ``driver_handles_share_servers`` (See details for :term:`DHSS`) * `create share from snapshot` is reported as ``create_share_from_snapshot_support`` manila-10.0.0/doc/source/admin/emc_vnx_driver.rst0000664000175000017500000003174513656750227022003 0ustar zuulzuul00000000000000.. Copyright (c) 2014 EMC Corporation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. VNX Driver ========== EMC manila driver framework (EMCShareDriver) utilizes the EMC storage products to provide the shared filesystems to OpenStack. 
The EMC manila driver is a plugin based driver which is designed to use different plugins to manage different EMC storage products. VNX plugin is the plugin which manages the VNX to provide shared filesystems. EMC driver framework with VNX plugin is referred to as VNX driver in this document. This driver performs the operations on VNX by XMLAPI and the File command line. Each backend manages one Data Mover of VNX. Multiple manila backends need to be configured to manage multiple Data Movers. Requirements ------------ - VNX OE for File version 7.1 or higher. - VNX Unified, File only, or Gateway system with single storage backend. - The following licenses should be activated on VNX for File: * CIFS * NFS * SnapSure (for snapshot) * ReplicationV2 (for create share from snapshot) Supported Operations -------------------- The following operations will be supported on VNX array: - Create CIFS/NFS Share - Delete CIFS/NFS Share - Allow CIFS/NFS Share access * Only IP access type is supported for NFS. * Only user access type is supported for CIFS. - Deny CIFS/NFS Share access - Create snapshot - Delete snapshot - Create share from snapshot While the generic driver creates shared filesystems based on Cinder volumes attached to Nova VMs, the VNX driver performs similar operations using the Data Movers on the array. Pre-Configurations on VNX ------------------------- 1. Enable Unicode on Data mover VNX driver requires that the Unicode is enabled on Data Mover. CAUTION: After enabling Unicode, you cannot disable it. If there are some filesystems created before Unicode is enabled on the VNX, consult the storage administrator before enabling Unicode. To check the Unicode status on Data Mover, use the following VNX File command on VNX control station: server_cifs | head where: mover_name = Check the value of `I18N mode` field. UNICODE mode is shown as `I18N mode = UNICODE` To enable the Unicode for Data Mover: uc_config -on -mover where: mover_name = Refer to the document `Using International Character Sets on VNX for File` on [EMC support site](https://support.emc.com) for more information. 2. Enable CIFS service on Data Mover Ensure the CIFS service is enabled on the Data Mover which is going to be managed by VNX driver. To start the CIFS service, use the following command: server_setup -Protocol cifs -option start [=] where: = [=] = Note: If there is 1 GB of memory on the Data Mover, the default is 96 threads; however, if there is over 1 GB of memory, the default number of threads is 256. To check the CIFS service status, use this command: server_cifs | head where: = The command output will show the number of CIFS threads started. 3. NTP settings on Data Mover VNX driver only supports CIFS share creation with share network which has an Active Directory security-service associated. Creating CIFS share requires that the time on the Data Mover is in sync with the Active Directory domain so that the CIFS server can join the domain. Otherwise, the domain join will fail when creating share with this security service. There is a limitation that the time of the domains used by security-services even for different tenants and different share networks should be in sync. Time difference should be less than 10 minutes. It is recommended to set the NTP server to the same public NTP server on both the Data Mover and domains used in security services to ensure the time is in sync everywhere. 
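For example, the NTP check and configuration described in the following paragraphs can be run from the VNX control station as follows (a sketch assuming a hypothetical Data Mover named ``server_2`` and an NTP server at ``192.168.1.10``; the output shown is illustrative):

.. code-block:: console

    $ server_date server_2
    server_2 : Thu Mar 31 14:32:10 UTC 2016

    $ server_date server_2 timesvc start ntp 192.168.1.10
    server_2 : done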
Check the date and time on Data Mover: server_date where: mover_name = Set the NTP server for Data Mover: server_date timesvc start ntp [ ...] where: mover_name = host = Note: The host must be running the NTP protocol. Only 4 host entries are allowed. 4. Configure User Mapping on the Data Mover Before creating CIFS share using VNX driver, you must select a method of mapping Windows SIDs to UIDs and GIDs. EMC recommends using usermapper in single protocol (CIFS) environment which is enabled on VNX by default. To check usermapper status, use this command syntax: server_usermapper where: = If usermapper is not started, the following command can be used to start the usermapper: server_usermapper -enable where: = For multiple protocol environment, refer to `Configuring VNX User Mapping` on [EMC support site](https://support.emc.com) for additional information. 5. Network Connection In the current release, the share created by VNX driver uses the first network device (physical port on NIC) of Data Mover to access the network. Go to Unisphere to check the device list: Settings -> Network -> Settings for File (Unified system only) -> Device. Backend Configuration --------------------- The following parameters need to be configured in `/etc/manila/manila.conf` for the VNX driver: emc_share_backend = vnx emc_nas_server = emc_nas_password = emc_nas_login = emc_nas_server_container = emc_nas_pool_name = emc_interface_ports = share_driver = manila.share.drivers.dell_emc.driver.EMCShareDriver driver_handles_share_servers = True - `emc_share_backend` is the plugin name. Set it to `vnx` for the VNX driver. - `emc_nas_server` is the control station IP address of the VNX system to be managed. - `emc_nas_password` and `emc_nas_login` fields are used to provide credentials to the VNX system. Only local users of VNX File is supported. - `emc_nas_server_container` field is the name of the Data Mover to serve the share service. - `emc_nas_pool_name` is the pool name user wants to create volume from. The pools can be created using Unisphere for VNX. - `emc_interface_ports` is comma separated list specifying the ports(devices) of Data Mover that can be used for share server interface. Members of the list can be Unix-style glob expressions (supports Unix shell-style wildcards). This list is optional. In the absence of this option, any of the ports on the Data Mover can be used. - `driver_handles_share_servers` must be True, the driver will choose a port from port list which configured in emc_interface_ports. Restart of :term:`manila-share` service is needed for the configuration changes to take effect. IPv6 support ------------ IPv6 support for VNX driver is introduced in Queens release. The feature is divided into two parts: 1. The driver is able to manage share or snapshot in the Neutron IPv6 network. 2. The driver is able to connect VNX management interface using its IPv6 address. Pre-Configurations for IPv6 support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following parameters need to be configured in `/etc/manila/manila.conf` for the VNX driver: network_plugin_ipv6_enabled = True - `network_plugin_ipv6_enabled` indicates IPv6 is enabled. If you want to connect VNX using IPv6 address, you should configure IPv6 address by `nas_cs` command for VNX and specify the address in `/etc/manila/manila.conf`: emc_nas_server = Snapshot support ---------------- In the Mitaka and Newton release of OpenStack, Snapshot support is enabled by default for a newly created share type. 
Starting with the Ocata release, the snapshot_support extra spec must be set to True in order to allow snapshots for a share type. If the 'snapshot_support' extra_spec is omitted or if it is set to False, users would not be able to create snapshots on shares of this share type. The feature is divided into two parts: 1. The driver is able to create/delete snapshot of share. 2. The driver is able to create share from snapshot. Pre-Configurations for Snapshot support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following extra specifications need to be configured with share type. - snapshot_support = True - create_share_from_snapshot_support = True For new share type, these extra specifications can be set directly when creating share type: .. code-block:: console manila type-create --snapshot_support True --create_share_from_snapshot_support True ${share_type_name} True Or you can update already existing share type with command: .. code-block:: console manila type-key ${share_type_name} set snapshot_support=True manila type-key ${share_type_name} set create_share_from_snapshot_support=True To snapshot a share and create share from the snapshot ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Firstly, you need create a share from share type that has extra specifications(snapshot_support=True, create_share_from_snapshot_support=True). Then snapshot the share with command: .. code-block:: console manila snapshot-create ${source_share_name} --name ${target_snapshot_name} --description " " After creating the snapshot from previous step, you can create share from that snapshot. Use command: .. code-block:: console manila create nfs 1 --name ${target_share_name} --metadata source=snapshot --description " " --snapshot-id ${source_snapshot_id} Restrictions ------------ The VNX driver has the following restrictions: - Only IP access type is supported for NFS. - Only user access type is supported for CIFS. - Only FLAT network and VLAN network are supported. - VLAN network is supported with limitations. The Neutron subnets in different VLANs that are used to create share networks cannot have overlapped address spaces. Otherwise, VNX may have a problem to communicate with the hosts in the VLANs. To create shares for different VLANs with same subnet address, use different Data Movers. - The 'Active Directory' security service is the only supported security service type and it is required to create CIFS shares. - Only one security service can be configured for each share network. - Active Directory domain name of the 'active_directory' security service should be unique even for different tenants. - The time on Data Mover and the Active Directory domains used in security services should be in sync (time difference should be less than 10 minutes). It is recommended to use same NTP server on both the Data Mover and Active Directory domains. - On VNX the snapshot is stored in the SavVols. VNX system allows the space used by SavVol to be created and extended until the sum of the space consumed by all SavVols on the system exceeds the default 20% of the total space available on the system. If the 20% threshold value is reached, an alert will be generated on VNX. Continuing to create snapshot will cause the old snapshot to be inactivated (and the snapshot data to be abandoned). The limit percentage value can be changed manually by storage administrator based on the storage needs. Administrator is recommended to configure the notification on the SavVol usage. 
Refer to the `Using VNX SnapSure` document on [EMC support site](https://support.emc.com) for more information. - VNX has limitations on the overall numbers of Virtual Data Movers, filesystems, shares, checkpoints, and so on. A Virtual Data Mover (VDM) is created by the VNX driver on the VNX to serve as the manila share server. Similarly, a filesystem is created, mounted, and exported from the VDM over the CIFS or NFS protocol to serve as the manila share. The VNX checkpoint serves as the manila share snapshot. Refer to the `NAS Support Matrix` document on [EMC support site](https://support.emc.com) for the limitations and configure the quotas accordingly. The :mod:`manila.share.drivers.dell_emc.driver` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.dell_emc.driver :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.share.drivers.dell_emc.plugins.vnx.connection` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.dell_emc.plugins.vnx.connection :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/shared-file-systems-troubleshoot.rst0000664000175000017500000000605713656750227025406 0ustar zuulzuul00000000000000.. _shared_file_systems_troubleshoot: ======================================== Troubleshoot Shared File Systems service ======================================== Failures in Shared File Systems service during a share creation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Problem ------- New shares can enter ``error`` state during the creation process. Solution -------- #. Make sure that the share services are running in debug mode. If debug mode is not set, the logs will not give you any hints about how to fix the issue. #. Find out which share service holds the specified share. To do that, run the command :command:`manila show <share>` and find the share host in the output. The host uniquely identifies which share service holds the broken share. #. Look through the logs of this share service. Usually, they can be found at ``/etc/var/log/manila-share.log``. This log should contain a traceback with extra information to help you find the origin of the issue. No valid host was found ~~~~~~~~~~~~~~~~~~~~~~~ Problem ------- If a share type contains invalid extra specs, the scheduler will not be able to locate a valid host for the shares. Solution -------- To diagnose this issue, make sure that the scheduler service is running in debug mode. Try to create a new share and look for the message ``Failed to schedule create_share: No valid host was found.`` in ``/etc/var/log/manila-scheduler.log``. To solve this issue, look carefully through the list of extra specs in the share type and the list of capabilities reported by the share services. Make sure that the extra specs are specified correctly. Created share is unreachable ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Problem ------- By default, a new share does not have any active access rules. Solution -------- To provide access to a new share, you need to create an appropriate access rule with a value that defines who is granted access. Service becomes unavailable after upgrade ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Problem ------- After upgrading the Shared File Systems service from version v1 to version v2.x, you must update the service endpoint in the OpenStack Identity service. Otherwise, the service may become unavailable. Solution -------- #. To get the service type related to the Shared File Systems service, run: ..
code-block:: console # openstack endpoint list # openstack endpoint show You will get the endpoints expected from running the Shared File Systems service. #. Make sure that these endpoints are updated. Otherwise, delete the outdated endpoints and create new ones. Failures during management of internal resources ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Problem ------- The Shared File System service manages internal resources effectively. Administrators may need to manually adjust internal resources to handle failures. Solution -------- Some drivers in the Shared File Systems service can create service entities, like servers and networks. If it is necessary, you can log in to project ``service`` and take manual control over it. manila-10.0.0/doc/source/admin/tegile_driver.rst0000664000175000017500000001613513656750227021611 0ustar zuulzuul00000000000000.. Copyright (c) 2016 Tegile Systems Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Tegile Driver ============= The Tegile Manila driver uses Tegile IntelliFlash Arrays to provide shared filesystems to OpenStack. The Tegile Driver interfaces with a Tegile Array via the REST API. Requirements ------------ - Tegile IntelliFlash version 3.5.1 - For using CIFS, Active Directory must be configured in the Tegile Array. Supported Operations -------------------- The following operations are supported on a Tegile Array: * Create CIFS/NFS Share * Delete CIFS/NFS Share * Allow CIFS/NFS Share access * Only IP access type is supported for NFS * USER access type is supported for NFS and CIFS * RW and RO access supported * Deny CIFS/NFS Share access * IP access type is supported for NFS * USER access type is supported for NFS and CIFS * Create snapshot * Delete snapshot * Extend share * Shrink share * Create share from snapshot Backend Configuration --------------------- The following parameters need to be configured in the [DEFAULT] section of */etc/manila/manila.conf*: +-----------------------------------------------------------------------------------------------------------------------------------+ | [DEFAULT] | +============================+======================================================================================================+ | **Option** | **Description** | +----------------------------+-----------+------------------------------------------------------------------------------------------+ | enabled_share_backends | Name of the section on manila.conf used to specify a backend. | | | E.g. *enabled_share_backends = tegileNAS* | +----------------------------+------------------------------------------------------------------------------------------------------+ | enabled_share_protocols | Specify a list of protocols to be allowed for share creation. For Tegile driver this can be: | | | *NFS* or *CIFS* or *NFS, CIFS*. 
| +----------------------------+------------------------------------------------------------------------------------------------------+ The following parameters need to be configured in the [backend] section of */etc/manila/manila.conf*: +-------------------------------------------------------------------------------------------------------------------------------------+ | [tegileNAS] | +===============================+=====================================================================================================+ | **Option** | **Description** | +-------------------------------+-----------------------------------------------------------------------------------------------------+ | share_backend_name | A name for the backend. | +-------------------------------+-----------------------------------------------------------------------------------------------------+ | share_driver | Python module path. For Tegile driver this must be: | | | *manila.share.drivers.tegile.tegile.TegileShareDriver*. | +-------------------------------+-----------------------------------------------------------------------------------------------------+ | driver_handles_share_servers| DHSS, Driver working mode. For Tegile driver **this must be**: | | | *False*. | +-------------------------------+-----------------------------------------------------------------------------------------------------+ | tegile_nas_server | Tegile array IP to connect from the Manila node. | +-------------------------------+-----------------------------------------------------------------------------------------------------+ | tegile_nas_login | This field is used to provide username credential to Tegile array. | +-------------------------------+-----------------------------------------------------------------------------------------------------+ | tegile_nas_password | This field is used to provide password credential to Tegile array. | +-------------------------------+-----------------------------------------------------------------------------------------------------+ | tegile_default_project | This field can be used to specify the default project in Tegile array where shares are created. | | | This field is optional. | +-------------------------------+-----------------------------------------------------------------------------------------------------+ Below is an example of a valid configuration of Tegile driver: | ``[DEFAULT]`` | ``enabled_share_backends = tegileNAS`` | ``enabled_share_protocols = NFS,CIFS`` | ``[tegileNAS]`` | ``driver_handles_share_servers = False`` | ``share_backend_name = tegileNAS`` | ``share_driver = manila.share.drivers.tegile.tegile.TegileShareDriver`` | ``tegile_nas_server = 10.12.14.16`` | ``tegile_nas_login = admin`` | ``tegile_nas_password = password`` | ``tegile_default_project = financeshares`` Restart of :term:`manila-share` service is needed for the configuration changes to take effect. Restrictions ------------ The Tegile driver has the following restrictions: - IP access type is supported only for NFS. - Only FLAT network is supported. The :mod:`manila.share.drivers.tegile.tegile` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.tegile.tegile :noindex: :members: :undoc-members: :show-inheritance: :exclude-members: TegileAPIExecutor, debugger manila-10.0.0/doc/source/admin/shared-file-systems-share-replication.rst0000664000175000017500000007607113656750227026271 0ustar zuulzuul00000000000000.. 
_shared_file_systems_share_replication: ================= Share replication ================= Replication of data has a number of use cases in the cloud. One use case is High Availability of the data in a shared file system, used for example, to support a production database. Another use case is ensuring Data Protection; i.e being prepared for a disaster by having a replication location that will be ready to back up your primary data source. The Shared File System service supports user facing APIs that allow users to create shares that support replication, add and remove share replicas and manage their snapshots and access rules. Three replication types are currently supported and they vary in the semantics associated with the primary share and the secondary copies. .. important:: **Share replication** is an **experimental** Shared File Systems API in the Mitaka release. Contributors can change or remove the experimental part of the Shared File Systems API in further releases without maintaining backward compatibility. Experimental APIs have an ``X-OpenStack-Manila-API-Experimental: true`` header in their HTTP requests. Replication types supported ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Before using share replication, make sure the Shared File System driver that you are running supports this feature. You can check it in the ``manila-scheduler`` service reports. The ``replication_type`` capability reported can have one of the following values: writable The driver supports creating ``writable`` share replicas. All share replicas can be accorded read/write access and would be synchronously mirrored. readable The driver supports creating ``read-only`` share replicas. All secondary share replicas can be accorded read access. Only the primary (or ``active`` share replica) can be written into. dr The driver supports creating ``dr`` (abbreviated from Disaster Recovery) share replicas. A secondary share replica is inaccessible until after a ``promotion``. None The driver does not support Share Replication. .. note:: The term ``active`` share replica refers to the ``primary`` share. In ``writable`` style of replication, all share replicas are ``active``, and there could be no distinction of a ``primary`` share. In ``readable`` and ``dr`` styles of replication, a ``secondary`` share replica may be referred to as ``passive``, ``non-active`` or simply, ``replica``. Configuration ~~~~~~~~~~~~~ Two new configuration options have been introduced to support Share Replication. replica_state_update_interval Specify this option in the ``DEFAULT`` section of your ``manila.conf``. The Shared File Systems service requests periodic update of the `replica_state` of all ``non-active`` share replicas. The update occurs with respect to an interval corresponding to this option. If it is not specified, it defaults to 300 seconds. replication_domain Specify this option in the backend stanza when using a multi-backend style configuration. The value can be any ASCII string. Two backends that can replicate between each other would have the same ``replication_domain``. This comes from the premise that the Shared File Systems service expects Share Replication to be performed between symmetric backends. This option is *required* for using the Share Replication feature. Health of a share replica ~~~~~~~~~~~~~~~~~~~~~~~~~ Apart from the ``status`` attribute, share replicas have the ``replica_state`` attribute to denote the state of data replication on the storage backend. 
The ``primary`` share replica will have it's `replica_state` attribute set to `active`. The ``secondary`` share replicas may have one of the following as their ``replica_state``: in_sync The share replica is up to date with the ``active`` share replica (possibly within a backend-specific ``recovery point objective``). out_of_sync The share replica is out of date (all new share replicas start out in this ``replica_state``). error When the scheduler fails to schedule this share replica or some potentially irrecoverable error occurred with regard to updating data for this replica. Promotion or failover ~~~~~~~~~~~~~~~~~~~~~ For ``readable`` and ``dr`` types of replication, we refer to the task of switching a `non-active` share replica with the ``active`` replica as `promotion`. For the ``writable`` style of replication, promotion does not make sense since all share replicas are ``active`` (or writable) at all times. The `status` attribute of the non-active replica being promoted will be set to ``replication_change`` during its promotion. This has been classified as a ``busy`` state and thus API interactions with the share are restricted while one of its share replicas is in this state. Share replication workflows ~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following examples have been implemented with the ZFSonLinux driver that is a reference implementation in the Shared File Systems service. It operates in ``driver_handles_share_servers=False`` mode and supports the ``readable`` type of replication. In the example, we assume a configuration of two Availability Zones [1]_, called `availability_zone_1` and `availability_zone_2`. Multiple availability zones are not necessary to use the replication feature. However, the use of an availability zone as a ``failure domain`` is encouraged. Pay attention to the network configuration for the ZFS driver. Here, we assume a configuration of ``zfs_service_ip`` and ``zfs_share_export_ip`` from two separate networks. The service network is reachable from the host where the ``manila-share`` service is running. The share export IP is from a network that allows user access. See `Configuring the ZFSonLinux driver `_ for information on how to set up the ZFSonLinux driver. Creating a share that supports replication ------------------------------------------ Create a new share type and specify the `replication_type` as an extra-spec within the share-type being used. Use the :command:`manila type-create` command to create a new share type. Specify the name and the value for the extra-spec ``driver_handles_share_servers``. .. code-block:: console $ manila type-create readable_type_replication False +----------------------+--------------------------------------+ | Property | Value | +----------------------+--------------------------------------+ | required_extra_specs | driver_handles_share_servers : False | | Name | readable_type_replication | | Visibility | public | | is_default | - | | ID | 3b3ee3f7-6e43-4aa1-859d-0b0511c43074 | | optional_extra_specs | snapshot_support : True | +----------------------+--------------------------------------+ Use the :command:`manila type-key` command to set an extra-spec to the share type. .. code-block:: console $ manila type-key readable_type_replication set replication_type=readable .. note:: This command has no output. To verify the extra-spec, use the :command:`manila extra-specs-list` command and specify the share type's name or ID as a parameter. 
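For instance, listing the extra specs right after the update should show the new spec on the share type created above (a sketch; the output below is illustrative and trimmed to the relevant share type, and the exact column layout may differ by client version):

.. code-block:: console

    $ manila extra-specs-list
    +--------------------------------------+---------------------------+---------------------------------------+
    | ID                                   | Name                      | all_extra_specs                       |
    +--------------------------------------+---------------------------+---------------------------------------+
    | 3b3ee3f7-6e43-4aa1-859d-0b0511c43074 | readable_type_replication | driver_handles_share_servers : False  |
    |                                      |                           | replication_type : readable           |
    |                                      |                           | snapshot_support : True               |
    +--------------------------------------+---------------------------+---------------------------------------+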
Create a share with the share type Use the :command:`manila create` command to create a share. Specify the share protocol, size and the availability zone. .. code-block:: console $ manila create NFS 1 --share_type readable_type_replication --name my_share --description "This share will have replicas" --az availability_zone_1 +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | creating | | share_type_name | readable_type_replication | | description | This share will have replicas | | availability_zone | availability_zone_1 | | share_network_id | None | | share_server_id | None | | share_group_id | None | | host | | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | e496ed61-8f2e-436b-b299-32c3e90991cc | | size | 1 | | name | my_share | | share_type | 3b3ee3f7-6e43-4aa1-859d-0b0511c43074 | | has_replicas | False | | replication_type | readable | | created_at | 2016-03-29T20:22:18.000000 | | share_proto | NFS | | project_id | 48a5ca76ac69405e99dc1c13c5195186 | | metadata | {} | +-----------------------------+--------------------------------------+ Use the :command:`manila show` command to retrieve details of the share. Specify the share ID or name as a parameter. .. code-block:: console $ manila show my_share +-----------------------------+--------------------------------------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------------------------------------+ | status | available | | share_type_name | readable_type_replication | | description | This share will have replicas | | availability_zone | availability_zone_1 | | share_network_id | None | | export_locations | | | | path = | | |10.32.62.26:/alpha/manila_share_38efc042_50c2_4825_a6d8_cba2a8277b28| | | preferred = False | | | is_admin_only = False | | | id = e1d754b5-ec06-42d2-afff-3e98c0013faf | | | share_instance_id = 38efc042-50c2-4825-a6d8-cba2a8277b28 | | | path = | | |172.21.0.23:/alpha/manila_share_38efc042_50c2_4825_a6d8_cba2a8277b28| | | preferred = False | | | is_admin_only = True | | | id = 6f843ecd-a7ea-4939-86de-e1e01d9e8672 | | | share_instance_id = 38efc042-50c2-4825-a6d8-cba2a8277b28 | | share_server_id | None | | share_group_id | None | | host | openstack4@zfsonlinux_1#alpha | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | e496ed61-8f2e-436b-b299-32c3e90991cc | | size | 1 | | name | my_share | | share_type | 3b3ee3f7-6e43-4aa1-859d-0b0511c43074 | | has_replicas | False | | replication_type | readable | | created_at | 2016-03-29T20:22:18.000000 | | share_proto | NFS | | project_id | 48a5ca76ac69405e99dc1c13c5195186 | | metadata | {} | +-----------------------------+--------------------------------------------------------------------+ .. note:: When you create a share that supports replication, an ``active`` replica is created for you. You can verify this with the :command:`manila share-replica-list` command. Creating and promoting share replicas ------------------------------------- Create a share replica Use the :command:`manila share-replica-create` command to create a share replica. Specify the share ID or name as a parameter. You may optionally provide the `availability_zone` and `share_network_id`. 
In the example below, `share_network_id` is not used since the ZFSonLinux driver does not support it. .. code-block:: console $ manila share-replica-create my_share --az availability_zone_2 +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | status | creating | | share_id | e496ed61-8f2e-436b-b299-32c3e90991cc | | availability_zone | availability_zone_2 | | created_at | 2016-03-29T20:24:53.148992 | | updated_at | None | | share_network_id | None | | share_server_id | None | | host | | | replica_state | None | | id | 78a5ef96-6c36-42e0-b50b-44efe7c1807e | +-------------------+--------------------------------------+ See details of the newly created share replica Use the :command:`manila share-replica-show` command to see details of the newly created share replica. Specify the share replica's ID as a parameter. .. code-block:: console $ manila share-replica-show 78a5ef96-6c36-42e0-b50b-44efe7c1807e +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | status | available | | share_id | e496ed61-8f2e-436b-b299-32c3e90991cc | | availability_zone | availability_zone_2 | | created_at | 2016-03-29T20:24:53.000000 | | updated_at | 2016-03-29T20:24:58.000000 | | share_network_id | None | | share_server_id | None | | host | openstack4@zfsonlinux_2#beta | | replica_state | in_sync | | id | 78a5ef96-6c36-42e0-b50b-44efe7c1807e | +-------------------+--------------------------------------+ See all replicas of the share Use the :command:`manila share-replica-list` command to see all the replicas of the share. Specify the share ID or name as an optional parameter. .. code-block:: console $ manila share-replica-list --share-id my_share +--------------------------------------+-----------+---------------+--------------------------------------+-------------------------------+---------------------+----------------------------+ | ID | Status | Replica State | Share ID | Host | Availability Zone | Updated At | +--------------------------------------+-----------+---------------+--------------------------------------+-------------------------------+---------------------+----------------------------+ | 38efc042-50c2-4825-a6d8-cba2a8277b28 | available | active | e496ed61-8f2e-436b-b299-32c3e90991cc | openstack4@zfsonlinux_1#alpha | availability_zone_1 | 2016-03-29T20:22:19.000000 | | 78a5ef96-6c36-42e0-b50b-44efe7c1807e | available | in_sync | e496ed61-8f2e-436b-b299-32c3e90991cc | openstack4@zfsonlinux_2#beta | availability_zone_2 | 2016-03-29T20:24:58.000000 | +--------------------------------------+-----------+---------------+--------------------------------------+-------------------------------+---------------------+----------------------------+ Promote the secondary share replica to be the new active replica Use the :command:`manila share-replica-promote` command to promote a non-active share replica to become the ``active`` replica. Specify the non-active replica's ID as a parameter. .. code-block:: console $ manila share-replica-promote 78a5ef96-6c36-42e0-b50b-44efe7c1807e .. note:: This command has no output. The promotion may take time. During the promotion, the ``replica_state`` attribute of the share replica being promoted will be set to ``replication_change``. .. 
code-block:: console $ manila share-replica-list --share-id my_share +--------------------------------------+-----------+--------------------+--------------------------------------+-------------------------------+---------------------+----------------------------+ | ID | Status | Replica State | Share ID | Host | Availability Zone | Updated At | +--------------------------------------+-----------+--------------------+--------------------------------------+-------------------------------+---------------------+----------------------------+ | 38efc042-50c2-4825-a6d8-cba2a8277b28 | available | active | e496ed61-8f2e-436b-b299-32c3e90991cc | openstack4@zfsonlinux_1#alpha | availability_zone_1 | 2016-03-29T20:32:19.000000 | | 78a5ef96-6c36-42e0-b50b-44efe7c1807e | available | replication_change | e496ed61-8f2e-436b-b299-32c3e90991cc | openstack4@zfsonlinux_2#beta | availability_zone_2 | 2016-03-29T20:32:19.000000 | +--------------------------------------+-----------+--------------------+--------------------------------------+-------------------------------+---------------------+----------------------------+ Once the promotion is complete, the ``replica_state`` will be set to ``active``. .. code-block:: console $ manila share-replica-list --share-id my_share +--------------------------------------+-----------+---------------+--------------------------------------+-------------------------------+---------------------+----------------------------+ | ID | Status | Replica State | Share ID | Host | Availability Zone | Updated At | +--------------------------------------+-----------+---------------+--------------------------------------+-------------------------------+---------------------+----------------------------+ | 38efc042-50c2-4825-a6d8-cba2a8277b28 | available | in_sync | e496ed61-8f2e-436b-b299-32c3e90991cc | openstack4@zfsonlinux_1#alpha | availability_zone_1 | 2016-03-29T20:32:19.000000 | | 78a5ef96-6c36-42e0-b50b-44efe7c1807e | available | active | e496ed61-8f2e-436b-b299-32c3e90991cc | openstack4@zfsonlinux_2#beta | availability_zone_2 | 2016-03-29T20:32:19.000000 | +--------------------------------------+-----------+---------------+--------------------------------------+-------------------------------+---------------------+----------------------------+ Access rules ------------ Create an IP access rule for the share Use the :command:`manila access-allow` command to add an access rule. Specify the share ID or name, protocol and the target as parameters. .. code-block:: console $ manila access-allow my_share ip 0.0.0.0/0 --access-level rw +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | share_id | e496ed61-8f2e-436b-b299-32c3e90991cc | | access_type | ip | | access_to | 0.0.0.0/0 | | access_level | rw | | state | new | | id | 8b339cdc-c1e0-448f-bf6d-f068ee6e8f45 | +--------------+--------------------------------------+ .. note:: Access rules are not meant to be different across the replicas of the share. However, as per the type of replication, drivers may choose to modify the access level prescribed. In the above example, even though read/write access was requested for the share, the driver will provide read-only access to the non-active replica to the same target, because of the semantics of the replication type: ``readable``. However, the target will have read/write access to the (currently) non-active replica when it is promoted to become the ``active`` replica. 
The :command:`manila access-deny` command can be used to remove a previously applied access rule. List the export locations of the share Use the :command:`manila share-export-locations-list` command to list the export locations of a share. .. code-block:: console $ manila share-export-location-list my_share +--------------------------------------+---------------------------------------------------------------------------+-----------+ | ID | Path | Preferred | +--------------------------------------+---------------------------------------------------------------------------+-----------+ | 3ed3fbf5-2fa1-4dc0-8440-a0af72398cb6 | 10.32.62.21:/beta/subdir/manila_share_78a5ef96_6c36_42e0_b50b_44efe7c1807e| False | | 6f843ecd-a7ea-4939-86de-e1e01d9e8672 | 172.21.0.23:/alpha/manila_share_38efc042_50c2_4825_a6d8_cba2a8277b28 | False | | e1d754b5-ec06-42d2-afff-3e98c0013faf | 10.32.62.26:/alpha/manila_share_38efc042_50c2_4825_a6d8_cba2a8277b28 | False | | f3c5585f-c2f7-4264-91a7-a4a1e754e686 | 172.21.0.29:/beta/subdir/manila_share_78a5ef96_6c36_42e0_b50b_44efe7c1807e| False | +--------------------------------------+---------------------------------------------------------------------------+-----------+ Identify the export location corresponding to the share replica on the user accessible network and you may mount it on the target node. .. note:: As an administrator, you can list the export locations for a particular share replica by using the :command:`manila share-instance-export-location-list` command and specifying the share replica's ID as a parameter. Snapshots --------- Create a snapshot of the share Use the :command:`manila snapshot-create` command to create a snapshot of the share. Specify the share ID or name as a parameter. .. code-block:: console $ manila snapshot-create my_share --name "my_snapshot" +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | status | creating | | share_id | e496ed61-8f2e-436b-b299-32c3e90991cc | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | description | None | | created_at | 2016-03-29T21:14:03.000000 | | share_proto | NFS | | provider_location | None | | id | 06cdccaf-93a0-4e57-9a39-79fb1929c649 | | project_id | cadd7139bc3148b8973df097c0911016 | | size | 1 | | share_size | 1 | | name | my_snapshot | +-------------------+--------------------------------------+ Show the details of the snapshot Use the :command:`manila snapshot-show` to view details of a snapshot. Specify the snapshot ID or name as a parameter. .. code-block:: console $ manila snapshot-show my_snapshot +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | status | available | | share_id | e496ed61-8f2e-436b-b299-32c3e90991cc | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | description | None | | created_at | 2016-03-29T21:14:03.000000 | | share_proto | NFS | | provider_location | None | | id | 06cdccaf-93a0-4e57-9a39-79fb1929c649 | | project_id | cadd7139bc3148b8973df097c0911016 | | size | 1 | | share_size | 1 | | name | my_snapshot | +-------------------+--------------------------------------+ .. note:: The ``status`` attribute of a snapshot will transition from ``creating`` to ``available`` only when it is present on all the share replicas that have their ``replica_state`` attribute set to ``active`` or ``in_sync``. 
Likewise, the ``replica_state`` attribute of a share replica will transition from ``out_of_sync`` to ``in_sync`` only when all ``available`` snapshots are present on it. Planned failovers ----------------- As an administrator, you can use the :command:`manila share-replica-resync` command to attempt to sync data between ``active`` and ``non-active`` share replicas of a share before promotion. This will ensure that share replicas have the most up-to-date data and their relationships can be safely switched. .. code-block:: console $ manila share-replica-resync 38efc042-50c2-4825-a6d8-cba2a8277b28 .. note:: This command has no output. Updating attributes ------------------- If an error occurs while updating data or replication relationships (during a ``promotion``), the Shared File Systems service may not be able to determine the consistency or health of a share replica. It may require administrator intervention to make any fixes on the storage backend as necessary. In such a situation, state correction within the Shared File Systems service is possible. As an administrator, you can: Reset the ``status`` attribute of a share replica Use the :command:`manila share-replica-reset-state` command to reset the ``status`` attribute. Specify the share replica's ID as a parameter and use the ``--state`` option to specify the state intended. .. code-block:: console $ manila share-replica-reset-state 38efc042-50c2-4825-a6d8-cba2a8277b28 --state=available .. note:: This command has no output. Reset the ``replica_state`` attribute Use the :command:`manila share-replica-reset-replica-state` command to reset the ``replica_state`` attribute. Specify the share replica's ID and use the ``--state`` option to specify the state intended. .. code-block:: console $ manila share-replica-reset-replica-state 38efc042-50c2-4825-a6d8-cba2a8277b28 --state=out_of_sync .. note:: This command has no output. Force delete a specified share replica in any state Use the :command:`manila share-replica-delete` command with the '--force' key to remove the share replica, regardless of the state it is in. .. code-block:: console $ manila share-replica-show 9513de5d-0384-4528-89fb-957dd9b57680 +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | status | error | | share_id | e496ed61-8f2e-436b-b299-32c3e90991cc | | availability_zone | availability_zone_1 | | created_at | 2016-03-30T01:32:47.000000 | | updated_at | 2016-03-30T01:34:25.000000 | | share_network_id | None | | share_server_id | None | | host | openstack4@zfsonlinux_1#alpha | | replica_state | out_of_sync | | id | 38efc042-50c2-4825-a6d8-cba2a8277b28 | +-------------------+--------------------------------------+ $ manila share-replica-delete --force 38efc042-50c2-4825-a6d8-cba2a8277b28 .. note:: This command has no output. Use the ``policy.json`` file to grant permissions for these actions to other roles. Deleting share replicas ----------------------- Use the :command:`manila share-replica-delete` command with the share replica's ID to delete a share replica. .. code-block:: console $ manila share-replica-delete 38efc042-50c2-4825-a6d8-cba2a8277b28 .. note:: This command has no output. .. note:: You cannot delete the last ``active`` replica with this command. You should use the :command:`manila delete` command to remove the share. .. 
[1] When running in a multi-backend configuration, until the Stein release, deployers could only configure one Availability Zone per manila configuration file. This is achieved with the option ``storage_availability_zone`` defined under the ``[DEFAULT]`` section. Beyond the Stein release, the option ``backend_availability_zone`` can be specified in each back end stanza. The value of this configuration option will override any configuration of the ``storage_availability_zone`` from the ``[DEFAULT]`` section. manila-10.0.0/doc/source/admin/nexentastor5_driver.rst0000664000175000017500000000635713656750227023004 0ustar zuulzuul00000000000000.. Copyright 2019 Nexenta by DDN, Inc. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. NexentaStor5 Driver for OpenStack Manila ======================================== The `NexentaStor5 `__ Manila driver provides NFS shared file systems to OpenStack. Requirements ------------ - The NexentaStor 5.1 or newer Supported shared filesystems and operations ------------------------------------------- This driver supports NFS shares. The following operations are supported: - Create NFS Share - Delete NFS Share - Allow NFS Share access * Only IP access type is supported for NFS (ro/rw). - Deny NFS Share access - Manage a share. - Unmanage a share. - Extend a share. - Shrink a share. - Create snapshot - Revert to snapshot - Delete snapshot - Create share from snapshot Backend Configuration --------------------- The following parameters need to be configured in the manila configuration file for the NexentaStor5 driver: - `share_backend_name` = - `share_driver` = manila.share.drivers.nexenta.ns5.nexenta_nas.NexentaNasDriver - `driver_handles_share_servers` = False - `nexenta_nas_host` = - `nexenta_user` = - `nexenta_password` = - `nexenta_pool` = - `nexenta_rest_addresses` = - `nexenta_folder` = - `nexenta_nfs` = True Share Types ----------- When creating a share, a share type can be specified to determine where and how the share will be created. If a share type is not specified, the `default_share_type` set in the manila configuration file is used. Manila requires that the share type includes the `driver_handles_share_servers` extra-spec. This ensures that the share will be created on a backend that supports the requested driver_handles_share_servers (share networks) capability. For the NexentaStor driver, this extra-spec's value must be set to False. Restrictions ------------ - Only IP share access control is allowed for NFS shares. Back-end configuration example ------------------------------ .. 
code-block:: ini [DEFAULT] enabled_share_backends = NexentaStor5 [NexentaStor5] share_backend_name = NexentaStor5 driver_handles_share_servers = False nexenta_folder = manila share_driver = manila.share.drivers.nexenta.ns5.nexenta_nas.NexentaNasDriver nexenta_rest_addresses = 10.3.1.1,10.3.1.2 nexenta_nas_host = 10.3.1.10 nexenta_rest_port = 8443 nexenta_pool = pool1 nexenta_nfs = True nexenta_user = admin nexenta_password = secret_password nexenta_thin_provisioning = True manila-10.0.0/doc/source/admin/netapp_cluster_mode_driver.rst0000664000175000017500000000723713656750227024377 0ustar zuulzuul00000000000000.. Copyright 2014 Mirantis Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. NetApp Clustered Data ONTAP =========================== The Shared File Systems service can be configured to use NetApp Clustered Data ONTAP (cDOT) version 8.2 and later. Supported Operations -------------------- The following operations are supported on Clustered Data ONTAP: - Create CIFS/NFS Share - Delete CIFS/NFS Share - Allow NFS Share access * IP access type is supported for NFS. * Read/write and read-only access are supported for NFS. - Allow CIFS Share access * User access type is supported for CIFS. * Read/write access is supported for CIFS. - Deny CIFS/NFS Share access - Create snapshot - Delete snapshot - Create share from snapshot - Extend share - Shrink share - Manage share - Unmanage share - Create consistency group - Delete consistency group - Create consistency group from CG snapshot - Create CG snapshot - Delete CG snapshot - Create a replica (DHSS=False) - Promote a replica (DHSS=False) - Delete a replica (DHSS=False) - Update a replica (DHSS=False) - Create a replicated snapshot (DHSS=False) - Delete a replicated snapshot (DHSS=False) - Update a replicated snapshot (DHSS=False) .. note:: :term:`DHSS` is abbreviated from `driver_handles_share_servers`. Supported Operating Modes ------------------------- The cDOT driver supports both 'driver_handles_share_servers' (:term:`DHSS`) modes. If 'driver_handles_share_servers' is True, the driver will create a storage virtual machine (SVM, previously known as vServers) for each unique tenant network and provision each of a tenant's shares into that SVM. This requires the user to specify both a share network as well as a share type with the DHSS extra spec set to True when creating shares. If 'driver_handles_share_servers' is False, the manila admin must configure a single SVM, along with associated LIFs and protocol services, that will be used for provisioning shares. The SVM is specified in the manila config file. Network approach ---------------- L3 connectivity between the storage cluster and manila host must exist, and VLAN segmentation may be configured. All of manila's network plug-ins are supported with the cDOT driver. 
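For reference, a minimal back-end stanza for the cDOT driver operating with
``driver_handles_share_servers = False`` might look like the following
sketch. The option values are placeholders and the exact set of options may
vary by release, so consult the Configuration Reference for the
authoritative list:

.. code-block:: ini

   [cdotNFS]
   # Illustrative values only; adjust for your environment.
   share_backend_name = cdotNFS
   share_driver = manila.share.drivers.netapp.common.NetAppDriver
   driver_handles_share_servers = False
   netapp_storage_family = ontap_cluster
   netapp_server_hostname = cluster-mgmt.example.com
   netapp_server_port = 443
   netapp_transport_type = https
   netapp_login = admin
   netapp_password = secret
   netapp_vserver = svm_manila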
Supported shared filesystems
----------------------------

- NFS (access by IP address or subnet)
- CIFS (authentication by user)

Required licenses
-----------------

- NFS
- CIFS
- FlexClone

Known restrictions
------------------

- For CIFS shares, an external Active Directory (AD) service is required. The
  AD details should be provided via a manila security service that is
  attached to the specified share network.
- Share access rules for CIFS shares may be created only for existing users
  in Active Directory.
- The time on external security services and storage must be synchronized.
  The maximum allowed clock skew is 5 minutes.
- cDOT supports only flat and VLAN network segmentation types.

The :mod:`manila.share.drivers.netapp.common.py` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. automodule:: manila.share.drivers.netapp.common
    :noindex:
    :members:
    :undoc-members:
    :show-inheritance:
manila-10.0.0/doc/source/admin/index.rst0000664000175000017500000000570213656750227020072 0ustar zuulzuul00000000000000.. _shared_file_systems_intro:

===========
Admin Guide
===========

The Shared File Systems service provides a set of services for managing
shared file systems in a multi-project cloud environment. The service
resembles the block-based storage management provided by the OpenStack Block
Storage service project. With the Shared File Systems service, you can create
a remote file system, mount the file system on your instances, and then read
and write data from your instances to and from your file system.

The Shared File Systems service serves the same purpose as the Amazon Elastic
File System (EFS) does.

The Shared File Systems service can run in a single-node or multi-node
configuration. The Shared File Systems service can be configured to provision
shares from one or more back ends, so it is required to declare at least one
back end.

The Shared File Systems service contains several configurable components.

It is important to understand these components:

* Share networks
* Shares
* Multi-tenancy
* Back ends

The Shared File Systems service consists of four types of services, most of
which are similar to those of the Block Storage service:

- ``manila-api``
- ``manila-data``
- ``manila-scheduler``
- ``manila-share``

Installation of the first three - ``manila-api``, ``manila-data``, and
``manila-scheduler`` - is common for almost all deployments. But the
configuration of ``manila-share`` is backend-specific and can differ from
deployment to deployment.

.. toctree::
   :maxdepth: 1

   shared-file-systems-key-concepts.rst
   shared-file-systems-share-management.rst
   shared-file-systems-share-server-management.rst
   shared-file-systems-share-migration.rst
   shared-file-systems-share-types.rst
   shared-file-systems-snapshots.rst
   shared-file-systems-security-services.rst
   shared-file-systems-share-replication.rst
   shared-file-systems-multi-backend.rst
   shared-file-systems-networking.rst
   shared-file-systems-troubleshoot.rst
   share_back_ends_feature_support_mapping
   capabilities_and_extra_specs
   group_capabilities_and_extra_specs
   export_location_metadata

Supported share back ends
-------------------------

The manila share service must be configured to use drivers for one or more
storage back ends, as described in general terms below. See the drivers
section in the `Configuration Reference `_ for detailed configuration options
for each back end.

..
toctree::
   :maxdepth: 3

   container_driver
   zfs_on_linux_driver
   netapp_cluster_mode_driver
   emc_isilon_driver
   emc_vnx_driver
   ../configuration/shared-file-systems/drivers/dell-emc-unity-driver
   generic_driver
   glusterfs_driver
   glusterfs_native_driver
   cephfs_driver
   gpfs_driver
   huawei_nas_driver
   hdfs_native_driver
   hitachi_hnas_driver
   hpe_3par_driver
   infortrend_driver
   tegile_driver
   nexentastor5_driver
   ../configuration/shared-file-systems/drivers/windows-smb-driver
manila-10.0.0/doc/source/admin/infortrend_driver.rst0000664000175000017500000000613413656750227022510 0ustar zuulzuul00000000000000..
  Copyright (c) 2019 Infortrend Technologies Co., Ltd.
  All Rights Reserved.

  Licensed under the Apache License, Version 2.0 (the "License"); you may
  not use this file except in compliance with the License. You may obtain
  a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
  WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
  License for the specific language governing permissions and limitations
  under the License.

Infortrend Driver for OpenStack Manila
======================================

The `Infortrend `__ Manila driver provides NFS and CIFS shared file systems
to OpenStack.

Requirements
------------

- The EonStor GS/GSe series firmware version 139A23

Supported shared filesystems and operations
-------------------------------------------

This driver supports NFS and CIFS shares.

The following operations are supported:

- Create CIFS/NFS Share
- Delete CIFS/NFS Share
- Allow CIFS/NFS Share access

  * Only IP access type is supported for NFS (ro/rw).
  * Only USER access type is supported for CIFS (ro/rw).

- Deny CIFS/NFS Share access
- Manage a share.
- Unmanage a share.
- Extend a share.
- Shrink a share.

Backend Configuration
---------------------

The following parameters need to be configured in the manila configuration
file for the Infortrend driver:

- `share_backend_name` =
- `share_driver` = manila.share.drivers.infortrend.driver.InfortrendNASDriver
- `driver_handles_share_servers` = False
- `infortrend_nas_ip` =
- `infortrend_nas_user` =
- `infortrend_nas_password` =
- `infortrend_share_pools` =
- `infortrend_share_channels` =

Share Types
-----------

When creating a share, a share type can be specified to determine where and
how the share will be created. If a share type is not specified, the
`default_share_type` set in the manila configuration file is used.

Manila requires that the share type includes the
`driver_handles_share_servers` extra-spec. This ensures that the share will
be created on a backend that supports the requested
driver_handles_share_servers (share networks) capability. For the Infortrend
driver, this must be set to False.

Back-end configuration example
------------------------------

.. code-block:: ini

   [DEFAULT]
   enabled_share_backends = ift-manila
   enabled_share_protocols = NFS, CIFS

   [ift-manila]
   share_backend_name = ift-manila
   share_driver = manila.share.drivers.infortrend.driver.InfortrendNASDriver
   driver_handles_share_servers = False
   infortrend_nas_ip = FAKE_IP
   infortrend_nas_user = FAKE_USER
   infortrend_nas_password = FAKE_PASS
   infortrend_share_pools = pool-1, pool-2
   infortrend_share_channels = 0, 1
manila-10.0.0/doc/source/admin/shared-file-systems-key-concepts.rst0000664000175000017500000001141513656750227025253 0ustar zuulzuul00000000000000..
_shared_file_systems_key_concepts:

============
Key concepts
============

Share
~~~~~

In the Shared File Systems service, a ``share`` is the fundamental resource
unit allocated by the Shared File System service. It represents an allocation
of a persistent, readable, and writable filesystem. Compute instances access
these filesystems. Depending on the deployment configuration, clients outside
of OpenStack can also access the filesystem.

.. note::
   A ``share`` is an abstract storage object that may or may not directly
   map to a "share" concept from the underlying storage provider. See the
   description of ``share instance`` for more details.

Share instance
~~~~~~~~~~~~~~

This concept is tied to ``share`` and represents the resource created on a
specific back end, while ``share`` represents the abstraction between the end
user and the back-end storage. In common cases, it is a one-to-one relation.
A single ``share`` has more than one ``share instance`` in two cases:

- When ``share migration`` is being applied
- When ``share replication`` is enabled

Therefore, each ``share instance`` stores information specific to the real
allocated resource on storage, while ``share`` represents the information
that is common to its ``share instances``. A user with the ``member`` role
will not be able to work with share instances directly. Only a user with the
``admin`` role has rights to perform actions against specific share
instances.

Snapshot
~~~~~~~~

A ``snapshot`` is a point-in-time, read-only copy of a ``share``. You can
create ``Snapshots`` from an existing, operational ``share`` regardless of
whether a client has mounted the file system. A ``snapshot`` can serve as the
content source for a new ``share``. Specify the **Create from snapshot**
option when creating a new ``share`` on the dashboard.

Storage Pools
~~~~~~~~~~~~~

With the Kilo release of OpenStack, Shared File Systems can use ``storage
pools``. The storage may present one or more logical storage resource pools
that the Shared File Systems service will select as a storage location when
provisioning ``shares``.

Share Type
~~~~~~~~~~

A ``share type`` is an abstract collection of criteria used to characterize
``shares``. Share types are most commonly used to create a hierarchy of
functional capabilities. This hierarchy represents tiered storage service
levels. For example, an administrator might define a premium ``share type``
that indicates a greater level of performance than a basic ``share type``.
Premium represents the best performance level.

Share Access Rules
~~~~~~~~~~~~~~~~~~

``Share access rules`` define which users can access a particular ``share``.
For example, administrators can declare rules for NFS shares by listing the
valid IP networks which will access the ``share``. List the IP networks in
CIDR notation.

Security Services
~~~~~~~~~~~~~~~~~

``Security services`` allow granular client access rules for administrators.
They can declare rules for authentication or authorization to access
``share`` content. External services including LDAP, Active Directory, and
Kerberos can be declared as resources. Examine and consult these resources
when making an access decision for a particular ``share``. You can associate
``Shares`` with multiple security services, but only one service per type.

Share Networks
~~~~~~~~~~~~~~

A ``share network`` is an object that defines a relationship between a
project network and subnet, as defined in an OpenStack Networking service or
Compute service. The ``share network`` is also defined in ``shares`` created
by the same project.
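For illustration, a share network that references a Networking service
network and subnet might be created as follows; the network and subnet IDs
shown are placeholders:

.. code-block:: console

   $ manila share-network-create --name my_share_net \
       --neutron-net-id <neutron network id> \
       --neutron-subnet-id <neutron subnet id>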
A project may find it desirable to provision ``shares`` such that only instances connected to a particular OpenStack-defined network have access to the ``share``. Also, ``security services`` can be attached to ``share networks``, because most of auth protocols require some interaction with network services. The Shared File Systems service has the ability to work outside of OpenStack. That is due to the ``StandaloneNetworkPlugin``. The plugin is compatible with any network platform, and does not require specific network services in OpenStack like Compute or Networking service. You can set the network parameters in the ``manila.conf`` file. Share Servers ~~~~~~~~~~~~~ A ``share server`` is a logical entity that hosts the shares created on a specific ``share network``. A ``share server`` may be a configuration object within the storage controller, or it may represent logical resources provisioned within an OpenStack deployment used to support the data path used to access ``shares``. ``Share servers`` interact with network services to determine the appropriate IP addresses on which to export ``shares`` according to the related ``share network``. The Shared File Systems service has a pluggable network model that allows ``share servers`` to work with different implementations of the Networking service. manila-10.0.0/doc/source/admin/shared-file-systems-crud-share.rst0000664000175000017500000012654013656750227024712 0ustar zuulzuul00000000000000.. _shared_file_systems_crud_share: ====================== Share basic operations ====================== General concepts ---------------- To create a file share, and access it, the following general concepts are prerequisite knowledge: #. To create a share, use :command:`manila create` command and specify the required arguments: the size of the share and the shared file system protocol. ``NFS``, ``CIFS``, ``GlusterFS``, ``HDFS``, ``CephFS`` or ``MAPRFS`` share file system protocols are supported. #. You can also optionally specify the share network and the share type. #. After the share becomes available, use the :command:`manila show` command to get the share export locations. #. After getting the share export locations, you can create an :ref:`access rule ` for the share, mount it and work with files on the remote file system. There are big number of the share drivers created by different vendors in the Shared File Systems service. As a Python class, each share driver can be set for the :ref:`back end ` and run in the back end to manage the share operations. Initially there are two driver modes for the back ends: * no share servers mode * share servers mode Each share driver supports one or two of possible back end modes that can be configured in the ``manila.conf`` file. The configuration option ``driver_handles_share_servers`` in the ``manila.conf`` file sets the share servers mode or no share servers mode, and defines the driver mode for share storage lifecycle management: +------------------+-------------------------------------+--------------------+ | Mode | Config option | Description | +==================+=====================================+====================+ | no share servers | driver_handles_share_servers = False| An administrator | | | | rather than a share| | | | driver manages the | | | | bare metal storage | | | | with some net | | | | interface instead | | | | of the presence of | | | | the share servers. 
| +------------------+-------------------------------------+--------------------+ | share servers | driver_handles_share_servers = True | The share driver | | | | creates the share | | | | server and manages,| | | | or handles, the | | | | share server life | | | | cycle. | +------------------+-------------------------------------+--------------------+ It is :ref:`the share types ` which have the extra specifications that help scheduler to filter back ends and choose the appropriate back end for the user that requested to create a share. The required extra boolean specification for each share type is ``driver_handles_share_servers``. As an administrator, you can create the share types with the specifications you need. For details of managing the share types and configuration the back ends, see :ref:`shared_file_systems_share_types` and :ref:`shared_file_systems_multi_backend` documentation. You can create a share in two described above modes: * in a no share servers mode without specifying the share network and specifying the share type with ``driver_handles_share_servers = False`` parameter. See subsection :ref:`create_share_in_no_share_server_mode`. * in a share servers mode with specifying the share network and the share type with ``driver_handles_share_servers = True`` parameter. See subsection :ref:`create_share_in_share_server_mode`. .. _create_share_in_no_share_server_mode: Create a share in no share servers mode --------------------------------------- To create a file share in no share servers mode, you need to: #. To create a share, use :command:`manila create` command and specify the required arguments: the size of the share and the shared file system protocol. ``NFS``, ``CIFS``, ``GlusterFS``, ``HDFS``, ``CephFS`` or ``MAPRFS`` share file system protocols are supported. #. You should specify the :ref:`share type ` with ``driver_handles_share_servers = False`` extra specification. #. You must not specify the ``share network`` because no share servers are created. In this mode the Shared File Systems service expects that administrator has some bare metal storage with some net interface. #. The :command:`manila create` command creates a share. This command does the following things: * The :ref:`manila-scheduler ` service will find the back end with ``driver_handles_share_servers = False`` mode due to filtering the extra specifications of the share type. * The share is created using the storage that is specified in the found back end. #. After the share becomes available, use the :command:`manila show` command to get the share export locations. In the example to create a share, the created already share type named ``my_type`` with ``driver_handles_share_servers = False`` extra specification is used. Check share types that exist, run: .. code-block:: console $ manila type-list +------+---------+------------+------------+--------------------------------------+-------------------------+ | ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs | +------+---------+------------+------------+--------------------------------------+-------------------------+ | %ID% | my_type | public | - | driver_handles_share_servers : False | snapshot_support : True | +------+---------+------------+------------+--------------------------------------+-------------------------+ Create a private share with ``my_type`` share type, NFS shared file system protocol, and size 1 GB: .. 
code-block:: console $ manila create nfs 1 --name Share1 --description "My share" --share-type my_type +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | creating | | share_type_name | my_type | | description | My share | | availability_zone | None | | share_network_id | None | | share_server_id | None | | share_group_id | None | | host | | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 10f5a2a1-36f5-45aa-a8e6-00e94e592e88 | | size | 1 | | name | Share1 | | share_type | 14ee8575-aac2-44af-8392-d9c9d344f392 | | has_replicas | False | | replication_type | None | | created_at | 2016-03-25T12:02:46.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+--------------------------------------+ New share ``Share2`` should have a status ``available``: .. code-block:: console $ manila show Share2 +-----------------------------+----------------------------------------------------------+ | Property | Value | +-----------------------------+----------------------------------------------------------+ | status | available | | share_type_name | my_type | | description | My share | | availability_zone | nova | | share_network_id | None | | export_locations | | | | path = 10.0.0.4:/shares/manila_share_a5fb1ab7_... | | | preferred = False | | | is_admin_only = False | | | id = 9e078eee-bcad-40b8-b4fe-1c916cf98ed1 | | | share_instance_id = a5fb1ab7-0bbd-465b-ac14-05706294b6e9 | | | path = 172.18.198.52:/shares/manila_share_a5fb1ab7_... | | | preferred = False | | | is_admin_only = True | | | id = 44933f59-e0e3-4483-bb88-72ba7c486f41 | | | share_instance_id = a5fb1ab7-0bbd-465b-ac14-05706294b6e9 | | share_server_id | None | | share_group_id | None | | host | manila@paris#epsilon | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 10f5a2a1-36f5-45aa-a8e6-00e94e592e88 | | size | 1 | | name | Share1 | | share_type | 14ee8575-aac2-44af-8392-d9c9d344f392 | | has_replicas | False | | replication_type | None | | created_at | 2016-03-25T12:02:46.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+----------------------------------------------------------+ .. _create_share_in_share_server_mode: Create a share in share servers mode ------------------------------------ To create a file share in share servers mode, you need to: #. To create a share, use :command:`manila create` command and specify the required arguments: the size of the share and the shared file system protocol. ``NFS``, ``CIFS``, ``GlusterFS``, ``HDFS``, ``CephFS`` or ``MAPRFS`` share file system protocols are supported. #. You should specify the :ref:`share type ` with ``driver_handles_share_servers = True`` extra specification. #. You should specify the :ref:`share network `. #. The :command:`manila create` command creates a share. This command does the following things: * The :ref:`manila-scheduler ` service will find the back end with ``driver_handles_share_servers = True`` mode due to filtering the extra specifications of the share type. * The share driver will create a share server with the share network. For details of creating the resources, see the `documentation `_ of the specific share driver. #. 
After the share becomes available, use the :command:`manila show` command to get the share export location. In the example to create a share, the default share type and the already existing share network are used. .. note:: There is no default share type just after you started manila as the administrator. See :ref:`shared_file_systems_share_types` to create the default share type. To create a share network, use :ref:`shared_file_systems_share_networks`. Check share types that exist, run: .. code-block:: console $ manila type-list +------+---------+------------+------------+--------------------------------------+-------------------------+ | ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs | +------+---------+------------+------------+--------------------------------------+-------------------------+ | %id% | default | public | YES | driver_handles_share_servers : True | snapshot_support : True | +------+---------+------------+------------+--------------------------------------+-------------------------+ Check share networks that exist, run: .. code-block:: console $ manila share-network-list +--------------------------------------+--------------+ | id | name | +--------------------------------------+--------------+ | c895fe26-92be-4152-9e6c-f2ad230efb13 | my_share_net | +--------------------------------------+--------------+ Create a public share with ``my_share_net`` network, ``default`` share type, NFS shared file system protocol, and size 1 GB: .. code-block:: console $ manila create nfs 1 \ --name "Share2" \ --description "My second share" \ --share-type default \ --share-network my_share_net \ --metadata aim=testing \ --public +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | creating | | share_type_name | default | | description | My second share | | availability_zone | None | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | share_server_id | None | | share_group_id | None | | host | | | access_rules_status | active | | snapshot_id | None | | is_public | True | | task_state | None | | snapshot_support | True | | id | 195e3ba2-9342-446a-bc93-a584551de0ac | | size | 1 | | name | Share2 | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-25T12:13:40.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {u'aim': u'testing'} | +-----------------------------+--------------------------------------+ The share also can be created from a share snapshot. For details, see :ref:`shared_file_systems_snapshots`. See the share in a share list: .. 
code-block:: console $ manila list +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+----------------------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+----------------------+-------------------+ | 10f5a2a1-36f5-45aa-a8e6-00e94e592e88 | Share1 | 1 | NFS | available | False | my_type | manila@paris#epsilon | nova | | 195e3ba2-9342-446a-bc93-a584551de0ac | Share2 | 1 | NFS | available | True | default | manila@london#LONDON | nova | +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+----------------------+-------------------+ Check the share status and see the share export locations. After ``creating`` status share should have status ``available``: .. code-block:: console $ manila show Share2 +----------------------+----------------------------------------------------------------------+ | Property | Value | +----------------------+----------------------------------------------------------------------+ | status | available | | share_type_name | default | | description | My second share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/shares/share-fe874928-39a2-441b-8d24-29e6f0fde965 | | | preferred = False | | | is_admin_only = False | | | id = de6d4012-6158-46f0-8b28-4167baca51a7 | | | share_instance_id = fe874928-39a2-441b-8d24-29e6f0fde965 | | | path = 10.0.0.3:/shares/share-fe874928-39a2-441b-8d24-29e6f0fde965 | | | preferred = False | | | is_admin_only = True | | | id = 602d0f5c-921b-4e45-bfdb-5eec8a89165a | | | share_instance_id = fe874928-39a2-441b-8d24-29e6f0fde965 | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | share_group_id | None | | host | manila@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | True | | task_state | None | | snapshot_support | True | | id | 195e3ba2-9342-446a-bc93-a584551de0ac | | size | 1 | | name | Share2 | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-25T12:13:40.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {u'aim': u'testing'} | +----------------------+----------------------------------------------------------------------+ ``is_public`` defines the level of visibility for the share: whether other projects can or cannot see the share. By default, the share is private. Update share ------------ Update the name, or description, or level of visibility for all projects for the share if you need: .. code-block:: console $ manila update Share2 --description "My second share. Updated" --is-public False $ manila show Share2 +----------------------+----------------------------------------------------------------------+ | Property | Value | +----------------------+----------------------------------------------------------------------+ | status | available | | share_type_name | default | | description | My second share. 
Updated | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/shares/share-fe874928-39a2-441b-8d24-29e6f0fde965 | | | preferred = False | | | is_admin_only = False | | | id = de6d4012-6158-46f0-8b28-4167baca51a7 | | | share_instance_id = fe874928-39a2-441b-8d24-29e6f0fde965 | | | path = 10.0.0.3:/shares/share-fe874928-39a2-441b-8d24-29e6f0fde965 | | | preferred = False | | | is_admin_only = True | | | id = 602d0f5c-921b-4e45-bfdb-5eec8a89165a | | | share_instance_id = fe874928-39a2-441b-8d24-29e6f0fde965 | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | share_group_id | None | | host | manila@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 195e3ba2-9342-446a-bc93-a584551de0ac | | size | 1 | | name | Share2 | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-25T12:13:40.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {u'aim': u'testing'} | +----------------------+----------------------------------------------------------------------+ A share can have one of these status values: +-----------------------------------+-----------------------------------------+ | Status | Description | +===================================+=========================================+ | creating | The share is being created. | +-----------------------------------+-----------------------------------------+ | deleting | The share is being deleted. | +-----------------------------------+-----------------------------------------+ | error | An error occurred during share creation.| +-----------------------------------+-----------------------------------------+ | error_deleting | An error occurred during share deletion.| +-----------------------------------+-----------------------------------------+ | available | The share is ready to use. | +-----------------------------------+-----------------------------------------+ | manage_starting | Share manage started. | +-----------------------------------+-----------------------------------------+ | manage_error | Share manage failed. | +-----------------------------------+-----------------------------------------+ | unmanage_starting | Share unmanage started. | +-----------------------------------+-----------------------------------------+ | unmanage_error | Share cannot be unmanaged. | +-----------------------------------+-----------------------------------------+ | unmanaged | Share was unmanaged. | +-----------------------------------+-----------------------------------------+ | extending | The extend, or increase, share size | | | request was issued successfully. | +-----------------------------------+-----------------------------------------+ | extending_error | Extend share failed. | +-----------------------------------+-----------------------------------------+ | shrinking | Share is being shrunk. | +-----------------------------------+-----------------------------------------+ | shrinking_error | Failed to update quota on share | | | shrinking. | +-----------------------------------+-----------------------------------------+ | shrinking_possible_data_loss_error| Shrink share failed due to possible data| | | loss. | +-----------------------------------+-----------------------------------------+ | migrating | Share migration is in progress. 
| +-----------------------------------+-----------------------------------------+ .. _share_metadata: Share metadata -------------- If you want to set the metadata key-value pairs on the share, run: .. code-block:: console $ manila metadata Share2 set project=my_abc deadline=01/20/16 Get all metadata key-value pairs of the share: .. code-block:: console $ manila metadata-show Share2 +----------+----------+ | Property | Value | +----------+----------+ | aim | testing | | project | my_abc | | deadline | 01/20/16 | +----------+----------+ You can update the metadata: .. code-block:: console $ manila metadata-update-all Share2 deadline=01/30/16 +----------+----------+ | Property | Value | +----------+----------+ | deadline | 01/30/16 | +----------+----------+ You also can unset the metadata using **manila metadata unset **. Reset share state ----------------- As administrator, you can reset the state of a share. Use **manila reset-state [--state ] ** command to reset share state, where ``state`` indicates which state to assign the share. Options include ``available``, ``error``, ``creating``, ``deleting``, ``error_deleting`` states. .. code-block:: console $ manila reset-state Share2 --state deleting $ manila show Share2 +----------------------+----------------------------------------------------------------------+ | Property | Value | +----------------------+----------------------------------------------------------------------+ | status | deleting | | share_type_name | default | | description | My second share. Updated | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/shares/share-fe874928-39a2-441b-8d24-29e6f0fde965 | | | preferred = False | | | is_admin_only = False | | | id = de6d4012-6158-46f0-8b28-4167baca51a7 | | | share_instance_id = fe874928-39a2-441b-8d24-29e6f0fde965 | | | path = 10.0.0.3:/shares/share-fe874928-39a2-441b-8d24-29e6f0fde965 | | | preferred = False | | | is_admin_only = True | | | id = 602d0f5c-921b-4e45-bfdb-5eec8a89165a | | | share_instance_id = fe874928-39a2-441b-8d24-29e6f0fde965 | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | share_group_id | None | | host | manila@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 195e3ba2-9342-446a-bc93-a584551de0ac | | size | 1 | | name | Share2 | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-25T12:13:40.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {u'deadline': u'01/30/16'} | +----------------------+----------------------------------------------------------------------+ Delete and force-delete share ----------------------------- You also can force-delete a share. The shares cannot be deleted in transitional states. The transitional states are ``creating``, ``deleting``, ``managing``, ``unmanaging``, ``migrating``, ``extending``, and ``shrinking`` statuses for the shares. Force-deletion deletes an object in any state. Use the ``policy.json`` file to grant permissions for this action to other roles. .. tip:: The configuration file ``policy.json`` may be used from different places. The path ``/etc/manila/policy.json`` is one of expected paths by default. Use **manila delete ** command to delete a specified share: .. code-block:: console $ manila delete %share_name_or_id% .. 
code-block:: console $ manila delete %share_name_or_id% --consistency-group %consistency-group-id% If you try to delete the share in one of the transitional state using soft-deletion you'll get an error: .. code-block:: console $ manila delete Share2 Delete for share 195e3ba2-9342-446a-bc93-a584551de0ac failed: Invalid share: Share status must be one of ('available', 'error', 'inactive'). (HTTP 403) (Request-ID: req-9a77b9a0-17d2-4d97-8a7a-b7e23c27f1fe) ERROR: Unable to delete any of the specified shares. A share cannot be deleted in a transitional status, that it why an error from ``python-manilaclient`` appeared. Print the list of all shares for all projects: .. code-block:: console $ manila list --all-tenants +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+----------------------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+----------------------+-------------------+ | 10f5a2a1-36f5-45aa-a8e6-00e94e592e88 | Share1 | 1 | NFS | available | False | my_type | manila@paris#epsilon | nova | | 195e3ba2-9342-446a-bc93-a584551de0ac | Share2 | 1 | NFS | available | False | default | manila@london#LONDON | nova | +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+----------------------+-------------------+ Force-delete Share2 and check that it is absent in the list of shares, run: .. code-block:: console $ manila force-delete Share2 $ manila list +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+----------------------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+----------------------+-------------------+ | 10f5a2a1-36f5-45aa-a8e6-00e94e592e88 | Share1 | 1 | NFS | available | False | my_type | manila@paris#epsilon | nova | +--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+----------------------+-------------------+ .. _access_to_share: Manage access to share ---------------------- The Shared File Systems service allows to grant or deny access to a specified share, and list the permissions for a specified share. To grant or deny access to a share, specify one of these supported share access levels: - **rw**. Read and write (RW) access. This is the default value. - **ro**. Read-only (RO) access. You must also specify one of these supported authentication methods: - **ip**. Authenticates an instance through its IP address. A valid format is ``XX.XX.XX.XX`` or ``XX.XX.XX.XX/XX``. For example ``0.0.0.0/0``. - **user**. Authenticates by a specified user or group name. A valid value is an alphanumeric string that can contain some special characters and is from 4 to 32 characters long. - **cert**. Authenticates an instance through a TLS certificate. Specify the TLS identity as the IDENTKEY. A valid value is any string up to 64 characters long in the common name (CN) of the certificate. The meaning of a string depends on its interpretation. - **cephx**. Ceph authentication system. 
Specify the Ceph auth ID that needs to be authenticated and authorized for share access by the Ceph back end. A valid value must be non-empty, consist of ASCII printable characters, and not contain periods. Try to mount NFS share with export path ``10.0.0.4:/shares/manila_share_a5fb1ab7_0bbd_465b_ac14_05706294b6e9`` on the node with IP address ``10.0.0.13``: .. code-block:: console $ sudo mount -v -t nfs 10.0.0.4:/shares/manila_share_a5fb1ab7_0bbd_465b_ac14_05706294b6e9 /mnt/ mount.nfs: timeout set for Tue Oct 6 10:37:23 2015 mount.nfs: trying text-based options 'vers=4,addr=10.0.0.4,clientaddr=10.0.0.13' mount.nfs: mount(2): Permission denied mount.nfs: access denied by server while mounting 10.0.0.4:/shares/manila_share_a5fb1ab7_0bbd_465b_ac14_05706294b6e9 An error message "Permission denied" appeared, so you are not allowed to mount a share without an access rule. Allow access to the share with ``ip`` access type and ``10.0.0.13`` IP address: .. code-block:: console $ manila access-allow Share1 ip 10.0.0.13 --access-level rw +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | share_id | 10f5a2a1-36f5-45aa-a8e6-00e94e592e88 | | access_type | ip | | access_to | 10.0.0.13 | | access_level | rw | | state | new | | id | de715226-da00-4cfc-b1ab-c11f3393745e | +--------------+--------------------------------------+ Try to mount a share again. This time it is mounted successfully: .. code-block:: console $ sudo mount -v -t nfs 10.0.0.4:/shares/manila_share_a5fb1ab7_0bbd_465b_ac14_05706294b6e9 /mnt/ Since it is allowed node on 10.0.0.13 read and write access, try to create a file on a mounted share: .. code-block:: console $ cd /mnt $ ls lost+found $ touch my_file.txt Connect via SSH to the ``10.0.0.4`` node and check new file `my_file.txt` in the ``/shares/manila_share_a5fb1ab7_0bbd_465b_ac14_05706294b6e9`` directory: .. code-block:: console $ ssh 10.0.0.4 $ cd /shares $ ls manila_share_a5fb1ab7_0bbd_465b_ac14_05706294b6e9 $ cd manila_share_a5fb1ab7_0bbd_465b_ac14_05706294b6e9 $ ls lost+found my_file.txt You have successfully created a file from instance that was given access by its IP address. Allow access to the share with ``user`` access type: .. code-block:: console $ manila access-allow Share1 user demo --access-level rw +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | share_id | 10f5a2a1-36f5-45aa-a8e6-00e94e592e88 | | access_type | user | | access_to | demo | | access_level | rw | | state | new | | id | 4f391c6b-fb4f-47f5-8b4b-88c5ec9d568a | +--------------+--------------------------------------+ .. note:: Different share features are supported by different share drivers. For the example, the Generic driver with the Block Storage service as a back-end doesn't support ``user`` and ``cert`` authentications methods. For details of supporting of features by different drivers, see `Manila share features support mapping `_. To verify that the access rules (ACL) were configured correctly for a share, you list permissions for a share: .. 
code-block:: console

   $ manila access-list Share1
   +--------------------------------------+-------------+------------+--------------+--------+
   | id                                   | access type | access to  | access level | state  |
   +--------------------------------------+-------------+------------+--------------+--------+
   | 4f391c6b-fb4f-47f5-8b4b-88c5ec9d568a | user        | demo       | rw           | error  |
   | de715226-da00-4cfc-b1ab-c11f3393745e | ip          | 10.0.0.13  | rw           | active |
   +--------------------------------------+-------------+------------+--------------+--------+

Deny access to the share and check that the deleted access rule is absent in
the access rule list:

.. code-block:: console

   $ manila access-deny Share1 de715226-da00-4cfc-b1ab-c11f3393745e
   $ manila access-list Share1
   +--------------------------------------+-------------+-----------+--------------+-------+
   | id                                   | access type | access to | access level | state |
   +--------------------------------------+-------------+-----------+--------------+-------+
   | 4f391c6b-fb4f-47f5-8b4b-88c5ec9d568a | user        | demo      | rw           | error |
   +--------------------------------------+-------------+-----------+--------------+-------+
manila-10.0.0/doc/source/admin/shared-file-systems-network-plugins.rst0000664000175000017500000001121313656750227026013 0ustar zuulzuul00000000000000.. _shared_file_systems_network_plugins:

================
Network plug-ins
================

The Shared File Systems service architecture defines an abstraction layer for
network resource provisioning, allowing administrators to choose from
different options for how network resources are assigned to their projects'
networked storage. There is a set of network plug-ins that provide a variety
of integration approaches with the network services that are available with
OpenStack.

What is a network plugin in Manila?
-----------------------------------

A network plugin is a Python class that uses a specific facility (e.g.
Neutron network) to provide network resources to the :term:`manila-share`
service.

When to use a network plugin?
-----------------------------

A Manila `share driver` may be configured in one of two modes, where it is
managing the lifecycle of `share servers` on its own or where it is merely
providing storage resources on a pre-configured share server. This mode is
defined using the boolean option `driver_handles_share_servers` in the Manila
configuration file. A network plugin is only useful when a driver is handling
its own share servers.

.. note::
   Not all share drivers support both modes. Each driver must report which
   mode(s) it supports to the manila-share service.

When `driver_handles_share_servers` is set to `True`, a share driver will be
called to create share servers for shares using information provided within a
`share network`.
This information will be provided to one of the enabled network plug-ins that will handle reservation, creation and deletion of network resources including IP addresses and network interfaces. What network plug-ins are available? ------------------------------------ There are two network plug-ins and three python classes in the Shared File Systems service: #. Network plug-in for using the OpenStack Networking service. It allows to use any network segmentation that the Networking service supports. It is up to each share driver to support at least one network segmentation type. a) ``manila.network.neutron.neutron_network_plugin.NeutronNetworkPlugin``. This is a default network plug-in. It requires the ``neutron_net_id`` and the ``neutron_subnet_id`` to be provided when defining the share network that will be used for the creation of share servers. The user may define any number of share networks corresponding to the various physical network segments in a project environment. b) ``manila.network.neutron.neutron_network_plugin.NeutronSingleNetworkPlugin``. This is a simplification of the previous case. It accepts values for ``neutron_net_id`` and ``neutron_subnet_id`` from the ``manila.conf`` configuration file and uses one network for all shares. When only a single network is needed, the NeutronSingleNetworkPlugin (1.b) is a simple solution. Otherwise NeutronNetworkPlugin (1.a) should be chosen. #. Network plug-in for specifying networks independently from OpenStack networking services. a) ``manila.network.standalone_network_plugin.StandaloneNetworkPlugin``. This plug-in uses a pre-existing network that is available to the manila-share host. This network may be handled either by OpenStack or be created independently by any other means. The plug-in supports any type of network - flat and segmented. As above, it is completely up to the share driver to support the network type for which the network plug-in is configured. .. note:: The ip version of the share network is defined by the flags of ``network_plugin_ipv4_enabled`` and ``network_plugin_ipv6_enabled`` in the ``manila.conf`` configuration since Pike. The ``network_plugin_ipv4_enabled`` default value is set to True. The ``network_plugin_ipv6_enabled`` default value is set to False. If ``network_plugin_ipv6_enabled`` option is True, the value of ``network_plugin_ipv4_enabled`` will be ignored, it means to support both IPv4 and IPv6 share network. manila-10.0.0/doc/source/admin/shared-file-systems-share-migration.rst0000664000175000017500000003220313656750227025736 0ustar zuulzuul00000000000000.. _shared_file_systems_share_migration: =============== Share migration =============== Share migration is the feature that migrates a share between different storage pools. Use cases ~~~~~~~~~ As an administrator, you may want to migrate your share from one storage pool to another for several reasons. 
Examples include: * Maintenance or evacuation * Evacuate a back end for hardware or software upgrades * Evacuate a back end experiencing failures * Evacuate a back end which is tagged end-of-life * Optimization * Defragment back ends to empty and be taken offline to conserve power * Rebalance back ends to maximize available performance * Move data and compute closer together to reduce network utilization and decrease latency or increase bandwidth * Moving shares * Migrate from old hardware generation to a newer generation * Migrate from one vendor to another Migration workflows ~~~~~~~~~~~~~~~~~~~ Moving shares across different storage pools is generally expected to be a disruptive operation that disconnects existing clients when the source ceases to exist. For this reason, share migration is implemented in a 2-phase approach that allows the administrator to control the timing of the disruption. The first phase performs data copy while users retain access to the share. When copying is complete, the second phase may be triggered to perform a switchover that may include a last sync and deleting the source, generally requiring users to reconnect to continue accessing the share. In order to migrate a share, one of two possible mechanisms may be employed, which provide different capabilities and affect how the disruption occurs with regards to user access during data copy phase and disconnection during switchover phase. Those two mechanisms are: * Driver-assisted migration: This mechanism is intended to make use of driver optimizations to migrate shares between pools of the same storage vendor. This mechanism allows migrating shares nondisruptively while the source remains writable, preserving all filesystem metadata and snapshots. The migration workload is performed in the storage back end. * Host-assisted migration: This mechanism is intended to migrate shares in an agnostic manner between two different pools, regardless of storage vendor. The implementation for this mechanism does not offer the same properties found in driver-assisted migration. In host-assisted migration, the source remains readable, snapshots must be deleted prior to starting the migration, filesystem metadata may be lost, and the clients will get disconnected by the end of migration. The migration workload is performed by the Data Service, which is a dedicated manila service for intensive data operations. When starting a migration, driver-assisted migration is attempted first. If the shared file system service detects it is not possible to perform the driver-assisted migration, it proceeds to attempt host-assisted migration. Using the migration APIs ~~~~~~~~~~~~~~~~~~~~~~~~ The commands to interact with the share migration API are: * ``migration_start``: starts a migration while retaining access to the share. Migration is paused and waits for ``migration_complete`` invocation when it has copied all data and is ready to take down the source share. .. code-block:: console $ manila migration-start share_1 ubuntu@generic2#GENERIC2 --writable False --preserve-snapshots False --preserve-metadata False --nondisruptive False .. note:: This command has no output. * ``migration_complete``: completes a migration, removing the source share and setting the destination share instance to ``available``. .. code-block:: console $ manila migration-complete share_1 .. note:: This command has no output. * ``migration_get_progress``: obtains migration progress information of a share. .. 
code-block:: console $ manila migration-get-progress share_1 +----------------+--------------------------+ | Property | Value | +----------------+--------------------------+ | task_state | data_copying_in_progress | | total_progress | 37 | +----------------+--------------------------+ * ``migration_cancel``: cancels an in-progress migration of a share. .. code-block:: console $ manila migration-cancel share_1 .. note:: This command has no output. The parameters -------------- To start a migration, an administrator should specify several parameters. Among those, two of them are key for the migration. * ``share``: The share that will be migrated. * ``destination_pool``: The destination pool to which the share should be migrated to, in format host@backend#pool. Several other parameters, referred to here as ``driver-assisted parameters``, *must* be specified in the ``migration_start`` API. They are: * ``preserve_metadata``: whether preservation of filesystem metadata should be enforced for this migration. * ``preserve_snapshots``: whether preservation of snapshots should be enforced for this migration. * ``writable``: whether the source share remaining writable should be enforced for this migration. * ``nondisruptive``: whether it should be enforced to keep clients connected throughout the migration. Specifying any of the boolean parameters above as ``True`` will disallow a host-assisted migration. In order to appropriately move a share to a different storage pool, it may be required to change one or more share properties, such as the share type, share network, or availability zone. To accomplish this, use the optional parameters: * ``new_share_type_id``: Specify the ID of the share type that should be set in the migrated share. * ``new_share_network_id``: Specify the ID of the share network that should be set in the migrated share. If driver-assisted migration should not be attempted, you may provide the optional parameter: * ``force_host_assisted_migration``: whether driver-assisted migration attempt should be skipped. If this option is set to ``True``, all driver-assisted options must be set to ``False``. Configuration ~~~~~~~~~~~~~ For share migration to work in the cloud, there are several configuration requirements that need to be met: For driver-assisted migration: it is necessary that the configuration of all back end stanzas is present in the file manila.conf of all manila-share nodes. Also, network connectivity between the nodes running manila-share service and their respective storage back ends is required. For host-assisted migration: it is necessary that the Data Service (manila-data) is installed and configured in a node connected to the cloud's administrator network. The drivers pertaining to the source back end and destination back end involved in the migration should be able to provide shares that can be accessed from the administrator network. This can easily be accomplished if the driver supports ``admin_only`` export locations, else it is up to the administrator to set up means of connectivity. In order for the Data Service to mount the source and destination instances, it must use manila share access APIs to grant access to mount the instances. The access rule type varies according to the share protocol, so there are a few config options to set the access value for each type: * ``data_node_access_ips``: For IP-based access type, provide one or more administrator network IP addresses of the host running the Data Service. 
For NFS shares, drivers should always add rules with the "no_root_squash" property. * ``data_node_access_cert``: For certificate-based access type, provide the value of the certificate name that grants access to the Data Service. * ``data_node_access_admin_user``: For user-based access type, provide the value of a username that grants access and administrator privileges to the files in the share. * ``data_node_mount_options``: Provide the value of a mapping of protocol name to respective mount options. The Data Service makes use of mount command templates that by default have a dedicated field for inserting the mount options parameter. The default value for this config option already includes the username and password parameters for CIFS shares and the NFS v3 enforcing parameter for NFS shares. * ``mount_tmp_location``: Provide the value of a string representing the path where the share instances used in migration should be temporarily mounted. The default value is ``/tmp/``. * ``check_hash``: This boolean config option determines whether the hash of all files copied in migration should be validated. Setting this option increases the time it takes to migrate files, and is recommended for ultra-dependable systems. It defaults to disabled. The configuration options above apply to the Data Service only and should be defined in the ``DEFAULT`` group of the ``manila.conf`` configuration file. Also, the Data Service node must have all the protocol-related libraries pre-installed to be able to run the mount commands for each protocol. You may need to change some driver-specific configuration options from their default value to work with specific drivers. If so, they must be set under the driver configuration stanza in ``manila.conf``. See a detailed description for each one below: * ``migration_ignore_files``: Provide a list containing the names of files or folders to be ignored during migration for a specific driver. The default value is a list containing only the ``lost+found`` folder. * ``share_mount_template``: Provide a string that defines the template for the mount command for a specific driver. The template should contain the following entries to be formatted by the code: * proto: The share protocol. Automatically formatted by the Data Service. * options: The mount options to be formatted by the Data Service according to the data_node_mount_options config option. * export: The export path of the share. Automatically formatted by the Data Service with the share's ``admin_only`` export location. * path: The path to mount the share. Automatically formatted by the Data Service according to the mount_tmp_location config option. The default value for this config option is:: mount -vt %(proto)s %(options)s %(export)s %(path)s * ``share_unmount_template``: Provide the value of a string that defines the template for the unmount command for a specific driver. The template should contain the path where the shares are mounted, according to the ``mount_tmp_location`` config option, to be formatted automatically by the Data Service. The default value for this config option is:: umount -v %(path)s * ``protocol_access_mapping``: Provide the value of a mapping of access rule type to protocols supported. The default value specifies IP and user based access types mapped to NFS and CIFS respectively, which are the combinations supported by manila.
If a certain driver uses a different protocol for IP or user access types, or is not included in the default mapping, it should be specified in this configuration option. Other remarks ~~~~~~~~~~~~~ * There is no need to manually add any of the previously existing access rules after a migration is complete, they will be persisted on the destination after the migration. * Once migration of a share has started, the user will see the status ``migrating`` and it will block other share actions, such as adding or removing access rules, creating or deleting snapshots, resizing, among others. * The destination share instance export locations, although it may exist from the beginning of a host-assisted migration, are not visible nor accessible as access rules cannot be added. * During a host-assisted migration, an access rule granting access to the Data Service will be added and displayed by querying the ``access-list`` API. This access rule should not be tampered with, it will otherwise cause migration to fail. * Resources allocated are cleaned up automatically when a migration fails, except if this failure occurs during the 2nd phase of a driver-assisted migration. Each step in migration is saved to the field ``task_state`` present in the Share model. If for any reason the state is not set to ``migration_error`` during a failure, it will need to be reset using the ``reset-task-state`` API. * It is advised that the node running the Data Service is well secured, since it will be mounting shares with highest privileges, temporarily exposing user data to whoever has access to this node. * The two mechanisms of migration are affected differently by service restarts: * If performing a host-assisted migration, all services may be restarted except for the manila-data service when performing the copy (the ``task_state`` field value starts with ``data_copying_``). In other steps of the host-assisted migration, both the source and destination manila-share services should not be restarted. * If performing a driver-assisted migration, the migration is affected minimally by driver restarts if the ``task_state`` is ``migration_driver_in_progress``, while the copy is being done in the back end. Otherwise, the source and destination manila-share services should not be restarted. manila-10.0.0/doc/source/admin/container_driver.rst0000664000175000017500000001072213656750227022316 0ustar zuulzuul00000000000000.. Copyright 2016 Mirantis Inc. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Container Driver ================ The Container driver provides a lightweight solution for share servers management. It allows to use Docker containers for hosting userspace shared file systems services. Supported operations -------------------- - Create CIFS share; - Delete CIFS share; - Allow user access to CIFS share; - Deny user access to CIFS share; - Extend CIFS share. Restrictions ------------ - Current implementation has been tested only on Ubuntu. 
Devstack plugin won't work on other distributions however it should be possible to install prerequisites and set the driver up manually; - The only supported protocol is CIFS; - The following features are not implemented: * Manage/unmanage share; * Shrink share; * Create/delete snapshots; * Create a share from a snapshot; * Manage/unmanage snapshots. Known problems -------------- - May demonstrate unstable behaviour when running concurrently. It is strongly suggested that the driver should be used with extreme care in cases other than building lightweight development and testing environments. Setting up container driver with devstack ----------------------------------------- The driver could be set up via devstack. This requires the following update to local.conf: .. code-block:: ini enable_plugin manila https://opendev.org/openstack/manila MANILA_BACKEND1_CONFIG_GROUP_NAME=london MANILA_SHARE_BACKEND1_NAME=LONDON MANILA_OPTGROUP_london_driver_handles_share_servers=True MANILA_OPTGROUP_london_neutron_host_id= SHARE_DRIVER=manila.share.drivers.container.driver.ContainerShareDriver SHARE_BACKING_FILE_SIZE= MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='snapshot_support=false' where is change reference, which could be copied from gerrit web-interface, is the name of the host with running neutron Setting Container Driver Up Manually ------------------------------------ This section describes steps needed to be performed to set the driver up manually. The driver has been tested on Ubuntu 14.04, thus in case of any other distribution package names might differ. The following packages must be installed: - docker.io One can verify if the package is installed by issuing ``sudo docker info`` command. In case of normal operation it should return docker usage statistics. In case it fails complaining on inaccessible socket try installing ``apparmor``. Please note that docker usage requires superuser privileges. After docker is successfully installed a docker image containing necessary packages must be provided. Currently such image could be downloaded from https://github.com/a-ovchinnikov/manila-image-elements-lxd-images/releases/download/0.1.0/manila-docker-container.tar.gz The image has to be unpacked but not untarred. This could be achieved by running 'gzip -d ' command. Resulting tar-archive of the image could be uploaded to docker via .. code-block:: console sudo docker load --input If the previous command finished successfully you will be able to see the image in the image list: .. code-block:: console sudo docker images The driver expects to find a folder /tmp/shares on the host where it is running as well as a logical volume group "manila_docker_volumes". When installing the driver manually one must make sure that 'brctl' and 'docker' commands are present in the /etc/manila/rootwrap.d/share.filters and could be executed as root. Finally to use the driver one must add a backend to the config file containing the following settings: .. code-block:: ini driver_handles_share_servers = True share_driver = manila.share.drivers.container.driver.ContainerShareDriver neutron_host_id = where is the name of the host running neutron. (In case of single VM devstack it is VM's name). After restarting manila services you should be able to use the driver. manila-10.0.0/doc/source/admin/shared-file-systems-networking.rst0000664000175000017500000000107013656750227025032 0ustar zuulzuul00000000000000.. 
_shared_file_systems_networking: ========== Networking ========== Unlike the OpenStack Block Storage service, the Shared File Systems service must connect to the Networking service. The share service requires the option to self-manage share servers. For client authentication and authorization, you can configure the Shared File Systems service to work with different network authentication services, like LDAP, Kerberos protocols, or Microsoft Active Directory. .. toctree:: shared-file-systems-share-networks.rst shared-file-systems-network-plugins.rst manila-10.0.0/doc/source/admin/hdfs_native_driver.rst0000664000175000017500000000610613656750227022627 0ustar zuulzuul00000000000000.. Copyright 2015 Intel, Corp. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. HDFS native driver ================== HDFS native driver is a plugin based on the OpenStack manila service, which uses Hadoop distributed file system (HDFS), a distributed file system designed to hold very large amounts of data, and provide high-throughput access to the data. A manila share in this driver is a subdirectory in hdfs root directory. Instances talk directly to the HDFS storage backend with 'hdfs' protocol. And access to each share is allowed by user based access type, which is aligned with HDFS ACLs to support access control of multiple users and groups. Network configuration --------------------- The storage backend and manila hosts should be in a flat network, otherwise, the L3 connectivity between them should exist. Supported shared filesystems ---------------------------- - HDFS (authentication by user) Supported Operations -------------------- - Create HDFS share - Delete HDFS share - Allow HDFS Share access * Only support user access type * Support level of access (ro/rw) - Deny HDFS Share access - Create snapshot - Delete snapshot - Create share from snapshot - Extend share Requirements ------------ - Install HDFS package, version >= 2.4.x, on the storage backend - To enable access control, the HDFS file system must have ACLs enabled - Establish network connection between the manila host and storage backend Manila driver configuration --------------------------- - `share_driver` = manila.share.drivers.hdfs.hdfs_native.HDFSNativeShareDriver - `hdfs_namenode_ip` = the IP address of the HDFS namenode, and only single namenode is supported now - `hdfs_namenode_port` = the port of the HDFS namenode service - `hdfs_ssh_port` = HDFS namenode SSH port - `hdfs_ssh_name` = HDFS namenode SSH login name - `hdfs_ssh_pw` = HDFS namenode SSH login password, this parameter is not necessary, if the following `hdfs_ssh_private_key` is configured - `hdfs_ssh_private_key` = Path to the HDFS namenode private key to ssh login Known Restrictions ------------------ - This driver does not support network segmented multi-tenancy model. 
Instead multi-tenancy is supported by the tenant specific user authentication - Only support for single HDFS namenode in Kilo release The :mod:`manila.share.drivers.hdfs.hdfs_native` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.hdfs.hdfs_native :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/admin/shared-file-systems-snapshots.rst0000664000175000017500000001711013656750227024667 0ustar zuulzuul00000000000000.. _shared_file_systems_snapshots: =============== Share snapshots =============== The Shared File Systems service provides a snapshot mechanism to help users restore data by running the :command:`manila snapshot-create` command. To export a snapshot, create a share from it, then mount the new share to an instance. Copy files from the attached share into the archive. To import a snapshot, create a new share with appropriate size, attach it to instance, and then copy a file from the archive to the attached file system. .. note:: You cannot delete a share while it has saved dependent snapshots. Create a snapshot from the share: .. code-block:: console $ manila snapshot-create Share1 --name Snapshot1 --description "Snapshot of Share1" +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | status | creating | | share_id | aca648eb-8c03-4394-a5cc-755066b7eb66 | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | description | Snapshot of Share1 | | created_at | 2015-09-25T05:27:38.000000 | | size | 1 | | share_proto | NFS | | id | 962e8126-35c3-47bb-8c00-f0ee37f42ddd | | project_id | cadd7139bc3148b8973df097c0911016 | | share_size | 1 | | name | Snapshot1 | +-------------+--------------------------------------+ Update snapshot name or description if needed: .. code-block:: console $ manila snapshot-rename Snapshot1 Snapshot_1 --description "Snapshot of Share1. Updated." Check that status of a snapshot is ``available``: .. code-block:: console $ manila snapshot-show Snapshot1 +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | status | available | | share_id | aca648eb-8c03-4394-a5cc-755066b7eb66 | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | name | Snapshot1 | | created_at | 2015-09-25T05:27:38.000000 | | share_proto | NFS | | id | 962e8126-35c3-47bb-8c00-f0ee37f42ddd | | project_id | cadd7139bc3148b8973df097c0911016 | | size | 1 | | share_size | 1 | | description | Snapshot of Share1 | +-------------+--------------------------------------+ To create a copy of your data from a snapshot, use :command:`manila create` with key ``--snapshot-id``. This creates a new share from an existing snapshot. Create a share from a snapshot and check whether it is available: .. code-block:: console $ manila create nfs 1 --name Share2 --metadata source=snapshot --description "Share from a snapshot." --snapshot-id 962e8126-35c3-47bb-8c00-f0ee37f42ddd +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | None | | share_type_name | default | | description | Share from a snapshot. 
| | availability_zone | None | | share_network_id | None | | export_locations | [] | | share_server_id | None | | share_group_id | None | | host | None | | snapshot_id | 962e8126-35c3-47bb-8c00-f0ee37f42ddd | | is_public | False | | task_state | None | | snapshot_support | True | | id | b6b0617c-ea51-4450-848e-e7cff69238c7 | | size | 1 | | name | Share2 | | share_type | c0086582-30a6-4060-b096-a42ec9d66b86 | | created_at | 2015-09-25T06:25:50.240417 | | export_location | None | | share_proto | NFS | | project_id | 20787a7ba11946adad976463b57d8a2f | | metadata | {u'source': u'snapshot'} | +-----------------------------+--------------------------------------+ $ manila show Share2 +-----------------------------+-------------------------------------------+ | Property | Value | +-----------------------------+-------------------------------------------+ | status | available | | share_type_name | default | | description | Share from a snapshot. | | availability_zone | nova | | share_network_id | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a | | export_locations | 10.254.0.3:/shares/share-1dc2a471-3d47-...| | share_server_id | 41b7829d-7f6b-4c96-aea5-d106c2959961 | | share_group_id | None | | host | manila@generic1#GENERIC1 | | snapshot_id | 962e8126-35c3-47bb-8c00-f0ee37f42ddd | | is_public | False | | task_state | None | | snapshot_support | True | | id | b6b0617c-ea51-4450-848e-e7cff69238c7 | | size | 1 | | name | Share2 | | share_type | c0086582-30a6-4060-b096-a42ec9d66b86 | | created_at | 2015-09-25T06:25:50.000000 | | share_proto | NFS | | project_id | 20787a7ba11946adad976463b57d8a2f | | metadata | {u'source': u'snapshot'} | +-----------------------------+-------------------------------------------+ You can soft-delete a snapshot using :command:`manila snapshot-delete `. If a snapshot is in busy state, and during the delete an ``error_deleting`` status appeared, administrator can force-delete it or explicitly reset the state. Use :command:`snapshot-reset-state [--state ] ` to update the state of a snapshot explicitly. A valid value of a status are ``available``, ``error``, ``creating``, ``deleting``, ``error_deleting``. If no state is provided, the ``available`` state will be used. Use :command:`manila snapshot-force-delete ` to force-delete a specified share snapshot in any state. manila-10.0.0/doc/source/admin/shared-file-systems-share-resize.rst0000664000175000017500000002153113656750227025250 0ustar zuulzuul00000000000000.. _shared_file_systems_share_resize: ============ Resize share ============ To change file share size, use the :command:`manila extend` command and the :command:`manila shrink` command. For most drivers it is safe operation. If you want to be sure that your data is safe, you can make a share back up by creating a snapshot of it. You can extend and shrink the share with the :command:`manila extend` and :command:`manila shrink` commands respectively, and specify the share with the new size that does not exceed the quota. For details, see :ref:`Quotas and Limits `. You also cannot shrink share size to 0 or to a greater value than the current share size. While extending, the share has an ``extending`` status. This means that the increase share size request was issued successfully. To extend the share and check the result, run: .. 
code-block:: console $ manila extend docs_resize 2 $ manila show docs_resize +----------------------+--------------------------------------------------------------------------+ | Property | Value | +----------------------+--------------------------------------------------------------------------+ | status | available | | share_type_name | my_type | | description | None | | availability_zone | nova | | share_network_id | None | | export_locations | | | | path = 1.0.0.4:/shares/manila_share_b8afc508_8487_442b_b170_ea65b07074a8 | | | preferred = False | | | is_admin_only = False | | | id = 3ffb76f4-92b9-4639-83fd-025bc3e302ff | | | share_instance_id = b8afc508-8487-442b-b170-ea65b07074a8 | | | path = 2.0.0.3:/shares/manila_share_b8afc508_8487_442b_b170_ea65b07074a8 | | | preferred = False | | | is_admin_only = True | | | id = 1f0e263f-370d-47d3-95f6-1be64088b9da | | | share_instance_id = b8afc508-8487-442b-b170-ea65b07074a8 | | share_server_id | None | | share_group_id | None | | host | manila@paris#shares | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | b07dbebe-a328-403c-b402-c8871c89e3d1 | | size | 2 | | name | docs_resize | | share_type | 14ee8575-aac2-44af-8392-d9c9d344f392 | | has_replicas | False | | replication_type | None | | created_at | 2016-03-25T15:33:18.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +----------------------+--------------------------------------------------------------------------+ While shrinking, the share has a ``shrinking`` status. This means that the decrease share size request was issued successfully. To shrink the share and check the result, run: .. code-block:: console $ manila shrink docs_resize 1 $ manila show docs_resize +----------------------+--------------------------------------------------------------------------+ | Property | Value | +----------------------+--------------------------------------------------------------------------+ | status | available | | share_type_name | my_type | | description | None | | availability_zone | nova | | share_network_id | None | | export_locations | | | | path = 1.0.0.4:/shares/manila_share_b8afc508_8487_442b_b170_ea65b07074a8 | | | preferred = False | | | is_admin_only = False | | | id = 3ffb76f4-92b9-4639-83fd-025bc3e302ff | | | share_instance_id = b8afc508-8487-442b-b170-ea65b07074a8 | | | path = 2.0.0.3:/shares/manila_share_b8afc508_8487_442b_b170_ea65b07074a8 | | | preferred = False | | | is_admin_only = True | | | id = 1f0e263f-370d-47d3-95f6-1be64088b9da | | | share_instance_id = b8afc508-8487-442b-b170-ea65b07074a8 | | share_server_id | None | | share_group_id | None | | host | manila@paris#shares | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | b07dbebe-a328-403c-b402-c8871c89e3d1 | | size | 1 | | name | docs_resize | | share_type | 14ee8575-aac2-44af-8392-d9c9d344f392 | | has_replicas | False | | replication_type | None | | created_at | 2016-03-25T15:33:18.000000 | | share_proto | NFS | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +----------------------+--------------------------------------------------------------------------+ manila-10.0.0/doc/source/admin/generic_driver.rst0000664000175000017500000001060413656750227021747 0ustar zuulzuul00000000000000.. Copyright 2014 Mirantis Inc. All Rights Reserved. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Generic approach for share provisioning ======================================= The Shared File Systems service can be configured to use Nova VMs and Cinder volumes. There are two modules that handle them in manila: 1) 'service_instance' module creates VMs in Nova with predefined image called service image. This module can be used by any backend driver for provisioning of service VMs to be able to separate share resources among tenants. 2) 'generic' module operates with Cinder volumes and VMs created by 'service_instance' module, then creates shared filesystems based on volumes attached to VMs. Network configurations ---------------------- Each backend driver can handle networking in its own way, see: https://wiki.openstack.org/wiki/Manila/Networking One of two possible configurations can be chosen for share provisioning using 'service_instance' module: - Service VM has one net interface from net that is connected to public router. For successful creation of share, user network should be connected to public router too. - Service VM has two net interfaces, first one connected to service network, second one connected directly to user's network. Requirements for service image ------------------------------ - Linux based distro - NFS server - Samba server >=3.2.0, that can be configured by data stored in registry - SSH server - Two net interfaces configured to DHCP (see network approaches) - 'exportfs' and 'net conf' libraries used for share actions - Following files will be used, so if their paths differ one needs to create at least symlinks for them: * /etc/exports (permanent file with NFS exports) * /var/lib/nfs/etab (temporary file with NFS exports used by 'exportfs') * /etc/fstab (permanent file with mounted filesystems) * /etc/mtab (temporary file with mounted filesystems used by 'mount') Supported shared filesystems ---------------------------- - NFS (access by IP) - CIFS (access by IP) Known restrictions ------------------ - One of Nova's configurations only allows 26 shares per server. This limit comes from the maximum number of virtual PCI interfaces that are used for block device attaching. There are 28 virtual PCI interfaces, in this configuration, two of them are used for server needs and other 26 are used for attaching block devices that are used for shares. - Juno version works only with Neutron. Each share should be created with neutron-net and neutron-subnet IDs provided via share-network entity. - Juno version handles security group, flavor, image, keypair for Nova VM and also creates service networks, but does not use availability zones for Nova VMs and volume types for Cinder block devices. - Juno version does not use security services data provided with share-network. These data will be just ignored. - Liberty version adds a share extend capability. Share access will be briefly interrupted during an extend operation. 
- Liberty version adds a share shrink capability, but this capability is not effective because generic driver shrinks only filesystem size and doesn't shrink the size of Cinder volume. Using Windows instances ~~~~~~~~~~~~~~~~~~~~~~~ While the generic driver only supports Linux instances, you may use the Windows SMB driver when Windows VMs are preferred. For more details, please check out the following page: :ref:`windows_smb_driver`. The :mod:`manila.share.drivers.generic` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.generic :noindex: :members: :undoc-members: :show-inheritance: The :mod:`manila.share.drivers.service_instance` Module ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: manila.share.drivers.service_instance :noindex: :members: :undoc-members: :show-inheritance: manila-10.0.0/doc/source/configuration/0000775000175000017500000000000013656750362020004 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/configuration/shared-file-systems/0000775000175000017500000000000013656750362023674 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/configuration/shared-file-systems/config-options.rst0000664000175000017500000000113213656750227027361 0ustar zuulzuul00000000000000================== Additional options ================== These options can also be set in the ``manila.conf`` file. .. include:: ../tables/manila-ca.inc .. include:: ../tables/manila-common.inc .. include:: ../tables/manila-compute.inc .. include:: ../tables/manila-ganesha.inc .. include:: ../tables/manila-hnas.inc .. include:: ../tables/manila-quota.inc .. include:: ../tables/manila-redis.inc .. include:: ../tables/manila-san.inc .. include:: ../tables/manila-scheduler.inc .. include:: ../tables/manila-share.inc .. include:: ../tables/manila-tegile.inc .. include:: ../tables/manila-winrm.inc manila-10.0.0/doc/source/configuration/shared-file-systems/overview.rst0000664000175000017500000001017513656750227026300 0ustar zuulzuul00000000000000=============================================== Introduction to the Shared File Systems service =============================================== The Shared File Systems service provides shared file systems that Compute instances can consume. The overall Shared File Systems service is implemented via the following specific services: manila-api A WSGI app that authenticates and routes requests throughout the Shared File Systems service. It supports the OpenStack APIs. manila-data A standalone service whose purpose is to receive requests, process data operations with potentially long running time such as copying, share migration or backup. manila-scheduler Schedules and routes requests to the appropriate share service. The scheduler uses configurable filters and weighers to route requests. The Filter Scheduler is the default and enables filters on things like Capacity, Availability Zone, Share Types, and Capabilities as well as custom filters. manila-share Manages back-end devices that provide shared file systems. A manila-share service can run in one of two modes, with or without handling of share servers. Share servers export file shares via share networks. When share servers are not used, the networking requirements are handled outside of Manila. The Shared File Systems service contains the following components: **Back-end storage devices** The Shared File Services service requires some form of back-end shared file system provider that the service is built on. 
The reference implementation uses the Block Storage service (Cinder) and a service VM to provide shares. Additional drivers are used to access shared file systems from a variety of vendor solutions. **Users and tenants (projects)** The Shared File Systems service can be used by many different cloud computing consumers or customers (tenants on a shared system), using role-based access assignments. Roles control the actions that a user is allowed to perform. In the default configuration, most actions do not require a particular role unless they are restricted to administrators, but this can be configured by the system administrator in the appropriate ``policy.json`` file that maintains the rules. A user's access to manage particular shares is limited by tenant. Guest access to mount and use shares is secured by IP and/or user access rules. Quotas used to control resource consumption across available hardware resources are per tenant. For tenants, quota controls are available to limit: - The number of shares that can be created. - The number of gigabytes that can be provisioned for shares. - The number of share snapshots that can be created. - The number of gigabytes that can be provisioned for share snapshots. - The number of share networks that can be created. - The number of share groups that can be created. - The number of share group snapshots that can be created. You can revise the default quota values with the Shared File Systems CLI, so the limits placed by quotas are editable by admin users. **Shares, snapshots, and share networks** The basic resources offered by the Shared File Systems service are shares, snapshots and share networks: **Shares** A share is a unit of storage with a protocol, a size, and an access list. Shares are the basic primitive provided by Manila. All shares exist on a backend. Some shares are associated with share networks and share servers. The main protocols supported are NFS and CIFS, but other protocols are supported as well. **Snapshots** A snapshot is a point in time copy of a share. Snapshots can only be used to create new shares (containing the snapshotted data). Shares cannot be deleted until all associated snapshots are deleted. **Share networks** A share network is a tenant-defined object that informs Manila about the security and network configuration for a group of shares. Share networks are only relevant for backends that manage share servers. A share network contains a security service and network/subnet. manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/0000775000175000017500000000000013656750362025352 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/hpe-3par-share-driver.rst0000664000175000017500000006201413656750227032117 0ustar zuulzuul00000000000000==================================== HPE 3PAR Driver for OpenStack Manila ==================================== The HPE 3PAR driver provides NFS and CIFS shared file systems to OpenStack using HPE 3PAR's File Persona capabilities. For information on HPE 3PAR Driver for OpenStack Manila, refer to `content kit page `_. 
HPE 3PAR File Persona Software Suite concepts and terminology ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The software suite comprises the following managed objects: - File Provisioning Groups (FPGs) - Virtual File Servers (VFSs) - File Stores - File Shares The File Persona Software Suite is built upon the resilient mesh-active architecture of HPE 3PAR StoreServ and benefits from HPE 3PAR storage foundation of wide-striped logical disks and autonomic ``Common Provisioning Groups (CPGs)``. A CPG can be shared between file and block to create the File Shares or the logical unit numbers (LUNs) to provide true convergence. ``A File Provisioning Group (FPG)`` is an instance of the HPE intellectual property Adaptive File System. It controls how files are stored and retrieved. Each FPG is transparently constructed from one or multiple Virtual Volumes (VVs) and is the unit for replication and disaster recovery for File Persona Software Suite. There are up to 16 FPGs supported on a node pair. ``A Virtual File Server (VFS)`` is conceptually like a server. As such, it presents virtual IP addresses to clients, participates in user authentication services, and can have properties for such things as user/group quota management and antivirus policies. Up to 16 VFSs are supported on a node pair, one per FPG. ``File Stores`` are the slice of a VFS and FPG at which snapshots are taken, capacity quota management can be performed, and antivirus scan service policies customized. There are up to 256 File Stores supported on a node pair, 16 File Stores per VFS. ``File Shares`` are what provide data access to clients via SMB, NFS, and the Object Access API, subject to the share permissions applied to them. Multiple File Shares can be created for a File Store and at different directory levels within a File Store. Supported shared filesystems ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports CIFS and NFS shares. Operations supported ~~~~~~~~~~~~~~~~~~~~ - Create a share. – Share is not accessible until access rules allow access. - Delete a share. - Allow share access. Note the following limitations: – IP access rules are required for NFS share access. – User access rules are not allowed for NFS shares. – User access rules are required for SMB share access. – User access requires a File Persona local user for SMB shares. – Shares are read/write (and subject to ACLs). - Deny share access. - Create a snapshot. - Delete a snapshot. - Create a share from a snapshot. - Extend a share. - Shrink a share. - Share networks. HPE 3PAR File Persona driver can be configured to work with or without share networks. When using share networks, the HPE 3PAR driver allocates an FSIP on the back end FPG (VFS) to match the share network's subnet and segmentation ID. Security groups associated with share networks are ignored. Operations not supported ~~~~~~~~~~~~~~~~~~~~~~~~ - Manage and unmanage - Manila Experimental APIs (consistency groups, replication, and migration) were added in Mitaka but have not yet been implemented by the HPE 3PAR File Persona driver. Requirements ~~~~~~~~~~~~ On the OpenStack host running the Manila share service: - python-3parclient version 4.2.0 or newer from PyPI. On the HPE 3PAR array: - HPE 3PAR Operating System software version 3.2.1 MU3 or higher. - The array class and hardware configuration must support File Persona. 
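As a quick check before moving on to array-side setup, you can verify that the client library requirement is met on the host running the manila-share service. This is only an illustrative verification; the version shown below is an example and will differ in your environment.

.. code-block:: console

   $ pip show python-3parclient | grep -i version
   Version: 4.2.2

On the array side, the HPE 3PAR CLI ``showversion`` command can be used to confirm that the HPE 3PAR Operating System release is 3.2.1 MU3 or higher.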
Pre-configuration on the HPE 3PAR StoreServ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following HPE 3PAR CLI commands show how to set up the HPE 3PAR StoreServ to use File Persona with OpenStack Manila. HPE 3PAR File Persona must be initialized, and started on the HPE 3PAR storage. .. code-block:: console cli% startfs 0:2:1 1:2:1 cli% setfs nodeip -ipaddress 10.10.10.11 -subnet 255.255.240.0 0 cli% setfs nodeip -ipaddress 10.10.10.12 -subnet 255.255.240.0 1 cli% setfs dns 192.168.8.80,127.127.5.50 foo.com,bar.com cli% setfs gw 10.10.10.10 - A File Provisioning Group (FPG) must be created for use with the Shared File Systems service. .. code-block:: console cli% createfpg examplecpg examplefpg 18T - A Virtual File Server (VFS) must be created on the FPG. - The VFS must be configured with an appropriate share export IP address. .. code-block:: console cli% createvfs -fpg examplefpg 10.10.10.101 255.255.0.0 examplevfs - A local user in the Administrators group is needed for CIFS (SMB) shares. .. code-block:: console cli% createfsgroup fsusers cli% createfsuser –passwd -enable true -grplist Users,Administrators –primarygroup fsusers fsadmin - The WSAPI with HTTP and/or HTTPS must be enabled and started. .. code-block:: console cli% setwsapi -https enable cli% startwsapi HPE 3PAR shared file system driver configuration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Install the python-3parclient python package on the OpenStack Block Storage system: .. code-block:: console $ pip install 'python-3parclient>=4.0,<5.0' - Manila configuration file The Manila configuration file (typically ``/etc/manila/manila.conf``) defines and configures the Manila drivers and backends. After updating the configuration file, the Manila share service must be restarted for changes to take effect. - Enable share protocols To enable share protocols, an optional list of supported protocols can be specified using the ``enabled_share_protocols`` setting in the ``DEFAULT`` section of the ``manila.conf`` file. The default is ``NFS, CIFS`` which allows both protocols supported by HPE 3PAR (NFS and SMB). Where Manila uses the term ``CIFS``, HPE 3PAR uses the term ``SMB``. Use the ``enabled_share_protocols`` option if you want to only provide one type of share (for example, only NFS) or if you want to explicitly avoid the introduction of other protocols that can be added for other drivers in the future. - Enable share back ends In the ``[DEFAULT]`` section of the Manila configuration file, use the ``enabled_share_backends`` option to specify the name of one or more back-end configuration sections to be enabled. To enable multiple back ends, use a comma-separated list. .. note:: The name of the backend's configuration section is used (which may be different from the ``share_backend_name`` value) - Configure each back end For each back end, a configuration section defines the driver and back end options. These include common Manila options, as well as driver-specific options. The following ``Driver options`` section describes the parameters that need to be configured in the Manila configuration file for the HPE 3PAR driver. Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to the share driver. .. include:: ../../tables/manila-hpe3par.inc HPE 3PAR Manila driver configuration example ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following parameters shows a sample subset of the ``manila.conf`` file, which configures two backends and the relevant ``[DEFAULT]`` options. 
A real configuration would include additional ``[DEFAULT]`` options and additional sections that are not discussed in this document. In this example, the backends are using different FPGs on the same array: .. code-block:: ini [DEFAULT] enabled_share_backends = HPE1,HPE2 enabled_share_protocols = NFS,CIFS default_share_type = default [HPE1] share_backend_name = HPE3PAR1 share_driver = manila.share.drivers.hpe.hpe_3par_driver.HPE3ParShareDriver driver_handles_share_servers = False max_over_subscription_ratio = 1 hpe3par_fpg = examplefpg,10.10.10.101 hpe3par_san_ip = 10.20.30.40 hpe3par_api_url = https://10.20.30.40:8080/api/v1 hpe3par_username = hpe3par_password = hpe3par_san_login = hpe3par_san_password = hpe3par_debug = False hpe3par_cifs_admin_access_username = hpe3par_cifs_admin_access_password = [HPE2] share_backend_name = HPE3PAR2 share_driver = manila.share.drivers.hpe.hpe_3par_driver.HPE3ParShareDriver driver_handles_share_servers = False max_over_subscription_ratio = 1 hpe3par_fpg = examplefpg2,10.10.10.102 hpe3par_san_ip = 10.20.30.40 hpe3par_api_url = https://10.20.30.40:8080/api/v1 hpe3par_username = hpe3par_password = hpe3par_san_login = hpe3par_san_password = hpe3par_debug = False hpe3par_cifs_admin_access_username = hpe3par_cifs_admin_access_password = Network approach ~~~~~~~~~~~~~~~~ Network connectivity between the storage array (SSH/CLI and WSAPI) and the Manila host is required for share management. Network connectivity between the clients and the VFS is required for mounting and using the shares. This includes: - Routing from the client to the external network. - Assigning the client an external IP address, for example a floating IP. - Configuring the Shared File Systems service host networking properly for IP forwarding. - Configuring the VFS networking properly for client subnets. - Configuring network segmentation, if applicable. In the OpenStack Kilo release, the HPE 3PAR driver did not support share networks. Share access from clients to HPE 3PAR shares required external network access (external to OpenStack) and was set up and configured outside of Manila. In the OpenStack Liberty release, the HPE 3PAR driver could run with or without share networks. The configuration option ``driver_handles_share_servers``( ``True`` or ``False`` ) indicated whether share networks could be used. When set to ``False``, the HPE 3PAR driver behaved as described earlier for Kilo. When set to ``True``, the share network's subnet, segmentation ID and IP address range were used to allocate an FSIP on the HPE 3PAR. There is a limit of four FSIPs per VFS. For clients to communicate with shares via this FSIP, the client must have access to the external network using the subnet and segmentation ID of the share network. For example, the client must be routed to the neutron provider network with external access. The Manila host networking configuration and network switches must support the subnet routing. If the VLAN segmentation ID is used, communication with the share will use the FSIP IP address. Neutron networking is required for HPE 3PAR share network support. Flat and VLAN provider networks are supported, but the HPE 3PAR driver does not support share network security groups. Share access ~~~~~~~~~~~~ A share that is mounted before access is allowed can appear to be an empty read-only share. After granting access, the share must be remounted. - IP access rules are required for NFS. - SMB shares require user access rules. 
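For example, if an NFS client mounted the share before the access rule existed, remounting it after access has been granted makes the data visible. The commands below are only a sketch: the VFS IP address matches the example VFS created earlier in this document, and the export path is a placeholder that should be replaced with the value reported by ``manila share-export-location-list`` for your share.

.. code-block:: console

   $ sudo umount /mnt/myshare
   $ sudo mount -t nfs 10.10.10.101:/path/from/export/location /mnt/myshare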
With the proper access rules, share access is not limited to the OpenStack environment. Access rules added via Manila or directly in HPE 3PAR CLI can be used to allow access to clients outside of the stack. The HPE 3PAR VFS/FSIP settings determine the subnets visible for HPE 3PAR share access. - IP access rules To allow IP access to a share in the horizon UI, find the share in the Project|Manage Compute|Shares view. Use the ``Manage Rules`` action to add a rule. Select IP as the access type, and enter the external IP address (for example, the floating IP) of the client in the ``Access to`` box. You can also use the command line to allow IP access to a share in the horizon UI with the command: .. code-block:: console $ manila access-allow ip - User access rules To allow user access to a share in the horizon UI, find the share in the Project|Manage Compute|Shares view. Use the ``Manage Rules`` action to add a rule. Select user as the access type and enter user name in the ``Access to`` box. You can also use the command line to allow user access to a share in the horizon UI with the command: .. code-block:: console $ manila access-allow user The user name must be an HPE 3PAR user. Share access is different from file system permissions, for example, ACLs on files and folders. If a user wants to read a file, the user must have at least read permissions on the share and an ACL that grants him read permissions on the file or folder. Even with full control share access, it does not mean every user can do everything due to the additional restrictions of the folder ACLs. To modify the file or folder ACLs, allow access to an HPE 3PAR File Persona local user that is in the administrator's group and connect to the share using that user's credentials. Then, use the appropriate mechanism to modify the ACL or permissions to allow different access than what is provided by default. .. _Share types: Share types ~~~~~~~~~~~ When creating a share, a share type can be specified to determine where and how the share will be created. If a share type is not specified, the ``default_share_type`` set in the Shared File Systems service configuration file is used. Manila share types are a type or label that can be selected at share creation time in OpenStack. These types can be created either in the ``Admin`` horizon UI or using the command line, as follows: .. code-block:: console $ manila --os-username admin --os-tenant-name demo type-create –is_public false false The ```` is the name of the new share type. False at the end specifies ``driver_handles_share_servers=False``. The ``driver_handles_share_servers`` setting in the share type needs to match the setting configured for the back end in the ``manila.conf`` file. ``is_public`` is used to indicate whether this share type is applicable to all tenants or will be assigned to specific tenants. ``--os-username admin --os-tenant-name demo`` are only needed if your environment variables do not specify the desired user and tenant. For share types that are not public, use Manila ``type-access-add`` to assign the share type to a tenant. - Using share types to require share networks The Shared File Systems service requires that the share type include the ``driver_handles_share_servers`` extra-spec. This ensures that the share is created on a back end that supports the requested ``driver_handles_share_servers`` (share networks) capability. From the Liberty release forward, both ``True`` and ``False`` are supported. 
The ``driver_handles_share_servers`` setting in the share type must match the setting in the back end configuration. - Using share types to select backends by name Administrators can optionally specify that a particular share type be explicitly associated with a single back end (or group of backends) by including the extra spec share_backend_name to match the name specified within the ``share_backend_name`` option in the back end configuration. When a share type is not selected during share creation, the default share type is used. To prevent creating these shares on any back end, the default share type needs to be specific enough to find appropriate default backends (or to find none if the default should not be used). The following example shows how to set share_backend_name for a share type. .. code-block:: console $ manila --os-username admin --os-tenant-name demo type-key set share_backend_name=HPE3PAR2 - Using share types to select backends with capabilities The HPE 3PAR driver automatically reports capabilities based on the FPG used for each back end. An administrator can create share types with extra specs, which controls share types that can use FPGs with or without specific capabilities. With the OpenStack Liberty release or later, below section shows the extra specs used with the capabilities filter and the HPE 3PAR driver: ``hpe3par_flash_cache`` When the value is set to `` True`` (or `` False``), shares of this type are only created on a back end that uses HPE 3PAR Adaptive Flash Cache. For Adaptive Flash Cache, the HPE 3PAR StoreServ Storage array must meet the following requirements: - Adaptive Flash Cache enabled - Available SSDs - Adaptive Flash Cache must be enabled on the HPE 3PAR StoreServ Storage array. This is done with the following CLI command: .. code-block:: console cli% createflashcache ```` must be in 16 GB increments. For example, the below command creates 128 GB of Flash Cache for each node pair in the array. .. code-block:: console cli% createflashcache 128g - Adaptive Flash Cache must be enabled for the VV set used by an FPG. For example, ``setflashcache vvset:``. The VV set name is the same as the FPG name. .. note:: This setting affects all shares in that FPG (on that back end). ``Dedupe`` When the value is set to `` True`` (or `` False``), shares of this type are only created on a back end that uses deduplication. For HPE 3PAR File Persona, the provisioning type is determined when the FPG is created. Using the ``createfpg –tdvv`` option creates an FPG that supports both dedupe and thin provisioning. The thin deduplication must be enabled to use the tdvv option. ``thin_provisioning`` When the value is set to `` True`` (or `` False``), shares of this type are only created on a back end that uses thin (or full) provisioning. For HPE 3PAR File Persona, the provisioning type is determined when the FPG is created. By default, FPGs are created with thin provisioning. The capacity filter uses the total provisioned space and configured ``max_oversubscription_ratio`` when filtering and weighing backends that use thin provisioning. - Using share types to influence share creation options Scoped extra-specs are used to influence vendor-specific implementation details. Scoped extra-specs use a prefix followed by a colon. For HPE 3PAR, these extra specs have a prefix of hpe3par. 
The following HPE 3PAR extra-specs are used when creating CIFS (SMB) shares: ``hpe3par:smb_access_based_enum`` ``smb_access_based_enum`` (Access Based Enumeration) specifies if users can see only the files and directories to which they have been allowed access on the shares. Valid values are ``True`` or ``False``. The default is ``False``. ``hpe3par:smb_continuous_avail`` ``smb_continuous_avail`` (Continuous Availability) specifies if continuous availability features of SMB3 should be enabled for this share. Valid values are ``True`` or ``False``. The default is ``True``. ``hpe3par:smb_cache`` ``smb_cache`` specifies client-side caching for offline files. The default value is ``manual``. Valid values are: - ``off`` — the client must not cache any files from this share. The share is configured to disallow caching. - ``manual`` — the client must allow only manual caching for the files open from this share. - ``optimized`` — the client may cache every file that it opens from this share. Also, the client may satisfy the file requests from its local cache. The share is configured to allow automatic caching of programs and documents. - ``auto`` — the client may cache every file that it opens from this share. The share is configured to allow automatic caching of documents. When creating NFS shares, the following HPE 3PAR extra-specs are used: ``hpe3par:nfs_options`` Comma separated list of NFS export options. The NFS export options have the following limitations: ``ro`` and ``rw`` are not allowed (will be determined by the driver) ``no_subtree_check`` and ``fsid`` are not allowed per HPE 3PAR CLI support ``(in)secure`` and ``(no_)root_squash`` are not allowed because the HPE 3PAR driver controls those settings All other NFS options are forwarded to the HPE 3PAR as part of share creation. The HPE 3PAR performs additional validation at share creation time. For details, see the HPE 3PAR CLI help. Implementation characteristics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Shares from snapshots - When a share is created from a snapshot, the share must be deleted before the snapshot can be deleted. This is enforced by the driver. - A snapshot of an empty share will appear to work correctly, but attempting to create a share from an empty share snapshot may fail with an ``NFS Create export`` error. - HPE 3PAR File Persona snapshots are for an entire File Store. In Manila, they appear as snapshots of shares. A share sub-directory is used to give the appearance of a share snapshot when using ``create share from snapshot`` . - Snapshots - For HPE 3PAR File Persona, snapshots are per File Store and not per share. So, the HPE 3PAR limit of 1024 snapshots per File Store results in a Manila limit of 1024 snapshots per tenant on each back end FPG. - Before deleting a share, you must delete its snapshots. This is enforced by Manila. For HPE 3PAR File Persona, this also kicks off a snapshot reclamation. - Size enforcement Manila users create shares with size limits. HPE 3PAR enforces size limits by using File Store quotas. When using ``hpe3par_fstore_per_share``= ``True``(the non-default setting) there is only one share per File Store, so the size enforcement acts as expected. When using ``hpe3par_fstore_per_share`` = ``False`` (the default), the HPE 3PAR Manila driver uses one File Store for multiple shares. In this case, the size of the File Store limit is set to the cumulative limit of its Manila share sizes. 
This can allow one tenant share to exceed the limit and affect the space available for the same tenant's other shares. One tenant cannot use another tenant's File Store. - File removal When shares are removed and the ``hpe3par_fstore_per_share``=``False`` setting is used (the default), files may be left behind in the File Store. Prior to Mitaka, removal of obsolete share directories and files that have been stranded would require tools outside of OpenStack/Manila. In Mitaka and later, the driver mounts the File Store to remove the deleted share's subdirectory and files. For SMB/CIFS share, it requires the ``hpe3par_cifs_admin_access_username`` and ``hpe3par_cifs_admin_access_password`` configuration. If the mount and delete cannot be performed, an error is logged and the share is deleted in Manila. Due to the potential space held by leftover files, File Store quotas are not reduced when shares are removed. - Multi-tenancy - Network The ``driver_handles_share_servers`` configuration setting determines whether share networks are supported. When ``driver_handles_share_servers`` is set to ``True``, a share network is required to create a share. The administrator creates share networks with the desired network, subnet, IP range, and segmentation ID. The HPE 3PAR is configured with an FSIP using the same subnet and segmentation ID and an IP address allocated from the neutron network. Using share network-specific IP addresses, subnets, and segmentation IDs give the appearance of better tenant isolation. Shares on an FPG, however, are accessible via any of the FSIPs (subject to access rules). Back end filtering should be used for further separation. - Back end filtering A Manila HPE 3PAR back end configuration refers to a specific array and a specific FPG. With multiple backends and multiple tenants, the scheduler determines where shares will be created. In a scenario where an array or back end needs to be restricted to one or more specific tenants, share types can be used to influence the selection of a back end. For more information on using share types, see `Share types`_ . - Tenant limit The HPE 3PAR driver uses one File Store per tenant per protocol in each configured FPG. When only one back end is configured, this results in a limit of eight tenants (16 if only using one protocol). Use multiple back end configurations to introduce additional FPGs on the same array to increase the tenant limit. When using share networks, an FSIP is created for each share network (when its first share is created on the back end). The HPE 3PAR supports 4 FSIPs per FPG (VFS). One of those 4 FSIPs is reserved for the initial VFS IP, so the share network limit is 48 share networks per node pair. manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/ibm-spectrumscale-driver.rst0000664000175000017500000001243213656750227033016 0ustar zuulzuul00000000000000=============================== IBM Spectrum Scale share driver =============================== IBM Spectrum Scale is a flexible software-defined storage product that can be deployed as high-performance file storage or a cost optimized large-scale content repository. IBM Spectrum Scale, previously known as IBM General Parallel File System (GPFS), is designed to scale performance and capacity with no bottlenecks. IBM Spectrum Scale is a cluster file system that provides concurrent access to file systems from multiple nodes. The storage provided by these nodes can be direct attached, network attached, SAN attached, or a combination of these methods. 
Spectrum Scale provides many features beyond common data access, including data replication, policy based storage management, and space efficient file snapshot and clone operations. Supported shared filesystems and operations (NFS shares only) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Spectrum Scale share driver supports NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. - Only IP access type is supported. - Both RW & RO access level is supported. - Deny share access. - Create a share snapshot. - Delete a share snapshot. - Create a share from a snapshot. - Extend a share. - Manage a share. - Unmanage a share. Requirements ~~~~~~~~~~~~ Spectrum Scale must be installed and a cluster must be created that includes one or more storage nodes and protocol server nodes. The NFS server running on these nodes is used to export shares to storage consumers in OpenStack virtual machines or even to bare metal storage consumers in the OpenStack environment. A file system must also be created and mounted on these nodes before configuring the manila service to use Spectrum Scale storage. For more details, refer to `Spectrum Scale product documentation `_. Spectrum Scale supports two ways of exporting data through NFS with high availability. #. CES (which uses Ganesha NFS) * This is provided inherently by the protocol support in Spectrum Scale and is a recommended method for NFS access. #. CNFS (which uses kernel NFS) For more information on NFS support in Spectrum Scale, refer to `Protocol support in Spectrum Scale `_ and `NFS Support overview in Spectrum Scale `_. The following figure is an example of Spectrum Scale architecture with OpenStack services: .. figure:: ../../figures/openstack-spectrumscale-setup.JPG :width: 90% :align: center :alt: OpenStack with Spectrum Scale Setup Quotas should be enabled for the Spectrum Scale filesystem to be exported through NFS using Spectrum Scale share driver. Use the following command to enable quota for a filesystem: .. code-block:: console $ mmchfs -Q yes Limitation ~~~~~~~~~~ Spectrum Scale share driver currently supports creation of NFS shares in the flat network space only. For example, the Spectrum Scale storage node exporting the data should be in the same network as that of the Compute VMs which mount the shares acting as NFS clients. Driver configuration ~~~~~~~~~~~~~~~~~~~~ Spectrum Scale share driver supports creation of shares using both NFS servers (Ganesha using Spectrum Scale CES/Kernel NFS). For both the NFS server types, you need to set the ``share_driver`` in the ``manila.conf`` as: .. code-block:: ini share_driver = manila.share.drivers.ibm.gpfs.GPFSShareDriver Spectrum Scale CES (NFS Ganesha server) --------------------------------------- To use Spectrum Scale share driver in this mode, set the ``gpfs_share_helpers`` in the ``manila.conf`` as: .. code-block:: ini gpfs_share_helpers = CES=manila.share.drivers.ibm.gpfs.CESHelper Following table lists the additional configuration options which are used with this driver configuration. .. include:: ../../tables/manila-spectrumscale_ces.inc .. note:: Configuration options related to ssh are required only if ``is_gpfs_node`` is set to ``False``. Spectrum Scale Clustered NFS (Kernel NFS server) ------------------------------------------------ To use Spectrum Scale share driver in this mode, set the ``gpfs_share_helpers`` in the ``manila.conf`` as: .. 
code-block:: ini gpfs_share_helpers = KNFS=manila.share.drivers.ibm.gpfs.KNFSHelper Following table lists the additional configuration options which are used with this driver configuration. .. include:: ../../tables/manila-spectrumscale_knfs.inc .. note:: Configuration options related to ssh are required only if ``is_gpfs_node`` is set to ``False``. Share creation steps ~~~~~~~~~~~~~~~~~~~~ Sample configuration -------------------- .. code-block:: ini [gpfs] share_driver = manila.share.drivers.ibm.gpfs.GPFSShareDriver gpfs_share_export_ip = x.x.x.x gpfs_mount_point_base = /ibm/gpfs0 gpfs_nfs_server_type = CES is_gpfs_node = True gpfs_share_helpers = CES=manila.share.drivers.ibm.gpfs.CESHelper share_backend_name = GPFS driver_handles_share_servers = False Create GPFS share type and set extra spec ----------------------------------------- .. code-block:: console $ manila type-create --snapshot_support True \ --create_share_from_snapshot_support True gpfs False $ manila type-key gpfs set share_backend_name=GPFS manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/hitachi-hsp-driver.rst0000664000175000017500000001560413656750227031604 0ustar zuulzuul00000000000000=================================================================== Hitachi Hyper Scale-Out Platform File Services Driver for OpenStack =================================================================== The Hitachi Hyper Scale-Out Platform File Services Driver for OpenStack provides the management of file shares, supporting NFS shares with IP based rules to control access. It has a layer that handles the complexity of the protocol used to communicate to Hitachi Hyper Scale-Out Platform via a RESTful API, formatting and sending requests to the backend. Requirements ~~~~~~~~~~~~ - Hitachi Hyper Scale-Out Platform (HSP) version 1.1. - HSP user with ``file-system-full-access`` role. - Established network connection between the HSP interface and OpenStack nodes. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports NFS shares. The following operations are supported: - Create a share. - Delete a share. - Extend a share. - Shrink a share. - Allow share access. - Deny share access. - Manage a share. - Unmanage a share. .. note:: - Only ``IP`` access type is supported - Both ``RW`` and ``RO`` access levels supported Known restrictions ~~~~~~~~~~~~~~~~~~ - The Hitachi HSP allows only 1024 virtual file systems per cluster. This determines the limit of shares the driver can provide. - The Hitachi HSP file systems must have at least 128 GB. This means that all shares created by Shared File Systems service should have 128 GB or more. .. note:: The driver has an internal filter function that accepts only requests for shares size greater than or equal to 128 GB, otherwise the request will fail or be redirected to another available storage backend. Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to the share driver. .. include:: ../../tables/manila-hds_hsp.inc Network approach ~~~~~~~~~~~~~~~~ .. note:: In the driver mode used by HSP Driver (DHSS = ``False``), the driver does not handle network configuration, it is up to the administrator to configure it. * Configure the network of the manila-share, Compute and Networking nodes to reach HSP interface. For this, your provider network should be capable of reaching HSP Cluster-Virtual-IP. 
These connections are mandatory so nova instances are capable of accessing shares provided by the backend. * The following image represents a valid scenario: .. image:: ../../figures/hsp_network.png :width: 60% .. note:: To HSP, the Virtual IP is the address through which clients access shares and the Shared File Systems service sends commands to the management interface. This IP can be checked in HSP using its CLI: .. code-block:: console $ hspadm ip-address list Back end configuration ~~~~~~~~~~~~~~~~~~~~~~ #. Configure HSP driver according to your environment. This example shows a valid HSP driver configuration: .. code-block:: ini [DEFAULT] # ... enabled_share_backends = hsp1 enabled_share_protocols = NFS # ... [hsp1] share_backend_name = HITACHI1 share_driver = manila.share.drivers.hitachi.hsp.driver.HitachiHSPDriver driver_handles_share_servers = False hitachi_hsp_host = 172.24.47.190 hitachi_hsp_username = admin hitachi_hsp_password = admin_password #. Configure HSP share type. .. note:: Shared File Systems service requires that the share type includes the ``driver_handles_share_servers`` extra-spec. This ensures that the share will be created on a backend that supports the requested ``driver_handles_share_servers`` capability. Also, ``snapshot_support`` extra-spec should be provided if its value differs from the default value (``True``), as this driver version that currently does not support snapshot operations. For this driver both extra-specs must be set to ``False``. .. code-block:: console $ manila type-create --snapshot_support False hsp False #. Restart all Shared File Systems services (``manila-share``, ``manila-scheduler`` and ``manila-api``). Manage and unmanage shares ~~~~~~~~~~~~~~~~~~~~~~~~~~ The Shared File Systems service has the ability to manage and unmanage shares. If there is a share in the storage and it is not in OpenStack, you can manage that share and use it as a Shared File Systems share. Previous access rules are not imported by manila. The unmanage operation only unlinks the share from OpenStack, preserving all data in the share. In order to manage a HSP share, it must adhere to the following rules: - File system and share name must not contain spaces. - Share name must not contain backslashes (`\\`). To **manage** a share use: .. code-block:: console $ manila manage [--name ] [--description ] [--share_type ] [--driver_options [ [ ...]]] Where: +--------------------+------------------------------------------------------+ | **Parameter** | **Description** | +====================+======================================================+ | | Manila host, backend and share name. For example, | | ``service_host`` | ``ubuntu@hitachi1#hsp1``. The available hosts can | | | be listed with the command: ``manila pool-list`` | | | (admin only). | +--------------------+---------------------+--------------------------------+ | ``protocol`` | Must be **NFS**, the only supported protocol in this | | | driver version. | +--------------------+------------------------------------------------------+ | ``export_path`` | The Hitachi Hyper Scale-Out Platform export path of | | | the share, for example: | | | ``172.24.47.190:/some_share_name`` | +--------------------+------------------------------------------------------+ | To **unmanage** a share use: .. 
code-block:: console $ manila unmanage Where: +------------------+---------------------------------------------------------+ | **Parameter** | **Description** | +==================+=========================================================+ | ``share`` | ID or name of the share to be unmanaged. This list can | | | be fetched with: ``manila list``. | +------------------+---------------------+-----------------------------------+ Additional notes ~~~~~~~~~~~~~~~~ - Shares are thin provisioned. It is reported to manila only the real used space in HSP. - Administrators should manage the tenant's quota (``manila quota-update``) to control the backend usage. manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/hdfs-native-driver.rst0000664000175000017500000000424513656750227031612 0ustar zuulzuul00000000000000================== HDFS native driver ================== The HDFS native driver is a plug-in for the Shared File Systems service. It uses Hadoop distributed file system (HDFS), a distributed file system designed to hold very large amounts of data, and provide high-throughput access to the data. A Shared File Systems service share in this driver is a subdirectory in the hdfs root directory. Instances talk directly to the HDFS storage back end using the ``hdfs`` protocol. Access to each share is allowed by user based access type, which is aligned with HDFS ACLs to support access control of multiple users and groups. Network configuration ~~~~~~~~~~~~~~~~~~~~~ The storage back end and Shared File Systems service hosts should be in a flat network, otherwise L3 connectivity between them should exist. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports HDFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only user access type is supported. - Deny share access. - Create a snapshot. - Delete a snapshot. - Create a share from a snapshot. Requirements ~~~~~~~~~~~~ - Install HDFS package, version >= 2.4.x, on the storage back end. - To enable access control, the HDFS file system must have ACLs enabled. - Establish network connection between the Shared File Systems service host and storage back end. Shared File Systems service driver configuration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To enable the driver, set the ``share_driver`` option in file ``manila.conf`` and add other options as appropriate. .. code-block:: ini share_driver = manila.share.drivers.hdfs.hdfs_native.HDFSNativeShareDriver Known restrictions ~~~~~~~~~~~~~~~~~~ - This driver does not support network segmented multi-tenancy model. Instead multi-tenancy is supported by the tenant specific user authentication. - Only support for single HDFS namenode in Kilo release. Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to the share driver. .. include:: ../../tables/manila-hdfs.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/quobyte-driver.rst0000664000175000017500000000412513656750227031067 0ustar zuulzuul00000000000000============== Quobyte Driver ============== Quobyte can be used as a storage back end for the OpenStack Shared File System service. Shares in the Shared File System service are mapped 1:1 to Quobyte volumes. Access is provided via NFS protocol and IP-based authentication. The Quobyte driver uses the Quobyte API service. 
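Since the driver communicates with the Quobyte API service over HTTP, it can be useful to verify that the API endpoint is reachable from the ``manila-share`` host before configuring the back end. A minimal, hedged check, reusing the placeholder URL and user name from the sample configuration later in this document (the exact response depends on your Quobyte release):

.. code-block:: console

   $ curl -u api_user http://api.myserver.com:1234/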
Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The drivers supports NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only IP access type is supported. - Deny share access. Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to the share driver. .. include:: ../../tables/manila-quobyte.inc Configuration ~~~~~~~~~~~~~~ To configure Quobyte access for the Shared File System service, a back end configuration section has to be added in the ``manila.conf`` file. Add the name of the configuration section to ``enabled_share_backends`` in the ``manila.conf`` file. For example, if the section is named ``Quobyte``: .. code-block:: ini enabled_share_backends = Quobyte Create the new back end configuration section, in this case named ``Quobyte``: .. code-block:: ini [Quobyte] share_driver = manila.share.drivers.quobyte.quobyte.QuobyteShareDriver share_backend_name = QUOBYTE quobyte_api_url = http://api.myserver.com:1234/ quobyte_delete_shares = False quobyte_volume_configuration = BASE quobyte_default_volume_user = myuser quobyte_default_volume_group = mygroup The section name must match the name used in the ``enabled_share_backends`` option described above. The ``share_driver`` setting is required as shown, the other options should be set according to your local Quobyte setup. Other security-related options are: .. code-block:: ini quobyte_api_ca = /path/to/API/server/verification/certificate quobyte_api_username = api_user quobyte_api_password = api_user_pwd Quobyte support can be found at the `Quobyte support webpage `_. manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/huawei-nas-driver.rst0000664000175000017500000000713013656750227031437 0ustar zuulzuul00000000000000============= Huawei driver ============= Huawei NAS driver is a plug-in based on the Shared File Systems service. The Huawei NAS driver can be used to provide functions such as the share and snapshot for virtual machines, or instances, in OpenStack. Huawei NAS driver enables the OceanStor V3 series V300R002 storage system to provide only network filesystems for OpenStack. Requirements ~~~~~~~~~~~~ - The OceanStor V3 series V300R002 storage system. - The following licenses should be activated on V3 for File: CIFS, NFS, HyperSnap License (for snapshot). Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports CIFS and NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only IP access type is supported for NFS. - Only user access is supported for CIFS. - Deny share access. - Create a snapshot. - Delete a snapshot. - Support pools in one backend. - Extend a share. - Shrink a share. - Create a replica. - Delete a replica. - Promote a replica. - Update a replica state. Pre-configurations on Huawei ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Create a driver configuration file. The driver configuration file name must be the same as the ``manila_huawei_conf_file`` item in the ``manila_conf`` configuration file. #. Configure the product. Product indicates the storage system type. For the OceanStor V3 series V300R002 storage systems, the driver configuration file is as follows: .. 
code-block:: xml V3 x.x.x.x https://x.x.x.x:8088/deviceManager/rest/ xxxxxxxxx xxxxxxxxx xxxxxxxxx xxxxxxxxx 3 60 The options are: - ``Product`` is a type of storage product. Set it to ``V3``. - ``LogicalPortIP`` is the IP address of the logical port. - ``RestURL`` is an access address of the REST interface. Multiple RestURLs can be configured in ````, separated by ";". The driver will automatically retry another ``RestURL`` if one fails to connect. - ``UserName`` is the user name of an administrator. - ``UserPassword`` is the password of an administrator. - ``Thin_StoragePool`` is the name of a thin storage pool to be used. - ``Thick_StoragePool`` is the name of a thick storage pool to be used. - ``WaitInterval`` is the interval time of querying the file system status. - ``Timeout`` is the timeout period for waiting command execution of a device to complete. Back end configuration ~~~~~~~~~~~~~~~~~~~~~~ Modify the ``manila.conf`` Shared File Systems service configuration file and add ``share_driver`` and ``manila_huawei_conf_file`` items. Here is an example for configuring a storage system: .. code-block:: ini share_driver = manila.share.drivers.huawei.huawei_nas.HuaweiNasDriver manila_huawei_conf_file = /etc/manila/manila_huawei_conf.xml driver_handles_share_servers = False Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to the share driver. .. include:: ../../tables/manila-huawei.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/hitachi-hnas-driver.rst0000664000175000017500000004062513656750227031744 0ustar zuulzuul00000000000000========================= Hitachi NAS (HNAS) driver ========================= The HNAS driver provides NFS Shared File Systems to OpenStack. Requirements ~~~~~~~~~~~~ - Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080, and 4100. - HNAS/SMU software version is 12.2 or higher. - HNAS configuration and management utilities to create a storage pool (span) and an EVS. - GUI (SMU). - SSC CLI. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports NFS and CIFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. - Deny share access. - Create a snapshot. - Delete a snapshot. - Create a share from a snapshot. - Revert a share to a snapshot. - Extend a share. - Manage a share. - Unmanage a share. - Shrink a share. - Mount snapshots. - Allow snapshot access. - Deny snapshot access. - Manage a snapshot. - Unmanage a snapshot. Driver options ~~~~~~~~~~~~~~ This table contains the configuration options specific to the share driver. .. include:: ../../tables/manila-hds_hnas.inc Pre-configuration on OpenStack deployment ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Install the OpenStack environment with manila. See the `OpenStack installation guide `_. #. Configure the OpenStack networking so it can reach HNAS Management interface and HNAS EVS Data interface. .. note :: In the driver mode used by HNAS Driver (DHSS = ``False``), the driver does not handle network configuration, it is up to the administrator to configure it. * Configure the network of the manila-share node network to reach HNAS management interface through the admin network. * Configure the network of the Compute and Networking nodes to reach HNAS EVS data interface through the data network. * Example of networking architecture: .. 
figure:: ../../figures/hds_network.jpg :width: 60% :align: center :alt: Example networking scenario * Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and update the following settings in their respective tags. In case you use linuxbridge, update bridge mappings at linuxbridge section: .. important :: It is mandatory that HNAS management interface is reachable from the Shared File System node through the admin network, while the selected EVS data interface is reachable from OpenStack Cloud, such as through Neutron flat networking. .. code-block:: ini [ml2] type_drivers = flat,vlan,vxlan,gre mechanism_drivers = openvswitch [ml2_type_flat] flat_networks = physnet1,physnet2 [ml2_type_vlan] network_vlan_ranges = physnet1:1000:1500,physnet2:2000:2500 [ovs] bridge_mappings = physnet1:br-ex,physnet2:br-eth1 You may have to repeat the last line above in another file on the Compute node, if it exists it is located in: ``/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini``. * In case openvswitch for neutron agent, run in network node: .. code-block:: console # ifconfig eth1 0 # ovs-vsctl add-br br-eth1 # ovs-vsctl add-port br-eth1 eth1 # ifconfig eth1 up * Restart all neutron processes. #. Create the data HNAS network in OpenStack: * List the available projects: .. code-block:: console $ openstack project list * Create a network to the given project (DEMO), providing the project name, a name for the network, the name of the physical network over which the virtual network is implemented, and the type of the physical mechanism by which the virtual network is implemented: .. code-block:: console $ openstack network create --project DEMO \ --provider-network-type flat \ --provider-physical-network physnet2 hnas_network * Optional: List available networks: .. code-block:: console $ openstack network list * Create a subnet to the same project (DEMO), the gateway IP of this subnet, a name for the subnet, the network name created before, and the CIDR of subnet: .. code-block:: console $ openstack subnet create --project DEMO --gateway GATEWAY \ --subnet-range SUBNET_CIDR --network NETWORK HNAS_SUBNET * Optional: List available subnets: .. code-block:: console $ openstack subnet list * Add the subnet interface to a router, providing the router name and subnet name created before: .. code-block:: console $ openstack router add subnet SUBNET ROUTER Pre-configuration on HNAS ~~~~~~~~~~~~~~~~~~~~~~~~~ #. Create a file system on HNAS. See the `Hitachi HNAS reference `_. .. important:: Make sure that the filesystem is not created as a replication target. For more information, refer to the official HNAS administration guide. #. Prepare the HNAS EVS network. * Create a route in HNAS to the project network: .. code-block:: console $ console-context --evs route-net-add \ --gateway .. important:: Make sure multi-tenancy is enabled and routes are configured per EVS. .. code-block:: console $ console-context --evs 3 route-net-add --gateway 192.168.1.1 \ 10.0.0.0/24 #. Configure the CIFS security. * Before using CIFS shares with the HNAS driver, make sure to configure a security service in the back end. For details, refer to the `Hitachi HNAS reference `_. Back end configuration ~~~~~~~~~~~~~~~~~~~~~~ #. Configure HNAS driver. * Configure HNAS driver according to your environment. This example shows a minimal HNAS driver configuration: .. 
code-block:: ini [DEFAULT] enabled_share_backends = hnas1 enabled_share_protocols = NFS,CIFS [hnas1] share_backend_name = HNAS1 share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver driver_handles_share_servers = False hitachi_hnas_ip = 172.24.44.15 hitachi_hnas_user = supervisor hitachi_hnas_password = supervisor hitachi_hnas_evs_id = 1 hitachi_hnas_evs_ip = 10.0.1.20 hitachi_hnas_file_system_name = FS-Manila hitachi_hnas_cifs_snapshot_while_mounted = True .. note:: The ``hds_hnas_cifs_snapshot_while_mounted`` parameter allows snapshots to be taken while CIFS shares are mounted. This parameter is set to ``False`` by default, which prevents a snapshot from being taken if the share is mounted or in use. #. Optional. HNAS multi-backend configuration. * Update the ``enabled_share_backends`` flag with the names of the back ends separated by commas. * Add a section for every back end according to the example bellow: .. code-block:: ini [DEFAULT] enabled_share_backends = hnas1,hnas2 enabled_share_protocols = NFS,CIFS [hnas1] share_backend_name = HNAS1 share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver driver_handles_share_servers = False hitachi_hnas_ip = 172.24.44.15 hitachi_hnas_user = supervisor hitachi_hnas_password = supervisor hitachi_hnas_evs_id = 1 hitachi_hnas_evs_ip = 10.0.1.20 hitachi_hnas_file_system_name = FS-Manila1 hitachi_hnas_cifs_snapshot_while_mounted = True [hnas2] share_backend_name = HNAS2 share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver driver_handles_share_servers = False hitachi_hnas_ip = 172.24.44.15 hitachi_hnas_user = supervisor hitachi_hnas_password = supervisor hitachi_hnas_evs_id = 1 hitachi_hnas_evs_ip = 10.0.1.20 hitachi_hnas_file_system_name = FS-Manila2 hitachi_hnas_cifs_snapshot_while_mounted = True #. Disable DHSS for HNAS share type configuration: .. note:: Shared File Systems requires that the share type includes the ``driver_handles_share_servers`` extra-spec. This ensures that the share will be created on a back end that supports the requested ``driver_handles_share_servers`` capability. .. code-block:: console $ manila type-create hitachi False #. Optional: Add extra-specs for enabling HNAS-supported features: * These commands will enable various snapshot-related features that are supported in HNAS. .. code-block:: console $ manila type-key hitachi set snapshot_support=True $ manila type-key hitachi set mount_snapshot_support=True $ manila type-key hitachi set revert_to_snapshot_support=True $ manila type-key hitachi set create_share_from_snapshot_support=True * To specify which HNAS back end will be created by the share, in case of multiple back end setups, add an extra-spec for each share-type to match a specific back end. Therefore, it is possible to specify which back end the Shared File System service will use when creating a share. .. code-block:: console $ manila type-key hitachi set share_backend_name=hnas1 $ manila type-key hitachi2 set share_backend_name=hnas2 #. Restart all Shared File Systems services (``manila-share``, ``manila-scheduler`` and ``manila-api``). Share migration ~~~~~~~~~~~~~~~ Extra configuration is needed for allowing shares to be migrated from or to HNAS. In the OpenStack deployment, the manila-share node needs an additional connection to the EVS data interface. Furthermore, make sure to add ``hitachi_hnas_admin_network_ip`` to the configuration. This should match the value of ``data_node_access_ips``. 
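As a hedged illustration, the extra migration-related settings could look like the following, reusing the ``hnas1`` back end from the earlier example (the IP address is only a placeholder for the admin network address of the node running the data service):

.. code-block:: ini

   [DEFAULT]
   # address the data service uses to mount shares during migration
   data_node_access_ips = 10.0.1.100

   [hnas1]
   # must match data_node_access_ips, as noted above
   hitachi_hnas_admin_network_ip = 10.0.1.100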
For more in-depth documentation, refer to the `share migration documents `_ Manage and unmanage shares ~~~~~~~~~~~~~~~~~~~~~~~~~~ Shared File Systems has the ability to manage and unmanage shares. If there is a share in the storage and it is not in OpenStack, you can manage that share and use it as a Shared File Systems share. Administrators have to make sure the exports are under the ``/shares`` folder beforehand. HNAS drivers use virtual-volumes (V-VOL) to create shares. Only V-VOL shares can be used by the driver, and V-VOLs must have a quota limit. If the NFS export is an ordinary FS export, it is not possible to use it in Shared File Systems. The unmanage operation only unlinks the share from Shared File Systems, all data is preserved. Both manage and unmanage operations are non-disruptive by default, until access rules are modified. To **manage** a share, use: .. code-block:: console $ manila manage [--name ] [--description ] [--share_type ] [--driver_options [ [ ...]]] [--public] Where: +--------------------+------------------------------------------------------+ | **Parameter** | **Description** | +====================+======================================================+ | | Manila host, back end and share name. For example, | | ``service_host`` | ``ubuntu@hitachi1#hsp1``. The available hosts can | | | be listed with the command: ``manila pool-list`` | | | (admin only). | +--------------------+------------------------------------------------------+ | ``protocol`` | Protocol of share to manage, such as NFS or CIFS. | +--------------------+------------------------------------------------------+ | ``export_path`` | Share export path. | | | For NFS: ``10.0.0.1:/shares/share_name`` | | | | | | For CIFS: ``\\10.0.0.1\share_name`` | +--------------------+------------------------------------------------------+ .. note:: For NFS exports, ``export_path`` **must** include ``/shares/`` after the target address. Trying to reference the share name directly or under another path will fail. .. note:: For CIFS exports, although the shares will be created under the ``/shares/`` folder in the back end, only the share name is needed in the export path. It should also be noted that the backslash ``\`` character has to be escaped when entered in Linux terminals. For additional details, refer to ``manila help manage``. To **unmanage** a share, use: .. code-block:: console $ manila unmanage Where: +------------------+---------------------------------------------------------+ | **Parameter** | **Description** | +==================+=========================================================+ | ``share`` | ID or name of the share to be unmanaged. A list of | | | shares can be fetched with ``manila list``. | +------------------+---------------------------------------------------------+ Manage and unmanage snapshots ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Shared File Systems service also has the ability to manage share snapshots. Existing HNAS snapshots can be managed, as long as the snapshot directory is located in ``/snapshots/share_ID``. New snapshots created through the Shared File Systems service are also created according to this specific folder structure. To **manage** a snapshot, use: .. 
code-block:: console $ manila snapshot-manage [--name ] [--description ] [--driver_options [ [ ...]]] Where: +------------------------+-------------------------------------------------+ | **Parameter** | **Description** | +========================+=================================================+ | ``share`` | ID or name of the share to be managed. A list | | | of shares can be fetched with ``manila list``. | +------------------------+-------------------------------------------------+ | ``provider_location`` | Location of the snapshot on the back end, such | | | as ``/snapshots/share_ID/snapshot_ID``. | +------------------------+-------------------------------------------------+ | ``--driver_options`` | Driver-related configuration, passed such as | | | ``size=10``. | +------------------------+-------------------------------------------------+ .. note:: The mandatory ``provider_location`` parameter uses the same syntax for both NFS and CIFS shares. This is only the case for snapshot management. .. note:: The ``--driver_options`` parameter ``size`` is **required** for the HNAS driver. Administrators need to know the size of the to-be-managed snapshot beforehand. .. note:: If the ``mount_snapshot_support=True`` extra-spec is set in the share type, the HNAS driver will automatically create an export when managing a snapshot if one does not already exist. To **unmanage** a snapshot, use: .. code-block:: console $ manila snapshot-unmanage Where: +---------------+--------------------------------+ | **Parameter** | **Description** | +===============+================================+ | ``snapshot`` | Name or ID of the snapshot(s). | +---------------+--------------------------------+ Additional notes ~~~~~~~~~~~~~~~~ * HNAS has some restrictions about the number of EVSs, filesystems, virtual-volumes, and simultaneous SSC connections. Check the manual specification for your system. * Shares and snapshots are thin provisioned. It is reported to Shared File System only the real used space in HNAS. Also, a snapshot does not initially take any space in HNAS, it only stores the difference between the share and the snapshot, so it grows when share data is changed. * Administrators should manage the project's quota (:command:`manila quota-update`) to control the back end usage. * Shares will need to be remounted after a revert-to-snapshot operation. manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/zfs-on-linux-driver.rst0000664000175000017500000001235613656750227031755 0ustar zuulzuul00000000000000===================== ZFS (on Linux) driver ===================== Manila ZFSonLinux share driver uses ZFS file system for exporting NFS shares. Written and tested using Linux version of ZFS. Requirements ~~~~~~~~~~~~ - NFS daemon that can be handled through ``exportfs`` app. - ZFS file system packages, either Kernel or FUSE versions. - ZFS zpools that are going to be used by Manila should exist and be configured as desired. Manila will not change zpool configuration. - For remote ZFS hosts according to manila-share service host SSH should be installed. - For ZFS hosts that support replication: - SSH access for each other should be passwordless. - Service IP addresses should be available by ZFS hosts for each other. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. - Only IP access type is supported. 
- Both access levels are supported - ``RW`` and ``RO``. - Deny share access. - Bring an existing ZFSOnLinux share under the shared file system service (Managing a share) - Remove a ZFSOnLinux share from the shared file system service without deleting it (Unmanaging a share) - Create a snapshot. - Delete a snapshot. - Bring an existing ZFSOnLinux snapshot under the shared file system service (Managing a snapshot) - Remove a ZFSOnLinux snapshot from the shared file system service without deleting it (Unmanaging a snapshot) - Create a share from snapshot. - Extend a share. - Shrink a share. - Share replication (experimental): - Create, update, delete, and promote replica operations are supported. Possibilities ~~~~~~~~~~~~~ - Any amount of ZFS zpools can be used by share driver. - Allowed to configure default options for ZFS datasets that are used for share creation. - Any amount of nested datasets is allowed to be used. - All share replicas are read-only, only active one is read-write. - All share replicas are synchronized periodically, not continuously. Status ``in_sync`` means latest sync was successful. Time range between syncs equals to the value of the ``replica_state_update_interval`` configuration global option. - Driver can use qualified extra spec ``zfsonlinux:compression``. It can contain any value that ZFS app supports. But if it is disabled through the configuration option with the value ``compression=off``, then it will not be used. Restrictions ~~~~~~~~~~~~ The ZFSonLinux share driver has the following restrictions: - Only IP access type is supported for NFS. - Only FLAT network is supported. - ``Promote share replica`` operation will switch roles of current ``secondary`` replica and ``active``. It does not make more than one active replica available. - The below items are not yet implemented: - ``SaMBa`` based sharing. - ``Thick provisioning`` capability. Known problems ~~~~~~~~~~~~~~ - ``Promote share replica`` operation will make ZFS file system that became secondary as RO only on NFS level. On ZFS level system will stay mounted as was - RW. Back-end configuration ~~~~~~~~~~~~~~~~~~~~~~ The following parameters need to be configured in the manila configuration file for back-ends that use the ZFSonLinux driver: - ``share_driver`` = manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver - ``driver_handles_share_servers`` = False - ``replication_domain`` = custom_str_value_as_domain_name - If empty, then replication will be disabled. - If set, then will be able to be used as replication peer for other back ends with the same value. - ``zfs_share_export_ip`` = - ``zfs_service_ip`` = - ``zfs_zpool_list`` = zpoolname1,zpoolname2/nested_dataset_for_zpool2 - Can be one or more zpools. - Can contain nested datasets. - ``zfs_dataset_creation_options`` = - readonly, quota, sharenfs and sharesmb options will be ignored. - ``zfs_dataset_name_prefix`` = - Prefix to be used in each dataset name. - ``zfs_dataset_snapshot_name_prefix`` = - Prefix to be used in each dataset snapshot name. - ``zfs_use_ssh`` = - Set ``False`` if ZFS located on the same host as `manila-share` service. - Set ``True`` if `manila-share` service should use SSH for ZFS configuration. - ``zfs_ssh_username`` = - Required for replication operations. - Required for SSH``ing to ZFS host if ``zfs_use_ssh`` is set to ``True``. - ``zfs_ssh_user_password`` = - Password for ``zfs_ssh_username`` of ZFS host. - Used only if ``zfs_use_ssh`` is set to ``True``. 
- ``zfs_ssh_private_key_path`` = - Used only if ``zfs_use_ssh`` is set to ``True``. - ``zfs_share_helpers`` = NFS=manila.share.drivers.zfsonlinux.utils.NFSviaZFSHelper - Approach for setting up helpers is similar to various other share drivers. - At least one helper should be used. - ``zfs_replica_snapshot_prefix`` = - Prefix to be used in dataset snapshot names that are created by ``update replica`` operation. Driver options ~~~~~~~~~~~~~~ .. include:: ../../tables/manila-zfs.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/dell-emc-vnx-driver.rst0000664000175000017500000002336213656750227031676 0ustar zuulzuul00000000000000=================== Dell EMC VNX driver =================== The EMC Shared File Systems service driver framework (EMCShareDriver) utilizes the EMC storage products to provide the shared file systems to OpenStack. The EMC driver is a plug-in based driver which is designed to use different plug-ins to manage different EMC storage products. The VNX plug-in is the plug-in which manages the VNX to provide shared filesystems. The EMC driver framework with the VNX plug-in is referred to as the VNX driver in this document. This driver performs the operations on VNX by XMLAPI and the file command line. Each back end manages one Data Mover of VNX. Multiple Shared File Systems service back ends need to be configured to manage multiple Data Movers. Requirements ~~~~~~~~~~~~ - VNX OE for File version 7.1 or higher - VNX Unified, File only, or Gateway system with a single storage back end - The following licenses should be activated on VNX for File: - CIFS - NFS - SnapSure (for snapshot) - ReplicationV2 (for create share from snapshot) Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports CIFS and NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only IP access type is supported for NFS. - Only user access type is supported for CIFS. - Deny share access. - Create a snapshot. - Delete a snapshot. - Create a share from a snapshot. While the generic driver creates shared filesystems based on cinder volumes attached to nova VMs, the VNX driver performs similar operations using the Data Movers on the array. Pre-configurations on VNX ~~~~~~~~~~~~~~~~~~~~~~~~~ #. Enable unicode on Data Mover. The VNX driver requires that the unicode is enabled on Data Mover. .. warning:: After enabling Unicode, you cannot disable it. If there are some filesystems created before Unicode is enabled on the VNX, consult the storage administrator before enabling Unicode. To check the Unicode status on Data Mover, use the following VNX File command on the VNX control station:: server_cifs | head # mover_name = Check the value of I18N mode field. UNICODE mode is shown as ``I18N mode = UNICODE``. To enable the Unicode for Data Mover:: uc_config -on -mover # mover_name = Refer to the document Using International Character Sets on VNX for File on `EMC support site `_ for more information. #. Enable CIFS service on Data Mover. Ensure the CIFS service is enabled on the Data Mover which is going to be managed by VNX driver. To start the CIFS service, use the following command:: server_setup -Protocol cifs -option start [=] # mover_name = # n = .. note:: If there is 1 GB of memory on the Data Mover, the default is 96 threads; however, if there is over 1 GB of memory, the default number of threads is 256. 
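For example, to start the CIFS service on Data Mover ``server_2`` with 96 threads (the Data Mover name and thread count are only illustrative)::

    server_setup server_2 -Protocol cifs -option start=96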
To check the CIFS service status, use this command:: server_cifs | head # mover_name = The command output will show the number of CIFS threads started. #. NTP settings on Data Mover. VNX driver only supports CIFS share creation with share network which has an Active Directory security-service associated. Creating CIFS share requires that the time on the Data Mover is in sync with the Active Directory domain so that the CIFS server can join the domain. Otherwise, the domain join will fail when creating share with this security service. There is a limitation that the time of the domains used by security-services even for different tenants and different share networks should be in sync. Time difference should be less than 10 minutes. It is recommended to set the NTP server to the same public NTP server on both the Data Mover and domains used in security services to ensure the time is in sync everywhere. Check the date and time on Data Mover:: server_date # mover_name = Set the NTP server for Data Mover:: server_date timesvc start ntp [ ...] # mover_name = # host = .. note:: The host must be running the NTP protocol. Only 4 host entries are allowed. #. Configure User Mapping on the Data Mover. Before creating CIFS share using VNX driver, you must select a method of mapping Windows SIDs to UIDs and GIDs. EMC recommends using usermapper in single protocol (CIFS) environment which is enabled on VNX by default. To check usermapper status, use this command syntax:: server_usermapper # movername = If usermapper is not started, the following command can be used to start the usermapper:: server_usermapper -enable # movername = For a multiple protocol environment, refer to Configuring VNX User Mapping on `EMC support site `_ for additional information. #. Network Connection. Find the network devices (physical port on NIC) of Data Mover that has access to the share network. Go to :guilabel:`Unisphere` to check the device list: :menuselection:`Settings > Network > Settings for File (Unified system only) > Device`. Back-end configurations ~~~~~~~~~~~~~~~~~~~~~~~ The following parameters need to be configured in the ``/etc/manila/manila.conf`` file for the VNX driver: .. code-block:: ini emc_share_backend = vnx emc_nas_server = emc_nas_password = emc_nas_login = vnx_server_container = vnx_share_data_pools = share_driver = manila.share.drivers.emc.driver.EMCShareDriver vnx_ethernet_ports = - `emc_share_backend` The plug-in name. Set it to ``vnx`` for the VNX driver. - `emc_nas_server` The control station IP address of the VNX system to be managed. - `emc_nas_password` and `emc_nas_login` The fields that are used to provide credentials to the VNX system. Only local users of VNX File is supported. - `vnx_server_container` Name of the Data Mover to serve the share service. - `vnx_share_data_pools` Comma separated list specifying the name of the pools to be used by this back end. Do not set this option if all storage pools on the system can be used. Wild card character is supported. Examples: pool_1, pool_*, * - `vnx_ethernet_ports` Comma separated list specifying the ports (devices) of Data Mover that can be used for share server interface. Do not set this option if all ports on the Data Mover can be used. Wild card character is supported. Examples: spa_eth1, spa_*, * Restart of the ``manila-share`` service is needed for the configuration changes to take effect. Restrictions ~~~~~~~~~~~~ The VNX driver has the following restrictions: - Only IP access type is supported for NFS. 
- Only user access type is supported for CIFS. - Only FLAT network and VLAN network are supported. - VLAN network is supported with limitations. The neutron subnets in different VLANs that are used to create share networks cannot have overlapped address spaces. Otherwise, VNX may have a problem to communicate with the hosts in the VLANs. To create shares for different VLANs with same subnet address, use different Data Movers. - The ``Active Directory`` security service is the only supported security service type and it is required to create CIFS shares. - Only one security service can be configured for each share network. - Active Directory domain name of the 'active\_directory' security service should be unique even for different tenants. - The time on Data Mover and the Active Directory domains used in security services should be in sync (time difference should be less than 10 minutes). It is recommended to use same NTP server on both the Data Mover and Active Directory domains. - On VNX the snapshot is stored in the SavVols. VNX system allows the space used by SavVol to be created and extended until the sum of the space consumed by all SavVols on the system exceeds the default 20% of the total space available on the system. If the 20% threshold value is reached, an alert will be generated on VNX. Continuing to create snapshot will cause the old snapshot to be inactivated (and the snapshot data to be abandoned). The limit percentage value can be changed manually by storage administrator based on the storage needs. Administrator is recommended to configure the notification on the SavVol usage. Refer to Using VNX SnapSure document on `EMC support site `_ for more information. - VNX has limitations on the overall numbers of Virtual Data Movers, filesystems, shares, checkpoints, etc. Virtual Data Mover(VDM) is created by the VNX driver on the VNX to serve as the Shared File Systems service share server. Similarly, filesystem is created, mounted, and exported from the VDM over CIFS or NFS protocol to serve as the Shared File Systems service share. The VNX checkpoint serves as the Shared File Systems service share snapshot. Refer to the NAS Support Matrix document on `EMC support site `_ for the limitations and configure the quotas accordingly. Driver options ~~~~~~~~~~~~~~ Configuration options specific to this driver: .. include:: ../../tables/manila-vnx.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/glusterfs-native-driver.rst0000664000175000017500000001044113656750227032677 0ustar zuulzuul00000000000000======================= GlusterFS Native driver ======================= GlusterFS Native driver uses GlusterFS, an open source distributed file system, as the storage back end for serving file shares to Shared File Systems service clients. A Shared File Systems service share is a GlusterFS volume. This driver uses flat-network (share-server-less) model. Instances directly talk with the GlusterFS back end storage pool. The instances use ``glusterfs`` protocol to mount the GlusterFS shares. Access to each share is allowed via TLS Certificates. Only the instance which has the TLS trust established with the GlusterFS back end can mount and hence use the share. Currently only ``read-write (rw)`` access is supported. Network approach ~~~~~~~~~~~~~~~~ L3 connectivity between the storage back end and the host running the Shared File Systems share service should exist. Multi-tenancy model ~~~~~~~~~~~~~~~~~~~ The driver does not support network segmented multi-tenancy model. 
Instead multi-tenancy is supported using tenant specific TLS certificates. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports GlusterFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only access by TLS Certificates (``cert`` access type) is supported. - Only read-write access is supported. - Deny share access. - Create a snapshot. - Delete a snapshot. Requirements ~~~~~~~~~~~~ - Install glusterfs-server package, version >= 3.6.x, on the storage back end. - Install glusterfs and glusterfs-fuse package, version >= 3.6.x, on the Shared File Systems service host. - Establish network connection between the Shared File Systems service host and the storage back end. Shared File Systems service driver configuration setting ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following parameters in the Shared File Systems service's configuration file need to be set: .. code-block:: ini share_driver = manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver glusterfs_servers = glustervolserver glusterfs_volume_pattern = manila-share-volume-\d+$ The parameters are: ``glusterfs_servers`` List of GlusterFS servers which provide volumes that can be used to create shares. The servers are expected to be of distinct Gluster clusters, so they should not be Gluster peers. Each server should be of the form ``[@]``. The optional ``@`` part of the server URI indicates SSH access for cluster management (see related optional parameters below). If it is not given, direct command line management is performed (the Shared File Systems service host is assumed to be part of the GlusterFS cluster the server belongs to). ``glusterfs_volume_pattern`` Regular expression template used to filter GlusterFS volumes for share creation. The regular expression template can contain the ``#{size}`` parameter which matches a number and the value will be interpreted as size of the volume in GB. Examples: ``manila-share-volume-\d+$``, ``manila-share-volume-#{size}G-\d+$``; with matching volume names, respectively: ``manila-share-volume-12``, ``manila-share-volume-3G-13``. In the latter example, the number that matches ``#{size}``, which is 3, is an indication that the size of volume is 3 GB. On share creation, the Shared File Systems service picks volumes at least as large as the requested one. When setting up GlusterFS shares, note the following: - GlusterFS volumes are not created on demand. A pre-existing set of GlusterFS volumes should be supplied by the GlusterFS cluster(s), conforming to the naming convention encoded by ``glusterfs_volume_pattern``. However, the GlusterFS endpoint is allowed to extend this set any time, so the Shared File Systems service and GlusterFS endpoints are expected to communicate volume supply and demand out-of-band. - Certificate setup, also known as trust setup, between instance and storage back end is out of band of the Shared File Systems service. - For the Shared File Systems service to use GlusterFS volumes, the name of the trashcan directory in GlusterFS volumes must not be changed from the default. 
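To tie the configuration options together, the following is a hedged sketch of a complete back-end section for this driver (the server names are placeholders, ``root@`` illustrates the optional SSH-managed server form, and the size-aware volume pattern matches pre-created volumes such as ``manila-share-volume-3G-13``):

.. code-block:: ini

   [glusternative1]
   share_backend_name = GLUSTERNATIVE1
   share_driver = manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver
   driver_handles_share_servers = False
   # servers from two distinct Gluster clusters; the second is managed over SSH
   glusterfs_servers = glustervolserver1,root@glustervolserver2
   # pre-created volumes encode their size in the name
   glusterfs_volume_pattern = manila-share-volume-#{size}G-\d+$

Remember to list ``glusternative1`` in ``enabled_share_backends`` in the ``[DEFAULT]`` section so that the back end is actually loaded.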
manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/emc-isilon-driver.rst0000664000175000017500000000374713656750227031447 0ustar zuulzuul00000000000000================= EMC Isilon driver ================= The EMC Shared File Systems driver framework (EMCShareDriver) utilizes EMC storage products to provide shared file systems to OpenStack. The EMC driver is a plug-in based driver which is designed to use different plug-ins to manage different EMC storage products. The Isilon driver is a plug-in for the EMC framework which allows the Shared File Systems service to interface with an Isilon back end to provide a shared filesystem. The EMC driver framework with the Isilon plug-in is referred to as the ``Isilon Driver`` in this document. This Isilon Driver interfaces with an Isilon cluster via the REST Isilon Platform API (PAPI) and the RESTful Access to Namespace API (RAN). Requirements ~~~~~~~~~~~~ - Isilon cluster running OneFS 7.2 or higher Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The drivers supports CIFS and NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only IP access type is supported. - Only read-write access is supported. - Deny share access. - Create a snapshot. - Delete a snapshot. - Create a share from a snapshot. Back end configuration ~~~~~~~~~~~~~~~~~~~~~~ The following parameters need to be configured in the Shared File Systems service configuration file for the Isilon driver: .. code-block:: ini share_driver = manila.share.drivers.emc.driver.EMCShareDriver emc_share_backend = isilon emc_nas_server = emc_nas_login = emc_nas_password = Restrictions ~~~~~~~~~~~~ The Isilon driver has the following restrictions: - Only IP access type is supported for NFS and CIFS. - Only FLAT network is supported. - Quotas are not yet supported. Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to the share driver. .. include:: ../../tables/manila-emc.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/dell-emc-powermax-driver.rst0000664000175000017500000004413713656750227032730 0ustar zuulzuul00000000000000======================== Dell EMC PowerMax Plugin ======================== The Dell EMC Shared File Systems service driver framework (EMCShareDriver) utilizes the Dell EMC storage products to provide the shared file systems to OpenStack. The Dell EMC driver is a plug-in based driver which is designed to use different plug-ins to manage different Dell EMC storage products. The PowerMax plug-in manages the PowerMax to provide shared file systems. The Dell EMC driver framework with the PowerMax plug-in is referred to as the PowerMax driver in this document. This driver performs the operations on PowerMax eNAS by XMLAPI and the file command line. Each back end manages one Data Mover of PowerMax. Multiple Shared File Systems service back ends need to be configured to manage multiple Data Movers. Requirements ~~~~~~~~~~~~ - PowerMax eNAS OE for File version 8.1 or higher - PowerMax Unified or File only - The following licenses should be activated on PowerMax for File: - CIFS - NFS - SnapSure (for snapshot) - ReplicationV2 (for create share from snapshot) Supported shared file systems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports CIFS and NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. 
Note the following limitations: - Only IP access type is supported for NFS. - Only user access type is supported for CIFS. - Deny share access. - Create a snapshot. - Delete a snapshot. - Create a share from a snapshot. While the generic driver creates shared file systems based on cinder volumes attached to nova VMs, the PowerMax driver performs similar operations using the Data Movers on the array. Pre-configurations on PowerMax ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Configure a storage pool There is a one to one relationship between a storage pool in embedded NAS to a storage group on the PowerMax. The best way to provision storage for file is from the Unisphere for PowerMax UI rather than eNAS UI. Go to :menuselection:`{array} > SYSTEM > FIle` and under :menuselection:`Actions` click :menuselection:`PROVISION STORAGE FOR FILE` .. note:: When creating a new storage group you have the ability to assign a service level e.g. Diamond and disable compression/deduplication which is enabled by default. To pick up the newly created storage pool in the eNAS UI, go to :menuselection:`{Control Station} > Storage > Storage Configuration > Storage Pools` and under :menuselection:`File Storage` click :menuselection:`Rescan Storage Systems` or on the command line: .. code-block:: console $ nas_diskmark -mark -all -discovery y -monitor y The new storage pool should now appear in the eNAS UI #. Make sure you have the appropriate licenses .. code-block:: console $ nas_license -l key status value site_key online xx xx xx xx nfs online cifs online snapsure online replicatorV2 online filelevelretention online #. Enable CIFS service on Data Mover. Ensure the CIFS service is enabled on the Data Mover which is going to be managed by PowerMax driver. To start the CIFS service, use the following command: .. code-block:: console $ server_setup -Protocol cifs -option start [=] # movername = name of the Data Mover # n = number of threads for CIFS users .. note:: If there is 1 GB of memory on the Data Mover, the default is 96 threads. However, if there is over 1 GB of memory, the default number of threads is 256. To check the CIFS service status, use the following command: .. code-block:: console $ server_cifs | head # movername = name of the Data Mover The command output will show the number of CIFS threads started. #. NTP settings on Data Mover. PowerMax driver only supports CIFS share creation with share network which has an Active Directory security-service associated. Creating CIFS share requires that the time on the Data Mover is in sync with the Active Directory domain so that the CIFS server can join the domain. Otherwise, the domain join will fail when creating a share with this security service. There is a limitation that the time of the domains used by security-services, even for different tenants and different share networks, should be in sync. Time difference should be less than 5 minutes. .. note:: If there is a clock skew then you may see the following error "The local machine and the remote machine are not synchronized. Kerberos protocol requires a synchronization of both participants within the same 5 minutes". To fix this error you must make sure the times of the eNas controller host and the Domain Controller or within 5 minutes of each other. You must be root to change the date of the eNas control station. Check also that your time zones coincide. 
We recommend setting the NTP server to the same public NTP server on both the Data Mover and domains used in security services to ensure the time is in sync everywhere. Check the date and time on Data Mover with the following command: .. code-block:: console $ server_date # movername = name of the Data Mover Set the NTP server for Data Mover with the following command: .. code-block:: console $ server_date timesvc start ntp [ ...] # movername = name of the Data Mover # host = IP address of the time server host .. note:: The host must be running the NTP protocol. Only 4 host entries are allowed. #. Configure User Mapping on the Data Mover. Before creating CIFS share using PowerMax driver, you must select a method of mapping Windows SIDs to UIDs and GIDs. DELL EMC recommends using usermapper in single protocol (CIFS) environment which is enabled on PowerMax eNAS by default. To check usermapper status, use the following command syntax: .. code-block:: console $ server_usermapper # movername = name of the Data Mover If usermapper does not start, use the following command to start the usermapper: .. code-block:: console $ server_usermapper -enable # movername = name of the Data Mover For a multiple protocol environment, refer to Configuring PowerMax eNAS User Mapping on `EMC support site `_ for additional information. #. Configure network connection. Find the network devices (physical port on NIC) of the Data Mover that has access to the share network. To check the device list on the eNAS UI go to :menuselection:`{Control Station} > Settings > Network > Devices`. or on the command line: .. code-block:: console $ server_sysconfig server_2 -pci server_2 : PCI DEVICES: On Board: VendorID=0x1120 DeviceID=0x1B00 Controller 0: scsi-0 IRQ: 32 0: scsi-16 IRQ: 33 0: scsi-32 IRQ: 34 0: scsi-48 IRQ: 35 Broadcom 10 Gigabit Ethernet Controller 0: fxg-3-0 IRQ: 36 speed=10000 duplex=full txflowctl=disable rxflowctl=disable Link: Up 0: fxg-3-1 IRQ: 38 speed=10000 duplex=full txflowctl=disable rxflowctl=disable Link: Down Back-end configurations ~~~~~~~~~~~~~~~~~~~~~~~ .. note:: The following deprecated tags will be removed in the T release: - emc_nas_server_container - emc_nas_pool_names - emc_interface_ports The following parameters need to be configured in the ``/etc/manila/manila.conf`` file for the PowerMax driver: .. code-block:: ini emc_share_backend = powermax emc_nas_server = emc_nas_password = emc_nas_login = driver_handles_share_servers = True powermax_server_container = powermax_share_data_pools = share_driver = manila.share.drivers.dell_emc.driver.EMCShareDriver powermax_ethernet_ports = emc_ssl_cert_verify = True emc_ssl_cert_path = share_backend_name = - `emc_share_backend` The plug-in name. Set it to ``powermax`` for the PowerMax driver. Other values are ``isilon``, ``vnx`` and ``unity``. - `emc_nas_server` The control station IP address of the PowerMax system to be managed. - `emc_nas_password` and `emc_nas_login` The fields that are used to provide credentials to the PowerMax system. Only local users of PowerMax File is supported. - `driver_handles_share_servers` PowerMax only supports True, where the share driver handles the provisioning and management of the share servers. - `powermax_server_container` Name of the Data Mover to serve the share service. - `powermax_share_data_pools` Comma separated list specifying the name of the pools to be used by this back end. Do not set this option if all storage pools on the system can be used. Wild card character is supported. 
Examples: pool_1, pool_*, *

- `powermax_ethernet_ports (optional)`

  Comma-separated list specifying the ports (devices) of the Data Mover that can be used for the share server interface. Do not set this option if all ports on the Data Mover can be used. Wild card character is supported.

  Examples: fxg-9-0, fxg-_*, *

- `emc_ssl_cert_verify (optional)`

  By default this is True; setting it to False is not recommended.

- `emc_ssl_cert_path (optional)`

  The path to the directory holding the CA certificate. This must be set if emc_ssl_cert_verify is True, which is the recommended configuration. See the ``SSL Support`` section for more details.

- `share_backend_name`

  The backend name for a given driver implementation.

Restart of the ``manila-share`` service is needed for the configuration changes to take effect.

SSL Support
-----------

#. Run the following on the eNAS Control Station to display the CA certificate for the active CS.

   .. code-block:: console

      $ /nas/sbin/nas_ca_certificate -display

   .. warning::

      This cert will be different for the secondary CS, so if there is a failover a different certificate must be used.

#. Copy the contents and create a file with a .pem extension on your manila host.

   .. code-block:: ini

      -----BEGIN CERTIFICATE-----
      the cert contents are here
      -----END CERTIFICATE-----

#. Verify the cert by running the following and examining the output:

   .. code-block:: console

      $ openssl x509 -in test.pem -text -noout

   .. code-block:: ini

      Certificate:
          Data:
              Version: 3 (0x2)
              Serial Number: xxxxxx
          Signature Algorithm: sha1WithRSAEncryption
              Issuer: O=VNX Certificate Authority, CN=xxx
              Validity
                  Not Before: Feb 27 16:02:41 2019 GMT
                  Not After : Mar 4 16:02:41 2024 GMT
              Subject: O=VNX Certificate Authority, CN=xxxxxx
              Subject Public Key Info:
                  Public Key Algorithm: rsaEncryption
                      Public-Key: (2048 bit)
                      Modulus: xxxxxx
                      Exponent: xxxxxx
              X509v3 extensions:
                  X509v3 Subject Key Identifier: xxxxxx
                  X509v3 Authority Key Identifier:
                      keyid:xxxxx
                      DirName:/O=VNX Certificate Authority/CN=xxxxxx
                      serial:xxxxx
                  X509v3 Basic Constraints:
                      CA:TRUE
                  X509v3 Subject Alternative Name:
                      DNS:xxxxxx, DNS:xxxxxx.localdomain, DNS:xxxxxxx, DNS:xxxxx
          Signature Algorithm: sha1WithRSAEncryption
              xxxxxx

#. As it is the capath and not the cafile that is expected, copy the file to either a new directory or an existing directory (where other .pem files exist).

#. Run the following on the directory:

   .. code-block:: console

      $ c_rehash $PATH_TO_CERTS

#. Update manila.conf with the directory where the .pem exists.

   .. code-block:: ini

      emc_ssl_cert_path = /path_to_certs/

#. Restart manila services.

Snapshot Support
~~~~~~~~~~~~~~~~

Snapshot support is disabled by default, so in order to allow snapshots for a share type, the ``snapshot_support`` extra spec must be set to True. Creating a share from a snapshot is also disabled by default, so ``create_share_from_snapshot_support`` must also be set to True if this functionality is required.

For a new share type:

.. code-block:: console

   $ manila type-create --snapshot_support True \
     --create_share_from_snapshot_support True \
     ${share_type_name} True

For an existing share type:

.. code-block:: console

   $ manila type-key ${share_type_name} \
     set snapshot_support=True
   $ manila type-key ${share_type_name} \
     set create_share_from_snapshot_support=True

To create a snapshot from a share where snapshot_support=True:

.. code-block:: console

   $ manila snapshot-create ${source_share_name} --name ${target_snapshot_name}

To create a target share from a snapshot where create_share_from_snapshot_support=True:

..
code-block:: console $ manila create cifs 3 --name ${target_share_name} \ --share-network ${share_network} \ --share-type ${share_type_name} \ --metadata source=snapshot \ --snapshot-id ${snapshot_id} IPv6 support ~~~~~~~~~~~~ IPv6 support for PowerMax Manila driver was introduced in Rocky release. The feature is divided into two parts: #. The driver is able to manage share or snapshot in the Neutron IPv6 network. #. The driver is able to connect PowerMax management interface using its IPv6 address. Pre-Configurations for IPv6 support ----------------------------------- The following parameters need to be configured in ``/etc/manila/manila.conf`` for the PowerMax driver: .. code-block:: ini network_plugin_ipv6_enabled = True If you want to connect to the eNAS controller using IPv6 address specify the address in ``/etc/manila/manila.conf``: .. code-block:: ini emc_nas_server = Restrictions ~~~~~~~~~~~~ The PowerMax driver has the following restrictions: - Only ``driver_handles_share_servers`` equals True is supported. - Only IP access type is supported for NFS. - Only user access type is supported for CIFS. - Only FLAT network and VLAN network are supported. - VLAN network is supported with limitations. The neutron subnets in different VLANs that are used to create share networks cannot have overlapped address spaces. Otherwise, PowerMax may have a problem to communicate with the hosts in the VLANs. To create shares for different VLANs with same subnet address, use different Data Movers. - The **Active Directory** security service is the only supported security service type and it is required to create CIFS shares. - Only one security service can be configured for each share network. - The domain name of the ``active_directory`` security service should be unique even for different tenants. - The time on the Data Mover and the Active Directory domains used in security services should be in sync (time difference should be less than 10 minutes). We recommended using same NTP server on both the Data Mover and Active Directory domains. - On eNAS, the snapshot is stored in the SavVols. eNAS system allows the space used by SavVol to be created and extended until the sum of the space consumed by all SavVols on the system exceeds the default 20% of the total space available on the system. If the 20% threshold value is reached, an alert will be generated on eNAS. Continuing to create snapshot will cause the old snapshot to be inactivated (and the snapshot data to be abandoned). The limit percentage value can be changed manually by storage administrator based on the storage needs. We recommend the administrator configures the notification on the SavVol usage. Refer to Using eNAS SnapSure document on `EMC support site `_ for more information. - eNAS has limitations on the overall numbers of Virtual Data Movers, filesystems, shares, and checkpoints. Virtual Data Mover(VDM) is created by the eNAS driver on the eNAS to serve as the Shared File Systems service share server. Similarly, the filesystem is created, mounted, and exported from the VDM over CIFS or NFS protocol to serve as the Shared File Systems service share. The eNAS checkpoint serves as the Shared File Systems service share snapshot. Refer to the NAS Support Matrix document on `EMC support site `_ for the limitations and configure the quotas accordingly. Other Remarks ~~~~~~~~~~~~~ - eNAS ``nas_quotas`` should not be confused with OpenStack manila quotas. 
The former edits quotas for mounted file systems, and displays a listing of quotas and disk usage at the file system level (by the user, group, or tree), or at the quota-tree level (by the user or group). ``nas_quotas`` also turns quotas on and off, and clears quotas records for a file system, quota tree, or a Data Mover. Refer to PowerMax eNAS CLI Reference guide on `EMC support site `_ for additional information. ``OpenStack manila quotas`` delimit the number of shares, snapshots etc. a user can create. .. code-block:: console $ manila quota-show --tenant --user +-----------------------+-------+ | Property | Value | +-----------------------+-------+ | share_groups | 50 | | gigabytes | 1000 | | snapshot_gigabytes | 1000 | | share_group_snapshots | 50 | | snapshots | 50 | | shares | 50 | | share_networks | 10 | +-----------------------+-------+ Driver options ~~~~~~~~~~~~~~ Configuration options specific to this driver: .. include:: ../../tables/manila-powermax.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/nexentastor5-driver.rst0000664000175000017500000000413013656750227032032 0ustar zuulzuul00000000000000=================== NexentaStor5 Driver =================== Nexentastor5 can be used as a storage back end for the OpenStack Shared File System service. Shares in the Shared File System service are mapped 1:1 to Nexentastor5 filesystems. Access is provided via NFS protocol and IP-based authentication. Network approach ~~~~~~~~~~~~~~~~ L3 connectivity between the storage back end and the host running the Shared File Systems share service should exist. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The drivers supports NFS shares. The following operations are supported: - Create NFS share - Delete share - Extend share - Shrink share - Allow share access Note the following limitation: * Only IP based access is supported (ro/rw). - Deny share access - Create snapshot - Revert to snapshot - Delete snapshot - Create share from snapshot - Manage share - Unmanage share Requirements ~~~~~~~~~~~~ - NexentaStor 5.x Appliance pre-provisioned and licensed - Pool and parent filesystem configured (this filesystem will contain all manila shares) Restrictions ~~~~~~~~~~~~ - Only IP share access control is allowed for NFS shares. Configuration ~~~~~~~~~~~~~~ .. code-block:: ini enabled_share_backends = NexentaStor5 Create the new back end configuration section, in this case named ``NexentaStor5``: .. code-block:: ini [NexentaStor5] share_backend_name = NexentaStor5 driver_handles_share_servers = False nexenta_folder = manila share_driver = manila.share.drivers.nexenta.ns5.nexenta_nas.NexentaNasDriver nexenta_rest_addresses = 10.3.1.1,10.3.1.2 nexenta_nas_host = 10.3.1.10 nexenta_rest_port = 8443 nexenta_pool = pool1 nexenta_nfs = True nexenta_user = admin nexenta_password = secret_password nexenta_thin_provisioning = True More information can be found at the `Nexenta documentation webpage `. Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to the share driver. .. include:: ../../tables/manila-nexentastor5.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/infortrend-nas-driver.rst0000664000175000017500000000357113656750227032334 0ustar zuulzuul00000000000000======================== Infortrend Manila driver ======================== The `Infortrend `__ Manila driver provides NFS and CIFS shared file systems to OpenStack. 
Requirements ~~~~~~~~~~~~ To use the Infortrend Manila driver, the following items are required: - GS/GSe Family firmware version v73.1.0-4 and later. - Configure at least one channel for shared file systems. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This driver supports NFS and CIFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only IP access type is supported for NFS. - Only user access type is supported for CIFS. - Deny share access. - Manage a share. - Unmanage a share. - Extend a share. - Shrink a share. Restrictions ~~~~~~~~~~~~ The Infortrend manila driver has the following restrictions: - Only IP access type is supported for NFS. - Only user access type is supported for CIFS. - Only file-level data service channel can offer the NAS service. Driver configuration ~~~~~~~~~~~~~~~~~~~~ On ``manila-share`` nodes, set the following in your ``/etc/manila/manila.conf``, and use the following options to configure it: Driver options -------------- .. include:: ../../tables/manila-infortrend.inc Back-end configuration example ------------------------------ .. code-block:: ini [DEFAULT] enabled_share_backends = ift-manila enabled_share_protocols = NFS, CIFS [ift-manila] share_backend_name = ift-manila share_driver = manila.share.drivers.infortrend.driver.InfortrendNASDriver driver_handles_share_servers = False infortrend_nas_ip = FAKE_IP infortrend_nas_user = FAKE_USER infortrend_nas_password = FAKE_PASS infortrend_share_pools = pool-1, pool-2 infortrend_share_channels = 0, 1 manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/zfssa-manila-driver.rst0000664000175000017500000001113613656750227031764 0ustar zuulzuul00000000000000=================================== Oracle ZFS Storage Appliance driver =================================== The Oracle ZFS Storage Appliance driver, version 1.0.0, enables the Oracle ZFS Storage Appliance (ZFSSA) to be used seamlessly as a shared storage resource for the OpenStack File System service (manila). The driver provides the ability to create and manage NFS and CIFS shares on the appliance, allowing virtual machines to access the shares simultaneously and securely. Requirements ~~~~~~~~~~~~ Oracle ZFS Storage Appliance Software version 2013.1.2.0 or later. Supported operations ~~~~~~~~~~~~~~~~~~~~ - Create NFS and CIFS shares. - Delete NFS and CIFS shares. - Allow or deny IP access to NFS shares. - Create snapshots of a share. - Delete snapshots of a share. - Create share from snapshot. Restrictions ~~~~~~~~~~~~ - Access to CIFS shares are open and cannot be changed from manila. - Version 1.0.0 of the driver only supports Single SVM networking mode. Appliance configuration ~~~~~~~~~~~~~~~~~~~~~~~ #. Enable RESTful service on the ZFSSA Storage Appliance. #. 
Create a new user on the appliance with the following authorizations:: scope=stmf - allow_configure=true scope=nas - allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true You can create a role with authorizations as follows:: zfssa:> configuration roles zfssa:configuration roles> role OpenStackRole zfssa:configuration roles OpenStackRole (uncommitted)> set description="OpenStack Manila Driver" zfssa:configuration roles OpenStackRole (uncommitted)> commit zfssa:configuration roles> select OpenStackRole zfssa:configuration roles OpenStackRole> authorizations create zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=stmf zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_configure=true zfssa:configuration roles OpenStackRole auth (uncommitted)> commit You can create a user with a specific role as follows:: zfssa:> configuration users zfssa:configuration users> user cinder zfssa:configuration users cinder (uncommitted)> set fullname="OpenStack Manila Driver" zfssa:configuration users cinder (uncommitted)> set initial_password=12345 zfssa:configuration users cinder (uncommitted)> commit zfssa:configuration users> select cinder set roles=OpenStackRole #. Create a storage pool. An existing pool can also be used if required. You can create a pool as follows:: zfssa:> configuration storage zfssa:configuration storage> config pool zfssa:configuration storage verify> set data=2 zfssa:configuration storage verify> done zfssa:configuration storage config> done #. Create a new project. You can create a project as follows:: zfssa:> shares zfssa:shares> project proj zfssa:shares proj (uncommitted)> commit #. Create a new or use an existing data IP address. You can create an interface as follows:: zfssa:> configuration net interfaces ip zfssa:configuration net interfaces ip (uncommitted)> set v4addrs=127.0.0.1/24 v4addrs = 127.0.0.1/24 (uncommitted) zfssa:configuration net interfaces ip (uncommitted)> set links=vnic1 links = vnic1 (uncommitted) zfssa:configuration net interfaces ip (uncommitted)> set admin=false admin = false (uncommitted) zfssa:configuration net interfaces ip (uncommitted)> commit It is required that both interfaces used for data and management are configured properly. The data interface must be different from the management interface. #. Configure the cluster. If a cluster is used as the manila storage resource, the following verifications are required: - Verify that both the newly created pool and the network interface are of type singleton and are not locked to the current controller. This approach ensures that the pool and the interface used for data always belong to the active controller, regardless of the current state of the cluster. - Verify that the management IP, data IP and storage pool belong to the same head. .. note:: A short service interruption occurs during failback or takeover, but once the process is complete, manila should be able to access the pool through the data IP. Driver options ~~~~~~~~~~~~~~ The Oracle ZFSSA driver supports these options: .. include:: ../../tables/manila-zfssa.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/windows-smb-driver.rst0000664000175000017500000000531113656750227031646 0ustar zuulzuul00000000000000.. 
_windows_smb_driver: Windows SMB driver ================== While the generic driver only supports Linux instances, you may use the Windows SMB driver when Windows VMs are preferred. This driver extends the generic one in order to provide Windows instance support. It can integrate with Active Directory domains through the Manila security service feature, which can ease access control. Although Samba is a great SMB share server, Windows instances may provide improved SMB 3 support. Limitations ----------- - ip access rules are not supported at the moment, only user based ACLs may be used - SMB (also known as CIFS) is the only supported share protocol - although it can handle Windows VMs, Manila cannot run on Windows at the moment. The VMs on the other hand may very well run on Hyper-V, KVM or any other hypervisor supported by Nova. Prerequisites ------------- This driver requires a Windows Server image having cloudbase-init installed. Cloudbase-init is the de-facto standard tool for initializing Windows VMs running on OpenStack. The driver relies on it to do tasks such as: - configuring WinRM access using password or certificate based authentication - network configuration - setting the host name .. note:: This driver was initially developed with Windows Nano Server in mind. Unfortunately, Microsoft no longer supports running Nano Servers on bare metal or virtual machines, for which reason you may want to use Windows Server Core images. Configuring ----------- Below is a config sample that enables the Windows SMB driver. .. code-block:: ini [DEFAULT] manila_service_keypair_name = manila-service enabled_share_backends = windows_smb enabled_share_protocols = CIFS [windows_smb] service_net_name_or_ip = private tenant_net_name_or_ip = private share_mount_path = C:/shares # The driver can either create share servers by itself # or use existing ones. driver_handles_share_servers = True service_instance_user = Admin service_image_name = ws2016 # nova get-password may be used to retrieve passwords generated # by cloudbase-init and encrypted with the public key. path_to_private_key = /etc/manila/ssh/id_rsa path_to_public_key = /etc/manila/ssh/id_rsa.pub winrm_cert_pem_path = /etc/manila/ssl/winrm_client_cert.pem winrm_cert_key_pem_path = /etc/manila/ssl/winrm_client_cert.key # If really needed, you can use password based authentication as well. winrm_use_cert_based_auth = True winrm_conn_timeout = 40 max_time_to_build_instance = 900 share_backend_name = windows_smb share_driver = manila.share.drivers.windows.windows_smb_driver.WindowsSMBDriver service_instance_flavor_id = 100 manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/lvm-driver.rst0000664000175000017500000000464013656750227030177 0ustar zuulzuul00000000000000================ LVM share driver ================ The Shared File Systems service can be configured to use LVM share driver. LVM share driver relies solely on LVM running on the same host with manila-share service. It does not require any services not related to the Shared File Systems service to be present to work. Prerequisites ~~~~~~~~~~~~~ The following packages must be installed on the same host with manila-share service: - NFS server - Samba server >= 3.2.0 - LVM2 >= 2.02.66 Services must be up and running, ports used by the services must not be blocked. A node with manila-share service should be accessible to share service users. LVM should be preconfigured. By default, LVM driver expects to find a volume group named ``lvm-shares``. 
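As an illustration, the administrator could pre-create that volume group on the host running the manila-share service along the following lines. This is a minimal sketch; the backing device ``/dev/sdb`` is a hypothetical example and should be replaced with whatever disk or partition has been set aside for shares:

.. code-block:: console

   # Register the backing device with LVM (hypothetical device name)
   $ sudo pvcreate /dev/sdb
   # Create the volume group that the LVM driver looks for by default
   $ sudo vgcreate lvm-shares /dev/sdb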
This volume group will be used by the driver for share provisioning. It should be managed by node administrator separately. Shared File Systems service driver configuration setting ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To use the driver, one should set up a corresponding back end. A driver must be explicitly specified as well as export IP address. A minimal back-end specification that will enable LVM share driver is presented below: .. code-block:: ini [LVM_sample_backend] driver_handles_share_servers = False share_driver = manila.share.drivers.lvm.LVMShareDriver lvm_share_export_ips = 1.2.3.4 In the example above, ``lvm_share_export_ips`` is the address to be used by clients for accessing shares. In the simplest case, it should be the same as host's address. The option allows configuring more than one IP address as a comma separated string. Supported shared file systems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports CIFS and NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only IP access type is supported for NFS. - Deny share access. - Create a snapshot. - Delete a snapshot. - Create a share from a snapshot. - Extend a share. Known restrictions ~~~~~~~~~~~~~~~~~~ - LVM driver should not be used on a host running Neutron agents, simultaneous usage might cause issues with share deletion (shares will not get deleted from volume groups). Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to this driver. .. include:: ../../tables/manila-lvm.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/maprfs-native-driver.rst0000664000175000017500000000766613656750227032170 0ustar zuulzuul00000000000000==================== MapRFS native driver ==================== MapR-FS native driver is a plug-in based on the Shared File Systems service and provides high-throughput access to the data on MapR-FS distributed file system, which is designed to hold very large amounts of data. A Shared File Systems service share in this driver is a volume in MapR-FS. Instances talk directly to the MapR-FS storage backend via the (mapr-posix) client. To mount a MapR-FS volume, the MapR POSIX client is required. Access to each share is allowed by user and group based access type, which is aligned with MapR-FS ACEs to support access control for multiple users and groups. If user name and group name are the same, the group access type will be used by default. For more details, see `MapR documentation `_. Network configuration ~~~~~~~~~~~~~~~~~~~~~ The storage backend and Shared File Systems service hosts should be in a flat network. Otherwise, the L3 connectivity between them should exist. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports MapR-FS shares. The following operations are supported: - Create MapR-FS share. - Delete MapR-FS share. - Allow MapR-FS Share access. - Only support user and group access type. - Support level of access (ro/rw). - Deny MapR-FS Share access. - Update MapR-FS Share access. - Create snapshot. - Delete snapshot. - Create share from snapshot. - Extend share. - Shrink share. - Manage share. - Unmanage share. - Manage snapshot. - Unmanage snapshot. - Ensure share. Requirements ~~~~~~~~~~~~ - Install MapR core packages, version >= 5.2.x, on the storage backend. - To enable snapshots, the MapR cluster should have at least M5 license. 
- Establish network connection between the Shared File Systems service hosts and storage backend. - Obtain a `ticket `_ for user who will be used to access MapR-FS. Back end configuration (manila.conf) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Add MapR-FS protocol to ``enabled_share_protocols``: .. code-block:: ini enabled_share_protocols = MAPRFS Create a section for MapR-FS backend. Example: .. code-block:: ini [maprfs] driver_handles_share_servers = False share_driver = manila.share.drivers.maprfs.maprfs_native.MapRFSNativeShareDriver maprfs_clinode_ip = example maprfs_ssh_name = mapr maprfs_ssh_pw = mapr share_backend_name = maprfs Set ``driver-handles-share-servers`` to ``False`` as the driver does not manage the lifecycle of ``share-servers``. Add driver backend to ``enabled_share_backends``: .. code-block:: ini enabled_share_backends = maprfs Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to this driver. .. include:: ../../tables/manila-maprfs.inc Known restrictions ~~~~~~~~~~~~~~~~~~ This driver does not handle user authentication, no tickets or users are created by this driver. This means that when 'access_allow' or 'update_access' is calling, this will have no effect without providing tickets to users. Share metadata ~~~~~~~~~~~~~~ MapR-FS shares can be created by specifying additional options. Metadata is used for this purpose. Every metadata option with ``-`` prefix is passed to MapR-FS volume. For example, to specify advisory volume quota add ``_advisoryquota=10G`` option to metadata: .. code-block:: console $ manila create MAPRFS 1 --metadata _advisoryquota=10G If you need to create a share with your custom backend name or export location instead if uuid, you can specify ``_name`` and ``_path`` options: .. code-block:: console $ manila create MAPRFS 1 --metadata _name=example _path=/example .. WARNING:: Specifying invalid options will cause an error. The list of allowed options depends on mapr-core version. See `volume create `_ for more information. manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/glusterfs-driver.rst0000664000175000017500000000471213656750227031417 0ustar zuulzuul00000000000000================ GlusterFS driver ================ GlusterFS driver uses GlusterFS, an open source distributed file system, as the storage back end for serving file shares to the Shared File Systems clients. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only IP access type is supported - Only read-write access is supported. - Deny share access. Requirements ~~~~~~~~~~~~ - Install glusterfs-server package, version >= 3.5.x, on the storage back end. - Install NFS-Ganesha, version >=2.1, if using NFS-Ganesha as the NFS server for the GlusterFS back end. - Install glusterfs and glusterfs-fuse package, version >=3.5.x, on the Shared File Systems service host. - Establish network connection between the Shared File Systems service host and the storage back end. Shared File Systems service driver configuration setting ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following parameters in the Shared File Systems service's configuration file ``manila.conf`` need to be set: .. 
code-block:: ini share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver If the back-end GlusterFS server runs on the Shared File Systems service host machine: .. code-block:: ini glusterfs_target = :/ If the back-end GlusterFS server runs remotely: .. code-block:: ini glusterfs_target = @:/ Known restrictions ~~~~~~~~~~~~~~~~~~ - The driver does not support network segmented multi-tenancy model, but instead works over a flat network, where the tenants share a network. - If NFS Ganesha is the NFS server used by the GlusterFS back end, then the shares can be accessed by NFSv3 and v4 protocols. However, if Gluster NFS is used by the GlusterFS back end, then the shares can only be accessed by NFSv3 protocol. - All Shared File Systems service shares, which map to subdirectories within a GlusterFS volume, are currently created within a single GlusterFS volume of a GlusterFS storage pool. - The driver does not provide read-only access level for shares. Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to the share driver. .. include:: ../../tables/manila-glusterfs.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/cephfs-native-driver.rst0000664000175000017500000002134413656750227032135 0ustar zuulzuul00000000000000==================== CephFS Native driver ==================== The CephFS Native driver enables the Shared File Systems service to export shared file systems to guests using the Ceph network protocol. Guests require a Ceph client in order to mount the file system. Access is controlled via Ceph's cephx authentication system. When a user requests share access for an ID, Ceph creates a corresponding Ceph auth ID and a secret key, if they do not already exist, and authorizes the ID to access the share. The client can then mount the share using the ID and the secret key. To learn more about configuring Ceph clients to access the shares created using this driver, please see the Ceph documentation ( http://docs.ceph.com/docs/master/cephfs/). If you choose to use the kernel client rather than the FUSE client, the share size limits set in the Shared File Systems service may not be obeyed. Supported shared file systems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports CephFS shares. The following operations are supported with CephFS back end: - Create a share. - Delete a share. - Allow share access. - ``read-only`` access level is supported. - ``read-write`` access level is supported. Note the following limitation for CephFS shares: - Only ``cephx`` access type is supported. - Deny share access. - Create a snapshot. - Delete a snapshot. - Create a consistency group (CG). - Delete a CG. - Create a CG snapshot. - Delete a CG snapshot. Requirements ~~~~~~~~~~~~ - Mitaka or later versions of manila. - Jewel or later versions of Ceph. - A Ceph cluster with a file system configured ( http://docs.ceph.com/docs/master/cephfs/createfs/) - ``ceph-common`` package installed in the servers running the ``manila-share`` service. - Ceph client installed in the guest, preferably the FUSE based client, ``ceph-fuse``. - Network connectivity between your Ceph cluster's public network and the servers running the ``manila-share`` service. - Network connectivity between your Ceph cluster's public network and guests. .. important:: A manila share backed onto CephFS is only as good as the underlying file system. 
Take care when configuring your Ceph cluster, and consult the latest guidance on the use of CephFS in the Ceph documentation ( http://docs.ceph.com/docs/master/cephfs/). Authorize the driver to communicate with Ceph ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Run the following commands to create a Ceph identity for the Shared File Systems service to use: .. code-block:: console read -d '' MON_CAPS << EOF allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create" EOF ceph auth get-or-create client.manila -o manila.keyring \ mds 'allow *' \ osd 'allow rw' \ mon "$MON_CAPS" ``manila.keyring``, along with your ``ceph.conf`` file, then needs to be placed on the server running the ``manila-share`` service. Enable snapshots in Ceph if you want to use them in the Shared File Systems service: .. code-block:: console ceph mds set allow_new_snaps true --yes-i-really-mean-it In the server running the ``manila-share`` service, you can place the ``ceph.conf`` and ``manila.keyring`` files in the ``/etc/ceph`` directory. Set the same owner for the ``manila-share`` process and the ``manila.keyring`` file. Add the following section to the ``ceph.conf`` file. .. code-block:: ini [client.manila] client mount uid = 0 client mount gid = 0 log file = /opt/stack/logs/ceph-client.manila.log admin socket = /opt/stack/status/stack/ceph-$name.$pid.asok keyring = /etc/ceph/manila.keyring It is advisable to modify the Ceph client's admin socket file and log file locations so that they are co-located with the Shared File Systems services' pid files and log files respectively. Configure CephFS back end in ``manila.conf`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Add CephFS to ``enabled_share_protocols`` (enforced at the Shared File Systems service's API layer). In this example we leave NFS and CIFS enabled, although you can remove these if you only use CephFS: .. code-block:: ini enabled_share_protocols = NFS,CIFS,CEPHFS #. Refer to the following table for the list of all the ``cephfs_native`` driver-specific configuration options. .. include:: ../../tables/manila-cephfs.inc Create a section to define a CephFS back end: .. code-block:: ini [cephfs1] driver_handles_share_servers = False share_backend_name = CEPHFS1 share_driver = manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver cephfs_conf_path = /etc/ceph/ceph.conf cephfs_auth_id = manila cephfs_cluster_name = ceph cephfs_enable_snapshots = False To let the driver perform snapshot related operations, set cephfs_enable_snapshots to True . Also set the ``driver-handles-share-servers`` to ``False`` as the driver does not manage the lifecycle of ``share-servers``. #. Edit ``enabled_share_backends`` to point to the driver's back-end section using the section name. In this example we are also including another back end (``generic1``), you would include whatever other back ends you have configured. .. code-block:: ini enabled_share_backends = generic1,cephfs1 Creating shares ~~~~~~~~~~~~~~~ The default share type may have ``driver_handles_share_servers`` set to ``True``. Configure a share type suitable for CephFS: .. code-block:: console manila type-create cephfstype false manila type-set cephfstype set share_backend_name='CEPHFS1' Then create a share: .. code-block:: console manila create --share-type cephfstype --name cephshare1 cephfs 1 Note the export location of the share: .. 
code-block:: console manila share-export-location-list cephshare1 The export location of the share contains the Ceph monitor (mon) addresses and ports, and the path to be mounted. It is of the form, ``{mon ip addr:port}[,{mon ip addr:port}]:{path to be mounted}`` Allowing access to shares ~~~~~~~~~~~~~~~~~~~~~~~~~ Allow Ceph auth ID ``alice`` access to the share using ``cephx`` access type. .. code-block:: console manila access-allow cephshare1 cephx alice Note the access status and the secret access key of ``alice``. .. code-block:: console manila access-list cephshare1 Mounting shares using FUSE client ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Using the secret key of the authorized ID ``alice``, create a keyring file ``alice.keyring``. .. code-block:: ini [client.alice] key = AQA8+ANW/4ZWNRAAOtWJMFPEihBA1unFImJczA== Using the monitor IP addresses from the share's export location, create a configuration file, ``ceph.conf``: .. code-block:: ini [client] client quota = true mon host = 192.168.1.7:6789, 192.168.1.8:6789, 192.168.1.9:6789 Finally, mount the file system, substituting the file names of the keyring and configuration files you just created, and substituting the path to be mounted from the share's export location: .. code-block:: console sudo ceph-fuse ~/mnt \ --id=alice \ --conf=./ceph.conf \ --keyring=./alice.keyring \ --client-mountpoint=/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c Known restrictions ~~~~~~~~~~~~~~~~~~ Consider the driver as a building block for supporting multi-tenant workloads in the future. However, it can be used in private cloud deployments. - The guests have direct access to Ceph's public network. - The snapshot support of the driver is disabled by default. ``cephfs_enable_snapshots`` configuration option needs to be set to ``True`` to allow snapshot operations. - Snapshots are read-only. A user can read a snapshot's contents from the ``.snap/{manila-snapshot-id}_{unknown-id}`` folder within the mounted share. - To restrict share sizes, CephFS uses quotas that are enforced in the client side. The CephFS clients are relied on to respect quotas. Security ~~~~~~~~ - Each share's data is mapped to a distinct Ceph RADOS namespace. A guest is restricted to access only that particular RADOS namespace. - An additional level of resource isolation can be provided by mapping a share's contents to a separate RADOS pool. This layout would be preferred only for cloud deployments with a limited number of shares needing strong resource separation. You can do this by setting a share type specification, ``cephfs:data_isolated`` for the share type used by the cephfs driver. .. code-block:: console manila type-key cephfstype set cephfs:data_isolated=True - Untrusted manila guests pose security risks to the Ceph storage cluster as they would have direct access to the cluster's public network. manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/infinidat-share-driver.rst0000664000175000017500000000762613656750227032455 0ustar zuulzuul00000000000000================================ INFINIDAT InfiniBox Share driver ================================ The INFINIDAT Share driver provides support for managing filesystem shares on the INFINIDAT InfiniBox storage systems. This section explains how to configure the INFINIDAT driver. Supported operations ~~~~~~~~~~~~~~~~~~~~ - Create and delete filesystem shares. - Ensure filesystem shares. - Extend a share. - Create and delete filesystem snapshots. - Create a share from a share snapshot. - Revert a share to its snapshot. 
- Mount a snapshot. - Set access rights to shares and snapshots. Note the following limitations: - Only IP access type is supported. - Both RW & RO access levels are supported. External package installation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver requires the ``infinisdk`` package for communicating with InfiniBox systems. Install the package from PyPI using the following command: .. code-block:: console $ pip install infinisdk Setting up the storage array ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Create a storage pool object on the InfiniBox array in advance. The storage pool will contain shares managed by OpenStack. Refer to the InfiniBox manuals for details on pool management. Driver configuration ~~~~~~~~~~~~~~~~~~~~ Edit the ``manila.conf`` file, which is usually located under the following path ``/etc/manila/manila.conf``. * Add a section for the INFINIDAT driver back end. * Under the ``[DEFAULT]`` section, set the ``enabled_share_backends`` parameter with the name of the new back-end section. Configure the driver back-end section with the parameters below. * Configure the driver name by setting the following parameter: .. code-block:: ini share_driver = manila.share.drivers.infinidat.infinibox.InfiniboxShareDriver * Configure the management IP of the InfiniBox array by adding the following parameter: .. code-block:: ini infinibox_hostname = InfiniBox management IP * Configure user credentials: The driver requires an InfiniBox user with administrative privileges. We recommend creating a dedicated OpenStack user account that holds an administrative user role. Refer to the InfiniBox manuals for details on user account management. Configure the user credentials by adding the following parameters: .. code-block:: ini infinibox_login = Infinibox management login infinibox_password = Infinibox management password * Configure the name of the InfiniBox pool by adding the following parameter: .. code-block:: ini infinidat_pool_name = Pool as defined in the InfiniBox * Configure the name of the InfiniBox NAS network space by adding the following parameter: .. code-block:: ini infinidat_nas_network_space_name = Network space as defined in the InfiniBox * The back-end name is an identifier for the back end. We recommend using the same name as the name of the section. Configure the back-end name by adding the following parameter: .. code-block:: ini share_backend_name = back-end name * Thin provisioning: The INFINIDAT driver supports creating thin or thick provisioned filesystems. Configure thin or thick provisioning by adding the following parameter: .. code-block:: ini infinidat_thin_provision = true/false This parameter defaults to ``true``. Configuration example ~~~~~~~~~~~~~~~~~~~~~ .. code-block:: ini [DEFAULT] enabled_share_backends = infinidat-pool-a [infinidat-pool-a] share_driver = manila.share.drivers.infinidat.infinibox.InfiniboxShareDriver share_backend_name = infinidat-pool-a driver_handles_share_servers = false infinibox_hostname = 10.1.2.3 infinibox_login = openstackuser infinibox_password = openstackpass infinidat_pool_name = pool-a infinidat_nas_network_space_name = nas_space infinidat_thin_provision = true Driver options ~~~~~~~~~~~~~~ Configuration options specific to this driver: .. 
include:: ../../tables/manila-infinidat.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/generic-driver.rst0000664000175000017500000000650713656750227031021 0ustar zuulzuul00000000000000======================================= Generic approach for share provisioning ======================================= The Shared File Systems service can be configured to use Compute VMs and Block Storage service volumes. There are two modules that handle them in the Shared File Systems service: - The ``service_instance`` module creates VMs in Compute with a predefined image called ``service image``. This module can be used by any driver for provisioning of service VMs to be able to separate share resources among tenants. - The ``generic`` module operates with Block Storage service volumes and VMs created by the ``service_instance`` module, then creates shared filesystems based on volumes attached to VMs. Network configurations ~~~~~~~~~~~~~~~~~~~~~~ Each driver can handle networking in its own way, see: https://wiki.openstack.org/wiki/manila/Networking. One of the two possible configurations can be chosen for share provisioning using the ``service_instance`` module: - Service VM has one network interface from a network that is connected to a public router. For successful creation of a share, the user network should be connected to a public router, too. - Service VM has two network interfaces, the first one is connected to the service network, the second one is connected directly to the user's network. Requirements for service image ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Linux based distro - NFS server - Samba server >= 3.2.0, that can be configured by data stored in registry - SSH server - Two network interfaces configured to DHCP (see network approaches) - ``exportfs`` and ``net conf`` libraries used for share actions - The following files will be used, so if their paths differ one needs to create at least symlinks for them: - ``/etc/exports``: permanent file with NFS exports. - ``/var/lib/nfs/etab``: temporary file with NFS exports used by ``exportfs``. - ``/etc/fstab``: permanent file with mounted filesystems. - ``/etc/mtab``: temporary file with mounted filesystems used by ``mount``. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports CIFS and NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only IP access type is supported for NFS and CIFS. - Deny share access. - Create a snapshot. - Delete a snapshot. - Create a share from a snapshot. - Extend a share. - Shrink a share. Known restrictions ~~~~~~~~~~~~~~~~~~ - One of nova's configurations only allows 26 shares per server. This limit comes from the maximum number of virtual PCI interfaces that are used for block device attaching. There are 28 virtual PCI interfaces, in this configuration, two of them are used for server needs and the other 26 are used for attaching block devices that are used for shares. Using Windows instances ~~~~~~~~~~~~~~~~~~~~~~~ While the generic driver only supports Linux instances, you may use the Windows SMB driver when Windows instances are preferred. For more details, please check out the following page: :ref:`windows_smb_driver`. Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to this driver. .. 
include:: ../../tables/manila-generic.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/netapp-cluster-mode-driver.rst0000664000175000017500000000371513656750227033273 0ustar zuulzuul00000000000000================================== NetApp Clustered Data ONTAP driver ================================== The Shared File Systems service can be configured to use NetApp clustered Data ONTAP version 8. Network approach ~~~~~~~~~~~~~~~~ L3 connectivity between the storage cluster and Shared File Systems service host should exist, and VLAN segmentation should be configured. The clustered Data ONTAP driver creates storage virtual machines (SVM, previously known as vServers) as representations of the Shared File Systems service share server interface, configures logical interfaces (LIFs) and stores shares there. Supported shared filesystems and operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The driver supports CIFS and NFS shares. The following operations are supported: - Create a share. - Delete a share. - Allow share access. Note the following limitations: - Only IP access type is supported for NFS. - Only user access type is supported for CIFS. - Deny share access. - Create a snapshot. - Delete a snapshot. - Create a share from a snapshot. - Extend a share. - Shrink a share. - Create a consistency group. - Delete a consistency group. - Create a consistency group snapshot. - Delete a consistency group snapshot. Required licenses ~~~~~~~~~~~~~~~~~ - NFS - CIFS - FlexClone Known restrictions ~~~~~~~~~~~~~~~~~~ - For CIFS shares an external active directory service is required. Its data should be provided via security-service that is attached to used share-network. - Share access rule by user for CIFS shares can be created only for existing user in active directory. - To be able to configure clients to security services, the time on these external security services and storage should be synchronized. The maximum allowed clock skew is 5 minutes. Driver options ~~~~~~~~~~~~~~ The following table contains the configuration options specific to the share driver. .. include:: ../../tables/manila-netapp.inc manila-10.0.0/doc/source/configuration/shared-file-systems/drivers/dell-emc-unity-driver.rst0000664000175000017500000003722713656750227032240 0ustar zuulzuul00000000000000===================== Dell EMC Unity driver ===================== The EMC Shared File Systems service driver framework (EMCShareDriver) utilizes the EMC storage products to provide the shared file systems to OpenStack. The EMC driver is a plug-in based driver which is designed to use different plug-ins to manage different EMC storage products. The Unity plug-in manages the Unity system to provide shared filesystems. The EMC driver framework with the Unity plug-in is referred to as the Unity driver in this document. This driver performs the operations on Unity through RESTful APIs. Each backend manages one Storage Processor of Unity. Configure multiple Shared File Systems service backends to manage multiple Unity systems. Requirements ------------ - Unity OE 4.1.x or higher. - StorOps 1.1.0 or higher is installed on Manila node. - Following licenses are activated on Unity: * CIFS/SMB Support * Network File System (NFS) * Thin Provisioning * Fiber Channel (FC) * Internet Small Computer System Interface (iSCSI) Supported shared filesystems and operations ------------------------------------------- In detail, users are allowed to do following operation with EMC Unity Storage Systems. * Create/delete a NFS share. 
* Create/delete a CIFS share. * Extend the size of a share. * Shrink the size of a share. * Modify the host access privilege of a NFS share. * Modify the user access privilege of a CIFS share. * Create/Delete snapshot of a share. * Create a new share from snapshot. * Revert a share to a snapshot. * Manage/Unmanage a share server. * Manage/Unmanage a share. * Manage/Unmanage a snapshot. Supported Network Topologies ---------------------------- * Flat This type is fully supported by Unity share driver, however flat networks are restricted due to the limited number of tenant networks that can be created from them. * VLAN We recommend this type of network topology in Manila. In most use cases, VLAN is used to isolate the different tenants and provide an isolated network for each tenant. To support this function, an administrator needs to set a slot connected with Unity Ethernet port in ``Trunk`` mode or allow multiple VLANs from the slot. * VXLAN Unity native VXLAN is still unavailable. However, with the `HPB `_ (Hierarchical Port Binding) in Networking and Shared file system services, it is possible that Unity co-exists with VXLAN enabled network environment. Pre-Configurations ------------------ On Manila Node ~~~~~~~~~~~~~~~ Python library ``storops`` is required to run Unity driver. Install it with the ``pip`` command. You may need root privilege to install python libraries. .. code-block:: console $ pip install storops On Unity System ~~~~~~~~~~~~~~~~ #. Configure system level NTP server. Open ``Unisphere`` of your Unity system and navigate to: .. code-block:: console Unisphere -> Settings -> Management -> System Time and NTP Select ``Enable NTP synchronization`` and add your NTP server(s). The time on the Unity system and the Active Directory domains used in security services should be in sync. We recommend using the same NTP server on both the Unity system and Active Directory domains. #. Configure system level DNS server. Open ``Unisphere`` of your Unity system and navigate to: .. code-block:: console Unisphere -> Settings -> Management -> DNS Server Select ``Configure DNS server address manually`` and add your DNS server(s). Backend configurations ---------------------- Following configurations need to be configured in `/etc/manila/manila.conf` for the Unity driver. .. code-block:: ini share_driver = manila.share.drivers.dell_emc.driver.EMCShareDriver emc_share_backend = unity emc_nas_server = emc_nas_login = emc_nas_password = unity_server_meta_pool = unity_share_data_pools = unity_ethernet_ports = driver_handles_share_servers = True/False unity_share_server = - `emc_share_backend` The plugin name. Set it to `unity` for the Unity driver. - `emc_nas_server` The management IP for Unity. - `unity_server_meta_pool` The name of the pool to persist the meta-data of NAS server. This option is required. - `unity_share_data_pools` Comma separated list specifying the name of the pools to be used by this backend. Do not set this option if all storage pools on the system can be used. Wild card character is supported. Examples: .. code-block:: ini # Only use pool_1 unity_share_data_pools = pool_1 # Only use pools whose name stars from pool_ unity_share_data_pools = pool_* # Use all pools on Unity unity_share_data_pools = * - `unity_ethernet_ports` Comma separated list specifying the ethernet ports of Unity system that can be used for share. Do not set this option if all ethernet ports can be used. Wild card character is supported. 
Both the normal ethernet port and link aggregation port can be used by Unity share driver. Examples: .. code-block:: ini # Only use spa_eth1 unity_ethernet_ports = spa_eth1 # Use port whose name stars from spa_ unity_ethernet_ports = spa_* # Use all Link Aggregation ports unity_ethernet_ports = sp*_la_* # Use all available ports unity_ethernet_ports = * - `driver_handles_share_servers` Unity driver requires this option to be as `True` or `False`. Need to set `unity_share_server` when the value is `False`. - `unity_share_server` One of NAS server names in Unity, it is used for share creation when the driver is in `DHSS=False` mode. Restart of :term:`manila-share` service is needed for the configuration changes to take effect. Supported MTU size ------------------ Unity currently only supports 1500 and 9000 as the mtu size, the user can change the above mtu size from Unity Unisphere: #. In the Unisphere, go to `Settings`, `Access`, and then `Ethernet`. #. Double click the ethernet port. #. Select the `MTU` size from the drop down list. The Unity driver will select the port where mtu is equal to the mtu of share network during share server creation. IPv6 support ------------ IPv6 support for Unity driver is introduced in Queens release. The feature is divided into two parts: #. The driver is able to manage share or snapshot in the Neutron IPv6 network. #. The driver is able to connect Unity management interface using its IPv6 address. Pre-Configurations for IPv6 support ----------------------------------- The following parameters need to be configured in `/etc/manila/manila.conf` for the Unity driver: network_plugin_ipv6_enabled = True - `network_plugin_ipv6_enabled` indicates IPv6 is enabled. If you want to connect Unity using IPv6 address, you should configure IPv6 address by `/net/if/mgmt` uemcli command, `mgmtInterfaceSettings` RESTful api or the system settings of Unity GUI for Unity and specify the address in `/etc/manila/manila.conf`: emc_nas_server = Supported share creation in mode that driver does not create and destroy share servers (DHSS=False) --------------------------------------------------------------------------------------------------- To create a file share in this mode, you need to: #. Create NAS server with network interface in Unity system. #. Set 'driver_handles_share_servers=False' and 'unity_share_server' in ``/etc/manila/manila.conf``: .. code-block:: ini driver_handles_share_servers = False unity_share_server = #. Specify the share type with driver_handles_share_servers = False extra specification: .. code-block:: console $ manila type-create ${share_type_name} False #. Create share. .. code-block:: console $ manila create ${share_protocol} ${size} --name ${share_name} --share-type ${share_type_name} .. note:: Do not specify the share network in share creation command because no share servers will be created. Driver will use the unity_share_server specified for share creation. Snapshot support ---------------- In the Mitaka and Newton release of OpenStack, Snapshot support is enabled by default for a newly created share type. Starting with the Ocata release, the snapshot_support extra spec must be set to True in order to allow snapshots for a share type. If the 'snapshot_support' extra_spec is omitted or if it is set to False, users would not be able to create snapshots on shares of this share type. The feature is divided into two parts: 1. The driver is able to create/delete snapshot of share. 2. The driver is able to create share from snapshot. 
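Before configuring anything, it can be useful to confirm whether an existing share type already carries these extra specs. A minimal check (the exact output format may vary between client versions):

.. code-block:: console

   # List share types together with their extra specs and look for
   # snapshot_support and create_share_from_snapshot_support
   $ manila extra-specs-list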
To snapshot a share and create a share from the snapshot
---------------------------------------------------------

First, you need to create a share from a share type that has the extra
specifications (snapshot_support=True,
create_share_from_snapshot_support=True). Then snapshot the share with the
following command:

.. code-block:: console

   $ manila snapshot-create ${source_share_name} --name ${target_snapshot_name} --description " "

After creating the snapshot in the previous step, you can create a share from
that snapshot. Use the following command:

.. code-block:: console

   $ manila create nfs 1 --name ${target_share_name} --metadata source=snapshot --description " " --snapshot-id ${source_snapshot_id}

To manage an existing share server
----------------------------------

To manage a share server existing in the Unity system, you need to:

#. Create a network, subnet, port (the IP address of the NAS server in the
   Unity system), and share network in OpenStack.

   .. code-block:: console

      $ openstack network create ${network_name} --provider-network-type ${network_type}

      $ openstack subnet create ${subnet_name} --network ${network_name} --subnet-range ${subnet_range}

      $ openstack port create --network ${network_name} --fixed-ip subnet=${subnet_name},ip-address=${ip address} \
        ${port_name} --device-owner=manila:share

      $ manila share-network-create --name ${share_network_name} --neutron-net-id ${network_name} \
        --neutron-subnet-id ${subnet_name}

#. Manage the share server in OpenStack:

   .. code-block:: console

      $ manila share-server-manage ${host} ${share_network_name} ${identifier}

   .. note:: '${identifier}' is the NAS server name in the Unity system.

To un-manage a Manila share server
----------------------------------

To unmanage a share server existing in OpenStack:

.. code-block:: console

   $ manila share-server-unmanage ${share_server_id}

To manage an existing share
---------------------------

To manage a share existing in the Unity system:

- In DHSS=True mode

  You need to make sure the related share server already exists in OpenStack;
  otherwise, manage the share server first (see the section `To manage an
  existing share server` above).

  .. code-block:: console

     $ manila manage ${service_host} ${protocol} '${export_path}' --name ${share_name} --driver_options size=${share_size} \
       --share_type ${share_type} --share_server_id ${share_server_id}

  .. note:: '${share_server_id}' is the ID of the share server in OpenStack.
     '${share_type}' should have the property
     'driver_handles_share_servers=True'.

- In DHSS=False mode

  .. code-block:: console

     $ manila manage ${service_host} ${protocol} '${export_path}' --name ${share_name} --driver_options size=${share_size} \
       --share_type ${share_type}

  .. note:: '${share_type}' should have the property
     'driver_handles_share_servers=False'.

To un-manage a Manila share
---------------------------

To unmanage a share existing in OpenStack:

.. code-block:: console

   $ manila unmanage ${share_id}

To manage an existing share snapshot
------------------------------------

To manage a snapshot existing in the Unity system, you need to make sure the
related share instance already exists in OpenStack; otherwise, manage the
share first (see the section `To manage an existing share` above).

.. code-block:: console

   $ manila snapshot-manage --name ${name} ${share_name} ${provider_location} --driver_options size=${snapshot_size}

.. note:: '${provider_location}' is the snapshot name in the Unity system.
   '${share_name}' is the share name or ID in OpenStack.

To un-manage a Manila share snapshot
------------------------------------

To unmanage a snapshot existing in OpenStack:

.. code-block:: console

   $ manila snapshot-unmanage ${snapshot_id}

Supported security services
---------------------------

The Unity share driver provides ``IP``-based authentication method support
for ``NFS`` shares and ``user``-based authentication method support for
``CIFS`` shares respectively. For ``CIFS`` shares, Microsoft Active Directory
is the only supported security service.
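To illustrate the two access models just described, access rules are granted
with the regular ``manila access-allow`` command. The share names, client
network, and account name below are placeholders for this sketch, not values
taken from this guide.

.. code-block:: console

   $ # NFS shares accept IP access rules
   $ manila access-allow ${nfs_share_name} ip 192.0.2.0/24 --access-level rw

   $ # CIFS shares accept user access rules, for example an Active Directory account
   $ manila access-allow ${cifs_share_name} user Administrator --access-level rw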
.. _unity_file_io_load_balance:

IO Load balance
---------------

The Unity driver automatically distributes the file interfaces per storage
processor based on the option ``unity_ethernet_ports``. This balances IO
traffic. The recommended configuration for ``unity_ethernet_ports`` specifies
balanced ports per storage processor. For example:

.. code-block:: ini

   # Use eth2 from both SPs
   unity_ethernet_ports = spa_eth2, spb_eth2

Restrictions
------------

The Unity driver has the following restrictions.

- EMC Unity does not support the same IP in different VLANs.
- Only the IP access type is supported for NFS.
- Only the user access type is supported for CIFS.

API Implementations
-------------------

The following driver features are implemented in the plugin.

* create_share: Create a share and export it based on the protocol used
  (NFS or CIFS).
* create_share_from_snapshot: Create a share from a snapshot - clone a
  snapshot.
* delete_share: Delete a share.
* extend_share: Extend the maximum size of a share.
* shrink_share: Shrink the minimum size of a share.
* create_snapshot: Create a snapshot for the specified share.
* delete_snapshot: Delete the snapshot of the share.
* update_access: Recover, add, or delete user/host access to a share.
* allow_access: Allow access (read write/read only) of a user to a CIFS
  share. Allow access (read write/read only) of a host to an NFS share.
* deny_access: Remove access (read write/read only) of a user from a CIFS
  share. Remove access (read write/read only) of a host from an NFS share.
* ensure_share: Check whether the share exists or not.
* update_share_stats: Retrieve share-related statistics from Unity.
* get_network_allocations_number: Return the number of network allocations
  for creating VIFs.
* setup_server: Set up and configure a share server with the given network
  parameters.
* teardown_server: Tear down the share server.
* revert_to_snapshot: Revert a share to a snapshot.

Driver options
--------------

Configuration options specific to this driver:

.. include:: ../../tables/manila-unity.inc
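As a quick sanity check after configuring the backend, you can ask the
scheduler what it sees. Both commands below are standard admin-only Manila
client commands; the backend and pool names in their output depend entirely
on your deployment and are not prescribed by this guide.

.. code-block:: console

   $ manila service-list
   $ manila pool-list

``manila service-list`` should report the ``manila-share`` service for the
Unity backend as enabled and up, and ``manila pool-list`` should list the
storage pools selected by ``unity_share_data_pools``.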
manila-10.0.0/doc/source/configuration/shared-file-systems/log-files.rst0000664000175000017500000000164113656750227026311 0ustar zuulzuul00000000000000
=====================================
Log files used by Shared File Systems
=====================================

The corresponding log file of each Shared File Systems service is stored in
the ``/var/log/manila/`` directory of the host on which each service runs.

.. list-table:: Log files used by Shared File Systems services
   :header-rows: 1

   * - Log file
     - Service/interface (for CentOS, Fedora, openSUSE, Red Hat Enterprise
       Linux, and SUSE Linux Enterprise)
     - Service/interface (for Ubuntu and Debian)
   * - ``api.log``
     - ``openstack-manila-api``
     - ``manila-api``
   * - ``manila-manage.log``
     - ``manila-manage``
     - ``manila-manage``
   * - ``scheduler.log``
     - ``openstack-manila-scheduler``
     - ``manila-scheduler``
   * - ``share.log``
     - ``openstack-manila-share``
     - ``manila-share``
   * - ``data.log``
     - ``openstack-manila-data``
     - ``manila-data``

manila-10.0.0/doc/source/configuration/shared-file-systems/samples/0000775000175000017500000000000013656750362025340 5ustar zuulzuul00000000000000
manila-10.0.0/doc/source/configuration/shared-file-systems/samples/rootwrap.conf.rst0000664000175000017500000000045713656750227030701 0ustar zuulzuul00000000000000
=============
rootwrap.conf
=============

The ``rootwrap.conf`` file defines configuration values used by the
``rootwrap`` script when the Shared File Systems service must escalate its
privileges to those of the root user.

.. literalinclude:: ../../../../../etc/manila/rootwrap.conf
   :language: ini

manila-10.0.0/doc/source/configuration/shared-file-systems/samples/api-paste.ini.rst0000664000175000017500000000033713656750227030536 0ustar zuulzuul00000000000000
=============
api-paste.ini
=============

The Shared File Systems service stores its API configuration settings in the
``api-paste.ini`` file.

.. literalinclude:: ../../../../../etc/manila/api-paste.ini
   :language: ini

manila-10.0.0/doc/source/configuration/shared-file-systems/samples/sample_policy.rst0000664000175000017500000000136213656750227030734 0ustar zuulzuul00000000000000
====================
Manila Sample Policy
====================

The following is a sample Manila policy file that has been auto-generated
from default policy values in code. If you are using the default policies,
then maintaining this file is not necessary. It is here to help explain which
policy operations protect specific Manila APIs, but it is not suggested that
you copy and paste it into a deployment unless you plan to provide a
different policy for an operation that does not use the default. For
instance, if you want to change the default value of "share:create", you only
need to keep this single rule in your policy config file
(**/etc/manila/policy.json**).

.. literalinclude:: ../../../_static/manila.policy.yaml.sample
   :language: ini
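As a small sketch of that advice, an override file that changes only the
``share:create`` rule could contain nothing but that single entry. The rule
string used here (``role:admin``, which would restrict share creation to
users holding the admin role) is only an example and is not Manila's default
policy for this operation.

.. code-block:: json

   {
       "share:create": "role:admin"
   }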
manila-10.0.0/doc/source/configuration/shared-file-systems/samples/manila.conf.rst0000664000175000017500000000103413656750227030255 0ustar zuulzuul00000000000000
===========
manila.conf
===========

The ``manila.conf`` file is installed in ``/etc/manila`` by default. When you
manually install the Shared File Systems service, the options in the
``manila.conf`` file are set to default values.

The ``manila.conf`` file contains most of the options needed to configure the
Shared File Systems service.

.. only:: html

   .. literalinclude:: ../../../_static/manila.conf.sample
      :language: ini

.. only:: latex

   See the online version of this documentation for the full config file
   example.

manila-10.0.0/doc/source/configuration/shared-file-systems/samples/policy.rst0000664000175000017500000000056613656750227027370 0ustar zuulzuul00000000000000
====================
Policy configuration
====================

Configuration
~~~~~~~~~~~~~

.. only:: html

   The following is an overview of all available policies in Manila.

   .. show-policy::
      :config-file: etc/manila/manila-policy-generator.conf

.. only:: latex

   See the online version of this documentation for the list of available
   policies in Manila.

manila-10.0.0/doc/source/configuration/shared-file-systems/samples/index.rst0000664000175000017500000000054413656750227027204 0ustar zuulzuul00000000000000
======================================================
Shared File Systems service sample configuration files
======================================================

All the files in this section can be found in ``/etc/manila``.

.. toctree::
   :maxdepth: 1

   manila.conf.rst
   api-paste.ini.rst
   rootwrap.conf.rst
   policy.rst
   sample_policy.rst

manila-10.0.0/doc/source/configuration/shared-file-systems/drivers.rst0000664000175000017500000000422013656750227026102 0ustar zuulzuul00000000000000
=============
Share drivers
=============

.. sort the drivers by open source software
.. and the drivers for proprietary components

.. toctree::
   :maxdepth: 1

   drivers/generic-driver.rst
   drivers/cephfs-native-driver.rst
   drivers/dell-emc-powermax-driver.rst
   drivers/dell-emc-unity-driver.rst
   drivers/dell-emc-vnx-driver.rst
   drivers/glusterfs-driver.rst
   drivers/glusterfs-native-driver.rst
   drivers/hdfs-native-driver.rst
   drivers/lvm-driver.rst
   drivers/zfs-on-linux-driver.rst
   drivers/zfssa-manila-driver.rst
   drivers/emc-isilon-driver.rst
   drivers/hitachi-hnas-driver.rst
   drivers/hitachi-hsp-driver.rst
   drivers/hpe-3par-share-driver.rst
   drivers/huawei-nas-driver.rst
   drivers/ibm-spectrumscale-driver.rst
   drivers/infinidat-share-driver.rst
   drivers/infortrend-nas-driver.rst
   drivers/maprfs-native-driver.rst
   drivers/netapp-cluster-mode-driver.rst
   drivers/quobyte-driver.rst
   drivers/windows-smb-driver.rst
   drivers/nexentastor5-driver.rst

To use different share drivers for the Shared File Systems service, use the
parameters described in these sections.

The Shared File Systems service can handle multiple drivers at once. The
configuration for all of them follows a common paradigm:

#. In the configuration file ``manila.conf``, configure the option
   ``enabled_backends`` with the list of names for your configuration. For
   example, if you want to enable two drivers and name them ``Driver1`` and
   ``Driver2``:

   .. code-block:: ini

      [DEFAULT]
      # ...
      enabled_backends = Driver1 Driver2

#. Configure a separate section for each driver using these names. You need
   to define in each section at least the option ``share_driver`` and assign
   it the value of your driver. In this example it is the generic driver:

   .. code-block:: ini

      [Driver1]
      share_driver = manila.share.drivers.generic.GenericShareDriver
      # ...

      [Driver2]
      share_driver = manila.share.drivers.generic.GenericShareDriver
      # ...

The share drivers are included in the `Shared File Systems repository `_.

manila-10.0.0/doc/source/configuration/shared-file-systems/api.rst0000664000175000017500000000045013656750227025176 0ustar zuulzuul00000000000000
=====================================
Shared File Systems API configuration
=====================================

Configuration options
~~~~~~~~~~~~~~~~~~~~~

The following options allow configuration of the APIs that the Shared File
Systems service supports.
.. include:: ../tables/manila-api.inc

manila-10.0.0/doc/source/configuration/figures/0000775000175000017500000000000013656750362021450 5ustar zuulzuul00000000000000
manila-10.0.0/doc/source/configuration/figures/hds_network.jpg0000664000175000017500000047404213656750227024504 0ustar zuulzuul00000000000000
[hds_network.jpg: binary JPEG image data omitted]
êºÞ…àÏ éîtôºðMÚ´I¥ÚúöøOãÏø!üs¢þÌ_þ=xûã÷Ã/ø)Á_4ütø¬÷?ð•xóâ߉5½]ðþ¿ã}BçY×n5ψOñÁ^)ð*ß^jú†§{kñrÃWÕGŸ«  öŸÅÏø/_ÄŸÙKöý£eßø.WüétïÙú4³¿ÅOƒµO‰ÿ ¾$xVÆÿZ†/x¤|[ñž§áÙø§B›D¾Óîüm5oj£WðŸ‹ü¤ngÒ?1ÿb|;ý»¿ààÙ¯öÎÿ‚G~ÆÿdoØŸà׆å¿ý ¾#_ü:ƒáÂßk£Fø‘§ø³þÿ øoRÕ< £7Ä­Äžøy§ø—Ë}©-–§ãWº=´ö§úÐ@ø÷ÿÚÿ“+ø+ÿ\þ êÔñÅ}É@P@P@£ðÜœk#<§=ÏÛA?Žå@Ÿ@P@P@P@P@P@P@P@P@~ÿÁÑßò‚Û—þí›ÿ[ö} ßê( € ( € ( € ( € ( € ( € ( €>_ € ( € ( €>sý-î#ý°¿à¢ÉÑÁs¬~Ë&Þg‰Ò+'à¬ñËäÈÊ_)þI<²Ûål(ô²€ ( € ( € ( € ( Èø'½Å§ìcðfÞê ­®"O‰ ¸‰áš2~)xÝ€x¤Ut%X0 £*AèE}½@P@P@¡ðßþc?÷ÿÛêõ ( € ( € ( € ( € ( € ( € ( € ( € (ðþŽÿ”~Ü¿÷lßúسí~ÿP@P@P@P@P@P@P@òýP@P@â:•Ž~|SÕþ:|,ð«üD‹Æ¾ð¿ƒþ0ü'´Õ4kþ&°ðuþ³qàß|;×øoe7ÚaŸÄ¶þðÏÄÏø—â‰,á‹í~𭦛aámSPkKxÏÃ6M7šÚ|6øáï…?üðבܧ‡| á­ÂúCßKÆ¥ui£ÙCfº†­w6ë¬êOê΢`ŽMGT¹»¾™|Û‡ Ú€ ( € ( €=Cá¿üÆîÿ·ÔêP@P@P@P@P@P@P@P@Pàüÿ((ý¹îÙ¿õ°ÿgÚýþ € ( € ( € ( € ( € ( € ( € (åú( € ( € ( € (  ­í纚;{hži¥m±Å–v=x°–'Tb&€?(à¡?ðV„±4zÏÃ?†‰¡ücý¨¡Š[Ytpo>|#¿xÈŽçâí„ÑÉ«øŠÍÙeOéW–×ÈPÿmj:6Oû7‡^fœ\èf™·¶Ê¸u¸Î5yTq¹œÖ8T‹TèÊÖxÚ°•=qOÔ¹?¼]ñÿ%à‰É2/«çœ\£*s ¦ç–äÕ²–iR””ªâ Ú’ËhT…m?ÚkaS‡´þlîà²ßðRë«›‹“ûSx–q<³˜m¼ð hLÒ4†+xÁ!‚=Û"‰~Xã ‹Àý-|9Œcõg .X¨óK™JNÊ×”ž6îOvÞïSøÞ~?x»9Îëž.<ò”¹a—äÑ„y›|°ŠË­«Ú1Z%dAÿÿ‚–ÿÑÕx³ÿ /†_üÄSÿˆGá×ý_ü*Ì¿ù´Ÿø~.ÿÑkÿŸÿÁÿÿ‚–ÿÑÕx³ÿ /†_üÄQÿïú&0¿øU™óhÄ{ñwþ‹\oþäÿüîø|ü·þŽ«ÅŸøI|2ÿæ"ø„~Ñ1…ÿ¬Ëÿ›Cþ#ß‹¿ôZãð‡'ÿçpÃãÿॿôu^,ÿÂKá—ÿ1Ä#ðëþ‰Œ/þf_üÚñü]ÿ¢×ÿ„9?ÿ;ƒþÿ-ÿ£ªñgþ_ ¿ùˆ£þ!‡_ôLað«2ÿæÐÿˆ÷âïý¸ßü!ÉÿùÜðøÿø)oýW‹?ð’øeÿÌEñü:ÿ¢c ÿ…Y—ÿ6‡üG¿èµÆÿáOÿÎáÑÿÁbÿà¦2ºEíOâù%‘Ö8ã Ýäw!QÀå™ÙˆUU± I¡øIáÊM¾Â$“m¼^b’Kvß×tK«ã׋Òj1ãLsm¤’Àe ¶ôI%—]¶ôIn~¸|1ý©mßÙÇÀºíÿ+ý±>"ü?ðæµhuŸ…²7‡¼/ðÎÏö‡øáä:˜gñ›/‚íî~|=–aö}OXÖ^Ã]’%º·OìBM k•f\3ÁœC¯ÃÞp–_ÄQŸ±Í8¯‰Ì§d¼ÉóGQc%Ï—½N•R‚|²ýý5[Øþå“ñŸˆœ'–ḳÅÞ<Ír¼&"XÉ8 ƒÉéñWrµË,U)eðžK•Ê^íjø‡Kâ§õZ¯ õÜOÙ7ö˜ÖÿlOÙCàÇíâ é^Ô¾$_ü\uðÎuw}e£é~ø³âÏh6­zD÷÷ÿØ~Ó¦Õ¯„6v÷º´·×Vzv™g4}·àœuÔxKŠ3.¡‰«‹§—ÓË“ÄÖŒ!:ÕqYV ^J÷aOÛâj*P¼å JJ“R©/êO ¸¿Ǽ“ñf+GW6«›µƒÃÎu)У‚Ï3,» V§½V¯Õ°”¥^§-8T®êN*4åP÷Zù#ïB€ ( € ( €=Cá¿üÆîÿ·ÔêP@P@P@P@P@P@P@P@Pàüÿ((ý¹îÙ¿õ°ÿgÚýþ € ( € ( € ( € ( € ( € ( € (åú( € ( € ( € (·SéÚ^•«ø‹_ÕôŸ x_ú}Ö¯âOx‡PµÑü?áý"Ƹ½Ôõ}VúX,ì¬í`Gši§™8Õ¤vHÕÝvÃáëâ«ÒÃahÕÄb+Ô*(SZժͨÂ*pRœç94£§&Ý’9ñX¼. 
_ÄPÂa0´§_ŠÄÕ… >8¹T«ZµYF*pŠrœç%¥vÒ?˜/ø(Çü ï[‡Ä_aOQðÿ…åYô~Ñánt¿x²6ÌWzwÂËic‚ûÁ¾`6ñmÊÅâKxm%œZƵýWáÇ‚pWÎøÎ•ÄÅÙÃ)¥R1y¶m‡’këµiШ£eB¤1:Ÿ‘ß>*|FøÕã{âWÅkÞ=ñ׉îÚ÷Zñ/ˆï¤¾Ô.¤U^nÑ£šÃ„—¬ª<®t!8Ëšub¹gÝ,+ð÷F”ñ< Ï0´ÕåˆÈêåøøéÒ4–wOR\Êpå§BmÊœÒVI¿ˆ~+þÓ>7øä|eÿ‚*~Éß !2¢¿ñ¿ìáñ#ÃÚEÛ3ùJtýkQÖ Òu(Þ_Ý$Ö·1<¡¢W.¬£ìò¾Ágkþ#Ä/ÃgøüËŠjptxmdXŒÿ,cÂK9Ëëfº±Ã¾\/¶ZtãF§±½5Í~õF¤~˸Ï„ð§€1|+•äüGÄ)ñŒ¸Ÿ ÂØ(eÑÇC‡sl6U¡,RæÇ}^T*Ö–"“Ä8ÖueN_¸r¥/çþ¿v?˜€?¼ÿø$/ü¢ëöGÿº÷ÿ­ñ.¿ƒµ¬ø{P·Õ´ [SÐõ[GY-u=þëLÔ-¤VWW·¼²– ˜]]Õ£‘X2«ε8Šr¥^•*ô¤­*u©Æ¥9&¬Ô¡5(µfÖ«fm‡Äb0µc_ ^¶´p­‡«:5`ÓM8Ô§(Î-4ši­RgÜÿ ¿à¨··Á¸–ÃÃ?´·Ä/h±ÜxWâ}å¯Å¿ Ýio‘>Žt¯‰6¾'K2æØ½´¶úKé̱I'‘,27™_šøiÀÙ»u1<;€¡^üñÅe°–U‰Tù£[ÚåÒÃsÔŒ­%*ª¢º\É­Ò2OH(õƒö»ÿ”]Á îÿÿõ¡ü5_—p§üœ¿?îÄÿÕ$ý·Ž¿äÍøÿy?ÿZ¬!ù?_¨Ÿ‰yÿðH_ùE×ìÿuïÿZâ]xÑÿ'/‰?îÿª ¨ÿQ¾ŽŸòfø;þîýj³ÃôB¿.?m ( € ( € õ†ÿóÿ¸wþßP¨P@P@P@P@P@P@P@P@P@€?ðtwü £öåÿ»fÿÖÃýŸh÷ú€ ( € ( € ( € ( € ( € ( € ( —è € ( € ( /âLj|!á´g‹> øPxóÀ>ýŸ>/x‹ÇiRñŸ„4_êZ—‰|&ÓÉòBY‰­ŒÁs=#õ¬<*Pæz/iw¡üxêßðM?ƒßµ¹ñŸü;öðOÄÿí+yµøeo:þ•ðËö“ðrh÷º%ŒZÝÅ¿‡|u§hÓßAk7Š£½ÒOB®cø·ï(V›£WÁT­9,3…ZîÒ›Ãáâýœ?Îúþd”*`ª¨¥véâ&’³½šgÍ•ôGÈ…P@U|ý‡?kÿ“['Â_ÙÇâß‹¬îÞ8â×ÓÁúž‹áyqå‹xŠ-#ÂV{ÃïZ€ya¥'ËGeùŒÛxO"Ry¯åXYÅ6è<]:Ø»-ùpxwW;mîQ–ºnÒ>×!ðçŽøšPY ç˜êsiG°°øÞÜÙ†-PÀÂûûøˆéyl›>þÒàß³¯ì‡=¿Œ¿à¥ÿ´o„´­WIjþÇÿ³Öµiñ㿊.–×ûJËAñ–§§vœäÓzÊNíãôŠÆÑÌ8³…ñ˜l,p8LO‡|-ˆÁà ã*x.'ëøš8:rŒ)©CÞÊ2TàšÔ"­øÓ_¯€…~°~×ò‹¯ø$ýßÿþ´?†«òîÿ“—âÇýØŸú ÄŸ¶ñ×ü™¿ï'ÿëU„?'ëõñ  ï?þ ÿ(ºý‘ÿî½ÿëCüK¯àÏ?äåñ'ýÑÿõA•ê7ÑÓþLßÝÃÿ­Vx~ˆWåÇí¡@P@P@¡ðßþc?÷ÿÛêõ ( € ( € ( € ( € ( € ( € ( € ( € (ðþŽÿ”~Ü¿÷lßúسí~ÿP@P@P@P@P@P@P@òýP@P@áµü™Ÿí³ÿfwûHÿê«ñ}OÿÉkÁÿöTðÿþ­°‡Äø—ÿ&ãÄû"x¯ÿTXóüåã‘ât–'xäÖHäŠiá¯gçÇpÆYÏ«u0Têeu&Û»•J™eLêÊÿj¤¤ú'cô¼“Æ?ø~.oç^ÎÑQ¥˜Ö¥Ñ§«F4hçT³ t"—Ù£EîÓgÑŸðXï‹>.Ž%øÉû'þÀß®ZU}C\øŸû5Ùj>#¾$âyÆ¡¥xŸI±´Ô$…¤H¯aÒÑÝ„¢+µ¾~åXFÿ²8§ŽrH¤Õ:9oNž{±öupÕg*iÙ¸:«™i̺}UOó¼rŠÏø#Ã.$›iÕÄç#N®.¯óKÚÑÆP§ ®-¨Ôq¾ek;cþ 5ûjñŸøLà’³MôÓ††ñüã¿|<‰í9 –0éžÔ[M½HmBžf“„R©ÿˆ}ÅÔŸû'ŠœGz ‚ÃcÚŸW7SOÚBÿòí¤’Òì¿øŠü]·øÂ5%+Æ£ËóA–_•ÏòçÃi.&ñ§ˆhNI:X )ÑÍ+sÝ/g†ÁÔÅbå–ä…YPTqr¨·á\Þ<`å. ú9ðž&rUó<Ò­lNK‡ä³~טRÀàa:Wö•hG,EJQ”cIŸX§í+áŸÙ÷N–ßâWíCÿ«ý“<]k žG‡ÿ`?Ùûã×´AC4v2ëM­j^ ð—‰4Û£<ãûkFÖt›mAv\}¦Þád.øsŸTR˸kÄî)ÂI«â8ëŠá‘àjÁï5EQ§ŒÅaêFÑýÍj5e cË(´}ºâì ÒpÍøËÁnÇB/— áUâlÎ…Uv©¼GÖ+eØ]óKý£ˆ¡ ªÒç„Ó>_øÅÿ:ý’\ð«NsÁf¼KšP”n×û^qb¥K3â_¸±òT£Z•<Ç"àì ò§þÁÃôðu*Âj÷XÚ'¹chÔš_ÿÃZÁ0´><9ÿ¤¸×náý妱ãŸÛOã5ÀódùdŠ÷ÃzV…‘yoeÒßté&öIä;âU?iþªø“_ýãÅЋÒT°\“ÇE³†&­gVnÎ^ëV¼VŒüçýxðw þéà¤ñ3ް¯™xƒŸËW¼ja(a• Šº…äÚ“Ö(ôߊ?ðT_Ù[ãN¥á_âü ់uxÃ? 
<1uwûB|S³m'ÀÞ‚k_ x~Òô{æ·Ò ¸š8n.’{éC“su;#ÎË<4â|žž&–[âNe…§ŒÆâsLaå“Uq¸¹)bk·V´Ú•YE7¸Á[ÝŒOc9ñ—‚ø†¶¾sàîOŽ«—å¸<£:œSœÓt2ìe &*Ž’”hÆrQœÔªÊþü塿_ðÖÿðL]wÿÁ(ŸE¼¸æóZð/í¥ñžÄFaâÝ4ÿ jZÚ5ºK pÁxÍpÏ#›‹äÅÄ»£þªø“CýÛÅZø(ã¸?'ïñ:˜šu•i4Û”*IrÁûªçþ¼ø=‰ÿ|ðMáç/âb2ßx‚¹~K[ ,|RðŽ]ÝæÕíüJ°x®âÝcÄ1Ǥ«\™ Nä¤s—°ñ‹ïCÀùí8«Êœ>i–bç²J”°×ÂÆMêÝV£k¥«V¯¬ý3?v¦[â_ U“´jàñy&s§«nUã‹åÆÎ {©P\÷åoE.o®>-ü"ýÿk?ÙWö?ý›eoÛËàÌ>&ý–›öƒh4¿ÚzÏųþ«ã—øéñCñõ®§ê:Þ‡yáÆ×´9,§Ñ-áÒïu]?^¹šÖQ{¥bê+”ʳn+á^'âÎ"â~ÍÞ‰–CÍW†ç†Ïi`–I€¯•j”èÖŽ!P®§ÒuaJ¥©.J¾ë—Üç™ñÇp' p_‰yqœø£–ÓÆpÅ|Éñ&i†ÌᇥW‡žâpΜ°ñ•ébg(?iCßPü”ý£a¿ÚŸöQh.¾6|!ñ‡|1|öãHøƒ¤µ—‹~ki{M§6›ãß ÜêÞyuy#¸´Ó®µ]Y¢oÞéñH’"~©ÃÜkÃQÍ›6Ãâ10Rö¸ ¼ø\Æ‹ƒj§´ÀâcKjrN3©r¤žÕi¿Ãø³ÃŽ4àžYñEŠÂàê8û Òƒ§ŽÊ1 ¢R¤èæx)×Á¹U‹S§Jua]ÅëI4Òù6¾¤øsûÏÿ‚BÿÊ.¿dû¯úÐÿëø3Æù9|IÿtýPeGúôtÿ“7Áß÷pÿëUž¢ùqûhP@P@P¨|7ÿ˜Ïýÿöú€=B€ ( € ( € ( € ( € ( € ( € ( € ( € üÿƒ£¿å·/ýÛ7þ¶ìû@¿ÔP@P@P@P@P@P@P@|¿@P@P@x_ícÿ&gûlÿÙþÒ?úªüG_SÀÿòZðý•%ÿɸñþÈž+ÿÕ<ÿ9JÿFÏò( €>þÿ‚_~Îý«n‚_>"Å%×ÃýKPñ‰¼e¦Ã<öÒëz/ü'­ø·þã=¬]Ckâ+ý"ÇDÔç´¹µ¼¶Ò¯ï®l®"¼†~Ä®!Åð¿ç9¾^Ôqôá‡Ã`ê8ÆJln*Žë²R‹–YÖ§FP•XB3‹ƒ‘úƒœ'€ãox{ Íbç•Õ«‹Æfc)Aâ0ùv Žú¯4gbªÐ¥‡­(N!B­IÓš©š·7í—ûQü\ñ·‰~ üCÓõ/ÙóáÃ]^óÂý’|jÞðÃ=/C¼tÓñž™àé.d¿>—ÆþÒüE¨x2-RYî©a¢ê7·I¤ÝùÒ•Ò&°´–{©íe»Ÿ xƒļ–æY”Õ|b©‹ÁÔŨ¨}y`±50ôñŽ’Œ}œëS„}¬l¯V3šQRPŽþ4p®YÁ¾!g>OMá²÷K˜QÀ9º¿Ù¯1ÁÑÅUËãYÊ~Úž­Iªæv¡*Prœ ç/êÿþ ÿ(ºý‘ÿî½ÿëCüK¯ä¿?äåñ'ýÑÿõA•Þ?GOù3|ÿwþµYáú!_—¶…P@P@z‡ÃùŒÿÜ;ÿo¨Ô( € ( € ( € ( € ( € ( € ( € ( € ( Àø:;þPQûrÿݳëaþÏ´ûý@P@P@P@P@P@P@PËôP@P@Pg¶Óu [FÖô­?^ðÿˆtOÃÞ#Ð5kxîô½wÃúÕœº~¯£êV²«Gqe¨YO5µÄ2+Ç$nRDxË#kB½l5z8œ=YÑÄaêÓ¯Bµ98T¥Z”ÕJUiÉYÆtçÊ2Z©$֨ÆÃãpØŒ.úBâ(R£…âÌ®x×K4ÊÝ*XЉ+s×ËêºXiÔ–ó Fžü¸usøç¾‰˜\U|F73¸eʤ¥8dyÚ¯[ IÉßÙá³Z*¾.ëtñX\e]¹ñnÍŸ?àŽ_ðQo…êâóösñŽô‹ræ ká.© üK‹Q‰$dYh~ÔoÍT8‡‚«+^–iJ¾^àÚ½§_N8M6n™Æÿkkÿ=ç~ø¯‘ºŽ¯ âó*0¿-|’¶6URv½<6¬ñúî£S Ùü;Ûáï|øíðæg¶ø…ðWâ×.#ßæAã/‡1ð¼Éå™Ä›âÖôk]†Úä>åM¼ùÇ•&ß´Áç¹&`”°ÎUŽ‹µ¥ƒÌp˜”ïËk:5¦µæ¿ÅèüçÃI•IÃ4áìó-š½ãÊqø9+s^ñÄaéµnIßM9eü®ÞK^©áŸ¬ŸðE¦¶ý¾¼qo4¶÷ÿ >>ÍðÈñM Ñ|ñŒ‘M ±•xåÕ^9•ÑÔ2@5ùoŒqRàl\d”£,Ó"Œ£$š’y¶4ÓѦ´ièÑûÑæR‡‰¸ ÂN3ŽKÄÒŒ¢ÜeG!ǸÊ2Vi¦“M;§ª:Mþ _ð§öƒÐ4OÁJf'öЏÒ,CÒ¿i_†wÖ¿ ièöðÞµ˜ÔïtõÓ|=ñ1¬'’Ö ;L×o¼5¥F­y«kQx‹W¸º–÷š·‡9¦C^¾7î#«Ãñ­7Z¯æP–cÕëJPçöp›©_.SŠ“©R¸ÕI«Ê–QYÕǤӺugJí5Ê•ùGÀ>qSá/£‘Ö©xå}•Ï/×Á_?êT¤¤¬ã‡¥^ÉßÙsÐ?ðDÿÛÄvM«|ñ?ìÓûJ莮Öz×ÀßÚ Áæ™~K¶»ñTÞ€ÈÊÖÇç‘#_¶Û.Ö‘ãÑxÉÂXyû,ç Ä\;Y;NŽuã(Ô§­Ÿ40«+&¥²¿¹-/dòÿ‰yãÌ]7_‡qœ#ÅØf›§ˆáÎ(˱4jé{B¦6Xݧ ÚKÚCÞµÚòsþ ÿ$ðôÿf¿ý”<{q!šæ Ú«àA¾ÑÖ9[í^ñV­j!fpm® ÞEê’ÎIãGeõ(ø±áåxóCŠ01V‹ýõ,nV’m{¸Œ-)][Þ¹ ì¦¢ÚG‰‰ð+Ŭ,¹*ðNg'Í8ß [.ÆFðiKßÂckÕ·îK›–¢»¦ä“k?ðKßø(PÕH?²Æßµ°Ü%¸:XC\|ÚØè¨|µ+µ¯ÃöÛcí,±¿ø‰|콯úדr®ŸZ^×âåþ ½³×µ?‡ÞøSgüA¿½²¡þ¢ñ;×›ê2t~-qúºÑZΪ|Ö‡ÆÔN¿Eÿ‚CÁHõéZÙCÇÐ:Io:֧௠ÄZå#+?ˆ¼S¥Àñ©F7$¢”{§…$›’·ŠþPIÏŠ02MIþæ–7ýÛ7xáðµdž¾êjóÕE6»ðþxµ‰“> ÌâÓŒÚ+eøHÞm¥ib±´bÒ·½$ù`¬æâšoÖbÿ‚%~Ùš„š¿Æoöqý›4Kxâó\øåû@x#DÒ,cwÈnï¼7âˆG’i$¡X¢‘Œœ(o-øËÂê*Y=!â*Ò’Œ(ä¹2µY¶ì¹!‹X&îì’Ѷұî/£¿á©:üAˆá>ÃÆ.u1댺=çí9ñ–êÏâwí¨éw"ÝîÆ‡m|ú¿ƒþKvÉ-Ž£e¥Ýø£E¾³^[éº^¯ µå¢§áÖqÄUhâ|FâZ™Ý*SUaÃyDg–ðý:‘º¶”,^`£¤á:ÃV„ïR¥'(IÕñs‡¸J|„\G†ëÖ¦èTã þtóŽ+«F|®V…G_”ÊmJZtjcpõ)òÔ5ã åàµóÏuû}øÖêêinnn~üžââyiçžoƒž’i¦šBÒK,²3<’;3»±f%‰5ÕàÜcÁÆ1QŒs<ö1ŒRQŒVm‹J1KD’Ñ%¢Z#‹é )OÄÜÂs”§9ä¼3)ÎMÊR”²”¥'w)I¶Ûm¶ÝÙý5Á!å_²?ý׿ýh‰uü½ãGüœ¾$ÿº?þ¨2£û_èéÿ&oƒ¿îáÿÖ«ð§ÆO|%‚ïÄ:ÿÁ?‡µ=Xñ–½âÍ;PÕ¢_é±_ßAZm«Å£nÂÑ$˜^+“ý À¼/Ãu¼4θ·5Àc1ø¼§™N4(g†[N½,. 
RŸÕª:täÝY§WØÎVµÔ¬äï¸×Œ0þ2pßäy¦]–`3ÜMNX¬WeYÍl5lv/F¥hýv’«V18ÐúÍ8_™§fzWÀÚ3á?Šÿhï|)¶ý±þ|_ø©¥xwãF…¬øÃ?ðOÝWà‰¯5ox;ÄÖšð‡ã$ºÆ§cg„tË©§H¥º·×í,ÛMµã»‰Ï{Ãù®‡ð™¤¸G6ÊrʸŒž½~'Žég8hRÅâðÓ¡|¢4iÔ›¯í"¢ÚŒ¨J~ÒQN-§ q^IâÌvI=ȳÜꆈ0ØŒ³á…~ÆT¯€Àbéâysùb+R§7±œ¤£)Ç ~ÊjiŸÃýgŸçHP@¿áŸÚãß‚ì“NðwÆÿ‹ÞÓãµ²²ŽÃÃ?¼g Ù%–›A§Z%®—­ZÀ¶º|ðÙ[ª­bvŽƒäâr‹7S’å8ªŽS›ž'.Á×›GÍRnUhÊNU$“œ¯y5y6Ï{Å9üOû>h¸É³ÿ„§ìäùàM“úÐ$ûãuqÿª'ÏíÕ~ö¿óóû-çÛ—ãú·7Ãîï¶›‡úÿÇžËØÿ®Ü]ì_üºÿY3Ÿeñs뼿½·Å®çâ/Ú'öñ}™ÓüYñÓã‰ì+˜ÇÄ_¼k­Y˜/"ò/!6Ú–·s ŠîÜÜÆSdñ~îPÉÅuáø!ÂOÚarL£ ;Æ\ø|·F|Ðwƒæ§F.ñzÅÞñz«3ƒÅ|Qާì±ÜIŸã)8Î.ž+8ÌqÜjG–¤y+bgYÇÝšµ¥Ñã•ëžP@«ÿðZù?OÙ'ýŸÿõLø2¿.ðwþH|/ýsïý[âÏÛ¾Ÿòrñßö$áýP`éÃþ ÿ(ºý‘ÿî½ÿëCüK¯åß?äåñ'ýÑÿõA•ÛGOù3|ÿwþµYáú!_—¶…P@P@z‡ÃùŒÿÜ;ÿo¨Ô( € ( € ( € ( € ( € ( € ( € ( € ( Àø:;þPQûrÿݳëaþÏ´ûý@P@P@P@P@P@P@PËôP@P@P@PógÿÒ0ñn§à¯€žñ©ðþŸâ±áR_øímµøFõi`Óu¯²?ÏýŸ{4vóýÙ þ¨ð‚|S[„úöž':|·©…úÍ%ƒÀsRúÅ%*”yÖžÒ Ê;¤x¹C‰úHð>.Çÿfc«`ør\Ãê´±¿S­,Ã4ä¯õJòG#×ÙT’Œ¶lû;à§í¡ðvëö–“à‹?à ú?Ä?ëZ×þËà½_ö(ðçÁm;Sø‡}c¯xZÖÖãâæ‘fϧ¸ñ0XíZ+–·ñë[ik,ƒQMÿ'œp~oYÖ€êà0thàsUŒ¥Æ8Œâ¥<'C)G*«4ª/«k$ã͇‡5[/fí÷|=âA>.—ã|PÃæ™Ž#™ärËëø{…áúU³Z”ñ88BYå w¤þ¹e¦ã‹¨áE7í•ÿŽ_‹_¾$ü øâ/…ßü®xÇ^¾–ÇWÐ5ë)lîScºÁe#¯‘©èÚŒj.ô}kN–ëJÖ,$‚ÿM»º³ž)Ÿúã*Ͳìï‡Ìò¬]n :UèMN:¥Í ¥ïS­M¾J´j(Õ¥4áRšiç™oÃy¦/&Ï0œ·2ÁT•:ølM9Sš³j5i¶¹kaê¥í(b)9ѯIÆ­Μ£'çè@P@P@×Ã_†_~1xÛ@øqð·Áú÷Žüsâ‹èôý Ã>°›QÔ¯®$#t†8‡—kckëGS½’ßMÒ좞ÿQ»µ²·žxø³Ë”`«æž.††ƒ©_ˆ¨©Ó„VÊïYNOݧN U*ÍÆá)Ê1~–Q“æ™öc…Êrl'2̱•,6 JUkT“ÝÙi pWZÕ(Ѧ¥V¬áN2’ýÿ‚ÊëÚ·ûüT´Ðõ};Zÿ„GÂßü ®]iW){emâ¯ü-ðž‰âm!.â&)®4MfÖïIÔ6&×R³»²˜%ŴѧÀxCBµË'Z•J?[Åf¸ê« Ë ‹ÌñU°Õ\^ª5¨Ê5i·ñSœ&¯&ÿUñû†ÄxŸœÃ ^–#ê8,-ÄÎŒÕJpÆà2lAN>짇ÄBt*¥ðU§Rœ­(I/êþ ÿ(ºý‘ÿî½ÿëCüK¯å?äåñ'ýÑÿõA•ÛÿGOù3|ÿwþµYáú!_—¶…P@P@z‡ÃùŒÿÜ;ÿo¨Ô( € ( € ( € ( € ( € ( € ( € ( € ( Àø:;þPQûrÿݳëaþÏ´ûý@P@P@P@P@P@P@PËôP@P@P@Pó3ÿ7Ѿ#ø‡þ Cû'hŸ&•NËý¥,E ’§Zœÿáûð©FZ¿y=X7ǯ†m“ÿ¶¯íÉÿÕý³|®â"þÅÿðCZÝj ý¶áøâ Ã}›EðÿÄ ß‹ÓØò¤CㇺŒï‚1Ú_[–FæÞŇ»—ñ‡x)Fž3ƒeй«WÀC+ÇTùà±õ°p¾úae¯K1šø}ôpÌa:¹ˆá¬T¯É‡Âæ•3¼¶—ow1ÊðÙ…Kyãa§[ê|ƒ¯ÿÁ2ÿd}E¾ÿÁZÿdíq$vh ø£guð¥£O4þî{¦ñ?ŠÃ¼PG>'vës*[…‚½ˆÇõ”ý¥ ΕàÕñÌ–~>5¾“Á‰¤øfãÄ—p\ÚhÖ·š£½ýå­Í­«M,Deâö]‡­†Âf!ÇÙv3¯ …Åpï-\_Õ⧉úœ3Úâc‡ƒŒëJ’„%JÉœðð 7Åa±xì§ü-Î2üèÇÀñw=ÅMÓÁÿhTyz¡ƒ–.qœ0ð©Yºµ!8AÉÄåÿáÑ¿èí?àžÿø•ÞÿåtÿÄVË?è–ãßüEëÿòóþ fuÿE¿…ßøšá?ù˜’ø$׈lå‹þÿÛÃþ ¥à{Y$O.]ö°Ó Ííº0û|šM¥…nVö{Ú&’Þ{‹$—6±¤ÀHï~)áæŸÕ8#Ä\l’wT8^§,$׸ªÊx¨òFm4¥ÎÊ2|º$Ü|ÅS”~¿âW„YlV–+¨óÔŠ½t!O5RT“‹p”éÝÎ KVâè¿`oØÓÂ{¤ø¿ÿYýžô¨¢Ã¼_>|Løñ<ÑòÞ\- ®‚«,ŠöªË™aynŒˆÂÂE˜|uÅø­2Ÿ óú­èžq™eÙ$Sîý¿·ÑZOut£f½¢iÇÃ.ÀÝçÞ5ðµÇWÊ3~%”—hýYa­&œí.Vçtý“R÷¿þÎ_ðJ_ÙkàoìûûCx–óö«ý¯´Ú)þ+ÇðÛMÓo<-ð?ÀÚ ø/â7ÁÞ1ŸÅvÒY¿Ä Åy¯jQG &Ÿ¨jó\ØE{qz–2Ggö¿ Ä(q6uŸdhpÇ Õáõ•¼Æ¥Hbs¬m?íŒ5L^ad§ýŸ‰p¡Mºî¥:J5#4çÉôÙŸ ø)Áœ9ÂüSŒ©Æ¼wCŠÞv²šTj`¸w.­þ¯ãhà1òÆÁÓþÖÁƦ&¬VR«^S¥’¨©µOŸæ¯ÿÁR|qáÿ ë_ c_‚¿ aχúõ›éšÕÿÂ+[½oãoˆô¶•%[üyñÿÂa~°b6÷Ú=·‡µXɸVÔÞ³ÇôxO ðUñTs/ÎsN4ÇК©F¬£G&ÃÔ³Nxlþ©Ošïšg^“÷v¥gò8ÿ³.”pdžåxšnŽ"®E â8‹E´ÕX*øúqX:X‡˜fžÎxœ= øjÕ©'ñS§ˆ£)-âzŒ¾3þÏúgŒlÍ<¥ÌÁƒÉóÚ¸L-LGâ÷ÔêaèO ìxû/§KêÓ¥PöPX¨SöNQŠŒl¹U¬z™‡p½~6ŽcÄÞhRÅâiãþ±á^mV¿×!ZqÅ{j¯1›©WÛ©ûI¹ÉÎ|Òr•îù¿ø^¿²¿ýø"¯þ+“â/ÿ5ÕÑý‰Äÿô#ñ‹ÿ_ÿÌgúÉÁôS}?ñSæ¿üÞð½eú/ðE_üW'Ä_þk¨þÄâúøÅÿ‹/ÿæ0ÿY8/þŠo£çþ*|×ÿ›Ãþ¯ì¯ÿEãþ«ÿŠäø‹ÿÍuØœOÿB?¿ñ`åÿüÆë'ÿÑMô|ÿÅOšÿóy濵Ÿ‹GŽÿàžÿµ}÷Àß‹_ðO¯ü/ÐuO€‹ñ£Cý–e/|ñeíÖ§ñcHO‡My®êž7—I¸û6¯kª]GöÝYdÓaÖ¬íŸNŸQKšôxW õ=áhgYWa³*ôóÏìzÜMÅ,ç ÓÊ«<Ã’…,jÇš”©ÅòW¢IQœ•HÓq<Ž8Ç,ËÂÞ6©Ã™ç…øÌ› [†Wa¸3‚s.ÇTlﲯi‰­™J„¹+´״Ââ-F8Špt¥ULþbëúHþ< (õƒö»ÿ”]Á îÿÿõ¡ü5_—p§üœ¿?îÄÿÕ$ý·Ž¿äÍøÿy?ÿZ¬!ù?_¨Ÿ‰yÿðH_ùE×ìÿuïÿZâ]xÑÿ'/‰?îÿª ¨ÿQ¾ŽŸòfø;þîýj³ÃôB¿.?m ( € ( € õ†ÿóÿ¸wþßP¨P@P@P@P@P@P@P@P@P@€?ðtwü £öåÿ»fÿÖÃýŸh÷ú€ ( € ( € ( € ( € ( € ( € ( —è € ( € ( € ( æÓþ )eã=Gþ }ûØ|<×´O øâïÁ_!ð·ˆ|I Üx£AÑõ—ñ_þ­áëM_@¹Ö, “æšÂ kM’uùVî#óWõG‡sÁÓðOŠg˜P­‰ÁG³ ¼Mr­àækVªÆR­8bUZ«0š©SÛF|õçÏ+ËšW»æ¿á¦ÿeOú<_ø%çþ*·â7ÿ4õÓþ­ñ?ý>%âÎËÿù”äÿ\8/þ‹ïÿñLf¿üÚðÓ²§ý/üóÿ[ñÿšz?Õ¾'ÿ¢GįüYÙÿ2‡úáÁô_x7ÿŠc5ÿæÐÿ†›ý•?èñà—ŸøªßˆßüÓÑþ­ñ?ý>%âÎËÿù”?× 
ÿ¢ûÁ¿üS¯ÿ6žmûYxâßâüßö°Õ~~Ñ¿±ÏÄ?†žÕ>Gñ“Âÿbï~Ïž"¾¹Öþ,i|>kŸk>56“ý“U²Õ/“w‡õ½¶6º¥‚¾™&©Ñôx[,·xZ–uÃÜ]—æ8ªyãÊ19ïa3ì<#G*¬ñü¸j8%%ÏJtàÿÚ(ûò¥RÕ'ÇãŒÊ9Ï…¼m_‡x³€³\£[†VƒáñÜ/‹©(ßÁhdÛ?‚z§„ô_‹W?~EðûVñÔ7×ÓüPÞ)ñàÓ.¼G›g¨_ˤÅ77Iies;' +ú»Ã9eðk‰gœÓÅVÊ£‹Î^>–Â8º˜e„À{HáåRt骭|.sŒo»Gð·ŒÐÎj}"8:ŸVÀáóÉåü=®¾e“ÀRÆ<~iìg‹ujÊ‚—ƩӜ­´Yëž!ñí!c¯ë–^*ý°¿à6¾(³Ö5;_[kÃÃ1k–úõ½ìñkk1ßø_G«E¨%ÄzŠ^v—‹2Ü0zópøN 3Âð—Ž’ÃN•9aåCë.„¨Jt¥EÓÆò:N›‹¦áî8[—KÆ+Å´ñXšxÞ<ú2CN½hbá‰ú¢ÄÃ’xâ\·Ú*ñª¦ªªžú¨¤§ï\Çÿ„ããýOüçÿ}x;ÿ˜*ÛêY/ý><ýØ¿þm9ÿ´¸þ‹ï¢ïßÿçhÂqñÇþ'þ óÿ¾¼ÿÌRÉè‘ñçîÅÿóhiqýßEß¿ÿÎÐÿ„ããýOüçÿ}x;ÿ˜*>¥’ÿÑ#ãÏÝ‹ÿæÐþÒâ?ú/¾‹¿~ÿ§‡þØ·Ÿµ‡ˆaÏÚ+Pƒö€ÿ‚_|Mø£êWã.ûÛ¤¾6[­Cân‰ã}sáÿéšU·üT½ÔC\Ô-žm"ÏÄ ¦‰¦BþÏÃ…°ükÃôå‘x•–çU¡›ÿdVâù8à¹iåµÞaÉøª•eþÎÔ_±§$ªÎƒ©ef¾sjq¶/Þ+«'ðs8áÊrŸÐàóf*usŒ2ʽ¤ð¸4aþ×8ýf¬¨SÅ*<Òæ‹þlkú$þH (õƒö»ÿ”]Á îÿÿõ¡ü5_—p§üœ¿?îÄÿÕ$ý·Ž¿äÍøÿy?ÿZ¬!ù?_¨Ÿ‰yÿðH_ùE×ìÿuïÿZâ]xÑÿ'/‰?îÿª ¨ÿQ¾ŽŸòfø;þîýj³ÃôB¿.?m ( € ( € õ†ÿóÿ¸wþßP¨P@P@P@P@P@P@P@P@P@€?ðtwü £öåÿ»fÿÖÃýŸh÷ú€ ( € ( € ( € ( € ( € ( € ( —è € ( € ( € ( æÏþ )ÿ Èÿ‚ß~Ç'á˜ð›|@ÿ„+àü"+ã¶ÖÁÇ\ÿ„¯Çb#o«ëK¥yŸñôtÄkÍŸêjþ¨ðïê_ñø§ûGë_QúÎuõ¿©{'‹ö?SÀsý]Wjµ·Ãí%÷?‡¼\þÒÿ‰‘àì©<Ïê|9õí']`>±ý¡š{?­¼*x…Fÿ±N¥¾Óügmð¼aâ³ã¨¿àð›Ÿk§Æ?ÛÚ×ÄÕ×?á*:¥ÑñöÊÜ@.Uþ×ûgöˆœ …çœ%÷WY÷Õ0¿R~9ýKêÔ>©ì(åÞÇê¾Ê?Wö<²åö^Ë“ÙÛNK[CÔÌ!ÂïþÒÑ›ûGëxŸ¯ýg›¬O×}´þµõ…(ó*þßÚ{^o{Ús_S›û/ì¯ÿ<¿àßoüüGÿã5ÓÍÅüxÿÁ9ÿ$qòpWòýÿð§5ÿäC쿲¿üòÿƒ}¿ðwñÿŒÑÍÅüxÿÁ9ÿ$œü¿Eÿü)Íùû/ì¯ÿ<¿àßoüüGÿã4sqG?ðN_ÿÉ'/Ñÿ s_þDó_ÚÎ7‹þ ïûWqÿÁ/ǶÕ>ÂíoØÓPøwãqt>,ið­þÜ—ð ëUl:1Òá :vëµQ^‡ ;ñï m¿µ<óûýo§—ÇËý•[ûC‘Â^ßø>ÎþÅ5í}‡´÷#Ž/…¼mþ­¯±]nÿXŸÕÍjf<ÿÛt?²}¢©ªÿÛrûvŸ°úײ¼Ò?˜ºþ’?€ ý`ý®ÿå_ðHû¿ÿýh WåÜ)ÿ'/Åû±?õA‰?mã¯ù3~ÿÞOÿÖ«~O×ê'âA@ÞüþQuû#ÿÝ{ÿÖ‡ø—_Áž4ÉËâOû£ÿêƒ*?Ôo£§ü™¾ÿ»‡ÿZ¬ðý¯ËÛB€ ( € ( €=Cá¿üÆîÿ·ÔêP@P@P@P@P@P@P@P@Pàüÿ((ý¹îÙ¿õ°ÿgÚýþ € ( € ( € ( € ( € ( € ( € (åú( € ( € ( € (ù³ÿ‚НŽ_þ }û¯Ã)¼'oñ¼ðxFoÛk~‹\ÿ„¯ÇbÛx~êËZŸJÇÔzeݽã'J­ÍTxwõ%àŸ¼Åb¥€úÎuõµ‚•(bÝ©à9ÖUã:1«o…Ô„¡}Ñü=ââÌŸÒCÖQ,sGƒáϨË2…yà'ûC4öo ,éâ%Bÿ£8Tká’=?Æ^"ø‹üW޵ø Yñ¼^$×cñ‰×¾ |MŸ\>*MRé|Bu™®5Ù.&ÕN®/£,ò<Ò^yÏ+³–cÁƒÃçÏ …x*^9ýIáè<'°Î2èÐú«¥‡ö1KÙr{5’…’I¦aŠáˆãñ±Ì«ý´V/±ÿYáüÞXŸ®ªÓX¯¬Jx—)WöþÓÚ¹7'S™¶ÝÎoþoÙcþ‚ßðo¯þˆÿüº®Ÿ«q?üúñãÿ9ÿ)8þ¹Áóÿè½ÿˆîkÿÍÿ 7ì±ÿAoø7×ÿ Äþ]Qõn'ÿŸ^<áç/ÿå!õÎ ÿŸÿEïüGs_þhøI¿eú Á¾¿ød~#ÿòê«q?üúñãÿ9ÿ)®p_üÿú/â;šÿóA濵ž¡&§ÿ÷ý«äø©ÿÁ/æøUmª|mÿcO‡¿¼)ãyn¥ø±¤†æúêÿ\—ÃÒlÖUx¿¶,nÙt‘âôãÜë ô8Zš§Ç¼,³º~%,ÒTóÏìiq~?/Å`ÔVU[ûC’0¢± ô]4ýŒâ½¯°u9¡GUu¼-ãgÃu¼–K Ü3þ±G€2¼×˜ÊrÎèdûIÔÄË í]VqúÅ9¿aõ¥K–rLþbëúHþ< (õƒö»ÿ”]Á îÿÿõ¡ü5_—p§üœ¿?îÄÿÕ$ý·Ž¿äÍøÿy?ÿZ¬!ùUaa}ªßYiz]•Þ¥©êWvÖvam5åõýõäÉoieein’\]]Ý\IÖÐG$ÓÍ"E3²©ý>¥HR„êÕœ)Ó§ T©R¤”!NNSœç&£F)ÊR“J)6ÚHüZ•*•ªS£FœêÖ«8R¥J”%R¥Z•$¡ táå9ÎMFŠr”šI6Ò?¦ïø'×ü¶G@øÍûwØÝé:|‚×WðÇìÙezÖž#Õ¢o.âÎëâÖ©c(›Ã:|Ɇ—Ázlñø…ƒˆ5ëÝêÚÿÃòÿ4x…ã¥:ß'àš‘­[Þ¥ˆÏå*Þ±”rºsN5äžØÚ±t4æÃÓ¯BºþÊð›èÉ[õn ñ"”ðØr¾…c7 Me¤á,êµ9)a¡%¬²ê3X­TqupÓL4¿¦+K]/IÒtø{EѼ1áXA¤øoÂþÓlôoøI¶EŠÛMÒ4»`³²´†4DH ‰UUUU–±ŒF.½\N*½\N&¼åV½zõ'Vµj“w•J•*9Ns“ÖR“m½Ùý»„Âapj,†…¥ 8l.•:z`¹aJQ:tà•£EE-‘%btP@P@z‡ÃùŒÿÜ;ÿo¨Ô( € ( € ( € ( € ( € ( € ( € ( € ( Àø:;þPQûrÿݳëaþÏ´ûý@P@P@P@P@P@P@PËôP@P@P@Pò™ÿŸøÅâÙóþ —ð7ão‚ìt-KÅ > üñŽaâ{[ûßÝêzGˆüy=´Ŧ•©èºÅ‹¸ÄñYjº}Ã/ÝD~jþÁðk(ÃgÞfÙ62uéás,Ó5ÂW©†•8W…:¸\e*R«Nµ8Í/…ΕH®±gùûô‡Ïñœ-ãnCÄY}<5lnM’dxü-,d*ÔÂÔ­CšJ¯ 5°õeM¿‰S¯JMm4|ªÁ[5MsSÔu­kþ ßÿ°Ö5búïTÕµmSöG›PÔõMOP¸’îÿQÔoîþ#Mu{}{u4·7ww2Ëqsq,“M#ÈìÇëéxWJ:thñÿ‰´¨Ò„)R¥KŠ•:t©ÓŠ„)Ó„2õBJ0„RŒb’I$‘ð¼q¯‰­Wˆð¯ÁzøŠõjV¯^·έjÕªÉέZµg›Ju*Ôœ¥:•'')ɹI¶Û(ÃÖý#{þ Eÿˆv¿üð«Oø†õpüPÿĵÿó—üF·ÿF—Á?ü@ÿü*ðõƒÿHÞÿ‚Qâ¯ÿ<*?âÕÃñCÿ×ÿÌÿ­ÿÑ¥ðOÿ?ÿ ‡ü=`ÿÒ7¿à”_ø‡kÿÏ ø†õpüPÿĵÿóÄkôi|ÿÄÿ§ñþ u㯊Ÿ¾&~Îú'ì»ûüðGÅËŸÝxâÿöwø#«|.ñ&¯7ÃÿYx·Ã&êîÏǺ†“ö-FÒ[xί¢jMma©ê°Ø5”÷­r½yO†Ø³<ËsúÜKÆ9æ7*Ž28(gùÍ,ÏIcðÓÂbyc< :°ç§5'ì«SR:N§<`¢pgÞ1fY× fü+‡àßxk.Ï'—O1«Â¼;_&Å×–W§ŽÁóÎZ}XJ+Ûáë8R­^49Ts_š5ú)ùP@½ÚïìñÛöÀÿ‚ÿÁ>üðE€?o‹{Y”?Â^ Ò&ý¢¼:¯xÏÄ“¡°Ð´´Na´ºŽ­4-§èZ~«ªÉoa7á´x§#á.;ñk2Ï1°ÂÑÿŒ4)/Þb±•Wâ°ÂaÓç¯SXóZÔéEûJõ)RRšþ™ÄðGñç†dü5—TÆâ?ãfË]þë—ЗáÖqø¹/e†¢¹eÊŸ5jò‹¥†¥^»)~í~Á¿ðL/€¿°•‹Ø>0þÒÙ2jµ5FàÙo-ÄWúW­ éî#Ò 
Dy¬¦ñ]ГĚų܃>›¥j2èÿ€ñÿŠÙßΦ “žUÃê»ËiT½LR„¯ ¹h¨ºòºSXhÛ FJ6Z”ÕyTøUàg øqN–c]SÏ8ªTÿ{œW¥jX8Ú¥ Ÿ 7%†‚NTåŒñ¸ˆ9Þt(Õ–?£rÍ,ò¼ÓÈòË#’I»»¥™‰$ýM~V~âG@P@P@ê ÿæ3ÿpïý¾ P € ( € ( € ( € ( € ( € ( € ( € ( €?àèïùAGíËÿvÍÿ­‡û>ÐïõP@P@P@P@P@P@P@/Ð@P@P@P@Ç‹>|(ø¨Ûk?> |ñö¹i¦Úhðë¾7øgá/kK¦Xù¦ÒÇûS[Ó/o¤<òEl²¬I<ïhÒ¾}LyeÔž/ÍóL 7QÑÁæ¼5'RI)MÓ£Vç’ŒS—/3QI½âf5ÙµuŠÍx$ÌñJœi,Na•`1µÕ(9JÕlM •8¹ÉÆܱr“I6ïËÿÂ…ý›ÿèØ¿fÏü1ÿ¿ùC]_ëWÑIŸáã0ÿ惇ýFà¯ú#ø[ÿü§ÿ™þ/ìßÿFÅû6áøuÿÊ?Ö®(ÿ¢“>ÿÃÆaÿÍþ£pWýü-ÿˆþSÿÌÿ öoÿ£bý›?ðÇü:ÿå ëWÑIŸáã0ÿæ€ÿQ¸+þˆþÿÄ)ÿæ@ÿ… û7ÿѱ~ÍŸøcþò†õ«Š?è¤Ï¿ðñ˜ó@¨ÜÿD â?”ÿó Â…ý›ÿèØ¿fÏü1ÿ¿ùCGúÕÅôRgßøxÌ?ù ?Ôn ÿ¢?…¿ñÊù?áBþÍÿôl_³gþÿ‡_ü¡£ýjâú)3ïüN+ŠÆ×ž+‰Äbñ5y}¦#Z¦"½NHFœ9ëU”êO’œ!óIòÂ1Š´b’÷ðX[†¥‚˰x\?±Â`°ôp¸j^Ò¤êÔöT(B*~Ò­IÕŸ,5IÎr¼¤ÛmsA@P@P@P¨|7ÿ˜Ïýÿöú€=B€ ( € ( € ( € ( € ( € ( € ( € ( € üÿƒ£¿å·/ýÛ7þ¶ìû@¿ÔP@P@P@P@P@P@P@|¿@P@P@P@P@P@P@P@P@P@¡ðßþc?÷ÿÛêõ ( € ( € ( € ( € ( € ( € ( € ( € (ðþŽÿ”~Ü¿÷lßúسí~ÿP@P@P@P@P@P@P@òýP@P@P@P@P@P@P@P@P@z‡ÃùŒÿÜ;ÿo¨Ô( € ( € ( € ( € ( € ( € ( € ( € ( Àø:;þPQûrÿݳëaþÏ´ûý@P@P@P@P@P@P@PËôP@P@P@P@P@P@> xᾑý¿ñÆÞðƒæùÛ~4ñ&ám#Ï1¼¾Oö–¹{cgæùQI/—çoòãwÆÔbÿÌgþáßû}@¡@P@P@P@P@P@P@P@P@~ÿÁÑßò‚Û—þí›ÿ[ö} ßê( € ( € ( € ( € ( € ( € ( €>_ € ( € ( € ( € ( € ( € ðØÃÀ? ?hO†:gíWñÂøã¿‹¯ŽçÐ5/h:_ˆÓá×Ãm'â‹t|:ð-®±¤¬^Ó¬|=§X\xÆ÷N´¶Ôÿá½ð—ÿ*(ÿ…ðGþˆç¿ü7¾ÿåE(øðHGÁß…€ŽA|$#¡û"€>+ý£|#ð÷öhøð#â‡Ãhß¿áqürðÇÀ¿Šžð†‘a¡xcâ-¯ü1ãy|'âm[DÒ´åÓãñ÷ƒ¼[¥èÓZøÎ8­5=OÁÒë¾ׯ5+AávðèÐôP@¡ðßþc?÷ÿÛêõ ( € ( € ( € ( € ( € ( € ( € ( € (ðþŽÿ”~Ü¿÷lßúسí~ÿP@P@P@P@P@P@P@ó$ñfšœÅ,‘õÌnTçÎG ú Š€ ( € ð½{Åß|sñÄ_?g½7À¯âßh~ñ'Ä_üQ›X“Àž¶ñ‹ëKá_ Yè^¹·ñ7‹Ÿy²=å`vÚ(¿øgñ TñUß¼ã=×Â_~kº…¾#økOÕ£×t5Ô5_é>)Ñ3xÛÅ_üCâ/O×¾(Ç?‡<àÍÚo^x®¥cḀ:ð[_Û÷Ã>øƒã³§À߆ºÁoø&ìÝÿø¹ðŸöðŸÆ7âÆâüXñ×ߌ?íÆŸâÿ.kh<--÷€ü[âZÜèÒèÒ?ˆ|-âX¼_i„€;/‹?ð[ÚËö·ý©þ|TøOð'Qø×ðûö†ÿ‚uü&ø?â†þøëãO‡š'†¿à¡þñ‹´›ïˆß ü/¨øŸãÅü´ð?‹´‹Í?á^Öþ3êòxn×ÁÞ ð¶¥ª¦Œà¿àªÿðPÿ‹ßÿcÏÙëÀÿ>|=ø§ûA|Qÿ‚„ü*×>&~П³ÏíaðƒÀž(Ñ¿e‡¿~)|#øÿðÃàÄoøãNƒà/ˆžø™s¢xÀ^0Ô5­U|c§^éº_Ä-6×Ú΢û#ÿÁGÿmÿÛWö¬ÿ‚KxŽ×Ä¿¾|#ý¡cÿÚçâ_Ç¿‚Úwügâ;|@øñËÂßüww¡øª‰Zl¶vÒÝXÙj_PÒ5!àKM[ÇPxÒ‰·÷‡®¼ úÁÿ"ÿ?ìkÿgñû>ÿé¿ÇÔôP«|8Œˆ5Y±ÃÍk<òbIØN<áÓžyí@•@P@P@P@P@P@P@P@P@~ÿÁÑßò‚Û—þí›ÿ[ö} ßê( € ( € ( € ( € ( € ( € ( €<Ŷ&Ã^½\Óý¶"F7-Î^LE¸F1Ù;tÕP@óìw4¯ûbÁD¢ydx¡Ö?e&6vhâó> NÒyhITÞß3ísrÙ4ú]@8>?ÿ‚Ù~ÚÚÏíßû^þÿ±—üNÛVýŽõOZxóÇ0~Üÿ ¾›;ǾÓõý Uo |MøJ-íD÷3êZX²ÒËÿ‚¢ÁOt¯ø'wÀo‚üð‘?j/ˆ_´¯Ç…ß>ü"Ðþ#CðöOˆ^!ø£i©j:f­câäðgÄ/+I·±°·M†/a½Ô5­ Á®¬ÿ´¢¸5übÿ‚Çüv¼ý°>9þÄÿ°?üÆ¿·oÅŸÙ_@ðN¥ûJkiûI|+ý›¾ø\ñÞ·¤xgÞ*ø¡¡êxâ÷È’{ SIÐ.®uM;V†ÛN’ÇL¼Ô¢û»þ ×ûr_þÞ_|AñÄÿ³oÆÏÙCâ'€¾"ø“á_ÄoƒŸôÛ SGñ_†ÊKÛïøž]/HÓü}àÙÍé±µñ>§éàë:fµ¦\iÖæÆ)ï>ú €?)þÍ+þÞðPhžY8_öRÆÎÌ‘ > Ü»ˆ’±‡r]¹‰c“Í}q@P@~ÄŸ>~ÓÿðJß‚_~9øKþ†~$]Ö¼5ý½âo }·UøsûEê_<uý³àýgÃþ ·þÆñ¿ƒü9­ùš¬Ú‡öwön« ö‘w}atôbÿÁ7?bEñ6©ãø ¡¯ˆu¿Ú§Wýµõ«Ñâ_uŸÚg^øs«|&Ö~#êÚhñWö^¥¥à-]Ñî¼ {e?ÉnõWÄ'Âð’êzÄ 'øþ 'ÿ}Ö/#˜èO?ðMoø'÷Åˤ“ãGÁýVÔ|Wðá챤Ýj?þ(øgTŸáÏÁê_>x7Á׺?Ä-ÿIñƒ|c«âÝÆÞšÃâ•Â[]Å}â‹ÝÕíbù#ö£ÿ‚6üºøey§~Çß>ÂqâOÚ/à×ÇÿˆÚ/Çÿ¿¶?…tŸx“áG…|AàÕñ÷€¾/| øŸ©|AøûCëZ6µø«ã§†¼)âýgâ¨Q4oj×ö~!ÑÀ<ïàwüëö#ýœ¿eï_³ÇíýâŸ_¼kûM|wøƒð–_|mñÏìàtï‹¿µ7Œ/î5Ÿ€²÷4¯ˆ ~3ø£@¹ð¶«á¿†óø6ÏSûoÄÛ=/O—Ä>c=†•dú/â¯ø&Oì5ã=#ÇÞ×¾iÍ¡|Ný˜üûø×GÒ/øg/ƒüT÷ï©©k7Ú•µµÜ ü5ÿ‚wþÇ¿üKð?Æ~øI>™ãÙÓTøß¯|)ñ^§ñ#âÇŠüI§x‡öÓtm#ãg‰<]®x»ÇZî§ñSľ<Ó|?¢Ùßø‹â­×u½:-6ÔhWú[&âàø&?ìAðÖãö`¼ðÁ{Ÿ _~Æ·çýœõ 'âŸÆh5/Ãñ‹ÅxÓâV“©ê'âÞüDðߊ¼Q<š½ç…¾'Oã? 
ÚÏåǦéVVÐà rðR/ùþÆ¿ö³ïþ›ü}@@P@{·ƒ,M–ƒlX÷Œ÷Ì=¦Ú°‘þõ¼p·o½ß©ê¨ € ( € ( € ( € ( € ( € ( € ( € ( Àø:;þPQûrÿݳëaþÏ´ûý@P@P@P@P@P@P@PãßéâþÝs§w|ÒÚ—ÜÂGœ žLp‹Ð@P@6þÇòyðQoû ~Ê¿ú¤®(ôÊ€ ÿ?‹ïxãÃ_ð^¯ø,äž ÿ‚Åüÿ‚BÍw¯|MCÅ_þ~Ïí>7ÇÃ%­ü?¡[þÐ>9ðEŽ“qà¦yué¼36¡wwˆm“RŠÞ-PècàÏÂMþ ÿØý³¿c‰ðU?ðVO|IÑüa ÜüyøQá/³ø:|gàý1¾ h~!ð7À?øçÃÐ]x;âW‚µ?‰:ˆµ›˜5mv´éâÎ[?DÌø/ÿ™øãïø*Wí—ÿ›ø5ñkGÕ£Ó¿àˆß²—ÄÍsö…ÑõËy•aýªü3ñ3Wý›¾øs\Ãhx¯Bð_Ƭu)Ú=cLñŸÚ Â_›À«¿k­7þ %ñ“þ YûZëÚ?ü3ö˜ÿ‚/ÿÁE>§…<)ñsâmÇÅO ~Ο h«;]ÚßÃ~&¶Ò|i¯G¢|Wц´o Ý ?xø£J»ðþ»7‡µûË«ý^Üï¿ø7{ößý§ÿk߇Ÿ¶Wƒ~?|cÓ?k~Ë¿´­ÿÁ¯?¶–àKOXþѾ ´¶ÔšMI´Í*ÚÓG½»Ñ¬¬|;®FÞ;BM7ÇzT:¶¯¯ÜC·¨EÔPåÃù?oø(_ýtý“ÿõJÜÐ×”P@â¿ðJù0ÙÛþÀþ6ÿÕ¡ãzý  â¯ão‰>&iðPoÚßÁÿ ~1üWø&>2ÿÁÀ_ðL_ƒ?|GðwÆÚÇYüÖõŸƒzwÄ¿‰Z?ŠluŸx»Oñf±ÿ…~!xÆþ÷Å©áøŸÀQkqø[Æ(Òµ 9øqû[ÁBu«OÚ÷Uñçí#â|{ðŸìÛÿÔj_„×_µ7í¯üDð?¼ñrÿöjñGÃ?Ù‡OýŸô„?±,ŸüI |4…ÿü3ñæÆëã„|M¨kú~±­ø®[? èYßüo þпðoÇ_ˆ_?m/ß¼]û.þן´ÿŽuKöŠø½âk¾<ðì‡ðkãTžø{á|Qð¤öŸ~%EâïI5¬À|,ÿ‚‘þÛIàßÚvóöpøÙñ¿ÆúÿÆ¿ø%Œþ8üÒ¾!þÒÞ4ý¯þ!IûPx㇇¦ø²þ‡Ä¿> ü<øMûH|"ýœ¼sâ]{Çÿ²ÿìç¢ø£áïG…|%«Ëçk smnöíûP]xKözÓ/¿`¯Û‹öÆý¤¿e¿mïÙ_Ã´ïÆŸ_´Åïx_á¯ÃßüñÞ±ã x_þ gðÛÇ¿~øÅ¿ô¿‡ û@ø‡B·×À/ëÚoti~iÞ.Õ´»0èþýãŒ^9ý…~xƒã?ÄÝ/ã.¥?Œ¾.Aà‰º‹>#üB¹ñ?Á뉞'_…ÑëŸ~)ü,ø3⿊Ú߆|(l¼sñ^Oiv?m¼;gã›[½\ëÒê—`§tù¿ÿ"ÿ?ìkÿgñû>ÿé¿ÇÔô·áý!õN @Ow ÏÉn„oäti GŽŽáʬ@ЊªŠªªTUPª€ª 8@ @P@P@P@P@P@P@P@P@~ÿÁÑßò‚Û—þí›ÿ[ö} ßê( € ( € ( € ( € ( € ( € ( € ñ¯øUôé$Ôl#ݧÈÛ¥‰6NǦü»3ݰÀˆ‘<²À%P@xì—£Þi?µÿíïq~«ñ4²¿‰tˆJ—> ‘os“F÷Ú©nÅ&´‘èíñ·Äø'Oüçãµÿ‰?a?ØÛâŸÄoÝC{âŸüGý˜~ x߯Þ%¼·³¶Ó­îõÿx›Áž»¬ÝA§ÙÚXÃ>£s,Vv¶ÖÈë "€z‡ÀßÙ_öaý˜müIgû5~Î?¿g›OÍ¥Üøº×ào‡¿ ­üUq¢%ôZ,þ$ƒÀ>Ð"×&Ò"Õ5(ô¹u5º{Ôo’Ñ¢[» ×ßÙïàÁïüJñÏÂOÿ>øÛã>º¾(øÃã‡? <àüWñ2_kZ¢x‹âWˆ|3¢éš·ŽµÔÔüKâ=Eu]꺂ßkúÕØ¸­ô“€rÿ?d?Ù;ö¿Òõ_Ú#ö`ýž>=jz×EÔ~4|ømñJÿGµ¼âÛK¼ñdžµÛ><’L!´’IJ§ € ( *ÿ‚zÿc|?ø¦~Í—÷Pé¿þjÞ8Ðõÿ ÞÌ`Õï|-¨üFñn§àŸ‰:=•ß—{ªxÇšíµîâ[8§Ò[‡Ä^ žñ>x/ã÷Ží´ë…º¼à_ øÞ]Vñ—Ù]·?%˦êz…÷Œ|%âÝÚúÃBøðÏÆÞ'øoãý#LÕd´—WÑ ñW„5-+R¼ðæ¯6Ÿ§ÝjžÕßRðÝþ¡¦iµÎ“&­¢è÷Ö ÿ UcÿG]û}ÿâg|mÿ技øb«ú:ïÛïÿ;ãoÿ4TÃXÿÑ×~ßø™ßù¢ þªÇþŽ»öûÿÄÎøÛÿÍðÅV?ôuß·ßþ&wÆßþh¨ÿ†*±ÿ£®ý¾ÿñ3¾6ÿóE@ü1Uýwí÷ÿ‰ñ·ÿš*?ኬèë¿o¿üLï¿üÑPÝ;ö7²Ó¯`½_Ú—öî¼03k¨þØ¿o,§W£xç·“ÄXu(íµÔ¤°É¶h$Šhã‘@>›ðW‚éýßÿ£Ãßø’/øŠêÖý}ÿ¢‹ëYÖäíÿ«ßîß\þÏÿ‘7ûû§ðÿyüoÞŸÔgü"ÿ d÷úëÿ×ê'ð€Â!áÏúGÿ®¿øýðˆxsþ‘ÿßë¯þ?@ü"ÿ d÷úëÿÐÿ‡‡?èýþºÿãôÂ!áÏúGÿ®¿øýðˆxsþ‘ÿßë¯þ?@ü"ÿ d÷úëÿÐÿ‡‡?èýþºÿãôÂ!áÏúGÿ®¿øýðˆxsþ‘ÿßë¯þ?@ü"ÿ d÷úëÿÐÿ‡‡?èýþºÿãôÂ!áÏúGÿ®¿øý~DÿÁwïµO†?ðJoÚ£Ç5¿øÆzü(ïìoøCÄ:߇|E¥i~Ñÿ4Cû;XÒ¯íoìþÝ¥_ßi·gž?´XÞ\Ú˺äFù<¯[ ™­l=j´+Cê<•hÔ*æÌ°p—-H8Ê<Ñ”¢ìÕâÚz6è¿¢nW–g_HËsŒ»›eØŸõ«ë~e„Ãã°XcÁ\Gˆ£í°¸ªuhUöUéR­OÚS—%jtêFÓ„dø!þ©ñ;þ Mû+øãâ>¹âxÏ[ÿ…áý³â¿ø‡[ñˆµ_ìßÚ?âþ‘§hk­ýÕýçØt­>ÇM´ûDò}žÆÎÚÖ-°Á)ÀuëbxS*­ˆ­V½iý{ž­j“«R|¹–2æ©7)K–1ŒUÛ´RKD},²¼³%ú@qö[“娧.ê¿WËòÜ&‚Ãûn áÌEoc…ÂÓ¥B—µ¯V­jžÎœyëT©RWœå'úíÿ‡‡?èýþºÿãõõçó Â!áÏúGÿ®¿øýðˆxsþ‘ÿßë¯þ?@ <%áÕ 2,YnX~*Ó@öºu…—üzYZÛíFÇ·Ìê¡›Ž2Iâ€.P@P@P@P@P@P@P@P@P@P@€?ðtwü £öåÿ»fÿÖÃýŸh÷ú€ ( € ( € ( € ( € ( € ( € ( € ( € ( € ( âëþwÿ?ô‹úücþnÿõÿD¹þ™ι?¯ú>§ö‹_³ŸæhP@P@P@Pâçü1ÿ(|ý¯?î€ÿëP|¯‹ñ þHüßþéÿú´ÁÓ?C¿ùHß¿îîÿÖ‰Ãþ çÿ”>~È÷_¿õ¨>5Ñáïü‘ùGýÔ?õi¦'ü¤oˆ¿÷hÿë Ãí}¡üÌP@P@P@P@P@P@P@P@P@P@Pàüÿ((ý¹îÙ¿õ°ÿgÚýþ € ( € ( € ( € ( € ( € ( € ( € ( € ( € (øºÿÁÏý"þ¿ÿ›¿ýÑ.¦_ó®Oëþ©ý¢×ìçùšP@P@P@ø¹ÿ Ê?kÏû ?úÔ+âüBÿ’?7ÿºþ­0GôÏÐïþR7ïû»¿õ„âpÿƒyÿ埲ý×ïýjtx{ÿ$~QÿuýZcCé‰ÿ)â/ýÚ?úÂpÁûG_h3P@P@P@P@P@P@P@P@P@P@øÿGÊ ?n_û¶oýl?Ùö€?¨ € ( € ( € ( € ( € ( € ( € ( € ( € ( € þ.¿çp_óÿH¿¯Æ?æïÿ_ôKŸé—üë“úÿ£êhµû9þf…P@P@P@~.ÁÃò‡ÏÚóþèþµÁJø¿¿äÍÿîŸÿ«Lý3ô;ÿ”ðëþîïýa8œ?àÞùCçì‡ÿuûÿZƒã]ÿÉ”ÝCÿV˜ÐúbÊFø‹ÿvþ°œ0~Ñ×ÚÌÁ@P@P@P@P@P@P@P@P@P@P@~ÿÁÑßò‚Û—þí›ÿ[ö} ßê( € ( € ( € ( € ( € ( € ( € ( € ( € ( €?‹¯ùÜüÿÒ/ëñù»ÿ×ýçúeÿ:äþ¿èúŸÚ-~Ι¡@P@P@P@‹ŸðpÇü¡óö¼ÿºÿ­AðR¾/Ä/ù#óû§ÿêÓLýÿå#|:ÿ»»ÿXN'ø7ŸþPùû!ÿÝ~ÿÖ ø×G‡¿òGå÷PÿÕ¦4>˜Ÿò‘¾"ÿÝ£ÿ¬' ´uö‡ó0P@P@P@P@P@P@P@P@P@P@P@€?ðtwü £öåÿ»fÿÖÃýŸh÷ú€ ( € ( € ( € ( € ( € ( € ( € ( € ( € ( âëþwÿ?ô‹úücþnÿõÿD¹þ™ι?¯ú>§ö‹_³ŸæhP@P@P@Pâçü1ÿ(|ý¯?î€ÿëP|¯‹ñ þHüßþéÿú´ÁÓ?C¿ùHß¿îîÿÖ‰Ãþ çÿ”>~È÷_¿õ¨>5Ñáïü‘ùGýÔ?õi¦'ü¤oˆ¿÷hÿë 
Ãí}¡üÌP@P@P@P@P@P@P@P@P@P@Pàüÿ((ý¹îÙ¿õ°ÿgÚýþ € ( € ( € ( € ( € ( € ( € ( € ( € ( € (øºÿÁÏý"þ¿ÿ›¿ýÑ.¦_ó®Oëþ©ý¢×ìçùšP@P@P@ø¹ÿ Ê?kÏû ?úÔ+âüBÿ’?7ÿºþ­0GôÏÐïþR7ïû»¿õ„âpÿƒyÿ埲ý×ïýjtx{ÿ$~QÿuýZcCé‰ÿ)â/ýÚ?úÂpÁûG_h3P@P@P@P@P@P@P@P@P@P@øÿGÊ ?n_û¶oýl?Ùö€?¨ € ( € ( € ( € ( € ( € ( € ( € ( € ( € þ.¿çp_óÿH¿¯Æ?æïÿ_ôKŸé—üë“úÿ£êhµû9þf…P@P@P@~.ÁÃò‡ÏÚóþèþµÁJø¿¿äÍÿîŸÿ«Lý3ô;ÿ”ðëþîïýa8œ?àÞùCçì‡ÿuûÿZƒã]ÿÉ”ÝCÿV˜ÐúbÊFø‹ÿvþ°œ0~Ñ×ÚÌÁ@P@P@P@P@P@P@P@P@P@P@~ÿÁÑßò‚Û—þí›ÿ[ö} ßê( € ( € ( € ( € ( € ( € ( € ( € ( € ( €?‹¯ùÜüÿÒ/ëñù»ÿ×ýçúeÿ:äþ¿èúŸÚ-~Ι¡@P@P@P@‹ŸðpÇü¡óö¼ÿºÿ­AðR¾/Ä/ù#óû§ÿêÓLýÿå#|:ÿ»»ÿXN'ø7ŸþPùû!ÿÝ~ÿÖ ø×G‡¿òGå÷PÿÕ¦4>˜Ÿò‘¾"ÿÝ£ÿ¬' ´uö‡ó0P@P@P@P@P@P@P@P@P@P@P@€?ðtwü £öåÿ»fÿÖÃýŸh÷ú€ ( € ( € ( € ( € ( € ( € ( € ( € ( € ( âëþwÿ?ô‹úücþnÿõÿD¹þ™ι?¯ú>§ö‹_³ŸæhP@P@P@Pâçü1ÿ(|ý¯?î€ÿëP|¯‹ñ þHüßþéÿú´ÁÓ?C¿ùHß¿îîÿÖ‰Ãþ çÿ”>~È÷_¿õ¨>5Ñáïü‘ùGýÔ?õi¦'ü¤oˆ¿÷hÿë Ãí}¡üÌP@P@P@P@P@P@P@P@P@P@Pàüÿ((ý¹îÙ¿õ°ÿgÚýþ € ( € ( € ( € ( € ( € ( € ( € ( € ( € (øºÿÁÏý"þ¿ÿ›¿ýÑ.¦_ó®Oëþ©ý¢×ìçùšP@P@P@ø¹ÿ Ê?kÏû ?úÔ+âüBÿ’?7ÿºþ­0GôÏÐïþR7ïû»¿õ„âpÿƒy¿åŸ²ý×ïýjtü=ÿ’?(ÿ¹ÿýZcCé‰ÿ)â/ýÚ_úÂðÁûG_f3P@P@P@P@P@P@P@P@P@P@øÿGÊ ?n_û¶oýl?Ùö€?J|I¬~Ñ?þ-ü Ó~'xß៾xwáŒÞ1ÿ…_©'„¾!x·ÇŸ,ËÃ×p[xjÛBñ¿âmfýouÛ] Ãɧø”•ÿ‡zxCþŽ“öÿÿÄÖøåÿÍ%ðïOÑÒ~ßÿøšß¿ù¤ þéáú:OÛÿÿ[ã—ÿ4”ýø(×ý~Éú¤.hô€?™ÿø,‡í7ÿðçü{þ {ûþÅ?µò~Èzgí¥üwµñßäøðcãÚ[j^Óô}{BÕÿ<9{w ‚Ö OLû‹â A)Õåἒʈ_ÃoÛ?þ ]ÿîÿ‚~Êÿ°§ü«ã/ŸÛ/àíáŒtÙßö­ðGÂ=àGÄ/ |VðTt—>ñçÃÏìðaÓ¯ïuß hÞFŸõȓƚ»kâ©"Ò¼CáØÀ>˜ø•ÿ~Ã_ üñ EÖ|ûZk~|i¶ýžþ-þÙÞø >©û"ü6øµ.¡•uá|G—Äö~'”i—óÁÞ­ x[ÑçK­>ëJ¿Ô¬µm"æüË>/ÿÁf>+ø/þ sðþ ßáoÙ“ö†ñWÁCàÉÖ¼c©xKáO„¼E­ø×]ñ÷Œ4=Á¿´ƒµ©þ"X´²·Ã}=u«OøäGo­ÿnYø¦ÞÛÁú¥¿‡tû­HÔþ%Áß°×ÂÿüBÑuŸþÖšÇÁ„m¿g¿‹¶w†>Oª~È¿ ¾-K¨C¥]xcÄŸåñ=Ÿ‰åeüðAw«hÖôyÒëOºÒ¯õ+-[H¹¿ýžøâ ­#áo|UáÛø’ûKðнÕ~ü@øoâMKN ¡ÍâMFÊÛâ úÔš'…t™Þàšð[9¼7ÿkýž¿moø(ßÄ¿|døÛñ§ã§ÄO‚^ðÿÂß…¸ø»ñÇÇëñ;Äþðï†ß ¾è>ðÆ¡¯bé1Û,æÏB±•á·TÕäÖu]>-@êÏÁu~¯ì¯ûrüTð'ìíûYè_´oìGàxõÿŠ_²GÅ¿ƒúGƒ>;øמ¾Ö<ñ[Æ>oÞøv–ð@~ÒŸhŸÛ›Ã_ôï|Xñ7Ÿ‡þýžþ(ø·àŸÃ= âoüWàMFÓ⮥âVð‰ôïXh¾Ô ðY“Wñ=Þ¯®xZÂx­./u4ÑÀ1¼qÿãý•|+£~ËÖ~ø!ûeüqøáûY|ðßí5ðûöQøðCGø¥ûGxgà‹´ÏíøçâO†4_ˆ)௠Øj:pk¸­¡øªßý~Ðm…»Å,€k_ð_Ø[Mÿ‚~üFÿ‚‹é6_üWð³àçĽàçÅÿ„ÚO€´öˆø_ñ;Yñ.‡ásàï|<ñïü£išž™wâ-.óP–/Ýiïa%Ïö]î§c{anðŸí÷ÿšøµ®xGöø‰ûÛ|~øðóâÿü[àgì·â¯üoýŸ<#áÝö¢øã]$ëWþ5ø~&èÞ0¿Ö> øâÒòÕ<3ñKÃö^ ñ-ÜÖW¿ÙSÙ@‹=Àí'ìÅÿø+ûa|ý¤¾üð§Å_øoöY×áð7ÄoÚV Â0þÍz¿Å0–¬|'ø{ã3ã‰<_ãøUn.OŠÛJøÿ¦ˆ,CÜø Ç­øQüB÷Å~þßñóûÿÚ@?gïý5|B Ð ( € ( € ( € ( € ( € þ.¿çp_óÿH¿¯Æ?æïÿ_ôKŸé—üë“úÿ£êhµû9þf…P@P@P@~.ÁÃò‡ÏÚóþèþµÁJø¿¿äÍÿîCÿV˜#úgèwÿ)á×ýÝßúÂñ8Á¼ÿò‡ÏÙþë÷þµƺ<=ÿ’?(ÿ¹ÿýZcCé‰ÿ)â/ýÚ?úÂðÁûG_h3P@P@P@P@P@P@P@P@P@P@øÿGÊ ?n_û¶oýl?Ùö€>þý?äùÿà£_õýû%ÿê¹ Ð þS?à·P|qø]ÿ`ÿ‚>þØÞýkÏÚ¯áoìÑ¥þѺŸÅ+Ù3à7‹þ5ø›I>*Ò4ohz|‘èÐÚøwMÔµ A¯-m|Eâ=íz~Ÿ©Of÷ja` í+Âß¶ücþ ›û ~Ôž=ý‹?hØköÿ‚p]øçâo†Gíká˜>|xøÓñ§ÆQøz]>ÆËá¼½ÔôMJÕüà‹ßµË&§¢Ûèú‰‹ø‹ûkÄúV‰`ø‰ûN~ÌÿðVÛö/ý¶~~Ò_³ïügãíé§ü~×üu&©ãŸ‰:üîÓö~ð?‰¼1®é:GìçðIñuŸÂï¾.Õu; ÛOøÀ¿|a«CÉã-ZÂóD±Žäö"ÿÚÃðZ/ø&çíñ7ì1ûpjß¾)ÿÁ0<û(x…´oÙÿ^¼ñ¿À‹Wüx/íÿh? =äü(д;]OB×5mcÅ–qêÿÚz|Z„Ú^¯cdøïûN~ÌÿðVÛö/ý¶~~Ò_³ïügãíé§ü~×üu&©ãŸ‰:üîÓö~ð?‰¼1®é:GìçðIñuŸÂï¾.Õu; ÛOøÀ¿|a«CÉã-ZÂóD±ŽäûµøKâ½câ×ìá†_¾x‡Å³uÅøYñ_ÀúŸ‚¾-èÝ¿îôðÿ‰| vgÕ´vmBÐý—L•ZòxnlÜB²\Tù,ø7ÿÝý­4Ÿø /ücöøEðcâ‡Ãø*üÇÿ´Ç† |_ðãÆø·ã?k?µwÆm{âGÀáÖ«¥é~7Ôí~'ü8»Ò|Sáï ¶“.¥ã[ƒÃ^·{_ˆÓÍvò…?à™?¶Ö©ÿjÿ‚Jøå?gÚïFñ·ì;ûr|lø¥ñ·öwøw§øËà—íŸgðƒÇŸ ÕäñïÁ}3^³ð¿,þ$øvÃÂv>½Ñl£×VãÄö>'Ñà}/OÔ/-@?PþþÄz—ÆŸƒŸðWÏŠ¿ÿeø*W„þ&ürÿ‚||^ý–>üJÿ‚¢üoյώ_´^·âŸ†zÆàí7áÄ+[ßø2ËÂÞ)Òô¿xwž,ñܺ펱Mœ+s­ÁáÀœ|AáÚúëþÅÿ·øÁðöý²¡áŠð«<§éú¿¼_ðãÄ£VðTš§Š-ü9o•†‹¯µÕƒÚé77‘€|ÿû]Á0¿m߂߲oüƒáGìåð'ã7§ö‹_³ŸæhP@P@P@Pâçü1ÿ(|ý¯?î€ÿëP|¯‹ñ þHüßþä?õi‚?¦~‡ò‘¾ÝÝÿ¬/‡üÏÿ(|ýÿî¿ëP|k£Ãßù#òû¨êÓLOùHßîÑÿÖ†Ú:ûCù˜( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( Àø:;þPQûrÿݳëaþÏ´ú§^ø7öoý«?i/ˆ¼Y£|>ðwí¥| Õ<ãŸê¶^ð爾x?ľ ñ/€%ñV©ö-Fñ½žŸ§i^+Ó¼?­êv×Þ/ðö¯¨x2=q<ñ? 
zWü7oìAÿG“û*âC|"ÿæ¾€ønߨƒþ'öTÿĆøEÿÍ}ðÝ¿±ýOì©ÿ‰ ð‹ÿšú?á»bú<ŸÙSÿáÿ5ôÃvþÄôy?²§þ$7Â/þkèÿ†íýˆ?èòeOüHo„_ü×Ðÿ ÛûÑäþÊŸøß¿ù¯ þ·ö ÿ£Éý•?ñ!¾ó_@ü7oìAÿG“û*âC|"ÿæ¾€ønߨƒþ'öTÿĆøEÿÍ}ðÝ¿±ýOì©ÿ‰ ð‹ÿšú?á»bú<ŸÙSÿáÿ5ôÃvþÄôy?²§þ$7Â/þkèÿ†íýˆ?èòeOüHo„_ü×Ðÿ ÛûÑäþÊŸøß¿ù¯ þ·ö ÿ£Éý•?ñ!¾ó_@ü7oìAÿG“û*âC|"ÿæ¾€<â߉þ~Ö¾0ýš<1ð3â/„>+…ß´7€hkÿ |S¡øÇÂÞ ð_‚<7ãvÓ®|UâMãUÑ,¯üi¯jZW‡|áoíüUâ´ë>%ÑtË¿ xÇzׇÀ?Aè € ( € ( € ( € ( € (øºÿÁÏý"þ¿ÿ›¿ýÑ.¦_ó®Oëþ©ý¢×ìçùšP@P@P@üÓÁÍ¿¶WŸ…Ÿ±»û_\>±ñŸöŸŸÀÚ–‡ Ø\"¿„|ðßâ§„¼{©øïÄJb—fŸ«êþ _øvÅžÖ}cQ»Öu )f·ð–±ü×ÄÜç …É*dò|øÜÑД)Åÿ†G*õ4~ìçCØÓŽŽrs”[T¦íϠ熼AŸx£…ñ”†x9­V.´YŽmdŽSC)Á>hÞ¶™¼ÏU*ÃQ§†£V1ža†‘õ¿üЮŸðGÿÙ$RŒñýJ°!Ô¯íAñ¨2º°Y[r•<Œsƒ=äÊ/ÿSýZcOÏ>˜m?¤gˆŽ.é®i«4ÓàN³Mnš³Lý¢¯³?™‚€ ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € üÿƒ£¿å·/ýÛ7þ¶ìû@à¼?µGüsöo°ý™´¿ø&_ìcûSéŸãd_´©ü ñ‡Ç  ÇágøC'Ãîlü/â ßDO¾¿ñÂjñß.¬úµ˜·:Uɘù»ÿ‡‚ÁÎ?ô…ß…ø®ˆ_üÚPÿÿƒœé ¿ ÿñ]¿ù´ þ ÿ8ÿÒ~ÿâº>!ói@ü<þqÿ¤.ü+ÿÅt|BÿæÒ€øx'üãÿH]øWÿŠèø…ÿÍ¥ððOø9Çþ»ð¯ÿÑñ ÿ›J?áàŸðsý!wá_þ+£âÿ6”ÃÁ?àçúBï¿üWGÄ/þm(ÿ‡‚ÁÎ?ô…ß…ø®ˆ_üÚPÿÿƒœé ¿ ÿñ]¿ù´ þ ÿ8ÿÒ~ÿâº>!ói@ü<þqÿ¤.ü+ÿÅt|BÿæÒ€øx'üãÿH]øWÿŠèø…ÿÍ¥ððOø9Çþ»ð¯ÿÑñ ÿ›J?áàŸðsý!wá_þ+£âÿ6”ÃÁ?àçúBï¿üWGÄ/þm(ÿ‡‚ÁÎ?ô…ß…ø®ˆ_üÚPy¡ÁUÿàëÿ é餸gþ G£øwJŽG–=3Bý…¾1izK&Ñ$‰g§üD·¶Y*‡uŒ3m]ÄàPÇü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@ü=çþßÿ¤`æ•|oÿç—@#ÿÃEÿÁÅðÕ_ðÛŸðå­þ›þŠ÷ü2/í=ý±ÿ$çþ'ü¿ávÂ)ÿ$ëþ)ùǯúWü„¿ÓkÈþÂʵ¶þ¥íOú ç­ÏþíõOƒÚ{/÷Ým~/xýþ"¿ˆ?êüBÿõ—þ¡ÿÑ;õl»êßò:ÿXÞ~§õÿùÿ·½ÿ÷ÁýÑõÇü=çþßÿ¤`æ•|oÿç—^¹ùàÃÞàíÿúFþiWÆÿþytÃÞàíÿúFþiWÆÿþytÃÞàíÿúFþiWÆÿþytÃÞàíÿúFþiWÆÿþytÃÞàíÿúFþiWÆÿþytÃÞàíÿúFþiWÆÿþytÃÞàíÿúFþiWÆÿþytÃÞàíÿúFþiWÆÿþyt‰â_ø,÷üuà¿xƒÆ>2ÿ‚phÞð‡„ôMWÄÞ*ñWˆÿc¿ŒÚ?‡¼3á½Â}S\ñ½«^üOŠÏKÑ´m.ÖëQÕ5¹b¶²±¶žæy(†¬MŒÄIƆ…lMy(¹8Ñ¡NUjIF)ÊMB2j1M½’lõr—0â\ó&áÜ¢”kæ¹þm—d¹e Õ§B³ ×G‚¥:õ¥ 4cS^”%V¬ãNšns”b›\OÂ_ø/Wüåñû×¾1ø û|3øÙá 7[¹ðΥ⯅_²ïÅ?ø{Oñ%†›ª^x~óVðÿÅkÛ;}f×KÖt}FãN–U¹†ÇUÓ®^1Ü,ü>w—gØj˜Ì²¬«P§^XiÊtªQj´)Ò«(òÕŒdÒ…jo™+;µ{¦}g‰q„ùæ‡xß/¡–æ¸Üª†u‡¡‡Çà³O/Äã1ØU]| jôc)brÜ\)MTЧ¸¨Î úü=çþßÿ¤`æ•|oÿç—^±ùðÃÞàíÿúFþiWÆÿþytÃÞàíÿúFþiWÆÿþytøËûG|gÿ‚þÖ¶•¯Ä/‰ÿ³=ß¿à 67Þfý•î¾ kº–¥'ÃMËÄpxf_€ºýþ¯ªj¾ƒÂºeÏ|Eá}VçSÓuË}K\Öõ‹kÍP¿µ“ðʘjYߊpÙšS¡B«öxzŠð«  V£EÆIÅÒ©(¼EX4áV.¤^•.ª¸<ëá‡ÐO/θ"RÂæ™¶_Îp•=ž'ˆâŽ&©—fy”kД*Ä£Z9FN¤1ñÁU‹SÂ(¿×?ÁP¿à꿇>Ó|ð÷þ #áx?FûgöG…<ûüZð¿†ô¯í û­WPþÍдOˆ:]ÛµKëÝJóì¶±}¦þòêòm÷Èÿ·Q¡G N4pôiP£ òR£N©ÃšNrå„ciJRvJòm½[gùq™f™žs­™gŽ;5Ìq>ÏëüˈÇcqÆ•<=mŠÅT«^¯²¡J•|õ%ÉJ:q´!®ŸþóÿoÿÒ0?óJ¾7ÿóË­Nÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòèÿ‡¼ÿÁÛÿôŒüÒ¯ÿüòè‰ø‹ÿÄÿƒ¥¾x7XøñwöðgÂ߇žþÏÿ„ƒÇdÏ‹žð†…ý¯ªØèZOö¿ˆ5Švºm‡öž¹©éº5‡ÚgOµjz…•”;§¹‰ƒ3̰™F ¾aŽ©*X\?³ö³Œ'QÇÚÖ§B…5)Êõ*Áh“»Ñ6}_pWø‰Ågp®–;>Î>»õ -lVN¯ö~_‹Í1\Øœ]JXz\˜,"¢ö•#Ï(*p¼ç³á×üþ–øÃàÝâ7Â/ØÁŸ¾x‹ûCþÿü>ý“>.x³ÂïöF«}¡jßÙ Ñþ)Ýi·ÿٚ晩h×ÿfþË©é÷¶SmžÚTS,̰™¾ †a©*¸\G´öS”'MËÙV©Bw…EÆÕ)Mj•ҺѦoÁ\Cáßf|ÅXJXû'ú—×ð´qX|m:_Ú~4Âòâp•*áêó౸zÙÔ—$¦éÎÓ„¢»oø{Ïü¿ÿHÀÿÍ*øßÿÏ.»Ï”ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€ø{Ïü¿ÿHÀÿÍ*øßÿÏ.€Úÿö¼ÿ‚·þÕðBø,Gü=7ö_ÿ†nÿ„þóÿ +þ,§Ž>Âgÿ 
Gí‡áŸøYßò9x—Ä_ð‘Â;ÿïÃßùýû#ûwý3íÚv¾@÷ù@P@P@P@P@P@P@P@P@P@P@P@P@|]ÿ"ÿ”w~Þßöeßµ'þ¨ïW‹ÄŸòNçßö%Í?õ¹úg‚¿òy<%ÿ³›ÀúÔåGâçü…ÿ(îøÍÿg£ñÿTwìé_áOü“¸ßûb?õ.?¦¿hüžNÿ³g“ëSÆgôé_¦ŸÃ!@?ðrgüÿÇ^ ñ§üoöwñ>§à­{Âv~ð—ÇëÏ x’ûÂ~/Ò.§»Ò~|9ø¡á}fÂúËQ’âê kAøeâk *ê ë{ü5¨[Z]éòx¢ïOüwÄŒ‚½‹Šò겡RŒhÑÌ*²£Z ¸apتSŒ£+µ:xZ±ƒRQöRIÇÚ¸ÿ¤BrœÏSÀ2ÀPÌð™…\Ë1áxì,Ã.ÄBñæs‘cðÕ©U ¡Nx\^y­ˆ§:S¬ñ´gRe€§[÷¯þçûCüPýª¿à›?³ÇOÅ¿ˆ¾&ø³Eøƒ¤x£Ä6öQéí¯Íðûâ÷ĆÚn¹k q­jú7„4íC_žÒ;[+½rãP»²°Óíg†Æßï8;1Åf¼7–c±³U15aˆ…ZŠ<¾Ñáñ˜Œ4g$´çœ(ÆU´\ÜœcÔWògÒGƒ².ñ¯ŽxW†pÓÁäy~''Ä`0sªë,$sŽÉóºøZ3’çú®˜Ö£„GR­<,(Ó«VµHʬÿLëéÀ ( € ( € ( ÅÏø8cþPùû^ÝÿÖ ø)_âü‘ù¿ýÓÿõi‚?¦~‡ò‘¾ÝÝÿ¬'‡üÏÿ(|ýÿî¿ëP|k£Ãßù#òû¨êÓLOùHßîÑÿÖ†Ú:ûCù˜( € ( € ( € ( € ( € ( € ( €?àèïùAGíËÿvÍÿ­‡û>ÐïõP@P@P@P@P@P@P@P@P@P@P@P@ñwü‹þQÝû{Ù—~ÔŸú£¼u^/É;ŸØ—4ÿÔçéž ÿÉäð—þÎoÿëS•‹Ÿðjü£»ã7ýžÄOýQß³¥|g…?òNãìuˆÿÔ¸þšý òy8kþÍžMÿ­OŸÓ¥~š …ø¹ÿ Ê?kÏû ?úÔ+âüBÿ’?7ÿºþ­0GôÏÐïþR7ïû»¿õ„âpÿƒyÿ埲ý×ïýjtx{ÿ$~QÿuýZcCé‰ÿ)â/ýÚ?úÂpÁûG_h3P@P@P@‹ŸðpÇü¡óö¼ÿºÿ­AðR¾/Ä/ù#óû§ÿêÓLýÿå#|:ÿ»»ÿXN'ø7ŸþPùû!ÿÝ~ÿÖ ø×G‡¿òGå÷PÿÕ¦4>˜Ÿò‘¾"ÿÝ£ÿ¬' ´uö‡ó0P@P@P@P@P@P@P@~ÿÁÑßò‚Û—þí›ÿ[ö} ßê( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € (âïø)ü£»ööÿ³.ý©?õGxê¼^$ÿ’w>ÿ±.iÿ¨5ÏÓ<ÿ“Éá/ýœÞÿÖ§*??àÔ/ùGwÆoû=ˆŸú£¿gJøÏ äÆÿØëÿ¨9qý5û@?äòp×ý›<›ÿZž3?§Jý4þ (ñsþÿ”>~ן÷@õ¨> WÅø…ÿ$~oÿtÿýZ`韡ßü¤o‡_÷wë ÄáÿóÿÊ?d?û¯ßúÔèð÷þHü£þêú´Æ‡ÓþR7Ä_û´õ„áƒö޾Ðþf ( € ( € ( €??àáùCçíyÿtÿZƒà¥|_ˆ_òGæÿ÷OÿÕ¦þ™úÿÊFøuÿwwþ°œNðo?ü¡óöCÿºýÿ­Añ®äÊ?î¡ÿ«Lh}1?å#|Eÿ»GÿXN?hëíæ` € ( € ( € ( € ( € ( € ( € üÿƒ£¿å·/ýÛ7þ¶ìû@¸ß>%xá7…îügñÄúg„ü5g=­£êZœ’¤_ßJ Óô½6ÊÚ9õ _WÔg" ;HÒ­o5=Br!³´žO–€>>¸ÿ‚›þÇ6³Ïm7Œþ*y¶óI¾_ì³ûVÏ™´oåÏÁ9 ™7)Û,2I‹‡Ùbü<ûö5ÿ¡Óâ¿þ"§íeÿÎB€øy÷ìkÿC§ÅüEOÚËÿœ…ðóïØ×þ‡OŠÿøŠŸµ—ÿ9 ?áçß±¯ýŸÿñ?k/þrÃÏ¿c_ú>+ÿâ*~Ö_üä(ÿ‡Ÿ~Æ¿ô:|WÿÄTý¬¿ùÈPÿ>ýètø¯ÿˆ©ûYó þ}ûÿÐéñ_ÿSö²ÿç!@ü<ûö5ÿ¡Óâ¿þ"§íeÿÎB€øy÷ìkÿC§ÅüEOÚËÿœ…ðóïØ×þ‡OŠÿøŠŸµ—ÿ9 ?áçß±¯ýŸÿñ?k/þrÃÏ¿c_ú>+ÿâ*~Ö_üä(ÿ‡Ÿ~Æ¿ô:|WÿÄTý¬¿ùÈPÿ>ýètø¯ÿˆ©ûYó þ}ûÿÐéñ_ÿSö²ÿç!@Ýð{ö¹ýŸ~;êËáÿ†Þ;¹¼ñ¶𭝆¼Yை ¼K¨éštém¨êG‡¾&øWÁúÖ±e§M, ¨]iV7Ø‹›Wºx£º·i>‘ € ( € ( € ( € ( € ( € ( € ( € ø»þ Eÿ(îý½¿ìË¿jOýQÞ:¯‰?äÏ¿ìKšê sôÏäòxKÿg7€ÿõ©ÊÅÏø5 þQÝñ›þÏGâ'þ¨ïÙÒ¾3Ÿù'q¿ö:Äê\M~Ðù<œ5ÿfÏ&ÿÖ§ŒÏéÒ¿M?†B€ ü\ÿƒ†?埵çýÐýj‚•ñ~!É›ÿÝ?ÿV˜#úgèwÿ)á×ýÝßúÂq8Á¼ÿò‡ÏÙþë÷þµƺ<=ÿ’?(ÿº‡þ­1¡ôÄÿ”ñþíýa8`ý£¯´?™‚€ ( € ( € ( ÅÏø8cþPùû^ÝÿÖ ø)_âü‘ù¿ýÓÿõi‚?¦~‡ò‘¾ÝÝÿ¬'‡üÏÿ(|ýÿî¿ëP|k£Ãßù#òû¨êÓLOùHßîÑÿÖ†Ú:ûCù˜( € ( € ( € ( € ( € ( € ( €?àèïùAGíËÿvÍÿ­‡û>ÐèÞ§®6½ÿð·‚µ{+-CMøyûxâ7ƒ¤¹··–] Å^>ø¿¦ø'ÄÚµ‹<HµÏ xVÃF†üLn,t«Í{N±k{Ok‘ênÜÜÛÙ[Ü^^\Ciii ·7WW2ǽµ¼´³Ü\O+,Pà JÒK,Œ±Ç³»Ðüý£¿g¯Ú_ÃÚ¯‹¿gh hZËøs[ñ?Á/Šø«áíÄ1ØÙêrh:®µàMs^Ótýe4ÝGOÔK»¹†ùlo¬îÌ ¨$pOñˆt hçŠüW®ið¿†tKÄ>$ñ'ˆu+-@ðþ¢ÙM©k湬jS[iÚN‘¤éÖ×ú–¥qoeaeo5ÕÔÑA’(?ðßâoÃŒž Ð>%ü!øƒàŠŸ$´µ½ºÓ.®´øbÿTе‹{mJÊ÷O¸ŸN¿¹ŠÛK«Ig·–4íèÌø¿ã¿'„þxWÇ~>ð§„'|6ø7àâgÅÿˆ^øUðã–ö÷~(øñ#Åš¼á»[»ë].Òç_ñ_‰ïô½G·ºÔ¯¬´ëyµûhæ¾¼µ´šââ(Ü¡ð÷ˆtèŠü)®i&𿉴7Ä>ñ'‡µ+-k@ñ­YC©hú懬i³\éÚ¶‘«i×6÷ún¥aqqeeq Õ¬ÒÁ,r0ÆðQ}qüû*ø§â>Ÿgi7Š>ø÷à—Ž¼¨ÜA —Š´ÏŒþ³³Õ¬%–)ŒµŽ¥¨éZ„j¾N­ êšÇ‡õ$¹ÑõJÎäî:( € ( € ( € ( € ( € ( € ( € ( ‹¿à¤_òŽïÛÛþÌ»ö¤ÿÕãªñx“þIÜûþŧþ ×?LðWþO'„¿ösxÿZœ¨ü\ÿƒP¿åß¿ìô~"êŽý+ã<)ÿ’wÿc¬Gþ åÇô×íÿ“ÉÃ_ölòoýjxÌþ+ôÓød( ÅÏø8cþPùû^ÝÿÖ ø)_âü‘ù¿ýÓÿõi‚?¦~‡ò‘¾ÝÝÿ¬'‡üÏÿ(|ýÿî¿ëP|k£Ãßù#òû¨êÓLOùHßîÑÿÖ†Ú:ûCù˜( € ( € ( € ü\ÿƒ†?埵çýÐýj‚•ñ~!É›ÿÝ?ÿV˜#úgèwÿ)á×ýÝßúÂq8Á¼ÿò‡ÏÙþë÷þµƺ<=ÿ’?(ÿº‡þ­1¡ôÄÿ”ñþíýa8`ý£¯´?™‚€ ( € ( € ( € ( € ( € ( € (ðþŽÿ”~Ü¿÷lßúسí}ýü¥:ïþÑÿ§ÿëEêtòüOûb·ìcÿžý¥sâPßh^ ¾²¿–X…†¡á¯†ÿ|_§ß_²ßx~ÚC-¸â Ä/ø!OŠgØ þ ¹âïø'¿ìùûH|ý >þ׿°çìõñSBñ'Á_‹¾ ø¿ákÚïözø{'†>7hUð^¿®éú7ˆ> ÛhŸ¾/jÚ\×ܦŒÞŠ C¥G§5°§ø«ö±ÿ‚™ÁEtOø.‡Ä?…´§ÃƒŸ²_ì?'íCû0xOöXÕþxOÆ_ð¿4Ÿ‡¼}§|D×¼añ~êÿHø—ðãľ"ðå€Ôü¨øSÔ|=¥ø“_³·Õ¼/u£xcPƒÄ€/~Í¿¶Oí§û6Á9àÙƒ_±ßÅ ü:¸ý°¾)üqø1ñׯžðŽü%­ØÞükGðÖ©â(u_[é~ »ñv«â[½/á÷‹<­ø¢C—Ä–>}¥õ€Ñþý«?à·ž$øÑÿxý†íÿo_…+ã_ø''‚-h þÕSþÉŸ GŽ~ i×þ"x#àdŸ c„|*ð¯‡+xZ8‡ÃÖ¯a¢]ø«À¾;ñgÃ]wYÓ4ù&¸:m§ˆu\kÑé‰4°éßÚFÊÝÚ#4üQøwVøµûÁLà¥ÿðY…Ÿðx‹áïìµÿaø­û=þÛ ´u{¦ñ'ìñ÷ÅMÿ‹,ìcòŒ×¿¼wa¥ë6ÖÏ3Guâ{ßj7fÏDðö¹,àhxþ <¿°ìŸðs×íåð: 
üQÔdý¢c›ßW·oq¨øÄWŸ4¿èñ­ëX\Ù\j¾±Ò|Y–ÎÖòÅüAga•þŸý¦/­À=ïö'ÿ‚ª~ÙÿkÙ·ög?·_‰¿l þÚ_~)h>:ø¦ßðN/~ÍW°í+iðÊ÷ŵÿ]øÏá7Ãïüuøwsâ#?‡ôÛ/ JãB–»}§Yj6:^¸ù£ûüpÿ‚’þÊðKø(/üSá?í©â]nËáGü{Å:¿Ç¯ƒ÷Ÿ>k?ðµÎ«ãï‚ú'ÇŸŠ¾3ñƒ5ÿiÄ»_è_ÚÞðÃèz_ƒ,ü16±ámGH{PÎûb¿ðV_Ú_þ3%»|^Ñ?áÒŸð»gý‡–Óþ¿}‰hûØÉii<`~" ÿÂÂ/ÿ ` þ£}ˆ‹Ã£ ¾@ñןø+¿ü ãGÀø&§ÂŸü_🄿ioø+÷íOûSxáïÇOˆ?>ëß²§ìðÆv^ðç‡<-ðÓ@Ð|'áOˆ>)–×Jñ‰4 _âø‡SÕ´äm?XÕîÄ:¹áÀ¯<5ÿÿ‚„~˵üwþ ÇûOþоý¥¼ð—þ ³ñ_öãý•ÿkà€>xÎÃQð¯‚ç1økÇÿ ü3kªü-½}#ÄWRjZ,£ÝZ\/„å:ôz½Ÿ‹­ô_~}|Vø«ÿý¯?àÚßÚwöäý²¿j¯ xÿÀÿf øoÁŸ³÷‡~|<ðv§aâoþ׿¼5qñ߯<)m Í¨x›Æ)áoÚÃðßÞÐü  øgUðõ¯Új—w vÁÿà Ÿÿà£þü7ø1ñª×àwìgÿìý‘fo‡Ÿ> Ïà¿ß|eý®þ+ø‹áV™£ê·~:ðv¹â_‡_<'áNÝü?Õ|9âo¹Ðµ_QKOŦø`÷Kþ ÿ&3ñ£þÂõxü5 Ð ( € ( € ( € ( € ( € ( € ( € ( ‹¿à¤_òŽïÛÛþÌ»ö¤ÿÕãªñx“þIÜûþŧþ ×?LðWþO'„¿ösxÿZœ¨ü\ÿƒP¿åß¿ìô~"êŽý+ã<)ÿ’wÿc¬Gþ åÇô×íÿ“ÉÃ_ölòoýjxÌþ+ôÓød( ÅÏø8cþPùû^ÝÿÖ ø)_âü‘ù¿ýÓÿõi‚?¦~‡ò‘¾ÝÝÿ¬'‡üÏÿ(|ýÿî¿ëP|k£Ãßù#òû¨êÓLOùHßîÑÿÖ†Ú:ûCù˜( € ( € ( € ü\ÿƒ†?埵çýÐýj‚•ñ~!É›ÿÝ?ÿV˜#úgèwÿ)á×ýÝßúÂq8Á¼ÿò‡ÏÙþë÷þµƺ<=ÿ’?(ÿº‡þ­1¡ôÄÿ”ñþíýa8`ý£¯´?™‚€ ( € ( € ( € ( € ( € ( € (ðþŽÿ”~Ü¿÷lßúسí}ýü¥:ïþÑÿ§ÿëEêtã_ðPÿø%†ÿøóûxÛãÄý)ÿg?ÙâWˆþ,xÓöZÖ¾YøÛôg‹oí4‹ Aãêþ6³Ñ´ÿ økû&K[ï ê¼acâ Ä>,Ð57ŠÏ[fä?´Oü·öcñÇ_ØŸö‘ý|;ð+þ ÿñgö=øñkñZûVøû.ü?Òt¯ÞºM6ü'ñö•ðó[ø>òÛkö_öFŸâFûÄSøgG×<]e¦èr7‰.§ˆüwÿñ©ø»ûy꿳ßü;â—ìáû3ÿÁGn|_â_ÚcöfðÿÁO‡þ2:ßøáWü Ÿð«~Ëÿ SþÏ„’ü,þÞÿ„Ÿþ-Çü,í/7ûwû/þïýoögöÖ´Ê? à€Ÿð¯>Á¾ÃYkÿæ>8|LøËý·ÿ #û?þ÷ü,_‹ ñCþ¿ìßø\·¿ðªÿ±öÿaÿl}¿â?ö†´ÿ²ìqýŸ@T~ÎÿðIÏøP~+ÿ‚¶xŸþïü%ðô¯x»Æ_aÿ…Yý…ÿ /þâf‰ý›öŸøXúÏü,ß°ÂÅûWÛ>Ïð÷í_ØþOÙmÿ´<Û¦?à™_±7ü;ŸöøûÿÂÌÿ…Åÿ NÃÆö?ð²?á ÿ…{ÿ 7ü&_|kñÍÿ„?þ¿cfÿÂaý³þWíŸÙßÚí~×öP ý™?à”þø%«ÁOíþ'øòÇãÏßø)ÇÇOˆ_›EÖgÓ->ûý‹ÿà¶ìýñ/់hßø*—ÆßÚ³áçÀï‡? >|Ó>x;özðÆšúT~Ó5ߎð‡x£Å:‡ÇŸx{B‚ÞZñ<º\öÚ½´:ýÀ»Ô$¾7 9üÿ‚j¿þ ÿÁGÿeSûdë8ý¿oX?h oÃ_õ߀ž·Ö¾|Tøíc¥icât­¼k'ˆ¾$Ýø+MÐ4´Ÿ IaàoÞ^iPêò[Zj³\]ÊæÁºCÿÁ'"ÿ‚pÃ]ëcâÌ´ãþÖûbÿž”ë²|^“\žÙõãðÇþ½ð²cðկŖˆ> ?èì|6>“øåÿ?øwñ'öfÿ‚zü+øUñãÆ?³ÇíÿÈÐ|eû0~Õ ðf¯j:v»¡xgºŒõ|,ñ¥.ƒâ¿|KÕ<%¦x§Äžºñ +ý­ØßX½Ðïuý/]Ïø3ÿUÕ<=®þÛŸÿioÛ Å¿µWí¯ûk~Í>2ý”õÚ/\ø?á/†>øQð›Äþÿ„rÛDøuð+Á~#—G²··½³ðö³®BÞ4‰õù|9h–× ö­âm[[Ýoø#fïø"Xÿ‚:ÃGcþ-òxþ/þ~çÆåøÉý«ÿ ‹þ‡|Â9öøYüÈ_í‡þAtwÂßðG‡øWû[~¿¶'ÀßÚ,|1ø‰û3~˾ ýÿiM!>ÿkøköÐø5à øoïˆ,í>'ø~O†ž&Ò%ÑZÑ|A3üL’ÂïMðÍ®¥¦x+=lëø*7ü˜ÏÆû|!ÿÕãðÖ€?@( € ( € ( € ( € ( € ( € ( € ( €>.ÿ‚‘Ê;¿ooû2ïÚ“ÿTwŽ«ÅâOù'sïûæŸúƒ\ý3Á_ù<žÿÙÍà?ýjr£ñsþ Bÿ”w|fÿ³Ñø‰ÿª;öt¯Œð§þIÜoýޱúƒ—Ó_´þO' ٳɿõ©ã3út¯ÓOá €??àáùCçíyÿtÿZƒà¥|_ˆ_òGæÿ÷OÿÕ¦þ™úÿÊFøuÿwwþ°œNðo?ü¡óöCÿºýÿ­Añ®äÊ?î¡ÿ«Lh}1?å#|Eÿ»GÿXN?hëíæ` € ( € ( € (ñsþÿ”>~ן÷@õ¨> WÅø…ÿ$~oÿtÿýZ`韡ßü¤o‡_÷wë ÄáÿóÿÊ?d?û¯ßúÔèð÷þHü£þêú´Æ‡ÓþR7Ä_û´õ„áƒö޾Ðþf ( € ( € ( € ( € ( € ( € ( Àø:;þPQûrÿݳëaþÏ´ú«ñ»C¸ð‡Ä |{ð>µðê×â^•àü9»ðOÅÿ ᯊ>ÔµïxŽ]>ÓÅñYkw¾ño‚µ­=5o ë±ø_ÅZí¦½â?ø“Bµh<ø~ó…ÇíûZ%Äémû*~Í—É4©oq'ü3ÁvòO»¦x?áEMä<±…v‡Î—Êf)æ>ÝÄølŸÚïþ;öiÿʼnx3ÿœ5ðÙ?µßýwìÓÿ‹ðgÿ8j?á²k¿ú4ïÙ§ÿ%àÏþpÔÃdþ×ôiß³Oþ,KÁŸüá¨ÿ†Éý®ÿèÓ¿fŸüX—ƒ?ùÃPÿ “û]ÿѧ~Í?ø±/ó† þ'ö»ÿ£Nýšñb^ ÿç @ü6OíwÿFû4ÿâļÿ΀ølŸÚïþ;öiÿʼnx3ÿœ5ðÙ?µßýwìÓÿ‹ðgÿ8j?á²k¿ú4ïÙ§ÿ%àÏþpÔÃdþ×ôiß³Oþ,KÁŸüá¨ÿ†Éý®ÿèÓ¿fŸüX—ƒ?ùÃPÿ “û]ÿѧ~Í?ø±/ó† þ'ö»ÿ£Nýšñb^ ÿç @ü6OíwÿFû4ÿâļÿ΀:ã~Òº%Ÿ‚?h-àÀ†ã_ðçˆüg§xkö˜Ñ¾1ø»Ç àÏé>,Ð|¤Ëoàh^ðæ¹«èºdž7ñ-õι¬Üxf cÁ^ðÞ™ªxšÓâO‚À>ßÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hÿ…¯ð·þŠW€?ð±ðïÿ,hùùÿƒ‡à¥>ýŸ?dIÿg„Zÿ†üYñSö¾Ò0Ñu_ kx†ÓImKHµ¼½ò,õm@¤²×£À™&7!Èþ­pöøœT±Ê”›Ã¾ B¯4b•hN”ÝEh'+)ËV|oÒ¿ÅñcÅ%ð”1O+Ér-,v&iÓÍñ9fužâešàU*Õ¥,»‡Ì0ÑÁÔÄ*8‰Ó¥ÍSJñ‰ûCÿ _áoý¯ácáßþX×ÙŸÌáÿ _áoý¯ácáßþXÐÿ 
_áoý¯ácáßþXÐã¯üãÆ¾ ñgüOö±ðÿ…|[áŸëÚ‡ü(Ÿ°hž×´­gW½û/í/ðnöëìšnwsysök;k‹¹ü˜_ɵ‚iäÛNëñ¾ FSáÚ0Œ¥'õ F)ÊNÙž »%vì“~‡ô§Ñ½7Ò'ÃÊØŠÔ¨Q‡úÛÏVµHR¥nâhGš¤ÜciJ1Wjòi-ZAÿñ¯ƒ|'ÿ“ý“¼?â¯øgÃZöŸÿ Ûíú'ˆ5í+FÕì¾ÕûKüd½µû^›¨ÝÛ^[}¦ÎæÞî:ó­g†x÷E*;ÆPá¦3Œ£%õûÆIÆJùž5«§f®š~ô¾¯Gô‰ñ¶µ*ôgþ©rU£RiO—¸få©(Ë–Q”]›´“OTÑûÿ _áoý¯ácáßþX×ÙÍaÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐÿ _áoý¯ácáßþXÐñÏÿ¤ÿ‚üDý½~4^Á#¿b;øŸÁúÏŒ´ ünø‘ss`Ú_Šü[àßi~$›A´ñò>•àï† |M éúÏŽ7ügŒ³ügfà슒¯W…,ud¯í+aêÆ¤©©4Õ 6­8Ï_â”鸧qj¿ú_ôjð‹†¼á?I/ó åuèexŒ eÒ©ìÖ -Íð°t1“¡ÆyžwÄX e\>O•ßÙQÂã#Z¥:¸Úôå•ÿCðK¯„º7ì9û | ý–þ%|jø)âŸü0âA×ußøâÖç·’øÛâïþ"Ù¦“q¯Ã êóGe§xºÎÂæKÍ"ÅžöÖäÅ€Å#þÂùM|"Àåxš”ª×ÂýgÚT æéIׯb1+‘Ô…9¾XÖQmÂ>òvV±üY㿈9WŠ~+q_d˜<Õç¯%ú¦4ŽúQÊøw(ɪÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿåðµþÿÑJðþ>ÿå~ÿÁÎ>,ð·Š?à…·‡ü#^%ÐÿÒDüÿˆááßþ|ÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸñ§ÂßúHŸÿñ<;ÿÏ‚€øÓáoý$OÇÿøŽÿçÁ@ü@éð·þ’'ãÿüGÿóà þ tø[ÿIñÿþ#‡‡ùðPÿ:|-ÿ¤‰øÿÿÃÿüø(ÿˆ>ÿÒDüÿˆááßþ|ÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸñ§ÂßúHŸÿñ<;ÿÏ‚€øÓáoý$OÇÿøŽÿçÁ@ü@éð·þ’'ãÿüGÿóà þ tø[ÿIñÿþ#‡‡ùðPÿ:|-ÿ¤‰øÿÿÃÿüø(ÿˆ>ÿÒDüÿˆááßþ|ÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸñ§ÂßúHŸÿñ<;ÿÏ‚€øÓáoý$OÇÿøŽÿçÁ@‹ðH/ø7¿áü_þwíãßð¡¿áScoü=ñ;þ¯øZð³3×Xø{ý‡ý‡ÿ ïþ¢ÿÚ_ÛòáýŸþ›ñÅÕ8«ûGÚ`a‚ú‡ÔíÉ^Uý§Ö¾µ{óR§ËÉõuksss½­¯õÒOèïƒðýKú§⸗ýkÿXý§ÖrªYgÔ¿°¿°¹9=–;íþ³ý³.noe콄mÏí'í'ü@éð·þ’'ãÿüGÿó௷?—þ tø[ÿIñÿþ#‡‡ùðPÿ:|-ÿ¤‰øÿÿÃÿüø(ÿˆ>ÿÒDüÿˆááßþ|ÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸñ§ÂßúHŸÿñ<;ÿÏ‚€øÓáoý$OÇÿøŽÿçÁ@ü@éð·þ’'ãÿüGÿóà þ tø[ÿIñÿþ#‡‡ùðPÿ:|-ÿ¤‰øÿÿÃÿüø(ÿˆ>ÿÒDüÿˆááßþ|ÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸñ§ÂßúHŸÿñ<;ÿÏ‚€øÓáoý$OÇÿøŽÿçÁ@ü@éð·þ’'ãÿüGÿóà þ tø[ÿIñÿþ#‡‡ùðPÿ:|-ÿ¤‰øÿÿÃÿüø(ñoþ ÿ÷ü/ÿ‚«ÿÃCný£ü{ð#þ7ü*lmð?‡¾'ÂUÿ Cþfzë°ÿ°ÿá]ÿÔ_ûKûcþ\?³ÿÓ~#ƒxº§hûL 0_Pú¹+Ê¿´ú×Ö¯~jTùy>®­nnnwµµþ£úIýð~©TâœWÿ­ë´úÎUK,ú—öö''²Çc}¿Ö¶eÍÍì½—°¹ý£äý¤ÿˆ>ÿÒDüÿˆááßþ|öçòàÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸñ§ÂßúHŸÿñ<;ÿÏ‚€øÓáoý$OÇÿøŽÿçÁ@•¿ðToø ¾£ÿˆðÃKÏüañ_Ç¿„Ÿµh:ÿÄ ¾[øçÀ~6³Ó´ö°ðÕɳñ'‹4Óqâië–æòÞ{©¼1âuûçñ¨WÂæ¹wì½¶Š…FIºj¶ñq§Uë±4ç%¼£F¦ü§úgôÍr¬ÿ€ÿÒDüÿˆááßþ|ÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸñ§ÂßúHŸÿñ<;ÿÏ‚€øÓáoý$OÇÿøŽÿçÁ@ü@éð·þ’'ãÿüGÿóà þ tø[ÿIñÿþ#‡‡ùðPÿ:|-ÿ¤‰øÿÿÃÿüø(ÿˆ>ÿÒDüÿˆááßþ|ÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸ~-ÿÁ ¿àÞÿ…ÿðUøhmß´~†ÿ…M¾ð÷ÄïøJ¿áhÂÌÏ]cáïööü+¿ú‹ÿilˇöúoÄpoTâ¯íi† êS·%yWöŸZúÕïÍJŸ/'ÕÕ­ÍÍÎö¶¿ÔI?£¾ÀOõ/êœSŠâ_õ¯ýcöŸYÊ©eŸRþÂþÂääöXìo·úÏö̹¹½—²ö·?´|Ÿ´Ÿñ§ÂßúHŸÿñ<;ÿÏ‚¾Üþ\øÓáoý$OÇÿøŽÿçÁ@ü@éð·þ’'ãÿüGÿóà þ tø[ÿIñÿþ#‡‡ùðPÿ:|-ÿ¤‰øÿÿÃÿüø(ù²ý°?`ÏÙÃöÖfÚåþ)øGø{âA-ÿÒDüÿˆááßþ|ÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸñ§ÂßúHŸÿñ<;ÿÏ‚€øÓáoý$OÇÿøŽÿçÁ@ü@éð·þ’'ãÿüGÿóà þ tø[ÿIñÿþ#‡‡ùðPÿ:|-ÿ¤‰øÿÿÃÿüø(ÿˆ>ÿÒDüÿˆááßþ|âß´—ü§ð·ö|ý¾>||·—¼\~|ø§ñ|xPüðî‚eñoüþ ø]ÿ=ý¼gññ¿k|>øÓâ/„ðŠ/Ã|I€Ð< ðçÆðÿnx ìŸkÿ„ÿû7û'ûçÈþÉûgö”ßnû-Ÿ‰Â&ËkãêacƒtqÕ0Š”*ºÊJ 5n~gN›M¼C/+·*wÖËõ¤Gƒ8ox×+á<._ˆéæ<-‚â)cq}<¶t§‹Í³¼µáU x¼dgG(eUÕ‹“¯({4©©Kôïþ tø[ÿIñÿþ#‡‡ùðWÔŸ‚‡ü@éð·þ’'ãÿüGÿóà þ tø[ÿIñÿþ#‡‡ùðPÿ:|-ÿ¤‰øÿÿÃÿüø(ÿˆ>ÿÒDüÿˆááßþ|ÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸñ§ÂßúHŸÿñ<;ÿÏ‚€øÓáoý$OÇÿøŽÿçÁ@ü@éð·þ’'ãÿüGÿóà þ tø[ÿIñÿþ#‡‡ùðPÿ:|-ÿ¤‰øÿÿÃÿüø(ÿˆ>ÿÒDüÿˆááßþ|ÄŸ é"~?ÿÄpðïÿ> ?âO…¿ô‘?ÿâ8xwÿŸ`~ÛŸðC¯ Á?à…ðWïøF¿h¯|}ÿ†ÿ†ûoöïÃ;á÷ü"ð§ÿl? 
ý›ì¿`ñ‡‹?µÿ·áiOçù¿`ûö4>_Ú¾Úÿfþ÷( € ( € ( € ( € ( € ( € þ.¿àПùÈWýÚoþü½~1áüÔ?÷IÿÞ™þ™~Ñ_ù³ß÷÷È?´Zýœÿ3B€ ( € ( € ( € ( € þ.¿àПùÈWýÚoþü½~1áüÔ?÷IÿÞ™þ™~Ñ_ù³ß÷÷È?´Zýœÿ3B€ ( ƒ?à©>ð·?àœ·>›âïhþ%°Òÿe¾.Ólõ½>×R·°ñO‚¾x£Å~ñ œWQʶÚφ¼K¤izBëOÔì-o-¤Žh•‡ƒÅiVáÌö5©Â¬a”ãëEN*J5hajÕ£Q&§N¬#RZÆQM;£õŸó~YãO…U²ìf'Z¿ˆ#—W©…­R„ë`3<û—æ8:’§(¹á±¸M|.&Œ¯ Ô*Ô§8¸É£ñûþ Bÿ”w|fÿ³Ñø‰ÿª;öt¯ð§þIÜoýޱúƒ—Ñ¿´þO' ٳɿõ©ã3út¯ÓOá € ( € ( € þ.¿àПùÈWýÚoþü½~1áüÔ?÷IÿÞ™þ™~Ñ_ù³ß÷÷È?´Zýœÿ3B€ ( Ã?ø8ÃÀ¾ñüöñŽ¿àï ëž.ðÏÁ‹ßø§Xðö‘©øÁ7šÿíðƒÃºíß„u»Û9õ? Üë^Ô5 VŸFº²—Rѯ¯4ËÇšÊæhá¼E¡B§ fUªQ¥:Øw‚• ³§ T êf8:uÉ9Ss§)S›ƒ‹”$âïÑýQô2ÍsLÒ‚òÜ&eÂåÙ¼8š–m€Ãc10Y¥<q7 O1ÂÒ© Øaq”hâ°ðĬhâiS¯MF¬#%ÔÿÁ¼ÿò‡ÏÙþë÷þµƺ×Ãßù#òû¨êÓp}1?å#|Eÿ»GÿXN?hëíæ` € ( € ( €>.ÿ‚‘Ê;¿ooû2ïÚ“ÿTwŽ«ÅâOù'sïûæŸúƒ\ý3Á_ù<žÿÙÍà?ýjr£ñsþ Bÿ”w|fÿ³Ñø‰ÿª;öt¯Œð§þIÜoýޱúƒ—Ó_´þO' ٳɿõ©ã3út¯ÓOá € ( € ( € ( €?àèïùAGíËÿvÍÿ­‡û>ÐïõP@P@P@P@P@P@Å×üÿ9 ÿ»Mÿß—¯Æÿ±.iÿ¨5ÏÓ<ÿ“Éá/ýœÞÿÖ§*??àÔ/ùGwÆoû=ˆŸú£¿gJøÏ äÆÿØëÿ¨9qý5û@?äòp×ý›<›ÿZž3?§Jý4þ ( € ( € ( âëþ ÿœ…ݦÿïË×ãÿÍCÿtŸýéŸé—íÿ›=ÿyÿ|ƒûE¯ÙÏó4( € ü\ÿƒ†?埵çýÐýj‚•ñ~!É›ÿÝ?ÿV˜#úgèwÿ)á×ýÝßúÂq8Á¼ÿò‡ÏÙþë÷þµƺ<=ÿ’?(ÿº‡þ­1¡ôÄÿ”ñþíýa8`ý£¯´?™‚€ ( € ( € ø»þ Eÿ(îý½¿ìË¿jOýQÞ:¯‰?äÏ¿ìKšê sôÏäòxKÿg7€ÿõ©ÊÅÏø5 þQÝñ›þÏGâ'þ¨ïÙÒ¾3Ÿù'q¿ö:Äê\M~Ðù<œ5ÿfÏ&ÿÖ§ŒÏéÒ¿M?†B€ ( € ( € ( € üÿƒ£¿å·/ýÛ7þ¶ìû@¿ÔP@P@P@P@P@P@_ðhOüä+þí7ÿ~^¿ð‡þjû¤ÿïLÿL¿h¯üÙïûÈ?ûäÚ-~Ι¡@P@P@P@P@_ðhOüä+þí7ÿ~^¿ð‡þjû¤ÿïLÿL¿h¯üÙïûÈ?ûäÚ-~Ι¡@PÅßðR/ùGwííÿf]ûRêŽñÕx¼Iÿ$î}ÿb\ÓÿPkŸ¦x+ÿ'“Â_û9¼ÿ­NT~.Á¨_òŽïŒßöz??õG~ΕñžÿÉ;ÿ±Ö#ÿPrãúkö€Éäá¯û6y7þµ~ן÷@õ¨> WÅø…ÿ$~oÿtÿýZ`韡ßü¤o‡_÷wë ÄáÿóÿÊ?d?û¯ßúÔèð÷þHü£þêú´Æ‡ÓþR7Ä_û´õ„áƒö޾Ðþf ( € ( € (âïø)ü£»ööÿ³.ý©?õGxê¼^$ÿ’w>ÿ±.iÿ¨5ÏÓ<ÿ“Éá/ýœÞÿÖ§*??àÔ/ùGwÆoû=ˆŸú£¿gJøÏ äÆÿØëÿ¨9qý5û@?äòp×ý›<›ÿZž3?§Jý4þ ( € ( € ( € (ðþŽÿ”~Ü¿÷lßúسí~ÿP@P@P@P@P@P@ü]Á¡?ó¯û´ßýùzücÂù¨î“ÿ½3ý2ý¢¿óg¿ï ÿïhµû9þf…P@P@P@P@ü]Á¡?ó¯û´ßýùzücÂù¨î“ÿ½3ý2ý¢¿óg¿ï ÿïhµû9þf…P@ÁH¿åß··ý™wíIÿª;ÇUâñ'ü“¹÷ý‰sOýA®~™à¯üžO ìæðþµ9Qø¹ÿ¡Ê;¾3ÙèüDÿÕû:WÆxSÿ$î7þÇXýAËé¯Úÿ'“†¿ìÙäßúÔñ™ý:Wé§ðÈP@P@P@_ðhOüä+þí7ÿ~^¿ð‡þjû¤ÿïLÿL¿h¯üÙïûÈ?ûäÚ-~Ι¡@Pâçü1ÿ(|ý¯?î€ÿëP|¯‹ñ þHüßþéÿú´ÁÓ?C¿ùHß¿îîÿÖ‰Ãþ çÿ”>~È÷_¿õ¨>5Ñáïü‘ùGýÔ?õi¦'ü¤oˆ¿÷hÿë Ãí}¡üÌP@P@PÅßðR/ùGwííÿf]ûRêŽñÕx¼Iÿ$î}ÿb\ÓÿPkŸ¦x+ÿ'“Â_û9¼ÿ­NT~.Á¨_òŽïŒßöz??õG~ΕñžÿÉ;ÿ±Ö#ÿPrãúkö€Éäá¯û6y7þµ.ÿ‚‘Ê;¿ooû2ïÚ“ÿTwŽ«ÅâOù'sïûæŸúƒ\ý3Á_ù<žÿÙÍà?ýjr£ñsþ Bÿ”w|fÿ³Ñø‰ÿª;öt¯Œð§þIÜoýޱúƒ—Ó_´þO' ٳɿõ©ã3út¯ÓOá € ( € ( € þ.¿àПùÈWýÚoþü½~1áüÔ?÷IÿÞ™þ™~Ñ_ù³ß÷÷È?´Zýœÿ3B€ ( ÅÏø8cþPùû^ÝÿÖ ø)_âü‘ù¿ýÓÿõi‚?¦~‡ò‘¾ÝÝÿ¬'‡üÏÿ(|ýÿî¿ëP|k£Ãßù#òû¨êÓLOùHßîÑÿÖ†Ú:ûCù˜( € ( € ( ‹¿à¤_òŽïÛÛþÌ»ö¤ÿÕãªñx“þIÜûþŧþ ×?LðWþO'„¿ösxÿZœ¨ü\ÿƒP¿åß¿ìô~"êŽý+ã<)ÿ’wÿc¬Gþ åÇô×íÿ“ÉÃ_ölòoýjxÌþ+ôÓød( € ( € ( € ( Àø:;þPQûrÿݳëaþÏ´ûý@P@P@P@P@P@Pñuÿ„ÿÎB¿îÓ÷åëñæ¡ÿºOþôÏôËöŠÿÍžÿ¼ƒÿ¾Aý¢×ìçùšP@P@P@P@Pñuÿ„ÿÎB¿îÓ÷åëñæ¡ÿºOþôÏôËöŠÿÍžÿ¼ƒÿ¾Aý¢×ìçùšP@|]ÿ"ÿ”w~Þßöeßµ'þ¨ïW‹ÄŸòNçßö%Í?õ¹úg‚¿òy<%ÿ³›ÀúÔåGâçü…ÿ(îøÍÿg£ñÿTwìé_áOü“¸ßûb?õ.?¦¿hüžNÿ³g“ëSÆgôé_¦ŸÃ!@P@P@ü]Á¡?ó¯û´ßýùzücÂù¨î“ÿ½3ý2ý¢¿óg¿ï ÿïhµû9þf…P@‹ŸðpÇü¡óö¼ÿºÿ­AðR¾/Ä/ù#óû§ÿêÓLýÿå#|:ÿ»»ÿXN'ø7ŸþPùû!ÿÝ~ÿÖ ø×G‡¿òGå÷PÿÕ¦4>˜Ÿò‘¾"ÿÝ£ÿ¬' ´uö‡ó0P@P@P@ÁH¿åß··ý™wíIÿª;ÇUâñ'ü“¹÷ý‰sOýA®~™à¯üžO ìæðþµ9Qø¹ÿ¡Ê;¾3ÙèüDÿÕû:WÆxSÿ$î7þÇXýAËé¯Úÿ'“†¿ìÙäßúÔñ™ý:Wé§ðÈP@P@P@P@€?ðtwü £öåÿ»fÿÖÃýŸh÷ú€ ( € ( € ( € ( € ( € ( âëþ ÿœ…ݦÿïË×ãÿÍCÿtŸýéŸé—íÿ›=ÿyÿ|ƒûE¯ÙÏó4( € ( € ( € ( € ( âëþ ÿœ…ݦÿïË×ãÿÍCÿtŸýéŸé—íÿ›=ÿyÿ|ƒûE¯ÙÏó4( € ø»þ Eÿ(îý½¿ìË¿jOýQÞ:¯‰?äÏ¿ìKšê sôÏäòxKÿg7€ÿõ©ÊÅÏø5 þQÝñ›þÏGâ'þ¨ïÙÒ¾3Ÿù'q¿ö:Äê\M~Ðù<œ5ÿfÏ&ÿÖ§ŒÏéÒ¿M?†B€ ( € ( € (øºÿƒBç!_÷i¿ûòõøÇ„?óPÿÝ'ÿzgúeûEæÏÞAÿß þÑkösüÍ ( €??àáùCçíyÿtÿZƒà¥|_ˆ_òGæÿ÷OÿÕ¦þ™úÿÊFøuÿwwþ°œNðo?ü¡óöCÿºýÿ­Añ®äÊ?î¡ÿ«Lh}1?å#|Eÿ»GÿXN?hëíæ` € ( € ( €>.ÿ‚‘Ê;¿ooû2ïÚ“ÿTwŽ«ÅâOù'sïûæŸúƒ\ý3Á_ù<žÿÙÍà?ýjr£ñsþ Bÿ”w|fÿ³Ñø‰ÿª;öt¯Œð§þIÜoýޱúƒ—Ó_´þO' ٳɿõ©ã3út¯ÓOá € ( € ( € ( €?àèïùAGíËÿvÍÿ­‡û>ÐïõP@P@P@P@P@P@Å×üÿ9 ÿ»Mÿß—¯ÆÖgb-ÚÂ<»cGG’pblA[xýÁ  ïøLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE 
þ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€?oø4ÿZÔ´øoì럳ý£þkÎýͼ»ü¯øhÏ/ý|RíÛæ¿ÝÛß6p1øÇ„?óPÿÝ'ÿzgúeûEæÏÞAÿß þÂá3ñ/ý¿òNÃÿ‘kösüÍøLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘høöÿƒOõ­KHÿ†÷þιû?Ú?á–¼ïÜÛË¿Êÿ†Œòÿ×Å.ݾkýݹÝógŒxCÿ5ýÒ÷¦¦_´Wþl÷ýäýòì'þ?ÿÐKÿ$ì?ù¿g?ÌÐÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hãßø(o‹|AsûþÜ–ójá¸ýi˜eO²Y.ø¥ø-ãd‘w%²²îF#r°aœ©¼^$ÿ’w>ÿ±.iÿ¨5ÏÓ|ÿ“Éá/ýœÞÿÖ§*?¿à×_júWìñ~ÞÂïìð¿í…ãù™>Ïk.eo‚ß³ò3nše#A´0^2I'ã<)ÿ’wÿc¬Gþ åÇôÏíÿ“ÉÃ_ölòoýjxÌþ‘ÿá3ñ/ý¿òNÃÿ‘kôÓød?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€?oø4ÿZÔ´øoì럳ý£þkÎýͼ»ü¯øhÏ/ý|RíÛæ¿ÝÛß6p1øÇ„?óPÿÝ'ÿzgúeûEæÏÞAÿß þÂá3ñ/ý¿òNÃÿ‘kösüÍøLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€?¿à¾^&×5ø$Çí_gy{ç[Mÿ +ÌìÖ‘îòÿi_ƒ’§ÏºH1"+|¬3Œ‚Aø¿¿äÍÿîŸÿ«Lý3ô;ÿ”ðëþîïýa8œ?à¾&×4ÿø$Çì¡gg{äÛCÿ ×ËìÖ’mó?i_Œr¿Ï-»Ès#³|Ìqœ <=ÿ’?(ÿº‡þ­1¡ôÄÿ”ñþíýa8`ý…ÿ„ÏÄ¿ôÿÉ;þE¯´?™ƒþ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZ?á3ñ/ý¿òNÃÿ‘hÿ„ÏÄ¿ôÿÉ;þE þ?ÿÐKÿ$ì?ù€øLüKÿA/ü“°ÿäZø÷þ âß\þÀ?·%¼Ú†øn?cßÚfSì–K¾)~ xÙ$]Él¬»‘ˆÜ¬g*AÁ¯‰?äÏ¿ìKšê sôßäòxKÿg7€ÿõ©Êǯø5׾•ûü_·°»ûrÑ4ø(Å_x_â_Ã~ǾðO¼9¤xÃÃ>ñçƒ~0ü@ñv“ ø’Ñ5ø£Â¼á»Ím´;Í6MjßCѦÒ4í]¯ì4½oÄ:u½®·|¡ÿ þ uÿEKöÿÃ3ûAóò þüëþŠ—ìÿ†gö‚ÿçå@™ðNø"ÿí»ÿÚÿ…Åÿ ¯ö“ý•¼wÿ Ÿþ÷öïü,ƒŸû+þ×ü&ÿÙÙðŽ|TÑ1öïøNµ·ý³íYû—ÙüœOæü¯ ðž …þ»õLN+õï«{O¬û/sêßXääöTéü_X—75þÚÚß÷Ï~|Mã·ú¯þ±d¹Oþªÿm}Oûf ëÛŸÙ?XúÏ×±˜¿àÿcÐö>ËÙÿ¯??¹Ëúiÿ þ uÿEKöÿÃ3ûAóò¯ª?&¶øIÿ1[ˆïâì+-¨–3s·ÁïÐ\IpeH&—ãeÄpÊɸG,–ó¢9 ÑHBÝüøŸ/ÄÝÄcV°Ó´Ÿü9ñï‹~|IÒ´MMµ¿Øü@ðúé¾ ÖfµÓï5?_™-µmçTÒô}hiZ…¬:æ¤kßiÖÀ¿@P@P@P@ xGWý°ÿhÿ i¿g ö^ðgÂ?_x‚AñwÂÿü{ã]sHðþ·yá¯øI59| ã¯øÃÃYÕtZóOðüâ+›]M}WU³Ö®uMHßÿ…Gÿ:ÿ¢¥ûÿá™ý ¿ùùPÿ þ uÿEKöÿÃ3ûAóò Ì¿ø'GüöÝÿ‚mÂâÿ…WûIþÊÞ;ÿ…Ïÿ ûûwþÁÏ‹ý•ÿ ëþì¿ìøG>*h˜ûwü'ZÛþÙö¬ýŽËìþN'ó~W†xOÂÿ]ú¦'ˆú÷Õ½§Ö}—¹õo¬rr{*tþ/¬K›šÿ mmoûçŽ?H>&ñÛýWÿX²\‹'ÿU¶¾§ýгõíÏ쟬}gëØÌ_ð±è{eìÿ‹WŸŸÜåý4ÿ…Gÿ:ÿ¢¥ûÿá™ý ¿ùùWÕ€øGÿ9ÈÏÅ/ØCç¿h,ã¾?âùu   |n½Ð`øïá¿+á;Çß³-®³ñNïÀ7×þÔ¼/«x2Oˆ^ñO†àÖÌö˜u Åq©á½wu֯隕ž¯â é>%Õ€9íAÿ‚|Uðç…þ%ü8ñ'ì{àOøûÚGŒ<3á_x7ãÄi:‰-XÐañŠ<'ñÁ~¼ÖÛC¼ÓdÖ­ô=m#NÕÚþÃKÖüC§[ÚëwÀð¨ÿà§_ôT¿`ÿü3?´ÿ?*ó/²—ü‹ã‡Á¯‹_|Añ‹ö!Òt‹ÿ ¼yð»[Õtoƒ_¯¦i<+ªøORÔ4£{ñ®êÈjVVZ´×6&îÖæÔ]GÚ š-ñ·.? 
~©)Bž7 ˆÂTœ-ÏbhÎŒ¥dãÍ͸ó&®•ÓZÿ q'„¸§†¸¯B†'Ã9þOÄ8L6+Ú}[‰És6eB†#ÙN_aV®ë{*”ê{9K’q•¤¾eÿ‚~ÿÁ,?oOø'‡Á¯üøeûAþÈž6Ð|Oñ7Yø£wªøïà߯VÕíõ}k¾ ðÆŸl|?ñsF²þÍŠËÁ60‰mdºûUÝæùÚ/&8¼®áì/ `jàp•ë×§W<\§ˆö|êu(Т⽜!Uj黹kk%÷Þ4xÅžxÛÅ8+Ïò¼§)Æeù‡©a²uŒXiá°™Žk™B½O®âqU}¼ªæÕ©Ë–¢§ìéÒ´¹å/¹ÿáQÿÁN¿è©~Áÿøfh/þ~U[|$ÿ‚˜­Ä wñ?ö–ÔK¹ŽÛà÷Çè.$€82¤Kñ²â8edÜ#–Kyцh¤¡îþ |O—ânâ1«XiÚO~ø÷Å¿ ¾$éZ&¦Ú߇ì~ xýtß ë3Zé÷šŸ‡/Ì–Ú¶ƒsªiz>´4­BÖsFÒ5ˆo´ë`_ € ( €>@ðޝûaþÑþÒþ/~Îÿì½àÏ„~&¾ñƒâï…þ'ø÷ƺæ‘áýnóÃ_ð’jrøÇ^ ðÿ‡†³ªéµæŸáøÄW6ºš<ú®«g­\êš¿ÿ þ uÿEKöÿÃ3ûAóò þüëþŠ—ìÿ†gö‚ÿçå@™ðNø"ÿí»ÿÚÿ…Åÿ ¯ö“ý•¼wÿ Ÿþ÷öïü,ƒŸû+þ×ü&ÿÙÙðŽ|TÑ1öïøNµ·ý³íYû—ÙüœOæü¯ ðž …þ»õLN+õï«{O¬û/sêßXääöTéü_X—75þÚÚß÷Ï~|Mã·ú¯þ±d¹Oþªÿm}Oûf ëÛŸÙ?XúÏ×±˜¿àÿcÐö>ËÙÿ¯??¹Ëúiÿ þ uÿEKöÿÃ3ûAóò¯ª?ðþ s‘ŸŠ_°†3Î> ~ÐYÇ|Åòë@ZŸí3sð·Âß­~4龟â_ìÝsàëØü8Ô¼+â×øg¥Þü/½ðÄþ+žÆÿçÅ-­Øhú®•â™Gü#:í¾¨±jþ"ðôZ_ˆõpžóáWü¶êá®4ÿ~Ã:%¬é ©£Þ|7øûâ›1ÞÚâżIÄÿÇ®%µÁ–(uAá_ØV9ßG°wkhÀ*ÿ£ÿ‚ÑRýƒÿðÌþÐ_üü¨æ_Ûöÿ‚…þÙÿ³Ä_Ù«Ç¿c/ øSâOü"?Úº÷„¾ üo_ØÂã¿ xúÇû<ë?õ=4}«RðµßÚlgÿA¸¹ùSùSGågyMó,ÄåxŠ•hÑÅ{z”y=¤}†"–"<¼ñœu•ÅÞ/ÝnÖvkï¼/ñ3ð¯Žr><ÉðXÇ1È¿´þ¯ƒÌ–!à«jdù†M[Û,-l=ÝÐÌjÕ§ìëC÷ЧÍÍhÈýŽ¿`ø(_ìaû8ü:ýš¼ ñËö2ñW…>Â]ý•¯x·à߯öñ ÿü&>;ñ?¯¿´ñLÓOÙu/ÞYÚ}šÆô{a7›?›4†I”ÑÈòÌ6W‡©Vµ/¶ä©[“ÚKÛâ*â%ÍÉGIV”U¢½Ô¯wvÏ_“Qö+[_÷”2êUj{JÓýôêròÖ1úkþüëþŠ—ìÿ†gö‚ÿçå^©ð%«?…_ðRÛYÖçPñïì3®ÚÀ’ÊÚ5ŸÃ¯¾ŸT‘"v‚Á|M?ÄÏÇ ¥Ôâ8eÕ„¼FÖ;ܦ‹¨¼kk(¤|ø© ümøaàÿŠ^‚æÓIñv›%ÚXÞKgqu¦ßY^Ýi:Æ—qs§Ü]é÷RézÅ…þž÷v77wFÛí³I‘±ôê( € ç xWU𞥨iF÷ã]ÕÔ¬¬µi®lMݭͨºŽ/´A4[ãn\~ü3RR…Øøâæeý›—‚l.aÚÉuö«»Íó´^Lqy\9ÃØ^ÀÕÀá+ׯN®*x¹OìùÔêQ¡EÅ{8B<ª4"ÕÓwrÖÖKï¼hñ‹<ñ·ŠpWŸåyNSŒËò /RÃdë°ÓÃa3×2…zŸ]ÄâªûyUÍ«S—-EOÙÓ¥h)sÊ_sÿ£ÿ‚ÑRýƒÿðÌþÐ_üü«ß?"&¶øIÿ1[ˆïâì+-¨–3s·ÁïÐ\IpeH&—ãeÄpÊɸG,–ó¢9 ÑHBÝüøŸ/ÄÝÄcV°Ó´Ÿü9ñï‹~|IÒ´MMµ¿Øü@ðúé¾ ÖfµÓï5?_™-µmçTÒô}hiZ…¬:æ¤kßiÖÀ¿@P@P@Õø_Ä’è·K ÎϦÎàO%„ N>Ó óµ—þZªÞÆ0Atˆ¨ã¯üÊßðBoÛ‘”†V³++)YOí…û>AG ŽäPïýP@P@P@|ýâ}Aµ-nöbÛ¢†V´·ʈ-ÙôYÌŸÞ•ºtP@P~Õ¿òkŸ´Ÿý/Œ_ú®üE@Æ›âïü?ÿ‚jXxóÂ:‡öOŠüû Zø»Ã¯Ù,oÿ³ý‡¾|1ý­|?ðCãv½ÿ·øãÿøõûAj?³×Á߈·Å}gÂߴψÿgÿ† í¼ èŸ|áé/´{â½_ÂÞ²ÖgÓBý‚k;å{‹ èûþ wûXx‹öåÿ‚}þʵoŒ´7Dñ§Æ/…Zfµãm;E†kmñ}Ö‰iq5ÍÅŽ‹©x“@Ôõ-"Â{»É¬4ë»k9oo¹”Ác¯ù ~Úö´—þœ<9@hP@P@P@PŠÿÁ*?äÀ?goûøÛÿV‡èùRý–ÿ઴OǯøLmþ7ÿÁw>:þÍ_öø³ð³Ã| ÿðÇ é~о#ßøWÀ.>.ø_öMñ…®&Ô¬E¢ßjx™®´–BúõÔ/çM@¦¿ðPCþ Ÿðkþ ûüøOÿ`ñ¯‚þÁB>;þÐÞÒ¼.¿±·ìwâøg¯ |<ð2üGð߇¼;­øŸá®¯âˆikä>mSÅÚ´µÅ­¢êw·ww³Jm‡üÞø'çüÓãçìÿ#ýµ¼UãÏ„ŸdÿÙ·Vøu☿e›íWÆ|IýïÄjZ?ì«ðKÄúLJ^°ûŸØº½ä> °dò|9impfŽ€>Wý™à¬·Ÿí«¢þÊþ~Ôz|}ûrÁGࣞð_íOª~Ï¿ µÝoá—ì—û|1Ñ>.øcÂÞøE¯øG¼ñ?ˆthºTÏÄ _x®Æ1v%øåû>þÒÿ´/ì¡ñ7Ç~ðý·„´/‰ÿÀŸ\øzÏÇö~²–}?Óø“÷Z%ί¥i’ÿeE®&§6—oa§Ïm¦Ù€|÷ñŸþJŸü{þÈ7ÃýeÐÞšo‹¼Cðÿþ ©aãÏêÙ>+ðOì5kâï j¿d±¿þÌñ†þǬ躇Øu;kÝ6÷ìZ••µÏÙ5 ;»Ÿ/É»¶ž’'þf?à–ðR/?µV¡û\ü[ÿ‚æüuÕ~3|Sñ/ÃÝCâìükÁúwÃÿß/ˆb½ñÂûKøöVѼ%£è^+Ñ,.tþ$è~:ÓWE‹T“PÒõ«mFÎ éoXÁ^ôOø*×Ãoø'­üsâ~ø“ûxóö¨ÿ…‚a_ØuõûÂÿÀ6ž·Ðåø[%½þ-œ¢õõ«V;Ñ,b#k f–€3þÁx|û8ünÿ‚¬þÏŸ·oícâI¼wð7ã–µà/Ù=tÿÙoÆ>#K? 
hß mvO}©~Ïß3ºø'àü]ø¿âÿˆŸµø5‡4 ã¶_þøKðòjZ¶±¢øUñ~·®YéöPð´)©kZ˜öK@‘ÿ±×ü†?m?û?ÚKÿN ´( € ( ÿ‚TÉ€~Îßöñ·þ­Ðò¥û-ÿÁT?hŸ_ð˜Ûüoÿ‚î|uýšþ*¿í!ñgág†þ øþ !à/ŽþÒü;¡|G¿ð¯€\|]ð¿ì›â \M©X‹E¾þÔñ3]i,…õë¨_Κ€?Mà¡:‡ü?à×üöø#ðŸþ Áã_ü.ÿ‚„|wý¡¼7¥x]coØïÄ?ðÏ^øyàeøá¿xw[ñ?Ã]_Ä?ÒÖ È|4Ú§‹µh5k‹[EÔïnîïf” Ûø+¼ÿðOÏø)§ÇÏÙþ Gûkx«ÇŸ >þÉÿ³n­ðëÅ1~Ê77Ú¯Œ>2ø’+û߉4Ô´ÙWà—‰õ>½aö9?±u{È|`ÉäørÒÚàÍ|¯û2ÿÁX?o?ÛWEý”>ü$ý¨ôÿøûöäÿ‚ŽÿÁG< à¿ÚŸTýŸ~kºßÃ/Ù/ö:øc¢|]ðÇ…¼9ð‹_ð„<-yâèþ&Ñt¨5Ÿ‰¾ñ]Œbìx…gÕ7IfûÏÿzý¬¾2~Ö¿²¯‹uÚSÐ|KñËö}ý¥ÿh_ÙCâoŽü1áûo h_5ÿ><¹ðõŸìü)e,ú‡'ñ'‡n´K_JÓ%þÊ‹\MNm.ÞÃOžÛM³ø[öØÿ‘»þ ¿ÿaÏø'þ…àýÿ‚Ä~Ð?eø&_í‡ûB|ñoü Ÿþ|,ÿ„“À~.þÁðωÿ°µ¯øI¼?§ý³ûÆZ7ˆ|1ª¡ß]Cö}gEÔm?{æyjFèøãÿèý­¾)þÖ¼ àMþ ÇñûãÄ¿|ñî­{ðsÄðIüð΋âí[àö½keâ;o‹ú÷ì¿àO ê|ñÖµ£üAÑ4¿øK?±þ#]ø2ÓÃWֺLJ5ýJÎàköhðVÁR?kØ·ÆðXë?ÿc þÊ¿.u?øbOØ«O¾ø»¦|gÒî|QâOk1é l®ü#g¦™.‡¹ êóê¢ç¿·ûÜšø»Ã_ðqý·†à™_¶}‡ÅoÚ×_ƒþ [àß~Õ~øuoû+x—RÒ4˜ü7ãSKø-i6»á/Ùòóöj”iÚE¬PMuã}B}Å<ÏO%Ó4ŒúñËö½ÿ‚‘x'ÆßðD_ˆ6ÿ>xöý±>+þÆÿþ)øAøMácâwÆMWâßÁþ#|^ñGÄø“E›Ãÿt®éÙ~ð·Áx{QX5MSX¿ñ}œqé^±þ™¨ñïþ µÿ&WðWþ¹ü@ÿÕ©ãŠû’€ ( €< ö­ÿ“\ý¤ÿì|bÿÕwâ*î4ßx‡áÿüRÃÇžÔ?²|WàŸØj×ÅÞÕ~Écý™â üYÑu°êv׺mïØµ++kŸ²jwv7>_“wm<$NüÌÁ,?à¤_j­Cö0¹ø·ÿÍøëªüfø§â_‡º‡Ä/Ù:ø$׃ôï‡þ%¾_Å{â/„ö—ðÿì­£xKGмW¢X\è3üIÐüu¦®‹©&¡¥ëVÚœ(ÒÞ4°ÿ‚½èŸðU¯†ßðO[ø,çÄü'ñ'ö>ñçíQÿ þ¿°ëë ÷…þ*/€m<o¡Ëð¶K{ý:[9EëëW:¬w¢XÄFÖ@Í-gü8ÿ‚ðø;öqøÝÿYýŸ?nßÚÇÄ“xïàoÇ-kÀ_²zéÿ²ßŒ|F–~ѾÚìžûRýŸ¾xƒÂé5÷ÄÚ”‘üMÕîï  ËÿôJÄfø(¿üöéðì=ð#áíkáÿ‚µïø%¿Çø(ÇÇ¯Ú Qýž¾ü@Õ¾+ë>ý¦|Gû?ü3ø_mà{ýDøsàI}¤ ßêþðÅ–³>šìYß+Ü]GßðK¿ÚÃÄ_·/üïöSý«|e¤éº'>1|*Ó5¯iÚ,3[h‰ã]ÿQðŸ‹î´KK‰®n,t]KÄš§©iÝÞMa§]ÛYË{xð5Ì  ûÈcöÓÿ³øý¤¿ôááÊûB€ ( € ( € ( Çø9[Pkßø oíÅŒZ[ f‹L“–0Û ö{’Üû*FþBîÃø躀 ( € ( € ( €>`bX–c’Ä’}I9'ó  € ( €<›ãß…5Ÿ| øÑàoÀ—^ ñŸÂˆÞЭ¤šhî5Ÿx?XÒ4ÈâæHmàI¯o ¦¸š(b ^Y˜yf‘û@~É:ïìO¡~Ï:ý­¿gÿ„¾/ñ7ì¯cðoĺwŽþ(|?ð÷Œ~xƒQø[ÿ ÷ľ(ø{âøcÄZ_ˆü®JÃÄ׃¬iÚÞ‘¨xU:V£kt¶À–±çÃ?ŠŸ±ÏÃÏ€¿¼ÿ!þÂZßìùð-<- Yü8ºý˜ÿfkMo_ø{¡ëßj¾ˆ7_¶µ¬é—Úæœ×ú\~'k=R÷J’ñ5ìîÞÕ-ÜîßOûø‹þ kðÏþ 1ÿý”¬ÿá]~ÈÞ1ý•ÿáNÿÂÑøCqý±ÿ oĘþ!Âwÿ þôÙÿÙþ_öGü#ðƒß}«?oÿ„†Ûb _ìÙ¤~¼aÿ(ñe·üÓöJñ¯ü<ã»ñaì øðwGÿ…?ý½ðøøtºãF·ÿ ïÙö6ªº©¶ðOÚ0lF› å%$öiú4ÿ"çJ¥;{Js…ïnxJ7µ¯nd¯k«Ûk£óÏÃ_ðOŸÙcá/ÁÏÙ;ß³üöqøûA~Îß²·Å¿Ø«Ç_4ø¿géÿgo‹¿µ?І§øYâ7 àx[Åš‹jþñV›ãÍzM;SÞ^iºŒookfÈ?pdoŠðN/Ù öqøû*|"ý±¿f›ß|ð‡>xbmOö‰ø;yâ}´»uŽó]ÕþÃâ{{{x£Xšû]Õ—O³µ³}WS¹[+ko&Þ0SöFÑn-¬?hÄéwáo‹ßµ§ÇŠß5¸9±ñO€Dý‡><~Ïß³Gì­ð¿àÇÿ~üRøssñÃ/ð?ĝоð'Š4VÛâ7ŠuÌúŠ5ý3R:~«£jº?ˆü;«Çnú_‰¼)®øÅz Þ¡áí{HÔ¯?!¿g/Ùçâ'ì{áü0ý™?à䝨[áÿ¯üYøŸñnÇÂ^ ý˜?fŸˆެ|Rñ]÷Šõ›GñN½ûeG¨jQAsx ŠG·´‰–/2+;e“É@ÑoÚL~Å¿´wí1ÿèý¤µoø)o쟠j¿°W¾)øãRðì_>êQüfÔ>(ü2²ø{y z¤4˜þ&›sk&¼ŒšG’égJÁd± AÀ,|,Ôa?ÁFißÛþÛþ Mû%xƒþ'àwÁƒð–Šÿm?á ÿ…K5ìËâFñä5íÿøH¾Òʺ9ðf…ý—å5MO$*ROfŸ£Oò.tªS·´§8^öç„£{ZöæJöº½¶º?;t?ØGö;øà¯j¿?à³³'ÃOÚ‹áíÙûQþÚŸ¿h‹[Ÿ€^&Ò|iûZiðŒ|Pø1⯄>%øï¦øûÃz—…V *ïÄQøÃ—÷wVVWöÚf™\X]2ÔOø'îµÿùý‚g#àNÿýœ~+x–ÿÆŸ>+|Vø¹âŽ¿4_|Vø¿ñ_ÅZ‡Œ<{ã{ýGñ”šVƒ¡ªj c¤hv2Ý&“¡iÚ]„ú†©{ΩxËjÞºøÑâïø)ÇŒþjG|ñóÀ~øoð³ÅÚ©§ÞxSÆ^'ð¿ìñ¨x3_Þ$ŽèèÚÞ•¤x«]ƒÃ:ˆ4ëÙô+?é¾$ðÔúŠkžñŽ˜ì:Gíû$뿱>…ûü ðoü‡û k³çÀ´ð¶gðâëöcý™­5½áG}ªøEþ Ý~ØzÖ³¦_kšs_éqø¬õKÝ*KÄÔ#³»{T·pºüQsû kßðS/†¿ðQ·ÿ‚~Ê ðßöDñŸì´ÿâŸÂ —V_üHâxõ¾"‹ð 5táÒO†O‚/EÈ'P>"¶ìDØi94’m¶’I]¶ôI%«mè’ÜãÿfÝöøâïø)?‹íÿà¦_²gþ ñÄœ?>hË𵿇«àcáãxŸ5±ãæ´uc©ýŸÁF`ÂËû6Eå$ÓÙ§èî9ÂtݪBPm])ÅÅÛU{4®š¿“?9>$~Ÿ ¼Sÿéý›?àšžÿ‚ïþÅø'ð?Bð×ü&Zž£ðÓà·ŒüCñ?Ç~ øÛ}ñ§Â^/°¹—öÆÐ.|¦é×ï£è×ÞŽÿÅQj‘ésßI¬[I¬-Y'ïÇÀ/Ûà…>ø7Áÿ¿à¤±×ÇߌvCS·ñGÄï xÓàçÁí+ÆwwÞ Õ.ôOì¿…ö?|‡MЮ´ÉoŠõ†Õo4ɵœÚ>¦tûP*ý‘´[‹kÚ#Æq:]ø[â÷íiñßâ·ÃÍnl|Sàk¶V^ñf!?éÞñ"h÷·…µÈAÓ|Qá«­'ÅZÆ¡áÝoHÔï@>¸ € ( €>Dý‡><~Ïß³Gì­ð¿àÇÿ~üRøssñÃ/ð?ĝоð'Š4VÛâ7ŠuÌúŠ5ý3R:~«£jº?ˆü;«Çnú_‰¼)®øÅz Þ¡áí{HÔ¯?!¿g/Ùçâ'ì{áü0ý™?à䝨[áÿ¯üYøŸñnÇÂ^ ý˜?fŸˆެ|Rñ]÷Šõ›GñN½ûeG¨jQAsx ŠG·´‰–/2+;e“É@ÑoÚL~Å¿´wí1ÿèý¤µoø)o쟠j¿°W¾)øãRðì_>êQüfÔ>(ü2²ø{y z¤4˜þ&›sk&¼ŒšG’égJÁd± AÀ,|,Ôa?ÁFißÛþÛþ Mû%xƒþ'àwÁƒð–Šÿm?á ÿ…K5ìËâFñä5íÿøH¾Òʺ9ðf…ý—å5MO$*ROfŸ£Oò.tªS·´§8^öç„£{ZöæJöº½¶º?;t?ØGö;øà¯j¿?à³³'ÃOÚ‹áíÙûQþÚŸ¿h‹[Ÿ€^&Ò|iûZiðŒ|Pø1⯄>%øï¦øûÃz—…V 
*ïÄQøÃ—÷wVVWöÚf™\X]2ÔOø'îµÿùý‚g#àNÿýœ~+x–ÿÆŸ>+|Vø¹âŽ¿4_|Vø¿ñ_ÅZ‡Œ<{ã{ýGñ”šVƒ¡ªj c¤hv2Ý&“¡iÚ]„ú†©{ΩxóÏí à}WãtðR_ˆß ^/ˆ¾ ø•¬~Ë2xÄ^ ÿЧNñ¿ü(+Áz¿ÄÕðÚ#^ÿÂvÚXê>OøD†®5i:ç4϶øÃBÕô[0¦¿à¡~5ýˆÿnoØÓöýgÿ‚‚þÊÿ _ãÇ€áðšøúo‹? *ÁÃ_°—Æÿ€ß ´Ý?úÇÁ½;ö~ý™¾ë>.ð¾‰á™|?¡i1|SÓÿjÏêþ¸°ž=+Q›T@Õn/†%œÑ¨¾’â jøCyû |ÿ‚‚þÙŸ·¬ßðR/Ù;Z‹öµøsû?^¯‰ák£`f;зŒFsŠXÛ šj馻­PJ2„œgFKxÉ8É]]];5tÓô<Ëö–ýœ¼1ñº_؇Jðü›öøQàOØïàŒ¾ ø_RøKðKâ6¤~.|øZŸ eñLj|M/íáu]Å^v§­ÜérÛè_m‡NþÛÖ>Ä·ó²O݇߶·ì£¦x#Áú¿nïÙ3â'ô_h–~7ñž…ñoá…,üeâ}B¶Oø¿NðEŸÄ?ÿÂ+§ëš•®¡®Aá¸u½n?Ú\®šº¶£ŸÛfùóöð?‰>~Éß<)âÝ+PÑ5Ûm#]Õ®tZÆçKÕ¬m¼Sãx«JƒVÒ/£‡QѵUÒu«©hº¥½¦­£Þ™ôÍZÊÏQµºµ„ëj( € òo~Ö|yð/ãG¼;]xƒÆ þ#xSB¶’hm£¸Ö|EàýcHÓ {‹™!·&½¼‚6šâh¡ˆ1ydDV`åšGíû$뿱>…ûü ðoü‡û k³çÀ´ð¶gðâëöcý™­5½áG}ªøEþ Ý~ØzÖ³¦_kšs_éqø¬õKÝ*KÄÔ#³»{T·pºüQsû kßðS/†¿ðQ·ÿ‚~Ê ðßöDñŸì´ÿâŸÂ —V_üHâxõ¾"‹ð 5táÒO†O‚/EÈ'P>"¶ìDØi94’m¶’I]¶ôI%«mè’ÜãÿfÝöøâïø)?‹íÿà¦_²gþ ñÄœ?>hË𵿇«àcáãxŸ5±ãæ´uc©ýŸÁF`ÂËû6Eå$ÓÙ§èî9ÂtݪBPm])ÅÅÛU{4®š¿“?<|5ÿùý–>üý“¼9û8ÿÁq¿gß´ìíû+|[ýмuñóO‹öxñŽ—ñ¯övø»ñSø¡¨øj…ž(øóp¾ñ'…¼Y¨¶¯àÿi¾<פӵ1-åæ›¨Æöö¶l“÷öFø§ÿâý¿gß²§Â/Ûöi½ðWÁ_øsá׆&ÔÿhŸƒ·ž!×ÛK·Xï5Ý_ì>'··¸ñŠ5‰¯µÝYtû;[7Õu;•±²¶¶òmãå?dmâÚÃöˆñœN—~ø½ûZ|wø­ðó[ƒ›øÄší•—‡|Y£ÈOúw‡6jdmwÀÿü'àx§Áú`²V±»ñ÷ˆ]–Å5K0ÌÝjTqu¹e )Q«Vºƒºeáã(ÊŸµub靯¼DñË‚¼9ÅSËqõ1Y¾q&|«&Xzõð¥xÔÌ*WÄP¡†•EÊéáÝIb§«¡Wðgü>·þ £ÿF_ñÿOüº¯¹ÿ‰tâúd?~aÿÌGæŸñ7\ÿD×ÿà9OÿƬ!:“¡‰§«àÏÔB€ ( € æõo†> ñÝê^x—ÀÞñ5Õ™Œêž'ð7M‰å¸“ÍÔ5k;ƒga É<ì¾jDåuC#Í'&£Ü›I$®Ûz$’Õ¶ôIn)J1‹”šŒbœ¥)4£¥vÛz$–­½ÕŸŒ?િðK_‚uO‡:?Ágøí>þ­xçáGÃß„ÍàÕÒI#ºÒô _ÄZdþ"ŽÀǶmkL¶ŸA»gFÒµ=B?0ÃûVEàGg9u,ÃW/É}¿½K™Ko²i8Ô­F†ªÃóßÝ¥Vq¯?kJ›²ÎOôŸðû‡³jùV†mÄU÷+æ94pRË]tÚ>#Œ ñ^ÎÞõzå…×°­Y]¯ÿ‡ÖÿÁ4èËþ#á ð#ÿ—UìÿĺqGý2¿0ÿæ#çÿân¸+þ‰®)ÿÀrŸþxŸ–_ðMÚþ Íÿòÿ…Õý§à?Žÿ´7ü-ïøWGü%¿¾xwþø@?á<ó³ü߉Þ0ûgöÿü&±ý«oöwÙÿ±-³ö¿<}›äø_è™Å<7õëñ~Q˜ýwêßdžccõ¬|7¡VþÓÛûß ¹÷Óú ÇoÚaÀ¾5ÿª¾ÏÂ,σ?ÕŸíËÿeâ²|Wö—öÏö=½¿/ö³úŸöSöWöÜßZ©ü>_õ7þ[ÿÑÿ£/øÿ„'Àþ]WÖĺqGý2¿0ÿæ#ù÷þ&ë‚¿èšâŸü)ÿ爿ðúßø&ýÄoü!>òêø—N(ÿ¡æC÷æüÄñ7\ÿD×ÿà9OÿÆÖÁ`qô°y6ˆ2üNiŠ«Âbñ8¼m<>•z³ÂapØŒF&0thЫRq§/‡?à·ì¡û~Í~7øMûG~Íþ#ø‘ãürñ/ÄM+\ðLJ>kv¾Õ¼ðÏÃV:T×~(Ômµï Õü%®^Io fÍ!¾·’73Kp«ò^ý$‰ƒ¯”bð/À¡ “#g–SZÝ¿v6íù³ŒsIýxjóÌ%«mæå/¥Ï7eÃ|BÔ¼6ÿ¼)áj:Ň‚õŸxw@½Ô†O‹Sþ ¹Ò4ëû¨õ(×I»¶Ômìõ;=GM³ü_=Ée“bêQ§Œ¡ša!ˆ¯„§šàa_û7ŠÂF„±”puëÓ¥õªýf„jÔ§Mº°•9NœéÔŸô_ ñ8‹KW/Åd˜ùá0¸úÙgS ý±Àã牎]ˆÌ0¸jÕþ§õï©â¥BY*©Q© ±§ZZTúºñ¥ ( € å|Cðïá犅ψ|sáOj6žÒçŸRñGŽ4o\XxB²ÞÝ\_kzí´‘iº]¢››ÉŒ—Aý¢vy¯WJ•Zõ)Ñ£Nu«UœiÒ¥J©R¥IµBœ œ§9I¨Æ1NRm$›2­ZŽ\F"­*(SZÕëT…*4iS‹•J•jÔq…:pŠrœç%Å7&’¹ø™ñ[þ çÿ³økãoÁ>ø«üd±Ð&û ß|ðßá5Ÿƒ5»ø‹-áðãø’ëKÕ5 2ÞQäê›ìµ­q¦Ëy§½µíÏîO€\a˜à(c1X¬¯)«^<ÿQÆÏ,]?ƒÛLJ­NI-]/i)ÓVE œÐó^{ô¨ðÿ)Ìñ9~ ç´pÓöÚym,pŠ‘ºŸÕe‹ÆP­Z”_º«û(Ó«g:.¥' “ó¯ø}oüGþŒ¿â7þŸ?ùu^üK§Ðó!ûóþb<ø›® ÿ¢kŠð§ÿž'å—üGö„ÿ‚sÁ<¿áuiøã¿í ÿ {þÇ‘ÿ oÁïÞÿ„CþøO<ßìÿ7âwŒ>Ùý¿ÿ ¬jÛýöìKlý¯Ïfù>ú&qO ýzü_”f?]ú·ñá˜ÃØý_ë èU¿´öþ÷ÃnE½ôþ‚ñÛö˜p/꯳ð‹3àÏõgûrÿÙx¬Ÿý¥ý³ýooËýŸìþ§ý”ý•ý·7Öª—ßýMÿ‡ÖÿÁ4èËþ#á ð#ÿ—UõŸñ.œQÿĊïÌ?ùˆþ}ÿ‰ºà¯ú&¸§ÿÊùâ/ü>·þ £ÿF_ñÿOüº£þ%ÓŠ?èyýù‡ÿ1üM×Ñ5Å?øSÿÏÑm?ààïØ²ÂÖÚÆÇàíeeeo ¥¤ ­­m-m£Xmí­­áñjC¼¢E 1"Gh¨Šª þ%ÓŠ?èyýù‡ÿ1üM×Ñ5Å?øSÿÏ…Ô?à·¿ðN ZúïSÕ?cߊ:–¥q-Ýö¡¨x7àuåõíÔîdšæîîã]’{‹‰¤fyfšG’G%‰$ÑÿéÅô<È~üÃÿ˜ƒþ&ë‚¿èšâŸü)ÿç‰Oþ[ÿÑÿ£/øÿ„'Àþ]QÿéÅô<È~üÃÿ˜ƒþ&ë‚¿èšâŸü)ÿç‰ñoü;þ =û þ×±çÅÿÙïáGì×ã/†~?øƒÿöõŸ|#M7Dÿ„Sâ‚|oª}¥¼;­®²?´´_ j:D?c#3ßÄ.?ÑLõàq?Ñ‹ŒóŒ—`8‡‡(bñVöUkUÍiS±ÆaëÔæ©G.«R<ÔéN+–œ¯&¢íä¿Wð?éãá_†þ(pÇq?ñžq‘äßÛ_^˰8^Åâ±Ú<=›eXoe‡Ì3œ.§±Æc°õçí«Óä§Js§ÍV0„ø'ü{öý‘ÿcÏ„³ßÅÙ¯Æ_<ðûþÿíÿèÞøFún·ÿ _ÅxßKû3x‹[mdÿfè¾%Ó´‰¾ØN'°”[ÿ¢ˆ(á£äù.ÇñWÅáþ³íjÑ«šÕ§/mŒÄW§ËR¶]J¤¹iÕ„_58ÚIÅ^)Iž8}<|+ñ#Å'ãNà~3Éò<çûê9v; ØLVû;‡²œ«ípù~sŠÁÓöØÌ"¼=zœôêÂu9jÊpÚ_ðúßø&ýÄoü!>òê½ÿø—N(ÿ¡æC÷æüÄ~QÿuÁ_ôMqOþ”ÿóŧÿÁoàœMõ¦§¥~Ç¿´ÍJÂâ+»COðoÀë+ë;¨d‚æÒîÛ]Ž{{ˆdU’)¡‘$ÕY0ø—N(ÿ¡æC÷æüÄñ7\ÿD×ÿà9Oÿ |ŸÆ^ñ¾xC⮥á;OøóÀ—C²Õ~#éÑ5-Vú/G«ø›ÃÚe®±~m¢Úµ…õ¤'KÔô}CRü›ŠxR· cq7˜a3¨Ö¥…Ì1yd13Á`±õã^¥,¾¶&µPxÇK ^¤©Sæöj•HJ^ÖXSýׂ8ë Æ¹v0YV? 
y–¾;)Àg50tó,Ë+ÂÏ F¾m‡ÁáñêG.|n•ò꾿þ%ÓŠ?èyýù‡ÿ1ÿuÁ_ôMqOþ”ÿóÄòÏŽŸðV?ø&·Æ‚_¾Aû*|Rðdÿþ|AøkŒ4ï‡ou K㯠jþĶ6kâ}5®ï4'Õ©klºž{Tˆ^Ú—óãäÇý¸£Æà¿Öއ×0˜œ/¶‡ö‹•¬Q/kõHÞTùùÒæÚK™n} }5ø'…¸¯†xðgæk‡8‡%Ïž[ˆyE:ŠÊ3,6`ð5ê,ilìßÁÒj‘ÜFóë7}Š3 žçÉᯢ·ðö® ñVM˜{\]LW¶­Æ=½’_W«îÇØs§Ìµ›\º]þ…ãoíàoø¯/âz~g °<=„È^[–WÊ14+¼.e›f^ES•j«4XyCØÊÐÃS—´|ܰûëþ[ÿÑÿ£/øÿ„'Àþ]WÐÿĺqGý2¿0ÿæ#ñßø›® ÿ¢kŠð§ÿž#“þ aÿÓÖHÿc/‰èÁ‘ÓÀßUÑ”ä2²ë@«È ‚"ø—N(ÿ¡æC÷æüÄñ7\ÿD×ÿà9OÿŒ<1Ì89Q§ŠÍòÌË0¯F¾*^Ymld0J5kâó ñžœ(`ðô¨Ô”ªÔšæäŸ³Œ£J´©þ“À4å> Ë[çYFS…Äap53¼êyn.©šcëÑÂà2œ,éã*ÔÅf8ºøŠ0†Œ%Éí){YBUðЭ÷Ž£n–z…õ¤lÍ­åÕº3à»$¼JÍ´*î* ¶ÎWægìå:( € üQÿƒ‘å·ÿýÚ¯þ¶Àºþ“¨ € ( € ( € (åú( € (]‘Gi©ß^Xé:6‘gq©kZæ¯yo¦èÚ6™gÜ^j:®¥w$V–Vv¶ñÉ<óÏ*$pÇ$ŒB#0ºTª×©Ns­Z¬ãN•*P•J•*M¨Âáå9ÊMF1Šr“i$Ù•jÔpÔjâ1iP¡BœêÖ¯Z¤)Q£Jœ\ªT«V£Œ)Ó„S”ç9(Æ)¹4•ÏæÇþ 'ÿÆ·ÒÄ¿a-W3¸Ñ¼cûMÒg WÚÁ‹I6ùÝn~ ßF׿y¼)e‹Gñ{PøsàrJ†uÆÔ®ýÚ¸^oEÖÍä¾/æú…7o…bêJõp‹ø£Åï¤Ä›Åpç†õùcïÐÆñb^ô·Jy %ð­ãý«V.OÞ–”mCÿ—mGQÔ5BÿVÕ¯ï5MWT¼ºÔu=OQºž÷PÔu ÙÞæöþþöåå¹»¼»¹–K‹««‰$žâyY]ävcý;N:4áJ”!J•(F:tãS§NQ„!¥BJ1ŒRŒb’I$U«V½ZµëÕ©Z½j“«ZµYÊ¥ZµjIÎ¥Zµ&ÜêT©6å9ɹJMÊM¶ÙJ¬Ì( € ( € (ôö%ý޼ñCñGíEûUkz—ÃOØ·àüàø«Ä±k¯|cñ¤E%Ò¾ ü-F–ÍSÄzñu­GHŽãûN ntË›˜õ=7óÞ3âì^_[ Ǧ|:øCðÏCøËào<3¶? ü*ø{§þο­´ý"ÈG ©k—–¶V2x‹ÄAúͼ[Áa£éúN•að|sÂXNðŸŠà«ÔÌ3lʶQÏ3œN¸¬ÓSˆ2ÉT«;¹{:”æ°ôš§JR•JÕ*Õ©ú‡†\yŽãŸ¸¤°Ôr¬‹'Ãgùw pöË’åt¸S9…*íªØš§Mâ±RŠ•iÂ1„iaéP£Kú¼¯ã“ý ( € ã>(|Nøcð+áî±ñkãoô‡4ÿL×µ©±>¥vRG·Ñ<7¥Ä$Ô|C¯ê)#°Òt›[ËÛ‡I Vòù2(õr\“5âÂŽY“à«c±µß»JŒt„JUkTv§BŒ.JÕe pV¼®Ò~q.GÂyV#:âÇ–eØeïׯ+J¥F›… =(Þ®'VÍRÃÐ…JÕ|°i6¿_ø(·üâ‡ípšÏÂ/ƒvú·Á¯ÙäšÖçA†éañçÅKpLbóâf¯aq$qi1¨’?é2èÑuÍCÄÏšúoö_‡~e\ ¨æy§²Í¸‰%8×qæÁe³ß—/§R)Ê´^Z*«I{ xdê*Ÿçw‹~?ç¼,FK’{|‹„[•9ac>\Ç9§{sfµ©IÆy­VYBr ›Y«Œ”i:_‹µû!üòP@P@PìçÀO„Ÿ?àŸ <-ûm~ÔÞ±ñ7í âÔ‡]ý‹¿e¿ùoxt¿Ú+âΗÿiYxKB½ÿNð^ƒ~šUÆ·cå¤ßiº°Õ<9ù{šæ{šâx3†13ÃdVèq‡a­+)s*œ?•U’ösÅW‡¹Œ¯V4a9Bk–)b? 8g#ʼ.Èð^"qžž3ŠqÊ8Ÿ¸3xÝÅÆTx¯;£í©àpÕ?y—᪪2ÄU§—<éVÂ~¡Á~,üAøëâ?ø)Å¿Šž$¼ño¼u«þ̚߈õÛÑosu-÷Ç(¢‚ÚÚÝ"´ÓôÝ>Ò+}?JÒìa‚ÃLÓmml,`‚ÖÞ(—òϲ¬I—pU–aá…Àà©q =]¨Åa7)JMÎ¥J’r©V¬Ü§V¤¥9ÊR“oöÿ¢¶yšq&mâ¦ybêc³<Ê¿ â1xš–Ns’âhÆ0„TaJ(F4¨Ñ§Ò£J¥N1„"—ô_ͧö P@æ>8üý˜þ_ü]øýã{ø"̼lrwâjþSËoáÏxz-×úþ³våKx­”n¥©ÜYé6w÷ÖžïpÞsÅ9<¯$ÁTÅâgiT’÷(a©^ÒÄb«Ë÷t(ì¦ù§+S¥•e rùŽ-ãà|¢®uÄ™,ãJßÅckò· . Þâ±3¶‚å§jÕçJ„*U‡ñ¯ÿÿ‚²|`ý¶/o<á8¯~~Ív7xÒ~iwßñ7ñµ›}®½ñSX´qý»¨ÈëÕ¿†í¤>ÐÝ-V8õVÈøŽóûKÃß rn ¥Oˆöy§Ê?½ÌjC÷XG(ÚTrÚSW£›Œ±3_Y®œ®èÒŸÕáþsx±ã—ø‘Z®_…ö¹' ¹Ê)Uýþ=BW†#9¯M¥ˆ¨Ý§ 7õ,3PJ8Šôþ·Sò^¿U? ( € ( € (õköJý˜>ü.øS'íãûoé3?Á=*âh?g¿Ws3ĵ—Ä?|éðBñ\^Û|$ðýÔpIâcéZ€f±óomaºÒuËø«‰sLÏ4\ÁuRÎjÅ<û;„}¥˪YJ¤¤œa,Ö¼\–ª*´íÏhNQ«GöÎàì—&ɉ~"Гáê”x[†ç/cŠã|Ú•ÜiF.2© 5ŽÅºN[º|Õ!ЯúUÿqý§þ*þןðR¿Ú/ã/ÅÍR ½sSýþ!éz‡¥ÂÖ^ðG„´ÿ‹އàÏéeä]7Ãú,wˆ#i&»½»žóVÕ.¯umBúöãóo¸k+áO 2L£*¥(ѧÅ8:µëÕ|øœn*¦Q{lf.­—´¯YÅ]ÙBŒ)RŒ)S„#ûÑóŒs®:ñ“‰3üò´gˆ­Àù…6Œ]<]¥ŸðïÕ²ü ¿c…éK•7)Ô©*•ëN¥zµ*Kú[¯åCû”( €"Ô/tDÖ¼UâsFðŸ„<5aq«x›Å¾%ÔmtoøJ´ŒÍuªê·ÒÁgi1ÌÒÊ€pXªå†øl6#ˆ£„ÂP­ŠÅb*F• = s«ZµI»F:pRœç'¢ŒSlæÆcpyvŽÇâ°ø,”ëâ±xªÔèaðôi«Î­jÕetá«”¤’î.ÿðQoø.ÿŠmõ棪x[Ás}§Kñí ð\é7ñ¬<Ã{k„‡PðG†'RKx†æOj@Ä,!ðÌvóÜk_ÕÞø!C/ö×Ò¥ŠÆ®Z˜lŽñ«„ÂKIFy„¢åOˆ‹µ°Ð”°”õö’ÄÊIQþñ{é+‰ÍV'‡<;­[–¾z8Î&å ~>:ÆTò˜MF®_…’nøÊ‘†>µ×±Ž 1”±ÍŒÓKq,³Ï,“Ï<4ÓLí$³K#’YdrÏ$’;wrYØ–bI&¿£’QJ1J1ŠI$’I%d’Z$–‰-?%)JR”¤å)7)JM¹JMÝÊMêÛz¶õoVGL € ( € ( ·bŸØÛTý©¼W¯xƒÅþ"áGìÓð’É|QûA|wÖV84xZÓ.“¥Ít­­ã¿´cKð¶kýì·w+|úmݵ¹¶¹øÎ2âú\3…¡C ‡y§æ³xl‡$¢Ü«ãq/OkV1÷©`p×ö˜šòtà£R—4Eð÷€kqž7ŠÇbÖIÂ5ŒâŽ%Ä%6[‚åì(Êk–¾eŒ·±Áaaµæª:U!Iý±¢~ÙZgÆïÛsöýŸþxqþþÆ¿k€6_¾Ã$§Pñ÷ü.? 
ÇñkâEÜŸé߼桨ÈoTðü½í•»Ï¨ßëÚ¾³ñµ¸B¦MÁœuŸg˜…šq~w¹ìó|ÆI{<ý˜hÿ„?³uæt¯†OÂoˆÿÚž.Ò†´Öþ(ëV¾Vñ ì‘ÝCáød>ÑæX<¸u]JÍuÛì¸SÃÞ §O_ˆøw4â C÷¹•L×/öxW%iÒËiOýŒ,Üeˆ’úÍeÍyR§/aó÷Ån:ñ_ÄŠÕrü?qnI©ûœžŽI›{lr„¯N¾q^Eõš—JpÂAýO.[Fµh,L¿'?á’jÏú6_Úÿ ÇÄþfëõ_õ¯…ÿè¤È?ðñ—óAøwúÆ¿ôGñGþ#ù·ÿ2ü2OíYÿFËûAÿá˜øÿÌÝë_ ÿÑIáã.ÿæ€ÿQø×þˆþ(ÿÄ6ÿæ@ÿ†Iý«?èÙh?ü3ÿù›£ýkáú)2ü#ÿó7Gú×ÂÿôRdøxË¿ù ?Ô~5ÿ¢?Š?ñÍ¿ù?á’jÏú6_Úÿ ÇÄþfèÿZø_þŠLƒÿwÿ4úÆ¿ôGñGþ#ù·ÿ2ü2OíYÿFËûAÿá˜øÿÌÝë_ ÿÑIáã.ÿæ€ÿQø×þˆþ(ÿÄ6ÿæ@ÿ†Iý«?èÙh?ü3ÿù›£ýkáú)2ü é’øOö~ø#¦NÒYøSÊV9¼Câ[Ÿ:s®øÿÄâï|G­\\]²JÆÒ «É>ß«jÞŸp…>£‹Åã1/5âLæ¢ÅgÙÍH¥Ã†»†ŒcÒæ”`½*^7ˆ\WŒ± ]‚ŽGÁü=Eàx_‡hɺx,"²–+>i}g4Ærª˜¼D¥6¤ù#:ÚׯïÿðCÏùJ/ìÃÿu«ÿYãâÕx^4É´âOû£ÿêÿ*>Ÿèéÿ'“ƒ¿îáÿÖWµop|ð£áo‚#ÿó7Gú×ÂÿôRdøxË¿ù ?Ô~5ÿ¢?Š?ñÍ¿ù?á’jÏú6_Úÿ ÇÄþfèÿZø_þŠLƒÿwÿ4úÆ¿ôGñGþ#ù·ÿ2ü2OíYÿFËûAÿá˜øÿÌÝë_ ÿÑIáã.ÿæ€ÿQø×þˆþ(ÿÄ6ÿæ@ÿ†Iý«?èÙh?ü3ÿù›£ýkáú)2üß ’eÕ\jN5©GëR„yÔéUÂáóÍ_ÿþ'þÓ¼_ñ«ãˆdñŽ|gö»ë€†ßNÓ, Ao¥èŸæHš_‡ô;áÓô>7Ãm ½Ä×W’ÜÝOú.E‘e¼7•a2|§°ø,9aóT©9>jµëÔ²ukÖ›u*Ôi^NÑQ‚Œcùñ6sÆæ?ˆsìSÅæ9…^z’·-*4â¹hápÔ®Õ.š*“|°ç)Ô”ç/èÓþ µÿ‘köòÿ»]ÿÒÿ޵üéô“ÿš/þî/ýá×C¯ù¸¿÷hÿïÎHõüºl…øã’WHâG’G`©j]݉ÀTE™‰à '¥|1ûs~ßß ?a½M1¼1«|eøÿªéËsáo„~µ¾¹Òôs{-â—ˆ,-® ðþ’¥Í®ƒ¸ñ¼¯l¶–0iWw:þ•úGxq˜ñx׫‰£”ätªrâ3,Lᕹ_¿C.¡9Eâk^ñ•Wˆ Ô½¥IUŒpõ?ñGÅü§Ã¬,°´0˜Œû‰«ÒæÂdø8T•,?5ã¼Þ®uÄŸÚ8ÌL¹£‡ °¸šx,¿åÍ&_…å”0Øxi¢r«ZIÖÄÕ­^S«/áñÇý ž+ÿÂwWÿä:ú¯`¿è3 ÿ…ù3å?³3/úãð’¿ÿ+ø@üqÿBgŠÿðÕÿù¯`¿è3 ÿ…ù0þÌÌ¿è_ÿÂJÿü¬?áñÇý ž+ÿÂwWÿä:>½‚ÿ Ì/þQÿäÃû32ÿ¡~7ÿ +ÿò°ÿ„Çô&x¯ÿ Ý_ÿèúö þƒ0¿øQGÿ“ìÌËþ…øßü$¯ÿÊÃþ?Йâ¿ü'uþC£ëØ/ú ÂÿáEþL?³3/úãð’¿ÿ+ø@üqÿBgŠÿðÕÿù¯`¿è3 ÿ…ù0þÌÌ¿è_ÿÂJÿü¬?áñÇý ž+ÿÂwWÿä:>½‚ÿ Ì/þQÿäÃû32ÿ¡~7ÿ +ÿò°ÿ„Çô&x¯ÿ Ý_ÿèúö þƒ0¿øQGÿ“ìÌËþ…øßü$¯ÿÊÃþ?Йâ¿ü'uþC£ëØ/ú ÂÿáEþL?³3/úãð’¿ÿ+ø@üqÿBgŠÿðÕÿù¯`¿è3 ÿ…ù0þÌÌ¿è_ÿÂJÿü¬?áñÇý ž+ÿÂwWÿä:>½‚ÿ Ì/þQÿäÃû32ÿ¡~7ÿ +ÿò³ôkö@ý| §x/^ý²?mø5¯þÊ¿ ï$³ð׃núiï‰ö¢I´Ï„Þµ›ìú¯ö—Ò¯Œ¼Ug[ØÙÁek¨X kþüû‹8·SC„x.Tq¼O˜ÁOŒ-| å²²©šc¤¹©{u'„ÃM·9Ê”'χ¡Šý_8-¥—âxûÄXâ2î Êj:xL¾\ØlÃŒsˆ^TrL²ä¯õg(5ÆÓŠ:q«Ni{<^+òÿíuûZ|Fý°þ*7ÄOæx{CÐô›Oü0øiá˜VÏÁŸ þèåÓ@ð_…ìbŠÞ1oc }¨´O©Þ˜Åie†›aô¼)¹ e‹/Á:˜ŠõªË™f8–çŒÌó ¶uñ˜™·'Í9|ÔœiBÊó›IüwqÆmÇ™ÓÍs(ÑÂá°Ô!€É²ŒU<¿%ʨ]arü8Æ+–œu©UÆ2­Ròå§MR£Kõ_þ Îÿ“ÌøÛÿfwñ'ÿV§Àúü³éÿ$VWÿeN ÿU9ÙûÑ'þN>uÿdNeÿ«Þ?®:þ4?Ñ €<£ã÷Ç…_²ÇÃ[Šßµ=ZÓFýô>ð‡†4©µïˆ5Xb¦á@ѼÎûáZ¶¡>Ÿ¡iQÜ[Üjú®›iZpJõ\e4ªI8Ÿ'Æ]t&ÂâçT¾Ôÿ´¸ €¸[°ê¥,V0ÎêÓåÅgêPUR’÷è`i¹ËêxWö£Ê­mzµiŸùÍâ—Š\oân.Tkà³ «†èUçÀðþ–*T[‹~Ï™UT¡ý¡Káœá eu…¡JS­R·æü ~8ÿ¡3ÅøNêÿü‡_£ý{ÿA˜_ü(£ÿÉŸÿff_ô/Æÿá%þVðøãþ„Ïÿá;«ÿò^ÁÐfÿ (ÿòaý™™пÿ„•ÿùXÂãú#|]Ò‹->æeÔn4íJ²–é”ÊöÚ}¤LÅ!@?ªü áNÏxK1Åç9Y™â©ñ/ N¾7 J½XP†[”UΤ[TãRµY¨­ªMîÙü5ô™ãž1áž<Êp=ÄÙÎM‚­Â8 ]\.]¯…¡SS9ϨÏ*t§º²¥‡¡NSjî¡Ú(üVÿ‡“þß¿ôx´'þïòu~Íÿï?è’Èð݇ÿäçoø‹ž'ÿÑyÅøwÅÿòÀÿ‡“þß¿ôx´'þïòuñøþ‰,‡ÿ Øþ@?â.xŸÿEçáßÿËþOû~ÿÑáþПøs¼MÿÉÔÄ;àOú$²ü7aÿùÿˆ¹âýœQÿ‡|_ÿ,øy?íûÿG‡ûBáÎñ7ÿ'Qÿï?è’Èð݇ÿäþ"ç‰ÿô^qGþñü°?áäÿ·ïýí ÿ‡;ÄßüGüC¾ÿ¢K!ÿÃvÿø‹ž'ÿÑyÅøwÅÿòÀÿ‡“þß¿ôx´'þïòuñøþ‰,‡ÿ Øþ@?â.xŸÿEçáßÿËþOû~ÿÑáþПøs¼MÿÉÔÄ;àOú$²ü7aÿùÿˆ¹âýœQÿ‡|_ÿ,øy?íûÿG‡ûBáÎñ7ÿ'Qÿï?è’Èð݇ÿäþ"ç‰ÿô^qGþñü°?áäÿ·ïýí ÿ‡;ÄßüGüC¾ÿ¢K!ÿÃvÿø‹ž'ÿÑyÅøwÅÿòÀÿ‡“þß¿ôx´'þïòuñøþ‰,‡ÿ Øþ@?â.xŸÿEçáßÿËþOû~ÿÑáþПøs¼MÿÉÔÄ;àOú$²ü7aÿùÿˆ¹âýœQÿ‡|_ÿ,øy?íûÿG‡ûBáÎñ7ÿ'Qÿï?è’Èð݇ÿäþ"ç‰ÿô^qGþñü°ûOöƒø×ñwã—üçá‹þ1|Hñ—Ä¿ÃzøÇE:÷ŒµëýwT:N—ðaî4í9®ï¦–V´±ŸQ¿šÖ% ’òᣠe|ü~A“eY/‹Y¶)˰™výGÂVöJ¡OÚÕÍÔjTQ‚Kžq§'¼”"žÈýŠx‡=â?²~}›có|güD¼~ë8üM\MoaG‡Ü©Rç©&ù)Ê­IB;EÔ›_?+ö3ùäý`ÿ‚ÊQfû­_úÏ«òï?äÚq'ýÑÿõ•¶ý?äòpwýÜ?úÊç‡öÿ_Á‡úŒP@šü†´û iÿúW Ÿ¶wü öÞðgí…ûWø?µWÇOø[Ÿ´§Ç_ xkÃúGÄOXé:‡ô/Š)ÒômL²‚õa´Ó´Í:ÖÚÊÊÖX­í Š(Ô"_Ýü!À|Œá.Åâ¸c%Äb±\;’bq5êà(N­zõòÜ5ZÕªMÁ¹Ô©RRœäõ”¤ÛÕŸåÿˆ(x‹—ñç`0ÃÇÜV'Ä<ŠÅ׫‰Äb<4á EzÕ§*•*×ÄQÅÖ¯Vr“mέYÎ¥I=e9JNí³ò¿W? 
?©ø6×þE¯ÛËþíwÿKþ:×òïÒOþh¿û¸¿÷„m}¿æâÿÝ£ÿ¿9ý#×òéý²PQàÏùtßû|ÿÒ ªþ ?áñÿðRßú:¯á%ðËÿ˜ŠþýÿˆGá×ý_ü*Ì¿ù´ÿ+?â=ø»ÿE®7ÿrþwü>?þ [ÿGUâÏü$¾óGüB?¿è˜ÂÿáVeÿÍ¡ÿïÅßú-q¿øC“ÿó¸?áñÿðRßú:¯á%ðËÿ˜Š?âøuÿDÆÿ ³/þmø~.ÿÑkÿŸÿÁÿÿ‚–ÿÑÕx³ÿ /†_üÄQÿïú&0¿øU™óhÄ{ñwþ‹\oþäÿüîø|ü·þŽ«ÅŸøI|2ÿæ"ø„~Ñ1…ÿ¬Ëÿ›Cþ#ß‹¿ôZãð‡'ÿçpÃãÿॿôu^,ÿÂKá—ÿ1Ä#ðëþ‰Œ/þf_üÚñü]ÿ¢×ÿ„9?ÿ;ƒþÿ-ÿ£ªñgþ_ ¿ùˆ£þ!‡_ôLað«2ÿæÐÿˆ÷âïý¸ßü!ÉÿùÜðøÿø)oýW‹?ð’øeÿÌEñü:ÿ¢c ÿ…Y—ÿ6‡üG¿èµÆÿáOÿÎàÿ‡ÇÿÁKèê¼Yÿ„—Ã/þb(ÿˆGá×ý_ü*Ì¿ù´?â=ø»ÿE®7ÿrþwü>?þ [ÿGUâÏü$¾óGüB?¿è˜ÂÿáVeÿÍ¡ÿïÅßú-q¿øC“ÿó¸?áñÿðRßú:¯á%ðËÿ˜Š?âøuÿDÆÿ ³/þmø~.ÿÑkÿŸÿÁÿÿ‚–ÿÑÕx³ÿ /†_üÄQÿïú&0¿øU™óhÄ{ñwþ‹\oþäÿüîø|ü·þŽ«ÅŸøI|2ÿæ"ø„~Ñ1…ÿ¬Ëÿ›Cþ#ß‹¿ôZãð‡'ÿçqí¿ðUŠßþ2üÿ‚i|@ø™â­CÅž-ñwìÕâx‡U¼[[UÔuÛïÉgyª;M·²Ò­®î­të yä³±·ób³·GD˜ñ|/Êòü£:ñ—aiáp¸N"Ãa°ô¡Í'N„0\ð¥í*Jueʤå9ÊÎrkv}YÞkŸðç„Y¦q«Çc¸G‹ÅV¨¡WS2têVöTaN„'8R§:táxÂ)ü(ü^¯ØOçÓ÷óþ Îÿ“ÌøÛÿfwñ'ÿV§Àúüéÿ$VWÿeN ÿU9ÙýGôIÿ“Ù™ê÷†Ï뎿ô@( å›þ Áÿ+ý¸gÿÛûãßÂ/ƒß´ˆ|ðçÂ_ð«?áð½‡| }i¥ÿo|øsâm[ʺÖ<-©jRý·]Öu=AþÑ{7—%ÛÇ—E×ÞxuÁyïdY®m‘añ¹†+ûOë™â1°•OaœfjWM:k’…t×,Ôw“mÿxÛâïˆÜ/âwdyâ²ì«ý‹õ\<.[R~³ÃùN2¿,ñ*µ¥í18ŠÕ_=IYͨÚ*1_¿ðøÿø)oýW‹?ð’øeÿÌE}ÿüB?¿è˜ÂÿáVeÿͧåñü]ÿ¢×ÿ„9?ÿ;ƒþÿ-ÿ£ªñgþ_ ¿ùˆ£þ!‡_ôLað«2ÿæÐÿˆ÷âïý¸ßü!ÉÿùÜðøÿø)oýW‹?ð’øeÿÌEñü:ÿ¢c ÿ…Y—ÿ6‡üG¿èµÆÿáOÿÎàÿ‡ÇÿÁKèê¼Yÿ„—Ã/þb(ÿˆGá×ý_ü*Ì¿ù´?â=ø»ÿE®7ÿrþwü>?þ [ÿGUâÏü$¾óGüB?¿è˜ÂÿáVeÿÍ¡ÿïÅßú-q¿øC“ÿó¸?áñÿðRßú:¯á%ðËÿ˜Š?âøuÿDÆÿ ³/þmø~.ÿÑkÿŸÿÁÿÿ‚–ÿÑÕx³ÿ /†_üÄQÿïú&0¿øU™óhÄ{ñwþ‹\oþäÿüîø|ü·þŽ«ÅŸøI|2ÿæ"ø„~Ñ1…ÿ¬Ëÿ›Cþ#ß‹¿ôZãð‡'ÿçpÃãÿॿôu^,ÿÂKá—ÿ1Ä#ðëþ‰Œ/þf_üÚñü]ÿ¢×ÿ„9?ÿ;ƒþÿ-ÿ£ªñgþ_ ¿ùˆ£þ!‡_ôLað«2ÿæÐÿˆ÷âïý¸ßü!ÉÿùÜðøÿø)oýW‹?ð’øeÿÌEñü:ÿ¢c ÿ…Y—ÿ6‡üG¿èµÆÿáOÿÎàÿ‡ÇÿÁKèê¼Yÿ„—Ã/þb(ÿˆGá×ý_ü*Ì¿ù´?â=ø»ÿE®7ÿrþw]x‹öºý£ÿjOø$oíO®|zø©®|BÕ4ÚWàO†ô»«Ë- F0hW¶·zÅÖ—,>Òt[{ÛIuKKKãôW;nm ‘ ˜£Ûò”8S‡¸gÅn¡‘å”p«ðîw‰«N½njð”iFªxšµ¥ ªr”/vROv}Ö+Žx³Œü ãLOgXŒÒ¶‹¸o Fu)á°ü¸jyÑ”pt0ñ© V„*Z¤gïÂ-[•[ðF¿r?™O¬?`¿ù>oØÃþÎÃöuÿÕ¿àêù~8ÿ’+Œ?ì–âýTâÏ·ðÓþN?‡ÿö[p§þ¯pú!ëŸòÖ?ì)¨é\ÕþqŸëá—@P@Š?ðr/ü £öÿÿ»UÿÖÃø@ÒuP@P@P@|¿@P@Èçücÿ'™ðKþÌïá·þ­OŽý—ôwÿ’+4ÿ²§ÿªœÿ;þ–ßòqò_û"rßý^ñ!ð^ƒ üý~ ü,øËñãá¦ñ÷ö•ý¥ôÝcÄÿ²ÿìÍâíKUÓ~ø7à÷‡µ´Sö ý¢­|=©i>*Õü!ªëŒú'¯‡vWšE¿Ä‹¨e—ûi4¹µ=oÁ]Oâ4ø=ûXü$Óî _7Áx×Äž)Ôþ|sø{ipºÎƒà½#Åw>ñöîœúuþ©q±à¯?<Åñ‡…¼k_<ÌøÃƒq•éasJy³Ž#4ËkÎÏÛQÅ6§ËQF£Ã'8á¹ÿÙ1ÕYá±2õ¸káÿޏ Ó#ÃpÎOáÿˆx -|vI[!SÂd™Æ•×Õñ$¥MNŒ§F8Ʃϩÿ·ákJN—¡êRx;ó<q‡ŒZò\ƒŠË28J¤éP§Z®†-£8By–kõyFXœEG*~΄¥8S«Vž‡/5Zóý“+àÿ¾Ü#✻q-Hѧ[WCŠÅçéÎ¥<Ÿ#úÔ% Š…_kŠ„)Ô­B…\f+Ÿ–†Ÿ~Ø¿±Àÿø(¿ì¹üþ Qðµ<ñóÀúí¦ŸûB~ÇžþÄÐô¿ˆškKñUžá yôßxoâþ‰kŽ|ão iü=oªè^-Óµ?M¤§† ÷ø7Å4ò¬ã‰Í²:Ž%B¥Zµð˜ì¾rä–/-úÄœ°xÚ6’ÎöÔÕ*þÖŒ©Vu‰áo>ÜW<áü»‘q5Ô£M*0Øì³6§iqõH(æv'š2§ˆ:“ú½W_ ì11¯B?Ÿž<ø¹û3~Æ_5¯Ùgá·ìëðGöÃøð¾òOþÙ´Ç7ñî¯à‹O‰ûPx»ö{ý™¼-à¿ø0h7¿ RVÑüSñÃUÔ¯µ˜ülo´Û? Ëšé?¢dùø³šcñØ ï0á ËêË „yd£O2ÇâTT£Ï‰‡,E ӭеx|4gK J•Zξ&?’qSáß.W–f|9•qÿˆÙ­c3çP•lŸ+Â9Êöx:œÐ7R\>~Åbñ’§_Z½ 2Ã`æûïØ&_Ú;â×ìÇãØGIñ$Ÿ³wíwñWøEâÿøÃ\»ñ׈?a¯žðÞ­ñÅ^ñWŒ.c¶ñ~k? 
t-wÇŸ üa® WÅ…ô;ß ëº«k:ׇ4XrÃxÄç— ñö*®y‚§­Ès˜ÒQÅãc u%†ÃÔiÉÊ8º´ªaëûZ˜,w»[WR•hmð«…|aá쟌|,ÁPáœÆ®e‡Êø£‡g^SÀeÓjPÆbè§(KF½,tiá• 9ŽZܰøJ…*Ô*~Äüfý¯?àšßðK/‰zOì9ðëö,ðŸíAyáE·ýªþ+x®o]kz>±¯é6:¥¦ƒ,þ,ðOŒádøò÷AÔÇŠuÿG¨|<ð‚´¯øLÓµay©ê:.…ð™>U⌘œÇ8Ägu2¼¯ R¤0ŠU14òØbÔyèàp8L<àŸ±Œ©,V6~Ò¼#*s›ÄÖ|‡éüAžxOôyÁe?…áª9ÞyŒ¥J¦:Q¥‚«œTÀ¹{,Fg™ãñp©(ýbq¬ðYm?g†©8U§N8,:öæ/Ûþ á‹þ7ý›?jÏø&UöŠŸ±çíK¬Ë§ügðÖ£¨ÃaàÙ^+mX×µ?Ú3ëZ…¼þø_£Cáøsâ©ï#À>9´ÑtßÙXØë7žÑ4á_x›ÃÌÛ0á¾3XìË ‚Ž"Ÿ°¯Uâ1ØL]:.®a1u¥)UÀcu)ÎtaFµ,V’ tëeÇ pg‹YSÆ>K,Éñ¹”ðµ~µ… °¹f?W¨ãÞ;B Ó.~Þu:tñ1zø,bIR«‡üÿ´ý¯¿aKMjãá·ÂïØ‡À¿d‹kÅðÝÇÇO‰~%ø£¡~Õ¿¬4ùžÇRøÓà?øoÅþÑ~è>!v½¿øwàè|)«K©ø~/øƒÅ ¢_êw^ÓEÈ2Ÿøó.©Å9æÏ¥õŒÂ¾)'^(«ïÒËñ¸nyFµ\4hС‰‡Õ=‹­Bx£îoØÏþõáˆ_´Æ¹ñÇ:¸ñÏüÃß ´Ú?Àu«Û/jÿ¼ â)|Um/ÁŠ:†•6›§øKâ?Á½Á>*Óþ9ëž}>É|+¥è¥¤~ ×|oöO ü®eã7pþI›pÞs…>:˱‘Ë¡™F?ªO (JrÌåG•Q©ŠPP–FŒ0˜ˆbðØ¿dáN­ Ÿs“ý¸3Šø!ãÇN¯†9¾_<Ú®O,Mo¯ÓÆB¤iÃ%†#žXŠX)Uu!Œ”ñ1øJ˜fÛªµhb©{ÿÂÏø)—ü?öÓñå×ì‘âïØá×€¿cßêÓ|:øûDÞÁ¡øO]פšDÑtˆQèžð_…~ üÑÖÇIºø£ðŸU–×GñT¶–'ġᾕ§< ú÷ƒž&bxª•nÏ«F®y‚£í𸶣 æx(5žÕEFÆa[´”cˆ¡%VQu)b*Ïð?¤/ƒ8>¯‡â®ÃN fX«c°”êÓɳ)©N—°”ܪG/Ç(ÏÙBr”p¸˜J„g5ð´iýiâïùB·ÂûHGÄýR6µôØOùffuk‹Ûèã‰X,­ä¾7c0¸³œ5|E*Uñõrº*3šU15hæø ]XQ†ótðÔ+V•£6Úº¿ï?FÌ¿‹ñs‡qxl-zø\®†w‰ÌqéÊTpt1i¡SQ.Zq­ŒÅPÃÓæwJ‰E;I¯íj¿…Oôè( €54?ù hÿöÓÿô®ÿ=Û_CÖ!á«ã<Ó´Ÿø&ÇÏx³âŸÅ‰î·7ˆ|?ûhþð†¯ñ7ÄÞ3·¼×5¼K¨üñ‡‚ôR×ÂþÔ.5OáÿÄ4ø>æÿA𮵡é~ô8KÅÞ"ນ¯ñunk< 1ptñu¯ŽÁæT%<#VR”òú¯Om'^T"©ÏÏB\«ÊãÏøKÄjYð#-Èéæup¸œÆ¶ƒYfa“â—=\~/¡Bžm‡NÿW„p°ÅMÕ§‹öx¨ó¿…<7ûc~Þ:ÔåøzŸ°†,ÿb­BWÒ<)ñ.ãÆ?tOÛã^ÒYO²ý /~%iþ/±ð_‡5MV)_ÆúGÀY¼ ©x:!ý›¡ë:Ôv’ÜXØþ“p÷‰¼]“ÔâŒwæc‚Åä9> /–SÃ5í0ˇƒîñPåT¹áˆ¯Jƒ†#,UYÏÊx‡‹|à ¥ÁY_‡Oe]G€âŽ!̤±YÍld_²ÆK*ÅUŒÿ{‚šœ«û)á0Õñ1©„ÁCF<\þ¯ý‡¿àÞ"ø‡ûoxãá§ÅÿÏãØ³ág‚| ûFxKö‘°dðÆ—ûEþψ®~éòjzéñxOâóøCźÆ[k;_§‚|Cªè±x~/ø>VùúÞ7g_ æY^k†=Àc?²£7‡¦°ö^Öó*ôc͇Xœ¨Ê”èE,=zõ°¸ŠTg„Ztþ¯ôkáüïrlï"ÆJ¿…¹®_ý»*QÅUx¤åì*a²|6"\¸·ƒÌ!‰…jx©ÉâðØ\>; _ }:ªýµá/ø)‡ü'ã¿Ä}CöCñ7ì#ð·Ãß±¦©ªÿÂðãö‹½²ðÖ˜úÛùŸÙÖŸäд¿hÞ6øcáM_U2êø·añRëÇˤÝé¾8×´ÏKu¨eüÍüMβ ñÄóÜÆY­H}—KŽŽk‰ÀÙÕöØz°«áªÎ«‚ÀÓŠö´yT*Ž•ýž'Åo8sŠ©øiO†2xäTªeæY¼2ì²Y2MQú¾*Jž2:ŽT3δš¡ˆçuUzJ¾&ŸÁ??à”úìñóãïj/øÛ[ÿ‚jüðŸ„þ*ü2ø“¥O¤]|iøÑ©üBñï…< ûh° ,×\øá7íF>5‚Ê×H»øq£xÛÄ—þÕou+7Ðáÿ¸‚=W'­…–mÅ“¯†ÀäxÉRçúÂĹRRÆÒ‡+Äc0õ:tRxÊ•é’Ì-\W `iâ)O„¥WÅaã5*´)cS,êÁk]áqÏšÎJ”šV³ÜßCü¿GÇY•\-zx }~Ã`±s§(ÐÅWËãžËN„ÚJ¤°Ë…öÜ·Puáù®—ôG_ÌÇöhP@Gƒ?äeÓíóÿH.¨üÉ< àoüMñ¯„þøE»ñŒüqâ#¾ЬB­[]×/¡Ó´ËšW޼û«ˆÕ縖+kx÷Ïs4PG$‹þc±¸\·ŠÌ1µ¡‡Áà°õqXšó¿-*`êT›²mÚ1vŒS”£äÒâÖ[–ãsŒÃ•e¸yâó ÇCƒÃS·=|N&¤iQ§ÚŒy§$œ¤ã+ÊrŒSkô#ÄÞ.ÿ‚~Ëž<Õÿg ŸÙøþÜž,øw©MàÿÚoö‚›ãg>xFø§ÜGŒ¾~̺¡³—X_†×Ñ&â‹9šêŸiÚç‡l|3}¦\j–šá¹.sâ‰xŒ~k‘æðàžÂÖ«†Ë%,¯ ™c³,E%g,Dq.Ê0æÿh•±ÃҨㅧOZ…lD?¦x‡¼&ðk •ä|MTñ#±Øz8ÌêÎñÙ>Y“ak»ÆI`ÕÜêr²ÃFxºô”±µªà0Øœ6§{ðÓþ ]âÚö£ø þÎ^)ñ7?b¿ÚSž$ø¡áÏŽºî§Ïã_ÃWGÒþ2|øçcáøí´(>,xCTñ‡4ÿ‡zµ­–‰áωöþ#²¸²Hmt w\¸â‡‹¹¿ Cˆ2.:ÁÓÄq&MF3ʱ8JjŽ?öÓ„(J§$!J„e°ÅºÔiS„°ÐÅRt(ã0ê…oJ§€YÕá>'ðÇ1«…àþ"ÄNž{ƒÇÕxŒw ý^œêbaKÚÔ©[8NLÃâ+Ö© m\eŠÄåø·‰Ã~‘øŸöžÿ‚|øÙâoø'Û~ÉzoÄ¿xäü,øçûW¾ƒ¦kwžøš“½·ˆ¼9cñ&›OŒ:®½áÏ,üGñÃGBÓü{s6áxu=CIÔ4 /ó¬Ÿþ#ˆ”³n Ë3üË ‡ÂÊJ• 6kŒÉð¸¼DR›Àexl$¡‡•JTÜS©‰•89Jœkb¥Vs’ýsˆ?â_|#¯‘p¦s¹>;ŽŒlV3"Ëø‡ÂͺqÍ3¬f>1P¥^´fãK µaVt0P£ q—Áÿ´Ïü‡Çl­Á~ñÔ·ÿðO¯‰_ ümû@ÙþÕþ8¼·×4Ù“áoÃK âÖ‰ñ‡ÅÚtvzn¿¥h~Öt_àç‹ï¥²¿ñæ¨Ã ø“Y¿Õ´øÞãÞàï1Ù^_™åüaJ¾aÀaæòºüІ/Š…HÑþÍÌ¥ËÓ”%.ycgMT…*5Õu_ìÕO–ñ èÉ–çY¶Mšø[ ”åy¦.šÎð¾ÖXœV”±ÛXx;Te†à|+“ÂúÞ¹„çoÂÖ>/‘|2>¿.¥ã7d•x¦a2|F"1Æä¼0²œ¨â°|®tã[‰§:¸WЧ',4qÄJ¯5)×­„N£ùþq[èïÂVéÒtðù§·RНŒ…Ò©…”S¯K©}sÚQ§†J¬éüŸý0u¸Ï_!Ǽ«±qÄb³¸Ö®ªâòE†pœ°¹}LSœ«ÒÇBvÂÖźÿÙþÇ[¢Ö?ðœEðâo ßø ›­ÃÞ5ŸMŽOÙ}}\Œø “ýgŸ`ñy•:o‹àùdØ/«¬"´––.Œcˆ©‹¥M':4)Êj¥:xêÕ,ëü Ãèë™ñ'ú—Ì05j«,Àøƒ"Ì~·,{—²†>¶_ˆ”ð´°ëJJž&º¯JåJµl·Jë õçì ÿÚðWÂþÑ_¶7üÏ@þ þÈ> ñ„üKà}rãWñß‹<¨Å¥êž0Ò5 k-gÇÿ 5½NëGÓþØøn¨ü]ñ»§h±Ù4¶³xs^ø¾-ñâ¾7 Ë0Ü3Nyvu¢å›Öpö’Ë%:WÀJ¤\jÔÅ8º´ñ.T0²¦”V*£–ô^ú/arÞ*ÎqœiV–mÃyV!G ÃÆ§±ŽuAVúÞk SSÃÑÁFJ…\¨£‰ÆÂ³”傤£Œú[ᦟÿ§ÿ‚ÖøSÄ?þüÕÿàž?´¦™ k> øâ_ø{Àþñ§§ØBóÙê¾(ðw/ÓÀu [(!Ô<ð³Ç1ëýŸ†nu+¯øÞÇZÓu_x[çs:>2ðNY–ñn;ˆ3z˜lD ëáq9¦33Yt«É:³l»í0”–&êtýª¡RKRx|DéÂ__“b>$g9Çeœ)ÑÆa!V8\v$Ëòg›Ç 
,M|‡7Ë],}ya,êÍVt*Œ%‹£O„§Z¬?˜ox7âWŠ¿gߎ>¶ðÇ?€þ3ºðÄïXËs6‘5ôvðê^ñŸ„®/a¶¾Ô<ñÃ7š_Œ¼ªÝZÛË{¡j°$È.­®BÿHøwÇn:Èa˜FÃæXY¬.m‚„› W/4kQRnUÅC÷”®ã%[çRxyÍÿx·á¦3Ã(©”Ê¥\^Oާ,nC˜ÔŠRÄàœÜ'CáÒúö¥¨â£8ʆ)S¥ONœTà¡?òk¿ðJïû4ý{ÿV&£^_ÿÉMâwý•4?õ¹âŸü‘¾ ÿÙ‰ÿÕ¥Cò‚¿Q?? Ïø7+IÕ&ý­¾=k±iײhšwì™ã]#PÕÒÖfÓlµ]oâgÂk½M»½T6Ö÷Ú­®®\éֲȳÞA£êrÛ¤‰crÑþô‰«Ip~QAÔ‚­S‰pÕiÒrJ¤éQÊóhV©_šP¥:ô#RI5 U¦¤ÓœoýQôH¡Z^ gؘҨðô¸;B­u :4ëb3¼Š¥ S¨—$jV†:P“R©¥Õ9µýi×ñÁþ……ü@Ápÿå(¿´÷ýÑ_ýg„µýçà¿ü›NÿºÇþ¯óSü¹úEÿÉäãû·¿õ•ÈÏ3OþÎ?±w€¾x‡ö–øauûI~Ô|%cñWáÇì¶þ;Öþø à×À[ûÙí|=ñ·ö†ñ/…¡ßêß/­¾ü%ðýΞÞ"Ò-5yüO©ép¥ôþð3Ž2ân+âÊüÀŠ9n+çþßâŠØzX¸áÜ'ìkQÁQ¬¥J^ί>ï·Åb¡QЩ†Ãajâ§õY‡œÀ¼ †ñÅL&'7Åç|Ÿê¯áñ•°Ū”þ±‡ÄfUðîˆ*´=ž.~ÿÕ°X*´V&Ž3£¤ÿþÍ^ ý§þi_¿àŸÿ|_§ëÚþ|1ý¢¿bù|AwñÄ^?uí?Á>8|ñ^¬#ñŒ>^øçR°ð÷Äo øºmľþÒ>':Õ·„ôfþÔæÄqwxqŸàrþ8Ìiñ æîQÂq,ºŽ€«”éâpØHû9F”¥ Ö§'Z¬ðÓxŒ5j’¡WÌ/ð‹ü+™æ¾e¸K²ÂxW̰Y¥ ºupxÌ|½¬%^1©KV+BÊqÂc0ôa‰¡˜KõÛâ%‡üþµ¡|ø%ñ÷àF‡ûpþÚ_ôoxûB¼ðï…¼[¤xÀÉ=݆¥ãIm~"ÛÝx_Á~µ×!»ð§Ñ4iüyñ[´¼¾–ÞÓLÓ¯ßÃ_™Õã|Tâ™å¼'ÆäYm/iRŒ0¸Êùt0˜rQXÌÛ‚¾"¥jÒpJ„XF¥HÑ¡NIT­/Ù¨ø}áGÜO8㼫.âŒâ¿²£ˆž7/ÂæÕ1ù•X¹¼¿"˳+a(áðñU$ñ5#B¬èÒž#Z.T°ðùSþ ÿÕøuñKá'Âßø(Gü_ÃÞ0øð£â¼5àŒ_²Æœu{^ðޱãOYø*ÏÆ? ômNëTñ…u øßUÓôo‰ÿ Ÿ[Ôü†.í|oàü9áí#Q¿ñFAâ/øwÅáÎ;Åbó,¾5áOWZ¦?†¥^Ζg€ÇTæÄâ°Ž2IШê^”gN,>.3ÉÅ^ø{âßSã 08›6–¥\¿G+Áãká¯ù.k–RäÁàqÑœgFž*ŒiZ´©Ö­_:§É^2ñ7ì!û$ü@Öÿeí[öw·/Åo†÷‰áÚ»ã]߯ïü*ø[ð÷â\QA7Š>~Ï6~Í©ø¯ÄþšdÒøã¿Ø[Çãϼ£ËâO‹ÿi/øE4¸4[=GÀž°×¾!øSÆ:n“§Ãã‡Z¯-®¿w¢ÿÂGâ%”ø¥›ð¦aœpç‰q„±Ùf®/šàèB”shœªÑ¡Ó,4ç§õ*Ч…Š«N¶Nž" 3ï²9Êx{‹ü•Xå¹Ö:Ž4ȳ TëÏ!J°¡ˆÄÎu§_N–WVKûJ…J¸Ù<=\>?V®¢·í_Å/Á¿à—¾8пc~Ìiû\üTµÐ4KÚSâ–·áoxÃUørÁ? 1˜ÌøWâ&{J q^o,ó”G.–"œ*:~Veˆ*®´0r ѧ,= .ÃxËÅ96;Œ°|C,5 έ%,Û‚©˜Æ„¤ñ2ÊòÌ'& ¥,;Œ£*Rˆ”'C EXÊ‘ú–oŒú;ðGe~f'J†'_"ËsYD±PŠÂG;αΦeF¾.3„ãZ2ÄK ”ñXº˜Jeøgÿý‰¼{ÿÝý¬¦ýŸ|]­êž:øMñ/DÔþ!~Ë?õ«K+-[Ç>Ò.-m¼cðëÆÿÙvzv…'ÅŸ„׺†›»q ÙÙØx¯Âdž|oáùõMKCÓWðÅ,G:œ=ÄiË<ÃÑ•|1Fµ0Ôíí¡Rœ)¬n5Qû(ÅWÃóÕöq–´ê~ô€ðK ÀÊ—p¥°á¬^"8\Ç/s«ˆþÄÆÖ¿ÕêR«QÔ¬òÜcN”}¼äð¸¿gGÛN8¼=*_Q|"ÿ”<~×ÿöv³ÿþ˜õJúŒÛþNß ÿÙ-žÿéè‘É…ã¿û-¸cÿQ꟔ú‰ø‰öüãIÕ5¯Û¿ö5³ÑôëíRîÚƒà^­5¶Ÿk5äñizį ëºæ£$Vé#¥Ž¢iº†¯©Ý2ˆltÛ»Û—ŽÞÞYäøö­*<Åó­R øk;¤¥RJukåØŠ)§&“ZÕ)Ò§çRq„S”’yáu ظ U+Ný¾¿fÜèšO‰­ü5û/ü×çðÞ¿Ú´/C£|høÍ¨Ë¢kV§‹'VKf°Ô`?ë¬î&ø«û À:áþ}…j¸ibxƒ4¡E r× ëd¹55ZŒ¾ÍZN\ôåÒqLÿ>¾”بà|Uá|lðÔ1Âp¦KŠ–|6*8~#â²Ãb!öèWPtªÇíS”—Sþ û8Z|wÑ?gø,—ìÃa{âï‚~-øá´w‡tyµK¯…^ðëþ%ͦéñ\E¥h?|C¬xÇáwÆ´¶{-7Á¦ø†âÅ´í?ÅzÆŸù‡YÕo |A̲^$—°¡Œœò¬Ã[HÒ®ªªù~e*µ—Õ+©óʬ¤£õ|bÄÎþͶx¹Ã˜ü(Éø“ƒ¡õ¼N_NæU‚ÃûÕ+á¥Aá³\š4)'Ã:|‘£:Ÿ[ËÞ•g{ÿðCŸÙžÛÂ>#ñ‡üËãÆ¬¿ ¿fßÙ“Àÿ•¢ø‹Q_ëšo|KoțûÁ¾ðmæ½ý© =¦¡â[í;IÓ&¿»ÑüAegúOoÃä¿ê† µN;6xlFcìå«—ЭK‡SqºŽ#^•Òùã…„êÊ1|<çøïÑÃlËÄëþe‡Ä`ò¼‰c0™Oµ„èË1Íq8zØ S§rÊx\¿ [N¼Ü}œñ•iÑ„§<6.¿.?à§Wú×ÿn½{öĻֵ|ÿ‚…ü3øeñ‹öKø“¨Y\ipÛü<ðG4/ ø£ösÕt‹‰§ƒÂ_¾ øšJóÆž·î®ßÄòx–ú4ÔfÕ ³ò<Åà²|nyÙžywãVGëQ•Øœ<2«iÕQ”jQ§_ëñ„Suðø™TzaŸ/¿ô¦Àf£8b0Ø<Ê®-Ñža*´á:8Џ_ì¹Ô›Œpج)-q±æýQÿ‚&|WÑ?bWÖÿh¿Ú3â/ü*πߴǎ~~Ê¿ |;©ÛHÓü_øß¬xª »oiV­å˜<ðcÃð•IãßÜ´>Ð4Ïj†âöK½.âÌoãö;šO"᜿ <lj(â+c\p•|FS5<#§IJn¦1ž-Ó·5:m5T¦åËôWË3<’Ÿñžm§”pv# C-çÇÔ† ÌhâéÊž:5k8AQË•JØVüµq9§Õé¹Ô¥V0ü­ÿ‚‰þÀž?ÿ‚q~Öÿ¾ÞhzõïìÝñŸÇÞ*ø«û-üSšÆò} öÏⳫx³Å?uÌ|Nøq¯ÜkfËM½½mgÅž“Iñ]¼ T†Ç_8ß <º|«K‹ÂU¯ŠÉå7k†ÄNXŒV-µÏ‰¡^uqWs©‡«%òádÞ_JO qðÍ¡â.WB¾+Ž¡…Áq iÆue€Åá)à ‚ÇJ1OÙà±XXPÂJVTèâ¨EÎ|øÚq_Ñ—üáæŸÿÄý–|QûJþ×Þ8·ø%©~ÖŸþ|5øQà[Ý´VZ–µ{«hŸ õ¿x~ÊXõˆ5 bçÆZç‰|S‹ | ðŸÃ:Ï‹"Ë8w‡(jÏ'XÚrÄàá*õ1xÜG³–' …öiû\>ž2•Z|ЩWÚ¸þîŒjTý èéÁ˜¯øK:âÞ/ÄÿaÓâ—VŽ1© -[…öÐÁãqÞÖQT1xêÙ„á 5T*Q ¨©Ú®"t©#ÿ<1ñ¿áÇÆ?Œ? 
¿j}.ÿFý©|3ñ ÅZ×Ç!ªüoâÿøƒTñEçÆ?ßI ¼~ ðGÅëFëÆþñöUÍ–¨Ú}°·“LžÎÛ÷3¼“2àÌ·/ÊÕ>+&£&iŒ¿{[”çS(ÊÓ•,Ê£©‹ŒÚåIÕÃ]J„£柤' q&Mâ.q›gr¯‹Àñ"Xü“3œ?q<aN²¸Ê¥Ù5%K:J\ó¥NŽ2ΨN_Õ'üãMø}áßÙ“Uÿ‚h~Ø5O x«þ uð«ã·‰¾|®-|[áïZÿÃûo‡ž+ñÆ©™sàÿ‹V𾝬ü1°Öb²Óõãà}gUÐåŸ[¾½±¹üÆÜ~ˆxª´²,3Åχ²Õ‡Ï³,eR›”1jšU]4áìðqÂÏ'yV¬ð÷q®_ê_£nYšp—áãÄøÕ€§Å™ÃÅp¾Qœ(Öä©€u\¨Æ¬£SÚæ”p•1Ô°pMG ‡Ž.Êx×ÍüÊ꟱Ïíû>|ºý‚¼wàms]øõà}zßÀž¶Ð4[Ƶøãà4Ù|9ø¯ðÙ6mÖ<1ãÅgu©ËÿÅ#â;_økÄ«¤êžÔm­ÿqð¿òœ×‚¨Ë2ÇapXÎÁSÂfѯRT0˜:q¥…Ì[NTq#N3”b’Æ*´cisÿ3ø×áV}‘x‰†Q–c³,¿Œ³*¸ì†Xj5+º˜ì¬«ãr§(Ũâ0¸©Ö•8Îm¼¾T1ž•½Ÿõ ûY|_„ðK»ÿø"_ÁÏ‹–¶ŸðP?ÚöYø‡ñ‹Lðe†¨þOĈ|;ãÿø³ã/Ák=~Y!µðÑøçá{ˆ~Ë{—gâ»- źÛZµŽ·3-q†uˆã~*Îø’†_^¾W…• ΄©û£V† "¥€U¥Q¸Ó§€£†¦•L5jq¿²«„åö “’œT)©^ßæ–{ÁüWã,W æ^/‰ëæIN­\ÓŒ¬åG‡«;{z8þu‰§‰“Q•9º•eZœ¿¥ŸðWÚ¾oÚ3Æÿ³/ìg§¡üBø&–™§xwâçí7c<ÚŽ³ñGö™ºøugàŸŠ |!©ÎÒøA¼½¥‡ÄÏ]Í}©|@ø‹j–‚ÛC¹¼Ö¿ž<+áXñfœy‚…|Ÿ!Àæøú¹^„^*®1âÁůvž †¯Ч8Å׆ ”¿uVpþµñÇŽeÂ~äžæ5p¼AÅ9žA•ÐÎñµ'9ÇC/XGÂiûõ±¸ÌfkV¥E)G S1­ßP§Sªñwü¡[áý¤#âþ©Zýo ÿ'‹6ÿ²ÿ«™ƒcÿåòû:9§þ³Ñ>ý™f‹µ¯Æ üø;á鵿kó ¯ï^=Â^·šÖ<_âD)‹Kðþ‹é%ÍÄ„Íws-¦“¦A{¬ê:u…×Úq'å\+”âsŒß8jµ8&lV"IºX\5=êW¬ÓQŠ÷a:µ% TêN?œðg|qŸ`ø ÂËŒÅKš­FšÃàp‘”U|v6­¹hápñ’s“÷ªMÓ¡F51iRŸ÷£û#þÉ? aÿƒvß¾#j—º„–º¯Å‰wÖñÃ≾-†ßÉ“PºTy—Lðö›¾kO øvÖy¬ô›rn5 J÷WÖuà~5ãL×óyæyŒ•*4Ô©eØ rn† åuN ¥í+T²–'(©×¨—»N”(Ñ¥þ¥øqáÖGá®AO%Ê"ëb+8WͳZ°QÅf˜ÕWZ¢NJŽ’r† Jž“w•Zõ1ŠßG×È P¦‡ÿ!­þšþ•Ã@ÆßÁÚ áÏì×ÿÌøÝñâ¶— \x#Rý®i¿j~$×ííåáÝÇŒ~+ø¿KÓ> Ù\Ý–ú¤Ö±ë©ÃÙxRûÄO,¥þÙβȼÉp]ZñÆSáNÆÓÃД“Ì#„Êð•jà'ëQW§ÌéRÚx¨aÓº¹þnpçe<#ôŽâ,Ó;£…–]WŽxÃ-­‹ÄÆ2YTñùÞ>Òœæœh¼5nHׯ½<\S‹NÇÆÿðU?Ø?ưOí­ñi“JÕuÙÃö«ø™ãŽß>!ù7·šEŸŠþ#ê7~+øŸð+^Ö墇ƞñeƹâXÞÞÞê>*øi©Xjv×—Ú‡‡üSm¤|€\g„X"XËûÕ*Q«VÞæûï¥G‡x÷˜a[Ÿ*1W‚¯‡n |ã%O ‰£8`ç5ËJ–"…oÞc½ŸðNßø7þ AûÝülý«þ Kðâ·íáñ'áw†Zn»¥›ýSÁZ‡ˆÿá Ò>ê:ï„®.mvÏ AâO|`øºŒ–«á‡,Ÿð 3RÒµm<|OŒ\P¸ßŠ0R–gC&úÖSˆ–?1Äû)cžÓMÕÃІ•(N§9R¯^ S¨ÿIú=ðSðׂsN*âÊÐÉq1×õ­FçÄ6ŸtÍVü.¡âO ü_±Ôáñç†üI:Õlµ“m¹´¸†/ÝüÎ2|eøº4ðøŒš2Âf˜^eí.sic¤¥iºY‡4±ç(¥{\:ÿwi/ý"8ˆ2¯slË7•\Vˆ¥~Iå~ÊX téáá–ÅÆô•|©BJ´á)Jpö¹i‹‹—õÇû|.øuñ{ö ñŸüöŸø»{áŸÚ#ö©ýž¾)|vðwÃx­¥þßøðŸÅ:¶‚ÿ í|B^[hañÿ¬uŸŽÃ;¹WXÔ¼%Šÿµítû >ð¿óo‹ØÜñ†mšd8Y×Àå˜|6Ìðð•L-|l'<o«ãy}¦4ã&êÊXJ|õi´ÿŒ$˸š¯ ãr¼câ5Žeñ„êâ1XªóJ„°í_ë4ñ|ð«‡ÄAÊzU#Z3p—1ýfþÒß4¿ø)—üGÅÿ°/Ú_Åßø(üÓÄÿ|e©XG©yÖh/†ßõ)bøOqâ“'ö/Œ4ù¼âïüÔ|K5ú7‡þ*ij¾5}&êÅçºþÍ3‡…ãñÆG–Ï •>%­ŽÊý¥)Ç Z¶½Uj\¼”ç^5a^¶›¾8¥J®šåÿO2NŽ7ÃÚ^q6qOžÇƒhe¹ß±­NxÜ>†Ä`pø¨ÅOÚV§† ˜l>.ª¶6xZ¯2¬ùÿž?ØöBø“ûsþÐ>ø5áÿ ø¯ÃúV½|xŸ^Ñ/´]kà–‡¤_y4Óüo¦ê6«7‡|m§½½ï‡ô¯ êðÁuâ£mcµm>Ó{öqâg`83ýmÂã(b#‹Ã5•àH}b¾g*~î­ÍÎÂÕ’úý¯ì)BsNW§ÏþpÿƒL[›üÏÿ‚kxcÅú§íQðçâ熼emð×À?³½Ô´ÆZ„ïƒþüø~¿Ûž8ÖüK ðÅq¥x‡ÃÐÞøV-!faâ3®6HßM—Pšëoó¼—-àŒáæ~ÇO5ÀÕÀåøU87Åâi7† ZqK ùqÏ“T!AU‹öžÍKø7Á^â<çÄ®Y/Ö05²,ʆgšã:Š9v],e,L/ JXØóå‘ÂI©bjb] ÇÙ:Ò‡Ï?µßíEÿ ñûe|eý·­¼ Âß|TÒ¼'à‡¾ XÊkú§Ã†cSÒ|ñâtî¨.~$xÓCº‹S¹Óí¡†×Á¾—CðP¹×'Ðæ×/~CÀþ ÅðæM_<ÇV¯N¿RÃÔ§–Ê*œ(`¨J´°˜ŠñnRxœM:δ#îû T&¥VsTþûé-â>‹¸‡ ÃYf W Â5ñt«g›«S˜âa‡†? 
†’Q„pX:¸xáêNÓúÖ*ŒªÓ”h›«÷§üËþJì—ÿh÷ý”¿õ×kè¼)ÿ‘gÿÙyÅú‘@ù/ÿäuÁök¸+ÿQq&ü/þ ½ãÛ·âLš—ˆXðoìãà+Ødø¡ñ"ÞÞ8佸E†ê‡¾ šíZ ßëPIÚ."·Ô-¼)¦\Ŭj–³Ïw¡iÞ¾%x‚à\³–“£ŠÏñ´ßön_)6¡Ü^?{ÑÂÒ’|r„ñUbèÒ”TkU£‡ƒžf>'g<ÕÖ#¹uX¼ã5„u&”f²¼¾U=Éã«Á§:Š5a¡%ˆ¯ JxjîÂ~ðoßxOáÃ_ Xx/áÇ€´{}ÁÞÒÕŦ—¦[dî–Y^[‹íFöf–÷SÔ¯g¹¾Ôu ‹›ë뛛˛›™¿…3,Ç›ã±Yžeˆ©‹ÇckJ¾'U®z•%ä’Œ#”)Ó„cN8Æ8ÆŒWúu“ånA–`rlŸ K–eØxa°xJ)¨R¥—“”ç9ÉÊ¥Zµ%:µªÎujÎu')=ªâ=  €:È˦ÿÛçþ]Pñyÿ ñ—ÁMöѼðÅMJ.ø•ð÷_ðÇÁO]Ÿ+Wð_Ìl÷Ö~ÔÝ“þ­{ÅÞ—Ä:^âK)!Ö’òÚ×ÃZ\ØñEÜ3ÿløïÎ1\íòÜExá08ÚU³œJ1Å`%îÂ¥hÃÞ­K‹ö¥FW¤”ž*jøXJ?æçÑ3áüˆ¿UÎp˜Yãó,º¾‡s L#9àsH>z”pòŸ¹‡­˜`^*„qµiJ1ÁRvÆÔŒ¿þ1þÉ?`/Ž?ÿcOŠöú‹êÿ õígWømãmFÜAÆ¿‚^)×µ=WÀ_t©Ó6º…Ö© Ìú'Îk—о!é>!Ò59#½U/8Ÿ›p•:4s.uhÕÃÖ¯ƒ¯^¥zèSZÉJu§C5wõˆ{ZŽ/MIý&¸+1ÈxóÄÍb1?ÆŽ"†.|õ!†Ì0ØjX\VYR³ºŒ£O N›q_Tª¨ÑRXJ®?×Gì3ðÃâì7ÿËÕ|qñ_Aø9ûgÁ@î~%hÿ±Ï„¾%jZ~¤|{×þx¯Qø# Ï¦Ë ±hþ!Ôíü§øGŒ¼I…â®0•ªœ+á²Lòÿ­Ð‡´–.xyVÅc«ûJjRž ûÈÂZÂ1£ˆÄF^έ×õÑãƒñ¼áô1åZ˜\gfóo¨bj{(`)âá†Àå˜gJ«ŒiãñÖ£:‘÷jNxŒ.QöÔ,ÿ‰o„Úlú‚í<)«i> ð÷Œü¨k>ø›á¿[ÝÚxÛÃÿtRêËâF•ãkMC•·ŠâñzjÓk+¨*]Ís?Úv\FOõ?†XŒ›Àü?ý‡háh`©á±Ÿ*­K1¤¿áF8˜­«Ï*µäÚ^ÒaZ R©øÆŒ'á#ÿÁ(¾!üdÒ¼!ûax³ö[Óþ2|4øqw¬Þé¾8ð§Á‹Š3ëß³¿ÄèZ9&ƒÀþ(ñ_ÃËø¯OŠ+«áoˆ´TÒ¡½šÚÎä/±F'޳¼nEÔÂGBÊÑŠž¾mmc'I«ÂTëÖ£UI·lMzXºÐç¦Ôß÷ç˜\ûá å¼ORT±óÁâjåØiÎTó 6E:©åôë)5R5pÔ1Rú¦—áêrV‹‚þBü á߈>#ñü+Ô~xŸÃ¿4O7Ãü¸Ñî“Æ¾ø£¦Ý¦“ªxã@E’÷í‘êXþÉ)EªéW~«dòØÞÁ3ep§eIÃn!¥ˆÃáhSÃ_3§*„2¼NšúÝ ÜÏ÷Té4çFSåö¸iQ­ËQçø}ŸðbøN¾ÄÖÆ[%­ S©S;Á⪵€ÅaùbýµZéªxˆSçö8Øb0ò|ô¤c?µöñÇöGÿ‚'kŸ²_‚üM§|Býµbý•N¿ñá÷‡µki¾3Û~ÉV?4[Ú/‡V:KË«ëÑ|ø[ñK_øá+û}FÎFxum{ÀóM¬i±hçø¯‰3Œ—?ñŸýB·ú³[<ÁÔÆÓÃÆpž# Ñ¥Š®ÛPö8œÒžŠ&éÔöµ¥MKÚEÌÿG8?‡ø…|*Ë8WûSþ¹ÐáœÂŽ]W*u)ás:”ñð8d“©íðy%\^*ÑZJŽ5\}”£Lþ6ü)öoZxu|×-|A•…­´ g»þÕRX"Ñít{+8Ú[‡»ó­à±³¶„ÊîñÛÅò¿½°˜¼l¿ŽÁ×Ã<²xJxœ>"”¡ "Á{%RXKÝ…:£i+òÆZ¨¤íþZã°®6Å噆³ªxúØ<^´*TÇ<ÅW•*´jCß«W,EâÒçJG'$ßõïûtüø§ÿÿ‚J]~Êžø©¦|Mÿ‚„ÿÁ>µÙ×ãOÆ‚ñ—Ljþ x7Á—>4ðçÀZÌÖúÏŠ"8L],TéÓ¼mNY~ò…;)ÐÃW¡îÇÝGú©’dyÆoáuâŒÁÓâ,wO'ͱª)âð³Çà+àiV«iÞ¶"„²ÄÖ»§ŠÆa±VœÓ™ü¸~ÉÿŽc,*2„åG F“æÄfS£Í—ÓÂßN¼mFÓ•iV¦¥ýÿÁÇÚÍÆ±û1üøð{â{|Fø3û|hø-§ÿÁA¼棫øÿÁqx»áL0þÉŸ~.ê“Ƕ¡àd×äþÚñ·&¥þ#ñ?‚|G©Cjt9®lÿ‰x7–ÏrŒïˆ¨R£•b3Úøªî4ãKCVS­‡J2^Ê8L7ƒ«^’mQÂ.i¯g¿úIâWœSð»>á®Ä×Äç¸NÃ`°±i×Íq8 1§‡Å·(KÛϘå¸L†»Iâ1íÆut_ˆß°÷†>7x¿ö²ø §~Îa×âí¯Ä_ë¾ÔŠK&“¡Ç ÞÇ«k ñ9“þ(íDµ¿¾ñdnÞ]߇âÔ,ŒsµÒÛÍý½Æ˜¬— ¹å^ iå3˱q0M*•ý¼:40Íßý®­iSޝ†»§;ÅEÉšÞà¸Ç3C…–}O6Ââ0udèá¾­UVÄbqŠ-?¨ÐìñÑnÓÃ*´­'5 tßðTŸÚ·áWíçÿø…ûJüÐ`Ò>øáÅ¿ì¿eã«{ĺOÚ[TøaãïjMñÎÒH ±µð£k‡~ÞË.¯¬x“º\:íæ¡§é×7‡ô¯Ä~Ü+š`©æSˆ¯*9ng‡ž_„À¸Ëý²X|EŸÚNWäöXyÃÃ4¤êº˜™ÞŒ_éO¥od™•\«‚0¸hâs|—O6Çfjqÿ„øâ°˜Š_ØÊ¾ÓÛb©ÔÁfÅ'AQÁÓj¥ITT>Žÿ‚„ÿÉ®ÿÁ+¿ìÓõïýXš~‹À?òSxÿeMý@ùŠòFø/ÿdN'ÿV•dOÙâïí¥ñ“Fø9ð‹KŽKëˆÿµ|UâHK†<ákˆ!ÕhŸþ i«•d°_xׯW6ÑÃâO‰þ2û+cò\³é%ÇñŽe‘a³n©›`ñ½½˜ÃYkΦ§LhBOõY§Oé*c*u$ŸÁy?dß| ý¾õ¿ÛÂêÿÆŸ³wíãáφRx[â,wVÐ|ñoá·€4ÿÚ|&mf).!²ðçŽ|£é¾<øR³Ï†¥r…zxÊi:øxå9¢ÃÕ¥›aãOšš£Bt)CZšjxjô+IƆR_eÁ~Û|°ý ?९ãMGNµÑ£ºÖ¼?5‰úO¤ࣀÀð•F¾>¦&–i“Qœ°)S© 4Þ–#*²–žòÂÓ’’äÄÁ¿ú'ðNe,Ó4ãÌC¯†Êèàëä™tS8f˜šõhÔÆTkjØLhBw„±µ`á/iƒ©ø™ÿ·ø§oÿ2ý¶5¯Žº¬#ñ¯Åoé?þxÛM’kx÷ö8ñ6‹ioû2jÿ µÑuO x{Á6©áM}ô™.íl>!iÞ)·½½»Ô&’æn¿£ÕLªŽWŸeê‡RÌ\Æxû:òÀ¡…Œ#%û<.%bé׃èâ*þñ¥Z’8>–TsÌFwÂù³©OÖÊ¥G(­†ŸµÂÃ2©RXœlªNTý®;ð°µ¹q8J¹Máë³úÿƒ|þ$êt/\üdñî‹ðóàgí+ñÁß¾i>.½¸Óî>'~ÓŸðŽøŸ]Ôt¿‡H@‰¥oxy´¯_“¶»„4¹õm:[(þé[(Äc2,.>׈0¸|U\[¢”ªKÚÒ§Šå¼¹½¤1šÁ¡UjŠ0­NSú¯¢N?ÂåüOŽÅMPáLn+C±-Áb3ØËØW«‚çjžÊ¦ŠšÿxÅO‡¤çSV0þnh¯ÙÇÿðO_Ú[âßì}ñ2-jé<=âßxûàÄíKñãào¼M©kÞøˆ5«„5ïéWœþø¯»O&‹ã}2én&¸¶Ô4ûëϱðа‡ G†[£‡Í2Iâ&¨®XKÅb*bV2 éÔ©Jµj˜|O*~Í,4æï]Ÿ}(¸5ʸÊ\gˆÅdœIK NX—ÍRvgÂRÁË/©+5F•|6–/Ï(ûYKJœm…“Ö‡ü#áljdOÙÆ?þ?x§ÃžÒ¿h¿Š?ô¯Ù«À_µ(<7§]üB×­µO‡^ Ö¬®n ½ÕtÍoãN³âË ZŦéoªŸxu|C$º òIoùö[žq&_•å4£ŠÅä˜\U ÇN£©V¬á_ê ‘58eÐ¥RµIÅÊ0«Š¯MòJ[þ×ô^á|ç†x?5ÎóÚÓÀà8“‚Äå9v-ÆŒhÑ¡ ˜oíI{F*™¼ëÑÃÑ„”%RŽ U{Hb(ÛøèÖ`x“ âÖâG:ˆúÞK]FQÃÕáöݪhÉ%ªaé,&. 
]cèb¥QÎs•ZŸÖïüÖã[ý†|_ûþÐ?á5ÇíéáoÚ3Mý<)³q£|_›áü~Ã~$|5 =­å–¢êÞ*µñG…;>i5kx£L󬯞ý¿ž¼z¯”c8¾Ùpö¸ü]NŽ}^Šæ¤«*‰a¡UÆé×ÃÑ«J–"£Ò ® )*”ù#ýeô\ÃgÙÔyÕOc•fY½jü-†Ä¾Jï*MãjQSiýWˆ£Z¾ŒUæèc±‹¥YT—òOqð'âÇì×ãŸ~È<)>…ñ¯à&±gð¿Yдë+¶µñnŸfÇÀ¼AÖ·àïŠ~Lñ/…5-PÝû.Kx5M6þÊßúÂn*Àgü–¡‡Åd:^e‡æŒ‚¢©PÅ4Ú塊ÃRhÔiª«I7ì$ÏåOø5áoó‰Ô§‰Å`¸«0Ågy6-Âu^)æX—_Œ’“ž'¯<7±NU¥Aá+É/¬Á?ëûÄ_~1þÉðFOþÁß ~(Åáø(÷ísð7ö†ñ×Á„÷ú–¡7‹5ÝWMÐt_|eø]ð™mØÛh>2Ñ~]¶ƒ¦]ǪXYXü^ñ+ø–ÆíŒÒKò‡‰¼EC‹øÇ3Ìòê1–_…§GCJŸ+Äap’T>½ˆ¨£ð×ÄVq£Vª‹ŽXJ2´¢‘ýÕà¿âxÃÜ—&Î+Ê®:¶#2Äá+ÖæX\n>ý—„¤äýü.§ˆ¡AÉO ~&ŒäÏãápÐux>ÏÀÖhë¦ØèÚFж²&©eqdF—6‡}`±‹ˆµë F ´ÝVÊH…äz¼7POÚCŠþÝá\VKˆáœ›‘Jœ2U–ÐX4œ¡B5Nt«µhƾp<_7½ð«Îù¹™þkñƈðœeÄ8>'…YñÍñRÌIx‹Ã¯Å8¯õ[m#KÓôM/ÄZx“R×ôi´?æNáÚ|OâÖežð¥i`¸w%ΣšÔÆ(¹S®¥Z>ß…äqQ¥›ÕXÿa r*9dœåý*ŸÚwŵx/ÀlŸ†8ë Ë‹x‡'‘ÒËå5ØWxlÇê)Êuò /+úÍHs¼FuÎ+ÛV¥ð‹þPñû_ÿÙØ~ÏÿúcÕ+öÌÛþNß ÿÙ-žÿéèÍùü˜^;ÿ²Û†?õ©ù‰àÿø£âм;àè:ŸŠ<_âÝgOð÷†¼;¢ÚÉ{ªëZÖ«sžŸ§XZÄ Íqus,q¢Œ(ÎçeEf¤âñxlÆ×§†ÂahÔ¯‰ÄV’…*4iEÎ¥IÉ裦ßÜ®ô?À`1¹¦7 –åØjØÜ~;K „ÂaàêVÄb+MS¥Jœ#¬¥9É%Ó«i&Ïî;þ ÿßðÇìðüø£Æ–ú/‰?jßé¡/Œs_í¸¸Ú­<« 5e`*;ÇF“Æâ©¨ýj²öq”ðÔh¹~–“žO$òIï_’½P@~(ÿÁÈ¿ò‚ÛÿþíWÿ[à]IÔP@P@P@òýP@#ŸðqüžgÁ/û3¿†ßúµ>8Wö_ÑßþH¬ÓþÊœoþªrCüïú[ÉÇÉì‰Ëõ{ćÄÿ°Oüöˆý€®µm#À_Øž:øYâKÿíOü+ñ¡¾:!Ö 0Û?ˆ<5©éóèxc_šÚÚÞÒîæ{¥êVÐÀ5Mú{M>âËíxçÃLƒŽ¡J®7Û`sL=?e‡Í0ŠÛÙ^RT14¦œ1XxÊRœbÝ:´äåì«SŒêF›øeã'øcRµ µaó<“WÛâòL{¨¨:ü±ƒÄàëÒj® (B0œâªÑ«ÇÛáêÊ)S›öÿÿ‚¤~з÷…4¿ƒþ(°ð¯ÂOÙÃGÕ´Íe¾ü2·¿²Ò|ey ^CÃ|P×onå¹ñ†á{èlµMÂzu‡…ü&·a§ëšÏ†õ}OHÐ.4’á_xs ÇÐ̳,v#?Äaj*¸j5ðô°™|*ÁóS©W ˜‰â'NIJ1©ˆö «Î„ôKï¸ãé=ÅÜU–br|›-Âp®JT1˜Œ6.¶?5ÅÆ­éRÁÒÂÓ­ã9ÒÁýiEÚ–&Ÿ¼åàŸÿloŠ|¬|)o ü"øÑðW\ñ-Ž/>~Ñ ü)ñ§áL>Ò`û>“ãÝ#ÂÞ.´¹]ÅúlB5‡YÐî´én~Ïgý¢—¿a²û?ÝñO‡ü;ŵð¸ìž/ š`”c…Íò¬KÁf4a ûHAVQ© ªuV•IД¦èJ›œù¿/฻€°¸Ü·)­Çd™‹œ±ÙwƒŽe“â'Vš£V¤°ò*´åV’:ê…zPÄÂ㉅eJŸ'’ü~ø×ñ+ö›øŸañ[ãˆGˆu¿ xlx#áχtí/HðÇÃß„^Y!”ø3áWÃÿ Xé^ðfs-½´šµÖŸ¦¶¿â'´°>$Öµìí?ì³ÂÞðï âq9†Ìnk‹çXŒß6Äý{1œj5*‘U¹)BÒI:²§J5+4•YÍF)_x³ÅÜwƒÁåY\¿-ÈðÍár ‡²Ü¢œéE”ÞU­V«£ãBkÎŽ9:éÊsrýmý–¿à»¿´ßì÷ð§LøAãxöƒðÿ„,­-¾ë?î5‹ox\é-çxzÛRÖm%º_é^¹†ÅôHîí,¼Aaoi•¿‰£·ƒN];âx»ÀÞâ<}lÓ/Æ×È1xªŽ®.žK€­VNó­ #©†• ÕåUÓÄ*S›çö*nrŸéô™âÞʰù.k—ax«‚¥8 ¸¬]li‡¡Õ<=L|ic!‰¡F<° «a]ztÒ§õ‰S8Süïý³ÿmÚöóø£¥üLøýâ[¨<%k§|4ømá;+½ áÃ; LƺµÏ†|=y©k×^(ñ P['‰|iâMc\ñ¡+¤é÷º?…¡±ðå—Ñð/…¼?ÀÓž3 *Ù–oV›¥<Ëq•r·=< k“ –^ÒNu«É^¿²n™ò&øÝÅ~&Bž >OQª«Ã&ËåVPÄV…ýlÇUª˜Ú”ný”<>–¢Ã{hª§¢x/þ /ñËÂþð_‡|OàßÙçãn­ð«A›Ãþ#üøðçâ÷ÅO‚Úƒ|?Ãø¯HºÖ´Ý/O»H/4Ý3W—\ÓìžÎÊÊ+Q¤ÙÛéѬßÂ~Í3Q(ÕhË.¥‡qt+*°§7Šsž2£¥KÚâ*{*|¿»¾ÿƒ‹k-á¾™ kÿ ~øûâ¦`úv‹ñ{ÅgˆašXå¶0\ßkÞðö±¢[^j×Í“êøwZðžv¶¥$ÒCJ%‹òl×èñÃø¼|±9fsŽÊ°u&ç<½áéã£M6›§…ÄT­F¥*kU^8¹Æêó’V?xÈþ–œU€ÊáƒÎx{,Ï3 4£N–j±u²ÙUq‹J¶; Kˆ£^¬œÞxJÎÔâåuøkâß‹¿> |lñ?í#㯈þ-ñ?ÇŸxŸNñv­ñNëR6'µÕt)# [xf]4ë_h Š-|áÏ [i:7…íá_ì«8.eº¸¸ýS†8‡xS(¯“åØ8Ö¡Œ‹Že[©âqŸ49,d8Ó.INÜ0ðS©ËIJ¥IOðÞ5ñ7‹¸ï>Ãqo˜K‰Ëçäø|¶UpxL™Â¬kFYtUYÖ¥_ÚÂIâçZ¦.¤©Òç®ãFŒiýâïø)¯í7â]ÆèpüøWñƒâ>ŒÞø—ûSüøðÛá¿íUãí*}=4‹ôÖþ7h >&°Õ/ô¨¢°>&ð¯ü#¾*Ó#Š \Ò/íà»ä'à—ÊuaN¶‡Ëkb#Š­‘ÐÍêG(©VBS£:SÄ·û°ŸÖ½¬#î¢V>þŸÒKÄHS¡R®…qyÎ <‰q9çÔhTÖ¤iâ!^ž1©/~¥5ö5'ïT¥&Ýÿ94mJðî™i£h–úf—b­ªlŠ?2Gži’^[‹›‰eº»º™¤¹»ºšk«™e¸šYõ<¿/ÀåX,>]–áhà°8Jj– ‡‚§J”.äùb·”ç)T©9^u*Ju*JSœ¤ÿÍslË<Ìqy¶qÄf9–:«­‹ÆâêJ­zÕ,¢¹¥-¡N…*Tâ£NS£J¥B?¹ß~|Oý¤à•³¯Áσþ¼ñg޼aÿñíž›§Û)Kk;uø%h÷úÞµ|TÁ¤xFµ_ë½áK[ (¤–F-±ò\Ã;Ëx{Äþ ÍólL0¸,'‡øÔ©-e9<æJ0øª×­+B(^S›Iiv¿yʸo9âß8S Èpu1Ù–?ÅlÎ*PV…8.‹«ˆÄT·- .©^½KB8¶ÝìŸôÙû~Ä¿ ÿ`„2|6ðÄ~(øâ¤Óï¾2|W–÷5Û8¥i:< Ò6à­îní´=&9äY®oµ ®õ ëÛ™ÿ—8û3.;;¹ŠO —á]Jy^[sC BM^u$’öغü±–"³I6£Nš…(B+ûkÂÏ òo 2/ìüXÌÛ©UÎó‰Ã–¦;¾Zt`Ût08g9Ç ‡M´¥*µe:õjMýq_~žP@jhòÑÿì)§ÿé\4þwŸ·§üŸ7íŸÿgaûEêßñ£œÿ$WÿÙ-Ãÿú©Âä‰òqü@ÿ²ÛŠÿõ{?Aeø.í/û7|$²ø-âïxö€ð…ôëK‡—¥Õ-|Aá´ƒ ¾Ó.µ[ž?x{÷v–W5ýœÞž–°ÚXøšÖÒ×M‡OüóŒ<áþ'Ì*æ¸e|‡Ѝêã =<^ VOš¥ªJ¦Tq[n¬©b#Jr÷Ýi)άø}ô•⾠ʨdyž]†âœ³J42÷ŠÅUÀæ8J-,/סG FŒRWÂJµ8%J8ctéþr~ØßµçÇOÛ·ã%‡ÆÚÄ–š¾£ám/RÐ>xöw/Ã_„ú±q Æ·oà \êµÌz߉žÓO>,ñ޽«kž,×âÓtÍ.m^i:6…¦ýx[ÃÜ 9ã0²¯˜æÕiºṞªšR·=< qPÃB¥—´“•ZòWƒ¯ìÛò~&xÛÅž&S§—ãc†Ê2*5Uxdùsªáˆ­ û*ÙŽ&¬ÝLeJ7~Ê 
40Еª,7¶Šª{‚¿à¤_´ƒü9à}7PðßÀ/‰¾3øIá´ð‡Á>ü5ø¥ñÇà—…íå7VÃo‰.Ñ5WOÑô«ß*óJÓu¤×¬´÷µ³µ¶4ëK{(óÍü%á\×1Æf”jg/1Xæ_Ø9Ô(æ »R®±4'GMƼ—5xR(V¨åV¬gVR›× ñçŽ2<£’b)pÿàr‰QžN¸£)Y¥|¦Xh¸a¥‚ÄÃ…¬§…‹äÃT¯*õpô”hÑœ(Â4×ÅúÄ/ˆ¾ø±©|{´ø…ãKŸŽš×ŽGĽwã£â ÛψzÏáh¼Gâ'u¸óí"·¶ÓôÍ:Õm´MEµµðþ¦XhV–Út_K“ð_ dy-nÀåt?³1pœqÔ± ëǺTç üÖ¾0ØisèÖÿõ+ÄÿdžÎuVškïèÚö‹þ›yyk¥ê:ŒZOŠ´mêîÇ1h±5¬V?ãþ޹|s­—çÙ†)ó< L5mHE»ºt1’­BQ‚ÕA×£ˆ¨•¹çRIÊ_ÐY_Òç‰ðÙlpù¯ å9®g |‘̨ãqu*’JÑ«‰ÀC‰ŒêKIUXlN”¥ÍìéÑ‹Qá·‰~7üeñ§ÆÝkö“ñÄÿxƒãæ¿â?ÆWÿ§Ô†Ÿâ›moDx¿áš,Ze…4o Á½„¼9á‹-#AðÝŒ o¤éÖÛçi¿\áÞáÎÉ+d8 +`±qœscT15³7RœÞ:N…XºmÂ4£N)ÅÉS¥i9~ž%qwñ(Í3:”3éË(†[*˜<6L©TU©Ç,‚©:”e©T•yÕ«‰«5Z½NH(ýoâ_ø)Ÿí)¬éž-½ðÞ™ð+áGÆ_ˆ–³XüGýª>~Ïÿ ~þÔž<¶¿ÓßHÖŸ\øÃ£xiu{=_[Ò_û>ëÅ>µÐ<]apÜx^ѯ¢Ží~:~ ðs•JT±C†Êëb#‰­‘PÎjÇ(«R 8¹Ñ*˜‰8Ù(Í⽬U¹jE¤×èTþ’> Æ4kVÂðž3:ÃagƒÃñ>+‡¨Ë?¡F¤d¦©b)×¥„‚›“”鬰œ¯ÏJJRRùköUøóñ3ö,ø…ታ¾ºÞ×ü0—ŸeU’ÿEñ¨L.5ÆuÔÌT¿a>=ÿÁÁ_µ·Åo†ÿ€þø/á§ìõâŸh£FñŸÅßE­ê?ãß ÙÜj?ïu»él<«7ËÓ4ý_V³ñž±áø{ßjZ^´šV«¤þ=‡ú:dTñÞ×Äž'.Œù£‚Ž C(¦Ÿ³«ŽR©)ZÓ•,¸¿rTåi/è,_Òï‰êåžÃ™.6•7 æ3Æc1XHͦ½­ ±Â”á(ÝJœkf8˜)/ÞF¤[ƒüoýœ¾6|Jý”Ò|ýž›Z±ñ7‰¾~Ì¿ ¼ð+á7Äoé7 w£jÿ´éÖ·Þ=‡G¾Hµ; x‹W½ð`ÖmtÝzãÃwæ£j:ÉeÞp†ƒÅâgœgqË¢¡—à³ÌÁc²ü"ù£NŽ8z0tc/yP­íh9{Ò§&®}îoô„ãüϘ`0páþ–o'S6Ìxg*–Yšæ5%IÖÄcåŠÄÕX‰ÃÜxªÇ£x´bÚ>]¯ÕOÃé«ãüëâíéûUþÉ:”—þø5àïø'ß쑨ü_ø«ö5–ÏÃZKøw_’ Dk,õøŒFÖš›þ‘ö8ÍÆ»k>§KÏó~UÇÙwp·bjû£¨üBñ-î—£\øÓâ]ÇØ5}uôMÃIÓ¼/á­Jðåßp„ù7B¾'R9ÖqŠ£S [_xz8ZÊÕp¸\$§Z1hû˜ŠÕe:•¡ziR¥:”§ùoо;qˆõ0Ø<%)ðïàq±”2ü6.UqxŒu sPÆã±Ð§‡”燗ï0¸z0§GRÕdë×§J¼=¶oø(n•­kñülñïìsû/|Lý±,ôI‹ö›ñž‹ã­þ,¹ðõµµ–‡ãŒ|/ãO ü1ø·ñC±²Óì´Ox‡K·ÔtûM:ÊÌÅ=”r[MçVðvž˜>âÜ÷…òìÕÿ†Uƒj¾pw‹§†›­B¥ªr©Nœªæ6¥zµ18…m׌¨aÞûŸø/ïÂ<bÿ‚wü&½ý³Ç…ÓÂRüe¹Ô|1g·g©i¾5Óm ÓtŸhLz…¦ƒðÏÁ^•µ 7FøU¡éZ¶±¥Cà»×Ö ×b×¼OªxÖ÷Å^%ñwŠõÝoöðÆrÆpÝL2Í(f±öÆ'¶:¤,é4éµ,40³^Ó Uðµ[­ ²¯)U—ó÷xÓÆ|SÆ9wÒż“‘ÊK‡ðx’ž,¥Qµ^2U“†2®6›öY•JÔ•µÿ ¿ƒ5ý.ãX¹¹½Õ<7©=ÍПåâ N–®K„ãž%Ãp½zŽur4éM:s›J1ÄóFœiTm9Sx)SœÓZug+Ÿsÿ ØêEð˃qœk†£PâiF½6ªÓ„aGSÉ:Ó­IEÆc Ô ãJ…Z0Ÿ„þÅ¿µïÆØcãN£ñÇáOˆ¯5ÿxËY×µ¿Œ)ñQÕ8j-Õ[\ñn­ñ;Sžõ5wĺ޸í®ÂQöèu‹ Yc{9ãÓüí6¬Çx_Â8ΧÂpÀ¼.„Þ' Š£$ó 8ùEFx÷ˆ©ºØŠÑJÕU*U(¨ÑŒ)Ó¥ARø<³Æ¾=ËøÞ¯ÔÌÖ;2ÅSXt{m7O†Ûöjžð•N§Â(åÔg,E,Lg˜Ã0’jy—Ö6¥‹š|’s§*2 £…t~«Qó­8òVãõ™BY½zqÂVÁΜÞQS*ƒNžNðJªqÀS’öTêÃS–5b>»R¦"^µñö¿ÓGÂ| ý”¿gÿ†?±_¾.h·>øÛ¬ü$ÕüâŠ~=ðN ]µ¯…~ø™ñÅ>$Ö~|ñä¶Öü ðæßD’çJµ´Ð­uë-9´û”‡ƒPÅ,¿ÄaÄöC•ÍK’W’ÃÑ„`œ)Ó­^ªN¤aO÷1•8P«J‹•,=J•—ÝTúDTÁ<×1á_øW…ø£;§(æ|I†„±xš“©%Rµl>¥ TéN­Uíç Õ14kb+âébjAIüA§éö:M…–—¦Z[Øiºm¥½…¤I­¤Iµ­´1…Ž( …(£@B€¯Ù°øzL= .<> J = 0TéQ£J *Tá£S„cE$£’?±x¼V?‰Çcq±XÌez¸¬V+RUkâ15êJ­jõªM¹T«V¤¥9ÎMÊR“mÝŸ¼¿?fÏ‹?µ—ÿø$GÁOƒ>:Eû)ø†yî.í´/ hŸîƯâÏjkˤøsEŠxä½»1Mq<òÚiz]¦¡¬ê:nwø¾IÄYW f+ç9½a„ÃñEÆ1JU±5å—¯e…ÃSm{\Eg¡¨Æ*ujJiÔ©è¾$áïò¯ø{‡ð¿YÇbø+)JMà ƒÃC5—·Çck(ËØa0êIÔŸ,§)Jh®"­*Sþ›ÿcÿÙá7ì;ðfÃáÂÈ!Õ5ýJ ÿ‹¦´H5ï‰þ/·¶1Ü];ïš];Âzd³]ÛøSÃQ\Ëi¤ØM#<º†©}¬kzÏòßñ®kÆùÅLÇ9SÂÒ•Jyf]·C…r¼a•LED£,V%ÅN½D¬¡FhÒþÝðÛÃŒ“Ã^£”etã[Z4ªç9¼é¨âs\taiU–²•,%)JqÀàÔå 5);Ê®"®#[ézøÓô0 € þ ?à¸ò”_Ú{þè¯þ³ÇÂZþóð_þM§ ÿÝcÿWù©þ\ý"ÿäòqýÛßúÊäg§~Æ_ðX+¿‚ÿ®ÿeÚ»àO…?kÿÙ©¬ÿ³´Oøê= S»Ð4ˆ® ¿´ð­þ•ã Äþñ·ƒ,/­–}C×´ûk¿Ë* ?Ym+MÒ´;_ ¼ÀñNa<÷%Ǭ9«%STÁckÇU‰—²œ+a1r|®­zJ¬j8)ʇ·JÒú >‘YŸeTøgˆ²Éq/QŒ¨àšÄB–c—a§¤°qöôêPÇàbœÕ-wBtUGN¯«B–3ÿÿ‚¡øëöíð7†?g? 
|4ð·ìçûx7PÐ5H>ø*én¥ñõÿ„.¡¾ðu¿µ-3Jð¾€<)©ZXkÞøSáï ZhPx›NÓµŸë^+:G‡í4_;„<£–æÐÏx³5\CŽ¥UbiáTkK ,\eÏN7‰›¯”d”ãJ¥:TÜÕë{x7Lõ¸ÿé/ˆÎ2)ðÇdrá<²µƒ­u0ñÆÇ(:sÁeØL8ár¸ÎTå^•jõU9[ õZ‰UøðzMY¢ŸUðׄ>(|9ñ‡üVß 5ÛÛk;¯ü6Õ®5kiega,piÖ¶ö±}‡øaͳ¨ñ6K›æ+Äj¶?+P•,W4U9ÔÄa¥*\ÕgI{:’…jp¬¬ñëMsŸð‡YžCÃ’àÞ"Èrž8áDÔ°ù^vêB¶ ’n­:XLdc[’…*ÏÚÒ…\=Z˜y^8ZØzoð_Ÿ´ÄÿÚ?âþ%øòÿBðõ¯ÁÍo ~Ï ¾hKðÿá'ìíá{«‹;ÍFÏáG…l¯o¯ôÿx‡PÓí5Oø÷Ä÷ˆ|s¯^Amk&¿m iÚ>‡¦ožå9ŒÅçÜV/‰3ü|jÛfüµ%Ɉ‚§ˆ… ;u#IW¦½IT©^«¢ÞU Jœ¹¸çÆlÿ‹²ìå¸p®W:0YCÏF ¦£«„©‰ÅERiaª¿mFia¨,JŽ*TeŠŒ+G÷ á¿üWÁ¾.ø_àï~ÝŸ±¿ÃÚÓÆÿ .-5‡µý;Á7— â; U´´ñ&£¤xÛÁ¾*³ð· q'Û¼oàÃk5é‘™9×µ««]>ûÇßn´›‹Úø¶óLÐôß øfëQÑ|áoŸøÎûÅ__áç„Yob%šã1k8ÎåNt©Wt.UËUaiNujJ½H¹Sž*sŒJ”)SŒêûOÏüZñó8ñ# /À>á¸V§^¶bž#™Õ¡.jZ¡J8j3Q­KNœáð…j•ëJeÕ·íñáßK øßöý?gÚgöƒðg‡´ï øgöø„¿|=ãiº ½½Ÿ†,þ?éŸüaá=#öˆ²ðÕ•ºiúJ|AGÔ#²2¬º¤—Wz…Ýé‰ð‹…̱™—q.qÁÏ1¿×pyo-\ ù¹›öZèÎnT£í*G Í(ácB F/ãæ+“åù?ðoø…§•幆oÍC3§Ëìâ¾³‰TqQÅ9Bœa^^ÊŒñŠ0–:Xš‘”çòŸŠhÿŽþ8ý¥mkïüGÔfý 4Eðîà?h:†4O„ž ðeì×þ ø]ð‡ÁÚ\#Að7à Mqpl¼/oôšô÷WÚ§Žu?kÚ¦««ß{¼5á pî1ÂJ…LãQ©C8Ì3Y*ø¬uÒç«BñQT(ʯjÓª©Ö­^µjTªCæ8ÇÆŽ4âìÓ'ÇÇK‡ðœ9ˆ£Šáì§#‹Ã`rÌF>Ά'–No‰…ì«ßN„ªÐÃá¨P¯^•OÞýþ ñðÓ\Ó¼ñ;ã¯ìðŸâ·ímðÃEŸHð'ÆHæðž—%›HíuöÄZÿÃïxïá…­åó SHðÆ«¬YÜ^ ïíZÈ\Çagù&eô}ÇÑÆâ qGÔò¼j•,FLqÂͧ<5J¸I{<Æ•Òjéá“J1Ÿ<¢êË÷œŸé]–b2ì"ã þÐβÙG…ÆeÓÁÏ SN.4ñ”¨ãàëeùdâêaªã%ç*\š£ߎµçí ûCþÕs~Ùß|u-ŸÇ?ûÇáÕï‚VÿÃÞø3á_ jrë>ð‡ÂÍ:ëSÕõ K¶Öç¸ñ·}{«jZ¿Š¼Oyw«kW’Àºf›¥þ£Á^pÿå˜ìXC9ÅæØg„ͱ˜ºŒ18Y_›CåUa°’mJ¥5V¥JÕ# •jKÙP/ÄüFñ¿Šøû:Ës 5'ÃØ ‡~E—`13”ð˜èròf8œZ…ŒÇÁE•WF•,=)Õ¥BŒ=¾&Uþ•›þ #¦Øx“^øëàÏØëöYðwíâk‡ÕµÚzÓÃ^+ÔmŸÆóBÉsñŽÓö½ñWü(Ø>8\^·ü$.<'x/ÖÖÒÔ8o†²ŽÊée.êøZr•J’œ½¦#ˆšŠ©‰ÅV²ukÔPŠr´aB©BtéÇñN0ã,ÿŽóºùÿã>·Ž«Ñ¥GÙapXJrœ¨à°Tj†“œåóJu*N¥zõ*â*Õ«?ן:>­â/ø$íUáýL¿ÖµÝwöÂýœt}FÒ­'¿Õ5m[SÓ/ì´í3M°µŽ[›ÛûûÉáµ³´·ŠIîn%ŽcyTü>yZ–ÅnÄW© 4(p—V­Z¬£ T©R©Ô©Rrj0„!)ÊMF1M¶’?Jáªñ^q® F®#‰ãΡ‡ÃÑ„ªÖ¯^µ)Ó¥F•8':•jÔ”aNNSœ”b›iÐ/ü¿þ £þÄž¶ø½ñ‹JÒu¯ÚÓÅúsc-oªX|ð¾«gIá­áLÖSøûS·–hü[âK–;kYφt+ƒ¤¦©©ø·ùóÅjñv*y6KZ­ÂÔ\ÍsSžqˆ§6Ö&´]¦°tÚ‹Âa¦“r_Z¯jèÓÃXøà•ÁCˆ¸F¿ci>H·Ôø{ Z <Kšœ³±rŽ?MµÉà°³t"¶7õ©™™Ý™˜³3ÌÌÇ,ÌÇ$±$’IÉ<šüPþ€ ( € üQÿƒ‘å·ÿýÚ¯þ¶Àºþ“¨ € ( € ( € (åú( € øöµýž?e_Ú«â¦mâŸØïZý¬¿h?ü6ðÿ†oñ¿Æ_ <-ðóáý¶µ¬j¾_‰$ÒüqáhkzŸˆ¼K¨xkI°Ð|_ñÄP‹ûÛm  èwš¾‹ö|;â𦠮]fÿPÁÖÅOVÔ2ÌW6&¥*'WÚcpXšªô°Ôcɪk“™AJSrüï‹|(à:̨füUjf| <ºŽ#ûS:Àò`¨×ÄâiÑöYvcƒ¡.ZøÌDý¤éJ«öœ²›„!ü{/ü»ÁK$‹ü—á,‘+¸ŠGÿ‚¦üx‰¤Œ1íðT‚6eÃ>ÂJïldûßñ<óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸÿ¯ðwý!‹áþ-CãÇÿ0ôÄhñ/þŠOüÃäüêø—OÿèŽÿÍ‡Š¿ùøðêÿÒ¾ÿâÔ><óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸÿ¯ðwý!‹áþ-CãÇÿ0ôÄhñ/þŠOüÃäüêø—OÿèŽÿÍ‡Š¿ùøðêÿÒ¾ÿâÔ><óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸÿ¯ðwý!‹áþ-CãÇÿ0ôÄhñ/þŠOüÃäüêø—OÿèŽÿÍ‡Š¿ùøðêÿÒ¾ÿâÔ><óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸÿ¯ðwý!‹áþ-CãÇÿ0ôÄhñ/þŠOüÃäüêø—OÿèŽÿÍ‡Š¿ùøðêÿÒ¾ÿâÔ><óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸÿ¯ðwý!‹áþ-CãÇÿ0ôÄhñ/þŠOüÃäüêø—OÿèŽÿÍ‡Š¿ùøðêÿÒ¾ÿâÔ><óGüFÿè¤ÿÌ>AÿΠÿ‰tðoþˆïüØx«ÿŸ‡ß³ý´±WÃk/…w¿±ÜŸ²OÂøÀÞIâÿ |}—ö‰øu£x×ÅööZcYxÏÅþ,¹Ò~ ø"?\èú…£ë7~ŸÀë2éú%æ» kúæ‡c¯|Wq&uÅ8å™g¸ß¯cU xe[êø\/î(ʤ©ÃÙ`èaè¾YU¨ùÝ>wÍg&’Kô~àþàŒ±äü1—feÒÅVƱ˜f˜ì^cÄnñ%oÇW©‰ÅVöT3ŠT){Zõg?gF•:Pæå§A(¯8ÿ‡Wø;þÅðÿ¡ñãÿ˜zêÿˆÑâ_ýŸù‡È?ùÔpÿĺx7ÿDwþlJq…8sÔœ¥ÉN„ohÅ+#ö< –à°™v —°Áàp¸|=JžË …£ = n¥YN­OgFœ!ÏVs©%Îr•ÛöZç:‚€ (  Q‡IÕí5 „–H`3ïXB™–ÚhÐîŠHi ºð9 æKXÿ‚MÿÁ?¤óî¾xþ »û@ø~ßQ»ÒŒþ鿳Ôþ ½¼Óîo,µð÷‰¾!ø+áæ›ã6ÊúÆ{uïËâ ï­íµIäŠàCû×üLGÿЯ…¿ð‹6ÿçÙü¹ÿ“áÇý¸ÛÿYÿCfü:öKÿ£Iÿ‚Úà'ìQÿËŠ?âb8×þ…|-ÿ„Y·ÿ>Ãþ%'Ãúq·þ²/þ†Ãþ?û%ÿѤÿÁm?ðö(ÿåÅñ1kÿB¾ÿÂ,ÛÿŸaÿ“áÇý¸ÛÿYÿCaÿŸý’ÿèÒà¶Ÿø ûòâø˜Ž5ÿ¡_ ámÿϰÿ‰Iðãþ‡\mÿ‡,‹ÿ¡°ÿ‡OþÉôi?ð[OüýŠ?ùqGüLGÿЯ…¿ð‹6ÿçØĤøqÿC®6ÿÖEÿÐØçÿd¿ú4Ÿø-§þ~Åü¸£þ&#èWÂßøE›óì?âR|8ÿ¡×áË"ÿèl?áÓÿ²_ýOüÓÿ?bþ\QÿÆ¿ô+áoü"Í¿ùöñ)>Ðë¿ðå‘ô6ðéÿÙ/þ'þ iÿ€Ÿ±Gÿ.(ÿ‰ˆã_úð·þfßüûø”Ÿ?èuÆßørÈ¿úøtÿì—ÿF“ÿ´ÿÀOØ£ÿ—ÄÄq¯ý ø[ÿ³oþ}‡üJO‡ô:ãoü9d_ý ‡ü:öKÿ£Iÿ‚Úà'ìQÿËŠ?âb8×þ…|-ÿ„Y·ÿ>Ãþ%'Ãúq·þ²/þ†Ãþ?û%ÿѤÿÁm?ðö(ÿåÅñ1kÿB¾ÿÂ,ÛÿŸaÿ“áÇý¸ÛÿYÿCaÿŸý’ÿèÒà¶Ÿø ûòâø˜Ž5ÿ¡_ ámÿϰÿ‰Iðãþ‡\mÿ‡,‹ÿ¡²{oø$×ì“=Ä?ì£ÿ®µI¥Ž7¹¹´ý‹~Ïn®ÁZiþÏ©O?•;äò`š] ùq;aIÿÆ¿ô+áoü"Í¿ùöñ)>Ðë¿ðå‘ô6~õþÊëðSÁŸ¼+ð‡àÞ›ã½+Pø)á=/ágˆ­¾5øbÛÃ|?£<·)Ò<=ã{1 øzTÓ5Õ®5/QѬÂ>%…Ρáë½JÚ9®äüƒ>â 
ȳ,~etéO1ÆO[ …u¡‚†&táJU(ЫZ¼¢Ý:q4êTŸ*·=´?~á~ʸK'Êò|¶5kSÊrèex|v5aêfU0tëT¯Uñ40øhÊ*­YÏ’*T¹Ÿ7³æÔú ¼CéB€ ( ÆÛ›öý‰hïÚ_âgÅX~ÞŸ~7x£Nð^«ã¯‡Ÿ²îŸð»Xм!g£xGð…×SÕ!Ñ<me¡x£â:xƒY–[cKңЦ…­ÿ[ḣ…2<A—`2Ø<Ö}\n0©‰—Ö±˜ŒmOk:¦“µ\LãZ0µ5.i'9~ ÆŸG~ ã®&̸«7Í8§˜fŸSúÅ»”ÑÁCê9~.¥ìiârLexóPÁÒN|EKÕ”åH8Â?çÿd¿ú4Ÿø-§þ~Åü¸¯þ&#èWÂßøE›óìùoø”Ÿ?èuÆßørÈ¿úøtÿì—ÿF“ÿ´ÿÀOØ£ÿ—ÄÄq¯ý ø[ÿ³oþ}‡üJO‡ô:ãoü9d_ý ‡ü:öKÿ£Iÿ‚Úà'ìQÿËŠ?âb8×þ…|-ÿ„Y·ÿ>Ãþ%'Ãúq·þ²/þ†Ãþ?û%ÿѤÿÁm?ðö(ÿåÅñ1kÿB¾ÿÂ,ÛÿŸaÿ“áÇý¸ÛÿYÿCaÿŸý’ÿèÒà¶Ÿø ûòâø˜Ž5ÿ¡_ ámÿϰÿ‰Iðãþ‡\mÿ‡,‹ÿ¡°ÿ‡OþÉôi?ð[OüýŠ?ùqGüLGÿЯ…¿ð‹6ÿçØĤøqÿC®6ÿÖEÿÐØçÿd¿ú4Ÿø-§þ~Åü¸£þ&#èWÂßøE›óì?âR|8ÿ¡×áË"ÿèl?áÓÿ²_ýOüÓÿ?bþ\QÿÆ¿ô+áoü"Í¿ùöñ)>Ðë¿ðå‘ô6ðéÿÙ/þ'þ iÿ€Ÿ±Gÿ.(ÿ‰ˆã_úð·þfßüûø”Ÿ?èuÆßørÈ¿úøtÿì—ÿF“ÿ´ÿÀOØ£ÿ—ÄÄq¯ý ø[ÿ³oþ}‡üJO‡ô:ãoü9d_ý ‡ü:öKÿ£Iÿ‚Úà'ìQÿËŠ?âb8×þ…|-ÿ„Y·ÿ>Ãþ%'Ãúq·þ²/þ†Ãþ?û%ÿѤÿÁm?ðö(ÿåÅñ1kÿB¾ÿÂ,ÛÿŸaÿ“áÇý¸ÛÿYÿCgêÏìðsöaý’ôoà÷‚| ûOø+Æ¿üIeñSþý´|)àX5·Ö|¡.•q¬|2×ü  ÃàgWðÖ›­»êÚn‹¯ë>)Ð’èê—pYiÇ=Ççücâ{Ƹ¬6/2§ÁTÃ`ë`9r˜c0Ôëá«Õj”ñ ÅÊ¢”áÅJ0j+š «Ÿªø}á? xq‚Æ`rй–eK˜áóN|ö¦_Œ«†Æa¨OF®árì )J4êNÓpX¹7 ‘NÇèô’I4,®ÒI#3É#±gwc–fc’X“’O$ן§  € ( € üQÿƒ‘å·ÿýÚ¯þ¶Àºþ“¨ € ( € ( € (åú( € ñø'¶±uâ +ö¸Ö5 I©?í¿ñÃEº¼f’Iï-<¥xÁ>#1?Ùžðæ…¡Ú¢íŽ .ÒÑV1@¡P@P@P@|éû`Aisû&~ÓðßÙÁ¨Y¿ìóñŸíW*Z ˜“áω¡”+íÆä+"·u-ö³âO…µí^öwi&¼Ôõhú…ýÔÒ9/$·wË#±,Î嘒Mz½P@xÏí®êžýž¾&Öc»÷º( € ò¿ø&~¥‰¿c†ÿ'²‚ßÄ_5ÿŠŸ|i|‹ º×<]â_ŠÞ3›TÕu ˜ ·k©pÚišzÊ…t½MÒ4/+LÒlmáûÒ€ ( € ( € ( ƒ?ॺ²øWöBñÇÄ{(.ê¿ü?ãûOÉðïá µO7„~ h0i?ðŽÞGmqáCrº½Ü×ÓiÚh°Á'¿à±¾ÿ‚ŽþÇ¿¿kÿ‹ß |1ûx[àÇïü ñ…—ÄOŽZGŠ<7¤ÿÂ'ào…~/ŸÅ>"øâ/|!Ò¼0³_|M_Ë£ê:k­¥Ö޳¶±4º˜°±ýƒö¹ý”n¾ ê¿´·í;û=\þÎúÄ6šßǸ>4ü7›à¾usªéº½¶«ñJ?·ôû‰õÍgGÑ¡†ï]†Iu]WMÓ‘ZîúÖ@4ü)ûL~Ï_|[ãχ? ~;üø‘ñ?á–.£ã߆þ øµàOxÓÁ0©Å/¼9¢kz–·àûI®^c{¯i¶pG$ÉÌB°åF›ÿÖSIÿ‚ZÍâÙÿáV£«ÁI~>üPøusðOöÉðíðÓàèøw¯jšL'ð÷Ɔ_¯¼ñ½õK{¤Ö4ãÁ-áqõ/ j:µÞ©¢Þ’ú§íGû2Íñ}ÿg¸¿h¯2ü}yñü]ø~ÿÐGfu ü4_/—` óçElÁ¹lB7ÐíñÓÀß³À_Œ¿´gÄÆÕá÷À߆~4ø©ãÐì—QÖçðÿ¼?âJÏE°’{X¯u‹ë{³Òíg»´·žþ{xî.íai.#übø%ÿ‹ý±>$7ì¯ñCÆŸðGÏ^ýkí{Àö þ?ü*øÝà¯ÚGÆþðÇÄý?ûkÀ?~,þÏŸ |Þ-ø}ðþ÷ÃψüOâ¯ßÛø7HóÓUkGNѵ0¤¾Á^>x—ö–ý¿¾|{¿øOû*øgö"øÁðwàîñsâ÷íàÿh_øq x†öHb¸ŽÓCÖ|c®hÚv­u%¼ÐΖö7´2Å*¡Iˆ^/ý¥¿g/‡Ö>Ôü{ñÿàŸ‚4ß‹z–£ð·PñÅ_øjÇâVŸ£è+â^ûÀzνeoã+=+à ¾#Ô®|;&£ Ž‚Ë¬]¤ý•ÿäØgû ß?õ_xz€=æ€?“¿µWüïþ 1ñwö´ñçì!ûZ|!ÿ‚}~ųWÇ~Ο þÙÿµìgûþÐ?¿à¸VŸ ¾¿ìÙã«ÿè´Â]CIÕü;ûLøêþßFðW‹¼;ð»Â÷ºˆ¼1ãoø‚âÇ—Ú'…®u5Ô´»ö𿆠¶ÖM°—ìÁÿÀý—ÿiß ÿfüý±?d¯Šß|'ãÙóLý±¾ƒzWÇÿi¶j÷7¿ u»?x¾ÃVs£[OªÚÛk2hÖèXÇq¨ÜÚY\|ãâïø9‹öðvñÛÅWŸn-gÃ_³íâ_ÙÃöñ„þxwÅøEªè>"Ó¼!¥|Kñ׋4oŠ^Ð>xûÅwš‡‡>=öµÄOjÞ×­ãøwl!°}D鿃ð\/ØÏã—ísðËöBðÆûCx{Yøùáø»öhøßñ àÞ¡àÙãö›Ñüa¨êzåÏÀïëz´>!ñU’XhºÕΙ¯ßø/Dð·ˆ#Ó“þý{TmoÃ+®|õ©ÿÁȲ¶‡ûLø@ýš¿oßè߱߯?|"ý¤5_‡Ÿ³÷…|W¢|*¶ð&ª4KÏŠž'ñ5Å¡ák†ZÖ¡o¯.‡,úÜ~=k ø—XÖ<£è¶6ú…àÚÁ[¿e [ö€ý€g߈¿uø)'Âÿ|^ý~$øK@ðãü1ƒÁÞ ð.¥ñQ“â æ¿ã ÆÞÕ®ôm.æÊÓGÓü ¯ÞÙë é> C𠯶ùëöšÿ‚Üü)øMeÿDðGÃO?´wÄŒÿðL¿‡žñÄ{]Á>Ô|ª]üQð^¥âo x³H½‹â„zôß ¼ ªë¿5ÝsÃþ¿ðÏ…¬5}GAÒŒš°éìQÿ#ø%ûoøŸã/à xãŸÀ_³í߆cøÁû8þÓßàø]ñ«ÁšO´çÕ|â©ü?aâhZ·…|QaÜizÆâmV4‰¬¥Ôc°‹WÑdÔ€?A¨óþ Eÿ Ø×þÏãö}ÿÓ¨è ( € øwþ ?ÿ&oñSþÃõvü9 Ø*(ð7þ ûÿ@ý¨¿à—¿¿gþÉÿ >|Wø‰ñ·ö…µø;7„¾*xcÇ~+‹Pµ½ð_‰5û?é¾øðÿUoêZÆ‘c¦éë5Þ±ÏÚšÒßGžò{vPgöšÿ‚×jv_°çüóöÖýtO†ž'ðçíçûmþÌß³_Ž4_‰Ö>#ñŸ|;ñ[Kø›Åé­áø)¬þ)|9ñ§€'ðœ:®§&·á¥¸°Ôî$ðæ­ky§ÜB÷¯íCÿyÿ‚q~Æ_¿áJ~Ñÿµ…|ñJ #L×õ¿ØxWâGÄ ÿhzÓÚ®“¬|G¸øià¿éß ´ÍF;û «Kïˆ7¾¶›OÔ4ýI%6Ö·3{F¹ûy~É>øÝû3~η¿´{¿‹_¶?„|EãïÙŸDðöƒãh|áo ÜøßXñ.‰ñÂÞÖ~iúBøRÒmoN»ñ‹4d×lüŸì/í)nm£˜æŸÚOþ ûþΞý·†§ñ?QñÄ¿Ø+Ã>¿øáðãKøañŽêïCñWÄýçQø;áq¯Ùü=¸ðÞ©µ²Ó¦ñ‡µ}_Ã~ Žíµ/êÞÓmnîàù÷öIÿ‚ùþÄ_?c/Ù«öœøÿñOIø¯üqø•áÙÛ]Ñ.>üs ðßí3âØø¤ü9´ñv©ðäiÒxb×EÔ¬ïåø­{¨§Â»X оñÌéú‚Z€}3­Áf?à™^ý™|ûaëßµƒ´ŸÙëâW‹¼SàO‡3½ð¿Ä¨u¯ˆ>*ðN·wáßi~øh|ÿ OÅ£EÕ¬¦†îû@ðV¡§ilu(nåÓ5=:òì°ø[ÿYÿ‚}|lý›¾2~Öß ÿiO ø×àGìó¤êÚçÆÿé>ø‚ÿÁmÿà–¾/|øðÿöÃð»ñKãæácá7‡ßÃ_´{7á?é7Ðiz¯Äø¯T·Ñ<'á+MVæÞò "ÚêúáµwYkEôO éšÖ¯o¥ê÷60éw€Ž3þÞðq·Ã_†þ ý«~)Á6dω¿|A7„5¯~Ë¿³¯‰þ4ë_·W„<ãí: ieÓ¯µOø'Ä^)Ò-µ[Iu}Ã>ÕuKËëZVca¬Þè௵'üGöý‰4o†:‡íkñÎÏàf»ñwöž(ðoÃ_ø?Çž(ø¿6•sm÷jŸ ~øcÇ:ÒbÒ®}+V¾½Ñ#Ò¬µ› KK:ƒÝØ\ÇÇü‡þ ç¢þÇW?·ì¿´×…5oÙÃ\Ò¼1ª|]ð‡> xæÄÚÖµ¦øzÇÃzáâ6…âí]gI‚÷EÕü!e©éPjV7ú¥­žŸu 
Ó€g|)ÿ‚¼Á7¾8ü|ñGìÁðŸö«ð7¾8øKDñOˆ¯|¥hÞ:Hµ½'ÁÚ…ï‹fðŠï|)i࿊:–‘¬^ê:oÃox¯S†ËFÖnþÆmôFK`€ÿbÏø8³ö9ý©<]û{x†çá×Ã?ÙQñÏŽ<#âãð«ãýõß?f_†:O† ñ§ÆOYÿ²øgX‹ÅZܰéÿ . ³ø¢úÛ\Ià—žÓUk`¿>ÿÁ^à›Ÿ|7ñ{Æ_ kkøð¯á—ÆŒ~/Õôßø/Âß~|bÐÿá"øw©xƒÄ>8ðLJ4ˆ5½sO)Ï‚a¼ŸÇ×%ƒÃ&ðæâYáÒd©û&Áaà›?·Ä™þþ̵?„þ"|PM"óÄ>Ô|+ñ+á·ˆ|G¢iÉ,Ú†«à»о ðCxîÂÆÖ ‹ëËŸv+m6ÚçR™’ÂÚ{ˆÀ?<{ÿúý±|-ÿëÿ‚§þ×:Ã_Ù¢o‰°÷ügQýþè—žø¥'‚?xkáŽþ+øÃÆú4o üDøã+íû4]jþ%‡À ¼/ãÿé~ ²¹ƒS„xŸYÓ ÐÔéz”-«Ë6—¨6¼}ÿHý€þ|!ýœþ?xÇö—ðUŸÁOÚËǺÀ47Æ*ðW|oâG¿†ÃG¾×|'á½nËÀÑZ\i:µ§ˆõˆRxSCð…Ùx³RÑnôÛØ`÷oÙsö¬øûi|Ñ>?þÌ¿`ø¡ðƒÄš·‰ôM Æ–¾ñ_†­uMGÁÞ ÔUý…ÿä~ÝŸöy_µßþ¤Ððû:|ø½cûÿÁ.>ü²Ôí¼ÿºýŸõïØãv¯¡»Ãÿž³ð/þ a¯üFÖþ'j̪æyæý—üWñCÀ‚Ø«³xnËY¼¶A&™0œì]KâO€~~Ë߶ׂ “PÖõMPG}©_Ý\ÅŠùsûÿÉžÿÁ¥?öÿ¶‡þ¯O‰”òÅÚ;CñÏ„¾~Ð>øû>þÊ:ŸÁ/ø,Ÿ‚?i_ÚWáŸ~þÔž*ý±?fÍ7Oø?‡+Ó­€?´?xËþ Su¥þØ^5ý¢~~ßµ¯üs[ø)ñ—Sø;ð#ö_´øý©~Úþø’K1àoxÛHø¤ö?µkßüÔ|F¾6Ñ| ßÚ·‰äÓ´ÏZÝ ¥´˜ù_ðçÆÙ·à·‰?e—ÿ‚~Ö_ðPí#öœñgÇO†7Äø$¯Ä+޾øÁž Õ_‹>ø‹ üLð¿†|oàgšëLñßâŒ$ŠÆKÝ{Ãú¶“­çŒ´°®¼Eû1|øßñ›þôñïÅÿ„Þø•âï…_ ¡Ô>ë¾4ðþâÿ†ZÜ?²WÅÿÂKà µ;{†ðŸŠ$ñ/ü©Ëâ캳¿…´kµ‹X$‚`#ý¡tKÏÁ¾|Føû8·ŠOüÂ7_ ÿiÏÚûà—í1ûZÙ|JÖ Õ¾1ëz-´“x‹]þ×ñeÆŸ=†”£N·ƒÃø€Ä¿eÍ?Àe?ø6£ö\øËf¾5Õ¾ÿÁHk/´WÂ?é×òMáÛë‰Ç-~xÿFÖmcƒQÑî¼ ­øbÓXðµÐ¿Ñï<+©Ÿ j¶ïd×zb}]ÿ;ð^Ÿû8Á]¿iù¾5_~Ê?³÷ì³ñSþ ÿðïà¿ì£«þÒß±ŸÄïÚ/àÔtMÑ>$| ý›¼?ðkľ Ò>üYƒÆßð”köšK¨½†³hÚfŸb5ëÕÀ?£ÿÙSᧉ~Á ü9ðÃÅ^:ñWÄ›ÿ~Â4}3Æ>7ø}⟅>+Ô¼žñ´ÿ­u¿‡6¹ºñoƒ®ô/Ïá¿_•uHt¸„Öö›…¬ DþÌŸòm¿³ßýÿ„ÿúhîP@à_µoüšçí'ÿd ãþ«¿PÔŸ²¿ü›ìãÿdáþ«ïP¼ÐñÙû jß¶—üâ/íû1ëðNŸÚ×öÝý‘¾2~Ò¾8ý¥?e±w‚aøÁ¯éq|C°Ñt{¯üPð­Ý¾¡áGÒô xVÆûPÔŸN[]{Nñ§¤Xx“A×tíFÔßíéðþ ÿpÿ‚J|~Ô>3þÍÞø)ñ3Lý¨¼û@~ɲ µÔñ|fÕþxø‚ÊûÁµÝOÄÍá-KÇÚž•âs­xsAMÁ:¥Ýÿ….,µ*ÓQñ…¦é@_ÄIjø,'üŸþ 1ñC@ý‚kÏØÃáüÛÆ>:øçû@|Gý®þ/Áó{â½~/‡Ö þZÞêSjt»­ká½¾‡wâ="Ö ý3_mZ÷NÒ,ôv[ð<ûþÔvÿðHïø8óáÝçì»ñú‰Ÿà ¼aðcÀ·?þ"Å㟌~ ¼øðÏUð÷Š~xb_ ®½ñÃ7qZ꺎®xRÇWÒî"¶¿»³ºt‚âDûâ_ìÍû@ÉûGÿÁ¦šÎ‰û?|c—Aý›¾øŸÃÿ´­¦|*ñ´šOÀK·ýšgÙh¿/ít³ø]rúÎâ ÛNñ´Ú¯©éZÆ›&îÆö€?.¿d/Ú'ãoÃÏÙþýŸ>þ´ßíS©þÕ·íÙðgᯌg‡¶ß|;Έ|?¨|;Õô¯Žvzn«Š|àïè¾&ðÿŒ4O`kˆ|ßès_hWºzOvõ.¡û~Ù?ðN]Gþ Ëý¤µÙ‡ãí9§þà þ7ü/ýª>þÌÞoŒ>ë¿´‡uùmâ±ðžƒtÒø›Nð‹|Hñ•w¬i7rx^Ï;~Ê×^ðýî Ú|ý¿lÿàèøŸö>øçðSTý·¿co h³_‡~&xFïE¶ñþ¡­þÍ¿!Ð?cÏÛÿÀ^8ÿ‚P|lðÇ…¿h¿‚Ú_„¿ðËÂ_4ß¿ðTÏZïˆþ5|jÒ#Ô4/^é–ß¼qa©üCðŽ“ðûPðônâŸxÖóLÔ4½B;]M]kžþèóþ Eÿ Ø×þÏãö}ÿÓ¨è ( € øwþ ?ÿ&oñSþÃõvü9 Ø*(ùµÿƒ‚€?ÿàƒ€Œƒÿ²ýAAñL9Pó©ÿ<øñþ Áû~þÎ?°—…ôKøaOÚ›þ Ãû$ÿÁD¿eVˆ2hß¼£øÄ ?h?‚z;R4Øõ/‰þ×l4Ëf6Ú„ôÿcŸRñˆnÔÐ~:ɯþÈ_ðR¿ø-¯„?koÛoãì1áÛOS±ñ7Ã{oþÅ~ýª,n_‚>%ðÿÄ-6Óá?|Sãÿ‡>3:.½á=Æ_¬´ Ä>Ó.µCV²×µ}6ãÁv÷:p¼üNð¿…¿à˜_ÿàÚOÚg┟´¿ìoû?ü-ý¥~ø³â÷Æ_„:®…ñá øÑáoGð»DøÉðóÁ7>9¹ð­k¥|JÓ,´ÿ -Õî²4kqG¦®© êºEˆ~xº÷ö¥ý ¿àî|1ðGä›ã/ì1á;…ž×¼âxûÆš³û*|Q²øy­iþÖl,¼[hßt'Ð|WáUÑì]ø«ñà7íKÿzÿƒx>øwTƒÇP|+ÿ‚þʳwíá cÃ:þ›m¢øº Ä~-ð6«‰4k vÞçCÕák‹][H¸±Ô’'ºóZkxÀ?iÿà°RÛþÊßðYoø$ßüCöƒð§ˆïÿ`/‚?þ2ü.ñÇŽ4?ëþ>ðŸìùñgÄþñå—„¼{â¯øcIÖ®ôK=Jóľ“JÖmô¹'ƒþ»í-eÕ|9¦ÚÈùâkøjÿÁÎÿðQÿÙãÃ>*Ñÿ`ß?°ß ~øÿYðWˆ¾øsã÷Å ü&𥯊~!x?@ñ.•¡êµ¦“¨xSÆÓjìÚd3?Žìïµ±¯jv°o~Öžð·„àˆðlޱáIеkÛ_öÕlµ-2ÆÞÎúÓQøà_ˆ~8ñµä0F’ÅqâŸÚZx[•X>£«ÚÛß\™'†7PïJ€ üËý°?äõ?àÿõÏö¸ÿÕWá:úR€ ( €ñ.¹¬èš qÏwâ-[G³Ð4ë{KS´…À>QÕ?àèo€Þ1øSáO~ͳ7í'ñ·þ 3â¯øDü:ÿ°”ßþ"ø?]ðŸï¯të?XxÓâ ç†/ô 'Â~'Y{Oé©®_ºA¦Ï¯h~´—X¹Ñ@<â߯}3þ éÿêß·ßü“Âz×Â?Ù÷öˆÿ‚vø/á¯Â‹¶¾ñ—Æï†ßþ5éðÖóâÁ¨üeàßÞ[ê˨øoâLVºí‡†4«{Oñµ…êivZW‹µe³ü\øóðóÆòÁà»?µ]ŸÃÏü(ý—l¯ø)wÂÿ‹Ÿ²Oƒ|aá‹ÿÝê_ ›ö‰ŽòOˆzO‚µ {It=Åšo‰<)¤iþU¬6/„.ôû5†“k3€~ðþÞþð·Ãïø+?üíkàIð´nƒûEø Æ=ÆßMÞ ð×ÁŸ„:†"û,qìÑ4‹M[‚ÃO¶¶VÔ(Ô]ͼó[á¿ÅÏ|8øÿjþÈ<½Ö<)ûCø»ãíýûCx_áîµáOÙϯ|Ô#ñ%½l5×ÑáþÇ¿]SF½Ó>ѬAs«ézÞ•ªi_i÷kp @þÖß²¯ÅŠði7ìYáهᾫâ™<+ðïöYý þ+ü1ø{¥ÜM¯üDð_Ù/¼UñBxt]¿ñðñ‰ì~$k˽ÕûAáûí`G<úzPkñ_ö‘øÿkÿ‚­ÿÁuïø&>…âÏi±^¹ãψ_´ÿÄ?á~øgà/ ½;áÑÒþxÃYñ†|=§ÅªËeáox.ÓÂZTúމe¨xª7Ã÷W°ëzÃÛ~Fübÿ”"ÿÁÃ?öœ-oÿWoÂjûÏöѼøûÿÁm?áª>.þØßàžß>=Á<~øà¿ísáÏÙsŸµ7„&Ôü-¤ü4‹Åÿ³åæ™ão‡?´Ÿjúƹám{ÇO£èÖÞ!f¼Ð¬ÚUÒük7ÚÀ>yý f¿…^ÿ‚jÁ< á½Gã¯ÄO‚à¸~ñÖ™eûQ|ðßÀ¿jž ø¤/ôiÖŸ 4 k]Ñô†Þ*½´ñ¿á˜ü×XÒ¼M5펅‰w¦ÜÞ€¢4Am VöÐÅooqà F‘C 1"ÇQETŽ8ãUŽ4E ˆªª€(óãþ ¯ÿ&ñÛþº|*ÿÕÕðæ€=²€ ( € ( Åø9þPQûÿݪÿëaü  é:€ ( € ( € ( €>_ € ( ÿ‚mȱû\Ùû~Óúuðí~P@P@P@ó¿í{ÿ&›ûPÙ»ükÿÕkâjùßödÿ“mýžÿì‡ü'ÿÔ @ p € ( 
ý©á–çöbý£míãig¸øñzbA¹å–_‡Þ!Ž8Ðwgv £¹ PÔ_²äoìËû:Ã*”–/?ã‘Ttøáõu8Èʰ àö v € ( €>ý‡?àžßÿ`%ý¥àç‰þ'ø”~Ôß´j/ˆ?ð³5¯ k'GñÿÄAd5½Áßð‹x+Á¿Ùþµû ?Ùz~·ÿ µé>×â ì®À»¨ € (óŸþ 5isw£þÇŸf…æû7íáð î}ƒ>U´:<Ù›Ñ#Ü»`sÒ€=†( €>ÿ‚Ž#Éû|SHÑävÖ>aK±ÇÆÏ‡$áTp'€ è(ö€ ( € ( € ( €?3¿kôvý´¿àžN¨å#ö¶ÞáITßð³Â7°]Ĺ#q ÐÒ4P@ãðJ´xÿ`/ÙÝ$G×Gñ®QÔ£ üOñ±VŒ‚Èäzý € ( € ( € (óÛþ ¨'ìñÕ#G‘ÌŸ ðˆ¥˜ããOÖ8Uœ($àp'@Ó@P@Pâü‹ÿ((ý¿ÿîÕõ°þÐô@P@P@P@/Ð@PÏ~ñÆû!x—âºøÛBñŽ¥ðkâçÄ}[âÞãŸx/ŦøsâÝ{@Ьüsáoˆžð^™­øºË@×u­ü[àÏhþÖ´ y5oøwÇW~ þÅðu6eÿ‚§þÄË$2|Cø“¾)7Ùû/þÕR¦ôb­¶H¾ .m ÙøKÀsx‹Ç>öO xLð—‡4 è±<?†tM+ÃúL9•áÓ4k4ëžB‘ãµ¶‰ÈÈ,@ÍmÐ@P{»K[û[›ëk{Û+Ûy­/,îáŽâÖîÖâ6†âÚæÞex§·ž'x¦†TxåÙYX‚ó®…â¯Ú“önðv›ðËáoÂO~ÑŸ¼¦éø]sâ_7 >"øsÂZ]„¼oqwðËÆÞñt^´ŽM?þ2Ó®´sPЬ´}7ÅZ©â+}_Ç~!‹þÃöòÿ£øiÿ‰µ¦ô6xÃÁ^;øñáßxDøaªÝx›á·ÂOø®ûÇå|m¨xXð´þ<ø‹ãMGÂþ´Õ5]Ãúö±¦x+žРѼ9>·­ëúLjœž9ñ§‰u],Cáë-GÃ>ðþ½?Šá/ðhÑÔP@P@~(ÿÁÈ¿ò‚ÛÿþíWÿ[à]IÔP@P@P@òýP@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@Pâü‹ÿ((ý¿ÿîÕõ°þÐô@P@P@P@/Ð@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@~(ÿÁÈ¿ò‚ÛÿþíWÿ[à]IÔP@P@P@ó+#20!•а=C)ÁÜŠJ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € (ñOþFB?à„¿·ëÿ 7쮀óÕ?l/»c"÷Ï<Æ@?¤Ê( € ( € ( € ùÿÅsiºÝì[HŠy ݹƆá™ð¾ÑIæCÏ$ÄOpH?@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@~9ÿÁË:sXÁnb[÷ý™ïÊÆÿ¶ìö.{ƒ,£<ƒ)ýP@P@P@P3âuË #Ú—ÖÛžÖCÀ|žÞCÐ$¸RóŠ­†E` ž ­¦’ ˆÞ¢b’G"•ta؃ùƒÐ‚$h*( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( € ( €; xfMbá.îP®™€¹lµºùô&<ñ<€Œ.Qå?ÿàèÐüŸöåý™@`?l?Ùôà: ýÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨ÿ‡±Á,¿é%Ÿ°þ&GìëÿÏ€ø{üËþ’Yûâd~οüñ¨Xÿ‚ Á(5µjÿ‚“þÀ):±ÝÃûdþΉ:ŽÊÌ~"‘,`òE`¹b› @<êóþ Aÿʼn‰²ÿ‚šÿÁ=îãÏOÛ'öw·›ÕOÄY"8î|àOeì3?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@ü<¯þ «ÿIÿ‚~âiþÍüó(ÿ‡•ÿÁ5é#ÿðOÏüM?Ù¯ÿžeðò¿ø&¯ý$þ ùÿ‰§û5ÿóÌ þWÿÕÿ¤ÿÁ??ñ4ÿf¿þy”ÃÊÿàš¿ô‘ÿø'çþ&Ÿì×ÿÏ2€øy_üWþ’?ÿüÿÄÓýšÿùæPÿ+ÿ‚jÿÒGÿàŸŸøš³_ÿ<Ê?áåðM_úHÿüóÿOökÿç™@Eÿ(ÿ‚iHÁ_þ Mÿø€d|òþÚ_³‰QîD?æ|øB} ³Kÿ‚‘Á)íÙ&Ôÿট° ã®Ù¢ý±ÿgh­ÃÑÜüHóf^‡@3Ã+/PAþ ½ÿ®…(¿à¥ðOØ£B¤qþØß³š"(à*"üF ªí@ˆŸðqçü'öøãÿcý²~üý¸dŒ|Qÿ 
[binary image data omitted: remainder of the preceding figure file]
manila-10.0.0/doc/source/configuration/figures/openstack-spectrumscale-setup.JPG0000664000175000017500000022713013656750227030014 0ustar zuulzuul00000000000000
[binary JPEG image data omitted: doc/source/configuration/figures/openstack-spectrumscale-setup.JPG]
[binary PNG image data omitted: doc/source/configuration/figures/hsp_network.png]
ÍåóœÆri±“eOñ*p\¥Š!£*¤¹ø£u¸÷¹5(ů̿ò¡á¿ø,X€N8Á]G’ƲtwGCiÖ1ê³ RéF£áæ•‚[Sùp¸#£a<ö¢alôÅ,‘𕲣>GªN­våÊ•®èi§†—¿üå8öØc±Ë.»8ÑÒ1n[áÒ­ëîû*rWóŠù¹ò\a£u5Ç£üú ’e·e¹dvÒm‘³Tb\Äîtx«¡¼S)§†ÊÏþþ~ì³Ï>xÕ«^…›o¾‹-råz;"J»>’ײªŽ(EVŸM†al)v1Œ­„~¦ŸÜÜ׎*·’•@ëgäßÿþ÷xÒ“ž„½öÚk½€Hj'ËÉÖ@~Y”Ï í—û+æaƒî1^à¤6ç\•ÔÞ=U’TImJ-vUTbëª"(”P¸Ü¿õkn$´ ¢ÌSå§æ|òÉøæ7¿‰ÑÑQwôÀ±#¢¾RÎu\\…É×M9®õ Ã0¶„ÉwpÃ0¦ ÕI•Üê‹zGÿ’–dIÐõ3ò]wÝ…ƒ>ØÉWù‚œ$EǹuKÛ¤¨E)«Jc3'¯ÒYJ5Ç|å3C! ZSU˜÷wËÿ qrkA%¸ÚÖË)ËNnC‹ö•0¤ ËŸÉËÒé­‡ò¯¨k[T&(Ê_κúµE]Y‘JB'Dôr½x#m“R*-ZÿÞ˜›£Œ—‚[ÔµÕü­wË,¯‘Rj%¸[MºN•ØêšÚQ¯']3:®òz)¯²y<Ã0ŒéÀäÖ0¦²¤MCýt¬/m…’§w$$$¢”£êÞJl%Zú‰¹ßzǘ£52Œæm—â×ê¥R{ Ó“ mÑqSN§è0ôâI·‹–2z]dL»+%ìÄNcM§Æ[m%’]tU¤£Òv/¸õ’¸ÇøÚný˜Ž›Óš·–V*ÿ”—z1Oy\æ¯Æ‡††ÖOï¨è3²ñgAדòº¬«iÃ0Œ-ÅäÖ0¦òËZ_ÊåO¬Æ4B)ªTBÔ«>ÕפäÅ’ œq—™!íuàùžÖJè!Mºy‡ëu;¸õÖ[ÜKY·Ür V®^ã$µE‘E"Íbnƒ[ "0:žG Ê}%\¦â\Mlj¬w"=Æ´2¹´Zò.‰7 ÃØRìbS@_Êei”†*Ù4¹Ý:|ãK_Ä#Žx{âãqÍ5C¥J ¢œª1°?_øÌ1xÌcŽÇ·ÿ÷ûè¶)¶”Ò_ÿâg8éq'àð‡އ<äa8šëuÔÑxÑK^%K—à ëNÿrùÅ8ê‘ć‰cõsôñØÿ€ýpâãOijžýlœwþO±víÒ­]ûâŸ?;僢æ†al &·†1Mègûò§Ucš¡¨Ž âŽ;ã–›oÅÚ‘q¨ ¹^öòpù 7܀ˮ¸Kî¼ïl?:ëûxÃk_‹k®úÒ¸‡0ôÐéô0<Ú¹?ù)þåéÏÄ¢;—"ÉR̨ãï7\Å‹áê¿]+þr9-ZŠ¿þõJ\xáïñ¼ïy÷ ¹5çšVÊD«{kÆtarkSdò—³Jžì'Õ­…rÖàL´ÆZhºê)ç©É/O&›ñ¨šBš!î©‹ãŸþÔ§°lÙjT+!Þÿ¾÷à²Ë/ÅÏþ3¸ÿèÆ îX¼_ÿÖ·Õ*<‰Ûvd|»Ìßïzç»ñÞ÷¾?éñÜ‘z”ËñÍoü/¥÷Ö‰Ó‰>3*±ÕçGŸ#Ã0Œ©`߆1Eô¥¬º‚*µÕÐ~NÝ ä>Ɔ;¨Ez¹-FÅïÀÏU§–yß ‘`&’\Íc…¨4*ÅÈðˆ«"»ÇÂÝñšW¿x(Ž=ö8|ó¿… `æ`–-_ ª+|Šm­!ªØûàCð¦ÿ÷Vü¿ÿz;Îþ¿3ð„ަ8ó¡%ʰ|l:-3¦}~ÊB“[Ã0¦“[ØÊ’Û²¾ 1ݨd/D'ˆ‚ —üù"üèßÃy?ý1~vþ¯ðû‹/§ÜV5û)H=ô `öœ]Ðìkà†›nÅK_ö*œwÞÏpã7aö¬Ù¸ìò?cñ·ãcû²Ní‘Z]Š%«> ø9ƆF¶0³ÞßK)µ1b?±Z [{@4 cª˜ÜÆ4 /dck¢–TÝèvr|ðÝÿ>ÿ¥8õÔçàÙÏ=ûäÐKÆøBÔ‘¤>žþôg Óî"CüêW¿ÀsŸûuä£ð”'= ÿú‚áÿ¾{fÌ_© R«CÝîúY‚;o¿Ÿûü—ñÕ¯¯~Íëð› .Džåè›9ý}Mk0a+0YfõhrkÆT0¹5Œi@r« ’[+½Ý x¢H%zúéÚÃî{„ý9Úsì·ßÞØsÏÙ®D·×Õ÷js7À¿½îuøâ—¾€Cyúú›˜3w–káo×]‡ ~ó[¼öÕ¯Á«_ûjd=5ß–Ár'ºË—,ÂÛÿó-xÇ;߉Ÿw>ÖŽ¹j|ä#pÜ‘›H1Ý”ŸUMÜÚçÈ0Œ-ÅäÖ0¦²ž }!o%(·TP×BB½1ˆO|ü“øýE¿ÆÅü.¾äWøðGþÕ*ø>2$±ÚÅ­ày/z!~õË_à7¿ù ¾ó¿ßÆÿ|ê“xÒNFµVQa0Î9ë¸é–Ûà‡5¤ܤ3ލâaGCöP<ôðCñ´Sž‰Ï~ö3øÊ—>‡nWíáÓMYR«Ï•܆1ULn c èËX_Äe0¶ݤ‹J5tZk4P­41kö f4Ñ׬Pj)E£Z`Ù]+]7Á{î±'þýu¯Å^{í‰ãŽ?/}ÙKpæwÏÄq9>O——ûÈ ˆjœð9 ñèãÁÅ_ˆK.½¿ýÝðíÿû?¼ò•¯Äü9sÐÏý›Ün]¬Î­aSÅäÖ0¦É_ÆVÿvëÐìësmÒªé/ßQ«Ã•Ðæ”]ÝÊBuK›§®³z£IW °zõZœÿ‹ŸãÓŸý4Z­1t»üòç?Çõ×]‡8Í]õ…j¥ñNB¹å¹ëuÑ-~g¬a¥?¬Â§øV¹ß ·s»5(??öÙ1 c:0¹5 cû'÷Ñëé'kŠmè#Ícä”Ñ R+ J.ç«ä•šf Îjà/xgÏBwñÎwý7æÎ›çÚÊ}îsŸË–¡Y«á™ÏxöÞg?d¼ªÛWuѬW‹x¸Û¢SÞ¢¬ÖçZåÙ³ö Ã0¶kLn ÃØðQ«ÀW‡ è"¬L”ô¥ú ;B³: ^j*j)}ômïxÞòŸoÅî{ìÉyUDQ„€b\«500Ї|äÃøÔ'O§ ç¨5ú¹‰ŠnÕ D9ãQ¢B©²^Nv%Œ&·†aÛ3^n•› cJè§Ôññqt:õupw¦ËÔÀ¾ŽQÃ/|á 8í´Ó°Ï>ûL,ÝFð.Õ]7¯^AZ ÑãŒN{sêýð»1²NŒ<ªÕ­ÌC¨ê Ô T9k†;݆;ï¼a¡Õð‡Æeu¶èB½šE‘ïä7§@¶;ˆ(Á:f­QÑ9Ízúý¸oÛ– èšÄ-·Ü²þÛ™Ðñ(¯Ëãp/†al VrkÆŽeT]îªîläçh6$?œUáGÑâe3²JO‚ªšz¿ °ç{áèGcŽ>;ôa§ÈÝü´Š¶«Öªˆ8/¥k~sBlÅúÇlc©5 Ã0î?v§6Œi@%N ;S‰m‰^®ê^¸,]+çm3˜­Õ¾*êµ ûPE€¦Wsõl½Àƒ×à×TêúªUT)ªØ(ª¸ÒÚjµŽZ­‰ ¨0Žç‡®å… ¤G·w/—Õ'—ñ8kÜ­‚JnÝiõ"¸ ¶ñmS¿¨ä¼,±ÕxÇK Ã0Œ1¹5 c³¨R%Xa:ÁÚæ8Á,nZGÔ£˜§’TÍ×̉Ñ2l`Ò .Ü}鯔ÛA[¸9®Ô–áÞ7Ÿv”ïjûUù^ ®ýdo†qÏ÷mÃ0Œ{@RU–J—R[vZal}”×*1Wþ«Ä\ã–ÿ†a÷ŒÉ­a÷Êä}šÍ¦ûI\¥‰Æ¶AR+fΜéÎ…JmËóa†aü#&·†aÜ+ew¨jVo±/^¼Øäj"¡]ºt)öÞ{owTWÍš†a›ÆäÖ0ŒûD¥…jîl·Ývà 7ÜàäV%¸š'ùíõzö’Ó)«|(oˇ‡rÞí·ßŽã?¾è9mÒrÃ0 ã1¹5 ã^Qi¡äµ^¯ãQz®ºê*œþùn^£Ñpë¨$QÂeuA·åŸäUB[楆«W¯Æ…^ˆ¿øÅnZy¾Í[«0 ÃØ°N cŠHFÆÆÆÜÏÅAM—õ$wÊR[—·Ýnã׿þ5Ö¬Yƒ#<tfÍšåÄËn'[ŽòW¥ßª†P«ÕpÓM7ášk®ÁÅ_ìÄváÂ…n™®3­¿³ ëfrÝnëÄÁ0Œ©`rkSdg—[•ê¸f̘åË—cîܹÅßÿþwœsÎ9X´h‘›/éÝ™Žû |€ØÍŸ?Çwžö´§¹ùFFFÖKߊ+ÜùØY0¹5 c:1¹5Œ)²³Ë­n:&uâ Z«ŽOUt¼*mÔOèj2¬\߸ÿ($y ÊÓò:*¯+¡ù*ÙÝ™ª&èš1¹5 cº°:·†aÜ'W•ªz‚D¥‹’,‰®|•/;m-ó†@éËyûbð8ÃË8'ϨûÜœRè2\(–m˜WNo¼Lã ÷µÞæ/+R«¡ê"—©×øäeÅxàù¨Uªn˜ôbøۀ§¼eÆ"Kµ^Ñ›Jq Ã0ŒMÃû0al1*QÛ™Kn·r'ƒÊW=“{¼yIp5–s¥–A‚É? 
‘­?å°Ü^”·½{Z¶¹ë‰{^¦4c/Sk+–ºþÖ8êó˜tX“)ÖØ€›Þh} YÉ­aÓ…É­aL“ÛmAQ2*±•Ê‚òš{’¡òǧœ¾Ç[™—ò/åVË\w¹0LßæÜ`UÚòø"Wª[ íîi[7'ºÄLn ØNLn cŠ˜Ün 2Nn%vÌcj`)·™§iÎWýGFýM\Iî “ãIÄ7.нx$… on-“[Ã0ŒMbrkSÄäv[@aU=V7®ÒP© ¤¶Ï™ß)™¡’÷dì WJŠçJ–7¯9ã ë}wHark†qϘÜÆ1¹ÝVðVÅÿÊY_yë*%OC‰¡Jo5îç«¿:Í¿ûœûϽű©e9Ó±¹ä^Ê„gED¤ŒkcÙnÖTf;ÂäÖ0ŒéÄäÖ0¦ˆÉíÖG7©Eåmž!dˆ| ³9MÕí/çUB$IŒžZ9§æ¶--Pâë^B2Æõ†È7¸ïew‹£X‰lb™‹ÃCÂNž%4í{Ôs^#‰Ú.£”gTâɵ†:Ëhµ\skÈW¹—bX´›@Ê™;8&·†aL'&·†1ELn·>z]Œè1à”{ùÊ•rfÅ ÙDð˜ÿrÛ<¹nYjZž Ýê& MO÷µlS뉻/Su‰ÔçõQΚD(¡å5¢.ŒµUR̳a¢Ýi»uúMtÚm×4˜VR4“÷äö¾‰¸wDLn ØNLn cŠ˜Ün}$·ªcëj°Jôò„ˉ$Ë‘ÒcÊÑx·‡z³}i‚è®s[(ÿmâR˜|Û•Ôñ‚A;ÎÐãÊjSصÛë{¨!ÒXÇZx¬«½ËqÅéJoMn Ã0þ“[Ø"&·ÛÞ¥™]JYä-˵” V ¢¾::£º,')ˆ@³¿²áÇû„BFÿñöZ̧ŒsY8Ñ\YÂcò(±ý•*Æ;ãˆ|ÕZ­YU1Ln Ã06“[Ø"&·[ÊÝo=*ݘ²žª‡•6îz)S\å…^¤ÓÐäÖ0 ãž1¹5Œ)br»eHXÊæ¼ROÝÌúHÒ ó1”Ô©j^ºÊ{ˆÒ‚j£Yo`ÖyUœõ‹‹ñà þ„%Cãðª th|yPA'¨!ö‰=màþÜî\bÿpU9B³‹xôW¢éª¸9ó\»µ1Ô(Û>ÓšqÅ$ðs©çE¨1ÕÞú“µxÞ“ŽÂOx–­ìág?ÿ)ÞtêÉ8`î,Œòº ä>e¼YŸ÷<•ûr/&·†aÿ@ñ›˜aƆªczùªZ¯"¥Ôv»úI>FèQ(Ãq¥Ñ¨Žnsnj%x˾‡Oýäw¸±`]s6†BÊm¥F œ6o =tlnØT®ôT"Æ ÒÔ ãÅ2·N1Âbæœyè1®Vž`ÊcîâÜw¿ ‡ÏÄðÈÆ›@”õPO|¤é,æO‡Ûuû›”°+¹5 c:±’[Ã0T!õBdS n¨¦°².å±M±R$¡äöªU VÇ9>G |×éßÄÒÕ]Œ·=T¢~Fråå/ˆ=DtZ'·nEPýW ¡j›Ô}ïä8Ê Ÿ,¶e©­JZõ]–Ç¡!Š‚–{5øá Óå¡Ú^ŽÕ•xëËž„7½ðXrç*¼á¿?‰ß]} Ö&1…{uUW¢Z­LÄ?¹¤¸ØŸa†±i¬äÖ0¦ˆ•ÜnRÇ”bä”Zê\˜Ç”ÐqžQòªË|„ƒZ‚ëïZƒÿ9û׸ê–E}X7ÖE^ësrœ¦·0•ØN +rÝÔ9¸¿w»¢p›ûJK±•xêVêV×4ªbÑ£ †>%•×F_<„“ݧ=éøõ&¾qÆ/ñç«nEÕÐÉ;èÆL?;TÉíÛ_Œ£w‹á¡´ªê´¢‹†J©³AæZLaÏ6NßŽŠ•Ü†1˜ÜÆ1¹Ý22æ‘~Z/äVÕôª…ÐÔûÑŽšhÓÞοì:|ùÌcU—ÛDU—Ï•™èvÞÁFò÷/”ÝŸó²©[£Jl½@5f‹ý8‘ur«ߘö\mÔrZíÔ&I ý~ŠÝj^|ʉ8á‘á†åkðéoƒëïF¥9½.2GµÞ@—qEèàœ·½ÇΛ…ñÑq´+"Êm-æîÒAî—ÇMŽMn Ã06‰É­aL“Û-CY”Sjó$F$Läah `4¨âŽ¡qüÏ™?ÕwÜ…µ-õOÆe q;‰±CRÄXT% PoeœVùéxXC×5Œ[ȧpb:i\hzò¸¸¯e¹v6A9ßK‹WØÔ£˜º×Íã.úÛCxÌ#öà Ÿ~<ª3ûñÝü^qÖÆ:‘zU+J›Ã4â–œö'·?~û qì.s1><ŠV%E”uQO˜[åV-%x¢u‰ä:3¹5 c:1¹5Œ)br»ed®—±¢^kÆ»P­Þ¿Ö‡®àÒnÁ¿v6VRWµcäÍÙM$¶ÊWݲTæ«Q×>Ö¤º¨ãótcû‡BMmUΛ<.6{™ëæw¢.âæs4 yÞÓ •jY’1IvÙÄiGì'x ®¼s-¾ý“ßãÆ¥£j!èïã53® ‰'‰«1ý=Šq›rÛÆßö"³†¸5Š0Uj/D®rÕ;îQdÛøÉÛ_ˆGÏ¥Ü2NÉmHá­PžÓl6rŠ­É­ª]èhw|Ln ØNLn cŠ˜ÜnT5WrE54ýÂ8ÇX‡2W÷)}º\/aVF%˜òS檆“s¸ÐÜõå¹w[®¡–)”óµ^9>y™˜¼^¹L&¯#Ê} ½Ó¦ª ª<‘pÚ§|ßtÇJ|ø«ßÇ]ërŒ´)°zŽbwGáSÜò¬J§¸K–9¿lá¡’w(·/À£çÍÅø„Ü’[^WI>‹yFqGÌ›wÈôLNÑŽ‹É­aÓ‰5fÆ} ›¬r…úé‡ÿM…¤Ü¾®ž­[²WAPipbh1OË4îs\MÏjž† 1­4aÈãbZÛ¥Ï&¶/ãsÓ ŠKËõ"ÛȈª nÔ[¾ûøçpóð Ü•ÍDLAõ²•»cQåP1„’ KV©lNç~幌ú˜ÈÖ²:Dñ×0 ÃØVrkSdç.¹•–?ï«{\õŽåÃócˆ6Å’#èëïCÓò(vy–%q¡Ç¼HÇ]æ‹ïzóiniN¹Í)¹A„$¯ Ö?‹îZ…/~ó ¬X;¿¹ø£J€P/oqý”ÉO|Š/õ.Wbº4ÑD«ô’ ½^‚. •$Þ ¤^•)-ºªõhŸÍÑó2×­ošè˜x ÜÖcä®·jÀG³Ö@}Œ=£[õÑŠ¸{®ÔäNUïåiUk¹¯j<˜¬‚8¤„sy“ÇSÆñ™7¾uéïý†+s\sgEI+¯ Ž©‡1媫E¡(G3ϵöÒšk~ŒŸü÷ ñ¨¹sÐÇãÑBǘd³ÜñÀ“][[Ã0ŒMarkSdg–[WR¨z¦÷)sNè$g<Æ—7)Û»Ö3Ìnú˜5»ŽÝÌÁƒ}0öÞcƺ1VݹWݶ¿ûý¥Xv×j‹^ׯú*”GtUô{u´tlª‰Áƒ¯pw’[¹$U›Ç«’[-¬¢öÆ@Ÿ¢ SnŸÃa„g8¹m*·”•÷É­É­aÓ…É­aL‘]nõ¿Ž¦Ú¢BÚ›  :ô½e³9sF)§w ¯Ãe·¬À/.ø-n¼õ׆k^­c\/eE5ä ™!‘'¦=I×µqKå¸ lX«rŒ3)ЪÃ+‘Ö^ã„) yÈõ¦] ·J–™¦Œ’««d7ÉÔ’CÆí㢴‹º×чŒÇ<òa8èAs°×`?þò·«ñ½Ÿÿ×­èay»Í}÷£×¥hå!bî#•p1ÖP%ÖwJÉ®ê¼Jns•܆záÍCŸ¢V¡Ü¾þÔ ¹ý.ÅWr»9˜ÜšÜ†1]˜ÜÆÙ™åVèå/•œêå­ÇË9½HUCÔ¬`y+ÅE—_‡ß\v ®[¶k{±“Wµ]›y\_.¨*Ô²âfC¥Èø©‡jOµzõ’šÄV²¦j¯”T÷ò…ÇS·¼n)¥6¢ÜIh'¦©nŸ²©h<ùqÁîûí†ÛîZOñ µb¬ëR”£†kÖK±1n_nÌ€ˆ¶îª%T™’t Ÿ2]«®Ã—(·ª–ðŒ÷r«j ›ƒÉ­É­aÓÅæT3 ãŸJ!µ”šJ±K(œE ‰ ý5üöº•xãÇÿùîøó-,0æÏF'œ–7]*_æ5‘©{Ù”’Èà§UA Üöüº«šàB^A—!ãxÆù©_¥ 6¸lbL²Q츎§õ*®E*«Û&çz¹Çí8/LCTа¶Ri«Jb;”ÖµLÓ¢vÎÿËZüÇ'ÏÃû?>úûëxÿ›^ˆù<¦Z›©in'™õ)ÆÚ^/“)Z]Ž‹B€Uš-Ý.¦µpý †aÆ6ÄäÖ0Œ{Å•Bg’Û Õúëßñ±Ï×.^Ž!ÔЊfPPëË"DI•$¢dR4»ê ÍnΡ¯“ryÛF´(“ãÈÂQ CÎé^Ò¡¿«*DŠ §N»ÒD§´îoÑ pQž{w‡U‚k†alkLn ø|¨j^¢R‰-(z* ­rôŠK/AÜk! 
s„Õ½NÛÝPÔ^këG}U)ÐKhT>Wª!×Èôó³ïº“í‹»èëvÐ×ë ÁaSCº¨÷z¨Æ© QL¹–¥ºÀ8\T"\™Z…¼B î¢Uí -ö”îA$ùdÉ ÔÂ^Êt¬’Ûz«°ú®Û°fñ]xìá{0Ý…[‰åqHêU BÆêMˆsVý)CV)§Ý8G‹y†aƶDßE†a÷HšISx~èêDf©ºåÊñ¢SŸ†Ýf  切W VËàºk‡rØEô º¡JLáDs< 0^©`¼ZA;ŒóûbÏ£·rœÃ˜­¶d{Ü_ìBäš# óªù8C ÕC ]èP¦»”ç.}RÃ˜Ã¢Ž¬§º±~›a¾7Êá:ñZ&¿íªX¨Ž´š5{ÌÑÄñG>ã£f4dL—«N¯JoØe¶ „¶ÄÕÍ$³ºyŤa†± 1¹5 ã^q/ûÐÒ${)%ÏÏRÔ8…œþáÿÄIÇŠ#÷Y€þшÒ1$^B¹M‘… àfH)µi @Uä0ã¶®4˜ÂÚ“Ø2$!æiƒìJ'µ.ÓÀ j°e)°K—Ͼä•BK | /Æ)½ã¨å”aÕñMC‡ëŒr¿CHóµ¨G]ôQºûÒÎÙÏêSðú—¼„ZœaÕºŒ¬]Ãí¹zZ㱨ÃßµçÀýQ–%ïÒV¥+ÕËd)¥ÚëQàscâÒXa\9Ó£—¿æ—BHér®ï‚š9Ky,<gÅjýÁndžaÓÝM øW|š­^*£vRT5îQ3Îø.¾wÁïðüÓžŒ¾öYxÛ³Ÿ„Ǽ/æ7CÔzm4Ò*1…“ãQÌñ¬‡*¥¯Êø•ŒR#ÞÔ®B¨!—û¨Ò÷* \?¢V(ŒÕp%°IV§jêe3µˆ1¨|•8L(ÞïP,½”"Ëx)ª5. CÌâò]¼.žzøøÀ+ŸŽO¿áxÖ1‡àÌý?¿â¯æþÕY=ö¸êlè¡RZ™Ö,¦Ø&-äA¹Ì–îWcÌ¡d7(½=¯„im2«fPU£pÜIlHÕKj: n(áu²\”.s‡ ÑÄÐ0 Ø*Ö˜aL‘»[:iÌü:X­¢oƼõ3_Ã97Ý€Á™»àßO}.¼ps°nõ.»új†k±dÕ:ŒQW·{X×KÑ¡Òå•ò0B®ÒÊ„&Ê[» Q=Š­›˜œ§)§>º”J•ÖRX£*×(‡!¿âZaèõbÄ=Jc#ëçòú¸ú|ŠãÂj»Džñø“pÌ£ÀxœcÝøüêÏÆçüà ^öŠÅž=oýàçÐNt=*(Wûó˜¶@Ò¬âã0CØës]gù0Æ:Èj}ˆú0˜Œà/y:hæýä×0Ì•ÎsCnçª6p8‘«EWƪŒ!×_æqÖµ¦Àˆ5fÆT0¹5Œ)²sË­:qˆ\é£ä¶EߨÆåvoýâøþßoD7ö1× ÐEØg~ˆãõ0yäáè¯V12ÖªuÃX»z-VŽÄX|ç2,Z´‹/ÁÊŽNc÷R”_*º§¶nS×®®{‹V©^ÊüdµjÀµ=d\P<•Ëi¯GÅõQ§ ©;ÞùsçâðýöÂì§k˜5w&fö£Z °l¨…ßþárüý–ÛpÛ’ÅX3´ý•Œ­nã/|6=êH¼ñ¿ÿ=¯(_U=àLU&JYó€ibˆ²AW­¢ë`F§çêwš5ôecøÎ+Ÿƒþº§|òë褻ð©ëNË”ê`tt”ܼxÙÎæ°Þ_Ór•›ÜšÜ†15Ln cŠììr›¡Bù¢ÜRêÆ+9"Éí̼êc_Á9‹–!÷˜&åSÇÝF–Ž¡×jc·¹spÒc‡²7æ4= 6ê Jh€ˆÙ3šKÆ€•«F°|ù2¬^µ -n§îuã$Fǧr]姪CÔ«jQ„þF sgÎÄÜ9ƒ˜3{ýMÔû¨Cš1ªk(¿q e-Ü´:ÆÏ/¿øÛÕ𘆠RE5ä1e>šoRŽgQžýØã°×>ûá‹ß8Q½ã.…žÇÏó™I,“®{Ñ̉ïºãLc'C%ia8ˆ0Öl ÏÇ·^ó¯¨òŸò‘OR[÷¤}÷1U¬™ÐwÊ­:\=\U]p+?sÛäÖäÖ0Œ)arkSdg–['OyàJnÓ žÛ*êƒ3ñ†Ïÿ¾sãm”Û&Õvq«Çã¯Q¹|z\F‹á¥]Tô3>…lþœ™˜=ØGAÕ»6#°Ë,ì2gæÎ‹þ~×Z†ðŸq¹”8æ1ÿtéƒô^i"ãÎ16<ŠáŠñÊ•¸céX¹n£qíNŒl¬‡fXÃÌ9³°ën»bîüÙh6CÔëuDÕå¸NÉŽ¸¿Mî«O/¼1ÎÑN/¢F‡Ü ]û—/Kn{”Ùe»ÕNŽHÀ»X—ŒcÝšaܾv·­^‡•w\‡¼äy¢OÿÀ‡Ð®<ò>¤¾jݪŽpÈü¢Ä¹ËƒBÇk¥h:Í5¶FyMMn‰É­aSÁäÖ0¦ÈN-·¼;¨­W½ •F«”Û¤ŠÆ¬¹xÕé_Æ–,GòæuT"Ê(¶w \Æ|ÐdFCtÿœ¬Rbø¯Fñm0¨{]·>C™kQ¿æs¦J6ûUM¾3L¹¡fõË8ß °›_ÃêMc `†¡‡£{0Ž=t/²`.æÏšá5øþºpÉ¢!,îbõšµoµÑ¡àvzT>êêÑ:‘UJ8M©RŸº©)$KÁsé£xi¨[e@©ìgžŒ×€ÅÙAƒÂÛ¬ÏÆ.ó±g-À>>zØC°àÁû8™_»ôNÜ|×.ºø¯¸þæ;àUûЊ±^Ž$)ìuŠmŒZ½Š¤3frKLn Ø &·†1Evþ’[É•ä6A;Ê]—ºÙä¶+¹íªg°ˆ’PD%]*Uïd™+u-~z/^ *†9ºAãAëyå|wânTül¿aZ/³…Þ("ƹ[½'=ôP<ç GcvµUÃcøÍ¥·à—^„åÝ$a€NkiBid…;c’r¦E2«—·‚”Ç‘QÔiµÈG»¢úž„Fµ¾Î£ªH¸Wט昂í1ýa6 5§™C<Ž×­À‹«hÆ-4‚P 04žàø‡íƒG²?}ðîX8{&n¸c~ð“ (¹‹1–¨÷´â°Šjß Œ¬c¾ôLn‰É­aSÁäÖ0¦ÈN-·”§ñEœµXrÛO¹­ñf¢¦¹rS ±kþJC5ƒ%9tâ›S<)vcí㾨§£Ø#Y„ãþ0<ïOå>j¸òÊkpÞo®ÀÒÑq,íõÍĘöQÜËdÎÓÝÍ£{n?§xóXô"™z`ð&z‡ÜvÜ2(ÝD)reÌŒ §ÌºxNuZkY„êx~/Ab¼?F6QÍû˜ÎócŸ×Rvg×|ô…>öœ7 ôáxÔ‘áo7,ÃwϽ×Þ±-¯Šž¡B)OÓ¶É-1¹5 c*˜ÜÆÙ™å6£@¥”Örëû å6E5Ñ7{^õñ/ãœÅ«Ð¡ÜzÝ*å1äq÷èˆ='ªoK#VCW1UW Œ9,‚ºÊ Ô î„PÞõ´ƒ§¾?^þ¼'áÏ7¬ÀWÏ< «Ç»ð+u´¸ßZc&Æ[1¢ †„˜å1Ó¹ÒWÝâ‚€û£<&i‚<($5çûIBQeº™&U?Ô*MYQù–"É•™nß]ë =®SÐà1æÁÖUǸ®úPOx ôň)ØIÒ¤×s›±Qô)é8˜‡»ï2ÿöòç"l â£_ü®[²ym€âšñÚ±vn…É­aSAß2†a›DµP{²”Bæ>šÀõx›ÐAòÐG”%ð²¥kª¾-×Ë=Ê¥ºÆå¸¶sËBJ˜×Z"Ðr©*a¦ øU…¡èeLJìQâÔËXNQ¨UñÊSN¯~>ùÍó±¤=H—ÜCq„Na¤CÑM)à­u¨ÄÃÞܲ:wÙàîꌃ’«4•»M)‡i—æÞáŒ)ÅVþ:qÌÅQsš"ëI~)¶Ú.sÍ“1â,B›iZ—w0Ê}æ=K/A„€˻=Dí*íTó€‡: šh…5¬Èª¸juïýæ÷yL]¼ø™ÇSäú¨*‡š„0 Ã0¦ŒÉ­a÷‚^âêQðbŠl†F,!UsVjU EÒ鸶œâ—Qжqº^¥GÁ¥x«G³LõyUöÛcè2¾ªÙjÙ0šé: $«13Y‰Áx™ õl”k‡Ü»Jï=ä9å=i T À«m_WµÂK(ÊcÜ÷xá:ÞYWÉRxí¥¨E-ì2ÌHDZW³‚g>êaøÏ}ª=>0$c¨Ô| µ‡)ÈV-Á0 c:°Ê cŠìÌ/”åTºÜ£R\óNŠJPC›ò÷àvï¹—Ü„sÎÿF{UôÒ-/×;T”Ëâ¶âÚ§%²CÓ œ¯Ž ²\/™KŠÎ(¥Ì¿€ËÕlX•qåiŠýûüú=ÿ†/óS ÷ÍÇ“O<íµ+qõu7á÷×]‹+o\Šv·Š¨9ë(´1jÜ·ªUG(šðâQLLs(Ùæ¾%ðj‹w}Bî…ÄÐ ª+×¥Lëp4ª:¿Š[ÇÁШˆ)߻ϙCöÚ>l_¶ß~ðÃ~þÛßá1 ú-X€}ö ¬HêðbßÕ×µÊì…2Ã0¦†É­aL‘[n3$^5Šº1 ‚v ÄQI_?²(ÄÍ+Æñýü½æ¬äzCWgRÊ¢<Ö CW–êj½º¼a`>ùª ËÙ„ÜH $·zñ+âyÒÅž3êøá^ƒÿ÷©ïàš;GЗGøÌ[_€Ýæ…Ú– ŸèÂ+ð‡K/ÃJ®¿²Ý±¸†8ÓËmŒŸÉO(³i ÜT¶8>¥%L»¨Çª—[Zªû3n¦{^ˆnÐàÍãÑ1N½T0T¹n#ôQeÜ:î:O8î<ô =P«æÇÅZ„oõc '+qúkNÅàÂ=ñ²¯ýk{u¤C-WUÃäÖäÖ0Œ©arkSdç–Û©¯’Mª”J>DÂãëyÔ,Zb@Ñmö`ÙŠ!\µt)þrçj\þ—«pÇÒeÔ0 X¥NªJãR/fÊ/EF%”¹ÀÑé%#Ê ÷“§BÊ0(©êüÇUüàS¯Ç?û]\tÃ2DIŽA »ÎÌqÀ>»áȃÃÑ? 
¹ÞŠ‘.¯X¥ëÆpçÊ1,]9„;–­ÀÒ5CX;ÞF—û‹½€Ç joJ•«È }—AKt.™–”éÔùÔ´Êl㣨PÕõªÙ¬™ ì³çnØ}Áì2·ófV°€Rº`·]ø @ñ_:‚˯¼»áNܱêV,YÝÁ˜WCâC/xöØc/¼îËçbÙj½FG+¦h›ÜšÜ†15Ln cŠìÜrË㓈ºrW…„‚K)õ2T+ê•Lõ^Û î ¬éQ «MD0–y¸âêÛpåßob¸㱇±8Çx/E»“` ÆÃ…7„ÕÐíe”6j¦êÍ2U#ˆT5M°K_Šï~øExÓ¾KoX‰ ®`fkjþ8:hÃúÐWD¥VÅ 'ƒGï·ê¯bF½Š@’ÄøBžµä5:ÒÃêUCXµj5Ö­ÁšÖÖtÆ]¸j#7Mw£J„ …[’U«Õ0sæLÌé¯cï9u,ØeWô7Ì æèÄ)ÆÛÄqŒU«3üáâËñÛ‹ÿ„tæ Ö´>ða bZÓ&e€Ç´§?÷ Øo÷xËgÌ´$ˆC-¿crKLn Ø &·†1Evf¹ª›Sfs_õKUÑ€’›q˜¦x˜ +è¨ÊjîUÔðVÑî-ÅUMjùuµE›c˜"<ļehuzè%1Vs<VŽ´ÐUËE5V´[ãüËiŒy}Þü‚Á¿}ò«¸ðö!xÑlÊ^q>†¤&õ®I>å6º¨µG1À¹³gÎÀ¹s0»YG=ían_»pÞóçaÁ¼y˜50YÅCgÂuÚÜ©c KòÉq¦§çäµÅ4'X¶z ·-¹ «×¬E'É0Ò¥ÈwR,Z¾Ë(Í^¥¡ƒGšSX«=dÜÖ ™«‰°*X‹O=÷dì·Ç|üçg~Œ•«zèF>Ƙv“[“[Ã0¦†É­aL‘ZnywP=Põ>«ê š©Õ ¥V“½8FLñò+ªJ#¢~QObËå ·ïª+\ÊKX©¢Z«s¹/é¡&\Î(£­AµQG›ÛøªË<Œz¼E pÿƒ”æ—|ì[8ÿ¶u”Æ9½±7ެªÄ ÀkeFG‘÷yÈRnàENÈ}îGubkå¼=†z˜£F/ô(Öj§WmöªäX6øÜ_qîŠjiªJÁLGR¼@™õÐ g1(ðªÃË5ƒŠ‹½”¡^bãö”uìµW#hRÔ(øÝñ•zMԪ̴¥øÄiÇž{ïŠ×þܵ–âë…èñ±ÀäÖäÖ0Œ©arkSdg–[•\V(m’Û˜.•ø*Ȥr>Ý’ÇJ¹£Ðå”·8O¡ª²ª‹šëE.J­ÚÉU' bÍÓP=~å\·—qåÚ€“ºeëÆñ®Ó¿‚$ªÓIC§mZ¿Z­q݃µï}ã+ðšO}¿ºu˜r9ˆ­8@‡bÌDÄŒ_mèÒºÕ¦mÌó3TòªȸWÐM]¯cÙõM”ñ B-ÐÔÄíP- (\BçT"és[ªºbž¨«ë#£tª›_ɘҬª a¹:¼=Î˲Ó2­QÚQ°}þIX¸ï|¼æËgañP ?«PÄÕ™ƒÉ­É­aS¡¸s†al‰aø”ZÝ*|™ºÜU¯c= bêU¬”²Ûs=ø Í-¡0r½€ÿ|UiP)jª–T6¤2.sÛ8G3¢tÒo]5ŠëV¯ÅÀ¬Nç«ñµÿ~¾ñîWâSoz%>ô†—£Êh‚lµh5¼ÚjÄ•Q f‚Pu¨z½z€Uõ ÖVú0„è1Í ‡ 꼡çQ8RJu’pm=‹Cʧ‡˜që%¹”Û(dªRAAÖ<uÒ yjWÍŒÅh MkHU"¯R\kÈzß4¤s¾Ž;¯ Ú® Œ•:÷ÑZˆ±&õ¸¦êª”âF'A׫1-†aÆÔ1¹5 ã^Q©­$Wh09H*U–© ’QÝPä·ã"HàŠQx›½ý½µ$  ÖÐöÈúf EQ>ç‹ññoýÿsÖñ™ïžåT1f^NñL)Ýš!áÖTZs±:nì*ÍLC™÷RΙÜ›[Q%¦ yÑÚBŠŽ ŠÔCì«é1 ¿‚ %Õ:…6ä€OΙ–$ˆÐ ë`+©4 ØLn Ãx€˜(ÍÍTOU‚¨’Õ­8E+MqÍM7ã‚K®Âå×ß„¿/^Âu(ÑD_¦­* ¹ÊŠT·–¢)i %ŽMŒ•Ò}_áþ!ÎøWõõ7çþU]C¥¾EPë)ƒæÅ¡ÚÖU·º1¥6F-ŽQsTz¼õRnc?@+ª [­r]I³a†1ULn Ãx@èPæ†j¬«E =NÓYý Qž`·z o~Þ)øÊ;ÿ Ÿ}ÓËð¡W<Í4G=SWº]Š¢zS…µYK9 ªèùUNW ß²ØûGéÂ[ ®D¶ *­Uu= NÓƒ™ÖœÒ­ª S•ºm´ŽJl5ÇÕ•°ß_Ï6 Ã06‰É­a¹— :”Ó®çY?@Û¯ã†e#øÄÿý_ÿéUøÞËñí_Üï]t+ÚçJ: mNUGN(醩š)c|zˆ2ÕŒÍ\)ïf…‰ôlZw}‰/7Ö‹j® ™A%¸w·U¦K kI„jRp}.ËÑ s´* 5H+ç\£Þn»¡a†1uLn Ãx@ˆR ¯“aF’a MÑ—vÑHF±kððýçâžGTÇaÏÀÌÆº4ÇHXGZë+Jgé†^Âcäù(•w5Œ¢–ÃÏzÈrµØ K¹œ²«–¨¡›ÕûƒV/ƒûÃ8¸/ê­kÂó'ª<Ðx{%³Êyªv¡V'¸Ó¦ÂÚJ-‚ZhRÊû²”+†aSÆäÖ0Œ„(KÑT©%ŶÆPϺ¨fÌî ðØG>{-˜…9» bßýæaÞü&Jàn»ÍDÖ×c[t×Ò+)´ÍqÚC«×ƒ_U;º!ÂÀGÈ¡z£ý"î´'ö¼1ÎR7)²k MnSŠ3ï¤rg/O\»º>…ZÕšsg££V’.ú¸Î W«¨¦æô7‘Ç=¬^»A•Û©6…a†1eLn Ãx@P)f/ˆ‘‰kC·Dh ŒyX2Öŧþ÷çxñ›?—ÿ×çðºÿü(þtþðê'WœüpìÛhcF¼³Â¼çz Ë›óÏÚk¼¹h§Ò8AÚ‹];´3Š*qp’º™L[ }µÏJà ½…,F”vQI;¨0]íÞüZ‚ZÐE½5Œã#˜ÕîâuÏz²÷ œ÷ËË1äXë'<öû'Ù†aƦ±N cŠìÔ=”mEÚQF¡K0«6ׯˆqÚ»>á Ä1ï†ÿyÝK0tÛr´Û>üЇjÐB#Ègôa`îƒpí]«qîÙ¿À×Ü‚!X¡^Âs‘w«ðÒò9†ˆB©Û[’$n¨s³)64bvß8!VåZ٪έÞSuƒ¤u\ñ‹^Õ*ªsëûåxÞZ‰ù}v©ÖpüQ'âÙ§ƒÛÖöpÁ7…óÿü; ï»Ƈ2TÈÆÇ­b8†1Ln cŠ˜Ün=ºY—‚8H©»–rûœw}k*!ößµ†gõPÌÈCøQcIOùÉäµ:†ÇG±×Â…8êÀÝÑÇíË–âÏ7/Â×ߌåw ëXÕ0¦Ž$MLu^üMnQCvÓâ»1’Û€r«-ô‚˜êÙ*s5A»æÈÊÆÍ9p?uèžØoá ì¿pVŒõð½ß^Ž?ß°Ñ’5”{`q³Š,@Çç1“[brkÆT0¹5Œ)br»e¸Î!²3š3põêÏz×ç±¢RÅŒ`»úÚ6²:ˆ6E®›Rî²>Ê®:>HPéµ1—’÷ÔŽÂQ8 wß©êÙznúûøÉ5·»ÒÝåËV NrJaŽ1žõ>æäâ´T5g"ô²—Z]PSîf褕¨éw9=1òvYU;º~âºæ Âê{©B³áA»ÏÇû>9ø@ Îo¸ãK¹ß›n_Š_ÿöB\yÓÊê†(êƒãÝíýðzTBY¼Î䖘܆1Ln cŠ˜Ün))e&À“ÛϦܮŽ" uŠÙ3?ð´) ª·›% ]˜:JyÏ•æ¸>LnMn Ø.Ln cŠ˜ÜnTF'·ý”Ûk&ª%HnØsÿrؘÕ°¦“!èÑí a&…³x k"osŠ'G%¤ª"°áFVTкWPPÕ€˜ÛQ2³(rç)ôBôƺøÓ­+ñËk»ª“£_/º¤ŒÛ %·Ë½Äíy„$¢ÂÒ¥›i€Fe¾ö¦ç¢A ~Úû¾‡NTŸ( ¾/LnMn Ø.ô žaƶg’—I$L’¾þ~<ñ¤£±÷> °Ë.³Ð׬bÆì¹wÙþ.ûÁßu_0_Ã}ìÂáü}î°Û^ {r|w„ówCu—yh2®YógbÞ.M X°KÏ~Ê#±ÿ󑏿ÀB¤~ˆ˜ÃDãÃÉã™§ÒZ•ÚºÅ\(Ô7§Pk~æJ—ÕÆ­^x£V»c2 Ã0¶VrkSÄJn· u¿«Öú&^(+ëÜûàøÀËNÁç?üܸx9º•~ ó©Œêan^ò;cÞ÷t8KmÒVR5Öã*=„• µŠ‡ñu«°ç¼¹øÌG?†o_t=>vΕN\U+\¼ßø4æj¯!áHÌpgô¢^QrËöEëðõ7>à žöžÿÃXµ¯hC÷>±’[+¹5 cº0¹5Œ)br»eHnSÕ¹$·ª–ðèƒvÃ'_û\ÜvÕ­˺ÏiŽ\7òWÅ`2ºy)«Ë~„ä¶åG*CE”樨î¬JPÕšã C½$Aƒë<öȇã“ç_‰Oüô¯ný»¹â&O!eYée!²¼†$ìÁ“ÜæšNnŸ‹¾0ÂÓß{Z•¦Éíf`rkÆtbÕ Ãx™$g‚ª9W_¿3gb—]wÃÀ¬ùhÌšƒ9»ìY»îŽYó‹0›ãs9œ71tãór›ØsÞ<ì3w4o.î2ówÝsæ/@8kwDó÷ÂìýöEß>{â²åk°lݘò˜£šŽs|Cˆ&B8¢¬#s¥Â© üCéV=ÜŒ²™Jxy )ç«ÓÃ0 cÛb%·†1E¬ävËP «7è›kVwðìw}ëBÏ‹pÒ¾ 0#ó1žRk)2¿ ?US\²¨¢Zþw¹,¿,ocóµV¨ÂZ·¾‡„¢Ù vΟMn@Ln ØNŠßÝ 
=4°îmLÇ= CŒú.–ñÖ¿ßñI¬Ö‡o!jÇhÅe$$gíæ `bV [ ”eÚ>çWÓ¥¥¥€Í™1pw­Ò$ç ª…ËÐ2Ùz32ó÷j„)ȧZÊüÄ^S u Ó¶>/%êÄcCRñùüªÿ˜×ÀmK»Ñpgîê’t‘»a¤< ‡¤:ƒ¤€ÏIékØtOPêéS/¢Ê2’³ç³Ò©œš>¬ãdm…Bfï`ªc…éAlN2!ÆR3Cð°Ê î+–Q‡ƒñZÚ2ñ]Ü pßü3@l0+á²»êXR˜¾>JQ~sˆ-²a–5M(±±¡Éû·J-¢[2Zy*˜<û?vjŠ€M¢,4KK5ÆÊýˆƒ¨3Ý’ÔA{hÕ ŠV»4ÈØ`-mTñðü =ýF*~ü{ðp»õ–Ïlâ B‹3D£(«Ïs3DµÚÅod™‡ÃuÚÌ×”[Í¡M13&L VÍ»gLÆò  ‹5 j¡…ÅTƒ,,Ræ²ÿ Å53÷´IHZ$¥ÒJ7 pÙ|26îH|K¼n>4ò0õHf=*J·Ì{ËôB}&}8åNetŒ&(Ã)‘XÐy£ ¦mN$ÖM…V«h5ê$H™YUCfFPFÝ­ñ˜q%i÷ËL‡¸‰È¬ÕǴЧaõ@,ÓM/ÚQ i¬î!ü´R<Œ’\Ø4¤Ï‰cx•Êò2i6®1¢œ[7!«åÕA Gæ9LS–¬ˆÌsép^¹ƒ„ŒñÏX¦¼ U^ë`Úuð†À¤QÀD0S{±Õàf$_A/¼jË[…OaÄ×¥"²¦—2“O‹pèÞ©•-§@NOQ±ÉX?)ŸLøºZ¬ß£ ëgµNÒB5e)fCU]ØDÀ"–åÅþ46„}i•„«x”¶Aµ‘übcÖñÔMGs$(GMÇ<MqU8$OCp…§qgaaño£xßZXLYHÑ-·§\92Èp«A9:îîëCD&Öª©ß…ë´PNëèÌ棣Ob‘×½ND~n­ÌYƒ<óÑJ]4!‰KH(â+–ažMf>ašÙÖ5œ¥1:jUªpÍTsÞ:$ Œ+I¾¨D!MH,´ˆ…È~Fâ/‹8áf<6ánúV…'¨5–¢³µY¦“YæÂ/!SßB雩ÀÔ€ÈÁX× 033ï ñ¡¨³šK‘¦©r:H¶ØÀ"QrKf‘22þ±eƒÀëcËúê!ཕ$C9‰H¼2ãGˆ´ŒU’‘éí$lR°Œéû€¦ÿ‚f0Ä+Ë”†½+ë¸J¤'“åFÖÔˆi3]ýj7²RR6 ¦—šžÓ@êF¦L×yNK÷Ò5ýec¢ÎÝIœ¡2=ÙpÙ¸滢EÿD¥•Ÿ"´y‹î®‹uÜÂbêÁ’[‹õFÁ‹p‰@hÀMY‡Í!´F–ÂÓù$hZ1+ D¦Ôó“Ä‹¡Jå¨AT *ˆH(u/0VÅ&|¿ /λ%¨/ª eVˆÊíe9Á¡[óàu¢cžï!bºyžƒÍ0ÅÖ@r'n0ª)Ém›é×&!IÂØ0 ¢'Åeº1Þ¼Ï :Ñ.õ¢é÷0|6Ú(·£'F‹ ÏÕ`â#$#ZžÖ4 ´ÓZTEé¸NBÅ„O0SYnIä5[‡ÆÒéKÁ𒨤-6¢ZtÈòç©!ÕM’«Î2¾i LSMg§´‘奪™ºN}w5˜×A)–­<¦À¨ÆœyêÔF>m\`ÒÙWy5ß x†'â6ë%Ë[’¬úâzH5„¬§±§î!yšU˜\%MG§n!>Ë·ús¤%8q]¦:·*÷¥±ÌT¹ÌEàÍ†ØæGÏ-TM-,¦8òbnúÃqtÉvÄK_€­§yèÁk¿JÖ©5,#Yù¹Ÿ÷uW#€Š4ÁfÓ;ñÂmf¡Æ´ª… ™þl@¸Ô½NŒêk› Ÿ7Î"7íFHnD,£¡ßA?T˜ÎÕáñú±çÖ³˜wÌQ“—*—"ÕÊcî+s-,,Ö ¬«ZÃb=‚QÓRÒ†ýäû«;H6YÌàæ}eeyíöRyàŽøôÛ÷ÅQûoŽ ¢ÑݸŸd¬øZG^KÃ#äªêFëóžšÓ&©0Jr&–àÀ-køÜ±ûãݯÝ›v•Ñã9f€RÞD’¡pjz°XÖ:£Çi@£ ×,ŠTZ>øªØNÆe™T•äAè.'Æ)N}íîøÔ‘ûà¥[wbf¼]Y%'â …D–ä@k5y«§iV³T2µ¥¨”ÕÍc Úe6>ûîâe{m„Þæ‡Ò&¨‘XøÆr+‹¦±Æ‘‹D(åÖ5¨y£î E%×P!¦½º ”˜F嬉™dIG¼p |î/Â÷™M*ËÐ=‚rø8 Ô2d$ºš·U3¨ŽÄ!²’O«U±ê˜ÏÃÌÑ{ð¢œy¾xÏöÆ&}]¼–[-E¬Uæ 1ÎÏŒÁØÅó¼Í‹Är¬éòød`{‰PC°°„êÿ“‡&\©<«o7L_ps_Êø«™1õél»9]øÌ1/ćß›gaŽÏFi2 Ïsz5Ù¼Y&ÙÐh £*∭Z¢;ZŒÍÝ…x×+¶ÇiÇì—í¶ f°¸VYÔ)A¹*»¼êoH®……Ås[-Ö9¬XÆrò’’×'q3ÌXo[èsì4³†w¾x+œñöCðâ­jènχ×&7ÕˆÏL#60Dî’x´Çìt!vìpÂ+wçŽ9‡’àmD’ÒMÿª ž‹û,,ˆ\èùϵõ6Íä!e-«˜¦ëÊçn aF6Šý7)áŒã^€·½d[lÞÑFFHb,m´1êzáÃG5åZ•Á»±yuöšÕÄéoß ;|3l^.£—’I¦³éñYz)¹êBB#‚¨=‘Ûuíe¥°Ç"·Œ›öó“)Ë^—Ÿ i²±@‚KòºS_ïyÍ^8ñ•»`96-/BOø(zƒ’pÄä›fûpœ5¿¾x!véÁû_³¾pâ~Øknfølùl9±õùì•-·)|R<Çåóé‚''«¸S ¥WB1}óæ,ãž„˜^v±ëç[÷îÇ×N9mœbëà1ø÷£ƒ \ÒZso¹³j,ÝQ£·°Y0‚7ïÙ‹ N{5Þ~ðæ,¿@©°¸>ýU:ëqf–#|ôs~–ÜZ¬XN-Œ²7}9¥ü*±,Dobš8tûéøøQ/Âû^µ¶"Y›Yj#Ш3ÞÒU¡Ä °©¿G½h#œsÒÁxõ[cã’æÑA%EEs¥f—×*=wųs¼âh]@~C˜$"ëiŒRƆ‰üÜÀÅ‘ûmŽsNy^±ûF$¸u3øLÓQ ð×KP aŸ™-|è•Ûâ¬cöÆË7ïÅ\¦÷lgb&»­>Ý«o©ž¢äËŸ'ˆìæ=*×E¬”ûŒÒò#’wÑNO}Ù`èuY݇î±)>Â!8åÏÃ]Ãèh<‚iyŸãΪ¿¹³’8bŸqúq/Å1mM* ¦emô1*ƒJ1Y‹×Ÿ×»ÒG±UÃRÍH¡†¬ëWàÄl øMLC„íû|œ~ì~øì[Ÿ×îО‘Çá·GMÿç…l”i6N6<ö™Ý‰/½c'|ôµ[c›.ÝLךæ Ó­:ëÉ3óS¥T)®m.:²°°x®aÉ­Åz(*~n¥€¤ô§/Ð4KúT¡·ä’˜e˜]ñðν{ðµöÀËv˜Yµ'1±—î1Ÿ;áex÷«vÀ]eÌõTHŒß[RßÐaêQ…æS„åÕK–:‘}Ž.Tະ2U™[W„Áë@êw¥nø¥*~‚9›w•ð×m3þkO¼E?fz)úÉ8æv”pôA;âìw½Gì²¶ "ÚJl4”+š­U”ó’²9 ù¨sõ›Ô[û”ü꺆<ßEÎÇbgʆ™VNS©i& Ó&JQuRL÷\lT pø^â+ï} ÁöèKëè¥od ^¼Ãü¿·Œ¶¶™V†×h²Ü*"$ú¯3K………ÅsKn-Öh8T‹Ò&ÁÐò"Ç% È|¸®FY“\%mtQ‘Íj`÷N§¶7^»ßðãft$8þ°`»úHöúéW>ŽÝ!¡õÑLÔ¡[BJÁÈ!R&b­)™¤ ÇN¯3  ×`±¬Å=-†`Dƒqœ š™âèhþÌpÌGðê-j8ãð=°ÏœnÌàÕ7¼`+¼õ¥»b‹J„js‰Æ¨“#T0±aAr' XFb7Ö‘Ã$ONl Xfˆ…HúFn^3rŸÄ'à%ˆ®«Ogã¯Ù4­šæWvxÞêèÉÚèÏBlUópâk÷ÂÛ^{0‚¨çLÇ'Ž:ÍõQXˆª—¡Vá=*[êΡUöHêŠhfŽ[6Ü<3%ØÔ†x•Ô(m±È°¤ºym\C­Ôô{g²°¬Å¨U«hÓÝÆl¸»ï¼ïÛ³ñ° =þ(N:üE8îEb3&Xº°ˆGÜ×@>Õk'¨™y®†›Lõ¼”ªsBÞI],¹µ°X`É­ÅZõïTZYÔWQݼD}µôe†væMZOÅ•:$YZK^”!@‰nœ$¥ó¨äX´©yÌô_T|NBßy¼B´ é‰mJ÷ ªãUÑRçÒ „‡éCw1:ü2fUʘYö1½³‚¬¥áRƦHR–[á ©¬´*ÃÛä³bn#c!–äýVEo ÅåµÌXÓ4ÄLäÇØÙÌ|²ÍÕÉø†Ì„èé!•*‘ŠÕFR»Rö®ú 3¾%ú[KGxÜ4ilºÑ­ì\!‰@Èt5ŸÇê;(ÉCEí3-—œqaš1z&íeµ©ŠCYZ]TI:HÊúHd;HvK|vOàò˜žy.JA•>2Ÿx}ÐTLj«)< *Ó.OW¥Ÿ¶*à“§ÃNï4ƒB“;‘ÕдZ1ÓS73ßÍvòI·Ú=Ô̓̕»:ù†Eå\ƒ$ýóLe9/÷*ãyzäu!ßgÍŒeSö5½˜N‹Êr\býJPåš+OBÕ3­î¥ éŸÊ¬b«z•e,Ëf^^Þ?I—2#Ó/–=3†æïeyU©W}ÖóŒ3Æ-fcÊÐFsN ¾ ø|•Ó©Å„MaUÝR¿Y-Å«’Æs,T•ëlÐÅ´®2)|“nlPèK„ÓjcnϱÕx¼iÅeクµ 5\ÛL?•ÁŸ©Á¥1In­[ ¾Ð‰†…×ón5 
Ë«¢$?˜ËjêjŠ6õ#WùÈ-ê̲4ol[XX<óP³°X Ò¢â(ʨ9cs U*šà&mÄdi j¯ÚIE!êÐÆ•Å2îFÕ_Ô s J…-®©Ï²9Dø\3_JOŸ05 UóŒzt¯§»¾Ï‹ð¥ìxÜßÕO2V’Ê'Ñ×ô£ÚE¥«O¿ôÓ˧ÈR'QMUÓ"Üæ“ÈkJ±\+. µ Ÿ ט}7P"¹‘&.‰8H+/ûÓC*RwäˆdÍAÍJÐæÒT£A“>^‰ŠŸlGJ^$"Õ|Ÿ¼nè4Ãf’ObòC‘á©ðÇÄѬ¤ÏKUŠ&JóI¼D #À“L¥R—i8”Æzg¤%ž:è>0i©tÒ…Ô LÚkP™Y|Àœ/Ê‚Ãø8(üJ YyÃ0Ì'ægéVôßgyÓ2¯Š{D‚“pÁióRjÈYÈÐæ´Wù!òC‚«t0y¯tã½å¿D1ÌãÎ+lLe¥š)‹J+¥¹JoÕÐÇDòJ"³Užc]¡¿J'3#= ïe:ŽuE0¥‚þ1_äL6/™%ãFK$‡LÏIm–ÔÑã4˜Ý©väaU:«œ›Rt§RRÔ)C&—==èƒ!xòŽšQžYô„í&’q6ºØ¸ Y'‡å–[Ó†ñ‰lð–”ç”Æêø#¨©Y /Õá|þà(ÕRÔøŽ˜æÅÆŸDØ †€õ¬Êû5É—þ‚ž²¦RfºY¦»0“~úL²¬Ìüêè4õ[å\eÓ4*Õw¥cžoÊCåe^וªZMŽånbÓ¢`9Q|GGFÖ‘ ‚˜ùª{ô£3 ‹5ÕQ ‹µFQ ¹ºÔ‘ör’Eex2"m×Qv†Ñ_Z†Î …Z@E4¹užQDiƒJ»…8­#I‘d#$m´²Š¼…I_‹£Nµ7D¯—Q¹.#!F@%‹¤EeßâƒH”I›Çsœèx…ä×E~ZlÜè“»Çr iÙºî·ðÛ È8¬¥gãD“ï3EÖk·j0T}Q`^Ž2¯Ù˜adÕÂb8[(©±¢r 0-0M ñµàEé0m}–Ÿ•0ži¢ôS“Ô éã®t•e0á6IG˜/#,K,÷|nJIx-ÎFY®sIT2æ݇¼Ö6Ò °NðZæµHø2çáCl=ÌFó•UÝKDhY/Õÿ7iå…É’>GbÎäç|}ÿgãD RQŠ+rºÕxóv„áÓל&Éï0…e‘e²ÅZ*nËË¢Öá±ÜBe·-6 ^å;`9 ÕM¥ÿ(ËJ×à#‡m†~^ëàmÆ*ô´Ð'^YÂ*=K¢OÌ>)€‡Å<úÙU8û××a¸¶ 6®5ðƒwy¸Ò8C…„:‘Qp óE•~¨ E“äX„CñU°%úD.Õ¬±Æ-½ÒFË’=ƒÎÉ*¨´“åk20–0¢jžãàúGŸ#5RTóñ¥£÷Ä{ÎFEÄÏ«’œåaýO WãªÓÎÉ*{1ËàG~òO<šM7Kñİ×LæÉw[›·”x¯Ž#•ÅUüXJ¥¼l¤,Ʋ›ŸZމnW~$|F&J¨A£øòŒ9Î÷s)ò®Ø/pt0“çÿînüöª»0xËÎÓpæ1/0ÖÊŒ‰^aàô•CÞ¤‡,éÌg•6AVÓ ‚øèilz°¾„æÙf¦BÊ›MY’Ò0èÄI ŸøÁõ¸ò¾Aãì„78 =$Ä%Öñº+Û­®äõÏX¡uµòÿyJi¯ôÍïÈã®®"EÝÎÓdeänõ?¯y}xjȵGÏXPg¢K:÷•N¿ý'pÖ/ïÀhÔÍú[8ÿí[a÷~,?Ê+‹5 éð‹.º½½½8à€̱ÃSªË*k”kÎ9מö ´Ü^sþ÷±ÃaG°`R“dM–5 ‚¬˜/òÍ‹0RÛ˜$%Å,”äTRDùgt}hÎc/‚ì"bšj?÷QÌ4*Ü4ÌyIÞOÚE‹  ªS\Õ—Xa2)7F2r÷Å–çÄz‹KO…c¨ÝF%AW©§: Wà—ʨ=ˆož|ݾ‡ä·8óH¸ôyý?{ƒ?¹½~aŒãϽ %3øNð0½#Dµþ8ª>WÌO­”¥²WMë,O*o+ânúÓjÏœRøòr%(µMù2gUó­ˆzžÊ+’JZ¢DÆõi\i­ÆÂ«3yþk›wƒÐ?žcz¨Ž)<¦Gæ•€»¥ØHÈÊÓ˜]ŽÝ9À§ÛÎt‘/~ʧqßt\1ùÅ2FnEüô5e2Pø#þ÷³|.h5BÕ†GuAÖ&ËÚ£q gýâ&\xG a½~“ÒBÒd|Þšú»u\Vòý™U×Õ_ÕcÅY]W>ؤ€It S–Š_i 4Îó'„ÉY•ÇîÔ'ÅÑ 8c ‹: ô®bÙoøú«°Ni•ÚFxh¸ §Ô‰­ªKñ­ãwÆ$·•I×q‹ÿ–ÜZrk1,¹]óP!Ì•¥XQRj@‘T}@åê& d~ּ㬟áîæt û³Pö©hÛuD­&ʵN’9Þ£¾Ÿ„ÈkYŸé³zðE$½"ü!í¥çˆ¨ >ã§¼¢3¯#+ÕÇü—2×=|1­§~«ôÛÏÚFÁª?_Af ËY¾ÉŠkRè²25­ÂEÒ“`TðÕéh ×±qeŸ;rw¼r»tªo!ã[(ݧŸ""©­ÈïÑs•vurž:O=ÎíqŸø mUÑ®NCìw¢Ùfü}Ÿ¤6FµZF›äQúÚCÆBÏF…OÒ#‹›?öœDéÈðÅl0H…+tKD¦£nZrWóáºI¾J›Ë<qS^x?Z ƒt™i´’÷òKG:ï›~·ºHÔÉp0h§šŠ,ÅŒd>.øè«°ë õ n˜ð¸ÌGÇäÙ¿‰È-ŸŒŒÚ;¿øüs€¤ÐíaY`#J$†q¬3]Õc¨ó..ŒFåÓ!–‰‹R£øn 1QmÆÜ䉡N¯–_Óä_YV3¤°œÔͳd]{ÊÉSOݼ +-t,kbá½<‹Ù éèèdÞ‡tS"9kâc¯˜÷ŸÁ÷ž,ú¼'ŽÌ…Vd1·Üê9ô‹ÿ&KnUrDnÿ{×pGQuÏÛ÷öµ¯¦BBG5Dz‘& "H‘^EAE)"¢‚€‚t¤ "¨ÊO¯é-=_y}wßÎÝ·É—5 ˜6'™o÷m™¹;3÷ÌÝ;3"¤*K²Û×Ṡâ”3ë]™l¼ìpÙý/ãì›ÞB=¿¯­ Ab«jjõ0–}ÙQNŒÜ²¬‰ôÊ«úwnãw¯¨c™3Í,¨C@)¥J̾æÃmÉžç%¸óAI 輩ýO¤fõ‚ÿ­´Æ‡ˆÖ9þSýœjìà5™u®sa@Ò¡™,Šj…0ntˆóÞ£³|g^ÁîsøxáÈ-ëƒ#·sÑÛ*„²=I©ÄŠò6KkŠä!B– ¼N\k„¿¾ÞÀî}oM*›rÖìµ)¼§Â«7Yyy””–4ÉRÄŠOíXc0Ò¬xIìúË5ø…n¼1¥avJ‘è®”œHo|Ÿ ¸’fMÇÖ¯Þ¶TƒüC:4x‹l‡×©*E$â"Q¨TñÅ!Îc-L±Ñé4Y`žäªéµ4¨êóÆ¡›-•:cb˜Êt𺘨ÿ'Èâ¦øM› | Ô¶èN™-^y½ýÑ7ñÇûŸÀô0~ذÛÅv­[®”Å#¥.Yú&ìÉ#ä;ñQI·cRÍÇäªÏÜ“ d²fÉ (7,D’äS©ë}Š8x$%iÆÛ(OÇ'F0_•ihË“ò´.‘KŠÈ¯ò¡¤è—ÉMùâVƒ û(‘¿B¹ñÆl¶ÃÚ Øe³ÕðùM‡ Û QLó|YÊ-=+%XÞžüæ®§ðÄ›=ÌË;Mæ±–Š-×’§¼fňå+Ê­ý8o!߉f ðüË`]CðÒ»eô£×ƈß¡w¢ CrNïG3`tgŒ(„=Fô¤Üê…¶^+R‡Í(_+ˆ :§"I_üë?CäVÝÁ¸NªŽ³›ãYoUÇÕeg€ík/» o°qÝ]/ã‰ç¦¡\.![Ȣ̴êK2yë1+–çŒêЧAcÌ;‰m¹æ?¾•¹»¶Ån¼=½†Þ J@êiPšÞ:ÌÊŠÞebY÷yŸ:böB—î‘÷Ô­aÕo^Ç[’òŠ$«œ2è«æ¹®2}!å)¿l?¨±“ã~k,¿¾¸Å'°íÊíhkö’kwÇÏvøXáÈ­#·ó€#·?T¥n40Jsªê·æþÔ´P)‘Û dZ¶Ò,b*ßAP§ªdh°øŠÇV!)´XÕJ±IeÈ.&;NNŸ§ä)Âfòà«Óª¸ô†Ç1qj€ž €4ŽùÊBË+5ÛA#-‹PŽ…!„WŽa~»l¾vÙd†T% ÞÚ«R$ÜW“d¥CýTˆ̈TeN„‚i«D)ø9c:ó™*ï‘0åÚÈkÛyÓü’ÛX†ø›ªŠi§j§BŽø—ÏÌ0“Š~j‰ÊÞÖ¼1{oNH†ŠOrM|lu­< §S2z¦—ßñ ¦Txe®ÝêFL¨H*¨Ð³”Ÿ×”QyÐ< 5tû5öùOaesè<˜p{ ÿ‹3‰µQÏ6?lþ—k›Y€÷èZ)§bÖÇÐÎÚIB<žÏùÌ5ï‘Í[³;|ô`)­÷¢æ²Y:¦TC”Hsdgš9B/YR’îÏäa÷Éòm§gA$Te²Æ‹fTBüèšçñè;ïßój¢UGåÊØnì*ØëÓÃ1Tä“eS²R=RPÙ³ÚR`I<’·ê‹ê€:3MÉ•éÈ“l/Õ݆Á9¯5ÓHœö8/B|RÇâ_Iþþ3ôL¹uä™öLTåžüãÛÍâÊ>£¹yi–SÖ{Í‚"òÚ[bç¬ÆNR6¾ßÚaeŒ[WýNê¹Ò!{›6Œû’¿Ê€¦ßê'ó½úÎ×p÷c¯¡§ÙƲij$¶³¾ÊèÅðº¦:F$ÊróÐàÏ5VX §}qy çËÓ%*w’c¼×qAÇ´¯ ]åy½Eªºè³`-¨®6 +¦Qlô³Æš•eG×ràðq‘[GnæGn?~¨J¹•‡Ðü– 
)*†LM£ÒåPÀ´0guµjLŸ9Í¥€¤´E'âÏêRvR€"ˆ"UN¥.¥DM*_IJT¶9<õ^/N¿ì~…C·…¥ÕPJÛñÿ,¨ZÍuHéj6úPJwšå)Kí¬)¹ª YC=ø ÚS*¼,ªay_Ÿ,磒ðôHåK?¥ØEá£Z¿Mý‘à–#§|ÑÎøÌ_lÕÕ·b ¦šgCñiž¦Äϱq”\Ë^;¦¥ò¸þ±I8ûÚ èKBè·™µZ34ha?¨› EÎ%·T“ä6êCW0?8dlµê )·å–¤"iŠÈu ñ±dOÄ­Jò#‰x6Š_´&¥ø™€­º,ñ‰ÐÅÝç¯[ðÁ òÕˆ(“І™¼Íz¡ÏòZD@äUdÝ–€Uä?Yü%fu–d L\]ôôA¹Ñ¼3­„c.}›>LÉŸêé ¤œË4ûмÝ7[Góyy¾½y­ÌE:È’ÒA²¥ÅJXÖdÕe²VÖ›Ð%JNÚERWdú²êºð~±I=+~;ÉßÙ¸?¿P ,3äQµÙ$”Öº×I9Äñç䯚 ¬¼h;YÿÕ L©î±üÔ™•ÎtËj«’ —ÉEs\Ë‚+!‘[ówmÖŒÜVyo/ã<ï†gð»{_D¹8’Ï”›€Þ.Ÿ¥@„õ2X]×€J?Æ­¹,~¶ß²­K”ÈÖKˆó¯ ”þf¢¨ÆtÈå†eÓˆ+sQ¦|srbº˜Ç;!¯Í¤åÂòa¤éðAàÈ-ËáFn—€,:,zÒPÕR…e•Ì娫ȄUts·Ì%›ö‘£ÒÊR‰ÈR'[$jªh£b+ò·|\5¿¤æiÕüúتA]ž/P jÆ›{“ =“ÌK† ™î@#S@ͧªOgPgÃ`$EÖ[¶𝔠Ž1‘0™)*e„GIl N)-¦ÜæÕgáj]“¸7˜~*¶Ù›æÎÔT\¹ ‡$½‚.ïȲÁeÚ4¿æäšo$ÄPéÑŽibœ\QdË÷Q̳SÀã²~j0™Ò’Iiú%„OöIê™·8è·”qLÚrÌ—dœ“U›qD”…MįØl'5 Š9‹:å­OÎ5’—ª×Fv<Ê7“Ñ;àûchkê󯬆¼ƒ!ÇÄ+XŒø7ùnDVùÞ˜YÀ$ãlJï®OêAÞ+£-Ëëyò*+³(Åǽ¡2‰¢ŸÍ[º:(•¡´È!åUð4ÿ-ß«òAÁÉC2”<Ó¶ ™Næûy–+›§–éÍ3ž¿ë,cµ¹B=ÃÀ­¾h« Ž_S–}n3¬$)v&äÛ™ç»a7Ãæ"Vº´Þ—#™ô*¶z–#-’%ÙK‘¤(˜M™dÊXYšŠ’Š+Ÿ&‰Îj  OQAÊçT DÌBå(‹MlÒoÆ›¦Í´¡”‚™Tšä¨Då’’þ ^¯‘,†|†\1|ô“,ƪ~AYE²XÇ¢“5(ú©\û¨2Ÿ¶ ƒ•zò$îIÐJP Zb´I¢¯ é#)_MKŸ%µH5s|eLrWàò¡ä[@ºž¡¸r$•¼ÆLÇ$-²¿Á >¸^©’apHD^Dì<Ì#4ìËÊXÎt£'Ýé.ô‘0×ÒZ¶V–[YôD2Ht2²J3Ó”ÅǾ;Ù°Œ\½Çü¦ÛI¼Ú3êD‰„Éj¨rÀ÷HYF”­Ê„䊴Èe*K$å+«-QyuâbYêoüV[°Bï~ Ø=R=Ê0ÁßV¿uŒeV¬ª©^²>j9•Ë_Fé,³Î1Oi–ù‚§ÕÑིHÉÚl_pXÿôÅÀfvPÙ Œ5¯ìÌ0‡~¯ unå‹.—•Õø›‘–íeç­^&DÈN€:¯ 1Ö¢žwò=êÚs¿p³:.~uD:PÍC5Ëàa‡¤‹iRýcǡȶE_ؾ4¢&‰3ëààð±@-‰ƒÃB6û,T`"T¼óÎ=HXR:F"¨U®òT¶ò•˜¬2¦Š=ªa»Ž$¢:iƒ®U4HT×x_A¿ úp«¾‰²§ÅâU»4B^ªˆ4Ë…z¨ qpXà0å¥ Få¦ ‚ å—y0¥,Ë­“®¡zI¦Ôâ©¢¤t´¼mÊf$ T*—¥ò I𨦲¶6¡åD{y¾FÅR32,ÝÖ@½¢UÊúøƒ!¤‚#AÈœ¦ågm®IÙI…çdA&‰&w¶¸ä‚@ºÁ­>ãózé“«•¸ò|¶,ŠúQùJ‰·1è³¼”zBŠLù‰ŠÔ3¯âÙó]9y­:#uü? Ä¿>9ÇÊVG”bÑÖ8åñœ€ùó3²ö’Ü3Í^Ø"}”YYÄ ß…ˆ /ñ˜V ãÌHˆæW>¹"""$Š£Æ­ŽõS¾Ÿ½Õ ú‚5Æ[•p“ï„‚Ô¬y ÊbgÄ'5ÑR¬šXð¢:ãaZH‡Mfö!Íêþ†0ˆNPbÌcLæ•ËÉûÒ»âsI´ÔqÐû‹ßa,k=^. úD­Ù5ÌÚkåƒD—òÕ 7YmUTm3Æ,Ÿeå[Ó§Y°Ïà ­}YÄmŸešûÙL€2×½UÊ'd|âT†dÕ—x’^vÔôÆS<®e‚å¿ ŸïL–~Ö#ù€Šr+ͪC²ì'òÔžþ©,J¢I0Ìñã?Cq¨üy&'ÕevQø, f³yt•€D†¼N¿SêÀ L§ê¹ä¥ Á  þNeY®;ôµ¡†œÏö@–ZJDä>Ê´£¨#áÇÏ ¬rõéV%_Í^ 2ž6WufÙ¦\5W“uŽå’ hFÕ³˜sËîˬÜ'(¦5›‚¾&õ2¬|·ÊgÔU0+²ò•!qgùd.­mS>>z¨½qpX!{¢” Õ0µŠüô4HKÓiHO¤£BR 2$yÔRˆ"jÀ;Œá]–þ© }Œ§³X@‰@º—$— Ï>§ë?ã‘М¸¨Q)’‰x$ÎZÖ¶YdVEq¼ Éç1M•¦–ÖÌ`f…ŠÓ/2-$$à©”f Ñ`Ì "§ýéNô{({”)#ÙnEjd‘²ÁZ"$ =õe•¤•o¥ºð^zÞðÚñOWòÝsÔÙZ”‚£r×ü¶jTDìÌß””@4$¿^ÚG¾s(zùÌ)à Æ;•b­|ÔÓ”YYó2b‘· J2RF¶Ñƒb4“ÄPŸ‰E/HšÙˆ¡F?e +¶æ&ÁL37$;²O Kµ@Ù¤ò”¥,„ê1)Øç‡ä÷À}ʇ€¼ÊõòP_¾È8Ú0ÃoÇtÜi YÔ›(’tùÍ>4H¨å’"oé”>±³nØ;V˜+õèS~<“ÿ³¼¨+¦Ž¤¶¢Ýú¥Ábò¾.•Y^¼&jõe–¡j6wyÍtÖÕÆ5“å¬H?4ÅòTŸÌúc‘Dš-ùj?íkР–xö³9›Ñb&Û€©Œk*¯ÖW•Ý×Èåo”u[¾½úº¡ï7 n¶„ù€›-aჾ$ªàƳ"(Pñ3ˆ˜™j4º¸’>U>ûâ \üû;ñÏ™%ôRá•£¹¢ÀVçêÆ´’ª7 (šžG–Ç „x¡†ø“f*ÛfZ2Ð@œ>Óͽ ϲ!bí¥H$¢2Ï•±Ïg·ÄgÆ/6–ÁY¦«4 …|ÐܵFÿlj$nEbdA2£¯~óøôÞrmmf¡’…ìŽG^÷ox†²Ì2_ý$JužË1žÑlGDÙÉUX7çÒòRÌ’ ¤áEÊNydM¤ìl6Ê´2#²eËï"›Ë¡_n!¼®ªaˆßÀÁ{mM×ÁŒ¯ÏÒšýL±R d‡ò9²…-P‰,³<êkB™²|mJ çüòf¼×`f=ÍŽ/[y(_„IAjùqÁþÐ7ŠTˆL&esàñ‰Àê8Ó¨[êØÄ®:':™a™Ôdw¬›*Lv™ÛŸ_s?þôÐDÔ‹ÃÑO6¯<6CÒßB‘D8‡5_ÖÁ07”qgâB¯xõ%H™VýÖ1©EvÚØs½ËŽD?¯'¥¶Ëùtv.ÆŒ‚cÜ+te‘ äÞ¡î¶êSÜévXxàfK`É\ÂfKpäv>àÈí‡ÄW‘zÜ”Ÿì‘HÉEAX3вž/½7ϼôº-ÎðÄk ÜùÈKx¯ÑDµØf~Šúä6ªÈyrh£’l£²Ì#”"‰•ŽõУFÓ"iMã¨^%ÉíGyV]sf6; ¥L3"c$¶…°­± 6[m5{°t®† Vî‚|…üŠD"9ZhBL[r«’ÑN¯6ñÔ+ï` ™B_”Æÿ½2xb†‘Ö G›c–-cà·£ž&a CXC1Çö“ÛjDÁ'ɯs)7Í]šös|W‰ÉXe¢Žaèó3 ³Ðv…=Øv½±îrè¨MÃ:ËÆšË‚Vy²ÆÛcç@„dK$Þ¨÷>ñúHdßí‹pËߞŒFÕTže-^Ä"dïÀŒ¨$¸"˜*š¦MÓYñhk_ÂEëâÒ#>§W¨YRšE$Ãö‰2ª‘œ¥øÚIÆ63 Û­·,º½’ °Áš+£½ÈçÖ³ré`MQÏfú˜ŸCPe)¨±sYÐÔØŒ—1¦U–’<ê)’5ë¹\‘xÜ÷3”£ŽË „嘗ª£¦Cf3ŽjXah;¶»"–òk– ±î'Fb¹î4:xÿ‚•žÃÜpäÖ‘[‡yÀ‘Û…FnTGcr«Oþ"jiTy²NåuׄÇqóE˜‚^o8*é6ôG$¾"©|M›W“Ú. 
H2Rð|Ï4 X¨)§j³¬°ò?%{µå{k$|Ù‚\4ÚŸ:“q52šñÀRÂ4‘0Pñ‰30(]Æ'†dqÈÞ;aéî¶–ŸíFD’ÀüYkh2£*ö´2ÉÉÿ%7܃§ß˜Žé Ò¿fLÈí“9/ÕÜÉ¥g¸D<Ò~[>µÑ¤lI>Dl)iñf’a_3›~Õš,‹$,$nšc¸æw>»Í†¼S+Ei^àØ÷sQÈÙ]O¼Ž+n¾ÛÜ7úS žÊ Ü@²ðsõr–E¹a°CSY›h&†º–úe›¤)Üô)] ùtF|¿>;…fÉftèŒú°BpÐ>»bôˆ!,½ÁNZ[NÞ˳+{Û¶· ÀÜÌ"·¢ür#HÈmDÉ_[_S^~w*~uÕx¯ä¡œ†RªÈ2Õi®Ašá@mƒ­9(—#uÊØQȈ¸²þ&_{âœÊÕAmŠ~ó·Rr+’ ’¦a‹c9`ùe™-°œv“@dúÞÁˆbˆ=wø46[s¹]èàÈ­#·ó€#· ÞOn¥ #OZžSsWöÔ˜^Ó2úŒ)]ÏÞ£þi‡‹˜3݄͆dמlNUÞa*MÏâ1 *Ñyùóê· Bú©OÏ:§8DOB&.'k[=BËÌvŸ„èÈ-M$„Þü7-µÌ É{DRZKðn€©rÐÌëX Cyžô˨€>+ÿ’—öÓ<¸Úûeо¹jð—ž {$#Ý£xÙ`m ‘ SoI;ºGÿš(d}òYöz›–W¼‹”›žj3+$ý$žËA‰òLä ÙÄrŠ \ì7žHçߣλKöù;–—¾`è‰;‰bÑÓyÍ·ìW"ŒR´ƒdxFnmÐ; a¯‚w-hz¦tÌi¹MnåNµSm‘_%IDAT å&ä'ÞÏŽè4ʸT#Y‘,ëhR•3s[ É]2i§Lâ«Tîâ;„d›ÀÒ¡Ž-·º7©ßºNr×ÜÁ>Ûœ"‰îðŽ ¹"Ûç–°°Á‘[GnæGn>ˆ”Jk>ÕXA%$ ˆšLÊ¿Á\Q…çÅ6©~RÑ§à“™¥än ¯B²‹€·îi !*Ò(^eKõ_“³k0(™Q}š4×)R’>d4ÏeÍ_J:œÊ" =›ULÊU>Ž"qÉv6¯‰¬¬â“¤l‰^SòÌó©)¤4M—Ͱñó”_½®E4ó©ó¦üiNVŠÒ²"ò ÷½}TÑR¾%¥„ÊÚ/“ ­â–ÜYÖ¯°WQð*RlÊUSbñ‰ÜŠê‘ òýeAË­rU)÷›ÞIg &S•ÍLV®’ˆÊK’ü»S,_-Ò6¿+ŒÉY÷ËÏ»5HL.4úd/×›?—g4/±Hbƒ–{d9‘\Þ#ÜJ‘Þ¤0ûè‚“w`UGYF–%6Ǩ ×D"3YØl^<¦4[]S)’LH†u¯•+ÉÖÊ®¤®²¯ëUºTVãß T®õÓÊ·Å¥/rMbg졽9Íò w§º­Ö Ž°ZBÄΡŸïâ} Z‚áÈí’Gn—€,:,Žˆ‰k²‘-‘M*ué?þÎRix®›a0ëuYÁÛ¨ˆ´ŠT6SAÆÓR£²ÿè3nƒd+@1 4«­m “šFDkå"Y‚µPƒft×§xò/;IÛQMGà âï6>7‹"µewZË™2N6(YM=„2üŒ¬¥R› ‘,Q5Ê©.²I2®¹C¡Qôjôµi½…¨„®LÍÜtNùo2ï)’#Š…†8h¹R›_–òµU¦¸ß¤œBŸd‚÷… šÛSþ"ùZÙKƒŸ$äºßÅÐi \ˆè:)‹7¥îBEÊзIõcr³è€ù͵!-kžå(z }`ƒå´J^žùóIp5ý”fÆMI¾óDJ³¼_üÔ%P™×ó4sH–„..ËM>K¾>çí¹4;]|’:3abM 8ˆJÊÇ[©Rm—[€‚ßåry´É²O¹ÊÕG²ÕôyXŸ´ê¸»¬¨v7Én>Ò2Ò¬óìÔz,¯êHYÐ~+¤T¹¹•ë’Ê¨Þ ÷ì=åyojÔ¾¨,+v-Êᆰã¢Iþ4¤bAÄÖZ ºÑ'GYd,PùIYés¯W'%QójU*5‘*)`1¬&µ`˜÷Ð ‹ 2Æ“£òÊRye©#î'ŸÙEE¥©"ù©WÙn4益›×¤ôÞç75Š¿d9®T6Hž•Ò´V£JékpË´`¡|)ýf%?‘\­˜Fªà©±I…ª!ÂJ7ÈGYƒp(jy¹bÇ$"jP®”‡äÅü)n‘a–ܸ/+yìÕKrÁ­–‰•%ÑȰ-}ªgU)¨j=@Cþ<çù$ÇêXXL‹”w¤H/–=}%ȲW ¯2ì1è+PJ«–±ã’Ü+ÿö5a>‚f;°9¡å+íQË´£žéBƒ¡™.Z9Ïóšv–¼N’:•iMkg–zþÎhµ¬Y-É6î> ¤» qÉ!ág9‘Õ4¶H«Œ¦Qe¶6 BZ¾Å,Sšæ+Çüj¹fu´š™·Zê™Òf}ÙU™ŒC\nX) mã/”‰W`Ç‹”Lqgí½eTn#¦M–`MeeQfýWû£”;88,X8rë°H"QÃÔómdÑÒ±H.$üͦQà1­¢«…D¹dFkj.\*:©y}ðO°ÞI•–>mưÇ/³ñj}0ÏP™Êã//k/•¤¬c9Fë3hY*j¢ÑÝòÕó Õ¸j))ØxÎ }®Õ€£l3¶&$^®"¡Èl®a¶©B73”&‘оr%bkCk" ªÓ¬”ã,ˆð+n#Î1QMnM-è»?è3oVmfFr¬ûLÞ$`¼Ï÷³ÈæI°I$jf¤Ðçd=yÑ# òð)<ómeyÑ‚ "e1©T)Öw-@O"w˜š¶ÍòYamßd$ùÆo‰·3’Xu˜T²åm¾Ð2ó›ÎNÐ÷Å–Ú¦Fc鉮êÈl,Œ²•Œª˜¤µ”„>ÃΖäjË<³©™:´èk¯‰˜[ù6«þ©~Ç1ÅRÒ/Íô¬ÅBäwX÷•M-gœe`-0ÙësÓi²¦¨¡ˆ#‹åš›²ÔÌ+uþœFupX(ÀN±µ²ÿÎçva„Š­‚èÐlè]U«²˜6‘Ëeñö[oá­7ÞÅ´)Ó4û©|H”jyäÚûP¤85|L[Ù¥­ø—$8¤âlPY)îL“Œ&Ik ö*«7õi=Èwv"2⢠bMeJ2‘Í“´utvuaÔÈå0rÔr6J»PU^Ð’WÎâü&ôA+” FL¾ž}ö9Lž<ÉŽ”‰ê€ VMùj2ÏZQË`Z^T–SZŠ4°)ê$@çúüí‘NÈò¼_ ”m±Œ°AºO"+«\…ÏÍÊ¡QAG¾€aC»±ö'?ùúÊ]!ŸÕJd‹(âz˜6õ=<ǫ̃Ôûb_?æ5Šb’¦…´‚žD¨¥Œ5Ó„–6+8Å«*Ê$miÊÅçëÑñï ªŽ ;PÍË¿Êg†qñÉe¤R+¡ã9¾·Éï î¡èÜ1«®‚ ã©T*(µ.ãSÙ5Ä[• aás«| (£B£0ÔÏå¸UÙŠð裣·wª”q3¥.«ÊYâß. «¨.Ø”µšyÕꆡföàZ]Œ…”W·Å.¼pßå—ª²ì±Ü³-¨5"ä5Ë‚Ú ^ÖI¨)YÛ;:0hÐ`¬¹æ'Y¿s|¯‰L8Ÿ[6*n@™ÃÜpävÑÞ•‚æ}óÍ7qÞyçá°¯~ }ÓË$´²ê¤©ô‹(“¨ˆÚà5‹Tbª-ÂGå§ù-µ,o]íæHjÓÔ‹Š³§^F±³ƒ'ås—%é«ó–êõÈFJ{)’>}®$•ÓÒ¿Åö6\õÛßcÜf[aܸñȷɼðâwÞA[[;ì0ìºë®Xe•UP*Wl¾ÐdI$Lu V«±ó {ölHVA¥Dî@bŸo#! ¹%qÈh X¯€ÞZ¥(‹Îv -U¦œInk¬[ùÔøÎô½Þ–ï )ÛlÏ>ýî¾ë^œõ㟢³{²$‹(iïõâ?8ŸßûsÈ=”(Ÿ¶b;@êÈ5”‹d?R‡HÓÐQvU–=Yõ%@˜Y/QþäHÊÔñª“ìjm]µIZȵI‚ìñÚˆD8ÅÎDÄW”.MU–’9žˆ;î¹ k®½vØaôõõ¡ƒ„Låyîw¸(@„VĤ§§Çêú7¾ñ l½Í6XyÌJ$·ÓÑF‚Ù`åõšm&#²Uj:Mý¥ú-B+¿ZÉ—'iå6CÅׇJ¥Š[œsξgüàûðä°ºB¤Aáæ›o¶ß{챇•{)¤™P]оü5µ?7´"”ØZZ~ˆ!²~Ò¬?$µ”ü eÑàW¦‘Dów]ö4͈Ë{I ´ÂVg!Fµß|$o¿ýNÆ—ÁΟÝ-žÆjá¯|e_ŒÝèS(Wf²CUÀÌž~tw !©e«ÎØ©OÒÅ~*,{YîÈR«"Yá9u¶¢þ*ÚÒ”CÃ'Í "–C}Z×òÇ>ÈmY®ê‰ñ>ÍøáS™É/´T)ãìŸN8D9oï.+K¸HÝ"†¤–úꫯÆÒK/-·Ü>;DMh©aÒtË\e•u8%ù–Œ˜Z}W6]1Ën“2J¥õ¥‚ñF=|¼¦1‚²a®³œ² ¨²#—‘|Yîs$´ÚªœjNë\VÖc’c¾Çßüæ72dvß}÷VJ¨¼;r»d‘Û% ‹Kr+ëÉÃ?Œ•W^9&2Ìx͘>¼ånSN!³-h…äŸ •UÂS`|ÐÂ$«rwðP%yÕ칡B^Ö¶*å ^}åU#³Z—¾¿¿ÄÚ•Fžd¦³£¯¾þªµ? 
3ÔØ‹8<þøãØvÛmg5þQ„Öð§?ý Ï?ÿ¼¢„äÎ$°TöT OÅ<–eþ}’ýÞ™²°é0~nr~mí¨UJxé…ðϾ‚©Óf˜e}Ô\Å•Þãø-¶ÂãO<Ù²®/z¸æÚk°ÞúëÊŒ`®Õjm,·üñÊ4¶„k”>}SÒ¼LÊV  ­ìQnêtÈÚªò' b™ÄŽw±¨†$`”ûzønžÆ«¯½Šž™,{)^£òÊÎDï®X,˜¥VVyYnU?âOú‹T.•v•ÁW^yo¼±YQ5Àθ:e—Õ‰Úµß IYT^­'1ëXLòýtŽñ’"÷UIn«=χå®ÀKô5èé§Ÿ6ríû¹Ø]§UO¶Ûn;¼Àòìàà°àáÈ­Ãb…r¹l–))r)­þþ~û¤.e„ Û›±×^{ãwÞE6“³é©4$C"f[ É´F¶“thT´¶ßa{üæ·Wð7 m*ƒt.‡*Ÿ÷ò‹/⸣ŽÂ&oŠ)S¦’ 6I8²¨ˆdç²F$¤vˆðËš'Ù‰ä*H†š6éí·ÞÀ®ŸûÎ=÷\’zù'¦ÍçP\ùq&£ÿå癕_-ãkðO™¤ÿ¢Ÿý[o½-Þ~÷=ø™&ålºúŠ+ñéñ[P®;`½O­«¯»)þüžÕ§ríí(ÛûË[<1-Y´ÐÓ7¥R™Ÿ²ý–ßð_ï½{}~/Ld¹i4Dde—…‘Ô•rÌQ©ÜiÊ/ME%÷–,‰”âÙy§ñóŒ®ön± :ÚÛñÎ[¯cÿýöÅFoˆ Ö‹=w߯½ö&Ë`žOd9Îiyé”ÕC‘daQµ\©œªlª¬ª¾Ë^*•X¦XÇIzŸ{ö9ì½×>øå//#QmP„*›¬¯ ìx¦3ªçq}÷<•cŸï¦ˆcO8ŸýÜžfénë(°ƒÐb!Ë2èã·ß4·Ž±n„7ÞzµF@~œ2¿[¥G†EµÃàà°¸Á‘[‡Å íTö"“"ZR‚ú-Å.˘H˜,X"lݸøâ_à’K.Çñ'œˆÝ÷Ü ÷Ü{d -Ž9ük;vœpüqxûwpîÙ?Â+¯üÿò<ûâó0Fò̳ÏbçwÆU¿ý­>/Û©»Ó¬»¶„§–ú]ˆ]‘H#¨”d(â*b«Å Ë+i˜ˆÄc=†SNù&.¼ðgØi§pæ™gÚÔgÕþ>\{啨~«­IÞ¾€'Ÿz Ï>÷<îùë_ñìó/âïÿ˜`ËËw¶QéÃÕW_Eb»#n¿ý6¬¹úê8š„2;åªyLE ‹|õzŶxÔ¢ȶŽvä "ì± 5˜H§,7ÿáøùÏS¾y*öÚcOÜò‡›Pª$m%œpô1Ø|“pÄ‘Gâ5v.n¼é&¼øÒ˸öºëñü‹ùJ"„:ž{îi<ðà}øÙçâÇ?þ1žðžzêYPYEÀd5ŸÝ5°)ÀˆE•ÜŠÔªªŒÆ/Õë"%ʲE’ÚAyË*®æ#<ƒÓO?çœw.vøìN8ûœsÑ×_B©¿¿üÅ¥ì\m‹Ïî² žxüi<üè#˜0áa<ûìó¸ã®{Ñ`'OcYoë¥>|óä“(ëçô ­C’öµ„2ë · ²Š/Šn‹#¹uXì å-Å(šxŸ$“äBŸ{ãÏŽ}¸ûî»qI¬>3>þØ“8ô¯âùžÇN;l?þé,³ÌHÜtÓ8÷Çg¡“åLDK3"è“pбæj«á¾{ÿŒ3p†ø ’Y-½ëið“,o-Ø`a!7?ô¥iì<’u}’Õ rO,.¸àv.¶ëÎ8ýtÜzóHRoÅQÇcŸÞ'½7 ŸÝvüõ¾¿ã­·ß5ÙL™2Éæmjº%Šæë‡†#>¹|)’?WàIù¡ò"†øS½,šzÊ"ÚT1ír‡Ñ¿Y«`Q¼,ލWëøË=÷áÄO£=aÄiïÏï…—'NÄöÙ×^{ wÂ_ÿz/¾ûíïà–[oA‰ˆ:I×´éÓY¼2F¼6Úp}Üu×m3f%<øàƒÈw ¡KaFüL%ÒøÁI}X/EÌ‘\¹ÊªnéHZBá9eû±ÇÇ÷Ï8Wþöwvì»ß9=ôî¼óœpâ ¼6)“'cGv²îgY6½l犯uÝg®ÖÊ(÷LÇŸÿ|'+¯¸ù¢KÆaIÖ¾b°®$éqppXðXD5†ƒÃD3"§Œ¤Éº#‹>'Žß|ê¿ÿ=|뛜:õFï5…Z]~ç¼HE•¹EÌJ€ƒƒÃ…#·K¨pÒIO& ‹,”òí\~¹åí|µZFÐhÚ€¦Ÿ]ü3|zË-0b™Qøû}Ãi§jDX£³Eî4% 5(ÅÏ$ 7‡€×hàT£T‹—$³Hž”ß"ÊDV>¿©W«Ìo„4å7räÒ$ó5›SUSƒm²ÉÆ8ôƒ±æêkâµþÇvîúóŸMæ"š.M[}žoÖ*¸óæ›Íwذaøýu×c§í·D¥®O÷1‹e·ˆÃXVòþcX®øGS«ŠEŒ³ º»:l¤}_¹ªù pú·¿ƒ·ßÁffùû}÷ãÛßúrÅ<<_Öô4 sláÎ’|½ðâó8í”ob¯/ì«®º™\þr÷|'ò3—ç|þ⊄РKN[[;º:;1½g† °“¥wUÇà«е×X ï¾ó¾~Ô¸ó®?ó>¶ ì¤f³s)*³œ¦Òi^ó.¾÷Ýï`÷Ý÷°¥Ûl³úË5û$¨ PpÖ[‡…ŽÜ:," Þ±e$¡a˜FŠNäKŸÛÛ èïïÃaG‰÷&¿‹Í6Ûƒ† A•„C NÓ-]úë_ã•—_SF3d\õÈ"Péë3¨µéåÛ»8щ¨^3"%bª)‘âõøIÜÉveÙ )M^ÏƵ7þ«Œƒ1«®Ê;S$Wj”Ñ{“ÞÃ=ùkLÔÁÈú8çœs0sF¯Åó“ÿÛoµzfΜmi\,xó+Ö%?"áºø¤NšFJ2 (É8ŸM›[Ç©§}¯°ƒ / Ý$¸uʬ^0sfn½õOxíÕ×YnþÞ¼ýö›¸øâ+ð•/Î: µr«¬¾’ Š”K„†¬-©P§Ku¼Ny4>Ë®Êì]$²7Ýx£u(6;–ï%î¤ÊáùçžÃßþö+ªór¹ö†kpÅ•WàüóÏCw÷ \ø³Ÿ±c1{A#µ *Ë ŽÜÎ’^y²?pë°ð!ñÕV#ÿUÌågë¥Ú0rÔò¿Å&(;°öZë`µÕWFWg;V]eM¬±úºXqù•qÉ//#!Éâø#–Zj0~pæ™ä&l¹Íèîè “Ï( Z†ä!í!ÕÑ…eVXoº! 
rOÆÛ4bšžHZÒt^¼»È@ò‹”‡lÙB;67Þ¬Œ²Øn4v}*ù.ä yæ{cttÅö»îŠOï¸=îè!ô¿;‡~õ0l»ýØd‹-IvWG­R6«¤ObÚ‡ŒÀÖÛog€ºÚ ¶ŠVg±À“ìiÂÆ¦¦S§„ÍTë.j°×¯Ž&ª…-öÐ9h06ÞlS >KŠÕW]™×D1t8Ö_oC,½Ì²8÷âŸ#Êæqó­·bxw7N8ú[±m÷½ö@{6‡ž)Óg€aKÄÖ[movzK LŸ>'œx4¾üÅ=QÌŬÒél\þ)w ²\\Ú®¤ž+?r:R'B‹ ]7~s¬°â²Xn¹‘¬çkbØàáðÓy¬µÖÚèêêž{}ÛO=ÿ$žéyœpìØqëí1nÓ-±æºkàÝiïÙË˧rȲ-Xoì&»Ñ¦XcÍ5ùŽÖŦ®‚Öj`ó¢d|üÕZuBSäµò¬æ™eGýÕÕ°ÉŽ{³Î}&Lï¹$#MiÆ+Z!àíìXYx¼tmë>þ™3þS‡é_þã5jïâÛô;>nä_[åÁ2‡ø:åCéNBë»>þǃÂücV<­sX|áq˜Äi6FŒÉ“'ǽu‡… RÞFÊV_}uLœ8Ñ,µò½3³™•v¤Øë¶öí| •©T"¹fT'apg'r^'J3IÊÚ³³É«Öê—ÓAŽw‹®ÙÍö7ù”®9tå;ÙÞÖŽZ­Š}öÙøC¼8 Mc$wcŽ9§žzªMx.ÙåäŠÑB"5a`Ó¡\K×N j(ÔÂŒ©혙¦œHNsu­ðä!íIbê¶hÉWSqÇ2³ £Ô¼¬ª[z¶¦¯úùÏŽo~ó[f_Ô K~¹ÔÏ<5l¦m5+»Ev~ <½€±R }”HX:Si¤«ZmŒï¥À¼3ÿAÃcÙËðFn):YÁe¡•õ<^¼„2%ùÕŒ"FT*ùŽ?þxüð‡?4ÙŠèÚ»Í-z+”©Ž«MVÚ=öX›!Bù1™ÊRÝ"½ÌekP¢@I[ùÒŸê ‘À&·%Þ ÝÏ#­²èû˜JY¦|yÊ®­Á¸4»…™ætð(•Ë6¥˜fe(ó½M™2W\q¾óï´®ø(!“:ó™FÈŽ¡Ö¥/pÊã;f–4¥œ}ZIó7ê¬cidšÉ»‰"ƒ˜šÆAâ6,‘S 𵋓3­ßŠEË?ë^1±ÌYÀu>¾Vs…Ûû`Ú4Å]¨/XþRp«óºˆˆ7<Ç`Ë#ÛC‡žüNZ×ïú¿‚ÒpÛm·YÇf³Í6³ßVf ]ýh·ˆC K@ÿ{¨$•AA~pIê°pÁ|b ˆ±cÇÚJr"lÖ ²¸KÉ«=³ßìë<ÿÛr(’9T¦LFiê»HÕ´uh¹MÆÏF¬±lLd11î ËbQ²±ÏÎZû_~ºúô¼öÚŸ´99fÈ5CòÒÂ꼩Œë“î¬LYþâ Ì’›~·¶]Ù<òÜúíE„}eøll3”U>›O¢aw2^ ÓÜ®:кÝÎi+â"#È9fÌ;¾(âó{îiskš(-ð¡r©Oåqf[ùVàÏf M¹©|eIN5;Ež²Ê–ªxsâËÙYòHÄäïí¥ôU‚–³Z-ìW,¶Yü"´!‰«âx¼èÁŠ+®hKÖJ¾"bzß‹"ë³êÓª«®Š·ß~;nM–Ri²øS ª~QVï­þk_§$cʱû…& 6y*Pi Ë(¤ü5%›Y0O«ŽÏ*ï„VÐS{SjÍÅ,™*=”‡ÀÞ«Vd:øúø¦m:¸Œ|±ÉvUêš`º)×(_Û$ý‰ ¢¸®GK+÷xqLlgŸá¥<,yJÆ1©•,“ó³¡c…²³’—¹ÇÄ×&„²uÉ,$ûFfy^Û˜Ø zæÜÄöƒCåÝÚ3Bå%I“Ã≤ô8üH¨2(¨q:t(¦M›Ö:ë°0AïGkß}÷µÑüRìz‡ó´âSØ×ÇV¶KÎüÎ=õ4„õ2¢JhÖ’”èØ§56ïT úö¦AsÇ£†T XÄZK„.³Ì2Ö°/ÌéRÚ·Új+|ï{ß³)¿$ǹóö¯B’DuÌü狸ëâ‹Ëû(P)6£†^Œ.ù:Kfìn€ê˜¿cב!éD>òÈ#8í´ÓlªØ*¹èáˆ#ŽÀ…^h‹ Ø\¬T¨"Dsç9Œ4m*õ–šÈ±#ÐÔ f”Q¥\Ã7¾ð%Ìxõ5D$¸²€ÇŸ¢##U6¢¿ET“ø*qá÷¿ÿ½]£Á{ºføðáv|QD2Ç­ò½ýöÛãÄO´²ªú–,¢Li÷>9·‚V‚Ó yi–I¯ZÃ8·_s½« ïóH¨dof$C•Û9ïWÜ’½ÚÍJ¡éÅÎ?ÿ|Œ7Ξÿуï¤w£6j$Þõhè°³‘æý©Pr}ÙÓ;ÔûÔ³6ß|syä‘f¡ÿ¸Ê©ú…é4ÉuTFZ™`Ðjˆr±©o¦´Z`›}1R¹ðýd”å!!›,qÐq`=l±ÅÏëõ/Þ1þ°³­¢g›VÜ‚fœÐê2XjÌ…&D–i䊮ãjwSü¶Zi%ZѪ:èoëhŒØz<P}øÉO~b!Y×ÕÑSg^”ÅöN)oç–ÑÛù€D”ôøäø™Ï|Æ,J ¤ø¤\Ô€ê]i+‹Ù|ƒD"ÕÛ_ý5¼üðý¨gR8éòk1|•õÙ@°A¦r‰ÒØ‘B:È›¤Õ2Ï5œ‰….);jPf‚¦´&d@éÕ¾DbüˆB<ûç[zå \þ½Ÿb÷o€ ¾¼7ªŒ£˜)P‘QÉÉ5AVÊÌdø/””µä&‹mòþæ; "–”…„P¾?/Rìêõå·^C5ÿWü\ö[¼øòóØí„Ã1ŽåR¼€ÜØ/jÛ¸Ó@ñ'eM˜›ðê¸Ò¡t©Ã ÷»¨Ai”—$ ‘MŽ©žé÷ÀüÏ a¹_ÑÿÚk¸ö§còCÏ`¹µ?‰/ý⣕f€ü Näü,²bµ¼GÙÚ³ºÁúL˜6JQJËgü-vµ ¿ú tÐA™ÁAuPõ ©'‹<$<ÉÁ‘ÛK@ÿ{$ŸµcÿÉvÞyg<þøãö;Q^²$®Ã‚ƒH„Ž/õÊ“%1ç;(ù•õõ"zç= Neà‰ð™OÏYëÊ›×j?ifçŽGÏ–’KÄ¢`yTC¯t*HŽÊƒÒ>wÞ’0úÝ$¹í}ûüüÔÓ¯TõöÚ€ª¢¬#j\yñ¦¸‡þ·âô|¥£££ÃÒµ¨*!åCä<)“*ó’)ÿØÜµ’‡ÏmAÕˆ”ŠþݧŸAG#„ÏöÅ£²Oûš‹µ`åIa`<É»Kd–\£ÈP[Yt•ŽEÊ_’Ç$oÊËÀc‚¶e3;€òSûÀë)ÏÚ¤ÉÈ÷–Øi%iôy»Ïx4J«é½Ì+=OÏ,µ¯²š<û#«:'A#@P/£\éé§}{ï½þþ1yÊtüóŸ/ââ‹ÎÇ.;ï…çžžˆ VGÄë-PI‡…$šòÑ®ªµÀžÑµr¯ª±Þª†JÏiê9‘ú€×•« ’jÝ4X±¯§iŠP*Ux/;ì´j|Ç(—ú˜Ö*AÃ:¨ß>å›6û–[oE_¹Ä¢MJËë«õ**aÍâ¬3 òhü{¶1‰½hBÍ+Ç~½ºÇòª8YgzÙIWË'Z_Îô®t<©‹'nm» !&ª Ú×€›×_}–rH,"[Ãæð¿•_–gG­‰bdB ì¬Q×Ú&v ‡¹‘£¬ŠÔNY†´|“¥„¸‘²2·’‚RM²ð8Ä$Ô~°I Éï36‘¹•<[r#»²Kþ °«j£\óV^%Wµõ<Í7aãw³à¡Ù”ÞB1 ?€ë®½‰ÄÓÇqÇ}÷Üs;®½îJ,»ü<ûÌ“¸úwW«Æá•—_‰'œ€í¶Ýx n»í[P¤ö·Þ† /º?ú0Ž=îh}äxæÙgððCáë_û:N;í;xùŸ¯Ø\Õo¿ý.¸ðb<ñÄÓ8ûÌ3qðWÀŸÿ|7Jý5¦)ƒ;ÿ|Ï_„7ßx“¿Sxá¹gñ‹‹.À“L‹ÜE^|öy›©àÎ{ïÆko½ }z˜ð·àsŸÛG.<ï<#Ù²Ó–ëÉ+Inë+Älñ«ìÇ?b«ñû!k½îI RÒÑÒÙO=õ”¹çtvvÚuÒ׊ßañ…ccóU’ä“—>=%=tùzi«Ê$+—UD‡El5©A2TjÅ0ƒLÀwÚdc›áq5¨l¸[ôÂanP0>µŽ:}£¦’¬B63eRv’[<©Î8 DŠÿ<öôÙ×>ìÊï“A5e(.ù-šìDl¹ý¯ÑÔ'y† ˬʭÍÁ*öÈ…F=¥ô.4úÉ*þ‚G3dùHGše¼üò+˜6­ŒµÖÚ„äöd ±Ö[ÿ“¸êª_ãøã¾Ž 7X¯¾ø¶Ûj\pÞ˜’ÕR©LÊG>“Åô“ñô³áÝwÞÄ Ï=‡eHê9ø`¬°ÂŠØm·]`¾øâDLš4ƒ!"ÜhÓM°žV]Ûx,–]vÉçØ`ƒ 0|øR¶Ôp¹DýÖÙÅr˜Á—÷Ý묽6öÚç È’|ÊŸuÚÔLRDrVd/Œÿ妭Œ®AÝö\ë<S§M·nfôNÇ.ŸÙ [ÛÄ‘ÏmZ‹T0 MØÊµ½©÷ã_¿˜Äå@Ï“NþË_þ‚ .¸À­¯¶‚tºÃâ ÇÆæ i•ïN²UeÕü¥ßúÖ·ì˜*ÒÜ=AU"gÍ]x¡÷“4t³ ŸT"HRMÕQÏiæ… ²,l5°Œ0ßômZȹgÐYRð>¹FHÄ´jR¤e@%HO~m øTvòÇÕÄ;úÜ7=óV[K"Lvi†e2«Ù8Øž¤ëi4¼~ õ°Æk Îâô_Ã:¯Y4£ ™ÕlÙ¦ôKÕ}Yudë|Z,™åùýE}ÁÀúÜy4ªYŒ¹"õM?ž|òN¼8ñq¦ÒC©ZÄ!‡ŠwÜÿêWÈdÓ˜6s¦Î˜Æ#Ìì™i䮳£Å|iÊÀŒ¢!ïfGª{ðPô–«ˆØ¡ª‡u’×4J½=`ˆ°^%iŒ0eæ ›A_¥~^z0Íëxˆ½ëÎ6C’\ïMé1W¤f-´ç”X®kAíì8Œì„ž 
†|wîºë.Ü~Ã-øÉy?ÅÏ/þ6ûÔêðŠÔ¥U n‹Ô¹ÓB2IC«ç0NnÕÉ·v`[”ì+Ÿ2Bék«´i:Ád±ôyâ®à°øÂ½Ýù€*Â@ÿm}æÐ*Xš@\ç“Ï IPEÓqí;,*à»Õò–Ôh ô,#«}¦5kD(ŠËò`nH%Èߎ£Í‚‰‚2‰(Yôï! ¼HÖ"Öê?ù1jùX 9á%8âBJY’-²‡éuvŒÅ²C¡ÎW“uŸL×äÂ"i¶>hTäòC±ÑØqØr‹ 1eú+Øm÷íqÖY?ÄgvØ ·ßþIp[l½ ¾~ÔF(wÛcw\xÁøò—÷5kåØ 6À¨ÃH\û‘+´)bê© úËe’Zz’W}-¨T+”O²îæü‚zg|ÿ;8ýûßÇDû.V[m V\a4º:5¿n 'œpÎøÁY8ÿÂ_ ãçÑ™ï@À44r$Á$—wÝ|+ê½%|bÕÐW¯á†k®Ãÿýc?âsìÑxò‘GÑÆgé¹y²fÍ“+:käÖ¾ é—Èxˆ Og'+­¬ô­ô³¬Éš¡D«-þèG?—¾ô%Óà ±]2àÞðU¬Ýwß7ß|3¦Nj ‡*Žzˆ:§ßªp‰Å×ÁÁÁÁÁáà "Éô¨KÂF#—…ï}ÿ ì½÷îèíéÁgœ|«Œƒ‹~~Æß_ýêWq衇 R©Ù¯÷ß÷7¬¾Æø>¯í!±<¨Ûˆ§t”:Vò—M 7öYŸ!K‚Ù¤Ëjª¿ÑË.‹ŸýìBÜrË-5j¾ûÝïÚ Z rõÕWÇĉqÞyçaÏ=÷4câÑvuÖ1«é•W^‰›nº —^z)VZi%Üxã8ýôÓÍÂú“sÎÆöÛoM¢ª¹Ê5Çuݾ†ÆiØÅˆë¸t¬®‘ÎdHÒt{ZAôœsα—4¯­Ã’Gn?$T‰T±¾Ïž¬V§Ñúh›L‹'¼ôÒK¶Rä±Çk×ÿßÿýn¿ýv[ºX:S+è™Ï¾±Í@”3r,]::ëSY›“sÒ»²ÞJçêØý÷ßo«‘ 2Ä\çŒÃaI€#·ª0Z/^ÉÔs¼øâ‹ñ‹_üo½õÖ¬O‰;BÜ 988888|8¤5E"x¾ÈZŠ„®A=”ÇÒ#Fb¹ÑËb̘•Iæ:ÌÍ ^‹çpÕœ¶Ë­ø ,·üŠXzéQÔMò2hØ9-º ë¨ îWho7ß[ùáf2Yä¥ëëÕ $ÃõJÙfDÈå|sÉ“ÔçGYLeÍ•…VùÃ& ý6l˜éKC‘¥UçeíÕ2ö:¯8²Œ[ƒÓ’±*"­³]b‹m<‹Wç´M¬·Ó§O7"½öÚkcÓM7Eww·ntXáÈ퇄zˆú£Ê¥Š¨ ¢õ‰FëWßqdzz‘BRÁ’dÒ ÜWH®qáã æJå’ßyÅãµZ2Úúô)xòÖ›Pyû]‡ÂÚ;ìŒôàNv"B–Ö& nïCetÁ·Éjk’ÀÿÎ%iÓVNmyÞr ­þÎ ª<×â«Ê1ï´ :ê)Ä™·kþÚ{ÞGâgØÌ Ü—±Çã;’Qè­·ÞÆÓO?ƒûïû;~ñ‹Kð·¿ÿ ]68îÐCµAk"¾Š'1-‰Ð›zk¼uï½RVÝeW _}MöWÔa ø†ÕÆÈuÄ._¬Á2ÌÒàð‘BÍD\ÕÛVsøðá±5‹Ð(Î_|ï½÷žD•o®>µˆ'Áuøß`î⯉ڂcÞ| £{'abBߘµñ^['j~ ‘Ç·é!ddÒ TK@Kño`V%"Mr»òŒ©èzñU´gŠxyx'&Zýšè]s‡š}‰Í+…ÖH“ÄyÚ_²e7f]£BjJ@ì I°ì´™þÆs½“ºã‘+ — ?KE^­»ù³?,²ì lsGTJè~åy¬–óñf˜C´î8jºüF,µÁúü ’ŒŒÈmè#b˯Ú㵔Ȓ5IÙeÇ —ÿWuþ`Îã<`-¨×x„ÍÙ\òWr@7Ìqÿ’$ûñ6&º’YBzå¨ÕŽ$ÓXQÎvœçu Rë”Ûµ ͵9÷e5´uúùOŠ\ƒ%˸\¾?æ/b‘•Ÿ©$rƒj¹z¤ý4ò¾H¯o•ïbŸZ îÎJ'ËË,\’U͆YD•gÓ5é ƒ?+h™âPœÒHY† (N;(Y1>‹×âŸ3]UˆãV.ÌðUdÍÏ\iöù>¤7}Ÿ!Ë|²ÈW:Ñ£:—Äãà 8rëààààà°„C´p`0¼ïÀÇ =D”Äчÿ®988888888,6päÖÁÁÁÁÁÁÁÁa±#·‹ ¹uppppppppXlàÈ­ƒƒƒƒƒƒƒƒÃbGn8rëàààààààà°ØÀ‘[‡ÅŽÜ:88888888,6päÖÁÁÁÁÁÁÁÁa±#·‹ ¹uppppppppXlàÈ­ƒƒƒƒƒƒƒƒÃbGn8rëàààààààà°ØÀ‘[‡ÅŽÜ:88888888,6päÖÁÁÁÁÁÁÁÁa±Á¿ ·ÍV˜æ:g?“cs#tÊ‚öãC< 9ë_ s<<ùÁ0+qÿ.´.çŸÙ±ÎunÀ®!9÷/0ç©s¡ƒƒƒƒƒƒƒƒÃÂ<ÉmÖ†¡í×먉Z= ‹xDǵm8ãfúSgh0èü, ˜\ÙÚF¶ÿ‘#yC³ÉôÚ¿(~Ú€s"MžQHÒ§l@žl;¯}ÝW篷Ê'ŇgíêÊÙ÷ ¼7†ö’ëâý$=Jif!¹x@¨×ëÌBAÌz?³‘"YušF?RðQ­’(†M¼8ñ%’Û~4‚ ·.ofxsšL­‰T* M&¹å©nÒ<æq««Sñõ©˜Pzºçc€’§§HÓ>+¥|V&òŠ”IçÒ.CšÉi¥l6ìÀÜGõ;…È#afeV2óJÌ#/ÑãC¯‰L3ày>+ß7+ÿD¼_#⯄¤Æý ÅCéŸ}¯2îb±ˆ¥–Z £GF©TBWWe=çuó‡¤èÏ!?–gôTqÝ¡_Á?ÿp-R+®€ýu –Ú`}4ü /à;N#Ë÷±ézü·¤½¹› ɰY¯áÅ+.ÃuÇŒTÍÃøoMO<A>Oyey•ê>%Å>\“õTu&­¶ÂbpPÑS;¢Âý,rÕ &^s®>r?Ê«-û:6ÿæ€,e™V{GÉ9á}8„*„)ôL|Wº/úžzK¯3ûüæ:ø£#ô«Ôdo³àäì°ðCí1ÿ“®áÁsÏÅ„ÓNEÕ«b—K/Ç»ïÅö‚mo³BM•fÛ’‡ÓÅóÌ¢Õeæ¾···ß~'âqôp_VÑf“ ݶ ldk (Ð JS°GbÇQë¼] V;Š/–Rüȃâoíë¹zÉJ‡íÛ³™»Déæq‹©ö\¡u霡¯îQþ¬pP‘󨬋,nýVüŠ‹dX?[Ç“`¡ü%A×+.†V\³¯–¬g‡4 èäÉ“qÓM7a„ Ft Y‘„yZnƒ°Šz=ÄSO<'žxx2> ×,·râ¦Ïà‘ë¸ì¤ b¶×4 ­Î´X›]‘æõCX9 “ÔFÖÓ–¡–Oc°äĘ\4xžÿíÜÜ)™‡(Zð¦šxƒ®HóOFye]å‰Cç±,Öscös”˜ÙHˆíl(ö$ ÷ãkƒ @6›5‹í)§œ‚C=«­¶{aóì£8ü$ïÛYn?8æ®+ÎrûßCEÏYnÿGp–[‡Å jùßYngcžY”¢ÊårØe—]°ç»£54jNÀ³"hfÏ4ê%õ®_ªh¯qIýI™9R¾¸±ËBÌuòã@œ¢˜,¶Z"³”2ÍÍÏ–6j î+ÛTÆósl‘Õ}r¹õVçâÜ2×ö mãÌé¨=k^°ˆ”.ýž+´0û¤ËkI®#³ôÆÛãŽ;Ǽ]ëààààààààcžä6$ñ I fôLG{{IŠ„Wþ·unIëZ=Ù¦W*D@òÈËÑhÄî fÔÏÀóQTÕÅäs¤˜!I&/–2!jIÐox<ù­OðÉ5P¥c6ÈÊè_Mõ9_vdYÓê Ê“qËX~·"¹™LÆdà\fcžÌÈ|V…Y[’«0&ŠûîûE :mímh+¹?ë}j]üîÊ? 
^Žà“|•J}(•«ðsyTªýøö÷NèџÀ©§çûûû@&ÓY‰ÐÊY(ÌBÄNVN׉êøK/½„?þñ˜A’ªß²‚*ˆ„6È®e1n6=4y,—õe eÒ¼ùú?‘÷Û±òJ+ãškoã½i¦ÃÃ9çž?›·­Ì¾ B–i Mg|^“áï,‚j =4÷ýå~LŸÖ ^†r­4Ó,Ü”Ÿ&¡Á%‰%)ÖTj}=3õ=´µ[$Ú³¼gsvÒ¤ü)æ?à1vØ{È0Yž'<ø É⫇N"_%yõInCLš4e‘ ½ƒƒƒƒƒƒƒƒÃÜø—f¿Ùô‰LNÓ‹H6=̘>ÝHïèå–ÇæŸeGóÏLÄñLjgžšhÈÎÎv#˜I^>_Àc×ÇÎ;ï†Ñ£W0 ¦ˆ©_[[›=A–Hýîëë3ˤHœNi+¢'hÿ‘G!¹Þ?þ¸Ó=±Õ5„—I¡Æ¸!ÓifS`æô<::Ò$¯ }ø‘xý÷”%#ÏzNì6 ü¥,7«aeÞ£:N?ízÐÁ˜2ém’× y?vMï,ªççøì& -ÓK’ÝÖV@©¯‡rˆ-ÂFÔ¹õü ÊõʹȳÄß`úù|† ^Å”©1‰-•*¨Vc„J5ÀÐaKÖ¹ÄíA bü r+Ç‚H‘À‘D¥I}’3«£9wÞu‰ícØg¯íQ.•qûmwà{~~OœýãŸbï}öÅçöü,Jµ Šm±ô2¸þúëñ…/|çwž3M7ö•¯|_úÒ—ðä“Oâ™gž±óË-·Ö]w]pÀxôÑGqå•WâŠ+®°{.»ì2œþùxúé§mÆ€1cÆ`µÕVÇr¦L™DRô÷—1tøRhðÙ¹œOb¢Z©£¿ZÇÙ?ü) cÍ,©"²uM§Åxÿö÷`§vÆÚk«¬²*~ù«_‘¬×qÕe¿Æ Ͻ€ˆûßþÖñË_üŸûÜ^øÑ9?F=¨ã„“N—öÝøÃíì4qÆw¿ƒƒ÷߯¼<¯¼ôN>ù$ŒZfY ØjÛÏàÖÛî4™>üè#Øý³;á´S¿ÉÎÁ±·ùf˜9c:ÉnDrÜA¢ŸÅ/yŽ?þ$yä1˜Ñò±ÁMH­³â:8888888ÌÆ<É­×Ô€¬M[Q]‘¨¦pÕlb€B>žž~¼÷ö$¼þÚk¼)ÀR#:ñÌÓáO·Ü…³Ï>·Ür›¹<÷ÌÓ¸öº›ð×ûÀÈQKã¶ÛoÁ…žoÈçŸ÷ÞûW<ñÄS$¡ì¸ãŽøÓŸþ„ 7Ü#FŒÀ5×\cVd.ñ³Õ@ªwÞyûï¿?®»î:áe1tÈpœñ½3I•7‚Y<›a€´Ÿµ¤7êÀÚë¬ÁC:qãM×áɧŸ‚ŸË’ŒzèèêfÚþ†¶ß?þ6Úx#˜§œò-{Ô±æbQ,æQ­•ÑÑ^@WW;î»ï^üôœs1yòTü¤øÚë¯Åݾa½ß\~%n½õV³B{ìq¸ð‚ °ìèeðe¦wÂ?À×9 ×\w j¥þ|Ç_ð³‹~‰K.ÿ‰r€\&„lÂå¾>œáÏqÜq'à·þ‰„ÿ‹ÔÝmÄÖ§µH­#· e®}÷Ì“Üæ€K#˜`S}iDC3l @²˜óR8x¿¯`ä°‘Xș˜0áy ™Ç¦[~~º‚LÇrËoˆ <ƒ®ý=:}Íš†ŸÍbóñ`å1ËâÝ÷ÞÆƒ>ˆ'3gôc£ 7Ã2£–Ån»í¯~õP|ðØxãÍ…AÖÝÝwßx ‘»£>š×íf>¸Z­ë¤OÆY§ÿmÙ6ÜqÛè/÷!ÏØb¹M/‡0Ê"–=’ô&DÁ |ãä£Ð¨•lX£ZÇý÷Þgä}­5WÇW¾üE|í«‡Ã÷ ¸üб×GbÄò+¡kp;N>áì¼í&X{õ•1åÝ©xôá'1éÝ)ÔÕAÂüžzù5L.×1n›0yúL’àû0bÈ \ó»+qögà{ß9åþ*.¿ìjx=}X¾³éŽåqñ·ã¯?ˆF¦1œ¢¿ûæ?áû§Ÿƒ¨k~uÕ¥»á'ÍA_D_î"ÏÔ©D\“}mwppppppXØ0o·„Ä*hAs»êþzÈd²ˆÂ&vØ~œpü‰øú׎ÅOÏ»×ÿáZ¬²Ú æÐ APcÇ®‡•VZYYNÃ9ßGÍfˆpÖYg2®n¸áãwÊ,“²òjå­k¯½;츓¹h°™šéܤI“lU®ŽŽ³ÜŠø¾ýöÛøÜ®»âs»ìÊSöœ©S§0MdüŒÍ2 <$ì=3g`••VÄ>ûì…x?úáYH{iL›2ÕrµZ=ô¶ßa{|ë›ßdRSÈçÛPÓ˜F=¯^¯`èÐ.Œ]ÿSö¼Ë.ý ;ð€ýñòËqÁE™ïíîŸß ZºXXi¥•HÜ—±Î˜•Ç0ž<^}åUv<”úú±þF›cùU×D•‡L*°™|4Ó^­†þž>Iž)³Hk¶‰ï²ÃG‰øÍÆÜ¿âßsppppppX˜7¹M‰RqoÒ]&Ë-ñ¿,ˆò‹ýþ÷OÃ÷Ï<_üâ°Úê«òš4|’ß4Ï×U³Ô¦šix©4Ê•>»2?×O}êS$|ŸÀ¹çž‹ß]õ;á °é¦⥗&šßíºë~ o¾ù*~ýë_›kB2eØðáÃmð×óÏ?oWûŸüä'ñ.‰îc=†Ÿ]p!Îýé¹Xiåe’˜zš~€äOäXé/söûÀ¾‚¥†ÔIŞvذa6«A±XÀ®»î‚É$Çÿ÷øc8ï‚ópúßã}{¾¼Ér¬°ÑFÙ 1YŸ»»»±÷Þ{3ï\~ÙeI"«jUœXˆÐ×ßuÖ^‡„ñddW†8Ët|úÓãðÆÛïºÛn¿ öùÒ>8ä«㉧Ÿ$!'Q&¡:u9äPÜÿý$äcIB³6ËÂvÛmoî+¯¼²=cÃ6Ä AÝXë“ë`­µÖ6W‚/²3°×^ŸÇOÏ;A­Œœq:òííèÕd ”µ-$!I§sè©ÁˆüÉ'‚-·Ø“^'{<‰¸æ–õ¸n–ln¥ËÁÁÁÁÁÁÁÁ!Æ¿eFúèšÛTJÓf°îºëa“q›"K²Z¯$žY[¨@sÂ6)’¼‘Øl³M1f•¥‘/¤m!‚1+¯„ Æ®…õ7XÃpš×î¹ÇžØxã°û»aóÍ73ºÕ–[àÄO°Õ·Dâd Ýo¿ý°Â +`Ú´i$Ä[c›m¶±óãÆÃå—_ŽM6Ù„éÊÙÀ·/|qœsîOP(¦ÍÊ)‹m@˜Éf±þú›š_oWgÉypØol½åV2xãÙ=4«®º:ÚÚ‹Èå3øÆi§àg??¾ŸÅÉ'ŸŒ 7kVÚ®®. 
2»îú9l°ÁØ~ûíìØæã7g<áÓ›oÎßÝ”G€Ë®¸_Ü_tóú=3±ó|û·‘¯Ž®aC°þf›a­5VB‡æ¹¥¬ŠC±ñæã°îzŸbGÀ÷¿} ¶?9?‡žÁȽ‚¬Æ"¶n@™ƒƒƒƒƒƒƒÃl¤HŽÞÏŽUTÃ&Ib;jaH¢XE!ëÁkÖZx GZœG$1ÃJ{¨yMæÂ R¼&0›±Ïî9KE DèO1Ž*É™’&yão’ÝJŠÎ•Ë%û­é°‘9ù™ÊÒ›,ü ­>Ïëžr‰×‹‹hê@ÒöŒ&»Õ’Áø[S˜ehNZ>Ø ˜¦Î®’)Z é&óTC¹Zat>¿Ó~.YÄ‚ÏTôL‘Ëä· ‹mŠqׂø˜V@ËP&ÍÖ5ýýZé,Ÿ¤Ý'—ë‡THöS(¥ ¨òAE¯Œ|“y HÀCÊ/ËxÈxãÁ}|-”£¬ÛšXÄvýõ×·¹õöÃ!)ú*_³ 7Þ:®?t?¼|ÓuÀ £±ÿ¯®Åˆ±¢á7ÐðdšìèE|?i ·Œ—z–ÿõ’„¹› ɰɲùâ—áºãNFªæaü7ŽÆ¦'Ž Ÿ§¼²¼Ju…r¢ˆ›éì»ñ;uƒ› Š'd«Åö‡mU®šÁÄknÀU_ÿ2åÕÀ–_;ã¿õ°á£àÔ`QrNx²º°=î™ø,®5¼IÛKòʆÚaý&ÉÔÌ ¢"}v ÿÄ[ÅÒ4ò¦·ƒF&µ¯­B2˜JÏooï@G _«Á±”*Ï&½ØøXr]«4é¾”¦M áô3it´·£“A‡lÙaBi™ÖVÏ—r׳µUˆÓÙ<Àz¦Ü"<’Ý4¯ÕsÚ;:ÑÑÙa® )>GW{Zm‚׈Þ•½Ž”òżòœ²¡f–LfJñ)ßzžÚ œÌᣅ½OþÅË?za jkõï}˜ÃÇkÏU´hàœƒš9@ðs$¬$qíÝü%’hLËÎÍÉZÜð2º¦ú»"ú­Þ±Žù-r@¢iTˆãhÊjªëþ«Æ;¹OÏmíÎBr@Û'Å€“d0ÄYŠÓœBÈœ†óοoOr7ë^=Ç‚öãçÄçõ,ðàs7ËàÛ™ "’XYµdù6«3¯µûÒìt`ËçV~ÈsBlêýH…fÙ9z´-›ñ³Í„ÕbhD¾æ¦£"¶ŠRG úž®c²Mr_VH;¯c¼Šqj&Ýx´Vø0°ä$)™3E³ÑJ7OË2Ý$[4Þ­äÄÙáFÞÅ ñ~šéKóúÖé9Xçs#¡î qÊa+¯Æ¨y¥åÞÁ­Ü!ä¾!rÛ¤¼@r«{ô/ #›!ArÓ@:ÍÈ[Œ„÷³1BƒÃôé]¶~Ã‘ÅÆ$*&c"i"k³¡# ÆÌj“²˜Aê>Ý!"ûŸÈØlø gÚsç…$í ¸¯tXH~¶ö㟌Jd3&œÿ FÎç ‰(„9e¤ÃÿédœGý³]ó©Õ½º"… ˆ¯ÐßV,:ÍÞ‹Êiƈ#<2>çàààààààà`˜ç€2­Ü‘¤¦³üîêk1uʬ»ÎÚH7IX›¤g$‚QJ$7eÖMÑ® Y†DÖIK…hx¡Î"eŒŸEé†q\y˜4w„˜ÊâiDÎRbÔ²Åçɽç‘PŧtiV|T“•yZ+¸oSŸüyN{)³+þøbým!½Tu.vÃhΓá*í­Ýo¬`|Þþ$I+¹AÏh!ò(.u(×Lˆ²™N÷Pg êÌ[ž§™/þëïï³9}·Ûn;›ŽLÙDx>8TöÞgùÖÀÀj ×x^»ñˆF/ƒý®ü-†ŒÝ@æu¾A«|'òWæŠc @Òl ”Ÿ”½ô»ËñûãNFT¶üÖ1Øø¸Ãd‹(j@Y3Íÿ¬£¼Öcy×ÍÄeÇÁú¸Ö6… 4ØäØË}ñºëqÍaò\ã¿þ5lqêlh;$8˜?““݇¥L=ÕŒ(Õ—žÆ5€O=‹îõ6ƾW]‹Ìðá¼"`=Ù»ã¶FC6§åmšâCD®43€Çj,xÄ®ñB s¨ûøO«}É$©ù\uÒ¥Ù”ŧóú‡øqÜóâtÉuBiPº´è„s§¥[D—盺‡ô™?R$0žb`“Æ´Û}oÿûƒÒÿþ¤I«²)- ŒÐä¥ôÄÁ’Þ²#¦'ÓÉ~Ò¼Þcü¢PËó:É´gSiâu×]×\½d>x˜—ìÌ]ÄëEÊ'…:V}–:j麹dTvìh¨3— X}•×9ãXB"»¤aIʽ2)W«H²˜YêGÀÂ;£¿¨²`×"ÔY®{=v"ë ü=w¼Kre< ÂIUÙþÔShäÖ›UDZØ%E­QA=UE¹¿ÂöD/ÀÕýØ|S–”¹}5àóGÊF­5¯LZËëúY®{dî¤Ñæ .,,ÁÊtÈ’[ëC©Íˆ rÆ7J,ã=j.‚,š<ÞLUX½2-h5XAkˆ³iŠØ…yZngΜiŸ¾5xéÎ;ï4¦Qú ôÊD$4sƒÃG ö<ü`&ž;ïRÔï{3; Xï´Ãá­ºr$µê´Y?-²)ÚXíâ1”K8T&5Å^ù®;ñø/~ ÏËcôžŸÁr»î„j¾ …zš’òPÏe/BGcvæ¢ÖÌ*äû$³E?OÅŽË—$VúI<øÓÙèÇ­7ƺG€°?¬¦b,Ôì> dìÙy¨õ cÊxìÌŸ òÚÛÈl´Ö:úp¤»G"W¢ZÔÔ‹úŠ({×<í@ DnàÉv¡ŒWoº¯\v ™v¬|Ò1(nº!‚T]ìË¥Ø2AüåQ_¦!Ș'*b+Ë­øVŠW }?æIn‰­NiT~{{»ÐWpX°H­‰Üêý+“¼Ã‡ƒÇ2?$ÓĦé"Ö‹è+q{m &Fö„SÈG9Š FòQ$¹•]§æÉŒ¶d"i>T‹”Ë&,—[u-…J5Ä_(·rô’¤uÖ<6°z}Ê,"WaçLÆ\ç÷5@K(T•»óí¨•*¨’LåY®Ödc¾]¡’Û×ÚÓ¸cjÙf‚ B?ÊìhE­»æRìé”Ϻ^Á'X7ckÊø„¸§R ÖóNhõJšÙʸ#· /¼¦f Ï  ÖëjǸAÝ(÷TqO/^¢–Ò£…šÚ˜:Ê~üUXÆ…E Ò3rÃLVh4i† Ö:;'æInµÔ­æ™Õ©ÎÎN‹H&`‡‹x^à”u2b+’ëðÑA*l;´ëfŠØ|èòxàÝ—ð‚ïᵨ¿Oצ9? 
ú–¡O–uGÏ y–Ç ›6ëŠR#Ä´¥:q뛯Ü‘|½<ô‰Ìæšðj$·,ºšSÅa6r”Q†¥¬æi8<ý" b+d=¬ãùYhÄéSñ¢ŸA‰ä׫É$ƒ¯FýçtŸÃ v׬MHµEÄÎZg_Y¶¯oðÜ Ÿ -Ì¢5’ÜÖ3ú¨E2¯i®þwÐkR°Áw¶‰×á£Q°°ŠWïú3n¿øjœtÅoP*¤ÐK%XHei@%;¾È°æØ;øwSj,AÕ»úㆳÏÁ–{+o± zÛ²ìd‘i°ýHy3²z“ÄñZùßaXÇCÕm¬(D nvòd\|ÔqØô³;`=>‹éQ€b~pìïÉΕ½ÊŽ|µÞèGW3À—ýOþý!yÉÅèÏúIv} s¡Éiá"'f‡…ž,± åtͨŠAé‚RAGê*Ï!;ÍÉ-XÅH|S6†dQB³V¥Õ¾0~üxsQ˜æIn–\Dƒ~ô¼þ6®<ûuöÙ¨ç<Ô¨àHmÙ€D$d¶ ò1åëh¸îÁŽVý½wðµ=öÀ^‡~ Ûì¶3JÙ e•F[*§^Í– qx2a,ð qG)¶fk¨h–Ä+_ ðÈu7!bçj½¶@½½–ÄÿÚg'» »j½¦7JÞzêÜpé8éüŸ"*äØqͰnÇ岯Bš5kº´ÃB q=†z>bû¡¹V"dØÞFñÓ¬U,ÎìÐib¦zFezQó¸ýàpäÖÁaˆÜVl¶Š&YFoµ P… |[ÆY­DKÑqßšwªPSÕñúÄç‘okGǨ¥Í}CÚöLŽ2å%”MCcÜ!–¥žAM±pÑXf\‘Û¶F©¾ ê,zUñÙ¶6ëdÉ}ÁÉíÃA ¯fùNÒAhSXÖ£~{»}™‘pCq†¼#· ;ä9Ãv¶îGìÇ”N×ã¿t“åW55Ïl6ôMz+n…_,îùspø€ˆ§££ÊC_Ø@aðTërFluÖ3‹#{…äeÚqJo jËŒY ]ËŒFNQДVÖÜRT1mˆaí­_ +Sì(™ X¶L1I`•·ù"¼B;÷2ÖhWµ<º]ëðAÁN2À2Ù`ÖóÑ6ÑÏC¹ö.“¹ænWýޝupXøÑôšˆ4B—Å6ÍR¬¹ØEoãy>š< ™fH©]Qû²øÃ‘[‡`ãà±_ëe²³YôÊÿ<ãÛÈ~vŠmÞ|9êûTˆúí*Ð@¤ó‹kj@Ù¸Ö#¤ƒ&Ú3²}Ùiƒhƒæ –ß²ÃÈŽåLÊGK¶¤µÐEä![ì ™%‹(Sù؇!²i /&`’/û é "MŸÔÖå^ $[ÊŸÁ–^ç;ð88,ÜÐ<øu^-\%Zé¦t“5îë„fU ´UÚ‘[‡%Tf)AµB® mÅ6äR„•:2AlY“#~šä6Þ×”*­[H"d´J¥l3,¬×ÐÞh[r‡ BRðI¾ÌÝEJH!•B6›C†Š©à¥YC4Ý´Œª¯ VfþÏRž*ð½L\—£Ør«£" ®~;,ìÐW0 lV;«AÏ*·ñ¸þaoýMæVY2 ´ó¹upÕ5³,‹$²Í¢svÌ.ˆ÷í7ÿ$×.É L"ù/¦e{¤HHÊâÙ<âýä¹"È·TGä­Ü:³dC“âNDV«êDÉr+IÅ+?’ôj:@ÊÔ™å*“ÈÕá€24¡ŸØ%2iów¶Îª—lm‡hÙœœb°»kô5Ó”Û\\Vm#zǶBûjbt(¥k–€òìÈ­ƒƒƒƒƒƒƒƒÃbç–ààààààààà°ØÀ‘[‡ÅŽÜ:88888888,&þÃïT&ÈÒ†IEND®B`‚manila-10.0.0/doc/source/configuration/index.rst0000664000175000017500000000063213656750227021646 0ustar zuulzuul00000000000000============= Configuration ============= .. toctree:: :maxdepth: 1 shared-file-systems/overview shared-file-systems/api.rst shared-file-systems/drivers.rst shared-file-systems/log-files.rst shared-file-systems/config-options.rst shared-file-systems/samples/index.rst The Shared File Systems service works with many different drivers that you can configure by using these instructions. manila-10.0.0/doc/source/configuration/tables/0000775000175000017500000000000013656750362021256 5ustar zuulzuul00000000000000manila-10.0.0/doc/source/configuration/tables/manila-api.inc0000664000175000017500000000635713656750227023774 0ustar zuulzuul00000000000000.. _manila-api: .. list-table:: Description of API configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``admin_network_config_group`` = ``None`` - (String) If share driver requires to setup admin network for share, then define network plugin config options in some separate config group and set its name here. Used only with another option 'driver_handles_share_servers' set to 'True'. * - ``admin_network_id`` = ``None`` - (String) ID of neutron network used to communicate with admin network, to create additional admin export locations on. * - ``admin_subnet_id`` = ``None`` - (String) ID of neutron subnet used to communicate with admin network, to create additional admin export locations on. Related to 'admin_network_id'. * - ``api_paste_config`` = ``api-paste.ini`` - (String) File name for the paste.deploy config for manila-api. * - ``api_rate_limit`` = ``True`` - (Boolean) Whether to rate limit the API. * - ``db_backend`` = ``sqlalchemy`` - (String) The backend to use for database. * - ``max_header_line`` = ``16384`` - (Integer) Maximum line size of message headers to be accepted. Option max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). * - ``osapi_max_limit`` = ``1000`` - (Integer) The maximum number of items returned in a single response from a collection resource. 
* - ``osapi_share_base_URL`` = ``None`` - (String) Base URL to be presented to users in links to the Share API * - ``osapi_share_ext_list`` = - (List) Specify list of extensions to load when using osapi_share_extension option with manila.api.contrib.select_extensions. * - ``osapi_share_extension`` = ``manila.api.contrib.standard_extensions`` - (List) The osapi share extensions to load. * - ``osapi_share_listen`` = ``::`` - (String) IP address for OpenStack Share API to listen on. * - ``osapi_share_listen_port`` = ``8786`` - (Port number) Port for OpenStack Share API to listen on. * - ``osapi_share_workers`` = ``1`` - (Integer) Number of workers for OpenStack Share API service. * - ``share_api_class`` = ``manila.share.api.API`` - (String) The full class name of the share API class to use. * - ``volume_api_class`` = ``manila.volume.cinder.API`` - (String) The full class name of the Volume API class to use. * - ``volume_name_template`` = ``manila-share-%s`` - (String) Volume name template. * - ``volume_snapshot_name_template`` = ``manila-snapshot-%s`` - (String) Volume snapshot name template. * - **[oslo_middleware]** - * - ``enable_proxy_headers_parsing`` = ``False`` - (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. * - ``max_request_body_size`` = ``114688`` - (Integer) The maximum body size for each request, in bytes. * - ``secure_proxy_ssl_header`` = ``X-Forwarded-Proto`` - (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. manila-10.0.0/doc/source/configuration/tables/manila-hds_hnas.inc0000664000175000017500000000522713656750227025005 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-hds_hnas: .. list-table:: Description of HDS NAS share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``hitachi_hnas_admin_network_ip`` = ``None`` - (String) Specify IP for mounting shares in the Admin network. * - ``hitachi_hnas_allow_cifs_snapshot_while_mounted`` = ``False`` - (Boolean) By default, CIFS snapshots are not allowed to be taken when the share has clients connected because consistent point-in-time replica cannot be guaranteed for all files. Enabling this might cause inconsistent snapshots on CIFS shares. * - ``hitachi_hnas_cluster_admin_ip0`` = ``None`` - (String) The IP of the clusters admin node. Only set in HNAS multinode clusters. * - ``hitachi_hnas_driver_helper`` = ``manila.share.drivers.hitachi.hnas.ssh.HNASSSHBackend`` - (String) Python class to be used for driver helper. * - ``hitachi_hnas_evs_id`` = ``None`` - (Integer) Specify which EVS this backend is assigned to. * - ``hitachi_hnas_evs_ip`` = ``None`` - (String) Specify IP for mounting shares. * - ``hitachi_hnas_file_system_name`` = ``None`` - (String) Specify file-system name for creating shares. * - ``hitachi_hnas_ip`` = ``None`` - (String) HNAS management interface IP for communication between Manila controller and HNAS. 
* - ``hitachi_hnas_password`` = ``None`` - (String) HNAS user password. Required only if private key is not provided. * - ``hitachi_hnas_ssh_private_key`` = ``None`` - (String) RSA/DSA private key value used to connect into HNAS. Required only if password is not provided. * - ``hitachi_hnas_stalled_job_timeout`` = ``30`` - (Integer) The time (in seconds) to wait for stalled HNAS jobs before aborting. * - ``hitachi_hnas_user`` = ``None`` - (String) HNAS username Base64 String in order to perform tasks such as create file-systems and network interfaces. * - **[hnas1]** - * - ``share_backend_name`` = ``None`` - (String) The backend name for a given driver implementation. * - ``share_driver`` = ``manila.share.drivers.generic.GenericShareDriver`` - (String) Driver to use for share creation. manila-10.0.0/doc/source/configuration/tables/manila-zfssa.inc0000664000175000017500000000423413656750227024341 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-zfssa: .. list-table:: Description of ZFSSA share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``zfssa_auth_password`` = ``None`` - (String) ZFSSA management authorized userpassword. * - ``zfssa_auth_user`` = ``None`` - (String) ZFSSA management authorized username. * - ``zfssa_data_ip`` = ``None`` - (String) IP address for data. * - ``zfssa_host`` = ``None`` - (String) ZFSSA management IP address. * - ``zfssa_manage_policy`` = ``loose`` - (String) Driver policy for share manage. A strict policy checks for a schema named manila_managed, and makes sure its value is true. A loose policy does not check for the schema. * - ``zfssa_nas_checksum`` = ``fletcher4`` - (String) Controls checksum used for data blocks. * - ``zfssa_nas_compression`` = ``off`` - (String) Data compression-off, lzjb, gzip-2, gzip, gzip-9. * - ``zfssa_nas_logbias`` = ``latency`` - (String) Controls behavior when servicing synchronous writes. * - ``zfssa_nas_mountpoint`` = - (String) Location of project in ZFS/SA. * - ``zfssa_nas_quota_snap`` = ``true`` - (String) Controls whether a share quota includes snapshot. * - ``zfssa_nas_rstchown`` = ``true`` - (String) Controls whether file ownership can be changed. * - ``zfssa_nas_vscan`` = ``false`` - (String) Controls whether the share is scanned for viruses. * - ``zfssa_pool`` = ``None`` - (String) ZFSSA storage pool name. * - ``zfssa_project`` = ``None`` - (String) ZFSSA project name. * - ``zfssa_rest_timeout`` = ``None`` - (String) REST connection timeout (in seconds). manila-10.0.0/doc/source/configuration/tables/manila-emc.inc0000664000175000017500000000266413656750227023764 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-emc: .. 
list-table:: Description of EMC share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``emc_nas_login`` = ``None`` - (String) User name for the EMC server. * - ``emc_nas_password`` = ``None`` - (String) Password for the EMC server. * - ``emc_nas_root_dir`` = ``None`` - (String) The root directory where shares will be located. * - ``emc_nas_server`` = ``None`` - (String) EMC server hostname or IP address. * - ``emc_nas_server_container`` = ``None`` - (String) DEPRECATED: Storage processor to host the NAS server. Obsolete. Unity driver supports nas server auto load balance. * - ``emc_nas_server_port`` = ``8080`` - (Port number) Port number for the EMC server. * - ``emc_nas_server_secure`` = ``True`` - (Boolean) Use secure connection to server. * - ``emc_share_backend`` = ``None`` - (String) Share backend. manila-10.0.0/doc/source/configuration/tables/manila-huawei.inc0000664000175000017500000000145613656750227024500 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-huawei: .. list-table:: Description of Huawei share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``manila_huawei_conf_file`` = ``/etc/manila/manila_huawei_conf.xml`` - (String) The configuration file for the Manila Huawei driver. manila-10.0.0/doc/source/configuration/tables/manila-zfs.inc0000664000175000017500000000622713656750227024021 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-zfs: .. list-table:: Description of ZFS share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``zfs_dataset_creation_options`` = ``None`` - (List) Define here list of options that should be applied for each dataset creation if needed. Example: compression=gzip,dedup=off. Note that, for secondary replicas option 'readonly' will be set to 'on' and for active replicas to 'off' in any way. Also, 'quota' will be equal to share size. Optional. * - ``zfs_dataset_name_prefix`` = ``manila_share_`` - (String) Prefix to be used in each dataset name. Optional. * - ``zfs_dataset_snapshot_name_prefix`` = ``manila_share_snapshot_`` - (String) Prefix to be used in each dataset snapshot name. Optional. * - ``zfs_migration_snapshot_prefix`` = ``tmp_snapshot_for_share_migration_`` - (String) Set snapshot prefix for usage in ZFS migration. Required. * - ``zfs_replica_snapshot_prefix`` = ``tmp_snapshot_for_replication_`` - (String) Set snapshot prefix for usage in ZFS replication. Required. * - ``zfs_service_ip`` = ``None`` - (String) IP to be added to admin-facing export location. 
Required. * - ``zfs_share_export_ip`` = ``None`` - (String) IP to be added to user-facing export location. Required. * - ``zfs_share_helpers`` = ``NFS=manila.share.drivers.zfsonlinux.utils.NFSviaZFSHelper`` - (List) Specify list of share export helpers for ZFS storage. It should look like following: 'FOO_protocol=foo.FooClass,BAR_protocol=bar.BarClass'. Required. * - ``zfs_ssh_private_key_path`` = ``None`` - (String) Path to SSH private key that should be used for SSH'ing ZFS storage host. Not used for replication operations. Optional. * - ``zfs_ssh_user_password`` = ``None`` - (String) Password for user that is used for SSH'ing ZFS storage host. Not used for replication operations. They require passwordless SSH access. Optional. * - ``zfs_ssh_username`` = ``None`` - (String) SSH user that will be used in 2 cases: 1) By manila-share service in case it is located on different host than its ZFS storage. 2) By manila-share services with other ZFS backends that perform replication. It is expected that SSH'ing will be key-based, passwordless. This user should be passwordless sudoer. Optional. * - ``zfs_use_ssh`` = ``False`` - (Boolean) Remote ZFS storage hostname that should be used for SSH'ing. Optional. * - ``zfs_zpool_list`` = ``None`` - (List) Specify list of zpools that are allowed to be used by backend. Can contain nested datasets. Examples: Without nested dataset: 'zpool_name'. With nested dataset: 'zpool_name/nested_dataset_name'. Required. manila-10.0.0/doc/source/configuration/tables/manila-spectrumscale_knfs.inc0000664000175000017500000000376213656750227027113 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-spectrumscale_knfs: .. list-table:: Description of IBM Spectrum Scale KNFS share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``gpfs_mount_point_base`` = ``$state_path/mnt`` - (String) Base folder where exported shares are located. * - ``gpfs_nfs_server_list`` = ``None`` - (List) A list of the fully qualified NFS server names that make up the OpenStack Manila configuration. * - ``gpfs_nfs_server_type`` = ``CES`` - (String) NFS Server type. Valid choices are "CES" (Ganesha NFS) or "KNFS" (Kernel NFS). * - ``gpfs_share_export_ip`` = ``None`` - (Host address) IP to be added to GPFS export string. * - ``gpfs_share_helpers`` = ``KNFS=manila.share.drivers.ibm.gpfs.KNFSHelper, CES=manila.share.drivers.ibm.gpfs.CESHelper`` - (List) Specify list of share export helpers. * - ``gpfs_ssh_login`` = ``None`` - (String) GPFS server SSH login name. * - ``gpfs_ssh_password`` = ``None`` - (String) GPFS server SSH login password. The password is not needed, if 'gpfs_ssh_private_key' is configured. * - ``gpfs_ssh_port`` = ``22`` - (Port number) GPFS server SSH port. * - ``gpfs_ssh_private_key`` = ``None`` - (String) Path to GPFS server SSH private key for login. * - ``is_gpfs_node`` = ``False`` - (Boolean) True:when Manila services are running on one of the Spectrum Scale node. False:when Manila services are not running on any of the Spectrum Scale node. 
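The driver option tables above are reference material; as a quick orientation, the following is a minimal, illustrative ``manila.conf`` backend section assembled from the ZFSonLinux options listed in the table above. Only the option names are taken from that table: the backend name, IP addresses, and pool name are placeholder values, and the ``share_driver`` class path is an assumption that should be verified against the installed Manila release.

.. code-block:: ini

   [zfsonlinux1]
   # Assumed driver class path for the ZFSonLinux backend; confirm it
   # against your Manila release before relying on it.
   share_driver = manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver
   share_backend_name = zfsonlinux1
   driver_handles_share_servers = False
   # Placeholder addresses for the user-facing and admin-facing export locations.
   zfs_share_export_ip = 203.0.113.10
   zfs_service_ip = 192.0.2.10
   # Placeholder zpool name; nested datasets such as 'tank/manila' are also accepted.
   zfs_zpool_list = tank
   zfs_share_helpers = NFS=manila.share.drivers.zfsonlinux.utils.NFSviaZFSHelper
   zfs_dataset_creation_options = compression=gzip,dedup=off

A backend section like this is normally activated by adding its name to ``enabled_share_backends`` in the ``[DEFAULT]`` group; the remaining options from the table keep their documented defaults unless the deployment requires otherwise.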
manila-10.0.0/doc/source/configuration/tables/manila-powermax.inc0000664000175000017500000000212113656750227025046 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-powermax: .. list-table:: Description of Dell EMC PowerMax share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``powermax_ethernet_ports`` = ``None`` - (List) Comma separated list of ports that can be used for share server interfaces. Members of the list can be Unix-style glob expressions. * - ``powermax_server_container`` = ``None`` - (String) Data mover to host the NAS server. * - ``powermax_share_data_pools`` = ``None`` - (List) Comma separated list of pools that can be used to persist share data. manila-10.0.0/doc/source/configuration/tables/manila-infinidat.inc0000664000175000017500000000172713656750227025164 0ustar zuulzuul00000000000000.. _manila-infinidat: .. list-table:: Description of INFINIDAT InfiniBox share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``infinibox_hostname`` = ``None`` - (String) The name (or IP address) for the INFINIDAT Infinibox storage system. * - ``infinibox_login`` = ``None`` - (String) Administrative user account name used to access the INFINIDAT Infinibox storage system. * - ``infinibox_password`` = ``None`` - (String) Password for the administrative user account specified in the infinibox_login option. * - ``infinidat_pool_name`` = ``None`` - (String) Name of the pool from which volumes are allocated. * - ``infinidat_nas_network_space_name`` = ``None`` - (String) Name of the NAS network space on the INFINIDAT InfiniBox. * - ``infinidat_thin_provision`` = ``True`` - (Boolean) Use thin provisioning manila-10.0.0/doc/source/configuration/tables/manila-hds_hsp.inc0000664000175000017500000000161113656750227024637 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-hds_hsp: .. list-table:: Description of HDS HSP share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[hsp1]** - * - ``share_backend_name`` = ``None`` - (String) The backend name for a given driver implementation. * - ``share_driver`` = ``manila.share.drivers.generic.GenericShareDriver`` - (String) Driver to use for share creation. manila-10.0.0/doc/source/configuration/tables/manila-unity.inc0000664000175000017500000000235013656750227024360 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. 
The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-unity: .. list-table:: Description of Dell EMC Unity share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``unity_ethernet_ports`` = ``None`` - (List) Comma separated list of ports that can be used for share server interfaces. Members of the list can be Unix-style glob expressions. * - ``unity_server_meta_pool`` = ``None`` - (String) Pool to persist the meta-data of NAS server. * - ``unity_share_data_pools`` = ``None`` - (List) Comma separated list of pools that can be used to persist share data. * - ``unity_share_server`` = ``None`` - One of NAS server names in Unity, it is used for share creation when the driver is in ``DHSS=False`` mode.. manila-10.0.0/doc/source/configuration/tables/manila-redis.inc0000664000175000017500000000313113656750227024314 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-redis: .. list-table:: Description of Redis configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[matchmaker_redis]** - * - ``check_timeout`` = ``20000`` - (Integer) Time in ms to wait before the transaction is killed. * - ``host`` = ``127.0.0.1`` - (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url * - ``password`` = - (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url * - ``port`` = ``6379`` - (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url * - ``sentinel_group_name`` = ``oslo-messaging-zeromq`` - (String) Redis replica set name. * - ``sentinel_hosts`` = - (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g., [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url * - ``socket_timeout`` = ``10000`` - (Integer) Timeout in ms on blocking socket operations. * - ``wait_timeout`` = ``2000`` - (Integer) Time in ms to wait between connection attempts. manila-10.0.0/doc/source/configuration/tables/manila-infortrend.inc0000664000175000017500000000275413656750227025372 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-infortrend: .. list-table:: Description of Infortrend Manila driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``infortrend_nas_ip`` = ``None`` - (String) Infortrend NAS ip. 
It is the ip for management. * - ``infortrend_nas_user`` = ``manila`` - (String) Infortrend NAS username. * - ``infortrend_nas_password`` = ``None`` - (String) Password for the Infortrend NAS server. This is not necessary if infortrend_nas_ssh_key is set. * - ``infortrend_nas_ssh_key`` = ``None`` - (String) SSH key for the Infortrend NAS server. This is not necessary if infortrend_nas_password is set. * - ``infortrend_share_pools`` = ``None`` - (String) Infortrend nas pool name list. It is separated with comma. * - ``infortrend_share_channels`` = ``None`` - (String) Infortrend channels for file service. It is separated with comma. * - ``infortrend_cli_timeout`` = ``30`` - (Integer) CLI timeout in seconds. manila-10.0.0/doc/source/configuration/tables/manila-maprfs.inc0000664000175000017500000000317713656750227024510 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-maprfs: .. list-table:: Description of MapRFS share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``maprfs_base_volume_dir`` = ``/`` - (String) Path in MapRFS where share volumes must be created. * - ``maprfs_cldb_ip`` = ``None`` - (List) The list of IPs or hostnames of CLDB nodes. * - ``maprfs_clinode_ip`` = ``None`` - (List) The list of IPs or hostnames of nodes where mapr-core is installed. * - ``maprfs_rename_managed_volume`` = ``True`` - (Boolean) Specify whether existing volume should be renamed when start managing. * - ``maprfs_ssh_name`` = ``mapr`` - (String) Cluster admin user ssh login name. * - ``maprfs_ssh_port`` = ``22`` - (Port number) CLDB node SSH port. * - ``maprfs_ssh_private_key`` = ``None`` - (String) Path to SSH private key for login. * - ``maprfs_ssh_pw`` = ``None`` - (String) Cluster node SSH login password, This parameter is not necessary, if 'maprfs_ssh_private_key' is configured. * - ``maprfs_zookeeper_ip`` = ``None`` - (List) The list of IPs or hostnames of ZooKeeper nodes. manila-10.0.0/doc/source/configuration/tables/manila-nexentastor5.inc0000664000175000017500000000512013656750227025645 0ustar zuulzuul00000000000000.. _manila-nexentastor5: .. list-table:: Description of NexentaStor5 configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``nexenta_rest_addresses`` = ``None`` - (List) One or more comma delimited IP addresses for management communication with NexentaStor appliance. * - ``nexenta_rest_port`` = ``8443`` - (Integer) Port to connect to Nexenta REST API server. * - ``nexenta_use_https`` = ``True`` - (Boolean) Use HTTP secure protocol for NexentaStor management REST API connections. * - ``nexenta_user`` = ``admin`` - (String) User name to connect to Nexenta SA. * - ``nexenta_password`` = ``None`` - (String) Password to connect to Nexenta SA. * - ``nexenta_pool`` = ``pool1`` - (String) Pool name on NexentaStor. * - ``nexenta_nfs`` = ``True`` - (Boolean) Defines whether share over NFS is enabled. * - ``nexenta_ssl_cert_verify`` = ``False`` - (Boolean) Defines whether the driver should check ssl cert. 
* - ``nexenta_rest_connect_timeout`` = ``30`` - (Float) Specifies the time limit (in seconds), within which the connection to NexentaStor management REST API server must be established. * - ``nexenta_rest_read_timeout`` = ``300`` - (Float) Specifies the time limit (in seconds), within which NexentaStor management REST API server must send a response. * - ``nexenta_rest_backoff_factor`` = ``1`` - (Float) Specifies the backoff factor to apply between connection attempts to NexentaStor management REST API server. * - ``nexenta_rest_retry_count`` = ``5`` - (Integer) Specifies the number of times to repeat NexentaStor management REST API call in case of connection errors and NexentaStor appliance EBUSY or ENOENT errors. * - ``nexenta_nas_host`` = ``None`` - (Hostname) Data IP address of Nexenta storage appliance. * - ``nexenta_mount_point_base`` = ``$state_path/mnt`` - (String) Base directory that contains NFS share mount points. * - ``nexenta_share_name_prefix`` = ``share-`` - (String) Nexenta share name prefix. * - ``nexenta_folder`` = ``folder`` - (String) Parent folder on NexentaStor. * - ``nexenta_dataset_compression`` = ``on`` - (String) Compression value for new ZFS folders. * - ``nexenta_thin_provisioning`` = ``True`` - (Boolean) If True shares will not be space guaranteed and overprovisioning will be enabled. * - ``nexenta_dataset_record_size`` = ``131072`` - (Integer) Specifies a suggested block size in for files in a file system. (bytes) manila-10.0.0/doc/source/configuration/tables/manila-compute.inc0000664000175000017500000000347613656750227024676 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-compute: .. list-table:: Description of Compute configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``nova_admin_auth_url`` = ``http://localhost:5000/v2.0`` - (String) DEPRECATED: Identity service URL. This option isn't used any longer. Please use [nova] url instead. * - ``nova_admin_password`` = ``None`` - (String) DEPRECATED: Nova admin password. This option isn't used any longer. Please use [nova] password instead. * - ``nova_admin_tenant_name`` = ``service`` - (String) DEPRECATED: Nova admin tenant name. This option isn't used any longer. Please use [nova] tenant instead. * - ``nova_admin_username`` = ``nova`` - (String) DEPRECATED: Nova admin username. This option isn't used any longer. Please use [nova] username instead. * - ``nova_catalog_admin_info`` = ``compute:nova:adminURL`` - (String) DEPRECATED: Same as nova_catalog_info, but for admin endpoint. This option isn't used any longer. * - ``nova_catalog_info`` = ``compute:nova:publicURL`` - (String) DEPRECATED: Info to match when looking for nova in the service catalog. Format is separated values of the form: :: This option isn't used any longer. * - ``os_region_name`` = ``None`` - (String) Region name of this node. manila-10.0.0/doc/source/configuration/tables/manila-hnas.inc0000664000175000017500000000143713656750227024146 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. 
It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-hnas: .. list-table:: Description of hnas configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``hds_hnas_driver_helper`` = ``manila.share.drivers.hitachi.ssh.HNASSSHBackend`` - (String) Python class to be used for driver helper. manila-10.0.0/doc/source/configuration/tables/manila-generic.inc0000664000175000017500000002265413656750227024635 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-generic: .. list-table:: Description of Generic share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``connect_share_server_to_tenant_network`` = ``False`` - (Boolean) Attach share server directly to share network. Used only with Neutron and if driver_handles_share_servers=True. * - ``container_volume_group`` = ``manila_docker_volumes`` - (String) LVM volume group to use for volumes. This volume group must be created by the cloud administrator independently from manila operations. * - ``driver_handles_share_servers`` = ``None`` - (Boolean) There are two possible approaches for share drivers in Manila. First is when share driver is able to handle share-servers and second when not. Drivers can support either both or only one of these approaches. So, set this opt to True if share driver is able to handle share servers and it is desired mode else set False. It is set to None by default to make this choice intentional. * - ``goodness_function`` = ``None`` - (String) String representation for an equation that will be used to determine the goodness of a host. * - ``interface_driver`` = ``manila.network.linux.interface.OVSInterfaceDriver`` - (String) Vif driver. Used only with Neutron and if driver_handles_share_servers=True. * - ``manila_service_keypair_name`` = ``manila-service`` - (String) Keypair name that will be created and used for service instances. Only used if driver_handles_share_servers=True. * - ``max_time_to_attach`` = ``120`` - (Integer) Maximum time to wait for attaching cinder volume. * - ``max_time_to_build_instance`` = ``300`` - (Integer) Maximum time in seconds to wait for creating service instance. * - ``max_time_to_create_volume`` = ``180`` - (Integer) Maximum time to wait for creating cinder volume. * - ``max_time_to_extend_volume`` = ``180`` - (Integer) Maximum time to wait for extending cinder volume. * - ``ovs_integration_bridge`` = ``br-int`` - (String) Name of Open vSwitch bridge to use. * - ``path_to_private_key`` = ``None`` - (String) Path to host's private key. * - ``path_to_public_key`` = ``~/.ssh/id_rsa.pub`` - (String) Path to hosts public key. Only used if driver_handles_share_servers=True. 
* - ``protocol_access_mapping`` = ``{'ip': ['nfs'], 'user': ['cifs']}`` - (Dict) Protocol access mapping for this backend. Should be a dictionary comprised of {'access_type1': ['share_proto1', 'share_proto2'], 'access_type2': ['share_proto2', 'share_proto3']}. * - ``service_image_name`` = ``manila-service-image`` - (String) Name of image in Glance, that will be used for service instance creation. Only used if driver_handles_share_servers=True. * - ``service_instance_flavor_id`` = ``100`` - (String) ID of flavor, that will be used for service instance creation. Only used if driver_handles_share_servers=True. * - ``service_instance_name_or_id`` = ``None`` - (String) Name or ID of service instance in Nova to use for share exports. Used only when share servers handling is disabled. * - ``service_instance_name_template`` = ``manila_service_instance_%s`` - (String) Name of service instance. Only used if driver_handles_share_servers=True. * - ``service_instance_network_helper_type`` = ``neutron`` - (String) DEPRECATED: Used to select between neutron and nova helpers when driver_handles_share_servers=True. Obsolete. This option isn't used any longer because nova networking is no longer supported. * - ``service_instance_password`` = ``None`` - (String) Password for service instance user. * - ``service_instance_security_group`` = ``manila-service`` - (String) Security group name, that will be used for service instance creation. Only used if driver_handles_share_servers=True. * - ``service_instance_smb_config_path`` = ``$share_mount_path/smb.conf`` - (String) Path to SMB config in service instance. * - ``service_instance_user`` = ``None`` - (String) User in service instance that will be used for authentication. * - ``service_net_name_or_ip`` = ``None`` - (String) Can be either name of network that is used by service instance within Nova to get IP address or IP address itself for managing shares there. Used only when share servers handling is disabled. * - ``service_network_cidr`` = ``10.254.0.0/16`` - (String) CIDR of manila service network. Used only with Neutron and if driver_handles_share_servers=True. * - ``service_network_division_mask`` = ``28`` - (Integer) This mask is used for dividing service network into subnets, IP capacity of subnet with this mask directly defines possible amount of created service VMs per tenant's subnet. Used only with Neutron and if driver_handles_share_servers=True. * - ``service_network_name`` = ``manila_service_network`` - (String) Name of manila service network. Used only with Neutron. Only used if driver_handles_share_servers=True. * - ``share_helpers`` = ``CIFS=manila.share.drivers.helpers.CIFSHelperIPAccess, NFS=manila.share.drivers.helpers.NFSHelper`` - (List) Specify list of share export helpers. * - ``share_mount_path`` = ``/shares`` - (String) Parent path in service instance where shares will be mounted. * - ``share_mount_template`` = ``mount -vt %(proto)s %(options)s %(export)s %(path)s`` - (String) The template for mounting shares for this backend. Must specify the executable with all necessary parameters for the protocol supported. 'proto' template element may not be required if included in the command. 'export' and 'path' template elements are required. It is advisable to separate different commands per backend. * - ``share_unmount_template`` = ``umount -v %(path)s`` - (String) The template for unmounting shares for this backend. Must specify the executable with all necessary parameters for the protocol supported. 'path' template element is required. 
It is advisable to separate different commands per backend. * - ``share_volume_fstype`` = ``ext4`` - (String) Filesystem type of the share volume. * - ``tenant_net_name_or_ip`` = ``None`` - (String) Can be either name of network that is used by service instance within Nova to get IP address or IP address itself for exporting shares. Used only when share servers handling is disabled. * - ``volume_name_template`` = ``manila-share-%s`` - (String) Volume name template. * - ``volume_snapshot_name_template`` = ``manila-snapshot-%s`` - (String) Volume snapshot name template. * - **[cinder]** - * - ``auth_section`` = ``None`` - (Unknown) Config Section from which to load plugin specific options * - ``auth_type`` = ``None`` - (Unknown) Authentication type to load * - ``cafile`` = ``None`` - (String) PEM encoded Certificate Authority to use when verifying HTTPs connections. * - ``certfile`` = ``None`` - (String) PEM encoded client certificate cert file * - ``cross_az_attach`` = ``True`` - (Boolean) Allow attaching between instances and volumes in different availability zones. * - ``endpoint_type`` = ``publicURL`` - (String) Endpoint type to be used with cinder client calls. * - ``http_retries`` = ``3`` - (Integer) Number of cinderclient retries on failed HTTP calls. * - ``insecure`` = ``False`` - (Boolean) Verify HTTPS connections. * - ``keyfile`` = ``None`` - (String) PEM encoded client certificate key file * - ``region_name`` = ``None`` - (String) Region name for connecting to cinder. * - ``timeout`` = ``None`` - (Integer) Timeout value for http requests * - **[neutron]** - * - ``cafile`` = ``None`` - (String) PEM encoded Certificate Authority to use when verifying HTTPs connections. * - ``certfile`` = ``None`` - (String) PEM encoded client certificate cert file * - ``insecure`` = ``False`` - (Boolean) Verify HTTPS connections. * - ``keyfile`` = ``None`` - (String) PEM encoded client certificate key file * - ``timeout`` = ``None`` - (Integer) Timeout value for http requests * - **[nova]** - * - ``api_microversion`` = ``2.10`` - (String) Version of Nova API to be used. * - ``auth_section`` = ``None`` - (Unknown) Config Section from which to load plugin specific options * - ``auth_type`` = ``None`` - (Unknown) Authentication type to load * - ``cafile`` = ``None`` - (String) PEM encoded Certificate Authority to use when verifying HTTPs connections. * - ``certfile`` = ``None`` - (String) PEM encoded client certificate cert file * - ``endpoint_type`` = ``publicURL`` - (String) Endpoint type to be used with nova client calls. * - ``insecure`` = ``False`` - (Boolean) Verify HTTPS connections. * - ``keyfile`` = ``None`` - (String) PEM encoded client certificate key file * - ``region_name`` = ``None`` - (String) Region name for connecting to nova. * - ``timeout`` = ``None`` - (Integer) Timeout value for http requests manila-10.0.0/doc/source/configuration/tables/manila-ganesha.inc0000664000175000017500000000270013656750227024615 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-ganesha: .. 
list-table:: Description of Ganesha configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``ganesha_config_dir`` = ``/etc/ganesha`` - (String) Directory where Ganesha config files are stored. * - ``ganesha_config_path`` = ``$ganesha_config_dir/ganesha.conf`` - (String) Path to main Ganesha config file. * - ``ganesha_db_path`` = ``$state_path/manila-ganesha.db`` - (String) Location of Ganesha database file. (Ganesha module only.) * - ``ganesha_export_dir`` = ``$ganesha_config_dir/export.d`` - (String) Path to directory containing Ganesha export configuration. (Ganesha module only.) * - ``ganesha_export_template_dir`` = ``/etc/manila/ganesha-export-templ.d`` - (String) Path to directory containing Ganesha export block templates. (Ganesha module only.) * - ``ganesha_service_name`` = ``ganesha.nfsd`` - (String) Name of the ganesha nfs service. manila-10.0.0/doc/source/configuration/tables/manila-netapp.inc0000664000175000017500000000705113656750227024502 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-netapp: .. list-table:: Description of NetApp share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``netapp_aggregate_name_search_pattern`` = ``(.*)`` - (String) Pattern for searching available aggregates for provisioning. * - ``netapp_enabled_share_protocols`` = ``nfs3, nfs4.0`` - (List) The NFS protocol versions that will be enabled. Supported values include nfs3, nfs4.0, nfs4.1. This option only applies when the option driver_handles_share_servers is set to True. * - ``netapp_lif_name_template`` = ``os_%(net_allocation_id)s`` - (String) Logical interface (LIF) name template * - ``netapp_login`` = ``None`` - (String) Administrative user account name used to access the storage system. * - ``netapp_password`` = ``None`` - (String) Password for the administrative user account specified in the netapp_login option. * - ``netapp_port_name_search_pattern`` = ``(.*)`` - (String) Pattern for overriding the selection of network ports on which to create Vserver LIFs. * - ``netapp_root_volume`` = ``root`` - (String) Root volume name. * - ``netapp_root_volume_aggregate`` = ``None`` - (String) Name of aggregate to create Vserver root volumes on. This option only applies when the option driver_handles_share_servers is set to True. * - ``netapp_server_hostname`` = ``None`` - (String) The hostname (or IP address) for the storage system. * - ``netapp_server_port`` = ``None`` - (Port number) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS. * - ``netapp_snapmirror_quiesce_timeout`` = ``3600`` - (Integer) The maximum time in seconds to wait for existing snapmirror transfers to complete before aborting when promoting a replica. * - ``netapp_storage_family`` = ``ontap_cluster`` - (String) The storage family type used on the storage system; valid values include ontap_cluster for using clustered Data ONTAP. 
* - ``netapp_trace_flags`` = ``None`` - (String) Comma-separated list of options that control which trace info is written to the debug logs. Values include method and api. * - ``netapp_transport_type`` = ``http`` - (String) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https. * - ``netapp_volume_move_cutover_timeout`` = ``3600`` - (Integer) The maximum time in seconds to wait for the completion of a volume move operation after the cutover was triggered. * - ``netapp_volume_name_template`` = ``share_%(share_id)s`` - (String) NetApp volume name template. * - ``netapp_volume_snapshot_reserve_percent`` = ``5`` - (Integer) The percentage of share space set aside as reserve for snapshot usage; valid values range from 0 to 90. * - ``netapp_vserver_name_template`` = ``os_%s`` - (String) Name template to use for new Vserver. When using CIFS protocol make sure to not configure characters illegal in DNS hostnames. manila-10.0.0/doc/source/configuration/tables/manila-hpe3par.inc0000664000175000017500000000411113656750227024547 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-hpe3par: .. list-table:: Description of HPE 3PAR share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``hpe3par_api_url`` = - (String) 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1 * - ``hpe3par_cifs_admin_access_domain`` = ``LOCAL_CLUSTER`` - (String) File system domain for the CIFS admin user. * - ``hpe3par_cifs_admin_access_password`` = - (String) File system admin password for CIFS. * - ``hpe3par_cifs_admin_access_username`` = - (String) File system admin user name for CIFS. * - ``hpe3par_debug`` = ``False`` - (Boolean) Enable HTTP debugging to 3PAR * - ``hpe3par_fpg`` = ``None`` - (Unknown) The File Provisioning Group (FPG) to use * - ``hpe3par_fstore_per_share`` = ``False`` - (Boolean) Use one filestore per share * - ``hpe3par_password`` = - (String) 3PAR password for the user specified in hpe3par_username * - ``hpe3par_require_cifs_ip`` = ``False`` - (Boolean) Require IP access rules for CIFS (in addition to user) * - ``hpe3par_san_ip`` = - (String) IP address of SAN controller * - ``hpe3par_san_login`` = - (String) Username for SAN controller * - ``hpe3par_san_password`` = - (String) Password for SAN controller * - ``hpe3par_san_ssh_port`` = ``22`` - (Port number) SSH port to use with SAN * - ``hpe3par_share_mount_path`` = ``/mnt/`` - (String) The path where shares will be mounted when deleting nested file trees. * - ``hpe3par_username`` = - (String) 3PAR username with the 'edit' role manila-10.0.0/doc/source/configuration/tables/manila-vnx.inc0000664000175000017500000000207013656750227024022 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. 
Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-vnx: .. list-table:: Description of Dell EMC VNX share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``vnx_ethernet_ports`` = ``None`` - (List) Comma separated list of ports that can be used for share server interfaces. Members of the list can be Unix-style glob expressions. * - ``vnx_server_container`` = ``None`` - (String) Data mover to host the NAS server. * - ``vnx_share_data_pools`` = ``None`` - (List) Comma separated list of pools that can be used to persist share data. manila-10.0.0/doc/source/configuration/tables/manila-glusterfs.inc0000664000175000017500000000552213656750227025232 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-glusterfs: .. list-table:: Description of GlusterFS share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``glusterfs_ganesha_server_ip`` = ``None`` - (String) Remote Ganesha server node's IP address. * - ``glusterfs_ganesha_server_password`` = ``None`` - (String) Remote Ganesha server node's login password. This is not required if 'glusterfs_path_to_private_key' is configured. * - ``glusterfs_ganesha_server_username`` = ``root`` - (String) Remote Ganesha server node's username. * - ``glusterfs_mount_point_base`` = ``$state_path/mnt`` - (String) Base directory containing mount points for Gluster volumes. * - ``glusterfs_nfs_server_type`` = ``Gluster`` - (String) Type of NFS server that mediate access to the Gluster volumes (Gluster or Ganesha). * - ``glusterfs_path_to_private_key`` = ``None`` - (String) Path of Manila host's private SSH key file. * - ``glusterfs_server_password`` = ``None`` - (String) Remote GlusterFS server node's login password. This is not required if 'glusterfs_path_to_private_key' is configured. * - ``glusterfs_servers`` = - (List) List of GlusterFS servers that can be used to create shares. Each GlusterFS server should be of the form [remoteuser@], and they are assumed to belong to distinct Gluster clusters. * - ``glusterfs_share_layout`` = ``None`` - (String) Specifies GlusterFS share layout, that is, the method of associating backing GlusterFS resources to shares. * - ``glusterfs_target`` = ``None`` - (String) Specifies the GlusterFS volume to be mounted on the Manila host. It is of the form [remoteuser@]:. * - ``glusterfs_volume_pattern`` = ``None`` - (String) Regular expression template used to filter GlusterFS volumes for share creation. The regex template can optionally (ie. with support of the GlusterFS backend) contain the #{size} parameter which matches an integer (sequence of digits) in which case the value shall be interpreted as size of the volume in GB. 
Examples: "manila-share-volume-\d+$", "manila-share-volume-#{size}G-\d+$"; with matching volume names, respectively: "manila-share-volume-12", "manila-share-volume-3G-13". In latter example, the number that matches "#{size}", that is, 3, is an indication that the size of volume is 3G. manila-10.0.0/doc/source/configuration/tables/manila-san.inc0000664000175000017500000000166213656750227023776 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-san: .. list-table:: Description of SAN configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``ssh_conn_timeout`` = ``60`` - (Integer) Backend server SSH connection timeout. * - ``ssh_max_pool_conn`` = ``10`` - (Integer) Maximum number of connections in the SSH pool. * - ``ssh_min_pool_conn`` = ``1`` - (Integer) Minimum number of connections in the SSH pool. manila-10.0.0/doc/source/configuration/tables/manila-scheduler.inc0000664000175000017500000000410213656750227025163 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-scheduler: .. list-table:: Description of Scheduler configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``capacity_weight_multiplier`` = ``1.0`` - (Floating point) Multiplier used for weighing share capacity. Negative numbers mean to stack vs spread. * - ``pool_weight_multiplier`` = ``1.0`` - (Floating point) Multiplier used for weighing pools which have existing share servers. Negative numbers mean to spread vs stack. * - ``scheduler_default_filters`` = ``AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter, DriverFilter, ShareReplicationFilter`` - (List) Which filter class names to use for filtering hosts when not specified in the request. * - ``scheduler_default_weighers`` = ``CapacityWeigher, GoodnessWeigher`` - (List) Which weigher class names to use for weighing hosts. * - ``scheduler_driver`` = ``manila.scheduler.drivers.filter.FilterScheduler`` - (String) Default scheduler driver to use. * - ``scheduler_host_manager`` = ``manila.scheduler.host_manager.HostManager`` - (String) The scheduler host manager class to use. * - ``scheduler_json_config_location`` = - (String) Absolute path to scheduler configuration JSON file. * - ``scheduler_manager`` = ``manila.scheduler.manager.SchedulerManager`` - (String) Full class name for the scheduler manager. * - ``scheduler_max_attempts`` = ``3`` - (Integer) Maximum number of attempts to schedule a share. * - ``scheduler_topic`` = ``manila-scheduler`` - (String) The topic scheduler nodes listen on. 
manila-10.0.0/doc/source/configuration/tables/manila-tegile.inc0000664000175000017500000000202413656750227024457 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-tegile: .. list-table:: Description of Tegile share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``tegile_default_project`` = ``None`` - (String) Create shares in this project * - ``tegile_nas_login`` = ``None`` - (String) User name for the Tegile NAS server. * - ``tegile_nas_password`` = ``None`` - (String) Password for the Tegile NAS server. * - ``tegile_nas_server`` = ``None`` - (String) Tegile NAS server hostname or IP address. manila-10.0.0/doc/source/configuration/tables/manila-ca.inc0000664000175000017500000000141113656750227023570 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-ca: .. list-table:: Description of Certificate Authority configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``osapi_share_use_ssl`` = ``False`` - (Boolean) Wraps the socket in a SSL context if True is set. manila-10.0.0/doc/source/configuration/tables/manila-cephfs.inc0000664000175000017500000000210213656750227024453 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-cephfs: .. list-table:: Description of CephFS share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``cephfs_auth_id`` = ``manila`` - (String) The name of the ceph auth identity to use. * - ``cephfs_cluster_name`` = ``None`` - (String) The name of the cluster in use, if it is not the default ('ceph'). * - ``cephfs_conf_path`` = - (String) Fully qualified path to the ceph.conf file. * - ``cephfs_enable_snapshots`` = ``False`` - (Boolean) Whether to enable snapshots in this driver. manila-10.0.0/doc/source/configuration/tables/manila-spectrumscale_ces.inc0000664000175000017500000000352513656750227026721 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. 
Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-spectrumscale_ces: .. list-table:: Description of IBM Spectrum Scale CES share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``gpfs_mount_point_base`` = ``$state_path/mnt`` - (String) Base folder where exported shares are located. * - ``gpfs_nfs_server_type`` = ``CES`` - (String) NFS Server type. Valid choices are "CES" (Ganesha NFS) or "KNFS" (Kernel NFS). * - ``gpfs_share_export_ip`` = ``None`` - (Host address) IP to be added to GPFS export string. * - ``gpfs_share_helpers`` = ``KNFS=manila.share.drivers.ibm.gpfs.KNFSHelper, CES=manila.share.drivers.ibm.gpfs.CESHelper`` - (List) Specify list of share export helpers. * - ``gpfs_ssh_login`` = ``None`` - (String) GPFS server SSH login name. * - ``gpfs_ssh_password`` = ``None`` - (String) GPFS server SSH login password. The password is not needed, if 'gpfs_ssh_private_key' is configured. * - ``gpfs_ssh_port`` = ``22`` - (Port number) GPFS server SSH port. * - ``gpfs_ssh_private_key`` = ``None`` - (String) Path to GPFS server SSH private key for login. * - ``is_gpfs_node`` = ``False`` - (Boolean) True:when Manila services are running on one of the Spectrum Scale node. False:when Manila services are not running on any of the Spectrum Scale node. manila-10.0.0/doc/source/configuration/tables/manila-quobyte.inc0000664000175000017500000000271213656750227024702 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-quobyte: .. list-table:: Description of Quobyte share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``quobyte_api_ca`` = ``None`` - (String) The X.509 CA file to verify the server cert. * - ``quobyte_api_password`` = ``quobyte`` - (String) Password for Quobyte API server * - ``quobyte_api_url`` = ``None`` - (String) URL of the Quobyte API server (http or https) * - ``quobyte_api_username`` = ``admin`` - (String) Username for Quobyte API server. * - ``quobyte_default_volume_group`` = ``root`` - (String) Default owning group for new volumes. * - ``quobyte_default_volume_user`` = ``root`` - (String) Default owning user for new volumes. * - ``quobyte_delete_shares`` = ``False`` - (Boolean) Actually deletes shares (vs. unexport) * - ``quobyte_volume_configuration`` = ``BASE`` - (String) Name of volume configuration used for new shares. manila-10.0.0/doc/source/configuration/tables/manila-hdfs.inc0000664000175000017500000000237113656750227024137 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. 
Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-hdfs: .. list-table:: Description of HDFS share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``hdfs_namenode_ip`` = ``None`` - (String) The IP of the HDFS namenode. * - ``hdfs_namenode_port`` = ``9000`` - (Port number) The port of HDFS namenode service. * - ``hdfs_ssh_name`` = ``None`` - (String) HDFS namenode ssh login name. * - ``hdfs_ssh_port`` = ``22`` - (Port number) HDFS namenode SSH port. * - ``hdfs_ssh_private_key`` = ``None`` - (String) Path to HDFS namenode SSH private key for login. * - ``hdfs_ssh_pw`` = ``None`` - (String) HDFS namenode SSH login password, This parameter is not necessary, if 'hdfs_ssh_private_key' is configured. manila-10.0.0/doc/source/configuration/tables/manila-quota.inc0000664000175000017500000000336513656750227024350 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-quota: .. list-table:: Description of Quota configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``max_age`` = ``0`` - (Integer) Number of seconds between subsequent usage refreshes. * - ``max_gigabytes`` = ``10000`` - (Integer) Maximum number of volume gigabytes to allow per host. * - ``quota_driver`` = ``manila.quota.DbQuotaDriver`` - (String) Default driver to use for quota checks. * - ``quota_gigabytes`` = ``1000`` - (Integer) Number of share gigabytes allowed per project. * - ``quota_share_networks`` = ``10`` - (Integer) Number of share-networks allowed per project. * - ``quota_shares`` = ``50`` - (Integer) Number of shares allowed per project. * - ``quota_snapshot_gigabytes`` = ``1000`` - (Integer) Number of snapshot gigabytes allowed per project. * - ``quota_snapshots`` = ``50`` - (Integer) Number of share snapshots allowed per project. * - ``quota_share_groups`` = ``50`` - (Integer) Number of share groups allowed. * - ``quota_share_group_snapshots`` = ``50`` - (Integer) Number of share group snapshots allowed. * - ``reservation_expire`` = ``86400`` - (Integer) Number of seconds until a reservation expires. manila-10.0.0/doc/source/configuration/tables/manila-winrm.inc0000664000175000017500000000252513656750227024350 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-winrm: .. 
list-table:: Description of WinRM configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``winrm_cert_key_pem_path`` = ``~/.ssl/key.pem`` - (String) Path to the x509 certificate key. * - ``winrm_cert_pem_path`` = ``~/.ssl/cert.pem`` - (String) Path to the x509 certificate used for accessing the serviceinstance. * - ``winrm_conn_timeout`` = ``60`` - (Integer) WinRM connection timeout. * - ``winrm_operation_timeout`` = ``60`` - (Integer) WinRM operation timeout. * - ``winrm_retry_count`` = ``3`` - (Integer) WinRM retry count. * - ``winrm_retry_interval`` = ``5`` - (Integer) WinRM retry interval in seconds * - ``winrm_use_cert_based_auth`` = ``False`` - (Boolean) Use x509 certificates in order to authenticate to theservice instance. manila-10.0.0/doc/source/configuration/tables/manila-lvm.inc0000664000175000017500000000253513656750227024013 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-lvm: .. list-table:: Description of LVM share driver configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``lvm_share_export_ips`` = ``None`` - (String) List of IPs to export shares belonging to the LVM storage driver. * - ``lvm_share_export_root`` = ``$state_path/mnt`` - (String) Base folder where exported shares are located. * - ``lvm_share_helpers`` = ``CIFS=manila.share.drivers.helpers.CIFSHelperUserAccess, NFS=manila.share.drivers.helpers.NFSHelper`` - (List) Specify list of share export helpers. * - ``lvm_share_mirrors`` = ``0`` - (Integer) If set, create LVMs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space. * - ``lvm_share_volume_group`` = ``lvm-shares`` - (String) Name for the VG that will contain exported shares. manila-10.0.0/doc/source/configuration/tables/manila-common.inc0000664000175000017500000002014713656750227024504 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-common: .. list-table:: Description of Common configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``check_hash`` = ``False`` - (Boolean) Chooses whether hash of each file should be checked on data copying. * - ``client_socket_timeout`` = ``900`` - (Integer) Timeout for client connections socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of '0' means wait forever. * - ``compute_api_class`` = ``manila.compute.nova.API`` - (String) The full class name of the Compute API class to use. 
* - ``data_access_wait_access_rules_timeout`` = ``180`` - (Integer) Time to wait for access rules to be allowed/denied on backends when migrating a share (seconds). * - ``data_manager`` = ``manila.data.manager.DataManager`` - (String) Full class name for the data manager. * - ``data_node_access_admin_user`` = ``None`` - (String) The admin user name registered in the security service in order to allow access to user authentication-based shares. * - ``data_node_access_cert`` = ``None`` - (String) The certificate installed in the data node in order to allow access to certificate authentication-based shares. * - ``data_node_access_ips`` = ``None`` - (String) A list of the IPs of the node interface connected to the admin network. Used for allowing access to the mounting shares. Default is []. * - ``data_node_mount_options`` = ``{}`` - (Dict) Mount options to be included in the mount command for share protocols. Use dictionary format, example: {'nfs': '-o nfsvers=3', 'cifs': '-o user=foo,pass=bar'} * - ``data_topic`` = ``manila-data`` - (String) The topic data nodes listen on. * - ``enable_new_services`` = ``True`` - (Boolean) Services to be added to the available pool on create. * - ``fatal_exception_format_errors`` = ``False`` - (Boolean) Whether to make exception message format errors fatal. * - ``filter_function`` = ``None`` - (String) String representation for an equation that will be used to filter hosts. * - ``host`` = ```` - (String) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. * - ``max_over_subscription_ratio`` = ``20.0`` - (Floating point) Float representation of the over subscription ratio when thin provisioning is involved. Default ratio is 20.0, meaning provisioned capacity can be 20 times the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. A ratio lower than 1.0 is invalid. * - ``memcached_servers`` = ``None`` - (List) Memcached servers or None for in process cache. * - ``monkey_patch`` = ``False`` - (Boolean) Whether to log monkey patching. * - ``monkey_patch_modules`` = - (List) List of modules or decorators to monkey patch. * - ``mount_tmp_location`` = ``/tmp/`` - (String) Temporary path to create and mount shares during migration. * - ``my_ip`` = ```` - (String) IP address of this host. * - ``num_shell_tries`` = ``3`` - (Integer) Number of times to attempt to run flakey shell commands. * - ``periodic_fuzzy_delay`` = ``60`` - (Integer) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) * - ``periodic_hooks_interval`` = ``300.0`` - (Floating point) Interval in seconds between execution of periodic hooks. Used when option 'enable_periodic_hooks' is set to True. Default is 300. * - ``periodic_interval`` = ``60`` - (Integer) Seconds between running periodic tasks. * - ``replica_state_update_interval`` = ``300`` - (Integer) This value, specified in seconds, determines how often the share manager will poll for the health (replica_state) of each replica instance. * - ``replication_domain`` = ``None`` - (String) A string specifying the replication domain that the backend belongs to. This option needs to be specified the same in the configuration sections of all backends that support replication between each other. 
If this option is not specified in the group, it means that replication is not enabled on the backend. * - ``report_interval`` = ``10`` - (Integer) Seconds between nodes reporting state to datastore. * - ``reserved_share_percentage`` = ``0`` - (Integer) The percentage of backend capacity reserved. * - ``rootwrap_config`` = ``None`` - (String) Path to the rootwrap configuration file to use for running commands as root. * - ``service_down_time`` = ``60`` - (Integer) Maximum time since last check-in for up service. * - ``smb_template_config_path`` = ``$state_path/smb.conf`` - (String) Path to smb config. * - ``sql_idle_timeout`` = ``3600`` - (Integer) Timeout before idle SQL connections are reaped. * - ``sql_max_retries`` = ``10`` - (Integer) Maximum database connection retries during startup. (setting -1 implies an infinite retry count). * - ``sql_retry_interval`` = ``10`` - (Integer) Interval between retries of opening a SQL connection. * - ``sqlite_clean_db`` = ``clean.sqlite`` - (String) File name of clean sqlite database. * - ``sqlite_db`` = ``manila.sqlite`` - (String) The filename to use with sqlite. * - ``sqlite_synchronous`` = ``True`` - (Boolean) If passed, use synchronous mode for sqlite. * - ``state_path`` = ``/var/lib/manila`` - (String) Top-level directory for maintaining manila's state. * - ``storage_availability_zone`` = ``nova`` - (String) Availability zone of this node. * - ``tcp_keepalive`` = ``True`` - (Boolean) Sets the value of TCP_KEEPALIVE (True/False) for each server socket. * - ``tcp_keepalive_count`` = ``None`` - (Integer) Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X. * - ``tcp_keepalive_interval`` = ``None`` - (Integer) Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not supported on OS X. * - ``tcp_keepidle`` = ``600`` - (Integer) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. * - ``until_refresh`` = ``0`` - (Integer) Count of reservations until usage is refreshed. * - ``use_forwarded_for`` = ``False`` - (Boolean) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy. * - ``wsgi_keep_alive`` = ``True`` - (Boolean) If False, closes the client socket connection explicitly. Setting it to True to maintain backward compatibility. Recommended setting is set it to False. * - **[coordination]** - * - ``backend_url`` = ``file://$state_path`` - (String) The back end URL to use for distributed coordination. * - **[healthcheck]** - * - ``backends`` = - (List) Additional backends that can perform health checks and report that information back as part of a request. * - ``detailed`` = ``False`` - (Boolean) Show more detailed information as part of the response * - ``disable_by_file_path`` = ``None`` - (String) Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. * - ``disable_by_file_paths`` = - (List) Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. * - ``path`` = ``/healthcheck`` - (String) DEPRECATED: The path to respond to healtcheck requests on. manila-10.0.0/doc/source/configuration/tables/manila-share.inc0000664000175000017500000001306613656750227024320 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. 
The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _manila-share: .. list-table:: Description of Share configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``automatic_share_server_cleanup`` = ``True`` - (Boolean) If set to True, then Manila will delete all share servers which were unused more than specified time .If set to False - automatic deletion of share servers will be disabled. * - ``backlog`` = ``4096`` - (Integer) Number of backlog requests to configure the socket with. * - ``default_share_group_type`` = ``None`` - (String) Default share group type to use. * - ``default_share_type`` = ``None`` - (String) Default share type to use. * - ``delete_share_server_with_last_share`` = ``False`` - (Boolean) Whether share servers will be deleted on deletion of the last share. * - ``driver_handles_share_servers`` = ``None`` - (Boolean) There are two possible approaches for share drivers in Manila. First is when share driver is able to handle share-servers and second when not. Drivers can support either both or only one of these approaches. So, set this opt to True if share driver is able to handle share servers and it is desired mode else set False. It is set to None by default to make this choice intentional. * - ``enable_periodic_hooks`` = ``False`` - (Boolean) Whether to enable periodic hooks or not. * - ``enable_post_hooks`` = ``False`` - (Boolean) Whether to enable post hooks or not. * - ``enable_pre_hooks`` = ``False`` - (Boolean) Whether to enable pre hooks or not. * - ``enabled_share_backends`` = ``None`` - (List) A list of share backend names to use. These backend names should be backed by a unique [CONFIG] group with its options. * - ``enabled_share_protocols`` = ``NFS, CIFS`` - (List) Specify list of protocols to be allowed for share creation. Available values are '('NFS', 'CIFS', 'GLUSTERFS', 'HDFS', 'CEPHFS', 'MAPRFS')' * - ``executor_thread_pool_size`` = ``64`` - (Integer) Size of executor thread pool. * - ``hook_drivers`` = - (List) Driver(s) to perform some additional actions before and after share driver actions and on a periodic basis. Default is []. * - ``migration_create_delete_share_timeout`` = ``300`` - (Integer) Timeout for creating and deleting share instances when performing share migration (seconds). * - ``migration_driver_continue_update_interval`` = ``60`` - (Integer) This value, specified in seconds, determines how often the share manager will poll the driver to perform the next step of migration in the storage backend, for a migrating share. * - ``migration_ignore_files`` = ``lost+found`` - (List) List of files and folders to be ignored when migrating shares. Items should be names (not including any path). * - ``migration_readonly_rules_support`` = ``True`` - (Boolean) DEPRECATED: Specify whether read only access rule mode is supported in this backend. Obsolete. All drivers are now required to support read-only access rules. * - ``migration_wait_access_rules_timeout`` = ``180`` - (Integer) Time to wait for access rules to be allowed/denied on backends when migrating shares using generic approach (seconds). 
* - ``network_config_group`` = ``None`` - (String) Name of the configuration group in the Manila conf file to look for network config options.If not set, the share backend's config group will be used.If an option is not found within provided group, then'DEFAULT' group will be used for search of option. * - ``share_manager`` = ``manila.share.manager.ShareManager`` - (String) Full class name for the share manager. * - ``share_name_template`` = ``share-%s`` - (String) Template string to be used to generate share names. * - ``share_snapshot_name_template`` = ``share-snapshot-%s`` - (String) Template string to be used to generate share snapshot names. * - ``share_topic`` = ``manila-share`` - (String) The topic share nodes listen on. * - ``suppress_post_hooks_errors`` = ``False`` - (Boolean) Whether to suppress post hook errors (allow driver's results to pass through) or not. * - ``suppress_pre_hooks_errors`` = ``False`` - (Boolean) Whether to suppress pre hook errors (allow driver perform actions) or not. * - ``unmanage_remove_access_rules`` = ``False`` - (Boolean) If set to True, then manila will deny access and remove all access rules on share unmanage.If set to False - nothing will be changed. * - ``unused_share_server_cleanup_interval`` = ``10`` - (Integer) Unallocated share servers reclamation time interval (minutes). Minimum value is 10 minutes, maximum is 60 minutes. The reclamation function is run every 10 minutes and delete share servers which were unused more than unused_share_server_cleanup_interval option defines. This value reflects the shortest time Manila will wait for a share server to go unutilized before deleting it. * - ``use_scheduler_creating_share_from_snapshot`` = ``False`` - (Boolean) If set to False, then share creation from snapshot will be performed on the same host. If set to True, then scheduling step will be used. manila-10.0.0/doc/ext/0000775000175000017500000000000013656750362014435 5ustar zuulzuul00000000000000manila-10.0.0/doc/ext/__init__.py0000664000175000017500000000000013656750227016534 0ustar zuulzuul00000000000000manila-10.0.0/doc/README.rst0000664000175000017500000000154113656750227015325 0ustar zuulzuul00000000000000======================= Manila Development Docs ======================= Files under this directory tree are used for generating the documentation for the manila source code. Developer documentation is built to: https://docs.openstack.org/manila/latest/ Tools ===== Sphinx The Python Sphinx package is used to generate the documentation output. Information on Sphinx, including formatting information for RST source files, can be found in the `Sphinx online documentation `_. Graphviz Some of the diagrams are generated using the ``dot`` language from Graphviz. See the `Graphviz documentation `_ for Graphviz and dot language usage information. Building Documentation ====================== Doc builds are performed using tox with the ``docs`` target:: % cd .. % tox -e docs manila-10.0.0/requirements.txt0000664000175000017500000000271013656750227016354 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
# pbr should be first pbr!=2.1.0,>=2.0.0 # Apache-2.0 alembic>=0.8.10 # MIT Babel!=2.4.0,>=2.3.4 # BSD eventlet>=0.22.0,!=0.23.0,!=0.25.0 # MIT greenlet>=0.4.10 # MIT lxml!=3.7.0,>=3.4.1 # BSD netaddr>=0.7.18 # BSD oslo.config>=5.2.0 # Apache-2.0 oslo.context>=2.19.2 # Apache-2.0 oslo.db>=4.27.0 # Apache-2.0 oslo.i18n>=3.15.3 # Apache-2.0 oslo.log>=3.36.0 # Apache-2.0 oslo.messaging>=6.4.0 # Apache-2.0 oslo.middleware>=3.31.0 # Apache-2.0 oslo.policy>=1.30.0 # Apache-2.0 oslo.reports>=1.18.0 # Apache-2.0 oslo.rootwrap>=5.8.0 # Apache-2.0 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0 oslo.service>=2.1.1 # Apache-2.0 oslo.upgradecheck>=0.1.0 # Apache-2.0 oslo.utils>=3.40.2 # Apache-2.0 oslo.concurrency>=3.26.0 # Apache-2.0 paramiko>=2.0.0 # LGPLv2.1+ Paste>=2.0.2 # MIT PasteDeploy>=1.5.0 # MIT pyparsing>=2.1.0 # MIT python-neutronclient>=6.7.0 # Apache-2.0 keystoneauth1>=3.4.0 # Apache-2.0 keystonemiddleware>=4.17.0 # Apache-2.0 requests>=2.14.2 # Apache-2.0 retrying!=1.3.0,>=1.2.3 # Apache-2.0 Routes>=2.3.1 # MIT six>=1.10.0 # MIT SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT stevedore>=1.20.0 # Apache-2.0 tooz>=1.58.0 # Apache-2.0 python-cinderclient!=4.0.0,>=3.3.0 # Apache-2.0 python-novaclient>=9.1.0 # Apache-2.0 WebOb>=1.7.1 # MIT manila-10.0.0/setup.cfg0000664000175000017500000000772413656750363014724 0ustar zuulzuul00000000000000[metadata] name = manila summary = Shared Storage for OpenStack description-file = README.rst author = OpenStack author-email = openstack-discuss@lists.openstack.org home-page = https://docs.openstack.org/manila/latest/ python-requires = >=3.6 classifier = Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 3 Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.7 [global] setup-hooks = pbr.hooks.setup_hook [files] data_files = etc/manila = etc/manila/api-paste.ini etc/manila/rootwrap.conf etc/manila/rootwrap.d = etc/manila/rootwrap.d/* packages = manila [entry_points] console_scripts = manila-all = manila.cmd.all:main manila-api = manila.cmd.api:main manila-data = manila.cmd.data:main manila-manage = manila.cmd.manage:main manila-rootwrap = oslo_rootwrap.cmd:main manila-scheduler = manila.cmd.scheduler:main manila-share = manila.cmd.share:main manila-status = manila.cmd.status:main wsgi_scripts = manila-wsgi = manila.wsgi.wsgi:initialize_application manila.scheduler.filters = AvailabilityZoneFilter = manila.scheduler.filters.availability_zone:AvailabilityZoneFilter CapabilitiesFilter = manila.scheduler.filters.capabilities:CapabilitiesFilter CapacityFilter = manila.scheduler.filters.capacity:CapacityFilter DriverFilter = manila.scheduler.filters.driver:DriverFilter IgnoreAttemptedHostsFilter = manila.scheduler.filters.ignore_attempted_hosts:IgnoreAttemptedHostsFilter JsonFilter = manila.scheduler.filters.json:JsonFilter RetryFilter = manila.scheduler.filters.retry:RetryFilter ShareReplicationFilter = manila.scheduler.filters.share_replication:ShareReplicationFilter CreateFromSnapshotFilter = manila.scheduler.filters.create_from_snapshot:CreateFromSnapshotFilter ConsistentSnapshotFilter = manila.scheduler.filters.share_group_filters.consistent_snapshot:ConsistentSnapshotFilter manila.scheduler.weighers = CapacityWeigher = manila.scheduler.weighers.capacity:CapacityWeigher GoodnessWeigher = 
manila.scheduler.weighers.goodness:GoodnessWeigher PoolWeigher = manila.scheduler.weighers.pool:PoolWeigher HostAffinityWeigher = manila.scheduler.weighers.host_affinity:HostAffinityWeigher oslo_messaging.notify.drivers = manila.openstack.common.notifier.log_notifier = oslo_messaging.notify._impl_log:LogDriver manila.openstack.common.notifier.no_op_notifier = oslo_messaging.notify._impl_noop:NoOpDriver manila.openstack.common.notifier.rpc_notifier2 = oslo_messaging.notify.messaging:MessagingV2Driver manila.openstack.common.notifier.rpc_notifier = oslo_messaging.notify.messaging:MessagingDriver manila.openstack.common.notifier.test_notifier = oslo_messaging.notify._impl_test:TestDriver oslo.config.opts = manila = manila.opts:list_opts oslo.config.opts.defaults = manila = manila.common.config:set_middleware_defaults oslo.policy.enforcer = manila = manila.policy:get_enforcer oslo.policy.policies = manila = manila.policies:list_rules manila.share.drivers.dell_emc.plugins = vnx = manila.share.drivers.dell_emc.plugins.vnx.connection:VNXStorageConnection unity = manila.share.drivers.dell_emc.plugins.unity.connection:UnityStorageConnection isilon = manila.share.drivers.dell_emc.plugins.isilon.isilon:IsilonStorageConnection powermax = manila.share.drivers.dell_emc.plugins.powermax.connection:PowerMaxStorageConnection manila.tests.scheduler.fakes = FakeWeigher1 = manila.tests.scheduler.fakes:FakeWeigher1 FakeWeigher2 = manila.tests.scheduler.fakes:FakeWeigher2 [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 [compile_catalog] directory = manila/locale domain = manila [update_catalog] domain = manila output_dir = manila/locale input_file = manila/locale/manila.pot [extract_messages] keywords = _ gettext ngettext l_ lazy_gettext mapping_file = babel.cfg output_file = manila/locale/manila.pot manila-10.0.0/contrib/0000775000175000017500000000000013656750362014530 5ustar zuulzuul00000000000000manila-10.0.0/contrib/ci/0000775000175000017500000000000013656750362015123 5ustar zuulzuul00000000000000manila-10.0.0/contrib/ci/common.sh0000775000175000017500000000703213656750227016754 0ustar zuulzuul00000000000000# Environment variables # ---------------------------------------------- # Functions # Import devstack functions source $BASE/new/devstack/functions function manila_check_service_vm_availability { # First argument is expected to be IP address of a service VM wait_step=10 wait_timeout=300 available='false' while (( wait_timeout > 0 )) ; do if ping -w 1 $1; then available='true' break fi ((wait_timeout-=$wait_step)) sleep $wait_step done if [[ $available == 'true' ]]; then echo "SUCCESS! Service VM $1 is available." else echo "FAILURE! Service VM $1 is not available." 
exit 1 fi } function manila_wait_for_generic_driver_init { # First argument is expected to be file path to Manila config MANILA_CONF=$1 DRIVER_GROUPS=$(iniget $MANILA_CONF DEFAULT enabled_share_backends) for driver_group in ${DRIVER_GROUPS//,/ }; do SHARE_DRIVER=$(iniget $MANILA_CONF $driver_group share_driver) GENERIC_DRIVER='manila.share.drivers.generic.GenericShareDriver' DHSS=$(iniget $MANILA_CONF $driver_group driver_handles_share_servers) if [[ $SHARE_DRIVER == $GENERIC_DRIVER && $(trueorfalse False DHSS) == False ]]; then # Wait for service VM availability source /opt/stack/new/devstack/openrc admin demo vm_ip=$(iniget $MANILA_CONF $driver_group service_net_name_or_ip) # Check availability manila_check_service_vm_availability $vm_ip fi done } function manila_wait_for_drivers_init { # First argument is expected to be file path to Manila config manila_wait_for_generic_driver_init $1 # Sleep to make manila-share service notify manila-scheduler about # its capabilities on time. sleep 10 } function archive_file { # First argument is expected to be filename local filename=$1 sudo gzip -9 $filename sudo chown $USER:stack $filename.gz sudo chmod a+r $filename.gz } function save_tempest_results { # First argument is expected to be number or tempest run local src_dirname local dst_dirname src_dirname="$BASE/new/tempest" dst_dirname="$BASE/logs/tempest_$1" # 1. Create destination directory sudo mkdir $dst_dirname sudo chown $USER:stack $dst_dirname sudo chmod 755 $dst_dirname # 2. Save tempest configuration file sudo cp $src_dirname/etc/tempest.conf $dst_dirname/tempest_conf.txt # 3. Save tempest log file cp $src_dirname/tempest.log $src_dirname/tempest.txt echo '' > $src_dirname/tempest.log archive_file $src_dirname/tempest.txt sudo mv $src_dirname/tempest.txt.gz $dst_dirname/tempest.txt.gz # 4. Save tempest stestr results if [ -f $src_dirname/.stestr/0 ]; then pushd $src_dirname sudo stestr last --subunit > $src_dirname/tempest.subunit popd else echo "Tests have not run!" fi if [ -f $src_dirname/tempest.subunit ]; then s2h=`type -p subunit2html` sudo $s2h $src_dirname/tempest.subunit $src_dirname/testr_results.html archive_file $src_dirname/tempest.subunit sudo mv $src_dirname/tempest.subunit.gz $dst_dirname/tempest.subunit.gz archive_file $src_dirname/testr_results.html sudo mv $src_dirname/testr_results.html.gz $dst_dirname/testr_results.html.gz # 5. Cleanup sudo rm -rf $src_dirname/.stestr else echo "No 'stestr' results available for saving. File '$src_dirname/tempest.subunit' is absent." fi } manila-10.0.0/contrib/ci/pre_test_hook.sh0000775000175000017500000002652513656750227020341 0ustar zuulzuul00000000000000#!/bin/bash -xe # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This script is executed inside pre_test_hook function in devstack gate. # First argument ($1) expects boolean as value where: # 'False' means share driver will not handle share servers # 'True' means it will handle share servers. 
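# Example invocation (illustrative; all three arguments are positional and
# are parsed further down in this script):
#   pre_test_hook.sh <driver_handles_share_servers> <driver_codename> <backend_type>
#   e.g. pre_test_hook.sh True dummy multibackend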
# Import devstack function 'trueorfalse' source $BASE/new/devstack/functions localconf=$BASE/new/devstack/local.conf echo "[[local|localrc]]" >> $localconf echo "DEVSTACK_GATE_TEMPEST_ALLOW_TENANT_ISOLATION=1" >> $localconf echo "API_RATE_LIMIT=False" >> $localconf echo "VOLUME_BACKING_FILE_SIZE=22G" >> $localconf echo "CINDER_LVM_TYPE=thin" >> $localconf # Set DevStack's PYTHON3_VERSION variable if CI scripts specify it if [[ ! -z "$PYTHON3_VERSION" ]]; then echo "PYTHON3_VERSION=$PYTHON3_VERSION" >> $localconf fi # NOTE(mkoderer): switch to keystone v3 by default echo "IDENTITY_API_VERSION=3" >> $localconf # NOTE(vponomaryov): Set oversubscription ratio for Cinder LVM driver # bigger than 1.0, because in CI we do not need such small value. # It will allow us to avoid exceeding real capacity in CI test runs. echo "CINDER_OVERSUBSCRIPTION_RATIO=100.0" >> $localconf echo "MANILA_BACKEND1_CONFIG_GROUP_NAME=london" >> $localconf echo "MANILA_BACKEND2_CONFIG_GROUP_NAME=paris" >> $localconf echo "MANILA_SHARE_BACKEND1_NAME=LONDON" >> $localconf echo "MANILA_SHARE_BACKEND2_NAME=PARIS" >> $localconf echo "MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=${MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE:=True}" >> $localconf # === Handle script arguments === # First argument is expected to be a boolean-like value for DHSS. DHSS=$1 DHSS=$(trueorfalse True DHSS) # Second argument is expected to have codename of a share driver. DRIVER=$2 # Third argument is expected to contain value equal either to 'singlebackend' # or 'multibackend' that defines how many back-ends should be configured. BACK_END_TYPE=$3 echo "MANILA_OPTGROUP_london_driver_handles_share_servers=$DHSS" >> $localconf echo "MANILA_OPTGROUP_paris_driver_handles_share_servers=$DHSS" >> $localconf echo "MANILA_USE_SERVICE_INSTANCE_PASSWORD=True" >> $localconf echo "MANILA_USE_DOWNGRADE_MIGRATIONS=True" >> $localconf if [[ "$BACK_END_TYPE" == "multibackend" ]]; then echo "MANILA_MULTI_BACKEND=True" >> $localconf else echo "MANILA_MULTI_BACKEND=False" >> $localconf fi # Set MANILA_ADMIN_NET_RANGE for admin_network and data_service IP echo "MANILA_ADMIN_NET_RANGE=${MANILA_ADMIN_NET_RANGE:=10.2.5.0/24}" >> $localconf echo "MANILA_DATA_NODE_IP=${MANILA_DATA_NODE_IP:=$MANILA_ADMIN_NET_RANGE}" >> $localconf echo "MANILA_DATA_COPY_CHECK_HASH=${MANILA_DATA_COPY_CHECK_HASH:=True}" >> $localconf # Share Migration CI tests migration_continue period task interval echo "MANILA_SHARE_MIGRATION_PERIOD_TASK_INTERVAL=${MANILA_SHARE_MIGRATION_PERIOD_TASK_INTERVAL:=1}" >> $localconf MANILA_SERVICE_IMAGE_ENABLED=${MANILA_SERVICE_IMAGE_ENABLED:-False} DEFAULT_EXTRA_SPECS=${DEFAULT_EXTRA_SPECS:-"'snapshot_support=True create_share_from_snapshot_support=True'"} if [[ "$DRIVER" == "generic"* ]]; then MANILA_SERVICE_IMAGE_ENABLED=True echo "SHARE_DRIVER=manila.share.drivers.generic.GenericShareDriver" >> $localconf elif [[ "$DRIVER" == "windows" ]]; then MANILA_SERVICE_IMAGE_ENABLED=True echo "SHARE_DRIVER=manila.share.drivers.windows.windows_smb_driver.WindowsSMBDriver" >> $localconf elif [[ "$DRIVER" == "dummy" ]]; then driver_path="manila.tests.share.drivers.dummy.DummyDriver" DEFAULT_EXTRA_SPECS="'snapshot_support=True create_share_from_snapshot_support=True revert_to_snapshot_support=True mount_snapshot_support=True'" echo "MANILA_SERVICE_IMAGE_ENABLED=False" >> $localconf # Run dummy driver CI job using standalone approach for running # manila API service just because we need to test this approach too, # that is very useful for development needs. 
echo "MANILA_USE_UWSGI=False" >> $localconf echo "MANILA_USE_MOD_WSGI=False" >> $localconf echo "SHARE_DRIVER=$driver_path" >> $localconf echo "SUPPRESS_ERRORS_IN_CLEANUP=False" >> $localconf echo "MANILA_REPLICA_STATE_UPDATE_INTERVAL=10" >> $localconf echo "MANILA_ENABLED_BACKENDS=alpha,beta,gamma,delta" >> $localconf echo "MANILA_CONFIGURE_GROUPS=alpha,beta,gamma,delta,membernet,adminnet" >> $localconf echo "MANILA_OPTGROUP_alpha_share_driver=$driver_path" >> $localconf echo "MANILA_OPTGROUP_alpha_driver_handles_share_servers=True" >> $localconf echo "MANILA_OPTGROUP_alpha_share_backend_name=ALPHA" >> $localconf echo "MANILA_OPTGROUP_alpha_network_config_group=membernet" >> $localconf echo "MANILA_OPTGROUP_alpha_admin_network_config_group=adminnet" >> $localconf echo "MANILA_OPTGROUP_alpha_replication_domain=DUMMY_DOMAIN_2" >> $localconf echo "MANILA_OPTGROUP_beta_share_driver=$driver_path" >> $localconf echo "MANILA_OPTGROUP_beta_driver_handles_share_servers=True" >> $localconf echo "MANILA_OPTGROUP_beta_share_backend_name=BETA" >> $localconf echo "MANILA_OPTGROUP_beta_network_config_group=membernet" >> $localconf echo "MANILA_OPTGROUP_beta_admin_network_config_group=adminnet" >> $localconf echo "MANILA_OPTGROUP_beta_replication_domain=DUMMY_DOMAIN_2" >> $localconf echo "MANILA_OPTGROUP_gamma_share_driver=$driver_path" >> $localconf echo "MANILA_OPTGROUP_gamma_driver_handles_share_servers=False" >> $localconf echo "MANILA_OPTGROUP_gamma_share_backend_name=GAMMA" >> $localconf echo "MANILA_OPTGROUP_gamma_replication_domain=DUMMY_DOMAIN" >> $localconf echo "MANILA_OPTGROUP_delta_share_driver=$driver_path" >> $localconf echo "MANILA_OPTGROUP_delta_driver_handles_share_servers=False" >> $localconf echo "MANILA_OPTGROUP_delta_share_backend_name=DELTA" >> $localconf echo "MANILA_OPTGROUP_delta_replication_domain=DUMMY_DOMAIN" >> $localconf echo "MANILA_OPTGROUP_membernet_network_api_class=manila.network.standalone_network_plugin.StandaloneNetworkPlugin" >> $localconf echo "MANILA_OPTGROUP_membernet_standalone_network_plugin_gateway=10.0.0.1" >> $localconf echo "MANILA_OPTGROUP_membernet_standalone_network_plugin_mask=24" >> $localconf echo "MANILA_OPTGROUP_membernet_standalone_network_plugin_network_type=vlan" >> $localconf echo "MANILA_OPTGROUP_membernet_standalone_network_plugin_segmentation_id=1010" >> $localconf echo "MANILA_OPTGROUP_membernet_standalone_network_plugin_allowed_ip_ranges=10.0.0.10-10.0.0.209" >> $localconf echo "MANILA_OPTGROUP_membernet_network_plugin_ipv4_enabled=True" >> $localconf echo "MANILA_OPTGROUP_adminnet_network_api_class=manila.network.standalone_network_plugin.StandaloneNetworkPlugin" >> $localconf echo "MANILA_OPTGROUP_adminnet_standalone_network_plugin_gateway=11.0.0.1" >> $localconf echo "MANILA_OPTGROUP_adminnet_standalone_network_plugin_mask=24" >> $localconf echo "MANILA_OPTGROUP_adminnet_standalone_network_plugin_network_type=vlan" >> $localconf echo "MANILA_OPTGROUP_adminnet_standalone_network_plugin_segmentation_id=1011" >> $localconf echo "MANILA_OPTGROUP_adminnet_standalone_network_plugin_allowed_ip_ranges=11.0.0.10-11.0.0.19,11.0.0.30-11.0.0.39,11.0.0.50-11.0.0.199" >> $localconf echo "MANILA_OPTGROUP_adminnet_network_plugin_ipv4_enabled=True" >> $localconf export MANILA_TEMPEST_CONCURRENCY=24 export MANILA_CONFIGURE_DEFAULT_TYPES=False elif [[ "$DRIVER" == "lvm" ]]; then MANILA_SERVICE_IMAGE_ENABLED=True DEFAULT_EXTRA_SPECS="'snapshot_support=True create_share_from_snapshot_support=True revert_to_snapshot_support=True mount_snapshot_support=True'" 
echo "SHARE_DRIVER=manila.share.drivers.lvm.LVMShareDriver" >> $localconf echo "SHARE_BACKING_FILE_SIZE=32000M" >> $localconf export MANILA_SETUP_IPV6=True elif [[ "$DRIVER" == "zfsonlinux" ]]; then MANILA_SERVICE_IMAGE_ENABLED=True echo "SHARE_DRIVER=manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver" >> $localconf echo "RUN_MANILA_REPLICATION_TESTS=True" >> $localconf # Enable using the scheduler when creating a share from snapshot echo "MANILA_USE_SCHEDULER_CREATING_SHARE_FROM_SNAPSHOT=True" >> $localconf # Set the replica_state_update_interval to 60 seconds to make # replication tests run faster. The default is 300, which is greater than # the build timeout for ZFS on the gate. echo "MANILA_REPLICA_STATE_UPDATE_INTERVAL=60" >> $localconf echo "MANILA_ZFSONLINUX_USE_SSH=True" >> $localconf # Set proper host IP for user export to be able to run scenario tests correctly echo "MANILA_ZFSONLINUX_SHARE_EXPORT_IP=$HOST" >> $localconf echo "MANILA_ZFSONLINUX_SERVICE_IP=127.0.0.1" >> $localconf elif [[ "$DRIVER" == "container"* ]]; then DEFAULT_EXTRA_SPECS="'snapshot_support=false'" echo "SHARE_DRIVER=manila.share.drivers.container.driver.ContainerShareDriver" >> $localconf echo "SHARE_BACKING_FILE_SIZE=64000M" >> $localconf fi echo "MANILA_SERVICE_IMAGE_ENABLED=$MANILA_SERVICE_IMAGE_ENABLED" >> $localconf if [[ "$MANILA_SERVICE_IMAGE_ENABLED" == True ]]; then echo "MANILA_SERVICE_IMAGE_URL=$MANILA_SERVICE_IMAGE_URL" >> $localconf echo "MANILA_SERVICE_IMAGE_NAME=$MANILA_SERVICE_IMAGE_NAME" >> $localconf fi echo "MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS=$DEFAULT_EXTRA_SPECS" >> $localconf echo "MANILA_CONFIGURE_DEFAULT_TYPES=${MANILA_CONFIGURE_DEFAULT_TYPES:-True}" >> $localconf # Enabling isolated metadata in Neutron is required because # Tempest creates isolated networks and created vm's in scenario tests don't # have access to Nova Metadata service. This leads to unavailability of # created vm's in scenario tests. echo "ENABLE_ISOLATED_METADATA=True" >> $localconf echo "TEMPEST_USE_TEST_ACCOUNTS=True" >> $localconf echo "TEMPEST_ALLOW_TENANT_ISOLATION=False" >> $localconf echo "TEMPEST_CONCURRENCY=${MANILA_TEMPEST_CONCURRENCY:-8}" >> $localconf MANILA_SETUP_IPV6=${MANILA_SETUP_IPV6:-False} echo "MANILA_SETUP_IPV6=${MANILA_SETUP_IPV6}" >> $localconf if [[ "$MANILA_SETUP_IPV6" == True ]]; then # When setting up proper IPv6 networks, we should do it ourselves so we can # use Neutron Dynamic Routing plugin with address scopes instead of the # regular Neutron DevStack configuration. echo "NEUTRON_CREATE_INITIAL_NETWORKS=False" >> $localconf echo "IP_VERSION=4+6" >> $localconf echo "MANILA_RESTORE_IPV6_DEFAULT_ROUTE=False" >> $localconf fi if [[ "$DRIVER" == "generic"* ]]; then echo -e '[[post-config|${NOVA_CONF:-/etc/nova/nova.conf}]]\n[DEFAULT]\nquota_instances=30\n' >> $localconf echo -e '[[post-config|${NEUTRON_CONF:-/etc/neutron/neutron.conf}]]\n[DEFAULT]\nmax_fixed_ips_per_port=100\n' >> $localconf echo -e '[[post-config|${NEUTRON_CONF:-/etc/neutron/neutron.conf}]]\n[QUOTAS]\nquota_subnet=-1\n' >> $localconf fi # Required for "grenade" job that uses devstack config from 'old' directory. 
if [[ -d "$BASE/old/devstack" ]]; then cp $localconf $BASE/old/devstack/local.conf fi cd $BASE/new/tempest source $BASE/new/manila/contrib/ci/common.sh # Print current Tempest status git status manila-10.0.0/contrib/ci/post_test_hook.sh0000775000175000017500000004661013656750227020535 0ustar zuulzuul00000000000000#!/bin/bash -xe # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This script is executed inside post_test_hook function in devstack gate. # First argument ($1) expects 'multibackend' as value for setting appropriate # tempest conf opts, all other values will assume singlebackend installation. sudo chown -R $USER:stack $BASE/new/tempest sudo chown -R $USER:stack $BASE/data/tempest sudo chmod -R o+rx $BASE/new/devstack/files # Import devstack functions 'iniset', 'iniget' and 'trueorfalse' source $BASE/new/devstack/functions export TEMPEST_CONFIG=${TEMPEST_CONFIG:-$BASE/new/tempest/etc/tempest.conf} # === Handle script arguments === # First argument is expected to contain value equal either to 'singlebackend' # or 'multibackend' that defines how many back-ends are used. BACK_END_TYPE=$1 # Second argument is expected to have codename of a share driver. DRIVER=$2 # Third argument is expected to contain either 'api' or 'scenario' values # that define test suites to be run. TEST_TYPE=$3 # Fourth argument is expected to be boolean-like and it should be 'true' # when PostgreSQL DB back-end is used and 'false' when MySQL. POSTGRES_ENABLED=$4 POSTGRES_ENABLED=$(trueorfalse True POSTGRES_ENABLED) if [[ "$DRIVER" == "dummy" ]]; then export BACKENDS_NAMES="ALPHA,BETA" elif [[ "$BACK_END_TYPE" == "multibackend" ]]; then iniset $TEMPEST_CONFIG share multi_backend True # Set share backends names, they are defined within pre_test_hook export BACKENDS_NAMES="LONDON,PARIS" else export BACKENDS_NAMES="LONDON" fi iniset $TEMPEST_CONFIG share backend_names $BACKENDS_NAMES # If testing a stable branch, we need to ensure we're testing with supported # API micro-versions; so set the versions from code if we're not testing the # master branch. If we're testing master, we'll allow manila-tempest-plugin # (which is branchless) tell us what versions it wants to test. 
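# Illustrative example (hypothetical values): on a stable branch whose
# api_version_request.py carries assignments such as
#   _MIN_API_VERSION = "2.0"
#   _MAX_API_VERSION = "2.55"
# the awk extraction below seeds the tempest micro-version range from those
# lines, unless MANILA_TEMPEST_MIN/MAX_API_MICROVERSION were already exported.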
if [[ $ZUUL_BRANCH != "master" ]]; then # Grab the supported API micro-versions from the code _API_VERSION_REQUEST_PATH=$BASE/new/manila/manila/api/openstack/api_version_request.py _DEFAULT_MIN_VERSION=$(awk '$0 ~ /_MIN_API_VERSION = /{print $3}' $_API_VERSION_REQUEST_PATH) _DEFAULT_MAX_VERSION=$(awk '$0 ~ /_MAX_API_VERSION = /{print $3}' $_API_VERSION_REQUEST_PATH) # Override the *_api_microversion tempest options if present MANILA_TEMPEST_MIN_API_MICROVERSION=${MANILA_TEMPEST_MIN_API_MICROVERSION:-$_DEFAULT_MIN_VERSION} MANILA_TEMPEST_MAX_API_MICROVERSION=${MANILA_TEMPEST_MAX_API_MICROVERSION:-$_DEFAULT_MAX_VERSION} # Set these options in tempest.conf iniset $TEMPEST_CONFIG share min_api_microversion $MANILA_TEMPEST_MIN_API_MICROVERSION iniset $TEMPEST_CONFIG share max_api_microversion $MANILA_TEMPEST_MAX_API_MICROVERSION fi # Set two retries for CI jobs iniset $TEMPEST_CONFIG share share_creation_retry_number 2 # Suppress errors in cleanup of resources SUPPRESS_ERRORS=${SUPPRESS_ERRORS_IN_CLEANUP:-True} iniset $TEMPEST_CONFIG share suppress_errors_in_cleanup $SUPPRESS_ERRORS USERNAME_FOR_USER_RULES=${USERNAME_FOR_USER_RULES:-"manila"} PASSWORD_FOR_SAMBA_USER=${PASSWORD_FOR_SAMBA_USER:-$USERNAME_FOR_USER_RULES} # Enable feature tests: # Default options are as specified in tempest. RUN_MANILA_QUOTA_TESTS=${RUN_MANILA_QUOTA_TESTS:-True} RUN_MANILA_SHRINK_TESTS=${RUN_MANILA_SHRINK_TESTS:-True} RUN_MANILA_SNAPSHOT_TESTS=${RUN_MANILA_SNAPSHOT_TESTS:-True} RUN_MANILA_REVERT_TO_SNAPSHOT_TESTS=${RUN_MANILA_REVERT_TO_SNAPSHOT_TESTS:-False} RUN_MANILA_SG_TESTS=${RUN_MANILA_SG_TESTS:-${RUN_MANILA_CG_TESTS:-True}} RUN_MANILA_MANAGE_TESTS=${RUN_MANILA_MANAGE_TESTS:-True} RUN_MANILA_MANAGE_SNAPSHOT_TESTS=${RUN_MANILA_MANAGE_SNAPSHOT_TESTS:-False} RUN_MANILA_REPLICATION_TESTS=${RUN_MANILA_REPLICATION_TESTS:-False} RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS=${RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS:-False} RUN_MANILA_DRIVER_ASSISTED_MIGRATION_TESTS=${RUN_MANILA_DRIVER_ASSISTED_MIGRATION_TESTS:-False} RUN_MANILA_MOUNT_SNAPSHOT_TESTS=${RUN_MANILA_MOUNT_SNAPSHOT_TESTS:-False} RUN_MANILA_MIGRATION_WITH_PRESERVE_SNAPSHOTS_TESTS=${RUN_MANILA_MIGRATION_WITH_PRESERVE_SNAPSHOTS_TESTS:-False} RUN_MANILA_IPV6_TESTS=${RUN_MANILA_IPV6_TESTS:-False} MANILA_CONF=${MANILA_CONF:-/etc/manila/manila.conf} # Capabilitities CAPABILITY_CREATE_SHARE_FROM_SNAPSHOT_SUPPORT=${CAPABILITY_CREATE_SHARE_FROM_SNAPSHOT_SUPPORT:-True} MANILA_CONFIGURE_DEFAULT_TYPES=${MANILA_CONFIGURE_DEFAULT_TYPES:-True} if [[ -z "$MULTITENANCY_ENABLED" ]]; then # Define whether share drivers handle share servers or not. # Requires defined config option 'driver_handles_share_servers'. NO_SHARE_SERVER_HANDLING_MODES=0 WITH_SHARE_SERVER_HANDLING_MODES=0 # Convert backend names to config groups using lowercase translation CONFIG_GROUPS=${BACKENDS_NAMES,,} for CG in ${CONFIG_GROUPS//,/ }; do DRIVER_HANDLES_SHARE_SERVERS=$(iniget $MANILA_CONF $CG driver_handles_share_servers) if [[ $DRIVER_HANDLES_SHARE_SERVERS == False ]]; then NO_SHARE_SERVER_HANDLING_MODES=$((NO_SHARE_SERVER_HANDLING_MODES+1)) elif [[ $DRIVER_HANDLES_SHARE_SERVERS == True ]]; then WITH_SHARE_SERVER_HANDLING_MODES=$((WITH_SHARE_SERVER_HANDLING_MODES+1)) else echo "Config option 'driver_handles_share_servers' either is not defined or \ defined with improper value - '$DRIVER_HANDLES_SHARE_SERVERS'." 
exit 1 fi done if [[ $NO_SHARE_SERVER_HANDLING_MODES -ge 1 && $WITH_SHARE_SERVER_HANDLING_MODES -ge 1 || \ $NO_SHARE_SERVER_HANDLING_MODES -eq 0 && $WITH_SHARE_SERVER_HANDLING_MODES -eq 0 ]]; then echo 'Allowed only same driver modes for all backends to be run with Tempest job.' exit 1 elif [[ $NO_SHARE_SERVER_HANDLING_MODES -ge 1 ]]; then MULTITENANCY_ENABLED='False' elif [[ $WITH_SHARE_SERVER_HANDLING_MODES -ge 1 ]]; then MULTITENANCY_ENABLED='True' else echo 'Should never get here unless an error occurred.' exit 1 fi else MULTITENANCY_ENABLED=$(trueorfalse True MULTITENANCY_ENABLED) fi # Set multitenancy configuration for Tempest iniset $TEMPEST_CONFIG share multitenancy_enabled $MULTITENANCY_ENABLED if [[ "$MULTITENANCY_ENABLED" == "False" ]]; then # Using approach without handling of share servers we have bigger load for # volume creation in Cinder using Generic driver. So, reduce amount of # threads to avoid errors for Cinder volume creations that appear # because of lack of free space. MANILA_TEMPEST_CONCURRENCY=${MANILA_TEMPEST_CONCURRENCY:-8} iniset $TEMPEST_CONFIG auth create_isolated_networks False fi # let us control if we die or not set +o errexit cd $BASE/new/tempest export MANILA_TEMPEST_CONCURRENCY=${MANILA_TEMPEST_CONCURRENCY:-6} export MANILA_TESTS=${MANILA_TESTS:-'manila_tempest_tests.tests.api'} if [[ "$DRIVER" == "generic"* ]]; then RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS=True RUN_MANILA_MANAGE_SNAPSHOT_TESTS=True RUN_MANILA_CG_TESTS=False if [[ "$MULTITENANCY_ENABLED" == "True" ]]; then # NOTE(ganso): The generic driver has not implemented support for # Manage/unmanage shares/snapshots in DHSS=True RUN_MANILA_MANAGE_SNAPSHOT_TESTS=False RUN_MANILA_MANAGE_TESTS=False fi if [[ "$POSTGRES_ENABLED" == "True" ]]; then # Run only CIFS tests on PostgreSQL DB backend # to reduce amount of tests per job using 'generic' share driver. iniset $TEMPEST_CONFIG share enable_protocols cifs else # Run only NFS tests on MySQL DB backend to reduce amount of tests # per job using 'generic' share driver. iniset $TEMPEST_CONFIG share enable_protocols nfs fi MANILA_TESTS="(^manila_tempest_tests.tests.api)(?=.*\[.*\bbackend\b.*\])" RUN_MANILA_SG_TESTS=False fi if [[ "$DRIVER" == "generic_with_custom_image" ]]; then # For CI jobs that test changes to image we do not need to run lots of tests # Will be enough to run simple scenario test, because # if some package is lost, it is very likely to fail with each test. MANILA_TESTS="(^manila_tempest_tests.tests.scenario)(?=.*\btest_write_data_to_share_created_from_snapshot\b.*)" elif [[ "$TEST_TYPE" == "scenario" ]]; then echo "Set test set to scenario only" MANILA_TESTS='manila_tempest_tests.tests.scenario' iniset $TEMPEST_CONFIG auth use_dynamic_credentials True RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS=True fi if [[ "$DRIVER" == "lvm" ]]; then MANILA_TESTS="(^manila_tempest_tests.tests)(?=.*\[.*\bbackend\b.*\])" MANILA_TEMPEST_CONCURRENCY=8 RUN_MANILA_SG_TESTS=False RUN_MANILA_MANAGE_TESTS=False RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS=True RUN_MANILA_SHRINK_TESTS=False RUN_MANILA_REVERT_TO_SNAPSHOT_TESTS=True RUN_MANILA_MOUNT_SNAPSHOT_TESTS=True RUN_MANILA_IPV6_TESTS=True iniset $TEMPEST_CONFIG share enable_ip_rules_for_protocols 'nfs' iniset $TEMPEST_CONFIG share enable_user_rules_for_protocols 'cifs' iniset $TEMPEST_CONFIG share image_with_share_tools 'manila-service-image-master' iniset $TEMPEST_CONFIG auth use_dynamic_credentials True iniset $TEMPEST_CONFIG share capability_snapshot_support True if ! 
grep $USERNAME_FOR_USER_RULES "/etc/passwd"; then sudo useradd $USERNAME_FOR_USER_RULES fi (echo $PASSWORD_FOR_SAMBA_USER; echo $PASSWORD_FOR_SAMBA_USER) | sudo smbpasswd -s -a $USERNAME_FOR_USER_RULES sudo smbpasswd -e $USERNAME_FOR_USER_RULES samba_daemon_name=smbd if is_fedora; then samba_daemon_name=smb fi sudo service $samba_daemon_name restart elif [[ "$DRIVER" == "zfsonlinux" ]]; then MANILA_TESTS="(^manila_tempest_tests.tests)(?=.*\[.*\bbackend\b.*\])" MANILA_TEMPEST_CONCURRENCY=8 RUN_MANILA_SG_TESTS=False RUN_MANILA_MANAGE_TESTS=True RUN_MANILA_MANAGE_SNAPSHOT_TESTS=True RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS=True RUN_MANILA_DRIVER_ASSISTED_MIGRATION_TESTS=True RUN_MANILA_REPLICATION_TESTS=True iniset $TEMPEST_CONFIG share enable_ip_rules_for_protocols 'nfs' iniset $TEMPEST_CONFIG share enable_user_rules_for_protocols '' iniset $TEMPEST_CONFIG share enable_cert_rules_for_protocols '' iniset $TEMPEST_CONFIG share enable_ro_access_level_for_protocols 'nfs' iniset $TEMPEST_CONFIG share build_timeout 180 iniset $TEMPEST_CONFIG share share_creation_retry_number 0 iniset $TEMPEST_CONFIG share capability_storage_protocol 'NFS' iniset $TEMPEST_CONFIG share enable_protocols 'nfs' iniset $TEMPEST_CONFIG share suppress_errors_in_cleanup False iniset $TEMPEST_CONFIG share multitenancy_enabled False iniset $TEMPEST_CONFIG share multi_backend True iniset $TEMPEST_CONFIG share backend_replication_type 'readable' iniset $TEMPEST_CONFIG share image_with_share_tools 'manila-service-image-master' iniset $TEMPEST_CONFIG auth use_dynamic_credentials True iniset $TEMPEST_CONFIG share capability_snapshot_support True iniset $TEMPEST_CONFIG share run_create_share_from_snapshot_in_another_pool_or_az_tests True elif [[ "$DRIVER" == "dummy" ]]; then MANILA_TEMPEST_CONCURRENCY=24 MANILA_CONFIGURE_DEFAULT_TYPES=False RUN_MANILA_SG_TESTS=True RUN_MANILA_MANAGE_TESTS=True RUN_MANILA_MANAGE_SNAPSHOT_TESTS=True RUN_MANILA_DRIVER_ASSISTED_MIGRATION_TESTS=True RUN_MANILA_REVERT_TO_SNAPSHOT_TESTS=True RUN_MANILA_MOUNT_SNAPSHOT_TESTS=True RUN_MANILA_MIGRATION_WITH_PRESERVE_SNAPSHOTS_TESTS=True RUN_MANILA_REPLICATION_TESTS=True iniset $TEMPEST_CONFIG share enable_ip_rules_for_protocols 'nfs' iniset $TEMPEST_CONFIG share enable_user_rules_for_protocols 'cifs' iniset $TEMPEST_CONFIG share enable_cert_rules_for_protocols '' iniset $TEMPEST_CONFIG share enable_ro_access_level_for_protocols 'nfs,cifs' iniset $TEMPEST_CONFIG share build_timeout 180 iniset $TEMPEST_CONFIG share share_creation_retry_number 0 iniset $TEMPEST_CONFIG share capability_storage_protocol 'NFS_CIFS' iniset $TEMPEST_CONFIG share capability_sg_consistent_snapshot_support 'pool' iniset $TEMPEST_CONFIG share enable_protocols 'nfs,cifs' iniset $TEMPEST_CONFIG share suppress_errors_in_cleanup False iniset $TEMPEST_CONFIG share multitenancy_enabled True iniset $TEMPEST_CONFIG share create_networks_when_multitenancy_enabled False iniset $TEMPEST_CONFIG share multi_backend True iniset $TEMPEST_CONFIG share backend_replication_type 'readable' elif [[ "$DRIVER" == "container"* ]]; then if [[ "$DRIVER" == "container_with_custom_image" ]]; then # TODO(vponomaryov): set scenario tests for run when # manila tempest plugin supports share protocol and rules that # container driver uses. 
# MANILA_TESTS="(^manila_tempest_tests.tests.scenario)(?=.*\btest_read_write_two_vms\b.*)" : fi MANILA_TEMPEST_CONCURRENCY=8 RUN_MANILA_SG_TESTS=False RUN_MANILA_MANAGE_TESTS=True RUN_MANILA_QUOTA_TESTS=False RUN_MANILA_SHRINK_TESTS=False RUN_MANILA_SNAPSHOT_TESTS=False RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS=False CAPABILITY_CREATE_SHARE_FROM_SNAPSHOT_SUPPORT=False iniset $TEMPEST_CONFIG share capability_storage_protocol 'CIFS' iniset $TEMPEST_CONFIG share enable_protocols 'cifs' iniset $TEMPEST_CONFIG share enable_user_rules_for_protocols 'cifs' iniset $TEMPEST_CONFIG share enable_ip_rules_for_protocols '' # TODO(vponomaryov): set following to True when bug #1679715 is fixed iniset $TEMPEST_CONFIG auth use_dynamic_credentials False fi # Enable quota tests iniset $TEMPEST_CONFIG share run_quota_tests $RUN_MANILA_QUOTA_TESTS # Enable shrink tests iniset $TEMPEST_CONFIG share run_shrink_tests $RUN_MANILA_SHRINK_TESTS # Enable snapshot tests iniset $TEMPEST_CONFIG share run_snapshot_tests $RUN_MANILA_SNAPSHOT_TESTS # Enable revert to snapshot tests iniset $TEMPEST_CONFIG share run_revert_to_snapshot_tests $RUN_MANILA_REVERT_TO_SNAPSHOT_TESTS # Enable share group tests iniset $TEMPEST_CONFIG share run_share_group_tests $RUN_MANILA_SG_TESTS # Enable manage/unmanage tests iniset $TEMPEST_CONFIG share run_manage_unmanage_tests $RUN_MANILA_MANAGE_TESTS # Enable manage/unmanage snapshot tests iniset $TEMPEST_CONFIG share run_manage_unmanage_snapshot_tests $RUN_MANILA_MANAGE_SNAPSHOT_TESTS # Enable replication tests iniset $TEMPEST_CONFIG share run_replication_tests $RUN_MANILA_REPLICATION_TESTS # Enable migration tests iniset $TEMPEST_CONFIG share run_host_assisted_migration_tests $RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS iniset $TEMPEST_CONFIG share run_driver_assisted_migration_tests $RUN_MANILA_DRIVER_ASSISTED_MIGRATION_TESTS iniset $TEMPEST_CONFIG share run_migration_with_preserve_snapshots_tests $RUN_MANILA_MIGRATION_WITH_PRESERVE_SNAPSHOTS_TESTS # Enable mountable snapshots tests iniset $TEMPEST_CONFIG share run_mount_snapshot_tests $RUN_MANILA_MOUNT_SNAPSHOT_TESTS # Create share from snapshot support iniset $TEMPEST_CONFIG share capability_create_share_from_snapshot_support $CAPABILITY_CREATE_SHARE_FROM_SNAPSHOT_SUPPORT iniset $TEMPEST_CONFIG validation ip_version_for_ssh 4 iniset $TEMPEST_CONFIG validation network_for_ssh ${PRIVATE_NETWORK_NAME:-"private"} if [ $(trueorfalse False MANILA_CONFIGURE_DEFAULT_TYPES) == True ]; then iniset $TEMPEST_CONFIG share default_share_type_name ${MANILA_DEFAULT_SHARE_TYPE:-default} fi ADMIN_DOMAIN_NAME=${ADMIN_DOMAIN_NAME:-"Default"} export OS_PROJECT_DOMAIN_NAME=$ADMIN_DOMAIN_NAME export OS_USER_DOMAIN_NAME=$ADMIN_DOMAIN_NAME # Also, we should wait until service VM is available # before running Tempest tests using Generic driver in DHSS=False mode. source $BASE/new/manila/contrib/ci/common.sh manila_wait_for_drivers_init $MANILA_CONF TCP_PORTS=(2049 111 32803 892 875 662) UDP_PORTS=(111 32769 892 875 662) for ipcmd in iptables ip6tables; do # (aovchinnikov): extra rules are needed to allow instances talk to host. 
sudo $ipcmd -N manila-nfs sudo $ipcmd -I INPUT 1 -j manila-nfs for port in ${TCP_PORTS[*]}; do sudo $ipcmd -A manila-nfs -m tcp -p tcp --dport $port -j ACCEPT done for port in ${UDP_PORTS[*]}; do sudo $ipcmd -A manila-nfs -m udp -p udp --dport $port -j ACCEPT done done source $BASE/new/devstack/openrc admin admin public_net_id=$(openstack network list --name $PUBLIC_NETWORK_NAME -f value -c ID ) iniset $TEMPEST_CONFIG network public_network_id $public_net_id if [ $(trueorfalse False MANILA_SETUP_IPV6) == True ]; then # Now that all plugins are loaded, setup BGP here public_gateway_ipv6=$(openstack subnet show ipv6-public-subnet -c gateway_ip -f value) neutron bgp-speaker-create --ip-version 6 --local-as 100 bgpspeaker neutron bgp-speaker-network-add bgpspeaker $PUBLIC_NETWORK_NAME neutron bgp-peer-create --peer-ip $public_gateway_ipv6 --remote-as 200 bgppeer neutron bgp-speaker-peer-add bgpspeaker bgppeer fi # Set config to run IPv6 tests according to env var iniset $TEMPEST_CONFIG share run_ipv6_tests $RUN_MANILA_IPV6_TESTS if ! [[ -z "$OVERRIDE_IP_FOR_NFS_ACCESS" ]]; then # Set config to use specified IP as access rule on NFS scenario tests # in order to workaround multiple NATs between the VMs and the storage # controller iniset $TEMPEST_CONFIG share override_ip_for_nfs_access $OVERRIDE_IP_FOR_NFS_ACCESS fi echo "Manila service details" source $BASE/new/devstack/openrc admin admin manila service-list echo "Running tempest manila test suites" cd $BASE/new/tempest/ # List plugins in logs to enable debugging sudo -H -u $USER tempest list-plugins sudo -H -u $USER tempest run -r $MANILA_TESTS --concurrency=$MANILA_TEMPEST_CONCURRENCY RETVAL=$? cd - # If using the dummy driver, configure the second run. We can't use the # devstack variables RUN_MANILA_* now, we'll directly iniset tempest options. if [[ "$DRIVER" == "dummy" ]]; then save_tempest_results 1 echo "First tempest run (DHSS=True) returned '$RETVAL'" iniset $TEMPEST_CONFIG share backend_names "GAMMA,DELTA" iniset $TEMPEST_CONFIG share run_manage_unmanage_tests True iniset $TEMPEST_CONFIG share run_manage_unmanage_snapshot_tests True iniset $TEMPEST_CONFIG share run_replication_tests True iniset $TEMPEST_CONFIG share multitenancy_enabled False iniset $TEMPEST_CONFIG share backend_replication_type 'readable' # Change driver mode for default share type to make tempest use # DHSS=False backends. This is just done here for semantics, if # the default share type hasn't been configured # ($MANILA_CONFIGURE_DEFAULT_TYPES=False), this command has no effect # since there is no default share type configured. source $BASE/new/devstack/openrc admin demo manila type-key default set driver_handles_share_servers=False echo "Running tempest manila test suites for DHSS=False mode" cd $BASE/new/tempest/ sudo -H -u $USER tempest run -r $MANILA_TESTS --concurrency=$MANILA_TEMPEST_CONCURRENCY RETVAL2=$? cd - save_tempest_results 2 # Exit with last code if first succeeded else exit with first error code if [[ "$RETVAL" == "0" ]]; then RETVAL=$RETVAL2 fi echo "Second tempest run (DHSS=False) returned '$RETVAL2'" fi exit $RETVAL manila-10.0.0/contrib/share_driver_hooks/0000775000175000017500000000000013656750362020410 5ustar zuulzuul00000000000000manila-10.0.0/contrib/share_driver_hooks/zaqar_notification_example_consumer.py0000775000175000017500000001666613656750227030316 0ustar zuulzuul00000000000000#!/usr/bin/env python # # Copyright (c) 2015 Mirantis, Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import print_function import os import pprint import signal import sys import time import netaddr from oslo_concurrency import processutils from oslo_config import cfg from oslo_utils import timeutils import six opts = [ cfg.IntOpt( "consume_interval", default=5, deprecated_name="sleep_between_consume_attempts", help=("Time that script will sleep between requests for consuming " "Zaqar messages in seconds."), ), cfg.StrOpt( "mount_dir", default="/tmp", help="Directory that will contain all mounted shares." ), cfg.ListOpt( "expected_ip_addresses", default=[], help=("List of IP addresses that are expected to be found in access " "rules to trigger [un]mount operation for a share.") ), ] CONF = cfg.CONF def print_with_time(data): time = six.text_type(timeutils.utcnow()) print(time + " " + six.text_type(data)) def print_pretty_dict(d): pprint.pprint(d) def pop_zaqar_messages(client, queues_names): if not isinstance(queues_names, (list, set, tuple)): queues_names = (queues_names, ) try: user = client.conf['auth_opts']['options']['os_username'] project = client.conf['auth_opts']['options']['os_project_name'] messages = [] for queue_name in queues_names: queue = client.queue(queue_name) messages.extend([six.text_type(m.body) for m in queue.pop()]) print_with_time( "Received %(len)s message[s] from '%(q)s' " "queue using '%(u)s' user and '%(p)s' project." % { 'len': len(messages), 'q': queue_name, 'u': user, 'p': project, } ) return messages except Exception as e: print_with_time("Caught exception - %s" % e) return [] def signal_handler(signal, frame): print("") print_with_time("Ctrl+C was pressed. Shutting down consumer.") sys.exit(0) def parse_str_to_dict(string): if not isinstance(string, six.string_types): return string result = eval(string) return result def handle_message(data): """Handles consumed message. 
Expected structure of a message is following: {'data': { 'access_rules': [ { 'access_id': u'b28268b9-36c6-40d3-a485-22534077328f', 'access_instance_id': u'd137b2cb-f549-4141-9dd7-36b2789fb973', 'access_level': u'rw', 'access_state': u'active', 'access_to': u'7.7.7.7', 'access_type': u'ip', } ], 'availability_zone': u'nova', 'export_locations': [u'127.0.0.1:/path/to/nfs/share'], 'is_allow_operation': True, 'share_id': u'053eae9a-726f-4f7e-8502-49d7b1adf290', 'share_instance_id': u'dc33e554-e0b9-40f5-9046-c198716d73a0', 'share_proto': u'NFS' }} """ if 'data' in data.keys(): data = data['data'] valid_access = ( 'access_rules' in data and len(data['access_rules']) == 1 and data['access_rules'][0].get('access_type', '?').lower() == 'ip' and data.get('share_proto', '?').lower() == 'nfs' ) if valid_access: is_allow_operation = data['is_allow_operation'] export_location = data['export_locations'][0] if is_allow_operation: mount_share(export_location, data['access_to']) else: unmount_share(export_location, data['access_to']) else: print_with_time('Do nothing with above message.') def execute(cmd): try: print_with_time('Executing following command: \n%s' % cmd) cmd = cmd.split() stdout, stderr = processutils.execute(*cmd) if stderr: print_with_time('Got error: %s' % stderr) return stdout, stderr except Exception as e: print_with_time('Got following error: %s' % e) return False, True def is_share_mounted(mount_point): mounts, stderr = execute('mount') return mount_point in mounts def rule_affects_me(ip_or_cidr): if '/' in ip_or_cidr: net = netaddr.IPNetwork(ip_or_cidr) for my_ip in CONF.zaqar.expected_ip_addresses: if netaddr.IPAddress(my_ip) in net: return True else: for my_ip in CONF.zaqar.expected_ip_addresses: if my_ip == ip_or_cidr: return True return False def mount_share(export_location, access_to): data = { 'mount_point': os.path.join(CONF.zaqar.mount_dir, export_location.split('/')[-1]), 'export_location': export_location, } if (rule_affects_me(access_to) and not is_share_mounted(data['mount_point'])): print_with_time( "Mounting '%(export_location)s' share to %(mount_point)s.") execute('sudo mkdir -p %(mount_point)s' % data) stdout, stderr = execute( 'sudo mount.nfs %(export_location)s %(mount_point)s' % data) if stderr: print_with_time("Mount operation failed.") else: print_with_time("Mount operation went OK.") def unmount_share(export_location, access_to): if rule_affects_me(access_to) and is_share_mounted(export_location): print_with_time("Unmounting '%(export_location)s' share.") stdout, stderr = execute('sudo umount %s' % export_location) if stderr: print_with_time("Unmount operation failed.") else: print_with_time("Unmount operation went OK.") def main(): # Register other local modules cur = os.path.dirname(__file__) pathtest = os.path.join(cur) sys.path.append(pathtest) # Init configuration CONF(sys.argv[1:], project="manila_notifier", version=1.0) CONF.register_opts(opts, group="zaqar") # Import common config and Zaqar client import zaqarclientwrapper # Handle SIGINT signal.signal(signal.SIGINT, signal_handler) # Run consumer print_with_time("Consumer was successfully run.") while(True): messages = pop_zaqar_messages( zaqarclientwrapper.ZAQARCLIENT, CONF.zaqar.zaqar_queues) if not messages: message = ("No new messages in '%s' queue[s] " "found." 
% ','.join(CONF.zaqar.zaqar_queues)) else: message = "Got following messages:" print_with_time(message) for message in messages: message = parse_str_to_dict(message) print_pretty_dict(message) handle_message(message) time.sleep(CONF.zaqar.consume_interval) if __name__ == '__main__': main() manila-10.0.0/contrib/share_driver_hooks/zaqar_notification.py0000664000175000017500000001147513656750227024656 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from oslo_utils import timeutils from manila import exception from manila.share import api from manila.share import hook from manila.share.hooks import zaqarclientwrapper # noqa CONF = zaqarclientwrapper.CONF LOG = log.getLogger(__name__) ZAQARCLIENT = zaqarclientwrapper.ZAQARCLIENT class ZaqarNotification(hook.HookBase): share_api = api.API() def _access_changed_trigger(self, context, func_name, access_rules_ids, share_instance_id): access = [self.db.share_access_get(context, rule_id) for rule_id in access_rules_ids] share_instance = self.db.share_instance_get(context, share_instance_id) share = self.share_api.get(context, share_id=share_instance.share_id) def rules_view(rules): result = [] for rule in rules: access_instance = None for ins in rule.instance_mappings: if ins.share_instance_id == share_instance_id: access_instance = ins break else: raise exception.InstanceNotFound( instance_id=share_instance_id) result.append({ 'access_id': rule.id, 'access_instance_id': access_instance.id, 'access_type': rule.access_type, 'access_to': rule.access_to, 'access_level': rule.access_level, }) return result is_allow_operation = 'allow' in func_name results = { 'share_id': share.share_id, 'share_instance_id': share_instance_id, 'export_locations': [ el.path for el in share_instance.export_locations], 'share_proto': share.share_proto, 'access_rules': rules_view(access), 'is_allow_operation': is_allow_operation, 'availability_zone': share_instance.availability_zone, } LOG.debug(results) return results def _execute_pre_hook(self, context, func_name, *args, **kwargs): LOG.debug("\n PRE zaqar notification has been called for " "method '%s'.\n", func_name) if func_name == "deny_access": LOG.debug("\nSending notification about denied access.\n") data = self._access_changed_trigger( context, func_name, kwargs.get('access_rules'), kwargs.get('share_instance_id'), ) self._send_notification(data) def _execute_post_hook(self, context, func_name, pre_hook_data, driver_action_results, *args, **kwargs): LOG.debug("\n POST zaqar notification has been called for " "method '%s'.\n", func_name) if func_name == "allow_access": LOG.debug("\nSending notification about allowed access.\n") data = self._access_changed_trigger( context, func_name, kwargs.get('access_rules'), kwargs.get('share_instance_id'), ) self._send_notification(data) def _send_notification(self, data): for queue_name in CONF.zaqar.zaqar_queues: ZAQARCLIENT.queue_name = queue_name message = { "body": { 
"example_message": ( "message generated at '%s'" % timeutils.utcnow()), "data": data, } } LOG.debug( "\n Sending message %(m)s to '%(q)s' queue using '%(u)s' user " "and '%(p)s' project.", { 'm': message, 'q': queue_name, 'u': CONF.zaqar.zaqar_username, 'p': CONF.zaqar.zaqar_project_name, } ) queue = ZAQARCLIENT.queue(queue_name) queue.post(message) def _execute_periodic_hook(self, context, periodic_hook_data, *args, **kwargs): LOG.debug("Periodic zaqar notification has been called. (Placeholder)") manila-10.0.0/contrib/share_driver_hooks/zaqarclientwrapper.py0000664000175000017500000000537013656750227024705 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from zaqarclient.queues import client as zaqar zaqar_notification_opts = [ cfg.StrOpt( "zaqar_username", help="Username that should be used for init of zaqar client.", ), cfg.StrOpt( "zaqar_password", secret=True, help="Password for user specified in opt 'zaqar_username'.", ), cfg.StrOpt( "zaqar_project_name", help=("Project/Tenant name that is owns user specified " "in opt 'zaqar_username'."), ), cfg.StrOpt( "zaqar_auth_url", default="http://127.0.0.1:35357/v2.0/", help="Auth url to be used by Zaqar client.", ), cfg.StrOpt( "zaqar_region_name", help="Name of the region that should be used. Optional.", ), cfg.StrOpt( "zaqar_service_type", default="messaging", help="Service type for Zaqar. Optional.", ), cfg.StrOpt( "zaqar_endpoint_type", default="publicURL", help="Type of endpoint to be used for init of Zaqar client. Optional.", ), cfg.FloatOpt( "zaqar_api_version", default=1.1, help="Version of Zaqar API to use. Optional.", ), cfg.ListOpt( "zaqar_queues", default=["manila_notification_qeueue"], help=("List of queues names to be used for sending Manila " "notifications. Optional."), ), ] CONF = cfg.CONF CONF.register_opts(zaqar_notification_opts, group='zaqar') ZAQARCLIENT = zaqar.Client( version=CONF.zaqar.zaqar_api_version, conf={ "auth_opts": { "backend": "keystone", "options": { "os_username": CONF.zaqar.zaqar_username, "os_password": CONF.zaqar.zaqar_password, "os_project_name": CONF.zaqar.zaqar_project_name, "os_auth_url": CONF.zaqar.zaqar_auth_url, "os_region_name": CONF.zaqar.zaqar_region_name, "os_service_type": CONF.zaqar.zaqar_service_type, "os_endpoint_type": CONF.zaqar.zaqar_endpoint_type, "insecure": True, }, }, }, ) manila-10.0.0/contrib/share_driver_hooks/README.rst0000664000175000017500000000630013656750227022076 0ustar zuulzuul00000000000000Manila mount automation example using share driver hooks feature ================================================================ Manila has feature called 'share driver hooks'. Which allows to perform actions before and after driver actions such as 'create share' or 'access allow', also allows to do custom things on periodic basis. Here, we provide example of mount automation using this feature. 
This example uses OpenStack Zaqar project for sending notifications when operations 'access allow' and 'access deny' are performed. Server side hook will send notifications about changed access for shares after granting and prior to denying access. Possibilities of the mount automation example (consumer) -------------------------------------------------------- - Supports only 'NFS' protocol. - Supports only 'IP' rules. - Supports both levels of access - 'RW' and 'RO'. - Consume interval can be configured. - Allows to choose parent mount directory. Server side setup and run ------------------------- 1. Place files 'zaqarclientwrapper.py' and 'zaqar_notification.py' to dir %manila_dir%/manila/share/hooks. Then update manila configuration file with following options: :: [share_backend_config_group] hook_drivers = manila.share.hooks.zaqar_notification.ZaqarNotification enable_pre_hooks = True enable_post_hooks = True enable_periodic_hooks = False [zaqar] zaqar_auth_url = http://%ip_of_endpoint_with_keystone%:35357/v2.0/ zaqar_region_name = %name_of_region_optional% zaqar_username = foo_user zaqar_password = foo_tenant zaqar_project_name = foo_password zaqar_queues = manila_notification 2. Restart manila-share service. Consumer side setup and run --------------------------- 1. Place files 'zaqarclientwrapper.py' and 'zaqar_notification_example_consumer.py' to any dir on user machine, but they both should be in the same dir. 2. Make sure that following dependencies are installed: - PIP dependencies: - netaddr - oslo_concurrency - oslo_config - oslo_utils - python-zaqarclient - six - System libs that install 'mount' and 'mount.nfs' apps. 3. Create file with following options: :: [zaqar] # Consumer-related options sleep_between_consume_attempts = 7 mount_dir = "/tmp" expected_ip_addresses = 10.254.0.4 # Common options for consumer and server sides zaqar_auth_url = http://%ip_of_endpoint_with_keystone%:35357/v2.0/ zaqar_region_name = %name_of_region_optional% zaqar_username = foo_user zaqar_password = foo_tenant zaqar_project_name = foo_password zaqar_queues = manila_notification Consumer options descriptions: - 'sleep_between_consume_attempts' - wait interval between consuming notifications from message queue. - 'mount_dir' - parent mount directory that will contain all mounted shares as subdirectories. - 'expected_ip_addresses' - list of IP addresses that are expected to be granted access for. Could be either equal to or be part of a CIDR. Match triggers [un]mount operations. 4. Run consumer with following command: :: $ zaqar_notification_example_consumer.py --config-file path/to/config.conf 5. Now create NFS share and grant IP access to consumer by its IP address. 
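A possible CLI flow for step 5 (the share name is illustrative; the IP address
must match one of the addresses listed in 'expected_ip_addresses') is: ::

    $ manila create NFS 1 --name hook-demo-share
    $ manila access-allow hook-demo-share ip 10.254.0.4
    $ manila access-list hook-demo-share

Hook driver interface
---------------------

The interface exercised by this example is small: a hook driver subclasses
manila.share.hook.HookBase and implements the three '_execute_*' methods,
exactly as 'zaqar_notification.py' does. A minimal do-nothing sketch (class
and module names are purely illustrative; point the 'hook_drivers' option at
wherever you place the module) could look like: ::

    from oslo_log import log

    from manila.share import hook

    LOG = log.getLogger(__name__)


    class LoggingHook(hook.HookBase):
        """Log every share driver action performed by this backend."""

        def _execute_pre_hook(self, context, func_name, *args, **kwargs):
            LOG.debug("pre hook called for '%s'", func_name)

        def _execute_post_hook(self, context, func_name, pre_hook_data,
                               driver_action_results, *args, **kwargs):
            LOG.debug("post hook called for '%s'", func_name)

        def _execute_periodic_hook(self, context, periodic_hook_data,
                                   *args, **kwargs):
            LOG.debug("periodic hook called")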
manila-10.0.0/rally-jobs/0000775000175000017500000000000013656750362015146 5ustar zuulzuul00000000000000manila-10.0.0/rally-jobs/rally-manila.yaml0000664000175000017500000001022713656750227020416 0ustar zuulzuul00000000000000--- Dummy.openstack: - description: "Check quotas context" runner: type: "constant" times: 1 concurrency: 1 context: users: tenants: 1 users_per_tenant: 1 quotas: manila: shares: -1 gigabytes: -1 snapshots: -1 snapshot_gigabytes: -1 share_networks: -1 ManilaShares.list_shares: - args: detailed: True runner: type: "constant" times: 10 concurrency: 10 context: users: tenants: 3 users_per_tenant: 4 user_choice_method: "round_robin" sla: failure_rate: max: 0 {% for s in ("create_and_delete_share", "create_and_list_share") %} ManilaShares.{{s}}: - args: share_proto: "nfs" size: 1 share_type: "dhss_true" min_sleep: 1 max_sleep: 2 runner: type: "constant" times: 10 concurrency: 10 context: quotas: manila: shares: -1 gigabytes: -1 share_networks: -1 users: tenants: 2 users_per_tenant: 1 user_choice_method: "round_robin" manila_share_networks: use_share_networks: True sla: failure_rate: max: 0 {% endfor %} ManilaShares.create_share_network_and_delete: - args: name: "rally" runner: type: "constant" times: 10 concurrency: 10 context: quotas: manila: share_networks: -1 users: tenants: 2 users_per_tenant: 1 sla: failure_rate: max: 0 ManilaShares.create_share_network_and_list: - args: name: "rally" detailed: True search_opts: name: "rally" runner: type: "constant" times: 10 concurrency: 10 context: quotas: manila: share_networks: -1 users: tenants: 2 users_per_tenant: 1 sla: failure_rate: max: 0 ManilaShares.list_share_servers: - args: search_opts: {} runner: type: "constant" times: 10 concurrency: 10 sla: failure_rate: max: 0 ManilaShares.create_security_service_and_delete: {% for s in ("ldap", "kerberos", "active_directory") %} - args: security_service_type: {{s}} dns_ip: "fake_dns_ip" server: "fake-server" domain: "fake_domain" user: "fake_user" password: "fake_password" name: "fake_name" description: "fake_description" runner: type: "constant" times: 10 concurrency: 10 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0 {% endfor %} ManilaShares.attach_security_service_to_share_network: {% for s in ("ldap", "kerberos", "active_directory") %} - args: security_service_type: {{s}} runner: type: "constant" times: 10 concurrency: 10 context: users: tenants: 1 users_per_tenant: 1 quotas: manila: share_networks: -1 sla: failure_rate: max: 0 {% endfor %} ManilaShares.set_and_delete_metadata: - args: sets: 1 set_size: 3 delete_size: 3 key_min_length: 1 key_max_length: 256 value_min_length: 1 value_max_length: 1024 runner: type: "constant" times: 10 concurrency: 10 context: quotas: manila: shares: -1 gigabytes: -1 share_networks: -1 users: tenants: 1 users_per_tenant: 1 manila_share_networks: use_share_networks: True manila_shares: shares_per_tenant: 1 share_proto: "NFS" size: 1 share_type: "dhss_true" sla: failure_rate: max: 0 manila-10.0.0/rally-jobs/rally-manila-no-ss.yaml0000664000175000017500000000354213656750227021455 0ustar zuulzuul00000000000000--- Dummy.openstack: - description: "Check quotas context" runner: type: "constant" times: 1 concurrency: 1 context: users: tenants: 1 users_per_tenant: 1 quotas: manila: shares: -1 gigabytes: -1 snapshots: -1 snapshot_gigabytes: -1 share_networks: -1 ManilaShares.list_shares: - args: detailed: True runner: type: "constant" times: 10 concurrency: 10 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 
manila-10.0.0/tox.ini0000664000175000017500000001261313656750227014406 0ustar zuulzuul00000000000000
[tox]
minversion = 2.0
skipsdist = True
envlist = py3,pep8

[testenv]
basepython = python3
setenv = VIRTUAL_ENV={envdir}
usedevelop = True
whitelist_externals = find
deps =
  -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/ussuri}
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/test-requirements.txt
commands =
  find . -type f -name "*.py[c|o]" -delete
  stestr run {posargs}
  stestr slowest

[testenv:releasenotes]
deps =
  -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/ussuri}
  -r{toxinidir}/doc/requirements.txt
commands =
  rm -rf releasenotes/build
  sphinx-build -a -E -W -d releasenotes/build/doctrees \
    -b html releasenotes/source releasenotes/build/html
whitelist_externals = rm

[testenv:debug]
commands = oslo_debug_helper {posargs}

[testenv:pep8]
# Let's gate pep8 under py3 by default because the py3 checks are stricter.
commands =
  flake8 {posargs}
  # Run bashate during pep8 runs to ensure violations are caught by
  # the check and gate queues.
  bashate -i E006,E042,E043 \
    tools/enable-pre-commit-hook.sh \
    contrib/ci/pre_test_hook.sh \
    contrib/ci/post_test_hook.sh \
    devstack/plugin.sh \
    devstack/upgrade/from-mitaka/upgrade-manila \
    devstack/upgrade/resources.sh \
    devstack/upgrade/shutdown.sh \
    devstack/upgrade/upgrade.sh \
    tools/cover.sh \
    tools/check_logging.sh \
    tools/coding-checks.sh
  {toxinidir}/tools/check_exec.py {toxinidir}/manila
  {toxinidir}/tools/check_logging.sh {toxinidir}/manila

[testenv:genconfig]
whitelist_externals = bash
commands =
  oslo-config-generator --config-file etc/oslo-config-generator/manila.conf

[testenv:genpolicy]
commands =
  oslopolicy-sample-generator --config-file=etc/manila/manila-policy-generator.conf

[testenv:venv]
commands = {posargs}

[testenv:docs]
deps =
  -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/ussuri}
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/doc/requirements.txt
commands =
  rm -rf doc/build
  sphinx-build -W -b html doc/source doc/build/html
  # Ignore D001 since we allow lines in excess of 79 characters.
  doc8 --ignore D001 --ignore-path .tox --ignore-path doc/build --ignore-path manila.egg-info -e .txt -e .rst -e .inc
whitelist_externals = rm

[testenv:pdf-docs]
deps = {[testenv:docs]deps}
whitelist_externals = make
commands =
  sphinx-build -W -b latex doc/source doc/build/pdf
  make -C doc/build/pdf

[testenv:bindep]
# Do not install any requirements. We want this to be fast and work even if
# system dependencies are missing, since it's used to tell you what system
# dependencies are missing! This also means that bindep must be installed
# separately, outside of the requirements files, and develop mode disabled
# explicitly to avoid unnecessarily installing the checked-out repo too (this
# further relies on "tox.skipsdist = True" above).
deps = bindep
commands = bindep test
usedevelop = False
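# Example (assuming tox itself is installed): running `tox -e bindep` executes
# only `bindep test` against this tree and reports the missing system
# packages, without pulling in any of the Python requirements used by the
# other environments above.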
[testenv:cover]
setenv =
  {[testenv]setenv}
  PYTHON=coverage run --source manila --parallel-mode
commands =
  {toxinidir}/tools/cover.sh {posargs}

[testenv:fast8]
# Let's run fast8 under py3 by default because the py3 checks are stricter.
commands =
  {toxinidir}/tools/fast8.sh

[testenv:pylint]
deps =
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/test-requirements.txt
  pylint==2.3.1
whitelist_externals = bash
commands = bash ./tools/coding-checks.sh --pylint {posargs}

[testenv:api-ref]
# This environment is called from CI scripts to test and publish
# the API Ref to docs.openstack.org.
deps = {[testenv:docs]deps}
whitelist_externals = rm
commands =
  rm -rf api-ref/build
  python {toxinidir}/tools/validate-json-files.py {toxinidir}/api-ref/source/samples/
  sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html

[testenv:dbrevision]
deps = -r{toxinidir}/requirements.txt
commands = alembic -c manila/db/migrations/alembic.ini revision -m ""{posargs}

[flake8]
# Following checks are ignored on purpose:
# Following checks should be evaluated and fixed:
# E123 closing bracket does not match indentation of opening bracket's line
# E402 module level import not at top of file
# W503 line break before binary operator
# W504 line break after binary operator
ignore = E123,E402,W503,W504
builtins = _
# [H106] Don't put vim configuration in source files.
# [H203] Use assertIs(Not)None to check for None.
# [H904] Use ',' instead of '%', String interpolation should be delayed to be handled by the logging code,
# rather than being done at the point of the logging call..
enable-extensions = H106,H203,H904 exclude = .git,.tox,.testrepository,.venv,build,cover,dist,doc,*egg,api-ref/build,*/source/conf.py [hacking] import_exceptions = manila.i18n [flake8:local-plugins] extension = M310 = checks:CheckLoggingFormatArgs M313 = checks:validate_assertTrue M323 = checks:check_explicit_underscore_import M325 = checks:CheckForStrUnicodeExc M326 = checks:CheckForTransAdd M333 = checks:check_oslo_namespace_imports M336 = checks:dict_constructor_with_list_copy M337 = checks:no_xrange M338 = checks:no_log_warn_check M339 = checks:no_third_party_mock M354 = checks:check_uuid4 M359 = checks:no_translate_logs paths = ./manila/hacking [testenv:lower-constraints] deps = -c{toxinidir}/lower-constraints.txt -r{toxinidir}/test-requirements.txt -r{toxinidir}/requirements.txt manila-10.0.0/ChangeLog0000664000175000017500000036642113656750362014656 0ustar zuulzuul00000000000000CHANGES ======= 10.0.0 ------ * [Unity] Fix unit test issue * Update share-manager behavior for shrink share operation * [CI] Fix grenade share networks test * [grenade][stable/ussuri only] Switch base version * Update TOX\_CONSTRAINTS\_FILE for stable/ussuri * Update .gitreview for stable/ussuri 10.0.0.0rc1 ----------- * Drop install\_command usage in tox * [NetApp] Fix vserver peer accept on intra cluster replication * [NetApp] Fix share shrink error status * fix bug in quota checking * [doc] Annotate max api microversion in Ussuri * fix bug in consume from share * [cycle-goals] Add PTL and contributor quickstart * Use unittest.mock instead of third party lib * fix bug in tox py test * Fix invalid assert statement * VNX/PowerMax: Fix export locations * [NetApp] Fix vserver peer creation with same vserver * Fix docs duplicated autoclass definition * [CI] Stop gating with manila-tempest-minimal-dsvm-lvm * [NetApp] Improve create share from snapshot functionality * [ZFSonLinux] Create share from snapshot in different backends * Remove experimental flag from share groups feature * Create share from snapshot in another pool or backend * [Unity] Manage/unmanage share server/share/snap * Remove provisioned calculation on non thin provision backends * Delete type access list when deleting types * Add new quota for share replicas * Prevent share type deletion if linked to group types * Increase MANILA\_SERVICE\_VM\_FLAVOR\_DISK * Support query user message by timestamp * Revert "Remove provisioned calculation on non thin provision backends" * Hacking: Fix W605 * Hacking: Fix E731 * Hacking: Fix E741 * Hacking: Fix E305 * Hacking: Fix E117 * Hacking: Fix E226 * Hacking: Fix E241 * Hacking: Fix F601 * Hacking: Fix F841 * Hacking: Fix F632 * Update hacking for Python3 * Remove provisioned calculation on non thin provision backends * Cleanup Python 2.7 support * If only .pyc exist, the extension API will be disabled * [ci] Stop requiring neutron-tempest-plugin * Enforce policy checks for share export locations * Fix URLs in code and documentation * [NetApp] cDOT to set valid QoS during migration * Enable the use scheduler creating share from snapshot flag * [NetApp] Fix driver to honor standard extra specs * share\_networks: enable project\_only API only * Cleanup docs building * Update devstack repository URL * Fix database loading for some resources * Fix release note for LP 1853940's bugfix * Add opt doc and reno for noop interface driver * Add asynchronous error info into messages when share extend error * Use psycopg2-binary for test-requirements * Introduce noop interface driver * Refactor route clearing to linux 
interface * clear\_outdated\_routes: reduce neutron calls * generic: Refactor network functions to l3\_init * Use StrOpt for instance type * Improve share list speed using lazy='subquery' * Store ganesha logs and configs * [Unity]: Failed to delete cifs share if wrong access set * Fix over-quota exception of snapshot creation * Don't send heartbeats if driver not initializing correctly * Fix missing parameter in the log message * Fix a wrong comma in log message * Add manila-specs link to readme.rst * fix a typo * Fix error that failed to get image for booting server * Make extra\_specs value as case-insensitive * VNX/Powermax: Make it work under python3 * [ussuri][goal] Drop python2.7 support * PowerMax and VNX Manila - Read only policy is not working correctly * [NetApp] Fix share replica failing for 'transfer in progress' error * Document max Train-release API version * [Unity] Add release note and tests for IPv6 fix * Fix invalid assert statement * Fix share network update erroneously returns success * [Unity] Sync Unity related Docs * Enable glusterfs-native ci * [NetApp] Allow extension/shrinking of NetApp replicated share * add document,source,bugs,blueprints links to readme * update readme links * Imported Translations from Zanata * Update master for stable/train 9.0.0 ----- * Fix pagination does not speed up queries bug * Fix timeout when compute server was soft-deleted * Fix [Unity] verification and convert mgmt ipv6 * Retrieve compatible share servers using subnet id * Fix error print format * Skip NFS/Samba install for CephFS * [train][goal] Define new manila-tempest-minimal-lvm-ipv6-only job * Add share network with multiple subnets * Add manila-status to man-pages list * [NetApp] Adds support for replication with DHSS=True * Pylint: use -j 0 arg * update share group test in db * Add update share-type API to Share Types * Remove backend spec from share type while creating replica * Remove support for \`\`data\_node\_access\_ip\`\` * [Unity] Driver supports the mode that does not create and destory share servers (DHSS=False) * Fix \_list\_view function for count * Change PDF file name * [Nexenta] Refactored NexentaStor5 NFS driver * Add PDF documentation build * [Infortrend] Add Infortrend Manila Doc * Fix subsections for container driver * Enable replication tests (DHSS=True) on Dummy driver * Add extend/shrink feature for glusterfs directory layout * Validate API sample JSON files * Correct json format in api-ref * [CI] Enable glusterfs-nfs ci * Fix incorrect 'cephfnfs1' to 'cephfsnfs1' * Add missing space * Add Infortrend Manila Driver * Add manila-ui config instructions * Remove support for "lvm\_share\_export\_ip" * [CI] Convert rally jobs to zuulv3 native * Fix usage of deprecated devstack function * Make manila-tempest-plugin installation optional * [api-ref] Correct share metadata API ref * Conditionally restore default route in setup\_ipv6 * Run tempest jobs under python3 * add IPv6 support for CephFS/NFS back end * [api-ref] Use relative links and fix grammar * Update api-ref location * Manila PowerMax - rebrand from VMAX to PowerMax * Add Python 3 Train unit tests * Remove the redunant table from windows' editor * Unmount NetApp active share after replica promote * Bump the openstackdocstheme extension to 1.20 * Check NetApp SnapRestore license for pools * Fix an invalid assert state * Manila share driver for Inspur InStorage series * [CI] Add bindep.txt * Adding documentation for User Messages in Manila Documentation * Fix typo in Manila docs in manila.rst file * 
[CI] Run scenario tests in the cephfs-nfs job * Add admin ref for manage/unmanage servers DHSS=True * Blacklist python-cinderclient 4.0.0 * Manila VMAX docs - notification of removal of tags * Update sphinx dependency * [NetApp] Fix race condition issues on vserver deletion * [CI] Bump timeout for the migrations test case * NeutronBindNetworkPlugin: fix multi segment mtu * [api-ref] Update JSON samples for scheduler-stats API * Fix error print format * [Unity] Update doc for revert to snap support * OpenDev Migration Patch * Dropping the py35 testing * The parameters of 'list shares' are optional * [api-ref] Delete unused parameters * [api-ref] De-duplicate name and description parameters * [api-ref] De-duplicate date and time parameters * [api-ref] Replace "tenant" terminology with "project" * Fix misuse of assertFalse * [grenade] Switch base version * [tests] Fix PYTHON3\_VERSION * Manila VMAX docs - clarify backend configurations * [doc][api-ref] Fix annotation and missing parameters * Add api-ref for manage/unmanage with DHSS=True * [doc][api-ref] Clarify manage/unmanage APIs * Replace openstack.org git:// URLs with https:// * [doc][api-ref] snapshot user\_id and project\_id fields * Update master for stable/stein 8.0.0 ----- * Fix server delete attempt along with share net deletion * INFINIDAT: suppress 'no-member' pylint errors * Dummy driver: Don't fail unmanage on malformed share servers * Document Windows SMB driver * Only allow IP access type for CephFS NFS * Drop run\_tests.sh and tools/colorizer.py * Check all\_tenants value in share\_networks api * NetApp cDOT assume disabled compression on empty result * Check all\_tenants value in security\_service api * Fix parameters passed to exception * Destroy type quotas when a share type is deleted * Replacing the HTTP protocol with HTTPS * Fix driver filter to not check share\_backend\_name * Fix logging in wsgi module * Use legacy base to run CI/CD on Bionic * Manila VMAX docs - differences between quotas * Deploy manila with uwsgi on devstack * Fix API version inferred w/ un-versioned URLs * Add missing ws seperator between words * Manila VMAX docs - improve pre-configurations on VMAX section * Bump timeout on sqlalchemy migration test * Bump pylint job timeout * Manila VMAX docs - clarify snapshot support * Fix hyperlink reference to security section * Manila VMAX docs - clarify driver\_handles\_share\_servers * Fix version selector when for proxy-style URLs * VMAX manila doc - SSL Support * TrivialFix: Remove trailing whitespace in tox.ini * [pylint] Fix Manage-Unmanage with DHSS=True pylint issues * [Pylint] Bump pylint version to latest * [pylint] Use filenames in coding-checks * [pylint] Run pylint separately for code and tests * [NetApp] Add manage/unmanage of share servers * Add manage/unmanage of shares in DHSS=True * Fix missing size value in snapshot instance * Add manage/unmanage implementation to Container Driver * Refactor Container Driver * Move grenade job to bionic and run with python 3 * Update docs landing page to follow guideline * [pylint] Fix/ignore pylint errors in test modules * Fix error message when updating quota values * [pylint] Fix/ignore pylint errors in non-test modules * Extend remove\_version\_from\_href support * [NetApp] Fix race condition issue in NetApp driver * Fix tls-proxy issues with the devstack plugin * [pylint] Remove lint tox environment * Include .inc files in doc8 linting * Suppress pylint warnings from dell\_emc drivers * Fix sshpool.remove * Fix typo in test name * Add policy to 
create/update public shares * [ZFSOnLinux] Log ZFS options as they are retrieved * Return request-id to APIs that don't respond with a body * Fix service image boot issues * Add api ref for access rule metadata feature * [Unity] Shrink share in Unity driver * Allow configuring availability\_zones in share types * Bump timeout on dsvm jobs * Add tripleo scenario004 job to experimental queu * Match job names in playbooks to their names * Address E0102 pylint errors * [CI] Drop redundant if condition in the LVM job playbook * NetApp ONTAP: allow multiple DNS IPs * Run cephfs jobs under py3 * Fix pylint errors for ganesha manager * Set mode for CephFS volumes and snapshots * Deprecated config option [DEFAUL]memcached\_servers * Deprecate [DEFAULT]/share\_usage\_size\_audit\_period * Fix spurious pylint import errors for ddt and mock * Configure per backend availability zones in devstack * Allow configuration of a back end specific availability zone * [Trivial fix] add missing ws seperator between words * Drop [DEFAULT]root\_helper config option * [Unity] Revert to snapshot support * Convert dummy job to py3 * Separate APIs for share & replica export locations * Set paramiko logging to DEBUG level * Change ssh\_utils parameter to correctly send keepalive packets * devstack: Do a vgscan before checking if the VG is there * QNAP: Fix inconsistent cases while create/manage from snapshot * Fix the misspelling of "except" * Publish sample config file in the genconfig job * Improve service instance module debug logging * Move/Drop useless SQL related config options * Drop param2id() from cmd/manage.py * Drop trycmd() from manila/utils.py * QNAP: driver should not manage snapshot which does not exist * Add Ubuntu Bionic CephFS jobs * Drop is\_eventlet\_bug105() from manila/utils.py * QNAP: Support QES FW on TDS series NAS * Adjust ssh timeouts * Add devstack instructions and local.conf samples * [doc] Fix api sections in the contributor doc * Set ram for manila service image to 256 * [Manila Unity/VNX] add 'snapshot support' related Doc for Unity/VNX driver * NetApp cDOT store port IDs and addresses at share server backend details * Deprecate old keystone session config opts * speed up GET scheduler-stats/pools/detail * Fix image\_name retrieval in custom-image jobs * Only run the needed services for CephFS jobs * Use the canonical URL for Manila repositories * fix http link to https link * NetApp ONTAP: cifs add AD security service server as preferred DC * Change openstack-dev to openstack-discuss * Fix ganesha for 0.0.0.0/0 access * Add missing ws separator between words * VMAX manila doc - support for IPv6 * [api-ref] Added share servers show and corrected path to details * [CI][LVM] Run the LVM job on Bionic Beaver * [LVM][IPv6] Quagga changes to support Bionic Beaver * Use OS CLI instead of the neutronclient * Remove i18n.enable\_lazy() translation * Delete the duplicate words in cephfs\_driver.rst * The URL of SSL is missing * [DevRef] Add code review guideline * [Trivial Fix] Correct spelling error of "throughput" * [CI] Switch Xenial tempest jobs to Bionic Beaver * VMAX manila - deprecate old tags correctly * inspur: transfer 'rw' to 'rwx' when Shared File Systems protocol is cifs * NeutronBindNetworkPlugin: fix multi segment neutron data save * NetApp ONTAP: Fix use of multiple subnets with DHSS=True * VMAX manila doc - use of correct VMAX tags * Add manila-status upgrade check command framework * [LVM] Run filesystem check before assigning UUID * Change python3.5 job to python3.7 job on 
Stein+ * Increment versioning with pbr instruction * Make coverage non-voting and fix use of rpc\_backend * Simplify running pylint * Don't quote {posargs} in tox.ini * remove glusterfs-nfs job from check queue * change tox envlist from 3.5 to 3 * Remove run\_tests.sh * [grenade] Switch base version * [Container driver] Fix volume group data collection * [ZFSOnLinux] Allow devstack bootstrap in Ubuntu > 16.04 * 3PAR: Update Storage Driver docs * Remove install-guide-jobs * Use templates for cover and lower-constraints * Spelling Errors * Add version maximum annotation to API versions doc * Add command to update share instance hosts * add python 3.6 unit test job * switch documentation job to new PTI * import zuul job settings from project-config * NetApp ONTAP fix test allocate container with share\_instance * Remove logging overrides from plugin.sh * adjust response code in 'service.inc' * Adds export path option to Quobyte driver * Fix manila-ui link in the contributor doc * Fix ShareGroup sqlalchemy model ShareGroupTypes relation * [ZFSOnLinux] Retry unmounting old datasets during manage * Update reno for stable/rocky * NetApp ONTAP: change cifs server valid dns hostname * NetApp cDOT driver switch volume efficiency 7.0.0 ----- * replace 'data=' with 'message=' * NetApp cDOT driver qos policy same name * Test share type per test suite changes * INFINIDAT: unit tests - remove fake exception body * Fix grenade job * Fix mutable config in manila-scheduler * Fix ZFSOnLinux doc about manage ops * INFINIDAT: add host.created\_by metadata key * check all\_tenants value in share api * NetApp cDOT: use security service ou 7.0.0.0b3 --------- * Api-ref: Add min\_version in the API parameters * Retrieve is\_default value to fix empty display in CLI * [Docs] Don't include unittest documentation * Support metadata for access rule resource * QNAP: Add support for QES 2.1.0 * [CI] Don't set test config for API microversions if master * Api-ref: Add missing parameter in the version api * Allow setting test API microversions in gate tests * Api-ref: change fix \`\`extra-spec-key\`\` key in path * Docs: glance image-create returns an error issue * [NetApp driver] Control snapshot folder visibility * Fix results capturing for the dummy driver * Fix ensure\_shares bugs * [NetApp driver] NVE License not present fix * Change depreciated to deprecated * Fix bare exceptions in ganesha manager * INFINIDAT: change create\_child to create\_snapshot * Manila share driver for Inspur AS13000 series * Add share instance index on share\_id * [Manila Unity/VNX] admin doc failed to render * DB Migration: fix downgrade in 579c267fbb4d * Cannot remove user rule for NFS share * Fix mutable default argument in Quobyte jsonrpc * API: Add \`\`all\_tenants\`\` parameter * Fix doc warnings * [API] Doc snapshot and share net deletion preconditions * Address trivial TODOs * NetApp cDOT driver skip vserver route with no gateway * Remove confusing DB deprecation messages * add release notes to README.rst * rectify 'a export ID' to 'an export ID' * rectify 'a extra specs' to 'an extra specs' * rectify 'a exact match' to 'an exact match' * Document the preconditions for deleting a share * Use volume\_uuid in \_resize\_share of Quobyte Driver * Limit formatting routes when adding resources * Allow api\_version\_request.matches to accept a string or None * Update link address * Generic driver - Limiting SSH access from tenant network * [Trivialfix] Remove the useless parameter 'ext\_mgr' * Delete unused test check * [Doc] Add 'gateway' 
and 'mtu' in share network api-ref * QNAP: driver changes share size when manage share * Trivial: Update pypi url to new url * Config for cephfs volume path prefix * Switch to oslo\_messaging.ConfFixture.transport\_url * Use class name in invocation of super * Fix use of pbr version release 7.0.0.0b2 --------- * Default pylint to run using python3 * fix tox python3 overrides * [Grenade] Switch base to stable/queens * Set initial quota in Quobyte and correct resizing * Trivial:Update pypi url to new url * Fix share-service VM restart problem * Fix test plugin issues in dsvm-lvm-centos job * Fix manila-tempest-\*-centos-7 jobs * VMAX driver - Implement IPv6 support for Dell EMC VMAX driver * Fix post-execution for tempest tests * Fix access control for single host addresses * Switch from ostestr to stestr * Update "auth\_url" in install docs * NetApp ONTAP: Fix delete-share for vsadmin users * Fix title overline too short when generate docs * Fix bug for share type filter search * Update auth\_url value in install docs * Fix doc build warnings * Add ou to security service * [Manila Unity] Set unity\_server\_meta\_pool option as required * Use 'Default' as the value of domain name in install guide * Remove deprecated DEFAULT options * uncap eventlet * Update auth\_uri option to www\_authenticate\_uri * Fix allow the use of blank in user group name to access the share 7.0.0.0b1 --------- * move securiy service error explanation from comment * Run pep8/fast8 with python3 * Circumvent bug #1747721 to prevent CI failures * Remove option standalone\_network\_plugin\_ip\_version * Updated from global requirements * Support filter search for share type API * Fix typos in help text of Generic driver and ZFSSA config opts * Remove the deprecated "giturl" option * Disable tempest in rally jobs * Modify grammatical errors * Use rest\_status\_code for api-ref response codes * Updated from global requirements * add lower-constraints job * Update the new PTI for document build * Add manila-tempest-plugin as a requirement in rally job definitions * use http code constant instead of int * Adding driver to mysql connection URL * Log config options with oslo.config * Fix tap device disappear after node restart * Updated from global requirements * Update doc name and path for dell emc vnx and unity driver * Fetch and install manila-tempest-plugin system-wide * INFINIDAT: fix release notes * Updated from global requirements * Change a parameter key for CIFS mounting command * Updated NetApp driver features support mapping * INFINIDAT: set REST API client parameters * Add docs for quota\_class\_set API * Fix the incorrect reference links * Rename Zuul jobs * Remove the nonexistent install-guide directory * Remove use of unsupported TEMPEST\_SERVICES variable * Fix manila logging rabbitmq password in debug mode * Updated from global requirements * Replace Chinese quotes to English quotes * Fix db migration for mariadb >= 10.2.8 * Move openstackdocstheme to extensions in api-ref * Update documentation links * Fix typos * Update reno for stable/queens * Update docs since manila\_tempest\_tests are installed system-wide 6.0.0.0rc1 ---------- * Revert Id905d47600bda9923cebae617749c8286552ec94 * Fix LVM driver not handling IPv6 in recovery mode * Fix UnicodeDecodeError when decode API input * Fix Host-assisted Share Migration with IPv4+IPv6 * Add manila.data.helper options to config sample * INFINIDAT: load-balance shares inside network space * INFINIDAT: support deleting datasets with snapshots * Replace chinese 
double quotes to English double quotes * Remove the unused variable * Fix boolean types in db migration tests * drivers/cephfs: log an error if RO access is used and it's unavailable * Fix a trivial bug of Dell EMC Manila IPv6 implementation * Handle TZ change in iso8601 >=0.1.12 6.0.0.0b3 --------- * Use native Zuul v3 tox job * fix misspelling of 'password' * Enable IPv6 scenario tests in Upstream CI * Update manila plugin to support IPv6 * NetApp cDOT: Add NVE support in Manila * Update unreachable link * Replace curly quotes with straight quotes * Updated from global requirements * Update contributor/tempest\_tests.rst * Implement IPv6 support for Manila Dell EMC Unity driver * Disable security group rule when create port * Modify outdated links * Updated from global requirements * Add ipv6 for share network admin doc * Follow the new PTI for document build * Updated from global requirements * DocImpact: Add MapR-FS native driver * Use stestr for coverage * Fix NFS/CIFS share creation failure issue * Implement IPv6 support for Dell EMC VNX driver * Fix version details API does not return 200 OK * QNAP: Add support for QES 2.0.0 * Remove ordering attempts of 'unorderable types' * Fix volume attach error in generic driver * Always disable root-squash * Add support for enhanced features to the QNAP Manila driver * Fix error message in the manage API * DocImpact: Add quotas per share type * Fix running docs job failure * Raise error when image status is not active * ganesha: read and store non-ASCII data in exports * Api-ref: add show details for share type * Replace invalid link in manila doc * Fix incorrect api ref parameters * [Doc] Correct a known restriction in cephfs\_driver * QNAP Manila driver: Access rule setting is override by the later rule setting * Fix install docs reference error * Fix default and detailed share type result not correct * Remove in-tree tempest plugin * Updated from global requirements * Add policy documentation and sample file [10/10] * [policy in code] Add support for AZ, scheduler and message resource [9/10] * [policy in code] Add support for share and type extra resource [8/10] * [policy in code] Add support for replicas, networks and security services [7/10] * [policy in code] Add support for group resource [6/10] * Huawei driver supports snapshot revert * Updated from global requirements * Fix getting share networks and security services error * Updated from global requirements * Change ensure share to make startup faster * [policy in code] Add support for service and quota resource [5/10] * Remove unused configuration options * [policy in code] Add support for snapshot resource [4/10] * Add count info in /shares and /shares/detail response * Extend .gitignore for linux swap files range * [policy in code] Add support for share resource [3/10] * [policy in code] Add support for share type resource [2/10] * Add count info in /shares and /shares/detail API doc * Updated from global requirements * Remove usage of deprecated config 'resources\_prefix' * ganesha: store exports and export counter in RADOS * INFINIDAT add Manila driver 6.0.0.0b2 --------- * Updated from global requirements * Simplify the way drivers report support for ipv6 * QNAP: Add support for QES 1.1.4 * Update docs to fix broken links * Add utils methods to write files * Fix drivers\_private\_data update on deleted entries * Use v3 cinder client for share volume * Updated from global requirements * Added Handling Newer Quobyte API Error Codes * Remove 'branches:' lines from .zuul.yaml * 
Install centos-release-openstack-pike * Add 'description' in share type API Doc * Add 'description' in share type APIs * [Api-ref] update parameters for share types api * fix keystone auth failed since project\_domain\_id and user\_domain\_id * [Doc]Update cephfs\_auth\_id for cephfsnfs Configuration * Fix quota usages update deleting same share from several API endpoints * [Doc] Use share group instead of consistency group in driver\_requirements * Fix shared-file-systems-share-types URL * Utilize requests lib for Huawei storage connection * Remove setting of version/release from releasenotes * Add ssl support for manila API access * Remove unused functions from api/extensions.py * Api ref contains incorrect parameters * Updated from global requirements * [policy in code] Add support for share instance export location resource * Remove hdfs job from check queue * Updated from global requirements * Advertise IPv6 support in the NetApp driver * Allow IPv6 gateways for the default route * Allow ZAPI over IPv6 * Remove glusterfs-native job from check queue * Updated from global requirements 6.0.0.0b1 --------- * Add API document for share group [3/3] * Add API document for share group [2/3] * The default cephfs\_enable\_snapshots set to False * Add admin documentation for following keys of quotas: -'share\_groups' -'share\_group\_snapshots' * Add API document for share group [1/3] * Purge doc of references to nova net * Remove deprecated ganesha\_nfs\_export\_options * Fix missing neutron net plugin options * Zuul: add file extension to playbook path * Fix duplicate standalone\_network\_plugin\_ip\_version * Fix issue with different decimal separators * Use sslutils from oslo\_service * Impove coverage job accuracy * NetApp ONTAP: Fix share size when creating from snapshot * [Doc] Fix parameters in share network api-ref * [Doc] Fix wrong links in docs * [doc] Fix install guide doc * Don't attempt to escalate manila-manage privileges * CentOS share node install docs * Migrating legacy jobs * doc: move stuff from contributor to admin * Delete limited\_by\_marker from api/common.py * Rename to index.rst * Restore .testr.conf * Fix 'project\_share\_type\_quotas' DB table unique constraint * Updated from global requirements * Use generic user for both zuul v2 and v3 * [Doc] Add share group in doc * Updated from global requirements * Fixed creation neutron api mapping for security groups * cleanup test-requirements * Add default configuration files to data\_files * NetApp ONTAP: Add support for filtering API tracing * Updated from global requirements * Switch base to latest in link address * Enable mutable config in Manila * ganesha: cleanup of tmp config files * [Doc] Delete consistency group in doc * tempest: remove call to set\_network\_resources() * Removes use of timeutils.set\_time\_override * Updated from global requirements * Implementation of Manila driver for Veritas Access * tests: replace .testr.conf with .stestr.conf * [install-guide] remove install-guide doc * [doc] Add API document for snapshot instances * Remove auto generated files and unnecessary .gitignore file * Allows the use of dollar sign in usernames * [Api-ref] Delete the duplicate tenant arguments in parameters.yaml * Fix html\_last\_updated\_fmt in conf.py * Fix test\_rpc\_consumer\_isolation for oslo.messaging 5.31.0 * Fix wrong links in manila * Delete the 'share\_extension:types\_extra\_specs' policy * Add API document for share replica * [Grenade] Switch base to stable/pike * NetApp: Fix usage of 
iso8601\_from\_timestamp * Use newer location for iso8601 UTC * Remove name and description from the search\_options list * Fix a typo in share\_migration.rst * Fix a typo: replace microverison with microversion * Remove "os\_region\_name" config option * [doc] Move Experimental APIs description to a common place * [Api-ref] Remove unused parameter extra\_specs\_2 in parameters.yaml * Updated from global requirements * Remove vestigate HUDSON\_PUBLISH\_DOCS reference * Add API document for share type quota * doc migration: update the doc link address * Update the documentation link for doc migration * Fix incorrect literal\_block error when build docs * Updated from global requirements * doc migration: configuration reference * Fix man page build * Remove unused variables and broken links * doc migration: cli reference * doc migration: user-guide * doc migration: install guide * doc migration: admin guide * doc migration: new directory layout * doc migration: openstackdocstheme completion * NetApp ONTAP: Fix revert-to-snapshot * Updated from global requirements * [Doc] Fix access rule description in api-ref * Update reno for stable/pike * [Doc] Add more description to user messages api-ref * [Api-ref] remove "is\_public" in snapshot updated description * TrivialFix: Add code block and format JSON data * Fix the duplicate hacking check M312 and H203 5.0.0 ----- * Re-enable broken CG code in NetApp driver * Fix wrong links * [Api-ref] Add supported protocol "MAPRFS" in doc * Add API document for share group quotas * [Doc] Remove unused 'provider\_location' parameter * [Doc] Fix API document for Consistency group * Change the way to create image service * [Tempest] Fix tests for pre-existing share network * NetApp cDOT: Fix security style for CIFS shares * Remove duplicate variables * Remove tempest pin * Imported Translations from Zanata * Update links in README * Fix NFSHelper 0-length netmask bug * Enable some off-by-default checks * Imported Translations from Zanata * Fix multiple issues with revert to snapshot in LVM * Add exception for no default share type configured * Fix cannot deny ipv6 access rules * Removed unnecessary setUp() calls in tests * [Trivialfix]Fix typos * Imported Translations from Zanata * Use tempest-plugin service client registration * Imported Translations from Zanata * Add ipaddress in manila requirements * Updated from global requirements * Enable IPv6 in manila(documentation) * Updated from global requirements 5.0.0.0b3 --------- * Enable IPv6 in manila(network plugins and drivers) * Add share groups and share group snapshots quotas * Add share usage size tracking in doc * Add share usage size tracking * Update location of dynamic creds in tempest tests * Provide filter name in user messages * Fix the exact filter can be filter by inexact value * NetApp cDOT: Add support for QoS/throughput ceilings * Updated from global requirements * NetApp: Define 'preferred' to False instead of none * Updated from global requirements * Add quotas per share type * Fix deprecated options version * Replace test.attr with decorators.attr * Allow 2 or more export IPs for LVM driver * Updated from global requirements * Disable notifications * Add user messages periodic cleanup task * Added like filter in api-ref * Enable IPv6 in manila(allow access) * NetApp cDOT: Fix share specs on migration * Updated from global requirements * Updated from global requirements * Update the documentation link for doc migration * Fix grammatical mistake, Changed character from "a" to "an" * Update 
URL home-page in documents according to document migration * Extend usage of user messages * User Messages * Add prefix 'test' to test name in test\_shares * Fix inappropriate parameters * VMAX VNX Manila - Refactor VMAX and VNX to use common code * Allow docs build without git * VNX: bump the version for Pike * Add like filter * TrivialFix: replace set(sorted(x)) with sorted(set(x)) * Remove --omit argument in run\_tests.sh * Unity: unexpected data in share from snapshot * VNX: share server cannot be deleted * Add export-location filter in share and share instance list API * NetApp cDOT: Add gateway information to create static routes * Add create/delete/extend/shrink share notifications * Updated from global requirements * NetApp cDOT: Fix share server deletion * Updated from global requirements * Replace the usage of 'admin\_manager' with 'os\_admin' * Add support for Guru Meditation Reports for manila * Replace the usage of 'manager' with 'os\_primary' * Use parenthesis instead of backslashes in tempest folder * Retry backend initialization * Allow endless retry loops in the utility function * Updated from global requirements * cephfs/driver: add nfs protocol support * Use parenthesis instead of backslashes in tests folder * Use parenthesis instead of backslashes in scheduler folder * Updated from global requirements * Wrong substitution of replica ID in log message * Fix ShareSnapshotInstance DB table * Use parenthesis instead of backslashes in db folder * Use parenthesis instead of backslashes in share folder * Use parenthesis instead of backslashes in API folder * Replace assertEqual([], items) with assertEmpty(items) * Change example value in docs for CephFS snapshots * [Docs] Correct glusterfs references * Updated from global requirements * Updated from global requirements 5.0.0.0b2 --------- * Imported Translations from Zanata * GPFS: Changing default value of NFS server type * Use get\_rpc\_transport instead of get\_transport * CI: Update tempest commit * [Share Groups] Squash SGS member and SS instances DB tables * Updated from global requirements * [Share Groups] Add two new fields to SG API object * [Share Groups] Add availability zone support * [Share Groups] Fix creation of share group types with wrong specs values * [Generic driver] Fix incompatibility with novaclient * ganesha: dynamically update access of share * [Share groups] Add scheduler filter ConsistentSnapshotFilter * Remove pbr warnerrors in favor of sphinx check * Clean releasenotes and install-guide build dir * Fix pep8 M325 error with python 3.5 * Use get\_notification\_transport for notifications * Updated from global requirements * Implement update\_access in Isilon Driver * Replace oslo\_utils.timeutils.isotime * Use ShareInstance model to access share properties * CI: Update tempest commit * Fix share instance list API display error * Remove unused function in test\_share\_snapshot\_instances file * Updated from global requirements * Refactor share instances tempest test * GPFS Path: Fix bugs related to initialization of GPFS Driver * Updated from global requirements * Add a releasenote for tooz heartbeat * coordination: use tooz builtin heartbeat feature * Updated from global requirements * Remove unused self.context * Correct re-raising of exception in VNX driver * Updated from global requirements * Fix typos in document * Fix unit test failures in gate * Replaced exc.message with str(exc) * Change to share access list API * Fix update share instance pool fail * Change share to share snapshot in snapshot 
list API annotation * devstack: clone Manila client only if marked to * api-ref:Update ref link * Set access\_policy for messaging's dispatcher * Fix api-ref doc generation for Python3 * Optimize the link address * Add periodic task to clean up expired reservation * Refactor and rename CephFSNativeDriver * Remove usage of parameter enforce\_type * Capitalize the first letter in comment 5.0.0.0b1 --------- * Add comment explaining ignore D001 for doc8 * Updated from global requirements * Add possibility to run 'manila-api' with wsgi web servers * Hacking: do not translate log messages * Updated from global requirements * Remove log translations in others 5/5 * Replace six.iteritems() with .items() * Add sem-ver flag so pbr generates correct version * Fix important:: directive display in install guide * [CI] Add support for CI jobs with custom images * Updated from global requirements * Remove service\_instance\_network\_helper\_type option * Update to current tempest tag * Add read-only tests for cephx access rules * Remove log translations in share and share\_group 4/5 * Remove log translations in scheduler 3/5 * [Rally] fix jobs * Remove unnecessary setUp function in testcase * Remove log translations in cmd,common,data,db and network 2/5 * Updated from global requirements * Remove log translations in api 1/5 * Updated from global requirements * Remove deprecated manila-all command * setup \_IntegratedTestBase without verbose flag * Handle SSL from VNX driver * [Dell EMC Unity] Create with user capacity * Move create\_manila\_accounts to post-config * Imported Translations from Zanata * change user access name limit from 32 to 255 characters * Fix some reST field lists in docstrings * Fix docs failures caused by latest eventlet * Use HostAddressOpt for opts that accept IP and hostnames * set basepython for pylint tox env * Updated from global requirements * Remove old oslo.messaging transport aliases * Revert "Handle ssl for VNX manila driver" * remove hacking rule that enforces log translation * docs: fix build failure on html\_last\_updated\_fmt * devstack: skip nfs kernel install if nfs-ganesha * Updated from global requirements * Handle ssl for VNX manila driver * Update share replicas after promotion in proper order * Enable share groups back * Updated from global requirements * Switch to use stable data\_utils * Deprecate 'ganesha\_nfs\_export\_options' * Local copy of scenario test base class * Rename wrapped methods in share manager * CephFS driver: change CG variables to SG variables * [api-ref]: Add missing share statuses * Fix python3 pep8 errors * Update share server provisioning for share groups * Send resize parameters in rpc as list in the Quobyte driver * [Tempest] Fix concurrency in test with listing share servers * The python version is added Python 3 and 3.5 version was missing * Start NFS and SMB services on fedora platforms * Remove unused "share\_id" parameter * Remove unused assignments in share manager * Updated from global requirements * Unblock gate failure on docs build * Fix 3 CI breakages * [Grenade] Fix devstack configuration in CI hook * Change tempest tag to 15.0.0 * Fix gate breakage caused by localrc usage * Fix host-assisted migration stale source share * Fix syntax in devstack plugin * Align policy.json with code * Add Apache License Content in index.rst * Use https instead of http for git.openstack.org * Remove unused pylintrc * Address family neutrality for container driver * container driver: log network id as network id * Remove redundant 
revert-to-snapshot test option * Fix some typos * Update tempest pin to 15.0.0 * Only return share host for admins using shares API * Fix migration\_success before completing * Update HNAS driver version history * doc: verify all rst files * Fix to use correct config options for network\_for\_ssh * Adds manila-manage 'db purge' command to man page * Enable devstack deploy of container driver on Fedora * Add Share Migration devref docs * Improve HNAS driver coverage * Updated from global requirements * [Tempest] Refactor api/tests/admin/test\_share\_servers module * 3PAR: Replace ConsistencyGroup * Updated from global requirements * Updated from global requirements * Update tempest pin to latest commit ref * [Tempest] Split up share migration tests to separate classes * Mock time.sleep in tests that sleep * HNAS: Fix concurrency creating/deleting snapshots * [Grenade] Add test with creation of share snapshot * Fix Windows SMB helper * [Grenade] Switch base to stable/ocata * Use more specific asserts in tests * Optimize opposite driver modes migration test * HNAS: ensure snapshot before trying to revert * Update reno for stable/ocata 4.0.0.0rc1 ---------- * Add 'consistent\_snapshot\_support' attr to 'share\_groups' DB model * Pass access rules to driver on snapshot revert * Fix default approach for share group snapshot creation * Remove a py34 environment from tox * Disable share groups APIs by default * Fix devstack manila nfs install for fedora * Improve test coverage for share migration * Replaces yaml.load() with yaml.safe\_load() * Prepare for using standard python tests * Fix nonsense variable name * Fix wrong access-rule negative test * Fix migration of mountable snapshots * Fix HNAS driver inconsistent exceptions * Fix HNAS driver always handling mountable snapshots * HNAS: Fix syntax to make shares read-only in snapshot create * Blocked migration of shares within share groups * HNAS: Fix managed snapshots not being mounted * Fix multiple export locations during migration * Fix snapshot export locations incorrectly handled * HNAS: avoid mismatch access level for managed shares * Fix error'ed access rules being sent to driver * Fix setup of DHSS=False mode for generic driver * HNAS: Fix concurrency error when managing snapshots * Enable host-assisted migration in ZFSOnLinux CI * Decrease share migration periodic task interval * Add access-rules tests to improve the coverage * Remove unit test that is not relevant anymore * Fix creation of share group types using share type names * Mark 'v1' API deprecated in the versions response * Fix Generic driver DHSS=False setup * Fix string formatting in access-deny API error message * Make LVM export IP configurable * Updated from global requirements 4.0.0.0b3 --------- * Updated from global requirements * Revert "[Devstack] Workaround osclient breakage" * Add mountable snapshots support to HNAS driver * Improve share migration scenario test validation * Mountable snapshots scenario tests * Fix MapRFS test\_\_execute to not impact others * Add mountable snapshots support * Fix devstack plugin to not depend on private network * VMAX manila plugin - Support for VMAX in Manila * NetApp: Support share revert to snapshot * [Tempest] Add functional tests for share groups feature * Manila Share Groups * Rename consistency group modules to share groups * [api-ref] Fix missing parameters in api-ref * Removes unnecessary utf-8 coding * NetApp cDOT: Add Intra-Vserver migration support * Updated from global requirements * Add QNAP Manila Driver * Add 
cast\_rules\_to\_readonly to share instances * Don't call update\_access if there are no rules * Implement Revert-to-snapshot in HNAS Driver * Share Migration Ocata Improvements * Refactor Access Rules APIs * Tooz integration * Trivial fixes to snapshot revert patch * [api-ref] Refactor share network documentation * Fix \`\`exportfs -u\`\` usage in generic driver * Add manila-manage db purge command * [Unity driver] VLAN enhancement * Implement share revert to snapshot * Fix metadata's soft-delete error when deleting shares * Fix license and E265 errors in doc/source/conf.py * Updated from global requirements * tests: remove useless variables in db\_utils methods * Some share api test cleanup * Update .gitignore * Fix column name error in migration script * Fix error message in Share Networks API * Remove NovaNetworkPlugin * [TrivialFix] Add negative test in quota detail * Add MapR-FS native driver * Allow skipping manila tempest tests * Properly deprecate service\_instance\_network\_helper\_type * remove devref jenkins doc * Support python 3.5 in tox * Unity/VNX Driver: Rename driver options * Migration Data Check fixes * Remove trailing backtick * Updated from global requirements * Remove nova net support from service\_instance * [api-ref] Refactor share instance export locations API documentation * GPFS: Add update\_access() * Report create\_share\_from\_snapshot\_support * Allow share status reset to migration status * Add support for manage/unmanage in GPFS driver * [api-ref] Refactor share actions API documentation * [api-ref] Refactor share export location API documentation * Add the ability to check the tenant quota in detail * Fix test variable injection in CI * [TrivialFix] optimize get filesystem id in huawei driver * [Devstack] Workaround osclient breakage * Updated from global requirements * GPFS KNFS: Fix deny access to succeed when possible * GPFS KNFS: Do not reuse ssh prefix in loop * Add create\_share\_from\_snapshot\_support extra spec * Trivial fix LOG.exception issues * [Grenade] Do not run tempest tests * Fix typo in rootwrap.conf * use six.StringIO for compatibility with io.StringIO in python3 * Trivial fix translate issues * NetApp: set proper broadcast domain for IPspace * Add Apache 2.0 license to source file * [Dell EMC Unity] Support create share smaller than 3 GB * Updated from global requirements * [TrivialFix] Move share type filter tempest to test\_scheduler\_stats.py * [devref] copy samples/local.conf correctly * GPFS CES: Fix bugs related to access rules not found * Add DriverFilter and GoodnessWeigher documentation * Setting up a development env with devstack instructions * Enable scenario tests for LVM and ZFSonLinux drivers * [Tempest] Add scenario test creating share from snapshot 4.0.0.0b2 --------- * Decouple Manila UI from Manila Devstack plugin * [Generic driver] Fix generation of admin export location * Fix undefined attribute in scenario test class * Fix Manila service image config for 3rd party CIs * Change network allocation of Unity driver to 1 * [LVM,Generic drivers] Fix relationships between parent and child shares * Replace six.iteritems() with .items() * Add "update\_access" interface support for VNX * Add share\_type filter support to pool\_list * [TrivialFix] Fix doc typo error * Updated from global requirements * [Tempest] Fix concurrency issue in scenario test * Add support for manage/unmanage snapshots in HNAS driver * [ZFSonLinux] Stop inheriting options creating share from snapshot * Updated from global requirements * [Devstack] Use 
openstack CLI instead of other clients * [Devstack] Fix DHSS=False setup for Generic driver * [Devstack] Run tempest update in proper time * Fix wrong data type in database migration * LOG marker mismatch in the code * [hacking] Ensure not to use LOG.warn * Fix devstack smb configuration outside ubuntu * Fix share writable in host-assisted migration * Remove unused function in db api * [api-ref] Refactor Manila scheduler stats API * TrivialFix: Remove Duplicate Keys * Show team and repo badges on README * Fix wrong instructions in the install guide * [Dummy driver] Add possibility to set delays for driver methods * [Devstack] Fix devstack plugin compatibility * Add Admin network support to HNAS driver * Fix extend operation of shrinked share in generic driver 4.0.0.0b1 --------- * [Tempest] Make share size configurable in scenario tests * [Tempest] Port remote\_client into Manila * hacking: Use uuidutils to generate UUID * devref/driver\_requirements: add cephfs protocol * Add Rally CI jobs with Manila scenarios * Fix spelling mistakes in cover.sh * Updated from global requirements * Check ceph backend connection on driver setup * Move EMC drivers to dell\_emc folder * NetApp cDOT controller utilization metrics * Replaces uuid.uuid4 with uuidutils.generate\_uuid() * Remove unused link * [install] Make the rabbitmq configuration simpler * Add testscenario to test-requirements * Fix share manage tempest test cleanup * Updated from global requirements * [Devstack] Create additional custom share types by default * Remove fake CG support from Generic share driver * Correct the order of parameters in assertEqual() * Use cors.set\_defaults instead of cfg.set\_defaults * Fix missing 'migration\_completing' task state * Replace 'assertEqual(None, ...)' with 'assertIsNone(...)' * Compare the encoded tag more accurately for huawei driver * Updated from global requirements * Add support of endpoint\_type and region\_name to clients manila uses * Updated from global requirements * [Tempest] Fix visibility of test\_quotas.py module * Fix a typo * Remove broken modindex link from devref * Clarify language in release notes * Updated from global requirements * Remove warnings for dropped context arguments * NetApp cDOT driver enhanced support logging * Add utility of boolean value parser * Fix concurrency issues in container driver * Updated from global requirements * Remove unused functions in utils * Update .coveragerc after the removal of openstack directory * [Grenade] Update devstack and pre\_test\_hook * Fix a typo in parameters.yaml * updated positional argument and output * Fix a typo in api\_version\_request.py * Updated from global requirements * NetApp cDOT driver should not report untenable pools * [api-ref] Refactor Manila snapshot API * [Container] Fix deletion of veths * Updated from global requirements * Enable release notes translation * Updated from global requirements * Avoid Forcing the Translation of Translatable Variables * Fix devstack for ubuntu-xenial * Stop adding ServiceAvailable group option * cephfs\_native: doc fixes * Remove tempest.test usage from manila tests * Fix typo in test\_gpfs.py * Use assert(Not)In/Greater(Equal)/LessEqual/IsNotNone * Updated from global requirements * Use method is\_ipv6\_enabled from oslo.utils * Files with no code must be left completely empty * TrivialFix: Remove default=None when set defaul value in Config * [TrivialFix] Correct file mode settings * [api-ref] Refactor Manila security service API * Remove redundant 'the' * Adjust doc about 
threading * Updated Hitachi NAS Platform Driver documentation * Updated from global requirements * Remove unused methods * Fix huawei driver username/password encoding bug * Use fnmatch from oslo.utils * Updated from global requirements * Fix check for nfsd presence * [api-ref] Refactor Manila availability-zones API * Fix huawei driver cannot delete qos while status is idle * Bring remote and local executors into accord * Add tempest tests for mtu and gateway fields * Make port\_binding\_extension mandatory if host\_id is specified * [api-ref] Refactor Manila quota set API * [api-ref] Remove temporary block in conf.py * Make nfs-kernel-server run on a clean host * Modify use of assertTrue(A in B) * Updated from global requirements * 3PAR driver fails to validate conf share server IPs * Manila install guide: Fix wrong instructions * delete python bytecode including pyo before every test run * Update installation tutorial and api-ref instructions * Update reno for stable/newton * [api-ref] Refactor limits and services API * [api-ref] Refactor manila extension API * [api-ref] Refactor consistency group API 3.0.0 ----- * Add cleanup to create from snap in Manila HNAS driver * [ZFSonLinux] Fix share migration using remote host * Put all imports from manila.i18n in one line * Fix access rules for managed shares in HSP driver * Improve Share Migration tempest tests * Fix allow/deny error message and race in migration * Fix for LV mounting issue in docker containers * Fix flaky Neutron port binding unit tests * Fix useless statements in unit tests * [docs] Update dev docs for ZFSonLinux share driver * [ZFSonLinux] Add test coverage for share migration * NetApp cDOT driver autosupport broken * Fix dedup/compression description in doc * huawei driver default create thin type share * HPE 3PAR: file share support of AD in devref * Updated from global requirements * glusterfs: handle new cli XML format * Add provisioned\_capacity\_gb estimation * Fix typo in response status code * standardize release note page ordering * Fix race condition updating routes * share-size not set to 1 with 'manage\_error' state * Config logABug feature for Manila api-ref * NetApp cDOT: Avoid cleaning up 'invalid' mirrors * [ZFSonLinux] Fix share migration support * Update to tempest 12.2.0 3.0.0.0b3 --------- * Add multi-segment support * Add binding\_profile option for backends * Nexenta: adding share drivers for NexentaStor * Updated from global requirements * Windows SMB: implement 'update\_access' method * Windows SMB: remove redundant operations * [Dummy driver] Add share migration support * [ZFSonLinux] Add share migration support * Add share type change to Share Migration * HPE 3PAR driver pool support * Share migration Newton improvements * Unity: Use job for NFS share creation * Correct reraising of exception * Windows SMB: avoid default read share access * Change assertTrue(isinstance()) by optimal assert * Fix Share Migration improper behavior for drivers * Fix Manila HNAS driver managing a share twice * Fix test bugs for replication CI * Implement replication support in huawei driver * Fix connectivity problem in Scenario job * Updated from global requirements * [CI FIX] Fix 'ip route' matching multiple subnets * Clean imports in code * Clarify grenade failure message * Updated from global requirements * Add documentation for EMC Unity Driver for Manila * Remove enable\_v1\_api and enable\_v2\_api config opts * 3PAR: Add update\_access support * add access\_key to share\_access\_map * Add missing filter 
function in HSP driver * Get ready for os-api-ref sphinx theme change * Fix fallback share migration with empty files * Rename and move HNAS driver * Updated from global requirements * Add neutron driver for binding * Fix sample config generation for cinder, nova and neutron opts * Add Hitachi HSP driver * manila\_tempest\_tests: fix exception messages * Container driver * Tox Upper Constraints - strip out reinstalls for remaining jobs * NetApp cDOT: Apply network MTU to VLAN ports * Fix typo in glusterfs driver comment * [dev-docs] Changed small case letters to capital * Add MTU information in DB and API * In-tree Install Guide * Updated from global requirements * cephfs\_native: enhance update\_access() * TrivialFix: Change LOG.warn to LOG.warning * Fix the broken UT of huawei driver for py34/35 * Add dedupe report in HNAS driver * cephfs\_native: add read-only share support * Updated from global requirements * Refactor GPFS driver for NFS ganesha support * NetApp cDOT driver configurable clone split * NetApp cDOT multi-SVM driver configurable NFS versions * Add support for CIFS shares in HNAS driver * Fix KeyError on err in unit test * Fix concurrent usage of update\_access method for share instances * NetApp cDOT vserver deletion fails if no lifs present * Fix ZFSonLinux driver prerequisites setup * Updated from global requirements * HPE3PAR make share from snapshot writable * Check for usage of same Cephx ID as manila service * Fix share migration test with snapshot support * [Tempest] Fix concurrency in "test\_show\_share\_server" test * [ZFSonLinux] Fix replicated snapshot deletion error * Fix race condition in tempest test * Replaces httplib with requests lib in Quobyte RPC layer * Add EMC Unity Driver for Manila * Add snapshot instances admin APIs * TrivialFix: Fix a wrong order bug in resource\_cleanup() * [ZFSonLinux] Add 'manage snapshot' feature support * Minor optimization and formatting corrections in Quobyte driver * Add retry in VNX driver when DB lock error happened * Remove "host" from driver private data * NetApp: Report hybrid aggregates in share stats * share/access: allow maintenance mode to be triggered * Migrate API reference into tree * Fix devref README and remove Makefile * Add dummy driver * Correct Quobyte driver capacity reporting * Updated from global requirements * Huawei: Support reporting disk type of pool * Documentation changes for thin/thick provisioning * Check 'thin\_provisioning' in extra specs * HPE3PAR: Fix filestore quota decrement * HPE3PAR: Handle exceptions on deleted shares * Fix pep8 job * Add reno notes about http\_proxy\_to\_wsgi middleware * Add DriverFilter and GoodnessWeigher to manila * Use http\_proxy\_to\_wsgi instead of ssl middleware * Use constraints for coverage job * Do not put real hostname and IP address to manila config sample * Add tox job for db revision creation * Add interface port configuration in EMC VNX driver 3.0.0.0b2 --------- * Huawei: Add share sectorsize config in Huawei driver * Huawei driver support access of all IPs * update min tox version to 2.0 * Updated from global requirements * [Tempest] Handle errored shares correctly using recreation logic * [Tempest] Create heavy scenario resources in parallel * Update tempest to newer commit version * Add share manage/unmanage of Oracle ZFSSA driver * Delete duplicated broken tempest test * Add lvm driver options to sample config * Updated from global requirements * [ZFSonLinux] Add 'manage share' feature support * Fix snapshot manage Tempest test * Manage / unmanage 
snapshot in NetApp cDOT drivers * Add gateway in network\_info and share network API * Fixed a spelling mistake of "seperate" to "separate" * Add share\_size config option * Config: no need to set default=None * Use upper-constraints in tox installs * Updated from global requirements * Update quota usages correctly in manage share operation * Change user\_id and project\_id to 255 length * Add user\_id and project\_id to snapshot APIs * [Tempest] Fix negative replication test * [Tempest] Remove noqa filters * Updated from global requirements * Cleanup unused DB APIs * glusterfs: Implement update\_access() method * ganesha: implement update\_access * Huawei: Add manage share snapshot in Huawei driver * Delete VLAN on delete\_vserver in Netapp cmode * Use is\_valid\_ipv4 and is\_valid\_ipv6 from oslo.utils * Updated from global requirements * Do not supply logging arguments as tuple * cephfs\_native: Fix client eviction * Pass context down to ViewBuilder method * Add more dir exceptions to pep8 tox job * [Tempest] Bump tempest version * [Tempest] Stop using deprecated Tempest opts * [Tempest] Add valuable tags to tests * [Tempest] HotFix for broken CI jobs * Updated from global requirements 3.0.0.0b1 --------- * Fix issue with testtool testrunner * HPE3PAR driver doesn't decrease fstore capacity * Updated from global requirements * Fix badly formatted release note * Use oslo IntOpt function instead of explicit check * Document instructions for documentation * Adding info to use venv of tox for reno * Polish hook decorator * Updated from global requirements * Updated from global requirements * Fix HDS HNAS errors caused by incorrect IDs * Huawei: Fix exception in update\_access not found * Hacking check for str in exception breaks in py34 * Add hacking rule for assertEqual(None, \*) * Squash E042 and E043 bashate warnings * Removed the invalid link from Manila Dev Guide * Use assertTrue rather than assertEqual(True, ...) 
* Replace assertEqual(None, \*) with assertIsNone in tests * Updated from global requirements * Remove retry logic from manage API * Fix tox errors and warnings in the devref * [Doc] Update quick start guide to Mitaka release * Updated from global requirements * HDS\_HNAS: Fix improper error message * HDS\_HNAS: Remove unused parameter * Fix context warning spam of scheduler and share logs * Updated from global requirements * Fix docs for REST API history and Scheduler * Fix Manila RequestContext.to\_dict() AttributeError * Add wraps function to decorator * Fix context decorator usage in DB API * Add hint how to configure fake\_driver in manila-share * Test: make enforce\_type=True in CONF.set\_override * Remove NetAppCmodeClient.delete\_network\_interface * Updated from global requirements * Add user\_id echo in manila show/create/manage API * Bump Tempest version * Remove deprecated manila RequestBodySizeLimiter * Fixed references for scheduler drivers in doc * Fix share server info in CGs created from CGs * Skip over quota tests if quota tests disabled * Delete Snapshot: status wrongly set when busy * Updated from global requirements * Fix HNAS error with unconfined filesystems * Developer Reference: Adopt the openstackdocstheme * Fix IPv6 standalone network plugin test * cephfs\_native: doc fixes * Added docs for commit message tags * Fix docstring for policy.enforce method * Updated from global requirements * Fix tempest.conf generation * [Trivial] replace logging with oslo.log * Add Grenade support to Manila * NetApp: DR look up config via host name * [Devstack] Set proper driver mode for ZFSonLinux driver * use thread safe fnmatch * Updated from global requirements * Make devstack functions support grenade * Fix microversion usage in share manage functional tests * Handle manage/unmanage for replicated shares * Fix HNAS driver exception messages * Updated from global requirements * Add doc for Share Replication * Fix Share status when driver migrates * Fix doc build if git is absent * Remove unused tenant\_id variable * [Fix CI] Bump Tempest version * Detect addition of executable files * Updated from global requirements * Add release notes usage and documentation * Deprecate manila-all command * update hacking checks for manila * Fix creation of Neutron network in Devstack * Fix manage tempest test validation * Update HPE 3PAR devref docs * NetApp cDOT driver should honor reserved percentage * Remove Devstack workaround for Neutron * Remove unused logging import and LOG global * cephfs\_native: Change backend snapshot dir's name * Remove openstack-common.conf * update dev env doc for Fedora releases * Fix force-delete on snapshot resource * Increase Cinder oversubscription ratio in CI * Use install\_package when preparing LVM driver installation * Fix Manage API synchronous call * Generic driver: ignore VolumeNotFound in deleting * Removing some redundant words * Add common capabilities matrix to devref * Add caution to test-requirements * Increase logging for driver initialization * Capitalize global var for clients * Fix typos * Update ZFSonLinux share driver docs * Update reno for stable/mitaka 2.0.0 ----- * Fix call of clients in post\_test\_hook.sh * Add tests to ensure snapshots across replicas * NetApp cDOT: Handle replicated snapshots * Data Replication: Ensure Snapshots across replicas * Fix update\_access concurrency issue * Fix manage API ignoring type extra specs * Make ZFSonLinux driver handle snapshots of replicated shares properly * Fix keystone v3 issues for all 
clients * Fix for incorrect LVMMixin exception message * NetApp cDOT: Fix status updates for replicas * NetApp cDOT: Raise ShareResourceNotFound in update\_access * Add hacking check to ensure not to use xrange() * Fix generic and LVM driver access rules for CIDRs * Fix report of ZFSonLinux driver capabilities * Fix the scheduler choose a disable share service * Fix typos * Fix error logged for wrong HPE 3par client * 3PAR remove file tree on delete when using nested shares * HDS-HNAS: Fix exception in update\_access not found * Revert "LXC/LXD driver" * Fix Hitachi HNAS driver version * service instance: also recognize instance name * Fix update of access rules in ZFSonLinux driver * Check share-network in 'share create' API * glusterfs volume layout: take care of deletion of DOA shares * Fix delete when share not found in update\_access * Remove default values for update\_access() * NetApp cDOT driver should not split clones * Fix handling of share server details after error * HDS-HNAS: fixed exception when export not found * Fix lock decorator usage for LVM and Generic drivers * Fix HNAS snapshot creation on deleted shares * Move iso8601 from requirements to test-requirements * Fix typos * glusterfs.common: GlusterManager.gluster\_call error report fix * glusterfs.GlusterNFSVolHelper: remove \_\_init\_\_ * Add tempest tests for Share Replication * register the config generator default hook with the right name * Windows driver: fix share access actions * Collapse common os\_region\_name option * Disallow scheduling multiple replicas on a given pool * update quota of origin user on share extend/shrink * Update quota of proper user on resource delete * Fix Share Migration access rule mapping * Fix unstable DB migration tests * Fix Share Migration KeyError on dict.pop * NetApp cDOT APIs may get too little data * HNAS: Enable no\_root\_squash option when allowing access to a share * Fix HNAS driver crash with unmounted filesystems * Fix compatibility with Tempest * Set proper image name for tempest * Remove nsenter dependency * Fix ZFSonLinux driver share replica SSHing * Fix ZFSonLinux access rules for CIDRs * Fix HNAS driver thin\_provisioning support * Fix pylxd hard dependencies * Squash consequent DB calls in create\_share\_instance * Fix slow unit test * Run ZfsOnLinux gate tests with SSH enabled * Fix status update for replicas * Set TCP keepalive options * Fix manila devstack plugin for keystone v3 usage * Add /usr/local/{sbin,bin} to rootwrap exec\_dirs * Updated from global requirements * Use official location for service image * Allow devstack plugin to work without Cinder * Download service image only when needed * glusterManager instantiation regexp validation 2.0.0.0b3 --------- * Moved CORS middleware configuration into oslo-config-generator * Move Share Migration code to Data Service * Remove unintended exposure of private attribute * Add share driver for Tegile IntelliFlash Arrays * Update tempest commit and switch to tempest.lib * LXC/LXD driver * Update export location retrieval APIs * Huawei driver improve support of StandaloneNetworkPlugin * Add Ceph Native driver * Introduced Data Service * Implement admin network in generic driver * NetApp: Add Replication support in cDOT * Fix NFS helper root squashing in RW access level * Add ZFSonLinux share driver * glusterfs.common: move the numreduct function to toplevel * glusterfs\_native: relocate module under glusterfs * Huawei driver code review * Add QoS description in Huawei * glusterfs/ganesha: add symbolic access-id to 
export location * Add share resize support to Oracle ZFSSA driver * Implement update\_access() method in huawei driver * Update Huawei driver doc for Mitaka * Remove unused pngmath Sphinx extension * Implement update\_access() in generic driver + LVM * Add doc for export location metadata * gluster\*: clean up volume option querying * Admin networks in NetApp cDOT multi-SVM driver * Support export location metadata in NetApp cDOT drivers * Change sudo to run\_as\_root in LVM driver * Huawei driver: change CIFS rw to full control * Updated from global requirements * Fix NetApp cDOT driver update\_access negative test * Define context.roles with base class * Subclass context from oslo\_context base class * Add Replication admin APIs and driver i/f changes * glusterfs/common: don't suppress vol set errors * Improve exception msg when attaching/detaching volumes * Use assertIsNone instead of assertEqual(None, \*\*\*) * Scheduler enhancements for Share Replication * Fix typo in comment message * Remove aggressive assert from share server test * Fix scenario tests * EMC Isilon Driver Support For CIFS Read-Only Share * Add update\_access() interface to Quobyte driver * Check for device node availability before mkfs * Replace TENANT => PROJECT for manila plugin * Validate qos during share creation * Fix doc string in driver interface * Fix neutron port concurrency in generic driver * Add additional documentation on extra spec operations * Implement update\_access() method in Hitachi HNAS driver * Fix share migration tests in gate * Update help text for some service instance config opts * Three ways to set Thin/Thick Type in Huawei driver * Squash E006 bashate warnings * Implement update\_access() in NetApp cDOT drivers * Add tox fast8 option * Use ostestr to run unit test * Make consistency group timeout exception message more robust * Manage and unmanage snapshot * Stop proxying share\_server\_id through share in share.manager * Remove deprecated share attribute usage from manila.share.api * Get host from share['instance'] in share RPC API * Cleanup deprecation warnings from using share proxy properties in API * Add possibility to skip quota tests in Tempest * Remove default=None from config options * Add space to message in manila\_tempest\_tests/tests/api/test\_shares.py * Fix rpcapi identifiers for better readability * Add admin network for DHSS=True share drivers * Allow DHSS=False tests to override Tempest concurrency * Remove \`None\` as a redundant argument to dict.get() * gluster\*: add proper getter/setters for volume options * Unify usage of project name in doc to 'manila' * Removed ignored checks from tox.ini and fixed pep8 issues * Updated from global requirements * Fix tempest test for export locations API * Support devstack install without nova * EMC Isilon Driver Support For NFS Read-Only Share * replace string format arguments with function parameters * Converted MultiStrOpt to ListOpt * Fix Hitachi HNAS Driver default helper * Use existing "insecure" options when creating nova/cinder clients * Fix Share Replica details in the API * Share Replication API and Scheduler Support * Fixed Hitachi HNAS slow test * Replace 'stack' with $STACK\_USER in devstack plugin * Replace deprecated oslo\_messaging \_impl\_messaging * Avoid KeyError on instance\_id in ensure\_service\_instance * Hitachi HNAS driver share shrink * LVM driver: Pass '--units g' to vgs invocation * Updated from global requirements * Fix scheduling with instance properties * Add update\_access() method to driver 
interface * Update the home page * Fix issue in hacking with underscore imports * Added Keystone and RequestID headers to CORS middleware * Ext. exception handling for httplib and socket errors in Quobyte driver * Huawei: Create share from snapshot support in Huawei driver * Don't convert share object to dict on create * Fix Cinder's NoValidHostFound errors * Remove outdated pot files * Fix Devstack and Manila-ui interaction * Fix devstack function call recreate db * tempest: wait for deletion of cert rule * Bump tempest version * Fix params order in assertEqual * Removed unnecessary string conversions on Hitachi HNAS Driver * Add feature support information of Oracle ZFSSA Manila driver * extra-specs should work with string True/False * Fix db shim layer mismatches with implementation * TrivialFix: Remove deprecated option 'DEFAULT/verbose' * isoformat instead of deprecated timeutils.isotime 2.0.0.0b2 --------- * Return appropriate data on share create * Hitachi HNAS driver refactoring * Trivial Fix: fix missing import * Remove unused server\_get() method * QoS support for Huawei Driver * Add LVM driver * Fix release of resources created by Tempest * Fix access rules tempest v2 client * Huawei: Ensure that share is exported * Using dict.items() is better than six.iteritems(dict) * Updated from global requirements * gluster\*: refactor gluster\_call * Fix pep8 failure * Fix Mutable default argument * Fix devstack in non-neutron environments * Fix usage of standlone\_network\_plugin * Implement export location metadata feature * Doc: Remove prerequisite: Ubuntu * Hide snapshots with no instances from listing * QoS support for shares * Huawei: Add share server support * Isilon Driver: Update Share Backends Feature Doc * Clean up removed hacking rule from [flake8] ignore lists * Fix Manila tempest tests * Adds extend\_share for Quobyte shares * Update NetApp driver support matrix line * Fix response code for various NotFound exceptions * Huawei driver report pool capabilities [True, False] * Fix 'extend' API for 2.7+ microversions * Replace assertEqual(None, \*) with assertIsNone in tests * Delete Share Instance of unmanaged share * Add debug testenv in tox * A tempest test in services API using unsafe assert * Cannot return a value from \_\_init\_\_ * Make Manila UI be installed after Horizon * Use new approach for setting up CI jobs * Add doc for share driver hooks * Add more documentation to share/driver * Fix grammatical mistake, Changed character from "an" to "a" * Huawei: Add manage share with share type in Huawei driver * Refactor share metadata tests to use DB * Replace deprecated [logger/LOG].warn with warning * Add snap reserve config option to NetApp cDOT driver * Updated from global requirements * Fix tempest case "test\_delete\_ss\_from\_sn\_used\_by\_share\_server" * Fix CI Tempest jobs * glusterfs/vol layout: remove manila-created vols upon delete\_share * Use constants instead of literals in Huawei Driver * Fix unit test of ShareSnapshotNotFound * Fix handling of Novaclient exceptions * Drop MANIFEST.in - it's not needed with PBR * Replace deprecated library function os.popen() with subprocess * Change assertTrue(isinstance()) by optimal assert * EMC Isilon Driver Doc Update for Extend Share * [docs] Fix table elements view on page with list of supported features * Trivial: Remove unused logging import * Set timeout for parmiko ssh connection * Fix wrong flake8 exception and pep8 violations * Remove unused oslo-incubator \_i18n.py from Manila * Deprecated tox -downloadcache 
option removed * Keep py3.X compatibility for urllib * EMC VNX: Fix the interface garbage in VNX backend * EMC Isilon Driver Support For Extend Share * HPE3PAR finds CIFS share with either prefix * Improve tempest tests for shares listing APIs * Updated from global requirements * Support standard Manila capability flags in NetApp cDOT driver * Mock out service availability check in unit test * Capability lists in Manila scheduler * HPE3PAR support for share extend and shrink * Pop off user/tenant kwargs in RequestContext init * Move the config environment variables into devstack/settings file * glusterfs: document Gluster NFS misbehavior * Change instance service default path for private key to None * Use isoformat() instead of timeutils.strtime() * EMC VNX: Add multi-pools support * Add space to message in manila/consistency\_group/api.py * Remove duplicate keys from dictionary * Fix Tempest microversion comparison approach * Prevent removal of share server used by CG * HPE3PAR support for access-level (ro,rw) * Performance: leverage dict comprehension in PEP-0274 * Updated from global requirements * Document correction in quick\_start.rst * glusterfs\_native: fix parsing of the dynamic-auth option * Fix wrong check message * NetApp cDOT driver should support read-only CIFS shares * Do not allow to modify access for public share type * EMC VNX: Add share extend support * Allow to set share visibility using "manage" API * Remove version per M-1 release instructions * Updated from global requirements * [CI] Speed up Tempest jobs * Avoid service\_instance neutron port clash in HA 2.0.0.0b1 --------- * EMC: Fix bugs when domain controller is not available * Put py34 first in the env order of tox * Move API module 'share\_instances' under v2 dir * Change manila\_tempest\_tests to use credentials\_factory * timeutils.total\_seconds() is deprecated * Reorganize scheduler and merge code from Oslo incubator * glusterfs: add missing i18n import * Fix Share status precedence based on instances * doc: document the non-standard export semantics of Ganesha * Liberty doc updates for GlusterFS drivers * Add new URLs for APIs ported from extensions * Updated from global requirements * NetApp cDOT multi-SVM driver can't handle duplicate addresses * Remove mention of isilon\_share\_root\_dir * Add share-networks validation * Simplify ping usage for service VM check in CI * Improve Tempest tests for consistency groups * Add sleep to CI hooks to avoid races * add Red Hat GlusterFS drivers feature support info * Add reno for release notes management * Delete python bytecode before every test run * Updated from global requirements * Add support of 'network\_type' to standalone network plugin * Fix import of devstack functions for common CI script * Last sync to Manila from oslo-incubator * glusterfs/volume layout: indicate volume usage on volumes themselves * glusterfs/volume layout: fix incorrect usage of export\_location * Refactor authorize() method in wsgi.py * Implements ensure\_share() in Quobyte driver * Prevent Share operations during share migration * Fix typo on quota limit error message * Refactor HP 3PAR share driver to now be HPE * OpenStack typo * Added driver minimum requirements and features doc * Remove httplib2 useless requirement * Added CONTRIBUTING file in .rst format * HPE3PAR create share from snapshot fails * Updated from global requirements * EMC VNX Manila Driver Refactoring * Updated from global requirements * Port share type extensions to core API * Port admin actions extension to 
core API * Use oslo\_config new type PortOpt for port options * Added CORS support to Manila * Split common logic of CI hooks to separate file * Port share actions to core API * Port quotas to core API * Port services to core API * remove default=None for config options * Add mount automation example based on Zaqar * Make setup.py install Manila Tempest plugin * Sync Manila Tempest plugin with latest Tempest * Port manage/unmanage extensions to core API * Updated from global requirements * Rephrase comments for Share create API * Use assertTrue/False instead of assertEqual(T/F) * Fix no-share-servers CI job * Use default Keystone API version in Devstack * Updated from global requirements * Port availability zones to core API * Generic driver: wait for common server during setup * Port used limits to core API * Updated from global requirements * Add IBM GPFS Manila driver * Fix list-availability-zones API for PostgreSQL * Fix share type model scalability for get request 1.0.0 ----- * Fix usage of dependencies * Fix usage of dependencies * Use 'False' as default value for "compression" common capability * Stop using deprecated tempest options * Make share service understand driver init failure * Fix broken unit tests * Enable extend\_share in HDFS driver * Verify common server in Generic driver on startup * Updated from global requirements * Improve Manila HDS HNAS Driver Manual * Fix order of arguments in assertEqual * Fix order of arguments in assertEqual * Fix order of arguments in assertEqual * Update feature support matrix for Windows SMB 1.0.0.0rc2 ---------- * Share manager: catch exception raised by driver's setup() * Fix display of availability-zone for manila-manage command * glusterfs\_native: use dynamic-auth option if available * Fix setting of "snapshot\_support" extra spec for tempest * Fix deletion of error state access rules * Fix response data for API access-allow * Fix display of availability-zone for manila-manage command * glusterfs: check nfs.export-volumes with Gluster NFS + vol layout * glusterfs: manage nfs.rpc-auth-allow not being set * glusterfs vol layout: start volume cloned from snapshot * glusterfs\_native: use dynamic-auth option if available * NetApp cDOT driver isn't reentrant * Can't create shares on drivers that don't support snapshots * Revert netapp\_lib dependency in NetApp cDOT Manila drivers * Set defaultbranch to stable/liberty in .gitreview * Feature support matrix update for HP 3PAR * Fix \`test\_trans\_add\` for Python 3.4.3 * Remove misleading snapshot methods from Quobyte driver * Fix response data for API access-allow * Improve logging of calls in ShareManager * Use random IPs in security service tests * EMC Isilon Manila Driver Feature Support * Fix deletion of error state access rules * Fix order of arguments in assertEqual * glusterfs vol layout: start volume cloned from snapshot * Fix order of arguments in assertEqual * NetApp cDOT driver isn't reentrant * Fix mentioned DEFAULT\_API\_VERSION in doc * Revert netapp\_lib dependency in NetApp cDOT Manila drivers * Fix \`test\_trans\_add\` for Python 3.4.3 * Adds Quobyte share backend feature support mapping data * Remove language about future features from driver doc * Remove LegacyFormatter from logging\_sample.conf * Fix setting of "snapshot\_support" extra spec for tempest * Fix some spelling typo in manual and error message * glusterfs: check nfs.export-volumes with Gluster NFS + vol layout * glusterfs: manage nfs.rpc-auth-allow not being set * Can't create shares on drivers that don't 
support snapshots * Add Huawei driver details in doc * Add Hitachi HNAS driver documentation * Open Mitaka development 1.0.0.0rc1 ---------- * glusterfs\*: fix ssh credential options * Make Quobyte shares actually read-only when requested * Fixes a Quobyte backend call issue with a wrong field name * Fix error response when denying snapshot creation * Fix 'cover' tox job * glusterfs: fix gluster-nfs export for volume mapped layout * Updated from global requirements * Fix experimental=True for view in microversion 2.5 * glusterfs\_native: Hardwire Manila Host CN pattern * Fix HDS HNAS manage incorrect share size * glusterfs\*: amend export location * Fix HDS HNAS Create from snapshot ignoring Size * Fix pool\_list filter tests to match pools exactly * Non-admin user can perform 'extra-specs-list' * Fix improper handling of extending error * Update feature support mapping doc for NetApp cDOT * Remove IBM GPFS driver due to lack of CI * Add 'snapshot\_support' attr to share details * Fix get\_stats to return real used space in HNAS * Add new features description in Huawei doc * Fix API version history in Huawei driver * Fix task\_state field shown on API < 2.5 * glusterfs: Fix use of ShareSnapshotInstance object * NetApp cDOT driver should prefer aggregate-local LIFs * Fix HDS HNAS snapshot creation tracking * Return share\_type UUID instead of name in Share API * doc: turn ascii art tables into proper reST grid tables * Make scenario tests able to run with DHSS=False * Fix missing value types for log message * glusterfs\_native: Fix typo for protocol compatibility * Fix typo in test\_hook * Fix Share Migration tempest tests * Remove support for 'latest' microversion * Adds retry function to HNAS driver * Corrects capabilities returned by Quobyte Manila driver * Fix create snapshot API in Huawei driver * Check the snapshot directory before copy it * Remove HDS SOP driver due to lack of CI * Missing check in ShareManager::manage\_existing() * Add v2 Manila API path as base for microversions * Huawei driver: fix reports reduplicate pools * Enhance base driver checking if a method was implemented * Updated from global requirements * Allow service image download to be skipped * Use 'False' as default value for "dedupe" common capability * Capacity filter should check free space if total space is unknown * Fix usage of novaclient * NetApp cDOT driver with vserver creds can't create shares * Fix unstable unit test 'test\_get\_all\_host\_states\_share' * Fix concurrency issue in tempest test * Fix description in Huawei driver * Replaces xrange() with range() for py 2/3 compatibility * Updated from global requirements * Consistency groups in NetApp cDOT drivers * Fix keypair creation * Add functional tests for Manila consistency groups * Place tempest microversions test module in proper place * Consistency Group Support for the Generic Driver * Add Share Migration tempest functional tests * Share Migration support in generic driver * Add Share Migration feature * glusterfs: directory mapped share layout * glusterfs: volume mapped share layout * glusterfs/layout: add layout base classes * Add Consistency Groups API * Scheduler changes for consistency groups * Add DB changes for consistency-groups * Use Tempest plugin interface * Make devstack plugin independent from default Identity API version * glusterfs-native: cut back on redundancy * glusterfs/common: refactor GlusterManager * glusterfs\*: factor out common parts * Add share hooks * Add possibility to setup password for generic driver * Use 
devstack functions for registering Manila * devstack plug-in to reflect new manila-ui plug-in * HP 3PAR extra-spec prefix needs to be hp3par * Fix the typo "version" * Updated from global requirements 1.0.0.0b3 --------- * Add attributes 'name' and 'share\_name' to ShareSnapshotInstance * Fix data copying issue in DB migration 1f0bd302c1a6 * HP 3PAR driver handles shares servers * Updated from global requirements * Fix failing Quobyte unit test * Remove instances of "infinite" capacity from Manila * Replace thin/thick capabilities with thin\_provisioning * Add Share instances Admin API * Add Windows SMB share driver * Remove ununsed dependency: discover * Implement snapshot tracking in HDS HNAS driver * Use Share Instance ID in 'name' property * Ignore git backup merge files * Tempest: wrong assertion on the number of shares created * Ignore unavailable volumes when deleting a share * Updated from global requirements * New Manila HDS HNAS Driver * Tempest: wait for access rule to be deleted * Fix Tempest tests targeting user based access rules * glusterfs\_native: Add create share from snapshot * Generic driver:Create Cinder volume in correct AZ * Reduce dependency to tempest: exceptions * Add possibility to filter back ends by snapshot support * Add tempest tests for "cert" based access type * Clean up admin\_actions API extension unit tests * Use service availability\_zone for Share Server VM * Add availability zones support * Add methods for share instances in Share API * Add compression in common capabilities doc * HP 3PAR add more info to the share comment * Add tempest tests for REST API microversions * Huawei driver support smartcache and smartpartition * Manila experimental REST APIs * Fix compatibility with sqlalchemy 0.9.7 * Updated from global requirements * Fix incorrect use of snapshot instances * HP 3PAR reports capabilities * Lazy Load Services * Replace assertEqual(None, \*) with assertIsNone in tests * Updated from global requirements * Fix incorrect variable name in some exception class * Update NetApp cDOT Manila drivers to use netapp\_lib * Add manage/unmanage support to NetApp cDOT driver * Service Instance: Add instance reboot method * Add WinRM helper * Common capabilities documentation * Fix Neutron config setting in pre\_test\_hook * Add share instances and snapshot instances * Fix extend share API in Huawei driver * Huawei driver support dedup, compression, thin and thick * Fix the log level in scheduler manage * Enable Tempest tests for glusterfs/hdfs protocols * Support shrink\_share in NetApp cDOT drivers * Fix sample config file generation * Change huawei driver send REST command serially * Support extend\_share in NetApp cDOT drivers * Fix for Isilon driver failing to connect * Updated from global requirements * Fix bug to locate hdfs command in HDFS native driver * Fix AttributeError without share type provided * Implement Manila REST API microversions * Add retry logic when delete a NFS share in VNX * Cleanup shares created by Tempest * Add py34 to test environment to tox.ini * Allow Tempest to skip snapshot tests * Add retries for deadlock-vulnerable DB methods * Adding extend share support in IBM GPFS Driver * Make QuobyteHttpsConnectionWithCaVerification py3 compatible * Add SSL middleware to fix incorrect version host\_url * Updated from global requirements * Fix HTTP headers case for API unit tests * Fix bug to run command as root in HDFS driver * Fix typos in neutron\_network\_plugin.py * Remove incorrect URLs from jenkins.rst * Remove ordering 
attempts of 'unorderable types' * Fix 'hacking' unit tests for py3 compatibility * Skip unit tests for SSL + py3 * Fix string/binary conversions for py34 compatibility * Make 'utils.monkey\_patch' py3 compatible * Decouple some of the Service Instance logic * Wrap iterators and 'dict\_items' for py34 compatibitity * Update Documents to use HDFS Driver * Fix two typos on documentation and one typo on CLI help * Stop using deprecated contextlib.nested * Fix imports for py34 compatibility * Fix exceptions handling for py34 compatibility * Rename from il8n.rst to i18n.rst * Remove copyright from empty file * Fix HP3PAR extra-specs scoping prefix bug * Updated from global requirements * Support manage\_existing in Huawei driver * Fix HP3PAR SMB extra-specs for ABE and CA * Generic: add service instance mgr set up method * Fix Generic driver share extend * Replace py2 xrange with six.moves.range * Fix integer/float conversions for py34 compatibility * Fix dictionary initialization for Python 3 compatibility * Replace (int, long) with six.integer\_types * Fix list creation * Replace dict.iteritems() with six.iteritems() * Add doc share features mapping * Replace 'types.StringTypes' with 'six.string\_types' * Replace '\_\_metaclass\_\_' with '@six.add\_metaclass' * Fix ZFSSA driver for py34 compatibility * Listen on :: instead of 0.0.0.0 by default 1.0.0.0b2 --------- * Fix slow unit tests * Remove Cinder leftover unit tests * Eventlet green threads not released back to pool * Add client\_socket\_timeout option to manila.wsgi.Server * Catch error\_deleting state for more resources than just shares * Updated from global requirements * Make coverage tox job fail when test coverage was reduced * Add test coverage for periodic tasks * Change \_LE to \_LW (at manila/share/manager.py) * Fix 'extend\_share' in generic driver * Fix unit tests for quobyte * Support shrink\_share in Huawei driver * GlusterFS: fix retrieval of management address of GlusterFS volumes * Explicit backend connect call in Quobyte RPCs * Enable multi-process for API service * Updated from global requirements * Make config opt 'enabled\_share\_protocols' verification case insensitive * glusterfs\_native: prefix GlusterFS snap names with "manila-" * glusterfs\_native: delete\_snapshot(): find out real GlusterFS snap name * glusterfs\_native: fix delete share * Reuse 'periodic\_task' from oslo\_service * Implement shrink\_share() method in Generic driver * doc: fix typo s/virutalenv/virtualenv/ * Cleanup DB API unit tests * Add negative tests for admin-only API * Updated from global requirements * HP 3PAR uses scoped extra-specs to influence share creation options * Retry \_unmount\_device in generic driver * Add 'retry' wrapper to manila/utils.py * Huawei driver support storage pools * Updated from global requirements * Modify confusing name in Huawei driver * Use all types of migrations in devstack installation * Close DB migration sessions explicitly for compatibility with PyMySQL * Delete redundant period in ManilaException messages * Use soft\_delete() methods in DB api * Use uuidutils to generate id's in DB api * Add license header to migrations template * Remove models usage from migrations * Huawei manila driver support multi RestURLs * EMC VNX: Fix the total capacity for dynamic Pool * Updated from global requirements * Updated from global requirements * Add access-level support in VNX Manila driver * Enable Manila multi-SVM driver on NetApp cDOT 8.3 * Support for oversubscription in thin provisioning * Fix for 
SchedulerStatsAdminTest fails on timestamp * Print devstack command traces before executing command * Fix unit tests for compatibility with new mock==1.1.0 * Change "volume" to "share" in filter and weigher * Updated from global requirements * Remove unneeded OS\_TEST\_DBAPI\_ADMIN\_CONNECTION * Remove duplicated options in manila/opts.py * More Manila cDOT qualified specs * Add PoolWeigher for Manila scheduler * Remove unused manila/openstack/common/eventlet\_backdoor.py * Updated from global requirements 1.0.0.0b1 --------- * Use loopingcall from oslo.service * Updated from global requirements * Use new manila-service-image with public-key auth * Allow drivers to ask for additional share\_servers * HP 3PAR driver config has unused username/password * Huawei manila driver support Read-Only share * Override opportunistic database tests to PyMySQL * Support share-server-to-pool mapping in NetApp cDOT driver * Remove unused files from oslo-incubator * Update version for Liberty 1.0.0a0 ------- * Support extend\_share in Huawei driver * Fix incompatiblity issue in VNX manila driver * Updated from global requirements * Updated from global requirements * Reduce amount of tempest threads for no-share-servers jobs * Add retry on volume attach error in Generic driver * HP 3PAR Add version checking and logging * Bump supported tempest version * Share\_server-pool mapping * Replace it.next() with next(it) for py3 compat * Fix tempest ShareUserRules\* tests * Updated from global requirements * Stop using deprecated 'oslo' namespace * Use oslo.utils to get host IP address * Remove deprecated WritableLogger * Make required function arguments explicit * Remove unused contrib/ci files * Fix docstrings in tempest plugin * Updated from global requirements * Add share shrink API * Implement tempest tests for share extend API * Implement extend\_share() method in Generic driver * Huawei manila driver code refactoring * Transform share and share servers statuses to lowercase * Updated from global requirements * Fix policy check for API 'security service update' * Remove unused attr status from models * Drop incubating theme from docs * Make devstack install manila-ui if horizon is enabled * glusterfs: Edit doc and comments * Simplify generic driver with private data storage API * Provide private data storage API for drivers * Remove usage of utils.test\_utils * Remove ServiceClient from share\_client * Switch from MySQL-python to PyMySQL * Add share extend API * Export custom Share model properties with \_extra\_keys * Release Neutron ports after share server deletion using generic driver * Make generic driver use only ipv4 addresses from service instances * Fix share-server resources cleanup in generic driver * ganesha: Add doc * Update Quickstart guide * NetApp cDOT driver fails Tempest cleanup on clone workflows * Updated from global requirements * Add doc for network plugins * Fix 'AllocType' read failure in Huawei driver * Sync tempest plugin with latest tempest * Updated from global requirements * Improve ShareServer DB model * Updated from global requirements * Add multi vm scenario test * Imported Translations from Transifex * Drop use of 'oslo' namespace package * Updated from global requirements * EMC: Remove unnecessary parameter emc\_share\_driver * Add doc with basic deployment steps * Move to the oslo.middleware library * Clean up redundant code and nits from EMC VNX driver * Remove unused oslo-incubator modules * EMC VNX Manila Driver Feature Support * Allow overriding the manila test regex * 
Updated from global requirements 2015.1.0 -------- * NetApp cDOT driver clones NFS export policy * Add config\_group\_name for NeutronNetworkHelper * Remove ping check from basic scenario test * Sync contrib/tempest to newer state * Fix for the deletion of an error share server * NetApp cDOT driver clones NFS export policy * Sync oslo-incubator code * EMC VNX Driver: Fix typo issues * Remove passing DB reference to drivers in Share Manager * Use oslo\_policy lib instead of oslo-incubator code * Use oslo\_log instead of oslo-incubator code * Use lib lxml for handling of XML request * Updated from global requirements * Remove direct DB calls from glusterfs\_native driver * Release Import of Translations from Transifex * Remove maniladir() and debug() function from utils * Use identity\_uri for keystone\_authtoken in devstack * Switch to new style policy for test policy * Add mount/umount in scenario tests * update .gitreview for stable/kilo * Update doc-strings for snapshot methods in Share Driver * Use openstackclient in devstack plugin * Remove direct DB usage from NetApp driver * Move response code verification to share client * Use entry\_points for manila scripts * Switch to new style policy language 2015.1.0rc1 ----------- * Remove Limited XML API Support from Manila * Prevent hanging share server in 'creating' state * More flexible matching in SSL error test * Imported Translations from Transifex * Mock out base share driver \_\_init\_\_ in EMC driver * Add object caching in manila REST API requests * glusterfs\_native: Fix Gluster command call * glusterfs, glusterfs\_native: perform version checks * Open Liberty development * Add Glossary with basic Manila terms * Restrict access only to vm ip * NetApp cDOT driver is too strict in delete workflows * Adding configuration instructions in huawei\_nas\_driver.rst * Update openstack-common reference in openstack/common/README * Prevent share server creation with unsupported network types with cDOT * Fix log/error message formatting * Updated from global requirements * Add segmentation ID checks for different segmentation types * glusterfs\_native: make {allow,deny}\_access non-destructive * glusterfs\_native: negotiate volumes with glusterd * NetApp cDOT driver uses deprecated APIs for NFS exports * Automatic cleanup of share\_servers * Fix fields 'deleted' in various DB models for PostgreSQL compatibility * Add tempest coverage for share type access operations * Enable developers to see pylint output * Allow overwriting some Manila tempest settings in CI jobs * Set share-type on share created from snapshot * cDOT multi-SVM driver may choose unsuitable physical port for LIFs * cDOT driver should split clone from snapshot after creation * Replace SQL code for ORM analog in DB migration scripts * Delete skipped tempest tests that won't be enabled * NetApp cDOT drivers should not start without aggregates * IBM GPFS Manila Driver Docs - update * Switch to v2 version of novaclient * Backslashify CIFS share export paths for Generic * NetApp cDOT multi-SVM driver should work with non-VLAN networks * NetApp cDOT multi-SVM driver should not start with cDOT 8.3 * Fix CIFS export format in EMC VNX driver * Forbid unmanage operation for shares with snapshots * Fix deletion of export locations * Add initial scenario test for Manila * Fix setting of share name and description with manage API * HP 3PAR driver documentation * Fix setting of extra specs for share types * Huawei NAS driver returns CIFS export locations in wrong format * IBM GPFS Manila 
Driver Docs * Fix common misspellings * Add share state verification for API 'unmanage' * Updated from global requirements * Sync tempest plugin with latest tempest * Make generic driver update export location after manage operation * Deal with PEP-0476 certificate chaining checking * Fix manage operation in generic driver * Imported Translations from Transifex 2015.1.0b3 ---------- * Implement manage/unmanage support in generic driver * cDOT driver should report all share export locations * Enable bashate during pep8 run * Allow updates to export locations * NFS based driver for Quobyte file storage system * glusterfs\_native: partially implement snapshot * Fix issues with get\_pool scheduler API * Use SoftDeleteMixin from oslo.db * Imported Translations from Transifex * Fix cleanup order for tempest test * Enable downgrade migrations in unit tests * Allow shares to have multiple export locations * Add basic manage/unmanage share functionality * Set proper attr "deleted" for ShareTypes model * Imported Translations from Transifex * EMC Isilon Manila Driver Docs * HP3PAR driver log the SHA1 for driver and mediator correctly * Add public attr for shares * Imported Translations from Transifex * Add ro level of access support to generic driver * Remove CLI tests from tempest plugin * Manila Scheduler should read full driver capabilities * NetApp cDOT driver should not create useless export-policy rule * Manila cDOT driver should use loopingcall for ASUP report timing * EMC Isilon Manila driver * Implement private share\_types * Updated from global requirements * Always allow delete share-network when no shares exist * Imported Translations from Transifex * Add nova network plugin * Manila cDOT qualified specs * Make extra spec driver\_handles\_share\_servers required * Failed to load xml configure file * Updated from global requirements * Allow tempest to skip RO access level tests * Manila cDOT netapp:thin\_provisioned qualified extra spec * Replace TEMPEST\_CONCURRENCY with Manila-specific var * doc: Add glusterfs\_native driver developer doc * Fix example style in admin doc * Imported Translations from Transifex * Improve error handling in GPFS driver * Updated from global requirements * Add doc for hdfs\_native driver * Remove copypasted export\_location field from snapshots * HP 3PAR use one filestore per tenant * Single-SVM Manila driver for NetApp Clustered Data ONTAP * Remove hacking exception for oslo.messaging import * Remove Python 2.6 classifier * Remove obsolete option: enabled\_backends * Manila access-allow API doesn't accept backslash * Add temporary workaround to scheduler * Add doc for Dynamic Storage Pools for Manila scheduler * Fix config opts description for class NeutronSingleNetworkPlugin * Add snapshot gigabytes quota * Use devstack plugin in CI hooks * HP 3PAR driver fix for delete snapshot * Add Nova-network support to service\_instance module * Updated from global requirements * Sync tempest plugin * Manila cDOT storage service catalog * Add devstack plugin * Generic Driver image supported protocols * Updated from global requirements * glusterfs: add NFS-Ganesha based service backend * ganesha utils: allow remote execution as root * Remove left-over modules from Cinder * Add share\_type\_default() method to API * Add support of default share type * Support Manila pools in NetApp Clustered Data ONTAP driver * Move definition of couple of config opts to proper module * Add support of nova network for share-networks API and DB * Make listing of networks compatible for 
neutron and nova in devstack * ganesha: fix execute call using invalid argument * Imported Translations from Transifex * Rename volume\_type to share\_type * Imported Translations from Transifex * Add possibility to enable/disable some share protocols * Add standalone network plugin * Add possibility to define driver mode within pre\_test\_hook for CI * Skip multisvm tempest tests for singlesvm setup * Correct the share server's db info after its deletion * Add support for HDFS native protocol driver * Fix cinderclient compatibility of list filtering by name * Fix spelling mistake * Fixed spelling mistake in tests * Manila NetApp cDOT driver refactoring * glusterfs: Add doc * Imported Translations from Transifex * fix case sensitivity * Fix generation of config sample * Use oslo\_log lib * unify some messages * HP 3PAR Driver for Manila * Do not instantiate network plugin when not used by driver 2015.1.0b2 ---------- * Pool-aware Scheduler Support * Implement additional test for db migrations * Updated from global requirements * Add share driver for HDS NAS Scale-out Platform * Replace legacy StubOutForTesting class * Add unit test for volume types * Add CI job support for second mode of Generic driver * Implement additional driver mode for Generic driver * ganesha: fix resetting of exports * Remove workaround for Nova VM boot bug * Add tracing facility to NetApp cDOT driver * Remove startswith for share\_proto check * Remove copy-pasted code for fake-share * driver: Fix ganesha config option registry * Workaround Nova VM boot bug * Add access levels for shares * Imported Translations from Transifex * Add factory for NetApp drivers * Updated from global requirements * Search snapshot by ID instead of name in Huawei driver * Fix documentation for some Ganesha config variables * Add Neutron single network plugin * Add unit test for quota remains functionality * Switch to using oslo\_\* instead of oslo.\* * utils: Allow discovery of private key in ~/.ssh * Updated from global requirements * Do not use router for service instance with direct connect * Port cinder EMS and ASUP support to manila * Adapt readme to usual structure * glusterfs: add infrastructure to accommodate NAS helpers * Fix tempest pep8 failures * Release resources in tempest test properly * Replace string driver modes with boolean value * Adding required rootwrap filters for GPFS driver * Add doc for Huawei driver * Fix pep8 error E265 in wsgi * fix typo in config.py * fix typo in nova.py helpline * fix typo in rpc.rst * Fix typo "authogenerate" in manila-manage * Updated from global requirements * Fix searching mechanism of share-networks within tempest * Fix small typo in 70-manila.sh * Change default migration in "manila-manage db downgrade" command * Add manila.conf.sample to .gitignore * Fix deletion of share-server within Generic driver * Fix devstack compatibility * Reuse network resources in share-server creation test * Updated from global requirements * Add share driver for Huawei V3 Storage * Make Tempest tests use networks only from same project * Refactor tempest test 'test\_create\_share\_with\_size\_bigger\_than\_quota' * Sync tempest plugin with latest Tempest * Update message for exception ShareNetworkNotFound * Update documentation for tempest integration * Add error suppressing to isolated creds cleanup in Tempest plugin * Updated from global requirements * Fix handling of share-networks with single\_svm drivers * Set pbr 'warnerrors' option for doc build * Fix nit in tempest naming * Fix documentation build 
* Imported Translations from Transifex * Fix TypeError in tempest retry functionality * Fix using anyjson in fake\_notifier * Fix typo in db migration test function name * Use Cinder v2 API within Generic driver * Add driver mode attr definition for all drivers * Fix concurrency problem in getting share network in Tempest * Make it possible to update tempest conf in all CI Tempest jobs * Use oslotest.base.BaseTestCase as test base class * Add possibility to create lots of shares in parallel for tempest * Add service id to information provided by API * Raise error immediately for undeletable share in tempest * py3: use function next() instead of next() method on iterator objects * Allow deleting share with invalid share server in generic driver * Rename share driver stats update method * Remove unsed python modules from requirements * Remove unused conf option 'fake\_tests' * Make tempest cleanup errors be suppressed in all CI jobs * Add retries for share creation within Tempest plugin * Remove unused sslutils module * Improve share driver mode setting * py3: use six.moves.range instead of xrange * py3: use six.moves.urllib.parse instead of urlparse * Use lockutils from "oslo concurrency" lib * Remove non-active host from host\_state\_map * Strip exec\_dirs prefix from rootwrap filters * Add possibility to suppress errors in Tempest plugin cleanup * Make Tempest repo stable for Manila * Use uuidutils from oslo.utils * Cleanup manila/utils.py * Remove configs sql\_connection and sql\_connection\_debug * Remove unused configs pybasedir and bindir * Remove unused connection\_type config * Fix tempest test with share server listing with no filters * Improve tempest share server filtering * Increase quotas and number of threads for tempest * Use oslo.context lib * Imported Translations from Transifex * Add missing imports for sample config generation * Fix tempest compatibility for network client * Fix driver mode opt definition * Adds Oracle ZFSSA driver for Manila 2015.1.0b1 ---------- * ganesha: NFS-Ganesha instrumentation * Add driver mode interface * Updated from global requirements * Updated from global requirements * Move networking from share manager to driver interface * Workflow documentation is now in infra-manual * Fix error message in share delete method * glusterfs: create share of specific size * Fix metadata validation in share api * Fix devstack plugin custom config opt setting * Enhance devstack plugin * Update EMC Manila driver framework using stevedore * Alternative way to import emc.plugins.registry * Fix wrong mock assertions in unit tests * Release network resources properly * Updated from global requirements * Imported Translations from Transifex * Add support for volume types with Generic driver * Fix H302 rule after release of oslo.concurrency 0.3.0 * Fix for debugging m-shr in PyCharm * Updated from global requirements * Fix tempest compatibility for cli tests * Fix context.elevated * Updated from global requirements * Updated from global requirements * Remove obsolete methods from tempest service client * Switch to oslo.concurrency for processutils * Updated from global requirements * Use oslo.utils.netutils function to set tcp\_keepalive * Fix couple of nit picks * Use keystonemiddleware and manila.conf for config * Imported Translations from Transifex * Updated from global requirements * Fix share manager to save data after driver error * Adding GPFS Manila driver * Remove object in wsgi LOG.info * Fix share network id in tempest test * Convert files to use \_LE and 
friends * Imported Translations from Transifex * Fix concurrency issue in security-service tempest test * Sync Tempest plugin with latest Tempest changes * Improve share-network list API filtering * Updated from global requirements * Don't translate LOG messages in testsuite * Add admin doc for multiple backends configuration * Remove gettextutils * Use proper value for osap\_share\_extension * Refactor shares client init in Tempest plugin * Delete unused versionutils module * Sync with oslo-incubator * Updated from global requirements * Use oslo.utils - remove importutils usage * Switch to oslo.config * Use oslo.serialization * Use oslo.utils * Silence tox warning * Add manila specific hacking checks * Remove extra flake8 args * Sync with global requirements * Improve share snapshots list API filtering * Use oslo.i18n * Use six instead of str for exceptions * Add info to cDOT driver doc * Fix tempest compatibility * Add new search options for security service * Fix doc build * Add Admin doc for an Introduction to Manila * Add share server id field in shares detail info * Improve share list API filtering * Fix doc build warnings so docs build clean * Remove extraneous vim editor configuration comments * Add share network id field in share server info * Fix tempest compatibility * Use 'generate\_request\_id' func from common code * Remove vim headers * Add info to generic driver doc * Open Kilo development * Add doc for EMC VNX driver 2014.2 ------ * Fix creation of share from snapshot * Specify the correct Samba share path * Fixes several typos (Manila) * Fix KeyError while creating share from snapshot * Fix references in jenkins.rst * Update translation information * Mention Samba in intro.rst * Add doc for an Introduction to Manila 2014.2.rc1 ---------- * Add support for working with multiple glusterfs volumes * Minor Manila doc change * Make copyrights in docs as comments instead of page content * Update challenges in the developer docs * Update naming from clustered mode to cDOT * Fix doc build errors in db/sqlalchemy/models.py * Improve documentation build * Add doc for netapp cluster mode driver * Add doc for generic driver * Fix using key for ssh * Fix getting ssh key if ssh path is not set * Rename stackforge to openstack in docs * Move from stackforge to openstack * Fix two functional tests within tempest\_plugin * glusterfs: edit config option specifying volume * Change exception thrown by db method * Fix some LOG.debug invocations * Fix Invalid pathname for netapp cmode driver * Make block devices mounts permanent within service instances * Stop using intersphinx * Increase share-network default quota * Don't allow security service to be updated if used * Move db related unittests to proper places * Fix update of backend details in cmode driver * Update shares and snapshot create to show details * Use oslosphinx and remove local copy of doc theme * Move driver unittest modules to proper place * Move unittests related to manila/share/\*.py modules to proper place * Make NFS exports in generic driver permanent * Fix ssh connection recreation in generic driver * Drop a forgotten fragment * warn against sorting requirements * Fix version number to Juno 2014.2.b3 --------- * Add support for glusterfs native protocol driver * Fix some LOG invocations and messages * EMC VNX Manila Plugin * Add support for cert based access type * Make m-shr more stable on start up * Fix scheduled share creation with generic driver * Add "." 
at end of exceptions * py3: Use six module for StringIO imports * Update share\_network obj after db update * Transform Exception args to strings when exceptions * Fix string concatenation * glusterfs: Fix docstring * Fix concurrent policy issue in unittest * Remove redundant glance config options * Improve help strings * Remove hash seed dependency for unittests * Updated usage of locks * Fix creation of cifs entry in cmode driver * Flake8: Fix and enable H405 * Forbid to attach security services with same type to share network * Flake8: Fix H501 * Flake8: Fix and enable H404 * Flake8: Fix E128 * Fix device mount/umount methods in generic driver * Change service VM connectivity * Use Alembic instead of Sqlalchemy-migrate in Manila * Flake8: Fix H302 * Remove NetApp 7-mode driver as obsolete * Flake8: Fix F841 * Remove bin/manila-rpc-zmq-receiver * Cmode, CIFS shares, fix allowed share access type * Fix obtaining of service VM ip * EMC Manila driver * Add specific docs build option to tox * Flake8: Fix some occurences of F841 * Flake8: Fix E126 and E127 * Flake8: Fix F401 * pep8: Enable H303 and F403 * Sync requirements with global requirements * Remove extra setenv from tox.ini * Enable E121,E122,E123,E124,E125,E129 flake8 tests * Refactor NetApp Cmode driver * Use opportunistic migrations * Add config option for share volume fs type * Fix failing of unittests in one thread * Fix H402 hacking rules * Fix pep8 issues in manila/tests * Clean up devstack plugin after LVM driver removal * Remove LVM driver * Fix pep8 failures in manila/{db,volume} * Handle missing config options for tests gracefully * Add oslo.utils and oslo.i18n libs to requirements * Issue one SQL statement per execute() call * Further pep8 fixes * Fix pep8 F811 and F812 * Rename 'sid' to 'user' in access rules and sec services * Decrease amount of threads for Tempest tests * Flake8 in bin/\* * Remove manila-clear-rabbit-queues * Sync scripts with oslo-incubator * Replace utils.config\_find with CONF.find\_file * Use common code within manila.policy module * Fix bad indentation in manila * Refactor cifs helper for generic driver * Fix share status waiter within tempest * Fix update of share with share-server-id * Use common config generator * Add config module from oslo-incubator * Remove dangerous arguments default * Remove unused imports * Fix F402 pep8 * Make flake8 ignore list more fine granular * Sync common modules from Oslo * Add share\_server\_id filter option to 'get\_all' share API method * Fix tempest compatibility * Fix pep8 F821 * Update requirements file matching global requ * glusterfs: Edit comments and docstrings * glusterfs: Modify interface methods * Fix setting up security-services in Cmode * Update pep8 testing * Added calculating capacity info in Cmode * Added calculating capacity info to 7mode driver * Adds undocumented policies and defaults in policy.json * Add check on eventlet bug #105 (ipv6 support) * Remove reference to 'in-use' state in share manager * Enable check for H237 * Use oslo.rootwrap library instead of local copy * py3.x: Use six.text\_type() instead of unicode() * py3: use six.string\_types instead of basestring * Use oslo.db in manila * Fix compatibility with tempest project * README merge * Refactor test framework * Add interprocess locks to net interfaces handlers * Fix obtaining of service instance ip * Setup for translation * Enabled hacking checks H305 and H307 * Fix service subnet capacity within service\_instance module * Fix metaclasses assignment * Enable hacking check H236 
* Add share-server-delete API * Change get\_client\_with\_isolated\_creads() to \*\_creds() * Sync with global requirements * Fix E112 expected an indented block * Fix E713 test for membership should be 'not in' * Fix E131 continuation line unaligned for hanging indent * Address H104 File contains nothing but comments * Fix E251 unexpected spaces around keyword / parameter equals * Fix E265 block comment should start with '# ' * Fix usage of ProcessExecutionError exception * Enabled hacking check H403 * py33: use six.iteritems for item iterations (part2) * Cleanup manila.utils module (part1) * glusterfs: Implement methods to update share stats * glusterfs: Fix issues in backend instrumentation * Enabled hacking check H401 * Use ssh\_execute function from common code * Use execute() and trycmd() functions from common code * Use looping calls for running services from common code * Fix typo in error message for share\_export\_ip * py33: use six.iteritems for item iterations (part1) * Change logging level AUDIT to INFO * Teardown/setup server enhancements * Removed custom synchronized in service\_instance * Migrate to oslo.messaging instead of commom/rpc * Removed redundant methods from singletenant drivers * Replace python print operator with print function (pep H233, py33) * share.manager: Modify allow\_access method call * Delete skipped quota tests as invalid * Add CLI tests for share-server-list API * Added retrieving vserver name from backend details * Update ci scripts * service\_instance: Add lock to creation of security\_group * Enable skipped tests from test\_capacity\_weigher.py * Add using share-server backend details in Generic driver * Fixed passing share\_server to teardown\_network * Fix create\_share\_from\_snapshot method * Added tempest tests * Cleaned up exception module and added unittests * Check share net ids when creating share from snapshot * Update manila's docs * Replace usage of unittest module with manila.test * Fix tempest test's rare concurrent issue * Improved share\_servers db api * Fixed passing share\_server to ensure\_share * Rewrited mox tests to mock (part 2) * Fix lvm driver to be compatible with share manager * Rewrited mox tests to mock (part 1) * Replace json with jsonutils from common code * Removed redundant code for glance * Use testtools module instead unittest module * Cleanup resources with tempest more reliably * Added service\_instance\_locks directory to .gitignore * Added force-delete action to admin actions * Update contrib/ci bash scripts * devstack: strip obsolete part of m-shr instumentation * Sync common modules from Oslo * Several fixies to tempest plugin * Moved exports needed for tempest into post\_test\_hook * Fix some cosmetic issues in README.rst * Fixed ci bash scripts * Remove explicit dependency on amqplib * Added share server api * Removed redundant dependency of hp3parclient * Add multibackend test suite for tempest plugin * Added bash scripts for ci jobs * Added multibackendency to devstack plugin * Switch to Hacking 0.8.x * Use Python 3.x compatible except construct * assertEquals is deprecated, use assertEqual * Share server details * Added locks into service\_instance module * Removed redundant option from devstack plugin * Separated locks for cifs and server operations * Share servers implementation * Made safe get of security\_groups with nova's response * Made service\_instance consider driver's config * Set locks for shared resources in generic driver's cifs helper * change assertEquals to assertEqual * change 
assert\_ to assertTrue * Added handling of secgroup for service\_instance module * set default auth\_strategy to keystone * Enabled ip rules tests for cifs in tempest * Increase default quota for share networks from 3 to 5 * debug level logs should not be translated * tempest plugin update * Fixed tempest plugin compatibility * Fixed possibility to have more than 25 shares with generic driver * Retrieve share\_backend name from config on get\_share\_stats * Fixed retrieving export ip address in Cmode drv * Made template for service VM unique using generic driver * Fixed usage of config option in generic driver * Replaced manila.conf.sample with README.manila.conf * Added API to manage volume types * Fixed rise of Duplicate exception for DB * Added volume\_types to DB * Removed unused module from unittests * Raise max header size to accommodate large tokens * Added cli tests for service-list request * Allowed devstack not fail if couldn't stop smb service * Removed redundant keystone token usage * Refactored service-list filters * Fixed tempest plugin compatibility with master * Checking security service is not used while deleting * Added creation of secgroup for service vms in devstack plugin * Removed unique constraint for share networks * Added type field to security services index list * Update tempest plugin for latest changes of manila * Made max limit name for snapshots unique * Made limits usages names unique * Fixed ownership for service volumes * Fixed quotas for share-networks * Fixes bug with share network deactivation * Added extension that provides used resources in absolute limits * Fixed detail list for shares * Added quota for share-networks * Teardown share network in Netapp Cmode driver * Fixed detail list for security-services * Fix venv installation for run\_tests.sh * Updated generic\_driver and service\_instance with activation * Added Cmode driver * Fixed race condition in tempest plugin * Fixes bug with simultaneous network modification * Fixes bug with keypair creating * Update tempest plugin, make it more stable * Add exception to tempest plugin * Splits service\_instance module from generic driver * Make functions in manila uniquenamed * Fixed creation of cinder's volumes * Add share network activate and deactivate * Separate action and creation tests in tempest * Add handling of share-networks to tempest plugin * Fix sequence of called functions in devstack plugin * Update policy.json * Enforce function declaration format in bash8 * Switched devstack plugin to use generic driver * DevStack plugin: make source dirs configurable * Fixes bug with getting hostname * Fix DevStack plugin's source collection issue * Let DevStack plugin get python executable path * Removed swiftclient from dependencies * Use uuid instead of uuidutils * Update plugin for tempest * Add detail filter for share-network-list * Add function cidr\_to\_netmask to utils * Fixes bug with path to ssh keys * Fixed detail list for security-services * Removed cinder artifacts in devstack plugin * Added to devstack plugin passwords for services * Generic driver * Fix devstack plugin's usage of RECLONE option * Removes use of timeutils.set\_time\_override * Adds modules for managing network interfaces for generic driver * Extends neutron api with methods needed for generic driver * Adds nova api needed for generic driver implementation * Adds cinder api needed for generic driver implementation * Squash all migrations into one * Add network id verification on share creation * Add policy checks in share 
networks API * Fix policy.py * Updated from global requirements * Fix bad calls to model\_query() * Change manila DB to have working unique constraint * Change 'deleted' to Boolean in project\_user\_quotas * Fixes handling of duplicate share access rule creation * Fixes empty network\_info for share * Use actual rootwrap option in manila.conf instead deprecated one * Fix xml response for create/update security service * Add 'password' field to the security service * Adds network creation to ShareManager * Checking if access rule exists in share api * Add share's networks API * Add share's networks DB model, API and neutron support * Fix manila's devstack plugin for using Fedora/CentOS/RHEL distro * Add manila's tempest-plugin * Security service API * Add security service DB model and API * Remove redundant options in devstack plugin * Fix bug with full access to reset-state * glusterfs: Add GlusterFS driver * Fix manila's devstack plugin * Adds an ability to reset snapshot state * Adds validation of access rules * Adds admin actions extension to provide reset-state command * Refactoring driver interfaces * Move NetAppApiClient to separate module * Moved netapp.py from drivers to drivers/netapp * Insert validation of losetup duplicates * Remove redundant options for manila * Place devstack files to proper dirs * Fixes inappropriate size of metadata value * Adds 'metadata' key to list of options for xml responses * Adds an ability to manage share metadata * Added Neutron API module * Add consume\_from\_share method to HostState class * Add devstack integration * Update requirements.txt for keystoneclient * Support building wheels (PEP-427) * Update openstack/common/lockutils * Remove unused manila.compute.aggregate\_states * Remove obsolete redhat-eventlet.patch * Added per user-tenant quota support * Change wording of short description * Removing deprecated using of flags module from project * Fixed share size validation while creating from snapshot * Fixed xml response for share snapshot * Added share size checking if creating from snapshot * Fixed values passed to share\_rpcapi.create\_share * Remove d2to1 dependency * Update functionality implementation for manila api * Fixed policy check for manila api * Added XML serialization for access actions * Check policy implementation for shares api * Update README with relevant Manila information * Fix xml response content for share list/show * Add .gitreview file * Unittests failure fix * Fixed snapshot\_id None for share * Quota releasing on snapshot deleting bug fixed * Fixed absolute limits * fixed pep8 * Stubed driver do\_setup in start\_service * Quota tests fixed * removed egg-info * modified conf sample * modified docs * docs * snapshot view, size added * quotas for snapshot * fixed api error * snapshot size * fixed TYPO * Access create empty boy fix * User cannot delete snapshot fix * Can not delete share with error status fixed * response status for share with snapshot delete request - fixed * fixed null value validation for snapshot id * fixed share temaplate name * fixed share snapshots * pep8 fix * License flake8 error fixed * Fixed flake8 errors * Api share-snapshots to snapshots * Removed unused imports * Fixed api tests * Removed v2 api. Moved shares and snapshots from contrib to v1 * quotas exception fix * Quotas fix * Deleted api v2 * Quotas fixed. 
quotas unittests fixed * Removed ubused unittests * fixed fake flags * Removed volume specific tests * merge * Mass replace osapi\_volume to osapi\_share Removed locale * Update connfig.sample scripts * Update connfig.sample scripts * Removed unused opts from flags.py * removed some volume occurances * removed block specific exceptions * osapi\_volume to osapi\_share * removed volumes from bin scripts * Added help to smb\_config\_path conf * modified fake flags * deleted brick * fixed manila manage * api-paste.ini: osapi\_volume to osapi-share * Replaced cinder with manila * Renamed service api config opts. Set default port to 8786 * removed volumes from scheduler * deleteted .idea, added .gitignore * volume api removed * fixed keystone context * api fix * Removed backups * DB cleaned * Removed SM models and migrations * Modified models * Modified migrations * Removed block-specific from DB api * Deleted manila.volume * Renamed cinder to manila. Fixed setup.py, fixed bin scripts * Initialize from cinder * Initial commit manila-10.0.0/etc/0000775000175000017500000000000013656750362013643 5ustar zuulzuul00000000000000manila-10.0.0/etc/oslo-config-generator/0000775000175000017500000000000013656750362020046 5ustar zuulzuul00000000000000manila-10.0.0/etc/oslo-config-generator/manila.conf0000664000175000017500000000042213656750227022154 0ustar zuulzuul00000000000000[DEFAULT] output_file = etc/manila/manila.conf.sample namespace = manila namespace = oslo.messaging namespace = oslo.middleware.cors namespace = oslo.middleware.http_proxy_to_wsgi namespace = oslo.db namespace = oslo.db.concurrency namespace = keystonemiddleware.auth_token manila-10.0.0/etc/manila/0000775000175000017500000000000013656750362015104 5ustar zuulzuul00000000000000manila-10.0.0/etc/manila/rootwrap.conf0000664000175000017500000000173413656750227017635 0ustar zuulzuul00000000000000# Configuration for manila-rootwrap # This file should be owned by (and only-writeable by) the root user [DEFAULT] # List of directories to load filter definitions from (separated by ','). # These directories MUST all be only writeable by root ! filters_path=/etc/manila/rootwrap.d,/usr/share/manila/rootwrap # List of directories to search executables in, in case filters do not # explicitly specify a full path (separated by ',') # If not specified, defaults to system PATH environment variable. # These directories MUST all be only writeable by root ! exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/sbin,/usr/local/bin,/usr/lpp/mmfs/bin # Enable logging to syslog # Default value is False use_syslog=False # Which syslog facility to use. # Valid values include auth, authpriv, syslog, user0, user1... # Default value is 'syslog' syslog_log_facility=syslog # Which messages to log. 
# INFO means log all usage # ERROR means only log unsuccessful attempts syslog_log_level=ERROR manila-10.0.0/etc/manila/rootwrap.d/0000775000175000017500000000000013656750362017203 5ustar zuulzuul00000000000000manila-10.0.0/etc/manila/rootwrap.d/share.filters0000664000175000017500000001766013656750227021711 0ustar zuulzuul00000000000000# manila-rootwrap command filters for share nodes # This file should be owned by (and only-writeable by) the root user [Filters] # manila/utils.py : 'chown', '%s', '%s' chown: CommandFilter, chown, root # manila/utils.py : 'cat', '%s' cat: CommandFilter, cat, root # manila/share/drivers/lvm.py: 'mkfs.ext4', '/dev/mapper/%s' mkfs.ext4: CommandFilter, mkfs.ext4, root # manila/share/drivers/lvm.py: 'mkfs.ext3', '/dev/mapper/%s' mkfs.ext3: CommandFilter, mkfs.ext3, root # manila/share/drivers/lvm.py: 'smbd', '-s', '%s', '-D' smbd: CommandFilter, smbd, root smb: CommandFilter, smb, root # manila/share/drivers/lvm.py: 'rmdir', '%s' rmdir: CommandFilter, rmdir, root # manila/share/drivers/lvm.py: 'dd' 'count=0', 'if=%s' % srcstr, 'of=%s' dd: CommandFilter, dd, root # manila/share/drivers/lvm.py: 'fsck', '-pf', %s fsck: CommandFilter, fsck, root # manila/share/drivers/lvm.py: 'resize2fs', %s resize2fs: CommandFilter, resize2fs, root # manila/share/drivers/helpers.py: 'smbcontrol', 'all', 'close-share', '%s' smbcontrol: CommandFilter, smbcontrol, root # manila/share/drivers/helpers.py: 'net', 'conf', 'addshare', '%s', '%s', 'writeable=y', 'guest_ok=y # manila/share/drivers/helpers.py: 'net', 'conf', 'delshare', '%s' # manila/share/drivers/helpers.py: 'net', 'conf', 'setparm', '%s', '%s', '%s' # manila/share/drivers/helpers.py: 'net', 'conf', 'getparm', '%s', 'hosts allow' net: CommandFilter, net, root # manila/share/drivers/helpers.py: 'cp', '%s', '%s' cp: CommandFilter, cp, root # manila/share/drivers/helpers.py: 'service', '%s', '%s' service: CommandFilter, service, root # manila/share/drivers/lvm.py: 'lvremove', '-f', "%s/%s lvremove: CommandFilter, lvremove, root # manila/share/drivers/lvm.py: 'lvextend', '-L', '%sG''-n', %s lvextend: CommandFilter, lvextend, root # manila/share/drivers/lvm.py: 'lvcreate', '-L', %s, '-n', %s lvcreate: CommandFilter, lvcreate, root # manila/share/drivers/lvm.py: 'vgs', '--noheadings', '-o', 'name' # manila/share/drivers/lvm.py: 'vgs', %s, '--rows', '--units', 'g' vgs: CommandFilter, vgs, root # manila/share/drivers/lvm.py: 'tune2fs', '-U', 'random', '%volume-snapshot%' tune2fs: CommandFilter, tune2fs, root # manila/share/drivers/generic.py: 'sed', '-i', '\'/%s/d\'', '%s' sed: CommandFilter, sed, root # manila/share/drivers/glusterfs.py: 'mkdir', '%s' # manila/share/drivers/ganesha/manager.py: 'mkdir', '-p', '%s' mkdir: CommandFilter, mkdir, root # manila/share/drivers/glusterfs.py: 'rm', '-rf', '%s' rm: CommandFilter, rm, root # manila/share/drivers/glusterfs.py: 'mount', '-t', 'glusterfs', '%s', '%s' # manila/share/drivers/glusterfs/glusterfs_native.py: 'mount', '-t', 'glusterfs', '%s', '%s' mount: CommandFilter, mount, root # manila/share/drivers/glusterfs.py: 'gluster', '--xml', 'volume', 'info', '%s' # manila/share/drivers/glusterfs.py: 'gluster', 'volume', 'set', '%s', 'nfs.export-dir', '%s' gluster: CommandFilter, gluster, root # manila/network/linux/ip_lib.py: 'ip', 'netns', 'exec', '%s', '%s' ip: CommandFilter, ip, root # manila/network/linux/interface.py: 'ovs-vsctl', 'add-port', '%s', '%s' ovs-vsctl: CommandFilter, ovs-vsctl, root # manila/share/drivers/glusterfs/glusterfs_native.py: 'find', '%s', '-mindepth', '1', '!', 
'-path', '%s', '!', '-path', '%s', '-delete' # manila/share/drivers/glusterfs/glusterfs_native.py: 'find', '%s', '-mindepth', '1', '-delete' find: CommandFilter, find, root # manila/share/drivers/glusterfs/glusterfs_native.py: 'umount', '%s' umount: CommandFilter, umount, root # GPFS commands # manila/share/drivers/ibm/gpfs.py: 'mmgetstate', '-Y' mmgetstate: CommandFilter, mmgetstate, root # manila/share/drivers/ibm/gpfs.py: 'mmlsattr', '%s' mmlsattr: CommandFilter, mmlsattr, root # manila/share/drivers/ibm/gpfs.py: 'mmcrfileset', '%s', '%s', '--inode-space', 'new' mmcrfileset: CommandFilter, mmcrfileset, root # manila/share/drivers/ibm/gpfs.py: 'mmlinkfileset', '%s', '%s', '-J', '%s' mmlinkfileset: CommandFilter, mmlinkfileset, root # manila/share/drivers/ibm/gpfs.py: 'mmsetquota', '-j', '%s', '-h', '%s', '%s' mmsetquota: CommandFilter, mmsetquota, root # manila/share/drivers/ibm/gpfs.py: 'mmunlinkfileset', '%s', '%s', '-f' mmunlinkfileset: CommandFilter, mmunlinkfileset, root # manila/share/drivers/ibm/gpfs.py: 'mmdelfileset', '%s', '%s', '-f' mmdelfileset: CommandFilter, mmdelfileset, root # manila/share/drivers/ibm/gpfs.py: 'mmcrsnapshot', '%s', '%s', '-j', '%s' mmcrsnapshot: CommandFilter, mmcrsnapshot, root # manila/share/drivers/ibm/gpfs.py: 'mmdelsnapshot', '%s', '%s', '-j', '%s' mmdelsnapshot: CommandFilter, mmdelsnapshot, root # manila/share/drivers/ibm/gpfs.py: 'rsync', '-rp', '%s', '%s' rsync: CommandFilter, rsync, root # manila/share/drivers/ibm/gpfs.py: 'exportfs' exportfs: CommandFilter, exportfs, root # manila/share/drivers/ibm/gpfs.py: 'stat', '--format=%F', '%s' stat: CommandFilter, stat, root # manila/share/drivers/ibm/gpfs.py: 'df', '-P', '-B', '1', '%s' df: CommandFilter, df, root # manila/share/drivers/ibm/gpfs.py: 'chmod', '777', '%s' chmod: CommandFilter, chmod, root # manila/share/drivers/ibm/gpfs.py: 'mmnfs', 'export', '%s', '%s' mmnfs: CommandFilter, mmnfs, root # manila/share/drivers/ibm/gpfs.py: 'mmlsfileset', '%s', '-J', '%s', '-L' mmlsfileset: CommandFilter, mmlsfileset, root # manila/share/drivers/ibm/gpfs.py: 'mmchfileset', '%s', '-J', '%s', '-j', '%s' mmchfileset: CommandFilter, mmchfileset, root # manila/share/drivers/ibm/gpfs.py: 'mmlsquota', '-j', '-J', '%s', '%s' mmlsquota: CommandFilter, mmlsquota, root # manila/share/drivers/ganesha/manager.py: 'mv', '%s', '%s' mv: CommandFilter, mv, root # manila/share/drivers/ganesha/manager.py: 'mktemp', '-p', '%s', '-t', '%s' mktemp: CommandFilter, mktemp, root # manila/share/drivers/ganesha/manager.py: shcat: RegExpFilter, sh, root, sh, -c, echo '((.|\n)*)' > /.* # manila/share/drivers/ganesha/manager.py: dbus-addexport: RegExpFilter, dbus-send, root, dbus-send, --print-reply, --system, --dest=org\.ganesha\.nfsd, /org/ganesha/nfsd/ExportMgr, org\.ganesha\.nfsd\.exportmgr\.(Add|Remove)Export, .*, .* # manila/share/drivers/ganesha/manager.py: dbus-removeexport: RegExpFilter, dbus-send, root, dbus-send, --print-reply, --system, --dest=org\.ganesha\.nfsd, /org/ganesha/nfsd/ExportMgr, org\.ganesha\.nfsd\.exportmgr\.(Add|Remove)Export, .* # manila/share/drivers/ganesha/manager.py: dbus-updateexport: RegExpFilter, dbus-send, root, dbus-send, --print-reply, --system, --dest=org\.ganesha\.nfsd, /org/ganesha/nfsd/ExportMgr, org\.ganesha\.nfsd\.exportmgr\.UpdateExport, .*, .* # manila/share/drivers/ganesha/manager.py: rmconf: RegExpFilter, sh, root, sh, -c, rm -f /.*/\*\.conf$ # ZFS commands # manila/share/drivers/zfsonlinux/driver.py # manila/share/drivers/zfsonlinux/utils.py zpool: CommandFilter, zpool, root # 
manila/share/drivers/zfsonlinux/driver.py # manila/share/drivers/zfsonlinux/utils.py zfs: CommandFilter, zfs, root # manila/share/drivers/zfsonlinux/driver.py kill: CommandFilter, kill, root # manila/data/utils.py: 'ls', '-pA1', '--group-directories-first', '%s' ls: CommandFilter, ls, root # manila/data/utils.py: 'touch', '--reference=%s', '%s' touch: CommandFilter, touch, root # manila/share/drivers/container/container.py: docker docker: CommandFilter, docker, root # manila/share/drivers/container/container.py: brctl brctl: CommandFilter, brctl, root # manila/share/drivers/container/storage_helper.py: e2fsck # manila/share/drivers/generic.py: e2fsck # manila/share/drivers/lvm.py: e2fsck e2fsck: CommandFilter, e2fsck, root # manila/share/drivers/lvm.py: lvconvert --merge %s lvconvert: CommandFilter, lvconvert, root # manila/data/utils.py: 'sha256sum', '%s' sha256sum: CommandFilter, sha256sum, root # manila/utils.py: 'tee', '%s' tee: CommandFilter, tee, root # manila/share/drivers/container/storage_helper.py: lvs -o lv_size --noheadings --nosuffix --units g lvs: CommandFilter, lvs, root # manila/share/drivers/container/storage_helper.py: lvrename --autobackup n lvrename: CommandFilter, lvrename, root manila-10.0.0/etc/manila/manila-policy-generator.conf0000664000175000017500000000011113656750227022466 0ustar zuulzuul00000000000000[DEFAULT] output_file = etc/manila/policy.yaml.sample namespace = manila manila-10.0.0/etc/manila/logging_sample.conf0000664000175000017500000000236713656750227020752 0ustar zuulzuul00000000000000[loggers] keys = root, manila [handlers] keys = stderr, stdout, watchedfile, syslog, null [formatters] keys = default [logger_root] level = WARNING handlers = null [logger_manila] level = INFO handlers = stderr qualname = manila [logger_amqplib] level = WARNING handlers = stderr qualname = amqplib [logger_sqlalchemy] level = WARNING handlers = stderr qualname = sqlalchemy # "level = INFO" logs SQL queries. # "level = DEBUG" logs SQL queries and results. # "level = WARNING" logs neither. (Recommended for production systems.) 
[logger_boto] level = WARNING handlers = stderr qualname = boto [logger_suds] level = INFO handlers = stderr qualname = suds [logger_eventletwsgi] level = WARNING handlers = stderr qualname = eventlet.wsgi.server [handler_stderr] class = StreamHandler args = (sys.stderr,) formatter = default [handler_stdout] class = StreamHandler args = (sys.stdout,) formatter = default [handler_watchedfile] class = handlers.WatchedFileHandler args = ('manila.log',) formatter = default [handler_syslog] class = handlers.SysLogHandler args = ('/dev/log', handlers.SysLogHandler.LOG_USER) formatter = default [handler_null] class = manila.common.openstack.NullHandler formatter = default args = () [formatter_default] format = %(message)s manila-10.0.0/etc/manila/README.manila.conf0000664000175000017500000000020013656750227020140 0ustar zuulzuul00000000000000To generate the sample manila.conf file, run the following command from the top level of the manila directory: tox -egenconfig manila-10.0.0/etc/manila/api-paste.ini0000664000175000017500000000353613656750227017477 0ustar zuulzuul00000000000000############# # OpenStack # ############# [composite:osapi_share] use = call:manila.api:root_app_factory /: apiversions /v1: openstack_share_api /v2: openstack_share_api_v2 [composite:openstack_share_api] use = call:manila.api.middleware.auth:pipeline_factory noauth = cors faultwrap http_proxy_to_wsgi sizelimit noauth api keystone = cors faultwrap http_proxy_to_wsgi sizelimit authtoken keystonecontext api keystone_nolimit = cors faultwrap http_proxy_to_wsgi sizelimit authtoken keystonecontext api [composite:openstack_share_api_v2] use = call:manila.api.middleware.auth:pipeline_factory noauth = cors faultwrap http_proxy_to_wsgi sizelimit noauth apiv2 keystone = cors faultwrap http_proxy_to_wsgi sizelimit authtoken keystonecontext apiv2 keystone_nolimit = cors faultwrap http_proxy_to_wsgi sizelimit authtoken keystonecontext apiv2 [filter:faultwrap] paste.filter_factory = manila.api.middleware.fault:FaultWrapper.factory [filter:noauth] paste.filter_factory = manila.api.middleware.auth:NoAuthMiddleware.factory [filter:sizelimit] paste.filter_factory = oslo_middleware.sizelimit:RequestBodySizeLimiter.factory [filter:http_proxy_to_wsgi] paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory [app:api] paste.app_factory = manila.api.v1.router:APIRouter.factory [app:apiv2] paste.app_factory = manila.api.v2.router:APIRouter.factory [pipeline:apiversions] pipeline = cors faultwrap http_proxy_to_wsgi osshareversionapp [app:osshareversionapp] paste.app_factory = manila.api.versions:VersionsRouter.factory ########## # Shared # ########## [filter:keystonecontext] paste.filter_factory = manila.api.middleware.auth:ManilaKeystoneContext.factory [filter:authtoken] paste.filter_factory = keystonemiddleware.auth_token:filter_factory [filter:cors] paste.filter_factory = oslo_middleware.cors:filter_factory oslo_config_project = manila manila-10.0.0/HACKING.rst0000664000175000017500000001014713656750227014671 0ustar zuulzuul00000000000000Manila Style Commandments ========================= - Step 1: Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/ - Step 2: Read on Manila Specific Commandments ---------------------------- - [M310] Check for improper use of logging format arguments. - [M313] Use assertTrue(...) rather than assertEqual(True, ...). - [M323] Ensure that the _() function is explicitly imported to ensure proper translations. 
- [M325] str() and unicode() cannot be used on an exception. Remove or use six.text_type(). - [M326] Translated messages cannot be concatenated. String should be included in translated message. - [M333] ``oslo_`` should be used instead of ``oslo.`` - [M336] Must use a dict comprehension instead of a dict constructor with a sequence of key-value pairs. - [M337] Ensure to not use xrange(). - [M338] Ensure to not use LOG.warn(). - [M339] Ensure 'mock' is not imported/used. Use 'unittest.mock' instead. - [M354] Use oslo_utils.uuidutils to generate UUID instead of uuid4(). - [M359] Validate that log messages are not translated. LOG Translations ---------------- Beginning with the Pike series, OpenStack no longer supports log translation. It is not useful to add translation instructions to new code, the instructions can be removed from old code, and the hacking checks that enforced use of special translation markers for log messages have been removed. Other user-facing strings, e.g. in exception messages, should be translated using ``_()``. A common pattern is to define a single message object and use it more than once, for the log call and the exception. In that case, ``_()`` must be used because the message is going to appear in an exception that may be presented to the user. For more details about translations, see https://docs.openstack.org/oslo.i18n/latest/user/guidelines.html Creating Unit Tests ------------------- For every new feature, unit tests should be created that both test and (implicitly) document the usage of said feature. If submitting a patch for a bug that had no unit test, a new passing unit test should be added. If a submitted bug fix does have a unit test, be sure to add a new one that fails without the patch and passes with the patch. For more information on creating unit tests and utilizing the testing infrastructure in OpenStack Manila, please read manila/testing/README.rst. Running Tests ------------- The testing system is based on a combination of tox and testr. If you just want to run the whole suite, run `tox` and all will be fine. However, if you'd like to dig in a bit more, you might want to learn some things about testr itself. A basic walkthrough for OpenStack can be found at http://wiki.openstack.org/testr OpenStack Trademark ------------------- OpenStack is a registered trademark of OpenStack, LLC, and uses the following capitalization: OpenStack Commit Messages --------------- Using a common format for commit messages will help keep our git history readable. Follow these guidelines: First, provide a brief summary (it is recommended to keep the commit title under 50 chars). The first line of the commit message should provide an accurate description of the change, not just a reference to a bug or blueprint. It must be followed by a single blank line. If the change relates to a specific driver (libvirt, xenapi, qpid, etc...), begin the first line of the commit message with the driver name, lowercased, followed by a colon. Following your brief summary, provide a more detailed description of the patch, manually wrapping the text at 72 characters. This description should provide enough detail that one does not have to refer to external resources to determine its high-level functionality. Once you use 'git review', two lines will be appended to the commit message: a blank line followed by a 'Change-Id'. This is important to correlate this commit with a specific review in Gerrit, and it should not be modified. 
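As a hedged illustration of the commit-message format described above (driver prefix lowercased and followed by a colon, a summary kept under 50 characters, a blank line, then a body manually wrapped at 72 characters), a message might look like the following; the driver name and wording are invented for the example, and the Change-Id line is omitted because 'git review' appends it automatically::

    glusterfs: fix export deletion on share shrink

    Deleting a share that had previously been shrunk left a stale
    export entry behind. Remove the export before the backing
    volume is resized so that subsequent operations succeed.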
For further information on constructing high quality commit messages, and how to split up commits into a series of changes, consult the project wiki: http://wiki.openstack.org/GitCommitMessages manila-10.0.0/.pylintrc0000664000175000017500000001253613656750227014744 0ustar zuulzuul00000000000000[MASTER] # A comma-separated list of package or module names from where C extensions may # be loaded. Extensions are loading into the active Python interpreter and may # run arbitrary code. extension-pkg-whitelist= # Add files or directories to the blacklist. They should be base names, not # paths. ignore=CVS,tests,test # Add files or directories matching the regex patterns to the blacklist. The # regex matches against base names, not paths. ignore-patterns= # Python code to execute, usually for sys.path manipulation such as # pygtk.require(). #init-hook= # Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the # number of processors available to use. jobs=1 # Control the amount of potential inferred values when inferring a single # object. This can help the performance when dealing with large functions or # complex, nested conditions. limit-inference-results=100 # List of plugins (as comma separated values of python modules names) to load, # usually to register additional checkers. load-plugins= # Pickle collected data for later comparisons. persistent=yes # Specify a configuration file. #rcfile= # When enabled, pylint would attempt to guess common misconfiguration and emit # user-friendly hints instead of false-positive error messages. suggestion-mode=yes # Allow loading of arbitrary C extensions. Extensions are imported into the # active Python interpreter and may run arbitrary code. unsafe-load-any-extension=no [MESSAGES CONTROL] # Only show warnings with the listed confidence levels. Leave empty to show # all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED. confidence= # Disable the message, report, category or checker with the given id(s). You # can either give multiple identifiers separated by comma (,) or put this # option multiple times (only on the command line, not in the configuration # file where it should appear only once). You can also use "--disable=all" to # disable everything first and then reenable specific checks. For example, if # you want to run only the similarities checker, you can use "--disable=all # --enable=similarities". If you want to run only the classes checker, but have # no Warning level messages displayed, use "--disable=all --enable=classes # --disable=W". 
disable= # "F" Fatal errors that prevent further processing import-error, # "I" Informational noise locally-disabled, c-extension-no-member, # "E" Error for important programming issues (likely bugs) no-member, too-many-function-args, not-callable, assignment-from-none, unsubscriptable-object, used-prior-global-declaration, not-an-iterable, # "W" Warnings for stylistic problems or minor programming issues unused-argument, bad-indentation, unused-variable, useless-else-on-loop, pointless-string-statement, unused-import, redefined-outer-name, redefined-builtin, attribute-defined-outside-init, abstract-method, fixme, exec-used, anomalous-backslash-in-string, broad-except, protected-access, arguments-differ, undefined-loop-variable, try-except-raise, global-statement, super-init-not-called, pointless-statement, global-statement, unnecessary-lambda, keyword-arg-before-vararg, deprecated-method, useless-super-delegation, eval-used, wildcard-import, reimported, expression-not-assigned, cell-var-from-loop, signature-differs, # "C" Coding convention violations missing-docstring, invalid-name, wrong-import-order, len-as-condition, wrong-import-position, bad-continuation, too-many-lines, misplaced-comparison-constant, bad-mcs-classmethod-argument, ungrouped-imports, superfluous-parens, unidiomatic-typecheck, consider-iterating-dictionary, bad-whitespace, dangerous-default-value, line-too-long, consider-using-enumerate, useless-import-alias, singleton-comparison, # "R" Refactor recommendations no-self-use, no-else-return, too-many-locals, too-many-public-methods, consider-using-set-comprehension, inconsistent-return-statements, useless-object-inheritance, too-few-public-methods, too-many-boolean-expressions, too-many-instance-attributes, too-many-return-statements, literal-comparison, too-many-statements, too-many-ancestors, literal-comparison, consider-merging-isinstance, too-many-nested-blocks, trailing-comma-tuple, simplifiable-if-statement, consider-using-in, consider-using-ternary, too-many-arguments [REPORTS] # Tells whether to display a full report or only the messages. reports=no [BASIC] # Variable names can be 1 to 31 characters long, with lowercase and underscores variable-rgx=[a-z_][a-z0-9_]{0,30}$ # Argument names can be 2 to 31 characters long, with lowercase and underscores argument-rgx=[a-z_][a-z0-9_]{1,30}$ # Method names should be at least 3 characters long # and be lowercased with underscores method-rgx=([a-z_][a-z0-9_]{2,}|setUp|tearDown)$ # Module names matching neutron-* are ok (files in bin/) module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+)|(neutron-[a-z0-9_-]+))$ # Don't require docstrings on tests. no-docstring-rgx=((__.*__)|([tT]est.*)|setUp|tearDown)$ [FORMAT] # Maximum number of characters on a single line. max-line-length=79 [VARIABLES] # List of additional names supposed to be defined in builtins. Remember that # you should avoid to define new builtins when possible. 
additional-builtins=_ [TYPECHECK] # List of module names for which member attributes should not be checked ignored-modules=six.moves,_MovedItems,alembic.op manila-10.0.0/releasenotes/0000775000175000017500000000000013656750362015561 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/notes/0000775000175000017500000000000013656750363016712 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/notes/bug-1861485-fix-share-network-retrieval-31768dcda5aeeaaa.yaml0000664000175000017500000000042013656750227031257 0ustar zuulzuul00000000000000--- security: - | CVE-2020-9543: An issue with share network retrieval has been addressed in the API by scoping unprivileged access to project only. Please see `launchpad bug #1861485 `_ for more details. manila-10.0.0/releasenotes/notes/bug-1682795-share-access-list-api-5b1e86218959f796.yaml0000664000175000017500000000024313656750227027321 0ustar zuulzuul00000000000000--- features: - Beginning in API version 2.33, share access APIs return "created_at" and "updated_at" for each access rule as part of the JSON response. manila-10.0.0/releasenotes/notes/bug-1872872-fix-quota-checking-b06fd372be143101.yaml0000664000175000017500000000056213656750227027015 0ustar zuulzuul00000000000000--- fixes: - | Fixed quota issue that made it impossible to create resources when the project had the quotas set to unlimited, and the user had a limited amount of quotas to use. Now, operations in the mentioned quota scenario are working properly. Please see `Launchpad bug 1872872 `_ for more details. manila-10.0.0/releasenotes/notes/bug-1660425-snapshot-access-in-error-bce279ee310060f5.yaml0000664000175000017500000000016413656750227030237 0ustar zuulzuul00000000000000--- fixes: - Snapshot access rules in error state no longer cause other rules to go into error state as well. manila-10.0.0/releasenotes/notes/bug-1707943-make-lvm-revert-synchronous-0ef5baee3367fd27.yaml0000664000175000017500000000021613656750227031171 0ustar zuulzuul00000000000000--- fixes: - Changed implementation of revert-to-snapshot in LVM driver to be synchronous, preventing failure of subsequent operations. manila-10.0.0/releasenotes/notes/unity-un-handles-share-server-mode-support-e179c092ab148948.yaml0000664000175000017500000000022413656750227032125 0ustar zuulzuul00000000000000--- features: - Dell EMC Unity Manila driver now supports the mode in which it does not itself create and destroy share servers (DHSS=False). manila-10.0.0/releasenotes/notes/drop-support-for-lvm-share-export-ip-e031ef4c5f95b534.yaml0000664000175000017500000000044513656750227031102 0ustar zuulzuul00000000000000--- upgrade: - | The LVM driver configuration option ``lvm_share_export_ip`` is no longer supported. This option has been replaced by ``lvm_share_export_ips`` which accepts a comma-separated string of IP addresses of the host exporting the LVM shares (NFS/CIFS share server).././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1700346-new-exception-for-no-default-share-type-b1dd9bbe8c9cb3df.yamlmanila-10.0.0/releasenotes/notes/bug-1700346-new-exception-for-no-default-share-type-b1dd9bbe8c9cb3d0000664000175000017500000000025013656750227032344 0ustar zuulzuul00000000000000--- fixes: - A new exception will be thrown when a default share type was not configured and no other share type was specified on any sort of share creation. 
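The release note above about a missing default share type relates to the ``default_share_type`` configuration option; a minimal, illustrative manila.conf snippet follows (the type name is a placeholder, not taken from this release's files)::

    [DEFAULT]
    # Share type applied when the user does not specify one at creation time.
    # If this is unset and no type is passed in, share creation now fails
    # with a clear error instead of an obscure one.
    default_share_type = default_type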
manila-10.0.0/releasenotes/notes/newton-migration-improvements-cf9d3d6e37e19c94.yaml0000664000175000017500000000247113656750227030146 0ustar zuulzuul00000000000000--- prelude: > Added new parameters to Share Migration experimental API and more combinations of share protocols and access types support to the Data Service. features: - Share Migration now has parameters to force share migration procedure to maintain the share writable, preserve its metadata and be non-disruptive when migrating. - Added CIFS protocol support to Data Service, along with respective 'user' access type support, through the 'data_node_access_admin_user' configuration option. - Added possibility to include options to mount commands issued by the Data Service through the 'data_node_mount_options' configuration option. - Administrators can now change share's share network during a migration. - Added possibility of having files hash verified during migration. deprecations: - Renamed Share Migration 'force_host_copy' parameter to 'force_host_assisted_migration', to better represent the parameter's functionality in API version 2.22. - API version 2.22 is now required for all Share Migration APIs. upgrades: - Removed Share Migration 'notify' parameter, it is no longer possible to perform a 1-phase migration. - Removed 'migrate_share' API support. - Added 'None' to 'reset_task_state' API possible values so it can unset the task_state. manila-10.0.0/releasenotes/notes/add-ipv6-32d89161a9a1e0b4.yaml0000664000175000017500000000016513656750227023337 0ustar zuulzuul00000000000000--- features: - Validation of IPv6 based addresses has been added for allow access API when access type is IP. manila-10.0.0/releasenotes/notes/netapp-cdot-configure-nfs-versions-83e3f319c4592c39.yaml0000664000175000017500000000075313656750227030522 0ustar zuulzuul00000000000000--- features: - NFS Versions can be configured when using the NetApp cDOT driver with driver mode ``driver_handles_share_servers = True``. upgrade: - Added new configuration option ``netapp_enabled_share_protocols`` to configure NFS versions with the NetApp cDOT driver operating in driver mode ``driver_handles_share_servers = True``. If this option is not specified, new share servers (NetApp vServers) will be created supporting NFS Version 3 and NFS Version 4.0. manila-10.0.0/releasenotes/notes/huawei-support-access-all-ip-4994c10ff75ac683.yaml0000664000175000017500000000013313656750227027351 0ustar zuulzuul00000000000000--- fixes: - Huawei driver now properly handles access for all IP addresses (0.0.0.0/0). manila-10.0.0/releasenotes/notes/fix-managing-twice-hnas-4956a7653d27e320.yaml0000664000175000017500000000020113656750227026175 0ustar zuulzuul00000000000000--- fixes: - Fixed Hitachi HNAS driver allowing a share to be managed twice through a malformed export location parameter. ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1698258-netapp-fix-tenant-network-gateways-85935582e89a72a0.yamlmanila-10.0.0/releasenotes/notes/bug-1698258-netapp-fix-tenant-network-gateways-85935582e89a72a0.yam0000664000175000017500000000051613656750227031730 0ustar zuulzuul00000000000000--- fixes: - The NetApp DHSS=True driver now creates static routes with the gateway specified on the tenant networks. Potential beneficiaries of this bug-fix are deployers/users whose CIFS security service (e.g. Active Directory) is not part of the tenant network, but a route exists via the tenant network gateway. 
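For the ``netapp_enabled_share_protocols`` option noted above (NetApp cDOT with driver_handles_share_servers = True), a hedged example of a backend section follows; the section name and exact value spellings are illustrative and should be checked against the driver documentation for this release::

    [cdotMultiSVM]
    share_backend_name = cdotMultiSVM
    driver_handles_share_servers = True
    # Assumed value format: NFS versions that newly created vservers
    # should serve; when unset, NFSv3 and NFSv4.0 are enabled.
    netapp_enabled_share_protocols = nfs3,nfs4.0,nfs4.1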
manila-10.0.0/releasenotes/notes/container-driver-hardening-against-races-30c9f517a6392b9d.yaml0000664000175000017500000000017113656750227031666 0ustar zuulzuul00000000000000--- fixes: - Container driver. Fixed share and share server deletion concurrencies by adding shared external lock. manila-10.0.0/releasenotes/notes/bug-1646603-netapp-broadcast-domains-411a626d38835177.yaml0000664000175000017500000000021513656750227027774 0ustar zuulzuul00000000000000--- fixes: - Changed NetApp cDOT driver when running with DHSS=True to maintain a 1-1 relation between IPSpaces and broadcast domains. ././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1696000-netapp-fix-security-style-on-cifs-shares-cbdd557a27d11961.yamlmanila-10.0.0/releasenotes/notes/bug-1696000-netapp-fix-security-style-on-cifs-shares-cbdd557a27d1190000664000175000017500000000017213656750227032173 0ustar zuulzuul00000000000000--- fixes: - The NetApp ONTAP driver has been fixed to ensure the "security style" on CIFS shares is always "ntfs". manila-10.0.0/releasenotes/notes/introduce-tooz-library-5fed75b8caffcf42.yaml0000664000175000017500000000131313656750227026750 0ustar zuulzuul00000000000000--- features: - Add support for the tooz library. - Allow configuration of file/distributed locking for the share manager service. upgrade: - New options are necessary in manila.conf to specify the coordination back-end URL (for example, a Distributed Locking Manager (DLM) back-end or a file based lock location). The configuration determines the tooz driver invoked for the locking/coordination. fixes: - Share replication workflows are coordinated by the share-manager service with the help of the tooz library instead of oslo_concurrency. This allows for deployers to configure Distributed Locking Management if multiple manila-share services are run across different nodes. manila-10.0.0/releasenotes/notes/bug-1690785-fix-gpfs-path-91a354bc69bf6a47.yaml0000664000175000017500000000013213656750227026104 0ustar zuulzuul00000000000000--- fixes: - Fixed the prerequisite of GPFS path export needed for initializing driver. manila-10.0.0/releasenotes/notes/.placeholder0000664000175000017500000000000013656750227021162 0ustar zuulzuul00000000000000manila-10.0.0/releasenotes/notes/remove-standalone-network-plugin-ip-version-440ebcf27ffd22f8.yaml0000664000175000017500000000032413656750227032653 0ustar zuulzuul00000000000000--- upgrade: - | The deprecated configuration option 'standalone_network_plugin_ip_version' has been removed. 'network_plugin_ipv4_enabled' and 'network_plugin_ipv6_enabled' should be used instead. ././@LongLink0000000000000000000000000000014700000000000011217 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1773761-qnap-fix-manage-share-size-override-a18acdf1a41909b0.yamlmanila-10.0.0/releasenotes/notes/bug-1773761-qnap-fix-manage-share-size-override-a18acdf1a41909b0.ya0000664000175000017500000000020613656750227031770 0ustar zuulzuul00000000000000--- fixes: - | Fixed the QNAP driver so that it does not modify the share size on the back end when manila manages a share. 
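The tooz release note above says the coordination back-end URL must be set in manila.conf; the sketch below assumes the option is exposed as ``backend_url`` in a ``[coordination]`` section (verify both names against the generated sample configuration), and the endpoints shown are placeholders::

    [coordination]
    # File-based locking, suitable for a single node or a shared filesystem:
    backend_url = file://$state_path
    # Or a DLM back end supported by tooz, for example etcd:
    # backend_url = etcd3+http://203.0.113.10:2379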
././@LongLink0000000000000000000000000000017000000000000011213 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1869148-if-only-pyc-exist-the-extension-API-cannot-be-loaded-172cb9153ebd4b56.yamlmanila-10.0.0/releasenotes/notes/bug-1869148-if-only-pyc-exist-the-extension-API-cannot-be-loaded-170000664000175000017500000000034413656750227032270 0ustar zuulzuul00000000000000--- fixes: - | `Launchpad bug 1869148 `_ has been fixed. This bug could have affected environments where extension APIs were provided in compiled files rather than source code. manila-10.0.0/releasenotes/notes/snapshot-force-delete-4432bebfb5a0bbc9.yaml0000664000175000017500000000031213656750227026405 0ustar zuulzuul00000000000000--- fixes: - force-delete API requests for snapshots are now propagated to the manila-share service and will not fail even if share drivers cannot remove the snapshots on the storage backend. ././@LongLink0000000000000000000000000000017400000000000011217 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1639188-fix-extend-operation-of-shrinked-share-in-generic-driver-5c7f82faefaf26ea.yamlmanila-10.0.0/releasenotes/notes/bug-1639188-fix-extend-operation-of-shrinked-share-in-generic-drive0000664000175000017500000000043613656750227032760 0ustar zuulzuul00000000000000--- fixes: - In the Generic driver, the backing volume size is greater than the share size when the share has been shrunk. So share extend logic in this driver was changed to only extend the backing volume if its size is less than the size of the new, extended share. manila-10.0.0/releasenotes/notes/bug-1735832-43e9291ddd73286d.yaml0000664000175000017500000000017013656750227023352 0ustar zuulzuul00000000000000--- fixes: - Root uses can now correctly read files on read-only shares when the LVM or generic drivers are used. manila-10.0.0/releasenotes/notes/bug-1774159-0afe3dbc39e3c6b0.yaml0000664000175000017500000000030713656750227023635 0ustar zuulzuul00000000000000--- fixes: - | The NetApp ONTAP DHSS=True driver has been fixed to allow multiple shares to use the same ipspace and VLAN port across all subnets belonging to the same neutron network. manila-10.0.0/releasenotes/notes/netapp-replication-dhss-true-5b2887de8e9a2cb5.yaml0000664000175000017500000000050013656750227027613 0ustar zuulzuul00000000000000--- features: - | The NetApp driver now supports replication with ``driver_handles_share_servers`` set to True, in addition to the mode where the driver does not handle the creation and management of share servers. For replication to work across ONTAP clusters, clusters must be peered in advance. manila-10.0.0/releasenotes/notes/nexentastor5-v1.1-1ad6c8f7b5cc11b6.yaml0000664000175000017500000000237313656750227025274 0ustar zuulzuul00000000000000--- features: - Added revert to snapshot support for NexentaStor5 driver. - Added manage existing support for NexentaStor5 driver. upgrade: - Added a new config option ``nexenta_ssl_cert_verify``. This option defines whether the NexentaStor5 driver should check ssl certificate. - Added a new config option ``nexenta_rest_connect_timeout``. This option specifies the time limit (in seconds), within which the connection to NexentaStor management REST API server must be established. - Added a new config option ``nexenta_rest_read_timeout``. This option specifies the time limit (in seconds), within which NexentaStor management REST API server must send a response. - Added a new config option ``nexenta_rest_backoff_factor``. 
This option specifies the backoff factor to apply between connection attempts to NexentaStor management REST API server. - Added a new config option ``nexenta_rest_retry_count``. This option specifies the number of times to repeat NexentaStor management REST API call in case of connection errors and NexentaStor appliance EBUSY or ENOENT errors. - Added a new config option ``nexenta_dataset_record_size``. This option specifies a suggested block size in for files in a filesystem' manila-10.0.0/releasenotes/notes/manage-share-in-zfsonlinux-driver-e80921081206f75b.yaml0000664000175000017500000000012013656750227030223 0ustar zuulzuul00000000000000--- features: - Added support of 'manage share' feature to ZFSonLinux driver. manila-10.0.0/releasenotes/notes/fix-volume-efficiency-status-2102ad630c5407a8.yaml0000664000175000017500000000020013656750227027337 0ustar zuulzuul00000000000000fixes: - | Fixed an issue while getting efficiency status from the NetApp backend while creating or updating volumes. manila-10.0.0/releasenotes/notes/Huawei-driver-utilize-requests-lib-67f2c4e7ae0d2efa.yaml0000664000175000017500000000056513656750227031053 0ustar zuulzuul00000000000000--- fixes: - | For the latest Python 2.7 release, urllib uses the SSL certification while launching URL connection by default, which causes Huawei driver failed to connect backend storage because it doesn't support SSL certification. Utilize the requests lib for Huawei driver instead, and set no SSL certification for backend storage connection. manila-10.0.0/releasenotes/notes/remove-AllocType-from-huawei-driver-8b279802f36efb00.yaml0000664000175000017500000000061313656750227030635 0ustar zuulzuul00000000000000--- prelude: > Manila scheduler checks "thin_provisioning" in extra specs of the share type and decides whether to use the logic for thin or thick. If "thin_provisioning" not given in extra specs, default use thin. upgrade: - Remove the "AllocType" configuration from huawei driver configuration file. If "thin_provisioning" not given, default create new share by "thin" type. manila-10.0.0/releasenotes/notes/fix_cephx_validation-cba4df77f9f45c6e.yaml0000664000175000017500000000026213656750227026437 0ustar zuulzuul00000000000000--- fixes: - Check the Cephx ID used when granting access to a CephFS share to make sure it's not the same as the one Manila uses to communicate with the Ceph backend. manila-10.0.0/releasenotes/notes/fix-share-manager-shrinking-data-loss-state-edc87ba2fd7e32d8.yaml0000664000175000017500000000045513656750227032540 0ustar zuulzuul00000000000000--- fixes: - | When attempting to shrink a share to a size smaller than the current used space, the share status will remain as ``available`` instead of ``shrinking_possible_data_loss_error``. The user will receive warning message saying that the shrink operation was not completed. ././@LongLink0000000000000000000000000000016400000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1654598-enforce-policy-checks-for-share-export-locations-a5cea1ec123b1469.yamlmanila-10.0.0/releasenotes/notes/bug-1654598-enforce-policy-checks-for-share-export-locations-a5cea10000664000175000017500000000027313656750227032661 0ustar zuulzuul00000000000000--- security: - | Closes a gap where a user can see the export locations for another user's share if the uuid of the other share is leaked, stolen, or (improbably) guessed. 
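The Huawei thin/thick note above keys off the ``thin_provisioning`` extra spec of the share type; a hedged example of setting it with the manila CLI follows (the type name is a placeholder, and the capability syntax should be confirmed for your release)::

    $ manila type-key my_thin_type set thin_provisioning='<is> True'
    $ manila extra-specs-list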
manila-10.0.0/releasenotes/notes/ibm-gpfs-ces-support-3498e35d9fea1b55.yaml0000664000175000017500000000100113656750227026003 0ustar zuulzuul00000000000000--- prelude: > Refactored GPFS driver to support NFS Ganesha through Spectrum Scale CES framework. upgrade: - Added a new config option is_gpfs_node which will determine if manila share service is running on GPFS node or not. Added mmnfs commands in the root wrap share.filters. Removed scp and ssh commands from root wrap share.filters. deprecations: - Deprecated knfs_export_options configuration parameter as export options are now configured in extra specs of share types. manila-10.0.0/releasenotes/notes/support-qes-114-5881c0ff0e7da512.yaml0000664000175000017500000000011213656750227024607 0ustar zuulzuul00000000000000--- features: - | QNAP Manila driver adds support for QES fw 1.1.4. manila-10.0.0/releasenotes/notes/fix-hds-hnas-unconfined-09b79f3bdb24a83c.yaml0000664000175000017500000000025313656750227026512 0ustar zuulzuul00000000000000--- fixes: - Crash when using unconfined filesystems in HDS HNAS driver using SSH backend. - HDS HNAS Driver no longer mounts unmounted filesystems automatically. ././@LongLink0000000000000000000000000000015700000000000011220 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1846836-fix-share-network-update-unexpected-success-eba8f40db392c467.yamlmanila-10.0.0/releasenotes/notes/bug-1846836-fix-share-network-update-unexpected-success-eba8f40db390000664000175000017500000000033413656750227032526 0ustar zuulzuul00000000000000--- fixes: - | Fixed unexpected behavior when updating a share network's `neutron_net_id` or `neutron_subnet_id`. Now, Manila does not allow updating a share network that does not contain a default subnet. manila-10.0.0/releasenotes/notes/huawei-driver-replication-8ed62c8d26ad5060.yaml0000664000175000017500000000065713656750227027104 0ustar zuulzuul00000000000000--- features: - Huawei driver now supports replication. It reports a replication type 'dr'(Disaster Recovery), so "replication_type=dr" can be used in the share type extra specs to schedule shares to the Huawei driver when configured for replication. - The huawei driver now supports turning off snapshot support. issues: - When snapshot support is turned on in the Huawei driver, replication cannot be used. manila-10.0.0/releasenotes/notes/bp-netapp-ontap-storage-based-cryptograpy-bb7e28896e2a2539.yaml0000664000175000017500000000024613656750227032054 0ustar zuulzuul00000000000000--- features: - Now Manila NetApp ONTAP driver supports NetApp Volume Encryption (NVE) which allows the creation of volumes that will be encrypted at rest. manila-10.0.0/releasenotes/notes/bug-1730509-netapp-ipv6-hostname-39abc7f40d48c844.yaml0000664000175000017500000000016513656750227027410 0ustar zuulzuul00000000000000--- fixes: IPv6 addresses are handled corrected when specified for the netapp_server_hostname driver option. ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1638896-missing-migration-completing-state-1e4926ed56eb268c.yamlmanila-10.0.0/releasenotes/notes/bug-1638896-missing-migration-completing-state-1e4926ed56eb268c.yam0000664000175000017500000000022313656750227032202 0ustar zuulzuul00000000000000--- fixes: - Added missing 'migration_completing' task state when requesting migration-complete for a driver-assisted share migration. 
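Relating to the IPv6 fix for ``netapp_server_hostname`` mentioned above, a minimal sketch of a backend section follows; the address, section name, and companion options shown are illustrative::

    [cdotBackend]
    # An IPv6 management address can now be supplied directly.
    netapp_server_hostname = fd20:8b1e:b255:814e::100
    netapp_server_port = 443
    netapp_transport_type = https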
manila-10.0.0/releasenotes/notes/multi-segment-support-fa171a8e3201d54e.yaml0000664000175000017500000000013213656750227026300 0ustar zuulzuul00000000000000--- features: - Added port binding support for neutron networks with multiple segments. ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/manage-unmanage-snapshot-in-netapp-cdot-driver-5cb4b1619c39625a.yamlmanila-10.0.0/releasenotes/notes/manage-unmanage-snapshot-in-netapp-cdot-driver-5cb4b1619c39625a.yam0000664000175000017500000000013313656750227032540 0ustar zuulzuul00000000000000--- features: - Add support for snapshot manage/unmanage to the NetApp cDOT driver. manila-10.0.0/releasenotes/notes/hnas-revert-to-snapshot-a2405cd6653b1e85.yaml0000664000175000017500000000054613656750227026446 0ustar zuulzuul00000000000000--- features: - Added Revert-to-snapshot functionality to Hitachi NAS driver. upgrades: - If using existing share types with the HNAS back end, set the 'revert_to_snapshot_support' extra-spec to allow creating shares that support in-place revert-to-snapshot functionality. This modification will not affect existing shares of such types. ././@LongLink0000000000000000000000000000016400000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-172112-fix-drives-private-storage-update-deleted-entries-7516ba624da2dda7.yamlmanila-10.0.0/releasenotes/notes/bug-172112-fix-drives-private-storage-update-deleted-entries-7516ba0000664000175000017500000000023113656750227032566 0ustar zuulzuul00000000000000--- fixes: - | Fixed the database update query for the drivers' private data store that was failing to update any rows marked as deleted. manila-10.0.0/releasenotes/notes/unity-revert-to-snapshot-support-1cffc3914982003d.yaml0000664000175000017500000000011713656750227030373 0ustar zuulzuul00000000000000--- features: - Revert to snapshot support for Dell EMC Unity Manila driver. ././@LongLink0000000000000000000000000000020000000000000011205 Lustar 00000000000000manila-10.0.0/releasenotes/notes/zfsonlinux-driver-improvement-create-share-from-snapshot-another-backend-44296f572681be35.yamlmanila-10.0.0/releasenotes/notes/zfsonlinux-driver-improvement-create-share-from-snapshot-another-ba0000664000175000017500000000053113656750227034063 0ustar zuulzuul00000000000000--- upgrade: In this release, the operation create share from snapshot was improved in the ZFSonLinux driver. Now, the operator using the ZFSonLinux driver can create a share from snapshot in different pools or backends by specifying the Manila API configuration option [DEFAULT]/use_scheduler_creating_share_from_snapshot. manila-10.0.0/releasenotes/notes/netapp-cdot-apply-mtu-from-network-provider-d12179a2374cdda0.yaml0000664000175000017500000000041613656750227032424 0ustar zuulzuul00000000000000--- features: - The NetApp cDOT driver operating in ``driver_handles_share_servers = True`` mode applies the Maximum Transmission Unit (MTU) from the network provider where available when creating Logical Interfaces (LIFs) for newly created share servers. 
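As the Hitachi NAS revert-to-snapshot note above points out, existing share types need the ``revert_to_snapshot_support`` extra-spec before new shares can use in-place revert. A minimal sketch, assuming a hypothetical share type name:

.. code-block:: console

   $ manila type-key my_hnas_type set revert_to_snapshot_support=True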
././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1639662-fix-share-service-VM-restart-problem-1110f9133cc294e8.yamlmanila-10.0.0/releasenotes/notes/bug-1639662-fix-share-service-VM-restart-problem-1110f9133cc294e8.y0000664000175000017500000000034213656750227031547 0ustar zuulzuul00000000000000--- fixes: - Changed the permanent-mount sync logic in the Generic driver to select the newly mounted share from /etc/mtab and insert it into /etc/fstab. Added corresponding functionality to permanently remove mounts. ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1873963-netapp-fix-vserver-peer-intra-cluster-966398cf3a621edd.yamlmanila-10.0.0/releasenotes/notes/bug-1873963-netapp-fix-vserver-peer-intra-cluster-966398cf3a621edd.0000664000175000017500000000070313656750227032050 0ustar zuulzuul00000000000000--- fixes: - | NetApp cDOT driver is now fixed to not trigger peer accept operation between share servers that belong to the same cluster, when handling share replica creation and promotion. This issue was happening when operating in `driver_handles_share_servers` enabled mode with multiple backends configured within the same cluster. See `Launchpad bug 1873963 `_ for more details. manila-10.0.0/releasenotes/notes/fix-share-instance-list-with-limit-db7b5b99138e22ee.yaml0000664000175000017500000000025113656750227030625 0ustar zuulzuul00000000000000--- fixes: - Fixed a display error in the share instance list API when a limit is specified. Changed _collection_name to _collection_links when showing instances_links. manila-10.0.0/releasenotes/notes/unity-manage-server-share-snapshot-support-6a0bbbed74da13c7.yaml0000664000175000017500000000020013656750227032567 0ustar zuulzuul00000000000000--- features: - Dell EMC Unity Manila driver now supports manage/unmanage share server, share instance and share snapshot. manila-10.0.0/releasenotes/notes/add-cleanup-create-from-snap-hnas-0e0431f1fc861a4e.yaml0000664000175000017500000000017313656750227030246 0ustar zuulzuul00000000000000--- fixes: - Fixed Hitachi HNAS driver not cleaning up data in the backend when failing to create a share from a snapshot. ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000manila-10.0.0/releasenotes/notes/add-share-migration-support-in-zfsonlinux-driver-88e6da5692b50810.yamlmanila-10.0.0/releasenotes/notes/add-share-migration-support-in-zfsonlinux-driver-88e6da5692b50810.y0000664000175000017500000000013213656750227032625 0ustar zuulzuul00000000000000--- features: - Added support for driver-assisted share migration to ZFSonLinux driver. manila-10.0.0/releasenotes/notes/fix_access_level_managed_shares_hnas-c76a09beed365b46.yaml0000664000175000017500000000016413656750227031427 0ustar zuulzuul00000000000000--- fixes: - HNAS driver correctly handles rule updates to pre-existing access rules on a managed CIFS share. manila-10.0.0/releasenotes/notes/netapp-support-filtering-api-tracing-02d1f4271f44d24c.yaml0000664000175000017500000000043213656750227031075 0ustar zuulzuul00000000000000--- features: - The NetApp driver supports a new configuration option ``netapp_api_trace_pattern`` to enable filtering backend API interactions to log. This option must be specified in the backend section when desired and it accepts a valid python regular expression. 
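For example, ``netapp_api_trace_pattern`` could be set in the NetApp backend section of ``manila.conf`` as sketched below; both the section name and the regular expression are illustrative only.

.. code-block:: ini

   [netapp_backend]
   # Only log backend API calls whose names contain 'volume' (illustrative).
   netapp_api_trace_pattern = ^.*volume.*$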
manila-10.0.0/releasenotes/notes/huawei-driver-sectorsize-config-da776132ba6da2a7.yaml0000664000175000017500000000036413656750227030272 0ustar zuulzuul00000000000000--- features: - Huawei driver supports setting the backend 'sectorsize' while creating shares and administrators can use this capability via the share types extra-spec 'huawei_sectorsize:sectorsize' or via the XML configuration file. manila-10.0.0/releasenotes/notes/bug_1564623_change-e286060a27b02f64.yaml0000664000175000017500000000022613656750227024637 0ustar zuulzuul00000000000000--- fixes: - For a delete snapshot request, if backend reports that snapshot is busy then the state of snapshot is changed to 'error_deleting'. manila-10.0.0/releasenotes/notes/share-mount-snapshots-b52bf3433d1e7afb.yaml0000664000175000017500000000036313656750227026426 0ustar zuulzuul00000000000000--- features: - Added mountable snapshots feature to manila. Access can now be allowed and denied to snapshots of shares created with a share type that supports this feature. - Added mountable snapshots support to the LVM driver. manila-10.0.0/releasenotes/notes/cephfs-native-add-readonly-shares-support-067ccab0217ab5f5.yaml0000664000175000017500000000012013656750227032140 0ustar zuulzuul00000000000000--- features: - For cephfs_native driver, added read-only shares support. manila-10.0.0/releasenotes/notes/netapp-ipv6-support-f448e99a7c112362.yaml0000664000175000017500000000014413656750227025543 0ustar zuulzuul00000000000000--- features: Added support for IPv6 export location and access rules to the NetApp driver. manila-10.0.0/releasenotes/notes/bug-1734127-a239d022bef4a002.yaml0000664000175000017500000000015213656750227023366 0ustar zuulzuul00000000000000--- fixes: - Fixed logic in driver base class that determines whether IPv6 is supported at runtime. ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/add-create_share_from_snapshot_support-extra-spec-9b1c3ad6796dd07d.yamlmanila-10.0.0/releasenotes/notes/add-create_share_from_snapshot_support-extra-spec-9b1c3ad6796dd07d.0000664000175000017500000000043113656750227033175 0ustar zuulzuul00000000000000--- features: - Added optional create_share_from_snapshot_support extra spec, which was previously implied by the overloaded snapshot_support extra spec. upgrade: - The snapshot_support extra spec is now optional and has no default value set when creating share types. ././@LongLink0000000000000000000000000000017600000000000011221 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1699836-disallow-share-type-deletion-with-active-share-group-types-83809532d06ef0dd.yamlmanila-10.0.0/releasenotes/notes/bug-1699836-disallow-share-type-deletion-with-active-share-group-ty0000664000175000017500000000027213656750227033063 0ustar zuulzuul00000000000000--- fixes: - | Fixed `Launchpad bug 1699836 `_ by preventing share type deletion when there are share group types associated with them. manila-10.0.0/releasenotes/notes/netapp-cdot-quality-of-service-limits-c1fe8601d00cb5a8.yaml0000664000175000017500000000120413656750227031317 0ustar zuulzuul00000000000000--- features: - The NetApp driver now supports Quality of Service extra specs. To create a share on ONTAP with qos support, set the 'qos' extra-spec in your share type to True and use one of 'netapp:maxiops' and 'netapp:maxbps' scoped extra-specs to set absolute limits. To set size based limits, use scoped extra-specs 'netapp:maxiopspergib' or 'netapp:maxbpspergib'. 
QoS policies on the back end are created exclusive to each manila share. upgrade: - A new configuration option 'netapp_qos_policy_group_name_template' has been added to allow overriding the naming of QoS policies created by the NetApp driver. ././@LongLink0000000000000000000000000000017200000000000011215 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1660321-fix-default-approach-for-share-group-snapshot-creation-3e843155c395e861.yamlmanila-10.0.0/releasenotes/notes/bug-1660321-fix-default-approach-for-share-group-snapshot-creation-0000664000175000017500000000045013656750227032763 0ustar zuulzuul00000000000000--- fixes: - Fixed default approach for creating share group snapshots that uses common share driver interface by making proper call of this method. Before, some drivers that were depending on some specific data from 'snapshot' object were failing not being able to get these data. manila-10.0.0/releasenotes/notes/inspur-support-rwx-for-cifs-permission-4279f1fe7a59fd00.yaml0000664000175000017500000000021213656750227031561 0ustar zuulzuul00000000000000--- fixes: - Fixed CIFS permission issue with Inspur AS13000 driver so that files and folders can be created and deleted correctly. manila-10.0.0/releasenotes/notes/graduate-share-groups-feature-5f751b49ccc62969.yaml0000664000175000017500000000101713656750227027613 0ustar zuulzuul00000000000000--- prelude: > - | Share group APIs have graduated from their `experimental feature state `_ from API version ``2.55``. Share group types can be created to encompass one or more share types, share groups can be created, updated, snapshotted and deleted, and shares can be created within share groups. These actions no longer require the inclusion of ``X-OpenStack-Manila-API-Experimental`` header in the API requests. manila-10.0.0/releasenotes/notes/add-update-host-command-to-manila-manage-b32ad5017b564c9e.yaml0000664000175000017500000000060113656750227031506 0ustar zuulzuul00000000000000--- features: - The ``manila-manage`` utility now has a new command to update the host attribute of shares. This is useful when the share manager process has been migrated to a different host, or if changes are made to the ``host`` config option or the backend section name in ``manila.conf``. Execute ``manila-manage share update_host -h`` to see usage instructions.manila-10.0.0/releasenotes/notes/bp-support-query-user-message-by-timestamp-c0a02b3b3e337e12.yaml0000664000175000017500000000024113656750227032246 0ustar zuulzuul00000000000000--- features: - User messages can be queried by timestamp with query keys ``created_since`` and ``created_before`` starting with API version ``2.52``. manila-10.0.0/releasenotes/notes/remove-intree-tempest-plugin-9fcf6edbeba47cba.yaml0000664000175000017500000000033113656750227030205 0ustar zuulzuul00000000000000--- other: - | Remove in-tree manila tempest plugin because it now lives in the new repo openstack/manila-tempest-plugin From now on changes to manila tempest tests should be made in this new repo. manila-10.0.0/releasenotes/notes/share-replication-81ecf4a32a5c83b6.yaml0000664000175000017500000000020613656750227025476 0ustar zuulzuul00000000000000--- features: - Shares can be replicated. 
Replicas can be added, listed, queried for detail, promoted to be 'active' or removed.manila-10.0.0/releasenotes/notes/fix-consistency-groups-api-dd9b5b99138e22eb.yaml0000664000175000017500000000046513656750227027320 0ustar zuulzuul00000000000000--- fixes: - Consistency Group APIs return share_server_id information correctly to administrators. - When using a consistency group snapshot to create another consistency group, share server and network information is persisted from the source consistency group to the new consistency group. ././@LongLink0000000000000000000000000000022000000000000011207 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1645746-fix-inheritance-of-access-rules-from-parent-share-by-zfsonlinux-child-shares-4f85908c8e9871ef.yamlmanila-10.0.0/releasenotes/notes/bug-1645746-fix-inheritance-of-access-rules-from-parent-share-by-zf0000664000175000017500000000034613656750227032673 0ustar zuulzuul00000000000000--- fixes: - Fix inheritance of access rules from parent share by ZFSonLinux child shares. It was inherited before, now it is not, as expected. Now, each share created from snapshot will not have inherited access rules. ././@LongLink0000000000000000000000000000021400000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1664201-fix-share-replica-status-update-concurrency-in-replica-promotion-feature-63b15d96106c65da.yamlmanila-10.0.0/releasenotes/notes/bug-1664201-fix-share-replica-status-update-concurrency-in-replica-0000664000175000017500000000064013656750227032762 0ustar zuulzuul00000000000000--- fixes: - Fixed share replica status update concurrency in share replica promotion feature. Before it was possible to see two active replicas, having 'dr' or 'readable' type of replication, performing 'share replica promotion' action. Now, replica that becomes active is always updated last, so, at some period of time we will have zero 'active' replicas at once instead of two of them. manila-10.0.0/releasenotes/notes/remove-nova-network-support-f5bcb8b2fcd38581.yaml0000664000175000017500000000031613656750227027615 0ustar zuulzuul00000000000000--- upgrade: - Removed support for ``nova_net_id`` in share_networks API and in the ShareNetwork DB model. Also removed the nova network plugins themselves and corresponding manila.conf options. manila-10.0.0/releasenotes/notes/unity-shrink-share-support-cc748daebfe8f562.yaml0000664000175000017500000000013013656750227027515 0ustar zuulzuul00000000000000--- features: - Shrink share support has been added for Dell EMC Unity Manila driver. manila-10.0.0/releasenotes/notes/unity-vnx-rename-options-1656168dd4bdba70.yaml0000664000175000017500000000131513656750227026726 0ustar zuulzuul00000000000000--- upgrade: - For Dell EMC Unity Manila driver, replaced emc_nas_pool_names with unity_share_data_pools, emc_nas_server_pool with unity_server_meta_pool, emc_interface_ports with unity_ethernet_ports, - For Dell EMC VNX Manila driver, replaced emc_nas_pool_names with vnx_share_data_pools, emc_interface_ports with vnx_ethernet_ports, emc_nas_server_container with vnx_server_container. deprecations: - For Dell EMC Unity Manila driver, options emc_nas_pool_names, emc_nas_server_pool, emc_interface_ports, emc_nas_server_container are deprecated. - For Dell EMC VNX Manila driver, options emc_nas_pool_names, emc_interface_ports, emc_nas_server_container are deprecated. 
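Referring back to the NetApp Quality of Service note above, those QoS extra specs are set on a share type with the ``manila type-key`` command; the type name and the limit value below are hypothetical examples.

.. code-block:: console

   $ manila type-create netapp_gold false
   $ manila type-key netapp_gold set qos=True netapp:maxiopspergib=128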
manila-10.0.0/releasenotes/notes/bug-1707066-deny-ipv6-access-in-error-bce379ee310060f6.yaml0000664000175000017500000000051513656750227030226 0ustar zuulzuul00000000000000--- fixes: - The access-allow API accepts ipv6 rules and ignores them if the configured backend does not support ipv6 access rules. However, when the access-deny API is invoked to remove such rules, they used to be stuck in "denying" state. This bug has been fixed and ipv6 access rules can be denied successfully. manila-10.0.0/releasenotes/notes/bug-1774604-qb-driver-b7e717cbc71d6189.yaml0000664000175000017500000000022613656750227025323 0ustar zuulzuul00000000000000--- fixes: - | Fixed a bug in the Quobyte driver that allowed share resizing to incorrectly address the share to be resized in the backend. manila-10.0.0/releasenotes/notes/bug-1859785-share-list-speed-6b09e7717624e037.yaml0000664000175000017500000000041013656750227026370 0ustar zuulzuul00000000000000--- fixes: - | Improved share list speed using lazy='subquery'. The sqlalchemy models of Share and Share Instance relationships previously had lazy='immediate'. This resulted in at least three extra queries when we queried for all share details. manila-10.0.0/releasenotes/notes/qnap-fix-share-and-snapshot-inconsistant-bd628c6e14eeab14.yaml0000664000175000017500000000023613656750227032100 0ustar zuulzuul00000000000000--- fixes: - | Fixed the QNAP driver so that the managed snapshot and the share which created from snapshot will not be inconsistent in some cases. manila-10.0.0/releasenotes/notes/container-manage-unmanage-share-servers-880d889828ee7ce3.yaml0000664000175000017500000000024213656750227031553 0ustar zuulzuul00000000000000--- features: - Added managing and unmanaging of share servers functionality to the Container Driver, allowing for shares to be managed and unmanaged. manila-10.0.0/releasenotes/notes/bug-1658133-fix-lvm-revert-34a90e70c9aa7354.yaml0000664000175000017500000000020413656750227026216 0ustar zuulzuul00000000000000--- fixes: - Fixed failure when reverting a share to a snapshot using the LVM driver while access rules exist for that share. manila-10.0.0/releasenotes/notes/bug-1684032-6e4502fdceb693dr7.yaml0000664000175000017500000000060313656750227023664 0ustar zuulzuul00000000000000--- fixes: - When upgrading manila to a new release, if a share driver has added support for storage pools in the new release, shares belonging to such drivers would be iteratively updated by manila. This is done by querying the back end driver for the storage pool that each share is hosted within. A bug affecting the database update in this path has been fixed. ././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000manila-10.0.0/releasenotes/notes/deprecate-old-ks-opts-in-nova-neutron-cinder-groups-e395015088d93fdc.yamlmanila-10.0.0/releasenotes/notes/deprecate-old-ks-opts-in-nova-neutron-cinder-groups-e395015088d93fd0000664000175000017500000000165513656750227032570 0ustar zuulzuul00000000000000--- fixes: - | `Launchpad bug 1809318 `_ has been fixed. The deprecated options ``api_insecure`` and ``ca_certificates_file`` from nova, cinder, neutron or DEFAULT configuration groups no longer override the newer ``insecure`` option if provided. Always use ``insecure`` and ``cafile`` to control SSL and validation since the deprecated options will be removed in a future release. 
deprecations: - | The options ``ca_certificates_file``, ``nova_ca_certificates_file``, ``cinder_ca_certificates_file``, ``api_insecure``, ``nova_api_insecure`` and ``cinder_api_insecure`` have been deprecated from the ``DEFAULT`` group as well as ``nova``, ``neutron`` and ``cinder`` configuration groups. Use ``cafile`` to specify the CA certificates and ``insecure`` to turn off SSL validation in these respective groups (nova, neutron and cinder). manila-10.0.0/releasenotes/notes/fix-ganesha-allow-access-for-all-ips-09773a79dc76ad44.yaml0000664000175000017500000000024413656750227030636 0ustar zuulzuul00000000000000--- fixes: - | Drivers using ganesha can now handle 'manila access-allow ip 0.0.0.0/0' as a way to allow access to the share from all IPs. manila-10.0.0/releasenotes/notes/ibm-gpfs-manage-support-c110120c350728e3.yaml0000664000175000017500000000045213656750227026213 0ustar zuulzuul00000000000000--- features: - Added manila manage/unmanage feature support for GPFS driver. The existing fileset should be an independent fileset and should not have any NFS export over the fileset path. With this prerequisite existing GPFS filesets can be brought under Manila management. ././@LongLink0000000000000000000000000000016100000000000011213 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1813054-remove-share-usage-size-audit-period-conf-opt-7331013d1cdb7b43.yamlmanila-10.0.0/releasenotes/notes/bug-1813054-remove-share-usage-size-audit-period-conf-opt-7331013d10000664000175000017500000000070113656750227032052 0ustar zuulzuul00000000000000--- deprecations: - | The configuration option ``share_usage_audit_period`` from the [DEFAULT] section has been deprecated. Specifying this option never had any effect on manila and so it will be removed in an upcoming release. This option should not be confused with ``share_usage_size_update_interval`` from the back end section, which can be used to gather usage size for some back ends that support that feature. manila-10.0.0/releasenotes/notes/nexenta-manila-drivers-cbd0b376a076ec50.yaml0000664000175000017500000000013113656750227026430 0ustar zuulzuul00000000000000features: - Added share backend drivers for NexentaStor4 and NexentaStor5 appliances. ././@LongLink0000000000000000000000000000020100000000000011206 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1661266-add-consistent-snapshot-support-attr-to-share-groups-DB-model-daa1d05129802796.yamlmanila-10.0.0/releasenotes/notes/bug-1661266-add-consistent-snapshot-support-attr-to-share-groups-DB0000664000175000017500000000024613656750227033020 0ustar zuulzuul00000000000000--- fixes: - Added 'consistent_snapshot_support' attribute to 'share_groups' DB model, to ease possible future backport of bugfixes for 'share groups' feature. manila-10.0.0/releasenotes/notes/bug-1736370-qnap-fix-access-rule-override-1b79b70ae48ad9e6.yaml0000664000175000017500000000017413656750227031252 0ustar zuulzuul00000000000000--- fixes: - | Fixed the QNAP driver that the access rule setting is overridden by the later access rule setting. manila-10.0.0/releasenotes/notes/vnx-ssl-verification-2d26a24e7e73bf81.yaml0000664000175000017500000000024313656750227026104 0ustar zuulzuul00000000000000upgrade: - Added ``emc_ssl_cert_verify`` and ``emc_ssl_cert_path`` options for VNX SSL verification. For more details, see OpenStack official documentation. 
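In line with the deprecation note above, SSL behaviour for the service clients should now be configured with ``cafile`` and ``insecure`` in the respective groups, for example (the certificate path is illustrative):

.. code-block:: ini

   [neutron]
   cafile = /etc/ssl/certs/ca-bundle.pem
   insecure = false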
manila-10.0.0/releasenotes/notes/hnas-manage-unmanage-snapshot-support-0d939e1764c9ebb9.yaml0000664000175000017500000000012113656750227031341 0ustar zuulzuul00000000000000--- features: - Added manage/unmanage snapshot support to Hitachi HNAS Driver. ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/share-revert-to-snapshot-in-netapp-cdot-driver-37f645ec3c14313c.yamlmanila-10.0.0/releasenotes/notes/share-revert-to-snapshot-in-netapp-cdot-driver-37f645ec3c14313c.yam0000664000175000017500000000054613656750227032554 0ustar zuulzuul00000000000000--- features: - Added support for share revert-to-snapshot to NetApp Data ONTAP drivers. upgrades: - If using existing share types with Data ONTAP, set the 'revert_to_snapshot_support' extra spec to allow creating shares that support in-place revert-to-snapshot functionality. This modification will not affect existing shares of such types. manila-10.0.0/releasenotes/notes/manage-unmanage-share-servers-cd4a6523d8e9fbdf.yaml0000664000175000017500000000210413656750227030047 0ustar zuulzuul00000000000000--- features: - Added APIs with default policy set to 'rule:admin_api' that allow managing and unmanaging share servers. Managing Share servers is useful for importing pre-existing shares and snapshots into Manila's management when the driver is configured in ``driver_handles_share_servers`` enabled mode. Unmanaging removes manila share servers from the database without removing them from the back end. Managed share servers, or share servers that have had one or more shares unmanaged will not be deleted automatically when they do not have any shares managed by Manila, even if the config options [DEFAULT]/delete_share_server_with_last_share or [DEFAULT]/automatic_share_server_cleanup have been set to True. - Updated Manage Share API to be able to manage shares in ``driver_handles_share_servers`` enabled driver mode by supplying the Share Server ID. - Updated Unmanage Share and Unmanage Snapshot APIs to allow unmanaging shares and snapshots in ``driver_handles_share_servers`` enabled driver mode. manila-10.0.0/releasenotes/notes/bug-1690159-retry-backend-init-58486ea420feaf51.yaml0000664000175000017500000000035213656750227027117 0ustar zuulzuul00000000000000--- fixes: - Retry to initialize the manila-share driver for every backend in case there was an error during initialization. That way even a temporary broken backend can be initialized later without restarting manila-share. manila-10.0.0/releasenotes/notes/bug-1773929-a5cb52c8417ec5fc.yaml0000664000175000017500000000021713656750227023571 0ustar zuulzuul00000000000000--- upgrade: - | The Quobyte driver now provides an option to adapt the export path to the Quobyte NFS services PSEUDO path setting. manila-10.0.0/releasenotes/notes/bug-1707946-nfs-helper-0-netmask-224da94b82056f93.yaml0000664000175000017500000000032013656750227027122 0ustar zuulzuul00000000000000--- fixes: - Fixed application of access rules with type ``ip`` and netmask length 0 in the ``NFSHelper`` plugin, affecting LVM and Generic drivers. Previously these rules silently failed to apply. 
././@LongLink0000000000000000000000000000016000000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1871999-dell-emc-vnx-powermax-wrong-export-locations-e9763631c621656f.yamlmanila-10.0.0/releasenotes/notes/bug-1871999-dell-emc-vnx-powermax-wrong-export-locations-e9763631c60000664000175000017500000000035013656750227032311 0ustar zuulzuul00000000000000--- fixes: - | Dell EMC VNX and PowerMax Drivers: Fixes `bug 1871999 `__ to make `create_share` and `create_share_from_snapshot` return correct list of export locations. ././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1853940-not-send-heartbeat-if-driver-not-initial-9c3cee39e8c725d1.yamlmanila-10.0.0/releasenotes/notes/bug-1853940-not-send-heartbeat-if-driver-not-initial-9c3cee39e8c7250000664000175000017500000000043213656750227032122 0ustar zuulzuul00000000000000--- fixes: - | `Launchpad bug 1853940 `_ has been fixed. When drivers are still initializing or when they fail to initialize, the share service will be reported as being "down" until the driver has been initialized. ././@LongLink0000000000000000000000000000015600000000000011217 Lustar 00000000000000manila-10.0.0/releasenotes/notes/add-create-share-from-snapshot-another-pool-or-backend-98d61fe753b85632.yamlmanila-10.0.0/releasenotes/notes/add-create-share-from-snapshot-another-pool-or-backend-98d61fe753b80000664000175000017500000000136613656750227032716 0ustar zuulzuul00000000000000--- features: - The scheduler was improved to select and weigh compatible back ends when creating shares from snapshots. This change only affects the existing behavior if the option ``use_scheduler_creating_share_from_snapshot`` is enabled. - A new share status `creating_from_snapshot` was added to inform the user that a share creation from snapshot is in progress and may take some time to be concluded. In order to quantify the share creation progress a new field called ``progress`` was added to shares and share instances information, to indicate the conclusion percentage of share create operation (0 to 100%). fixes: - The availability zone parameter is now being considered when creating shares from snapshots. manila-10.0.0/releasenotes/notes/fix-py3-netapp-a9815186ddc865d4.yaml0000664000175000017500000000027613656750227024540 0ustar zuulzuul00000000000000--- fixes: - | Fixed the size value not being present in share snapshot instances, which caused the NetApp driver to crash when creating a share from a snapshot using python3. ././@LongLink0000000000000000000000000000016400000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1804656-netapp-cdot-add-port-ids-to-share-server-backend-424ca11a1eb44826.yamlmanila-10.0.0/releasenotes/notes/bug-1804656-netapp-cdot-add-port-ids-to-share-server-backend-424ca10000664000175000017500000000046513656750227032257 0ustar zuulzuul00000000000000--- features: - | The Neutron Port IDs and IP addresses of the network allocation when using the NetApp cDOT driver with DHSS=true are made accessible for administrators at share server backend_details of newly created share servers. Those are corresponding to the NetApp lifs of a vserver. manila-10.0.0/releasenotes/notes/bug-667744-fix-c64071e6e5a098f7.yaml0000664000175000017500000000033213656750227024061 0ustar zuulzuul00000000000000fixes: - Launchpad bug `1822815 `_ has been fixed. The user no longer gets an error if the list command has no rows when executing `manila list --count True`. 
manila-10.0.0/releasenotes/notes/bug-1772026-nve-license-not-present-fix-e5d2e0d6c5df9227.yaml0000664000175000017500000000031713656750227030751 0ustar zuulzuul00000000000000fixes: - | Since the addition of NVE support, the Netapp driver used to fail to start when a VE license is not present on an ONTAP > 9.1. Now the driver starts but it reports NVE not supported.manila-10.0.0/releasenotes/notes/use-oslo-logging-for-config-options-388da64bb4ce45db.yaml0000664000175000017500000000015113656750227031062 0ustar zuulzuul00000000000000--- fixes: - Use Oslo's logging features to securely output the configuration options for Manila. manila-10.0.0/releasenotes/notes/vlan-enhancement-in-unity-driver-0f1d972f2f6d00d9.yaml0000664000175000017500000000075213656750227030306 0ustar zuulzuul00000000000000--- features: - Dell EMC Unity driver deprecated the option `emc_nas_server_container`. The driver will choose storage processor automatically to load balance the nas servers. - Dell EMC Unity driver is enhanced to use different tenant in Unity for each vlan. Thus the nas server in different vlan could have isolated IP address space. - Dell EMC Unity driver is enhanced to select the appropriate port on the system to create interfaces based on the network MTU. manila-10.0.0/releasenotes/notes/add_mtu_info_db-3c1d6dc02f40d5a6.yaml0000664000175000017500000000022713656750227025154 0ustar zuulzuul00000000000000--- features: - Store network MTU value into DB to make it possible for drivers with share server support to support different values than 1500. manila-10.0.0/releasenotes/notes/bug-1651587-deny-access-verify-563ef2f3f6b8c13b.yaml0000664000175000017500000000016613656750227027211 0ustar zuulzuul00000000000000--- fixes: - Fixed GPFS KNFS deny access so that it will not fail when the access can be verified to not exist. manila-10.0.0/releasenotes/notes/add-share-type-quotas-33a6b36c0f4c88b1.yaml0000664000175000017500000000025513656750227026130 0ustar zuulzuul00000000000000--- features: - Added possibility to set quotas per share type. It is useful for deployments with multiple backends that are accessible via different share types. manila-10.0.0/releasenotes/notes/bug-1714691-decimal-separators-in-locales-392c0c794c49c1c2.yaml0000664000175000017500000000032413656750227031142 0ustar zuulzuul00000000000000--- fixes: - Fixed issue where locales other than POSIX and en_US.UTF-8 might cause the translate_string_size_to_float method to fail on a comma decimal separator instead of a period decimal separator. manila-10.0.0/releasenotes/notes/delete_vlan_on_vserver_delete-a7acd145c0b8236d.yaml0000664000175000017500000000013313656750227030131 0ustar zuulzuul00000000000000--- features: - NetApp cMode driver - configured VLAN will be deleted on Vserver removal manila-10.0.0/releasenotes/notes/ganesha-dynamic-update-access-be80bd1cb785e733.yaml0000664000175000017500000000062313656750227027643 0ustar zuulzuul00000000000000--- features: - The new class `ganesha.GaneshaNASHelper2` in the ganesha library uses dynamic update of export feature of NFS-Ganesha versions v2.4 or newer to modify access rules of a share in a clean way. It modifies exports created per share rather than per share access rule (as with `ganesha.GaneshaNASHelper`) that introduced limitations and unintuitive end user experience. 
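The per-share-type quotas mentioned above can be managed through the existing quota APIs. A sketch, assuming the client exposes a ``--share-type`` argument on ``manila quota-update`` (check ``manila help quota-update`` for the exact syntax); the values are illustrative:

.. code-block:: console

   $ manila quota-update <project_id> --share-type my_type --shares 20 --gigabytes 1000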
manila-10.0.0/releasenotes/notes/manage-share-snapshot-in-huawei-driver-007b2c763fbdf480.yaml0000664000175000017500000000010313656750227031342 0ustar zuulzuul00000000000000--- features: - Manage share snapshot on array in huawei driver. ././@LongLink0000000000000000000000000000015500000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1785180-zfsonlinux-retry-unmounting-during-manage-872cf46313c5a4ff.yamlmanila-10.0.0/releasenotes/notes/bug-1785180-zfsonlinux-retry-unmounting-during-manage-872cf46313c5a0000664000175000017500000000032113656750227032356 0ustar zuulzuul00000000000000--- fixes: - | The ZFSOnLinux driver now retries unmounting zfs shares to perform the manage operation. See `Launchpad bug 1785180 `_ for details. ././@LongLink0000000000000000000000000000016300000000000011215 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1747695-fixed-ip-version-in-neutron-bind-network-plugin-526958e2d83df072.yamlmanila-10.0.0/releasenotes/notes/bug-1747695-fixed-ip-version-in-neutron-bind-network-plugin-526958e0000664000175000017500000000021413656750227032356 0ustar zuulzuul00000000000000--- fixes: - | Fixed multi segment neutron data save in NeutronBindNetworkPlugin to provide IP version for neutron port creation. manila-10.0.0/releasenotes/notes/clean-expired-messages-6161094d0c108aa7.yaml0000664000175000017500000000023513656750227026163 0ustar zuulzuul00000000000000--- features: - | Added a periodic task which cleans up expired user messages. Cleanup interval can be set by message_reap_interval config option. manila-10.0.0/releasenotes/notes/qnap-manila-driver-a30fe4011cb90801.yaml0000664000175000017500000000012113656750227025367 0ustar zuulzuul00000000000000--- features: - Added Manila share driver for QNAP ES series storage systems. ././@LongLink0000000000000000000000000000015500000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1872243-netapp-fix-vserver-peer-with-same-vserver-8bc65816f1764784.yamlmanila-10.0.0/releasenotes/notes/bug-1872243-netapp-fix-vserver-peer-with-same-vserver-8bc65816f17640000664000175000017500000000062613656750227032103 0ustar zuulzuul00000000000000--- fixes: - | NetApp cDOT driver is now fixed to not create peer relationship between same share servers when handling share replica creation and promotion. This issue was happening when operating in `driver_handles_share_servers` enabled mode with backends configured with more than one pool. See `Launchpad bug 1872243 `_ for more details. ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/netapp-create-share-from-snapshot-another-pool-330639b57aa5f04d.yamlmanila-10.0.0/releasenotes/notes/netapp-create-share-from-snapshot-another-pool-330639b57aa5f04d.yam0000664000175000017500000000046113656750227032605 0ustar zuulzuul00000000000000--- features: - | The NetApp driver now supports efficiently creating new shares from snapshots in pools or back ends different than that of the source share. In order to have this functionality working across different back ends, replication must be enabled and configured accordingly. 
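The cleanup interval for expired user messages referenced above is controlled by ``message_reap_interval`` in the ``[DEFAULT]`` section; the value below (one day, in seconds) is only an example.

.. code-block:: ini

   [DEFAULT]
   message_reap_interval = 86400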
././@LongLink0000000000000000000000000000017700000000000011222 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1811680-destroy-quotas-usages-reservations-when-deleting-share-type-a18f2e00a65fe922.yamlmanila-10.0.0/releasenotes/notes/bug-1811680-destroy-quotas-usages-reservations-when-deleting-share-0000664000175000017500000000043713656750227033160 0ustar zuulzuul00000000000000--- fixes: - | Share type quotas, usages and reservations will now be correctly cleaned up if a share type has been deleted. See `launchpad bug #1811680 `_ for details regarding the bug that prevented this cleanup prior. manila-10.0.0/releasenotes/notes/bug-1822099-fix-multisegment-mtu.yaml-ac2e31c084d8bbb6.yaml0000664000175000017500000000023013656750227030607 0ustar zuulzuul00000000000000--- fixes: - | Update share networks with MTU before creating network allocations so that the first allocation in a share network is correct. manila-10.0.0/releasenotes/notes/bp-admin-network-hnas-9b714736e521101e.yaml0000664000175000017500000000027113656750227025672 0ustar zuulzuul00000000000000--- features: - Added admin network support to Hitachi HNAS Driver upgrade: - Added a new config option to Hitachi HNAS Driver to allow configuration of Admin Network support. manila-10.0.0/releasenotes/notes/check-thin-provisioning-4bb702535f6b10b6.yaml0000664000175000017500000000047513656750227026466 0ustar zuulzuul00000000000000--- fixes: - Capacity filter and weigher scheduler logic was modified to account for back ends that can support thin and thick provisioning for shares. Over subscription calculation is triggered with the presence of the ``thin_provisioning`` extra-spec in the share type of the share being created. manila-10.0.0/releasenotes/notes/qnap-fix-manage-snapshot-not-exist-4b111982ddc5fdae.yaml0000664000175000017500000000016513656750227030706 0ustar zuulzuul00000000000000--- fixes: - | Fixed the QNAP driver so that the snapshot which does not exist in NAS will not be managed. ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1845452-unity--fix-fail-to-delete-cifs-share-c502a10ae306e506.yamlmanila-10.0.0/releasenotes/notes/bug-1845452-unity--fix-fail-to-delete-cifs-share-c502a10ae306e506.y0000664000175000017500000000015013656750227031440 0ustar zuulzuul00000000000000--- fixes: - Fixed an issue with Unity driver fails to delete CIFS share if wrong access was set. manila-10.0.0/releasenotes/notes/bug-1666541-quobyte-resize-list-param-bc5b9c42bdc94c9f.yaml0000664000175000017500000000013513656750227030702 0ustar zuulzuul00000000000000--- fixes: - Quobyte share extend/shrink operations now work with all Quobyte API versions manila-10.0.0/releasenotes/notes/bug-1772647-b98025c07553e35d.yaml0000664000175000017500000000015613656750227023272 0ustar zuulzuul00000000000000--- fixes: - Fix ensure_shares running every time despite not having any configuration option changed. manila-10.0.0/releasenotes/notes/bugfix-1771958-1771970-bcec841e7ae6b9f6.yaml0000664000175000017500000000031713656750227025230 0ustar zuulzuul00000000000000--- fixes: - | New shares created on a Quobyte backend are now initialized with the correct quota. - | fixes a bug causing incorrect quotas being set in the backend when resizing Quobyte shares. manila-10.0.0/releasenotes/notes/generic-route-racing-adf92d212f1ab4de.yaml0000664000175000017500000000014113656750227026225 0ustar zuulzuul00000000000000--- fixes: - Fixed race-condition in generic driver while updating network routes in host. 
manila-10.0.0/releasenotes/notes/bug-1657033-fix-share-metadata-error-when-deleting-share.yaml0000664000175000017500000000015313656750227031620 0ustar zuulzuul00000000000000--- fixes: - Fixed the error that share metadata records are not soft-deleted when deleting a share. manila-10.0.0/releasenotes/notes/inspur-instorage-driver-51d7a67f253f3ecd.yaml0000664000175000017500000000026313656750227026701 0ustar zuulzuul00000000000000--- prelude: > Add Inspur InStorage driver. features: - Add new Inspur InStorage driver, support share create, delete, extend, and access through NFS and CIFS protocol. manila-10.0.0/releasenotes/notes/fix_limit_formating_routes-1b0e1a475de6ac44.yaml0000664000175000017500000000020213656750227027504 0ustar zuulzuul00000000000000--- fixes: - Fixed routes.mapper.Mapper.resource adds a bunch of formatted routes that cannot accept something after a '.'. manila-10.0.0/releasenotes/notes/bug-1661271-hnas-snapshot-readonly-4e50183100ed2b19.yaml0000664000175000017500000000015713656750227027640 0ustar zuulzuul00000000000000--- fixes: - Fixed HNAS driver creating snapshots of NFS shares without first changing it to read-only. ././@LongLink0000000000000000000000000000016000000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1746202-fix-unicodeDecodeError-when-decode-API-input-4e4502fb50b69502.yamlmanila-10.0.0/releasenotes/notes/bug-1746202-fix-unicodeDecodeError-when-decode-API-input-4e4502fb500000664000175000017500000000025513656750227032006 0ustar zuulzuul00000000000000--- fixes: - This patch converts UnicodeDecodeError exception into BadRequest, plus an explicit error message. Fix invalid query parameter could lead to HTTP 500. manila-10.0.0/releasenotes/notes/use-tooz-heartbeat-c6aa7e15444e63c3.yaml0000664000175000017500000000044113656750227025530 0ustar zuulzuul00000000000000--- features: - Switched to tooz internal heartbeat feature in the coordination system. deprecations: - Deprecated 'coordination.heartbeat', 'coordination.initial_reconnect_backoff' and 'coordination.max_reconnect_backoff' configuration options which are not used anymore. ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000manila-10.0.0/releasenotes/notes/deprecate-service-instance-network-helper-option-82ff62a038f2bfa3.yamlmanila-10.0.0/releasenotes/notes/deprecate-service-instance-network-helper-option-82ff62a038f2bfa3.y0000664000175000017500000000027713656750227033052 0ustar zuulzuul00000000000000--- upgrade: - Deprecated the ``service_instance_network_helper_type`` option for removal. This option is no longer used for anything since nova networking is no longer supported. manila-10.0.0/releasenotes/notes/glusterfs-handle-new-volume-option-xml-schema-dad06253453c572c.yaml0000664000175000017500000000014613656750227032631 0ustar zuulzuul00000000000000--- fixes: - GlusterFS drivers now handle the volume option XML schema of GlusterFS >= 3.7.14. manila-10.0.0/releasenotes/notes/vnx-manila-ipv6-support-9ae986431549cc63.yaml0000664000175000017500000000007713656750227026343 0ustar zuulzuul00000000000000--- features: - IPv6 support for Dell EMC VNX Manila driver. manila-10.0.0/releasenotes/notes/bug-1665002-hnas-driver-version-f3a8f6bff3dbe054.yaml0000664000175000017500000000014713656750227027540 0ustar zuulzuul00000000000000--- fixes: - Fixed HNAS driver version according to the new content added in the Ocata release. 
manila-10.0.0/releasenotes/notes/qnap-enhance-support-53848fda525b7ea4.yaml0000664000175000017500000000025113656750227026066 0ustar zuulzuul00000000000000--- features: - | Added enhanced support to the QNAP Manila driver, including ``Thin Provisioning``, ``SSD Cache``, ``Deduplication`` and ``Compression``. ././@LongLink0000000000000000000000000000016100000000000011213 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1678524-check-snaprestore-license-for-snapshot-revert-6d32afdc5d0b2b51.yamlmanila-10.0.0/releasenotes/notes/bug-1678524-check-snaprestore-license-for-snapshot-revert-6d32afdc50000664000175000017500000000022313656750227032607 0ustar zuulzuul00000000000000--- fixes: - The NetApp ONTAP driver is now fixed to set revert_to_snapshot_support to True or False depending upon SnapRestore License. manila-10.0.0/releasenotes/notes/glusterfs-add-directory-layout-extend-shrink-fd2a008f152edbf5.yaml0000664000175000017500000000011713656750227033004 0ustar zuulzuul00000000000000--- features: - For glusterfs_nfs driver, added share extend/shrink support. manila-10.0.0/releasenotes/notes/share-network-with-multiple-subnets-a56be8b646b9e463.yaml0000664000175000017500000000206413656750227031102 0ustar zuulzuul00000000000000--- features: - Added APIs with default policy set to 'rule:default' that allow the creation of share networks with multiple subnets. This gives the users the ability to create multiple subnets in a share network for different availability zones. Also, users will be able to delete and show existing subnets. - Updated the share server API to make possible to manage share servers in a specific subnet when the driver is operating in ``driver_handles_share_servers`` enabled mode. - Share servers are now associated with a single share network subnet, which pertain to a share network. upgrade: - On upgrading to this release, all existing share networks will be updated to accommodate an availability zone assignment. Existing share networks will have their availability zone set to "empty" indicating that they are available across all storage availability zones known to manila. fixes: - A share network cannot be provided while creating a share replica. Replicas will inherit the share's share network if one exists. manila-10.0.0/releasenotes/notes/netapp_cdot_performance_utilization-aff1b498a159470e.yaml0000664000175000017500000000052213656750227031332 0ustar zuulzuul00000000000000--- features: - The NetApp cDOT drivers now include the cluster node utilization metrics for each pool reported to the manila scheduler. These values are designed to be included in the filter & goodness functions used by the scheduler, so the cDOT drivers now also report those functions to the scheduler for each pool. manila-10.0.0/releasenotes/notes/qnap-support-qes-200-639f3ad70687023d.yaml0000664000175000017500000000011313656750227025410 0ustar zuulzuul00000000000000--- features: - | QNAP Manila driver added support for QES fw 2.0.0. ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1698250-netapp-cdot-fix-share-server-deletion-494ab3ad1c0a97c0.yamlmanila-10.0.0/releasenotes/notes/bug-1698250-netapp-cdot-fix-share-server-deletion-494ab3ad1c0a97c0.0000664000175000017500000000023113656750227032005 0ustar zuulzuul00000000000000--- fixes: - The NetApp cDOT DHSS=True drivers have been fixed to not assume that share servers are only provisioned on segmented (VLAN) networks. 
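For the multiple-subnets feature described above, subnets are added to an existing share network per availability zone. A sketch, assuming the client command is ``manila share-network-subnet-create`` with the options shown (verify with ``manila help``); the network name, zone and UUIDs are placeholders:

.. code-block:: console

   $ manila share-network-subnet-create my_share_network \
       --availability-zone az1 \
       --neutron-net-id <neutron-net-uuid> \
       --neutron-subnet-id <neutron-subnet-uuid>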
manila-10.0.0/releasenotes/notes/fix_policy_file-4a382ac241c718c6.yaml0000664000175000017500000000011413656750227025063 0ustar zuulzuul00000000000000--- fixes: - Adapted policy.json file to correct snapshot policy values. manila-10.0.0/releasenotes/notes/rules-for-managed-share-f28a26ffc980f6fb.yaml0000664000175000017500000000032413656750227026576 0ustar zuulzuul00000000000000--- fixes: - Fixed HSP driver not supporting adding rules that exist in backend for managed shares. - Fixed HSP driver not supporting deleting share if it has rules in backend that are not in Manila. manila-10.0.0/releasenotes/notes/bug-1602525-port_binding_mandatory-2aaba0fa72b82676.yaml0000664000175000017500000000022613656750227030211 0ustar zuulzuul00000000000000--- fixes: - Raises an exception in case the host_id is specified when creating a neutron port but the port_binding extension is not activated. manila-10.0.0/releasenotes/notes/bug-1649782-fixed-incorrect-exportfs-exportfs.yaml0000664000175000017500000000016113656750227030015 0ustar zuulzuul00000000000000--- fixes: - Fixed incorrect exportfs command used while extending and shrinking shares on Generic driver. ././@LongLink0000000000000000000000000000016100000000000011213 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1660686-snapshot-export-locations-mount-not-supported-cdc2f5a3b57a9319.yamlmanila-10.0.0/releasenotes/notes/bug-1660686-snapshot-export-locations-mount-not-supported-cdc2f5a3b0000664000175000017500000000017113656750227033026 0ustar zuulzuul00000000000000--- fixes: - Fixed snapshot export locations being created for shares with property mount_snapshot_support=False. manila-10.0.0/releasenotes/notes/limiting-ssh-access-from-tenant-network-6519efd6d6895076.yaml0000664000175000017500000000024113656750227031463 0ustar zuulzuul00000000000000--- security: - | Service Instance Module - Added option to block port 22 from other subnets than manila service network using neutron security groups.manila-10.0.0/releasenotes/notes/add-tegile-driver-1859114513edb13e.yaml0000664000175000017500000000010013656750227025123 0ustar zuulzuul00000000000000--- features: - Added driver for Tegile IntelliFlash arrays. manila-10.0.0/releasenotes/notes/add-hsp-default-filter-function-0af60a819faabfec.yaml0000664000175000017500000000011313656750227030345 0ustar zuulzuul00000000000000--- fixes: - Added missing default filter function on Hitachi HSP driver.manila-10.0.0/releasenotes/notes/move-emc-share-driver-to-dell-emc-dir-1ec34dee0544270d.yaml0000664000175000017500000000035013656750227030755 0ustar zuulzuul00000000000000--- upgrade: - The EMCShareDriver is moved to the dell_emc directory. share_driver entry in manila.conf needs to be changed to manila.share.drivers.dell_emc.driver.EMCShareDriver. Vendor name is changed to "Dell EMC". manila-10.0.0/releasenotes/notes/fix-huawei-driver-cifs-mount-issue-2d7bff5a7e6e3ad6.yaml0000664000175000017500000000022213656750227031003 0ustar zuulzuul00000000000000--- fixes: - | Change the CIFS mounting parameter of Huawei driver from form "user=" to "username=", which is compatible in various OS. 
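Following the Dell EMC driver relocation note above, the backend section in ``manila.conf`` should reference the new module path, for example (the section name is illustrative):

.. code-block:: ini

   [dellemc_backend]
   share_driver = manila.share.drivers.dell_emc.driver.EMCShareDriver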
././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000manila-10.0.0/releasenotes/notes/Use-http_proxy_to_wsgi-instead-of-ssl-middleware-df533a2c2d9c3a61.yamlmanila-10.0.0/releasenotes/notes/Use-http_proxy_to_wsgi-instead-of-ssl-middleware-df533a2c2d9c3a61.y0000664000175000017500000000064213656750227033040 0ustar zuulzuul00000000000000--- security: - http_proxy_to_wsgi is taken into use instead of the deprecated ssl middleware. This makes it easier for deployers to have Manila running behind a proxy that terminates TLS connections. This middleware addition adds the enable_proxy_headers_parsing option to the oslo_middleware section which needs to be set in the configuration file in order to enable middleware to do its work. ././@LongLink0000000000000000000000000000016100000000000011213 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1717263-netapp-ontap-fix-size-for-share-from-snapshot-02385baa7e085f39.yamlmanila-10.0.0/releasenotes/notes/bug-1717263-netapp-ontap-fix-size-for-share-from-snapshot-02385baa70000664000175000017500000000021713656750227032275 0ustar zuulzuul00000000000000--- fixes: - The NetApp ONTAP driver has been fixed to honor the share size as requested when creating shares from an existing snapshot. ././@LongLink0000000000000000000000000000015600000000000011217 Lustar 00000000000000manila-10.0.0/releasenotes/notes/ganesha-store-exports-and-export-counter-in-ceph-rados-052b925f8ea460f4.yamlmanila-10.0.0/releasenotes/notes/ganesha-store-exports-and-export-counter-in-ceph-rados-052b925f8ea40000664000175000017500000000022513656750227033016 0ustar zuulzuul00000000000000--- features: - Added ganesha driver feature to store NFS-Ganesha's exports and export counter directly in a HA storage, Ceph's RADOS objects. manila-10.0.0/releasenotes/notes/bug-1859775-snapshot-over-quota-exception-bb6691612af03ddf.yaml0000664000175000017500000000024713656750227031445 0ustar zuulzuul00000000000000--- fixes: - | Fixed Quota exceeded exception for snapshot creation. Consumed gigabytes now reports the snapshot gigabytes instead of share gigabytes usage. manila-10.0.0/releasenotes/notes/bug-1716922-security-group-creation-failed-d46085d11370d918.yaml0000664000175000017500000000014313656750227031222 0ustar zuulzuul00000000000000--- fixes: - Fixed creation of security group and security group rule - neutronclient mappingmanila-10.0.0/releasenotes/notes/remove-root-helper-config-option-fd517b0603031afa.yaml0000664000175000017500000000040413656750227030272 0ustar zuulzuul00000000000000--- other: - | The "root_helper" configuration option from the [DEFAULT] section got removed. This option was not used anywhere in the codebase. Manila uses "sudo" together with "rootwrap" to allow unprivileged users running actions as root. ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1777126-netapp-skip-route-setup-if-no-gateway-e841635dcd20fd12.yamlmanila-10.0.0/releasenotes/notes/bug-1777126-netapp-skip-route-setup-if-no-gateway-e841635dcd20fd12.0000664000175000017500000000034113656750227031720 0ustar zuulzuul00000000000000--- fixes: - The NetApp driver has been fixed to not enforce route creation when the share network provided has no gateway. See `Launchpad bug 1777126 `_ for details. manila-10.0.0/releasenotes/notes/veritas-access-manila-driver-d75558c01ce6d428.yaml0000664000175000017500000000007213656750227027403 0ustar zuulzuul00000000000000--- features: - Added Manila driver for Veritas Access. 
manila-10.0.0/releasenotes/notes/inspur-as13000-driver-41f6b7caea82e46e.yaml0000664000175000017500000000026013656750227025750 0ustar zuulzuul00000000000000--- prelude: > Add Inspur AS13000 driver. features: - Added new Inspur AS13000 driver, which supports snapshots operation along with all the minimum driver features. manila-10.0.0/releasenotes/notes/add-access-key-to-share-access-map-2fda4c06a750e24e.yaml0000664000175000017500000000030013656750227030361 0ustar zuulzuul00000000000000--- features: - Driver may return ``access_key``, an access credential, for client identities granted share access. - Added ``access_key`` to the JSON response of ``access_list`` API. manila-10.0.0/releasenotes/notes/error-share-set-size-ff5d4f4ac2d56755.yaml0000664000175000017500000000027313656750227026077 0ustar zuulzuul00000000000000--- fixes: - Any errors that may occur during 'managing' a share into manila will result in the share's size being set to 1, aside from transitioning the status to 'manage_error'. manila-10.0.0/releasenotes/notes/extra_specs_case_insensitive-e9d4ca10d94f2307.yaml0000664000175000017500000000022513656750227027742 0ustar zuulzuul00000000000000--- upgrade: - The values of share type extra-specs will be considered case insensitive for comparison in the scheduler's capabilities filter. manila-10.0.0/releasenotes/notes/bp-share-type-supported-azs-2e12ed406f181b3b.yaml0000664000175000017500000000105613656750227027276 0ustar zuulzuul00000000000000--- features: - | A new common user-visible share types extra-spec called "availability_zones" has been introduced. When using API version 2.48, user requests to create new shares in a specific availability zone will be validated against the configured availability zones of the share type. Similarly, users requests to create share groups and share replicas are validated against the share type ``availability_zones`` extra-spec when present. Users can also filter share types by one or more AZs that are supported by them.././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1801763-gate-public-share-creation-by-policy-a0ad84e4127a3fc3.yamlmanila-10.0.0/releasenotes/notes/bug-1801763-gate-public-share-creation-by-policy-a0ad84e4127a3fc3.y0000664000175000017500000000216713656750227032000 0ustar zuulzuul00000000000000--- features: - | New API policies (share:create_public_share and share:set_public_share) have been introduced for the "create" (POST /shares) and "update" (PUT /shares) APIs to validate requests to create publicly visible shares. deprecations: - | The API policies to create publicly visible shares (share:create_public_share) or modify existing shares to become publicly visible (share:set_public_share) have their default value changed to rule:admin_api. This means that these APIs (POST /shares and PUT /shares) will allow the 'is_public' parameter to be set to True in the request body if the requester's role is set to an Administrator role. These policies will allow their previous default behavior in the Stein release (8.0.0) (i.e., any user can create publicly visible shares and even non-privileged users within a project can update their shares to become publicly visible). 
If the previous default behavior is always desired, deployers *must* explicitly set "share:create_public_share" and "share:set_public_share" to "rule:default" in their policy.json file.manila-10.0.0/releasenotes/notes/fix-huawei-driver-qos-deletion-9ad62db3d7415980.yaml0000664000175000017500000000016313656750227027677 0ustar zuulzuul00000000000000--- fixes: - Fixed qos deletion failing in huawei driver when qos status is 'idle' by deactivating it first. manila-10.0.0/releasenotes/notes/bug-1745436-78c46f8a0c96cbca.yaml0000664000175000017500000000062413656750227023572 0ustar zuulzuul00000000000000--- fixes: - Improved responsiveness of Host-assisted share migration by changing the waiting function of resource waiters. upgrade: - Added config option 'data_node_access_ips' that accepts a list of IP addresses. Those IPs can be either IPv4 or IPv6. deprecations: - Config option 'data_node_access_ip' has been deprecated in favor of 'data_node_access_ips', and marked for removal. manila-10.0.0/releasenotes/notes/enhance-ensure-share-58fc14ffc099f481.yaml0000664000175000017500000000021713656750227026031 0ustar zuulzuul00000000000000--- fixes: - Adds the ability to solve the potential problem of slow start up, and deal with non-user-initiated state changes to shares. manila-10.0.0/releasenotes/notes/cephfs-native-fix-evict-c45fd2de8f520757.yaml0000664000175000017500000000013113656750227026446 0ustar zuulzuul00000000000000--- fixes: - In cephfs_native driver, fixed client eviction call during access denial. manila-10.0.0/releasenotes/notes/driver-filter-91e2c60c9d1a48dd.yaml0000664000175000017500000000066313656750227024655 0ustar zuulzuul00000000000000--- features: - Add DriverFilter and GoodnessWeigher to manila's scheduler. These can use two new properties provided by backends, 'filter_function' and 'goodness_function', which can be used to filter and weigh qualified backends, respectively. upgrade: - To add DriverFilter and GoodnessWeigher to an active deployment, their references must be added to the filters and weighers sections on entry_points.txt. manila-10.0.0/releasenotes/notes/zfssa-driver-add-share-manage-unmanage-9bd6d2e25cc86c35.yaml0000664000175000017500000000030213656750227031357 0ustar zuulzuul00000000000000--- features: - Oracle ZFSSA driver now supports share manage/unmanage feature, where a ZFSSA share can be brought under Manila's management, or can be released from Manila's management. ././@LongLink0000000000000000000000000000016200000000000011214 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1707084-netapp-manila-driver-to-honour-std-extra-specs-d32fae4e9411b503.yamlmanila-10.0.0/releasenotes/notes/bug-1707084-netapp-manila-driver-to-honour-std-extra-specs-d32fae4e0000664000175000017500000000020013656750227032515 0ustar zuulzuul00000000000000--- fixes: - The NetApp cDOT driver is now fixed to honour the standard extra_specs during migration and manage/unmanage. ././@LongLink0000000000000000000000000000017000000000000011213 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1607029-fix-share-server-deletion-when-interfaces-dont-exist-4d00fe9dafadc252.yamlmanila-10.0.0/releasenotes/notes/bug-1607029-fix-share-server-deletion-when-interfaces-dont-exist-4d0000664000175000017500000000017513656750227032714 0ustar zuulzuul00000000000000--- fixes: - Fixed issue with NetApp cDOT share server cleanup when LIF creation fails while setting up a new vServer. 
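The bug-1801763 note above states that deployers who want the previous default behavior must set the two public-share policies to ``rule:default`` in their policy.json file. A minimal sketch of such an override file, assuming no other policies are being overridden:

.. code-block:: json

   {
       "share:create_public_share": "rule:default",
       "share:set_public_share": "rule:default"
   }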
manila-10.0.0/releasenotes/notes/add-manage-db-purge-b32a24ee045d8d45.yaml0000664000175000017500000000021613656750227025464 0ustar zuulzuul00000000000000--- features: - Add ``purge`` sub command to the ``manila-manage db`` command for administrators to be able to purge soft-deleted rows. ././@LongLink0000000000000000000000000000017300000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1767430-access-control-raise-ip-address-conflict-on-host-routes-0c298125fee4a640.yamlmanila-10.0.0/releasenotes/notes/bug-1767430-access-control-raise-ip-address-conflict-on-host-routes0000664000175000017500000000041213656750227033011 0ustar zuulzuul00000000000000--- fixes: - | The access-allow API has now been fixed to validate duplicate IP addresses by different notation styles. For example, if a host with IP 172.16.21.24 already has access to an NFS share, access cannot be requested for 172.16.21.24/32. manila-10.0.0/releasenotes/notes/vmax-manila-support-7c655fc094c09367.yaml0000664000175000017500000000005613656750227025613 0ustar zuulzuul00000000000000--- features: - Support for VMAX in Manila. manila-10.0.0/releasenotes/notes/hnas-driver-rename-7ef74fe720f7e04b.yaml0000664000175000017500000000165413656750227025600 0ustar zuulzuul00000000000000--- features: - Renamed all HDS mentions on HNAS driver to Hitachi and moved driver to another folder. upgrade: - HNAS driver vendor changed from HDS to Hitachi. - New HNAS driver location. - New HNAS config options hitachi_hnas_ip, hitachi_hnas_user, hitachi_hnas_password, hitachi_hnas_evs_id, hitachi_hnas_evs_ip, hitachi_hnas_file_system_name, hitachi_hnas_ssh_private_key, hitachi_hnas_cluster_admin_ip0, hitachi_hnas_stalled_job_timeout, hitachi_hnas_driver_helper and hitachi_hnas_allow_cifs_snapshot_while_mounted. deprecations: - HNAS driver location was deprecated. - All HNAS driver config options were deprecated hds_hnas_ip, hds_hnas_user, hds_hnas_password, hds_hnas_evs_id, hds_hnas_evs_ip, hds_hnas_file_system_name, hds_hnas_ssh_private_key, hds_hnas_cluster_admin_ip0, hds_hnas_stalled_job_timeout, hds_hnas_driver_helper and hds_hnas_allow_cifs_snapshot_while_mounted. manila-10.0.0/releasenotes/notes/infinidat-balance-network-spaces-ips-25a9f1e587b87156.yaml0000664000175000017500000000024413656750227030754 0ustar zuulzuul00000000000000--- features: - The INFINIDAT share driver now supports multiple export locations per share, defined by the enabled IP addresses in the chosen network space. manila-10.0.0/releasenotes/notes/gpfs-nfs-server-type-default-value-change-58890adba373737c.yaml0000664000175000017500000000030113656750227031720 0ustar zuulzuul00000000000000--- other: - | Changing the default value of 'gpfs_nfs_server_type' configuration parameter from KNFS to CES as Spectrum Scale provide NFS service with Ganesha server by default. manila-10.0.0/releasenotes/notes/config-for-cephfs-volume-prefix-67f2513f603cb614.yaml0000664000175000017500000000017613656750227027752 0ustar zuulzuul00000000000000--- features: - cephfs volume path prefix is now configurable in order to enable support for multiple cephfs back ends. 
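The gpfs-nfs-server-type note above changes the default of ``gpfs_nfs_server_type`` from KNFS to CES. A hedged manila.conf sketch for operators who want to keep the pre-upgrade KNFS behavior; the backend section name ``[gpfs]`` is purely illustrative:

.. code-block:: ini

   [gpfs]
   # Pin the NFS server type to the previous default instead of CES.
   gpfs_nfs_server_type = KNFS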
././@LongLink0000000000000000000000000000020300000000000011210 Lustar 00000000000000manila-10.0.0/releasenotes/notes/added-possibility-to-run-manila-api-with-web-servers-that-support-wsgi-apps-cfffe0b789f8670a.yamlmanila-10.0.0/releasenotes/notes/added-possibility-to-run-manila-api-with-web-servers-that-support-w0000664000175000017500000000045613656750227033631 0ustar zuulzuul00000000000000--- features: - The Manila API service can now be run using web servers that support WSGI applications. upgrade: - The deprecated path 'manila.api.openstack:FaultWrapper' to 'FaultWrapper' was removed and now only the current path is available, which is 'manila.api.middleware.fault:FaultWrapper'. manila-10.0.0/releasenotes/notes/bug-1650043-gpfs-access-bugs-8c10f26ff1f795f4.yaml0000664000175000017500000000037513656750227026562 0ustar zuulzuul00000000000000--- fixes: - Fixed GPFS CES to allow adding a first access rule to a share. - Fixed GPFS CES to allow deleting a share with no access rules. - Fixed GPFS CES to allow deletion of a failed access rule when there are no successful access rules. manila-10.0.0/releasenotes/notes/add-gathering-usage-size-8454sd45deopb14e.yaml0000664000175000017500000000052413656750227026774 0ustar zuulzuul00000000000000--- features: - Added a periodic task to gather share usage size. upgrade: - Added the enable_gathering_share_usage_size and share_usage_size_update_interval options to the manila.conf file to enable gathering of share usage size and to configure the gathering interval. manila-10.0.0/releasenotes/notes/fixing-driver-filter-14022294c8c04d2d.yaml0000664000175000017500000000057613656750227025706 0ustar zuulzuul00000000000000--- fixes: - | Fixed the driver filter to not check for hard equality between the share_backend_name and the name reported by the host, as that defeats the purpose of the capabilities filter, which gives the ability to use the "" selection operator in the extra-spec. Refer to `Launchpad bug 1815700 `_ for more details. ././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000manila-10.0.0/releasenotes/notes/netapp-cdot-optimized-migration-within-share-server-92cfa1bcf0c317fc.yamlmanila-10.0.0/releasenotes/notes/netapp-cdot-optimized-migration-within-share-server-92cfa1bcf0c317f0000664000175000017500000000032113656750227033225 0ustar zuulzuul00000000000000--- features: - Driver assisted migration support has been added to the NetApp cDOT driver to efficiently and nondisruptively migrate shares within Vservers while preserving data, snapshots and metadata. manila-10.0.0/releasenotes/notes/manila-status-upgrade-check-framework-aef9b5cf9d8e3bda.yaml0000664000175000017500000000072313656750227031662 0ustar zuulzuul00000000000000--- prelude: > Added new tool ``manila-status upgrade check``. features: - | A new framework for the ``manila-status upgrade check`` command has been added. This framework allows adding various checks which can be run before a Manila upgrade to determine whether the upgrade can be performed safely. upgrade: - | Operators can now use the new CLI tool ``manila-status upgrade check`` to check if a Manila deployment can be safely upgraded from N-1 to N release. manila-10.0.0/releasenotes/notes/user-messages-api-589ee7d68ccba70c.yaml0000664000175000017500000000050113656750227025514 0ustar zuulzuul00000000000000--- features: - Added new user messages API - GET /messages, GET /messages/ and DELETE /messages/. - Added sorting, filtering and pagination to the user messages listing.
- Added 'message_ttl' configuration option which can be used for configuring message expiration time. manila-10.0.0/releasenotes/notes/hnas_allow_managed_fix-4ec7794e2035d3f2.yaml0000664000175000017500000000046113656750227026411 0ustar zuulzuul00000000000000--- fixes: - Fixed error when allowing access to a managed share in HDS HNAS driver. - Fixed error when attempting to create a new share from a snapshot taken from a managed share in HDS HNAS driver. - Fixed ID inconsistencies in log when handling managed shares in HDS HNAS driver. manila-10.0.0/releasenotes/notes/bug-1626523-migration-rw-access-fix-7da3365c7b5b90a1.yaml0000664000175000017500000000015713656750227030053 0ustar zuulzuul00000000000000--- fixes: - Fixed share remaining with read/write access rules during a host-assisted share migration. manila-10.0.0/releasenotes/notes/netapp-cdot-ss-multiple-dns-ip-df42a217977ce44d.yaml0000664000175000017500000000033613656750227027704 0ustar zuulzuul00000000000000--- features: - | The NetApp ONTAP driver security service dns_ip parameter also takes a list of comma separated DNS IPs for vserver dns configuration. Allows HA setup, where DNS can be down for maintenance. manila-10.0.0/releasenotes/notes/fixed-netapp-cdot-autosupport-3fabd8ac2e407f70.yaml0000664000175000017500000000015113656750227030060 0ustar zuulzuul00000000000000--- fixes: - The NetApp cDOT driver's autosupport reporting now works on Python 2.7.12 and later. manila-10.0.0/releasenotes/notes/bug-1845135-fix-Unity-cannot-use-mgmt-ipv6-9407710a3fc7f4aa.yaml0000664000175000017500000000017613656750227031207 0ustar zuulzuul00000000000000--- fixes: - | Fixed an issue with the Dell EMC Unity driver to work with a management IP configured in IPv6 format.manila-10.0.0/releasenotes/notes/cephfs-add-nfs-protocol-support-44764094c9d784d8.yaml0000664000175000017500000000011213656750227027744 0ustar zuulzuul00000000000000--- features: - Added NFS protocol support for shares backed by CephFS. manila-10.0.0/releasenotes/notes/bug-1665072-migration-success-fix-3da1e80fbab666de.yaml0000664000175000017500000000022713656750227030055 0ustar zuulzuul00000000000000--- fixes: - Fixed ``task_state`` field in the share model being set to ``migration_success`` before actually completing a share migration. ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1765420-netapp-fix-delete-share-for-vsadmins-b5dc9e0224cb3ba2.yamlmanila-10.0.0/releasenotes/notes/bug-1765420-netapp-fix-delete-share-for-vsadmins-b5dc9e0224cb3ba2.y0000664000175000017500000000025513656750227032065 0ustar zuulzuul00000000000000--- fixes: - The `Launchpad bug 1765420 `_ that affected the NetApp ONTAP driver during share deletion has been fixed. ././@LongLink0000000000000000000000000000015200000000000011213 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1700871-ontap-allow-extend-of-replicated-share-2c9709180d954308.yamlmanila-10.0.0/releasenotes/notes/bug-1700871-ontap-allow-extend-of-replicated-share-2c9709180d9543080000664000175000017500000000020413656750227031513 0ustar zuulzuul00000000000000--- fixes: - The NetApp ONTAP driver is now fixed to allow extension and shrinking of share replicas after they get promoted. 
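The add-gathering-usage-size and user-messages-api notes above introduce the ``enable_gathering_share_usage_size``, ``share_usage_size_update_interval`` and ``message_ttl`` options. A sketch of how they might be set in manila.conf; the values, and the assumption that the interval and TTL are expressed in seconds, are illustrative rather than recommended defaults:

.. code-block:: ini

   [DEFAULT]
   # Periodically gather per-share usage size (interval assumed in seconds).
   enable_gathering_share_usage_size = True
   share_usage_size_update_interval = 300
   # Expire user messages after this long (value assumed in seconds).
   message_ttl = 604800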
manila-10.0.0/releasenotes/notes/add-policy-in-code-c31a24ee045d8d21.yaml0000664000175000017500000000125213656750227025337 0ustar zuulzuul00000000000000--- features: - Default Role Based Access Control (RBAC) policies for all the Manila APIs have moved into code from the auxiliary ``policy.json`` file. upgrade: - Removed the default ``policy.json`` file. - Operators need not maintain the ``policy.json`` file if they were not overriding default manila policies. - If operators need to override certain RBAC policies, they can do so by creating a JSON formatted file named ``policy.json`` and populating it with the necessary overrides. This file must be placed in the config directory. The default RBAC policies are documented in the configuration reference alongside other sample configuration files.manila-10.0.0/releasenotes/notes/bug-1659023-netapp-cg-fix-56bb77b7bc61c3f5.yaml0000664000175000017500000000031213656750227026135 0ustar zuulzuul00000000000000--- fixes: - Re-enabled the consistent snapshot code in the NetApp driver, now compatible with the Manila Share Groups API instead of the deprecated and removed Manila Consistency Groups API. manila-10.0.0/releasenotes/notes/dell-emc-unity-use-user-capacity-322f8bbb7c536453.yaml0000664000175000017500000000035713656750227030127 0ustar zuulzuul00000000000000--- features: - The Dell EMC Unity driver now creates shares whose available space, rather than allocated space, matches the size specified by the user. - The Dell EMC Unity driver version is changed to 3.0.0 for the Pike release. manila-10.0.0/releasenotes/notes/netapp-cdot-switch-volume-efficiency-bd22733445d146f0.yaml0000664000175000017500000000043113656750227030765 0ustar zuulzuul00000000000000--- fixes: - | NetApp driver volume efficiency settings now behave consistently: volume modification (currently used by manage and migration), like volume creation, ensures that deduplication and compression settings are applied correctly. manila-10.0.0/releasenotes/notes/huawei-pool-disktype-support-0a52ba5d44da55f9.yaml0000664000175000017500000000032113656750227027655 0ustar zuulzuul00000000000000--- features: - Added support for reporting pool disk type in the Huawei driver via the `huawei_disk_type` extra-spec in the share type. Valid values for this extra-spec are 'ssd', 'sas', 'nl_sas' or 'mix'. manila-10.0.0/releasenotes/notes/add-user-id-echo-8f42db469b27ff14.yaml0000664000175000017500000000011513656750227025032 0ustar zuulzuul00000000000000--- features: - User ID is added to the JSON response of the /shares APIs. ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1777551-security-networks-api-all-tenants-fix-a061274afe15180d.yamlmanila-10.0.0/releasenotes/notes/bug-1777551-security-networks-api-all-tenants-fix-a061274afe15180d.0000664000175000017500000000047713656750227031750 0ustar zuulzuul00000000000000--- fixes: - | The ``all_tenants`` query parameter in the share networks API (GET /v2/{project_id}/share-networks) has been fixed to accept 'f', 'false', 'off', 'n', 'no', or '0'. Setting the flag to any of these values will retrieve share networks only from the requester's project namespace. manila-10.0.0/releasenotes/notes/lv-mounting-inside-containers-af8f84d1fab256d1.yaml0000664000175000017500000000010713656750227030044 0ustar zuulzuul00000000000000--- fixes: - Makes docker containers actually mount logical volumes.
manila-10.0.0/releasenotes/notes/bug-1815532-supply-request-id-in-all-apis-74419bc1b1feea1e.yaml0000664000175000017500000000017713656750227031271 0ustar zuulzuul00000000000000--- fixes: - APIs that were not returning a request ID ('x-compute-request-id') in the response headers have been fixed. manila-10.0.0/releasenotes/notes/3par-pool-support-fb43b368214c9eda.yaml0000664000175000017500000000050513656750227025417 0ustar zuulzuul00000000000000 --- features: - HPE 3PAR driver now supports configuring multiple pools per backend. upgrade: - HPE 3PAR driver no longer uses hpe3par_share_ip_address option in configuration. With pool support, configuration just requires hpe3par_fpg option or optionally supply share IP address(es) along with hpe3par_fpg.manila-10.0.0/releasenotes/notes/migration-share-type-98e3d3c4c6f47bd9.yaml0000664000175000017500000000012313656750227026156 0ustar zuulzuul00000000000000--- features: - Administrators can now change a share's type during a migration. manila-10.0.0/releasenotes/notes/remove-host-field-from-shares-and-replicas-a087f85bc4a4ba45.yaml0000664000175000017500000000111113656750227032176 0ustar zuulzuul00000000000000--- critical: - The "host" field is no longer returned in the JSON response of the /shares and /share-replicas APIs when these APIs are invoked with non-admin privileges. Applications that depend on this field must be updated as necessary. The value of this field is privileged information and the request context must specify administrator privileges when using these APIs for the "host" field to be present. The use of "host" as a filter key in the GET /shares API is controlled with the policy "list_by_host". This policy defaults to "rule:admin_api". manila-10.0.0/releasenotes/notes/revert-switch-to-use-glanceclient-bc462a5477d6b8cb.yaml0000664000175000017500000000026213656750227030542 0ustar zuulzuul00000000000000--- fixes: - | Change Id905d47600bda9923cebae617749c8286552ec94 is causing gate failures with the generic driver so we need to revert it for now and revisit after rc. ././@LongLink0000000000000000000000000000016200000000000011214 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1818081-fix-inferred-script-name-in-case-of-proxy-urls-e33466af856708b4.yamlmanila-10.0.0/releasenotes/notes/bug-1818081-fix-inferred-script-name-in-case-of-proxy-urls-e33466af0000664000175000017500000000047513656750227032267 0ustar zuulzuul00000000000000--- fixes: - | When manila API is run behind a proxy webserver, the API service was parsing the major API version requested incorrectly, leading to incorrect responses. This behavior has now been fixed. See `launchpad bug 1818081 `_ for more details. ././@LongLink0000000000000000000000000000016400000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1626249-reintroduce-per-share-instance-access-rule-state-7c08a91373b21557.yamlmanila-10.0.0/releasenotes/notes/bug-1626249-reintroduce-per-share-instance-access-rule-state-7c08a90000664000175000017500000000253213656750227032510 0ustar zuulzuul00000000000000--- features: - New micro-states ('applying', 'denying'), appear in the 'state' field of access rules list API. These transitional states signify the state of an access rule while its application or denial is being processed asynchronously. - Access rules can be added regardless of the 'access_rules_status' of the share or any of its replicas. fixes: - Fixed a bug with the share manager losing access rule updates when multiple access rules are added to a given share simultaneously. 
- Instead of all existing access rules transitioning to 'error' state when some error occurs while applying or denying access rules to a given share, only the rules that were in transitional statuses ('applying', 'denying') during an update will transition to 'error' state. This change is expected to aid in identifying any 'bad' rules that require a resolution by the user. - Share action APIs dealing with allowing and denying access to shares now perform the policy check for authorization to invoke those APIs as a preliminary step. - As before, when a share is replicated (or being migrated), all replicas (or migration instances) of the share must be in a valid state in order to allow or deny access to the share (where such actions are otherwise allowed). The check enforcing this in the API is fixed. manila-10.0.0/releasenotes/notes/bug-1705533-manage-api-error-message-fix-967b0d44c09b914a.yaml0000664000175000017500000000007413656750227030670 0ustar zuulzuul00000000000000--- fixes: - | Error message changed for manage API. manila-10.0.0/releasenotes/notes/bug_1844046-fix-image-not-found-629415d50cd6042a.yaml0000664000175000017500000000042613656750227027106 0ustar zuulzuul00000000000000--- fixes: - | The Generic driver has been fixed to invoke compute image retrieval by ID rather than list all images and implement a filter. This prevents failures in case there are a lot of images available and the image service returns a paginated response. manila-10.0.0/releasenotes/notes/add-share-access-metadata-4fda2c06e750e83c.yaml0000664000175000017500000000156613656750227026745 0ustar zuulzuul00000000000000--- features: - | Metadata can be added to share access rules as key=value pairs, and also introduced the GET /share-access-rules API with API version 2.45. The prior API to retrieve access rules of a given share, POST /shares/{share-id}/action {'access-list: null} has been removed in API version 2.45. upgrade: - | The API GET /share-access-rules?share_id={share-id} replaces POST /shares/{share-id}/action with body {'access_list': null} in API version 2.45. The new API supports access rule metadata and is expected to support sorting, filtering and pagination features along with newer fields to interact with access rules in future versions. The API request header 'X-OpenStack-Manila-API-Version' can be set to 2.44 to continue using the prior API to retrieve access rules, but no new features will be added to that API. manila-10.0.0/releasenotes/notes/add-count-info-in-share-21a6b36c0f4c87b9.yaml0000664000175000017500000000012613656750227026323 0ustar zuulzuul00000000000000--- features: - Added total count info in Manila's /shares and /shares/detail APIs. manila-10.0.0/releasenotes/notes/fix-huawei-exception-a09b73234ksd94kd.yaml0000664000175000017500000000011213656750227026163 0ustar zuulzuul00000000000000--- fixes: - Fix exception in update_access not found in Huawei driver. manila-10.0.0/releasenotes/notes/cephfs-set-mode-b7fb3ec51300c220.yaml0000664000175000017500000000046713656750227024760 0ustar zuulzuul00000000000000--- fixes: - | Shares backed by CephFS no longer have hard-coded mode 755. Use the ``cephfs_volume_mode`` configuration option to set another mode, such as 775 when using manila dynamic external storage provider with OpenShift. The default value remains 755 for backwards compatibility. 
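The cephfs-set-mode note above introduces the ``cephfs_volume_mode`` option. A minimal sketch for the 775 example given in the note; the backend section name is an assumption:

.. code-block:: ini

   [cephfs]
   # Octal mode applied to newly created CephFS volumes/shares; 755 is
   # still the default when this option is not set.
   cephfs_volume_mode = 775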
manila-10.0.0/releasenotes/notes/infinidat-delete-datasets-with-snapshots-4d18f8c197918606.yaml0000664000175000017500000000020713656750227031616 0ustar zuulzuul00000000000000--- fixes: - Fixed a failure in the INFINIDAT share driver that occurred when deleting shares with externally created snapshots. manila-10.0.0/releasenotes/notes/bug-1640169-check-ceph-connection-on-setup-c92bde41ced43326.yaml0000664000175000017500000000016613656750227031373 0ustar zuulzuul00000000000000--- fixes: - Added a check on driver startup for CEPHFS back ends to verify whether the back end is accessible. manila-10.0.0/releasenotes/notes/bug-1816420-validate-access-type-for-ganehas-c42ce6f859fa0c8c.yaml0000664000175000017500000000055113656750227031760 0ustar zuulzuul00000000000000--- fixes: - | Access rule type for shares served via nfs-ganesha is now validated, fixing `launchpad bug #1816420 `_ where ``cephx`` access type was allowed though only ``ip`` access type is effective. This fix also validates ``access_level`` to ensure that it is set to ``RW`` or ``RO``. manila-10.0.0/releasenotes/notes/migration-empty-files-01d1a3caa2e9705e.yaml0000664000175000017500000000014313656750227026275 0ustar zuulzuul00000000000000--- fixes: - Fixed share migration error using Data Service when there are only empty files. ././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1638994-drop-fake-cg-support-from-generic-driver-16efce98f94b1b6b.yamlmanila-10.0.0/releasenotes/notes/bug-1638994-drop-fake-cg-support-from-generic-driver-16efce98f94b1b0000664000175000017500000000021613656750227032230 0ustar zuulzuul00000000000000--- other: - Removed fake Consistency Group support from the Generic driver. It was added only for testing purposes and is now redundant. manila-10.0.0/releasenotes/notes/bug-1794402-fix-share-stats-container-driver-b3cb1fa2987ad4b1.yaml0000664000175000017500000000032713656750227032042 0ustar zuulzuul00000000000000--- fixes: - | Pool stats collection has been fixed in the container driver to reflect the differences in formatting of information for the underlying volume groups across different operating systems. ././@LongLink0000000000000000000000000000016100000000000011213 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1848889-netapp-fix-share-replica-update-check-failure-90aa964417e7734c.yamlmanila-10.0.0/releasenotes/notes/bug-1848889-netapp-fix-share-replica-update-check-failure-90aa964410000664000175000017500000000062413656750227032174 0ustar zuulzuul00000000000000--- fixes: - | Fixed an issue in the NetApp driver's periodic share replica check that erroneously set a replica state to 'error'. In this routine, a SnapMirror resync operation was triggered while the replica data transfer was still in progress, which returned an error from the storage side. The driver now skips the resync operation whenever the SnapMirror relationship status is still in progress. manila-10.0.0/releasenotes/notes/bug-1845147-vnx-read-only-policy-75b0f414ea5ef471.yaml0000664000175000017500000000031313656750227027414 0ustar zuulzuul00000000000000--- fixes: - | Manila VNX fix ensuring that hosts that are given access to a share, e.g. read only, will always precede '-0.0.0.0/0.0.0.0'. Any host after this string will be denied access. manila-10.0.0/releasenotes/notes/hitachi-driver-cifs-user-support-3f1a8b894fe3e9bb.yaml0000664000175000017500000000046113656750227030477 0ustar zuulzuul00000000000000--- prelude: > Add support for CIFS protocol in Manila HNAS driver.
features: - Added support for CIFS shares in the Hitachi HNAS driver. It supports the user access type, where permissions for a user or a group can be added and removed, and accepts 'read write' and 'read only' as access levels. manila-10.0.0/releasenotes/notes/remove-confusing-deprecation-warnings-a17c20d8973ef2bb.yaml0000664000175000017500000000047413656750227031504 0ustar zuulzuul00000000000000--- fixes: - | Removed confusing manila.db.sqlalchemy model messages indicating deprecated properties for ``share_type``, ``host``, ``share_server_id``, ``share_network_id``, ``available_zone``. These are exposed in the API as properties of shares and are not in fact actually deprecated as such. ././@LongLink0000000000000000000000000000016600000000000011220 Lustar 00000000000000manila-10.0.0/releasenotes/notes/add-support-filter-search-for-share-type-fdbaaa9510cc59dd.yaml-5655800975cec5d4.yamlmanila-10.0.0/releasenotes/notes/add-support-filter-search-for-share-type-fdbaaa9510cc59dd.yaml-56550000664000175000017500000000011013656750227032531 0ustar zuulzuul00000000000000--- features: - Share types can now be filtered by their extra_specs. manila-10.0.0/releasenotes/notes/windows-smb-fix-default-access-d4b9eee899e400a0.yaml0000664000175000017500000000022413656750227030014 0ustar zuulzuul00000000000000--- security: - Ensure we don't grant read access to 'Everyone' by default when creating CIFS shares and the Windows SMB backend is used. ././@LongLink0000000000000000000000000000017200000000000011215 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1721787-fix-getting-share-networks-and-security-services-error-7e5e7981fcbf2b53.yamlmanila-10.0.0/releasenotes/notes/bug-1721787-fix-getting-share-networks-and-security-services-error-0000775000175000017500000000047013656750227033102 0ustar zuulzuul00000000000000--- fixes: - Non-admin users may invoke the GET /share-networks and GET /security-services APIs with the 'all-tenants' flag in the query; however, the flag is ignored, and only resources belonging to the project will be served. This API change was made to fix bug 1721787 in the manila client project. manila-10.0.0/releasenotes/notes/add-export-locations-api-6fc6086c6a081faa.yaml0000664000175000017500000000036113656750227026677 0ustar zuulzuul00000000000000--- features: - Added APIs for listing export locations per share and share instances. deprecations: - Removed 'export_location' and 'export_locations' attributes from share and share instance views starting with microversion '2.9'. manila-10.0.0/releasenotes/notes/share-revert-to-snapshot-3d028fa00620651e.yaml0000664000175000017500000000021613656750227026516 0ustar zuulzuul00000000000000--- features: - Added revert-to-snapshot feature for regular and replicated shares. - Added revert-to-snapshot support to the LVM driver. manila-10.0.0/releasenotes/notes/switch-to-use-glanceclient-dde019b0b141caf8.yaml0000664000175000017500000000017513656750227027301 0ustar zuulzuul00000000000000--- fixes: - | Switched to using the glance client to retrieve the image list; the novaclient interface to the glance API is rarely maintained. manila-10.0.0/releasenotes/notes/bug-1634278-unmount-orig-active-after-promote-8e24c099ddc1e564.yaml0000664000175000017500000000022013656750227032121 0ustar zuulzuul00000000000000--- fixes: - The NetApp ONTAP driver is now fixed to unmount the original active share volume after one of its replicas gets promoted.
manila-10.0.0/releasenotes/notes/bug-1663300-554e9c78ca2ba992.yaml0000664000175000017500000000013113656750227023415 0ustar zuulzuul00000000000000fixes: - | The Windows driver issues regarding share creation have been fixed. manila-10.0.0/releasenotes/notes/add-like-filter-4c1d6dc02f40d5a5.yaml0000664000175000017500000000017613656750227025017 0ustar zuulzuul00000000000000--- features: - Added like filter support in ``shares``, ``snapshots``, ``share-networks``, ``share-groups`` list APIs. manila-10.0.0/releasenotes/notes/rename-cephfs-native-driver-3d9b4e3c6c78ee98.yaml0000664000175000017500000000100213656750227027401 0ustar zuulzuul00000000000000--- upgrade: - To use the CephFS driver, which enables CephFS access via the native Ceph protocol, set the `share_driver` in the driver section of the config file as `manila.share.drivers.cephfs.driver.CephFSDriver`. The previous `share_driver` setting in Mitaka/Newton/Ocata releases `manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver` would still work (usually until one more release, Queens, as part of standard deprecation process.), but it's usage is no longer preferred. manila-10.0.0/releasenotes/notes/bug-1613303-fix-config-generator-18b9f9be40d7eee6.yaml0000664000175000017500000000035313656750227027577 0ustar zuulzuul00000000000000--- fixes: - Fixed the generation of options in the correct option groups. Using the config generator (``tox -e genconfig``), [cinder], [nova] and [neutron] options are now generated in the right groups instead of [default]. manila-10.0.0/releasenotes/notes/bug-1749184-eb06929e76a14fce.yaml0000664000175000017500000000032313656750227023513 0ustar zuulzuul00000000000000--- fixes: - The database migration has been adjusted to work with mariadb >= 10.2.8 by ensuring that a primary key constraint is first dropped and re-added when a column is removed that is part of it manila-10.0.0/releasenotes/notes/remove-os-region-name-82e3cd4c7fb05ff4.yaml0000664000175000017500000000020313656750227026264 0ustar zuulzuul00000000000000--- other: - | The configuration option "os_region_name" from the [DEFAULT] group got removed. It was not used anywhere. manila-10.0.0/releasenotes/notes/qnap-tds-support-qes-24704313a0881c8c.yaml0000664000175000017500000000011613656750227025607 0ustar zuulzuul00000000000000--- features: - | QNAP Manila driver supports QES FW on TDS series NAS. manila-10.0.0/releasenotes/notes/bp-update-share-type-name-or-description-a39c5991b930932f.yaml0000664000175000017500000000025613656750227031504 0ustar zuulzuul00000000000000--- features: - The ``name``, ``description`` and/or ``share_type_access:is_public`` attributes of share types can be updated with API version ``2.50`` and beyond. manila-10.0.0/releasenotes/notes/unexpected-data-of-share-from-snap-134189fc0f3eeedf.yaml0000664000175000017500000000026413656750227030636 0ustar zuulzuul00000000000000--- fixes: - Fixed bug in Dell EMC Unity driver that caused shares created from snapshots to contain data from the original shares, instead of data from their snapshots. manila-10.0.0/releasenotes/notes/netapp-default-ipv6-route-13a9fd4959928524.yaml0000664000175000017500000000012413656750227026537 0ustar zuulzuul00000000000000--- features: Added support for IPv6 default gateways to the NetApp driver. 
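The rename-cephfs-native-driver note above gives the preferred ``share_driver`` path after the rename. A minimal backend-section sketch, where the section name is an illustrative assumption:

.. code-block:: ini

   [cephfsnative1]
   # Preferred driver path; the old cephfs_native path still works during
   # the deprecation period mentioned in the note.
   share_driver = manila.share.drivers.cephfs.driver.CephFSDriver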
././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1694768-fix-netapp-cdot-revert-to-snapshot-5e1be65260454988.yamlmanila-10.0.0/releasenotes/notes/bug-1694768-fix-netapp-cdot-revert-to-snapshot-5e1be65260454988.yam0000664000175000017500000000015313656750227031635 0ustar zuulzuul00000000000000--- fixes: - Fixed the NetApp ONTAP driver to handle reverting to replicated and migrated snapshots. manila-10.0.0/releasenotes/notes/bug-1674908-allow-user-access-fix-495b3e42bdc985ec.yaml0000664000175000017500000000024313656750227027632 0ustar zuulzuul00000000000000--- fixes: - Changed user access name limit from 32 to 255 characters since there are security services that allow user names longer than 32 characters. manila-10.0.0/releasenotes/notes/fix-hnas-mount-on-manage-snapshot-91e094c579ddf1a3.yaml0000664000175000017500000000014613656750227030374 0ustar zuulzuul00000000000000--- fixes: - Fixed Hitachi HNAS driver not checking export on backend when managing a snapshot. ././@LongLink0000000000000000000000000000014700000000000011217 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1591357-fix-cannot-remove-user-rule-for-NFS-8e1130e2accabd56.yamlmanila-10.0.0/releasenotes/notes/bug-1591357-fix-cannot-remove-user-rule-for-NFS-8e1130e2accabd56.ya0000664000175000017500000000015713656750227031707 0ustar zuulzuul00000000000000--- fixes: - The generic driver has been fixed to allow removing inappropriate CIFS rules on NFS shares. manila-10.0.0/releasenotes/notes/add-cast-rules-to-readonly-field-62ead37b728db654.yaml0000664000175000017500000000046413656750227030146 0ustar zuulzuul00000000000000--- features: - Improvements have been made to ensure read only rule semantics for shares and readable replicas. When invoked with administrative context, the share instance and share replica APIs will return ``cast_rules_to_readonly`` as an additional field in the detailed JSON response. ././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1850264-add-async-error-when-share-extend-error-a0c458204b395994.yamlmanila-10.0.0/releasenotes/notes/bug-1850264-add-async-error-when-share-extend-error-a0c458204b395990000664000175000017500000000015613656750227031622 0ustar zuulzuul00000000000000--- fixes: - | A new user message has been added in case of share extensions failing asynchronously.manila-10.0.0/releasenotes/notes/maprfs-manila-drivers-1541296f26cf78fd.yaml0000664000175000017500000000007313656750227026151 0ustar zuulzuul00000000000000--- features: - Added share backend drivers for MapR-FS. manila-10.0.0/releasenotes/notes/bug-1651578-gpfs-prepend-beb99f408cf20bb5.yaml0000664000175000017500000000040313656750227026153 0ustar zuulzuul00000000000000--- fixes: - Fixed GPFS KNFS generation of NFS server allow/deny commands when there are multiple servers in gpfs_nfs_server_list so that the remote ssh login prefix used for one server is not carried over to the commands for following servers. manila-10.0.0/releasenotes/notes/bug-1704971-fix-name-description-filter-85935582e89a72a0.yaml0000664000175000017500000000116613656750227030531 0ustar zuulzuul00000000000000--- fixes: - Fixed the ``exact`` filters (name, description) in the ``shares``, ``snapshots`` and ``share-networks`` list APIs, which could previously be matched by ``inexact`` values. The error occurred because the ``description`` filter was skipped in the ``shares`` and ``snapshots`` list APIs, so the ``inexact`` filter flag ('~') was stripped and the ``exact`` filters (name, description) were processed with the ``inexact`` filter logic. The ``description`` filter has now been added to the ``shares`` and ``snapshots`` list APIs, and the ``shares``, ``snapshots`` and ``share-networks`` list APIs first check whether a filter key ends with '~'. manila-10.0.0/releasenotes/notes/netapp-manage-unmanage-share-servers-635496b46e306920.yaml0000664000175000017500000000035413656750227030617 0ustar zuulzuul00000000000000--- features: - Added managing and unmanaging of share servers functionality to the NetApp driver, allowing for shares and snapshots to be managed and unmanaged when the driver mode ``driver_handles_share_servers`` is set to True. manila-10.0.0/releasenotes/notes/lvm-export-ips-5f73f30df94381d3.yaml0000664000175000017500000000115613656750227024653 0ustar zuulzuul00000000000000--- features: - Added new config option 'lvm_share_export_ips' which allows a list of IP addresses to use for export locations for the LVM driver. Every share created will be exported on every IP address. This new option supersedes 'lvm_share_export_ip'. upgrade: - After upgrading, rename lvm_share_export_ip to lvm_share_export_ips in the manila.conf file to avoid a deprecation warning. As long as the list remains a single element, functionality is unchanged. deprecations: - The 'lvm_share_export_ip' option is deprecated and will be removed. Use 'lvm_share_export_ips' instead. manila-10.0.0/releasenotes/notes/infinidat-add-infinibox-driver-ec652258e710d6a0.yaml0000664000175000017500000000012013656750227027670 0ustar zuulzuul00000000000000--- features: - Added a new driver for the INFINIDAT InfiniBox storage array. manila-10.0.0/releasenotes/notes/bug-1717392-fix-downgrade-share-access-map-bbd5fe9cc7002f2d.yaml0000664000175000017500000000042313656750227031503 0ustar zuulzuul00000000000000--- fixes: - The `Launchpad bug 1717392 `_ has been fixed and database downgrades do not fail if the database contains deleted access rules. Database downgrades are not recommended in production environments. manila-10.0.0/releasenotes/notes/vmax-rename-options-44d8123d14a23f94.yaml0000664000175000017500000000055613656750227025565 0ustar zuulzuul00000000000000--- upgrade: - For Dell EMC VMAX Manila driver, replaced emc_nas_pool_names with vmax_share_data_pools, emc_interface_ports with vmax_ethernet_ports, emc_nas_server_container with vmax_server_container. deprecations: - For Dell EMC VMAX Manila driver, options emc_nas_pool_names, emc_interface_ports, emc_nas_server_container are deprecated. manila-10.0.0/releasenotes/notes/drop-python2-support-e160ff36811a5964.yaml0000664000175000017500000000034413656750227025736 0ustar zuulzuul00000000000000--- upgrade: - | Python 2.7 support has been dropped. The last release of openstack/manila to support Python 2.7 is OpenStack Train (9.x). The minimum version of Python now supported by openstack/manila is Python 3.6. manila-10.0.0/releasenotes/notes/support-ipv6-in-drivers-and-network-plugins-1833121513edb13d.yaml0000664000175000017500000000051113656750227032212 0ustar zuulzuul00000000000000--- features: - Added optional extra specs 'ipv4_support' and 'ipv6_support' for share types. - Added new capabilities 'ipv4_support' and 'ipv6_support' for IP based drivers. - Added IPv6 support in network plugins. (support either IPv6 or IPv4) - Added IPv6 support in the lvm driver.
(support both IPv6 and IPv4) manila-10.0.0/releasenotes/notes/reset_tap_device_after_node_restart-0690a6beca077b95.yaml0000664000175000017500000000017413656750227031252 0ustar zuulzuul00000000000000--- fixes: - When using a driver_handles_share_servers driver, the tap device is reset after the manila-share service starts. manila-10.0.0/releasenotes/notes/qb-bug-1733807-581e71e6581de28e.yaml0000664000175000017500000000015513656750227023756 0ustar zuulzuul00000000000000--- fixes: - | The Quobyte driver now handles updated error codes from Quobyte API versions 1.4+. ././@LongLink0000000000000000000000000000014700000000000011217 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1661381-migration-snapshot-export-locations-169786dcec386402.yamlmanila-10.0.0/releasenotes/notes/bug-1661381-migration-snapshot-export-locations-169786dcec386402.ya0000664000175000017500000000013313656750227032071 0ustar zuulzuul00000000000000--- fixes: - Fixed error in driver-assisted share migration of mountable snapshots. manila-10.0.0/releasenotes/notes/remove-deprecated-default-options-00fed1238fb6dca0.yaml0000664000175000017500000000013613656750227030643 0ustar zuulzuul00000000000000--- deprecations: - | Removed deprecated cinder, neutron and nova options in the DEFAULT group. ././@LongLink0000000000000000000000000000016000000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1706137-netapp-manila-set-valid-qos-during-migration-4405fff02bd6fa83.yamlmanila-10.0.0/releasenotes/notes/bug-1706137-netapp-manila-set-valid-qos-during-migration-4405fff02b0000664000175000017500000000033613656750227032310 0ustar zuulzuul00000000000000--- fixes: - NetApp cDOT driver is now fixed to remove the QoS Policy on the backend volume when a share is migrated from an extra-spec which had QoS defined to another extra-spec which has no QoS defined in it. manila-10.0.0/releasenotes/notes/add-perodic-task-7454sd45deopb13e.yaml0000664000175000017500000000014613656750227025335 0ustar zuulzuul00000000000000--- features: - Added a periodic task to clean up expired reservations in the manila scheduler service. ././@LongLink0000000000000000000000000000016000000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1798219-fix-snapshot-creation-lvm-and-generic-driver-55e349e02e7fa370.yamlmanila-10.0.0/releasenotes/notes/bug-1798219-fix-snapshot-creation-lvm-and-generic-driver-55e349e02e0000664000175000017500000000046313656750227032250 0ustar zuulzuul00000000000000--- fixes: - | The generic and LVM drivers have been fixed to always perform a filesystem check on newly created snapshots and derivative shares before attempting to assign a UUID to them. See `Launchpad bug 1798219 `_ for more details. manila-10.0.0/releasenotes/notes/powermax-rebrand-manila-a46a0c2ac0aa77ed.yaml0000664000175000017500000000163613656750227026732 0ustar zuulzuul00000000000000--- features: - | Rebrand from VMAX to PowerMax includes changing of tag names, directory structure, file names and documentation. deprecations: - | The following have been deprecated but will remain until the V release: ``vmax_server_container`` is now ``powermax_server_container``, ``vmax_share_data_pools`` is now ``powermax_share_data_pools``, and ``vmax_ethernet_ports`` is now ``powermax_ethernet_ports``. upgrade: - | - ``emc_share_backend`` configuration option must be switched from ``vmax`` to ``powermax`` if using a newly rebranded PowerMax storage backend.
- If using a PowerMax storage backend, deprecated options ``emc_nas_server_container``, ``emc_nas_pool_names`` and ``emc_interface_ports`` can no longer be used. They must be replaced by ``powermax_server_container``, ``powermax_share_data_pools`` and ``powermax_ethernet_ports`` respectively. manila-10.0.0/releasenotes/notes/hsp-driver-e00aff5bc89d4b54.yaml0000664000175000017500000000033513656750227024236 0ustar zuulzuul00000000000000--- prelude: > Add Hitachi HSP driver. features: - Added new Hitachi HSP driver, that supports manage/unmanage and shrinking of shares, along with all the minimum driver features. Does not support snapshots.manila-10.0.0/releasenotes/notes/hpe3par-rw-snapshot-shares-f7c33b4bf528bf00.yaml0000664000175000017500000000012313656750227027171 0ustar zuulzuul00000000000000--- features: - Add read-write functionality for HPE 3PAR shares from snapshots. manila-10.0.0/releasenotes/notes/add_user_id_and_project_id_to_snapshot_APIs-157614b4b8d01e15.yaml0000664000175000017500000000014613656750227032426 0ustar zuulzuul00000000000000--- features: - user_id and project_id fields are added to the JSON response of /snapshots APIs.manila-10.0.0/releasenotes/notes/cephfs-nfs-ipv6-support-2ffd9c0448c2f47e.yaml0000664000175000017500000000011313656750227026526 0ustar zuulzuul00000000000000--- features: - IPv6 support for CephFS Manila driver with NFS gateway. manila-10.0.0/releasenotes/notes/api-versions-mark-v1-deprecated-3540d39279fbd60e.yaml0000664000175000017500000000133413656750227027731 0ustar zuulzuul00000000000000--- deprecations: - Deprecation of the manila v1 API was announced in the mitaka release. The versions response from the API has been fixed to state that this version has been deprecated. If you are using v1 API, consider switching to the v2 API to take advantage of newer features. v2 API has support for 'microversions'. Any endpoint on the v2 API can be requested with the HTTP header 'X-OpenStack-Manila-API-Version' and providing a value '2.x', where '2' is the major version and 'x' is the minor (or 'micro') version. To continue exploiting feature functionality that was part of the v1 API, you may use the v2 API with the microversion '2.0', which is behaviourally identical to the v1 API. manila-10.0.0/releasenotes/notes/bug-1831092-netapp-fix-race-condition-524555133aaa6ca8.yaml0000664000175000017500000000020113656750227030260 0ustar zuulzuul00000000000000--- fixes: - | Fixed an issue with the NetApp driver failing during a rollback operation in the share server creation. manila-10.0.0/releasenotes/notes/bug-1804659-speed-up-pools-detail-18f539a96042099a.yaml0000664000175000017500000000015513656750227027331 0ustar zuulzuul00000000000000--- fixes: - | Added caching of host state map to speed up calls for scheduler-stats/pools/detail. manila-10.0.0/releasenotes/notes/bug-1862833-fix-backref-by-eager-loading-2d897976e7598625.yaml0000664000175000017500000000024113656750227030446 0ustar zuulzuul00000000000000--- fixes: - | Some resources will be eagerly loaded from the database to avoid cyclical references and faulty results if their retrieval is deferred. 
././@LongLink0000000000000000000000000000015500000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/add-tenant-quota-for-share-replicas-and-replicas-size-565ffca315afb6f0.yamlmanila-10.0.0/releasenotes/notes/add-tenant-quota-for-share-replicas-and-replicas-size-565ffca315afb0000664000175000017500000000040013656750227033027 0ustar zuulzuul00000000000000--- features: - | Added quotas for amount of share replicas and share replica gigabytes. upgrade: - | Two new config options are available for setting default quotas for share replicas: `quota_share_replicas` and `quota_replica_gigabytes`. ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1777551-security-services-api-all-tenants-fix-e820ec370d7df473.yamlmanila-10.0.0/releasenotes/notes/bug-1777551-security-services-api-all-tenants-fix-e820ec370d7df473.0000664000175000017500000000050513656750227032007 0ustar zuulzuul00000000000000--- fixes: - | The ``all_tenants`` query parameter in the security services API (GET /v2/{project_id}/security-services) has been fixed to accept 'f', 'false', 'off', 'n', 'no', or '0'. Setting the flag to any of these values will retrieve security services only from the requester's project namespace. manila-10.0.0/releasenotes/notes/share-server-delete-failure-ca29d6b286a2c790.yaml0000664000175000017500000000031213656750227027302 0ustar zuulzuul00000000000000--- fixes: - Fix the issue of deleting share server in VNX driver. The VNX driver failed to detect the NFS interface of share server, so the detach and deletion of NFS interface were skipped. manila-10.0.0/releasenotes/notes/qnap-support-qes-210-8775e6c210f3ca9f.yaml0000664000175000017500000000011313656750227025554 0ustar zuulzuul00000000000000--- features: - | QNAP Manila driver added support for QES fw 2.1.0. manila-10.0.0/releasenotes/notes/bp-ocata-migration-improvements-c8c5675e266100da.yaml0000664000175000017500000000215113656750227030132 0ustar zuulzuul00000000000000--- prelude: > The share migration feature was improved to support migrating snapshots where possible and provide a more deterministic user experience. features: - Added 'preserve_snapshots' parameter to share migration API. upgrade: - All share migration driver-assisted API parameters are now mandatory. - Improvements to the share migration API have been qualified with the driver assisted migration support that exists in the ZFSOnLinux driver. However, this driver does not currently support preserving snapshots on migration. - Snapshot restriction in share migration API has been changed to return error only when parameter force-host-assisted-migration is True. deprecations: - Support for the experimental share migration APIs has been dropped for API microversions prior to 2.30. fixes: - Added check to validate that host assisted migration cannot be forced while specifying driver assisted migration options. - The share migration API can only be invoked when at least one parameter within (host, share-network, share-type) is expected to be changed. 
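The add-tenant-quota-for-share-replicas note above names two new options for default quotas. An illustrative manila.conf sketch with assumed values:

.. code-block:: ini

   [DEFAULT]
   # Default per-project quotas for share replicas (values are examples only).
   quota_share_replicas = 100
   quota_replica_gigabytes = 1000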
././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1745436-remove-data-node-access-ip-config-opt-709f330c57cdb0d5.yamlmanila-10.0.0/releasenotes/notes/bug-1745436-remove-data-node-access-ip-config-opt-709f330c57cdb0d5.0000664000175000017500000000045113656750227031571 0ustar zuulzuul00000000000000--- upgrade: - | The configuration option for the manila-data service, ``data_node_access_ip`` from the [DEFAULT] section is no longer supported. It was deprecated in favor of ``data_node_access_ips`` in the OpenStack Shared File Systems (manila) service release 6.0.0 (Queens). manila-10.0.0/releasenotes/notes/bug-1746723-8b89633062885f0b.yaml0000664000175000017500000000015313656750227023215 0ustar zuulzuul00000000000000--- fixes: - LVM driver now correctly parses IPv6 addresses during a Host-assisted share migration. manila-10.0.0/releasenotes/notes/add-export-location-filter-92ead37b728db654.yaml0000664000175000017500000000016013656750227027156 0ustar zuulzuul00000000000000--- features: - It is now possible to filter shares and share instances with export location ID or path.manila-10.0.0/releasenotes/notes/bug_1582931-1437eae20fa544d1.yaml0000664000175000017500000000014713656750227023472 0ustar zuulzuul00000000000000--- fixes: - HPE3PAR Driver fix to reduce the fsquota when a share is deleted for shared fstores.././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1704622-netapp-cdot-fix-share-specs-on-migration-bfbbebec26533652.yamlmanila-10.0.0/releasenotes/notes/bug-1704622-netapp-cdot-fix-share-specs-on-migration-bfbbebec2653360000664000175000017500000000032213656750227032246 0ustar zuulzuul00000000000000--- fixes: - The NetApp driver has been fixed to ensure that share type changes during driver optimized share migration will result in correction of share properties as per the requested extra-specs. manila-10.0.0/releasenotes/notes/manage-unmanage-replicated-share-fa90ce34372b6df5.yaml0000664000175000017500000000060413656750227030327 0ustar zuulzuul00000000000000--- features: - Share can be managed with replication_type extra-spec in the share_type issues: - Managing a share with replication_type can only be possible if the share does not already have replicas. fixes: - Retrying to manage shares in ``manage_error`` status works as expected. - Snapshot manage and unmange operations are disabled for shares with replicas. manila-10.0.0/releasenotes/notes/add-two-new-fields-to-share-groups-api-bc576dddd58a3086.yaml0000664000175000017500000000027313656750227031300 0ustar zuulzuul00000000000000--- features: - added two new fields to share groups API - 'availability_zone_id' and 'consistent_snapshot_support' to be able to get to know these attributes of a share group. ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1785129-fix-sighup-behavior-with-scheduler-8ee803ad0e543cce.yamlmanila-10.0.0/releasenotes/notes/bug-1785129-fix-sighup-behavior-with-scheduler-8ee803ad0e543cce.yam0000664000175000017500000000034313656750227032212 0ustar zuulzuul00000000000000--- fixes: - The SIGHUP behavior for the manila-scheduler service has been fixed. 
Previously, only the manila-share service was responding to SIGHUP and reloading its configuration, now manila-scheduler does the same.././@LongLink0000000000000000000000000000015200000000000011213 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1804651-netapp-cdot-add-peferred-dc-to-cifs-ad-99072ce663762e83.yamlmanila-10.0.0/releasenotes/notes/bug-1804651-netapp-cdot-add-peferred-dc-to-cifs-ad-99072ce663762e830000664000175000017500000000053513656750227031324 0ustar zuulzuul00000000000000--- features: - | For NetApp CIFS share provisioning users can now specify the optional "server" API parameter to provide an active directory domain controller IP address for when creating a security service. Multiple IP addresses can be given separated by comma. This represents the "Preferred DC" at the vserver cifs domain. ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1634734-fix-backend-extraspec-for-replication-d611d2227997ae3e.yamlmanila-10.0.0/releasenotes/notes/bug-1634734-fix-backend-extraspec-for-replication-d611d2227997ae3e.0000664000175000017500000000052613656750227031724 0ustar zuulzuul00000000000000--- fixes: - | Share type extra-specification ``share_backend_name`` is now ignored when creating share replicas. This ensures that backends in the same replication domain need not have the same value of ``share_backend_name``. See `launchpad bug #1634734 `_ for details. manila-10.0.0/releasenotes/notes/generic-driver-noop-interface-driver-24abcf7af1e08ff9.yaml0000664000175000017500000000071313656750227031343 0ustar zuulzuul00000000000000--- features: - | A "no-op" interface driver (manila.network.linux.interface.NoopInterfaceDriver) has been introduced to work with drivers that create and manage lifecycle of share servers (``driver_handles_share_servers=True``) through service instance virtual machines using OpenStack Compute. This interface driver can be used when manila-share is running on a machine that has access to the administrator network used by Manila. manila-10.0.0/releasenotes/notes/add-ability-to-check-tenant-quota-usages-7fs17djahy61nsd6.yaml0000664000175000017500000000016013656750227032202 0ustar zuulzuul00000000000000--- features: - Added detail API to show user and tenant specific usages through the quota-sets resource. manila-10.0.0/releasenotes/notes/add-share-type-filter-to-pool-list-api-267614b4d93j12de.yaml0000664000175000017500000000012213656750227031134 0ustar zuulzuul00000000000000--- features: - Added share_type to filter results of scheduler-stats/pools API.manila-10.0.0/releasenotes/notes/bug-1660319-1660336-migration-share-groups-e66a1478634947ad.yaml0000664000175000017500000000013213656750227030520 0ustar zuulzuul00000000000000--- fixes: - Shares can no longer be migrated while being members of share groups. manila-10.0.0/releasenotes/notes/bug-1845147-powermax-read-only-policy-585c29c5ff020007.yaml0000664000175000017500000000032013656750227030277 0ustar zuulzuul00000000000000--- fixes: - | Manila PowerMax fix ensuring that hosts that are given access to a share i.e read only, will always precede '-0.0.0.0/0.0.0.0'. Any host after this string will be denied access. 
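The generic-driver-noop-interface-driver note above introduces a no-op interface driver class. A hedged sketch of selecting it, assuming the relevant configuration option is named ``interface_driver`` (only the class path is taken from the note):

.. code-block:: ini

   [DEFAULT]
   # Assumed option name; appropriate when manila-share already has access
   # to the administrator network, as described in the note.
   interface_driver = manila.network.linux.interface.NoopInterfaceDriver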
././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000manila-10.0.0/releasenotes/notes/remove-nova-net-support-from-service-instance-module-dd7559803fa01d45.yamlmanila-10.0.0/releasenotes/notes/remove-nova-net-support-from-service-instance-module-dd7559803fa01d0000664000175000017500000000030513656750227033040 0ustar zuulzuul00000000000000--- upgrade: - Removed nova net support from the service instance module since legacy nova networking was deprecated in Newton and is no longer supported in regular deployments in Ocata. manila-10.0.0/releasenotes/notes/bug-1703660-fix-netapp-driver-preferred-state-0ce1a62961cded35.yaml0000664000175000017500000000030613656750227032120 0ustar zuulzuul00000000000000--- fixes: - | Fixed the NetApp driver to report the correct value of the "preferred" export location metadata where it cannot determine if there are any "preferred" export locations. manila-10.0.0/releasenotes/notes/bug-1660726-migration-export-locations-5670734670435015.yaml0000664000175000017500000000017213656750227030256 0ustar zuulzuul00000000000000--- fixes: - Export locations pertaining to migration destinations are no longer shown until migration is complete. manila-10.0.0/releasenotes/notes/add-share-group-quotas-4e426907eed4c000.yaml0000664000175000017500000000041713656750227026222 0ustar zuulzuul00000000000000--- features: - Added quotas for amount of share groups and share group snapshots. upgrade: - Two new config options are available for setting default quotas for share groups and share group snapshots - 'quota_share_groups' and 'quota_share_group_snapshots'. manila-10.0.0/releasenotes/notes/bug-1597940-fix-hpe3par-delete-share-0daf75193f318c41.yaml0000664000175000017500000000014113656750227030025 0ustar zuulzuul00000000000000--- fixes: - HPE3PAR driver fix to allow delete of a share that does not exist on the backend. manila-10.0.0/releasenotes/notes/unity-manila-ipv6-support-dd9bcf23064baceb.yaml0000664000175000017500000000010113656750227027275 0ustar zuulzuul00000000000000--- features: - IPv6 support for Dell EMC Unity Manila driver. manila-10.0.0/releasenotes/notes/blueprint-netapp-snapshot-visibility-4f090a20145fbf34.yaml0000664000175000017500000000065213656750227031235 0ustar zuulzuul00000000000000--- features: - Snapshot directories of shares created by the NetApp driver can now be controlled through extra-specs for newly created shares and through a config option for existing shares. upgrades: - A new config option ``netapp_reset_snapdir_visibility`` has been added to the NetApp driver, allowing existing shares to have their snapshot directory visibility setting changed at driver startup. manila-10.0.0/releasenotes/notes/bp-export-locations-az-api-changes-c8aa1a3a5bc86312.yaml0000664000175000017500000000223213656750227030550 0ustar zuulzuul00000000000000--- features: - | New experimental APIs were introduced version ``2.47`` to retrieve export locations of share replicas. With API versions ``2.46`` and prior, export locations of non-active or secondary share replicas are included in the share export locations APIs, albeit these APIs do not provide all the necessary distinguishing information (availability zone, replica state and replica ID). See the `API reference `_ for more information regarding these API changes. 
deprecations: - | In API version ``2.47``, export locations APIs: ``GET /v2/{tenant_id}/shares/{share_id}/export_locations`` and ``GET /v2/{tenant_id}/shares/{share_id}/export_locations/​{export_location_id }`` no longer provide export locations of non-active or secondary share replicas where available. Use the newly introduced share replica export locations APIs to gather this information: ``GET /v2/{tenant_id}/share-replicas/{share_replica_id}/export-locations`` and ``GET /v2/{tenant_id}/share-replicas/{share_replica_id}/export -locations/{export_location_id}``. manila-10.0.0/releasenotes/notes/add-is-default-e49727d276dd9bc3.yaml0000664000175000017500000000034613656750227024616 0ustar zuulzuul00000000000000--- features: - The share type and share group type APIs in API version 2.46 return field "is_default" which is set to 'true' if the share type or the share group type is the default as configured by the administrator. manila-10.0.0/releasenotes/notes/estimate-provisioned-capacity-34f0d2d7c6c56621.yaml0000664000175000017500000000024613656750227027677 0ustar zuulzuul00000000000000--- fixes: - Improve max_over_subscription_ratio enforcement by providing a reasonable estimate of backend provisioned-capacity when drivers cannot supply it. manila-10.0.0/releasenotes/notes/bug-1662615-hnas-snapshot-concurrency-2147159ea6b086c5.yaml0000664000175000017500000000016213656750227030373 0ustar zuulzuul00000000000000--- fixes: - Fixed HNAS driver error when creating or deleting snapshots caused by concurrency in backend. manila-10.0.0/releasenotes/notes/per-backend-az-590c68be0e2cb4bd.yaml0000664000175000017500000000131113656750227024725 0ustar zuulzuul00000000000000--- features: - | Availability zones may now be configured per backend in a multi-backend configuration. Individual back end sections can now have the configuration option ``backend_availability_zone`` set. If set, this value will override the ``storage_availability_zone`` option from the [DEFAULT] section. upgrade: - The ``storage_availability_zone`` option can now be overridden per backend by using the ``backend_availability_zone`` option within the backend stanza. This allows enabling multiple storage backends that may be deployed in different AZs in the same ``manila.conf`` file if desired, simplifying service architecture around the Share Replication feature. manila-10.0.0/releasenotes/notes/3par-add-update-access-68fc12ffc099f480.yaml0000664000175000017500000000010013656750227026130 0ustar zuulzuul00000000000000--- features: - Add update_access support to HPE 3PAR driver. manila-10.0.0/releasenotes/notes/fix-race-condition-netapp-5a36f6ba95a49c5e.yaml0000664000175000017500000000026513656750227027053 0ustar zuulzuul00000000000000--- fixes: - | Fixed an issue with the NetApp driver leaving leftover resources when it was handling too many share server creation and deletion requests in parallel. manila-10.0.0/releasenotes/notes/fix_manage_snapshots_hnas-2c0e1a47b5e6ac33.yaml0000664000175000017500000000014513656750227027266 0ustar zuulzuul00000000000000--- fixes: - Fixed HNAS driver error when managing snapshots caused by concurrency in backend. 
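To make the per-backend availability zone upgrade note above concrete, here is a minimal ``manila.conf`` sketch. The backend stanza names, the AZ names and the accompanying ``enabled_share_backends`` line are illustrative assumptions, not a prescribed configuration.

.. code-block:: ini

   [DEFAULT]
   # Fallback AZ used by any backend that does not override it.
   storage_availability_zone = manila-zone-0
   enabled_share_backends = alpha,beta

   [alpha]
   # Hypothetical backend stanza; overrides the default AZ for this backend only.
   backend_availability_zone = manila-zone-1

   [beta]
   backend_availability_zone = manila-zone-2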
././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1815038-extend-remove_version_from_href-support-ea479daaaf5c5700.yamlmanila-10.0.0/releasenotes/notes/bug-1815038-extend-remove_version_from_href-support-ea479daaaf5c5700000664000175000017500000000032513656750227032616 0ustar zuulzuul00000000000000--- fixes: - | `Launchpad bug 1815038 `_ has been fixed and now we correctly parse the base URL from manila's endpoint url, accounting for proxy URLs. manila-10.0.0/releasenotes/notes/guru-meditation-support-7872da69f529a6c2.yaml0000664000175000017500000000051213656750227026575 0ustar zuulzuul00000000000000--- features: - Adds support to generate Guru Meditation Reports(GMR) for manila services. GMR provides useful debugging information that can be used to obtain an accurate view on the current live state of the system. For example, what threads are running, what configuration parameters are in effect, and more. manila-10.0.0/releasenotes/notes/netapp-cdot-clone-split-control-a68b5fc80f1fc368.yaml0000664000175000017500000000054113656750227030225 0ustar zuulzuul00000000000000--- features: - NetApp cDOT driver now supports a scoped extra-spec ``netapp:split_clone_on_create`` to be used in share types when creating shares (NetApp FlexClone) from snapshots. If this extra-spec is not included, or set to ``false``, the cDOT driver will perform the clone-split only if/when the parent snapshot is being deleted.manila-10.0.0/releasenotes/notes/emc-unity-manila-support-d4f5a410501cfdae.yaml0000664000175000017500000000062213656750227027017 0ustar zuulzuul00000000000000--- prelude: > Add a new EMC Unity plugin in manila which allows user to create NFS/CIFS share with an EMC Unity backend. features: - | Add a new Unity plugin in manila which allows user to create NFS/CIFS share with an EMC Unity backend. This plugin performs the operations on Unity by REST API. issues: - EMC Unity does not support the same IP in different VLANs. ././@LongLink0000000000000000000000000000015700000000000011220 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1733494-allow-user-group-name-with-blank-access-fix-665b3e42bdc985ac.yamlmanila-10.0.0/releasenotes/notes/bug-1733494-allow-user-group-name-with-blank-access-fix-665b3e42bdc0000664000175000017500000000016413656750227032216 0ustar zuulzuul00000000000000--- fixes: - Allows the use of blank in user group name, since the AD allow user group name to include blank. manila-10.0.0/releasenotes/notes/remove-deprecated-size-limiter-9d7c8ab69cf85aea.yaml0000664000175000017500000000066413656750227030264 0ustar zuulzuul00000000000000deprecations: - Removed manila RequestBodySizeLimiter shims and deprecation log messages since it has been deprecated since equivalent oslo.middleware library object was added in kilo. upgrade: - Ensure that /etc/manila/api-paste.ini is up-to-date with etc/manila/api-paste.ini, in particular that [filter:sizelimit] section has paste.filter_factory = oslo_middleware.sizelimit:RequestBodySizeLimiter.factory manila-10.0.0/releasenotes/notes/bug-1696669-add-ou-to-security-service-06b69615bd417d40.yaml0000664000175000017500000000022213656750227030362 0ustar zuulzuul00000000000000--- features: - | Added 'ou' field to 'security_service' object to be able to configure in which organizational unit the share ends up. 
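The sizelimit upgrade note above boils down to the following ``/etc/manila/api-paste.ini`` fragment; this only restates the section named in that note as a formatted snippet.

.. code-block:: ini

   [filter:sizelimit]
   # Use the oslo.middleware implementation instead of the removed manila shim.
   paste.filter_factory = oslo_middleware.sizelimit:RequestBodySizeLimiter.factory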
././@LongLink0000000000000000000000000000020100000000000011206 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1870751-cleanup-share-type-and-group-type-project-access-when-deleted-4fcd49ba6e6c40bd.yamlmanila-10.0.0/releasenotes/notes/bug-1870751-cleanup-share-type-and-group-type-project-access-when-d0000664000175000017500000000043613656750227032706 0ustar zuulzuul00000000000000--- fixes: - | Fixed the cleanup for private share types and share group types to include clearing out the database entries recording project specific access rules to these types. See `Launchpad bug 1870751 `_ for more details. manila-10.0.0/releasenotes/notes/huawei-driver-support-snapshot-revert-1208c586bd8db98e.yaml0000664000175000017500000000020713656750227031453 0ustar zuulzuul00000000000000--- features: - | Huawei driver implements the snapshot reverting feature, by Huawei storage's snapshot-rollback capability. ././@LongLink0000000000000000000000000000014700000000000011217 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1750074-fix-rabbitmq-password-in-debug-mode-4e136ff86223c4ea.yamlmanila-10.0.0/releasenotes/notes/bug-1750074-fix-rabbitmq-password-in-debug-mode-4e136ff86223c4ea.ya0000664000175000017500000000014013656750227031714 0ustar zuulzuul00000000000000--- fixes: - rabbitmq password is no longer exposed in the logs when debugging is enabled.manila-10.0.0/releasenotes/notes/cephfs-native-enhance-update-access-support-e1a1258084c997ca.yaml0000664000175000017500000000034313656750227032321 0ustar zuulzuul00000000000000--- features: - | Enhanced ``cephfs_native`` driver's update_access() to, - remove undesired rules existing in the backend during recovery mode. - return ``access_keys`` of ceph auth IDs that are allowed access. manila-10.0.0/releasenotes/notes/manage-unmanage-snapshot-bd92164472638f44.yaml0000664000175000017500000000006013656750227026454 0ustar zuulzuul00000000000000--- features: - Manage and unmanage snapshot. manila-10.0.0/releasenotes/notes/container-driver-5d972cc40e314663.yaml0000664000175000017500000000072413656750227025131 0ustar zuulzuul00000000000000--- prelude: > A new Container driver is added. It uses docker container as a share server. features: - The Container driver allows using a docker container as a share server. This allows for very fast share server startup. - The Container driver supports CIFS protocol. issues: - | The Container driver has the following known issues: * Only basic driver operations are supported: create/delete share, update access and extend share. manila-10.0.0/releasenotes/notes/manage-snapshot-in-zfsonlinux-driver-6478d8d5b3c6a97f.yaml0000664000175000017500000000012313656750227031230 0ustar zuulzuul00000000000000--- features: - Added support of 'manage snapshot' feature to ZFSonLinux driver. manila-10.0.0/releasenotes/notes/bug-1872873-fix-consume-from-share-eea5941de17a5bcc.yaml0000664000175000017500000000041013656750227030134 0ustar zuulzuul00000000000000--- fixes: - Updated the scheduler pool attributes ``provisioned_capacity_gb`` and ``allocated_capacity_gb`` to accommodate shares being created. This helps maintain an approximate tally of these attributes in between back end scheduler updates. 
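As a brief sketch of how the snapshot-revert capability mentioned in the Huawei driver note above is consumed, a share can be reverted to its most recent snapshot through the client. The snapshot name is a placeholder, and availability of the command depends on a sufficiently recent API microversion.

.. code-block:: console

   $ manila revert-to-snapshot my-share-snapshot-1
   # Reverts the snapshot's parent share to the state captured by this
   # (most recent) snapshot; the snapshot name used here is hypothetical.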
manila-10.0.0/releasenotes/notes/bug-1578328-fix-replica-deletion-in-cDOT-7e4502fb50b69507.yaml0000664000175000017500000000027213656750227030465 0ustar zuulzuul00000000000000--- fixes: - Changed share replica deletion logic in the NetApp cDOT driver to disregard invalid replication relationships from among those recorded by the driver to clean up. ././@LongLink0000000000000000000000000000020300000000000011210 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1645751-fixed-shares-created-from-snapshots-for-lvm-and-generic-drivers-94a1161a9e0b5a85.yamlmanila-10.0.0/releasenotes/notes/bug-1645751-fixed-shares-created-from-snapshots-for-lvm-and-generic0000664000175000017500000000043313656750227032743 0ustar zuulzuul00000000000000--- fixes: - Fixed shares created from snapshots on the LVM and Generic drivers to no longer share the same filesystem handle as the source shares. The cause was the same as described in Ubuntu launchpad bug https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1071733 manila-10.0.0/releasenotes/notes/hybrid-aggregates-in-netapp-cdot-drivers-e7c90fb62426c281.yaml0000664000175000017500000000012113656750227031612 0ustar zuulzuul00000000000000--- features: - Add support for hybrid aggregates to the NetApp cDOT drivers. ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1624526-netapp-cdot-filter-root-aggregates-c30ac5064d530b86.yamlmanila-10.0.0/releasenotes/notes/bug-1624526-netapp-cdot-filter-root-aggregates-c30ac5064d530b86.yam0000664000175000017500000000027313656750227031740 0ustar zuulzuul00000000000000--- fixes: - The NetApp cDOT driver now explicitly filters root aggregates from the pools reported to the manila scheduler if the driver is operating with cluster credentials. manila-10.0.0/releasenotes/notes/neutron-binding-driver-43f01565051b031b.yaml0000664000175000017500000000007613656750227026144 0ustar zuulzuul00000000000000--- features: - Added neutron driver for port bind actions. manila-10.0.0/releasenotes/notes/emc_vnx_interface_ports_configuration-00d454b3003ef981.yaml0000664000175000017500000000025013656750227031472 0ustar zuulzuul00000000000000--- fixes: - EMC VNX driver supports interface ports configuration now. The ports of Data Mover that can be used by share server interfaces are configurable. manila-10.0.0/releasenotes/notes/netapp-cdot-use-security-service-ou-4dc5835c9e00ad9d.yaml0000664000175000017500000000032413656750227031030 0ustar zuulzuul00000000000000--- features: - | The NetApp cDOT driver uses the ou field from security services to set the organizational unit of a vserver's active directory configuration. This is done at CIFS server creation. manila-10.0.0/releasenotes/notes/hnas-mountable-snapshots-4fbffa05656112c4.yaml0000664000175000017500000000047713656750227026751 0ustar zuulzuul00000000000000--- features: - Added Mountable Snapshots support to HNAS driver. upgrade: - If using existing share types with the HNAS back end, set the 'mount_snapshot_support' extra-spec to allow creating shares that support mountable snapshots. This modification will not affect existing shares of such types. manila-10.0.0/releasenotes/notes/add-snapshot-instances-admin-api-959a1121aa407629.yaml0000664000175000017500000000012613656750227027773 0ustar zuulzuul00000000000000--- features: - Add list, show, and reset-status admin APIs for snapshot instances. 
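Following the HNAS mountable snapshots upgrade note above, an existing share type can be updated with the extra-spec it mentions. The share type name below is a placeholder and the command form is a sketch of the usual extra-spec workflow, not HNAS-specific guidance.

.. code-block:: console

   $ manila type-key hnas_gold set mount_snapshot_support=True
   # 'hnas_gold' is a hypothetical share type; per the note above, existing
   # shares of this type are unaffected, only newly created shares gain
   # mountable-snapshot behaviour.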
manila-10.0.0/releasenotes/notes/bug-1858328-netapp-fix-shrinking-error-48bcfffe694f5e81.yaml0000664000175000017500000000035013656750227031003 0ustar zuulzuul00000000000000--- fixes: - | Fixed an issue in NetApp driver when shrinking shares to a size smaller than the current used space. Now it will return a more appropriate error status called ``shrinking_possible_data_loss_error``. manila-10.0.0/releasenotes/notes/unity-drvier-support-1gb-share-48f032dff8a6a789.yaml0000664000175000017500000000030413656750227027750 0ustar zuulzuul00000000000000--- fixes: - Shares under 3 GB cannot be created on the Dell EMC Unity back end. If users create shares smaller than 3 GB, they will be allocated a 3 GB file system on the Unity system. manila-10.0.0/releasenotes/notes/3par-fix-get_vfs-driver-bootup-db6b085eb6094f5f.yaml0000664000175000017500000000055513656750227027770 0ustar zuulzuul00000000000000--- issues: - 3parclient up to version 4.2.1 always returns only 1 VFS IP address. This may cause 3PAR driver boot up failure while validating VFS IP addresses against IP addresses configured in manila.conf. fixes: - Fixed 3PAR driver boot up failure while validating share server IP address provided in manila.conf against IP address set on array. manila-10.0.0/releasenotes/notes/infortrend-manila-driver-a1a2af20de6368cb.yaml0000664000175000017500000000023513656750227027037 0ustar zuulzuul00000000000000--- prelude: > Added Manila share driver for Infortrend storage systems. features: - The new Infortrend driver supports GS/GSe Family storage systems. manila-10.0.0/releasenotes/notes/deprecate-memcached-servers-config-option-f4456382b9b4d6db.yaml0000664000175000017500000000060413656750227032121 0ustar zuulzuul00000000000000--- deprecations: - | The configuration option "memcached_servers" from the [DEFAULT] section is deprecated. This option has currently no effect and will be removed in future releases. To specify memcached servers for the authentication middleware when using keystone, please use the option "memcached_servers" from the [keystone_authtoken] configuration group. ././@LongLink0000000000000000000000000000014700000000000011217 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1717135-ganesha-cleanup-of-tmp-config-files-66082b2384ace0a5.yamlmanila-10.0.0/releasenotes/notes/bug-1717135-ganesha-cleanup-of-tmp-config-files-66082b2384ace0a5.ya0000664000175000017500000000025213656750227031567 0ustar zuulzuul00000000000000--- fixes: - Added operation of cleaning up the temp config files when moving the config file from temp location to the correct ganesha config location goes wrong. ././@LongLink0000000000000000000000000000020200000000000011207 Lustar 00000000000000manila-10.0.0/releasenotes/notes/bug-1869712-fix-increased-scheduled-time-for-non-thin-provisioned-backends-1da2cc33d365ba4f.yamlmanila-10.0.0/releasenotes/notes/bug-1869712-fix-increased-scheduled-time-for-non-thin-provisioned-b0000664000175000017500000000047713656750227032763 0ustar zuulzuul00000000000000--- fixes: - | Reduces an increase of schedule time for non thin provisioned backends. On those backends, there is no need to calculate provisioned_capacity_gb, as it is not used during the scheduling. This calculation was not scaling properly on big environments as it implies many database queries. 
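To follow the deprecation note above, memcached servers for the keystone authentication middleware belong in the ``[keystone_authtoken]`` group of ``manila.conf`` rather than ``[DEFAULT]``. The addresses below are placeholders.

.. code-block:: ini

   [keystone_authtoken]
   # Replaces the now-ignored [DEFAULT]/memcached_servers option.
   memcached_servers = 192.0.2.20:11211,192.0.2.21:11211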
manila-10.0.0/releasenotes/notes/disable-share-groups-api-by-default-0627b97ac2cda4cb.yaml0000664000175000017500000000126113656750227030763 0ustar zuulzuul00000000000000--- issues: - Share groups replaced the experimental consistency groups feature in Ocata. The APIs for share groups have a default role-based-access-control policy set to "!". This means that these APIs are not enabled by default on upgrading to the Ocata release. Modify policy.json appropriately in your deployment to enable these APIs. You may set these policies to "rule:default" to allow access to all tenants and "rule:admin_api" to restrict the access only to tenants with those privileges. upgrade: - Policies relating to "consistency_group" and "cgsnapshot" APIs have been removed from manila. These policies can be removed from "policy.json". manila-10.0.0/releasenotes/notes/dedupe-support-hnas-driver-017d2f2a93a8b487.yaml0000664000175000017500000000031213656750227027127 0ustar zuulzuul00000000000000--- fixes: - Hitachi HNAS driver now reports ``dedupe`` capability and it can be used in extra-specs to choose a HNAS file system that has dedupe enabled when creating a manila share on HNAS. manila-10.0.0/releasenotes/notes/add_gateway_into_db-1f3cd3f392ae81cf.yaml0000664000175000017500000000017313656750227026122 0ustar zuulzuul00000000000000--- features: - Store network gateway value in DB. - Gateway is added to the JSON response of the /share-networks API. manila-10.0.0/releasenotes/notes/bug-1795463-fix-pagination-slowness-8fcda3746aa13940.yaml0000664000175000017500000000050113656750227030206 0ustar zuulzuul00000000000000--- fixes: - | When the OpenStack administrator has a busy environment that contains many shares, the list operation with `--limit` parameter was taking too long to respond. This lag has now been fixed. See the `launchpad bug 1795463 `_ for more details. manila-10.0.0/releasenotes/notes/migration-access-fix-71a0f52ea7a152a3.yaml0000664000175000017500000000036013656750227026005 0ustar zuulzuul00000000000000--- fixes: - Fixed access_allow and access_deny displaying incorrect error message during migration of a share. - Fixed access rule concurrency in migration that was preventing new rules from being added to the migrated share. manila-10.0.0/releasenotes/notes/change_user_project_length-93cc8d1c32926e75.yaml0000664000175000017500000000013213656750227027311 0ustar zuulzuul00000000000000--- fixes: - User_id and project_id DB fields are extended to also support LDAP setups. manila-10.0.0/releasenotes/notes/bug-1667450-migration-stale-source-9c092fee267a7a0f.yaml0000664000175000017500000000017413656750227030103 0ustar zuulzuul00000000000000fixes: - Fixed host-assisted share migration not deleting the source share from the storage backend upon completion. manila-10.0.0/releasenotes/source/0000775000175000017500000000000013656750363017062 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/train.rst0000664000175000017500000000017613656750227020734 0ustar zuulzuul00000000000000========================== Train Series Release Notes ========================== .. 
release-notes:: :branch: stable/train manila-10.0.0/releasenotes/source/_static/0000775000175000017500000000000013656750363020510 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/_static/.placeholder0000664000175000017500000000000013656750227022760 0ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/conf.py0000664000175000017500000002122113656750227020356 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # Manila Release Notes documentation build configuration file, created by # sphinx-quickstart on Tue Nov 3 17:40:50 2015. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'reno.sphinxext', 'openstackdocstheme', ] # openstackdocstheme options repository_name = 'openstack/manila' bug_project = 'manila' bug_tag = 'release notes' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2015, Manila Developers' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. 
pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [openstackdocstheme.get_html_theme_path()] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'ManilaReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. 
List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'ManilaReleaseNotes.tex', u'Manila Release Notes Documentation', u'Manila Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'manilareleasenotes', u'Manila Release Notes Documentation', [u'Manila Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'ManilaReleaseNotes', u'Manila Release Notes Documentation', u'Manila Developers', 'ManilaReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] manila-10.0.0/releasenotes/source/mitaka.rst0000664000175000017500000000023213656750227021056 0ustar zuulzuul00000000000000=================================== Mitaka Series Release Notes =================================== .. release-notes:: :branch: origin/stable/mitaka manila-10.0.0/releasenotes/source/liberty.rst0000664000175000017500000000021513656750227021263 0ustar zuulzuul00000000000000============================ Liberty Series Release Notes ============================ .. release-notes:: :branch: origin/stable/liberty manila-10.0.0/releasenotes/source/stein.rst0000664000175000017500000000022113656750227020730 0ustar zuulzuul00000000000000=================================== Stein Series Release Notes =================================== .. release-notes:: :branch: stable/stein manila-10.0.0/releasenotes/source/queens.rst0000664000175000017500000000022313656750227021110 0ustar zuulzuul00000000000000=================================== Queens Series Release Notes =================================== .. release-notes:: :branch: stable/queens manila-10.0.0/releasenotes/source/unreleased.rst0000664000175000017500000000015313656750227021741 0ustar zuulzuul00000000000000============================ Current Series Release Notes ============================ .. 
release-notes:: manila-10.0.0/releasenotes/source/rocky.rst0000664000175000017500000000022113656750227020735 0ustar zuulzuul00000000000000=================================== Rocky Series Release Notes =================================== .. release-notes:: :branch: stable/rocky manila-10.0.0/releasenotes/source/index.rst0000664000175000017500000000152213656750227020722 0ustar zuulzuul00000000000000.. Copyright 2016-2017 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ==================== Manila Release Notes ==================== .. toctree:: :maxdepth: 1 unreleased train stein rocky queens pike ocata newton mitaka liberty manila-10.0.0/releasenotes/source/ocata.rst0000664000175000017500000000023013656750227020675 0ustar zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. release-notes:: :branch: origin/stable/ocata manila-10.0.0/releasenotes/source/newton.rst0000664000175000017500000000023213656750227021122 0ustar zuulzuul00000000000000=================================== Newton Series Release Notes =================================== .. release-notes:: :branch: origin/stable/newton manila-10.0.0/releasenotes/source/_templates/0000775000175000017500000000000013656750363021217 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/_templates/.placeholder0000664000175000017500000000000013656750227023467 0ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/locale/0000775000175000017500000000000013656750362020320 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/locale/fr/0000775000175000017500000000000013656750362020727 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/locale/fr/LC_MESSAGES/0000775000175000017500000000000013656750363022515 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/locale/fr/LC_MESSAGES/releasenotes.po0000664000175000017500000000267013656750227025552 0ustar zuulzuul00000000000000# Gérald LONLAS , 2016. 
#zanata msgid "" msgstr "" "Project-Id-Version: Manila Release Notes 5.0.0\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2017-08-03 15:36+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-10-22 05:29+0000\n" "Last-Translator: Gérald LONLAS \n" "Language-Team: French\n" "Language: fr\n" "X-Generator: Zanata 3.9.6\n" "Plural-Forms: nplurals=2; plural=(n > 1)\n" msgid "1.0.1" msgstr "1.0.1" msgid "2.0.0" msgstr "2.0.0" msgid "3.0.0" msgstr "3.0.0" msgid "Bug Fixes" msgstr "Corrections de bugs" msgid "Current Series Release Notes" msgstr "Note de la release actuelle" msgid "Deprecation Notes" msgstr "Notes dépréciées " msgid "Known Issues" msgstr "Problèmes connus" msgid "Liberty Series Release Notes" msgstr "Note de release pour Liberty" msgid "Manila Release Notes" msgstr "Note de release de Manila" msgid "Mitaka Series Release Notes" msgstr "Note de release pour Mitaka" msgid "New Features" msgstr "Nouvelles fonctionnalités" msgid "Newton Series Release Notes" msgstr "Note de release pour Newton" msgid "Other Notes" msgstr "Autres notes" msgid "Security Issues" msgstr "Problèmes de sécurités" msgid "Start using reno to manage release notes." msgstr "Commence à utiliser reno pour la gestion des notes de release" msgid "Upgrade Notes" msgstr "Notes de mises à jours" manila-10.0.0/releasenotes/source/locale/es/0000775000175000017500000000000013656750362020727 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/locale/es/LC_MESSAGES/0000775000175000017500000000000013656750363022515 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/locale/es/LC_MESSAGES/releasenotes.po0000664000175000017500000000606613656750227025555 0ustar zuulzuul00000000000000# Jose Porrua , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: Manila Release Notes 5.0.0\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2017-08-03 15:36+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-07-28 02:25+0000\n" "Last-Translator: Jose Porrua \n" "Language-Team: Spanish\n" "Language: es\n" "X-Generator: Zanata 3.9.6\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "1.0.1" msgstr "1.0.1" msgid "2.0.0" msgstr "2.0.0" msgid "2.0.0-44" msgstr "2.0.0-44" msgid "Add support for hybrid aggregates to the NetApp cDOT drivers." msgstr "" "Añadir funcionalidad para user agrupaciones híbridas en los drivers de " "Clustered Data ONTAP de NetApp." msgid "Add support for snapshot manage/unmanage to the NetApp cDOT driver." msgstr "" "Añadir funcionalidad para administrar instantáneas en el driver de NetApp " "cDOT." msgid "Added APIs for listing export locations per share and share instances." msgstr "" "Se añadió APIs para obtener una lista con las localidades de exportación por " "recurso compartido y por instancia de recurso compartido." msgid "Added driver for Tegile IntelliFlash arrays." msgstr "" "Se añadió un controlador para interactuar con los sistemas de almacenamiento " "de datos Tegile IntelliFlash." msgid "Added support of 'manage share' feature to ZFSonLinux driver." msgstr "" "Se añadió la funcionalidad para gestionar recursos compartidos en el " "controlador de ZFSonLinux." msgid "Added support of 'manage snapshot' feature to ZFSonLinux driver." msgstr "" "Se añadió la funcionalidad para gestionar instantáneas en el controlador de " "ZFSonLinux." msgid "Bug Fixes" msgstr "Corrigiendo errores." 
msgid "Current Series Release Notes" msgstr "Notas de la versión actual" msgid "Deprecation Notes" msgstr "Notas de desuso" msgid "Known Issues" msgstr "Problemas Conocidos" msgid "Liberty Series Release Notes" msgstr "Notas de la versión para Liberty" msgid "Manage and unmanage snapshot." msgstr "Gestionar o dejar de gestionar una instantánea." msgid "Manage share snapshot on array in huawei driver." msgstr "" "Gestionar una instantánea de un recurso compartido en el controlador Huawei." msgid "Manila Release Notes" msgstr "Notas de la versión para Manila" msgid "Mitaka Series Release Notes" msgstr "Notas de la versión para Mitaka" msgid "New Features" msgstr "Nuevas Funcionalidades" msgid "Other Notes" msgstr "Otras Notas" msgid "Security Issues" msgstr "Problemas de Seguridad" msgid "Store network gateway value in DB." msgstr "Guardar el valor de la red de pasarela en la base de datos." msgid "Upgrade Notes" msgstr "Notas de Actualización" msgid "User ID is added to the JSON response of the /shares APIs." msgstr "Se añadió el ID de usuario a la respuesta JSON de la API /shares." msgid "" "user_id and project_id fields are added to the JSON response of /snapshots " "APIs." msgstr "" "Se añadieron los campos user_id y project_id a la respuesta JSON de la API /" "snapshots." manila-10.0.0/releasenotes/source/locale/de/0000775000175000017500000000000013656750362020710 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/locale/de/LC_MESSAGES/0000775000175000017500000000000013656750363022476 5ustar zuulzuul00000000000000manila-10.0.0/releasenotes/source/locale/de/LC_MESSAGES/releasenotes.po0000664000175000017500000000410613656750227025527 0ustar zuulzuul00000000000000# Andreas Jaeger , 2019. #zanata msgid "" msgstr "" "Project-Id-Version: manila\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2019-09-26 20:43+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2019-09-26 08:00+0000\n" "Last-Translator: Andreas Jaeger \n" "Language-Team: German\n" "Language: de\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "1.0.1" msgstr "1.0.1" msgid "2.0.0" msgstr "2.0.0" msgid "2.0.0-44" msgstr "2.0.0-44" msgid "3.0.0" msgstr "3.0.0" msgid "3.0.0-20" msgstr "3.0.0-20" msgid "4.0.0" msgstr "4.0.0" msgid "4.0.1" msgstr "4.0.1" msgid "4.0.2" msgstr "4.0.2" msgid "5.0.0" msgstr "5.0.0" msgid "5.0.1" msgstr "5.0.1" msgid "5.0.2" msgstr "5.0.2" msgid "5.0.3" msgstr "5.0.3" msgid "5.1.0" msgstr "5.1.0" msgid "6.0.0" msgstr "6.0.0" msgid "6.0.1" msgstr "6.0.1" msgid "6.0.2" msgstr "6.0.2" msgid "6.1.0" msgstr "6.1.0" msgid "6.2.0" msgstr "6.2.0" msgid "6.3.0" msgstr "6.3.0" msgid "6.3.1" msgstr "6.3.1" msgid "7.0.0" msgstr "7.0.0" msgid "7.1.0" msgstr "7.1.0" msgid "7.2.0" msgstr "7.2.0" msgid "7.3.0" msgstr "7.3.0" msgid "7.3.0-5" msgstr "7.3.0-5" msgid "8.0.0" msgstr "8.0.0" msgid "8.0.1" msgstr "8.0.1" msgid "8.0.1-2" msgstr "8.0.1-2" msgid "Current Series Release Notes" msgstr "Aktuelle Serie Releasenotes" msgid "Liberty Series Release Notes" msgstr "Liberty Serie Releasenotes" msgid "Manila Release Notes" msgstr "Manila Releasenotes" msgid "Mitaka Series Release Notes" msgstr "Mitaka Serie Releasenotes" msgid "Newton Series Release Notes" msgstr "Newton Serie Releasenotes" msgid "Ocata Series Release Notes" msgstr "Ocata Serie Releasenotes" msgid "Pike Series Release Notes" msgstr "Pike Serie Releasenotes" msgid "Queens Series Release Notes" msgstr "Queens Serie Releasenotes" msgid 
"Rocky Series Release Notes" msgstr "Rocky Serie Releasenotes" msgid "Stein Series Release Notes" msgstr "Stein Serie Releasenotes" msgid "Upgrade Notes" msgstr "Aktualisierungsnotizen" manila-10.0.0/releasenotes/source/pike.rst0000664000175000017500000000017513656750227020546 0ustar zuulzuul00000000000000========================== Pike Series Release Notes ========================== .. release-notes:: :branch: stable/pike manila-10.0.0/api-ref/0000775000175000017500000000000013656750362014413 5ustar zuulzuul00000000000000manila-10.0.0/api-ref/source/0000775000175000017500000000000013656750362015713 5ustar zuulzuul00000000000000manila-10.0.0/api-ref/source/share-replica-export-locations.inc0000664000175000017500000000454013656750227024440 0ustar zuulzuul00000000000000.. -*- rst -*- .. _share_replica_export_locations: ================================================ Share replica export locations (since API v2.47) ================================================ Set of APIs used to view export locations of share replicas. List export locations ===================== .. rest_method:: GET /v2/{project_id}/share-replicas/{share_replica_id}/export-locations .. versionadded:: 2.47 Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_replica_id: share_replica_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: export_location_id - share_instance_id: export_location_share_instance_id - path: export_location_path - is_admin_only: export_location_is_admin_only - preferred: export_location_preferred_replicas - availability_zone: export_location_availability_zone - replica_state: share_replica_replica_state Response example ---------------- .. literalinclude:: samples/share-replica-export-location-list-response.json :language: javascript Show single export location =========================== .. rest_method:: GET /v2/{project_id}/share-replicas/{share_replica_id}/export-locations/{export-location-id} .. versionadded:: 2.47 Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_replica_id: share_replica_id_path - export_location_id: export_location_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: export_location_id - share_instance_id: export_location_share_instance_id - path: export_location_path - is_admin_only: export_location_is_admin_only - preferred: export_location_preferred_replicas - availability_zone: export_location_availability_zone - replica_state: share_replica_replica_state - created_at: created_at - updated_at: updated_at Response example ---------------- .. literalinclude:: samples/share-replica-export-location-show-response.json :language: javascript manila-10.0.0/api-ref/source/experimental.inc0000664000175000017500000000073413656750227021107 0ustar zuulzuul00000000000000.. -*- rst -*- ================= Experimental APIs ================= .. important:: The following APIs are part of the `experimental feature `_ introduced in version 2.4. The APIs may change or be removed in future versions of the Shared File Systems API. All experimental APIs require the ``X-OpenStack-Manila-API-Experimental: True`` header to be sent in the requests. 
manila-10.0.0/api-ref/source/share-migration.inc0000664000175000017500000000575013656750227021506 0ustar zuulzuul00000000000000.. -*- rst -*- ================================ Share Migration (since API v2.5) ================================ As an administrator, you can migrate a share with its data from one location to another in a manner that is transparent to users and workloads. Possible use cases for data migration include: - Bring down a physical storage device for maintenance without disrupting workloads. - Modify the properties of a share. - Free up space in a thinly-provisioned back end. .. note:: Share Migration APIs are `experimental APIs <#experimental-apis>`_ . Migrate share (DEPRECATED) ========================== .. warning:: This API is deprecated starting with microversion 2.15 and requests to this API will fail with a 404 starting from microversion 2.15. Please see the new experimental API below. .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action .. versionadded:: 2.5 Migrates a share from one back end to another. You can migrate a share from one back end to another but both back ends must set the ``driver_handles_share_servers`` parameter to ``False``. If a share driver handles one of the back ends, this action is not supported. You can configure a back end in the ``manila.conf`` file. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - os-migrate_share: os-migrate_share - migrate_share: migrate_share - host: host_10 - force_host_copy: force_host_copy Start Migration (since API v2.15) ================================= .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action .. versionadded:: 2.15 Initiates share migration. This API will initiate the share data copy to the new host. The copy operation is non-disruptive. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - migrate-start: migrate-start - host: host_10 - notify: notify - force_host_copy: force_host_copy Complete Migration (since API v2.15) ======================================= .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action .. versionadded:: 2.15 Completes share migration. This API will initiate the switch-over from the source to destination share. This operation is disruptive. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - migration_complete: migration_complete - host: host_10 - notify: notify - force_host_copy: force_host_copy manila-10.0.0/api-ref/source/share-instance-export-locations.inc0000664000175000017500000000444713656750227024633 0ustar zuulzuul00000000000000.. -*- rst -*- ================================================ Share instance export locations (since API v2.9) ================================================ Set of APIs used to view export locations of share instances. By default, these APIs are admin-only. Use the ``policy.json`` file to grant permissions for these actions to other roles. Lists all export locations for a share instance. 
Show details of an export location belonging to a share instance. List export locations ===================== .. rest_method:: GET /v2/{project_id}/share_instances/{share_instance_id}/export_locations .. versionadded:: 2.9 Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_instance_id: share_instance_id Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: export_location_id - share_instance_id: export_location_share_instance_id - path: export_location_path - is_admin_only: export_location_is_admin_only - preferred: export_location_preferred Response example ---------------- .. literalinclude:: samples/export-location-list-response.json :language: javascript Show single export location =========================== .. rest_method:: GET /v2/{project_id}/share_instances/{share_instance_id}/export_locations/{export_location_id} .. versionadded:: 2.9 Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_instance_id: share_instance_id - export_location_id: export_location_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: export_location_id - share_instance_id: export_location_share_instance_id - path: export_location_path - is_admin_only: export_location_is_admin_only - preferred: export_location_preferred - created_at: created_at - updated_at: updated_at Response example ---------------- .. literalinclude:: samples/export-location-show-response.json :language: javascript manila-10.0.0/api-ref/source/shares.inc0000664000175000017500000004273513656750227017706 0ustar zuulzuul00000000000000.. -*- rst -*- ====== Shares ====== A share is a remote, mountable file system. In the APIs below, a share resource is a representation of this remote file system within the Shared File Systems service. This resource representation includes useful metadata, communicating the characteristics of the remote file system as determined by the user and the Shared File Systems service. You can create a share and associate it with a network, list shares, and show information for, update, and delete a share. To create a share, specify one of these supported protocols: - ``NFS``. Network File System (NFS). - ``CIFS``. Common Internet File System (CIFS). - ``GLUSTERFS``. Gluster file system (GlusterFS). - ``HDFS``. Hadoop Distributed File System (HDFS). - ``CEPHFS``. Ceph File System (CephFS). - ``MAPRFS``. MapR File System (MAPRFS). You can also create snapshots of shares. To create a snapshot, you specify the ID of the share that you want to snapshot. A share has one of these status values: **Share statuses** +----------------------------------------+--------------------------------------------------------+ | Status | Description | +----------------------------------------+--------------------------------------------------------+ | ``creating`` | The share is being created. | +----------------------------------------+--------------------------------------------------------+ | ``creating_from_snapshot`` | The share is being created from a parent snapshot. | +----------------------------------------+--------------------------------------------------------+ | ``deleting`` | The share is being deleted. 
| +----------------------------------------+--------------------------------------------------------+ | ``deleted`` | The share was deleted. | +----------------------------------------+--------------------------------------------------------+ | ``error`` | An error occurred during share creation. | +----------------------------------------+--------------------------------------------------------+ | ``error_deleting`` | An error occurred during share deletion. | +----------------------------------------+--------------------------------------------------------+ | ``available`` | The share is ready to use. | +----------------------------------------+--------------------------------------------------------+ | ``inactive`` | The share is inactive. | +----------------------------------------+--------------------------------------------------------+ | ``manage_starting`` | Share manage started. | +----------------------------------------+--------------------------------------------------------+ | ``manage_error`` | Share manage failed. | +----------------------------------------+--------------------------------------------------------+ | ``unmanage_starting`` | Share unmanage started. | +----------------------------------------+--------------------------------------------------------+ | ``unmanage_error`` | Share cannot be unmanaged. | +----------------------------------------+--------------------------------------------------------+ | ``unmanaged`` | Share was unmanaged. | +----------------------------------------+--------------------------------------------------------+ | ``extending`` | The extend, or increase, share size request was issued | | | successfully. | +----------------------------------------+--------------------------------------------------------+ | ``extending_error`` | Extend share failed. | +----------------------------------------+--------------------------------------------------------+ | ``shrinking`` | Share is being shrunk. | +----------------------------------------+--------------------------------------------------------+ | ``shrinking_error`` | Failed to update quota on share shrinking. | +----------------------------------------+--------------------------------------------------------+ | ``shrinking_possible_data_loss_error`` | Shrink share failed due to possible data loss. | +----------------------------------------+--------------------------------------------------------+ | ``migrating`` | Share is currently migrating. | +----------------------------------------+--------------------------------------------------------+ | ``migrating_to`` | Share is a migration destination. | +----------------------------------------+--------------------------------------------------------+ | ``replication_change`` | The share is undergoing a replication change. | +----------------------------------------+--------------------------------------------------------+ | ``reverting`` | Share is being reverted to a snapshot. | +----------------------------------------+--------------------------------------------------------+ | ``reverting_error`` | Share revert to snapshot failed. | +----------------------------------------+--------------------------------------------------------+ List shares =========== .. rest_method:: GET /v2/{project_id}/shares Lists all shares. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query - name: name_query - status: status_query - share_server_id: share_server_id_query - metadata: metadata_query - extra_specs: extra_specs_query - share_type_id: share_type_id_query - snapshot_id: snapshot_id_query - host: host_query - share_network_id: share_network_id_query - project_id: project_id_query - is_public: is_public_query - share_group_id: share_group_id_query - export_location_id: export_location_id_query - export_location_path: export_location_path_query - name~: name_inexact_query - description~: description_inexact_query - with_count: with_count_query - limit: limit - offset: offset - sort_key: sort_key - sort_dir: sort_dir Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: id_4 - links: links - name: name - count: count Response example ---------------- .. literalinclude:: samples/shares-list-response.json :language: javascript List shares with details ======================== .. rest_method:: GET /v2/{project_id}/shares/detail Lists all shares, with details. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query - status: status_query - share_server_id: share_server_id_query - metadata: metadata_query - extra_specs: extra_specs_query - share_type_id: share_type_id_query - name: name_query - snapshot_id: snapshot_id_query - host: host_query - share_network_id: share_network_id_query - project_id: project_id_query - is_public: is_public_query - share_group_id: share_group_id_query - export_location_id: export_location_id_query - export_location_path: export_location_path_query - name~: name_inexact_query - description~: description_inexact_query - with_count: with_count_query - limit: limit - offset: offset - sort_key: sort_key - sort_dir: sort_dir Response parameters ------------------- .. rest_parameters:: parameters.yaml - share_type_name: share_type_name - links: links - availability_zone: availability_zone - share_network_id: share_network_id - export_locations: export_locations - share_server_id: share_server_id - snapshot_id: snapshot_id_share_response - id: id_4 - size: size_2 - share_type: share_type_1 - export_location: export_location - project_id: project_id - metadata: metadata - status: status_16 - progress: progress_share_instance - description: description - host: host_1 - access_rules_status: access_rules_status - is_public: is_public - share_group_id: share_group_id - task_state: task_state - snapshot_support: snapshot_support - name: name - has_replicas: has_replicas - replication_type: replication_type - created_at: created_at - share_proto: share_proto - volume_type: volume_type - count: count Response example ---------------- .. literalinclude:: samples/shares-list-detailed-response.json :language: javascript Show share details ================== .. rest_method:: GET /v2/{project_id}/shares/{share_id} Shows details for a share. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id Response parameters ------------------- .. 
rest_parameters:: parameters.yaml - share_type_name: share_type_name - links: links - availability_zone: availability_zone_1 - share_network_id: share_network_id - export_locations: export_locations - share_server_id: share_server_id - snapshot_id: snapshot_id_share_response - id: id_4 - size: size_2 - share_type: share_type_1 - export_location: export_location - project_id: project_id - metadata: metadata - status: status_16 - progress: progress_share_instance - description: description - host: host_9 - access_rules_status: access_rules_status - is_public: is_public - share_group_id: share_group_id - task_state: task_state - snapshot_support: snapshot_support - name: name - has_replicas: has_replicas - replication_type: replication_type - created_at: created_at - share_proto: share_proto - volume_type: volume_type Response example ---------------- .. literalinclude:: samples/share-show-response.json :language: javascript Create share ============ .. rest_method:: POST /v2/{project_id}/shares Creates a share. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_proto: share_proto - size: size - name: name_request - description: description_request - display_name: display_name_request - display_description: display_description_request - share_type: share_type - volume_type: volume_type - snapshot_id: snapshot_id_request - is_public: is_public - share_group_id: share_group_id_request - metadata: metadata - share_network_id: share_network_id_2 - availability_zone: availability_zone Request example --------------- .. literalinclude:: samples/share-create-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: id_4 - status: status_3 - progress: progress_share_instance - links: links - project_id: project_id - share_proto: share_proto - size: size - name: name - description: description - share_type: share_type_1 - share_type_name: share_share_type_name - has_replicas: has_replicas - replication_type: replication_type - volume_type: volume_type - snapshot_id: snapshot_id_share_response - is_public: is_public - share_group_id: share_group_id - metadata: metadata - share_network_id: share_network_id - availability_zone: availability_zone_1 - export_location: export_location - export_locations: export_locations - host: host_1 - task_state: task_state - share_server_id: share_server_id - snapshot_support: snapshot_support - created_at: created_at Response example ---------------- .. literalinclude:: samples/share-create-response.json :language: javascript Manage share (since API v2.7) ============================= .. rest_method:: GET /v2/{project_id}/shares/manage .. versionadded:: 2.7 Use this API to bring a share under the management of the Shared File Systems service. Administrator only. Use the ``policy.json`` file to grant permissions for this action to other roles. .. note:: Managing shares that are created on top of managed share servers (i.e. with parameter ``share_server_id``) is not supported prior to API version 2.49. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 - 422 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - share: share - protocol: protocol - name: name_request - share_type: share_type_2 - driver_options: driver_options - export_path: export_path - service_host: service_host - share_server_id: manage_share_server_id - is_public: is_public - description: description_request Request example --------------- .. literalinclude:: samples/share-manage-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - share: share - links: links - availability_zone: availability_zone_1 - share_network_id: share_network_id - export_locations: export_locations - share_server_id: share_server_id - snapshot_id: snapshot_id_share_response - id: id_4 - size: size_2 - share_type: share_type_1 - share_type_name: share_share_type_name - has_replicas: has_replicas - replication_type: replication_type - export_location: export_location - project_id: project_id - metadata: metadata - status: status_8 - share_server_id: manage_share_server_id - description: description - host: host_9 - is_public: is_public - share_group_id: share_group_id - snapshot_support: snapshot_support - name: name - created_at: created_at - share_proto: share_proto - volume_type: volume_type Response example ---------------- .. literalinclude:: samples/share-manage-response.json :language: javascript Update share ============ .. rest_method:: PUT /v2/{project_id}/shares/{share_id} Updates a share. You can update these attributes: - ``display_name``, which also changes the ``name`` of the share. - ``display_description``, which also changes the ``description`` of the share. - ``is_public``. Changes the level of visibility. If you try to update other attributes, they retain their previous values. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - is_public: is_public - display_name: display_name_request - display_description: display_description_request Request example --------------- .. literalinclude:: samples/share-update-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - share_type_name: share_share_type_name - links: links - availability_zone: availability_zone_1 - share_network_id: share_network_id - export_locations: export_locations - share_server_id: share_server_id - snapshot_id: snapshot_id_share_response - id: id_4 - size: size_2 - share_type: share_type_1 - export_location: export_location - project_id: project_id - metadata: metadata - status: status_16 - description: description - host: host_9 - access_rules_status: access_rules_status - is_public: is_public - share_group_id: share_group_id - task_state: task_state - snapshot_support: snapshot_support - name: name - has_replicas: has_replicas - replication_type: replication_type - created_at: created_at - share_proto: share_proto - volume_type: volume_type Response example ---------------- .. literalinclude:: samples/share-update-response.json :language: javascript Delete share ============ .. rest_method:: DELETE /v2/{project_id}/shares/{share_id} Deletes a share. Preconditions - Share status must be ``available``, ``error`` or ``inactive`` - You cannot already have a snapshot of the share. - You cannot already have a group snapshot of the share. - You cannot already have a replica of the share. 
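For illustration only, the following minimal Python sketch (not part of the upstream samples; the endpoint, project ID, share ID and token are hypothetical placeholders) issues the delete request described above once the preconditions are met:

.. code-block:: python

   import requests

   # Placeholder values -- substitute the details of your own deployment.
   MANILA_ENDPOINT = "http://controller:8786/v2"          # hypothetical endpoint
   PROJECT_ID = "16e1ab15c35a457e9c2b2aa189f544e1"        # hypothetical project ID
   SHARE_ID = "011d21e2-fbc3-4e4a-9993-9ea223f73264"      # hypothetical share ID
   TOKEN = "<keystone-auth-token>"

   url = f"{MANILA_ENDPOINT}/{PROJECT_ID}/shares/{SHARE_ID}"
   headers = {
       "X-Auth-Token": TOKEN,
       # Optional: pin a microversion so behaviour is predictable.
       "X-OpenStack-Manila-API-Version": "2.7",
   }

   # A 202 response means the request was accepted; the share passes through
   # the "deleting" status before it is removed.
   response = requests.delete(url, headers=headers)
   response.raise_for_status()
   print(response.status_code)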
Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id manila-10.0.0/api-ref/source/quota-sets.inc0000664000175000017500000001456513656750227020526 0ustar zuulzuul00000000000000.. -*- rst -*- ========== Quota sets ========== Provides quotas management support. .. important:: For API versions 2.6 and prior, replace ``quota-sets`` in the URLs with ``os-quota-sets``. Share type quotas were added in API version 2.39. It is possible to set quotas per share type for the following quota resources: - ``gigabytes`` - ``snapshots`` - ``shares`` - ``snapshot_gigabytes`` Share groups and share group snapshots were added to quota management APIs in API version 2.40. Show default quota set ====================== .. rest_method:: GET /v2/{project_id}/quota-sets/{project_id}/defaults Shows default quotas for a given project. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - project_id: project_id_quota_request_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - quota_set: quota_set - id: quota_project_id - gigabytes: quota_gigabytes - snapshots: quota_snapshots - shares: quota_shares - snapshot_gigabytes: quota_snapshot_gigabytes - share_networks: quota_share_networks - share_groups: quota_share_groups - share_group_snapshots: quota_share_group_snapshots - share_networks: quota_share_networks_default Response example ---------------- .. literalinclude:: samples/quota-show-response.json :language: javascript Show quota set ============== .. rest_method:: GET /v2/{project_id}/quota-sets/{project_id}?user_id={user_id} Shows quotas for a given project.. If you specify the optional ``user_id`` query parameter, you get the quotas for this user in the project. If you omit this parameter, you get the quotas for the project. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - project_id: project_id_quota_request_path - user_id: user_id_query - share_type: share_type_for_quota Response parameters ------------------- .. rest_parameters:: parameters.yaml - quota_set: quota_set - id: quota_project_id - gigabytes: quota_gigabytes - snapshots: quota_snapshots - shares: quota_shares - snapshot_gigabytes: quota_snapshot_gigabytes - share_networks: quota_share_networks - share_groups: quota_share_groups - share_group_snapshots: quota_share_group_snapshots Response example ---------------- .. literalinclude:: samples/quota-show-response.json :language: javascript Show quota set in detail (since API v2.25) ========================================== .. rest_method:: GET /v2/{project_id}/quota-sets/{project_id}/detail?user_id={user_id} .. versionadded:: 2.25 Shows quotas for a project in detail. If you specify the optional ``user_id`` query parameter, you get the quotas for this user in the project. If you omit this parameter, you get the quotas for the project. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - project_id: project_id_quota_request_path - user_id: user_id_query - share_type: share_type_for_quota Response parameters ------------------- .. rest_parameters:: parameters.yaml - quota_set: quota_set - id: quota_project_id - gigabytes: quota_gigabytes_detail - snapshots: quota_snapshots_detail - shares: quota_shares_detail - snapshot_gigabytes: quota_snapshot_gigabytes_detail - share_networks: quota_share_networks_detail - share_groups: quota_share_groups_detail - share_group_snapshots: quota_share_group_snapshots_detail Response example ---------------- .. literalinclude:: samples/quota-show-detail-response.json :language: javascript Update quota set ================ .. rest_method:: PUT /v2/{project_id}/quota-sets/{project_id}?user_id={user_id} Updates quotas for a project. If you specify the optional ``user_id`` query parameter, you update the quotas for this user in the project. If you omit this parameter, you update the quotas for the project. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - project_id: project_id_quota_request_path - user_id: user_id_query - quota_set: quota_set - force: force - gigabytes: quota_gigabytes_request - snapshots: quota_snapshots_request - snapshot_gigabytes: quota_snapshot_gigabytes_request - shares: quota_shares_request - share_networks: quota_share_networks_request - share_groups: quota_share_groups_request - share_group_snapshots: quota_share_group_snapshots_request - share_type: share_type_for_quota Request example --------------- .. literalinclude:: samples/quota-update-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - quota_set: quota_set - id: quota_project_id - gigabytes: quota_gigabytes - snapshots: quota_snapshots - shares: quota_shares - snapshot_gigabytes: quota_snapshot_gigabytes - share_networks: quota_share_networks - share_groups: quota_share_groups - share_group_snapshots: quota_share_group_snapshots Response example ---------------- .. literalinclude:: samples/quota-update-response.json :language: javascript Delete quota set ================ .. rest_method:: DELETE /v2/{project_id}/quota-sets/{project_id}?user_id={user_id} Deletes quotas for a project. The quota reverts to the default quota. If you specify the optional ``user_id`` query parameter, you delete the quotas for this user in the project. If you omit this parameter, you delete the quotas for the project. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - project_id: project_id_quota_request_path - user_id: user_id_query - share_type: share_type_for_quota manila-10.0.0/api-ref/source/share-replicas.inc0000664000175000017500000002424013656750227021312 0ustar zuulzuul00000000000000.. -*- rst -*- ================================ Share replicas (since API v2.11) ================================ Share replicas are the replicated copies of the existing share. You can use Share Replicas to sync data so that each share replica has an identical copy of the same share. Share replication can be used as a disaster recovery solution or as a load sharing mirroring solution. 
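Before the individual calls are described, here is a minimal, illustrative Python sketch of creating a secondary copy through the create API documented later in this section; the endpoint, identifiers and token are hypothetical placeholders rather than values defined by this reference:

.. code-block:: python

   import json

   import requests

   # Placeholder values -- substitute the details of your own deployment.
   MANILA_ENDPOINT = "http://controller:8786/v2"          # hypothetical endpoint
   PROJECT_ID = "16e1ab15c35a457e9c2b2aa189f544e1"        # hypothetical project ID
   TOKEN = "<keystone-auth-token>"

   url = f"{MANILA_ENDPOINT}/{PROJECT_ID}/share-replicas"
   headers = {
       "X-Auth-Token": TOKEN,
       "Content-Type": "application/json",
       # Share replica APIs require at least microversion 2.11.
       "X-OpenStack-Manila-API-Version": "2.11",
   }
   body = {
       "share_replica": {
           # The share must use a share type with a replication_type extra-spec.
           "share_id": "65a34695-f9e5-4eea-b48d-a0b261d82943",   # hypothetical share UUID
           # Ask the scheduler to place the copy in another availability zone.
           "availability_zone": "nova",
       }
   }

   response = requests.post(url, headers=headers, data=json.dumps(body))
   response.raise_for_status()
   print(json.dumps(response.json(), indent=2))

The new replica starts in the ``creating`` status; its ``replica_state`` can then be tracked with the list and show calls documented below.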
Manila supports replication of shares between different storage pools. These pools may be on different back-end storage systems or within the same back end, depending upon the replication style chosen, the capability of the driver and the configuration of back ends. To ensure that a secondary copy is scheduled to a distinct back end, you must specify the ``availability_zone`` attribute. .. note:: You can create a replicated share with the help of a share type that has an extra-spec ``replication_type`` specified with a valid replication style. Once a replicated share has been created, it always starts out with an ``active`` replica. You may then create secondary copies of the share. A secondary copy can be "promoted" to fail-over to becoming the ``active`` replica. To create a share that supports replication, the share type must specify one of these supported replication types: - writable Synchronously replicated shares where all replicas are writable. Promotion is not supported and not needed because all copies are already exported and can be accessed simultaneously. - readable Mirror-style replication with a primary (writable) copy and one or more secondary (read-only) copies which can become writable after a promotion. - dr (for Disaster Recovery) Generalized replication with secondary copies that are inaccessible until they are promoted to become the active replica. .. important:: The term active replica refers to the primary share. In writable style of replication, all replicas are active, and there could be no distinction of a primary share. In readable and dr styles of replication, a secondary replica may be referred to as passive, non-active or simply replica. Create share replica ==================== .. rest_method:: POST /v2/{project_id}/share-replicas .. versionadded:: 2.11 Create a share replica for the share. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_replica_share_id - availability_zone: share_replica_az - share_network_id: share_replica_share_network_id Request example --------------- .. literalinclude:: samples/share-replica-create-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - share_id: share_replica_share_id - status: share_replica_status - cast_rules_to_readonly: share_replica_cast_rules_to_readonly - updated_at: updated_at - share_network_id: share_network_id - share_server_id: share_server_id - host: share_replica_host - id: share_replica_id - replica_state: share_replica_replica_state - created_at: created_at Response example ---------------- .. literalinclude:: samples/share-replica-create-response.json :language: javascript Promote share replica ===================== .. rest_method:: POST /v2/{project_id}/share-replicas/{share_replica_id}/action .. versionadded:: 2.11 Promotes a replica to ``active`` replica state. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_replica_id: share_replica_id_path Resync share replica ==================== .. rest_method:: POST /v2/{project_id}/share-replicas/{share_replica_id}/action .. versionadded:: 2.11 Resync a replica with its ``active`` mirror. Response codes -------------- .. 
rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_replica_id: share_replica_id_path List share replicas =================== .. rest_method:: GET /v2/{project_id}/share-replicas?share_id={share_id} .. versionadded:: 2.11 Lists share replicas. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id Response parameters ------------------- .. rest_parameters:: parameters.yaml - share_id: share_replica_share_id - status: share_replica_status - id: share_replica_id - replica_state: share_replica_replica_state Response example ---------------- .. literalinclude:: samples/share-replicas-list-response.json :language: javascript List share replicas with details ================================ .. rest_method:: GET /v2/{project_id}/share-replicas/detail?share_id={share_id} .. versionadded:: 2.11 Lists share replicas with details. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id_replicas_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - share_id: share_replica_share_id - status: share_replica_status - cast_rules_to_readonly: share_replica_cast_rules_to_readonly - updated_at: updated_at - share_network_id: share_network_id - share_server_id: share_server_id - host: share_replica_host - id: share_replica_id - replica_state: share_replica_replica_state - created_at: created_at Response example ---------------- .. literalinclude:: samples/share-replicas-list-detail-response.json :language: javascript Show share replica ================== .. rest_method:: GET /v2/{project_id}/share-replicas/{share_replica_id} .. versionadded:: 2.11 Show a share replica. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_replica_id: share_replica_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - share_id: share_replica_share_id - status: share_replica_status - cast_rules_to_readonly: share_replica_cast_rules_to_readonly - updated_at: updated_at - share_network_id: share_network_id - share_server_id: share_server_id - host: share_replica_host - id: share_replica_id - replica_state: share_replica_replica_state - created_at: created_at Response example ---------------- .. literalinclude:: samples/share-replicas-show-response.json :language: javascript Reset status of the share replica ================================= .. rest_method:: POST /v2/{project_id}/share-replicas/{share_replica_id}/action .. versionadded:: 2.11 Administrator only. Explicitly updates the ``status`` of a share replica. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - share_replica_id: share_replica_id_path - reset_status: reset_status - status: share_replica_status Request example --------------- .. literalinclude:: samples/share-replicas-reset-state-request.json :language: javascript Reset replica_state of the share replica ======================================== .. rest_method:: POST /v2/{project_id}/share-replicas/{share_replica_id}/action .. versionadded:: 2.11 Administrator only. Explicitly updates the ``replica state`` of a share replica. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_replica_id: share_replica_id_path - reset_replica_state: share_replica_reset_replica_state - replica_state: share_replica_replica_state Request example --------------- .. literalinclude:: samples/share-replicas-reset-replica-state-request.json :language: javascript Delete share replica ==================== .. rest_method:: DELETE /v2/{project_id}/share-replicas/{share_replica_id} .. versionadded:: 2.11 Deletes a share replica. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 .. note:: The ``active`` replica cannot be deleted with this API. Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_replica_id: share_replica_id_path Force-delete share replica ========================== .. rest_method:: POST /v2/{project_id}/share-replicas/{share_replica_id}/action .. versionadded:: 2.11 Administrator only. Force-deletes a share replica in any state. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 .. note:: The ``active`` replica cannot be deleted with this API. Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_replica_id: share_replica_id_path - force_delete: share_replica_force_delete Request example --------------- .. literalinclude:: samples/share-replicas-force-delete-request.json :language: javascript manila-10.0.0/api-ref/source/limits.inc0000664000175000017500000000372613656750227017717 0ustar zuulzuul00000000000000.. -*- rst -*- ====== Limits ====== Limits are the resource limitations that are allowed for each tenant (project). An administrator can configure limits in the ``manila.conf`` file. Users can query their rate and absolute limits. The absolute limits contain information about: - Total maximum share memory, in GBs. - Number of share-networks. - Number of share-snapshots. - Number of shares. - Shares and total used memory, in GBs. - Snapshots and total used memory, in GBs. Rate limits control the frequency at which users can issue specific API requests. Administrators use rate limiting to configure limits on the type and number of API calls that can be made in a specific time interval. For example, a rate limit can control the number of GET requests that can be processed during a one-minute period. List share limits ================= .. rest_method:: GET /v2/{project_id}/limits Lists share limits. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. 
rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - maxTotalShareGigabytes: maxTotalShareGigabytes - maxTotalSnapshotGigabytes: maxTotalSnapshotGigabytes - maxTotalShares: maxTotalShares - maxTotalShareSnapshots: maxTotalShareSnapshots - maxTotalShareNetworks: maxTotalShareNetworks - totalSharesUsed: totalSharesUsed - totalShareSnapshotsUsed: totalShareSnapshotsUsed - totalShareNetworksUsed: totalShareNetworksUsed - totalShareGigabytesUsed: totalShareGigabytesUsed - totalSnapshotGigabytesUsed: totalSnapshotGigabytesUsed - uri: uri - regex: regex - value: value - verb: verb - remaining: remaining - unit: unit - next-available: next-available Response example ---------------- .. literalinclude:: samples/limits-response.json :language: javascript manila-10.0.0/api-ref/source/security-services.inc0000664000175000017500000001720113656750227022077 0ustar zuulzuul00000000000000.. -*- rst -*- ================= Security services ================= You can create, update, view, and delete security services. A security service resource represents configuration information for clients for authentication and authorization (AuthN/AuthZ). For example, a share server will be the client for an existing security service such as LDAP, Kerberos, or Microsoft Active Directory. The Shared File Systems service supports three security service types: - ``ldap``. LDAP. - ``kerberos``. Kerberos. - ``active_directory``. Microsoft Active Directory. You can configure a security service with these options: - A DNS IP address. Some drivers may allow a comma separated list of multiple addresses, e.g. NetApp ONTAP. - An IP address or host name. - A domain. - An ou, the organizational unit. (available starting with API version 2.44) - A user or group name. - The password for the user, if you specify a user name. A security service resource can also be given a user defined name and description. List security services ====================== .. rest_method:: GET /v2/{project_id}/security-services Lists all security services. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - status: security_service_status - type: security_service_type - id: security_service_id - name: name Response example ---------------- .. literalinclude:: samples/security-services-list-response.json :language: javascript List security services with details =================================== .. rest_method:: GET /v2/{project_id}/security-services/detail Lists all security services with details. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query Response parameters ------------------- .. 
rest_parameters:: parameters.yaml - status: security_service_status - id: security_service_id - project_id: project_id - type: security_service_type - name: name - description: description - dns_ip: security_service_dns_ip - user: security_service_user - password: security_service_password - domain: security_service_domain - ou: security_service_ou - server: security_service_server - updated_at: updated_at - created_at: created_at Response example ---------------- .. literalinclude:: samples/security-services-list-detailed-response.json :language: javascript Show security service details ============================= .. rest_method:: GET /v2/{project_id}/security-services/{security_service_id} Shows details for a security service. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - security_service_id: security_service_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - status: security_service_status - id: security_service_id - project_id: project_id - type: security_service_type - name: name - description: description - dns_ip: security_service_dns_ip - user: security_service_user - password: security_service_password - domain: security_service_domain - ou: security_service_ou - server: security_service_server - updated_at: updated_at - created_at: created_at Response example ---------------- .. literalinclude:: samples/security-service-show-response.json :language: javascript Create security service ======================= .. rest_method:: POST /v2/{project_id}/security-services Creates a security service. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - type: security_service_type - name: name_request - description: description_request - dns_ip: security_service_dns_ip_request - user: security_service_user_request - password: security_service_password_request - domain: security_service_domain_request - ou: security_service_ou_request - server: security_service_server_request Request example --------------- .. literalinclude:: samples/security-service-create-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - status: security_service_status - id: security_service_id - project_id: project_id - type: security_service_type - name: name - description: description - dns_ip: security_service_dns_ip - user: security_service_user - password: security_service_password - domain: security_service_domain - ou: security_service_ou - server: security_service_server - updated_at: updated_at - created_at: created_at Response example ---------------- .. literalinclude:: samples/security-service-create-response.json :language: javascript Update security service ======================= .. rest_method:: PUT /v2/{project_id}/security-services/{security_service_id} Updates a security service. If the security service is in ``active`` state, you can update only the ``name`` and ``description`` attributes. A security service in ``active`` state is attached to a share network with an associated share server. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. 
rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - security_service_id: security_service_id_path - type: security_service_type - name: name_request - description: description_request - dns_ip: security_service_dns_ip_request - user: security_service_user_request - password: security_service_password_request - domain: security_service_domain_request - ou: security_service_ou_request - server: security_service_server_request Request example --------------- .. literalinclude:: samples/security-service-update-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - status: security_service_status - id: security_service_id - project_id: project_id - type: security_service_type - name: name - description: description - dns_ip: security_service_dns_ip - user: security_service_user - password: security_service_password - domain: security_service_domain - ou: security_service_ou - server: security_service_server - updated_at: updated_at - created_at: created_at Response example ---------------- .. literalinclude:: samples/security-service-update-response.json :language: javascript Delete security service ======================= .. rest_method:: DELETE /v2/{project_id}/security-services/{security_service_id} Deletes a security service. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - security_service_id: security_service_id_path manila-10.0.0/api-ref/source/quota-classes.inc0000664000175000017500000000517513656750227021202 0ustar zuulzuul00000000000000.. -*- rst -*- =============== Quota class set =============== Quota classes can be shown and updated for a project. Show quota classes for a project ================================ .. rest_method:: GET /v2/{project_id}/quota-class-sets/{quota_class_name} Shows quota class set for a project. If no specific value for the quota class resource exists, then the default value will be reported. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - quota_class_name: quota_class_name Response Parameters ------------------- .. rest_parameters:: parameters.yaml - quota_class_set: quota_class_set - share_groups: maxTotalShareGroups - gigabytes: maxTotalShareGigabytes - share_group_snapshots: maxTotalShareGroupSnapshots - snapshots: maxTotalShareSnapshots - snapshot_gigabytes: maxTotalSnapshotGigabytes - shares: maxTotalShares - id: quota_class_id - share_networks: maxTotalShareNetworks Response Example ---------------- .. literalinclude:: ./samples/quota-classes-show-response.json :language: javascript Update quota classes for a project ================================== .. rest_method:: PUT /v2/{project_id}/quota-class-sets/{quota_class_name} Updates quota class set for a project. If the ``quota_class_name`` key does not exist, then the API will create one. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 403 - 404 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - quota_class_name: quota_class_name - shares: maxTotalSharesOptional - snapshots: maxTotalShareSnapshotsOptional - gigabytes: maxTotalShareGigabytesOptional - snapshot-gigabytes: maxTotalSnapshotGigabytesOptional - share-networks: maxTotalShareNetworksOptional Request Example --------------- .. literalinclude:: ./samples/quota-classes-update-request.json :language: javascript Response Parameters ------------------- .. rest_parameters:: parameters.yaml - quota_class_set: quota_class_set - share_groups: maxTotalShareGroups - gigabytes: maxTotalShareGigabytes - share_group_snapshots: maxTotalShareGroupSnapshots - snapshots: maxTotalShareSnapshots - snapshot_gigabytes: maxTotalSnapshotGigabytes - shares: maxTotalShares - share_networks: maxTotalShareNetworks Response Example ---------------- .. literalinclude:: ./samples/quota-classes-update-response.json :language: javascript manila-10.0.0/api-ref/source/scheduler-stats.inc0000664000175000017500000000601613656750227021523 0ustar zuulzuul00000000000000.. -*- rst -*- =============================== Scheduler Stats - Storage Pools =============================== An administrator can list all back-end storage pools that are known to the scheduler service. List back-end storage pools =========================== .. rest_method:: GET /v2/{project_id}/scheduler-stats/pools?pool={pool_name}&host={host_name}&backend={backend_name}&capabilities={capabilities}&share_type={share_type} Lists all back-end storage pools. If search options are provided, the pool list that is returned is filtered with these options. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - pool_name: backend_pool_query - host_name: backend_host_query - backend_name: backend_query - capabilities: backend_capabilities_query - share_type: share_type_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - backend: backend - host: backend_host - pool: pool - name: backend_name Response example ---------------- .. literalinclude:: samples/pools-list-response.json :language: javascript List back-end storage pools with details ======================================== .. rest_method:: GET /v2/{project_id}/scheduler-stats/pools/detail?pool={pool_name}&host={host_name}&backend={backend_name}&capabilities={capabilities}&share_type={share_type} Lists all back-end storage pools with details. If search options are provided, the pool list that is returned is filtered with these options. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - pool_name: backend_pool_query - host_name: backend_host_query - backend_name: backend_query - capabilities: backend_capabilities_query - share_type: share_type_query Response parameters ------------------- .. 
rest_parameters:: parameters.yaml - pools: pools - name: backend_name - backend: backend - pool: pool - host: backend_host - capabilities: capabilities - qos: capability_qos - timestamp: timestamp - share_backend_name: capability_share_backend_name - server_pools_mapping: capability_server_pools_mapping - driver_handles_share_servers: capability_driver_handles_share_servers - driver_version: capability_driver_version - total_capacity_gb: capability_total_capacity_gb - free_capacity_gb: capability_free_capacity_gb - reserved_percentage: capability_reserved_percentage - vendor_name: capability_vendor_name - snapshot_support: capability_snapshot_support - replication_domain: capability_replication_domain - storage_protocol: capability_storage_protocol Response example ---------------- .. literalinclude:: samples/pools-list-detailed-response.json :language: javascript manila-10.0.0/api-ref/source/snapshot-instances.inc0000664000175000017500000001047713656750227022243 0ustar zuulzuul00000000000000.. -*- rst -*- ========================================== Share snapshot instances (since API v2.19) ========================================== A share snapshot instance is an internal representation for a snapshot of a share. A single snapshot can have multiple snapshot instances if the parent share has multiple ``instances``. When a share is replicated or is in the process of being migrated, it can live in multiple places and each individual location is called an "instance", internally within the Shared File Systems service. By default administrators can list, show information for and explicitly set the state of share snapshot instances. Use the ``policy.json`` file to grant permissions for these actions to other roles. List share snapshot instances ============================= .. rest_method:: GET /v2/{project_id}/snapshot-instances .. versionadded:: 2.19 Lists all share snapshot instances. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - snapshot_id: snapshot_id_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: snapshot_instance_id_response - snapshot_id: snapshot_id - status: snapshot_instance_status Response example ---------------- .. literalinclude:: samples/snapshot-instances-list-response.json :language: javascript List share snapshot instances with details ========================================== .. rest_method:: GET /v2/{project_id}/snapshot-instances/detail .. versionadded:: 2.19 Lists all share snapshot instances with details. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - snapshot_id: snapshot_id_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: snapshot_instance_id_response - snapshot_id: snapshot_id - created_at: created_at - updated_at: updated_at - status: snapshot_instance_status - share_id: share_id - share_instance_id: share_instance_id_1 - progress: progress - provider_location: snapshot_provider_location Response example ---------------- .. literalinclude:: samples/snapshot-instances-list-with-detail-response.json :language: javascript Show share snapshot instance details ==================================== .. 
rest_method:: GET /v2/{project_id}/snapshot-instances/{snapshot_instance_id} .. versionadded:: 2.19 Shows details for a share snapshot instance. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - snapshot_instance_id: snapshot_instance_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: snapshot_instance_id_response - snapshot_id: snapshot_id - created_at: created_at - updated_at: updated_at - status: snapshot_instance_status - share_id: share_id - share_instance_id: share_instance_id_1 - progress: progress - provider_location: snapshot_provider_location Response example ---------------- .. literalinclude:: samples/snapshot-instance-show-response.json :language: javascript Reset share snapshot instance state =================================== .. rest_method:: POST /v2/{project_id}/snapshot-instances/{snapshot_instance_id}/action .. versionadded:: 2.19 Administrator only. Explicitly updates the state of a share snapshot instance. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - snapshot_instance_id: snapshot_instance_id_path - status: snapshot_instance_status Request example --------------- .. literalinclude:: samples/snapshot-instance-actions-reset-state-request.json :language: javascript manila-10.0.0/api-ref/source/share-groups.inc0000664000175000017500000001672113656750227021034 0ustar zuulzuul00000000000000.. -*- rst -*- ============================== Share groups (since API v2.31) ============================== Share groups enable you to create a group of shares and manage them together. A project can group shares used by the same application in a share group so that operations such as consistency group snapshots, clones, backups, migration, replication and retyping can be applied to them as a unit. A share can join a share group only at share creation time; if a share was created without a ``share_group_id``, it cannot be added to a share group later. You can create a share group and associate it with multiple shares, list share groups, and show information for, update, and delete a share group. .. note:: Share Group APIs are `experimental APIs <#experimental-apis>`_. The ``availability_zone_id`` and ``consistent_snapshot_support`` fields were added to the ``share_group`` object in version 2.34. List share groups ================= .. rest_method:: GET /v2/{project_id}/share_groups .. versionadded:: 2.31 Lists all share groups. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query - name: name_query - description: description_query - status: share_group_status_query - share_server_id: share_server_id_query - snapshot_id: snapshot_id_query - host: backend_host_query - share_network_id: share_network_id_query - share_group_type_id: share_group_type_id_query - share_group_snapshot_id: source_share_group_snapshot_id_query - share_types: share_types_query - limit: limit_query - offset: offset - sort_key: sort_key - sort_dir: sort_dir - name~: name_inexact_query - description~: description_inexact_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_group_id - links: share_group_links - name: name - status: share_group_status - description: description Response example ---------------- .. literalinclude:: samples/share-groups-list-response.json :language: javascript Show share group details ======================== .. rest_method:: GET /v2/{project_id}/share_groups/{share_group_id} .. versionadded:: 2.31 Shows details for a share group. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_id: share_group_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_group_id - name: name - created_at: created_at - status: share_group_status - description: description - project_id: project_id - host: backend_host - share_group_type_id: share_group_type_id_required - source_share_group_snapshot_id: source_share_group_snapshot_id_response - share_network_id: share_network_id - share_types: share_types_1 - links: share_group_links - availability_zone: availability_zone_id_2 - consistent_snapshot_support: consistent_snapshot_support Response example ---------------- .. literalinclude:: samples/share-group-show-response.json :language: javascript Create share group ================== .. rest_method:: POST /v2/{project_id}/share_groups .. versionadded:: 2.31 Creates a share group. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - name: name_request - description: description_request - share_types: share_types - share_group_type: share_group_type_id - share_network: share_network_id_1 - source_share_group_snapshot: source_share_group_snapshot_id - availability_zone: availability_zone_id_2 Request example --------------- .. literalinclude:: samples/share-group-create-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_group_id - name: name - created_at: created_at - status: share_group_status - description: description - project_id: project_id - host: share_group_host - share_group_type_id: share_group_type_id_required - source_share_group_snapshot_id: source_share_group_snapshot_id_response - share_network_id: share_network_id - share_types: share_types_1 - links: share_group_links - availability_zone: availability_zone_id_2 - consistent_snapshot_support: consistent_snapshot_support Response example ---------------- .. literalinclude:: samples/share-group-create-response.json :language: javascript Reset share group state ======================= .. 
rest_method:: POST /v2/{project_id}/share-groups/{share_group_id}/action .. versionadded:: 2.31 Administrator only. Explicitly updates the state of a share group. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_id: share_group_id_path - reset_status: reset_status - status: share_group_status Request example --------------- .. literalinclude:: samples/share-group-reset-state-request.json :language: javascript Update share group ================== .. rest_method:: PUT /v2/{project_id}/share-groups/{share_group_id} .. versionadded:: 2.31 Updates a share group. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_id: share_group_id_path - name: name_request - description: description_request Request example --------------- .. literalinclude:: samples/share-group-update-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_group_id - name: name - created_at: created_at - status: share_group_status - description: description - project_id: project_id - host: share_group_host - share_group_type_id: share_group_type_id_required - source_share_group_snapshot_id: source_share_group_snapshot_id - share_network_id: share_network_id - share_types: share_types_1 - links: share_group_links - availability_zone: availability_zone_id_2 - consistent_snapshot_support: consistent_snapshot_support Response example ---------------- .. literalinclude:: samples/share-group-update-response.json :language: javascript Delete share group ================== .. rest_method:: DELETE /v2/{project_id}/share-groups/{share_group_id} .. versionadded:: 2.31 Deletes a share group. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_id: share_group_id_path - force: share_force_delete manila-10.0.0/api-ref/source/os-share-manage.inc0000664000175000017500000000726413656750227021366 0ustar zuulzuul00000000000000.. -*- rst -*- ======================================= Manage and unmanage shares (DEPRECATED) ======================================= Allows bringing shared file systems under service management. Manage share (DEPRECATED) ========================= .. warning:: This API is deprecated starting with microversion 2.7 and requests to this API will fail with a 404 starting from microversion 2.7. Use `Share Manage API <#manage-share-since-api-v2-7>`_ instead of this API from version 2.7. .. rest_method:: POST /v2/{project_id}/os-share-manage Use this API to bring a share under the management of the Shared File Systems service. In the service, the share will be represented as a resource in the database. It can have a user defined name and description. Administrator only. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - share: share - protocol: protocol - name: name_request - display_name: display_name_request - share_type: share_type_2 - driver_options: driver_options - export_path: export_path - service_host: service_host - is_public: is_public - description: description_request - display_description: display_description_request Request example --------------- .. literalinclude:: samples/share-manage-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - share: share - links: links - availability_zone: availability_zone_1 - share_network_id: share_network_id - export_locations: export_locations - share_server_id: share_server_id - snapshot_id: snapshot_id_share_response - id: id_4 - size: size_2 - share_type: share_type_1 - share_type_name: share_type_name - has_replicas: has_replicas - replication_type: replication_type - export_location: export_location - project_id: project_id - metadata: metadata - status: status_8 - description: description - host: host_1 - is_public: is_public - snapshot_support: snapshot_support - name: name - created_at: created_at - share_proto: share_proto - volume_type: volume_type Response example ---------------- .. literalinclude:: samples/share-manage-response.json :language: javascript Unmanage share (DEPRECATED) =========================== .. warning:: This API is deprecated starting with microversion 2.7 and requests to this API will fail with a 404 starting from microversion 2.7. Use `Share Unmanage API <#unmanage-share-since-api-v2-7>`_ instead of this API from version 2.7. .. rest_method:: POST /v2/{project_id}/os-share-unmanage/{share_id}/unmanage Use this API to remove a share from the management of the Shared File Systems service without deleting the share. Administrator only. Use the ``policy.json`` file to grant permissions for this action to other roles. Preconditions: - This API does not support unmanaging shares that are created on top of share servers (i.e. created with share networks). - You should remove any snapshots and share replicas before attempting to unmanage a share. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id Response parameters ------------------- There is no body content for the response. manila-10.0.0/api-ref/source/services.inc0000664000175000017500000000606613656750227020241 0ustar zuulzuul00000000000000.. -*- rst -*- ======== Services ======== These APIs help in interacting with the Shared File Systems services, ``manila-scheduler``, ``manila-share`` and ``manila-data``. .. important:: For API versions 2.6 and prior, replace ``services`` in the URLs with ``os-services``. List services ============= .. rest_method:: GET /v2/{project_id}/services?host={host}&binary={binary}&zone={zone}&state={state}&status={status} Lists all services optionally filtered with the specified search options. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - host: service_host_query - binary: service_binary_query - zone: service_zone_query - state: service_state_query - status: service_status_query Response parameters ------------------- .. 
rest_parameters:: parameters.yaml - services: services - id: service_id_response - status: service_status_response - binary: service_binary_response - zone: service_zone_response - host: service_host_response - state: service_state_response - updated_at: updated_at Response example ---------------- .. literalinclude:: samples/services-list-response.json :language: javascript Enable service ============== .. rest_method:: PUT /v2/{project_id}/services/enable Enables a service. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - binary: service_enable_binary_request - host: service_enable_host_request Request example --------------- .. literalinclude:: samples/service-enable-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - host: service_enable_host_response - binary: service_binary_response - disabled: service_disabled_response Response example ---------------- .. literalinclude:: samples/service-enable-response.json :language: javascript Disable service =============== .. rest_method:: PUT /v2/{project_id}/services/disable Disables a service. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - binary: service_disable_binary_request - host: service_disable_host_request Request example --------------- .. literalinclude:: samples/service-disable-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - host: service_disable_host_response - binary: service_disable_binary_response - disabled: service_disabled_response Response example ---------------- .. literalinclude:: samples/service-disable-response.json :language: javascript manila-10.0.0/api-ref/source/parameters.yaml0000664000175000017500000022652513656750227020756 0ustar zuulzuul00000000000000# variables in header #{} # variables in path access_id_path: description: | The UUID of the access rule to which access is granted. in: path required: true type: string api_version: in: path required: true type: string description: > The API version as returned in the links from the ``GET /`` call. export_location_id_path: description: | The UUID of the export location. in: path required: true type: string extra_spec_key_path: description: | The extra specification key in: path required: true type: string group_snapshot_id_path: description: | The group snapshot ID. in: path required: true type: string message_id: description: | The UUID of the message. in: path required: false type: string metadata_key_path: description: | The key of a metadata item. For example, if the metadata on an existing share or access rule is as follows: ``"project": "my_test", "aim": "testing"``, the keys are "project" and "aim". in: path required: false type: string project_id_path: description: | The project ID of the user or service making the API request. in: path required: true type: string project_id_quota_request_path: description: | The ID of the project whose quotas must be acted upon by the API. This ID can be different from the first project ID in the URI. 
For example, in a multi-tenant cloud, the first ID in the URI is typically the project ID of a privileged user (such as a cloud administrator) that can create, query or delete quotas of other projects in the cloud. in: path required: true type: string quota_class_name: description: The name of the quota class for which to set quotas. in: path required: true type: string security_service_id_path: description: | The UUID of the security service. in: path required: true type: string share_group_id_path: description: | The UUID of the share group. in: path required: true type: string share_group_type_id_path: description: | The UUID of the share group type. in: path required: true type: string share_id: description: | The UUID of the share. in: path required: true type: string share_instance_id: description: | The UUID of the share instance. in: path required: true type: string share_network_id_path: description: | The UUID of the share network. in: path required: true type: string share_replica_id_path: description: | The UUID of the share replica. in: path required: true type: string share_type_for_quota: description: | The name or UUID of the share type. If you specify this parameter in the URI, you show, update, or delete quotas for this share type. in: path required: false type: string min_version: 2.39 share_type_id: description: | The UUID of the share type. in: path required: true type: string snapshot_id_path: description: | The UUID of the snapshot. in: path required: true type: string snapshot_instance_id_path: description: | The UUID of the share snapshot instance. in: path required: true type: string # variables in query action_id: in: query required: false type: string description: > The ID of the action during which the message was created. all_tenants_query: description: | (Admin only). Defines whether to list the requested resources for all projects. Set to ``1`` to list resources for all projects. Set to ``0`` to list resources only for the current project. Examples of resources include shares, snapshots, share networks, security services and share groups. in: query required: false type: boolean backend_capabilities_query: description: | The capabilities for the storage back end. in: query required: false type: string backend_host_query: description: | The host name for the back end. in: query required: false type: string backend_pool_query: description: | The pool name for the back end. in: query required: false type: string backend_query: description: | The name of the back end. in: query required: false type: string description_inexact_query: description: | The description pattern that can be used to filter shares, share snapshots, share networks or share groups. in: query required: false type: string min_version: 2.36 description_query: description: | The user defined description text that can be used to filter resources. in: query required: false type: string detail_id: in: query required: false type: string description: > The ID of the message detail. export_location_id_query: description: | The export location UUID that can be used to filter shares or share instances. in: query required: false type: string min_version: 2.35 export_location_path_query: description: | The export location path that can be used to filter shares or share instances. in: query required: false type: string min_version: 2.35 extra_specs_query: description: | The extra specifications as a set of one or more key-value pairs. 
In each pair, the key is the name of the extra specification and the value is the expected value of that extra specification; these pairs are used to filter the list of share types. in: query required: false type: string min_version: 2.43 group_snapshot_status_query: description: | Filters by a share group snapshot status. A valid value is ``creating``, ``error``, ``available``, ``deleting``, ``error_deleting``. in: query required: false type: string host_query: description: | The host name of the resource to query with. Querying by hostname is a privileged operation. If restricted by API policy, this query parameter may be silently ignored by the server. in: query required: false type: string is_public_query: description: | A boolean query parameter that, when set to true, allows retrieving public resources that belong to all projects. in: query required: false type: boolean limit: description: | The maximum number of shares to return. in: query required: false type: integer limit_query: description: | The maximum number of share group members to return. in: query required: false type: integer message_level: in: query required: false type: string description: > The message level. metadata_query: in: query required: false type: object description: | One or more metadata key and value pairs as a url encoded dictionary of strings. name_inexact_query: description: | The name pattern that can be used to filter shares, share snapshots, share networks or share groups. in: query required: false type: string min_version: 2.36 name_query: description: | The user defined name of the resource to filter resources by. in: query required: false type: string offset: description: | The offset to define start point of share or share group listing. in: query required: false type: integer project_id_messages: description: | The ID of the project for which the message was created. in: query required: false type: string project_id_query: description: | The ID of the project that owns the resource. This query parameter is useful in conjunction with the ``all_tenants`` parameter. in: query required: false type: string request_id: description: | The ID of the request during which the message was created. in: query required: false type: string resource_id: description: | The UUID of the resource for which the message was created. in: query required: false type: string resource_type: description: | The type of the resource for which the message was created. in: query required: false type: string service_binary_query: description: | The service binary name. Default is the base name of the executable. in: query required: false type: string service_host_query: description: | The service host name. in: query required: false type: string service_state_query: description: | The current state of the service. A valid value is ``up`` or ``down``. in: query required: false type: string service_status_query: description: | The service status, which is ``enabled`` or ``disabled``. in: query required: false type: string service_zone_query: description: | The availability zone. in: query required: false type: string share_group_id_query: description: | The UUID of a share group to filter resources by. in: query required: false type: string min_version: 2.31 share_group_status_query: description: | Filters by a share group status. A valid value is ``creating``, ``error``, ``available``, ``deleting``, ``error_deleting``. in: query required: false type: string share_group_type_id_query: description: | The share group type ID to filter share groups.
in: query required: false type: string share_id_access_rules_query: description: | The share ID to filter share access rules with. in: query required: true type: string share_id_replicas_query: description: | The share ID to filter share replicas with. in: query required: false type: string share_network_id_query: description: | The UUID of the share network to filter resources by. in: query required: false type: string share_server_id_query: description: | The UUID of the share server. in: query required: false type: string share_type_id_query: description: | The UUID of a share type to query resources by. in: query required: false type: string share_type_query: description: | The share type name or UUID. Allows filtering back end pools based on the extra-specs in the share type. in: query required: false type: string min_version: 2.23 share_types_query: description: | A list of one or more share type IDs. Allows filtering share groups. in: query required: false type: array snapshot_id_query: description: | The UUID of the share's base snapshot to filter the request based on. in: query required: false type: string sort_dir: description: | The direction to sort a list of shares. A valid value is ``asc``, or ``desc``. in: query required: false type: string sort_key: description: | The key to sort a list of shares. A valid value is ``id``, ``status``, ``size``, ``host``, ``share_proto``, ``export_location``, ``availability_zone``, ``user_id``, ``project_id``, ``created_at``, ``updated_at``, ``display_name``, ``name``, ``share_type_id``, ``share_type``, ``share_network_id``, ``share_network``, ``snapshot_id``, or ``snapshot``. in: query required: false type: string sort_key_messages: description: | The key to sort a list of messages. A valid value is ``id``, ``project_id``, ``request_id``, ``resource_type``, ``action_id``, ``detail_id``, ``resource_id``, ``message_level``, ``expires_at``, ``created_at``. in: query required: false type: string source_share_group_snapshot_id_query: description: | The source share group snapshot ID to list the share group. in: query required: false type: string min_version: 2.31 status_query: description: | Filters by a share status. A valid value is ``creating``, ``creating_from_snapshot``, ``error``, ``available``, ``deleting``, ``error_deleting``, ``manage_starting``, ``manage_error``, ``unmanage_starting``, ``unmanage_error``, ``migrating``, ``extending``, ``extending_error``, ``shrinking``, ``shrinking_error``, or ``shrinking_possible_data_loss_error``. in: query required: false type: string user_id_query: description: | The ID of the user. If you specify this query parameter, you update the quotas for this user in the project. If you omit this parameter, you update the quotas for the whole project. in: query required: false type: string with_count_query: description: | Whether to show ``count`` in API response or not, default is ``False``. in: query required: false type: boolean min_version: 2.42 # variables in body access: description: | The ``access`` object. in: body required: true type: object access_id: description: | The UUID of the access rule to which access is granted. in: body required: true type: string access_key: description: | The access credential of the entity granted share access. in: body required: true type: string min_version: 2.21 access_level: description: | The access level to the share. To grant or deny access to a share, you specify one of the following share access levels: - ``rw``. Read and write (RW) access. - ``ro``. 
Read- only (RO) access. in: body required: true type: string access_list: description: | The object of the access rule. To list access rules, set this value to ``null``. in: body required: true type: string access_metadata: description: | One or more access rule metadata key and value pairs as a dictionary of strings. in: body required: true type: object access_rule_id: description: | The access rule ID. in: body required: true type: string access_rules_status: description: | The share instance access rules status. A valid value is ``active``, ``error``, or ``syncing``. In versions prior to 2.28, ``syncing`` was represented with status ``out_of_sync``. in: body required: true type: string min_version: 2.10 access_share_id: description: | The UUID of the share to which you are granted or denied access. in: body required: true type: string access_status: description: | The share access status, which is ``new``, ``error``, ``active``. in: body required: true type: string access_to: description: | The value that defines the access. The back end grants or denies the access to it. A valid value is one of these values: - ``ip``. Authenticates an instance through its IP address. A valid format is ``XX.XX.XX.XX`` or ``XX.XX.XX.XX/XX``. For example ``0.0.0.0/0``. - ``cert``. Authenticates an instance through a TLS certificate. Specify the TLS identity as the IDENTKEY. A valid value is any string up to 64 characters long in the common name (CN) of the certificate. The meaning of a string depends on its interpretation. - ``user``. Authenticates by a user or group name. A valid value is an alphanumeric string that can contain some special characters and is from 4 to 32 characters long. in: body required: true type: string access_type: description: | The access rule type. A valid value for the share access rule type is one of the following values: - ``ip``. Authenticates an instance through its IP address. A valid format is ``XX.XX.XX.XX`` or ``XX.XX.XX.XX/XX``. For example ``0.0.0.0/0``. - ``cert``. Authenticates an instance through a TLS certificate. Specify the TLS identity as the IDENTKEY. A valid value is any string up to 64 characters long in the common name (CN) of the certificate. The meaning of a string depends on its interpretation. - ``user``. Authenticates by a user or group name. A valid value is an alphanumeric string that can contain some special characters and is from 4 to 32 characters long. in: body required: true type: string action_id_body: in: body required: true type: string description: > The ID of the action during which the message was created. add_project_access: description: | An object representing the project resource that access should be granted to. in: body required: true type: object allow_access: description: | The object of grant access. in: body required: true type: object availability_zone: description: | The availability zone. in: body required: false type: string min_version: 2.1 availability_zone_1: description: | The availability zone. in: body required: true type: string availability_zone_id: description: | The availability zone ID. in: body required: true type: string availability_zone_id_2: description: | The availability zone ID for create share group. in: body required: true type: string min_version: 2.34 availability_zone_name: description: | The name of the availability zone. in: body required: true type: string availability_zones: description: | Top level response body element. 
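As a purely illustrative sketch (the identifier and timestamp below are invented; the individual fields are described by the availability zone parameters in this file), a response that wraps this element looks roughly like:

.. code-block:: json

    {
        "availability_zones": [
            {
                "id": "388c983d-258e-4a0e-b1ba-10da37d766db",
                "name": "nova",
                "created_at": "2019-03-27T09:49:58-05:00"
            }
        ]
    }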
in: body required: true type: string backend: description: | The name of the back end. in: body required: true type: string backend_details: description: | The back-end details for a server. Each back end can store any key- value information that it requires. For example, the generic back- end driver might store the router ID. in: body required: true type: object backend_host: description: | The host name for the back end. in: body required: true type: string backend_name: description: | The name of the back end in this format: ``host@backend#POOL``. - ``host``. The host name for the back end. - ``backend``. The name of the back end. - ``POOL``. The pool name for the back end. in: body required: true type: string capabilities: description: | The back end capabilities which include ``qos``, ``total_capacity_gb``, etc. in: body required: true type: object capability_driver_handles_share_servers: description: | Share server is usually a storage virtual machine or a lightweight container that is used to export shared file systems. Storage backends may be able to work with configured share servers or allow the share driver to create and manage the lifecycle of share servers. This capability specifies whether the pool's associated share driver is responsible to create and manage the lifecycle of share servers. If ``false``, the administrator of the shared file systems service has configured the share server as necessary for the given back end. in: body required: true type: boolean capability_driver_version: description: | The driver version of the back end. in: body required: true type: string capability_free_capacity_gb: description: | The amount of free capacity for the back end, in GBs. A valid value is a string, such as ``unknown``, or an integer. in: body required: true type: string capability_qos: description: | The quality of service (QoS) support. in: body required: true type: boolean capability_replication_domain: description: | The back end replication domain. in: body required: true type: string capability_reserved_percentage: description: | The percentage of the total capacity that is reserved for the internal use by the back end. in: body required: true type: integer capability_server_pools_mapping: description: | The mapping between servers and pools. in: body required: true type: object capability_share_backend_name: description: | The name of the share back end. in: body required: true type: string capability_snapshot_support: description: | The specification that filters back ends by whether they do or do not support share snapshots. in: body required: true type: boolean capability_storage_protocol: description: | The storage protocol for the back end. For example, ``NFS_CIFS``, ``glusterfs``, ``HDFS``, etc. in: body required: true type: string capability_total_capacity_gb: description: | The total capacity for the back end, in GBs. A valid value is a string, such as ``unknown``, or an integer. in: body required: true type: string capability_vendor_name: description: | The name of the vendor for the back end. in: body required: true type: string cidr: description: | The CIDR. in: body required: true type: string cidr_1: description: | The IP block from which to allocate the network, in CIDR notation. For example, ``172.16.0.0/24`` or ``2001:DB8::/64``. This parameter is automatically set to a value determined by the network provider. in: body required: true type: string consistent_snapshot_support: description: | The consistency snapshot support. 
in: body required: true type: string min_version: 2.34 count: description: | The total count of requested resource before pagination is applied. in: body required: false type: integer min_version: 2.42 create_share_from_snapshot_support: description: | Boolean extra spec used for filtering of back ends by their capability to create shares from snapshots. in: body required: false type: boolean min_version: 2.24 create_share_from_snapshot_support_body: description: | Boolean extra spec used for filtering of back ends by their capability to create shares from snapshots. in: body required: false type: boolean created_at: description: | The date and time stamp when the resource was created within the service's database. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2019-03-27T09:49:58-05:00``. in: body required: true type: string deny_access: description: | The ``deny_access`` object. in: body required: true type: object description: description: | The user defined description of the resource. in: body required: true type: string description_request: description: | The user defined description of the resource. The value of this field is limited to 255 characters. in: body required: false type: string detail_id_body: in: body required: true type: string description: > The ID of the message detail. display_description_request: description: | The user defined description of the resource. This field sets the ``description`` parameter. in: body required: false type: string display_name_request: description: | The user defined name of the resource. This field sets the ``name`` parameter. in: body required: false type: string driver_handles_share_servers: description: | An extra specification that defines the driver mode for share server, or storage, life cycle management. The Shared File Systems service creates a share server for the export of shares. This value is ``true`` when the share driver manages, or handles, the share server life cycle. This value is ``false`` when an administrator rather than a share driver manages the storage life cycle. in: body required: true type: boolean driver_options: description: | A set of one or more key and value pairs, as a dictionary of strings, that describe driver options. Details for driver options should be taken from `appropriate share driver documentation `_. in: body required: false type: object export_location: description: | The export location. For newer API versions it is available in separate APIs. See sections `Share export locations <#share-share-export-locations>`_ and `Share instance export locations <#share-share-instance-export- locations>`_. in: body required: false type: string max_version: 2.8 export_location_availability_zone: description: | The name of the availability zone that the export location belongs to. in: body required: true type: string export_location_id: description: | The share export location UUID. in: body required: true type: string export_location_is_admin_only: description: | Defines purpose of an export location. If set to ``true``, then it is expected to be used for service needs and by administrators only. If it is set to ``false``, then this export location can be used by end users. This parameter is only available to users with an "administrator" role, and cannot be controlled via policy .json. 
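For orientation only, a hedged sketch of one export location entry carrying this flag is shown here; the ``id`` value is invented and the ``path`` reuses the NFS example given for ``export_path``:

.. code-block:: json

    {
        "id": "b6bd76ce-12a2-42a9-a30a-8a43b503867d",
        "path": "10.254.0.5:/shares/share-42033c24-0261-424f-abda-4fef2f6dbfd5",
        "preferred": false,
        "is_admin_only": true
    }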
in: body required: true type: boolean export_location_path: description: | The export location path that should be used for mount operation. in: body required: true type: string export_location_preferred: description: | Drivers may use this field to identify which export locations are most efficient and should be used preferentially by clients. By default it is set to ``false`` value. in: body required: true type: boolean min_version: 2.14 export_location_preferred_replicas: description: | Drivers may use this field to identify which export locations are most efficient and should be used preferentially by clients. By default it is set to ``false`` value. in: body required: true type: boolean export_location_share_instance_id: description: | The UUID of the share instance that this export location belongs to. This parameter is only available to users with an "administrator" role, and cannot be controlled via policy.json. in: body required: true type: string export_locations: description: | A list of export locations. For example, when a share server has more than one network interface, it can have multiple export locations. For newer API versions it is available in separate APIs. See sections `Share export locations <#share-share-export-locations>`_ and `Share instance export locations <#share-share-instance- export- locations>`_. in: body required: false type: array max_version: 2.8 export_path: description: | The share export path in the format appropriate for the protocol: - NFS protocol. ``10.0.0.1:/foo_path``. For example, ``10.254.0.5:/shares/share-42033c24-0261-424f-abda- 4fef2f6dbfd5``. - CIFS protocol. ``\\10.0.0.1\foo_name_of_cifs_share``. in: body required: true type: string extend: description: | The ``extend`` object. in: body required: true type: object extension_alias: description: | The alias for the extension. For example, "FOXNSOX", "os-availability-zone", "os-extended-quotas", "os- share-unmanage", or "os-used-limits". in: body required: true type: string extension_description: description: | The description of the extension API. in: body required: true type: string extension_links: description: | The extension links. in: body required: true type: array extension_name: description: | The name of the extension. For example, "Fox In Socks." in: body required: true type: string extra_spec_key: description: | The extra specification key in: body required: true type: string extra_specs: description: | The extra specifications for the share type. in: body required: true type: object force: description: | Indicates whether to permit or deny the force- update of a quota that is already used and the requested value exceeds the configured quota. Set to ``True`` to permit the force-update of the quota. Set to ``False`` to deny the force- update of the quota. in: body required: false type: boolean force_delete_2: description: | To force-delete a share instance, set this value to ``null``. The force-delete action, unlike the delete action, ignores the share instance status. in: body required: true type: string force_host_copy: description: | Enables or disables generic host-based forced migrations, which bypasses driver optimizations. Default value is ``false``. in: body required: true type: boolean force_snapshot_request: description: | Indicates whether snapshot creation must be attempted when a share's status is not ``available``. Set to ``true`` to force snapshot creation when the share is busy performing other operations. Default is ``false``. 
in: body required: false type: boolean group_snapshot_id: description: | The share group snapshot ID. in: body required: true type: object group_snapshot_links: description: | The share group snapshot links. in: body required: true type: string group_snapshot_members: description: | The share group snapshot members. in: body required: true type: string group_snapshot_status_required: description: | Filters by a share group snapshot status. A valid value is ``creating``, ``error``, ``available``, ``deleting``, ``error_deleting``. in: body required: true type: string group_spec_key: description: | The extra specification key for the share group type. in: body required: true type: string group_specs: description: | The extra specifications for the share group type. in: body required: false type: object group_specs_required: description: | The extra specifications for the share group type. in: body required: true type: object has_replicas: description: | Indicates whether a share has replicas or not. in: body required: true type: boolean min_version: 2.11 host_1: description: | The share host name. in: body required: true type: string host_10: description: | The host pool of the destination back end, in this format: ``host@backend#POOL``. - ``host``. The host name for the destination back end. - ``backend``. The name of the destination back end. - ``POOL``. The pool name for the destination back end. in: body required: true type: string host_6: description: | The share instance host name. in: body required: true type: string host_9: description: | The share host name. in: body required: false type: string host_share_server_body: description: | The share server host name or IP address. in: body required: true type: string id_13: description: | The share instance ID. in: body required: true type: string id_4: description: | The UUID of the share. in: body required: true type: string identifier: description: | The identifier of the share server in the back-end storage system. in: body required: true type: string ip_version: description: | The IP version of the network. A valid value is ``4`` or ``6``. in: body required: true type: integer ip_version_1: description: | The IP version of the network. A valid value is ``4`` or ``6``. This parameter is automatically set to a value determined by the network provider. in: body required: true type: integer is_auto_deletable: description: | Defines if a share server can be deleted automatically by the service. Share server deletion can be automated with configuration. However, Share servers that have ever had a share removed from service management cannot be automatically deleted by the service. in: body required: true type: boolean is_default_type: description: | Defines the share type created is default or not. If the returning value is true, then it is the default share type, otherwise, it is not default. in: body required: true type: boolean min_version: 2.46 is_default_type_body: description: | Defines the share type created is default or not. If the returning value is true, then it is the default share type, otherwise, it is not default. in: body required: true type: boolean is_group_type_default: description: | Defines the share group type created is default or not. If the returning value is true, then it is the default share group type, otherwise, it is not default. in: body required: true type: boolean min_version: 2.46 is_public: description: | The level of visibility for the share. Set to ``true`` to make share public. 
Set to ``false`` to make it private. Default value is ``false``. in: body required: false type: boolean min_version: 2.8 links: description: | The share links in: body required: true type: array manage_host: description: | The host of the destination back end, in this format: ``host@backend``. - ``host``. The host name for the destination back end. - ``backend``. The name of the destination back end. in: body required: true type: string manage_share_server_id: description: | The UUID of the share server. in: body required: true type: string min_version: 2.49 maxTotalShareGigabytes: description: | The total maximum number of share gigabytes that are allowed in a project. You cannot request a share that exceeds the allowed gigabytes quota. in: body required: true type: integer maxTotalShareGigabytesOptional: description: | The total maximum number of share gigabytes that are allowed in a project. You cannot request a share that exceeds the allowed gigabytes quota. in: body required: false type: integer maxTotalShareGroups: description: | The maximum number of share groups. in: body required: true type: integer min_version: 2.40 maxTotalShareGroupSnapshots: description: | The maximum number of share group snapshots. in: body required: true type: integer min_version: 2.40 maxTotalShareNetworks: description: | The total maximum number of share-networks that are allowed in a project. in: body required: true type: integer maxTotalShareNetworksOptional: description: | The total maximum number of share-networks that are allowed in a project. in: body required: false type: integer maxTotalShares: description: | The total maximum number of shares that are allowed in a project. in: body required: true type: integer maxTotalShareSnapshots: description: | The total maximum number of share snapshots that are allowed in a project. in: body required: true type: integer maxTotalShareSnapshotsOptional: description: | The total maximum number of share snapshots that are allowed in a project. in: body required: false type: integer maxTotalSharesOptional: description: | The total maximum number of shares that are allowed in a project. in: body required: false type: integer maxTotalSnapshotGigabytes: description: | The total maximum number of snapshot gigabytes that are allowed in a project. in: body required: true type: integer maxTotalSnapshotGigabytesOptional: description: | The total maximum number of snapshot gigabytes that are allowed in a project. in: body required: false type: integer message_level_body: in: body required: true type: string description: > The message level. message_links: description: | The message links. in: body required: true type: array message_members_links: description: | The message member links. in: body required: true type: array metadata: description: | One or more metadata key and value pairs as a dictionary of strings. in: body required: false type: object metadata_2: description: | One or more metadata key-value pairs, as a dictionary of strings. For example, ``"project": "my_test", "aim": "testing"``. The share server does not respect case-sensitive key names. For example, ``"key": "v1"`` and ``"KEY": "V1"`` are equivalent. If you specify both key-value pairs, the server sets and returns only the ``"KEY": "V1"`` key-value pair. in: body required: true type: object metadata_3: description: | One or more metadata key and value pairs as a dictionary of strings. in: body required: true type: object metadata_item: description: | A single metadata key and value pair. 
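For example, reusing the sample metadata keys mentioned in this file, a single item might be represented as follows; the ``meta`` wrapper key is an assumption borrowed from similar OpenStack metadata APIs and may differ per operation:

.. code-block:: json

    {
        "meta": {
            "project": "my_test"
        }
    }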
in: body required: true type: object metadata_key_request: description: | The key of a metadata item. For example, if the metadata on an existing share or access rule is as follows: ``"project": "my_test", "aim": "testing"``, the keys are "project" and "aim". in: body required: true type: object migrate-start: description: | The ``migrate-start`` object. in: body required: true type: object migrate_share: description: | The ``migrate_share`` object. in: body required: true type: object migration_complete: description: | The ``migration_complete`` object. in: body required: true type: object mount_snapshot_support: description: | Boolean extra spec used for filtering of back ends by their capability to mount share snapshots. in: body required: false type: boolean min_version: 2.32 mount_snapshot_support_body: description: | Boolean extra spec used for filtering of back ends by their capability to mount share snapshots. in: body required: false type: boolean name: description: | The user defined name of the resource. in: body required: true type: string name_request: description: | The user defined name of the resource. The value of this field is limited to 255 characters. in: body required: false type: string network_type: description: | The network type. A valid value is ``VLAN``, ``VXLAN``, ``GRE``, or ``flat``. in: body required: true type: string network_type_1: description: | The network type. A valid value is ``VLAN``, ``VXLAN``, ``GRE``, or ``flat``. This parameter is automatically set to a value determined by the network provider. in: body required: true type: string neutron_net_id: description: | The neutron network ID. in: body required: true type: string neutron_net_id_request: description: | The UUID of a neutron network when setting up or updating a share network with neutron. Specify both a neutron network and a neutron subnet that belongs to that neutron network. in: body required: false type: string neutron_subnet_id: description: | The neutron subnet ID. in: body required: true type: string neutron_subnet_id_request: description: | The UUID of the neutron subnet when setting up or updating a share network with neutron. Specify both a neutron network and a neutron subnet that belongs to that neutron network. in: body required: false type: string next-available: description: | The date and time stamp when next issues are available. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: false type: string notify: description: | Enables or disables notification when data copying is completed. in: body required: true type: string os-migrate_share: description: | The ``migrate_share`` object. in: body required: true type: object os-share-type-access:is_public: description: | Indicates whether a share type is publicly accessible. Default is ``true``, or publicly accessible. in: body required: false type: boolean max_version: 2.6 pool: description: | The pool name for the back end. in: body required: true type: string pools: description: | The pools for the back end. This value is either ``null`` or a string value that indicates the capabilities for each pool. For example, ``pool_name``, ``total_capacity_gb``, ``qos``, and so on. in: body required: true type: string progress: description: | The progress of the snapshot creation.
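The progress is conventionally reported as a percentage string, for example (illustrative value only):

.. code-block:: json

    {
        "progress": "100%"
    }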
in: body required: true type: string progress_share_instance: description: | The progress of the share creation. in: body min_version: 2.54 required: true type: string project: description: | The UUID of the project to which access to the share type is granted. in: body required: true type: string project_id: description: | The ID of the project that owns the resource. in: body required: true type: string project_id_messages_body: description: | The ID of the project for which the message was created. in: body required: true type: string project_id_type_access: description: | The ID of the project that has been granted access to the type resource. in: body required: true type: string project_id_type_access_grant_request: description: | The ID of the project that needs to have access to the type resource. in: body required: true type: string project_id_type_access_revoke_request: description: | The ID of the project whose access to the type resource must be revoked. in: body required: true type: string protocol: description: | The Shared File Systems protocol of the share to manage. A valid value is ``NFS``, ``CIFS``, ``GlusterFS``, ``CEPHFS``, ``HDFS`` or ``MAPRFS``. in: body required: true type: string quota_class_id: description: | A ``quota_class_set`` id. in: body required: true type: string quota_class_set: description: | A ``quota_class_set`` object. in: body required: true type: object quota_gigabytes: description: | The number of gigabytes allowed for each project. in: body required: true type: integer quota_gigabytes_detail: description: | The limit, in_use, reserved number of gigabytes allowed for each project. in: body min_version: 2.25 required: true type: object quota_gigabytes_request: description: | The number of gigabytes for the project. in: body required: false type: integer quota_project_id: description: | The ID of the project the quota pertains to. in: body required: true type: string quota_set: description: | The ``quota_set`` object. in: body required: true type: object quota_share_group_snapshots: description: | The number of share group snapshots allowed for each project or user. in: body min_version: 2.40 required: true type: integer quota_share_group_snapshots_detail: description: | The limit, in_use, reserved number of share group snapshots for each project or user. in: body min_version: 2.40 required: true type: object quota_share_group_snapshots_request: description: | The number of share group snapshots allowed for each project or user. in: body min_version: 2.40 required: false type: integer quota_share_groups: description: | The number of share groups allowed for each project or user. in: body min_version: 2.40 required: true type: integer quota_share_groups_detail: description: | The limit, in_use, reserved number of share groups for each project or user. in: body min_version: 2.40 required: true type: object quota_share_groups_request: description: | The number of share groups allowed for each project or user. in: body min_version: 2.40 required: false type: integer quota_share_networks: description: | The number of share networks allowed for user and project, but not share type. in: body required: false type: integer quota_share_networks_default: description: | The number of share networks allowed for each project. in: body required: true type: integer quota_share_networks_detail: description: | The limit, in_use, reserved number of share networks allowed for user and project, but not share type. 
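In this detailed form the value is an object rather than a single integer; a minimal sketch with invented numbers:

.. code-block:: json

    {
        "share_networks": {
            "limit": 10,
            "in_use": 2,
            "reserved": 0
        }
    }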
in: body min_version: 2.25 required: false type: object quota_share_networks_request: description: | The number of share networks for the project. in: body required: false type: integer quota_shares: description: | The number of shares allowed for each project. in: body required: true type: integer quota_shares_detail: description: | The limit, in_use, reserved number of shares allowed for each project. in: body min_version: 2.25 required: true type: object quota_shares_request: description: | The number of shares for the project. in: body required: false type: integer quota_snapshot_gigabytes: description: | The number of gigabytes for the snapshots allowed for each project. in: body required: true type: integer quota_snapshot_gigabytes_detail: description: | The limit, in_use, reserved number of gigabytes for the snapshots allowed for each project. in: body min_version: 2.25 required: true type: object quota_snapshot_gigabytes_request: description: | The number of gigabytes for the snapshots for the project. in: body required: false type: integer quota_snapshots: description: | The number of snapshots allowed for each project. in: body required: true type: integer quota_snapshots_detail: description: | The limit, in_use, reserved number of snapshots allowed for each project. in: body min_version: 2.25 required: true type: object quota_snapshots_request: description: | The number of snapshots for the project. in: body required: false type: integer regex: description: | An API regular expression. For example, ``^/shares`` for the ``/shares`` API URI or ``.*`` for any URI. in: body required: false type: string remaining: description: | The remaining number of allowed requests. in: body required: false type: integer remove_project_access: description: | An object representing the project resource that access should be revoked from. in: body required: true type: object replica_state: description: | The share replica state. Has set value only when replication is used. List of possible values: ``active``, ``in_sync``, ``out_of_sync``, and ``error``. in: body required: true type: string min_version: 2.11 replication_type: description: | The share replication type. in: body required: false type: string min_version: 2.11 replication_type_body: description: | The share replication type. in: body required: false type: string request_id_body: description: | The UUID of the request during which the message was created. in: body required: true type: string required_extra_specs: description: | The required extra specifications for the share type. in: body required: true type: object reset_status: description: | The ``reset_status`` object. in: body required: true type: object resource_id_body: description: | The UUID of the resource for which the message was created. in: body required: true type: string resource_type_body: description: | The type of the resource for which the message was created. in: body required: true type: string revert_to_snapshot_support: description: | Boolean extra spec used for filtering of back ends by their capability to revert shares to snapshots. in: body required: false type: boolean min_version: 2.27 revert_to_snapshot_support_body: description: | Boolean extra spec used for filtering of back ends by their capability to revert shares to snapshots. in: body required: false type: boolean security_service_dns_ip: description: | The DNS IP address that is used inside the project network. 
in: body required: true type: string security_service_dns_ip_request: description: | The DNS IP address that is used inside the project network. in: body required: false type: string security_service_domain: description: | The security service domain. in: body required: true type: string security_service_domain_request: description: | The security service domain. in: body required: false type: string security_service_id: description: | The security service ID. in: body required: true type: string security_service_ou: description: | The security service organizational unit (OU). in: body required: true type: string min_version: 2.44 security_service_ou_request: description: | The security service organizational unit (OU). An organizational unit can be added to specify where the share ends up. in: body required: false type: string min_version: 2.44 security_service_password: description: | The user password, if you specify a ``user``. in: body required: true type: string security_service_password_request: description: | The user password, if you specify a ``user``. in: body required: false type: string security_service_server: description: | The security service host name or IP address. in: body required: true type: string security_service_server_request: description: | The security service host name or IP address. in: body required: false type: string security_service_status: description: | The security service status. in: body required: true type: string security_service_type: description: | The security service type. A valid value is ``ldap``, ``kerberos``, or ``active_directory``. in: body required: true type: string security_service_type_request: description: | The security service type. A valid value is ``ldap``, ``kerberos``, or ``active_directory``. in: body required: false type: string security_service_user: description: | The security service user or group name that is used by the project. in: body required: true type: string security_service_user_request: description: | The security service user or group name that is used by the project. in: body required: false type: string segmentation_id: description: | The segmentation ID. in: body required: true type: integer segmentation_id_1: description: | The segmentation ID. This parameter is automatically set to a value determined by the network provider. For VLAN, this value is an integer from 1 to 4094. For VXLAN, this value is an integer from 1 to 16777215. For GRE, this value is an integer from 1 to 4294967295. in: body required: true type: integer service_binary_response: description: | The service binary name. Default is the base name of the executable. in: body required: true type: string service_disable_binary_request: description: | The name of the service binary that you want to disable. Typically, this name is the base name of the executable. in: body required: true type: string service_disable_binary_response: description: | The name of the disabled service binary. Typically, this name is the base name of the executable. in: body required: true type: string service_disable_host_request: description: | The host name of the service that you want to disable. in: body required: true type: string service_disable_host_response: description: | The host name of the disabled service. in: body required: true type: string service_disabled_response: description: | Indicates whether the service is disabled. in: body required: true type: boolean service_enable_binary_request: description: | The name of the service binary that you want to enable.
Typically, this name is the base name of the executable. in: body required: true type: string service_enable_host_request: description: | The host name of the service that you want to enable. in: body required: true type: string service_enable_host_response: description: | The host name of the enabled service. in: body required: true type: string service_host: description: | The manage-share service host in this format: ``host@backend#POOL``. - ``host``. The host name for the back end. - ``backend``. The name of the back end. - ``POOL``. The pool name for the back end. in: body required: true type: string service_host_response: description: | The service host name. in: body required: true type: string service_id_response: description: | The service ID. in: body required: true type: integer service_state_response: description: | The current state of the service. A valid value is ``up`` or ``down``. in: body required: true type: string service_status_response: description: | The service status, which is ``enabled`` or ``disabled``. in: body required: true type: string service_zone_response: description: | The service availability zone. in: body required: true type: string services: description: | Top element in the response body. in: body required: true type: string share: description: | A ``share`` object. in: body required: true type: object share_force_delete: description: | To force-delete a share or share group, set this value to ``null``. The force-delete action, unlike the delete action, ignores the share or share group status. in: body required: true type: string share_group_host: description: | The share group host name. in: body required: false type: string share_group_id: description: | The UUID of the share group. in: body required: true type: string min_version: 2.31 share_group_id_request: description: | The UUID of the share group. in: body required: false type: string min_version: 2.31 share_group_links: description: | The share group links. in: body required: true type: string share_group_status: description: | The share group status, which is ``available``, ``error``, ``creating``, or ``deleting``. in: body required: true type: string share_group_type_id: description: | The share group type ID to create a share group. in: body required: false type: string share_group_type_id_required: description: | The share group type ID. in: body required: true type: string share_group_type_is_public: description: | The level of visibility for the share group type. Set to ``true`` to make share group type public. Set to ``false`` to make it private. Default value is ``false``. in: body required: true type: boolean share_group_type_is_public_request: description: | The level of visibility for the share group type. Set to ``true`` to make share group type public. Set to ``false`` to make it private. Default value is ``false``. in: body required: false type: boolean share_group_type_name: description: | The share group type name. in: body required: true type: string share_group_type_name_request: description: | The name of the share group type resource. The value of this field is limited to 255 characters. in: body required: false type: string share_id_2: description: | The UUID of the share from which the share instance was created. in: body required: true type: string share_instance_cast_rules_to_readonly: description: | If the share instance has its ``cast_rules_to_readonly`` attribute set to True, all existing access rules will be cast to read/only. 
in: body required: true type: string min_version: 2.30 share_instance_id_1: description: | The UUID of the share instance. in: body required: true type: string share_network_gateway: description: | The gateway of a share network. in: body required: true type: string min_version: 2.18 share_network_id: description: | The share network ID. in: body required: true type: string share_network_id_1: description: | The ID of a share network. Note that when using a share type with the ``driver_handles_share_servers`` extra spec as ``False``, you should not provide a ``share_network_id``. in: body required: false type: string share_network_id_2: description: | The UUID of a share network where the share server exists or will be created. If ``share_network_id`` is ``None`` and you provide a ``snapshot_id``, the ``share_network_id`` value from the snapshot is used. in: body required: false type: string share_network_id_4: description: | The share network ID. in: body required: true type: string share_network_id_share_server_body: description: | The UUID of a share network that is associated with the share server. in: body required: true type: string share_network_mtu: description: The MTU value of a share network. in: body required: true type: integer min_version: 2.20 share_network_name: description: | The name of a share network that is associated with the share server. in: body required: true type: string share_network_security_service_id: description: | The UUID of the security service to remove from the share network. For details, see the security service section. in: body required: true type: string share_new_size: description: | New size of the share, in GBs. in: body required: true type: integer share_proto: description: | The Shared File Systems protocol. A valid value is ``NFS``, ``CIFS``, ``GlusterFS``, ``HDFS``, ``CephFS`` or ``MAPRFS``. ``CephFS`` is supported starting with API v2.13. in: body required: true type: string share_replica_az: description: | The availability zone. in: body required: false type: string share_replica_cast_rules_to_readonly: description: | If the share replica has its ``cast_rules_to_readonly`` attribute set to True, all existing access rules will be cast to read-only. in: body required: true type: string min_version: 2.30 share_replica_force_delete: description: | To force-delete a share replica, set this value to ``null``. The force-delete action, unlike the delete action, ignores the share replica status. in: body required: true type: string share_replica_host: description: | The host name of the share replica. in: body required: true type: string share_replica_id: description: | The UUID of the share replica. in: body required: true type: string share_replica_replica_state: description: | The replica state of a share replica. List of possible values: ``active``, ``in_sync``, ``out_of_sync``, and ``error``. in: body required: true type: string share_replica_reset_replica_state: description: | The ``reset_replica_state`` object. in: body required: true type: object share_replica_share_id: description: | The UUID of the share from which to create a share replica. in: body required: true type: string share_replica_share_network_id: description: | The UUID of the share network. in: body required: false type: string share_replica_status: description: | The status of a share replica. List of possible values: ``available``, ``error``, ``creating``, ``deleting``, or ``error_deleting``.
in: body required: true type: string share_server_id: description: | The UUID of the share server. in: body required: true type: string share_server_show_identifier: description: | The identifier of the share server in the back-end storage system. in: body required: true type: string min_version: 2.49 share_server_show_is_auto_deletable: description: | Defines if a share server can be deleted automatically by the service. Share server deletion can be automated with configuration. However, Share servers that have ever had a share removed from service management cannot be automatically deleted by the service. in: body required: true type: boolean min_version: 2.49 share_server_status: description: | The share server status, which can be ``active``, ``error``, ``creating``, ``deleting``, ``manage_starting``, ``manage_error``, ``unmanage_starting``, ``unmanage_error`` or ``error_deleting``. in: body required: true type: string share_server_unmanage: description: | To unmanage a share server, either set this value to ``null`` or {}. Optionally, the ``force`` attribute can be included in this object. in: body required: true type: object share_share_type_name: description: | Name of the share type. in: body required: true type: string min_version: 2.6 share_type: description: | The share type name. If you omit this parameter, the default share type is used. To view the default share type set by the administrator, issue a list default share types request. You cannot specify both the ``share_type`` and ``volume_type`` parameters. in: body required: false type: string share_type_1: description: | The UUID of the share type. In minor versions, this parameter is a share type name, as a string. in: body required: true type: string min_version: 2.6 share_type_2: description: | The share type name. in: body required: false type: string share_type_access:is_public: description: | Indicates whether a share type is publicly accessible. Default is ``true``, or publicly accessible. in: body required: false type: boolean min_version: 2.7 share_type_access:is_public_body: description: | Indicates whether a share type is accessible by all projects (tenants) in the cloud. in: body required: true type: boolean share_type_access:is_public_update_request: description: | Indicates whether the share type should be accessible by all projects (tenants) in the cloud. If not specified, the visibility of the share type is not altered. in: body required: false type: boolean share_type_description: description: | The description of the share type. in: body required: true type: string min_version: 2.41 share_type_description_body: description: | The description of the share type. in: body required: true type: string share_type_description_request: description: | The description of the share type. The value of this field is limited to 255 characters. in: body required: false type: string min_version: 2.41 share_type_description_update_request: description: | New description for the share type. in: body required: false type: string share_type_id_body: description: | The UUID of the share type. in: body required: true type: string share_type_name: description: | Name of the share type. in: body required: true type: string share_type_name_request: description: | Name of the share type. The value of this field is limited to 255 characters. in: body required: false type: string share_types: description: | A list of one or more share type IDs. 
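For example (the UUID is invented for illustration), the element is a plain JSON array of share type IDs:

.. code-block:: json

    {
        "share_types": ["420e6a31-3f3d-4ed9-91f2-4f9cba4ab9b5"]
    }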
in: body required: false type: array share_types_1: description: | A list of share type IDs. in: body required: true type: array share_unmanage: description: | To unmanage a share, set this value to ``null``. in: body required: true type: string shrink: description: | The ``shrink`` object. in: body required: true type: object size: description: | The share size, in GBs. The requested share size cannot be greater than the allowed GB quota. To view the allowed quota, issue a get limits request. in: body required: true type: integer size_2: description: | The share size, in GBs. in: body required: true type: integer snapshot_force_delete: description: | To force-delete a snapshot, include this parameter and set its value to ``null``. The force-delete action, unlike the delete action, ignores the snapshot status. in: body required: true type: string snapshot_id: description: | The UUID of the snapshot. in: body required: true type: string snapshot_id_request: description: | The UUID of the share's base snapshot. in: body required: false type: string snapshot_id_share_response: description: | The UUID of the snapshot that was used to create the share. in: body required: true type: string snapshot_instance_id: description: | The UUID of the share snapshot instance. in: body required: false type: string snapshot_instance_id_response: description: | The UUID of the share snapshot instance. in: body required: true type: string snapshot_instance_status: description: | The snapshot instance status. A valid value is ``available``, ``error``, ``creating``, ``deleting``, ``error_deleting``, ``restoring``, ``unmanage_starting``, ``unmanage_error``, ``manage_starting``, or ``manage_error``. in: body required: true type: string snapshot_manage_share_id: description: | The UUID of the share whose snapshot should be managed. in: body required: true type: string snapshot_manage_status: description: | The snapshot status, which could be ``manage_starting``, ``manage_error``, ``unmanage_starting``, or ``unmanage_error``. in: body required: true type: string snapshot_project_id: description: | ID of the project that the snapshot belongs to. in: body required: true type: string min_version: 2.17 snapshot_provider_location: description: | Provider location of the snapshot on the backend. in: body required: true type: string snapshot_provider_location_request: description: | Provider location of the snapshot on the backend. in: body required: true type: string snapshot_share_id: description: | The UUID of the source share that was used to create the snapshot. in: body required: true type: string snapshot_share_id_request: description: | The UUID of the share from which to create a snapshot. in: body required: true type: string snapshot_share_protocol: description: | The file system protocol of a share snapshot. A valid value is ``NFS``, ``CIFS``, ``GlusterFS``, ``HDFS``, ``CephFS`` or ``MAPRFS``. ``CephFS`` is supported starting with API v2.13. in: body required: true type: string snapshot_share_size: description: | The share snapshot size, in GBs. in: body required: true type: integer snapshot_size: description: | The snapshot size, in GBs. in: body required: true type: integer snapshot_status: description: | The snapshot status, which can be ``available``, ``error``, ``creating``, ``deleting``, ``manage_starting``, ``manage_error``, ``unmanage_starting``, ``unmanage_error`` or ``error_deleting``.
in: body required: true type: string snapshot_status_request: description: | The snapshot status, which can be ``available``, ``error``, ``creating``, ``deleting``, ``manage_starting``, ``manage_error``, ``unmanage_starting``, ``unmanage_error`` or ``error_deleting``. in: body required: false type: string snapshot_support: description: | An extra specification that filters back ends by whether they do or do not support share snapshots. in: body required: true type: boolean min_version: 2.2 snapshot_support_1: description: | An extra specification that filters back ends by whether they do or do not support share snapshots. in: body required: false type: boolean snapshot_unmanage: description: | To unmanage a share snapshot, include this parameter and set its value to ``null``. in: body required: true type: string snapshot_user_id: description: | ID of the user that the snapshot was created by. in: body required: true type: string min_version: 2.17 source_share_group_snapshot_id: description: | The source share group snapshot ID to create the share group. in: body required: false type: string source_share_group_snapshot_id_response: description: | The source share group snapshot ID to create the share group. in: body required: true type: string state: description: | Prior to version 2.28, the state of all access rules of a given share is the same at all times. This could be ``new``, ``active`` or ``error``. Since 2.28, the state of each access rule of a share is independent of the others and can be ``queued_to_apply``, ``applying``, ``active``, ``error``, ``queued_to_deny`` or ``denying``. A new rule starts out in ``queued_to_apply`` state and is successfully applied if it transitions to ``active`` state. in: body required: true type: string status: description: | The consistency group snapshot status, which is ``available``, ``creating``, ``error``, ``deleting``, or ``error_deleting``. in: body required: true type: string status_1: description: | The consistency group status. A valid value is ``creating``, ``available``, ``error``, ``deleting``, or ``error_deleting``. in: body required: true type: string status_16: description: | The share status, which is ``creating``, ``creating_from_snapshot``, ``error``, ``available``, ``deleting``, ``error_deleting``, ``manage_starting``, ``manage_error``, ``unmanage_starting``, ``unmanage_error``, ``unmanaged``, ``extending``, ``extending_error``, ``shrinking``, ``shrinking_error``, or ``shrinking_possible_data_loss_error``. in: body required: true type: string status_3: description: | The share status. A valid value is: - ``creating``. The share is being created. - ``deleting``. The share is being deleted. - ``error``. An error occurred during share creation. - ``error_deleting``. An error occurred during share deletion. - ``available``. The share is ready to use. - ``manage_starting``. Share manage started. - ``manage_error``. Share manage failed. - ``unmanage_starting``. Share unmanage started. - ``unmanage_error``. Share cannot be unmanaged. - ``unmanaged``. Share was unmanaged. - ``extending``. The extend, or increase, share size request was issued successfully. - ``extending_error``. Extend share failed. - ``shrinking``. Share is being shrunk. - ``shrinking_error``. Failed to update quota on share shrinking. - ``shrinking_possible_data_loss_error``. Shrink share failed due to possible data loss. - ``creating_from_snapshot``. The share is being created from a parent snapshot.
in: body required: true type: string status_5: description: | The share instance status. A valid value is ``available``, ``error``, ``creating``, ``deleting``, ``creating_from_snapshot``, or ``error_deleting``. in: body required: true type: string status_8: description: | The share status, which is ``available``, ``manage_starting``, or ``manage_error``. in: body required: true type: string status_share_server_body: description: | The share server status, which is ``active``, ``error``, ``creating``, or ``deleting``. in: body required: true type: string task_state: description: | For the share migration, the migration task state. A valid value is ``null``, ``migration_starting``, ``migration_error``, ``migration_success``, ``migration_completing``, or ``migrating``. The ``task_state`` is ``null`` unless the share is migrated from one back-end to another. For details, see ``os-migrate_share`` extension request. in: body required: true type: string min_version: 2.5 timestamp: description: | The date and time stamp when the API request was issued. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string totalShareGigabytesUsed: description: | The total number of gigabytes used in a project by shares. in: body required: true type: integer totalShareNetworksUsed: description: | The total number of created share-networks in a project. in: body required: true type: integer totalShareSnapshotsUsed: description: | The total number of created share snapshots in a project. in: body required: true type: integer totalSharesUsed: description: | The total number of created shares in a project. in: body required: true type: integer totalSnapshotGigabytesUsed: description: | The total number of gigabytes used in a project by snapshots. in: body required: true type: integer unit: description: | The time interval during which a number of API requests are allowed. A valid value is ``SECOND``, ``MINUTE``, ``HOUR``, or ``DAY``. Used in conjunction with the ``value`` parameter, expressed as ``value`` per ``unit``. For example, 120 requests are allowed per minute. in: body required: false type: string updated_at: description: | The date and time stamp when the resource was last updated within the service's database. If a resource was never updated after it was created, the value of this parameter is set to ``null``. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2016-12-31T13:14:15-05:00``. in: body required: true type: string updated_at_extensions: description: | The date and time stamp when the extension API was last updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, returns the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. in: body required: true type: string uri: description: | A human-readable URI of a rate limit. format: uri in: body required: false type: string user_id: description: | ID of the user that is part of a given project. in: body required: false type: string value: description: | The number of API requests that are allowed during a time interval. Used in conjunction with the ``unit`` parameter, expressed as ``value`` per ``unit``. For example, 120 requests are allowed per minute. 
in: body required: false type: integer verb: description: | The HTTP method for the API request. For example, ``GET``, ``POST``, ``DELETE``, and so on. in: body required: false type: string version: description: | The version. in: body required: true type: string version_id: type: string in: body required: true description: > A common name for the version in question. Informative only, it has no real semantic meaning. version_max: type: string in: body required: true description: > If this version of the API supports microversions, the maximum microversion that is supported. This will be the empty string if microversions are not supported. version_media_types: description: | Media types supported by the API. in: body required: true type: object version_min: type: string in: body required: true description: > If this version of the API supports microversions, the minimum microversion that is supported. This will be the empty string if microversions are not supported. version_status: type: string in: body required: true description: | The status of this API version. This can be one of: - ``CURRENT``: this is the preferred version of the API to use - ``SUPPORTED``: this is an older, but still supported version of the API - ``DEPRECATED``: a deprecated version of the API that is slated for removal version_updated: description: | A date and time stamp for API versions. This field presents no meaningful information. in: body required: true type: string versions: type: array in: body required: true description: > A list of version objects that describe the API versions available. volume_type: description: | The volume type. The use of the ``volume_type`` object is deprecated but supported. It is recommended that you use the ``share_type`` object when you create a share type. When you issue a create share type request, you can submit a request body with either a ``share_type`` or ``volume_type`` object. No matter which object type you include in the request, the API creates both a ``volume_type`` object and a ``share_type`` object. Both objects have the same ID. When you issue a list share types request, the response shows both ``share_types`` and ``volume_types`` objects. in: body required: false type: string manila-10.0.0/api-ref/source/share-types.inc0000664000175000017500000003621013656750227020654 0ustar zuulzuul00000000000000.. -*- rst -*- =========== Share types =========== A share type enables you to filter or choose back ends before you create a share. A share type behaves in the same way as a Block Storage volume type. You can set a share type to private or public and manage access to private share types. When you issue a create share type request, you can submit a request body with either a ``share_type`` or ``volume_type`` object. .. important:: The use of the ``volume_type`` object is deprecated but supported. It is recommended that you use the ``share_type`` object when you create a share type. No matter which object type you include in the request, the API creates both a ``volume_type`` object and a ``share_type`` object. Both objects have the same ID. When you issue a list share types request, the response shows both ``share_type`` and ``volume_type`` objects. You can set share types as either public or private. By default a share type is created as publicly accessible. Set ``share_type_access:is_public`` (``os-share-type-access:is_public`` for API versions 1.0-2.6) to ``False`` to make the share type private.
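As an illustrative sketch only, a private share type could be created with a request like the one below. The controller host, port, token, and all field values are placeholders, not part of this reference; the canonical request and response bodies are shown in the create share type samples later in this file.

.. code-block:: console

   $ # Hypothetical endpoint and credentials; adjust to your deployment.
   $ curl -s -X POST "http://controller:8786/v2/${PROJECT_ID}/types" \
         -H "X-Auth-Token: ${TOKEN}" \
         -H "Content-Type: application/json" \
         -d '{"share_type": {"name": "private_dhss_false",
                             "share_type_access:is_public": false,
                             "extra_specs": {"driver_handles_share_servers": false}}}'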
You can manage the access to the private share types for the different projects. You can add access, remove access, and get information about access for a private share type. Administrators can create share types with these extra specifications that are used to filter back ends: - ``driver_handles_share_servers``. Required. Defines the driver mode for share server, or storage, life cycle management. The Shared File Systems service creates a share server for the export of shares. Set to ``True`` when the share driver manages or handles the share server life cycle. Set to ``False`` when an administrator rather than a share driver manages the share server life cycle. - ``snapshot_support``. Filters back ends by whether they do or do not support share snapshots. Set to ``True`` to find back ends that support share snapshots. Set to ``False`` to find back ends that do not support share snapshots. Administrators can also set additional extra specifications for a share type for the following purposes: - Filter back ends. Specify these unqualified extra specifications in this format: ``extra_spec=value``. For example, ``netapp_raid_type=raid4``. - Set data for the driver. Except for the special ``capabilities`` prefix, you specify these qualified extra specifications with its prefix followed by a colon: ``vendor:extra_spec=value``. For example, ``netapp:thin_provisioned=true``. The scheduler uses the special ``capabilities`` prefix for filtering. The scheduler can only create a share on a back end that reports capabilities that match the un-scoped extra-spec keys for the share type. For details, see `Capabilities and Extra-Specs `_. Each driver implementation determines which extra specification keys it uses. For details, see the documentation for the driver. An administrator can use the ``policy.json`` file to grant permissions for share type creation with extra specifications to other roles. List share types ================ .. rest_method:: GET /v2/{project_id}/types Lists all share types. Response codes -------------- .. rest_status_code:: success status.yaml - 200 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - extra_specs: extra_specs_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_type_id_body - name: share_type_name - required_extra_specs: required_extra_specs - extra_specs: extra_specs - driver_handles_share_servers: driver_handles_share_servers - replication_type: replication_type - snapshot_support: snapshot_support_1 - mount_snapshot_support: mount_snapshot_support - revert_to_snapshot_support: revert_to_snapshot_support - share_type_access:is_public: share_type_access:is_public - create_share_from_snapshot_support: create_share_from_snapshot_support - description: share_type_description - is_default: is_default_type Response example ---------------- .. literalinclude:: samples/share-types-list-response.json :language: javascript List default share types ======================== .. rest_method:: GET /v2/{project_id}/types/default Lists default share types. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path Response parameters ------------------- .. 
rest_parameters:: parameters.yaml - id: share_type_id_body - required_extra_specs: required_extra_specs - extra_specs: extra_specs - driver_handles_share_servers: driver_handles_share_servers - snapshot_support: snapshot_support_1 - share_type_access:is_public: share_type_access:is_public - name: share_type_name - description: share_type_description - is_default: is_default_type Response example ---------------- .. literalinclude:: samples/share-types-default-list-response.json :language: javascript Show share type detail ====================== .. rest_method:: GET /v2/{project_id}/types/{share_type_id} Shows details for a specified share type. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_type_id: share_type_id Response Parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_type_id_body - required_extra_specs: required_extra_specs - extra_specs: extra_specs - driver_handles_share_servers: driver_handles_share_servers - snapshot_support: snapshot_support_1 - replication_type: replication_type - mount_snapshot_support: mount_snapshot_support - revert_to_snapshot_support: revert_to_snapshot_support - create_share_from_snapshot_support: create_share_from_snapshot_support - share_type_access:is_public: share_type_access:is_public - name: share_type_name - description: share_type_description - is_default: is_default_type Response Example ---------------- .. literalinclude:: ./samples/share-type-show-response.json :language: javascript List extra specs ================ .. rest_method:: GET /v2/{project_id}/types/{share_type_id}/extra_specs Lists the extra specifications for a share type. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_type_id: share_type_id Response parameters ------------------- .. rest_parameters:: parameters.yaml - extra_specs: extra_specs - driver_handles_share_servers: driver_handles_share_servers - snapshot_support: snapshot_support_1 - replication_type: replication_type - mount_snapshot_support: mount_snapshot_support - revert_to_snapshot_support: revert_to_snapshot_support - create_share_from_snapshot_support: create_share_from_snapshot_support Response example ---------------- .. literalinclude:: samples/share-types-extra-specs-list-response.json :language: javascript Create share type ================= .. rest_method:: POST /v2/{project_id}/types Creates a share type. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - extra_specs: extra_specs - driver_handles_share_servers: driver_handles_share_servers - snapshot_support: snapshot_support_1 - os-share-type-access:is_public: os-share-type-access:is_public - name: share_type_name_request - replication_type: replication_type - mount_snapshot_support: mount_snapshot_support - revert_to_snapshot_support: revert_to_snapshot_support - create_share_from_snapshot_support: create_share_from_snapshot_support - description: share_type_description_request Request example --------------- .. 
literalinclude:: samples/share-type-create-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_type_id_body - required_extra_specs: required_extra_specs - extra_specs: extra_specs - driver_handles_share_servers: driver_handles_share_servers - snapshot_support: snapshot_support_1 - os-share-type-access:is_public: os-share-type-access:is_public - share_type_access:is_public: share_type_access:is_public - name: share_type_name - replication_type: replication_type - mount_snapshot_support: mount_snapshot_support - revert_to_snapshot_support: revert_to_snapshot_support - create_share_from_snapshot_support: create_share_from_snapshot_support - description: share_type_description - is_default: is_default_type Response example ---------------- .. literalinclude:: samples/share-type-create-response.json :language: javascript Show share type access details ============================== .. rest_method:: GET /v2/{project_id}/types/{share_type_id}/share_type_access Shows access details for a share type. You can view access details for private share types only. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_type_id: share_type_id Response parameters ------------------- .. rest_parameters:: parameters.yaml - project_id: project_id_type_access - share_type_id: share_type_id_body Response example ---------------- .. literalinclude:: samples/share-types-list-access-response.json :language: javascript Set extra spec for share type ============================= .. rest_method:: POST /v2/{project_id}/types/{share_type_id}/extra_specs Sets an extra specification for the share type. Each driver implementation determines which extra specification keys it uses. For details, see `Capabilities and Extra-Specs `_ and documentation for your driver. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_type_id: share_type_id - extra_specs: extra_specs Request example --------------- .. literalinclude:: samples/share-type-set-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - extra_specs: extra_specs Response example ---------------- .. literalinclude:: samples/share-type-set-response.json :language: javascript Unset an extra spec =================== .. rest_method:: DELETE /v2/{project_id}/types/{share_type_id}/extra_specs/{extra-spec-key} Unsets an extra specification for the share type. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_type_id: share_type_id - extra-spec-key: extra_spec_key_path Add share type access ===================== .. rest_method:: POST /v2/{project_id}/types/{share_type_id}/action Adds share type access for a project. You can add access to private share types only. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - share_type_id: share_type_id - addProjectAccess: add_project_access - project: project_id_type_access_grant_request Request example --------------- .. literalinclude:: samples/share-type-grant-access-request.json :language: javascript Remove share type access ======================== .. rest_method:: POST /v2/{project_id}/types/{share_type_id}/action Removes share type access from a project. You can remove access from private share types only. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_type_id: share_type_id - removeProjectAccess: remove_project_access - project: project_id_type_access_revoke_request Request example --------------- .. literalinclude:: samples/share-type-revoke-access-request.json :language: javascript Delete share type ================= .. rest_method:: DELETE /v2/{project_id}/types/{share_type_id} Deletes a share type. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_type_id: share_type_id Update share type (since API v2.50) =================================== .. rest_method:: PUT /v2/{project_id}/types/{share_type_id} .. versionadded:: 2.50 Update a share type. Share type extra-specs cannot be updated with this API. Please use the respective APIs to `set extra specs <#set-extra-spec-for-share-type>`_ or `unset extra specs <#unset-an-extra-spec>`_. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_type_id: share_type_id - name: share_type_name_request - share_type_access:is_public: share_type_access:is_public_update_request - description: share_type_description_update_request Request example --------------- .. literalinclude:: samples/share-type-update-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_type_id_body - required_extra_specs: required_extra_specs - extra_specs: extra_specs - driver_handles_share_servers: driver_handles_share_servers - snapshot_support: snapshot_support_1 - share_type_access:is_public: share_type_access:is_public_body - name: share_type_name - replication_type: replication_type_body - mount_snapshot_support: mount_snapshot_support_body - revert_to_snapshot_support: revert_to_snapshot_support_body - create_share_from_snapshot_support: create_share_from_snapshot_support_body - description: share_type_description_body - is_default: is_default_type_body Response example ---------------- .. literalinclude:: samples/share-type-update-response.json :language: javascript manila-10.0.0/api-ref/source/conf.py0000664000175000017500000002322513656750227017216 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # manila documentation build configuration file, created by # sphinx-quickstart on Sat May 7 13:35:27 2016. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import sys html_theme = 'openstackdocs' html_theme_options = { "sidebar_mode": "toc", } extensions = [ 'os_api_ref', 'openstackdocstheme', ] # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../../')) sys.path.insert(0, os.path.abspath('../')) sys.path.insert(0, os.path.abspath('./')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. # Add any paths that contain templates here, relative to this directory. # templates_path = ['_templates'] # The suffix(es) of source filenames. # You can specify multiple suffix as a list of string: # source_suffix = ['.rst', '.md'] source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. copyright = u'2010-present, OpenStack Foundation' # openstackdocstheme options repository_name = 'openstack/manila' bug_project = 'manila' bug_tag = 'api-ref' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This patterns also effect to html_static_path and html_extra_path exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = False # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. 
# modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_theme = 'alabaster' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. # " v documentation" by default. # html_title = u'Shared File Systems API Reference v2' # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (relative to this directory) to use as a favicon of # the docs. This file should be a Windows icon file (.ico) being 16x16 or # 32x32 pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". # html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Language to be used for generating the HTML full-text search index. # Sphinx supports the following languages: # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh' # html_search_language = 'en' # A dictionary with options for the search language support, empty by default. # 'ja' uses this config value. # 'zh' user can custom change `jieba` dictionary path. # html_search_options = {'type': 'default'} # The name of a javascript file (relative to the configuration directory) that # implements a search results scorer. If empty, the default will be used. 
# html_search_scorer = 'scorer.js' # Output file base name for HTML help builder. htmlhelp_basename = 'maniladoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', # Latex figure (float) alignment # 'figure_align': 'htbp', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ (master_doc, 'manila.tex', u'OpenStack Shared File Systems API Documentation', u'OpenStack Foundation', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ (master_doc, 'manila', u'OpenStack Shared File Systems API Documentation', u'Openstack Foundation', 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ (master_doc, 'Manila', u'OpenStack Shared File Systems API Documentation', u'OpenStack Foundation', 'Manila', 'OpenStack Shared File Systems', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False manila-10.0.0/api-ref/source/share-group-snapshots.inc0000664000175000017500000002061313656750227022664 0ustar zuulzuul00000000000000.. -*- rst -*- ======================================= Share group snapshots (since API v2.31) ======================================= Use the Shared File Systems Service to make snapshots of share groups. A share group snapshot is a point-in-time, read-only copy of the data that is contained in a share group. You can create, update, and delete share group snapshots. After you create a share group snapshot, you can create a share group from it. You can update a share group snapshot to rename it, change its description, or update its state. As administrator, you can also reset the state of a group snapshot. Use the ``policy.json`` file to grant permissions for these actions to other roles. .. note:: Share Group Snapshot APIs are `experimental APIs <#experimental-apis>`_. List share group snapshots ========================== .. rest_method:: GET /v2/{project_id}/share-group-snapshots .. versionadded:: 2.31 Lists all share group snapshots. 
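As a minimal illustration of this call (the controller host and token below are placeholders; the canonical response body is shown in the response example in this section):

.. code-block:: console

   $ # Requires microversion 2.31 or later; host and token are placeholders.
   $ curl -s "http://controller:8786/v2/${PROJECT_ID}/share-group-snapshots" \
         -H "X-Auth-Token: ${TOKEN}" \
         -H "X-OpenStack-Manila-API-Version: 2.31"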
Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query - name: name_query - description: description_query - status: group_snapshot_status_query - share_group_id: share_group_id_query - limit: limit_query - offset: offset - sort_key: sort_key - sort_dir: sort_dir Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: group_snapshot_id - name: name - links: group_snapshot_links Response example ---------------- .. literalinclude:: samples/share-group-snapshots-list-response.json :language: javascript List share group snapshots with details ======================================= .. rest_method:: GET /v2/{project_id}/share-group-snapshots/detail .. versionadded:: 2.31 Lists all share group snapshots with details. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query - name: name_query - description: description_query - status: group_snapshot_status_query - share_group_id: share_group_id_query - limit: limit_query - offset: offset - sort_key: sort_key - sort_dir: sort_dir Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: group_snapshot_id - project_id: project_id - status: group_snapshot_status_required - share_group_id: share_group_id - name: name - description: description - created_at: created_at - members: group_snapshot_members - links: group_snapshot_links Response example ---------------- .. literalinclude:: samples/share-group-snapshots-list-detailed-response.json :language: javascript List share group snapshots members ================================== .. rest_method:: GET /v2/{project_id}/share-group-snapshots/{group_snapshot_id}/members .. versionadded:: 2.31 Lists all share group snapshots members. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - group_snapshot_id: group_snapshot_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: group_snapshot_id - created_at: created_at - project_id: project_id - size: snapshot_size - share_protocol: snapshot_share_protocol - name: name - share_group_snapshot_id: group_snapshot_id - share_id: snapshot_share_id Response example ---------------- .. literalinclude:: samples/share-group-snapshots-list-members-response.json :language: javascript Show share group snapshot details ================================= .. rest_method:: GET /v2/{project_id}/share-group-snapshots/{group_snapshot_id} .. versionadded:: 2.31 Shows details for a share group snapshot. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - group_snapshot_id: group_snapshot_id_path Response parameters ------------------- .. 
rest_parameters:: parameters.yaml - id: group_snapshot_id - project_id: project_id - status: group_snapshot_status_required - share_group_id: share_group_id - name: name - description: description - created_at: created_at - members: group_snapshot_members - links: group_snapshot_links Response example ---------------- .. literalinclude:: samples/share-group-snapshot-show-response.json :language: javascript Create share group snapshot =========================== .. rest_method:: POST /v2/{project_id}/share-group-snapshots .. versionadded:: 2.31 Creates a snapshot from a share group. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - name: name_request - description: description_request - share_group_id: share_group_id Request example --------------- .. literalinclude:: samples/share-group-snapshot-create-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: group_snapshot_id - project_id: project_id - status: group_snapshot_status_required - share_group_id: share_group_id - name: name - description: description - created_at: created_at - members: group_snapshot_members - links: group_snapshot_links Response example ---------------- .. literalinclude:: samples/share-group-snapshot-create-response.json :language: javascript Reset share group snapshot state ================================ .. rest_method:: POST /v2/{project_id}/share-group-snapshots/{group_snapshot_id}/action .. versionadded:: 2.31 Administrator only. Explicitly updates the state of a share group snapshot. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - group_snapshot_id: group_snapshot_id_path - status: group_snapshot_status_required Request example --------------- .. literalinclude:: samples/snapshot-actions-reset-state-request.json :language: javascript Update share group snapshot =========================== .. rest_method:: PUT /v2/{project_id}/share-group-snapshots/{group_snapshot_id} .. versionadded:: 2.31 Updates a share group snapshot. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - group_snapshot_id: group_snapshot_id_path - name: name_request - description: description_request Request example --------------- .. literalinclude:: samples/snapshot-update-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: group_snapshot_id - project_id: project_id - status: group_snapshot_status_required - share_group_id: share_group_id - name: name - description: description - created_at: created_at - members: group_snapshot_members - links: group_snapshot_links Response example ---------------- .. literalinclude:: samples/share-group-snapshot-update-response.json :language: javascript Delete share group snapshot =========================== .. rest_method:: DELETE /v2/{project_id}/share-group-snapshots/{group_snapshot_id} .. versionadded:: 2.31 Deletes a share group snapshot.
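For illustration only, such a request could be issued as follows; the controller host, token, and IDs are placeholders, and the API itself is exactly the DELETE method shown above:

.. code-block:: console

   $ # Placeholders: controller host, project ID, token, and snapshot ID.
   $ curl -s -X DELETE \
         -H "X-Auth-Token: ${TOKEN}" \
         -H "X-OpenStack-Manila-API-Version: 2.31" \
         "http://controller:8786/v2/${PROJECT_ID}/share-group-snapshots/${GROUP_SNAPSHOT_ID}"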
Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - group_snapshot_id: group_snapshot_id_path manila-10.0.0/api-ref/source/share-servers.inc0000664000175000017500000002056113656750227021203 0ustar zuulzuul00000000000000.. -*- rst -*- ============= Share servers ============= A share server is created by multi-tenant back-end drivers where shares are hosted. For example, with the ``generic`` driver, shares are hosted on Compute VMs. Administrators can perform read and delete actions for share servers. An administrator can delete an active share server only if it contains no dependent shares. If an administrator deletes the share server, the Shared File Systems service creates a share server in response to a subsequent create share request. An administrator can use the ``policy.json`` file to grant permissions for share server actions to other roles. The status of a share server indicates its current state. After you successfully set up a share server, its status is ``active``. If errors occur during set up such as when server data is not valid, its status is ``error``. The possible share servers statuses are: **Share server statuses** +--------------+------------------------------------------------------------------+ | Status | Description | +--------------+------------------------------------------------------------------+ | ``active`` | Share server was successfully set up. | +--------------+------------------------------------------------------------------+ | ``error`` | The set up or deletion of the share server failed. | +--------------+------------------------------------------------------------------+ | ``deleting`` | The share server has no dependent shares and is being deleted. | +--------------+------------------------------------------------------------------+ | ``creating`` | The share server is being created on the back end with data from | | | the database. | +--------------+------------------------------------------------------------------+ List share servers ================== .. rest_method:: GET /v2/{project_id}/share-servers Lists all share servers. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_server_id - project_id: project_id - status: status_share_server_body - share_network_id: share_network_id - share_network_name: share_network_name - host: host_share_server_body - updated_at: updated_at Response example ---------------- .. literalinclude:: samples/share-servers-list-response.json :language: javascript Show share server ================= .. rest_method:: GET /v2/{project_id}/share-servers/{share_server_id} Show a share server's details. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_server_id: share_server_id Response parameters ------------------- .. 
rest_parameters:: parameters.yaml - id: share_server_id - project_id: project_id - status: status_share_server_body - backend_details: backend_details - share_network_id: share_network_id_share_server_body - share_network_name: share_network_name - host: host_share_server_body - created_at: created_at - updated_at: updated_at - identifier: share_server_show_identifier - is_auto_deletable: share_server_show_is_auto_deletable Response example ---------------- .. literalinclude:: samples/share-server-show-response.json :language: javascript Show share server back end details ================================== .. rest_method:: GET /v2/{project_id}/share-servers/{share_server_id}/details Shows back end details of a share server. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_server_id: share_server_id Response parameters ------------------- Response parameters can differ based on the back end used. Each back end can store any key-value information that it requires. For example, the generic back end driver might store the router ID. Response example ---------------- .. literalinclude:: samples/share-server-show-details-response.json :language: javascript Delete share server =================== .. rest_method:: DELETE /v2/{project_id}/share-servers/{share_server_id} Deletes a share server. An administrator can delete an active share server only if it contains no dependent shares. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_server_id: share_server_id Manage share server (since API v2.49) ===================================== .. rest_method:: POST /v2/{project_id}/share-servers/manage .. versionadded:: 2.49 Manages a share server. An administrator can bring a pre-existing share server under the management of the Shared File Systems service if the back end driver is operating in ``driver_handles_share_servers=True`` mode. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 403 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - host: manage_host - identifier: identifier - share_network: share_network_id - driver_options: driver_options Request example --------------- .. literalinclude:: samples/share-server-manage-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_server_id - project_id: project_id - updated_at: updated_at - status: share_server_status - host: manage_host - share_network_name: share_network_name - share_network_id: share_network_id - created_at: created_at - backend_details: backend_details - is_auto_deletable: is_auto_deletable - identifier: identifier Response examples ----------------- .. literalinclude:: samples/share-server-manage-response.json :language: javascript Unmanage share server (since API v2.49) ======================================= .. rest_method:: POST /v2/{project_id}/share-servers/{share_server_id}/action .. versionadded:: 2.49 Unmanages a share server. An administrator can remove a share server from the Shared File Systems service's management if there are no associated shares that the service is aware of. The share server will not be torn down in the back end.
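A minimal sketch of such a request is shown below; the controller host, token, and IDs are placeholders, and the canonical body is in the request example later in this section. The ``unmanage`` value may be ``null`` or an object that optionally carries a ``force`` attribute:

.. code-block:: console

   $ # Placeholders: controller host, project ID, token, and share server ID.
   $ curl -s -X POST \
         -H "X-Auth-Token: ${TOKEN}" \
         -H "X-OpenStack-Manila-API-Version: 2.49" \
         -H "Content-Type: application/json" \
         -d '{"unmanage": {"force": false}}' \
         "http://controller:8786/v2/${PROJECT_ID}/share-servers/${SHARE_SERVER_ID}/action"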
Preconditions - Share server status must be either ``error``, ``manage_error``, ``active`` or ``unmanage_error``. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 404 Request parameters ------------------ .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_server_id: share_server_id - force: force - unmanage: share_server_unmanage Request example --------------- .. literalinclude:: samples/share-server-unmanage-request.json :language: javascript Response parameters ------------------- There is no body content for the response. Reset status (since API v2.49) ============================== .. rest_method:: POST /v2/{project_id}/share-servers/{share_server_id}/action .. versionadded:: 2.49 Resets a share server status. Administrator only. Explicitly updates the state of a share server. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 404 Request parameters ------------------ .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_server_id: share_server_id - status: share_server_status Request example --------------- .. literalinclude:: samples/share-server-reset-state-request.json :language: javascript Response parameters ------------------- There is no body content for the response. manila-10.0.0/api-ref/source/share-actions.inc0000664000175000017500000002276613656750227021153 0ustar zuulzuul00000000000000.. -*- rst -*- .. _get-access-rules-before-2-45: ============= Share actions ============= Share actions include granting or revoking share access, listing the available access rules for a share, explicitly updating the state of a share, resizing a share, and unmanaging a share. As administrator, you can reset the state of a share and force-delete a share in any state. Use the ``policy.json`` file to grant permissions for this action to other roles. You can set the state of a share to one of these supported states: - ``available`` - ``error`` - ``creating`` - ``deleting`` - ``error_deleting`` If API version 1.0-2.6 is used, all share actions defined below should include the prefix ``os-`` in the top element of the request JSON body. For example: {"access_list": null} is valid for v2.7+, and {"os-access_list": null} is valid for v1.0-2.6. Grant access ============ All manila shares begin with no access. Clients must be provided with explicit access via this API. To grant access, specify one of these supported share access levels: - ``rw``. Read and write (RW) access. - ``ro``. Read-only (RO) access. You must also specify one of these supported authentication methods: - ``ip``. Authenticates an instance through its IP address. The value specified should be a valid IPv4 or an IPv6 address, or a subnet in CIDR notation. A valid format is ``X:X:X:X:X:X:X:X``, ``X:X:X:X:X:X:X:X/XX``, ``XX.XX.XX.XX``, or ``XX.XX.XX.XX/XX``, etc. For example ``0.0.0.0/0`` or ``::/0``. .. important:: IPv6 based access is only supported with API version 2.38 and beyond. - ``cert``. Authenticates an instance through a TLS certificate. Specify the TLS identity as the IDENTKEY. A valid value is any string up to 64 characters long in the common name (CN) of the certificate. The meaning of a string depends on its interpretation. - ``user``. Authenticates by a user or group name.
A valid value is an alphanumeric string that can contain some special characters and is from 4 to 255 characters long. .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action Grants access to a share. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - allow_access: allow_access - access_level: access_level - access_type: access_type - access_to: access_to - access_metadata: metadata Request example --------------- .. literalinclude:: samples/share-actions-grant-access-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - share_id: access_share_id - created_at: created_at - updated_at: updated_at - access_type: access_type - access_to: access_to - access_key: access_key - access: access - access_level: access_level - id: access_rule_id - access_metadata: access_metadata Response example ---------------- .. literalinclude:: samples/share-actions-grant-access-response.json :language: javascript Revoke access ============= .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action The shared file systems service stores each access rule in its database and assigns it a unique ID. This ID can be used to revoke access after access has been requested. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - deny_access: deny_access - access_id: access_id Request example --------------- .. literalinclude:: samples/share-actions-revoke-access-request.json :language: javascript List access rules (DEPRECATED) ============================== .. warning:: This API is deprecated starting with microversion 2.45 and requests to this API will fail with a 404 starting from microversion 2.45. Use :ref:`List share access rules ` API instead of this API from version 2.45. .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action Lists access rules for a share. The Access ID returned is necessary to deny access. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - access_list: access_list Request example --------------- .. literalinclude:: samples/share-actions-list-access-rules-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - access_type: access_type - access_key: access_key - access_to: access_to - access_level: access_level - state: state - access_list: access_list - id: access_rule_id - created_at: created_at - updated_at: updated_at Response example ---------------- .. literalinclude:: samples/share-actions-list-access-rules-response.json :language: javascript Reset share state ================= .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action Administrator only. Explicitly updates the state of a share. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - reset_status: reset_status - status: access_status Request example --------------- .. literalinclude:: samples/share-actions-reset-state-request.json :language: javascript Force-delete share ================== .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action Administrator only. Force-deletes a share in any state. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - force_delete: share_force_delete Request example --------------- .. literalinclude:: samples/share-actions-force-delete-request.json :language: javascript Extend share ============ .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action Increases the size of a share. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - extend: extend - new_size: share_new_size Request example --------------- .. literalinclude:: samples/share-actions-extend-request.json :language: javascript Shrink share ============ .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action Shrinks the size of a share. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - shrink: shrink - new_size: share_new_size Request example --------------- .. literalinclude:: samples/share-actions-shrink-request.json :language: javascript Unmanage share (since API v2.7) =============================== .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action .. versionadded:: 2.7 Use this API to remove a share from the management of the Shared File Systems service without deleting the share. Administrator only. Use the ``policy.json`` file to grant permissions for this action to other roles. Preconditions: - You should remove any snapshots and share replicas before attempting to unmanage a share. .. note:: Unmanaging shares that are created on top of share servers (i.e. created with share networks) is not supported prior to API version 2.49. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - unmanage: share_unmanage Request example --------------- .. literalinclude:: samples/share-actions-unmanage-request.json :language: javascript Response parameters ------------------- There is no body content for the response. Revert share to snapshot (since API v2.27) ========================================== .. rest_method:: POST /v2/{project_id}/shares/{share_id}/action .. versionadded:: 2.27 Reverts a share to the specified snapshot, which must be the most recent one known to manila. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - snapshot_id: snapshot_id Request example --------------- .. literalinclude:: samples/share-actions-revert-to-snapshot-request.json :language: javascript manila-10.0.0/api-ref/source/status.yaml0000664000175000017500000000325113656750227020123 0ustar zuulzuul00000000000000 200: default: | Request was successful. 201: default: | Request has been fulfilled and new resource created. 202: default: | Request is accepted, but processing may take some time. 203: default: | Returned information is not full set, but a subset. 204: default: | Request fulfilled but service does not return anything. 300: default: | The resource corresponds to more than one representation. 400: default: | Some content in the request was invalid. 401: default: | User must authenticate before making a request. 403: default: | Policy does not allow current user to do this operation. 404: default: | The requested resource could not be found. 405: default: | Method is not valid for this endpoint and resource. 409: default: | This resource has an action in progress that would conflict with this request. 413: default: | This operation cannot be completed. 415: default: | The entity of the request is in a format not supported by the requested resource for the method. 422: default: | The entity of the request is in a format not processable by the requested resource for the method. 500: default: | Something went wrong with the service which prevents it from fulfilling the request. 501: default: | The service does not have the functionality required to fulfill this request. 503: default: | The service cannot handle the request right now. manila-10.0.0/api-ref/source/share-group-types.inc0000664000175000017500000002261313656750227022010 0ustar zuulzuul00000000000000.. -*- rst -*- =================================== Share group types (since API v2.31) =================================== A share group type enables you to filter or choose back ends before you create a share group. You can set share group types as either public or private. By default a share group type is created as publicly accessible. Set ``share_group_type_access:is_public`` to ``False`` to make a share group type private. You can manage access to the private share group types for different projects. You can add access, remove access, and get information about access for a private share group type. Administrators can specify which `share type(s) <#experimental-apis>`_ a given group type may contain. If Administrators do not explicitly associate share types with a given share group type, the service will associate the share type configured as the ``default_share_type`` with the share group type. When creating a share group, the scheduler picks one of the back ends that match a combination of the extra specs in the specified share type(s) and share group type. Administrators can also set additional group extra specifications for a share group type for the following purposes: - Filter back ends by group scheduler. Specify these group extras specifications in this format: ``group_specs=value``. For example, ``consistent_snapshot_support=true``. .. note:: Share Group Types APIs are `experimental APIs <#experimental-apis>`_. List share group types ====================== .. rest_method:: GET /v2/{project_id}/share-group-types .. versionadded:: 2.31 Lists all share group types. Response codes -------------- .. rest_status_code:: success status.yaml - 200 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_group_type_id_required - is_public: share_group_type_is_public - share_types: share_types_1 - name: share_group_type_name - group_specs: group_specs_required - is_default: is_group_type_default Response example ---------------- .. literalinclude:: samples/share-group-types-list-response.json :language: javascript List default share group types ============================== .. rest_method:: GET /v2/{project_id}/share-group-types/default .. versionadded:: 2.31 Lists default share group types. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_group_type_id_required - is_public: share_group_type_is_public - share_types: share_types_1 - name: share_group_type_name - group_specs: group_specs_required - is_default: is_group_type_default Response example ---------------- .. literalinclude:: samples/share-group-types-default-list-response.json :language: javascript List share group types extra specs ================================== .. rest_method:: GET /v2/{project_id}/share-group-types/{share_group_type_id}/group_specs .. versionadded:: 2.31 Lists the extra specifications for a share group type. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_type_id: share_group_type_id_required Response parameters ------------------- .. rest_parameters:: parameters.yaml - group_specs: group_specs_required Response example ---------------- .. literalinclude:: samples/share-group-types-group-specs-list-response.json :language: javascript Create share group type ======================= .. rest_method:: POST /v2/{project_id}/share-group-types .. versionadded:: 2.31 Creates a share group type. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_types: share_types_1 - name: share_group_type_name_request - group_specs: group_specs - is_public: share_group_type_is_public_request Request example --------------- .. literalinclude:: samples/share-group-type-create-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_group_type_id_required - group_specs: group_specs_required - name: share_group_type_name - share_types: share_types_1 - is_public: share_group_type_is_public - is_default: is_group_type_default Response example ---------------- .. literalinclude:: samples/share-group-type-create-response.json :language: javascript Show share group type access details ==================================== .. rest_method:: GET /v2/{project_id}/share-group-types/{share_group_type_id}/share_type_access .. versionadded:: 2.31 Shows access details for a share group type. You can view access details for private share group types only. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. 
rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_type_id: share_group_type_id_required Response parameters ------------------- .. rest_parameters:: parameters.yaml - share_group_type_id: share_group_type_id_required - project_id: project_id_type_access Response example ---------------- .. literalinclude:: samples/share-group-types-list-access-response.json :language: javascript Set extra spec for share group type =================================== .. rest_method:: POST /v2/{project_id}/share-group-types/{share_group_type_id}/group_specs .. versionadded:: 2.31 Sets an extra specification for the share group type. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_type_id: share_group_type_id_required - group_specs: group_specs_required Request example --------------- .. literalinclude:: samples/share-group-type-set-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - group_specs: group_specs_required Response example ---------------- .. literalinclude:: samples/share-group-type-set-response.json :language: javascript Unset a group spec =================== .. rest_method:: DELETE /v2/{project_id}/share-group-types/{share_group_type_id}/group-specs/{group_spec_key} .. versionadded:: 2.31 Unsets an extra specification for the share group type. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_type_id: share_group_type_id_required - group_spec_key: group_spec_key Add share group type access =========================== .. rest_method:: POST /v2/{project_id}/share-group-types/{share_group_type_id}/action .. versionadded:: 2.31 Adds share group type access for a project. You can add access to private share group types only. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_type_id: share_group_type_id_path - addProjectAccess: add_project_access - project: project_id_type_access_grant_request Request example --------------- .. literalinclude:: samples/share-group-type-grant-access-request.json :language: javascript Remove share group type access ============================== .. rest_method:: POST /v2/{project_id}/share-group-types/{share_group_type_id}/action .. versionadded:: 2.31 Removes share group type access from a project. You can remove access from private share group types only. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_type_id: share_group_type_id_path - removeProjectAccess: remove_project_access - project: project_id_type_access_revoke_request Request example --------------- .. literalinclude:: samples/share-group-type-revoke-access-request.json :language: javascript
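As a rough sketch of how the two access actions above are invoked over plain HTTP, the ``curl`` request below removes a project from a private share group type. The token, endpoint URL and IDs are placeholders to substitute with your own values (the project ID shown is taken from the sample request above), and the microversion header reflects the fact that share group types require API version 2.31 or later:

.. code-block:: console

   $ curl -i -X POST \
       -H "X-Auth-Token: $TOKEN" \
       -H "X-OpenStack-Manila-API-Version: 2.31" \
       -H "Content-Type: application/json" \
       -d '{"removeProjectAccess": {"project": "818a3f48dcd644909b3fa2e45a399a27"}}' \
       "$MANILA_ENDPOINT/v2/$PROJECT_ID/share-group-types/$SHARE_GROUP_TYPE_ID/action"

Granting access works the same way with an ``addProjectAccess`` body; in both cases the service replies with ``202 Accepted`` and no response body.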
Delete share group type ======================= .. rest_method:: DELETE /v2/{project_id}/share-group-types/{share_group_type_id} .. versionadded:: 2.31 Deletes a share group type. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_group_type_id: share_group_type_id_path manila-10.0.0/api-ref/source/share-metadata.inc0000664000175000017500000001077513656750227021275 0ustar zuulzuul00000000000000.. -*- rst -*- ============== Share metadata ============== Shows, sets, updates, and unsets share metadata. Show all share metadata ======================= .. rest_method:: GET /v2/{project_id}/shares/{share_id}/metadata Shows all the metadata for a share, as key and value pairs. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id Response parameters ------------------- .. rest_parameters:: parameters.yaml - metadata: metadata_3 Response example ---------------- .. literalinclude:: samples/share-show-metadata-response.json :language: javascript Show share metadata item ========================= .. rest_method:: GET /v2/{project_id}/shares/{share_id}/metadata/{key} Retrieves a specific metadata item from a share's metadata by its key. If the specified key does not represent a valid metadata item, the API will respond with HTTP 404. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - key: metadata_key_request Response parameters ------------------- .. rest_parameters:: parameters.yaml - metadata: metadata_item Response example ---------------- .. literalinclude:: samples/share-show-metadata-item-response.json :language: javascript Set share metadata ================== .. rest_method:: POST /v2/{project_id}/shares/{share_id}/metadata Allows adding new metadata items as key-value pairs. This API will not delete pre-existing metadata items. If the request object contains metadata items that already exist, they will be updated with new values as specified in the request object. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - metadata: metadata_2 Request example --------------- .. literalinclude:: samples/share-set-metadata-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - metadata: metadata Response example ---------------- .. literalinclude:: samples/share-set-metadata-response.json :language: javascript
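For illustration only, the Set share metadata call above can also be made directly with ``curl``; this is a sketch in which the token, endpoint URL, share ID and the metadata keys are placeholders rather than values taken from this reference:

.. code-block:: console

   $ curl -X POST \
       -H "X-Auth-Token: $TOKEN" \
       -H "Content-Type: application/json" \
       -d '{"metadata": {"project": "my_app", "aim": "doc"}}' \
       "$MANILA_ENDPOINT/v2/$PROJECT_ID/shares/$SHARE_ID/metadata"

On success the API returns ``200 OK`` along with the resulting ``metadata`` object, leaving any pre-existing keys that were not named in the request untouched.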
Update share metadata ===================== .. rest_method:: PUT /v2/{project_id}/shares/{share_id}/metadata Replaces the metadata for a given share with the metadata (specified as key-value pairs) in the request object. All pre-existing metadata of the share will be deleted and replaced with the new metadata supplied. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - metadata: metadata_2 Request example --------------- .. literalinclude:: samples/share-update-metadata-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - metadata: metadata_3 Response example ---------------- .. literalinclude:: samples/share-update-metadata-response.json :language: javascript To delete all existing metadata items on a given share, the request object needs to specify an empty metadata object: Request example --------------- .. literalinclude:: samples/share-update-null-metadata-request.json :language: javascript Response example ---------------- .. literalinclude:: samples/share-update-null-metadata-response.json :language: javascript Delete share metadata item ========================== .. rest_method:: DELETE /v2/{project_id}/shares/{share_id}/metadata/{key} Deletes a single metadata item on a share, identified by its key. If the specified key does not represent a valid metadata item, the API will respond with HTTP 404. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - key: metadata_key_request manila-10.0.0/api-ref/source/user-messages.inc0000664000175000017500000000645613656750227021174 0ustar zuulzuul00000000000000.. -*- rst -*- ============================== User messages (since API 2.37) ============================== Lists, shows and deletes user messages. User messages are automatically created when an asynchronous action fails on a resource. In such situations an error is logged in the appropriate log file, but end users may not have access to the log files. User messages can be used by users to get error details for failed actions. This is handy, for example, when creating shares: if share creation fails because a scheduling filter does not find a suitable back-end host, the share ends up in ``error`` state, but through the user messages API users can get details about the last executed filter, which helps them identify the issue and perhaps re-attempt the creation request with different parameters. List user messages ================== .. rest_method:: GET /v2/{project_id}/messages .. versionadded:: 2.37 Lists all user messages. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - limit: limit - offset: offset - sort_key: sort_key_messages - sort_dir: sort_dir - action_id: action_id - detail_id: detail_id - message_level: message_level - project_id: project_id_messages - request_id: request_id - resource_id: resource_id - resource_type: resource_type Response parameters ------------------- .. rest_parameters:: parameters.yaml - action_id: action_id_body - detail_id: detail_id_body - message_level: message_level_body - project_id: project_id_messages_body - request_id: request_id_body - resource_id: resource_id_body - resource_type: resource_type_body - message_members_links: message_members_links Response example ---------------- .. literalinclude:: samples/user-messages-list-response.json :language: javascript Show user message details ========================= .. rest_method:: GET /v2/{project_id}/messages/{message_id} ..
versionadded:: 2.37 Shows details for a user message. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - message_id: message_id Response parameters ------------------- .. rest_parameters:: parameters.yaml - action_id: action_id_body - detail_id: detail_id_body - message_level: message_level_body - project_id: project_id_messages_body - request_id: request_id_body - resource_id: resource_id_body - resource_type: resource_type_body - message_links: message_links Response example ---------------- .. literalinclude:: samples/user-message-show-response.json :language: javascript Delete message ============== .. rest_method:: DELETE /v2/{project_id}/messages/{message_id} .. versionadded:: 2.37 Deletes a user message. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - message_id: message_id manila-10.0.0/api-ref/source/availability-zones.inc0000664000175000017500000000202413656750227022212 0ustar zuulzuul00000000000000.. -*- rst -*- ================== Availability zones ================== Describes availability zones that the Shared File Systems service is configured with. .. important:: For API versions 2.6 and prior, replace ``availability-zones`` in the URLs with ``os-availability-zone``. List availability zones ======================= .. rest_method:: GET /v2/{project_id}/availability-zones Lists all availability zones. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - availability_zones: availability_zones - id: availability_zone_id - name: availability_zone_name - created_at: created_at - updated_at: updated_at Response example ---------------- .. literalinclude:: samples/availability-zones-list-response.json :language: javascript manila-10.0.0/api-ref/source/share-instances.inc0000664000175000017500000001020213656750227021470 0ustar zuulzuul00000000000000.. -*- rst -*- ================================ Share instances (since API v2.3) ================================ Administrators can list, show information for, explicitly set the state of, and force-delete share instances. Use the ``policy.json`` file to grant permissions for these actions to other roles. List share instances ==================== .. rest_method:: GET /v2/{project_id}/share_instances .. versionadded:: 2.3 Lists all share instances. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - export_location_id: export_location_id_query - export_location_path: export_location_path_query Response parameters ------------------- .. 
rest_parameters:: parameters.yaml - status: status_5 - access_rules_status: access_rules_status - share_id: share_id_2 - progress: progress_share_instance - availability_zone: availability_zone_1 - created_at: created_at - replica_state: replica_state - export_location: export_location - export_locations: export_locations - cast_rules_to_readonly: share_instance_cast_rules_to_readonly - share_network_id: share_network_id_4 - share_server_id: share_server_id - host: host_6 - id: id_13 Response example ---------------- .. literalinclude:: samples/share-instances-list-response.json :language: javascript Show share instance details =========================== .. rest_method:: GET /v2/{project_id}/share_instances/{share_instance_id} .. versionadded:: 2.3 Shows details for a share instance. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_instance_id: share_instance_id Response parameters ------------------- .. rest_parameters:: parameters.yaml - status: status_5 - access_rules_status: access_rules_status - share_id: share_id_2 - progress: progress_share_instance - availability_zone: availability_zone_1 - created_at: created_at - replica_state: replica_state - export_location: export_location - export_locations: export_locations - cast_rules_to_readonly: share_instance_cast_rules_to_readonly - share_network_id: share_network_id_4 - share_server_id: share_server_id - host: host_6 - id: id_13 Response example ---------------- .. literalinclude:: samples/share-show-instance-response.json :language: javascript Reset share instance state ========================== .. rest_method:: POST /v2/{project_id}/share_instances/{share_instance_id}/action .. versionadded:: 2.3 Administrator only. Explicitly updates the state of a share instance. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_instance_id: share_instance_id - status: status_5 Request example --------------- .. literalinclude:: samples/share-instance-actions-reset-state-request.json :language: javascript Force-delete share instance =========================== .. rest_method:: POST /v2/{project_id}/share_instances/{share_instance_id}/action .. versionadded:: 2.3 Administrator only. Force-deletes a share instance. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_instance_id: share_instance_id - force_delete: force_delete_2 Request example --------------- .. literalinclude:: samples/share-instance-actions-force-delete-request.json :language: javascript manila-10.0.0/api-ref/source/share-export-locations.inc0000664000175000017500000000454113656750227023024 0ustar zuulzuul00000000000000.. -*- rst -*- ======================================= Share export locations (since API v2.9) ======================================= Set of APIs used for viewing export locations of shares. 
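For a quick sketch of how the listing API documented below is called, the ``curl`` request here retrieves all export locations of a share; the token, endpoint URL and share ID are placeholders, and the microversion header must be at least 2.9:

.. code-block:: console

   $ curl -H "X-Auth-Token: $TOKEN" \
       -H "X-OpenStack-Manila-API-Version: 2.9" \
       "$MANILA_ENDPOINT/v2/$PROJECT_ID/shares/$SHARE_ID/export_locations"

Each returned location carries the ``path`` that clients mount, together with its ``id``, ``is_admin_only`` and ``preferred`` attributes.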
These APIs allow retrieval of export locations belonging to non-active share replicas until API version 2.46. From API version 2.47 onward, export locations of non-active share replicas can only be retrieved using the :ref:`Share Replica Export Locations APIs `. List export locations ===================== .. rest_method:: GET /v2/{project_id}/shares/{share_id}/export_locations .. versionadded:: 2.9 Lists all export locations for a share. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: export_location_id - share_instance_id: export_location_share_instance_id - path: export_location_path - is_admin_only: export_location_is_admin_only - preferred: export_location_preferred Response example ---------------- .. literalinclude:: samples/export-location-list-response.json :language: javascript Show single export location =========================== .. rest_method:: GET /v2/{project_id}/shares/{share_id}/export_locations/{export_location_id} .. versionadded:: 2.9 Shows details of an export location belonging to a share. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id - export_location_id: export_location_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: export_location_id - share_instance_id: export_location_share_instance_id - path: export_location_path - is_admin_only: export_location_is_admin_only - preferred: export_location_preferred - created_at: created_at - updated_at: updated_at Response example ---------------- ..
literalinclude:: samples/export-location-show-response.json :language: javascript manila-10.0.0/api-ref/source/samples/0000775000175000017500000000000013656750362017357 5ustar zuulzuul00000000000000manila-10.0.0/api-ref/source/samples/share-group-type-grant-access-request.json0000664000175000017500000000013213656750227027517 0ustar zuulzuul00000000000000{ "addProjectAccess": { "project": "e1284adea3ee4d2482af5ed214f3ad90" } } manila-10.0.0/api-ref/source/samples/share-group-snapshot-actions-reset-state-request.json0000664000175000017500000000007213656750227031724 0ustar zuulzuul00000000000000{ "reset_status": { "status": "error" } } manila-10.0.0/api-ref/source/samples/quota-classes-update-request.json0000664000175000017500000000015413656750227026004 0ustar zuulzuul00000000000000{ "quota_class_set": { "class_name": "test-qupta-class-update", "gigabytes": 20 } } manila-10.0.0/api-ref/source/samples/share-type-update-request.json0000664000175000017500000000021013656750227025272 0ustar zuulzuul00000000000000{ "share_type": { "share_type_access:is_public": true, "name": "testing", "description": "share type description" } } manila-10.0.0/api-ref/source/samples/share-servers-list-response.json0000664000175000017500000000064013656750227025650 0ustar zuulzuul00000000000000{ "share_servers": [ { "status": "active", "updated_at": "2015-09-07T08:52:15.000000", "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c", "host": "manila2@generic1", "share_network_name": "net_my", "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "id": "ba11930a-bf1a-4aa7-bae4-a8dfbaa3cc73" } ] } manila-10.0.0/api-ref/source/samples/service-disable-request.json0000664000175000017500000000011513656750227024776 0ustar zuulzuul00000000000000{ "binary": "manila-share", "host": "openstackhost@generic#pool_0" } manila-10.0.0/api-ref/source/samples/share-instances-list-response.json0000664000175000017500000000335613656750227026155 0ustar zuulzuul00000000000000{ "share_instances": [ { "status": "error", "progress": null, "share_id": "406ea93b-32e9-4907-a117-148b3945749f", "availability_zone": "nova", "replica_state": null, "created_at": "2015-09-07T08:41:20.000000", "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c", "cast_rules_to_readonly": false, "share_server_id": "ba11930a-bf1a-4aa7-bae4-a8dfbaa3cc73", "host": "manila2@generic1#GENERIC1", "id": "081f7030-c54f-42f5-98ee-93a37393e0f2" }, { "status": "available", "progress": "100%", "share_id": "d94a8548-2079-4be0-b21c-0a887acd31ca", "availability_zone": "nova", "replica_state": null, "created_at": "2015-09-07T08:51:34.000000", "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c", "cast_rules_to_readonly": false, "share_server_id": "ba11930a-bf1a-4aa7-bae4-a8dfbaa3cc73", "host": "manila2@generic1#GENERIC1", "id": "75559a8b-c90c-42a7-bda2-edbe86acfb7b" }, { "status": "creating_from_snapshot", "progress": "30%", "share_id": "9bb15af4-27e5-4174-ae15-dc549d4a3b51", "availability_zone": "nova", "replica_state": null, "created_at": "2015-09-07T09:01:15.000000", "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c", "cast_rules_to_readonly": false, "share_server_id": "ba11930a-bf1a-4aa7-bae4-a8dfbaa3cc73", "host": "manila2@generic1#GENERIC1", "id": "48155648-2fd3-480d-b02b-44b995c24bab" } ] } manila-10.0.0/api-ref/source/samples/share-actions-grant-access-request.json0000664000175000017500000000032713656750227027052 0ustar zuulzuul00000000000000{ "allow_access": { "access_level": "rw", "access_type": "ip", "access_to": "0.0.0.0/0", "metadata":{ "key1": "value1", 
"key2": "value2" } } } manila-10.0.0/api-ref/source/samples/share-types-list-access-response.json0000664000175000017500000000052213656750227026561 0ustar zuulzuul00000000000000{ "share_type_access": [ { "share_type_id": "1732f284-401d-41d9-a494-425451e8b4b8", "project_id": "818a3f48dcd644909b3fa2e45a399a27" }, { "share_type_id": "1732f284-401d-41d9-a494-425451e8b4b8", "project_id": "e1284adea3ee4d2482af5ed214f3ad90" } ] } manila-10.0.0/api-ref/source/samples/share-actions-reset-state-request.json0000664000175000017500000000007213656750227026735 0ustar zuulzuul00000000000000{ "reset_status": { "status": "error" } } manila-10.0.0/api-ref/source/samples/share-replicas-reset-replica-state-request.json0000664000175000017500000000011613656750227030513 0ustar zuulzuul00000000000000{ "reset_replica_state": { "replica_state": "out_of_sync" } } manila-10.0.0/api-ref/source/samples/snapshot-actions-force-delete-request.json0000664000175000017500000000003513656750227027567 0ustar zuulzuul00000000000000{ "force_delete": null } manila-10.0.0/api-ref/source/samples/share-manage-response.json0000664000175000017500000000265213656750227024443 0ustar zuulzuul00000000000000{ "share": { "links": [ { "href": "http://172.18.198.54:8786/v2/16e1ab15c35a457e9c2b2aa189f544e1/shares/00137b40-ca06-4ae8-83a3-2c5989eebcce", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/shares/00137b40-ca06-4ae8-83a3-2c5989eebcce", "rel": "bookmark" } ], "availability_zone": null, "share_network_id": null, "export_locations": [], "share_server_id": "00137b40-ca06-4ae8-83a3-2c5989eebcce", "share_group_id": null, "snapshot_id": null, "id": "00137b40-ca06-4ae8-83a3-2c5989eebcce", "size": null, "share_type": "14747856-08e5-494f-ab40-a64b9d20d8f7", "share_type_name": "d", "export_location": "10.254.0.5:/shares/share-42033c24-0261-424f-abda-4fef2f6dbfd5", "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "metadata": {}, "status": "manage_starting", "description": "Lets manage share.", "host": "manila2@unmanage1#UNMANAGE1", "access_rules_status": "active", "has_replicas": false, "replication_type": null, "is_public": false, "snapshot_support": true, "name": "share_texas1", "created_at": "2019-03-05T10:00:00.000000", "share_proto": "NFS", "volume_type": "d" } } manila-10.0.0/api-ref/source/samples/share-actions-revert-to-snapshot-request.json0000664000175000017500000000013013656750227030254 0ustar zuulzuul00000000000000{ "revert": { "snapshot_id": "6020af24-a305-4155-9a29-55e20efcb0e8" } } manila-10.0.0/api-ref/source/samples/share-update-null-metadata-request.json0000664000175000017500000000003113656750227027042 0ustar zuulzuul00000000000000{ "metadata": null } manila-10.0.0/api-ref/source/samples/share-group-type-create-request.json0000664000175000017500000000035513656750227026417 0ustar zuulzuul00000000000000{ "share_group_type": { "is_public": true, "group_specs": { "snapshot_support": true }, "share_types": ["ecd11f4c-d811-4471-b656-c755c77e02ba"], "name": "my_new_group_type" } } manila-10.0.0/api-ref/source/samples/share-replica-export-location-show-response.json0000664000175000017500000000073113656750227030731 0ustar zuulzuul00000000000000{ "export_location": { "created_at": "2016-03-24T14:20:47.000000", "updated_at": "2016-03-24T14:20:47.000000", "preferred": false, "is_admin_only": true, "share_instance_id": "e1c2d35e-fe67-4028-ad7a-45f668732b1d", "path": "10.0.0.3:/shares/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d", "id": "6921e862-88bc-49a5-a2df-efeed9acd583", "replica_state": "in_sync", 
"availability_zone": "paris" } } manila-10.0.0/api-ref/source/samples/share-network-show-response.json0000664000175000017500000000110113656750227025646 0ustar zuulzuul00000000000000{ "share_network": { "name": "net_my1", "segmentation_id": null, "created_at": "2015-09-04T14:56:45.000000", "neutron_subnet_id": "53482b62-2c84-4a53-b6ab-30d9d9800d06", "updated_at": null, "id": "7f950b52-6141-4a08-bbb5-bb7ffa3ea5fd", "neutron_net_id": "998b42ee-2cee-4d36-8b95-67b5ca1f2109", "ip_version": null, "cidr": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "network_type": null, "description": "descr", "gateway": null, "mtu": null } } manila-10.0.0/api-ref/source/samples/snapshot-manage-request.json0000664000175000017500000000053013656750227025023 0ustar zuulzuul00000000000000{ "snapshot": { "share_id": "dd6c5d35-9db1-4662-a7ae-8b52f880aeba", "provider_location": "4045fee5-4e0e-408e-97f3-15e25239dbc9", "name": "managed_snapshot", "description": "description_of_managed_snapshot", "driver_options": { "opt1": "opt1", "opt2": "opt2" } } } manila-10.0.0/api-ref/source/samples/share-types-extra-specs-list-response.json0000664000175000017500000000045213656750227027560 0ustar zuulzuul00000000000000{ "extra_specs": { "replication_type": "readable", "driver_handles_share_servers": "True", "create_share_from_snapshot_support": "True", "revert_to_snapshot_support": "False", "mount_snapshot_support": "False", "snapshot_support": "True" } } manila-10.0.0/api-ref/source/samples/pools-list-response.json0000664000175000017500000000106313656750227024213 0ustar zuulzuul00000000000000{ "pools": [ { "name": "opencloud@alpha#ALPHA_pool", "host": "opencloud", "backend": "alpha", "pool": "ALPHA_pool" }, { "name": "opencloud@beta#BETA_pool", "host": "opencloud", "backend": "beta", "pool": "BETA_pool" }, { "name": "opencloud@gamma#GAMMA_pool", "host": "opencloud", "backend": "gamma", "pool": "GAMMA_pool" }, { "name": "opencloud@delta#DELTA_pool", "host": "opencloud", "backend": "delta", "pool": "DELTA_pool" } ] }manila-10.0.0/api-ref/source/samples/snapshot-instance-show-response.json0000664000175000017500000000111313656750227026521 0ustar zuulzuul00000000000000{ "snapshot_instance": { "status": "available", "share_id": "618599ab-09a1-432d-973a-c102564c7fec", "share_instance_id": "8edff0cb-e5ce-4bab-aa99-afe02ed6a76a", "snapshot_id": "d447de19-a6d3-40b3-ae9f-895c86798924", "progress": "100%", "created_at": "2017-08-04T00:44:52.000000", "id": "275516e8-c998-4e78-a41e-7dd3a03e71cd", "provider_location": "/path/to/fake/snapshot/snapshot_d447de19_a6d3_40b3_ae9f_895c86798924_275516e8_c998_4e78_a41e_7dd3a03e71cd", "updated_at": "2017-08-04T00:44:54.000000" } } manila-10.0.0/api-ref/source/samples/share-replicas-show-response.json0000664000175000017500000000102413656750227025763 0ustar zuulzuul00000000000000{ "share_replica": { "status": "available", "share_id": "5043dffd-f033-4248-a315-319ca2bd70c8", "availability_zone": "nova", "cast_rules_to_readonly": false, "updated_at": "2017-08-15T20:20:50.000000", "share_network_id": null, "share_server_id": null, "host": "ubuntu@generic3#fake_pool_for_DummyDriver", "id": "57f5c47a-0216-4ee0-a517-0460d63301a6", "replica_state": "active", "created_at": "2017-08-15T20:20:45.000000" } }manila-10.0.0/api-ref/source/samples/share-group-snapshot-update-response.json0000664000175000017500000000146113656750227027461 0ustar zuulzuul00000000000000{ "share_group_snapshot": { "status": "creating", "share_group_id": "cd7a3d06-23b3-4d05-b4ca-7c9a20faa95f", "links": [ { "href": 
"http://192.168.98.191:8786/v2/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/46bf5875-58d6-4816-948f-8828423b0b9f", "rel": "self" }, { "href": "http://192.168.98.191:8786/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/46bf5875-58d6-4816-948f-8828423b0b9f", "rel": "bookmark" } ], "name": null, "members": [], "created_at": "2017-08-10T03:01:39.442509", "project_id": "e23850eeb91d4fa3866af634223e454c", "id": "46bf5875-58d6-4816-948f-8828423b0b9f", "description": null } } manila-10.0.0/api-ref/source/samples/share-network-update-response.json0000664000175000017500000000115213656750227026156 0ustar zuulzuul00000000000000{ "share_network": { "name": "net_my", "segmentation_id": null, "created_at": "2015-09-04T14:54:25.000000", "neutron_subnet_id": "53482b62-2c84-4a53-b6ab-30d9d9800d06", "updated_at": "2015-09-07T08:02:53.512184", "id": "713df749-aac0-4a54-af52-10f6c991e80c", "neutron_net_id": "998b42ee-2cee-4d36-8b95-67b5ca1f2109", "ip_version": "4", "cidr": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "network_type": null, "description": "i'm adding a description", "gateway": null, "mtu": null } } manila-10.0.0/api-ref/source/samples/share-replicas-force-delete-request.json0000664000175000017500000000003513656750227027174 0ustar zuulzuul00000000000000{ "force_delete": null } manila-10.0.0/api-ref/source/samples/share-replicas-list-response.json0000664000175000017500000000072213656750227025762 0ustar zuulzuul00000000000000{ "share_replicas": [ { "status": "available", "share_id": "5043dffd-f033-4248-a315-319ca2bd70c8", "id": "57f5c47a-0216-4ee0-a517-0460d63301a6", "replica_state": "active" }, { "status": "available", "share_id": "5043dffd-f033-4248-a315-319ca2bd70c8", "id": "c9f52e33-d780-41d8-89ba-fc06869f465f", "replica_state": "in_sync" } ] } manila-10.0.0/api-ref/source/samples/security-services-list-for-share-network-response.json0000664000175000017500000000252613656750227032127 0ustar zuulzuul00000000000000{ "security_services": [ { "status": "new", "domain": null, "ou": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "name": "SecServ1", "created_at": "2015-09-07T12:19:10.000000", "description": "Creating my first Security Service", "updated_at": null, "server": null, "dns_ip": "10.0.0.0/24", "user": "demo", "password": "supersecret", "type": "kerberos", "id": "3c829734-0679-4c17-9637-801da48c0d5f", "share_networks": [ "d8ae6799-2567-4a89-aafb-fa4424350d2b" ] }, { "status": "new", "domain": null, "ou": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "name": "SecServ2", "created_at": "2015-09-07T12:25:03.000000", "description": "Creating my second Security Service", "updated_at": null, "server": null, "dns_ip": "10.0.0.0/24", "user": null, "password": null, "type": "ldap", "id": "5a1d3a12-34a7-4087-8983-50e9ed03509a", "share_networks": [ "d8ae6799-2567-4a89-aafb-fa4424350d2b" ] } ] } manila-10.0.0/api-ref/source/samples/share-networks-list-detailed-response.json0000664000175000017500000000343413656750227027610 0ustar zuulzuul00000000000000{ "share_networks": [ { "name": "net_my1", "segmentation_id": null, "created_at": "2015-09-04T14:57:13.000000", "neutron_subnet_id": "53482b62-2c84-4a53-b6ab-30d9d9800d06", "updated_at": null, "id": "32763294-e3d4-456a-998d-60047677c2fb", "neutron_net_id": "998b42ee-2cee-4d36-8b95-67b5ca1f2109", "ip_version": null, "cidr": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "network_type": null, "description": "descr", "gateway": null, "mtu": null }, { "name": "net_my", "segmentation_id": null, "created_at": 
"2015-09-04T14:54:25.000000", "neutron_subnet_id": "53482b62-2c84-4a53-b6ab-30d9d9800d06", "updated_at": null, "id": "713df749-aac0-4a54-af52-10f6c991e80c", "neutron_net_id": "998b42ee-2cee-4d36-8b95-67b5ca1f2109", "ip_version": null, "cidr": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "network_type": null, "description": "desecr", "gateway": null, "mtu": null }, { "name": null, "segmentation_id": null, "created_at": "2015-09-04T14:51:41.000000", "neutron_subnet_id": null, "updated_at": null, "id": "fa158a3d-6d9f-4187-9ca5-abbb82646eb2", "neutron_net_id": null, "ip_version": null, "cidr": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "network_type": null, "description": null, "gateway": null, "mtu": null } ] } manila-10.0.0/api-ref/source/samples/shares-list-response.json0000664000175000017500000000222013656750227024340 0ustar zuulzuul00000000000000{ "shares": [ { "id": "d94a8548-2079-4be0-b21c-0a887acd31ca", "links": [ { "href": "http://172.18.198.54:8786/v1/16e1ab15c35a457e9c2b2aa189f544e1/shares/d94a8548-2079-4be0-b21c-0a887acd31ca", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/shares/d94a8548-2079-4be0-b21c-0a887acd31ca", "rel": "bookmark" } ], "name": "My_share" }, { "id": "406ea93b-32e9-4907-a117-148b3945749f", "links": [ { "href": "http://172.18.198.54:8786/v1/16e1ab15c35a457e9c2b2aa189f544e1/shares/406ea93b-32e9-4907-a117-148b3945749f", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/shares/406ea93b-32e9-4907-a117-148b3945749f", "rel": "bookmark" } ], "name": "Share1" } ], "count": 10 } manila-10.0.0/api-ref/source/samples/share-group-type-set-request.json0000664000175000017500000000011013656750227025734 0ustar zuulzuul00000000000000{ "group_specs": { "my_group_key": "my_group_value" } } manila-10.0.0/api-ref/source/samples/share-group-type-revoke-access-request.json0000664000175000017500000000013513656750227027702 0ustar zuulzuul00000000000000{ "removeProjectAccess": { "project": "818a3f48dcd644909b3fa2e45a399a27" } } manila-10.0.0/api-ref/source/samples/quota-update-request.json0000664000175000017500000000016513656750227024353 0ustar zuulzuul00000000000000{ "quota_set": { "snapshot_gigabytes": 999, "snapshots": 49, "share_networks": 9 } } manila-10.0.0/api-ref/source/samples/share-actions-extend-request.json0000664000175000017500000000006013656750227025761 0ustar zuulzuul00000000000000{ "extend": { "new_size": 2 } } manila-10.0.0/api-ref/source/samples/shares-list-detailed-response.json0000664000175000017500000000652513656750227026125 0ustar zuulzuul00000000000000{ "shares": [ { "links": [ { "href": "http://172.18.198.54:8786/v2/16e1ab15c35a457e9c2b2aa189f544e1/shares/f45cc5b2-d1bb-4a3e-ba5b-5c4125613adc", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/shares/f45cc5b2-d1bb-4a3e-ba5b-5c4125613adc", "rel": "bookmark" } ], "availability_zone": "nova", "share_network_id": "f9b2e754-ac01-4466-86e1-5c569424754e", "export_locations": [], "share_server_id": "87d8943a-f5da-47a4-b2f2-ddfa6794aa82", "share_group_id": null, "snapshot_id": null, "id": "f45cc5b2-d1bb-4a3e-ba5b-5c4125613adc", "size": 1, "share_type": "25747776-08e5-494f-ab40-a64b9d20d8f7", "share_type_name": "default", "export_location": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "metadata": {}, "status": "error", "progress": null, "access_rules_status": "active", "description": "There is a share description.", "host": "manila2@generic1#GENERIC1", "task_state": null, 
"is_public": true, "snapshot_support": true, "name": "my_share4", "has_replicas": false, "replication_type": null, "created_at": "2015-09-16T18:19:50.000000", "share_proto": "NFS", "volume_type": "default" }, { "links": [ { "href": "http://172.18.198.54:8786/v2/16e1ab15c35a457e9c2b2aa189f544e1/shares/c4a2ced4-2c9f-4ae1-adaa-6171833e64df", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/shares/c4a2ced4-2c9f-4ae1-adaa-6171833e64df", "rel": "bookmark" } ], "availability_zone": "nova", "share_network_id": "f9b2e754-ac01-4466-86e1-5c569424754e", "export_locations": [ "10.254.0.5:/shares/share-50ad5e7b-f6f1-4b78-a651-0812cef2bb67" ], "share_server_id": "87d8943a-f5da-47a4-b2f2-ddfa6794aa82", "snapshot_id": null, "id": "c4a2ced4-2c9f-4ae1-adaa-6171833e64df", "size": 1, "share_type": "25747776-08e5-494f-ab40-a64b9d20d8f7", "share_type_name": "default", "export_location": "10.254.0.5:/shares/share-50ad5e7b-f6f1-4b78-a651-0812cef2bb67", "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "metadata": {}, "status": "available", "progress": "100%", "access_rules_status": "active", "description": "Changed description.", "host": "manila2@generic1#GENERIC1", "task_state": null, "is_public": true, "snapshot_support": true, "name": "my_share4", "has_replicas": false, "replication_type": null, "created_at": "2015-09-16T17:26:28.000000", "share_proto": "NFS", "volume_type": "default" } ], "count": 10 } manila-10.0.0/api-ref/source/samples/share-replica-create-response.json0000664000175000017500000000070113656750227026064 0ustar zuulzuul00000000000000{ "share_replica": { "status": "creating", "share_id": "5043dffd-f033-4248-a315-319ca2bd70c8", "availability_zone": null, "cast_rules_to_readonly": true, "updated_at": null, "share_network_id": null, "share_server_id": null, "host": "", "id": "c9f52e33-d780-41d8-89ba-fc06869f465f", "replica_state": null, "created_at": "2017-08-15T20:21:43.493731" } } manila-10.0.0/api-ref/source/samples/security-service-create-response.json0000664000175000017500000000102113656750227026646 0ustar zuulzuul00000000000000{ "security_service": { "status": "new", "domain": null, "ou": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "name": "SecServ1", "created_at": "2015-09-07T12:19:10.695211", "updated_at": null, "server": null, "dns_ip": "10.0.0.0/24", "user": "demo", "password": "supersecret", "type": "kerberos", "id": "3c829734-0679-4c17-9637-801da48c0d5f", "description": "Creating my first Security Service" } } manila-10.0.0/api-ref/source/samples/share-server-manage-response.json0000664000175000017500000000103413656750227025740 0ustar zuulzuul00000000000000{ "share_server": { "id": "dd218d97-6b16-45b7-9b23-19681ccdec3a", "project_id": "5b23075b4b504261a5987b18588f86cf", "updated_at": null, "status": "manage_starting", "host": "myhost@mybackend", "share_network_name": "share-net-name", "share_network_id": "78cef6eb-648a-4bbd-9ae1-d2eaaf594cc0", "created_at": "2019-03-06T11:59:41.000000", "backend_details": {}, "is_auto_deletable": false, "identifier": "4ef3507e-0513-4140-beda-f619ab30d424" } }manila-10.0.0/api-ref/source/samples/snapshot-actions-unmanage-request.json0000664000175000017500000000003113656750227027020 0ustar zuulzuul00000000000000{ "unmanage": null } manila-10.0.0/api-ref/source/samples/share-actions-grant-access-response.json0000664000175000017500000000065713656750227027226 0ustar zuulzuul00000000000000{ "access": { "share_id": "406ea93b-32e9-4907-a117-148b3945749f", "created_at": "2015-09-07T09:14:48.000000", 
"updated_at": null, "access_type": "ip", "access_to": "0.0.0.0/0", "access_level": "rw", "access_key": null, "id": "a25b2df3-90bd-4add-afa6-5f0dbbd50452", "metadata":{ "key1": "value1", "key2": "value2" } } } manila-10.0.0/api-ref/source/samples/share-access-rules-update-metadata-response.json0000664000175000017500000000021313656750227030631 0ustar zuulzuul00000000000000{ "metadata": { "aim": "changed_doc", "speed": "my_fast_access", "new_metadata_key": "new_information" } } manila-10.0.0/api-ref/source/samples/share-group-snapshots-list-members-response.json0000664000175000017500000000125513656750227030766 0ustar zuulzuul00000000000000{ "share_group_snapshot_members": [ { "status": "available", "share_id": "406ea93b-32e9-4907-a117-148b3945749f", "created_at": "2017-09-07T11:50:39.000000", "share_proto": "NFS", "share_size": 1, "id": "6d221c1d-0200-461e-8d20-24b4776b9ddb", "size": 1 }, { "status": "available", "share_id": "406ea93b-32e9-4907-a117-148b3945749f", "created_at": "2015-09-07T11:50:39.000000", "share_proto": "NFS", "share_size": 1, "id": "6d221c1d-0200-461e-8d20-24b4776b9ddb", "size": 1 } ] } manila-10.0.0/api-ref/source/samples/share-actions-revoke-access-request.json0000664000175000017500000000013313656750227027225 0ustar zuulzuul00000000000000{ "deny_access": { "access_id": "a25b2df3-90bd-4add-afa6-5f0dbbd50452" } } manila-10.0.0/api-ref/source/samples/share-group-types-list-access-response.json0000664000175000017500000000054413656750227027717 0ustar zuulzuul00000000000000{ "share_group_type_access": [ { "share_group_type_id": "1732f284-401d-41d9-a494-425451e8b4b8", "project_id": "818a3f48dcd644909b3fa2e45a399a27" }, { "share_group_type_id": "1732f284-401d-41d9-a494-425451e8b4b8", "project_id": "e1284adea3ee4d2482af5ed214f3ad90" } ] } manila-10.0.0/api-ref/source/samples/extensions-list-response.json0000664000175000017500000000645513656750227025270 0ustar zuulzuul00000000000000{ "extensions": [ { "alias": "os-extended-quotas", "updated": "2013-06-09T00:00:00+00:00", "name": "ExtendedQuotas", "links": [], "description": "Extend quotas. Adds ability for admins to delete quota and optionally force the update Quota command." }, { "alias": "os-quota-sets", "updated": "2011-08-08T00:00:00+00:00", "name": "Quotas", "links": [], "description": "Quotas management support." }, { "alias": "os-quota-class-sets", "updated": "2012-03-12T00:00:00+00:00", "name": "QuotaClasses", "links": [], "description": "Quota classes management support." }, { "alias": "os-share-unmanage", "updated": "2015-02-17T00:00:00+00:00", "name": "ShareUnmanage", "links": [], "description": "Enable share unmanage operation." }, { "alias": "os-types-manage", "updated": "2011-08-24T00:00:00+00:00", "name": "TypesManage", "links": [], "description": "Types manage support." }, { "alias": "share-actions", "updated": "2012-08-14T00:00:00+00:00", "name": "ShareActions", "links": [], "description": "Enable share actions." }, { "alias": "os-availability-zone", "updated": "2015-07-28T00:00:00+00:00", "name": "AvailabilityZones", "links": [], "description": "Describe Availability Zones." }, { "alias": "os-user-quotas", "updated": "2013-07-18T00:00:00+00:00", "name": "UserQuotas", "links": [], "description": "Project user quota support." }, { "alias": "os-share-type-access", "updated": "2015-03-02T00:00:00Z", "name": "ShareTypeAccess", "links": [], "description": "share type access support." 
}, { "alias": "os-types-extra-specs", "updated": "2011-08-24T00:00:00+00:00", "name": "TypesExtraSpecs", "links": [], "description": "Type extra specs support." }, { "alias": "os-admin-actions", "updated": "2015-08-03T00:00:00+00:00", "name": "AdminActions", "links": [], "description": "Enable admin actions." }, { "alias": "os-used-limits", "updated": "2014-03-27T00:00:00+00:00", "name": "UsedLimits", "links": [], "description": "Provide data on limited resources that are being used." }, { "alias": "os-services", "updated": "2012-10-28T00:00:00-00:00", "name": "Services", "links": [], "description": "Services support." }, { "alias": "os-share-manage", "updated": "2015-02-17T00:00:00+00:00", "name": "ShareManage", "links": [], "description": "Allows existing share to be 'managed' by Manila." } ] } manila-10.0.0/api-ref/source/samples/share-group-snapshot-update-request.json0000664000175000017500000000016313656750227027311 0ustar zuulzuul00000000000000{ "share_group_snapshot": { "name": "update name", "description": "update description" } } manila-10.0.0/api-ref/source/samples/share-instance-actions-reset-state-request.json0000664000175000017500000000007613656750227030543 0ustar zuulzuul00000000000000{ "reset_status": { "status": "available" } } manila-10.0.0/api-ref/source/samples/share-group-type-create-response.json0000664000175000017500000000042013656750227026556 0ustar zuulzuul00000000000000{ "share_group_type": { "is_public": true, "group_specs": {}, "share_types": ["ecd11f4c-d811-4471-b656-c755c77e02ba"], "id": "89861c2a-10bf-4013-bdd4-3d020466aee4", "name": "test_group_type", "is_default": false } } manila-10.0.0/api-ref/source/samples/share-server-manage-request.json0000664000175000017500000000041413656750227025573 0ustar zuulzuul00000000000000{ "share_server": { "host": "myhost@mybackend", "share_network_id": "78cef6eb-648a-4bbd-9ae1-d2eaaf594cc0", "identifier": "4ef3507e-0513-4140-beda-f619ab30d424", "driver_options": { "opt1": "opt1_value" } } }manila-10.0.0/api-ref/source/samples/security-services-list-response.json0000664000175000017500000000056613656750227026556 0ustar zuulzuul00000000000000{ "security_services": [ { "status": "new", "type": "kerberos", "id": "3c829734-0679-4c17-9637-801da48c0d5f", "name": "SecServ1" }, { "status": "new", "type": "ldap", "id": "5a1d3a12-34a7-4087-8983-50e9ed03509a", "name": "SecServ2" } ] } manila-10.0.0/api-ref/source/samples/quota-show-response.json0000664000175000017500000000043313656750227024215 0ustar zuulzuul00000000000000{ "quota_set": { "gigabytes": 1000, "shares": 50, "snapshot_gigabytes": 1000, "snapshots": 50, "id": "16e1ab15c35a457e9c2b2aa189f544e1", "share_networks": 10, "share_groups": 10, "share_group_snapshots": 10 } } manila-10.0.0/api-ref/source/samples/share-update-request.json0000664000175000017500000000016513656750227024324 0ustar zuulzuul00000000000000{ "share": { "is_public": true, "display_description": "Changing the share description." 
} } manila-10.0.0/api-ref/source/samples/share-networks-list-response.json0000664000175000017500000000054213656750227026034 0ustar zuulzuul00000000000000{ "share_networks": [ { "id": "32763294-e3d4-456a-998d-60047677c2fb", "name": "net_my1" }, { "id": "713df749-aac0-4a54-af52-10f6c991e80c", "name": "net_my" }, { "id": "fa158a3d-6d9f-4187-9ca5-abbb82646eb2", "name": null } ] } manila-10.0.0/api-ref/source/samples/share-replica-create-request.json0000664000175000017500000000030513656750227025716 0ustar zuulzuul00000000000000{ "share_replica": { "share_id": "50a6a566-6bac-475c-ad69-5035c86696c0", "availability_zone": "nova", "share_network_id": "f5a55875-e33a-4888-be52-7cd75b72294b" } } manila-10.0.0/api-ref/source/samples/service-enable-request.json0000664000175000017500000000011513656750227024621 0ustar zuulzuul00000000000000{ "binary": "manila-share", "host": "openstackhost@generic#pool_0" } manila-10.0.0/api-ref/source/samples/share-groups-list-response.json0000664000175000017500000000225513656750227025502 0ustar zuulzuul00000000000000{ "share_groups": [ { "id": "b94a8548-2079-4be0-b21c-0a887acd31ca", "links": [ { "href": "http://172.18.198.54:8786/v2/16e1ab15c35a457e9c2b2aa189f544e1/share_groups/b94a8548-2079-4be0-b21c-0a887acd31ca", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/share_groups/b94a8548-2079-4be0-b21c-0a887acd31ca", "rel": "bookmark" } ], "name": "My_share_group" }, { "id": "306ea93c-32e9-4907-a117-148b3945749f", "links": [ { "href": "http://172.18.198.54:8786/v2/16e1ab15c35a457e9c2b2aa189f544e1/share_groups/306ea93c-32e9-4907-a117-148b3945749f", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/share_groups/306ea93c-32e9-4907-a117-148b3945749f", "rel": "bookmark" } ], "name": "Test_Share_group" } ] } manila-10.0.0/api-ref/source/samples/quota-classes-update-response.json0000664000175000017500000000035513656750227026155 0ustar zuulzuul00000000000000{ "quota_class_set": { "share_groups": 50, "gigabytes": 20, "share_group_snapshots": 50, "snapshots": 50, "snapshot_gigabytes": 1000, "shares": 50, "share_networks": 10 } } manila-10.0.0/api-ref/source/samples/share-access-rules-show-response.json0000664000175000017500000000074613656750227026564 0ustar zuulzuul00000000000000{ "access": { "access_level": "rw", "state": "error", "id": "507bf114-36f2-4f56-8cf4-857985ca87c1", "share_id": "fb213952-2352-41b4-ad7b-2c4c69d13eef", "access_type": "cert", "access_to": "example.com", "access_key": null, "created_at": "2018-07-17T02:01:04.000000", "updated_at": "2018-07-17T02:01:04.000000", "metadata": { "key1": "value1", "key2": "value2" } } } manila-10.0.0/api-ref/source/samples/share-group-create-response.json0000664000175000017500000000204113656750227025600 0ustar zuulzuul00000000000000{ "share_groups": { "status": "creating", "description": null, "links": [ { "href": "http://192.168.98.191:8786/v2/e23850eeb91d4fa3866af634223e454c/share_groups/f9c1f80c-2392-4e34-bd90-fc89cdc5bf93", "rel": "self" }, { "href": "http://192.168.98.191:8786/e23850eeb91d4fa3866af634223e454c/share_groups/f9c1f80c-2392-4e34-bd90-fc89cdc5bf93", "rel": "bookmark" } ], "availability_zone": null, "source_share_group_snapshot_id": null, "share_network_id": null, "share_server_id": null, "host": null, "share_group_type_id": "89861c2a-10bf-4013-bdd4-3d020466aee4", "consistent_snapshot_support": null, "id": "f9c1f80c-2392-4e34-bd90-fc89cdc5bf93", "name": null, "created_at": "2017-08-03T19:20:33.974421", "project_id": 
"e23850eeb91d4fa3866af634223e454c", "share_types": ["ecd11f4c-d811-4471-b656-c755c77e02ba"] } } manila-10.0.0/api-ref/source/samples/security-service-update-response.json0000664000175000017500000000105113656750227026670 0ustar zuulzuul00000000000000{ "security_service": { "status": "new", "domain": "my_domain", "ou": "CN=Computers", "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "name": "SecServ1", "created_at": "2015-09-07T12:19:10.000000", "updated_at": "2015-09-07T12:47:21.858737", "server": null, "dns_ip": "10.0.0.0/24", "user": "new_user", "password": "pass", "type": "kerberos", "id": "3c829734-0679-4c17-9637-801da48c0d5f", "description": "Adding a description" } } manila-10.0.0/api-ref/source/samples/security-service-show-response.json0000664000175000017500000000102113656750227026363 0ustar zuulzuul00000000000000{ "security_service": { "status": "new", "domain": null, "ou": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "name": "SecServ1", "created_at": "2015-09-07T12:19:10.000000", "updated_at": null, "server": null, "dns_ip": "10.0.0.0/24", "user": "demo", "password": "supersecret", "type": "kerberos", "id": "3c829734-0679-4c17-9637-801da48c0d5f", "description": "Creating my first Security Service" } } manila-10.0.0/api-ref/source/samples/user-message-show-response.json0000664000175000017500000000206313656750227025465 0ustar zuulzuul00000000000000{ "message": { "links": [ { "href": "http://192.168.122.180:8786/v2/2e3de76b49b444fd9dc7ca9f7048ce6b/messages/4b319d29-d5b7-4b6e-8e7c-8d6e53f3c3d5", "rel": "self" }, { "href": "http://192.168.122.180:8786/2e3de76b49b444fd9dc7ca9f7048ce6b/messages/4b319d29-d5b7-4b6e-8e7c-8d6e53f3c3d5", "rel": "bookmark" } ], "resource_id": "351cc796-2d79-4a08-b878-a8ed933b6b68", "message_level": "ERROR", "user_message": "allocate host: No storage could be allocated for this share request. 
Trying again with a different size or share type may succeed.", "expires_at": "2017-07-10T10:27:43.000000", "id": "4b319d29-d5b7-4b6e-8e7c-8d6e53f3c3d5", "created_at": "2017-07-10T10:26:43.000000", "detail_id": "002", "request_id": "req-24e7ccb6-a7d5-4ddd-a8e4-d8f72a4509c8", "project_id": "2e3de76b49b444fd9dc7ca9f7048ce6b", "resource_type": "SHARE", "action_id": "001" } } manila-10.0.0/api-ref/source/samples/pools-list-detailed-response.json0000664000175000017500000000752113656750227025771 0ustar zuulzuul00000000000000{ "pools": [ { "name": "opencloud@alpha#ALPHA_pool", "host": "opencloud", "backend": "alpha", "pool": "ALPHA_pool", "capabilities": { "pool_name": "ALPHA_pool", "total_capacity_gb": 1230.0, "free_capacity_gb": 1210.0, "reserved_percentage": 0, "share_backend_name": "ALPHA", "storage_protocol": "NFS_CIFS", "vendor_name": "Open Source", "driver_version": "1.0", "timestamp": "2019-05-07T00:28:02.935569", "driver_handles_share_servers": true, "snapshot_support": true, "create_share_from_snapshot_support": true, "revert_to_snapshot_support": true, "mount_snapshot_support": true, "dedupe": false, "compression": false, "replication_type": null, "replication_domain": null, "sg_consistent_snapshot_support": "pool", "ipv4_support": true, "ipv6_support": false } }, { "name": "opencloud@beta#BETA_pool", "host": "opencloud", "backend": "beta", "pool": "BETA_pool", "capabilities": { "pool_name": "BETA_pool", "total_capacity_gb": 1230.0, "free_capacity_gb": 1210.0, "reserved_percentage": 0, "share_backend_name": "BETA", "storage_protocol": "NFS_CIFS", "vendor_name": "Open Source", "driver_version": "1.0", "timestamp": "2019-05-07T00:28:02.817309", "driver_handles_share_servers": true, "snapshot_support": true, "create_share_from_snapshot_support": true, "revert_to_snapshot_support": true, "mount_snapshot_support": true, "dedupe": false, "compression": false, "replication_type": null, "replication_domain": null, "sg_consistent_snapshot_support": "pool", "ipv4_support": true, "ipv6_support": false } }, { "name": "opencloud@gamma#GAMMA_pool", "host": "opencloud", "backend": "gamma", "pool": "GAMMA_pool", "capabilities": { "pool_name": "GAMMA_pool", "total_capacity_gb": 1230.0, "free_capacity_gb": 1210.0, "reserved_percentage": 0, "replication_type": "readable", "share_backend_name": "GAMMA", "storage_protocol": "NFS_CIFS", "vendor_name": "Open Source", "driver_version": "1.0", "timestamp": "2019-05-07T00:28:02.899888", "driver_handles_share_servers": false, "snapshot_support": true, "create_share_from_snapshot_support": true, "revert_to_snapshot_support": true, "mount_snapshot_support": true, "dedupe": false, "compression": false, "replication_domain": "replica_domain_store1", "sg_consistent_snapshot_support": "pool", "ipv4_support": true, "ipv6_support": false } }, { "name": "opencloud@delta#DELTA_pool", "host": "opencloud", "backend": "delta", "pool": "DELTA_pool", "capabilities": { "pool_name": "DELTA_pool", "total_capacity_gb": 1230.0, "free_capacity_gb": 1210.0, "reserved_percentage": 0, "replication_type": "readable", "share_backend_name": "DELTA", "storage_protocol": "NFS_CIFS", "vendor_name": "Open Source", "driver_version": "1.0", "timestamp": "2019-05-07T00:28:02.963660", "driver_handles_share_servers": false, "snapshot_support": true, "create_share_from_snapshot_support": true, "revert_to_snapshot_support": true, "mount_snapshot_support": true, "dedupe": false, "compression": false, "replication_domain": "replica_domain_store1", "sg_consistent_snapshot_support": "pool", 
"ipv4_support": true, "ipv6_support": false } } ] }manila-10.0.0/api-ref/source/samples/share-types-default-list-response.json0000664000175000017500000000147413656750227026753 0ustar zuulzuul00000000000000{ "share_type": { "required_extra_specs": { "driver_handles_share_servers": "True" }, "share_type_access:is_public": true, "extra_specs": { "driver_handles_share_servers": "True" }, "id": "420e6a31-3f3d-4ed7-9d11-59450372182a", "name": "default", "is_default": true, "description": "share type description" }, "volume_type": { "required_extra_specs": { "driver_handles_share_servers": "True" }, "share_type_access:is_public": true, "extra_specs": { "driver_handles_share_servers": "True" }, "id": "420e6a31-3f3d-4ed7-9d11-59450372182a", "name": "default", "is_default": true, "description": "share type description" } } manila-10.0.0/api-ref/source/samples/share-server-reset-state-request.json0000664000175000017500000000007213656750227026603 0ustar zuulzuul00000000000000{ "reset_status": { "status": "active" } }manila-10.0.0/api-ref/source/samples/security-service-update-request.json0000664000175000017500000000030613656750227026524 0ustar zuulzuul00000000000000{ "security_service": { "domain": "my_domain", "ou": "CN=Computers", "password": "***", "user": "new_user", "description": "Adding a description" } } manila-10.0.0/api-ref/source/samples/share-actions-list-access-rules-request.json0000664000175000017500000000003413656750227030035 0ustar zuulzuul00000000000000{ "access_list": null } manila-10.0.0/api-ref/source/samples/share-update-metadata-request.json0000664000175000017500000000020513656750227026075 0ustar zuulzuul00000000000000{ "metadata": { "aim": "changed_doc", "project": "my_app", "new_metadata_key": "new_information" } } manila-10.0.0/api-ref/source/samples/availability-zones-list-response.json0000664000175000017500000000034713656750227026671 0ustar zuulzuul00000000000000{ "availability_zones": [ { "name": "nova", "created_at": "2015-09-18T09:50:55.000000", "updated_at": null, "id": "388c983d-258e-4a0e-b1ba-10da37d766db" } ] } manila-10.0.0/api-ref/source/samples/share-group-snapshots-list-response.json0000664000175000017500000000116313656750227027334 0ustar zuulzuul00000000000000{ "share_group_snapshot": [ { "links": [ { "href": "http://192.168.98.191:8786/v2/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/46bf5875-58d6-4816-948f-8828423b0b9f", "rel": "self" }, { "href": "http://192.168.98.191:8786/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/46bf5875-58d6-4816-948f-8828423b0b9f", "rel": "bookmark" } ], "name": null, "id": "46bf5875-58d6-4816-948f-8828423b0b9f" } ] } manila-10.0.0/api-ref/source/samples/share-set-metadata-request.json0000664000175000017500000000006513656750227025412 0ustar zuulzuul00000000000000{ "metadata": { "key1": "value1" } } manila-10.0.0/api-ref/source/samples/share-type-create-response.json0000664000175000017500000000243213656750227025431 0ustar zuulzuul00000000000000{ "share_type": { "required_extra_specs": { "driver_handles_share_servers": true }, "share_type_access:is_public": true, "extra_specs": { "replication_type": "readable", "driver_handles_share_servers": "True", "mount_snapshot_support": "False", "revert_to_snapshot_support": "False", "create_share_from_snapshot_support": "True", "snapshot_support": "True" }, "id": "7fa1342b-de9d-4d89-bdc8-af67795c0e52", "name": "testing", "is_default": false, "description": "share type description" }, "volume_type": { "required_extra_specs": { "driver_handles_share_servers": true }, 
"share_type_access:is_public": true, "extra_specs": { "replication_type": "readable", "driver_handles_share_servers": "True", "mount_snapshot_support": "False", "revert_to_snapshot_support": "False", "create_share_from_snapshot_support": "True", "snapshot_support": "True" }, "id": "7fa1342b-de9d-4d89-bdc8-af67795c0e52", "name": "testing", "is_default": false, "description": "share type description" } } manila-10.0.0/api-ref/source/samples/share-type-revoke-access-request.json0000664000175000017500000000013513656750227026550 0ustar zuulzuul00000000000000{ "removeProjectAccess": { "project": "818a3f48dcd644909b3fa2e45a399a27" } } manila-10.0.0/api-ref/source/samples/export-location-show-response.json0000664000175000017500000000061713656750227026217 0ustar zuulzuul00000000000000{ "export_location": { "created_at": "2016-03-24T14:20:47.000000", "updated_at": "2016-03-24T14:20:47.000000", "preferred": false, "is_admin_only": true, "share_instance_id": "e1c2d35e-fe67-4028-ad7a-45f668732b1d", "path": "10.0.0.3:/shares/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d", "id": "6921e862-88bc-49a5-a2df-efeed9acd583" } } manila-10.0.0/api-ref/source/samples/share-show-metadata-item-response.json0000664000175000017500000000006413656750227026700 0ustar zuulzuul00000000000000{ "meta": { "project": "my_app" } } manila-10.0.0/api-ref/source/samples/share-server-unmanage-request.json0000664000175000017500000000005413656750227026136 0ustar zuulzuul00000000000000{ "unmanage": { "force": "false" } }manila-10.0.0/api-ref/source/samples/services-list-with-filters-response.json0000664000175000017500000000044213656750227027321 0ustar zuulzuul00000000000000{ "services": [ { "status": "enabled", "binary": "manila-share", "zone": "nova", "host": "manila2@generic1", "updated_at": "2015-09-07T13:14:27.000000", "state": "up", "id": 1 } ] } manila-10.0.0/api-ref/source/samples/share-network-add-security-service-response.json0000664000175000017500000000076713656750227030742 0ustar zuulzuul00000000000000{ "share_network": { "name": "net2", "segmentation_id": null, "created_at": "2015-09-07T12:31:12.000000", "neutron_subnet_id": null, "updated_at": null, "id": "d8ae6799-2567-4a89-aafb-fa4424350d2b", "neutron_net_id": null, "ip_version": null, "cidr": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "network_type": null, "description": null, "gateway": null, "mtu": null } } manila-10.0.0/api-ref/source/samples/snapshot-create-request.json0000664000175000017500000000032013656750227025033 0ustar zuulzuul00000000000000{ "snapshot": { "share_id": "406ea93b-32e9-4907-a117-148b3945749f", "force": "True", "name": "snapshot_share1", "description": "Here is a snapshot of share Share1" } } manila-10.0.0/api-ref/source/samples/snapshot-update-request.json0000664000175000017500000000025513656750227025061 0ustar zuulzuul00000000000000{ "snapshot": { "display_name": "snapshot_Share1", "display_description": "I am changing a description also. 
Here is a snapshot of share Share1" } } manila-10.0.0/api-ref/source/samples/share-type-update-response.json0000664000175000017500000000243213656750227025450 0ustar zuulzuul00000000000000{ "share_type": { "required_extra_specs": { "driver_handles_share_servers": true }, "share_type_access:is_public": true, "extra_specs": { "replication_type": "readable", "driver_handles_share_servers": "True", "mount_snapshot_support": "False", "revert_to_snapshot_support": "False", "create_share_from_snapshot_support": "True", "snapshot_support": "True" }, "id": "7fa1342b-de9d-4d89-bdc8-af67795c0e52", "name": "testing", "is_default": false, "description": "share type description" }, "volume_type": { "required_extra_specs": { "driver_handles_share_servers": true }, "share_type_access:is_public": true, "extra_specs": { "replication_type": "readable", "driver_handles_share_servers": "True", "mount_snapshot_support": "False", "revert_to_snapshot_support": "False", "create_share_from_snapshot_support": "True", "snapshot_support": "True" }, "id": "7fa1342b-de9d-4d89-bdc8-af67795c0e52", "name": "testing", "is_default": false, "description": "share type description" } } manila-10.0.0/api-ref/source/samples/user-messages-list-response.json0000664000175000017500000000223413656750227025643 0ustar zuulzuul00000000000000{ "messages": [ { "links": [ { "href": "http://192.168.122.180:8786/v2/2e3de76b49b444fd9dc7ca9f7048ce6b/messages/4b319d29-d5b7-4b6e-8e7c-8d6e53f3c3d5", "rel": "self" }, { "href": "http://192.168.122.180:8786/2e3de76b49b444fd9dc7ca9f7048ce6b/messages/4b319d29-d5b7-4b6e-8e7c-8d6e53f3c3d5", "rel": "bookmark" } ], "id": "4b319d29-d5b7-4b6e-8e7c-8d6e53f3c3d5", "resource_id": "351cc796-2d79-4a08-b878-a8ed933b6b68", "message_level": "ERROR", "user_message": "allocate host: No storage could be allocated for this share request. 
Trying again with a different size or share type may succeed.", "expires_at": "2017-07-10T10:27:43.000000", "created_at": "2017-07-10T10:26:43.000000", "detail_id": "002", "request_id": "req-24e7ccb6-a7d5-4ddd-a8e4-d8f72a4509c8", "project_id": "2e3de76b49b444fd9dc7ca9f7048ce6b", "resource_type": "SHARE", "action_id": "001" } ] } manila-10.0.0/api-ref/source/samples/security-services-list-detailed-response.json0000664000175000017500000000231613656750227030322 0ustar zuulzuul00000000000000{ "security_services": [ { "status": "new", "domain": null, "ou": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "name": "SecServ1", "created_at": "2015-09-07T12:19:10.000000", "description": "Creating my first Security Service", "updated_at": null, "server": null, "dns_ip": "10.0.0.0/24", "user": "demo", "password": "supersecret", "type": "kerberos", "id": "3c829734-0679-4c17-9637-801da48c0d5f", "share_networks": [] }, { "status": "new", "domain": null, "ou": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "name": "SecServ2", "created_at": "2015-09-07T12:25:03.000000", "description": "Creating my second Security Service", "updated_at": null, "server": null, "dns_ip": "10.0.0.0/24", "user": null, "password": null, "type": "ldap", "id": "5a1d3a12-34a7-4087-8983-50e9ed03509a", "share_networks": [] } ] } manila-10.0.0/api-ref/source/samples/share-network-create-request.json0000664000175000017500000000037013656750227025772 0ustar zuulzuul00000000000000{ "share_network": { "neutron_net_id": "998b42ee-2cee-4d36-8b95-67b5ca1f2109", "neutron_subnet_id": "53482b62-2c84-4a53-b6ab-30d9d9800d06", "name": "my_network", "description": "This is my share network" } } manila-10.0.0/api-ref/source/samples/versions-get-version-response.json0000664000175000017500000000140113656750227026212 0ustar zuulzuul00000000000000{ "versions": [ { "status": "CURRENT", "updated": "2015-08-27T11:33:21Z", "links": [ { "href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby" }, { "href": "http://172.18.198.54:8786/v2/", "rel": "self" } ], "min_version": "2.0", "version": "2.15", "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.share+json;version=1" } ], "id": "v2.0" } ] } manila-10.0.0/api-ref/source/samples/share-replicas-list-detail-response.json0000664000175000017500000000216513656750227027225 0ustar zuulzuul00000000000000{ "share_replicas": [ { "status": "available", "share_id": "5043dffd-f033-4248-a315-319ca2bd70c8", "availability_zone": "nova", "cast_rules_to_readonly": false, "updated_at": "2017-08-15T20:20:50.000000", "share_network_id": null, "share_server_id": null, "host": "ubuntu@generic3#fake_pool_for_DummyDriver", "id": "57f5c47a-0216-4ee0-a517-0460d63301a6", "replica_state": "active", "created_at": "2017-08-15T20:20:45.000000" }, { "status": "available", "share_id": "5043dffd-f033-4248-a315-319ca2bd70c8", "availability_zone": "nova", "cast_rules_to_readonly": true, "updated_at": "2017-08-15T20:21:49.000000", "share_network_id": null, "share_server_id": null, "host": "ubuntu@generic2#fake_pool_for_DummyDriver", "id": "c9f52e33-d780-41d8-89ba-fc06869f465f", "replica_state": "in_sync", "created_at": "2017-08-15T20:21:43.000000" } ] } manila-10.0.0/api-ref/source/samples/snapshot-update-response.json0000664000175000017500000000170113656750227025224 0ustar zuulzuul00000000000000{ "snapshot": { "status": "available", "share_id": "406ea93b-32e9-4907-a117-148b3945749f", "name": "snapshot_Share1", "user_id": "5c7bdb6eb0504d54a619acf8375c08ce", "project_id": 
"cadd7139bc3148b8973df097c0911016", "links": [ { "href": "http://172.18.198.54:8786/v1/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/6d221c1d-0200-461e-8d20-24b4776b9ddb", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/6d221c1d-0200-461e-8d20-24b4776b9ddb", "rel": "bookmark" } ], "created_at": "2015-09-07T11:50:39.000000", "description": "I am changing a description also. Here is a snapshot of share Share1", "share_proto": "NFS", "share_size": 1, "id": "6d221c1d-0200-461e-8d20-24b4776b9ddb", "size": 1 } } manila-10.0.0/api-ref/source/samples/share-group-types-default-list-response.json0000664000175000017500000000041713656750227030101 0ustar zuulzuul00000000000000{ "share_group_type": { "is_public": true, "group_specs": {}, "share_types": ["ecd11f4c-d811-4471-b656-c755c77e02ba"], "id": "89861c2a-10bf-4013-bdd4-3d020466aee4", "name": "test_group_type", "is_default": true } } manila-10.0.0/api-ref/source/samples/share-set-metadata-response.json0000664000175000017500000000026713656750227025564 0ustar zuulzuul00000000000000{ "metadata": { "aim": "changed_doc", "project": "my_app", "key1": "value1", "new_metadata_key": "new_information", "key": "value" } } manila-10.0.0/api-ref/source/samples/share-create-response.json0000664000175000017500000000265413656750227024460 0ustar zuulzuul00000000000000{ "share": { "status": null, "progress": null, "share_server_id": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "name": "share_London", "share_type": "25747776-08e5-494f-ab40-a64b9d20d8f7", "share_type_name": "default", "availability_zone": null, "created_at": "2015-09-18T10:25:24.533287", "export_location": null, "links": [ { "href": "http://172.18.198.54:8786/v1/16e1ab15c35a457e9c2b2aa189f544e1/shares/011d21e2-fbc3-4e4a-9993-9ea223f73264", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/shares/011d21e2-fbc3-4e4a-9993-9ea223f73264", "rel": "bookmark" } ], "share_network_id": null, "share_group_id": null, "export_locations": [], "share_proto": "NFS", "host": null, "access_rules_status": "active", "has_replicas": false, "replication_type": null, "task_state": null, "snapshot_support": true, "volume_type": "default", "snapshot_id": null, "is_public": true, "metadata": { "project": "my_app", "aim": "doc" }, "id": "011d21e2-fbc3-4e4a-9993-9ea223f73264", "size": 1, "description": "My custom share London" } } manila-10.0.0/api-ref/source/samples/share-server-show-details-response.json0000664000175000017500000000064513656750227027122 0ustar zuulzuul00000000000000{ "details": { "username": "manila", "router_id": "4b62ce91-56c5-45c1-b0ef-8cbbe5dd34f4", "pk_path": "/opt/stack/.ssh/id_rsa", "subnet_id": "16e99ad6-5191-461c-9f34-ac84a39c3adb", "ip": "10.254.0.3", "instance_id": "75f2f282-af65-49ba-a7b1-525705b1bf1a", "public_address": "10.254.0.3", "service_port_id": "8ff21760-961e-4b83-a032-03fd559bb1d3" } } manila-10.0.0/api-ref/source/samples/quota-show-detail-response.json0000664000175000017500000000162413656750227025460 0ustar zuulzuul00000000000000{ "quota_set": { "id": "16e1ab15c35a457e9c2b2aa189f544e1", "gigabytes": {"in_use": 0, "limit": 1000, "reserved": 0}, "shares": {"in_use": 0, "limit": 50, "reserved": 0}, "snapshot_gigabytes": {"in_use": 0, "limit": 1000, "reserved": 0}, "snapshots": {"in_use": 0, "limit": 50, "reserved": 0}, "share_networks": {"in_use": 0, "limit": 10, "reserved": 0}, "share_groups": {"in_use": 0, "limit": 10, "reserved": 0}, "share_group_snapshots": {"in_use": 0, "limit": 10, "reserved": 
0} } } manila-10.0.0/api-ref/source/samples/share-create-request.json0000664000175000017500000000065513656750227024311 0ustar zuulzuul00000000000000{ "share": { "description": "My custom share London", "share_type": null, "share_proto": "nfs", "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c", "share_group_id": null, "name": "share_London", "snapshot_id": null, "is_public": true, "size": 1, "metadata": { "project": "my_app", "aim": "doc" } } } manila-10.0.0/api-ref/source/samples/quota-update-response.json0000664000175000017500000000034713656750227024523 0ustar zuulzuul00000000000000{ "quota_set": { "gigabytes": 1000, "snapshot_gigabytes": 999, "shares": 50, "snapshots": 49, "share_networks": 9, "share_groups": 12, "share_group_snapshots": 12 } } manila-10.0.0/api-ref/source/samples/share-group-show-response.json0000664000175000017500000000213013656750227025314 0ustar zuulzuul00000000000000{ "share_groups": { "links": [ { "href": "http://172.18.198.54:8786/v2/16e1ab15c35a457e9c2b2aa189f544e1/share_groups/011d21e2-fbc3-4e4a-9993-9ea223f73264", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/share_groups/011d21e2-fbc3-4e4a-9993-9ea223f73264", "rel": "bookmark" } ], "availability_zone": "nova", "consistent_snapshot_support": true, "share_group_type_id": "313df749-aac0-1a54-af52-10f6c991e80c", "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c", "id": "011d21e2-fbc3-4e4a-9993-9ea223f73264", "share_types": ["25747776-08e5-494f-ab40-a64b9d20d8f7"], "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "status": "available", "description": "My custom share London", "host": "manila2@generic1#GENERIC1", "source_share_group_snapshot_id": null, "name": "share_London", "created_at": "2015-09-18T10:25:24.000000" } } manila-10.0.0/api-ref/source/samples/share-actions-shrink-request.json0000664000175000017500000000006013656750227025770 0ustar zuulzuul00000000000000{ "shrink": { "new_size": 1 } } manila-10.0.0/api-ref/source/samples/service-enable-response.json0000664000175000017500000000014413656750227024771 0ustar zuulzuul00000000000000{ "disabled": false, "binary": "manila-share", "host": "openstackhost@generic#pool_0" } manila-10.0.0/api-ref/source/samples/share-group-reset-state-request.json0000664000175000017500000000007213656750227026431 0ustar zuulzuul00000000000000{ "reset_status": { "status": "error" } } manila-10.0.0/api-ref/source/samples/snapshot-actions-reset-state-request.json0000664000175000017500000000007213656750227027472 0ustar zuulzuul00000000000000{ "reset_status": { "status": "error" } } manila-10.0.0/api-ref/source/samples/share-type-set-response.json0000664000175000017500000000007413656750227024761 0ustar zuulzuul00000000000000{ "extra_specs": { "my_key": "my_value" } } manila-10.0.0/api-ref/source/samples/snapshot-show-response.json0000664000175000017500000000163713656750227024732 0ustar zuulzuul00000000000000{ "snapshot": { "status": "available", "share_id": "406ea93b-32e9-4907-a117-148b3945749f", "user_id": "5c7bdb6eb0504d54a619acf8375c08ce", "name": "snapshot_share1", "links": [ { "href": "http://172.18.198.54:8786/v1/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/6d221c1d-0200-461e-8d20-24b4776b9ddb", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/6d221c1d-0200-461e-8d20-24b4776b9ddb", "rel": "bookmark" } ], "created_at": "2015-09-07T11:50:39.000000", "description": "Here is a snapshot of share Share1", "share_proto": "NFS", "share_size": 1, "id": 
"6d221c1d-0200-461e-8d20-24b4776b9ddb", "project_id": "cadd7139bc3148b8973df097c0911016", "size": 1 } } manila-10.0.0/api-ref/source/samples/limits-response.json0000664000175000017500000000075113656750227023412 0ustar zuulzuul00000000000000{ "limits": { "rate": [], "absolute": { "totalShareNetworksUsed": 0, "maxTotalShareGigabytes": 1000, "maxTotalShareNetworks": 10, "totalSharesUsed": 0, "totalShareGigabytesUsed": 0, "totalShareSnapshotsUsed": 0, "maxTotalShares": 50, "totalSnapshotGigabytesUsed": 0, "maxTotalSnapshotGigabytes": 1000, "maxTotalShareSnapshots": 50 } } } manila-10.0.0/api-ref/source/samples/share-type-set-request.json0000664000175000017500000000007413656750227024613 0ustar zuulzuul00000000000000{ "extra_specs": { "my_key": "my_value" } } manila-10.0.0/api-ref/source/samples/share-type-create-request.json0000664000175000017500000000072313656750227025264 0ustar zuulzuul00000000000000{ "share_type": { "extra_specs": { "replication_type": "readable", "driver_handles_share_servers": true, "mount_snapshot_support": false, "revert_to_snapshot_support": false, "create_share_from_snapshot_support": true, "snapshot_support": true }, "share_type_access:is_public": true, "name": "testing", "description": "share type description" } } manila-10.0.0/api-ref/source/samples/share-group-create-request.json0000664000175000017500000000064013656750227025435 0ustar zuulzuul00000000000000{ "share_group": { "share_types": ["ecd11f4c-d811-4471-b656-c755c77e02ba"], "name": "my_group", "description": "for_test", "share_group_type_id": "89861c2a-10bf-4013-bdd4-3d020466aee4", "availability_zone": "nova", "share_network_id": "82168c2a-10bf-4013-bcc4-3d984136aee3", "source_share_group_snapshot_id": "69861c2a-10bf-4013-bcc4-3d020466aee3" } } manila-10.0.0/api-ref/source/samples/share-group-snapshot-create-request.json0000664000175000017500000000025413656750227027273 0ustar zuulzuul00000000000000{ "share_group_snapshot": { "share_group_id": "cd7a3d06-23b3-4d05-b4ca-7c9a20faa95f", "name": "test", "description": "test description" } } manila-10.0.0/api-ref/source/samples/share-instance-actions-force-delete-request.json0000664000175000017500000000003513656750227030634 0ustar zuulzuul00000000000000{ "force_delete": null } manila-10.0.0/api-ref/source/samples/share-group-types-list-response.json0000664000175000017500000000047513656750227026463 0ustar zuulzuul00000000000000{ "share_group_types": [ { "is_public": true, "group_specs": {}, "share_types": ["ecd11f4c-d811-4471-b656-c755c77e02ba"], "id": "89861c2a-10bf-4013-bdd4-3d020466aee4", "name": "test_group_type", "is_default": false } ] } manila-10.0.0/api-ref/source/samples/service-disable-response.json0000664000175000017500000000014313656750227025145 0ustar zuulzuul00000000000000{ "disabled": true, "binary": "manila-share", "host": "openstackhost@generic#pool_0" } manila-10.0.0/api-ref/source/samples/share-group-update-request.json0000664000175000017500000000017213656750227025454 0ustar zuulzuul00000000000000{ "share_group": { "name": "new name", "description": "Changing the share group description." 
} } manila-10.0.0/api-ref/source/samples/snapshot-instance-actions-reset-state-request.json0000664000175000017500000000007613656750227031300 0ustar zuulzuul00000000000000{ "reset_status": { "status": "available" } } manila-10.0.0/api-ref/source/samples/quota-classes-show-response.json0000664000175000017500000000041013656750227025643 0ustar zuulzuul00000000000000{ "quota_class_set": { "share_groups": 50, "gigabytes": 1000, "share_group_snapshots": 50, "snapshots": 50, "snapshot_gigabytes": 1000, "shares": 50, "id": "default", "share_networks": 10 } } manila-10.0.0/api-ref/source/samples/share-update-metadata-response.json0000664000175000017500000000020513656750227026243 0ustar zuulzuul00000000000000{ "metadata": { "aim": "changed_doc", "project": "my_app", "new_metadata_key": "new_information" } } manila-10.0.0/api-ref/source/samples/share-group-update-response.json0000664000175000017500000000211213656750227025616 0ustar zuulzuul00000000000000{ "share_groups": { "status": "creating", "description": "Changing the share group description.", "links": [ { "href": "http://192.168.98.191:8786/v2/e23850eeb91d4fa3866af634223e454c/share_groups/f9c1f80c-2392-4e34-bd90-fc89cdc5bf93", "rel": "self" }, { "href": "http://192.168.98.191:8786/e23850eeb91d4fa3866af634223e454c/share_groups/f9c1f80c-2392-4e34-bd90-fc89cdc5bf93", "rel": "bookmark" } ], "availability_zone": null, "source_share_group_snapshot_id": null, "share_network_id": null, "share_server_id": null, "host": null, "share_group_type_id": "89861c2a-10bf-4013-bdd4-3d020466aee4", "consistent_snapshot_support": null, "id": "f9c1f80c-2392-4e34-bd90-fc89cdc5bf93", "name": "new name", "created_at": "2017-08-03T19:20:33.974421", "project_id": "e23850eeb91d4fa3866af634223e454c", "share_types": ["ecd11f4c-d811-4471-b656-c755c77e02ba"] } } manila-10.0.0/api-ref/source/samples/snapshot-instances-list-with-detail-response.json0000664000175000017500000000120013656750227031105 0ustar zuulzuul00000000000000{ "snapshot_instances": [ { "status": "available", "share_id": "618599ab-09a1-432d-973a-c102564c7fec", "share_instance_id": "8edff0cb-e5ce-4bab-aa99-afe02ed6a76a", "snapshot_id": "d447de19-a6d3-40b3-ae9f-895c86798924", "progress": "100%", "created_at": "2017-08-04T00:44:52.000000", "id": "275516e8-c998-4e78-a41e-7dd3a03e71cd", "provider_location": "/path/to/fake/snapshot/snapshot_d447de19_a6d3_40b3_ae9f_895c86798924_275516e8_c998_4e78_a41e_7dd3a03e71cd", "updated_at": "2017-08-04T00:44:54.000000" } ] } manila-10.0.0/api-ref/source/samples/versions-index-response.json0000664000175000017500000000274313656750227025071 0ustar zuulzuul00000000000000{ "versions": [ { "status": "DEPRECATED", "updated": "2015-08-27T11:33:21Z", "links": [ { "href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby" }, { "href": "http://172.18.198.54:8786/v1/", "rel": "self" } ], "min_version": "", "version": "", "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.share+json;version=1" } ], "id": "v1.0" }, { "status": "CURRENT", "updated": "2015-08-27T11:33:21Z", "links": [ { "href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby" }, { "href": "http://172.18.198.54:8786/v2/", "rel": "self" } ], "min_version": "2.0", "version": "2.15", "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.share+json;version=1" } ], "id": "v2.0" } ] } manila-10.0.0/api-ref/source/samples/snapshot-instances-list-response.json0000664000175000017500000000033113656750227026700 0ustar 
zuulzuul00000000000000{ "snapshot_instances": [ { "status": "available", "snapshot_id": "d447de19-a6d3-40b3-ae9f-895c86798924", "id": "275516e8-c998-4e78-a41e-7dd3a03e71cd" } ] } manila-10.0.0/api-ref/source/samples/snapshots-list-response.json0000664000175000017500000000224013656750227025077 0ustar zuulzuul00000000000000{ "snapshots": [ { "id": "086a1aa6-c425-4ecd-9612-391a3b1b9375", "links": [ { "href": "http://172.18.198.54:8786/v1/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/086a1aa6-c425-4ecd-9612-391a3b1b9375", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/086a1aa6-c425-4ecd-9612-391a3b1b9375", "rel": "bookmark" } ], "name": "snapshot_My_share" }, { "id": "6d221c1d-0200-461e-8d20-24b4776b9ddb", "links": [ { "href": "http://172.18.198.54:8786/v1/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/6d221c1d-0200-461e-8d20-24b4776b9ddb", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/6d221c1d-0200-461e-8d20-24b4776b9ddb", "rel": "bookmark" } ], "name": "snapshot_share1" } ] } manila-10.0.0/api-ref/source/samples/share-group-snapshot-show-response.json0000664000175000017500000000146113656750227027157 0ustar zuulzuul00000000000000{ "share_group_snapshot": { "status": "creating", "share_group_id": "cd7a3d06-23b3-4d05-b4ca-7c9a20faa95f", "links": [ { "href": "http://192.168.98.191:8786/v2/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/46bf5875-58d6-4816-948f-8828423b0b9f", "rel": "self" }, { "href": "http://192.168.98.191:8786/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/46bf5875-58d6-4816-948f-8828423b0b9f", "rel": "bookmark" } ], "name": null, "members": [], "created_at": "2017-08-10T03:01:39.442509", "project_id": "e23850eeb91d4fa3866af634223e454c", "id": "46bf5875-58d6-4816-948f-8828423b0b9f", "description": null } } manila-10.0.0/api-ref/source/samples/share-network-create-response.json0000664000175000017500000000112713656750227026141 0ustar zuulzuul00000000000000{ "share_network": { "name": "my_network", "segmentation_id": null, "created_at": "2015-09-07T14:37:00.583656", "neutron_subnet_id": "53482b62-2c84-4a53-b6ab-30d9d9800d06", "updated_at": null, "id": "77eb3421-4549-4789-ac39-0d5185d68c29", "neutron_net_id": "998b42ee-2cee-4d36-8b95-67b5ca1f2109", "ip_version": null, "cidr": null, "project_id": "e10a683c20da41248cfd5e1ab3d88c62", "network_type": null, "description": "This is my share network", "gateway": null, "mtu": null } } manila-10.0.0/api-ref/source/samples/export-location-list-response.json0000664000175000017500000000120013656750227026177 0ustar zuulzuul00000000000000{ "export_locations": [ { "path": "10.254.0.3:/shares/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d", "share_instance_id": "e1c2d35e-fe67-4028-ad7a-45f668732b1d", "is_admin_only": false, "id": "b6bd76ce-12a2-42a9-a30a-8a43b503867d", "preferred": false }, { "path": "10.0.0.3:/shares/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d", "share_instance_id": "e1c2d35e-fe67-4028-ad7a-45f668732b1d", "is_admin_only": true, "id": "6921e862-88bc-49a5-a2df-efeed9acd583", "preferred": false } ] } manila-10.0.0/api-ref/source/samples/share-network-remove-security-service-request.json0000664000175000017500000000016113656750227031325 0ustar zuulzuul00000000000000{ "remove_security_service": { "security_service_id": "3c829734-0679-4c17-9637-801da48c0d5f" } } manila-10.0.0/api-ref/source/samples/share-show-instance-response.json0000664000175000017500000000111413656750227025765 0ustar zuulzuul00000000000000{ "share_instance": { 
"status": "available", "progress": "100%", "share_id": "d94a8548-2079-4be0-b21c-0a887acd31ca", "availability_zone": "nova", "replica_state": null, "created_at": "2015-09-07T08:51:34.000000", "cast_rules_to_readonly": false, "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c", "share_server_id": "ba11930a-bf1a-4aa7-bae4-a8dfbaa3cc73", "host": "manila2@generic1#GENERIC1", "access_rules_status": "active", "id": "75559a8b-c90c-42a7-bda2-edbe86acfb7b" } } manila-10.0.0/api-ref/source/samples/share-update-null-metadata-response.json0000664000175000017500000000003113656750227027210 0ustar zuulzuul00000000000000{ "metadata": null } manila-10.0.0/api-ref/source/samples/share-network-add-security-service-request.json0000664000175000017500000000015613656750227030564 0ustar zuulzuul00000000000000{ "add_security_service": { "security_service_id": "3c829734-0679-4c17-9637-801da48c0d5f" } } manila-10.0.0/api-ref/source/samples/share-update-response.json0000664000175000017500000000262113656750227024471 0ustar zuulzuul00000000000000{ "share": { "links": [ { "href": "http://172.18.198.54:8786/v2/16e1ab15c35a457e9c2b2aa189f544e1/shares/011d21e2-fbc3-4e4a-9993-9ea223f73264", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/shares/011d21e2-fbc3-4e4a-9993-9ea223f73264", "rel": "bookmark" } ], "availability_zone": "nova", "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c", "export_locations": [], "share_server_id": "e268f4aa-d571-43dd-9ab3-f49ad06ffaef", "share_group_id": null, "snapshot_id": null, "id": "011d21e2-fbc3-4e4a-9993-9ea223f73264", "size": 1, "share_type": "25747776-08e5-494f-ab40-a64b9d20d8f7", "share_type_name": "default", "export_location": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "metadata": { "project": "my_app", "aim": "doc" }, "status": "error", "description": "Changing the share description.", "host": "manila2@generic1#GENERIC1", "task_state": null, "is_public": true, "snapshot_support": true, "name": "share_London", "created_at": "2015-09-18T10:25:24.000000", "share_proto": "NFS", "volume_type": "default" } } manila-10.0.0/api-ref/source/samples/share-network-remove-security-service-response.json0000664000175000017500000000076713656750227031507 0ustar zuulzuul00000000000000{ "share_network": { "name": "net2", "segmentation_id": null, "created_at": "2015-09-07T12:31:12.000000", "neutron_subnet_id": null, "updated_at": null, "id": "d8ae6799-2567-4a89-aafb-fa4424350d2b", "neutron_net_id": null, "ip_version": null, "cidr": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "network_type": null, "description": null, "gateway": null, "mtu": null } } manila-10.0.0/api-ref/source/samples/snapshot-create-response.json0000664000175000017500000000163613656750227025214 0ustar zuulzuul00000000000000{ "snapshot": { "status": "creating", "share_id": "406ea93b-32e9-4907-a117-148b3945749f", "user_id": "5c7bdb6eb0504d54a619acf8375c08ce", "name": "snapshot_share1", "links": [ { "href": "http://172.18.198.54:8786/v1/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/6d221c1d-0200-461e-8d20-24b4776b9ddb", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/6d221c1d-0200-461e-8d20-24b4776b9ddb", "rel": "bookmark" } ], "created_at": "2015-09-07T11:50:39.756808", "description": "Here is a snapshot of share Share1", "share_proto": "NFS", "share_size": 1, "id": "6d221c1d-0200-461e-8d20-24b4776b9ddb", "project_id": "cadd7139bc3148b8973df097c0911016", "size": 1 } } 
manila-10.0.0/api-ref/source/samples/security-service-create-request.json0000664000175000017500000000035413656750227026510 0ustar zuulzuul00000000000000{ "security_service": { "description": "Creating my first Security Service", "dns_ip": "10.0.0.0/24", "user": "demo", "password": "***", "type": "kerberos", "name": "SecServ1" } } manila-10.0.0/api-ref/source/samples/share-type-grant-access-request.json0000664000175000017500000000013213656750227026365 0ustar zuulzuul00000000000000{ "addProjectAccess": { "project": "e1284adea3ee4d2482af5ed214f3ad90" } } manila-10.0.0/api-ref/source/samples/share-server-show-response.json0000664000175000017500000000156513656750227025501 0ustar zuulzuul00000000000000{ "share_server": { "status": "active", "backend_details": { "username": "manila", "router_id": "4b62ce91-56c5-45c1-b0ef-8cbbe5dd34f4", "pk_path": "/opt/stack/.ssh/id_rsa", "subnet_id": "16e99ad6-5191-461c-9f34-ac84a39c3adb", "ip": "10.254.0.3", "instance_id": "75f2f282-af65-49ba-a7b1-525705b1bf1a", "public_address": "10.254.0.3", "service_port_id": "8ff21760-961e-4b83-a032-03fd559bb1d3" }, "created_at": "2015-09-07T08:37:19.000000", "updated_at": "2015-09-07T08:52:15.000000", "share_network_name": "net_my", "host": "manila2@generic1", "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c", "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "id": "ba11930a-bf1a-4aa7-bae4-a8dfbaa3cc73" } } manila-10.0.0/api-ref/source/samples/share-access-rules-update-metadata-request.json0000664000175000017500000000021313656750227030463 0ustar zuulzuul00000000000000{ "metadata": { "aim": "changed_doc", "speed": "my_fast_access", "new_metadata_key": "new_information" } } manila-10.0.0/api-ref/source/samples/share-show-response.json0000664000175000017500000000302213656750227024163 0ustar zuulzuul00000000000000{ "share": { "links": [ { "href": "http://172.18.198.54:8786/v2/16e1ab15c35a457e9c2b2aa189f544e1/shares/011d21e2-fbc3-4e4a-9993-9ea223f73264", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/shares/011d21e2-fbc3-4e4a-9993-9ea223f73264", "rel": "bookmark" } ], "availability_zone": "nova", "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c", "export_locations": [], "share_server_id": "e268f4aa-d571-43dd-9ab3-f49ad06ffaef", "share_group_id": null, "snapshot_id": null, "id": "011d21e2-fbc3-4e4a-9993-9ea223f73264", "size": 1, "share_type": "25747776-08e5-494f-ab40-a64b9d20d8f7", "share_type_name": "default", "export_location": null, "project_id": "16e1ab15c35a457e9c2b2aa189f544e1", "metadata": { "project": "my_app", "aim": "doc" }, "status": "available", "progress": "100%", "description": "My custom share London", "host": "manila2@generic1#GENERIC1", "access_rules_status": "active", "has_replicas": false, "replication_type": null, "task_state": null, "is_public": true, "snapshot_support": true, "name": "share_London", "created_at": "2015-09-18T10:25:24.000000", "share_proto": "NFS", "volume_type": "default" } } manila-10.0.0/api-ref/source/samples/share-type-show-response.json0000664000175000017500000000151013656750227025142 0ustar zuulzuul00000000000000{ "share_type": { "required_extra_specs": { "driver_handles_share_servers": "True" }, "share_type_access:is_public": true, "extra_specs": { "driver_handles_share_servers": "True" }, "id": "2780fc88-526b-464a-a72c-ecb83f0e3929", "name": "default-share-type", "is_default": true, "description": "manila share type" }, "volume_type": { "required_extra_specs": { "driver_handles_share_servers": "True" }, 
"share_type_access:is_public": true, "extra_specs": { "driver_handles_share_servers": "True" }, "id": "2780fc88-526b-464a-a72c-ecb83f0e3929", "name": "default-share-type", "is_default": true, "description": "manila share type" } } manila-10.0.0/api-ref/source/samples/share-access-rules-list-response.json0000664000175000017500000000170513656750227026553 0ustar zuulzuul00000000000000{ "access_list": [ { "access_level": "rw", "state": "error", "id": "507bf114-36f2-4f56-8cf4-857985ca87c1", "access_type": "cert", "access_to": "example.com", "access_key": null, "created_at": "2018-07-17T02:01:04.000000", "updated_at": "2018-07-17T02:01:04.000000", "metadata": { "key1": "value1", "key2": "value2" } }, { "access_level": "rw", "state": "active", "id": "a25b2df3-90bd-4add-afa6-5f0dbbd50452", "access_type": "ip", "access_to": "0.0.0.0/0", "access_key": null, "created_at": "2018-07-16T01:03:21.000000", "updated_at": "2018-07-16T01:03:21.000000", "metadata": { "key3": "value3", "key4": "value4" } } ] } manila-10.0.0/api-ref/source/samples/share-actions-unmanage-request.json0000664000175000017500000000003113656750227026263 0ustar zuulzuul00000000000000{ "unmanage": null } manila-10.0.0/api-ref/source/samples/share-group-types-group-specs-list-response.json0000664000175000017500000000010213656750227030713 0ustar zuulzuul00000000000000{ "group_specs": { "snapshot_support": "True" } } manila-10.0.0/api-ref/source/samples/services-list-response.json0000664000175000017500000000104413656750227024701 0ustar zuulzuul00000000000000{ "services": [ { "status": "enabled", "binary": "manila-share", "zone": "nova", "host": "manila2@generic1", "updated_at": "2015-09-07T13:03:57.000000", "state": "up", "id": 1 }, { "status": "enabled", "binary": "manila-scheduler", "zone": "nova", "host": "manila2", "updated_at": "2015-09-07T13:03:57.000000", "state": "up", "id": 2 } ] } manila-10.0.0/api-ref/source/samples/share-types-list-response.json0000664000175000017500000000451413656750227025327 0ustar zuulzuul00000000000000{ "volume_types": [ { "required_extra_specs": { "driver_handles_share_servers": "True" }, "share_type_access:is_public": true, "extra_specs": { "driver_handles_share_servers": "True" }, "id": "420e6a31-3f3d-4ed7-9d11-59450372182a", "name": "default", "is_default": true, "description": "share type description" }, { "required_extra_specs": { "driver_handles_share_servers": "True" }, "share_type_access:is_public": true, "extra_specs": { "replication_type": "readable", "driver_handles_share_servers": "True", "mount_snapshot_support": "False", "revert_to_snapshot_support": "False", "create_share_from_snapshot_support": "True", "snapshot_support": "True" }, "id": "7fa1342b-de9d-4d89-bdc8-af67795c0e52", "name": "testing", "is_default": false, "description": "share type description" } ], "share_types": [ { "required_extra_specs": { "driver_handles_share_servers": "True" }, "share_type_access:is_public": true, "extra_specs": { "driver_handles_share_servers": "True" }, "id": "420e6a31-3f3d-4ed7-9d11-59450372182a", "name": "default", "is_default": true, "description": "share type description" }, { "required_extra_specs": { "driver_handles_share_servers": "True" }, "share_type_access:is_public": true, "extra_specs": { "replication_type": "readable", "driver_handles_share_servers": "True", "mount_snapshot_support": "False", "revert_to_snapshot_support": "False", "create_share_from_snapshot_support": "True", "snapshot_support": "True" }, "id": "7fa1342b-de9d-4d89-bdc8-af67795c0e52", "name": "testing", "is_default": false, 
"description": "share type description" } ] } manila-10.0.0/api-ref/source/samples/share-actions-force-delete-request.json0000664000175000017500000000003513656750227027032 0ustar zuulzuul00000000000000{ "force_delete": null } manila-10.0.0/api-ref/source/samples/snapshot-manage-response.json0000664000175000017500000000174013656750227025175 0ustar zuulzuul00000000000000{ "snapshot": { "id": "22de7000-3a32-4fe1-bd0c-38d03f93dec3", "share_id": "dd6c5d35-9db1-4662-a7ae-8b52f880aeba", "share_size": 1, "created_at": "2016-04-01T15:16:17.000000", "status": "manage_starting", "name": "managed_snapshot", "description": "description_of_managed_snapshot", "size": 1, "share_proto": "NFS", "user_id": "5c7bdb6eb0504d54a619acf8375c08ce", "project_id": "cadd7139bc3148b8973df097c0911016", "links": [ { "href": "http://127.0.0.1:8786/v2/907004508ef4447397ce6741a8f037c1/snapshots/22de7000-3a32-4fe1-bd0c-38d03f93dec3", "rel": "self" }, { "href": "http://127.0.0.1:8786/907004508ef4447397ce6741a8f037c1/snapshots/22de7000-3a32-4fe1-bd0c-38d03f93dec3", "rel": "bookmark" } ], "provider_location": "4045fee5-4e0e-408e-97f3-15e25239dbc9" } } manila-10.0.0/api-ref/source/samples/share-replica-export-location-list-response.json0000664000175000017500000000144413656750227030726 0ustar zuulzuul00000000000000{ "export_locations": [ { "path": "10.254.0.3:/shares/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d", "share_instance_id": "e1c2d35e-fe67-4028-ad7a-45f668732b1d", "is_admin_only": false, "id": "b6bd76ce-12a2-42a9-a30a-8a43b503867d", "preferred": false, "replica_state": "in_sync", "availability_zone": "paris" }, { "path": "10.0.0.3:/shares/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d", "share_instance_id": "e1c2d35e-fe67-4028-ad7a-45f668732b1d", "is_admin_only": true, "id": "6921e862-88bc-49a5-a2df-efeed9acd583", "preferred": false, "replica_state": "in_sync", "availability_zone": "paris" } ] } manila-10.0.0/api-ref/source/samples/share-replicas-reset-state-request.json0000664000175000017500000000007613656750227027103 0ustar zuulzuul00000000000000{ "reset_status": { "status": "available" } } manila-10.0.0/api-ref/source/samples/share-group-snapshots-list-detailed-response.json0000664000175000017500000000336613656750227031114 0ustar zuulzuul00000000000000{ "share_group_snapshots": [ { "status": "available", "share_group_id": "cd7a3d06-23b3-4d05-b4ca-7c9a20faa95f", "links": [ { "href": "http://192.168.98.191:8786/v2/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/46bf5875-58d6-4816-948f-8828423b0b9f", "rel": "self" }, { "href": "http://192.168.98.191:8786/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/46bf5875-58d6-4816-948f-8828423b0b9f", "rel": "bookmark" } ], "name": null, "members": [], "created_at": "2017-08-10T03:01:39.000000", "project_id": "e23850eeb91d4fa3866af634223e454c", "id": "46bf5875-58d6-4816-948f-8828423b0b9f", "description": null }, { "status": "available", "share_group_id": "cd7a3d06-23b3-4d05-b4ca-7c9a20faa95f", "links": [ { "href": "http://192.168.98.191:8786/v2/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/9d8ed9be-4454-4df0-b0ae-8360b623d93d", "rel": "self" }, { "href": "http://192.168.98.191:8786/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/9d8ed9be-4454-4df0-b0ae-8360b623d93d", "rel": "bookmark" } ], "name": null, "members": [], "created_at": "2017-08-10T03:01:28.000000", "project_id": "e23850eeb91d4fa3866af634223e454c", "id": "9d8ed9be-4454-4df0-b0ae-8360b623d93d", "description": null } ] } 
manila-10.0.0/api-ref/source/samples/share-group-snapshot-create-response.json0000664000175000017500000000146113656750227027442 0ustar zuulzuul00000000000000{ "share_group_snapshot": { "status": "creating", "share_group_id": "cd7a3d06-23b3-4d05-b4ca-7c9a20faa95f", "links": [ { "href": "http://192.168.98.191:8786/v2/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/46bf5875-58d6-4816-948f-8828423b0b9f", "rel": "self" }, { "href": "http://192.168.98.191:8786/e23850eeb91d4fa3866af634223e454c/share_group_snapshot/46bf5875-58d6-4816-948f-8828423b0b9f", "rel": "bookmark" } ], "name": null, "members": [], "created_at": "2017-08-10T03:01:39.442509", "project_id": "e23850eeb91d4fa3866af634223e454c", "id": "46bf5875-58d6-4816-948f-8828423b0b9f", "description": null } } manila-10.0.0/api-ref/source/samples/share-show-metadata-response.json0000664000175000017500000000011613656750227025742 0ustar zuulzuul00000000000000{ "metadata": { "project": "my_app", "aim": "doc" } } manila-10.0.0/api-ref/source/samples/share-actions-list-access-rules-response.json0000664000175000017500000000101513656750227030203 0ustar zuulzuul00000000000000{ "access_list": [ { "access_level": "rw", "state": "error", "id": "507bf114-36f2-4f56-8cf4-857985ca87c1", "access_type": "cert", "access_to": "example.com", "access_key": null }, { "access_level": "rw", "state": "active", "id": "a25b2df3-90bd-4add-afa6-5f0dbbd50452", "access_type": "ip", "access_to": "0.0.0.0/0", "access_key": null } ] } manila-10.0.0/api-ref/source/samples/snapshots-list-detailed-response.json0000664000175000017500000000376713656750227026667 0ustar zuulzuul00000000000000{ "snapshots": [ { "status": "creating", "share_id": "d94a8548-2079-4be0-b21c-0a887acd31ca", "user_id": "5c7bdb6eb0504d54a619acf8375c08ce", "name": "snapshot_My_share", "links": [ { "href": "http://172.18.198.54:8786/v1/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/086a1aa6-c425-4ecd-9612-391a3b1b9375", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/086a1aa6-c425-4ecd-9612-391a3b1b9375", "rel": "bookmark" } ], "created_at": "2015-09-07T11:55:09.000000", "description": "Here is a snapshot of share My_share", "share_proto": "NFS", "share_size": 1, "id": "086a1aa6-c425-4ecd-9612-391a3b1b9375", "project_id": "cadd7139bc3148b8973df097c0911016", "size": 1 }, { "status": "available", "share_id": "406ea93b-32e9-4907-a117-148b3945749f", "user_id": "5c7bdb6eb0504d54a619acf8375c08ce", "name": "snapshot_share1", "links": [ { "href": "http://172.18.198.54:8786/v1/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/6d221c1d-0200-461e-8d20-24b4776b9ddb", "rel": "self" }, { "href": "http://172.18.198.54:8786/16e1ab15c35a457e9c2b2aa189f544e1/snapshots/6d221c1d-0200-461e-8d20-24b4776b9ddb", "rel": "bookmark" } ], "created_at": "2015-09-07T11:50:39.000000", "description": "Here is a snapshot of share Share1", "share_proto": "NFS", "share_size": 1, "id": "6d221c1d-0200-461e-8d20-24b4776b9ddb", "project_id": "cadd7139bc3148b8973df097c0911016", "size": 1 } ] } manila-10.0.0/api-ref/source/samples/share-network-update-request.json0000664000175000017500000000037713656750227026020 0ustar zuulzuul00000000000000{ "share_network": { "neutron_net_id": "998b42ee-2cee-4d36-8b95-67b5ca1f2109", "neutron_subnet_id": "53482b62-2c84-4a53-b6ab-30d9d9800d06", "name": "update my network", "description": "i'm adding a description" } } manila-10.0.0/api-ref/source/samples/share-group-type-set-response.json0000664000175000017500000000011013656750227026102 0ustar zuulzuul00000000000000{ 
"group_specs": { "my_group_key": "my_group_value" } } manila-10.0.0/api-ref/source/samples/share-manage-request.json0000664000175000017500000000111413656750227024265 0ustar zuulzuul00000000000000{ "share": { "protocol": "nfs", "name": "accounting_p8787", "share_type": "gold", "driver_options": { "opt1": "opt1", "opt2": "opt2" }, "export_path": "192.162.10.6:/shares/share-accounting_p8787", "service_host": "manila2@openstackstor01#accountingpool", "is_public": true, "description": "Common storage for spreadsheets and presentations. Please contact John Accessman to be added to the users of this drive.", "share_server_id": "00137b40-ca06-4ae8-83a3-2c5989eebcce" } } manila-10.0.0/api-ref/source/share-access-rule-metadata.inc0000664000175000017500000000327513656750227023501 0ustar zuulzuul00000000000000.. -*- rst -*- ============================================ Share access rule metadata (since API v2.45) ============================================ Updates, and unsets share access rule metadata. Update share access rule metadata ================================== .. rest_method:: PUT /v2/{project_id}/share-access-rules/{access_id}/metadata .. versionadded:: 2.45 Updates the metadata for a share access rule. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - access_id: access_id_path - metadata: access_metadata Request example --------------- .. literalinclude:: samples/share-access-rules-update-metadata-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - metadata: access_metadata Response example ---------------- .. literalinclude:: samples/share-access-rules-update-metadata-response.json :language: javascript Unset share access rule metadata ================================ .. rest_method:: DELETE /v2/{project_id}/share-access-rules/{access_id}/metadata/{key} .. versionadded:: 2.45 Un-sets the metadata on a share access rule. To unset a metadata key value, specify only the key name in the URI. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - access_id: access_id_path - key: metadata_key_path manila-10.0.0/api-ref/source/snapshots.inc0000664000175000017500000002567113656750227020443 0ustar zuulzuul00000000000000.. -*- rst -*- =============== Share snapshots =============== Use the Shared File Systems service to make snapshots of shares. A share snapshot is a point-in-time, read-only copy of the data that is contained in a share. The APIs below allow controlling share snapshots. They are represented by a "snapshot" resource in the Shared File Systems service, and they can have user-defined metadata such as a name and description. You can create, manage, update, and delete share snapshots. After you create or manage a share snapshot, you can create a share from it. You can also revert a share to its most recent snapshot. 
You can update a share snapshot to rename it, change its description, or update its state to one of these supported states: - ``available`` - ``error`` - ``creating`` - ``deleting`` - ``error_deleting`` - ``manage_starting`` - ``manage_error`` - ``unmanage_starting`` - ``unmanage_error`` - ``restoring`` As administrator, you can also reset the state of a snapshot and force-delete a share snapshot in any state. Use the ``policy.json`` file to grant permissions for these actions to other roles. List share snapshots ==================== .. rest_method:: GET /v2/{project_id}/snapshots Lists all share snapshots. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query - name~: name_inexact_query - description~: description_inexact_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: snapshot_id - name: name Response example ---------------- .. literalinclude:: samples/snapshots-list-response.json :language: javascript List share snapshots with details ================================= .. rest_method:: GET /v2/{project_id}/snapshots/detail Lists all share snapshots with details. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query - name~: name_inexact_query - description~: description_inexact_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: snapshot_id - status: snapshot_status - share_id: snapshot_share_id - name: name - description: description - created_at: created_at - share_proto: snapshot_share_protocol - share_size: snapshot_share_size - size: snapshot_size - project_id: snapshot_project_id - user_id: snapshot_user_id Response example ---------------- .. literalinclude:: samples/snapshots-list-detailed-response.json :language: javascript Show share snapshot details =========================== .. rest_method:: GET /v2/{project_id}/snapshots/{snapshot_id} Shows details for a share snapshot. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - snapshot_id: snapshot_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: snapshot_id - status: snapshot_status - share_id: snapshot_share_id - name: name - description: description - created_at: created_at - share_proto: snapshot_share_protocol - share_size: snapshot_share_size - size: snapshot_size - project_id: snapshot_project_id - user_id: snapshot_user_id Response example ---------------- .. literalinclude:: samples/snapshot-show-response.json :language: javascript Create share snapshot ===================== .. rest_method:: POST /v2/{project_id}/snapshots Creates a snapshot from a share. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 422 Request ------- .. 
rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: snapshot_share_id_request - force: force_snapshot_request - name: name_request - description: description_request - display_name: display_name_request - display_description: display_description_request Request example --------------- .. literalinclude:: samples/snapshot-create-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: snapshot_id - share_id: snapshot_share_id - status: snapshot_status - name: name - description: description - created_at: created_at - share_proto: snapshot_share_protocol - share_size: snapshot_share_size - provider_location: snapshot_provider_location - size: snapshot_size - project_id: snapshot_project_id - user_id: snapshot_user_id Response example ---------------- .. literalinclude:: samples/snapshot-create-response.json :language: javascript Manage share snapshot (since API v2.12) ======================================= .. rest_method:: POST /v2/{project_id}/snapshots/manage .. versionadded:: 2.12 Configures Shared File Systems to manage a share snapshot. .. note:: Managing snapshots of shares that are created on top of share servers (i.e. created with share networks) is not supported prior to API version 2.49. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: snapshot_manage_share_id - provider_location: snapshot_provider_location_request - name: name_request - display_name: display_name_request - description: description_request - display_description: display_description_request - driver_options: driver_options Request example --------------- .. literalinclude:: samples/snapshot-manage-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: snapshot_id - share_id: snapshot_share_id - status: snapshot_manage_status - name: name - description: description - created_at: created_at - share_proto: snapshot_share_protocol - share_size: snapshot_share_size - provider_location: snapshot_provider_location - size: snapshot_size - project_id: snapshot_project_id - user_id: snapshot_user_id Response example ---------------- .. literalinclude:: samples/snapshot-manage-response.json :language: javascript Unmanage share snapshot (since API v2.12) ========================================= .. rest_method:: POST /v2/{project_id}/snapshots/{snapshot_id}/action .. versionadded:: 2.12 Configures Shared File Systems to stop managing a share snapshot. .. note:: Unmanaging snapshots of shares that are created on top of share servers (i.e. created with share networks) is not supported prior to API version 2.49. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - snapshot_id: snapshot_id_path - unmanage: snapshot_unmanage Request example --------------- .. literalinclude:: samples/snapshot-actions-unmanage-request.json :language: javascript Response parameters ------------------- There is no body content for the response. Reset share snapshot state ========================== .. rest_method:: POST /v2/{project_id}/snapshots/{snapshot_id}/action Administrator only. Explicitly updates the state of a share snapshot. 
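As a rough sketch of this administrator-only action, the call below POSTs to the snapshot ``action`` URL shown above. The ``reset_status`` body key is an assumption for illustration; the authoritative request shape is in ``samples/snapshot-actions-reset-state-request.json``, which is not reproduced here, and the endpoint, token and IDs are placeholders.

.. code-block:: python

   # Minimal sketch (admin only): force a snapshot's state back to 'available'.
   # The 'reset_status' key is assumed to match the reset-state request sample.
   import requests

   endpoint = "http://192.168.98.191:8786/v2"       # assumed manila endpoint
   project_id = "e23850eeb91d4fa3866af634223e454c"  # assumed admin project ID
   snapshot_id = "086a1aa6-c425-4ecd-9612-391a3b1b9375"
   headers = {"X-Auth-Token": "replace-with-an-admin-token"}

   resp = requests.post(
       f"{endpoint}/{project_id}/snapshots/{snapshot_id}/action",
       headers=headers,
       json={"reset_status": {"status": "available"}},  # assumed body key
   )
   print(resp.status_code)   # 202 is the documented success code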
Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - snapshot_id: snapshot_id_path - status: snapshot_status_request Request example --------------- .. literalinclude:: samples/snapshot-actions-reset-state-request.json :language: javascript Force-delete share snapshot =========================== .. rest_method:: POST /v2/{project_id}/snapshots/{snapshot_id}/action Administrator only. Force-deletes a share snapshot in any state. Use the ``policy.json`` file to grant permissions for this action to other roles. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - snapshot_id: snapshot_id_path - force_delete: snapshot_force_delete Request example --------------- .. literalinclude:: samples/snapshot-actions-force-delete-request.json :language: javascript Update share snapshot ===================== .. rest_method:: PUT /v2/{project_id}/snapshots/{snapshot_id} Updates a share snapshot. You can update these attributes: - ``display_name``, which also changes the ``name`` of the share snapshot. - ``display_description``, which also changes the ``description`` of the share snapshot. If you try to update other attributes, they retain their previous values. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - snapshot_id: snapshot_id_path - display_name: display_name_request - display_description: display_description_request Request example --------------- .. literalinclude:: samples/snapshot-update-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: snapshot_id - status: snapshot_status - share_id: snapshot_share_id - name: name - description: description - created_at: created_at - share_proto: snapshot_share_protocol - share_size: snapshot_share_size - size: snapshot_size - project_id: snapshot_project_id - user_id: snapshot_user_id Response example ---------------- .. literalinclude:: samples/snapshot-update-response.json :language: javascript Delete share snapshot ===================== .. rest_method:: DELETE /v2/{project_id}/snapshots/{snapshot_id} Deletes a share snapshot. Preconditions - Share snapshot status must be ``available`` or ``error``. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - snapshot_id: snapshot_id_path manila-10.0.0/api-ref/source/share-networks.inc0000664000175000017500000002351213656750227021365 0ustar zuulzuul00000000000000.. -*- rst -*- ============== Share networks ============== A share network resource stores network information to create and manage share servers. Shares created with share networks are exported on these networks with the help of share servers. You can create, update, view, and delete a share network. When you create a share network, you may optionally specify an associated neutron network and subnetwork. 
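To make that workflow concrete, the sketch below creates a share network tied to a neutron network and subnet with the Python ``requests`` library. The body mirrors the fields shown in ``samples/share-network-update-request.json``; the endpoint, token, project ID and neutron IDs are placeholders, and the ``share_network`` key in the response is assumed to match the request wrapper.

.. code-block:: python

   # Minimal sketch: create a share network bound to a neutron network/subnet.
   # Endpoint, token and neutron IDs are placeholders for illustration.
   import requests

   endpoint = "http://192.168.98.191:8786/v2"       # assumed manila endpoint
   project_id = "e23850eeb91d4fa3866af634223e454c"  # assumed project ID
   headers = {"X-Auth-Token": "replace-with-a-keystone-token"}

   body = {
       "share_network": {
           "neutron_net_id": "998b42ee-2cee-4d36-8b95-67b5ca1f2109",
           "neutron_subnet_id": "53482b62-2c84-4a53-b6ab-30d9d9800d06",
           "name": "my_share_network",
           "description": "share network for NFS shares",
       }
   }
   resp = requests.post(f"{endpoint}/{project_id}/share-networks",
                        headers=headers, json=body)
   print(resp.status_code)                          # 202 is the documented success code
   print(resp.json()["share_network"]["id"])        # assumed response wrapper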
For more information about supported plug-ins for share networks, see `Manila Network Plugins `_. A share network resource has these attributes: - The IP block in Classless Inter-Domain Routing (CIDR) notation from which to allocate the network. - The IP version of the network. - The network type, which is ``vlan``, ``vxlan``, ``gre``, or ``flat``. - If the network uses segmentation, a segmentation identifier. For example, VLAN, VXLAN, and GRE networks use segmentation. A share network resource can also have a user defined name and description. List share networks =================== .. rest_method:: GET /v2/{project_id}/share-networks Lists all share networks. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query - name~: name_inexact_query - description~: description_inexact_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_network_id - name: name Response example ---------------- .. literalinclude:: samples/share-networks-list-response.json :language: javascript List share networks with details ================================ .. rest_method:: GET /v2/{project_id}/share-networks/detail Lists all share networks with details. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - all_tenants: all_tenants_query - name~: name_inexact_query - description~: description_inexact_query Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_network_id - project_id: project_id - neutron_net_id: neutron_net_id - neutron_subnet_id: neutron_subnet_id - network_type: network_type - segmentation_id: segmentation_id - cidr: cidr - ip_version: ip_version - name: name - description: description - created_at: created_at - updated_at: updated_at - gateway: share_network_gateway - mtu: share_network_mtu Response example ---------------- .. literalinclude:: samples/share-networks-list-detailed-response.json :language: javascript Show share network details ========================== .. rest_method:: GET /v2/{project_id}/share-networks/{share_network_id} Shows details for a share network. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_network_id: share_network_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_network_id - project_id: project_id - neutron_net_id: neutron_net_id - neutron_subnet_id: neutron_subnet_id - network_type: network_type - segmentation_id: segmentation_id - cidr: cidr - ip_version: ip_version - name: name - description: description - created_at: created_at - updated_at: updated_at - gateway: share_network_gateway - mtu: share_network_mtu Response example ---------------- .. literalinclude:: samples/share-network-show-response.json :language: javascript Create share network ==================== .. rest_method:: POST /v2/{project_id}/share-networks Creates a share network. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. 
rest_status_code:: error status.yaml - 400 - 401 - 403 - 413 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - neutron_net_id: neutron_net_id_request - neutron_subnet_id: neutron_subnet_id_request - name: name_request - description: description_request Request example --------------- .. literalinclude:: samples/share-network-create-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_network_id - project_id: project_id - neutron_net_id: neutron_net_id - neutron_subnet_id: neutron_subnet_id - network_type: network_type_1 - segmentation_id: segmentation_id_1 - cidr: cidr_1 - ip_version: ip_version_1 - name: name - description: description - created_at: created_at - updated_at: updated_at - gateway: share_network_gateway - mtu: share_network_mtu Response example ---------------- .. literalinclude:: samples/share-network-create-response.json :language: javascript Add security service to share network ===================================== .. rest_method:: POST /v2/{project_id}/share-networks/{share_network_id}/action Adds a security service to a share network. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id - share_network_id: share_network_id_path - security_service_id: security_service_id Request example --------------- .. literalinclude:: samples/share-network-add-security-service-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_network_id - project_id: project_id - neutron_net_id: neutron_net_id - neutron_subnet_id: neutron_subnet_id - network_type: network_type - segmentation_id: segmentation_id - cidr: cidr - ip_version: ip_version - name: name - description: description - created_at: created_at - updated_at: updated_at - gateway: share_network_gateway - mtu: share_network_mtu Response example ---------------- .. literalinclude:: samples/share-network-add-security-service-response.json :language: javascript Remove security service from share network ========================================== .. rest_method:: POST /v2/{project_id}/share-networks/{share_network_id}/action Removes a security service from a share network. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_network_id: share_network_id_path - security_service_id: share_network_security_service_id Request example --------------- .. literalinclude:: samples/share-network-remove-security-service-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_network_id - project_id: project_id - neutron_net_id: neutron_net_id - neutron_subnet_id: neutron_subnet_id - network_type: network_type - segmentation_id: segmentation_id - cidr: cidr - ip_version: ip_version - name: name - description: description - created_at: created_at - updated_at: updated_at - gateway: share_network_gateway - mtu: share_network_mtu Response example ---------------- .. literalinclude:: samples/share-network-remove-security-service-response.json :language: javascript Update share network ==================== .. 
rest_method:: PUT /v2/{project_id}/share-networks/{share_network_id} Updates a share network. Note that if the share network is used by any share server, you can update only the ``name`` and ``description`` attributes. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 422 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_network_id: share_network_id_path - name: name_request - description: description_request - neutron_net_id: neutron_net_id_request - neutron_subnet_id: neutron_subnet_id_request Request example --------------- .. literalinclude:: samples/share-network-update-request.json :language: javascript Response parameters ------------------- .. rest_parameters:: parameters.yaml - id: share_network_id - project_id: project_id - neutron_net_id: neutron_net_id - neutron_subnet_id: neutron_subnet_id - network_type: network_type - segmentation_id: segmentation_id - cidr: cidr - ip_version: ip_version - name: name - description: description - created_at: created_at - updated_at: updated_at - gateway: share_network_gateway - mtu: share_network_mtu Response example ---------------- .. literalinclude:: samples/share-network-update-response.json :language: javascript Delete share network ==================== .. rest_method:: DELETE /v2/{project_id}/share-networks/{share_network_id} Deletes a share network. Preconditions - You cannot delete a share network if it has shares created/exported on it. - You cannot delete a share network if it has share groups created on it. Response codes -------------- .. rest_status_code:: success status.yaml - 202 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_network_id: share_network_id_path manila-10.0.0/api-ref/source/extensions.inc0000664000175000017500000000146513656750227020613 0ustar zuulzuul00000000000000.. -*- rst -*- ============== API extensions ============== Lists available Shared File Systems API extensions. List extensions =============== .. rest_method:: GET /v2/{project_id}/extensions Lists all extensions. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - name: extension_name - links: extension_links - description: extension_description - alias: extension_alias - updated: updated_at_extensions Response example ---------------- .. literalinclude:: samples/extensions-list-response.json :language: javascript manila-10.0.0/api-ref/source/index.rst0000664000175000017500000000237013656750227017556 0ustar zuulzuul00000000000000:tocdepth: 2 ======================= Shared File Systems API ======================= .. rest_expand_all:: .. include:: versions.inc .. include:: extensions.inc .. include:: limits.inc .. include:: shares.inc .. include:: share-export-locations.inc .. include:: share-metadata.inc .. include:: share-actions.inc .. include:: snapshots.inc .. include:: snapshot-instances.inc .. include:: share-networks.inc .. include:: security-services.inc .. include:: share-servers.inc .. include:: share-instances.inc .. include:: share-instance-export-locations.inc .. include:: share-types.inc .. include:: scheduler-stats.inc .. include:: services.inc .. 
include:: availability-zones.inc .. include:: os-share-manage.inc .. include:: quota-sets.inc .. include:: quota-classes.inc .. include:: user-messages.inc .. include:: share-access-rules.inc .. include:: share-access-rule-metadata.inc ====================================== Shared File Systems API (EXPERIMENTAL) ====================================== .. rest_expand_all:: .. include:: experimental.inc .. include:: share-migration.inc .. include:: share-replicas.inc .. include:: share-replica-export-locations.inc .. include:: share-groups.inc .. include:: share-group-types.inc .. include:: share-group-snapshots.inc manila-10.0.0/api-ref/source/versions.inc0000664000175000017500000000607413656750227020265 0ustar zuulzuul00000000000000.. -*- rst -*- ============ API versions ============ Lists information for all Shared File Systems API versions. Concepts ======== In order to bring new features to users over time, the Shared File Systems API supports versioning. There are two kinds of versions in the Shared File Systems API: - ''major versions'', which have dedicated URLs - ''microversions'', which can be requested through the use of the ``X-OpenStack-Manila-API-Version`` header Read more about microversion guidelines that the service adheres to `here `_ See `A history of the Shared File Systems API versions `_ to view the evolution of the API and pick an appropriate version for API requests. List All Major Versions ======================= .. rest_method:: GET / This fetches all the information about all known major API versions in the deployment. Links to more specific information will be provided for each API version, as well as information about supported min and max microversions. Response codes -------------- .. rest_status_code:: success status.yaml - 300 Response -------- .. rest_parameters:: parameters.yaml - versions: versions - id: version_id - updated: version_updated - status: version_status - links: links - media-types: version_media_types - version: version_max - min_version: version_min .. note:: The ``updated`` and ``media-types`` parameters in the response are vestigial and provide no useful information. They will probably be deprecated and removed in the future. Response Example ---------------- This demonstrates the expected response from a bleeding edge server that supports up to the current microversion. When querying OpenStack environments you will typically find the current microversion on the v2.1 API is lower than listed below. .. literalinclude:: samples/versions-index-response.json :language: javascript Show Details of Specific API Version ==================================== .. rest_method:: GET /{api_version}/ This gets the details of a specific API at it's root. Nearly all this information exists at the API root, so this is mostly a redundant operation. Response codes -------------- .. rest_status_code:: success status.yaml - 200 Request ------- .. rest_parameters:: parameters.yaml - api_version: api_version Response -------- .. rest_parameters:: parameters.yaml - version: version - id: version_id - status: version_status - links: links - version: version_max - updated: version_updated - min_version: version_min - media-types: version_media_types .. note:: The ``updated`` and ``media-types`` parameters in the response are vestigial and provide no useful information. They will probably be deprecated and removed in the future. Response Example ---------------- This is an example of a ``GET /v2/`` on a relatively current server. .. 
literalinclude:: samples/versions-get-version-response.json :language: javascript manila-10.0.0/api-ref/source/share-access-rules.inc0000664000175000017500000000437513656750227022110 0ustar zuulzuul00000000000000.. -*- rst -*- .. _get-access-rules-after-2-45: ==================================== Share access rules (since API v2.45) ==================================== Retrieve details about access rules Describe share access rule ========================== .. rest_method:: GET /v2/{project_id}/share-access-rules/{access_id} .. versionadded:: 2.45 Retrieve details about a specified access rule. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - access_id: access_id_path Response parameters ------------------- .. rest_parameters:: parameters.yaml - share_id: access_share_id - created_at: created_at - updated_at: updated_at - access_type: access_type - access_to: access_to - access_key: access_key - state: state - access_level: access_level - id: access_rule_id - access_metadata: access_metadata Response example ---------------- .. literalinclude:: samples/share-access-rules-show-response.json :language: javascript List share access rules ======================= .. rest_method:: GET /v2/{project_id}/share-access-rules?share_id={share-id} .. versionadded:: 2.45 Lists the share access rules on a share. .. note:: This API replaces the older :ref:`List share access rules ` API from version 2.45. Response codes -------------- .. rest_status_code:: success status.yaml - 200 .. rest_status_code:: error status.yaml - 400 - 401 - 403 - 404 - 409 Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id_path - share_id: share_id_access_rules_query - metadata: metadata Response parameters ------------------- .. rest_parameters:: parameters.yaml - metadata: access_metadata - access_type: access_type - access_key: access_key - access_to: access_to - access_level: access_level - state: state - access_list: access_list - id: access_rule_id - created_at: created_at - updated_at: updated_at Response example ---------------- .. literalinclude:: samples/share-access-rules-list-response.json :language: javascript manila-10.0.0/PKG-INFO0000664000175000017500000000507613656750363014176 0ustar zuulzuul00000000000000Metadata-Version: 1.2 Name: manila Version: 10.0.0 Summary: Shared Storage for OpenStack Home-page: https://docs.openstack.org/manila/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/manila.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on ====== MANILA ====== You have come across an OpenStack shared file system service. It has identified itself as "Manila." It was abstracted from the Cinder project. 
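The share access rules API documented earlier in this archive (``GET /v2/{project_id}/share-access-rules?share_id={share-id}``, available since API version 2.45) can be exercised with a sketch like the one below. The microversion is requested through the ``X-OpenStack-Manila-API-Version`` header described in the versions section; the endpoint, token, project ID and share ID are placeholders, and the ``access_list`` response key follows the list-access-rules sample included with these documents.

.. code-block:: python

   # Minimal sketch: list the access rules of a share (API microversion >= 2.45).
   # Endpoint, token and share ID are placeholders for illustration.
   import requests

   endpoint = "http://192.168.98.191:8786/v2"       # assumed manila endpoint
   project_id = "e23850eeb91d4fa3866af634223e454c"  # assumed project ID
   share_id = "406ea93b-32e9-4907-a117-148b3945749f"
   headers = {
       "X-Auth-Token": "replace-with-a-keystone-token",
       "X-OpenStack-Manila-API-Version": "2.45",
   }

   resp = requests.get(f"{endpoint}/{project_id}/share-access-rules",
                       params={"share_id": share_id}, headers=headers)
   resp.raise_for_status()
   for rule in resp.json()["access_list"]:
       print(rule["id"], rule["access_type"], rule["access_to"], rule["state"])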
* Wiki: https://wiki.openstack.org/wiki/Manila * Developer docs: https://docs.openstack.org/manila/latest/ Getting Started --------------- If you'd like to run from the master branch, you can clone the git repo: git clone https://opendev.org/openstack/manila For developer information please see `HACKING.rst `_ You can raise bugs here https://bugs.launchpad.net/manila Python client ------------- https://opendev.org/openstack/python-manilaclient * Documentation for the project can be found at: https://docs.openstack.org/manila/latest/ * Release notes for the project can be found at: https://docs.openstack.org/releasenotes/manila/ * Source for the project: https://opendev.org/openstack/manila * Bugs: https://bugs.launchpad.net/manila * Blueprints: https://blueprints.launchpad.net/manila * Design specifications are tracked at: https://specs.openstack.org/openstack/manila-specs/ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Requires-Python: >=3.6 manila-10.0.0/bindep.txt0000664000175000017500000000230013656750227015065 0ustar zuulzuul00000000000000# This is a cross-platform list tracking distribution packages needed for # install and tests; # see https://docs.openstack.org/infra/bindep/ for additional information. build-essential [platform:dpkg test] gcc [platform:rpm test] # gettext and graphviz are needed by doc builds only. For transition, # have them in both doc and test. # TODO(jaegerandi): Remove test once infra scripts are updated. gettext [!platform:suse doc test] gettext-runtime [platform:suse doc test] graphviz [doc test] libffi-dev [platform:dpkg] libffi-devel [platform:redhat] libffi48-devel [platform:suse] virtual/libffi [platform:gentoo] libssl-dev [platform:dpkg] openssl-devel [platform:rpm !platform:suse] libopenssl-devel [platform:suse !platform:rpm] locales [platform:debian] mariadb [platform:rpm] mariadb-server [platform:redhat] mariadb-devel [platform:redhat] libmysqlclient-dev [platform:dpkg] libmysqlclient-devel [platform:suse] libpq-dev [platform:dpkg] mysql-client [platform:dpkg] mysql-server [platform:dpkg] postgresql postgresql-client [platform:dpkg] postgresql-devel [platform:rpm] postgresql-server [platform:rpm] libxml2-dev [platform:dpkg test] libxslt-devel [platform:rpm test] libxslt1-dev [platform:dpkg test] manila-10.0.0/test-requirements.txt0000664000175000017500000000164013656750227017332 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. # hacking should be first hacking>=3.0,<3.1.0 # Apache-2.0 bashate>=0.5.1 # Apache-2.0 coverage!=4.4,>=4.0 # Apache-2.0 ddt>=1.0.1 # MIT fixtures>=3.0.0 # Apache-2.0/BSD iso8601>=0.1.11 # MIT oslotest>=3.2.0 # Apache-2.0 # Do not remove 'PyMySQL' and 'psycopg2-binary' dependencies. They are used # by oslo_db lib for running MySQL and PostgreSQL DB migration tests. 
# See https://docs.openstack.org/oslo.db/latest/contributor/index.html#how-to-run-unit-tests PyMySQL>=0.7.6 # MIT License psycopg2-binary>=2.8.5 # LGPL/ZPL requests-mock>=1.2.0 # Apache-2.0 os-api-ref>=1.4.0 # Apache-2.0 os-testr>=1.0.0 # Apache-2.0 testresources>=2.0.0 # Apache-2.0/BSD testscenarios>=0.4 # Apache-2.0/BSD testtools>=2.2.0 # MIT manila-10.0.0/lower-constraints.txt0000664000175000017500000000462413656750227017334 0ustar zuulzuul00000000000000alabaster==0.7.10 alembic==0.8.10 amqp==2.2.2 appdirs==1.4.3 asn1crypto==0.24.0 Babel==2.3.4 bashate==0.5.1 bcrypt==3.1.4 cachetools==2.0.1 certifi==2018.1.18 cffi==1.11.5 chardet==3.0.4 cliff==2.11.0 cmd2==0.8.1 contextlib2==0.5.5 coverage==4.0 cryptography==2.1.4 ddt==1.0.1 debtcollector==1.19.0 decorator==4.2.1 deprecation==2.0 docutils==0.14 dogpile.cache==0.6.5 dulwich==0.19.0 enum-compat==0.0.2 eventlet==0.22.0 extras==1.0.0 fasteners==0.14.1 fixtures==3.0.0 future==0.16.0 futurist==1.6.0 greenlet==0.4.10 idna==2.6 imagesize==1.0.0 ipaddress==1.0.17 iso8601==0.1.11 Jinja2==2.10 jmespath==0.9.3 jsonpatch==1.21 jsonpointer==2.0 keystoneauth1==3.4.0 keystonemiddleware==4.17.0 kombu==4.1.0 linecache2==1.0.0 lxml==3.4.1 Mako==1.0.7 MarkupSafe==1.0 mccabe==0.2.1 monotonic==1.4 mox3==0.25.0 msgpack==0.5.6 munch==2.2.0 netaddr==0.7.18 netifaces==0.10.6 openstackdocstheme==1.31.2 openstacksdk==0.12.0 os-api-ref==1.4.0 os-client-config==1.29.0 os-service-types==1.2.0 os-testr==1.0.0 osc-lib==1.10.0 oslo.cache==1.29.0 oslo.concurrency==3.26.0 oslo.config==5.2.0 oslo.context==2.19.2 oslo.db==4.27.0 oslo.i18n==3.15.3 oslo.log==3.36.0 oslo.messaging==6.4.0 oslo.middleware==3.31.0 oslo.policy==1.30.0 oslo.reports==1.18.0 oslo.rootwrap==5.8.0 oslo.serialization==2.18.0 oslo.service==2.1.1 oslo.upgradecheck==0.1.0 oslo.utils==3.40.2 oslotest==3.2.0 packaging==17.1 paramiko==2.0.0 Paste==2.0.2 PasteDeploy==1.5.0 pbr==2.0.0 pika==0.10.0 pika-pool==0.1.3 prettytable==0.7.2 psutil==5.4.3 psycopg2-binary==2.8.5 pyasn1==0.4.2 pycadf==2.7.0 pycparser==2.18 Pygments==2.2.0 pyinotify==0.9.6 PyMySQL==0.7.6 PyNaCl==1.2.1 pyparsing==2.1.0 pyperclip==1.6.0 python-cinderclient==3.3.0 python-dateutil==2.7.0 python-editor==1.0.3 python-keystoneclient==3.15.0 python-mimeparse==1.6.0 python-neutronclient==6.7.0 python-novaclient==9.1.0 python-subunit==1.2.0 pytz==2018.3 PyYAML==3.12 repoze.lru==0.7 requests==2.14.2 requests-mock==1.2.0 requestsexceptions==1.4.0 retrying==1.2.3 rfc3986==1.1.0 Routes==2.3.1 simplejson==3.13.2 six==1.10.0 snowballstemmer==1.2.1 Sphinx==1.6.5 sphinxcontrib-websupport==1.0.1 SQLAlchemy==1.0.10 sqlalchemy-migrate==0.11.0 sqlparse==0.2.4 statsd==3.2.2 stestr==2.0.0 stevedore==1.20.0 Tempita==0.5.2 tenacity==4.9.0 testrepository==0.0.20 testresources==2.0.0 testscenarios==0.4 testtools==2.2.0 tooz==1.58.0 traceback2==1.4.0 unittest2==1.1.0 urllib3==1.22 vine==1.1.4 voluptuous==0.11.1 WebOb==1.7.1 wrapt==1.10.11 manila-10.0.0/AUTHORS0000664000175000017500000003354413656750362014151 0ustar zuulzuul00000000000000119Vik Abhilash Divakaran Accela Zhao Akshai Parthasarathy Aleks Chirko Alex Meade Alex O'Rourke Alex O'Rourke Alexey Khodos Alexey Ovchinnikov Alfredo Moralejo Alin Balutoiu Alyson Rosa Amit Oren Andrea Frittoli (andreaf) Andrea Frittoli Andrea Ma Andreas Jaeger Andreas Jaeger Andrei Ta Andrei V. 
Ostapenko Andrew Kerr Andrey Kurilin Anh Tran Ankit Agrawal Anthony Lee Arjun Kashyap Arne Wiebalck Arnon Yaari Atsushi SAKAI Ben Swartzlander Ben Swartzlander Ben Swartzlander Bertrand Lallau Bill Owen Bin Zhou Bob Callaway Bob-OpenStack <295988511@qq.com> Béla Vancsics Cao Xuan Hoang Cedric Zhuang Chandan Kumar ChangBo Guo(gcb) Chaozhe.Chen Che, Roger Chris Yang Christian Berendt Chuan Miao Chuck Short Ciara Stacke Clinton Knight Colleen Murphy Corey Bryant Csaba Henk Dai Dang Van Dan Sneddon Daniel Gonzalez Daniel Mellado Daniel Russell Daniel Stelter-Gliese Danny Al-Gaaf Davanum Srinivas Dave Hill David Disseldorp David Sariel Deepak C Shetty Deepika Gupta Deliang Fan Diem Tran Dirk Mueller Dmitriy Rabotyagov Dmitry Bogun Doug Hellmann Douglas Viroel Duan Jiong Dustin Schoenbrun Emilien Macchi Eric Harney Flavio Percoco Gaurang Tapase Ghanshyam Mann Goutham Pacha Ravi Goutham Pacha Ravi Graham Hayes Gábor Antal Ha Van Tu Harshada Mangesh Kakad Helen Walsh Hiroyuki Eguchi Hongbin Lu Ian Wienand Igor Malinovskiy Iswarya_Vakati James E. Blair James Page Jan Provaznik Jan Vondra Javier Pena Jay Mehta Jay Xu Jeremy Liu Jeremy Stanley Jiao Pengju Joe Gordon Johannes Kulik John Spray John Spray Jordan Pittier Jose Castro Leon Jose Falavinha Jose Porrua Juan Antonio Osorio Robles Julia Varlamova Kamil Rykowski Ken'ichi Ohmichi Kuirong.Chen Li Wei Li, Chen Lin Yang LiuNanke Liyankun Longgeek Lucian Petrut Lucio Seki Lucky samadhiya Luigi Toscano Luis Pabón Lukas Bezdicka Luong Anh Tuan M V P Nitesh Marc Koderer Marc Solanas Tarre Mark McLoughlin Mark Sturdevant Mark Sturdevant Martin Kletzander Marty Turner Masaki Matsushita Matt Riedemann Maurice Escher Maurice Schreiber Maysa Macedo Michael Krotscheck Michael Still Miriam Yumi Mohammed Naser Monty Taylor Naresh Kumar Gunjalli Ngo Quoc Cuong Nguyen Hai Truong Nguyen Hung Phuong Nguyen Phuong An Nguyen Van Trung Nicolas Trangez Nilesh Bhosale Nishant Kumar OTSUKA, Yuanying Ondřej Nový OpenStack Release Bot Pan Pengju Jiao Pete Zaitcev Peter Wang Petr Kuběna Pierre Riteau Ponomaryov Valeriy Pony Chou Prudhvi Rao Shedimbi Quique Llorente Rafael Rivero Raissa Sarmento Ralf Rantzau Ram Raja Ramana Raja Ramy Asselin Ravichandran Nudurumati Rich Hagarty Rishabh Dave Rob Esker Rodrigo Barbieri Rodrigo Barbieri Ronald Bradford Rui Yuan Dou Rushil Chugh Ryan Hefner Ryan Liang Ryan Liang Saju.Madhavan Sam Wan Sascha Peilicke Sean McGinnis Sean McGinnis Sean McGinnis Sebastian Lohff Sergey Vilgelm Shaohui Wang Shaun Edwards Shaun Edwards Shuquan Huang Silvan Kaiser SolKuczala Stefan Nica Stephen Gordon Steve Kowalik Sumit Kumar Sun Jun Surya Ghatty Swapnil Kulkarni (coolsvap) Takashi NATSUME Thierry Carrez Thomas Bechtold Thomas Goirand Tiago Pasqualini Tin Lam Tina Tina Tang Tom Barron Tom Patzig TommyLike Valeriy Valeriy Ponomaryov Vasyl Saienko Victor Sergeyev Victoria Martinez de la Cruz Vijay Bellur Vincent Untz Vitaliy Levitksi Vivek Soni Vladimir Vechkanov Vu Cong Tuan Woohyung Han Xiaoyang Zhang XieYingYun Xing Yang Yang Wei Yatin Kumbhare Yogesh Yong Huang Your Name YuYang Yulia Portnova Yulia Portnova Yusuke Hayashi Yuval Brik Zhao Lei ZhiQiang Fan Zhiteng Huang ZhongShengping Zhongyue Luo andrebeltrami arthurnsantos binean bswartz caowei caoyuan carloss chao liu chen-li chenaidong1 chengebj5238 chenghuiyu chenxing daiki kato danielarthurt darkwsh digvijay2016 dingd dongdongpei dviroel ericxiett fpxie gaofei gecong1973 gengchc2 ghanshyam guotao.bj haixin haobing1 houming-wang howardlee hparekh huayue hulina huyang inspur-storage janonymous jason 
bishop jeckxie ji-xuepeng jinxingfang junboli kavithahr kedy kutner leiyashuai li,chen lijunbo lijunjie liucheng liujiong liuke2 liumengwen liushi liushuobj liuyamin liyanhang luqitao maqi marcusvrn mark.sturdevant mark.sturdevant mayurindalkar melissaml nidhimittalhada npraveen35 pangliye pawnesh.kumar pengdake <19921207pq@gmail.com> pengyuesheng peter_wang rajat29 scottda shangxiaobj shaoxj sharat.sharma shubhendu silvacarloss smcginnis snpd sonu.kumar stack stack sunjia sunjiazz tclayton ting.wang tpsilva ubu venkatamahesh vponomaryov wang yong wangdequn wangqi wangqiangbj weiting-chen whhan91 whoami-rajat wlhc <1216083447@qq.com> wudong xiaozhuangqing xing-yang xinyanzhang <236592348@qq.com> xuanyandong xulei xuleibj xurong00037997 yanghuichan yangweiwei yangyapeng yanjun.fu yfzhao yfzhao yogesh yuhui_inspur zengyingzhe zhang.lei zhangbailin zhangdaolong zhangdebo zhangdebo1987 zhangguoqing zhangqing zhangqing zhangshj zhangxuanyuan zhangyangyang zhangyanxian zhangyanxian zhaohua zhengwei6082 zhiguo.li zhongjun zhongjun2 zhongjun2 zhufl zzxwill manila-10.0.0/babel.cfg0000664000175000017500000000002113656750227014607 0ustar zuulzuul00000000000000[python: **.py] manila-10.0.0/CONTRIBUTING.rst0000664000175000017500000000125213656750227015531 0ustar zuulzuul00000000000000The source repository for this project can be found at: https://opendev.org/openstack/manila This repository is mirrored to GitHub at: https://github.com/openstack/manila Pull requests submitted through GitHub are not monitored. To start contributing to OpenStack, follow the steps in the contribution guide to set up and use Gerrit: https://docs.openstack.org/contributors/code-and-documentation/quick-start.html Bugs should be filed on Launchpad: https://bugs.launchpad.net/manila For more specific information about contributing to this repository, see the Manila contributor guide: https://docs.openstack.org/manila/latest/contributor/contributing.html manila-10.0.0/tools/0000775000175000017500000000000013656750363014231 5ustar zuulzuul00000000000000manila-10.0.0/tools/install_venv_common.py0000664000175000017500000001350713656750227020664 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Provides methods needed by installation script for OpenStack development virtual environments. Since this script is used to bootstrap a virtualenv from the system's Python environment, it should be kept strictly compatible with Python 2.6. 
Synced in from openstack-common """ from __future__ import print_function import optparse import os import subprocess import sys class InstallVenv(object): def __init__(self, root, venv, requirements, test_requirements, py_version, project): self.root = root self.venv = venv self.requirements = requirements self.test_requirements = test_requirements self.py_version = py_version self.project = project def die(self, message, *args): print(message % args, file=sys.stderr) sys.exit(1) def check_python_version(self): if sys.version_info < (2, 6): self.die("Need Python Version >= 2.6") def run_command_with_code(self, cmd, redirect_output=True, check_exit_code=True): """Runs a command in an out-of-process shell. Returns the output of that command. Working directory is self.root. """ if redirect_output: stdout = subprocess.PIPE else: stdout = None proc = subprocess.Popen(cmd, cwd=self.root, stdout=stdout) output = proc.communicate()[0] if check_exit_code and proc.returncode != 0: self.die('Command "%s" failed.\n%s', ' '.join(cmd), output) return (output, proc.returncode) def run_command(self, cmd, redirect_output=True, check_exit_code=True): return self.run_command_with_code(cmd, redirect_output, check_exit_code)[0] def get_distro(self): if (os.path.exists('/etc/fedora-release') or os.path.exists('/etc/redhat-release')): return Fedora( self.root, self.venv, self.requirements, self.test_requirements, self.py_version, self.project) else: return Distro( self.root, self.venv, self.requirements, self.test_requirements, self.py_version, self.project) def check_dependencies(self): self.get_distro().install_virtualenv() def create_virtualenv(self, no_site_packages=True): """Creates the virtual environment and installs PIP. Creates the virtual environment and installs PIP only into the virtual environment. """ if not os.path.isdir(self.venv): print('Creating venv...', end=' ') if no_site_packages: self.run_command(['virtualenv', '-q', '--no-site-packages', self.venv]) else: self.run_command(['virtualenv', '-q', self.venv]) print('done.') else: print("venv already exists...") pass def pip_install(self, *args): self.run_command(['tools/with_venv.sh', 'pip', 'install', '--upgrade'] + list(args), redirect_output=False) def install_dependencies(self): print('Installing dependencies with pip (this can take a while)...') # First things first, make sure our venv has the latest pip and # setuptools and pbr self.pip_install('pip>=1.4') self.pip_install('setuptools') self.pip_install('pbr') self.pip_install('-r', self.requirements, '-r', self.test_requirements) def parse_args(self, argv): """Parses command-line arguments.""" parser = optparse.OptionParser() parser.add_option('-n', '--no-site-packages', action='store_true', help="Do not inherit packages from global Python " "install.") return parser.parse_args(argv[1:])[0] class Distro(InstallVenv): def check_cmd(self, cmd): return bool(self.run_command(['which', cmd], check_exit_code=False).strip()) def install_virtualenv(self): if self.check_cmd('virtualenv'): return if self.check_cmd('easy_install'): print('Installing virtualenv via easy_install...', end=' ') if self.run_command(['easy_install', 'virtualenv']): print('Succeeded') return else: print('Failed') self.die('ERROR: virtualenv not found.\n\n%s development' ' requires virtualenv, please install it using your' ' favorite package management tool' % self.project) class Fedora(Distro): """This covers all Fedora-based distributions. 
Includes: Fedora, RHEL, CentOS, Scientific Linux """ def check_pkg(self, pkg): return self.run_command_with_code(['rpm', '-q', pkg], check_exit_code=False)[1] == 0 def install_virtualenv(self): if self.check_cmd('virtualenv'): return if not self.check_pkg('python-virtualenv'): self.die("Please install 'python-virtualenv'.") super(Fedora, self).install_virtualenv() manila-10.0.0/tools/enable-pre-commit-hook.sh0000775000175000017500000000230713656750227021027 0ustar zuulzuul00000000000000#!/bin/sh # Copyright 2011 OpenStack LLC # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. PRE_COMMIT_SCRIPT=.git/hooks/pre-commit make_hook() { echo "exec tox -e fast8" >> $PRE_COMMIT_SCRIPT chmod +x $PRE_COMMIT_SCRIPT if [ -w $PRE_COMMIT_SCRIPT -a -x $PRE_COMMIT_SCRIPT ]; then echo "pre-commit hook was created successfully" else echo "unable to create pre-commit hook" fi } # NOTE(jk0): Make sure we are in manila's root directory before adding the hook. if [ ! -d ".git" ]; then echo "unable to find .git; moving up a directory" cd .. if [ -d ".git" ]; then make_hook else echo "still unable to find .git; hook not created" fi else make_hook fi manila-10.0.0/tools/check_exec.py0000775000175000017500000000225513656750227016672 0ustar zuulzuul00000000000000#!/usr/bin/python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Print a list and return with error if any executable files are found. # Compatible with both python 2 and 3. import os.path import stat import sys if len(sys.argv) < 2: print("Usage: %s " % sys.argv[0]) sys.exit(1) directory = sys.argv[1] executable = [] for root, mydir, myfile in os.walk(directory): for f in myfile: path = os.path.join(root, f) mode = os.lstat(path).st_mode if stat.S_IXUSR & mode: executable.append(path) if executable: print("Executable files found:") for f in executable: print(f) sys.exit(1) manila-10.0.0/tools/validate-json-files.py0000664000175000017500000000257713656750227020455 0ustar zuulzuul00000000000000#!/usr/bin/env python # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import json import os import sys if len(sys.argv) < 2: print("Usage: %s " % sys.argv[0]) sys.exit(1) directory = sys.argv[1] invalid_json_files = [] print("Validating JSON files in directory: ", directory) for dirpath, dirname, files in os.walk(directory): json_files = [f for f in files if f.endswith('.json')] for json_file in json_files: path = os.path.join(dirpath, json_file) with open(path) as json_file_content: try: content = json.load(json_file_content) except ValueError as e: print("File %s has invalid JSON: %s" % (path, e)) invalid_json_files.append(path) if invalid_json_files: print("%d JSON files are invalid." % len(invalid_json_files)) sys.exit(1) else: print("All JSON files are valid.") manila-10.0.0/tools/cover.sh0000775000175000017500000000465013656750227015712 0ustar zuulzuul00000000000000#!/bin/bash # # Copyright 2015: Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. ALLOWED_EXTRA_MISSING=4 TESTR_ARGS="$*" show_diff () { head -1 $1 diff -U 0 $1 $2 | sed 1,2d } # Stash uncommitted changes, checkout master and save coverage report uncommitted=$(git status --porcelain | grep -v "^??") [[ -n $uncommitted ]] && git stash > /dev/null git checkout HEAD^ baseline_report=$(mktemp -t manila_coverageXXXXXXX) find . -type f -name "*.py[c|o]" -delete && stestr run "$TESTR_ARGS" && coverage combine && coverage html -d cover coverage report --ignore-errors > $baseline_report baseline_missing=$(awk 'END { print $3 }' $baseline_report) # Checkout back and unstash uncommitted changes (if any) git checkout - [[ -n $uncommitted ]] && git stash pop > /dev/null # Generate and save coverage report current_report=$(mktemp -t manila_coverageXXXXXXX) find . -type f -name "*.py[c|o]" -delete && stestr run "$TESTR_ARGS" && coverage combine && coverage html -d cover coverage report --ignore-errors > $current_report current_missing=$(awk 'END { print $3 }' $current_report) # Show coverage details allowed_missing=$((baseline_missing+ALLOWED_EXTRA_MISSING)) echo "Allowed to introduce missing lines : ${ALLOWED_EXTRA_MISSING}" echo "Missing lines in master : ${baseline_missing}" echo "Missing lines in proposed change : ${current_missing}" if [ $allowed_missing -gt $current_missing ]; then if [ $baseline_missing -lt $current_missing ]; then show_diff $baseline_report $current_report echo "I believe you can cover all your code with 100% coverage!" else echo "Thank you! You are awesome! Keep writing unit tests! :)" fi exit_code=0 else show_diff $baseline_report $current_report echo "Please write more unit tests, we should keep our test coverage :( " exit_code=1 fi rm $baseline_report $current_report exit $exit_code manila-10.0.0/tools/coding-checks.sh0000775000175000017500000000353313656750227017274 0ustar zuulzuul00000000000000#!/bin/bash set -eu usage() { echo "Usage: $0 [OPTION]..." echo "Run Manila's coding check(s)" echo "" echo " -Y, --pylint [] Run pylint check on the entire manila module or just files changed in basecommit (e.g. 
HEAD~1)" echo " -h, --help Print this usage message" echo exit 0 } process_options() { i=1 while [ $i -le $# ]; do eval opt=\$$i case $opt in -h|--help) usage;; -Y|--pylint) pylint=1;; *) scriptargs="$scriptargs $opt" esac i=$((i+1)) done } run_pylint() { local target="${scriptargs:-HEAD~1}" CODE_OKAY=0 if [[ "$target" = *"all"* ]]; then files=$(find manila/ -type f -name "*.py" -and ! -path "manila/tests*") test_files=$(find manila/tests/ -type f -name "*.py") else files=$(git diff --name-only --diff-filter=ACMRU HEAD~1 ':!manila/tests/*' '*.py') test_files=$(git diff --name-only --diff-filter=ACMRU HEAD~1 'manila/tests/*.py') fi if [[ -z "${files}" || -z "${test_files}" ]]; then echo "No python changes in this commit, pylint check not required." exit 0 fi if [[ -n "${files}" ]]; then echo "Running pylint against manila code modules:" printf "\t%s\n" "${files[@]}" pylint --rcfile=.pylintrc --output-format=colorized ${files} \ -E -j 0 || CODE_OKAY=1 fi if [[ -n "${test_files}" ]]; then echo "Running pylint against manila test modules:" printf "\t%s\n" "${test_files[@]}" pylint --rcfile=.pylintrc --output-format=colorized ${test_files} \ -E -d "no-member,assignment-from-no-return,assignment-from-none" \ -j 0 || CODE_OKAY=1 fi exit $CODE_OKAY } scriptargs= pylint=1 process_options $@ if [ $pylint -eq 1 ]; then run_pylint exit 0 fi manila-10.0.0/tools/install_venv.py0000664000175000017500000000454213656750227017313 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Copyright 2010 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import sys import install_venv_common as install_venv # noqa def print_help(venv, root): help = """ OpenStack development environment setup is complete. OpenStack development uses virtualenv to track and manage Python dependencies while in development and testing. To activate the OpenStack virtualenv for the extent of your current shell session you can run: $ source %s/bin/activate Or, if you prefer, you can run commands in the virtualenv on a case by case basis by running: $ %s/tools/with_venv.sh Also, make test will automatically use the virtualenv. 
""" print(help % (venv, root)) def main(argv): root = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) if os.environ.get('TOOLS_PATH'): root = os.environ['TOOLS_PATH'] venv = os.path.join(root, '.venv') if os.environ.get('VENV'): venv = os.environ['VENV'] pip_requires = os.path.join(root, 'requirements.txt') test_requires = os.path.join(root, 'test-requirements.txt') py_version = "python%s.%s" % (sys.version_info[0], sys.version_info[1]) project = 'OpenStack' install = install_venv.InstallVenv(root, venv, pip_requires, test_requires, py_version, project) options = install.parse_args(argv) install.check_python_version() install.check_dependencies() install.create_virtualenv(no_site_packages=options.no_site_packages) install.install_dependencies() print_help(venv, root) if __name__ == '__main__': main(sys.argv) manila-10.0.0/tools/check_logging.sh0000775000175000017500000000065513656750227017360 0ustar zuulzuul00000000000000#!/bin/bash tree=$1 tmpfile=$(mktemp) find $tree -name '*.py' \ | xargs grep -l 'import log' \ | xargs grep -l '^LOG =' \ | xargs grep -c 'LOG' \ | grep ':1$' \ | awk -F ':' '{print $1}' > $tmpfile count=$(wc -l < $tmpfile) if [[ count -eq 0 ]]; then rm $tmpfile exit 0 fi echo 'Found files with unused LOG variable (see https://review.opendev.org/#/c/301054):' cat $tmpfile rm $tmpfile exit 1 manila-10.0.0/tools/fast8.sh0000775000175000017500000000045313656750227015616 0ustar zuulzuul00000000000000#!/bin/bash cd $(dirname "$0")/.. CHANGED=$(git diff --name-only HEAD~1 | tr '\n' ' ') # Skip files that don't exist # (have been git rm'd) CHECK="" for FILE in $CHANGED; do if [ -f "$FILE" ]; then CHECK="$CHECK $FILE" fi done diff -u --from-file /dev/null $CHECK | flake8 --diff manila-10.0.0/tools/with_venv.sh0000775000175000017500000000030613656750227016577 0ustar zuulzuul00000000000000#!/bin/bash TOOLS_PATH=${TOOLS_PATH:-$(dirname $0)/../} VENV_PATH=${VENV_PATH:-${TOOLS_PATH}} VENV_DIR=${VENV_DIR:-/.venv} VENV=${VENV:-${VENV_PATH}/${VENV_DIR}} source ${VENV}/bin/activate && "$@" manila-10.0.0/tools/test-setup.sh0000775000175000017500000000370713656750227016713 0ustar zuulzuul00000000000000#!/bin/bash -xe # This script will be run by OpenStack CI before unit tests are run, # it sets up the test system as needed. # Developers should setup their test systems in a similar way. # This setup needs to be run as a user that can run sudo. # The root password for the MySQL database; pass it in via # MYSQL_ROOT_PW. DB_ROOT_PW=${MYSQL_ROOT_PW:-insecure_slave} # This user and its password are used by the tests, if you change it, # your tests might fail. DB_USER=openstack_citest DB_PW=openstack_citest sudo -H mysqladmin -u root password $DB_ROOT_PW # It's best practice to remove anonymous users from the database. If # an anonymous user exists, then it matches first for connections and # other connections from that host will not work. sudo -H mysql -u root -p$DB_ROOT_PW -h localhost -e " DELETE FROM mysql.user WHERE User=''; FLUSH PRIVILEGES; GRANT ALL PRIVILEGES ON *.* TO '$DB_USER'@'%' identified by '$DB_PW' WITH GRANT OPTION;" # Now create our database. mysql -u $DB_USER -p$DB_PW -h 127.0.0.1 -e " SET default_storage_engine=MYISAM; DROP DATABASE IF EXISTS openstack_citest; CREATE DATABASE openstack_citest CHARACTER SET utf8;" # Same for PostgreSQL # The root password for the PostgreSQL database; pass it in via # POSTGRES_ROOT_PW. 
DB_ROOT_PW=${POSTGRES_ROOT_PW:-insecure_slave} # Setup user root_roles=$(sudo -H -u postgres psql -t -c " SELECT 'HERE' from pg_roles where rolname='$DB_USER'") if [[ ${root_roles} == *HERE ]];then sudo -H -u postgres psql -c "ALTER ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" else sudo -H -u postgres psql -c "CREATE ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" fi # Store password for tests cat << EOF > $HOME/.pgpass *:*:*:$DB_USER:$DB_PW EOF chmod 0600 $HOME/.pgpass # Now create our database psql -h 127.0.0.1 -U $DB_USER -d template1 -c "DROP DATABASE IF EXISTS openstack_citest" createdb -h 127.0.0.1 -U $DB_USER -l C -T template0 -E utf8 openstack_citest manila-10.0.0/README.rst0000664000175000017500000000257013656750227014563 0ustar zuulzuul00000000000000======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/manila.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on ====== MANILA ====== You have come across an OpenStack shared file system service. It has identified itself as "Manila." It was abstracted from the Cinder project. * Wiki: https://wiki.openstack.org/wiki/Manila * Developer docs: https://docs.openstack.org/manila/latest/ Getting Started --------------- If you'd like to run from the master branch, you can clone the git repo: git clone https://opendev.org/openstack/manila For developer information please see `HACKING.rst `_ You can raise bugs here https://bugs.launchpad.net/manila Python client ------------- https://opendev.org/openstack/python-manilaclient * Documentation for the project can be found at: https://docs.openstack.org/manila/latest/ * Release notes for the project can be found at: https://docs.openstack.org/releasenotes/manila/ * Source for the project: https://opendev.org/openstack/manila * Bugs: https://bugs.launchpad.net/manila * Blueprints: https://blueprints.launchpad.net/manila * Design specifications are tracked at: https://specs.openstack.org/openstack/manila-specs/ manila-10.0.0/.zuul.yaml0000664000175000017500000003311013656750227015027 0ustar zuulzuul00000000000000- project: templates: - publish-openstack-docs-pti - openstack-cover-jobs - openstack-lower-constraints-jobs - openstack-python3-ussuri-jobs - check-requirements - release-notes-jobs-python3 - periodic-stable-jobs check: jobs: - manila-tox-genconfig - manila-tempest-dsvm-mysql-generic: voting: false - manila-tempest-dsvm-postgres-container: voting: false - manila-tempest-dsvm-postgres-zfsonlinux: voting: false - manila-tempest-dsvm-postgres-generic-singlebackend: voting: false - manila-tempest-dsvm-generic-no-share-servers: voting: false - manila-tempest-dsvm-scenario: voting: false - manila-tempest-minimal-dsvm-cephfs-native: voting: false - manila-tempest-minimal-dsvm-cephfs-nfs: voting: false - manila-tempest-dsvm-glusterfs-nfs: voting: false - manila-tempest-dsvm-glusterfs-native: voting: false - manila-tempest-minimal-dsvm-dummy - manila-tempest-minimal-lvm-ipv6-only - manila-grenade: voting: false - manila-rally-no-ss: voting: false - manila-rally-ss: voting: false - openstack-tox-pylint: voting: false timeout: 5400 - openstack-tox-cover: voting: false gate: queue: manila jobs: - manila-tox-genconfig - manila-tempest-minimal-dsvm-dummy - manila-tempest-minimal-lvm-ipv6-only experimental: jobs: - manila-tempest-dsvm-glusterfs-nfs-heketi - manila-tempest-dsvm-glusterfs-native-heketi - manila-tempest-minimal-dsvm-cephfs-native-centos-7 - 
manila-tempest-minimal-dsvm-cephfs-nfs-centos-7 - tripleo-ci-centos-7-scenario004-standalone - job: name: manila-tempest-base parent: legacy-dsvm-base timeout: 7200 irrelevant-files: &tempest-irrelevant-files - ^(test-|)requirements.txt$ - ^.*\.rst$ - ^api-ref/.*$ - ^doc/.*$ - ^manila/hacking/.*$ - ^manila/tests/.*$ - ^releasenotes/.*$ - ^setup.cfg$ - ^tools/.*$ - ^tox.ini$ - job: name: manila-grenade parent: manila-tempest-base run: playbooks/legacy/grenade-dsvm-manila/run.yaml post-run: playbooks/legacy/grenade-dsvm-manila/post.yaml timeout: 10800 required-projects: - openstack/grenade - openstack/devstack-gate - openstack/manila - openstack/python-manilaclient - openstack/manila-tempest-plugin - job: name: manila-tempest-dsvm-container-scenario-custom-image parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-container-scenario-custom-image/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-container-scenario-custom-image/post.yaml required-projects: - openstack/devstack-gate - openstack/manila - openstack/manila-image-elements - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-generic-no-share-servers parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-generic-no-share-servers/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-generic-no-share-servers/post.yaml required-projects: - openstack/devstack-gate - openstack/manila - openstack/manila-image-elements - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-generic-scenario-custom-image parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-generic-scenario-custom-image/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-generic-scenario-custom-image/post.yaml required-projects: - openstack/devstack-gate - openstack/manila - openstack/manila-image-elements - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-glusterfs-native parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-glusterfs-native/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-glusterfs-native/post.yaml required-projects: - openstack/devstack-gate - x/devstack-plugin-glusterfs - openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-glusterfs-native-heketi parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-glusterfs-native-heketi/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-glusterfs-native-heketi/post.yaml required-projects: - openstack/devstack-gate - x/devstack-plugin-glusterfs - openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-glusterfs-nfs parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs/post.yaml required-projects: - openstack/devstack-gate - x/devstack-plugin-glusterfs - openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-glusterfs-nfs-heketi parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs-heketi/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs-heketi/post.yaml required-projects: - openstack/devstack-gate - x/devstack-plugin-glusterfs - 
openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-hdfs parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-hdfs/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-hdfs/post.yaml required-projects: - openstack/devstack-gate - x/devstack-plugin-hdfs - openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-mysql-generic parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-mysql-generic/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-mysql-generic/post.yaml required-projects: - openstack/devstack-gate - openstack/manila - openstack/manila-image-elements - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-postgres-container parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-postgres-container/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-postgres-container/post.yaml required-projects: - openstack/devstack-gate - openstack/manila - openstack/manila-image-elements - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-postgres-generic-singlebackend parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-postgres-generic-singlebackend/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-postgres-generic-singlebackend/post.yaml required-projects: - openstack/devstack-gate - openstack/manila - openstack/manila-image-elements - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-postgres-zfsonlinux parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-postgres-zfsonlinux/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-postgres-zfsonlinux/post.yaml required-projects: - openstack/devstack-gate - openstack/manila - openstack/manila-image-elements - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-dsvm-scenario parent: manila-tempest-base run: playbooks/legacy/manila-tempest-dsvm-scenario/run.yaml post-run: playbooks/legacy/manila-tempest-dsvm-scenario/post.yaml required-projects: - openstack/devstack-gate - openstack/manila - openstack/manila-image-elements - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-minimal-dsvm-cephfs-native-centos-7 parent: manila-tempest-base run: playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native-centos-7/run.yaml post-run: playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native-centos-7/post.yaml nodeset: legacy-centos-7 required-projects: - openstack/devstack-gate - openstack/devstack-plugin-ceph - openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-minimal-dsvm-cephfs-native parent: manila-tempest-base run: playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native/run.yaml post-run: playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native/post.yaml required-projects: - openstack/devstack-gate - openstack/devstack-plugin-ceph - openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-minimal-dsvm-cephfs-nfs-centos-7 parent: manila-tempest-base run: 
playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs-centos-7/run.yaml post-run: playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs-centos-7/post.yaml nodeset: legacy-centos-7 required-projects: - openstack/devstack-gate - openstack/devstack-plugin-ceph - openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-minimal-dsvm-cephfs-nfs parent: manila-tempest-base run: playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs/run.yaml post-run: playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs/post.yaml required-projects: - openstack/devstack-gate - openstack/devstack-plugin-ceph - openstack/manila - openstack/manila-tempest-plugin - openstack/neutron-dynamic-routing - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-minimal-dsvm-dummy parent: manila-tempest-base run: playbooks/legacy/manila-tempest-minimal-dsvm-dummy/run.yaml post-run: playbooks/legacy/manila-tempest-minimal-dsvm-dummy/post.yaml required-projects: - openstack/devstack-gate - openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-minimal-dsvm-lvm parent: manila-tempest-base run: playbooks/legacy/manila-tempest-minimal-dsvm-lvm/run.yaml post-run: playbooks/legacy/manila-tempest-minimal-dsvm-lvm/post.yaml required-projects: - openstack/devstack-gate - openstack/manila - openstack/manila-tempest-plugin - openstack/neutron-dynamic-routing - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-minimal-lvm-ipv6-only parent: manila-tempest-minimal-dsvm-lvm run: playbooks/legacy/manila-tempest-minimal-dsvm-lvm/run-ipv6.yaml required-projects: - openstack/tempest - job: name: manila-tempest-minimal-py35-dsvm-cephfs-native-centos-7 parent: manila-tempest-base run: playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-native-centos-7/run.yaml post-run: playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-native-centos-7/post.yaml nodeset: legacy-centos-7 required-projects: - openstack/devstack-gate - openstack/devstack-plugin-ceph - openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tempest-minimal-py35-dsvm-cephfs-nfs-centos-7 parent: manila-tempest-base run: playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-nfs-centos-7/run.yaml post-run: playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-nfs-centos-7/post.yaml nodeset: legacy-centos-7 required-projects: - openstack/devstack-gate - openstack/devstack-plugin-ceph - openstack/manila - openstack/manila-tempest-plugin - openstack/python-manilaclient - openstack/tempest - job: name: manila-tox-genconfig parent: openstack-tox description: | Run tests for manila project. Uses tox with the ``genconfig`` environment. 
post-run: playbooks/manila-tox-genconfig/post.yaml vars: tox_envlist: genconfig - job: name: manila-rally-no-ss parent: rally-task-manila-no-ss irrelevant-files: *tempest-irrelevant-files vars: rally_task: rally-jobs/rally-manila-no-ss.yaml devstack_plugins: rally-openstack: https://opendev.org/openstack/rally-openstack required-projects: - openstack/rally-openstack - job: name: manila-rally-ss parent: rally-task-manila-ss irrelevant-files: *tempest-irrelevant-files vars: rally_task: rally-jobs/rally-manila.yaml devstack_plugins: rally-openstack: https://opendev.org/openstack/rally-openstack required-projects: - openstack/rally-openstack manila-10.0.0/setup.py0000664000175000017500000000137613656750227014611 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools setuptools.setup( setup_requires=['pbr>=2.0.0'], pbr=True) manila-10.0.0/manila/0000775000175000017500000000000013656750362014331 5ustar zuulzuul00000000000000manila-10.0.0/manila/utils.py0000664000175000017500000006130513656750227016050 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Utilities and helper functions.""" import contextlib import functools import inspect import os import pyclbr import random import re import shutil import sys import tempfile import time from eventlet import pools import logging import netaddr from oslo_concurrency import lockutils from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log from oslo_utils import encodeutils from oslo_utils import importutils from oslo_utils import netutils from oslo_utils import strutils from oslo_utils import timeutils import paramiko import retrying import six from webob import exc from manila.common import constants from manila.db import api as db_api from manila import exception from manila.i18n import _ CONF = cfg.CONF LOG = log.getLogger(__name__) if hasattr('CONF', 'debug') and CONF.debug: logging.getLogger("paramiko").setLevel(logging.DEBUG) _ISO8601_TIME_FORMAT_SUBSECOND = '%Y-%m-%dT%H:%M:%S.%f' _ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S' synchronized = lockutils.synchronized_with_prefix('manila-') def isotime(at=None, subsecond=False): """Stringify time in ISO 8601 format.""" # Python provides a similar instance method for datetime.datetime objects # called isoformat(). The format of the strings generated by isoformat() # have a couple of problems: # 1) The strings generated by isotime are used in tokens and other public # APIs that we can't change without a deprecation period. The strings # generated by isoformat are not the same format, so we can't just # change to it. # 2) The strings generated by isoformat do not include the microseconds if # the value happens to be 0. This will likely show up as random failures # as parsers may be written to always expect microseconds, and it will # parse correctly most of the time. 
if not at: at = timeutils.utcnow() st = at.strftime(_ISO8601_TIME_FORMAT if not subsecond else _ISO8601_TIME_FORMAT_SUBSECOND) tz = at.tzinfo.tzname(None) if at.tzinfo else 'UTC' # Need to handle either iso8601 or python UTC format st += ('Z' if tz in ['UTC', 'UTC+00:00'] else tz) return st def _get_root_helper(): return 'sudo manila-rootwrap %s' % CONF.rootwrap_config def execute(*cmd, **kwargs): """Convenience wrapper around oslo's execute() function.""" if 'run_as_root' in kwargs and 'root_helper' not in kwargs: kwargs['root_helper'] = _get_root_helper() if hasattr('CONF', 'debug') and CONF.debug: kwargs['loglevel'] = logging.DEBUG return processutils.execute(*cmd, **kwargs) class SSHPool(pools.Pool): """A simple eventlet pool to hold ssh connections.""" def __init__(self, ip, port, conn_timeout, login, password=None, privatekey=None, *args, **kwargs): self.ip = ip self.port = port self.login = login self.password = password self.conn_timeout = conn_timeout if conn_timeout else None self.path_to_private_key = privatekey super(SSHPool, self).__init__(*args, **kwargs) def create(self): # pylint: disable=method-hidden ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) look_for_keys = True if self.path_to_private_key: self.path_to_private_key = os.path.expanduser( self.path_to_private_key) look_for_keys = False elif self.password: look_for_keys = False try: LOG.debug("ssh.connect: ip: %s, port: %s, username: %s, " "password: %s, key_filename: %s, look_for_keys: %s, " "timeout: %s, banner_timeout: %s", self.ip, self.port, self.login, self.password, self.path_to_private_key, look_for_keys, self.conn_timeout, self.conn_timeout) ssh.connect(self.ip, port=self.port, username=self.login, password=self.password, key_filename=self.path_to_private_key, look_for_keys=look_for_keys, timeout=self.conn_timeout, banner_timeout=self.conn_timeout) if self.conn_timeout: transport = ssh.get_transport() transport.set_keepalive(self.conn_timeout) return ssh except Exception as e: msg = _("Check whether private key or password are correctly " "set. Error connecting via ssh: %s") % e LOG.error(msg) raise exception.SSHException(msg) def get(self): """Return an item from the pool, when one is available. This may cause the calling greenthread to block. Check if a connection is active before returning it. For dead connections create and return a new connection. 
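        Illustrative usage (host and credentials are placeholders):

            pool = SSHPool('192.0.2.10', 22, 60, 'admin', password='secret',
                           max_size=1)
            ssh = pool.get()
            try:
                ssh.exec_command('hostname')
            finally:
                pool.put(ssh)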
""" if self.free_items: conn = self.free_items.popleft() if conn: if conn.get_transport().is_active(): return conn else: conn.close() return self.create() if self.current_size < self.max_size: created = self.create() self.current_size += 1 return created return self.channel.get() def remove(self, ssh): """Close an ssh client and remove it from free_items.""" ssh.close() if ssh in self.free_items: self.free_items.remove(ssh) if self.current_size > 0: self.current_size -= 1 def check_ssh_injection(cmd_list): ssh_injection_pattern = ['`', '$', '|', '||', ';', '&', '&&', '>', '>>', '<'] # Check whether injection attacks exist for arg in cmd_list: arg = arg.strip() # Check for matching quotes on the ends is_quoted = re.match('^(?P[\'"])(?P.*)(?P=quote)$', arg) if is_quoted: # Check for unescaped quotes within the quoted argument quoted = is_quoted.group('quoted') if quoted: if (re.match('[\'"]', quoted) or re.search('[^\\\\][\'"]', quoted)): raise exception.SSHInjectionThreat(command=cmd_list) else: # We only allow spaces within quoted arguments, and that # is the only special character allowed within quotes if len(arg.split()) > 1: raise exception.SSHInjectionThreat(command=cmd_list) # Second, check whether danger character in command. So the shell # special operator must be a single argument. for c in ssh_injection_pattern: if c not in arg: continue result = arg.find(c) if not result == -1: if result == 0 or not arg[result - 1] == '\\': raise exception.SSHInjectionThreat(command=cmd_list) class LazyPluggable(object): """A pluggable backend loaded lazily based on some value.""" def __init__(self, pivot, **backends): self.__backends = backends self.__pivot = pivot self.__backend = None def __get_backend(self): if not self.__backend: backend_name = CONF[self.__pivot] if backend_name not in self.__backends: raise exception.Error(_('Invalid backend: %s') % backend_name) backend = self.__backends[backend_name] if isinstance(backend, tuple): name = backend[0] fromlist = backend[1] else: name = backend fromlist = backend self.__backend = __import__(name, None, None, fromlist) LOG.debug('backend %s', self.__backend) return self.__backend def __getattr__(self, key): backend = self.__get_backend() return getattr(backend, key) def monkey_patch(): """Patch decorator. If the Flags.monkey_patch set as True, this function patches a decorator for all functions in specified modules. You can set decorators for each modules using CONF.monkey_patch_modules. The format is "Module path:Decorator function". Example: 'manila.api.ec2.cloud:' \ manila.openstack.common.notifier.api.notify_decorator' Parameters of the decorator is as follows. (See manila.openstack.common.notifier.api.notify_decorator) name - name of the function function - object of the function """ # If CONF.monkey_patch is not True, this function do nothing. 
if not CONF.monkey_patch: return # Get list of modules and decorators for module_and_decorator in CONF.monkey_patch_modules: module, decorator_name = module_and_decorator.split(':') # import decorator function decorator = importutils.import_class(decorator_name) __import__(module) # Retrieve module information using pyclbr module_data = pyclbr.readmodule_ex(module) for key in module_data.keys(): # set the decorator for the class methods if isinstance(module_data[key], pyclbr.Class): clz = importutils.import_class("%s.%s" % (module, key)) # NOTE(vponomaryov): we need to distinguish class methods types # for py2 and py3, because the concept of 'unbound methods' has # been removed from the python3.x if six.PY3: member_type = inspect.isfunction else: member_type = inspect.ismethod for method, func in inspect.getmembers(clz, member_type): setattr( clz, method, decorator("%s.%s.%s" % (module, key, method), func)) # set the decorator for the function if isinstance(module_data[key], pyclbr.Function): func = importutils.import_class("%s.%s" % (module, key)) setattr(sys.modules[module], key, decorator("%s.%s" % (module, key), func)) def file_open(*args, **kwargs): """Open file see built-in open() documentation for more details Note: The reason this is kept in a separate module is to easily be able to provide a stub module that doesn't alter system state at all (for unit tests) """ return open(*args, **kwargs) def service_is_up(service): """Check whether a service is up based on last heartbeat.""" last_heartbeat = service['updated_at'] or service['created_at'] # Timestamps in DB are UTC. tdelta = timeutils.utcnow() - last_heartbeat elapsed = tdelta.total_seconds() return abs(elapsed) <= CONF.service_down_time def validate_service_host(context, host): service = db_api.service_get_by_host_and_topic(context, host, 'manila-share') if not service_is_up(service): raise exception.ServiceIsDown(service=service['host']) return service @contextlib.contextmanager def tempdir(**kwargs): tmpdir = tempfile.mkdtemp(**kwargs) try: yield tmpdir finally: try: shutil.rmtree(tmpdir) except OSError as e: LOG.debug('Could not remove tmpdir: %s', e) def walk_class_hierarchy(clazz, encountered=None): """Walk class hierarchy, yielding most derived classes first.""" if not encountered: encountered = [] for subclass in clazz.__subclasses__(): if subclass not in encountered: encountered.append(subclass) # drill down to leaves first for subsubclass in walk_class_hierarchy(subclass, encountered): yield subsubclass yield subclass def cidr_to_network(cidr): """Convert cidr to network.""" try: network = netaddr.IPNetwork(cidr) return network except netaddr.AddrFormatError: raise exception.InvalidInput(_("Invalid cidr supplied %s") % cidr) def cidr_to_netmask(cidr): """Convert cidr to netmask.""" return six.text_type(cidr_to_network(cidr).netmask) def cidr_to_prefixlen(cidr): """Convert cidr to prefix length.""" return cidr_to_network(cidr).prefixlen def is_valid_ip_address(ip_address, ip_version): ip_version = ([int(ip_version)] if not isinstance(ip_version, list) else ip_version) if not set(ip_version).issubset(set([4, 6])): raise exception.ManilaException( _("Provided improper IP version '%s'.") % ip_version) if 4 in ip_version: if netutils.is_valid_ipv4(ip_address): return True if 6 in ip_version: if netutils.is_valid_ipv6(ip_address): return True return False def is_all_tenants(search_opts): """Checks to see if the all_tenants flag is in search_opts :param dict search_opts: The search options for a request :returns: boolean 
indicating if all_tenants are being requested or not """ all_tenants = search_opts.get('all_tenants') if all_tenants: try: all_tenants = strutils.bool_from_string(all_tenants, True) except ValueError as err: raise exception.InvalidInput(six.text_type(err)) else: # The empty string is considered enabling all_tenants all_tenants = 'all_tenants' in search_opts return all_tenants class IsAMatcher(object): def __init__(self, expected_value=None): self.expected_value = expected_value def __eq__(self, actual_value): return isinstance(actual_value, self.expected_value) class ComparableMixin(object): def _compare(self, other, method): try: return method(self._cmpkey(), other._cmpkey()) except (AttributeError, TypeError): # _cmpkey not implemented, or return different type, # so I can't compare with "other". return NotImplemented def __lt__(self, other): return self._compare(other, lambda s, o: s < o) def __le__(self, other): return self._compare(other, lambda s, o: s <= o) def __eq__(self, other): return self._compare(other, lambda s, o: s == o) def __ge__(self, other): return self._compare(other, lambda s, o: s >= o) def __gt__(self, other): return self._compare(other, lambda s, o: s > o) def __ne__(self, other): return self._compare(other, lambda s, o: s != o) def retry(exception, interval=1, retries=10, backoff_rate=2, wait_random=False, backoff_sleep_max=None): """A wrapper around retrying library. This decorator allows to log and to check 'retries' input param. Time interval between retries is calculated in the following way: interval * backoff_rate ^ previous_attempt_number :param exception: expected exception type. When wrapped function raises an exception of this type, the function execution is retried. :param interval: param 'interval' is used to calculate time interval between retries: interval * backoff_rate ^ previous_attempt_number :param retries: number of retries. Use 0 for an infinite retry loop. :param backoff_rate: param 'backoff_rate' is used to calculate time interval between retries: interval * backoff_rate ^ previous_attempt_number :param wait_random: boolean value to enable retry with random wait timer. :param backoff_sleep_max: Maximum number of seconds for the calculated backoff sleep. Use None if no maximum is needed. """ def _retry_on_exception(e): return isinstance(e, exception) def _backoff_sleep(previous_attempt_number, delay_since_first_attempt_ms): exp = backoff_rate ** previous_attempt_number wait_for = max(0, interval * exp) if wait_random: wait_val = random.randrange(interval * 1000.0, wait_for * 1000.0) else: wait_val = wait_for * 1000.0 if backoff_sleep_max: wait_val = min(backoff_sleep_max * 1000.0, wait_val) LOG.debug("Sleeping for %s seconds.", (wait_val / 1000.0)) return wait_val def _print_stop(previous_attempt_number, delay_since_first_attempt_ms): delay_since_first_attempt = delay_since_first_attempt_ms / 1000.0 LOG.debug("Failed attempt %s", previous_attempt_number) LOG.debug("Have been at this for %s seconds", delay_since_first_attempt) return retries > 0 and previous_attempt_number == retries if retries < 0: raise ValueError(_('Retries must be greater than or ' 'equal to 0 (received: %s).') % retries) def _decorator(f): @six.wraps(f) def _wrapper(*args, **kwargs): r = retrying.Retrying(retry_on_exception=_retry_on_exception, wait_func=_backoff_sleep, stop_func=_print_stop) return r.call(f, *args, **kwargs) return _wrapper return _decorator def get_bool_from_api_params(key, params, default=False, strict=True): """Parse bool value from request params. 
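    Illustrative behaviour (key and params are examples only):
    get_bool_from_api_params('all_tenants', {'all_tenants': 'true'}) returns
    True, while an absent key falls back to the 'default' argument.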
HTTPBadRequest will be directly raised either of the cases below: 1. invalid bool string was found by key(with strict on). 2. key not found while default value is invalid(with strict on). """ param = params.get(key, default) try: param = strutils.bool_from_string(param, strict=strict, default=default) except ValueError: msg = _('Invalid value %(param)s for %(param_string)s. ' 'Expecting a boolean.') % {'param': param, 'param_string': key} raise exc.HTTPBadRequest(explanation=msg) return param def check_params_exist(keys, params): """Validates if keys exist in params. :param keys: List of keys to check :param params: Parameters received from REST API """ if any(set(keys) - set(params)): msg = _("Must specify all mandatory parameters: %s") % keys raise exc.HTTPBadRequest(explanation=msg) def check_params_are_boolean(keys, params, default=False): """Validates if keys in params are boolean. :param keys: List of keys to check :param params: Parameters received from REST API :param default: default value when it does not exist :return: a dictionary with keys and respective retrieved value """ result = {} for key in keys: value = get_bool_from_api_params(key, params, default, strict=True) result[key] = value return result def require_driver_initialized(func): @functools.wraps(func) def wrapper(self, *args, **kwargs): # we can't do anything if the driver didn't init if not self.driver.initialized: driver_name = self.driver.__class__.__name__ raise exception.DriverNotInitialized(driver=driver_name) return func(self, *args, **kwargs) return wrapper def convert_str(text): """Convert to native string. Convert bytes and Unicode strings to native strings: * convert to bytes on Python 2: encode Unicode using encodeutils.safe_encode() * convert to Unicode on Python 3: decode bytes from UTF-8 """ if six.PY2: return encodeutils.safe_encode(text) else: if isinstance(text, bytes): return text.decode('utf-8') else: return text def translate_string_size_to_float(string, multiplier='G'): """Translates human-readable storage size to float value. Supported values for 'multiplier' are following: K - kilo | 1 M - mega | 1024 G - giga | 1024 * 1024 T - tera | 1024 * 1024 * 1024 P = peta | 1024 * 1024 * 1024 * 1024 returns: - float if correct input data provided - None if incorrect """ if not isinstance(string, six.string_types): return None multipliers = ('K', 'M', 'G', 'T', 'P') mapping = { k: 1024.0 ** v for k, v in zip(multipliers, range(len(multipliers))) } if multiplier not in multipliers: raise exception.ManilaException( "'multiplier' arg should be one of following: " "'%(multipliers)s'. But it is '%(multiplier)s'." 
% { 'multiplier': multiplier, 'multipliers': "', '".join(multipliers), } ) try: value = float(string.replace(",", ".")) / 1024.0 value = value / mapping[multiplier] return value except (ValueError, TypeError): matched = re.match( r"^(\d*[.,]*\d*)([%s])$" % ''.join(multipliers), string) if matched: # The replace() is needed in case decimal separator is a comma value = float(matched.groups()[0].replace(",", ".")) multiplier = mapping[matched.groups()[1]] / mapping[multiplier] return value * multiplier def wait_for_access_update(context, db, share_instance, migration_wait_access_rules_timeout): starttime = time.time() deadline = starttime + migration_wait_access_rules_timeout tries = 0 while True: instance = db.share_instance_get(context, share_instance['id']) if instance['access_rules_status'] == constants.STATUS_ACTIVE: break tries += 1 now = time.time() if (instance['access_rules_status'] == constants.SHARE_INSTANCE_RULES_ERROR): msg = _("Failed to update access rules" " on share instance %s") % share_instance['id'] raise exception.ShareMigrationFailed(reason=msg) elif now > deadline: msg = _("Timeout trying to update access rules" " on share instance %(share_id)s. Timeout " "was %(timeout)s seconds.") % { 'share_id': share_instance['id'], 'timeout': migration_wait_access_rules_timeout} raise exception.ShareMigrationFailed(reason=msg) else: # 1.414 = square-root of 2 time.sleep(1.414 ** tries) class DoNothing(str): """Class that literrally does nothing. We inherit from str in case it's called with json.dumps. """ def __call__(self, *args, **kwargs): return self def __getattr__(self, name): return self DO_NOTHING = DoNothing() def notifications_enabled(conf): """Check if oslo notifications are enabled.""" notifications_driver = set(conf.oslo_messaging_notifications.driver) return notifications_driver and notifications_driver != {'noop'} def if_notifications_enabled(function): """Calls decorated method only if notifications are enabled.""" @functools.wraps(function) def wrapped(*args, **kwargs): if notifications_enabled(CONF): return function(*args, **kwargs) return DO_NOTHING return wrapped def write_local_file(filename, contents, as_root=False): tmp_filename = "%s.tmp" % filename if as_root: execute('tee', tmp_filename, run_as_root=True, process_input=contents) execute('mv', '-f', tmp_filename, filename, run_as_root=True) else: with open(tmp_filename, 'w') as f: f.write(contents) os.rename(tmp_filename, filename) def write_remote_file(ssh, filename, contents, as_root=False): tmp_filename = "%s.tmp" % filename if as_root: cmd = 'sudo tee "%s" > /dev/null' % tmp_filename cmd2 = 'sudo mv -f "%s" "%s"' % (tmp_filename, filename) else: cmd = 'cat > "%s"' % tmp_filename cmd2 = 'mv -f "%s" "%s"' % (tmp_filename, filename) stdin, __, __ = ssh.exec_command(cmd) stdin.write(contents) stdin.close() stdin.channel.shutdown_write() ssh.exec_command(cmd2) manila-10.0.0/manila/__init__.py0000664000175000017500000000000013656750227016430 0ustar zuulzuul00000000000000manila-10.0.0/manila/version.py0000664000175000017500000000157413656750227016377 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from pbr import version as pbr_version MANILA_VENDOR = "OpenStack Foundation" MANILA_PRODUCT = "OpenStack Manila" MANILA_PACKAGE = None # OS distro package version suffix loaded = False version_info = pbr_version.VersionInfo('manila') version_string = version_info.version_string manila-10.0.0/manila/wsgi/0000775000175000017500000000000013656750362015302 5ustar zuulzuul00000000000000manila-10.0.0/manila/wsgi/__init__.py0000664000175000017500000000000013656750227017401 0ustar zuulzuul00000000000000manila-10.0.0/manila/wsgi/common.py0000664000175000017500000001203113656750227017141 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Utility methods for working with WSGI servers.""" import webob.dec import webob.exc from manila.i18n import _ class Request(webob.Request): pass class Application(object): """Base WSGI application wrapper. Subclasses need to implement __call__.""" @classmethod def factory(cls, global_config, **local_config): """Used for paste app factories in paste.deploy config files. Any local configuration (that is, values under the [app:APPNAME] section of the paste config) will be passed into the `__init__` method as kwargs. A hypothetical configuration would look like: [app:wadl] latest_version = 1.3 paste.app_factory = manila.api.fancy_api:Wadl.factory which would result in a call to the `Wadl` class as import manila.api.fancy_api fancy_api.Wadl(latest_version='1.3') You could of course re-implement the `factory` method in subclasses, but using the kwarg passing it shouldn't be necessary. """ return cls(**local_config) def __call__(self, environ, start_response): r"""Subclasses will probably want to implement __call__ like this: @webob.dec.wsgify(RequestClass=Request) def __call__(self, req): # Any of the following objects work as responses: # Option 1: simple string res = 'message\n' # Option 2: a nicely formatted HTTP exception page res = exc.HTTPForbidden(detail='Nice try') # Option 3: a webob Response object (in case you need to play with # headers, or you want to be treated like an iterable, or or or) res = Response(); res.app_iter = open('somefile') # Option 4: any wsgi app to be run next res = self.application # Option 5: you can get a Response object for a wsgi app, too, to # play with headers etc res = req.get_response(self.application) # You can then just return your response... return res # ... or set req.response and return None. 
req.response = res See the end of http://pythonpaste.org/webob/modules/dec.html for more info. """ raise NotImplementedError(_('You must implement __call__')) class Middleware(Application): """Base WSGI middleware. These classes require an application to be initialized that will be called next. By default the middleware will simply call its wrapped app, or you can override __call__ to customize its behavior. """ @classmethod def factory(cls, global_config, **local_config): """Used for paste app factories in paste.deploy config files. Any local configuration (that is, values under the [filter:APPNAME] section of the paste config) will be passed into the `__init__` method as kwargs. A hypothetical configuration would look like: [filter:analytics] redis_host = 127.0.0.1 paste.filter_factory = manila.api.analytics:Analytics.factory which would result in a call to the `Analytics` class as import manila.api.analytics analytics.Analytics(app_from_paste, redis_host='127.0.0.1') You could of course re-implement the `factory` method in subclasses, but using the kwarg passing it shouldn't be necessary. """ def _factory(app): return cls(app, **local_config) return _factory def __init__(self, application): self.application = application def process_request(self, req): """Called on each request. If this returns None, the next application down the stack will be executed. If it returns a response then that response will be returned and execution will stop here. """ return None def process_response(self, response): """Do whatever you'd like to the response.""" return response @webob.dec.wsgify(RequestClass=Request) def __call__(self, req): # pylint: disable=assignment-from-none response = self.process_request(req) if response: return response response = req.get_response(self.application) return self.process_response(response) manila-10.0.0/manila/wsgi/wsgi.py0000664000175000017500000000214113656750227016623 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Manila OS API WSGI application.""" import sys from oslo_config import cfg from oslo_log import log from oslo_service import wsgi # Need to register global_opts from manila.common import config from manila import rpc from manila import version CONF = cfg.CONF def initialize_application(): log.register_options(CONF) CONF(sys.argv[1:], project="manila", version=version.version_string()) config.verify_share_protocols() log.setup(CONF, "manila") rpc.init(CONF) return wsgi.Loader(CONF).load_app(name='osapi_share') manila-10.0.0/manila/wsgi/eventlet_server.py0000664000175000017500000000413613656750227021074 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Utility methods for working with WSGI servers.""" import socket from oslo_config import cfg from oslo_service import wsgi from oslo_utils import netutils socket_opts = [ cfg.BoolOpt('tcp_keepalive', default=True, help="Sets the value of TCP_KEEPALIVE (True/False) for each " "server socket."), cfg.IntOpt('tcp_keepalive_interval', help="Sets the value of TCP_KEEPINTVL in seconds for each " "server socket. Not supported on OS X."), cfg.IntOpt('tcp_keepalive_count', help="Sets the value of TCP_KEEPCNT for each " "server socket. Not supported on OS X."), ] CONF = cfg.CONF CONF.register_opts(socket_opts) class Server(wsgi.Server): """Server class to manage a WSGI server, serving a WSGI application.""" def _set_socket_opts(self, _socket): _socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) # NOTE(praneshp): Call set_tcp_keepalive in oslo to set # tcp keepalive parameters. Sockets can hang around forever # without keepalive netutils.set_tcp_keepalive( _socket, self.conf.tcp_keepalive, self.conf.tcp_keepidle, self.conf.tcp_keepalive_count, self.conf.tcp_keepalive_interval, ) return _socket manila-10.0.0/manila/data/0000775000175000017500000000000013656750362015242 5ustar zuulzuul00000000000000manila-10.0.0/manila/data/utils.py0000664000175000017500000001457013656750227016763 0ustar zuulzuul00000000000000# Copyright 2015, Hitachi Data Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
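# Illustrative usage of the Copy helper defined below (paths and the ignore
# list are placeholders):
#
#   copy = Copy('/tmp/<src_instance_id>', '/tmp/<dest_instance_id>',
#               ignore_list=['lost+found'], check_hash=True)
#   copy.run()
#   progress = copy.get_progress()  # e.g. {'total_progress': 100}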
import os from oslo_log import log from manila import exception from manila.i18n import _ from manila import utils LOG = log.getLogger(__name__) class Copy(object): def __init__(self, src, dest, ignore_list, check_hash=False): self.src = src self.dest = dest self.total_size = 0 self.current_size = 0 self.files = [] self.dirs = [] self.current_copy = None self.ignore_list = ignore_list self.cancelled = False self.initialized = False self.completed = False self.check_hash = check_hash def get_progress(self): # Empty share or empty contents if self.completed and self.total_size == 0: return {'total_progress': 100} if not self.initialized or self.current_copy is None: return {'total_progress': 0} try: size, err = utils.execute("stat", "-c", "%s", self.current_copy['file_path'], run_as_root=True) size = int(size) except utils.processutils.ProcessExecutionError: size = 0 current_file_progress = 0 if self.current_copy['size'] > 0: current_file_progress = size * 100 / self.current_copy['size'] current_file_path = self.current_copy['file_path'] total_progress = 0 if self.total_size > 0: if current_file_progress == 100: size = 0 total_progress = int((self.current_size + size) * 100 / self.total_size) progress = { 'total_progress': total_progress, 'current_file_path': current_file_path, 'current_file_progress': current_file_progress } return progress def cancel(self): self.cancelled = True def run(self): self.get_total_size(self.src) self.initialized = True self.copy_data(self.src) self.copy_stats(self.src) self.completed = True LOG.info(self.get_progress()) def get_total_size(self, path): if self.cancelled: return out, err = utils.execute( "ls", "-pA1", "--group-directories-first", path, run_as_root=True) for line in out.split('\n'): if self.cancelled: return if len(line) == 0: continue src_item = os.path.join(path, line) if line[-1] == '/': if line[0:-1] in self.ignore_list: continue self.get_total_size(src_item) else: if line in self.ignore_list: continue size, err = utils.execute("stat", "-c", "%s", src_item, run_as_root=True) self.total_size += int(size) def copy_data(self, path): if self.cancelled: return out, err = utils.execute( "ls", "-pA1", "--group-directories-first", path, run_as_root=True) for line in out.split('\n'): if self.cancelled: return if len(line) == 0: continue src_item = os.path.join(path, line) dest_item = src_item.replace(self.src, self.dest) if line[-1] == '/': if line[0:-1] in self.ignore_list: continue utils.execute("mkdir", "-p", dest_item, run_as_root=True) self.copy_data(src_item) else: if line in self.ignore_list: continue size, err = utils.execute("stat", "-c", "%s", src_item, run_as_root=True) self.current_copy = {'file_path': dest_item, 'size': int(size)} self._copy_and_validate(src_item, dest_item) self.current_size += int(size) LOG.info(self.get_progress()) @utils.retry(exception.ShareDataCopyFailed, retries=2) def _copy_and_validate(self, src_item, dest_item): utils.execute("cp", "-P", "--preserve=all", src_item, dest_item, run_as_root=True) if self.check_hash: _validate_item(src_item, dest_item) def copy_stats(self, path): if self.cancelled: return out, err = utils.execute( "ls", "-pA1", "--group-directories-first", path, run_as_root=True) for line in out.split('\n'): if self.cancelled: return if len(line) == 0: continue src_item = os.path.join(path, line) dest_item = src_item.replace(self.src, self.dest) # NOTE(ganso): Should re-apply attributes for folders. 
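                # The chmod/touch/chown "--reference" calls below copy the
                # source item's mode, timestamps and ownership onto the
                # destination item.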
if line[-1] == '/': if line[0:-1] in self.ignore_list: continue self.copy_stats(src_item) utils.execute("chmod", "--reference=%s" % src_item, dest_item, run_as_root=True) utils.execute("touch", "--reference=%s" % src_item, dest_item, run_as_root=True) utils.execute("chown", "--reference=%s" % src_item, dest_item, run_as_root=True) def _validate_item(src_item, dest_item): src_sum, err = utils.execute( "sha256sum", "%s" % src_item, run_as_root=True) dest_sum, err = utils.execute( "sha256sum", "%s" % dest_item, run_as_root=True) if src_sum.split()[0] != dest_sum.split()[0]: msg = _("Data corrupted while copying. Aborting data copy.") raise exception.ShareDataCopyFailed(reason=msg) manila-10.0.0/manila/data/__init__.py0000664000175000017500000000000013656750227017341 0ustar zuulzuul00000000000000manila-10.0.0/manila/data/helper.py0000664000175000017500000002576713656750227017114 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hitachi Data Systems. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Helper class for Data Service operations.""" import os from oslo_config import cfg from oslo_log import log from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import access as access_manager from manila.share import rpcapi as share_rpc from manila import utils LOG = log.getLogger(__name__) data_helper_opts = [ cfg.IntOpt( 'data_access_wait_access_rules_timeout', default=180, help="Time to wait for access rules to be allowed/denied on backends " "when migrating a share (seconds)."), cfg.ListOpt('data_node_access_ips', default=[], help="A list of the IPs of the node interface connected to " "the admin network. Used for allowing access to the " "mounting shares. Default is []."), cfg.StrOpt( 'data_node_access_cert', help="The certificate installed in the data node in order to " "allow access to certificate authentication-based shares."), cfg.StrOpt( 'data_node_access_admin_user', help="The admin user name registered in the security service in order " "to allow access to user authentication-based shares."), cfg.DictOpt( 'data_node_mount_options', default={}, help="Mount options to be included in the mount command for share " "protocols. 
Use dictionary format, example: " "{'nfs': '-o nfsvers=3', 'cifs': '-o user=foo,pass=bar'}"), ] CONF = cfg.CONF CONF.register_opts(data_helper_opts) class DataServiceHelper(object): def __init__(self, context, db, share): self.db = db self.share = share self.context = context self.share_rpc = share_rpc.ShareAPI() self.access_helper = access_manager.ShareInstanceAccess(self.db, None) self.wait_access_rules_timeout = ( CONF.data_access_wait_access_rules_timeout) def deny_access_to_data_service(self, access_ref_list, share_instance): self._change_data_access_to_instance( share_instance, access_ref_list, deny=True) # NOTE(ganso): Cleanup methods do not throw exceptions, since the # exceptions that should be thrown are the ones that call the cleanup def cleanup_data_access(self, access_ref_list, share_instance_id): try: self.deny_access_to_data_service( access_ref_list, share_instance_id) except Exception: LOG.warning("Could not cleanup access rule of share %s.", self.share['id']) def cleanup_temp_folder(self, instance_id, mount_path): try: path = os.path.join(mount_path, instance_id) if os.path.exists(path): os.rmdir(path) self._check_dir_not_exists(path) except Exception: LOG.warning("Could not cleanup instance %(instance_id)s " "temporary folders for data copy of " "share %(share_id)s.", { 'instance_id': instance_id, 'share_id': self.share['id']}) def cleanup_unmount_temp_folder(self, unmount_template, mount_path, share_instance_id): try: self.unmount_share_instance(unmount_template, mount_path, share_instance_id) except Exception: LOG.warning("Could not unmount folder of instance" " %(instance_id)s for data copy of " "share %(share_id)s.", { 'instance_id': share_instance_id, 'share_id': self.share['id']}) def _change_data_access_to_instance( self, instance, accesses=None, deny=False): self.access_helper.get_and_update_share_instance_access_rules_status( self.context, status=constants.SHARE_INSTANCE_RULES_SYNCING, share_instance_id=instance['id']) if deny: if accesses is None: accesses = [] else: if not isinstance(accesses, list): accesses = [accesses] access_filters = {'access_id': [a['id'] for a in accesses]} updates = {'state': constants.ACCESS_STATE_QUEUED_TO_DENY} self.access_helper.get_and_update_share_instance_access_rules( self.context, filters=access_filters, updates=updates, share_instance_id=instance['id']) self.share_rpc.update_access(self.context, instance) utils.wait_for_access_update( self.context, self.db, instance, self.wait_access_rules_timeout) def allow_access_to_data_service( self, share_instance, connection_info_src, dest_share_instance=None, connection_info_dest=None): allow_access_to_destination_instance = (dest_share_instance and connection_info_dest) # NOTE(ganso): intersect the access type compatible with both instances if allow_access_to_destination_instance: access_mapping = {} for a_type, protocols in ( connection_info_src['access_mapping'].items()): for proto in protocols: if (a_type in connection_info_dest['access_mapping'] and proto in connection_info_dest['access_mapping'][a_type]): access_mapping[a_type] = access_mapping.get(a_type, []) access_mapping[a_type].append(proto) else: access_mapping = connection_info_src['access_mapping'] access_list = self._get_access_entries_according_to_mapping( access_mapping) access_ref_list = [] for access in access_list: values = { 'share_id': self.share['id'], 'access_type': access['access_type'], 'access_level': access['access_level'], 'access_to': access['access_to'], } # Check if the rule being added already exists. 
If so, we will # remove it to prevent conflicts old_access_list = self.db.share_access_get_all_by_type_and_access( self.context, self.share['id'], access['access_type'], access['access_to']) if old_access_list: self._change_data_access_to_instance( share_instance, old_access_list, deny=True) access_ref = self.db.share_instance_access_create( self.context, values, share_instance['id']) self._change_data_access_to_instance(share_instance) if allow_access_to_destination_instance: access_ref = self.db.share_instance_access_create( self.context, values, dest_share_instance['id']) self._change_data_access_to_instance(dest_share_instance) # The access rule ref used here is a regular Share Access Map, # instead of a Share Instance Access Map. access_ref_list.append(access_ref) return access_ref_list def _get_access_entries_according_to_mapping(self, access_mapping): access_list = [] # NOTE(ganso): protocol is not relevant here because we previously # used it to filter the access types we are interested in for access_type, protocols in access_mapping.items(): access_to_list = [] if access_type.lower() == 'cert' and CONF.data_node_access_cert: access_to_list.append(CONF.data_node_access_cert) elif access_type.lower() == 'ip': ips = CONF.data_node_access_ips if ips: if not isinstance(ips, list): ips = [ips] access_to_list.extend(ips) elif (access_type.lower() == 'user' and CONF.data_node_access_admin_user): access_to_list.append(CONF.data_node_access_admin_user) else: msg = _("Unsupported access type provided: %s.") % access_type raise exception.ShareDataCopyFailed(reason=msg) if not access_to_list: msg = _("Configuration for Data node mounting access type %s " "has not been set.") % access_type raise exception.ShareDataCopyFailed(reason=msg) for access_to in access_to_list: access = { 'access_type': access_type, 'access_level': constants.ACCESS_LEVEL_RW, 'access_to': access_to, } access_list.append(access) return access_list @utils.retry(exception.NotFound, 0.1, 10, 0.1) def _check_dir_exists(self, path): if not os.path.exists(path): raise exception.NotFound("Folder %s could not be found." % path) @utils.retry(exception.Found, 0.1, 10, 0.1) def _check_dir_not_exists(self, path): if os.path.exists(path): raise exception.Found("Folder %s was found." % path) def mount_share_instance(self, mount_template, mount_path, share_instance): path = os.path.join(mount_path, share_instance['id']) options = CONF.data_node_mount_options options = {k.lower(): v for k, v in options.items()} proto_options = options.get(share_instance['share_proto'].lower()) if not proto_options: proto_options = '' if not os.path.exists(path): os.makedirs(path) self._check_dir_exists(path) mount_command = mount_template % {'path': path, 'options': proto_options} utils.execute(*(mount_command.split()), run_as_root=True) def unmount_share_instance(self, unmount_template, mount_path, share_instance_id): path = os.path.join(mount_path, share_instance_id) unmount_command = unmount_template % {'path': path} utils.execute(*(unmount_command.split()), run_as_root=True) try: if os.path.exists(path): os.rmdir(path) self._check_dir_not_exists(path) except Exception: LOG.warning("Folder %s could not be removed.", path) manila-10.0.0/manila/data/manager.py0000664000175000017500000002736713656750227017245 0ustar zuulzuul00000000000000# Copyright 2015, Hitachi Data Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Data Service """ import os from oslo_config import cfg from oslo_log import log import six from manila.common import constants from manila import context from manila.data import helper from manila.data import utils as data_utils from manila import exception from manila import manager from manila.share import rpcapi as share_rpc from manila.i18n import _ LOG = log.getLogger(__name__) data_opts = [ cfg.StrOpt( 'mount_tmp_location', default='/tmp/', deprecated_name='migration_tmp_location', help="Temporary path to create and mount shares during migration."), cfg.BoolOpt( 'check_hash', default=False, help="Chooses whether hash of each file should be checked on data " "copying."), ] CONF = cfg.CONF CONF.register_opts(data_opts) class DataManager(manager.Manager): """Receives requests to handle data and sends responses.""" RPC_API_VERSION = '1.0' def __init__(self, service_name=None, *args, **kwargs): super(DataManager, self).__init__(*args, **kwargs) self.busy_tasks_shares = {} def init_host(self): ctxt = context.get_admin_context() shares = self.db.share_get_all(ctxt) for share in shares: if share['task_state'] in constants.BUSY_COPYING_STATES: self.db.share_update( ctxt, share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_ERROR}) def migration_start(self, context, ignore_list, share_id, share_instance_id, dest_share_instance_id, connection_info_src, connection_info_dest): LOG.debug( "Received request to migrate share content from share instance " "%(instance_id)s to instance %(dest_instance_id)s.", {'instance_id': share_instance_id, 'dest_instance_id': dest_share_instance_id}) share_ref = self.db.share_get(context, share_id) share_instance_ref = self.db.share_instance_get( context, share_instance_id, with_share_data=True) share_rpcapi = share_rpc.ShareAPI() mount_path = CONF.mount_tmp_location try: copy = data_utils.Copy( os.path.join(mount_path, share_instance_id), os.path.join(mount_path, dest_share_instance_id), ignore_list, CONF.check_hash) self._copy_share_data( context, copy, share_ref, share_instance_id, dest_share_instance_id, connection_info_src, connection_info_dest) except exception.ShareDataCopyCancelled: share_rpcapi.migration_complete( context, share_instance_ref, dest_share_instance_id) return except Exception: self.db.share_update( context, share_id, {'task_state': constants.TASK_STATE_DATA_COPYING_ERROR}) msg = _("Failed to copy contents from instance %(src)s to " "instance %(dest)s.") % {'src': share_instance_id, 'dest': dest_share_instance_id} LOG.exception(msg) share_rpcapi.migration_complete( context, share_instance_ref, dest_share_instance_id) raise exception.ShareDataCopyFailed(reason=msg) finally: self.busy_tasks_shares.pop(share_id, None) LOG.info( "Completed copy operation of migrating share content from share " "instance %(instance_id)s to instance %(dest_instance_id)s.", {'instance_id': share_instance_id, 'dest_instance_id': dest_share_instance_id}) def data_copy_cancel(self, context, share_id): LOG.debug("Received request to cancel data copy " "of share %s.", share_id) copy = self.busy_tasks_shares.get(share_id) if copy: copy.cancel() else: msg = 
_("Data copy for migration of share %s cannot be cancelled" " at this moment.") % share_id LOG.error(msg) raise exception.InvalidShare(reason=msg) def data_copy_get_progress(self, context, share_id): LOG.debug("Received request to get data copy information " "of share %s.", share_id) copy = self.busy_tasks_shares.get(share_id) if copy: result = copy.get_progress() LOG.info("Obtained following data copy information " "of share %(share)s: %(info)s.", {'share': share_id, 'info': six.text_type(result)}) return result else: msg = _("Migration of share %s data copy progress cannot be " "obtained at this moment.") % share_id LOG.error(msg) raise exception.InvalidShare(reason=msg) def _copy_share_data( self, context, copy, src_share, share_instance_id, dest_share_instance_id, connection_info_src, connection_info_dest): copied = False mount_path = CONF.mount_tmp_location share_instance = self.db.share_instance_get( context, share_instance_id, with_share_data=True) dest_share_instance = self.db.share_instance_get( context, dest_share_instance_id, with_share_data=True) self.db.share_update( context, src_share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_STARTING}) helper_src = helper.DataServiceHelper(context, self.db, src_share) helper_dest = helper_src access_ref_list_src = helper_src.allow_access_to_data_service( share_instance, connection_info_src, dest_share_instance, connection_info_dest) access_ref_list_dest = access_ref_list_src def _call_cleanups(items): for item in items: if 'unmount_src' == item: helper_src.cleanup_unmount_temp_folder( connection_info_src['unmount'], mount_path, share_instance_id) elif 'temp_folder_src' == item: helper_src.cleanup_temp_folder(share_instance_id, mount_path) elif 'temp_folder_dest' == item: helper_dest.cleanup_temp_folder(dest_share_instance_id, mount_path) elif 'access_src' == item: helper_src.cleanup_data_access(access_ref_list_src, share_instance_id) elif 'access_dest' == item: helper_dest.cleanup_data_access(access_ref_list_dest, dest_share_instance_id) try: helper_src.mount_share_instance( connection_info_src['mount'], mount_path, share_instance) except Exception: msg = _("Data copy failed attempting to mount " "share instance %s.") % share_instance_id LOG.exception(msg) _call_cleanups(['temp_folder_src', 'access_dest', 'access_src']) raise exception.ShareDataCopyFailed(reason=msg) try: helper_dest.mount_share_instance( connection_info_dest['mount'], mount_path, dest_share_instance) except Exception: msg = _("Data copy failed attempting to mount " "share instance %s.") % dest_share_instance_id LOG.exception(msg) _call_cleanups(['temp_folder_dest', 'unmount_src', 'temp_folder_src', 'access_dest', 'access_src']) raise exception.ShareDataCopyFailed(reason=msg) self.busy_tasks_shares[src_share['id']] = copy self.db.share_update( context, src_share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_IN_PROGRESS}) try: copy.run() self.db.share_update( context, src_share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_COMPLETING}) if copy.get_progress()['total_progress'] == 100: copied = True except Exception: LOG.exception("Failed to copy data from share instance " "%(share_instance_id)s to " "%(dest_share_instance_id)s.", {'share_instance_id': share_instance_id, 'dest_share_instance_id': dest_share_instance_id}) try: helper_src.unmount_share_instance(connection_info_src['unmount'], mount_path, share_instance_id) except Exception: LOG.exception("Could not unmount folder of instance" " %s after its data copy.", share_instance_id) try: 
helper_dest.unmount_share_instance( connection_info_dest['unmount'], mount_path, dest_share_instance_id) except Exception: LOG.exception("Could not unmount folder of instance" " %s after its data copy.", dest_share_instance_id) try: helper_src.deny_access_to_data_service( access_ref_list_src, share_instance) except Exception: LOG.exception("Could not deny access to instance" " %s after its data copy.", share_instance_id) try: helper_dest.deny_access_to_data_service( access_ref_list_dest, dest_share_instance) except Exception: LOG.exception("Could not deny access to instance" " %s after its data copy.", dest_share_instance_id) if copy and copy.cancelled: self.db.share_update( context, src_share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_CANCELLED}) LOG.warning("Copy of data from share instance " "%(src_instance)s to share instance " "%(dest_instance)s was cancelled.", {'src_instance': share_instance_id, 'dest_instance': dest_share_instance_id}) raise exception.ShareDataCopyCancelled( src_instance=share_instance_id, dest_instance=dest_share_instance_id) elif not copied: msg = _("Copying data from share instance %(instance_id)s " "to %(dest_instance_id)s did not succeed.") % ( {'instance_id': share_instance_id, 'dest_instance_id': dest_share_instance_id}) raise exception.ShareDataCopyFailed(reason=msg) self.db.share_update( context, src_share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_COMPLETED}) LOG.debug("Copy of data from share instance %(src_instance)s to " "share instance %(dest_instance)s was successful.", {'src_instance': share_instance_id, 'dest_instance': dest_share_instance_id}) manila-10.0.0/manila/data/rpcapi.py0000664000175000017500000000444213656750227017076 0ustar zuulzuul00000000000000# Copyright 2015, Hitachi Data Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Client side of the data manager RPC API. """ from oslo_config import cfg import oslo_messaging as messaging from manila import rpc CONF = cfg.CONF class DataAPI(object): """Client side of the data RPC API. 
API version history: 1.0 - Initial version, Add migration_start(), data_copy_cancel(), data_copy_get_progress() """ BASE_RPC_API_VERSION = '1.0' def __init__(self): super(DataAPI, self).__init__() target = messaging.Target(topic=CONF.data_topic, version=self.BASE_RPC_API_VERSION) self.client = rpc.get_client(target, version_cap='1.0') def migration_start(self, context, share_id, ignore_list, share_instance_id, dest_share_instance_id, connection_info_src, connection_info_dest): call_context = self.client.prepare(version='1.0') call_context.cast( context, 'migration_start', share_id=share_id, ignore_list=ignore_list, share_instance_id=share_instance_id, dest_share_instance_id=dest_share_instance_id, connection_info_src=connection_info_src, connection_info_dest=connection_info_dest) def data_copy_cancel(self, context, share_id): call_context = self.client.prepare(version='1.0') call_context.call(context, 'data_copy_cancel', share_id=share_id) def data_copy_get_progress(self, context, share_id): call_context = self.client.prepare(version='1.0') return call_context.call(context, 'data_copy_get_progress', share_id=share_id) manila-10.0.0/manila/testing/0000775000175000017500000000000013656750362016006 5ustar zuulzuul00000000000000manila-10.0.0/manila/testing/README.rst0000664000175000017500000000374113656750227017502 0ustar zuulzuul00000000000000======================================= OpenStack Manila Testing Infrastructure ======================================= A note of clarification is in order, to help those who are new to testing in OpenStack Manila: - actual unit tests are created in the "tests" directory; - the "testing" directory is used to house the infrastructure needed to support testing in OpenStack Manila. This README file attempts to provide current and prospective contributors with everything they need to know in order to start creating unit tests and utilizing the convenience code provided in manila.testing. Writing Unit Tests ------------------ - All new unit tests are to be written in python-mock. - Old tests that are still written in mox should be updated to use python-mock. Usage of mox has been deprecated for writing Manila unit tests. - use addCleanup in favor of tearDown test.TestCase ------------- The TestCase class from manila.test (generally imported as test) will automatically manage self.stubs using the stubout module. They will automatically verify and clean up during the tearDown step. If using test.TestCase, calling the super class setUp is required and calling the super class tearDown is required to be last if tearDown is overridden. Running Tests ------------- The preferred way to run the unit tests is using ``tox``. Tox executes tests in isolated environment, by creating separate virtualenv and installing dependencies from the ``requirements.txt`` and ``test-requirements.txt`` files, so the only package you install is ``tox`` itself:: sudo pip install tox Run the unit tests by doing:: tox -e py3 Tests and assertRaises ---------------------- When asserting that a test should raise an exception, test against the most specific exception possible. An overly broad exception type (like Exception) can mask errors in the unit test itself. 
Example:: self.assertRaises(exception.InstanceNotFound, db.instance_get_by_uuid, elevated, instance_uuid) manila-10.0.0/manila/common/0000775000175000017500000000000013656750362015621 5ustar zuulzuul00000000000000manila-10.0.0/manila/common/__init__.py0000664000175000017500000000000013656750227017720 0ustar zuulzuul00000000000000manila-10.0.0/manila/common/config.py0000664000175000017500000001755113656750227017451 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Command-line flag library. Emulates gflags by wrapping cfg.ConfigOpts. The idea is to move fully to cfg eventually, and this wrapper is a stepping stone. """ import socket from oslo_config import cfg from oslo_log import log from oslo_middleware import cors from oslo_utils import netutils import six from manila.common import constants from manila import exception from manila.i18n import _ CONF = cfg.CONF log.register_options(CONF) core_opts = [ cfg.StrOpt('state_path', default='/var/lib/manila', help="Top-level directory for maintaining manila's state."), ] debug_opts = [ ] CONF.register_cli_opts(core_opts) CONF.register_cli_opts(debug_opts) global_opts = [ cfg.HostAddressOpt('my_ip', default=netutils.get_my_ipv4(), sample_default='', help='IP address of this host.'), cfg.StrOpt('scheduler_topic', default='manila-scheduler', help='The topic scheduler nodes listen on.'), cfg.StrOpt('share_topic', default='manila-share', help='The topic share nodes listen on.'), cfg.StrOpt('data_topic', default='manila-data', help='The topic data nodes listen on.'), cfg.BoolOpt('api_rate_limit', default=True, help='Whether to rate limit the API.'), cfg.ListOpt('osapi_share_ext_list', default=[], help='Specify list of extensions to load when using osapi_' 'share_extension option with manila.api.contrib.' 'select_extensions.'), cfg.ListOpt('osapi_share_extension', default=['manila.api.contrib.standard_extensions'], help='The osapi share extensions to load.'), cfg.StrOpt('scheduler_manager', default='manila.scheduler.manager.SchedulerManager', help='Full class name for the scheduler manager.'), cfg.StrOpt('share_manager', default='manila.share.manager.ShareManager', help='Full class name for the share manager.'), cfg.StrOpt('data_manager', default='manila.data.manager.DataManager', help='Full class name for the data manager.'), cfg.HostAddressOpt('host', default=socket.gethostname(), sample_default='', help='Name of this node. This can be an opaque ' 'identifier. 
It is not necessarily a hostname, ' 'FQDN, or IP address.'), # NOTE(vish): default to nova for compatibility with nova installs cfg.StrOpt('storage_availability_zone', default='nova', help='Availability zone of this node.'), cfg.StrOpt('default_share_type', help='Default share type to use.'), cfg.StrOpt('default_share_group_type', help='Default share group type to use.'), cfg.ListOpt('memcached_servers', help='Memcached servers or None for in process cache.', deprecated_reason="The config option is not used. It should " "not be confused with [keystone_authtoken]/memcached_servers.", deprecated_for_removal=True), cfg.StrOpt('share_usage_audit_period', default='month', deprecated_for_removal=True, help='Time period to generate share usages for. ' 'Time period must be hour, day, month or year.', deprecated_reason="The config option is not used."), cfg.StrOpt('rootwrap_config', help='Path to the rootwrap configuration file to use for ' 'running commands as root.'), cfg.BoolOpt('monkey_patch', default=False, help='Whether to log monkey patching.'), cfg.ListOpt('monkey_patch_modules', default=[], help='List of modules or decorators to monkey patch.'), cfg.IntOpt('service_down_time', default=60, help='Maximum time since last check-in for up service.'), cfg.StrOpt('share_api_class', default='manila.share.api.API', help='The full class name of the share API class to use.'), cfg.StrOpt('auth_strategy', default='keystone', help='The strategy to use for auth. Supports noauth, keystone, ' 'and deprecated.'), cfg.ListOpt('enabled_share_backends', help='A list of share backend names to use. These backend ' 'names should be backed by a unique [CONFIG] group ' 'with its options.'), cfg.ListOpt('enabled_share_protocols', default=['NFS', 'CIFS'], help="Specify list of protocols to be allowed for share " "creation. Available values are '%s'" % six.text_type( constants.SUPPORTED_SHARE_PROTOCOLS)), ] CONF.register_opts(global_opts) def verify_share_protocols(): """Perform verification of 'enabled_share_protocols'.""" msg = None supported_protocols = constants.SUPPORTED_SHARE_PROTOCOLS data = dict(supported=', '.join(supported_protocols)) if CONF.enabled_share_protocols: for share_proto in CONF.enabled_share_protocols: if share_proto.upper() not in supported_protocols: data.update({'share_proto': share_proto}) msg = ("Unsupported share protocol '%(share_proto)s' " "is set as enabled. Available values are " "%(supported)s. ") break else: msg = ("No share protocols were specified as enabled. " "Available values are %(supported)s. ") if msg: msg += ("Please specify one or more protocols using " "configuration option 'enabled_share_protocols'.") # NOTE(vponomaryov): use translation to unicode explicitly, # because of 'lazy' translations. 
msg = six.text_type(_(msg) % data) # noqa H701 raise exception.ManilaException(message=msg) def set_middleware_defaults(): """Update default configuration options for oslo.middleware.""" cors.set_defaults( allow_headers=['X-Auth-Token', 'X-OpenStack-Request-ID', 'X-Openstack-Manila-Api-Version', 'X-OpenStack-Manila-API-Experimental', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id'], expose_headers=['X-Auth-Token', 'X-OpenStack-Request-ID', 'X-Openstack-Manila-Api-Version', 'X-OpenStack-Manila-API-Experimental', 'X-Subject-Token', 'X-Service-Token'], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] ) manila-10.0.0/manila/common/client_auth.py0000664000175000017500000000651413656750227020500 0ustar zuulzuul00000000000000# Copyright 2016 SAP SE # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from keystoneauth1 import loading as ks_loading from oslo_config import cfg from manila import exception from manila.i18n import _ CONF = cfg.CONF """Helper class to support keystone v2 and v3 for clients Builds auth and session context before instantiation of the actual client. In order to build this context a dedicated config group is needed to load all needed parameters dynamically. """ class AuthClientLoader(object): def __init__(self, client_class, exception_module, cfg_group): self.client_class = client_class self.exception_module = exception_module self.group = cfg_group self.admin_auth = None self.conf = CONF self.session = None self.auth_plugin = None @staticmethod def list_opts(group): """Generates a list of config option for a given group :param group: group name :return: list of auth default configuration """ opts = copy.deepcopy(ks_loading.get_session_conf_options()) opts.insert(0, ks_loading.get_auth_common_conf_options()[0]) for plugin_option in ks_loading.get_auth_plugin_conf_options( 'password'): found = False for option in opts: if option.name == plugin_option.name: found = True break if not found: opts.append(plugin_option) opts.sort(key=lambda x: x.name) return [(group, opts)] def _load_auth_plugin(self): if self.admin_auth: return self.admin_auth self.auth_plugin = ks_loading.load_auth_from_conf_options( CONF, self.group) if self.auth_plugin: return self.auth_plugin msg = _('Cannot load auth plugin for %s') % self.group raise self.exception_module.Unauthorized(message=msg) def get_client(self, context, admin=False, **kwargs): """Get's the client with the correct auth/session context """ auth_plugin = None if not self.session: self.session = ks_loading.load_session_from_conf_options( self.conf, self.group) if admin or (context.is_admin and not context.auth_token): if not self.admin_auth: self.admin_auth = self._load_auth_plugin() auth_plugin = self.admin_auth else: # NOTE(mkoderer): Manila basically needs admin clients for # it's actions. 
If needed this must be enhanced later raise exception.ManilaException( _("Client (%s) is not flagged as admin") % self.group) return self.client_class(session=self.session, auth=auth_plugin, **kwargs) manila-10.0.0/manila/common/constants.py0000664000175000017500000001662213656750227020216 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # The maximum value a signed INT type may have DB_MAX_INT = 0x7FFFFFFF # SHARE AND GENERAL STATUSES STATUS_CREATING = 'creating' STATUS_CREATING_FROM_SNAPSHOT = 'creating_from_snapshot' STATUS_DELETING = 'deleting' STATUS_DELETED = 'deleted' STATUS_ERROR = 'error' STATUS_ERROR_DELETING = 'error_deleting' STATUS_AVAILABLE = 'available' STATUS_INACTIVE = 'inactive' STATUS_MANAGING = 'manage_starting' STATUS_MANAGE_ERROR = 'manage_error' STATUS_UNMANAGING = 'unmanage_starting' STATUS_UNMANAGE_ERROR = 'unmanage_error' STATUS_UNMANAGED = 'unmanaged' STATUS_EXTENDING = 'extending' STATUS_EXTENDING_ERROR = 'extending_error' STATUS_SHRINKING = 'shrinking' STATUS_SHRINKING_ERROR = 'shrinking_error' STATUS_MIGRATING = 'migrating' STATUS_MIGRATING_TO = 'migrating_to' STATUS_SHRINKING_POSSIBLE_DATA_LOSS_ERROR = ( 'shrinking_possible_data_loss_error' ) STATUS_REPLICATION_CHANGE = 'replication_change' STATUS_RESTORING = 'restoring' STATUS_REVERTING = 'reverting' STATUS_REVERTING_ERROR = 'reverting_error' # Access rule states ACCESS_STATE_QUEUED_TO_APPLY = 'queued_to_apply' ACCESS_STATE_QUEUED_TO_DENY = 'queued_to_deny' ACCESS_STATE_APPLYING = 'applying' ACCESS_STATE_DENYING = 'denying' ACCESS_STATE_ACTIVE = 'active' ACCESS_STATE_ERROR = 'error' ACCESS_STATE_DELETED = 'deleted' # Share instance "access_rules_status" field values SHARE_INSTANCE_RULES_SYNCING = 'syncing' SHARE_INSTANCE_RULES_ERROR = 'error' # States/statuses for multiple resources STATUS_NEW = 'new' STATUS_OUT_OF_SYNC = 'out_of_sync' STATUS_ACTIVE = 'active' ACCESS_RULES_STATES = ( ACCESS_STATE_QUEUED_TO_APPLY, ACCESS_STATE_QUEUED_TO_DENY, ACCESS_STATE_APPLYING, ACCESS_STATE_DENYING, ACCESS_STATE_ACTIVE, ACCESS_STATE_ERROR, ACCESS_STATE_DELETED, ) TASK_STATE_MIGRATION_STARTING = 'migration_starting' TASK_STATE_MIGRATION_IN_PROGRESS = 'migration_in_progress' TASK_STATE_MIGRATION_COMPLETING = 'migration_completing' TASK_STATE_MIGRATION_SUCCESS = 'migration_success' TASK_STATE_MIGRATION_ERROR = 'migration_error' TASK_STATE_MIGRATION_CANCELLED = 'migration_cancelled' TASK_STATE_MIGRATION_DRIVER_STARTING = 'migration_driver_starting' TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS = 'migration_driver_in_progress' TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE = 'migration_driver_phase1_done' TASK_STATE_DATA_COPYING_STARTING = 'data_copying_starting' TASK_STATE_DATA_COPYING_IN_PROGRESS = 'data_copying_in_progress' TASK_STATE_DATA_COPYING_COMPLETING = 'data_copying_completing' TASK_STATE_DATA_COPYING_COMPLETED = 'data_copying_completed' TASK_STATE_DATA_COPYING_CANCELLED = 'data_copying_cancelled' TASK_STATE_DATA_COPYING_ERROR = 'data_copying_error' 
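# Example (illustrative sketch): the groupings defined below are meant to be
# consumed with simple membership checks. The data manager's init_host(),
# shown earlier, resets copies interrupted by a service restart roughly
# like this:
#
#     for share in db.share_get_all(ctxt):
#         if share['task_state'] in BUSY_COPYING_STATES:
#             db.share_update(
#                 ctxt, share['id'],
#                 {'task_state': TASK_STATE_DATA_COPYING_ERROR})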
BUSY_TASK_STATES = ( TASK_STATE_MIGRATION_STARTING, TASK_STATE_MIGRATION_IN_PROGRESS, TASK_STATE_MIGRATION_COMPLETING, TASK_STATE_MIGRATION_DRIVER_STARTING, TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, TASK_STATE_DATA_COPYING_STARTING, TASK_STATE_DATA_COPYING_IN_PROGRESS, TASK_STATE_DATA_COPYING_COMPLETING, TASK_STATE_DATA_COPYING_COMPLETED, ) BUSY_COPYING_STATES = ( TASK_STATE_DATA_COPYING_STARTING, TASK_STATE_DATA_COPYING_IN_PROGRESS, TASK_STATE_DATA_COPYING_COMPLETING, ) TRANSITIONAL_STATUSES = ( STATUS_CREATING, STATUS_DELETING, STATUS_MANAGING, STATUS_UNMANAGING, STATUS_EXTENDING, STATUS_SHRINKING, STATUS_MIGRATING, STATUS_MIGRATING_TO, STATUS_RESTORING, STATUS_REVERTING, ) INVALID_SHARE_INSTANCE_STATUSES_FOR_ACCESS_RULE_UPDATES = ( TRANSITIONAL_STATUSES + (STATUS_ERROR,) ) SUPPORTED_SHARE_PROTOCOLS = ( 'NFS', 'CIFS', 'GLUSTERFS', 'HDFS', 'CEPHFS', 'MAPRFS') SECURITY_SERVICES_ALLOWED_TYPES = ['active_directory', 'ldap', 'kerberos'] LIKE_FILTER = ['name~', 'description~'] NFS_EXPORTS_FILE = '/etc/exports' NFS_EXPORTS_FILE_TEMP = '/var/lib/nfs/etab' MOUNT_FILE = '/etc/fstab' MOUNT_FILE_TEMP = '/etc/mtab' # Below represented ports are ranges (from, to) CIFS_PORTS = ( ("tcp", (445, 445)), ("tcp", (137, 139)), ("udp", (137, 139)), ("udp", (445, 445)), ) NFS_PORTS = ( ("tcp", (2049, 2049)), ("udp", (2049, 2049)), ) SSH_PORTS = ( ("tcp", (22, 22)), ) PING_PORTS = ( ("icmp", (-1, -1)), ) WINRM_PORTS = ( ("tcp", (5985, 5986)), ) SERVICE_INSTANCE_SECGROUP_DATA = ( CIFS_PORTS + NFS_PORTS + PING_PORTS + WINRM_PORTS) ACCESS_LEVEL_RW = 'rw' ACCESS_LEVEL_RO = 'ro' ACCESS_LEVELS = ( ACCESS_LEVEL_RW, ACCESS_LEVEL_RO, ) TASK_STATE_STATUSES = ( TASK_STATE_MIGRATION_STARTING, TASK_STATE_MIGRATION_IN_PROGRESS, TASK_STATE_MIGRATION_COMPLETING, TASK_STATE_MIGRATION_SUCCESS, TASK_STATE_MIGRATION_ERROR, TASK_STATE_MIGRATION_CANCELLED, TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, TASK_STATE_DATA_COPYING_STARTING, TASK_STATE_DATA_COPYING_IN_PROGRESS, TASK_STATE_DATA_COPYING_COMPLETING, TASK_STATE_DATA_COPYING_COMPLETED, TASK_STATE_DATA_COPYING_CANCELLED, TASK_STATE_DATA_COPYING_ERROR, None, ) REPLICA_STATE_ACTIVE = 'active' REPLICA_STATE_IN_SYNC = 'in_sync' REPLICA_STATE_OUT_OF_SYNC = 'out_of_sync' REPLICATION_TYPE_READABLE = 'readable' REPLICATION_TYPE_WRITABLE = 'writable' REPLICATION_TYPE_DR = 'dr' class ExtraSpecs(object): # Extra specs key names DRIVER_HANDLES_SHARE_SERVERS = "driver_handles_share_servers" SNAPSHOT_SUPPORT = "snapshot_support" REPLICATION_TYPE_SPEC = "replication_type" CREATE_SHARE_FROM_SNAPSHOT_SUPPORT = "create_share_from_snapshot_support" REVERT_TO_SNAPSHOT_SUPPORT = "revert_to_snapshot_support" MOUNT_SNAPSHOT_SUPPORT = "mount_snapshot_support" AVAILABILITY_ZONES = "availability_zones" # Extra specs containers REQUIRED = ( DRIVER_HANDLES_SHARE_SERVERS, ) OPTIONAL = ( SNAPSHOT_SUPPORT, CREATE_SHARE_FROM_SNAPSHOT_SUPPORT, REVERT_TO_SNAPSHOT_SUPPORT, REPLICATION_TYPE_SPEC, MOUNT_SNAPSHOT_SUPPORT, AVAILABILITY_ZONES, ) # NOTE(cknight): Some extra specs are necessary parts of the Manila API and # should be visible to non-admin users. REQUIRED specs are user-visible, as # are a handful of community-agreed standardized OPTIONAL ones. 
TENANT_VISIBLE = REQUIRED + OPTIONAL BOOLEAN = ( DRIVER_HANDLES_SHARE_SERVERS, SNAPSHOT_SUPPORT, CREATE_SHARE_FROM_SNAPSHOT_SUPPORT, REVERT_TO_SNAPSHOT_SUPPORT, MOUNT_SNAPSHOT_SUPPORT, ) # NOTE(cknight): Some extra specs are optional, but a nominal (typically # False, but may be non-boolean) default value for each is still needed # when creating shares. INFERRED_OPTIONAL_MAP = { SNAPSHOT_SUPPORT: False, CREATE_SHARE_FROM_SNAPSHOT_SUPPORT: False, REVERT_TO_SNAPSHOT_SUPPORT: False, MOUNT_SNAPSHOT_SUPPORT: False, } REPLICATION_TYPES = ('writable', 'readable', 'dr') manila-10.0.0/manila/opts.py0000664000175000017500000002200213656750227015664 0ustar zuulzuul00000000000000# Copyright (c) 2014 SUSE Linux Products GmbH. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. __all__ = [ 'list_opts' ] import copy import itertools import oslo_concurrency.opts import oslo_log._options import oslo_middleware.opts import oslo_policy.opts import oslo_service.sslutils import manila.api.common import manila.api.middleware.auth import manila.common.config import manila.compute import manila.compute.nova import manila.coordination import manila.data.helper import manila.db.api import manila.db.base import manila.exception import manila.message.api import manila.network import manila.network.linux.interface import manila.network.neutron.api import manila.network.neutron.neutron_network_plugin import manila.network.standalone_network_plugin import manila.quota import manila.scheduler.drivers.base import manila.scheduler.drivers.simple import manila.scheduler.host_manager import manila.scheduler.manager import manila.scheduler.scheduler_options import manila.scheduler.weighers import manila.scheduler.weighers.capacity import manila.scheduler.weighers.pool import manila.service import manila.share.api import manila.share.driver import manila.share.drivers.cephfs.driver import manila.share.drivers.container.driver import manila.share.drivers.container.storage_helper import manila.share.drivers.dell_emc.driver import manila.share.drivers.dell_emc.plugins.isilon.isilon import manila.share.drivers.dell_emc.plugins.powermax.connection import manila.share.drivers.generic import manila.share.drivers.glusterfs import manila.share.drivers.glusterfs.common import manila.share.drivers.glusterfs.layout import manila.share.drivers.glusterfs.layout_directory import manila.share.drivers.glusterfs.layout_volume import manila.share.drivers.hdfs.hdfs_native import manila.share.drivers.hitachi.hnas.driver import manila.share.drivers.hitachi.hsp.driver import manila.share.drivers.hpe.hpe_3par_driver import manila.share.drivers.huawei.huawei_nas import manila.share.drivers.ibm.gpfs import manila.share.drivers.infinidat.infinibox import manila.share.drivers.infortrend.driver import manila.share.drivers.inspur.as13000.as13000_nas import manila.share.drivers.inspur.instorage.instorage import manila.share.drivers.lvm import manila.share.drivers.maprfs.maprfs_native import manila.share.drivers.netapp.options import manila.share.drivers.nexenta.options import 
manila.share.drivers.qnap.qnap import manila.share.drivers.quobyte.quobyte import manila.share.drivers.service_instance import manila.share.drivers.tegile.tegile import manila.share.drivers.windows.service_instance import manila.share.drivers.windows.winrm_helper import manila.share.drivers.zfsonlinux.driver import manila.share.drivers.zfssa.zfssashare import manila.share.drivers_private_data import manila.share.hook import manila.share.manager import manila.volume import manila.volume.cinder import manila.wsgi.eventlet_server # List of *all* options in [DEFAULT] namespace of manila. # Any new option list or option needs to be registered here. _global_opt_lists = [ # Keep list alphabetically sorted manila.api.common.api_common_opts, [manila.api.middleware.auth.use_forwarded_for_opt], manila.common.config.core_opts, manila.common.config.debug_opts, manila.common.config.global_opts, manila.compute._compute_opts, manila.coordination.coordination_opts, manila.data.helper.data_helper_opts, manila.db.api.db_opts, [manila.db.base.db_driver_opt], manila.exception.exc_log_opts, manila.message.api.messages_opts, manila.network.linux.interface.OPTS, manila.network.network_opts, manila.network.network_base_opts, manila.network.neutron.neutron_network_plugin. neutron_network_plugin_opts, manila.network.neutron.neutron_network_plugin. neutron_single_network_plugin_opts, manila.network.neutron.neutron_network_plugin. neutron_bind_network_plugin_opts, manila.network.neutron.neutron_network_plugin. neutron_binding_profile, manila.network.neutron.neutron_network_plugin. neutron_binding_profile_opts, manila.network.standalone_network_plugin.standalone_network_plugin_opts, manila.quota.quota_opts, manila.scheduler.drivers.base.scheduler_driver_opts, manila.scheduler.host_manager.host_manager_opts, [manila.scheduler.manager.scheduler_driver_opt], [manila.scheduler.scheduler_options.scheduler_json_config_location_opt], manila.scheduler.drivers.simple.simple_scheduler_opts, manila.scheduler.weighers.capacity.capacity_weight_opts, manila.scheduler.weighers.pool.pool_weight_opts, manila.service.service_opts, manila.share.api.share_api_opts, manila.share.driver.ganesha_opts, manila.share.driver.share_opts, manila.share.driver.ssh_opts, manila.share.drivers_private_data.private_data_opts, manila.share.drivers.cephfs.driver.cephfs_opts, manila.share.drivers.container.driver.container_opts, manila.share.drivers.container.storage_helper.lv_opts, manila.share.drivers.dell_emc.driver.EMC_NAS_OPTS, manila.share.drivers.dell_emc.plugins.powermax.connection.POWERMAX_OPTS, manila.share.drivers.generic.share_opts, manila.share.drivers.glusterfs.common.glusterfs_common_opts, manila.share.drivers.glusterfs.GlusterfsManilaShare_opts, manila.share.drivers.glusterfs.layout.glusterfs_share_layout_opts, manila.share.drivers.glusterfs.layout_directory. 
glusterfs_directory_mapped_opts, manila.share.drivers.glusterfs.layout_volume.glusterfs_volume_mapped_opts, manila.share.drivers.hdfs.hdfs_native.hdfs_native_share_opts, manila.share.drivers.hitachi.hnas.driver.hitachi_hnas_opts, manila.share.drivers.hitachi.hsp.driver.hitachi_hsp_opts, manila.share.drivers.hpe.hpe_3par_driver.HPE3PAR_OPTS, manila.share.drivers.huawei.huawei_nas.huawei_opts, manila.share.drivers.ibm.gpfs.gpfs_share_opts, manila.share.drivers.infinidat.infinibox.infinidat_auth_opts, manila.share.drivers.infinidat.infinibox.infinidat_connection_opts, manila.share.drivers.infinidat.infinibox.infinidat_general_opts, manila.share.drivers.infortrend.driver.infortrend_nas_opts, manila.share.drivers.inspur.as13000.as13000_nas.inspur_as13000_opts, manila.share.drivers.inspur.instorage.instorage.instorage_opts, manila.share.drivers.maprfs.maprfs_native.maprfs_native_share_opts, manila.share.drivers.lvm.share_opts, manila.share.drivers.netapp.options.netapp_proxy_opts, manila.share.drivers.netapp.options.netapp_connection_opts, manila.share.drivers.netapp.options.netapp_transport_opts, manila.share.drivers.netapp.options.netapp_basicauth_opts, manila.share.drivers.netapp.options.netapp_provisioning_opts, manila.share.drivers.netapp.options.netapp_data_motion_opts, manila.share.drivers.nexenta.options.nexenta_connection_opts, manila.share.drivers.nexenta.options.nexenta_dataset_opts, manila.share.drivers.nexenta.options.nexenta_nfs_opts, manila.share.drivers.qnap.qnap.qnap_manila_opts, manila.share.drivers.quobyte.quobyte.quobyte_manila_share_opts, manila.share.drivers.service_instance.common_opts, manila.share.drivers.service_instance.no_share_servers_handling_mode_opts, manila.share.drivers.service_instance.share_servers_handling_mode_opts, manila.share.drivers.tegile.tegile.tegile_opts, manila.share.drivers.windows.service_instance.windows_share_server_opts, manila.share.drivers.windows.winrm_helper.winrm_opts, manila.share.drivers.zfsonlinux.driver.zfsonlinux_opts, manila.share.drivers.zfssa.zfssashare.ZFSSA_OPTS, manila.share.hook.hook_options, manila.share.manager.share_manager_opts, manila.volume._volume_opts, manila.wsgi.eventlet_server.socket_opts, ] _opts = [ (None, list(itertools.chain(*_global_opt_lists))), (manila.volume.cinder.CINDER_GROUP, list(itertools.chain(manila.volume.cinder.cinder_opts))), (manila.compute.nova.NOVA_GROUP, list(itertools.chain(manila.compute.nova.nova_opts))), (manila.network.neutron.api.NEUTRON_GROUP, list(itertools.chain(manila.network.neutron.api.neutron_opts))), ] _opts.extend(oslo_concurrency.opts.list_opts()) _opts.extend(oslo_log._options.list_opts()) _opts.extend(oslo_middleware.opts.list_opts()) _opts.extend(oslo_policy.opts.list_opts()) _opts.extend(manila.network.neutron.api.list_opts()) _opts.extend(manila.compute.nova.list_opts()) _opts.extend(manila.volume.cinder.list_opts()) _opts.extend(oslo_service.sslutils.list_opts()) def list_opts(): """Return a list of oslo.config options available in Manila.""" return [(m, copy.deepcopy(o)) for m, o in _opts] manila-10.0.0/manila/policies/0000775000175000017500000000000013656750362016140 5ustar zuulzuul00000000000000manila-10.0.0/manila/policies/share_network_subnet.py0000664000175000017500000000433413656750227022751 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_network_subnet:%s' share_network_subnet_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.RULE_DEFAULT, description="Create a new share network subnet.", operations=[ { 'method': 'POST', 'path': '/share-networks/{share_network_id}/subnets' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_DEFAULT, description="Delete a share network subnet.", operations=[ { 'method': 'DELETE', 'path': '/share-networks/{share_network_id}/subnets/' '{share_network_subnet_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description="Shows a share network subnet.", operations=[ { 'method': 'GET', 'path': '/share-networks/{share_network_id}/subnets/' '{share_network_subnet_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_DEFAULT, description="Get all share network subnets.", operations=[ { 'method': 'GET', 'path': '/share-networks/{share_network_id}/subnets' } ]), ] def list_rules(): return share_network_subnet_policies manila-10.0.0/manila/policies/__init__.py0000664000175000017500000000620613656750227020255 0ustar zuulzuul00000000000000# Copyright (c) 2017 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import itertools from manila.policies import availability_zone from manila.policies import base from manila.policies import message from manila.policies import quota_class_set from manila.policies import quota_set from manila.policies import scheduler_stats from manila.policies import security_service from manila.policies import service from manila.policies import share_access from manila.policies import share_access_metadata from manila.policies import share_export_location from manila.policies import share_group from manila.policies import share_group_snapshot from manila.policies import share_group_type from manila.policies import share_group_types_spec from manila.policies import share_instance from manila.policies import share_instance_export_location from manila.policies import share_network from manila.policies import share_network_subnet from manila.policies import share_replica from manila.policies import share_replica_export_location from manila.policies import share_server from manila.policies import share_snapshot from manila.policies import share_snapshot_export_location from manila.policies import share_snapshot_instance from manila.policies import share_snapshot_instance_export_location from manila.policies import share_type from manila.policies import share_types_extra_spec from manila.policies import shares def list_rules(): return itertools.chain( base.list_rules(), availability_zone.list_rules(), scheduler_stats.list_rules(), shares.list_rules(), share_instance_export_location.list_rules(), share_type.list_rules(), share_types_extra_spec.list_rules(), share_snapshot.list_rules(), share_snapshot_export_location.list_rules(), share_snapshot_instance.list_rules(), share_snapshot_instance_export_location.list_rules(), share_server.list_rules(), service.list_rules(), quota_set.list_rules(), quota_class_set.list_rules(), share_group_types_spec.list_rules(), share_group_type.list_rules(), share_group_snapshot.list_rules(), share_group.list_rules(), share_replica.list_rules(), share_replica_export_location.list_rules(), share_network.list_rules(), share_network_subnet.list_rules(), security_service.list_rules(), share_export_location.list_rules(), share_instance.list_rules(), message.list_rules(), share_access.list_rules(), share_access_metadata.list_rules(), ) manila-10.0.0/manila/policies/share_server.py0000664000175000017500000000602313656750227021203 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
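# Example (illustrative sketch, assuming oslo.policy's standard Enforcer API):
# the per-module rule lists in this package are aggregated by
# manila.policies.list_rules() above and registered as defaults on an
# enforcer before any request is authorized, along the lines of:
#
#     from oslo_config import cfg
#     from oslo_policy import policy as oslo_policy
#
#     from manila import policies
#
#     enforcer = oslo_policy.Enforcer(cfg.CONF)
#     enforcer.register_defaults(policies.list_rules())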
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_server:%s' share_server_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_ADMIN_API, description="Get share servers.", operations=[ { 'method': 'GET', 'path': '/share-servers', }, { 'method': 'GET', 'path': '/share-servers?{query}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_ADMIN_API, description="Show share server.", operations=[ { 'method': 'GET', 'path': '/share-servers/{server_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'details', check_str=base.RULE_ADMIN_API, description="Get share server details.", operations=[ { 'method': 'GET', 'path': '/share-servers/{server_id}/details', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_ADMIN_API, description="Delete share server.", operations=[ { 'method': 'DELETE', 'path': '/share-servers/{server_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'manage_share_server', check_str=base.RULE_ADMIN_API, description="Manage share server.", operations=[ { 'method': 'POST', 'path': '/share-servers/manage' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'unmanage_share_server', check_str=base.RULE_ADMIN_API, description="Unmanage share server.", operations=[ { 'method': 'POST', 'path': '/share-servers/{share_server_id}/action' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'reset_status', check_str=base.RULE_ADMIN_API, description="Reset the status of a share server.", operations=[ { 'method': 'POST', 'path': '/share-servers/{share_server_id}/action' } ]), ] def list_rules(): return share_server_policies manila-10.0.0/manila/policies/share_snapshot.py0000664000175000017500000000757713656750227021553 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
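# Example (illustrative sketch, using a hypothetical enforcer and request
# context): once the defaults are registered, an API handler enforces a rule
# by the name declared above, for instance "share_server:index":
#
#     allowed = enforcer.authorize(
#         'share_server:index',
#         target={'project_id': context.project_id},
#         creds=context.to_policy_values(),
#         do_raise=False)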
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_snapshot:%s' share_snapshot_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get_snapshot', check_str=base.RULE_DEFAULT, description="Get share snapshot.", operations=[ { 'method': 'GET', 'path': '/snapshots/{snapshot_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get_all_snapshots', check_str=base.RULE_DEFAULT, description="Get all share snapshots.", operations=[ { 'method': 'GET', 'path': '/snapshots' }, { 'method': 'GET', 'path': '/snapshots/detail' }, { 'method': 'GET', 'path': '/snapshots?{query}' }, { 'method': 'GET', 'path': '/snapshots/detail?{query}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'force_delete', check_str=base.RULE_ADMIN_API, description="Force Delete a share snapshot.", operations=[ { 'method': 'DELETE', 'path': '/snapshots/{snapshot_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'manage_snapshot', check_str=base.RULE_ADMIN_API, description="Manage share snapshot.", operations=[ { 'method': 'POST', 'path': '/snapshots/manage' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'unmanage_snapshot', check_str=base.RULE_ADMIN_API, description="Unmanage share snapshot.", operations=[ { 'method': 'POST', 'path': '/snapshots/{snapshot_id}/action' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'reset_status', check_str=base.RULE_ADMIN_API, description="Reset status.", operations=[ { 'method': 'POST', 'path': '/snapshots/{snapshot_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'access_list', check_str=base.RULE_DEFAULT, description="List access rules of a share snapshot.", operations=[ { 'method': 'GET', 'path': '/snapshots/{snapshot_id}/access-list' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'allow_access', check_str=base.RULE_DEFAULT, description="Allow access to a share snapshot.", operations=[ { 'method': 'POST', 'path': '/snapshots/{snapshot_id}/action' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'deny_access', check_str=base.RULE_DEFAULT, description="Deny access to a share snapshot.", operations=[ { 'method': 'POST', 'path': '/snapshots/{snapshot_id}/action' } ]), ] def list_rules(): return share_snapshot_policies manila-10.0.0/manila/policies/share_export_location.py0000664000175000017500000000275113656750227023112 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_export_location:%s' share_export_location_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_DEFAULT, description="Get all export locations of a given share.", operations=[ { 'method': 'GET', 'path': '/shares/{share_id}/export_locations', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description="Get details about the requested export location.", operations=[ { 'method': 'GET', 'path': ('/shares/{share_id}/export_locations/' '{export_location_id}'), } ]), ] def list_rules(): return share_export_location_policies manila-10.0.0/manila/policies/share_replica.py0000664000175000017500000000760413656750227021322 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_replica:%s' share_replica_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.RULE_DEFAULT, description="Create share replica.", operations=[ { 'method': 'POST', 'path': '/share-replicas', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get_all', check_str=base.RULE_DEFAULT, description="Get all share replicas.", operations=[ { 'method': 'GET', 'path': '/share-replicas', }, { 'method': 'GET', 'path': '/share-replicas/detail', }, { 'method': 'GET', 'path': '/share-replicas/detail?share_id={share_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description="Get details of a share replica.", operations=[ { 'method': 'GET', 'path': '/share-replicas/{share_replica_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_DEFAULT, description="Delete a share replica.", operations=[ { 'method': 'DELETE', 'path': '/share-replicas/{share_replica_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'force_delete', check_str=base.RULE_ADMIN_API, description="Force delete a share replica.", operations=[ { 'method': 'POST', 'path': '/share-replicas/{share_replica_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'promote', check_str=base.RULE_DEFAULT, description="Promote a non-active share replica to active.", operations=[ { 'method': 'POST', 'path': '/share-replicas/{share_replica_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'resync', check_str=base.RULE_ADMIN_API, description="Resync a share replica that is out of sync.", operations=[ { 'method': 'POST', 'path': '/share-replicas/{share_replica_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'reset_replica_state', check_str=base.RULE_ADMIN_API, description="Reset share replica's replica_state attribute.", operations=[ { 'method': 'POST', 'path': '/share-replicas/{share_replica_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'reset_status', 
check_str=base.RULE_ADMIN_API, description="Reset share replica's status.", operations=[ { 'method': 'POST', 'path': '/share-replicas/{share_replica_id}/action', } ]), ] def list_rules(): return share_replica_policies manila-10.0.0/manila/policies/share_instance_export_location.py0000664000175000017500000000321313656750227024770 0ustar zuulzuul00000000000000# Copyright (c) 2017 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_instance_export_location:%s' share_export_location_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_ADMIN_API, description='Return data about the requested export location.', operations=[ { 'method': 'POST', 'path': ('/share_instances/{share_instance_id}/' 'export_locations'), } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_ADMIN_API, description='Return data about the requested export location.', operations=[ { 'method': 'GET', 'path': ('/share_instances/{share_instance_id}/' 'export_locations/{export_location_id}'), } ]), ] def list_rules(): return share_export_location_policies manila-10.0.0/manila/policies/share_group_snapshot.py0000664000175000017500000000675113656750227022760 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
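# Example (illustrative sketch): each entry above is a RuleDefault, so a
# deployment can audit which operations default to admin-only by inspecting
# check_str, e.g.:
#
#     from manila import policies
#     from manila.policies import base
#
#     admin_only = sorted(r.name for r in policies.list_rules()
#                         if r.check_str == base.RULE_ADMIN_API)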
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_group_snapshot:%s' share_group_snapshot_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.RULE_DEFAULT, description="Create a new share group snapshot.", operations=[ { 'method': 'POST', 'path': '/share-group-snapshots' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get', check_str=base.RULE_DEFAULT, description="Get details of a share group snapshot.", operations=[ { 'method': 'GET', 'path': '/share-group-snapshots/{share_group_snapshot_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get_all', check_str=base.RULE_DEFAULT, description="Get all share group snapshots.", operations=[ { 'method': 'GET', 'path': '/share-group-snapshots' }, { 'method': 'GET', 'path': '/share-group-snapshots/detail' }, { 'method': 'GET', 'path': '/share-group-snapshots/{query}' }, { 'method': 'GET', 'path': '/share-group-snapshots/detail?{query}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_DEFAULT, description="Update a share group snapshot.", operations=[ { 'method': 'PUT', 'path': '/share-group-snapshots/{share_group_snapshot_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_DEFAULT, description="Delete a share group snapshot.", operations=[ { 'method': 'DELETE', 'path': '/share-group-snapshots/{share_group_snapshot_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'force_delete', check_str=base.RULE_ADMIN_API, description="Force delete a share group snapshot.", operations=[ { 'method': 'POST', 'path': '/share-group-snapshots/{share_group_snapshot_id}/' 'action' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'reset_status', check_str=base.RULE_ADMIN_API, description="Reset a share group snapshot's status.", operations=[ { 'method': 'POST', 'path': '/share-group-snapshots/{share_group_snapshot_id}/' 'action' } ]), ] def list_rules(): return share_group_snapshot_policies manila-10.0.0/manila/policies/share_replica_export_location.py0000664000175000017500000000311613656750227024605 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_replica_export_location:%s' share_replica_export_location_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_DEFAULT, description="Get all export locations of a given share replica.", operations=[ { 'method': 'GET', 'path': '/share-replicas/{share_replica_id}/export-locations', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description="Get details about the requested share replica export " "location.", operations=[ { 'method': 'GET', 'path': ('/share-replicas/{share_replica_id}/export-locations/' '{export_location_id}'), } ]), ] def list_rules(): return share_replica_export_location_policies manila-10.0.0/manila/policies/share_snapshot_instance.py0000664000175000017500000000447013656750227023424 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_snapshot_instance:%s' share_snapshot_instance_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_ADMIN_API, description="Get share snapshot instance.", operations=[ { 'method': 'GET', 'path': '/snapshot-instances/{snapshot_instance_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_ADMIN_API, description="Get all share snapshot instances.", operations=[ { 'method': 'GET', 'path': '/snapshot-instances', }, { 'method': 'GET', 'path': '/snapshot-instances?{query}', }, ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'detail', check_str=base.RULE_ADMIN_API, description="Get details of share snapshot instances.", operations=[ { 'method': 'GET', 'path': '/snapshot-instances/detail', }, { 'method': 'GET', 'path': '/snapshot-instances/detail?{query}', }, ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'reset_status', check_str=base.RULE_ADMIN_API, description="Reset share snapshot instance's status.", operations=[ { 'method': 'POST', 'path': '/snapshot-instances/{snapshot_instance_id}/action', } ]), ] def list_rules(): return share_snapshot_instance_policies manila-10.0.0/manila/policies/quota_set.py0000664000175000017500000000575113656750227020526 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'quota_set:%s' quota_set_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_ADMIN_API, description=("Update the quotas for a project/user and/or share " "type."), operations=[ { 'method': 'PUT', 'path': '/quota-sets/{tenant_id}' }, { 'method': 'PUT', 'path': '/quota-sets/{tenant_id}?user_id={user_id}' }, { 'method': 'PUT', 'path': '/quota-sets/{tenant_id}?share_type={share_type_id}' }, { 'method': 'PUT', 'path': '/os-quota-sets/{tenant_id}' }, { 'method': 'PUT', 'path': '/os-quota-sets/{tenant_id}?user_id={user_id}' }, ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description="List the quotas for a tenant/user.", operations=[ { 'method': 'GET', 'path': '/quota-sets/{tenant_id}/defaults' }, { 'method': 'GET', 'path': '/os-quota-sets/{tenant_id}/defaults' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_ADMIN_API, description=("Delete quota for a tenant/user or " "tenant/share-type. The quota will revert back to " "default (Admin only)."), operations=[ { 'method': 'DELETE', 'path': '/quota-sets/{tenant_id}' }, { 'method': 'DELETE', 'path': '/quota-sets/{tenant_id}?user_id={user_id}' }, { 'method': 'DELETE', 'path': '/quota-sets/{tenant_id}?share_type={share_type_id}' }, { 'method': 'DELETE', 'path': '/os-quota-sets/{tenant_id}' }, { 'method': 'DELETE', 'path': '/os-quota-sets/{tenant_id}?user_id={user_id}' }, ]), ] def list_rules(): return quota_set_policies manila-10.0.0/manila/policies/share_snapshot_instance_export_location.py0000664000175000017500000000325413656750227026714 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_snapshot_instance_export_location:%s' share_snapshot_instance_export_location_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_ADMIN_API, description="List export locations of a share snapshot instance.", operations=[ { 'method': 'GET', 'path': ('/snapshot-instances/{snapshot_instance_id}/' 'export-locations'), } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_ADMIN_API, description="Show details of a specified export location of a share " "snapshot instance.", operations=[ { 'method': 'GET', 'path': ('/snapshot-instances/{snapshot_instance_id}/' 'export-locations/{export_location_id}'), } ]), ] def list_rules(): return share_snapshot_instance_export_location_policies manila-10.0.0/manila/policies/share_access_metadata.py0000664000175000017500000000302113656750227022771 0ustar zuulzuul00000000000000# Copyright 2018 Huawei Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_access_metadata:%s' share_access_rule_metadata_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_DEFAULT, description="Set metadata for a share access rule.", operations=[ { 'method': 'PUT', 'path': '/share-access-rules/{share_access_id}/metadata' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_DEFAULT, description="Delete metadata for a share access rule.", operations=[ { 'method': 'DELETE', 'path': '/share-access-rules/{share_access_id}/metadata/{key}' } ]), ] def list_rules(): return share_access_rule_metadata_policies manila-10.0.0/manila/policies/share_network.py0000664000175000017500000000770413656750227021375 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_network:%s' share_network_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.RULE_DEFAULT, description="Create share network.", operations=[ { 'method': 'POST', 'path': '/share-networks' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description="Get details of a share network.", operations=[ { 'method': 'GET', 'path': '/share-networks/{share_network_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_DEFAULT, description="Get all share networks.", operations=[ { 'method': 'GET', 'path': '/share-networks' }, { 'method': 'GET', 'path': '/share-networks?{query}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'detail', check_str=base.RULE_DEFAULT, description="Get details of share networks .", operations=[ { 'method': 'GET', 'path': '/share-networks/detail?{query}' }, { 'method': 'GET', 'path': '/share-networks/detail' }, ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_DEFAULT, description="Update a share network.", operations=[ { 'method': 'PUT', 'path': '/share-networks/{share_network_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_DEFAULT, description="Delete a share network.", operations=[ { 'method': 'DELETE', 'path': '/share-networks/{share_network_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'add_security_service', check_str=base.RULE_DEFAULT, description="Add security service to share network.", operations=[ { 'method': 'POST', 'path': '/share-networks/{share_network_id}/action' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'remove_security_service', check_str=base.RULE_DEFAULT, description="Remove security service from share network.", operations=[ { 'method': 'POST', 'path': '/share-networks/{share_network_id}/action' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get_all_share_networks', check_str=base.RULE_ADMIN_API, description="Get share networks belonging to all projects.", operations=[ { 'method': 'GET', 'path': '/share-networks?all_tenants=1' }, { 'method': 'GET', 'path': '/share-networks/detail?all_tenants=1' } ]), ] def list_rules(): return share_network_policies manila-10.0.0/manila/policies/shares.py0000664000175000017500000002777313656750227020017 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share:%s' # These deprecated rules can be removed in the 'Train' release. 
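# Worth noting for the two public-share rules below: when a
# DocumentedRuleDefault carries a deprecated_rule, oslo.policy normally keeps
# honoring the old check string alongside the new default for the length of
# the deprecation period and logs a warning, so deployers can adjust their
# policy overrides before the old behavior disappears.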
deprecated_create_public_share_rule = policy.DeprecatedRule( name=BASE_POLICY_NAME % 'create_public_share', check_str=base.RULE_DEFAULT, ) deprecated_set_public_share_rule = policy.DeprecatedRule( name=BASE_POLICY_NAME % 'set_public_share', check_str=base.RULE_DEFAULT, ) shares_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str="", description="Create share.", operations=[ { 'method': 'POST', 'path': '/shares', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create_public_share', check_str=base.RULE_ADMIN_API, description="Create shares visible across all projects in the cloud. " "This option will default to rule:admin_api in the " "9.0.0 (Train) release of the OpenStack Shared File " "Systems (manila) service.", deprecated_rule=deprecated_create_public_share_rule, deprecated_reason="Public shares must be accessible across the " "cloud, irrespective of project namespaces. To " "avoid unintended consequences, rule:admin_api " "serves as a better default for this policy.", deprecated_since='S', operations=[ { 'method': 'POST', 'path': '/shares', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get', check_str=base.RULE_DEFAULT, description="Get share.", operations=[ { 'method': 'GET', 'path': '/shares/{share_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get_all', check_str=base.RULE_DEFAULT, description="List shares.", operations=[ { 'method': 'GET', 'path': '/shares', }, { 'method': 'GET', 'path': '/shares/detail', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_DEFAULT, description="Update share.", operations=[ { 'method': 'PUT', 'path': '/shares', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'set_public_share', check_str=base.RULE_ADMIN_API, description="Update shares to be visible across all projects in the " "cloud. This option will default to rule:admin_api in the " "9.0.0 (Train) release of the OpenStack Shared File " "Systems (manila) service.", deprecated_rule=deprecated_set_public_share_rule, deprecated_reason="Public shares must be accessible across the " "cloud, irrespective of project namespaces. 
To " "avoid unintended consequences, rule:admin_api " "serves as a better default for this policy.", deprecated_since='S', operations=[ { 'method': 'PUT', 'path': '/shares', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_DEFAULT, description="Delete share.", operations=[ { 'method': 'DELETE', 'path': '/shares/{share_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'force_delete', check_str=base.RULE_ADMIN_API, description="Force Delete a share.", operations=[ { 'method': 'DELETE', 'path': '/shares/{share_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'manage', check_str=base.RULE_ADMIN_API, description="Manage share.", operations=[ { 'method': 'POST', 'path': '/shares/manage', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'unmanage', check_str=base.RULE_ADMIN_API, description="Unmanage share.", operations=[ { 'method': 'POST', 'path': '/shares/unmanage', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create_snapshot', check_str=base.RULE_DEFAULT, description="Create share snapshot.", operations=[ { 'method': 'POST', 'path': '/snapshots', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'list_by_host', check_str=base.RULE_ADMIN_API, description="List share by host.", operations=[ { 'method': 'GET', 'path': '/shares', }, { 'method': 'GET', 'path': '/shares/detail', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'list_by_share_server_id', check_str=base.RULE_ADMIN_API, description="List share by server id.", operations=[ { 'method': 'GET', 'path': '/shares' }, { 'method': 'GET', 'path': '/shares/detail', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'access_get', check_str=base.RULE_DEFAULT, description="Get share access rule, it under deny access operation.", operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'access_get_all', check_str=base.RULE_DEFAULT, description="List share access rules.", operations=[ { 'method': 'GET', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'extend', check_str=base.RULE_DEFAULT, description="Extend share.", operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'shrink', check_str=base.RULE_DEFAULT, description="Shrink share.", operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'migration_start', check_str=base.RULE_ADMIN_API, description="Migrate a share to the specified host.", operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'migration_complete', check_str=base.RULE_ADMIN_API, description="Invokes 2nd phase of share migration.", operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'migration_cancel', check_str=base.RULE_ADMIN_API, description="Attempts to cancel share migration.", operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'migration_get_progress', check_str=base.RULE_ADMIN_API, description=("Retrieve share migration progress for a given " "share."), operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'reset_task_state', 
check_str=base.RULE_ADMIN_API, description=("Reset task state."), operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'reset_status', check_str=base.RULE_ADMIN_API, description=("Reset status."), operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'revert_to_snapshot', check_str=base.RULE_DEFAULT, description=("Revert a share to a snapshot."), operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'allow_access', check_str=base.RULE_DEFAULT, description=("Add share access rule."), operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'deny_access', check_str=base.RULE_DEFAULT, description=("Remove share access rule."), operations=[ { 'method': 'POST', 'path': '/shares/{share_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete_snapshot', check_str=base.RULE_DEFAULT, description=("Delete share snapshot."), operations=[ { 'method': 'DELETE', 'path': '/snapshots/{snapshot_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'snapshot_update', check_str=base.RULE_DEFAULT, description=("Update share snapshot."), operations=[ { 'method': 'PUT', 'path': '/snapshots/{snapshot_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update_share_metadata', check_str=base.RULE_DEFAULT, description=("Update share metadata."), operations=[ { 'method': 'PUT', 'path': '/shares/{share_id}/metadata', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete_share_metadata', check_str=base.RULE_DEFAULT, description=("Delete share metadata."), operations=[ { 'method': 'DELETE', 'path': '/shares/{share_id}/metadata/{key}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get_share_metadata', check_str=base.RULE_DEFAULT, description=("Get share metadata."), operations=[ { 'method': 'GET', 'path': '/shares/{share_id}/metadata', } ]), ] def list_rules(): return shares_policies manila-10.0.0/manila/policies/share_snapshot_export_location.py0000664000175000017500000000307013656750227025024 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
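# A hypothetical sketch (not the actual manila.policies package __init__) of
# how per-resource list_rules() helpers such as the ones in this package are
# typically chained into a single default list for the enforcer and the
# sample-file generator.  The selection of modules below is arbitrary and the
# function name is invented for illustration.
def _example_collect_default_rules():
    """Sketch: combine several per-resource list_rules() results."""
    import itertools

    from manila.policies import base
    from manila.policies import share_snapshot_export_location
    from manila.policies import shares

    return list(itertools.chain(
        base.list_rules(),
        shares.list_rules(),
        share_snapshot_export_location.list_rules(),
    ))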
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_snapshot_export_location:%s' share_snapshot_export_location_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_DEFAULT, description="List export locations of a share snapshot.", operations=[ { 'method': 'GET', 'path': '/snapshots/{snapshot_id}/export-locations/', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description="Get details of a specified export location of a " "share snapshot.", operations=[ { 'method': 'GET', 'path': ('/snapshots/{snapshot_id}/' 'export-locations/{export_location_id}'), } ]), ] def list_rules(): return share_snapshot_export_location_policies manila-10.0.0/manila/policies/share_group.py0000664000175000017500000000630613656750227021035 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_group:%s' share_group_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.RULE_DEFAULT, description="Create share group.", operations=[ { 'method': 'POST', 'path': '/share-groups' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get', check_str=base.RULE_DEFAULT, description="Get details of a share group.", operations=[ { 'method': 'GET', 'path': '/share-groups/{share_group_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get_all', check_str=base.RULE_DEFAULT, description="Get all share groups.", operations=[ { 'method': 'GET', 'path': '/share-groups' }, { 'method': 'GET', 'path': '/share-groups/detail' }, { 'method': 'GET', 'path': '/share-groups?{query}' }, { 'method': 'GET', 'path': '/share-groups/detail?{query}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_DEFAULT, description="Update share group.", operations=[ { 'method': 'PUT', 'path': '/share-groups/{share_group_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_DEFAULT, description="Delete share group.", operations=[ { 'method': 'DELETE', 'path': '/share-groups/{share_group_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'force_delete', check_str=base.RULE_ADMIN_API, description="Force delete a share group.", operations=[ { 'method': 'POST', 'path': '/share-groups/{share_group_id}/action' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'reset_status', check_str=base.RULE_ADMIN_API, description="Reset share group's status.", operations=[ { 'method': 'POST', 'path': '/share-groups/{share_group_id}/action' } ]), ] def list_rules(): return share_group_policies manila-10.0.0/manila/policies/message.py0000664000175000017500000000332013656750227020134 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'message:%s' message_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get', check_str=base.RULE_DEFAULT, description="Get details of a given message.", operations=[ { 'method': 'GET', 'path': '/messages/{message_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get_all', check_str=base.RULE_DEFAULT, description="Get all messages.", operations=[ { 'method': 'GET', 'path': '/messages' }, { 'method': 'GET', 'path': '/messages?{query}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_DEFAULT, description="Delete a message.", operations=[ { 'method': 'DELETE', 'path': '/messages/{message_id}' } ]), ] def list_rules(): return message_policies manila-10.0.0/manila/policies/service.py0000664000175000017500000000373713656750227020164 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'service:%s' service_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_ADMIN_API, description="Return a list of all running services.", operations=[ { 'method': 'GET', 'path': '/os-services', }, { 'method': 'GET', 'path': '/os-services?{query}', }, { 'method': 'GET', 'path': '/services', }, { 'method': 'GET', 'path': '/services?{query}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_ADMIN_API, description="Enable/Disable scheduling for a service.", operations=[ { 'method': 'PUT', 'path': '/os-services/disable', }, { 'method': 'PUT', 'path': '/os-services/enable', }, { 'method': 'PUT', 'path': '/services/disable', }, { 'method': 'PUT', 'path': '/services/enable', }, ]), ] def list_rules(): return service_policies manila-10.0.0/manila/policies/share_type.py0000664000175000017500000000715713656750227020667 0ustar zuulzuul00000000000000# Copyright (c) 2017 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_type:%s' share_type_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.RULE_ADMIN_API, description='Create share type.', operations=[ { 'method': 'POST', 'path': '/types', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_ADMIN_API, description='Update share type.', operations=[ { 'method': 'PUT', 'path': '/types/{share_type_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description='Get share type.', operations=[ { 'method': 'GET', 'path': '/types/{share_type_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_DEFAULT, description='List share types.', operations=[ { 'method': 'GET', 'path': '/types', }, { 'method': 'GET', 'path': '/types?is_public=all', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'default', check_str=base.RULE_DEFAULT, description='Get default share type.', operations=[ { 'method': 'GET', 'path': '/types/default', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_ADMIN_API, description='Delete share type.', operations=[ { 'method': 'DELETE', 'path': '/types/{share_type_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'list_project_access', check_str=base.RULE_ADMIN_API, description='List share type project access.', operations=[ { 'method': 'GET', 'path': '/types/{share_type_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'add_project_access', check_str=base.RULE_ADMIN_API, description='Add share type to project.', operations=[ { 'method': 'POST', 'path': '/types/{share_type_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'remove_project_access', check_str=base.RULE_ADMIN_API, description='Remove share type from project.', operations=[ { 'method': 'POST', 'path': '/types/{share_type_id}/action', } ]), ] def list_rules(): return share_type_policies manila-10.0.0/manila/policies/scheduler_stats.py0000664000175000017500000000341113656750227021705 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
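# The descriptions and operations attached to these DocumentedRuleDefault
# entries are what oslo.policy's sample generator renders for deployers.
# With manila installed, a command along the lines of
#     oslopolicy-sample-generator --namespace manila --output-file policy.yaml.sample
# should emit a commented sample built from these defaults (the 'manila'
# namespace is assumed from the project's oslo.policy entry point).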
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'scheduler_stats:pools:%s' scheduler_stats_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_ADMIN_API, description="Get information regarding backends " "(and storage pools) known to the scheduler.", operations=[ { 'method': 'GET', 'path': '/scheduler-stats/pools' }, { 'method': 'GET', 'path': '/scheduler-stats/pools?{query}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'detail', check_str=base.RULE_ADMIN_API, description="Get detailed information regarding backends " "(and storage pools) known to the scheduler.", operations=[ { 'method': 'GET', 'path': '/scheduler-stats/pools/detail?{query}' }, { 'method': 'GET', 'path': '/scheduler-stats/pools/detail' } ]), ] def list_rules(): return scheduler_stats_policies manila-10.0.0/manila/policies/share_group_type.py0000664000175000017500000000703413656750227022075 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_group_type:%s' share_group_type_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.RULE_ADMIN_API, description="Create a new share group type.", operations=[ { 'method': 'POST', 'path': '/share-group-types', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_DEFAULT, description="Get the list of share group types.", operations=[ { 'method': 'GET', 'path': '/share-group-types', }, { 'method': 'GET', 'path': '/share-group-types?is_public=all', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description="Get details regarding the specified share group type.", operations=[ { 'method': 'GET', 'path': '/share-group-types/{share_group_type_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'default', check_str=base.RULE_DEFAULT, description="Get the default share group type.", operations=[ { 'method': 'GET', 'path': '/share-group-types/default', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_ADMIN_API, description="Delete an existing group type.", operations=[ { 'method': 'DELETE', 'path': '/share-group-types/{share_group_type_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'list_project_access', check_str=base.RULE_ADMIN_API, description="Get project access by share group type.", operations=[ { 'method': 'POST', 'path': '/share-group-types/{share_group_type_id}/access', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'add_project_access', check_str=base.RULE_ADMIN_API, description="Allow project to use the share group type.", operations=[ { 'method': 'POST', 'path': '/share-group-types/{share_group_type_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'remove_project_access', check_str=base.RULE_ADMIN_API, description="Deny project access to use the share group type.", 
operations=[ { 'method': 'POST', 'path': '/share-group-types/{share_group_type_id}/action', } ]), ] def list_rules(): return share_group_type_policies manila-10.0.0/manila/policies/share_access.py0000664000175000017500000000301513656750227021134 0ustar zuulzuul00000000000000# Copyright 2018 Huawei Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_access_rule:%s' share_access_rule_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get', check_str=base.RULE_DEFAULT, description="Get details of a share access rule.", operations=[ { 'method': 'GET', 'path': '/share-access-rules/{share_access_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_DEFAULT, description="List access rules of a given share.", operations=[ { 'method': 'GET', 'path': ('/share-access-rules?share_id={share_id}' '&key1=value1&key2=value2') } ]), ] def list_rules(): return share_access_rule_policies manila-10.0.0/manila/policies/quota_class_set.py0000664000175000017500000000313013656750227021700 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'quota_class_set:%s' quota_class_set_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_ADMIN_API, description="Update quota class.", operations=[ { 'method': 'PUT', 'path': '/quota-class-sets/{class_name}' }, { 'method': 'PUT', 'path': '/os-quota-class-sets/{class_name}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description="Get quota class.", operations=[ { 'method': 'GET', 'path': '/quota-class-sets/{class_name}' }, { 'method': 'GET', 'path': '/os-quota-class-sets/{class_name}' } ]), ] def list_rules(): return quota_class_set_policies manila-10.0.0/manila/policies/share_group_types_spec.py0000664000175000017500000000507513656750227023275 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_group_types_spec:%s' share_group_types_spec_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.RULE_ADMIN_API, description="Create share group type specs.", operations=[ { 'method': 'POST', 'path': '/share-group-types/{share_group_type_id}/group-specs' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_ADMIN_API, description="Get share group type specs.", operations=[ { 'method': 'GET', 'path': '/share-group-types/{share_group_type_id}/group-specs', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_ADMIN_API, description="Get details of a share group type spec.", operations=[ { 'method': 'GET', 'path': ('/share-group-types/{share_group_type_id}/' 'group-specs/{key}'), } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_ADMIN_API, description="Update a share group type spec.", operations=[ { 'method': 'PUT', 'path': ('/share-group-types/{share_group_type_id}' '/group-specs/{key}'), } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_ADMIN_API, description="Delete a share group type spec.", operations=[ { 'method': 'DELETE', 'path': ('/share-group-types/{share_group_type_id}/' 'group-specs/{key}'), } ]), ] def list_rules(): return share_group_types_spec_policies manila-10.0.0/manila/policies/base.py0000664000175000017500000000222413656750227017424 0ustar zuulzuul00000000000000# Copyright (c) 2017 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy RULE_ADMIN_OR_OWNER = 'rule:admin_or_owner' RULE_ADMIN_API = 'rule:admin_api' RULE_DEFAULT = 'rule:default' rules = [ policy.RuleDefault(name='context_is_admin', check_str='role:admin'), policy.RuleDefault( name='admin_or_owner', check_str='is_admin:True or project_id:%(project_id)s'), policy.RuleDefault(name='default', check_str=RULE_ADMIN_OR_OWNER), policy.RuleDefault(name='admin_api', check_str='is_admin:True'), ] def list_rules(): return rules manila-10.0.0/manila/policies/share_types_extra_spec.py0000664000175000017500000000463713656750227023267 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
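# A minimal, hypothetical sketch of how the base check strings referenced
# throughout this package behave: 'admin_or_owner' (see manila.policies.base
# earlier in this tree) passes when the caller is an admin or when the
# target's project_id matches the caller's.  The function name and the sample
# credential dictionaries are invented for illustration only.
def _example_admin_or_owner_check(project_id='demo-project-id'):
    """Sketch: evaluate the base 'admin_or_owner' rule for two callers."""
    from oslo_config import cfg
    from oslo_policy import policy as example_policy

    from manila.policies import base as example_base

    enforcer = example_policy.Enforcer(cfg.CONF)
    enforcer.register_defaults(example_base.list_rules())
    target = {'project_id': project_id}
    owner = {'project_id': project_id}            # same project -> passes
    stranger = {'project_id': 'another-project'}  # not admin, not owner
    return (enforcer.authorize('admin_or_owner', target, owner),
            enforcer.authorize('admin_or_owner', target, stranger))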
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_types_extra_spec:%s' share_types_extra_spec_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.RULE_ADMIN_API, description="Create share type extra spec.", operations=[ { 'method': 'POST', 'path': '/types/{share_type_id}/extra_specs', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_ADMIN_API, description="Get share type extra specs of a given share type.", operations=[ { 'method': 'GET', 'path': '/types/{share_type_id}/extra_specs', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_ADMIN_API, description="Get details of a share type extra spec.", operations=[ { 'method': 'GET', 'path': '/types/{share_type_id}/extra_specs/{extra_spec_id}', }, ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_ADMIN_API, description="Update share type extra spec.", operations=[ { 'method': 'PUT', 'path': '/types/{share_type_id}/extra_specs', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_ADMIN_API, description="Delete share type extra spec.", operations=[ { 'method': 'DELETE', 'path': '/types/{share_type_id}/extra_specs/{key}', } ]), ] def list_rules(): return share_types_extra_spec_policies manila-10.0.0/manila/policies/availability_zone.py0000664000175000017500000000226013656750227022217 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'availability_zone:%s' availability_zone_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_DEFAULT, description=("Get all storage availability zones."), operations=[ { 'method': 'GET', 'path': '/os-availability-zone', }, { 'method': 'GET', 'path': '/availability-zone', }, ]), ] def list_rules(): return availability_zone_policies manila-10.0.0/manila/policies/share_instance.py0000664000175000017500000000417513656750227021507 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'share_instance:%s' shares_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_ADMIN_API, description="Get all share instances.", operations=[ { 'method': 'GET', 'path': '/share_instances', }, { 'method': 'GET', 'path': '/share_instances?{query}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_ADMIN_API, description="Get details of a share instance.", operations=[ { 'method': 'GET', 'path': '/share_instances/{share_instance_id}' }, ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'force_delete', check_str=base.RULE_ADMIN_API, description="Force delete a share instance.", operations=[ { 'method': 'POST', 'path': '/share_instances/{share_instance_id}/action', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'reset_status', check_str=base.RULE_ADMIN_API, description="Reset share instance's status.", operations=[ { 'method': 'POST', 'path': '/share_instances/{share_instance_id}/action', } ]), ] def list_rules(): return shares_policies manila-10.0.0/manila/policies/security_service.py0000664000175000017500000000647613656750227022116 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from manila.policies import base BASE_POLICY_NAME = 'security_service:%s' security_service_policies = [ policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'create', check_str=base.RULE_DEFAULT, description="Create security service.", operations=[ { 'method': 'POST', 'path': '/security-services' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'show', check_str=base.RULE_DEFAULT, description="Get details of a security service.", operations=[ { 'method': 'GET', 'path': '/security-services/{security_service_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'detail', check_str=base.RULE_DEFAULT, description="Get details of all security services.", operations=[ { 'method': 'GET', 'path': '/security-services/detail?{query}' }, { 'method': 'GET', 'path': '/security-services/detail' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'index', check_str=base.RULE_DEFAULT, description="Get all security services.", operations=[ { 'method': 'GET', 'path': '/security-services' }, { 'method': 'GET', 'path': '/security-services?{query}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'update', check_str=base.RULE_DEFAULT, description="Update a security service.", operations=[ { 'method': 'PUT', 'path': '/security-services/{security_service_id}', } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'delete', check_str=base.RULE_DEFAULT, description="Delete a security service.", operations=[ { 'method': 'DELETE', 'path': '/security-services/{security_service_id}' } ]), policy.DocumentedRuleDefault( name=BASE_POLICY_NAME % 'get_all_security_services', check_str=base.RULE_ADMIN_API, description="Get security services of all projects.", operations=[ { 'method': 'GET', 'path': '/security-services?all_tenants=1' }, { 'method': 'GET', 'path': '/security-services/detail?all_tenants=1' } ]), ] def list_rules(): return security_service_policies manila-10.0.0/manila/hacking/0000775000175000017500000000000013656750362015735 5ustar zuulzuul00000000000000manila-10.0.0/manila/hacking/__init__.py0000664000175000017500000000000013656750227020034 0ustar zuulzuul00000000000000manila-10.0.0/manila/hacking/checks.py0000664000175000017500000002750013656750227017553 0ustar zuulzuul00000000000000# Copyright (c) 2012, Cloudscaling # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ast import re import six from hacking import core import pycodestyle """ Guidelines for writing new hacking checks - Use only for Manila specific tests. OpenStack general tests should be submitted to the common 'hacking' module. - Pick numbers in the range M3xx. Find the current test with the highest allocated number and then pick the next value. - Keep the test method code in the source file ordered based on the M3xx value. - List the new rule in the top level HACKING.rst file - Add test cases for each new rule to manila/tests/test_hacking.py """ UNDERSCORE_IMPORT_FILES = [] translated_log = re.compile( r"(.)*LOG\." 
r"(audit|debug|error|info|warn|warning|critical|exception)" r"\(" r"(_|_LE|_LI|_LW)" r"\(") string_translation = re.compile(r"[^_]*_\(\s*('|\")") underscore_import_check = re.compile(r"(.)*import _$") underscore_import_check_multi = re.compile(r"(.)*import (.)*_, (.)*") # We need this for cases where they have created their own _ function. custom_underscore_check = re.compile(r"(.)*_\s*=\s*(.)*") oslo_namespace_imports = re.compile(r"from[\s]*oslo[.](.*)") dict_constructor_with_list_copy_re = re.compile(r".*\bdict\((\[)?(\(|\[)") assert_no_xrange_re = re.compile(r"\s*xrange\s*\(") assert_True = re.compile(r".*assertEqual\(True, .*\)") no_log_warn = re.compile(r"\s*LOG.warn\(.*") third_party_mock = re.compile(r"^import.mock") from_third_party_mock = re.compile(r"^from.mock.import") class BaseASTChecker(ast.NodeVisitor): """Provides a simple framework for writing AST-based checks. Subclasses should implement visit_* methods like any other AST visitor implementation. When they detect an error for a particular node the method should call ``self.add_error(offending_node)``. Details about where in the code the error occurred will be pulled from the node object. Subclasses should also provide a class variable named CHECK_DESC to be used for the human readable error message. """ CHECK_DESC = 'No check message specified' def __init__(self, tree, filename): """This object is created automatically by pep8. :param tree: an AST tree :param filename: name of the file being analyzed (ignored by our checks) """ self._tree = tree self._errors = [] def run(self): """Called automatically by pep8.""" self.visit(self._tree) return self._errors def add_error(self, node, message=None): """Add an error caused by a node to the list of errors for pep8.""" message = message or self.CHECK_DESC error = (node.lineno, node.col_offset, message, self.__class__) self._errors.append(error) def _check_call_names(self, call_node, names): if isinstance(call_node, ast.Call): if isinstance(call_node.func, ast.Name): if call_node.func.id in names: return True return False @core.flake8ext def no_translate_logs(logical_line): if translated_log.match(logical_line): yield(0, "M359 Don't translate log messages!") class CheckLoggingFormatArgs(BaseASTChecker): """Check for improper use of logging format arguments. LOG.debug("Volume %s caught fire and is at %d degrees C and climbing.", ('volume1', 500)) The format arguments should not be a tuple as it is easy to miss. """ name = "check_logging_format_args" version = "1.0" CHECK_DESC = 'M310 Log method arguments should not be a tuple.' LOG_METHODS = [ 'debug', 'info', 'warn', 'warning', 'error', 'exception', 'critical', 'fatal', 'trace', 'log' ] def _find_name(self, node): """Return the fully qualified name or a Name or Attribute.""" if isinstance(node, ast.Name): return node.id elif (isinstance(node, ast.Attribute) and isinstance(node.value, (ast.Name, ast.Attribute))): method_name = node.attr obj_name = self._find_name(node.value) if obj_name is None: return None return obj_name + '.' 
+ method_name elif isinstance(node, six.string_types): return node else: # could be Subscript, Call or many more return None def visit_Call(self, node): """Look for the 'LOG.*' calls.""" # extract the obj_name and method_name if isinstance(node.func, ast.Attribute): obj_name = self._find_name(node.func.value) if isinstance(node.func.value, ast.Name): method_name = node.func.attr elif isinstance(node.func.value, ast.Attribute): obj_name = self._find_name(node.func.value) method_name = node.func.attr else: # could be Subscript, Call or many more return super(CheckLoggingFormatArgs, self).generic_visit(node) # obj must be a logger instance and method must be a log helper if (obj_name != 'LOG' or method_name not in self.LOG_METHODS): return super(CheckLoggingFormatArgs, self).generic_visit(node) # the call must have arguments if not len(node.args): return super(CheckLoggingFormatArgs, self).generic_visit(node) # any argument should not be a tuple for arg in node.args: if isinstance(arg, ast.Tuple): self.add_error(arg) return super(CheckLoggingFormatArgs, self).generic_visit(node) @core.flake8ext def check_explicit_underscore_import(logical_line, filename): """Check for explicit import of the _ function We need to ensure that any files that are using the _() function to translate logs are explicitly importing the _ function. We can't trust unit test to catch whether the import has been added so we need to check for it here. """ # Build a list of the files that have _ imported. No further # checking needed once it is found. if filename in UNDERSCORE_IMPORT_FILES: pass elif (underscore_import_check.match(logical_line) or underscore_import_check_multi.match(logical_line) or custom_underscore_check.match(logical_line)): UNDERSCORE_IMPORT_FILES.append(filename) elif string_translation.match(logical_line): yield(0, "M323: Found use of _() without explicit import of _ !") class CheckForStrUnicodeExc(BaseASTChecker): """Checks for the use of str() or unicode() on an exception. This currently only handles the case where str() or unicode() is used in the scope of an exception handler. If the exception is passed into a function, returned from an assertRaises, or used on an exception created in the same scope, this does not catch it. """ name = "check_for_str_unicode_exc" version = "1.0" CHECK_DESC = ('M325 str() and unicode() cannot be used on an ' 'exception. Remove or use six.text_type()') def __init__(self, tree, filename): super(CheckForStrUnicodeExc, self).__init__(tree, filename) self.name = [] self.already_checked = [] # Python 2 def visit_TryExcept(self, node): for handler in node.handlers: if handler.name: self.name.append(handler.name.id) super(CheckForStrUnicodeExc, self).generic_visit(node) self.name = self.name[:-1] else: super(CheckForStrUnicodeExc, self).generic_visit(node) # Python 3 def visit_ExceptHandler(self, node): if node.name: self.name.append(node.name) super(CheckForStrUnicodeExc, self).generic_visit(node) self.name = self.name[:-1] else: super(CheckForStrUnicodeExc, self).generic_visit(node) def visit_Call(self, node): if self._check_call_names(node, ['str', 'unicode']): if node not in self.already_checked: self.already_checked.append(node) if isinstance(node.args[0], ast.Name): if node.args[0].id in self.name: self.add_error(node.args[0]) super(CheckForStrUnicodeExc, self).generic_visit(node) class CheckForTransAdd(BaseASTChecker): """Checks for the use of concatenation on a translated string. 
Translations should not be concatenated with other strings, but should instead include the string being added to the translated string to give the translators the most information. """ name = "check_for_trans_add" version = "1.0" CHECK_DESC = ('M326 Translated messages cannot be concatenated. ' 'String should be included in translated message.') TRANS_FUNC = ['_', '_LI', '_LW', '_LE', '_LC'] def visit_BinOp(self, node): if isinstance(node.op, ast.Add): if self._check_call_names(node.left, self.TRANS_FUNC): self.add_error(node.left) elif self._check_call_names(node.right, self.TRANS_FUNC): self.add_error(node.right) super(CheckForTransAdd, self).generic_visit(node) @core.flake8ext def check_oslo_namespace_imports(physical_line, logical_line, filename): if pycodestyle.noqa(physical_line): return if re.match(oslo_namespace_imports, logical_line): msg = ("M333: '%s' must be used instead of '%s'.") % ( logical_line.replace('oslo.', 'oslo_'), logical_line) yield(0, msg) @core.flake8ext def dict_constructor_with_list_copy(logical_line): msg = ("M336: Must use a dict comprehension instead of a dict constructor" " with a sequence of key-value pairs." ) if dict_constructor_with_list_copy_re.match(logical_line): yield (0, msg) @core.flake8ext def no_xrange(logical_line): if assert_no_xrange_re.match(logical_line): yield(0, "M337: Do not use xrange().") @core.flake8ext def validate_assertTrue(logical_line): if re.match(assert_True, logical_line): msg = ("M313: Unit tests should use assertTrue(value) instead" " of using assertEqual(True, value).") yield(0, msg) @core.flake8ext def check_uuid4(logical_line): """Generating UUID Use oslo_utils.uuidutils to generate UUID instead of uuid4(). M354 """ msg = ("M354: Use oslo_utils.uuidutils to generate UUID instead " "of uuid4().") if "uuid4()." in logical_line: return if "uuid4()" in logical_line: yield (0, msg) @core.flake8ext def no_log_warn_check(logical_line): """Disallow 'LOG.warn' Deprecated LOG.warn(), instead use LOG.warning ://bugs.launchpad.net/manila/+bug/1508442 M338 """ msg = ("M338: LOG.warn is deprecated, use LOG.warning.") if re.match(no_log_warn, logical_line): yield(0, msg) @core.flake8ext def no_third_party_mock(logical_line): # We should only use unittest.mock, not the third party mock library that # was needed for py2 support. if (re.match(third_party_mock, logical_line) or re.match(from_third_party_mock, logical_line)): msg = ('M339: Unit tests should use the standard library "mock" ' 'module, not the third party mock library.') yield(0, msg) manila-10.0.0/manila/rpc.py0000664000175000017500000001072113656750227015470 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
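# The hacking checks module just above documents manila's local convention
# for new style checks (pick an unused M3xx number, register the function as
# a flake8 extension, list it in HACKING.rst and add tests).  A hypothetical
# check written to that pattern would look roughly like the commented sketch
# below; the rule number, name and message are invented for illustration.
#
#     import re
#
#     from hacking import core
#
#     _no_bare_fixme_re = re.compile(r".*\bFIXME\b")
#
#     @core.flake8ext
#     def check_no_bare_fixme(logical_line):
#         if _no_bare_fixme_re.match(logical_line):
#             yield (0, "M399: Use TODO(name) instead of bare FIXME.")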
__all__ = [ 'init', 'cleanup', 'set_defaults', 'add_extra_exmods', 'clear_extra_exmods', 'get_allowed_exmods', 'RequestContextSerializer', 'get_client', 'get_server', 'get_notifier', ] from oslo_config import cfg import oslo_messaging as messaging from oslo_messaging.rpc import dispatcher from oslo_serialization import jsonutils import manila.context import manila.exception from manila import utils CONF = cfg.CONF TRANSPORT = None NOTIFICATION_TRANSPORT = None NOTIFIER = None ALLOWED_EXMODS = [ manila.exception.__name__, ] EXTRA_EXMODS = [] def init(conf): global TRANSPORT, NOTIFICATION_TRANSPORT, NOTIFIER exmods = get_allowed_exmods() TRANSPORT = messaging.get_rpc_transport(conf, allowed_remote_exmods=exmods) NOTIFICATION_TRANSPORT = messaging.get_notification_transport( conf, allowed_remote_exmods=exmods) if utils.notifications_enabled(conf): json_serializer = messaging.JsonPayloadSerializer() serializer = RequestContextSerializer(json_serializer) NOTIFIER = messaging.Notifier(NOTIFICATION_TRANSPORT, serializer=serializer) else: NOTIFIER = utils.DO_NOTHING def initialized(): return None not in [TRANSPORT, NOTIFIER] def cleanup(): global TRANSPORT, NOTIFICATION_TRANSPORT, NOTIFIER assert TRANSPORT is not None assert NOTIFICATION_TRANSPORT is not None assert NOTIFIER is not None TRANSPORT.cleanup() NOTIFICATION_TRANSPORT.cleanup() TRANSPORT = NOTIFIER = NOTIFICATION_TRANSPORT = None def set_defaults(control_exchange): messaging.set_transport_defaults(control_exchange) def add_extra_exmods(*args): EXTRA_EXMODS.extend(args) def clear_extra_exmods(): del EXTRA_EXMODS[:] def get_allowed_exmods(): return ALLOWED_EXMODS + EXTRA_EXMODS class JsonPayloadSerializer(messaging.NoOpSerializer): @staticmethod def serialize_entity(context, entity): return jsonutils.to_primitive(entity, convert_instances=True) class RequestContextSerializer(messaging.Serializer): def __init__(self, base): self._base = base def serialize_entity(self, context, entity): if not self._base: return entity return self._base.serialize_entity(context, entity) def deserialize_entity(self, context, entity): if not self._base: return entity return self._base.deserialize_entity(context, entity) def serialize_context(self, context): return context.to_dict() def deserialize_context(self, context): return manila.context.RequestContext.from_dict(context) def get_transport_url(url_str=None): return messaging.TransportURL.parse(CONF, url_str) def get_client(target, version_cap=None, serializer=None): assert TRANSPORT is not None serializer = RequestContextSerializer(serializer) return messaging.RPCClient(TRANSPORT, target, version_cap=version_cap, serializer=serializer) def get_server(target, endpoints, serializer=None): assert TRANSPORT is not None access_policy = dispatcher.DefaultRPCAccessPolicy serializer = RequestContextSerializer(serializer) return messaging.get_rpc_server(TRANSPORT, target, endpoints, executor='eventlet', serializer=serializer, access_policy=access_policy) @utils.if_notifications_enabled def get_notifier(service=None, host=None, publisher_id=None): assert NOTIFIER is not None if not publisher_id: publisher_id = "%s.%s" % (service, host or CONF.host) return NOTIFIER.prepare(publisher_id=publisher_id) manila-10.0.0/manila/exception.py0000664000175000017500000006535513656750227016717 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. 
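# Illustrative usage sketch for manila/rpc.py above (an assumption, not part
# of the upstream sources): a service process would normally call init() once
# with the loaded configuration and then build messaging clients against an
# oslo.messaging Target; the topic and version cap below are made up.
#
#     import oslo_messaging as messaging
#     from oslo_config import cfg
#     from manila import rpc
#
#     rpc.init(cfg.CONF)
#     target = messaging.Target(topic='manila-share', version='1.0')
#     client = rpc.get_client(target, version_cap='1.0')
#     # client.call(context, 'method_name', **kwargs) then performs the RPC,
#     # and rpc.cleanup() tears the transports down on shutdown.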
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Manila base exception handling. Includes decorator for re-raising Manila-type exceptions. SHOULD include dedicated exception logging. """ import re from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log import six import webob.exc from manila.i18n import _ LOG = log.getLogger(__name__) exc_log_opts = [ cfg.BoolOpt('fatal_exception_format_errors', default=False, help='Whether to make exception message format errors fatal.'), ] CONF = cfg.CONF CONF.register_opts(exc_log_opts) ProcessExecutionError = processutils.ProcessExecutionError class ConvertedException(webob.exc.WSGIHTTPException): def __init__(self, code=400, title="", explanation=""): self.code = code self.title = title self.explanation = explanation super(ConvertedException, self).__init__() class Error(Exception): pass class ManilaException(Exception): """Base Manila Exception To correctly use this class, inherit from it and define a 'message' property. That message will get printf'd with the keyword arguments provided to the constructor. """ message = _("An unknown exception occurred.") code = 500 headers = {} safe = False def __init__(self, message=None, detail_data={}, **kwargs): self.kwargs = kwargs self.detail_data = detail_data if 'code' not in self.kwargs: try: self.kwargs['code'] = self.code except AttributeError: pass for k, v in self.kwargs.items(): if isinstance(v, Exception): self.kwargs[k] = six.text_type(v) if not message: try: message = self.message % kwargs except Exception: # kwargs doesn't match a variable in the message # log the issue and the kwargs LOG.exception('Exception in string format operation.') for name, value in kwargs.items(): LOG.error("%(name)s: %(value)s", { 'name': name, 'value': value}) if CONF.fatal_exception_format_errors: raise else: # at least get the core message out if something happened message = self.message elif isinstance(message, Exception): message = six.text_type(message) if re.match(r'.*[^\.]\.\.$', message): message = message[:-1] self.msg = message super(ManilaException, self).__init__(message) class NetworkException(ManilaException): message = _("Exception due to network failure.") class NetworkBindException(ManilaException): message = _("Exception due to failed port status in binding.") class NetworkBadConfigurationException(NetworkException): message = _("Bad network configuration: %(reason)s.") class BadConfigurationException(ManilaException): message = _("Bad configuration: %(reason)s.") class NotAuthorized(ManilaException): message = _("Not authorized.") code = 403 class AdminRequired(NotAuthorized): message = _("User does not have admin privileges.") class PolicyNotAuthorized(NotAuthorized): message = _("Policy doesn't allow %(action)s to be performed.") class Conflict(ManilaException): message = _("%(err)s") code = 409 class Invalid(ManilaException): message = _("Unacceptable parameters.") code = 400 class InvalidRequest(Invalid): message = _("The request is invalid.") class InvalidResults(Invalid): 
message = _("The results are invalid.") class InvalidInput(Invalid): message = _("Invalid input received: %(reason)s.") class InvalidContentType(Invalid): message = _("Invalid content type %(content_type)s.") class InvalidHost(Invalid): message = _("Invalid host: %(reason)s") # Cannot be templated as the error syntax varies. # msg needs to be constructed when raised. class InvalidParameterValue(Invalid): message = _("%(err)s") class InvalidUUID(Invalid): message = _("Expected a uuid but received %(uuid)s.") class InvalidDriverMode(Invalid): message = _("Invalid driver mode: %(driver_mode)s.") class InvalidAPIVersionString(Invalid): message = _("API Version String %(version)s is of invalid format. Must " "be of format MajorNum.MinorNum.") class VersionNotFoundForAPIMethod(Invalid): message = _("API version %(version)s is not supported on this method.") class InvalidGlobalAPIVersion(Invalid): message = _("Version %(req_ver)s is not supported by the API. Minimum " "is %(min_ver)s and maximum is %(max_ver)s.") class InvalidCapacity(Invalid): message = _("Invalid capacity: %(name)s = %(value)s.") class NotFound(ManilaException): message = _("Resource could not be found.") code = 404 safe = True class MessageNotFound(NotFound): message = _("Message %(message_id)s could not be found.") class Found(ManilaException): message = _("Resource was found.") code = 302 safe = True class InUse(ManilaException): message = _("Resource is in use.") class AvailabilityZoneNotFound(NotFound): message = _("Availability zone %(id)s could not be found.") class ShareNetworkNotFound(NotFound): message = _("Share network %(share_network_id)s could not be found.") class ShareNetworkSubnetNotFound(NotFound): message = _("Share network subnet %(share_network_subnet_id)s could not be" " found.") class ShareServerNotFound(NotFound): message = _("Share server %(share_server_id)s could not be found.") class ShareServerNotFoundByFilters(ShareServerNotFound): message = _("Share server could not be found by " "filters: %(filters_description)s.") class ShareServerInUse(InUse): message = _("Share server %(share_server_id)s is in use.") class InvalidShareServer(Invalid): message = _("Share server %(share_server_id)s is not valid.") class ShareMigrationError(ManilaException): message = _("Error in share migration: %(reason)s") class ShareMigrationFailed(ManilaException): message = _("Share migration failed: %(reason)s") class ShareDataCopyFailed(ManilaException): message = _("Share Data copy failed: %(reason)s") class ShareDataCopyCancelled(ManilaException): message = _("Copy of contents from share instance %(src_instance)s " "to share instance %(dest_instance)s was cancelled.") class ServiceIPNotFound(ManilaException): message = _("Service IP for instance not found: %(reason)s") class AdminIPNotFound(ManilaException): message = _("Admin port IP for service instance not found: %(reason)s") class ShareServerNotCreated(ManilaException): message = _("Share server %(share_server_id)s failed on creation.") class ShareServerNotReady(ManilaException): message = _("Share server %(share_server_id)s failed to reach '%(state)s' " "within %(time)s seconds.") class ServiceNotFound(NotFound): message = _("Service %(service_id)s could not be found.") class ServiceIsDown(Invalid): message = _("Service %(service)s is down.") class HostNotFound(NotFound): message = _("Host %(host)s could not be found.") class SchedulerHostFilterNotFound(NotFound): message = _("Scheduler host filter %(filter_name)s could not be found.") class 
SchedulerHostWeigherNotFound(NotFound): message = _("Scheduler host weigher %(weigher_name)s could not be found.") class HostBinaryNotFound(NotFound): message = _("Could not find binary %(binary)s on host %(host)s.") class InvalidReservationExpiration(Invalid): message = _("Invalid reservation expiration %(expire)s.") class InvalidQuotaValue(Invalid): message = _("Change would make usage less than 0 for the following " "resources: %(unders)s.") class QuotaNotFound(NotFound): message = _("Quota could not be found.") class QuotaExists(ManilaException): message = _("Quota exists for project %(project_id)s, " "resource %(resource)s.") class QuotaResourceUnknown(QuotaNotFound): message = _("Unknown quota resources %(unknown)s.") class ProjectUserQuotaNotFound(QuotaNotFound): message = _("Quota for user %(user_id)s in project %(project_id)s " "could not be found.") class ProjectShareTypeQuotaNotFound(QuotaNotFound): message = _("Quota for share_type %(share_type)s in " "project %(project_id)s could not be found.") class ProjectQuotaNotFound(QuotaNotFound): message = _("Quota for project %(project_id)s could not be found.") class QuotaClassNotFound(QuotaNotFound): message = _("Quota class %(class_name)s could not be found.") class QuotaUsageNotFound(QuotaNotFound): message = _("Quota usage for project %(project_id)s could not be found.") class ReservationNotFound(QuotaNotFound): message = _("Quota reservation %(uuid)s could not be found.") class OverQuota(ManilaException): message = _("Quota exceeded for resources: %(overs)s.") class MigrationNotFound(NotFound): message = _("Migration %(migration_id)s could not be found.") class MigrationNotFoundByStatus(MigrationNotFound): message = _("Migration not found for instance %(instance_id)s " "with status %(status)s.") class FileNotFound(NotFound): message = _("File %(file_path)s could not be found.") class MigrationError(ManilaException): message = _("Migration error: %(reason)s.") class MalformedRequestBody(ManilaException): message = _("Malformed message body: %(reason)s.") class ConfigNotFound(NotFound): message = _("Could not find config at %(path)s.") class PasteAppNotFound(NotFound): message = _("Could not load paste app '%(name)s' from %(path)s.") class NoValidHost(ManilaException): message = _("No valid host was found. 
%(reason)s.") class WillNotSchedule(ManilaException): message = _("Host %(host)s is not up or doesn't exist.") class QuotaError(ManilaException): message = _("Quota exceeded: code=%(code)s.") code = 413 headers = {'Retry-After': '0'} safe = True class ShareSizeExceedsAvailableQuota(QuotaError): message = _( "Requested share exceeds allowed project/user or share type " "gigabytes quota.") class SnapshotSizeExceedsAvailableQuota(QuotaError): message = _( "Requested snapshot exceeds allowed project/user or share type " "gigabytes quota.") class ShareLimitExceeded(QuotaError): message = _( "Maximum number of shares allowed (%(allowed)d) either per " "project/user or share type quota is exceeded.") class SnapshotLimitExceeded(QuotaError): message = _( "Maximum number of snapshots allowed (%(allowed)d) either per " "project/user or share type quota is exceeded.") class ShareNetworksLimitExceeded(QuotaError): message = _("Maximum number of share-networks " "allowed (%(allowed)d) exceeded.") class ShareGroupsLimitExceeded(QuotaError): message = _( "Maximum number of allowed share-groups is exceeded.") class ShareGroupSnapshotsLimitExceeded(QuotaError): message = _( "Maximum number of allowed share-group-snapshots is exceeded.") class ShareReplicasLimitExceeded(QuotaError): message = _( "Maximum number of allowed share-replicas is exceeded.") class ShareReplicaSizeExceedsAvailableQuota(QuotaError): message = _( "Requested share replica exceeds allowed project/user or share type " "gigabytes quota.") class GlusterfsException(ManilaException): message = _("Unknown Gluster exception.") class InvalidShare(Invalid): message = _("Invalid share: %(reason)s.") class ShareBusyException(Invalid): message = _("Share is busy with an active task: %(reason)s.") class InvalidShareInstance(Invalid): message = _("Invalid share instance: %(reason)s.") class ManageInvalidShare(InvalidShare): message = _("Manage existing share failed due to " "invalid share: %(reason)s") class ManageShareServerError(ManilaException): message = _("Manage existing share server failed due to: %(reason)s") class UnmanageInvalidShare(InvalidShare): message = _("Unmanage existing share failed due to " "invalid share: %(reason)s") class PortLimitExceeded(QuotaError): message = _("Maximum number of ports exceeded.") class ShareAccessExists(ManilaException): message = _("Share access %(access_type)s:%(access)s exists.") class ShareAccessMetadataNotFound(NotFound): message = _("Share access rule metadata does not exist.") class ShareSnapshotAccessExists(InvalidInput): message = _("Share snapshot access %(access_type)s:%(access)s exists.") class InvalidSnapshotAccess(Invalid): message = _("Invalid access rule: %(reason)s") class InvalidShareAccess(Invalid): message = _("Invalid access rule: %(reason)s") class InvalidShareAccessLevel(Invalid): message = _("Invalid or unsupported share access level: %(level)s.") class ShareBackendException(ManilaException): message = _("Share backend error: %(msg)s.") class ExportLocationNotFound(NotFound): message = _("Export location %(uuid)s could not be found.") class ShareNotFound(NotFound): message = _("Share %(share_id)s could not be found.") class ShareSnapshotNotFound(NotFound): message = _("Snapshot %(snapshot_id)s could not be found.") class ShareSnapshotInstanceNotFound(NotFound): message = _("Snapshot instance %(instance_id)s could not be found.") class ShareSnapshotNotSupported(ManilaException): message = _("Share %(share_name)s does not support snapshots.") class 
ShareGroupSnapshotNotSupported(ManilaException): message = _("Share group %(share_group)s does not support snapshots.") class ShareSnapshotIsBusy(ManilaException): message = _("Deleting snapshot %(snapshot_name)s that has " "dependent shares.") class InvalidShareSnapshot(Invalid): message = _("Invalid share snapshot: %(reason)s.") class InvalidShareSnapshotInstance(Invalid): message = _("Invalid share snapshot instance: %(reason)s.") class ManageInvalidShareSnapshot(InvalidShareSnapshot): message = _("Manage existing share snapshot failed due to " "invalid share snapshot: %(reason)s.") class UnmanageInvalidShareSnapshot(InvalidShareSnapshot): message = _("Unmanage existing share snapshot failed due to " "invalid share snapshot: %(reason)s.") class ShareMetadataNotFound(NotFound): message = _("Metadata item is not found.") class InvalidMetadata(Invalid): message = _("Invalid metadata.") class InvalidMetadataSize(Invalid): message = _("Invalid metadata size.") class SecurityServiceNotFound(NotFound): message = _("Security service %(security_service_id)s could not be found.") class ShareNetworkSecurityServiceAssociationError(ManilaException): message = _("Failed to associate share network %(share_network_id)s" " and security service %(security_service_id)s: %(reason)s.") class ShareNetworkSecurityServiceDissociationError(ManilaException): message = _("Failed to dissociate share network %(share_network_id)s" " and security service %(security_service_id)s: %(reason)s.") class InvalidVolume(Invalid): message = _("Invalid volume.") class InvalidShareType(Invalid): message = _("Invalid share type: %(reason)s.") class InvalidShareGroupType(Invalid): message = _("Invalid share group type: %(reason)s.") class InvalidExtraSpec(Invalid): message = _("Invalid extra_spec: %(reason)s.") class VolumeNotFound(NotFound): message = _("Volume %(volume_id)s could not be found.") class VolumeSnapshotNotFound(NotFound): message = _("Snapshot %(snapshot_id)s could not be found.") class ShareTypeNotFound(NotFound): message = _("Share type %(share_type_id)s could not be found.") class ShareGroupTypeNotFound(NotFound): message = _("Share group type %(type_id)s could not be found.") class ShareTypeAccessNotFound(NotFound): message = _("Share type access not found for %(share_type_id)s / " "%(project_id)s combination.") class ShareGroupTypeAccessNotFound(NotFound): message = _("Share group type access not found for %(type_id)s / " "%(project_id)s combination.") class ShareTypeNotFoundByName(ShareTypeNotFound): message = _("Share type with name %(share_type_name)s " "could not be found.") class ShareGroupTypeNotFoundByName(ShareTypeNotFound): message = _("Share group type with name %(type_name)s " "could not be found.") class ShareTypeExtraSpecsNotFound(NotFound): message = _("Share Type %(share_type_id)s has no extra specs with " "key %(extra_specs_key)s.") class ShareGroupTypeSpecsNotFound(NotFound): message = _("Share group type %(type_id)s has no group specs with " "key %(specs_key)s.") class ShareTypeInUse(ManilaException): message = _("Share Type %(share_type_id)s deletion is not allowed while " "shares or share group types are associated with the type.") class IPAddressInUse(InUse): message = _("IP address %(ip)s is already used.") class ShareGroupTypeInUse(ManilaException): message = _("Share group Type %(type_id)s deletion is not allowed " "with groups present with the type.") class ShareTypeExists(ManilaException): message = _("Share Type %(id)s already exists.") class ShareTypeDoesNotExist(NotFound): message 
= _("Share Type %(share_type)s does not exist.") class DefaultShareTypeNotConfigured(NotFound): message = _("No default share type is configured. Either configure a " "default share type or explicitly specify a share type.") class ShareGroupTypeExists(ManilaException): message = _("Share group type %(type_id)s already exists.") class ShareTypeAccessExists(ManilaException): message = _("Share type access for %(share_type_id)s / " "%(project_id)s combination already exists.") class ShareGroupTypeAccessExists(ManilaException): message = _("Share group type access for %(type_id)s / " "%(project_id)s combination already exists.") class ShareTypeCreateFailed(ManilaException): message = _("Cannot create share_type with " "name %(name)s and specs %(extra_specs)s.") class ShareTypeUpdateFailed(ManilaException): message = _("Cannot update share_type %(id)s.") class ShareGroupTypeCreateFailed(ManilaException): message = _("Cannot create share group type with " "name %(name)s and specs %(group_specs)s.") class ManageExistingShareTypeMismatch(ManilaException): message = _("Manage existing share failed due to share type mismatch: " "%(reason)s") class ShareExtendingError(ManilaException): message = _("Share %(share_id)s could not be extended due to error " "in the driver: %(reason)s") class ShareShrinkingError(ManilaException): message = _("Share %(share_id)s could not be shrunk due to error " "in the driver: %(reason)s") class ShareShrinkingPossibleDataLoss(ManilaException): message = _("Share %(share_id)s could not be shrunk due to " "possible data loss") class InstanceNotFound(NotFound): message = _("Instance %(instance_id)s could not be found.") class BridgeDoesNotExist(ManilaException): message = _("Bridge %(bridge)s does not exist.") class ServiceInstanceException(ManilaException): message = _("Exception in service instance manager occurred.") class ServiceInstanceUnavailable(ServiceInstanceException): message = _("Service instance is not available.") class StorageResourceException(ManilaException): message = _("Storage resource exception.") class StorageResourceNotFound(StorageResourceException): message = _("Storage resource %(name)s not found.") code = 404 class SnapshotResourceNotFound(StorageResourceNotFound): message = _("Snapshot %(name)s not found.") class SnapshotUnavailable(StorageResourceException): message = _("Snapshot %(name)s info not available.") class NetAppException(ManilaException): message = _("Exception due to NetApp failure.") class VserverNotFound(NetAppException): message = _("Vserver %(vserver)s not found.") class VserverNotSpecified(NetAppException): message = _("Vserver not specified.") class EMCPowerMaxXMLAPIError(Invalid): message = _("%(err)s") class EMCPowerMaxLockRequiredException(ManilaException): message = _("Unable to acquire lock(s).") class EMCPowerMaxInvalidMoverID(ManilaException): message = _("Invalid mover or vdm %(id)s.") class EMCVnxXMLAPIError(Invalid): message = _("%(err)s") class EMCVnxLockRequiredException(ManilaException): message = _("Unable to acquire lock(s).") class EMCVnxInvalidMoverID(ManilaException): message = _("Invalid mover or vdm %(id)s.") class EMCUnityError(ShareBackendException): message = _("%(err)s") class HPE3ParInvalidClient(Invalid): message = _("%(err)s") class HPE3ParInvalid(Invalid): message = _("%(err)s") class HPE3ParUnexpectedError(ManilaException): message = _("%(err)s") class GPFSException(ManilaException): message = _("GPFS exception occurred.") class GPFSGaneshaException(ManilaException): message = _("GPFS Ganesha 
exception occurred.") class GaneshaCommandFailure(ProcessExecutionError): _description = _("Ganesha management command failed.") def __init__(self, **kw): if 'description' not in kw: kw['description'] = self._description super(GaneshaCommandFailure, self).__init__(**kw) class InvalidSqliteDB(Invalid): message = _("Invalid Sqlite database.") class SSHException(ManilaException): message = _("Exception in SSH protocol negotiation or logic.") class HDFSException(ManilaException): message = _("HDFS exception occurred!") class MapRFSException(ManilaException): message = _("MapRFS exception occurred: %(msg)s") class ZFSonLinuxException(ManilaException): message = _("ZFSonLinux exception occurred: %(msg)s") class QBException(ManilaException): message = _("Quobyte exception occurred: %(msg)s") class QBRpcException(ManilaException): """Quobyte backend specific exception.""" message = _("Quobyte JsonRpc call to backend raised " "an exception: %(result)s, Quobyte error" " code %(qbcode)s") class SSHInjectionThreat(ManilaException): message = _("SSH command injection detected: %(command)s") class HNASBackendException(ManilaException): message = _("HNAS Backend Exception: %(msg)s") class HNASConnException(ManilaException): message = _("HNAS Connection Exception: %(msg)s") class HNASSSCIsBusy(ManilaException): message = _("HNAS SSC is busy and cannot execute the command: %(msg)s") class HNASSSCContextChange(ManilaException): message = _("HNAS SSC Context has been changed unexpectedly: %(msg)s") class HNASDirectoryNotEmpty(ManilaException): message = _("HNAS Directory is not empty: %(msg)s") class HNASItemNotFoundException(StorageResourceNotFound): message = _("HNAS Item Not Found Exception: %(msg)s") class HNASNothingToCloneException(ManilaException): message = _("HNAS Nothing To Clone Exception: %(msg)s") # ShareGroup class ShareGroupNotFound(NotFound): message = _("Share group %(share_group_id)s could not be found.") class ShareGroupSnapshotNotFound(NotFound): message = _( "Share group snapshot %(share_group_snapshot_id)s could not be found.") class ShareGroupSnapshotMemberNotFound(NotFound): message = _("Share group snapshot member %(member_id)s could not be " "found.") class InvalidShareGroup(Invalid): message = _("Invalid share group: %(reason)s") class InvalidShareGroupSnapshot(Invalid): message = _("Invalid share group snapshot: %(reason)s") class DriverNotInitialized(ManilaException): message = _("Share driver '%(driver)s' not initialized.") class ShareResourceNotFound(StorageResourceNotFound): message = _("Share id %(share_id)s could not be found " "in storage backend.") class ShareUmountException(ManilaException): message = _("Failed to unmount share: %(reason)s") class ShareMountException(ManilaException): message = _("Failed to mount share: %(reason)s") class ShareCopyDataException(ManilaException): message = _("Failed to copy data: %(reason)s") # Replication class ReplicationException(ManilaException): message = _("Unable to perform a replication action: %(reason)s.") class ShareReplicaNotFound(NotFound): message = _("Share Replica %(replica_id)s could not be found.") # Tegile Storage drivers class TegileAPIException(ShareBackendException): message = _("Unexpected response from Tegile IntelliFlash API: " "%(response)s") class StorageCommunicationException(ShareBackendException): message = _("Could not communicate with storage array.") class EvaluatorParseException(ManilaException): message = _("Error during evaluator parsing: %(reason)s") # Hitachi Scaleout Platform driver class 
HSPBackendException(ShareBackendException): message = _("HSP Backend Exception: %(msg)s") class HSPTimeoutException(ShareBackendException): message = _("HSP Timeout Exception: %(msg)s") class HSPItemNotFoundException(ShareBackendException): message = _("HSP Item Not Found Exception: %(msg)s") class NexentaException(ShareBackendException): message = _("Exception due to Nexenta failure. %(reason)s") # Tooz locking class LockCreationFailed(ManilaException): message = _('Unable to create lock. Coordination backend not started.') class LockingFailed(ManilaException): message = _('Lock acquisition failed.') # Ganesha library class GaneshaException(ManilaException): message = _("Unknown NFS-Ganesha library exception.") # Infortrend Storage driver class InfortrendCLIException(ShareBackendException): message = _("Infortrend CLI exception: %(err)s " "Return Code: %(rc)s, Output: %(out)s") class InfortrendNASException(ShareBackendException): message = _("Infortrend NAS exception: %(err)s") manila-10.0.0/manila/tests/0000775000175000017500000000000013656750362015473 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/utils.py0000664000175000017500000001036713656750227017214 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC # Copyright 2015 Mirantic, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import os from oslo_config import cfg import six from manila import context from manila import utils CONF = cfg.CONF class NamedBinaryStr(six.binary_type): """Wrapper for six.binary_type to facilitate overriding __name__.""" class NamedUnicodeStr(six.text_type): """Unicode string look-alike to facilitate overriding __name__.""" def __init__(self, value): self._value = value def __str__(self): return self._value def encode(self, enc): return self._value.encode(enc) def __format__(self, formatstr): """Workaround for ddt bug. DDT will always call __format__ even when __name__ exists, which blows up for Unicode strings under Py2. """ return '' class NamedDict(dict): """Wrapper for dict to facilitate overriding __name__.""" class NamedTuple(tuple): """Wrapper for dict to facilitate overriding __name__.""" def annotated(test_name, test_input): if isinstance(test_input, dict): annotated_input = NamedDict(test_input) elif isinstance(test_input, six.text_type): annotated_input = NamedUnicodeStr(test_input) elif isinstance(test_input, tuple): annotated_input = NamedTuple(test_input) else: annotated_input = NamedBinaryStr(test_input) setattr(annotated_input, '__name__', test_name) return annotated_input def get_test_admin_context(): return context.get_admin_context() def is_manila_installed(): if os.path.exists('../../manila.manila.egg-info'): return True else: return False def set_timeout(timeout): """Timeout decorator for unit test methods. Use this decorator for tests that are expected to pass in very specific amount of time, not common for all other tests. It can have either big or small value. 
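    Illustrative example (added here for clarity; the test method below is
    made up):

        @set_timeout(5)
        def test_completes_quickly(self):
            self.assertTrue(True)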
""" def _decorator(f): @six.wraps(f) def _wrapper(self, *args, **kwargs): self.useFixture(fixtures.Timeout(timeout, gentle=True)) return f(self, *args, **kwargs) return _wrapper return _decorator class create_temp_config_with_opts(object): """Creates temporary config file with provided opts and values. usage: data = {'FOO_GROUP': {'foo_opt': 'foo_value'}} assert CONF.FOO_GROUP.foo_opt != 'foo_value' with create_temp_config_with_opts(data): assert CONF.FOO_GROUP.foo_opt == 'foo_value' assert CONF.FOO_GROUP.foo_opt != 'foo_value' :param data: dict -- expected dict with two layers, first is name of config group and second is opts with values. Example: {'DEFAULT': {'foo_opt': 'foo_v'}, 'BAR_GROUP': {'bar_opt': 'bar_v'}} """ def __init__(self, data): self.data = data def __enter__(self): config_filename = 'fake_config' with utils.tempdir() as tmpdir: tmpfilename = os.path.join(tmpdir, '%s.conf' % config_filename) with open(tmpfilename, "w") as configfile: for group, opts in self.data.items(): configfile.write("""[%s]\n""" % group) for opt, value in opts.items(): configfile.write( """%(k)s = %(v)s\n""" % {'k': opt, 'v': value}) configfile.write("""\n""") # Add config file with updated opts CONF.default_config_files = [configfile.name] # Reload config instance to use redefined opts CONF.reload_config_files() return CONF def __exit__(self, exc_type, exc_value, exc_traceback): return False # do not suppress errors manila-10.0.0/manila/tests/conf_fixture.py0000664000175000017500000000724413656750227020547 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os from oslo_policy import opts from oslo_service import wsgi from manila.common import config CONF = config.CONF def set_defaults(conf): _safe_set_of_opts(conf, 'verbose', True) _safe_set_of_opts(conf, 'state_path', os.path.abspath( os.path.join(os.path.dirname(__file__), '..', '..'))) _safe_set_of_opts(conf, 'connection', "sqlite://", group='database') _safe_set_of_opts(conf, 'sqlite_synchronous', False) _POLICY_PATH = os.path.abspath(os.path.join(CONF.state_path, 'manila/tests/policy.json')) opts.set_defaults(conf, policy_file=_POLICY_PATH) _safe_set_of_opts(conf, 'share_export_ip', '0.0.0.0') _safe_set_of_opts(conf, 'service_instance_user', 'fake_user') _API_PASTE_PATH = os.path.abspath(os.path.join(CONF.state_path, 'etc/manila/api-paste.ini')) wsgi.register_opts(conf) _safe_set_of_opts(conf, 'api_paste_config', _API_PASTE_PATH) _safe_set_of_opts(conf, 'share_driver', 'manila.tests.fake_driver.FakeShareDriver') _safe_set_of_opts(conf, 'auth_strategy', 'noauth') _safe_set_of_opts(conf, 'zfs_share_export_ip', '1.1.1.1') _safe_set_of_opts(conf, 'zfs_service_ip', '2.2.2.2') _safe_set_of_opts(conf, 'zfs_zpool_list', ['foo', 'bar']) _safe_set_of_opts(conf, 'zfs_share_helpers', 'NFS=foo.bar.Helper') _safe_set_of_opts(conf, 'zfs_replica_snapshot_prefix', 'foo_prefix_') _safe_set_of_opts(conf, 'hitachi_hsp_host', '172.24.47.190') _safe_set_of_opts(conf, 'hitachi_hsp_username', 'hsp_user') _safe_set_of_opts(conf, 'hitachi_hsp_password', 'hsp_password') _safe_set_of_opts(conf, 'as13000_nas_ip', '1.1.1.1') _safe_set_of_opts(conf, 'as13000_nas_login', 'admin') _safe_set_of_opts(conf, 'as13000_nas_password', 'password') _safe_set_of_opts(conf, 'as13000_share_pools', 'pool0') _safe_set_of_opts(conf, 'instorage_nas_ip', '1.1.1.1') _safe_set_of_opts(conf, 'instorage_nas_login', 'admin') _safe_set_of_opts(conf, 'instorage_nas_password', 'password') _safe_set_of_opts(conf, 'instorage_nas_pools', 'pool0') _safe_set_of_opts(conf, 'infortrend_nas_ip', '172.27.1.1') _safe_set_of_opts(conf, 'infortrend_share_pools', 'share-pool-01') _safe_set_of_opts(conf, 'infortrend_share_channels', '0,1') _safe_set_of_opts(conf, 'qnap_management_url', 'http://1.2.3.4:8080') _safe_set_of_opts(conf, 'qnap_share_ip', '1.2.3.4') _safe_set_of_opts(conf, 'qnap_nas_login', 'admin') _safe_set_of_opts(conf, 'qnap_nas_password', 'qnapadmin') _safe_set_of_opts(conf, 'qnap_poolname', 'Storage Pool 1') _safe_set_of_opts(conf, 'unity_server_meta_pool', 'nas_server_pool') def _safe_set_of_opts(conf, *args, **kwargs): try: conf.set_default(*args, **kwargs) except config.cfg.NoSuchOptError: # Assumed that opt is not imported and not used pass manila-10.0.0/manila/tests/__init__.py0000664000175000017500000000163513656750227017611 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
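# Illustrative usage sketch for conf_fixture.set_defaults() above (an
# assumption, not part of the upstream sources): it is intended to be applied
# to the global config object before tests run, and _safe_set_of_opts() makes
# it tolerant of options that a given test environment never registers.
#
#     from oslo_config import cfg
#     from manila.tests import conf_fixture
#
#     conf_fixture.set_defaults(cfg.CONF)
#     # Only options that are already registered receive new defaults;
#     # unregistered options are skipped silently (NoSuchOptError is caught).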
""" :mod:`manila.tests` -- Manila Unittests ===================================================== .. automodule:: manila.tests :platform: Unix """ import eventlet eventlet.monkey_patch() manila-10.0.0/manila/tests/monkey_patch_example/0000775000175000017500000000000013656750362021667 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/monkey_patch_example/__init__.py0000664000175000017500000000212213656750227023775 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Example Module for testing utils.monkey_patch().""" CALLED_FUNCTION = [] def example_decorator(name, function): """decorator for notify which is used from utils.monkey_patch(). :param name: name of the function :param function: - object of the function :returns: function -- decorated function """ def wrapped_func(*args, **kwarg): CALLED_FUNCTION.append(name) return function(*args, **kwarg) return wrapped_func manila-10.0.0/manila/tests/monkey_patch_example/example_b.py0000664000175000017500000000162013656750227024174 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Example Module B for testing utils.monkey_patch().""" def example_function_b(): return 'Example function' class ExampleClassB(object): def example_method(self): return 'Example method' def example_method_add(self, arg1, arg2): return arg1 + arg2 manila-10.0.0/manila/tests/monkey_patch_example/example_a.py0000664000175000017500000000161713656750227024201 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Example Module A for testing utils.monkey_patch().""" def example_function_a(): return 'Example function' class ExampleClassA(object): def example_method(self): return 'Example method' def example_method_add(self, arg1, arg2): return arg1 + arg2 manila-10.0.0/manila/tests/test_hacking.py0000664000175000017500000003366313656750227020523 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools import sys import textwrap from unittest import mock import ddt import pycodestyle from manila.hacking import checks from manila import test @ddt.ddt class HackingTestCase(test.TestCase): """Hacking test cases This class tests the hacking checks in manila.hacking.checks by passing strings to the check methods like the pep8/flake8 parser would. The parser loops over each line in the file and then passes the parameters to the check method. The parameter names in the check method dictate what type of object is passed to the check method. The parameter types are:: logical_line: A processed line with the following modifications: - Multi-line statements converted to a single line. - Stripped left and right. - Contents of strings replaced with "xxx" of same length. - Comments removed. physical_line: Raw line of text from the input file. lines: a list of the raw lines from the input file tokens: the tokens that contribute to this logical line line_number: line number in the input file total_lines: number of lines in the input file blank_lines: blank lines before this one indent_char: indentation character in this file (" " or "\t") indent_level: indentation (with tabs expanded to multiples of 8) previous_indent_level: indentation on previous line previous_logical: previous logical line filename: Path of the file being run through pep8 When running a test on a check method the return will be False/None if there is no violation in the sample input. If there is an error a tuple is returned with a position in the line, and a message. So to check the result just assertTrue if the check is expected to fail and assertFalse if it should pass. 
""" @ddt.data(*itertools.product( ('', '_', '_LE', '_LI', '_LW'), ('audit', 'debug', 'error', 'info', 'warn', 'warning', 'critical', 'exception',))) @ddt.unpack def test_no_translate_logs(self, log_marker, log_method): code = "LOG.{0}({1}('foo'))".format(log_method, log_marker) if log_marker: self.assertEqual(1, len(list(checks.no_translate_logs(code)))) else: self.assertEqual(0, len(list(checks.no_translate_logs(code)))) def test_check_explicit_underscore_import(self): self.assertEqual(1, len(list(checks.check_explicit_underscore_import( "LOG.info(_('My info message'))", "manila/tests/other_files.py")))) self.assertEqual(1, len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "manila/tests/other_files.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "from manila.i18n import _", "manila/tests/other_files.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "LOG.info(_('My info message'))", "manila/tests/other_files.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "manila/tests/other_files.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "from manila.i18n import _LE, _, _LW", "manila/tests/other_files2.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "manila/tests/other_files2.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "_ = translations.ugettext", "manila/tests/other_files3.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "manila/tests/other_files3.py")))) # Complete code coverage by falling through all checks self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "LOG.info('My info message')", "manila.tests.unit/other_files4.py")))) self.assertEqual(0, len(list(checks.check_explicit_underscore_import( "from manila.i18n import _LW", "manila.tests.unit/other_files5.py")))) self.assertEqual(1, len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "manila.tests.unit/other_files5.py")))) # We are patching pep8 so that only the check under test is actually # installed. 
@mock.patch('pycodestyle._checks', {'physical_line': {}, 'logical_line': {}, 'tree': {}}) def _run_check(self, code, checker, filename=None): pycodestyle.register_check(checker) lines = textwrap.dedent(code).strip().splitlines(True) checker = pycodestyle.Checker(filename=filename, lines=lines) checker.check_all() checker.report._deferred_print.sort() return checker.report._deferred_print def _assert_has_errors(self, code, checker, expected_errors=None, filename=None): actual_errors = [e[:3] for e in self._run_check(code, checker, filename)] self.assertEqual(expected_errors or [], actual_errors) def _assert_has_no_errors(self, code, checker, filename=None): self._assert_has_errors(code, checker, filename=filename) def test_logging_format_no_tuple_arguments(self): checker = checks.CheckLoggingFormatArgs code = """ import logging LOG = logging.getLogger() LOG.info("Message without a second argument.") LOG.critical("Message with %s arguments.", 'two') LOG.debug("Volume %s caught fire and is at %d degrees C and" " climbing.", 'volume1', 500) """ self._assert_has_no_errors(code, checker) @ddt.data(*checks.CheckLoggingFormatArgs.LOG_METHODS) def test_logging_with_tuple_argument(self, log_method): checker = checks.CheckLoggingFormatArgs code = """ import logging LOG = logging.getLogger() LOG.{0}("Volume %s caught fire and is at %d degrees C and " "climbing.", ('volume1', 500)) """ self._assert_has_errors(code.format(log_method), checker, expected_errors=[(4, mock.ANY, 'M310')]) def test_str_on_exception(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: p = str(e) return p """ errors = [(5, 16, 'M325')] self._assert_has_errors(code, checker, expected_errors=errors) def test_no_str_unicode_on_exception(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = unicode(a) + str(b) except ValueError as e: p = e return p """ self._assert_has_no_errors(code, checker) def test_unicode_on_exception(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: p = unicode(e) return p """ errors = [(5, 20, 'M325')] self._assert_has_errors(code, checker, expected_errors=errors) def test_str_on_multiple_exceptions(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: try: p = unicode(a) + unicode(b) except ValueError as ve: p = str(e) + str(ve) p = e return p """ errors = [(8, 20, 'M325'), (8, 29, 'M325')] self._assert_has_errors(code, checker, expected_errors=errors) def test_str_unicode_on_multiple_exceptions(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: try: p = unicode(a) + unicode(b) except ValueError as ve: p = str(e) + unicode(ve) p = str(e) return p """ errors = [(8, 20, 'M325'), (8, 33, 'M325'), (9, 16, 'M325')] self._assert_has_errors(code, checker, expected_errors=errors) def test_trans_add(self): checker = checks.CheckForTransAdd code = """ def fake_tran(msg): return msg _ = fake_tran _LI = _ _LW = _ _LE = _ _LC = _ def f(a, b): msg = _('test') + 'add me' msg = _LI('test') + 'add me' msg = _LW('test') + 'add me' msg = _LE('test') + 'add me' msg = _LC('test') + 'add me' msg = 'add to me' + _('test') return msg """ # Python 3.4.0 introduced a change to the column calculation during AST # parsing. This was reversed in Python 3.4.3, hence the version-based # expected value calculation. See #1499743 for more background. 
if sys.version_info < (3, 4, 0) or sys.version_info >= (3, 4, 3): errors = [(13, 10, 'M326'), (14, 10, 'M326'), (15, 10, 'M326'), (16, 10, 'M326'), (17, 10, 'M326'), (18, 24, 'M326')] else: errors = [(13, 11, 'M326'), (14, 13, 'M326'), (15, 13, 'M326'), (16, 13, 'M326'), (17, 13, 'M326'), (18, 25, 'M326')] self._assert_has_errors(code, checker, expected_errors=errors) code = """ def f(a, b): msg = 'test' + 'add me' return msg """ errors = [] self._assert_has_errors(code, checker, expected_errors=errors) def test_dict_constructor_with_list_copy(self): self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([(i, connect_info[i])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " attrs = dict([(k, _from_json(v))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " type_names = dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( "foo(param=dict((k, v) for k, v in bar.items()))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([[i,i] for i in range(3)])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dd = dict([i,i] for i in range(3))")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " create_kwargs = dict(snapshot=snapshot,")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " self._render_dict(xml, data_el, data.__dict__)")))) def test_no_xrange(self): self.assertEqual(1, len(list(checks.no_xrange("xrange(45)")))) self.assertEqual(0, len(list(checks.no_xrange("range(45)")))) def test_validate_assertTrue(self): test_value = True self.assertEqual(0, len(list(checks.validate_assertTrue( "assertTrue(True)")))) self.assertEqual(1, len(list(checks.validate_assertTrue( "assertEqual(True, %s)" % test_value)))) def test_check_uuid4(self): code = """ fake_uuid = uuid.uuid4() """ errors = [(1, 0, 'M354')] self._assert_has_errors(code, checks.check_uuid4, expected_errors=errors) code = """ hex_uuid = uuid.uuid4().hex """ self._assert_has_no_errors(code, checks.check_uuid4) @ddt.unpack @ddt.data( (1, 'import mock'), (0, 'from unittest import mock'), (1, 'from mock import patch'), (0, 'from unittest.mock import patch')) def test_no_third_party_mock(self, err_count, line): self.assertEqual(err_count, len(list(checks.no_third_party_mock(line)))) def test_no_log_warn_check(self): self.assertEqual(0, len(list(checks.no_log_warn_check( "LOG.warning('This should not trigger LOG.warn " "hacking check.')")))) self.assertEqual(1, len(list(checks.no_log_warn_check( "LOG.warn('We should not use LOG.warn')")))) foo = """ LOG.warn('Catch me too, please' ) """ self.assertEqual(1, len(list(checks.no_log_warn_check(foo)))) manila-10.0.0/manila/tests/wsgi/0000775000175000017500000000000013656750362016444 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/wsgi/__init__.py0000664000175000017500000000000013656750227020543 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/wsgi/test_common.py0000664000175000017500000000271713656750227021354 0ustar zuulzuul00000000000000# Copyright 2017 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock from manila import test from manila.wsgi import common class FakeApp(common.Application): def __init__(self, **kwargs): for k, v in kwargs.items(): setattr(self, k, v) class WSGICommonTestCase(test.TestCase): def test_application_factory(self): fake_global_config = mock.Mock() kwargs = {"k1": "v1", "k2": "v2"} result = FakeApp.factory(fake_global_config, **kwargs) fake_global_config.assert_not_called() self.assertIsInstance(result, FakeApp) for k, v in kwargs.items(): self.assertTrue(hasattr(result, k)) self.assertEqual(getattr(result, k), v) def test_application___call__(self): self.assertRaises( NotImplementedError, common.Application(), 'fake_environ', 'fake_start_response') manila-10.0.0/manila/tests/wsgi/test_wsgi.py0000664000175000017500000000350513656750227021031 0ustar zuulzuul00000000000000# Copyright 2017 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock from manila import test from manila.wsgi import wsgi class WSGITestCase(test.TestCase): def test_initialize_application(self): self.mock_object(wsgi.log, 'register_options') self.mock_object(wsgi.cfg.ConfigOpts, '__call__') self.mock_object(wsgi.config, 'verify_share_protocols') self.mock_object(wsgi.log, 'setup') self.mock_object(wsgi.rpc, 'init') self.mock_object(wsgi.wsgi, 'Loader') wsgi.sys.argv = ['--verbose', '--debug'] result = wsgi.initialize_application() self.assertEqual( wsgi.wsgi.Loader.return_value.load_app.return_value, result) wsgi.log.register_options.assert_called_once_with(mock.ANY) wsgi.cfg.ConfigOpts.__call__.assert_called_once_with( mock.ANY, project="manila", version=wsgi.version.version_string()) wsgi.config.verify_share_protocols.assert_called_once_with() wsgi.log.setup.assert_called_once_with(mock.ANY, "manila") wsgi.rpc.init.assert_called_once_with(mock.ANY) wsgi.wsgi.Loader.assert_called_once_with(mock.ANY) wsgi.wsgi.Loader.return_value.load_app.assert_called_once_with( name='osapi_share') manila-10.0.0/manila/tests/data/0000775000175000017500000000000013656750362016404 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/data/__init__.py0000664000175000017500000000000013656750227020503 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/data/test_helper.py0000664000175000017500000003013013656750227021271 0ustar zuulzuul00000000000000# Copyright 2015 Hitachi Data Systems inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from unittest import mock import ddt from manila.common import constants from manila import context from manila.data import helper as data_copy_helper from manila import db from manila import exception from manila.share import rpcapi as share_rpc from manila import test from manila.tests import db_utils from manila import utils @ddt.ddt class DataServiceHelperTestCase(test.TestCase): """Tests DataServiceHelper.""" def setUp(self): super(DataServiceHelperTestCase, self).setUp() self.share = db_utils.create_share() self.share_instance = db_utils.create_share_instance( share_id=self.share['id'], status=constants.STATUS_AVAILABLE) self.context = context.get_admin_context() self.share_instance = db.share_instance_get( self.context, self.share_instance['id'], with_share_data=True) self.access = db_utils.create_access(share_id=self.share['id']) self.helper = data_copy_helper.DataServiceHelper( self.context, db, self.share) @ddt.data(True, False) def test_allow_access_to_data_service(self, allow_dest_instance): access = db_utils.create_access(share_id=self.share['id']) info_src = { 'access_mapping': { 'ip': ['nfs'], 'user': ['cifs', 'nfs'], } } info_dest = { 'access_mapping': { 'ip': ['nfs', 'cifs'], 'user': ['cifs'], } } if allow_dest_instance: mapping = {'ip': ['nfs'], 'user': ['cifs']} else: mapping = info_src['access_mapping'] fake_access = { 'access_to': 'fake_ip', 'access_level': constants.ACCESS_LEVEL_RW, 'access_type': 'ip', } access_values = fake_access access_values['share_id'] = self.share['id'] self.mock_object( self.helper, '_get_access_entries_according_to_mapping', mock.Mock(return_value=[fake_access])) self.mock_object( self.helper.db, 'share_access_get_all_by_type_and_access', mock.Mock(return_value=[access])) change_data_access_call = self.mock_object( self.helper, '_change_data_access_to_instance') self.mock_object(self.helper.db, 'share_instance_access_create', mock.Mock(return_value=access)) if allow_dest_instance: result = self.helper.allow_access_to_data_service( self.share_instance, info_src, self.share_instance, info_dest) else: result = self.helper.allow_access_to_data_service( self.share_instance, info_src) self.assertEqual([access], result) (self.helper._get_access_entries_according_to_mapping. assert_called_once_with(mapping)) (self.helper.db.share_access_get_all_by_type_and_access. 
assert_called_once_with( self.context, self.share['id'], fake_access['access_type'], fake_access['access_to'])) access_create_calls = [ mock.call(self.context, access_values, self.share_instance['id']) ] if allow_dest_instance: access_create_calls.append(mock.call( self.context, access_values, self.share_instance['id'])) self.helper.db.share_instance_access_create.assert_has_calls( access_create_calls) change_access_calls = [ mock.call(self.share_instance, [access], deny=True), mock.call(self.share_instance), ] if allow_dest_instance: change_access_calls.append( mock.call(self.share_instance)) self.assertEqual(len(change_access_calls), change_data_access_call.call_count) change_data_access_call.assert_has_calls(change_access_calls) @ddt.data({'ip': []}, {'cert': []}, {'user': []}, {'cephx': []}, {'x': []}) def test__get_access_entries_according_to_mapping(self, mapping): data_copy_helper.CONF.data_node_access_cert = 'fake' data_copy_helper.CONF.data_node_access_ips = 'fake' data_copy_helper.CONF.data_node_access_admin_user = 'fake' expected = [{ 'access_type': list(mapping.keys())[0], 'access_level': constants.ACCESS_LEVEL_RW, 'access_to': 'fake', }] exists = [x for x in mapping if x in ('ip', 'user', 'cert')] if exists: result = self.helper._get_access_entries_according_to_mapping( mapping) self.assertEqual(expected, result) else: self.assertRaises( exception.ShareDataCopyFailed, self.helper._get_access_entries_according_to_mapping, mapping) def test__get_access_entries_according_to_mapping_exception_not_set(self): data_copy_helper.CONF.data_node_access_ips = None self.assertRaises( exception.ShareDataCopyFailed, self.helper._get_access_entries_according_to_mapping, {'ip': []}) def test__get_access_entries_according_to_mapping_ip_list(self): ips = ['fake1', 'fake2'] data_copy_helper.CONF.data_node_access_ips = ips expected = [{ 'access_type': 'ip', 'access_level': constants.ACCESS_LEVEL_RW, 'access_to': x, } for x in ips] result = self.helper._get_access_entries_according_to_mapping( {'ip': []}) self.assertEqual(expected, result) def test_deny_access_to_data_service(self): # mocks self.mock_object(self.helper, '_change_data_access_to_instance') # run self.helper.deny_access_to_data_service( [self.access], self.share_instance['id']) # asserts self.helper._change_data_access_to_instance.assert_called_once_with( self.share_instance['id'], [self.access], deny=True) @ddt.data(None, Exception('fake')) def test_cleanup_data_access(self, exc): # mocks self.mock_object(self.helper, 'deny_access_to_data_service', mock.Mock(side_effect=exc)) self.mock_object(data_copy_helper.LOG, 'warning') # run self.helper.cleanup_data_access([self.access], self.share_instance['id']) # asserts self.helper.deny_access_to_data_service.assert_called_once_with( [self.access], self.share_instance['id']) if exc: self.assertTrue(data_copy_helper.LOG.warning.called) @ddt.data(False, True) def test_cleanup_temp_folder(self, exc): fake_path = ''.join(('/fake_path/', self.share_instance['id'])) # mocks self.mock_object(os.path, 'exists', mock.Mock(side_effect=[True, True, exc])) self.mock_object(os, 'rmdir') self.mock_object(data_copy_helper.LOG, 'warning') # run self.helper.cleanup_temp_folder( self.share_instance['id'], '/fake_path/') # asserts os.rmdir.assert_called_once_with(fake_path) os.path.exists.assert_has_calls([ mock.call(fake_path), mock.call(fake_path), mock.call(fake_path) ]) if exc: self.assertTrue(data_copy_helper.LOG.warning.called) @ddt.data(None, Exception('fake')) def 
test_cleanup_unmount_temp_folder(self, exc): # mocks self.mock_object(self.helper, 'unmount_share_instance', mock.Mock(side_effect=exc)) self.mock_object(data_copy_helper.LOG, 'warning') # run self.helper.cleanup_unmount_temp_folder( 'unmount_template', 'fake_path', self.share_instance['id']) # asserts self.helper.unmount_share_instance.assert_called_once_with( 'unmount_template', 'fake_path', self.share_instance['id']) if exc: self.assertTrue(data_copy_helper.LOG.warning.called) @ddt.data(True, False) def test__change_data_access_to_instance(self, deny): access_rule = db_utils.create_access(share_id=self.share['id']) access_rule = db.share_instance_access_get( self.context, access_rule['id'], self.share_instance['id']) # mocks self.mock_object(share_rpc.ShareAPI, 'update_access') self.mock_object(utils, 'wait_for_access_update') mock_access_rules_status_update = self.mock_object( self.helper.access_helper, 'get_and_update_share_instance_access_rules_status') mock_rules_update = self.mock_object( self.helper.access_helper, 'get_and_update_share_instance_access_rules') # run self.helper._change_data_access_to_instance( self.share_instance, access_rule, deny=deny) # asserts if deny: mock_rules_update.assert_called_once_with( self.context, share_instance_id=self.share_instance['id'], filters={'access_id': [access_rule['id']]}, updates={'state': constants.ACCESS_STATE_QUEUED_TO_DENY}) else: self.assertFalse(mock_rules_update.called) share_rpc.ShareAPI.update_access.assert_called_once_with( self.context, self.share_instance) mock_access_rules_status_update.assert_called_once_with( self.context, status=constants.SHARE_INSTANCE_RULES_SYNCING, share_instance_id=self.share_instance['id']) utils.wait_for_access_update.assert_called_once_with( self.context, self.helper.db, self.share_instance, data_copy_helper.CONF.data_access_wait_access_rules_timeout) def test_mount_share_instance(self): fake_path = ''.join(('/fake_path/', self.share_instance['id'])) # mocks self.mock_object(utils, 'execute') self.mock_object(os.path, 'exists', mock.Mock( side_effect=[False, False, True])) self.mock_object(os, 'makedirs') # run self.helper.mount_share_instance( 'mount %(path)s', '/fake_path', self.share_instance) # asserts utils.execute.assert_called_once_with('mount', fake_path, run_as_root=True) os.makedirs.assert_called_once_with(fake_path) os.path.exists.assert_has_calls([ mock.call(fake_path), mock.call(fake_path), mock.call(fake_path) ]) @ddt.data([True, True, False], [True, True, Exception('fake')]) def test_unmount_share_instance(self, side_effect): fake_path = ''.join(('/fake_path/', self.share_instance['id'])) # mocks self.mock_object(utils, 'execute') self.mock_object(os.path, 'exists', mock.Mock( side_effect=side_effect)) self.mock_object(os, 'rmdir') self.mock_object(data_copy_helper.LOG, 'warning') # run self.helper.unmount_share_instance( 'unmount %(path)s', '/fake_path', self.share_instance['id']) # asserts utils.execute.assert_called_once_with('unmount', fake_path, run_as_root=True) os.rmdir.assert_called_once_with(fake_path) os.path.exists.assert_has_calls([ mock.call(fake_path), mock.call(fake_path), mock.call(fake_path) ]) if any(isinstance(x, Exception) for x in side_effect): self.assertTrue(data_copy_helper.LOG.warning.called) manila-10.0.0/manila/tests/data/test_utils.py0000664000175000017500000002640613656750227021165 0ustar zuulzuul00000000000000# Copyright 2015 Hitachi Data Systems inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import time from unittest import mock from manila.data import utils as data_utils from manila import exception from manila import test from manila import utils class CopyClassTestCase(test.TestCase): def setUp(self): super(CopyClassTestCase, self).setUp() src = '/path/fake/src' dest = '/path/fake/dst' ignore_list = ['item'] self._copy = data_utils.Copy(src, dest, ignore_list) self._copy.total_size = 10000 self._copy.current_size = 100 self._copy.current_copy = {'file_path': '/fake/path', 'size': 100} self._copy.check_hash = True self.mock_log = self.mock_object(data_utils, 'LOG') def test_get_progress(self): expected = {'total_progress': 1, 'current_file_path': '/fake/path', 'current_file_progress': 100} # mocks self.mock_object(utils, 'execute', mock.Mock(return_value=("100", ""))) # run self._copy.initialized = True out = self._copy.get_progress() # asserts self.assertEqual(expected, out) utils.execute.assert_called_once_with("stat", "-c", "%s", "/fake/path", run_as_root=True) def test_get_progress_not_initialized(self): expected = {'total_progress': 0} # run self._copy.initialized = False out = self._copy.get_progress() # asserts self.assertEqual(expected, out) def test_get_progress_completed_empty(self): expected = {'total_progress': 100} # run self._copy.initialized = True self._copy.completed = True self._copy.total_size = 0 out = self._copy.get_progress() # asserts self.assertEqual(expected, out) def test_get_progress_current_copy_none(self): self._copy.current_copy = None expected = {'total_progress': 0} # run self._copy.initialized = True out = self._copy.get_progress() # asserts self.assertEqual(expected, out) def test_get_progress_exception(self): expected = {'total_progress': 1, 'current_file_path': '/fake/path', 'current_file_progress': 0} # mocks self.mock_object( utils, 'execute', mock.Mock(side_effect=utils.processutils.ProcessExecutionError())) # run self._copy.initialized = True out = self._copy.get_progress() # asserts self.assertEqual(expected, out) utils.execute.assert_called_once_with("stat", "-c", "%s", "/fake/path", run_as_root=True) def test_cancel(self): self._copy.cancelled = False # run self._copy.cancel() # asserts self.assertTrue(self._copy.cancelled) # reset self._copy.cancelled = False def test_get_total_size(self): self._copy.total_size = 0 values = [("folder1/\nitem/\nfile1\nitem", ""), ("", ""), ("10000", "")] def get_output(*args, **kwargs): return values.pop(0) # mocks self.mock_object(utils, 'execute', mock.Mock( side_effect=get_output)) # run self._copy.get_total_size(self._copy.src) # asserts self.assertEqual(10000, self._copy.total_size) utils.execute.assert_has_calls([ mock.call("ls", "-pA1", "--group-directories-first", self._copy.src, run_as_root=True), mock.call("ls", "-pA1", "--group-directories-first", os.path.join(self._copy.src, "folder1/"), run_as_root=True), mock.call("stat", "-c", "%s", os.path.join(self._copy.src, "file1"), run_as_root=True) ]) def test_get_total_size_cancelled_1(self): 
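# Cancellation flagged before the scan starts: get_total_size() must
# return early and leave total_size at 0.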
self._copy.total_size = 0 self._copy.cancelled = True # run self._copy.get_total_size(self._copy.src) # asserts self.assertEqual(0, self._copy.total_size) # reset self._copy.total_size = 10000 self._copy.cancelled = False def test_get_total_size_cancelled_2(self): self._copy.total_size = 0 def ls_output(*args, **kwargs): self._copy.cancelled = True return "folder1/", "" # mocks self.mock_object(utils, 'execute', mock.Mock( side_effect=ls_output)) # run self._copy.get_total_size(self._copy.src) # asserts self.assertEqual(0, self._copy.total_size) utils.execute.assert_called_once_with( "ls", "-pA1", "--group-directories-first", self._copy.src, run_as_root=True) # reset self._copy.total_size = 10000 self._copy.cancelled = False def test_copy_data(self): values = [("folder1/\nitem/\nfile1\nitem", ""), "", ("", ""), ("10000", ""), "", ""] def get_output(*args, **kwargs): return values.pop(0) # mocks self.mock_object(data_utils, '_validate_item', mock.Mock(side_effect=[exception.ShareDataCopyFailed( reason='fake'), None])) self.mock_object(utils, 'execute', mock.Mock( side_effect=get_output)) self.mock_object(self._copy, 'get_progress') self.mock_object(time, 'sleep') # run self._copy.copy_data(self._copy.src) # asserts self._copy.get_progress.assert_called_once_with() utils.execute.assert_has_calls([ mock.call("ls", "-pA1", "--group-directories-first", self._copy.src, run_as_root=True), mock.call("mkdir", "-p", os.path.join(self._copy.dest, "folder1/"), run_as_root=True), mock.call("ls", "-pA1", "--group-directories-first", os.path.join(self._copy.src, "folder1/"), run_as_root=True), mock.call("stat", "-c", "%s", os.path.join(self._copy.src, "file1"), run_as_root=True), mock.call("cp", "-P", "--preserve=all", os.path.join(self._copy.src, "file1"), os.path.join(self._copy.dest, "file1"), run_as_root=True), mock.call("cp", "-P", "--preserve=all", os.path.join(self._copy.src, "file1"), os.path.join(self._copy.dest, "file1"), run_as_root=True) ]) def test__validate_item(self): self.mock_object(utils, 'execute', mock.Mock( side_effect=[("abcxyz", ""), ("defrst", "")])) self.assertRaises(exception.ShareDataCopyFailed, data_utils._validate_item, 'src', 'dest') utils.execute.assert_has_calls([ mock.call("sha256sum", "src", run_as_root=True), mock.call("sha256sum", "dest", run_as_root=True), ]) def test_copy_data_cancelled_1(self): self._copy.cancelled = True # run self._copy.copy_data(self._copy.src) # reset self._copy.cancelled = False def test_copy_data_cancelled_2(self): def ls_output(*args, **kwargs): self._copy.cancelled = True return "folder1/", "" # mocks self.mock_object(utils, 'execute', mock.Mock( side_effect=ls_output)) # run self._copy.copy_data(self._copy.src) # asserts utils.execute.assert_called_once_with( "ls", "-pA1", "--group-directories-first", self._copy.src, run_as_root=True) # reset self._copy.cancelled = False def test_copy_stats(self): values = [("folder1/\nitem/\nfile1\nitem", ""), ("", ""), "", "", "", "", "", ""] def get_output(*args, **kwargs): return values.pop(0) # mocks self.mock_object(utils, 'execute', mock.Mock( side_effect=get_output)) # run self._copy.copy_stats(self._copy.src) # asserts utils.execute.assert_has_calls([ mock.call("ls", "-pA1", "--group-directories-first", self._copy.src, run_as_root=True), mock.call("ls", "-pA1", "--group-directories-first", os.path.join(self._copy.src, "folder1/"), run_as_root=True), mock.call( "chmod", "--reference=%s" % os.path.join(self._copy.src, "folder1/"), os.path.join(self._copy.dest, "folder1/"), run_as_root=True), 
            mock.call(
                "touch",
                "--reference=%s" % os.path.join(self._copy.src, "folder1/"),
                os.path.join(self._copy.dest, "folder1/"),
                run_as_root=True),
            mock.call(
                "chown",
                "--reference=%s" % os.path.join(self._copy.src, "folder1/"),
                os.path.join(self._copy.dest, "folder1/"),
                run_as_root=True),
        ])

    def test_copy_stats_cancelled_1(self):
        self._copy.cancelled = True

        # run
        self._copy.copy_stats(self._copy.src)

        # reset
        self._copy.cancelled = False

    def test_copy_stats_cancelled_2(self):
        def ls_output(*args, **kwargs):
            self._copy.cancelled = True
            return "folder1/", ""

        # mocks
        self.mock_object(utils, 'execute', mock.Mock(
            side_effect=ls_output))

        # run
        self._copy.copy_stats(self._copy.src)

        # asserts
        utils.execute.assert_called_once_with(
            "ls", "-pA1", "--group-directories-first", self._copy.src,
            run_as_root=True)

        # reset
        self._copy.cancelled = False

    def test_run(self):
        # mocks
        self.mock_object(self._copy, 'get_total_size')
        self.mock_object(self._copy, 'copy_data')
        self.mock_object(self._copy, 'copy_stats')
        self.mock_object(self._copy, 'get_progress')

        # run
        self._copy.run()

        # asserts
        self.assertTrue(data_utils.LOG.info.called)
        self._copy.get_total_size.assert_called_once_with(self._copy.src)
        self._copy.copy_data.assert_called_once_with(self._copy.src)
        self._copy.copy_stats.assert_called_once_with(self._copy.src)
        self._copy.get_progress.assert_called_once_with()
manila-10.0.0/manila/tests/data/test_rpcapi.py0000664000175000017500000000727013656750227021301 0ustar zuulzuul00000000000000
# Copyright 2015, Hitachi Data Systems.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Unit Tests for manila.data.rpcapi
"""

import copy
from unittest import mock

from oslo_config import cfg
from oslo_serialization import jsonutils

from manila.common import constants
from manila import context
from manila.data import rpcapi as data_rpcapi
from manila import test
from manila.tests import db_utils

CONF = cfg.CONF


class DataRpcAPITestCase(test.TestCase):

    def setUp(self):
        super(DataRpcAPITestCase, self).setUp()
        share = db_utils.create_share(
            availability_zone=CONF.storage_availability_zone,
            status=constants.STATUS_AVAILABLE
        )
        self.fake_share = jsonutils.to_primitive(share)

    def tearDown(self):
        super(DataRpcAPITestCase, self).tearDown()

    def _test_data_api(self, method, rpc_method, fanout=False, **kwargs):
        ctxt = context.RequestContext('fake_user', 'fake_project')
        rpcapi = data_rpcapi.DataAPI()
        expected_retval = 'foo' if method == 'call' else None

        target = {
            "fanout": fanout,
            "version": kwargs.pop('version', '1.0'),
        }
        expected_msg = copy.deepcopy(kwargs)

        self.fake_args = None
        self.fake_kwargs = None

        def _fake_prepare_method(*args, **kwds):
            for kwd in kwds:
                self.assertEqual(target[kwd], kwds[kwd])
            return rpcapi.client

        def _fake_rpc_method(*args, **kwargs):
            self.fake_args = args
            self.fake_kwargs = kwargs
            if expected_retval:
                return expected_retval

        with mock.patch.object(rpcapi.client, "prepare") as mock_prepared:
            mock_prepared.side_effect = _fake_prepare_method

            with mock.patch.object(rpcapi.client, rpc_method) as mock_method:
                mock_method.side_effect = _fake_rpc_method
                retval = getattr(rpcapi, method)(ctxt, **kwargs)
                self.assertEqual(expected_retval, retval)

                expected_args = [ctxt, method, expected_msg]
                for arg, expected_arg in zip(self.fake_args, expected_args):
                    self.assertEqual(expected_arg, arg)

    def test_migration_start(self):
        self._test_data_api('migration_start',
                            rpc_method='cast',
                            version='1.0',
                            share_id=self.fake_share['id'],
                            ignore_list=[],
                            share_instance_id='fake_ins_id',
                            dest_share_instance_id='dest_fake_ins_id',
                            connection_info_src={},
                            connection_info_dest={})

    def test_data_copy_cancel(self):
        self._test_data_api('data_copy_cancel',
                            rpc_method='call',
                            version='1.0',
                            share_id=self.fake_share['id'])

    def test_data_copy_get_progress(self):
        self._test_data_api('data_copy_get_progress',
                            rpc_method='call',
                            version='1.0',
                            share_id=self.fake_share['id'])
manila-10.0.0/manila/tests/data/test_manager.py0000664000175000017500000004133513656750227021435 0ustar zuulzuul00000000000000
# Copyright 2015, Hitachi Data Systems.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Tests For Data Manager """ from unittest import mock import ddt from manila.common import constants from manila import context from manila.data import helper from manila.data import manager from manila.data import utils as data_utils from manila import db from manila import exception from manila.share import rpcapi as share_rpc from manila import test from manila.tests import db_utils from manila import utils @ddt.ddt class DataManagerTestCase(test.TestCase): """Test case for data manager.""" def setUp(self): super(DataManagerTestCase, self).setUp() self.manager = manager.DataManager() self.context = context.get_admin_context() self.topic = 'fake_topic' self.share = db_utils.create_share() manager.CONF.set_default('mount_tmp_location', '/tmp/') def test_init(self): manager = self.manager self.assertIsNotNone(manager) @ddt.data(constants.TASK_STATE_DATA_COPYING_COMPLETING, constants.TASK_STATE_DATA_COPYING_STARTING, constants.TASK_STATE_DATA_COPYING_IN_PROGRESS) def test_init_host(self, status): share = db_utils.create_share( task_state=status) # mocks self.mock_object(db, 'share_get_all', mock.Mock( return_value=[share])) self.mock_object(db, 'share_update') # run self.manager.init_host() # asserts db.share_get_all.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) db.share_update.assert_called_with( utils.IsAMatcher(context.RequestContext), share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_ERROR}) @ddt.data(None, Exception('fake'), exception.ShareDataCopyCancelled( src_instance='ins1', dest_instance='ins2')) def test_migration_start(self, exc): # mocks self.mock_object(db, 'share_get', mock.Mock(return_value=self.share)) self.mock_object(db, 'share_instance_get', mock.Mock( return_value=self.share.instance)) self.mock_object(data_utils, 'Copy', mock.Mock(return_value='fake_copy')) if exc is None: self.manager.busy_tasks_shares[self.share['id']] = 'fake_copy' self.mock_object(self.manager, '_copy_share_data', mock.Mock(side_effect=exc)) self.mock_object(share_rpc.ShareAPI, 'migration_complete') if exc is not None and not isinstance( exc, exception.ShareDataCopyCancelled): self.mock_object(db, 'share_update') # run if exc is None or isinstance(exc, exception.ShareDataCopyCancelled): self.manager.migration_start( self.context, [], self.share['id'], 'ins1_id', 'ins2_id', 'info_src', 'info_dest') else: self.assertRaises( exception.ShareDataCopyFailed, self.manager.migration_start, self.context, [], self.share['id'], 'ins1_id', 'ins2_id', 'info_src', 'info_dest') db.share_update.assert_called_once_with( self.context, self.share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_ERROR}) # asserts self.assertFalse(self.manager.busy_tasks_shares.get(self.share['id'])) self.manager._copy_share_data.assert_called_once_with( self.context, 'fake_copy', self.share, 'ins1_id', 'ins2_id', 'info_src', 'info_dest') if exc: share_rpc.ShareAPI.migration_complete.assert_called_once_with( self.context, self.share.instance, 'ins2_id') @ddt.data({'cancelled': False, 'exc': None}, {'cancelled': False, 'exc': Exception('fake')}, {'cancelled': True, 'exc': None}) @ddt.unpack def test__copy_share_data(self, cancelled, exc): access = db_utils.create_access(share_id=self.share['id']) connection_info_src = {'mount': 'mount_cmd_src', 'unmount': 'unmount_cmd_src'} connection_info_dest = {'mount': 'mount_cmd_dest', 'unmount': 'unmount_cmd_dest'} get_progress = {'total_progress': 100} # mocks fake_copy = mock.MagicMock(cancelled=cancelled) self.mock_object(db, 'share_update') 
self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=[self.share['instance'], self.share['instance']])) self.mock_object(helper.DataServiceHelper, 'allow_access_to_data_service', mock.Mock(return_value=[access])) self.mock_object(helper.DataServiceHelper, 'mount_share_instance') self.mock_object(fake_copy, 'run', mock.Mock(side_effect=exc)) self.mock_object(fake_copy, 'get_progress', mock.Mock(return_value=get_progress)) self.mock_object(helper.DataServiceHelper, 'unmount_share_instance', mock.Mock(side_effect=Exception('fake'))) self.mock_object(helper.DataServiceHelper, 'deny_access_to_data_service', mock.Mock(side_effect=Exception('fake'))) extra_updates = None # run if cancelled: self.assertRaises( exception.ShareDataCopyCancelled, self.manager._copy_share_data, self.context, fake_copy, self.share, 'ins1_id', 'ins2_id', connection_info_src, connection_info_dest) extra_updates = [ mock.call( self.context, self.share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_COMPLETING}), mock.call( self.context, self.share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_CANCELLED}) ] elif exc: self.assertRaises( exception.ShareDataCopyFailed, self.manager._copy_share_data, self.context, fake_copy, self.share, 'ins1_id', 'ins2_id', connection_info_src, connection_info_dest) else: self.manager._copy_share_data( self.context, fake_copy, self.share, 'ins1_id', 'ins2_id', connection_info_src, connection_info_dest) extra_updates = [ mock.call( self.context, self.share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_COMPLETING}), mock.call( self.context, self.share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_COMPLETED}) ] # asserts self.assertEqual( self.manager.busy_tasks_shares[self.share['id']], fake_copy) update_list = [ mock.call( self.context, self.share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_STARTING}), mock.call( self.context, self.share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_IN_PROGRESS}), ] if extra_updates: update_list = update_list + extra_updates db.share_update.assert_has_calls(update_list) (helper.DataServiceHelper.allow_access_to_data_service. 
assert_called_once_with( self.share['instance'], connection_info_src, self.share['instance'], connection_info_dest)) helper.DataServiceHelper.mount_share_instance.assert_has_calls([ mock.call(connection_info_src['mount'], '/tmp/', self.share['instance']), mock.call(connection_info_dest['mount'], '/tmp/', self.share['instance'])]) fake_copy.run.assert_called_once_with() if exc is None: fake_copy.get_progress.assert_called_once_with() helper.DataServiceHelper.unmount_share_instance.assert_has_calls([ mock.call(connection_info_src['unmount'], '/tmp/', 'ins1_id'), mock.call(connection_info_dest['unmount'], '/tmp/', 'ins2_id')]) helper.DataServiceHelper.deny_access_to_data_service.assert_has_calls([ mock.call([access], self.share['instance']), mock.call([access], self.share['instance'])]) def test__copy_share_data_exception_access(self): connection_info_src = {'mount': 'mount_cmd_src', 'unmount': 'unmount_cmd_src'} connection_info_dest = {'mount': 'mount_cmd_src', 'unmount': 'unmount_cmd_src'} fake_copy = mock.MagicMock(cancelled=False) # mocks self.mock_object(db, 'share_update') self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=[self.share['instance'], self.share['instance']])) self.mock_object( helper.DataServiceHelper, 'allow_access_to_data_service', mock.Mock( side_effect=exception.ShareDataCopyFailed(reason='fake'))) self.mock_object(helper.DataServiceHelper, 'cleanup_data_access') # run self.assertRaises(exception.ShareDataCopyFailed, self.manager._copy_share_data, self.context, fake_copy, self.share, 'ins1_id', 'ins2_id', connection_info_src, connection_info_dest) # asserts db.share_update.assert_called_once_with( self.context, self.share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_STARTING}) (helper.DataServiceHelper.allow_access_to_data_service. assert_called_once_with( self.share['instance'], connection_info_src, self.share['instance'], connection_info_dest)) def test__copy_share_data_exception_mount_1(self): access = db_utils.create_access(share_id=self.share['id']) connection_info_src = {'mount': 'mount_cmd_src', 'unmount': 'unmount_cmd_src'} connection_info_dest = {'mount': 'mount_cmd_src', 'unmount': 'unmount_cmd_src'} fake_copy = mock.MagicMock(cancelled=False) # mocks self.mock_object(db, 'share_update') self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=[self.share['instance'], self.share['instance']])) self.mock_object(helper.DataServiceHelper, 'allow_access_to_data_service', mock.Mock(return_value=[access])) self.mock_object(helper.DataServiceHelper, 'mount_share_instance', mock.Mock(side_effect=Exception('fake'))) self.mock_object(helper.DataServiceHelper, 'cleanup_data_access') self.mock_object(helper.DataServiceHelper, 'cleanup_temp_folder') # run self.assertRaises(exception.ShareDataCopyFailed, self.manager._copy_share_data, self.context, fake_copy, self.share, 'ins1_id', 'ins2_id', connection_info_src, connection_info_dest) # asserts db.share_update.assert_called_once_with( self.context, self.share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_STARTING}) (helper.DataServiceHelper.allow_access_to_data_service. 
assert_called_once_with( self.share['instance'], connection_info_src, self.share['instance'], connection_info_dest)) helper.DataServiceHelper.mount_share_instance.assert_called_once_with( connection_info_src['mount'], '/tmp/', self.share['instance']) helper.DataServiceHelper.cleanup_temp_folder.assert_called_once_with( 'ins1_id', '/tmp/') helper.DataServiceHelper.cleanup_data_access.assert_has_calls([ mock.call([access], 'ins2_id'), mock.call([access], 'ins1_id')]) def test__copy_share_data_exception_mount_2(self): access = db_utils.create_access(share_id=self.share['id']) connection_info_src = {'mount': 'mount_cmd_src', 'unmount': 'unmount_cmd_src'} connection_info_dest = {'mount': 'mount_cmd_src', 'unmount': 'unmount_cmd_src'} fake_copy = mock.MagicMock(cancelled=False) # mocks self.mock_object(db, 'share_update') self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=[self.share['instance'], self.share['instance']])) self.mock_object(helper.DataServiceHelper, 'allow_access_to_data_service', mock.Mock(return_value=[access])) self.mock_object(helper.DataServiceHelper, 'mount_share_instance', mock.Mock(side_effect=[None, Exception('fake')])) self.mock_object(helper.DataServiceHelper, 'cleanup_data_access') self.mock_object(helper.DataServiceHelper, 'cleanup_temp_folder') self.mock_object(helper.DataServiceHelper, 'cleanup_unmount_temp_folder') # run self.assertRaises(exception.ShareDataCopyFailed, self.manager._copy_share_data, self.context, fake_copy, self.share, 'ins1_id', 'ins2_id', connection_info_src, connection_info_dest) # asserts db.share_update.assert_called_once_with( self.context, self.share['id'], {'task_state': constants.TASK_STATE_DATA_COPYING_STARTING}) (helper.DataServiceHelper.allow_access_to_data_service. assert_called_once_with( self.share['instance'], connection_info_src, self.share['instance'], connection_info_dest)) helper.DataServiceHelper.mount_share_instance.assert_has_calls([ mock.call(connection_info_src['mount'], '/tmp/', self.share['instance']), mock.call(connection_info_dest['mount'], '/tmp/', self.share['instance'])]) (helper.DataServiceHelper.cleanup_unmount_temp_folder. 
assert_called_once_with( connection_info_src['unmount'], '/tmp/', 'ins1_id')) helper.DataServiceHelper.cleanup_temp_folder.assert_has_calls([ mock.call('ins2_id', '/tmp/'), mock.call('ins1_id', '/tmp/')]) helper.DataServiceHelper.cleanup_data_access.assert_has_calls([ mock.call([access], 'ins2_id'), mock.call([access], 'ins1_id')]) def test_data_copy_cancel(self): share = db_utils.create_share() self.manager.busy_tasks_shares[share['id']] = data_utils.Copy # mocks self.mock_object(data_utils.Copy, 'cancel') # run self.manager.data_copy_cancel(self.context, share['id']) # asserts data_utils.Copy.cancel.assert_called_once_with() def test_data_copy_cancel_not_copying(self): self.assertRaises(exception.InvalidShare, self.manager.data_copy_cancel, self.context, 'fake_id') def test_data_copy_get_progress(self): share = db_utils.create_share() self.manager.busy_tasks_shares[share['id']] = data_utils.Copy expected = 'fake_progress' # mocks self.mock_object(data_utils.Copy, 'get_progress', mock.Mock(return_value=expected)) # run result = self.manager.data_copy_get_progress(self.context, share['id']) # asserts self.assertEqual(expected, result) data_utils.Copy.get_progress.assert_called_once_with() def test_data_copy_get_progress_not_copying(self): self.assertRaises(exception.InvalidShare, self.manager.data_copy_get_progress, self.context, 'fake_id') manila-10.0.0/manila/tests/test_misc.py0000664000175000017500000000423113656750227020037 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack LLC # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
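# Overview of manila/tests/test_misc.py, which follows: ExceptionTestCase
# checks that every class exported by manila.exception can be raised, and
# ProjectTestCase checks that each SQLAlchemy migration script defines a
# downgrade() to match its upgrade().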
import glob
import os

from manila import exception
from manila import test


class ExceptionTestCase(test.TestCase):
    @staticmethod
    def _raise_exc(exc):
        raise exc()

    def test_exceptions_raise(self):
        # NOTE(dprince): disable format errors since we are not passing kwargs
        self.flags(fatal_exception_format_errors=False)
        for name in dir(exception):
            exc = getattr(exception, name)
            if isinstance(exc, type):
                self.assertRaises(exc, self._raise_exc, exc)


class ProjectTestCase(test.TestCase):
    def test_all_migrations_have_downgrade(self):
        topdir = os.path.normpath(os.path.dirname(__file__) + '/../../')
        py_glob = os.path.join(topdir, "manila", "db", "sqlalchemy",
                               "migrate_repo", "versions", "*.py")
        missing_downgrade = []
        for path in glob.iglob(py_glob):
            has_upgrade = False
            has_downgrade = False
            with open(path, "r") as f:
                for line in f:
                    if 'def upgrade(' in line:
                        has_upgrade = True
                    if 'def downgrade(' in line:
                        has_downgrade = True

                if has_upgrade and not has_downgrade:
                    fname = os.path.basename(path)
                    missing_downgrade.append(fname)

        helpful_msg = ("The following migrations are missing a downgrade:"
                       "\n\t%s") % '\n\t'.join(sorted(missing_downgrade))
        self.assertTrue(not missing_downgrade, helpful_msg)
manila-10.0.0/manila/tests/common/0000775000175000017500000000000013656750362016763 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/common/test_client_auth.py0000664000175000017500000001005013656750227022673 0ustar zuulzuul00000000000000
# Copyright 2016 SAP SE
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
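# Overview of manila/tests/common/test_client_auth.py, which follows: it
# exercises client_auth.AuthClientLoader -- building an admin client from
# keystoneauth1 session/auth options, caching the loaded auth plugin,
# failing when no auth is configured, and reporting options via list_opts().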
from unittest import mock from keystoneauth1 import loading as auth from oslo_config import cfg from manila.common import client_auth from manila import exception from manila import test from manila.tests import fake_client_exception_class class ClientAuthTestCase(test.TestCase): def setUp(self): super(ClientAuthTestCase, self).setUp() self.context = mock.Mock() self.fake_client = mock.Mock() self.exception_mod = fake_client_exception_class self.auth = client_auth.AuthClientLoader( self.fake_client, self.exception_mod, 'foo_group') def test_get_client_admin_true(self): mock_load_session = self.mock_object(auth, 'load_session_from_conf_options') self.auth.get_client(self.context, admin=True) mock_load_session.assert_called_once_with(client_auth.CONF, 'foo_group') self.fake_client.assert_called_once_with( session=mock_load_session(), auth=auth.load_auth_from_conf_options( client_auth.CONF, 'foo_group')) def test_get_client_admin_false(self): self.mock_object(auth, 'load_session_from_conf_options') self.assertRaises(exception.ManilaException, self.auth.get_client, self.context, admin=False) def test_load_auth_plugin_caching(self): self.auth.admin_auth = 'admin obj' result = self.auth._load_auth_plugin() self.assertEqual(self.auth.admin_auth, result) def test_load_auth_plugin_no_auth(self): auth.load_auth_from_conf_options.return_value = None self.assertRaises(fake_client_exception_class.Unauthorized, self.auth._load_auth_plugin) @mock.patch.object(auth, 'get_session_conf_options') @mock.patch.object(auth, 'get_auth_common_conf_options') @mock.patch.object(auth, 'get_auth_plugin_conf_options') def test_list_opts(self, auth_conf, common_conf, session_conf): session_conf.return_value = [cfg.StrOpt('username'), cfg.StrOpt('password')] common_conf.return_value = ([cfg.StrOpt('auth_url')]) auth_conf.return_value = [cfg.StrOpt('password')] result = client_auth.AuthClientLoader.list_opts("foo_group") self.assertEqual('foo_group', result[0][0]) for entry in result[0][1]: self.assertIn(entry.name, ['username', 'auth_url', 'password']) common_conf.assert_called_once_with() auth_conf.assert_called_once_with('password') @mock.patch.object(auth, 'get_session_conf_options') @mock.patch.object(auth, 'get_auth_common_conf_options') @mock.patch.object(auth, 'get_auth_plugin_conf_options') def test_list_opts_not_found(self, auth_conf, common_conf, session_conf): session_conf.return_value = [cfg.StrOpt('username'), cfg.StrOpt('password')] common_conf.return_value = ([cfg.StrOpt('auth_url')]) auth_conf.return_value = [cfg.StrOpt('tenant')] result = client_auth.AuthClientLoader.list_opts("foo_group") self.assertEqual('foo_group', result[0][0]) for entry in result[0][1]: self.assertIn(entry.name, ['username', 'auth_url', 'password', 'tenant']) common_conf.assert_called_once_with() auth_conf.assert_called_once_with('password') manila-10.0.0/manila/tests/common/__init__.py0000664000175000017500000000000013656750227021062 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/common/test_config.py0000664000175000017500000000325013656750227021641 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt from manila.common import config from manila.common import constants from manila import exception from manila import test from manila.tests import utils as test_utils VALID_CASES = [proto.lower() for proto in constants.SUPPORTED_SHARE_PROTOCOLS] VALID_CASES.extend([proto.upper() for proto in VALID_CASES]) VALID_CASES.append(','.join(case for case in VALID_CASES)) @ddt.ddt class VerifyConfigShareProtocolsTestCase(test.TestCase): @ddt.data(*VALID_CASES) def test_verify_share_protocols_valid_cases(self, proto): data = dict(DEFAULT=dict(enabled_share_protocols=proto)) with test_utils.create_temp_config_with_opts(data): config.verify_share_protocols() @ddt.data(None, '', 'fake', [], ['fake'], [VALID_CASES[0] + 'fake']) def test_verify_share_protocols_invalid_cases(self, proto): data = dict(DEFAULT=dict(enabled_share_protocols=proto)) with test_utils.create_temp_config_with_opts(data): self.assertRaises( exception.ManilaException, config.verify_share_protocols) manila-10.0.0/manila/tests/integrated/0000775000175000017500000000000013656750362017621 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/integrated/__init__.py0000664000175000017500000000000013656750227021720 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/integrated/integrated_helpers.py0000664000175000017500000000775213656750227024056 0ustar zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Provides common functionality for integrated unit tests """ import random import string from oslo_log import log from manila import service from manila import test # For the flags from manila.tests.integrated.api import client from oslo_config import cfg from oslo_utils import uuidutils CONF = cfg.CONF LOG = log.getLogger(__name__) def generate_random_alphanumeric(length): """Creates a random alphanumeric string of specified length.""" return ''.join(random.choice(string.ascii_uppercase + string.digits) for _x in range(length)) def generate_random_numeric(length): """Creates a random numeric string of specified length.""" return ''.join(random.choice(string.digits) for _x in range(length)) def generate_new_element(items, prefix, numeric=False): """Creates a random string with prefix, that is not in 'items' list.""" while True: if numeric: candidate = prefix + generate_random_numeric(8) else: candidate = prefix + generate_random_alphanumeric(8) if candidate not in items: return candidate LOG.debug("Random collision on %s.", candidate) class _IntegratedTestBase(test.TestCase): def setUp(self): super(_IntegratedTestBase, self).setUp() f = self._get_flags() self.flags(**f) # set up services self.volume = self.start_service('share') self.scheduler = self.start_service('scheduler') self._start_api_service() self.api = client.TestOpenStackClient('fake', 'fake', self.auth_url) def tearDown(self): self.osapi.stop() super(_IntegratedTestBase, self).tearDown() def _start_api_service(self): self.osapi = service.WSGIService("osapi_share") self.osapi.start() # FIXME(ja): this is not the auth url - this is the service url # FIXME(ja): this needs fixed in nova as well self.auth_url = 'http://%s:%s/v1' % (self.osapi.host, self.osapi.port) LOG.warning(self.auth_url) def _get_flags(self): """An opportunity to setup flags, before the services are started.""" f = {} # Ensure tests only listen on localhost f['osapi_share_listen'] = '127.0.0.1' # Auto-assign ports to allow concurrent tests f['osapi_share_listen_port'] = 0 return f def get_unused_server_name(self): servers = self.api.get_servers() server_names = [server['name'] for server in servers] return generate_new_element(server_names, 'server') def get_invalid_image(self): return uuidutils.generate_uuid() def _build_minimal_create_server_request(self): server = {} image = self.api.get_images()[0] LOG.debug("Image: %s.", image) if 'imageRef' in image: image_href = image['imageRef'] else: image_href = image['id'] image_href = 'http://fake.server/%s' % image_href # We now have a valid imageId server['imageRef'] = image_href # Set a valid flavorId flavor = self.api.get_flavors()[0] LOG.debug("Using flavor: %s.", flavor) server['flavorRef'] = 'http://fake.server/%s' % flavor['id'] # Set a valid server name server_name = self.get_unused_server_name() server['name'] = server_name return server manila-10.0.0/manila/tests/integrated/test_login.py0000664000175000017500000000177113656750227022350 0ustar zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from manila.tests.integrated import integrated_helpers LOG = log.getLogger(__name__) class LoginTest(integrated_helpers._IntegratedTestBase): def test_login(self): """Simple check - we list shares - so we know we're logged in.""" shares = self.api.get_shares() for share in shares: LOG.debug("share: %s", share) manila-10.0.0/manila/tests/integrated/test_extensions.py0000664000175000017500000000264013656750227023433 0ustar zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_log import log import six from manila.tests.integrated import integrated_helpers CONF = cfg.CONF LOG = log.getLogger(__name__) class ExtensionsTest(integrated_helpers._IntegratedTestBase): def _get_flags(self): f = super(ExtensionsTest, self)._get_flags() f['osapi_share_extension'] = CONF.osapi_share_extension[:] f['osapi_share_extension'].append( 'manila.tests.api.extensions.foxinsocks.Foxinsocks') return f def test_get_foxnsocks(self): """Simple check that fox-n-socks works.""" response = self.api.api_request('/foxnsocks') foxnsocks = response.read() LOG.debug("foxnsocks: %s.", foxnsocks) self.assertEqual(six.b('Try to say this Mr. Knox, sir...'), foxnsocks) manila-10.0.0/manila/tests/integrated/api/0000775000175000017500000000000013656750362020372 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/integrated/api/__init__.py0000664000175000017500000000000013656750227022471 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/integrated/api/client.py0000664000175000017500000001740713656750227022233 0ustar zuulzuul00000000000000# Copyright (c) 2011 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
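# Overview of manila/tests/integrated/api/client.py, which follows: an
# OpenStackApiException hierarchy plus TestOpenStackClient, a minimal HTTP
# client the integrated tests use to authenticate and issue JSON
# GET/POST/PUT/DELETE requests against the share API.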
from oslo_log import log from oslo_serialization import jsonutils from six.moves import http_client from six.moves.urllib import parse LOG = log.getLogger(__name__) class OpenStackApiException(Exception): def __init__(self, message=None, response=None): self.response = response if not message: message = 'Unspecified error' if response: _status = response.status _body = response.read() message = ('%(message)s\nStatus Code: %(_status)s\n' 'Body: %(_body)s') % { "message": message, "_status": _status, "_body": _body } super(OpenStackApiException, self).__init__(message) class OpenStackApiAuthenticationException(OpenStackApiException): def __init__(self, response=None, message=None): if not message: message = "Authentication error" super(OpenStackApiAuthenticationException, self).__init__(message, response) class OpenStackApiAuthorizationException(OpenStackApiException): def __init__(self, response=None, message=None): if not message: message = "Authorization error" super(OpenStackApiAuthorizationException, self).__init__(message, response) class OpenStackApiNotFoundException(OpenStackApiException): def __init__(self, response=None, message=None): if not message: message = "Item not found" super(OpenStackApiNotFoundException, self).__init__(message, response) class TestOpenStackClient(object): """Simple OpenStack API Client. This is a really basic OpenStack API client that is under our control, so we can make changes / insert hooks for testing """ def __init__(self, auth_user, auth_key, www_authenticate_uri): super(TestOpenStackClient, self).__init__() self.auth_result = None self.auth_user = auth_user self.auth_key = auth_key self.www_authenticate_uri = www_authenticate_uri # default project_id self.project_id = 'openstack' def request(self, url, method='GET', body=None, headers=None): _headers = {'Content-Type': 'application/json'} _headers.update(headers or {}) parsed_url = parse.urlparse(url) port = parsed_url.port hostname = parsed_url.hostname scheme = parsed_url.scheme if scheme == 'http': conn = http_client.HTTPConnection(hostname, port=port) elif scheme == 'https': conn = http_client.HTTPSConnection(hostname, port=port) else: raise OpenStackApiException("Unknown scheme: %s" % url) relative_url = parsed_url.path if parsed_url.query: relative_url = relative_url + "?" 
+ parsed_url.query LOG.info("Doing %(method)s on %(relative_url)s", {"method": method, "relative_url": relative_url}) if body: LOG.info("Body: %s", body) conn.request(method, relative_url, body, _headers) response = conn.getresponse() return response def _authenticate(self): if self.auth_result: return self.auth_result www_authenticate_uri = self.www_authenticate_uri headers = {'X-Auth-User': self.auth_user, 'X-Auth-Key': self.auth_key, 'X-Auth-Project-Id': self.project_id} response = self.request(www_authenticate_uri, headers=headers) http_status = response.status LOG.debug("%(www_authenticate_uri)s => code %(http_status)s.", {"www_authenticate_uri": www_authenticate_uri, "http_status": http_status}) if http_status == 401: raise OpenStackApiAuthenticationException(response=response) auth_headers = {} for k, v in response.getheaders(): auth_headers[k.lower()] = v self.auth_result = auth_headers return self.auth_result def api_request(self, relative_uri, check_response_status=None, **kwargs): auth_result = self._authenticate() base_uri = auth_result['x-server-management-url'] full_uri = '%s/%s' % (base_uri, relative_uri) headers = kwargs.setdefault('headers', {}) headers['X-Auth-Token'] = auth_result['x-auth-token'] response = self.request(full_uri, **kwargs) http_status = response.status LOG.debug("%(relative_uri)s => code %(http_status)s.", {"relative_uri": relative_uri, "http_status": http_status}) if check_response_status: if http_status not in check_response_status: if http_status == 404: raise OpenStackApiNotFoundException(response=response) elif http_status == 401: raise OpenStackApiAuthorizationException(response=response) else: raise OpenStackApiException( message="Unexpected status code", response=response) return response def _decode_json(self, response): body = response.read() LOG.debug("Decoding JSON: %s.", (body)) if body: return jsonutils.loads(body) else: return "" def api_options(self, relative_uri, **kwargs): kwargs['method'] = 'OPTIONS' kwargs.setdefault('check_response_status', [200]) response = self.api_request(relative_uri, **kwargs) return self._decode_json(response) def api_get(self, relative_uri, **kwargs): kwargs.setdefault('check_response_status', [200]) response = self.api_request(relative_uri, **kwargs) return self._decode_json(response) def api_post(self, relative_uri, body, **kwargs): kwargs['method'] = 'POST' if body: headers = kwargs.setdefault('headers', {}) headers['Content-Type'] = 'application/json' kwargs['body'] = jsonutils.dumps(body) kwargs.setdefault('check_response_status', [200, 202]) response = self.api_request(relative_uri, **kwargs) return self._decode_json(response) def api_put(self, relative_uri, body, **kwargs): kwargs['method'] = 'PUT' if body: headers = kwargs.setdefault('headers', {}) headers['Content-Type'] = 'application/json' kwargs['body'] = jsonutils.dumps(body) kwargs.setdefault('check_response_status', [200, 202, 204]) response = self.api_request(relative_uri, **kwargs) return self._decode_json(response) def api_delete(self, relative_uri, **kwargs): kwargs['method'] = 'DELETE' kwargs.setdefault('check_response_status', [200, 202, 204]) return self.api_request(relative_uri, **kwargs) def get_shares(self, detail=True): rel_url = '/shares/detail' if detail else '/shares' return self.api_get(rel_url)['shares'] manila-10.0.0/manila/tests/xenapi/0000775000175000017500000000000013656750362016757 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/xenapi/__init__.py0000664000175000017500000000000013656750227021056 0ustar 
zuulzuul00000000000000manila-10.0.0/manila/tests/runtime_conf.py0000664000175000017500000000153013656750227020534 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg CONF = cfg.CONF CONF.register_opt(cfg.IntOpt('runtime_answer', default=54, help='test flag')) manila-10.0.0/manila/tests/test_utils.py0000664000175000017500000007550513656750227020260 0ustar zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # Copyright 2014 NetApp, Inc. # Copyright 2014 Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import json import time from unittest import mock import ddt from oslo_config import cfg from oslo_utils import timeutils from oslo_utils import uuidutils import paramiko import six from webob import exc import manila from manila.common import constants from manila import context from manila.db import api as db from manila import exception from manila import test from manila import utils CONF = cfg.CONF @ddt.ddt class GenericUtilsTestCase(test.TestCase): def test_service_is_up(self): fts_func = datetime.datetime.fromtimestamp fake_now = 1000 down_time = 5 self.flags(service_down_time=down_time) with mock.patch.object(timeutils, 'utcnow', mock.Mock(return_value=fts_func(fake_now))): # Up (equal) service = {'updated_at': fts_func(fake_now - down_time), 'created_at': fts_func(fake_now - down_time)} result = utils.service_is_up(service) self.assertTrue(result) timeutils.utcnow.assert_called_once_with() with mock.patch.object(timeutils, 'utcnow', mock.Mock(return_value=fts_func(fake_now))): # Up service = {'updated_at': fts_func(fake_now - down_time + 1), 'created_at': fts_func(fake_now - down_time + 1)} result = utils.service_is_up(service) self.assertTrue(result) timeutils.utcnow.assert_called_once_with() with mock.patch.object(timeutils, 'utcnow', mock.Mock(return_value=fts_func(fake_now))): # Down service = {'updated_at': fts_func(fake_now - down_time - 1), 'created_at': fts_func(fake_now - down_time - 1)} result = utils.service_is_up(service) self.assertFalse(result) timeutils.utcnow.assert_called_once_with() @ddt.data(['ssh', '-D', 'my_name@name_of_remote_computer'], ['echo', '"quoted arg with space"'], ['echo', "'quoted arg with space'"]) def test_check_ssh_injection(self, cmd): cmd_list = cmd self.assertIsNone(utils.check_ssh_injection(cmd_list)) @ddt.data(['ssh', 'my_name@ 
name_of_remote_computer'], ['||', 'my_name@name_of_remote_computer'], ['cmd', 'virus;ls'], ['cmd', '"arg\"withunescaped"'], ['cmd', 'virus;"quoted argument"'], ['echo', '"quoted argument";rm -rf'], ['echo', "'quoted argument `rm -rf`'"], ['echo', '"quoted";virus;"quoted"'], ['echo', '"quoted";virus;\'quoted\'']) def test_check_ssh_injection_on_error0(self, cmd): self.assertRaises(exception.SSHInjectionThreat, utils.check_ssh_injection, cmd) @ddt.data( (("3G", "G"), 3.0), (("4.1G", "G"), 4.1), (("4,1G", "G"), 4.1), (("5.23G", "G"), 5.23), (("5,23G", "G"), 5.23), (("9728M", "G"), 9.5), (("8192K", "G"), 0.0078125), (("2T", "G"), 2048.0), (("2.1T", "G"), 2150.4), (("2,1T", "G"), 2150.4), (("3P", "G"), 3145728.0), (("3.4P", "G"), 3565158.4), (("3,4P", "G"), 3565158.4), (("9728M", "M"), 9728.0), (("9728.2381T", "T"), 9728.2381), (("9728,2381T", "T"), 9728.2381), (("0", "G"), 0.0), (("512", "M"), 0.00048828125), (("2097152.", "M"), 2.0), ((".1024", "K"), 0.0001), ((",1024", "K"), 0.0001), (("2048G", "T"), 2.0), (("65536G", "P"), 0.0625), ) @ddt.unpack def test_translate_string_size_to_float_positive(self, request, expected): actual = utils.translate_string_size_to_float(*request) self.assertEqual(expected, actual) @ddt.data( (None, "G"), ("fake", "G"), ("1fake", "G"), ("2GG", "G"), ("1KM", "G"), ("K1M", "G"), ("M1K", "G"), ("1.2fake", "G"), ("1,2fake", "G"), ("2.2GG", "G"), ("1.1KM", "G"), ("K2.2M", "G"), ("K2,2M", "G"), ("M2.2K", "G"), ("M2,2K", "G"), ("", "G"), (23, "G"), (23.0, "G"), ) @ddt.unpack def test_translate_string_size_to_float_negative(self, string, multiplier): actual = utils.translate_string_size_to_float(string, multiplier) self.assertIsNone(actual) class MonkeyPatchTestCase(test.TestCase): """Unit test for utils.monkey_patch().""" def setUp(self): super(MonkeyPatchTestCase, self).setUp() self.example_package = 'manila.tests.monkey_patch_example.' self.flags( monkey_patch=True, monkey_patch_modules=[self.example_package + 'example_a' + ':' + self.example_package + 'example_decorator']) def test_monkey_patch(self): utils.monkey_patch() manila.tests.monkey_patch_example.CALLED_FUNCTION = [] from manila.tests.monkey_patch_example import example_a from manila.tests.monkey_patch_example import example_b self.assertEqual('Example function', example_a.example_function_a()) exampleA = example_a.ExampleClassA() exampleA.example_method() ret_a = exampleA.example_method_add(3, 5) self.assertEqual(8, ret_a) self.assertEqual('Example function', example_b.example_function_b()) exampleB = example_b.ExampleClassB() exampleB.example_method() ret_b = exampleB.example_method_add(3, 5) self.assertEqual(8, ret_b) package_a = self.example_package + 'example_a.' self.assertIn(package_a + 'example_function_a', manila.tests.monkey_patch_example.CALLED_FUNCTION) self.assertIn(package_a + 'ExampleClassA.example_method', manila.tests.monkey_patch_example.CALLED_FUNCTION) self.assertIn(package_a + 'ExampleClassA.example_method_add', manila.tests.monkey_patch_example.CALLED_FUNCTION) package_b = self.example_package + 'example_b.' 
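# example_b is not listed in monkey_patch_modules, so none of its
# functions or methods should have been recorded as called.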
self.assertNotIn(package_b + 'example_function_b', manila.tests.monkey_patch_example.CALLED_FUNCTION) self.assertNotIn(package_b + 'ExampleClassB.example_method', manila.tests.monkey_patch_example.CALLED_FUNCTION) self.assertNotIn(package_b + 'ExampleClassB.example_method_add', manila.tests.monkey_patch_example.CALLED_FUNCTION) class FakeSSHClient(object): def __init__(self): self.id = uuidutils.generate_uuid() self.transport = FakeTransport() def set_missing_host_key_policy(self, policy): pass def connect(self, ip, port=22, username=None, password=None, key_filename=None, look_for_keys=None, timeout=10, banner_timeout=10): pass def get_transport(self): return self.transport def close(self): pass def __call__(self, *args, **kwargs): pass class FakeSock(object): def settimeout(self, timeout): pass class FakeTransport(object): def __init__(self): self.active = True self.sock = FakeSock() def set_keepalive(self, timeout): pass def is_active(self): return self.active class SSHPoolTestCase(test.TestCase): """Unit test for SSH Connection Pool.""" def test_single_ssh_connect(self): with mock.patch.object(paramiko, "SSHClient", mock.Mock(return_value=FakeSSHClient())): sshpool = utils.SSHPool("127.0.0.1", 22, 10, "test", password="test", min_size=1, max_size=1) with sshpool.item() as ssh: first_id = ssh.id with sshpool.item() as ssh: second_id = ssh.id self.assertEqual(first_id, second_id) paramiko.SSHClient.assert_called_once_with() def test_create_ssh_with_password(self): fake_ssh_client = mock.Mock() ssh_pool = utils.SSHPool("127.0.0.1", 22, 10, "test", password="test") with mock.patch.object(paramiko, "SSHClient", return_value=fake_ssh_client): ssh_pool.create() fake_ssh_client.connect.assert_called_once_with( "127.0.0.1", port=22, username="test", password="test", key_filename=None, look_for_keys=False, timeout=10, banner_timeout=10) def test_create_ssh_with_key(self): path_to_private_key = "/fakepath/to/privatekey" fake_ssh_client = mock.Mock() ssh_pool = utils.SSHPool("127.0.0.1", 22, 10, "test", privatekey="/fakepath/to/privatekey") with mock.patch.object(paramiko, "SSHClient", return_value=fake_ssh_client): ssh_pool.create() fake_ssh_client.connect.assert_called_once_with( "127.0.0.1", port=22, username="test", password=None, key_filename=path_to_private_key, look_for_keys=False, timeout=10, banner_timeout=10) def test_create_ssh_with_nothing(self): fake_ssh_client = mock.Mock() ssh_pool = utils.SSHPool("127.0.0.1", 22, 10, "test") with mock.patch.object(paramiko, "SSHClient", return_value=fake_ssh_client): ssh_pool.create() fake_ssh_client.connect.assert_called_once_with( "127.0.0.1", port=22, username="test", password=None, key_filename=None, look_for_keys=True, timeout=10, banner_timeout=10) def test_create_ssh_error_connecting(self): attrs = {'connect.side_effect': paramiko.SSHException, } fake_ssh_client = mock.Mock(**attrs) ssh_pool = utils.SSHPool("127.0.0.1", 22, 10, "test") with mock.patch.object(paramiko, "SSHClient", return_value=fake_ssh_client): self.assertRaises(exception.SSHException, ssh_pool.create) fake_ssh_client.connect.assert_called_once_with( "127.0.0.1", port=22, username="test", password=None, key_filename=None, look_for_keys=True, timeout=10, banner_timeout=10) def test_closed_reopend_ssh_connections(self): with mock.patch.object(paramiko, "SSHClient", mock.Mock(return_value=FakeSSHClient())): sshpool = utils.SSHPool("127.0.0.1", 22, 10, "test", password="test", min_size=1, max_size=2) with sshpool.item() as ssh: first_id = ssh.id with sshpool.item() as ssh: 
second_id = ssh.id # Close the connection and test for a new connection ssh.get_transport().active = False self.assertEqual(first_id, second_id) paramiko.SSHClient.assert_called_once_with() # Expected new ssh pool with mock.patch.object(paramiko, "SSHClient", mock.Mock(return_value=FakeSSHClient())): with sshpool.item() as ssh: third_id = ssh.id self.assertNotEqual(first_id, third_id) paramiko.SSHClient.assert_called_once_with() @mock.patch('six.moves.builtins.open') @mock.patch('paramiko.SSHClient') @mock.patch('os.path.isfile', return_value=True) def test_sshpool_remove(self, mock_isfile, mock_sshclient, mock_open): ssh_to_remove = mock.Mock() mock_sshclient.side_effect = [mock.Mock(), ssh_to_remove, mock.Mock()] sshpool = utils.SSHPool("127.0.0.1", 22, 10, "test", password="test", min_size=3, max_size=3) self.assertIn(ssh_to_remove, list(sshpool.free_items)) sshpool.remove(ssh_to_remove) self.assertNotIn(ssh_to_remove, list(sshpool.free_items)) @mock.patch('six.moves.builtins.open') @mock.patch('paramiko.SSHClient') @mock.patch('os.path.isfile', return_value=True) def test_sshpool_remove_object_not_in_pool(self, mock_isfile, mock_sshclient, mock_open): # create an SSH Client that is not a part of sshpool. ssh_to_remove = mock.Mock() mock_sshclient.side_effect = [mock.Mock(), mock.Mock()] sshpool = utils.SSHPool("127.0.0.1", 22, 10, "test", password="test", min_size=2, max_size=2) listBefore = list(sshpool.free_items) self.assertNotIn(ssh_to_remove, listBefore) sshpool.remove(ssh_to_remove) self.assertEqual(listBefore, list(sshpool.free_items)) @ddt.ddt class CidrToNetmaskTestCase(test.TestCase): """Unit test for cidr to netmask.""" @ddt.data( ('10.0.0.0/0', '0.0.0.0'), ('10.0.0.0/24', '255.255.255.0'), ('10.0.0.0/5', '248.0.0.0'), ('10.0.0.0/32', '255.255.255.255'), ('10.0.0.1', '255.255.255.255'), ) @ddt.unpack def test_cidr_to_netmask(self, cidr, expected_netmask): result = utils.cidr_to_netmask(cidr) self.assertEqual(expected_netmask, result) @ddt.data( '10.0.0.0/33', '', '10.0.0.555/33' ) def test_cidr_to_netmask_invalid(self, cidr): self.assertRaises(exception.InvalidInput, utils.cidr_to_netmask, cidr) @ddt.ddt class CidrToPrefixLenTestCase(test.TestCase): """Unit test for cidr to prefix length.""" @ddt.data( ('10.0.0.0/0', 0), ('10.0.0.0/24', 24), ('10.0.0.1', 32), ('fdf8:f53b:82e1::1/0', 0), ('fdf8:f53b:82e1::1/64', 64), ('fdf8:f53b:82e1::1', 128), ) @ddt.unpack def test_cidr_to_prefixlen(self, cidr, expected_prefixlen): result = utils.cidr_to_prefixlen(cidr) self.assertEqual(expected_prefixlen, result) @ddt.data( '10.0.0.0/33', '', '10.0.0.555/33', 'fdf8:f53b:82e1::1/129', 'fdf8:f53b:82e1::fffff' ) def test_cidr_to_prefixlen_invalid(self, cidr): self.assertRaises(exception.InvalidInput, utils.cidr_to_prefixlen, cidr) @ddt.ddt class ParseBoolValueTestCase(test.TestCase): @ddt.data( ('t', True), ('on', True), ('1', True), ('false', False), ('n', False), ('no', False), ('0', False),) @ddt.unpack def test_bool_with_valid_string(self, string, value): fake_dict = {'fake_key': string} result = utils.get_bool_from_api_params('fake_key', fake_dict) self.assertEqual(value, result) @ddt.data('None', 'invalid', 'falses') def test_bool_with_invalid_string(self, string): fake_dict = {'fake_key': string} self.assertRaises(exc.HTTPBadRequest, utils.get_bool_from_api_params, 'fake_key', fake_dict) @ddt.data('undefined', None) def test_bool_with_key_not_found_raise_error(self, def_val): fake_dict = {'fake_key1': 'value1'} self.assertRaises(exc.HTTPBadRequest, utils.get_bool_from_api_params, 
'fake_key2', fake_dict, def_val) @ddt.data((False, False, False), (True, True, False), ('true', True, False), ('false', False, False), ('undefined', 'undefined', False), (False, False, True), ('true', True, True)) @ddt.unpack def test_bool_with_key_not_found(self, def_val, expected, strict): fake_dict = {'fake_key1': 'value1'} invalid_default = utils.get_bool_from_api_params('fake_key2', fake_dict, def_val, strict) self.assertEqual(expected, invalid_default) @ddt.ddt class IsValidIPVersion(test.TestCase): """Test suite for function 'is_valid_ip_address'.""" @ddt.data('0.0.0.0', '255.255.255.255', '192.168.0.1') def test_valid_v4(self, addr): for vers in (4, '4'): self.assertTrue(utils.is_valid_ip_address(addr, vers)) @ddt.data( '2001:cdba:0000:0000:0000:0000:3257:9652', '2001:cdba:0:0:0:0:3257:9652', '2001:cdba::3257:9652') def test_valid_v6(self, addr): for vers in (6, '6'): self.assertTrue(utils.is_valid_ip_address(addr, vers)) @ddt.data( {'addr': '1.1.1.1', 'vers': 3}, {'addr': '1.1.1.1', 'vers': 5}, {'addr': '1.1.1.1', 'vers': 7}, {'addr': '2001:cdba::3257:9652', 'vers': '3'}, {'addr': '2001:cdba::3257:9652', 'vers': '5'}, {'addr': '2001:cdba::3257:9652', 'vers': '7'}) @ddt.unpack def test_provided_invalid_version(self, addr, vers): self.assertRaises( exception.ManilaException, utils.is_valid_ip_address, addr, vers) def test_provided_none_version(self): self.assertRaises(TypeError, utils.is_valid_ip_address, '', None) @ddt.data(None, 'fake', '1.1.1.1') def test_provided_invalid_v6_address(self, addr): for vers in (6, '6'): self.assertFalse(utils.is_valid_ip_address(addr, vers)) @ddt.data(None, 'fake', '255.255.255.256', '2001:cdba::3257:9652', '') def test_provided_invalid_v4_address(self, addr): for vers in (4, '4'): self.assertFalse(utils.is_valid_ip_address(addr, vers)) class Comparable(utils.ComparableMixin): def __init__(self, value): self.value = value def _cmpkey(self): return self.value class TestComparableMixin(test.TestCase): def setUp(self): super(TestComparableMixin, self).setUp() self.one = Comparable(1) self.two = Comparable(2) def test_lt(self): self.assertTrue(self.one < self.two) self.assertFalse(self.two < self.one) self.assertFalse(self.one < self.one) def test_le(self): self.assertTrue(self.one <= self.two) self.assertFalse(self.two <= self.one) self.assertTrue(self.one <= self.one) def test_eq(self): self.assertFalse(self.one == self.two) self.assertFalse(self.two == self.one) self.assertTrue(self.one == self.one) def test_ge(self): self.assertFalse(self.one >= self.two) self.assertTrue(self.two >= self.one) self.assertTrue(self.one >= self.one) def test_gt(self): self.assertFalse(self.one > self.two) self.assertTrue(self.two > self.one) self.assertFalse(self.one > self.one) def test_ne(self): self.assertTrue(self.one != self.two) self.assertTrue(self.two != self.one) self.assertFalse(self.one != self.one) def test_compare(self): self.assertEqual(NotImplemented, self.one._compare(1, self.one._cmpkey)) class TestRetryDecorator(test.TestCase): def test_no_retry_required(self): self.counter = 0 with mock.patch.object(time, 'sleep') as mock_sleep: @utils.retry(exception.ManilaException, interval=2, retries=3, backoff_rate=2) def succeeds(): self.counter += 1 return 'success' ret = succeeds() self.assertFalse(mock_sleep.called) self.assertEqual('success', ret) self.assertEqual(1, self.counter) def test_no_retry_required_random(self): self.counter = 0 with mock.patch.object(time, 'sleep') as mock_sleep: @utils.retry(exception.ManilaException, interval=2, retries=3, 
backoff_rate=2, wait_random=True) def succeeds(): self.counter += 1 return 'success' ret = succeeds() self.assertFalse(mock_sleep.called) self.assertEqual('success', ret) self.assertEqual(1, self.counter) def test_retries_once_random(self): self.counter = 0 interval = 2 backoff_rate = 2 retries = 3 with mock.patch.object(time, 'sleep') as mock_sleep: @utils.retry(exception.ManilaException, interval, retries, backoff_rate, wait_random=True) def fails_once(): self.counter += 1 if self.counter < 2: raise exception.ManilaException(message='fake') else: return 'success' ret = fails_once() self.assertEqual('success', ret) self.assertEqual(2, self.counter) self.assertEqual(1, mock_sleep.call_count) self.assertTrue(mock_sleep.called) def test_retries_once(self): self.counter = 0 interval = 2 backoff_rate = 2 retries = 3 with mock.patch.object(time, 'sleep') as mock_sleep: @utils.retry(exception.ManilaException, interval, retries, backoff_rate) def fails_once(): self.counter += 1 if self.counter < 2: raise exception.ManilaException(data='fake') else: return 'success' ret = fails_once() self.assertEqual('success', ret) self.assertEqual(2, self.counter) self.assertEqual(1, mock_sleep.call_count) mock_sleep.assert_called_with(interval * backoff_rate) def test_limit_is_reached(self): self.counter = 0 retries = 3 interval = 2 backoff_rate = 4 with mock.patch.object(time, 'sleep') as mock_sleep: @utils.retry(exception.ManilaException, interval, retries, backoff_rate) def always_fails(): self.counter += 1 raise exception.ManilaException(data='fake') self.assertRaises(exception.ManilaException, always_fails) self.assertEqual(retries, self.counter) expected_sleep_arg = [] for i in range(retries): if i > 0: interval *= backoff_rate expected_sleep_arg.append(float(interval)) mock_sleep.assert_has_calls(map(mock.call, expected_sleep_arg)) def test_wrong_exception_no_retry(self): with mock.patch.object(time, 'sleep') as mock_sleep: @utils.retry(exception.ManilaException) def raise_unexpected_error(): raise ValueError("value error") self.assertRaises(ValueError, raise_unexpected_error) self.assertFalse(mock_sleep.called) def test_wrong_retries_num(self): self.assertRaises(ValueError, utils.retry, exception.ManilaException, retries=-1) def test_max_backoff_sleep(self): self.counter = 0 with mock.patch.object(time, 'sleep') as mock_sleep: @utils.retry(exception.ManilaException, retries=0, backoff_rate=2, backoff_sleep_max=4) def fails_then_passes(): self.counter += 1 if self.counter < 5: raise exception.ManilaException(data='fake') else: return 'success' self.assertEqual('success', fails_then_passes()) mock_sleep.assert_has_calls(map(mock.call, [2, 4, 4, 4])) @ddt.ddt class RequireDriverInitializedTestCase(test.TestCase): @ddt.data(True, False) def test_require_driver_initialized(self, initialized): class FakeDriver(object): @property def initialized(self): return initialized class FakeException(Exception): pass class FakeManager(object): driver = FakeDriver() @utils.require_driver_initialized def call_me(self): raise FakeException( "Should be raised only if manager.driver.initialized " "('%s') is equal to 'True'." 
% initialized) if initialized: expected_exception = FakeException else: expected_exception = exception.DriverNotInitialized self.assertRaises(expected_exception, FakeManager().call_me) @ddt.ddt class ShareMigrationHelperTestCase(test.TestCase): """Tests DataMigrationHelper.""" def setUp(self): super(ShareMigrationHelperTestCase, self).setUp() self.context = context.get_admin_context() def test_wait_for_access_update(self): sid = 1 fake_share_instances = [ { 'id': sid, 'access_rules_status': constants.SHARE_INSTANCE_RULES_SYNCING, }, { 'id': sid, 'access_rules_status': constants.STATUS_ACTIVE, }, ] self.mock_object(time, 'sleep') self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=fake_share_instances)) utils.wait_for_access_update(self.context, db, fake_share_instances[0], 1) db.share_instance_get.assert_has_calls( [mock.call(mock.ANY, sid), mock.call(mock.ANY, sid)] ) time.sleep.assert_called_once_with(1.414) @ddt.data( ( { 'id': '1', 'access_rules_status': constants.SHARE_INSTANCE_RULES_ERROR, }, exception.ShareMigrationFailed ), ( { 'id': '1', 'access_rules_status': constants.SHARE_INSTANCE_RULES_SYNCING, }, exception.ShareMigrationFailed ), ) @ddt.unpack def test_wait_for_access_update_invalid(self, fake_instance, expected_exc): self.mock_object(time, 'sleep') self.mock_object(db, 'share_instance_get', mock.Mock(return_value=fake_instance)) now = time.time() timeout = now + 100 self.mock_object(time, 'time', mock.Mock(side_effect=[now, timeout])) self.assertRaises(expected_exc, utils.wait_for_access_update, self.context, db, fake_instance, 1) @ddt.ddt class ConvertStrTestCase(test.TestCase): def test_convert_str_str_input(self): self.mock_object(utils.encodeutils, 'safe_encode') input_value = six.text_type("string_input") output_value = utils.convert_str(input_value) if six.PY2: utils.encodeutils.safe_encode.assert_called_once_with(input_value) self.assertEqual( utils.encodeutils.safe_encode.return_value, output_value) else: self.assertEqual(0, utils.encodeutils.safe_encode.call_count) self.assertEqual(input_value, output_value) def test_convert_str_bytes_input(self): self.mock_object(utils.encodeutils, 'safe_encode') if six.PY2: input_value = six.binary_type("binary_input") else: input_value = six.binary_type("binary_input", "utf-8") output_value = utils.convert_str(input_value) if six.PY2: utils.encodeutils.safe_encode.assert_called_once_with(input_value) self.assertEqual( utils.encodeutils.safe_encode.return_value, output_value) else: self.assertEqual(0, utils.encodeutils.safe_encode.call_count) self.assertIsInstance(output_value, six.string_types) self.assertEqual(six.text_type("binary_input"), output_value) @ddt.ddt class TestDisableNotifications(test.TestCase): def test_do_nothing_getter(self): """Test any attribute will always return the same instance (self).""" donothing = utils.DoNothing() self.assertIs(donothing, donothing.anyname) def test_do_nothing_caller(self): """Test calling the object will always return the same instance.""" donothing = utils.DoNothing() self.assertIs(donothing, donothing()) def test_do_nothing_json_serializable(self): """Test calling the object will always return the same instance.""" donothing = utils.DoNothing() self.assertEqual('""', json.dumps(donothing)) @utils.if_notifications_enabled def _decorated_method(self): return mock.sentinel.success def test_if_notification_enabled_when_enabled(self): """Test method is called when notifications are enabled.""" result = self._decorated_method() self.assertEqual(mock.sentinel.success, result) 
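# An empty or all-'noop' notification driver list means notifications are disabled, so the
# decorator is expected to skip the method body and return the DO_NOTHING sentinel instead.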
@ddt.data([], ['noop'], ['noop', 'noop']) def test_if_notification_enabled_when_disabled(self, driver): """Test method is not called when notifications are disabled.""" self.override_config('driver', driver, group='oslo_messaging_notifications') result = self._decorated_method() self.assertEqual(utils.DO_NOTHING, result) @ddt.ddt class TestAllTenantsValueCase(test.TestCase): @ddt.data(None, '', '1', 'true', 'True') def test_is_all_tenants_true(self, value): search_opts = {'all_tenants': value} self.assertTrue(utils.is_all_tenants(search_opts)) self.assertIn('all_tenants', search_opts) @ddt.data('0', 'false', 'False') def test_is_all_tenants_false(self, value): search_opts = {'all_tenants': value} self.assertFalse(utils.is_all_tenants(search_opts)) self.assertIn('all_tenants', search_opts) def test_is_all_tenants_missing(self): self.assertFalse(utils.is_all_tenants({})) def test_is_all_tenants_invalid(self): search_opts = {'all_tenants': 'wonk'} self.assertRaises(exception.InvalidInput, utils.is_all_tenants, search_opts) manila-10.0.0/manila/tests/fake_client_exception_class.py0000664000175000017500000000140613656750227023555 0ustar zuulzuul00000000000000# Copyright 2016 SAP SE # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. class Unauthorized(Exception): status_code = 401 message = "Unauthorized: bad credentials." def __init__(self, message=None): pass manila-10.0.0/manila/tests/fake_volume.py0000664000175000017500000000360313656750227020344 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
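# Minimal stand-ins for a volume, a volume snapshot, and a volume API whose methods are all
# no-ops, for unit tests that need a volume layer without talking to a real backend.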
from oslo_config import cfg CONF = cfg.CONF class FakeVolume(object): def __init__(self, **kwargs): self.id = kwargs.pop('id', 'fake_vol_id') self.status = kwargs.pop('status', 'available') self.device = kwargs.pop('device', '') for key, value in kwargs.items(): setattr(self, key, value) def __getitem__(self, attr): return getattr(self, attr) class FakeVolumeSnapshot(object): def __init__(self, **kwargs): self.id = kwargs.pop('id', 'fake_volsnap_id') self.status = kwargs.pop('status', 'available') for key, value in kwargs.items(): setattr(self, key, value) def __getitem__(self, attr): return getattr(self, attr) class API(object): """Fake Volume API.""" def get(self, *args, **kwargs): pass def create_snapshot_force(self, *args, **kwargs): pass def get_snapshot(self, *args, **kwargs): pass def delete_snapshot(self, *args, **kwargs): pass def create(self, *args, **kwargs): pass def extend(self, *args, **kwargs): pass def get_all(self, search_opts): pass def delete(self, volume_id): pass def get_all_snapshots(self, search_opts): pass manila-10.0.0/manila/tests/fake_utils.py0000664000175000017500000000724413656750227020202 0ustar zuulzuul00000000000000# Copyright (c) 2011 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """This modules stubs out functions in manila.utils.""" import re from unittest import mock from eventlet import greenthread from oslo_log import log import six from manila import exception from manila import utils LOG = log.getLogger(__name__) _fake_execute_repliers = [] _fake_execute_log = [] def fake_execute_get_log(): return _fake_execute_log def fake_execute_clear_log(): global _fake_execute_log _fake_execute_log = [] def fake_execute_set_repliers(repliers): """Allows the client to configure replies to commands.""" global _fake_execute_repliers _fake_execute_repliers = repliers def fake_execute_default_reply_handler(*ignore_args, **ignore_kwargs): """A reply handler for commands that haven't been added to the reply list. Returns empty strings for stdout and stderr. """ return '', '' def fake_execute(*cmd_parts, **kwargs): """This function stubs out execute. It optionally executes a preconfigued function to return expected data. 
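Repliers registered via fake_execute_set_repliers() are (regex, handler) pairs; the first
pattern that matches the command string wins, and the handler may be either a literal stdout
string or a callable. Commands that match no pattern fall back to empty stdout and stderr.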
""" global _fake_execute_repliers process_input = kwargs.get('process_input', None) check_exit_code = kwargs.get('check_exit_code', 0) delay_on_retry = kwargs.get('delay_on_retry', True) attempts = kwargs.get('attempts', 1) run_as_root = kwargs.get('run_as_root', False) cmd_str = ' '.join(str(part) for part in cmd_parts) LOG.debug("Faking execution of cmd (subprocess): %s", cmd_str) _fake_execute_log.append(cmd_str) reply_handler = fake_execute_default_reply_handler for fake_replier in _fake_execute_repliers: if re.match(fake_replier[0], cmd_str): reply_handler = fake_replier[1] LOG.debug('Faked command matched %s', fake_replier[0]) break if isinstance(reply_handler, six.string_types): # If the reply handler is a string, return it as stdout reply = reply_handler, '' else: try: # Alternative is a function, so call it reply = reply_handler(cmd_parts, process_input=process_input, delay_on_retry=delay_on_retry, attempts=attempts, run_as_root=run_as_root, check_exit_code=check_exit_code) except exception.ProcessExecutionError as e: LOG.debug('Faked command raised an exception %s', e) raise stdout = reply[0] stderr = reply[1] LOG.debug("Reply to faked command is stdout='%(stdout)s' " "stderr='%(stderr)s'.", {"stdout": stdout, "stderr": stderr}) # Replicate the sleep call in the real function greenthread.sleep(0) return reply def stub_out_utils_execute(testcase): fake_execute_set_repliers([]) fake_execute_clear_log() testcase.mock_object(utils, 'execute', fake_execute) def get_fake_lock_context(): context_manager_mock = mock.Mock() setattr(context_manager_mock, '__enter__', mock.Mock()) setattr(context_manager_mock, '__exit__', mock.Mock()) return context_manager_mock manila-10.0.0/manila/tests/test_rpc.py0000664000175000017500000000272613656750227017677 0ustar zuulzuul00000000000000# Copyright 2017 Red Hat, Inc. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from manila import rpc from manila import test @ddt.ddt class RPCTestCase(test.TestCase): @ddt.data([], ['noop'], ['noop', 'noop']) @mock.patch('oslo_messaging.JsonPayloadSerializer', wraps=True) def test_init_no_notifications(self, driver, serializer_mock): self.override_config('driver', driver, group='oslo_messaging_notifications') rpc.init(test.CONF) self.assertEqual(rpc.utils.DO_NOTHING, rpc.NOTIFIER) serializer_mock.assert_not_called() @mock.patch.object(rpc, 'messaging') def test_init_notifications(self, messaging_mock): rpc.init(test.CONF) self.assertTrue(messaging_mock.JsonPayloadSerializer.called) self.assertTrue(messaging_mock.Notifier.called) self.assertEqual(rpc.NOTIFIER, messaging_mock.Notifier.return_value) manila-10.0.0/manila/tests/db_utils.py0000664000175000017500000002200013656750227017644 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from manila.common import constants from manila import context from manila import db from manila.message import message_levels def _create_db_row(method, default_values, custom_values): override_defaults = custom_values.pop('override_defaults', None) if override_defaults: default_values = custom_values else: default_values.update(copy.deepcopy(custom_values)) return method(context.get_admin_context(), default_values) def create_share_group(**kwargs): """Create a share group object.""" share_group = { 'share_network_id': None, 'share_server_id': None, 'user_id': 'fake', 'project_id': 'fake', 'status': constants.STATUS_CREATING, 'host': 'fake_host' } return _create_db_row(db.share_group_create, share_group, kwargs) def create_share_group_snapshot(share_group_id, **kwargs): """Create a share group snapshot object.""" snapshot = { 'share_group_id': share_group_id, 'user_id': 'fake', 'project_id': 'fake', 'status': constants.STATUS_CREATING, } return _create_db_row(db.share_group_snapshot_create, snapshot, kwargs) def create_share_group_snapshot_member(share_group_snapshot_id, **kwargs): """Create a share group snapshot member object.""" member = { 'share_proto': "NFS", 'size': 0, 'share_instance_id': None, 'user_id': 'fake', 'project_id': 'fake', 'status': 'creating', 'share_group_snapshot_id': share_group_snapshot_id, } return _create_db_row( db.share_group_snapshot_member_create, member, kwargs) def create_share_access(**kwargs): share_access = { 'id': 'fake_id', 'access_type': 'ip', 'access_to': 'fake_ip_address' } return _create_db_row(db.share_access_create, share_access, kwargs) def create_share(**kwargs): """Create a share object.""" share = { 'share_proto': "NFS", 'size': 0, 'snapshot_id': None, 'share_network_id': None, 'share_server_id': None, 'user_id': 'fake', 'project_id': 'fake', 'metadata': {'fake_key': 'fake_value'}, 'availability_zone': 'fake_availability_zone', 'status': constants.STATUS_CREATING, 'host': 'fake_host' } return _create_db_row(db.share_create, share, kwargs) def create_share_without_instance(**kwargs): share = { 'share_proto': "NFS", 'size': 0, 'snapshot_id': None, 'share_network_id': None, 'share_server_id': None, 'user_id': 'fake', 'project_id': 'fake', 'metadata': {}, 'availability_zone': 'fake_availability_zone', 'status': constants.STATUS_CREATING, 'host': 'fake_host' } share.update(copy.deepcopy(kwargs)) return db.share_create(context.get_admin_context(), share, False) def create_share_instance(**kwargs): """Create a share instance object.""" return db.share_instance_create(context.get_admin_context(), kwargs.pop('share_id'), kwargs) def create_share_replica(**kwargs): """Create a share replica object.""" if 'share_id' not in kwargs: share = create_share() kwargs['share_id'] = share['id'] return db.share_instance_create(context.get_admin_context(), kwargs.pop('share_id'), kwargs) def create_snapshot(**kwargs): """Create a snapshot object.""" with_share = kwargs.pop('with_share', False) share = None if with_share: share = create_share(status=constants.STATUS_AVAILABLE, size=kwargs.get('size', 0)) snapshot = { 'share_proto': "NFS", 'size': 0, 
'share_id': share['id'] if with_share else None, 'user_id': 'fake', 'project_id': 'fake', 'status': 'creating', 'provider_location': 'fake', } snapshot.update(kwargs) return db.share_snapshot_create(context.get_admin_context(), snapshot) def create_snapshot_instance(snapshot_id, **kwargs): """Create a share snapshot instance object.""" snapshot_instance = { 'provider_location': 'fake_provider_location', 'progress': '0%', 'status': constants.STATUS_CREATING, } snapshot_instance.update(kwargs) return db.share_snapshot_instance_create( context.get_admin_context(), snapshot_id, snapshot_instance) def create_snapshot_instance_export_locations(snapshot_id, **kwargs): """Create a snapshot instance export location object.""" export_location = { 'share_snapshot_instance_id': snapshot_id, } export_location.update(kwargs) return db.share_snapshot_instance_export_location_create( context.get_admin_context(), export_location) def create_access(**kwargs): """Create an access rule object.""" state = kwargs.pop('state', constants.ACCESS_STATE_QUEUED_TO_APPLY) access = { 'access_type': 'fake_type', 'access_to': 'fake_IP', 'share_id': kwargs.pop('share_id', None) or create_share()['id'], } access.update(kwargs) share_access_rule = _create_db_row(db.share_access_create, access, kwargs) for mapping in share_access_rule.instance_mappings: db.share_instance_access_update( context.get_admin_context(), share_access_rule['id'], mapping.share_instance_id, {'state': state}) return share_access_rule def create_snapshot_access(**kwargs): """Create a snapshot access rule object.""" access = { 'access_type': 'fake_type', 'access_to': 'fake_IP', 'share_snapshot_id': None, } return _create_db_row(db.share_snapshot_access_create, access, kwargs) def create_share_server(**kwargs): """Create a share server object.""" backend_details = kwargs.pop('backend_details', {}) srv = { 'host': 'host1', 'share_network_subnet_id': 'fake_srv_id', 'status': constants.STATUS_ACTIVE } share_srv = _create_db_row(db.share_server_create, srv, kwargs) if backend_details: db.share_server_backend_details_set( context.get_admin_context(), share_srv['id'], backend_details) return db.share_server_get(context.get_admin_context(), share_srv['id']) def create_share_type(**kwargs): """Create a share type object""" share_type = { 'name': 'fake_type', 'is_public': True, } return _create_db_row(db.share_type_create, share_type, kwargs) def create_share_group_type(**kwargs): """Create a share group type object""" share_group_type = { 'name': 'fake_group_type', 'is_public': True, } return _create_db_row(db.share_group_type_create, share_group_type, kwargs) def create_share_network(**kwargs): """Create a share network object.""" net = { 'user_id': 'fake', 'project_id': 'fake', 'status': 'new', 'name': 'whatever', 'description': 'fake description', } return _create_db_row(db.share_network_create, net, kwargs) def create_share_network_subnet(**kwargs): """Create a share network subnet object.""" subnet = { 'id': 'fake_sns_id', 'neutron_net_id': 'fake-neutron-net', 'neutron_subnet_id': 'fake-neutron-subnet', 'network_type': 'vlan', 'segmentation_id': 1000, 'cidr': '10.0.0.0/24', 'ip_version': 4, 'availability_zone_id': 'fake_zone_id', 'share_network_id': 'fake_sn_id', 'gateway': None, 'mtu': None } return _create_db_row(db.share_network_subnet_create, subnet, kwargs) def create_security_service(**kwargs): share_network_id = kwargs.pop('share_network_id', None) service = { 'type': "FAKE", 'project_id': 'fake-project-id', } service_ref = 
_create_db_row(db.security_service_create, service, kwargs) if share_network_id: db.share_network_add_security_service(context.get_admin_context(), share_network_id, service_ref['id']) return service_ref def create_message(**kwargs): message_dict = { 'action': 'fake_Action', 'project_id': 'fake-project-id', 'message_level': message_levels.ERROR, } return _create_db_row(db.message_create, message_dict, kwargs) manila-10.0.0/manila/tests/test_context.py0000664000175000017500000000472413656750227020577 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila import context from manila import test class ContextTestCase(test.TestCase): def test_request_context_elevated(self): user_context = context.RequestContext( 'fake_user', 'fake_project', is_admin=False) self.assertFalse(user_context.is_admin) self.assertEqual([], user_context.roles) admin_context = user_context.elevated() self.assertFalse(user_context.is_admin) self.assertTrue(admin_context.is_admin) self.assertNotIn('admin', user_context.roles) self.assertIn('admin', admin_context.roles) def test_request_context_sets_is_admin(self): ctxt = context.RequestContext('111', '222', roles=['admin', 'weasel']) self.assertTrue(ctxt.is_admin) def test_request_context_sets_is_admin_upcase(self): ctxt = context.RequestContext('111', '222', roles=['Admin', 'weasel']) self.assertTrue(ctxt.is_admin) def test_request_context_read_deleted(self): ctxt = context.RequestContext('111', '222', read_deleted='yes') self.assertEqual('yes', ctxt.read_deleted) ctxt.read_deleted = 'no' self.assertEqual('no', ctxt.read_deleted) def test_request_context_read_deleted_invalid(self): self.assertRaises(ValueError, context.RequestContext, '111', '222', read_deleted=True) ctxt = context.RequestContext('111', '222') self.assertRaises(ValueError, setattr, ctxt, 'read_deleted', True) manila-10.0.0/manila/tests/var/0000775000175000017500000000000013656750362016263 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/var/privatekey.key0000664000175000017500000000625313656750227021166 0ustar zuulzuul00000000000000-----BEGIN RSA PRIVATE KEY----- MIIJKAIBAAKCAgEA16VJEDeqbmr6PoM96NSuJK1XT5dZuzYzSQ8g//mR9BBjXBDe 4moNajxOybI6hjzWbECtXTKF20s/jkovarzXiZwXH8FMeakwLcMgG/QMRpMLjGny FPpVm7HJaPnTxrI2tNcsG10wmWxd9oqp6TjGIX8VlHaEGIgZIccYVvXjDyi0vypD /P28flWmtlyYgHm6pHfZ65LAAAXhnPZpWn2ARJogoT3SRD8PtXjwOEFavWj3qQ7K gCrRjfCS6ZqAtwXcUEE228C90PH01yuLQjVGlZOAGw8vzHBaustKHEKATyY4oTmN +Zlhvzi7XCPfcjzqVhp6bP+Whv+uAwydg+uxZ2o+oCh1fuk1xTvCmcZZ8bYLYmQy QWZJ3kwbfQK0jr/pejQbLpkc9IhCeKOB9Utk0jJ6awL1+1pxrXOl4vYF2oWHAxxH pcMGM6gIkwb+ocUqeDGdnTV2viszorQu2W1dqrINGrtMI3xP6EkNzb7L1K/Jzpn7 rSU7x0QMGwtb+Bv7bgLDuztMNtLtgd7vqRtOpufq5xKqfqwfYZrpEWE34BBUUbFS L6RZf3MLz1ykXF9N1CDMfpS6/Rbfnqe2KKAYWN8GNpMAsQ+JUWDZm8LAiFcsGbeN H/+GnffE5Ln0fTYbH8nMRnqm65kzBZWfE05Zj/NoqIXpCgjr6MhLkyFi9vsCAwEA AQKCAgAA96baQcWr9SLmQOR4NOwLEhQAMWefpWCZhU3amB4FgEVR1mmJjnw868RW t0v36jH0Dl44us9K6o2Ab+jCi9JTtbWM2Osk6JNkwSlVtsSPVH2KxbbmTTExH50N 
sYE3tPj12rlB7isXpRrOzlRwzWZmJBHOtrFlAsdKFYCQc03vdXlKGkBv1BuSXYP/ 8W5ltSYXMspxehkOZvhaIejbFREMPbzDvGlDER1a7Q320qQ7kUr7ISvbY1XJUzj1 f1HwgEA6w/AhED5Jv6wfgvx+8Yo9hYnflTPbsO1XRS4x7kJxGHTMlFuEsSF1ICYH Bcos0wUiGcBO2N6uAFuhe98BBn+nOwAPZYWwGkmVuK2psm2mXAHx94GT/XqgK/1r VWGSoOV7Fhjauc2Nv8/vJU18DXT3OY5hc4iXVeEBkuZwRb/NVUtnFoHxVO/Mp5Fh /W5KZaLWVrLghzvSQ/KUIM0k4lfKDZpY9ZpOdNgWDyZY8tNrXumUZZimzWdXZ9vR dBssmd8qEKs1AHGFnMDt56IjLGou6j0qnWsLdR1e/WEFsYzGXLVHCv6vXRNkbjqh WFw5nA+2Dw1YAsy+YkTfgx2pOe+exM/wxsVPa7tG9oZ374dywUi1k6VoHw5dkmJw 1hbXqSLZtx2N51G+SpGmNAV4vLUF0y3dy2wnrzFkFT4uxh1w8QKCAQEA+h6LwHTK hgcJx6CQQ6zYRqXo4wdvMooY1FcqJOq7LvJUA2CX5OOLs8qN1TyFrOCuAUTurOrM ABlQ0FpsIaP8TOGz72dHe2eLB+dD6Bqjn10sEFMn54zWd/w9ympQrO9jb5X3ViTh sCcdYyXVS9Hz8nzbbIF+DaKlxF2Hh71uRDxXpMPxRcGbOIuKZXUj6RkTIulzqT6o uawlegWxch05QSgzq/1ASxtjTzo4iuDCAii3N45xqxnB+fV9NXEt4R2oOGquBRPJ LxKcOnaQKBD0YNX4muTq+zPlv/kOb8/ys2WGWDUrNkpyJXqhTve4KONjqM7+iL/U 4WdJuiCjonzk/QKCAQEA3Lc+kNq35FNLxMcnCVcUgkmiCWZ4dyGZZPdqjOPww1+n bbudGPzY1nxOvE60dZM4or/tm6qlXYfb2UU3+OOJrK9s297EQybZ8DTZu2GHyitc NSFV3Gl4cgvKdbieGKkk9X2dV9xSNesNvX9lJEnQxuwHDTeo8ubLHtV88Ml1xokn 7W+IFiyEuUIL4e5/fadbrI3EwMrbCF4+9VcfABx4PTNMzdc8LsncCMXE+jFX8AWp TsT2JezTe5o2WpvBoKMAYhJQNQiaWATn00pDVY/70H1vK3ljomAa1IUdOr/AhAF7 3jL0MYMgXSHzXZOKAtc7yf+QfFWF1Ls8+sen1clJVwKCAQEAp59rB0r+Iz56RmgL 5t7ifs5XujbURemY5E2aN+18DuVmenD0uvfoO1DnJt4NtCNLWhxpXEdq+jH9H/VJ fG4a+ydT4IC1vjVRTrWlo9qeh4H4suQX3S1c2kKY4pvHf25blH/Lp9bFzbkZD8Ze IRcOxxb4MsrBwL+dGnGYD9dbG63ZCtoqSxaKQSX7VS1hKKmeUopj8ivFBdIht5oz JogBQ/J+Vqg9u1gagRFCrYgdXTcOOtRix0lW336vL+6u0ax/fXe5MjvlW3+8Zc3p pIBgVrlvh9ccx8crFTIDg9m4DJRgqaLQV+0ifI2np3WK3RQvSQWYPetZ7sm69ltD bvUGvQKCAQAz5CEhjUqOs8asjOXwnDiGKSmfbCgGWi/mPQUf+rcwN9z1P5a/uTKB utgIDbj/q401Nkp2vrgCNV7KxitSqKxFnTjKuKUL5KZ4gvRtyZBTR751/1BgcauP pJYE91K0GZBG5zGG5pWtd4XTd5Af5/rdycAeq2ddNEWtCiRFuBeohbaNbBtimzTZ GV4R0DDJKf+zoeEQMqEsZnwG0mTHceoS+WylOGU92teQeG7HI7K5C5uymTwFzpgq ByegRd5QFgKRDB0vWsZuyzh1xI/wHdnmOpdYcUGre0zTijhFB7ALWQ32P6SJv3ps av78kSNxZ4j3BM7DbJf6W8sKasZazOghAoIBAHekpBcLq9gRv2+NfLYxWN2sTZVB 1ldwioG7rWvk5YQR2akukecI3NRjtC5gG2vverawG852Y4+oLfgRMHxgp0qNStwX juTykzPkCwZn8AyR+avC3mkrtJyM3IigcYOu4/UoaRDFa0xvCC1EfumpnKXIpHag miSQZf2sVbgqb3/LWvHIg/ceOP9oGJve87/HVfQtBoLaIe5RXCWkqB7mcI/exvTS 8ShaW6v2Fe5Bzdvawj7sbsVYRWe93Aq2tmIgSX320D2RVepb6mjD4nr0IUaM3Yed TFT7e2ikWXyDLLgVkDTU4Qe8fr3ZKGfanCIDzvgNw6H1gRi+2WQgOmjilMQ= -----END RSA PRIVATE KEY----- manila-10.0.0/manila/tests/var/certificate.crt0000664000175000017500000000350213656750227021257 0ustar zuulzuul00000000000000-----BEGIN CERTIFICATE----- MIIFLjCCAxYCAQEwDQYJKoZIhvcNAQEFBQAwYTELMAkGA1UEBhMCQVUxEzARBgNV BAgTClNvbWUtU3RhdGUxFTATBgNVBAoTDE9wZW5zdGFjayBDQTESMBAGA1UECxMJ R2xhbmNlIENBMRIwEAYDVQQDEwlHbGFuY2UgQ0EwHhcNMTIwMjA5MTcxMDUzWhcN MjIwMjA2MTcxMDUzWjBZMQswCQYDVQQGEwJBVTETMBEGA1UECBMKU29tZS1TdGF0 ZTESMBAGA1UEChMJT3BlbnN0YWNrMQ8wDQYDVQQLEwZHbGFuY2UxEDAOBgNVBAMT BzAuMC4wLjAwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDXpUkQN6pu avo+gz3o1K4krVdPl1m7NjNJDyD/+ZH0EGNcEN7iag1qPE7JsjqGPNZsQK1dMoXb Sz+OSi9qvNeJnBcfwUx5qTAtwyAb9AxGkwuMafIU+lWbsclo+dPGsja01ywbXTCZ bF32iqnpOMYhfxWUdoQYiBkhxxhW9eMPKLS/KkP8/bx+Vaa2XJiAebqkd9nrksAA BeGc9mlafYBEmiChPdJEPw+1ePA4QVq9aPepDsqAKtGN8JLpmoC3BdxQQTbbwL3Q 8fTXK4tCNUaVk4AbDy/McFq6y0ocQoBPJjihOY35mWG/OLtcI99yPOpWGnps/5aG /64DDJ2D67Fnaj6gKHV+6TXFO8KZxlnxtgtiZDJBZkneTBt9ArSOv+l6NBsumRz0 iEJ4o4H1S2TSMnprAvX7WnGtc6Xi9gXahYcDHEelwwYzqAiTBv6hxSp4MZ2dNXa+ KzOitC7ZbV2qsg0au0wjfE/oSQ3NvsvUr8nOmfutJTvHRAwbC1v4G/tuAsO7O0w2 0u2B3u+pG06m5+rnEqp+rB9hmukRYTfgEFRRsVIvpFl/cwvPXKRcX03UIMx+lLr9 
Ft+ep7YooBhY3wY2kwCxD4lRYNmbwsCIVywZt40f/4ad98TkufR9NhsfycxGeqbr mTMFlZ8TTlmP82iohekKCOvoyEuTIWL2+wIDAQABMA0GCSqGSIb3DQEBBQUAA4IC AQBMUBgV0R+Qltf4Du7u/8IFmGAoKR/mktB7R1gRRAqsvecUt7kIwBexGdavGg1y 0pU0+lgUZjJ20N1SlPD8gkNHfXE1fL6fmMjWz4dtYJjzRVhpufHPeBW4tl8DgHPN rBGAYQ+drDSXaEjiPQifuzKx8WS+DGA3ki4co5mPjVnVH1xvLIdFsk89z3b3YD1k yCJ/a9K36x6Z/c67JK7s6MWtrdRF9+MVnRKJ2PK4xznd1kBz16V+RA466wBDdARY vFbtkafbEqOb96QTonIZB7+fAldKDPZYnwPqasreLmaGOaM8sxtlPYAJ5bjDONbc AaXG8BMRQyO4FyH237otDKlxPyHOFV66BaffF5S8OlwIMiZoIvq+IcTZOdtDUSW2 KHNLfe5QEDZdKjWCBrfqAfvNuG13m03WqfmcMHl3o/KiPJlx8l9Z4QEzZ9xcyQGL cncgeHM9wJtzi2cD/rTDNFsx/gxvoyutRmno7I3NRbKmpsXF4StZioU3USRspB07 hYXOVnG3pS+PjVby7ThT3gvFHSocguOsxClx1epdUJAmJUbmM7NmOp5WVBVtMtC2 Su4NG/xJciXitKzw+btb7C7RjO6OEqv/1X/oBDzKBWQAwxUC+lqmnM7W6oqWJFEM YfTLnrjs7Hj6ThMGcEnfvc46dWK3dz0RjsQzUxugPuEkLA== -----END CERTIFICATE----- manila-10.0.0/manila/tests/var/ca.crt0000664000175000017500000000415713656750227017367 0ustar zuulzuul00000000000000-----BEGIN CERTIFICATE----- MIIGDDCCA/SgAwIBAgIJAPSvwQYk4qI4MA0GCSqGSIb3DQEBBQUAMGExCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMRUwEwYDVQQKEwxPcGVuc3RhY2sg Q0ExEjAQBgNVBAsTCUdsYW5jZSBDQTESMBAGA1UEAxMJR2xhbmNlIENBMB4XDTEy MDIwOTE3MTAwMloXDTIyMDIwNjE3MTAwMlowYTELMAkGA1UEBhMCQVUxEzARBgNV BAgTClNvbWUtU3RhdGUxFTATBgNVBAoTDE9wZW5zdGFjayBDQTESMBAGA1UECxMJ R2xhbmNlIENBMRIwEAYDVQQDEwlHbGFuY2UgQ0EwggIiMA0GCSqGSIb3DQEBAQUA A4ICDwAwggIKAoICAQDmf+fapWfzy1Uylus0KGalw4X/5xZ+ltPVOr+IdCPbstvi RTC5g+O+TvXeOP32V/cnSY4ho/+f2q730za+ZA/cgWO252rcm3Q7KTJn3PoqzJvX /l3EXe3/TCrbzgZ7lW3QLTCTEE2eEzwYG3wfDTOyoBq+F6ct6ADh+86gmpbIRfYI N+ixB0hVyz9427PTof97fL7qxxkjAayB28OfwHrkEBl7iblNhUC0RoH+/H9r5GEl GnWiebxfNrONEHug6PHgiaGq7/Dj+u9bwr7J3/NoS84I08ajMnhlPZxZ8bS/O8If ceWGZv7clPozyhABT/otDfgVcNH1UdZ4zLlQwc1MuPYN7CwxrElxc8Quf94ttGjb tfGTl4RTXkDofYdG1qBWW962PsGl2tWmbYDXV0q5JhV/IwbrE1X9f+OksJQne1/+ dZDxMhdf2Q1V0P9hZZICu4+YhmTMs5Mc9myKVnzp4NYdX5fXoB/uNYph+G7xG5IK WLSODKhr1wFGTTcuaa8LhOH5UREVenGDJuc6DdgX9a9PzyJGIi2ngQ03TJIkCiU/ 4J/r/vsm81ezDiYZSp2j5JbME+ixW0GBLTUWpOIxUSHgUFwH5f7lQwbXWBOgwXQk BwpZTmdQx09MfalhBtWeu4/6BnOCOj7e/4+4J0eVxXST0AmVyv8YjJ2nz1F9oQID AQABo4HGMIHDMB0GA1UdDgQWBBTk7Krj4bEsTjHXaWEtI2GZ5ACQyTCBkwYDVR0j BIGLMIGIgBTk7Krj4bEsTjHXaWEtI2GZ5ACQyaFlpGMwYTELMAkGA1UEBhMCQVUx EzARBgNVBAgTClNvbWUtU3RhdGUxFTATBgNVBAoTDE9wZW5zdGFjayBDQTESMBAG A1UECxMJR2xhbmNlIENBMRIwEAYDVQQDEwlHbGFuY2UgQ0GCCQD0r8EGJOKiODAM BgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4ICAQA8Zrss/MiwFHGmDlercE0h UvzA54n/EvKP9nP3jHM2qW/VPfKdnFw99nEPFLhb+lN553vdjOpCYFm+sW0Z5Mi4 qsFkk4AmXIIEFOPt6zKxMioLYDQ9Sw/BUv6EZGeANWr/bhmaE+dMcKJt5le/0jJm 2ahsVB9fbFu9jBFeYb7Ba/x2aLkEGMxaDLla+6EQhj148fTnS1wjmX9G2cNzJvj/ +C2EfKJIuDJDqw2oS2FGVpP37FA2Bz2vga0QatNneLkGKCFI3ZTenBznoN+fmurX TL3eJE4IFNrANCcdfMpdyLAtXz4KpjcehqpZMu70er3d30zbi1l0Ajz4dU+WKz/a NQES+vMkT2wqjXHVTjrNwodxw3oLK/EuTgwoxIHJuplx5E5Wrdx9g7Gl1PBIJL8V xiOYS5N7CakyALvdhP7cPubA2+TPAjNInxiAcmhdASS/Vrmpvrkat6XhGn8h9liv ysDOpMQmYQkmgZBpW8yBKK7JABGGsJADJ3E6J5MMWBX2RR4kFoqVGAzdOU3oyaTy I0kz5sfuahaWpdYJVlkO+esc0CRXw8fLDYivabK2tOgUEWeZsZGZ9uK6aV1VxTAY 9Guu3BJ4Rv/KP/hk7mP8rIeCwotV66/2H8nq72ImQhzSVyWcxbFf2rJiFQJ3BFwA WoRMgEwjGJWqzhJZUYpUAQ== -----END CERTIFICATE----- manila-10.0.0/manila/tests/test_exception.py0000664000175000017500000005626713656750227021122 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2014 Mirantis, Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt import six from manila import exception from manila import test class FakeNotifier(object): """Acts like the manila.openstack.common.notifier.api module.""" ERROR = 88 def __init__(self): self.provided_publisher = None self.provided_event = None self.provided_priority = None self.provided_payload = None def notify(self, context, publisher, event, priority, payload): self.provided_publisher = publisher self.provided_event = event self.provided_priority = priority self.provided_payload = payload @ddt.ddt class ManilaExceptionTestCase(test.TestCase): def test_default_error_msg(self): class FakeManilaException(exception.ManilaException): message = "default message" exc = FakeManilaException() self.assertEqual('default message', six.text_type(exc)) def test_error_msg(self): self.assertEqual('test', six.text_type(exception.ManilaException('test'))) def test_default_error_msg_with_kwargs(self): class FakeManilaException(exception.ManilaException): message = "default message: %(code)s" exc = FakeManilaException(code=500) self.assertEqual('default message: 500', six.text_type(exc)) def test_error_msg_exception_with_kwargs(self): # NOTE(dprince): disable format errors for this test self.flags(fatal_exception_format_errors=False) class FakeManilaException(exception.ManilaException): message = "default message: %(misspelled_code)s" exc = FakeManilaException(code=500) self.assertEqual('default message: %(misspelled_code)s', six.text_type(exc)) def test_default_error_code(self): class FakeManilaException(exception.ManilaException): code = 404 exc = FakeManilaException() self.assertEqual(404, exc.kwargs['code']) def test_error_code_from_kwarg(self): class FakeManilaException(exception.ManilaException): code = 500 exc = FakeManilaException(code=404) self.assertEqual(404, exc.kwargs['code']) def test_error_msg_is_exception_to_string(self): msg = 'test message' exc1 = Exception(msg) exc2 = exception.ManilaException(exc1) self.assertEqual(msg, exc2.msg) def test_exception_kwargs_to_string(self): msg = 'test message' exc1 = Exception(msg) exc2 = exception.ManilaException(kwarg1=exc1) self.assertEqual(msg, exc2.kwargs['kwarg1']) def test_exception_multi_kwargs_to_string(self): exc = exception.ManilaException( 'fake_msg', foo=Exception('foo_msg'), bar=Exception('bar_msg')) self.assertEqual('fake_msg', exc.msg) self.assertEqual('foo_msg', exc.kwargs['foo']) self.assertEqual('bar_msg', exc.kwargs['bar']) self.assertNotIn('fake_msg', exc.kwargs) self.assertNotIn('foo_msg', exc.kwargs) self.assertNotIn('bar_msg', exc.kwargs) @ddt.data("test message.", "test message....", ".") def test_exception_not_redundant_period(self, msg): exc1 = Exception(msg) exc2 = exception.ManilaException(exc1) self.assertEqual(msg, exc2.msg) def test_exception_redundant_period(self): msg = "test message.." 
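# Only a doubled trailing period is collapsed to a single one; longer runs of dots
# (exercised by the test above) are left untouched.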
exc1 = Exception(msg) exc2 = exception.ManilaException(exc1) self.assertEqual("test message.", exc2.msg) def test_replication_exception(self): # Verify response code for exception.ReplicationException reason = "Something bad happened." e = exception.ReplicationException(reason=reason) self.assertEqual(500, e.code) self.assertIn(reason, e.msg) def test_snapshot_access_already_exists(self): # Verify response code for exception.ShareSnapshotAccessExists access_type = "fake_type" access = "fake_access" e = exception.ShareSnapshotAccessExists(access_type=access_type, access=access) self.assertEqual(400, e.code) self.assertIn(access_type, e.msg) self.assertIn(access, e.msg) def test_manage_share_server_error(self): # Verify response code for exception.ManageShareServerError reason = 'Invalid share server id.' share_server_id = 'fake' e = exception.ManageShareServerError(reason=reason, share_server_id=share_server_id) self.assertEqual(500, e.code) self.assertIn(reason, e.msg) class ManilaExceptionResponseCode400(test.TestCase): def test_invalid(self): # Verify response code for exception.Invalid e = exception.Invalid() self.assertEqual(400, e.code) def test_invalid_input(self): # Verify response code for exception.InvalidInput reason = "fake_reason" e = exception.InvalidInput(reason=reason) self.assertEqual(400, e.code) self.assertIn(reason, e.msg) def test_invalid_request(self): # Verify response code for exception.InvalidRequest e = exception.InvalidRequest() self.assertEqual(400, e.code) def test_invalid_results(self): # Verify response code for exception.InvalidResults e = exception.InvalidResults() self.assertEqual(400, e.code) def test_invalid_uuid(self): # Verify response code for exception.InvalidUUID uuid = "fake_uuid" e = exception.InvalidUUID(uuid=uuid) self.assertEqual(400, e.code) self.assertIn(uuid, e.msg) def test_invalid_content_type(self): # Verify response code for exception.InvalidContentType content_type = "fake_content_type" e = exception.InvalidContentType(content_type=content_type) self.assertEqual(400, e.code) self.assertIn(content_type, e.msg) def test_invalid_parameter_value(self): # Verify response code for exception.InvalidParameterValue err = "fake_err" e = exception.InvalidParameterValue(err=err) self.assertEqual(400, e.code) self.assertIn(err, e.msg) def test_invalid_reservation_expiration(self): # Verify response code for exception.InvalidReservationExpiration expire = "fake_expire" e = exception.InvalidReservationExpiration(expire=expire) self.assertEqual(400, e.code) self.assertIn(expire, e.msg) def test_invalid_quota_value(self): # Verify response code for exception.InvalidQuotaValue unders = '-1' e = exception.InvalidQuotaValue(unders=unders) self.assertEqual(400, e.code) def test_invalid_share(self): # Verify response code for exception.InvalidShare reason = "fake_reason" e = exception.InvalidShare(reason=reason) self.assertEqual(400, e.code) self.assertIn(reason, e.msg) def test_invalid_share_access(self): # Verify response code for exception.InvalidShareAccess reason = "fake_reason" e = exception.InvalidShareAccess(reason=reason) self.assertEqual(400, e.code) self.assertIn(reason, e.msg) def test_invalid_share_snapshot(self): # Verify response code for exception.InvalidShareSnapshot reason = "fake_reason" e = exception.InvalidShareSnapshot(reason=reason) self.assertEqual(400, e.code) self.assertIn(reason, e.msg) def test_invalid_share_metadata(self): # Verify response code for exception.InvalidMetadata e = exception.InvalidMetadata() self.assertEqual(400, 
e.code) def test_invalid_share_metadata_size(self): # Verify response code for exception.InvalidMetadataSize e = exception.InvalidMetadataSize() self.assertEqual(400, e.code) def test_invalid_volume(self): # Verify response code for exception.InvalidVolume e = exception.InvalidVolume() self.assertEqual(400, e.code) def test_invalid_share_type(self): # Verify response code for exception.InvalidShareType reason = "fake_reason" e = exception.InvalidShareType(reason=reason) self.assertEqual(400, e.code) self.assertIn(reason, e.msg) def test_manage_invalid_share_snapshot(self): # Verify response code for exception.ManageInvalidShareSnapshot reason = "fake_reason" e = exception.ManageInvalidShareSnapshot(reason=reason) self.assertEqual(400, e.code) self.assertIn(reason, e.msg) def test_unmanage_invalid_share_snapshot(self): # Verify response code for exception.UnmanageInvalidShareSnapshot reason = "fake_reason" e = exception.UnmanageInvalidShareSnapshot(reason=reason) self.assertEqual(400, e.code) self.assertIn(reason, e.msg) def test_invalid_share_snapshot_instance(self): # Verify response code for exception.InvalidShareSnapshotInstance reason = "fake_reason" e = exception.InvalidShareSnapshotInstance(reason=reason) self.assertEqual(400, e.code) self.assertIn(reason, e.msg) class ManilaExceptionResponseCode403(test.TestCase): def test_not_authorized(self): # Verify response code for exception.NotAuthorized e = exception.NotAuthorized() self.assertEqual(403, e.code) def test_admin_required(self): # Verify response code for exception.AdminRequired e = exception.AdminRequired() self.assertEqual(403, e.code) def test_policy_not_authorized(self): # Verify response code for exception.PolicyNotAuthorized action = "fake_action" e = exception.PolicyNotAuthorized(action=action) self.assertEqual(403, e.code) self.assertIn(action, e.msg) class ManilaExceptionResponseCode404(test.TestCase): def test_not_found(self): # Verify response code for exception.NotFound e = exception.NotFound() self.assertEqual(404, e.code) def test_share_network_not_found(self): # Verify response code for exception.ShareNetworkNotFound share_network_id = "fake_share_network_id" e = exception.ShareNetworkNotFound(share_network_id=share_network_id) self.assertEqual(404, e.code) self.assertIn(share_network_id, e.msg) def test_share_network_subnet_not_found(self): # Verify response code for exception.ShareNetworkSubnetNotFound share_network_subnet_id = "fake_share_network_subnet_id" e = exception.ShareNetworkSubnetNotFound( share_network_subnet_id=share_network_subnet_id) self.assertEqual(404, e.code) self.assertIn(share_network_subnet_id, e.msg) def test_share_server_not_found(self): # Verify response code for exception.ShareServerNotFound share_server_id = "fake_share_server_id" e = exception.ShareServerNotFound(share_server_id=share_server_id) self.assertEqual(404, e.code) self.assertIn(share_server_id, e.msg) def test_share_server_not_found_by_filters(self): # Verify response code for exception.ShareServerNotFoundByFilters filters_description = "host = fakeHost" e = exception.ShareServerNotFoundByFilters( filters_description=filters_description) self.assertEqual(404, e.code) self.assertIn(filters_description, e.msg) def test_service_not_found(self): # Verify response code for exception.ServiceNotFound service_id = "fake_service_id" e = exception.ServiceNotFound(service_id=service_id) self.assertEqual(404, e.code) self.assertIn(service_id, e.msg) def test_host_not_found(self): # Verify response code for exception.HostNotFound host = 
"fake_host" e = exception.HostNotFound(host=host) self.assertEqual(404, e.code) self.assertIn(host, e.msg) def test_scheduler_host_filter_not_found(self): # Verify response code for exception.SchedulerHostFilterNotFound filter_name = "fake_filter_name" e = exception.SchedulerHostFilterNotFound(filter_name=filter_name) self.assertEqual(404, e.code) self.assertIn(filter_name, e.msg) def test_scheduler_host_weigher_not_found(self): # Verify response code for exception.SchedulerHostWeigherNotFound weigher_name = "fake_weigher_name" e = exception.SchedulerHostWeigherNotFound(weigher_name=weigher_name) self.assertEqual(404, e.code) self.assertIn(weigher_name, e.msg) def test_host_binary_not_found(self): # Verify response code for exception.HostBinaryNotFound host = "fake_host" binary = "fake_binary" e = exception.HostBinaryNotFound(binary=binary, host=host) self.assertEqual(404, e.code) self.assertIn(binary, e.msg) self.assertIn(host, e.msg) def test_quota_not_found(self): # Verify response code for exception.QuotaNotFound e = exception.QuotaNotFound() self.assertEqual(404, e.code) def test_quota_resource_unknown(self): # Verify response code for exception.QuotaResourceUnknown unknown = "fake_quota_resource" e = exception.QuotaResourceUnknown(unknown=unknown) self.assertEqual(404, e.code) def test_project_quota_not_found(self): # Verify response code for exception.ProjectQuotaNotFound project_id = "fake_tenant_id" e = exception.ProjectQuotaNotFound(project_id=project_id) self.assertEqual(404, e.code) def test_quota_class_not_found(self): # Verify response code for exception.QuotaClassNotFound class_name = "FakeQuotaClass" e = exception.QuotaClassNotFound(class_name=class_name) self.assertEqual(404, e.code) def test_quota_usage_not_found(self): # Verify response code for exception.QuotaUsageNotFound project_id = "fake_tenant_id" e = exception.QuotaUsageNotFound(project_id=project_id) self.assertEqual(404, e.code) def test_reservation_not_found(self): # Verify response code for exception.ReservationNotFound uuid = "fake_uuid" e = exception.ReservationNotFound(uuid=uuid) self.assertEqual(404, e.code) def test_migration_not_found(self): # Verify response code for exception.MigrationNotFound migration_id = "fake_migration_id" e = exception.MigrationNotFound(migration_id=migration_id) self.assertEqual(404, e.code) self.assertIn(migration_id, e.msg) def test_migration_not_found_by_status(self): # Verify response code for exception.MigrationNotFoundByStatus status = "fake_status" instance_id = "fake_instance_id" e = exception.MigrationNotFoundByStatus(status=status, instance_id=instance_id) self.assertEqual(404, e.code) self.assertIn(status, e.msg) self.assertIn(instance_id, e.msg) def test_file_not_found(self): # Verify response code for exception.FileNotFound file_path = "fake_file_path" e = exception.FileNotFound(file_path=file_path) self.assertEqual(404, e.code) self.assertIn(file_path, e.msg) def test_config_not_found(self): # Verify response code for exception.ConfigNotFound path = "fake_path" e = exception.ConfigNotFound(path=path) self.assertEqual(404, e.code) self.assertIn(path, e.msg) def test_paste_app_not_found(self): # Verify response code for exception.PasteAppNotFound name = "fake_name" path = "fake_path" e = exception.PasteAppNotFound(name=name, path=path) self.assertEqual(404, e.code) self.assertIn(name, e.msg) self.assertIn(path, e.msg) def test_share_snapshot_not_found(self): # Verify response code for exception.ShareSnapshotNotFound snapshot_id = "fake_snapshot_id" e = 
exception.ShareSnapshotNotFound(snapshot_id=snapshot_id) self.assertEqual(404, e.code) self.assertIn(snapshot_id, e.msg) def test_share_metadata_not_found(self): # verify response code for exception.ShareMetadataNotFound e = exception.ShareMetadataNotFound() self.assertEqual(404, e.code) def test_security_service_not_found(self): # verify response code for exception.SecurityServiceNotFound security_service_id = "fake_security_service_id" e = exception.SecurityServiceNotFound( security_service_id=security_service_id) self.assertEqual(404, e.code) self.assertIn(security_service_id, e.msg) def test_volume_not_found(self): # verify response code for exception.VolumeNotFound volume_id = "fake_volume_id" e = exception.VolumeNotFound(volume_id=volume_id) self.assertEqual(404, e.code) self.assertIn(volume_id, e.msg) def test_volume_snapshot_not_found(self): # verify response code for exception.VolumeSnapshotNotFound snapshot_id = "fake_snapshot_id" e = exception.VolumeSnapshotNotFound(snapshot_id=snapshot_id) self.assertEqual(404, e.code) self.assertIn(snapshot_id, e.msg) def test_share_type_not_found(self): # verify response code for exception.ShareTypeNotFound share_type_id = "fake_share_type_id" e = exception.ShareTypeNotFound(share_type_id=share_type_id) self.assertEqual(404, e.code) self.assertIn(share_type_id, e.msg) def test_share_type_not_found_by_name(self): # verify response code for exception.ShareTypeNotFoundByName share_type_name = "fake_share_type_name" e = exception.ShareTypeNotFoundByName( share_type_name=share_type_name) self.assertEqual(404, e.code) self.assertIn(share_type_name, e.msg) def test_share_type_does_not_exist(self): # verify response code for exception.ShareTypeDoesNotExist share_type = "fake_share_type_1234" e = exception.ShareTypeDoesNotExist(share_type=share_type) self.assertEqual(404, e.code) self.assertIn(share_type, e.msg) def test_share_type_extra_specs_not_found(self): # verify response code for exception.ShareTypeExtraSpecsNotFound share_type_id = "fake_share_type_id" extra_specs_key = "fake_extra_specs_key" e = exception.ShareTypeExtraSpecsNotFound( share_type_id=share_type_id, extra_specs_key=extra_specs_key) self.assertEqual(404, e.code) self.assertIn(share_type_id, e.msg) self.assertIn(extra_specs_key, e.msg) def test_default_share_type_not_configured(self): # Verify response code for exception.DefaultShareTypeNotConfigured e = exception.DefaultShareTypeNotConfigured() self.assertEqual(404, e.code) def test_instance_not_found(self): # verify response code for exception.InstanceNotFound instance_id = "fake_instance_id" e = exception.InstanceNotFound(instance_id=instance_id) self.assertEqual(404, e.code) self.assertIn(instance_id, e.msg) def test_share_replica_not_found_exception(self): # Verify response code for exception.ShareReplicaNotFound replica_id = "FAKE_REPLICA_ID" e = exception.ShareReplicaNotFound(replica_id=replica_id) self.assertEqual(404, e.code) self.assertIn(replica_id, e.msg) def test_storage_resource_not_found(self): # verify response code for exception.StorageResourceNotFound name = "fake_name" e = exception.StorageResourceNotFound(name=name) self.assertEqual(404, e.code) self.assertIn(name, e.msg) def test_snapshot_resource_not_found(self): # verify response code for exception.SnapshotResourceNotFound name = "fake_name" e = exception.SnapshotResourceNotFound(name=name) self.assertEqual(404, e.code) self.assertIn(name, e.msg) def test_snapshot_instance_not_found(self): # verify response code for exception.ShareSnapshotInstanceNotFound 
        instance_id = 'fake_instance_id'
        e = exception.ShareSnapshotInstanceNotFound(instance_id=instance_id)
        self.assertEqual(404, e.code)
        self.assertIn(instance_id, e.msg)

    def test_export_location_not_found(self):
        # verify response code for exception.ExportLocationNotFound
        uuid = "fake-export-location-uuid"
        e = exception.ExportLocationNotFound(uuid=uuid)
        self.assertEqual(404, e.code)
        self.assertIn(uuid, e.msg)

    def test_share_resource_not_found(self):
        # verify response code for exception.ShareResourceNotFound
        share_id = "fake_share_id"
        e = exception.ShareResourceNotFound(share_id=share_id)
        self.assertEqual(404, e.code)
        self.assertIn(share_id, e.msg)

    def test_share_not_found(self):
        # verify response code for exception.ShareNotFound
        share_id = "fake_share_id"
        e = exception.ShareNotFound(share_id=share_id)
        self.assertEqual(404, e.code)
        self.assertIn(share_id, e.msg)


class ManilaExceptionResponseCode413(test.TestCase):

    def test_quota_error(self):
        # verify response code for exception.QuotaError
        e = exception.QuotaError()
        self.assertEqual(413, e.code)

    def test_share_size_exceeds_available_quota(self):
        # verify response code for exception.ShareSizeExceedsAvailableQuota
        e = exception.ShareSizeExceedsAvailableQuota()
        self.assertEqual(413, e.code)

    def test_share_limit_exceeded(self):
        # verify response code for exception.ShareLimitExceeded
        allowed = 776  # amount of allowed shares
        e = exception.ShareLimitExceeded(allowed=allowed)
        self.assertEqual(413, e.code)
        self.assertIn(str(allowed), e.msg)

    def test_snapshot_limit_exceeded(self):
        # verify response code for exception.SnapshotLimitExceeded
        allowed = 777  # amount of allowed snapshots
        e = exception.SnapshotLimitExceeded(allowed=allowed)
        self.assertEqual(413, e.code)
        self.assertIn(str(allowed), e.msg)

    def test_share_networks_limit_exceeded(self):
        # verify response code for exception.ShareNetworksLimitExceeded
        allowed = 778  # amount of allowed share networks
        e = exception.ShareNetworksLimitExceeded(allowed=allowed)
        self.assertEqual(413, e.code)
        self.assertIn(str(allowed), e.msg)

    def test_port_limit_exceeded(self):
        # verify response code for exception.PortLimitExceeded
        e = exception.PortLimitExceeded()
        self.assertEqual(413, e.code)
manila-10.0.0/manila/tests/fake_zfssa.py0000664000175000017500000000562713656750227020173 0ustar zuulzuul00000000000000# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Fake ZFS Storage Appliance, for unit testing.
"""


class FakeResponse(object):
    # Minimal stand-in for a ZFSSA REST response: a status code and a body.
    def __init__(self, statuscode):
        self.status = statuscode
        self.data = 'data'


class FakeZFSSA(object):
    """Fake ZFS SA."""
    # No-op stand-in for the appliance API used by the ZFSSA share driver;
    # tests patch individual methods when specific return values are needed.

    def __init__(self):
        self.user = None
        self.host = 'fakehost'
        self.url = 'fakeurl'
        self.rclient = None

    def login(self, user):
        self.user = user

    def set_host(self, host, timeout=None):
        self.host = host

    def enable_service(self, service):
        return True

    def create_project(self, pool, project, arg):
        pass

    def get_share(self, pool, project, share):
        pass

    def create_share(self, pool, project, share):
        pass

    def delete_share(self, pool, project, share):
        pass

    def create_snapshot(self, pool, project, share):
        pass

    def delete_snapshot(self, pool, project, share, snapshot):
        pass

    def clone_snapshot(self, pool, project, share, snapshot, clone, size):
        pass

    def has_clones(self, pool, project, vol, snapshot):
        return False

    def modify_share(self, pool, project, share, arg):
        pass

    def allow_access_nfs(self, pool, project, share, access):
        pass

    def deny_access_nfs(self, pool, project, share, access):
        pass

    def get_project_stats(self, pool, project):
        pass

    def create_schema(self, schema):
        pass


class FakeRestClient(object):
    """Fake ZFSSA Rest Client."""
    # Mirrors the REST client interface with no-op methods.

    def __init__(self):
        self.url = None
        self.headers = None
        self.log_function = None
        self.local = None
        self.base_path = None
        self.timeout = 60
        self.do_logout = False
        self.auth_str = None

    def _path(self, path, base_path=None):
        pass

    def _authorize(self):
        pass

    def login(self, auth_str):
        pass

    def logout(self):
        pass

    def islogin(self):
        pass

    def request(self, path, request, body=None, **kwargs):
        pass

    def get(self, path, **kwargs):
        pass

    def post(self, path, body="", **kwargs):
        pass

    def put(self, path, body="", **kwargs):
        pass

    def delete(self, path, **kwargs):
        pass

    def head(self, path, **kwargs):
        pass
manila-10.0.0/manila/tests/fake_service_instance.py0000664000175000017500000000361613656750227022365 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock

from manila.tests import fake_compute


class FakeServiceInstanceManager(object):

    def __init__(self, *args, **kwargs):
        self.db = mock.Mock()
        self._helpers = {
            'CIFS': mock.Mock(),
            'NFS': mock.Mock(),
        }
        self.share_networks_locks = {}
        self.share_networks_servers = {}
        self.fake_server = fake_compute.FakeServer()
        self.service_instance_name_template = (
            'manila_fake_service_instance-%s')
        self._network_helper = None

    def get_service_instance(self, context, share_network_id, create=True):
        return self.fake_server

    @property
    def network_helper(self):
        return self._get_network_helper()

    def _get_network_helper(self):
        self._network_helper = FakeNeutronNetworkHelper()
        return self._network_helper

    def _create_service_instance(self, context, instance_name,
                                 share_network_id, old_server_ip):
        return self.fake_server

    def _delete_server(self, context, server):
        pass

    def _get_service_instance_name(self, share_network_id):
        return self.service_instance_name_template % share_network_id


class FakeNeutronNetworkHelper(object):

    def setup_connectivity_with_service_instances(self):
        pass
manila-10.0.0/manila/tests/share_group/0000775000175000017500000000000013656750362020011 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share_group/__init__.py0000664000175000017500000000000013656750227022110 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share_group/test_share_group_types.py0000664000175000017500000001214313656750227025165 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
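# Illustrative note on the fixtures that follow: share group types are
# modelled as plain dicts keyed by type name, e.g. an abbreviated sketch of
# the fake_type fixture defined below is:
#
#     {'test': {'id': 'fooid-1', 'name': 'test', 'group_specs': {},
#               'deleted': '0', 'deleted_at': None, 'updated_at': None,
#               'created_at': datetime.datetime(2015, 1, 22, 11, 43, 24)}}
#
# group_specs such as {'gold': 'True'} drive the spec-based filtering
# exercised in test_get_all_types_search().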
"""Test of Share Type methods for Manila.""" import copy import datetime from unittest import mock import ddt from manila.common import constants from manila import context from manila import db from manila import exception from manila.share_group import share_group_types from manila import test def create_share_group_type_dict(group_specs=None): return { 'fake_type': { 'name': 'fake1', 'group_specs': group_specs } } @ddt.ddt class ShareGroupTypesTestCase(test.TestCase): fake_type = { 'test': { 'created_at': datetime.datetime(2015, 1, 22, 11, 43, 24), 'deleted': '0', 'deleted_at': None, 'group_specs': {}, 'id': u'fooid-1', 'name': u'test', 'updated_at': None } } fake_group_specs = {u'gold': u'True'} fake_share_group_type_id = u'fooid-2' fake_type_w_extra = { 'test_with_extra': { 'created_at': datetime.datetime(2015, 1, 22, 11, 45, 31), 'deleted': '0', 'deleted_at': None, 'group_specs': fake_group_specs, 'id': fake_share_group_type_id, 'name': u'test_with_extra', 'updated_at': None } } fake_type_w_valid_extra = { 'test_with_extra': { 'created_at': datetime.datetime(2015, 1, 22, 11, 45, 31), 'deleted': '0', 'deleted_at': None, 'group_specs': { constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: 'true' }, 'id': u'fooid-2', 'name': u'test_with_extra', 'updated_at': None } } fake_types = fake_type.copy() fake_types.update(fake_type_w_extra) fake_types.update(fake_type_w_valid_extra) fake_share_group = { 'id': u'fooid-1', 'share_group_type_id': fake_share_group_type_id, } def setUp(self): super(ShareGroupTypesTestCase, self).setUp() self.context = context.get_admin_context() @ddt.data({}, fake_type, fake_type_w_extra, fake_types) def test_get_all_types(self, share_group_type): self.mock_object( db, 'share_group_type_get_all', mock.Mock(return_value=copy.deepcopy(share_group_type))) returned_type = share_group_types.get_all(self.context) self.assertItemsEqual(share_group_type, returned_type) def test_get_all_types_search(self): share_group_type = self.fake_type_w_extra search_filter = {"group_specs": {"gold": "True"}, 'is_public': True} self.mock_object( db, 'share_group_type_get_all', mock.Mock(return_value=share_group_type)) returned_type = share_group_types.get_all( self.context, search_opts=search_filter) db.share_group_type_get_all.assert_called_once_with( mock.ANY, 0, filters={'is_public': True}) self.assertItemsEqual(share_group_type, returned_type) search_filter = {"group_specs": {"gold": "False"}} returned_type = share_group_types.get_all( self.context, search_opts=search_filter) self.assertEqual({}, returned_type) def test_add_access(self): project_id = '456' share_group_type = share_group_types.create(self.context, 'type2', []) share_group_type_id = share_group_type.get('id') share_group_types.add_share_group_type_access( self.context, share_group_type_id, project_id) stype_access = db.share_group_type_access_get_all( self.context, share_group_type_id) self.assertIn(project_id, [a.project_id for a in stype_access]) def test_add_access_invalid(self): self.assertRaises( exception.InvalidShareGroupType, share_group_types.add_share_group_type_access, 'fake', None, 'fake') def test_remove_access(self): project_id = '456' share_group_type = share_group_types.create( self.context, 'type3', [], projects=['456']) share_group_type_id = share_group_type.get('id') share_group_types.remove_share_group_type_access( self.context, share_group_type_id, project_id) stype_access = db.share_group_type_access_get_all( self.context, share_group_type_id) self.assertNotIn(project_id, stype_access) def 
test_remove_access_invalid(self): self.assertRaises( exception.InvalidShareGroupType, share_group_types.remove_share_group_type_access, 'fake', None, 'fake') manila-10.0.0/manila/tests/share_group/test_api.py0000664000175000017500000020304413656750227022176 0ustar zuulzuul00000000000000# Copyright 2016 Alex Meade # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the Share API module.""" import copy import datetime from unittest import mock import ddt from oslo_config import cfg from oslo_utils import timeutils from manila.common import constants from manila import context from manila import db as db_driver from manila import exception from manila.share import share_types from manila.share_group import api as share_group_api from manila import test from manila.tests.api.contrib import stubs from manila.tests import utils as test_utils CONF = cfg.CONF def fake_share_group(id, **kwargs): share_group = { 'id': id, 'user_id': 'fakeuser', 'project_id': 'fakeproject', 'status': constants.STATUS_CREATING, 'name': None, 'description': None, 'host': None, 'availability_zone_id': None, 'share_group_type_id': None, 'source_share_group_snapshot_id': None, 'share_network_id': None, 'share_server_id': None, 'share_types': mock.ANY, 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), } if 'source_share_group_snapshot_id' in kwargs: share_group['share_network_id'] = 'fake_share_network_id' share_group['share_server_id'] = 'fake_share_server_id' share_group.update(kwargs) return share_group def fake_share_group_snapshot(id, **kwargs): snap = { 'id': id, 'user_id': 'fakeuser', 'project_id': 'fakeproject', 'status': constants.STATUS_CREATING, 'name': None, 'description': None, 'share_group_id': None, 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), } snap.update(kwargs) return snap @ddt.ddt class ShareGroupsAPITestCase(test.TestCase): def setUp(self): super(ShareGroupsAPITestCase, self).setUp() self.user_id = 'fake_user_id' self.project_id = 'fake_project_id' self.context = context.RequestContext( user_id=self.user_id, project_id=self.project_id, is_admin=True) self.scheduler_rpcapi = mock.Mock() self.share_rpcapi = mock.Mock() self.share_api = mock.Mock() self.api = share_group_api.API() self.mock_object(self.api, 'share_rpcapi', self.share_rpcapi) self.mock_object(self.api, 'share_api', self.share_api) self.mock_object(self.api, 'scheduler_rpcapi', self.scheduler_rpcapi) dt_utc = datetime.datetime.utcnow() self.mock_object(timeutils, 'utcnow', mock.Mock(return_value=dt_utc)) self.fake_share_type = { 'name': 'default', 'extra_specs': {'driver_handles_share_servers': 'False'}, 'is_public': True, 'id': 'c01990c1-448f-435a-9de6-c7c894bb6df9' } self.fake_share_type_2 = { 'name': 'default2', 'extra_specs': {'driver_handles_share_servers': 'False'}, 'is_public': True, 'id': 'c01990c1-448f-435a-9de6-c7c894bb7dfd' } self.fake_share_group_type = { 'share_types': [ {'share_type_id': self.fake_share_type['id']}, {'share_type_id': self.fake_share_type_2['id']}, ] } self.mock_object(share_types, 
'get_share_type', mock.Mock(return_value=self.fake_share_type)) self.mock_object( db_driver, 'share_group_type_get', mock.Mock(return_value=self.fake_share_group_type)) self.mock_object(share_group_api.QUOTAS, 'reserve') self.mock_object(share_group_api.QUOTAS, 'commit') self.mock_object(share_group_api.QUOTAS, 'rollback') def test_create_empty_request(self): share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) expected_values = share_group.copy() for name in ('id', 'host', 'created_at'): expected_values.pop(name, None) self.mock_object(db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.api.create(self.context) db_driver.share_group_create.assert_called_once_with( self.context, expected_values) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() def test_create_request_spec(self): """Ensure the correct values are sent to the scheduler.""" share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) expected_values = share_group.copy() for name in ('id', 'host', 'created_at'): expected_values.pop(name, None) expected_request_spec = {'share_group_id': share_group['id']} expected_request_spec.update(share_group) expected_request_spec['availability_zones'] = set([]) del expected_request_spec['id'] del expected_request_spec['created_at'] del expected_request_spec['host'] expected_request_spec['resource_type'] = self.fake_share_group_type self.mock_object(db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.api.create(self.context) self.scheduler_rpcapi.create_share_group.assert_called_once_with( self.context, share_group_id=share_group['id'], request_spec=expected_request_spec, filter_properties={}) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() def test_create_with_name(self): fake_name = 'fake_name' share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) expected_values = share_group.copy() for name in ('id', 'host', 'created_at'): expected_values.pop(name, None) expected_values['name'] = fake_name self.mock_object(db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.mock_object(db_driver, 'share_network_get') self.api.create(self.context, name=fake_name) db_driver.share_group_create.assert_called_once_with( self.context, expected_values) self.scheduler_rpcapi.create_share_group.assert_called_once_with( self.context, share_group_id=share_group['id'], request_spec=mock.ANY, filter_properties={}) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() def test_create_with_description(self): fake_desc = 'fake_desc' share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) expected_values = 
share_group.copy() for name in ('id', 'host', 'created_at'): expected_values.pop(name, None) expected_values['description'] = fake_desc self.mock_object(db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.api.create(self.context, description=fake_desc) db_driver.share_group_create.assert_called_once_with( self.context, expected_values) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() @ddt.data(True, False) def test_create_with_multiple_share_types_with_az(self, with_az): share_type_1 = copy.deepcopy(self.fake_share_type) share_type_2 = copy.deepcopy(self.fake_share_type_2) share_type_1['extra_specs']['availability_zones'] = 'nova,supernova' share_type_2['extra_specs']['availability_zones'] = 'nova' fake_share_types = [share_type_1, share_type_2] fake_share_type_ids = [x['id'] for x in fake_share_types] share_group_type = { 'share_types': [ {'share_type_id': share_type_1['id']}, {'share_type_id': share_type_2['id']}, {'share_type_id': self.fake_share_type['id']}, ] } self.mock_object( db_driver, 'share_group_type_get', mock.Mock(return_value=share_group_type)) self.mock_object(share_types, 'get_share_type', mock.Mock( side_effect=[share_type_1, share_type_2, share_type_1, share_type_2, self.fake_share_type])) share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING, availability_zone_id=('e030620e-892c-4ff4-8764-9f3f2b560bd1' if with_az else None) ) expected_values = share_group.copy() for name in ('id', 'host', 'created_at'): expected_values.pop(name, None) expected_values['share_types'] = fake_share_type_ids self.mock_object( db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.mock_object(db_driver, 'share_network_get') az_kwargs = { 'availability_zone': 'nova', 'availability_zone_id': share_group['availability_zone_id'], } kwargs = {} if not with_az else az_kwargs self.api.create(self.context, share_type_ids=fake_share_type_ids, **kwargs) scheduler_request_spec = ( self.scheduler_rpcapi.create_share_group.call_args_list[ 0][1]['request_spec'] ) az_id = az_kwargs['availability_zone_id'] if with_az else None self.assertEqual({'nova', 'supernova'}, scheduler_request_spec['availability_zones']) self.assertEqual(az_id, scheduler_request_spec['availability_zone_id']) db_driver.share_group_create.assert_called_once_with( self.context, expected_values) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() @ddt.data( test_utils.annotated('specified_stypes_one_unsupported_in_AZ', (True, True)), test_utils.annotated('specified_stypes_all_unsupported_in_AZ', (True, False)), test_utils.annotated('group_type_stypes_one_unsupported_in_AZ', (False, True)), test_utils.annotated('group_type_stypes_all_unsupported_in_AZ', (False, False))) @ddt.unpack def test_create_unsupported_az(self, specify_stypes, all_unsupported): share_type_1 = copy.deepcopy(self.fake_share_type) share_type_2 = copy.deepcopy(self.fake_share_type_2) share_type_1['extra_specs']['availability_zones'] = 'nova,supernova' share_type_2['extra_specs']['availability_zones'] = ( 'nova' if all_unsupported else 
'nova,hypernova' ) share_group_type = { 'share_types': [ {'share_type_id': share_type_1['id'], }, {'share_type_id': share_type_2['id']}, ] } share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING, availability_zone_id='e030620e-892c-4ff4-8764-9f3f2b560bd1') self.mock_object( db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.mock_object(db_driver, 'share_network_get') self.mock_object( db_driver, 'share_group_type_get', mock.Mock(return_value=share_group_type)) self.mock_object(share_types, 'get_share_type', mock.Mock(side_effect=[share_type_1, share_type_1] * 2)) self.mock_object(db_driver, 'share_group_snapshot_get') kwargs = { 'availability_zone': 'hypernova', 'availability_zone_id': share_group['availability_zone_id'], } if specify_stypes: kwargs['share_type_ids'] = [share_type_1['id'], share_type_2['id']] self.assertRaises( exception.InvalidInput, self.api.create, self.context, **kwargs) db_driver.share_group_snapshot_get.assert_not_called() db_driver.share_network_get.assert_not_called() def test_create_with_share_type_not_found(self): self.mock_object(share_types, 'get_share_type', mock.Mock(side_effect=exception.ShareTypeNotFound( share_type_id=self.fake_share_type['id']))) share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) expected_values = share_group.copy() for name in ('id', 'host', 'created_at'): expected_values.pop(name, None) expected_values['share_types'] = self.fake_share_type['id'] self.mock_object(db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.assertRaises( exception.InvalidInput, self.api.create, self.context, share_type_ids=[self.fake_share_type['id']]) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_create_with_error_on_quota_reserve(self): overs = ["share_groups"] usages = {"share_groups": {"reserved": 1, "in_use": 3, "limit": 4}} quotas = {"share_groups": 5} share_group_api.QUOTAS.reserve.side_effect = exception.OverQuota( overs=overs, usages=usages, quotas=quotas, ) self.mock_object(share_group_api.LOG, "warning") self.assertRaises( exception.ShareGroupsLimitExceeded, self.api.create, self.context) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() share_group_api.LOG.warning.assert_called_once_with(mock.ANY, mock.ANY) def test_create_driver_handles_share_servers_is_false_with_net_id(self): fake_share_types = [self.fake_share_type] self.mock_object(share_types, 'get_share_type') self.assertRaises(exception.InvalidInput, self.api.create, self.context, share_type_ids=fake_share_types, share_network_id="fake_share_network") def test_create_with_conflicting_share_types(self): fake_share_type = { 'name': 'default', 'extra_specs': {'driver_handles_share_servers': 'True'}, 'is_public': True, 'id': 'c01990c1-448f-435a-9de6-c7c894bb6df9', } fake_share_type_2 = { 'name': 'default2', 'extra_specs': {'driver_handles_share_servers': 'False'}, 'is_public': True, 'id': 'c01990c1-448f-435a-9de6-c7c894bb7df9', } fake_share_types = [fake_share_type, fake_share_type_2] fake_share_type_ids = [x['id'] for x in fake_share_types] self.mock_object(share_types, 'get_share_type', mock.Mock(side_effect=[fake_share_type, 
fake_share_type_2])) self.assertRaises( exception.InvalidInput, self.api.create, self.context, share_type_ids=fake_share_type_ids) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_create_with_conflicting_share_type_and_share_network(self): fake_share_type = { 'name': 'default', 'extra_specs': {'driver_handles_share_servers': 'False'}, 'is_public': True, 'id': 'c01990c1-448f-435a-9de6-c7c894bb6df9', } fake_share_types = [fake_share_type] self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=fake_share_type)) self.assertRaises( exception.InvalidInput, self.api.create, self.context, share_type_ids=fake_share_types, share_network_id="fake_sn") share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_create_with_source_share_group_snapshot_id(self): snap = fake_share_group_snapshot( "fake_source_share_group_snapshot_id", status=constants.STATUS_AVAILABLE) fake_share_type_mapping = {'share_type_id': self.fake_share_type['id']} orig_share_group = fake_share_group( 'fakeorigid', user_id=self.context.user_id, project_id=self.context.project_id, share_types=[fake_share_type_mapping], status=constants.STATUS_AVAILABLE, host='fake_original_host', share_network_id='fake_network_id', share_server_id='fake_server_id') share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, share_types=[fake_share_type_mapping], status=constants.STATUS_CREATING, host='fake_original_host', share_network_id='fake_network_id', share_server_id='fake_server_id') expected_values = share_group.copy() for name in ('id', 'created_at', 'share_network_id', 'share_server_id'): expected_values.pop(name, None) expected_values['source_share_group_snapshot_id'] = snap['id'] expected_values['share_types'] = [self.fake_share_type['id']] expected_values['share_network_id'] = 'fake_network_id' expected_values['share_server_id'] = 'fake_server_id' self.mock_object( db_driver, 'share_group_snapshot_get', mock.Mock(return_value=snap)) self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=orig_share_group)) self.mock_object( db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.mock_object( db_driver, 'share_get', mock.Mock(return_value=stubs.stub_share('fake_share'))) self.mock_object( share_types, 'get_share_type', mock.Mock(return_value={"id": self.fake_share_type['id']})) self.mock_object(db_driver, 'share_network_get') self.mock_object( db_driver, 'share_group_snapshot_members_get_all', mock.Mock(return_value=[])) self.api.create( self.context, source_share_group_snapshot_id=snap['id']) db_driver.share_group_create.assert_called_once_with( self.context, expected_values) self.share_rpcapi.create_share_group.assert_called_once_with( self.context, share_group, orig_share_group['host']) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() def test_create_with_source_share_group_snapshot_id_with_member(self): snap = fake_share_group_snapshot( "fake_source_share_group_snapshot_id", status=constants.STATUS_AVAILABLE) share = stubs.stub_share('fakeshareid') member = stubs.stub_share_group_snapshot_member('fake_member_id') 
fake_share_type_mapping = {'share_type_id': self.fake_share_type['id']} orig_share_group = fake_share_group( 'fakeorigid', user_id=self.context.user_id, project_id=self.context.project_id, share_types=[fake_share_type_mapping], status=constants.STATUS_AVAILABLE, share_network_id='fake_network_id', share_server_id='fake_server_id') share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, share_types=[fake_share_type_mapping], status=constants.STATUS_CREATING, share_network_id='fake_network_id', share_server_id='fake_server_id') expected_values = share_group.copy() for name in ('id', 'created_at', 'fake_network_id', 'fake_share_server_id'): expected_values.pop(name, None) expected_values['source_share_group_snapshot_id'] = snap['id'] expected_values['share_types'] = [self.fake_share_type['id']] expected_values['share_network_id'] = 'fake_network_id' expected_values['share_server_id'] = 'fake_server_id' self.mock_object( db_driver, 'share_group_snapshot_get', mock.Mock(return_value=snap)) self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=orig_share_group)) self.mock_object( db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.mock_object( db_driver, 'share_get', mock.Mock(return_value=stubs.stub_share('fakeshare'))) self.mock_object( share_types, 'get_share_type', mock.Mock(return_value={"id": self.fake_share_type['id']})) self.mock_object(db_driver, 'share_network_get') self.mock_object( db_driver, 'share_instance_get', mock.Mock(return_value=share)) self.mock_object( db_driver, 'share_group_snapshot_members_get_all', mock.Mock(return_value=[member])) self.mock_object(self.share_api, 'create') self.api.create( self.context, source_share_group_snapshot_id=snap['id']) db_driver.share_group_create.assert_called_once_with( self.context, expected_values) self.assertTrue(self.share_api.create.called) self.share_rpcapi.create_share_group.assert_called_once_with( self.context, share_group, orig_share_group['host']) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() def test_create_with_source_sg_snapshot_id_with_members_error(self): snap = fake_share_group_snapshot( "fake_source_share_group_snapshot_id", status=constants.STATUS_AVAILABLE) member = stubs.stub_share_group_snapshot_member('fake_member_id') member_2 = stubs.stub_share_group_snapshot_member('fake_member2_id') share = stubs.stub_share('fakeshareid') fake_share_type_mapping = {'share_type_id': self.fake_share_type['id']} orig_share_group = fake_share_group( 'fakeorigid', user_id=self.context.user_id, project_id=self.context.project_id, share_types=[fake_share_type_mapping], status=constants.STATUS_AVAILABLE, share_network_id='fake_network_id', share_server_id='fake_server_id') share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, share_types=[fake_share_type_mapping], status=constants.STATUS_CREATING, share_network_id='fake_network_id', share_server_id='fake_server_id') expected_values = share_group.copy() for name in ('id', 'created_at', 'share_network_id', 'share_server_id'): expected_values.pop(name, None) expected_values['source_share_group_snapshot_id'] = snap['id'] expected_values['share_types'] = [self.fake_share_type['id']] expected_values['share_network_id'] = 'fake_network_id' 
expected_values['share_server_id'] = 'fake_server_id' self.mock_object(db_driver, 'share_group_snapshot_get', mock.Mock(return_value=snap)) self.mock_object(db_driver, 'share_group_get', mock.Mock(return_value=orig_share_group)) self.mock_object(db_driver, 'share_network_get') self.mock_object(db_driver, 'share_instance_get', mock.Mock(return_value=share)) self.mock_object(db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.mock_object(db_driver, 'share_get', mock.Mock(return_value=stubs.stub_share('fakeshare'))) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value={ "id": self.fake_share_type['id']})) self.mock_object(db_driver, 'share_group_snapshot_members_get_all', mock.Mock(return_value=[member, member_2])) self.mock_object(self.share_api, 'create', mock.Mock(side_effect=[None, exception.Error])) self.mock_object(db_driver, 'share_group_destroy') self.assertRaises(exception.Error, self.api.create, self.context, source_share_group_snapshot_id=snap['id']) db_driver.share_group_create.assert_called_once_with( self.context, expected_values) self.assertEqual(2, self.share_api.create.call_count) self.assertEqual(1, db_driver.share_group_destroy.call_count) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) def test_create_with_source_sg_snapshot_id_error_snapshot_status(self): snap = fake_share_group_snapshot( "fake_source_share_group_snapshot_id", status=constants.STATUS_ERROR) self.mock_object( db_driver, 'share_group_snapshot_get', mock.Mock(return_value=snap)) self.assertRaises( exception.InvalidShareGroupSnapshot, self.api.create, self.context, source_share_group_snapshot_id=snap['id']) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_create_with_source_sg_snapshot_id_snap_not_found(self): snap = fake_share_group_snapshot( "fake_source_share_group_snapshot_id", status=constants.STATUS_ERROR) self.mock_object( db_driver, 'share_group_snapshot_get', mock.Mock(side_effect=exception.ShareGroupSnapshotNotFound( share_group_snapshot_id='fake_source_sg_snapshot_id'))) self.assertRaises( exception.ShareGroupSnapshotNotFound, self.api.create, self.context, source_share_group_snapshot_id=snap['id']) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_create_with_multiple_fields(self): fake_desc = 'fake_desc' fake_name = 'fake_name' share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) expected_values = share_group.copy() for name in ('id', 'host', 'created_at'): expected_values.pop(name, None) expected_values['name'] = fake_name expected_values['description'] = fake_desc self.mock_object(db_driver, 'share_group_create', mock.Mock(return_value=share_group)) self.api.create(self.context, name=fake_name, description=fake_desc) db_driver.share_group_create.assert_called_once_with( self.context, expected_values) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) 
share_group_api.QUOTAS.rollback.assert_not_called() def test_create_with_error_on_creation(self): share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) expected_values = share_group.copy() for name in ('id', 'host', 'created_at'): expected_values.pop(name, None) self.mock_object(db_driver, 'share_group_create', mock.Mock(side_effect=exception.Error)) self.assertRaises(exception.Error, self.api.create, self.context) db_driver.share_group_create.assert_called_once_with( self.context, expected_values) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=1) share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) def test_delete_creating_no_host(self): share_group = fake_share_group( 'fakeid', user_id=self.user_id + '_different_user', project_id=self.project_id + '_in_different_project', status=constants.STATUS_CREATING) self.mock_object(db_driver, 'share_group_destroy') self.api.delete(self.context, share_group) db_driver.share_group_destroy.assert_called_once_with( mock.ANY, share_group['id']) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_delete_creating_with_host(self): share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING, host="fake_host") self.assertRaises( exception.InvalidShareGroup, self.api.delete, self.context, share_group) def test_delete_available(self): share_group = fake_share_group( 'fakeid', user_id=self.user_id + '_different_user', project_id=self.project_id + '_in_different_project', status=constants.STATUS_AVAILABLE, host="fake_host") deleted_share_group = copy.deepcopy(share_group) deleted_share_group['status'] = constants.STATUS_DELETING self.mock_object(db_driver, 'share_group_update', mock.Mock(return_value=deleted_share_group)) self.mock_object(db_driver, 'count_shares_in_share_group', mock.Mock(return_value=0)) self.api.delete(self.context, share_group) db_driver.share_group_update.assert_called_once_with( self.context, share_group['id'], {'status': constants.STATUS_DELETING}) self.share_rpcapi.delete_share_group.assert_called_once_with( self.context, deleted_share_group) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=-1, project_id=share_group['project_id'], user_id=share_group['user_id']) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value, project_id=share_group['project_id'], user_id=share_group['user_id']) share_group_api.QUOTAS.rollback.assert_not_called() def test_delete_error_with_host(self): share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_ERROR, host="fake_host") deleted_share_group = copy.deepcopy(share_group) deleted_share_group['status'] = constants.STATUS_DELETING self.mock_object(self.api, 'share_rpcapi') self.mock_object(db_driver, 'share_group_update', mock.Mock(return_value=deleted_share_group)) self.mock_object(db_driver, 'count_shares_in_share_group', mock.Mock(return_value=0)) self.api.delete(self.context, share_group) db_driver.share_group_update.assert_called_once_with( self.context, share_group['id'], {'status': constants.STATUS_DELETING}) 
self.api.share_rpcapi.delete_share_group.assert_called_once_with( self.context, deleted_share_group) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_groups=-1, project_id=share_group['project_id'], user_id=share_group['user_id']) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value, project_id=share_group['project_id'], user_id=share_group['user_id']) share_group_api.QUOTAS.rollback.assert_not_called() def test_delete_error_without_host(self): share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_ERROR) self.mock_object(db_driver, 'share_group_destroy') self.api.delete(self.context, share_group) db_driver.share_group_destroy.assert_called_once_with( mock.ANY, share_group['id']) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_delete_with_shares(self): share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE, host="fake_host") self.mock_object( db_driver, 'count_shares_in_share_group', mock.Mock(return_value=1)) self.assertRaises( exception.InvalidShareGroup, self.api.delete, self.context, share_group) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_delete_with_share_group_snapshots(self): share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE, host="fake_host") self.mock_object( db_driver, 'count_share_group_snapshots_in_share_group', mock.Mock(return_value=1)) self.assertRaises( exception.InvalidShareGroup, self.api.delete, self.context, share_group) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() @ddt.data({}, {"name": "fake_name"}, {"description": "fake_description"}) def test_update(self, expected_values): share_group = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) self.mock_object( db_driver, 'share_group_update', mock.Mock(return_value=share_group)) self.api.update(self.context, share_group, expected_values) db_driver.share_group_update.assert_called_once_with( self.context, share_group['id'], expected_values) def test_get(self): expected = fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=expected)) actual = self.api.get(self.context, expected['id']) self.assertEqual(expected, actual) def test_get_all_no_groups(self): self.mock_object( db_driver, 'share_group_get_all', mock.Mock(return_value=[])) actual_group = self.api.get_all(self.context) self.assertEqual([], actual_group) def test_get_all(self): expected = [fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING)] self.mock_object( db_driver, 'share_group_get_all_by_project', mock.Mock(return_value=expected)) actual = self.api.get_all(self.context, detailed=True) self.assertEqual(expected, actual) def test_get_all_all_tenants_not_admin(self): cxt = context.RequestContext( 
user_id=None, project_id=None, is_admin=False) expected = [fake_share_group( 'fakeid', user_id=cxt.user_id, project_id=cxt.project_id, status=constants.STATUS_CREATING)] self.mock_object(db_driver, 'share_group_get_all_by_project', mock.Mock(return_value=expected)) actual = self.api.get_all(cxt, search_opts={'all_tenants': True}) self.assertEqual(expected, actual) def test_get_all_all_tenants_as_admin(self): expected = [fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING)] self.mock_object(db_driver, 'share_group_get_all', mock.Mock(return_value=expected)) actual = self.api.get_all( self.context, search_opts={'all_tenants': True}) self.assertEqual(expected, actual) db_driver.share_group_get_all.assert_called_once_with( self.context, detailed=True, filters={}, sort_dir=None, sort_key=None) def test_create_share_group_snapshot_minimal_request_no_members(self): share_group = fake_share_group( 'fake_group_id', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE) snap = fake_share_group_snapshot( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, share_group_id=share_group['id'], status=constants.STATUS_CREATING) expected_values = snap.copy() for name in ('id', 'created_at'): expected_values.pop(name, None) self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=share_group)) self.mock_object( db_driver, 'share_group_snapshot_create', mock.Mock(return_value=snap)) self.mock_object( db_driver, 'share_get_all_by_share_group_id', mock.Mock(return_value=[])) self.api.create_share_group_snapshot( self.context, share_group_id=share_group['id']) db_driver.share_group_get.assert_called_once_with( self.context, share_group['id']) db_driver.share_group_snapshot_create.assert_called_once_with( self.context, expected_values) self.share_rpcapi.create_share_group_snapshot.assert_called_once_with( self.context, snap, share_group['host']) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_group_snapshots=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() def test_create_sg_snapshot_minimal_request_no_members_with_name(self): fake_name = 'fake_name' share_group = fake_share_group( 'fake_group_id', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE) snap = fake_share_group_snapshot( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, share_group_id=share_group['id'], name=fake_name, status=constants.STATUS_CREATING) expected_values = snap.copy() for name in ('id', 'created_at'): expected_values.pop(name, None) self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=share_group)) self.mock_object( db_driver, 'share_group_snapshot_create', mock.Mock(return_value=snap)) self.mock_object( db_driver, 'share_get_all_by_share_group_id', mock.Mock(return_value=[])) self.api.create_share_group_snapshot( self.context, share_group_id=share_group['id'], name=fake_name) db_driver.share_group_get.assert_called_once_with( self.context, share_group['id']) db_driver.share_group_snapshot_create.assert_called_once_with( self.context, expected_values) self.share_rpcapi.create_share_group_snapshot.assert_called_once_with( self.context, snap, share_group['host']) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, 
share_group_snapshots=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() def test_create_group_snapshot_minimal_request_no_members_with_desc(self): fake_description = 'fake_description' share_group = fake_share_group( 'fake_group_id', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE) snap = fake_share_group_snapshot( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, share_group_id=share_group['id'], description=fake_description, status=constants.STATUS_CREATING) expected_values = snap.copy() for name in ('id', 'created_at'): expected_values.pop(name, None) self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=share_group)) self.mock_object( db_driver, 'share_group_snapshot_create', mock.Mock(return_value=snap)) self.mock_object( db_driver, 'share_get_all_by_share_group_id', mock.Mock(return_value=[])) self.api.create_share_group_snapshot( self.context, share_group_id=share_group['id'], description=fake_description) db_driver.share_group_get.assert_called_once_with( self.context, share_group['id']) db_driver.share_group_snapshot_create.assert_called_once_with( self.context, expected_values) self.share_rpcapi.create_share_group_snapshot.assert_called_once_with( self.context, snap, share_group['host']) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_group_snapshots=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() def test_create_share_group_snapshot_group_does_not_exist(self): share_group = fake_share_group( 'fake_group_id', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) snap = fake_share_group_snapshot( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, share_group_id=share_group['id'], status=constants.STATUS_CREATING) expected_values = snap.copy() for name in ('id', 'created_at'): expected_values.pop(name, None) self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=share_group)) self.mock_object( db_driver, 'share_group_snapshot_create', mock.Mock(return_value=snap)) self.mock_object( db_driver, 'share_get_all_by_share_group_id', mock.Mock(return_value=[])) self.assertRaises( exception.InvalidShareGroup, self.api.create_share_group_snapshot, self.context, share_group_id=share_group['id']) db_driver.share_group_get.assert_called_once_with( self.context, share_group['id']) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_create_share_group_snapshot_failure_reserving_quota(self): overs = ["share_group_snapshots"] usages = {"share_group_snapshots": { "reserved": 1, "in_use": 3, "limit": 4, }} quotas = {"share_group_snapshots": 5} share_group = fake_share_group( "fake_group_id", user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE) self.mock_object( db_driver, "share_group_get", mock.Mock(return_value=share_group)) self.mock_object( db_driver, "share_get_all_by_share_group_id", mock.Mock(return_value=[])) share_group_api.QUOTAS.reserve.side_effect = exception.OverQuota( overs=overs, usages=usages, quotas=quotas, ) self.mock_object(share_group_api.LOG, "warning") self.assertRaises( 
exception.ShareGroupSnapshotsLimitExceeded, self.api.create_share_group_snapshot, self.context, share_group_id=share_group["id"]) db_driver.share_group_get.assert_called_once_with( self.context, share_group["id"]) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_group_snapshots=1) share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() share_group_api.LOG.warning.assert_called_once_with(mock.ANY, mock.ANY) def test_create_share_group_snapshot_group_in_creating(self): self.mock_object( db_driver, 'share_group_get', mock.Mock(side_effect=exception.ShareGroupNotFound( share_group_id='fake_id'))) self.assertRaises( exception.ShareGroupNotFound, self.api.create_share_group_snapshot, self.context, share_group_id="fake_id") db_driver.share_group_get.assert_called_once_with( self.context, "fake_id") share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_create_share_group_snapshot_with_member(self): share_group = fake_share_group( 'fake_group_id', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE) snap = fake_share_group_snapshot( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, share_group_id=share_group['id'], status=constants.STATUS_CREATING) share = stubs.stub_share( 'fake_share_id', status=constants.STATUS_AVAILABLE) expected_values = snap.copy() for name in ('id', 'created_at'): expected_values.pop(name, None) expected_member_values = { 'share_group_snapshot_id': snap['id'], 'user_id': self.context.user_id, 'project_id': self.context.project_id, 'status': constants.STATUS_CREATING, 'size': share['size'], 'share_proto': share['share_proto'], 'share_instance_id': mock.ANY, } self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=share_group)) self.mock_object( db_driver, 'share_group_snapshot_create', mock.Mock(return_value=snap)) self.mock_object(db_driver, 'share_group_snapshot_member_create') self.mock_object( db_driver, 'share_get_all_by_share_group_id', mock.Mock(return_value=[share])) self.api.create_share_group_snapshot( self.context, share_group_id=share_group['id']) db_driver.share_group_get.assert_called_once_with( self.context, share_group['id']) db_driver.share_group_snapshot_create.assert_called_once_with( self.context, expected_values) db_driver.share_group_snapshot_member_create.assert_called_once_with( self.context, expected_member_values) self.share_rpcapi.create_share_group_snapshot.assert_called_once_with( self.context, snap, share_group['host']) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_group_snapshots=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() def test_create_share_group_snapshot_with_member_share_in_creating(self): share_group = fake_share_group( 'fake_group_id', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE) share = stubs.stub_share( 'fake_share_id', status=constants.STATUS_CREATING) self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=share_group)) self.mock_object( db_driver, 'share_get_all_by_share_group_id', mock.Mock(return_value=[share])) self.assertRaises( exception.InvalidShareGroup, self.api.create_share_group_snapshot, self.context, share_group_id=share_group['id']) 
db_driver.share_group_get.assert_called_once_with( self.context, share_group['id']) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_create_share_group_snapshot_with_two_members(self): share_group = fake_share_group( 'fake_group_id', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE) snap = fake_share_group_snapshot( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, share_group_id=share_group['id'], status=constants.STATUS_CREATING) share = stubs.stub_share( 'fake_share_id', status=constants.STATUS_AVAILABLE) share_2 = stubs.stub_share( 'fake_share2_id', status=constants.STATUS_AVAILABLE) expected_values = snap.copy() for name in ('id', 'created_at'): expected_values.pop(name, None) expected_member_1_values = { 'share_group_snapshot_id': snap['id'], 'user_id': self.context.user_id, 'project_id': self.context.project_id, 'status': constants.STATUS_CREATING, 'size': share['size'], 'share_proto': share['share_proto'], 'share_instance_id': mock.ANY, } expected_member_2_values = { 'share_group_snapshot_id': snap['id'], 'user_id': self.context.user_id, 'project_id': self.context.project_id, 'status': constants.STATUS_CREATING, 'size': share_2['size'], 'share_proto': share_2['share_proto'], 'share_instance_id': mock.ANY, } self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=share_group)) self.mock_object( db_driver, 'share_group_snapshot_create', mock.Mock(return_value=snap)) self.mock_object( db_driver, 'share_get_all_by_share_group_id', mock.Mock(return_value=[share, share_2])) self.mock_object(db_driver, 'share_group_snapshot_member_create') self.api.create_share_group_snapshot( self.context, share_group_id=share_group['id']) db_driver.share_group_get.assert_called_once_with( self.context, share_group['id']) db_driver.share_group_snapshot_create.assert_called_once_with( self.context, expected_values) db_driver.share_group_snapshot_member_create.assert_any_call( self.context, expected_member_1_values) db_driver.share_group_snapshot_member_create.assert_any_call( self.context, expected_member_2_values) self.share_rpcapi.create_share_group_snapshot.assert_called_once_with( self.context, snap, share_group['host']) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_group_snapshots=1) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) share_group_api.QUOTAS.rollback.assert_not_called() def test_create_share_group_snapshot_error_creating_member(self): share_group = fake_share_group( 'fake_group_id', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE) snap = fake_share_group_snapshot( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, share_group_id=share_group['id'], status=constants.STATUS_CREATING) share = stubs.stub_share( 'fake_share_id', status=constants.STATUS_AVAILABLE) expected_values = snap.copy() for name in ('id', 'created_at'): expected_values.pop(name, None) expected_member_values = { 'share_group_snapshot_id': snap['id'], 'user_id': self.context.user_id, 'project_id': self.context.project_id, 'status': constants.STATUS_CREATING, 'size': share['size'], 'share_proto': share['share_proto'], 'share_instance_id': mock.ANY, } self.mock_object( db_driver, 'share_group_get', mock.Mock(return_value=share_group)) self.mock_object( 
db_driver, 'share_group_snapshot_create', mock.Mock(return_value=snap)) self.mock_object(db_driver, 'share_group_snapshot_destroy') self.mock_object( db_driver, 'share_group_snapshot_member_create', mock.Mock(side_effect=exception.Error)) self.mock_object( db_driver, 'share_get_all_by_share_group_id', mock.Mock(return_value=[share])) self.assertRaises( exception.Error, self.api.create_share_group_snapshot, self.context, share_group_id=share_group['id']) db_driver.share_group_get.assert_called_once_with( self.context, share_group['id']) db_driver.share_group_snapshot_create.assert_called_once_with( self.context, expected_values) db_driver.share_group_snapshot_member_create.assert_called_once_with( self.context, expected_member_values) db_driver.share_group_snapshot_destroy.assert_called_once_with( self.context, snap['id']) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_group_snapshots=1) share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value) def test_delete_share_group_snapshot(self): share_group = fake_share_group('fake_id', host="fake_host") sg_snap = fake_share_group_snapshot( 'fake_groupsnap_id', share_group_id='fake_id', status=constants.STATUS_AVAILABLE) self.mock_object(db_driver, 'share_group_get', mock.Mock(return_value=share_group)) self.mock_object(db_driver, 'share_group_snapshot_update') self.api.delete_share_group_snapshot(self.context, sg_snap) db_driver.share_group_get.assert_called_once_with( self.context, "fake_id") db_driver.share_group_snapshot_update.assert_called_once_with( self.context, sg_snap['id'], {'status': constants.STATUS_DELETING}) self.share_rpcapi.delete_share_group_snapshot.assert_called_once_with( self.context, sg_snap, share_group['host']) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_group_snapshots=-1, project_id=share_group['project_id'], user_id=share_group['user_id']) share_group_api.QUOTAS.commit.assert_called_once_with( self.context, share_group_api.QUOTAS.reserve.return_value, project_id=share_group['project_id'], user_id=share_group['user_id']) share_group_api.QUOTAS.rollback.assert_not_called() def test_delete_share_group_snapshot_fail_on_quota_reserve(self): share_group = fake_share_group('fake_id', host="fake_host") sg_snap = fake_share_group_snapshot( 'fake_groupsnap_id', share_group_id='fake_id', status=constants.STATUS_AVAILABLE) self.mock_object(db_driver, 'share_group_get', mock.Mock(return_value=share_group)) self.mock_object(db_driver, 'share_group_snapshot_update') share_group_api.QUOTAS.reserve.side_effect = exception.OverQuota( 'Failure') self.mock_object(share_group_api.LOG, 'exception') self.api.delete_share_group_snapshot(self.context, sg_snap) db_driver.share_group_get.assert_called_once_with( self.context, "fake_id") db_driver.share_group_snapshot_update.assert_called_once_with( self.context, sg_snap['id'], {'status': constants.STATUS_DELETING}) self.share_rpcapi.delete_share_group_snapshot.assert_called_once_with( self.context, sg_snap, share_group['host']) share_group_api.QUOTAS.reserve.assert_called_once_with( self.context, share_group_snapshots=-1, project_id=share_group['project_id'], user_id=share_group['user_id']) share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() share_group_api.LOG.exception.assert_called_once_with( mock.ANY, mock.ANY) def test_delete_share_group_snapshot_group_does_not_exist(self): 
snap = fake_share_group_snapshot( 'fake_groupsnap_id', share_group_id='fake_id') self.mock_object( db_driver, 'share_group_get', mock.Mock(side_effect=exception.ShareGroupNotFound( share_group_id='fake_id'))) self.assertRaises( exception.ShareGroupNotFound, self.api.delete_share_group_snapshot, self.context, snap) db_driver.share_group_get.assert_called_once_with( self.context, "fake_id") share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() def test_delete_share_group_snapshot_creating_status(self): snap = fake_share_group_snapshot( 'fake_groupsnap_id', share_group_id='fake_id', status=constants.STATUS_CREATING) self.mock_object(db_driver, 'share_group_get') self.assertRaises( exception.InvalidShareGroupSnapshot, self.api.delete_share_group_snapshot, self.context, snap) db_driver.share_group_get.assert_called_once_with( self.context, snap['share_group_id']) share_group_api.QUOTAS.reserve.assert_not_called() share_group_api.QUOTAS.commit.assert_not_called() share_group_api.QUOTAS.rollback.assert_not_called() @ddt.data({}, {"name": "fake_name"}) def test_update_share_group_snapshot_no_values(self, expected_values): snap = fake_share_group_snapshot( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) self.mock_object( db_driver, 'share_group_snapshot_update', mock.Mock(return_value=snap)) self.api.update_share_group_snapshot( self.context, snap, expected_values) db_driver.share_group_snapshot_update.assert_called_once_with( self.context, snap['id'], expected_values) def test_share_group_snapshot_get(self): expected = fake_share_group_snapshot( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING) self.mock_object( db_driver, 'share_group_snapshot_get', mock.Mock(return_value=expected)) actual = self.api.get_share_group_snapshot( self.context, expected['id']) self.assertEqual(expected, actual) def test_share_group_snapshot_get_all_no_groups(self): self.mock_object( db_driver, 'share_group_snapshot_get_all', mock.Mock(return_value=[])) actual = self.api.get_all_share_group_snapshots(self.context) self.assertEqual([], actual) def test_share_group_snapshot_get_all(self): expected = [fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING)] self.mock_object( db_driver, 'share_group_snapshot_get_all_by_project', mock.Mock(return_value=expected)) actual = self.api.get_all_share_group_snapshots( self.context, detailed=True) self.assertEqual(expected, actual) def test_share_group_snapshot_get_all_all_tenants_not_admin(self): cxt = context.RequestContext( user_id=None, project_id=None, is_admin=False) expected = [fake_share_group( 'fakeid', user_id=cxt.user_id, project_id=cxt.project_id, status=constants.STATUS_CREATING)] self.mock_object( db_driver, 'share_group_snapshot_get_all_by_project', mock.Mock(return_value=expected)) actual = self.api.get_all_share_group_snapshots( cxt, search_opts={'all_tenants': True}) self.assertEqual(expected, actual) def test_share_group_snapshot_get_all_all_tenants_as_admin(self): expected = [fake_share_group( 'fakeid', user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_CREATING)] self.mock_object( db_driver, 'share_group_snapshot_get_all', mock.Mock(return_value=expected)) actual = self.api.get_all_share_group_snapshots( self.context, 
search_opts={'all_tenants': True}) self.assertEqual(expected, actual) db_driver.share_group_snapshot_get_all.assert_called_once_with( self.context, detailed=True, filters={}, sort_dir=None, sort_key=None) def test_get_all_share_group_snapshot_members(self): self.mock_object( db_driver, 'share_group_snapshot_members_get_all', mock.Mock(return_value=[])) self.api.get_all_share_group_snapshot_members(self.context, 'fake_id') db_driver.share_group_snapshot_members_get_all.assert_called_once_with( self.context, 'fake_id') manila-10.0.0/manila/tests/api/0000775000175000017500000000000013656750362016244 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/__init__.py0000664000175000017500000000000013656750227020343 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/common.py0000664000175000017500000000225613656750227020113 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. def compare_links(actual, expected): """Compare xml atom links.""" return compare_tree_to_dict(actual, expected, ('rel', 'href', 'type')) def compare_media_types(actual, expected): """Compare xml media types.""" return compare_tree_to_dict(actual, expected, ('base', 'type')) def compare_tree_to_dict(actual, expected, keys): """Compare parts of lxml.etree objects to dicts.""" for elem, data in zip(actual, expected): for key in keys: if elem.get(key) != data.get(key): return False return True manila-10.0.0/manila/tests/api/contrib/0000775000175000017500000000000013656750362017704 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/contrib/__init__.py0000664000175000017500000000000013656750227022003 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/contrib/stubs.py0000664000175000017500000001443213656750227021422 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
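# The helpers below build fake share, snapshot and share-type objects for the
# API unit tests in this tree. stub_share() wraps its dict in a small
# Mapping-like class so tests can stub object attributes as well as items.
# A typical (illustrative, not exhaustive) way the tests wire these in is via
# test.TestCase.mock_object, e.g.:
#
#     self.mock_object(share_api.API, 'get', stubs.stub_share_get)
#     self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get)
#
# after which share_api.API.get() returns the fake built by stub_share().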
import collections import datetime from manila.common import constants from manila import exception as exc FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' FAKE_UUIDS = {} def stub_share(id, **kwargs): share = { 'id': id, 'share_proto': 'FAKEPROTO', 'export_location': 'fake_location', 'export_locations': ['fake_location', 'fake_location2'], 'user_id': 'fakeuser', 'project_id': 'fakeproject', 'size': 1, 'access_rules_status': 'active', 'status': 'fakestatus', 'display_name': 'displayname', 'display_description': 'displaydesc', 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'snapshot_id': '2', 'share_type_id': '1', 'is_public': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'replication_type': None, 'has_replicas': False, } share_instance = { 'host': 'fakehost', 'availability_zone': 'fakeaz', 'share_network_id': None, 'share_server_id': 'fake_share_server_id', 'access_rules_status': 'active', 'share_type_id': '1', } if 'instance' in kwargs: share_instance.update(kwargs.pop('instance')) else: # remove any share instance kwargs so they don't go into the share for inst_key in share_instance.keys(): if inst_key in kwargs: share_instance[inst_key] = kwargs.pop(inst_key) share.update(kwargs) # NOTE(ameade): We must wrap the dictionary in an class in order to stub # object attributes. class wrapper(collections.Mapping): def __getitem__(self, name): if hasattr(self, name): return getattr(self, name) return self.__dict__[name] def __iter__(self): return iter(self.__dict__) def __len__(self): return len(self.__dict__) def update(self, other): self.__dict__.update(other) fake_share = wrapper() fake_share.instance = {'id': "fake_instance_id"} fake_share.instance.update(share_instance) fake_share.update(share) # stub out is_busy based on task_state, this mimics what's in the Share # data model type if share.get('task_state') in constants.BUSY_TASK_STATES: fake_share.is_busy = True else: fake_share.is_busy = False fake_share.instances = [fake_share.instance] return fake_share def stub_snapshot(id, **kwargs): snapshot = { 'id': id, 'share_id': 'fakeshareid', 'share_proto': 'fakesnapproto', 'export_location': 'fakesnaplocation', 'user_id': 'fakesnapuser', 'project_id': 'fakesnapproject', 'host': 'fakesnaphost', 'share_size': 1, 'size': 1, 'status': 'fakesnapstatus', 'aggregate_status': 'fakesnapstatus', 'share_name': 'fakesharename', 'display_name': 'displaysnapname', 'display_description': 'displaysnapdesc', 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), } snapshot.update(kwargs) return snapshot def stub_share_type(id, **kwargs): share_type = { 'id': id, 'name': 'fakesharetype', 'description': 'fakesharetypedescription', 'is_public': True, } share_type.update(kwargs) return share_type def stub_share_type_get(context, share_type_id, **kwargs): return stub_share_type(share_type_id, **kwargs) def stub_share_get(self, context, share_id, **kwargs): return stub_share(share_id, **kwargs) def stub_share_get_notfound(self, context, share_id, **kwargs): raise exc.NotFound def stub_share_delete(self, context, *args, **param): pass def stub_share_update(self, context, *args, **param): share = stub_share('1') return share def stub_snapshot_update(self, context, *args, **param): share = stub_share('1') return share def stub_share_get_all_by_project(self, context, sort_key=None, sort_dir=None, search_opts={}): return [stub_share_get(self, context, '1')] def stub_get_all_shares(self, context): return 
[stub_share(100, project_id='fake'), stub_share(101, project_id='superfake'), stub_share(102, project_id='superduperfake')] def stub_snapshot_get(self, context, snapshot_id): return stub_snapshot(snapshot_id) def stub_snapshot_get_notfound(self, context, snapshot_id): raise exc.NotFound def stub_snapshot_create(self, context, share, display_name, display_description): return stub_snapshot(200, share_id=share['id'], display_name=display_name, display_description=display_description) def stub_snapshot_delete(self, context, *args, **param): pass def stub_snapshot_get_all_by_project(self, context, search_opts=None, sort_key=None, sort_dir=None): return [stub_snapshot_get(self, context, 2)] def stub_share_group_snapshot_member(id, **kwargs): member = { 'id': id, 'share_id': 'fakeshareid', 'share_instance_id': 'fakeshareinstanceid', 'share_proto': 'fakesnapproto', 'share_type_id': 'fake_share_type_id', 'export_location': 'fakesnaplocation', 'user_id': 'fakesnapuser', 'project_id': 'fakesnapproject', 'host': 'fakesnaphost', 'share_size': 1, 'size': 1, 'status': 'fakesnapstatus', 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), } member.update(kwargs) return member manila-10.0.0/manila/tests/api/v1/0000775000175000017500000000000013656750362016572 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/v1/__init__.py0000664000175000017500000000000013656750227020671 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/v1/test_share_servers.py0000664000175000017500000005143713656750227023070 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
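# These tests exercise share_servers.ShareServerController with db_api,
# policy.check_policy and, where needed, the share API mocked out.
# FakeShareServer below mimics the DB model just enough for the views
# (attribute access plus __getitem__), and fake_share_server_get_all()
# returns one ACTIVE and one ERROR server so the index tests can assert that
# GET filters (host, status, project_id, share_network,
# share_network_subnet_id, ...) keep only the matching entry. A rough sketch
# of the pattern used throughout (illustrative):
#
#     self.mock_object(db_api, 'share_server_get_all',
#                      mock.Mock(return_value=fake_share_server_get_all()))
#     result = self.controller.index(request)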
from unittest import mock import ddt from webob import exc from manila.api.openstack import api_version_request as api_version from manila.api.v1 import share_servers from manila.common import constants from manila import context from manila.db import api as db_api from manila import exception from manila import policy from manila import test from manila.tests.api import fakes fake_share_server_list = { 'share_servers': [ { 'status': constants.STATUS_ACTIVE, 'updated_at': None, 'host': 'fake_host', 'share_network_name': 'fake_sn_name', 'share_network_id': 'fake_sn_id', 'share_network_subnet_id': 'fake_sns_id', 'project_id': 'fake_project_id', 'id': 'fake_server_id', 'is_auto_deletable': False, 'identifier': 'fake_id' }, { 'status': constants.STATUS_ERROR, 'updated_at': None, 'host': 'fake_host_2', 'share_network_name': 'fake_sn_id_2', 'share_network_id': 'fake_sn_id_2', 'share_network_subnet_id': 'fake_sns_id_2', 'project_id': 'fake_project_id_2', 'id': 'fake_server_id_2', 'is_auto_deletable': True, 'identifier': 'fake_id_2' }, ] } fake_share_network_get_list = { 'share_networks': [ { 'name': 'fake_sn_name', 'id': 'fake_sn_id', 'project_id': 'fake_project_id', }, { 'name': None, 'id': 'fake_sn_id_2', 'project_id': 'fake_project_id_2', } ] } fake_share_server_get_result = { 'share_server': { 'status': constants.STATUS_ACTIVE, 'created_at': None, 'updated_at': None, 'host': 'fake_host', 'share_network_name': 'fake_sn_name', 'share_network_id': 'fake_sn_id', 'share_network_subnet_id': 'fake_sns_id', 'project_id': 'fake_project_id', 'id': 'fake_server_id', 'backend_details': { 'fake_key_1': 'fake_value_1', 'fake_key_2': 'fake_value_2', }, 'is_auto_deletable': False, 'identifier': 'fake_id' } } share_server_backend_details = { 'fake_key_1': 'fake_value_1', 'fake_key_2': 'fake_value_2', } fake_share_server_backend_details_get_result = { 'details': share_server_backend_details } CONTEXT = context.get_admin_context() class FakeShareServer(object): def __init__(self, *args, **kwargs): super(FakeShareServer, self).__init__() self.id = kwargs.get('id', 'fake_server_id') if 'created_at' in kwargs: self.created_at = kwargs.get('created_at', None) self.updated_at = kwargs.get('updated_at', None) self.host = kwargs.get('host', 'fake_host') self.share_network_subnet = kwargs.get('share_network_subnet', { 'id': 'fake_sns_id', 'share_network_id': 'fake_sn_id'}) self.share_network_subnet_id = kwargs.get( 'share_network_subnet_id', self.share_network_subnet['id']) self.status = kwargs.get('status', constants.STATUS_ACTIVE) self.project_id = 'fake_project_id' self.identifier = kwargs.get('identifier', 'fake_id') self.is_auto_deletable = kwargs.get('is_auto_deletable', False) self.backend_details = share_server_backend_details def __getitem__(self, item): return getattr(self, item) def fake_share_server_get_all(): fake_share_servers = [ FakeShareServer(), FakeShareServer(id='fake_server_id_2', host='fake_host_2', share_network_subnet={ 'id': 'fake_sns_id_2', 'share_network_id': 'fake_sn_id_2', }, identifier='fake_id_2', is_auto_deletable=True, status=constants.STATUS_ERROR) ] return fake_share_servers def fake_share_server_get(): return FakeShareServer(created_at=None) class FakeRequestAdmin(object): environ = {"manila.context": CONTEXT} GET = {} class FakeRequestWithHost(FakeRequestAdmin): GET = {'host': fake_share_server_list['share_servers'][0]['host']} class FakeRequestWithStatus(FakeRequestAdmin): GET = {'status': constants.STATUS_ERROR} class FakeRequestWithProjectId(FakeRequestAdmin): GET = 
{'project_id': fake_share_server_get_all()[0].project_id} class FakeRequestWithShareNetworkSubnetId(FakeRequestAdmin): GET = { 'share_network_subnet_id': fake_share_server_get_all()[0].share_network_subnet_id, } class FakeRequestWithFakeFilter(FakeRequestAdmin): GET = {'fake_key': 'fake_value'} @ddt.ddt class ShareServerAPITest(test.TestCase): def setUp(self): super(ShareServerAPITest, self).setUp() self.controller = share_servers.ShareServerController() self.resource_name = self.controller.resource_name self.mock_object(policy, 'check_policy', mock.Mock(return_value=True)) self.mock_object(db_api, 'share_server_get_all', mock.Mock(return_value=fake_share_server_get_all())) def _prepare_request(self, url, use_admin_context, version=api_version._MAX_API_VERSION): request = fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context, version=version) ctxt = request.environ['manila.context'] return request, ctxt def test_index_no_filters(self): request, ctxt = self._prepare_request(url='/v2/share-servers/', use_admin_context=True) self.mock_object(db_api, 'share_network_get', mock.Mock( side_effect=[fake_share_network_get_list['share_networks'][0], fake_share_network_get_list['share_networks'][1]])) result = self.controller.index(request) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'index') db_api.share_server_get_all.assert_called_once_with(ctxt) self.assertEqual(fake_share_server_list, result) def test_index_host_filter(self): request, ctxt = self._prepare_request( url='/index?host=%s' % fake_share_server_list['share_servers'][0]['host'], use_admin_context=True) self.mock_object(db_api, 'share_network_get', mock.Mock( side_effect=[fake_share_network_get_list['share_networks'][0], fake_share_network_get_list['share_networks'][1]])) result = self.controller.index(request) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'index') db_api.share_server_get_all.assert_called_once_with(ctxt) self.assertEqual([fake_share_server_list['share_servers'][0]], result['share_servers']) def test_index_status_filter(self): request, ctxt = self._prepare_request(url='/index?status=%s' % constants.STATUS_ERROR, use_admin_context=True) self.mock_object(db_api, 'share_network_get', mock.Mock( side_effect=[fake_share_network_get_list['share_networks'][0], fake_share_network_get_list['share_networks'][1]])) result = self.controller.index(request) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'index') db_api.share_server_get_all.assert_called_once_with(ctxt) self.assertEqual([fake_share_server_list['share_servers'][1]], result['share_servers']) def test_index_project_id_filter(self): request, ctxt = self._prepare_request( url='/index?project_id=%s' % fake_share_server_get_all()[0].project_id, use_admin_context=True) self.mock_object(db_api, 'share_network_get', mock.Mock( side_effect=[fake_share_network_get_list['share_networks'][0], fake_share_network_get_list['share_networks'][1]])) result = self.controller.index(request) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'index') db_api.share_server_get_all.assert_called_once_with(ctxt) self.assertEqual([fake_share_server_list['share_servers'][0]], result['share_servers']) def test_index_share_network_filter_by_name(self): request, ctxt = self._prepare_request( url='/index?host=%s' % fake_share_server_list['share_servers'][0]['host'], use_admin_context=True) self.mock_object(db_api, 'share_network_get', mock.Mock( 
side_effect=[fake_share_network_get_list['share_networks'][0], fake_share_network_get_list['share_networks'][1]])) result = self.controller.index(request) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'index') db_api.share_server_get_all.assert_called_once_with(ctxt) self.assertEqual([fake_share_server_list['share_servers'][0]], result['share_servers']) def test_index_share_network_filter_by_id(self): request, ctxt = self._prepare_request( url='/index?share_network=%s' % fake_share_network_get_list['share_networks'][0]['id'], use_admin_context=True) self.mock_object(db_api, 'share_network_get', mock.Mock( side_effect=[fake_share_network_get_list['share_networks'][0], fake_share_network_get_list['share_networks'][1]])) result = self.controller.index(request) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'index') db_api.share_server_get_all.assert_called_once_with(ctxt) self.assertEqual([fake_share_server_list['share_servers'][0]], result['share_servers']) def test_index_fake_filter(self): request, ctxt = self._prepare_request(url='/index?fake_key=fake_value', use_admin_context=True) self.mock_object(db_api, 'share_network_get', mock.Mock( side_effect=[fake_share_network_get_list['share_networks'][0], fake_share_network_get_list['share_networks'][1]])) result = self.controller.index(request) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'index') db_api.share_server_get_all.assert_called_once_with(ctxt) self.assertEqual(0, len(result['share_servers'])) def test_index_share_network_not_found(self): request, ctxt = self._prepare_request( url='/index?identifier=%s' % fake_share_server_get_all()[0].identifier, use_admin_context=True) self.mock_object( db_api, 'share_network_get', mock.Mock(side_effect=exception.ShareNetworkNotFound( share_network_id='fake'))) result = self.controller.index(request) db_api.share_server_get_all.assert_called_once_with(ctxt) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'index') exp_share_server = fake_share_server_list['share_servers'][0].copy() exp_share_server['project_id'] = '' exp_share_server['share_network_name'] = '' self.assertEqual([exp_share_server], result['share_servers']) def test_index_share_network_not_found_filter_project(self): request, ctxt = self._prepare_request( url='/index?project_id=%s' % fake_share_server_get_all()[0].project_id, use_admin_context=True) self.mock_object( db_api, 'share_network_get', mock.Mock(side_effect=exception.ShareNetworkNotFound( share_network_id='fake'))) result = self.controller.index(request) db_api.share_server_get_all.assert_called_once_with(ctxt) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'index') self.assertEqual(0, len(result['share_servers'])) def test_show(self): self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=fake_share_server_get())) request, ctxt = self._prepare_request('/show', use_admin_context=True) self.mock_object(db_api, 'share_network_get', mock.Mock( return_value=fake_share_network_get_list['share_networks'][0])) result = self.controller.show( request, fake_share_server_get_result['share_server']['id']) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'show') db_api.share_server_get.assert_called_once_with( ctxt, fake_share_server_get_result['share_server']['id']) self.assertEqual(fake_share_server_get_result['share_server'], result['share_server']) @ddt.data( {'share_server_side_effect': exception.ShareServerNotFound( 
share_server_id="foo"), 'share_net_side_effect': mock.Mock()}, {'share_server_side_effect': mock.Mock( return_value=fake_share_server_get()), 'share_net_side_effect': exception.ShareNetworkNotFound( share_network_id="foo")}) @ddt.unpack def test_show_server_not_found(self, share_server_side_effect, share_net_side_effect): self.mock_object(db_api, 'share_server_get', mock.Mock(side_effect=share_server_side_effect)) request, ctxt = self._prepare_request('/show', use_admin_context=True) self.mock_object(db_api, 'share_network_get', mock.Mock( side_effect=share_net_side_effect)) self.assertRaises( exc.HTTPNotFound, self.controller.show, request, fake_share_server_get_result['share_server']['id']) policy.check_policy.assert_called_once_with( ctxt, self.resource_name, 'show') db_api.share_server_get.assert_called_once_with( ctxt, fake_share_server_get_result['share_server']['id']) if isinstance(share_net_side_effect, exception.ShareNetworkNotFound): exp_share_net_id = (fake_share_server_get() .share_network_subnet['share_network_id']) db_api.share_network_get.assert_called_once_with( ctxt, exp_share_net_id) def test_details(self): self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=fake_share_server_get())) result = self.controller.details( FakeRequestAdmin, fake_share_server_get_result['share_server']['id']) policy.check_policy.assert_called_once_with( CONTEXT, self.resource_name, 'details') db_api.share_server_get.assert_called_once_with( CONTEXT, fake_share_server_get_result['share_server']['id']) self.assertEqual(fake_share_server_backend_details_get_result, result) def test_details_share_server_not_found(self): share_server_id = 'fake' self.mock_object( db_api, 'share_server_get', mock.Mock(side_effect=exception.ShareServerNotFound( share_server_id=share_server_id))) self.assertRaises(exc.HTTPNotFound, self.controller.details, FakeRequestAdmin, share_server_id) policy.check_policy.assert_called_once_with( CONTEXT, self.resource_name, 'details') db_api.share_server_get.assert_called_once_with( CONTEXT, share_server_id) def test_delete_active_server(self): share_server = FakeShareServer(status=constants.STATUS_ACTIVE) self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=share_server)) self.mock_object(self.controller.share_api, 'delete_share_server') self.controller.delete( FakeRequestAdmin, fake_share_server_get_result['share_server']['id']) policy.check_policy.assert_called_once_with( CONTEXT, self.resource_name, 'delete') db_api.share_server_get.assert_called_once_with( CONTEXT, fake_share_server_get_result['share_server']['id']) self.controller.share_api.delete_share_server.assert_called_once_with( CONTEXT, share_server) def test_delete_error_server(self): share_server = FakeShareServer(status=constants.STATUS_ERROR) self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=share_server)) self.mock_object(self.controller.share_api, 'delete_share_server') self.controller.delete( FakeRequestAdmin, fake_share_server_get_result['share_server']['id']) policy.check_policy.assert_called_once_with( CONTEXT, self.resource_name, 'delete') db_api.share_server_get.assert_called_once_with( CONTEXT, fake_share_server_get_result['share_server']['id']) self.controller.share_api.delete_share_server.assert_called_once_with( CONTEXT, share_server) def test_delete_used_server(self): share_server_id = fake_share_server_get_result['share_server']['id'] def raise_not_share_server_in_use(*args, **kwargs): raise exception.ShareServerInUse(share_server_id=share_server_id) 
share_server = fake_share_server_get() self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=share_server)) self.mock_object(self.controller.share_api, 'delete_share_server', mock.Mock(side_effect=raise_not_share_server_in_use)) self.assertRaises(exc.HTTPConflict, self.controller.delete, FakeRequestAdmin, share_server_id) db_api.share_server_get.assert_called_once_with(CONTEXT, share_server_id) self.controller.share_api.delete_share_server.assert_called_once_with( CONTEXT, share_server) def test_delete_not_found(self): share_server_id = fake_share_server_get_result['share_server']['id'] def raise_not_found(*args, **kwargs): raise exception.ShareServerNotFound( share_server_id=share_server_id) self.mock_object(db_api, 'share_server_get', mock.Mock(side_effect=raise_not_found)) self.assertRaises(exc.HTTPNotFound, self.controller.delete, FakeRequestAdmin, share_server_id) db_api.share_server_get.assert_called_once_with( CONTEXT, share_server_id) policy.check_policy.assert_called_once_with( CONTEXT, self.resource_name, 'delete') def test_delete_creating_server(self): share_server = FakeShareServer(status=constants.STATUS_CREATING) self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=share_server)) self.assertRaises(exc.HTTPForbidden, self.controller.delete, FakeRequestAdmin, share_server['id']) policy.check_policy.assert_called_once_with( CONTEXT, self.resource_name, 'delete') def test_delete_deleting_server(self): share_server = FakeShareServer(status=constants.STATUS_DELETING) self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=share_server)) self.assertRaises(exc.HTTPForbidden, self.controller.delete, FakeRequestAdmin, share_server['id']) policy.check_policy.assert_called_once_with( CONTEXT, self.resource_name, 'delete') manila-10.0.0/manila/tests/api/v1/test_share_unmanage.py0000664000175000017500000002175213656750227023167 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
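# Unit tests for the share unmanage API. Broadly, they check that unmanaging
# an available share returns 202, and that the controller refuses with
# HTTPForbidden when the share still has snapshots, is backed by a share
# server (on the default unmanage path), is in a transitional state, or when
# policy/validation rejects the call; HTTPConflict is expected when the share
# has replicas and HTTPNotFound when it does not exist. The contrib stubs
# module seen earlier supplies the fake share/snapshot getters, e.g.
# (illustrative):
#
#     self.mock_object(share_api.API, 'get', stubs.stub_share_get)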
from unittest import mock import ddt import webob from manila.api.v1 import share_unmanage from manila.common import constants from manila import exception from manila import policy from manila.share import api as share_api from manila import test from manila.tests.api.contrib import stubs from manila.tests.api import fakes @ddt.ddt class ShareUnmanageTest(test.TestCase): """Share Unmanage Test.""" def setUp(self): super(ShareUnmanageTest, self).setUp() self.controller = share_unmanage.ShareUnmanageController() self.resource_name = self.controller.resource_name self.mock_object(share_api.API, 'get_all', stubs.stub_get_all_shares) self.mock_object(share_api.API, 'get', stubs.stub_share_get) self.mock_object(share_api.API, 'update', stubs.stub_share_update) self.mock_object(share_api.API, 'delete', stubs.stub_share_delete) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) self.share_id = 'fake' self.request = fakes.HTTPRequest.blank( '/share/%s/unmanage' % self.share_id, use_admin_context=True ) self.context = self.request.environ['manila.context'] self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) def test_unmanage_share(self): share = dict(status=constants.STATUS_AVAILABLE, id='foo_id', instance={}) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'unmanage', mock.Mock()) self.mock_object( self.controller.share_api.db, 'share_snapshot_get_all_for_share', mock.Mock(return_value=[])) actual_result = self.controller.unmanage(self.request, share['id']) self.assertEqual(202, actual_result.status_int) (self.controller.share_api.db.share_snapshot_get_all_for_share. assert_called_once_with( self.request.environ['manila.context'], share['id'])) self.controller.share_api.get.assert_called_once_with( self.request.environ['manila.context'], share['id']) share_api.API.unmanage.assert_called_once_with( self.request.environ['manila.context'], share) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'unmanage') def test_unmanage_share_that_has_snapshots(self): share = dict(status=constants.STATUS_AVAILABLE, id='foo_id', instance={}) snapshots = ['foo', 'bar'] self.mock_object(self.controller.share_api, 'unmanage') self.mock_object( self.controller.share_api.db, 'share_snapshot_get_all_for_share', mock.Mock(return_value=snapshots)) self.mock_object( self.controller.share_api, 'get', mock.Mock(return_value=share)) self.assertRaises( webob.exc.HTTPForbidden, self.controller.unmanage, self.request, share['id']) self.assertFalse(self.controller.share_api.unmanage.called) (self.controller.share_api.db.share_snapshot_get_all_for_share. 
assert_called_once_with( self.request.environ['manila.context'], share['id'])) self.controller.share_api.get.assert_called_once_with( self.request.environ['manila.context'], share['id']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'unmanage') def test_unmanage_share_that_has_replicas(self): share = dict(status=constants.STATUS_AVAILABLE, id='foo_id', instance={}, has_replicas=True) mock_api_unmanage = self.mock_object(self.controller.share_api, 'unmanage') mock_db_snapshots_get = self.mock_object( self.controller.share_api.db, 'share_snapshot_get_all_for_share') self.mock_object( self.controller.share_api, 'get', mock.Mock(return_value=share)) self.assertRaises( webob.exc.HTTPConflict, self.controller.unmanage, self.request, share['id']) self.assertFalse(mock_api_unmanage.called) self.assertFalse(mock_db_snapshots_get.called) self.controller.share_api.get.assert_called_once_with( self.request.environ['manila.context'], share['id']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'unmanage') def test_unmanage_share_based_on_share_server(self): share = dict(instance=dict(share_server_id='foo_id'), id='bar_id') self.mock_object( self.controller.share_api, 'get', mock.Mock(return_value=share)) self.assertRaises( webob.exc.HTTPForbidden, self.controller.unmanage, self.request, share['id']) self.controller.share_api.get.assert_called_once_with( self.request.environ['manila.context'], share['id']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'unmanage') @ddt.data(*constants.TRANSITIONAL_STATUSES) def test_unmanage_share_with_transitional_state(self, share_status): share = dict(status=share_status, id='foo_id', instance={}) self.mock_object( self.controller.share_api, 'get', mock.Mock(return_value=share)) self.assertRaises( webob.exc.HTTPForbidden, self.controller.unmanage, self.request, share['id']) self.controller.share_api.get.assert_called_once_with( self.request.environ['manila.context'], share['id']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'unmanage') def test_unmanage_share_not_found(self): self.mock_object(share_api.API, 'get', mock.Mock( side_effect=exception.NotFound)) self.mock_object(share_api.API, 'unmanage', mock.Mock()) self.assertRaises(webob.exc.HTTPNotFound, self.controller.unmanage, self.request, self.share_id) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'unmanage') @ddt.data(exception.InvalidShare(reason="fake"), exception.PolicyNotAuthorized(action="fake"),) def test_unmanage_share_invalid(self, side_effect): share = dict(status=constants.STATUS_AVAILABLE, id='foo_id', instance={}) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'unmanage', mock.Mock( side_effect=side_effect)) self.assertRaises(webob.exc.HTTPForbidden, self.controller.unmanage, self.request, self.share_id) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'unmanage') def test_unmanage_allow_dhss_true_with_share_server(self): share = { 'status': constants.STATUS_AVAILABLE, 'id': 'foo_id', 'instance': '', 'share_server_id': 'fake' } self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'unmanage', mock.Mock()) self.mock_object( self.controller.share_api.db, 'share_snapshot_get_all_for_share', mock.Mock(return_value=[])) actual_result = self.controller._unmanage(self.request, share['id'], 
allow_dhss_true=True) self.assertEqual(202, actual_result.status_int) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'unmanage') def test_wrong_permissions(self): share_id = 'fake' req = fakes.HTTPRequest.blank('/share/%s/unmanage' % share_id, use_admin_context=False) req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPForbidden, self.controller.unmanage, req, share_id) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'unmanage') manila-10.0.0/manila/tests/api/v1/stubs.py0000664000175000017500000000763513656750227020317 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from manila.common import constants from manila import exception as exc FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' FAKE_UUIDS = {} def stub_volume(id, **kwargs): volume = { 'id': id, 'user_id': 'fakeuser', 'project_id': 'fakeproject', 'host': 'fakehost', 'size': 1, 'availability_zone': 'fakeaz', 'instance_uuid': 'fakeuuid', 'mountpoint': '/', 'status': 'fakestatus', 'attach_status': 'attached', 'bootable': 'false', 'name': 'vol name', 'display_name': 'displayname', 'display_description': 'displaydesc', 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'snapshot_id': None, 'source_volid': None, 'share_type_id': '3e196c20-3c06-11e2-81c1-0800200c9a66', 'volume_type_id': '3e196c20-3c06-11e2-81c1-0800200c9a66', 'volume_metadata': [], 'share_type': {'name': 'share_type_name'}, 'volume_type': {'name': 'share_type_name'}} volume.update(kwargs) return volume def stub_volume_create(self, context, size, name, description, snapshot, **param): vol = stub_volume('1') vol['size'] = size vol['display_name'] = name vol['display_description'] = description vol['source_volid'] = None try: vol['snapshot_id'] = snapshot['id'] except (KeyError, TypeError): vol['snapshot_id'] = None vol['availability_zone'] = param.get('availability_zone', 'fakeaz') return vol def stub_volume_create_from_image(self, context, size, name, description, snapshot, volume_type, metadata, availability_zone): vol = stub_volume('1') vol['status'] = 'creating' vol['size'] = size vol['display_name'] = name vol['display_description'] = description vol['availability_zone'] = 'manila' return vol def stub_volume_update(self, context, *args, **param): pass def stub_volume_delete(self, context, *args, **param): pass def stub_volume_get(self, context, volume_id): return stub_volume(volume_id) def stub_volume_get_notfound(self, context, volume_id): raise exc.NotFound def stub_volume_get_all(context, search_opts=None): return [stub_volume(100, project_id='fake'), stub_volume(101, project_id='superfake'), stub_volume(102, project_id='superduperfake')] def stub_volume_get_all_by_project(self, context, search_opts=None): return [stub_volume_get(self, context, '1')] def stub_snapshot(id, **kwargs): snapshot = {'id': id, 'volume_id': 12, 'status': constants.STATUS_AVAILABLE, 'volume_size': 100, 'created_at': 
None, 'display_name': 'Default name', 'display_description': 'Default description', 'project_id': 'fake'} snapshot.update(kwargs) return snapshot def stub_snapshot_get_all(self): return [stub_snapshot(100, project_id='fake'), stub_snapshot(101, project_id='superfake'), stub_snapshot(102, project_id='superduperfake')] def stub_snapshot_get_all_by_project(self, context): return [stub_snapshot(1)] def stub_snapshot_update(self, context, *args, **param): pass manila-10.0.0/manila/tests/api/v1/test_scheduler_stats.py0000664000175000017500000002772613656750227023415 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from oslo_utils import uuidutils from webob import exc from manila.api.openstack import api_version_request as api_version from manila.api.v1 import scheduler_stats from manila import context from manila import policy from manila.scheduler import rpcapi from manila.share import share_types from manila import test from manila.tests.api import fakes FAKE_POOLS = [ { 'name': 'host1@backend1#pool1', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool1', 'capabilities': { 'updated': None, 'total_capacity': 1024, 'free_capacity': 100, 'share_backend_name': 'pool1', 'reserved_percentage': 0, 'driver_version': '1.0.0', 'storage_protocol': 'iSCSI', 'qos': 'False', }, }, { 'name': 'host1@backend1#pool2', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool2', 'capabilities': { 'updated': None, 'total_capacity': 512, 'free_capacity': 200, 'share_backend_name': 'pool2', 'reserved_percentage': 0, 'driver_version': '1.0.1', 'storage_protocol': 'iSER', 'qos': 'True', }, }, ] @ddt.ddt class SchedulerStatsControllerTestCase(test.TestCase): def setUp(self): super(SchedulerStatsControllerTestCase, self).setUp() self.flags(host='fake') self.controller = scheduler_stats.SchedulerStatsController() self.resource_name = self.controller.resource_name self.ctxt = context.RequestContext('admin', 'fake', True) self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) def test_pools_index(self): mock_get_pools = self.mock_object(rpcapi.SchedulerAPI, 'get_pools', mock.Mock(return_value=FAKE_POOLS)) req = fakes.HTTPRequest.blank('/v1/fake_project/scheduler_stats/pools') req.environ['manila.context'] = self.ctxt result = self.controller.pools_index(req) expected = { 'pools': [ { 'name': 'host1@backend1#pool1', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool1', }, { 'name': 'host1@backend1#pool2', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool2', } ] } self.assertDictMatch(result, expected) mock_get_pools.assert_called_once_with(self.ctxt, filters={}, cached=True) self.mock_policy_check.assert_called_once_with( self.ctxt, self.resource_name, 'index') @ddt.data(('index', False), ('detail', True)) @ddt.unpack def test_pools_with_share_type_disabled(self, action, detail): mock_get_pools = self.mock_object(rpcapi.SchedulerAPI, 'get_pools', 
mock.Mock(return_value=FAKE_POOLS)) url = '/v1/fake_project/scheduler-stats/pools/%s' % action url += '?backend=back1&host=host1&pool=pool1' req = fakes.HTTPRequest.blank(url) req.environ['manila.context'] = self.ctxt expected_filters = { 'host': 'host1', 'pool': 'pool1', 'backend': 'back1', } if detail: expected_result = {"pools": FAKE_POOLS} else: expected_result = { 'pools': [ { 'name': 'host1@backend1#pool1', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool1', }, { 'name': 'host1@backend1#pool2', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool2', } ] } result = self.controller._pools(req, action, False) self.assertDictMatch(result, expected_result) mock_get_pools.assert_called_once_with(self.ctxt, filters=expected_filters, cached=True) @ddt.data(('index', False, True), ('index', False, False), ('detail', True, True), ('detail', True, False)) @ddt.unpack def test_pools_with_share_type_enable(self, action, detail, uuid): mock_get_pools = self.mock_object(rpcapi.SchedulerAPI, 'get_pools', mock.Mock(return_value=FAKE_POOLS)) if uuid: share_type = uuidutils.generate_uuid() else: share_type = 'test_type' self.mock_object( share_types, 'get_share_type_by_name_or_id', mock.Mock(return_value={'extra_specs': {'snapshot_support': True}})) url = '/v1/fake_project/scheduler-stats/pools/%s' % action url += ('?backend=back1&host=host1&pool=pool1&share_type=%s' % share_type) req = fakes.HTTPRequest.blank(url) req.environ['manila.context'] = self.ctxt expected_filters = { 'host': 'host1', 'pool': 'pool1', 'backend': 'back1', 'capabilities': { 'snapshot_support': True } } if detail: expected_result = {"pools": FAKE_POOLS} else: expected_result = { 'pools': [ { 'name': 'host1@backend1#pool1', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool1', }, { 'name': 'host1@backend1#pool2', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool2', } ] } result = self.controller._pools(req, action, True) self.assertDictMatch(result, expected_result) mock_get_pools.assert_called_once_with(self.ctxt, filters=expected_filters, cached=True) @ddt.data('index', 'detail') def test_pools_with_share_type_not_found(self, action): url = '/v1/fake_project/scheduler-stats/pools/%s' % action url += '?backend=.%2A&host=host1&pool=pool%2A&share_type=fake_name_1' req = fakes.HTTPRequest.blank(url) self.assertRaises(exc.HTTPBadRequest, self.controller._pools, req, action, True) @ddt.data("1.0", "2.22", "2.23") def test_pools_index_with_filters(self, microversion): mock_get_pools = self.mock_object(rpcapi.SchedulerAPI, 'get_pools', mock.Mock(return_value=FAKE_POOLS)) self.mock_object( share_types, 'get_share_type_by_name', mock.Mock(return_value={'extra_specs': {'snapshot_support': True}})) url = '/v1/fake_project/scheduler-stats/pools/detail' url += '?backend=.%2A&host=host1&pool=pool%2A&share_type=test_type' req = fakes.HTTPRequest.blank(url, version=microversion) req.environ['manila.context'] = self.ctxt result = self.controller.pools_index(req) expected = { 'pools': [ { 'name': 'host1@backend1#pool1', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool1', }, { 'name': 'host1@backend1#pool2', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool2', } ] } expected_filters = { 'host': 'host1', 'pool': 'pool*', 'backend': '.*', 'share_type': 'test_type', } if (api_version.APIVersionRequest(microversion) >= api_version.APIVersionRequest('2.23')): expected_filters.update( {'capabilities': {'snapshot_support': True}}) expected_filters.pop('share_type', None) self.assertDictMatch(result, expected) 
mock_get_pools.assert_called_once_with(self.ctxt, filters=expected_filters, cached=True) self.mock_policy_check.assert_called_once_with( self.ctxt, self.resource_name, 'index') def test_get_pools_detail(self): mock_get_pools = self.mock_object(rpcapi.SchedulerAPI, 'get_pools', mock.Mock(return_value=FAKE_POOLS)) req = fakes.HTTPRequest.blank( '/v1/fake_project/scheduler_stats/pools/detail') req.environ['manila.context'] = self.ctxt result = self.controller.pools_detail(req) expected = { 'pools': [ { 'name': 'host1@backend1#pool1', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool1', 'capabilities': { 'updated': None, 'total_capacity': 1024, 'free_capacity': 100, 'share_backend_name': 'pool1', 'reserved_percentage': 0, 'driver_version': '1.0.0', 'storage_protocol': 'iSCSI', 'qos': 'False', }, }, { 'name': 'host1@backend1#pool2', 'host': 'host1', 'backend': 'backend1', 'pool': 'pool2', 'capabilities': { 'updated': None, 'total_capacity': 512, 'free_capacity': 200, 'share_backend_name': 'pool2', 'reserved_percentage': 0, 'driver_version': '1.0.1', 'storage_protocol': 'iSER', 'qos': 'True', }, }, ], } self.assertDictMatch(expected, result) mock_get_pools.assert_called_once_with(self.ctxt, filters={}, cached=True) self.mock_policy_check.assert_called_once_with( self.ctxt, self.resource_name, 'detail') class SchedulerStatsTestCase(test.TestCase): def test_create_resource(self): result = scheduler_stats.create_resource() self.assertIsInstance(result.controller, scheduler_stats.SchedulerStatsController) manila-10.0.0/manila/tests/api/v1/test_share_types_extra_specs.py0000664000175000017500000003622213656750227025136 0ustar zuulzuul00000000000000# Copyright (c) 2011 Zadara Storage Inc. # Copyright (c) 2011 OpenStack Foundation # Copyright 2011 University of Southern California # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
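# Tests for the share-type extra specs controller. They cover index/show of
# existing specs, create with both valid and invalid bodies (empty or
# over-length keys/values and a non-boolean snapshot_support are rejected
# with HTTPBadRequest), and delete, where the required
# driver_handles_share_servers spec can never be removed and snapshot_support
# only becomes deletable at API microversion 2.24. Successful create/delete
# calls are also expected to emit exactly one notification via fake_notifier;
# roughly (illustrative):
#
#     self.assertEqual(0, len(fake_notifier.NOTIFICATIONS))
#     self.controller.delete(req, 1, key)
#     self.assertEqual(1, len(fake_notifier.NOTIFICATIONS))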
from unittest import mock import ddt from oslo_utils import strutils import webob from manila.api.v1 import share_types_extra_specs from manila.common import constants from manila import exception from manila import policy from manila import test from manila.tests.api import fakes from manila.tests import fake_notifier import manila.wsgi DRIVER_HANDLES_SHARE_SERVERS = ( constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS) SNAPSHOT_SUPPORT = constants.ExtraSpecs.SNAPSHOT_SUPPORT def return_create_share_type_extra_specs(context, share_type_id, extra_specs): return stub_share_type_extra_specs() def return_share_type_extra_specs(context, share_type_id): return stub_share_type_extra_specs() def return_empty_share_type_extra_specs(context, share_type_id): return {} def delete_share_type_extra_specs(context, share_type_id, key): pass def delete_share_type_extra_specs_not_found(context, share_type_id, key): raise exception.ShareTypeExtraSpecsNotFound("Not Found") def stub_share_type_extra_specs(): specs = {"key1": "value1", "key2": "value2", "key3": "value3", "key4": "value4", "key5": "value5"} return specs def share_type_get(context, id, inactive=False, expected_fields=None): pass def get_large_string(): return "s" * 256 def get_extra_specs_dict(extra_specs, include_required=True): if not extra_specs: extra_specs = {} if include_required: extra_specs[DRIVER_HANDLES_SHARE_SERVERS] = False return {'extra_specs': extra_specs} @ddt.ddt class ShareTypesExtraSpecsTest(test.TestCase): def setUp(self): super(ShareTypesExtraSpecsTest, self).setUp() self.flags(host='fake') self.mock_object(manila.db, 'share_type_get', share_type_get) self.api_path = '/v2/fake/os-share-types/1/extra_specs' self.controller = ( share_types_extra_specs.ShareTypeExtraSpecsController()) self.resource_name = self.controller.resource_name self.mock_policy_check = self.mock_object(policy, 'check_policy') """to reset notifier drivers left over from other api/contrib tests""" self.addCleanup(fake_notifier.reset) def test_index(self): self.mock_object(manila.db, 'share_type_extra_specs_get', return_share_type_extra_specs) req = fakes.HTTPRequest.blank(self.api_path) req_context = req.environ['manila.context'] res_dict = self.controller.index(req, 1) self.assertEqual('value1', res_dict['extra_specs']['key1']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'index') def test_index_no_data(self): self.mock_object(manila.db, 'share_type_extra_specs_get', return_empty_share_type_extra_specs) req = fakes.HTTPRequest.blank(self.api_path) req_context = req.environ['manila.context'] res_dict = self.controller.index(req, 1) self.assertEqual(0, len(res_dict['extra_specs'])) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'index') def test_show(self): self.mock_object(manila.db, 'share_type_extra_specs_get', return_share_type_extra_specs) req = fakes.HTTPRequest.blank(self.api_path + '/key5') req_context = req.environ['manila.context'] res_dict = self.controller.show(req, 1, 'key5') self.assertEqual('value5', res_dict['key5']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'show') def test_show_spec_not_found(self): self.mock_object(manila.db, 'share_type_extra_specs_get', return_empty_share_type_extra_specs) req = fakes.HTTPRequest.blank(self.api_path + '/key6') req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, 1, 'key6') self.mock_policy_check.assert_called_once_with( req_context, 
self.resource_name, 'show') @ddt.data( ('1.0', 'key5'), ('2.23', 'key5'), ('2.24', 'key5'), ('2.24', SNAPSHOT_SUPPORT), ) @ddt.unpack def test_delete(self, version, key): self.mock_object(manila.db, 'share_type_extra_specs_delete', delete_share_type_extra_specs) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) req = fakes.HTTPRequest.blank(self.api_path + '/' + key, version=version) req_context = req.environ['manila.context'] self.controller.delete(req, 1, key) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'delete') def test_delete_not_found(self): self.mock_object(manila.db, 'share_type_extra_specs_delete', delete_share_type_extra_specs_not_found) req = fakes.HTTPRequest.blank(self.api_path + '/key6') req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, 1, 'key6') self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'delete') @ddt.data( ('1.0', DRIVER_HANDLES_SHARE_SERVERS), ('1.0', SNAPSHOT_SUPPORT), ('2.23', DRIVER_HANDLES_SHARE_SERVERS), ('2.23', SNAPSHOT_SUPPORT), ('2.24', DRIVER_HANDLES_SHARE_SERVERS), ) @ddt.unpack def test_delete_forbidden(self, version, key): req = fakes.HTTPRequest.blank( self.api_path + '/' + key, version=version) req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete, req, 1, key) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'delete') @ddt.data( get_extra_specs_dict({}), {'foo': 'bar'}, {DRIVER_HANDLES_SHARE_SERVERS + 'foo': True}, {'foo' + DRIVER_HANDLES_SHARE_SERVERS: False}, *[{DRIVER_HANDLES_SHARE_SERVERS: v} for v in strutils.TRUE_STRINGS + strutils.FALSE_STRINGS] ) def test_create(self, data): body = {'extra_specs': data} self.mock_object( manila.db, 'share_type_extra_specs_update_or_create', mock.Mock(return_value=return_create_share_type_extra_specs)) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) req = fakes.HTTPRequest.blank(self.api_path) req_context = req.environ['manila.context'] res_dict = self.controller.create(req, 1, body) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) for k, v in data.items(): self.assertIn(k, res_dict['extra_specs']) self.assertEqual(v, res_dict['extra_specs'][k]) (manila.db.share_type_extra_specs_update_or_create. 
assert_called_once_with( req.environ['manila.context'], 1, body['extra_specs'])) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') @ddt.data( {"": "value"}, {"k" * 256: "value"}, {"key": ""}, {"key": "v" * 256}, {constants.ExtraSpecs.SNAPSHOT_SUPPORT: "non_boolean"}, ) def test_create_with_invalid_extra_specs(self, extra_specs): self.mock_object( manila.db, 'share_type_extra_specs_update_or_create', mock.Mock(return_value=return_create_share_type_extra_specs)) body = {"extra_specs": extra_specs} self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) req = fakes.HTTPRequest.blank(self.api_path) req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, 1, body) self.assertFalse( manila.db.share_type_extra_specs_update_or_create.called) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') def test_create_key_allowed_chars(self): mock_return_value = {"key1": "value1", "key2": "value2", "key3": "value3", "key4": "value4", "key5": "value5"} self.mock_object( manila.db, 'share_type_extra_specs_update_or_create', mock.Mock(return_value=mock_return_value)) body = get_extra_specs_dict({"other_alphanum.-_:": "value1"}) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) req = fakes.HTTPRequest.blank(self.api_path) req_context = req.environ['manila.context'] res_dict = self.controller.create(req, 1, body) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(mock_return_value['key1'], res_dict['extra_specs']['other_alphanum.-_:']) (manila.db.share_type_extra_specs_update_or_create. assert_called_once_with( req.environ['manila.context'], 1, body['extra_specs'])) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') def test_create_too_many_keys_allowed_chars(self): mock_return_value = {"key1": "value1", "key2": "value2", "key3": "value3", "key4": "value4", "key5": "value5"} self.mock_object( manila.db, 'share_type_extra_specs_update_or_create', mock.Mock(return_value=mock_return_value)) body = get_extra_specs_dict({ "other_alphanum.-_:": "value1", "other2_alphanum.-_:": "value2", "other3_alphanum.-_:": "value3" }) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) req = fakes.HTTPRequest.blank(self.api_path) req_context = req.environ['manila.context'] res_dict = self.controller.create(req, 1, body) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(mock_return_value['key1'], res_dict['extra_specs']['other_alphanum.-_:']) self.assertEqual(mock_return_value['key2'], res_dict['extra_specs']['other2_alphanum.-_:']) self.assertEqual(mock_return_value['key3'], res_dict['extra_specs']['other3_alphanum.-_:']) (manila.db.share_type_extra_specs_update_or_create. 
assert_called_once_with(req_context, 1, body['extra_specs'])) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') def test_update_item(self): self.mock_object( manila.db, 'share_type_extra_specs_update_or_create', mock.Mock(return_value=return_create_share_type_extra_specs)) body = {DRIVER_HANDLES_SHARE_SERVERS: True} self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) req = fakes.HTTPRequest.blank( self.api_path + '/' + DRIVER_HANDLES_SHARE_SERVERS) req_context = req.environ['manila.context'] res_dict = self.controller.update( req, 1, DRIVER_HANDLES_SHARE_SERVERS, body) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertTrue(res_dict[DRIVER_HANDLES_SHARE_SERVERS]) (manila.db.share_type_extra_specs_update_or_create. assert_called_once_with(req_context, 1, body)) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'update') def test_update_item_too_many_keys(self): self.mock_object(manila.db, 'share_type_extra_specs_update_or_create') body = {"key1": "value1", "key2": "value2"} req = fakes.HTTPRequest.blank(self.api_path + '/key1') req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 1, 'key1', body) self.assertFalse( manila.db.share_type_extra_specs_update_or_create.called) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'update') def test_update_item_body_uri_mismatch(self): self.mock_object(manila.db, 'share_type_extra_specs_update_or_create') body = {"key1": "value1"} req = fakes.HTTPRequest.blank(self.api_path + '/bad') req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 1, 'bad', body) self.assertFalse( manila.db.share_type_extra_specs_update_or_create.called) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'update') @ddt.data(None, {}, {"extra_specs": {DRIVER_HANDLES_SHARE_SERVERS: ""}}) def test_update_invalid_body(self, body): req = fakes.HTTPRequest.blank('/v2/fake/types/1/extra_specs') req_context = req.environ['manila.context'] req.method = 'POST' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, '1', body) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'update') @ddt.data( None, {}, {'foo': {'a': 'b'}}, {'extra_specs': 'string'}, {"extra_specs": {"ke/y1": "value1"}}, {"key1": "value1", "ke/y2": "value2", "key3": "value3"}, {"extra_specs": {DRIVER_HANDLES_SHARE_SERVERS: ""}}, {"extra_specs": {DRIVER_HANDLES_SHARE_SERVERS: "111"}}, {"extra_specs": {"": "value"}}, {"extra_specs": {"t": get_large_string()}}, {"extra_specs": {get_large_string(): get_large_string()}}, {"extra_specs": {get_large_string(): "v"}}, {"extra_specs": {"k": ""}}) def test_create_invalid_body(self, body): req = fakes.HTTPRequest.blank('/v2/fake/types/1/extra_specs') req_context = req.environ['manila.context'] req.method = 'POST' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, '1', body) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') manila-10.0.0/manila/tests/api/v1/test_limits.py0000664000175000017500000007261013656750227021512 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests dealing with HTTP rate-limiting. """ import ddt from oslo_serialization import jsonutils import six from six import moves from six.moves import http_client import webob from manila.api.openstack import api_version_request as api_version from manila.api.v1 import limits from manila.api import views import manila.context from manila import test from manila.tests.api import fakes TEST_LIMITS = [ limits.Limit("GET", "/delayed", "^/delayed", 1, limits.PER_MINUTE), limits.Limit("POST", "*", ".*", 7, limits.PER_MINUTE), limits.Limit("POST", "/shares", "^/shares", 3, limits.PER_MINUTE), limits.Limit("PUT", "*", "", 10, limits.PER_MINUTE), limits.Limit("PUT", "/shares", "^/shares", 5, limits.PER_MINUTE), ] SHARE_REPLICAS_LIMIT_MICROVERSION = "2.53" class BaseLimitTestSuite(test.TestCase): """Base test suite which provides relevant stubs and time abstraction.""" def setUp(self): super(BaseLimitTestSuite, self).setUp() self.time = 0.0 self.mock_object(limits.Limit, "_get_time", self._get_time) self.absolute_limits = {} def stub_get_project_quotas(context, project_id, usages=True): quotas = {} for mapping_key in ('limit', 'in_use'): for k, v in self.absolute_limits.get(mapping_key, {}).items(): if k not in quotas: quotas[k] = {} quotas[k].update({mapping_key: v}) return quotas self.mock_object(manila.quota.QUOTAS, "get_project_quotas", stub_get_project_quotas) def _get_time(self): """Return the "time" according to this test suite.""" return self.time @ddt.ddt class LimitsControllerTest(BaseLimitTestSuite): """Tests for `limits.LimitsController` class.""" def setUp(self): """Run before each test.""" super(LimitsControllerTest, self).setUp() self.controller = limits.LimitsController() def _get_index_request(self, accept_header="application/json", microversion=api_version.DEFAULT_API_VERSION): """Helper to set routing arguments.""" request = fakes.HTTPRequest.blank('/limit', version=microversion) request.accept = accept_header return request def _populate_limits(self, request): """Put limit info into a request.""" _limits = [ limits.Limit("GET", "*", ".*", 10, 60).display(), limits.Limit("POST", "*", ".*", 5, 60 * 60).display(), limits.Limit("GET", "changes-since*", "changes-since", 5, 60).display(), ] request.environ["manila.limits"] = _limits return request def test_empty_index_json(self): """Test getting empty limit details in JSON.""" request = self._get_index_request() response = self.controller.index(request) expected = { "limits": { "rate": [], "absolute": {}, }, } self.assertEqual(expected, response) @ddt.data(api_version.DEFAULT_API_VERSION, SHARE_REPLICAS_LIMIT_MICROVERSION) def test_index_json(self, microversion): """Test getting limit details in JSON.""" request = self._get_index_request(microversion=microversion) request = self._populate_limits(request) self.absolute_limits = { 'limit': { 'shares': 11, 'gigabytes': 22, 'snapshots': 33, 'snapshot_gigabytes': 44, 'share_networks': 55, }, 'in_use': { 'shares': 3, 'gigabytes': 4, 'snapshots': 5, 'snapshot_gigabytes': 6, 'share_networks': 7, }, } if microversion == SHARE_REPLICAS_LIMIT_MICROVERSION: 
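# At the share-replica limits microversion (2.53) the stubbed quotas and the
# expected payload below also carry replica counts and replica gigabytes.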
self.absolute_limits['limit']['share_replicas'] = 20 self.absolute_limits['limit']['replica_gigabytes'] = 20 self.absolute_limits['in_use']['share_replicas'] = 3 self.absolute_limits['in_use']['replica_gigabytes'] = 3 response = self.controller.index(request) expected = { "limits": { "rate": [ { "regex": ".*", "uri": "*", "limit": [ { "verb": "GET", "next-available": "1970-01-01T00:00:00Z", "unit": "MINUTE", "value": 10, "remaining": 10, }, { "verb": "POST", "next-available": "1970-01-01T00:00:00Z", "unit": "HOUR", "value": 5, "remaining": 5, }, ], }, { "regex": "changes-since", "uri": "changes-since*", "limit": [ { "verb": "GET", "next-available": "1970-01-01T00:00:00Z", "unit": "MINUTE", "value": 5, "remaining": 5, }, ], }, ], "absolute": { "totalSharesUsed": 3, "totalShareGigabytesUsed": 4, "totalShareSnapshotsUsed": 5, "totalSnapshotGigabytesUsed": 6, "totalShareNetworksUsed": 7, "maxTotalShares": 11, "maxTotalShareGigabytes": 22, "maxTotalShareSnapshots": 33, "maxTotalSnapshotGigabytes": 44, "maxTotalShareNetworks": 55, }, }, } if microversion == SHARE_REPLICAS_LIMIT_MICROVERSION: expected['limits']['absolute']["maxTotalShareReplicas"] = 20 expected['limits']['absolute']["totalShareReplicasUsed"] = 3 expected['limits']['absolute']["maxTotalReplicaGigabytes"] = 20 expected['limits']['absolute']["totalReplicaGigabytesUsed"] = 3 # body = jsonutils.loads(response.body) self.assertEqual(expected, response) def _populate_limits_diff_regex(self, request): """Put limit info into a request.""" _limits = [ limits.Limit("GET", "*", ".*", 10, 60).display(), limits.Limit("GET", "*", "*.*", 10, 60).display(), ] request.environ["manila.limits"] = _limits return request def test_index_diff_regex(self): """Test getting limit details in JSON.""" request = self._get_index_request() request = self._populate_limits_diff_regex(request) response = self.controller.index(request) expected = { "limits": { "rate": [ { "regex": ".*", "uri": "*", "limit": [ { "verb": "GET", "next-available": "1970-01-01T00:00:00Z", "unit": "MINUTE", "value": 10, "remaining": 10, }, ], }, { "regex": "*.*", "uri": "*", "limit": [ { "verb": "GET", "next-available": "1970-01-01T00:00:00Z", "unit": "MINUTE", "value": 10, "remaining": 10, }, ], }, ], "absolute": {}, }, } self.assertEqual(expected, response) def _test_index_absolute_limits_json(self, expected): request = self._get_index_request() response = self.controller.index(request) self.assertEqual(expected, response['limits']['absolute']) def test_index_ignores_extra_absolute_limits_json(self): self.absolute_limits = { 'in_use': {'unknown_limit': 9000}, 'limit': {'unknown_limit': 9001}, } self._test_index_absolute_limits_json({}) class TestLimiter(limits.Limiter): pass class LimitMiddlewareTest(BaseLimitTestSuite): """Tests for the `limits.RateLimitingMiddleware` class.""" @webob.dec.wsgify def _empty_app(self, request): """Do-nothing WSGI app.""" pass def setUp(self): """Prepare middleware for use through fake WSGI app.""" super(LimitMiddlewareTest, self).setUp() _limits = '(GET, *, .*, 1, MINUTE)' self.app = limits.RateLimitingMiddleware(self._empty_app, _limits, "%s.TestLimiter" % self.__class__.__module__) def test_limit_class(self): """Test that middleware selected correct limiter class.""" assert isinstance(self.app._limiter, TestLimiter) def test_good_request(self): """Test successful GET request through middleware.""" request = webob.Request.blank("/") response = request.get_response(self.app) self.assertEqual(200, response.status_int) def test_limited_request_json(self): 
"""Test a rate-limited (413) GET request through middleware.""" request = webob.Request.blank("/") response = request.get_response(self.app) self.assertEqual(200, response.status_int) request = webob.Request.blank("/") response = request.get_response(self.app) self.assertEqual(413, response.status_int) self.assertIn('Retry-After', response.headers) retry_after = int(response.headers['Retry-After']) self.assertAlmostEqual(retry_after, 60, 1) body = jsonutils.loads(response.body) expected = "Only 1 GET request(s) can be made to * every minute." value = body["overLimitFault"]["details"].strip() self.assertEqual(expected, value) class LimitTest(BaseLimitTestSuite): """Tests for the `limits.Limit` class.""" def test_GET_no_delay(self): """Test a limit handles 1 GET per second.""" limit = limits.Limit("GET", "*", ".*", 1, 1) delay = limit("GET", "/anything") self.assertIsNone(delay) self.assertEqual(0, limit.next_request) self.assertEqual(0, limit.last_request) def test_GET_delay(self): """Test two calls to 1 GET per second limit.""" limit = limits.Limit("GET", "*", ".*", 1, 1) delay = limit("GET", "/anything") self.assertIsNone(delay) delay = limit("GET", "/anything") self.assertEqual(1, delay) self.assertEqual(1, limit.next_request) self.assertEqual(0, limit.last_request) self.time += 4 delay = limit("GET", "/anything") self.assertIsNone(delay) self.assertEqual(4, limit.next_request) self.assertEqual(4, limit.last_request) class ParseLimitsTest(BaseLimitTestSuite): """Test default limits parser. Tests for the default limits parser in the in-memory `limits.Limiter` class. """ def test_invalid(self): """Test that parse_limits() handles invalid input correctly.""" self.assertRaises(ValueError, limits.Limiter.parse_limits, ';;;;;') def test_bad_rule(self): """Test that parse_limits() handles bad rules correctly.""" self.assertRaises(ValueError, limits.Limiter.parse_limits, 'GET, *, .*, 20, minute') def test_missing_arg(self): """Test that parse_limits() handles missing args correctly.""" self.assertRaises(ValueError, limits.Limiter.parse_limits, '(GET, *, .*, 20)') def test_bad_value(self): """Test that parse_limits() handles bad values correctly.""" self.assertRaises(ValueError, limits.Limiter.parse_limits, '(GET, *, .*, foo, minute)') def test_bad_unit(self): """Test that parse_limits() handles bad units correctly.""" self.assertRaises(ValueError, limits.Limiter.parse_limits, '(GET, *, .*, 20, lightyears)') def test_multiple_rules(self): """Test that parse_limits() handles multiple rules correctly.""" try: lim = limits.Limiter.parse_limits( '(get, *, .*, 20, minute);' '(PUT, /foo*, /foo.*, 10, hour);' '(POST, /bar*, /bar.*, 5, second);' '(Say, /derp*, /derp.*, 1, day)') except ValueError as e: assert False, six.text_type(e) # Make sure the number of returned limits are correct self.assertEqual(4, len(lim)) # Check all the verbs... expected = ['GET', 'PUT', 'POST', 'SAY'] self.assertEqual(expected, [t.verb for t in lim]) # ...the URIs... expected = ['*', '/foo*', '/bar*', '/derp*'] self.assertEqual(expected, [t.uri for t in lim]) # ...the regexes... expected = ['.*', '/foo.*', '/bar.*', '/derp.*'] self.assertEqual(expected, [t.regex for t in lim]) # ...the values... expected = [20, 10, 5, 1] self.assertEqual(expected, [t.value for t in lim]) # ...and the units... 
expected = [limits.PER_MINUTE, limits.PER_HOUR, limits.PER_SECOND, limits.PER_DAY] self.assertEqual(expected, [t.unit for t in lim]) class LimiterTest(BaseLimitTestSuite): """Tests for the in-memory `limits.Limiter` class.""" def setUp(self): """Run before each test.""" super(LimiterTest, self).setUp() userlimits = {'user:user3': ''} self.limiter = limits.Limiter(TEST_LIMITS, **userlimits) def _check(self, num, verb, url, username=None): """Check and yield results from checks.""" for x in moves.range(num): yield self.limiter.check_for_delay(verb, url, username)[0] def _check_sum(self, num, verb, url, username=None): """Check and sum results from checks.""" results = self._check(num, verb, url, username) return sum(item for item in results if item) def test_no_delay_GET(self): """Test no delay on GET for single call. Simple test to ensure no delay on a single call for a limit verb we didn"t set. """ delay = self.limiter.check_for_delay("GET", "/anything") self.assertEqual((None, None), delay) def test_no_delay_PUT(self): """Test no delay on single call. Simple test to ensure no delay on a single call for a known limit. """ delay = self.limiter.check_for_delay("PUT", "/anything") self.assertEqual((None, None), delay) def test_delay_PUT(self): """Ensure 11th PUT will be delayed. Ensure the 11th PUT will result in a delay of 6.0 seconds until the next request will be granted. """ expected = [None] * 10 + [6.0] results = list(self._check(11, "PUT", "/anything")) self.assertEqual(expected, results) def test_delay_POST(self): """Ensure 8th POST will be delayed. Ensure the 8th POST will result in a delay of 6.0 seconds until the next request will be granced. """ expected = [None] * 7 results = list(self._check(7, "POST", "/anything")) self.assertEqual(expected, results) expected = 60.0 / 7.0 results = self._check_sum(1, "POST", "/anything") self.failUnlessAlmostEqual(expected, results, 8) def test_delay_GET(self): """Ensure the 11th GET will result in NO delay.""" expected = [None] * 11 results = list(self._check(11, "GET", "/anything")) self.assertEqual(expected, results) def test_delay_PUT_volumes(self): """Ensure PUT limits. Ensure PUT on /volumes limits at 5 requests, and PUT elsewhere is still OK after 5 requests...but then after 11 total requests, PUT limiting kicks in. """ # First 6 requests on PUT /volumes expected = [None] * 5 + [12.0] results = list(self._check(6, "PUT", "/shares")) self.assertEqual(expected, results) # Next 5 request on PUT /anything expected = [None] * 4 + [6.0] results = list(self._check(5, "PUT", "/anything")) self.assertEqual(expected, results) def test_delay_PUT_wait(self): """Test limit handling. Ensure after hitting the limit and then waiting for the correct amount of time, the limit will be lifted. 
""" expected = [None] * 10 + [6.0] results = list(self._check(11, "PUT", "/anything")) self.assertEqual(expected, results) # Advance time self.time += 6.0 expected = [None, 6.0] results = list(self._check(2, "PUT", "/anything")) self.assertEqual(expected, results) def test_multiple_delays(self): """Ensure multiple requests still get a delay.""" expected = [None] * 10 + [6.0] * 10 results = list(self._check(20, "PUT", "/anything")) self.assertEqual(expected, results) self.time += 1.0 expected = [5.0] * 10 results = list(self._check(10, "PUT", "/anything")) self.assertEqual(expected, results) def test_user_limit(self): """Test user-specific limits.""" self.assertEqual([], self.limiter.levels['user3']) def test_multiple_users(self): """Tests involving multiple users.""" # User1 expected = [None] * 10 + [6.0] * 10 results = list(self._check(20, "PUT", "/anything", "user1")) self.assertEqual(expected, results) # User2 expected = [None] * 10 + [6.0] * 5 results = list(self._check(15, "PUT", "/anything", "user2")) self.assertEqual(expected, results) # User3 expected = [None] * 20 results = list(self._check(20, "PUT", "/anything", "user3")) self.assertEqual(expected, results) self.time += 1.0 # User1 again expected = [5.0] * 10 results = list(self._check(10, "PUT", "/anything", "user1")) self.assertEqual(expected, results) self.time += 1.0 # User1 again expected = [4.0] * 5 results = list(self._check(5, "PUT", "/anything", "user2")) self.assertEqual(expected, results) class WsgiLimiterTest(BaseLimitTestSuite): """Tests for `limits.WsgiLimiter` class.""" def setUp(self): """Run before each test.""" super(WsgiLimiterTest, self).setUp() self.app = limits.WsgiLimiter(TEST_LIMITS) def _request_data(self, verb, path): """Get data describing a limit request verb/path.""" return six.b(jsonutils.dumps({"verb": verb, "path": path})) def _request(self, verb, url, username=None): """Send request. Make sure that POSTing to the given url causes the given username to perform the given action. Make the internal rate limiter return delay and make sure that the WSGI app returns the correct response. 
""" if username: request = webob.Request.blank("/%s" % username) else: request = webob.Request.blank("/") request.method = "POST" request.body = self._request_data(verb, url) response = request.get_response(self.app) if "X-Wait-Seconds" in response.headers: self.assertEqual(403, response.status_int) return response.headers["X-Wait-Seconds"] self.assertEqual(204, response.status_int) def test_invalid_methods(self): """Only POSTs should work.""" for method in ["GET", "PUT", "DELETE", "HEAD", "OPTIONS"]: request = webob.Request.blank("/", method=method) response = request.get_response(self.app) self.assertEqual(405, response.status_int) def test_good_url(self): delay = self._request("GET", "/something") self.assertIsNone(delay) def test_escaping(self): delay = self._request("GET", "/something/jump%20up") self.assertIsNone(delay) def test_response_to_delays(self): delay = self._request("GET", "/delayed") self.assertIsNone(delay) delay = self._request("GET", "/delayed") self.assertEqual('60.00', delay) def test_response_to_delays_usernames(self): delay = self._request("GET", "/delayed", "user1") self.assertIsNone(delay) delay = self._request("GET", "/delayed", "user2") self.assertIsNone(delay) delay = self._request("GET", "/delayed", "user1") self.assertEqual('60.00', delay) delay = self._request("GET", "/delayed", "user2") self.assertEqual('60.00', delay) class FakeHttplibSocket(object): """Fake `http_client.HTTPResponse` replacement.""" def __init__(self, response_string): """Initialize new `FakeHttplibSocket`.""" self._buffer = six.BytesIO(six.b(response_string)) def makefile(self, _mode, _other=None): """Returns the socket's internal buffer.""" return self._buffer class FakeHttplibConnection(object): """Fake `http_client.HTTPConnection`.""" def __init__(self, app, host): """Initialize `FakeHttplibConnection`.""" self.app = app self.host = host def request(self, method, path, body="", headers=None): """Translate request to WSGI app. Requests made via this connection actually get translated and routed into our WSGI app, we then wait for the response and turn it back into an `http_client.HTTPResponse`. """ if not headers: headers = {} req = webob.Request.blank(path) req.method = method req.headers = headers req.host = self.host req.body = six.b(body) resp = str(req.get_response(self.app)) resp = "HTTP/1.0 %s" % resp sock = FakeHttplibSocket(resp) self.http_response = http_client.HTTPResponse(sock) self.http_response.begin() def getresponse(self): """Return our generated response from the request.""" return self.http_response def wire_HTTPConnection_to_WSGI(host, app): """Wire HTTPConnection to WSGI app. Monkeypatches HTTPConnection so that if you try to connect to host, you are instead routed straight to the given WSGI app. After calling this method, when any code calls http_client.HTTPConnection(host) the connection object will be a fake. Its requests will be sent directly to the given WSGI app rather than through a socket. Code connecting to hosts other than host will not be affected. This method may be called multiple times to map different hosts to different apps. This method returns the original HTTPConnection object, so that the caller can restore the default HTTPConnection interface (for all hosts). """ class HTTPConnectionDecorator(object): """Wrapper for HTTPConnection class Wraps the real HTTPConnection class so that when you instantiate the class you might instead get a fake instance. 
""" def __init__(self, wrapped): self.wrapped = wrapped def __call__(self, connection_host, *args, **kwargs): if connection_host == host: return FakeHttplibConnection(app, host) else: return self.wrapped(connection_host, *args, **kwargs) oldHTTPConnection = http_client.HTTPConnection http_client.HTTPConnection = HTTPConnectionDecorator( http_client.HTTPConnection) return oldHTTPConnection class WsgiLimiterProxyTest(BaseLimitTestSuite): """Tests for the `limits.WsgiLimiterProxy` class.""" def setUp(self): """Set up HTTP/WSGI magic. Do some nifty HTTP/WSGI magic which allows for WSGI to be called directly by something like the `http_client` library. """ super(WsgiLimiterProxyTest, self).setUp() self.app = limits.WsgiLimiter(TEST_LIMITS) self.oldHTTPConnection = ( wire_HTTPConnection_to_WSGI("169.254.0.1:80", self.app)) self.proxy = limits.WsgiLimiterProxy("169.254.0.1:80") def test_200(self): """Successful request test.""" delay = self.proxy.check_for_delay("GET", "/anything") self.assertEqual((None, None), delay) def test_403(self): """Forbidden request test.""" delay = self.proxy.check_for_delay("GET", "/delayed") self.assertEqual((None, None), delay) delay, error = self.proxy.check_for_delay("GET", "/delayed") error = error.strip() expected = ("60.00", six.b("403 Forbidden\n\nOnly 1 GET request(s) " "can be made to /delayed every minute.")) self.assertEqual(expected, (delay, error)) def tearDown(self): # restore original HTTPConnection object http_client.HTTPConnection = self.oldHTTPConnection super(WsgiLimiterProxyTest, self).tearDown() class LimitsViewBuilderTest(test.TestCase): def setUp(self): super(LimitsViewBuilderTest, self).setUp() self.view_builder = views.limits.ViewBuilder() self.rate_limits = [{"URI": "*", "regex": ".*", "value": 10, "verb": "POST", "remaining": 2, "unit": "MINUTE", "resetTime": 1311272226}, {"URI": "*/shares", "regex": "^/shares", "value": 50, "verb": "POST", "remaining": 10, "unit": "DAY", "resetTime": 1311272226}] self.absolute_limits = { "limit": { "shares": 111, "gigabytes": 222, "snapshots": 333, "snapshot_gigabytes": 444, "share_networks": 555, }, "in_use": { "shares": 65, "gigabytes": 76, "snapshots": 87, "snapshot_gigabytes": 98, "share_networks": 107, }, } def test_build_limits(self): request = fakes.HTTPRequest.blank('/') tdate = "2011-07-21T18:17:06Z" expected_limits = { "limits": { "rate": [ {"uri": "*", "regex": ".*", "limit": [{"value": 10, "verb": "POST", "remaining": 2, "unit": "MINUTE", "next-available": tdate}]}, {"uri": "*/shares", "regex": "^/shares", "limit": [{"value": 50, "verb": "POST", "remaining": 10, "unit": "DAY", "next-available": tdate}]} ], "absolute": { "totalSharesUsed": 65, "totalShareGigabytesUsed": 76, "totalShareSnapshotsUsed": 87, "totalSnapshotGigabytesUsed": 98, "totalShareNetworksUsed": 107, "maxTotalShares": 111, "maxTotalShareGigabytes": 222, "maxTotalShareSnapshots": 333, "maxTotalSnapshotGigabytes": 444, "maxTotalShareNetworks": 555, } } } output = self.view_builder.build(request, self.rate_limits, self.absolute_limits) self.assertDictMatch(expected_limits, output) def test_build_limits_empty_limits(self): request = fakes.HTTPRequest.blank('/') expected_limits = {"limits": {"rate": [], "absolute": {}}} abs_limits = {} rate_limits = [] output = self.view_builder.build(request, rate_limits, abs_limits) self.assertDictMatch(expected_limits, output) manila-10.0.0/manila/tests/api/v1/test_security_service.py0000664000175000017500000004663713656750227023612 0ustar zuulzuul00000000000000# Copyright 2012 NetApp # All 
Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from six.moves.urllib import parse import webob from manila.api.v1 import security_service from manila.common import constants from manila import db from manila import exception from manila import test from manila.tests.api import fakes @ddt.ddt class ShareApiTest(test.TestCase): """Share Api Test.""" def setUp(self): super(ShareApiTest, self).setUp() self.controller = security_service.SecurityServiceController() self.maxDiff = None self.ss_active_directory = { "created_at": "fake-time", "updated_at": "fake-time-2", "id": 1, "name": "fake-name", "description": "Fake Security Service Desc", "type": constants.SECURITY_SERVICES_ALLOWED_TYPES[0], "dns_ip": "1.1.1.1", "server": "fake-server", "domain": "fake-domain", "user": "fake-user", "password": "fake-password", "status": constants.STATUS_NEW, "project_id": "fake", } self.ss_ldap = { "created_at": "fake-time", "updated_at": "fake-time-2", "id": 2, "name": "ss-ldap", "description": "Fake Security Service Desc", "type": constants.SECURITY_SERVICES_ALLOWED_TYPES[1], "dns_ip": "2.2.2.2", "server": "test-server", "domain": "test-domain", "user": "test-user", "password": "test-password", "status": "active", "project_id": "fake", } self.valid_search_opts = { 'user': 'fake-user', 'server': 'fake-server', 'dns_ip': '1.1.1.1', 'domain': 'fake-domain', 'type': constants.SECURITY_SERVICES_ALLOWED_TYPES[0], } self.check_policy_patcher = mock.patch( 'manila.api.v1.security_service.policy.check_policy') self.check_policy_patcher.start() self.addCleanup(self._stop_started_patcher, self.check_policy_patcher) self.security_service_list_expected_resp = { 'security_services': [{ 'id': self.ss_active_directory['id'], 'name': self.ss_active_directory['name'], 'type': self.ss_active_directory['type'], 'status': self.ss_active_directory['status'] }, ] } self.fake_share_network_list_with_share_servers = [{ 'id': 'fake_sn_id', 'share_network_subnets': [{ 'id': 'fake_sns_id', 'share_servers': [{'id': 'fake_ss_id'}] }] }] self.fake_share_network_list_without_share_servers = [{ 'id': 'fake_sn_id', 'share_network_subnets': [{ 'id': 'fake_sns_id', 'share_servers': [] }] }] def _stop_started_patcher(self, patcher): if hasattr(patcher, 'is_local'): patcher.stop() def test_security_service_show(self): db.security_service_get = mock.Mock( return_value=self.ss_active_directory) req = fakes.HTTPRequest.blank('/security-services/1') res_dict = self.controller.show(req, '1') expected = self.ss_active_directory.copy() expected.update() self.assertEqual({'security_service': self.ss_active_directory}, res_dict) def test_security_service_show_not_found(self): db.security_service_get = mock.Mock(side_effect=exception.NotFound) req = fakes.HTTPRequest.blank('/shares/1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, '1') def test_security_service_create(self): sec_service = self.ss_active_directory.copy() create_stub = mock.Mock( return_value=sec_service) self.mock_object(db, 
'security_service_create', create_stub) req = fakes.HTTPRequest.blank('/security-services') res_dict = self.controller.create( req, {"security_service": sec_service}) expected = self.ss_active_directory.copy() self.assertEqual({'security_service': expected}, res_dict) def test_security_service_create_invalid_types(self): sec_service = self.ss_active_directory.copy() sec_service['type'] = 'invalid' req = fakes.HTTPRequest.blank('/security-services') self.assertRaises(exception.InvalidInput, self.controller.create, req, {"security_service": sec_service}) def test_create_security_service_no_body(self): body = {} req = fakes.HTTPRequest.blank('/security-services') self.assertRaises(webob.exc.HTTPUnprocessableEntity, self.controller.create, req, body) def test_security_service_delete(self): db.security_service_delete = mock.Mock() db.security_service_get = mock.Mock() db.share_network_get_all_by_security_service = mock.Mock( return_value=[]) req = fakes.HTTPRequest.blank('/security_services/1') resp = self.controller.delete(req, 1) db.security_service_delete.assert_called_once_with( req.environ['manila.context'], 1) self.assertEqual(202, resp.status_int) def test_security_service_delete_not_found(self): db.security_service_get = mock.Mock(side_effect=exception.NotFound) req = fakes.HTTPRequest.blank('/security_services/1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, 1) def test_security_service_delete_has_share_networks(self): db.security_service_get = mock.Mock() db.share_network_get_all_by_security_service = mock.Mock( return_value=[{'share_network': 'fake_share_network'}]) req = fakes.HTTPRequest.blank('/security_services/1') self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete, req, 1) def test_security_service_update_name(self): new = self.ss_active_directory.copy() updated = self.ss_active_directory.copy() updated['name'] = 'new' self.mock_object(security_service.policy, 'check_policy') db.security_service_get = mock.Mock(return_value=new) db.security_service_update = mock.Mock(return_value=updated) fake_sns = {'id': 'fake_sns_id', 'share_servers': ['fake_ss']} db.share_network_get_all_by_security_service = mock.Mock( return_value=[{ 'id': 'fake_id', 'share_network_subnets': [fake_sns] }]) body = {"security_service": {"name": "new"}} req = fakes.HTTPRequest.blank('/security_service/1') res_dict = self.controller.update(req, 1, body)['security_service'] self.assertEqual(updated['name'], res_dict['name']) db.share_network_get_all_by_security_service.assert_called_once_with( req.environ['manila.context'], 1) self.assertEqual(2, security_service.policy.check_policy.call_count) security_service.policy.check_policy.assert_has_calls([ mock.call(req.environ['manila.context'], security_service.RESOURCE_NAME, 'update', new) ]) def test_security_service_update_description(self): new = self.ss_active_directory.copy() updated = self.ss_active_directory.copy() updated['description'] = 'new' self.mock_object(security_service.policy, 'check_policy') db.security_service_get = mock.Mock(return_value=new) db.security_service_update = mock.Mock(return_value=updated) fake_sns = {'id': 'fake_sns_id', 'share_servers': ['fake_ss']} db.share_network_get_all_by_security_service = mock.Mock( return_value=[{ 'id': 'fake_id', 'share_network_subnets': [fake_sns] }]) body = {"security_service": {"description": "new"}} req = fakes.HTTPRequest.blank('/security_service/1') res_dict = self.controller.update(req, 1, body)['security_service'] self.assertEqual(updated['description'], 
res_dict['description']) db.share_network_get_all_by_security_service.assert_called_once_with( req.environ['manila.context'], 1) self.assertEqual(2, security_service.policy.check_policy.call_count) security_service.policy.check_policy.assert_has_calls([ mock.call(req.environ['manila.context'], security_service.RESOURCE_NAME, 'update', new) ]) @mock.patch.object(db, 'security_service_get', mock.Mock()) @mock.patch.object(db, 'share_network_get_all_by_security_service', mock.Mock()) def test_security_service_update_invalid_keys_sh_server_exists(self): self.mock_object(security_service.policy, 'check_policy') fake_sns = {'id': 'fake_sns_id', 'share_servers': ['fake_ss']} db.share_network_get_all_by_security_service.return_value = [ {'id': 'fake_id', 'share_network_subnets': [fake_sns]}, ] db.security_service_get.return_value = self.ss_active_directory.copy() body = {'security_service': {'user_id': 'new_user'}} req = fakes.HTTPRequest.blank('/security_services/1') self.assertRaises(webob.exc.HTTPForbidden, self.controller.update, req, 1, body) db.security_service_get.assert_called_once_with( req.environ['manila.context'], 1) db.share_network_get_all_by_security_service.assert_called_once_with( req.environ['manila.context'], 1) self.assertEqual(1, security_service.policy.check_policy.call_count) security_service.policy.check_policy.assert_has_calls([ mock.call(req.environ['manila.context'], security_service.RESOURCE_NAME, 'update', db.security_service_get.return_value) ]) @mock.patch.object(db, 'security_service_get', mock.Mock()) @mock.patch.object(db, 'security_service_update', mock.Mock()) @mock.patch.object(db, 'share_network_get_all_by_security_service', mock.Mock()) def test_security_service_update_valid_keys_sh_server_exists(self): self.mock_object(security_service.policy, 'check_policy') fake_sns = {'id': 'fake_sns_id', 'share_servers': ['fake_ss']} db.share_network_get_all_by_security_service.return_value = [ {'id': 'fake_id', 'share_network_subnets': [fake_sns]}, ] old = self.ss_active_directory.copy() updated = self.ss_active_directory.copy() updated['name'] = 'new name' updated['description'] = 'new description' db.security_service_get.return_value = old db.security_service_update.return_value = updated body = { 'security_service': { 'description': 'new description', 'name': 'new name', }, } req = fakes.HTTPRequest.blank('/security_services/1') res_dict = self.controller.update(req, 1, body)['security_service'] self.assertEqual(updated['description'], res_dict['description']) self.assertEqual(updated['name'], res_dict['name']) db.security_service_get.assert_called_once_with( req.environ['manila.context'], 1) db.share_network_get_all_by_security_service.assert_called_once_with( req.environ['manila.context'], 1) db.security_service_update.assert_called_once_with( req.environ['manila.context'], 1, body['security_service']) self.assertEqual(2, security_service.policy.check_policy.call_count) security_service.policy.check_policy.assert_has_calls([ mock.call(req.environ['manila.context'], security_service.RESOURCE_NAME, 'update', old) ]) @mock.patch.object(db, 'security_service_get', mock.Mock()) def test_security_service_update_has_share_servers(self): db.security_service_get = mock.Mock() self.mock_object( self.controller, '_share_servers_dependent_on_sn_exist', mock.Mock(return_value=True)) body = {"security_service": {"type": "ldap"}} req = fakes.HTTPRequest.blank('/security_services/1') self.assertRaises(webob.exc.HTTPForbidden, self.controller.update, req, 1, body) @ddt.data(True, 
False) def test_security_service_update_share_server_dependent_exists(self, expected): req = fakes.HTTPRequest.blank('/security_services/1') context = req.environ['manila.context'] db.security_service_get = mock.Mock() network = (self.fake_share_network_list_with_share_servers if expected else self.fake_share_network_list_without_share_servers) db.share_network_get_all_by_security_service = mock.Mock( return_value=network) result = self.controller._share_servers_dependent_on_sn_exist( context, 'fake_id') self.assertEqual(expected, result) def test_security_service_list(self): db.security_service_get_all_by_project = mock.Mock( return_value=[self.ss_active_directory.copy()]) req = fakes.HTTPRequest.blank('/security_services') res_dict = self.controller.index(req) self.assertEqual(self.security_service_list_expected_resp, res_dict) @mock.patch.object(db, 'share_network_get', mock.Mock()) def test_security_service_list_filter_by_sn(self): sn = { 'id': 'fake_sn_id', 'security_services': [self.ss_active_directory, ], } db.share_network_get.return_value = sn req = fakes.HTTPRequest.blank( '/security-services?share_network_id=fake_sn_id') res_dict = self.controller.index(req) self.assertEqual(self.security_service_list_expected_resp, res_dict) db.share_network_get.assert_called_once_with( req.environ['manila.context'], sn['id']) @mock.patch.object(db, 'security_service_get_all', mock.Mock()) def test_security_services_list_all_tenants_admin_context(self): self.check_policy_patcher.stop() db.security_service_get_all.return_value = [ self.ss_active_directory, self.ss_ldap, ] req = fakes.HTTPRequest.blank( '/security-services?all_tenants=1&name=fake-name', use_admin_context=True) res_dict = self.controller.index(req) self.assertEqual(self.security_service_list_expected_resp, res_dict) db.security_service_get_all.assert_called_once_with( req.environ['manila.context']) @mock.patch.object(db, 'security_service_get_all_by_project', mock.Mock()) def test_security_services_list_all_tenants_non_admin_context(self): db.security_service_get_all_by_project.return_value = [] req = fakes.HTTPRequest.blank( '/security-services?all_tenants=1') fake_context = req.environ['manila.context'] self.controller.index(req) db.security_service_get_all_by_project.assert_called_once_with( fake_context, fake_context.project_id ) @mock.patch.object(db, 'security_service_get_all', mock.Mock()) def test_security_services_list_all_tenants_with_invalid_value(self): req = fakes.HTTPRequest.blank( '/security-services?all_tenants=nerd', use_admin_context=True) self.assertRaises(exception.InvalidInput, self.controller.index, req) @mock.patch.object(db, 'security_service_get_all_by_project', mock.Mock()) def test_security_services_list_all_tenants_with_value_zero(self): db.security_service_get_all_by_project.return_value = [] req = fakes.HTTPRequest.blank( '/security-services?all_tenants=0', use_admin_context=True) res_dict = self.controller.index(req) self.assertEqual({'security_services': []}, res_dict) db.security_service_get_all_by_project.assert_called_once_with( req.environ['manila.context'], req.environ['manila.context'].project_id) @mock.patch.object(db, 'security_service_get_all_by_project', mock.Mock()) def test_security_services_list_admin_context_invalid_opts(self): db.security_service_get_all_by_project.return_value = [ self.ss_active_directory, self.ss_ldap, ] req = fakes.HTTPRequest.blank( '/security-services?fake_opt=fake_value', use_admin_context=True) res_dict = self.controller.index(req) 
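# An unrecognized filter option ('fake_opt') is expected to match none of the
# stubbed security services, so the listing should come back empty.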
self.assertEqual({'security_services': []}, res_dict) db.security_service_get_all_by_project.assert_called_once_with( req.environ['manila.context'], req.environ['manila.context'].project_id) @mock.patch.object(db, 'security_service_get_all_by_project', mock.Mock()) def test_security_service_list_all_filter_opts_separately(self): db.security_service_get_all_by_project.return_value = [ self.ss_active_directory, self.ss_ldap, ] for opt, val in self.valid_search_opts.items(): for use_admin_context in [True, False]: req = fakes.HTTPRequest.blank( '/security-services?' + opt + '=' + val, use_admin_context=use_admin_context) res_dict = self.controller.index(req) self.assertEqual(self.security_service_list_expected_resp, res_dict) db.security_service_get_all_by_project.assert_called_with( req.environ['manila.context'], req.environ['manila.context'].project_id) @mock.patch.object(db, 'security_service_get_all_by_project', mock.Mock()) def test_security_service_list_all_filter_opts(self): db.security_service_get_all_by_project.return_value = [ self.ss_active_directory, self.ss_ldap, ] query_string = '/security-services?' + parse.urlencode(sorted( [(k, v) for (k, v) in list(self.valid_search_opts.items())])) for use_admin_context in [True, False]: req = fakes.HTTPRequest.blank(query_string, use_admin_context=use_admin_context) res_dict = self.controller.index(req) self.assertEqual(self.security_service_list_expected_resp, res_dict) db.security_service_get_all_by_project.assert_called_with( req.environ['manila.context'], req.environ['manila.context'].project_id) manila-10.0.0/manila/tests/api/v1/test_shares.py0000664000175000017500000014154713656750227021504 0ustar zuulzuul00000000000000# Copyright 2012 NetApp # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
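# A minimal sketch of the share-create request body exercised by the tests
# below, assuming the v1 shares API format used throughout this module:
#
#     body = {"share": {"size": 100, "display_name": "Share Test Name",
#                       "share_proto": "fakeproto",
#                       "availability_zone": "zone1:host1"}}
#     controller.create(req, body)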
import copy import datetime from unittest import mock import ddt from oslo_config import cfg from oslo_serialization import jsonutils import six import webob from manila.api import common from manila.api.v1 import shares from manila.common import constants from manila import context from manila import db from manila import exception from manila import policy from manila.share import api as share_api from manila.share import share_types from manila import test from manila.tests.api.contrib import stubs from manila.tests.api import fakes from manila.tests import db_utils from manila import utils CONF = cfg.CONF @ddt.ddt class ShareAPITest(test.TestCase): """Share API Test.""" def setUp(self): super(ShareAPITest, self).setUp() self.controller = shares.ShareController() self.mock_object(db, 'availability_zone_get') self.mock_object(share_api.API, 'get_all', stubs.stub_get_all_shares) self.mock_object(share_api.API, 'get', stubs.stub_share_get) self.mock_object(share_api.API, 'update', stubs.stub_share_update) self.mock_object(share_api.API, 'delete', stubs.stub_share_delete) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) self.mock_object(share_types, 'get_share_type', stubs.stub_share_type_get) self.mock_object( common, 'validate_public_share_policy', mock.Mock(side_effect=lambda *args, **kwargs: args[1])) self.resource_name = self.controller.resource_name self.mock_policy_check = self.mock_object(policy, 'check_policy') self.maxDiff = None self.share = { "size": 100, "display_name": "Share Test Name", "display_description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "is_public": False, } self.create_mock = mock.Mock( return_value=stubs.stub_share( '1', display_name=self.share['display_name'], display_description=self.share['display_description'], size=100, share_proto=self.share['share_proto'].upper(), availability_zone=self.share['availability_zone']) ) self.vt = { 'id': 'fake_volume_type_id', 'name': 'fake_volume_type_name', 'required_extra_specs': { 'driver_handles_share_servers': 'False' }, 'extra_specs': { 'driver_handles_share_servers': 'False' } } CONF.set_default("default_share_type", None) def _get_expected_share_detailed_response(self, values=None, admin=False): share = { 'id': '1', 'name': 'displayname', 'availability_zone': 'fakeaz', 'description': 'displaydesc', 'export_location': 'fake_location', 'export_locations': ['fake_location', 'fake_location2'], 'project_id': 'fakeproject', 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'share_proto': 'FAKEPROTO', 'metadata': {}, 'size': 1, 'snapshot_id': '2', 'share_network_id': None, 'status': 'fakestatus', 'share_type': '1', 'volume_type': '1', 'snapshot_support': True, 'is_public': False, 'links': [ { 'href': 'http://localhost/v1/fake/shares/1', 'rel': 'self' }, { 'href': 'http://localhost/fake/shares/1', 'rel': 'bookmark' } ], } if values: if 'display_name' in values: values['name'] = values.pop('display_name') if 'display_description' in values: values['description'] = values.pop('display_description') share.update(values) if share.get('share_proto'): share['share_proto'] = share['share_proto'].upper() if admin: share['share_server_id'] = 'fake_share_server_id' share['host'] = 'fakehost' return {'share': share} @ddt.data("1.0", "2.0", "2.1") def test_share_create_original(self, microversion): self.mock_object(share_api.API, 'create', self.create_mock) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank('/shares', version=microversion) 
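# These early microversions (1.0, 2.0, 2.1) predate the snapshot_support
# field, so it is popped from the expected response below.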
res_dict = self.controller.create(req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') expected = self._get_expected_share_detailed_response(self.share) expected['share'].pop('snapshot_support') self.assertEqual(expected, res_dict) @ddt.data("2.2", "2.3") def test_share_create_with_snapshot_support_without_cg(self, microversion): self.mock_object(share_api.API, 'create', self.create_mock) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank('/shares', version=microversion) res_dict = self.controller.create(req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') expected = self._get_expected_share_detailed_response(self.share) self.assertEqual(expected, res_dict) def test_share_create_with_valid_default_share_type(self): self.mock_object(share_types, 'get_share_type_by_name', mock.Mock(return_value=self.vt)) CONF.set_default("default_share_type", self.vt['name']) self.mock_object(share_api.API, 'create', self.create_mock) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank('/shares') res_dict = self.controller.create(req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') expected = self._get_expected_share_detailed_response(self.share) expected['share'].pop('snapshot_support') share_types.get_share_type_by_name.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.vt['name']) self.assertEqual(expected, res_dict) def test_share_create_with_invalid_default_share_type(self): self.mock_object( share_types, 'get_default_share_type', mock.Mock(side_effect=exception.ShareTypeNotFoundByName( self.vt['name'])), ) CONF.set_default("default_share_type", self.vt['name']) req = fakes.HTTPRequest.blank('/shares') self.assertRaises(exception.ShareTypeNotFoundByName, self.controller.create, req, {'share': self.share}) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') share_types.get_default_share_type.assert_called_once_with() def test_share_create_with_dhss_true_and_network_notexist(self): fake_share_type = { 'id': 'fake_volume_type_id', 'name': 'fake_volume_type_name', 'extra_specs': { 'driver_handles_share_servers': True, } } self.mock_object( share_types, 'get_default_share_type', mock.Mock(return_value=fake_share_type), ) CONF.set_default("default_share_type", fake_share_type['name']) req = fakes.HTTPRequest.blank('/shares') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, {'share': self.share}) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') share_types.get_default_share_type.assert_called_once_with() def test_share_create_with_share_net(self): shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "share_network_id": "fakenetid" } create_mock = mock.Mock(return_value=stubs.stub_share('1', display_name=shr['name'], display_description=shr['description'], size=shr['size'], share_proto=shr['share_proto'].upper(), availability_zone=shr['availability_zone'], share_network_id=shr['share_network_id'])) self.mock_object(share_api.API, 'create', create_mock) self.mock_object(share_api.API, 'get_share_network', mock.Mock( return_value={'id': 'fakenetid'})) self.mock_object( db, 
'share_network_subnet_get_by_availability_zone_id', mock.Mock(return_value={'id': 'fakesubnetid'})) body = {"share": copy.deepcopy(shr)} req = fakes.HTTPRequest.blank('/shares') res_dict = self.controller.create(req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') expected = self._get_expected_share_detailed_response(shr) expected['share'].pop('snapshot_support') self.assertEqual(expected, res_dict) # pylint: disable=unsubscriptable-object self.assertEqual("fakenetid", create_mock.call_args[1]['share_network_id']) def test_share_create_from_snapshot_without_share_net_no_parent(self): shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "snapshot_id": 333, "share_network_id": None, } create_mock = mock.Mock(return_value=stubs.stub_share('1', display_name=shr['name'], display_description=shr['description'], size=shr['size'], share_proto=shr['share_proto'].upper(), availability_zone=shr['availability_zone'], snapshot_id=shr['snapshot_id'], share_network_id=shr['share_network_id'])) self.mock_object(share_api.API, 'create', create_mock) body = {"share": copy.deepcopy(shr)} req = fakes.HTTPRequest.blank('/shares') res_dict = self.controller.create(req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') expected = self._get_expected_share_detailed_response(shr) expected['share'].pop('snapshot_support') self.assertEqual(expected, res_dict) def test_share_create_from_snapshot_without_share_net_parent_exists(self): shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "snapshot_id": 333, "share_network_id": None, } parent_share_net = 444 create_mock = mock.Mock(return_value=stubs.stub_share('1', display_name=shr['name'], display_description=shr['description'], size=shr['size'], share_proto=shr['share_proto'].upper(), snapshot_id=shr['snapshot_id'], instance=dict( availability_zone=shr['availability_zone'], share_network_id=shr['share_network_id']))) self.mock_object(share_api.API, 'create', create_mock) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) parent_share = stubs.stub_share( '1', instance={'share_network_id': parent_share_net}, create_share_from_snapshot_support=True) self.mock_object(share_api.API, 'get', mock.Mock( return_value=parent_share)) self.mock_object(share_api.API, 'get_share_network', mock.Mock( return_value={'id': parent_share_net})) self.mock_object( db, 'share_network_subnet_get_by_availability_zone_id') body = {"share": copy.deepcopy(shr)} req = fakes.HTTPRequest.blank('/shares') res_dict = self.controller.create(req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') expected = self._get_expected_share_detailed_response(shr) expected['share'].pop('snapshot_support') self.assertEqual(expected, res_dict) # pylint: disable=unsubscriptable-object self.assertEqual(parent_share_net, create_mock.call_args[1]['share_network_id']) def test_share_create_from_snapshot_with_share_net_equals_parent(self): parent_share_net = 444 shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "snapshot_id": 333, "share_network_id": parent_share_net } create_mock = mock.Mock(return_value=stubs.stub_share('1', 
display_name=shr['name'], display_description=shr['description'], size=shr['size'], share_proto=shr['share_proto'].upper(), snapshot_id=shr['snapshot_id'], instance=dict( availability_zone=shr['availability_zone'], share_network_id=shr['share_network_id']))) self.mock_object(share_api.API, 'create', create_mock) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) parent_share = stubs.stub_share( '1', instance={'share_network_id': parent_share_net}, create_share_from_snapshot_support=True) self.mock_object(share_api.API, 'get', mock.Mock( return_value=parent_share)) self.mock_object(share_api.API, 'get_share_network', mock.Mock( return_value={'id': parent_share_net})) self.mock_object( db, 'share_network_subnet_get_by_availability_zone_id') body = {"share": copy.deepcopy(shr)} req = fakes.HTTPRequest.blank('/shares') res_dict = self.controller.create(req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') expected = self._get_expected_share_detailed_response(shr) expected['share'].pop('snapshot_support') self.assertEqual(expected, res_dict) # pylint: disable=unsubscriptable-object self.assertEqual(parent_share_net, create_mock.call_args[1]['share_network_id']) def test_share_create_from_snapshot_invalid_share_net(self): self.mock_object(share_api.API, 'create') shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "snapshot_id": 333, "share_network_id": 1234 } body = {"share": shr} req = fakes.HTTPRequest.blank('/shares') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') @ddt.data("1.0", "2.0") def test_share_create_from_snapshot_not_supported(self, microversion): # This create operation should work, because the 1.0 API doesn't check # create_share_from_snapshot_support. 
parent_share_net = 444 shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "snapshot_id": 333, "share_network_id": parent_share_net } create_mock = mock.Mock(return_value=stubs.stub_share('1', display_name=shr['name'], display_description=shr['description'], size=shr['size'], share_proto=shr['share_proto'].upper(), snapshot_id=shr['snapshot_id'], instance=dict( availability_zone=shr['availability_zone'], share_network_id=shr['share_network_id']))) self.mock_object(share_api.API, 'create', create_mock) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) parent_share = stubs.stub_share( '1', instance={'share_network_id': parent_share_net}, create_share_from_snapshot_support=False) self.mock_object(share_api.API, 'get', mock.Mock( return_value=parent_share)) self.mock_object(share_api.API, 'get_share_network', mock.Mock( return_value={'id': parent_share_net})) self.mock_object( db, 'share_network_subnet_get_by_availability_zone_id') body = {"share": copy.deepcopy(shr)} req = fakes.HTTPRequest.blank('/shares', version=microversion) res_dict = self.controller.create(req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') expected = self._get_expected_share_detailed_response(shr) expected['share'].pop('snapshot_support') self.assertDictEqual(expected, res_dict) # pylint: disable=unsubscriptable-object self.assertEqual(parent_share_net, create_mock.call_args[1]['share_network_id']) def test_share_creation_fails_with_bad_size(self): shr = {"size": '', "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1"} body = {"share": shr} req = fakes.HTTPRequest.blank('/shares') self.assertRaises(exception.InvalidInput, self.controller.create, req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') def test_share_create_no_body(self): body = {} req = fakes.HTTPRequest.blank('/shares') self.assertRaises(webob.exc.HTTPUnprocessableEntity, self.controller.create, req, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'create') def test_share_create_invalid_availability_zone(self): self.mock_object( db, 'availability_zone_get', mock.Mock(side_effect=exception.AvailabilityZoneNotFound(id='id')) ) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank('/shares') self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, req, body) @ddt.data((exception.ShareNetworkNotFound(share_network_id='fake'), webob.exc.HTTPNotFound), (mock.Mock(), webob.exc.HTTPBadRequest)) @ddt.unpack def test_share_create_invalid_subnet(self, share_network_side_effect, exception_to_raise): fake_share_with_sn = copy.deepcopy(self.share) fake_share_with_sn['share_network_id'] = 'fakenetid' self.mock_object(db, 'share_network_get', mock.Mock(side_effect=share_network_side_effect)) self.mock_object( db, 'share_network_subnet_get_by_availability_zone_id', mock.Mock(return_value=None)) body = {"share": fake_share_with_sn} req = fakes.HTTPRequest.blank('/shares') self.assertRaises(exception_to_raise, self.controller.create, req, body) def test_share_show(self): req = fakes.HTTPRequest.blank('/shares/1') expected = self._get_expected_share_detailed_response() expected['share'].pop('snapshot_support') res_dict = self.controller.show(req, '1') 
self.assertEqual(expected, res_dict) def test_share_show_with_share_type_name(self): req = fakes.HTTPRequest.blank('/shares/1', version='2.6') res_dict = self.controller.show(req, '1') expected = self._get_expected_share_detailed_response() expected['share']['share_type_name'] = None expected['share']['task_state'] = None self.assertEqual(expected, res_dict) def test_share_show_admin(self): req = fakes.HTTPRequest.blank('/shares/1', use_admin_context=True) expected = self._get_expected_share_detailed_response(admin=True) expected['share'].pop('snapshot_support') res_dict = self.controller.show(req, '1') self.assertEqual(expected, res_dict) def test_share_show_no_share(self): self.mock_object(share_api.API, 'get', stubs.stub_share_get_notfound) req = fakes.HTTPRequest.blank('/shares/1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, '1') def test_share_delete(self): req = fakes.HTTPRequest.blank('/shares/1') resp = self.controller.delete(req, 1) self.assertEqual(202, resp.status_int) def test_share_update(self): shr = self.share body = {"share": shr} req = fakes.HTTPRequest.blank('/share/1') res_dict = self.controller.update(req, 1, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'update') self.assertEqual(shr["display_name"], res_dict['share']["name"]) self.assertEqual(shr["display_description"], res_dict['share']["description"]) self.assertEqual(shr['is_public'], res_dict['share']['is_public']) def test_share_not_updates_size(self): req = fakes.HTTPRequest.blank('/share/1') res_dict = self.controller.update(req, 1, {"share": self.share}) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'update') self.assertNotEqual(res_dict['share']["size"], self.share["size"]) def test_share_delete_no_share(self): self.mock_object(share_api.API, 'get', stubs.stub_share_get_notfound) req = fakes.HTTPRequest.blank('/shares/1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, 1) def _share_list_summary_with_search_opts(self, use_admin_context): search_opts = { 'name': 'fake_name', 'status': constants.STATUS_AVAILABLE, 'share_server_id': 'fake_share_server_id', 'share_type_id': 'fake_share_type_id', 'snapshot_id': 'fake_snapshot_id', 'share_network_id': 'fake_share_network_id', 'metadata': '%7B%27k1%27%3A+%27v1%27%7D', # serialized k1=v1 'extra_specs': '%7B%27k2%27%3A+%27v2%27%7D', # serialized k2=v2 'sort_key': 'fake_sort_key', 'sort_dir': 'fake_sort_dir', 'limit': '1', 'offset': '1', 'is_public': 'False', } if use_admin_context: search_opts['host'] = 'fake_host' # fake_key should be filtered for non-admin url = '/shares?fake_key=fake_value' for k, v in search_opts.items(): url = url + '&' + k + '=' + v req = fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context) shares = [ {'id': 'id1', 'display_name': 'n1'}, {'id': 'id2', 'display_name': 'n2'}, {'id': 'id3', 'display_name': 'n3'}, ] self.mock_object(share_api.API, 'get_all', mock.Mock(return_value=[shares[1]])) result = self.controller.index(req) search_opts_expected = { 'display_name': search_opts['name'], 'status': search_opts['status'], 'share_server_id': search_opts['share_server_id'], 'share_type_id': search_opts['share_type_id'], 'snapshot_id': search_opts['snapshot_id'], 'share_network_id': search_opts['share_network_id'], 'metadata': {'k1': 'v1'}, 'extra_specs': {'k2': 'v2'}, 'is_public': 'False', 'limit': '1', 'offset': '1' } if use_admin_context: search_opts_expected.update({'fake_key': 
'fake_value'}) search_opts_expected['host'] = search_opts['host'] share_api.API.get_all.assert_called_once_with( req.environ['manila.context'], sort_key=search_opts['sort_key'], sort_dir=search_opts['sort_dir'], search_opts=search_opts_expected, ) self.assertEqual(1, len(result['shares'])) self.assertEqual(shares[1]['id'], result['shares'][0]['id']) self.assertEqual( shares[1]['display_name'], result['shares'][0]['name']) def test_share_list_summary_with_search_opts_by_non_admin(self): self._share_list_summary_with_search_opts(use_admin_context=False) def test_share_list_summary_with_search_opts_by_admin(self): self._share_list_summary_with_search_opts(use_admin_context=True) def test_share_list_summary(self): self.mock_object(share_api.API, 'get_all', stubs.stub_share_get_all_by_project) req = fakes.HTTPRequest.blank('/shares') res_dict = self.controller.index(req) expected = { 'shares': [ { 'name': 'displayname', 'id': '1', 'links': [ { 'href': 'http://localhost/v1/fake/shares/1', 'rel': 'self' }, { 'href': 'http://localhost/fake/shares/1', 'rel': 'bookmark' } ], } ] } self.assertEqual(expected, res_dict) def _share_list_detail_with_search_opts(self, use_admin_context): search_opts = { 'name': 'fake_name', 'status': constants.STATUS_AVAILABLE, 'share_server_id': 'fake_share_server_id', 'share_type_id': 'fake_share_type_id', 'snapshot_id': 'fake_snapshot_id', 'share_network_id': 'fake_share_network_id', 'metadata': '%7B%27k1%27%3A+%27v1%27%7D', # serialized k1=v1 'extra_specs': '%7B%27k2%27%3A+%27v2%27%7D', # serialized k2=v2 'sort_key': 'fake_sort_key', 'sort_dir': 'fake_sort_dir', 'limit': '1', 'offset': '1', 'is_public': 'False', } if use_admin_context: search_opts['host'] = 'fake_host' # fake_key should be filtered for non-admin url = '/shares/detail?fake_key=fake_value' for k, v in search_opts.items(): url = url + '&' + k + '=' + v req = fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context) shares = [ {'id': 'id1', 'display_name': 'n1'}, { 'id': 'id2', 'display_name': 'n2', 'status': constants.STATUS_AVAILABLE, 'snapshot_id': 'fake_snapshot_id', 'instance': {'host': 'fake_host', 'share_network_id': 'fake_share_network_id', 'share_type_id': 'fake_share_type_id'}, }, {'id': 'id3', 'display_name': 'n3'}, ] self.mock_object(share_api.API, 'get_all', mock.Mock(return_value=[shares[1]])) result = self.controller.detail(req) search_opts_expected = { 'display_name': search_opts['name'], 'status': search_opts['status'], 'share_server_id': search_opts['share_server_id'], 'share_type_id': search_opts['share_type_id'], 'snapshot_id': search_opts['snapshot_id'], 'share_network_id': search_opts['share_network_id'], 'metadata': {'k1': 'v1'}, 'extra_specs': {'k2': 'v2'}, 'is_public': 'False', 'limit': '1', 'offset': '1' } if use_admin_context: search_opts_expected.update({'fake_key': 'fake_value'}) search_opts_expected['host'] = search_opts['host'] share_api.API.get_all.assert_called_once_with( req.environ['manila.context'], sort_key=search_opts['sort_key'], sort_dir=search_opts['sort_dir'], search_opts=search_opts_expected, ) self.assertEqual(1, len(result['shares'])) self.assertEqual(shares[1]['id'], result['shares'][0]['id']) self.assertEqual( shares[1]['display_name'], result['shares'][0]['name']) self.assertEqual( shares[1]['snapshot_id'], result['shares'][0]['snapshot_id']) self.assertEqual( shares[1]['status'], result['shares'][0]['status']) self.assertEqual( shares[1]['instance']['share_type_id'], result['shares'][0]['share_type']) self.assertEqual( shares[1]['snapshot_id'], 
result['shares'][0]['snapshot_id']) if use_admin_context: self.assertEqual( shares[1]['instance']['host'], result['shares'][0]['host']) self.assertEqual( shares[1]['instance']['share_network_id'], result['shares'][0]['share_network_id']) def test_share_list_detail_with_search_opts_by_non_admin(self): self._share_list_detail_with_search_opts(use_admin_context=False) def test_share_list_detail_with_search_opts_by_admin(self): self._share_list_detail_with_search_opts(use_admin_context=True) def _list_detail_common_expected(self, admin=False): share_dict = { 'status': 'fakestatus', 'description': 'displaydesc', 'export_location': 'fake_location', 'export_locations': ['fake_location', 'fake_location2'], 'availability_zone': 'fakeaz', 'name': 'displayname', 'share_proto': 'FAKEPROTO', 'metadata': {}, 'project_id': 'fakeproject', 'id': '1', 'snapshot_id': '2', 'snapshot_support': True, 'share_network_id': None, 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'size': 1, 'share_type': '1', 'volume_type': '1', 'is_public': False, 'links': [ { 'href': 'http://localhost/v1/fake/shares/1', 'rel': 'self' }, { 'href': 'http://localhost/fake/shares/1', 'rel': 'bookmark' } ], } if admin: share_dict['host'] = 'fakehost' return {'shares': [share_dict]} def _list_detail_test_common(self, req, expected): self.mock_object(share_api.API, 'get_all', stubs.stub_share_get_all_by_project) res_dict = self.controller.detail(req) self.assertEqual(expected, res_dict) self.assertEqual(res_dict['shares'][0]['volume_type'], res_dict['shares'][0]['share_type']) def test_share_list_detail(self): env = {'QUERY_STRING': 'name=Share+Test+Name'} req = fakes.HTTPRequest.blank('/shares/detail', environ=env) expected = self._list_detail_common_expected() expected['shares'][0].pop('snapshot_support') self._list_detail_test_common(req, expected) def test_share_list_detail_with_task_state(self): env = {'QUERY_STRING': 'name=Share+Test+Name'} req = fakes.HTTPRequest.blank('/shares/detail', environ=env, version="2.5") expected = self._list_detail_common_expected() expected['shares'][0]['task_state'] = None self._list_detail_test_common(req, expected) def test_remove_invalid_options(self): ctx = context.RequestContext('fakeuser', 'fakeproject', is_admin=False) search_opts = {'a': 'a', 'b': 'b', 'c': 'c', 'd': 'd'} expected_opts = {'a': 'a', 'c': 'c'} allowed_opts = ['a', 'c'] common.remove_invalid_options(ctx, search_opts, allowed_opts) self.assertEqual(expected_opts, search_opts) def test_remove_invalid_options_admin(self): ctx = context.RequestContext('fakeuser', 'fakeproject', is_admin=True) search_opts = {'a': 'a', 'b': 'b', 'c': 'c', 'd': 'd'} expected_opts = {'a': 'a', 'b': 'b', 'c': 'c', 'd': 'd'} allowed_opts = ['a', 'c'] common.remove_invalid_options(ctx, search_opts, allowed_opts) self.assertEqual(expected_opts, search_opts) def _fake_access_get(self, ctxt, access_id): class Access(object): def __init__(self, **kwargs): self.STATE_NEW = 'fake_new' self.STATE_ACTIVE = 'fake_active' self.STATE_ERROR = 'fake_error' self.params = kwargs self.params['state'] = self.STATE_NEW self.share_id = kwargs.get('share_id') self.id = access_id def __getitem__(self, item): return self.params[item] access = Access(access_id=access_id, share_id='fake_share_id') return access @ddt.ddt class ShareActionsTest(test.TestCase): def setUp(self): super(ShareActionsTest, self).setUp() self.controller = shares.ShareController() self.mock_object(share_api.API, 'get', stubs.stub_share_get) self.mock_policy_check = self.mock_object(policy, 'check_policy') 
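# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original module): the access-rule tests
# that follow lean on two patterns -- ddt.data() to run one test body against
# many inputs, and mock stubbing to isolate the controller from the share API.
# The validate_ip helper and test names here are assumptions invented for this
# standalone example; they are not Manila code.
import ipaddress
import unittest
from unittest import mock

import ddt


def validate_ip(value):
    """Hypothetical validator standing in for the controller's access checks."""
    ipaddress.ip_address(value)


@ddt.ddt
class ValidateIpSketchTest(unittest.TestCase):

    @ddt.data('127.0.0.1', '10.0.0.2')
    def test_valid_addresses(self, address):
        validate_ip(address)  # must not raise for well-formed addresses

    @ddt.data('localhost', '127.0.0.256')
    def test_invalid_addresses(self, address):
        self.assertRaises(ValueError, validate_ip, address)

    def test_mocked_collaborator(self):
        # mock.Mock(return_value=...) is the same stubbing style the tests
        # below apply to share_api.API methods via self.mock_object().
        checker = mock.Mock(return_value=True)
        self.assertTrue(checker('127.0.0.1'))
        checker.assert_called_once_with('127.0.0.1')
# ---------------------------------------------------------------------------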
@ddt.data( {'access_type': 'ip', 'access_to': '127.0.0.1'}, {'access_type': 'user', 'access_to': '1' * 4}, {'access_type': 'user', 'access_to': '1' * 255}, {'access_type': 'user', 'access_to': 'fake{.-_\'`}'}, {'access_type': 'user', 'access_to': 'MYDOMAIN-Administrator'}, {'access_type': 'user', 'access_to': 'test group name'}, {'access_type': 'user', 'access_to': 'group$.-_\'`{}'}, {'access_type': 'cert', 'access_to': 'x'}, {'access_type': 'cert', 'access_to': 'tenant.example.com'}, {'access_type': 'cert', 'access_to': 'x' * 64}, ) def test_allow_access(self, access): self.mock_object(share_api.API, 'allow_access', mock.Mock(return_value={'fake': 'fake'})) self.mock_object(self.controller._access_view_builder, 'view', mock.Mock(return_value={'access': {'fake': 'fake'}})) id = 'fake_share_id' body = {'os-allow_access': access} expected = {'access': {'fake': 'fake'}} req = fakes.HTTPRequest.blank('/v1/tenant1/shares/%s/action' % id) res = self.controller._allow_access(req, id, body) self.assertEqual(expected, res) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], 'share', 'allow_access') @ddt.data( {'access_type': 'error_type', 'access_to': '127.0.0.1'}, {'access_type': 'ip', 'access_to': 'localhost'}, {'access_type': 'ip', 'access_to': '127.0.0.*'}, {'access_type': 'ip', 'access_to': '127.0.0.0/33'}, {'access_type': 'ip', 'access_to': '127.0.0.256'}, {'access_type': 'user', 'access_to': '1'}, {'access_type': 'user', 'access_to': '1' * 3}, {'access_type': 'user', 'access_to': '1' * 256}, {'access_type': 'user', 'access_to': 'root<>'}, {'access_type': 'user', 'access_to': 'group\\'}, {'access_type': 'user', 'access_to': '+=*?group'}, {'access_type': 'cert', 'access_to': ''}, {'access_type': 'cert', 'access_to': ' '}, {'access_type': 'cert', 'access_to': 'x' * 65}, {'access_type': 'cephx', 'access_to': 'alice'} ) def test_allow_access_error(self, access): id = 'fake_share_id' body = {'os-allow_access': access} req = fakes.HTTPRequest.blank('/v1/tenant1/shares/%s/action' % id) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._allow_access, req, id, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], 'share', 'allow_access') def test_deny_access(self): def _stub_deny_access(*args, **kwargs): pass self.mock_object(share_api.API, "deny_access", _stub_deny_access) self.mock_object(share_api.API, "access_get", _fake_access_get) id = 'fake_share_id' body = {"os-deny_access": {"access_id": 'fake_acces_id'}} req = fakes.HTTPRequest.blank('/v1/tenant1/shares/%s/action' % id) res = self.controller._deny_access(req, id, body) self.assertEqual(202, res.status_int) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], 'share', 'deny_access') def test_deny_access_not_found(self): def _stub_deny_access(*args, **kwargs): pass self.mock_object(share_api.API, "deny_access", _stub_deny_access) self.mock_object(share_api.API, "access_get", _fake_access_get) id = 'super_fake_share_id' body = {"os-deny_access": {"access_id": 'fake_acces_id'}} req = fakes.HTTPRequest.blank('/v1/tenant1/shares/%s/action' % id) self.assertRaises(webob.exc.HTTPNotFound, self.controller._deny_access, req, id, body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], 'share', 'deny_access') @ddt.data('_allow_access', '_deny_access') def test_allow_access_deny_access_policy_not_authorized(self, method): req = fakes.HTTPRequest.blank('/v1/tenant1/shares/someuuid/action') action = method[1:] body = {action: None} 
noauthexc = exception.PolicyNotAuthorized(action=action) with mock.patch.object( policy, 'check_policy', mock.Mock(side_effect=noauthexc)): method = getattr(self.controller, method) self.assertRaises( webob.exc.HTTPForbidden, method, req, body, 'someuuid') policy.check_policy.assert_called_once_with( req.environ['manila.context'], 'share', action) def test_access_list(self): fake_access_list = [ { "state": "fakestatus", "id": "fake_access_id", "access_type": "fakeip", "access_to": "127.0.0.1", } ] self.mock_object(self.controller._access_view_builder, 'list_view', mock.Mock(return_value={'access_list': fake_access_list})) id = 'fake_share_id' body = {"os-access_list": None} req = fakes.HTTPRequest.blank('/v1/tenant1/shares/%s/action' % id) res_dict = self.controller._access_list(req, id, body) self.assertEqual({'access_list': fake_access_list}, res_dict) def test_extend(self): id = 'fake_share_id' share = stubs.stub_share_get(None, None, id) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, "extend") size = '123' body = {"os-extend": {'new_size': size}} req = fakes.HTTPRequest.blank('/v1/shares/%s/action' % id) actual_response = self.controller._extend(req, id, body) share_api.API.get.assert_called_once_with(mock.ANY, id) share_api.API.extend.assert_called_once_with( mock.ANY, share, int(size)) self.assertEqual(202, actual_response.status_int) @ddt.data({"os-extend": ""}, {"os-extend": {"new_size": "foo"}}, {"os-extend": {"new_size": {'foo': 'bar'}}}) def test_extend_invalid_body(self, body): id = 'fake_share_id' req = fakes.HTTPRequest.blank('/v1/shares/%s/action' % id) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._extend, req, id, body) @ddt.data({'source': exception.InvalidInput, 'target': webob.exc.HTTPBadRequest}, {'source': exception.InvalidShare, 'target': webob.exc.HTTPBadRequest}, {'source': exception.ShareSizeExceedsAvailableQuota, 'target': webob.exc.HTTPForbidden}) @ddt.unpack def test_extend_exception(self, source, target): id = 'fake_share_id' req = fakes.HTTPRequest.blank('/v1/shares/%s/action' % id) body = {"os-extend": {'new_size': '123'}} self.mock_object(share_api.API, "extend", mock.Mock(side_effect=source('fake'))) self.assertRaises(target, self.controller._extend, req, id, body) def test_shrink(self): id = 'fake_share_id' share = stubs.stub_share_get(None, None, id) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, "shrink") size = '123' body = {"os-shrink": {'new_size': size}} req = fakes.HTTPRequest.blank('/v1/shares/%s/action' % id) actual_response = self.controller._shrink(req, id, body) share_api.API.get.assert_called_once_with(mock.ANY, id) share_api.API.shrink.assert_called_once_with( mock.ANY, share, int(size)) self.assertEqual(202, actual_response.status_int) @ddt.data({"os-shrink": ""}, {"os-shrink": {"new_size": "foo"}}, {"os-shrink": {"new_size": {'foo': 'bar'}}}) def test_shrink_invalid_body(self, body): id = 'fake_share_id' req = fakes.HTTPRequest.blank('/v1/shares/%s/action' % id) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._shrink, req, id, body) @ddt.data({'source': exception.InvalidInput, 'target': webob.exc.HTTPBadRequest}, {'source': exception.InvalidShare, 'target': webob.exc.HTTPBadRequest}) @ddt.unpack def test_shrink_exception(self, source, target): id = 'fake_share_id' req = fakes.HTTPRequest.blank('/v1/shares/%s/action' % id) body = {"os-shrink": {'new_size': '123'}} self.mock_object(share_api.API, "shrink", 
mock.Mock(side_effect=source('fake'))) self.assertRaises(target, self.controller._shrink, req, id, body) @ddt.ddt class ShareAdminActionsAPITest(test.TestCase): def setUp(self): super(ShareAdminActionsAPITest, self).setUp() CONF.set_default("default_share_type", None) self.flags(transport_url='rabbit://fake:fake@mqhost:5672') self.share_api = share_api.API() self.admin_context = context.RequestContext('admin', 'fake', True) self.member_context = context.RequestContext('fake', 'fake') def _get_context(self, role): return getattr(self, '%s_context' % role) def _setup_share_data(self, share=None): if share is None: share = db_utils.create_share(status=constants.STATUS_AVAILABLE, size='1', override_defaults=True) req = webob.Request.blank('/v2/fake/shares/%s/action' % share['id']) return share, req def _reset_status(self, ctxt, model, req, db_access_method, valid_code, valid_status=None, body=None): if body is None: body = {'os-reset_status': {'status': constants.STATUS_ERROR}} req.method = 'POST' req.headers['content-type'] = 'application/json' req.body = six.b(jsonutils.dumps(body)) req.environ['manila.context'] = ctxt resp = req.get_response(fakes.app()) # validate response code and model status self.assertEqual(valid_code, resp.status_int) if valid_code == 404: self.assertRaises(exception.NotFound, db_access_method, ctxt, model['id']) else: actual_model = db_access_method(ctxt, model['id']) self.assertEqual(valid_status, actual_model['status']) @ddt.data( { 'role': 'admin', 'valid_code': 202, 'valid_status': constants.STATUS_ERROR, }, { 'role': 'member', 'valid_code': 403, 'valid_status': constants.STATUS_AVAILABLE, }, ) @ddt.unpack def test_share_reset_status_with_different_roles(self, role, valid_code, valid_status): share, req = self._setup_share_data() ctxt = self._get_context(role) self._reset_status(ctxt, share, req, db.share_get, valid_code, valid_status) @ddt.data(*fakes.fixture_invalid_reset_status_body) def test_share_invalid_reset_status_body(self, body): share, req = self._setup_share_data() ctxt = self.admin_context self._reset_status(ctxt, share, req, db.share_get, 400, constants.STATUS_AVAILABLE, body) def test_share_reset_status_for_missing(self): fake_share = {'id': 'missing-share-id'} req = webob.Request.blank('/v1/fake/shares/%s/action' % fake_share['id']) self._reset_status(self.admin_context, fake_share, req, db.share_snapshot_get, 404) def _force_delete(self, ctxt, model, req, db_access_method, valid_code, check_model_in_db=False): req.method = 'POST' req.headers['content-type'] = 'application/json' req.body = six.b(jsonutils.dumps({'os-force_delete': {}})) req.environ['manila.context'] = ctxt resp = req.get_response(fakes.app()) # validate response self.assertEqual(valid_code, resp.status_int) if valid_code == 202 and check_model_in_db: self.assertRaises(exception.NotFound, db_access_method, ctxt, model['id']) @ddt.data( {'role': 'admin', 'resp_code': 202}, {'role': 'member', 'resp_code': 403}, ) @ddt.unpack def test_share_force_delete_with_different_roles(self, role, resp_code): share, req = self._setup_share_data() ctxt = self._get_context(role) self._force_delete(ctxt, share, req, db.share_get, resp_code, check_model_in_db=True) def test_share_force_delete_missing(self): share, req = self._setup_share_data(share={'id': 'fake'}) ctxt = self._get_context('admin') self._force_delete(ctxt, share, req, db.share_get, 404) manila-10.0.0/manila/tests/api/v1/test_share_manage.py0000664000175000017500000002500013656750227022612 0ustar zuulzuul00000000000000# Copyright 2015 
Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt import webob from manila.api import common from manila.api.v1 import share_manage from manila.db import api as db_api from manila import exception from manila import policy from manila.share import api as share_api from manila.share import share_types from manila import test from manila.tests.api import fakes from manila import utils def get_fake_manage_body(export_path='/fake', service_host='fake@host#POOL', protocol='fake', share_type='fake', **kwargs): fake_share = { 'export_path': export_path, 'service_host': service_host, 'protocol': protocol, 'share_type': share_type, } fake_share.update(kwargs) return {'share': fake_share} @ddt.ddt class ShareManageTest(test.TestCase): """Share Manage Test.""" def setUp(self): super(ShareManageTest, self).setUp() self.controller = share_manage.ShareManageController() self.resource_name = self.controller.resource_name self.request = fakes.HTTPRequest.blank('/share/manage', use_admin_context=True) self.context = self.request.environ['manila.context'] self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) self.mock_object( common, 'validate_public_share_policy', mock.Mock(side_effect=lambda *args, **kwargs: args[1])) @ddt.data({}, {'shares': {}}, {'share': get_fake_manage_body('', None, None)}) def test_share_manage_invalid_body(self, body): self.assertRaises(webob.exc.HTTPUnprocessableEntity, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') def test_share_manage_service_not_found(self): body = get_fake_manage_body() self.mock_object(db_api, 'service_get_by_host_and_topic', mock.Mock( side_effect=exception.ServiceNotFound(service_id='fake'))) self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') def test_share_manage_share_type_not_found(self): body = get_fake_manage_body() self.mock_object(db_api, 'service_get_by_host_and_topic', mock.Mock()) self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.mock_object(db_api, 'share_type_get_by_name', mock.Mock( side_effect=exception.ShareTypeNotFoundByName( share_type_name='fake'))) self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') def _setup_manage_mocks(self, service_is_up=True): self.mock_object(db_api, 'service_get_by_host_and_topic', mock.Mock( return_value={'host': 'fake'})) self.mock_object(share_types, 'get_share_type_by_name_or_id', mock.Mock(return_value={'id': 'fake'})) self.mock_object(utils, 'service_is_up', mock.Mock( return_value=service_is_up)) @ddt.data({'service_is_up': False, 'service_host': 'fake@host#POOL'}, {'service_is_up': True, 'service_host': 'fake@host'}) def 
test_share_manage_bad_request(self, settings): body = get_fake_manage_body(service_host=settings.pop('service_host')) self._setup_manage_mocks(**settings) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') def test_share_manage_duplicate_share(self): body = get_fake_manage_body() exc = exception.InvalidShare(reason="fake") self._setup_manage_mocks() self.mock_object(share_api.API, 'manage', mock.Mock(side_effect=exc)) self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') def test_share_manage_forbidden_manage(self): body = get_fake_manage_body() self._setup_manage_mocks() error = mock.Mock(side_effect=exception.PolicyNotAuthorized(action='')) self.mock_object(share_api.API, 'manage', error) self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') def test_share_manage_forbidden_validate_service_host(self): body = get_fake_manage_body() self._setup_manage_mocks() error = mock.Mock(side_effect=exception.PolicyNotAuthorized(action='')) self.mock_object(utils, 'service_is_up', mock.Mock(side_effect=error)) self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') def test_share_manage_invalid_input(self): body = get_fake_manage_body() self._setup_manage_mocks() error = mock.Mock(side_effect=exception.InvalidInput(message="", reason="fake")) self.mock_object(share_api.API, 'manage', mock.Mock(side_effect=error)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') def test_share_manage_invalid_share_server(self): body = get_fake_manage_body() self._setup_manage_mocks() error = mock.Mock( side_effect=exception.InvalidShareServer(message="", share_server_id="") ) self.mock_object(share_api.API, 'manage', mock.Mock(side_effect=error)) self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') @ddt.data( get_fake_manage_body(name='foo', description='bar'), get_fake_manage_body(display_name='foo', description='bar'), get_fake_manage_body(name='foo', display_description='bar'), get_fake_manage_body(display_name='foo', display_description='bar'), get_fake_manage_body(display_name='foo', display_description='bar', driver_options=dict(volume_id='quuz')), ) def test_share_manage(self, data): self._setup_manage_mocks() return_share = {'share_type_id': '', 'id': 'fake'} self.mock_object( share_api.API, 'manage', mock.Mock(return_value=return_share)) share = { 'host': data['share']['service_host'], 'export_location': data['share']['export_path'], 'share_proto': data['share']['protocol'].upper(), 'share_type_id': 'fake', 'display_name': 'foo', 'display_description': 'bar', } data['share']['is_public'] = 'foo' driver_options = data['share'].get('driver_options', {}) actual_result = self.controller.create(self.request, data) share_api.API.manage.assert_called_once_with( mock.ANY, share, driver_options) self.assertIsNotNone(actual_result) 
self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') def test_share_manage_allow_dhss_true(self): self._setup_manage_mocks() data = get_fake_manage_body(name='foo', description='bar') return_share = {'share_type_id': '', 'id': 'fake'} self.mock_object( share_api.API, 'manage', mock.Mock(return_value=return_share)) share = { 'host': data['share']['service_host'], 'export_location': data['share']['export_path'], 'share_proto': data['share']['protocol'].upper(), 'share_type_id': 'fake', 'display_name': 'foo', 'display_description': 'bar', 'share_server_id': 'fake' } data['share']['share_server_id'] = 'fake' driver_options = data['share'].get('driver_options', {}) self.controller._manage(self.request, data, allow_dhss_true=True) share_api.API.manage.assert_called_once_with( self.context, share, driver_options ) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'manage') def test_wrong_permissions(self): body = get_fake_manage_body() fake_req = fakes.HTTPRequest.blank( '/share/manage', use_admin_context=False) self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, fake_req, body) self.mock_policy_check.assert_called_once_with( fake_req.environ['manila.context'], self.resource_name, 'manage') manila-10.0.0/manila/tests/api/v1/test_share_snapshots.py0000664000175000017500000004067013656750227023416 0ustar zuulzuul00000000000000# Copyright 2012 NetApp # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
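# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original test module): the snapshot
# controller tests below follow a common recipe -- stub the API layer with a
# canned return value, invoke the controller method, then assert both the
# rendered response and the exact arguments the API received.  The FakeAPI
# and SnapshotService names are assumptions made up for this standalone
# example; they are not Manila classes.
import unittest
from unittest import mock


class FakeAPI(object):
    def get_snapshot(self, context, snapshot_id):
        raise NotImplementedError()


class SnapshotService(object):
    def __init__(self, api):
        self.api = api

    def show(self, context, snapshot_id):
        snapshot = self.api.get_snapshot(context, snapshot_id)
        return {'snapshot': {'id': snapshot['id'], 'name': snapshot['name']}}


class SnapshotServiceSketchTest(unittest.TestCase):
    def test_show_uses_api_and_renders_view(self):
        api = FakeAPI()
        canned = {'id': '200', 'name': 'displaysnapname', 'size': 1}
        with mock.patch.object(api, 'get_snapshot',
                               mock.Mock(return_value=canned)) as get_mock:
            service = SnapshotService(api)
            result = service.show('fake-context', '200')
        # Verify both the call contract and the rendered view.
        get_mock.assert_called_once_with('fake-context', '200')
        self.assertEqual(
            {'snapshot': {'id': '200', 'name': 'displaysnapname'}}, result)
# ---------------------------------------------------------------------------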
from unittest import mock import ddt from oslo_serialization import jsonutils import six import webob from manila.api.v1 import share_snapshots from manila.common import constants from manila import context from manila import db from manila.share import api as share_api from manila import test from manila.tests.api.contrib import stubs from manila.tests.api import fakes from manila.tests import db_utils from manila.tests import fake_share @ddt.ddt class ShareSnapshotAPITest(test.TestCase): """Share Snapshot API Test.""" def setUp(self): super(ShareSnapshotAPITest, self).setUp() self.controller = share_snapshots.ShareSnapshotsController() self.mock_object(share_api.API, 'get', stubs.stub_share_get) self.mock_object(share_api.API, 'get_all_snapshots', stubs.stub_snapshot_get_all_by_project) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) self.mock_object(share_api.API, 'snapshot_update', stubs.stub_snapshot_update) self.snp_example = { 'share_id': 100, 'size': 12, 'force': False, 'display_name': 'updated_share_name', 'display_description': 'updated_share_description', } self.maxDiff = None def test_snapshot_show_status_none(self): return_snapshot = { 'share_id': 100, 'name': 'fake_share_name', 'description': 'fake_share_description', 'status': None, } self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=return_snapshot)) req = fakes.HTTPRequest.blank('/snapshots/200') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, '200') @ddt.data('true', 'True', ' True', '1') def test_snapshot_create(self, snapshot_support): self.mock_object(share_api.API, 'create_snapshot', stubs.stub_snapshot_create) body = { 'snapshot': { 'share_id': 'fakeshareid', 'force': False, 'name': 'displaysnapname', 'description': 'displaysnapdesc', } } req = fakes.HTTPRequest.blank('/snapshots') res_dict = self.controller.create(req, body) expected = fake_share.expected_snapshot(id=200) self.assertEqual(expected, res_dict) @ddt.data(0, False) def test_snapshot_create_no_support(self, snapshot_support): self.mock_object(share_api.API, 'create_snapshot') self.mock_object( share_api.API, 'get', mock.Mock(return_value={'snapshot_support': snapshot_support})) body = { 'snapshot': { 'share_id': 100, 'force': False, 'name': 'fake_share_name', 'description': 'fake_share_description', } } req = fakes.HTTPRequest.blank('/snapshots') self.assertRaises( webob.exc.HTTPUnprocessableEntity, self.controller.create, req, body) self.assertFalse(share_api.API.create_snapshot.called) def test_snapshot_create_no_body(self): body = {} req = fakes.HTTPRequest.blank('/snapshots') self.assertRaises(webob.exc.HTTPUnprocessableEntity, self.controller.create, req, body) def test_snapshot_delete(self): self.mock_object(share_api.API, 'delete_snapshot', stubs.stub_snapshot_delete) req = fakes.HTTPRequest.blank('/snapshots/200') resp = self.controller.delete(req, 200) self.assertEqual(202, resp.status_int) def test_snapshot_delete_nofound(self): self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get_notfound) req = fakes.HTTPRequest.blank('/snapshots/200') self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, 200) def test_snapshot_show(self): req = fakes.HTTPRequest.blank('/snapshots/200') res_dict = self.controller.show(req, 200) expected = fake_share.expected_snapshot(id=200) self.assertEqual(expected, res_dict) def test_snapshot_show_nofound(self): self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get_notfound) req = 
fakes.HTTPRequest.blank('/snapshots/200') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, '200') def test_snapshot_list_summary(self): self.mock_object(share_api.API, 'get_all_snapshots', stubs.stub_snapshot_get_all_by_project) req = fakes.HTTPRequest.blank('/snapshots') res_dict = self.controller.index(req) expected = { 'snapshots': [ { 'name': 'displaysnapname', 'id': 2, 'links': [ { 'href': 'http://localhost/v1/fake/' 'snapshots/2', 'rel': 'self' }, { 'href': 'http://localhost/fake/snapshots/2', 'rel': 'bookmark' } ], } ] } self.assertEqual(expected, res_dict) def _snapshot_list_summary_with_search_opts(self, use_admin_context): search_opts = fake_share.search_opts() # fake_key should be filtered for non-admin url = '/snapshots?fake_key=fake_value' for k, v in search_opts.items(): url = url + '&' + k + '=' + v req = fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context) snapshots = [ {'id': 'id1', 'display_name': 'n1', 'status': 'fake_status', 'share_id': 'fake_share_id'}, {'id': 'id2', 'display_name': 'n2', 'status': 'fake_status', 'share_id': 'fake_share_id'}, {'id': 'id3', 'display_name': 'n3', 'status': 'fake_status', 'share_id': 'fake_share_id'}, ] self.mock_object(share_api.API, 'get_all_snapshots', mock.Mock(return_value=snapshots)) result = self.controller.index(req) search_opts_expected = { 'display_name': search_opts['name'], 'status': search_opts['status'], 'share_id': search_opts['share_id'], } if use_admin_context: search_opts_expected.update({'fake_key': 'fake_value'}) share_api.API.get_all_snapshots.assert_called_once_with( req.environ['manila.context'], sort_key=search_opts['sort_key'], sort_dir=search_opts['sort_dir'], search_opts=search_opts_expected, ) self.assertEqual(1, len(result['snapshots'])) self.assertEqual(snapshots[1]['id'], result['snapshots'][0]['id']) self.assertEqual( snapshots[1]['display_name'], result['snapshots'][0]['name']) def test_snapshot_list_summary_with_search_opts_by_non_admin(self): self._snapshot_list_summary_with_search_opts(use_admin_context=False) def test_snapshot_list_summary_with_search_opts_by_admin(self): self._snapshot_list_summary_with_search_opts(use_admin_context=True) def _snapshot_list_detail_with_search_opts(self, use_admin_context): search_opts = fake_share.search_opts() # fake_key should be filtered for non-admin url = '/shares/detail?fake_key=fake_value' for k, v in search_opts.items(): url = url + '&' + k + '=' + v req = fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context) snapshots = [ { 'id': 'id1', 'display_name': 'n1', 'status': 'fake_status_other', 'aggregate_status': 'fake_status', 'share_id': 'fake_share_id', }, { 'id': 'id2', 'display_name': 'n2', 'status': 'fake_status', 'aggregate_status': 'fake_status', 'share_id': 'fake_share_id', }, { 'id': 'id3', 'display_name': 'n3', 'status': 'fake_status_other', 'aggregate_status': 'fake_status', 'share_id': 'fake_share_id', }, ] self.mock_object(share_api.API, 'get_all_snapshots', mock.Mock(return_value=snapshots)) result = self.controller.detail(req) search_opts_expected = { 'display_name': search_opts['name'], 'status': search_opts['status'], 'share_id': search_opts['share_id'], } if use_admin_context: search_opts_expected.update({'fake_key': 'fake_value'}) share_api.API.get_all_snapshots.assert_called_once_with( req.environ['manila.context'], sort_key=search_opts['sort_key'], sort_dir=search_opts['sort_dir'], search_opts=search_opts_expected, ) self.assertEqual(1, len(result['snapshots'])) 
self.assertEqual(snapshots[1]['id'], result['snapshots'][0]['id']) self.assertEqual( snapshots[1]['display_name'], result['snapshots'][0]['name']) self.assertEqual( snapshots[1]['status'], result['snapshots'][0]['status']) self.assertEqual( snapshots[1]['share_id'], result['snapshots'][0]['share_id']) def test_snapshot_list_detail_with_search_opts_by_non_admin(self): self._snapshot_list_detail_with_search_opts(use_admin_context=False) def test_snapshot_list_detail_with_search_opts_by_admin(self): self._snapshot_list_detail_with_search_opts(use_admin_context=True) def test_snapshot_list_detail(self): env = {'QUERY_STRING': 'name=Share+Test+Name'} req = fakes.HTTPRequest.blank('/shares/detail', environ=env) res_dict = self.controller.detail(req) expected_s = fake_share.expected_snapshot(id=2) expected = {'snapshots': [expected_s['snapshot']]} self.assertEqual(expected, res_dict) def test_snapshot_list_status_none(self): snapshots = [ { 'id': 2, 'share_id': 'fakeshareid', 'size': 1, 'status': 'fakesnapstatus', 'name': 'displaysnapname', 'description': 'displaysnapdesc', }, { 'id': 3, 'share_id': 'fakeshareid', 'size': 1, 'status': None, 'name': 'displaysnapname', 'description': 'displaysnapdesc', } ] self.mock_object(share_api.API, 'get_all_snapshots', mock.Mock(return_value=snapshots)) req = fakes.HTTPRequest.blank('/snapshots') result = self.controller.index(req) self.assertEqual(1, len(result['snapshots'])) self.assertEqual(snapshots[0]['id'], result['snapshots'][0]['id']) def test_snapshot_updates_description(self): snp = self.snp_example body = {"snapshot": snp} req = fakes.HTTPRequest.blank('/snapshot/1') res_dict = self.controller.update(req, 1, body) self.assertEqual(snp["display_name"], res_dict['snapshot']["name"]) def test_snapshot_updates_display_descr(self): snp = self.snp_example body = {"snapshot": snp} req = fakes.HTTPRequest.blank('/snapshot/1') res_dict = self.controller.update(req, 1, body) self.assertEqual(snp["display_description"], res_dict['snapshot']["description"]) def test_share_not_updates_size(self): snp = self.snp_example body = {"snapshot": snp} req = fakes.HTTPRequest.blank('/snapshot/1') res_dict = self.controller.update(req, 1, body) self.assertNotEqual(snp["size"], res_dict['snapshot']["size"]) @ddt.ddt class ShareSnapshotAdminActionsAPITest(test.TestCase): def setUp(self): super(ShareSnapshotAdminActionsAPITest, self).setUp() self.controller = share_snapshots.ShareSnapshotsController() self.flags(transport_url='rabbit://fake:fake@mqhost:5672') self.admin_context = context.RequestContext('admin', 'fake', True) self.member_context = context.RequestContext('fake', 'fake') def _get_context(self, role): return getattr(self, '%s_context' % role) def _setup_snapshot_data(self, snapshot=None): if snapshot is None: share = db_utils.create_share() snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) req = fakes.HTTPRequest.blank('/v1/fake/snapshots/%s/action' % snapshot['id']) return snapshot, req def _reset_status(self, ctxt, model, req, db_access_method, valid_code, valid_status=None, body=None): action_name = 'os-reset_status' if body is None: body = {action_name: {'status': constants.STATUS_ERROR}} req.method = 'POST' req.headers['content-type'] = 'application/json' req.body = six.b(jsonutils.dumps(body)) req.environ['manila.context'] = ctxt resp = req.get_response(fakes.app()) # validate response code and model status self.assertEqual(valid_code, resp.status_int) actual_model = db_access_method(ctxt, model['id']) 
self.assertEqual(valid_status, actual_model['status']) @ddt.data(*fakes.fixture_reset_status_with_different_roles_v1) @ddt.unpack def test_snapshot_reset_status_with_different_roles(self, role, valid_code, valid_status): ctxt = self._get_context(role) snapshot, req = self._setup_snapshot_data() self._reset_status(ctxt, snapshot, req, db.share_snapshot_get, valid_code, valid_status) @ddt.data( {'os-reset_status': {'x-status': 'bad'}}, {'os-reset_status': {'status': 'invalid'}}, ) def test_snapshot_invalid_reset_status_body(self, body): snapshot, req = self._setup_snapshot_data() self._reset_status(self.admin_context, snapshot, req, db.share_snapshot_get, 400, constants.STATUS_AVAILABLE, body) def _force_delete(self, ctxt, model, req, db_access_method, valid_code): action_name = 'os-force_delete' req.method = 'POST' req.headers['content-type'] = 'application/json' req.body = six.b(jsonutils.dumps({action_name: {}})) req.environ['manila.context'] = ctxt resp = req.get_response(fakes.app()) # Validate response self.assertEqual(valid_code, resp.status_int) @ddt.data( {'role': 'admin', 'resp_code': 202}, {'role': 'member', 'resp_code': 403}, ) @ddt.unpack def test_snapshot_force_delete_with_different_roles(self, role, resp_code): ctxt = self._get_context(role) snapshot, req = self._setup_snapshot_data() self._force_delete(ctxt, snapshot, req, db.share_snapshot_get, resp_code) def test_snapshot_force_delete_missing(self): ctxt = self._get_context('admin') snapshot, req = self._setup_snapshot_data(snapshot={'id': 'fake'}) self._force_delete(ctxt, snapshot, req, db.share_snapshot_get, 404) manila-10.0.0/manila/tests/api/v1/test_share_metadata.py0000664000175000017500000003064713656750227023157 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
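# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original module): the metadata tests
# below build raw WSGI request bodies by JSON-encoding a dict and converting
# it to bytes with six.b() before handing it to the controller.  This
# standalone example shows the same encode/decode round trip; it assumes only
# the oslo.serialization and six packages that the module already imports.
from oslo_serialization import jsonutils
import six


def build_metadata_body(metadata):
    """Serialize a metadata dict into a request body, as the tests below do."""
    return six.b(jsonutils.dumps({'metadata': metadata}))


def read_metadata_body(raw):
    """Decode a request body back into a dict for assertions."""
    return jsonutils.loads(raw)


if __name__ == '__main__':
    body = build_metadata_body({'key9': 'value9'})
    assert read_metadata_body(body) == {'metadata': {'key9': 'value9'}}
# ---------------------------------------------------------------------------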
import ddt from oslo_config import cfg from oslo_serialization import jsonutils import six import webob from manila.api.v1 import share_metadata from manila.api.v1 import shares from manila import context from manila import db from manila.share import api from manila import test from manila.tests.api import fakes CONF = cfg.CONF @ddt.ddt class ShareMetaDataTest(test.TestCase): def setUp(self): super(ShareMetaDataTest, self).setUp() self.share_api = api.API() self.share_controller = shares.ShareController() self.controller = share_metadata.ShareMetadataController() self.ctxt = context.RequestContext('admin', 'fake', True) self.origin_metadata = { "key1": "value1", "key2": "value2", "key3": "value3", } self.share = db.share_create(self.ctxt, {}) self.share_id = self.share['id'] self.url = '/shares/%s/metadata' % self.share_id db.share_metadata_update( self.ctxt, self.share_id, self.origin_metadata, delete=False) def test_index(self): req = fakes.HTTPRequest.blank(self.url) res_dict = self.controller.index(req, self.share_id) expected = { 'metadata': { 'key1': 'value1', 'key2': 'value2', 'key3': 'value3', }, } self.assertEqual(expected, res_dict) def test_index_nonexistent_share(self): req = fakes.HTTPRequest.blank(self.url) self.assertRaises(webob.exc.HTTPNotFound, self.controller.index, req, self.url) def test_index_no_data(self): db.share_metadata_update( self.ctxt, self.share_id, {}, delete=True) req = fakes.HTTPRequest.blank(self.url) res_dict = self.controller.index(req, self.share_id) expected = {'metadata': {}} self.assertEqual(expected, res_dict) def test_show(self): req = fakes.HTTPRequest.blank(self.url + '/key2') res_dict = self.controller.show(req, self.share_id, 'key2') expected = {'meta': {'key2': 'value2'}} self.assertEqual(expected, res_dict) def test_show_nonexistent_share(self): req = fakes.HTTPRequest.blank(self.url + '/key2') self.assertRaises( webob.exc.HTTPNotFound, self.controller.show, req, "nonexistent_share", 'key2') def test_show_meta_not_found(self): req = fakes.HTTPRequest.blank(self.url + '/key6') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, self.share_id, 'key6') def test_delete(self): req = fakes.HTTPRequest.blank(self.url + '/key2') req.method = 'DELETE' res = self.controller.delete(req, self.share_id, 'key2') self.assertEqual(200, res.status_int) def test_delete_nonexistent_share(self): req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'DELETE' self.assertRaises( webob.exc.HTTPNotFound, self.controller.delete, req, "nonexistent_share", 'key1') def test_delete_meta_not_found(self): req = fakes.HTTPRequest.blank(self.url + '/key6') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, self.share_id, 'key6') def test_create(self): req = fakes.HTTPRequest.blank('/v1/share_metadata') req.method = 'POST' req.content_type = "application/json" body = {"metadata": {"key9": "value9"}} req.body = six.b(jsonutils.dumps(body)) res_dict = self.controller.create(req, self.share_id, body) expected = self.origin_metadata expected.update(body['metadata']) self.assertEqual({'metadata': expected}, res_dict) def test_create_empty_body(self): req = fakes.HTTPRequest.blank(self.url) req.method = 'POST' req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, self.share_id, None) def test_create_item_empty_key(self): req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"meta": {"": "value1"}} req.body = 
six.b(jsonutils.dumps(body)) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, self.share_id, body) def test_create_item_key_too_long(self): req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"meta": {("a" * 260): "value1"}} req.body = six.b(jsonutils.dumps(body)) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, self.share_id, body) def test_create_nonexistent_share(self): req = fakes.HTTPRequest.blank('/v1/share_metadata') req.method = 'POST' req.content_type = "application/json" body = {"metadata": {"key9": "value9"}} req.body = six.b(jsonutils.dumps(body)) self.assertRaises( webob.exc.HTTPNotFound, self.controller.create, req, "nonexistent_share", body) def test_update_all(self): req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' req.content_type = "application/json" expected = { 'metadata': { 'key10': 'value10', 'key99': 'value99', }, } req.body = six.b(jsonutils.dumps(expected)) res_dict = self.controller.update_all(req, self.share_id, expected) self.assertEqual(expected, res_dict) def test_update_all_empty_container(self): req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' req.content_type = "application/json" expected = {'metadata': {}} req.body = six.b(jsonutils.dumps(expected)) res_dict = self.controller.update_all(req, self.share_id, expected) self.assertEqual(expected, res_dict) def test_update_all_malformed_container(self): req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' req.content_type = "application/json" expected = {'meta': {}} req.body = six.b(jsonutils.dumps(expected)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update_all, req, self.share_id, expected) @ddt.data(['asdf'], {'key': None}, {None: 'value'}, {None: None}) def test_update_all_malformed_data(self, metadata): req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' req.content_type = "application/json" expected = {'metadata': metadata} req.body = six.b(jsonutils.dumps(expected)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update_all, req, self.share_id, expected) def test_update_all_nonexistent_share(self): req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' req.content_type = "application/json" body = {'metadata': {'key10': 'value10'}} req.body = six.b(jsonutils.dumps(body)) self.assertRaises(webob.exc.HTTPNotFound, self.controller.update_all, req, '100', body) def test_update_item(self): req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = six.b(jsonutils.dumps(body)) req.headers["content-type"] = "application/json" res_dict = self.controller.update(req, self.share_id, 'key1', body) expected = {'meta': {'key1': 'value1'}} self.assertEqual(expected, res_dict) def test_update_item_nonexistent_share(self): req = fakes.HTTPRequest.blank('/v1.1/fake/shares/asdf/metadata/key1') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = six.b(jsonutils.dumps(body)) req.headers["content-type"] = "application/json" self.assertRaises( webob.exc.HTTPNotFound, self.controller.update, req, "nonexistent_share", 'key1', body) def test_update_item_empty_body(self): req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, self.share_id, 'key1', None) def test_update_item_empty_key(self): req 
= fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"meta": {"": "value1"}} req.body = six.b(jsonutils.dumps(body)) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, self.share_id, '', body) def test_update_item_key_too_long(self): req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"meta": {("a" * 260): "value1"}} req.body = six.b(jsonutils.dumps(body)) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, self.share_id, ("a" * 260), body) def test_update_item_value_too_long(self): req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"meta": {"key1": ("a" * 1025)}} req.body = six.b(jsonutils.dumps(body)) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, self.share_id, "key1", body) def test_update_item_too_many_keys(self): req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"meta": {"key1": "value1", "key2": "value2"}} req.body = six.b(jsonutils.dumps(body)) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, self.share_id, 'key1', body) def test_update_item_body_uri_mismatch(self): req = fakes.HTTPRequest.blank(self.url + '/bad') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = six.b(jsonutils.dumps(body)) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, self.share_id, 'bad', body) def test_invalid_metadata_items_on_create(self): req = fakes.HTTPRequest.blank(self.url) req.method = 'POST' req.headers["content-type"] = "application/json" # test for long key data = {"metadata": {"a" * 260: "value1"}} req.body = six.b(jsonutils.dumps(data)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, self.share_id, data) # test for long value data = {"metadata": {"key": "v" * 1025}} req.body = six.b(jsonutils.dumps(data)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, self.share_id, data) # test for empty key. data = {"metadata": {"": "value1"}} req.body = six.b(jsonutils.dumps(data)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, self.share_id, data) manila-10.0.0/manila/tests/api/middleware/0000775000175000017500000000000013656750362020361 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/middleware/__init__.py0000664000175000017500000000000013656750227022460 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/middleware/test_faults.py0000664000175000017500000001611213656750227023271 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
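# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original module): the fault-middleware
# tests below exercise WSGI apps built with webob.dec.wsgify that raise
# webob.exc errors, then assert on the serialized response.  This standalone
# example shows the bare webob pattern with no Manila fault wrapping; it
# assumes only the webob package that the module already imports.
import unittest

import webob
import webob.dec
import webob.exc


@webob.dec.wsgify
def failing_app(req):
    # An HTTPException raised from a wsgify-wrapped app becomes the response.
    raise webob.exc.HTTPNotFound(explanation='no such resource')


class FailingAppSketchTest(unittest.TestCase):
    def test_not_found_is_rendered(self):
        resp = webob.Request.blank('/missing').get_response(failing_app)
        self.assertEqual(404, resp.status_int)
        self.assertIn(b'no such resource', resp.body)
# ---------------------------------------------------------------------------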
from oslo_serialization import jsonutils import six import webob import webob.dec import webob.exc from manila.api.middleware import fault from manila.api.openstack import wsgi from manila import exception from manila import test class TestFaults(test.TestCase): """Tests covering `manila.api.openstack.faults:Fault` class.""" def _prepare_xml(self, xml_string): """Remove characters from string which hinder XML equality testing.""" xml_string = xml_string.replace(" ", "") xml_string = xml_string.replace("\n", "") xml_string = xml_string.replace("\t", "") return xml_string def test_400_fault_json(self): """Test fault serialized to JSON via file-extension and/or header.""" requests = [ webob.Request.blank('/.json'), webob.Request.blank('/', headers={"Accept": "application/json"}), ] for request in requests: fault = wsgi.Fault(webob.exc.HTTPBadRequest(explanation='scram')) response = request.get_response(fault) expected = { "badRequest": { "message": "scram", "code": 400, }, } actual = jsonutils.loads(response.body) self.assertEqual("application/json", response.content_type) self.assertEqual(expected, actual) def test_413_fault_json(self): """Test fault serialized to JSON via file-extension and/or header.""" requests = [ webob.Request.blank('/.json'), webob.Request.blank('/', headers={"Accept": "application/json"}), ] for request in requests: exc = webob.exc.HTTPRequestEntityTooLarge fault = wsgi.Fault(exc(explanation='sorry', headers={'Retry-After': 4})) response = request.get_response(fault) expected = { "overLimit": { "message": "sorry", "code": 413, "retryAfter": '4', }, } actual = jsonutils.loads(response.body) self.assertEqual("application/json", response.content_type) self.assertEqual(expected, actual) def test_raise(self): """Ensure the ability to raise :class:`Fault` in WSGI-ified methods.""" @webob.dec.wsgify def raiser(req): raise wsgi.Fault(webob.exc.HTTPNotFound(explanation='whut?')) req = webob.Request.blank('/.json') resp = req.get_response(raiser) self.assertEqual("application/json", resp.content_type) self.assertEqual(404, resp.status_int) self.assertIn(six.b('whut?'), resp.body) def test_raise_403(self): """Ensure the ability to raise :class:`Fault` in WSGI-ified methods.""" @webob.dec.wsgify def raiser(req): raise wsgi.Fault(webob.exc.HTTPForbidden(explanation='whut?')) req = webob.Request.blank('/.json') resp = req.get_response(raiser) self.assertEqual("application/json", resp.content_type) self.assertEqual(403, resp.status_int) self.assertNotIn(six.b('resizeNotAllowed'), resp.body) self.assertIn(six.b('forbidden'), resp.body) def test_fault_has_status_int(self): """Ensure the status_int is set correctly on faults.""" fault = wsgi.Fault(webob.exc.HTTPBadRequest(explanation='what?')) self.assertEqual(400, fault.status_int) class ExceptionTest(test.TestCase): def _wsgi_app(self, inner_app): return fault.FaultWrapper(inner_app) def _do_test_exception_safety_reflected_in_faults(self, expose): class ExceptionWithSafety(exception.ManilaException): safe = expose @webob.dec.wsgify def fail(req): raise ExceptionWithSafety('some explanation') api = self._wsgi_app(fail) resp = webob.Request.blank('/').get_response(api) self.assertIn('{"computeFault', six.text_type(resp.body), resp.body) expected = ('ExceptionWithSafety: some explanation' if expose else 'The server has either erred or is incapable ' 'of performing the requested operation.') self.assertIn(expected, six.text_type(resp.body), resp.body) self.assertEqual(500, resp.status_int, resp.body) def 
test_safe_exceptions_are_described_in_faults(self): self._do_test_exception_safety_reflected_in_faults(True) def test_unsafe_exceptions_are_not_described_in_faults(self): self._do_test_exception_safety_reflected_in_faults(False) def _do_test_exception_mapping(self, exception_type, msg): @webob.dec.wsgify def fail(req): raise exception_type(msg) api = self._wsgi_app(fail) resp = webob.Request.blank('/').get_response(api) self.assertIn(msg, six.text_type(resp.body), resp.body) self.assertEqual(exception_type.code, resp.status_int, resp.body) if hasattr(exception_type, 'headers'): for (key, value) in exception_type.headers.items(): self.assertIn(key, resp.headers) self.assertEqual(value, resp.headers[key]) def test_quota_error_mapping(self): self._do_test_exception_mapping(exception.QuotaError, 'too many used') def test_non_manila_notfound_exception_mapping(self): class ExceptionWithCode(Exception): code = 404 self._do_test_exception_mapping(ExceptionWithCode, 'NotFound') def test_non_manila_exception_mapping(self): class ExceptionWithCode(Exception): code = 417 self._do_test_exception_mapping(ExceptionWithCode, 'Expectation failed') def test_exception_with_none_code_throws_500(self): class ExceptionWithNoneCode(Exception): code = None @webob.dec.wsgify def fail(req): raise ExceptionWithNoneCode() api = self._wsgi_app(fail) resp = webob.Request.blank('/').get_response(api) self.assertEqual(500, resp.status_int) def test_validate_request_unicode_decode_fault(self): @webob.dec.wsgify def unicode_error(req): raise UnicodeDecodeError("ascii", "test".encode(), 0, 1, "bad") api = self._wsgi_app(unicode_error) resp = webob.Request.blank('/test?foo=%88').get_response(api) self.assertEqual(400, resp.status_int) manila-10.0.0/manila/tests/api/middleware/test_auth.py0000664000175000017500000000440213656750227022733 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack, LLC # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import webob

import manila.api.middleware.auth
from manila import test


class TestManilaKeystoneContextMiddleware(test.TestCase):

    def setUp(self):
        super(TestManilaKeystoneContextMiddleware, self).setUp()

        @webob.dec.wsgify()
        def fake_app(req):
            self.context = req.environ['manila.context']
            return webob.Response()

        self.context = None
        self.middleware = (manila.api.middleware.auth
                           .ManilaKeystoneContext(fake_app))
        self.request = webob.Request.blank('/')
        self.request.headers['X_TENANT_ID'] = 'testtenantid'
        self.request.headers['X_AUTH_TOKEN'] = 'testauthtoken'

    def test_no_user_or_user_id(self):
        response = self.request.get_response(self.middleware)
        self.assertEqual('401 Unauthorized', response.status)

    def test_user_only(self):
        self.request.headers['X_USER_ID'] = 'testuserid'
        response = self.request.get_response(self.middleware)
        self.assertEqual('200 OK', response.status)
        self.assertEqual('testuserid', self.context.user_id)

    def test_user_id_only(self):
        self.request.headers['X_USER'] = 'testuser'
        response = self.request.get_response(self.middleware)
        self.assertEqual('200 OK', response.status)
        self.assertEqual('testuser', self.context.user_id)

    def test_user_id_trumps_user(self):
        self.request.headers['X_USER_ID'] = 'testuserid'
        self.request.headers['X_USER'] = 'testuser'
        response = self.request.get_response(self.middleware)
        self.assertEqual('200 OK', response.status)
        self.assertEqual('testuserid', self.context.user_id)
manila-10.0.0/manila/tests/api/v2/0000775000175000017500000000000013656750362016573 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/v2/test_services.py0000664000175000017500000002566313656750227022033 0ustar zuulzuul00000000000000# Copyright 2012 IBM Corp.
# Copyright 2014 Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime from unittest import mock import ddt from oslo_utils import timeutils from manila.api.v2 import services from manila import context from manila import db from manila import exception from manila import policy from manila import test from manila.tests.api import fakes fake_services_list = [ { 'binary': 'manila-scheduler', 'host': 'host1', 'availability_zone': {'name': 'manila1'}, 'id': 1, 'disabled': True, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'created_at': datetime.datetime(2012, 9, 18, 2, 46, 27), }, { 'binary': 'manila-share', 'host': 'host1', 'availability_zone': {'name': 'manila1'}, 'id': 2, 'disabled': True, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'created_at': datetime.datetime(2012, 9, 18, 2, 46, 27)}, { 'binary': 'manila-scheduler', 'host': 'host2', 'availability_zone': {'name': 'manila2'}, 'id': 3, 'disabled': False, 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'created_at': datetime.datetime(2012, 9, 18, 2, 46, 28)}, { 'binary': 'manila-share', 'host': 'host2', 'availability_zone': {'name': 'manila2'}, 'id': 4, 'disabled': True, 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'created_at': datetime.datetime(2012, 9, 18, 2, 46, 28), }, ] fake_response_service_list = {'services': [ { 'id': 1, 'binary': 'manila-scheduler', 'host': 'host1', 'zone': 'manila1', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), }, { 'id': 2, 'binary': 'manila-share', 'host': 'host1', 'zone': 'manila1', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), }, { 'id': 3, 'binary': 'manila-scheduler', 'host': 'host2', 'zone': 'manila2', 'status': 'enabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), }, { 'id': 4, 'binary': 'manila-share', 'host': 'host2', 'zone': 'manila2', 'status': 'disabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), }, ]} def fake_service_get_all(context): return fake_services_list def fake_service_get_by_host_binary(context, host, binary): for service in fake_services_list: if service['host'] == host and service['binary'] == binary: return service return None def fake_service_get_by_id(value): for service in fake_services_list: if service['id'] == value: return service return None def fake_service_update(context, service_id, values): service = fake_service_get_by_id(service_id) if service is None: raise exception.ServiceNotFound(service_id=service_id) else: {'host': 'host1', 'binary': 'manila-share', 'disabled': values['disabled']} def fake_utcnow(): return datetime.datetime(2012, 10, 29, 13, 42, 11) @ddt.ddt class ServicesTest(test.TestCase): def setUp(self): super(ServicesTest, self).setUp() self.mock_object(db, "service_get_all", fake_service_get_all) self.mock_object(timeutils, "utcnow", fake_utcnow) self.mock_object(db, "service_get_by_args", fake_service_get_by_host_binary) self.mock_object(db, "service_update", fake_service_update) self.context = context.get_admin_context() self.controller = services.ServiceController() self.controller_legacy = services.ServiceControllerLegacy() self.resource_name = self.controller.resource_name self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) @ddt.data( ('os-services', '1.0', services.ServiceControllerLegacy), ('os-services', '2.6', services.ServiceControllerLegacy), ('services', '2.7', services.ServiceController), ) @ddt.unpack def test_services_list(self, url, version, 
controller): req = fakes.HTTPRequest.blank('/%s' % url, version=version) req.environ['manila.context'] = self.context res_dict = controller().index(req) self.assertEqual(fake_response_service_list, res_dict) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'index') def test_services_list_with_host(self): req = fakes.HTTPRequest.blank('/services?host=host1', version='2.7') req.environ['manila.context'] = self.context res_dict = self.controller.index(req) response = {'services': [ fake_response_service_list['services'][0], fake_response_service_list['services'][1], ]} self.assertEqual(response, res_dict) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'index') def test_services_list_with_binary(self): req = fakes.HTTPRequest.blank( '/services?binary=manila-share', version='2.7') req.environ['manila.context'] = self.context res_dict = self.controller.index(req) response = {'services': [ fake_response_service_list['services'][1], fake_response_service_list['services'][3], ]} self.assertEqual(response, res_dict) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'index') def test_services_list_with_zone(self): req = fakes.HTTPRequest.blank('/services?zone=manila1', version='2.7') req.environ['manila.context'] = self.context res_dict = self.controller.index(req) response = {'services': [ fake_response_service_list['services'][0], fake_response_service_list['services'][1], ]} self.assertEqual(response, res_dict) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'index') def test_services_list_with_status(self): req = fakes.HTTPRequest.blank( '/services?status=enabled', version='2.7') req.environ['manila.context'] = self.context res_dict = self.controller.index(req) response = {'services': [ fake_response_service_list['services'][2], ]} self.assertEqual(response, res_dict) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'index') def test_services_list_with_state(self): req = fakes.HTTPRequest.blank('/services?state=up', version='2.7') req.environ['manila.context'] = self.context res_dict = self.controller.index(req) response = {'services': [ fake_response_service_list['services'][0], fake_response_service_list['services'][1], ]} self.assertEqual(response, res_dict) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'index') def test_services_list_with_host_binary(self): req = fakes.HTTPRequest.blank( "/services?binary=manila-share&state=up", version='2.7') req.environ['manila.context'] = self.context res_dict = self.controller.index(req) response = {'services': [fake_response_service_list['services'][1], ]} self.assertEqual(response, res_dict) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'index') @ddt.data( ('os-services', '1.0', services.ServiceControllerLegacy), ('os-services', '2.6', services.ServiceControllerLegacy), ('services', '2.7', services.ServiceController), ) @ddt.unpack def test_services_enable(self, url, version, controller): body = {'host': 'host1', 'binary': 'manila-share'} req = fakes.HTTPRequest.blank('/fooproject/%s' % url, version=version) res_dict = controller().update(req, "enable", body) self.assertFalse(res_dict['disabled']) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 
'update') @ddt.data( ('os-services', '1.0', services.ServiceControllerLegacy), ('os-services', '2.6', services.ServiceControllerLegacy), ('services', '2.7', services.ServiceController), ) @ddt.unpack def test_services_disable(self, url, version, controller): req = fakes.HTTPRequest.blank( '/fooproject/%s/disable' % url, version=version) body = {'host': 'host1', 'binary': 'manila-share'} res_dict = controller().update(req, "disable", body) self.assertTrue(res_dict['disabled']) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'update') @ddt.data( ('os-services', '2.7', services.ServiceControllerLegacy), ('services', '2.6', services.ServiceController), ('services', '1.0', services.ServiceController), ) @ddt.unpack def test_services_update_legacy_url_2_dot_7_api_not_found(self, url, version, controller): req = fakes.HTTPRequest.blank( '/fooproject/%s/fake' % url, version=version) body = {'host': 'host1', 'binary': 'manila-share'} self.assertRaises( exception.VersionNotFoundForAPIMethod, controller().update, req, "disable", body, ) @ddt.data( ('os-services', '2.7', services.ServiceControllerLegacy), ('services', '2.6', services.ServiceController), ('services', '1.0', services.ServiceController), ) @ddt.unpack def test_services_list_api_not_found(self, url, version, controller): req = fakes.HTTPRequest.blank('/fooproject/%s' % url, version=version) self.assertRaises( exception.VersionNotFoundForAPIMethod, controller().index, req) manila-10.0.0/manila/tests/api/v2/__init__.py0000664000175000017500000000000013656750227020672 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/v2/test_security_services.py0000664000175000017500000000600413656750227023756 0ustar zuulzuul00000000000000# Copyright 2018 SAP SE # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from unittest import mock

import datetime
import ddt

from manila.api.v1 import security_service
from manila.common import constants
from manila import context
from manila import test
from manila.tests.api import fakes


def stub_security_service(self, version, id):
    ss_dict = dict(
        id=id,
        name='security_service_%s' % str(id),
        type=constants.SECURITY_SERVICES_ALLOWED_TYPES[0],
        description='Fake Security Service Desc',
        dns_ip='1.1.1.1',
        server='fake-server',
        domain='fake-domain',
        user='fake-user',
        password='fake-password',
        status=constants.STATUS_NEW,
        share_networks=[],
        created_at=datetime.datetime(2017, 8, 24, 1, 1, 1, 1),
        updated_at=datetime.datetime(2017, 8, 24, 1, 1, 1, 1),
        project_id='fake-project'
    )
    if self.is_microversion_ge(version, '2.44'):
        ss_dict['ou'] = 'fake-ou'
    return ss_dict


@ddt.ddt
class SecurityServicesAPITest(test.TestCase):

    @ddt.data(
        ('2.0'),
        ('2.43'),
        ('2.44'),
    )
    def test_index(self, version):
        ss = [
            stub_security_service(self, version, 1),
            stub_security_service(self, version, 2),
        ]
        ctxt = context.RequestContext('admin', 'fake', True)
        request = fakes.HTTPRequest.blank('/security-services?all_tenants=1',
                                          version=version)
        request.headers['X-Openstack-Manila-Api-Version'] = version
        request.environ['manila.context'] = ctxt
        self.mock_object(security_service.db, 'security_service_get_all',
                         mock.Mock(return_value=ss))
        self.mock_object(security_service.db,
                         'share_network_get_all_by_security_service',
                         mock.Mock(return_value=[]))
        ss_controller = security_service.SecurityServiceController()

        result = ss_controller.detail(request)

        self.assertIsInstance(result, dict)
        self.assertEqual(['security_services'], list(result.keys()))
        self.assertIsInstance(result['security_services'], list)
        self.assertEqual(2, len(result['security_services']))
        self.assertIn(ss[0], result['security_services'])

        ss_keys = list(result['security_services'][0].keys())
        if self.is_microversion_ge(version, '2.44'):
            self.assertIn('ou', ss_keys)
        else:
            self.assertNotIn('ou', ss_keys)
manila-10.0.0/manila/tests/api/v2/test_share_servers.py0000664000175000017500000004626013656750227023067 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock import ddt import webob from manila.api.v2 import share_servers from manila.common import constants from manila.db import api as db_api from manila import exception from manila import policy from manila.share import api as share_api from manila import test from manila.tests.api import fakes from manila.tests import db_utils from manila import utils @ddt.ddt class ShareServerControllerTest(test.TestCase): """Share server api test""" def setUp(self): super(ShareServerControllerTest, self).setUp() self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) self.controller = share_servers.ShareServerController() self.resource_name = self.controller.resource_name @ddt.data(constants.STATUS_ACTIVE, constants.STATUS_ERROR, constants.STATUS_DELETING, constants.STATUS_CREATING, constants.STATUS_MANAGING, constants.STATUS_UNMANAGING, constants.STATUS_UNMANAGE_ERROR, constants.STATUS_MANAGE_ERROR) def test_share_server_reset_status(self, status): req = fakes.HTTPRequest.blank('/v2/share-servers/fake-share-server/', use_admin_context=True, version="2.49") body = {'reset_status': {'status': status}} context = req.environ['manila.context'] mock_update = self.mock_object(db_api, 'share_server_update') result = self.controller.share_server_reset_status( req, 'fake_server_id', body) self.assertEqual(202, result.status_int) policy.check_policy.assert_called_once_with( context, self.resource_name, 'reset_status') mock_update.assert_called_once_with( context, 'fake_server_id', {'status': status}) def test_share_reset_server_status_invalid(self): req = fakes.HTTPRequest.blank('/reset_status', use_admin_context=True, version="2.49") body = {'reset_status': {'status': constants.STATUS_EXTENDING}} context = req.environ['manila.context'] self.assertRaises( webob.exc.HTTPBadRequest, self.controller.share_server_reset_status, req, id='fake_server_id', body=body) policy.check_policy.assert_called_once_with( context, self.resource_name, 'reset_status') def test_share_server_reset_status_no_body(self): req = fakes.HTTPRequest.blank('/reset_status', use_admin_context=True, version="2.49") context = req.environ['manila.context'] self.assertRaises( webob.exc.HTTPBadRequest, self.controller.share_server_reset_status, req, id='fake_server_id', body={}) policy.check_policy.assert_called_once_with( context, self.resource_name, 'reset_status') def test_share_server_reset_status_no_status(self): req = fakes.HTTPRequest.blank('/reset_status', use_admin_context=True, version="2.49") context = req.environ['manila.context'] self.assertRaises( webob.exc.HTTPBadRequest, self.controller.share_server_reset_status, req, id='fake_server_id', body={'reset_status': {}}) policy.check_policy.assert_called_once_with( context, self.resource_name, 'reset_status') def _setup_manage_test_request_body(self): body = { 'share_network_id': 'fake_net_id', 'share_network_subnet_id': 'fake_subnet_id', 'host': 'fake_host', 'identifier': 'fake_identifier', 'driver_options': {'opt1': 'fake_opt1', 'opt2': 'fake_opt2'}, } return body @ddt.data('fake_net_name', '') def test_manage(self, share_net_name): """Tests share server manage""" req = fakes.HTTPRequest.blank('/v2/share-servers/', use_admin_context=True, version="2.49") context = req.environ['manila.context'] share_network = db_utils.create_share_network(name=share_net_name) share_net_subnet = db_utils.create_share_network_subnet( share_network_id=share_network['id']) share_server = db_utils.create_share_server( 
share_network_subnet_id=share_net_subnet['id'], host='fake_host', identifier='fake_identifier', is_auto_deletable=False) self.mock_object(db_api, 'share_network_get', mock.Mock( return_value=share_network)) self.mock_object(db_api, 'share_network_subnet_get_default_subnet', mock.Mock(return_value=share_net_subnet)) self.mock_object(utils, 'validate_service_host') body = { 'share_server': self._setup_manage_test_request_body() } manage_share_server_mock = self.mock_object( share_api.API, 'manage_share_server', mock.Mock(return_value=share_server)) result = self.controller.manage(req, body) expected_result = { 'share_server': { 'id': share_server['id'], 'project_id': 'fake', 'updated_at': None, 'status': constants.STATUS_ACTIVE, 'host': 'fake_host', 'share_network_id': share_server['share_network_subnet']['share_network_id'], 'created_at': share_server['created_at'], 'backend_details': {}, 'identifier': share_server['identifier'], 'is_auto_deletable': share_server['is_auto_deletable'], } } if share_net_name != '': expected_result['share_server']['share_network_name'] = ( 'fake_net_name') else: expected_result['share_server']['share_network_name'] = ( share_net_subnet['share_network_id']) req_params = body['share_server'] manage_share_server_mock.assert_called_once_with( context, req_params['identifier'], req_params['host'], share_net_subnet, req_params['driver_options']) self.assertEqual(expected_result, result) self.mock_policy_check.assert_called_once_with( context, self.resource_name, 'manage_share_server') def test_manage_invalid(self): req = fakes.HTTPRequest.blank('/manage_share_server', use_admin_context=True, version="2.49") context = req.environ['manila.context'] share_network = db_utils.create_share_network() share_net_subnet = db_utils.create_share_network_subnet( share_network_id=share_network['id']) body = { 'share_server': self._setup_manage_test_request_body() } self.mock_object(utils, 'validate_service_host') self.mock_object(db_api, 'share_network_get', mock.Mock(return_value=share_network)) self.mock_object(db_api, 'share_network_subnet_get_default_subnet', mock.Mock(return_value=share_net_subnet)) manage_share_server_mock = self.mock_object( share_api.API, 'manage_share_server', mock.Mock(side_effect=exception.InvalidInput('foobar'))) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.manage, req, body) req_params = body['share_server'] manage_share_server_mock.assert_called_once_with( context, req_params['identifier'], req_params['host'], share_net_subnet, req_params['driver_options']) def test_manage_forbidden(self): """Tests share server manage without admin privileges""" req = fakes.HTTPRequest.blank('/manage_share_server', version="2.49") error = mock.Mock(side_effect=exception.PolicyNotAuthorized(action='')) self.mock_object(share_api.API, 'manage_share_server', error) share_network = db_utils.create_share_network() share_net_subnet = db_utils.create_share_network_subnet( share_network_id=share_network['id']) self.mock_object(db_api, 'share_network_get', mock.Mock( return_value=share_network)) self.mock_object(db_api, 'share_network_subnet_get_default_subnet', mock.Mock(return_value=share_net_subnet)) self.mock_object(utils, 'validate_service_host') body = { 'share_server': self._setup_manage_test_request_body() } self.assertRaises(webob.exc.HTTPForbidden, self.controller.manage, req, body) def test__validate_manage_share_server_validate_no_body(self): """Tests share server manage""" req = fakes.HTTPRequest.blank('/manage', version="2.49") body = {} 
self.assertRaises(webob.exc.HTTPUnprocessableEntity, self.controller.manage, req, body) @ddt.data({'empty': False, 'key': 'host'}, {'empty': False, 'key': 'share_network_id'}, {'empty': False, 'key': 'identifier'}, {'empty': True, 'key': 'host'}, {'empty': True, 'key': 'share_network_id'}, {'empty': True, 'key': 'identifier'}) @ddt.unpack def test__validate_manage_share_server_validate_without_parameters( self, empty, key): """Tests share server manage without some parameters""" req = fakes.HTTPRequest.blank('/manage_share_server', version="2.49") self.mock_object(share_api.API, 'manage_share_server', mock.Mock()) body = { 'share_server': self._setup_manage_test_request_body(), } if empty: body['share_server'][key] = None else: body['share_server'].pop(key) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.manage, req, body) @ddt.data( (webob.exc.HTTPBadRequest, exception.ServiceNotFound('foobar')), (webob.exc.HTTPBadRequest, exception.ServiceIsDown('foobar')), (webob.exc.HTTPForbidden, exception.PolicyNotAuthorized('foobar')), (webob.exc.HTTPForbidden, exception.AdminRequired()) ) @ddt.unpack def test__validate_manage_share_server_validate_service_host( self, exception_to_raise, side_effect_exception): req = fakes.HTTPRequest.blank('/manage', version="2.49") context = req.environ['manila.context'] error = mock.Mock(side_effect=side_effect_exception) self.mock_object(utils, 'validate_service_host', error) share_network = db_utils.create_share_network() share_net_subnet = db_utils.create_share_network_subnet( share_network_id=share_network['id']) self.mock_object(db_api, 'share_network_get', mock.Mock( return_value=share_network)) self.mock_object(db_api, 'share_network_subnet_get_default_subnet', mock.Mock(return_value=share_net_subnet)) self.assertRaises( exception_to_raise, self.controller.manage, req, {'share_server': self._setup_manage_test_request_body()}) policy.check_policy.assert_called_once_with( context, self.resource_name, 'manage_share_server') def test__validate_manage_share_server_share_network_not_found(self): req = fakes.HTTPRequest.blank('/manage', version="2.49") context = req.environ['manila.context'] self.mock_object(utils, 'validate_service_host') error = mock.Mock( side_effect=exception.ShareNetworkNotFound(share_network_id="foo")) self.mock_object(db_api, 'share_network_get', error) body = self._setup_manage_test_request_body() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.manage, req, {'share_server': body}) policy.check_policy.assert_called_once_with( context, self.resource_name, 'manage_share_server') def test__validate_manage_share_server_driver_opts_not_instance_dict(self): req = fakes.HTTPRequest.blank('/manage', version="2.49") context = req.environ['manila.context'] self.mock_object(utils, 'validate_service_host') self.mock_object(db_api, 'share_network_get') body = self._setup_manage_test_request_body() body['driver_options'] = 'incorrect' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.manage, req, {'share_server': body}) policy.check_policy.assert_called_once_with( context, self.resource_name, 'manage_share_server') def test__validate_manage_share_server_error_extract_host(self): req = fakes.HTTPRequest.blank('/manage', version="2.49") context = req.environ['manila.context'] body = self._setup_manage_test_request_body() body['host'] = 'fake@backend#pool' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.manage, req, {'share_server': body}) policy.check_policy.assert_called_once_with( context, 
self.resource_name, 'manage_share_server') @ddt.data(True, False) def test__validate_manage_share_server_error_subnet_not_found( self, body_contains_subnet): req = fakes.HTTPRequest.blank('/manage', version="2.51") context = req.environ['manila.context'] share_network = db_utils.create_share_network() body = {'share_server': self._setup_manage_test_request_body()} share_net_subnet = db_utils.create_share_network_subnet( share_network_id=share_network['id']) body['share_server']['share_network_subnet_id'] = ( share_net_subnet['id'] if body_contains_subnet else None) self.mock_object( db_api, 'share_network_subnet_get', mock.Mock(side_effect=exception.ShareNetworkSubnetNotFound( share_network_subnet_id='fake'))) self.mock_object(db_api, 'share_network_subnet_get_default_subnet', mock.Mock(return_value=None)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.manage, req, body) policy.check_policy.assert_called_once_with( context, self.resource_name, 'manage_share_server') if body_contains_subnet: db_api.share_network_subnet_get.assert_called_once_with( context, share_net_subnet['id']) else: (db_api.share_network_subnet_get_default_subnet .assert_called_once_with( context, body['share_server']['share_network_id'])) @ddt.data(True, False) def test_unmanage(self, force): server = self._setup_unmanage_tests() req = fakes.HTTPRequest.blank('/unmanage', version="2.49") context = req.environ['manila.context'] mock_get = self.mock_object( db_api, 'share_server_get', mock.Mock(return_value=server)) mock_unmanage = self.mock_object( share_api.API, 'unmanage_share_server', mock.Mock(return_value=202)) body = {'unmanage': {'force': force}} resp = self.controller.unmanage(req, server['id'], body) self.assertEqual(202, resp.status_int) mock_get.assert_called_once_with(context, server['id']) mock_unmanage.assert_called_once_with(context, server, force=force) def test_unmanage_share_server_not_found(self): """Tests unmanaging share servers""" req = fakes.HTTPRequest.blank('/v2/share-servers/fake_server_id/', version="2.49") context = req.environ['manila.context'] share_server_error = mock.Mock( side_effect=exception.ShareServerNotFound( share_server_id='fake_server_id')) get_mock = self.mock_object( db_api, 'share_server_get', share_server_error) body = {'unmanage': {'force': True}} self.assertRaises(webob.exc.HTTPNotFound, self.controller.unmanage, req, 'fake_server_id', body) get_mock.assert_called_once_with(context, 'fake_server_id') @ddt.data(constants.STATUS_MANAGING, constants.STATUS_DELETING, constants.STATUS_CREATING, constants.STATUS_UNMANAGING) def test_unmanage_share_server_invalid_statuses(self, status): """Tests unmanaging share servers""" server = self._setup_unmanage_tests(status=status) get_mock = self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=server)) req = fakes.HTTPRequest.blank('/unmanage_share_server', version="2.49") context = req.environ['manila.context'] body = {'unmanage': {'force': True}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.unmanage, req, server['id'], body) get_mock.assert_called_once_with(context, server['id']) def _setup_unmanage_tests(self, status=constants.STATUS_ACTIVE): server = db_utils.create_share_server( id='fake_server_id', status=status) self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=server)) return server @ddt.data(exception.ShareServerInUse, exception.PolicyNotAuthorized) def test_unmanage_share_server_badrequest(self, exc): req = fakes.HTTPRequest.blank('/unmanage', version="2.49") server = 
self._setup_unmanage_tests() context = req.environ['manila.context'] error = mock.Mock(side_effect=exc('foobar')) mock_unmanage = self.mock_object( share_api.API, 'unmanage_share_server', error) body = {'unmanage': {'force': True}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.unmanage, req, 'fake_server_id', body) mock_unmanage.assert_called_once_with(context, server, force=True) policy.check_policy.assert_called_once_with( context, self.resource_name, 'unmanage_share_server') manila-10.0.0/manila/tests/api/v2/test_share_group_types.py0000664000175000017500000007533413656750227023762 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import datetime from unittest import mock import ddt from oslo_config import cfg import webob from manila.api.v2 import share_group_types as types from manila import exception from manila import policy from manila.share_group import share_group_types from manila import test from manila.tests.api import fakes CONF = cfg.CONF PROJ1_UUID = '11111111-1111-1111-1111-111111111111' PROJ2_UUID = '22222222-2222-2222-2222-222222222222' PROJ3_UUID = '33333333-3333-3333-3333-333333333333' SHARE_TYPE_ID = '4b1e460f-8bc5-4a97-989b-739a2eceaec6' GROUP_TYPE_1 = { 'id': 'c8d7bf70-0db9-4b3e-8498-055dd0306461', 'name': u'group type 1', 'deleted': False, 'created_at': datetime.datetime(2012, 1, 1, 1, 1, 1, 1), 'updated_at': None, 'deleted_at': None, 'is_public': True, 'group_specs': {}, 'share_types': [], } GROUP_TYPE_2 = { 'id': 'f93f7a1f-62d7-4e7e-b9e6-72eec95a47f5', 'name': u'group type 2', 'deleted': False, 'created_at': datetime.datetime(2012, 1, 1, 1, 1, 1, 1), 'updated_at': None, 'deleted_at': None, 'is_public': False, 'group_specs': {'consistent_snapshots': 'true'}, 'share_types': [{'share_type_id': SHARE_TYPE_ID}], } GROUP_TYPE_3 = { 'id': '61fdcbed-db27-4cc0-8938-8b4f74c2ae59', 'name': u'group type 3', 'deleted': False, 'created_at': datetime.datetime(2012, 1, 1, 1, 1, 1, 1), 'updated_at': None, 'deleted_at': None, 'is_public': True, 'group_specs': {}, 'share_types': [], } SG_GRADUATION_VERSION = '2.55' def fake_request(url, admin=False, version='2.31', experimental=True, **kwargs): return fakes.HTTPRequest.blank( url, use_admin_context=admin, experimental=experimental, version=version, **kwargs ) @ddt.ddt class ShareGroupTypesAPITest(test.TestCase): def setUp(self): super(ShareGroupTypesAPITest, self).setUp() self.flags(host='fake') self.controller = types.ShareGroupTypesController() self.resource_name = self.controller.resource_name self.mock_object(policy, 'check_policy', mock.Mock(return_value=True)) @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_share_group_types_index(self, microversion, experimental): fake_types = {GROUP_TYPE_1['name']: GROUP_TYPE_1} mock_get_all = self.mock_object( share_group_types, 'get_all', mock.Mock(return_value=fake_types)) req = fake_request('/v2/fake/share-group-types', admin=False, version=microversion, 
experimental=experimental) expected_list = [{ 'id': GROUP_TYPE_1['id'], 'name': GROUP_TYPE_1['name'], 'is_public': True, 'group_specs': {}, 'share_types': [], }] if self.is_microversion_ge(microversion, '2.46'): expected_list[0]['is_default'] = False res_dict = self.controller.index(req) mock_get_all.assert_called_once_with( mock.ANY, search_opts={"is_public": True}) self.assertEqual(1, len(res_dict['share_group_types'])) self.assertEqual(expected_list, res_dict['share_group_types']) def test_share_group_types_index_as_admin(self): fake_types = { GROUP_TYPE_1['name']: GROUP_TYPE_1, GROUP_TYPE_2['name']: GROUP_TYPE_2, } mock_get_all = self.mock_object( share_group_types, 'get_all', mock.Mock(return_value=fake_types)) req = fake_request( '/v2/fake/share-group-types?is_public=all', admin=True) expected_type_1 = { 'id': GROUP_TYPE_1['id'], 'name': GROUP_TYPE_1['name'], 'is_public': True, 'group_specs': {}, 'share_types': [], } expected_type_2 = { 'id': GROUP_TYPE_2['id'], 'name': GROUP_TYPE_2['name'], 'is_public': False, 'group_specs': {'consistent_snapshots': 'true'}, 'share_types': [SHARE_TYPE_ID], } res_dict = self.controller.index(req) mock_get_all.assert_called_once_with( mock.ANY, search_opts={'is_public': None}) self.assertEqual(2, len(res_dict['share_group_types'])) self.assertIn(expected_type_1, res_dict['share_group_types']) self.assertIn(expected_type_2, res_dict['share_group_types']) def test_share_group_types_index_as_admin_default_public_only(self): fake_types = {} mock_get_all = self.mock_object( share_group_types, 'get_all', mock.Mock(return_value=fake_types)) req = fake_request('/v2/fake/share-group-types', admin=True) self.controller.index(req) mock_get_all.assert_called_once_with( mock.ANY, search_opts={'is_public': True}) def test_share_group_types_index_not_experimental(self): self.mock_object( share_group_types, 'get_all', mock.Mock(return_value={})) req = fake_request('/v2/fake/share-group-types', experimental=False, version='2.54') self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.index, req) self.assertFalse(share_group_types.get_all.called) def test_share_group_types_index_older_api_version(self): self.mock_object( share_group_types, 'get_all', mock.Mock(return_value={})) req = fake_request('/v2/fake/share-group-types', version='2.1') self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.index, req) @ddt.data(True, False) def test_share_group_types_index_no_data(self, admin): self.mock_object( share_group_types, 'get_all', mock.Mock(return_value={})) req = fake_request('/v2/fake/share-group-types', admin=admin) res_dict = self.controller.index(req) self.assertEqual(0, len(res_dict['share_group_types'])) @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_share_group_types_show(self, microversion, experimental): mock_get = self.mock_object( share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_1)) req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_1['id'], version=microversion, experimental=experimental) expected_type = { 'id': GROUP_TYPE_1['id'], 'name': GROUP_TYPE_1['name'], 'is_public': True, 'group_specs': {}, 'share_types': [], } if self.is_microversion_ge(microversion, '2.46'): expected_type['is_default'] = False res_dict = self.controller.show(req, GROUP_TYPE_1['id']) mock_get.assert_called_once_with(mock.ANY, GROUP_TYPE_1['id']) self.assertEqual(expected_type, res_dict['share_group_type']) def 
test_share_group_types_show_with_share_types(self): mock_get = self.mock_object( share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_2)) req = fake_request('/v2/fake/group-types/%s' % GROUP_TYPE_2['id']) expected_type = { 'id': GROUP_TYPE_2['id'], 'name': GROUP_TYPE_2['name'], 'is_public': False, 'group_specs': {'consistent_snapshots': 'true'}, 'share_types': [SHARE_TYPE_ID], } res_dict = self.controller.show(req, GROUP_TYPE_2['id']) mock_get.assert_called_once_with(mock.ANY, GROUP_TYPE_2['id']) self.assertEqual(expected_type, res_dict['share_group_type']) def test_share_group_types_show_not_found(self): mock_get = self.mock_object( share_group_types, 'get', mock.Mock(side_effect=exception.ShareGroupTypeNotFound( type_id=GROUP_TYPE_2['id']))) req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id']) self.assertRaises( webob.exc.HTTPNotFound, self.controller.show, req, GROUP_TYPE_2['id']) mock_get.assert_called_once_with(mock.ANY, GROUP_TYPE_2['id']) @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_share_group_types_default(self, microversion, experimental): mock_get = self.mock_object( share_group_types, 'get_default', mock.Mock(return_value=GROUP_TYPE_2)) req = fake_request('/v2/fake/share-group-types/default', version=microversion, experimental=experimental) expected_type = { 'id': GROUP_TYPE_2['id'], 'name': GROUP_TYPE_2['name'], 'is_public': False, 'group_specs': {'consistent_snapshots': 'true'}, 'share_types': [SHARE_TYPE_ID], } if self.is_microversion_ge(microversion, '2.46'): expected_type['is_default'] = False res_dict = self.controller.default(req) mock_get.assert_called_once_with(mock.ANY) self.assertEqual(expected_type, res_dict['share_group_type']) def test_share_group_types_default_not_found(self): mock_get = self.mock_object( share_group_types, 'get_default', mock.Mock(return_value=None)) req = fake_request('/v2/fake/share-group-types/default') self.assertRaises(webob.exc.HTTPNotFound, self.controller.default, req) mock_get.assert_called_once_with(mock.ANY) @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_share_group_types_delete(self, microversion, experimental): mock_get = self.mock_object( share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_1)) mock_destroy = self.mock_object(share_group_types, 'destroy') req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_1['id'], version=microversion, experimental=experimental) self.controller.delete(req, GROUP_TYPE_1['id']) mock_get.assert_called_once_with(mock.ANY, GROUP_TYPE_1['id']) mock_destroy.assert_called_once_with(mock.ANY, GROUP_TYPE_1['id']) def test_share_group_types_delete_not_found(self): mock_get = self.mock_object( share_group_types, 'get', mock.Mock(side_effect=exception.ShareGroupTypeNotFound( type_id=GROUP_TYPE_2['id']))) req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id']) self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, GROUP_TYPE_2['id']) mock_get.assert_called_once_with(mock.ANY, GROUP_TYPE_2['id']) @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_create_minimal(self, microversion, experimental): fake_type = copy.deepcopy(GROUP_TYPE_1) fake_type['share_types'] = [{'share_type_id': SHARE_TYPE_ID}] mock_create = self.mock_object(share_group_types, 
'create') mock_get = self.mock_object( share_group_types, 'get_by_name', mock.Mock(return_value=fake_type)) req = fake_request('/v2/fake/share-group-types', version=microversion, experimental=experimental) fake_body = {'share_group_type': { 'name': GROUP_TYPE_1['name'], 'share_types': [SHARE_TYPE_ID], }} expected_type = { 'id': GROUP_TYPE_1['id'], 'name': GROUP_TYPE_1['name'], 'is_public': True, 'group_specs': {}, 'share_types': [SHARE_TYPE_ID], } if self.is_microversion_ge(microversion, '2.46'): expected_type['is_default'] = False res_dict = self.controller.create(req, fake_body) mock_create.assert_called_once_with( mock.ANY, GROUP_TYPE_1['name'], [SHARE_TYPE_ID], {}, True) mock_get.assert_called_once_with(mock.ANY, GROUP_TYPE_1['name']) self.assertEqual(expected_type, res_dict['share_group_type']) @ddt.data( None, {'my_fake_group_spec': 'false'}, ) def test_create_with_group_specs(self, specs): fake_type = copy.deepcopy(GROUP_TYPE_1) fake_type['share_types'] = [{'share_type_id': SHARE_TYPE_ID}] fake_type['group_specs'] = specs mock_create = self.mock_object(share_group_types, 'create') mock_get = self.mock_object( share_group_types, 'get_by_name', mock.Mock(return_value=fake_type)) req = fake_request('/v2/fake/share-group-types') fake_body = {'share_group_type': { 'name': GROUP_TYPE_1['name'], 'share_types': [SHARE_TYPE_ID], 'group_specs': specs, }} expected_type = { 'id': GROUP_TYPE_1['id'], 'name': GROUP_TYPE_1['name'], 'is_public': True, 'group_specs': specs, 'share_types': [SHARE_TYPE_ID], } res_dict = self.controller.create(req, fake_body) mock_create.assert_called_once_with( mock.ANY, GROUP_TYPE_1['name'], [SHARE_TYPE_ID], specs, True) mock_get.assert_called_once_with(mock.ANY, GROUP_TYPE_1['name']) self.assertEqual(expected_type, res_dict['share_group_type']) @ddt.data( 'str', ['l', 'i', 's', 't'], set([1]), ('t', 'u', 'p', 'l', 'e'), 1, {"foo": 1}, {1: "foo"}, {"foo": "bar", "quuz": []} ) def test_create_with_wrong_group_specs(self, specs): fake_type = copy.deepcopy(GROUP_TYPE_1) fake_type['share_types'] = [{'share_type_id': SHARE_TYPE_ID}] fake_type['group_specs'] = specs mock_create = self.mock_object(share_group_types, 'create') mock_get = self.mock_object( share_group_types, 'get_by_name', mock.Mock(return_value=fake_type)) req = fake_request('/v2/fake/share-group-types') fake_body = {'share_group_type': { 'name': GROUP_TYPE_1['name'], 'share_types': [SHARE_TYPE_ID], 'group_specs': specs, }} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, req, fake_body) self.assertEqual(0, mock_create.call_count) self.assertEqual(0, mock_get.call_count) def test_create_private_share_group_type(self): fake_type = copy.deepcopy(GROUP_TYPE_1) fake_type['share_types'] = [{'share_type_id': SHARE_TYPE_ID}] fake_type['is_public'] = False mock_create = self.mock_object(share_group_types, 'create') mock_get = self.mock_object( share_group_types, 'get_by_name', mock.Mock(return_value=fake_type)) req = fake_request('/v2/fake/share-group-types') fake_body = {'share_group_type': { 'name': GROUP_TYPE_1['name'], 'share_types': [SHARE_TYPE_ID], 'is_public': False }} expected_type = { 'id': GROUP_TYPE_1['id'], 'name': GROUP_TYPE_1['name'], 'is_public': False, 'group_specs': {}, 'share_types': [SHARE_TYPE_ID], } res_dict = self.controller.create(req, fake_body) mock_create.assert_called_once_with( mock.ANY, GROUP_TYPE_1['name'], [SHARE_TYPE_ID], {}, False) mock_get.assert_called_once_with(mock.ANY, GROUP_TYPE_1['name']) self.assertEqual(expected_type, res_dict['share_group_type']) def 
test_create_invalid_request_duplicate_name(self): mock_create = self.mock_object( share_group_types, 'create', mock.Mock(side_effect=exception.ShareGroupTypeExists( type_id=GROUP_TYPE_1['name']))) req = fake_request('/v2/fake/sahre-group-types') fake_body = {'share_group_type': { 'name': GROUP_TYPE_1['name'], 'share_types': [SHARE_TYPE_ID], }} self.assertRaises( webob.exc.HTTPConflict, self.controller.create, req, fake_body) mock_create.assert_called_once_with( mock.ANY, GROUP_TYPE_1['name'], [SHARE_TYPE_ID], {}, True) def test_create_invalid_request_missing_name(self): req = fake_request('/v2/fake/share-group-types') fake_body = {'share_group_type': {'share_types': [SHARE_TYPE_ID]}} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, req, fake_body) def test_create_invalid_request_missing_share_types(self): req = fake_request('/v2/fake/share-group-types') fake_body = {'share_group_type': {'name': GROUP_TYPE_1['name']}} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, req, fake_body) def test_create_provided_share_type_does_not_exist(self): req = fake_request('/v2/fake/share-group-types', admin=True) fake_body = { 'share_group_type': { 'name': GROUP_TYPE_1['name'], 'share_types': SHARE_TYPE_ID + '_does_not_exist', } } self.assertRaises( webob.exc.HTTPNotFound, self.controller.create, req, fake_body) @ddt.data(('2.45', True), ('2.45', False), ('2.46', True), ('2.46', False)) @ddt.unpack def test_share_group_types_create_with_is_default_key(self, version, admin): # is_default is false fake_type = copy.deepcopy(GROUP_TYPE_1) fake_type['share_types'] = [{'share_type_id': SHARE_TYPE_ID}] self.mock_object(share_group_types, 'create') self.mock_object( share_group_types, 'get_by_name', mock.Mock(return_value=fake_type)) req = fake_request('/v2/fake/share-group-types', version=version, admin=admin) fake_body = {'share_group_type': { 'name': GROUP_TYPE_1['name'], 'share_types': [SHARE_TYPE_ID], }} res_dict = self.controller.create(req, fake_body) if self.is_microversion_ge(version, '2.46'): self.assertIn('is_default', res_dict['share_group_type']) self.assertIs(False, res_dict['share_group_type']['is_default']) else: self.assertNotIn('is_default', res_dict['share_group_type']) # is_default is true default_type_name = 'group type 3' CONF.set_default('default_share_group_type', default_type_name) fake_type = copy.deepcopy(GROUP_TYPE_3) fake_type['share_types'] = [{'share_type_id': SHARE_TYPE_ID}] self.mock_object(share_group_types, 'create') self.mock_object( share_group_types, 'get_by_name', mock.Mock(return_value=fake_type)) req = fake_request('/v2/fake/share-group-types', version=version, admin=admin) fake_body = {'share_group_type': { 'name': GROUP_TYPE_3['name'], 'share_types': [SHARE_TYPE_ID], }} res_dict = self.controller.create(req, fake_body) if self.is_microversion_ge(version, '2.46'): self.assertIn('is_default', res_dict['share_group_type']) self.assertIs(True, res_dict['share_group_type']['is_default']) else: self.assertNotIn('is_default', res_dict['share_group_type']) @ddt.data(('2.45', True), ('2.45', False), ('2.46', True), ('2.46', False)) @ddt.unpack def test_share_group_types_list_with_is_default_key(self, version, admin): fake_types = { GROUP_TYPE_1['name']: GROUP_TYPE_1, GROUP_TYPE_2['name']: GROUP_TYPE_2, } self.mock_object( share_group_types, 'get_all', mock.Mock(return_value=fake_types)) req = fake_request( '/v2/fake/share-group-types?is_public=all', version=version, admin=admin) res_dict = self.controller.index(req) for res in 
res_dict['share_group_types']: if self.is_microversion_ge(version, '2.46'): self.assertIn('is_default', res) self.assertIs(False, res['is_default']) else: self.assertNotIn('is_default', res) self.assertEqual(2, len(res_dict['share_group_types'])) @ddt.data(('2.45', True), ('2.45', False), ('2.46', True), ('2.46', False)) @ddt.unpack def test_shares_group_types_show_with_is_default_key(self, version, admin): self.mock_object( share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_2)) req = fake_request('/v2/fake/group-types/%s' % GROUP_TYPE_2['id'], version=version, admin=admin) res_dict = self.controller.show(req, GROUP_TYPE_2['id']) if self.is_microversion_ge(version, '2.46'): self.assertIn('is_default', res_dict['share_group_type']) self.assertIs(False, res_dict['share_group_type']['is_default']) else: self.assertNotIn('is_default', res_dict['share_group_type']) @ddt.ddt class ShareGroupTypeAccessTest(test.TestCase): def setUp(self): super(ShareGroupTypeAccessTest, self).setUp() self.controller = types.ShareGroupTypesController() def test_list_type_access_public(self): self.mock_object( share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_1)) req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_1['id'], admin=True) self.assertRaises( webob.exc.HTTPNotFound, self.controller.share_group_type_access, req, GROUP_TYPE_1['id']) def test_list_type_access_private(self): fake_type = copy.deepcopy(GROUP_TYPE_2) fake_type['projects'] = [PROJ2_UUID, PROJ3_UUID] mock_get = self.mock_object( share_group_types, 'get', mock.Mock(return_value=fake_type)) expected = {'share_group_type_access': [ {'share_group_type_id': fake_type['id'], 'project_id': PROJ2_UUID}, {'share_group_type_id': fake_type['id'], 'project_id': PROJ3_UUID}, ]} req = fake_request( '/v2/fake/share-group-types/%s' % fake_type['id'], admin=True) actual = self.controller.share_group_type_access(req, fake_type['id']) mock_get.assert_called_once_with( mock.ANY, fake_type['id'], expected_fields=['projects']) self.assertEqual(expected, actual) def test_list_type_access_type_not_found(self): self.mock_object( share_group_types, 'get', mock.Mock(side_effect=exception.ShareGroupTypeNotFound( type_id=GROUP_TYPE_2['id']))) req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id'], admin=True) self.assertRaises( webob.exc.HTTPNotFound, self.controller.share_group_type_access, req, GROUP_TYPE_2['id']) @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_add_project_access(self, microversion, experimental): self.mock_object(share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_2)) mock_add_access = self.mock_object( share_group_types, 'add_share_group_type_access') body = {'addProjectAccess': {'project': PROJ1_UUID}} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id'], admin=True, experimental=experimental, version=microversion ) response = self.controller.add_project_access( req, GROUP_TYPE_2['id'], body) mock_add_access.assert_called_once_with( mock.ANY, GROUP_TYPE_2['id'], PROJ1_UUID) self.assertEqual(202, response.status_code) def test_add_project_access_non_existent_type(self): self.mock_object( share_group_types, 'get', mock.Mock(side_effect=exception.ShareGroupTypeNotFound( type_id=GROUP_TYPE_2['id']))) body = {'addProjectAccess': {'project': PROJ1_UUID}} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id'], admin=True) self.assertRaises( webob.exc.HTTPNotFound, 
self.controller.add_project_access, req, GROUP_TYPE_2['id'], body) def test_add_project_access_missing_project_in_body(self): body = {'addProjectAccess': {}} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id'], admin=True) self.assertRaises( webob.exc.HTTPBadRequest, self.controller.add_project_access, req, GROUP_TYPE_2['id'], body) def test_add_project_access_missing_add_project_access_in_body(self): body = {} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id'], admin=True) self.assertRaises( webob.exc.HTTPBadRequest, self.controller.add_project_access, req, GROUP_TYPE_2['id'], body) def test_add_project_access_with_already_added_access(self): self.mock_object( share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_2)) mock_add_access = self.mock_object( share_group_types, 'add_share_group_type_access', mock.Mock(side_effect=exception.ShareGroupTypeAccessExists( type_id=GROUP_TYPE_2['id'], project_id=PROJ1_UUID)) ) body = {'addProjectAccess': {'project': PROJ1_UUID}} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id'], admin=True) self.assertRaises( webob.exc.HTTPConflict, self.controller.add_project_access, req, GROUP_TYPE_2['id'], body) mock_add_access.assert_called_once_with( mock.ANY, GROUP_TYPE_2['id'], PROJ1_UUID) def test_add_project_access_to_public_share_type(self): self.mock_object( share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_1)) body = {'addProjectAccess': {'project': PROJ1_UUID}} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_1['id'], admin=True) self.assertRaises( webob.exc.HTTPConflict, self.controller.add_project_access, req, GROUP_TYPE_1['id'], body) @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_remove_project_access(self, microversion, experimental): self.mock_object( share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_2)) mock_remove_access = self.mock_object( share_group_types, 'remove_share_group_type_access') body = {'removeProjectAccess': {'project': PROJ1_UUID}} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id'], admin=True, version=microversion, experimental=experimental) response = self.controller.remove_project_access( req, GROUP_TYPE_2['id'], body) mock_remove_access.assert_called_once_with( mock.ANY, GROUP_TYPE_2['id'], PROJ1_UUID) self.assertEqual(202, response.status_code) def test_remove_project_access_nonexistent_rule(self): self.mock_object( share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_2)) mock_remove_access = self.mock_object( share_group_types, 'remove_share_group_type_access', mock.Mock( side_effect=exception.ShareGroupTypeAccessNotFound( type_id=GROUP_TYPE_2['id'], project_id=PROJ1_UUID))) body = {'removeProjectAccess': {'project': PROJ1_UUID}} req = fake_request('/v2/fake/group-types/%s' % GROUP_TYPE_2['id'], admin=True) self.assertRaises( webob.exc.HTTPNotFound, self.controller.remove_project_access, req, GROUP_TYPE_2['id'], body) mock_remove_access.assert_called_once_with( mock.ANY, GROUP_TYPE_2['id'], PROJ1_UUID) def test_remove_project_access_from_public_share_type(self): self.mock_object( share_group_types, 'get', mock.Mock(return_value=GROUP_TYPE_1)) body = {'removeProjectAccess': {'project': PROJ1_UUID}} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_1['id'], admin=True) self.assertRaises(webob.exc.HTTPConflict, self.controller.remove_project_access, req, GROUP_TYPE_1['id'], body) def 
test_remove_project_access_non_existent_type(self): self.mock_object( share_group_types, 'get', mock.Mock(side_effect=exception.ShareGroupTypeNotFound( type_id=GROUP_TYPE_2['id']))) body = {'removeProjectAccess': {'project': PROJ1_UUID}} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id'], admin=True) self.assertRaises(webob.exc.HTTPNotFound, self.controller.remove_project_access, req, GROUP_TYPE_2['id'], body) def test_remove_project_access_missing_project_in_body(self): body = {'removeProjectAccess': {}} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id'], admin=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.remove_project_access, req, GROUP_TYPE_2['id'], body) def test_remove_project_access_missing_remove_project_access_in_body(self): body = {} req = fake_request( '/v2/fake/share-group-types/%s' % GROUP_TYPE_2['id'], admin=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.remove_project_access, req, GROUP_TYPE_2['id'], body) manila-10.0.0/manila/tests/api/v2/stubs.py0000664000175000017500000000317013656750227020306 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import iso8601 from manila.message import message_field from manila.message import message_levels from manila.tests.api import fakes FAKE_UUID = fakes.FAKE_UUID def stub_message(id, **kwargs): message = { 'id': id, 'project_id': 'fake_project', 'action_id': message_field.Action.ALLOCATE_HOST[0], 'message_level': message_levels.ERROR, 'request_id': FAKE_UUID, 'resource_type': message_field.Resource.SHARE, 'resource_id': 'fake_uuid', 'updated_at': datetime.datetime(1900, 1, 1, 1, 1, 1, tzinfo=iso8601.UTC), 'created_at': datetime.datetime(1900, 1, 1, 1, 1, 1, tzinfo=iso8601.UTC), 'expires_at': datetime.datetime(1900, 1, 1, 1, 1, 1, tzinfo=iso8601.UTC), 'detail_id': message_field.Detail.NO_VALID_HOST[0], } message.update(kwargs) return message def stub_message_get(self, context, message_id): return stub_message(message_id) manila-10.0.0/manila/tests/api/v2/test_share_snapshot_export_locations.py0000664000175000017500000001027713656750227026710 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
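# Illustrative sketch, not part of the manila test suite: the tests in this
# module only expect snapshot export locations to be exposed at API
# microversion 2.32 and above, and assert VersionNotFoundForAPIMethod for
# older versions. Such checks boil down to a numeric "major.minor"
# comparison; the helper below is a hypothetical, standalone illustration
# of that comparison, not manila's implementation.
def _is_microversion_ge(requested, minimum):
    """Return True if microversion string `requested` >= `minimum`."""
    req = tuple(int(part) for part in requested.split('.'))
    low = tuple(int(part) for part in minimum.split('.'))
    return req >= low


assert _is_microversion_ge('2.32', '2.32')
assert _is_microversion_ge('2.55', '2.32')
assert not _is_microversion_ge('2.31', '2.32')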
from unittest import mock import ddt from manila.api.v2 import share_snapshot_export_locations as export_locations from manila.common import constants from manila import context from manila.db.sqlalchemy import api as db_api from manila import exception from manila import test from manila.tests.api import fakes from manila.tests import db_utils @ddt.ddt class ShareSnapshotExportLocationsAPITest(test.TestCase): def _get_request(self, version="2.32", use_admin_context=True): req = fakes.HTTPRequest.blank( '/snapshots/%s/export-locations' % self.snapshot['id'], version=version, use_admin_context=use_admin_context) return req def setUp(self): super(ShareSnapshotExportLocationsAPITest, self).setUp() self.controller = ( export_locations.ShareSnapshotExportLocationController()) self.share = db_utils.create_share() self.snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=self.share['id']) self.snapshot_instance = db_utils.create_snapshot_instance( status=constants.STATUS_AVAILABLE, share_instance_id=self.share['instance']['id'], snapshot_id=self.snapshot['id']) self.values = { 'share_snapshot_instance_id': self.snapshot_instance['id'], 'path': 'fake/user_path', 'is_admin_only': True, } self.exp_loc = db_api.share_snapshot_instance_export_location_create( context.get_admin_context(), self.values) self.req = self._get_request() def test_index(self): self.mock_object( db_api, 'share_snapshot_instance_export_locations_get_all', mock.Mock(return_value=[self.exp_loc])) out = self.controller.index(self._get_request('2.32'), self.snapshot['id']) values = { 'share_snapshot_export_locations': [{ 'share_snapshot_instance_id': self.snapshot_instance['id'], 'path': 'fake/user_path', 'is_admin_only': True, 'id': self.exp_loc['id'], 'links': [{ 'href': 'http://localhost/v1/fake/' 'share_snapshot_export_locations/' + self.exp_loc['id'], 'rel': 'self' }, { 'href': 'http://localhost/fake/' 'share_snapshot_export_locations/' + self.exp_loc['id'], 'rel': 'bookmark' }], }] } self.assertSubDictMatch(values, out) def test_show(self): out = self.controller.show(self._get_request('2.32'), self.snapshot['id'], self.exp_loc['id']) self.assertSubDictMatch( {'share_snapshot_export_location': self.values}, out) @ddt.data('1.0', '2.0', '2.5', '2.8', '2.31') def test_list_with_unsupported_version(self, version): self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.index, self._get_request(version), self.snapshot_instance['id'], ) @ddt.data('1.0', '2.0', '2.5', '2.8', '2.31') def test_show_with_unsupported_version(self, version): self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.show, self._get_request(version), self.snapshot['id'], self.exp_loc['id'] ) manila-10.0.0/manila/tests/api/v2/test_share_replica_export_locations.py0000664000175000017500000002011113656750227026454 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
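# Illustrative sketch, not part of the manila test suite: the replica
# export-location tests in this module verify that admin callers see two
# extra keys ('share_instance_id', 'is_admin_only') that regular users do
# not, while keys common to index and show carry identical values. The
# snippet below is a hypothetical, standalone model of that key-filtering
# idea, not manila's actual view builder.
_REPLICA_EL_SUMMARY_KEYS = ('id', 'path', 'replica_state',
                            'availability_zone', 'preferred')
_REPLICA_EL_ADMIN_KEYS = ('share_instance_id', 'is_admin_only')


def _filter_export_location(export_location, is_admin):
    """Drop keys the caller is not allowed to see."""
    allowed = _REPLICA_EL_SUMMARY_KEYS
    if is_admin:
        allowed = allowed + _REPLICA_EL_ADMIN_KEYS
    return {k: v for k, v in export_location.items() if k in allowed}


_example_el = {
    'id': 'fake-el-id',
    'path': 'myshare.mydomain/active-replica-exp1',
    'replica_state': 'active',
    'availability_zone': 'fake-az',
    'preferred': False,
    'share_instance_id': 'fake-instance-id',
    'is_admin_only': False,
}
assert 'share_instance_id' in _filter_export_location(_example_el, True)
assert 'share_instance_id' not in _filter_export_location(_example_el, False)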
from unittest import mock import ddt from webob import exc from manila.api.v2 import share_replica_export_locations as export_locations from manila.common import constants from manila import context from manila import db from manila import exception from manila import policy from manila import test from manila.tests.api import fakes from manila.tests import db_utils @ddt.ddt class ShareReplicaExportLocationsAPITest(test.TestCase): def _get_request(self, version="2.47", use_admin_context=False): req = fakes.HTTPRequest.blank( '/v2/share-replicas/%s/export-locations' % self.active_replica_id, version=version, use_admin_context=use_admin_context, experimental=True) return req def setUp(self): super(ShareReplicaExportLocationsAPITest, self).setUp() self.controller = ( export_locations.ShareReplicaExportLocationController()) self.resource_name = 'share_replica_export_location' self.ctxt = context.RequestContext('fake', 'fake') self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) self.share = db_utils.create_share( replication_type=constants.REPLICATION_TYPE_READABLE, replica_state=constants.REPLICA_STATE_ACTIVE) self.active_replica_id = self.share.instance.id self.req = self._get_request() exports = [ {'path': 'myshare.mydomain/active-replica-exp1', 'is_admin_only': False}, {'path': 'myshare.mydomain/active-replica-exp2', 'is_admin_only': False}, ] db.share_export_locations_update( self.ctxt, self.active_replica_id, exports) # Replicas self.share_replica2 = db_utils.create_share_replica( share_id=self.share.id, replica_state=constants.REPLICA_STATE_IN_SYNC) self.share_replica3 = db_utils.create_share_replica( share_id=self.share.id, replica_state=constants.REPLICA_STATE_OUT_OF_SYNC) replica2_exports = [ {'path': 'myshare.mydomain/insync-replica-exp', 'is_admin_only': False}, {'path': 'myshare.mydomain/insync-replica-exp2', 'is_admin_only': False} ] replica3_exports = [ {'path': 'myshare.mydomain/outofsync-replica-exp', 'is_admin_only': False}, {'path': 'myshare.mydomain/outofsync-replica-exp2', 'is_admin_only': False} ] db.share_export_locations_update( self.ctxt, self.share_replica2.id, replica2_exports) db.share_export_locations_update( self.ctxt, self.share_replica3.id, replica3_exports) @ddt.data('user', 'admin') def test_list_and_show(self, role): summary_keys = [ 'id', 'path', 'replica_state', 'availability_zone', 'preferred' ] admin_summary_keys = summary_keys + [ 'share_instance_id', 'is_admin_only' ] detail_keys = summary_keys + ['created_at', 'updated_at'] admin_detail_keys = admin_summary_keys + ['created_at', 'updated_at'] self._test_list_and_show(role, summary_keys, detail_keys, admin_summary_keys, admin_detail_keys) def _test_list_and_show(self, role, summary_keys, detail_keys, admin_summary_keys, admin_detail_keys): req = self._get_request(use_admin_context=(role == 'admin')) for replica_id in (self.active_replica_id, self.share_replica2.id, self.share_replica3.id): index_result = self.controller.index(req, replica_id) self.assertIn('export_locations', index_result) self.assertEqual(1, len(index_result)) self.assertEqual(2, len(index_result['export_locations'])) for index_el in index_result['export_locations']: self.assertIn('id', index_el) show_result = self.controller.show( req, replica_id, index_el['id']) self.assertIn('export_location', show_result) self.assertEqual(1, len(show_result)) show_el = show_result['export_location'] # Check summary keys in index result & detail keys in show if role == 'admin': 
self.assertEqual(len(admin_summary_keys), len(index_el)) for key in admin_summary_keys: self.assertIn(key, index_el) self.assertEqual(len(admin_detail_keys), len(show_el)) for key in admin_detail_keys: self.assertIn(key, show_el) else: self.assertEqual(len(summary_keys), len(index_el)) for key in summary_keys: self.assertIn(key, index_el) self.assertEqual(len(detail_keys), len(show_el)) for key in detail_keys: self.assertIn(key, show_el) # Ensure keys common to index & show have matching values for key in summary_keys: self.assertEqual(index_el[key], show_el[key]) def test_list_and_show_with_non_replicas(self): non_replicated_share = db_utils.create_share() instance_id = non_replicated_share.instance.id exports = [ {'path': 'myshare.mydomain/non-replicated-share', 'is_admin_only': False}, {'path': 'myshare.mydomain/non-replicated-share-2', 'is_admin_only': False}, ] db.share_export_locations_update(self.ctxt, instance_id, exports) updated_exports = db.share_export_locations_get_by_share_id( self.ctxt, non_replicated_share.id) self.assertRaises(exc.HTTPNotFound, self.controller.index, self.req, instance_id) for export in updated_exports: self.assertRaises(exc.HTTPNotFound, self.controller.show, self.req, instance_id, export['id']) def test_list_export_locations_share_replica_not_found(self): self.assertRaises( exc.HTTPNotFound, self.controller.index, self.req, 'non-existent-share-replica-id') def test_show_export_location_share_replica_not_found(self): index_result = self.controller.index(self.req, self.active_replica_id) el_id = index_result['export_locations'][0]['id'] self.assertRaises( exc.HTTPNotFound, self.controller.show, self.req, 'non-existent-share-replica-id', el_id) self.assertRaises( exc.HTTPNotFound, self.controller.show, self.req, self.active_replica_id, 'non-existent-export-location-id') @ddt.data('1.0', '2.0', '2.46') def test_list_with_unsupported_version(self, version): self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.index, self._get_request(version), self.active_replica_id) @ddt.data('1.0', '2.0', '2.46') def test_show_with_unsupported_version(self, version): index_result = self.controller.index(self.req, self.active_replica_id) self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.show, self._get_request(version), self.active_replica_id, index_result['export_locations'][0]['id']) manila-10.0.0/manila/tests/api/v2/test_share_group_snapshots.py0000664000175000017500000006647213656750227024643 0ustar zuulzuul00000000000000# Copyright 2016 Alex Meade # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
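# Illustrative sketch, not part of the manila test suite: the tests in this
# module use ddt to run the same test body twice, once against the
# experimental microversion ('2.31') and once against the graduated one
# ('2.55'). The hypothetical, standalone test case below shows how
# ddt.data with dicts plus ddt.unpack turns one method into one test per
# parameter set, with the dict values passed as keyword arguments.
import unittest
from unittest import mock

import ddt


@ddt.ddt
class _DdtUnpackExampleTest(unittest.TestCase):

    @ddt.data({'microversion': '2.31', 'experimental': True},
              {'microversion': '2.55', 'experimental': False})
    @ddt.unpack
    def test_request_built_for_version(self, microversion, experimental):
        build_request = mock.Mock()
        build_request(version=microversion, experimental=experimental)
        build_request.assert_called_once_with(
            version=microversion, experimental=experimental)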
import copy import datetime from unittest import mock import ddt from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import uuidutils import six import webob from manila.api.openstack import wsgi from manila.api.v2 import share_group_snapshots from manila.common import constants from manila import context from manila import db from manila import exception from manila import policy from manila import test from manila.tests.api import fakes from manila.tests import db_utils CONF = cfg.CONF SG_GRADUATION_VERSION = '2.55' @ddt.ddt class ShareGroupSnapshotAPITest(test.TestCase): def setUp(self): super(ShareGroupSnapshotAPITest, self).setUp() self.controller = share_group_snapshots.ShareGroupSnapshotController() self.resource_name = self.controller.resource_name self.api_version = '2.31' self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) self.request = fakes.HTTPRequest.blank( '/share-groups', version=self.api_version, experimental=True) self.context = self.request.environ['manila.context'] self.admin_context = context.RequestContext('admin', 'fake', True) self.member_context = context.RequestContext('fake', 'fake') self.flags(transport_url='rabbit://fake:fake@mqhost:5672') def _get_fake_share_group_snapshot(self, **values): snap = { 'id': 'fake_id', 'user_id': 'fakeuser', 'project_id': 'fakeproject', 'status': constants.STATUS_CREATING, 'name': None, 'description': None, 'share_group_id': None, 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'members': [], } snap.update(**values) expected_snap = copy.deepcopy(snap) del expected_snap['user_id'] return snap, expected_snap def _get_fake_simple_share_group_snapshot(self, **values): snap = { 'id': 'fake_id', 'name': None, } snap.update(**values) expected_snap = copy.deepcopy(snap) return snap, expected_snap def _get_fake_share_group_snapshot_member(self, **values): member = { 'id': 'fake_id', 'user_id': 'fakeuser', 'project_id': 'fakeproject', 'status': constants.STATUS_CREATING, 'share_group_snapshot_id': None, 'share_proto': None, 'share_id': None, 'size': None, 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), } member.update(**values) expected_member = copy.deepcopy(member) del expected_member['user_id'] del expected_member['status'] expected_member['share_protocol'] = member['share_proto'] del expected_member['share_proto'] return member, expected_member def _get_fake_custom_request_and_context(self, microversion, experimental): req = fakes.HTTPRequest.blank( '/share-group-snapshots', version=microversion, experimental=experimental) req_context = req.environ['manila.context'] return req, req_context def test_create_invalid_body(self): body = {"not_group_snapshot": {}} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_create_no_share_group_id(self): body = {"share_group_snapshot": {}} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_create(self, microversion, experimental): fake_snap, expected_snap = self._get_fake_share_group_snapshot() fake_id = six.text_type(uuidutils.generate_uuid()) body = {"share_group_snapshot": {"share_group_id": fake_id}} 
mock_create = self.mock_object( self.controller.share_group_api, 'create_share_group_snapshot', mock.Mock(return_value=fake_snap)) req, req_context = self._get_fake_custom_request_and_context( microversion, experimental) res_dict = self.controller.create(req, body) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') mock_create.assert_called_once_with( req_context, share_group_id=fake_id) res_dict['share_group_snapshot'].pop('links') self.assertEqual(expected_snap, res_dict['share_group_snapshot']) def test_create_group_does_not_exist(self): fake_id = six.text_type(uuidutils.generate_uuid()) body = {"share_group_snapshot": {"share_group_id": fake_id}} self.mock_object( self.controller.share_group_api, 'create_share_group_snapshot', mock.Mock(side_effect=exception.ShareGroupNotFound( share_group_id=six.text_type(uuidutils.generate_uuid())))) self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_create_group_does_not_a_uuid(self): self.mock_object( self.controller.share_group_api, 'create_share_group_snapshot', mock.Mock(side_effect=exception.ShareGroupNotFound( share_group_id='not_a_uuid', ))) body = {"share_group_snapshot": {"share_group_id": "not_a_uuid"}} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_create_invalid_share_group(self): fake_id = six.text_type(uuidutils.generate_uuid()) body = {"share_group_snapshot": {"share_group_id": fake_id}} self.mock_object( self.controller.share_group_api, 'create_share_group_snapshot', mock.Mock(side_effect=exception.InvalidShareGroup( reason='bad_status'))) self.assertRaises( webob.exc.HTTPConflict, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_create_with_name(self): fake_name = 'fake_name' fake_snap, expected_snap = self._get_fake_share_group_snapshot( name=fake_name) fake_id = six.text_type(uuidutils.generate_uuid()) mock_create = self.mock_object( self.controller.share_group_api, 'create_share_group_snapshot', mock.Mock(return_value=fake_snap)) body = { "share_group_snapshot": { "share_group_id": fake_id, "name": fake_name, } } res_dict = self.controller.create(self.request, body) res_dict['share_group_snapshot'].pop('links') mock_create.assert_called_once_with( self.context, share_group_id=fake_id, name=fake_name) self.assertEqual(expected_snap, res_dict['share_group_snapshot']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_create_with_description(self): fake_description = 'fake_description' fake_snap, expected_snap = self._get_fake_share_group_snapshot( description=fake_description) fake_id = six.text_type(uuidutils.generate_uuid()) mock_create = self.mock_object( self.controller.share_group_api, 'create_share_group_snapshot', mock.Mock(return_value=fake_snap)) body = { "share_group_snapshot": { "share_group_id": fake_id, "description": fake_description, } } res_dict = self.controller.create(self.request, body) res_dict['share_group_snapshot'].pop('links') mock_create.assert_called_once_with( self.context, share_group_id=fake_id, description=fake_description) self.assertEqual(expected_snap, res_dict['share_group_snapshot']) self.mock_policy_check.assert_called_once_with( 
self.context, self.resource_name, 'create') def test_create_with_name_and_description(self): fake_name = 'fake_name' fake_description = 'fake_description' fake_id = six.text_type(uuidutils.generate_uuid()) fake_snap, expected_snap = self._get_fake_share_group_snapshot( description=fake_description, name=fake_name) mock_create = self.mock_object( self.controller.share_group_api, 'create_share_group_snapshot', mock.Mock(return_value=fake_snap)) body = { "share_group_snapshot": { "share_group_id": fake_id, "description": fake_description, "name": fake_name, } } res_dict = self.controller.create(self.request, body) res_dict['share_group_snapshot'].pop('links') mock_create.assert_called_once_with( self.context, share_group_id=fake_id, name=fake_name, description=fake_description) self.assertEqual(expected_snap, res_dict['share_group_snapshot']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_update_with_name_and_description(self, microversion, experimental): fake_name = 'fake_name' fake_description = 'fake_description' fake_id = six.text_type(uuidutils.generate_uuid()) fake_snap, expected_snap = self._get_fake_share_group_snapshot( description=fake_description, name=fake_name) self.mock_object( self.controller.share_group_api, 'get_share_group_snapshot', mock.Mock(return_value=fake_snap)) mock_update = self.mock_object( self.controller.share_group_api, 'update_share_group_snapshot', mock.Mock(return_value=fake_snap)) req, req_context = self._get_fake_custom_request_and_context( microversion, experimental) body = { "share_group_snapshot": { "description": fake_description, "name": fake_name, } } res_dict = self.controller.update(req, fake_id, body) res_dict['share_group_snapshot'].pop('links') mock_update.assert_called_once_with( req_context, fake_snap, {"name": fake_name, "description": fake_description}) self.assertEqual(expected_snap, res_dict['share_group_snapshot']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'update') def test_update_snapshot_not_found(self): body = {"share_group_snapshot": {}} self.mock_object( self.controller.share_group_api, 'get_share_group_snapshot', mock.Mock(side_effect=exception.NotFound)) self.assertRaises( webob.exc.HTTPNotFound, self.controller.update, self.request, 'fake_id', body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'update') def test_update_invalid_body(self): body = {"not_group_snapshot": {}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.request, 'fake_id', body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'update') def test_update_invalid_body_invalid_field(self): body = {"share_group_snapshot": {"unknown_field": ""}} exc = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.request, 'fake_id', body) self.assertIn('unknown_field', six.text_type(exc)) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'update') def test_update_invalid_body_readonly_field(self): body = {"share_group_snapshot": {"created_at": []}} exc = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.request, 'fake_id', body) self.assertIn('created_at', six.text_type(exc)) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'update') @ddt.data({'microversion': 
'2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_list_index(self, microversion, experimental): fake_snap, expected_snap = self._get_fake_simple_share_group_snapshot() self.mock_object( self.controller.share_group_api, 'get_all_share_group_snapshots', mock.Mock(return_value=[fake_snap])) req, req_context = self._get_fake_custom_request_and_context( microversion, experimental) res_dict = self.controller.index(req) res_dict['share_group_snapshots'][0].pop('links') self.assertEqual([expected_snap], res_dict['share_group_snapshots']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_list_index_no_share_groups(self): self.mock_object( self.controller.share_group_api, 'get_all_share_group_snapshots', mock.Mock(return_value=[])) res_dict = self.controller.index(self.request) self.assertEqual([], res_dict['share_group_snapshots']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'get_all') def test_list_index_with_limit(self): fake_snap, expected_snap = self._get_fake_simple_share_group_snapshot() fake_snap2, expected_snap2 = ( self._get_fake_simple_share_group_snapshot( id="fake_id2")) self.mock_object( self.controller.share_group_api, 'get_all_share_group_snapshots', mock.Mock(return_value=[fake_snap, fake_snap2])) req = fakes.HTTPRequest.blank('/share-group-snapshots?limit=1', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.index(req) res_dict['share_group_snapshots'][0].pop('links') self.assertEqual(1, len(res_dict['share_group_snapshots'])) self.assertEqual([expected_snap], res_dict['share_group_snapshots']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_list_index_with_limit_and_offset(self): fake_snap, expected_snap = self._get_fake_simple_share_group_snapshot() fake_snap2, expected_snap2 = ( self._get_fake_simple_share_group_snapshot(id="fake_id2")) self.mock_object( self.controller.share_group_api, 'get_all_share_group_snapshots', mock.Mock(return_value=[fake_snap, fake_snap2])) req = fakes.HTTPRequest.blank( '/share-group-snapshots?limit=1&offset=1', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.index(req) res_dict['share_group_snapshots'][0].pop('links') self.assertEqual(1, len(res_dict['share_group_snapshots'])) self.assertEqual([expected_snap2], res_dict['share_group_snapshots']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_list_detail(self, microversion, experimental): fake_snap, expected_snap = self._get_fake_share_group_snapshot() self.mock_object( self.controller.share_group_api, 'get_all_share_group_snapshots', mock.Mock(return_value=[fake_snap])) req, context = self._get_fake_custom_request_and_context( microversion, experimental) res_dict = self.controller.detail(req) res_dict['share_group_snapshots'][0].pop('links') self.assertEqual(1, len(res_dict['share_group_snapshots'])) self.assertEqual(expected_snap, res_dict['share_group_snapshots'][0]) self.mock_policy_check.assert_called_once_with( context, self.resource_name, 'get_all') def test_list_detail_no_share_groups(self): self.mock_object( self.controller.share_group_api, 'get_all_share_group_snapshots', 
mock.Mock(return_value=[])) res_dict = self.controller.detail(self.request) self.assertEqual([], res_dict['share_group_snapshots']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'get_all') def test_list_detail_with_limit(self): fake_snap, expected_snap = self._get_fake_share_group_snapshot() fake_snap2, expected_snap2 = self._get_fake_share_group_snapshot( id="fake_id2") self.mock_object( self.controller.share_group_api, 'get_all_share_group_snapshots', mock.Mock(return_value=[fake_snap, fake_snap2])) req = fakes.HTTPRequest.blank('/share-group-snapshots?limit=1', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.detail(req) res_dict['share_group_snapshots'][0].pop('links') self.assertEqual(1, len(res_dict['share_group_snapshots'])) self.assertEqual([expected_snap], res_dict['share_group_snapshots']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_list_detail_with_limit_and_offset(self): fake_snap, expected_snap = self._get_fake_share_group_snapshot() fake_snap2, expected_snap2 = self._get_fake_share_group_snapshot( id="fake_id2") self.mock_object( self.controller.share_group_api, 'get_all_share_group_snapshots', mock.Mock(return_value=[fake_snap, fake_snap2])) req = fakes.HTTPRequest.blank( '/share-group-snapshots?limit=1&offset=1', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.detail(req) res_dict['share_group_snapshots'][0].pop('links') self.assertEqual(1, len(res_dict['share_group_snapshots'])) self.assertEqual([expected_snap2], res_dict['share_group_snapshots']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_delete(self, microversion, experimental): fake_snap, expected_snap = self._get_fake_share_group_snapshot() self.mock_object( self.controller.share_group_api, 'get_share_group_snapshot', mock.Mock(return_value=fake_snap)) self.mock_object( self.controller.share_group_api, 'delete_share_group_snapshot') req, req_context = self._get_fake_custom_request_and_context( microversion, experimental) res = self.controller.delete(req, fake_snap['id']) self.assertEqual(202, res.status_code) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'delete') def test_delete_not_found(self): fake_snap, expected_snap = self._get_fake_share_group_snapshot() self.mock_object( self.controller.share_group_api, 'get_share_group_snapshot', mock.Mock(side_effect=exception.NotFound)) self.assertRaises( webob.exc.HTTPNotFound, self.controller.delete, self.request, fake_snap['id']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'delete') def test_delete_in_conflicting_status(self): fake_snap, expected_snap = self._get_fake_share_group_snapshot() self.mock_object( self.controller.share_group_api, 'get_share_group_snapshot', mock.Mock(return_value=fake_snap)) self.mock_object( self.controller.share_group_api, 'delete_share_group_snapshot', mock.Mock(side_effect=exception.InvalidShareGroupSnapshot( reason='blah'))) self.assertRaises( webob.exc.HTTPConflict, self.controller.delete, self.request, fake_snap['id']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'delete') @ddt.data({'microversion': '2.31', 
'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_show(self, microversion, experimental): fake_snap, expected_snap = self._get_fake_share_group_snapshot() self.mock_object( self.controller.share_group_api, 'get_share_group_snapshot', mock.Mock(return_value=fake_snap)) req, req_context = self._get_fake_custom_request_and_context( microversion, experimental) res_dict = self.controller.show(req, fake_snap['id']) res_dict['share_group_snapshot'].pop('links') self.assertEqual(expected_snap, res_dict['share_group_snapshot']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get') def test_show_share_group_not_found(self): fake_snap, expected_snap = self._get_fake_share_group_snapshot() self.mock_object( self.controller.share_group_api, 'get_share_group_snapshot', mock.Mock(side_effect=exception.NotFound)) self.assertRaises( webob.exc.HTTPNotFound, self.controller.show, self.request, fake_snap['id']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'get') def _get_context(self, role): return getattr(self, '%s_context' % role) def _setup_share_group_snapshot_data(self, share_group_snapshot=None, version='2.31'): if share_group_snapshot is None: share_group_snapshot = db_utils.create_share_group_snapshot( 'fake_id', status=constants.STATUS_AVAILABLE) path = ('/v2/fake/share-group-snapshots/%s/action' % share_group_snapshot['id']) req = fakes.HTTPRequest.blank(path, script_name=path, version=version) req.headers[wsgi.API_VERSION_REQUEST_HEADER] = version return share_group_snapshot, req @ddt.data(*fakes.fixture_force_delete_with_different_roles) @ddt.unpack def test_share_group_snapshot_force_delete_with_different_roles( self, role, resp_code, version): group_snap, req = self._setup_share_group_snapshot_data() ctxt = self._get_context(role) req.method = 'POST' req.headers['content-type'] = 'application/json' action_name = 'force_delete' body = {action_name: {'status': constants.STATUS_ERROR}} req.body = six.b(jsonutils.dumps(body)) req.headers['X-Openstack-Manila-Api-Version'] = self.api_version req.headers['X-Openstack-Manila-Api-Experimental'] = True req.environ['manila.context'] = ctxt with mock.patch.object( policy, 'check_policy', fakes.mock_fake_admin_check): resp = req.get_response(fakes.app()) # Validate response self.assertEqual(resp_code, resp.status_int) @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test__force_delete_call(self, microversion, experimental): self.mock_object(self.controller, '_force_delete') req, _junk = self._get_fake_custom_request_and_context( microversion, experimental) sg_id = 'fake' body = {'force_delete': {}} self.controller.share_group_snapshot_force_delete(req, sg_id, body) self.controller._force_delete.assert_called_once_with(req, sg_id, body) @ddt.data(*fakes.fixture_reset_status_with_different_roles) @ddt.unpack def test_share_group_snapshot_reset_status_with_different_roles( self, role, valid_code, valid_status, version): ctxt = self._get_context(role) group_snap, req = self._setup_share_group_snapshot_data() action_name = 'reset_status' body = {action_name: {'status': constants.STATUS_ERROR}} req.method = 'POST' req.headers['content-type'] = 'application/json' req.body = six.b(jsonutils.dumps(body)) req.headers['X-Openstack-Manila-Api-Version'] = self.api_version req.headers['X-Openstack-Manila-Api-Experimental'] = True 
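# Dispatch the request through the full WSGI app so the reset_status action
# is exercised end to end: policy enforcement is simulated per role and the
# resulting status is then read back straight from the DB.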
req.environ['manila.context'] = ctxt with mock.patch.object( policy, 'check_policy', fakes.mock_fake_admin_check): resp = req.get_response(fakes.app()) # Validate response code and model status self.assertEqual(valid_code, resp.status_int) actual_model = db.share_group_snapshot_get(ctxt, group_snap['id']) self.assertEqual(valid_status, actual_model['status']) @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test__reset_status_call(self, microversion, experimental): self.mock_object(self.controller, '_reset_status') req, _junk = self._get_fake_custom_request_and_context( microversion, experimental) sg_id = 'fake' body = {'reset_status': {'status': constants.STATUS_ERROR}} self.controller.share_group_snapshot_reset_status(req, sg_id, body) self.controller._reset_status.assert_called_once_with(req, sg_id, body) manila-10.0.0/manila/tests/api/v2/test_shares.py0000664000175000017500000036063713656750227021510 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import datetime import itertools from unittest import mock import ddt from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import uuidutils import six import webob import webob.exc from manila.api import common from manila.api.openstack import api_version_request as api_version from manila.api.v2 import share_replicas from manila.api.v2 import shares from manila.common import constants from manila import context from manila import db from manila import exception from manila import policy from manila.share import api as share_api from manila.share import share_types from manila import test from manila.tests.api.contrib import stubs from manila.tests.api import fakes from manila.tests import db_utils from manila.tests import fake_share from manila.tests import utils as test_utils from manila import utils CONF = cfg.CONF LATEST_MICROVERSION = api_version._MAX_API_VERSION @ddt.ddt class ShareAPITest(test.TestCase): """Share API Test.""" def setUp(self): super(ShareAPITest, self).setUp() self.controller = shares.ShareController() self.mock_object(db, 'availability_zone_get') self.mock_object(share_api.API, 'get_all', stubs.stub_get_all_shares) self.mock_object(share_api.API, 'get', stubs.stub_share_get) self.mock_object(share_api.API, 'update', stubs.stub_share_update) self.mock_object(share_api.API, 'delete', stubs.stub_share_delete) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) self.mock_object(share_types, 'get_share_type', stubs.stub_share_type_get) self.maxDiff = None self.share = { "id": "1", "size": 100, "display_name": "Share Test Name", "display_description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "is_public": False, "task_state": None, } self.create_mock = mock.Mock( return_value=stubs.stub_share( '1', display_name=self.share['display_name'], 
display_description=self.share['display_description'], size=100, share_proto=self.share['share_proto'].upper(), instance={ 'availability_zone': self.share['availability_zone'], }) ) self.vt = { 'id': 'fake_volume_type_id', 'name': 'fake_volume_type_name', 'required_extra_specs': { 'driver_handles_share_servers': 'False' }, 'extra_specs': { 'driver_handles_share_servers': 'False' } } self.snapshot = { 'id': '2', 'share_id': '1', 'status': constants.STATUS_AVAILABLE, } CONF.set_default("default_share_type", None) def _process_expected_share_detailed_response(self, shr_dict, req_version): """Sets version based parameters on share dictionary.""" share_dict = copy.deepcopy(shr_dict) changed_parameters = { '2.2': {'snapshot_support': True}, '2.5': {'task_state': None}, '2.6': {'share_type_name': None}, '2.10': {'access_rules_status': constants.ACCESS_STATE_ACTIVE}, '2.11': {'replication_type': None, 'has_replicas': False}, '2.16': {'user_id': 'fakeuser'}, '2.24': {'create_share_from_snapshot_support': True}, '2.27': {'revert_to_snapshot_support': False}, '2.31': {'share_group_id': None, 'source_share_group_snapshot_member_id': None}, '2.32': {'mount_snapshot_support': False}, } # Apply all the share transformations if self.is_microversion_ge(req_version, '2.9'): share_dict.pop('export_locations', None) share_dict.pop('export_location', None) for version, parameters in changed_parameters.items(): for param, default in parameters.items(): if self.is_microversion_ge(req_version, version): share_dict[param] = share_dict.get(param, default) else: share_dict.pop(param, None) return share_dict def _get_expected_share_detailed_response(self, values=None, admin=False, version='2.0'): share = { 'id': '1', 'name': 'displayname', 'availability_zone': 'fakeaz', 'description': 'displaydesc', 'export_location': 'fake_location', 'export_locations': ['fake_location', 'fake_location2'], 'project_id': 'fakeproject', 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'share_proto': 'FAKEPROTO', 'metadata': {}, 'size': 1, 'snapshot_id': '2', 'share_network_id': None, 'status': 'fakestatus', 'share_type': '1', 'volume_type': '1', 'snapshot_support': True, 'is_public': False, 'task_state': None, 'share_type_name': None, 'links': [ { 'href': 'http://localhost/v1/fake/shares/1', 'rel': 'self' }, { 'href': 'http://localhost/fake/shares/1', 'rel': 'bookmark' } ], } if values: if 'display_name' in values: values['name'] = values.pop('display_name') if 'display_description' in values: values['description'] = values.pop('display_description') share.update(values) if share.get('share_proto'): share['share_proto'] = share['share_proto'].upper() if admin: share['share_server_id'] = 'fake_share_server_id' share['host'] = 'fakehost' return { 'share': self._process_expected_share_detailed_response( share, version) } def test__revert(self): share = copy.deepcopy(self.share) share['status'] = constants.STATUS_AVAILABLE share['revert_to_snapshot_support'] = True share["instances"] = [ { "id": "fakeid", "access_rules_status": constants.ACCESS_STATE_ACTIVE, }, ] share = fake_share.fake_share(**share) snapshot = copy.deepcopy(self.snapshot) snapshot['status'] = constants.STATUS_AVAILABLE body = {'revert': {'snapshot_id': '2'}} req = fakes.HTTPRequest.blank( '/shares/1/action', use_admin_context=False, version='2.27') mock_validate_revert_parameters = self.mock_object( self.controller, '_validate_revert_parameters', mock.Mock(return_value=body['revert'])) mock_get = self.mock_object( share_api.API, 'get', mock.Mock(return_value=share)) 
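# The snapshot lookups and the revert call itself are stubbed next, so the
# controller's _revert flow can be verified purely through the mock call
# assertions that follow.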
mock_get_snapshot = self.mock_object( share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) mock_get_latest_snapshot_for_share = self.mock_object( share_api.API, 'get_latest_snapshot_for_share', mock.Mock(return_value=snapshot)) mock_revert_to_snapshot = self.mock_object( share_api.API, 'revert_to_snapshot') response = self.controller._revert(req, '1', body=body) self.assertEqual(202, response.status_int) mock_validate_revert_parameters.assert_called_once_with( utils.IsAMatcher(context.RequestContext), body) mock_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), '1') mock_get_snapshot.assert_called_once_with( utils.IsAMatcher(context.RequestContext), '2') mock_get_latest_snapshot_for_share.assert_called_once_with( utils.IsAMatcher(context.RequestContext), '1') mock_revert_to_snapshot.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share, snapshot) def test__revert_not_supported(self): share = copy.deepcopy(self.share) share['revert_to_snapshot_support'] = False share = fake_share.fake_share(**share) snapshot = copy.deepcopy(self.snapshot) snapshot['status'] = constants.STATUS_AVAILABLE snapshot['share_id'] = 'wrong_id' body = {'revert': {'snapshot_id': '2'}} req = fakes.HTTPRequest.blank( '/shares/1/action', use_admin_context=False, version='2.27') self.mock_object( self.controller, '_validate_revert_parameters', mock.Mock(return_value=body['revert'])) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object( share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._revert, req, '1', body=body) def test__revert_id_mismatch(self): share = copy.deepcopy(self.share) share['status'] = constants.STATUS_AVAILABLE share['revert_to_snapshot_support'] = True share = fake_share.fake_share(**share) snapshot = copy.deepcopy(self.snapshot) snapshot['status'] = constants.STATUS_AVAILABLE snapshot['share_id'] = 'wrong_id' body = {'revert': {'snapshot_id': '2'}} req = fakes.HTTPRequest.blank( '/shares/1/action', use_admin_context=False, version='2.27') self.mock_object( self.controller, '_validate_revert_parameters', mock.Mock(return_value=body['revert'])) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object( share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._revert, req, '1', body=body) @ddt.data( { 'share_status': constants.STATUS_ERROR, 'share_is_busy': False, 'snapshot_status': constants.STATUS_AVAILABLE, }, { 'share_status': constants.STATUS_AVAILABLE, 'share_is_busy': True, 'snapshot_status': constants.STATUS_AVAILABLE, }, { 'share_status': constants.STATUS_AVAILABLE, 'share_is_busy': False, 'snapshot_status': constants.STATUS_ERROR, }) @ddt.unpack def test__revert_invalid_status(self, share_status, share_is_busy, snapshot_status): share = copy.deepcopy(self.share) share['status'] = share_status share['is_busy'] = share_is_busy share['revert_to_snapshot_support'] = True share = fake_share.fake_share(**share) snapshot = copy.deepcopy(self.snapshot) snapshot['status'] = snapshot_status body = {'revert': {'snapshot_id': '2'}} req = fakes.HTTPRequest.blank( '/shares/1/action', use_admin_context=False, version='2.27') self.mock_object( self.controller, '_validate_revert_parameters', mock.Mock(return_value=body['revert'])) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object( share_api.API, 'get_snapshot', 
mock.Mock(return_value=snapshot)) self.assertRaises(webob.exc.HTTPConflict, self.controller._revert, req, '1', body=body) def test__revert_snapshot_latest_not_found(self): share = copy.deepcopy(self.share) share['status'] = constants.STATUS_AVAILABLE share['revert_to_snapshot_support'] = True share = fake_share.fake_share(**share) snapshot = copy.deepcopy(self.snapshot) snapshot['status'] = constants.STATUS_AVAILABLE body = {'revert': {'snapshot_id': '2'}} req = fakes.HTTPRequest.blank( '/shares/1/action', use_admin_context=False, version='2.27') self.mock_object( self.controller, '_validate_revert_parameters', mock.Mock(return_value=body['revert'])) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object( share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.mock_object( share_api.API, 'get_latest_snapshot_for_share', mock.Mock(return_value=None)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._revert, req, '1', body=body) def test__revert_snapshot_access_applying(self): share = copy.deepcopy(self.share) share['status'] = constants.STATUS_AVAILABLE share['revert_to_snapshot_support'] = True share["instances"] = [ { "id": "fakeid", "access_rules_status": constants.SHARE_INSTANCE_RULES_SYNCING, }, ] share = fake_share.fake_share(**share) snapshot = copy.deepcopy(self.snapshot) snapshot['status'] = constants.STATUS_AVAILABLE body = {'revert': {'snapshot_id': '2'}} req = fakes.HTTPRequest.blank( '/shares/1/action', use_admin_context=False, version='2.27') self.mock_object( self.controller, '_validate_revert_parameters', mock.Mock(return_value=body['revert'])) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.mock_object(share_api.API, 'get_latest_snapshot_for_share', mock.Mock(return_value=snapshot)) self.mock_object(share_api.API, 'revert_to_snapshot') self.assertRaises(webob.exc.HTTPConflict, self.controller._revert, req, '1', body=body) def test__revert_snapshot_not_latest(self): share = copy.deepcopy(self.share) share['status'] = constants.STATUS_AVAILABLE share['revert_to_snapshot_support'] = True share = fake_share.fake_share(**share) snapshot = copy.deepcopy(self.snapshot) snapshot['status'] = constants.STATUS_AVAILABLE latest_snapshot = copy.deepcopy(self.snapshot) latest_snapshot['status'] = constants.STATUS_AVAILABLE latest_snapshot['id'] = '3' body = {'revert': {'snapshot_id': '2'}} req = fakes.HTTPRequest.blank( '/shares/1/action', use_admin_context=False, version='2.27') self.mock_object( self.controller, '_validate_revert_parameters', mock.Mock(return_value=body['revert'])) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object( share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.mock_object( share_api.API, 'get_latest_snapshot_for_share', mock.Mock(return_value=latest_snapshot)) self.assertRaises(webob.exc.HTTPConflict, self.controller._revert, req, '1', body=body) @ddt.data( { 'caught': exception.ShareNotFound, 'exc_args': { 'share_id': '1', }, 'thrown': webob.exc.HTTPNotFound, }, { 'caught': exception.ShareSnapshotNotFound, 'exc_args': { 'snapshot_id': '2', }, 'thrown': webob.exc.HTTPBadRequest, }, { 'caught': exception.ShareSizeExceedsAvailableQuota, 'exc_args': {}, 'thrown': webob.exc.HTTPForbidden, }, { 'caught': exception.ReplicationException, 'exc_args': { 'reason': 'catastrophic failure', }, 'thrown': webob.exc.HTTPBadRequest, }) @ddt.unpack def 
test__revert_exception(self, caught, exc_args, thrown): body = {'revert': {'snapshot_id': '2'}} req = fakes.HTTPRequest.blank( '/shares/1/action', use_admin_context=False, version='2.27') self.mock_object( self.controller, '_validate_revert_parameters', mock.Mock(return_value=body['revert'])) self.mock_object( share_api.API, 'get', mock.Mock(side_effect=caught(**exc_args))) self.assertRaises(thrown, self.controller._revert, req, '1', body=body) def test_validate_revert_parameters(self): body = {'revert': {'snapshot_id': 'fake_snapshot_id'}} result = self.controller._validate_revert_parameters( 'fake_context', body) self.assertEqual(body['revert'], result) @ddt.data( None, {}, {'manage': {'snapshot_id': 'fake_snapshot_id'}}, {'revert': {'share_id': 'fake_snapshot_id'}}, {'revert': {'snapshot_id': ''}}, ) def test_validate_revert_parameters_invalid(self, body): self.assertRaises(webob.exc.HTTPBadRequest, self.controller._validate_revert_parameters, 'fake_context', body) @ddt.data("2.0", "2.1") def test_share_create_original(self, microversion): self.mock_object(share_api.API, 'create', self.create_mock) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank('/shares', version=microversion) res_dict = self.controller.create(req, body) expected = self._get_expected_share_detailed_response( self.share, version=microversion) self.assertEqual(expected, res_dict) @ddt.data("2.2", "2.3") def test_share_create_with_snapshot_support_without_cg(self, microversion): self.mock_object(share_api.API, 'create', self.create_mock) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank('/shares', version=microversion) res_dict = self.controller.create(req, body) expected = self._get_expected_share_detailed_response( self.share, version=microversion) self.assertEqual(expected, res_dict) def test_share_create_with_share_group(self): self.mock_object(share_api.API, 'create', self.create_mock) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank('/shares', version="2.31", experimental=True) res_dict = self.controller.create(req, body) expected = self._get_expected_share_detailed_response( self.share, version="2.31") self.assertEqual(expected, res_dict) def test_share_create_with_sg_and_availability_zone(self): sg_id = 'fake_sg_id' az_id = 'bar_az_id' self.mock_object(share_api.API, 'create', self.create_mock) self.mock_object( db, 'availability_zone_get', mock.Mock(return_value=type('ReqAZ', (object, ), {"id": az_id}))) self.mock_object( db, 'share_group_get', mock.Mock(return_value={"availability_zone_id": az_id})) body = {"share": { "size": 100, "share_proto": "fakeproto", "availability_zone": az_id, "share_group_id": sg_id, }} req = fakes.HTTPRequest.blank( '/shares', version="2.31", experimental=True) self.controller.create(req, body) db.availability_zone_get.assert_called_once_with( req.environ['manila.context'], az_id) db.share_group_get.assert_called_once_with( req.environ['manila.context'], sg_id) share_api.API.create.assert_called_once_with( req.environ['manila.context'], body['share']['share_proto'].upper(), body['share']['size'], None, None, share_group_id=body['share']['share_group_id'], is_public=False, metadata=None, snapshot_id=None, availability_zone=az_id) def test_share_create_with_sg_and_different_availability_zone(self): sg_id = 'fake_sg_id' sg_az = 'foo_az_id' req_az = 'bar_az_id' self.mock_object(share_api.API, 'create', self.create_mock) self.mock_object( db, 'availability_zone_get', mock.Mock(return_value=type('ReqAZ', (object, ), 
{"id": req_az}))) self.mock_object( db, 'share_group_get', mock.Mock(return_value={"availability_zone_id": sg_az})) body = {"share": { "size": 100, "share_proto": "fakeproto", "availability_zone": req_az, "share_group_id": sg_id, }} req = fakes.HTTPRequest.blank( '/shares', version="2.31", experimental=True) self.assertRaises( exception.InvalidInput, self.controller.create, req, body) db.availability_zone_get.assert_called_once_with( req.environ['manila.context'], req_az) db.share_group_get.assert_called_once_with( req.environ['manila.context'], sg_id) self.assertEqual(0, share_api.API.create.call_count) def test_share_create_with_nonexistent_share_group(self): sg_id = 'fake_sg_id' self.mock_object(share_api.API, 'create', self.create_mock) self.mock_object(db, 'availability_zone_get') self.mock_object( db, 'share_group_get', mock.Mock(side_effect=exception.ShareGroupNotFound( share_group_id=sg_id))) body = {"share": { "size": 100, "share_proto": "fakeproto", "share_group_id": sg_id, }} req = fakes.HTTPRequest.blank( '/shares', version="2.31", experimental=True) self.assertRaises( webob.exc.HTTPNotFound, self.controller.create, req, body) self.assertEqual(0, db.availability_zone_get.call_count) self.assertEqual(0, share_api.API.create.call_count) db.share_group_get.assert_called_once_with( req.environ['manila.context'], sg_id) def test_share_create_with_valid_default_share_type(self): self.mock_object(share_types, 'get_share_type_by_name', mock.Mock(return_value=self.vt)) CONF.set_default("default_share_type", self.vt['name']) self.mock_object(share_api.API, 'create', self.create_mock) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank('/shares', version='2.7') res_dict = self.controller.create(req, body) expected = self._get_expected_share_detailed_response(self.share, version='2.7') share_types.get_share_type_by_name.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.vt['name']) self.assertEqual(expected, res_dict) def test_share_create_with_invalid_default_share_type(self): self.mock_object( share_types, 'get_default_share_type', mock.Mock(side_effect=exception.ShareTypeNotFoundByName( self.vt['name'])), ) CONF.set_default("default_share_type", self.vt['name']) req = fakes.HTTPRequest.blank('/shares', version='2.7') self.assertRaises(exception.ShareTypeNotFoundByName, self.controller.create, req, {'share': self.share}) share_types.get_default_share_type.assert_called_once_with() def test_share_create_with_replication(self): self.mock_object(share_api.API, 'create', self.create_mock) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank( '/shares', version=share_replicas.MIN_SUPPORTED_API_VERSION) res_dict = self.controller.create(req, body) expected = self._get_expected_share_detailed_response( self.share, version=share_replicas.MIN_SUPPORTED_API_VERSION) self.assertEqual(expected, res_dict) def test_share_create_with_share_net(self): shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "share_network_id": "fakenetid" } create_mock = mock.Mock(return_value=stubs.stub_share('1', display_name=shr['name'], display_description=shr['description'], size=shr['size'], share_proto=shr['share_proto'].upper(), availability_zone=shr['availability_zone'], share_network_id=shr['share_network_id'])) self.mock_object(share_api.API, 'create', create_mock) self.mock_object(share_api.API, 'get_share_network', mock.Mock( return_value={'id': 
'fakenetid'})) self.mock_object( db, 'share_network_subnet_get_by_availability_zone_id') body = {"share": copy.deepcopy(shr)} req = fakes.HTTPRequest.blank('/shares', version='2.7') res_dict = self.controller.create(req, body) expected = self._get_expected_share_detailed_response( shr, version='2.7') self.assertDictMatch(expected, res_dict) # pylint: disable=unsubscriptable-object self.assertEqual("fakenetid", create_mock.call_args[1]['share_network_id']) @ddt.data("2.15", "2.16") def test_share_create_original_with_user_id(self, microversion): self.mock_object(share_api.API, 'create', self.create_mock) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank('/shares', version=microversion) res_dict = self.controller.create(req, body) expected = self._get_expected_share_detailed_response( self.share, version=microversion) self.assertEqual(expected, res_dict) @ddt.data(test_utils.annotated('v2.0_az_unsupported', ('2.0', False)), test_utils.annotated('v2.0_az_supported', ('2.0', True)), test_utils.annotated('v2.47_az_unsupported', ('2.47', False)), test_utils.annotated('v2.47_az_supported', ('2.47', True))) @ddt.unpack def test_share_create_with_share_type_azs(self, version, az_supported): """For API version<2.48, AZ validation should not be performed.""" self.mock_object(share_api.API, 'create', self.create_mock) create_args = copy.deepcopy(self.share) create_args['availability_zone'] = 'az1' if az_supported else 'az2' create_args['share_type'] = uuidutils.generate_uuid() stype_with_azs = copy.deepcopy(self.vt) stype_with_azs['extra_specs']['availability_zones'] = 'az1,az3' self.mock_object(share_types, 'get_share_type', mock.Mock( return_value=stype_with_azs)) req = fakes.HTTPRequest.blank('/shares', version=version) res_dict = self.controller.create(req, {'share': create_args}) expected = self._get_expected_share_detailed_response( values=self.share, version=version) self.assertEqual(expected, res_dict) @ddt.data(*set([ test_utils.annotated('v2.48_share_from_snap', ('2.48', True)), test_utils.annotated('v2.48_share_not_from_snap', ('2.48', False)), test_utils.annotated('v%s_share_from_snap' % LATEST_MICROVERSION, (LATEST_MICROVERSION, True)), test_utils.annotated('v%s_share_not_from_snap' % LATEST_MICROVERSION, (LATEST_MICROVERSION, False))])) @ddt.unpack def test_share_create_az_not_in_share_type(self, version, snap): """For API version>=2.48, AZ validation should be performed.""" self.mock_object(share_api.API, 'create', self.create_mock) create_args = copy.deepcopy(self.share) create_args['availability_zone'] = 'az2' create_args['share_type'] = (uuidutils.generate_uuid() if not snap else None) create_args['snapshot_id'] = (uuidutils.generate_uuid() if snap else None) stype_with_azs = copy.deepcopy(self.vt) stype_with_azs['extra_specs']['availability_zones'] = 'az1 , az3' self.mock_object(share_types, 'get_share_type', mock.Mock( return_value=stype_with_azs)) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) req = fakes.HTTPRequest.blank('/shares', version=version) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, {'share': create_args}) share_api.API.create.assert_not_called() def test_migration_start(self): share = db_utils.create_share() share_network = db_utils.create_share_network() share_type = {'share_type_id': 'fake_type_id'} req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.29') req.method = 'POST' req.headers['content-type'] = 'application/json' 
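# Mark the API version request as experimental before invoking
# migration_start; every migration test in this module sets this flag.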
req.api_version_request.experimental = True context = req.environ['manila.context'] self.mock_object(db, 'share_network_get', mock.Mock( return_value=share_network)) self.mock_object(db, 'share_type_get', mock.Mock( return_value=share_type)) body = { 'migration_start': { 'host': 'fake_host', 'preserve_metadata': True, 'preserve_snapshots': True, 'writable': True, 'nondisruptive': True, 'new_share_network_id': 'fake_net_id', 'new_share_type_id': 'fake_type_id', } } method = 'migration_start' self.mock_object(share_api.API, 'migration_start', mock.Mock(return_value=202)) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) response = getattr(self.controller, method)(req, share['id'], body) self.assertEqual(202, response.status_int) share_api.API.get.assert_called_once_with(context, share['id']) share_api.API.migration_start.assert_called_once_with( context, share, 'fake_host', False, True, True, True, True, new_share_network=share_network, new_share_type=share_type) db.share_network_get.assert_called_once_with( context, 'fake_net_id') db.share_type_get.assert_called_once_with( context, 'fake_type_id') def test_migration_start_conflict(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True) req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request = api_version.APIVersionRequest('2.29') req.api_version_request.experimental = True body = { 'migration_start': { 'host': 'fake_host', 'preserve_metadata': True, 'preserve_snapshots': True, 'writable': True, 'nondisruptive': True, } } self.mock_object(share_api.API, 'migration_start', mock.Mock(side_effect=exception.Conflict(err='err'))) self.assertRaises(webob.exc.HTTPConflict, self.controller.migration_start, req, share['id'], body) @ddt.data('nondisruptive', 'writable', 'preserve_metadata', 'preserve_snapshots', 'host', 'body') def test_migration_start_missing_mandatory(self, param): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.29') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = { 'migration_start': { 'host': 'fake_host', 'preserve_metadata': True, 'preserve_snapshots': True, 'writable': True, 'nondisruptive': True, } } if param == 'body': body.pop('migration_start') else: body['migration_start'].pop(param) method = 'migration_start' self.mock_object(share_api.API, 'migration_start') self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.assertRaises(webob.exc.HTTPBadRequest, getattr(self.controller, method), req, 'fake_id', body) @ddt.data('nondisruptive', 'writable', 'preserve_metadata', 'preserve_snapshots', 'force_host_assisted_migration') def test_migration_start_non_boolean(self, param): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.29') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = { 'migration_start': { 'host': 'fake_host', 'preserve_metadata': True, 'preserve_snapshots': True, 'writable': True, 'nondisruptive': True, } } body['migration_start'][param] = None method = 'migration_start' self.mock_object(share_api.API, 'migration_start') self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.assertRaises(webob.exc.HTTPBadRequest, getattr(self.controller, method), 
req, 'fake_id', body) def test_migration_start_no_share_id(self): req = fakes.HTTPRequest.blank('/shares/%s/action' % 'fake_id', use_admin_context=True, version='2.29') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = {'migration_start': {'host': 'fake_host'}} method = 'migration_start' self.mock_object(share_api.API, 'get', mock.Mock(side_effect=[exception.NotFound])) self.assertRaises(webob.exc.HTTPNotFound, getattr(self.controller, method), req, 'fake_id', body) def test_migration_start_new_share_network_not_found(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.29') context = req.environ['manila.context'] req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = { 'migration_start': { 'host': 'fake_host', 'preserve_metadata': True, 'preserve_snapshots': True, 'writable': True, 'nondisruptive': True, 'new_share_network_id': 'nonexistent'}} self.mock_object(db, 'share_network_get', mock.Mock(side_effect=exception.NotFound())) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.migration_start, req, share['id'], body) db.share_network_get.assert_called_once_with(context, 'nonexistent') def test_migration_start_new_share_type_not_found(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.29') context = req.environ['manila.context'] req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = { 'migration_start': { 'host': 'fake_host', 'preserve_metadata': True, 'preserve_snapshots': True, 'writable': True, 'nondisruptive': True, 'new_share_type_id': 'nonexistent'}} self.mock_object(db, 'share_type_get', mock.Mock(side_effect=exception.NotFound())) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.migration_start, req, share['id'], body) db.share_type_get.assert_called_once_with(context, 'nonexistent') def test_migration_start_invalid_force_host_assisted_migration(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.29') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = {'migration_start': {'host': 'fake_host', 'force_host_assisted_migration': 'fake'}} method = 'migration_start' self.assertRaises(webob.exc.HTTPBadRequest, getattr(self.controller, method), req, share['id'], body) @ddt.data('writable', 'preserve_metadata') def test_migration_start_invalid_writable_preserve_metadata( self, parameter): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.29') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = {'migration_start': {'host': 'fake_host', parameter: 'invalid'}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.migration_start, req, share['id'], body) @ddt.data(constants.TASK_STATE_MIGRATION_ERROR, None) def test_reset_task_state(self, task_state): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.22') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True 
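        # reset_task_state accepts either a valid task-state constant or None
        # (to clear the field); both ddt cases should return 202 below.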
update = {'task_state': task_state} body = {'reset_task_state': update} self.mock_object(db, 'share_update') response = self.controller.reset_task_state(req, share['id'], body) self.assertEqual(202, response.status_int) db.share_update.assert_called_once_with(utils.IsAMatcher( context.RequestContext), share['id'], update) def test_reset_task_state_error_body(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.22') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True update = {'error': 'error'} body = {'reset_task_state': update} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.reset_task_state, req, share['id'], body) def test_reset_task_state_error_invalid(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.22') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True update = {'task_state': 'error'} body = {'reset_task_state': update} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.reset_task_state, req, share['id'], body) def test_reset_task_state_not_found(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.22') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True update = {'task_state': constants.TASK_STATE_MIGRATION_ERROR} body = {'reset_task_state': update} self.mock_object(db, 'share_update', mock.Mock(side_effect=exception.NotFound())) self.assertRaises(webob.exc.HTTPNotFound, self.controller.reset_task_state, req, share['id'], body) db.share_update.assert_called_once_with(utils.IsAMatcher( context.RequestContext), share['id'], update) def test_migration_complete(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.22') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = {'migration_complete': None} self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'migration_complete') response = self.controller.migration_complete(req, share['id'], body) self.assertEqual(202, response.status_int) share_api.API.migration_complete.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share) def test_migration_complete_not_found(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.22') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = {'migration_complete': None} self.mock_object(share_api.API, 'get', mock.Mock(side_effect=exception.NotFound())) self.mock_object(share_api.API, 'migration_complete') self.assertRaises(webob.exc.HTTPNotFound, self.controller.migration_complete, req, share['id'], body) def test_migration_cancel(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.22') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = {'migration_cancel': None} self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) 
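        # The cancel operation itself is stubbed; only the 202 response and
        # the migration_cancel call signature are asserted.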
self.mock_object(share_api.API, 'migration_cancel') response = self.controller.migration_cancel(req, share['id'], body) self.assertEqual(202, response.status_int) share_api.API.migration_cancel.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share) def test_migration_cancel_not_found(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.22') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = {'migration_cancel': None} self.mock_object(share_api.API, 'get', mock.Mock(side_effect=exception.NotFound())) self.mock_object(share_api.API, 'migration_cancel') self.assertRaises(webob.exc.HTTPNotFound, self.controller.migration_cancel, req, share['id'], body) def test_migration_get_progress(self): share = db_utils.create_share( task_state=constants.TASK_STATE_MIGRATION_SUCCESS) req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.22') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = {'migration_get_progress': None} expected = { 'total_progress': 'fake', 'task_state': constants.TASK_STATE_MIGRATION_SUCCESS, } self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'migration_get_progress', mock.Mock(return_value=expected)) response = self.controller.migration_get_progress(req, share['id'], body) self.assertEqual(expected, response) share_api.API.migration_get_progress.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share) def test_migration_get_progress_not_found(self): share = db_utils.create_share() req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'], use_admin_context=True, version='2.22') req.method = 'POST' req.headers['content-type'] = 'application/json' req.api_version_request.experimental = True body = {'migration_get_progress': None} self.mock_object(share_api.API, 'get', mock.Mock(side_effect=exception.NotFound())) self.mock_object(share_api.API, 'migration_get_progress') self.assertRaises(webob.exc.HTTPNotFound, self.controller.migration_get_progress, req, share['id'], body) def test_share_create_from_snapshot_without_share_net_no_parent(self): shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "snapshot_id": 333, "share_network_id": None, } create_mock = mock.Mock(return_value=stubs.stub_share('1', display_name=shr['name'], display_description=shr['description'], size=shr['size'], share_proto=shr['share_proto'].upper(), snapshot_id=shr['snapshot_id'], instance=dict( availability_zone=shr['availability_zone'], share_network_id=shr['share_network_id']))) self.mock_object(share_api.API, 'create', create_mock) body = {"share": copy.deepcopy(shr)} req = fakes.HTTPRequest.blank('/shares', version='2.7') res_dict = self.controller.create(req, body) expected = self._get_expected_share_detailed_response( shr, version='2.7') self.assertEqual(expected, res_dict) def test_share_create_from_snapshot_without_share_net_parent_exists(self): shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "snapshot_id": 333, "share_network_id": None, } parent_share_net = 444 create_mock = mock.Mock(return_value=stubs.stub_share('1', display_name=shr['name'], 
display_description=shr['description'], size=shr['size'], share_proto=shr['share_proto'].upper(), snapshot_id=shr['snapshot_id'], instance=dict( availability_zone=shr['availability_zone'], share_network_id=shr['share_network_id']))) self.mock_object(share_api.API, 'create', create_mock) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) parent_share = stubs.stub_share( '1', instance={'share_network_id': parent_share_net}, create_share_from_snapshot_support=True) self.mock_object(share_api.API, 'get', mock.Mock( return_value=parent_share)) self.mock_object(share_api.API, 'get_share_network', mock.Mock( return_value={'id': parent_share_net})) self.mock_object( db, 'share_network_subnet_get_by_availability_zone_id') body = {"share": copy.deepcopy(shr)} req = fakes.HTTPRequest.blank('/shares', version='2.7') res_dict = self.controller.create(req, body) expected = self._get_expected_share_detailed_response( shr, version='2.7') self.assertEqual(expected, res_dict) # pylint: disable=unsubscriptable-object self.assertEqual(parent_share_net, create_mock.call_args[1]['share_network_id']) def test_share_create_from_snapshot_with_share_net_equals_parent(self): parent_share_net = 444 shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "snapshot_id": 333, "share_network_id": parent_share_net, } create_mock = mock.Mock(return_value=stubs.stub_share('1', display_name=shr['name'], display_description=shr['description'], size=shr['size'], share_proto=shr['share_proto'].upper(), snapshot_id=shr['snapshot_id'], instance=dict( availability_zone=shr['availability_zone'], share_network_id=shr['share_network_id']))) self.mock_object(share_api.API, 'create', create_mock) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) parent_share = stubs.stub_share( '1', instance={'share_network_id': parent_share_net}, create_share_from_snapshot_support=True) self.mock_object(share_api.API, 'get', mock.Mock( return_value=parent_share)) self.mock_object(share_api.API, 'get_share_network', mock.Mock( return_value={'id': parent_share_net})) self.mock_object( db, 'share_network_subnet_get_by_availability_zone_id') body = {"share": copy.deepcopy(shr)} req = fakes.HTTPRequest.blank('/shares', version='2.7') res_dict = self.controller.create(req, body) expected = self._get_expected_share_detailed_response( shr, version='2.7') self.assertDictMatch(expected, res_dict) # pylint: disable=unsubscriptable-object self.assertEqual(parent_share_net, create_mock.call_args[1]['share_network_id']) def test_share_create_from_snapshot_invalid_share_net(self): self.mock_object(share_api.API, 'create') shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "snapshot_id": 333, "share_network_id": 1234, } body = {"share": shr} req = fakes.HTTPRequest.blank('/shares', version='2.7') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, body) def test_share_create_from_snapshot_not_supported(self): parent_share_net = 444 self.mock_object(share_api.API, 'create') shr = { "size": 100, "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1", "snapshot_id": 333, "share_network_id": parent_share_net, } parent_share = stubs.stub_share( '1', instance={'share_network_id': parent_share_net}, create_share_from_snapshot_support=False) 
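        # The parent share advertises create_share_from_snapshot_support=False,
        # so the create request below must fail with HTTP 400.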
self.mock_object(share_api.API, 'get', mock.Mock( return_value=parent_share)) self.mock_object(share_api.API, 'get_share_network', mock.Mock( return_value={'id': parent_share_net})) body = {"share": shr} req = fakes.HTTPRequest.blank('/shares', version='2.24') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, body) def test_share_creation_fails_with_bad_size(self): shr = {"size": '', "name": "Share Test Name", "description": "Share Test Desc", "share_proto": "fakeproto", "availability_zone": "zone1:host1"} body = {"share": shr} req = fakes.HTTPRequest.blank('/shares', version='2.7') self.assertRaises(exception.InvalidInput, self.controller.create, req, body) def test_share_create_no_body(self): req = fakes.HTTPRequest.blank('/shares', version='2.7') self.assertRaises(webob.exc.HTTPUnprocessableEntity, self.controller.create, req, {}) def test_share_create_invalid_availability_zone(self): self.mock_object( db, 'availability_zone_get', mock.Mock(side_effect=exception.AvailabilityZoneNotFound(id='id')) ) body = {"share": copy.deepcopy(self.share)} req = fakes.HTTPRequest.blank('/shares', version='2.7') self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, req, body) def test_share_show(self): req = fakes.HTTPRequest.blank('/shares/1') expected = self._get_expected_share_detailed_response() res_dict = self.controller.show(req, '1') self.assertEqual(expected, res_dict) def test_share_show_with_share_group(self): req = fakes.HTTPRequest.blank( '/shares/1', version='2.31', experimental=True) expected = self._get_expected_share_detailed_response(version='2.31') res_dict = self.controller.show(req, '1') self.assertDictMatch(expected, res_dict) def test_share_show_with_share_group_earlier_version(self): req = fakes.HTTPRequest.blank( '/shares/1', version='2.23', experimental=True) expected = self._get_expected_share_detailed_response(version='2.23') res_dict = self.controller.show(req, '1') self.assertDictMatch(expected, res_dict) def test_share_show_with_share_type_name(self): req = fakes.HTTPRequest.blank('/shares/1', version='2.6') res_dict = self.controller.show(req, '1') expected = self._get_expected_share_detailed_response(version='2.6') self.assertEqual(expected, res_dict) @ddt.data("2.15", "2.16") def test_share_show_with_user_id(self, microversion): req = fakes.HTTPRequest.blank('/shares/1', version=microversion) res_dict = self.controller.show(req, '1') expected = self._get_expected_share_detailed_response( version=microversion) self.assertEqual(expected, res_dict) def test_share_show_admin(self): req = fakes.HTTPRequest.blank('/shares/1', use_admin_context=True) expected = self._get_expected_share_detailed_response(admin=True) res_dict = self.controller.show(req, '1') self.assertEqual(expected, res_dict) def test_share_show_no_share(self): self.mock_object(share_api.API, 'get', stubs.stub_share_get_notfound) req = fakes.HTTPRequest.blank('/shares/1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, '1') def test_share_show_with_replication_type(self): req = fakes.HTTPRequest.blank( '/shares/1', version=share_replicas.MIN_SUPPORTED_API_VERSION) res_dict = self.controller.show(req, '1') expected = self._get_expected_share_detailed_response( version=share_replicas.MIN_SUPPORTED_API_VERSION) self.assertEqual(expected, res_dict) @ddt.data(('2.10', True), ('2.27', True), ('2.28', False)) @ddt.unpack def test_share_show_access_rules_status_translated(self, version, translated): share = db_utils.create_share( 
access_rules_status=constants.SHARE_INSTANCE_RULES_SYNCING, status=constants.STATUS_AVAILABLE) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) req = fakes.HTTPRequest.blank( '/shares/%s' % share['id'], version=version) res_dict = self.controller.show(req, share['id']) expected = (constants.STATUS_OUT_OF_SYNC if translated else constants.SHARE_INSTANCE_RULES_SYNCING) self.assertEqual(expected, res_dict['share']['access_rules_status']) def test_share_delete(self): req = fakes.HTTPRequest.blank('/shares/1') resp = self.controller.delete(req, 1) self.assertEqual(202, resp.status_int) def test_share_delete_has_replicas(self): req = fakes.HTTPRequest.blank('/shares/1') self.mock_object(share_api.API, 'get', mock.Mock(return_value=self.share)) self.mock_object(share_api.API, 'delete', mock.Mock(side_effect=exception.Conflict(err='err'))) self.assertRaises( webob.exc.HTTPConflict, self.controller.delete, req, 1) def test_share_delete_in_share_group_param_not_provided(self): fake_share = stubs.stub_share('fake_share', share_group_id='fake_group_id') self.mock_object(share_api.API, 'get', mock.Mock(return_value=fake_share)) req = fakes.HTTPRequest.blank('/shares/1') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, req, 1) def test_share_delete_in_share_group(self): fake_share = stubs.stub_share('fake_share', share_group_id='fake_group_id') self.mock_object(share_api.API, 'get', mock.Mock(return_value=fake_share)) req = fakes.HTTPRequest.blank( '/shares/1?share_group_id=fake_group_id') resp = self.controller.delete(req, 1) self.assertEqual(202, resp.status_int) def test_share_delete_in_share_group_wrong_id(self): fake_share = stubs.stub_share('fake_share', share_group_id='fake_group_id') self.mock_object(share_api.API, 'get', mock.Mock(return_value=fake_share)) req = fakes.HTTPRequest.blank( '/shares/1?share_group_id=not_fake_group_id') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, req, 1) def test_share_update(self): shr = self.share body = {"share": shr} req = fakes.HTTPRequest.blank('/share/1') res_dict = self.controller.update(req, 1, body) self.assertEqual(shr["display_name"], res_dict['share']["name"]) self.assertEqual(shr["display_description"], res_dict['share']["description"]) self.assertEqual(shr['is_public'], res_dict['share']['is_public']) def test_share_update_with_share_group(self): shr = self.share body = {"share": shr} req = fakes.HTTPRequest.blank( '/share/1', version="2.31", experimental=True) res_dict = self.controller.update(req, 1, body) self.assertIsNone(res_dict['share']["share_group_id"]) self.assertIsNone( res_dict['share']["source_share_group_snapshot_member_id"]) def test_share_not_updates_size(self): req = fakes.HTTPRequest.blank('/share/1') res_dict = self.controller.update(req, 1, {"share": self.share}) self.assertNotEqual(res_dict['share']["size"], self.share["size"]) def test_share_delete_no_share(self): self.mock_object(share_api.API, 'get', stubs.stub_share_get_notfound) req = fakes.HTTPRequest.blank('/shares/1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, 1) @ddt.data({'use_admin_context': False, 'version': '2.4'}, {'use_admin_context': True, 'version': '2.4'}, {'use_admin_context': True, 'version': '2.35'}, {'use_admin_context': False, 'version': '2.35'}, {'use_admin_context': True, 'version': '2.36'}, {'use_admin_context': False, 'version': '2.36'}, {'use_admin_context': True, 'version': '2.42'}, {'use_admin_context': False, 'version': '2.42'}) @ddt.unpack def 
test_share_list_summary_with_search_opts(self, use_admin_context, version): search_opts = { 'name': 'fake_name', 'status': constants.STATUS_AVAILABLE, 'share_server_id': 'fake_share_server_id', 'share_type_id': 'fake_share_type_id', 'snapshot_id': 'fake_snapshot_id', 'share_network_id': 'fake_share_network_id', 'metadata': '%7B%27k1%27%3A+%27v1%27%7D', # serialized k1=v1 'extra_specs': '%7B%27k2%27%3A+%27v2%27%7D', # serialized k2=v2 'sort_key': 'fake_sort_key', 'sort_dir': 'fake_sort_dir', 'limit': '1', 'offset': '1', 'is_public': 'False', 'export_location_id': 'fake_export_location_id', 'export_location_path': 'fake_export_location_path', } if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.36')): search_opts.update( {'display_name~': 'fake', 'display_description~': 'fake'}) if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.42')): search_opts.update({'with_count': 'true'}) if use_admin_context: search_opts['host'] = 'fake_host' # fake_key should be filtered for non-admin url = '/shares?fake_key=fake_value' for k, v in search_opts.items(): url = url + '&' + k + '=' + v req = fakes.HTTPRequest.blank(url, version=version, use_admin_context=use_admin_context) shares = [ {'id': 'id1', 'display_name': 'n1'}, {'id': 'id2', 'display_name': 'n2'}, {'id': 'id3', 'display_name': 'n3'}, ] self.mock_object(share_api.API, 'get_all', mock.Mock(return_value=[shares[1]])) result = self.controller.index(req) search_opts_expected = { 'display_name': search_opts['name'], 'status': search_opts['status'], 'share_server_id': search_opts['share_server_id'], 'share_type_id': search_opts['share_type_id'], 'snapshot_id': search_opts['snapshot_id'], 'share_network_id': search_opts['share_network_id'], 'metadata': {'k1': 'v1'}, 'extra_specs': {'k2': 'v2'}, 'is_public': 'False', 'limit': '1', 'offset': '1' } if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.35')): search_opts_expected['export_location_id'] = ( search_opts['export_location_id']) search_opts_expected['export_location_path'] = ( search_opts['export_location_path']) if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.36')): search_opts_expected.update( {'display_name~': search_opts['display_name~'], 'display_description~': search_opts['display_description~']}) if use_admin_context: search_opts_expected.update({'fake_key': 'fake_value'}) search_opts_expected['host'] = search_opts['host'] share_api.API.get_all.assert_called_once_with( req.environ['manila.context'], sort_key=search_opts['sort_key'], sort_dir=search_opts['sort_dir'], search_opts=search_opts_expected, ) self.assertEqual(1, len(result['shares'])) self.assertEqual(shares[1]['id'], result['shares'][0]['id']) self.assertEqual( shares[1]['display_name'], result['shares'][0]['name']) if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.42')): self.assertEqual(1, result['count']) @ddt.data({'use_admin_context': True, 'version': '2.42'}, {'use_admin_context': False, 'version': '2.42'}) @ddt.unpack def test_share_list_summary_with_search_opt_count_0(self, use_admin_context, version): search_opts = { 'sort_key': 'fake_sort_key', 'sort_dir': 'fake_sort_dir', 'with_count': 'true' } if use_admin_context: search_opts['host'] = 'fake_host' # fake_key should be filtered url = '/shares?fake_key=fake_value' for k, v in search_opts.items(): url = url + '&' + k + '=' + v req = fakes.HTTPRequest.blank(url, version=version, use_admin_context=use_admin_context) 
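        # With no shares returned, the response should still carry a zero
        # 'count' when with_count=true is requested.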
self.mock_object(share_api.API, 'get_all', mock.Mock(return_value=[])) result = self.controller.index(req) search_opts_expected = {} if use_admin_context: search_opts_expected.update({'fake_key': 'fake_value'}) search_opts_expected['host'] = search_opts['host'] share_api.API.get_all.assert_called_once_with( req.environ['manila.context'], sort_key=search_opts['sort_key'], sort_dir=search_opts['sort_dir'], search_opts=search_opts_expected, ) self.assertEqual(0, len(result['shares'])) self.assertEqual(0, result['count']) def test_share_list_summary(self): self.mock_object(share_api.API, 'get_all', stubs.stub_share_get_all_by_project) req = fakes.HTTPRequest.blank('/shares') res_dict = self.controller.index(req) expected = { 'shares': [ { 'name': 'displayname', 'id': '1', 'links': [ { 'href': 'http://localhost/v1/fake/shares/1', 'rel': 'self' }, { 'href': 'http://localhost/fake/shares/1', 'rel': 'bookmark' } ], } ] } self.assertEqual(expected, res_dict) @ddt.data({'use_admin_context': False, 'version': '2.4'}, {'use_admin_context': True, 'version': '2.4'}, {'use_admin_context': True, 'version': '2.35'}, {'use_admin_context': False, 'version': '2.35'}, {'use_admin_context': True, 'version': '2.42'}, {'use_admin_context': False, 'version': '2.42'}) @ddt.unpack def test_share_list_detail_with_search_opts(self, use_admin_context, version): search_opts = { 'name': 'fake_name', 'status': constants.STATUS_AVAILABLE, 'share_server_id': 'fake_share_server_id', 'share_type_id': 'fake_share_type_id', 'snapshot_id': 'fake_snapshot_id', 'share_network_id': 'fake_share_network_id', 'metadata': '%7B%27k1%27%3A+%27v1%27%7D', # serialized k1=v1 'extra_specs': '%7B%27k2%27%3A+%27v2%27%7D', # serialized k2=v2 'sort_key': 'fake_sort_key', 'sort_dir': 'fake_sort_dir', 'limit': '1', 'offset': '1', 'is_public': 'False', 'export_location_id': 'fake_export_location_id', 'export_location_path': 'fake_export_location_path', } if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.42')): search_opts.update({'with_count': 'true'}) if use_admin_context: search_opts['host'] = 'fake_host' # fake_key should be filtered for non-admin url = '/shares/detail?fake_key=fake_value' for k, v in search_opts.items(): url = url + '&' + k + '=' + v req = fakes.HTTPRequest.blank(url, version=version, use_admin_context=use_admin_context) shares = [ {'id': 'id1', 'display_name': 'n1'}, { 'id': 'id2', 'display_name': 'n2', 'status': constants.STATUS_AVAILABLE, 'snapshot_id': 'fake_snapshot_id', 'instance': { 'host': 'fake_host', 'share_network_id': 'fake_share_network_id', 'share_type_id': 'fake_share_type_id', }, 'has_replicas': False, }, {'id': 'id3', 'display_name': 'n3'}, ] self.mock_object(share_api.API, 'get_all', mock.Mock(return_value=[shares[1]])) result = self.controller.detail(req) search_opts_expected = { 'display_name': search_opts['name'], 'status': search_opts['status'], 'share_server_id': search_opts['share_server_id'], 'share_type_id': search_opts['share_type_id'], 'snapshot_id': search_opts['snapshot_id'], 'share_network_id': search_opts['share_network_id'], 'metadata': {'k1': 'v1'}, 'extra_specs': {'k2': 'v2'}, 'is_public': 'False', 'limit': '1', 'offset': '1' } if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.35')): search_opts_expected['export_location_id'] = ( search_opts['export_location_id']) search_opts_expected['export_location_path'] = ( search_opts['export_location_path']) if use_admin_context: search_opts_expected.update({'fake_key': 'fake_value'}) 
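            # For admin callers the unknown 'fake_key' filter is preserved and
            # the host filter is honoured; non-admins have them stripped.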
search_opts_expected['host'] = search_opts['host'] share_api.API.get_all.assert_called_once_with( req.environ['manila.context'], sort_key=search_opts['sort_key'], sort_dir=search_opts['sort_dir'], search_opts=search_opts_expected, ) self.assertEqual(1, len(result['shares'])) self.assertEqual(shares[1]['id'], result['shares'][0]['id']) self.assertEqual( shares[1]['display_name'], result['shares'][0]['name']) self.assertEqual( shares[1]['snapshot_id'], result['shares'][0]['snapshot_id']) self.assertEqual( shares[1]['status'], result['shares'][0]['status']) self.assertEqual( shares[1]['instance']['share_type_id'], result['shares'][0]['share_type']) self.assertEqual( shares[1]['snapshot_id'], result['shares'][0]['snapshot_id']) if use_admin_context: self.assertEqual( shares[1]['instance']['host'], result['shares'][0]['host']) self.assertEqual( shares[1]['instance']['share_network_id'], result['shares'][0]['share_network_id']) if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.42')): self.assertEqual(1, result['count']) def _list_detail_common_expected(self, admin=False): share_dict = { 'status': 'fakestatus', 'description': 'displaydesc', 'export_location': 'fake_location', 'export_locations': ['fake_location', 'fake_location2'], 'availability_zone': 'fakeaz', 'name': 'displayname', 'share_proto': 'FAKEPROTO', 'metadata': {}, 'project_id': 'fakeproject', 'id': '1', 'snapshot_id': '2', 'snapshot_support': True, 'share_network_id': None, 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'size': 1, 'share_type': '1', 'volume_type': '1', 'is_public': False, 'links': [ { 'href': 'http://localhost/v1/fake/shares/1', 'rel': 'self' }, { 'href': 'http://localhost/fake/shares/1', 'rel': 'bookmark' } ], } if admin: share_dict['host'] = 'fakehost' return {'shares': [share_dict]} def _list_detail_test_common(self, req, expected): self.mock_object(share_api.API, 'get_all', stubs.stub_share_get_all_by_project) res_dict = self.controller.detail(req) self.assertDictListMatch(expected['shares'], res_dict['shares']) self.assertEqual(res_dict['shares'][0]['volume_type'], res_dict['shares'][0]['share_type']) def test_share_list_detail(self): env = {'QUERY_STRING': 'name=Share+Test+Name'} req = fakes.HTTPRequest.blank('/shares/detail', environ=env) expected = self._list_detail_common_expected() expected['shares'][0].pop('snapshot_support') self._list_detail_test_common(req, expected) def test_share_list_detail_with_share_group(self): env = {'QUERY_STRING': 'name=Share+Test+Name'} req = fakes.HTTPRequest.blank( '/shares/detail', environ=env, version="2.31", experimental=True) expected = self._list_detail_common_expected() expected['shares'][0]['task_state'] = None expected['shares'][0]['share_type_name'] = None expected['shares'][0].pop('export_location') expected['shares'][0].pop('export_locations') expected['shares'][0]['access_rules_status'] = 'active' expected['shares'][0]['replication_type'] = None expected['shares'][0]['has_replicas'] = False expected['shares'][0]['user_id'] = 'fakeuser' expected['shares'][0]['create_share_from_snapshot_support'] = True expected['shares'][0]['revert_to_snapshot_support'] = False expected['shares'][0]['share_group_id'] = None expected['shares'][0]['source_share_group_snapshot_member_id'] = None self._list_detail_test_common(req, expected) def test_share_list_detail_with_task_state(self): env = {'QUERY_STRING': 'name=Share+Test+Name'} req = fakes.HTTPRequest.blank('/shares/detail', environ=env, version="2.5") expected = 
self._list_detail_common_expected() expected['shares'][0]['task_state'] = None self._list_detail_test_common(req, expected) def test_share_list_detail_without_export_locations(self): env = {'QUERY_STRING': 'name=Share+Test+Name'} req = fakes.HTTPRequest.blank('/shares/detail', environ=env, version="2.9") expected = self._list_detail_common_expected() expected['shares'][0]['task_state'] = None expected['shares'][0]['share_type_name'] = None expected['shares'][0].pop('export_location') expected['shares'][0].pop('export_locations') self._list_detail_test_common(req, expected) def test_share_list_detail_with_replication_type(self): self.mock_object(share_api.API, 'get_all', stubs.stub_share_get_all_by_project) env = {'QUERY_STRING': 'name=Share+Test+Name'} req = fakes.HTTPRequest.blank( '/shares/detail', environ=env, version=share_replicas.MIN_SUPPORTED_API_VERSION) res_dict = self.controller.detail(req) expected = { 'shares': [ { 'status': 'fakestatus', 'description': 'displaydesc', 'availability_zone': 'fakeaz', 'name': 'displayname', 'share_proto': 'FAKEPROTO', 'metadata': {}, 'project_id': 'fakeproject', 'access_rules_status': 'active', 'id': '1', 'snapshot_id': '2', 'share_network_id': None, 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'size': 1, 'share_type_name': None, 'share_type': '1', 'volume_type': '1', 'is_public': False, 'snapshot_support': True, 'has_replicas': False, 'replication_type': None, 'task_state': None, 'links': [ { 'href': 'http://localhost/v1/fake/shares/1', 'rel': 'self' }, { 'href': 'http://localhost/fake/shares/1', 'rel': 'bookmark' } ], } ] } self.assertEqual(expected, res_dict) self.assertEqual(res_dict['shares'][0]['volume_type'], res_dict['shares'][0]['share_type']) def test_remove_invalid_options(self): ctx = context.RequestContext('fakeuser', 'fakeproject', is_admin=False) search_opts = {'a': 'a', 'b': 'b', 'c': 'c', 'd': 'd'} expected_opts = {'a': 'a', 'c': 'c'} allowed_opts = ['a', 'c'] common.remove_invalid_options(ctx, search_opts, allowed_opts) self.assertEqual(expected_opts, search_opts) def test_remove_invalid_options_admin(self): ctx = context.RequestContext('fakeuser', 'fakeproject', is_admin=True) search_opts = {'a': 'a', 'b': 'b', 'c': 'c', 'd': 'd'} expected_opts = {'a': 'a', 'b': 'b', 'c': 'c', 'd': 'd'} allowed_opts = ['a', 'c'] common.remove_invalid_options(ctx, search_opts, allowed_opts) self.assertEqual(expected_opts, search_opts) def _fake_access_get(self, ctxt, access_id): class Access(object): def __init__(self, **kwargs): self.STATE_NEW = 'fake_new' self.STATE_ACTIVE = 'fake_active' self.STATE_ERROR = 'fake_error' self.params = kwargs self.params['state'] = self.STATE_NEW self.share_id = kwargs.get('share_id') self.id = access_id def __getitem__(self, item): return self.params[item] access = Access(access_id=access_id, share_id='fake_share_id') return access @ddt.ddt class ShareActionsTest(test.TestCase): def setUp(self): super(ShareActionsTest, self).setUp() self.controller = shares.ShareController() self.mock_object(share_api.API, 'get', stubs.stub_share_get) @ddt.unpack @ddt.data( {"access": {'access_type': 'ip', 'access_to': '127.0.0.1'}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': '1' * 4}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': '1' * 255}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': 'fake{.-_\'`}'}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': 'MYDOMAIN-Administrator'}, "version": "2.7"}, {"access": {'access_type': 'user', 
'access_to': 'test group name'}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': 'group$.-_\'`{}'}, "version": "2.7"}, {"access": {'access_type': 'cert', 'access_to': 'x'}, "version": "2.7"}, {"access": {'access_type': 'cert', 'access_to': 'tenant.example.com'}, "version": "2.7"}, {"access": {'access_type': 'cert', 'access_to': 'x' * 64}, "version": "2.7"}, {"access": {'access_type': 'ip', 'access_to': 'ad80::abaa:0:c2:2'}, "version": "2.38"}, {"access": {'access_type': 'ip', 'access_to': 'AD80:ABAA::'}, "version": "2.38"}, {"access": {'access_type': 'ip', 'access_to': 'AD80::/36'}, "version": "2.38"}, {"access": {'access_type': 'ip', 'access_to': 'AD80:ABAA::/128'}, "version": "2.38"}, {"access": {'access_type': 'ip', 'access_to': '127.0.0.1'}, "version": "2.38"}, {"access": {'access_type': 'ip', 'access_to': '127.0.0.1', 'metadata': {'test_key': 'test_value'}}, "version": "2.45"}, {"access": {'access_type': 'ip', 'access_to': '127.0.0.1', 'metadata': {'k' * 255: 'v' * 1023}}, "version": "2.45"}, ) def test_allow_access(self, access, version): self.mock_object(share_api.API, 'allow_access', mock.Mock(return_value={'fake': 'fake'})) self.mock_object(self.controller._access_view_builder, 'view', mock.Mock(return_value={'access': {'fake': 'fake'}})) id = 'fake_share_id' body = {'allow_access': access} expected = {'access': {'fake': 'fake'}} req = fakes.HTTPRequest.blank( '/v2/tenant1/shares/%s/action' % id, version=version) res = self.controller.allow_access(req, id, body) self.assertEqual(expected, res) @ddt.unpack @ddt.data( {"access": {'access_type': 'error_type', 'access_to': '127.0.0.1'}, "version": "2.7"}, {"access": {'access_type': 'ip', 'access_to': 'localhost'}, "version": "2.7"}, {"access": {'access_type': 'ip', 'access_to': '127.0.0.*'}, "version": "2.7"}, {"access": {'access_type': 'ip', 'access_to': '127.0.0.0/33'}, "version": "2.7"}, {"access": {'access_type': 'ip', 'access_to': '127.0.0.256'}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': '1'}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': '1' * 3}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': '1' * 256}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': 'root<>'}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': 'group\\'}, "version": "2.7"}, {"access": {'access_type': 'user', 'access_to': '+=*?group'}, "version": "2.7"}, {"access": {'access_type': 'cert', 'access_to': ''}, "version": "2.7"}, {"access": {'access_type': 'cert', 'access_to': ' '}, "version": "2.7"}, {"access": {'access_type': 'cert', 'access_to': 'x' * 65}, "version": "2.7"}, {"access": {'access_type': 'ip', 'access_to': 'ad80::abaa:0:c2:2'}, "version": "2.37"}, {"access": {'access_type': 'ip', 'access_to': '127.4.0.3/33'}, "version": "2.38"}, {"access": {'access_type': 'ip', 'access_to': 'AD80:ABAA::*'}, "version": "2.38"}, {"access": {'access_type': 'ip', 'access_to': 'AD80::/129'}, "version": "2.38"}, {"access": {'access_type': 'ip', 'access_to': 'ad80::abaa:0:c2:2/64'}, "version": "2.38"}, {"access": {'access_type': 'ip', 'access_to': '127.0.0.1', 'metadata': {'k' * 256: 'v' * 1024}}, "version": "2.45"}, {"access": {'access_type': 'ip', 'access_to': '127.0.0.1', 'metadata': {'key': None}}, "version": "2.45"}, ) def test_allow_access_error(self, access, version): id = 'fake_share_id' body = {'allow_access': access} req = fakes.HTTPRequest.blank('/v2/tenant1/shares/%s/action' % id, version=version) self.assertRaises(webob.exc.HTTPBadRequest, 
self.controller.allow_access, req, id, body) @ddt.unpack @ddt.data( {'exc': None, 'access_to': 'alice', 'version': '2.13'}, {'exc': webob.exc.HTTPBadRequest, 'access_to': 'alice', 'version': '2.11'} ) def test_allow_access_ceph(self, exc, access_to, version): share_id = "fake_id" self.mock_object(share_api.API, 'allow_access', mock.Mock(return_value={'fake': 'fake'})) self.mock_object(self.controller._access_view_builder, 'view', mock.Mock(return_value={'access': {'fake': 'fake'}})) req = fakes.HTTPRequest.blank( '/v2/shares/%s/action' % share_id, version=version) body = {'allow_access': { 'access_type': 'cephx', 'access_to': access_to, 'access_level': 'rw' }} if exc: self.assertRaises(exc, self.controller.allow_access, req, share_id, body) else: expected = {'access': {'fake': 'fake'}} res = self.controller.allow_access(req, id, body) self.assertEqual(expected, res) @ddt.data('2.1', '2.27') def test_allow_access_access_rules_status_is_in_error(self, version): share = db_utils.create_share( access_rules_status=constants.SHARE_INSTANCE_RULES_ERROR) req = fakes.HTTPRequest.blank( '/v2/shares/%s/action' % share['id'], version=version) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'allow_access') if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.7')): key = 'allow_access' method = self.controller.allow_access else: key = 'os-allow_access' method = self.controller.allow_access_legacy body = { key: { 'access_type': 'user', 'access_to': 'crimsontide', 'access_level': 'rw', } } self.assertRaises(webob.exc.HTTPBadRequest, method, req, share['id'], body) self.assertFalse(share_api.API.allow_access.called) @ddt.data(*itertools.product( ('2.1', '2.27'), (constants.SHARE_INSTANCE_RULES_SYNCING, constants.STATUS_ACTIVE))) @ddt.unpack def test_allow_access_no_transitional_states(self, version, status): share = db_utils.create_share(access_rules_status=status, status=constants.STATUS_AVAILABLE) req = fakes.HTTPRequest.blank( '/v2/shares/%s/action' % share['id'], version=version) ctxt = req.environ['manila.context'] access = { 'access_type': 'user', 'access_to': 'clemsontigers', 'access_level': 'rw', } expected_mapping = { constants.SHARE_INSTANCE_RULES_SYNCING: constants.STATUS_NEW, constants.SHARE_INSTANCE_RULES_ERROR: constants.ACCESS_STATE_ERROR, constants.STATUS_ACTIVE: constants.ACCESS_STATE_ACTIVE, } share = db.share_get(ctxt, share['id']) updated_access = db_utils.create_access(share_id=share['id'], **access) expected_access = access expected_access.update( { 'id': updated_access['id'], 'state': expected_mapping[share['access_rules_status']], 'share_id': updated_access['share_id'], }) if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.7')): key = 'allow_access' method = self.controller.allow_access else: key = 'os-allow_access' method = self.controller.allow_access_legacy if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.13')): expected_access['access_key'] = None self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'allow_access', mock.Mock(return_value=updated_access)) body = {key: access} access = method(req, share['id'], body) self.assertEqual(expected_access, access['access']) share_api.API.allow_access.assert_called_once_with( req.environ['manila.context'], share, 'user', 'clemsontigers', 'rw', None) @ddt.data(*itertools.product( set(['2.28', api_version._MAX_API_VERSION]), 
(constants.SHARE_INSTANCE_RULES_ERROR, constants.SHARE_INSTANCE_RULES_SYNCING, constants.STATUS_ACTIVE))) @ddt.unpack def test_allow_access_access_rules_status_dont_care(self, version, status): access = { 'access_type': 'user', 'access_to': 'clemsontigers', 'access_level': 'rw', } updated_access = db_utils.create_access(**access) expected_access = access expected_access.update( { 'id': updated_access['id'], 'state': updated_access['state'], 'share_id': updated_access['share_id'], 'access_key': None, }) share = db_utils.create_share(access_rules_status=status) req = fakes.HTTPRequest.blank( '/v2/shares/%s/action' % share['id'], version=version) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'allow_access', mock.Mock(return_value=updated_access)) body = {'allow_access': access} access = self.controller.allow_access(req, share['id'], body) if api_version.APIVersionRequest(version) >= ( api_version.APIVersionRequest("2.33")): expected_access.update( { 'created_at': updated_access['created_at'], 'updated_at': updated_access['updated_at'], }) if api_version.APIVersionRequest(version) >= ( api_version.APIVersionRequest("2.45")): expected_access.update( { 'metadata': {}, }) self.assertEqual(expected_access, access['access']) share_api.API.allow_access.assert_called_once_with( req.environ['manila.context'], share, 'user', 'clemsontigers', 'rw', None) def test_deny_access(self): def _stub_deny_access(*args, **kwargs): pass self.mock_object(share_api.API, "deny_access", _stub_deny_access) self.mock_object(share_api.API, "access_get", _fake_access_get) id = 'fake_share_id' body = {"os-deny_access": {"access_id": 'fake_acces_id'}} req = fakes.HTTPRequest.blank('/v1/tenant1/shares/%s/action' % id) res = self.controller._deny_access(req, id, body) self.assertEqual(202, res.status_int) def test_deny_access_not_found(self): def _stub_deny_access(*args, **kwargs): pass self.mock_object(share_api.API, "deny_access", _stub_deny_access) self.mock_object(share_api.API, "access_get", _fake_access_get) id = 'super_fake_share_id' body = {"os-deny_access": {"access_id": 'fake_acces_id'}} req = fakes.HTTPRequest.blank('/v1/tenant1/shares/%s/action' % id) self.assertRaises(webob.exc.HTTPNotFound, self.controller._deny_access, req, id, body) def test_access_list(self): fake_access_list = [ { "state": "fakestatus", "id": "fake_access_id", "access_type": "fakeip", "access_to": "127.0.0.1", } ] self.mock_object(self.controller._access_view_builder, 'list_view', mock.Mock(return_value={'access_list': fake_access_list})) id = 'fake_share_id' body = {"os-access_list": None} req = fakes.HTTPRequest.blank('/v2/tenant1/shares/%s/action' % id) res_dict = self.controller._access_list(req, id, body) self.assertEqual({'access_list': fake_access_list}, res_dict) @ddt.unpack @ddt.data( {'body': {'os-extend': {'new_size': 2}}, 'version': '2.6'}, {'body': {'extend': {'new_size': 2}}, 'version': '2.7'}, ) def test_extend(self, body, version): id = 'fake_share_id' share = stubs.stub_share_get(None, None, id) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, "extend") size = '2' req = fakes.HTTPRequest.blank( '/v2/shares/%s/action' % id, version=version) actual_response = self.controller._extend(req, id, body) share_api.API.get.assert_called_once_with(mock.ANY, id) share_api.API.extend.assert_called_once_with( mock.ANY, share, int(size)) self.assertEqual(202, actual_response.status_int) @ddt.data({"os-extend": ""}, {"os-extend": 
{"new_size": "foo"}}, {"os-extend": {"new_size": {'foo': 'bar'}}}) def test_extend_invalid_body(self, body): id = 'fake_share_id' req = fakes.HTTPRequest.blank('/v1/shares/%s/action' % id) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._extend, req, id, body) @ddt.data({'source': exception.InvalidInput, 'target': webob.exc.HTTPBadRequest}, {'source': exception.InvalidShare, 'target': webob.exc.HTTPBadRequest}, {'source': exception.ShareSizeExceedsAvailableQuota, 'target': webob.exc.HTTPForbidden}) @ddt.unpack def test_extend_exception(self, source, target): id = 'fake_share_id' req = fakes.HTTPRequest.blank('/v1/shares/%s/action' % id) body = {"os-extend": {'new_size': '123'}} self.mock_object(share_api.API, "extend", mock.Mock(side_effect=source('fake'))) self.assertRaises(target, self.controller._extend, req, id, body) @ddt.unpack @ddt.data( {'body': {'os-shrink': {'new_size': 1}}, 'version': '2.6'}, {'body': {'shrink': {'new_size': 1}}, 'version': '2.7'}, ) def test_shrink(self, body, version): id = 'fake_share_id' share = stubs.stub_share_get(None, None, id) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, "shrink") size = '1' req = fakes.HTTPRequest.blank( '/v2/shares/%s/action' % id, version=version) actual_response = self.controller._shrink(req, id, body) share_api.API.get.assert_called_once_with(mock.ANY, id) share_api.API.shrink.assert_called_once_with( mock.ANY, share, int(size)) self.assertEqual(202, actual_response.status_int) @ddt.data({"os-shrink": ""}, {"os-shrink": {"new_size": "foo"}}, {"os-shrink": {"new_size": {'foo': 'bar'}}}) def test_shrink_invalid_body(self, body): id = 'fake_share_id' req = fakes.HTTPRequest.blank('/v1/shares/%s/action' % id) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._shrink, req, id, body) @ddt.data({'source': exception.InvalidInput, 'target': webob.exc.HTTPBadRequest}, {'source': exception.InvalidShare, 'target': webob.exc.HTTPBadRequest}) @ddt.unpack def test_shrink_exception(self, source, target): id = 'fake_share_id' req = fakes.HTTPRequest.blank('/v1/shares/%s/action' % id) body = {"os-shrink": {'new_size': '123'}} self.mock_object(share_api.API, "shrink", mock.Mock(side_effect=source('fake'))) self.assertRaises(target, self.controller._shrink, req, id, body) @ddt.ddt class ShareAdminActionsAPITest(test.TestCase): def setUp(self): super(ShareAdminActionsAPITest, self).setUp() CONF.set_default("default_share_type", None) self.flags(transport_url='rabbit://fake:fake@mqhost:5672') self.share_api = share_api.API() self.admin_context = context.RequestContext('admin', 'fake', True) self.member_context = context.RequestContext('fake', 'fake') def _get_context(self, role): return getattr(self, '%s_context' % role) def _setup_share_data(self, share=None, version='2.7'): if share is None: share = db_utils.create_share(status=constants.STATUS_AVAILABLE, size='1', override_defaults=True) path = '/v2/fake/shares/%s/action' % share['id'] req = fakes.HTTPRequest.blank(path, script_name=path, version=version) return share, req def _reset_status(self, ctxt, model, req, db_access_method, valid_code, valid_status=None, body=None, version='2.7'): if float(version) > 2.6: action_name = 'reset_status' else: action_name = 'os-reset_status' if body is None: body = {action_name: {'status': constants.STATUS_ERROR}} req.method = 'POST' req.headers['content-type'] = 'application/json' req.headers['X-Openstack-Manila-Api-Version'] = version req.body = six.b(jsonutils.dumps(body)) 
req.environ['manila.context'] = ctxt resp = req.get_response(fakes.app()) # validate response code and model status self.assertEqual(valid_code, resp.status_int) if valid_code == 404: self.assertRaises(exception.NotFound, db_access_method, ctxt, model['id']) else: actual_model = db_access_method(ctxt, model['id']) self.assertEqual(valid_status, actual_model['status']) @ddt.data(*fakes.fixture_reset_status_with_different_roles) @ddt.unpack def test_share_reset_status_with_different_roles(self, role, valid_code, valid_status, version): share, req = self._setup_share_data(version=version) ctxt = self._get_context(role) self._reset_status(ctxt, share, req, db.share_get, valid_code, valid_status, version=version) @ddt.data(*fakes.fixture_invalid_reset_status_body) def test_share_invalid_reset_status_body(self, body): share, req = self._setup_share_data(version='2.6') ctxt = self.admin_context self._reset_status(ctxt, share, req, db.share_get, 400, constants.STATUS_AVAILABLE, body, version='2.6') @ddt.data('2.6', '2.7') def test_share_reset_status_for_missing(self, version): fake_share = {'id': 'missing-share-id'} req = fakes.HTTPRequest.blank( '/v2/fake/shares/%s/action' % fake_share['id'], version=version) self._reset_status(self.admin_context, fake_share, req, db.share_snapshot_get, 404, version=version) def _force_delete(self, ctxt, model, req, db_access_method, valid_code, check_model_in_db=False, version='2.7'): if float(version) > 2.6: action_name = 'force_delete' else: action_name = 'os-force_delete' req.method = 'POST' req.headers['content-type'] = 'application/json' req.headers['X-Openstack-Manila-Api-Version'] = version req.body = six.b(jsonutils.dumps({action_name: {}})) req.environ['manila.context'] = ctxt resp = req.get_response(fakes.app()) # validate response self.assertEqual(valid_code, resp.status_int) if valid_code == 202 and check_model_in_db: self.assertRaises(exception.NotFound, db_access_method, ctxt, model['id']) @ddt.data(*fakes.fixture_force_delete_with_different_roles) @ddt.unpack def test_share_force_delete_with_different_roles(self, role, resp_code, version): share, req = self._setup_share_data(version=version) ctxt = self._get_context(role) self._force_delete(ctxt, share, req, db.share_get, resp_code, check_model_in_db=True, version=version) @ddt.data('2.6', '2.7') def test_share_force_delete_missing(self, version): share, req = self._setup_share_data( share={'id': 'fake'}, version=version) ctxt = self._get_context('admin') self._force_delete( ctxt, share, req, db.share_get, 404, version=version) @ddt.ddt class ShareUnmanageTest(test.TestCase): def setUp(self): super(ShareUnmanageTest, self).setUp() self.controller = shares.ShareController() self.mock_object(share_api.API, 'get_all', stubs.stub_get_all_shares) self.mock_object(share_api.API, 'get', stubs.stub_share_get) self.mock_object(share_api.API, 'update', stubs.stub_share_update) self.mock_object(share_api.API, 'delete', stubs.stub_share_delete) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) self.share_id = 'fake' self.request = fakes.HTTPRequest.blank( '/share/%s/unmanage' % self.share_id, use_admin_context=True, version='2.7', ) def test_unmanage_share(self): share = dict(status=constants.STATUS_AVAILABLE, id='foo_id', instance={}) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'unmanage', mock.Mock()) self.mock_object( self.controller.share_api.db, 'share_snapshot_get_all_for_share', mock.Mock(return_value=[])) actual_result = 
self.controller.unmanage(self.request, share['id']) self.assertEqual(202, actual_result.status_int) (self.controller.share_api.db.share_snapshot_get_all_for_share. assert_called_once_with( self.request.environ['manila.context'], share['id'])) self.controller.share_api.get.assert_called_once_with( self.request.environ['manila.context'], share['id']) share_api.API.unmanage.assert_called_once_with( self.request.environ['manila.context'], share) def test__unmanage(self): body = {} req = fakes.HTTPRequest.blank( '/shares/1/action', use_admin_context=False, version='2.49') share = dict(status=constants.STATUS_AVAILABLE, id='foo_id', instance={}) mock_unmanage = self.mock_object(self.controller, '_unmanage') self.controller.unmanage(req, share['id'], body) mock_unmanage.assert_called_once_with( req, share['id'], body, allow_dhss_true=True ) def test_unmanage_share_that_has_snapshots(self): share = dict(status=constants.STATUS_AVAILABLE, id='foo_id', instance={}) snapshots = ['foo', 'bar'] self.mock_object(self.controller.share_api, 'unmanage') self.mock_object( self.controller.share_api.db, 'share_snapshot_get_all_for_share', mock.Mock(return_value=snapshots)) self.mock_object( self.controller.share_api, 'get', mock.Mock(return_value=share)) self.assertRaises( webob.exc.HTTPForbidden, self.controller.unmanage, self.request, share['id']) self.assertFalse(self.controller.share_api.unmanage.called) (self.controller.share_api.db.share_snapshot_get_all_for_share. assert_called_once_with( self.request.environ['manila.context'], share['id'])) self.controller.share_api.get.assert_called_once_with( self.request.environ['manila.context'], share['id']) def test_unmanage_share_based_on_share_server(self): share = dict(instance=dict(share_server_id='foo_id'), id='bar_id') self.mock_object( self.controller.share_api, 'get', mock.Mock(return_value=share)) self.assertRaises( webob.exc.HTTPForbidden, self.controller.unmanage, self.request, share['id']) self.controller.share_api.get.assert_called_once_with( self.request.environ['manila.context'], share['id']) @ddt.data(*constants.TRANSITIONAL_STATUSES) def test_unmanage_share_with_transitional_state(self, share_status): share = dict(status=share_status, id='foo_id', instance={}) self.mock_object( self.controller.share_api, 'get', mock.Mock(return_value=share)) self.assertRaises( webob.exc.HTTPForbidden, self.controller.unmanage, self.request, share['id']) self.controller.share_api.get.assert_called_once_with( self.request.environ['manila.context'], share['id']) def test_unmanage_share_not_found(self): self.mock_object(share_api.API, 'get', mock.Mock( side_effect=exception.NotFound)) self.mock_object(share_api.API, 'unmanage', mock.Mock()) self.assertRaises(webob.exc.HTTPNotFound, self.controller.unmanage, self.request, self.share_id) @ddt.data(exception.InvalidShare(reason="fake"), exception.PolicyNotAuthorized(action="fake"),) def test_unmanage_share_invalid(self, side_effect): share = dict(status=constants.STATUS_AVAILABLE, id='foo_id', instance={}) self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'unmanage', mock.Mock( side_effect=side_effect)) self.assertRaises(webob.exc.HTTPForbidden, self.controller.unmanage, self.request, self.share_id) def test_wrong_permissions(self): share_id = 'fake' req = fakes.HTTPRequest.blank('/share/%s/unmanage' % share_id, use_admin_context=False, version='2.7') self.assertRaises(webob.exc.HTTPForbidden, self.controller.unmanage, req, share_id) def 
test_unsupported_version(self): share_id = 'fake' req = fakes.HTTPRequest.blank('/share/%s/unmanage' % share_id, use_admin_context=False, version='2.6') self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.unmanage, req, share_id) def get_fake_manage_body(export_path='/fake', service_host='fake@host#POOL', protocol='fake', share_type='fake', **kwargs): fake_share = { 'export_path': export_path, 'service_host': service_host, 'protocol': protocol, 'share_type': share_type, } fake_share.update(kwargs) return {'share': fake_share} @ddt.ddt class ShareManageTest(test.TestCase): def setUp(self): super(ShareManageTest, self).setUp() self.controller = shares.ShareController() self.resource_name = self.controller.resource_name self.request = fakes.HTTPRequest.blank( '/v2/shares/manage', use_admin_context=True, version='2.7') self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) def _setup_manage_mocks(self, service_is_up=True): self.mock_object(db, 'service_get_by_host_and_topic', mock.Mock( return_value={'host': 'fake'})) self.mock_object(share_types, 'get_share_type_by_name_or_id', mock.Mock(return_value={'id': 'fake'})) self.mock_object(utils, 'service_is_up', mock.Mock( return_value=service_is_up)) if service_is_up: self.mock_object(utils, 'validate_service_host') else: self.mock_object( utils, 'validate_service_host', mock.Mock(side_effect=exception.ServiceIsDown(service='fake'))) def test__manage(self): body = {} req = fakes.HTTPRequest.blank( '/v2/shares/manage', use_admin_context=True, version='2.49') mock_manage = self.mock_object(self.controller, '_manage') self.controller.manage(req, body) mock_manage.assert_called_once_with( req, body, allow_dhss_true=True ) @ddt.data({}, {'shares': {}}, {'share': get_fake_manage_body('', None, None)}) def test_share_manage_invalid_body(self, body): self.assertRaises(webob.exc.HTTPUnprocessableEntity, self.controller.manage, self.request, body) def test_share_manage_service_not_found(self): body = get_fake_manage_body() self.mock_object(db, 'service_get_by_host_and_topic', mock.Mock( side_effect=exception.ServiceNotFound(service_id='fake'))) self.assertRaises(webob.exc.HTTPNotFound, self.controller.manage, self.request, body) def test_share_manage_share_type_not_found(self): body = get_fake_manage_body() self.mock_object(db, 'service_get_by_host_and_topic', mock.Mock()) self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.mock_object(db, 'share_type_get_by_name', mock.Mock( side_effect=exception.ShareTypeNotFoundByName( share_type_name='fake'))) self.assertRaises(webob.exc.HTTPNotFound, self.controller.manage, self.request, body) @ddt.data({'service_is_up': False, 'service_host': 'fake@host#POOL'}, {'service_is_up': True, 'service_host': 'fake@host'}) def test_share_manage_bad_request(self, settings): body = get_fake_manage_body(service_host=settings.pop('service_host')) self._setup_manage_mocks(**settings) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.manage, self.request, body) def test_share_manage_duplicate_share(self): body = get_fake_manage_body() exc = exception.InvalidShare(reason="fake") self._setup_manage_mocks() self.mock_object(share_api.API, 'manage', mock.Mock(side_effect=exc)) self.assertRaises(webob.exc.HTTPConflict, self.controller.manage, self.request, body) def test_share_manage_forbidden_manage(self): body = get_fake_manage_body() self._setup_manage_mocks() error = mock.Mock(side_effect=exception.PolicyNotAuthorized(action='')) 
self.mock_object(share_api.API, 'manage', error) self.assertRaises(webob.exc.HTTPForbidden, self.controller.manage, self.request, body) def test_share_manage_forbidden_validate_service_host(self): body = get_fake_manage_body() self._setup_manage_mocks() error = mock.Mock(side_effect=exception.PolicyNotAuthorized(action='')) self.mock_object( utils, 'validate_service_host', mock.Mock(side_effect=error)) self.assertRaises(webob.exc.HTTPForbidden, self.controller.manage, self.request, body) @ddt.data( get_fake_manage_body(name='foo', description='bar'), get_fake_manage_body(display_name='foo', description='bar'), get_fake_manage_body(name='foo', display_description='bar'), get_fake_manage_body(display_name='foo', display_description='bar'), get_fake_manage_body(display_name='foo', display_description='bar', driver_options=dict(volume_id='quuz')), ) def test_share_manage(self, data): self._test_share_manage(data, "2.7") @ddt.data( get_fake_manage_body(name='foo', description='bar', is_public=True), get_fake_manage_body(name='foo', description='bar', is_public=False) ) def test_share_manage_with_is_public(self, data): self._test_share_manage(data, "2.8") def test_share_manage_with_user_id(self): self._test_share_manage(get_fake_manage_body( name='foo', description='bar', is_public=True), "2.16") def _test_share_manage(self, data, version): expected = { 'share': { 'status': 'fakestatus', 'description': 'displaydesc', 'availability_zone': 'fakeaz', 'name': 'displayname', 'share_proto': 'FAKEPROTO', 'metadata': {}, 'project_id': 'fakeproject', 'host': 'fakehost', 'id': 'fake', 'snapshot_id': '2', 'share_network_id': None, 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'size': 1, 'share_type_name': None, 'share_server_id': 'fake_share_server_id', 'share_type': '1', 'volume_type': '1', 'is_public': False, 'snapshot_support': True, 'task_state': None, 'links': [ { 'href': 'http://localhost/v1/fake/shares/fake', 'rel': 'self' }, { 'href': 'http://localhost/fake/shares/fake', 'rel': 'bookmark' } ], } } self._setup_manage_mocks() return_share = mock.Mock( return_value=stubs.stub_share( 'fake', instance={ 'share_type_id': '1', }) ) self.mock_object( share_api.API, 'manage', return_share) self.mock_object( common, 'validate_public_share_policy', mock.Mock(side_effect=lambda *args, **kwargs: args[1])) share = { 'host': data['share']['service_host'], 'export_location': data['share']['export_path'], 'share_proto': data['share']['protocol'].upper(), 'share_type_id': 'fake', 'display_name': 'foo', 'display_description': 'bar', } driver_options = data['share'].get('driver_options', {}) if (api_version.APIVersionRequest(version) <= api_version.APIVersionRequest('2.8')): expected['share']['export_location'] = 'fake_location' expected['share']['export_locations'] = ( ['fake_location', 'fake_location2']) if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.10')): expected['share']['access_rules_status'] = ( constants.STATUS_ACTIVE) if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.11')): expected['share']['has_replicas'] = False expected['share']['replication_type'] = None if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.16')): expected['share']['user_id'] = 'fakeuser' if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.8')): share['is_public'] = data['share']['is_public'] req = fakes.HTTPRequest.blank('/v2/shares/manage', version=version, use_admin_context=True) actual_result = 
self.controller.manage(req, data) share_api.API.manage.assert_called_once_with( mock.ANY, share, driver_options) self.assertIsNotNone(actual_result) self.assertEqual(expected, actual_result) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'manage') def test_wrong_permissions(self): body = get_fake_manage_body() self.assertRaises( webob.exc.HTTPForbidden, self.controller.manage, fakes.HTTPRequest.blank( '/share/manage', use_admin_context=False, version='2.7'), body, ) def test_unsupported_version(self): share_id = 'fake' req = fakes.HTTPRequest.blank( '/share/manage', use_admin_context=False, version='2.6') self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.manage, req, share_id) def test_revert(self): mock_revert = self.mock_object( self.controller, '_revert', mock.Mock(return_value='fake_response')) req = fakes.HTTPRequest.blank( '/shares/fake_id/action', use_admin_context=False, version='2.27') result = self.controller.revert(req, 'fake_id', 'fake_body') self.assertEqual('fake_response', result) mock_revert.assert_called_once_with( req, 'fake_id', 'fake_body') def test_revert_unsupported(self): req = fakes.HTTPRequest.blank( '/shares/fake_id/action', use_admin_context=False, version='2.24') self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.revert, req, 'fake_id', 'fake_body') manila-10.0.0/manila/tests/api/v2/test_share_replicas.py0000664000175000017500000007550513656750227023204 0ustar zuulzuul00000000000000# Copyright 2015 Goutham Pacha Ravi # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
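# [Editorial sketch - not part of the upstream manila source.] The action-style tests in this module (for example the _reset_status and _force_delete helpers further below) all build their requests the same way: the action body is serialized with oslo_serialization.jsonutils, attached to the webob request as bytes via six.b(), and the request is dispatched against the fake WSGI app. A minimal, self-contained illustration of the serialization step follows; the helper name is an editorial assumption and is not referenced by any test in this file. def _editorial_action_body_example(action_name='reset_status', status='error'): """Editor-added sketch of how an action body is serialized for req.body.""" from oslo_serialization import jsonutils import six # e.g. b'{"reset_status": {"status": "error"}}' return six.b(jsonutils.dumps({action_name: {'status': status}}))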
from unittest import mock import ddt from oslo_config import cfg from oslo_serialization import jsonutils import six from webob import exc from manila.api.v2 import share_replicas from manila.common import constants from manila import context from manila import exception from manila import policy from manila import share from manila import test from manila.tests.api import fakes from manila.tests import db_utils from manila.tests import fake_share CONF = cfg.CONF @ddt.ddt class ShareReplicasApiTest(test.TestCase): """Share Replicas API Test Cases.""" def setUp(self): super(ShareReplicasApiTest, self).setUp() self.controller = share_replicas.ShareReplicationController() self.resource_name = self.controller.resource_name self.api_version = share_replicas.MIN_SUPPORTED_API_VERSION self.replicas_req = fakes.HTTPRequest.blank( '/share-replicas', version=self.api_version, experimental=True) self.member_context = context.RequestContext('fake', 'fake') self.replicas_req.environ['manila.context'] = self.member_context self.replicas_req_admin = fakes.HTTPRequest.blank( '/share-replicas', version=self.api_version, experimental=True, use_admin_context=True) self.admin_context = self.replicas_req_admin.environ['manila.context'] self.mock_policy_check = self.mock_object(policy, 'check_policy') def _get_context(self, role): return getattr(self, '%s_context' % role) def _create_replica_get_req(self, **kwargs): if 'status' not in kwargs: kwargs['status'] = constants.STATUS_AVAILABLE if 'replica_state' not in kwargs: kwargs['replica_state'] = constants.REPLICA_STATE_IN_SYNC replica = db_utils.create_share_replica(**kwargs) path = '/v2/fake/share-replicas/%s/action' % replica['id'] req = fakes.HTTPRequest.blank(path, script_name=path, version=self.api_version) req.method = 'POST' req.headers['content-type'] = 'application/json' req.headers['X-Openstack-Manila-Api-Version'] = self.api_version req.headers['X-Openstack-Manila-Api-Experimental'] = True return replica, req def _get_fake_replica(self, summary=False, admin=False, **values): replica = fake_share.fake_replica(**values) replica['updated_at'] = '2016-02-11T19:57:56.506805' expected_keys = {'id', 'share_id', 'status', 'replica_state'} expected_replica = {key: replica[key] for key in replica if key in expected_keys} if not summary: expected_replica.update({ 'availability_zone': None, 'created_at': None, 'share_network_id': replica['share_network_id'], 'updated_at': replica['updated_at'], }) if admin: expected_replica['share_server_id'] = replica['share_server_id'] expected_replica['host'] = replica['host'] return replica, expected_replica def test_list_replicas_summary(self): fake_replica, expected_replica = self._get_fake_replica(summary=True) self.mock_object(share_replicas.db, 'share_replicas_get_all', mock.Mock(return_value=[fake_replica])) res_dict = self.controller.index(self.replicas_req) self.assertEqual([expected_replica], res_dict['share_replicas']) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'get_all') def test_list_share_replicas_summary(self): fake_replica, expected_replica = self._get_fake_replica(summary=True) self.mock_object(share_replicas.db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[fake_replica])) req = fakes.HTTPRequest.blank( '/share-replicas?share_id=FAKE_SHARE_ID', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.index(req) self.assertEqual([expected_replica], res_dict['share_replicas']) 
self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') @ddt.data(True, False) def test_list_replicas_detail(self, is_admin): fake_replica, expected_replica = self._get_fake_replica(admin=is_admin) self.mock_object(share_replicas.db, 'share_replicas_get_all', mock.Mock(return_value=[fake_replica])) req = self.replicas_req if not is_admin else self.replicas_req_admin req_context = req.environ['manila.context'] res_dict = self.controller.detail(req) self.assertEqual([expected_replica], res_dict['share_replicas']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_list_replicas_detail_with_limit(self): fake_replica_1, expected_replica_1 = self._get_fake_replica() fake_replica_2, expected_replica_2 = self._get_fake_replica( id="fake_id2") self.mock_object( share_replicas.db, 'share_replicas_get_all', mock.Mock(return_value=[fake_replica_1, fake_replica_2])) req = fakes.HTTPRequest.blank('/share-replicas?limit=1', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.detail(req) self.assertEqual(1, len(res_dict['share_replicas'])) self.assertEqual([expected_replica_1], res_dict['share_replicas']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_list_replicas_detail_with_limit_and_offset(self): fake_replica_1, expected_replica_1 = self._get_fake_replica() fake_replica_2, expected_replica_2 = self._get_fake_replica( id="fake_id2") self.mock_object( share_replicas.db, 'share_replicas_get_all', mock.Mock(return_value=[fake_replica_1, fake_replica_2])) req = fakes.HTTPRequest.blank( '/share-replicas/detail?limit=1&offset=1', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.detail(req) self.assertEqual(1, len(res_dict['share_replicas'])) self.assertEqual([expected_replica_2], res_dict['share_replicas']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_list_share_replicas_detail_invalid_share(self): self.mock_object(share_replicas.db, 'share_replicas_get_all_by_share', mock.Mock(side_effect=exception.NotFound)) mock__view_builder_call = self.mock_object( share_replicas.replication_view.ReplicationViewBuilder, 'detail_list') req = self.replicas_req req.GET['share_id'] = 'FAKE_SHARE_ID' self.assertRaises(exc.HTTPNotFound, self.controller.detail, req) self.assertFalse(mock__view_builder_call.called) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'get_all') @ddt.data(True, False) def test_list_share_replicas_detail(self, is_admin): fake_replica, expected_replica = self._get_fake_replica(admin=is_admin) self.mock_object(share_replicas.db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[fake_replica])) req = fakes.HTTPRequest.blank( '/share-replicas?share_id=FAKE_SHARE_ID', version=self.api_version, experimental=True) req.environ['manila.context'] = ( self.member_context if not is_admin else self.admin_context) req_context = req.environ['manila.context'] res_dict = self.controller.detail(req) self.assertEqual([expected_replica], res_dict['share_replicas']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_list_share_replicas_with_limit(self): fake_replica_1, expected_replica_1 = self._get_fake_replica() fake_replica_2, expected_replica_2 = self._get_fake_replica( id="fake_id2") 
self.mock_object( share_replicas.db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[fake_replica_1, fake_replica_2])) req = fakes.HTTPRequest.blank( '/share-replicas?share_id=FAKE_SHARE_ID&limit=1', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.detail(req) self.assertEqual(1, len(res_dict['share_replicas'])) self.assertEqual([expected_replica_1], res_dict['share_replicas']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_list_share_replicas_with_limit_and_offset(self): fake_replica_1, expected_replica_1 = self._get_fake_replica() fake_replica_2, expected_replica_2 = self._get_fake_replica( id="fake_id2") self.mock_object( share_replicas.db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[fake_replica_1, fake_replica_2])) req = fakes.HTTPRequest.blank( '/share-replicas?share_id=FAKE_SHARE_ID&limit=1&offset=1', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.detail(req) self.assertEqual(1, len(res_dict['share_replicas'])) self.assertEqual([expected_replica_2], res_dict['share_replicas']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') @ddt.data(True, False) def test_show(self, is_admin): fake_replica, expected_replica = self._get_fake_replica(admin=is_admin) self.mock_object( share_replicas.db, 'share_replica_get', mock.Mock(return_value=fake_replica)) req = self.replicas_req if not is_admin else self.replicas_req_admin req_context = req.environ['manila.context'] res_dict = self.controller.show(req, fake_replica.get('id')) self.assertEqual(expected_replica, res_dict['share_replica']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'show') def test_show_no_replica(self): mock__view_builder_call = self.mock_object( share_replicas.replication_view.ReplicationViewBuilder, 'detail') fake_exception = exception.ShareReplicaNotFound( replica_id='FAKE_REPLICA_ID') self.mock_object(share_replicas.db, 'share_replica_get', mock.Mock( side_effect=fake_exception)) self.assertRaises(exc.HTTPNotFound, self.controller.show, self.replicas_req, 'FAKE_REPLICA_ID') self.assertFalse(mock__view_builder_call.called) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'show') def test_create_invalid_body(self): body = {} mock__view_builder_call = self.mock_object( share_replicas.replication_view.ReplicationViewBuilder, 'detail_list') self.assertRaises(exc.HTTPUnprocessableEntity, self.controller.create, self.replicas_req, body) self.assertEqual(0, mock__view_builder_call.call_count) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'create') def test_create_no_share_id(self): body = { 'share_replica': { 'share_id': None, 'availability_zone': None, } } mock__view_builder_call = self.mock_object( share_replicas.replication_view.ReplicationViewBuilder, 'detail_list') self.assertRaises(exc.HTTPBadRequest, self.controller.create, self.replicas_req, body) self.assertFalse(mock__view_builder_call.called) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'create') def test_create_invalid_share_id(self): body = { 'share_replica': { 'share_id': 'FAKE_SHAREID', 'availability_zone': 'FAKE_AZ' } } mock__view_builder_call = self.mock_object( share_replicas.replication_view.ReplicationViewBuilder, 'detail_list') 
self.mock_object(share_replicas.db, 'share_get', mock.Mock(side_effect=exception.NotFound)) self.assertRaises(exc.HTTPNotFound, self.controller.create, self.replicas_req, body) self.assertFalse(mock__view_builder_call.called) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'create') @ddt.data(exception.AvailabilityZoneNotFound, exception.ReplicationException, exception.ShareBusyException) def test_create_exception_path(self, exception_type): fake_replica, _ = self._get_fake_replica( replication_type='writable') mock__view_builder_call = self.mock_object( share_replicas.replication_view.ReplicationViewBuilder, 'detail_list') body = { 'share_replica': { 'share_id': 'FAKE_SHAREID', 'availability_zone': 'FAKE_AZ' } } exc_args = {'id': 'xyz', 'reason': 'abc'} self.mock_object(share_replicas.db, 'share_get', mock.Mock(return_value=fake_replica)) self.mock_object(share.API, 'create_share_replica', mock.Mock(side_effect=exception_type(**exc_args))) self.assertRaises(exc.HTTPBadRequest, self.controller.create, self.replicas_req, body) self.assertFalse(mock__view_builder_call.called) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'create') @ddt.data(True, False) def test_create(self, is_admin): fake_replica, expected_replica = self._get_fake_replica( replication_type='writable', admin=is_admin) body = { 'share_replica': { 'share_id': 'FAKE_SHAREID', 'availability_zone': 'FAKE_AZ' } } self.mock_object(share_replicas.db, 'share_get', mock.Mock(return_value=fake_replica)) self.mock_object(share.API, 'create_share_replica', mock.Mock(return_value=fake_replica)) self.mock_object(share_replicas.db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=[{'id': 'active1'}])) req = self.replicas_req if not is_admin else self.replicas_req_admin req_context = req.environ['manila.context'] res_dict = self.controller.create(req, body) self.assertEqual(expected_replica, res_dict['share_replica']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') def test_delete_invalid_replica(self): fake_exception = exception.ShareReplicaNotFound( replica_id='FAKE_REPLICA_ID') self.mock_object(share_replicas.db, 'share_replica_get', mock.Mock(side_effect=fake_exception)) mock_delete_replica_call = self.mock_object( share.API, 'delete_share_replica') self.assertRaises( exc.HTTPNotFound, self.controller.delete, self.replicas_req, 'FAKE_REPLICA_ID') self.assertFalse(mock_delete_replica_call.called) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'delete') def test_delete_exception(self): fake_replica_1 = self._get_fake_replica( share_id='FAKE_SHARE_ID', replica_state=constants.REPLICA_STATE_ACTIVE)[0] fake_replica_2 = self._get_fake_replica( share_id='FAKE_SHARE_ID', replica_state=constants.REPLICA_STATE_ACTIVE)[0] exception_type = exception.ReplicationException(reason='xyz') self.mock_object(share_replicas.db, 'share_replica_get', mock.Mock(return_value=fake_replica_1)) self.mock_object( share_replicas.db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[fake_replica_1, fake_replica_2])) self.mock_object(share.API, 'delete_share_replica', mock.Mock(side_effect=exception_type)) self.assertRaises(exc.HTTPBadRequest, self.controller.delete, self.replicas_req, 'FAKE_REPLICA_ID') self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'delete') def test_delete(self): fake_replica = self._get_fake_replica( 
share_id='FAKE_SHARE_ID', replica_state=constants.REPLICA_STATE_ACTIVE)[0] self.mock_object(share_replicas.db, 'share_replica_get', mock.Mock(return_value=fake_replica)) self.mock_object(share.API, 'delete_share_replica') resp = self.controller.delete( self.replicas_req, 'FAKE_REPLICA_ID') self.assertEqual(202, resp.status_code) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'delete') def test_promote_invalid_replica_id(self): body = {'promote': None} fake_exception = exception.ShareReplicaNotFound( replica_id='FAKE_REPLICA_ID') self.mock_object(share_replicas.db, 'share_replica_get', mock.Mock(side_effect=fake_exception)) self.assertRaises(exc.HTTPNotFound, self.controller.promote, self.replicas_req, 'FAKE_REPLICA_ID', body) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'promote') def test_promote_already_active(self): body = {'promote': None} replica, expected_replica = self._get_fake_replica( replica_state=constants.REPLICA_STATE_ACTIVE) self.mock_object(share_replicas.db, 'share_replica_get', mock.Mock(return_value=replica)) mock_api_promote_replica_call = self.mock_object( share.API, 'promote_share_replica') resp = self.controller.promote(self.replicas_req, replica['id'], body) self.assertEqual(200, resp.status_code) self.assertFalse(mock_api_promote_replica_call.called) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'promote') def test_promote_replication_exception(self): body = {'promote': None} replica, expected_replica = self._get_fake_replica( replica_state=constants.REPLICA_STATE_IN_SYNC) exception_type = exception.ReplicationException(reason='xyz') self.mock_object(share_replicas.db, 'share_replica_get', mock.Mock(return_value=replica)) mock_api_promote_replica_call = self.mock_object( share.API, 'promote_share_replica', mock.Mock(side_effect=exception_type)) self.assertRaises(exc.HTTPBadRequest, self.controller.promote, self.replicas_req, replica['id'], body) self.assertTrue(mock_api_promote_replica_call.called) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'promote') def test_promote_admin_required_exception(self): body = {'promote': None} replica, expected_replica = self._get_fake_replica( replica_state=constants.REPLICA_STATE_IN_SYNC) self.mock_object(share_replicas.db, 'share_replica_get', mock.Mock(return_value=replica)) mock_api_promote_replica_call = self.mock_object( share.API, 'promote_share_replica', mock.Mock(side_effect=exception.AdminRequired)) self.assertRaises(exc.HTTPForbidden, self.controller.promote, self.replicas_req, replica['id'], body) self.assertTrue(mock_api_promote_replica_call.called) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'promote') def test_promote(self): body = {'promote': None} replica, expected_replica = self._get_fake_replica( replica_state=constants.REPLICA_STATE_IN_SYNC) self.mock_object(share_replicas.db, 'share_replica_get', mock.Mock(return_value=replica)) mock_api_promote_replica_call = self.mock_object( share.API, 'promote_share_replica', mock.Mock(return_value=replica)) resp = self.controller.promote(self.replicas_req, replica['id'], body) self.assertEqual(expected_replica, resp['share_replica']) self.assertTrue(mock_api_promote_replica_call.called) self.mock_policy_check.assert_called_once_with( self.member_context, self.resource_name, 'promote') @ddt.data('index', 'detail', 'show', 'create', 'delete', 'promote', 
'reset_replica_state', 'reset_status', 'resync') def test_policy_not_authorized(self, method_name): method = getattr(self.controller, method_name) arguments = { 'id': 'FAKE_REPLICA_ID', 'body': {'FAKE_KEY': 'FAKE_VAL'}, } if method_name in ('index', 'detail'): arguments.clear() noauthexc = exception.PolicyNotAuthorized(action=six.text_type(method)) with mock.patch.object( policy, 'check_policy', mock.Mock(side_effect=noauthexc)): self.assertRaises( exc.HTTPForbidden, method, self.replicas_req, **arguments) @ddt.data('index', 'detail', 'show', 'create', 'delete', 'promote', 'reset_replica_state', 'reset_status', 'resync') def test_unsupported_microversion(self, method_name): unsupported_microversions = ('1.0', '2.2', '2.10') method = getattr(self.controller, method_name) arguments = { 'id': 'FAKE_REPLICA_ID', 'body': {'FAKE_KEY': 'FAKE_VAL'}, } if method_name in ('index', 'detail'): arguments.clear() for microversion in unsupported_microversions: req = fakes.HTTPRequest.blank( '/share-replicas', version=microversion, experimental=True) self.assertRaises(exception.VersionNotFoundForAPIMethod, method, req, **arguments) def _reset_status(self, context, replica, req, valid_code=202, status_attr='status', valid_status=None, body=None): if status_attr == 'status': action_name = 'reset_status' body = body or {action_name: {'status': constants.STATUS_ERROR}} else: action_name = 'reset_replica_state' body = body or { action_name: {'replica_state': constants.STATUS_ERROR}, } req.body = six.b(jsonutils.dumps(body)) req.environ['manila.context'] = context with mock.patch.object( policy, 'check_policy', fakes.mock_fake_admin_check): resp = req.get_response(fakes.app()) # validate response code and model status self.assertEqual(valid_code, resp.status_int) actual_replica = share_replicas.db.share_replica_get( context, replica['id']) self.assertEqual(valid_status, actual_replica[status_attr]) @ddt.data(*fakes.fixture_reset_replica_status_with_different_roles) @ddt.unpack def test_reset_status_with_different_roles(self, role, valid_code, valid_status): context = self._get_context(role) replica, action_req = self._create_replica_get_req() self._reset_status(context, replica, action_req, valid_code=valid_code, status_attr='status', valid_status=valid_status) @ddt.data( {'os-reset_status': {'x-status': 'bad'}}, {'os-reset_status': {'status': constants.STATUS_AVAILABLE}}, {'reset_status': {'x-status': 'bad'}}, {'reset_status': {'status': 'invalid'}}, ) def test_reset_status_invalid_body(self, body): replica, action_req = self._create_replica_get_req() self._reset_status(self.admin_context, replica, action_req, valid_code=400, status_attr='status', valid_status=constants.STATUS_AVAILABLE, body=body) @ddt.data(*fakes.fixture_reset_replica_state_with_different_roles) @ddt.unpack def test_reset_replica_state_with_different_roles(self, role, valid_code, valid_status): context = self._get_context(role) replica, action_req = self._create_replica_get_req() body = {'reset_replica_state': {'replica_state': valid_status}} self._reset_status(context, replica, action_req, valid_code=valid_code, status_attr='replica_state', valid_status=valid_status, body=body) @ddt.data( {'os-reset_replica_state': {'x-replica_state': 'bad'}}, {'os-reset_replica_state': {'replica_state': constants.STATUS_ERROR}}, {'reset_replica_state': {'x-replica_state': 'bad'}}, {'reset_replica_state': {'replica_state': constants.STATUS_AVAILABLE}}, ) def test_reset_replica_state_invalid_body(self, body): replica, action_req =
self._create_replica_get_req() self._reset_status(self.admin_context, replica, action_req, valid_code=400, status_attr='status', valid_status=constants.STATUS_AVAILABLE, body=body) def _force_delete(self, context, req, valid_code=202): body = {'force_delete': {}} req.environ['manila.context'] = context req.body = six.b(jsonutils.dumps(body)) with mock.patch.object( policy, 'check_policy', fakes.mock_fake_admin_check): resp = req.get_response(fakes.app()) # validate response self.assertEqual(valid_code, resp.status_int) @ddt.data(*fakes.fixture_force_delete_with_different_roles) @ddt.unpack def test_force_delete_replica_with_different_roles(self, role, resp_code, version): replica, req = self._create_replica_get_req() context = self._get_context(role) self._force_delete(context, req, valid_code=resp_code) def test_force_delete_missing_replica(self): replica, req = self._create_replica_get_req() share_replicas.db.share_replica_delete( self.admin_context, replica['id'], need_to_update_usages=False) self._force_delete(self.admin_context, req, valid_code=404) def test_resync_replica_not_found(self): replica, req = self._create_replica_get_req() share_replicas.db.share_replica_delete( self.admin_context, replica['id'], need_to_update_usages=False) share_api_call = self.mock_object(self.controller.share_api, 'update_share_replica') body = {'resync': {}} req.body = six.b(jsonutils.dumps(body)) req.environ['manila.context'] = self.admin_context with mock.patch.object( policy, 'check_policy', fakes.mock_fake_admin_check): resp = req.get_response(fakes.app()) self.assertEqual(404, resp.status_int) self.assertFalse(share_api_call.called) def test_resync_API_exception(self): replica, req = self._create_replica_get_req( replica_state=constants.REPLICA_STATE_OUT_OF_SYNC) self.mock_object(share_replicas.db, 'share_replica_get', mock.Mock(return_value=replica)) share_api_call = self.mock_object( share.API, 'update_share_replica', mock.Mock( side_effect=exception.InvalidHost(reason=''))) body = {'resync': None} req.body = six.b(jsonutils.dumps(body)) req.environ['manila.context'] = self.admin_context with mock.patch.object( policy, 'check_policy', fakes.mock_fake_admin_check): resp = req.get_response(fakes.app()) self.assertEqual(400, resp.status_int) share_api_call.assert_called_once_with(self.admin_context, replica) @ddt.data(constants.REPLICA_STATE_ACTIVE, constants.REPLICA_STATE_IN_SYNC, constants.REPLICA_STATE_OUT_OF_SYNC, constants.STATUS_ERROR) def test_resync(self, replica_state): replica, req = self._create_replica_get_req( replica_state=replica_state, host='skywalker@jedi#temple') share_api_call = self.mock_object( share.API, 'update_share_replica', mock.Mock(return_value=None)) body = {'resync': {}} req.body = six.b(jsonutils.dumps(body)) req.environ['manila.context'] = self.admin_context with mock.patch.object( policy, 'check_policy', fakes.mock_fake_admin_check): resp = req.get_response(fakes.app()) if replica_state == constants.REPLICA_STATE_ACTIVE: self.assertEqual(200, resp.status_int) self.assertFalse(share_api_call.called) else: self.assertEqual(202, resp.status_int) self.assertTrue(share_api_call.called) manila-10.0.0/manila/tests/api/v2/test_share_export_locations.py0000664000175000017500000002357213656750227024773 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from webob import exc from manila.api.openstack import api_version_request as api_version from manila.api.v2 import share_export_locations as export_locations from manila.common import constants from manila import context from manila import db from manila import exception from manila import policy from manila import test from manila.tests.api import fakes from manila.tests import db_utils @ddt.ddt class ShareExportLocationsAPITest(test.TestCase): def _get_request(self, version="2.9", use_admin_context=True): req = fakes.HTTPRequest.blank( '/v2/shares/%s/export_locations' % self.share_instance_id, version=version, use_admin_context=use_admin_context) return req def setUp(self): super(ShareExportLocationsAPITest, self).setUp() self.controller = ( export_locations.ShareExportLocationController()) self.resource_name = self.controller.resource_name self.ctxt = { 'admin': context.RequestContext('admin', 'fake', True), 'user': context.RequestContext('fake', 'fake'), } self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) self.share = db_utils.create_share() self.share_instance_id = self.share.instance.id self.req = self._get_request() paths = ['fake1/1/', 'fake2/2', 'fake3/3'] db.share_export_locations_update( self.ctxt['admin'], self.share_instance_id, paths, False) @ddt.data({'role': 'admin', 'version': '2.9'}, {'role': 'user', 'version': '2.9'}, {'role': 'admin', 'version': '2.13'}, {'role': 'user', 'version': '2.13'}) @ddt.unpack def test_list_and_show(self, role, version): summary_keys = ['id', 'path'] admin_summary_keys = summary_keys + [ 'share_instance_id', 'is_admin_only'] detail_keys = summary_keys + ['created_at', 'updated_at'] admin_detail_keys = admin_summary_keys + ['created_at', 'updated_at'] self._test_list_and_show(role, version, summary_keys, detail_keys, admin_summary_keys, admin_detail_keys) @ddt.data('admin', 'user') def test_list_and_show_with_preferred_flag(self, role): summary_keys = ['id', 'path', 'preferred'] admin_summary_keys = summary_keys + [ 'share_instance_id', 'is_admin_only'] detail_keys = summary_keys + ['created_at', 'updated_at'] admin_detail_keys = admin_summary_keys + ['created_at', 'updated_at'] self._test_list_and_show(role, '2.14', summary_keys, detail_keys, admin_summary_keys, admin_detail_keys) def _test_list_and_show(self, role, version, summary_keys, detail_keys, admin_summary_keys, admin_detail_keys): req = self._get_request(version=version, use_admin_context=(role == 'admin')) index_result = self.controller.index(req, self.share['id']) self.assertIn('export_locations', index_result) self.assertEqual(1, len(index_result)) self.assertEqual(3, len(index_result['export_locations'])) for index_el in index_result['export_locations']: self.assertIn('id', index_el) show_result = self.controller.show( req, self.share['id'], index_el['id']) self.assertIn('export_location', show_result) self.assertEqual(1, len(show_result)) show_el = show_result['export_location'] # Check summary keys in index result & detail keys in show result if role == 'admin': 
self.assertEqual(len(admin_summary_keys), len(index_el)) for key in admin_summary_keys: self.assertIn(key, index_el) self.assertEqual(len(admin_detail_keys), len(show_el)) for key in admin_detail_keys: self.assertIn(key, show_el) else: self.assertEqual(len(summary_keys), len(index_el)) for key in summary_keys: self.assertIn(key, index_el) self.assertEqual(len(detail_keys), len(show_el)) for key in detail_keys: self.assertIn(key, show_el) # Ensure keys common to index & show results have matching values for key in summary_keys: self.assertEqual(index_el[key], show_el[key]) def test_list_export_locations_share_not_found(self): self.assertRaises( exc.HTTPNotFound, self.controller.index, self.req, 'inexistent_share_id', ) def test_show_export_location_share_not_found(self): index_result = self.controller.index(self.req, self.share['id']) el_id = index_result['export_locations'][0]['id'] self.assertRaises( exc.HTTPNotFound, self.controller.show, self.req, 'inexistent_share_id', el_id, ) def test_show_export_location_not_found(self): self.assertRaises( exc.HTTPNotFound, self.controller.show, self.req, self.share['id'], 'inexistent_export_location', ) def test_get_admin_export_location(self): el_data = { 'path': '/admin/export/location', 'is_admin_only': True, 'metadata': {'foo': 'bar'}, } db.share_export_locations_update( self.ctxt['admin'], self.share_instance_id, el_data, True) index_result = self.controller.index(self.req, self.share['id']) el_id = index_result['export_locations'][0]['id'] # Not found for member member_req = self._get_request(use_admin_context=False) self.assertRaises( exc.HTTPForbidden, self.controller.show, member_req, self.share['id'], el_id, ) # Ok for admin el = self.controller.show(self.req, self.share['id'], el_id) for k, v in el.items(): self.assertEqual(v, el[k]) @ddt.data(*set(('2.46', '2.47', api_version._MAX_API_VERSION))) def test_list_export_locations_replicated_share(self, version): """Test the export locations API changes between 2.46 and 2.47 For API version <= 2.46, non-active replica export locations are included in the API response. They are not included in and beyond version 2.47. 
""" # Setup data share = db_utils.create_share( replication_type=constants.REPLICATION_TYPE_READABLE, replica_state=constants.REPLICA_STATE_ACTIVE) active_replica_id = share.instance.id exports = [ {'path': 'myshare.mydomain/active-replica-exp1', 'is_admin_only': False}, {'path': 'myshare.mydomain/active-replica-exp2', 'is_admin_only': False}, ] db.share_export_locations_update( self.ctxt['user'], active_replica_id, exports) # Replicas share_replica2 = db_utils.create_share_replica( share_id=share.id, replica_state=constants.REPLICA_STATE_IN_SYNC) share_replica3 = db_utils.create_share_replica( share_id=share.id, replica_state=constants.REPLICA_STATE_OUT_OF_SYNC) replica2_exports = [ {'path': 'myshare.mydomain/insync-replica-exp', 'is_admin_only': False} ] replica3_exports = [ {'path': 'myshare.mydomain/outofsync-replica-exp', 'is_admin_only': False} ] db.share_export_locations_update( self.ctxt['user'], share_replica2.id, replica2_exports) db.share_export_locations_update( self.ctxt['user'], share_replica3.id, replica3_exports) req = self._get_request(version=version) index_result = self.controller.index(req, share['id']) actual_paths = [el['path'] for el in index_result['export_locations']] if self.is_microversion_ge(version, '2.47'): self.assertEqual(2, len(index_result['export_locations'])) self.assertNotIn( 'myshare.mydomain/insync-replica-exp', actual_paths) self.assertNotIn( 'myshare.mydomain/outofsync-replica-exp', actual_paths) else: self.assertEqual(4, len(index_result['export_locations'])) self.assertIn('myshare.mydomain/insync-replica-exp', actual_paths) self.assertIn( 'myshare.mydomain/outofsync-replica-exp', actual_paths) @ddt.data('1.0', '2.0', '2.8') def test_list_with_unsupported_version(self, version): self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.index, self._get_request(version), self.share_instance_id, ) @ddt.data('1.0', '2.0', '2.8') def test_show_with_unsupported_version(self, version): index_result = self.controller.index(self.req, self.share['id']) self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.show, self._get_request(version), self.share['id'], index_result['export_locations'][0]['id'] ) manila-10.0.0/manila/tests/api/v2/test_share_groups.py0000664000175000017500000012632413656750227022715 0ustar zuulzuul00000000000000# Copyright 2015 Alex Meade # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import datetime from unittest import mock import ddt from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import uuidutils import six import webob from manila.api.openstack import wsgi import manila.api.v2.share_groups as share_groups from manila.common import constants from manila import context from manila import db from manila import exception from manila import policy from manila.share import share_types from manila.share_group import api as share_group_api from manila.share_group import share_group_types from manila import test from manila.tests.api import fakes from manila.tests import db_utils CONF = cfg.CONF SG_GRADUATION_VERSION = '2.55' @ddt.ddt class ShareGroupAPITest(test.TestCase): """Consistency Groups API Test suite.""" def setUp(self): super(ShareGroupAPITest, self).setUp() self.controller = share_groups.ShareGroupController() self.resource_name = self.controller.resource_name self.fake_share_type = {'id': six.text_type(uuidutils.generate_uuid())} self.fake_share_group_type = { 'id': six.text_type(uuidutils.generate_uuid())} self.api_version = '2.34' self.request = fakes.HTTPRequest.blank( '/share-groups', version=self.api_version, experimental=True) self.flags(transport_url='rabbit://fake:fake@mqhost:5672') self.admin_context = context.RequestContext('admin', 'fake', True) self.member_context = context.RequestContext('fake', 'fake') self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) self.context = self.request.environ['manila.context'] self.mock_object(share_group_types, 'get_default', mock.Mock(return_value=self.fake_share_group_type)) self.mock_object(share_types, 'get_default_share_type', mock.Mock(return_value=self.fake_share_type)) def _get_context(self, role): return getattr(self, '%s_context' % role) def _setup_share_group_data(self, share_group=None, version='2.31'): if share_group is None: share_group = db_utils.create_share_group( status=constants.STATUS_AVAILABLE) path = '/v2/fake/share-groups/%s/action' % share_group['id'] req = fakes.HTTPRequest.blank(path, script_name=path, version=version) req.headers[wsgi.API_VERSION_REQUEST_HEADER] = version req.headers[wsgi.EXPERIMENTAL_API_REQUEST_HEADER] = 'True' return share_group, req def _get_fake_share_group(self, ctxt=None, **values): if ctxt is None: ctxt = self.context share_group_db_dict = { 'id': 'fake_id', 'user_id': 'fakeuser', 'project_id': 'fakeproject', 'status': constants.STATUS_CREATING, 'name': 'fake name', 'description': 'fake description', 'host': None, 'availability_zone': None, 'consistent_snapshot_support': None, 'source_share_group_snapshot_id': None, 'share_group_type_id': self.fake_share_group_type.get('id'), 'share_network_id': uuidutils.generate_uuid(), 'share_server_id': uuidutils.generate_uuid(), 'share_types': [], 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), } share_group_db_dict.update(**values) expected_share_group = { 'id': share_group_db_dict['id'], 'project_id': share_group_db_dict['project_id'], 'status': share_group_db_dict['status'], 'name': share_group_db_dict['name'], 'description': share_group_db_dict['description'], 'host': share_group_db_dict['host'], 'availability_zone': share_group_db_dict['availability_zone'], 'consistent_snapshot_support': share_group_db_dict[ 'consistent_snapshot_support'], 'source_share_group_snapshot_id': share_group_db_dict[ 'source_share_group_snapshot_id'], 'share_group_type_id': share_group_db_dict['share_group_type_id'], 'share_network_id': 
share_group_db_dict['share_network_id'], 'share_server_id': share_group_db_dict['share_server_id'], 'share_types': [st['share_type_id'] for st in share_group_db_dict.get('share_types')], 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'links': mock.ANY, } if not ctxt.is_admin: del expected_share_group['share_server_id'] return share_group_db_dict, expected_share_group def _get_fake_simple_share_group(self, **values): share_group = {'id': 'fake_id', 'name': None} share_group.update(**values) expected_share_group = copy.deepcopy(share_group) expected_share_group['links'] = mock.ANY return share_group, expected_share_group def _get_fake_custom_request_and_context(self, microversion, experimental): req = fakes.HTTPRequest.blank( '/share-groups', version=microversion, experimental=experimental) req_context = req.environ['manila.context'] return req, req_context @ddt.data({'microversion': '2.34', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_share_group_create(self, microversion, experimental): fake, expected = self._get_fake_share_group() self.mock_object(share_types, 'get_default_share_type', mock.Mock(return_value=self.fake_share_type)) self.mock_object(self.controller.share_group_api, 'create', mock.Mock(return_value=fake)) req, req_context = self._get_fake_custom_request_and_context( microversion, experimental) body = {"share_group": {}} res_dict = self.controller.create(req, body) self.controller.share_group_api.create.assert_called_once_with( req_context, share_group_type_id=self.fake_share_group_type['id'], share_type_ids=[self.fake_share_type['id']]) self.assertEqual(expected, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') def test_group_create_invalid_group_snapshot_state(self): fake_snap_id = six.text_type(uuidutils.generate_uuid()) self.mock_object( self.controller.share_group_api, 'create', mock.Mock(side_effect=exception.InvalidShareGroupSnapshot( reason='bad status', ))) body = { "share_group": { "source_share_group_snapshot_id": fake_snap_id } } self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_no_default_share_type(self): fake_group, expected_group = self._get_fake_share_group() self.mock_object(share_types, 'get_default_share_type', mock.Mock(return_value=None)) self.mock_object(self.controller.share_group_api, 'create', mock.Mock(return_value=fake_group)) body = {"share_group": {}} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_no_default_group_type(self): fake_group, expected_group = self._get_fake_share_group() self.mock_object( share_group_types, 'get_default', mock.Mock(return_value=None)) self.mock_object( self.controller.share_group_api, 'create', mock.Mock(return_value=fake_group)) body = {"share_group": {}} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_group_type_specified(self): fake_share_group, expected_group = self._get_fake_share_group() self.mock_object( share_group_types, 'get_default', mock.Mock(return_value=None)) self.mock_object( 
self.controller.share_group_api, 'create', mock.Mock(return_value=fake_share_group)) body = { "share_group": { "share_group_type_id": self.fake_share_group_type.get('id'), } } self.controller.create(self.request, body) self.controller.share_group_api.create.assert_called_once_with( self.context, share_group_type_id=self.fake_share_group_type['id'], share_type_ids=[self.fake_share_type['id']]) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_invalid_group_type_specified(self): fake_share_group, expected_share_group = self._get_fake_share_group() self.mock_object( share_group_types, 'get_default', mock.Mock(return_value=None)) self.mock_object(self.controller.share_group_api, 'create', mock.Mock(return_value=fake_share_group)) body = {"share_group": {"group_type_id": "invalid"}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_az(self): fake_az_name = 'fake_az_name' fake_az_id = 'fake_az_id' fake_share_group, expected_share_group = self._get_fake_share_group( availability_zone_id=fake_az_id) self.mock_object( self.controller.share_group_api, 'create', mock.Mock(return_value=fake_share_group)) self.mock_object( share_groups.db, 'availability_zone_get', mock.Mock(return_value=type( 'FakeAZ', (object, ), { 'id': fake_az_id, 'name': fake_az_name, }))) body = {"share_group": {"availability_zone": fake_az_name}} res_dict = self.controller.create(self.request, body) self.controller.share_group_api.create.assert_called_once_with( self.context, availability_zone_id=fake_az_id, availability_zone=fake_az_name, share_group_type_id=self.fake_share_group_type['id'], share_type_ids=[self.fake_share_type['id']]) share_groups.db.availability_zone_get.assert_called_once_with( self.context, fake_az_name) self.assertEqual(expected_share_group, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_az_and_source_share_group_snapshot(self): fake_az_name = 'fake_az_name' fake_az_id = 'fake_az_id' fake_share_group, expected_share_group = self._get_fake_share_group( availability_zone_id=fake_az_id) self.mock_object( self.controller.share_group_api, 'create', mock.Mock(return_value=fake_share_group)) self.mock_object( share_groups.db, 'availability_zone_get', mock.Mock(return_value=type( 'FakeAZ', (object, ), { 'id': fake_az_id, 'name': fake_az_name, }))) body = {"share_group": { "availability_zone": fake_az_name, "source_share_group_snapshot_id": 'fake_sgs_id', }} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.controller.share_group_api.create.assert_not_called() share_groups.db.availability_zone_get.assert_not_called() self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_nonexistent_az(self): fake_az_name = 'fake_az_name' fake_az_id = 'fake_az_id' fake_share_group, expected_share_group = self._get_fake_share_group( availability_zone_id=fake_az_id) self.mock_object( self.controller.share_group_api, 'create', mock.Mock(return_value=fake_share_group)) self.mock_object( share_groups.db, 'availability_zone_get', mock.Mock( side_effect=exception.AvailabilityZoneNotFound(id=fake_az_id))) body = {"share_group": {"availability_zone": fake_az_name}} self.assertRaises( 
webob.exc.HTTPNotFound, self.controller.create, self.request, body) self.assertEqual(0, self.controller.share_group_api.create.call_count) share_groups.db.availability_zone_get.assert_called_once_with( self.context, fake_az_name) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_name(self): fake_name = 'fake_name' fake_share_group, expected_share_group = self._get_fake_share_group( name=fake_name) self.mock_object(self.controller.share_group_api, 'create', mock.Mock(return_value=fake_share_group)) body = {"share_group": {"name": fake_name}} res_dict = self.controller.create(self.request, body) self.controller.share_group_api.create.assert_called_once_with( self.context, name=fake_name, share_group_type_id=self.fake_share_group_type['id'], share_type_ids=[self.fake_share_type['id']]) self.assertEqual(expected_share_group, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_description(self): fake_description = 'fake_description' fake_share_group, expected_share_group = self._get_fake_share_group( description=fake_description) self.mock_object(share_types, 'get_default_share_type', mock.Mock(return_value=self.fake_share_type)) self.mock_object(self.controller.share_group_api, 'create', mock.Mock(return_value=fake_share_group)) body = {"share_group": {"description": fake_description}} res_dict = self.controller.create(self.request, body) self.controller.share_group_api.create.assert_called_once_with( self.context, description=fake_description, share_group_type_id=self.fake_share_group_type['id'], share_type_ids=[self.fake_share_type['id']]) self.assertEqual(expected_share_group, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_share_types(self): fake_share_types = [{"share_type_id": self.fake_share_type['id']}] fake_group, expected_group = self._get_fake_share_group( share_types=fake_share_types) self.mock_object(self.controller.share_group_api, 'create', mock.Mock(return_value=fake_group)) body = { "share_group": { "share_types": [self.fake_share_type['id']] } } res_dict = self.controller.create(self.request, body) self.controller.share_group_api.create.assert_called_once_with( self.context, share_group_type_id=self.fake_share_group_type['id'], share_type_ids=[self.fake_share_type['id']]) self.assertEqual(expected_group, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_sg_create_with_source_sg_snapshot_id_and_share_network(self): fake_snap_id = six.text_type(uuidutils.generate_uuid()) fake_net_id = six.text_type(uuidutils.generate_uuid()) self.mock_object(share_types, 'get_default_share_type', mock.Mock(return_value=self.fake_share_type)) mock_api_call = self.mock_object( self.controller.share_group_api, 'create') body = { "share_group": { "source_share_group_snapshot_id": fake_snap_id, "share_network_id": fake_net_id, } } self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.assertFalse(mock_api_call.called) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_source_sg_snapshot_id(self): fake_snap_id = six.text_type(uuidutils.generate_uuid()) fake_share_group, expected_group = self._get_fake_share_group( 
source_share_group_snapshot_id=fake_snap_id) self.mock_object(share_types, 'get_default_share_type', mock.Mock(return_value=self.fake_share_type)) self.mock_object(self.controller.share_group_api, 'create', mock.Mock(return_value=fake_share_group)) body = { "share_group": { "source_share_group_snapshot_id": fake_snap_id, } } res_dict = self.controller.create(self.request, body) self.controller.share_group_api.create.assert_called_once_with( self.context, share_group_type_id=self.fake_share_group_type['id'], source_share_group_snapshot_id=fake_snap_id) self.assertEqual(expected_group, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_share_network_id(self): fake_net_id = six.text_type(uuidutils.generate_uuid()) fake_group, expected_group = self._get_fake_share_group( share_network_id=fake_net_id) self.mock_object(share_types, 'get_default_share_type', mock.Mock(return_value=self.fake_share_type)) self.mock_object(self.controller.share_group_api, 'create', mock.Mock(return_value=fake_group)) body = { "share_group": { "share_network_id": fake_net_id, } } res_dict = self.controller.create(self.request, body) self.controller.share_group_api.create.assert_called_once_with( self.context, share_network_id=fake_net_id, share_group_type_id=self.fake_share_group_type['id'], share_type_ids=mock.ANY) self.assertEqual(expected_group, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_sg_create_no_default_share_type_with_share_group_snapshot(self): fake_snap_id = six.text_type(uuidutils.generate_uuid()) fake, expected = self._get_fake_share_group() self.mock_object(share_types, 'get_default_share_type', mock.Mock(return_value=None)) self.mock_object(self.controller.share_group_api, 'create', mock.Mock(return_value=fake)) body = { "share_group": { "source_share_group_snapshot_id": fake_snap_id, } } res_dict = self.controller.create(self.request, body) self.controller.share_group_api.create.assert_called_once_with( self.context, share_group_type_id=self.fake_share_group_type['id'], source_share_group_snapshot_id=fake_snap_id) self.assertEqual(expected, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_name_and_description(self): fake_name = 'fake_name' fake_description = 'fake_description' fake_group, expected_group = self._get_fake_share_group( name=fake_name, description=fake_description) self.mock_object(share_types, 'get_default_share_type', mock.Mock(return_value=self.fake_share_type)) self.mock_object(self.controller.share_group_api, 'create', mock.Mock(return_value=fake_group)) body = { "share_group": { "name": fake_name, "description": fake_description } } res_dict = self.controller.create(self.request, body) self.controller.share_group_api.create.assert_called_once_with( self.context, name=fake_name, description=fake_description, share_group_type_id=self.fake_share_group_type['id'], share_type_ids=[self.fake_share_type['id']]) self.assertEqual(expected_group, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_invalid_body(self): body = {"not_group": {}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def 
test_group_create_invalid_body_share_types_and_source_group_snapshot( self): body = { "share_group": { "share_types": [], "source_share_group_snapshot_id": "", } } self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_source_group_snapshot_not_in_available(self): fake_snap_id = six.text_type(uuidutils.generate_uuid()) body = { "share_group": { "source_share_group_snapshot_id": fake_snap_id, } } self.mock_object(self.controller.share_group_api, 'create', mock.Mock( side_effect=exception.InvalidShareGroupSnapshot(reason='blah'))) self.assertRaises( webob.exc.HTTPConflict, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_source_group_snapshot_does_not_exist(self): fake_snap_id = six.text_type(uuidutils.generate_uuid()) body = { "share_group": {"source_share_group_snapshot_id": fake_snap_id} } self.mock_object( self.controller.share_group_api, 'create', mock.Mock(side_effect=exception.ShareGroupSnapshotNotFound( share_group_snapshot_id=fake_snap_id))) self.assertRaises( webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_source_group_snapshot_not_a_uuid(self): fake_snap_id = "Not a uuid" body = { "share_group": { "source_share_group_snapshot_id": fake_snap_id, } } self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_share_network_id_not_a_uuid(self): fake_net_id = "Not a uuid" body = {"share_group": {"share_network_id": fake_net_id}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_invalid_body_share_types_not_a_list(self): body = {"share_group": {"share_types": ""}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_invalid_body_invalid_field(self): body = {"share_group": {"unknown_field": ""}} exc = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.assertIn('unknown_field', six.text_type(exc)) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_invalid_share_types_field(self): body = {"share_group": {"share_types": 'iamastring'}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') def test_share_group_create_with_invalid_share_types_field_not_uuids(self): body = {"share_group": {"share_types": ['iamastring']}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.request, body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'create') @ddt.data({'microversion': '2.34', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_share_group_update_with_name_and_description( self, microversion, 
experimental): fake_name = 'fake_name' fake_description = 'fake_description' fake_group, expected_group = self._get_fake_share_group( name=fake_name, description=fake_description) self.mock_object(self.controller.share_group_api, 'get', mock.Mock(return_value=fake_group)) self.mock_object(self.controller.share_group_api, 'update', mock.Mock(return_value=fake_group)) req, req_context = self._get_fake_custom_request_and_context( microversion, experimental) body = { "share_group": { "name": fake_name, "description": fake_description, } } res_dict = self.controller.update(req, fake_group['id'], body) self.controller.share_group_api.update.assert_called_once_with( req_context, fake_group, {"name": fake_name, "description": fake_description}) self.assertEqual(expected_group, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'update') def test_share_group_update_group_not_found(self): body = {"share_group": {}} self.mock_object(self.controller.share_group_api, 'get', mock.Mock(side_effect=exception.NotFound)) self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.request, 'fake_id', body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'update') def test_share_group_update_invalid_body(self): body = {"not_group": {}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.request, 'fake_id', body) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'update') def test_share_group_update_invalid_body_invalid_field(self): body = {"share_group": {"unknown_field": ""}} exc = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.request, 'fake_id', body) self.assertIn('unknown_field', six.text_type(exc)) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'update') def test_share_group_update_invalid_body_readonly_field(self): body = {"share_group": {"share_types": []}} exc = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.request, 'fake_id', body) self.assertIn('share_types', six.text_type(exc)) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'update') @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_share_group_list_index(self, microversion, experimental): fake, expected = self._get_fake_simple_share_group() self.mock_object( share_group_api.API, 'get_all', mock.Mock(return_value=[fake])) req, req_context = self._get_fake_custom_request_and_context( microversion, experimental) res_dict = self.controller.index(req) self.assertEqual([expected], res_dict['share_groups']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_share_group_list_index_no_groups(self): self.mock_object( share_group_api.API, 'get_all', mock.Mock(return_value=[])) res_dict = self.controller.index(self.request) self.assertEqual([], res_dict['share_groups']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'get_all') def test_share_group_list_index_with_limit(self): fake, expected = self._get_fake_simple_share_group() fake2, expected2 = self._get_fake_simple_share_group(id="fake_id2") self.mock_object( share_group_api.API, 'get_all', mock.Mock(return_value=[fake, fake2])) req = fakes.HTTPRequest.blank( '/share-groups?limit=1', version=self.api_version, experimental=True) req_context = 
req.environ['manila.context'] res_dict = self.controller.index(req) self.assertEqual(1, len(res_dict['share_groups'])) self.assertEqual([expected], res_dict['share_groups']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_share_group_list_index_with_limit_and_offset(self): fake, expected = self._get_fake_simple_share_group() fake2, expected2 = self._get_fake_simple_share_group( id="fake_id2") self.mock_object(share_group_api.API, 'get_all', mock.Mock(return_value=[fake, fake2])) req = fakes.HTTPRequest.blank( '/share-groups?limit=1&offset=1', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.index(req) self.assertEqual(1, len(res_dict['share_groups'])) self.assertEqual([expected2], res_dict['share_groups']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_share_group_list_index_with_like_filter(self): fake, expected = self._get_fake_simple_share_group( name='fake_1', description='fake_ds_1') fake2, expected2 = self._get_fake_simple_share_group( name='fake_2', description='fake_ds_2') self.mock_object(share_group_api.API, 'get_all', mock.Mock(return_value=[fake, fake2])) req = fakes.HTTPRequest.blank( '/share-groups?name~=fake&description~=fake', version='2.36', experimental=True) req_context = req.environ['manila.context'] res_dict = self.controller.index(req) expected.pop('description') expected2.pop('description') self.assertEqual(2, len(res_dict['share_groups'])) self.assertEqual([expected, expected2], res_dict['share_groups']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') @ddt.data({'microversion': '2.34', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_share_group_list_detail(self, microversion, experimental): fake, expected = self._get_fake_share_group() self.mock_object( share_group_api.API, 'get_all', mock.Mock(return_value=[fake])) req, req_context = self._get_fake_custom_request_and_context( microversion, experimental) res_dict = self.controller.detail(req) self.assertEqual([expected], res_dict['share_groups']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_share_group_list_detail_no_groups(self): self.mock_object( share_group_api.API, 'get_all', mock.Mock(return_value=[])) res_dict = self.controller.detail(self.request) self.assertEqual([], res_dict['share_groups']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'get_all') def test_share_group_list_detail_with_limit(self): req = fakes.HTTPRequest.blank('/share-groups?limit=1', version=self.api_version, experimental=True) req_context = req.environ['manila.context'] fake_group, expected_group = self._get_fake_share_group( ctxt=req_context) fake_group2, expected_group2 = self._get_fake_share_group( ctxt=req_context, id="fake_id2") self.mock_object(share_group_api.API, 'get_all', mock.Mock(return_value=[fake_group, fake_group2])) res_dict = self.controller.detail(req) self.assertEqual(1, len(res_dict['share_groups'])) self.assertEqual([expected_group], res_dict['share_groups']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') def test_share_group_list_detail_with_limit_and_offset(self): req = fakes.HTTPRequest.blank('/share-groups?limit=1&offset=1', version=self.api_version, experimental=True) req_context = 
req.environ['manila.context'] fake_group, expected_group = self._get_fake_share_group( ctxt=req_context) fake_group2, expected_group2 = self._get_fake_share_group( id="fake_id2", ctxt=req_context) self.mock_object(share_group_api.API, 'get_all', mock.Mock(return_value=[fake_group, fake_group2])) res_dict = self.controller.detail(req) self.assertEqual(1, len(res_dict['share_groups'])) self.assertEqual([expected_group2], res_dict['share_groups']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get_all') @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_share_group_delete(self, microversion, experimental): fake_group, expected_group = self._get_fake_share_group() self.mock_object(share_group_api.API, 'get', mock.Mock(return_value=fake_group)) self.mock_object(share_group_api.API, 'delete') req, req_context = self._get_fake_custom_request_and_context( microversion, experimental) res = self.controller.delete(req, fake_group['id']) self.assertEqual(202, res.status_code) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'delete') def test_share_group_delete_group_not_found(self): fake_group, expected_group = self._get_fake_share_group() self.mock_object(share_group_api.API, 'get', mock.Mock(side_effect=exception.NotFound)) self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.request, fake_group['id']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'delete') def test_share_group_delete_in_conflicting_status(self): fake, expected = self._get_fake_share_group() self.mock_object( share_group_api.API, 'get', mock.Mock(return_value=fake)) self.mock_object(share_group_api.API, 'delete', mock.Mock( side_effect=exception.InvalidShareGroup(reason='blah'))) self.assertRaises( webob.exc.HTTPConflict, self.controller.delete, self.request, fake['id']) self.mock_policy_check.assert_called_once_with( self.context, self.resource_name, 'delete') @ddt.data({'microversion': '2.34', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_share_group_show(self, microversion, experimental): fake, expected = self._get_fake_share_group() self.mock_object( share_group_api.API, 'get', mock.Mock(return_value=fake)) req = fakes.HTTPRequest.blank( '/share-groupss/%s' % fake['id'], version=microversion, experimental=experimental) req_context = req.environ['manila.context'] res_dict = self.controller.show(req, fake['id']) self.assertEqual(expected, res_dict['share_group']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'get') def test_share_group_show_as_admin(self): req = fakes.HTTPRequest.blank( '/share-groupss/my_group_id', version=self.api_version, experimental=True) admin_context = req.environ['manila.context'].elevated() req.environ['manila.context'] = admin_context fake_group, expected_group = self._get_fake_share_group( ctxt=admin_context, id='my_group_id') self.mock_object(share_group_api.API, 'get', mock.Mock(return_value=fake_group)) res_dict = self.controller.show(req, fake_group['id']) self.assertEqual(expected_group, res_dict['share_group']) self.assertIsNotNone(res_dict['share_group']['share_server_id']) self.mock_policy_check.assert_called_once_with( admin_context, self.resource_name, 'get') def test_share_group_show_group_not_found(self): req = fakes.HTTPRequest.blank( '/share-groupss/myfakegroup', 
            version=self.api_version, experimental=True)
        req_context = req.environ['manila.context']
        fake, expected = self._get_fake_share_group(
            ctxt=req_context, id='myfakegroup')
        self.mock_object(share_group_api.API, 'get',
                         mock.Mock(side_effect=exception.NotFound))
        self.assertRaises(
            webob.exc.HTTPNotFound, self.controller.show, req, fake['id'])
        self.mock_policy_check.assert_called_once_with(
            req_context, self.resource_name, 'get')

    @ddt.data({'microversion': '2.31', 'experimental': True},
              {'microversion': SG_GRADUATION_VERSION, 'experimental': False})
    @ddt.unpack
    def test__reset_status_call(self, microversion, experimental):
        self.mock_object(self.controller, '_reset_status')
        req, _junk = self._get_fake_custom_request_and_context(
            microversion, experimental)
        sg_id = 'fake'
        body = {'reset_status': {'status': constants.STATUS_ERROR}}

        self.controller.share_group_reset_status(req, sg_id, body)

        self.controller._reset_status.assert_called_once_with(req, sg_id, body)

    @ddt.data(*fakes.fixture_reset_status_with_different_roles)
    @ddt.unpack
    def test_share_groups_reset_status_with_different_roles(
            self, role, valid_code, valid_status, version):
        ctxt = self._get_context(role)
        share_group, req = self._setup_share_group_data()
        action_name = 'reset_status'
        body = {action_name: {'status': constants.STATUS_ERROR}}
        req.method = 'POST'
        req.headers['content-type'] = 'application/json'
        req.body = six.b(jsonutils.dumps(body))
        req.headers['X-Openstack-Manila-Api-Version'] = self.api_version
        req.environ['manila.context'] = ctxt

        with mock.patch.object(
                policy, 'check_policy', fakes.mock_fake_admin_check):
            resp = req.get_response(fakes.app())

        # validate response code and model status
        self.assertEqual(valid_code, resp.status_int)
        actual_model = db.share_group_get(ctxt, share_group['id'])
        self.assertEqual(valid_status, actual_model['status'])

    @ddt.data(*fakes.fixture_force_delete_with_different_roles)
    @ddt.unpack
    def test_share_group_force_delete_with_different_roles(self, role,
                                                           resp_code,
                                                           version):
        ctxt = self._get_context(role)
        share_group, req = self._setup_share_group_data()
        req.method = 'POST'
        req.headers['content-type'] = 'application/json'
        action_name = 'force_delete'
        body = {action_name: {}}
        req.body = six.b(jsonutils.dumps(body))
        req.headers['X-Openstack-Manila-Api-Version'] = self.api_version
        req.environ['manila.context'] = ctxt

        with mock.patch.object(
                policy, 'check_policy', fakes.mock_fake_admin_check):
            resp = req.get_response(fakes.app())

        # validate response
        self.assertEqual(resp_code, resp.status_int)

    @ddt.data({'microversion': '2.31', 'experimental': True},
              {'microversion': SG_GRADUATION_VERSION, 'experimental': False})
    @ddt.unpack
    def test__force_delete_call(self, microversion, experimental):
        self.mock_object(self.controller, '_force_delete')
        req, _junk = self._get_fake_custom_request_and_context(
            microversion, experimental)
        sg_id = 'fake'
        body = {'force_delete': {}}

        self.controller.share_group_force_delete(req, sg_id, body)

        self.controller._force_delete.assert_called_once_with(req, sg_id,
                                                              body)
manila-10.0.0/manila/tests/api/v2/test_share_instances.py0000664000175000017500000003435113656750227023363 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from unittest import mock

import ddt
from oslo_config import cfg
from oslo_serialization import jsonutils
import six
from webob import exc as webob_exc

from manila.api.openstack import api_version_request
from manila.api.v2 import share_instances
from manila.common import constants
from manila import context
from manila import db
from manila import exception
from manila import policy
from manila import test
from manila.tests.api import fakes
from manila.tests import db_utils

CONF = cfg.CONF


@ddt.ddt
class ShareInstancesAPITest(test.TestCase):
    """Share instances API Test."""

    def setUp(self):
        super(ShareInstancesAPITest, self).setUp()
        self.controller = share_instances.ShareInstancesController()
        self.resource_name = self.controller.resource_name
        self.mock_policy_check = self.mock_object(
            policy, 'check_policy', mock.Mock(return_value=True))
        self.admin_context = context.RequestContext('admin', 'fake', True)
        self.member_context = context.RequestContext('fake', 'fake')

    def _get_context(self, role):
        return getattr(self, '%s_context' % role)

    def _setup_share_instance_data(self, instance=None, version='2.7'):
        if instance is None:
            instance = db_utils.create_share(
                status=constants.STATUS_AVAILABLE, size='1').instance
        path = '/v2/fake/share_instances/%s/action' % instance['id']
        req = fakes.HTTPRequest.blank(path, script_name=path, version=version)
        return instance, req

    def _get_request(self, uri, context=None, version="2.3"):
        if context is None:
            context = self.admin_context
        req = fakes.HTTPRequest.blank(uri, version=version)
        req.environ['manila.context'] = context
        return req

    def _validate_ids_in_share_instances_list(self, expected, actual):
        self.assertEqual(len(expected), len(actual))
        self.assertEqual([i['id'] for i in expected],
                         [i['id'] for i in actual])

    @ddt.data("2.3", "2.34", "2.35")
    def test_index(self, version):
        url = '/share_instances'
        if (api_version_request.APIVersionRequest(version) >=
                api_version_request.APIVersionRequest('2.35')):
            url += "?export_location_path=/admin/export/location"
        req = self._get_request(url, version=version)
        req_context = req.environ['manila.context']
        share_instances_count = 3
        test_instances = [
            db_utils.create_share(size=s + 1).instance
            for s in range(0, share_instances_count)
        ]
        db.share_export_locations_update(
            self.admin_context, test_instances[0]['id'],
            '/admin/export/location', False)

        actual_result = self.controller.index(req)

        if (api_version_request.APIVersionRequest(version) >=
                api_version_request.APIVersionRequest('2.35')):
            test_instances = test_instances[:1]
        self._validate_ids_in_share_instances_list(
            test_instances, actual_result['share_instances'])
        self.mock_policy_check.assert_called_once_with(
            req_context, self.resource_name, 'index')

    def test_index_with_limit(self):
        req = self._get_request('/share_instances')
        req_context = req.environ['manila.context']
        share_instances_count = 3
        test_instances = [
            db_utils.create_share(size=s + 1).instance
            for s in range(0, share_instances_count)
        ]
        expect_links = [
            {
                'href': (
                    'http://localhost/v1/fake/share_instances?'
'limit=3&marker=%s' % test_instances[2]['id']), 'rel': 'next', } ] url = 'share_instances?limit=3' req = self._get_request(url) actual_result = self.controller.index(req) self._validate_ids_in_share_instances_list( test_instances, actual_result['share_instances']) self.assertEqual(expect_links, actual_result['share_instances_links']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'index') @ddt.data('2.3', '2.54') def test_show(self, version): test_instance = db_utils.create_share(size=1).instance id = test_instance['id'] actual_result = self.controller.show( self._get_request('fake', version=version), id) self.assertEqual(id, actual_result['share_instance']['id']) if (api_version_request.APIVersionRequest(version) >= api_version_request.APIVersionRequest("2.54")): self.assertIn("progress", actual_result['share_instance']) else: self.assertNotIn("progress", actual_result['share_instance']) self.mock_policy_check.assert_called_once_with( self.admin_context, self.resource_name, 'show') def test_show_with_export_locations(self): test_instance = db_utils.create_share(size=1).instance req = self._get_request('fake', version="2.8") id = test_instance['id'] actual_result = self.controller.show(req, id) self.assertEqual(id, actual_result['share_instance']['id']) self.assertIn("export_location", actual_result['share_instance']) self.assertIn("export_locations", actual_result['share_instance']) self.mock_policy_check.assert_called_once_with( self.admin_context, self.resource_name, 'show') def test_show_without_export_locations(self): test_instance = db_utils.create_share(size=1).instance req = self._get_request('fake', version="2.9") id = test_instance['id'] actual_result = self.controller.show(req, id) self.assertEqual(id, actual_result['share_instance']['id']) self.assertNotIn("export_location", actual_result['share_instance']) self.assertNotIn("export_locations", actual_result['share_instance']) self.mock_policy_check.assert_called_once_with( self.admin_context, self.resource_name, 'show') def test_show_with_replica_state(self): test_instance = db_utils.create_share(size=1).instance req = self._get_request('fake', version="2.11") id = test_instance['id'] actual_result = self.controller.show(req, id) self.assertEqual(id, actual_result['share_instance']['id']) self.assertIn("replica_state", actual_result['share_instance']) self.mock_policy_check.assert_called_once_with( self.admin_context, self.resource_name, 'show') @ddt.data("2.3", "2.8", "2.9", "2.11") def test_get_share_instances(self, version): test_share = db_utils.create_share(size=1) id = test_share['id'] req = self._get_request('fake', version=version) req_context = req.environ['manila.context'] share_policy_check_call = mock.call( req_context, 'share', 'get', mock.ANY) get_instances_policy_check_call = mock.call( req_context, 'share_instance', 'index') actual_result = self.controller.get_share_instances(req, id) self._validate_ids_in_share_instances_list( [test_share.instance], actual_result['share_instances'] ) self.assertEqual(1, len(actual_result.get("share_instances", 0))) for instance in actual_result["share_instances"]: if (api_version_request.APIVersionRequest(version) > api_version_request.APIVersionRequest("2.8")): assert_method = self.assertNotIn else: assert_method = self.assertIn assert_method("export_location", instance) assert_method("export_locations", instance) if (api_version_request.APIVersionRequest(version) > api_version_request.APIVersionRequest("2.10")): self.assertIn("replica_state", 
instance) self.mock_policy_check.assert_has_calls([ get_instances_policy_check_call, share_policy_check_call]) @ddt.data('show', 'get_share_instances') def test_not_found(self, target_method_name): method = getattr(self.controller, target_method_name) action = (target_method_name if target_method_name == 'show' else 'index') self.assertRaises(webob_exc.HTTPNotFound, method, self._get_request('fake'), 'fake') self.mock_policy_check.assert_called_once_with( self.admin_context, self.resource_name, action) @ddt.data(('show', 2), ('get_share_instances', 2), ('index', 1)) @ddt.unpack def test_access(self, target_method_name, args_count): user_context = context.RequestContext('fake', 'fake') req = self._get_request('fake', user_context) policy_exception = exception.PolicyNotAuthorized( action=target_method_name) target_method = getattr(self.controller, target_method_name) args = [i for i in range(1, args_count)] with mock.patch.object(policy, 'check_policy', mock.Mock( side_effect=policy_exception)): self.assertRaises( webob_exc.HTTPForbidden, target_method, req, *args) def _reset_status(self, ctxt, model, req, db_access_method, valid_code, valid_status=None, body=None, version='2.7'): if float(version) > 2.6: action_name = 'reset_status' else: action_name = 'os-reset_status' if body is None: body = {action_name: {'status': constants.STATUS_ERROR}} req.method = 'POST' req.headers['content-type'] = 'application/json' req.headers['X-Openstack-Manila-Api-Version'] = version req.body = six.b(jsonutils.dumps(body)) req.environ['manila.context'] = ctxt with mock.patch.object( policy, 'check_policy', fakes.mock_fake_admin_check): resp = req.get_response(fakes.app()) # validate response code and model status self.assertEqual(valid_code, resp.status_int) actual_model = db_access_method(ctxt, model['id']) self.assertEqual(valid_status, actual_model['status']) @ddt.data(*fakes.fixture_reset_status_with_different_roles) @ddt.unpack def test_share_instances_reset_status_with_different_roles(self, role, valid_code, valid_status, version): ctxt = self._get_context(role) instance, req = self._setup_share_instance_data(version=version) self._reset_status(ctxt, instance, req, db.share_instance_get, valid_code, valid_status, version=version) @ddt.data(*fakes.fixture_valid_reset_status_body) @ddt.unpack def test_share_instance_reset_status(self, body, version): instance, req = self._setup_share_instance_data() req.headers['X-Openstack-Manila-Api-Version'] = version if float(version) > 2.6: state = body['reset_status']['status'] else: state = body['os-reset_status']['status'] self._reset_status(self.admin_context, instance, req, db.share_instance_get, 202, state, body, version=version) @ddt.data( ({'os-reset_status': {'x-status': 'bad'}}, '2.6'), ({'os-reset_status': {'status': 'invalid'}}, '2.6'), ({'reset_status': {'x-status': 'bad'}}, '2.7'), ({'reset_status': {'status': 'invalid'}}, '2.7'), ) @ddt.unpack def test_share_instance_invalid_reset_status_body(self, body, version): instance, req = self._setup_share_instance_data() req.headers['X-Openstack-Manila-Api-Version'] = version self._reset_status(self.admin_context, instance, req, db.share_instance_get, 400, constants.STATUS_AVAILABLE, body, version=version) def _force_delete(self, ctxt, model, req, db_access_method, valid_code, check_model_in_db=False, version='2.7'): if float(version) > 2.6: action_name = 'force_delete' else: action_name = 'os-force_delete' body = {action_name: {'status': constants.STATUS_ERROR}} req.method = 'POST' req.headers['content-type'] 
= 'application/json' req.headers['X-Openstack-Manila-Api-Version'] = version req.body = six.b(jsonutils.dumps(body)) req.environ['manila.context'] = ctxt with mock.patch.object( policy, 'check_policy', fakes.mock_fake_admin_check): resp = req.get_response(fakes.app()) # validate response self.assertEqual(valid_code, resp.status_int) if valid_code == 202 and check_model_in_db: self.assertRaises(exception.NotFound, db_access_method, ctxt, model['id']) @ddt.data(*fakes.fixture_force_delete_with_different_roles) @ddt.unpack def test_instance_force_delete_with_different_roles(self, role, resp_code, version): instance, req = self._setup_share_instance_data(version=version) ctxt = self._get_context(role) self._force_delete(ctxt, instance, req, db.share_instance_get, resp_code, version=version) def test_instance_force_delete_missing(self): instance, req = self._setup_share_instance_data( instance={'id': 'fake'}) ctxt = self._get_context('admin') self._force_delete(ctxt, instance, req, db.share_instance_get, 404) manila-10.0.0/manila/tests/api/v2/test_share_networks.py0000664000175000017500000012503513656750227023250 0ustar zuulzuul00000000000000# Copyright 2014 NetApp # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from unittest import mock import ddt from oslo_db import exception as db_exception from oslo_utils import timeutils from six.moves.urllib import parse from webob import exc as webob_exc from manila.api import common from manila.api.openstack import api_version_request as api_version from manila.api.v2 import share_networks from manila.db import api as db_api from manila import exception from manila import quota from manila import test from manila.tests.api import fakes fake_share_network_subnet = { 'id': 'fake subnet id', 'neutron_net_id': 'fake net id', 'neutron_subnet_id': 'fake subnet id', 'network_type': 'vlan', 'segmentation_id': 1000, 'cidr': '10.0.0.0/24', 'ip_version': 4, 'share_network_id': 'fake network id', 'availability_zone_id': None, 'share_servers': [], 'availability_zone': [] } fake_share_network = { 'id': 'fake network id', 'project_id': 'fake project', 'created_at': timeutils.parse_strtime('2002-02-02', fmt="%Y-%m-%d"), 'updated_at': None, 'name': 'fake name', 'description': 'fake description', 'security_services': [], 'share_network_subnets': [] } fake_share_network_shortened = { 'id': 'fake network id', 'name': 'fake name', } fake_share_network_with_ss = { 'id': 'sn-id', 'project_id': 'fake', 'created_at': timeutils.parse_strtime('2001-01-01', fmt="%Y-%m-%d"), 'updated_at': None, 'name': 'test-sn', 'description': 'fake description', 'share_network_subnets': [], 'security_services': [{'id': 'fake-ss-id'}] } fake_sn_with_ss_shortened = { 'id': 'sn-id', 'name': 'test-sn', } QUOTAS = quota.QUOTAS @ddt.ddt class ShareNetworkAPITest(test.TestCase): def setUp(self): super(ShareNetworkAPITest, self).setUp() self.controller = share_networks.ShareNetworkController() self.req = fakes.HTTPRequest.blank('/share-networks') self.body = 
{share_networks.RESOURCE_NAME: {'name': 'fake name'}} self.context = self.req.environ['manila.context'] def _check_share_network_view_shortened(self, view, share_nw): self.assertEqual(share_nw['id'], view['id']) self.assertEqual(share_nw['name'], view['name']) def _check_share_network_view(self, view, share_nw): self.assertEqual(share_nw['id'], view['id']) self.assertEqual(share_nw['project_id'], view['project_id']) self.assertEqual(share_nw['created_at'], view['created_at']) self.assertEqual(share_nw['updated_at'], view['updated_at']) self.assertEqual(share_nw['name'], view['name']) self.assertEqual(share_nw['description'], view['description']) self.assertNotIn('shares', view) self.assertNotIn('network_allocations', view) self.assertNotIn('security_services', view) def _setup_body_for_create_test(self, data): data.update({'user_id': 'fake_user_id'}) body = {share_networks.RESOURCE_NAME: data} return body @ddt.data( {'neutron_net_id': 'fake', 'neutron_subnet_id': 'fake'}) def test_create_valid_cases(self, data): body = self._setup_body_for_create_test(data) result = self.controller.create(self.req, body) data.pop('user_id', None) for k, v in data.items(): self.assertIn(data[k], result['share_network'][k]) @ddt.data( {'neutron_net_id': 'fake', 'neutron_subnet_id': 'fake', 'availability_zone': 'fake'}) def test_create_valid_cases_upper_2_50(self, data): req = fakes.HTTPRequest.blank('/share-networks', version="2.51") context = req.environ['manila.context'] body = self._setup_body_for_create_test(data) fake_az = { 'name': 'fake', 'id': 'fake_id' } self.mock_object(db_api, 'availability_zone_get', mock.Mock(return_value=fake_az)) result = self.controller.create(req, body) result_subnet = result['share_network']['share_network_subnets'][0] data.pop('user_id', None) data.pop('project_id', None) data.pop('availability_zone_id', None) data.pop('id', None) data['availability_zone'] = result_subnet['availability_zone'] for k, v in data.items(): self.assertIn(k, result_subnet.keys()) db_api.availability_zone_get.assert_called_once_with( context, fake_az['name'] ) @ddt.data( {'nova_net_id': 'foo', 'neutron_net_id': 'bar'}, {'nova_net_id': 'foo', 'neutron_subnet_id': 'quuz'}, {'nova_net_id': 'foo', 'neutron_net_id': 'bar', 'neutron_subnet_id': 'quuz'}, {'nova_net_id': 'fake_nova_net_id'}, {'neutron_net_id': 'bar'}, {'neutron_subnet_id': 'quuz'}) def test_create_invalid_cases(self, data): data.update({'user_id': 'fake_user_id'}) body = {share_networks.RESOURCE_NAME: data} self.assertRaises( webob_exc.HTTPBadRequest, self.controller.create, self.req, body) @ddt.data( {'name': 'new fake name'}, {'description': 'new fake description'}, {'name': 'new fake name', 'description': 'new fake description'}) def test_update_valid_cases(self, data): body = {share_networks.RESOURCE_NAME: {'user_id': 'fake_user'}} created = self.controller.create(self.req, body) body = {share_networks.RESOURCE_NAME: data} result = self.controller.update( self.req, created['share_network']['id'], body) for k, v in data.items(): self.assertIn(data[k], result['share_network'][k]) self._check_share_network_view( result[share_networks.RESOURCE_NAME], result['share_network']) @ddt.data( {'nova_net_id': 'foo', 'neutron_net_id': 'bar'}, {'nova_net_id': 'foo', 'neutron_subnet_id': 'quuz'}, {'nova_net_id': 'foo', 'neutron_net_id': 'bar', 'neutron_subnet_id': 'quuz'}, {'nova_net_id': 'fake_nova_net_id'}, ) def test_update_invalid_cases(self, data): body = {share_networks.RESOURCE_NAME: {'user_id': 'fake_user'}} created = 
self.controller.create(self.req, body) body = {share_networks.RESOURCE_NAME: data} self.assertRaises( webob_exc.HTTPBadRequest, self.controller.update, self.req, created['share_network']['id'], body) @ddt.data( ({'share_network_subnets': [ {'share_network_id': fake_share_network['id']}]}, True), ({'share_network_subnets': []}, False)) @ddt.unpack def test__subnet_has_search_opt(self, network, has_search_opt): search_opts = { 'share_network_id': fake_share_network['id'] } result = None for key, value in search_opts.items(): result = self.controller._subnet_has_search_opt( key, value, network) self.assertEqual(has_search_opt, result) def test_create_nominal(self): self.mock_object(db_api, 'share_network_subnet_create') self.mock_object(db_api, 'share_network_get', mock.Mock(return_value=fake_share_network)) self.mock_object(common, 'check_net_id_and_subnet_id') with mock.patch.object(db_api, 'share_network_create', mock.Mock(return_value=fake_share_network)): result = self.controller.create(self.req, self.body) db_api.share_network_create.assert_called_once_with( self.req.environ['manila.context'], self.body[share_networks.RESOURCE_NAME]) self._check_share_network_view( result[share_networks.RESOURCE_NAME], fake_share_network) def test_create_db_api_exception(self): with mock.patch.object(db_api, 'share_network_create', mock.Mock(side_effect=db_exception.DBError)): self.assertRaises(webob_exc.HTTPInternalServerError, self.controller.create, self.req, self.body) def test_create_wrong_body(self): body = None self.assertRaises(webob_exc.HTTPUnprocessableEntity, self.controller.create, self.req, body) @ddt.data( {'availability_zone': 'fake-zone'}) def test_create_az_not_found(self, data): req = fakes.HTTPRequest.blank('/share-networks', version="2.51") self.mock_object( db_api, 'availability_zone_get', mock.Mock( side_effect=exception.AvailabilityZoneNotFound(id='fake'))) body = {share_networks.RESOURCE_NAME: data} self.assertRaises(webob_exc.HTTPBadRequest, self.controller.create, req, body) def test_create_error_on_subnet_creation(self): data = { 'neutron_net_id': 'fake', 'neutron_subnet_id': 'fake', 'id': fake_share_network['id'] } subnet_data = copy.deepcopy(data) self.mock_object(db_api, 'share_network_create', mock.Mock(return_value=fake_share_network)) self.mock_object(db_api, 'share_network_subnet_create', mock.Mock(side_effect=db_exception.DBError())) self.mock_object(db_api, 'share_network_delete') body = {share_networks.RESOURCE_NAME: data} self.assertRaises(webob_exc.HTTPInternalServerError, self.controller.create, self.req, body) db_api.share_network_create.assert_called_once_with(self.context, data) subnet_data['share_network_id'] = data['id'] subnet_data.pop('id') db_api.share_network_subnet_create.assert_called_once_with( self.context, subnet_data) db_api.share_network_delete.assert_called_once_with( self.context, fake_share_network['id']) def test_delete_nominal(self): share_nw = fake_share_network.copy() subnet = fake_share_network_subnet.copy() subnet['share_servers'] = ['foo', 'bar'] share_nw['share_network_subnets'] = [subnet] self.mock_object(db_api, 'share_network_get', mock.Mock(return_value=share_nw)) self.mock_object(db_api, 'share_instances_get_all_by_share_network', mock.Mock(return_value=[])) self.mock_object(self.controller.share_rpcapi, 'delete_share_server') self.mock_object(self.controller, '_all_share_servers_are_auto_deletable', mock.Mock(return_value=True)) self.mock_object(db_api, 'share_network_delete') self.controller.delete(self.req, share_nw['id']) 
db_api.share_network_get.assert_called_once_with( self.req.environ['manila.context'], share_nw['id']) (db_api.share_instances_get_all_by_share_network. assert_called_once_with(self.req.environ['manila.context'], share_nw['id'])) self.controller.share_rpcapi.delete_share_server.assert_has_calls([ mock.call(self.req.environ['manila.context'], 'foo'), mock.call(self.req.environ['manila.context'], 'bar')]) db_api.share_network_delete.assert_called_once_with( self.req.environ['manila.context'], share_nw['id']) def test_delete_not_found(self): share_nw = 'fake network id' self.mock_object(db_api, 'share_network_get', mock.Mock(side_effect=exception.ShareNetworkNotFound( share_network_id=share_nw))) self.assertRaises(webob_exc.HTTPNotFound, self.controller.delete, self.req, share_nw) def test_quota_delete_reservation_failed(self): share_nw = fake_share_network.copy() subnet = fake_share_network_subnet.copy() subnet['share_servers'] = ['foo', 'bar'] share_nw['share_network_subnets'] = [subnet] share_nw['user_id'] = 'fake_user_id' self.mock_object(db_api, 'share_network_get', mock.Mock(return_value=share_nw)) self.mock_object(db_api, 'share_instances_get_all_by_share_network', mock.Mock(return_value=[])) self.mock_object(self.controller, '_all_share_servers_are_auto_deletable', mock.Mock(return_value=True)) self.mock_object(self.controller.share_rpcapi, 'delete_share_server') self.mock_object(db_api, 'share_network_delete') self.mock_object(share_networks.QUOTAS, 'reserve', mock.Mock(side_effect=Exception)) self.mock_object(share_networks.QUOTAS, 'commit') self.controller.delete(self.req, share_nw['id']) db_api.share_network_get.assert_called_once_with( self.req.environ['manila.context'], share_nw['id']) (db_api.share_instances_get_all_by_share_network. assert_called_once_with(self.req.environ['manila.context'], share_nw['id'])) self.controller.share_rpcapi.delete_share_server.assert_has_calls([ mock.call(self.req.environ['manila.context'], 'foo'), mock.call(self.req.environ['manila.context'], 'bar')]) db_api.share_network_delete.assert_called_once_with( self.req.environ['manila.context'], share_nw['id']) share_networks.QUOTAS.reserve.assert_called_once_with( self.req.environ['manila.context'], project_id=share_nw['project_id'], share_networks=-1, user_id=share_nw['user_id'] ) self.assertFalse(share_networks.QUOTAS.commit.called) def test_delete_in_use_by_share(self): share_nw = fake_share_network.copy() self.mock_object(db_api, 'share_network_get', mock.Mock(return_value=share_nw)) self.mock_object(db_api, 'share_instances_get_all_by_share_network', mock.Mock(return_value=['foo', 'bar'])) self.assertRaises(webob_exc.HTTPConflict, self.controller.delete, self.req, share_nw['id']) db_api.share_network_get.assert_called_once_with( self.req.environ['manila.context'], share_nw['id']) (db_api.share_instances_get_all_by_share_network. 
assert_called_once_with(self.req.environ['manila.context'], share_nw['id'])) def test_delete_in_use_by_share_group(self): share_nw = fake_share_network.copy() self.mock_object(db_api, 'share_network_get', mock.Mock(return_value=share_nw)) self.mock_object(db_api, 'count_share_groups_in_share_network', mock.Mock(return_value=2)) self.assertRaises(webob_exc.HTTPConflict, self.controller.delete, self.req, share_nw['id']) db_api.share_network_get.assert_called_once_with( self.req.environ['manila.context'], share_nw['id']) def test_delete_contains_is_auto_deletable_false_servers(self): share_nw = fake_share_network.copy() self.mock_object(db_api, 'share_network_get', mock.Mock(return_value=share_nw)) self.mock_object(db_api, 'count_share_groups_in_share_network') self.mock_object(share_networks.ShareNetworkController, '_all_share_servers_are_auto_deletable', mock.Mock(return_value=False)) self.assertRaises(webob_exc.HTTPConflict, self.controller.delete, self.req, share_nw['id']) db_api.share_network_get.assert_called_once_with( self.req.environ['manila.context'], share_nw['id']) def test_delete_contains_more_than_one_subnet(self): share_nw = fake_share_network.copy() self.mock_object(db_api, 'share_network_get', mock.Mock(return_value=share_nw)) self.mock_object(db_api, 'share_instances_get_all_by_share_network', mock.Mock(return_value=None)) self.mock_object(db_api, 'count_share_groups_in_share_network', mock.Mock(return_value=None)) self.mock_object(self.controller, '_share_network_contains_subnets', mock.Mock(return_value=True)) self.assertRaises(webob_exc.HTTPConflict, self.controller.delete, self.req, share_nw['id']) db_api.share_network_get.assert_called_once_with( self.context, share_nw['id']) (db_api.share_instances_get_all_by_share_network .assert_called_once_with(self.context, share_nw['id'])) db_api.count_share_groups_in_share_network.assert_called_once_with( self.context, share_nw['id'] ) (self.controller._share_network_contains_subnets .assert_called_once_with(share_nw)) def test_delete_subnet_contains_share_server(self): share_nw = fake_share_network.copy() share_nw['share_network_subnets'].append({ 'id': 'fake_sns_id', 'share_servers': [{'id': 'fake_share_server_id'}] }) self.mock_object(db_api, 'share_network_get', mock.Mock(return_value=share_nw)) self.mock_object(db_api, 'count_share_groups_in_share_network', mock.Mock(return_value=0)) self.mock_object(self.controller, '_share_network_contains_subnets', mock.Mock(return_value=False)) self.mock_object( self.controller, '_all_share_servers_are_auto_deletable', mock.Mock(return_value=False)) self.assertRaises(webob_exc.HTTPConflict, self.controller.delete, self.req, share_nw['id']) @ddt.data( ({'share_servers': [{'is_auto_deletable': True}, {'is_auto_deletable': True}]}, True), ({'share_servers': [{'is_auto_deletable': True}, {'is_auto_deletable': False}]}, False), ) @ddt.unpack def test__share_servers_are_auto_deletable(self, fake_share_network, expected_result): self.assertEqual( expected_result, self.controller._all_share_servers_are_auto_deletable( fake_share_network)) @ddt.data( ({'share_network_subnets': [{'share_servers': [{}, {}]}]}, True), ({'share_network_subnets': [{'share_servers': []}]}, False), ) @ddt.unpack def test__share_network_subnets_contain_share_servers(self, share_network, expected_result): self.assertEqual( expected_result, self.controller._share_network_subnets_contain_share_servers( share_network)) def test_show_nominal(self): share_nw = 'fake network id' with mock.patch.object(db_api, 
'share_network_get', mock.Mock(return_value=fake_share_network)): result = self.controller.show(self.req, share_nw) db_api.share_network_get.assert_called_once_with( self.req.environ['manila.context'], share_nw) self._check_share_network_view( result[share_networks.RESOURCE_NAME], fake_share_network) def test_show_not_found(self): share_nw = 'fake network id' test_exception = exception.ShareNetworkNotFound( share_network_id=share_nw) with mock.patch.object(db_api, 'share_network_get', mock.Mock(side_effect=test_exception)): self.assertRaises(webob_exc.HTTPNotFound, self.controller.show, self.req, share_nw) def test_index_no_filters(self): networks = [fake_share_network] with mock.patch.object(db_api, 'share_network_get_all_by_project', mock.Mock(return_value=networks)): result = self.controller.index(self.req) db_api.share_network_get_all_by_project.assert_called_once_with( self.context, self.context.project_id) self.assertEqual(1, len(result[share_networks.RESOURCES_NAME])) self._check_share_network_view_shortened( result[share_networks.RESOURCES_NAME][0], fake_share_network_shortened) def test_index_detailed(self): networks = [fake_share_network] with mock.patch.object(db_api, 'share_network_get_all_by_project', mock.Mock(return_value=networks)): result = self.controller.detail(self.req) db_api.share_network_get_all_by_project.assert_called_once_with( self.context, self.context.project_id) self.assertEqual(1, len(result[share_networks.RESOURCES_NAME])) self._check_share_network_view( result[share_networks.RESOURCES_NAME][0], fake_share_network) @mock.patch.object(db_api, 'share_network_get_all_by_security_service', mock.Mock()) def test_index_filter_by_security_service(self): db_api.share_network_get_all_by_security_service.return_value = [ fake_share_network_with_ss] req = fakes.HTTPRequest.blank( '/share_networks?security_service_id=fake-ss-id') result = self.controller.index(req) (db_api.share_network_get_all_by_security_service. 
assert_called_once_with(req.environ['manila.context'], 'fake-ss-id')) self.assertEqual(1, len(result[share_networks.RESOURCES_NAME])) self._check_share_network_view_shortened( result[share_networks.RESOURCES_NAME][0], fake_sn_with_ss_shortened) @mock.patch.object(db_api, 'share_network_get_all_by_project', mock.Mock()) def test_index_all_tenants_non_admin_context(self): req = fakes.HTTPRequest.blank( '/share_networks?all_tenants=1') fake_context = req.environ['manila.context'] db_api.share_network_get_all_by_project.return_value = [] self.controller.index(req) db_api.share_network_get_all_by_project.assert_called_with( fake_context, fake_context.project_id) @mock.patch.object(db_api, 'share_network_get_all', mock.Mock()) def test_index_all_tenants_admin_context(self): db_api.share_network_get_all.return_value = [fake_share_network] req = fakes.HTTPRequest.blank( '/share_networks?all_tenants=1', use_admin_context=True) result = self.controller.index(req) db_api.share_network_get_all.assert_called_once_with( req.environ['manila.context']) self.assertEqual(1, len(result[share_networks.RESOURCES_NAME])) self._check_share_network_view_shortened( result[share_networks.RESOURCES_NAME][0], fake_share_network_shortened) @mock.patch.object(db_api, 'share_network_get_all', mock.Mock()) def test_index_all_tenants_with_invaild_value(self): req = fakes.HTTPRequest.blank( '/share_networks?all_tenants=wonk', use_admin_context=True) self.assertRaises(exception.InvalidInput, self.controller.index, req) @mock.patch.object(db_api, 'share_network_get_all_by_project', mock.Mock()) @mock.patch.object(db_api, 'share_network_get_all', mock.Mock()) def test_index_all_tenants_with_value_zero(self): db_api.share_network_get_all_by_project.return_value = [ fake_share_network] req = fakes.HTTPRequest.blank( '/share_networks?all_tenants=0', use_admin_context=True) result = self.controller.index(req) self.assertEqual(1, len(result[share_networks.RESOURCES_NAME])) self._check_share_network_view_shortened( result[share_networks.RESOURCES_NAME][0], fake_share_network_shortened) db_api.share_network_get_all_by_project.assert_called_once_with( req.environ['manila.context'], self.context.project_id) db_api.share_network_get_all.assert_not_called() @mock.patch.object(db_api, 'share_network_get_all_by_project', mock.Mock()) def test_index_filter_by_project_id_non_admin_context(self): req = fakes.HTTPRequest.blank( '/share_networks?project_id=fake project') fake_context = req.environ['manila.context'] db_api.share_network_get_all_by_project.return_value = [] self.controller.index(req) db_api.share_network_get_all_by_project.assert_called_with( fake_context, fake_context.project_id) @mock.patch.object(db_api, 'share_network_get_all_by_project', mock.Mock()) def test_index_filter_by_project_id_admin_context(self): db_api.share_network_get_all_by_project.return_value = [ fake_share_network_with_ss ] req = fakes.HTTPRequest.blank( '/share_networks?project_id=fake', use_admin_context=True) result = self.controller.index(req) db_api.share_network_get_all_by_project.assert_called_once_with( req.environ['manila.context'], 'fake') self.assertEqual(1, len(result[share_networks.RESOURCES_NAME])) self._check_share_network_view_shortened( result[share_networks.RESOURCES_NAME][0], fake_sn_with_ss_shortened) @mock.patch.object(db_api, 'share_network_get_all_by_security_service', mock.Mock()) def test_index_filter_by_ss_and_project_id_admin_context(self): db_api.share_network_get_all_by_security_service.return_value = [ 
fake_share_network_with_ss ] req = fakes.HTTPRequest.blank( '/share_networks?security_service_id=fake-ss-id&project_id=fake', use_admin_context=True) result = self.controller.index(req) (db_api.share_network_get_all_by_security_service. assert_called_once_with(req.environ['manila.context'], 'fake-ss-id')) self.assertEqual(1, len(result[share_networks.RESOURCES_NAME])) self._check_share_network_view_shortened( result[share_networks.RESOURCES_NAME][0], fake_sn_with_ss_shortened) @ddt.data(('name=fo', 0), ('description=d', 0), ('name=foo&description=d', 0), ('name=foo', 1), ('description=ds', 1), ('name~=foo&description~=ds', 2), ('name=foo&description~=ds', 1), ('name~=foo&description=ds', 1)) @ddt.unpack @mock.patch.object(db_api, 'share_network_get_all_by_project', mock.Mock()) def test_index_filter_by_name_and_description( self, filter, share_network_number): fake_objs = [{'name': 'fo2', 'description': 'd2', 'id': 'fake1'}, {'name': 'foo', 'description': 'ds', 'id': 'fake2'}, {'name': 'foo1', 'description': 'ds1', 'id': 'fake3'}] db_api.share_network_get_all_by_project.return_value = fake_objs req = fakes.HTTPRequest.blank( '/share_networks?' + filter, use_admin_context=True, version='2.36') result = self.controller.index(req) db_api.share_network_get_all_by_project.assert_called_with( req.environ['manila.context'], self.context.project_id) self.assertEqual(share_network_number, len(result[share_networks.RESOURCES_NAME])) if share_network_number > 0: self._check_share_network_view_shortened( result[share_networks.RESOURCES_NAME][0], fake_objs[1]) if share_network_number > 1: self._check_share_network_view_shortened( result[share_networks.RESOURCES_NAME][1], fake_objs[2]) @mock.patch.object(db_api, 'share_network_get_all_by_project', mock.Mock()) def test_index_all_filter_opts(self): valid_filter_opts = { 'created_before': '2001-02-02', 'created_since': '1999-01-01', 'name': 'test-sn' } db_api.share_network_get_all_by_project.return_value = [ fake_share_network, fake_share_network_with_ss] query_string = '/share-networks?' 
+ parse.urlencode(sorted( [(k, v) for (k, v) in list(valid_filter_opts.items())])) for use_admin_context in [True, False]: req = fakes.HTTPRequest.blank(query_string, use_admin_context=use_admin_context) result = self.controller.index(req) db_api.share_network_get_all_by_project.assert_called_with( req.environ['manila.context'], 'fake') self.assertEqual(1, len(result[share_networks.RESOURCES_NAME])) self._check_share_network_view_shortened( result[share_networks.RESOURCES_NAME][0], fake_sn_with_ss_shortened) @mock.patch.object(db_api, 'share_network_get', mock.Mock()) def test_update_nominal(self): share_nw = 'fake network id' db_api.share_network_get.return_value = fake_share_network body = {share_networks.RESOURCE_NAME: {'name': 'new name'}} with mock.patch.object(db_api, 'share_network_update', mock.Mock(return_value=fake_share_network)): result = self.controller.update(self.req, share_nw, body) db_api.share_network_update.assert_called_once_with( self.req.environ['manila.context'], share_nw, body[share_networks.RESOURCE_NAME]) self._check_share_network_view( result[share_networks.RESOURCE_NAME], fake_share_network) @mock.patch.object(db_api, 'share_network_get', mock.Mock()) def test_update_not_found(self): share_nw = 'fake network id' db_api.share_network_get.side_effect = exception.ShareNetworkNotFound( share_network_id=share_nw) self.assertRaises(webob_exc.HTTPNotFound, self.controller.update, self.req, share_nw, self.body) @mock.patch.object(db_api, 'share_network_get', mock.Mock()) def test_update_invalid_key_in_use(self): share_nw = fake_share_network.copy() subnet = fake_share_network_subnet.copy() subnet['share_servers'] = [{'id': 1}] share_nw['share_network_subnets'] = [subnet] db_api.share_network_get.return_value = share_nw body = { share_networks.RESOURCE_NAME: { 'name': 'new name', 'user_id': 'new id', }, } self.assertRaises(webob_exc.HTTPForbidden, self.controller.update, self.req, share_nw['id'], body) @mock.patch.object(db_api, 'share_network_get', mock.Mock()) @mock.patch.object(db_api, 'share_network_update', mock.Mock()) def test_update_valid_keys_in_use(self): share_nw = fake_share_network.copy() subnet = fake_share_network_subnet.copy() subnet['share_servers'] = [{'id': 1}] share_nw['share_network_subnets'] = [subnet] updated_share_nw = share_nw.copy() updated_share_nw['name'] = 'new name' updated_share_nw['description'] = 'new description' db_api.share_network_get.return_value = share_nw db_api.share_network_update.return_value = updated_share_nw body = { share_networks.RESOURCE_NAME: { 'name': updated_share_nw['name'], 'description': updated_share_nw['description'], }, } self.controller.update(self.req, share_nw['id'], body) db_api.share_network_get.assert_called_once_with(self.context, share_nw['id']) db_api.share_network_update.assert_called_once_with( self.context, share_nw['id'], body['share_network']) @mock.patch.object(db_api, 'share_network_get', mock.Mock()) def test_update_db_api_exception(self): share_nw = 'fake network id' db_api.share_network_get.return_value = fake_share_network self.mock_object( self.controller, '_share_network_subnets_contain_share_servers', mock.Mock(return_value=False)) self.mock_object(db_api, 'share_network_subnet_get_default_subnet', mock.Mock(return_value=fake_share_network_subnet)) self.mock_object(db_api, 'share_network_subnet_update') body = {share_networks.RESOURCE_NAME: {'neutron_subnet_id': 'new subnet'}} with mock.patch.object(db_api, 'share_network_update', mock.Mock(side_effect=db_exception.DBError)): 
            self.assertRaises(webob_exc.HTTPBadRequest,
                              self.controller.update,
                              self.req,
                              share_nw,
                              body)
        db_api.share_network_subnet_get_default_subnet.assert_called_once_with(
            self.context, share_nw)
        db_api.share_network_subnet_update.assert_called_once_with(
            self.context, fake_share_network_subnet['id'],
            body['share_network'])

    @ddt.data((webob_exc.HTTPBadRequest, fake_share_network_subnet, None,
               'new subnet'),
              (webob_exc.HTTPBadRequest, None, 'neutron net', None))
    @ddt.unpack
    def test_update_default_subnet_errors(self, exception_to_raise,
                                          get_default_subnet_return,
                                          neutron_net_id, neutron_subnet_id):
        share_nw = 'fake network id'
        self.mock_object(db_api, 'share_network_get',
                         mock.Mock(return_value=fake_share_network))
        self.mock_object(
            self.controller,
            '_share_network_subnets_contain_share_servers',
            mock.Mock(return_value=False))
        self.mock_object(db_api, 'share_network_subnet_get_default_subnet',
                         mock.Mock(return_value=get_default_subnet_return))
        if get_default_subnet_return:
            fake_subnet = copy.deepcopy(fake_share_network_subnet)
            fake_subnet['neutron_net_id'] = None
            fake_subnet['neutron_subnet_id'] = None
            db_api.share_network_subnet_get_default_subnet.return_value = (
                fake_subnet)

        body = {
            share_networks.RESOURCE_NAME: {
                'neutron_net_id': neutron_net_id,
                'neutron_subnet_id': neutron_subnet_id
            }
        }

        self.assertRaises(exception_to_raise,
                          self.controller.update,
                          self.req,
                          share_nw,
                          body)
        db_api.share_network_subnet_get_default_subnet.assert_called_once_with(
            self.context, share_nw)

    @ddt.data(*set(("1.0", "2.25", "2.26", api_version._MAX_API_VERSION)))
    def test_action_add_security_service(self, microversion):
        share_network_id = 'fake network id'
        security_service_id = 'fake ss id'
        self.mock_object(
            self.controller,
            '_share_network_subnets_contain_share_servers')
        body = {'add_security_service': {'security_service_id':
                                         security_service_id}}

        req = fakes.HTTPRequest.blank('/share-networks', version=microversion)
        with mock.patch.object(self.controller, '_add_security_service',
                               mock.Mock()):
            self.controller.action(req, share_network_id, body)
            self.controller._add_security_service.assert_called_once_with(
                req, share_network_id, body['add_security_service'])

    @mock.patch.object(db_api, 'share_network_get', mock.Mock())
    @mock.patch.object(db_api, 'security_service_get', mock.Mock())
    def test_action_add_security_service_conflict(self):
        share_network = fake_share_network.copy()
        share_network['security_services'] = [{'id': 'security_service_1',
                                               'type': 'ldap'}]
        security_service = {'id': ' security_service_2', 'type': 'ldap'}
        body = {'add_security_service': {'security_service_id':
                                         security_service['id']}}

        self.mock_object(
            self.controller,
            '_share_network_subnets_contain_share_servers',
            mock.Mock(return_value=False))
        db_api.security_service_get.return_value = security_service
        db_api.share_network_get.return_value = share_network
        with mock.patch.object(share_networks.policy, 'check_policy',
                               mock.Mock()):
            self.assertRaises(webob_exc.HTTPConflict,
                              self.controller.action,
                              self.req,
                              share_network['id'],
                              body)

            db_api.share_network_get.assert_called_once_with(
                self.req.environ['manila.context'], share_network['id'])
            db_api.security_service_get.assert_called_once_with(
                self.req.environ['manila.context'], security_service['id'])
            share_networks.policy.check_policy.assert_called_once_with(
                self.req.environ['manila.context'],
                share_networks.RESOURCE_NAME,
                'add_security_service',
            )

    @ddt.data(*set(("1.0", "2.25", "2.26", api_version._MAX_API_VERSION)))
    def test_action_remove_security_service(self, microversion):
        share_network_id = 'fake network id'
        security_service_id = 'fake ss id'
        self.mock_object(
            self.controller,
            '_share_network_subnets_contain_share_servers')
        body = {'remove_security_service': {'security_service_id':
                                            security_service_id}}

        req = fakes.HTTPRequest.blank('/share-networks', version=microversion)
        with mock.patch.object(self.controller, '_remove_security_service',
                               mock.Mock()):
            self.controller.action(req, share_network_id, body)
            self.controller._remove_security_service.assert_called_once_with(
                req, share_network_id, body['remove_security_service'])

    @mock.patch.object(db_api, 'share_network_get', mock.Mock())
    @mock.patch.object(share_networks.policy, 'check_policy', mock.Mock())
    def test_action_remove_security_service_forbidden(self):
        share_network = fake_share_network.copy()
        subnet = fake_share_network_subnet.copy()
        subnet['share_servers'] = ['foo']
        share_network['share_network_subnets'] = [subnet]
        db_api.share_network_get.return_value = share_network
        self.mock_object(
            self.controller,
            '_share_network_subnets_contain_share_servers',
            mock.Mock(return_value=True))
        body = {
            'remove_security_service': {
                'security_service_id': 'fake id',
            },
        }

        self.assertRaises(webob_exc.HTTPForbidden,
                          self.controller.action,
                          self.req,
                          share_network['id'],
                          body)
        db_api.share_network_get.assert_called_once_with(
            self.req.environ['manila.context'], share_network['id'])
        share_networks.policy.check_policy.assert_called_once_with(
            self.req.environ['manila.context'],
            share_networks.RESOURCE_NAME,
            'remove_security_service')

    def test_action_bad_request(self):
        share_network_id = 'fake network id'
        body = {'bad_action': {}}

        self.assertRaises(webob_exc.HTTPBadRequest,
                          self.controller.action,
                          self.req,
                          share_network_id,
                          body)

    @ddt.data('add_security_service', 'remove_security_service')
    def test_action_security_service_contains_share_servers(self, action):
        share_network = fake_share_network.copy()
        security_service = {'id': ' security_service_2', 'type': 'ldap'}
        body = {
            action: {
                'security_service_id': security_service['id']
            }
        }
        self.mock_object(share_networks.policy, 'check_policy')
        self.mock_object(db_api, 'share_network_get',
                         mock.Mock(return_value=share_network))
        self.mock_object(
            self.controller,
            '_share_network_subnets_contain_share_servers',
            mock.Mock(return_value=True))

        self.assertRaises(webob_exc.HTTPForbidden,
                          self.controller.action,
                          self.req,
                          share_network['id'],
                          body)
        db_api.share_network_get.assert_called_once_with(
            self.req.environ['manila.context'], share_network['id'])
        share_networks.policy.check_policy.assert_called_once_with(
            self.req.environ['manila.context'],
            share_networks.RESOURCE_NAME, action)
manila-10.0.0/manila/tests/api/v2/test_share_snapshot_instances.py0000664000175000017500000002410213656750227025273 0ustar zuulzuul00000000000000# Copyright 2016 Huawei Inc.
# All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock

import ddt
from oslo_config import cfg
from oslo_serialization import jsonutils
import six
from webob import exc

from manila.api.v2 import share_snapshot_instances
from manila.common import constants
from manila import context
from manila import exception
from manila import policy
from manila import test
from manila.tests.api import fakes
from manila.tests import db_utils
from manila.tests import fake_share

CONF = cfg.CONF


@ddt.ddt
class ShareSnapshotInstancesApiTest(test.TestCase):
    """Share snapshot instance Api Test."""

    def setUp(self):
        super(ShareSnapshotInstancesApiTest, self).setUp()
        self.controller = (share_snapshot_instances.
                           ShareSnapshotInstancesController())
        self.resource_name = self.controller.resource_name
        self.api_version = '2.19'
        self.snapshot_instances_req = fakes.HTTPRequest.blank(
            '/snapshot-instances', version=self.api_version)
        self.admin_context = context.RequestContext('admin', 'fake', True)
        self.member_context = context.RequestContext('fake', 'fake')
        self.snapshot_instances_req.environ['manila.context'] = (
            self.admin_context)
        self.snapshot_instances_req_admin = fakes.HTTPRequest.blank(
            '/snapshot-instances', version=self.api_version,
            use_admin_context=True)
        self.mock_policy_check = self.mock_object(policy, 'check_policy')

    def _get_fake_snapshot_instance(self, summary=False, **values):
        snapshot_instance = fake_share.fake_snapshot_instance(
            as_primitive=True)
        expected_keys = {
            'id',
            'snapshot_id',
            'status',
        }
        expected_snapshot_instance = {key: snapshot_instance[key]
                                      for key in snapshot_instance
                                      if key in expected_keys}

        if not summary:
            expected_snapshot_instance['share_id'] = (
                snapshot_instance.get('share_instance').get('share_id'))
            expected_snapshot_instance.update({
                'created_at': snapshot_instance.get('created_at'),
                'updated_at': snapshot_instance.get('updated_at'),
                'progress': snapshot_instance.get('progress'),
                'provider_location': snapshot_instance.get(
                    'provider_location'),
                'share_instance_id': snapshot_instance.get(
                    'share_instance_id'),
            })
        return snapshot_instance, expected_snapshot_instance

    def _setup_snapshot_instance_data(self, instance=None):
        if instance is None:
            share_instance = db_utils.create_share_instance(
                status=constants.STATUS_AVAILABLE,
                share_id='fake_share_id_1')
            instance = db_utils.create_snapshot_instance(
                'fake_snapshot_id_1',
                status=constants.STATUS_AVAILABLE,
                share_instance_id=share_instance['id'])
        path = '/v2/fake/snapshot-instances/%s/action' % instance['id']
        req = fakes.HTTPRequest.blank(path, version=self.api_version,
                                      script_name=path)
        req.method = 'POST'
        req.headers['content-type'] = 'application/json'
        req.headers['X-Openstack-Manila-Api-Version'] = self.api_version
        return instance, req

    def _get_context(self, role):
        return getattr(self, '%s_context' % role)

    @ddt.data(None, 'FAKE_SNAPSHOT_ID')
    def test_list_snapshot_instances_summary(self, snapshot_id):
        snapshot_instance, expected_snapshot_instance = (
            self._get_fake_snapshot_instance(summary=True))
        self.mock_object(share_snapshot_instances.db,
                         'share_snapshot_instance_get_all_with_filters',
                         mock.Mock(return_value=[snapshot_instance]))
        url = '/snapshot-instances'
        if snapshot_id:
            url += '?snapshot_id=%s' % snapshot_id
        req = fakes.HTTPRequest.blank(url, version=self.api_version)
        req_context = req.environ['manila.context']

        res_dict = self.controller.index(req)

        self.assertEqual([expected_snapshot_instance],
                         res_dict['snapshot_instances'])
        self.mock_policy_check.assert_called_once_with(
            req_context, self.resource_name, 'index')

    def test_list_snapshot_instances_detail(self):
        snapshot_instance, expected_snapshot_instance = (
            self._get_fake_snapshot_instance())
        self.mock_object(share_snapshot_instances.db,
                         'share_snapshot_instance_get_all_with_filters',
                         mock.Mock(return_value=[snapshot_instance]))

        res_dict = self.controller.detail(self.snapshot_instances_req)

        self.assertEqual([expected_snapshot_instance],
                         res_dict['snapshot_instances'])
        self.mock_policy_check.assert_called_once_with(
            self.admin_context, self.resource_name, 'detail')

    def test_list_snapshot_instances_detail_invalid_snapshot(self):
        self.mock_object(share_snapshot_instances.db,
                         'share_snapshot_instance_get_all_with_filters',
                         mock.Mock(return_value=[]))
        req = self.snapshot_instances_req
        req.GET['snapshot_id'] = 'FAKE_SNAPSHOT_ID'

        res_dict = self.controller.detail(req)

        self.assertEqual([], res_dict['snapshot_instances'])
        self.mock_policy_check.assert_called_once_with(
            self.admin_context, self.resource_name, 'detail')

    def test_show(self):
        snapshot_instance, expected_snapshot_instance = (
            self._get_fake_snapshot_instance())
        self.mock_object(
            share_snapshot_instances.db, 'share_snapshot_instance_get',
            mock.Mock(return_value=snapshot_instance))

        res_dict = self.controller.show(self.snapshot_instances_req,
                                        snapshot_instance.get('id'))

        self.assertEqual(expected_snapshot_instance,
                         res_dict['snapshot_instance'])
        self.mock_policy_check.assert_called_once_with(
            self.admin_context, self.resource_name, 'show')

    def test_show_snapshot_instance_not_found(self):
        mock__view_builder_call = self.mock_object(
            share_snapshot_instances.instance_view.ViewBuilder, 'detail')
        fake_exception = exception.ShareSnapshotInstanceNotFound(
            instance_id='FAKE_SNAPSHOT_INSTANCE_ID')
        self.mock_object(share_snapshot_instances.db,
                         'share_snapshot_instance_get',
                         mock.Mock(side_effect=fake_exception))

        self.assertRaises(exc.HTTPNotFound,
                          self.controller.show,
                          self.snapshot_instances_req,
                          'FAKE_SNAPSHOT_INSTANCE_ID')

        self.assertFalse(mock__view_builder_call.called)

    @ddt.data('index', 'detail', 'show', 'reset_status')
    def test_policy_not_authorized(self, method_name):
        method = getattr(self.controller, method_name)
        if method_name in ('index', 'detail'):
            arguments = {}
        else:
            arguments = {
                'id': 'FAKE_SNAPSHOT_ID',
                'body': {'FAKE_KEY': 'FAKE_VAL'},
            }

        noauthexc = exception.PolicyNotAuthorized(action=six.text_type(method))

        with mock.patch.object(
                policy, 'check_policy', mock.Mock(side_effect=noauthexc)):
            self.assertRaises(
                exc.HTTPForbidden, method, self.snapshot_instances_req,
                **arguments)

    @ddt.data('index', 'show', 'detail', 'reset_status')
    def test_unsupported_microversion(self, method_name):
        unsupported_microversions = ('1.0', '2.18')
        method = getattr(self.controller, method_name)
        arguments = {
            'id': 'FAKE_SNAPSHOT_ID',
        }
        if method_name in ('index',):
            arguments.clear()

        for microversion in unsupported_microversions:
            req = fakes.HTTPRequest.blank(
                '/snapshot-instances', version=microversion)
            self.assertRaises(exception.VersionNotFoundForAPIMethod,
                              method, req, **arguments)

    def _reset_status(self, context, instance, req,
                      valid_code=202, valid_status=None, body=None):
        if body is None:
            body = {'reset_status': {'status': constants.STATUS_ERROR}}

        req.body = six.b(jsonutils.dumps(body))
        req.environ['manila.context'] = context

        with mock.patch.object(
                policy, 'check_policy', fakes.mock_fake_admin_check):
            resp = req.get_response(fakes.app())

        # validate response code and model status
        self.assertEqual(valid_code, resp.status_int)

        actual_instance = (
            share_snapshot_instances.db.share_snapshot_instance_get(
                context, instance['id']))
        self.assertEqual(valid_status, actual_instance['status'])

    @ddt.data(*fakes.fixture_reset_status_with_different_roles)
    @ddt.unpack
    def test_reset_status_with_different_roles(self, role, valid_code,
                                               valid_status, version):
        instance, action_req = self._setup_snapshot_instance_data()
        ctxt = self._get_context(role)

        self._reset_status(ctxt, instance, action_req,
                           valid_code=valid_code,
                           valid_status=valid_status)
manila-10.0.0/manila/tests/api/v2/test_share_accesses.py0000664000175000017500000001207113656750227023160 0ustar zuulzuul00000000000000# Copyright (c) 2018 Huawei Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from unittest import mock

import ddt
from webob import exc

from manila.api.v2 import share_accesses
from manila import exception
from manila import policy
from manila import test
from manila.tests.api import fakes
from manila.tests import db_utils
from oslo_utils import uuidutils


@ddt.ddt
class ShareAccessesAPITest(test.TestCase):

    def _get_index_request(self, share_id=None, filters='', version="2.45",
                           use_admin_context=True):
        share_id = share_id or self.share['id']
        req = fakes.HTTPRequest.blank(
            '/v2/share-access-rules?share_id=%s' % share_id + filters,
            version=version, use_admin_context=use_admin_context)
        return req

    def _get_show_request(self, access_id=None, version="2.45",
                          use_admin_context=True):
        access_id = access_id or self.access['id']
        req = fakes.HTTPRequest.blank(
            '/v2/share-access-rules/%s' % access_id,
            version=version, use_admin_context=use_admin_context)
        return req

    def setUp(self):
        super(ShareAccessesAPITest, self).setUp()
        self.controller = (
            share_accesses.ShareAccessesController())
        self.resource_name = self.controller.resource_name
        self.mock_policy_check = self.mock_object(
            policy, 'check_policy', mock.Mock(return_value=True))
        self.share = db_utils.create_share()
        self.access = db_utils.create_share_access(
            id=uuidutils.generate_uuid(),
            share_id=self.share['id'],
        )
        db_utils.create_share_access(
            id=uuidutils.generate_uuid(),
            share_id=self.share['id'],
            metadata={'k1': 'v1'}
        )

    @ddt.data({'role': 'admin', 'version': '2.45',
               'filters': '&metadata=%7B%27k1%27%3A+%27v1%27%7D'},
              {'role': 'user', 'version': '2.45', 'filters': ''})
    @ddt.unpack
    def test_list_and_show(self, role, version, filters):
        summary_keys = ['id', 'access_level', 'access_to', 'access_type',
                        'state', 'metadata']

        self._test_list_and_show(role, filters, version, summary_keys)

    def _test_list_and_show(self, role, filters, version, summary_keys):
        req = self._get_index_request(
            filters=filters, version=version,
            use_admin_context=(role == 'admin'))
        index_result = self.controller.index(req)

        self.assertIn('access_list', index_result)
        self.assertEqual(1, len(index_result))
        access_count = 1 if filters else 2
        self.assertEqual(access_count, len(index_result['access_list']))

        for index_access in index_result['access_list']:
            self.assertIn('id', index_access)

            req = self._get_show_request(
                index_access['id'], version=version,
                use_admin_context=(role == 'admin'))
            show_result = self.controller.show(req, index_access['id'])

            self.assertIn('access', show_result)
            self.assertEqual(1, len(show_result))

            show_el = show_result['access']

            # Ensure keys common to index & show results have matching values
            for key in summary_keys:
                self.assertEqual(index_access[key], show_el[key])

    def test_list_accesses_share_not_found(self):
        self.assertRaises(
            exc.HTTPBadRequest,
            self.controller.index,
            self._get_index_request(share_id='inexistent_share_id'))

    def test_list_accesses_share_req_share_id_not_exist(self):
        req = fakes.HTTPRequest.blank('/v2/share-access-rules?',
                                      version="2.45")
        self.assertRaises(exc.HTTPBadRequest, self.controller.index, req)

    def test_show_access_not_found(self):
        self.assertRaises(
            exc.HTTPNotFound,
            self.controller.show,
            self._get_show_request('inexistent_id'), 'inexistent_id')

    @ddt.data('1.0', '2.0', '2.8', '2.44')
    def test_list_with_unsupported_version(self, version):
        self.assertRaises(
            exception.VersionNotFoundForAPIMethod,
            self.controller.index,
            self._get_index_request(version=version))

    @ddt.data('1.0', '2.0', '2.44')
    def test_show_with_unsupported_version(self, version):
        self.assertRaises(
            exception.VersionNotFoundForAPIMethod,
            self.controller.show,
            self._get_show_request(version=version),
            self.access['id'])
manila-10.0.0/manila/tests/api/v2/test_share_network_subnets.py0000664000175000017500000004750613656750227024626 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy from unittest import mock import ddt from oslo_db import exception as db_exception from manila.api import common from manila.api.v2 import share_network_subnets from manila.db import api as db_api from manila import exception from manila import policy from manila import test from manila.tests.api import fakes from manila.tests import db_utils from webob import exc fake_az = { 'id': 'ae525e12-07e8-4ddc-a2fd-4a89ad4a65ff', 'name': 'fake_az_name' } fake_default_subnet = { 'neutron_net_id': 'fake_nn_id', 'neutron_subnet_id': 'fake_nsn_id', 'availability_zone_id': None } fake_subnet_with_az = { 'neutron_net_id': 'fake_nn_id', 'neutron_subnet_id': 'fake_nsn_id', 'availability_zone_id': 'fake_az_id' } @ddt.ddt class ShareNetworkSubnetControllerTest(test.TestCase): """Share network subnet api test""" def setUp(self): super(ShareNetworkSubnetControllerTest, self).setUp() self.controller = share_network_subnets.ShareNetworkSubnetController() self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) self.resource_name = self.controller.resource_name self.mock_az_get = self.mock_object(db_api, 'availability_zone_get', mock.Mock(return_value=fake_az)) self.share_network = db_utils.create_share_network( name='fake_network', id='fake_sn_id') self.share_server = db_utils.create_share_server( share_network_subnet_id='fake_sns_id') self.subnet = db_utils.create_share_network_subnet( share_network_id=self.share_network['id']) self.share = db_utils.create_share() def test_share_network_subnet_delete(self): req = fakes.HTTPRequest.blank('/subnets/%s' % self.subnet['id'], version="2.51") context = req.environ['manila.context'] mock_sns_get = self.mock_object( db_api, 'share_network_subnet_get', mock.Mock(return_value=self.subnet)) mock_all_get_all_shares_by_ss = self.mock_object( db_api, 'share_instances_get_all_by_share_server', mock.Mock(return_value=[])) mock_all_ss_are_auto_deletable = self.mock_object( self.controller, '_all_share_servers_are_auto_deletable', mock.Mock(return_value=True)) mock_delete_share_server = self.mock_object( self.controller.share_rpcapi, 'delete_share_server') mock_subnet_delete = self.mock_object(db_api, 'share_network_subnet_delete') result = self.controller.delete(req, self.share_network['id'], self.subnet['id']) self.assertEqual(202, result.status_int) mock_sns_get.assert_called_once_with( context, self.subnet['id']) mock_all_get_all_shares_by_ss.assert_called_once_with( context, self.subnet['share_servers'][0].id ) mock_all_ss_are_auto_deletable.assert_called_once_with( self.subnet) mock_delete_share_server.assert_called_once_with( context, self.subnet['share_servers'][0]) mock_subnet_delete.assert_called_once_with( context, self.subnet['id']) policy.check_policy.assert_called_once_with( context, self.resource_name, 'delete') def test_share_network_subnet_delete_network_not_found(self): req = fakes.HTTPRequest.blank('/subnets/%s' % self.subnet['id'], version="2.51") context = req.environ['manila.context'] mock_sn_get = self.mock_object( db_api, 'share_network_get', mock.Mock(side_effect=exception.ShareNetworkNotFound( share_network_id=self.share_network['id'] ))) self.assertRaises(exc.HTTPNotFound, self.controller.delete, req, self.share_network['id'], self.subnet['id']) mock_sn_get.assert_called_once_with( context, self.share_network['id']) self.mock_policy_check.assert_called_once_with( context, self.resource_name, 'delete') def test_share_network_subnet_delete_subnet_not_found(self): req = 
fakes.HTTPRequest.blank('/subnets/%s' % self.subnet['id'], version="2.51") context = req.environ['manila.context'] mock_sns_get = self.mock_object( db_api, 'share_network_subnet_get', mock.Mock(side_effect=exception.ShareNetworkSubnetNotFound( share_network_subnet_id=self.subnet['id'] ))) self.assertRaises(exc.HTTPNotFound, self.controller.delete, req, self.share_network['id'], self.subnet['id']) mock_sns_get.assert_called_once_with( context, self.subnet['id']) self.mock_policy_check.assert_called_once_with( context, self.resource_name, 'delete') def test_delete_subnet_with_share_servers_fail(self): req = fakes.HTTPRequest.blank('/subnets/%s' % self.subnet['id'], version="2.51") context = req.environ['manila.context'] mock_sns_get = self.mock_object( db_api, 'share_network_subnet_get', mock.Mock(return_value=self.subnet)) mock_all_get_all_shares_by_ss = self.mock_object( db_api, 'share_instances_get_all_by_share_server', mock.Mock(return_value=[])) mock_all_ss_are_auto_deletable = self.mock_object( self.controller, '_all_share_servers_are_auto_deletable', mock.Mock(return_value=False)) self.assertRaises(exc.HTTPConflict, self.controller.delete, req, self.share_network['id'], self.subnet['id']) mock_sns_get.assert_called_once_with( context, self.subnet['id']) mock_all_get_all_shares_by_ss.assert_called_once_with( context, self.subnet['share_servers'][0].id ) mock_all_ss_are_auto_deletable.assert_called_once_with( self.subnet ) self.mock_policy_check.assert_called_once_with( context, self.resource_name, 'delete') def test_delete_subnet_with_shares_fail(self): req = fakes.HTTPRequest.blank('/subnets/%s' % self.subnet['id'], version="2.51") context = req.environ['manila.context'] mock_sns_get = self.mock_object( db_api, 'share_network_subnet_get', mock.Mock(return_value=self.subnet)) mock_all_get_all_shares_by_ss = self.mock_object( db_api, 'share_instances_get_all_by_share_server', mock.Mock(return_value=[self.share])) self.assertRaises(exc.HTTPConflict, self.controller.delete, req, self.share_network['id'], self.subnet['id']) mock_sns_get.assert_called_once_with( context, self.subnet['id']) mock_all_get_all_shares_by_ss.assert_called_once_with( context, self.subnet['share_servers'][0].id ) self.mock_policy_check.assert_called_once_with( context, self.resource_name, 'delete') @ddt.data((None, fake_default_subnet, None), (fake_az, None, fake_subnet_with_az)) @ddt.unpack def test__validate_subnet(self, az, default_subnet, subnet_az): req = fakes.HTTPRequest.blank('/subnets', version='2.51') context = req.environ['manila.context'] mock_get_default_sns = self.mock_object( db_api, 'share_network_subnet_get_default_subnet', mock.Mock(return_value=default_subnet)) mock_get_subnet_by_az = self.mock_object( db_api, 'share_network_subnet_get_by_availability_zone_id', mock.Mock(return_value=subnet_az)) self.assertRaises(exc.HTTPConflict, self.controller._validate_subnet, context, self.share_network['id'], az) if az: mock_get_subnet_by_az.assert_called_once_with( context, self.share_network['id'], az['id']) mock_get_default_sns.assert_not_called() else: mock_get_default_sns.assert_called_once_with( context, self.share_network['id']) mock_get_subnet_by_az.assert_not_called() def _setup_create_test_request_body(self): body = { 'share_network_id': self.share_network['id'], 'availability_zone': fake_az['name'], 'neutron_net_id': 'fake_nn_id', 'neutron_subnet_id': 'fake_nsn_id' } return body def test_subnet_create(self): req = fakes.HTTPRequest.blank('/subnets', version="2.51") context = 
req.environ['manila.context'] body = { 'share-network-subnet': self._setup_create_test_request_body() } sn_id = body['share-network-subnet']['share_network_id'] expected_result = copy.deepcopy(body) expected_result['share-network-subnet']['id'] = self.subnet['id'] mock_check_net_and_subnet_id = self.mock_object( common, 'check_net_id_and_subnet_id') mock_validate_subnet = self.mock_object( self.controller, '_validate_subnet') mock_subnet_create = self.mock_object( db_api, 'share_network_subnet_create', mock.Mock(return_value=self.subnet)) self.controller.create( req, body['share-network-subnet']['share_network_id'], body) mock_check_net_and_subnet_id.assert_called_once_with( body['share-network-subnet']) mock_validate_subnet.assert_called_once_with( context, sn_id, az=fake_az) mock_subnet_create.assert_called_once_with( context, body['share-network-subnet']) def test_subnet_create_share_network_not_found(self): fake_sn_id = 'fake_id' req = fakes.HTTPRequest.blank('/subnets', version="2.51") context = req.environ['manila.context'] body = { 'share-network-subnet': self._setup_create_test_request_body() } mock_sn_get = self.mock_object( db_api, 'share_network_get', mock.Mock(side_effect=exception.ShareNetworkNotFound( share_network_id=fake_sn_id))) self.assertRaises(exc.HTTPNotFound, self.controller.create, req, fake_sn_id, body) mock_sn_get.assert_called_once_with(context, fake_sn_id) def test_subnet_create_az_not_found(self): fake_sn_id = 'fake_id' req = fakes.HTTPRequest.blank('/subnets', version="2.51") context = req.environ['manila.context'] body = { 'share-network-subnet': self._setup_create_test_request_body() } mock_sn_get = self.mock_object(db_api, 'share_network_get') mock_az_get = self.mock_object( db_api, 'availability_zone_get', mock.Mock(side_effect=exception.AvailabilityZoneNotFound(id=''))) expected_az = body['share-network-subnet']['availability_zone'] self.assertRaises(exc.HTTPBadRequest, self.controller.create, req, fake_sn_id, body) mock_sn_get.assert_called_once_with(context, fake_sn_id) mock_az_get.assert_called_once_with( context, expected_az) def test_subnet_create_subnet_default_or_same_az_exists(self): fake_sn_id = 'fake_id' req = fakes.HTTPRequest.blank('/subnets', version="2.51") context = req.environ['manila.context'] body = { 'share-network-subnet': self._setup_create_test_request_body() } mock_sn_get = self.mock_object(db_api, 'share_network_get') mock__validate_subnet = self.mock_object( self.controller, '_validate_subnet', mock.Mock(side_effect=exc.HTTPConflict('')) ) expected_az = body['share-network-subnet']['availability_zone'] self.assertRaises(exc.HTTPConflict, self.controller.create, req, fake_sn_id, body) mock_sn_get.assert_called_once_with(context, fake_sn_id) self.mock_az_get.assert_called_once_with(context, expected_az) mock__validate_subnet.assert_called_once_with( context, fake_sn_id, az=fake_az) def test_subnet_create_subnet_db_error(self): fake_sn_id = 'fake_sn_id' req = fakes.HTTPRequest.blank('/subnets', version="2.51") context = req.environ['manila.context'] body = { 'share-network-subnet': self._setup_create_test_request_body() } mock_sn_get = self.mock_object(db_api, 'share_network_get') mock__validate_subnet = self.mock_object( self.controller, '_validate_subnet') mock_db_subnet_create = self.mock_object( db_api, 'share_network_subnet_create', mock.Mock(side_effect=db_exception.DBError())) expected_data = copy.deepcopy(body['share-network-subnet']) expected_data['availability_zone_id'] = fake_az['id'] expected_data.pop('availability_zone') 
self.assertRaises(exc.HTTPInternalServerError, self.controller.create, req, fake_sn_id, body) mock_sn_get.assert_called_once_with(context, fake_sn_id) self.mock_az_get.assert_called_once_with(context, fake_az['name']) mock__validate_subnet.assert_called_once_with( context, fake_sn_id, az=fake_az) mock_db_subnet_create.assert_called_once_with( context, expected_data ) def test_show_subnet(self): subnet = db_utils.create_share_network_subnet( id='fake_sns_2', share_network_id=self.share_network['id']) expected_result = { 'share_network_subnet': { "created_at": subnet['created_at'], "id": subnet['id'], "share_network_id": subnet['share_network_id'], "share_network_name": self.share_network['name'], "availability_zone": subnet['availability_zone'], "segmentation_id": subnet['segmentation_id'], "neutron_subnet_id": subnet['neutron_subnet_id'], "updated_at": subnet['updated_at'], "neutron_net_id": subnet['neutron_net_id'], "ip_version": subnet['ip_version'], "cidr": subnet['cidr'], "network_type": subnet['network_type'], "gateway": subnet['gateway'], "mtu": subnet['mtu'], } } req = fakes.HTTPRequest.blank('/subnets/%s' % subnet['id'], version="2.51") context = req.environ['manila.context'] mock_sn_get = self.mock_object( db_api, 'share_network_get', mock.Mock( return_value=self.share_network)) mock_sns_get = self.mock_object( db_api, 'share_network_subnet_get', mock.Mock( return_value=subnet)) result = self.controller.show(req, self.share_network['id'], subnet['id']) self.assertEqual(expected_result, result) mock_sn_get.assert_called_once_with(context, self.share_network['id']) mock_sns_get.assert_called_once_with(context, subnet['id']) @ddt.data( (mock.Mock(side_effect=exception.ShareNetworkNotFound( share_network_id='fake_net_id')), None), (mock.Mock(), mock.Mock( side_effect=exception.ShareNetworkSubnetNotFound( share_network_subnet_id='fake_subnet_id')))) @ddt.unpack def test_show_subnet_not_found(self, sn_get_side_effect, sns_get_side_effect): req = fakes.HTTPRequest.blank('/subnets/%s' % self.subnet['id'], version="2.51") context = req.environ['manila.context'] mock_sn_get = self.mock_object( db_api, 'share_network_get', sn_get_side_effect) mock_sns_get = self.mock_object( db_api, 'share_network_subnet_get', sns_get_side_effect) self.assertRaises(exc.HTTPNotFound, self.controller.show, req, self.share_network['id'], self.subnet['id']) mock_sn_get.assert_called_once_with(context, self.share_network['id']) if sns_get_side_effect: mock_sns_get.assert_called_once_with(context, self.subnet['id']) def test_list_subnet(self): share_network_id = 'fake_id' subnet = db_utils.create_share_network_subnet( share_network_id=share_network_id, id='fake_id') fake_sn = db_utils.create_share_network(id=share_network_id) expected_result = { 'share_network_subnets': [{ "created_at": subnet['created_at'], "id": subnet['id'], "share_network_id": subnet['id'], "share_network_name": fake_sn["name"], "availability_zone": subnet['availability_zone'], "segmentation_id": subnet['segmentation_id'], "neutron_subnet_id": subnet['neutron_subnet_id'], "updated_at": subnet['updated_at'], "neutron_net_id": subnet['neutron_net_id'], "ip_version": subnet['ip_version'], "cidr": subnet['cidr'], "network_type": subnet['network_type'], "gateway": subnet['gateway'], "mtu": subnet['mtu'], }] } req = fakes.HTTPRequest.blank('/subnets/', version="2.51") context = req.environ['manila.context'] mock_sn_get = self.mock_object( db_api, 'share_network_get', mock.Mock( return_value=fake_sn)) result = self.controller.index(req, 
self.share_network['id']) self.assertEqual(expected_result, result) mock_sn_get.assert_called_once_with(context, self.share_network['id']) def test_list_subnet_share_network_not_found(self): req = fakes.HTTPRequest.blank('/subnets/', version="2.51") context = req.environ['manila.context'] mock_sn_get = self.mock_object( db_api, 'share_network_get', mock.Mock( side_effect=exception.ShareNetworkNotFound( share_network_id=self.share_network['id']))) self.assertRaises(exc.HTTPNotFound, self.controller.index, req, self.share_network['id']) mock_sn_get.assert_called_once_with(context, self.share_network['id']) manila-10.0.0/manila/tests/api/v2/test_share_snapshots.py0000664000175000017500000012334213656750227023415 0ustar zuulzuul00000000000000# Copyright 2015 EMC Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from oslo_serialization import jsonutils import six import webob from manila.api.openstack import api_version_request as api_version from manila.api.v2 import share_snapshots from manila.common import constants from manila import context from manila import db from manila import exception from manila import policy from manila.share import api as share_api from manila import test from manila.tests.api.contrib import stubs from manila.tests.api import fakes from manila.tests import db_utils from manila.tests import fake_share from manila import utils MIN_MANAGE_SNAPSHOT_API_VERSION = '2.12' def get_fake_manage_body(share_id=None, provider_location=None, driver_options=None, **kwargs): fake_snapshot = { 'share_id': share_id, 'provider_location': provider_location, 'driver_options': driver_options, 'user_id': 'fake_user_id', 'project_id': 'fake_project_id', } fake_snapshot.update(kwargs) return {'snapshot': fake_snapshot} @ddt.ddt class ShareSnapshotAPITest(test.TestCase): """Share Snapshot API Test.""" def setUp(self): super(ShareSnapshotAPITest, self).setUp() self.controller = share_snapshots.ShareSnapshotsController() self.mock_object(share_api.API, 'get', stubs.stub_share_get) self.mock_object(share_api.API, 'get_all_snapshots', stubs.stub_snapshot_get_all_by_project) self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get) self.mock_object(share_api.API, 'snapshot_update', stubs.stub_snapshot_update) self.snp_example = { 'share_id': 100, 'size': 12, 'force': False, 'display_name': 'updated_snapshot_name', 'display_description': 'updated_snapshot_description', } @ddt.data('1.0', '2.16', '2.17') def test_snapshot_create(self, version): self.mock_object(share_api.API, 'create_snapshot', stubs.stub_snapshot_create) body = { 'snapshot': { 'share_id': 'fakeshareid', 'force': False, 'name': 'displaysnapname', 'description': 'displaysnapdesc', } } req = fakes.HTTPRequest.blank('/snapshots', version=version) res_dict = self.controller.create(req, body) expected = fake_share.expected_snapshot(version=version, id=200) self.assertEqual(expected, res_dict) @ddt.data(0, False) def test_snapshot_create_no_support(self, 
snapshot_support): self.mock_object(share_api.API, 'create_snapshot') self.mock_object( share_api.API, 'get', mock.Mock(return_value={'snapshot_support': snapshot_support})) body = { 'snapshot': { 'share_id': 100, 'force': False, 'name': 'fake_share_name', 'description': 'fake_share_description', } } req = fakes.HTTPRequest.blank('/snapshots') self.assertRaises( webob.exc.HTTPUnprocessableEntity, self.controller.create, req, body) self.assertFalse(share_api.API.create_snapshot.called) def test_snapshot_create_no_body(self): body = {} req = fakes.HTTPRequest.blank('/snapshots') self.assertRaises(webob.exc.HTTPUnprocessableEntity, self.controller.create, req, body) def test_snapshot_delete(self): self.mock_object(share_api.API, 'delete_snapshot', stubs.stub_snapshot_delete) req = fakes.HTTPRequest.blank('/snapshots/200') resp = self.controller.delete(req, 200) self.assertEqual(202, resp.status_int) def test_snapshot_delete_nofound(self): self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get_notfound) req = fakes.HTTPRequest.blank('/snapshots/200') self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, 200) @ddt.data('2.0', '2.16', '2.17') def test_snapshot_show(self, version): req = fakes.HTTPRequest.blank('/snapshots/200', version=version) expected = fake_share.expected_snapshot(version=version, id=200) res_dict = self.controller.show(req, 200) self.assertEqual(expected, res_dict) def test_snapshot_show_nofound(self): self.mock_object(share_api.API, 'get_snapshot', stubs.stub_snapshot_get_notfound) req = fakes.HTTPRequest.blank('/snapshots/200') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, '200') def test_snapshot_list_summary(self): self.mock_object(share_api.API, 'get_all_snapshots', stubs.stub_snapshot_get_all_by_project) req = fakes.HTTPRequest.blank('/snapshots') res_dict = self.controller.index(req) expected = { 'snapshots': [ { 'name': 'displaysnapname', 'id': 2, 'links': [ { 'href': 'http://localhost/v1/fake/' 'snapshots/2', 'rel': 'self' }, { 'href': 'http://localhost/fake/snapshots/2', 'rel': 'bookmark' } ], } ] } self.assertEqual(expected, res_dict) def _snapshot_list_summary_with_search_opts(self, version, use_admin_context): search_opts = fake_share.search_opts() if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.36')): search_opts.pop('name') search_opts['display_name~'] = 'fake_name' # fake_key should be filtered for non-admin url = '/snapshots?fake_key=fake_value' for k, v in search_opts.items(): url = url + '&' + k + '=' + v req = fakes.HTTPRequest.blank( url, use_admin_context=use_admin_context, version=version) snapshots = [ {'id': 'id1', 'display_name': 'n1', 'status': 'fake_status', }, {'id': 'id2', 'display_name': 'n2', 'status': 'fake_status', }, {'id': 'id3', 'display_name': 'n3', 'status': 'fake_status', }, ] self.mock_object(share_api.API, 'get_all_snapshots', mock.Mock(return_value=snapshots)) result = self.controller.index(req) search_opts_expected = { 'status': search_opts['status'], 'share_id': search_opts['share_id'], } if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.36')): search_opts_expected['display_name~'] = 'fake_name' else: search_opts_expected['display_name'] = search_opts['name'] if use_admin_context: search_opts_expected.update({'fake_key': 'fake_value'}) share_api.API.get_all_snapshots.assert_called_once_with( req.environ['manila.context'], sort_key=search_opts['sort_key'], sort_dir=search_opts['sort_dir'], 
search_opts=search_opts_expected, ) self.assertEqual(1, len(result['snapshots'])) self.assertEqual(snapshots[1]['id'], result['snapshots'][0]['id']) self.assertEqual( snapshots[1]['display_name'], result['snapshots'][0]['name']) @ddt.data({'version': '2.35', 'use_admin_context': True}, {'version': '2.36', 'use_admin_context': True}, {'version': '2.35', 'use_admin_context': False}, {'version': '2.36', 'use_admin_context': False}) @ddt.unpack def test_snapshot_list_summary_with_search_opts(self, version, use_admin_context): self._snapshot_list_summary_with_search_opts( version=version, use_admin_context=use_admin_context) def _snapshot_list_detail_with_search_opts(self, use_admin_context): search_opts = fake_share.search_opts() # fake_key should be filtered for non-admin url = '/shares/detail?fake_key=fake_value' for k, v in search_opts.items(): url = url + '&' + k + '=' + v req = fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context) snapshots = [ { 'id': 'id1', 'display_name': 'n1', 'status': 'fake_status', 'aggregate_status': 'fake_status', }, { 'id': 'id2', 'display_name': 'n2', 'status': 'someotherstatus', 'aggregate_status': 'fake_status', 'share_id': 'fake_share_id', }, { 'id': 'id3', 'display_name': 'n3', 'status': 'fake_status', 'aggregate_status': 'fake_status', }, ] self.mock_object(share_api.API, 'get_all_snapshots', mock.Mock(return_value=snapshots)) result = self.controller.detail(req) search_opts_expected = { 'display_name': search_opts['name'], 'status': search_opts['status'], 'share_id': search_opts['share_id'], } if use_admin_context: search_opts_expected.update({'fake_key': 'fake_value'}) share_api.API.get_all_snapshots.assert_called_once_with( req.environ['manila.context'], sort_key=search_opts['sort_key'], sort_dir=search_opts['sort_dir'], search_opts=search_opts_expected, ) self.assertEqual(1, len(result['snapshots'])) self.assertEqual(snapshots[1]['id'], result['snapshots'][0]['id']) self.assertEqual( snapshots[1]['display_name'], result['snapshots'][0]['name']) self.assertEqual( snapshots[1]['aggregate_status'], result['snapshots'][0]['status']) self.assertEqual( snapshots[1]['share_id'], result['snapshots'][0]['share_id']) def test_snapshot_list_detail_with_search_opts_by_non_admin(self): self._snapshot_list_detail_with_search_opts(use_admin_context=False) def test_snapshot_list_detail_with_search_opts_by_admin(self): self._snapshot_list_detail_with_search_opts(use_admin_context=True) @ddt.data('2.0', '2.16', '2.17') def test_snapshot_list_detail(self, version): env = {'QUERY_STRING': 'name=Share+Test+Name'} req = fakes.HTTPRequest.blank('/snapshots/detail', environ=env, version=version) expected_s = fake_share.expected_snapshot(version=version, id=2) expected = {'snapshots': [expected_s['snapshot']]} res_dict = self.controller.detail(req) self.assertEqual(expected, res_dict) @ddt.data('2.0', '2.16', '2.17') def test_snapshot_updates_display_name_and_description(self, version): snp = self.snp_example body = {"snapshot": snp} req = fakes.HTTPRequest.blank('/snapshot/1', version=version) res_dict = self.controller.update(req, 1, body) self.assertEqual(snp["display_name"], res_dict['snapshot']["name"]) if (api_version.APIVersionRequest(version) <= api_version.APIVersionRequest('2.16')): self.assertNotIn('user_id', res_dict['snapshot']) self.assertNotIn('project_id', res_dict['snapshot']) else: self.assertIn('user_id', res_dict['snapshot']) self.assertIn('project_id', res_dict['snapshot']) def test_share_update_invalid_key(self): snp = self.snp_example body = 
{"snapshot": snp} req = fakes.HTTPRequest.blank('/snapshot/1') res_dict = self.controller.update(req, 1, body) self.assertNotEqual(snp["size"], res_dict['snapshot']["size"]) def test_access_list(self): share = db_utils.create_share(mount_snapshot_support=True) snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) expected = [] self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.mock_object(share_api.API, 'snapshot_access_get_all', mock.Mock(return_value=expected)) id = 'fake_snap_id' req = fakes.HTTPRequest.blank('/snapshots/%s/action' % id, version='2.32') actual = self.controller.access_list(req, id) self.assertEqual(expected, actual['snapshot_access_list']) @ddt.data(('1.1.1.1', '2.32'), ('1.1.1.1', '2.38'), ('1001::1001', '2.38')) @ddt.unpack def test_allow_access(self, ip_address, version): share = db_utils.create_share(mount_snapshot_support=True) snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) access = { 'id': 'fake_id', 'access_type': 'ip', 'access_to': ip_address, 'state': 'new', } get = self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) get_snapshot = self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) allow_access = self.mock_object(share_api.API, 'snapshot_allow_access', mock.Mock(return_value=access)) body = {'allow_access': access} req = fakes.HTTPRequest.blank('/snapshots/%s/action' % snapshot['id'], version=version) actual = self.controller.allow_access(req, snapshot['id'], body) self.assertEqual(access, actual['snapshot_access']) get.assert_called_once_with(utils.IsAMatcher(context.RequestContext), share['id']) get_snapshot.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id']) allow_access.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot, access['access_type'], access['access_to']) def test_allow_access_data_not_found_exception(self): share = db_utils.create_share(mount_snapshot_support=True) snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) req = fakes.HTTPRequest.blank('/snapshots/%s/action' % snapshot['id'], version='2.32') body = {} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.allow_access, req, snapshot['id'], body) def test_allow_access_exists_exception(self): share = db_utils.create_share(mount_snapshot_support=True) snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) req = fakes.HTTPRequest.blank('/snapshots/%s/action' % snapshot['id'], version='2.32') access = { 'id': 'fake_id', 'access_type': 'ip', 'access_to': '1.1.1.1', 'state': 'new', } msg = "Share snapshot access exists." 
get = self.mock_object(share_api.API, 'get', mock.Mock( return_value=share)) get_snapshot = self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) allow_access = self.mock_object( share_api.API, 'snapshot_allow_access', mock.Mock( side_effect=exception.ShareSnapshotAccessExists(msg))) body = {'allow_access': access} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.allow_access, req, snapshot['id'], body) get.assert_called_once_with(utils.IsAMatcher(context.RequestContext), share['id']) get_snapshot.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id']) allow_access.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot, access['access_type'], access['access_to']) def test_allow_access_share_without_mount_snap_support(self): share = db_utils.create_share(mount_snapshot_support=False) snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) access = { 'id': 'fake_id', 'access_type': 'ip', 'access_to': '1.1.1.1', 'state': 'new', } get_snapshot = self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) get = self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) body = {'allow_access': access} req = fakes.HTTPRequest.blank('/snapshots/%s/action' % snapshot['id'], version='2.32') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.allow_access, req, snapshot['id'], body) get.assert_called_once_with(utils.IsAMatcher(context.RequestContext), share['id']) get_snapshot.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id']) def test_allow_access_empty_parameters(self): share = db_utils.create_share(mount_snapshot_support=True) snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) access = {'id': 'fake_id', 'access_type': '', 'access_to': ''} body = {'allow_access': access} req = fakes.HTTPRequest.blank('/snapshots/%s/action' % snapshot['id'], version='2.32') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.allow_access, req, snapshot['id'], body) def test_deny_access(self): share = db_utils.create_share(mount_snapshot_support=True) snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) access = db_utils.create_snapshot_access( share_snapshot_id=snapshot['id']) get = self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) get_snapshot = self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) access_get = self.mock_object(share_api.API, 'snapshot_access_get', mock.Mock(return_value=access)) deny_access = self.mock_object(share_api.API, 'snapshot_deny_access') body = {'deny_access': {'access_id': access.id}} req = fakes.HTTPRequest.blank('/snapshots/%s/action' % snapshot['id'], version='2.32') resp = self.controller.deny_access(req, snapshot['id'], body) self.assertEqual(202, resp.status_int) get.assert_called_once_with(utils.IsAMatcher(context.RequestContext), share['id']) get_snapshot.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id']) access_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), body['deny_access']['access_id']) deny_access.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot, access) def test_deny_access_data_not_found_exception(self): share = db_utils.create_share(mount_snapshot_support=True) snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) req 
= fakes.HTTPRequest.blank('/snapshots/%s/action' % snapshot['id'], version='2.32') body = {} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.deny_access, req, snapshot['id'], body) def test_deny_access_access_rule_not_found(self): share = db_utils.create_share(mount_snapshot_support=True) snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) access = db_utils.create_snapshot_access( share_snapshot_id=snapshot['id']) wrong_access = { 'access_type': 'fake_type', 'access_to': 'fake_IP', 'share_snapshot_id': 'fake_id' } get = self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) get_snapshot = self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) access_get = self.mock_object(share_api.API, 'snapshot_access_get', mock.Mock(return_value=wrong_access)) body = {'deny_access': {'access_id': access.id}} req = fakes.HTTPRequest.blank('/snapshots/%s/action' % snapshot['id'], version='2.32') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.deny_access, req, snapshot['id'], body) get.assert_called_once_with(utils.IsAMatcher(context.RequestContext), share['id']) get_snapshot.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id']) access_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), body['deny_access']['access_id']) @ddt.ddt class ShareSnapshotAdminActionsAPITest(test.TestCase): def setUp(self): super(ShareSnapshotAdminActionsAPITest, self).setUp() self.controller = share_snapshots.ShareSnapshotsController() self.flags(transport_url='rabbit://fake:fake@mqhost:5672') self.admin_context = context.RequestContext('admin', 'fake', True) self.member_context = context.RequestContext('fake', 'fake') self.resource_name = self.controller.resource_name self.manage_request = fakes.HTTPRequest.blank( '/snapshots/manage', use_admin_context=True, version=MIN_MANAGE_SNAPSHOT_API_VERSION) self.snapshot_id = 'fake' self.unmanage_request = fakes.HTTPRequest.blank( '/snapshots/%s/unmanage' % self.snapshot_id, use_admin_context=True, version=MIN_MANAGE_SNAPSHOT_API_VERSION) def _get_context(self, role): return getattr(self, '%s_context' % role) def _setup_snapshot_data(self, snapshot=None, version='2.7'): if snapshot is None: share = db_utils.create_share() snapshot = db_utils.create_snapshot( status=constants.STATUS_AVAILABLE, share_id=share['id']) path = '/v2/fake/snapshots/%s/action' % snapshot['id'] req = fakes.HTTPRequest.blank(path, script_name=path, version=version) return snapshot, req def _reset_status(self, ctxt, model, req, db_access_method, valid_code, valid_status=None, body=None, version='2.7'): if float(version) > 2.6: action_name = 'reset_status' else: action_name = 'os-reset_status' if body is None: body = {action_name: {'status': constants.STATUS_ERROR}} req.method = 'POST' req.headers['content-type'] = 'application/json' req.headers['X-Openstack-Manila-Api-Version'] = version req.body = six.b(jsonutils.dumps(body)) req.environ['manila.context'] = ctxt resp = req.get_response(fakes.app()) # validate response code and model status self.assertEqual(valid_code, resp.status_int) actual_model = db_access_method(ctxt, model['id']) self.assertEqual(valid_status, actual_model['status']) @ddt.data(*fakes.fixture_reset_status_with_different_roles) @ddt.unpack def test_snapshot_reset_status_with_different_roles(self, role, valid_code, valid_status, version): ctxt = self._get_context(role) snapshot, req = self._setup_snapshot_data(version=version) 
        self._reset_status(ctxt, snapshot, req, db.share_snapshot_get,
                           valid_code, valid_status, version=version)

    @ddt.data(
        ({'os-reset_status': {'x-status': 'bad'}}, '2.6'),
        ({'reset_status': {'x-status': 'bad'}}, '2.7'),
        ({'os-reset_status': {'status': 'invalid'}}, '2.6'),
        ({'reset_status': {'status': 'invalid'}}, '2.7'),
    )
    @ddt.unpack
    def test_snapshot_invalid_reset_status_body(self, body, version):
        snapshot, req = self._setup_snapshot_data(version=version)

        self._reset_status(self.admin_context, snapshot, req,
                           db.share_snapshot_get, 400,
                           constants.STATUS_AVAILABLE, body,
                           version=version)

    def _force_delete(self, ctxt, model, req, db_access_method, valid_code,
                      version='2.7'):
        if float(version) > 2.6:
            action_name = 'force_delete'
        else:
            action_name = 'os-force_delete'
        req.method = 'POST'
        req.headers['content-type'] = 'application/json'
        req.headers['X-Openstack-Manila-Api-Version'] = version
        req.body = six.b(jsonutils.dumps({action_name: {}}))
        req.environ['manila.context'] = ctxt

        resp = req.get_response(fakes.app())

        # Validate response
        self.assertEqual(valid_code, resp.status_int)

    @ddt.data(*fakes.fixture_force_delete_with_different_roles)
    @ddt.unpack
    def test_snapshot_force_delete_with_different_roles(self, role, resp_code,
                                                        version):
        ctxt = self._get_context(role)
        snapshot, req = self._setup_snapshot_data(version=version)

        self._force_delete(ctxt, snapshot, req, db.share_snapshot_get,
                           resp_code, version=version)

    def test_snapshot_force_delete_missing(self):
        ctxt = self._get_context('admin')
        snapshot, req = self._setup_snapshot_data(snapshot={'id': 'fake'})

        self._force_delete(ctxt, snapshot, req, db.share_snapshot_get, 404)

    @ddt.data(
        {},
        {'snapshots': {}},
        {'snapshot': get_fake_manage_body(share_id='xxxxxxxx')},
        {'snapshot': get_fake_manage_body(provider_location='xxxxxxxx')}
    )
    def test_snapshot_manage_invalid_body(self, body):
        self.mock_policy_check = self.mock_object(
            policy, 'check_policy', mock.Mock(return_value=True))

        self.assertRaises(webob.exc.HTTPUnprocessableEntity,
                          self.controller.manage,
                          self.manage_request,
                          body)
        self.mock_policy_check.assert_called_once_with(
            self.manage_request.environ['manila.context'],
            self.resource_name, 'manage_snapshot')

    @ddt.data(
        {'version': '2.12',
         'data': get_fake_manage_body(name='foo', display_description='bar')},
        {'version': '2.12',
         'data': get_fake_manage_body(display_name='foo', description='bar')},
        {'version': '2.17',
         'data': get_fake_manage_body(display_name='foo', description='bar')},
        {'version': '2.17',
         'data': get_fake_manage_body(name='foo', display_description='bar')},
    )
    @ddt.unpack
    def test_snapshot_manage(self, version, data):
        self.mock_policy_check = self.mock_object(
            policy, 'check_policy', mock.Mock(return_value=True))
        data['snapshot']['share_id'] = 'fake'
        data['snapshot']['provider_location'] = 'fake_volume_snapshot_id'
        data['snapshot']['driver_options'] = {}
        return_snapshot = fake_share.fake_snapshot(
            create_instance=True, id='fake_snap',
            provider_location='fake_volume_snapshot_id')
        self.mock_object(
            share_api.API, 'manage_snapshot', mock.Mock(
                return_value=return_snapshot))
        share_snapshot = {
            'share_id': 'fake',
            'provider_location': 'fake_volume_snapshot_id',
            'display_name': 'foo',
            'display_description': 'bar',
        }
        req = fakes.HTTPRequest.blank(
            '/snapshots/manage', use_admin_context=True, version=version)

        actual_result = self.controller.manage(req, data)
        actual_snapshot = actual_result['snapshot']

        share_api.API.manage_snapshot.assert_called_once_with(
            mock.ANY, share_snapshot, data['snapshot']['driver_options'])
self.assertEqual(return_snapshot['id'], actual_result['snapshot']['id']) self.assertEqual('fake_volume_snapshot_id', actual_result['snapshot']['provider_location']) if (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.17')): self.assertEqual(return_snapshot['user_id'], actual_snapshot['user_id']) self.assertEqual(return_snapshot['project_id'], actual_snapshot['project_id']) else: self.assertNotIn('user_id', actual_snapshot) self.assertNotIn('project_id', actual_snapshot) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'manage_snapshot') @ddt.data(exception.ShareNotFound(share_id='fake'), exception.ShareSnapshotNotFound(snapshot_id='fake'), exception.ManageInvalidShareSnapshot(reason='error'), exception.InvalidShare(reason='error')) def test_manage_exception(self, exception_type): self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) body = get_fake_manage_body( share_id='fake', provider_location='fake_volume_snapshot_id', driver_options={}) self.mock_object( share_api.API, 'manage_snapshot', mock.Mock( side_effect=exception_type)) http_ex = webob.exc.HTTPNotFound if (isinstance(exception_type, exception.ManageInvalidShareSnapshot) or isinstance(exception_type, exception.InvalidShare)): http_ex = webob.exc.HTTPConflict self.assertRaises(http_ex, self.controller.manage, self.manage_request, body) self.mock_policy_check.assert_called_once_with( self.manage_request.environ['manila.context'], self.resource_name, 'manage_snapshot') @ddt.data('1.0', '2.6', '2.11') def test_manage_version_not_found(self, version): body = get_fake_manage_body( share_id='fake', provider_location='fake_volume_snapshot_id', driver_options={}) fake_req = fakes.HTTPRequest.blank( '/snapshots/manage', use_admin_context=True, version=version) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.manage, fake_req, body) def test_snapshot__unmanage(self): body = {} snapshot = {'status': constants.STATUS_AVAILABLE, 'id': 'bar_id', 'share_id': 'bar_id'} fake_req = fakes.HTTPRequest.blank( '/snapshots/unmanage', use_admin_context=True, version='2.49') mock_unmanage = self.mock_object(self.controller, '_unmanage') self.controller.unmanage(fake_req, snapshot['id'], body) mock_unmanage.assert_called_once_with(fake_req, snapshot['id'], body, allow_dhss_true=True) def test_snapshot_unmanage_share_server(self): self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) share = {'status': constants.STATUS_AVAILABLE, 'id': 'bar_id', 'share_server_id': 'fake_server_id'} self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) snapshot = {'status': constants.STATUS_AVAILABLE, 'id': 'foo_id', 'share_id': 'bar_id'} self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.assertRaises(webob.exc.HTTPForbidden, self.controller.unmanage, self.unmanage_request, snapshot['id']) self.controller.share_api.get_snapshot.assert_called_once_with( self.unmanage_request.environ['manila.context'], snapshot['id']) self.controller.share_api.get.assert_called_once_with( self.unmanage_request.environ['manila.context'], share['id']) self.mock_policy_check.assert_called_once_with( self.unmanage_request.environ['manila.context'], self.resource_name, 'unmanage_snapshot') def test_snapshot_unmanage_replicated_snapshot(self): self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) share = {'status': 
constants.STATUS_AVAILABLE, 'id': 'bar_id', 'has_replicas': True} self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) snapshot = {'status': constants.STATUS_AVAILABLE, 'id': 'foo_id', 'share_id': 'bar_id'} self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.assertRaises(webob.exc.HTTPConflict, self.controller.unmanage, self.unmanage_request, snapshot['id']) self.controller.share_api.get_snapshot.assert_called_once_with( self.unmanage_request.environ['manila.context'], snapshot['id']) self.controller.share_api.get.assert_called_once_with( self.unmanage_request.environ['manila.context'], share['id']) self.mock_policy_check.assert_called_once_with( self.unmanage_request.environ['manila.context'], self.resource_name, 'unmanage_snapshot') @ddt.data(*constants.TRANSITIONAL_STATUSES) def test_snapshot_unmanage_with_transitional_state(self, status): self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) share = {'status': constants.STATUS_AVAILABLE, 'id': 'bar_id'} self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) snapshot = {'status': status, 'id': 'foo_id', 'share_id': 'bar_id'} self.mock_object( self.controller.share_api, 'get_snapshot', mock.Mock(return_value=snapshot)) self.assertRaises( webob.exc.HTTPForbidden, self.controller.unmanage, self.unmanage_request, snapshot['id']) self.controller.share_api.get_snapshot.assert_called_once_with( self.unmanage_request.environ['manila.context'], snapshot['id']) self.controller.share_api.get.assert_called_once_with( self.unmanage_request.environ['manila.context'], share['id']) self.mock_policy_check.assert_called_once_with( self.unmanage_request.environ['manila.context'], self.resource_name, 'unmanage_snapshot') def test_snapshot_unmanage(self): self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) share = {'status': constants.STATUS_AVAILABLE, 'id': 'bar_id', 'host': 'fake_host'} self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) snapshot = {'status': constants.STATUS_AVAILABLE, 'id': 'foo_id', 'share_id': 'bar_id'} self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.mock_object(share_api.API, 'unmanage_snapshot', mock.Mock()) actual_result = self.controller.unmanage(self.unmanage_request, snapshot['id']) self.assertEqual(202, actual_result.status_int) self.controller.share_api.get_snapshot.assert_called_once_with( self.unmanage_request.environ['manila.context'], snapshot['id']) share_api.API.unmanage_snapshot.assert_called_once_with( mock.ANY, snapshot, 'fake_host') self.mock_policy_check.assert_called_once_with( self.unmanage_request.environ['manila.context'], self.resource_name, 'unmanage_snapshot') def test_unmanage_share_not_found(self): self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) self.mock_object( share_api.API, 'get', mock.Mock( side_effect=exception.ShareNotFound(share_id='fake'))) snapshot = {'status': constants.STATUS_AVAILABLE, 'id': 'foo_id', 'share_id': 'bar_id'} self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.mock_object(share_api.API, 'unmanage_snapshot', mock.Mock()) self.assertRaises(webob.exc.HTTPNotFound, self.controller.unmanage, self.unmanage_request, 'foo_id') self.mock_policy_check.assert_called_once_with( self.unmanage_request.environ['manila.context'], self.resource_name, 'unmanage_snapshot') def 
test_unmanage_snapshot_not_found(self): self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) share = {'status': constants.STATUS_AVAILABLE, 'id': 'bar_id'} self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) self.mock_object( share_api.API, 'get_snapshot', mock.Mock( side_effect=exception.ShareSnapshotNotFound( snapshot_id='foo_id'))) self.mock_object(share_api.API, 'unmanage_snapshot', mock.Mock()) self.assertRaises(webob.exc.HTTPNotFound, self.controller.unmanage, self.unmanage_request, 'foo_id') self.mock_policy_check.assert_called_once_with( self.unmanage_request.environ['manila.context'], self.resource_name, 'unmanage_snapshot') @ddt.data('1.0', '2.6', '2.11') def test_unmanage_version_not_found(self, version): snapshot_id = 'fake' fake_req = fakes.HTTPRequest.blank( '/snapshots/%s/unmanage' % snapshot_id, use_admin_context=True, version=version) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.unmanage, fake_req, 'fake') def test_snapshot_unmanage_dhss_true_with_share_server(self): self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) share = {'status': constants.STATUS_AVAILABLE, 'id': 'bar_id', 'host': 'fake_host', 'share_server_id': 'fake'} mock_get = self.mock_object(share_api.API, 'get', mock.Mock(return_value=share)) snapshot = {'status': constants.STATUS_AVAILABLE, 'id': 'bar_id', 'share_id': 'bar_id'} self.mock_object(share_api.API, 'get_snapshot', mock.Mock(return_value=snapshot)) self.mock_object(share_api.API, 'unmanage_snapshot') actual_result = self.controller._unmanage(self.unmanage_request, snapshot['id'], allow_dhss_true=True) self.assertEqual(202, actual_result.status_int) self.controller.share_api.get_snapshot.assert_called_once_with( self.unmanage_request.environ['manila.context'], snapshot['id']) share_api.API.unmanage_snapshot.assert_called_once_with( mock.ANY, snapshot, 'fake_host') mock_get.assert_called_once_with( self.unmanage_request.environ['manila.context'], snapshot['id'] ) self.mock_policy_check.assert_called_once_with( self.unmanage_request.environ['manila.context'], self.resource_name, 'unmanage_snapshot') manila-10.0.0/manila/tests/api/v2/test_share_instance_export_locations.py0000664000175000017500000001454113656750227026653 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
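# Unit tests for the share instance export locations API
# (manila.api.v2.share_instance_export_locations). The cases below cover
# listing and showing export locations for a share instance, the
# admin-only fields returned at microversions 2.9/2.13, the 'preferred'
# flag exposed starting with microversion 2.14, and the errors raised for
# missing share instances or unsupported microversions.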
from unittest import mock import ddt from webob import exc from manila.api.v2 import share_instance_export_locations as export_locations from manila import context from manila import db from manila import exception from manila import policy from manila import test from manila.tests.api import fakes from manila.tests import db_utils @ddt.ddt class ShareInstanceExportLocationsAPITest(test.TestCase): def _get_request(self, version="2.9", use_admin_context=True): req = fakes.HTTPRequest.blank( '/v2/share_instances/%s/export_locations' % self.share_instance_id, version=version, use_admin_context=use_admin_context) return req def setUp(self): super(ShareInstanceExportLocationsAPITest, self).setUp() self.controller = ( export_locations.ShareInstanceExportLocationController()) self.resource_name = self.controller.resource_name self.ctxt = { 'admin': context.RequestContext('admin', 'fake', True), 'user': context.RequestContext('fake', 'fake'), } self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) self.share = db_utils.create_share() self.share_instance_id = self.share.instance.id self.req = self._get_request() paths = ['fake1/1/', 'fake2/2', 'fake3/3'] db.share_export_locations_update( self.ctxt['admin'], self.share_instance_id, paths, False) @ddt.data({'role': 'admin', 'version': '2.9'}, {'role': 'user', 'version': '2.9'}, {'role': 'admin', 'version': '2.13'}, {'role': 'user', 'version': '2.13'}) @ddt.unpack def test_list_and_show(self, role, version): summary_keys = ['id', 'path'] admin_summary_keys = summary_keys + [ 'share_instance_id', 'is_admin_only'] detail_keys = summary_keys + ['created_at', 'updated_at'] admin_detail_keys = admin_summary_keys + ['created_at', 'updated_at'] self._test_list_and_show(role, version, summary_keys, detail_keys, admin_summary_keys, admin_detail_keys) @ddt.data('admin', 'user') def test_list_and_show_with_preferred_flag(self, role): summary_keys = ['id', 'path', 'preferred'] admin_summary_keys = summary_keys + [ 'share_instance_id', 'is_admin_only'] detail_keys = summary_keys + ['created_at', 'updated_at'] admin_detail_keys = admin_summary_keys + ['created_at', 'updated_at'] self._test_list_and_show(role, '2.14', summary_keys, detail_keys, admin_summary_keys, admin_detail_keys) def _test_list_and_show(self, role, version, summary_keys, detail_keys, admin_summary_keys, admin_detail_keys): req = self._get_request(version=version, use_admin_context=(role == 'admin')) index_result = self.controller.index(req, self.share_instance_id) self.assertIn('export_locations', index_result) self.assertEqual(1, len(index_result)) self.assertEqual(3, len(index_result['export_locations'])) for index_el in index_result['export_locations']: self.assertIn('id', index_el) show_result = self.controller.show( req, self.share_instance_id, index_el['id']) self.assertIn('export_location', show_result) self.assertEqual(1, len(show_result)) show_el = show_result['export_location'] # Check summary keys in index result & detail keys in show result if role == 'admin': self.assertEqual(len(admin_summary_keys), len(index_el)) for key in admin_summary_keys: self.assertIn(key, index_el) self.assertEqual(len(admin_detail_keys), len(show_el)) for key in admin_detail_keys: self.assertIn(key, show_el) else: self.assertEqual(len(summary_keys), len(index_el)) for key in summary_keys: self.assertIn(key, index_el) self.assertEqual(len(detail_keys), len(show_el)) for key in detail_keys: self.assertIn(key, show_el) # Ensure keys common to index & show results have 
matching values for key in summary_keys: self.assertEqual(index_el[key], show_el[key]) def test_list_export_locations_share_instance_not_found(self): self.assertRaises( exc.HTTPNotFound, self.controller.index, self.req, 'inexistent_share_instance_id', ) def test_show_export_location_share_instance_not_found(self): index_result = self.controller.index(self.req, self.share_instance_id) el_id = index_result['export_locations'][0]['id'] self.assertRaises( exc.HTTPNotFound, self.controller.show, self.req, 'inexistent_share_id', el_id, ) @ddt.data('1.0', '2.0', '2.8') def test_list_with_unsupported_version(self, version): self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.index, self._get_request(version), self.share_instance_id, ) @ddt.data('1.0', '2.0', '2.8') def test_show_with_unsupported_version(self, version): index_result = self.controller.index(self.req, self.share_instance_id) self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.show, self._get_request(version), self.share_instance_id, index_result['export_locations'][0]['id'] ) manila-10.0.0/manila/tests/api/v2/test_quota_class_sets.py0000664000175000017500000001717413656750227023572 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
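# Unit tests for the quota class sets API. The cases below exercise both
# QuotaClassSetsControllerLegacy ('os-quota-class-sets', microversions up
# to 2.6) and QuotaClassSetsController (2.7 and later), covering show and
# update of class quotas, the share group and share replica quota keys
# reported at 2.40 and 2.53, authorization failures, and
# VersionNotFoundForAPIMethod for mismatched URL/microversion combinations.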
""" Tests for manila.api.v1.quota_class_sets.py """ import copy from unittest import mock import ddt from oslo_config import cfg import webob.exc import webob.response from manila.api.openstack import api_version_request as api_version from manila.api.v2 import quota_class_sets from manila import context from manila import exception from manila import policy from manila import test from manila.tests.api import fakes CONF = cfg.CONF REQ = mock.MagicMock() REQ.environ = {'manila.context': context.get_admin_context()} REQ.environ['manila.context'].is_admin = True REQ.environ['manila.context'].auth_token = 'foo_auth_token' REQ.environ['manila.context'].project_id = 'foo_project_id' REQ_MEMBER = copy.deepcopy(REQ) REQ_MEMBER.environ['manila.context'].is_admin = False @ddt.ddt class QuotaSetsControllerTest(test.TestCase): def setUp(self): super(QuotaSetsControllerTest, self).setUp() self.controller = quota_class_sets.QuotaClassSetsController() self.resource_name = self.controller.resource_name self.class_name = 'foo_class_name' self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) @ddt.data( ('os-', '1.0', quota_class_sets.QuotaClassSetsControllerLegacy), ('os-', '2.6', quota_class_sets.QuotaClassSetsControllerLegacy), ('', '2.7', quota_class_sets.QuotaClassSetsController), ('', '2.53', quota_class_sets.QuotaClassSetsController), ) @ddt.unpack def test_show_quota(self, url, version, controller): req = fakes.HTTPRequest.blank( '/fooproject/%squota-class-sets' % url, version=version, use_admin_context=True) quotas = { "shares": 23, "snapshots": 34, "gigabytes": 45, "snapshot_gigabytes": 56, "share_networks": 67, } expected = { 'quota_class_set': { 'id': self.class_name, 'shares': quotas.get('shares', 50), 'gigabytes': quotas.get('gigabytes', 1000), 'snapshots': quotas.get('snapshots', 50), 'snapshot_gigabytes': quotas.get('snapshot_gigabytes', 1000), 'share_networks': quotas.get('share_networks', 10), } } for k, v in quotas.items(): CONF.set_default('quota_' + k, v) if req.api_version_request >= api_version.APIVersionRequest("2.40"): expected['quota_class_set']['share_groups'] = 50 expected['quota_class_set']['share_group_snapshots'] = 50 if req.api_version_request >= api_version.APIVersionRequest("2.53"): expected['quota_class_set']['share_replicas'] = 100 expected['quota_class_set']['replica_gigabytes'] = 1000 result = controller().show(req, self.class_name) self.assertEqual(expected, result) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'show') def test_show_quota_not_authorized(self): self.mock_object( quota_class_sets.db, 'authorize_quota_class_context', mock.Mock(side_effect=exception.NotAuthorized)) self.assertRaises( webob.exc.HTTPForbidden, self.controller.show, REQ, self.class_name) self.mock_policy_check.assert_called_once_with( REQ.environ['manila.context'], self.resource_name, 'show') @ddt.data( ('os-', '1.0', quota_class_sets.QuotaClassSetsControllerLegacy), ('os-', '2.6', quota_class_sets.QuotaClassSetsControllerLegacy), ('', '2.7', quota_class_sets.QuotaClassSetsController), ('', '2.53', quota_class_sets.QuotaClassSetsController), ) @ddt.unpack def test_update_quota(self, url, version, controller): req = fakes.HTTPRequest.blank( '/fooproject/%squota-class-sets' % url, version=version, use_admin_context=True) CONF.set_default('quota_shares', 789) body = { 'quota_class_set': { 'class_name': self.class_name, 'shares': 788, } } expected = { 'quota_class_set': { 'shares': 
body['quota_class_set']['shares'], 'gigabytes': 1000, 'snapshots': 50, 'snapshot_gigabytes': 1000, 'share_networks': 10, } } if req.api_version_request >= api_version.APIVersionRequest("2.40"): expected['quota_class_set']['share_groups'] = 50 expected['quota_class_set']['share_group_snapshots'] = 50 if req.api_version_request >= api_version.APIVersionRequest("2.53"): expected['quota_class_set']['share_replicas'] = 100 expected['quota_class_set']['replica_gigabytes'] = 1000 update_result = controller().update( req, self.class_name, body=body) self.assertEqual(expected, update_result) show_result = controller().show(req, self.class_name) expected['quota_class_set']['id'] = self.class_name self.assertEqual(expected, show_result) self.mock_policy_check.assert_has_calls([mock.call( req.environ['manila.context'], self.resource_name, action_name) for action_name in ('update', 'show')]) def test_update_quota_not_authorized(self): body = { 'quota_class_set': { 'class_name': self.class_name, 'shares': 13, } } self.assertRaises( webob.exc.HTTPForbidden, self.controller.update, REQ_MEMBER, self.class_name, body=body) self.mock_policy_check.assert_called_once_with( REQ_MEMBER.environ['manila.context'], self.resource_name, 'update') @ddt.data( ('os-', '2.7', quota_class_sets.QuotaClassSetsControllerLegacy), ('', '2.6', quota_class_sets.QuotaClassSetsController), ('', '2.0', quota_class_sets.QuotaClassSetsController), ) @ddt.unpack def test_api_not_found(self, url, version, controller): req = fakes.HTTPRequest.blank( '/fooproject/%squota-class-sets' % url, version=version) for method_name in ('show', 'update'): self.assertRaises( exception.VersionNotFoundForAPIMethod, getattr(controller(), method_name), req, self.class_name) @ddt.data( ('os-', '2.7', quota_class_sets.QuotaClassSetsControllerLegacy), ('', '2.6', quota_class_sets.QuotaClassSetsController), ('', '2.0', quota_class_sets.QuotaClassSetsController), ) @ddt.unpack def test_update_api_not_found(self, url, version, controller): req = fakes.HTTPRequest.blank( '/fooproject/%squota-class-sets' % url, version=version) self.assertRaises( exception.VersionNotFoundForAPIMethod, controller().update, req, self.class_name) manila-10.0.0/manila/tests/api/v2/test_quota_sets.py0000664000175000017500000007031413656750227022400 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
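# Unit tests for the quota sets API. The cases below cover defaults, show,
# detail, update and delete operations for project quotas, per-user quotas
# (via the user_id query string) and per-share-type quotas (via the
# share_type query string, available from microversion 2.39), as well as
# the legacy 'os-quota-sets' controller and the validation helpers used to
# reject arguments that are not allowed for a given microversion.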
""" Tests for manila.api.v2.quota_sets.py """ from unittest import mock import ddt from oslo_config import cfg import webob.exc import webob.response from manila.api.openstack import api_version_request as api_version from manila.api.v2 import quota_sets from manila import context from manila import exception from manila import policy from manila import test from manila.tests.api import fakes from manila import utils CONF = cfg.CONF sg_quota_keys = ['share_groups', 'share_group_snapshots'] replica_quota_keys = ['share_replicas'] def _get_request(is_admin, user_in_url): req = mock.MagicMock( api_version_request=api_version.APIVersionRequest("2.40")) req.environ = {'manila.context': context.get_admin_context()} req.environ['manila.context'].is_admin = is_admin req.environ['manila.context'].auth_token = 'foo_auth_token' req.environ['manila.context'].project_id = 'foo_project_id' if user_in_url: req.environ['manila.context'].user_id = 'foo_user_id' req.environ['QUERY_STRING'] = 'user_id=foo_user_id' return req @ddt.ddt class QuotaSetsControllerTest(test.TestCase): def setUp(self): super(QuotaSetsControllerTest, self).setUp() self.controller = quota_sets.QuotaSetsController() self.resource_name = self.controller.resource_name self.project_id = 'foo_project_id' self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) @ddt.data( {"shares": 3, "snapshots": 4, "gigabytes": 5, "snapshot_gigabytes": 6, "share_networks": 7}, {"shares": -1, "snapshots": -1, "gigabytes": -1, "snapshot_gigabytes": -1, "share_networks": -1}, {"shares": 13}, {"snapshots": 24}, {"gigabytes": 7}, {"snapshot_gigabytes": 10001}, {"share_networks": 12345}, {"share_groups": 123456}, {"share_group_snapshots": 123456}, ) def test_defaults(self, quotas): req = _get_request(True, False) for k, v in quotas.items(): CONF.set_default('quota_' + k, v) expected = { 'quota_set': { 'id': self.project_id, 'shares': quotas.get('shares', 50), 'gigabytes': quotas.get('gigabytes', 1000), 'snapshots': quotas.get('snapshots', 50), 'snapshot_gigabytes': quotas.get('snapshot_gigabytes', 1000), 'share_networks': quotas.get('share_networks', 10), 'share_groups': quotas.get('share_groups', 50), 'share_group_snapshots': quotas.get( 'share_group_snapshots', 50), } } result = self.controller.defaults(req, self.project_id) self.assertEqual(expected, result) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'show') @ddt.data( ('os-', '1.0', quota_sets.QuotaSetsControllerLegacy, 'defaults'), ('os-', '2.6', quota_sets.QuotaSetsControllerLegacy, 'defaults'), ('', '2.7', quota_sets.QuotaSetsController, 'defaults'), ('os-', '1.0', quota_sets.QuotaSetsControllerLegacy, 'show'), ('os-', '2.6', quota_sets.QuotaSetsControllerLegacy, 'show'), ('', '2.7', quota_sets.QuotaSetsController, 'show'), ) @ddt.unpack def test_get_quotas_with_different_api_versions(self, url, version, controller, method_name): expected = { 'quota_set': { 'id': self.project_id, 'shares': 50, 'gigabytes': 1000, 'snapshots': 50, 'snapshot_gigabytes': 1000, 'share_networks': 10, } } req = fakes.HTTPRequest.blank( '/fooproject/%squota-sets' % url, version=version, use_admin_context=True) result = getattr(controller(), method_name)(req, self.project_id) self.assertEqual(expected, result) @staticmethod def _get_share_type_request_object(microversion=None): req = _get_request(True, False) req.environ['QUERY_STRING'] = 'share_type=fake_share_type_name_or_id' req.api_version_request = api_version.APIVersionRequest( 
microversion or '2.39') return req @ddt.data('2.39', '2.40') def test_share_type_quota_detail(self, microversion): self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock(return_value={'id': 'fake_st_id'})) req = self._get_share_type_request_object(microversion) quotas = { "shares": 23, "snapshots": 34, "gigabytes": 45, "snapshot_gigabytes": 56, } expected = {'quota_set': { 'id': self.project_id, 'shares': { 'in_use': 0, 'limit': quotas['shares'], 'reserved': 0, }, 'gigabytes': { 'in_use': 0, 'limit': quotas['gigabytes'], 'reserved': 0, }, 'snapshots': { 'in_use': 0, 'limit': quotas['snapshots'], 'reserved': 0, }, 'snapshot_gigabytes': { 'in_use': 0, 'limit': quotas['snapshot_gigabytes'], 'reserved': 0, }, }} for k, v in quotas.items(): CONF.set_default('quota_' + k, v) result = self.controller.detail(req, self.project_id) self.assertEqual(expected, result) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'show') quota_sets.db.share_type_get_by_name_or_id.assert_called_once_with( req.environ['manila.context'], 'fake_share_type_name_or_id') @ddt.data('2.39', '2.40') def test_show_share_type_quota(self, microversion): self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock(return_value={'id': 'fake_st_id'})) req = self._get_share_type_request_object(microversion) quotas = { "shares": 23, "snapshots": 34, "gigabytes": 45, "snapshot_gigabytes": 56, } expected = { 'quota_set': { 'id': self.project_id, 'shares': quotas.get('shares', 50), 'gigabytes': quotas.get('gigabytes', 1000), 'snapshots': quotas.get('snapshots', 50), 'snapshot_gigabytes': quotas.get('snapshot_gigabytes', 1000), } } for k, v in quotas.items(): CONF.set_default('quota_' + k, v) result = self.controller.show(req, self.project_id) self.assertEqual(expected, result) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'show') quota_sets.db.share_type_get_by_name_or_id.assert_called_once_with( req.environ['manila.context'], 'fake_share_type_name_or_id') @ddt.data('show', 'detail') def test_get_share_type_quota_with_old_microversion(self, method): req = self._get_share_type_request_object('2.38') self.assertRaises( webob.exc.HTTPBadRequest, getattr(self.controller, method), req, self.project_id) @ddt.data((None, None), (None, 'foo'), ('bar', None)) @ddt.unpack def test__validate_user_id_and_share_type_args(self, user_id, st_id): result = self.controller._validate_user_id_and_share_type_args( user_id, st_id) self.assertIsNone(result) def test__validate_user_id_and_share_type_args_exception(self): self.assertRaises( webob.exc.HTTPBadRequest, self.controller._validate_user_id_and_share_type_args, 'foo', 'bar') def test__get_share_type_id_found(self): self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock(return_value={'id': 'fake_st_id'})) ctxt = 'fake_context' share_type = 'fake_share_type_name_or_id' result = self.controller._get_share_type_id(ctxt, share_type) self.assertEqual('fake_st_id', result) def test__get_share_type_id_not_found(self): self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock(return_value=None)) ctxt = 'fake_context' share_type = 'fake_share_type_name_or_id' self.assertRaises( webob.exc.HTTPNotFound, self.controller._get_share_type_id, ctxt, share_type) def test__get_share_type_id_is_not_provided(self): self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock(return_value={'id': 'fake_st_id'})) ctxt = 'fake_context' 
result = self.controller._get_share_type_id(ctxt, None) self.assertIsNone(result) @ddt.data( ({}, sg_quota_keys, '2.40'), ({"quota_set": {}}, sg_quota_keys, '2.40'), ({"quota_set": {"foo": "bar"}}, sg_quota_keys, '2.40'), ({"foo": "bar"}, replica_quota_keys, '2.53'), ({"quota_set": {"foo": "bar"}}, replica_quota_keys, '2.53'), ) @ddt.unpack def test__ensure_specific_microversion_args_are_absent_success( self, body, keys, microversion): result = self.controller._ensure_specific_microversion_args_are_absent( body, keys, microversion) self.assertIsNone(result) @ddt.data( ({"share_groups": 5}, sg_quota_keys, '2.40'), ({"share_group_snapshots": 6}, sg_quota_keys, '2.40'), ({"quota_set": {"share_groups": 7}}, sg_quota_keys, '2.40'), ({"quota_set": {"share_group_snapshots": 8}}, sg_quota_keys, '2.40'), ({"quota_set": {"share_replicas": 9}}, replica_quota_keys, '2.53'), ({"quota_set": {"share_replicas": 10}}, replica_quota_keys, '2.53'), ) @ddt.unpack def test__ensure_specific_microversion_args_are_absent_error( self, body, keys, microversion): self.assertRaises( webob.exc.HTTPBadRequest, self.controller._ensure_specific_microversion_args_are_absent, body, keys, microversion ) @ddt.data(_get_request(True, True), _get_request(True, False)) def test__ensure_share_type_arg_is_absent(self, req): result = self.controller._ensure_share_type_arg_is_absent(req) self.assertIsNone(result) def test__ensure_share_type_arg_is_absent_exception(self): req = self._get_share_type_request_object('2.39') self.assertRaises( webob.exc.HTTPBadRequest, self.controller._ensure_share_type_arg_is_absent, req) @ddt.data(_get_request(True, True), _get_request(True, False)) def test_quota_detail(self, request): request.api_version_request = api_version.APIVersionRequest('2.25') quotas = { "shares": 23, "snapshots": 34, "gigabytes": 45, "snapshot_gigabytes": 56, "share_networks": 67, } expected = { 'quota_set': { 'id': self.project_id, 'shares': {'in_use': 0, 'limit': quotas['shares'], 'reserved': 0}, 'gigabytes': {'in_use': 0, 'limit': quotas['gigabytes'], 'reserved': 0}, 'snapshots': {'in_use': 0, 'limit': quotas['snapshots'], 'reserved': 0}, 'snapshot_gigabytes': { 'in_use': 0, 'limit': quotas['snapshot_gigabytes'], 'reserved': 0, }, 'share_networks': { 'in_use': 0, 'limit': quotas['share_networks'], 'reserved': 0 }, } } for k, v in quotas.items(): CONF.set_default('quota_' + k, v) result = self.controller.detail(request, self.project_id) self.assertEqual(expected, result) self.mock_policy_check.assert_called_once_with( request.environ['manila.context'], self.resource_name, 'show') @ddt.data(_get_request(True, True), _get_request(True, False)) def test_show_quota(self, request): quotas = { "shares": 23, "snapshots": 34, "gigabytes": 45, "snapshot_gigabytes": 56, "share_networks": 67, "share_groups": 53, "share_group_snapshots": 57, } expected = { 'quota_set': { 'id': self.project_id, 'shares': quotas.get('shares', 50), 'gigabytes': quotas.get('gigabytes', 1000), 'snapshots': quotas.get('snapshots', 50), 'snapshot_gigabytes': quotas.get('snapshot_gigabytes', 1000), 'share_networks': quotas.get('share_networks', 10), 'share_groups': quotas.get('share_groups', 50), 'share_group_snapshots': quotas.get( 'share_group_snapshots', 50), } } for k, v in quotas.items(): CONF.set_default('quota_' + k, v) result = self.controller.show(request, self.project_id) self.assertEqual(expected, result) self.mock_policy_check.assert_called_once_with( request.environ['manila.context'], self.resource_name, 'show') def 
test_show_quota_not_authorized(self): req = _get_request(True, False) self.mock_object( quota_sets.db, 'authorize_project_context', mock.Mock(side_effect=exception.NotAuthorized)) self.assertRaises( webob.exc.HTTPForbidden, self.controller.show, req, self.project_id) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'show') @ddt.data(_get_request(True, True), _get_request(True, False)) def test_update_quota(self, request): self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock( return_value={'id': 'fake_st_id', 'name': 'fake_st_name'})) CONF.set_default('quota_shares', 789) body = {'quota_set': {'tenant_id': self.project_id, 'shares': 788}} expected = { 'quota_set': { 'shares': body['quota_set']['shares'], 'gigabytes': 1000, 'snapshots': 50, 'snapshot_gigabytes': 1000, 'share_networks': 10, 'share_groups': 50, 'share_group_snapshots': 50, } } mock_policy_update_check_call = mock.call( request.environ['manila.context'], self.resource_name, 'update') mock_policy_show_check_call = mock.call( request.environ['manila.context'], self.resource_name, 'show') update_result = self.controller.update( request, self.project_id, body=body) self.assertEqual(expected, update_result) show_result = self.controller.show(request, self.project_id) expected['quota_set']['id'] = self.project_id self.assertEqual(expected, show_result) self.mock_policy_check.assert_has_calls([ mock_policy_update_check_call, mock_policy_show_check_call]) quota_sets.db.share_type_get_by_name_or_id.assert_not_called() @ddt.data('2.39', '2.40') def test_update_share_type_quota(self, microversion): self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock( return_value={'id': 'fake_st_id', 'name': 'fake_st_name'})) req = self._get_share_type_request_object(microversion) CONF.set_default('quota_shares', 789) body = {'quota_set': {'tenant_id': self.project_id, 'shares': 788}} expected = { 'quota_set': { 'shares': body['quota_set']['shares'], 'gigabytes': 1000, 'snapshots': 50, 'snapshot_gigabytes': 1000, } } update_result = self.controller.update(req, self.project_id, body=body) self.assertEqual(expected, update_result) quota_sets.db.share_type_get_by_name_or_id.assert_called_once_with( req.environ['manila.context'], req.environ['QUERY_STRING'].split('=')[-1]) quota_sets.db.share_type_get_by_name_or_id.reset_mock() show_result = self.controller.show(req, self.project_id) expected['quota_set']['id'] = self.project_id self.assertEqual(expected, show_result) self.mock_policy_check.assert_has_calls([ mock.call(req.environ['manila.context'], self.resource_name, key) for key in ('update', 'show') ]) quota_sets.db.share_type_get_by_name_or_id.assert_called_once_with( req.environ['manila.context'], req.environ['QUERY_STRING'].split('=')[-1]) def test_update_share_type_quota_using_too_old_microversion(self): self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock( return_value={'id': 'fake_st_id', 'name': 'fake_st_name'})) req = self._get_share_type_request_object('2.38') body = {'quota_set': {'tenant_id': self.project_id, 'shares': 788}} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.update, req, self.project_id, body=body) quota_sets.db.share_type_get_by_name_or_id.assert_not_called() def test_update_share_type_quota_for_share_networks(self): self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock( return_value={'id': 'fake_st_id', 'name': 'fake_st_name'})) req = self._get_share_type_request_object('2.39') 
body = {'quota_set': { 'tenant_id': self.project_id, 'share_networks': 788, }} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.update, req, self.project_id, body=body) quota_sets.db.share_type_get_by_name_or_id.assert_called_once_with( req.environ['manila.context'], req.environ['QUERY_STRING'].split('=')[-1]) @ddt.data(-2, 'foo', {1: 2}, [1]) def test_update_quota_with_invalid_value(self, value): req = _get_request(True, False) body = {'quota_set': {'tenant_id': self.project_id, 'shares': value}} self.assertRaises( webob.exc.HTTPBadRequest, self.controller.update, req, self.project_id, body=body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'update') def test_user_quota_can_not_be_bigger_than_tenant_quota(self): value = 777 CONF.set_default('quota_shares', value) body = { 'quota_set': { 'tenant_id': self.project_id, 'shares': value + 1, } } req = _get_request(True, True) self.assertRaises( webob.exc.HTTPBadRequest, self.controller.update, req, self.project_id, body=body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'update') def test_update_inexistent_quota(self): body = { 'quota_set': { 'tenant_id': self.project_id, 'fake_quota': 13, } } req = _get_request(True, False) self.assertRaises( webob.exc.HTTPBadRequest, self.controller.update, req, self.project_id, body=body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'update') def test_update_quota_not_authorized(self): body = {'quota_set': {'tenant_id': self.project_id, 'shares': 13}} req = _get_request(False, False) self.assertRaises( webob.exc.HTTPForbidden, self.controller.update, req, self.project_id, body=body) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'update') @ddt.data( ('os-quota-sets', '1.0', quota_sets.QuotaSetsControllerLegacy), ('os-quota-sets', '2.6', quota_sets.QuotaSetsControllerLegacy), ('quota-sets', '2.7', quota_sets.QuotaSetsController), ) @ddt.unpack def test_update_all_quotas_with_force(self, url, version, controller): req = fakes.HTTPRequest.blank( '/fooproject/%s' % url, version=version, use_admin_context=True) quotas = ( ('quota_shares', 13), ('quota_gigabytes', 14), ('quota_snapshots', 15), ('quota_snapshot_gigabytes', 16), ('quota_share_networks', 17), ) for quota, value in quotas: CONF.set_default(quota, value) expected = { 'quota_set': { 'tenant_id': self.project_id, 'shares': quotas[0][1], 'gigabytes': quotas[1][1], 'snapshots': quotas[2][1], 'snapshot_gigabytes': quotas[3][1], 'share_networks': quotas[4][1], 'force': True, } } update_result = controller().update( req, self.project_id, body=expected) expected['quota_set'].pop('force') expected['quota_set'].pop('tenant_id') self.assertEqual(expected, update_result) show_result = controller().show(req, self.project_id) expected['quota_set']['id'] = self.project_id self.assertEqual(expected, show_result) self.mock_policy_check.assert_has_calls([ mock.call(req.environ['manila.context'], self.resource_name, action) for action in ('update', 'show') ]) @ddt.data( ('os-quota-sets', '1.0', quota_sets.QuotaSetsControllerLegacy), ('os-quota-sets', '2.6', quota_sets.QuotaSetsControllerLegacy), ('quota-sets', '2.7', quota_sets.QuotaSetsController), ) @ddt.unpack def test_delete_tenant_quota(self, url, version, controller): self.mock_object(quota_sets.QUOTAS, 'destroy_all_by_project_and_user') self.mock_object(quota_sets.QUOTAS, 
'destroy_all_by_project') req = fakes.HTTPRequest.blank( '/fooproject/%s' % url, version=version, use_admin_context=True) result = controller().delete(req, self.project_id) self.assertTrue( utils.IsAMatcher(webob.response.Response) == result ) self.assertTrue(hasattr(result, 'status_code')) self.assertEqual(202, result.status_code) self.assertFalse( quota_sets.QUOTAS.destroy_all_by_project_and_user.called) quota_sets.QUOTAS.destroy_all_by_project.assert_called_once_with( req.environ['manila.context'], self.project_id) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'delete') def test_delete_user_quota(self): project_id = 'foo_project_id' self.mock_object(quota_sets.QUOTAS, 'destroy_all_by_project_and_user') self.mock_object(quota_sets.QUOTAS, 'destroy_all_by_project') req = _get_request(True, True) result = self.controller.delete(req, project_id) self.assertTrue( utils.IsAMatcher(webob.response.Response) == result ) self.assertTrue(hasattr(result, 'status_code')) self.assertEqual(202, result.status_code) (quota_sets.QUOTAS.destroy_all_by_project_and_user. assert_called_once_with( req.environ['manila.context'], project_id, req.environ['manila.context'].user_id)) self.assertFalse(quota_sets.QUOTAS.destroy_all_by_project.called) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'delete') def test_delete_share_type_quota(self): req = self._get_share_type_request_object('2.39') self.mock_object(quota_sets.QUOTAS, 'destroy_all_by_project') self.mock_object(quota_sets.QUOTAS, 'destroy_all_by_project_and_user') mock_delete_st_quotas = self.mock_object( quota_sets.QUOTAS, 'destroy_all_by_project_and_share_type') self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock( return_value={'id': 'fake_st_id', 'name': 'fake_st_name'})) result = self.controller.delete(req, self.project_id) self.assertEqual(utils.IsAMatcher(webob.response.Response), result) self.assertTrue(hasattr(result, 'status_code')) self.assertEqual(202, result.status_code) mock_delete_st_quotas.assert_called_once_with( req.environ['manila.context'], self.project_id, 'fake_st_id') quota_sets.QUOTAS.destroy_all_by_project.assert_not_called() quota_sets.QUOTAS.destroy_all_by_project_and_user.assert_not_called() self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'delete') quota_sets.db.share_type_get_by_name_or_id.assert_called_once_with( req.environ['manila.context'], req.environ['QUERY_STRING'].split('=')[-1]) def test_delete_share_type_quota_using_too_old_microversion(self): self.mock_object( quota_sets.db, 'share_type_get_by_name_or_id', mock.Mock( return_value={'id': 'fake_st_id', 'name': 'fake_st_name'})) req = self._get_share_type_request_object('2.38') self.assertRaises( webob.exc.HTTPBadRequest, self.controller.delete, req, self.project_id) quota_sets.db.share_type_get_by_name_or_id.assert_not_called() def test_delete_not_authorized(self): req = _get_request(False, False) self.assertRaises( webob.exc.HTTPForbidden, self.controller.delete, req, self.project_id) self.mock_policy_check.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'delete') @ddt.data( ('os-quota-sets', '2.7', quota_sets.QuotaSetsControllerLegacy), ('quota-sets', '2.6', quota_sets.QuotaSetsController), ('quota-sets', '2.0', quota_sets.QuotaSetsController), ) @ddt.unpack def test_api_not_found(self, url, version, controller): req = fakes.HTTPRequest.blank('/fooproject/%s' % 
url, version=version) for method_name in ('show', 'defaults', 'delete'): self.assertRaises( exception.VersionNotFoundForAPIMethod, getattr(controller(), method_name), req, self.project_id) @ddt.data( ('os-quota-sets', '2.7', quota_sets.QuotaSetsControllerLegacy), ('quota-sets', '2.6', quota_sets.QuotaSetsController), ('quota-sets', '2.0', quota_sets.QuotaSetsController), ) @ddt.unpack def test_update_api_not_found(self, url, version, controller): req = fakes.HTTPRequest.blank('/fooproject/%s' % url, version=version) self.assertRaises( exception.VersionNotFoundForAPIMethod, controller().update, req, self.project_id) manila-10.0.0/manila/tests/api/v2/test_availability_zones.py0000664000175000017500000000772313656750227024105 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from manila.api.v2 import availability_zones from manila import context from manila import exception from manila import policy from manila import test from manila.tests.api import fakes @ddt.ddt class AvailabilityZonesAPITest(test.TestCase): @ddt.data( availability_zones.AvailabilityZoneControllerLegacy, availability_zones.AvailabilityZoneController, ) def test_instantiate_controller(self, controller): az_controller = controller() self.assertTrue(hasattr(az_controller, "resource_name")) self.assertEqual("availability_zone", az_controller.resource_name) self.assertTrue(hasattr(az_controller, "_view_builder")) self.assertTrue(hasattr(az_controller._view_builder, "detail_list")) @ddt.data( ('1.0', availability_zones.AvailabilityZoneControllerLegacy), ('2.0', availability_zones.AvailabilityZoneControllerLegacy), ('2.6', availability_zones.AvailabilityZoneControllerLegacy), ('2.7', availability_zones.AvailabilityZoneController), ) @ddt.unpack def test_index(self, version, controller): azs = [ { "id": "fake_id1", "name": "fake_name1", "created_at": "fake_created_at", "updated_at": "fake_updated_at", }, { "id": "fake_id2", "name": "fake_name2", "created_at": "fake_created_at", "updated_at": "fake_updated_at", "deleted": "False", "redundant_key": "redundant_value", }, ] mock_policy_check = self.mock_object(policy, 'check_policy') self.mock_object(availability_zones.db, 'availability_zone_get_all', mock.Mock(return_value=azs)) az_controller = controller() ctxt = context.RequestContext("admin", "fake", True) req = fakes.HTTPRequest.blank('/shares', version=version) req.environ['manila.context'] = ctxt result = az_controller.index(req) (availability_zones.db.availability_zone_get_all. 
assert_called_once_with(ctxt)) mock_policy_check.assert_called_once_with( ctxt, controller.resource_name, 'index') self.assertIsInstance(result, dict) self.assertEqual(["availability_zones"], list(result.keys())) self.assertIsInstance(result["availability_zones"], list) self.assertEqual(2, len(result["availability_zones"])) self.assertIn(azs[0], result["availability_zones"]) azs[1].pop("deleted") azs[1].pop("redundant_key") self.assertIn(azs[1], result["availability_zones"]) @ddt.data( ('1.0', availability_zones.AvailabilityZoneController), ('2.0', availability_zones.AvailabilityZoneController), ('2.6', availability_zones.AvailabilityZoneController), ('2.7', availability_zones.AvailabilityZoneControllerLegacy), ) @ddt.unpack def test_index_with_unsupported_versions(self, version, controller): ctxt = context.RequestContext("admin", "fake", True) req = fakes.HTTPRequest.blank('/shares', version=version) req.environ['manila.context'] = ctxt az_controller = controller() self.assertRaises( exception.VersionNotFoundForAPIMethod, az_controller.index, req) manila-10.0.0/manila/tests/api/v2/test_messages.py0000664000175000017500000002127213656750227022017 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import datetime import iso8601 from oslo_config import cfg import webob from manila.api.v2 import messages from manila import context from manila import exception from manila.message import api as message_api from manila.message import message_field from manila import policy from manila import test from manila.tests.api import fakes from manila.tests.api.v2 import stubs CONF = cfg.CONF class MessageApiTest(test.TestCase): def setUp(self): super(MessageApiTest, self).setUp() self.controller = messages.MessagesController() self.maxDiff = None self.ctxt = context.RequestContext('admin', 'fake', True) self.mock_object(policy, 'check_policy', mock.Mock(return_value=True)) def _expected_message_from_controller(self, id, **kwargs): message = stubs.stub_message(id, **kwargs) links = [ {'href': 'http://localhost/v2/fake/messages/%s' % id, 'rel': 'self'}, {'href': 'http://localhost/fake/messages/%s' % id, 'rel': 'bookmark'}, ] return { 'message': { 'id': message.get('id'), 'project_id': message.get('project_id'), 'user_message': "%s: %s" % ( message_field.translate_action(message.get('action_id')), message_field.translate_detail(message.get('detail_id'))), 'request_id': message.get('request_id'), 'action_id': message.get('action_id'), 'detail_id': message.get('detail_id'), 'created_at': message.get('created_at'), 'message_level': message.get('message_level'), 'expires_at': message.get('expires_at'), 'links': links, 'resource_type': message.get('resource_type'), 'resource_id': message.get('resource_id'), } } def test_show(self): self.mock_object(message_api.API, 'get', stubs.stub_message_get) req = fakes.HTTPRequest.blank( '/messages/%s' % fakes.FAKE_UUID, version=messages.MESSAGES_BASE_MICRO_VERSION, base_url='http://localhost/v2') req.environ['manila.context'] = self.ctxt 
res_dict = self.controller.show(req, fakes.FAKE_UUID) ex = self._expected_message_from_controller(fakes.FAKE_UUID) self.assertEqual(ex, res_dict) def test_show_with_resource(self): resource_type = "FAKE_RESOURCE" resource_id = "b1872cb2-4c5f-4072-9828-8a51b02926a3" fake_message = stubs.stub_message(fakes.FAKE_UUID, resource_type=resource_type, resource_id=resource_id) mock_get = mock.Mock(return_value=fake_message) self.mock_object(message_api.API, 'get', mock_get) req = fakes.HTTPRequest.blank( '/messages/%s' % fakes.FAKE_UUID, version=messages.MESSAGES_BASE_MICRO_VERSION, base_url='http://localhost/v2') req.environ['manila.context'] = self.ctxt res_dict = self.controller.show(req, fakes.FAKE_UUID) self.assertEqual(resource_type, res_dict['message']['resource_type']) self.assertEqual(resource_id, res_dict['message']['resource_id']) def test_show_not_found(self): fake_not_found = exception.MessageNotFound(message_id=fakes.FAKE_UUID) self.mock_object(message_api.API, 'get', mock.Mock(side_effect=fake_not_found)) req = fakes.HTTPRequest.blank( '/messages/%s' % fakes.FAKE_UUID, version=messages.MESSAGES_BASE_MICRO_VERSION, base_url='http://localhost/v2') req.environ['manila.context'] = self.ctxt self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, fakes.FAKE_UUID) def test_show_pre_microversion(self): self.mock_object(message_api.API, 'get', stubs.stub_message_get) req = fakes.HTTPRequest.blank('/messages/%s' % fakes.FAKE_UUID, version='2.35', base_url='http://localhost/v2') req.environ['manila.context'] = self.ctxt self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, req, fakes.FAKE_UUID) def test_delete(self): self.mock_object(message_api.API, 'get', stubs.stub_message_get) self.mock_object(message_api.API, 'delete') req = fakes.HTTPRequest.blank( '/messages/%s' % fakes.FAKE_UUID, version=messages.MESSAGES_BASE_MICRO_VERSION) req.environ['manila.context'] = self.ctxt resp = self.controller.delete(req, fakes.FAKE_UUID) self.assertEqual(204, resp.status_int) self.assertTrue(message_api.API.delete.called) def test_delete_not_found(self): fake_not_found = exception.MessageNotFound(message_id=fakes.FAKE_UUID) self.mock_object(message_api.API, 'get', mock.Mock(side_effect=fake_not_found)) req = fakes.HTTPRequest.blank( '/messages/%s' % fakes.FAKE_UUID, version=messages.MESSAGES_BASE_MICRO_VERSION) self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, fakes.FAKE_UUID) def test_index(self): msg1 = stubs.stub_message(fakes.get_fake_uuid()) msg2 = stubs.stub_message(fakes.get_fake_uuid()) self.mock_object(message_api.API, 'get_all', mock.Mock( return_value=[msg1, msg2])) req = fakes.HTTPRequest.blank( '/messages', version=messages.MESSAGES_BASE_MICRO_VERSION, base_url='http://localhost/v2') req.environ['manila.context'] = self.ctxt res_dict = self.controller.index(req) ex1 = self._expected_message_from_controller(msg1['id'])['message'] ex2 = self._expected_message_from_controller(msg2['id'])['message'] expected = {'messages': [ex1, ex2]} self.assertDictMatch(expected, res_dict) def test_index_with_limit_and_offset(self): msg2 = stubs.stub_message(fakes.get_fake_uuid()) self.mock_object(message_api.API, 'get_all', mock.Mock( return_value=[msg2])) req = fakes.HTTPRequest.blank( '/messages?limit=1&offset=1', version=messages.MESSAGES_BASE_MICRO_VERSION, base_url='http://localhost/v2') req.environ['manila.context'] = self.ctxt res_dict = self.controller.index(req) ex2 = self._expected_message_from_controller(msg2['id'])['message'] 
self.assertEqual([ex2], res_dict['messages']) def test_index_with_created_since_and_created_before(self): msg = stubs.stub_message( fakes.get_fake_uuid(), created_at=datetime.datetime(1900, 2, 1, 1, 1, 1, tzinfo=iso8601.UTC)) self.mock_object(message_api.API, 'get_all', mock.Mock( return_value=[msg])) req = fakes.HTTPRequest.blank( '/messages?created_since=1900-01-01T01:01:01&' 'created_before=1900-03-01T01:01:01', version=messages.MESSAGES_QUERY_BY_TIMESTAMP, base_url='http://localhost/v2') req.environ['manila.context'] = self.ctxt res_dict = self.controller.index(req) ex2 = self._expected_message_from_controller( msg['id'], created_at=datetime.datetime(1900, 2, 1, 1, 1, 1, tzinfo=iso8601.UTC))['message'] self.assertEqual([ex2], res_dict['messages']) def test_index_with_invalid_time_format(self): req = fakes.HTTPRequest.blank( '/messages?created_since=invalid_time_str', version=messages.MESSAGES_QUERY_BY_TIMESTAMP, base_url='http://localhost/v2') req.environ['manila.context'] = self.ctxt self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) manila-10.0.0/manila/tests/api/v2/test_share_types.py0000664000175000017500000012475313656750227022546 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
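# Unit tests for the share types API (manila.api.v2.share_types) and its
# view builder. The cases below cover the index/show/default handlers, the
# per-microversion differences in the returned fields (for example the
# 'description' attribute exposed from microversion 2.41), and the share
# type update support introduced with microversion 2.50, including
# validation of invalid names, descriptions and is_public values.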
import datetime import random from unittest import mock import ddt from oslo_config import cfg from oslo_utils import timeutils import webob from manila.api.v2 import share_types as types from manila.api.views import types as views_types from manila.common import constants from manila import context from manila import db from manila import exception from manila import policy from manila.share import share_types from manila import test from manila.tests.api import fakes from manila.tests import fake_notifier CONF = cfg.CONF def stub_share_type(id): specs = { "key1": "value1", "key2": "value2", "key3": "value3", "key4": "value4", "key5": "value5", constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: "true", } if id == 4: name = 'update_share_type_%s' % str(id) description = 'update_description_%s' % str(id) is_public = False else: name = 'share_type_%s' % str(id) description = 'description_%s' % str(id) is_public = True share_type = { 'id': id, 'name': name, 'description': description, 'is_public': is_public, 'extra_specs': specs, 'required_extra_specs': { constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: "true", } } return share_type def return_share_types_get_all_types(context, search_opts=None): return dict( share_type_1=stub_share_type(1), share_type_2=stub_share_type(2), share_type_3=stub_share_type(3) ) def stub_default_name(): return 'default_share_type' def stub_default_share_type(id): return dict( id=id, name=stub_default_name(), description='description_%s' % str(id), required_extra_specs={ constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: "true", } ) def return_all_share_types(context, search_opts=None): mock_value = dict( share_type_1=stub_share_type(1), share_type_2=stub_share_type(2), share_type_3=stub_default_share_type(3) ) return mock_value def return_default_share_type(context, search_opts=None): return stub_default_share_type(3) def return_empty_share_types_get_all_types(context, search_opts=None): return {} def return_share_types_get_share_type(context, id=1): if id == "777": raise exception.ShareTypeNotFound(share_type_id=id) return stub_share_type(int(id)) def return_share_type_update(context, id=4, name=None, description=None, is_public=None): if id == 888: raise exception.ShareTypeUpdateFailed(id=id) if id == 999: raise exception.ShareTypeNotFound(share_type_id=id) pre_share_type = stub_share_type(int(id)) new_name = name new_description = description return pre_share_type.update({"name": new_name, "description": new_description, "is_public": is_public}) def return_share_types_get_by_name(context, name): if name == "777": raise exception.ShareTypeNotFoundByName(share_type_name=name) return stub_share_type(int(name.split("_")[2])) def return_share_types_destroy(context, name): if name == "777": raise exception.ShareTypeNotFoundByName(share_type_name=name) pass def return_share_types_with_volumes_destroy(context, id): if id == "1": raise exception.ShareTypeInUse(share_type_id=id) pass def return_share_types_create(context, name, specs, is_public, description): pass def make_create_body(name="test_share_1", extra_specs=None, spec_driver_handles_share_servers=True, description=None): if not extra_specs: extra_specs = {} if spec_driver_handles_share_servers is not None: extra_specs[constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS] = ( spec_driver_handles_share_servers) body = { "share_type": { "name": name, "extra_specs": extra_specs, } } if description: body["share_type"].update({"description": description}) return body def generate_long_description(des_length=256): 
random_str = '' base_str = 'ABCDEFGHIGKLMNOPQRSTUVWXYZabcdefghigklmnopqrstuvwxyz' length = len(base_str) - 1 for i in range(des_length): random_str += base_str[random.randint(0, length)] return random_str def make_update_body(name=None, description=None, is_public=None): body = {"share_type": {}} if name: body["share_type"].update({"name": name}) if description: body["share_type"].update({"description": description}) if is_public is not None: body["share_type"].update( {"share_type_access:is_public": is_public}) return body @ddt.ddt class ShareTypesAPITest(test.TestCase): def setUp(self): super(ShareTypesAPITest, self).setUp() self.flags(host='fake') self.controller = types.ShareTypesController() self.resource_name = self.controller.resource_name self.mock_object(policy, 'check_policy', mock.Mock(return_value=True)) fake_notifier.reset() self.addCleanup(fake_notifier.reset) self.mock_object( share_types, 'create', mock.Mock(side_effect=return_share_types_create)) self.mock_object( share_types, 'get_share_type_by_name', mock.Mock(side_effect=return_share_types_get_by_name)) self.mock_object( share_types, 'get_share_type', mock.Mock(side_effect=return_share_types_get_share_type)) self.mock_object( share_types, 'update', mock.Mock(side_effect=return_share_type_update)) self.mock_object( share_types, 'destroy', mock.Mock(side_effect=return_share_types_destroy)) @ddt.data(True, False) def test_share_types_index(self, admin): self.mock_object(share_types, 'get_all_types', return_share_types_get_all_types) req = fakes.HTTPRequest.blank('/v2/fake/types', use_admin_context=admin) res_dict = self.controller.index(req) self.assertEqual(3, len(res_dict['share_types'])) expected_names = ['share_type_1', 'share_type_2', 'share_type_3'] actual_names = map(lambda e: e['name'], res_dict['share_types']) self.assertEqual(set(expected_names), set(actual_names)) for entry in res_dict['share_types']: if admin: self.assertEqual('value1', entry['extra_specs'].get('key1')) else: self.assertIsNone(entry['extra_specs'].get('key1')) self.assertIn('required_extra_specs', entry) required_extra_spec = entry['required_extra_specs'].get( constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS, '') self.assertEqual('true', required_extra_spec) policy.check_policy.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'index') def test_share_types_index_no_data(self): self.mock_object(share_types, 'get_all_types', return_empty_share_types_get_all_types) req = fakes.HTTPRequest.blank('/v2/fake/types') res_dict = self.controller.index(req) self.assertEqual(0, len(res_dict['share_types'])) policy.check_policy.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'index') def test_share_types_show(self): self.mock_object(share_types, 'get_share_type', return_share_types_get_share_type) req = fakes.HTTPRequest.blank('/v2/fake/types/1') res_dict = self.controller.show(req, 1) self.assertEqual(2, len(res_dict)) self.assertEqual('1', res_dict['share_type']['id']) self.assertEqual('share_type_1', res_dict['share_type']['name']) expect = {constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: "true"} self.assertEqual(expect, res_dict['share_type']['required_extra_specs']) policy.check_policy.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'show') def test_share_types_show_not_found(self): self.mock_object(share_types, 'get_share_type', return_share_types_get_share_type) req = fakes.HTTPRequest.blank('/v2/fake/types/777') self.assertRaises(webob.exc.HTTPNotFound, 
self.controller.show, req, '777') policy.check_policy.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'show') def test_share_types_default(self): self.mock_object(share_types, 'get_default_share_type', return_share_types_get_share_type) req = fakes.HTTPRequest.blank('/v2/fake/types/default') res_dict = self.controller.default(req) self.assertEqual(2, len(res_dict)) self.assertEqual('1', res_dict['share_type']['id']) self.assertEqual('share_type_1', res_dict['share_type']['name']) policy.check_policy.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'default') def test_share_types_default_not_found(self): self.mock_object(share_types, 'get_default_share_type', mock.Mock(side_effect=exception.ShareTypeNotFound( share_type_id="fake"))) req = fakes.HTTPRequest.blank('/v2/fake/types/default') self.assertRaises(webob.exc.HTTPNotFound, self.controller.default, req) policy.check_policy.assert_called_once_with( req.environ['manila.context'], self.resource_name, 'default') @ddt.data( ('1.0', 'os-share-type-access', True), ('1.0', 'os-share-type-access', False), ('2.0', 'os-share-type-access', True), ('2.0', 'os-share-type-access', False), ('2.6', 'os-share-type-access', True), ('2.6', 'os-share-type-access', False), ('2.7', 'share_type_access', True), ('2.7', 'share_type_access', False), ('2.23', 'share_type_access', True), ('2.23', 'share_type_access', False), ('2.24', 'share_type_access', True), ('2.24', 'share_type_access', False), ('2.27', 'share_type_access', True), ('2.27', 'share_type_access', False), ('2.41', 'share_type_access', True), ('2.41', 'share_type_access', False), ) @ddt.unpack def test_view_builder_show(self, version, prefix, admin): view_builder = views_types.ViewBuilder() now = timeutils.utcnow().isoformat() raw_share_type = dict( name='new_type', description='description_test', deleted=False, created_at=now, updated_at=now, extra_specs={}, deleted_at=None, required_extra_specs={}, id=42, ) request = fakes.HTTPRequest.blank("/v%s" % version[0], version=version, use_admin_context=admin) request.headers['X-Openstack-Manila-Api-Version'] = version output = view_builder.show(request, raw_share_type) self.assertIn('share_type', output) expected_share_type = { 'name': 'new_type', 'extra_specs': {}, '%s:is_public' % prefix: True, 'required_extra_specs': {}, 'id': 42, } if self.is_microversion_ge(version, '2.24') and not admin: for extra_spec in constants.ExtraSpecs.INFERRED_OPTIONAL_MAP: expected_share_type['extra_specs'][extra_spec] = ( constants.ExtraSpecs.INFERRED_OPTIONAL_MAP[extra_spec]) if self.is_microversion_ge(version, '2.41'): expected_share_type['description'] = 'description_test' self.assertDictMatch(expected_share_type, output['share_type']) @ddt.data( ('1.0', 'os-share-type-access', True), ('1.0', 'os-share-type-access', False), ('2.0', 'os-share-type-access', True), ('2.0', 'os-share-type-access', False), ('2.6', 'os-share-type-access', True), ('2.6', 'os-share-type-access', False), ('2.7', 'share_type_access', True), ('2.7', 'share_type_access', False), ('2.23', 'share_type_access', True), ('2.23', 'share_type_access', False), ('2.24', 'share_type_access', True), ('2.24', 'share_type_access', False), ('2.27', 'share_type_access', True), ('2.27', 'share_type_access', False), ('2.41', 'share_type_access', True), ('2.41', 'share_type_access', False), ) @ddt.unpack def test_view_builder_list(self, version, prefix, admin): view_builder = views_types.ViewBuilder() extra_specs = { constants.ExtraSpecs.SNAPSHOT_SUPPORT: 
True, constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT: False, constants.ExtraSpecs.REVERT_TO_SNAPSHOT_SUPPORT: True, constants.ExtraSpecs.MOUNT_SNAPSHOT_SUPPORT: True, } now = timeutils.utcnow().isoformat() raw_share_types = [] for i in range(0, 10): raw_share_types.append( dict( name='new_type', description='description_test', deleted=False, created_at=now, updated_at=now, extra_specs=extra_specs, required_extra_specs={}, deleted_at=None, id=42 + i ) ) request = fakes.HTTPRequest.blank("/v%s" % version[0], version=version, use_admin_context=admin) output = view_builder.index(request, raw_share_types) self.assertIn('share_types', output) expected_share_type = { 'name': 'new_type', 'extra_specs': extra_specs, '%s:is_public' % prefix: True, 'required_extra_specs': {}, } if self.is_microversion_ge(version, '2.41'): expected_share_type['description'] = 'description_test' for i in range(0, 10): expected_share_type['id'] = 42 + i self.assertDictMatch(expected_share_type, output['share_types'][i]) @ddt.data(None, True, 'true', 'false', 'all') def test_parse_is_public_valid(self, value): result = self.controller._parse_is_public(value) self.assertIn(result, (True, False, None)) def test_parse_is_public_invalid(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller._parse_is_public, 'fakefakefake') @ddt.data( ("new_name", "new_description", "wrong_bool"), (" ", "new_description", "true"), (" ", generate_long_description(256), "true"), (None, None, None), ) @ddt.unpack def test_share_types_update_with_invalid_parameter( self, name, description, is_public): req = fakes.HTTPRequest.blank('/v2/fake/types/4', version='2.50') body = make_update_body(name, description, is_public) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 4, body) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) def test_share_types_update_with_invalid_body(self): req = fakes.HTTPRequest.blank('/v2/fake/types/4', version='2.50') body = {'share_type': 'i_am_invalid_body'} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 4, body) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) def test_share_types_update(self): req = fakes.HTTPRequest.blank('/v2/fake/types/4', version='2.50') self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) body = make_update_body("update_share_type_4", "update_description_4", is_public=False) res_dict = self.controller.update(req, 4, body) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(2, len(res_dict)) self.assertEqual('update_share_type_4', res_dict['share_type']['name']) self.assertEqual('update_share_type_4', res_dict['volume_type']['name']) self.assertIs(False, res_dict['share_type']['share_type_access:is_public']) self.assertEqual('update_description_4', res_dict['share_type']['description']) self.assertEqual('update_description_4', res_dict['volume_type']['description']) def test_share_types_update_pre_v250(self): req = fakes.HTTPRequest.blank('/v2/fake/types/4', version='2.49') self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) body = make_update_body("update_share_type_4", "update_description_4", is_public=False) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.update, req, 4, body) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) def test_share_types_update_failed(self): self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) req = fakes.HTTPRequest.blank('/v2/fake/types/888', version='2.50') body = make_update_body("update_share_type_888", "update_description_888", 
is_public=False) self.assertRaises(webob.exc.HTTPInternalServerError, self.controller.update, req, 888, body) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) def test_share_types_update_not_found(self): self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) req = fakes.HTTPRequest.blank('/v2/fake/types/999', version='2.50') body = make_update_body("update_share_type_999", "update_description_999", is_public=False) self.assertRaises(exception.ShareTypeNotFound, self.controller.update, req, 999, body) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) def test_share_types_delete(self): req = fakes.HTTPRequest.blank('/v2/fake/types/1') self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.controller._delete(req, 1) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) def test_share_types_delete_not_found(self): self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) req = fakes.HTTPRequest.blank('/v2/fake/types/777') self.assertRaises(webob.exc.HTTPNotFound, self.controller._delete, req, '777') self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) def test_share_types_delete_in_use(self): req = fakes.HTTPRequest.blank('/v2/fake/types/1') self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) side_effect = exception.ShareTypeInUse(share_type_id='fake_id') self.mock_object(share_types, 'destroy', mock.Mock(side_effect=side_effect)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._delete, req, 1) def test_share_types_with_volumes_destroy(self): req = fakes.HTTPRequest.blank('/v2/fake/types/1') self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.controller._delete(req, 1) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) @ddt.data( (make_create_body("share_type_1"), "2.24"), (make_create_body(spec_driver_handles_share_servers=True), "2.24"), (make_create_body(spec_driver_handles_share_servers=False), "2.24"), (make_create_body("share_type_1"), "2.23"), (make_create_body(spec_driver_handles_share_servers=True), "2.23"), (make_create_body(spec_driver_handles_share_servers=False), "2.23"), (make_create_body(description="description_1"), "2.41")) @ddt.unpack def test_create(self, body, version): req = fakes.HTTPRequest.blank('/v2/fake/types', version=version) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) res_dict = self.controller.create(req, body) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(2, len(res_dict)) self.assertEqual('share_type_1', res_dict['share_type']['name']) self.assertEqual('share_type_1', res_dict['volume_type']['name']) if self.is_microversion_ge(version, '2.41'): self.assertEqual(body['share_type']['description'], res_dict['share_type']['description']) self.assertEqual(body['share_type']['description'], res_dict['volume_type']['description']) for extra_spec in constants.ExtraSpecs.REQUIRED: self.assertIn(extra_spec, res_dict['share_type']['required_extra_specs']) expected_extra_specs = { constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: True, } if self.is_microversion_lt(version, '2.24'): expected_extra_specs[constants.ExtraSpecs.SNAPSHOT_SUPPORT] = True expected_extra_specs.update(body['share_type']['extra_specs']) share_types.create.assert_called_once_with( mock.ANY, body['share_type']['name'], expected_extra_specs, True, description=body['share_type'].get('description')) @ddt.data(None, make_create_body(""), make_create_body("n" * 256), {'foo': {'a': 'b'}}, {'share_type': 'string'}, make_create_body(spec_driver_handles_share_servers=None), make_create_body(spec_driver_handles_share_servers=""), 
make_create_body(spec_driver_handles_share_servers=[]), ) def test_create_invalid_request_1_0(self, body): req = fakes.HTTPRequest.blank('/v2/fake/types', version="1.0") self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, body) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) @ddt.data(*constants.ExtraSpecs.REQUIRED) def test_create_invalid_request_2_23(self, required_extra_spec): req = fakes.HTTPRequest.blank('/v2/fake/types', version="2.24") self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) body = make_create_body("share_type_1") del body['share_type']['extra_specs'][required_extra_spec] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, body) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) def test_create_already_exists(self): side_effect = exception.ShareTypeExists(id='fake_id') self.mock_object(share_types, 'create', mock.Mock(side_effect=side_effect)) req = fakes.HTTPRequest.blank('/v2/fake/types', version="2.24") self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) body = make_create_body('share_type_1') self.assertRaises(webob.exc.HTTPConflict, self.controller.create, req, body) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) def test_create_not_found(self): self.mock_object(share_types, 'create', mock.Mock(side_effect=exception.NotFound)) req = fakes.HTTPRequest.blank('/v2/fake/types', version="2.24") self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) body = make_create_body('share_type_1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, req, body) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) def assert_share_type_list_equal(self, expected, observed): self.assertEqual(len(expected), len(observed)) expected = sorted(expected, key=lambda item: item['id']) observed = sorted(observed, key=lambda item: item['id']) for d1, d2 in zip(expected, observed): self.assertEqual(d1['id'], d2['id']) @ddt.data(('2.45', True), ('2.45', False), ('2.46', True), ('2.46', False)) @ddt.unpack def test_share_types_create_with_is_default_key(self, version, admin): req = fakes.HTTPRequest.blank('/v2/fake/types', version=version, use_admin_context=admin) body = make_create_body() res_dict = self.controller.create(req, body) if self.is_microversion_ge(version, '2.46'): self.assertIn('is_default', res_dict['share_type']) self.assertIs(False, res_dict['share_type']['is_default']) else: self.assertNotIn('is_default', res_dict['share_type']) @ddt.data(('2.45', True), ('2.45', False), ('2.46', True), ('2.46', False)) @ddt.unpack def test_share_types_index_with_is_default_key(self, version, admin): default_type_name = stub_default_name() CONF.set_default("default_share_type", default_type_name) self.mock_object(share_types, 'get_all_types', return_all_share_types) req = fakes.HTTPRequest.blank('/v2/fake/types', version=version, use_admin_context=admin) res_dict = self.controller.index(req) self.assertEqual(3, len(res_dict['share_types'])) for res in res_dict['share_types']: if self.is_microversion_ge(version, '2.46'): self.assertIn('is_default', res) expected = res['name'] == default_type_name self.assertIs(res['is_default'], expected) else: self.assertNotIn('is_default', res) @ddt.data(('2.45', True), ('2.45', False), ('2.46', True), ('2.46', False)) @ddt.unpack def test_share_types_default_with_is_default_key(self, version, admin): default_type_name = stub_default_name() CONF.set_default("default_share_type", default_type_name) self.mock_object(share_types, 
'get_default_share_type', return_default_share_type) req = fakes.HTTPRequest.blank('/v2/fake/types/default_share_type', version=version, use_admin_context=admin) res_dict = self.controller.default(req) if self.is_microversion_ge(version, '2.46'): self.assertIn('is_default', res_dict['share_type']) self.assertIs(True, res_dict['share_type']['is_default']) else: self.assertNotIn('is_default', res_dict['share_type']) def generate_type(type_id, is_public): return { 'id': type_id, 'name': u'test', 'description': u'ds_test', 'deleted': False, 'created_at': datetime.datetime(2012, 1, 1, 1, 1, 1, 1), 'updated_at': None, 'deleted_at': None, 'is_public': bool(is_public), 'extra_specs': {} } SHARE_TYPES = { '0': generate_type('0', True), '1': generate_type('1', True), '2': generate_type('2', False), '3': generate_type('3', False)} PROJ1_UUID = '11111111-1111-1111-1111-111111111111' PROJ2_UUID = '22222222-2222-2222-2222-222222222222' PROJ3_UUID = '33333333-3333-3333-3333-333333333333' ACCESS_LIST = [{'share_type_id': '2', 'project_id': PROJ2_UUID}, {'share_type_id': '2', 'project_id': PROJ3_UUID}, {'share_type_id': '3', 'project_id': PROJ3_UUID}] def fake_share_type_get(context, id, inactive=False, expected_fields=None): vol = SHARE_TYPES[id] if expected_fields and 'projects' in expected_fields: vol['projects'] = [a['project_id'] for a in ACCESS_LIST if a['share_type_id'] == id] return vol def _has_type_access(type_id, project_id): for access in ACCESS_LIST: if (access['share_type_id'] == type_id and access['project_id'] == project_id): return True return False def fake_share_type_get_all(context, inactive=False, filters=None): if filters is None or filters.get('is_public', None) is None: return SHARE_TYPES res = {} for k, v in SHARE_TYPES.items(): if filters['is_public'] and _has_type_access(k, context.project_id): res.update({k: v}) continue if v['is_public'] == filters['is_public']: res.update({k: v}) return res class FakeResponse(object): obj = {'share_type': {'id': '0'}, 'share_types': [{'id': '0'}, {'id': '2'}]} def attach(self, **kwargs): pass class FakeRequest(object): environ = {"manila.context": context.get_admin_context()} def get_db_share_type(self, resource_id): return SHARE_TYPES[resource_id] @ddt.ddt class ShareTypeAccessTest(test.TestCase): def setUp(self): super(ShareTypeAccessTest, self).setUp() self.controller = types.ShareTypesController() self.req = FakeRequest() self.mock_object(db, 'share_type_get', fake_share_type_get) self.mock_object(db, 'share_type_get_all', fake_share_type_get_all) def assertShareTypeListEqual(self, expected, observed): self.assertEqual(len(expected), len(observed)) expected = sorted(expected, key=lambda item: item['id']) observed = sorted(observed, key=lambda item: item['id']) for d1, d2 in zip(expected, observed): self.assertEqual(d1['id'], d2['id']) def test_list_type_access_public(self): """Querying os-share-type-access on public type should return 404.""" req = fakes.HTTPRequest.blank('/v1/fake/types/os-share-type-access', use_admin_context=True) self.assertRaises(webob.exc.HTTPNotFound, self.controller.share_type_access, req, '1') def test_list_type_access_private(self): expected = {'share_type_access': [ {'share_type_id': '2', 'project_id': PROJ2_UUID}, {'share_type_id': '2', 'project_id': PROJ3_UUID}, ]} result = self.controller.share_type_access(self.req, '2') self.assertEqual(expected, result) def test_list_with_no_context(self): req = fakes.HTTPRequest.blank('/v1/types/fake/types') self.assertRaises(webob.exc.HTTPForbidden, 
self.controller.share_type_access, req, 'fake') def test_list_not_found(self): side_effect = exception.ShareTypeNotFound(share_type_id='fake_id') self.mock_object(share_types, 'get_share_type', mock.Mock(side_effect=side_effect)) self.assertRaises(webob.exc.HTTPNotFound, self.controller.share_type_access, self.req, 'fake') def test_list_type_with_admin_default_proj1(self): expected = {'share_types': [{'id': '0'}, {'id': '1'}]} req = fakes.HTTPRequest.blank('/v1/fake/types', use_admin_context=True) req.environ['manila.context'].project_id = PROJ1_UUID result = self.controller.index(req) self.assertShareTypeListEqual(expected['share_types'], result['share_types']) def test_list_type_with_admin_default_proj2(self): expected = {'share_types': [{'id': '0'}, {'id': '1'}, {'id': '2'}]} req = fakes.HTTPRequest.blank('/v2/fake/types', use_admin_context=True) req.environ['manila.context'].project_id = PROJ2_UUID result = self.controller.index(req) self.assertShareTypeListEqual(expected['share_types'], result['share_types']) def test_list_type_with_admin_ispublic_true(self): expected = {'share_types': [{'id': '0'}, {'id': '1'}]} req = fakes.HTTPRequest.blank('/v2/fake/types?is_public=true', use_admin_context=True) result = self.controller.index(req) self.assertShareTypeListEqual(expected['share_types'], result['share_types']) def test_list_type_with_admin_ispublic_false(self): expected = {'share_types': [{'id': '2'}, {'id': '3'}]} req = fakes.HTTPRequest.blank('/v2/fake/types?is_public=false', use_admin_context=True) result = self.controller.index(req) self.assertShareTypeListEqual(expected['share_types'], result['share_types']) def test_list_type_with_admin_ispublic_false_proj2(self): expected = {'share_types': [{'id': '2'}, {'id': '3'}]} req = fakes.HTTPRequest.blank('/v2/fake/types?is_public=false', use_admin_context=True) req.environ['manila.context'].project_id = PROJ2_UUID result = self.controller.index(req) self.assertShareTypeListEqual(expected['share_types'], result['share_types']) def test_list_type_with_admin_ispublic_none(self): expected = {'share_types': [ {'id': '0'}, {'id': '1'}, {'id': '2'}, {'id': '3'}, ]} req = fakes.HTTPRequest.blank('/v2/fake/types?is_public=all', use_admin_context=True) result = self.controller.index(req) self.assertShareTypeListEqual(expected['share_types'], result['share_types']) def test_list_type_with_no_admin_default(self): expected = {'share_types': [{'id': '0'}, {'id': '1'}]} req = fakes.HTTPRequest.blank('/v2/fake/types', use_admin_context=False) result = self.controller.index(req) self.assertShareTypeListEqual(expected['share_types'], result['share_types']) def test_list_type_with_no_admin_ispublic_true(self): expected = {'share_types': [{'id': '0'}, {'id': '1'}]} req = fakes.HTTPRequest.blank('/v2/fake/types?is_public=true', use_admin_context=False) result = self.controller.index(req) self.assertShareTypeListEqual(expected['share_types'], result['share_types']) def test_list_type_with_no_admin_ispublic_false(self): expected = {'share_types': [{'id': '0'}, {'id': '1'}]} req = fakes.HTTPRequest.blank('/v2/fake/types?is_public=false', use_admin_context=False) result = self.controller.index(req) self.assertShareTypeListEqual(expected['share_types'], result['share_types']) def test_list_type_with_no_admin_ispublic_none(self): expected = {'share_types': [{'id': '0'}, {'id': '1'}]} req = fakes.HTTPRequest.blank('/v2/fake/types?is_public=all', use_admin_context=False) result = self.controller.index(req) self.assertShareTypeListEqual(expected['share_types'], 
result['share_types']) def test_add_project_access(self): def stub_add_share_type_access(context, type_id, project_id): self.assertEqual('3', type_id, "type_id") self.assertEqual(PROJ2_UUID, project_id, "project_id") self.mock_object(db, 'share_type_access_add', stub_add_share_type_access) body = {'addProjectAccess': {'project': PROJ2_UUID}} req = fakes.HTTPRequest.blank('/v2/fake/types/2/action', use_admin_context=True) result = self.controller._add_project_access(req, '3', body) self.assertEqual(202, result.status_code) @ddt.data({'addProjectAccess': {'project': 'fake_project'}}, {'invalid': {'project': PROJ2_UUID}}) def test_add_project_access_bad_request(self, body): req = fakes.HTTPRequest.blank('/v2/fake/types/2/action', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._add_project_access, req, '2', body) def test_add_project_access_with_no_admin_user(self): req = fakes.HTTPRequest.blank('/v2/fake/types/2/action', use_admin_context=False) body = {'addProjectAccess': {'project': PROJ2_UUID}} self.assertRaises(webob.exc.HTTPForbidden, self.controller._add_project_access, req, '2', body) def test_add_project_access_with_already_added_access(self): def stub_add_share_type_access(context, type_id, project_id): raise exception.ShareTypeAccessExists(share_type_id=type_id, project_id=project_id) self.mock_object(db, 'share_type_access_add', stub_add_share_type_access) body = {'addProjectAccess': {'project': PROJ2_UUID}} req = fakes.HTTPRequest.blank('/v2/fake/types/2/action', use_admin_context=True) self.assertRaises(webob.exc.HTTPConflict, self.controller._add_project_access, req, '3', body) def test_add_project_access_to_public_share_type(self): share_type_id = '3' body = {'addProjectAccess': {'project': PROJ2_UUID}} self.mock_object(share_types, 'get_share_type', mock.Mock(return_value={"is_public": True})) req = fakes.HTTPRequest.blank('/v2/fake/types/2/action', use_admin_context=True) self.assertRaises(webob.exc.HTTPConflict, self.controller._add_project_access, req, share_type_id, body) share_types.get_share_type.assert_called_once_with( mock.ANY, share_type_id) def test_remove_project_access(self): share_type = stub_share_type(2) share_type['is_public'] = False self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=share_type)) self.mock_object(share_types, 'remove_share_type_access') body = {'removeProjectAccess': {'project': PROJ2_UUID}} req = fakes.HTTPRequest.blank('/v2/fake/types/2/action', use_admin_context=True) result = self.controller._remove_project_access(req, '2', body) self.assertEqual(202, result.status_code) @ddt.data({'removeProjectAccess': {'project': 'fake_project'}}, {'invalid': {'project': PROJ2_UUID}}) def test_remove_project_access_bad_request(self, body): req = fakes.HTTPRequest.blank('/v2/fake/types/2/action', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._remove_project_access, req, '2', body) def test_remove_project_access_with_bad_access(self): def stub_remove_share_type_access(context, type_id, project_id): raise exception.ShareTypeAccessNotFound(share_type_id=type_id, project_id=project_id) self.mock_object(db, 'share_type_access_remove', stub_remove_share_type_access) body = {'removeProjectAccess': {'project': PROJ2_UUID}} req = fakes.HTTPRequest.blank('/v2/fake/types/2/action', use_admin_context=True) self.assertRaises(webob.exc.HTTPNotFound, self.controller._remove_project_access, req, '3', body) def test_remove_project_access_with_no_admin_user(self): req = 
fakes.HTTPRequest.blank('/v2/fake/types/2/action', use_admin_context=False) body = {'removeProjectAccess': {'project': PROJ2_UUID}} self.assertRaises(webob.exc.HTTPForbidden, self.controller._remove_project_access, req, '2', body) def test_remove_project_access_from_public_share_type(self): share_type_id = '3' body = {'removeProjectAccess': {'project': PROJ2_UUID}} self.mock_object(share_types, 'get_share_type', mock.Mock(return_value={"is_public": True})) req = fakes.HTTPRequest.blank('/v2/fake/types/2/action', use_admin_context=True) self.assertRaises(webob.exc.HTTPConflict, self.controller._remove_project_access, req, share_type_id, body) share_types.get_share_type.assert_called_once_with( mock.ANY, share_type_id) def test_remove_project_access_by_nonexistent_share_type(self): self.mock_object(share_types, 'get_share_type', return_share_types_get_share_type) body = {'removeProjectAccess': {'project': PROJ2_UUID}} req = fakes.HTTPRequest.blank('/v2/fake/types/777/action', use_admin_context=True) self.assertRaises(webob.exc.HTTPNotFound, self.controller._remove_project_access, req, '777', body) manila-10.0.0/manila/tests/api/v2/test_share_group_type_specs.py0000664000175000017500000003725713656750227024776 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
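# The tests in this module cover the share group type "group specs" API
# controller: index, show, create, update and delete of group_specs.
# A request body for these endpoints has the shape, for example:
#
#     {'group_specs': {'consistent_snapshots': 'true'}}
#
# The "too small"/"too big" cases below verify that empty keys or values and
# keys or values of 256 characters are rejected with HTTPBadRequest.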
from unittest import mock import ddt from oslo_utils import strutils import webob from manila.api.v2 import share_group_type_specs from manila import exception from manila import policy from manila import test from manila.tests.api import fakes import manila.wsgi CONSISTENT_SNAPSHOTS = 'consistent_snapshots' def return_create_share_group_type_specs(context, share_group_type_id, group_specs): return stub_share_group_type_specs() def return_share_group_type_specs(context, share_group_type_id): return stub_share_group_type_specs() def return_empty_share_group_type_specs(context, share_group_type_id): return {} def delete_share_group_type_specs(context, share_group_type_id, key): pass def delete_share_group_type_specs_not_found(context, share_group_type_id, key): raise exception.ShareGroupTypeSpecsNotFound("Not Found") def stub_share_group_type_specs(): return {"key%d" % i: "value%d" % i for i in (1, 2, 3, 4, 5)} def get_large_string(): return "s" * 256 def get_group_specs_dict(group_specs, include_required=True): if not group_specs: group_specs = {} return {'group_specs': group_specs} def fake_request(url, admin=False, version='2.31', experimental=True, **kwargs): return fakes.HTTPRequest.blank( url, use_admin_context=admin, version=version, experimental=experimental, **kwargs) SG_GRADUATION_VERSION = '2.55' @ddt.ddt class ShareGroupTypesSpecsTest(test.TestCase): def setUp(self): super(ShareGroupTypesSpecsTest, self).setUp() self.flags(host='fake') self.mock_object(manila.db, 'share_group_type_get') self.api_path = '/v2/fake/share-group-types/1/group_specs' self.controller = ( share_group_type_specs.ShareGroupTypeSpecsController()) self.resource_name = self.controller.resource_name self.mock_policy_check = self.mock_object(policy, 'check_policy') @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_index(self, microversion, experimental): self.mock_object( manila.db, 'share_group_type_specs_get', return_share_group_type_specs) req = fake_request(self.api_path, version=microversion, experimental=experimental) req_context = req.environ['manila.context'] res_dict = self.controller.index(req, 1) self.assertEqual('value1', res_dict['group_specs']['key1']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'index') def test_index_no_data(self): self.mock_object(manila.db, 'share_group_type_specs_get', return_empty_share_group_type_specs) req = fake_request(self.api_path) req_context = req.environ['manila.context'] res_dict = self.controller.index(req, 1) self.assertEqual(0, len(res_dict['group_specs'])) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'index') @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_show(self, microversion, experimental): self.mock_object(manila.db, 'share_group_type_specs_get', return_share_group_type_specs) req = fake_request(self.api_path + '/key5', version=microversion, experimental=experimental) req_context = req.environ['manila.context'] res_dict = self.controller.show(req, 1, 'key5') self.assertEqual('value5', res_dict['key5']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'show') def test_show_spec_not_found(self): self.mock_object(manila.db, 'share_group_type_specs_get', return_empty_share_group_type_specs) req = fake_request(self.api_path + '/key6') req_context = 
req.environ['manila.context'] self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, 1, 'key6') self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'show') @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test_delete(self, microversion, experimental): self.mock_object(manila.db, 'share_group_type_specs_delete', delete_share_group_type_specs) req = fake_request(self.api_path + '/key5', version=microversion, experimental=experimental) req_context = req.environ['manila.context'] self.controller.delete(req, 1, 'key5') self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'delete') def test_delete_not_found(self): self.mock_object(manila.db, 'share_group_type_specs_delete', delete_share_group_type_specs_not_found) req = fake_request(self.api_path + '/key6') req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, 1, 'key6') self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'delete') @ddt.data( get_group_specs_dict({}), {'foo': 'bar'}, {CONSISTENT_SNAPSHOTS + 'foo': True}, {'foo' + CONSISTENT_SNAPSHOTS: False}, *[{CONSISTENT_SNAPSHOTS: v} for v in strutils.TRUE_STRINGS + strutils.FALSE_STRINGS] ) def test_create_experimental(self, data): self._validate_create(data) @ddt.data( get_group_specs_dict({}), {'foo': 'bar'}, {CONSISTENT_SNAPSHOTS + 'foo': True}, {'foo' + CONSISTENT_SNAPSHOTS: False} ) def test_create_non_experimental(self, data): self._validate_create(data, microversion=SG_GRADUATION_VERSION, experimental=False) def _validate_create(self, data, microversion='2.31', experimental=True): body = {'group_specs': data} mock_spec_update_or_create = self.mock_object( manila.db, 'share_group_type_specs_update_or_create', mock.Mock(return_value=return_create_share_group_type_specs)) req = fake_request(self.api_path, version=microversion, experimental=experimental) req_context = req.environ['manila.context'] res_dict = self.controller.create(req, 1, body) for k, v in data.items(): self.assertIn(k, res_dict['group_specs']) self.assertEqual(v, res_dict['group_specs'][k]) mock_spec_update_or_create.assert_called_once_with( req.environ['manila.context'], 1, body['group_specs']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') def test_create_with_too_small_key(self): self.mock_object( manila.db, 'share_group_type_specs_update_or_create', mock.Mock(return_value=return_create_share_group_type_specs)) too_small_key = "" body = {"group_specs": {too_small_key: "value"}} req = fake_request(self.api_path) req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, 1, body) self.assertFalse( manila.db.share_group_type_specs_update_or_create.called) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') def test_create_with_too_big_key(self): self.mock_object( manila.db, 'share_group_type_specs_update_or_create', mock.Mock(return_value=return_create_share_group_type_specs)) too_big_key = "k" * 256 body = {"group_specs": {too_big_key: "value"}} req = fake_request(self.api_path) req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, 1, body) self.assertFalse( manila.db.share_group_type_specs_update_or_create.called) self.mock_policy_check.assert_called_once_with( req_context, 
self.resource_name, 'create') def test_create_with_too_small_value(self): self.mock_object( manila.db, 'share_group_type_specs_update_or_create', mock.Mock(return_value=return_create_share_group_type_specs)) too_small_value = "" body = {"group_specs": {"key": too_small_value}} req = fake_request(self.api_path) req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, 1, body) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') self.assertFalse( manila.db.share_group_type_specs_update_or_create.called) def test_create_with_too_big_value(self): self.mock_object( manila.db, 'share_group_type_specs_update_or_create', mock.Mock(return_value=return_create_share_group_type_specs)) too_big_value = "v" * 256 body = {"extra_specs": {"key": too_big_value}} req = fake_request(self.api_path) req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, 1, body) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') self.assertFalse( manila.db.share_group_type_specs_update_or_create.called) def test_create_key_allowed_chars(self): mock_return_value = stub_share_group_type_specs() mock_spec_update_or_create = self.mock_object( manila.db, 'share_group_type_specs_update_or_create', mock.Mock(return_value=mock_return_value)) body = get_group_specs_dict({"other_alphanum.-_:": "value1"}) req = fake_request(self.api_path) req_context = req.environ['manila.context'] res_dict = self.controller.create(req, 1, body) self.assertEqual(mock_return_value['key1'], res_dict['group_specs']['other_alphanum.-_:']) mock_spec_update_or_create.assert_called_once_with( req.environ['manila.context'], 1, body['group_specs']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') def test_create_too_many_keys_allowed_chars(self): mock_return_value = stub_share_group_type_specs() mock_spec_update_or_create = self.mock_object( manila.db, 'share_group_type_specs_update_or_create', mock.Mock(return_value=mock_return_value)) body = get_group_specs_dict({ "other_alphanum.-_:": "value1", "other2_alphanum.-_:": "value2", "other3_alphanum.-_:": "value3", }) req = fake_request(self.api_path) req_context = req.environ['manila.context'] res_dict = self.controller.create(req, 1, body) self.assertEqual(mock_return_value['key1'], res_dict['group_specs']['other_alphanum.-_:']) self.assertEqual(mock_return_value['key2'], res_dict['group_specs']['other2_alphanum.-_:']) self.assertEqual(mock_return_value['key3'], res_dict['group_specs']['other3_alphanum.-_:']) mock_spec_update_or_create.assert_called_once_with( req_context, 1, body['group_specs']) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') @ddt.data({'microversion': '2.31', 'experimental': True}, {'microversion': SG_GRADUATION_VERSION, 'experimental': False}) @ddt.unpack def test__update_call(self, microversion, experimental): req = fake_request(self.api_path + '/key1', version=microversion, experimental=experimental) sg_id = 'fake_id' key = 'fake_key' body = {"group_specs": {"key1": "fake_value"}} self.mock_object(self.controller, '_update') self.controller.update(req, sg_id, key, body) self.controller._update.assert_called_once_with(req, sg_id, key, body) def test_update_item_too_many_keys(self): self.mock_object(manila.db, 'share_group_type_specs_update_or_create') body = {"key1": "value1", "key2": "value2"} req = 
fake_request(self.api_path + '/key1') req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 1, 'key1', body) self.assertFalse( manila.db.share_group_type_specs_update_or_create.called) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'update') def test_update_item_body_uri_mismatch(self): self.mock_object(manila.db, 'share_group_type_specs_update_or_create') body = {"key1": "value1"} req = fake_request(self.api_path + '/bad') req_context = req.environ['manila.context'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 1, 'bad', body) self.assertFalse( manila.db.share_group_type_specs_update_or_create.called) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'update') @ddt.data(None, {}, {"group_specs": {CONSISTENT_SNAPSHOTS: ""}}) def test_update_invalid_body(self, body): req = fake_request('/v2/fake/share-group-types/1/group_specs') req_context = req.environ['manila.context'] req.method = 'POST' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, '1', body) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'update') @ddt.data( None, {}, {'foo': {'a': 'b'}}, {'group_specs': 'string'}, {"group_specs": {"ke/y1": "value1"}}, {"key1": "value1", "ke/y2": "value2", "key3": "value3"}, {"group_specs": {CONSISTENT_SNAPSHOTS: ""}}, {"group_specs": {"": "value"}}, {"group_specs": {"t": get_large_string()}}, {"group_specs": {get_large_string(): get_large_string()}}, {"group_specs": {get_large_string(): "v"}}, {"group_specs": {"k": ""}}) def test_create_invalid_body(self, body): req = fake_request('/v2/fake/share-group-types/1/group_specs') req_context = req.environ['manila.context'] req.method = 'POST' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, '1', body) self.mock_policy_check.assert_called_once_with( req_context, self.resource_name, 'create') manila-10.0.0/manila/tests/api/v2/test_share_snapshot_instance_export_locations.py0000664000175000017500000001026713656750227030573 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
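# The tests in this module cover the share snapshot instance export
# locations API: listing and showing export locations of a snapshot
# instance. Requests here use microversion 2.32; versions 2.31 and older
# are expected to raise VersionNotFoundForAPIMethod.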
from unittest import mock

import ddt

from manila.api.v2 import share_snapshot_instance_export_locations as exp_loc
from manila.common import constants
from manila import context
from manila.db.sqlalchemy import api as db_api
from manila import exception
from manila import test
from manila.tests.api import fakes
from manila.tests import db_utils


@ddt.ddt
class ShareSnapshotInstanceExportLocationsAPITest(test.TestCase):

    def _get_request(self, version="2.32", use_admin_context=True):
        req = fakes.HTTPRequest.blank(
            '/snapshot-instances/%s/export-locations' %
            self.snapshot_instance['id'],
            version=version, use_admin_context=use_admin_context)
        return req

    def setUp(self):
        super(ShareSnapshotInstanceExportLocationsAPITest, self).setUp()
        self.controller = (
            exp_loc.ShareSnapshotInstanceExportLocationController())
        self.share = db_utils.create_share()
        self.snapshot = db_utils.create_snapshot(
            status=constants.STATUS_AVAILABLE, share_id=self.share['id'])
        self.snapshot_instance = db_utils.create_snapshot_instance(
            'fake_snapshot_id_1', status=constants.STATUS_CREATING,
            share_instance_id=self.share['instance']['id'])
        self.values = {
            'share_snapshot_instance_id': self.snapshot_instance['id'],
            'path': 'fake/user_path',
            'is_admin_only': True,
        }
        self.el = db_api.share_snapshot_instance_export_location_create(
            context.get_admin_context(), self.values)
        self.req = self._get_request()

    def test_index(self):
        self.mock_object(
            db_api, 'share_snapshot_instance_export_locations_get_all',
            mock.Mock(return_value=[self.el]))
        out = self.controller.index(self._get_request('2.32'),
                                    self.snapshot_instance['id'])
        values = {
            'share_snapshot_export_locations': [{
                'share_snapshot_instance_id': self.snapshot_instance['id'],
                'path': 'fake/user_path',
                'is_admin_only': True,
                'id': self.el['id'],
                'links': [{
                    'href': 'http://localhost/v1/fake/'
                            'share_snapshot_export_locations/' +
                            self.el['id'],
                    'rel': 'self'
                }, {
                    'href': 'http://localhost/fake/'
                            'share_snapshot_export_locations/' +
                            self.el['id'],
                    'rel': 'bookmark'
                }],
            }]
        }
        self.assertSubDictMatch(values, out)

    def test_show(self):
        out = self.controller.show(self._get_request('2.32'),
                                   self.snapshot_instance['id'],
                                   self.el['id'])
        self.assertSubDictMatch(
            {'share_snapshot_export_location': self.values}, out)

    @ddt.data('1.0', '2.0', '2.5', '2.8', '2.31')
    def test_list_with_unsupported_version(self, version):
        self.assertRaises(
            exception.VersionNotFoundForAPIMethod,
            self.controller.index,
            self._get_request(version),
            self.snapshot_instance['id'],
        )

    @ddt.data('1.0', '2.0', '2.5', '2.8', '2.31')
    def test_show_with_unsupported_version(self, version):
        self.assertRaises(
            exception.VersionNotFoundForAPIMethod,
            self.controller.show,
            self._get_request(version),
            self.snapshot['id'],
            self.el['id']
        )
manila-10.0.0/manila/tests/api/v2/test_share_access_metadata.py0000664000175000017500000001222413656750227024470 0ustar zuulzuul00000000000000# Copyright (c) 2018 Huawei Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
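# The tests in this module cover share access rule metadata (microversion
# 2.45): updating and deleting metadata on a share access rule, validation
# of malformed bodies and oversized keys/values, and rejection of older
# microversions with VersionNotFoundForAPIMethod.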
from unittest import mock

import ddt
import webob

from manila.api.v2 import share_access_metadata
from manila.api.v2 import share_accesses
from manila import context
from manila import exception
from manila import policy
from manila import test
from manila.tests.api import fakes
from manila.tests import db_utils
from oslo_utils import uuidutils


@ddt.ddt
class ShareAccessesMetadataAPITest(test.TestCase):

    def _get_request(self, version="2.45", use_admin_context=True):
        req = fakes.HTTPRequest.blank(
            '/v2/share-access-rules',
            version=version, use_admin_context=use_admin_context)
        return req

    def setUp(self):
        super(ShareAccessesMetadataAPITest, self).setUp()
        self.controller = (
            share_access_metadata.ShareAccessMetadataController())
        self.access_controller = (
            share_accesses.ShareAccessesController())
        self.resource_name = self.controller.resource_name
        self.admin_context = context.RequestContext('admin', 'fake', True)
        self.member_context = context.RequestContext('fake', 'fake')
        self.mock_policy_check = self.mock_object(
            policy, 'check_policy', mock.Mock(return_value=True))
        self.share = db_utils.create_share()
        self.access = db_utils.create_share_access(
            id=uuidutils.generate_uuid(), share_id=self.share['id'])

    @ddt.data({'body': {'metadata': {'key1': 'v1'}}},
              {'body': {'metadata': {'test_key1': 'test_v1'}}},
              {'body': {'metadata': {'key1': 'v2'}}})
    @ddt.unpack
    def test_update_metadata(self, body):
        url = self._get_request()
        update = self.controller.update(url, self.access['id'], body=body)
        self.assertEqual(body, update)
        show_result = self.access_controller.show(url, self.access['id'])
        self.assertEqual(1, len(show_result))
        self.assertIn(self.access['id'], show_result['access']['id'])
        self.assertEqual(body['metadata'], show_result['access']['metadata'])

    def test_delete_metadata(self):
        body = {'metadata': {'test_key3': 'test_v3'}}
        url = self._get_request()
        self.controller.update(url, self.access['id'], body=body)
        self.controller.delete(url, self.access['id'], 'test_key3')
        show_result = self.access_controller.show(url, self.access['id'])
        self.assertEqual(1, len(show_result))
        self.assertIn(self.access['id'], show_result['access']['id'])
        self.assertNotIn('test_key3', show_result['access']['metadata'])

    def test_update_access_metadata_with_access_id_not_found(self):
        self.assertRaises(
            webob.exc.HTTPNotFound, self.controller.update,
            self._get_request(), 'not_exist_access_id',
            {'metadata': {'key1': 'v1'}})

    def test_update_access_metadata_with_body_error(self):
        self.assertRaises(
            webob.exc.HTTPBadRequest, self.controller.update,
            self._get_request(), self.access['id'],
            {'metadata_error': {'key1': 'v1'}})

    @ddt.data({'metadata': {'key1': 'v1', 'key2': None}},
              {'metadata': {None: 'v1', 'key2': 'v2'}},
              {'metadata': {'k' * 256: 'v2'}},
              {'metadata': {'key1': 'v' * 1024}})
    @ddt.unpack
    def test_update_metadata_with_invalid_metadata(self, metadata):
        self.assertRaises(
            webob.exc.HTTPBadRequest, self.controller.update,
            self._get_request(), self.access['id'],
            {'metadata': metadata})

    def test_delete_access_metadata_not_found(self):
        body = {'metadata': {'test_key_exist': 'test_v_exsit'}}
        update = self.controller.update(
            self._get_request(), self.access['id'], body=body)
        self.assertEqual(body, update)
        self.assertRaises(
            webob.exc.HTTPNotFound, self.controller.delete,
            self._get_request(), self.access['id'], 'key1')

    @ddt.data('1.0', '2.0', '2.8', '2.44')
    def test_update_metadata_unsupported_version(self, version):
        self.assertRaises(
            exception.VersionNotFoundForAPIMethod,
            self.controller.update,
            self._get_request(version=version),
self.access['id'], {'metadata': {'key1': 'v1'}}) @ddt.data('1.0', '2.0', '2.43') def test_delete_metadata_with_unsupported_version(self, version): self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller.delete, self._get_request(version=version), self.access['id'], 'key1') manila-10.0.0/manila/tests/api/extensions/0000775000175000017500000000000013656750362020443 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/extensions/__init__.py0000664000175000017500000000000013656750227022542 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/extensions/foxinsocks.py0000664000175000017500000000555513656750227023215 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob.exc from manila.api import extensions from manila.api.openstack import wsgi class FoxInSocksController(object): def index(self, req): return "Try to say this Mr. Knox, sir..." class FoxInSocksServerControllerExtension(wsgi.Controller): @wsgi.action('add_tweedle') def _add_tweedle(self, req, id, body): return "Tweedle Beetle Added." @wsgi.action('delete_tweedle') def _delete_tweedle(self, req, id, body): return "Tweedle Beetle Deleted." @wsgi.action('fail') def _fail(self, req, id, body): raise webob.exc.HTTPBadRequest(explanation='Tweedle fail') class FoxInSocksFlavorGooseControllerExtension(wsgi.Controller): @wsgi.extends def show(self, req, resp_obj, id): # NOTE: This only handles JSON responses. # You can use content type header to test for XML. resp_obj.obj['flavor']['googoose'] = req.GET.get('chewing') class FoxInSocksFlavorBandsControllerExtension(wsgi.Controller): @wsgi.extends def show(self, req, resp_obj, id): # NOTE: This only handles JSON responses. # You can use content type header to test for XML. resp_obj.obj['big_bands'] = 'Pig Bands!' class Foxinsocks(extensions.ExtensionDescriptor): """The Fox In Socks Extension.""" name = "Fox In Socks" alias = "FOXNSOX" namespace = "http://www.fox.in.socks/api/ext/pie/v1.0" updated = "2011-01-22T13:25:27-06:00" def __init__(self, ext_mgr): ext_mgr.register(self) def get_resources(self): resources = [] resource = extensions.ResourceExtension('foxnsocks', FoxInSocksController()) resources.append(resource) return resources def get_controller_extensions(self): extension_list = [] extension_set = [ (FoxInSocksServerControllerExtension, 'servers'), (FoxInSocksFlavorGooseControllerExtension, 'flavors'), (FoxInSocksFlavorBandsControllerExtension, 'flavors'), ] for klass, collection in extension_set: controller = klass() ext = extensions.ControllerExtension(self, collection, controller) extension_list.append(ext) return extension_list manila-10.0.0/manila/tests/api/test_middleware.py0000664000175000017500000000634713656750227022004 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett Packard Enterprise Development LP # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. See the License for the specific language governing permissions and # limitations under the License. import ddt from oslo_config import cfg from manila.tests.integrated import integrated_helpers @ddt.ddt class TestCORSMiddleware(integrated_helpers._IntegratedTestBase): '''Provide a basic smoke test to ensure CORS middleware is active. The tests below provide minimal confirmation that the CORS middleware is active, and may be configured. For comprehensive tests, please consult the test suite in oslo_middleware. ''' def setUp(self): # Here we monkeypatch GroupAttr.__getattr__, necessary because the # paste.ini method of initializing this middleware creates its own # ConfigOpts instance, bypassing the regular config fixture. # Mocking also does not work, as accessing an attribute on a mock # object will return a MagicMock instance, which will fail # configuration type checks. def _mock_getattr(instance, key): if key != 'allowed_origin': return self._original_call_method(instance, key) return "http://valid.example.com" self._original_call_method = cfg.ConfigOpts.GroupAttr.__getattr__ cfg.ConfigOpts.GroupAttr.__getattr__ = _mock_getattr # Initialize the application after all the config overrides are in # place. super(TestCORSMiddleware, self).setUp() def tearDown(self): super(TestCORSMiddleware, self).tearDown() # Reset the configuration overrides. cfg.ConfigOpts.GroupAttr.__getattr__ = self._original_call_method @ddt.data( ('http://valid.example.com', 'http://valid.example.com'), ('http://invalid.example.com', None), ) @ddt.unpack def test_options_request(self, origin_url, acao_header_expected): response = self.api.api_request( '', method='OPTIONS', headers={ 'Origin': origin_url, 'Access-Control-Request-Method': 'GET', } ) self.assertEqual(200, response.status) self.assertEqual(acao_header_expected, response.getheader('Access-Control-Allow-Origin')) @ddt.data( ('http://valid.example.com', 'http://valid.example.com'), ('http://invalid.example.com', None), ) @ddt.unpack def test_get_request(self, origin_url, acao_header_expected): response = self.api.api_request( '', method='GET', headers={ 'Origin': origin_url } ) self.assertEqual(404, response.status) self.assertEqual(acao_header_expected, response.getheader('Access-Control-Allow-Origin')) manila-10.0.0/manila/tests/api/test_extensions.py0000664000175000017500000001447313656750227022065 0ustar zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
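# The tests in this module cover the API extension framework: listing and
# showing loaded extensions via /extensions (using the Fox In Socks /
# FOXNSOX stub extension from manila.tests.api.extensions.foxinsocks), the
# extension_authorizer policy helper, and ID format handling for extension
# resources (e.g. "foo.json" and "foo.xml" resolving to "foo").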
from unittest import mock import ddt import iso8601 from oslo_config import cfg from oslo_serialization import jsonutils import webob from manila.api import extensions from manila.api.v1 import router from manila import policy from manila import test CONF = cfg.CONF class ExtensionTestCase(test.TestCase): def setUp(self): super(ExtensionTestCase, self).setUp() ext_list = CONF.osapi_share_extension[:] fox = ('manila.tests.api.extensions.foxinsocks.Foxinsocks') if fox not in ext_list: ext_list.append(fox) self.flags(osapi_share_extension=ext_list) class ExtensionControllerTest(ExtensionTestCase): def setUp(self): super(ExtensionControllerTest, self).setUp() self.ext_list = [] self.ext_list.sort() def test_list_extensions_json(self): app = router.APIRouter() request = webob.Request.blank("/fake/extensions") response = request.get_response(app) self.assertEqual(200, response.status_int) # Make sure we have all the extensions, extra extensions being OK. data = jsonutils.loads(response.body) names = [str(x['name']) for x in data['extensions'] if str(x['name']) in self.ext_list] names.sort() self.assertEqual(self.ext_list, names) # Ensure all the timestamps are valid according to iso8601 for ext in data['extensions']: iso8601.parse_date(ext['updated']) # Make sure that at least Fox in Sox is correct. (fox_ext, ) = [ x for x in data['extensions'] if x['alias'] == 'FOXNSOX'] self.assertEqual( {'name': 'Fox In Socks', 'updated': '2011-01-22T13:25:27-06:00', 'description': 'The Fox In Socks Extension.', 'alias': 'FOXNSOX', 'links': []}, fox_ext) for ext in data['extensions']: url = '/fake/extensions/%s' % ext['alias'] request = webob.Request.blank(url) response = request.get_response(app) output = jsonutils.loads(response.body) self.assertEqual(ext['alias'], output['extension']['alias']) def test_get_extension_json(self): app = router.APIRouter() request = webob.Request.blank("/fake/extensions/FOXNSOX") response = request.get_response(app) self.assertEqual(200, response.status_int) data = jsonutils.loads(response.body) self.assertEqual( {"name": "Fox In Socks", "updated": "2011-01-22T13:25:27-06:00", "description": "The Fox In Socks Extension.", "alias": "FOXNSOX", "links": []}, data['extension']) def test_get_non_existing_extension_json(self): app = router.APIRouter() request = webob.Request.blank("/fake/extensions/4") response = request.get_response(app) self.assertEqual(404, response.status_int) @ddt.ddt class ExtensionAuthorizeTestCase(test.TestCase): @ddt.unpack @ddt.data({'action': 'fake', 'valid': 'api_extension:fake:fake'}, {'action': None, 'valid': 'api_extension:fake'}) def test_extension_authorizer(self, action, valid): self.mock_object(policy, 'enforce') target = 'fake' extensions.extension_authorizer('api', 'fake')( {}, target, action) policy.enforce.assert_called_once_with(mock.ANY, valid, target) def test_extension_authorizer_empty_target(self): self.mock_object(policy, 'enforce') target = None context = mock.Mock() context.project_id = 'fake' context.user_id = 'fake' extensions.extension_authorizer('api', 'fake')( context, target, 'fake') policy.enforce.assert_called_once_with( mock.ANY, mock.ANY, {'project_id': 'fake', 'user_id': 'fake'}) class StubExtensionManager(object): """Provides access to Tweedle Beetles.""" name = "Tweedle Beetle Extension" alias = "TWDLBETL" def __init__(self, resource_ext=None, action_ext=None, request_ext=None, controller_ext=None): self.resource_ext = resource_ext self.controller_ext = controller_ext self.extra_resource_ext = None def get_resources(self): 
resource_exts = [] if self.resource_ext: resource_exts.append(self.resource_ext) if self.extra_resource_ext: resource_exts.append(self.extra_resource_ext) return resource_exts def get_controller_extensions(self): controller_extensions = [] if self.controller_ext: controller_extensions.append(self.controller_ext) return controller_extensions class ExtensionControllerIdFormatTest(test.TestCase): def _bounce_id(self, test_id): class BounceController(object): def show(self, req, id): return id res_ext = extensions.ResourceExtension('bounce', BounceController()) manager = StubExtensionManager(res_ext) app = router.APIRouter(manager) request = webob.Request.blank("/fake/bounce/%s" % test_id) response = request.get_response(app) return response.body def test_id_with_xml_format(self): result = self._bounce_id('foo.xml') self.assertEqual('foo', result.decode('UTF-8')) def test_id_with_json_format(self): result = self._bounce_id('foo.json') self.assertEqual('foo', result.decode('UTF-8')) def test_id_with_bad_format(self): result = self._bounce_id('foo.bad') self.assertEqual('foo.bad', result.decode('UTF-8')) manila-10.0.0/manila/tests/api/test_common.py0000664000175000017500000004124013656750227021146 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test suites for 'common' code used throughout the OpenStack HTTP API. """ from unittest import mock import ddt import webob import webob.exc from manila.api import common from manila import exception from manila import policy from manila import test from manila.tests.api import fakes from manila.tests.db import fakes as db_fakes class LimiterTest(test.TestCase): """Unit tests for the `manila.api.common.limited` method. Takes in a list of items and, depending on the 'offset' and 'limit' GET params, returns a subset or complete set of the given items. 
""" def setUp(self): """Run before each test.""" super(LimiterTest, self).setUp() self.tiny = list(range(1)) self.small = list(range(10)) self.medium = list(range(1000)) self.large = list(range(10000)) def test_limiter_offset_zero(self): """Test offset key works with 0.""" req = webob.Request.blank('/?offset=0') self.assertEqual(self.tiny, common.limited(self.tiny, req)) self.assertEqual(self.small, common.limited(self.small, req)) self.assertEqual(self.medium, common.limited(self.medium, req)) self.assertEqual(self.large[:1000], common.limited(self.large, req)) def test_limiter_offset_medium(self): """Test offset key works with a medium sized number.""" req = webob.Request.blank('/?offset=10') self.assertEqual([], common.limited(self.tiny, req)) self.assertEqual(self.small[10:], common.limited(self.small, req)) self.assertEqual(self.medium[10:], common.limited(self.medium, req)) self.assertEqual(self.large[10:1010], common.limited(self.large, req)) def test_limiter_offset_over_max(self): """Test offset key works with a number over 1000 (max_limit).""" req = webob.Request.blank('/?offset=1001') self.assertEqual([], common.limited(self.tiny, req)) self.assertEqual([], common.limited(self.small, req)) self.assertEqual([], common.limited(self.medium, req)) self.assertEqual( self.large[1001:2001], common.limited(self.large, req)) def test_limiter_offset_blank(self): """Test offset key works with a blank offset.""" req = webob.Request.blank('/?offset=') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) def test_limiter_offset_bad(self): """Test offset key works with a BAD offset.""" req = webob.Request.blank(u'/?offset=\u0020aa') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) def test_limiter_nothing(self): """Test request with no offset or limit.""" req = webob.Request.blank('/') self.assertEqual(self.tiny, common.limited(self.tiny, req)) self.assertEqual(self.small, common.limited(self.small, req)) self.assertEqual(self.medium, common.limited(self.medium, req)) self.assertEqual(self.large[:1000], common.limited(self.large, req)) def test_limiter_limit_zero(self): """Test limit of zero.""" req = webob.Request.blank('/?limit=0') self.assertEqual(self.tiny, common.limited(self.tiny, req)) self.assertEqual(self.small, common.limited(self.small, req)) self.assertEqual(self.medium, common.limited(self.medium, req)) self.assertEqual(self.large[:1000], common.limited(self.large, req)) def test_limiter_limit_medium(self): """Test limit of 10.""" req = webob.Request.blank('/?limit=10') self.assertEqual(self.tiny, common.limited(self.tiny, req)) self.assertEqual(self.small, common.limited(self.small, req)) self.assertEqual(self.medium[:10], common.limited(self.medium, req)) self.assertEqual(self.large[:10], common.limited(self.large, req)) def test_limiter_limit_over_max(self): """Test limit of 3000.""" req = webob.Request.blank('/?limit=3000') self.assertEqual(self.tiny, common.limited(self.tiny, req)) self.assertEqual(self.small, common.limited(self.small, req)) self.assertEqual(self.medium, common.limited(self.medium, req)) self.assertEqual(self.large[:1000], common.limited(self.large, req)) def test_limiter_limit_and_offset(self): """Test request with both limit and offset.""" items = list(range(2000)) req = webob.Request.blank('/?offset=1&limit=3') self.assertEqual(items[1:4], common.limited(items, req)) req = webob.Request.blank('/?offset=3&limit=0') self.assertEqual(items[3:1003], common.limited(items, req)) req = 
webob.Request.blank('/?offset=3&limit=1500') self.assertEqual(items[3:1003], common.limited(items, req)) req = webob.Request.blank('/?offset=3000&limit=10') self.assertEqual([], common.limited(items, req)) def test_limiter_custom_max_limit(self): """Test a max_limit other than 1000.""" items = list(range(2000)) req = webob.Request.blank('/?offset=1&limit=3') self.assertEqual( items[1:4], common.limited(items, req, max_limit=2000)) req = webob.Request.blank('/?offset=3&limit=0') self.assertEqual( items[3:], common.limited(items, req, max_limit=2000)) req = webob.Request.blank('/?offset=3&limit=2500') self.assertEqual( items[3:], common.limited(items, req, max_limit=2000)) req = webob.Request.blank('/?offset=3000&limit=10') self.assertEqual([], common.limited(items, req, max_limit=2000)) def test_limiter_negative_limit(self): """Test a negative limit.""" req = webob.Request.blank('/?limit=-3000') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) def test_limiter_negative_offset(self): """Test a negative offset.""" req = webob.Request.blank('/?offset=-30') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) class PaginationParamsTest(test.TestCase): """Unit tests for the `manila.api.common.get_pagination_params` method. Takes in a request object and returns 'marker' and 'limit' GET params. """ def test_no_params(self): """Test no params.""" req = webob.Request.blank('/') self.assertEqual({}, common.get_pagination_params(req)) def test_valid_marker(self): """Test valid marker param.""" req = webob.Request.blank( '/?marker=263abb28-1de6-412f-b00b-f0ee0c4333c2') self.assertEqual({'marker': '263abb28-1de6-412f-b00b-f0ee0c4333c2'}, common.get_pagination_params(req)) def test_valid_limit(self): """Test valid limit param.""" req = webob.Request.blank('/?limit=10') self.assertEqual({'limit': 10}, common.get_pagination_params(req)) def test_invalid_limit(self): """Test invalid limit param.""" req = webob.Request.blank('/?limit=-2') self.assertRaises( webob.exc.HTTPBadRequest, common.get_pagination_params, req) def test_valid_limit_and_marker(self): """Test valid limit and marker parameters.""" marker = '263abb28-1de6-412f-b00b-f0ee0c4333c2' req = webob.Request.blank('/?limit=20&marker=%s' % marker) self.assertEqual({'marker': marker, 'limit': 20}, common.get_pagination_params(req)) @ddt.ddt class MiscFunctionsTest(test.TestCase): @ddt.data( ('http://manila.example.com/v2/b2d18606-2673-4965-885a-4f5a8b955b9b/', 'http://manila.example.com/b2d18606-2673-4965-885a-4f5a8b955b9b/'), ('http://manila.example.com/v1/', 'http://manila.example.com/'), ('http://manila.example.com/share/v2.22/', 'http://manila.example.com/share/'), ('http://manila.example.com/share/v1/' 'b2d18606-2673-4965-885a-4f5a8b955b9b/', 'http://manila.example.com/share/' 'b2d18606-2673-4965-885a-4f5a8b955b9b/'), ('http://10.10.10.10:3366/v1/', 'http://10.10.10.10:3366/'), ('http://10.10.10.10:3366/v2/b2d18606-2673-4965-885a-4f5a8b955b9b/', 'http://10.10.10.10:3366/b2d18606-2673-4965-885a-4f5a8b955b9b/'), ('http://manila.example.com:3366/v1.1/', 'http://manila.example.com:3366/'), ('http://manila.example.com:3366/v2/' 'b2d18606-2673-4965-885a-4f5a8b955b9b/', 'http://manila.example.com:3366/' 'b2d18606-2673-4965-885a-4f5a8b955b9b/')) @ddt.unpack def test_remove_version_from_href(self, fixture, expected): actual = common.remove_version_from_href(fixture) self.assertEqual(expected, actual) @ddt.data('http://manila.example.com/1.1/shares', 'http://manila.example.com/v/shares', 
'http://manila.example.com/v1.1shares') def test_remove_version_from_href_bad_request(self, fixture): self.assertRaises(ValueError, common.remove_version_from_href, fixture) def test_validate_cephx_id_invalid_with_period(self): self.assertRaises(webob.exc.HTTPBadRequest, common.validate_cephx_id, "client.manila") def test_validate_cephx_id_invalid_with_non_ascii(self): self.assertRaises(webob.exc.HTTPBadRequest, common.validate_cephx_id, u"bj\u00F6rn") @ddt.data("alice", "alice_bob", "alice bob") def test_validate_cephx_id_valid(self, test_id): common.validate_cephx_id(test_id) @ddt.data(['ip', '1.1.1.1', False, False], ['user', 'alice', False, False], ['cert', 'alice', False, False], ['cephx', 'alice', True, False], ['user', 'alice$', False, False], ['user', 'test group name', False, False], ['user', 'group$.-_\'`{}', False, False], ['ip', '172.24.41.0/24', False, False], ['ip', '1001::1001', False, True], ['ip', '1001::1000/120', False, True]) @ddt.unpack def test_validate_access(self, access_type, access_to, ceph, enable_ipv6): common.validate_access(access_type=access_type, access_to=access_to, enable_ceph=ceph, enable_ipv6=enable_ipv6) @ddt.data(['ip', 'alice', False], ['ip', '1.1.1.0/10/12', False], ['ip', '255.255.255.265', False], ['ip', '1.1.1.0/34', False], ['cert', '', False], ['cephx', 'client.alice', True], ['group', 'alice', True], ['cephx', 'alice', False], ['cephx', '', True], ['user', 'bob/', False], ['user', 'group<>', False], ['user', '+=*?group', False], ['ip', '1001::1001/256', False], ['ip', '1001:1001/256', False],) @ddt.unpack def test_validate_access_exception(self, access_type, access_to, ceph): self.assertRaises(webob.exc.HTTPBadRequest, common.validate_access, access_type=access_type, access_to=access_to, enable_ceph=ceph) def test_validate_public_share_policy_no_is_public(self): api_params = {'foo': 'bar', 'clemson': 'tigers'} self.mock_object(policy, 'check_policy') actual_params = common.validate_public_share_policy( 'fake_context', api_params) self.assertDictMatch(api_params, actual_params) policy.check_policy.assert_not_called() @ddt.data('foo', 123, 'all', None) def test_validate_public_share_policy_invalid_value(self, is_public): api_params = {'is_public': is_public} self.mock_object(policy, 'check_policy') self.assertRaises(exception.InvalidParameterValue, common.validate_public_share_policy, 'fake_context', api_params) policy.check_policy.assert_not_called() @ddt.data('1', True, 'true', 'yes') def test_validate_public_share_not_authorized(self, is_public): api_params = {'is_public': is_public, 'size': '16'} self.mock_object(policy, 'check_policy', mock.Mock(return_value=False)) self.assertRaises(exception.NotAuthorized, common.validate_public_share_policy, 'fake_context', api_params) policy.check_policy.assert_called_once_with( 'fake_context', 'share', 'create_public_share', do_raise=False) @ddt.data('0', False, 'false', 'no') def test_validate_public_share_is_public_False(self, is_public): api_params = {'is_public': is_public, 'size': '16'} self.mock_object(policy, 'check_policy', mock.Mock(return_value=False)) actual_params = common.validate_public_share_policy( 'fake_context', api_params, api='update') self.assertDictMatch({'is_public': False, 'size': '16'}, actual_params) policy.check_policy.assert_called_once_with( 'fake_context', 'share', 'set_public_share', do_raise=False) @ddt.data('1', True, 'true', 'yes') def test_validate_public_share_is_public_True(self, is_public): api_params = {'is_public': is_public, 'size': '16'} self.mock_object(policy, 
'check_policy', mock.Mock(return_value=True)) actual_params = common.validate_public_share_policy( 'fake_context', api_params, api='update') self.assertDictMatch({'is_public': True, 'size': '16'}, actual_params) policy.check_policy.assert_called_once_with( 'fake_context', 'share', 'set_public_share', do_raise=False) @ddt.data(({}, True), ({'neutron_net_id': 'fake_nn_id'}, False), ({'neutron_subnet_id': 'fake_sn_id'}, False), ({'neutron_net_id': 'fake_nn_id', 'neutron_subnet_id': 'fake_sn_id'}, True)) @ddt.unpack def test__check_net_id_and_subnet_id(self, body, expected): if not expected: self.assertRaises(webob.exc.HTTPBadRequest, common.check_net_id_and_subnet_id, body) else: result = common.check_net_id_and_subnet_id(body) self.assertIsNone(result) @ddt.ddt class ViewBuilderTest(test.TestCase): def setUp(self): super(ViewBuilderTest, self).setUp() self.expected_resource_dict = { 'id': 'fake_resource_id', 'foo': 'quz', 'fred': 'bob', 'alice': 'waldo', 'spoon': 'spam', 'xyzzy': 'qwerty', } self.fake_resource = db_fakes.FakeModel(self.expected_resource_dict) self.view_builder = fakes.FakeResourceViewBuilder() @ddt.data('1.0', '1.40') def test_versioned_method_no_updates(self, version): req = fakes.HTTPRequest.blank('/my_resource', version=version) actual_resource = self.view_builder.view(req, self.fake_resource) self.assertEqual(set({'id', 'foo', 'fred', 'alice'}), set(actual_resource.keys())) @ddt.data(True, False) def test_versioned_method_v1_6(self, is_admin): req = fakes.HTTPRequest.blank('/my_resource', version='1.6', use_admin_context=is_admin) expected_keys = set({'id', 'foo', 'fred', 'alice'}) if is_admin: expected_keys.add('spoon') actual_resource = self.view_builder.view(req, self.fake_resource) self.assertEqual(expected_keys, set(actual_resource.keys())) @ddt.unpack @ddt.data({'is_admin': True, 'version': '3.14'}, {'is_admin': False, 'version': '3.14'}, {'is_admin': False, 'version': '6.2'}, {'is_admin': True, 'version': '6.2'}) def test_versioned_method_all_match(self, is_admin, version): req = fakes.HTTPRequest.blank( '/my_resource', version=version, use_admin_context=is_admin) expected_keys = set({'id', 'fred', 'xyzzy', 'alice'}) if is_admin: expected_keys.add('spoon') actual_resource = self.view_builder.view(req, self.fake_resource) self.assertEqual(expected_keys, set(actual_resource.keys())) manila-10.0.0/manila/tests/api/views/0000775000175000017500000000000013656750362017401 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/views/__init__.py0000664000175000017500000000000013656750227021500 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/views/test_scheduler_stats.py0000664000175000017500000000606313656750227024213 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
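# Illustrative sketch (hypothetical helper, not part of the upstream module):
# the tests below expect the scheduler-stats summary view to be the detail
# view minus each pool's 'capabilities' key. Conceptually:
def _example_summary_pool(pool_detail):
    """Return a copy of a pool dict without its 'capabilities' entry."""
    summary = dict(pool_detail)
    summary.pop('capabilities', None)
    return summary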
import copy

from manila.api.views import scheduler_stats
from manila import test

POOL1 = {
    'name': 'host1@backend1#pool1',
    'host': 'host1',
    'backend': 'backend1',
    'pool': 'pool1',
    'other': 'junk',
    'capabilities': {
        'pool_name': 'pool1',
        'driver_handles_share_servers': False,
        'qos': 'False',
        'timestamp': '2015-03-15T19:15:42.611690',
        'allocated_capacity_gb': 5,
        'total_capacity_gb': 10,
    },
}

POOL2 = {
    'name': 'host1@backend1#pool2',
    'host': 'host1',
    'backend': 'backend1',
    'pool': 'pool2',
    'capabilities': {
        'pool_name': 'pool2',
        'driver_handles_share_servers': False,
        'qos': 'False',
        'timestamp': '2015-03-15T19:15:42.611690',
        'allocated_capacity_gb': 15,
        'total_capacity_gb': 20,
    },
}

POOLS = [POOL1, POOL2]

POOLS_DETAIL_VIEW = {
    'pools': [
        {
            'name': 'host1@backend1#pool1',
            'host': 'host1',
            'backend': 'backend1',
            'pool': 'pool1',
            'capabilities': {
                'pool_name': 'pool1',
                'driver_handles_share_servers': False,
                'qos': 'False',
                'timestamp': '2015-03-15T19:15:42.611690',
                'allocated_capacity_gb': 5,
                'total_capacity_gb': 10,
            },
        },
        {
            'name': 'host1@backend1#pool2',
            'host': 'host1',
            'backend': 'backend1',
            'pool': 'pool2',
            'capabilities': {
                'pool_name': 'pool2',
                'driver_handles_share_servers': False,
                'qos': 'False',
                'timestamp': '2015-03-15T19:15:42.611690',
                'allocated_capacity_gb': 15,
                'total_capacity_gb': 20,
            }
        }
    ]
}


class ViewBuilderTestCase(test.TestCase):

    def setUp(self):
        super(ViewBuilderTestCase, self).setUp()
        self.builder = scheduler_stats.ViewBuilder()

    def test_pools(self):
        result = self.builder.pools(POOLS)

        # Remove capabilities for summary view
        expected = copy.deepcopy(POOLS_DETAIL_VIEW)
        for pool in expected['pools']:
            del pool['capabilities']

        self.assertDictEqual(expected, result)

    def test_pools_with_details(self):
        result = self.builder.pools(POOLS, detail=True)

        expected = POOLS_DETAIL_VIEW
        self.assertDictEqual(expected, result)
manila-10.0.0/manila/tests/api/views/test_shares.py0000664000175000017500000001047113656750227022302 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
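# Illustrative sketch (hypothetical helper, stdlib-only): older microversions
# are expected to report 'creating' instead of 'creating_from_snapshot', as
# exercised by test_detail_translate_creating_from_snapshot_status below.
def _example_translate_status(status, supports_creating_from_snapshot=False):
    """Map newer share statuses onto ones older API consumers understand."""
    if (status == 'creating_from_snapshot'
            and not supports_creating_from_snapshot):
        return 'creating'
    return status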
import copy import ddt from manila.api.views import shares from manila.common import constants from manila import test from manila.tests.api.contrib import stubs from manila.tests.api import fakes @ddt.ddt class ViewBuilderTestCase(test.TestCase): def setUp(self): super(ViewBuilderTestCase, self).setUp() self.builder = shares.ViewBuilder() self.fake_share = self._get_fake_share() def _get_fake_share(self): fake_share = { 'share_type_id': 'fake_share_type_id', 'share_type': { 'name': 'fake_share_type_name', }, 'export_location': 'fake_export_location', 'export_locations': ['fake_export_location'], 'access_rules_status': 'fake_rule_status', 'instance': { 'share_type': { 'name': 'fake_share_type_name', }, 'share_type_id': 'fake_share_type_id', 'progress': '100%', }, 'replication_type': 'fake_replication_type', 'has_replicas': False, 'user_id': 'fake_userid', 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': True, 'progress': '100%', } return stubs.stub_share('fake_id', **fake_share) def test__collection_name(self): self.assertEqual('shares', self.builder._collection_name) @ddt.data('2.6', '2.9', '2.10', '2.11', '2.16', '2.24', '2.27', '2.54') def test_detail(self, microversion): req = fakes.HTTPRequest.blank('/shares', version=microversion) result = self.builder.detail(req, self.fake_share) expected = { 'id': self.fake_share['id'], 'share_type': self.fake_share['share_type_id'], 'share_type_name': self.fake_share['share_type']['name'], 'export_location': 'fake_export_location', 'export_locations': ['fake_export_location'], 'snapshot_support': True, } if self.is_microversion_ge(microversion, '2.9'): expected.pop('export_location') expected.pop('export_locations') if self.is_microversion_ge(microversion, '2.10'): expected['access_rules_status'] = 'fake_rule_status' if self.is_microversion_ge(microversion, '2.11'): expected['replication_type'] = 'fake_replication_type' expected['has_replicas'] = False if self.is_microversion_ge(microversion, '2.16'): expected['user_id'] = 'fake_userid' if self.is_microversion_ge(microversion, '2.24'): expected['create_share_from_snapshot_support'] = True if self.is_microversion_ge(microversion, '2.27'): expected['revert_to_snapshot_support'] = True if self.is_microversion_ge(microversion, '2.54'): expected['progress'] = '100%' self.assertSubDictMatch(expected, result['share']) @ddt.data('1.0', '2.51', '2.54') def test_detail_translate_creating_from_snapshot_status(self, microversion): req = fakes.HTTPRequest.blank('/shares', version=microversion) new_share_status = constants.STATUS_CREATING_FROM_SNAPSHOT fake_shr = copy.deepcopy(self.fake_share) fake_shr.update( {'status': new_share_status}) result = self.builder.detail(req, fake_shr) expected = { 'status': constants.STATUS_CREATING, } if self.is_microversion_ge(microversion, '2.54'): expected['status'] = new_share_status self.assertSubDictMatch(expected, result['share']) manila-10.0.0/manila/tests/api/views/test_share_networks.py0000664000175000017500000002172613656750227024060 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt import itertools from manila.api.openstack import api_version_request as api_version from manila.api.views import share_networks from manila import test from manila.tests.api import fakes @ddt.ddt class ViewBuilderTestCase(test.TestCase): def setUp(self): super(ViewBuilderTestCase, self).setUp() self.builder = share_networks.ViewBuilder() def test__collection_name(self): self.assertEqual('share_networks', self.builder._collection_name) @ddt.data(*itertools.product( [ {'id': 'fake_sn_id', 'name': 'fake_sn_name', 'share_network_subnets': []}, {'id': 'fake_sn_id', 'name': 'fake_sn_name', 'share_network_subnets': [], 'fake_extra_key': 'foo'}, {'id': 'fake_sn_id', 'name': 'fake_sn_name', 'share_network_subnets': [ {'availability_zone_id': None, 'id': 'fake', 'availability_zone': None, 'is_default': False }], 'fake_extra_key': 'foo'}, ], ["1.0", "2.0", "2.18", "2.20", "2.25", "2.26", "2.49", api_version._MAX_API_VERSION]) ) @ddt.unpack def test_build_share_network(self, share_network_data, microversion): gateway_support = (api_version.APIVersionRequest(microversion) >= api_version.APIVersionRequest('2.18')) mtu_support = (api_version.APIVersionRequest(microversion) >= api_version.APIVersionRequest('2.20')) nova_net_support = (api_version.APIVersionRequest(microversion) < api_version.APIVersionRequest('2.26')) default_net_info_support = (api_version.APIVersionRequest(microversion) <= api_version.APIVersionRequest('2.49')) subnets_support = (api_version.APIVersionRequest(microversion) > api_version.APIVersionRequest('2.49')) req = fakes.HTTPRequest.blank('/share-networks', version=microversion) expected_keys = { 'id', 'name', 'project_id', 'created_at', 'updated_at', 'description'} if subnets_support: expected_keys.add('share_network_subnets') else: if default_net_info_support: network_info = { 'neutron_net_id', 'neutron_subnet_id', 'network_type', 'segmentation_id', 'cidr', 'ip_version'} expected_keys.update(network_info) if gateway_support: expected_keys.add('gateway') if mtu_support: expected_keys.add('mtu') if nova_net_support: expected_keys.add('nova_net_id') result = self.builder.build_share_network(req, share_network_data) self.assertEqual(1, len(result)) self.assertIn('share_network', result) self.assertEqual(share_network_data['id'], result['share_network']['id']) self.assertEqual(share_network_data['name'], result['share_network']['name']) self.assertEqual(len(expected_keys), len(result['share_network'])) for key in expected_keys: self.assertIn(key, result['share_network']) for key in result['share_network']: self.assertIn(key, expected_keys) @ddt.data(*itertools.product( [ [], [{'id': 'fake_id', 'name': 'fake_name', 'project_id': 'fake_project_id', 'created_at': 'fake_created_at', 'updated_at': 'fake_updated_at', 'neutron_net_id': 'fake_neutron_net_id', 'neutron_subnet_id': 'fake_neutron_subnet_id', 'network_type': 'fake_network_type', 'segmentation_id': 'fake_segmentation_id', 'cidr': 'fake_cidr', 'ip_version': 'fake_ip_version', 'description': 'fake_description'}, {'id': 'fake_id2', 'name': 'fake_name2'}], ], set(["1.0", "2.0", "2.18", "2.20", "2.25", "2.26", "2.49", 
api_version._MAX_API_VERSION])) ) @ddt.unpack def test_build_share_networks_with_details(self, share_networks, microversion): gateway_support = (api_version.APIVersionRequest(microversion) >= api_version.APIVersionRequest('2.18')) mtu_support = (api_version.APIVersionRequest(microversion) >= api_version.APIVersionRequest('2.20')) nova_net_support = (api_version.APIVersionRequest(microversion) < api_version.APIVersionRequest('2.26')) default_net_info_support = (api_version.APIVersionRequest(microversion) <= api_version.APIVersionRequest('2.49')) subnets_support = (api_version.APIVersionRequest(microversion) > api_version.APIVersionRequest('2.49')) req = fakes.HTTPRequest.blank('/share-networks', version=microversion) expected_networks_list = [] for share_network in share_networks: expected_data = { 'id': share_network.get('id'), 'name': share_network.get('name'), 'project_id': share_network.get('project_id'), 'created_at': share_network.get('created_at'), 'updated_at': share_network.get('updated_at'), 'description': share_network.get('description'), } if subnets_support: share_network.update({'share_network_subnets': []}) expected_data.update({'share_network_subnets': []}) else: if default_net_info_support: network_data = { 'neutron_net_id': share_network.get('neutron_net_id'), 'neutron_subnet_id': share_network.get( 'neutron_subnet_id'), 'network_type': share_network.get('network_type'), 'segmentation_id': share_network.get( 'segmentation_id'), 'cidr': share_network.get('cidr'), 'ip_version': share_network.get('ip_version'), } expected_data.update(network_data) if gateway_support: share_network.update({'gateway': 'fake_gateway'}) expected_data.update({'gateway': share_network.get('gateway')}) if mtu_support: share_network.update({'mtu': 1509}) expected_data.update({'mtu': share_network.get('mtu')}) if nova_net_support: share_network.update({'nova_net_id': 'fake_nova_net_id'}) expected_data.update({'nova_net_id': None}) expected_networks_list.append(expected_data) expected = {'share_networks': expected_networks_list} result = self.builder.build_share_networks(req, share_networks, is_detail=True) self.assertEqual(expected, result) @ddt.data(*itertools.product( [ [], [{'id': 'foo', 'name': 'bar'}], [{'id': 'id1', 'name': 'name1'}, {'id': 'id2', 'name': 'name2'}], [{'id': 'id1', 'name': 'name1'}, {'id': 'id2', 'name': 'name2', 'fake': 'I should not be returned'}] ], set(["1.0", "2.0", "2.18", "2.20", "2.25", "2.26", "2.49", api_version._MAX_API_VERSION])) ) @ddt.unpack def test_build_share_networks_without_details(self, share_networks, microversion): req = fakes.HTTPRequest.blank('/share-networks', version=microversion) expected = [] for share_network in share_networks: expected.append({ 'id': share_network.get('id'), 'name': share_network.get('name') }) expected = {'share_networks': expected} result = self.builder.build_share_networks(req, share_networks, is_detail=False) self.assertEqual(expected, result) manila-10.0.0/manila/tests/api/views/test_share_accesses.py0000664000175000017500000000715313656750227023773 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from manila.api.openstack import api_version_request as api_version from manila.api.views import share_accesses from manila.share import api from manila import test from manila.tests.api import fakes @ddt.ddt class ViewBuilderTestCase(test.TestCase): def setUp(self): super(ViewBuilderTestCase, self).setUp() self.builder = share_accesses.ViewBuilder() self.fake_access = { 'id': 'fakeaccessid', 'share_id': 'fakeshareid', 'access_level': 'fakeaccesslevel', 'access_to': 'fakeacccessto', 'access_type': 'fakeaccesstype', 'state': 'fakeaccessstate', 'access_key': 'fakeaccesskey', 'created_at': 'fakecreated_at', 'updated_at': 'fakeupdated_at', 'metadata': {}, } self.fake_share = { 'access_rules_status': self.fake_access['state'], } def test_collection_name(self): self.assertEqual('share_accesses', self.builder._collection_name) @ddt.data("2.20", "2.21", "2.33", "2.45") def test_view(self, version): req = fakes.HTTPRequest.blank('/shares', version=version) self.mock_object(api.API, 'get', mock.Mock(return_value=self.fake_share)) result = self.builder.view(req, self.fake_access) self._delete_unsupport_key(version, True) self.assertEqual({'access': self.fake_access}, result) @ddt.data("2.20", "2.21", "2.33", "2.45") def test_summary_view(self, version): req = fakes.HTTPRequest.blank('/shares', version=version) self.mock_object(api.API, 'get', mock.Mock(return_value=self.fake_share)) result = self.builder.summary_view(req, self.fake_access) self._delete_unsupport_key(version) self.assertEqual({'access': self.fake_access}, result) @ddt.data("2.20", "2.21", "2.33", "2.45") def test_list_view(self, version): req = fakes.HTTPRequest.blank('/shares', version=version) self.mock_object(api.API, 'get', mock.Mock(return_value=self.fake_share)) accesses = [self.fake_access, ] result = self.builder.list_view(req, accesses) self._delete_unsupport_key(version) self.assertEqual({'access_list': accesses}, result) def _delete_unsupport_key(self, version, support_share_id=False): if (api_version.APIVersionRequest(version) < api_version.APIVersionRequest("2.21")): del self.fake_access['access_key'] if (api_version.APIVersionRequest(version) < api_version.APIVersionRequest("2.33")): del self.fake_access['created_at'] del self.fake_access['updated_at'] if (api_version.APIVersionRequest(version) < api_version.APIVersionRequest("2.45")): del self.fake_access['metadata'] if not support_share_id: del self.fake_access['share_id'] manila-10.0.0/manila/tests/api/views/test_share_network_subnets.py0000664000175000017500000000560113656750227025432 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import ddt from manila.api.views import share_network_subnets from manila import test from manila.tests.api import fakes from manila.tests import db_utils @ddt.ddt class ViewBuilderTestCase(test.TestCase): def setUp(self): super(ViewBuilderTestCase, self).setUp() self.builder = share_network_subnets.ViewBuilder() self.share_network = db_utils.create_share_network( name='fake_network', id='fake_sn_id') def _validate_is_detail_return(self, result): expected_keys = ['id', 'created_at', 'updated_at', 'neutron_net_id', 'neutron_subnet_id', 'network_type', 'cidr', 'segmentation_id', 'ip_version', 'share_network_id', 'availability_zone', 'gateway', 'mtu'] for key in expected_keys: self.assertIn(key, result) def test_build_share_network_subnet(self): req = fakes.HTTPRequest.blank('/subnets', version='2.51') subnet = db_utils.create_share_network_subnet( share_network_id=self.share_network['id']) result = self.builder.build_share_network_subnet(req, subnet) self.assertEqual(1, len(result)) self.assertIn('share_network_subnet', result) self.assertEqual(subnet['id'], result['share_network_subnet']['id']) self.assertEqual(subnet['share_network_id'], result['share_network_subnet']['share_network_id']) self.assertIsNone( result['share_network_subnet']['availability_zone']) self._validate_is_detail_return(result['share_network_subnet']) def test_build_share_network_subnets(self): req = fakes.HTTPRequest.blank('/subnets', version='2.51') share_network = db_utils.create_share_network( name='fake_network', id='fake_sn_id_1') subnet = db_utils.create_share_network_subnet( share_network_id=share_network['id']) result = self.builder.build_share_network_subnets(req, [subnet]) self.assertIn('share_network_subnets', result) self.assertEqual(1, len(result['share_network_subnets'])) subnet_list = result['share_network_subnets'] for subnet in subnet_list: self._validate_is_detail_return(subnet) manila-10.0.0/manila/tests/api/views/test_quota_class_sets.py0000664000175000017500000000632613656750227024375 0ustar zuulzuul00000000000000# Copyright (c) 2017 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
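# Illustrative sketch (hypothetical helper, not from the upstream module): the
# quota-class view tests below build an 'expected' dict whose keys depend on
# the negotiated microversion; conceptually that is just a key filter.
def _example_filter_quota_keys(quota_class_set, allowed_keys):
    """Return only the quota entries whose keys are in ``allowed_keys``."""
    return {k: v for k, v in quota_class_set.items() if k in allowed_keys}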
import ddt from manila.api.openstack import api_version_request as api_version from manila.api.views import quota_class_sets from manila import test from manila.tests.api import fakes @ddt.ddt class ViewBuilderTestCase(test.TestCase): def setUp(self): super(ViewBuilderTestCase, self).setUp() self.builder = quota_class_sets.ViewBuilder() def test__collection_name(self): self.assertEqual('quota_class_set', self.builder._collection_name) @ddt.data( ("fake_quota_class", "2.40"), (None, "2.40"), ("fake_quota_class", "2.39"), (None, "2.39"), ("fake_quota_class", "2.53"), (None, "2.53"), ) @ddt.unpack def test_detail_list_with_share_type(self, quota_class, microversion): req = fakes.HTTPRequest.blank('/quota-sets', version=microversion) quota_class_set = { "shares": 13, "gigabytes": 31, "snapshots": 14, "snapshot_gigabytes": 41, "share_groups": 15, "share_group_snapshots": 51, "share_networks": 16, } expected = {self.builder._collection_name: { "shares": quota_class_set["shares"], "gigabytes": quota_class_set["gigabytes"], "snapshots": quota_class_set["snapshots"], "snapshot_gigabytes": quota_class_set["snapshot_gigabytes"], "share_networks": quota_class_set["share_networks"], }} if quota_class: expected[self.builder._collection_name]['id'] = quota_class if (api_version.APIVersionRequest(microversion) >= ( api_version.APIVersionRequest("2.40"))): expected[self.builder._collection_name][ "share_groups"] = quota_class_set["share_groups"] expected[self.builder._collection_name][ "share_group_snapshots"] = quota_class_set[ "share_group_snapshots"] if req.api_version_request >= api_version.APIVersionRequest("2.53"): fake_share_replicas_value = 46 fake_replica_gigabytes_value = 100 expected[self.builder._collection_name]["share_replicas"] = ( fake_share_replicas_value) expected[self.builder._collection_name][ "replica_gigabytes"] = fake_replica_gigabytes_value quota_class_set['share_replicas'] = fake_share_replicas_value quota_class_set['replica_gigabytes'] = fake_replica_gigabytes_value result = self.builder.detail_list( req, quota_class_set, quota_class=quota_class) self.assertEqual(expected, result) manila-10.0.0/manila/tests/api/views/test_quota_sets.py0000664000175000017500000000716313656750227023210 0ustar zuulzuul00000000000000# Copyright (c) 2017 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
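# Illustrative sketch (assumption-based, mirroring the expectations encoded
# in the tests below): when a quota set is scoped to a share type, the
# project-only 'share_networks' key is omitted from the view.
def _example_scope_quota_set(quota_set, share_type=None):
    """Drop project-only quota keys when a share-type scope is given."""
    scoped = dict(quota_set)
    if share_type:
        scoped.pop('share_networks', None)
    return scoped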
import ddt from manila.api.openstack import api_version_request as api_version from manila.api.views import quota_sets from manila import test from manila.tests.api import fakes @ddt.ddt class ViewBuilderTestCase(test.TestCase): def setUp(self): super(ViewBuilderTestCase, self).setUp() self.builder = quota_sets.ViewBuilder() def test__collection_name(self): self.assertEqual('quota_set', self.builder._collection_name) @ddt.data( ('fake_project_id', 'fake_share_type_id', "2.40"), (None, 'fake_share_type_id', "2.40"), ('fake_project_id', None, "2.40"), (None, None, "2.40"), ('fake_project_id', 'fake_share_type_id', "2.39"), (None, 'fake_share_type_id', "2.39"), ('fake_project_id', None, "2.39"), (None, None, "2.39"), (None, 'fake_share_type_id', "2.53"), ('fake_project_id', None, "2.53"), (None, None, "2.53"), ) @ddt.unpack def test_detail_list_with_share_type(self, project_id, share_type, microversion): req = fakes.HTTPRequest.blank('/quota-sets', version=microversion) quota_set = { "shares": 13, "gigabytes": 31, "snapshots": 14, "snapshot_gigabytes": 41, "share_groups": 15, "share_group_snapshots": 51, "share_networks": 16, } expected = {self.builder._collection_name: { "shares": quota_set["shares"], "gigabytes": quota_set["gigabytes"], "snapshots": quota_set["snapshots"], "snapshot_gigabytes": quota_set["snapshot_gigabytes"], }} if project_id: expected[self.builder._collection_name]['id'] = project_id if not share_type: expected[self.builder._collection_name][ "share_networks"] = quota_set["share_networks"] if (api_version.APIVersionRequest(microversion) >= ( api_version.APIVersionRequest("2.40"))): expected[self.builder._collection_name][ "share_groups"] = quota_set["share_groups"] expected[self.builder._collection_name][ "share_group_snapshots"] = quota_set[ "share_group_snapshots"] if req.api_version_request >= api_version.APIVersionRequest("2.53"): fake_share_replicas_value = 46 fake_replica_gigabytes_value = 100 expected[self.builder._collection_name]["share_replicas"] = ( fake_share_replicas_value) expected[self.builder._collection_name][ "replica_gigabytes"] = fake_replica_gigabytes_value quota_set['share_replicas'] = fake_share_replicas_value quota_set['replica_gigabytes'] = fake_replica_gigabytes_value result = self.builder.detail_list( req, quota_set, project_id=project_id, share_type=share_type) self.assertEqual(expected, result) manila-10.0.0/manila/tests/api/views/test_versions.py0000664000175000017500000001064713656750227022672 0ustar zuulzuul00000000000000# Copyright 2015 Clinton Knight # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
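# Illustrative sketch (hypothetical helper, stdlib-only): the view-builder
# tests below expect 'http://host/v1/'-style base URLs to be reduced to
# 'http://host/'; a simple regular expression captures that behaviour.
import re


def _example_strip_version(url):
    """Remove a trailing 'vN' or 'vN.N' path segment from ``url``."""
    return re.sub(r'v\d+\.?\d*/?$', '', url)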
import copy from unittest import mock import ddt from manila.api.views import versions from manila import test class FakeRequest(object): def __init__(self, application_url): self.application_url = application_url URL_BASE = 'http://localhost/' FAKE_HREF = URL_BASE + 'v1/' FAKE_VERSIONS = { "v1.0": { "id": "v1.0", "status": "CURRENT", "version": "1.1", "min_version": "1.0", "updated": "2015-07-30T11:33:21Z", "links": [ { "rel": "describedby", "type": "text/html", "href": 'http://docs.openstack.org/', }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.share+json;version=1", } ], }, } FAKE_LINKS = [ { "rel": "describedby", "type": "text/html", "href": 'http://docs.openstack.org/', }, { 'rel': 'self', 'href': FAKE_HREF }, ] @ddt.ddt class ViewBuilderTestCase(test.TestCase): def _get_builder(self): request = FakeRequest('fake') return versions.get_view_builder(request) def test_build_versions(self): self.mock_object(versions.ViewBuilder, '_build_links', mock.Mock(return_value=FAKE_LINKS)) result = self._get_builder().build_versions(FAKE_VERSIONS) expected = {'versions': list(FAKE_VERSIONS.values())} expected['versions'][0]['links'] = FAKE_LINKS self.assertEqual(expected, result) def test_build_version(self): self.mock_object(versions.ViewBuilder, '_build_links', mock.Mock(return_value=FAKE_LINKS)) result = self._get_builder()._build_version(FAKE_VERSIONS['v1.0']) expected = copy.deepcopy(FAKE_VERSIONS['v1.0']) expected['links'] = FAKE_LINKS self.assertEqual(expected, result) def test_build_links(self): self.mock_object(versions.ViewBuilder, '_generate_href', mock.Mock(return_value=FAKE_HREF)) result = self._get_builder()._build_links(FAKE_VERSIONS['v1.0']) self.assertEqual(FAKE_LINKS, result) def test_generate_href_defaults(self): self.mock_object(versions.ViewBuilder, '_get_base_url_without_version', mock.Mock(return_value=URL_BASE)) result = self._get_builder()._generate_href() self.assertEqual('http://localhost/v1/', result) @ddt.data( ('v2', None, URL_BASE + 'v2/'), ('/v2/', None, URL_BASE + 'v2/'), ('/v2/', 'fake_path', URL_BASE + 'v2/fake_path'), ('/v2/', '/fake_path/', URL_BASE + 'v2/fake_path/'), ) @ddt.unpack def test_generate_href_no_path(self, version, path, expected): self.mock_object(versions.ViewBuilder, '_get_base_url_without_version', mock.Mock(return_value=URL_BASE)) result = self._get_builder()._generate_href(version=version, path=path) self.assertEqual(expected, result) @ddt.data( ('http://1.1.1.1/', 'http://1.1.1.1/'), ('http://localhost/', 'http://localhost/'), ('http://1.1.1.1/v1/', 'http://1.1.1.1/'), ('http://1.1.1.1/v1', 'http://1.1.1.1/'), ('http://1.1.1.1/v11', 'http://1.1.1.1/'), ) @ddt.unpack def test_get_base_url_without_version(self, base_url, base_url_no_version): request = FakeRequest(base_url) builder = versions.get_view_builder(request) result = builder._get_base_url_without_version() self.assertEqual(base_url_no_version, result) manila-10.0.0/manila/tests/api/fakes.py0000664000175000017500000002135213656750227017712 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_service import wsgi from oslo_utils import timeutils from oslo_utils import uuidutils import routes import webob import webob.dec import webob.request from manila.api import common as api_common from manila.api.openstack import api_version_request as api_version from manila.api.openstack import wsgi as os_wsgi from manila.api import urlmap from manila.api.v1 import router as router_v1 from manila.api.v2 import router as router_v2 from manila.common import constants from manila import context from manila import exception CONTEXT = context.get_admin_context() driver_opts = {} FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' FAKE_UUIDS = {} host = 'host_name' identifier = '7cf7c200-d3af-4e05-b87e-9167c95dfcad' class Context(object): pass class FakeRouter(wsgi.Router): def __init__(self, ext_mgr=None): pass @webob.dec.wsgify def __call__(self, req): res = webob.Response() res.status = '200' res.headers['X-Test-Success'] = 'True' return res @webob.dec.wsgify def fake_wsgi(self, req): return self.application class FakeToken(object): id_count = 0 def __getitem__(self, key): return getattr(self, key) def __init__(self, **kwargs): FakeToken.id_count += 1 self.id = FakeToken.id_count self.token_hash = None for k, v in kwargs.items(): setattr(self, k, v) class FakeRequestContext(context.RequestContext): def __init__(self, *args, **kwargs): kwargs['auth_token'] = kwargs.get('auth_token', 'fake_auth_token') super(FakeRequestContext, self).__init__(*args, **kwargs) class HTTPRequest(os_wsgi.Request): @classmethod def blank(cls, *args, **kwargs): if not kwargs.get('base_url'): kwargs['base_url'] = 'http://localhost/v1' use_admin_context = kwargs.pop('use_admin_context', False) version = kwargs.pop('version', api_version.DEFAULT_API_VERSION) experimental = kwargs.pop('experimental', False) out = os_wsgi.Request.blank(*args, **kwargs) out.environ['manila.context'] = FakeRequestContext( 'fake_user', 'fake', is_admin=use_admin_context) out.api_version_request = api_version.APIVersionRequest( version, experimental=experimental) return out class TestRouter(wsgi.Router): def __init__(self, controller): mapper = routes.Mapper() mapper.resource("test", "tests", controller=os_wsgi.Resource(controller)) super(TestRouter, self).__init__(mapper) class FakeAuthDatabase(object): data = {} @staticmethod def auth_token_get(context, token_hash): return FakeAuthDatabase.data.get(token_hash, None) @staticmethod def auth_token_create(context, token): fake_token = FakeToken(created_at=timeutils.utcnow(), **token) FakeAuthDatabase.data[fake_token.token_hash] = fake_token FakeAuthDatabase.data['id_%i' % fake_token.id] = fake_token return fake_token @staticmethod def auth_token_destroy(context, token_id): token = FakeAuthDatabase.data.get('id_%i' % token_id) if token and token.token_hash in FakeAuthDatabase.data: del FakeAuthDatabase.data[token.token_hash] del FakeAuthDatabase.data['id_%i' % token_id] class FakeRateLimiter(object): def __init__(self, application): self.application = application @webob.dec.wsgify def __call__(self, req): return self.application def get_fake_uuid(token=0): if token not in FAKE_UUIDS: 
FAKE_UUIDS[token] = uuidutils.generate_uuid() return FAKE_UUIDS[token] def app(): """API application. No auth, just let environ['manila.context'] pass through. """ mapper = urlmap.URLMap() mapper['/v1'] = router_v1.APIRouter() mapper['/v2'] = router_v2.APIRouter() return mapper fixture_reset_status_with_different_roles_v1 = ( { 'role': 'admin', 'valid_code': 202, 'valid_status': constants.STATUS_ERROR, }, { 'role': 'member', 'valid_code': 403, 'valid_status': constants.STATUS_AVAILABLE, }, ) fixture_reset_status_with_different_roles = ( { 'role': 'admin', 'valid_code': 202, 'valid_status': constants.STATUS_ERROR, 'version': '2.6', }, { 'role': 'admin', 'valid_code': 202, 'valid_status': constants.STATUS_ERROR, 'version': '2.7', }, { 'role': 'member', 'valid_code': 403, 'valid_status': constants.STATUS_AVAILABLE, 'version': '2.6', }, { 'role': 'member', 'valid_code': 403, 'valid_status': constants.STATUS_AVAILABLE, 'version': '2.7', }, ) fixture_reset_replica_status_with_different_roles = ( { 'role': 'admin', 'valid_code': 202, 'valid_status': constants.STATUS_ERROR, }, { 'role': 'member', 'valid_code': 403, 'valid_status': constants.STATUS_AVAILABLE, }, ) fixture_reset_replica_state_with_different_roles = ( { 'role': 'admin', 'valid_code': 202, 'valid_status': constants.REPLICA_STATE_ACTIVE, }, { 'role': 'admin', 'valid_code': 202, 'valid_status': constants.REPLICA_STATE_OUT_OF_SYNC, }, { 'role': 'admin', 'valid_code': 202, 'valid_status': constants.REPLICA_STATE_IN_SYNC, }, { 'role': 'admin', 'valid_code': 202, 'valid_status': constants.STATUS_ERROR, }, { 'role': 'member', 'valid_code': 403, 'valid_status': constants.REPLICA_STATE_IN_SYNC, }, ) fixture_force_delete_with_different_roles = ( {'role': 'admin', 'resp_code': 202, 'version': '2.6'}, {'role': 'admin', 'resp_code': 202, 'version': '2.7'}, {'role': 'member', 'resp_code': 403, 'version': '2.6'}, {'role': 'member', 'resp_code': 403, 'version': '2.7'}, ) fixture_invalid_reset_status_body = ( {'os-reset_status': {'x-status': 'bad'}}, {'os-reset_status': {'status': 'invalid'}} ) fixture_valid_reset_status_body = ( ({'os-reset_status': {'status': 'creating'}}, '2.6'), ({'os-reset_status': {'status': 'available'}}, '2.6'), ({'os-reset_status': {'status': 'deleting'}}, '2.6'), ({'os-reset_status': {'status': 'error_deleting'}}, '2.6'), ({'os-reset_status': {'status': 'error'}}, '2.6'), ({'os-reset_status': {'status': 'migrating'}}, '2.6'), ({'os-reset_status': {'status': 'migrating_to'}}, '2.6'), ({'reset_status': {'status': 'creating'}}, '2.7'), ({'reset_status': {'status': 'available'}}, '2.7'), ({'reset_status': {'status': 'deleting'}}, '2.7'), ({'reset_status': {'status': 'error_deleting'}}, '2.7'), ({'reset_status': {'status': 'error'}}, '2.7'), ({'reset_status': {'status': 'migrating'}}, '2.7'), ({'reset_status': {'status': 'migrating_to'}}, '2.7'), ) def mock_fake_admin_check(context, resource_name, action, *args, **kwargs): if context.is_admin: return else: raise exception.PolicyNotAuthorized(action=action) class FakeResourceViewBuilder(api_common.ViewBuilder): _collection_name = 'fake_resource' _detail_version_modifiers = [ "add_field_xyzzy", "add_field_spoon_for_admins", "remove_field_foo", ] def view(self, req, resource): keys = ('id', 'foo', 'fred', 'alice') resource_dict = {key: resource.get(key) for key in keys} self.update_versioned_resource_dict(req, resource_dict, resource) return resource_dict @api_common.ViewBuilder.versioned_method("1.41") def add_field_xyzzy(self, context, resource_dict, resource): 
resource_dict['xyzzy'] = resource.get('xyzzy') @api_common.ViewBuilder.versioned_method("1.6") def add_field_spoon_for_admins(self, context, resource_dict, resource): if context.is_admin: resource_dict['spoon'] = resource.get('spoon') @api_common.ViewBuilder.versioned_method("3.14") def remove_field_foo(self, context, resource_dict, resource): resource_dict.pop('foo', None) manila-10.0.0/manila/tests/api/test_wsgi.py0000664000175000017500000000340013656750227020623 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test WSGI basics and provide some helper functions for other WSGI tests. """ from oslo_service import wsgi import routes import webob from manila import test from manila.wsgi import common as common_wsgi class Test(test.TestCase): def test_router(self): class Application(common_wsgi.Application): """Test application to call from router.""" def __call__(self, environ, start_response): start_response("200", []) return ['Router result'] class Router(wsgi.Router): """Test router.""" def __init__(self): mapper = routes.Mapper() mapper.connect("/test", controller=Application()) super(Router, self).__init__(mapper) result = webob.Request.blank('/test').get_response(Router()) self.assertEqual("Router result", result.body) result = webob.Request.blank('/bad').get_response(Router()) self.assertNotEqual(result.body, "Router result") manila-10.0.0/manila/tests/api/test_versions.py0000664000175000017500000003121713656750227021531 0ustar zuulzuul00000000000000# Copyright 2015 Clinton Knight # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
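# Illustrative sketch (hypothetical helper, stdlib-only): the controller tests
# below rely on a request microversion "matching" an inclusive [min, max]
# range where either bound may be None; the real check lives in
# manila.api.openstack.api_version_request.APIVersionRequest.matches().
def _example_version_matches(version, minimum=None, maximum=None):
    """Return True if 'X.Y' string ``version`` is within the given bounds."""
    def as_tuple(value):
        return tuple(int(part) for part in value.split('.'))

    if minimum is not None and as_tuple(version) < as_tuple(minimum):
        return False
    if maximum is not None and as_tuple(version) > as_tuple(maximum):
        return False
    return True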
from unittest import mock

import ddt
from oslo_serialization import jsonutils
from oslo_utils import encodeutils

from manila.api.openstack import api_version_request
from manila.api.openstack import wsgi
from manila.api.v1 import router
from manila.api import versions
from manila import test
from manila.tests.api import fakes

version_header_name = 'X-OpenStack-Manila-API-Version'
experimental_header_name = 'X-OpenStack-Manila-API-Experimental'


@ddt.ddt
class VersionsControllerTestCase(test.TestCase):

    def setUp(self):
        super(VersionsControllerTestCase, self).setUp()
        self.wsgi_apps = (versions.VersionsRouter(), router.APIRouter())

    @ddt.data('1.0', '1.1', '2.0', '3.0')
    def test_versions_root(self, version):
        req = fakes.HTTPRequest.blank('/', base_url='http://localhost')
        req.method = 'GET'
        req.content_type = 'application/json'
        req.headers = {version_header_name: version}

        response = req.get_response(versions.VersionsRouter())
        self.assertEqual(300, response.status_int)
        body = jsonutils.loads(response.body)
        version_list = body['versions']

        ids = [v['id'] for v in version_list]
        self.assertEqual({'v1.0', 'v2.0'}, set(ids))
        self.assertNotIn(version_header_name, response.headers)
        self.assertNotIn('Vary', response.headers)

        v1 = [v for v in version_list if v['id'] == 'v1.0'][0]
        self.assertEqual('', v1.get('min_version'))
        self.assertEqual('', v1.get('version'))
        self.assertEqual('DEPRECATED', v1.get('status'))

        v2 = [v for v in version_list if v['id'] == 'v2.0'][0]
        self.assertEqual(api_version_request._MIN_API_VERSION,
                         v2.get('min_version'))
        self.assertEqual(api_version_request._MAX_API_VERSION,
                         v2.get('version'))
        self.assertEqual('CURRENT', v2.get('status'))

    @ddt.data('1.0', '1.1', api_version_request._MIN_API_VERSION,
              api_version_request._MAX_API_VERSION)
    def test_versions_v1(self, version):
        req = fakes.HTTPRequest.blank('/', base_url='http://localhost/v1')
        req.method = 'GET'
        req.content_type = 'application/json'
        req.headers = {version_header_name: version}

        response = req.get_response(router.APIRouter())
        self.assertEqual(200, response.status_int)
        body = jsonutils.loads(response.body)
        version_list = body['versions']

        ids = [v['id'] for v in version_list]
        self.assertEqual({'v1.0'}, set(ids))
        self.assertEqual('1.0', response.headers[version_header_name])
        self.assertEqual(version_header_name, response.headers['Vary'])
        self.assertEqual('', version_list[0].get('min_version'))
        self.assertEqual('', version_list[0].get('version'))
        self.assertEqual('DEPRECATED', version_list[0].get('status'))

    @ddt.data(api_version_request._MIN_API_VERSION,
              api_version_request._MAX_API_VERSION)
    def test_versions_v2(self, version):
        req = fakes.HTTPRequest.blank('/', base_url='http://localhost/v2')
        req.method = 'GET'
        req.content_type = 'application/json'
        req.headers = {version_header_name: version}

        response = req.get_response(router.APIRouter())
        self.assertEqual(200, response.status_int)
        body = jsonutils.loads(response.body)
        version_list = body['versions']

        ids = [v['id'] for v in version_list]
        self.assertEqual({'v2.0'}, set(ids))
        self.assertEqual(version, response.headers[version_header_name])
        self.assertEqual(version_header_name, response.headers['Vary'])

        v2 = [v for v in version_list if v['id'] == 'v2.0'][0]
        self.assertEqual(api_version_request._MIN_API_VERSION,
                         v2.get('min_version'))
        self.assertEqual(api_version_request._MAX_API_VERSION,
                         v2.get('version'))

    def test_versions_version_invalid(self):
        req = fakes.HTTPRequest.blank('/', base_url='http://localhost/v2')
        req.method = 'GET'
        req.content_type = 'application/json'
        req.headers = {version_header_name: '2.0.1'}

        for app in self.wsgi_apps:
            response = req.get_response(app)
            self.assertEqual(400, response.status_int)

    def test_versions_version_not_found(self):
        api_version_request_3_0 = api_version_request.APIVersionRequest('3.0')
        self.mock_object(api_version_request,
                         'max_api_version',
                         mock.Mock(return_value=api_version_request_3_0))

        class Controller(wsgi.Controller):

            @wsgi.Controller.api_version('2.0', '2.0')
            def index(self, req):
                return 'off'

        req = fakes.HTTPRequest.blank('/tests',
                                      base_url='http://localhost/v2')
        req.headers = {version_header_name: '2.5'}
        app = fakes.TestRouter(Controller())

        response = req.get_response(app)

        self.assertEqual(404, response.status_int)

    def test_versions_version_not_acceptable(self):
        req = fakes.HTTPRequest.blank('/', base_url='http://localhost/v2')
        req.method = 'GET'
        req.content_type = 'application/json'
        req.headers = {version_header_name: '3.0'}

        response = req.get_response(router.APIRouter())

        self.assertEqual(406, response.status_int)
        self.assertEqual('3.0', response.headers[version_header_name])
        self.assertEqual(version_header_name, response.headers['Vary'])

    @ddt.data(['2.5', 200], ['2.55', 404])
    @ddt.unpack
    def test_req_version_matches(self, version, HTTP_ret):
        version_request = api_version_request.APIVersionRequest(version)
        self.mock_object(api_version_request,
                         'max_api_version',
                         mock.Mock(return_value=version_request))

        class Controller(wsgi.Controller):

            @wsgi.Controller.api_version('2.0', '2.6')
            def index(self, req):
                return 'off'

        req = fakes.HTTPRequest.blank('/tests',
                                      base_url='http://localhost/v2')
        req.headers = {version_header_name: version}
        app = fakes.TestRouter(Controller())

        response = req.get_response(app)

        if HTTP_ret == 200:
            self.assertEqual(b'off', response.body)
        elif HTTP_ret == 404:
            self.assertNotEqual(b'off', response.body)
        self.assertEqual(HTTP_ret, response.status_int)

    @ddt.data(['2.5', 'older'], ['2.37', 'newer'])
    @ddt.unpack
    def test_req_version_matches_with_if(self, version, ret_val):
        version_request = api_version_request.APIVersionRequest(version)
        self.mock_object(api_version_request,
                         'max_api_version',
                         mock.Mock(return_value=version_request))

        class Controller(wsgi.Controller):

            def index(self, req):
                req_version = req.api_version_request
                if req_version.matches('2.1', '2.8'):
                    return 'older'
                if req_version.matches('2.9', '2.88'):
                    return 'newer'

        req = fakes.HTTPRequest.blank('/tests',
                                      base_url='http://localhost/v2')
        req.headers = {version_header_name: version}
        app = fakes.TestRouter(Controller())

        response = req.get_response(app)
        resp = encodeutils.safe_decode(response.body, incoming='utf-8')

        self.assertEqual(ret_val, resp)
        self.assertEqual(200, response.status_int)

    @ddt.data(['2.5', 'older'], ['2.37', 'newer'])
    @ddt.unpack
    def test_req_version_matches_with_None(self, version, ret_val):
        version_request = api_version_request.APIVersionRequest(version)
        self.mock_object(api_version_request,
                         'max_api_version',
                         mock.Mock(return_value=version_request))

        class Controller(wsgi.Controller):

            def index(self, req):
                req_version = req.api_version_request
                if req_version.matches(None, '2.8'):
                    return 'older'
                if req_version.matches('2.9', None):
                    return 'newer'

        req = fakes.HTTPRequest.blank('/tests',
                                      base_url='http://localhost/v2')
        req.headers = {version_header_name: version}
        app = fakes.TestRouter(Controller())

        response = req.get_response(app)
        resp = encodeutils.safe_decode(response.body, incoming='utf-8')

        self.assertEqual(ret_val, resp)
        self.assertEqual(200, response.status_int)

    def test_req_version_matches_with_None_None(self):
        version_request = api_version_request.APIVersionRequest('2.39')
        self.mock_object(api_version_request,
                         'max_api_version',
                         mock.Mock(return_value=version_request))

        class Controller(wsgi.Controller):

            def index(self, req):
                req_version = req.api_version_request
                # This case is artificial, and will return True
                if req_version.matches(None, None):
                    return "Pass"

        req = fakes.HTTPRequest.blank('/tests',
                                      base_url='http://localhost/v2')
        req.headers = {version_header_name: '2.39'}
        app = fakes.TestRouter(Controller())

        response = req.get_response(app)
        resp = encodeutils.safe_decode(response.body, incoming='utf-8')

        self.assertEqual("Pass", resp)
        self.assertEqual(200, response.status_int)


@ddt.ddt
class ExperimentalAPITestCase(test.TestCase):

    class Controller(wsgi.Controller):

        @wsgi.Controller.api_version('2.0', '2.0')
        def index(self, req):
            return {'fake_key': 'fake_value'}

        @wsgi.Controller.api_version('2.1', '2.1', experimental=True)  # noqa
        def index(self, req):  # pylint: disable=function-redefined
            return {'fake_key': 'fake_value'}

    def setUp(self):
        super(ExperimentalAPITestCase, self).setUp()
        self.app = fakes.TestRouter(ExperimentalAPITestCase.Controller())
        self.req = fakes.HTTPRequest.blank('/tests',
                                           base_url='http://localhost/v2')

    @ddt.data(True, False)
    def test_stable_api_always_called(self, experimental):
        self.req.headers = {version_header_name: '2.0'}
        if experimental:
            self.req.headers[experimental_header_name] = experimental

        response = self.req.get_response(self.app)

        self.assertEqual(200, response.status_int)
        self.assertEqual('2.0', response.headers[version_header_name])

        if experimental:
            self.assertEqual('%s' % experimental,
                             response.headers.get(experimental_header_name))
        else:
            self.assertNotIn(experimental_header_name, response.headers)

    def test_experimental_api_called_when_requested(self):
        self.req.headers = {
            version_header_name: '2.1',
            experimental_header_name: 'True',
        }

        response = self.req.get_response(self.app)

        self.assertEqual(200, response.status_int)
        self.assertEqual('2.1', response.headers[version_header_name])
        self.assertTrue(response.headers.get(experimental_header_name))

    def test_experimental_api_not_called_when_not_requested(self):
        self.req.headers = {version_header_name: '2.1'}

        response = self.req.get_response(self.app)

        self.assertEqual(404, response.status_int)
        self.assertNotIn(experimental_header_name, response.headers)

    def test_experimental_header_returned_in_exception(self):
        api_version_request_3_0 = api_version_request.APIVersionRequest('3.0')
        self.mock_object(api_version_request,
                         'max_api_version',
                         mock.Mock(return_value=api_version_request_3_0))

        self.req.headers = {
            version_header_name: '2.2',
            experimental_header_name: 'True',
        }

        response = self.req.get_response(self.app)

        self.assertEqual(404, response.status_int)
        self.assertTrue(response.headers.get(experimental_header_name))
manila-10.0.0/manila/tests/api/openstack/0000775000175000017500000000000013656750362020233 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/openstack/__init__.py0000664000175000017500000000000013656750227022332 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/api/openstack/test_api_version_request.py0000664000175000017500000001604413656750227025737 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp.
# Copyright 2015 Clinton Knight
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import ddt
import six

from manila.api.openstack import api_version_request
from manila.api.openstack import versioned_method
from manila import exception
from manila import test


@ddt.ddt
class APIVersionRequestTests(test.TestCase):

    def test_init(self):
        result = api_version_request.APIVersionRequest()

        self.assertIsNone(result._ver_major)
        self.assertIsNone(result._ver_minor)
        self.assertFalse(result._experimental)

    def test_min_version(self):
        self.assertEqual(
            api_version_request.APIVersionRequest(
                api_version_request._MIN_API_VERSION),
            api_version_request.min_api_version())

    def test_max_api_version(self):
        self.assertEqual(
            api_version_request.APIVersionRequest(
                api_version_request._MAX_API_VERSION),
            api_version_request.max_api_version())

    @ddt.data(
        ('1.1', 1, 1),
        ('2.10', 2, 10),
        ('5.234', 5, 234),
        ('12.5', 12, 5),
        ('2.0', 2, 0),
        ('2.200', 2, 200)
    )
    @ddt.unpack
    def test_valid_version_strings(self, version_string, major, minor):
        request = api_version_request.APIVersionRequest(version_string)

        self.assertEqual(major, request._ver_major)
        self.assertEqual(minor, request._ver_minor)

    def test_null_version(self):
        v = api_version_request.APIVersionRequest()

        self.assertTrue(v.is_null())

    @ddt.data('2', '200', '2.1.4', '200.23.66.3', '5 .3', '5. 3', '5.03',
              '02.1', '2.001', '', ' 2.1', '2.1 ')
    def test_invalid_version_strings(self, version_string):
        self.assertRaises(exception.InvalidAPIVersionString,
                          api_version_request.APIVersionRequest,
                          version_string)

    def test_cmpkey(self):
        request = api_version_request.APIVersionRequest('1.2')
        self.assertEqual((1, 2), request._cmpkey())

    @ddt.data(True, False)
    def test_experimental_property(self, experimental):
        request = api_version_request.APIVersionRequest()
        request.experimental = experimental

        self.assertEqual(experimental, request.experimental)

    def test_experimental_property_value_error(self):
        request = api_version_request.APIVersionRequest()

        def set_non_boolean():
            request.experimental = 'non_bool_value'

        self.assertRaises(exception.InvalidParameterValue, set_non_boolean)

    def test_version_comparisons(self):
        v1 = api_version_request.APIVersionRequest('2.0')
        v2 = api_version_request.APIVersionRequest('2.5')
        v3 = api_version_request.APIVersionRequest('5.23')
        v4 = api_version_request.APIVersionRequest('2.0')
        v_null = api_version_request.APIVersionRequest()

        self.assertTrue(v1 < v2)
        self.assertTrue(v1 <= v2)
        self.assertTrue(v3 > v2)
        self.assertTrue(v3 >= v2)
        self.assertTrue(v1 != v2)
        self.assertTrue(v1 == v4)
        self.assertTrue(v1 != v_null)
        self.assertTrue(v_null == v_null)
        self.assertFalse(v1 == '2.0')

    def test_version_matches(self):
        v1 = api_version_request.APIVersionRequest('2.0')
        v2 = api_version_request.APIVersionRequest('2.5')
        v3 = api_version_request.APIVersionRequest('2.45')
        v4 = api_version_request.APIVersionRequest('3.3')
        v5 = api_version_request.APIVersionRequest('3.23')
        v6 = api_version_request.APIVersionRequest('2.0')
        v7 = api_version_request.APIVersionRequest('3.3')
        v8 = api_version_request.APIVersionRequest('4.0')
        v_null = api_version_request.APIVersionRequest()

        self.assertTrue(v2.matches(v1, v3))
        self.assertTrue(v2.matches(v1, v_null))
        self.assertTrue(v1.matches(v6, v2))
        self.assertTrue(v4.matches(v2, v7))
        self.assertTrue(v4.matches(v_null, v7))
        self.assertTrue(v4.matches(v_null, v8))
        self.assertFalse(v1.matches(v2, v3))
        self.assertFalse(v5.matches(v2, v4))
        self.assertFalse(v2.matches(v3, v1))
        self.assertTrue(v1.matches(v_null, v_null))
        self.assertRaises(ValueError, v_null.matches, v1, v3)

    def test_version_matches_experimental_request(self):
        experimental_request = api_version_request.APIVersionRequest('2.0')
        experimental_request.experimental = True
        non_experimental_request = api_version_request.APIVersionRequest(
            '2.0')

        experimental_function = versioned_method.VersionedMethod(
            'experimental_function',
            api_version_request.APIVersionRequest('2.0'),
            api_version_request.APIVersionRequest('2.1'),
            True,
            None)
        non_experimental_function = versioned_method.VersionedMethod(
            'non_experimental_function',
            api_version_request.APIVersionRequest('2.0'),
            api_version_request.APIVersionRequest('2.1'),
            False,
            None)

        self.assertTrue(experimental_request.matches_versioned_method(
            experimental_function))
        self.assertTrue(experimental_request.matches_versioned_method(
            non_experimental_function))
        self.assertTrue(non_experimental_request.matches_versioned_method(
            non_experimental_function))
        self.assertFalse(non_experimental_request.matches_versioned_method(
            experimental_function))

    def test_matches_versioned_method(self):
        request = api_version_request.APIVersionRequest('2.0')

        self.assertRaises(exception.InvalidParameterValue,
                          request.matches_versioned_method,
                          'fake_method')

    def test_get_string(self):
        v1_string = '3.23'
        v1 = api_version_request.APIVersionRequest(v1_string)
        self.assertEqual(v1_string, v1.get_string())

        self.assertRaises(ValueError,
                          api_version_request.APIVersionRequest().get_string)

    @ddt.data(('1', '0', False), ('1', '1', False), ('1', '0', True))
    @ddt.unpack
    def test_str(self, major, minor, experimental):
        request_input = '%s.%s' % (major, minor)
        request = api_version_request.APIVersionRequest(
            request_input, experimental=experimental)
        request_string = six.text_type(request)

        self.assertEqual('API Version Request '
                         'Major: %s, Minor: %s, Experimental: %s'
                         % (major, minor, experimental), request_string)
manila-10.0.0/manila/tests/api/openstack/test_wsgi.py0000664000175000017500000010767513656750227022625 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
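# Editor's note: the test module that follows exercises how manila's WSGI
# layer negotiates API microversions through the
# X-OpenStack-Manila-API-Version header. The snippet below is an
# illustrative sketch added by the editor (it is not part of the original
# module); it shows the client side of that contract using plain webob,
# which this module already imports. `_echo_version_app` and
# `_editor_microversion_header_example` are hypothetical names that exist
# only for this illustration, and the '2.0' fallback is an assumption made
# for the sketch rather than a statement about manila's defaults.

import webob
import webob.dec


@webob.dec.wsgify
def _echo_version_app(req):
    # Echo back whatever microversion the caller asked for.
    requested = req.headers.get('X-OpenStack-Manila-API-Version', '2.0')
    resp = webob.Response(body=b'ok')
    resp.headers['X-OpenStack-Manila-API-Version'] = requested
    return resp


def _editor_microversion_header_example():
    # Not invoked by the test suite; call it manually to see the round trip.
    request = webob.Request.blank('/v2/shares')
    request.headers['X-OpenStack-Manila-API-Version'] = '2.38'
    response = request.get_response(_echo_version_app)
    assert response.headers['X-OpenStack-Manila-API-Version'] == '2.38'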
import inspect from unittest import mock import ddt import six import webob from manila.api.openstack import api_version_request as api_version from manila.api.openstack import wsgi from manila import context from manila import exception from manila import policy from manila import test from manila.tests.api import fakes @ddt.ddt class RequestTest(test.TestCase): def test_content_type_missing(self): request = wsgi.Request.blank('/tests/123', method='POST') request.body = six.b("") self.assertIsNone(request.get_content_type()) def test_content_type_unsupported(self): request = wsgi.Request.blank('/tests/123', method='POST') request.headers["Content-Type"] = "text/html" request.body = six.b("asdf
") self.assertRaises(exception.InvalidContentType, request.get_content_type) def test_content_type_with_charset(self): request = wsgi.Request.blank('/tests/123') request.headers["Content-Type"] = "application/json; charset=UTF-8" result = request.get_content_type() self.assertEqual("application/json", result) def test_content_type_from_accept(self): content_type = 'application/json' request = wsgi.Request.blank('/tests/123') request.headers["Accept"] = content_type result = request.best_match_content_type() self.assertEqual(content_type, result) def test_content_type_from_accept_best(self): request = wsgi.Request.blank('/tests/123') request.headers["Accept"] = "application/xml, application/json" result = request.best_match_content_type() self.assertEqual("application/json", result) request = wsgi.Request.blank('/tests/123') request.headers["Accept"] = ("application/json; q=0.3, " "application/xml; q=0.9") result = request.best_match_content_type() self.assertEqual("application/json", result) def test_content_type_from_query_extension(self): request = wsgi.Request.blank('/tests/123.json') result = request.best_match_content_type() self.assertEqual("application/json", result) request = wsgi.Request.blank('/tests/123.invalid') result = request.best_match_content_type() self.assertEqual("application/json", result) def test_content_type_accept_default(self): request = wsgi.Request.blank('/tests/123.unsupported') request.headers["Accept"] = "application/unsupported1" result = request.best_match_content_type() self.assertEqual("application/json", result) def test_cache_and_retrieve_resources(self): request = wsgi.Request.blank('/foo') # Test that trying to retrieve a cached object on # an empty cache fails gracefully self.assertIsNone(request.cached_resource()) self.assertIsNone(request.cached_resource_by_id('r-0')) resources = [{'id': 'r-%s' % x} for x in range(3)] # Cache an empty list of resources using the default name request.cache_resource([]) self.assertEqual({}, request.cached_resource()) self.assertIsNone(request.cached_resource('r-0')) # Cache some resources request.cache_resource(resources[:2]) # Cache one resource request.cache_resource(resources[2]) # Cache a different resource name other_resource = {'id': 'o-0'} request.cache_resource(other_resource, name='other-resource') self.assertEqual(resources[0], request.cached_resource_by_id('r-0')) self.assertEqual(resources[1], request.cached_resource_by_id('r-1')) self.assertEqual(resources[2], request.cached_resource_by_id('r-2')) self.assertIsNone(request.cached_resource_by_id('r-3')) self.assertEqual( {'r-0': resources[0], 'r-1': resources[1], 'r-2': resources[2]}, request.cached_resource()) self.assertEqual( other_resource, request.cached_resource_by_id('o-0', name='other-resource')) @ddt.data( 'share_type', ) def test_cache_and_retrieve_resources_by_resource(self, resource_name): cache_all_func = 'cache_db_%ss' % resource_name cache_one_func = 'cache_db_%s' % resource_name get_db_all_func = 'get_db_%ss' % resource_name get_db_one_func = 'get_db_%s' % resource_name r = wsgi.Request.blank('/foo') amount = 5 res_range = range(amount) resources = [{'id': 'id%s' % x} for x in res_range] # Store 2 getattr(r, cache_all_func)(resources[:amount - 1]) # Store 1 getattr(r, cache_one_func)(resources[amount - 1]) for i in res_range: self.assertEqual( resources[i], getattr(r, get_db_one_func)('id%s' % i), ) self.assertIsNone(getattr(r, get_db_one_func)('id%s' % amount)) self.assertEqual( {'id%s' % i: resources[i] for i in res_range}, getattr(r, 
get_db_all_func)()) def test_set_api_version_request_exception(self): min_version = api_version.APIVersionRequest('2.0') max_version = api_version.APIVersionRequest('2.45') self.mock_object(api_version, 'max_api_version', mock.Mock(return_value=max_version)) self.mock_object(api_version, 'min_api_version', mock.Mock(return_value=min_version)) headers = {'X-OpenStack-Manila-API-Version': '2.51'} request = wsgi.Request.blank( 'https://openstack.acme.com/v2/shares', method='GET', headers=headers, script_name='/v2/shares') self.assertRaises(exception.InvalidGlobalAPIVersion, request.set_api_version_request) self.assertEqual(api_version.APIVersionRequest('2.51'), request.api_version_request) @ddt.data('', '/share', '/v1', '/v2/shares', '/v1.1/', '/share/v1', '/shared-file-sytems/v2', '/share/v3.5/share-replicas', '/shared-file-sytems/v2/shares/xyzzy/action') def test_set_api_version_request(self, resource): min_version = api_version.APIVersionRequest('2.0') max_version = api_version.APIVersionRequest('3.0') self.mock_object(api_version, 'max_api_version', mock.Mock(return_value=max_version)) self.mock_object(api_version, 'min_api_version', mock.Mock(return_value=min_version)) request = wsgi.Request.blank( 'https://openstack.acme.com%s' % resource, method='GET', headers={'X-OpenStack-Manila-API-Version': '2.117'}, script_name=resource) self.assertIsNone(request.set_api_version_request()) if not resource or not ('/v1' in resource or '/v2' in resource): self.assertEqual(api_version.APIVersionRequest(), request.api_version_request) elif 'v1' in resource: self.assertEqual(api_version.APIVersionRequest('1.0'), request.api_version_request) else: self.assertEqual(api_version.APIVersionRequest('2.117'), request.api_version_request) def test_set_api_version_request_no_version_header(self): min_version = api_version.APIVersionRequest('2.0') max_version = api_version.APIVersionRequest('2.45') self.mock_object(api_version, 'max_api_version', mock.Mock(return_value=max_version)) self.mock_object(api_version, 'min_api_version', mock.Mock(return_value=min_version)) headers = {} request = wsgi.Request.blank( 'https://openstack.acme.com/v2/shares', method='GET', headers=headers, script_name='/v2/shares') self.assertIsNone(request.set_api_version_request()) self.assertEqual(api_version.APIVersionRequest('2.0'), request.api_version_request) @ddt.data(None, 'true', 'false') def test_set_api_version_request_experimental_header(self, experimental): min_version = api_version.APIVersionRequest('2.0') max_version = api_version.APIVersionRequest('2.45') self.mock_object(api_version, 'max_api_version', mock.Mock(return_value=max_version)) self.mock_object(api_version, 'min_api_version', mock.Mock(return_value=min_version)) headers = {'X-OpenStack-Manila-API-Version': '2.38'} if experimental: headers['X-OpenStack-Manila-API-Experimental'] = experimental request = wsgi.Request.blank( 'https://openstack.acme.com/v2/shares', method='GET', headers=headers, script_name='/v2/shares') self.assertIsNone(request.set_api_version_request()) self.assertEqual(request.api_version_request, api_version.APIVersionRequest('2.38')) expected_experimental = experimental == 'true' or False self.assertEqual(expected_experimental, request.api_version_request.experimental) class ActionDispatcherTest(test.TestCase): def test_dispatch(self): serializer = wsgi.ActionDispatcher() serializer.create = lambda x: 'pants' self.assertEqual('pants', serializer.dispatch({}, action='create')) def test_dispatch_action_None(self): serializer = 
wsgi.ActionDispatcher() serializer.create = lambda x: 'pants' serializer.default = lambda x: 'trousers' self.assertEqual('trousers', serializer.dispatch({}, action=None)) def test_dispatch_default(self): serializer = wsgi.ActionDispatcher() serializer.create = lambda x: 'pants' serializer.default = lambda x: 'trousers' self.assertEqual('trousers', serializer.dispatch({}, action='update')) class DictSerializerTest(test.TestCase): def test_dispatch_default(self): serializer = wsgi.DictSerializer() self.assertEqual('', serializer.serialize({}, 'update')) class JSONDictSerializerTest(test.TestCase): def test_json(self): input_dict = dict(servers=dict(a=(2, 3))) expected_json = six.b('{"servers":{"a":[2,3]}}') serializer = wsgi.JSONDictSerializer() result = serializer.serialize(input_dict) result = result.replace(six.b('\n'), six.b('')).replace(six.b(' '), six.b('')) self.assertEqual(expected_json, result) class TextDeserializerTest(test.TestCase): def test_dispatch_default(self): deserializer = wsgi.TextDeserializer() self.assertEqual({}, deserializer.deserialize({}, 'update')) class JSONDeserializerTest(test.TestCase): def test_json(self): data = """{"a": { "a1": "1", "a2": "2", "bs": ["1", "2", "3", {"c": {"c1": "1"}}], "d": {"e": "1"}, "f": "1"}}""" as_dict = { 'body': { 'a': { 'a1': '1', 'a2': '2', 'bs': ['1', '2', '3', {'c': {'c1': '1'}}], 'd': {'e': '1'}, 'f': '1', }, }, } deserializer = wsgi.JSONDeserializer() self.assertEqual(as_dict, deserializer.deserialize(data)) class ResourceTest(test.TestCase): def test_resource_call(self): class Controller(object): def index(self, req): return 'off' req = webob.Request.blank('/tests') app = fakes.TestRouter(Controller()) response = req.get_response(app) self.assertEqual(six.b('off'), response.body) self.assertEqual(200, response.status_int) def test_resource_not_authorized(self): class Controller(object): def index(self, req): raise exception.NotAuthorized() req = webob.Request.blank('/tests') app = fakes.TestRouter(Controller()) response = req.get_response(app) self.assertEqual(403, response.status_int) def test_dispatch(self): class Controller(object): def index(self, req, pants=None): return pants controller = Controller() resource = wsgi.Resource(controller) method, extensions = resource.get_method(None, 'index', None, '') actual = resource.dispatch(method, None, {'pants': 'off'}) expected = 'off' self.assertEqual(expected, actual) def test_get_method_undefined_controller_action(self): class Controller(object): def index(self, req, pants=None): return pants controller = Controller() resource = wsgi.Resource(controller) self.assertRaises(AttributeError, resource.get_method, None, 'create', None, '') def test_get_method_action_json(self): class Controller(wsgi.Controller): @wsgi.action('fooAction') def _action_foo(self, req, id, body): return body controller = Controller() resource = wsgi.Resource(controller) method, extensions = resource.get_method(None, 'action', 'application/json', '{"fooAction": true}') self.assertEqual(controller._action_foo, method) def test_get_method_action_bad_body(self): class Controller(wsgi.Controller): @wsgi.action('fooAction') def _action_foo(self, req, id, body): return body controller = Controller() resource = wsgi.Resource(controller) self.assertRaises(exception.MalformedRequestBody, resource.get_method, None, 'action', 'application/json', '{}') def test_get_method_unknown_controller_action(self): class Controller(wsgi.Controller): @wsgi.action('fooAction') def _action_foo(self, req, id, body): return body 
controller = Controller() resource = wsgi.Resource(controller) self.assertRaises(KeyError, resource.get_method, None, 'action', 'application/json', '{"barAction": true}') def test_get_method_action_method(self): class Controller(object): def action(self, req, pants=None): return pants controller = Controller() resource = wsgi.Resource(controller) method, extensions = resource.get_method(None, 'action', 'application/xml', 'true' + newer_than) self.client.send_iter_request.assert_has_calls([ mock.call('snapshot-get-iter', snapshot_get_iter_args)]) expected = [fake.SNAPSHOT_NAME] self.assertEqual(expected, result) @ddt.data('start_volume_move', 'check_volume_move') def test_volume_move_method(self, method_name): method = getattr(self.client, method_name) self.mock_object(self.client, 'send_request') retval = method(fake.SHARE_NAME, fake.VSERVER_NAME, fake.SHARE_AGGREGATE_NAME) expected_api_args = { 'source-volume': fake.SHARE_NAME, 'vserver': fake.VSERVER_NAME, 'dest-aggr': fake.SHARE_AGGREGATE_NAME, 'cutover-action': 'wait', 'encrypt-destination': 'false' } if method_name.startswith('check'): expected_api_args['perform-validation-only'] = 'true' self.assertIsNone(retval) self.client.send_request.assert_called_once_with( 'volume-move-start', expected_api_args) def test_abort_volume_move(self): self.mock_object(self.client, 'send_request') retval = self.client.abort_volume_move( fake.SHARE_NAME, fake.VSERVER_NAME) expected_api_args = { 'source-volume': fake.SHARE_NAME, 'vserver': fake.VSERVER_NAME, } self.assertIsNone(retval) self.client.send_request.assert_called_once_with( 'volume-move-trigger-abort', expected_api_args) @ddt.data(True, False) def test_trigger_volume_move_cutover_force(self, forced): self.mock_object(self.client, 'send_request') retval = self.client.trigger_volume_move_cutover( fake.SHARE_NAME, fake.VSERVER_NAME, force=forced) expected_api_args = { 'source-volume': fake.SHARE_NAME, 'vserver': fake.VSERVER_NAME, 'force': 'true' if forced else 'false', } self.assertIsNone(retval) self.client.send_request.assert_called_once_with( 'volume-move-trigger-cutover', expected_api_args) def test_get_volume_move_status_no_records(self): self.mock_object(self.client, 'send_iter_request') self.mock_object(self.client, '_has_records', mock.Mock(return_value=False)) self.assertRaises(exception.NetAppException, self.client.get_volume_move_status, fake.SHARE_NAME, fake.VSERVER_NAME) expected_api_args = { 'query': { 'volume-move-info': { 'volume': fake.SHARE_NAME, 'vserver': fake.VSERVER_NAME, }, }, 'desired-attributes': { 'volume-move-info': { 'percent-complete': None, 'estimated-completion-time': None, 'state': None, 'details': None, 'cutover-action': None, 'phase': None, }, }, } self.client.send_iter_request.assert_called_once_with( 'volume-move-get-iter', expected_api_args) def test_get_volume_move_status(self): move_status = netapp_api.NaElement(fake.VOLUME_MOVE_GET_ITER_RESULT) self.mock_object(self.client, 'send_iter_request', mock.Mock(return_value=move_status)) actual_status_info = self.client.get_volume_move_status( fake.SHARE_NAME, fake.VSERVER_NAME) expected_api_args = { 'query': { 'volume-move-info': { 'volume': fake.SHARE_NAME, 'vserver': fake.VSERVER_NAME, }, }, 'desired-attributes': { 'volume-move-info': { 'percent-complete': None, 'estimated-completion-time': None, 'state': None, 'details': None, 'cutover-action': None, 'phase': None, }, }, } expected_status_info = { 'percent-complete': '82', 'estimated-completion-time': '1481919246', 'state': 'healthy', 'details': 'Cutover 
Completed::Volume move job finishing move', 'cutover-action': 'retry_on_failure', 'phase': 'finishing', } self.assertDictMatch(expected_status_info, actual_status_info) self.client.send_iter_request.assert_called_once_with( 'volume-move-get-iter', expected_api_args) def test_qos_policy_group_exists_no_records(self): self.mock_object(self.client, 'qos_policy_group_get', mock.Mock( side_effect=exception.NetAppException)) policy_exists = self.client.qos_policy_group_exists( 'i-dont-exist-but-i-am') self.assertIs(False, policy_exists) def test_qos_policy_group_exists(self): self.mock_object(self.client, 'qos_policy_group_get', mock.Mock(return_value=fake.QOS_POLICY_GROUP)) policy_exists = self.client.qos_policy_group_exists( fake.QOS_POLICY_GROUP_NAME) self.assertIs(True, policy_exists) def test_qos_policy_group_get_no_permissions_to_execute_zapi(self): naapi_error = self._mock_api_error(code=netapp_api.EAPINOTFOUND, message='13005:Unable to find API') self.mock_object(self.client, 'send_request', naapi_error) self.assertRaises(exception.NetAppException, self.client.qos_policy_group_get, 'possibly-valid-qos-policy') def test_qos_policy_group_get_other_zapi_errors(self): naapi_error = self._mock_api_error(code=netapp_api.EINTERNALERROR, message='13114:Internal error') self.mock_object(self.client, 'send_request', naapi_error) self.assertRaises(netapp_api.NaApiError, self.client.qos_policy_group_get, 'possibly-valid-qos-policy') def test_qos_policy_group_get_none_found(self): no_records_response = netapp_api.NaElement(fake.NO_RECORDS_RESPONSE) self.mock_object(self.client, 'send_request', mock.Mock(return_value=no_records_response)) self.assertRaises(exception.NetAppException, self.client.qos_policy_group_get, 'non-existent-qos-policy') qos_policy_group_get_iter_args = { 'query': { 'qos-policy-group-info': { 'policy-group': 'non-existent-qos-policy', }, }, 'desired-attributes': { 'qos-policy-group-info': { 'policy-group': None, 'vserver': None, 'max-throughput': None, 'num-workloads': None }, }, } self.client.send_request.assert_called_once_with( 'qos-policy-group-get-iter', qos_policy_group_get_iter_args, False) def test_qos_policy_group_get(self): api_response = netapp_api.NaElement( fake.QOS_POLICY_GROUP_GET_ITER_RESPONSE) self.mock_object(self.client, 'send_request', mock.Mock(return_value=api_response)) qos_info = self.client.qos_policy_group_get(fake.QOS_POLICY_GROUP_NAME) qos_policy_group_get_iter_args = { 'query': { 'qos-policy-group-info': { 'policy-group': fake.QOS_POLICY_GROUP_NAME, }, }, 'desired-attributes': { 'qos-policy-group-info': { 'policy-group': None, 'vserver': None, 'max-throughput': None, 'num-workloads': None }, }, } self.client.send_request.assert_called_once_with( 'qos-policy-group-get-iter', qos_policy_group_get_iter_args, False) self.assertDictMatch(fake.QOS_POLICY_GROUP, qos_info) @ddt.data(None, fake.QOS_MAX_THROUGHPUT) def test_qos_policy_group_create(self, max_throughput): self.mock_object(self.client, 'send_request', mock.Mock(return_value=fake.PASSED_RESPONSE)) self.client.qos_policy_group_create( fake.QOS_POLICY_GROUP_NAME, fake.VSERVER_NAME, max_throughput=max_throughput) qos_policy_group_create_args = { 'policy-group': fake.QOS_POLICY_GROUP_NAME, 'vserver': fake.VSERVER_NAME, } if max_throughput: qos_policy_group_create_args.update( {'max-throughput': max_throughput}) self.client.send_request.assert_called_once_with( 'qos-policy-group-create', qos_policy_group_create_args, False) def test_qos_policy_group_modify(self): self.mock_object(self.client, 
'send_request', mock.Mock(return_value=fake.PASSED_RESPONSE)) self.client.qos_policy_group_modify(fake.QOS_POLICY_GROUP_NAME, '3000iops') qos_policy_group_modify_args = { 'policy-group': fake.QOS_POLICY_GROUP_NAME, 'max-throughput': '3000iops', } self.client.send_request.assert_called_once_with( 'qos-policy-group-modify', qos_policy_group_modify_args, False) def test_qos_policy_group_delete(self): self.mock_object(self.client, 'send_request', mock.Mock(return_value=fake.PASSED_RESPONSE)) self.client.qos_policy_group_delete(fake.QOS_POLICY_GROUP_NAME) qos_policy_group_delete_args = { 'policy-group': fake.QOS_POLICY_GROUP_NAME, } self.client.send_request.assert_called_once_with( 'qos-policy-group-delete', qos_policy_group_delete_args, False) def test_qos_policy_group_rename(self): self.mock_object(self.client, 'send_request', mock.Mock(return_value=fake.PASSED_RESPONSE)) self.client.qos_policy_group_rename( fake.QOS_POLICY_GROUP_NAME, 'new_' + fake.QOS_POLICY_GROUP_NAME) qos_policy_group_rename_args = { 'policy-group-name': fake.QOS_POLICY_GROUP_NAME, 'new-name': 'new_' + fake.QOS_POLICY_GROUP_NAME, } self.client.send_request.assert_called_once_with( 'qos-policy-group-rename', qos_policy_group_rename_args, False) def test_qos_policy_group_rename_noop(self): self.mock_object(self.client, 'send_request') # rename to same name = no-op self.client.qos_policy_group_rename( fake.QOS_POLICY_GROUP_NAME, fake.QOS_POLICY_GROUP_NAME) self.assertFalse(self.client.send_request.called) def test_mark_qos_policy_group_for_deletion_rename_failure(self): self.mock_object(self.client, 'qos_policy_group_exists', mock.Mock(return_value=True)) self.mock_object(self.client, 'qos_policy_group_rename', mock.Mock(side_effect=netapp_api.NaApiError)) self.mock_object(client_cmode.LOG, 'warning') self.mock_object(self.client, 'remove_unused_qos_policy_groups') retval = self.client.mark_qos_policy_group_for_deletion( fake.QOS_POLICY_GROUP_NAME) self.assertIsNone(retval) client_cmode.LOG.warning.assert_called_once() self.client.qos_policy_group_exists.assert_called_once_with( fake.QOS_POLICY_GROUP_NAME) self.client.qos_policy_group_rename.assert_called_once_with( fake.QOS_POLICY_GROUP_NAME, client_cmode.DELETED_PREFIX + fake.QOS_POLICY_GROUP_NAME) self.client.remove_unused_qos_policy_groups.assert_called_once_with() @ddt.data(True, False) def test_mark_qos_policy_group_for_deletion_policy_exists(self, exists): self.mock_object(self.client, 'qos_policy_group_exists', mock.Mock(return_value=exists)) self.mock_object(self.client, 'qos_policy_group_rename') mock_remove_unused_policies = self.mock_object( self.client, 'remove_unused_qos_policy_groups') self.mock_object(client_cmode.LOG, 'warning') retval = self.client.mark_qos_policy_group_for_deletion( fake.QOS_POLICY_GROUP_NAME) self.assertIsNone(retval) if exists: self.client.qos_policy_group_rename.assert_called_once_with( fake.QOS_POLICY_GROUP_NAME, client_cmode.DELETED_PREFIX + fake.QOS_POLICY_GROUP_NAME) mock_remove_unused_policies.assert_called_once_with() else: self.assertFalse(self.client.qos_policy_group_rename.called) self.assertFalse( self.client.remove_unused_qos_policy_groups.called) self.assertFalse(client_cmode.LOG.warning.called) @ddt.data(True, False) def test_remove_unused_qos_policy_groups_with_failure(self, failed): if failed: args = mock.Mock(side_effect=netapp_api.NaApiError) else: args = mock.Mock(return_value=fake.PASSED_FAILED_ITER_RESPONSE) self.mock_object(self.client, 'send_request', args) self.mock_object(client_cmode.LOG, 'debug') retval = 
self.client.remove_unused_qos_policy_groups() qos_policy_group_delete_iter_args = { 'query': { 'qos-policy-group-info': { 'policy-group': '%s*' % client_cmode.DELETED_PREFIX, } }, 'max-records': 3500, 'continue-on-failure': 'true', 'return-success-list': 'false', 'return-failure-list': 'false', } self.assertIsNone(retval) self.client.send_request.assert_called_once_with( 'qos-policy-group-delete-iter', qos_policy_group_delete_iter_args, False) self.assertIs(failed, client_cmode.LOG.debug.called) def test_get_cluster_name(self): api_response = netapp_api.NaElement( fake.CLUSTER_GET_CLUSTER_NAME) self.mock_object(self.client, 'send_request', mock.Mock(return_value=api_response)) api_args = { 'desired-attributes': { 'cluster-identity-info': { 'cluster-name': None, } } } result = self.client.get_cluster_name() self.assertEqual(fake.CLUSTER_NAME, result) self.client.send_request.assert_called_once_with( 'cluster-identity-get', api_args, enable_tunneling=False) @ddt.data('fake_snapshot_name', None) def test_check_volume_clone_split_completed(self, get_clone_parent): volume_name = fake.SHARE_NAME mock_get_vol_clone_parent = self.mock_object( self.client, 'get_volume_clone_parent_snaphot', mock.Mock(return_value=get_clone_parent)) result = self.client.check_volume_clone_split_completed(volume_name) mock_get_vol_clone_parent.assert_called_once_with(volume_name) expected_result = get_clone_parent is None self.assertEqual(expected_result, result) def test_rehost_volume(self): volume_name = fake.SHARE_NAME vserver = fake.VSERVER_NAME dest_vserver = fake.VSERVER_NAME_2 api_args = { 'volume': volume_name, 'vserver': vserver, 'destination-vserver': dest_vserver, } self.mock_object(self.client, 'send_request') self.client.rehost_volume(volume_name, vserver, dest_vserver) self.client.send_request.assert_called_once_with('volume-rehost', api_args) @ddt.data( {'fake_api_response': fake.VOLUME_GET_ITER_PARENT_SNAP_EMPTY_RESPONSE, 'expected_snapshot_name': None}, {'fake_api_response': fake.VOLUME_GET_ITER_PARENT_SNAP_RESPONSE, 'expected_snapshot_name': fake.SNAPSHOT_NAME}, {'fake_api_response': fake.NO_RECORDS_RESPONSE, 'expected_snapshot_name': None}) @ddt.unpack def test_get_volume_clone_parent_snaphot(self, fake_api_response, expected_snapshot_name): api_response = netapp_api.NaElement(fake_api_response) self.mock_object(self.client, 'send_iter_request', mock.Mock(return_value=api_response)) result = self.client.get_volume_clone_parent_snaphot(fake.SHARE_NAME) expected_api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': fake.SHARE_NAME } } }, 'desired-attributes': { 'volume-attributes': { 'volume-clone-attributes': { 'volume-clone-parent-attributes': { 'snapshot-name': '' } } } } } self.client.send_iter_request.assert_called_once_with( 'volume-get-iter', expected_api_args) self.assertEqual(expected_snapshot_name, result) manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/client/test_api.py0000664000175000017500000002412013656750227027154 0ustar zuulzuul00000000000000# Copyright (c) 2014 Ben Swartzlander. All rights reserved. # Copyright (c) 2014 Navneet Singh. All rights reserved. # Copyright (c) 2014 Clinton Knight. All rights reserved. # Copyright (c) 2014 Alex Meade. All rights reserved. # Copyright (c) 2014 Bob Callaway. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for NetApp API layer """ from unittest import mock import ddt from six.moves import urllib from manila import exception from manila.share.drivers.netapp.dataontap.client import api from manila import test from manila.tests.share.drivers.netapp.dataontap.client import fakes as fake class NetAppApiElementTransTests(test.TestCase): """Test case for NetApp API element translations.""" def test_get_set_system_version(self): napi = api.NaServer('localhost') # Testing calls before version is set version = napi.get_system_version() self.assertIsNone(version) napi.set_system_version(fake.VERSION_TUPLE) version = napi.get_system_version() self.assertEqual(fake.VERSION_TUPLE, version) def test_translate_struct_dict_unique_key(self): """Tests if dict gets properly converted to NaElements.""" root = api.NaElement('root') child = {'e1': 'v1', 'e2': 'v2', 'e3': 'v3'} root.translate_struct(child) self.assertEqual(3, len(root.get_children())) for key, value in child.items(): self.assertEqual(value, root.get_child_content(key)) def test_translate_struct_dict_nonunique_key(self): """Tests if list/dict gets properly converted to NaElements.""" root = api.NaElement('root') child = [{'e1': 'v1', 'e2': 'v2'}, {'e1': 'v3'}] root.translate_struct(child) children = root.get_children() self.assertEqual(3, len(children)) for c in children: if c.get_name() == 'e1': self.assertIn(c.get_content(), ['v1', 'v3']) else: self.assertEqual('v2', c.get_content()) def test_translate_struct_list(self): """Tests if list gets properly converted to NaElements.""" root = api.NaElement('root') child = ['e1', 'e2'] root.translate_struct(child) self.assertEqual(2, len(root.get_children())) self.assertIsNone(root.get_child_content('e1')) self.assertIsNone(root.get_child_content('e2')) def test_translate_struct_tuple(self): """Tests if tuple gets properly converted to NaElements.""" root = api.NaElement('root') child = ('e1', 'e2') root.translate_struct(child) self.assertEqual(2, len(root.get_children())) self.assertIsNone(root.get_child_content('e1')) self.assertIsNone(root.get_child_content('e2')) def test_translate_invalid_struct(self): """Tests if invalid data structure raises exception.""" root = api.NaElement('root') child = 'random child element' self.assertRaises(ValueError, root.translate_struct, child) def test_setter_builtin_types(self): """Tests str, int, float get converted to NaElement.""" update = dict(e1='v1', e2='1', e3='2.0', e4='8') root = api.NaElement('root') for key, value in update.items(): root[key] = value for key, value in update.items(): self.assertEqual(value, root.get_child_content(key)) def test_setter_na_element(self): """Tests na_element gets appended as child.""" root = api.NaElement('root') root['e1'] = api.NaElement('nested') self.assertEqual(1, len(root.get_children())) e1 = root.get_child_by_name('e1') self.assertIsInstance(e1, api.NaElement) self.assertIsInstance(e1.get_child_by_name('nested'), api.NaElement) def test_setter_child_dict(self): """Tests dict is appended as child to root.""" root = api.NaElement('root') root['d'] = {'e1': 'v1', 'e2': 'v2'} e1 = root.get_child_by_name('d') 
self.assertIsInstance(e1, api.NaElement) sub_ch = e1.get_children() self.assertEqual(2, len(sub_ch)) for c in sub_ch: self.assertIn(c.get_name(), ['e1', 'e2']) if c.get_name() == 'e1': self.assertEqual('v1', c.get_content()) else: self.assertEqual('v2', c.get_content()) def test_setter_child_list_tuple(self): """Tests list/tuple are appended as child to root.""" root = api.NaElement('root') root['l'] = ['l1', 'l2'] root['t'] = ('t1', 't2') li = root.get_child_by_name('l') self.assertIsInstance(li, api.NaElement) t = root.get_child_by_name('t') self.assertIsInstance(t, api.NaElement) self.assertEqual(2, len(li.get_children())) for le in li.get_children(): self.assertIn(le.get_name(), ['l1', 'l2']) self.assertEqual(2, len(t.get_children())) for te in t.get_children(): self.assertIn(te.get_name(), ['t1', 't2']) def test_setter_no_value(self): """Tests key with None value.""" root = api.NaElement('root') root['k'] = None self.assertIsNone(root.get_child_content('k')) def test_setter_invalid_value(self): """Tests invalid value raises exception.""" self.assertRaises(TypeError, api.NaElement('root').__setitem__, 'k', api.NaServer('localhost')) def test_setter_invalid_key(self): """Tests invalid value raises exception.""" self.assertRaises(KeyError, api.NaElement('root').__setitem__, None, 'value') @ddt.ddt class NetAppApiServerTests(test.TestCase): """Test case for NetApp API server methods""" def setUp(self): self.root = api.NaServer('127.0.0.1') super(NetAppApiServerTests, self).setUp() @ddt.data(None, fake.FAKE_XML_STR) def test_invoke_elem_value_error(self, na_element): """Tests whether invalid NaElement parameter causes error""" self.assertRaises(ValueError, self.root.invoke_elem, na_element) def test_invoke_elem_http_error(self): """Tests handling of HTTPError""" na_element = fake.FAKE_NA_ELEMENT self.mock_object(self.root, '_create_request', mock.Mock( return_value=('abc', fake.FAKE_NA_ELEMENT))) self.mock_object(api, 'LOG') self.root._opener = fake.FAKE_HTTP_OPENER self.mock_object(self.root, '_build_opener') self.mock_object(self.root._opener, 'open', mock.Mock( side_effect=urllib.error.HTTPError(url='', hdrs='', fp=None, code='401', msg='httperror'))) self.assertRaises(api.NaApiError, self.root.invoke_elem, na_element) def test_invoke_elem_urlerror(self): """Tests handling of URLError""" na_element = fake.FAKE_NA_ELEMENT self.mock_object(self.root, '_create_request', mock.Mock( return_value=('abc', fake.FAKE_NA_ELEMENT))) self.mock_object(api, 'LOG') self.root._opener = fake.FAKE_HTTP_OPENER self.mock_object(self.root, '_build_opener') self.mock_object(self.root._opener, 'open', mock.Mock( side_effect=urllib.error.URLError(reason='urlerror'))) self.assertRaises(exception.StorageCommunicationException, self.root.invoke_elem, na_element) def test_invoke_elem_unknown_exception(self): """Tests handling of Unknown Exception""" na_element = fake.FAKE_NA_ELEMENT self.mock_object(self.root, '_create_request', mock.Mock( return_value=('abc', fake.FAKE_NA_ELEMENT))) self.mock_object(api, 'LOG') self.root._opener = fake.FAKE_HTTP_OPENER self.mock_object(self.root, '_build_opener') self.mock_object(self.root._opener, 'open', mock.Mock( side_effect=Exception)) exception = self.assertRaises(api.NaApiError, self.root.invoke_elem, na_element) self.assertEqual('unknown', exception.code) @ddt.data({'trace_enabled': False, 'trace_pattern': '(.*)', 'log': False}, {'trace_enabled': True, 'trace_pattern': '(?!(volume)).*', 'log': False}, {'trace_enabled': True, 'trace_pattern': '(.*)', 'log': True}, 
{'trace_enabled': True, 'trace_pattern': '^volume-(info|get-iter)$', 'log': True}) @ddt.unpack def test_invoke_elem_valid(self, trace_enabled, trace_pattern, log): """Tests the method invoke_elem with valid parameters""" na_element = fake.FAKE_NA_ELEMENT self.root._trace = trace_enabled self.root._api_trace_pattern = trace_pattern self.mock_object(self.root, '_create_request', mock.Mock( return_value=('abc', fake.FAKE_NA_ELEMENT))) self.mock_object(api, 'LOG') self.root._opener = fake.FAKE_HTTP_OPENER self.mock_object(self.root, '_build_opener') self.mock_object(self.root, '_get_result', mock.Mock( return_value=fake.FAKE_NA_ELEMENT)) opener_mock = self.mock_object( self.root._opener, 'open', mock.Mock()) opener_mock.read.side_effect = ['resp1', 'resp2'] self.root.invoke_elem(na_element) expected_log_count = 2 if log else 0 self.assertEqual(expected_log_count, api.LOG.debug.call_count) manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/client/fakes.py0000664000175000017500000026073413656750227026452 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock from lxml import etree from six.moves import urllib from manila.share.drivers.netapp.dataontap.client import api CONNECTION_INFO = { 'hostname': 'hostname', 'transport_type': 'https', 'port': 443, 'username': 'admin', 'password': 'passw0rd', 'api_trace_pattern': '(.*)', } CLUSTER_NAME = 'fake_cluster' REMOTE_CLUSTER_NAME = 'fake_cluster_2' CLUSTER_ADDRESS_1 = 'fake_cluster_address' CLUSTER_ADDRESS_2 = 'fake_cluster_address_2' VERSION = 'NetApp Release 8.2.1 Cluster-Mode: Fri Mar 21 14:25:07 PDT 2014' VERSION_NO_DARE = 'NetApp Release 9.1.0: Tue May 10 19:30:23 2016 <1no-DARE>' VERSION_TUPLE = (9, 1, 0) NODE_NAME = 'fake_node1' NODE_NAMES = ('fake_node1', 'fake_node2') VSERVER_NAME = 'fake_vserver' VSERVER_NAME_2 = 'fake_vserver_2' VSERVER_PEER_NAME = 'fake_vserver_peer' ADMIN_VSERVER_NAME = 'fake_admin_vserver' NODE_VSERVER_NAME = 'fake_node_vserver' NFS_VERSIONS = ['nfs3', 'nfs4.0'] ROOT_AGGREGATE_NAMES = ('root_aggr1', 'root_aggr2') ROOT_VOLUME_AGGREGATE_NAME = 'fake_root_aggr' ROOT_VOLUME_NAME = 'fake_root_volume' SHARE_AGGREGATE_NAME = 'fake_aggr1' SHARE_AGGREGATE_NAMES = ('fake_aggr1', 'fake_aggr2') SHARE_AGGREGATE_RAID_TYPES = ('raid4', 'raid_dp') SHARE_AGGREGATE_DISK_TYPE = 'FCAL' SHARE_AGGREGATE_DISK_TYPES = ['SATA', 'SSD'] SHARE_NAME = 'fake_share' SHARE_SIZE = '1000000000' SHARE_NAME_2 = 'fake_share_2' SNAPSHOT_NAME = 'fake_snapshot' CG_SNAPSHOT_ID = 'fake_cg_id' PARENT_SHARE_NAME = 'fake_parent_share' PARENT_SNAPSHOT_NAME = 'fake_parent_snapshot' MAX_FILES = 5000 LANGUAGE = 'fake_language' SNAPSHOT_POLICY_NAME = 'fake_snapshot_policy' EXPORT_POLICY_NAME = 'fake_export_policy' DELETED_EXPORT_POLICIES = { VSERVER_NAME: [ 'deleted_manila_fake_policy_1', 'deleted_manila_fake_policy_2', ], VSERVER_NAME_2: [ 'deleted_manila_fake_policy_3', ], } QOS_POLICY_GROUP_NAME = 'fake_qos_policy_group_name' QOS_MAX_THROUGHPUT = '5000B/s' USER_NAME = 
'fake_user' PORT = 'e0a' VLAN = '1001' VLAN_PORT = 'e0a-1001' IP_ADDRESS = '10.10.10.10' NETMASK = '255.255.255.0' GATEWAY = '10.10.10.1' SUBNET = '10.10.10.0/24' NET_ALLOCATION_ID = 'fake_allocation_id' LIF_NAME_TEMPLATE = 'os_%(net_allocation_id)s' LIF_NAME = LIF_NAME_TEMPLATE % {'net_allocation_id': NET_ALLOCATION_ID} IPSPACE_NAME = 'fake_ipspace' BROADCAST_DOMAIN = 'fake_domain' MTU = 9000 SM_SOURCE_VSERVER = 'fake_source_vserver' SM_SOURCE_VOLUME = 'fake_source_volume' SM_DEST_VSERVER = 'fake_destination_vserver' SM_DEST_VOLUME = 'fake_destination_volume' NETWORK_INTERFACES = [{ 'interface_name': 'fake_interface', 'address': IP_ADDRESS, 'vserver': VSERVER_NAME, 'netmask': NETMASK, 'role': 'data', 'home-node': NODE_NAME, 'home-port': VLAN_PORT }] NETWORK_INTERFACES_MULTIPLE = [ { 'interface_name': 'fake_interface', 'address': IP_ADDRESS, 'vserver': VSERVER_NAME, 'netmask': NETMASK, 'role': 'data', 'home-node': NODE_NAME, 'home-port': VLAN_PORT, }, { 'interface_name': 'fake_interface_2', 'address': '10.10.12.10', 'vserver': VSERVER_NAME, 'netmask': NETMASK, 'role': 'data', 'home-node': NODE_NAME, 'home-port': PORT, } ] IPSPACES = [{ 'uuid': 'fake_uuid', 'ipspace': IPSPACE_NAME, 'id': 'fake_id', 'broadcast-domains': ['OpenStack'], 'ports': [NODE_NAME + ':' + VLAN_PORT], 'vservers': [ IPSPACE_NAME, VSERVER_NAME, ] }] EMS_MESSAGE = { 'computer-name': 'fake_host', 'event-id': '0', 'event-source': 'fake driver', 'app-version': 'fake app version', 'category': 'fake category', 'event-description': 'fake description', 'log-level': '6', 'auto-support': 'false', } QOS_POLICY_GROUP = { 'policy-group': QOS_POLICY_GROUP_NAME, 'vserver': VSERVER_NAME, 'max-throughput': QOS_MAX_THROUGHPUT, 'num-workloads': 1, } NO_RECORDS_RESPONSE = etree.XML(""" 0 """) PASSED_RESPONSE = etree.XML(""" """) PASSED_FAILED_ITER_RESPONSE = etree.XML(""" 0 1 """) INVALID_GET_ITER_RESPONSE_NO_ATTRIBUTES = etree.XML(""" 1 fake_tag """) INVALID_GET_ITER_RESPONSE_NO_RECORDS = etree.XML(""" fake_tag """) VSERVER_GET_ITER_RESPONSE = etree.XML(""" %(fake_vserver)s 1 """ % {'fake_vserver': VSERVER_NAME}) VSERVER_GET_ROOT_VOLUME_NAME_RESPONSE = etree.XML(""" %(root_volume)s %(fake_vserver)s 1 """ % {'root_volume': ROOT_VOLUME_NAME, 'fake_vserver': VSERVER_NAME}) VSERVER_GET_IPSPACE_NAME_RESPONSE = etree.XML(""" %(ipspace)s %(fake_vserver)s 1 """ % {'ipspace': IPSPACE_NAME, 'fake_vserver': VSERVER_NAME}) VSERVER_GET_RESPONSE = etree.XML(""" %(aggr1)s %(aggr2)s 45678592 %(aggr1)s 6448431104 %(aggr2)s %(vserver)s """ % { 'vserver': VSERVER_NAME, 'aggr1': SHARE_AGGREGATE_NAMES[0], 'aggr2': SHARE_AGGREGATE_NAMES[1], }) VSERVER_DATA_LIST_RESPONSE = etree.XML(""" %(vserver)s data 1 """ % {'vserver': VSERVER_NAME}) VSERVER_AGGREGATES = { SHARE_AGGREGATE_NAMES[0]: { 'available': 45678592, }, SHARE_AGGREGATE_NAMES[1]: { 'available': 6448431104, }, } VSERVER_GET_RESPONSE_NO_AGGREGATES = etree.XML(""" %(vserver)s """ % {'vserver': VSERVER_NAME}) ONTAPI_VERSION_RESPONSE = etree.XML(""" 1 19 """) SYSTEM_GET_VERSION_RESPONSE = etree.XML(""" 1395426307 true %(version)s 8 2 1 """ % {'version': VERSION}) LICENSE_V2_LIST_INFO_RESPONSE = etree.XML(""" none Cluster Base License false cluster3 base 1-80-000008 license none NFS License false cluster3-01 nfs 1-81-0000000000000004082368507 license none CIFS License false cluster3-01 cifs 1-81-0000000000000004082368507 license none iSCSI License false cluster3-01 iscsi 1-81-0000000000000004082368507 license none FCP License false cluster3-01 fcp 1-81-0000000000000004082368507 license none SnapRestore 
License false cluster3-01 snaprestore 1-81-0000000000000004082368507 license none SnapMirror License false cluster3-01 snapmirror 1-81-0000000000000004082368507 license none FlexClone License false cluster3-01 flexclone 1-81-0000000000000004082368507 license none SnapVault License false cluster3-01 snapvault 1-81-0000000000000004082368507 license """) LICENSES = ( 'base', 'cifs', 'fcp', 'flexclone', 'iscsi', 'nfs', 'snapmirror', 'snaprestore', 'snapvault' ) VOLUME_COUNT_RESPONSE = etree.XML(""" vol0 cluster3-01 %(root_volume)s %(fake_vserver)s 2 """ % {'root_volume': ROOT_VOLUME_NAME, 'fake_vserver': VSERVER_NAME}) CIFS_SECURITY_SERVICE = { 'type': 'active_directory', 'password': 'fake_password', 'user': 'fake_user', 'ou': 'fake_ou', 'domain': 'fake_domain', 'dns_ip': 'fake_dns_ip', 'server': '', } LDAP_SECURITY_SERVICE = { 'type': 'ldap', 'password': 'fake_password', 'server': 'fake_server', 'id': 'fake_id', } KERBEROS_SECURITY_SERVICE = { 'type': 'kerberos', 'password': 'fake_password', 'user': 'fake_user', 'server': 'fake_server', 'id': 'fake_id', 'domain': 'fake_domain', 'dns_ip': 'fake_dns_ip', } KERBEROS_SERVICE_PRINCIPAL_NAME = 'nfs/fake-vserver.fake_domain@FAKE_DOMAIN' INVALID_SECURITY_SERVICE = { 'type': 'fake', } SYSTEM_NODE_GET_ITER_RESPONSE = etree.XML(""" %s 1 """ % NODE_NAME) SECUTITY_KEY_MANAGER_NVE_SUPPORT_RESPONSE_TRUE = etree.XML(""" true """) SECUTITY_KEY_MANAGER_NVE_SUPPORT_RESPONSE_FALSE = etree.XML(""" false """) NET_PORT_GET_RESPONSE_NO_VLAN = etree.XML(""" auto full auto %(domain)s healthy false %(ipspace)s true true true up 00:0c:29:fc:04:f7 1500 1500 %(node_name)s full receive 1000 %(port)s physical data """ % {'domain': BROADCAST_DOMAIN, 'ipspace': IPSPACE_NAME, 'node_name': NODE_NAME, 'port': PORT}) NET_PORT_GET_RESPONSE = etree.XML(""" auto full auto healthy false %(ipspace)s true true true up 00:0c:29:fc:04:f7 1500 1500 %(node_name)s full receive 1000 %(port)s-%(vlan)s vlan data %(vlan)s %(node_name)s %(port)s """ % {'ipspace': IPSPACE_NAME, 'node_name': NODE_NAME, 'port': PORT, 'vlan': VLAN}) NET_PORT_GET_ITER_RESPONSE = etree.XML(""" full full auto true true true up 00:0c:29:fc:04:d9 1500 %(node_name)s full none 10 e0a physical data full full auto true true true up 00:0c:29:fc:04:e3 1500 %(node_name)s full none 100 e0b physical data full full auto true true true up 00:0c:29:fc:04:ed 1500 %(node_name)s full none 1000 e0c physical data full full auto true true true up 00:0c:29:fc:04:f7 1500 %(node_name)s full none 10000 e0d physical data 4 """ % {'node_name': NODE_NAME}) SPEED_SORTED_PORTS = ( {'node': NODE_NAME, 'port': 'e0d', 'speed': '10000'}, {'node': NODE_NAME, 'port': 'e0c', 'speed': '1000'}, {'node': NODE_NAME, 'port': 'e0b', 'speed': '100'}, {'node': NODE_NAME, 'port': 'e0a', 'speed': '10'}, ) PORT_NAMES = ('e0a', 'e0b', 'e0c', 'e0d') SPEED_SORTED_PORT_NAMES = ('e0d', 'e0c', 'e0b', 'e0a') UNSORTED_PORTS_ALL_SPEEDS = ( {'node': NODE_NAME, 'port': 'port6', 'speed': 'undef'}, {'node': NODE_NAME, 'port': 'port3', 'speed': '100'}, {'node': NODE_NAME, 'port': 'port1', 'speed': '10000'}, {'node': NODE_NAME, 'port': 'port4', 'speed': '10'}, {'node': NODE_NAME, 'port': 'port7'}, {'node': NODE_NAME, 'port': 'port2', 'speed': '1000'}, {'node': NODE_NAME, 'port': 'port5', 'speed': 'auto'}, ) SORTED_PORTS_ALL_SPEEDS = ( {'node': NODE_NAME, 'port': 'port1', 'speed': '10000'}, {'node': NODE_NAME, 'port': 'port2', 'speed': '1000'}, {'node': NODE_NAME, 'port': 'port3', 'speed': '100'}, {'node': NODE_NAME, 'port': 'port4', 'speed': '10'}, {'node': NODE_NAME, 'port': 
'port5', 'speed': 'auto'}, {'node': NODE_NAME, 'port': 'port6', 'speed': 'undef'}, {'node': NODE_NAME, 'port': 'port7'}, ) NET_PORT_GET_ITER_BROADCAST_DOMAIN_RESPONSE = etree.XML(""" %(ipspace)s %(domain)s %(node)s %(port)s 1 """ % { 'domain': BROADCAST_DOMAIN, 'node': NODE_NAME, 'port': PORT, 'ipspace': IPSPACE_NAME, }) NET_PORT_GET_ITER_BROADCAST_DOMAIN_MISSING_RESPONSE = etree.XML(""" %(ipspace)s %(node)s %(port)s 1 """ % {'node': NODE_NAME, 'port': PORT, 'ipspace': IPSPACE_NAME}) NET_PORT_BROADCAST_DOMAIN_GET_ITER_RESPONSE = etree.XML(""" %(domain)s %(ipspace)s 1 """ % {'domain': BROADCAST_DOMAIN, 'ipspace': IPSPACE_NAME}) NET_IPSPACES_GET_ITER_RESPONSE = etree.XML(""" OpenStack fake_id %(ipspace)s %(node)s:%(port)s fake_uuid %(ipspace)s %(vserver)s 1 """ % { 'ipspace': IPSPACE_NAME, 'node': NODE_NAME, 'port': VLAN_PORT, 'vserver': VSERVER_NAME }) NET_INTERFACE_GET_ITER_RESPONSE = etree.XML("""
192.168.228.42
ipv4 up %(node)s e0c none none system-defined disabled mgmt %(node)s e0c cluster_mgmt true true d3230112-7524-11e4-8608-123478563412 false %(netmask)s 24 up cluster_mgmt c192.168.228.0/24 system_defined cluster3
192.168.228.43
ipv4 up %(node)s e0d none system-defined nextavail mgmt %(node)s e0d mgmt1 true true 0ccc57cc-7525-11e4-8608-123478563412 false %(netmask)s 24 up node_mgmt n192.168.228.0/24 system_defined cluster3-01
%(address)s
ipv4 up %(node)s %(vlan)s nfs cifs none system-defined nextavail data %(node)s %(vlan)s %(lif)s false true db4d91b6-95d9-11e4-8608-123478563412 false %(netmask)s 24 up data d10.0.0.0/24 system_defined %(vserver)s
3
""" % { 'lif': LIF_NAME, 'vserver': VSERVER_NAME, 'node': NODE_NAME, 'address': IP_ADDRESS, 'netmask': NETMASK, 'vlan': VLAN_PORT, }) LIF_NAMES = ('cluster_mgmt', 'mgmt1', LIF_NAME) NET_INTERFACE_GET_ITER_RESPONSE_NFS = etree.XML("""
%(address)s
ipv4 up %(node)s %(vlan)s nfs cifs none system-defined nextavail data %(node)s %(vlan)s %(lif)s false true db4d91b6-95d9-11e4-8608-123478563412 false %(netmask)s 24 up data d10.0.0.0/24 system_defined %(vserver)s
1
""" % { 'lif': LIF_NAME, 'vserver': VSERVER_NAME, 'node': NODE_NAME, 'address': IP_ADDRESS, 'netmask': NETMASK, 'vlan': VLAN_PORT, }) LIFS = ( {'address': '192.168.228.42', 'home-node': NODE_NAME, 'home-port': 'e0c', 'interface-name': 'cluster_mgmt', 'netmask': NETMASK, 'role': 'cluster_mgmt', 'vserver': 'cluster3' }, {'address': '192.168.228.43', 'home-node': NODE_NAME, 'home-port': 'e0d', 'interface-name': 'mgmt1', 'netmask': NETMASK, 'role': 'node_mgmt', 'vserver': 'cluster3-01' }, {'address': IP_ADDRESS, 'home-node': NODE_NAME, 'home-port': VLAN_PORT, 'interface-name': LIF_NAME, 'netmask': NETMASK, 'role': 'data', 'vserver': VSERVER_NAME, }, ) NFS_LIFS = [ {'address': IP_ADDRESS, 'home-node': NODE_NAME, 'home-port': VLAN_PORT, 'interface-name': LIF_NAME, 'netmask': NETMASK, 'role': 'data', 'vserver': VSERVER_NAME, }, ] NET_INTERFACE_GET_ONE_RESPONSE = etree.XML(""" %(lif)s %(vserver)s 1 """ % {'lif': LIF_NAME, 'vserver': VSERVER_NAME}) AGGR_GET_NAMES_RESPONSE = etree.XML(""" %(root1)s %(root2)s %(aggr1)s %(aggr2)s 2 """ % { 'root1': ROOT_AGGREGATE_NAMES[0], 'root2': ROOT_AGGREGATE_NAMES[1], 'aggr1': SHARE_AGGREGATE_NAMES[0], 'aggr2': SHARE_AGGREGATE_NAMES[1], }) AGGR_GET_SPACE_RESPONSE = etree.XML(""" /%(aggr1)s/plex0 /%(aggr1)s/plex0/rg0 45670400 943718400 898048000 %(aggr1)s /%(aggr2)s/plex0 /%(aggr2)s/plex0/rg0 /%(aggr2)s/plex0/rg1 4267659264 7549747200 3282087936 %(aggr2)s 2 """ % { 'aggr1': SHARE_AGGREGATE_NAMES[0], 'aggr2': SHARE_AGGREGATE_NAMES[1], }) AGGR_GET_NODE_RESPONSE = etree.XML(""" %(node)s %(aggr)s 1 """ % { 'aggr': SHARE_AGGREGATE_NAME, 'node': NODE_NAME }) AGGR_GET_ITER_RESPONSE = etree.XML(""" false 64_bit 1758646411 aggr 512 30384 96 30384 30384 30384 243191 96 0 4082368507 cluster3-01 4082368507 cluster3-01 off 0 active block 3 cfo true false true false false false unmirrored online 1 true false /%(aggr1)s/plex0 normal,active block false false false /%(aggr1)s/plex0/rg0 0 0 0 on 16 raid_dp, normal raid_dp online false 0 0 true true 0 0 0 0 0 0 0 0 0 245760 0 95 45670400 943718400 898048000 0 898048000 897802240 1 0 0 %(aggr1)s 15863632-ea49-49a8-9c88-2bd2d57c6d7a cluster3-01 unknown false 64_bit 706602229 aggr 528 31142 96 31142 31142 31142 1945584 96 0 4082368507 cluster3-01 4082368507 cluster3-01 off 0 active block 10 sfo false false true false false false unmirrored online 1 true false /%(aggr2)s/plex0 normal,active block false false false /%(aggr2)s/plex0/rg0 0 0 block false false false /%(aggr2)s/plex0/rg1 0 0 0 on 8 raid4, normal raid4 online false 0 0 true true 0 0 0 0 0 0 0 0 0 425984 0 15 6448431104 7549747200 1101316096 0 1101316096 1100890112 2 0 0 %(aggr2)s 2a741934-1aaf-42dd-93ca-aaf231be108a cluster3-01 not_striped 2 """ % { 'aggr1': SHARE_AGGREGATE_NAMES[0], 'aggr2': SHARE_AGGREGATE_NAMES[1], }) AGGR_GET_ITER_SSC_RESPONSE = etree.XML(""" false 64_bit 1758646411 aggr 512 30384 96 30384 30384 30384 243191 96 0 4082368507 cluster3-01 4082368507 cluster3-01 off 0 active block 3 cfo true false true false false false unmirrored online 1 true false /%(aggr1)s/plex0 normal,active block false false false /%(aggr1)s/plex0/rg0 0 0 0 on 16 raid_dp, normal raid_dp online false 0 0 true true 0 0 0 0 0 0 0 0 0 245760 0 95 45670400 943718400 898048000 0 898048000 897802240 1 0 0 %(aggr1)s 15863632-ea49-49a8-9c88-2bd2d57c6d7a cluster3-01 unknown 1 """ % {'aggr1': SHARE_AGGREGATE_NAMES[0]}) AGGR_GET_ITER_ROOT_AGGR_RESPONSE = etree.XML(""" true false %(root1)s true false %(root2)s false false %(aggr1)s false false %(aggr2)s 6 """ % { 'root1': ROOT_AGGREGATE_NAMES[0], 
'root2': ROOT_AGGREGATE_NAMES[1], 'aggr1': SHARE_AGGREGATE_NAMES[0], 'aggr2': SHARE_AGGREGATE_NAMES[1], }) AGGR_GET_ITER_NON_ROOT_AGGR_RESPONSE = etree.XML(""" false false %(aggr1)s false false %(aggr2)s 6 """ % { 'aggr1': SHARE_AGGREGATE_NAMES[0], 'aggr2': SHARE_AGGREGATE_NAMES[1], }) VOLUME_GET_NAME_RESPONSE = etree.XML(""" %(volume)s %(vserver)s 1 """ % {'volume': SHARE_NAME, 'vserver': VSERVER_NAME}) VOLUME_GET_VOLUME_PATH_RESPONSE = etree.XML(""" /%(volume)s """ % {'volume': SHARE_NAME}) VOLUME_GET_VOLUME_PATH_CIFS_RESPONSE = etree.XML(""" \\%(volume)s """ % {'volume': SHARE_NAME}) VOLUME_JUNCTION_PATH = '/' + SHARE_NAME VOLUME_JUNCTION_PATH_CIFS = '\\' + SHARE_NAME VOLUME_MODIFY_ITER_RESPONSE = etree.XML(""" 0 1 %(volume)s %(vserver)s """ % {'volume': SHARE_NAME, 'vserver': VSERVER_NAME}) VOLUME_MODIFY_ITER_ERROR_RESPONSE = etree.XML(""" 160 Unable to set volume attribute "size" %(volume)s %(vserver)s 1 0 """ % {'volume': SHARE_NAME, 'vserver': VSERVER_NAME}) SNAPSHOT_ACCESS_TIME = '1466640058' SNAPSHOT_GET_ITER_NOT_BUSY_RESPONSE = etree.XML(""" %(access_time)s false %(snap)s %(volume)s %(vserver)s 1 """ % { 'access_time': SNAPSHOT_ACCESS_TIME, 'snap': SNAPSHOT_NAME, 'volume': SHARE_NAME, 'vserver': VSERVER_NAME, }) SNAPSHOT_GET_ITER_BUSY_RESPONSE = etree.XML(""" %(access_time)s true %(snap)s %(volume)s %(vserver)s volume clone 1 """ % { 'access_time': SNAPSHOT_ACCESS_TIME, 'snap': SNAPSHOT_NAME, 'volume': SHARE_NAME, 'vserver': VSERVER_NAME, }) SNAPSHOT_GET_ITER_NOT_UNIQUE_RESPONSE = etree.XML(""" false %(snap)s %(volume)s %(vserver)s false %(snap)s %(root_volume)s %(admin_vserver)s 1 """ % { 'snap': SNAPSHOT_NAME, 'volume': SHARE_NAME, 'vserver': VSERVER_NAME, 'root_volume': ROOT_VOLUME_NAME, 'admin_vserver': ADMIN_VSERVER_NAME, }) SNAPSHOT_GET_ITER_UNAVAILABLE_RESPONSE = etree.XML(""" 0 13023 %(volume)s Unable to get information for Snapshot copies of volume \ "%(volume)s" on Vserver "%(vserver)s". Reason: Volume not online. %(vserver)s """ % {'volume': SHARE_NAME, 'vserver': VSERVER_NAME}) SNAPSHOT_GET_ITER_OTHER_ERROR_RESPONSE = etree.XML(""" 0 99999 %(volume)s Unable to get information for Snapshot copies of volume \ "%(volume)s" on Vserver "%(vserver)s". %(vserver)s """ % {'volume': SHARE_NAME, 'vserver': VSERVER_NAME}) SNAPSHOT_MULTIDELETE_ERROR_RESPONSE = etree.XML(""" 13021 %(volume)s No such snapshot. 
""" % {'volume': SHARE_NAME}) SNAPSHOT_GET_ITER_DELETED_RESPONSE = etree.XML(""" deleted_manila_%(snap)s %(volume)s %(vserver)s 1 """ % { 'snap': SNAPSHOT_NAME, 'volume': SHARE_NAME, 'vserver': VSERVER_NAME, }) SNAPSHOT_GET_ITER_SNAPMIRROR_RESPONSE = etree.XML(""" %(snap)s %(volume)s %(vserver)s 1 """ % { 'snap': SNAPSHOT_NAME, 'volume': SHARE_NAME, 'vserver': VSERVER_NAME, }) CIFS_SHARE_ACCESS_CONTROL_GET_ITER = etree.XML(""" full_control %(volume)s Administrator manila_svm_cifs change %(volume)s Administrators manila_svm_cifs read %(volume)s Power Users manila_svm_cifs no_access %(volume)s Users manila_svm_cifs 4 """ % {'volume': SHARE_NAME}) NFS_EXPORT_RULES = ('10.10.10.10', '10.10.10.20') NFS_EXPORTFS_LIST_RULES_2_NO_RULES_RESPONSE = etree.XML(""" """) NFS_EXPORTFS_LIST_RULES_2_RESPONSE = etree.XML(""" %(path)s 65534 false %(host1)s %(host2)s %(host1)s %(host2)s %(host1)s %(host2)s sys """ % { 'path': VOLUME_JUNCTION_PATH, 'host1': NFS_EXPORT_RULES[0], 'host2': NFS_EXPORT_RULES[1], }) AGGR_GET_RAID_TYPE_RESPONSE = etree.XML(""" /%(aggr1)s/plex0 /%(aggr1)s/plex0/rg0 %(raid_type1)s %(aggr1)s /%(aggr2)s/plex0 /%(aggr2)s/plex0/rg0 /%(aggr2)s/plex0/rg1 %(raid_type2)s %(aggr2)s 2 """ % { 'aggr1': SHARE_AGGREGATE_NAMES[0], 'aggr2': SHARE_AGGREGATE_NAMES[1], 'raid_type1': SHARE_AGGREGATE_RAID_TYPES[0], 'raid_type2': SHARE_AGGREGATE_RAID_TYPES[1] }) STORAGE_DISK_GET_ITER_RESPONSE = etree.XML(""" cluster3-01:v5.19 %(type0)s cluster3-01:v5.20 %(type0)s cluster3-01:v5.20 %(type1)s cluster3-01:v5.20 %(type1)s 4 """ % { 'type0': SHARE_AGGREGATE_DISK_TYPES[0], 'type1': SHARE_AGGREGATE_DISK_TYPES[1], }) STORAGE_DISK_GET_ITER_RESPONSE_PAGE_1 = etree.XML(""" cluster3-01:v4.16 cluster3-01:v4.17 cluster3-01:v4.18 cluster3-01:v4.19 cluster3-01:v4.20 cluster3-01:v4.21 cluster3-01:v4.22 cluster3-01:v4.24 cluster3-01:v4.25 cluster3-01:v4.26 next_tag_1 10 """) STORAGE_DISK_GET_ITER_RESPONSE_PAGE_2 = etree.XML(""" cluster3-01:v4.27 cluster3-01:v4.28 cluster3-01:v4.29 cluster3-01:v4.32 cluster3-01:v5.16 cluster3-01:v5.17 cluster3-01:v5.18 cluster3-01:v5.19 cluster3-01:v5.20 cluster3-01:v5.21 next_tag_2 10 """) STORAGE_DISK_GET_ITER_RESPONSE_PAGE_3 = etree.XML(""" cluster3-01:v5.22 cluster3-01:v5.24 cluster3-01:v5.25 cluster3-01:v5.26 cluster3-01:v5.27 cluster3-01:v5.28 cluster3-01:v5.29 cluster3-01:v5.32 8 """) GET_AGGREGATE_FOR_VOLUME_RESPONSE = etree.XML(""" %(aggr)s %(share)s os_aa666789-5576-4835-87b7-868069856459 1 """ % { 'aggr': SHARE_AGGREGATE_NAME, 'share': SHARE_NAME }) GET_VOLUME_FOR_ENCRYPTED_RESPONSE = etree.XML(""" true %(volume)s manila_svm 1 """ % {'volume': SHARE_NAME}) GET_VOLUME_FOR_ENCRYPTED_OLD_SYS_VERSION_RESPONSE = etree.XML(""" %(volume)s manila_svm 1 """ % {'volume': SHARE_NAME}) EXPORT_RULE_GET_ITER_RESPONSE = etree.XML(""" %(rule)s %(policy)s 3 manila_svm %(rule)s %(policy)s 1 manila_svm 2 """ % {'policy': EXPORT_POLICY_NAME, 'rule': IP_ADDRESS}) VOLUME_GET_EXPORT_POLICY_RESPONSE = etree.XML(""" %(policy)s %(volume)s manila_svm 1 """ % {'policy': EXPORT_POLICY_NAME, 'volume': SHARE_NAME}) DELETED_EXPORT_POLICY_GET_ITER_RESPONSE = etree.XML(""" %(policy1)s %(vserver)s %(policy2)s %(vserver)s %(policy3)s %(vserver2)s 2 """ % { 'vserver': VSERVER_NAME, 'vserver2': VSERVER_NAME_2, 'policy1': DELETED_EXPORT_POLICIES[VSERVER_NAME][0], 'policy2': DELETED_EXPORT_POLICIES[VSERVER_NAME][1], 'policy3': DELETED_EXPORT_POLICIES[VSERVER_NAME_2][0], }) LUN_GET_ITER_RESPONSE = etree.XML(""" /vol/%(volume)s/fakelun %(volume)s %(vserver)s 1 """ % { 'vserver': VSERVER_NAME, 'volume': SHARE_NAME, }) 
VOLUME_GET_ITER_NOT_UNIQUE_RESPONSE = etree.XML(""" %(volume1)s %(volume2)s 2 """ % { 'volume1': SHARE_NAME, 'volume2': SHARE_NAME_2, }) VOLUME_GET_ITER_JUNCTIONED_VOLUMES_RESPONSE = etree.XML(""" fake_volume test 1 """) VOLUME_GET_ITER_VOLUME_TO_MANAGE_RESPONSE = etree.XML(""" %(aggr)s /%(volume)s %(volume)s %(vserver)s rw %(size)s %(qos-policy-group-name)s 1 """ % { 'aggr': SHARE_AGGREGATE_NAME, 'vserver': VSERVER_NAME, 'volume': SHARE_NAME, 'size': SHARE_SIZE, 'qos-policy-group-name': QOS_POLICY_GROUP_NAME, }) VOLUME_GET_ITER_NO_QOS_RESPONSE = etree.XML(""" %(aggr)s /%(volume)s %(volume)s %(vserver)s rw %(size)s 1 """ % { 'aggr': SHARE_AGGREGATE_NAME, 'vserver': VSERVER_NAME, 'volume': SHARE_NAME, 'size': SHARE_SIZE, }) CLONE_CHILD_1 = 'fake_child_1' CLONE_CHILD_2 = 'fake_child_2' VOLUME_GET_ITER_CLONE_CHILDREN_RESPONSE = etree.XML(""" %(clone1)s %(vserver)s %(clone2)s %(vserver)s 2 """ % { 'vserver': VSERVER_NAME, 'clone1': CLONE_CHILD_1, 'clone2': CLONE_CHILD_2, }) VOLUME_GET_ITER_PARENT_SNAP_EMPTY_RESPONSE = etree.XML(""" %(name)s %(vserver)s 1 """ % { 'vserver': VSERVER_NAME, 'name': SHARE_NAME, }) VOLUME_GET_ITER_PARENT_SNAP_RESPONSE = etree.XML(""" %(snapshot_name)s %(name)s %(vserver)s 1 """ % { 'snapshot_name': SNAPSHOT_NAME, 'vserver': VSERVER_NAME, 'name': SHARE_NAME, }) SIS_GET_ITER_RESPONSE = etree.XML(""" true /vol/%(volume)s enabled %(vserver)s """ % { 'vserver': VSERVER_NAME, 'volume': SHARE_NAME, }) CLUSTER_PEER_GET_ITER_RESPONSE = etree.XML(""" %(addr1)s %(addr2)s available %(cluster)s fake_uuid %(addr1)s %(remote_cluster)s fake_serial_number 60 1 """ % { 'addr1': CLUSTER_ADDRESS_1, 'addr2': CLUSTER_ADDRESS_2, 'cluster': CLUSTER_NAME, 'remote_cluster': REMOTE_CLUSTER_NAME, }) CLUSTER_PEER_POLICY_GET_RESPONSE = etree.XML(""" false 8 """) CLUSTER_GET_CLUSTER_NAME = etree.XML(""" - %(cluster_name)s 1-80-000000 fake_uuid fake_rdb """ % { 'cluster_name': CLUSTER_NAME, }) VSERVER_PEER_GET_ITER_RESPONSE = etree.XML(""" snapmirror %(cluster)s peered %(vserver2)s %(vserver1)s 2 """ % { 'cluster': CLUSTER_NAME, 'vserver1': VSERVER_NAME, 'vserver2': VSERVER_NAME_2 }) SNAPMIRROR_GET_ITER_RESPONSE = etree.XML(""" fake_destination_volume fake_destination_node fake_destination_vserver fake_snapshot 1442701782 false true 2187 109 1442701890 test:manila 1171456 initialize 0 snapmirrored fake_snapshot 1442701782 DPDefault v2 ea8bfcc6-5f1d-11e5-8446-123478563412 idle data_protection daily fake_source_volume fake_source_vserver fake_destination_vserver 1 """) SNAPMIRROR_GET_ITER_FILTERED_RESPONSE = etree.XML(""" fake_destination_vserver fake_destination_volume true snapmirrored daily fake_source_vserver fake_source_volume 1 """) SNAPMIRROR_INITIALIZE_RESULT = etree.XML(""" succeeded """) VOLUME_MOVE_GET_ITER_RESULT = etree.XML(""" retry_on_failure
Cutover Completed::Volume move job finishing move
1481919246 82 finishing healthy %(volume)s %(vserver)s
1
""" % { 'volume': SHARE_NAME, 'vserver': VSERVER_NAME, }) PERF_OBJECT_COUNTER_TOTAL_CP_MSECS_LABELS = [ 'SETUP', 'PRE_P0', 'P0_SNAP_DEL', 'P1_CLEAN', 'P1_QUOTA', 'IPU_DISK_ADD', 'P2V_INOFILE', 'P2V_INO_PUB', 'P2V_INO_PRI', 'P2V_FSINFO', 'P2V_DLOG1', 'P2V_DLOG2', 'P2V_REFCOUNT', 'P2V_TOPAA', 'P2V_DF_SCORES_SUB', 'P2V_BM', 'P2V_SNAP', 'P2V_DF_SCORES', 'P2V_VOLINFO', 'P2V_CONT', 'P2A_INOFILE', 'P2A_INO', 'P2A_DLOG1', 'P2A_HYA', 'P2A_DLOG2', 'P2A_FSINFO', 'P2A_IPU_BITMAP_GROW', 'P2A_REFCOUNT', 'P2A_TOPAA', 'P2A_HYABC', 'P2A_BM', 'P2A_SNAP', 'P2A_VOLINFO', 'P2_FLUSH', 'P2_FINISH', 'P3_WAIT', 'P3V_VOLINFO', 'P3A_VOLINFO', 'P3_FINISH', 'P4_FINISH', 'P5_FINISH', ] PERF_OBJECT_COUNTER_LIST_INFO_WAFL_RESPONSE = etree.XML(""" No. of times 8.3 names are accessed per second. access_8_3_names diag rate per_sec Array of counts of different types of CPs wafl_timer generated CP snapshot generated CP wafl_avail_bufs generated CP dirty_blk_cnt generated CP full NV-log generated CP,back-to-back CP flush generated CP,sync generated CP deferred back-to-back CP low mbufs generated CP low datavecs generated CP nvlog replay takeover time limit CP cp_count diag delta array none total_cp_msecs Array of percentage time spent in different phases of CP %(labels)s cp_phase_times diag percent array percent """ % {'labels': ','.join(PERF_OBJECT_COUNTER_TOTAL_CP_MSECS_LABELS)}) PERF_OBJECT_GET_INSTANCES_SYSTEM_RESPONSE_CMODE = etree.XML(""" avg_processor_busy 5674745133134 system %(node1)s:kernel:system avg_processor_busy 4077649009234 system %(node2)s:kernel:system 1453412013 """ % {'node1': NODE_NAMES[0], 'node2': NODE_NAMES[1]}) PERF_OBJECT_GET_INSTANCES_SYSTEM_RESPONSE_7MODE = etree.XML(""" 1454146292 system avg_processor_busy 13215732322 """) PERF_OBJECT_INSTANCE_LIST_INFO_ITER_RESPONSE = etree.XML(""" system %(node)s:kernel:system 1 """ % {'node': NODE_NAME}) PERF_OBJECT_INSTANCE_LIST_INFO_RESPONSE = etree.XML(""" processor0 processor1 """) NET_ROUTES_CREATE_RESPONSE = etree.XML(""" ipv4 %(subnet)s %(gateway)s 20 %(vserver)s """ % { 'gateway': GATEWAY, 'vserver': VSERVER_NAME, 'subnet': SUBNET, }) QOS_POLICY_GROUP_GET_ITER_RESPONSE = etree.XML(""" %(max_throughput)s 1 %(qos_policy_group_name)s %(vserver)s 1 """ % { 'qos_policy_group_name': QOS_POLICY_GROUP_NAME, 'vserver': VSERVER_NAME, 'max_throughput': QOS_MAX_THROUGHPUT, }) FAKE_VOL_XML = """ open123 online 0 0 0 false false """ FAKE_XML1 = """\ abc\ abc\ """ FAKE_XML2 = """somecontent""" FAKE_NA_ELEMENT = api.NaElement(etree.XML(FAKE_VOL_XML)) FAKE_INVOKE_DATA = 'somecontent' FAKE_XML_STR = 'abc' FAKE_API_NAME = 'volume-get-iter' FAKE_API_NAME_ELEMENT = api.NaElement(FAKE_API_NAME) FAKE_NA_SERVER_STR = '127.0.0.1' FAKE_NA_SERVER = api.NaServer(FAKE_NA_SERVER_STR) FAKE_NA_SERVER_API_1_5 = api.NaServer(FAKE_NA_SERVER_STR) FAKE_NA_SERVER_API_1_5.set_vfiler('filer') FAKE_NA_SERVER_API_1_5.set_api_version(1, 5) FAKE_NA_SERVER_API_1_14 = api.NaServer(FAKE_NA_SERVER_STR) FAKE_NA_SERVER_API_1_14.set_vserver('server') FAKE_NA_SERVER_API_1_14.set_api_version(1, 14) FAKE_NA_SERVER_API_1_20 = api.NaServer(FAKE_NA_SERVER_STR) FAKE_NA_SERVER_API_1_20.set_vfiler('filer') FAKE_NA_SERVER_API_1_20.set_vserver('server') FAKE_NA_SERVER_API_1_20.set_api_version(1, 20) FAKE_QUERY = {'volume-attributes': None} FAKE_DES_ATTR = {'volume-attributes': ['volume-id-attributes', 'volume-space-attributes', 'volume-state-attributes', 'volume-qos-attributes']} FAKE_CALL_ARGS_LIST = [mock.call(80), mock.call(8088), mock.call(443), mock.call(8488)] FAKE_RESULT_API_ERR_REASON = 
api.NaElement('result') FAKE_RESULT_API_ERR_REASON.add_attr('errno', '000') FAKE_RESULT_API_ERR_REASON.add_attr('reason', 'fake_reason') FAKE_RESULT_API_ERRNO_INVALID = api.NaElement('result') FAKE_RESULT_API_ERRNO_INVALID.add_attr('errno', '000') FAKE_RESULT_API_ERRNO_VALID = api.NaElement('result') FAKE_RESULT_API_ERRNO_VALID.add_attr('errno', '14956') FAKE_RESULT_SUCCESS = api.NaElement('result') FAKE_RESULT_SUCCESS.add_attr('status', 'passed') FAKE_HTTP_OPENER = urllib.request.build_opener() FAKE_MANAGE_VOLUME = { 'aggregate': SHARE_AGGREGATE_NAME, 'name': SHARE_NAME, 'owning-vserver-name': VSERVER_NAME, 'junction_path': VOLUME_JUNCTION_PATH, 'style': 'fake_style', 'size': SHARE_SIZE, } FAKE_KEY_MANAGER_ERROR = "The onboard key manager is not enabled. To enable \ it, run \"security key-manager setup\"." manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/protocols/0000775000175000017500000000000013656750362025541 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/protocols/__init__.py0000664000175000017500000000000013656750227027640 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/protocols/test_nfs_cmode.py0000664000175000017500000002006713656750227031114 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Mock unit tests for the NetApp driver protocols NFS class module. """ import copy from unittest import mock import uuid import ddt from manila import exception from manila.share.drivers.netapp.dataontap.protocols import nfs_cmode from manila import test from manila.tests.share.drivers.netapp.dataontap.protocols \ import fakes as fake @ddt.ddt class NetAppClusteredNFSHelperTestCase(test.TestCase): def setUp(self): super(NetAppClusteredNFSHelperTestCase, self).setUp() self.mock_context = mock.Mock() self.mock_client = mock.Mock() self.helper = nfs_cmode.NetAppCmodeNFSHelper() self.helper.set_client(self.mock_client) @ddt.data(('1.2.3.4', '1.2.3.4'), ('fc00::1', '[fc00::1]')) @ddt.unpack def test__escaped_address(self, raw, escaped): self.assertEqual(escaped, self.helper._escaped_address(raw)) def test_create_share(self): mock_ensure_export_policy = self.mock_object(self.helper, '_ensure_export_policy') self.mock_client.get_volume_junction_path.return_value = ( fake.NFS_SHARE_PATH) result = self.helper.create_share(fake.NFS_SHARE, fake.SHARE_NAME) export_addresses = [fake.SHARE_ADDRESS_1, fake.SHARE_ADDRESS_2] export_paths = [result(address) for address in export_addresses] expected_paths = [ fake.SHARE_ADDRESS_1 + ":" + fake.NFS_SHARE_PATH, fake.SHARE_ADDRESS_2 + ":" + fake.NFS_SHARE_PATH, ] self.assertEqual(expected_paths, export_paths) (self.mock_client.clear_nfs_export_policy_for_volume. assert_called_once_with(fake.SHARE_NAME)) self.assertTrue(mock_ensure_export_policy.called) def test_delete_share(self): self.helper.delete_share(fake.NFS_SHARE, fake.SHARE_NAME) (self.mock_client.clear_nfs_export_policy_for_volume. 
assert_called_once_with(fake.SHARE_NAME)) self.mock_client.soft_delete_nfs_export_policy.assert_called_once_with( fake.EXPORT_POLICY_NAME) def test_update_access(self): self.mock_object(self.helper, '_ensure_export_policy') self.mock_object(self.helper, '_get_export_policy_name', mock.Mock(return_value='fake_export_policy')) self.mock_object(self.helper, '_get_temp_export_policy_name', mock.Mock(side_effect=['fake_new_export_policy', 'fake_old_export_policy'])) self.helper.update_access(fake.CIFS_SHARE, fake.SHARE_NAME, [fake.IP_ACCESS]) self.mock_client.create_nfs_export_policy.assert_called_once_with( 'fake_new_export_policy') self.mock_client.add_nfs_export_rule.assert_called_once_with( 'fake_new_export_policy', fake.CLIENT_ADDRESS_1, False) (self.mock_client.set_nfs_export_policy_for_volume. assert_called_once_with(fake.SHARE_NAME, 'fake_new_export_policy')) (self.mock_client.soft_delete_nfs_export_policy. assert_called_once_with('fake_old_export_policy')) self.mock_client.rename_nfs_export_policy.assert_has_calls([ mock.call('fake_export_policy', 'fake_old_export_policy'), mock.call('fake_new_export_policy', 'fake_export_policy'), ]) def test_validate_access_rule(self): result = self.helper._validate_access_rule(fake.IP_ACCESS) self.assertIsNone(result) def test_validate_access_rule_invalid_type(self): rule = copy.copy(fake.IP_ACCESS) rule['access_type'] = 'user' self.assertRaises(exception.InvalidShareAccess, self.helper._validate_access_rule, rule) def test_validate_access_rule_invalid_level(self): rule = copy.copy(fake.IP_ACCESS) rule['access_level'] = 'none' self.assertRaises(exception.InvalidShareAccessLevel, self.helper._validate_access_rule, rule) def test_get_target(self): target = self.helper.get_target(fake.NFS_SHARE) self.assertEqual(fake.SHARE_ADDRESS_1, target) def test_get_share_name_for_share(self): self.mock_client.get_volume_at_junction_path.return_value = ( fake.VOLUME) share_name = self.helper.get_share_name_for_share(fake.NFS_SHARE) self.assertEqual(fake.SHARE_NAME, share_name) self.mock_client.get_volume_at_junction_path.assert_called_once_with( fake.NFS_SHARE_PATH) def test_get_share_name_for_share_not_found(self): self.mock_client.get_volume_at_junction_path.return_value = None share_name = self.helper.get_share_name_for_share(fake.NFS_SHARE) self.assertIsNone(share_name) self.mock_client.get_volume_at_junction_path.assert_called_once_with( fake.NFS_SHARE_PATH) def test_get_target_missing_location(self): target = self.helper.get_target({'export_location': ''}) self.assertEqual('', target) def test_get_export_location(self): host_ip, export_path = self.helper._get_export_location( fake.NFS_SHARE) self.assertEqual(fake.SHARE_ADDRESS_1, host_ip) self.assertEqual('/' + fake.SHARE_NAME, export_path) @ddt.data('', 'invalid') def test_get_export_location_missing_location_invalid(self, export): fake_share = fake.NFS_SHARE.copy() fake_share['export_location'] = export host_ip, export_path = self.helper._get_export_location(fake_share) self.assertEqual('', host_ip) self.assertEqual('', export_path) def test_get_temp_export_policy_name(self): self.mock_object(uuid, 'uuid1', mock.Mock(return_value='fake-uuid')) result = self.helper._get_temp_export_policy_name() self.assertEqual('temp_fake_uuid', result) def test_get_export_policy_name(self): result = self.helper._get_export_policy_name(fake.NFS_SHARE) self.assertEqual(fake.EXPORT_POLICY_NAME, result) def test_ensure_export_policy_equal(self): self.mock_client.get_nfs_export_policy_for_volume.return_value = ( 
fake.EXPORT_POLICY_NAME) self.helper._ensure_export_policy(fake.NFS_SHARE, fake.SHARE_NAME) self.assertFalse(self.mock_client.create_nfs_export_policy.called) self.assertFalse(self.mock_client.rename_nfs_export_policy.called) def test_ensure_export_policy_default(self): self.mock_client.get_nfs_export_policy_for_volume.return_value = ( 'default') self.helper._ensure_export_policy(fake.NFS_SHARE, fake.SHARE_NAME) self.mock_client.create_nfs_export_policy.assert_called_once_with( fake.EXPORT_POLICY_NAME) (self.mock_client.set_nfs_export_policy_for_volume. assert_called_once_with(fake.SHARE_NAME, fake.EXPORT_POLICY_NAME)) self.assertFalse(self.mock_client.rename_nfs_export_policy.called) def test_ensure_export_policy_rename(self): self.mock_client.get_nfs_export_policy_for_volume.return_value = 'fake' self.helper._ensure_export_policy(fake.NFS_SHARE, fake.SHARE_NAME) self.assertFalse(self.mock_client.create_nfs_export_policy.called) self.mock_client.rename_nfs_export_policy.assert_called_once_with( 'fake', fake.EXPORT_POLICY_NAME) manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/protocols/test_cifs_cmode.py0000664000175000017500000002001713656750227031245 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Mock unit tests for the NetApp driver protocols CIFS class module. 
""" import copy from unittest import mock import ddt from manila.common import constants from manila import exception from manila.share.drivers.netapp.dataontap.protocols import cifs_cmode from manila import test from manila.tests.share.drivers.netapp.dataontap.protocols \ import fakes as fake @ddt.ddt class NetAppClusteredCIFSHelperTestCase(test.TestCase): def setUp(self): super(NetAppClusteredCIFSHelperTestCase, self).setUp() self.mock_context = mock.Mock() self.mock_client = mock.Mock() self.helper = cifs_cmode.NetAppCmodeCIFSHelper() self.helper.set_client(self.mock_client) def test_create_share(self): result = self.helper.create_share(fake.CIFS_SHARE, fake.SHARE_NAME) export_addresses = [fake.SHARE_ADDRESS_1, fake.SHARE_ADDRESS_2] export_paths = [result(address) for address in export_addresses] expected_paths = [ r'\\%s\%s' % (fake.SHARE_ADDRESS_1, fake.SHARE_NAME), r'\\%s\%s' % (fake.SHARE_ADDRESS_2, fake.SHARE_NAME), ] self.assertEqual(expected_paths, export_paths) self.mock_client.create_cifs_share.assert_called_once_with( fake.SHARE_NAME) self.mock_client.remove_cifs_share_access.assert_called_once_with( fake.SHARE_NAME, 'Everyone') self.mock_client.set_volume_security_style.assert_called_once_with( fake.SHARE_NAME, security_style='ntfs') def test_delete_share(self): self.helper.delete_share(fake.CIFS_SHARE, fake.SHARE_NAME) self.mock_client.remove_cifs_share.assert_called_once_with( fake.SHARE_NAME) def test_update_access(self): mock_validate_access_rule = self.mock_object(self.helper, '_validate_access_rule') mock_get_access_rules = self.mock_object( self.helper, '_get_access_rules', mock.Mock(return_value=fake.EXISTING_CIFS_RULES)) mock_handle_added_rules = self.mock_object(self.helper, '_handle_added_rules') mock_handle_ro_to_rw_rules = self.mock_object(self.helper, '_handle_ro_to_rw_rules') mock_handle_rw_to_ro_rules = self.mock_object(self.helper, '_handle_rw_to_ro_rules') mock_handle_deleted_rules = self.mock_object(self.helper, '_handle_deleted_rules') self.helper.update_access(fake.CIFS_SHARE, fake.SHARE_NAME, [fake.USER_ACCESS]) new_rules = {'fake_user': constants.ACCESS_LEVEL_RW} mock_validate_access_rule.assert_called_once_with(fake.USER_ACCESS) mock_get_access_rules.assert_called_once_with(fake.CIFS_SHARE, fake.SHARE_NAME) mock_handle_added_rules.assert_called_once_with( fake.SHARE_NAME, fake.EXISTING_CIFS_RULES, new_rules) mock_handle_ro_to_rw_rules.assert_called_once_with( fake.SHARE_NAME, fake.EXISTING_CIFS_RULES, new_rules) mock_handle_rw_to_ro_rules.assert_called_once_with( fake.SHARE_NAME, fake.EXISTING_CIFS_RULES, new_rules) mock_handle_deleted_rules.assert_called_once_with( fake.SHARE_NAME, fake.EXISTING_CIFS_RULES, new_rules) def test_validate_access_rule(self): result = self.helper._validate_access_rule(fake.USER_ACCESS) self.assertIsNone(result) def test_validate_access_rule_invalid_type(self): rule = copy.copy(fake.USER_ACCESS) rule['access_type'] = 'ip' self.assertRaises(exception.InvalidShareAccess, self.helper._validate_access_rule, rule) def test_validate_access_rule_invalid_level(self): rule = copy.copy(fake.USER_ACCESS) rule['access_level'] = 'none' self.assertRaises(exception.InvalidShareAccessLevel, self.helper._validate_access_rule, rule) def test_handle_added_rules(self): self.helper._handle_added_rules(fake.SHARE_NAME, fake.EXISTING_CIFS_RULES, fake.NEW_CIFS_RULES) self.mock_client.add_cifs_share_access.assert_has_calls([ mock.call(fake.SHARE_NAME, 'user5', False), mock.call(fake.SHARE_NAME, 'user6', True), ], any_order=True) def 
test_handle_ro_to_rw_rules(self): self.helper._handle_ro_to_rw_rules(fake.SHARE_NAME, fake.EXISTING_CIFS_RULES, fake.NEW_CIFS_RULES) self.mock_client.modify_cifs_share_access.assert_has_calls([ mock.call(fake.SHARE_NAME, 'user2', False) ]) def test_handle_rw_to_ro_rules(self): self.helper._handle_rw_to_ro_rules(fake.SHARE_NAME, fake.EXISTING_CIFS_RULES, fake.NEW_CIFS_RULES) self.mock_client.modify_cifs_share_access.assert_has_calls([ mock.call(fake.SHARE_NAME, 'user3', True) ]) def test_handle_deleted_rules(self): self.helper._handle_deleted_rules(fake.SHARE_NAME, fake.EXISTING_CIFS_RULES, fake.NEW_CIFS_RULES) self.mock_client.remove_cifs_share_access.assert_has_calls([ mock.call(fake.SHARE_NAME, 'user4') ]) def test_get_access_rules(self): self.mock_client.get_cifs_share_access = ( mock.Mock(return_value='fake_rules')) result = self.helper._get_access_rules(fake.CIFS_SHARE, fake.SHARE_NAME) self.assertEqual('fake_rules', result) self.mock_client.get_cifs_share_access.assert_called_once_with( fake.SHARE_NAME) def test_get_target(self): target = self.helper.get_target(fake.CIFS_SHARE) self.assertEqual(fake.SHARE_ADDRESS_1, target) def test_get_target_missing_location(self): target = self.helper.get_target({'export_location': ''}) self.assertEqual('', target) def test_get_share_name_for_share(self): share_name = self.helper.get_share_name_for_share(fake.CIFS_SHARE) self.assertEqual(fake.SHARE_NAME, share_name) @ddt.data( { 'location': r'\\%s\%s' % (fake.SHARE_ADDRESS_1, fake.SHARE_NAME), 'ip': fake.SHARE_ADDRESS_1, 'share_name': fake.SHARE_NAME, }, { 'location': r'//%s/%s' % (fake.SHARE_ADDRESS_1, fake.SHARE_NAME), 'ip': fake.SHARE_ADDRESS_1, 'share_name': fake.SHARE_NAME, }, {'location': '', 'ip': '', 'share_name': ''}, {'location': 'invalid', 'ip': '', 'share_name': ''}, ) @ddt.unpack def test_get_export_location(self, location, ip, share_name): share = fake.CIFS_SHARE.copy() share['export_location'] = location result_ip, result_share_name = self.helper._get_export_location(share) self.assertEqual(ip, result_ip) self.assertEqual(share_name, result_share_name) manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/protocols/fakes.py0000664000175000017500000000413413656750227027206 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
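# Fixture data for the cDOT protocol helper tests: a single share exported at
# two addresses (SHARE_ADDRESS_1/2), the IP- and user-type access rules fed to
# the NFS and CIFS helpers, and the before/after rule maps used to exercise
# the CIFS add/modify/delete access paths.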
from manila.common import constants SHARE_NAME = 'fake_share' SHARE_ID = '9dba208c-9aa7-11e4-89d3-123b93f75cba' EXPORT_POLICY_NAME = 'policy_9dba208c_9aa7_11e4_89d3_123b93f75cba' SHARE_ADDRESS_1 = '10.10.10.10' SHARE_ADDRESS_2 = '10.10.10.20' CLIENT_ADDRESS_1 = '20.20.20.10' CLIENT_ADDRESS_2 = '20.20.20.20' CIFS_SHARE = { 'export_location': r'\\%s\%s' % (SHARE_ADDRESS_1, SHARE_NAME), 'id': SHARE_ID } NFS_SHARE_PATH = '/%s' % SHARE_NAME NFS_SHARE = { 'export_location': '%s:%s' % (SHARE_ADDRESS_1, NFS_SHARE_PATH), 'id': SHARE_ID } IP_ACCESS = { 'access_type': 'ip', 'access_to': CLIENT_ADDRESS_1, 'access_level': constants.ACCESS_LEVEL_RW, } USER_ACCESS = { 'access_type': 'user', 'access_to': 'fake_user', 'access_level': constants.ACCESS_LEVEL_RW, } VOLUME = { 'name': SHARE_NAME, } NEW_NFS_RULES = { '10.10.10.0/30': constants.ACCESS_LEVEL_RW, '10.10.10.0/24': constants.ACCESS_LEVEL_RO, '10.10.10.10': constants.ACCESS_LEVEL_RW, '10.10.20.0/24': constants.ACCESS_LEVEL_RW, '10.10.20.10': constants.ACCESS_LEVEL_RW, } EXISTING_CIFS_RULES = { 'user1': constants.ACCESS_LEVEL_RW, 'user2': constants.ACCESS_LEVEL_RO, 'user3': constants.ACCESS_LEVEL_RW, 'user4': constants.ACCESS_LEVEL_RO, } NEW_CIFS_RULES = { 'user1': constants.ACCESS_LEVEL_RW, 'user2': constants.ACCESS_LEVEL_RW, 'user3': constants.ACCESS_LEVEL_RO, 'user5': constants.ACCESS_LEVEL_RW, 'user6': constants.ACCESS_LEVEL_RO, } manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/protocols/test_base.py0000664000175000017500000000306613656750227030071 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Mock unit tests for the NetApp driver protocols base class module. """ import ddt from manila.common import constants from manila.share.drivers.netapp.dataontap.protocols import nfs_cmode from manila import test @ddt.ddt class NetAppNASHelperBaseTestCase(test.TestCase): def test_set_client(self): # The base class is abstract, so we'll use a subclass to test # base class functionality. helper = nfs_cmode.NetAppCmodeNFSHelper() self.assertIsNone(helper._client) helper.set_client('fake_client') self.assertEqual('fake_client', helper._client) @ddt.data( {'level': constants.ACCESS_LEVEL_RW, 'readonly': False}, {'level': constants.ACCESS_LEVEL_RO, 'readonly': True}) @ddt.unpack def test_is_readonly(self, level, readonly): helper = nfs_cmode.NetAppCmodeNFSHelper() result = helper._is_readonly(level) self.assertEqual(readonly, result) manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/fakes.py0000664000175000017500000012704213656750227025166 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight All rights reserved. # Copyright (c) 2015 Tom Barron All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from manila.common import constants import manila.tests.share.drivers.netapp.fakes as na_fakes CLUSTER_NAME = 'fake_cluster' CLUSTER_NAME_2 = 'fake_cluster_2' BACKEND_NAME = 'fake_backend_name' BACKEND_NAME_2 = 'fake_backend_name_2' DRIVER_NAME = 'fake_driver_name' APP_VERSION = 'fake_app_vsersion' HOST_NAME = 'fake_host' POOL_NAME = 'fake_pool' POOL_NAME_2 = 'fake_pool_2' VSERVER1 = 'fake_vserver_1' VSERVER2 = 'fake_vserver_2' LICENSES = ('base', 'cifs', 'fcp', 'flexclone', 'iscsi', 'nfs', 'snapmirror', 'snaprestore', 'snapvault') VOLUME_NAME_TEMPLATE = 'share_%(share_id)s' VSERVER_NAME_TEMPLATE = 'os_%s' AGGREGATE_NAME_SEARCH_PATTERN = '(.*)' SHARE_NAME = 'share_7cf7c200_d3af_4e05_b87e_9167c95dfcad' SHARE_INSTANCE_NAME = 'share_d24e7257_124e_4fb6_b05b_d384f660bc85' FLEXVOL_NAME = 'fake_volume' JUNCTION_PATH = '/%s' % FLEXVOL_NAME EXPORT_LOCATION = '%s:%s' % (HOST_NAME, JUNCTION_PATH) SNAPSHOT_NAME = 'fake_snapshot' SNAPSHOT_ACCESS_TIME = '1466455782' CONSISTENCY_GROUP_NAME = 'fake_consistency_group' SHARE_SIZE = 10 TENANT_ID = '24cb2448-13d8-4f41-afd9-eff5c4fd2a57' SHARE_ID = '7cf7c200-d3af-4e05-b87e-9167c95dfcad' SHARE_ID2 = 'b51c5a31-aa5b-4254-9ee8-7d39fa4c8c38' SHARE_ID3 = '1379991d-037b-4897-bf3a-81b4aac72eff' SHARE_ID4 = '1cb41aad-fd9b-4964-8059-646f69de925e' SHARE_INSTANCE_ID = 'd24e7257-124e-4fb6-b05b-d384f660bc85' PARENT_SHARE_ID = '585c3935-2aa9-437c-8bad-5abae1076555' SNAPSHOT_ID = 'de4c9050-e2f9-4ce1-ade4-5ed0c9f26451' CONSISTENCY_GROUP_ID = '65bfa2c9-dc6c-4513-951a-b8d15b453ad8' CONSISTENCY_GROUP_ID2 = '35f5c1ea-45fb-40c4-98ae-2a2a17554159' CG_SNAPSHOT_ID = '6ddd8a6b-5df7-417b-a2ae-3f6e449f4eea' CG_SNAPSHOT_MEMBER_ID1 = '629f79ef-b27e-4596-9737-30f084e5ba29' CG_SNAPSHOT_MEMBER_ID2 = 'e876aa9c-a322-4391-bd88-9266178262be' FREE_CAPACITY = 10000000000 TOTAL_CAPACITY = 20000000000 AGGREGATE = 'manila_aggr_1' AGGREGATES = ('manila_aggr_1', 'manila_aggr_2') ROOT_AGGREGATES = ('root_aggr_1', 'root_aggr_2') ROOT_VOLUME_AGGREGATE = 'manila1' ROOT_VOLUME = 'root' CLUSTER_NODE = 'cluster1_01' CLUSTER_NODES = ('cluster1_01', 'cluster1_02') NODE_DATA_PORT = 'e0c' NODE_DATA_PORTS = ('e0c', 'e0d') LIF_NAME_TEMPLATE = 'os_%(net_allocation_id)s' SHARE_TYPE_ID = '26e89a5b-960b-46bb-a8cf-0778e653098f' SHARE_TYPE_NAME = 'fake_share_type' IPSPACE = 'fake_ipspace' IPSPACE_ID = '27d38c27-3e8b-4d7d-9d91-fcf295e3ac8f' MTU = 1234 DEFAULT_MTU = 1500 MANILA_HOST_NAME = '%(host)s@%(backend)s#%(pool)s' % { 'host': HOST_NAME, 'backend': BACKEND_NAME, 'pool': POOL_NAME} MANILA_HOST_NAME_2 = '%(host)s@%(backend)s#%(pool)s' % { 'host': HOST_NAME, 'backend': BACKEND_NAME, 'pool': POOL_NAME_2} MANILA_HOST_NAME_3 = '%(host)s@%(backend)s#%(pool)s' % { 'host': HOST_NAME, 'backend': BACKEND_NAME_2, 'pool': POOL_NAME_2} QOS_EXTRA_SPEC = 'netapp:maxiops' QOS_SIZE_DEPENDENT_EXTRA_SPEC = 'netapp:maxbpspergib' QOS_NORMALIZED_SPEC = 'maxiops' QOS_POLICY_GROUP_NAME = 'fake_qos_policy_group_name' CLIENT_KWARGS = { 'username': 'admin', 'trace': False, 'hostname': '127.0.0.1', 'vserver': None, 'transport_type': 'https', 'password': 'pass', 'port': '443', 'api_trace_pattern': '(.*)', } SHARE = { 'id': SHARE_ID, 
'host': MANILA_HOST_NAME, 'project_id': TENANT_ID, 'name': SHARE_NAME, 'size': SHARE_SIZE, 'share_proto': 'fake', 'share_type_id': 'fake_share_type_id', 'share_network_id': '5dfe0898-e2a1-4740-9177-81c7d26713b0', 'share_server_id': '7e6a2cc8-871f-4b1d-8364-5aad0f98da86', 'network_info': { 'network_allocations': [{'ip_address': 'ip'}] }, 'replica_state': constants.REPLICA_STATE_ACTIVE, 'status': constants.STATUS_AVAILABLE, 'share_server': None, 'encrypt': False, } SHARE_INSTANCE = { 'id': SHARE_INSTANCE_ID, 'share_id': SHARE_ID, 'host': MANILA_HOST_NAME, 'project_id': TENANT_ID, 'name': SHARE_INSTANCE_NAME, 'size': SHARE_SIZE, 'share_proto': 'fake', 'share_type_id': SHARE_TYPE_ID, 'share_network_id': '5dfe0898-e2a1-4740-9177-81c7d26713b0', 'share_server_id': '7e6a2cc8-871f-4b1d-8364-5aad0f98da86', 'replica_state': constants.REPLICA_STATE_ACTIVE, 'status': constants.STATUS_AVAILABLE, } FLEXVOL_TO_MANAGE = { 'aggregate': POOL_NAME, 'junction-path': '/%s' % FLEXVOL_NAME, 'name': FLEXVOL_NAME, 'type': 'rw', 'style': 'flex', 'size': '1610612736', # rounds up to 2 GB } FLEXVOL_WITHOUT_QOS = copy.deepcopy(FLEXVOL_TO_MANAGE) FLEXVOL_WITHOUT_QOS.update({'qos-policy-group-name': None}) FLEXVOL_WITH_QOS = copy.deepcopy(FLEXVOL_TO_MANAGE) FLEXVOL_WITH_QOS.update({'qos-policy-group-name': QOS_POLICY_GROUP_NAME}) QOS_POLICY_GROUP = { 'policy-group': QOS_POLICY_GROUP_NAME, 'vserver': VSERVER1, 'max-throughput': '3000iops', 'num-workloads': 1, } FLEXVOL = { 'aggregate': POOL_NAME, 'junction-path': '/%s' % FLEXVOL_NAME, 'name': FLEXVOL_NAME, 'type': 'rw', 'style': 'flex', 'size': '1610612736', # rounds down to 1 GB, 'owning-vserver-name': VSERVER1, } EXTRA_SPEC = { 'netapp:thin_provisioned': 'true', 'netapp:snapshot_policy': 'default', 'netapp:language': 'en-US', 'netapp:dedup': 'True', 'netapp:compression': 'false', 'netapp:max_files': 5000, 'netapp:split_clone_on_create': 'true', 'netapp_disk_type': 'FCAL', 'netapp_raid_type': 'raid4', 'netapp_flexvol_encryption': 'true', } EXTRA_SPEC_WITH_QOS = copy.deepcopy(EXTRA_SPEC) EXTRA_SPEC_WITH_QOS.update({ 'qos': True, QOS_EXTRA_SPEC: '3000', }) EXTRA_SPEC_WITH_SIZE_DEPENDENT_QOS = copy.deepcopy(EXTRA_SPEC) EXTRA_SPEC_WITH_SIZE_DEPENDENT_QOS.update({ 'qos': True, QOS_SIZE_DEPENDENT_EXTRA_SPEC: '1000', }) PROVISIONING_OPTIONS = { 'thin_provisioned': True, 'snapshot_policy': 'default', 'language': 'en-US', 'dedup_enabled': True, 'compression_enabled': False, 'max_files': 5000, 'split': True, 'encrypt': False, 'hide_snapdir': False, } PROVISIONING_OPTIONS_WITH_QOS = copy.deepcopy(PROVISIONING_OPTIONS) PROVISIONING_OPTIONS_WITH_QOS.update( {'qos_policy_group': QOS_POLICY_GROUP_NAME}) PROVISIONING_OPTIONS_BOOLEAN = { 'thin_provisioned': True, 'dedup_enabled': False, 'compression_enabled': False, 'split': False, 'hide_snapdir': False, } PROVISIONING_OPTIONS_BOOLEAN_THIN_PROVISIONED_TRUE = { 'thin_provisioned': True, 'snapshot_policy': None, 'language': None, 'dedup_enabled': False, 'compression_enabled': False, 'max_files': None, 'split': False, 'encrypt': False, } PROVISIONING_OPTIONS_STRING = { 'snapshot_policy': 'default', 'language': 'en-US', 'max_files': 5000, } PROVISIONING_OPTIONS_STRING_MISSING_SPECS = { 'snapshot_policy': 'default', 'language': 'en-US', 'max_files': None, } PROVISIONING_OPTIONS_STRING_DEFAULT = { 'snapshot_policy': None, 'language': None, 'max_files': None, } SHORT_BOOLEAN_EXTRA_SPEC = { 'netapp:thin_provisioned': 'true', } STRING_EXTRA_SPEC = { 'netapp:snapshot_policy': 'default', 'netapp:language': 'en-US', 'netapp:max_files': 5000, } 
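# Illustrative sketch only: the fixture pairs above (EXTRA_SPEC vs.
# PROVISIONING_OPTIONS, STRING_EXTRA_SPEC vs. PROVISIONING_OPTIONS_STRING)
# mirror how 'netapp:'-prefixed share-type extra specs are normalized into
# flexvol provisioning options. A minimal, hypothetical version of that
# prefix normalization is shown below; the real library also validates
# values, checks dedup/compression combinations and handles QoS specs.
def _example_strip_netapp_prefix(extra_specs):
    return {key[len('netapp:'):]: value
            for key, value in extra_specs.items()
            if key.startswith('netapp:')}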
SHORT_STRING_EXTRA_SPEC = { 'netapp:snapshot_policy': 'default', 'netapp:language': 'en-US', } INVALID_EXTRA_SPEC = { 'netapp:thin_provisioned': 'ture', 'netapp:snapshot_policy': 'wrong_default', 'netapp:language': 'abc', } INVALID_EXTRA_SPEC_COMBO = { 'netapp:dedup': 'false', 'netapp:compression': 'true' } INVALID_MAX_FILE_EXTRA_SPEC = { 'netapp:max_files': -1, } EMPTY_EXTRA_SPEC = {} SHARE_TYPE = { 'id': SHARE_TYPE_ID, 'name': SHARE_TYPE_NAME, 'extra_specs': EXTRA_SPEC } OVERLAPPING_EXTRA_SPEC = { 'compression': ' True', 'netapp:compression': 'true', 'dedupe': ' True', 'netapp:dedup': 'false', 'thin_provisioning': ' False', 'netapp:thin_provisioned': 'true', } REMAPPED_OVERLAPPING_EXTRA_SPEC = { 'netapp:compression': 'true', 'netapp:dedup': 'true', 'netapp:thin_provisioned': 'false', } USER_NETWORK_ALLOCATIONS = [ { 'id': '132dbb10-9a36-46f2-8d89-3d909830c356', 'ip_address': '10.10.10.10', 'cidr': '10.10.10.0/24', 'segmentation_id': '1000', 'network_type': 'vlan', 'label': 'user', 'mtu': MTU, 'gateway': '10.10.10.1', }, { 'id': '7eabdeed-bad2-46ea-bd0f-a33884c869e0', 'ip_address': '10.10.10.20', 'cidr': '10.10.10.0/24', 'segmentation_id': '1000', 'network_type': 'vlan', 'label': 'user', 'mtu': MTU, 'gateway': '10.10.10.1', } ] USER_NETWORK_ALLOCATIONS_IPV6 = [ { 'id': '234dbb10-9a36-46f2-8d89-3d909830c356', 'ip_address': 'fd68:1a09:66ab:8d51:0:10:0:1', 'cidr': 'fd68:1a09:66ab:8d51::/64', 'segmentation_id': '2000', 'network_type': 'vlan', 'label': 'user', 'mtu': MTU, 'gateway': 'fd68:1a09:66ab:8d51:0:0:0:1', }, { 'id': '6677deed-bad2-46ea-bd0f-a33884c869e0', 'ip_address': 'fd68:1a09:66ab:8d51:0:10:0:2', 'cidr': 'fd68:1a09:66ab:8d51::/64', 'segmentation_id': '2000', 'network_type': 'vlan', 'label': 'user', 'mtu': MTU, 'gateway': 'fd68:1a09:66ab:8d51:0:0:0:1', } ] ADMIN_NETWORK_ALLOCATIONS = [ { 'id': '132dbb10-9a36-46f2-8d89-3d909830c356', 'ip_address': '10.10.20.10', 'cidr': '10.10.20.0/24', 'segmentation_id': None, 'network_type': 'flat', 'label': 'admin', 'mtu': MTU, 'gateway': '10.10.20.1' }, ] NETWORK_INFO = { 'server_id': '56aafd02-4d44-43d7-b784-57fc88167224', 'security_services': ['fake_ldap', 'fake_kerberos', 'fake_ad', ], 'network_allocations': USER_NETWORK_ALLOCATIONS, 'admin_network_allocations': ADMIN_NETWORK_ALLOCATIONS, 'neutron_net_id': '4eff22ca-5ad2-454d-a000-aadfd7b40b39', 'neutron_subnet_id': '62bf1c2c-18eb-421b-8983-48a6d39aafe0', 'segmentation_id': '1000', } NETWORK_INFO_NETMASK = '255.255.255.0' SHARE_SERVER = { 'id': 'fake_id', 'share_network_id': 'c5b3a865-56d0-4d88-abe5-879965e099c9', 'backend_details': { 'vserver_name': VSERVER1 }, 'network_allocations': (USER_NETWORK_ALLOCATIONS + ADMIN_NETWORK_ALLOCATIONS), } SHARE_SERVER_2 = { 'id': 'fake_id_2', 'share_network_id': 'c5b3a865-56d0-4d88-abe5-879965e099c9', 'backend_details': { 'vserver_name': VSERVER2 }, 'network_allocations': (USER_NETWORK_ALLOCATIONS + ADMIN_NETWORK_ALLOCATIONS), } VSERVER_PEER = [{ 'vserver': VSERVER1, 'peer-vserver': VSERVER2, 'peer-state': 'peered', 'peer-cluster': 'fake_cluster' }] SNAPSHOT = { 'id': SNAPSHOT_ID, 'project_id': TENANT_ID, 'share_id': PARENT_SHARE_ID, 'status': constants.STATUS_CREATING, 'provider_location': None, } SNAPSHOT_TO_MANAGE = { 'id': SNAPSHOT_ID, 'project_id': TENANT_ID, 'share_id': PARENT_SHARE_ID, 'status': constants.STATUS_CREATING, 'provider_location': SNAPSHOT_NAME, } CDOT_SNAPSHOT = { 'name': SNAPSHOT_NAME, 'volume': SHARE_NAME, 'busy': False, 'owners': set(), 'access-time': SNAPSHOT_ACCESS_TIME, } CDOT_SNAPSHOT_BUSY_VOLUME_CLONE = { 'name': SNAPSHOT_NAME, 
'volume': SHARE_NAME, 'busy': True, 'owners': {'volume clone'}, 'access-time': SNAPSHOT_ACCESS_TIME, } CDOT_SNAPSHOT_BUSY_SNAPMIRROR = { 'name': SNAPSHOT_NAME, 'volume': SHARE_NAME, 'busy': True, 'owners': {'snapmirror'}, 'access-time': SNAPSHOT_ACCESS_TIME, } CDOT_CLONE_CHILD_1 = 'fake_child_1' CDOT_CLONE_CHILD_2 = 'fake_child_2' CDOT_CLONE_CHILDREN = [ {'name': CDOT_CLONE_CHILD_1}, {'name': CDOT_CLONE_CHILD_2}, ] SHARE_FOR_CG1 = { 'id': SHARE_ID, 'host': '%(host)s@%(backend)s#%(pool)s' % { 'host': HOST_NAME, 'backend': BACKEND_NAME, 'pool': POOL_NAME}, 'name': 'share_1', 'share_proto': 'NFS', 'source_share_group_snapshot_member_id': None, } SHARE_FOR_CG2 = { 'id': SHARE_ID2, 'host': '%(host)s@%(backend)s#%(pool)s' % { 'host': HOST_NAME, 'backend': BACKEND_NAME, 'pool': POOL_NAME}, 'name': 'share_2', 'share_proto': 'NFS', 'source_share_group_snapshot_member_id': None, } # Clone dest of SHARE_FOR_CG1 SHARE_FOR_CG3 = { 'id': SHARE_ID3, 'host': '%(host)s@%(backend)s#%(pool)s' % { 'host': HOST_NAME, 'backend': BACKEND_NAME, 'pool': POOL_NAME}, 'name': 'share3', 'share_proto': 'NFS', 'source_share_group_snapshot_member_id': CG_SNAPSHOT_MEMBER_ID1, } # Clone dest of SHARE_FOR_CG2 SHARE_FOR_CG4 = { 'id': SHARE_ID4, 'host': '%(host)s@%(backend)s#%(pool)s' % { 'host': HOST_NAME, 'backend': BACKEND_NAME, 'pool': POOL_NAME}, 'name': 'share4', 'share_proto': 'NFS', 'source_share_group_snapshot_member_id': CG_SNAPSHOT_MEMBER_ID2, } EMPTY_CONSISTENCY_GROUP = { 'cgsnapshots': [], 'description': 'fake description', 'host': '%(host)s@%(backend)s' % { 'host': HOST_NAME, 'backend': BACKEND_NAME}, 'id': CONSISTENCY_GROUP_ID, 'name': CONSISTENCY_GROUP_NAME, 'shares': [], } CONSISTENCY_GROUP = { 'cgsnapshots': [], 'description': 'fake description', 'host': '%(host)s@%(backend)s' % { 'host': HOST_NAME, 'backend': BACKEND_NAME}, 'id': CONSISTENCY_GROUP_ID, 'name': CONSISTENCY_GROUP_NAME, 'shares': [SHARE_FOR_CG1, SHARE_FOR_CG2], } CONSISTENCY_GROUP_DEST = { 'cgsnapshots': [], 'description': 'fake description', 'host': '%(host)s@%(backend)s' % { 'host': HOST_NAME, 'backend': BACKEND_NAME}, 'id': CONSISTENCY_GROUP_ID, 'name': CONSISTENCY_GROUP_NAME, 'shares': [SHARE_FOR_CG3, SHARE_FOR_CG4], } CG_SNAPSHOT_MEMBER_1 = { 'cgsnapshot_id': CG_SNAPSHOT_ID, 'id': CG_SNAPSHOT_MEMBER_ID1, 'share_id': SHARE_ID, 'share_proto': 'NFS', 'size': SHARE_SIZE, } CG_SNAPSHOT_MEMBER_2 = { 'cgsnapshot_id': CG_SNAPSHOT_ID, 'id': CG_SNAPSHOT_MEMBER_ID2, 'share_id': SHARE_ID2, 'share_proto': 'NFS', 'size': SHARE_SIZE, } CG_SNAPSHOT = { 'share_group_snapshot_members': [CG_SNAPSHOT_MEMBER_1, CG_SNAPSHOT_MEMBER_2], 'share_group': CONSISTENCY_GROUP, 'share_group_id': CONSISTENCY_GROUP_ID, 'id': CG_SNAPSHOT_ID, 'project_id': TENANT_ID, } COLLATED_CGSNAPSHOT_INFO = [ { 'share': SHARE_FOR_CG3, 'snapshot': { 'share_id': SHARE_ID, 'id': CG_SNAPSHOT_ID, 'size': SHARE_SIZE, } }, { 'share': SHARE_FOR_CG4, 'snapshot': { 'share_id': SHARE_ID2, 'id': CG_SNAPSHOT_ID, 'size': SHARE_SIZE, } }, ] IDENTIFIER = 'c5b3a865-56d0-4d88-dke5-853465e099c9' LIF_NAMES = [] LIF_ADDRESSES = ['10.10.10.10', '10.10.10.20'] LIFS = ( {'address': LIF_ADDRESSES[0], 'home-node': CLUSTER_NODES[0], 'home-port': 'e0c', 'interface-name': 'os_132dbb10-9a36-46f2-8d89-3d909830c356', 'netmask': NETWORK_INFO_NETMASK, 'role': 'data', 'vserver': VSERVER1 }, {'address': LIF_ADDRESSES[1], 'home-node': CLUSTER_NODES[1], 'home-port': 'e0c', 'interface-name': 'os_7eabdeed-bad2-46ea-bd0f-a33884c869e0', 'netmask': NETWORK_INFO_NETMASK, 'role': 'data', 'vserver': VSERVER1 }, ) 
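# Illustrative sketch only (hypothetical constant, not used by the tests):
# NFS export locations are built by joining a data LIF address with the
# share's export path; compare NFS_EXPORTS below, which attaches the
# is_admin_only/preferred metadata the driver reports for each path.
EXAMPLE_EXPORT_PATHS = ['%s:%s' % (lif['address'], 'fake_export_path')
                        for lif in LIFS]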
INTERFACE_ADDRESSES_WITH_METADATA = { LIF_ADDRESSES[0]: { 'is_admin_only': False, 'preferred': True, }, LIF_ADDRESSES[1]: { 'is_admin_only': True, 'preferred': False, }, } NFS_EXPORTS = [ { 'path': ':'.join([LIF_ADDRESSES[0], 'fake_export_path']), 'is_admin_only': False, 'metadata': { 'preferred': True, }, }, { 'path': ':'.join([LIF_ADDRESSES[1], 'fake_export_path']), 'is_admin_only': True, 'metadata': { 'preferred': False, }, }, ] SHARE_ACCESS = { 'access_type': 'user', 'access_to': [LIF_ADDRESSES[0]] } EMS_MESSAGE_0 = { 'computer-name': HOST_NAME, 'event-id': '0', 'event-source': 'Manila driver %s' % DRIVER_NAME, 'app-version': APP_VERSION, 'category': 'provisioning', 'event-description': 'OpenStack Manila connected to cluster node', 'log-level': '5', 'auto-support': 'false' } EMS_MESSAGE_1 = { 'computer-name': HOST_NAME, 'event-id': '1', 'event-source': 'Manila driver %s' % DRIVER_NAME, 'app-version': APP_VERSION, 'category': 'provisioning', 'event-description': '', 'log-level': '5', 'auto-support': 'false' } AGGREGATE_CAPACITIES = { AGGREGATES[0]: { 'available': 1181116007, # 1.1 GB 'total': 3543348020, # 3.3 GB 'used': 2362232013, # 2.2 GB }, AGGREGATES[1]: { 'available': 2147483648, # 2.0 GB 'total': 6442450944, # 6.0 GB 'used': 4294967296, # 4.0 GB } } AGGREGATE_CAPACITIES_VSERVER_CREDS = { AGGREGATES[0]: { 'available': 1181116007, # 1.1 GB }, AGGREGATES[1]: { 'available': 2147483648, # 2.0 GB } } SSC_INFO = { AGGREGATES[0]: { 'netapp_raid_type': 'raid4', 'netapp_disk_type': 'FCAL', 'netapp_hybrid_aggregate': 'false', 'netapp_aggregate': AGGREGATES[0], }, AGGREGATES[1]: { 'netapp_raid_type': 'raid_dp', 'netapp_disk_type': ['SATA', 'SSD'], 'netapp_hybrid_aggregate': 'true', 'netapp_aggregate': AGGREGATES[1], } } SSC_INFO_VSERVER_CREDS = { AGGREGATES[0]: { 'netapp_aggregate': AGGREGATES[0], }, AGGREGATES[1]: { 'netapp_aggregate': AGGREGATES[1], } } POOLS = [ { 'pool_name': AGGREGATES[0], 'netapp_aggregate': AGGREGATES[0], 'total_capacity_gb': 3.3, 'free_capacity_gb': 1.1, 'allocated_capacity_gb': 2.2, 'reserved_percentage': 5, 'dedupe': [True, False], 'compression': [True, False], 'thin_provisioning': [True, False], 'netapp_flexvol_encryption': True, 'netapp_raid_type': 'raid4', 'netapp_disk_type': 'FCAL', 'netapp_hybrid_aggregate': 'false', 'utilization': 30.0, 'filter_function': 'filter', 'goodness_function': 'goodness', 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': True, 'qos': True, }, { 'pool_name': AGGREGATES[1], 'netapp_aggregate': AGGREGATES[1], 'total_capacity_gb': 6.0, 'free_capacity_gb': 2.0, 'allocated_capacity_gb': 4.0, 'reserved_percentage': 5, 'dedupe': [True, False], 'compression': [True, False], 'thin_provisioning': [True, False], 'netapp_flexvol_encryption': True, 'netapp_raid_type': 'raid_dp', 'netapp_disk_type': ['SATA', 'SSD'], 'netapp_hybrid_aggregate': 'true', 'utilization': 42.0, 'filter_function': 'filter', 'goodness_function': 'goodness', 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': True, 'qos': True, }, ] POOLS_VSERVER_CREDS = [ { 'pool_name': AGGREGATES[0], 'netapp_aggregate': AGGREGATES[0], 'total_capacity_gb': 'unknown', 'free_capacity_gb': 1.1, 'allocated_capacity_gb': 0.0, 'reserved_percentage': 5, 'dedupe': [True, False], 'compression': [True, False], 'thin_provisioning': [True, False], 'netapp_flexvol_encryption': True, 'utilization': 50.0, 'filter_function': None, 'goodness_function': None, 'snapshot_support': True, 
'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': True, 'qos': False, }, { 'pool_name': AGGREGATES[1], 'netapp_aggregate': AGGREGATES[1], 'total_capacity_gb': 'unknown', 'free_capacity_gb': 2.0, 'allocated_capacity_gb': 0.0, 'reserved_percentage': 5, 'dedupe': [True, False], 'compression': [True, False], 'thin_provisioning': [True, False], 'netapp_flexvol_encryption': True, 'utilization': 50.0, 'filter_function': None, 'goodness_function': None, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': True, 'qos': False, }, ] SSC_AGGREGATES = [ { 'name': AGGREGATES[0], 'raid-type': 'raid4', 'is-hybrid': False, }, { 'name': AGGREGATES[1], 'raid-type': 'raid_dp', 'is-hybrid': True, }, ] CLUSTER_INFO = { 'nodes': CLUSTER_NODES, 'nve_support': True, } SSC_DISK_TYPES = ['FCAL', ['SATA', 'SSD']] NODE = 'cluster1-01' COUNTERS_T1 = [ { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:system', 'avg_processor_busy': '29078861388', 'instance-name': 'system', 'timestamp': '1453573776', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:system', 'cpu_elapsed_time': '1063283283681', 'instance-name': 'system', 'timestamp': '1453573776', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:system', 'cpu_elapsed_time1': '1063283283681', 'instance-name': 'system', 'timestamp': '1453573776', }, { 'cp_phase_times:p2a_snap': '714', 'cp_phase_times:p4_finish': '14897', 'cp_phase_times:setup': '581', 'cp_phase_times:p2a_dlog1': '6019', 'cp_phase_times:p2a_dlog2': '2328', 'cp_phase_times:p2v_cont': '2479', 'cp_phase_times:p2v_volinfo': '1138', 'cp_phase_times:p2v_bm': '3484', 'cp_phase_times:p2v_fsinfo': '2031', 'cp_phase_times:p2a_inofile': '356', 'cp_phase_times': '581,5007,1840,9832,498,0,839,799,1336,2031,0,377,' '427,1058,354,3484,5135,1460,1138,2479,356,1373' ',6019,9,2328,2257,229,493,1275,0,6059,714,530215,' '21603833,0,0,3286,11075940,22001,14897,36', 'cp_phase_times:p2v_dlog2': '377', 'instance-name': 'wafl', 'cp_phase_times:p3_wait': '0', 'cp_phase_times:p2a_bm': '6059', 'cp_phase_times:p1_quota': '498', 'cp_phase_times:p2v_inofile': '839', 'cp_phase_times:p2a_refcount': '493', 'cp_phase_times:p2a_fsinfo': '2257', 'cp_phase_times:p2a_hyabc': '0', 'cp_phase_times:p2a_volinfo': '530215', 'cp_phase_times:pre_p0': '5007', 'cp_phase_times:p2a_hya': '9', 'cp_phase_times:p0_snap_del': '1840', 'cp_phase_times:p2a_ino': '1373', 'cp_phase_times:p2v_df_scores_sub': '354', 'cp_phase_times:p2v_ino_pub': '799', 'cp_phase_times:p2a_ipu_bitmap_grow': '229', 'cp_phase_times:p2v_refcount': '427', 'timestamp': '1453573776', 'cp_phase_times:p2v_dlog1': '0', 'cp_phase_times:p2_finish': '0', 'cp_phase_times:p1_clean': '9832', 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:wafl', 'cp_phase_times:p3a_volinfo': '11075940', 'cp_phase_times:p2a_topaa': '1275', 'cp_phase_times:p2_flush': '21603833', 'cp_phase_times:p2v_df_scores': '1460', 'cp_phase_times:ipu_disk_add': '0', 'cp_phase_times:p2v_snap': '5135', 'cp_phase_times:p5_finish': '36', 'cp_phase_times:p2v_ino_pri': '1336', 'cp_phase_times:p3v_volinfo': '3286', 'cp_phase_times:p2v_topaa': '1058', 'cp_phase_times:p3_finish': '22001', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:wafl', 'total_cp_msecs': '33309624', 'instance-name': 'wafl', 'timestamp': '1453573776', }, { 'domain_busy:kahuna': '2712467226', 'timestamp': '1453573777', 'domain_busy:cifs': '434036', 'domain_busy:raid_exempt': '28', 'node-name': 
'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor0', 'domain_busy:target': '6460782', 'domain_busy:nwk_exempt': '20', 'domain_busy:raid': '722094140', 'domain_busy:storage': '2253156562', 'instance-name': 'processor0', 'domain_busy:cluster': '34', 'domain_busy:wafl_xcleaner': '51275254', 'domain_busy:wafl_exempt': '1243553699', 'domain_busy:protocol': '54', 'domain_busy': '1028851855595,2712467226,2253156562,5688808118,' '722094140,28,6460782,59,434036,1243553699,51275254,' '61237441,34,54,11,20,5254181873,13656398235,452215', 'domain_busy:nwk_legacy': '5254181873', 'domain_busy:dnscache': '59', 'domain_busy:exempt': '5688808118', 'domain_busy:hostos': '13656398235', 'domain_busy:sm_exempt': '61237441', 'domain_busy:nwk_exclusive': '11', 'domain_busy:idle': '1028851855595', 'domain_busy:ssan_exempt': '452215', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor0', 'processor_elapsed_time': '1063283843318', 'instance-name': 'processor0', 'timestamp': '1453573777', }, { 'domain_busy:kahuna': '1978024846', 'timestamp': '1453573777', 'domain_busy:cifs': '318584', 'domain_busy:raid_exempt': '0', 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor1', 'domain_busy:target': '3330956', 'domain_busy:nwk_exempt': '0', 'domain_busy:raid': '722235930', 'domain_busy:storage': '1498890708', 'instance-name': 'processor1', 'domain_busy:cluster': '0', 'domain_busy:wafl_xcleaner': '50122685', 'domain_busy:wafl_exempt': '1265921369', 'domain_busy:protocol': '0', 'domain_busy': '1039557880852,1978024846,1498890708,3734060289,' '722235930,0,3330956,0,318584,1265921369,50122685,' '36417362,0,0,0,0,2815252976,10274810484,393451', 'domain_busy:nwk_legacy': '2815252976', 'domain_busy:dnscache': '0', 'domain_busy:exempt': '3734060289', 'domain_busy:hostos': '10274810484', 'domain_busy:sm_exempt': '36417362', 'domain_busy:nwk_exclusive': '0', 'domain_busy:idle': '1039557880852', 'domain_busy:ssan_exempt': '393451', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor1', 'processor_elapsed_time': '1063283843321', 'instance-name': 'processor1', 'timestamp': '1453573777', } ] COUNTERS_T2 = [ { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:system', 'avg_processor_busy': '29081228905', 'instance-name': 'system', 'timestamp': '1453573834', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:system', 'cpu_elapsed_time': '1063340792148', 'instance-name': 'system', 'timestamp': '1453573834', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:system', 'cpu_elapsed_time1': '1063340792148', 'instance-name': 'system', 'timestamp': '1453573834', }, { 'cp_phase_times:p2a_snap': '714', 'cp_phase_times:p4_finish': '14897', 'cp_phase_times:setup': '581', 'cp_phase_times:p2a_dlog1': '6019', 'cp_phase_times:p2a_dlog2': '2328', 'cp_phase_times:p2v_cont': '2479', 'cp_phase_times:p2v_volinfo': '1138', 'cp_phase_times:p2v_bm': '3484', 'cp_phase_times:p2v_fsinfo': '2031', 'cp_phase_times:p2a_inofile': '356', 'cp_phase_times': '581,5007,1840,9832,498,0,839,799,1336,2031,0,377,' '427,1058,354,3484,5135,1460,1138,2479,356,1373,' '6019,9,2328,2257,229,493,1275,0,6059,714,530215,' '21604863,0,0,3286,11076392,22001,14897,36', 'cp_phase_times:p2v_dlog2': '377', 'instance-name': 'wafl', 'cp_phase_times:p3_wait': '0', 'cp_phase_times:p2a_bm': '6059', 'cp_phase_times:p1_quota': '498', 'cp_phase_times:p2v_inofile': '839', 'cp_phase_times:p2a_refcount': '493', 'cp_phase_times:p2a_fsinfo': '2257', 
'cp_phase_times:p2a_hyabc': '0', 'cp_phase_times:p2a_volinfo': '530215', 'cp_phase_times:pre_p0': '5007', 'cp_phase_times:p2a_hya': '9', 'cp_phase_times:p0_snap_del': '1840', 'cp_phase_times:p2a_ino': '1373', 'cp_phase_times:p2v_df_scores_sub': '354', 'cp_phase_times:p2v_ino_pub': '799', 'cp_phase_times:p2a_ipu_bitmap_grow': '229', 'cp_phase_times:p2v_refcount': '427', 'timestamp': '1453573834', 'cp_phase_times:p2v_dlog1': '0', 'cp_phase_times:p2_finish': '0', 'cp_phase_times:p1_clean': '9832', 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:wafl', 'cp_phase_times:p3a_volinfo': '11076392', 'cp_phase_times:p2a_topaa': '1275', 'cp_phase_times:p2_flush': '21604863', 'cp_phase_times:p2v_df_scores': '1460', 'cp_phase_times:ipu_disk_add': '0', 'cp_phase_times:p2v_snap': '5135', 'cp_phase_times:p5_finish': '36', 'cp_phase_times:p2v_ino_pri': '1336', 'cp_phase_times:p3v_volinfo': '3286', 'cp_phase_times:p2v_topaa': '1058', 'cp_phase_times:p3_finish': '22001', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:wafl', 'total_cp_msecs': '33311106', 'instance-name': 'wafl', 'timestamp': '1453573834', }, { 'domain_busy:kahuna': '2712629374', 'timestamp': '1453573834', 'domain_busy:cifs': '434036', 'domain_busy:raid_exempt': '28', 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor0', 'domain_busy:target': '6461082', 'domain_busy:nwk_exempt': '20', 'domain_busy:raid': '722136824', 'domain_busy:storage': '2253260824', 'instance-name': 'processor0', 'domain_busy:cluster': '34', 'domain_busy:wafl_xcleaner': '51277506', 'domain_busy:wafl_exempt': '1243637154', 'domain_busy:protocol': '54', 'domain_busy': '1028906640232,2712629374,2253260824,5689093500,' '722136824,28,6461082,59,434036,1243637154,51277506,' '61240335,34,54,11,20,5254491236,13657992139,452215', 'domain_busy:nwk_legacy': '5254491236', 'domain_busy:dnscache': '59', 'domain_busy:exempt': '5689093500', 'domain_busy:hostos': '13657992139', 'domain_busy:sm_exempt': '61240335', 'domain_busy:nwk_exclusive': '11', 'domain_busy:idle': '1028906640232', 'domain_busy:ssan_exempt': '452215', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor0', 'processor_elapsed_time': '1063341351916', 'instance-name': 'processor0', 'timestamp': '1453573834', }, { 'domain_busy:kahuna': '1978217049', 'timestamp': '1453573834', 'domain_busy:cifs': '318584', 'domain_busy:raid_exempt': '0', 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor1', 'domain_busy:target': '3331147', 'domain_busy:nwk_exempt': '0', 'domain_busy:raid': '722276805', 'domain_busy:storage': '1498984059', 'instance-name': 'processor1', 'domain_busy:cluster': '0', 'domain_busy:wafl_xcleaner': '50126176', 'domain_busy:wafl_exempt': '1266039846', 'domain_busy:protocol': '0', 'domain_busy': '1039613222253,1978217049,1498984059,3734279672,' '722276805,0,3331147,0,318584,1266039846,50126176,' '36419297,0,0,0,0,2815435865,10276068104,393451', 'domain_busy:nwk_legacy': '2815435865', 'domain_busy:dnscache': '0', 'domain_busy:exempt': '3734279672', 'domain_busy:hostos': '10276068104', 'domain_busy:sm_exempt': '36419297', 'domain_busy:nwk_exclusive': '0', 'domain_busy:idle': '1039613222253', 'domain_busy:ssan_exempt': '393451', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor1', 'processor_elapsed_time': '1063341351919', 'instance-name': 'processor1', 'timestamp': '1453573834', }, ] SYSTEM_INSTANCE_UUIDS = ['cluster1-01:kernel:system'] SYSTEM_INSTANCE_NAMES = ['system'] 
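# ---------------------------------------------------------------------------
# Performance counter fixtures (descriptive note, added for orientation).
#
# COUNTERS_T1/COUNTERS_T2 above are two consecutive full perf sample sets for
# node cluster1-01, taken roughly a minute apart.  SYSTEM_COUNTERS,
# WAFL_COUNTERS and PROCESSOR_COUNTERS below are smaller per-object samples;
# the *_COUNTER_INFO dicts carry the label lists for array-valued counters
# (cp_phase_times, domain_busy), and the EXPANDED_* fixtures show the same
# samples after each array element has been split out into a
# 'counter:label' key.  Roughly speaking, the performance library exercised
# elsewhere in these tests derives node utilization from the deltas of busy
# time versus elapsed time between two such consecutive samples.
# ---------------------------------------------------------------------------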
SYSTEM_COUNTERS = [ { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:system', 'avg_processor_busy': '27877641199', 'instance-name': 'system', 'timestamp': '1453524928', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:system', 'cpu_elapsed_time': '1014438541279', 'instance-name': 'system', 'timestamp': '1453524928', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:system', 'cpu_elapsed_time1': '1014438541279', 'instance-name': 'system', 'timestamp': '1453524928', }, ] WAFL_INSTANCE_UUIDS = ['cluster1-01:kernel:wafl'] WAFL_INSTANCE_NAMES = ['wafl'] WAFL_COUNTERS = [ { 'cp_phase_times': '563,4844,1731,9676,469,0,821,763,1282,1937,0,359,' '418,1048,344,3344,4867,1397,1101,2380,356,1318,' '5954,9,2236,2190,228,476,1221,0,5838,696,515588,' '20542954,0,0,3122,10567367,20696,13982,36', 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:wafl', 'instance-name': 'wafl', 'timestamp': '1453523339', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:wafl', 'total_cp_msecs': '31721222', 'instance-name': 'wafl', 'timestamp': '1453523339', }, ] WAFL_CP_PHASE_TIMES_COUNTER_INFO = { 'labels': [ 'SETUP', 'PRE_P0', 'P0_SNAP_DEL', 'P1_CLEAN', 'P1_QUOTA', 'IPU_DISK_ADD', 'P2V_INOFILE', 'P2V_INO_PUB', 'P2V_INO_PRI', 'P2V_FSINFO', 'P2V_DLOG1', 'P2V_DLOG2', 'P2V_REFCOUNT', 'P2V_TOPAA', 'P2V_DF_SCORES_SUB', 'P2V_BM', 'P2V_SNAP', 'P2V_DF_SCORES', 'P2V_VOLINFO', 'P2V_CONT', 'P2A_INOFILE', 'P2A_INO', 'P2A_DLOG1', 'P2A_HYA', 'P2A_DLOG2', 'P2A_FSINFO', 'P2A_IPU_BITMAP_GROW', 'P2A_REFCOUNT', 'P2A_TOPAA', 'P2A_HYABC', 'P2A_BM', 'P2A_SNAP', 'P2A_VOLINFO', 'P2_FLUSH', 'P2_FINISH', 'P3_WAIT', 'P3V_VOLINFO', 'P3A_VOLINFO', 'P3_FINISH', 'P4_FINISH', 'P5_FINISH', ], 'name': 'cp_phase_times', } EXPANDED_WAFL_COUNTERS = [ { 'cp_phase_times:p2a_snap': '696', 'cp_phase_times:p4_finish': '13982', 'cp_phase_times:setup': '563', 'cp_phase_times:p2a_dlog1': '5954', 'cp_phase_times:p2a_dlog2': '2236', 'cp_phase_times:p2v_cont': '2380', 'cp_phase_times:p2v_volinfo': '1101', 'cp_phase_times:p2v_bm': '3344', 'cp_phase_times:p2v_fsinfo': '1937', 'cp_phase_times:p2a_inofile': '356', 'cp_phase_times': '563,4844,1731,9676,469,0,821,763,1282,1937,0,359,' '418,1048,344,3344,4867,1397,1101,2380,356,1318,' '5954,9,2236,2190,228,476,1221,0,5838,696,515588,' '20542954,0,0,3122,10567367,20696,13982,36', 'cp_phase_times:p2v_dlog2': '359', 'instance-name': 'wafl', 'cp_phase_times:p3_wait': '0', 'cp_phase_times:p2a_bm': '5838', 'cp_phase_times:p1_quota': '469', 'cp_phase_times:p2v_inofile': '821', 'cp_phase_times:p2a_refcount': '476', 'cp_phase_times:p2a_fsinfo': '2190', 'cp_phase_times:p2a_hyabc': '0', 'cp_phase_times:p2a_volinfo': '515588', 'cp_phase_times:pre_p0': '4844', 'cp_phase_times:p2a_hya': '9', 'cp_phase_times:p0_snap_del': '1731', 'cp_phase_times:p2a_ino': '1318', 'cp_phase_times:p2v_df_scores_sub': '344', 'cp_phase_times:p2v_ino_pub': '763', 'cp_phase_times:p2a_ipu_bitmap_grow': '228', 'cp_phase_times:p2v_refcount': '418', 'timestamp': '1453523339', 'cp_phase_times:p2v_dlog1': '0', 'cp_phase_times:p2_finish': '0', 'cp_phase_times:p1_clean': '9676', 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:wafl', 'cp_phase_times:p3a_volinfo': '10567367', 'cp_phase_times:p2a_topaa': '1221', 'cp_phase_times:p2_flush': '20542954', 'cp_phase_times:p2v_df_scores': '1397', 'cp_phase_times:ipu_disk_add': '0', 'cp_phase_times:p2v_snap': '4867', 'cp_phase_times:p5_finish': '36', 'cp_phase_times:p2v_ino_pri': '1282', 'cp_phase_times:p3v_volinfo': 
'3122', 'cp_phase_times:p2v_topaa': '1048', 'cp_phase_times:p3_finish': '20696', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:wafl', 'total_cp_msecs': '31721222', 'instance-name': 'wafl', 'timestamp': '1453523339', }, ] PROCESSOR_INSTANCE_UUIDS = [ 'cluster1-01:kernel:processor0', 'cluster1-01:kernel:processor1', ] PROCESSOR_INSTANCE_NAMES = ['processor0', 'processor1'] PROCESSOR_COUNTERS = [ { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor0', 'domain_busy': '980648687811,2597164534,2155400686,5443901498,' '690280568,28,6180773,59,413895,1190100947,48989575,' '58549809,34,54,11,20,5024141791,13136260754,452215', 'instance-name': 'processor0', 'timestamp': '1453524150', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor0', 'processor_elapsed_time': '1013660714257', 'instance-name': 'processor0', 'timestamp': '1453524150', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor1', 'domain_busy': '990957980543,1891766637,1433411516,3572427934,' '691372324,0,3188648,0,305947,1211235777,47954620,' '34832715,0,0,0,0,2692084482,9834648927,393451', 'instance-name': 'processor1', 'timestamp': '1453524150', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor1', 'processor_elapsed_time': '1013660714261', 'instance-name': 'processor1', 'timestamp': '1453524150', }, ] PROCESSOR_DOMAIN_BUSY_COUNTER_INFO = { 'labels': [ 'idle', 'kahuna', 'storage', 'exempt', 'raid', 'raid_exempt', 'target', 'dnscache', 'cifs', 'wafl_exempt', 'wafl_xcleaner', 'sm_exempt', 'cluster', 'protocol', 'nwk_exclusive', 'nwk_exempt', 'nwk_legacy', 'hostOS', 'ssan_exempt', ], 'name': 'domain_busy', } EXPANDED_PROCESSOR_COUNTERS = [ { 'domain_busy:kahuna': '2597164534', 'timestamp': '1453524150', 'domain_busy:cifs': '413895', 'domain_busy:raid_exempt': '28', 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor0', 'domain_busy:target': '6180773', 'domain_busy:nwk_exempt': '20', 'domain_busy:raid': '690280568', 'domain_busy:storage': '2155400686', 'instance-name': 'processor0', 'domain_busy:cluster': '34', 'domain_busy:wafl_xcleaner': '48989575', 'domain_busy:wafl_exempt': '1190100947', 'domain_busy:protocol': '54', 'domain_busy': '980648687811,2597164534,2155400686,5443901498,' '690280568,28,6180773,59,413895,1190100947,48989575,' '58549809,34,54,11,20,5024141791,13136260754,452215', 'domain_busy:nwk_legacy': '5024141791', 'domain_busy:dnscache': '59', 'domain_busy:exempt': '5443901498', 'domain_busy:hostos': '13136260754', 'domain_busy:sm_exempt': '58549809', 'domain_busy:nwk_exclusive': '11', 'domain_busy:idle': '980648687811', 'domain_busy:ssan_exempt': '452215', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor0', 'processor_elapsed_time': '1013660714257', 'instance-name': 'processor0', 'timestamp': '1453524150', }, { 'domain_busy:kahuna': '1891766637', 'timestamp': '1453524150', 'domain_busy:cifs': '305947', 'domain_busy:raid_exempt': '0', 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor1', 'domain_busy:target': '3188648', 'domain_busy:nwk_exempt': '0', 'domain_busy:raid': '691372324', 'domain_busy:storage': '1433411516', 'instance-name': 'processor1', 'domain_busy:cluster': '0', 'domain_busy:wafl_xcleaner': '47954620', 'domain_busy:wafl_exempt': '1211235777', 'domain_busy:protocol': '0', 'domain_busy': '990957980543,1891766637,1433411516,3572427934,' '691372324,0,3188648,0,305947,1211235777,47954620,' 
'34832715,0,0,0,0,2692084482,9834648927,393451', 'domain_busy:nwk_legacy': '2692084482', 'domain_busy:dnscache': '0', 'domain_busy:exempt': '3572427934', 'domain_busy:hostos': '9834648927', 'domain_busy:sm_exempt': '34832715', 'domain_busy:nwk_exclusive': '0', 'domain_busy:idle': '990957980543', 'domain_busy:ssan_exempt': '393451', }, { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor1', 'processor_elapsed_time': '1013660714261', 'instance-name': 'processor1', 'timestamp': '1453524150', }, ] def get_config_cmode(): config = na_fakes.create_configuration_cmode() config.local_conf.set_override('share_backend_name', BACKEND_NAME) config.reserved_share_percentage = 5 config.netapp_login = CLIENT_KWARGS['username'] config.netapp_password = CLIENT_KWARGS['password'] config.netapp_server_hostname = CLIENT_KWARGS['hostname'] config.netapp_transport_type = CLIENT_KWARGS['transport_type'] config.netapp_server_port = CLIENT_KWARGS['port'] config.netapp_volume_name_template = VOLUME_NAME_TEMPLATE config.netapp_aggregate_name_search_pattern = AGGREGATE_NAME_SEARCH_PATTERN config.netapp_vserver_name_template = VSERVER_NAME_TEMPLATE config.netapp_root_volume_aggregate = ROOT_VOLUME_AGGREGATE config.netapp_root_volume = ROOT_VOLUME config.netapp_lif_name_template = LIF_NAME_TEMPLATE config.netapp_volume_snapshot_reserve_percent = 8 config.netapp_vserver = VSERVER1 return config def get_network_info(user_network_allocation, admin_network_allocation): net_info = copy.deepcopy(NETWORK_INFO) net_info['network_allocations'] = user_network_allocation net_info['admin_network_allocations'] = admin_network_allocation return net_info manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/cluster_mode/0000775000175000017500000000000013656750362026202 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_lib_single_svm.py0000664000175000017500000002104213656750227032606 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for the NetApp Data ONTAP cDOT single-SVM storage driver library. 
""" from unittest import mock import ddt from oslo_log import log from manila import exception from manila.share.drivers.netapp.dataontap.cluster_mode import lib_base from manila.share.drivers.netapp.dataontap.cluster_mode import lib_single_svm from manila.share.drivers.netapp import utils as na_utils from manila import test import manila.tests.share.drivers.netapp.dataontap.fakes as fake @ddt.ddt class NetAppFileStorageLibraryTestCase(test.TestCase): def setUp(self): super(NetAppFileStorageLibraryTestCase, self).setUp() self.mock_object(na_utils, 'validate_driver_instantiation') # Mock loggers as themselves to allow logger arg validation mock_logger = log.getLogger('mock_logger') self.mock_object(lib_single_svm.LOG, 'info', mock.Mock(side_effect=mock_logger.info)) config = fake.get_config_cmode() config.netapp_vserver = fake.VSERVER1 kwargs = { 'configuration': config, 'private_storage': mock.Mock(), 'app_version': fake.APP_VERSION } self.library = lib_single_svm.NetAppCmodeSingleSVMFileStorageLibrary( fake.DRIVER_NAME, **kwargs) self.library._client = mock.Mock() self.client = self.library._client def test_init(self): self.assertEqual(fake.VSERVER1, self.library._vserver) def test_check_for_setup_error(self): self.library._client.vserver_exists.return_value = True self.library._have_cluster_creds = True self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=fake.AGGREGATES)) mock_super = self.mock_object(lib_base.NetAppCmodeFileStorageLibrary, 'check_for_setup_error') self.library.check_for_setup_error() self.assertTrue(lib_single_svm.LOG.info.called) mock_super.assert_called_once_with() self.assertTrue(self.library._find_matching_aggregates.called) def test_check_for_setup_error_no_vserver(self): self.library._vserver = None self.assertRaises(exception.InvalidInput, self.library.check_for_setup_error) def test_check_for_setup_error_vserver_not_found(self): self.library._client.vserver_exists.return_value = False self.assertRaises(exception.VserverNotFound, self.library.check_for_setup_error) def test_check_for_setup_error_cluster_creds_vserver_match(self): self.library._client.vserver_exists.return_value = True self.library._have_cluster_creds = False self.library._client.list_vservers.return_value = [fake.VSERVER1] self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=fake.AGGREGATES)) mock_super = self.mock_object(lib_base.NetAppCmodeFileStorageLibrary, 'check_for_setup_error') self.library.check_for_setup_error() mock_super.assert_called_once_with() self.assertTrue(self.library._find_matching_aggregates.called) def test_check_for_setup_error_cluster_creds_vserver_mismatch(self): self.library._client.vserver_exists.return_value = True self.library._have_cluster_creds = False self.library._client.list_vservers.return_value = [fake.VSERVER2] self.assertRaises(exception.InvalidInput, self.library.check_for_setup_error) def test_check_for_setup_error_no_aggregates(self): self.library._client.vserver_exists.return_value = True self.library._have_cluster_creds = True self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=[])) self.assertRaises(exception.NetAppException, self.library.check_for_setup_error) self.assertTrue(self.library._find_matching_aggregates.called) def test_get_vserver(self): self.library._client.vserver_exists.return_value = True self.mock_object(self.library, '_get_api_client', mock.Mock(return_value='fake_client')) result_vserver, result_vserver_client = self.library._get_vserver() 
self.assertEqual(fake.VSERVER1, result_vserver) self.assertEqual('fake_client', result_vserver_client) def test_get_vserver_share_server_specified(self): self.assertRaises(exception.InvalidParameterValue, self.library._get_vserver, share_server=fake.SHARE_SERVER) def test_get_vserver_no_vserver(self): self.library._vserver = None self.assertRaises(exception.InvalidInput, self.library._get_vserver) def test_get_vserver_vserver_not_found(self): self.library._client.vserver_exists.return_value = False self.assertRaises(exception.VserverNotFound, self.library._get_vserver) def test_get_ems_pool_info(self): self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=['aggr1', 'aggr2'])) result = self.library._get_ems_pool_info() expected = { 'pools': { 'vserver': fake.VSERVER1, 'aggregates': ['aggr1', 'aggr2'], }, } self.assertEqual(expected, result) @ddt.data(True, False) def test_handle_housekeeping_tasks_with_cluster_creds(self, have_creds): self.library._have_cluster_creds = have_creds mock_vserver_client = mock.Mock() self.mock_object(self.library, '_get_api_client', mock.Mock(return_value=mock_vserver_client)) mock_super = self.mock_object(lib_base.NetAppCmodeFileStorageLibrary, '_handle_housekeeping_tasks') self.library._handle_housekeeping_tasks() self.assertTrue( mock_vserver_client.prune_deleted_nfs_export_policies.called) self.assertTrue(mock_vserver_client.prune_deleted_snapshots.called) self.assertIs( have_creds, mock_vserver_client.remove_unused_qos_policy_groups.called) self.assertTrue(mock_super.called) @ddt.data(True, False) def test_find_matching_aggregates(self, have_cluster_creds): self.library._have_cluster_creds = have_cluster_creds aggregates = fake.AGGREGATES + fake.ROOT_AGGREGATES mock_vserver_client = mock.Mock() mock_vserver_client.list_vserver_aggregates.return_value = aggregates self.mock_object(self.library, '_get_api_client', mock.Mock(return_value=mock_vserver_client)) mock_client = mock.Mock() mock_client.list_root_aggregates.return_value = fake.ROOT_AGGREGATES self.library._client = mock_client self.library.configuration.netapp_aggregate_name_search_pattern = ( '.*_aggr_1') result = self.library._find_matching_aggregates() if have_cluster_creds: self.assertListEqual([fake.AGGREGATES[0]], result) mock_client.list_root_aggregates.assert_called_once_with() else: self.assertListEqual([fake.AGGREGATES[0], fake.ROOT_AGGREGATES[0]], result) self.assertFalse(mock_client.list_root_aggregates.called) def test_get_network_allocations_number(self): self.assertEqual(0, self.library.get_network_allocations_number()) def test_get_admin_network_allocations_number(self): result = self.library.get_admin_network_allocations_number() self.assertEqual(0, result) manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/cluster_mode/__init__.py0000664000175000017500000000000013656750227030301 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_lib_base.py0000664000175000017500000100342613656750227031361 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # Copyright (c) 2015 Tom Barron. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for the NetApp Data ONTAP cDOT base storage driver library. """ import copy import json import math import socket import time from unittest import mock import ddt from oslo_log import log from oslo_service import loopingcall from oslo_utils import timeutils from oslo_utils import units from oslo_utils import uuidutils from manila.common import constants from manila import exception from manila.share.drivers.netapp.dataontap.client import api as netapp_api from manila.share.drivers.netapp.dataontap.client import client_cmode from manila.share.drivers.netapp.dataontap.cluster_mode import data_motion from manila.share.drivers.netapp.dataontap.cluster_mode import lib_base from manila.share.drivers.netapp.dataontap.cluster_mode import performance from manila.share.drivers.netapp.dataontap.protocols import cifs_cmode from manila.share.drivers.netapp.dataontap.protocols import nfs_cmode from manila.share.drivers.netapp import utils as na_utils from manila.share import share_types from manila.share import utils as share_utils from manila import test from manila.tests import fake_share from manila.tests.share.drivers.netapp.dataontap import fakes as fake from manila.tests import utils def fake_replica(**kwargs): return fake_share.fake_replica(for_manager=True, **kwargs) @ddt.ddt class NetAppFileStorageLibraryTestCase(test.TestCase): def setUp(self): super(NetAppFileStorageLibraryTestCase, self).setUp() self.mock_object(na_utils, 'validate_driver_instantiation') self.mock_object(na_utils, 'setup_tracing') # Mock loggers as themselves to allow logger arg validation mock_logger = log.getLogger('mock_logger') self.mock_object(lib_base.LOG, 'info', mock.Mock(side_effect=mock_logger.info)) self.mock_object(lib_base.LOG, 'warning', mock.Mock(side_effect=mock_logger.warning)) self.mock_object(lib_base.LOG, 'error', mock.Mock(side_effect=mock_logger.error)) self.mock_object(lib_base.LOG, 'debug', mock.Mock(side_effect=mock_logger.debug)) kwargs = { 'configuration': fake.get_config_cmode(), 'private_storage': mock.Mock(), 'app_version': fake.APP_VERSION } self.library = lib_base.NetAppCmodeFileStorageLibrary(fake.DRIVER_NAME, **kwargs) self.library._client = mock.Mock() self.library._perf_library = mock.Mock() self.client = self.library._client self.context = mock.Mock() self.fake_replica = copy.deepcopy(fake.SHARE) self.fake_replica_2 = copy.deepcopy(fake.SHARE) self.fake_replica_2['id'] = fake.SHARE_ID2 self.fake_replica_2['replica_state'] = ( constants.REPLICA_STATE_OUT_OF_SYNC) self.mock_dm_session = mock.Mock() self.mock_object(data_motion, "DataMotionSession", mock.Mock(return_value=self.mock_dm_session)) self.mock_object(data_motion, 'get_client_for_backend') def _mock_api_error(self, code='fake', message='fake'): return mock.Mock(side_effect=netapp_api.NaApiError(code=code, message=message)) def test_init(self): self.assertEqual(fake.DRIVER_NAME, self.library.driver_name) self.assertEqual(1, na_utils.validate_driver_instantiation.call_count) self.assertEqual(1, na_utils.setup_tracing.call_count) self.assertListEqual([], self.library._licenses) self.assertDictEqual({}, 
self.library._clients) self.assertDictEqual({}, self.library._ssc_stats) self.assertIsNotNone(self.library._app_version) def test_do_setup(self): mock_get_api_client = self.mock_object(self.library, '_get_api_client') self.mock_object( performance, 'PerformanceLibrary', mock.Mock(return_value='fake_perf_library')) self.mock_object( self.library._client, 'check_for_cluster_credentials', mock.Mock(return_value=True)) self.library.do_setup(self.context) mock_get_api_client.assert_called_once_with() (self.library._client.check_for_cluster_credentials. assert_called_once_with()) self.assertEqual('fake_perf_library', self.library._perf_library) self.mock_object(self.library._client, 'check_for_cluster_credentials', mock.Mock(return_value=True)) mock_set_cluster_info = self.mock_object( self.library, '_set_cluster_info') self.library.do_setup(self.context) mock_set_cluster_info.assert_called_once() def test_set_cluster_info(self): self.library._set_cluster_info() self.assertTrue(self.library._cluster_info['nve_support'], fake.CLUSTER_NODES) def test_check_for_setup_error(self): self.library._licenses = [] self.mock_object(self.library, '_get_licenses', mock.Mock(return_value=['fake_license'])) mock_start_periodic_tasks = self.mock_object(self.library, '_start_periodic_tasks') self.library.check_for_setup_error() self.assertEqual(['fake_license'], self.library._licenses) mock_start_periodic_tasks.assert_called_once_with() def test_get_vserver(self): self.assertRaises(NotImplementedError, self.library._get_vserver) def test_get_api_client(self): client_kwargs = fake.CLIENT_KWARGS.copy() # First call should proceed normally. mock_client_constructor = self.mock_object(client_cmode, 'NetAppCmodeClient') client1 = self.library._get_api_client() self.assertIsNotNone(client1) mock_client_constructor.assert_called_once_with(**client_kwargs) # Second call should yield the same object. mock_client_constructor = self.mock_object(client_cmode, 'NetAppCmodeClient') client2 = self.library._get_api_client() self.assertEqual(client1, client2) self.assertFalse(mock_client_constructor.called) def test_get_api_client_with_vserver(self): client_kwargs = fake.CLIENT_KWARGS.copy() client_kwargs['vserver'] = fake.VSERVER1 # First call should proceed normally. mock_client_constructor = self.mock_object(client_cmode, 'NetAppCmodeClient') client1 = self.library._get_api_client(vserver=fake.VSERVER1) self.assertIsNotNone(client1) mock_client_constructor.assert_called_once_with(**client_kwargs) # Second call should yield the same object. mock_client_constructor = self.mock_object(client_cmode, 'NetAppCmodeClient') client2 = self.library._get_api_client(vserver=fake.VSERVER1) self.assertEqual(client1, client2) self.assertFalse(mock_client_constructor.called) # A different vserver should work normally without caching. 
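        # The client cache is keyed on the vserver name, so asking for
        # VSERVER2 must construct a brand-new client (with 'vserver' set in
        # its kwargs) rather than reusing the one cached for VSERVER1.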
mock_client_constructor = self.mock_object(client_cmode, 'NetAppCmodeClient') client3 = self.library._get_api_client(vserver=fake.VSERVER2) self.assertNotEqual(client1, client3) client_kwargs['vserver'] = fake.VSERVER2 mock_client_constructor.assert_called_once_with(**client_kwargs) def test_get_licenses_both_protocols(self): self.library._have_cluster_creds = True self.mock_object(self.client, 'get_licenses', mock.Mock(return_value=fake.LICENSES)) result = self.library._get_licenses() self.assertSequenceEqual(fake.LICENSES, result) self.assertEqual(0, lib_base.LOG.error.call_count) self.assertEqual(1, lib_base.LOG.info.call_count) def test_get_licenses_one_protocol(self): self.library._have_cluster_creds = True licenses = list(fake.LICENSES) licenses.remove('nfs') self.mock_object(self.client, 'get_licenses', mock.Mock(return_value=licenses)) result = self.library._get_licenses() self.assertListEqual(licenses, result) self.assertEqual(0, lib_base.LOG.error.call_count) self.assertEqual(1, lib_base.LOG.info.call_count) def test_get_licenses_no_protocols(self): self.library._have_cluster_creds = True licenses = list(fake.LICENSES) licenses.remove('nfs') licenses.remove('cifs') self.mock_object(self.client, 'get_licenses', mock.Mock(return_value=licenses)) result = self.library._get_licenses() self.assertListEqual(licenses, result) self.assertEqual(1, lib_base.LOG.error.call_count) self.assertEqual(1, lib_base.LOG.info.call_count) def test_get_licenses_no_cluster_creds(self): self.library._have_cluster_creds = False result = self.library._get_licenses() self.assertListEqual([], result) self.assertEqual(1, lib_base.LOG.debug.call_count) def test_start_periodic_tasks(self): mock_update_ssc_info = self.mock_object(self.library, '_update_ssc_info') mock_handle_ems_logging = self.mock_object(self.library, '_handle_ems_logging') mock_handle_housekeeping_tasks = self.mock_object( self.library, '_handle_housekeeping_tasks') mock_ssc_periodic_task = mock.Mock() mock_ems_periodic_task = mock.Mock() mock_housekeeping_periodic_task = mock.Mock() mock_loopingcall = self.mock_object( loopingcall, 'FixedIntervalLoopingCall', mock.Mock(side_effect=[mock_ssc_periodic_task, mock_ems_periodic_task, mock_housekeeping_periodic_task])) self.library._start_periodic_tasks() self.assertTrue(mock_update_ssc_info.called) self.assertFalse(mock_handle_ems_logging.called) self.assertFalse(mock_housekeeping_periodic_task.called) mock_loopingcall.assert_has_calls( [mock.call(mock_update_ssc_info), mock.call(mock_handle_ems_logging), mock.call(mock_handle_housekeeping_tasks)]) self.assertTrue(mock_ssc_periodic_task.start.called) self.assertTrue(mock_ems_periodic_task.start.called) self.assertTrue(mock_housekeeping_periodic_task.start.called) def test_get_backend_share_name(self): result = self.library._get_backend_share_name(fake.SHARE_ID) expected = (fake.VOLUME_NAME_TEMPLATE % {'share_id': fake.SHARE_ID.replace('-', '_')}) self.assertEqual(expected, result) def test_get_backend_snapshot_name(self): result = self.library._get_backend_snapshot_name(fake.SNAPSHOT_ID) expected = 'share_snapshot_' + fake.SNAPSHOT_ID.replace('-', '_') self.assertEqual(expected, result) def test_get_backend_cg_snapshot_name(self): result = self.library._get_backend_cg_snapshot_name(fake.SNAPSHOT_ID) expected = 'share_cg_snapshot_' + fake.SNAPSHOT_ID.replace('-', '_') self.assertEqual(expected, result) def test_get_aggregate_space_cluster_creds(self): self.library._have_cluster_creds = True self.mock_object(self.library, 
'_find_matching_aggregates', mock.Mock(return_value=fake.AGGREGATES)) self.mock_object(self.library._client, 'get_cluster_aggregate_capacities', mock.Mock(return_value=fake.AGGREGATE_CAPACITIES)) result = self.library._get_aggregate_space() (self.library._client.get_cluster_aggregate_capacities. assert_called_once_with(fake.AGGREGATES)) self.assertDictEqual(fake.AGGREGATE_CAPACITIES, result) def test_get_aggregate_space_no_cluster_creds(self): self.library._have_cluster_creds = False self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=fake.AGGREGATES)) self.mock_object(self.library._client, 'get_vserver_aggregate_capacities', mock.Mock(return_value=fake.AGGREGATE_CAPACITIES)) result = self.library._get_aggregate_space() (self.library._client.get_vserver_aggregate_capacities. assert_called_once_with(fake.AGGREGATES)) self.assertDictEqual(fake.AGGREGATE_CAPACITIES, result) def test_check_snaprestore_license_notfound(self): licenses = list(fake.LICENSES) licenses.remove('snaprestore') self.mock_object(self.client, 'get_licenses', mock.Mock(return_value=licenses)) result = self.library._check_snaprestore_license() self.assertIs(False, result) def test_check_snaprestore_license_found(self): self.mock_object(self.client, 'get_licenses', mock.Mock(return_value=fake.LICENSES)) result = self.library._check_snaprestore_license() self.assertIs(True, result) def test_get_aggregate_node_cluster_creds(self): self.library._have_cluster_creds = True self.mock_object(self.library._client, 'get_node_for_aggregate', mock.Mock(return_value=fake.CLUSTER_NODE)) result = self.library._get_aggregate_node(fake.AGGREGATE) (self.library._client.get_node_for_aggregate. assert_called_once_with(fake.AGGREGATE)) self.assertEqual(fake.CLUSTER_NODE, result) def test_get_aggregate_node_no_cluster_creds(self): self.library._have_cluster_creds = False self.mock_object(self.library._client, 'get_node_for_aggregate') result = self.library._get_aggregate_node(fake.AGGREGATE) self.assertFalse(self.library._client.get_node_for_aggregate.called) self.assertIsNone(result) def test_get_default_filter_function(self): result = self.library.get_default_filter_function() self.assertEqual(self.library.DEFAULT_FILTER_FUNCTION, result) def test_get_default_goodness_function(self): result = self.library.get_default_goodness_function() self.assertEqual(self.library.DEFAULT_GOODNESS_FUNCTION, result) def test_get_share_stats(self): mock_get_pools = self.mock_object( self.library, '_get_pools', mock.Mock(return_value=fake.POOLS)) result = self.library.get_share_stats(filter_function='filter', goodness_function='goodness') expected = { 'share_backend_name': fake.BACKEND_NAME, 'driver_name': fake.DRIVER_NAME, 'vendor_name': 'NetApp', 'driver_version': '1.0', 'netapp_storage_family': 'ontap_cluster', 'storage_protocol': 'NFS_CIFS', 'pools': fake.POOLS, 'share_group_stats': {'consistent_snapshot_support': 'host'}, } self.assertDictEqual(expected, result) mock_get_pools.assert_called_once_with(filter_function='filter', goodness_function='goodness') def test_get_share_stats_with_replication(self): self.library.configuration.replication_domain = "fake_domain" mock_get_pools = self.mock_object( self.library, '_get_pools', mock.Mock(return_value=fake.POOLS)) result = self.library.get_share_stats(filter_function='filter', goodness_function='goodness') expected = { 'share_backend_name': fake.BACKEND_NAME, 'driver_name': fake.DRIVER_NAME, 'vendor_name': 'NetApp', 'driver_version': '1.0', 'netapp_storage_family': 
'ontap_cluster', 'storage_protocol': 'NFS_CIFS', 'replication_type': 'dr', 'replication_domain': 'fake_domain', 'pools': fake.POOLS, 'share_group_stats': {'consistent_snapshot_support': 'host'}, } self.assertDictEqual(expected, result) mock_get_pools.assert_called_once_with(filter_function='filter', goodness_function='goodness') def test_get_share_server_pools(self): self.mock_object(self.library, '_get_pools', mock.Mock(return_value=fake.POOLS)) result = self.library.get_share_server_pools(fake.SHARE_SERVER) self.assertListEqual(fake.POOLS, result) def test_get_pools(self): self.mock_object( self.library, '_get_aggregate_space', mock.Mock(return_value=fake.AGGREGATE_CAPACITIES)) self.library._have_cluster_creds = True self.library._cluster_info = fake.CLUSTER_INFO self.library._ssc_stats = fake.SSC_INFO self.library._perf_library.get_node_utilization_for_pool = ( mock.Mock(side_effect=[30.0, 42.0])) self.mock_object(self.library, '_check_snaprestore_license', mock.Mock(return_value=True)) result = self.library._get_pools(filter_function='filter', goodness_function='goodness') self.assertListEqual(fake.POOLS, result) def test_get_pools_vserver_creds(self): self.mock_object( self.library, '_get_aggregate_space', mock.Mock(return_value=fake.AGGREGATE_CAPACITIES_VSERVER_CREDS)) self.library._have_cluster_creds = False self.library._cluster_info = fake.CLUSTER_INFO self.library._ssc_stats = fake.SSC_INFO_VSERVER_CREDS self.library._perf_library.get_node_utilization_for_pool = ( mock.Mock(side_effect=[50.0, 50.0])) self.mock_object(self.library, '_check_snaprestore_license', mock.Mock(return_value=True)) result = self.library._get_pools() self.assertListEqual(fake.POOLS_VSERVER_CREDS, result) def test_handle_ems_logging(self): self.mock_object(self.library, '_build_ems_log_message_0', mock.Mock(return_value=fake.EMS_MESSAGE_0)) self.mock_object(self.library, '_build_ems_log_message_1', mock.Mock(return_value=fake.EMS_MESSAGE_1)) self.library._handle_ems_logging() self.library._client.send_ems_log_message.assert_has_calls([ mock.call(fake.EMS_MESSAGE_0), mock.call(fake.EMS_MESSAGE_1), ]) def test_build_ems_log_message_0(self): self.mock_object(socket, 'gethostname', mock.Mock(return_value=fake.HOST_NAME)) result = self.library._build_ems_log_message_0() self.assertDictEqual(fake.EMS_MESSAGE_0, result) def test_build_ems_log_message_1(self): pool_info = { 'pools': { 'vserver': 'fake_vserver', 'aggregates': ['aggr1', 'aggr2'], }, } self.mock_object(socket, 'gethostname', mock.Mock(return_value=fake.HOST_NAME)) self.mock_object(self.library, '_get_ems_pool_info', mock.Mock(return_value=pool_info)) result = self.library._build_ems_log_message_1() self.assertDictEqual(pool_info, json.loads(result['event-description'])) result['event-description'] = '' self.assertDictEqual(fake.EMS_MESSAGE_1, result) def test_get_ems_pool_info(self): self.assertRaises(NotImplementedError, self.library._get_ems_pool_info) def test_find_matching_aggregates(self): self.assertRaises(NotImplementedError, self.library._find_matching_aggregates) @ddt.data(('NFS', nfs_cmode.NetAppCmodeNFSHelper), ('nfs', nfs_cmode.NetAppCmodeNFSHelper), ('CIFS', cifs_cmode.NetAppCmodeCIFSHelper), ('cifs', cifs_cmode.NetAppCmodeCIFSHelper)) @ddt.unpack def test_get_helper(self, protocol, helper_type): fake_share = fake.SHARE.copy() fake_share['share_proto'] = protocol mock_check_license_for_protocol = self.mock_object( self.library, '_check_license_for_protocol') result = self.library._get_helper(fake_share) 
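        # Helper selection is protocol-driven: NFS maps to the cDOT NFS
        # helper and CIFS to the cDOT CIFS helper, case-insensitively, and
        # the license check is invoked with the lower-cased protocol name.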
mock_check_license_for_protocol.assert_called_once_with( protocol.lower()) self.assertEqual(helper_type, type(result)) def test_get_helper_invalid_protocol(self): fake_share = fake.SHARE.copy() fake_share['share_proto'] = 'iSCSI' self.mock_object(self.library, '_check_license_for_protocol') self.assertRaises(exception.NetAppException, self.library._get_helper, fake_share) def test_check_license_for_protocol_no_cluster_creds(self): self.library._have_cluster_creds = False result = self.library._check_license_for_protocol('fake_protocol') self.assertIsNone(result) def test_check_license_for_protocol_have_license(self): self.library._have_cluster_creds = True self.library._licenses = ['base', 'fake_protocol'] result = self.library._check_license_for_protocol('FAKE_PROTOCOL') self.assertIsNone(result) def test_check_license_for_protocol_newly_licensed_protocol(self): self.library._have_cluster_creds = True self.mock_object(self.library, '_get_licenses', mock.Mock(return_value=['base', 'nfs'])) self.library._licenses = ['base'] result = self.library._check_license_for_protocol('NFS') self.assertIsNone(result) self.assertTrue(self.library._get_licenses.called) def test_check_license_for_protocol_unlicensed_protocol(self): self.library._have_cluster_creds = True self.mock_object(self.library, '_get_licenses', mock.Mock(return_value=['base'])) self.library._licenses = ['base'] self.assertRaises(exception.NetAppException, self.library._check_license_for_protocol, 'NFS') def test_get_pool_has_pool(self): result = self.library.get_pool(fake.SHARE) self.assertEqual(fake.POOL_NAME, result) self.assertFalse(self.client.get_aggregate_for_volume.called) def test_get_pool_no_pool(self): fake_share = copy.deepcopy(fake.SHARE) fake_share['host'] = '%(host)s@%(backend)s' % { 'host': fake.HOST_NAME, 'backend': fake.BACKEND_NAME} self.client.get_aggregate_for_volume.return_value = fake.POOL_NAME result = self.library.get_pool(fake_share) self.assertEqual(fake.POOL_NAME, result) self.assertTrue(self.client.get_aggregate_for_volume.called) def test_get_pool_raises(self): fake_share = copy.deepcopy(fake.SHARE) fake_share['host'] = '%(host)s@%(backend)s' % { 'host': fake.HOST_NAME, 'backend': fake.BACKEND_NAME} self.client.get_aggregate_for_volume.side_effect = ( exception.NetAppException) self.assertRaises(exception.NetAppException, self.library.get_pool, fake_share) def test_create_share(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_allocate_container = self.mock_object(self.library, '_allocate_container') mock_create_export = self.mock_object( self.library, '_create_export', mock.Mock(return_value='fake_export_location')) result = self.library.create_share(self.context, fake.SHARE, share_server=fake.SHARE_SERVER) mock_allocate_container.assert_called_once_with(fake.SHARE, fake.VSERVER1, vserver_client) mock_create_export.assert_called_once_with(fake.SHARE, fake.SHARE_SERVER, fake.VSERVER1, vserver_client) self.assertEqual('fake_export_location', result) def test_create_share_from_snapshot(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_allocate_container_from_snapshot = self.mock_object( self.library, '_allocate_container_from_snapshot') mock_create_export = self.mock_object( self.library, '_create_export', mock.Mock(return_value='fake_export_location')) result = self.library.create_share_from_snapshot( self.context, 
fake.SHARE, fake.SNAPSHOT, share_server=fake.SHARE_SERVER, parent_share=fake.SHARE) mock_allocate_container_from_snapshot.assert_called_once_with( fake.SHARE, fake.SNAPSHOT, fake.VSERVER1, vserver_client) mock_create_export.assert_called_once_with(fake.SHARE, fake.SHARE_SERVER, fake.VSERVER1, vserver_client) self.assertEqual('fake_export_location', result) def _setup_mocks_for_create_share_from_snapshot( self, allocate_attr=None, dest_cluster=fake.CLUSTER_NAME): class FakeDBObj(dict): def to_dict(self): return self if allocate_attr is None: allocate_attr = mock.Mock() self.src_vserver_client = mock.Mock() self.mock_dm_session = mock.Mock() self.fake_share = FakeDBObj(fake.SHARE) self.fake_share_server = FakeDBObj(fake.SHARE_SERVER) self.mock_dm_constr = self.mock_object( data_motion, "DataMotionSession", mock.Mock(return_value=self.mock_dm_session)) self.mock_dm_backend = self.mock_object( self.mock_dm_session, 'get_backend_info_for_share', mock.Mock(return_value=(None, fake.VSERVER1, fake.BACKEND_NAME))) self.mock_dm_get_src_client = self.mock_object( data_motion, 'get_client_for_backend', mock.Mock(return_value=self.src_vserver_client)) self.mock_get_src_cluster = self.mock_object( self.src_vserver_client, 'get_cluster_name', mock.Mock(return_value=fake.CLUSTER_NAME)) self.dest_vserver_client = mock.Mock() self.mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER2, self.dest_vserver_client))) self.mock_get_dest_cluster = self.mock_object( self.dest_vserver_client, 'get_cluster_name', mock.Mock(return_value=dest_cluster)) self.mock_allocate_container_from_snapshot = self.mock_object( self.library, '_allocate_container_from_snapshot', allocate_attr) self.mock_allocate_container = self.mock_object( self.library, '_allocate_container') self.mock_dm_create_snapmirror = self.mock_object( self.mock_dm_session, 'create_snapmirror') self.mock_storage_update = self.mock_object( self.library.private_storage, 'update') self.mock_object(self.library, '_have_cluster_creds', mock.Mock(return_value=True)) # Parent share on MANILA_HOST_2 self.parent_share = copy.copy(fake.SHARE) self.parent_share['share_server'] = fake.SHARE_SERVER_2 self.parent_share['host'] = fake.MANILA_HOST_NAME_2 self.parent_share_server = {} ss_keys = ['id', 'identifier', 'backend_details', 'host'] for key in ss_keys: self.parent_share_server[key] = ( self.parent_share['share_server'].get(key, None)) self.temp_src_share = { 'id': self.fake_share['id'], 'host': self.parent_share['host'], 'share_server': self.parent_share_server or None } @ddt.data(fake.CLUSTER_NAME, fake.CLUSTER_NAME_2) def test_create_share_from_snapshot_another_host(self, dest_cluster): self._setup_mocks_for_create_share_from_snapshot( dest_cluster=dest_cluster) result = self.library.create_share_from_snapshot( self.context, self.fake_share, fake.SNAPSHOT, share_server=self.fake_share_server, parent_share=self.parent_share) self.fake_share['share_server'] = self.fake_share_server self.mock_dm_constr.assert_called_once() self.mock_dm_backend.assert_called_once_with(self.parent_share) self.mock_dm_get_src_client.assert_called_once_with( fake.BACKEND_NAME, vserver_name=fake.VSERVER1) self.mock_get_src_cluster.assert_called_once() self.mock_get_vserver.assert_called_once_with(self.fake_share_server) self.mock_get_dest_cluster.assert_called_once() if dest_cluster != fake.CLUSTER_NAME: self.mock_allocate_container_from_snapshot.assert_called_once_with( self.fake_share, fake.SNAPSHOT, fake.VSERVER1, 
self.src_vserver_client, split=False) self.mock_allocate_container.assert_called_once_with( self.fake_share, fake.VSERVER2, self.dest_vserver_client, replica=True) self.mock_dm_create_snapmirror.assert_called_once() self.temp_src_share['replica_state'] = ( constants.REPLICA_STATE_ACTIVE) state = self.library.STATE_SNAPMIRROR_DATA_COPYING else: self.mock_allocate_container_from_snapshot.assert_called_once_with( self.fake_share, fake.SNAPSHOT, fake.VSERVER1, self.src_vserver_client, split=True) state = self.library.STATE_SPLITTING_VOLUME_CLONE self.temp_src_share['internal_state'] = state self.temp_src_share['status'] = constants.STATUS_ACTIVE str_temp_src_share = json.dumps(self.temp_src_share) self.mock_storage_update.assert_called_once_with( self.fake_share['id'], { 'source_share': str_temp_src_share }) expected_return = {'status': constants.STATUS_CREATING_FROM_SNAPSHOT} self.assertEqual(expected_return, result) def test_create_share_from_snapshot_another_host_driver_error(self): self._setup_mocks_for_create_share_from_snapshot( allocate_attr=mock.Mock(side_effect=exception.NetAppException)) mock_delete_snapmirror = self.mock_object( self.mock_dm_session, 'delete_snapmirror') mock_get_backend_shr_name = self.mock_object( self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) mock_share_exits = self.mock_object( self.library, '_share_exists', mock.Mock(return_value=True)) mock_deallocate_container = self.mock_object( self.library, '_deallocate_container') self.assertRaises(exception.NetAppException, self.library.create_share_from_snapshot, self.context, self.fake_share, fake.SNAPSHOT, share_server=self.fake_share_server, parent_share=self.parent_share) self.fake_share['share_server'] = self.fake_share_server self.mock_dm_constr.assert_called_once() self.mock_dm_backend.assert_called_once_with(self.parent_share) self.mock_dm_get_src_client.assert_called_once_with( fake.BACKEND_NAME, vserver_name=fake.VSERVER1) self.mock_get_src_cluster.assert_called_once() self.mock_get_vserver.assert_called_once_with(self.fake_share_server) self.mock_get_dest_cluster.assert_called_once() self.mock_allocate_container_from_snapshot.assert_called_once_with( self.fake_share, fake.SNAPSHOT, fake.VSERVER1, self.src_vserver_client, split=True) mock_delete_snapmirror.assert_called_once_with(self.temp_src_share, self.fake_share) mock_get_backend_shr_name.assert_called_once_with( self.fake_share['id']) mock_share_exits.assert_called_once_with(fake.SHARE_NAME, self.src_vserver_client) mock_deallocate_container.assert_called_once_with( fake.SHARE_NAME, self.src_vserver_client) def test__update_create_from_snapshot_status(self): fake_result = mock.Mock() mock_pvt_storage_get = self.mock_object( self.library.private_storage, 'get', mock.Mock(return_value=fake.SHARE)) mock__create_continue = self.mock_object( self.library, '_create_from_snapshot_continue', mock.Mock(return_value=fake_result)) result = self.library._update_create_from_snapshot_status( fake.SHARE, fake.SHARE_SERVER) mock_pvt_storage_get.assert_called_once_with(fake.SHARE['id'], 'source_share') mock__create_continue.assert_called_once_with(fake.SHARE, fake.SHARE_SERVER) self.assertEqual(fake_result, result) def test__update_create_from_snapshot_status_missing_source_share(self): mock_pvt_storage_get = self.mock_object( self.library.private_storage, 'get', mock.Mock(return_value=None)) expected_result = {'status': constants.STATUS_ERROR} result = self.library._update_create_from_snapshot_status( fake.SHARE, fake.SHARE_SERVER) 
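        # Happy path: whatever status _create_from_snapshot_continue reports
        # is passed straight through as the share's updated status.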
mock_pvt_storage_get.assert_called_once_with(fake.SHARE['id'], 'source_share') self.assertEqual(expected_result, result) def test__update_create_from_snapshot_status_driver_error(self): fake_src_share = { 'id': fake.SHARE['id'], 'host': fake.SHARE['host'], 'internal_state': 'fake_internal_state', } copy_fake_src_share = copy.deepcopy(fake_src_share) src_vserver_client = mock.Mock() mock_dm_session = mock.Mock() mock_pvt_storage_get = self.mock_object( self.library.private_storage, 'get', mock.Mock(return_value=json.dumps(copy_fake_src_share))) mock__create_continue = self.mock_object( self.library, '_create_from_snapshot_continue', mock.Mock(side_effect=exception.NetAppException)) mock_dm_constr = self.mock_object( data_motion, "DataMotionSession", mock.Mock(return_value=mock_dm_session)) mock_delete_snapmirror = self.mock_object( mock_dm_session, 'delete_snapmirror') mock_dm_backend = self.mock_object( mock_dm_session, 'get_backend_info_for_share', mock.Mock(return_value=(None, fake.VSERVER1, fake.BACKEND_NAME))) mock_dm_get_src_client = self.mock_object( data_motion, 'get_client_for_backend', mock.Mock(return_value=src_vserver_client)) mock_get_backend_shr_name = self.mock_object( self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) mock_share_exits = self.mock_object( self.library, '_share_exists', mock.Mock(return_value=True)) mock_deallocate_container = self.mock_object( self.library, '_deallocate_container') mock_pvt_storage_delete = self.mock_object( self.library.private_storage, 'delete') result = self.library._update_create_from_snapshot_status( fake.SHARE, fake.SHARE_SERVER) expected_result = {'status': constants.STATUS_ERROR} mock_pvt_storage_get.assert_called_once_with(fake.SHARE['id'], 'source_share') mock__create_continue.assert_called_once_with(fake.SHARE, fake.SHARE_SERVER) mock_dm_constr.assert_called_once() mock_delete_snapmirror.assert_called_once_with(fake_src_share, fake.SHARE) mock_dm_backend.assert_called_once_with(fake_src_share) mock_dm_get_src_client.assert_called_once_with( fake.BACKEND_NAME, vserver_name=fake.VSERVER1) mock_get_backend_shr_name.assert_called_once_with(fake_src_share['id']) mock_share_exits.assert_called_once_with(fake.SHARE_NAME, src_vserver_client) mock_deallocate_container.assert_called_once_with(fake.SHARE_NAME, src_vserver_client) mock_pvt_storage_delete.assert_called_once_with(fake.SHARE['id']) self.assertEqual(expected_result, result) def _setup_mocks_for_create_from_snapshot_continue( self, src_host=fake.MANILA_HOST_NAME, dest_host=fake.MANILA_HOST_NAME, split_completed_result=True, move_completed_result=True, share_internal_state='fake_state', replica_state='in_sync'): self.fake_export_location = 'fake_export_location' self.fake_src_share = { 'id': fake.SHARE['id'], 'host': src_host, 'internal_state': share_internal_state, } self.copy_fake_src_share = copy.deepcopy(self.fake_src_share) src_pool = src_host.split('#')[1] dest_pool = dest_host.split('#')[1] self.src_vserver_client = mock.Mock() self.dest_vserver_client = mock.Mock() self.mock_dm_session = mock.Mock() self.mock_dm_constr = self.mock_object( data_motion, "DataMotionSession", mock.Mock(return_value=self.mock_dm_session)) self.mock_pvt_storage_get = self.mock_object( self.library.private_storage, 'get', mock.Mock(return_value=json.dumps(self.copy_fake_src_share))) self.mock_dm_backend = self.mock_object( self.mock_dm_session, 'get_backend_info_for_share', mock.Mock(return_value=(None, fake.VSERVER1, fake.BACKEND_NAME))) self.mock_extract_host = 
self.mock_object( share_utils, 'extract_host', mock.Mock(side_effect=[src_pool, dest_pool])) self.mock_dm_get_src_client = self.mock_object( data_motion, 'get_client_for_backend', mock.Mock(return_value=self.src_vserver_client)) self.mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER2, self.dest_vserver_client))) self.mock_split_completed = self.mock_object( self.library, '_check_volume_clone_split_completed', mock.Mock(return_value=split_completed_result)) self.mock_rehost_vol = self.mock_object( self.library, '_rehost_and_mount_volume') self.mock_move_vol = self.mock_object(self.library, '_move_volume_after_splitting') self.mock_move_completed = self.mock_object( self.library, '_check_volume_move_completed', mock.Mock(return_value=move_completed_result)) self.mock_update_rep_state = self.mock_object( self.library, 'update_replica_state', mock.Mock(return_value=replica_state) ) self.mock_update_snapmirror = self.mock_object( self.mock_dm_session, 'update_snapmirror') self.mock_break_snapmirror = self.mock_object( self.mock_dm_session, 'break_snapmirror') self.mock_delete_snapmirror = self.mock_object( self.mock_dm_session, 'delete_snapmirror') self.mock_get_backend_shr_name = self.mock_object( self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock__delete_share = self.mock_object(self.library, '_delete_share') self.mock_set_vol_size_fixes = self.mock_object( self.dest_vserver_client, 'set_volume_filesys_size_fixed') self.mock_create_export = self.mock_object( self.library, '_create_export', mock.Mock(return_value=self.fake_export_location)) self.mock_pvt_storage_update = self.mock_object( self.library.private_storage, 'update') self.mock_pvt_storage_delete = self.mock_object( self.library.private_storage, 'delete') self.mock_get_extra_specs_qos = self.mock_object( share_types, 'get_extra_specs_from_share', mock.Mock(return_value=fake.EXTRA_SPEC_WITH_QOS)) self.mock__get_provisioning_opts = self.mock_object( self.library, '_get_provisioning_options', mock.Mock(return_value=copy.deepcopy(fake.PROVISIONING_OPTIONS)) ) self.mock_modify_create_qos = self.mock_object( self.library, '_modify_or_create_qos_for_existing_share', mock.Mock(return_value=fake.QOS_POLICY_GROUP_NAME)) self.mock_modify_vol = self.mock_object(self.dest_vserver_client, 'modify_volume') self.mock_get_backend_qos_name = self.mock_object( self.library, '_get_backend_qos_policy_group_name', mock.Mock(return_value=fake.QOS_POLICY_GROUP_NAME)) self.mock_mark_qos_deletion = self.mock_object( self.src_vserver_client, 'mark_qos_policy_group_for_deletion') @ddt.data(fake.MANILA_HOST_NAME, fake.MANILA_HOST_NAME_2) def test__create_from_snapshot_continue_state_splitting(self, src_host): self._setup_mocks_for_create_from_snapshot_continue( src_host=src_host, share_internal_state=self.library.STATE_SPLITTING_VOLUME_CLONE) result = self.library._create_from_snapshot_continue(fake.SHARE, fake.SHARE_SERVER) fake.SHARE['share_server'] = fake.SHARE_SERVER self.mock_pvt_storage_get.assert_called_once_with(fake.SHARE['id'], 'source_share') self.mock_dm_backend.assert_called_once_with(self.fake_src_share) self.mock_extract_host.assert_has_calls([ mock.call(self.fake_src_share['host'], level='pool'), mock.call(fake.SHARE['host'], level='pool'), ]) self.mock_dm_get_src_client.assert_called_once_with( fake.BACKEND_NAME, vserver_name=fake.VSERVER1) self.mock_get_vserver.assert_called_once_with(fake.SHARE_SERVER) 
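        # Once the clone split has completed, the source QoS policy group is
        # marked for deletion and the volume is rehosted from the source to
        # the destination vserver; a source on a different pool additionally
        # triggers a volume move, otherwise the share is finished in place.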
self.mock_split_completed.assert_called_once_with( self.fake_src_share, self.src_vserver_client) self.mock_get_backend_qos_name.assert_called_once_with(fake.SHARE_ID) self.mock_mark_qos_deletion.assert_called_once_with( fake.QOS_POLICY_GROUP_NAME) self.mock_rehost_vol.assert_called_once_with( fake.SHARE, fake.VSERVER1, self.src_vserver_client, fake.VSERVER2, self.dest_vserver_client) if src_host != fake.MANILA_HOST_NAME: expected_result = { 'status': constants.STATUS_CREATING_FROM_SNAPSHOT } self.mock_move_vol.assert_called_once_with( self.fake_src_share, fake.SHARE, fake.SHARE_SERVER, cutover_action='defer') self.fake_src_share['internal_state'] = ( self.library.STATE_MOVING_VOLUME) self.mock_pvt_storage_update.assert_called_once_with( fake.SHARE['id'], {'source_share': json.dumps(self.fake_src_share)} ) self.assertEqual(expected_result, result) else: self.mock_get_extra_specs_qos.assert_called_once_with(fake.SHARE) self.mock__get_provisioning_opts.assert_called_once_with( fake.EXTRA_SPEC_WITH_QOS) self.mock_modify_create_qos.assert_called_once_with( fake.SHARE, fake.EXTRA_SPEC_WITH_QOS, fake.VSERVER2, self.dest_vserver_client) self.mock_get_backend_shr_name.assert_called_once_with( fake.SHARE_ID) self.mock_modify_vol.assert_called_once_with( fake.POOL_NAME, fake.SHARE_NAME, **fake.PROVISIONING_OPTIONS_WITH_QOS) self.mock_pvt_storage_delete.assert_called_once_with( fake.SHARE['id']) self.mock_create_export.assert_called_once_with( fake.SHARE, fake.SHARE_SERVER, fake.VSERVER2, self.dest_vserver_client, clear_current_export_policy=False) expected_result = { 'status': constants.STATUS_AVAILABLE, 'export_locations': self.fake_export_location, } self.assertEqual(expected_result, result) @ddt.data(True, False) def test__create_from_snapshot_continue_state_moving(self, move_completed): self._setup_mocks_for_create_from_snapshot_continue( share_internal_state=self.library.STATE_MOVING_VOLUME, move_completed_result=move_completed) result = self.library._create_from_snapshot_continue(fake.SHARE, fake.SHARE_SERVER) expect_result = { 'status': constants.STATUS_CREATING_FROM_SNAPSHOT } fake.SHARE['share_server'] = fake.SHARE_SERVER self.mock_pvt_storage_get.assert_called_once_with(fake.SHARE['id'], 'source_share') self.mock_dm_backend.assert_called_once_with(self.fake_src_share) self.mock_extract_host.assert_has_calls([ mock.call(self.fake_src_share['host'], level='pool'), mock.call(fake.SHARE['host'], level='pool'), ]) self.mock_dm_get_src_client.assert_called_once_with( fake.BACKEND_NAME, vserver_name=fake.VSERVER1) self.mock_get_vserver.assert_called_once_with(fake.SHARE_SERVER) self.mock_move_completed.assert_called_once_with( fake.SHARE, fake.SHARE_SERVER) if move_completed: expect_result['status'] = constants.STATUS_AVAILABLE self.mock_pvt_storage_delete.assert_called_once_with( fake.SHARE['id']) self.mock_create_export.assert_called_once_with( fake.SHARE, fake.SHARE_SERVER, fake.VSERVER2, self.dest_vserver_client, clear_current_export_policy=False) expect_result['export_locations'] = self.fake_export_location self.assertEqual(expect_result, result) else: self.mock_pvt_storage_update.assert_called_once_with( fake.SHARE['id'], {'source_share': json.dumps(self.fake_src_share)} ) self.assertEqual(expect_result, result) @ddt.data('in_sync', 'out_of_sync') def test__create_from_snapshot_continue_state_snapmirror(self, replica_state): self._setup_mocks_for_create_from_snapshot_continue( share_internal_state=self.library.STATE_SNAPMIRROR_DATA_COPYING, replica_state=replica_state) result = 
self.library._create_from_snapshot_continue(fake.SHARE, fake.SHARE_SERVER) expect_result = { 'status': constants.STATUS_CREATING_FROM_SNAPSHOT } fake.SHARE['share_server'] = fake.SHARE_SERVER self.mock_pvt_storage_get.assert_called_once_with(fake.SHARE['id'], 'source_share') self.mock_dm_backend.assert_called_once_with(self.fake_src_share) self.mock_extract_host.assert_has_calls([ mock.call(self.fake_src_share['host'], level='pool'), mock.call(fake.SHARE['host'], level='pool'), ]) self.mock_dm_get_src_client.assert_called_once_with( fake.BACKEND_NAME, vserver_name=fake.VSERVER1) self.mock_get_vserver.assert_called_once_with(fake.SHARE_SERVER) self.mock_update_rep_state.assert_called_once_with( None, [self.fake_src_share], fake.SHARE, [], [], fake.SHARE_SERVER) if replica_state == constants.REPLICA_STATE_IN_SYNC: self.mock_update_snapmirror.assert_called_once_with( self.fake_src_share, fake.SHARE) self.mock_break_snapmirror.assert_called_once_with( self.fake_src_share, fake.SHARE) self.mock_delete_snapmirror.assert_called_once_with( self.fake_src_share, fake.SHARE) self.mock_get_backend_shr_name.assert_has_calls( [mock.call(self.fake_src_share['id']), mock.call(fake.SHARE_ID)]) self.mock__delete_share.assert_called_once_with( self.fake_src_share, self.src_vserver_client, remove_export=False) self.mock_set_vol_size_fixes.assert_called_once_with( fake.SHARE_NAME, filesys_size_fixed=False) self.mock_get_extra_specs_qos.assert_called_once_with(fake.SHARE) self.mock__get_provisioning_opts.assert_called_once_with( fake.EXTRA_SPEC_WITH_QOS) self.mock_modify_create_qos.assert_called_once_with( fake.SHARE, fake.EXTRA_SPEC_WITH_QOS, fake.VSERVER2, self.dest_vserver_client) self.mock_modify_vol.assert_called_once_with( fake.POOL_NAME, fake.SHARE_NAME, **fake.PROVISIONING_OPTIONS_WITH_QOS) expect_result['status'] = constants.STATUS_AVAILABLE self.mock_pvt_storage_delete.assert_called_once_with( fake.SHARE['id']) self.mock_create_export.assert_called_once_with( fake.SHARE, fake.SHARE_SERVER, fake.VSERVER2, self.dest_vserver_client, clear_current_export_policy=False) expect_result['export_locations'] = self.fake_export_location self.assertEqual(expect_result, result) elif replica_state not in [constants.STATUS_ERROR, None]: self.mock_pvt_storage_update.assert_called_once_with( fake.SHARE['id'], {'source_share': json.dumps(self.fake_src_share)} ) self.assertEqual(expect_result, result) def test__create_from_snapshot_continue_state_unknown(self): self._setup_mocks_for_create_from_snapshot_continue( share_internal_state='unknown_state') self.assertRaises(exception.NetAppException, self.library._create_from_snapshot_continue, fake.SHARE, fake.SHARE_SERVER) self.mock_pvt_storage_delete.assert_called_once_with(fake.SHARE_ID) @ddt.data(False, True) def test_allocate_container(self, hide_snapdir): provisioning_options = copy.deepcopy(fake.PROVISIONING_OPTIONS) provisioning_options['hide_snapdir'] = hide_snapdir self.mock_object(self.library, '_get_backend_share_name', mock.Mock( return_value=fake.SHARE_NAME)) self.mock_object(share_utils, 'extract_host', mock.Mock( return_value=fake.POOL_NAME)) mock_get_provisioning_opts = self.mock_object( self.library, '_get_provisioning_options_for_share', mock.Mock(return_value=provisioning_options)) vserver_client = mock.Mock() self.library._allocate_container(fake.SHARE_INSTANCE, fake.VSERVER1, vserver_client) mock_get_provisioning_opts.assert_called_once_with( fake.SHARE_INSTANCE, fake.VSERVER1, vserver_client=vserver_client, replica=False) 
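        # The FlexVol backing the share is expected to be created on the
        # pool extracted from the share host, using the provisioning
        # options resolved above.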
vserver_client.create_volume.assert_called_once_with( fake.POOL_NAME, fake.SHARE_NAME, fake.SHARE['size'], thin_provisioned=True, snapshot_policy='default', language='en-US', dedup_enabled=True, split=True, encrypt=False, compression_enabled=False, max_files=5000, snapshot_reserve=8) if hide_snapdir: vserver_client.set_volume_snapdir_access.assert_called_once_with( fake.SHARE_NAME, hide_snapdir) else: vserver_client.set_volume_snapdir_access.assert_not_called() def test_remap_standard_boolean_extra_specs(self): extra_specs = copy.deepcopy(fake.OVERLAPPING_EXTRA_SPEC) result = self.library._remap_standard_boolean_extra_specs(extra_specs) self.assertDictEqual(fake.REMAPPED_OVERLAPPING_EXTRA_SPEC, result) def test_allocate_container_as_replica(self): self.mock_object(self.library, '_get_backend_share_name', mock.Mock( return_value=fake.SHARE_NAME)) self.mock_object(share_utils, 'extract_host', mock.Mock( return_value=fake.POOL_NAME)) mock_get_provisioning_opts = self.mock_object( self.library, '_get_provisioning_options_for_share', mock.Mock(return_value=copy.deepcopy(fake.PROVISIONING_OPTIONS))) vserver_client = mock.Mock() self.library._allocate_container(fake.SHARE_INSTANCE, fake.VSERVER1, vserver_client, replica=True) mock_get_provisioning_opts.assert_called_once_with( fake.SHARE_INSTANCE, fake.VSERVER1, vserver_client=vserver_client, replica=True) vserver_client.create_volume.assert_called_once_with( fake.POOL_NAME, fake.SHARE_NAME, fake.SHARE['size'], thin_provisioned=True, snapshot_policy='default', language='en-US', dedup_enabled=True, split=True, compression_enabled=False, max_files=5000, encrypt=False, snapshot_reserve=8, volume_type='dp') def test_allocate_container_no_pool_name(self): self.mock_object(self.library, '_get_backend_share_name', mock.Mock( return_value=fake.SHARE_NAME)) self.mock_object(share_utils, 'extract_host', mock.Mock( return_value=None)) self.mock_object(self.library, '_check_extra_specs_validity') self.mock_object(self.library, '_get_provisioning_options') vserver_client = mock.Mock() self.assertRaises(exception.InvalidHost, self.library._allocate_container, fake.SHARE_INSTANCE, fake.VSERVER1, vserver_client) self.library._get_backend_share_name.assert_called_once_with( fake.SHARE_INSTANCE['id']) share_utils.extract_host.assert_called_once_with( fake.SHARE_INSTANCE['host'], level='pool') self.assertEqual(0, self.library._check_extra_specs_validity.call_count) self.assertEqual(0, self.library._get_provisioning_options.call_count) def test_check_extra_specs_validity(self): boolean_extra_spec_keys = list( self.library.BOOLEAN_QUALIFIED_EXTRA_SPECS_MAP) mock_bool_check = self.mock_object( self.library, '_check_boolean_extra_specs_validity') mock_string_check = self.mock_object( self.library, '_check_string_extra_specs_validity') self.library._check_extra_specs_validity( fake.SHARE_INSTANCE, fake.EXTRA_SPEC) mock_bool_check.assert_called_once_with( fake.SHARE_INSTANCE, fake.EXTRA_SPEC, boolean_extra_spec_keys) mock_string_check.assert_called_once_with( fake.SHARE_INSTANCE, fake.EXTRA_SPEC) def test_check_extra_specs_validity_empty_spec(self): result = self.library._check_extra_specs_validity( fake.SHARE_INSTANCE, fake.EMPTY_EXTRA_SPEC) self.assertIsNone(result) def test_check_extra_specs_validity_invalid_value(self): self.assertRaises( exception.Invalid, self.library._check_extra_specs_validity, fake.SHARE_INSTANCE, fake.INVALID_EXTRA_SPEC) def test_check_string_extra_specs_validity(self): result = self.library._check_string_extra_specs_validity( 
fake.SHARE_INSTANCE, fake.EXTRA_SPEC) self.assertIsNone(result) def test_check_string_extra_specs_validity_empty_spec(self): result = self.library._check_string_extra_specs_validity( fake.SHARE_INSTANCE, fake.EMPTY_EXTRA_SPEC) self.assertIsNone(result) def test_check_string_extra_specs_validity_invalid_value(self): self.assertRaises( exception.NetAppException, self.library._check_string_extra_specs_validity, fake.SHARE_INSTANCE, fake.INVALID_MAX_FILE_EXTRA_SPEC) def test_check_boolean_extra_specs_validity_invalid_value(self): self.assertRaises( exception.Invalid, self.library._check_boolean_extra_specs_validity, fake.SHARE_INSTANCE, fake.INVALID_EXTRA_SPEC, list(self.library.BOOLEAN_QUALIFIED_EXTRA_SPECS_MAP)) def test_check_extra_specs_validity_invalid_combination(self): self.assertRaises( exception.Invalid, self.library._check_boolean_extra_specs_validity, fake.SHARE_INSTANCE, fake.INVALID_EXTRA_SPEC_COMBO, list(self.library.BOOLEAN_QUALIFIED_EXTRA_SPECS_MAP)) @ddt.data({'extra_specs': fake.EXTRA_SPEC, 'is_replica': False}, {'extra_specs': fake.EXTRA_SPEC_WITH_QOS, 'is_replica': True}, {'extra_specs': fake.EXTRA_SPEC, 'is_replica': False}, {'extra_specs': fake.EXTRA_SPEC_WITH_QOS, 'is_replica': True}) @ddt.unpack def test_get_provisioning_options_for_share(self, extra_specs, is_replica): qos = True if fake.QOS_EXTRA_SPEC in extra_specs else False vserver_client = mock.Mock() mock_get_extra_specs_from_share = self.mock_object( share_types, 'get_extra_specs_from_share', mock.Mock(return_value=extra_specs)) mock_remap_standard_boolean_extra_specs = self.mock_object( self.library, '_remap_standard_boolean_extra_specs', mock.Mock(return_value=extra_specs)) mock_check_extra_specs_validity = self.mock_object( self.library, '_check_extra_specs_validity') mock_get_provisioning_options = self.mock_object( self.library, '_get_provisioning_options', mock.Mock(return_value=fake.PROVISIONING_OPTIONS)) mock_get_normalized_qos_specs = self.mock_object( self.library, '_get_normalized_qos_specs', mock.Mock(return_value={fake.QOS_NORMALIZED_SPEC: 3000})) mock_create_qos_policy_group = self.mock_object( self.library, '_create_qos_policy_group', mock.Mock( return_value=fake.QOS_POLICY_GROUP_NAME)) result = self.library._get_provisioning_options_for_share( fake.SHARE_INSTANCE, fake.VSERVER1, vserver_client=vserver_client, replica=is_replica) if qos and is_replica: expected_provisioning_opts = fake.PROVISIONING_OPTIONS self.assertFalse(mock_create_qos_policy_group.called) else: expected_provisioning_opts = fake.PROVISIONING_OPTIONS_WITH_QOS mock_create_qos_policy_group.assert_called_once_with( fake.SHARE_INSTANCE, fake.VSERVER1, {fake.QOS_NORMALIZED_SPEC: 3000}, vserver_client) self.assertEqual(expected_provisioning_opts, result) mock_get_extra_specs_from_share.assert_called_once_with( fake.SHARE_INSTANCE) mock_remap_standard_boolean_extra_specs.assert_called_once_with( extra_specs) mock_check_extra_specs_validity.assert_called_once_with( fake.SHARE_INSTANCE, extra_specs) mock_get_provisioning_options.assert_called_once_with(extra_specs) mock_get_normalized_qos_specs.assert_called_once_with(extra_specs) def test_get_provisioning_options_implicit_false(self): result = self.library._get_provisioning_options( fake.EMPTY_EXTRA_SPEC) expected = { 'language': None, 'max_files': None, 'snapshot_policy': None, 'thin_provisioned': False, 'compression_enabled': False, 'dedup_enabled': False, 'split': False, 'encrypt': False, 'hide_snapdir': False, } self.assertEqual(expected, result) def 
test_get_boolean_provisioning_options(self): result = self.library._get_boolean_provisioning_options( fake.SHORT_BOOLEAN_EXTRA_SPEC, self.library.BOOLEAN_QUALIFIED_EXTRA_SPECS_MAP) self.assertEqual(fake.PROVISIONING_OPTIONS_BOOLEAN, result) def test_get_boolean_provisioning_options_missing_spec(self): result = self.library._get_boolean_provisioning_options( fake.SHORT_BOOLEAN_EXTRA_SPEC, self.library.BOOLEAN_QUALIFIED_EXTRA_SPECS_MAP) self.assertEqual(fake.PROVISIONING_OPTIONS_BOOLEAN, result) def test_get_boolean_provisioning_options_implicit_false(self): expected = { 'thin_provisioned': False, 'dedup_enabled': False, 'compression_enabled': False, 'split': False, 'hide_snapdir': False, } result = self.library._get_boolean_provisioning_options( fake.EMPTY_EXTRA_SPEC, self.library.BOOLEAN_QUALIFIED_EXTRA_SPECS_MAP) self.assertEqual(expected, result) def test_get_string_provisioning_options(self): result = self.library._get_string_provisioning_options( fake.STRING_EXTRA_SPEC, self.library.STRING_QUALIFIED_EXTRA_SPECS_MAP) self.assertEqual(fake.PROVISIONING_OPTIONS_STRING, result) def test_get_string_provisioning_options_missing_spec(self): result = self.library._get_string_provisioning_options( fake.SHORT_STRING_EXTRA_SPEC, self.library.STRING_QUALIFIED_EXTRA_SPECS_MAP) self.assertEqual(fake.PROVISIONING_OPTIONS_STRING_MISSING_SPECS, result) def test_get_string_provisioning_options_implicit_false(self): result = self.library._get_string_provisioning_options( fake.EMPTY_EXTRA_SPEC, self.library.STRING_QUALIFIED_EXTRA_SPECS_MAP) self.assertEqual(fake.PROVISIONING_OPTIONS_STRING_DEFAULT, result) @ddt.data({}, {'foo': 'bar'}, {'netapp:maxiops': '3000'}, {'qos': True, 'netapp:absiops': '3000'}, {'qos': True, 'netapp:maxiops:': '3000'}) def test_get_normalized_qos_specs_no_qos_specs(self, extra_specs): if 'qos' in extra_specs: self.assertRaises(exception.NetAppException, self.library._get_normalized_qos_specs, extra_specs) else: self.assertDictMatch( {}, self.library._get_normalized_qos_specs(extra_specs)) @ddt.data({'qos': True, 'netapp:maxiops': '3000', 'netapp:maxbps': '9000'}, {'qos': True, 'netapp:maxiopspergib': '1000', 'netapp:maxiops': '1000'}) def test_get_normalized_qos_specs_multiple_qos_specs(self, extra_specs): self.assertRaises(exception.NetAppException, self.library._get_normalized_qos_specs, extra_specs) @ddt.data({'qos': True, 'netapp:maxIOPS': '3000'}, {'qos': True, 'netapp:MAxBPs': '3000', 'clem': 'son'}, {'qos': True, 'netapp:maxbps': '3000', 'tig': 'ers'}, {'qos': True, 'netapp:MAXiopSPerGib': '3000', 'kin': 'gsof'}, {'qos': True, 'netapp:maxiopspergib': '3000', 'coll': 'ege'}, {'qos': True, 'netapp:maxBPSperGiB': '3000', 'foot': 'ball'}) def test_get_normalized_qos_specs(self, extra_specs): expected_normalized_spec = { key.lower().split('netapp:')[1]: value for key, value in extra_specs.items() if 'netapp:' in key } qos_specs = self.library._get_normalized_qos_specs(extra_specs) self.assertDictMatch(expected_normalized_spec, qos_specs) self.assertEqual(1, len(qos_specs)) @ddt.data({'qos': {'maxiops': '3000'}, 'expected': '3000iops'}, {'qos': {'maxbps': '3000'}, 'expected': '3000B/s'}, {'qos': {'maxbpspergib': '3000'}, 'expected': '12000B/s'}, {'qos': {'maxiopspergib': '3000'}, 'expected': '12000iops'}) @ddt.unpack def test_get_max_throughput(self, qos, expected): throughput = self.library._get_max_throughput(4, qos) self.assertEqual(expected, throughput) def test_create_qos_policy_group(self): mock_qos_policy_create = self.mock_object( self.library._client, 
'qos_policy_group_create') self.library._create_qos_policy_group( fake.SHARE, fake.VSERVER1, {'maxiops': '3000'}) expected_policy_name = 'qos_share_' + fake.SHARE['id'].replace( '-', '_') mock_qos_policy_create.assert_called_once_with( expected_policy_name, fake.VSERVER1, max_throughput='3000iops') def test_check_if_max_files_is_valid_with_negative_integer(self): self.assertRaises(exception.NetAppException, self.library._check_if_max_files_is_valid, fake.SHARE, -1) def test_check_if_max_files_is_valid_with_string(self): self.assertRaises(ValueError, self.library._check_if_max_files_is_valid, fake.SHARE, 'abc') def test_allocate_container_no_pool(self): vserver_client = mock.Mock() fake_share_inst = copy.deepcopy(fake.SHARE_INSTANCE) fake_share_inst['host'] = fake_share_inst['host'].split('#')[0] self.assertRaises(exception.InvalidHost, self.library._allocate_container, fake_share_inst, fake.VSERVER1, vserver_client) def test_check_aggregate_extra_specs_validity(self): self.library._have_cluster_creds = True self.library._ssc_stats = fake.SSC_INFO result = self.library._check_aggregate_extra_specs_validity( fake.AGGREGATES[0], fake.EXTRA_SPEC) self.assertIsNone(result) def test_check_aggregate_extra_specs_validity_no_match(self): self.library._have_cluster_creds = True self.library._ssc_stats = fake.SSC_INFO self.assertRaises(exception.NetAppException, self.library._check_aggregate_extra_specs_validity, fake.AGGREGATES[1], fake.EXTRA_SPEC) @ddt.data({'provider_location': None, 'size': 50, 'hide_snapdir': True, 'split': None}, {'provider_location': 'fake_location', 'size': 30, 'hide_snapdir': False, 'split': True}, {'provider_location': 'fake_location', 'size': 20, 'hide_snapdir': True, 'split': False}) @ddt.unpack def test_allocate_container_from_snapshot( self, provider_location, size, hide_snapdir, split): provisioning_options = copy.deepcopy(fake.PROVISIONING_OPTIONS) provisioning_options['hide_snapdir'] = hide_snapdir mock_get_provisioning_opts = self.mock_object( self.library, '_get_provisioning_options_for_share', mock.Mock(return_value=provisioning_options)) vserver = fake.VSERVER1 vserver_client = mock.Mock() original_snapshot_size = 20 expected_split_op = split or fake.PROVISIONING_OPTIONS['split'] fake_share_inst = copy.deepcopy(fake.SHARE_INSTANCE) fake_share_inst['size'] = size fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['provider_location'] = provider_location fake_snapshot['size'] = original_snapshot_size self.library._allocate_container_from_snapshot(fake_share_inst, fake_snapshot, vserver, vserver_client) share_name = self.library._get_backend_share_name( fake_share_inst['id']) parent_share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) parent_snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) if not provider_location else 'fake_location' mock_get_provisioning_opts.assert_called_once_with( fake_share_inst, fake.VSERVER1, vserver_client=vserver_client) vserver_client.create_volume_clone.assert_called_once_with( share_name, parent_share_name, parent_snapshot_name, thin_provisioned=True, snapshot_policy='default', language='en-US', dedup_enabled=True, split=expected_split_op, encrypt=False, compression_enabled=False, max_files=5000) if size > original_snapshot_size: vserver_client.set_volume_size.assert_called_once_with( share_name, size) else: vserver_client.set_volume_size.assert_not_called() if hide_snapdir: vserver_client.set_volume_snapdir_access.assert_called_once_with( fake.SHARE_INSTANCE_NAME, 
hide_snapdir) else: vserver_client.set_volume_snapdir_access.assert_not_called() def test_share_exists(self): vserver_client = mock.Mock() vserver_client.volume_exists.return_value = True result = self.library._share_exists(fake.SHARE_NAME, vserver_client) self.assertTrue(result) def test_share_exists_not_found(self): vserver_client = mock.Mock() vserver_client.volume_exists.return_value = False result = self.library._share_exists(fake.SHARE_NAME, vserver_client) self.assertFalse(result) def test_delete_share(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_share_exists = self.mock_object(self.library, '_share_exists', mock.Mock(return_value=True)) mock_remove_export = self.mock_object(self.library, '_remove_export') mock_deallocate_container = self.mock_object(self.library, '_deallocate_container') self.library.delete_share(self.context, fake.SHARE, share_server=fake.SHARE_SERVER) share_name = self.library._get_backend_share_name(fake.SHARE['id']) qos_policy_name = self.library._get_backend_qos_policy_group_name( fake.SHARE['id']) mock_share_exists.assert_called_once_with(share_name, vserver_client) mock_remove_export.assert_called_once_with(fake.SHARE, vserver_client) mock_deallocate_container.assert_called_once_with(share_name, vserver_client) (vserver_client.mark_qos_policy_group_for_deletion .assert_called_once_with(qos_policy_name)) self.assertEqual(0, lib_base.LOG.info.call_count) @ddt.data(exception.InvalidInput(reason='fake_reason'), exception.VserverNotSpecified(), exception.VserverNotFound(vserver='fake_vserver')) def test_delete_share_no_share_server(self, get_vserver_exception): self.mock_object(self.library, '_get_vserver', mock.Mock(side_effect=get_vserver_exception)) mock_share_exists = self.mock_object(self.library, '_share_exists', mock.Mock(return_value=False)) mock_remove_export = self.mock_object(self.library, '_remove_export') mock_deallocate_container = self.mock_object(self.library, '_deallocate_container') self.library.delete_share(self.context, fake.SHARE, share_server=fake.SHARE_SERVER) self.assertFalse(mock_share_exists.called) self.assertFalse(mock_remove_export.called) self.assertFalse(mock_deallocate_container.called) self.assertFalse( self.library._client.mark_qos_policy_group_for_deletion.called) self.assertEqual(1, lib_base.LOG.warning.call_count) def test_delete_share_not_found(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_share_exists = self.mock_object(self.library, '_share_exists', mock.Mock(return_value=False)) mock_remove_export = self.mock_object(self.library, '_remove_export') mock_deallocate_container = self.mock_object(self.library, '_deallocate_container') self.library.delete_share(self.context, fake.SHARE, share_server=fake.SHARE_SERVER) share_name = self.library._get_backend_share_name(fake.SHARE['id']) mock_share_exists.assert_called_once_with(share_name, vserver_client) self.assertFalse(mock_remove_export.called) self.assertFalse(mock_deallocate_container.called) self.assertFalse( self.library._client.mark_qos_policy_group_for_deletion.called) self.assertEqual(1, lib_base.LOG.info.call_count) def test_deallocate_container(self): vserver_client = mock.Mock() self.library._deallocate_container(fake.SHARE_NAME, vserver_client) vserver_client.unmount_volume.assert_called_with(fake.SHARE_NAME, force=True) 
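        # After unmounting, the backing FlexVol is taken offline and then
        # deleted.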
        vserver_client.offline_volume.assert_called_with(fake.SHARE_NAME)
        vserver_client.delete_volume.assert_called_with(fake.SHARE_NAME)

    def test_create_export(self):

        protocol_helper = mock.Mock()
        callback = (lambda export_address, export_path='fake_export_path':
                    ':'.join([export_address, export_path]))
        protocol_helper.create_share.return_value = callback
        self.mock_object(self.library,
                         '_get_helper',
                         mock.Mock(return_value=protocol_helper))
        vserver_client = mock.Mock()
        vserver_client.get_network_interfaces.return_value = fake.LIFS
        fake_interface_addresses_with_metadata = copy.deepcopy(
            fake.INTERFACE_ADDRESSES_WITH_METADATA)
        mock_get_export_addresses_with_metadata = self.mock_object(
            self.library, '_get_export_addresses_with_metadata',
            mock.Mock(return_value=fake_interface_addresses_with_metadata))

        result = self.library._create_export(fake.SHARE,
                                             fake.SHARE_SERVER,
                                             fake.VSERVER1,
                                             vserver_client)

        self.assertEqual(fake.NFS_EXPORTS, result)
        mock_get_export_addresses_with_metadata.assert_called_once_with(
            fake.SHARE, fake.SHARE_SERVER, fake.LIFS)
        protocol_helper.create_share.assert_called_once_with(
            fake.SHARE, fake.SHARE_NAME, clear_current_export_policy=True)

    def test_create_export_lifs_not_found(self):

        self.mock_object(self.library, '_get_helper')
        vserver_client = mock.Mock()
        vserver_client.get_network_interfaces.return_value = []

        self.assertRaises(exception.NetAppException,
                          self.library._create_export,
                          fake.SHARE,
                          fake.SHARE_SERVER,
                          fake.VSERVER1,
                          vserver_client)

    def test_get_export_addresses_with_metadata(self):

        mock_get_aggregate_node = self.mock_object(
            self.library, '_get_aggregate_node',
            mock.Mock(return_value=fake.CLUSTER_NODES[0]))
        mock_get_admin_addresses_for_share_server = self.mock_object(
            self.library, '_get_admin_addresses_for_share_server',
            mock.Mock(return_value=[fake.LIF_ADDRESSES[1]]))

        result = self.library._get_export_addresses_with_metadata(
            fake.SHARE, fake.SHARE_SERVER, fake.LIFS)

        self.assertEqual(fake.INTERFACE_ADDRESSES_WITH_METADATA, result)
        mock_get_aggregate_node.assert_called_once_with(fake.POOL_NAME)
        mock_get_admin_addresses_for_share_server.assert_called_once_with(
            fake.SHARE_SERVER)

    def test_get_export_addresses_with_metadata_node_unknown(self):

        mock_get_aggregate_node = self.mock_object(
            self.library, '_get_aggregate_node',
            mock.Mock(return_value=None))
        mock_get_admin_addresses_for_share_server = self.mock_object(
            self.library, '_get_admin_addresses_for_share_server',
            mock.Mock(return_value=[fake.LIF_ADDRESSES[1]]))

        result = self.library._get_export_addresses_with_metadata(
            fake.SHARE, fake.SHARE_SERVER, fake.LIFS)

        expected = copy.deepcopy(fake.INTERFACE_ADDRESSES_WITH_METADATA)
        for key, value in expected.items():
            value['preferred'] = False

        self.assertEqual(expected, result)
        mock_get_aggregate_node.assert_called_once_with(fake.POOL_NAME)
        mock_get_admin_addresses_for_share_server.assert_called_once_with(
            fake.SHARE_SERVER)

    def test_get_admin_addresses_for_share_server(self):

        result = self.library._get_admin_addresses_for_share_server(
            fake.SHARE_SERVER)

        self.assertEqual([fake.ADMIN_NETWORK_ALLOCATIONS[0]['ip_address']],
                         result)

    def test_get_admin_addresses_for_share_server_no_share_server(self):

        result = self.library._get_admin_addresses_for_share_server(None)

        self.assertEqual([], result)

    @ddt.data(True, False)
    def test_sort_export_locations_by_preferred_paths(self, reverse):

        export_locations = copy.copy(fake.NFS_EXPORTS)
        if reverse:
            export_locations.reverse()

        result = self.library._sort_export_locations_by_preferred_paths(
            export_locations)
self.assertEqual(fake.NFS_EXPORTS, result) def test_remove_export(self): protocol_helper = mock.Mock() protocol_helper.get_target.return_value = 'fake_target' self.mock_object(self.library, '_get_helper', mock.Mock(return_value=protocol_helper)) vserver_client = mock.Mock() self.library._remove_export(fake.SHARE, vserver_client) protocol_helper.set_client.assert_called_once_with(vserver_client) protocol_helper.get_target.assert_called_once_with(fake.SHARE) protocol_helper.delete_share.assert_called_once_with(fake.SHARE, fake.SHARE_NAME) def test_remove_export_target_not_found(self): protocol_helper = mock.Mock() protocol_helper.get_target.return_value = None self.mock_object(self.library, '_get_helper', mock.Mock(return_value=protocol_helper)) vserver_client = mock.Mock() self.library._remove_export(fake.SHARE, vserver_client) protocol_helper.set_client.assert_called_once_with(vserver_client) protocol_helper.get_target.assert_called_once_with(fake.SHARE) self.assertFalse(protocol_helper.delete_share.called) def test_create_snapshot(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) model_update = self.library.create_snapshot( self.context, fake.SNAPSHOT, share_server=fake.SHARE_SERVER) share_name = self.library._get_backend_share_name( fake.SNAPSHOT['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake.SNAPSHOT['id']) vserver_client.create_snapshot.assert_called_once_with(share_name, snapshot_name) self.assertEqual(snapshot_name, model_update['provider_location']) @ddt.data(True, False) def test_revert_to_snapshot(self, use_snap_provider_location): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) fake_snapshot = copy.deepcopy(fake.SNAPSHOT) if use_snap_provider_location: fake_snapshot['provider_location'] = 'fake-provider-location' else: del fake_snapshot['provider_location'] result = self.library.revert_to_snapshot( self.context, fake_snapshot, share_server=fake.SHARE_SERVER) self.assertIsNone(result) share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) snapshot_name = (self.library._get_backend_snapshot_name( fake_snapshot['id']) if not use_snap_provider_location else 'fake-provider-location') vserver_client.restore_snapshot.assert_called_once_with(share_name, snapshot_name) def test_delete_snapshot(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_delete_snapshot = self.mock_object(self.library, '_delete_snapshot') self.library.delete_snapshot(self.context, fake.SNAPSHOT, share_server=fake.SHARE_SERVER) share_name = self.library._get_backend_share_name( fake.SNAPSHOT['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake.SNAPSHOT['id']) mock_delete_snapshot.assert_called_once_with( vserver_client, share_name, snapshot_name) def test_delete_snapshot_with_provider_location(self): vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = fake.CDOT_SNAPSHOT self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['provider_location'] = 'fake_provider_location' self.library.delete_snapshot(self.context, fake_snapshot, share_server=fake.SHARE_SERVER) share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) 
vserver_client.delete_snapshot.assert_called_once_with( share_name, fake_snapshot['provider_location']) @ddt.data(exception.InvalidInput(reason='fake_reason'), exception.VserverNotSpecified(), exception.VserverNotFound(vserver='fake_vserver')) def test_delete_snapshot_no_share_server(self, get_vserver_exception): self.mock_object(self.library, '_get_vserver', mock.Mock(side_effect=get_vserver_exception)) mock_delete_snapshot = self.mock_object(self.library, '_delete_snapshot') self.library.delete_snapshot(self.context, fake.SNAPSHOT, share_server=fake.SHARE_SERVER) self.assertFalse(mock_delete_snapshot.called) def test_delete_snapshot_not_found(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_delete_snapshot = self.mock_object( self.library, '_delete_snapshot', mock.Mock(side_effect=exception.SnapshotResourceNotFound( name=fake.SNAPSHOT_NAME))) self.library.delete_snapshot(self.context, fake.SNAPSHOT, share_server=fake.SHARE_SERVER) share_name = self.library._get_backend_share_name( fake.SNAPSHOT['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake.SNAPSHOT['id']) mock_delete_snapshot.assert_called_once_with( vserver_client, share_name, snapshot_name) def test_delete_snapshot_not_unique(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_delete_snapshot = self.mock_object( self.library, '_delete_snapshot', mock.Mock(side_effect=exception.NetAppException())) self.assertRaises(exception.NetAppException, self.library.delete_snapshot, self.context, fake.SNAPSHOT, share_server=fake.SHARE_SERVER) share_name = self.library._get_backend_share_name( fake.SNAPSHOT['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake.SNAPSHOT['id']) mock_delete_snapshot.assert_called_once_with( vserver_client, share_name, snapshot_name) def test__delete_snapshot(self): vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = fake.CDOT_SNAPSHOT self.library._delete_snapshot(vserver_client, fake.SHARE_NAME, fake.SNAPSHOT_NAME) vserver_client.delete_snapshot.assert_called_once_with( fake.SHARE_NAME, fake.SNAPSHOT_NAME) self.assertFalse(vserver_client.get_clone_children_for_snapshot.called) self.assertFalse(vserver_client.split_volume_clone.called) self.assertFalse(vserver_client.soft_delete_snapshot.called) def test__delete_snapshot_busy_volume_clone(self): vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = ( fake.CDOT_SNAPSHOT_BUSY_VOLUME_CLONE) vserver_client.get_clone_children_for_snapshot.return_value = ( fake.CDOT_CLONE_CHILDREN) self.library._delete_snapshot(vserver_client, fake.SHARE_NAME, fake.SNAPSHOT_NAME) self.assertFalse(vserver_client.delete_snapshot.called) vserver_client.get_clone_children_for_snapshot.assert_called_once_with( fake.SHARE_NAME, fake.SNAPSHOT_NAME) vserver_client.split_volume_clone.assert_has_calls([ mock.call(fake.CDOT_CLONE_CHILD_1), mock.call(fake.CDOT_CLONE_CHILD_2), ]) vserver_client.soft_delete_snapshot.assert_called_once_with( fake.SHARE_NAME, fake.SNAPSHOT_NAME) def test__delete_snapshot_busy_snapmirror(self): vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = ( fake.CDOT_SNAPSHOT_BUSY_SNAPMIRROR) self.assertRaises(exception.ShareSnapshotIsBusy, self.library._delete_snapshot, vserver_client, fake.SHARE_NAME, fake.SNAPSHOT_NAME) self.assertFalse(vserver_client.delete_snapshot.called) 
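        # A snapshot kept busy by a SnapMirror relationship cannot be worked
        # around, so no clone splitting or soft deletion is attempted either.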
self.assertFalse(vserver_client.get_clone_children_for_snapshot.called) self.assertFalse(vserver_client.split_volume_clone.called) self.assertFalse(vserver_client.soft_delete_snapshot.called) @ddt.data(None, fake.VSERVER1) def test_manage_existing(self, fake_vserver): vserver_client = mock.Mock() mock__get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_manage_container = self.mock_object( self.library, '_manage_container', mock.Mock(return_value=fake.SHARE_SIZE)) mock_create_export = self.mock_object( self.library, '_create_export', mock.Mock(return_value=fake.NFS_EXPORTS)) result = self.library.manage_existing(fake.SHARE, {}, share_server=fake_vserver) expected = { 'size': fake.SHARE_SIZE, 'export_locations': fake.NFS_EXPORTS } mock__get_vserver.assert_called_once_with(share_server=fake_vserver) mock_manage_container.assert_called_once_with(fake.SHARE, fake.VSERVER1, vserver_client) mock_create_export.assert_called_once_with(fake.SHARE, fake_vserver, fake.VSERVER1, vserver_client) self.assertDictEqual(expected, result) @ddt.data(None, fake.VSERVER1) def test_unmanage(self, fake_vserver): result = self.library.unmanage(fake.SHARE, share_server=fake_vserver) self.assertIsNone(result) @ddt.data(True, False) def test_manage_container_with_qos(self, qos): vserver_client = mock.Mock() qos_policy_group_name = fake.QOS_POLICY_GROUP_NAME if qos else None extra_specs = fake.EXTRA_SPEC_WITH_QOS if qos else fake.EXTRA_SPEC provisioning_opts = self.library._get_provisioning_options(extra_specs) if qos: provisioning_opts['qos_policy_group'] = fake.QOS_POLICY_GROUP_NAME share_to_manage = copy.deepcopy(fake.SHARE) share_to_manage['export_location'] = fake.EXPORT_LOCATION mock_helper = mock.Mock() mock_helper.get_share_name_for_share.return_value = fake.FLEXVOL_NAME self.mock_object(self.library, '_get_helper', mock.Mock(return_value=mock_helper)) mock_get_volume_to_manage = self.mock_object( vserver_client, 'get_volume_to_manage', mock.Mock(return_value=fake.FLEXVOL_TO_MANAGE)) mock_validate_volume_for_manage = self.mock_object( self.library, '_validate_volume_for_manage') self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=extra_specs)) mock_check_extra_specs_validity = self.mock_object( self.library, '_check_extra_specs_validity') mock_check_aggregate_extra_specs_validity = self.mock_object( self.library, '_check_aggregate_extra_specs_validity') mock_modify_or_create_qos_policy = self.mock_object( self.library, '_modify_or_create_qos_for_existing_share', mock.Mock(return_value=qos_policy_group_name)) result = self.library._manage_container(share_to_manage, fake.VSERVER1, vserver_client) mock_get_volume_to_manage.assert_called_once_with( fake.POOL_NAME, fake.FLEXVOL_NAME) mock_check_extra_specs_validity.assert_called_once_with( share_to_manage, extra_specs) mock_check_aggregate_extra_specs_validity.assert_called_once_with( fake.POOL_NAME, extra_specs) vserver_client.unmount_volume.assert_called_once_with( fake.FLEXVOL_NAME) vserver_client.set_volume_name.assert_called_once_with( fake.FLEXVOL_NAME, fake.SHARE_NAME) vserver_client.mount_volume.assert_called_once_with( fake.SHARE_NAME) vserver_client.modify_volume.assert_called_once_with( fake.POOL_NAME, fake.SHARE_NAME, **provisioning_opts) mock_modify_or_create_qos_policy.assert_called_once_with( share_to_manage, extra_specs, fake.VSERVER1, vserver_client) mock_validate_volume_for_manage.assert_called() original_data = { 'original_name': 
fake.FLEXVOL_TO_MANAGE['name'], 'original_junction_path': fake.FLEXVOL_TO_MANAGE['junction-path'], } self.library.private_storage.update.assert_called_once_with( fake.SHARE['id'], original_data) expected_size = int( math.ceil(float(fake.FLEXVOL_TO_MANAGE['size']) / units.Gi)) self.assertEqual(expected_size, result) def test_manage_container_invalid_export_location(self): vserver_client = mock.Mock() share_to_manage = copy.deepcopy(fake.SHARE) share_to_manage['export_location'] = fake.EXPORT_LOCATION mock_helper = mock.Mock() mock_helper.get_share_name_for_share.return_value = None self.mock_object(self.library, '_get_helper', mock.Mock(return_value=mock_helper)) self.assertRaises(exception.ManageInvalidShare, self.library._manage_container, share_to_manage, fake.VSERVER1, vserver_client) def test_manage_container_not_found(self): vserver_client = mock.Mock() share_to_manage = copy.deepcopy(fake.SHARE) share_to_manage['export_location'] = fake.EXPORT_LOCATION mock_helper = mock.Mock() mock_helper.get_share_name_for_share.return_value = fake.FLEXVOL_NAME self.mock_object(self.library, '_get_helper', mock.Mock(return_value=mock_helper)) self.mock_object(vserver_client, 'get_volume_to_manage', mock.Mock(return_value=None)) self.assertRaises(exception.ManageInvalidShare, self.library._manage_container, share_to_manage, fake.VSERVER1, vserver_client) def test_manage_container_invalid_extra_specs(self): vserver_client = mock.Mock() share_to_manage = copy.deepcopy(fake.SHARE) share_to_manage['export_location'] = fake.EXPORT_LOCATION mock_helper = mock.Mock() mock_helper.get_share_name_for_share.return_value = fake.FLEXVOL_NAME self.mock_object(self.library, '_get_helper', mock.Mock(return_value=mock_helper)) self.mock_object(vserver_client, 'get_volume_to_manage', mock.Mock(return_value=fake.FLEXVOL_TO_MANAGE)) self.mock_object(self.library, '_validate_volume_for_manage') self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=fake.EXTRA_SPEC)) self.mock_object(self.library, '_check_extra_specs_validity', mock.Mock(side_effect=exception.NetAppException)) self.assertRaises(exception.ManageExistingShareTypeMismatch, self.library._manage_container, share_to_manage, fake.VSERVER1, vserver_client) def test_validate_volume_for_manage(self): vserver_client = mock.Mock() vserver_client.volume_has_luns = mock.Mock(return_value=False) vserver_client.volume_has_junctioned_volumes = mock.Mock( return_value=False) vserver_client.volume_has_snapmirror_relationships = mock.Mock( return_value=False) result = self.library._validate_volume_for_manage( fake.FLEXVOL_TO_MANAGE, vserver_client) self.assertIsNone(result) @ddt.data({ 'attribute': 'type', 'value': 'dp', }, { 'attribute': 'style', 'value': 'infinitevol', }) @ddt.unpack def test_validate_volume_for_manage_invalid_volume(self, attribute, value): flexvol_to_manage = copy.deepcopy(fake.FLEXVOL_TO_MANAGE) flexvol_to_manage[attribute] = value vserver_client = mock.Mock() vserver_client.volume_has_luns = mock.Mock(return_value=False) vserver_client.volume_has_junctioned_volumes = mock.Mock( return_value=False) vserver_client.volume_has_snapmirror_relationships = mock.Mock( return_value=False) self.assertRaises(exception.ManageInvalidShare, self.library._validate_volume_for_manage, flexvol_to_manage, vserver_client) def test_validate_volume_for_manage_luns_present(self): vserver_client = mock.Mock() vserver_client.volume_has_luns = mock.Mock(return_value=True) vserver_client.volume_has_junctioned_volumes = mock.Mock( return_value=False) 
vserver_client.volume_has_snapmirror_relationships = mock.Mock( return_value=False) self.assertRaises(exception.ManageInvalidShare, self.library._validate_volume_for_manage, fake.FLEXVOL_TO_MANAGE, vserver_client) def test_validate_volume_for_manage_junctioned_volumes_present(self): vserver_client = mock.Mock() vserver_client.volume_has_luns = mock.Mock(return_value=False) vserver_client.volume_has_junctioned_volumes = mock.Mock( return_value=True) vserver_client.volume_has_snapmirror_relationships = mock.Mock( return_value=False) self.assertRaises(exception.ManageInvalidShare, self.library._validate_volume_for_manage, fake.FLEXVOL_TO_MANAGE, vserver_client) @ddt.data(None, fake.VSERVER1) def test_manage_existing_snapshot(self, fake_vserver): vserver_client = mock.Mock() mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) vserver_client.get_volume.return_value = fake.FLEXVOL_TO_MANAGE vserver_client.volume_has_snapmirror_relationships.return_value = False result = self.library.manage_existing_snapshot( fake.SNAPSHOT_TO_MANAGE, {}, share_server=fake_vserver) share_name = self.library._get_backend_share_name( fake.SNAPSHOT['share_id']) new_snapshot_name = self.library._get_backend_snapshot_name( fake.SNAPSHOT['id']) mock_get_vserver.assert_called_once_with(share_server=fake_vserver) (vserver_client.volume_has_snapmirror_relationships. assert_called_once_with(fake.FLEXVOL_TO_MANAGE)) vserver_client.rename_snapshot.assert_called_once_with( share_name, fake.SNAPSHOT_NAME, new_snapshot_name) self.library.private_storage.update.assert_called_once_with( fake.SNAPSHOT['id'], {'original_name': fake.SNAPSHOT_NAME}) self.assertEqual({'size': 2, 'provider_location': new_snapshot_name}, result) def test_manage_existing_snapshot_no_snapshot_name(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) vserver_client.get_volume.return_value = fake.FLEXVOL_TO_MANAGE vserver_client.volume_has_snapmirror_relationships.return_value = False fake_snapshot = copy.deepcopy(fake.SNAPSHOT_TO_MANAGE) fake_snapshot['provider_location'] = '' self.assertRaises(exception.ManageInvalidShareSnapshot, self.library.manage_existing_snapshot, fake_snapshot, {}) @ddt.data(netapp_api.NaApiError, exception.NetAppException) def test_manage_existing_snapshot_get_volume_error(self, exception_type): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) vserver_client.get_volume.side_effect = exception_type self.mock_object(self.client, 'volume_has_snapmirror_relationships', mock.Mock(return_value=False)) self.assertRaises(exception.ShareNotFound, self.library.manage_existing_snapshot, fake.SNAPSHOT_TO_MANAGE, {}) def test_manage_existing_snapshot_mirrors_present(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) vserver_client.get_volume.return_value = fake.FLEXVOL_TO_MANAGE vserver_client.volume_has_snapmirror_relationships.return_value = True self.assertRaises(exception.ManageInvalidShareSnapshot, self.library.manage_existing_snapshot, fake.SNAPSHOT_TO_MANAGE, {}) def test_manage_existing_snapshot_rename_snapshot_error(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) vserver_client.get_volume.return_value = 
fake.FLEXVOL_TO_MANAGE vserver_client.volume_has_snapmirror_relationships.return_value = False vserver_client.rename_snapshot.side_effect = netapp_api.NaApiError self.assertRaises(exception.ManageInvalidShareSnapshot, self.library.manage_existing_snapshot, fake.SNAPSHOT_TO_MANAGE, {}) @ddt.data(None, fake.VSERVER1) def test_unmanage_snapshot(self, fake_vserver): result = self.library.unmanage_snapshot(fake.SNAPSHOT, fake_vserver) self.assertIsNone(result) def test_validate_volume_for_manage_snapmirror_relationships_present(self): vserver_client = mock.Mock() vserver_client.volume_has_luns.return_value = False vserver_client.volume_has_junctioned_volumes.return_value = False vserver_client.volume_has_snapmirror_relationships.return_value = True self.assertRaises(exception.ManageInvalidShare, self.library._validate_volume_for_manage, fake.FLEXVOL_TO_MANAGE, vserver_client) def test_create_consistency_group_from_cgsnapshot(self): vserver_client = mock.Mock() mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_allocate_container_from_snapshot = self.mock_object( self.library, '_allocate_container_from_snapshot') mock_create_export = self.mock_object( self.library, '_create_export', mock.Mock(side_effect=[['loc3'], ['loc4']])) result = self.library.create_consistency_group_from_cgsnapshot( self.context, fake.CONSISTENCY_GROUP_DEST, fake.CG_SNAPSHOT, share_server=fake.SHARE_SERVER) share_update_list = [ {'id': fake.SHARE_ID3, 'export_locations': ['loc3']}, {'id': fake.SHARE_ID4, 'export_locations': ['loc4']} ] expected = (None, share_update_list) self.assertEqual(expected, result) mock_allocate_container_from_snapshot.assert_has_calls([ mock.call(fake.COLLATED_CGSNAPSHOT_INFO[0]['share'], fake.COLLATED_CGSNAPSHOT_INFO[0]['snapshot'], fake.VSERVER1, vserver_client, mock.ANY), mock.call(fake.COLLATED_CGSNAPSHOT_INFO[1]['share'], fake.COLLATED_CGSNAPSHOT_INFO[1]['snapshot'], fake.VSERVER1, vserver_client, mock.ANY), ]) mock_create_export.assert_has_calls([ mock.call(fake.COLLATED_CGSNAPSHOT_INFO[0]['share'], fake.SHARE_SERVER, fake.VSERVER1, vserver_client), mock.call(fake.COLLATED_CGSNAPSHOT_INFO[1]['share'], fake.SHARE_SERVER, fake.VSERVER1, vserver_client), ]) mock_get_vserver.assert_called_once_with( share_server=fake.SHARE_SERVER) def test_create_consistency_group_from_cgsnapshot_no_members(self): vserver_client = mock.Mock() mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_allocate_container_from_snapshot = self.mock_object( self.library, '_allocate_container_from_snapshot') mock_create_export = self.mock_object( self.library, '_create_export', mock.Mock(side_effect=[['loc3'], ['loc4']])) fake_cg_snapshot = copy.deepcopy(fake.CG_SNAPSHOT) fake_cg_snapshot['share_group_snapshot_members'] = [] result = self.library.create_consistency_group_from_cgsnapshot( self.context, fake.CONSISTENCY_GROUP_DEST, fake_cg_snapshot, share_server=fake.SHARE_SERVER) self.assertEqual((None, None), result) self.assertFalse(mock_allocate_container_from_snapshot.called) self.assertFalse(mock_create_export.called) mock_get_vserver.assert_called_once_with( share_server=fake.SHARE_SERVER) def test_collate_cg_snapshot_info(self): result = self.library._collate_cg_snapshot_info( fake.CONSISTENCY_GROUP_DEST, fake.CG_SNAPSHOT) self.assertEqual(fake.COLLATED_CGSNAPSHOT_INFO, result) def test_collate_cg_snapshot_info_invalid(self): fake_cg_snapshot = 
copy.deepcopy(fake.CG_SNAPSHOT) fake_cg_snapshot['share_group_snapshot_members'] = [] self.assertRaises(exception.InvalidShareGroup, self.library._collate_cg_snapshot_info, fake.CONSISTENCY_GROUP_DEST, fake_cg_snapshot) def test_create_cgsnapshot(self): vserver_client = mock.Mock() mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) result = self.library.create_cgsnapshot( self.context, fake.CG_SNAPSHOT, share_server=fake.SHARE_SERVER) share_names = [ self.library._get_backend_share_name( fake.CG_SNAPSHOT_MEMBER_1['share_id']), self.library._get_backend_share_name( fake.CG_SNAPSHOT_MEMBER_2['share_id']) ] snapshot_name = self.library._get_backend_cg_snapshot_name( fake.CG_SNAPSHOT['id']) vserver_client.create_cg_snapshot.assert_called_once_with( share_names, snapshot_name) self.assertEqual((None, None), result) mock_get_vserver.assert_called_once_with( share_server=fake.SHARE_SERVER) def test_create_cgsnapshot_no_members(self): vserver_client = mock.Mock() mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) fake_cg_snapshot = copy.deepcopy(fake.CG_SNAPSHOT) fake_cg_snapshot['share_group_snapshot_members'] = [] result = self.library.create_cgsnapshot( self.context, fake_cg_snapshot, share_server=fake.SHARE_SERVER) self.assertFalse(vserver_client.create_cg_snapshot.called) self.assertEqual((None, None), result) mock_get_vserver.assert_called_once_with( share_server=fake.SHARE_SERVER) def test_delete_cgsnapshot(self): vserver_client = mock.Mock() mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_delete_snapshot = self.mock_object(self.library, '_delete_snapshot') result = self.library.delete_cgsnapshot( self.context, fake.CG_SNAPSHOT, share_server=fake.SHARE_SERVER) share_names = [ self.library._get_backend_share_name( fake.CG_SNAPSHOT_MEMBER_1['share_id']), self.library._get_backend_share_name( fake.CG_SNAPSHOT_MEMBER_2['share_id']) ] snapshot_name = self.library._get_backend_cg_snapshot_name( fake.CG_SNAPSHOT['id']) mock_delete_snapshot.assert_has_calls([ mock.call(vserver_client, share_names[0], snapshot_name), mock.call(vserver_client, share_names[1], snapshot_name) ]) self.assertEqual((None, None), result) mock_get_vserver.assert_called_once_with( share_server=fake.SHARE_SERVER) def test_delete_cgsnapshot_no_members(self): vserver_client = mock.Mock() mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_delete_snapshot = self.mock_object(self.library, '_delete_snapshot') fake_cg_snapshot = copy.deepcopy(fake.CG_SNAPSHOT) fake_cg_snapshot['share_group_snapshot_members'] = [] result = self.library.delete_cgsnapshot( self.context, fake_cg_snapshot, share_server=fake.SHARE_SERVER) self.assertFalse(mock_delete_snapshot.called) self.assertEqual((None, None), result) mock_get_vserver.assert_called_once_with( share_server=fake.SHARE_SERVER) def test_delete_cgsnapshot_snapshots_not_found(self): vserver_client = mock.Mock() mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) mock_delete_snapshot = self.mock_object( self.library, '_delete_snapshot', mock.Mock(side_effect=exception.SnapshotResourceNotFound( name='fake'))) result = self.library.delete_cgsnapshot( self.context, fake.CG_SNAPSHOT, share_server=fake.SHARE_SERVER) 
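        # Missing member snapshots are tolerated: deletion is still attempted
        # for every CG member and the call returns (None, None).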
        share_names = [
            self.library._get_backend_share_name(
                fake.CG_SNAPSHOT_MEMBER_1['share_id']),
            self.library._get_backend_share_name(
                fake.CG_SNAPSHOT_MEMBER_2['share_id'])
        ]
        snapshot_name = self.library._get_backend_cg_snapshot_name(
            fake.CG_SNAPSHOT['id'])

        mock_delete_snapshot.assert_has_calls([
            mock.call(vserver_client, share_names[0], snapshot_name),
            mock.call(vserver_client, share_names[1], snapshot_name)
        ])
        self.assertEqual((None, None), result)
        mock_get_vserver.assert_called_once_with(
            share_server=fake.SHARE_SERVER)

    @ddt.data(exception.InvalidInput(reason='fake_reason'),
              exception.VserverNotSpecified(),
              exception.VserverNotFound(vserver='fake_vserver'))
    def test_delete_cgsnapshot_no_share_server(self, get_vserver_exception):
        mock_get_vserver = self.mock_object(
            self.library, '_get_vserver',
            mock.Mock(side_effect=get_vserver_exception))

        result = self.library.delete_cgsnapshot(
            self.context,
            fake.EMPTY_CONSISTENCY_GROUP,
            share_server=fake.SHARE_SERVER)

        self.assertEqual((None, None), result)
        self.assertEqual(1, lib_base.LOG.warning.call_count)
        mock_get_vserver.assert_called_once_with(
            share_server=fake.SHARE_SERVER)

    def test_adjust_qos_policy_with_volume_resize_no_cluster_creds(self):
        self.library._have_cluster_creds = False
        self.mock_object(share_types, 'get_extra_specs_from_share')

        retval = self.library._adjust_qos_policy_with_volume_resize(
            fake.SHARE, 10, mock.Mock())

        self.assertIsNone(retval)
        share_types.get_extra_specs_from_share.assert_not_called()

    def test_adjust_qos_policy_with_volume_resize_no_qos_on_share(self):
        self.library._have_cluster_creds = True
        self.mock_object(share_types, 'get_extra_specs_from_share')
        vserver_client = mock.Mock()
        self.mock_object(vserver_client, 'get_volume',
                         mock.Mock(return_value=fake.FLEXVOL_WITHOUT_QOS))

        retval = self.library._adjust_qos_policy_with_volume_resize(
            fake.SHARE, 10, vserver_client)

        self.assertIsNone(retval)
        share_types.get_extra_specs_from_share.assert_not_called()

    def test_adjust_qos_policy_with_volume_resize_no_size_dependent_qos(self):
        self.library._have_cluster_creds = True
        self.mock_object(share_types, 'get_extra_specs_from_share',
                         mock.Mock(return_value=fake.EXTRA_SPEC_WITH_QOS))
        vserver_client = mock.Mock()
        self.mock_object(vserver_client, 'get_volume',
                         mock.Mock(return_value=fake.FLEXVOL_WITH_QOS))
        self.mock_object(self.library, '_get_max_throughput')
        self.mock_object(self.library._client, 'qos_policy_group_modify')

        retval = self.library._adjust_qos_policy_with_volume_resize(
            fake.SHARE, 10, vserver_client)

        self.assertIsNone(retval)
        share_types.get_extra_specs_from_share.assert_called_once_with(
            fake.SHARE)
        self.library._get_max_throughput.assert_not_called()
        self.library._client.qos_policy_group_modify.assert_not_called()

    def test_adjust_qos_policy_with_volume_resize(self):
        self.library._have_cluster_creds = True
        self.mock_object(
            share_types, 'get_extra_specs_from_share',
            mock.Mock(return_value=fake.EXTRA_SPEC_WITH_SIZE_DEPENDENT_QOS))
        vserver_client = mock.Mock()
        self.mock_object(vserver_client, 'get_volume',
                         mock.Mock(return_value=fake.FLEXVOL_WITH_QOS))
        self.mock_object(self.library._client, 'qos_policy_group_modify')

        retval = self.library._adjust_qos_policy_with_volume_resize(
            fake.SHARE, 10, vserver_client)

        expected_max_throughput = '10000B/s'
        self.assertIsNone(retval)
        share_types.get_extra_specs_from_share.assert_called_once_with(
            fake.SHARE)
        self.library._client.qos_policy_group_modify.assert_called_once_with(
            fake.QOS_POLICY_GROUP_NAME, expected_max_throughput)

    def test_extend_share(self):
        vserver_client = mock.Mock()
        self.mock_object(self.library, '_get_vserver',
                         mock.Mock(return_value=(fake.VSERVER1,
                                                 vserver_client)))
        mock_adjust_qos_policy = self.mock_object(
            self.library, '_adjust_qos_policy_with_volume_resize')
        mock_set_volume_size = self.mock_object(vserver_client,
                                                'set_volume_size')
        new_size = fake.SHARE['size'] * 2

        self.library.extend_share(fake.SHARE, new_size)

        mock_set_volume_size.assert_called_once_with(fake.SHARE_NAME,
                                                     new_size)
        mock_adjust_qos_policy.assert_called_once_with(
            fake.SHARE, new_size, vserver_client)

    def test_shrink_share(self):
        vserver_client = mock.Mock()
        self.mock_object(self.library, '_get_vserver',
                         mock.Mock(return_value=(fake.VSERVER1,
                                                 vserver_client)))
        mock_adjust_qos_policy = self.mock_object(
            self.library, '_adjust_qos_policy_with_volume_resize')
        mock_set_volume_size = self.mock_object(vserver_client,
                                                'set_volume_size')
        new_size = fake.SHARE['size'] - 1

        self.library.shrink_share(fake.SHARE, new_size)

        mock_set_volume_size.assert_called_once_with(fake.SHARE_NAME,
                                                     new_size)
        mock_adjust_qos_policy.assert_called_once_with(
            fake.SHARE, new_size, vserver_client)

    def test_shrinking_possible_data_loss(self):
        naapi_error = self._mock_api_error(code=netapp_api.EVOLOPNOTSUPP,
                                           message='Possible data loss')
        vserver_client = mock.Mock()
        self.mock_object(self.library, '_get_vserver',
                         mock.Mock(return_value=(fake.VSERVER1,
                                                 vserver_client)))
        mock_set_volume_size = self.mock_object(
            vserver_client, 'set_volume_size', naapi_error)
        new_size = fake.SHARE['size'] - 1

        self.assertRaises(exception.ShareShrinkingPossibleDataLoss,
                          self.library.shrink_share, fake.SHARE, new_size)

        self.library._get_vserver.assert_called_once_with(share_server=None)
        mock_set_volume_size.assert_called_once_with(fake.SHARE_NAME,
                                                     new_size)

    def test_update_access(self):
        vserver_client = mock.Mock()
        mock_get_vserver = self.mock_object(
            self.library, '_get_vserver',
            mock.Mock(return_value=(fake.VSERVER1, vserver_client)))
        protocol_helper = mock.Mock()
        protocol_helper.update_access.return_value = None
        self.mock_object(self.library, '_get_helper',
                         mock.Mock(return_value=protocol_helper))
        mock_share_exists = self.mock_object(self.library, '_share_exists',
                                             mock.Mock(return_value=True))

        self.library.update_access(self.context,
                                   fake.SHARE,
                                   [fake.SHARE_ACCESS],
                                   [],
                                   [],
                                   share_server=fake.SHARE_SERVER)

        mock_get_vserver.assert_called_once_with(
            share_server=fake.SHARE_SERVER)
        share_name = self.library._get_backend_share_name(fake.SHARE['id'])
        mock_share_exists.assert_called_once_with(share_name, vserver_client)
        protocol_helper.set_client.assert_called_once_with(vserver_client)
        protocol_helper.update_access.assert_called_once_with(
            fake.SHARE, fake.SHARE_NAME, [fake.SHARE_ACCESS])

    @ddt.data(exception.InvalidInput(reason='fake_reason'),
              exception.VserverNotSpecified(),
              exception.VserverNotFound(vserver='fake_vserver'))
    def test_update_access_no_share_server(self, get_vserver_exception):
        mock_get_vserver = self.mock_object(
            self.library, '_get_vserver',
            mock.Mock(side_effect=get_vserver_exception))
        protocol_helper = mock.Mock()
        protocol_helper.update_access.return_value = None
        self.mock_object(self.library, '_get_helper',
                         mock.Mock(return_value=protocol_helper))
        mock_share_exists = self.mock_object(self.library, '_share_exists')

        self.library.update_access(self.context,
                                   fake.SHARE,
                                   [fake.SHARE_ACCESS],
                                   [],
                                   [],
                                   share_server=fake.SHARE_SERVER)

        mock_get_vserver.assert_called_once_with(
            share_server=fake.SHARE_SERVER)
        self.assertFalse(mock_share_exists.called)
        self.assertFalse(protocol_helper.set_client.called)
        self.assertFalse(protocol_helper.update_access.called)

    def test_update_access_share_not_found(self):
        vserver_client = mock.Mock()
        mock_get_vserver = self.mock_object(
            self.library, '_get_vserver',
            mock.Mock(return_value=(fake.VSERVER1, vserver_client)))
        protocol_helper = mock.Mock()
        protocol_helper.update_access.return_value = None
        self.mock_object(self.library, '_get_helper',
                         mock.Mock(return_value=protocol_helper))
        mock_share_exists = self.mock_object(self.library, '_share_exists',
                                             mock.Mock(return_value=False))

        self.assertRaises(exception.ShareResourceNotFound,
                          self.library.update_access,
                          self.context,
                          fake.SHARE,
                          [fake.SHARE_ACCESS],
                          [],
                          [],
                          share_server=fake.SHARE_SERVER)

        mock_get_vserver.assert_called_once_with(
            share_server=fake.SHARE_SERVER)
        share_name = self.library._get_backend_share_name(fake.SHARE['id'])
        mock_share_exists.assert_called_once_with(share_name, vserver_client)
        self.assertFalse(protocol_helper.set_client.called)
        self.assertFalse(protocol_helper.update_access.called)

    def test_update_access_to_active_replica(self):
        fake_share = copy.deepcopy(fake.SHARE)
        fake_share['replica_state'] = constants.REPLICA_STATE_ACTIVE
        vserver_client = mock.Mock()
        mock_get_vserver = self.mock_object(
            self.library, '_get_vserver',
            mock.Mock(return_value=(fake.VSERVER1, vserver_client)))
        protocol_helper = mock.Mock()
        protocol_helper.update_access.return_value = None
        self.mock_object(self.library, '_get_helper',
                         mock.Mock(return_value=protocol_helper))
        mock_share_exists = self.mock_object(self.library, '_share_exists',
                                             mock.Mock(return_value=True))

        self.library.update_access(self.context,
                                   fake_share,
                                   [fake.SHARE_ACCESS],
                                   [],
                                   [],
                                   share_server=fake.SHARE_SERVER)

        mock_get_vserver.assert_called_once_with(
            share_server=fake.SHARE_SERVER)
        share_name = self.library._get_backend_share_name(fake.SHARE['id'])
        mock_share_exists.assert_called_once_with(share_name, vserver_client)
        protocol_helper.set_client.assert_called_once_with(vserver_client)
        protocol_helper.update_access.assert_called_once_with(
            fake.SHARE, fake.SHARE_NAME, [fake.SHARE_ACCESS])

    def test_update_access_to_in_sync_replica(self):
        fake_share = copy.deepcopy(fake.SHARE)
        fake_share['replica_state'] = constants.REPLICA_STATE_IN_SYNC

        self.library.update_access(self.context,
                                   fake_share,
                                   [fake.SHARE_ACCESS],
                                   [],
                                   [],
                                   share_server=fake.SHARE_SERVER)

    def test_setup_server(self):
        self.assertRaises(NotImplementedError,
                          self.library.setup_server,
                          fake.NETWORK_INFO)

    def test_teardown_server(self):
        self.assertRaises(NotImplementedError,
                          self.library.teardown_server,
                          fake.SHARE_SERVER['backend_details'])

    def test_get_network_allocations_number(self):
        self.assertRaises(NotImplementedError,
                          self.library.get_network_allocations_number)

    def test_update_ssc_info(self):
        self.mock_object(self.library, '_find_matching_aggregates',
                         mock.Mock(return_value=fake.AGGREGATES))
        mock_update_ssc_aggr_info = self.mock_object(
            self.library, '_update_ssc_aggr_info')

        self.library._update_ssc_info()

        expected = {
            fake.AGGREGATES[0]: {
                'netapp_aggregate': fake.AGGREGATES[0],
            },
            fake.AGGREGATES[1]: {
                'netapp_aggregate': fake.AGGREGATES[1],
            }
        }
        self.assertDictEqual(expected, self.library._ssc_stats)
        self.assertTrue(mock_update_ssc_aggr_info.called)

    def test_update_ssc_info_no_aggregates(self):
        self.mock_object(self.library, '_find_matching_aggregates',
                         mock.Mock(return_value=[]))
        mock_update_ssc_aggr_info = self.mock_object(
            self.library, '_update_ssc_aggr_info')

        self.library._update_ssc_info()

        self.assertDictEqual({}, self.library._ssc_stats)
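        # NOTE: The SSC (storage service catalog) tests that follow verify
        # that per-aggregate capabilities (raid type, disk type, hybrid
        # aggregate flag) are collected for each matching aggregate and
        # merged into self._ssc_stats, and that nothing is queried at all
        # when cluster credentials are not available.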
self.assertFalse(mock_update_ssc_aggr_info.called) def test_update_ssc_aggr_info(self): self.library._have_cluster_creds = True mock_get_aggregate = self.mock_object( self.client, 'get_aggregate', mock.Mock(side_effect=fake.SSC_AGGREGATES)) mock_get_aggregate_disk_types = self.mock_object( self.client, 'get_aggregate_disk_types', mock.Mock(side_effect=fake.SSC_DISK_TYPES)) ssc_stats = { fake.AGGREGATES[0]: { 'netapp_aggregate': fake.AGGREGATES[0], }, fake.AGGREGATES[1]: { 'netapp_aggregate': fake.AGGREGATES[1], }, } self.library._update_ssc_aggr_info(fake.AGGREGATES, ssc_stats) self.assertDictEqual(fake.SSC_INFO, ssc_stats) mock_get_aggregate.assert_has_calls([ mock.call(fake.AGGREGATES[0]), mock.call(fake.AGGREGATES[1]), ]) mock_get_aggregate_disk_types.assert_has_calls([ mock.call(fake.AGGREGATES[0]), mock.call(fake.AGGREGATES[1]), ]) def test_update_ssc_aggr_info_not_found(self): self.library._have_cluster_creds = True self.mock_object(self.client, 'get_aggregate', mock.Mock(return_value={})) self.mock_object(self.client, 'get_aggregate_disk_types', mock.Mock(return_value=None)) ssc_stats = { fake.AGGREGATES[0]: {}, fake.AGGREGATES[1]: {}, } self.library._update_ssc_aggr_info(fake.AGGREGATES, ssc_stats) expected = { fake.AGGREGATES[0]: { 'netapp_raid_type': None, 'netapp_disk_type': None, 'netapp_hybrid_aggregate': None, }, fake.AGGREGATES[1]: { 'netapp_raid_type': None, 'netapp_disk_type': None, 'netapp_hybrid_aggregate': None, } } self.assertDictEqual(expected, ssc_stats) def test_update_ssc_aggr_info_no_cluster_creds(self): self.library._have_cluster_creds = False ssc_stats = {} self.library._update_ssc_aggr_info(fake.AGGREGATES, ssc_stats) self.assertDictEqual({}, ssc_stats) self.assertFalse(self.library._client.get_aggregate_raid_types.called) def test_create_replica(self): self.mock_object(self.library, '_allocate_container') mock_dm_session = mock.Mock() self.mock_object(data_motion, "DataMotionSession", mock.Mock(return_value=mock_dm_session)) self.mock_object(data_motion, 'get_client_for_backend') self.mock_object(mock_dm_session, 'get_vserver_from_share', mock.Mock(return_value=fake.VSERVER1)) expected_model_update = { 'export_locations': [], 'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC, 'access_rules_status': constants.STATUS_ACTIVE, } model_update = self.library.create_replica( None, [fake.SHARE], fake.SHARE, [], [], share_server=None) self.assertDictMatch(expected_model_update, model_update) mock_dm_session.create_snapmirror.assert_called_once_with(fake.SHARE, fake.SHARE) data_motion.get_client_for_backend.assert_called_once_with( fake.BACKEND_NAME, vserver_name=fake.VSERVER1) def test_create_replica_with_share_server(self): self.mock_object(self.library, '_allocate_container', mock.Mock()) mock_dm_session = mock.Mock() self.mock_object(data_motion, "DataMotionSession", mock.Mock(return_value=mock_dm_session)) self.mock_object(data_motion, 'get_client_for_backend') self.mock_object(mock_dm_session, 'get_vserver_from_share', mock.Mock(return_value=fake.VSERVER1)) expected_model_update = { 'export_locations': [], 'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC, 'access_rules_status': constants.STATUS_ACTIVE, } model_update = self.library.create_replica( None, [fake.SHARE], fake.SHARE, [], [], share_server=fake.SHARE_SERVER) self.assertDictMatch(expected_model_update, model_update) mock_dm_session.create_snapmirror.assert_called_once_with(fake.SHARE, fake.SHARE) data_motion.get_client_for_backend.assert_called_once_with( fake.BACKEND_NAME, 
vserver_name=fake.VSERVER1) def test_delete_replica(self): active_replica = fake_replica( replica_state=constants.REPLICA_STATE_ACTIVE) replica_1 = fake_replica( replica_state=constants.REPLICA_STATE_IN_SYNC, host=fake.MANILA_HOST_NAME) replica_2 = fake_replica( replica_state=constants.REPLICA_STATE_OUT_OF_SYNC) replica_list = [active_replica, replica_1, replica_2] self.mock_object(self.library, '_deallocate_container', mock.Mock()) self.mock_object(self.library, '_share_exists', mock.Mock(return_value=False)) mock_dm_session = mock.Mock() self.mock_object(data_motion, "DataMotionSession", mock.Mock(return_value=mock_dm_session)) self.mock_object(data_motion, 'get_client_for_backend') self.mock_object(mock_dm_session, 'get_vserver_from_share', mock.Mock(return_value=fake.VSERVER1)) result = self.library.delete_replica(None, replica_list, replica_1, [], share_server=None) self.assertIsNone(result) mock_dm_session.delete_snapmirror.assert_has_calls([ mock.call(active_replica, replica_1), mock.call(replica_2, replica_1), mock.call(replica_1, replica_2), mock.call(replica_1, active_replica)], any_order=True) self.assertEqual(4, mock_dm_session.delete_snapmirror.call_count) data_motion.get_client_for_backend.assert_called_with( fake.BACKEND_NAME, vserver_name=mock.ANY) self.assertEqual(1, data_motion.get_client_for_backend.call_count) def test_delete_replica_with_share_server(self): active_replica = fake_replica( replica_state=constants.REPLICA_STATE_ACTIVE) replica = fake_replica(replica_state=constants.REPLICA_STATE_IN_SYNC, host=fake.MANILA_HOST_NAME) replica_list = [active_replica, replica] self.mock_object(self.library, '_deallocate_container', mock.Mock()) self.mock_object(self.library, '_share_exists', mock.Mock(return_value=False)) mock_dm_session = mock.Mock() self.mock_object(data_motion, "DataMotionSession", mock.Mock(return_value=mock_dm_session)) self.mock_object(data_motion, 'get_client_for_backend') self.mock_object(mock_dm_session, 'get_vserver_from_share', mock.Mock(return_value=fake.VSERVER1)) result = self.library.delete_replica(None, replica_list, replica, [], share_server=fake.SHARE_SERVER) self.assertIsNone(result) mock_dm_session.delete_snapmirror.assert_has_calls([ mock.call(active_replica, replica), mock.call(replica, active_replica)], any_order=True) data_motion.get_client_for_backend.assert_called_once_with( fake.BACKEND_NAME, vserver_name=fake.VSERVER1) def test_delete_replica_share_absent_on_backend(self): active_replica = fake_replica( replica_state=constants.REPLICA_STATE_ACTIVE) replica = fake_replica(replica_state=constants.REPLICA_STATE_IN_SYNC, host=fake.MANILA_HOST_NAME) replica_list = [active_replica, replica] self.mock_object(self.library, '_deallocate_container', mock.Mock()) self.mock_object(self.library, '_share_exists', mock.Mock(return_value=False)) mock_dm_session = mock.Mock() self.mock_object(data_motion, "DataMotionSession", mock.Mock(return_value=mock_dm_session)) self.mock_object(data_motion, 'get_client_for_backend') self.mock_object(mock_dm_session, 'get_vserver_from_share', mock.Mock(return_value=fake.VSERVER1)) result = self.library.delete_replica(None, replica_list, replica, [], share_server=None) self.assertIsNone(result) self.assertFalse(self.library._deallocate_container.called) mock_dm_session.delete_snapmirror.assert_has_calls([ mock.call(active_replica, replica), mock.call(replica, active_replica)], any_order=True) data_motion.get_client_for_backend.assert_called_with( fake.BACKEND_NAME, vserver_name=mock.ANY) self.assertEqual(1, 
data_motion.get_client_for_backend.call_count) def test_update_replica_state_no_snapmirror_share_creating(self): vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock(return_value=[]) replica = copy.deepcopy(fake.SHARE) replica['status'] = constants.STATUS_CREATING result = self.library.update_replica_state( None, [replica], replica, None, [], share_server=None) self.assertFalse(self.mock_dm_session.create_snapmirror.called) self.assertEqual(constants.STATUS_OUT_OF_SYNC, result) def test_update_replica_state_share_reverting_to_snapshot(self): vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock(return_value=[]) replica = copy.deepcopy(fake.SHARE) replica['status'] = constants.STATUS_REVERTING result = self.library.update_replica_state( None, [replica], replica, None, [], share_server=None) self.assertFalse(self.mock_dm_session.get_snapmirrors.called) self.assertFalse(self.mock_dm_session.create_snapmirror.called) self.assertIsNone(result) def test_update_replica_state_no_snapmirror_create_failed(self): vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock(return_value=[]) self.mock_dm_session.create_snapmirror.side_effect = ( netapp_api.NaApiError(code=0)) replica = copy.deepcopy(fake.SHARE) replica['status'] = constants.REPLICA_STATE_OUT_OF_SYNC result = self.library.update_replica_state( None, [replica], replica, None, [], share_server=None) self.assertTrue(self.mock_dm_session.create_snapmirror.called) self.assertEqual(constants.STATUS_ERROR, result) @ddt.data(constants.STATUS_ERROR, constants.STATUS_AVAILABLE) def test_update_replica_state_no_snapmirror(self, status): vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock(return_value=[]) replica = copy.deepcopy(fake.SHARE) replica['status'] = status result = self.library.update_replica_state( None, [replica], replica, None, [], share_server=None) self.assertEqual(1, self.mock_dm_session.create_snapmirror.call_count) self.assertEqual(constants.STATUS_OUT_OF_SYNC, result) def test_update_replica_state_broken_snapmirror(self): fake_snapmirror = { 'mirror-state': 'broken-off', 'relationship-status': 'idle', 'source-vserver': fake.VSERVER2, 'source-volume': 'fake_volume', 'last-transfer-end-timestamp': '%s' % float(time.time() - 10000) } vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock( return_value=[fake_snapmirror]) result = self.library.update_replica_state(None, [fake.SHARE], fake.SHARE, None, [], share_server=None) vserver_client.resync_snapmirror.assert_called_once_with( fake.VSERVER2, 'fake_volume', 
fake.VSERVER1, fake.SHARE['name'] ) self.assertEqual(constants.REPLICA_STATE_OUT_OF_SYNC, result) def test_update_replica_state_snapmirror_still_initializing(self): fake_snapmirror = { 'mirror-state': 'uninitialized', 'relationship-status': 'transferring', 'source-vserver': fake.VSERVER2, 'source-volume': 'fake_volume', 'last-transfer-end-timestamp': '%s' % float(time.time() - 10000) } vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock( return_value=[fake_snapmirror]) result = self.library.update_replica_state(None, [fake.SHARE], fake.SHARE, None, [], share_server=None) self.assertEqual(constants.REPLICA_STATE_OUT_OF_SYNC, result) def test_update_replica_state_fail_to_get_snapmirrors(self): vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors.side_effect = ( netapp_api.NaApiError(code=0)) result = self.library.update_replica_state(None, [fake.SHARE], fake.SHARE, None, [], share_server=None) self.assertTrue(self.mock_dm_session.get_snapmirrors.called) self.assertEqual(constants.STATUS_ERROR, result) def test_update_replica_state_broken_snapmirror_resync_error(self): fake_snapmirror = { 'mirror-state': 'broken-off', 'relationship-status': 'idle', 'source-vserver': fake.VSERVER2, 'source-volume': 'fake_volume', 'last-transfer-end-timestamp': '%s' % float(time.time() - 10000) } vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock( return_value=[fake_snapmirror]) vserver_client.resync_snapmirror.side_effect = netapp_api.NaApiError result = self.library.update_replica_state(None, [fake.SHARE], fake.SHARE, None, [], share_server=None) vserver_client.resync_snapmirror.assert_called_once_with( fake.VSERVER2, 'fake_volume', fake.VSERVER1, fake.SHARE['name'] ) self.assertEqual(constants.STATUS_ERROR, result) def test_update_replica_state_stale_snapmirror(self): fake_snapmirror = { 'mirror-state': 'snapmirrored', 'last-transfer-end-timestamp': '%s' % float( timeutils.utcnow_ts() - 10000) } vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock( return_value=[fake_snapmirror]) result = self.library.update_replica_state(None, [fake.SHARE], fake.SHARE, None, [], share_server=None) self.assertEqual(constants.REPLICA_STATE_OUT_OF_SYNC, result) def test_update_replica_state_in_sync(self): fake_snapmirror = { 'mirror-state': 'snapmirrored', 'relationship-status': 'idle', 'last-transfer-end-timestamp': '%s' % float(time.time()) } vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock( return_value=[fake_snapmirror]) result = self.library.update_replica_state(None, [fake.SHARE], fake.SHARE, None, 
[], share_server=None) self.assertEqual(constants.REPLICA_STATE_IN_SYNC, result) def test_update_replica_state_backend_volume_absent(self): vserver_client = mock.Mock() self.mock_object(vserver_client, 'volume_exists', mock.Mock(return_value=False)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.assertRaises(exception.ShareResourceNotFound, self.library.update_replica_state, None, [fake.SHARE], fake.SHARE, None, [], share_server=None) def test_update_replica_state_in_sync_with_snapshots(self): fake_snapmirror = { 'mirror-state': 'snapmirrored', 'relationship-status': 'idle', 'last-transfer-end-timestamp': '%s' % float(time.time()) } fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['share_id'] = fake.SHARE['id'] snapshots = [{'share_replica_snapshot': fake_snapshot}] vserver_client = mock.Mock() self.mock_object(vserver_client, 'snapshot_exists', mock.Mock( return_value=True)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock( return_value=[fake_snapmirror]) result = self.library.update_replica_state(None, [fake.SHARE], fake.SHARE, None, snapshots, share_server=None) self.assertEqual(constants.REPLICA_STATE_IN_SYNC, result) def test_update_replica_state_missing_snapshot(self): fake_snapmirror = { 'mirror-state': 'snapmirrored', 'relationship-status': 'idle', 'last-transfer-end-timestamp': '%s' % float(time.time()) } fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['share_id'] = fake.SHARE['id'] snapshots = [{'share_replica_snapshot': fake_snapshot}] vserver_client = mock.Mock() self.mock_object(vserver_client, 'snapshot_exists', mock.Mock( return_value=False)) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.get_snapmirrors = mock.Mock( return_value=[fake_snapmirror]) result = self.library.update_replica_state(None, [fake.SHARE], fake.SHARE, None, snapshots, share_server=None) self.assertEqual(constants.REPLICA_STATE_OUT_OF_SYNC, result) def test_promote_replica(self): self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_helper', mock.Mock(return_value=mock.Mock())) self.mock_object(self.library, '_create_export', mock.Mock(return_value='fake_export_location')) self.mock_object(self.library, '_unmount_orig_active_replica') self.mock_object(self.library, '_handle_qos_on_replication_change') mock_dm_session = mock.Mock() self.mock_object(data_motion, "DataMotionSession", mock.Mock(return_value=mock_dm_session)) self.mock_object(mock_dm_session, 'get_vserver_from_share', mock.Mock(return_value=fake.VSERVER1)) replicas = self.library.promote_replica( None, [self.fake_replica, self.fake_replica_2], self.fake_replica_2, [], share_server=None) mock_dm_session.change_snapmirror_source.assert_called_once_with( self.fake_replica, self.fake_replica, self.fake_replica_2, mock.ANY ) self.assertEqual(2, len(replicas)) actual_replica_1 = list(filter( lambda x: x['id'] == self.fake_replica['id'], replicas))[0] self.assertEqual(constants.REPLICA_STATE_OUT_OF_SYNC, actual_replica_1['replica_state']) actual_replica_2 = list(filter( lambda x: x['id'] == self.fake_replica_2['id'], replicas))[0] self.assertEqual(constants.REPLICA_STATE_ACTIVE, actual_replica_2['replica_state']) self.assertEqual('fake_export_location', actual_replica_2['export_locations']) 
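        # NOTE: promote_replica is expected to swap roles in the returned
        # model updates: the former active replica is reported as
        # 'out_of_sync' while the promoted replica becomes 'active' and
        # carries the newly created export location.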
self.assertEqual(constants.STATUS_ACTIVE, actual_replica_2['access_rules_status']) self.library._unmount_orig_active_replica.assert_called_once_with( self.fake_replica, fake.VSERVER1) self.library._handle_qos_on_replication_change.assert_called_once() def test_promote_replica_destination_unreachable(self): self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_helper', mock.Mock(return_value=mock.Mock())) self.mock_object(self.library, '_unmount_orig_active_replica') self.mock_object(self.library, '_handle_qos_on_replication_change') self.mock_object(self.library, '_create_export', mock.Mock(return_value='fake_export_location')) self.mock_object( self.library, '_convert_destination_replica_to_independent', mock.Mock(side_effect=exception.StorageCommunicationException)) replicas = self.library.promote_replica( None, [self.fake_replica, self.fake_replica_2], self.fake_replica_2, [], share_server=None) self.assertEqual(1, len(replicas)) actual_replica = replicas[0] self.assertEqual(constants.STATUS_ERROR, actual_replica['replica_state']) self.assertEqual(constants.STATUS_ERROR, actual_replica['status']) self.assertFalse( self.library._unmount_orig_active_replica.called) self.assertFalse( self.library._handle_qos_on_replication_change.called) def test_promote_replica_more_than_two_replicas(self): fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 fake_replica_3['replica_state'] = constants.REPLICA_STATE_OUT_OF_SYNC self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_unmount_orig_active_replica') self.mock_object(self.library, '_handle_qos_on_replication_change') self.mock_object(self.library, '_get_helper', mock.Mock(return_value=mock.Mock())) self.mock_object(self.library, '_create_export', mock.Mock(return_value='fake_export_location')) mock_dm_session = mock.Mock() self.mock_object(data_motion, "DataMotionSession", mock.Mock(return_value=mock_dm_session)) self.mock_object(mock_dm_session, 'get_vserver_from_share', mock.Mock(return_value=fake.VSERVER1)) replicas = self.library.promote_replica( None, [self.fake_replica, self.fake_replica_2, fake_replica_3], self.fake_replica_2, [], share_server=None) mock_dm_session.change_snapmirror_source.assert_has_calls([ mock.call(fake_replica_3, self.fake_replica, self.fake_replica_2, mock.ANY), mock.call(self.fake_replica, self.fake_replica, self.fake_replica_2, mock.ANY) ], any_order=True) self.assertEqual(3, len(replicas)) actual_replica_1 = list(filter( lambda x: x['id'] == self.fake_replica['id'], replicas))[0] self.assertEqual(constants.REPLICA_STATE_OUT_OF_SYNC, actual_replica_1['replica_state']) actual_replica_2 = list(filter( lambda x: x['id'] == self.fake_replica_2['id'], replicas))[0] self.assertEqual(constants.REPLICA_STATE_ACTIVE, actual_replica_2['replica_state']) self.assertEqual('fake_export_location', actual_replica_2['export_locations']) actual_replica_3 = list(filter( lambda x: x['id'] == fake_replica_3['id'], replicas))[0] self.assertEqual(constants.REPLICA_STATE_OUT_OF_SYNC, actual_replica_3['replica_state']) self.library._unmount_orig_active_replica.assert_called_once_with( self.fake_replica, fake.VSERVER1) self.library._handle_qos_on_replication_change.assert_called_once() def test_promote_replica_with_access_rules(self): self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) 
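        # NOTE: Promotion must also re-apply the share's access rules on
        # the newly active replica; the protocol helper's update_access()
        # is expected to receive the promoted replica's backend share name.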
        self.mock_object(self.library, '_unmount_orig_active_replica')
        self.mock_object(self.library, '_handle_qos_on_replication_change')
        mock_helper = mock.Mock()
        self.mock_object(self.library, '_get_helper',
                         mock.Mock(return_value=mock_helper))
        self.mock_object(self.library, '_create_export',
                         mock.Mock(return_value='fake_export_location'))

        mock_dm_session = mock.Mock()
        self.mock_object(data_motion, "DataMotionSession",
                         mock.Mock(return_value=mock_dm_session))
        self.mock_object(mock_dm_session, 'get_vserver_from_share',
                         mock.Mock(return_value=fake.VSERVER1))

        replicas = self.library.promote_replica(
            None, [self.fake_replica, self.fake_replica_2],
            self.fake_replica_2, [fake.SHARE_ACCESS], share_server=None)

        mock_dm_session.change_snapmirror_source.assert_has_calls([
            mock.call(self.fake_replica, self.fake_replica,
                      self.fake_replica_2, mock.ANY)
        ], any_order=True)
        self.assertEqual(2, len(replicas))
        share_name = self.library._get_backend_share_name(
            self.fake_replica_2['id'])
        mock_helper.update_access.assert_called_once_with(
            self.fake_replica_2, share_name, [fake.SHARE_ACCESS])
        self.library._unmount_orig_active_replica.assert_called_once_with(
            self.fake_replica, fake.VSERVER1)
        self.library._handle_qos_on_replication_change.assert_called_once()

    def test_unmount_orig_active_replica(self):
        self.mock_object(share_utils, 'extract_host', mock.Mock(
            return_value=fake.MANILA_HOST_NAME))
        self.mock_object(data_motion, 'get_client_for_backend')
        self.mock_object(self.library, '_get_backend_share_name', mock.Mock(
            return_value=fake.SHARE_NAME))

        result = self.library._unmount_orig_active_replica(fake.SHARE)

        self.assertIsNone(result)

    @ddt.data({'extra_specs': {'netapp:snapshot_policy': 'none'},
               'have_cluster_creds': True},
              # Test Case 2 isn't possible input
              {'extra_specs': {'qos': True, 'netapp:maxiops': '3000'},
               'have_cluster_creds': False})
    @ddt.unpack
    def test_handle_qos_on_replication_change_nothing_to_handle(
            self, extra_specs, have_cluster_creds):

        self.library._have_cluster_creds = have_cluster_creds
        self.mock_object(lib_base.LOG, 'exception')
        self.mock_object(lib_base.LOG, 'info')
        self.mock_object(share_types, 'get_extra_specs_from_share',
                         mock.Mock(return_value=extra_specs))

        retval = self.library._handle_qos_on_replication_change(
            self.mock_dm_session, self.fake_replica_2, self.fake_replica,
            share_server=fake.SHARE_SERVER)

        self.assertIsNone(retval)
        lib_base.LOG.exception.assert_not_called()
        lib_base.LOG.info.assert_not_called()

    def test_handle_qos_on_replication_change_exception(self):
        self.library._have_cluster_creds = True
        extra_specs = {'qos': True, fake.QOS_EXTRA_SPEC: '3000'}
        vserver_client = mock.Mock()
        self.mock_object(lib_base.LOG, 'exception')
        self.mock_object(lib_base.LOG, 'info')
        self.mock_object(share_types, 'get_extra_specs_from_share',
                         mock.Mock(return_value=extra_specs))
        self.mock_object(self.library, '_get_vserver', mock.Mock(
            return_value=(fake.VSERVER1, vserver_client)))
        self.mock_object(self.library._client, 'qos_policy_group_exists',
                         mock.Mock(return_value=True))
        self.mock_object(self.library._client, 'qos_policy_group_modify',
                         mock.Mock(side_effect=netapp_api.NaApiError))

        retval = self.library._handle_qos_on_replication_change(
            self.mock_dm_session, self.fake_replica_2, self.fake_replica,
            share_server=fake.SHARE_SERVER)

        self.assertIsNone(retval)
        (self.mock_dm_session.remove_qos_on_old_active_replica
         .assert_called_once_with(self.fake_replica))
        lib_base.LOG.exception.assert_called_once()
        lib_base.LOG.info.assert_not_called()
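        # NOTE: The QoS tests below depend on the driver's policy-group
        # naming convention (the backend volume name prefixed with 'qos_')
        # and on the NetApp throughput syntax, e.g. a 'maxiops' spec of
        # '3000' is rendered as '3000iops'.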
vserver_client.set_qos_policy_group_for_volume.assert_not_called() def test_handle_qos_on_replication_change_modify_existing_policy(self): self.library._have_cluster_creds = True extra_specs = {'qos': True, fake.QOS_EXTRA_SPEC: '3000'} vserver_client = mock.Mock() volume_name_on_backend = self.library._get_backend_share_name( self.fake_replica_2['id']) self.mock_object(lib_base.LOG, 'exception') self.mock_object(lib_base.LOG, 'info') self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=extra_specs)) self.mock_object(self.library, '_get_vserver', mock.Mock( return_value=(fake.VSERVER1, vserver_client))) self.mock_object(self.library._client, 'qos_policy_group_exists', mock.Mock(return_value=True)) self.mock_object(self.library._client, 'qos_policy_group_modify') self.mock_object(self.library, '_create_qos_policy_group') retval = self.library._handle_qos_on_replication_change( self.mock_dm_session, self.fake_replica_2, self.fake_replica, share_server=fake.SHARE_SERVER) self.assertIsNone(retval) self.library._client.qos_policy_group_modify.assert_called_once_with( 'qos_' + volume_name_on_backend, '3000iops') vserver_client.set_qos_policy_group_for_volume.assert_called_once_with( volume_name_on_backend, 'qos_' + volume_name_on_backend) self.library._create_qos_policy_group.assert_not_called() lib_base.LOG.exception.assert_not_called() lib_base.LOG.info.assert_called_once() def test_handle_qos_on_replication_change_create_new_policy(self): self.library._have_cluster_creds = True extra_specs = {'qos': True, fake.QOS_EXTRA_SPEC: '3000'} vserver_client = mock.Mock() self.mock_object(lib_base.LOG, 'exception') self.mock_object(lib_base.LOG, 'info') self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=extra_specs)) self.mock_object(self.library, '_get_vserver', mock.Mock( return_value=(fake.VSERVER1, vserver_client))) self.mock_object(self.library._client, 'qos_policy_group_exists', mock.Mock(return_value=False)) self.mock_object(self.library._client, 'qos_policy_group_modify') self.mock_object(self.library, '_create_qos_policy_group') retval = self.library._handle_qos_on_replication_change( self.mock_dm_session, self.fake_replica_2, self.fake_replica, share_server=fake.SHARE_SERVER) self.assertIsNone(retval) self.library._create_qos_policy_group.assert_called_once_with( self.fake_replica_2, fake.VSERVER1, {'maxiops': '3000'}) self.library._client.qos_policy_group_modify.assert_not_called() lib_base.LOG.exception.assert_not_called() lib_base.LOG.info.assert_called_once() def test_convert_destination_replica_to_independent(self): self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_helper', mock.Mock(return_value=mock.Mock())) self.mock_object(self.library, '_create_export', mock.Mock(return_value='fake_export_location')) replica = self.library._convert_destination_replica_to_independent( None, self.mock_dm_session, self.fake_replica, self.fake_replica_2, [], share_server=None) self.mock_dm_session.update_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2) self.mock_dm_session.break_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2) self.assertEqual('fake_export_location', replica['export_locations']) self.assertEqual(constants.REPLICA_STATE_ACTIVE, replica['replica_state']) def test_convert_destination_replica_to_independent_update_failed(self): self.mock_object(self.library, '_get_vserver', 
mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_helper', mock.Mock(return_value=mock.Mock())) self.mock_object(self.library, '_create_export', mock.Mock(return_value='fake_export_location')) self.mock_object( self.mock_dm_session, 'update_snapmirror', mock.Mock(side_effect=exception.StorageCommunicationException)) replica = self.library._convert_destination_replica_to_independent( None, self.mock_dm_session, self.fake_replica, self.fake_replica_2, [], share_server=None) self.mock_dm_session.update_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2) self.mock_dm_session.break_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2) self.assertEqual('fake_export_location', replica['export_locations']) self.assertEqual(constants.REPLICA_STATE_ACTIVE, replica['replica_state']) def test_promote_replica_fail_to_set_access_rules(self): fake_helper = mock.Mock() fake_helper.update_access.side_effect = Exception fake_access_rules = [ {'access_to': "0.0.0.0", 'access_level': constants.ACCESS_LEVEL_RO}, {'access_to': "10.10.10.10", 'access_level': constants.ACCESS_LEVEL_RW}, ] self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_handle_qos_on_replication_change') self.mock_object(self.library, '_get_helper', mock.Mock(return_value=fake_helper)) self.mock_object(self.library, '_create_export', mock.Mock(return_value='fake_export_location')) replicas = self.library.promote_replica( None, [self.fake_replica, self.fake_replica_2], self.fake_replica_2, fake_access_rules, share_server=None) self.mock_dm_session.change_snapmirror_source.assert_called_once_with( self.fake_replica, self.fake_replica, self.fake_replica_2, mock.ANY ) self.assertEqual(2, len(replicas)) actual_replica_1 = list(filter( lambda x: x['id'] == self.fake_replica['id'], replicas))[0] self.assertEqual(constants.REPLICA_STATE_OUT_OF_SYNC, actual_replica_1['replica_state']) actual_replica_2 = list(filter( lambda x: x['id'] == self.fake_replica_2['id'], replicas))[0] self.assertEqual(constants.REPLICA_STATE_ACTIVE, actual_replica_2['replica_state']) self.assertEqual('fake_export_location', actual_replica_2['export_locations']) self.assertEqual(constants.SHARE_INSTANCE_RULES_SYNCING, actual_replica_2['access_rules_status']) self.library._handle_qos_on_replication_change.assert_called_once() def test_convert_destination_replica_to_independent_with_access_rules( self): fake_helper = mock.Mock() fake_helper.update_access.side_effect = Exception fake_access_rules = [ {'access_to': "0.0.0.0", 'access_level': constants.ACCESS_LEVEL_RO}, {'access_to': "10.10.10.10", 'access_level': constants.ACCESS_LEVEL_RW}, ] self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_helper', mock.Mock(return_value=fake_helper)) self.mock_object(self.library, '_create_export', mock.Mock(return_value='fake_export_location')) replica = self.library._convert_destination_replica_to_independent( None, self.mock_dm_session, self.fake_replica, self.fake_replica_2, fake_access_rules, share_server=None) self.mock_dm_session.update_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2) self.mock_dm_session.break_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2) self.assertEqual('fake_export_location', replica['export_locations']) self.assertEqual(constants.REPLICA_STATE_ACTIVE, 
replica['replica_state']) self.assertEqual(constants.SHARE_INSTANCE_RULES_SYNCING, replica['access_rules_status']) def test_convert_destination_replica_to_independent_failed_access_rules( self): fake_helper = mock.Mock() fake_access_rules = [ {'access_to': "0.0.0.0", 'access_level': constants.ACCESS_LEVEL_RO}, {'access_to': "10.10.10.10", 'access_level': constants.ACCESS_LEVEL_RW}, ] self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_helper', mock.Mock(return_value=fake_helper)) self.mock_object(self.library, '_create_export', mock.Mock(return_value='fake_export_location')) replica = self.library._convert_destination_replica_to_independent( None, self.mock_dm_session, self.fake_replica, self.fake_replica_2, fake_access_rules, share_server=None) self.mock_dm_session.update_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2) self.mock_dm_session.break_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2) fake_helper.assert_has_calls([ mock.call.set_client(mock.ANY), mock.call.update_access(mock.ANY, mock.ANY, fake_access_rules), ]) self.assertEqual('fake_export_location', replica['export_locations']) self.assertEqual(constants.REPLICA_STATE_ACTIVE, replica['replica_state']) self.assertEqual(constants.STATUS_ACTIVE, replica['access_rules_status']) def test_safe_change_replica_source(self): fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 fake_replica_3['replica_state'] = constants.REPLICA_STATE_OUT_OF_SYNC replica = self.library._safe_change_replica_source( self.mock_dm_session, self.fake_replica, self.fake_replica_2, fake_replica_3, [self.fake_replica, self.fake_replica_2, fake_replica_3] ) self.assertEqual([], replica['export_locations']) self.assertEqual(constants.REPLICA_STATE_OUT_OF_SYNC, replica['replica_state']) def test_safe_change_replica_source_destination_unreachable(self): self.mock_dm_session.change_snapmirror_source.side_effect = ( exception.StorageCommunicationException ) fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 fake_replica_3['replica_state'] = constants.REPLICA_STATE_OUT_OF_SYNC replica = self.library._safe_change_replica_source( self.mock_dm_session, self.fake_replica, self.fake_replica_2, fake_replica_3, [self.fake_replica, self.fake_replica_2, fake_replica_3] ) self.assertEqual([], replica['export_locations']) self.assertEqual(constants.STATUS_ERROR, replica['replica_state']) self.assertEqual(constants.STATUS_ERROR, replica['status']) def test_safe_change_replica_source_error(self): self.mock_dm_session.change_snapmirror_source.side_effect = ( netapp_api.NaApiError(code=0) ) fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 fake_replica_3['replica_state'] = constants.REPLICA_STATE_OUT_OF_SYNC replica = self.library._safe_change_replica_source( self.mock_dm_session, self.fake_replica, self.fake_replica_2, fake_replica_3, [self.fake_replica, self.fake_replica_2, fake_replica_3] ) self.assertEqual([], replica['export_locations']) self.assertEqual(constants.STATUS_ERROR, replica['replica_state']) def test_create_replicated_snapshot(self): fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 replica_list = [self.fake_replica, self.fake_replica_2, fake_replica_3] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['share_id'] = self.fake_replica['id'] fake_snapshot_2 = 
copy.deepcopy(fake.SNAPSHOT) fake_snapshot_2['id'] = uuidutils.generate_uuid() fake_snapshot_2['share_id'] = self.fake_replica_2['id'] fake_snapshot_3 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_3['id'] = uuidutils.generate_uuid() fake_snapshot_3['share_id'] = fake_replica_3['id'] snapshot_list = [fake_snapshot, fake_snapshot_2, fake_snapshot_3] vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) model_list = self.library.create_replicated_snapshot( self.context, replica_list, snapshot_list, share_server=fake.SHARE_SERVER) share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) vserver_client.create_snapshot.assert_called_once_with(share_name, snapshot_name) self.assertEqual(3, len(model_list)) for snapshot in model_list: self.assertEqual(snapshot['provider_location'], snapshot_name) actual_active_snapshot = list(filter( lambda x: x['id'] == fake_snapshot['id'], model_list))[0] self.assertEqual(constants.STATUS_AVAILABLE, actual_active_snapshot['status']) actual_non_active_snapshot_list = list(filter( lambda x: x['id'] != fake_snapshot['id'], model_list)) for snapshot in actual_non_active_snapshot_list: self.assertEqual(constants.STATUS_CREATING, snapshot['status']) self.mock_dm_session.update_snapmirror.assert_has_calls( [mock.call(self.fake_replica, self.fake_replica_2), mock.call(self.fake_replica, fake_replica_3)], any_order=True ) def test_create_replicated_snapshot_with_creating_replica(self): fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 fake_replica_3['host'] = None replica_list = [self.fake_replica, self.fake_replica_2, fake_replica_3] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['share_id'] = self.fake_replica['id'] fake_snapshot_2 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_2['id'] = uuidutils.generate_uuid() fake_snapshot_2['share_id'] = self.fake_replica_2['id'] fake_snapshot_3 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_3['id'] = uuidutils.generate_uuid() fake_snapshot_3['share_id'] = fake_replica_3['id'] snapshot_list = [fake_snapshot, fake_snapshot_2, fake_snapshot_3] vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) model_list = self.library.create_replicated_snapshot( self.context, replica_list, snapshot_list, share_server=fake.SHARE_SERVER) share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) vserver_client.create_snapshot.assert_called_once_with(share_name, snapshot_name) self.assertEqual(3, len(model_list)) for snapshot in model_list: self.assertEqual(snapshot['provider_location'], snapshot_name) actual_active_snapshot = list(filter( lambda x: x['id'] == fake_snapshot['id'], model_list))[0] self.assertEqual(constants.STATUS_AVAILABLE, actual_active_snapshot['status']) actual_non_active_snapshot_list = list(filter( lambda x: x['id'] != fake_snapshot['id'], model_list)) for snapshot in actual_non_active_snapshot_list: self.assertEqual(constants.STATUS_CREATING, snapshot['status']) self.mock_dm_session.update_snapmirror.assert_has_calls( [mock.call(self.fake_replica, self.fake_replica_2)], any_order=True ) def test_create_replicated_snapshot_no_snapmirror(self): self.mock_dm_session.update_snapmirror.side_effect = [ None, 
netapp_api.NaApiError(code=netapp_api.EOBJECTNOTFOUND) ] fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 replica_list = [self.fake_replica, self.fake_replica_2, fake_replica_3] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['share_id'] = self.fake_replica['id'] fake_snapshot_2 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_2['id'] = uuidutils.generate_uuid() fake_snapshot_2['share_id'] = self.fake_replica_2['id'] fake_snapshot_3 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_3['id'] = uuidutils.generate_uuid() fake_snapshot_3['share_id'] = fake_replica_3['id'] snapshot_list = [fake_snapshot, fake_snapshot_2, fake_snapshot_3] vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) model_list = self.library.create_replicated_snapshot( self.context, replica_list, snapshot_list, share_server=fake.SHARE_SERVER) share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) vserver_client.create_snapshot.assert_called_once_with(share_name, snapshot_name) self.assertEqual(3, len(model_list)) for snapshot in model_list: self.assertEqual(snapshot['provider_location'], snapshot_name) actual_active_snapshot = list(filter( lambda x: x['id'] == fake_snapshot['id'], model_list))[0] self.assertEqual(constants.STATUS_AVAILABLE, actual_active_snapshot['status']) actual_non_active_snapshot_list = list(filter( lambda x: x['id'] != fake_snapshot['id'], model_list)) for snapshot in actual_non_active_snapshot_list: self.assertEqual(constants.STATUS_CREATING, snapshot['status']) self.mock_dm_session.update_snapmirror.assert_has_calls( [mock.call(self.fake_replica, self.fake_replica_2), mock.call(self.fake_replica, fake_replica_3)], any_order=True ) def test_create_replicated_snapshot_update_error(self): self.mock_dm_session.update_snapmirror.side_effect = [ None, netapp_api.NaApiError() ] fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 replica_list = [self.fake_replica, self.fake_replica_2, fake_replica_3] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['share_id'] = self.fake_replica['id'] fake_snapshot_2 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_2['id'] = uuidutils.generate_uuid() fake_snapshot_2['share_id'] = self.fake_replica_2['id'] fake_snapshot_3 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_3['id'] = uuidutils.generate_uuid() fake_snapshot_3['share_id'] = fake_replica_3['id'] snapshot_list = [fake_snapshot, fake_snapshot_2, fake_snapshot_3] vserver_client = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.assertRaises(netapp_api.NaApiError, self.library.create_replicated_snapshot, self.context, replica_list, snapshot_list, share_server=fake.SHARE_SERVER) def test_delete_replicated_snapshot(self): fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 replica_list = [self.fake_replica, self.fake_replica_2, fake_replica_3] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['share_id'] = self.fake_replica['id'] share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name fake_snapshot_2 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_2['id'] = uuidutils.generate_uuid() 
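        # NOTE: Deleting a replicated snapshot only touches the active
        # replica's backend volume; the SnapMirror relationships to the
        # remaining replicas are then updated so the deletion propagates,
        # and a missing relationship (EOBJECTNOTFOUND) is tolerated.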
        fake_snapshot_2['share_id'] = self.fake_replica_2['id']
        fake_snapshot_2['provider_location'] = snapshot_name
        fake_snapshot_3 = copy.deepcopy(fake.SNAPSHOT)
        fake_snapshot_3['id'] = uuidutils.generate_uuid()
        fake_snapshot_3['share_id'] = fake_replica_3['id']
        fake_snapshot_3['provider_location'] = snapshot_name
        snapshot_list = [fake_snapshot, fake_snapshot_2, fake_snapshot_3]

        vserver_client = mock.Mock()
        vserver_client.get_snapshot.return_value = fake.CDOT_SNAPSHOT
        self.mock_object(self.library, '_get_vserver',
                         mock.Mock(return_value=(fake.VSERVER1,
                                                 vserver_client)))

        self.library.delete_replicated_snapshot(
            self.context, replica_list, snapshot_list,
            share_server=fake.SHARE_SERVER)

        vserver_client.delete_snapshot.assert_called_once_with(share_name,
                                                               snapshot_name)
        self.mock_dm_session.update_snapmirror.assert_has_calls(
            [mock.call(self.fake_replica, self.fake_replica_2),
             mock.call(self.fake_replica, fake_replica_3)],
            any_order=True
        )

    def test_delete_replicated_snapshot_replica_still_creating(self):
        fake_replica_3 = copy.deepcopy(self.fake_replica_2)
        fake_replica_3['id'] = fake.SHARE_ID3
        fake_replica_3['host'] = None
        replica_list = [self.fake_replica, self.fake_replica_2,
                        fake_replica_3]
        fake_snapshot = copy.deepcopy(fake.SNAPSHOT)
        fake_snapshot['share_id'] = self.fake_replica['id']
        share_name = self.library._get_backend_share_name(
            fake_snapshot['share_id'])
        snapshot_name = self.library._get_backend_snapshot_name(
            fake_snapshot['id'])
        fake_snapshot['provider_location'] = snapshot_name
        fake_snapshot_2 = copy.deepcopy(fake.SNAPSHOT)
        fake_snapshot_2['id'] = uuidutils.generate_uuid()
        fake_snapshot_2['share_id'] = self.fake_replica_2['id']
        fake_snapshot_2['provider_location'] = snapshot_name
        fake_snapshot_3 = copy.deepcopy(fake.SNAPSHOT)
        fake_snapshot_3['id'] = uuidutils.generate_uuid()
        fake_snapshot_3['share_id'] = fake_replica_3['id']
        fake_snapshot_3['provider_location'] = snapshot_name
        snapshot_list = [fake_snapshot, fake_snapshot_2, fake_snapshot_3]

        vserver_client = mock.Mock()
        vserver_client.get_snapshot.return_value = fake.CDOT_SNAPSHOT
        self.mock_object(self.library, '_get_vserver',
                         mock.Mock(return_value=(fake.VSERVER1,
                                                 vserver_client)))

        self.library.delete_replicated_snapshot(
            self.context, replica_list, snapshot_list,
            share_server=fake.SHARE_SERVER)

        vserver_client.delete_snapshot.assert_called_once_with(share_name,
                                                               snapshot_name)
        self.mock_dm_session.update_snapmirror.assert_has_calls(
            [mock.call(self.fake_replica, self.fake_replica_2)],
            any_order=True
        )

    def test_delete_replicated_snapshot_missing_snapmirror(self):
        self.mock_dm_session.update_snapmirror.side_effect = [
            None,
            netapp_api.NaApiError(code=netapp_api.EOBJECTNOTFOUND)
        ]
        fake_replica_3 = copy.deepcopy(self.fake_replica_2)
        fake_replica_3['id'] = fake.SHARE_ID3
        replica_list = [self.fake_replica, self.fake_replica_2,
                        fake_replica_3]
        fake_snapshot = copy.deepcopy(fake.SNAPSHOT)
        fake_snapshot['share_id'] = self.fake_replica['id']
        share_name = self.library._get_backend_share_name(
            fake_snapshot['share_id'])
        snapshot_name = self.library._get_backend_snapshot_name(
            fake_snapshot['id'])
        fake_snapshot['provider_location'] = snapshot_name
        fake_snapshot['busy'] = False
        fake_snapshot_2 = copy.deepcopy(fake.SNAPSHOT)
        fake_snapshot_2['id'] = uuidutils.generate_uuid()
        fake_snapshot_2['share_id'] = self.fake_replica_2['id']
        fake_snapshot_2['provider_location'] = snapshot_name
        fake_snapshot_3 = copy.deepcopy(fake.SNAPSHOT)
        fake_snapshot_3['id'] = uuidutils.generate_uuid()
        fake_snapshot_3['share_id'] = fake_replica_3['id']
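        # NOTE: The update_replicated_snapshot tests further below expect
        # polling behavior: while the snapshot is still propagating, the
        # SnapMirror relationship is refreshed and None is returned; once
        # snapshot_exists() reports it, the status flips to 'available'
        # and provider_location records the backend snapshot name.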
fake_snapshot_3['provider_location'] = snapshot_name snapshot_list = [fake_snapshot, fake_snapshot_2, fake_snapshot_3] vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = fake_snapshot self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.library.delete_replicated_snapshot( self.context, replica_list, snapshot_list, share_server=fake.SHARE_SERVER) vserver_client.delete_snapshot.assert_called_once_with(share_name, snapshot_name) self.mock_dm_session.update_snapmirror.assert_has_calls( [mock.call(self.fake_replica, self.fake_replica_2), mock.call(self.fake_replica, fake_replica_3)], any_order=True ) def test_delete_replicated_snapshot_update_error(self): self.mock_dm_session.update_snapmirror.side_effect = [ None, netapp_api.NaApiError() ] fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 replica_list = [self.fake_replica, self.fake_replica_2, fake_replica_3] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['share_id'] = self.fake_replica['id'] snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name fake_snapshot['busy'] = False fake_snapshot_2 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_2['id'] = uuidutils.generate_uuid() fake_snapshot_2['share_id'] = self.fake_replica_2['id'] fake_snapshot_2['provider_location'] = snapshot_name fake_snapshot_3 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_3['id'] = uuidutils.generate_uuid() fake_snapshot_3['share_id'] = fake_replica_3['id'] fake_snapshot_3['provider_location'] = snapshot_name snapshot_list = [fake_snapshot, fake_snapshot_2, fake_snapshot_3] vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = fake_snapshot self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.assertRaises(netapp_api.NaApiError, self.library.delete_replicated_snapshot, self.context, replica_list, snapshot_list, share_server=fake.SHARE_SERVER) def test_update_replicated_snapshot_still_creating(self): vserver_client = mock.Mock() vserver_client.snapshot_exists.return_value = False self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) replica_list = [self.fake_replica, self.fake_replica_2] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['status'] = constants.STATUS_CREATING fake_snapshot['share_id'] = self.fake_replica_2['id'] snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name model_update = self.library.update_replicated_snapshot( replica_list, self.fake_replica_2, [fake_snapshot], fake_snapshot) self.assertIsNone(model_update) self.mock_dm_session.update_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2 ) def test_update_replicated_snapshot_still_creating_no_host(self): self.fake_replica_2['host'] = None vserver_client = mock.Mock() vserver_client.snapshot_exists.return_value = False self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) replica_list = [self.fake_replica, self.fake_replica_2] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['status'] = constants.STATUS_CREATING fake_snapshot['share_id'] = self.fake_replica_2['id'] snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name model_update = 
self.library.update_replicated_snapshot( replica_list, self.fake_replica_2, [fake_snapshot], fake_snapshot) self.assertIsNone(model_update) self.mock_dm_session.update_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2 ) def test_update_replicated_snapshot_no_snapmirror(self): vserver_client = mock.Mock() vserver_client.snapshot_exists.return_value = False self.mock_dm_session.update_snapmirror.side_effect = ( netapp_api.NaApiError(code=netapp_api.EOBJECTNOTFOUND) ) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) replica_list = [self.fake_replica, self.fake_replica_2] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['status'] = constants.STATUS_CREATING fake_snapshot['share_id'] = self.fake_replica_2['id'] snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name model_update = self.library.update_replicated_snapshot( replica_list, self.fake_replica_2, [fake_snapshot], fake_snapshot) self.assertIsNone(model_update) self.mock_dm_session.update_snapmirror.assert_called_once_with( self.fake_replica, self.fake_replica_2 ) def test_update_replicated_snapshot_update_error(self): vserver_client = mock.Mock() vserver_client.snapshot_exists.return_value = False self.mock_dm_session.update_snapmirror.side_effect = ( netapp_api.NaApiError() ) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) replica_list = [self.fake_replica, self.fake_replica_2] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['status'] = constants.STATUS_CREATING fake_snapshot['share_id'] = self.fake_replica_2['id'] snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name self.assertRaises(netapp_api.NaApiError, self.library.update_replicated_snapshot, replica_list, self.fake_replica_2, [fake_snapshot], fake_snapshot) def test_update_replicated_snapshot_still_deleting(self): vserver_client = mock.Mock() vserver_client.snapshot_exists.return_value = True vserver_client.get_snapshot.return_value = fake.CDOT_SNAPSHOT self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) replica_list = [self.fake_replica] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['status'] = constants.STATUS_DELETING fake_snapshot['share_id'] = self.fake_replica['id'] snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name model_update = self.library.update_replicated_snapshot( replica_list, self.fake_replica, [fake_snapshot], fake_snapshot) self.assertIsNone(model_update) def test_update_replicated_snapshot_created(self): vserver_client = mock.Mock() vserver_client.snapshot_exists.return_value = True self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) replica_list = [self.fake_replica] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['status'] = constants.STATUS_CREATING fake_snapshot['share_id'] = self.fake_replica['id'] snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name model_update = self.library.update_replicated_snapshot( replica_list, self.fake_replica, [fake_snapshot], fake_snapshot) self.assertEqual(constants.STATUS_AVAILABLE, model_update['status']) self.assertEqual(snapshot_name, 
model_update['provider_location']) def test_update_replicated_snapshot_created_no_provider_location(self): vserver_client = mock.Mock() vserver_client.snapshot_exists.return_value = True self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) replica_list = [self.fake_replica, self.fake_replica_2] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['status'] = constants.STATUS_ACTIVE fake_snapshot['share_id'] = self.fake_replica['id'] snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name fake_snapshot_2 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_2['status'] = constants.STATUS_CREATING fake_snapshot_2['share_id'] = self.fake_replica_2['id'] model_update = self.library.update_replicated_snapshot( replica_list, self.fake_replica_2, [fake_snapshot, fake_snapshot_2], fake_snapshot_2) self.assertEqual(constants.STATUS_AVAILABLE, model_update['status']) self.assertEqual(snapshot_name, model_update['provider_location']) def test_update_replicated_snapshot_deleted(self): vserver_client = mock.Mock() vserver_client.snapshot_exists.return_value = False self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) replica_list = [self.fake_replica] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['status'] = constants.STATUS_DELETING fake_snapshot['share_id'] = self.fake_replica['id'] snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name self.assertRaises(exception.SnapshotResourceNotFound, self.library.update_replicated_snapshot, replica_list, self.fake_replica, [fake_snapshot], fake_snapshot) def test_update_replicated_snapshot_no_provider_locations(self): vserver_client = mock.Mock() vserver_client.snapshot_exists.return_value = True self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) replica_list = [self.fake_replica] fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['status'] = constants.STATUS_CREATING fake_snapshot['share_id'] = self.fake_replica['id'] fake_snapshot['provider_location'] = None model_update = self.library.update_replicated_snapshot( replica_list, self.fake_replica, [fake_snapshot], fake_snapshot) self.assertIsNone(model_update) def _get_fake_replicas_and_snapshots(self): fake_replica_3 = copy.deepcopy(self.fake_replica_2) fake_replica_3['id'] = fake.SHARE_ID3 fake_snapshot = copy.deepcopy(fake.SNAPSHOT) fake_snapshot['share_id'] = self.fake_replica['id'] snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) fake_snapshot['provider_location'] = snapshot_name fake_snapshot_2 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_2['id'] = uuidutils.generate_uuid() fake_snapshot_2['share_id'] = self.fake_replica_2['id'] fake_snapshot_2['provider_location'] = snapshot_name fake_snapshot_3 = copy.deepcopy(fake.SNAPSHOT) fake_snapshot_3['id'] = uuidutils.generate_uuid() fake_snapshot_3['share_id'] = fake_replica_3['id'] fake_snapshot_3['provider_location'] = snapshot_name replica_list = [self.fake_replica, self.fake_replica_2, fake_replica_3] snapshot_list = [fake_snapshot, fake_snapshot_2, fake_snapshot_3] return replica_list, snapshot_list @ddt.data(True, False) def test_revert_to_replicated_snapshot(self, use_snap_provider_location): replica_list, snapshot_list = self._get_fake_replicas_and_snapshots() fake_replica, fake_replica_2, fake_replica_3 = 
replica_list fake_snapshot, fake_snapshot_2, fake_snapshot_3 = snapshot_list if not use_snap_provider_location: del fake_snapshot['provider_location'] del fake_snapshot_2['provider_location'] del fake_snapshot_3['provider_location'] share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = fake.CDOT_SNAPSHOT vserver_client.list_snapmirror_snapshots.return_value = ['sm_snap'] self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.library.revert_to_replicated_snapshot( self.context, self.fake_replica, replica_list, fake_snapshot, snapshot_list, share_server=fake.SHARE_SERVER) vserver_client.get_snapshot.assert_called_once_with( share_name, snapshot_name) vserver_client.list_snapmirror_snapshots.assert_called_once_with( share_name) vserver_client.delete_snapshot.assert_called_once_with( share_name, 'sm_snap', ignore_owners=True) vserver_client.restore_snapshot.assert_called_once_with( share_name, snapshot_name) self.mock_dm_session.break_snapmirror.assert_has_calls( [mock.call(self.fake_replica, self.fake_replica_2, mount=False), mock.call(self.fake_replica, fake_replica_3, mount=False)], any_order=True) self.mock_dm_session.resync_snapmirror.assert_has_calls( [mock.call(self.fake_replica, self.fake_replica_2), mock.call(self.fake_replica, fake_replica_3)], any_order=True) def test_revert_to_replicated_snapshot_not_found(self): replica_list, snapshot_list = self._get_fake_replicas_and_snapshots() fake_snapshot, fake_snapshot_2, fake_snapshot_3 = snapshot_list share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) vserver_client = mock.Mock() vserver_client.get_snapshot.side_effect = netapp_api.NaApiError vserver_client.list_snapmirror_snapshots.return_value = ['sm_snap'] self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.assertRaises( netapp_api.NaApiError, self.library.revert_to_replicated_snapshot, self.context, self.fake_replica, replica_list, fake_snapshot, snapshot_list, share_server=fake.SHARE_SERVER) vserver_client.get_snapshot.assert_called_once_with( share_name, snapshot_name) self.assertFalse(vserver_client.list_snapmirror_snapshots.called) self.assertFalse(vserver_client.delete_snapshot.called) self.assertFalse(vserver_client.restore_snapshot.called) self.assertFalse(self.mock_dm_session.break_snapmirror.called) self.assertFalse(self.mock_dm_session.resync_snapmirror.called) def test_revert_to_replicated_snapshot_snapmirror_break_error(self): replica_list, snapshot_list = self._get_fake_replicas_and_snapshots() fake_snapshot, fake_snapshot_2, fake_snapshot_3 = snapshot_list vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = fake.CDOT_SNAPSHOT vserver_client.list_snapmirror_snapshots.return_value = ['sm_snap'] self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.break_snapmirror.side_effect = ( netapp_api.NaApiError) self.assertRaises( netapp_api.NaApiError, self.library.revert_to_replicated_snapshot, self.context, self.fake_replica, replica_list, fake_snapshot, snapshot_list, share_server=fake.SHARE_SERVER) def test_revert_to_replicated_snapshot_snapmirror_break_not_found(self): replica_list, snapshot_list = 
self._get_fake_replicas_and_snapshots() fake_replica, fake_replica_2, fake_replica_3 = replica_list fake_snapshot, fake_snapshot_2, fake_snapshot_3 = snapshot_list share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = fake.CDOT_SNAPSHOT vserver_client.list_snapmirror_snapshots.return_value = ['sm_snap'] self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.break_snapmirror.side_effect = ( netapp_api.NaApiError(code=netapp_api.EOBJECTNOTFOUND)) self.library.revert_to_replicated_snapshot( self.context, self.fake_replica, replica_list, fake_snapshot, snapshot_list, share_server=fake.SHARE_SERVER) vserver_client.get_snapshot.assert_called_once_with( share_name, snapshot_name) vserver_client.list_snapmirror_snapshots.assert_called_once_with( share_name) vserver_client.delete_snapshot.assert_called_once_with( share_name, 'sm_snap', ignore_owners=True) vserver_client.restore_snapshot.assert_called_once_with( share_name, snapshot_name) self.mock_dm_session.break_snapmirror.assert_has_calls( [mock.call(self.fake_replica, self.fake_replica_2, mount=False), mock.call(self.fake_replica, fake_replica_3, mount=False)], any_order=True) self.mock_dm_session.resync_snapmirror.assert_has_calls( [mock.call(self.fake_replica, self.fake_replica_2), mock.call(self.fake_replica, fake_replica_3)], any_order=True) def test_revert_to_replicated_snapshot_snapmirror_resync_error(self): replica_list, snapshot_list = self._get_fake_replicas_and_snapshots() fake_snapshot, fake_snapshot_2, fake_snapshot_3 = snapshot_list vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = fake.CDOT_SNAPSHOT vserver_client.list_snapmirror_snapshots.return_value = ['sm_snap'] self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.resync_snapmirror.side_effect = ( netapp_api.NaApiError) self.assertRaises( netapp_api.NaApiError, self.library.revert_to_replicated_snapshot, self.context, self.fake_replica, replica_list, fake_snapshot, snapshot_list, share_server=fake.SHARE_SERVER) def test_revert_to_replicated_snapshot_snapmirror_resync_not_found(self): replica_list, snapshot_list = self._get_fake_replicas_and_snapshots() fake_replica, fake_replica_2, fake_replica_3 = replica_list fake_snapshot, fake_snapshot_2, fake_snapshot_3 = snapshot_list share_name = self.library._get_backend_share_name( fake_snapshot['share_id']) snapshot_name = self.library._get_backend_snapshot_name( fake_snapshot['id']) vserver_client = mock.Mock() vserver_client.get_snapshot.return_value = fake.CDOT_SNAPSHOT vserver_client.list_snapmirror_snapshots.return_value = ['sm_snap'] self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_dm_session.resync_snapmirror.side_effect = ( netapp_api.NaApiError(code=netapp_api.EOBJECTNOTFOUND)) self.library.revert_to_replicated_snapshot( self.context, self.fake_replica, replica_list, fake_snapshot, snapshot_list, share_server=fake.SHARE_SERVER) vserver_client.get_snapshot.assert_called_once_with( share_name, snapshot_name) vserver_client.list_snapmirror_snapshots.assert_called_once_with( share_name) vserver_client.delete_snapshot.assert_called_once_with( share_name, 'sm_snap', ignore_owners=True) 
vserver_client.restore_snapshot.assert_called_once_with( share_name, snapshot_name) self.mock_dm_session.break_snapmirror.assert_has_calls( [mock.call(self.fake_replica, self.fake_replica_2, mount=False), mock.call(self.fake_replica, fake_replica_3, mount=False)], any_order=True) self.mock_dm_session.resync_snapmirror.assert_has_calls( [mock.call(self.fake_replica, self.fake_replica_2), mock.call(self.fake_replica, fake_replica_3)], any_order=True) def test_migration_check_compatibility_no_cluster_credentials(self): self.library._have_cluster_creds = False self.mock_object(data_motion, 'get_backend_configuration') mock_warning_log = self.mock_object(lib_base.LOG, 'warning') migration_compatibility = self.library.migration_check_compatibility( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), share_server=None, destination_share_server=fake.SHARE_SERVER) expected_compatibility = { 'compatible': False, 'writable': False, 'nondisruptive': False, 'preserve_metadata': False, 'preserve_snapshots': False, } self.assertDictMatch(expected_compatibility, migration_compatibility) mock_warning_log.assert_called_once() self.assertFalse(data_motion.get_backend_configuration.called) @ddt.data((None, exception.NetAppException), (exception.Invalid, None)) @ddt.unpack def test_migration_check_compatibility_extra_specs_invalid( self, side_effect_1, side_effect_2): self.library._have_cluster_creds = True self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) mock_exception_log = self.mock_object(lib_base.LOG, 'exception') self.mock_object(share_types, 'get_extra_specs_from_share') self.mock_object(self.library, '_check_extra_specs_validity', mock.Mock(side_effect=side_effect_1)) self.mock_object(self.library, '_check_aggregate_extra_specs_validity', mock.Mock(side_effect=side_effect_2)) self.mock_object(data_motion, 'get_backend_configuration') migration_compatibility = self.library.migration_check_compatibility( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), share_server=fake.SHARE_SERVER, destination_share_server=None) expected_compatibility = { 'compatible': False, 'writable': False, 'nondisruptive': False, 'preserve_metadata': False, 'preserve_snapshots': False, } self.assertDictMatch(expected_compatibility, migration_compatibility) mock_exception_log.assert_called_once() self.assertFalse(data_motion.get_backend_configuration.called) def test_migration_check_compatibility_destination_not_configured(self): self.library._have_cluster_creds = True self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock_object( data_motion, 'get_backend_configuration', mock.Mock(side_effect=exception.BadConfigurationException)) self.mock_object(self.library, '_get_vserver') mock_exception_log = self.mock_object(lib_base.LOG, 'exception') self.mock_object(share_utils, 'extract_host', mock.Mock( return_value='destination_backend')) self.mock_object(share_types, 'get_extra_specs_from_share') self.mock_object(self.library, '_check_extra_specs_validity') self.mock_object(self.library, '_check_aggregate_extra_specs_validity') mock_vserver_compatibility_check = self.mock_object( self.library, '_check_destination_vserver_for_vol_move') self.mock_object(self.library, '_get_dest_flexvol_encryption_value', mock.Mock(return_value=False)) migration_compatibility = self.library.migration_check_compatibility( self.context, fake_share.fake_share_instance(), 
fake_share.fake_share_instance(), share_server=fake.SHARE_SERVER, destination_share_server=None) expected_compatibility = { 'compatible': False, 'writable': False, 'nondisruptive': False, 'preserve_metadata': False, 'preserve_snapshots': False, } self.assertDictMatch(expected_compatibility, migration_compatibility) mock_exception_log.assert_called_once() data_motion.get_backend_configuration.assert_called_once_with( 'destination_backend') self.assertFalse(mock_vserver_compatibility_check.called) self.assertFalse(self.library._get_vserver.called) @ddt.data( utils.annotated( 'dest_share_server_not_expected', (('src_vserver', None), exception.InvalidParameterValue)), utils.annotated( 'src_share_server_not_expected', (exception.InvalidParameterValue, ('dest_vserver', None)))) def test_migration_check_compatibility_errors(self, side_effects): self.library._have_cluster_creds = True self.mock_object(share_types, 'get_extra_specs_from_share') self.mock_object(self.library, '_check_extra_specs_validity') self.mock_object(self.library, '_check_aggregate_extra_specs_validity') self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock_object(data_motion, 'get_backend_configuration') self.mock_object(self.library, '_get_vserver', mock.Mock(side_effect=side_effects)) mock_exception_log = self.mock_object(lib_base.LOG, 'exception') self.mock_object(share_utils, 'extract_host', mock.Mock( return_value='destination_backend')) mock_compatibility_check = self.mock_object( self.client, 'check_volume_move') migration_compatibility = self.library.migration_check_compatibility( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), share_server=fake.SHARE_SERVER, destination_share_server=None) expected_compatibility = { 'compatible': False, 'writable': False, 'nondisruptive': False, 'preserve_metadata': False, 'preserve_snapshots': False, } self.assertDictMatch(expected_compatibility, migration_compatibility) mock_exception_log.assert_called_once() data_motion.get_backend_configuration.assert_called_once_with( 'destination_backend') self.assertFalse(mock_compatibility_check.called) def test_migration_check_compatibility_incompatible_vservers(self): self.library._have_cluster_creds = True self.mock_object(share_types, 'get_extra_specs_from_share') self.mock_object(self.library, '_check_extra_specs_validity') self.mock_object(self.library, '_check_aggregate_extra_specs_validity') self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock_object(data_motion, 'get_backend_configuration') mock_exception_log = self.mock_object(lib_base.LOG, 'exception') get_vserver_returns = [ (fake.VSERVER1, mock.Mock()), (fake.VSERVER2, mock.Mock()), ] self.mock_object(self.library, '_get_vserver', mock.Mock(side_effect=get_vserver_returns)) self.mock_object(share_utils, 'extract_host', mock.Mock( side_effect=['destination_backend', 'destination_pool'])) mock_move_check = self.mock_object(self.client, 'check_volume_move') migration_compatibility = self.library.migration_check_compatibility( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') expected_compatibility = { 'compatible': False, 'writable': False, 'nondisruptive': False, 'preserve_metadata': False, 'preserve_snapshots': False, } self.assertDictMatch(expected_compatibility, migration_compatibility) mock_exception_log.assert_called_once() 
data_motion.get_backend_configuration.assert_called_once_with( 'destination_backend') self.assertFalse(mock_move_check.called) self.library._get_vserver.assert_has_calls( [mock.call(share_server=fake.SHARE_SERVER), mock.call(share_server='dst_srv')]) def test_migration_check_compatibility_client_error(self): self.library._have_cluster_creds = True self.mock_object(share_types, 'get_extra_specs_from_share') self.mock_object(self.library, '_check_extra_specs_validity') self.mock_object(self.library, '_check_aggregate_extra_specs_validity') self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) mock_exception_log = self.mock_object(lib_base.LOG, 'exception') self.mock_object(data_motion, 'get_backend_configuration') self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(share_utils, 'extract_host', mock.Mock( side_effect=['destination_backend', 'destination_pool'])) mock_move_check = self.mock_object( self.client, 'check_volume_move', mock.Mock(side_effect=netapp_api.NaApiError)) self.mock_object(self.library, '_get_dest_flexvol_encryption_value', mock.Mock(return_value=False)) migration_compatibility = self.library.migration_check_compatibility( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') expected_compatibility = { 'compatible': False, 'writable': False, 'nondisruptive': False, 'preserve_metadata': False, 'preserve_snapshots': False, } self.assertDictMatch(expected_compatibility, migration_compatibility) mock_exception_log.assert_called_once() data_motion.get_backend_configuration.assert_called_once_with( 'destination_backend') mock_move_check.assert_called_once_with( fake.SHARE_NAME, fake.VSERVER1, 'destination_pool', encrypt_destination=False) self.library._get_vserver.assert_has_calls( [mock.call(share_server=fake.SHARE_SERVER), mock.call(share_server='dst_srv')]) def test_migration_check_compatibility(self): self.library._have_cluster_creds = True self.mock_object(share_types, 'get_extra_specs_from_share') self.mock_object(self.library, '_check_extra_specs_validity') self.mock_object(self.library, '_check_aggregate_extra_specs_validity') self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock_object(data_motion, 'get_backend_configuration') self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(share_utils, 'extract_host', mock.Mock( side_effect=['destination_backend', 'destination_pool'])) mock_move_check = self.mock_object(self.client, 'check_volume_move') self.mock_object(self.library, '_get_dest_flexvol_encryption_value', mock.Mock(return_value=False)) migration_compatibility = self.library.migration_check_compatibility( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') expected_compatibility = { 'compatible': True, 'writable': True, 'nondisruptive': True, 'preserve_metadata': True, 'preserve_snapshots': True, } self.assertDictMatch(expected_compatibility, migration_compatibility) data_motion.get_backend_configuration.assert_called_once_with( 'destination_backend') mock_move_check.assert_called_once_with( fake.SHARE_NAME, fake.VSERVER1, 'destination_pool', encrypt_destination=False) self.library._get_vserver.assert_has_calls( 
[mock.call(share_server=fake.SHARE_SERVER), mock.call(share_server='dst_srv')]) def test_migration_check_compatibility_destination_type_is_encrypted(self): self.library._have_cluster_creds = True self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock_object(data_motion, 'get_backend_configuration') self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(share_utils, 'extract_host', mock.Mock( side_effect=['destination_backend', 'destination_pool'])) mock_move_check = self.mock_object(self.client, 'check_volume_move') self.mock_object(self.library, '_get_dest_flexvol_encryption_value', mock.Mock(return_value=True)) self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value={'spec1': 'spec-data'})) self.mock_object(self.library, '_check_extra_specs_validity') self.mock_object(self.library, '_check_aggregate_extra_specs_validity') migration_compatibility = self.library.migration_check_compatibility( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') expected_compatibility = { 'compatible': True, 'writable': True, 'nondisruptive': True, 'preserve_metadata': True, 'preserve_snapshots': True, } self.assertDictMatch(expected_compatibility, migration_compatibility) data_motion.get_backend_configuration.assert_called_once_with( 'destination_backend') mock_move_check.assert_called_once_with( fake.SHARE_NAME, fake.VSERVER1, 'destination_pool', encrypt_destination=True) self.library._get_vserver.assert_has_calls( [mock.call(share_server=fake.SHARE_SERVER), mock.call(share_server='dst_srv')]) def test_migration_start(self): mock_info_log = self.mock_object(lib_base.LOG, 'info') source_snapshots = mock.Mock() snapshot_mappings = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock_object(share_utils, 'extract_host', mock.Mock(return_value='destination_pool')) mock_move = self.mock_object(self.client, 'start_volume_move') self.mock_object(self.library, '_get_dest_flexvol_encryption_value', mock.Mock(return_value=False)) retval = self.library.migration_start( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), source_snapshots, snapshot_mappings, share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') self.assertIsNone(retval) self.assertTrue(mock_info_log.called) mock_move.assert_called_once_with( fake.SHARE_NAME, fake.VSERVER1, 'destination_pool', cutover_action='wait', encrypt_destination=False) def test_migration_start_encrypted_destination(self): mock_info_log = self.mock_object(lib_base.LOG, 'info') source_snapshots = mock.Mock() snapshot_mappings = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock_object(share_utils, 'extract_host', mock.Mock(return_value='destination_pool')) mock_move = self.mock_object(self.client, 'start_volume_move') self.mock_object(self.library, '_get_dest_flexvol_encryption_value', mock.Mock(return_value=True)) retval = self.library.migration_start( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), source_snapshots, 
snapshot_mappings, share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') self.assertIsNone(retval) self.assertTrue(mock_info_log.called) mock_move.assert_called_once_with( fake.SHARE_NAME, fake.VSERVER1, 'destination_pool', cutover_action='wait', encrypt_destination=True) def test_migration_continue_volume_move_failed(self): source_snapshots = mock.Mock() snapshot_mappings = mock.Mock() mock_exception_log = self.mock_object(lib_base.LOG, 'exception') self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) mock_status_check = self.mock_object( self.client, 'get_volume_move_status', mock.Mock(return_value={'phase': 'failed', 'details': 'unknown'})) self.assertRaises(exception.NetAppException, self.library.migration_continue, self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None) mock_status_check.assert_called_once_with( fake.SHARE_NAME, fake.VSERVER1) mock_exception_log.assert_called_once() @ddt.data({'phase': 'Queued', 'completed': False}, {'phase': 'Finishing', 'completed': False}, {'phase': 'cutover_hard_deferred', 'completed': True}, {'phase': 'cutover_soft_deferred', 'completed': True}, {'phase': 'completed', 'completed': True}) @ddt.unpack def test_migration_continue(self, phase, completed): source_snapshots = mock.Mock() snapshot_mappings = mock.Mock() self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock_object(self.client, 'get_volume_move_status', mock.Mock(return_value={'phase': phase})) migration_completed = self.library.migration_continue( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), source_snapshots, snapshot_mappings, share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') self.assertEqual(completed, migration_completed) @ddt.data('cutover_hard_deferred', 'cutover_soft_deferred', 'Queued', 'Replicating') def test_migration_get_progress_at_phase(self, phase): source_snapshots = mock.Mock() snapshot_mappings = mock.Mock() mock_info_log = self.mock_object(lib_base.LOG, 'info') status = { 'state': 'healthy', 'details': '%s:: Volume move job in progress' % phase, 'phase': phase, 'estimated-completion-time': '1481919246', 'percent-complete': 80, } self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock_object(self.client, 'get_volume_move_status', mock.Mock(return_value=status)) migration_progress = self.library.migration_get_progress( self.context, fake_share.fake_share_instance(), source_snapshots, snapshot_mappings, fake_share.fake_share_instance(), share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') expected_progress = { 'total_progress': 100 if phase.startswith('cutover') else 80, 'state': 'healthy', 'estimated_completion_time': '1481919246', 'details': '%s:: Volume move job in progress' % phase, 'phase': phase, } self.assertDictMatch(expected_progress, migration_progress) mock_info_log.assert_called_once() @ddt.data(utils.annotated('already_canceled', (True, )), utils.annotated('not_canceled_yet', (False, ))) def test_migration_cancel(self, 
already_canceled): source_snapshots = mock.Mock() snapshot_mappings = mock.Mock() already_canceled = already_canceled[0] mock_exception_log = self.mock_object(lib_base.LOG, 'exception') mock_info_log = self.mock_object(lib_base.LOG, 'info') vol_move_side_effect = (exception.NetAppException if already_canceled else None) self.mock_object(self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, mock.Mock()))) self.mock_object(self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) self.mock_object(self.client, 'abort_volume_move') self.mock_object(self.client, 'get_volume_move_status', mock.Mock(side_effect=vol_move_side_effect)) retval = self.library.migration_cancel( self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance(), source_snapshots, snapshot_mappings, share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') self.assertIsNone(retval) if already_canceled: mock_exception_log.assert_called_once() else: mock_info_log.assert_called_once() self.assertEqual(not already_canceled, self.client.abort_volume_move.called) def test_migration_complete_invalid_phase(self): source_snapshots = mock.Mock() snapshot_mappings = mock.Mock() status = { 'state': 'healthy', 'phase': 'Replicating', 'details': 'Replicating:: Volume move operation is in progress.', } mock_exception_log = self.mock_object(lib_base.LOG, 'exception') vserver_client = mock.Mock() self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_object( self.library, '_get_backend_share_name', mock.Mock(side_effect=[fake.SHARE_NAME, 'new_share_name'])) self.mock_object(self.library, '_get_volume_move_status', mock.Mock(return_value=status)) self.mock_object(self.library, '_create_export') self.assertRaises( exception.NetAppException, self.library.migration_complete, self.context, fake_share.fake_share_instance(), fake_share.fake_share_instance, source_snapshots, snapshot_mappings, share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') self.assertFalse(vserver_client.set_volume_name.called) self.assertFalse(self.library._create_export.called) mock_exception_log.assert_called_once() def test_migration_complete_timeout(self): source_snapshots = mock.Mock() snapshot_mappings = mock.Mock() self.library.configuration.netapp_volume_move_cutover_timeout = 15 vol_move_side_effects = [ {'phase': 'cutover_hard_deferred'}, {'phase': 'Cutover'}, {'phase': 'Finishing'}, {'phase': 'Finishing'}, ] self.mock_object(time, 'sleep') mock_warning_log = self.mock_object(lib_base.LOG, 'warning') vserver_client = mock.Mock() self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_object( self.library, '_get_backend_share_name', mock.Mock(side_effect=[fake.SHARE_NAME, 'new_share_name'])) self.mock_object(self.library, '_get_volume_move_status', mock.Mock( side_effect=vol_move_side_effects)) self.mock_object(self.library, '_create_export') src_share = fake_share.fake_share_instance(id='source-share-instance') dest_share = fake_share.fake_share_instance(id='dest-share-instance') self.assertRaises( exception.NetAppException, self.library.migration_complete, self.context, src_share, dest_share, source_snapshots, snapshot_mappings, share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') self.assertFalse(vserver_client.set_volume_name.called) self.assertFalse(self.library._create_export.called) self.assertEqual(3, mock_warning_log.call_count) 
@ddt.data({'phase': 'cutover_hard_deferred', 'provisioning_options': fake.PROVISIONING_OPTIONS_WITH_QOS, 'policy_group_name': fake.QOS_POLICY_GROUP_NAME}, {'phase': 'cutover_soft_deferred', 'provisioning_options': fake.PROVISIONING_OPTIONS_WITH_QOS, 'policy_group_name': fake.QOS_POLICY_GROUP_NAME}, {'phase': 'completed', 'provisioning_options': fake.PROVISIONING_OPTIONS, 'policy_group_name': False}) @ddt.unpack def test_migration_complete(self, phase, provisioning_options, policy_group_name): snap = fake_share.fake_snapshot_instance( id='src-snapshot', provider_location='test-src-provider-location') dest_snap = fake_share.fake_snapshot_instance(id='dest-snapshot', as_primitive=True) source_snapshots = [snap] snapshot_mappings = {snap['id']: dest_snap} self.library.configuration.netapp_volume_move_cutover_timeout = 15 vol_move_side_effects = [ {'phase': phase}, {'phase': 'Cutover'}, {'phase': 'Finishing'}, {'phase': 'completed'}, ] self.mock_object(time, 'sleep') mock_debug_log = self.mock_object(lib_base.LOG, 'debug') mock_info_log = self.mock_object(lib_base.LOG, 'info') mock_warning_log = self.mock_object(lib_base.LOG, 'warning') vserver_client = mock.Mock() self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=(fake.VSERVER1, vserver_client))) self.mock_object( self.library, '_get_backend_share_name', mock.Mock(side_effect=[fake.SHARE_NAME, 'new_share_name'])) self.mock_object(self.library, '_create_export', mock.Mock( return_value=fake.NFS_EXPORTS)) mock_move_status_check = self.mock_object( self.library, '_get_volume_move_status', mock.Mock(side_effect=vol_move_side_effects)) self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=fake.EXTRA_SPEC)) self.mock_object( self.library, '_get_provisioning_options', mock.Mock(return_value=provisioning_options)) self.mock_object( self.library, '_modify_or_create_qos_for_existing_share', mock.Mock(return_value=policy_group_name)) self.mock_object(vserver_client, 'modify_volume') src_share = fake_share.fake_share_instance(id='source-share-instance') dest_share = fake_share.fake_share_instance(id='dest-share-instance') dest_aggr = share_utils.extract_host(dest_share['host'], level='pool') data_updates = self.library.migration_complete( self.context, src_share, dest_share, source_snapshots, snapshot_mappings, share_server=fake.SHARE_SERVER, destination_share_server='dst_srv') self.assertEqual(fake.NFS_EXPORTS, data_updates['export_locations']) expected_dest_snap_updates = { 'provider_location': snap['provider_location'], } self.assertIn(dest_snap['id'], data_updates['snapshot_updates']) self.assertEqual(expected_dest_snap_updates, data_updates['snapshot_updates'][dest_snap['id']]) vserver_client.set_volume_name.assert_called_once_with( fake.SHARE_NAME, 'new_share_name') self.library._create_export.assert_called_once_with( dest_share, fake.SHARE_SERVER, fake.VSERVER1, vserver_client, clear_current_export_policy=False) vserver_client.modify_volume.assert_called_once_with( dest_aggr, 'new_share_name', **provisioning_options) mock_info_log.assert_called_once() if phase != 'completed': self.assertEqual(2, mock_warning_log.call_count) self.assertFalse(mock_debug_log.called) self.assertEqual(4, mock_move_status_check.call_count) else: self.assertFalse(mock_warning_log.called) mock_debug_log.assert_called_once() mock_move_status_check.assert_called_once() def test_modify_or_create_qos_for_existing_share_no_qos_extra_specs(self): vserver_client = mock.Mock() self.mock_object(self.library, 
'_get_backend_qos_policy_group_name') self.mock_object(vserver_client, 'get_volume') self.mock_object(self.library, '_create_qos_policy_group') retval = self.library._modify_or_create_qos_for_existing_share( fake.SHARE, fake.EXTRA_SPEC, fake.VSERVER1, vserver_client) self.assertIsNone(retval) self.library._get_backend_qos_policy_group_name.assert_not_called() vserver_client.get_volume.assert_not_called() self.library._create_qos_policy_group.assert_not_called() def test_modify_or_create_qos_for_existing_share_no_existing_qos(self): vserver_client = mock.Mock() self.mock_object(self.library, '_get_backend_qos_policy_group_name') self.mock_object(vserver_client, 'get_volume', mock.Mock(return_value=fake.FLEXVOL_WITHOUT_QOS)) self.mock_object(self.library, '_create_qos_policy_group') self.mock_object(self.library._client, 'qos_policy_group_modify') qos_policy_name = self.library._get_backend_qos_policy_group_name( fake.SHARE['id']) retval = self.library._modify_or_create_qos_for_existing_share( fake.SHARE, fake.EXTRA_SPEC_WITH_QOS, fake.VSERVER1, vserver_client) share_obj = { 'size': 2, 'id': fake.SHARE['id'], } self.assertEqual(qos_policy_name, retval) self.library._client.qos_policy_group_modify.assert_not_called() self.library._create_qos_policy_group.assert_called_once_with( share_obj, fake.VSERVER1, {'maxiops': '3000'}, vserver_client=vserver_client) @ddt.data(utils.annotated('volume_has_shared_qos_policy', (2, )), utils.annotated('volume_has_nonshared_qos_policy', (1, ))) def test_modify_or_create_qos_for_existing_share(self, num_workloads): vserver_client = mock.Mock() num_workloads = num_workloads[0] qos_policy = copy.deepcopy(fake.QOS_POLICY_GROUP) qos_policy['num-workloads'] = num_workloads extra_specs = fake.EXTRA_SPEC_WITH_QOS self.mock_object(vserver_client, 'get_volume', mock.Mock(return_value=fake.FLEXVOL_WITH_QOS)) self.mock_object(self.library._client, 'qos_policy_group_get', mock.Mock(return_value=qos_policy)) mock_qos_policy_modify = self.mock_object( self.library._client, 'qos_policy_group_modify') mock_qos_policy_rename = self.mock_object( self.library._client, 'qos_policy_group_rename') mock_create_qos_policy = self.mock_object( self.library, '_create_qos_policy_group') new_qos_policy_name = self.library._get_backend_qos_policy_group_name( fake.SHARE['id']) retval = self.library._modify_or_create_qos_for_existing_share( fake.SHARE, extra_specs, fake.VSERVER1, vserver_client) self.assertEqual(new_qos_policy_name, retval) if num_workloads == 1: mock_create_qos_policy.assert_not_called() mock_qos_policy_modify.assert_called_once_with( fake.QOS_POLICY_GROUP_NAME, '3000iops') mock_qos_policy_rename.assert_called_once_with( fake.QOS_POLICY_GROUP_NAME, new_qos_policy_name) else: share_obj = { 'size': 2, 'id': fake.SHARE['id'], } mock_create_qos_policy.assert_called_once_with( share_obj, fake.VSERVER1, {'maxiops': '3000'}, vserver_client=vserver_client) self.library._client.qos_policy_group_modify.assert_not_called() self.library._client.qos_policy_group_rename.assert_not_called() @ddt.data(('host', True), ('pool', False), (None, False), ('fake', False)) @ddt.unpack def test__is_group_cg(self, css, is_cg): share_group = mock.Mock() share_group.consistent_snapshot_support = css self.assertEqual(is_cg, self.library._is_group_cg(self.context, share_group)) def test_create_group_snapshot_cg(self): share_group = mock.Mock() share_group.consistent_snapshot_support = 'host' snap_dict = {'share_group': share_group} fallback_create = mock.Mock() mock_create_cgsnapshot = 
self.mock_object(self.library, 'create_cgsnapshot') self.library.create_group_snapshot(self.context, snap_dict, fallback_create, share_server=fake.SHARE_SERVER) mock_create_cgsnapshot.assert_called_once_with( self.context, snap_dict, share_server=fake.SHARE_SERVER) fallback_create.assert_not_called() @ddt.data('pool', None, 'fake') def test_create_group_snapshot_fallback(self, css): share_group = mock.Mock() share_group.consistent_snapshot_support = css snap_dict = {'share_group': share_group} fallback_create = mock.Mock() mock_create_cgsnapshot = self.mock_object(self.library, 'create_cgsnapshot') self.library.create_group_snapshot(self.context, snap_dict, fallback_create, share_server=fake.SHARE_SERVER) mock_create_cgsnapshot.assert_not_called() fallback_create.assert_called_once_with(self.context, snap_dict, share_server=fake.SHARE_SERVER) def test_delete_group_snapshot_cg(self): share_group = mock.Mock() share_group.consistent_snapshot_support = 'host' snap_dict = {'share_group': share_group} fallback_delete = mock.Mock() mock_delete_cgsnapshot = self.mock_object(self.library, 'delete_cgsnapshot') self.library.delete_group_snapshot(self.context, snap_dict, fallback_delete, share_server=fake.SHARE_SERVER) mock_delete_cgsnapshot.assert_called_once_with( self.context, snap_dict, share_server=fake.SHARE_SERVER) fallback_delete.assert_not_called() @ddt.data('pool', None, 'fake') def test_delete_group_snapshot_fallback(self, css): share_group = mock.Mock() share_group.consistent_snapshot_support = css snap_dict = {'share_group': share_group} fallback_delete = mock.Mock() mock_delete_cgsnapshot = self.mock_object(self.library, 'delete_cgsnapshot') self.library.delete_group_snapshot(self.context, snap_dict, fallback_delete, share_server=fake.SHARE_SERVER) mock_delete_cgsnapshot.assert_not_called() fallback_delete.assert_called_once_with(self.context, snap_dict, share_server=fake.SHARE_SERVER) def test_create_group_from_snapshot_cg(self): share_group = mock.Mock() share_group.consistent_snapshot_support = 'host' snap_dict = {'share_group': share_group} fallback_create = mock.Mock() mock_create_cg_from_snapshot = self.mock_object( self.library, 'create_consistency_group_from_cgsnapshot') self.library.create_group_from_snapshot(self.context, share_group, snap_dict, fallback_create, share_server=fake.SHARE_SERVER) mock_create_cg_from_snapshot.assert_called_once_with( self.context, share_group, snap_dict, share_server=fake.SHARE_SERVER) fallback_create.assert_not_called() @ddt.data('pool', None, 'fake') def test_create_group_from_snapshot_fallback(self, css): share_group = mock.Mock() share_group.consistent_snapshot_support = css snap_dict = {'share_group': share_group} fallback_create = mock.Mock() mock_create_cg_from_snapshot = self.mock_object( self.library, 'create_consistency_group_from_cgsnapshot') self.library.create_group_from_snapshot(self.context, share_group, snap_dict, fallback_create, share_server=fake.SHARE_SERVER) mock_create_cg_from_snapshot.assert_not_called() fallback_create.assert_called_once_with(self.context, share_group, snap_dict, share_server=fake.SHARE_SERVER) @ddt.data('default', 'hidden', 'visible') def test_get_backend_info(self, snapdir): self.library.configuration.netapp_reset_snapdir_visibility = snapdir expected = {'snapdir_visibility': snapdir} result = self.library.get_backend_info(self.context) self.assertEqual(expected, result) @ddt.data('default', 'hidden') def test_ensure_shares(self, snapdir_cfg): shares = [ fake_share.fake_share_instance(id='s-1', 
share_server='fake_server_1'), fake_share.fake_share_instance(id='s-2', share_server='fake_server_2'), fake_share.fake_share_instance(id='s-3', share_server='fake_server_2') ] vserver_client = mock.Mock() self.mock_object( self.library, '_get_vserver', mock.Mock(side_effect=[ (fake.VSERVER1, vserver_client), (fake.VSERVER2, vserver_client), (fake.VSERVER2, vserver_client) ])) (self.library.configuration. netapp_reset_snapdir_visibility) = snapdir_cfg self.library.ensure_shares(self.context, shares) if snapdir_cfg == 'default': self.library._get_vserver.assert_not_called() vserver_client.set_volume_snapdir_access.assert_not_called() else: self.library._get_vserver.assert_has_calls([ mock.call(share_server='fake_server_1'), mock.call(share_server='fake_server_2'), mock.call(share_server='fake_server_2'), ]) vserver_client.set_volume_snapdir_access.assert_has_calls([ mock.call('share_s_1', True), mock.call('share_s_2', True), mock.call('share_s_3', True), ]) def test__check_volume_clone_split_completed(self): vserver_client = mock.Mock() mock_share_name = self.mock_object( self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) vserver_client.check_volume_clone_split_completed.return_value = ( fake.CDOT_SNAPSHOT_BUSY_SNAPMIRROR) self.library._check_volume_clone_split_completed(fake.SHARE, vserver_client) mock_share_name.assert_called_once_with(fake.SHARE_ID) check_call = vserver_client.check_volume_clone_split_completed check_call.assert_called_once_with(fake.SHARE_NAME) @ddt.data(constants.STATUS_ACTIVE, constants.STATUS_CREATING_FROM_SNAPSHOT) def test_get_share_status(self, status): mock_update_from_snap = self.mock_object( self.library, '_update_create_from_snapshot_status') fake.SHARE['status'] = status self.library.get_share_status(fake.SHARE, fake.SHARE_SERVER) if status == constants.STATUS_CREATING_FROM_SNAPSHOT: mock_update_from_snap.assert_called_once_with(fake.SHARE, fake.SHARE_SERVER) else: mock_update_from_snap.assert_not_called() def test_volume_rehost(self): mock_share_name = self.mock_object( self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) mock_rehost = self.mock_object(self.client, 'rehost_volume') self.library.volume_rehost(fake.SHARE, fake.VSERVER1, fake.VSERVER2) mock_share_name.assert_called_once_with(fake.SHARE_ID) mock_rehost.assert_called_once_with(fake.SHARE_NAME, fake.VSERVER1, fake.VSERVER2) def test__rehost_and_mount_volume(self): mock_share_name = self.mock_object( self.library, '_get_backend_share_name', mock.Mock(return_value=fake.SHARE_NAME)) mock_rehost = self.mock_object(self.library, 'volume_rehost', mock.Mock()) src_vserver_client = mock.Mock() mock_unmount = self.mock_object(src_vserver_client, 'unmount_volume') dst_vserver_client = mock.Mock() mock_mount = self.mock_object(dst_vserver_client, 'mount_volume') self.library._rehost_and_mount_volume( fake.SHARE, fake.VSERVER1, src_vserver_client, fake.VSERVER2, dst_vserver_client) mock_share_name.assert_called_once_with(fake.SHARE_ID) mock_unmount.assert_called_once_with(fake.SHARE_NAME) mock_rehost.assert_called_once_with(fake.SHARE, fake.VSERVER1, fake.VSERVER2) mock_mount.assert_called_once_with(fake.SHARE_NAME) def test__move_volume_after_splitting(self): src_share = fake_share.fake_share_instance(id='source-share-instance') dest_share = fake_share.fake_share_instance(id='dest-share-instance') cutover_action = 'defer' self.library.configuration.netapp_start_volume_move_timeout = 15 self.mock_object(time, 'sleep') mock_warning_log = 
self.mock_object(lib_base.LOG, 'warning') mock_vol_move = self.mock_object(self.library, '_move_volume') self.library._move_volume_after_splitting( src_share, dest_share, share_server=fake.SHARE_SERVER, cutover_action=cutover_action) mock_vol_move.assert_called_once_with(src_share, dest_share, fake.SHARE_SERVER, cutover_action) self.assertEqual(0, mock_warning_log.call_count) def test__move_volume_after_splitting_timeout(self): src_share = fake_share.fake_share_instance(id='source-share-instance') dest_share = fake_share.fake_share_instance(id='dest-share-instance') self.library.configuration.netapp_start_volume_move_timeout = 15 cutover_action = 'defer' self.mock_object(time, 'sleep') mock_warning_log = self.mock_object(lib_base.LOG, 'warning') undergoing_split_op_msg = ( 'The volume is undergoing a clone split operation.') na_api_error = netapp_api.NaApiError(code=netapp_api.EAPIERROR, message=undergoing_split_op_msg) mock_move_vol = self.mock_object( self.library, '_move_volume', mock.Mock(side_effect=na_api_error)) self.assertRaises(exception.NetAppException, self.library._move_volume_after_splitting, src_share, dest_share, share_server=fake.SHARE_SERVER, cutover_action=cutover_action) self.assertEqual(3, mock_move_vol.call_count) self.assertEqual(3, mock_warning_log.call_count) def test__move_volume_after_splitting_api_not_found(self): src_share = fake_share.fake_share_instance(id='source-share-instance') dest_share = fake_share.fake_share_instance(id='dest-share-instance') self.library.configuration.netapp_start_volume_move_timeout = 15 cutover_action = 'defer' self.mock_object(time, 'sleep') mock_warning_log = self.mock_object(lib_base.LOG, 'warning') na_api_error = netapp_api.NaApiError(code=netapp_api.EOBJECTNOTFOUND) mock_move_vol = self.mock_object( self.library, '_move_volume', mock.Mock(side_effect=na_api_error)) self.assertRaises(exception.NetAppException, self.library._move_volume_after_splitting, src_share, dest_share, share_server=fake.SHARE_SERVER, cutover_action=cutover_action) mock_move_vol.assert_called_once_with(src_share, dest_share, fake.SHARE_SERVER, cutover_action) mock_warning_log.assert_not_called() manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_data_motion.py0000664000175000017500000006544113656750227032123 0ustar zuulzuul00000000000000# Copyright (c) 2015 Alex Meade. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import time from unittest import mock import ddt from oslo_config import cfg from manila import exception from manila.share import configuration from manila.share import driver from manila.share.drivers.netapp.dataontap.client import api as netapp_api from manila.share.drivers.netapp.dataontap.client import client_cmode from manila.share.drivers.netapp.dataontap.cluster_mode import data_motion from manila.share.drivers.netapp import options as na_opts from manila import test from manila.tests.share.drivers.netapp.dataontap import fakes as fake from manila.tests.share.drivers.netapp import fakes as na_fakes CONF = cfg.CONF @ddt.ddt class NetAppCDOTDataMotionTestCase(test.TestCase): def setUp(self): super(NetAppCDOTDataMotionTestCase, self).setUp() self.backend = 'backend1' self.mock_cmode_client = self.mock_object(client_cmode, "NetAppCmodeClient", mock.Mock()) self.config = configuration.Configuration(driver.share_opts, config_group=self.backend) self.config.append_config_values(na_opts.netapp_cluster_opts) self.config.append_config_values(na_opts.netapp_connection_opts) self.config.append_config_values(na_opts.netapp_basicauth_opts) self.config.append_config_values(na_opts.netapp_transport_opts) self.config.append_config_values(na_opts.netapp_support_opts) self.config.append_config_values(na_opts.netapp_provisioning_opts) self.config.append_config_values(na_opts.netapp_data_motion_opts) CONF.set_override("share_backend_name", self.backend, group=self.backend) CONF.set_override("netapp_transport_type", "https", group=self.backend) CONF.set_override("netapp_login", "fake_user", group=self.backend) CONF.set_override("netapp_password", "fake_password", group=self.backend) CONF.set_override("netapp_server_hostname", "fake.hostname", group=self.backend) CONF.set_override("netapp_server_port", 8866, group=self.backend) def test_get_client_for_backend(self): self.mock_object(data_motion, "get_backend_configuration", mock.Mock(return_value=self.config)) data_motion.get_client_for_backend(self.backend) self.mock_cmode_client.assert_called_once_with( hostname='fake.hostname', password='fake_password', username='fake_user', transport_type='https', port=8866, trace=mock.ANY, vserver=None) def test_get_client_for_backend_with_vserver(self): self.mock_object(data_motion, "get_backend_configuration", mock.Mock(return_value=self.config)) CONF.set_override("netapp_vserver", 'fake_vserver', group=self.backend) data_motion.get_client_for_backend(self.backend) self.mock_cmode_client.assert_called_once_with( hostname='fake.hostname', password='fake_password', username='fake_user', transport_type='https', port=8866, trace=mock.ANY, vserver='fake_vserver') def test_get_config_for_backend(self): self.mock_object(data_motion, "CONF") CONF.set_override("netapp_vserver", 'fake_vserver', group=self.backend) data_motion.CONF.list_all_sections.return_value = [self.backend] config = data_motion.get_backend_configuration(self.backend) self.assertEqual('fake_vserver', config.netapp_vserver) def test_get_config_for_backend_different_backend_name(self): self.mock_object(data_motion, "CONF") CONF.set_override("netapp_vserver", 'fake_vserver', group=self.backend) CONF.set_override("share_backend_name", "fake_backend_name", group=self.backend) data_motion.CONF.list_all_sections.return_value = [self.backend] config = data_motion.get_backend_configuration(self.backend) self.assertEqual('fake_vserver', config.netapp_vserver) self.assertEqual('fake_backend_name', config.share_backend_name) @ddt.data([], 
['fake_backend1', 'fake_backend2']) def test_get_config_for_backend_not_configured(self, conf_sections): self.mock_object(data_motion, "CONF") data_motion.CONF.list_all_sections.return_value = conf_sections self.assertRaises(exception.BadConfigurationException, data_motion.get_backend_configuration, self.backend) @ddt.ddt class NetAppCDOTDataMotionSessionTestCase(test.TestCase): def setUp(self): super(NetAppCDOTDataMotionSessionTestCase, self).setUp() self.source_backend = 'backend1' self.dest_backend = 'backend2' config = configuration.Configuration(driver.share_opts, config_group=self.source_backend) config.append_config_values(na_opts.netapp_cluster_opts) config.append_config_values(na_opts.netapp_connection_opts) config.append_config_values(na_opts.netapp_basicauth_opts) config.append_config_values(na_opts.netapp_transport_opts) config.append_config_values(na_opts.netapp_support_opts) config.append_config_values(na_opts.netapp_provisioning_opts) config.append_config_values(na_opts.netapp_data_motion_opts) self.mock_object(data_motion, "get_backend_configuration", mock.Mock(return_value=config)) self.mock_cmode_client = self.mock_object(client_cmode, "NetAppCmodeClient", mock.Mock()) self.dm_session = data_motion.DataMotionSession() self.fake_src_share = copy.deepcopy(fake.SHARE) self.fake_src_share_server = copy.deepcopy(fake.SHARE_SERVER) self.source_vserver = 'source_vserver' self.fake_src_share_server['backend_details']['vserver_name'] = ( self.source_vserver ) self.fake_src_share['share_server'] = self.fake_src_share_server self.fake_src_share['id'] = 'c02d497a-236c-4852-812a-0d39373e312a' self.fake_src_vol_name = 'share_c02d497a_236c_4852_812a_0d39373e312a' self.fake_dest_share = copy.deepcopy(fake.SHARE) self.fake_dest_share_server = copy.deepcopy(fake.SHARE_SERVER) self.dest_vserver = 'dest_vserver' self.fake_dest_share_server['backend_details']['vserver_name'] = ( self.dest_vserver ) self.fake_dest_share['share_server'] = self.fake_dest_share_server self.fake_dest_share['id'] = '34fbaf57-745d-460f-8270-3378c2945e30' self.fake_dest_vol_name = 'share_34fbaf57_745d_460f_8270_3378c2945e30' self.mock_src_client = mock.Mock() self.mock_dest_client = mock.Mock() self.mock_object(data_motion, 'get_client_for_backend', mock.Mock(side_effect=[self.mock_dest_client, self.mock_src_client])) def test_create_snapmirror(self): mock_dest_client = mock.Mock() self.mock_object(data_motion, 'get_client_for_backend', mock.Mock(return_value=mock_dest_client)) self.dm_session.create_snapmirror(self.fake_src_share, self.fake_dest_share) mock_dest_client.create_snapmirror.assert_called_once_with( mock.ANY, self.fake_src_vol_name, mock.ANY, self.fake_dest_vol_name, schedule='hourly' ) mock_dest_client.initialize_snapmirror.assert_called_once_with( mock.ANY, self.fake_src_vol_name, mock.ANY, self.fake_dest_vol_name ) def test_delete_snapmirror(self): mock_src_client = mock.Mock() mock_dest_client = mock.Mock() self.mock_object(data_motion, 'get_client_for_backend', mock.Mock(side_effect=[mock_dest_client, mock_src_client])) self.dm_session.delete_snapmirror(self.fake_src_share, self.fake_dest_share) mock_dest_client.abort_snapmirror.assert_called_once_with( mock.ANY, self.fake_src_vol_name, mock.ANY, self.fake_dest_vol_name, clear_checkpoint=False ) mock_dest_client.delete_snapmirror.assert_called_once_with( mock.ANY, self.fake_src_vol_name, mock.ANY, self.fake_dest_vol_name ) mock_src_client.release_snapmirror.assert_called_once_with( mock.ANY, self.fake_src_vol_name, mock.ANY, self.fake_dest_vol_name 
    )

    def test_delete_snapmirror_does_not_exist(self):
        """Ensure delete succeeds when the snapmirror does not exist."""
        mock_src_client = mock.Mock()
        mock_dest_client = mock.Mock()
        mock_dest_client.abort_snapmirror.side_effect = netapp_api.NaApiError(
            code=netapp_api.EAPIERROR
        )
        self.mock_object(data_motion, 'get_client_for_backend',
                         mock.Mock(side_effect=[mock_dest_client,
                                                mock_src_client]))

        self.dm_session.delete_snapmirror(self.fake_src_share,
                                          self.fake_dest_share)

        mock_dest_client.abort_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name,
            clear_checkpoint=False
        )
        mock_dest_client.delete_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name
        )
        mock_src_client.release_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name
        )

    def test_delete_snapmirror_error_deleting(self):
        """Ensure delete succeeds even when deleting the snapmirror errors."""
        mock_src_client = mock.Mock()
        mock_dest_client = mock.Mock()
        mock_dest_client.delete_snapmirror.side_effect = netapp_api.NaApiError(
            code=netapp_api.ESOURCE_IS_DIFFERENT
        )
        self.mock_object(data_motion, 'get_client_for_backend',
                         mock.Mock(side_effect=[mock_dest_client,
                                                mock_src_client]))

        self.dm_session.delete_snapmirror(self.fake_src_share,
                                          self.fake_dest_share)

        mock_dest_client.abort_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name,
            clear_checkpoint=False
        )
        mock_dest_client.delete_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name
        )
        mock_src_client.release_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name
        )

    def test_delete_snapmirror_error_releasing(self):
        """Ensure delete succeeds even when releasing the snapmirror errors."""
        mock_src_client = mock.Mock()
        mock_dest_client = mock.Mock()
        mock_src_client.release_snapmirror.side_effect = (
            netapp_api.NaApiError(code=netapp_api.EOBJECTNOTFOUND))
        self.mock_object(data_motion, 'get_client_for_backend',
                         mock.Mock(side_effect=[mock_dest_client,
                                                mock_src_client]))

        self.dm_session.delete_snapmirror(self.fake_src_share,
                                          self.fake_dest_share)

        mock_dest_client.abort_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name,
            clear_checkpoint=False
        )
        mock_dest_client.delete_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name
        )
        mock_src_client.release_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name
        )

    def test_delete_snapmirror_without_release(self):
        mock_src_client = mock.Mock()
        mock_dest_client = mock.Mock()
        self.mock_object(data_motion, 'get_client_for_backend',
                         mock.Mock(side_effect=[mock_dest_client,
                                                mock_src_client]))

        self.dm_session.delete_snapmirror(self.fake_src_share,
                                          self.fake_dest_share,
                                          release=False)

        mock_dest_client.abort_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name,
            clear_checkpoint=False
        )
        mock_dest_client.delete_snapmirror.assert_called_once_with(
            mock.ANY, self.fake_src_vol_name,
            mock.ANY, self.fake_dest_vol_name
        )
        self.assertFalse(mock_src_client.release_snapmirror.called)

    def test_delete_snapmirror_source_unreachable(self):
        mock_src_client = mock.Mock()
        mock_dest_client = mock.Mock()
        self.mock_object(data_motion, 'get_client_for_backend',
                         mock.Mock(side_effect=[mock_dest_client, Exception]))
self.dm_session.delete_snapmirror(self.fake_src_share, self.fake_dest_share) mock_dest_client.abort_snapmirror.assert_called_once_with( mock.ANY, self.fake_src_vol_name, mock.ANY, self.fake_dest_vol_name, clear_checkpoint=False ) mock_dest_client.delete_snapmirror.assert_called_once_with( mock.ANY, self.fake_src_vol_name, mock.ANY, self.fake_dest_vol_name ) self.assertFalse(mock_src_client.release_snapmirror.called) def test_break_snapmirror(self): self.mock_object(self.dm_session, 'quiesce_then_abort') self.dm_session.break_snapmirror(self.fake_src_share, self.fake_dest_share) self.mock_dest_client.break_snapmirror.assert_called_once_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name) self.dm_session.quiesce_then_abort.assert_called_once_with( self.fake_src_share, self.fake_dest_share) self.mock_dest_client.mount_volume.assert_called_once_with( self.fake_dest_vol_name) def test_break_snapmirror_no_mount(self): self.mock_object(self.dm_session, 'quiesce_then_abort') self.dm_session.break_snapmirror(self.fake_src_share, self.fake_dest_share, mount=False) self.mock_dest_client.break_snapmirror.assert_called_once_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name) self.dm_session.quiesce_then_abort.assert_called_once_with( self.fake_src_share, self.fake_dest_share) self.assertFalse(self.mock_dest_client.mount_volume.called) def test_break_snapmirror_wait_for_quiesced(self): self.mock_object(self.dm_session, 'quiesce_then_abort') self.dm_session.break_snapmirror(self.fake_src_share, self.fake_dest_share) self.dm_session.quiesce_then_abort.assert_called_once_with( self.fake_src_share, self.fake_dest_share) self.mock_dest_client.break_snapmirror.assert_called_once_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name) self.mock_dest_client.mount_volume.assert_called_once_with( self.fake_dest_vol_name) def test_quiesce_then_abort_timeout(self): self.mock_object(time, 'sleep') mock_get_snapmirrors = mock.Mock( return_value=[{'relationship-status': "transferring"}]) self.mock_object(self.mock_dest_client, 'get_snapmirrors', mock_get_snapmirrors) mock_backend_config = na_fakes.create_configuration() mock_backend_config.netapp_snapmirror_quiesce_timeout = 10 self.mock_object(data_motion, 'get_backend_configuration', mock.Mock(return_value=mock_backend_config)) self.dm_session.quiesce_then_abort(self.fake_src_share, self.fake_dest_share) self.mock_dest_client.get_snapmirrors.assert_called_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name, desired_attributes=['relationship-status', 'mirror-state'] ) self.assertEqual(2, self.mock_dest_client.get_snapmirrors.call_count) self.mock_dest_client.quiesce_snapmirror.assert_called_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name) self.mock_dest_client.abort_snapmirror.assert_called_once_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name, clear_checkpoint=False ) def test_quiesce_then_abort_wait_for_quiesced(self): self.mock_object(time, 'sleep') self.mock_object(self.mock_dest_client, 'get_snapmirrors', mock.Mock(side_effect=[ [{'relationship-status': "transferring"}], [{'relationship-status': "quiesced"}]])) self.dm_session.quiesce_then_abort(self.fake_src_share, self.fake_dest_share) self.mock_dest_client.get_snapmirrors.assert_called_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, 
self.fake_dest_vol_name, desired_attributes=['relationship-status', 'mirror-state'] ) self.assertEqual(2, self.mock_dest_client.get_snapmirrors.call_count) self.mock_dest_client.quiesce_snapmirror.assert_called_once_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name) def test_resync_snapmirror(self): self.dm_session.resync_snapmirror(self.fake_src_share, self.fake_dest_share) self.mock_dest_client.resync_snapmirror.assert_called_once_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name) def test_change_snapmirror_source(self): fake_new_src_share = copy.deepcopy(fake.SHARE) fake_new_src_share['id'] = 'd02d497a-236c-4852-812a-0d39373e312a' fake_new_src_share_name = 'share_d02d497a_236c_4852_812a_0d39373e312a' mock_new_src_client = mock.Mock() self.mock_object(self.dm_session, 'delete_snapmirror') self.mock_object(data_motion, 'get_client_for_backend', mock.Mock(side_effect=[self.mock_dest_client, self.mock_src_client, self.mock_dest_client, mock_new_src_client])) self.dm_session.change_snapmirror_source( self.fake_dest_share, self.fake_src_share, fake_new_src_share, [self.fake_dest_share, self.fake_src_share, fake_new_src_share]) self.assertFalse(self.mock_src_client.release_snapmirror.called) self.assertEqual(4, self.dm_session.delete_snapmirror.call_count) self.dm_session.delete_snapmirror.assert_called_with( mock.ANY, mock.ANY, release=False ) self.mock_dest_client.create_snapmirror.assert_called_once_with( mock.ANY, fake_new_src_share_name, mock.ANY, self.fake_dest_vol_name, schedule='hourly' ) self.mock_dest_client.resync_snapmirror.assert_called_once_with( mock.ANY, fake_new_src_share_name, mock.ANY, self.fake_dest_vol_name ) def test_change_snapmirror_source_dhss_true(self): fake_new_src_share = copy.deepcopy(self.fake_src_share) fake_new_src_share['id'] = 'd02d497a-236c-4852-812a-0d39373e312a' fake_new_src_share_name = 'share_d02d497a_236c_4852_812a_0d39373e312a' fake_new_src_share_server = fake_new_src_share['share_server'] fake_new_src_ss_name = ( fake_new_src_share_server['backend_details']['vserver_name']) self.mock_object(self.dm_session, 'delete_snapmirror') self.mock_object(data_motion, 'get_client_for_backend', mock.Mock(side_effect=[self.mock_dest_client, self.mock_src_client])) mock_backend_config = na_fakes.create_configuration() mock_backend_config.driver_handles_share_servers = True self.mock_object(data_motion, 'get_backend_configuration', mock.Mock(return_value=mock_backend_config)) self.mock_object(self.mock_dest_client, 'get_vserver_peers', mock.Mock(return_value=[])) peer_cluster_name = 'new_src_cluster_name' self.mock_object(self.mock_src_client, 'get_cluster_name', mock.Mock(return_value=peer_cluster_name)) self.dm_session.change_snapmirror_source( self.fake_dest_share, self.fake_src_share, fake_new_src_share, [self.fake_dest_share, self.fake_src_share, fake_new_src_share]) self.assertEqual(4, self.dm_session.delete_snapmirror.call_count) self.mock_dest_client.get_vserver_peers.assert_called_once_with( self.dest_vserver, fake_new_src_ss_name ) self.assertTrue(self.mock_src_client.get_cluster_name.called) self.mock_dest_client.create_vserver_peer.assert_called_once_with( self.dest_vserver, fake_new_src_ss_name, peer_cluster_name=peer_cluster_name ) self.mock_src_client.accept_vserver_peer.assert_called_once_with( fake_new_src_ss_name, self.dest_vserver ) self.dm_session.delete_snapmirror.assert_called_with( mock.ANY, mock.ANY, release=False ) 
self.mock_dest_client.create_snapmirror.assert_called_once_with( mock.ANY, fake_new_src_share_name, mock.ANY, self.fake_dest_vol_name, schedule='hourly' ) self.mock_dest_client.resync_snapmirror.assert_called_once_with( mock.ANY, fake_new_src_share_name, mock.ANY, self.fake_dest_vol_name ) def test_get_snapmirrors(self): self.mock_object(self.mock_dest_client, 'get_snapmirrors') self.dm_session.get_snapmirrors(self.fake_src_share, self.fake_dest_share) self.mock_dest_client.get_snapmirrors.assert_called_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name, desired_attributes=['relationship-status', 'mirror-state', 'source-vserver', 'source-volume', 'last-transfer-end-timestamp'] ) self.assertEqual(1, self.mock_dest_client.get_snapmirrors.call_count) def test_update_snapmirror(self): self.mock_object(self.mock_dest_client, 'get_snapmirrors') self.dm_session.update_snapmirror(self.fake_src_share, self.fake_dest_share) self.mock_dest_client.update_snapmirror.assert_called_once_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name) def test_resume_snapmirror(self): self.mock_object(self.mock_dest_client, 'get_snapmirrors') self.dm_session.resume_snapmirror(self.fake_src_share, self.fake_dest_share) self.mock_dest_client.resume_snapmirror.assert_called_once_with( self.source_vserver, self.fake_src_vol_name, self.dest_vserver, self.fake_dest_vol_name) @ddt.data((None, exception.StorageCommunicationException), (exception.StorageCommunicationException, None)) @ddt.unpack def test_remove_qos_on_old_active_replica_unreachable_backend(self, side_eff_1, side_eff_2): mock_source_client = mock.Mock() self.mock_object(data_motion, 'get_client_for_backend', mock.Mock(return_value=mock_source_client)) self.mock_object( mock_source_client, 'set_qos_policy_group_for_volume', mock.Mock(side_effect=side_eff_1)) self.mock_object( mock_source_client, 'mark_qos_policy_group_for_deletion', mock.Mock(side_effect=side_eff_2)) self.mock_object(data_motion.LOG, 'exception') retval = self.dm_session.remove_qos_on_old_active_replica( self.fake_src_share) self.assertIsNone(retval) (mock_source_client.set_qos_policy_group_for_volume .assert_called_once_with(self.fake_src_vol_name, 'none')) data_motion.LOG.exception.assert_called_once() def test_remove_qos_on_old_active_replica(self): mock_source_client = mock.Mock() self.mock_object(data_motion, 'get_client_for_backend', mock.Mock(return_value=mock_source_client)) self.mock_object(data_motion.LOG, 'exception') retval = self.dm_session.remove_qos_on_old_active_replica( self.fake_src_share) self.assertIsNone(retval) (mock_source_client.set_qos_policy_group_for_volume .assert_called_once_with(self.fake_src_vol_name, 'none')) data_motion.LOG.exception.assert_not_called() manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_performance.py0000664000175000017500000010262513656750227032122 0ustar zuulzuul00000000000000# Copyright (c) 2016 Clinton Knight # All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from manila import exception from manila.share.drivers.netapp.dataontap.client import api as netapp_api from manila.share.drivers.netapp.dataontap.cluster_mode import performance from manila import test from manila.tests.share.drivers.netapp.dataontap import fakes as fake @ddt.ddt class PerformanceLibraryTestCase(test.TestCase): def setUp(self): super(PerformanceLibraryTestCase, self).setUp() with mock.patch.object(performance.PerformanceLibrary, '_init_counter_info'): self.zapi_client = mock.Mock() self.perf_library = performance.PerformanceLibrary( self.zapi_client) self.perf_library.system_object_name = 'system' self.perf_library.avg_processor_busy_base_counter_name = ( 'cpu_elapsed_time1') self._set_up_fake_pools() def _set_up_fake_pools(self): self.fake_volumes = { 'pool1': { 'netapp_aggregate': 'aggr1', }, 'pool2': { 'netapp_aggregate': 'aggr2', }, 'pool3': { 'netapp_aggregate': 'aggr2', }, } self.fake_aggregates = { 'pool4': { 'netapp_aggregate': 'aggr3', } } self.fake_aggr_names = ['aggr1', 'aggr2', 'aggr3'] self.fake_nodes = ['node1', 'node2'] self.fake_aggr_node_map = { 'aggr1': 'node1', 'aggr2': 'node2', 'aggr3': 'node2', } def _get_fake_counters(self): return { 'node1': list(range(11, 21)), 'node2': list(range(21, 31)), } def test_init(self): mock_zapi_client = mock.Mock() mock_init_counter_info = self.mock_object( performance.PerformanceLibrary, '_init_counter_info') library = performance.PerformanceLibrary(mock_zapi_client) self.assertEqual(mock_zapi_client, library.zapi_client) mock_init_counter_info.assert_called_once_with() def test_init_counter_info_not_supported(self): self.zapi_client.features.SYSTEM_METRICS = False self.zapi_client.features.SYSTEM_CONSTITUENT_METRICS = False mock_get_base_counter_name = self.mock_object( self.perf_library, '_get_base_counter_name') self.perf_library._init_counter_info() self.assertIsNone(self.perf_library.system_object_name) self.assertIsNone( self.perf_library.avg_processor_busy_base_counter_name) self.assertFalse(mock_get_base_counter_name.called) @ddt.data({ 'system_constituent': False, 'base_counter': 'cpu_elapsed_time1', }, { 'system_constituent': True, 'base_counter': 'cpu_elapsed_time', }) @ddt.unpack def test_init_counter_info_api_error(self, system_constituent, base_counter): self.zapi_client.features.SYSTEM_METRICS = True self.zapi_client.features.SYSTEM_CONSTITUENT_METRICS = ( system_constituent) self.mock_object(self.perf_library, '_get_base_counter_name', mock.Mock(side_effect=netapp_api.NaApiError)) self.perf_library._init_counter_info() self.assertEqual( base_counter, self.perf_library.avg_processor_busy_base_counter_name) def test_init_counter_info_system(self): self.zapi_client.features.SYSTEM_METRICS = True self.zapi_client.features.SYSTEM_CONSTITUENT_METRICS = False mock_get_base_counter_name = self.mock_object( self.perf_library, '_get_base_counter_name', mock.Mock(return_value='cpu_elapsed_time1')) self.perf_library._init_counter_info() self.assertEqual('system', self.perf_library.system_object_name) self.assertEqual( 'cpu_elapsed_time1', self.perf_library.avg_processor_busy_base_counter_name) mock_get_base_counter_name.assert_called_once_with( 'system', 'avg_processor_busy') def test_init_counter_info_system_constituent(self): self.zapi_client.features.SYSTEM_METRICS = False self.zapi_client.features.SYSTEM_CONSTITUENT_METRICS = True mock_get_base_counter_name = 
self.mock_object( self.perf_library, '_get_base_counter_name', mock.Mock(return_value='cpu_elapsed_time')) self.perf_library._init_counter_info() self.assertEqual('system:constituent', self.perf_library.system_object_name) self.assertEqual( 'cpu_elapsed_time', self.perf_library.avg_processor_busy_base_counter_name) mock_get_base_counter_name.assert_called_once_with( 'system:constituent', 'avg_processor_busy') def test_update_performance_cache(self): self.perf_library.performance_counters = self._get_fake_counters() mock_get_aggregates_for_pools = self.mock_object( self.perf_library, '_get_aggregates_for_pools', mock.Mock(return_value=self.fake_aggr_names)) mock_get_nodes_for_aggregates = self.mock_object( self.perf_library, '_get_nodes_for_aggregates', mock.Mock(return_value=(self.fake_nodes, self.fake_aggr_node_map))) mock_get_node_utilization_counters = self.mock_object( self.perf_library, '_get_node_utilization_counters', mock.Mock(side_effect=[21, 31])) mock_get_node_utilization = self.mock_object( self.perf_library, '_get_node_utilization', mock.Mock(side_effect=[25, 75])) self.perf_library.update_performance_cache(self.fake_volumes, self.fake_aggregates) expected_performance_counters = { 'node1': list(range(12, 22)), 'node2': list(range(22, 32)), } self.assertEqual(expected_performance_counters, self.perf_library.performance_counters) expected_pool_utilization = { 'pool1': 25, 'pool2': 75, 'pool3': 75, 'pool4': 75, } self.assertEqual(expected_pool_utilization, self.perf_library.pool_utilization) mock_get_aggregates_for_pools.assert_called_once_with( self.fake_volumes, self.fake_aggregates) mock_get_nodes_for_aggregates.assert_called_once_with( self.fake_aggr_names) mock_get_node_utilization_counters.assert_has_calls([ mock.call('node1'), mock.call('node2')]) mock_get_node_utilization.assert_has_calls([ mock.call(12, 21, 'node1'), mock.call(22, 31, 'node2')]) def test_update_performance_cache_first_pass(self): mock_get_aggregates_for_pools = self.mock_object( self.perf_library, '_get_aggregates_for_pools', mock.Mock(return_value=self.fake_aggr_names)) mock_get_nodes_for_aggregates = self.mock_object( self.perf_library, '_get_nodes_for_aggregates', mock.Mock(return_value=(self.fake_nodes, self.fake_aggr_node_map))) mock_get_node_utilization_counters = self.mock_object( self.perf_library, '_get_node_utilization_counters', mock.Mock(side_effect=[11, 21])) mock_get_node_utilization = self.mock_object( self.perf_library, '_get_node_utilization', mock.Mock(side_effect=[25, 75])) self.perf_library.update_performance_cache(self.fake_volumes, self.fake_aggregates) expected_performance_counters = {'node1': [11], 'node2': [21]} self.assertEqual(expected_performance_counters, self.perf_library.performance_counters) expected_pool_utilization = { 'pool1': performance.DEFAULT_UTILIZATION, 'pool2': performance.DEFAULT_UTILIZATION, 'pool3': performance.DEFAULT_UTILIZATION, 'pool4': performance.DEFAULT_UTILIZATION, } self.assertEqual(expected_pool_utilization, self.perf_library.pool_utilization) mock_get_aggregates_for_pools.assert_called_once_with( self.fake_volumes, self.fake_aggregates) mock_get_nodes_for_aggregates.assert_called_once_with( self.fake_aggr_names) mock_get_node_utilization_counters.assert_has_calls([ mock.call('node1'), mock.call('node2')]) self.assertFalse(mock_get_node_utilization.called) def test_update_performance_cache_unknown_nodes(self): self.perf_library.performance_counters = self._get_fake_counters() mock_get_aggregates_for_pools = self.mock_object( self.perf_library, 
'_get_aggregates_for_pools', mock.Mock(return_value=self.fake_aggr_names)) mock_get_nodes_for_aggregates = self.mock_object( self.perf_library, '_get_nodes_for_aggregates', mock.Mock(return_value=([], {}))) mock_get_node_utilization_counters = self.mock_object( self.perf_library, '_get_node_utilization_counters', mock.Mock(side_effect=[11, 21])) mock_get_node_utilization = self.mock_object( self.perf_library, '_get_node_utilization', mock.Mock(side_effect=[25, 75])) self.perf_library.update_performance_cache(self.fake_volumes, self.fake_aggregates) self.assertEqual(self._get_fake_counters(), self.perf_library.performance_counters) expected_pool_utilization = { 'pool1': performance.DEFAULT_UTILIZATION, 'pool2': performance.DEFAULT_UTILIZATION, 'pool3': performance.DEFAULT_UTILIZATION, 'pool4': performance.DEFAULT_UTILIZATION, } self.assertEqual(expected_pool_utilization, self.perf_library.pool_utilization) mock_get_aggregates_for_pools.assert_called_once_with( self.fake_volumes, self.fake_aggregates) mock_get_nodes_for_aggregates.assert_called_once_with( self.fake_aggr_names) self.assertFalse(mock_get_node_utilization_counters.called) self.assertFalse(mock_get_node_utilization.called) def test_update_performance_cache_counters_unavailable(self): self.perf_library.performance_counters = self._get_fake_counters() mock_get_aggregates_for_pools = self.mock_object( self.perf_library, '_get_aggregates_for_pools', mock.Mock(return_value=self.fake_aggr_names)) mock_get_nodes_for_aggregates = self.mock_object( self.perf_library, '_get_nodes_for_aggregates', mock.Mock(return_value=(self.fake_nodes, self.fake_aggr_node_map))) mock_get_node_utilization_counters = self.mock_object( self.perf_library, '_get_node_utilization_counters', mock.Mock(side_effect=[None, None])) mock_get_node_utilization = self.mock_object( self.perf_library, '_get_node_utilization', mock.Mock(side_effect=[25, 75])) self.perf_library.update_performance_cache(self.fake_volumes, self.fake_aggregates) self.assertEqual(self._get_fake_counters(), self.perf_library.performance_counters) expected_pool_utilization = { 'pool1': performance.DEFAULT_UTILIZATION, 'pool2': performance.DEFAULT_UTILIZATION, 'pool3': performance.DEFAULT_UTILIZATION, 'pool4': performance.DEFAULT_UTILIZATION, } self.assertEqual(expected_pool_utilization, self.perf_library.pool_utilization) mock_get_aggregates_for_pools.assert_called_once_with( self.fake_volumes, self.fake_aggregates) mock_get_nodes_for_aggregates.assert_called_once_with( self.fake_aggr_names) mock_get_node_utilization_counters.assert_has_calls([ mock.call('node1'), mock.call('node2')]) self.assertFalse(mock_get_node_utilization.called) def test_update_performance_cache_not_supported(self): self.zapi_client.features.SYSTEM_METRICS = False self.zapi_client.features.SYSTEM_CONSTITUENT_METRICS = False mock_get_aggregates_for_pools = self.mock_object( self.perf_library, '_get_aggregates_for_pools') self.perf_library.update_performance_cache(self.fake_volumes, self.fake_aggregates) expected_performance_counters = {} self.assertEqual(expected_performance_counters, self.perf_library.performance_counters) expected_pool_utilization = {} self.assertEqual(expected_pool_utilization, self.perf_library.pool_utilization) self.assertFalse(mock_get_aggregates_for_pools.called) @ddt.data({'pool': 'pool1', 'expected': 10.0}, {'pool': 'pool3', 'expected': performance.DEFAULT_UTILIZATION}) @ddt.unpack def test_get_node_utilization_for_pool(self, pool, expected): self.perf_library.pool_utilization = {'pool1': 10.0, 
'pool2': 15.0} result = self.perf_library.get_node_utilization_for_pool(pool) self.assertAlmostEqual(expected, result) def test__update_for_failover(self): self.mock_object(self.perf_library, 'update_performance_cache') mock_client = mock.Mock(name='FAKE_ZAPI_CLIENT') self.perf_library.update_for_failover(mock_client, self.fake_volumes, self.fake_aggregates) self.assertEqual(mock_client, self.perf_library.zapi_client) self.perf_library.update_performance_cache.assert_called_once_with( self.fake_volumes, self.fake_aggregates) def test_get_aggregates_for_pools(self): result = self.perf_library._get_aggregates_for_pools( self.fake_volumes, self.fake_aggregates) expected_aggregate_names = ['aggr1', 'aggr2', 'aggr3'] self.assertItemsEqual(expected_aggregate_names, result) def test_get_nodes_for_aggregates(self): aggregate_names = ['aggr1', 'aggr2', 'aggr3'] aggregate_nodes = ['node1', 'node2', 'node2'] mock_get_node_for_aggregate = self.mock_object( self.zapi_client, 'get_node_for_aggregate', mock.Mock(side_effect=aggregate_nodes)) result = self.perf_library._get_nodes_for_aggregates(aggregate_names) self.assertEqual(2, len(result)) result_node_names, result_aggr_node_map = result expected_node_names = ['node1', 'node2'] expected_aggr_node_map = dict(zip(aggregate_names, aggregate_nodes)) self.assertItemsEqual(expected_node_names, result_node_names) self.assertEqual(expected_aggr_node_map, result_aggr_node_map) mock_get_node_for_aggregate.assert_has_calls([ mock.call('aggr1'), mock.call('aggr2'), mock.call('aggr3')]) def test_get_node_utilization_kahuna_overutilized(self): mock_get_kahuna_utilization = self.mock_object( self.perf_library, '_get_kahuna_utilization', mock.Mock(return_value=61.0)) mock_get_average_cpu_utilization = self.mock_object( self.perf_library, '_get_average_cpu_utilization', mock.Mock(return_value=25.0)) result = self.perf_library._get_node_utilization('fake1', 'fake2', 'fake_node') self.assertAlmostEqual(100.0, result) mock_get_kahuna_utilization.assert_called_once_with('fake1', 'fake2') self.assertFalse(mock_get_average_cpu_utilization.called) @ddt.data({'cpu': -0.01, 'cp_time': 10000, 'poll_time': 0}, {'cpu': 1.01, 'cp_time': 0, 'poll_time': 1000}, {'cpu': 0.50, 'cp_time': 0, 'poll_time': 0}) @ddt.unpack def test_get_node_utilization_zero_time(self, cpu, cp_time, poll_time): mock_get_kahuna_utilization = self.mock_object( self.perf_library, '_get_kahuna_utilization', mock.Mock(return_value=59.0)) mock_get_average_cpu_utilization = self.mock_object( self.perf_library, '_get_average_cpu_utilization', mock.Mock(return_value=cpu)) mock_get_total_consistency_point_time = self.mock_object( self.perf_library, '_get_total_consistency_point_time', mock.Mock(return_value=cp_time)) mock_get_consistency_point_p2_flush_time = self.mock_object( self.perf_library, '_get_consistency_point_p2_flush_time', mock.Mock(return_value=cp_time)) mock_get_total_time = self.mock_object( self.perf_library, '_get_total_time', mock.Mock(return_value=poll_time)) mock_get_adjusted_consistency_point_time = self.mock_object( self.perf_library, '_get_adjusted_consistency_point_time') result = self.perf_library._get_node_utilization('fake1', 'fake2', 'fake_node') expected = max(min(100.0, 100.0 * cpu), 0) self.assertEqual(expected, result) mock_get_kahuna_utilization.assert_called_once_with('fake1', 'fake2') mock_get_average_cpu_utilization.assert_called_once_with('fake1', 'fake2') mock_get_total_consistency_point_time.assert_called_once_with('fake1', 'fake2') 
mock_get_consistency_point_p2_flush_time.assert_called_once_with( 'fake1', 'fake2') mock_get_total_time.assert_called_once_with('fake1', 'fake2', 'total_cp_msecs') self.assertFalse(mock_get_adjusted_consistency_point_time.called) @ddt.data({'cpu': 0.75, 'adjusted_cp_time': 8000, 'expected': 80}, {'cpu': 0.80, 'adjusted_cp_time': 7500, 'expected': 80}, {'cpu': 0.50, 'adjusted_cp_time': 11000, 'expected': 100}) @ddt.unpack def test_get_node_utilization(self, cpu, adjusted_cp_time, expected): mock_get_kahuna_utilization = self.mock_object( self.perf_library, '_get_kahuna_utilization', mock.Mock(return_value=59.0)) mock_get_average_cpu_utilization = self.mock_object( self.perf_library, '_get_average_cpu_utilization', mock.Mock(return_value=cpu)) mock_get_total_consistency_point_time = self.mock_object( self.perf_library, '_get_total_consistency_point_time', mock.Mock(return_value=90.0)) mock_get_consistency_point_p2_flush_time = self.mock_object( self.perf_library, '_get_consistency_point_p2_flush_time', mock.Mock(return_value=50.0)) mock_get_total_time = self.mock_object( self.perf_library, '_get_total_time', mock.Mock(return_value=10000)) mock_get_adjusted_consistency_point_time = self.mock_object( self.perf_library, '_get_adjusted_consistency_point_time', mock.Mock(return_value=adjusted_cp_time)) result = self.perf_library._get_node_utilization('fake1', 'fake2', 'fake_node') self.assertEqual(expected, result) mock_get_kahuna_utilization.assert_called_once_with('fake1', 'fake2') mock_get_average_cpu_utilization.assert_called_once_with('fake1', 'fake2') mock_get_total_consistency_point_time.assert_called_once_with('fake1', 'fake2') mock_get_consistency_point_p2_flush_time.assert_called_once_with( 'fake1', 'fake2') mock_get_total_time.assert_called_once_with('fake1', 'fake2', 'total_cp_msecs') mock_get_adjusted_consistency_point_time.assert_called_once_with( 90.0, 50.0) def test_get_node_utilization_calculation_error(self): self.mock_object(self.perf_library, '_get_kahuna_utilization', mock.Mock(return_value=59.0)) self.mock_object(self.perf_library, '_get_average_cpu_utilization', mock.Mock(return_value=25.0)) self.mock_object(self.perf_library, '_get_total_consistency_point_time', mock.Mock(return_value=90.0)) self.mock_object(self.perf_library, '_get_consistency_point_p2_flush_time', mock.Mock(return_value=50.0)) self.mock_object(self.perf_library, '_get_total_time', mock.Mock(return_value=10000)) self.mock_object(self.perf_library, '_get_adjusted_consistency_point_time', mock.Mock(side_effect=ZeroDivisionError)) result = self.perf_library._get_node_utilization('fake1', 'fake2', 'fake_node') self.assertEqual(performance.DEFAULT_UTILIZATION, result) (self.perf_library._get_adjusted_consistency_point_time. 
assert_called_once_with(mock.ANY, mock.ANY)) def test_get_kahuna_utilization(self): mock_get_performance_counter = self.mock_object( self.perf_library, '_get_performance_counter_average_multi_instance', mock.Mock(return_value=[0.2, 0.3])) result = self.perf_library._get_kahuna_utilization('fake_t1', 'fake_t2') self.assertAlmostEqual(50.0, result) mock_get_performance_counter.assert_called_once_with( 'fake_t1', 'fake_t2', 'domain_busy:kahuna', 'processor_elapsed_time') def test_get_average_cpu_utilization(self): mock_get_performance_counter_average = self.mock_object( self.perf_library, '_get_performance_counter_average', mock.Mock(return_value=0.45)) result = self.perf_library._get_average_cpu_utilization('fake_t1', 'fake_t2') self.assertAlmostEqual(0.45, result) mock_get_performance_counter_average.assert_called_once_with( 'fake_t1', 'fake_t2', 'avg_processor_busy', 'cpu_elapsed_time1') def test_get_total_consistency_point_time(self): mock_get_performance_counter_delta = self.mock_object( self.perf_library, '_get_performance_counter_delta', mock.Mock(return_value=500)) result = self.perf_library._get_total_consistency_point_time( 'fake_t1', 'fake_t2') self.assertEqual(500, result) mock_get_performance_counter_delta.assert_called_once_with( 'fake_t1', 'fake_t2', 'total_cp_msecs') def test_get_consistency_point_p2_flush_time(self): mock_get_performance_counter_delta = self.mock_object( self.perf_library, '_get_performance_counter_delta', mock.Mock(return_value=500)) result = self.perf_library._get_consistency_point_p2_flush_time( 'fake_t1', 'fake_t2') self.assertEqual(500, result) mock_get_performance_counter_delta.assert_called_once_with( 'fake_t1', 'fake_t2', 'cp_phase_times:p2_flush') def test_get_total_time(self): mock_find_performance_counter_timestamp = self.mock_object( self.perf_library, '_find_performance_counter_timestamp', mock.Mock(side_effect=[100, 105])) result = self.perf_library._get_total_time('fake_t1', 'fake_t2', 'fake_counter') self.assertEqual(5000, result) mock_find_performance_counter_timestamp.assert_has_calls([ mock.call('fake_t1', 'fake_counter'), mock.call('fake_t2', 'fake_counter')]) def test_get_adjusted_consistency_point_time(self): result = self.perf_library._get_adjusted_consistency_point_time( 500, 200) self.assertAlmostEqual(360.0, result) def test_get_performance_counter_delta(self): result = self.perf_library._get_performance_counter_delta( fake.COUNTERS_T1, fake.COUNTERS_T2, 'total_cp_msecs') self.assertEqual(1482, result) def test_get_performance_counter_average(self): result = self.perf_library._get_performance_counter_average( fake.COUNTERS_T1, fake.COUNTERS_T2, 'domain_busy:kahuna', 'processor_elapsed_time', 'processor0') self.assertAlmostEqual(0.00281954360981, result) def test_get_performance_counter_average_multi_instance(self): result = ( self.perf_library._get_performance_counter_average_multi_instance( fake.COUNTERS_T1, fake.COUNTERS_T2, 'domain_busy:kahuna', 'processor_elapsed_time')) expected = [0.002819543609809441, 0.0033421611147606135] self.assertAlmostEqual(expected, result) def test_find_performance_counter_value(self): result = self.perf_library._find_performance_counter_value( fake.COUNTERS_T1, 'domain_busy:kahuna', instance_name='processor0') self.assertEqual('2712467226', result) def test_find_performance_counter_value_not_found(self): self.assertRaises( exception.NotFound, self.perf_library._find_performance_counter_value, fake.COUNTERS_T1, 'invalid', instance_name='processor0') def test_find_performance_counter_timestamp(self): 
result = self.perf_library._find_performance_counter_timestamp( fake.COUNTERS_T1, 'domain_busy') self.assertEqual('1453573777', result) def test_find_performance_counter_timestamp_not_found(self): self.assertRaises( exception.NotFound, self.perf_library._find_performance_counter_timestamp, fake.COUNTERS_T1, 'invalid', instance_name='processor0') def test_expand_performance_array(self): counter_info = { 'labels': ['idle', 'kahuna', 'storage', 'exempt'], 'name': 'domain_busy', } self.zapi_client.get_performance_counter_info = mock.Mock( return_value=counter_info) counter = { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor0', 'domain_busy': '969142314286,2567571412,2131582146,5383861579', 'instance-name': 'processor0', 'timestamp': '1453512244', } self.perf_library._expand_performance_array('wafl', 'domain_busy', counter) modified_counter = { 'node-name': 'cluster1-01', 'instance-uuid': 'cluster1-01:kernel:processor0', 'domain_busy': '969142314286,2567571412,2131582146,5383861579', 'instance-name': 'processor0', 'timestamp': '1453512244', 'domain_busy:idle': '969142314286', 'domain_busy:kahuna': '2567571412', 'domain_busy:storage': '2131582146', 'domain_busy:exempt': '5383861579', } self.assertEqual(modified_counter, counter) def test_get_base_counter_name(self): counter_info = { 'base-counter': 'cpu_elapsed_time', 'labels': [], 'name': 'avg_processor_busy', } self.zapi_client.get_performance_counter_info = mock.Mock( return_value=counter_info) result = self.perf_library._get_base_counter_name( 'system:constituent', 'avg_processor_busy') self.assertEqual('cpu_elapsed_time', result) def test_get_node_utilization_counters(self): mock_get_node_utilization_system_counters = self.mock_object( self.perf_library, '_get_node_utilization_system_counters', mock.Mock(return_value=['A', 'B', 'C'])) mock_get_node_utilization_wafl_counters = self.mock_object( self.perf_library, '_get_node_utilization_wafl_counters', mock.Mock(return_value=['D', 'E', 'F'])) mock_get_node_utilization_processor_counters = self.mock_object( self.perf_library, '_get_node_utilization_processor_counters', mock.Mock(return_value=['G', 'H', 'I'])) result = self.perf_library._get_node_utilization_counters(fake.NODE) expected = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'] self.assertEqual(expected, result) mock_get_node_utilization_system_counters.assert_called_once_with( fake.NODE) mock_get_node_utilization_wafl_counters.assert_called_once_with( fake.NODE) mock_get_node_utilization_processor_counters.assert_called_once_with( fake.NODE) def test_get_node_utilization_counters_api_error(self): self.mock_object(self.perf_library, '_get_node_utilization_system_counters', mock.Mock(side_effect=netapp_api.NaApiError)) result = self.perf_library._get_node_utilization_counters(fake.NODE) self.assertIsNone(result) def test_get_node_utilization_system_counters(self): mock_get_performance_instance_uuids = self.mock_object( self.zapi_client, 'get_performance_instance_uuids', mock.Mock(return_value=fake.SYSTEM_INSTANCE_UUIDS)) mock_get_performance_counters = self.mock_object( self.zapi_client, 'get_performance_counters', mock.Mock(return_value=fake.SYSTEM_COUNTERS)) result = self.perf_library._get_node_utilization_system_counters( fake.NODE) self.assertEqual(fake.SYSTEM_COUNTERS, result) mock_get_performance_instance_uuids.assert_called_once_with( 'system', fake.NODE) mock_get_performance_counters.assert_called_once_with( 'system', fake.SYSTEM_INSTANCE_UUIDS, ['avg_processor_busy', 'cpu_elapsed_time1', 
'cpu_elapsed_time']) def test_get_node_utilization_wafl_counters(self): mock_get_performance_instance_uuids = self.mock_object( self.zapi_client, 'get_performance_instance_uuids', mock.Mock(return_value=fake.WAFL_INSTANCE_UUIDS)) mock_get_performance_counters = self.mock_object( self.zapi_client, 'get_performance_counters', mock.Mock(return_value=fake.WAFL_COUNTERS)) mock_get_performance_counter_info = self.mock_object( self.zapi_client, 'get_performance_counter_info', mock.Mock(return_value=fake.WAFL_CP_PHASE_TIMES_COUNTER_INFO)) result = self.perf_library._get_node_utilization_wafl_counters( fake.NODE) self.assertEqual(fake.EXPANDED_WAFL_COUNTERS, result) mock_get_performance_instance_uuids.assert_called_once_with( 'wafl', fake.NODE) mock_get_performance_counters.assert_called_once_with( 'wafl', fake.WAFL_INSTANCE_UUIDS, ['total_cp_msecs', 'cp_phase_times']) mock_get_performance_counter_info.assert_called_once_with( 'wafl', 'cp_phase_times') def test_get_node_utilization_processor_counters(self): mock_get_performance_instance_uuids = self.mock_object( self.zapi_client, 'get_performance_instance_uuids', mock.Mock(return_value=fake.PROCESSOR_INSTANCE_UUIDS)) mock_get_performance_counters = self.mock_object( self.zapi_client, 'get_performance_counters', mock.Mock(return_value=fake.PROCESSOR_COUNTERS)) self.mock_object( self.zapi_client, 'get_performance_counter_info', mock.Mock(return_value=fake.PROCESSOR_DOMAIN_BUSY_COUNTER_INFO)) result = self.perf_library._get_node_utilization_processor_counters( fake.NODE) self.assertEqual(fake.EXPANDED_PROCESSOR_COUNTERS, result) mock_get_performance_instance_uuids.assert_called_once_with( 'processor', fake.NODE) mock_get_performance_counters.assert_called_once_with( 'processor', fake.PROCESSOR_INSTANCE_UUIDS, ['domain_busy', 'processor_elapsed_time']) manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_lib_multi_svm.py0000664000175000017500000014551113656750227032467 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for the NetApp Data ONTAP cDOT multi-SVM storage driver library. 
""" import copy from unittest import mock import ddt from oslo_log import log from oslo_serialization import jsonutils from manila.common import constants from manila import context from manila import exception from manila.share.drivers.netapp.dataontap.client import api as netapp_api from manila.share.drivers.netapp.dataontap.cluster_mode import data_motion from manila.share.drivers.netapp.dataontap.cluster_mode import lib_base from manila.share.drivers.netapp.dataontap.cluster_mode import lib_multi_svm from manila.share.drivers.netapp import utils as na_utils from manila.share import utils as share_utils from manila import test from manila.tests.share.drivers.netapp.dataontap.client import fakes as c_fake from manila.tests.share.drivers.netapp.dataontap import fakes as fake @ddt.ddt class NetAppFileStorageLibraryTestCase(test.TestCase): def setUp(self): super(NetAppFileStorageLibraryTestCase, self).setUp() self.mock_object(na_utils, 'validate_driver_instantiation') # Mock loggers as themselves to allow logger arg validation mock_logger = log.getLogger('mock_logger') self.mock_object(lib_multi_svm.LOG, 'warning', mock.Mock(side_effect=mock_logger.warning)) self.mock_object(lib_multi_svm.LOG, 'error', mock.Mock(side_effect=mock_logger.error)) kwargs = { 'configuration': fake.get_config_cmode(), 'private_storage': mock.Mock(), 'app_version': fake.APP_VERSION } self.library = lib_multi_svm.NetAppCmodeMultiSVMFileStorageLibrary( fake.DRIVER_NAME, **kwargs) self.library._client = mock.Mock() self.library._client.get_ontapi_version.return_value = (1, 21) self.client = self.library._client self.fake_new_replica = copy.deepcopy(fake.SHARE) self.fake_new_ss = copy.deepcopy(fake.SHARE_SERVER) self.fake_new_vserver_name = 'fake_new_vserver' self.fake_new_ss['backend_details']['vserver_name'] = ( self.fake_new_vserver_name ) self.fake_new_replica['share_server'] = self.fake_new_ss self.fake_new_replica_host = 'fake_new_host' self.fake_replica = copy.deepcopy(fake.SHARE) self.fake_replica['id'] = fake.SHARE_ID2 fake_ss = copy.deepcopy(fake.SHARE_SERVER) self.fake_vserver = 'fake_vserver' fake_ss['backend_details']['vserver_name'] = ( self.fake_vserver ) self.fake_replica['share_server'] = fake_ss self.fake_replica_host = 'fake_host' self.fake_new_client = mock.Mock() self.fake_client = mock.Mock() def test_check_for_setup_error_cluster_creds_no_vserver(self): self.library._have_cluster_creds = True self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=fake.AGGREGATES)) mock_super = self.mock_object(lib_base.NetAppCmodeFileStorageLibrary, 'check_for_setup_error') self.library.check_for_setup_error() self.assertTrue(self.library._find_matching_aggregates.called) mock_super.assert_called_once_with() def test_check_for_setup_error_cluster_creds_with_vserver(self): self.library._have_cluster_creds = True self.library.configuration.netapp_vserver = fake.VSERVER1 self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=fake.AGGREGATES)) mock_super = self.mock_object(lib_base.NetAppCmodeFileStorageLibrary, 'check_for_setup_error') self.library.check_for_setup_error() mock_super.assert_called_once_with() self.assertTrue(self.library._find_matching_aggregates.called) self.assertTrue(lib_multi_svm.LOG.warning.called) def test_check_for_setup_error_vserver_creds(self): self.library._have_cluster_creds = False self.assertRaises(exception.InvalidInput, self.library.check_for_setup_error) def test_check_for_setup_error_no_aggregates(self): 
self.library._have_cluster_creds = True self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=[])) self.assertRaises(exception.NetAppException, self.library.check_for_setup_error) self.assertTrue(self.library._find_matching_aggregates.called) def test_get_vserver_no_share_server(self): self.assertRaises(exception.InvalidInput, self.library._get_vserver) def test_get_vserver_no_share_server_with_vserver_name(self): fake_vserver_client = 'fake_client' mock_vserver_exists = self.mock_object( self.library._client, 'vserver_exists', mock.Mock(return_value=True)) self.mock_object(self.library, '_get_api_client', mock.Mock(return_value=fake_vserver_client)) result_vserver, result_vserver_client = self.library._get_vserver( share_server=None, vserver_name=fake.VSERVER1) mock_vserver_exists.assert_called_once_with( fake.VSERVER1 ) self.assertEqual(fake.VSERVER1, result_vserver) self.assertEqual(fake_vserver_client, result_vserver_client) def test_get_vserver_no_backend_details(self): fake_share_server = copy.deepcopy(fake.SHARE_SERVER) fake_share_server.pop('backend_details') kwargs = {'share_server': fake_share_server} self.assertRaises(exception.VserverNotSpecified, self.library._get_vserver, **kwargs) def test_get_vserver_none_backend_details(self): fake_share_server = copy.deepcopy(fake.SHARE_SERVER) fake_share_server['backend_details'] = None kwargs = {'share_server': fake_share_server} self.assertRaises(exception.VserverNotSpecified, self.library._get_vserver, **kwargs) def test_get_vserver_no_vserver(self): fake_share_server = copy.deepcopy(fake.SHARE_SERVER) fake_share_server['backend_details'].pop('vserver_name') kwargs = {'share_server': fake_share_server} self.assertRaises(exception.VserverNotSpecified, self.library._get_vserver, **kwargs) def test_get_vserver_none_vserver(self): fake_share_server = copy.deepcopy(fake.SHARE_SERVER) fake_share_server['backend_details']['vserver_name'] = None kwargs = {'share_server': fake_share_server} self.assertRaises(exception.VserverNotSpecified, self.library._get_vserver, **kwargs) def test_get_vserver_not_found(self): self.library._client.vserver_exists.return_value = False kwargs = {'share_server': fake.SHARE_SERVER} self.assertRaises(exception.VserverNotFound, self.library._get_vserver, **kwargs) def test_get_vserver(self): self.library._client.vserver_exists.return_value = True self.mock_object(self.library, '_get_api_client', mock.Mock(return_value='fake_client')) result = self.library._get_vserver(share_server=fake.SHARE_SERVER) self.assertTupleEqual((fake.VSERVER1, 'fake_client'), result) def test_get_ems_pool_info(self): self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=['aggr1', 'aggr2'])) result = self.library._get_ems_pool_info() expected = { 'pools': { 'vserver': None, 'aggregates': ['aggr1', 'aggr2'], }, } self.assertEqual(expected, result) @ddt.data('fake', fake.IDENTIFIER) def test_manage_server(self, fake_vserver_name): self.mock_object(context, 'get_admin_context', mock.Mock(return_value='fake_admin_context')) mock_get_vserver_name = self.mock_object( self.library, '_get_vserver_name', mock.Mock(return_value=fake_vserver_name)) new_identifier, new_details = self.library.manage_server( context, fake.SHARE_SERVER, fake.IDENTIFIER, {}) mock_get_vserver_name.assert_called_once_with(fake.SHARE_SERVER['id']) self.assertEqual(fake_vserver_name, new_details['vserver_name']) self.assertEqual(fake_vserver_name, new_identifier) def test_get_share_server_network_info(self): 
fake_vserver_client = mock.Mock() self.mock_object(context, 'get_admin_context', mock.Mock(return_value='fake_admin_context')) mock_get_vserver = self.mock_object( self.library, '_get_vserver', mock.Mock(return_value=['fake', fake_vserver_client])) net_interfaces = copy.deepcopy(c_fake.NETWORK_INTERFACES_MULTIPLE) self.mock_object(fake_vserver_client, 'get_network_interfaces', mock.Mock(return_value=net_interfaces)) result = self.library.get_share_server_network_info(context, fake.SHARE_SERVER, fake.IDENTIFIER, {}) mock_get_vserver.assert_called_once_with( vserver_name=fake.IDENTIFIER ) reference_allocations = [] for lif in net_interfaces: reference_allocations.append(lif['address']) self.assertEqual(reference_allocations, result) @ddt.data((True, fake.IDENTIFIER), (False, fake.IDENTIFIER)) @ddt.unpack def test__verify_share_server_name(self, vserver_exists, identifier): mock_exists = self.mock_object(self.client, 'vserver_exists', mock.Mock(return_value=vserver_exists)) expected_result = identifier if not vserver_exists: expected_result = self.library._get_vserver_name(identifier) result = self.library._get_correct_vserver_old_name(identifier) self.assertEqual(result, expected_result) mock_exists.assert_called_once_with(identifier) def test_handle_housekeeping_tasks(self): self.mock_object(self.client, 'prune_deleted_nfs_export_policies') self.mock_object(self.client, 'prune_deleted_snapshots') mock_super = self.mock_object(lib_base.NetAppCmodeFileStorageLibrary, '_handle_housekeeping_tasks') self.library._handle_housekeeping_tasks() self.assertTrue(self.client.prune_deleted_nfs_export_policies.called) self.assertTrue(self.client.prune_deleted_snapshots.called) self.assertTrue(mock_super.called) def test_find_matching_aggregates(self): mock_list_non_root_aggregates = self.mock_object( self.client, 'list_non_root_aggregates', mock.Mock(return_value=fake.AGGREGATES)) self.library.configuration.netapp_aggregate_name_search_pattern = ( '.*_aggr_1') result = self.library._find_matching_aggregates() self.assertListEqual([fake.AGGREGATES[0]], result) mock_list_non_root_aggregates.assert_called_once_with() def test_setup_server(self): mock_get_vserver_name = self.mock_object( self.library, '_get_vserver_name', mock.Mock(return_value=fake.VSERVER1)) mock_create_vserver = self.mock_object(self.library, '_create_vserver') mock_validate_network_type = self.mock_object( self.library, '_validate_network_type') result = self.library.setup_server(fake.NETWORK_INFO) ports = {} for network_allocation in fake.NETWORK_INFO['network_allocations']: ports[network_allocation['id']] = network_allocation['ip_address'] self.assertTrue(mock_validate_network_type.called) self.assertTrue(mock_get_vserver_name.called) self.assertTrue(mock_create_vserver.called) self.assertDictEqual({'vserver_name': fake.VSERVER1, 'ports': jsonutils.dumps(ports)}, result) def test_setup_server_with_error(self): mock_get_vserver_name = self.mock_object( self.library, '_get_vserver_name', mock.Mock(return_value=fake.VSERVER1)) fake_exception = exception.ManilaException("fake") mock_create_vserver = self.mock_object( self.library, '_create_vserver', mock.Mock(side_effect=fake_exception)) mock_validate_network_type = self.mock_object( self.library, '_validate_network_type') self.assertRaises( exception.ManilaException, self.library.setup_server, fake.NETWORK_INFO) ports = {} for network_allocation in fake.NETWORK_INFO['network_allocations']: ports[network_allocation['id']] = network_allocation['ip_address'] 
self.assertTrue(mock_validate_network_type.called) self.assertTrue(mock_get_vserver_name.called) self.assertTrue(mock_create_vserver.called) self.assertDictEqual( {'server_details': {'vserver_name': fake.VSERVER1, 'ports': jsonutils.dumps(ports)}}, fake_exception.detail_data) @ddt.data( {'network_info': {'network_type': 'vlan', 'segmentation_id': 1000}}, {'network_info': {'network_type': None, 'segmentation_id': None}}, {'network_info': {'network_type': 'flat', 'segmentation_id': None}}) @ddt.unpack def test_validate_network_type_with_valid_network_types(self, network_info): self.library._validate_network_type(network_info) @ddt.data( {'network_info': {'network_type': 'vxlan', 'segmentation_id': 1000}}, {'network_info': {'network_type': 'gre', 'segmentation_id': 100}}) @ddt.unpack def test_validate_network_type_with_invalid_network_types(self, network_info): self.assertRaises(exception.NetworkBadConfigurationException, self.library._validate_network_type, network_info) def test_get_vserver_name(self): vserver_id = fake.NETWORK_INFO['server_id'] vserver_name = fake.VSERVER_NAME_TEMPLATE % vserver_id actual_result = self.library._get_vserver_name(vserver_id) self.assertEqual(vserver_name, actual_result) @ddt.data(None, fake.IPSPACE) def test_create_vserver(self, existing_ipspace): versions = ['fake_v1', 'fake_v2'] self.library.configuration.netapp_enabled_share_protocols = versions vserver_id = fake.NETWORK_INFO['server_id'] vserver_name = fake.VSERVER_NAME_TEMPLATE % vserver_id vserver_client = mock.Mock() self.mock_object(self.library._client, 'list_cluster_nodes', mock.Mock(return_value=fake.CLUSTER_NODES)) self.mock_object(self.library, '_get_node_data_port', mock.Mock(return_value='fake_port')) self.mock_object(context, 'get_admin_context', mock.Mock(return_value='fake_admin_context')) self.mock_object(self.library, '_get_api_client', mock.Mock(return_value=vserver_client)) self.mock_object(self.library._client, 'vserver_exists', mock.Mock(return_value=False)) self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=fake.AGGREGATES)) self.mock_object(self.library, '_create_ipspace', mock.Mock(return_value=fake.IPSPACE)) get_ipspace_name_for_vlan_port = self.mock_object( self.library._client, 'get_ipspace_name_for_vlan_port', mock.Mock(return_value=existing_ipspace)) self.mock_object(self.library, '_create_vserver_lifs') self.mock_object(self.library, '_create_vserver_admin_lif') self.mock_object(self.library, '_create_vserver_routes') self.library._create_vserver(vserver_name, fake.NETWORK_INFO) get_ipspace_name_for_vlan_port.assert_called_once_with( fake.CLUSTER_NODES[0], 'fake_port', fake.NETWORK_INFO['segmentation_id']) if not existing_ipspace: self.library._create_ipspace.assert_called_once_with( fake.NETWORK_INFO) self.library._client.create_vserver.assert_called_once_with( vserver_name, fake.ROOT_VOLUME_AGGREGATE, fake.ROOT_VOLUME, fake.AGGREGATES, fake.IPSPACE) self.library._get_api_client.assert_called_once_with( vserver=vserver_name) self.library._create_vserver_lifs.assert_called_once_with( vserver_name, vserver_client, fake.NETWORK_INFO, fake.IPSPACE) self.library._create_vserver_admin_lif.assert_called_once_with( vserver_name, vserver_client, fake.NETWORK_INFO, fake.IPSPACE) self.library._create_vserver_routes.assert_called_once_with( vserver_client, fake.NETWORK_INFO) vserver_client.enable_nfs.assert_called_once_with(versions) self.library._client.setup_security_services.assert_called_once_with( fake.NETWORK_INFO['security_services'], 
vserver_client, vserver_name) def test_create_vserver_already_present(self): vserver_id = fake.NETWORK_INFO['server_id'] vserver_name = fake.VSERVER_NAME_TEMPLATE % vserver_id self.mock_object(context, 'get_admin_context', mock.Mock(return_value='fake_admin_context')) self.mock_object(self.library._client, 'vserver_exists', mock.Mock(return_value=True)) self.assertRaises(exception.NetAppException, self.library._create_vserver, vserver_name, fake.NETWORK_INFO) @ddt.data( {'lif_exception': netapp_api.NaApiError, 'existing_ipspace': fake.IPSPACE}, {'lif_exception': netapp_api.NaApiError, 'existing_ipspace': None}, {'lif_exception': exception.NetAppException, 'existing_ipspace': None}, {'lif_exception': exception.NetAppException, 'existing_ipspace': fake.IPSPACE}) @ddt.unpack def test_create_vserver_lif_creation_failure(self, lif_exception, existing_ipspace): vserver_id = fake.NETWORK_INFO['server_id'] vserver_name = fake.VSERVER_NAME_TEMPLATE % vserver_id vserver_client = mock.Mock() self.mock_object(self.library._client, 'list_cluster_nodes', mock.Mock(return_value=fake.CLUSTER_NODES)) self.mock_object(self.library, '_get_node_data_port', mock.Mock(return_value='fake_port')) self.mock_object(context, 'get_admin_context', mock.Mock(return_value='fake_admin_context')) self.mock_object(self.library, '_get_api_client', mock.Mock(return_value=vserver_client)) self.mock_object(self.library._client, 'vserver_exists', mock.Mock(return_value=False)) self.mock_object(self.library, '_find_matching_aggregates', mock.Mock(return_value=fake.AGGREGATES)) self.mock_object(self.library._client, 'get_ipspace_name_for_vlan_port', mock.Mock(return_value=existing_ipspace)) self.mock_object(self.library, '_create_ipspace', mock.Mock(return_value=fake.IPSPACE)) self.mock_object(self.library, '_create_vserver_lifs', mock.Mock(side_effect=lif_exception)) self.mock_object(self.library, '_delete_vserver') self.assertRaises(lif_exception, self.library._create_vserver, vserver_name, fake.NETWORK_INFO) self.library._get_api_client.assert_called_with(vserver=vserver_name) self.assertTrue(self.library._client.create_vserver.called) self.library._create_vserver_lifs.assert_called_with( vserver_name, vserver_client, fake.NETWORK_INFO, fake.IPSPACE) self.library._delete_vserver.assert_called_once_with( vserver_name, needs_lock=False, security_services=None) self.assertFalse(vserver_client.enable_nfs.called) self.assertEqual(1, lib_multi_svm.LOG.error.call_count) def test_get_valid_ipspace_name(self): result = self.library._get_valid_ipspace_name(fake.IPSPACE_ID) expected = 'ipspace_' + fake.IPSPACE_ID.replace('-', '_') self.assertEqual(expected, result) def test_create_ipspace_not_supported(self): self.library._client.features.IPSPACES = False result = self.library._create_ipspace(fake.NETWORK_INFO) self.assertIsNone(result) @ddt.data(None, 'flat') def test_create_ipspace_not_vlan(self, network_type): self.library._client.features.IPSPACES = True network_info = copy.deepcopy(fake.NETWORK_INFO) network_info['network_allocations'][0]['segmentation_id'] = None network_info['network_allocations'][0]['network_type'] = network_type result = self.library._create_ipspace(network_info) self.assertEqual('Default', result) def test_create_ipspace(self): self.library._client.features.IPSPACES = True self.mock_object(self.library._client, 'create_ipspace', mock.Mock(return_value=False)) result = self.library._create_ipspace(fake.NETWORK_INFO) expected = self.library._get_valid_ipspace_name( fake.NETWORK_INFO['neutron_subnet_id']) 
self.assertEqual(expected, result) self.library._client.create_ipspace.assert_called_once_with(expected) def test_create_vserver_lifs(self): self.mock_object(self.library._client, 'list_cluster_nodes', mock.Mock(return_value=fake.CLUSTER_NODES)) self.mock_object(self.library, '_get_lif_name', mock.Mock(side_effect=['fake_lif1', 'fake_lif2'])) self.mock_object(self.library, '_create_lif') self.library._create_vserver_lifs(fake.VSERVER1, 'fake_vserver_client', fake.NETWORK_INFO, fake.IPSPACE) self.library._create_lif.assert_has_calls([ mock.call('fake_vserver_client', fake.VSERVER1, fake.IPSPACE, fake.CLUSTER_NODES[0], 'fake_lif1', fake.NETWORK_INFO['network_allocations'][0]), mock.call('fake_vserver_client', fake.VSERVER1, fake.IPSPACE, fake.CLUSTER_NODES[1], 'fake_lif2', fake.NETWORK_INFO['network_allocations'][1])]) def test_create_vserver_admin_lif(self): self.mock_object(self.library._client, 'list_cluster_nodes', mock.Mock(return_value=fake.CLUSTER_NODES)) self.mock_object(self.library, '_get_lif_name', mock.Mock(return_value='fake_admin_lif')) self.mock_object(self.library, '_create_lif') self.library._create_vserver_admin_lif(fake.VSERVER1, 'fake_vserver_client', fake.NETWORK_INFO, fake.IPSPACE) self.library._create_lif.assert_has_calls([ mock.call('fake_vserver_client', fake.VSERVER1, fake.IPSPACE, fake.CLUSTER_NODES[0], 'fake_admin_lif', fake.NETWORK_INFO['admin_network_allocations'][0])]) def test_create_vserver_admin_lif_no_admin_network(self): fake_network_info = copy.deepcopy(fake.NETWORK_INFO) fake_network_info['admin_network_allocations'] = [] self.mock_object(self.library._client, 'list_cluster_nodes', mock.Mock(return_value=fake.CLUSTER_NODES)) self.mock_object(self.library, '_get_lif_name', mock.Mock(return_value='fake_admin_lif')) self.mock_object(self.library, '_create_lif') self.library._create_vserver_admin_lif(fake.VSERVER1, 'fake_vserver_client', fake_network_info, fake.IPSPACE) self.assertFalse(self.library._create_lif.called) @ddt.data( fake.get_network_info(fake.USER_NETWORK_ALLOCATIONS, fake.ADMIN_NETWORK_ALLOCATIONS), fake.get_network_info(fake.USER_NETWORK_ALLOCATIONS_IPV6, fake.ADMIN_NETWORK_ALLOCATIONS)) def test_create_vserver_routes(self, network_info): expected_gateway = network_info['network_allocations'][0]['gateway'] vserver_client = mock.Mock() self.mock_object(vserver_client, 'create_route') retval = self.library._create_vserver_routes( vserver_client, network_info) self.assertIsNone(retval) vserver_client.create_route.assert_called_once_with(expected_gateway) def test_get_node_data_port(self): self.mock_object(self.client, 'list_node_data_ports', mock.Mock(return_value=fake.NODE_DATA_PORTS)) self.library.configuration.netapp_port_name_search_pattern = 'e0c' result = self.library._get_node_data_port(fake.CLUSTER_NODE) self.assertEqual('e0c', result) self.library._client.list_node_data_ports.assert_has_calls([ mock.call(fake.CLUSTER_NODE)]) def test_get_node_data_port_no_match(self): self.mock_object(self.client, 'list_node_data_ports', mock.Mock(return_value=fake.NODE_DATA_PORTS)) self.library.configuration.netapp_port_name_search_pattern = 'ifgroup1' self.assertRaises(exception.NetAppException, self.library._get_node_data_port, fake.CLUSTER_NODE) def test_get_lif_name(self): result = self.library._get_lif_name( 'fake_node', fake.NETWORK_INFO['network_allocations'][0]) self.assertEqual('os_132dbb10-9a36-46f2-8d89-3d909830c356', result) @ddt.data(fake.MTU, None, 'not-present') def test_create_lif(self, mtu): """Tests cases where MTU is a valid value, 
None or not present.""" expected_mtu = (mtu if mtu not in (None, 'not-present') else fake.DEFAULT_MTU) network_allocations = copy.deepcopy( fake.NETWORK_INFO['network_allocations'][0]) network_allocations['mtu'] = mtu if mtu == 'not-present': network_allocations.pop('mtu') vserver_client = mock.Mock() vserver_client.network_interface_exists = mock.Mock( return_value=False) self.mock_object(self.library, '_get_node_data_port', mock.Mock(return_value='fake_port')) self.library._create_lif(vserver_client, 'fake_vserver', 'fake_ipspace', 'fake_node', 'fake_lif', network_allocations) self.library._client.create_network_interface.assert_has_calls([ mock.call('10.10.10.10', '255.255.255.0', '1000', 'fake_node', 'fake_port', 'fake_vserver', 'fake_lif', 'fake_ipspace', expected_mtu)]) def test_create_lif_if_nonexistent_already_present(self): vserver_client = mock.Mock() vserver_client.network_interface_exists = mock.Mock( return_value=True) self.mock_object(self.library, '_get_node_data_port', mock.Mock(return_value='fake_port')) self.library._create_lif(vserver_client, 'fake_vserver', fake.IPSPACE, 'fake_node', 'fake_lif', fake.NETWORK_INFO['network_allocations'][0]) self.assertFalse(self.library._client.create_network_interface.called) def test_get_network_allocations_number(self): self.library._client.list_cluster_nodes.return_value = ( fake.CLUSTER_NODES) result = self.library.get_network_allocations_number() self.assertEqual(len(fake.CLUSTER_NODES), result) def test_get_admin_network_allocations_number(self): result = self.library.get_admin_network_allocations_number( 'fake_admin_network_api') self.assertEqual(1, result) def test_get_admin_network_allocations_number_no_admin_network(self): result = self.library.get_admin_network_allocations_number(None) self.assertEqual(0, result) def test_teardown_server(self): self.library._client.vserver_exists.return_value = True mock_delete_vserver = self.mock_object(self.library, '_delete_vserver') self.library.teardown_server( fake.SHARE_SERVER['backend_details'], security_services=fake.NETWORK_INFO['security_services']) self.library._client.vserver_exists.assert_called_once_with( fake.VSERVER1) mock_delete_vserver.assert_called_once_with( fake.VSERVER1, security_services=fake.NETWORK_INFO['security_services']) @ddt.data(None, {}, {'vserver_name': None}) def test_teardown_server_no_share_server(self, server_details): mock_delete_vserver = self.mock_object(self.library, '_delete_vserver') self.library.teardown_server(server_details) self.assertFalse(mock_delete_vserver.called) self.assertTrue(lib_multi_svm.LOG.warning.called) def test_teardown_server_no_vserver(self): self.library._client.vserver_exists.return_value = False mock_delete_vserver = self.mock_object(self.library, '_delete_vserver') self.library.teardown_server( fake.SHARE_SERVER['backend_details'], security_services=fake.NETWORK_INFO['security_services']) self.library._client.vserver_exists.assert_called_once_with( fake.VSERVER1) self.assertFalse(mock_delete_vserver.called) self.assertTrue(lib_multi_svm.LOG.warning.called) @ddt.data(True, False) def test_delete_vserver_no_ipspace(self, lock): self.mock_object(self.library._client, 'get_vserver_ipspace', mock.Mock(return_value=None)) vserver_client = mock.Mock() self.mock_object(self.library, '_get_api_client', mock.Mock(return_value=vserver_client)) mock_delete_vserver_vlans = self.mock_object(self.library, '_delete_vserver_vlans') net_interfaces = copy.deepcopy(c_fake.NETWORK_INTERFACES_MULTIPLE) net_interfaces_with_vlans = 
[net_interfaces[0]] self.mock_object(vserver_client, 'get_network_interfaces', mock.Mock(return_value=net_interfaces)) security_services = fake.NETWORK_INFO['security_services'] self.mock_object(self.library, '_delete_vserver_peers') self.library._delete_vserver(fake.VSERVER1, security_services=security_services, needs_lock=lock) self.library._client.get_vserver_ipspace.assert_called_once_with( fake.VSERVER1) self.library._delete_vserver_peers.assert_called_once_with( fake.VSERVER1) self.library._client.delete_vserver.assert_called_once_with( fake.VSERVER1, vserver_client, security_services=security_services) self.assertFalse(self.library._client.delete_ipspace.called) mock_delete_vserver_vlans.assert_called_once_with( net_interfaces_with_vlans) @ddt.data(True, False) def test_delete_vserver_ipspace_has_data_vservers(self, lock): self.mock_object(self.library._client, 'get_vserver_ipspace', mock.Mock(return_value=fake.IPSPACE)) vserver_client = mock.Mock() self.mock_object(self.library, '_get_api_client', mock.Mock(return_value=vserver_client)) self.mock_object(self.library._client, 'ipspace_has_data_vservers', mock.Mock(return_value=True)) mock_delete_vserver_vlans = self.mock_object(self.library, '_delete_vserver_vlans') self.mock_object(self.library, '_delete_vserver_peers') self.mock_object( vserver_client, 'get_network_interfaces', mock.Mock(return_value=c_fake.NETWORK_INTERFACES_MULTIPLE)) security_services = fake.NETWORK_INFO['security_services'] self.library._delete_vserver(fake.VSERVER1, security_services=security_services, needs_lock=lock) self.library._client.get_vserver_ipspace.assert_called_once_with( fake.VSERVER1) self.library._client.delete_vserver.assert_called_once_with( fake.VSERVER1, vserver_client, security_services=security_services) self.library._delete_vserver_peers.assert_called_once_with( fake.VSERVER1) self.assertFalse(self.library._client.delete_ipspace.called) mock_delete_vserver_vlans.assert_called_once_with( [c_fake.NETWORK_INTERFACES_MULTIPLE[0]]) @ddt.data([], c_fake.NETWORK_INTERFACES) def test_delete_vserver_with_ipspace(self, interfaces): self.mock_object(self.library._client, 'get_vserver_ipspace', mock.Mock(return_value=fake.IPSPACE)) vserver_client = mock.Mock() self.mock_object(self.library, '_get_api_client', mock.Mock(return_value=vserver_client)) self.mock_object(self.library._client, 'ipspace_has_data_vservers', mock.Mock(return_value=False)) mock_delete_vserver_vlans = self.mock_object(self.library, '_delete_vserver_vlans') self.mock_object(self.library, '_delete_vserver_peers') self.mock_object(vserver_client, 'get_network_interfaces', mock.Mock(return_value=interfaces)) security_services = fake.NETWORK_INFO['security_services'] self.library._delete_vserver(fake.VSERVER1, security_services=security_services) self.library._delete_vserver_peers.assert_called_once_with( fake.VSERVER1 ) self.library._client.get_vserver_ipspace.assert_called_once_with( fake.VSERVER1) self.library._client.delete_vserver.assert_called_once_with( fake.VSERVER1, vserver_client, security_services=security_services) self.library._client.delete_ipspace.assert_called_once_with( fake.IPSPACE) mock_delete_vserver_vlans.assert_called_once_with(interfaces) def test__delete_vserver_peers(self): self.mock_object(self.library, '_get_vserver_peers', mock.Mock(return_value=fake.VSERVER_PEER)) self.mock_object(self.library, '_delete_vserver_peer') self.library._delete_vserver_peers(fake.VSERVER1) self.library._get_vserver_peers.assert_called_once_with( vserver=fake.VSERVER1 ) 
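# --- Illustrative sketch, not part of the manila source --------------------
# The VLAN-deletion tests just below rely on the naming convention that a VLAN
# interface's 'home-port' is '<parent port>-<vlan id>' (e.g. 'e0c-100'), which
# gets split apart before delete_vlan(node, port, vlan) is called. A minimal
# standalone check of that convention using only unittest.mock; the helper
# name and the fake interface dict are hypothetical.
from unittest import mock

def _delete_vlans_sketch(client, interfaces):
    for interface in interfaces:
        port, vlan = interface['home-port'].split('-')
        client.delete_vlan(interface['home-node'], port, vlan)

_client = mock.Mock()
_delete_vlans_sketch(_client, [{'home-node': 'node1', 'home-port': 'e0c-100'}])
_client.delete_vlan.assert_called_once_with('node1', 'e0c', '100')
# ----------------------------------------------------------------------------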
self.library._delete_vserver_peer.assert_called_once_with( fake.VSERVER_PEER[0]['vserver'], fake.VSERVER_PEER[0]['peer-vserver'] ) def test_delete_vserver_vlans(self): self.library._delete_vserver_vlans(c_fake.NETWORK_INTERFACES) for interface in c_fake.NETWORK_INTERFACES: home_port = interface['home-port'] port, vlan = home_port.split('-') node = interface['home-node'] self.library._client.delete_vlan.assert_called_once_with( node, port, vlan) def test_delete_vserver_vlans_client_error(self): mock_exception_log = self.mock_object(lib_multi_svm.LOG, 'exception') self.mock_object( self.library._client, 'delete_vlan', mock.Mock(side_effect=exception.NetAppException("fake error"))) self.library._delete_vserver_vlans(c_fake.NETWORK_INTERFACES) for interface in c_fake.NETWORK_INTERFACES: home_port = interface['home-port'] port, vlan = home_port.split('-') node = interface['home-node'] self.library._client.delete_vlan.assert_called_once_with( node, port, vlan) self.assertEqual(1, mock_exception_log.call_count) @ddt.data([], [{'vserver': c_fake.VSERVER_NAME, 'peer-vserver': c_fake.VSERVER_PEER_NAME, 'applications': [ {'vserver-peer-application': 'snapmirror'}] }]) def test_create_replica(self, vserver_peers): fake_cluster_name = 'fake_cluster' self.mock_object(self.library, '_get_vservers_from_replicas', mock.Mock(return_value=(self.fake_vserver, self.fake_new_vserver_name))) self.mock_object(self.library, 'find_active_replica', mock.Mock(return_value=self.fake_replica)) self.mock_object(share_utils, 'extract_host', mock.Mock(side_effect=[self.fake_new_replica_host, self.fake_replica_host])) self.mock_object(data_motion, 'get_client_for_backend', mock.Mock(side_effect=[self.fake_new_client, self.fake_client])) self.mock_object(self.library, '_get_vserver_peers', mock.Mock(return_value=vserver_peers)) self.mock_object(self.fake_new_client, 'get_cluster_name', mock.Mock(return_value=fake_cluster_name)) self.mock_object(self.fake_client, 'create_vserver_peer') self.mock_object(self.fake_new_client, 'accept_vserver_peer') lib_base_model_update = { 'export_locations': [], 'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC, 'access_rules_status': constants.STATUS_ACTIVE, } self.mock_object(lib_base.NetAppCmodeFileStorageLibrary, 'create_replica', mock.Mock(return_value=lib_base_model_update)) model_update = self.library.create_replica( None, [self.fake_replica], self.fake_new_replica, [], [], share_server=None) self.assertDictMatch(lib_base_model_update, model_update) self.library._get_vservers_from_replicas.assert_called_once_with( None, [self.fake_replica], self.fake_new_replica ) self.library.find_active_replica.assert_called_once_with( [self.fake_replica] ) self.assertEqual(2, share_utils.extract_host.call_count) self.assertEqual(2, data_motion.get_client_for_backend.call_count) self.library._get_vserver_peers.assert_called_once_with( self.fake_new_vserver_name, self.fake_vserver ) self.fake_new_client.get_cluster_name.assert_called_once_with() if not vserver_peers: self.fake_client.create_vserver_peer.assert_called_once_with( self.fake_new_vserver_name, self.fake_vserver, peer_cluster_name=fake_cluster_name ) self.fake_new_client.accept_vserver_peer.assert_called_once_with( self.fake_vserver, self.fake_new_vserver_name ) base_class = lib_base.NetAppCmodeFileStorageLibrary base_class.create_replica.assert_called_once_with( None, [self.fake_replica], self.fake_new_replica, [], [] ) def test_delete_replica(self): base_class = lib_base.NetAppCmodeFileStorageLibrary vserver_peers = 
copy.deepcopy(fake.VSERVER_PEER) vserver_peers[0]['vserver'] = self.fake_vserver vserver_peers[0]['peer-vserver'] = self.fake_new_vserver_name self.mock_object(self.library, '_get_vservers_from_replicas', mock.Mock(return_value=(self.fake_vserver, self.fake_new_vserver_name))) self.mock_object(base_class, 'delete_replica') self.mock_object(self.library, '_get_snapmirrors', mock.Mock(return_value=[])) self.mock_object(self.library, '_get_vserver_peers', mock.Mock(return_value=vserver_peers)) self.mock_object(self.library, '_delete_vserver_peer') self.library.delete_replica(None, [self.fake_replica], self.fake_new_replica, [], share_server=None) self.library._get_vservers_from_replicas.assert_called_once_with( None, [self.fake_replica], self.fake_new_replica ) base_class.delete_replica.assert_called_once_with( None, [self.fake_replica], self.fake_new_replica, [] ) self.library._get_snapmirrors.assert_has_calls( [mock.call(self.fake_vserver, self.fake_new_vserver_name), mock.call(self.fake_new_vserver_name, self.fake_vserver)] ) self.library._get_vserver_peers.assert_called_once_with( self.fake_new_vserver_name, self.fake_vserver ) self.library._delete_vserver_peer.assert_called_once_with( self.fake_new_vserver_name, self.fake_vserver ) def test_get_vservers_from_replicas(self): self.mock_object(self.library, 'find_active_replica', mock.Mock(return_value=self.fake_replica)) vserver, peer_vserver = self.library._get_vservers_from_replicas( None, [self.fake_replica], self.fake_new_replica) self.library.find_active_replica.assert_called_once_with( [self.fake_replica] ) self.assertEqual(self.fake_vserver, vserver) self.assertEqual(self.fake_new_vserver_name, peer_vserver) def test_get_vserver_peers(self): self.mock_object(self.library._client, 'get_vserver_peers') self.library._get_vserver_peers( vserver=self.fake_vserver, peer_vserver=self.fake_new_vserver_name) self.library._client.get_vserver_peers.assert_called_once_with( self.fake_vserver, self.fake_new_vserver_name ) def test_create_vserver_peer(self): self.mock_object(self.library._client, 'create_vserver_peer') self.library._create_vserver_peer( None, vserver=self.fake_vserver, peer_vserver=self.fake_new_vserver_name) self.library._client.create_vserver_peer.assert_called_once_with( self.fake_vserver, self.fake_new_vserver_name ) def test_delete_vserver_peer(self): self.mock_object(self.library._client, 'delete_vserver_peer') self.library._delete_vserver_peer( vserver=self.fake_vserver, peer_vserver=self.fake_new_vserver_name) self.library._client.delete_vserver_peer.assert_called_once_with( self.fake_vserver, self.fake_new_vserver_name ) def test_create_share_from_snaphot(self): fake_parent_share = copy.deepcopy(fake.SHARE) fake_parent_share['id'] = fake.SHARE_ID2 mock_create_from_snap = self.mock_object( lib_base.NetAppCmodeFileStorageLibrary, 'create_share_from_snapshot') self.library.create_share_from_snapshot( None, fake.SHARE, fake.SNAPSHOT, share_server=fake.SHARE_SERVER, parent_share=fake_parent_share) mock_create_from_snap.assert_called_once_with( None, fake.SHARE, fake.SNAPSHOT, share_server=fake.SHARE_SERVER, parent_share=fake_parent_share ) @ddt.data( {'src_cluster_name': fake.CLUSTER_NAME, 'dest_cluster_name': fake.CLUSTER_NAME, 'has_vserver_peers': None}, {'src_cluster_name': fake.CLUSTER_NAME, 'dest_cluster_name': fake.CLUSTER_NAME_2, 'has_vserver_peers': False}, {'src_cluster_name': fake.CLUSTER_NAME, 'dest_cluster_name': fake.CLUSTER_NAME_2, 'has_vserver_peers': True} ) @ddt.unpack def 
test_create_share_from_snaphot_different_hosts(self, src_cluster_name, dest_cluster_name, has_vserver_peers): class FakeDBObj(dict): def to_dict(self): return self fake_parent_share = copy.deepcopy(fake.SHARE) fake_parent_share['id'] = fake.SHARE_ID2 fake_parent_share['host'] = fake.MANILA_HOST_NAME_2 fake_share = FakeDBObj(fake.SHARE) fake_share_server = FakeDBObj(fake.SHARE_SERVER) src_vserver = fake.VSERVER2 dest_vserver = fake.VSERVER1 src_backend = fake.BACKEND_NAME dest_backend = fake.BACKEND_NAME_2 mock_dm_session = mock.Mock() mock_dm_constr = self.mock_object( data_motion, "DataMotionSession", mock.Mock(return_value=mock_dm_session)) mock_get_vserver = self.mock_object( mock_dm_session, 'get_vserver_from_share', mock.Mock(side_effect=[src_vserver, dest_vserver])) src_vserver_client = mock.Mock() dest_vserver_client = mock.Mock() mock_extract_host = self.mock_object( share_utils, 'extract_host', mock.Mock(side_effect=[src_backend, dest_backend])) mock_dm_get_client = self.mock_object( data_motion, 'get_client_for_backend', mock.Mock(side_effect=[src_vserver_client, dest_vserver_client])) mock_get_src_cluster_name = self.mock_object( src_vserver_client, 'get_cluster_name', mock.Mock(return_value=src_cluster_name)) mock_get_dest_cluster_name = self.mock_object( dest_vserver_client, 'get_cluster_name', mock.Mock(return_value=dest_cluster_name)) mock_get_vserver_peers = self.mock_object( self.library, '_get_vserver_peers', mock.Mock(return_value=has_vserver_peers)) mock_create_vserver_peer = self.mock_object(dest_vserver_client, 'create_vserver_peer') mock_accept_peer = self.mock_object(src_vserver_client, 'accept_vserver_peer') mock_create_from_snap = self.mock_object( lib_base.NetAppCmodeFileStorageLibrary, 'create_share_from_snapshot') self.library.create_share_from_snapshot( None, fake_share, fake.SNAPSHOT, share_server=fake_share_server, parent_share=fake_parent_share) internal_share = copy.deepcopy(fake.SHARE) internal_share['share_server'] = copy.deepcopy(fake.SHARE_SERVER) mock_dm_constr.assert_called_once() mock_get_vserver.assert_has_calls([mock.call(fake_parent_share), mock.call(internal_share)]) mock_extract_host.assert_has_calls([ mock.call(fake_parent_share['host'], level='backend_name'), mock.call(internal_share['host'], level='backend_name')]) mock_dm_get_client.assert_has_calls([ mock.call(src_backend, vserver_name=src_vserver), mock.call(dest_backend, vserver_name=dest_vserver) ]) mock_get_src_cluster_name.assert_called_once() mock_get_dest_cluster_name.assert_called_once() if src_cluster_name != dest_cluster_name: mock_get_vserver_peers.assert_called_once_with(dest_vserver, src_vserver) if not has_vserver_peers: mock_create_vserver_peer.assert_called_once_with( dest_vserver, src_vserver, peer_cluster_name=src_cluster_name) mock_accept_peer.assert_called_once_with(src_vserver, dest_vserver) mock_create_from_snap.assert_called_once_with( None, fake.SHARE, fake.SNAPSHOT, share_server=fake.SHARE_SERVER, parent_share=fake_parent_share) manila-10.0.0/manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_driver_interfaces.py0000664000175000017500000000511613656750227033314 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Mock unit tests for the NetApp file share driver interfaces """ from unittest import mock from manila.share.drivers.netapp.dataontap.cluster_mode import drv_multi_svm from manila.share.drivers.netapp.dataontap.cluster_mode import drv_single_svm from manila import test class NetAppFileStorageDriverInterfaceTestCase(test.TestCase): def setUp(self): super(NetAppFileStorageDriverInterfaceTestCase, self).setUp() self.mock_object(drv_multi_svm.NetAppCmodeMultiSvmShareDriver, '__init__', mock.Mock(return_value=None)) self.mock_object(drv_single_svm.NetAppCmodeSingleSvmShareDriver, '__init__', mock.Mock(return_value=None)) self.drv_multi_svm = drv_multi_svm.NetAppCmodeMultiSvmShareDriver() self.drv_single_svm = drv_single_svm.NetAppCmodeSingleSvmShareDriver() def test_driver_interfaces_match(self): """Ensure the NetApp file storage driver interfaces match. The two file share Manila drivers from NetApp (cDOT multi-SVM, cDOT single-SVM) are merely passthrough shim layers atop a common file storage library. Bugs are easily introduced when a Manila method is exposed via a subset of those driver shims. This test ensures they remain in sync and the library features are uniformly available in the drivers. """ # Get local functions of each driver interface multi_svm_methods = self._get_local_functions(self.drv_multi_svm) single_svm_methods = self._get_local_functions(self.drv_single_svm) # Ensure NetApp file share driver shims are identical self.assertSetEqual(multi_svm_methods, single_svm_methods) def _get_local_functions(self, obj): """Get function names of an object without superclass functions.""" return set([key for key, value in type(obj).__dict__.items() if callable(value)]) manila-10.0.0/manila/tests/share/drivers/netapp/test_common.py0000664000175000017500000001437713656750227024457 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
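# --- Illustrative sketch, not part of the manila source --------------------
# The NetAppDriver factory tests below assert three things about how the
# driver mode is derived: an explicit driver_handles_share_servers value of
# True/False selects multi-SVM/single-SVM, ONTAP_CLUSTER falls back to
# multi-SVM when the value is missing, and an unknown family with no value is
# an error. A minimal standalone analogue of that decision (the names here are
# hypothetical, not the real na_common helpers):
def _get_driver_mode_sketch(storage_family, handles_share_servers):
    if handles_share_servers is None:
        if storage_family.lower() == 'ontap_cluster':
            return 'multi_svm'  # default assumed for this family, per the tests
        raise ValueError('no default driver mode for %s' % storage_family)
    return 'multi_svm' if handles_share_servers else 'single_svm'

assert _get_driver_mode_sketch('ONTAP_CLUSTER', None) == 'multi_svm'
assert _get_driver_mode_sketch('ONTAP_CLUSTER', False) == 'single_svm'
# ----------------------------------------------------------------------------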
from unittest import mock import six from manila import exception from manila.share.drivers.netapp import common as na_common from manila.share.drivers.netapp.dataontap.cluster_mode import drv_multi_svm from manila.share.drivers.netapp import utils as na_utils from manila import test from manila.tests.share.drivers.netapp import fakes as na_fakes class NetAppDriverFactoryTestCase(test.TestCase): def test_new(self): self.mock_object(na_utils.OpenStackInfo, 'info', mock.Mock(return_value='fake_info')) mock_get_driver_mode = self.mock_object( na_common.NetAppDriver, '_get_driver_mode', mock.Mock(return_value='fake_mode')) mock_create_driver = self.mock_object(na_common.NetAppDriver, '_create_driver') config = na_fakes.create_configuration() config.netapp_storage_family = 'fake_family' config.driver_handles_share_servers = True kwargs = {'configuration': config} na_common.NetAppDriver(**kwargs) kwargs['app_version'] = 'fake_info' mock_get_driver_mode.assert_called_once_with('fake_family', True) mock_create_driver.assert_called_once_with('fake_family', 'fake_mode', *(), **kwargs) def test_new_missing_config(self): self.mock_object(na_utils.OpenStackInfo, 'info') self.mock_object(na_common.NetAppDriver, '_create_driver') self.assertRaises(exception.InvalidInput, na_common.NetAppDriver, **{}) def test_new_missing_family(self): self.mock_object(na_utils.OpenStackInfo, 'info') self.mock_object(na_common.NetAppDriver, '_create_driver') config = na_fakes.create_configuration() config.driver_handles_share_servers = True config.netapp_storage_family = None kwargs = {'configuration': config} self.assertRaises(exception.InvalidInput, na_common.NetAppDriver, **kwargs) def test_new_missing_mode(self): self.mock_object(na_utils.OpenStackInfo, 'info') self.mock_object(na_common.NetAppDriver, '_create_driver') config = na_fakes.create_configuration() config.driver_handles_share_servers = None config.netapp_storage_family = 'fake_family' kwargs = {'configuration': config} self.assertRaises(exception.InvalidInput, na_common.NetAppDriver, **kwargs) def test_get_driver_mode_missing_mode_good_default(self): result = na_common.NetAppDriver._get_driver_mode('ONTAP_CLUSTER', None) self.assertEqual(na_common.MULTI_SVM, result) def test_create_driver_missing_mode_no_default(self): self.assertRaises(exception.InvalidInput, na_common.NetAppDriver._get_driver_mode, 'fake_family', None) def test_get_driver_mode_multi_svm(self): result = na_common.NetAppDriver._get_driver_mode('ONTAP_CLUSTER', True) self.assertEqual(na_common.MULTI_SVM, result) def test_get_driver_mode_single_svm(self): result = na_common.NetAppDriver._get_driver_mode('ONTAP_CLUSTER', False) self.assertEqual(na_common.SINGLE_SVM, result) def test_create_driver(self): def get_full_class_name(obj): return obj.__module__ + '.' 
+ obj.__class__.__name__ registry = na_common.NETAPP_UNIFIED_DRIVER_REGISTRY for family in six.iterkeys(registry): for mode, full_class_name in registry[family].items(): config = na_fakes.create_configuration() config.local_conf.set_override('driver_handles_share_servers', mode == na_common.MULTI_SVM) kwargs = { 'configuration': config, 'private_storage': mock.Mock(), 'app_version': 'fake_info' } driver = na_common.NetAppDriver._create_driver( family, mode, **kwargs) self.assertEqual(full_class_name, get_full_class_name(driver)) def test_create_driver_case_insensitive(self): config = na_fakes.create_configuration() config.local_conf.set_override('driver_handles_share_servers', True) kwargs = { 'configuration': config, 'private_storage': mock.Mock(), 'app_version': 'fake_info' } driver = na_common.NetAppDriver._create_driver('ONTAP_CLUSTER', na_common.MULTI_SVM, **kwargs) self.assertIsInstance(driver, drv_multi_svm.NetAppCmodeMultiSvmShareDriver) def test_create_driver_invalid_family(self): kwargs = { 'configuration': na_fakes.create_configuration(), 'app_version': 'fake_info', } self.assertRaises(exception.InvalidInput, na_common.NetAppDriver._create_driver, 'fake_family', na_common.MULTI_SVM, **kwargs) def test_create_driver_invalid_mode(self): kwargs = { 'configuration': na_fakes.create_configuration(), 'app_version': 'fake_info', } self.assertRaises(exception.InvalidInput, na_common.NetAppDriver._create_driver, 'ontap_cluster', 'fake_mode', **kwargs) manila-10.0.0/manila/tests/share/drivers/netapp/fakes.py0000664000175000017500000000247113656750227023211 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.share import configuration as conf from manila.share import driver as manila_opts from manila.share.drivers.netapp import options as na_opts def create_configuration(): config = conf.Configuration(None) config.append_config_values(manila_opts.share_opts) config.append_config_values(na_opts.netapp_connection_opts) config.append_config_values(na_opts.netapp_transport_opts) config.append_config_values(na_opts.netapp_basicauth_opts) config.append_config_values(na_opts.netapp_provisioning_opts) return config def create_configuration_cmode(): config = create_configuration() config.append_config_values(na_opts.netapp_support_opts) return config manila-10.0.0/manila/tests/share/drivers/cephfs/0000775000175000017500000000000013656750362021523 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/cephfs/__init__.py0000664000175000017500000000000013656750227023622 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/cephfs/test_driver.py0000664000175000017500000010353213656750227024433 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from oslo_utils import units from manila.common import constants from manila import context import manila.exception as exception from manila.share import configuration from manila.share.drivers.cephfs import driver from manila.share import share_types from manila import test from manila.tests import fake_share DEFAULT_VOLUME_MODE = 0o755 ALT_VOLUME_MODE_CFG = '775' ALT_VOLUME_MODE = 0o775 class MockVolumeClientModule(object): """Mocked up version of ceph's VolumeClient interface.""" class VolumePath(object): """Copy of VolumePath from CephFSVolumeClient.""" def __init__(self, group_id, volume_id): self.group_id = group_id self.volume_id = volume_id def __eq__(self, other): return (self.group_id == other.group_id and self.volume_id == other.volume_id) def __str__(self): return "{0}/{1}".format(self.group_id, self.volume_id) class CephFSVolumeClient(mock.Mock): mock_used_bytes = 0 version = 1 def __init__(self, *args, **kwargs): mock.Mock.__init__(self, spec=[ "connect", "disconnect", "create_snapshot_volume", "destroy_snapshot_volume", "create_group", "destroy_group", "delete_volume", "purge_volume", "deauthorize", "evict", "set_max_bytes", "destroy_snapshot_group", "create_snapshot_group", "get_authorized_ids" ]) self.create_volume = mock.Mock(return_value={ "mount_path": "/foo/bar" }) self._get_path = mock.Mock(return_value='/foo/bar') self.get_mon_addrs = mock.Mock(return_value=["1.2.3.4", "5.6.7.8"]) self.get_authorized_ids = mock.Mock( return_value=[('eve', 'rw')]) self.authorize = mock.Mock(return_value={"auth_key": "abc123"}) self.get_used_bytes = mock.Mock(return_value=self.mock_used_bytes) self.rados = mock.Mock() self.rados.get_cluster_stats = mock.Mock(return_value={ "kb": 1000, "kb_avail": 500 }) @ddt.ddt class CephFSDriverTestCase(test.TestCase): """Test the CephFS driver. This is a very simple driver that mainly calls through to the CephFSVolumeClient interface, so the tests validate that the Manila driver calls map to the appropriate CephFSVolumeClient calls. 
""" def setUp(self): super(CephFSDriverTestCase, self).setUp() self._execute = mock.Mock() self.fake_conf = configuration.Configuration(None) self._context = context.get_admin_context() self._share = fake_share.fake_share(share_proto='CEPHFS') self.fake_conf.set_default('driver_handles_share_servers', False) self.fake_conf.set_default('cephfs_auth_id', 'manila') self.mock_object(driver, "ceph_volume_client", MockVolumeClientModule) self.mock_object(driver, "ceph_module_found", True) self.mock_object(driver, "cephfs_share_path") self.mock_object(driver, 'NativeProtocolHelper') self.mock_object(driver, 'NFSProtocolHelper') self._driver = ( driver.CephFSDriver(execute=self._execute, configuration=self.fake_conf)) self._driver.protocol_helper = mock.Mock() self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value={})) @ddt.data('cephfs', 'nfs') def test_do_setup(self, protocol_helper): self._driver.configuration.cephfs_protocol_helper_type = ( protocol_helper) self._driver.do_setup(self._context) if protocol_helper == 'cephfs': driver.NativeProtocolHelper.assert_called_once_with( self._execute, self._driver.configuration, ceph_vol_client=self._driver._volume_client) else: driver.NFSProtocolHelper.assert_called_once_with( self._execute, self._driver.configuration, ceph_vol_client=self._driver._volume_client) self._driver.protocol_helper.init_helper.assert_called_once_with() self.assertEqual(DEFAULT_VOLUME_MODE, self._driver._cephfs_volume_mode) @ddt.data('cephfs', 'nfs') def test_check_for_setup_error(self, protocol_helper): self._driver.configuration.cephfs_protocol_helper_type = ( protocol_helper) self._driver.check_for_setup_error() (self._driver.protocol_helper.check_for_setup_error. assert_called_once_with()) def test_create_share(self): cephfs_volume = {"mount_path": "/foo/bar"} self._driver.create_share(self._context, self._share) self._driver._volume_client.create_volume.assert_called_once_with( driver.cephfs_share_path(self._share), size=self._share['size'] * units.Gi, data_isolated=False, mode=DEFAULT_VOLUME_MODE) (self._driver.protocol_helper.get_export_locations. 
assert_called_once_with(self._share, cephfs_volume)) def test_create_share_error(self): share = fake_share.fake_share(share_proto='NFS') self.assertRaises(exception.ShareBackendException, self._driver.create_share, self._context, share) def test_update_access(self): alice = { 'id': 'instance_mapping_id1', 'access_id': 'accessid1', 'access_level': 'rw', 'access_type': 'cephx', 'access_to': 'alice' } add_rules = access_rules = [alice, ] delete_rules = [] self._driver.update_access( self._context, self._share, access_rules, add_rules, delete_rules, None) self._driver.protocol_helper.update_access.assert_called_once_with( self._context, self._share, access_rules, add_rules, delete_rules, share_server=None) def test_ensure_share(self): self._driver.ensure_share(self._context, self._share) self._driver._volume_client.create_volume.assert_called_once_with( driver.cephfs_share_path(self._share), size=self._share['size'] * units.Gi, data_isolated=False, mode=DEFAULT_VOLUME_MODE) def test_create_data_isolated(self): self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value={"cephfs:data_isolated": True}) ) self._driver.create_share(self._context, self._share) self._driver._volume_client.create_volume.assert_called_once_with( driver.cephfs_share_path(self._share), size=self._share['size'] * units.Gi, data_isolated=True, mode=DEFAULT_VOLUME_MODE) def test_delete_share(self): self._driver.delete_share(self._context, self._share) self._driver._volume_client.delete_volume.assert_called_once_with( driver.cephfs_share_path(self._share), data_isolated=False) self._driver._volume_client.purge_volume.assert_called_once_with( driver.cephfs_share_path(self._share), data_isolated=False) def test_delete_data_isolated(self): self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value={"cephfs:data_isolated": True}) ) self._driver.delete_share(self._context, self._share) self._driver._volume_client.delete_volume.assert_called_once_with( driver.cephfs_share_path(self._share), data_isolated=True) self._driver._volume_client.purge_volume.assert_called_once_with( driver.cephfs_share_path(self._share), data_isolated=True) def test_extend_share(self): new_size_gb = self._share['size'] * 2 new_size = new_size_gb * units.Gi self._driver.extend_share(self._share, new_size_gb, None) self._driver._volume_client.set_max_bytes.assert_called_once_with( driver.cephfs_share_path(self._share), new_size) def test_shrink_share(self): new_size_gb = self._share['size'] * 0.5 new_size = new_size_gb * units.Gi self._driver.shrink_share(self._share, new_size_gb, None) self._driver._volume_client.get_used_bytes.assert_called_once_with( driver.cephfs_share_path(self._share)) self._driver._volume_client.set_max_bytes.assert_called_once_with( driver.cephfs_share_path(self._share), new_size) def test_shrink_share_full(self): """That shrink fails when share is too full.""" new_size_gb = self._share['size'] * 0.5 # Pretend to be full up vc = MockVolumeClientModule.CephFSVolumeClient vc.mock_used_bytes = (units.Gi * self._share['size']) self.assertRaises(exception.ShareShrinkingPossibleDataLoss, self._driver.shrink_share, self._share, new_size_gb, None) self._driver._volume_client.set_max_bytes.assert_not_called() def test_create_snapshot(self): self._driver.create_snapshot(self._context, { "id": "instance1", "share": self._share, "snapshot_id": "snappy1" }, None) (self._driver._volume_client.create_snapshot_volume .assert_called_once_with( driver.cephfs_share_path(self._share), "snappy1_instance1", 
mode=DEFAULT_VOLUME_MODE)) def test_delete_snapshot(self): self._driver.delete_snapshot(self._context, { "id": "instance1", "share": self._share, "snapshot_id": "snappy1" }, None) (self._driver._volume_client.destroy_snapshot_volume .assert_called_once_with( driver.cephfs_share_path(self._share), "snappy1_instance1")) def test_create_share_group(self): self._driver.create_share_group(self._context, {"id": "grp1"}, None) self._driver._volume_client.create_group.assert_called_once_with( "grp1", mode=DEFAULT_VOLUME_MODE) def test_delete_share_group(self): self._driver.delete_share_group(self._context, {"id": "grp1"}, None) self._driver._volume_client.destroy_group.assert_called_once_with( "grp1") def test_create_share_snapshot(self): self._driver.create_share_group_snapshot(self._context, { 'share_group_id': 'sgid', 'id': 'snapid', }) (self._driver._volume_client.create_snapshot_group. assert_called_once_with("sgid", "snapid", mode=DEFAULT_VOLUME_MODE)) def test_delete_share_group_snapshot(self): self._driver.delete_share_group_snapshot(self._context, { 'share_group_id': 'sgid', 'id': 'snapid', }) (self._driver._volume_client.destroy_snapshot_group. assert_called_once_with("sgid", "snapid")) def test_delete_driver(self): # Create share to prompt volume_client construction self._driver.create_share(self._context, self._share) vc = self._driver._volume_client del self._driver vc.disconnect.assert_called_once_with() def test_delete_driver_no_client(self): self.assertIsNone(self._driver._volume_client) del self._driver def test_connect_noevict(self): # When acting as "admin", driver should skip evicting self._driver.configuration.local_conf.set_override('cephfs_auth_id', "admin") self._driver.create_share(self._context, self._share) vc = self._driver._volume_client vc.connect.assert_called_once_with(premount_evict=None) def test_update_share_stats(self): self._driver.get_configured_ip_versions = mock.Mock(return_value=[4]) self._driver._volume_client self._driver._update_share_stats() result = self._driver._stats self.assertTrue(result['ipv4_support']) self.assertFalse(result['ipv6_support']) self.assertEqual("CEPHFS", result['storage_protocol']) def test_module_missing(self): driver.ceph_module_found = False driver.ceph_volume_client = None self.assertRaises(exception.ManilaException, self._driver.create_share, self._context, self._share) @ddt.data('cephfs', 'nfs') def test_get_configured_ip_versions(self, protocol_helper): self._driver.configuration.cephfs_protocol_helper_type = ( protocol_helper) self._driver.get_configured_ip_versions() (self._driver.protocol_helper.get_configured_ip_versions. 
assert_called_once_with()) @ddt.ddt class NativeProtocolHelperTestCase(test.TestCase): def setUp(self): super(NativeProtocolHelperTestCase, self).setUp() self.fake_conf = configuration.Configuration(None) self._context = context.get_admin_context() self._share = fake_share.fake_share(share_proto='CEPHFS') self.fake_conf.set_default('driver_handles_share_servers', False) self.mock_object(driver, "cephfs_share_path") self._native_protocol_helper = driver.NativeProtocolHelper( None, self.fake_conf, ceph_vol_client=MockVolumeClientModule.CephFSVolumeClient() ) def test_check_for_setup_error(self): expected = None result = self._native_protocol_helper.check_for_setup_error() self.assertEqual(expected, result) def test_get_export_locations(self): vc = self._native_protocol_helper.volume_client fake_cephfs_volume = {'mount_path': '/foo/bar'} expected_export_locations = { 'path': '1.2.3.4,5.6.7.8:/foo/bar', 'is_admin_only': False, 'metadata': {}, } export_locations = self._native_protocol_helper.get_export_locations( self._share, fake_cephfs_volume) self.assertEqual(expected_export_locations, export_locations) vc.get_mon_addrs.assert_called_once_with() @ddt.data(None, 1) def test_allow_access_rw(self, volume_client_version): vc = self._native_protocol_helper.volume_client rule = { 'access_level': constants.ACCESS_LEVEL_RW, 'access_to': 'alice', 'access_type': 'cephx', } vc.version = volume_client_version auth_key = self._native_protocol_helper._allow_access( self._context, self._share, rule) self.assertEqual("abc123", auth_key) if not volume_client_version: vc.authorize.assert_called_once_with( driver.cephfs_share_path(self._share), "alice") else: vc.authorize.assert_called_once_with( driver.cephfs_share_path(self._share), "alice", readonly=False, tenant_id=self._share['project_id']) @ddt.data(None, 1) def test_allow_access_ro(self, volume_client_version): vc = self._native_protocol_helper.volume_client rule = { 'access_level': constants.ACCESS_LEVEL_RO, 'access_to': 'alice', 'access_type': 'cephx', } vc.version = volume_client_version if not volume_client_version: self.assertRaises(exception.InvalidShareAccessLevel, self._native_protocol_helper._allow_access, self._context, self._share, rule) else: auth_key = ( self._native_protocol_helper._allow_access( self._context, self._share, rule) ) self.assertEqual("abc123", auth_key) vc.authorize.assert_called_once_with( driver.cephfs_share_path(self._share), "alice", readonly=True, tenant_id=self._share['project_id']) def test_allow_access_wrong_type(self): self.assertRaises(exception.InvalidShareAccess, self._native_protocol_helper._allow_access, self._context, self._share, { 'access_level': constants.ACCESS_LEVEL_RW, 'access_type': 'RHUBARB', 'access_to': 'alice' }) def test_allow_access_same_cephx_id_as_manila_service(self): self.assertRaises(exception.InvalidInput, self._native_protocol_helper._allow_access, self._context, self._share, { 'access_level': constants.ACCESS_LEVEL_RW, 'access_type': 'cephx', 'access_to': 'manila', }) def test_deny_access(self): vc = self._native_protocol_helper.volume_client self._native_protocol_helper._deny_access(self._context, self._share, { 'access_level': 'rw', 'access_type': 'cephx', 'access_to': 'alice' }) vc.deauthorize.assert_called_once_with( driver.cephfs_share_path(self._share), "alice") vc.evict.assert_called_once_with( "alice", volume_path=driver.cephfs_share_path(self._share)) def test_update_access_add_rm(self): vc = self._native_protocol_helper.volume_client alice = { 'id': 'instance_mapping_id1', 
'access_id': 'accessid1', 'access_level': 'rw', 'access_type': 'cephx', 'access_to': 'alice' } bob = { 'id': 'instance_mapping_id2', 'access_id': 'accessid2', 'access_level': 'rw', 'access_type': 'cephx', 'access_to': 'bob' } access_updates = self._native_protocol_helper.update_access( self._context, self._share, access_rules=[alice], add_rules=[alice], delete_rules=[bob]) self.assertEqual( {'accessid1': {'access_key': 'abc123'}}, access_updates) vc.authorize.assert_called_once_with( driver.cephfs_share_path(self._share), "alice", readonly=False, tenant_id=self._share['project_id']) vc.deauthorize.assert_called_once_with( driver.cephfs_share_path(self._share), "bob") @ddt.data(None, 1) def test_update_access_all(self, volume_client_version): vc = self._native_protocol_helper.volume_client alice = { 'id': 'instance_mapping_id1', 'access_id': 'accessid1', 'access_level': 'rw', 'access_type': 'cephx', 'access_to': 'alice' } vc.version = volume_client_version access_updates = self._native_protocol_helper.update_access( self._context, self._share, access_rules=[alice], add_rules=[], delete_rules=[]) self.assertEqual( {'accessid1': {'access_key': 'abc123'}}, access_updates) if volume_client_version: vc.get_authorized_ids.assert_called_once_with( driver.cephfs_share_path(self._share)) vc.authorize.assert_called_once_with( driver.cephfs_share_path(self._share), "alice", readonly=False, tenant_id=self._share['project_id']) vc.deauthorize.assert_called_once_with( driver.cephfs_share_path(self._share), "eve") else: self.assertFalse(vc.get_authorized_ids.called) vc.authorize.assert_called_once_with( driver.cephfs_share_path(self._share), "alice") def test_get_configured_ip_versions(self): expected = [4] result = self._native_protocol_helper.get_configured_ip_versions() self.assertEqual(expected, result) @ddt.ddt class NFSProtocolHelperTestCase(test.TestCase): def setUp(self): super(NFSProtocolHelperTestCase, self).setUp() self._execute = mock.Mock() self._share = fake_share.fake_share(share_proto='NFS') self._volume_client = MockVolumeClientModule.CephFSVolumeClient() self.fake_conf = configuration.Configuration(None) self.fake_conf.set_default('cephfs_ganesha_server_ip', 'fakeip') self.mock_object(driver, "cephfs_share_path", mock.Mock(return_value='fakevolumepath')) self.mock_object(driver.ganesha_utils, 'SSHExecutor') self.mock_object(driver.ganesha_utils, 'RootExecutor') self.mock_object(driver.socket, 'gethostname') self._nfs_helper = driver.NFSProtocolHelper( self._execute, self.fake_conf, ceph_vol_client=self._volume_client) @ddt.data( (['fakehost', 'some.host.name', 'some.host.name.', '1.1.1.0'], False), (['fakehost', 'some.host.name', 'some.host.name.', '1.1..1.0'], True), (['fakehost', 'some.host.name', 'some.host.name', '1.1.1.256'], True), (['fakehost..', 'some.host.name', 'some.host.name', '1.1.1.0'], True), (['fakehost', 'some.host.name..', 'some.host.name', '1.1.1.0'], True), (['fakehost', 'some.host.name', 'some.host.name.', '1.1..1.0'], True), (['fakehost', 'some.host.name', '1.1.1.0/24'], True), (['fakehost', 'some.host.name', '1.1.1.0', '1001::1001'], False), (['fakehost', 'some.host.name', '1.1.1.0', '1001:1001'], True), (['fakehost', 'some.host.name', '1.1.1.0', '1001::1001:'], True), (['fakehost', 'some.host.name', '1.1.1.0', '1001::1001.'], True), (['fakehost', 'some.host.name', '1.1.1.0', '1001::1001/129.'], True), ) @ddt.unpack def test_check_for_setup_error(self, cephfs_ganesha_export_ips, raises): fake_conf = configuration.Configuration(None) 
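# --- Illustrative sketch, not part of the manila source --------------------
# This setup-error test feeds cephfs_ganesha_export_ips values mixing
# hostnames with IPv4/IPv6 addresses, and expects InvalidParameterValue for
# malformed entries such as '1.1..1.0', '1.1.1.256' or '1.1.1.0/24'. For the
# address portion of that validation, the stdlib ipaddress module behaves the
# same way (hostname checking is a separate concern not covered by this
# sketch):
import ipaddress

def _looks_like_ip_sketch(value):
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False

assert _looks_like_ip_sketch('1001::1001')
assert not _looks_like_ip_sketch('1.1..1.0')
assert not _looks_like_ip_sketch('1.1.1.0/24')  # CIDR entries are rejected too
# ----------------------------------------------------------------------------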
fake_conf.set_default('cephfs_ganesha_export_ips', cephfs_ganesha_export_ips) helper = driver.NFSProtocolHelper( self._execute, fake_conf, ceph_vol_client=MockVolumeClientModule.CephFSVolumeClient() ) if raises: self.assertRaises(exception.InvalidParameterValue, helper.check_for_setup_error) else: self.assertIsNone(helper.check_for_setup_error()) @ddt.data(False, True) def test_init_executor_type(self, ganesha_server_is_remote): fake_conf = configuration.Configuration(None) conf_args_list = [ ('cephfs_ganesha_server_is_remote', ganesha_server_is_remote), ('cephfs_ganesha_server_ip', 'fakeip'), ('cephfs_ganesha_server_username', 'fake_username'), ('cephfs_ganesha_server_password', 'fakepwd'), ('cephfs_ganesha_path_to_private_key', 'fakepathtokey')] for args in conf_args_list: fake_conf.set_default(*args) driver.NFSProtocolHelper( self._execute, fake_conf, ceph_vol_client=MockVolumeClientModule.CephFSVolumeClient() ) if ganesha_server_is_remote: driver.ganesha_utils.SSHExecutor.assert_has_calls( [mock.call('fakeip', 22, None, 'fake_username', password='fakepwd', privatekey='fakepathtokey')]) else: driver.ganesha_utils.RootExecutor.assert_has_calls( [mock.call(self._execute)]) @ddt.data('fakeip', None) def test_init_identify_local_host(self, ganesha_server_ip): self.mock_object(driver.LOG, 'info') fake_conf = configuration.Configuration(None) conf_args_list = [ ('cephfs_ganesha_server_ip', ganesha_server_ip), ('cephfs_ganesha_server_username', 'fake_username'), ('cephfs_ganesha_server_password', 'fakepwd'), ('cephfs_ganesha_path_to_private_key', 'fakepathtokey')] for args in conf_args_list: fake_conf.set_default(*args) driver.NFSProtocolHelper( self._execute, fake_conf, ceph_vol_client=MockVolumeClientModule.CephFSVolumeClient() ) driver.ganesha_utils.RootExecutor.assert_has_calls( [mock.call(self._execute)]) if ganesha_server_ip: self.assertFalse(driver.socket.gethostname.called) self.assertFalse(driver.LOG.info.called) else: driver.socket.gethostname.assert_called_once_with() driver.LOG.info.assert_called_once() def test_get_export_locations_no_export_ips_configured(self): cephfs_volume = {"mount_path": "/foo/bar"} fake_conf = configuration.Configuration(None) fake_conf.set_default('cephfs_ganesha_server_ip', '1.2.3.4') helper = driver.NFSProtocolHelper( self._execute, fake_conf, ceph_vol_client=MockVolumeClientModule.CephFSVolumeClient() ) ret = helper.get_export_locations(self._share, cephfs_volume) self.assertEqual( [{ 'path': '1.2.3.4:/foo/bar', 'is_admin_only': False, 'metadata': {} }], ret) def test_get_export_locations_with_export_ips_configured(self): fake_conf = configuration.Configuration(None) conf_args_list = [ ('cephfs_ganesha_server_ip', '1.2.3.4'), ('cephfs_ganesha_export_ips', '127.0.0.1,fd3f:c057:1192:1::1,::1')] for args in conf_args_list: fake_conf.set_default(*args) helper = driver.NFSProtocolHelper( self._execute, fake_conf, ceph_vol_client=MockVolumeClientModule.CephFSVolumeClient() ) cephfs_volume = {"mount_path": "/foo/bar"} ret = helper.get_export_locations(self._share, cephfs_volume) self.assertEqual( [ { 'path': '127.0.0.1:/foo/bar', 'is_admin_only': False, 'metadata': {}, }, { 'path': '[fd3f:c057:1192:1::1]:/foo/bar', 'is_admin_only': False, 'metadata': {}, }, { 'path': '[::1]:/foo/bar', 'is_admin_only': False, 'metadata': {}, }, ], ret) @ddt.data(('some.host.name', None, [4, 6]), ('host.', None, [4, 6]), ('1001::1001', None, [6]), ('1.1.1.0', None, [4]), (None, ['1001::1001', '1.1.1.0'], [6, 4]), (None, ['1001::1001'], [6]), (None, ['1.1.1.0'], [4]), (None, 
['1001::1001/129', '1.1.1.0'], [4, 6])) @ddt.unpack def test_get_configured_ip_versions( self, cephfs_ganesha_server_ip, cephfs_ganesha_export_ips, configured_ip_version): fake_conf = configuration.Configuration(None) conf_args_list = [ ('cephfs_ganesha_server_ip', cephfs_ganesha_server_ip), ('cephfs_ganesha_export_ips', cephfs_ganesha_export_ips)] for args in conf_args_list: fake_conf.set_default(*args) helper = driver.NFSProtocolHelper( self._execute, fake_conf, ceph_vol_client=MockVolumeClientModule.CephFSVolumeClient() ) self.assertEqual(set(configured_ip_version), set(helper.get_configured_ip_versions())) def test_get_configured_ip_versions_already_set(self): fake_conf = configuration.Configuration(None) helper = driver.NFSProtocolHelper( self._execute, fake_conf, ceph_vol_client=MockVolumeClientModule.CephFSVolumeClient() ) ip_versions = ['foo', 'bar'] helper.configured_ip_versions = ip_versions result = helper.get_configured_ip_versions() self.assertEqual(ip_versions, result) def test_default_config_hook(self): fake_conf_dict = {'key': 'value1'} self.mock_object(driver.ganesha.GaneshaNASHelper, '_default_config_hook', mock.Mock(return_value={})) self.mock_object(driver.ganesha_utils, 'path_from', mock.Mock(return_value='/fakedir/cephfs/conf')) self.mock_object(self._nfs_helper, '_load_conf_dir', mock.Mock(return_value=fake_conf_dict)) ret = self._nfs_helper._default_config_hook() (driver.ganesha.GaneshaNASHelper._default_config_hook. assert_called_once_with()) driver.ganesha_utils.path_from.assert_called_once_with( driver.__file__, 'conf') self._nfs_helper._load_conf_dir.assert_called_once_with( '/fakedir/cephfs/conf') self.assertEqual(fake_conf_dict, ret) def test_fsal_hook(self): expected_ret = { 'Name': 'Ceph', 'User_Id': 'ganesha-fakeid', 'Secret_Access_Key': 'fakekey' } self.mock_object(self._volume_client, 'authorize', mock.Mock(return_value={'auth_key': 'fakekey'})) ret = self._nfs_helper._fsal_hook(None, self._share, None) driver.cephfs_share_path.assert_called_once_with(self._share) self._volume_client.authorize.assert_called_once_with( 'fakevolumepath', 'ganesha-fakeid', readonly=False, tenant_id='fake_project_uuid') self.assertEqual(expected_ret, ret) def test_cleanup_fsal_hook(self): self.mock_object(self._volume_client, 'deauthorize') ret = self._nfs_helper._cleanup_fsal_hook(None, self._share, None) driver.cephfs_share_path.assert_called_once_with(self._share) self._volume_client.deauthorize.assert_called_once_with( 'fakevolumepath', 'ganesha-fakeid') self.assertIsNone(ret) def test_get_export_path(self): ret = self._nfs_helper._get_export_path(self._share) driver.cephfs_share_path.assert_called_once_with(self._share) self._volume_client._get_path.assert_called_once_with( 'fakevolumepath') self.assertEqual('/foo/bar', ret) def test_get_export_pseudo_path(self): ret = self._nfs_helper._get_export_pseudo_path(self._share) driver.cephfs_share_path.assert_called_once_with(self._share) self._volume_client._get_path.assert_called_once_with( 'fakevolumepath') self.assertEqual('/foo/bar', ret) @ddt.ddt class CephFSDriverAltConfigTestCase(test.TestCase): """Test the CephFS driver with non-default config values.""" def setUp(self): super(CephFSDriverAltConfigTestCase, self).setUp() self._execute = mock.Mock() self.fake_conf = configuration.Configuration(None) self._context = context.get_admin_context() self._share = fake_share.fake_share(share_proto='CEPHFS') self.fake_conf.set_default('driver_handles_share_servers', False) self.fake_conf.set_default('cephfs_auth_id', 'manila') 
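# --- Illustrative sketch, not part of the manila source --------------------
# The alternate-config tests just below set cephfs_volume_mode to the string
# '775' and expect it to become the octal mode 0o775, while strings such as
# '0o759', '0x755' and '12a3' must be rejected. The parsing assumption boils
# down to int(value, 8):
assert int(ALT_VOLUME_MODE_CFG, 8) == ALT_VOLUME_MODE == 0o775
for _bad_mode in ('0o759', '0x755', '12a3'):
    try:
        int(_bad_mode, 8)
    except ValueError:
        pass  # each of these fails octal parsing, as the exception test expects
# ----------------------------------------------------------------------------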
self.mock_object(driver, "ceph_volume_client", MockVolumeClientModule) self.mock_object(driver, "ceph_module_found", True) self.mock_object(driver, "cephfs_share_path") self.mock_object(driver, 'NativeProtocolHelper') self.mock_object(driver, 'NFSProtocolHelper') @ddt.data('cephfs', 'nfs') def test_do_setup_alt_volume_mode(self, protocol_helper): self.fake_conf.set_default('cephfs_volume_mode', ALT_VOLUME_MODE_CFG) self._driver = driver.CephFSDriver(execute=self._execute, configuration=self.fake_conf) self._driver.configuration.cephfs_protocol_helper_type = ( protocol_helper) self._driver.do_setup(self._context) if protocol_helper == 'cephfs': driver.NativeProtocolHelper.assert_called_once_with( self._execute, self._driver.configuration, ceph_vol_client=self._driver._volume_client) else: driver.NFSProtocolHelper.assert_called_once_with( self._execute, self._driver.configuration, ceph_vol_client=self._driver._volume_client) self._driver.protocol_helper.init_helper.assert_called_once_with() self.assertEqual(ALT_VOLUME_MODE, self._driver._cephfs_volume_mode) @ddt.data('0o759', '0x755', '12a3') def test_volume_mode_exception(self, volume_mode): # cephfs_volume_mode must be a string representing an int as octal self.fake_conf.set_default('cephfs_volume_mode', volume_mode) self.assertRaises(exception.BadConfigurationException, driver.CephFSDriver, execute=self._execute, configuration=self.fake_conf) manila-10.0.0/manila/tests/share/drivers/__init__.py0000664000175000017500000000000013656750227022352 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/veritas/0000775000175000017500000000000013656750362021730 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/veritas/__init__.py0000664000175000017500000000000013656750227024027 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/veritas/test_veritas_isa.py0000664000175000017500000005504413656750227025662 0ustar zuulzuul00000000000000# Copyright 2017 Veritas Technologies LLC. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for Veritas Manila driver. 
""" import hashlib import json from unittest import mock from oslo_config import cfg import requests import six from manila import context from manila import exception from manila.share import configuration as conf from manila.share.drivers.veritas import veritas_isa from manila import test CONF = cfg.CONF FAKE_BACKEND = 'fake_backend' class MockResponse(object): def __init__(self): self.status_code = 200 def json(self): data = {'fake_key': 'fake_val'} return json.dumps(data) class ACCESSShareDriverTestCase(test.TestCase): """Tests ACCESSShareDriver.""" share = { 'id': 'fakeid', 'name': 'fakename', 'size': 1, 'share_proto': 'NFS', 'export_locations': [{'path': '10.20.30.40:/vx/fake_location'}], 'snapshot_id': False } share2 = { 'id': 'fakeid2', 'name': 'fakename2', 'size': 4, 'share_proto': 'NFS', } share3 = { 'id': 'fakeid3', 'name': 'fakename3', 'size': 2, 'share_proto': 'NFS', 'export_location': '/vx/fake_location', 'snapshot_id': True } snapshot = { 'id': 'fakesnapshotid', 'share_name': 'fakename', 'share_id': 'fakeid', 'name': 'fakesnapshotname', 'share_size': 1, 'share_proto': 'NFS', 'snapshot_id': 'fake_snap_id', } access = { 'id': 'fakeaccid', 'access_type': 'ip', 'access_to': '10.0.0.2', 'access_level': 'rw', 'state': 'active', } access2 = { 'id': 'fakeaccid2', 'access_type': 'user', 'access_to': '10.0.0.3', 'access_level': 'rw', 'state': 'active', } access3 = { 'id': 'fakeaccid3', 'access_type': 'ip', 'access_to': '10.0.0.4', 'access_level': 'rw+', 'state': 'active', } access4 = { 'id': 'fakeaccid', 'access_type': 'ip', 'access_to': '10.0.0.2', 'access_level': 'ro', 'state': 'active', } def setUp(self): super(ACCESSShareDriverTestCase, self).setUp() self._create_fake_config() lcfg = self.configuration self._context = context.get_admin_context() self._driver = veritas_isa.ACCESSShareDriver(False, configuration=lcfg) self._driver.do_setup(self._context) def _create_fake_config(self): def _safe_get(opt): return getattr(self.configuration, opt) self.mock_object(veritas_isa.ACCESSShareDriver, '_authenticate_access') self.configuration = mock.Mock(spec=conf.Configuration) self.configuration.safe_get = mock.Mock(side_effect=_safe_get) self.configuration.va_server_ip = '1.1.1.1' self.configuration.va_pool = 'pool1' self.configuration.va_user = 'user' self.configuration.va_pwd = 'passwd' self.configuration.va_port = 14161 self.configuration.va_ssl = 'False' self.configuration.va_fstype = 'simple' self.configuration.network_config_group = 'fake_network_config_group' self.configuration.admin_network_config_group = ( 'fake_admin_network_config_group') self.configuration.driver_handles_share_servers = False self.configuration.share_backend_name = FAKE_BACKEND self.configuration.replication_domain = 'Disable' self.configuration.filter_function = 'Disable' self.configuration.goodness_function = 'Disable' def test_create_share(self): self.mock_object(self._driver, '_get_va_share_name') self.mock_object(self._driver, '_get_va_share_path') self.mock_object(self._driver, '_get_vip') self.mock_object(self._driver, '_access_api') length = len(self.share['name']) index = int(length / 2) name1 = self.share['name'][:index] name2 = self.share['name'][index:] crc1 = hashlib.md5(name1.encode('utf-8')).hexdigest()[:8] crc2 = hashlib.md5(name2.encode('utf-8')).hexdigest()[:8] share_name_to_ret = crc1 + '-' + crc2 share_path_to_ret = '/vx/' + crc1 + '-' + crc2 self._driver._get_va_share_name.return_value = share_name_to_ret self._driver._get_va_share_path.return_value = share_path_to_ret 
self._driver._get_vip.return_value = '1.1.1.1' self._driver.create_share(self._context, self.share) self.assertEqual(1, self._driver._get_vip.call_count) self.assertEqual(1, self._driver._get_va_share_name.call_count) self.assertEqual(1, self._driver._get_va_share_path.call_count) def test_create_share_negative(self): self.mock_object(self._driver, '_access_api') self._driver._access_api.return_value = False self.assertRaises(exception.ShareBackendException, self._driver.create_share, self._context, self.share) def test_create_share_from_snapshot(self): self.mock_object(self._driver, '_get_vip') sharename = self._driver._get_va_share_name( self.snapshot['share_name']) snapname = self._driver._get_va_snap_name(self.snapshot['name']) sharepath = self._driver._get_va_share_path(sharename) self._driver._get_vip.return_value = '1.1.1.1' vip = self._driver._get_vip() location = (six.text_type(vip) + ':' + six.text_type(sharepath) + ':' + six.text_type(snapname)) ret = self._driver.create_share_from_snapshot(self._context, self.share, self.snapshot) self.assertEqual(location, ret) def test_delete_share(self): self.mock_object(self._driver, '_access_api') self.mock_object(self._driver, '_does_item_exist_at_va_backend') self._driver._does_item_exist_at_va_backend.return_value = True self._driver.delete_share(self._context, self.share) self.assertEqual(2, self._driver._access_api.call_count) def test_delete_share_negative(self): self.mock_object(self._driver, '_access_api') self.mock_object(self._driver, '_does_item_exist_at_va_backend') self._driver._does_item_exist_at_va_backend.return_value = True self._driver._access_api.return_value = False self.assertRaises(exception.ShareBackendException, self._driver.delete_share, self._context, self.share) def test_delete_share_if_share_created_from_snap(self): self.mock_object(self._driver, '_access_api') self.mock_object(self._driver, '_does_item_exist_at_va_backend') self._driver.delete_share(self._context, self.share3) self.assertEqual(0, (self._driver. _does_item_exist_at_va_backend.call_count)) self.assertEqual(0, self._driver._access_api.call_count) def test_delete_share_if_not_present_at_backend(self): self.mock_object(self._driver, '_does_item_exist_at_va_backend') self.mock_object(self._driver, '_access_api') self._driver._does_item_exist_at_va_backend.return_value = False self._driver.delete_share(self._context, self.share) self.assertEqual(1, (self._driver. 
_does_item_exist_at_va_backend.call_count)) self.assertEqual(0, self._driver._access_api.call_count) def test_create_snapshot(self): self.mock_object(self._driver, '_access_api') self._driver.create_snapshot(self._context, self.snapshot) self.assertEqual(2, self._driver._access_api.call_count) def test_create_snapshot_negative(self): self.mock_object(self._driver, '_access_api') self._driver._access_api.return_value = False self.assertRaises(exception.ShareBackendException, self._driver.create_snapshot, self._context, self.snapshot) def test_delete_snapshot(self): self.mock_object(self._driver, '_access_api') self.mock_object(self._driver, '_does_item_exist_at_va_backend') self._driver._does_item_exist_at_va_backend.return_value = True self._driver.delete_snapshot(self._context, self.snapshot) self.assertEqual(2, self._driver._access_api.call_count) def test_delete_snapshot_negative(self): self.mock_object(self._driver, '_access_api') self.mock_object(self._driver, '_does_item_exist_at_va_backend') self._driver._does_item_exist_at_va_backend.return_value = True self._driver._access_api.return_value = False self.assertRaises(exception.ShareBackendException, self._driver.delete_snapshot, self._context, self.snapshot) def test_delete_snapshot_if_not_present_at_backend(self): self.mock_object(self._driver, '_does_item_exist_at_va_backend') self.mock_object(self._driver, '_access_api') self._driver._does_item_exist_at_va_backend.return_value = False self._driver.delete_snapshot(self._context, self.snapshot) self.assertEqual(1, (self._driver. _does_item_exist_at_va_backend.call_count)) self.assertEqual(0, self._driver._access_api.call_count) def test_update_access_for_allow(self): self.mock_object(self._driver, '_access_api') self._driver.update_access(self._context, self.share, [], [self.access], []) self.assertEqual(2, self._driver._access_api.call_count) def test_update_access_for_allow_negative(self): self.mock_object(self._driver, '_access_api') self._driver._access_api.return_value = False self.assertRaises(exception.ShareBackendException, self._driver.update_access, self._context, self.share, [], [self.access], []) self.assertRaises(exception.InvalidShareAccess, self._driver.update_access, self._context, self.share, [], [self.access2], []) self.assertRaises(exception.InvalidShareAccessLevel, self._driver.update_access, self._context, self.share, [], [self.access3], []) def test_update_access_for_deny(self): self.mock_object(self._driver, '_access_api') self._driver.update_access(self._context, self.share, [], [], [self.access]) self.assertEqual(2, self._driver._access_api.call_count) def test_update_access_for_deny_negative(self): self.mock_object(self._driver, '_access_api') self._driver._access_api.return_value = False self.assertRaises(exception.ShareBackendException, self._driver.update_access, self._context, self.share, [], [], [self.access]) def test_update_access_for_deny_for_invalid_access_type(self): self.mock_object(self._driver, '_access_api') self._driver.update_access(self._context, self.share, [], [], [self.access2]) self.assertEqual(0, self._driver._access_api.call_count) def test_update_access_for_empty_rule_list(self): self.mock_object(self._driver, '_allow_access') self.mock_object(self._driver, '_deny_access') self._driver.update_access(self._context, self.share, [], [], []) self.assertEqual(0, self._driver._allow_access.call_count) self.assertEqual(0, self._driver._deny_access.call_count) def test_update_access_for_access_rules(self): self.mock_object(self._driver, 
'_fetch_existing_rule') self.mock_object(self._driver, '_allow_access') self.mock_object(self._driver, '_deny_access') existing_a_rules = [{'access_level': 'rw', 'access_type': 'ip', 'access_to': '10.0.0.2'}, {'access_level': 'rw', 'access_type': 'ip', 'access_to': '10.0.0.3'}] self._driver._fetch_existing_rule.return_value = existing_a_rules d_rule = self._driver._return_access_lists_difference(existing_a_rules, [self.access4]) a_rule = self._driver._return_access_lists_difference([self.access4], existing_a_rules) self._driver.update_access(self._context, self.share, [self.access4], [], []) self.assertEqual(d_rule, existing_a_rules) self.assertEqual(a_rule, [self.access4]) self.assertEqual(1, self._driver._allow_access.call_count) self.assertEqual(2, self._driver._deny_access.call_count) def test_extend_share(self): self.mock_object(self._driver, '_access_api') new_size = 3 self._driver.extend_share(self.share, new_size) self.assertEqual(1, self._driver._access_api.call_count) def test_extend_share_negative(self): self.mock_object(self._driver, '_access_api') new_size = 3 self._driver._access_api.return_value = False self.assertRaises(exception.ShareBackendException, self._driver.extend_share, self.share, new_size) def test_shrink_share(self): self.mock_object(self._driver, '_access_api') new_size = 3 self._driver.shrink_share(self.share2, new_size) self.assertEqual(1, self._driver._access_api.call_count) def test_shrink_share_negative(self): self.mock_object(self._driver, '_access_api') new_size = 3 self._driver._access_api.return_value = False self.assertRaises(exception.ShareBackendException, self._driver.shrink_share, self.share2, new_size) def test__get_access_pool_details(self): self.mock_object(self._driver, '_access_api') pool_details = [] pool_details_dict = {} pool_details_dict['device_group_name'] = 'fake_pool' pool_details_dict['capacity'] = 10737418240 pool_details_dict['used_size'] = 9663676416 pool_details.append(pool_details_dict) pool_details_dict2 = {} pool_details_dict2['device_group_name'] = self.configuration.va_pool pool_details_dict2['capacity'] = 10737418240 pool_details_dict2['used_size'] = 9663676416 pool_details.append(pool_details_dict2) self._driver._access_api.return_value = pool_details total_space, free_space = self._driver._get_access_pool_details() self.assertEqual(10, total_space) self.assertEqual(1, free_space) def test__get_access_pool_details_negative(self): self.mock_object(self._driver, '_access_api') pool_details = [] self._driver._access_api.return_value = pool_details self.assertRaises(exception.ShareBackendException, self._driver._get_access_pool_details) def test__update_share_stats(self): self.mock_object(self._driver, '_authenticate_access') self.mock_object(self._driver, '_get_access_pool_details') self._driver._get_access_pool_details.return_value = (10, 9) self._driver._update_share_stats() data = { 'share_backend_name': FAKE_BACKEND, 'vendor_name': 'Veritas', 'driver_version': '1.0', 'storage_protocol': 'NFS', 'total_capacity_gb': 10, 'free_capacity_gb': 9, 'reserved_percentage': 0, 'QoS_support': False, 'create_share_from_snapshot_support': True, 'driver_handles_share_servers': False, 'filter_function': 'Disable', 'goodness_function': 'Disable', 'ipv4_support': True, 'ipv6_support': False, 'mount_snapshot_support': False, 'pools': None, 'qos': False, 'replication_domain': 'Disable', 'revert_to_snapshot_support': False, 'share_group_stats': {'consistent_snapshot_support': None}, 'snapshot_support': True } self.assertEqual(data, 
self._driver._stats) def test__get_vip(self): self.mock_object(self._driver, '_get_access_ips') pool_list = [] ip1 = {'isconsoleip': 1, 'type': 'Virtual', 'status': 'ONLINE', 'ip': '1.1.1.2'} ip2 = {'isconsoleip': 0, 'type': 'Virtual', 'status': 'ONLINE', 'ip': '1.1.1.4'} ip3 = {'isconsoleip': 0, 'type': 'Virtual', 'status': 'OFFLINE', 'ip': '1.1.1.5'} ip4 = {'isconsoleip': 0, 'type': 'Physical', 'status': 'OFFLINE', 'ip': '1.1.1.6'} pool_list = [ip1, ip2, ip3, ip4] self._driver._get_access_ips.return_value = pool_list vip = self._driver._get_vip() self.assertEqual('1.1.1.4', vip) def test__get_access_ips(self): self.mock_object(self._driver, '_access_api') ip_list = ['1.1.1.2', '1.1.2.3', '1.1.1.4'] self._driver._access_api.return_value = ip_list ret_value = self._driver._get_access_ips(self._driver.session, self._driver.host) self.assertEqual(ret_value, ip_list) def test__access_api(self): self.mock_object(requests, 'session') provider = '%s:%s' % (self._driver.host, self._driver._port) path = '/fake/path' input_data = {} mock_response = MockResponse() session = requests.session data = {'fake_key': 'fake_val'} json_data = json.dumps(data) session.request.return_value = mock_response ret_value = self._driver._access_api(session, provider, path, json.dumps(input_data), 'GET') self.assertEqual(json_data, ret_value) def test__access_api_ret_for_update_object(self): self.mock_object(requests, 'session') provider = '%s:%s' % (self._driver.host, self._driver._port) path = self._driver._update_object input_data = None mock_response = MockResponse() session = requests.session session.request.return_value = mock_response ret = self._driver._access_api(session, provider, path, input_data, 'GET') self.assertTrue(ret) def test__access_api_negative(self): session = self._driver.session provider = '%s:%s' % (self._driver.host, self._driver._port) path = '/fake/path' input_data = {} ret_value = self._driver._access_api(session, provider, path, json.dumps(input_data), 'GET') self.assertEqual(False, ret_value) def test__get_api(self): provider = '%s:%s' % (self._driver.host, self._driver._port) tail = '/fake/path' ret = self._driver._get_api(provider, tail) api_root = 'https://%s/api' % (provider) to_be_ret = api_root + tail self.assertEqual(to_be_ret, ret) def test__does_item_exist_at_va_backend(self): self.mock_object(self._driver, '_access_api') item_name = 'fake_item' path = '/fake/path' fake_item_list = [{'name': item_name}] self._driver._access_api.return_value = fake_item_list ret_value = self._driver._does_item_exist_at_va_backend(item_name, path) self.assertTrue(ret_value) def test__does_item_exist_at_va_backend_negative(self): self.mock_object(self._driver, '_access_api') item_name = 'fake_item' path = '/fake/path' fake_item_list = [{'name': 'item2'}] self._driver._access_api.return_value = fake_item_list ret_value = self._driver._does_item_exist_at_va_backend(item_name, path) self.assertEqual(False, ret_value) def test__fetch_existing_rule(self): self.mock_object(self._driver, '_access_api') fake_share = 'fake-share' fake_access_list = [] list1 = [] list1.append({ 'status': 'online', 'name': '/vx/fake-share', 'host_name': '10.0.0.1', 'privilege': 'rw' }) list1.append({ 'status': 'online', 'name': '/vx/fake-share', 'host_name': '10.0.0.2', 'privilege': 'rw' }) list1.append({ 'status': 'online', 'name': '/vx/fake-share', 'host_name': '10.0.0.3', 'privilege': 'ro' }) list1.append({ 'status': 'online', 'name': '/vx/fake-share2', 'host_name': '10.0.0.4', 'privilege': 'rw' }) 
fake_access_list.append({ 'shareType': 'NFS', 'shares': list1 }) fake_access_list.append({ 'shareType': 'CIFS', 'shares': [] }) ret_access_list = [] ret_access_list.append({ 'access_to': '10.0.0.1', 'access_level': 'rw', 'access_type': 'ip' }) ret_access_list.append({ 'access_to': '10.0.0.2', 'access_level': 'rw', 'access_type': 'ip' }) ret_access_list.append({ 'access_to': '10.0.0.3', 'access_level': 'ro', 'access_type': 'ip' }) self._driver._access_api.return_value = fake_access_list ret_value = self._driver._fetch_existing_rule(fake_share) self.assertEqual(ret_access_list, ret_value) manila-10.0.0/manila/tests/share/drivers/ganesha/0000775000175000017500000000000013656750362021661 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/ganesha/__init__.py0000664000175000017500000000000013656750227023760 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/ganesha/test_utils.py0000664000175000017500000001246013656750227024435 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from unittest import mock import ddt from manila import exception from manila.share.drivers.ganesha import utils as ganesha_utils from manila import test from manila.tests import fake_share patch_test_dict1 = {'a': 1, 'b': {'c': 2}, 'd': 3, 'e': 4} patch_test_dict2 = {'a': 11, 'b': {'f': 5}, 'd': {'g': 6}} patch_test_dict3 = {'b': {'c': 22, 'h': {'i': 7}}, 'e': None} patch_test_dict_result = { 'a': 11, 'b': {'c': 22, 'f': 5, 'h': {'i': 7}}, 'd': {'g': 6}, 'e': None, } walk_test_dict = {'a': {'b': {'c': {'d': {'e': 'f'}}}}} walk_test_list = [('e', 'f')] def fake_access(kwargs): fake_access_rule = fake_share.fake_access(**kwargs) fake_access_rule.to_dict = lambda: fake_access_rule.values return fake_access_rule @ddt.ddt class GaneshaUtilsTests(test.TestCase): """Tests Ganesha utility functions.""" def test_patch(self): ret = ganesha_utils.patch(patch_test_dict1, patch_test_dict2, patch_test_dict3) self.assertEqual(patch_test_dict_result, ret) def test_walk(self): ret = [elem for elem in ganesha_utils.walk(walk_test_dict)] self.assertEqual(walk_test_list, ret) def test_path_from(self): self.mock_object(os.path, 'abspath', lambda path: os.path.join('/foo/bar', path)) ret = ganesha_utils.path_from('baz.py', '../quux', 'tic/tac/toe') self.assertEqual('/foo/quux/tic/tac/toe', os.path.normpath(ret)) @ddt.data({'rule': {'access_type': 'ip', 'access_level': 'ro', 'access_to': '10.10.10.12'}, 'kwargs': {'abort': True}}, {'rule': {'access_type': 'cert', 'access_level': 'ro', 'access_to': 'some-CN'}, 'kwargs': {'abort': False}}, {'rule': {'access_type': 'ip', 'access_level': 'rw', 'access_to': '10.10.10.12'}, 'kwargs': {}}) @ddt.unpack def test_get_valid_access_rules(self, rule, kwargs): supported = ['ip', 'ro'] ret = ganesha_utils.validate_access_rule( *([[a] for a in supported] + [fake_access(rule)]), **kwargs) self.assertEqual( [rule['access_' + k] for k in ['type', 'level']] == supported, ret) @ddt.data({'rule': 
{'access_type': 'cert', 'access_level': 'ro', 'access_to': 'some-CN'}, 'trouble': exception.InvalidShareAccess}, {'rule': {'access_type': 'ip', 'access_level': 'rw', 'access_to': '10.10.10.12'}, 'trouble': exception.InvalidShareAccessLevel}) @ddt.unpack def test_get_valid_access_rules_fail(self, rule, trouble): self.assertRaises(trouble, ganesha_utils.validate_access_rule, ['ip'], ['ro'], fake_access(rule), abort=True) @ddt.data({'rule': {'access_type': 'ip', 'access_level': 'rw', 'access_to': '10.10.10.12'}, 'result': {'access_type': 'ip', 'access_level': 'rw', 'access_to': '10.10.10.12'}, }, {'rule': {'access_type': 'ip', 'access_level': 'rw', 'access_to': '0.0.0.0/0'}, 'result': {'access_type': 'ip', 'access_level': 'rw', 'access_to': '0.0.0.0'}, }, ) @ddt.unpack def test_fixup_access_rules(self, rule, result): self.assertEqual(result, ganesha_utils.fixup_access_rule(rule)) @ddt.ddt class SSHExecutorTestCase(test.TestCase): """Tests SSHExecutor.""" @ddt.data({'run_as_root': True, 'expected_prefix': 'sudo '}, {'run_as_root': False, 'expected_prefix': ''}) @ddt.unpack def test_call_ssh_exec_object_with_run_as_root( self, run_as_root, expected_prefix): with mock.patch.object(ganesha_utils.utils, 'SSHPool'): self.execute = ganesha_utils.SSHExecutor() fake_ssh_object = mock.Mock() self.mock_object(self.execute.pool, 'get', mock.Mock(return_value=fake_ssh_object)) self.mock_object(ganesha_utils.processutils, 'ssh_execute', mock.Mock(return_value=('', ''))) ret = self.execute('ls', run_as_root=run_as_root) self.assertEqual(('', ''), ret) self.execute.pool.get.assert_called_once_with() ganesha_utils.processutils.ssh_execute.assert_called_once_with( fake_ssh_object, expected_prefix + 'ls') manila-10.0.0/manila/tests/share/drivers/ganesha/test_manager.py0000664000175000017500000013603413656750227024713 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import re from unittest import mock import ddt from oslo_serialization import jsonutils import six from manila import exception from manila.share.drivers.ganesha import manager from manila import test from manila import utils test_export_id = 101 test_name = 'fakefile' test_path = '/fakedir0/export.d/fakefile.conf' test_tmp_path = '/fakedir0/export.d/fakefile.conf.RANDOM' test_ganesha_cnf = """EXPORT { Export_Id = 101; CLIENT { Clients = ip1; Access_Level = ro; } CLIENT { Clients = ip2; Access_Level = rw; } }""" test_dict_unicode = { u'EXPORT': { u'Export_Id': 101, u'CLIENT': [ {u'Clients': u"ip1", u'Access_Level': u'ro'}, {u'Clients': u"ip2", u'Access_Level': u'rw'}] } } test_dict_str = { 'EXPORT': { 'Export_Id': 101, 'CLIENT': [ {'Clients': 'ip1', 'Access_Level': 'ro'}, {'Clients': 'ip2', 'Access_Level': 'rw'}] } } manager_fake_kwargs = { 'ganesha_config_path': '/fakedir0/fakeconfig', 'ganesha_db_path': '/fakedir1/fake.db', 'ganesha_export_dir': '/fakedir0/export.d', 'ganesha_service_name': 'ganesha.fakeservice' } class MockRadosClientModule(object): """Mocked up version of Ceph's RADOS client interface.""" class ObjectNotFound(Exception): pass @ddt.ddt class MiscTests(test.TestCase): @ddt.data({'import_exc': None}, {'import_exc': ImportError}) @ddt.unpack def test_setup_rados(self, import_exc): manager.rados = None with mock.patch.object( manager.importutils, 'import_module', side_effect=import_exc) as mock_import_module: if import_exc: self.assertRaises( exception.ShareBackendException, manager.setup_rados) else: manager.setup_rados() self.assertEqual(mock_import_module.return_value, manager.rados) mock_import_module.assert_called_once_with('rados') class GaneshaConfigTests(test.TestCase): """Tests Ganesha config file format convertor functions.""" ref_ganesha_cnf = """EXPORT { CLIENT { Clients = ip1; Access_Level = "ro"; } CLIENT { Clients = ip2; Access_Level = "rw"; } Export_Id = 101; }""" @staticmethod def conf_mangle(*confs): """A "mangler" for the conf format. Its purpose is to transform conf data in a way so that semantically equivalent confs yield identical results. Besides this objective criteria, we seek a good trade-off between the following requirements: - low lossiness; - low code complexity. 
""" def _conf_mangle(conf): # split to expressions by the delimiter ";" # (braces are forced to be treated as expressions # by sandwiching them in ";"-s) conf = re.sub(r'[{}]', r';\g<0>;', conf).split(';') # whitespace-split expressions to tokens with # (equality is forced to be treated as token by # sandwiching in space) conf = map(lambda l: l.replace("=", " = ").split(), conf) # get rid of by-product empty lists (derived from superflouous # ";"-s that might have crept in due to "sandwiching") conf = map(lambda x: x, conf) # handle the non-deterministic order of confs conf = list(conf) conf.sort() return conf return (_conf_mangle(conf) for conf in confs) def test_conf2json(self): test_ganesha_cnf_with_comment = """EXPORT { # fake_export_block Export_Id = 101; CLIENT { Clients = ip1; } }""" result_dict_unicode = { u'EXPORT': { u'CLIENT': {u'Clients': u'ip1'}, u'Export_Id': 101 } } ret = manager._conf2json(test_ganesha_cnf_with_comment) self.assertEqual(result_dict_unicode, jsonutils.loads(ret)) def test_parseconf_ganesha_cnf_input(self): ret = manager.parseconf(test_ganesha_cnf) self.assertEqual(test_dict_unicode, ret) def test_parseconf_json_input(self): ret = manager.parseconf(jsonutils.dumps(test_dict_str)) self.assertEqual(test_dict_unicode, ret) def test_dump_to_conf(self): ganesha_cnf = six.StringIO() manager._dump_to_conf(test_dict_str, ganesha_cnf) self.assertEqual(*self.conf_mangle(self.ref_ganesha_cnf, ganesha_cnf.getvalue())) def test_mkconf(self): ganesha_cnf = manager.mkconf(test_dict_str) self.assertEqual(*self.conf_mangle(self.ref_ganesha_cnf, ganesha_cnf)) @ddt.ddt class GaneshaManagerTestCase(test.TestCase): """Tests GaneshaManager.""" def instantiate_ganesha_manager(self, *args, **kwargs): ganesha_rados_store_enable = kwargs.get('ganesha_rados_store_enable', False) if ganesha_rados_store_enable: with mock.patch.object( manager.GaneshaManager, '_get_rados_object') as self.mock_get_rados_object: return manager.GaneshaManager(*args, **kwargs) else: with mock.patch.object( manager.GaneshaManager, 'get_export_id', return_value=100) as self.mock_get_export_id: return manager.GaneshaManager(*args, **kwargs) def setUp(self): super(GaneshaManagerTestCase, self).setUp() self._execute = mock.Mock(return_value=('', '')) self._manager = self.instantiate_ganesha_manager( self._execute, 'faketag', **manager_fake_kwargs) self._ceph_vol_client = mock.Mock() self._setup_rados = mock.Mock() self._execute2 = mock.Mock(return_value=('', '')) self.mock_object(manager, 'rados', MockRadosClientModule) self.mock_object(manager, 'setup_rados', self._setup_rados) fake_kwargs = copy.copy(manager_fake_kwargs) fake_kwargs.update( ganesha_rados_store_enable=True, ganesha_rados_store_pool_name='fakepool', ganesha_rados_export_counter='fakecounter', ganesha_rados_export_index='fakeindex', ceph_vol_client=self._ceph_vol_client ) self._manager_with_rados_store = self.instantiate_ganesha_manager( self._execute2, 'faketag', **fake_kwargs) self.mock_object(utils, 'synchronized', mock.Mock(return_value=lambda f: f)) def test_init(self): self.mock_object(self._manager, 'reset_exports') self.mock_object(self._manager, 'restart_service') self.assertEqual('/fakedir0/fakeconfig', self._manager.ganesha_config_path) self.assertEqual('faketag', self._manager.tag) self.assertEqual('/fakedir0/export.d', self._manager.ganesha_export_dir) self.assertEqual('/fakedir1/fake.db', self._manager.ganesha_db_path) self.assertEqual('ganesha.fakeservice', self._manager.ganesha_service) self.assertEqual( [mock.call('mkdir', '-p', 
self._manager.ganesha_export_dir), mock.call('mkdir', '-p', '/fakedir1'), mock.call('sqlite3', self._manager.ganesha_db_path, 'create table ganesha(key varchar(20) primary key, ' 'value int); insert into ganesha values("exportid", ' '100);', run_as_root=False, check_exit_code=False)], self._execute.call_args_list) self.mock_get_export_id.assert_called_once_with(bump=False) def test_init_execute_error_log_message(self): fake_args = ('foo', 'bar') def raise_exception(*args, **kwargs): if args == fake_args: raise exception.GaneshaCommandFailure() test_execute = mock.Mock(side_effect=raise_exception) self.mock_object(manager.LOG, 'error') test_manager = self.instantiate_ganesha_manager( test_execute, 'faketag', **manager_fake_kwargs) self.assertRaises( exception.GaneshaCommandFailure, test_manager.execute, *fake_args, message='fakemsg') manager.LOG.error.assert_called_once_with( mock.ANY, {'tag': 'faketag', 'msg': 'fakemsg'}) def test_init_execute_error_no_log_message(self): fake_args = ('foo', 'bar') def raise_exception(*args, **kwargs): if args == fake_args: raise exception.GaneshaCommandFailure() test_execute = mock.Mock(side_effect=raise_exception) self.mock_object(manager.LOG, 'error') test_manager = self.instantiate_ganesha_manager( test_execute, 'faketag', **manager_fake_kwargs) self.assertRaises( exception.GaneshaCommandFailure, test_manager.execute, *fake_args, message='fakemsg', makelog=False) self.assertFalse(manager.LOG.error.called) @ddt.data(False, True) def test_init_with_rados_store_and_export_counter_exists( self, counter_exists): fake_execute = mock.Mock(return_value=('', '')) fake_kwargs = copy.copy(manager_fake_kwargs) fake_kwargs.update( ganesha_rados_store_enable=True, ganesha_rados_store_pool_name='fakepool', ganesha_rados_export_counter='fakecounter', ganesha_rados_export_index='fakeindex', ceph_vol_client=self._ceph_vol_client ) if counter_exists: self.mock_object( manager.GaneshaManager, '_get_rados_object', mock.Mock()) else: self.mock_object( manager.GaneshaManager, '_get_rados_object', mock.Mock(side_effect=MockRadosClientModule.ObjectNotFound)) self.mock_object(manager.GaneshaManager, '_put_rados_object') test_mgr = manager.GaneshaManager( fake_execute, 'faketag', **fake_kwargs) self.assertEqual('/fakedir0/fakeconfig', test_mgr.ganesha_config_path) self.assertEqual('faketag', test_mgr.tag) self.assertEqual('/fakedir0/export.d', test_mgr.ganesha_export_dir) self.assertEqual('ganesha.fakeservice', test_mgr.ganesha_service) fake_execute.assert_called_once_with( 'mkdir', '-p', '/fakedir0/export.d') self.assertTrue(test_mgr.ganesha_rados_store_enable) self.assertEqual('fakepool', test_mgr.ganesha_rados_store_pool_name) self.assertEqual('fakecounter', test_mgr.ganesha_rados_export_counter) self.assertEqual('fakeindex', test_mgr.ganesha_rados_export_index) self.assertEqual(self._ceph_vol_client, test_mgr.ceph_vol_client) self._setup_rados.assert_called_with() test_mgr._get_rados_object.assert_called_once_with('fakecounter') if counter_exists: self.assertFalse(test_mgr._put_rados_object.called) else: test_mgr._put_rados_object.assert_called_once_with( 'fakecounter', six.text_type(1000)) def test_ganesha_export_dir(self): self.assertEqual( '/fakedir0/export.d', self._manager.ganesha_export_dir) def test_getpath(self): self.assertEqual( '/fakedir0/export.d/fakefile.conf', self._manager._getpath('fakefile')) def test_get_export_rados_object_name(self): self.assertEqual( 'ganesha-export-fakeobj', self._manager._get_export_rados_object_name('fakeobj')) def 
test_write_tmp_conf_file(self): self.mock_object(manager.pipes, 'quote', mock.Mock(side_effect=['fakedata', test_tmp_path])) test_args = [ ('mktemp', '-p', '/fakedir0/export.d', '-t', 'fakefile.conf.XXXXXX'), ('sh', '-c', 'echo fakedata > %s' % test_tmp_path)] test_kwargs = { 'message': 'writing %s' % test_tmp_path } def return_tmpfile(*args, **kwargs): if args == test_args[0]: return (test_tmp_path + '\n', '') self.mock_object(self._manager, 'execute', mock.Mock(side_effect=return_tmpfile)) ret = self._manager._write_tmp_conf_file(test_path, 'fakedata') self._manager.execute.assert_has_calls([ mock.call(*test_args[0]), mock.call(*test_args[1], **test_kwargs)]) manager.pipes.quote.assert_has_calls([ mock.call('fakedata'), mock.call(test_tmp_path)]) self.assertEqual(test_tmp_path, ret) @ddt.data(True, False) def test_write_conf_file_with_mv_error(self, mv_error): test_data = 'fakedata' test_args = [ ('mv', test_tmp_path, test_path), ('rm', test_tmp_path)] self.mock_object(self._manager, '_getpath', mock.Mock(return_value=test_path)) self.mock_object(self._manager, '_write_tmp_conf_file', mock.Mock(return_value=test_tmp_path)) def mock_return(*args, **kwargs): if args == test_args[0]: if mv_error: raise exception.ProcessExecutionError() else: return ('', '') self.mock_object(self._manager, 'execute', mock.Mock(side_effect=mock_return)) if mv_error: self.assertRaises( exception.ProcessExecutionError, self._manager._write_conf_file, test_name, test_data) else: ret = self._manager._write_conf_file(test_name, test_data) self._manager._getpath.assert_called_once_with(test_name) self._manager._write_tmp_conf_file.assert_called_once_with( test_path, test_data) if mv_error: self._manager.execute.assert_has_calls([ mock.call(*test_args[0]), mock.call(*test_args[1])]) else: self._manager.execute.assert_has_calls([ mock.call(*test_args[0])]) self.assertEqual(test_path, ret) def test_mkindex(self): test_ls_output = 'INDEX.conf\nfakefile.conf\nfakefile.txt' test_index = '%include /fakedir0/export.d/fakefile.conf\n' self.mock_object(self._manager, 'execute', mock.Mock(return_value=(test_ls_output, ''))) self.mock_object(self._manager, '_write_conf_file') ret = self._manager._mkindex() self._manager.execute.assert_called_once_with( 'ls', '/fakedir0/export.d', run_as_root=False) self._manager._write_conf_file.assert_called_once_with( 'INDEX', test_index) self.assertIsNone(ret) def test_read_export_rados_object(self): self.mock_object(self._manager_with_rados_store, '_get_export_rados_object_name', mock.Mock(return_value='fakeobj')) self.mock_object(self._manager_with_rados_store, '_get_rados_object', mock.Mock(return_value=test_ganesha_cnf)) self.mock_object(manager, 'parseconf', mock.Mock(return_value=test_dict_unicode)) ret = self._manager_with_rados_store._read_export_rados_object( test_name) (self._manager_with_rados_store._get_export_rados_object_name. assert_called_once_with(test_name)) (self._manager_with_rados_store._get_rados_object. 
assert_called_once_with('fakeobj')) manager.parseconf.assert_called_once_with(test_ganesha_cnf) self.assertEqual(test_dict_unicode, ret) def test_read_export_file(self): test_args = ('cat', test_path) test_kwargs = {'message': 'reading export fakefile'} self.mock_object(self._manager, '_getpath', mock.Mock(return_value=test_path)) self.mock_object(self._manager, 'execute', mock.Mock(return_value=(test_ganesha_cnf,))) self.mock_object(manager, 'parseconf', mock.Mock(return_value=test_dict_unicode)) ret = self._manager._read_export_file(test_name) self._manager._getpath.assert_called_once_with(test_name) self._manager.execute.assert_called_once_with( *test_args, **test_kwargs) manager.parseconf.assert_called_once_with(test_ganesha_cnf) self.assertEqual(test_dict_unicode, ret) @ddt.data(False, True) def test_read_export_with_rados_store(self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable self.mock_object(self._manager, '_read_export_file', mock.Mock(return_value=test_dict_unicode)) self.mock_object(self._manager, '_read_export_rados_object', mock.Mock(return_value=test_dict_unicode)) ret = self._manager._read_export(test_name) if rados_store_enable: self._manager._read_export_rados_object.assert_called_once_with( test_name) self.assertFalse(self._manager._read_export_file.called) else: self._manager._read_export_file.assert_called_once_with(test_name) self.assertFalse(self._manager._read_export_rados_object.called) self.assertEqual(test_dict_unicode, ret) @ddt.data(True, False) def test_check_export_rados_object_exists(self, exists): self.mock_object( self._manager_with_rados_store, '_get_export_rados_object_name', mock.Mock(return_value='fakeobj')) if exists: self.mock_object( self._manager_with_rados_store, '_get_rados_object') else: self.mock_object( self._manager_with_rados_store, '_get_rados_object', mock.Mock(side_effect=MockRadosClientModule.ObjectNotFound)) ret = self._manager_with_rados_store._check_export_rados_object_exists( test_name) (self._manager_with_rados_store._get_export_rados_object_name. assert_called_once_with(test_name)) (self._manager_with_rados_store._get_rados_object. 
assert_called_once_with('fakeobj')) if exists: self.assertTrue(ret) else: self.assertFalse(ret) def test_check_file_exists(self): self.mock_object(self._manager, 'execute', mock.Mock(return_value=(test_ganesha_cnf,))) ret = self._manager._check_file_exists(test_path) self._manager.execute.assert_called_once_with( 'test', '-f', test_path, makelog=False, run_as_root=False) self.assertTrue(ret) @ddt.data(1, 4) def test_check_file_exists_error(self, exit_code): self.mock_object( self._manager, 'execute', mock.Mock(side_effect=exception.GaneshaCommandFailure( exit_code=exit_code)) ) if exit_code == 1: ret = self._manager._check_file_exists(test_path) self.assertFalse(ret) else: self.assertRaises(exception.GaneshaCommandFailure, self._manager._check_file_exists, test_path) self._manager.execute.assert_called_once_with( 'test', '-f', test_path, makelog=False, run_as_root=False) def test_check_export_file_exists(self): self.mock_object(self._manager, '_getpath', mock.Mock(return_value=test_path)) self.mock_object(self._manager, '_check_file_exists', mock.Mock(return_value=True)) ret = self._manager._check_export_file_exists(test_name) self._manager._getpath.assert_called_once_with(test_name) self._manager._check_file_exists.assert_called_once_with(test_path) self.assertTrue(ret) @ddt.data(False, True) def test_check_export_exists_with_rados_store(self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable self.mock_object(self._manager, '_check_export_file_exists', mock.Mock(return_value=True)) self.mock_object(self._manager, '_check_export_rados_object_exists', mock.Mock(return_value=True)) ret = self._manager.check_export_exists(test_name) if rados_store_enable: (self._manager._check_export_rados_object_exists. assert_called_once_with(test_name)) self.assertFalse(self._manager._check_export_file_exists.called) else: self._manager._check_export_file_exists.assert_called_once_with( test_name) self.assertFalse( self._manager._check_export_rados_object_exists.called) self.assertTrue(ret) def test_write_export_rados_object(self): self.mock_object(self._manager, '_get_export_rados_object_name', mock.Mock(return_value='fakeobj')) self.mock_object(self._manager, '_put_rados_object') self.mock_object(self._manager, '_getpath', mock.Mock(return_value=test_path)) self.mock_object(self._manager, '_write_tmp_conf_file', mock.Mock(return_value=test_tmp_path)) ret = self._manager._write_export_rados_object(test_name, 'fakedata') self._manager._get_export_rados_object_name.assert_called_once_with( test_name) self._manager._put_rados_object.assert_called_once_with( 'fakeobj', 'fakedata') self._manager._getpath.assert_called_once_with(test_name) self._manager._write_tmp_conf_file.assert_called_once_with( test_path, 'fakedata') self.assertEqual(test_tmp_path, ret) @ddt.data(True, False) def test_write_export_with_rados_store(self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable self.mock_object(manager, 'mkconf', mock.Mock(return_value=test_ganesha_cnf)) self.mock_object(self._manager, '_write_conf_file', mock.Mock(return_value=test_path)) self.mock_object(self._manager, '_write_export_rados_object', mock.Mock(return_value=test_path)) ret = self._manager._write_export(test_name, test_dict_str) manager.mkconf.assert_called_once_with(test_dict_str) if rados_store_enable: self._manager._write_export_rados_object.assert_called_once_with( test_name, test_ganesha_cnf) self.assertFalse(self._manager._write_conf_file.called) else: 
self._manager._write_conf_file.assert_called_once_with( test_name, test_ganesha_cnf) self.assertFalse(self._manager._write_export_rados_object.called) self.assertEqual(test_path, ret) def test_write_export_error_incomplete_export_block(self): test_errordict = { u'EXPORT': { u'Export_Id': '@config', u'CLIENT': {u'Clients': u"'ip1','ip2'"} } } self.mock_object(manager, 'mkconf', mock.Mock(return_value=test_ganesha_cnf)) self.mock_object(self._manager, '_write_conf_file', mock.Mock(return_value=test_path)) self.assertRaises(exception.InvalidParameterValue, self._manager._write_export, test_name, test_errordict) self.assertFalse(manager.mkconf.called) self.assertFalse(self._manager._write_conf_file.called) def test_rm_file(self): self.mock_object(self._manager, 'execute', mock.Mock(return_value=('', ''))) ret = self._manager._rm_export_file(test_name) self._manager.execute.assert_called_once_with('rm', '-f', test_path) self.assertIsNone(ret) def test_rm_export_file(self): self.mock_object(self._manager, '_getpath', mock.Mock(return_value=test_path)) self.mock_object(self._manager, '_rm_file') ret = self._manager._rm_export_file(test_name) self._manager._getpath.assert_called_once_with(test_name) self._manager._rm_file.assert_called_once_with(test_path) self.assertIsNone(ret) def test_rm_export_rados_object(self): self.mock_object(self._manager_with_rados_store, '_get_export_rados_object_name', mock.Mock(return_value='fakeobj')) self.mock_object(self._manager_with_rados_store, '_delete_rados_object') ret = self._manager_with_rados_store._rm_export_rados_object( test_name) (self._manager_with_rados_store._get_export_rados_object_name. assert_called_once_with(test_name)) (self._manager_with_rados_store._delete_rados_object. assert_called_once_with('fakeobj')) self.assertIsNone(ret) def test_dbus_send_ganesha(self): test_args = ('arg1', 'arg2') test_kwargs = {'key': 'value'} self.mock_object(self._manager, 'execute', mock.Mock(return_value=('', ''))) ret = self._manager._dbus_send_ganesha('fakemethod', *test_args, **test_kwargs) self._manager.execute.assert_called_once_with( 'dbus-send', '--print-reply', '--system', '--dest=org.ganesha.nfsd', '/org/ganesha/nfsd/ExportMgr', 'org.ganesha.nfsd.exportmgr.fakemethod', *test_args, message='dbus call exportmgr.fakemethod', **test_kwargs) self.assertIsNone(ret) def test_remove_export_dbus(self): self.mock_object(self._manager, '_dbus_send_ganesha') ret = self._manager._remove_export_dbus(test_export_id) self._manager._dbus_send_ganesha.assert_called_once_with( 'RemoveExport', 'uint16:101') self.assertIsNone(ret) @ddt.data('', '%url rados://fakepool/fakeobj2') def test_add_rados_object_url_to_index_with_index_data( self, index_data): self.mock_object( self._manager_with_rados_store, '_get_rados_object', mock.Mock(return_value=index_data)) self.mock_object( self._manager_with_rados_store, '_get_export_rados_object_name', mock.Mock(return_value='fakeobj1')) self.mock_object( self._manager_with_rados_store, '_put_rados_object') ret = (self._manager_with_rados_store. _add_rados_object_url_to_index('fakename')) (self._manager_with_rados_store._get_rados_object. assert_called_once_with('fakeindex')) (self._manager_with_rados_store._get_export_rados_object_name. assert_called_once_with('fakename')) if index_data: urls = ('%url rados://fakepool/fakeobj2\n' '%url rados://fakepool/fakeobj1') else: urls = '%url rados://fakepool/fakeobj1' (self._manager_with_rados_store._put_rados_object. 
assert_called_once_with('fakeindex', urls)) self.assertIsNone(ret) @ddt.data('', '%url rados://fakepool/fakeobj1\n' '%url rados://fakepool/fakeobj2') def test_remove_rados_object_url_from_index_with_index_data( self, index_data): self.mock_object( self._manager_with_rados_store, '_get_rados_object', mock.Mock(return_value=index_data)) self.mock_object( self._manager_with_rados_store, '_get_export_rados_object_name', mock.Mock(return_value='fakeobj1')) self.mock_object( self._manager_with_rados_store, '_put_rados_object') ret = (self._manager_with_rados_store. _remove_rados_object_url_from_index('fakename')) if index_data: (self._manager_with_rados_store._get_rados_object. assert_called_once_with('fakeindex')) (self._manager_with_rados_store._get_export_rados_object_name. assert_called_once_with('fakename')) urls = '%url rados://fakepool/fakeobj2' (self._manager_with_rados_store._put_rados_object. assert_called_once_with('fakeindex', urls)) else: (self._manager_with_rados_store._get_rados_object. assert_called_once_with('fakeindex')) self.assertFalse(self._manager_with_rados_store. _get_export_rados_object_name.called) self.assertFalse(self._manager_with_rados_store. _put_rados_object.called) self.assertIsNone(ret) @ddt.data(False, True) def test_add_export_with_rados_store(self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable self.mock_object(self._manager, '_write_export', mock.Mock(return_value=test_path)) self.mock_object(self._manager, '_dbus_send_ganesha') self.mock_object(self._manager, '_rm_file') self.mock_object(self._manager, '_add_rados_object_url_to_index') self.mock_object(self._manager, '_mkindex') ret = self._manager.add_export(test_name, test_dict_str) self._manager._write_export.assert_called_once_with( test_name, test_dict_str) self._manager._dbus_send_ganesha.assert_called_once_with( 'AddExport', 'string:' + test_path, 'string:EXPORT(Export_Id=101)') if rados_store_enable: self._manager._rm_file.assert_called_once_with(test_path) self._manager._add_rados_object_url_to_index(test_name) self.assertFalse(self._manager._mkindex.called) else: self._manager._mkindex.assert_called_once_with() self.assertFalse(self._manager._rm_file.called) self.assertFalse( self._manager._add_rados_object_url_to_index.called) self.assertIsNone(ret) def test_add_export_error_during_mkindex(self): self.mock_object(self._manager, '_write_export', mock.Mock(return_value=test_path)) self.mock_object(self._manager, '_dbus_send_ganesha') self.mock_object( self._manager, '_mkindex', mock.Mock(side_effect=exception.GaneshaCommandFailure)) self.mock_object(self._manager, '_rm_export_file') self.mock_object(self._manager, '_remove_export_dbus') self.assertRaises(exception.GaneshaCommandFailure, self._manager.add_export, test_name, test_dict_str) self._manager._write_export.assert_called_once_with( test_name, test_dict_str) self._manager._dbus_send_ganesha.assert_called_once_with( 'AddExport', 'string:' + test_path, 'string:EXPORT(Export_Id=101)') self._manager._mkindex.assert_called_once_with() self._manager._rm_export_file.assert_called_once_with(test_name) self._manager._remove_export_dbus.assert_called_once_with( test_export_id) @ddt.data(True, False) def test_add_export_error_during_write_export_with_rados_store( self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable self.mock_object( self._manager, '_write_export', mock.Mock(side_effect=exception.GaneshaCommandFailure)) self.mock_object(self._manager, '_mkindex') 
self.assertRaises(exception.GaneshaCommandFailure, self._manager.add_export, test_name, test_dict_str) self._manager._write_export.assert_called_once_with( test_name, test_dict_str) if rados_store_enable: self.assertFalse(self._manager._mkindex.called) else: self._manager._mkindex.assert_called_once_with() @ddt.data(True, False) def test_add_export_error_during_dbus_send_ganesha_with_rados_store( self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable self.mock_object(self._manager, '_write_export', mock.Mock(return_value=test_path)) self.mock_object( self._manager, '_dbus_send_ganesha', mock.Mock(side_effect=exception.GaneshaCommandFailure)) self.mock_object(self._manager, '_mkindex') self.mock_object(self._manager, '_rm_export_file') self.mock_object(self._manager, '_rm_export_rados_object') self.mock_object(self._manager, '_rm_file') self.mock_object(self._manager, '_remove_export_dbus') self.assertRaises(exception.GaneshaCommandFailure, self._manager.add_export, test_name, test_dict_str) self._manager._write_export.assert_called_once_with( test_name, test_dict_str) self._manager._dbus_send_ganesha.assert_called_once_with( 'AddExport', 'string:' + test_path, 'string:EXPORT(Export_Id=101)') if rados_store_enable: self._manager._rm_export_rados_object.assert_called_once_with( test_name) self._manager._rm_file.assert_called_once_with(test_path) self.assertFalse(self._manager._rm_export_file.called) self.assertFalse(self._manager._mkindex.called) else: self._manager._rm_export_file.assert_called_once_with(test_name) self._manager._mkindex.assert_called_once_with() self.assertFalse(self._manager._rm_export_rados_object.called) self.assertFalse(self._manager._rm_file.called) self.assertFalse(self._manager._remove_export_dbus.called) @ddt.data(True, False) def test_update_export_with_rados_store(self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable confdict = { 'EXPORT': { 'Export_Id': 101, 'CLIENT': {'Clients': 'ip1', 'Access_Level': 'ro'}, } } self.mock_object(self._manager, '_read_export', mock.Mock(return_value=test_dict_unicode)) self.mock_object(self._manager, '_write_export', mock.Mock(return_value=test_path)) self.mock_object(self._manager, '_dbus_send_ganesha') self.mock_object(self._manager, '_rm_file') self._manager.update_export(test_name, confdict) self._manager._read_export.assert_called_once_with(test_name) self._manager._write_export.assert_called_once_with(test_name, confdict) self._manager._dbus_send_ganesha.assert_called_once_with( 'UpdateExport', 'string:' + test_path, 'string:EXPORT(Export_Id=101)') if rados_store_enable: self._manager._rm_file.assert_called_once_with(test_path) else: self.assertFalse(self._manager._rm_file.called) @ddt.data(True, False) def test_update_export_error_with_rados_store(self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable confdict = { 'EXPORT': { 'Export_Id': 101, 'CLIENT': {'Clients': 'ip1', 'Access_Level': 'ro'}, } } self.mock_object(self._manager, '_read_export', mock.Mock(return_value=test_dict_unicode)) self.mock_object(self._manager, '_write_export', mock.Mock(return_value=test_path)) self.mock_object( self._manager, '_dbus_send_ganesha', mock.Mock(side_effect=exception.GaneshaCommandFailure)) self.mock_object(self._manager, '_rm_file') self.assertRaises(exception.GaneshaCommandFailure, self._manager.update_export, test_name, confdict) self._manager._read_export.assert_called_once_with(test_name) 
self._manager._write_export.assert_has_calls([ mock.call(test_name, confdict), mock.call(test_name, test_dict_unicode)]) self._manager._dbus_send_ganesha.assert_called_once_with( 'UpdateExport', 'string:' + test_path, 'string:EXPORT(Export_Id=101)') if rados_store_enable: self._manager._rm_file.assert_called_once_with(test_path) else: self.assertFalse(self._manager._rm_file.called) @ddt.data(True, False) def test_remove_export_with_rados_store(self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable self.mock_object(self._manager, '_read_export', mock.Mock(return_value=test_dict_unicode)) self.mock_object(self._manager, '_get_export_rados_object_name', mock.Mock(return_value='fakeobj')) methods = ('_remove_export_dbus', '_rm_export_file', '_mkindex', '_remove_rados_object_url_from_index', '_delete_rados_object') for method in methods: self.mock_object(self._manager, method) ret = self._manager.remove_export(test_name) self._manager._read_export.assert_called_once_with(test_name) self._manager._remove_export_dbus.assert_called_once_with( test_dict_unicode['EXPORT']['Export_Id']) if rados_store_enable: (self._manager._get_export_rados_object_name. assert_called_once_with(test_name)) self._manager._delete_rados_object.assert_called_once_with( 'fakeobj') (self._manager._remove_rados_object_url_from_index. assert_called_once_with(test_name)) self.assertFalse(self._manager._rm_export_file.called) self.assertFalse(self._manager._mkindex.called) else: self._manager._rm_export_file.assert_called_once_with(test_name) self._manager._mkindex.assert_called_once_with() self.assertFalse( self._manager._get_export_rados_object_name.called) self.assertFalse(self._manager._delete_rados_object.called) self.assertFalse( self._manager._remove_rados_object_url_from_index.called) self.assertIsNone(ret) @ddt.data(True, False) def test_remove_export_error_during_read_export_with_rados_store( self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable self.mock_object( self._manager, '_read_export', mock.Mock(side_effect=exception.GaneshaCommandFailure)) self.mock_object(self._manager, '_get_export_rados_object_name', mock.Mock(return_value='fakeobj')) methods = ('_remove_export_dbus', '_rm_export_file', '_mkindex', '_remove_rados_object_url_from_index', '_delete_rados_object') for method in methods: self.mock_object(self._manager, method) self.assertRaises(exception.GaneshaCommandFailure, self._manager.remove_export, test_name) self._manager._read_export.assert_called_once_with(test_name) self.assertFalse(self._manager._remove_export_dbus.called) if rados_store_enable: (self._manager._get_export_rados_object_name. assert_called_once_with(test_name)) self._manager._delete_rados_object.assert_called_once_with( 'fakeobj') (self._manager._remove_rados_object_url_from_index. 
assert_called_once_with(test_name)) self.assertFalse(self._manager._rm_export_file.called) self.assertFalse(self._manager._mkindex.called) else: self._manager._rm_export_file.assert_called_once_with(test_name) self._manager._mkindex.assert_called_once_with() self.assertFalse( self._manager._get_export_rados_object_name.called) self.assertFalse(self._manager._delete_rados_object.called) self.assertFalse( self._manager._remove_rados_object_url_from_index.called) @ddt.data(True, False) def test_remove_export_error_during_remove_export_dbus_with_rados_store( self, rados_store_enable): self._manager.ganesha_rados_store_enable = rados_store_enable self.mock_object(self._manager, '_read_export', mock.Mock(return_value=test_dict_unicode)) self.mock_object(self._manager, '_get_export_rados_object_name', mock.Mock(return_value='fakeobj')) self.mock_object( self._manager, '_remove_export_dbus', mock.Mock(side_effect=exception.GaneshaCommandFailure)) methods = ('_rm_export_file', '_mkindex', '_remove_rados_object_url_from_index', '_delete_rados_object') for method in methods: self.mock_object(self._manager, method) self.assertRaises(exception.GaneshaCommandFailure, self._manager.remove_export, test_name) self._manager._read_export.assert_called_once_with(test_name) self._manager._remove_export_dbus.assert_called_once_with( test_dict_unicode['EXPORT']['Export_Id']) if rados_store_enable: (self._manager._get_export_rados_object_name. assert_called_once_with(test_name)) self._manager._delete_rados_object.assert_called_once_with( 'fakeobj') (self._manager._remove_rados_object_url_from_index. assert_called_once_with(test_name)) self.assertFalse(self._manager._rm_export_file.called) self.assertFalse(self._manager._mkindex.called) else: self._manager._rm_export_file.assert_called_once_with(test_name) self._manager._mkindex.assert_called_once_with() self.assertFalse( self._manager._get_export_rados_object_name.called) self.assertFalse(self._manager._delete_rados_object.called) self.assertFalse( self._manager._remove_rados_object_url_from_index.called) def test_get_rados_object(self): fakebin = six.unichr(246).encode('utf-8') self.mock_object(self._ceph_vol_client, 'get_object', mock.Mock(return_value=fakebin)) ret = self._manager_with_rados_store._get_rados_object('fakeobj') self._ceph_vol_client.get_object.assert_called_once_with( 'fakepool', 'fakeobj') self.assertEqual(fakebin.decode('utf-8'), ret) def test_put_rados_object(self): faketext = six.unichr(246) self.mock_object(self._ceph_vol_client, 'put_object', mock.Mock(return_value=None)) ret = self._manager_with_rados_store._put_rados_object( 'fakeobj', faketext) self._ceph_vol_client.put_object.assert_called_once_with( 'fakepool', 'fakeobj', faketext.encode('utf-8')) self.assertIsNone(ret) def test_delete_rados_object(self): self.mock_object(self._ceph_vol_client, 'delete_object', mock.Mock(return_value=None)) ret = self._manager_with_rados_store._delete_rados_object('fakeobj') self._ceph_vol_client.delete_object.assert_called_once_with( 'fakepool', 'fakeobj') self.assertIsNone(ret) def test_get_export_id(self): self.mock_object(self._manager, 'execute', mock.Mock(return_value=('exportid|101', ''))) ret = self._manager.get_export_id() self._manager.execute.assert_called_once_with( 'sqlite3', self._manager.ganesha_db_path, 'update ganesha set value = value + 1;' 'select * from ganesha where key = "exportid";', run_as_root=False) self.assertEqual(101, ret) def test_get_export_id_nobump(self): self.mock_object(self._manager, 'execute', 
mock.Mock(return_value=('exportid|101', ''))) ret = self._manager.get_export_id(bump=False) self._manager.execute.assert_called_once_with( 'sqlite3', self._manager.ganesha_db_path, 'select * from ganesha where key = "exportid";', run_as_root=False) self.assertEqual(101, ret) def test_get_export_id_error_invalid_export_db(self): self.mock_object(self._manager, 'execute', mock.Mock(return_value=('invalid', ''))) self.mock_object(manager.LOG, 'error') self.assertRaises(exception.InvalidSqliteDB, self._manager.get_export_id) manager.LOG.error.assert_called_once_with( mock.ANY, mock.ANY) self._manager.execute.assert_called_once_with( 'sqlite3', self._manager.ganesha_db_path, 'update ganesha set value = value + 1;' 'select * from ganesha where key = "exportid";', run_as_root=False) @ddt.data(True, False) def test_get_export_id_with_rados_store_and_bump(self, bump): self.mock_object(self._manager_with_rados_store, '_get_rados_object', mock.Mock(return_value='1000')) self.mock_object(self._manager_with_rados_store, '_put_rados_object') ret = self._manager_with_rados_store.get_export_id(bump=bump) if bump: (self._manager_with_rados_store._get_rados_object. assert_called_once_with('fakecounter')) (self._manager_with_rados_store._put_rados_object. assert_called_once_with('fakecounter', '1001')) self.assertEqual(1001, ret) else: (self._manager_with_rados_store._get_rados_object. assert_called_once_with('fakecounter')) self.assertFalse( self._manager_with_rados_store._put_rados_object.called) self.assertEqual(1000, ret) def test_restart_service(self): self.mock_object(self._manager, 'execute') ret = self._manager.restart_service() self._manager.execute.assert_called_once_with( 'service', 'ganesha.fakeservice', 'restart') self.assertIsNone(ret) def test_reset_exports(self): self.mock_object(self._manager, 'execute') self.mock_object(self._manager, '_mkindex') ret = self._manager.reset_exports() self._manager.execute.assert_called_once_with( 'sh', '-c', 'rm -f /fakedir0/export.d/*.conf') self._manager._mkindex.assert_called_once_with() self.assertIsNone(ret) manila-10.0.0/manila/tests/share/drivers/windows/0000775000175000017500000000000013656750362021745 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/windows/__init__.py0000664000175000017500000000000013656750227024044 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/windows/test_windows_smb_driver.py0000664000175000017500000003045213656750227027270 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os from unittest import mock import ddt from manila.share import configuration from manila.share.drivers import generic from manila.share.drivers.windows import service_instance from manila.share.drivers.windows import windows_smb_driver as windows_drv from manila.share.drivers.windows import windows_smb_helper from manila.share.drivers.windows import windows_utils from manila.share.drivers.windows import winrm_helper from manila import test from manila.tests import fake_share @ddt.ddt class WindowsSMBDriverTestCase(test.TestCase): @mock.patch.object(winrm_helper, 'WinRMHelper') @mock.patch.object(windows_utils, 'WindowsUtils') @mock.patch.object(windows_smb_helper, 'WindowsSMBHelper') @mock.patch.object(service_instance, 'WindowsServiceInstanceManager') def setUp(self, mock_sv_instance_mgr, mock_smb_helper_cls, mock_utils_cls, mock_winrm_helper_cls): self.flags(driver_handles_share_servers=True) self._fake_conf = configuration.Configuration(None) self._share = fake_share.fake_share(share_proto='SMB') self._share_server = dict( backend_details=mock.sentinel.backend_details) self._drv = windows_drv.WindowsSMBDriver( configuration=self._fake_conf) self._drv._setup_helpers() self._remote_execute = mock_winrm_helper_cls.return_value self._windows_utils = mock_utils_cls.return_value self._smb_helper = mock_smb_helper_cls.return_value super(WindowsSMBDriverTestCase, self).setUp() @mock.patch('manila.share.driver.ShareDriver') def test_update_share_stats(self, mock_base_driver): self._drv._update_share_stats() mock_base_driver._update_share_stats.assert_called_once_with( self._drv, data=dict(storage_protocol="CIFS")) @mock.patch.object(service_instance, 'WindowsServiceInstanceManager') def test_setup_service_instance_manager(self, mock_sv_instance_mgr): self._drv._setup_service_instance_manager() mock_sv_instance_mgr.assert_called_once_with( driver_config=self._fake_conf) def test_setup_helpers(self): expected_helpers = {"SMB": self._smb_helper, "CIFS": self._smb_helper} self._drv._setup_helpers() self.assertEqual(expected_helpers, self._drv._helpers) @mock.patch.object(generic.GenericShareDriver, '_teardown_server') def test_teardown_server(self, mock_super_teardown): mock_server = {'joined_domain': True, 'instance_id': mock.sentinel.instance_id} mock_sec_service = {'user': mock.sentinel.user, 'password': mock.sentinel.password, 'domain': mock.sentinel.domain} sv_mgr = self._drv.service_instance_manager sv_mgr.get_valid_security_service.return_value = mock_sec_service # We ensure that domain unjoin exceptions do not prevent the # service instance from being teared down. 
self._windows_utils.unjoin_domain.side_effect = Exception self._drv._teardown_server(mock_server, mock_sec_service) sv_mgr.get_valid_security_service.assert_called_once_with( mock_sec_service) self._windows_utils.unjoin_domain.assert_called_once_with( mock_server, mock_sec_service['user'], mock_sec_service['password']) mock_super_teardown.assert_called_once_with(mock_server, mock_sec_service) @mock.patch.object(windows_drv.WindowsSMBDriver, '_get_disk_number') def test_format_device(self, mock_get_disk_number): mock_get_disk_number.return_value = mock.sentinel.disk_number self._drv._format_device(mock.sentinel.server, mock.sentinel.vol) self._drv._get_disk_number.assert_called_once_with( mock.sentinel.server, mock.sentinel.vol) self._windows_utils.initialize_disk.assert_called_once_with( mock.sentinel.server, mock.sentinel.disk_number) self._windows_utils.create_partition.assert_called_once_with( mock.sentinel.server, mock.sentinel.disk_number) self._windows_utils.format_partition.assert_called_once_with( mock.sentinel.server, mock.sentinel.disk_number, self._drv._DEFAULT_SHARE_PARTITION) @mock.patch.object(windows_drv.WindowsSMBDriver, '_ensure_disk_online_and_writable') @mock.patch.object(windows_drv.WindowsSMBDriver, '_get_disk_number') @mock.patch.object(windows_drv.WindowsSMBDriver, '_get_mount_path') @mock.patch.object(windows_drv.WindowsSMBDriver, '_is_device_mounted') def test_mount_device(self, mock_device_mounted, mock_get_mount_path, mock_get_disk_number, mock_ensure_disk): mock_get_mount_path.return_value = mock.sentinel.mount_path mock_get_disk_number.return_value = mock.sentinel.disk_number mock_device_mounted.return_value = False self._drv._mount_device(share=mock.sentinel.share, server_details=mock.sentinel.server, volume=mock.sentinel.vol) mock_device_mounted.assert_called_once_with( mock.sentinel.mount_path, mock.sentinel.server, mock.sentinel.vol) mock_get_disk_number.assert_called_once_with( mock.sentinel.server, mock.sentinel.vol) self._windows_utils.ensure_directory_exists.assert_called_once_with( mock.sentinel.server, mock.sentinel.mount_path) self._windows_utils.add_access_path( mock.sentinel.server, mock.sentinel.mount_path, mock.sentinel.disk_number, self._drv._DEFAULT_SHARE_PARTITION) mock_ensure_disk.assert_called_once_with( mock.sentinel.server, mock.sentinel.disk_number) @mock.patch.object(windows_drv.WindowsSMBDriver, '_get_mount_path') def test_unmount_device(self, mock_get_mount_path): mock_get_mount_path.return_value = mock.sentinel.mount_path mock_get_disk_number_by_path = ( self._windows_utils.get_disk_number_by_mount_path) self._drv._unmount_device(mock.sentinel.share, mock.sentinel.server) mock_get_mount_path.assert_called_once_with(mock.sentinel.share) mock_get_disk_number_by_path.assert_called_once_with( mock.sentinel.server, mock.sentinel.mount_path) self._windows_utils.set_disk_online_status.assert_called_once_with( mock.sentinel.server, mock_get_disk_number_by_path.return_value, online=False) @ddt.data(None, 1) @mock.patch.object(windows_drv.WindowsSMBDriver, '_get_disk_number') @mock.patch.object(windows_drv.WindowsSMBDriver, '_ensure_disk_online_and_writable') def test_resize_filesystem(self, new_size, mock_ensure_disk, mock_get_disk_number): mock_get_disk_number.return_value = mock.sentinel.disk_number mock_get_max_size = self._windows_utils.get_partition_maximum_size mock_get_max_size.return_value = mock.sentinel.max_size self._drv._resize_filesystem(mock.sentinel.server, mock.sentinel.vol, new_size=new_size) 
        mock_get_disk_number.assert_called_once_with(mock.sentinel.server,
                                                     mock.sentinel.vol)
        self._drv._ensure_disk_online_and_writable.assert_called_once_with(
            mock.sentinel.server, mock.sentinel.disk_number)
        if not new_size:
            mock_get_max_size.assert_called_once_with(
                mock.sentinel.server, mock.sentinel.disk_number,
                self._drv._DEFAULT_SHARE_PARTITION)
            expected_new_size = mock.sentinel.max_size
        else:
            expected_new_size = new_size << 30

        self._windows_utils.resize_partition.assert_called_once_with(
            mock.sentinel.server, expected_new_size,
            mock.sentinel.disk_number, self._drv._DEFAULT_SHARE_PARTITION)

    def test_ensure_disk_online_and_writable(self):
        self._drv._ensure_disk_online_and_writable(
            mock.sentinel.server, mock.sentinel.disk_number)

        self._windows_utils.update_disk.assert_called_once_with(
            mock.sentinel.server, mock.sentinel.disk_number)
        self._windows_utils.set_disk_online_status.assert_called_once_with(
            mock.sentinel.server, mock.sentinel.disk_number, online=True)
        self._windows_utils.set_disk_readonly_status.assert_called_once_with(
            mock.sentinel.server, mock.sentinel.disk_number, readonly=False)

    def test_get_mounted_share_size(self):
        fake_size_gb = 10
        self._windows_utils.get_disk_space_by_path.return_value = (
            fake_size_gb << 30, mock.sentinel.free_bytes)

        share_size = self._drv._get_mounted_share_size(
            mock.sentinel.mount_path, mock.sentinel.server)

        self.assertEqual(fake_size_gb, share_size)

    def test_get_consumed_space(self):
        fake_size_gb = 2
        fake_free_space_gb = 1
        self._windows_utils.get_disk_space_by_path.return_value = (
            fake_size_gb << 30, fake_free_space_gb << 30)

        consumed_space = self._drv._get_consumed_space(
            mock.sentinel.mount_path, mock.sentinel.server)

        self.assertEqual(fake_size_gb - fake_free_space_gb, consumed_space)

    def test_get_mount_path(self):
        fake_mount_path = 'fake_mount_path'
        fake_share_name = 'fake_share_name'
        mock_share = {'name': fake_share_name}
        self.flags(share_mount_path=fake_mount_path)

        mount_path = self._drv._get_mount_path(mock_share)

        self._windows_utils.normalize_path.assert_called_once_with(
            os.path.join(fake_mount_path, fake_share_name))
        self.assertEqual(self._windows_utils.normalize_path.return_value,
                         mount_path)

    @ddt.data(None, 2)
    def test_get_disk_number(self, disk_number_by_serial=None):
        mock_get_disk_number_by_serial = (
            self._windows_utils.get_disk_number_by_serial_number)
        mock_get_disk_number_by_serial.return_value = disk_number_by_serial
        mock_volume = {'id': mock.sentinel.vol_id,
                       'mountpoint': "/dev/sdb"}
        # If the disk number cannot be identified using the disk serial
        # number, we expect it to be retrieved based on the volume mountpoint,
        # having disk number 1 in this case.
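        # (With the ddt value None the serial-number lookup comes back empty
        # and the test expects the mountpoint-based fallback to yield 1,
        # i.e. the index implied by the trailing letter of "/dev/sdb"; with 2,
        # the serial-based value is used directly.)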
expected_disk_number = (disk_number_by_serial if disk_number_by_serial else 1) disk_number = self._drv._get_disk_number(mock.sentinel.server, mock_volume) mock_get_disk_number_by_serial.assert_called_once_with( mock.sentinel.server, mock.sentinel.vol_id) self.assertEqual(expected_disk_number, disk_number) @ddt.data(None, 2) def test_is_device_mounted(self, disk_number_by_path): mock_get_disk_number_by_path = ( self._windows_utils.get_disk_number_by_mount_path) mock_get_disk_number_by_path.return_value = disk_number_by_path expected_result = disk_number_by_path is not None is_mounted = self._drv._is_device_mounted( mount_path=mock.sentinel.mount_path, server_details=mock.sentinel.server) mock_get_disk_number_by_path.assert_called_once_with( mock.sentinel.server, mock.sentinel.mount_path) self.assertEqual(expected_result, is_mounted) manila-10.0.0/manila/tests/share/drivers/windows/test_windows_smb_helper.py0000664000175000017500000004070113656750227027252 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from unittest import mock import ddt from manila.common import constants from manila import exception from manila.share import configuration from manila.share.drivers.windows import windows_smb_helper from manila.share.drivers.windows import windows_utils from manila import test from oslo_config import cfg CONF = cfg.CONF CONF.import_opt('share_mount_path', 'manila.share.drivers.generic') @ddt.ddt class WindowsSMBHelperTestCase(test.TestCase): _FAKE_SERVER = {'public_address': mock.sentinel.public_address} _FAKE_SHARE_NAME = "fake_share_name" _FAKE_SHARE = "\\\\%s\\%s" % (_FAKE_SERVER['public_address'], _FAKE_SHARE_NAME) _FAKE_SHARE_LOCATION = os.path.join( configuration.Configuration(None).share_mount_path, _FAKE_SHARE_NAME) _FAKE_ACCOUNT_NAME = 'FakeDomain\\FakeUser' _FAKE_RW_ACC_RULE = { 'access_to': _FAKE_ACCOUNT_NAME, 'access_level': constants.ACCESS_LEVEL_RW, 'access_type': 'user', } def setUp(self): self._remote_exec = mock.Mock() fake_conf = configuration.Configuration(None) self._win_smb_helper = windows_smb_helper.WindowsSMBHelper( self._remote_exec, fake_conf) super(WindowsSMBHelperTestCase, self).setUp() def test_init_helper(self): self._win_smb_helper.init_helper(mock.sentinel.server) self._remote_exec.assert_called_once_with(mock.sentinel.server, "Get-SmbShare") @ddt.data(True, False) @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_share_exists') def test_create_exports(self, share_exists, mock_share_exists): mock_share_exists.return_value = share_exists result = self._win_smb_helper.create_exports(self._FAKE_SERVER, self._FAKE_SHARE_NAME) if not share_exists: cmd = ['New-SmbShare', '-Name', self._FAKE_SHARE_NAME, '-Path', self._win_smb_helper._windows_utils.normalize_path( self._FAKE_SHARE_LOCATION), '-ReadAccess', "*%s" % self._win_smb_helper._NULL_SID] self._remote_exec.assert_called_once_with(self._FAKE_SERVER, cmd) else: self.assertFalse(self._remote_exec.called) 
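        # (Whether or not New-SmbShare had to be invoked, the helper is
        # expected to report the share's UNC path as its export location,
        # which is what the check below covers.)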
expected_exports = [ { 'is_admin_only': False, 'metadata': {'export_location_metadata_example': 'example'}, 'path': self._FAKE_SHARE }, ] self.assertEqual(expected_exports, result) @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_share_exists') def test_remove_exports(self, mock_share_exists): mock_share_exists.return_value = True self._win_smb_helper.remove_exports(mock.sentinel.server, mock.sentinel.share_name) cmd = ['Remove-SmbShare', '-Name', mock.sentinel.share_name, "-Force"] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) @mock.patch.object(windows_utils.WindowsUtils, 'get_volume_path_by_mount_path') @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_get_share_path_by_name') def test_get_volume_path_by_share_name(self, mock_get_share_path, mock_get_vol_path): mock_get_share_path.return_value = self._FAKE_SHARE_LOCATION volume_path = self._win_smb_helper._get_volume_path_by_share_name( mock.sentinel.server, self._FAKE_SHARE_NAME) mock_get_share_path.assert_called_once_with(mock.sentinel.server, self._FAKE_SHARE_NAME) mock_get_vol_path.assert_called_once_with(mock.sentinel.server, self._FAKE_SHARE_LOCATION) self.assertEqual(mock_get_vol_path.return_value, volume_path) @ddt.data({'raw_out': '', 'expected': []}, {'raw_out': '{"key": "val"}', 'expected': [{"key": "val"}]}, {'raw_out': '[{"key": "val"}, {"key2": "val2"}]', 'expected': [{"key": "val"}, {"key2": "val2"}]}) @ddt.unpack def test_get_acls_helper(self, raw_out, expected): self._remote_exec.return_value = (raw_out, mock.sentinel.err) rules = self._win_smb_helper._get_acls(mock.sentinel.server, self._FAKE_SHARE_NAME) self.assertEqual(expected, rules) expected_cmd = ( 'Get-SmbShareAccess -Name %s | ' 'Select-Object @("Name", "AccountName", ' '"AccessControlType", "AccessRight") | ' 'ConvertTo-JSON -Compress') % self._FAKE_SHARE_NAME self._remote_exec.assert_called_once_with(mock.sentinel.server, expected_cmd) @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_get_acls') def test_get_access_rules(self, mock_get_acls): helper = self._win_smb_helper valid_acl = { 'AccountName': self._FAKE_ACCOUNT_NAME, 'AccessRight': helper._WIN_ACCESS_RIGHT_FULL, 'AccessControlType': helper._WIN_ACL_ALLOW, } valid_acls = [valid_acl, dict(valid_acl, AccessRight=helper._WIN_ACCESS_RIGHT_CHANGE), dict(valid_acl, AccessRight=helper._WIN_ACCESS_RIGHT_READ)] # Those are rules that were not added by us and are expected to # be ignored. When encountering such a rule, a warning message # will be logged. ignored_acls = [ dict(valid_acl, AccessRight=helper._WIN_ACCESS_RIGHT_CUSTOM), dict(valid_acl, AccessControlType=helper._WIN_ACL_DENY)] mock_get_acls.return_value = valid_acls + ignored_acls # There won't be multiple access rules for the same account, # but we'll ignore this fact for the sake of this test. 
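        # (Full and Change ACLs both surface as 'rw' access rules, while Read
        # maps to 'ro', so the three valid ACLs above are expected to yield
        # the three rules below.)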
expected_rules = [self._FAKE_RW_ACC_RULE, self._FAKE_RW_ACC_RULE, dict(self._FAKE_RW_ACC_RULE, access_level=constants.ACCESS_LEVEL_RO)] rules = helper.get_access_rules(mock.sentinel.server, mock.sentinel.share_name) self.assertEqual(expected_rules, rules) mock_get_acls.assert_called_once_with(mock.sentinel.server, mock.sentinel.share_name) @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_refresh_acl') def test_grant_share_access(self, mock_refresh_acl): self._win_smb_helper._grant_share_access(mock.sentinel.server, mock.sentinel.share_name, constants.ACCESS_LEVEL_RW, mock.sentinel.username) cmd = ["Grant-SmbShareAccess", "-Name", mock.sentinel.share_name, "-AccessRight", "Change", "-AccountName", "'%s'" % mock.sentinel.username, "-Force"] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) mock_refresh_acl.assert_called_once_with(mock.sentinel.server, mock.sentinel.share_name) def test_refresh_acl(self): self._win_smb_helper._refresh_acl(mock.sentinel.server, mock.sentinel.share_name) cmd = ['Set-SmbPathAcl', '-ShareName', mock.sentinel.share_name] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_refresh_acl') def test_revoke_share_access(self, mock_refresh_acl): self._win_smb_helper._revoke_share_access(mock.sentinel.server, mock.sentinel.share_name, mock.sentinel.username) cmd = ["Revoke-SmbShareAccess", "-Name", mock.sentinel.share_name, "-AccountName", '"%s"' % mock.sentinel.username, "-Force"] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) mock_refresh_acl.assert_called_once_with(mock.sentinel.server, mock.sentinel.share_name) def test_update_access_invalid_type(self): invalid_access_rule = dict(self._FAKE_RW_ACC_RULE, access_type='ip') self.assertRaises( exception.InvalidShareAccess, self._win_smb_helper.update_access, mock.sentinel.server, mock.sentinel.share_name, [invalid_access_rule], [], []) def test_update_access_invalid_level(self): invalid_access_rule = dict(self._FAKE_RW_ACC_RULE, access_level='fake_level') self.assertRaises( exception.InvalidShareAccessLevel, self._win_smb_helper.update_access, mock.sentinel.server, mock.sentinel.share_name, [], [invalid_access_rule], []) @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_revoke_share_access') def test_update_access_deleting_invalid_rule(self, mock_revoke): # We want to make sure that we allow deleting invalid rules. 
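        # (Only the valid rule's removal is asserted on below; the point is
        # that the malformed rule must not trigger a validation error when it
        # appears on the delete path.)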
invalid_access_rule = dict(self._FAKE_RW_ACC_RULE, access_level='fake_level') delete_rules = [invalid_access_rule, self._FAKE_RW_ACC_RULE] self._win_smb_helper.update_access( mock.sentinel.server, mock.sentinel.share_name, [], [], delete_rules) mock_revoke.assert_called_once_with( mock.sentinel.server, mock.sentinel.share_name, self._FAKE_RW_ACC_RULE['access_to']) @mock.patch.object(windows_smb_helper.WindowsSMBHelper, 'validate_access_rules') @mock.patch.object(windows_smb_helper.WindowsSMBHelper, 'get_access_rules') @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_grant_share_access') @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_revoke_share_access') def test_update_access(self, mock_revoke, mock_grant, mock_get_access_rules, mock_validate): added_rules = [mock.MagicMock(), mock.MagicMock()] deleted_rules = [mock.MagicMock(), mock.MagicMock()] self._win_smb_helper.update_access( mock.sentinel.server, mock.sentinel.share_name, [], added_rules, deleted_rules) mock_revoke.assert_has_calls( [mock.call(mock.sentinel.server, mock.sentinel.share_name, deleted_rule['access_to']) for deleted_rule in deleted_rules]) mock_grant.assert_has_calls( [mock.call(mock.sentinel.server, mock.sentinel.share_name, added_rule['access_level'], added_rule['access_to']) for added_rule in added_rules]) @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_get_rule_updates') @mock.patch.object(windows_smb_helper.WindowsSMBHelper, 'validate_access_rules') @mock.patch.object(windows_smb_helper.WindowsSMBHelper, 'get_access_rules') @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_grant_share_access') @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_revoke_share_access') def test_update_access_maintenance( self, mock_revoke, mock_grant, mock_get_access_rules, mock_validate, mock_get_rule_updates): all_rules = mock.MagicMock() added_rules = [mock.MagicMock(), mock.MagicMock()] deleted_rules = [mock.MagicMock(), mock.MagicMock()] mock_get_rule_updates.return_value = [ added_rules, deleted_rules] self._win_smb_helper.update_access( mock.sentinel.server, mock.sentinel.share_name, all_rules, [], []) mock_get_access_rules.assert_called_once_with( mock.sentinel.server, mock.sentinel.share_name) mock_get_rule_updates.assert_called_once_with( existing_rules=mock_get_access_rules.return_value, requested_rules=all_rules) mock_revoke.assert_has_calls( [mock.call(mock.sentinel.server, mock.sentinel.share_name, deleted_rule['access_to']) for deleted_rule in deleted_rules]) mock_grant.assert_has_calls( [mock.call(mock.sentinel.server, mock.sentinel.share_name, added_rule['access_level'], added_rule['access_to']) for added_rule in added_rules]) def test_get_rule_updates(self): req_rule_0 = self._FAKE_RW_ACC_RULE req_rule_1 = dict(self._FAKE_RW_ACC_RULE, access_to='fake_acc') curr_rule_0 = dict(self._FAKE_RW_ACC_RULE, access_to=self._FAKE_RW_ACC_RULE[ 'access_to'].upper()) curr_rule_1 = dict(self._FAKE_RW_ACC_RULE, access_to='fake_acc2') curr_rule_2 = dict(req_rule_1, access_level=constants.ACCESS_LEVEL_RO) expected_added_rules = [req_rule_1] expected_deleted_rules = [curr_rule_1, curr_rule_2] existing_rules = [curr_rule_0, curr_rule_1, curr_rule_2] requested_rules = [req_rule_0, req_rule_1] (added_rules, deleted_rules) = self._win_smb_helper._get_rule_updates( existing_rules, requested_rules) self.assertEqual(expected_added_rules, added_rules) self.assertEqual(expected_deleted_rules, deleted_rules) def test_get_share_name(self): result = self._win_smb_helper._get_share_name(self._FAKE_SHARE) 
self.assertEqual(self._FAKE_SHARE_NAME, result) def test_get_share_path_by_name(self): self._remote_exec.return_value = (self._FAKE_SHARE_LOCATION, mock.sentinel.std_err) result = self._win_smb_helper._get_share_path_by_name( mock.sentinel.server, mock.sentinel.share_name) cmd = ('Get-SmbShare -Name %s | ' 'Select-Object -ExpandProperty Path' % mock.sentinel.share_name) self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd, check_exit_code=True) self.assertEqual(self._FAKE_SHARE_LOCATION, result) @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_get_share_path_by_name') def test_get_share_path_by_export_location(self, mock_get_share_path_by_name): mock_get_share_path_by_name.return_value = mock.sentinel.share_path result = self._win_smb_helper.get_share_path_by_export_location( mock.sentinel.server, self._FAKE_SHARE) mock_get_share_path_by_name.assert_called_once_with( mock.sentinel.server, self._FAKE_SHARE_NAME) self.assertEqual(mock.sentinel.share_path, result) @mock.patch.object(windows_smb_helper.WindowsSMBHelper, '_get_share_path_by_name') def test_share_exists(self, mock_get_share_path_by_name): result = self._win_smb_helper._share_exists(mock.sentinel.server, mock.sentinel.share_name) mock_get_share_path_by_name.assert_called_once_with( mock.sentinel.server, mock.sentinel.share_name, ignore_missing=True) self.assertTrue(result) manila-10.0.0/manila/tests/share/drivers/windows/test_service_instance.py0000664000175000017500000004006413656750227026706 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
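# Illustrative sketch, not part of manila: the tests in this module lean
# heavily on ddt's dict-based parameterization. Dict entries passed to
# @ddt.data are unpacked into keyword arguments by @ddt.unpack, and any key
# that is omitted falls back to the test method's default value, so a case
# like {'use_cert_auth': False} flips a single knob per scenario. The class
# and names below are invented just to show the pattern in isolation.
import unittest

import ddt


@ddt.ddt
class DdtDictParamExample(unittest.TestCase):
    @ddt.data({}, {'flag': False})
    @ddt.unpack
    def test_flag(self, flag=True):
        # {} keeps the default (True); {'flag': False} overrides it.
        self.assertIn(flag, (True, False))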
import os from unittest import mock import ddt from oslo_concurrency import processutils from oslo_config import cfg import six from manila import exception from manila.share import configuration from manila.share.drivers import service_instance as generic_service_instance from manila.share.drivers.windows import service_instance from manila.share.drivers.windows import windows_utils from manila import test CONF = cfg.CONF CONF.import_opt('driver_handles_share_servers', 'manila.share.driver') CONF.register_opts(generic_service_instance.common_opts) serv_mgr_cls = service_instance.WindowsServiceInstanceManager generic_serv_mgr_cls = generic_service_instance.ServiceInstanceManager @ddt.ddt class WindowsServiceInstanceManagerTestCase(test.TestCase): _FAKE_SERVER = {'ip': mock.sentinel.ip, 'instance_id': mock.sentinel.instance_id} @mock.patch.object(windows_utils, 'WindowsUtils') @mock.patch.object(serv_mgr_cls, '_check_auth_mode') def setUp(self, mock_check_auth, mock_utils_cls): self.flags(service_instance_user=mock.sentinel.username) self._remote_execute = mock.Mock() fake_conf = configuration.Configuration(None) self._mgr = serv_mgr_cls(remote_execute=self._remote_execute, driver_config=fake_conf) self._windows_utils = mock_utils_cls.return_value super(WindowsServiceInstanceManagerTestCase, self).setUp() @ddt.data({}, {'use_cert_auth': False}, {'use_cert_auth': False, 'valid_pass_complexity': False}, {'certs_exist': False}) @mock.patch('os.path.exists') @mock.patch.object(serv_mgr_cls, '_check_password_complexity') @ddt.unpack def test_check_auth_mode(self, mock_check_complexity, mock_path_exists, use_cert_auth=True, certs_exist=True, valid_pass_complexity=True): self.flags(service_instance_password=mock.sentinel.password) self._mgr._cert_pem_path = mock.sentinel.cert_path self._mgr._cert_key_pem_path = mock.sentinel.key_path mock_path_exists.return_value = certs_exist mock_check_complexity.return_value = valid_pass_complexity self._mgr._use_cert_auth = use_cert_auth invalid_auth = ((use_cert_auth and not certs_exist) or not valid_pass_complexity) if invalid_auth: self.assertRaises(exception.ServiceInstanceException, self._mgr._check_auth_mode) else: self._mgr._check_auth_mode() if not use_cert_auth: mock_check_complexity.assert_called_once_with( six.text_type(mock.sentinel.password)) @ddt.data(False, True) def test_get_auth_info(self, use_cert_auth): self._mgr._use_cert_auth = use_cert_auth self._mgr._cert_pem_path = mock.sentinel.cert_path self._mgr._cert_key_pem_path = mock.sentinel.key_path auth_info = self._mgr._get_auth_info() expected_auth_info = {'use_cert_auth': use_cert_auth} if use_cert_auth: expected_auth_info.update(cert_pem_path=mock.sentinel.cert_path, cert_key_pem_path=mock.sentinel.key_path) self.assertEqual(expected_auth_info, auth_info) @mock.patch.object(serv_mgr_cls, '_get_auth_info') @mock.patch.object(generic_serv_mgr_cls, 'get_common_server') def test_common_server(self, mock_generic_get_server, mock_get_auth): mock_server_details = {'backend_details': {}} mock_auth_info = {'fake_auth_info': mock.sentinel.auth_info} mock_generic_get_server.return_value = mock_server_details mock_get_auth.return_value = mock_auth_info expected_server_details = dict(backend_details=mock_auth_info) server_details = self._mgr.get_common_server() mock_generic_get_server.assert_called_once_with() self.assertEqual(expected_server_details, server_details) @mock.patch.object(serv_mgr_cls, '_get_auth_info') @mock.patch.object(generic_serv_mgr_cls, '_get_new_instance_details') def 
test_get_new_instance_details(self, mock_generic_get_details, mock_get_auth): mock_server_details = {'fake_server_details': mock.sentinel.server_details} mock_generic_get_details.return_value = mock_server_details mock_auth_info = {'fake_auth_info': mock.sentinel.auth_info} mock_get_auth.return_value = mock_auth_info expected_server_details = dict(mock_server_details, **mock_auth_info) instance_details = self._mgr._get_new_instance_details( server=mock.sentinel.server) mock_generic_get_details.assert_called_once_with(mock.sentinel.server) self.assertEqual(expected_server_details, instance_details) @ddt.data(('abAB01', True), ('abcdef', False), ('aA0', False)) @ddt.unpack def test_check_password_complexity(self, password, expected_result): valid_complexity = self._mgr._check_password_complexity( password) self.assertEqual(expected_result, valid_complexity) @ddt.data(None, Exception) def test_server_connection(self, side_effect): self._remote_execute.side_effect = side_effect expected_result = side_effect is None is_available = self._mgr._test_server_connection(self._FAKE_SERVER) self.assertEqual(expected_result, is_available) self._remote_execute.assert_called_once_with(self._FAKE_SERVER, "whoami", retry=False) @ddt.data(False, True) def test_get_service_instance_create_kwargs(self, use_cert_auth): self._mgr._use_cert_auth = use_cert_auth self.flags(service_instance_password=mock.sentinel.admin_pass) if use_cert_auth: mock_cert_data = 'mock_cert_data' self.mock_object(service_instance, 'open', mock.mock_open( read_data=mock_cert_data)) expected_kwargs = dict(user_data=mock_cert_data) else: expected_kwargs = dict( meta=dict(admin_pass=six.text_type(mock.sentinel.admin_pass))) create_kwargs = self._mgr._get_service_instance_create_kwargs() self.assertEqual(expected_kwargs, create_kwargs) @mock.patch.object(generic_serv_mgr_cls, 'set_up_service_instance') @mock.patch.object(serv_mgr_cls, 'get_valid_security_service') @mock.patch.object(serv_mgr_cls, '_setup_security_service') def test_set_up_service_instance(self, mock_setup_security_service, mock_get_valid_security_service, mock_generic_setup_serv_inst): mock_service_instance = {'instance_details': None} mock_network_info = {'security_services': mock.sentinel.security_services} mock_generic_setup_serv_inst.return_value = mock_service_instance mock_get_valid_security_service.return_value = ( mock.sentinel.security_service) instance_details = self._mgr.set_up_service_instance( mock.sentinel.context, mock_network_info) mock_generic_setup_serv_inst.assert_called_once_with( mock.sentinel.context, mock_network_info) mock_get_valid_security_service.assert_called_once_with( mock.sentinel.security_services) mock_setup_security_service.assert_called_once_with( mock_service_instance, mock.sentinel.security_service) expected_instance_details = dict(mock_service_instance, joined_domain=True) self.assertEqual(expected_instance_details, instance_details) @mock.patch.object(serv_mgr_cls, '_run_cloudbase_init_plugin_after_reboot') @mock.patch.object(serv_mgr_cls, '_join_domain') def test_setup_security_service(self, mock_join_domain, mock_run_cbsinit_plugin): utils = self._windows_utils mock_security_service = {'domain': mock.sentinel.domain, 'user': mock.sentinel.admin_username, 'password': mock.sentinel.admin_password, 'dns_ip': mock.sentinel.dns_ip} utils.get_interface_index_by_ip.return_value = ( mock.sentinel.interface_index) self._mgr._setup_security_service(self._FAKE_SERVER, mock_security_service) 
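        # (The helper is expected to set the DNS search list and DNS server
        # addresses on the service instance, schedule the Cloudbase-Init
        # WinRM plugin to run again after reboot, and finally join the
        # configured domain; the assertions below pin each of those down.)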
utils.set_dns_client_search_list.assert_called_once_with( self._FAKE_SERVER, [mock_security_service['domain']]) utils.get_interface_index_by_ip.assert_called_once_with( self._FAKE_SERVER, self._FAKE_SERVER['ip']) utils.set_dns_client_server_addresses.assert_called_once_with( self._FAKE_SERVER, mock.sentinel.interface_index, [mock_security_service['dns_ip']]) mock_run_cbsinit_plugin.assert_called_once_with( self._FAKE_SERVER, plugin_name=self._mgr._CBS_INIT_WINRM_PLUGIN) mock_join_domain.assert_called_once_with( self._FAKE_SERVER, mock.sentinel.domain, mock.sentinel.admin_username, mock.sentinel.admin_password) @ddt.data({'join_domain_side_eff': Exception}, {'server_available': False, 'expected_exception': exception.ServiceInstanceException}, {'join_domain_side_eff': processutils.ProcessExecutionError, 'expected_exception': processutils.ProcessExecutionError}, {'domain_mismatch': True, 'expected_exception': exception.ServiceInstanceException}) @mock.patch.object(generic_serv_mgr_cls, 'reboot_server') @mock.patch.object(generic_serv_mgr_cls, 'wait_for_instance_to_be_active') @mock.patch.object(generic_serv_mgr_cls, '_check_server_availability') @ddt.unpack def test_join_domain(self, mock_check_avail, mock_wait_instance_active, mock_reboot_server, expected_exception=None, server_available=True, domain_mismatch=False, join_domain_side_eff=None): self._windows_utils.join_domain.side_effect = join_domain_side_eff mock_check_avail.return_value = server_available self._windows_utils.get_current_domain.return_value = ( None if domain_mismatch else mock.sentinel.domain) domain_params = (mock.sentinel.domain, mock.sentinel.admin_username, mock.sentinel.admin_password) if expected_exception: self.assertRaises(expected_exception, self._mgr._join_domain, self._FAKE_SERVER, *domain_params) else: self._mgr._join_domain(self._FAKE_SERVER, *domain_params) if join_domain_side_eff != processutils.ProcessExecutionError: mock_reboot_server.assert_called_once_with( self._FAKE_SERVER, soft_reboot=True) mock_wait_instance_active.assert_called_once_with( self._FAKE_SERVER['instance_id'], timeout=self._mgr.max_time_to_build_instance) mock_check_avail.assert_called_once_with(self._FAKE_SERVER) if server_available: self._windows_utils.get_current_domain.assert_called_once_with( self._FAKE_SERVER) self._windows_utils.join_domain.assert_called_once_with( self._FAKE_SERVER, *domain_params) @ddt.data([], [{'type': 'active_directory'}], [{'type': 'active_directory'}] * 2, [{'type': mock.sentinel.invalid_type}]) def test_get_valid_security_service(self, security_services): valid_security_service = self._mgr.get_valid_security_service( security_services) if (security_services and len(security_services) == 1 and security_services[0]['type'] == 'active_directory'): expected_valid_sec_service = security_services[0] else: expected_valid_sec_service = None self.assertEqual(expected_valid_sec_service, valid_security_service) @mock.patch.object(serv_mgr_cls, '_get_cbs_init_reg_section') def test_run_cloudbase_init_plugin_after_reboot(self, mock_get_cbs_init_reg): self._FAKE_SERVER = {'instance_id': mock.sentinel.instance_id} mock_get_cbs_init_reg.return_value = mock.sentinel.cbs_init_reg_sect expected_plugin_key_path = "%(cbs_init)s\\%(instance_id)s\\Plugins" % { 'cbs_init': mock.sentinel.cbs_init_reg_sect, 'instance_id': self._FAKE_SERVER['instance_id']} self._mgr._run_cloudbase_init_plugin_after_reboot( server=self._FAKE_SERVER, plugin_name=mock.sentinel.plugin_name) 
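        # (Re-running the plugin is driven purely through the registry: the
        # per-instance Plugins key is rewritten so that the named plugin is
        # executed again after the next reboot, which the set_win_reg_value
        # assertion below verifies.)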
mock_get_cbs_init_reg.assert_called_once_with(self._FAKE_SERVER) self._windows_utils.set_win_reg_value.assert_called_once_with( self._FAKE_SERVER, path=expected_plugin_key_path, key=mock.sentinel.plugin_name, value=self._mgr._CBS_INIT_RUN_PLUGIN_AFTER_REBOOT) @ddt.data( {}, {'exec_errors': [ processutils.ProcessExecutionError(stderr='Cannot find path'), processutils.ProcessExecutionError(stderr='Cannot find path')], 'expected_exception': exception.ServiceInstanceException}, {'exec_errors': [processutils.ProcessExecutionError(stderr='')], 'expected_exception': processutils.ProcessExecutionError}, {'exec_errors': [ processutils.ProcessExecutionError(stderr='Cannot find path'), None]} ) @ddt.unpack def test_get_cbs_init_reg_section(self, exec_errors=None, expected_exception=None): self._windows_utils.normalize_path.return_value = ( mock.sentinel.normalized_section_path) self._windows_utils.get_win_reg_value.side_effect = exec_errors if expected_exception: self.assertRaises(expected_exception, self._mgr._get_cbs_init_reg_section, mock.sentinel.server) else: cbs_init_section = self._mgr._get_cbs_init_reg_section( mock.sentinel.server) self.assertEqual(mock.sentinel.normalized_section_path, cbs_init_section) base_path = 'hklm:\\SOFTWARE' cbs_section = 'Cloudbase Solutions\\Cloudbase-Init' tested_upper_sections = [''] if exec_errors and 'Cannot find path' in exec_errors[0].stderr: tested_upper_sections.append('Wow6432Node') tested_sections = [os.path.join(base_path, upper_section, cbs_section) for upper_section in tested_upper_sections] self._windows_utils.normalize_path.assert_has_calls( [mock.call(tested_section) for tested_section in tested_sections]) manila-10.0.0/manila/tests/share/drivers/windows/test_winrm_helper.py0000664000175000017500000002641013656750227026054 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
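# Illustrative sketch, not part of manila: the _parse_command tests below
# check that the command string is joined, base64-encoded as UTF-16-LE and
# handed to powershell.exe via -EncodedCommand. A standalone approximation
# of that transformation (the function name is invented here) looks like:
import base64


def _encoded_ps_command(command):
    # PowerShell expects -EncodedCommand payloads to be base64 of UTF-16-LE.
    encoded = base64.b64encode(command.encode("utf_16_le")).decode()
    return ("powershell.exe -ExecutionPolicy RemoteSigned "
            "-NonInteractive -EncodedCommand %s" % encoded)

# e.g. _encoded_ps_command('Get-Disk -Number 1') yields a command line that
# can be run over WinRM without any further quoting or escaping.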
from unittest import mock import ddt from oslo_concurrency import processutils from oslo_utils import importutils from oslo_utils import strutils from manila import exception from manila.share.drivers.windows import winrm_helper from manila import test @ddt.ddt class WinRMHelperTestCase(test.TestCase): _FAKE_SERVER = {'ip': mock.sentinel.ip} @mock.patch.object(importutils, 'import_module') def setUp(self, mock_import_module): self._winrm = winrm_helper.WinRMHelper() super(WinRMHelperTestCase, self).setUp() @ddt.data({'import_exc': None}, {'import_exc': ImportError}) @mock.patch.object(importutils, 'import_module') @ddt.unpack def test_setup_winrm(self, mock_import_module, import_exc): winrm_helper.winrm = None mock_import_module.side_effect = import_exc if import_exc: self.assertRaises(exception.ShareBackendException, winrm_helper.setup_winrm) else: winrm_helper.setup_winrm() self.assertEqual(mock_import_module.return_value, winrm_helper.winrm) mock_import_module.assert_called_once_with('winrm') @mock.patch.object(winrm_helper.WinRMHelper, '_get_auth') @mock.patch.object(winrm_helper, 'WinRMConnection') def test_get_conn(self, mock_conn_cls, mock_get_auth): mock_auth = {'mock_auth_key': mock.sentinel.auth_opt} mock_get_auth.return_value = mock_auth conn = self._winrm._get_conn(self._FAKE_SERVER) mock_get_auth.assert_called_once_with(self._FAKE_SERVER) mock_conn_cls.assert_called_once_with( ip=self._FAKE_SERVER['ip'], conn_timeout=self._winrm._config.winrm_conn_timeout, operation_timeout=self._winrm._config.winrm_operation_timeout, **mock_auth) self.assertEqual(mock_conn_cls.return_value, conn) @ddt.data({}, {'exit_code': 1}, {'exit_code': 1, 'check_exit_code': False}) @mock.patch.object(strutils, 'mask_password') @mock.patch.object(winrm_helper.WinRMHelper, '_parse_command') @mock.patch.object(winrm_helper.WinRMHelper, '_get_conn') @ddt.unpack def test_execute(self, mock_get_conn, mock_parse_command, mock_mask_password, check_exit_code=True, exit_code=0): mock_parse_command.return_value = (mock.sentinel.parsed_cmd, mock.sentinel.sanitized_cmd) mock_conn = mock_get_conn.return_value mock_conn.execute.return_value = (mock.sentinel.stdout, mock.sentinel.stderr, exit_code) if exit_code == 0 or not check_exit_code: result = self._winrm.execute(mock.sentinel.server, mock.sentinel.command, check_exit_code=check_exit_code, retry=False) expected_result = (mock.sentinel.stdout, mock.sentinel.stderr) self.assertEqual(expected_result, result) else: self.assertRaises(processutils.ProcessExecutionError, self._winrm.execute, mock.sentinel.server, mock.sentinel.command, check_exit_code=check_exit_code, retry=False) mock_get_conn.assert_called_once_with(mock.sentinel.server) mock_parse_command.assert_called_once_with(mock.sentinel.command) mock_conn.execute.assert_called_once_with(mock.sentinel.parsed_cmd) mock_mask_password.assert_has_calls([mock.call(mock.sentinel.stdout), mock.call(mock.sentinel.stderr)]) @mock.patch('base64.b64encode') @mock.patch.object(strutils, 'mask_password') def test_parse_command(self, mock_mask_password, mock_base64): mock_mask_password.return_value = mock.sentinel.sanitized_cmd mock_base64.return_value = mock.sentinel.encoded_string cmd = ('Get-Disk', '-Number', 1) result = self._winrm._parse_command(cmd) joined_cmd = 'Get-Disk -Number 1' expected_command = ("powershell.exe -ExecutionPolicy RemoteSigned " "-NonInteractive -EncodedCommand %s" % mock.sentinel.encoded_string) expected_result = expected_command, mock.sentinel.sanitized_cmd 
mock_mask_password.assert_called_once_with(joined_cmd) mock_base64.assert_called_once_with(joined_cmd.encode("utf_16_le")) self.assertEqual(expected_result, result) def _test_get_auth(self, use_cert_auth=False): mock_server = {'use_cert_auth': use_cert_auth, 'cert_pem_path': mock.sentinel.pem_path, 'cert_key_pem_path': mock.sentinel.key_path, 'username': mock.sentinel.username, 'password': mock.sentinel.password} result = self._winrm._get_auth(mock_server) expected_result = {'username': mock_server['username']} if use_cert_auth: expected_result['cert_pem_path'] = mock_server['cert_pem_path'] expected_result['cert_key_pem_path'] = ( mock_server['cert_key_pem_path']) else: expected_result['password'] = mock_server['password'] self.assertEqual(expected_result, result) def test_get_auth_using_certificates(self): self._test_get_auth(use_cert_auth=True) def test_get_auth_using_password(self): self._test_get_auth() class WinRMConnectionTestCase(test.TestCase): @mock.patch.object(winrm_helper, 'setup_winrm') @mock.patch.object(winrm_helper, 'winrm') @mock.patch.object(winrm_helper.WinRMConnection, '_get_url') @mock.patch.object(winrm_helper.WinRMConnection, '_get_default_port') def setUp(self, mock_get_port, mock_get_url, mock_winrm, mock_setup_winrm): self._winrm = winrm_helper.WinRMConnection() self._mock_conn = mock_winrm.protocol.Protocol.return_value super(WinRMConnectionTestCase, self).setUp() @mock.patch.object(winrm_helper, 'setup_winrm') @mock.patch.object(winrm_helper, 'winrm') @mock.patch.object(winrm_helper.WinRMConnection, '_get_url') @mock.patch.object(winrm_helper.WinRMConnection, '_get_default_port') def test_init_conn(self, mock_get_port, mock_get_url, mock_winrm, mock_setup_winrm): # certificates are passed so we expect cert auth to be used cert_auth = True winrm_conn = winrm_helper.WinRMConnection( ip=mock.sentinel.ip, username=mock.sentinel.username, password=mock.sentinel.password, cert_pem_path=mock.sentinel.cert_pem_path, cert_key_pem_path=mock.sentinel.cert_key_pem_path, operation_timeout=mock.sentinel.operation_timeout, conn_timeout=mock.sentinel.conn_timeout) mock_get_port.assert_called_once_with(cert_auth) mock_get_url.assert_called_once_with(mock.sentinel.ip, mock_get_port.return_value, cert_auth) mock_winrm.protocol.Protocol.assert_called_once_with( endpoint=mock_get_url.return_value, transport=winrm_helper.TRANSPORT_SSL, username=mock.sentinel.username, password=mock.sentinel.password, cert_pem=mock.sentinel.cert_pem_path, cert_key_pem=mock.sentinel.cert_key_pem_path) self.assertEqual(mock_winrm.protocol.Protocol.return_value, winrm_conn._conn) self.assertEqual(mock.sentinel.conn_timeout, winrm_conn._conn.transport.timeout) winrm_conn._conn.set_timeout.assert_called_once_with( mock.sentinel.operation_timeout) def test_get_default_port_https(self): port = self._winrm._get_default_port(use_ssl=True) self.assertEqual(winrm_helper.DEFAULT_PORT_HTTPS, port) def test_get_default_port_http(self): port = self._winrm._get_default_port(use_ssl=False) self.assertEqual(winrm_helper.DEFAULT_PORT_HTTP, port) def _test_get_url(self, ip=None, use_ssl=True): if not ip: self.assertRaises(exception.ShareBackendException, self._winrm._get_url, ip=ip, port=mock.sentinel.port, use_ssl=use_ssl) else: url = self._winrm._get_url(ip=ip, port=mock.sentinel.port, use_ssl=use_ssl) expected_protocol = 'https' if use_ssl else 'http' expected_url = self._winrm._URL_TEMPLATE % dict( protocol=expected_protocol, port=mock.sentinel.port, ip=ip) self.assertEqual(expected_url, url) def 
test_get_url_using_ssl(self): self._test_get_url(ip=mock.sentinel.ip) def test_get_url_using_plaintext(self): self._test_get_url(ip=mock.sentinel.ip, use_ssl=False) def test_get_url_missing_ip(self): self._test_get_url() def _test_execute(self, get_output_exception=None): self._mock_conn.open_shell.return_value = mock.sentinel.shell_id self._mock_conn.run_command.return_value = mock.sentinel.cmd_id command_output = (mock.sentinel.stdout, mock.sentinel.stderr, mock.sentinel.exit_code) if get_output_exception: self._mock_conn.get_command_output.side_effect = ( get_output_exception) self.assertRaises( get_output_exception, self._winrm.execute, mock.sentinel.cmd) else: self._mock_conn.get_command_output.return_value = command_output result = self._winrm.execute(mock.sentinel.cmd) self.assertEqual(command_output, result) self._mock_conn.open_shell.assert_called_once_with() self._mock_conn.run_command.assert_called_once_with( mock.sentinel.shell_id, mock.sentinel.cmd) self._mock_conn.cleanup_command.assert_called_once_with( mock.sentinel.shell_id, mock.sentinel.cmd_id) self._mock_conn.close_shell.assert_called_once_with( mock.sentinel.shell_id) def test_execute(self): self._test_execute() def test_execute_exception(self): self._test_execute(get_output_exception=Exception) manila-10.0.0/manila/tests/share/drivers/windows/test_windows_utils.py0000664000175000017500000003775613656750227026312 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
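# Illustrative sketch, not part of manila: every test below injects a
# mock.Mock() as the remote-execution callable, feeds mock.sentinel values
# through WindowsUtils and then pins down the exact PowerShell command that
# was produced. Reduced to stdlib pieces only (names invented here):
from unittest import mock


def _assertion_pattern_example():
    remote_exec = mock.Mock()
    # The code under test would build a command and invoke the callable:
    remote_exec(mock.sentinel.server, ["Get-Disk"])
    # ... and the test asserts on both the target server and the command:
    remote_exec.assert_called_once_with(mock.sentinel.server, ["Get-Disk"])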
from unittest import mock import ddt from manila.share.drivers.windows import windows_utils from manila import test @ddt.ddt class WindowsUtilsTestCase(test.TestCase): def setUp(self): self._remote_exec = mock.Mock() self._windows_utils = windows_utils.WindowsUtils(self._remote_exec) super(WindowsUtilsTestCase, self).setUp() def test_initialize_disk(self): self._windows_utils.initialize_disk(mock.sentinel.server, mock.sentinel.disk_number) cmd = ["Initialize-Disk", "-Number", mock.sentinel.disk_number] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) def test_create_partition(self): self._windows_utils.create_partition(mock.sentinel.server, mock.sentinel.disk_number) cmd = ["New-Partition", "-DiskNumber", mock.sentinel.disk_number, "-UseMaximumSize"] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) def test_format_partition(self): self._windows_utils.format_partition(mock.sentinel.server, mock.sentinel.disk_number, mock.sentinel.partition_number) cmd = ("Get-Partition -DiskNumber %(disk_number)s " "-PartitionNumber %(partition_number)s | " "Format-Volume -FileSystem NTFS -Force -Confirm:$false" % { 'disk_number': mock.sentinel.disk_number, 'partition_number': mock.sentinel.partition_number, }) self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) def test_add_access_path(self): self._windows_utils.add_access_path(mock.sentinel.server, mock.sentinel.mount_path, mock.sentinel.disk_number, mock.sentinel.partition_number) cmd = ["Add-PartitionAccessPath", "-DiskNumber", mock.sentinel.disk_number, "-PartitionNumber", mock.sentinel.partition_number, "-AccessPath", self._windows_utils.quote_string( mock.sentinel.mount_path) ] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) def test_resize_partition(self): self._windows_utils.resize_partition(mock.sentinel.server, mock.sentinel.size_bytes, mock.sentinel.disk_number, mock.sentinel.partition_number) cmd = ['Resize-Partition', '-DiskNumber', mock.sentinel.disk_number, '-PartitionNumber', mock.sentinel.partition_number, '-Size', mock.sentinel.size_bytes] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) @ddt.data("1", "") def test_get_disk_number_by_serial_number(self, disk_number): mock_serial_number = "serial_number" self._remote_exec.return_value = (disk_number, mock.sentinel.std_err) expected_disk_number = int(disk_number) if disk_number else None result = self._windows_utils.get_disk_number_by_serial_number( mock.sentinel.server, mock_serial_number) pattern = "%s*" % mock_serial_number cmd = ("Get-Disk | " "Where-Object {$_.SerialNumber -like '%s'} | " "Select-Object -ExpandProperty Number" % pattern) self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) self.assertEqual(expected_disk_number, result) @ddt.data("1", "") def test_get_disk_number_by_mount_path(self, disk_number): fake_mount_path = "fake_mount_path" self._remote_exec.return_value = (disk_number, mock.sentinel.std_err) expected_disk_number = int(disk_number) if disk_number else None result = self._windows_utils.get_disk_number_by_mount_path( mock.sentinel.server, fake_mount_path) cmd = ('Get-Partition | ' 'Where-Object {$_.AccessPaths -contains "%s"} | ' 'Select-Object -ExpandProperty DiskNumber' % (fake_mount_path + "\\")) self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) self.assertEqual(expected_disk_number, result) def test_get_volume_path_by_mount_path(self): fake_mount_path = "fake_mount_path" fake_volume_path = "fake_volume_path" 
self._remote_exec.return_value = fake_volume_path + '\r\n', None result = self._windows_utils.get_volume_path_by_mount_path( mock.sentinel.server, fake_mount_path) cmd = ('Get-Partition | ' 'Where-Object {$_.AccessPaths -contains "%s"} | ' 'Get-Volume | ' 'Select-Object -ExpandProperty Path' % (fake_mount_path + "\\")) self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) self.assertEqual(fake_volume_path, result) def test_get_disk_space_by_path(self): fake_disk_size = 1024 fake_free_bytes = 1000 fake_fsutil_output = ("Total # of bytes : %(total_bytes)s" "Total # of avail free bytes : %(free_bytes)s" % dict(total_bytes=fake_disk_size, free_bytes=fake_free_bytes)) self._remote_exec.return_value = fake_fsutil_output, None result = self._windows_utils.get_disk_space_by_path( mock.sentinel.server, mock.sentinel.mount_path) cmd = ["fsutil", "volume", "diskfree", self._windows_utils.quote_string(mock.sentinel.mount_path)] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) self.assertEqual((fake_disk_size, fake_free_bytes), result) def test_get_partition_maximum_size(self): fake_max_size = 1024 self._remote_exec.return_value = ("%s" % fake_max_size, mock.sentinel.std_err) result = self._windows_utils.get_partition_maximum_size( mock.sentinel.server, mock.sentinel.disk_number, mock.sentinel.partition_number) cmd = ('Get-PartitionSupportedSize -DiskNumber %(disk_number)s ' '-PartitionNumber %(partition_number)s | ' 'Select-Object -ExpandProperty SizeMax' % dict(disk_number=mock.sentinel.disk_number, partition_number=mock.sentinel.partition_number)) self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) self.assertEqual(fake_max_size, result) def test_set_disk_online_status(self): self._windows_utils.set_disk_online_status(mock.sentinel.server, mock.sentinel.disk_number, online=True) cmd = ["Set-Disk", "-Number", mock.sentinel.disk_number, "-IsOffline", 0] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) def test_set_disk_readonly_status(self): self._windows_utils.set_disk_readonly_status(mock.sentinel.server, mock.sentinel.disk_number, readonly=False) cmd = ["Set-Disk", "-Number", mock.sentinel.disk_number, "-IsReadOnly", 0] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) def test_update_disk(self): self._windows_utils.update_disk(mock.sentinel.server, mock.sentinel.disk_number) cmd = ["Update-Disk", mock.sentinel.disk_number] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) def test_join_domain(self): mock_server = {'ip': mock.sentinel.server_ip} self._windows_utils.join_domain(mock_server, mock.sentinel.domain, mock.sentinel.admin_username, mock.sentinel.admin_password) cmds = [ ('$password = "%s" | ' 'ConvertTo-SecureString -asPlainText -Force' % mock.sentinel.admin_password), ('$credential = ' 'New-Object System.Management.Automation.PSCredential(' '"%s", $password)' % mock.sentinel.admin_username), ('Add-Computer -DomainName "%s" -Credential $credential' % mock.sentinel.domain)] cmd = ";".join(cmds) self._remote_exec.assert_called_once_with(mock_server, cmd) def test_unjoin_domain(self): self._windows_utils.unjoin_domain(mock.sentinel.server, mock.sentinel.admin_username, mock.sentinel.admin_password) cmds = [ ('$password = "%s" | ' 'ConvertTo-SecureString -asPlainText -Force' % mock.sentinel.admin_password), ('$credential = ' 'New-Object System.Management.Automation.PSCredential(' '"%s", $password)' % mock.sentinel.admin_username), ('Remove-Computer -UnjoinDomaincredential $credential ' 
'-Passthru -Verbose -Force')] cmd = ";".join(cmds) self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) def test_get_current_domain(self): fake_domain = " domain" self._remote_exec.return_value = (fake_domain, mock.sentinel.std_err) result = self._windows_utils.get_current_domain(mock.sentinel.server) cmd = "(Get-WmiObject Win32_ComputerSystem).Domain" self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) self.assertEqual(fake_domain.strip(), result) def test_ensure_directory_exists(self): self._windows_utils.ensure_directory_exists(mock.sentinel.server, mock.sentinel.path) cmd = ["New-Item", "-ItemType", "Directory", "-Force", "-Path", self._windows_utils.quote_string(mock.sentinel.path)] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) @ddt.data(False, True) @mock.patch.object(windows_utils.WindowsUtils, 'path_exists') def test_remove(self, is_junction, mock_path_exists): recurse = True self._windows_utils.remove(mock.sentinel.server, mock.sentinel.path, is_junction=is_junction, recurse=recurse) if is_junction: cmd = ('[System.IO.Directory]::Delete(' '%(path)s, %(recurse)d)' % dict(path=self._windows_utils.quote_string( mock.sentinel.path), recurse=recurse)) else: cmd = ["Remove-Item", "-Confirm:$false", "-Path", self._windows_utils.quote_string(mock.sentinel.path), "-Force", '-Recurse'] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) @mock.patch.object(windows_utils.WindowsUtils, 'path_exists') def test_remove_unexisting_path(self, mock_path_exists): mock_path_exists.return_value = False self._windows_utils.remove(mock.sentinel.server, mock.sentinel.path) self.assertFalse(self._remote_exec.called) @ddt.data("True", "False") def test_path_exists(self, path_exists): self._remote_exec.return_value = (path_exists, mock.sentinel.std_err) result = self._windows_utils.path_exists(mock.sentinel.server, mock.sentinel.path) cmd = ["Test-Path", mock.sentinel.path] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) self.assertEqual(path_exists == "True", result) def test_normalize_path(self): fake_path = "C:/" result = self._windows_utils.normalize_path(fake_path) self.assertEqual("C:\\", result) def test_get_interface_index_by_ip(self): _FAKE_INDEX = "2" self._remote_exec.return_value = (_FAKE_INDEX, mock.sentinel.std_err) result = self._windows_utils.get_interface_index_by_ip( mock.sentinel.server, mock.sentinel.ip) cmd = ('Get-NetIPAddress | ' 'Where-Object {$_.IPAddress -eq "%(ip)s"} | ' 'Select-Object -ExpandProperty InterfaceIndex' % dict(ip=mock.sentinel.ip)) self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) self.assertEqual(int(_FAKE_INDEX), result) def test_set_dns_client_search_list(self): mock_search_list = ["A", "B", "C"] self._windows_utils.set_dns_client_search_list(mock.sentinel.server, mock_search_list) cmd = ["Set-DnsClientGlobalSetting", "-SuffixSearchList", "@('A','B','C')"] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) def test_set_dns_client_server_addresses(self): mock_dns_servers = ["A", "B", "C"] self._windows_utils.set_dns_client_server_addresses( mock.sentinel.server, mock.sentinel.if_index, mock_dns_servers) cmd = ["Set-DnsClientServerAddress", "-InterfaceIndex", mock.sentinel.if_index, "-ServerAddresses", "('A','B','C')"] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) def test_set_win_reg_value(self): self._windows_utils.set_win_reg_value(mock.sentinel.server, mock.sentinel.path, mock.sentinel.key, mock.sentinel.value) 
cmd = ['Set-ItemProperty', '-Path', self._windows_utils.quote_string(mock.sentinel.path), '-Name', mock.sentinel.key, '-Value', mock.sentinel.value] self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd) @ddt.data(None, mock.sentinel.key_name) def test_get_win_reg_value(self, key_name): self._remote_exec.return_value = (mock.sentinel.value, mock.sentinel.std_err) result = self._windows_utils.get_win_reg_value(mock.sentinel.server, mock.sentinel.path, name=key_name) cmd = "Get-ItemProperty -Path %s" % ( self._windows_utils.quote_string(mock.sentinel.path)) if key_name: cmd += " | Select-Object -ExpandProperty %s" % key_name self._remote_exec.assert_called_once_with(mock.sentinel.server, cmd, retry=False) self.assertEqual(mock.sentinel.value, result) def test_quote_string(self): result = self._windows_utils.quote_string(mock.sentinel.string) self.assertEqual('"%s"' % mock.sentinel.string, result) manila-10.0.0/manila/tests/share/drivers/test_glusterfs.py0000664000175000017500000004535613656750227023717 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import socket from unittest import mock import ddt from oslo_config import cfg from manila import context from manila import exception from manila.share import configuration as config from manila.share.drivers import ganesha from manila.share.drivers import glusterfs from manila.share.drivers.glusterfs import layout from manila import test from manila.tests import fake_share from manila.tests import fake_utils CONF = cfg.CONF fake_gluster_manager_attrs = { 'export': '127.0.0.1:/testvol', 'host': '127.0.0.1', 'qualified': 'testuser@127.0.0.1:/testvol', 'user': 'testuser', 'volume': 'testvol', 'path_to_private_key': '/fakepath/to/privatekey', 'remote_server_password': 'fakepassword', } fake_share_name = 'fakename' NFS_EXPORT_DIR = 'nfs.export-dir' NFS_EXPORT_VOL = 'nfs.export-volumes' NFS_RPC_AUTH_ALLOW = 'nfs.rpc-auth-allow' NFS_RPC_AUTH_REJECT = 'nfs.rpc-auth-reject' @ddt.ddt class GlusterfsShareDriverTestCase(test.TestCase): """Tests GlusterfsShareDriver.""" def setUp(self): super(GlusterfsShareDriverTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) self._execute = fake_utils.fake_execute self._context = context.get_admin_context() self.addCleanup(fake_utils.fake_execute_set_repliers, []) self.addCleanup(fake_utils.fake_execute_clear_log) CONF.set_default('reserved_share_percentage', 50) CONF.set_default('driver_handles_share_servers', False) self.fake_conf = config.Configuration(None) self._driver = glusterfs.GlusterfsShareDriver( execute=self._execute, configuration=self.fake_conf) self.share = fake_share.fake_share(share_proto='NFS') def test_do_setup(self): self.mock_object(self._driver, '_get_helper') self.mock_object(layout.GlusterfsShareDriverBase, 'do_setup') _context = mock.Mock() self._driver.do_setup(_context) self._driver._get_helper.assert_called_once_with() layout.GlusterfsShareDriverBase.do_setup.assert_called_once_with( 
_context) @ddt.data(True, False) def test_setup_via_manager(self, has_parent): gmgr = mock.Mock() share_mgr_parent = mock.Mock() if has_parent else None nfs_helper = mock.Mock() nfs_helper.get_export = mock.Mock(return_value='host:/vol') self._driver.nfs_helper = mock.Mock(return_value=nfs_helper) ret = self._driver._setup_via_manager( {'manager': gmgr, 'share': self.share}, share_manager_parent=share_mgr_parent) gmgr.set_vol_option.assert_called_once_with( 'nfs.export-volumes', False) self._driver.nfs_helper.assert_called_once_with( self._execute, self.fake_conf, gluster_manager=gmgr) nfs_helper.get_export.assert_called_once_with(self.share) self.assertEqual('host:/vol', ret) @ddt.data({'helpercls': None, 'path': '/fakepath'}, {'helpercls': None, 'path': None}, {'helpercls': glusterfs.GlusterNFSHelper, 'path': '/fakepath'}, {'helpercls': glusterfs.GlusterNFSHelper, 'path': None}) @ddt.unpack def test_setup_via_manager_path(self, helpercls, path): gmgr = mock.Mock() gmgr.path = path if not helpercls: helper = mock.Mock() helper.get_export = mock.Mock(return_value='host:/vol') helpercls = mock.Mock(return_value=helper) self._driver.nfs_helper = helpercls if helpercls == glusterfs.GlusterNFSHelper and path is None: gmgr.get_vol_option = mock.Mock(return_value=True) self._driver._setup_via_manager( {'manager': gmgr, 'share': self.share}) if helpercls == glusterfs.GlusterNFSHelper and path is None: gmgr.get_vol_option.assert_called_once_with( NFS_EXPORT_VOL, boolean=True) args = (NFS_RPC_AUTH_REJECT, '*') else: args = (NFS_EXPORT_VOL, False) gmgr.set_vol_option.assert_called_once_with(*args) def test_setup_via_manager_export_volumes_off(self): gmgr = mock.Mock() gmgr.path = None gmgr.get_vol_option = mock.Mock(return_value=False) self._driver.nfs_helper = glusterfs.GlusterNFSHelper self.assertRaises(exception.GlusterfsException, self._driver._setup_via_manager, {'manager': gmgr, 'share': self.share}) gmgr.get_vol_option.assert_called_once_with(NFS_EXPORT_VOL, boolean=True) def test_check_for_setup_error(self): self._driver.check_for_setup_error() def test_update_share_stats(self): self.mock_object(layout.GlusterfsShareDriverBase, '_update_share_stats') self._driver._update_share_stats() (layout.GlusterfsShareDriverBase._update_share_stats. 
assert_called_once_with({'storage_protocol': 'NFS', 'vendor_name': 'Red Hat', 'share_backend_name': 'GlusterFS', 'reserved_percentage': 50})) def test_get_network_allocations_number(self): self.assertEqual(0, self._driver.get_network_allocations_number()) def test_get_helper(self): ret = self._driver._get_helper() self.assertIsInstance(ret, self._driver.nfs_helper) @ddt.data({'path': '/fakepath', 'helper': glusterfs.GlusterNFSHelper}, {'path': None, 'helper': glusterfs.GlusterNFSVolHelper}) @ddt.unpack def test_get_helper_vol(self, path, helper): self._driver.nfs_helper = glusterfs.GlusterNFSHelper gmgr = mock.Mock(path=path) ret = self._driver._get_helper(gmgr) self.assertIsInstance(ret, helper) @ddt.data('type', 'level') def test_supported_access_features(self, feature): nfs_helper = mock.Mock() supported_access_feature = mock.Mock() setattr(nfs_helper, 'supported_access_%ss' % feature, supported_access_feature) self.mock_object(self._driver, 'nfs_helper', nfs_helper) ret = getattr(self._driver, 'supported_access_%ss' % feature) self.assertEqual(supported_access_feature, ret) def test_update_access_via_manager(self): self.mock_object(self._driver, '_get_helper') gmgr = mock.Mock() add_rules = mock.Mock() delete_rules = mock.Mock() self._driver._update_access_via_manager( gmgr, self._context, self.share, add_rules, delete_rules, recovery=True) self._driver._get_helper.assert_called_once_with(gmgr) self._driver._get_helper().update_access.assert_called_once_with( '/', self.share, add_rules, delete_rules, recovery=True) @ddt.ddt class GlusterNFSHelperTestCase(test.TestCase): """Tests GlusterNFSHelper.""" def setUp(self): super(GlusterNFSHelperTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) gluster_manager = mock.Mock(**fake_gluster_manager_attrs) self._execute = mock.Mock(return_value=('', '')) self.fake_conf = config.Configuration(None) self._helper = glusterfs.GlusterNFSHelper( self._execute, self.fake_conf, gluster_manager=gluster_manager) def test_get_export(self): ret = self._helper.get_export(mock.Mock()) self.assertEqual(fake_gluster_manager_attrs['export'], ret) @ddt.data({'output_str': '/foo(10.0.0.1|10.0.0.2),/bar(10.0.0.1)', 'expected': {'foo': ['10.0.0.1', '10.0.0.2'], 'bar': ['10.0.0.1']}}, {'output_str': None, 'expected': {}}) @ddt.unpack def test_get_export_dir_dict(self, output_str, expected): self.mock_object(self._helper.gluster_manager, 'get_vol_option', mock.Mock(return_value=output_str)) ret = self._helper._get_export_dir_dict() self.assertEqual(expected, ret) (self._helper.gluster_manager.get_vol_option. 
assert_called_once_with(NFS_EXPORT_DIR)) @ddt.data({'delta': (['10.0.0.2'], []), 'extra_exports': {}, 'new_exports': '/fakename(10.0.0.1|10.0.0.2)'}, {'delta': (['10.0.0.1'], []), 'extra_exports': {}, 'new_exports': '/fakename(10.0.0.1)'}, {'delta': ([], ['10.0.0.2']), 'extra_exports': {}, 'new_exports': '/fakename(10.0.0.1)'}, {'delta': ([], ['10.0.0.1']), 'extra_exports': {}, 'new_exports': None}, {'delta': ([], ['10.0.0.1']), 'extra_exports': {'elsewhere': ['10.0.1.3']}, 'new_exports': '/elsewhere(10.0.1.3)'}) @ddt.unpack def test_update_access(self, delta, extra_exports, new_exports): gluster_manager_attrs = {'path': '/fakename'} gluster_manager_attrs.update(fake_gluster_manager_attrs) gluster_mgr = mock.Mock(**gluster_manager_attrs) helper = glusterfs.GlusterNFSHelper( self._execute, self.fake_conf, gluster_manager=gluster_mgr) export_dir_dict = {'fakename': ['10.0.0.1']} export_dir_dict.update(extra_exports) helper._get_export_dir_dict = mock.Mock(return_value=export_dir_dict) _share = mock.Mock() add_rules, delete_rules = ( map(lambda a: {'access_to': a}, r) for r in delta) helper.update_access('/', _share, add_rules, delete_rules) helper._get_export_dir_dict.assert_called_once_with() gluster_mgr.set_vol_option.assert_called_once_with(NFS_EXPORT_DIR, new_exports) @ddt.data({}, {'elsewhere': '10.0.1.3'}) def test_update_access_disjoint(self, export_dir_dict): gluster_manager_attrs = {'path': '/fakename'} gluster_manager_attrs.update(fake_gluster_manager_attrs) gluster_mgr = mock.Mock(**gluster_manager_attrs) helper = glusterfs.GlusterNFSHelper( self._execute, self.fake_conf, gluster_manager=gluster_mgr) helper._get_export_dir_dict = mock.Mock(return_value=export_dir_dict) _share = mock.Mock() helper.update_access('/', _share, [], [{'access_to': '10.0.0.2'}]) helper._get_export_dir_dict.assert_called_once_with() self.assertFalse(gluster_mgr.set_vol_option.called) @ddt.ddt class GlusterNFSVolHelperTestCase(test.TestCase): """Tests GlusterNFSVolHelper.""" def setUp(self): super(GlusterNFSVolHelperTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) gluster_manager = mock.Mock(**fake_gluster_manager_attrs) self._execute = mock.Mock(return_value=('', '')) self.fake_conf = config.Configuration(None) self._helper = glusterfs.GlusterNFSVolHelper( self._execute, self.fake_conf, gluster_manager=gluster_manager) @ddt.data({'output_str': '10.0.0.1,10.0.0.2', 'expected': ['10.0.0.1', '10.0.0.2']}, {'output_str': None, 'expected': []}) @ddt.unpack def test_get_vol_exports(self, output_str, expected): self.mock_object(self._helper.gluster_manager, 'get_vol_option', mock.Mock(return_value=output_str)) ret = self._helper._get_vol_exports() self.assertEqual(expected, ret) (self._helper.gluster_manager.get_vol_option. 
assert_called_once_with(NFS_RPC_AUTH_ALLOW)) @ddt.data({'delta': (["10.0.0.1"], []), 'expected': "10.0.0.1,10.0.0.3"}, {'delta': (["10.0.0.2"], []), 'expected': "10.0.0.1,10.0.0.2,10.0.0.3"}, {'delta': ([], ["10.0.0.1"]), 'expected': "10.0.0.3"}, {'delta': ([], ["10.0.0.2"]), 'expected': "10.0.0.1,10.0.0.3"}) @ddt.unpack def test_update_access(self, delta, expected): self.mock_object(self._helper, '_get_vol_exports', mock.Mock( return_value=["10.0.0.1", "10.0.0.3"])) _share = mock.Mock() add_rules, delete_rules = ( map(lambda a: {'access_to': a}, r) for r in delta) self._helper.update_access("/", _share, add_rules, delete_rules) self._helper._get_vol_exports.assert_called_once_with() argseq = [(NFS_RPC_AUTH_ALLOW, expected), (NFS_RPC_AUTH_REJECT, None)] self.assertEqual( [mock.call(*a) for a in argseq], self._helper.gluster_manager.set_vol_option.call_args_list) def test_update_access_empty(self): self.mock_object(self._helper, '_get_vol_exports', mock.Mock( return_value=["10.0.0.1"])) _share = mock.Mock() self._helper.update_access("/", _share, [], [{'access_to': "10.0.0.1"}]) self._helper._get_vol_exports.assert_called_once_with() argseq = [(NFS_RPC_AUTH_ALLOW, None), (NFS_RPC_AUTH_REJECT, "*")] self.assertEqual( [mock.call(*a) for a in argseq], self._helper.gluster_manager.set_vol_option.call_args_list) class GaneshaNFSHelperTestCase(test.TestCase): """Tests GaneshaNFSHelper.""" def setUp(self): super(GaneshaNFSHelperTestCase, self).setUp() self.gluster_manager = mock.Mock(**fake_gluster_manager_attrs) self._execute = mock.Mock(return_value=('', '')) self._root_execute = mock.Mock(return_value=('', '')) self.access = fake_share.fake_access() self.fake_conf = config.Configuration(None) self.fake_template = {'key': 'value'} self.share = fake_share.fake_share() self.mock_object(glusterfs.ganesha_utils, 'RootExecutor', mock.Mock(return_value=self._root_execute)) self.mock_object(glusterfs.ganesha.GaneshaNASHelper, '__init__', mock.Mock()) socket.gethostname = mock.Mock(return_value='example.com') self._helper = glusterfs.GaneshaNFSHelper( self._execute, self.fake_conf, gluster_manager=self.gluster_manager) self._helper.tag = 'GLUSTER-Ganesha-localhost' def test_init_local_ganesha_server(self): glusterfs.ganesha_utils.RootExecutor.assert_called_once_with( self._execute) socket.gethostname.assert_has_calls([mock.call()]) glusterfs.ganesha.GaneshaNASHelper.__init__.assert_has_calls( [mock.call(self._root_execute, self.fake_conf, tag='GLUSTER-Ganesha-example.com')]) def test_get_export(self): ret = self._helper.get_export(self.share) self.assertEqual('example.com:/fakename--', ret) def test_init_remote_ganesha_server(self): ssh_execute = mock.Mock(return_value=('', '')) CONF.set_default('glusterfs_ganesha_server_ip', 'fakeip') self.mock_object(glusterfs.ganesha_utils, 'SSHExecutor', mock.Mock(return_value=ssh_execute)) glusterfs.GaneshaNFSHelper( self._execute, self.fake_conf, gluster_manager=self.gluster_manager) glusterfs.ganesha_utils.SSHExecutor.assert_called_once_with( 'fakeip', 22, None, 'root', password=None, privatekey=None) glusterfs.ganesha.GaneshaNASHelper.__init__.assert_has_calls( [mock.call(ssh_execute, self.fake_conf, tag='GLUSTER-Ganesha-fakeip')]) def test_init_helper(self): ganeshelper = mock.Mock() exptemp = mock.Mock() def set_attributes(*a, **kw): self._helper.ganesha = ganeshelper self._helper.export_template = exptemp self.mock_object(ganesha.GaneshaNASHelper, 'init_helper', mock.Mock(side_effect=set_attributes)) self.assertEqual({}, 
glusterfs.GaneshaNFSHelper.shared_data) self._helper.init_helper() ganesha.GaneshaNASHelper.init_helper.assert_called_once_with() self.assertEqual(ganeshelper, self._helper.ganesha) self.assertEqual(exptemp, self._helper.export_template) self.assertEqual({ 'GLUSTER-Ganesha-localhost': { 'ganesha': ganeshelper, 'export_template': exptemp}}, glusterfs.GaneshaNFSHelper.shared_data) other_helper = glusterfs.GaneshaNFSHelper( self._execute, self.fake_conf, gluster_manager=self.gluster_manager) other_helper.tag = 'GLUSTER-Ganesha-localhost' other_helper.init_helper() self.assertEqual(ganeshelper, other_helper.ganesha) self.assertEqual(exptemp, other_helper.export_template) def test_default_config_hook(self): fake_conf_dict = {'key': 'value1'} mock_ganesha_utils_patch = mock.Mock() def fake_patch_run(tmpl1, tmpl2): mock_ganesha_utils_patch( copy.deepcopy(tmpl1), tmpl2) tmpl1.update(tmpl2) self.mock_object(glusterfs.ganesha.GaneshaNASHelper, '_default_config_hook', mock.Mock(return_value=self.fake_template)) self.mock_object(glusterfs.ganesha_utils, 'path_from', mock.Mock(return_value='/fakedir/glusterfs/conf')) self.mock_object(self._helper, '_load_conf_dir', mock.Mock(return_value=fake_conf_dict)) self.mock_object(glusterfs.ganesha_utils, 'patch', mock.Mock(side_effect=fake_patch_run)) ret = self._helper._default_config_hook() (glusterfs.ganesha.GaneshaNASHelper._default_config_hook. assert_called_once_with()) glusterfs.ganesha_utils.path_from.assert_called_once_with( glusterfs.__file__, 'conf') self._helper._load_conf_dir.assert_called_once_with( '/fakedir/glusterfs/conf') glusterfs.ganesha_utils.patch.assert_called_once_with( self.fake_template, fake_conf_dict) self.assertEqual(fake_conf_dict, ret) def test_fsal_hook(self): self._helper.gluster_manager.path = '/fakename' output = { 'Hostname': '127.0.0.1', 'Volume': 'testvol', 'Volpath': '/fakename' } ret = self._helper._fsal_hook('/fakepath', self.share, self.access) self.assertEqual(output, ret) manila-10.0.0/manila/tests/share/drivers/dell_emc/0000775000175000017500000000000013656750362022017 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/__init__.py0000664000175000017500000000000013656750227024116 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/common/0000775000175000017500000000000013656750362023307 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/common/__init__.py0000664000175000017500000000000013656750227025406 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/common/enas/0000775000175000017500000000000013656750362024235 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/common/enas/utils.py0000664000175000017500000001250013656750227025745 0ustar zuulzuul00000000000000# Copyright (c) 2016 Dell Inc. or its subsidiaries. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
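# ---------------------------------------------------------------------------
# Editor's note -- illustrative sketch only, not part of the upstream Manila
# tree.  The GlusterFS helper tests above repeatedly use the same pattern:
# ddt parameterizes one test body over several (add_rules, delete_rules)
# deltas, and a mock helper records the delegated update_access() call.  The
# toy test case below reproduces that pattern in isolation; the class name,
# IP values, and rule shapes are made up for the example.
# ---------------------------------------------------------------------------
import unittest
from unittest import mock

import ddt


@ddt.ddt
class _IllustrativeUpdateAccessTestCase(unittest.TestCase):
    """Toy example of the ddt + mock pattern used by the tests above."""

    @ddt.data((['10.0.0.1'], []), ([], ['10.0.0.2']))
    @ddt.unpack
    def test_update_access_is_delegated(self, to_add, to_delete):
        helper = mock.Mock()
        share = mock.Mock()
        add_rules = [{'access_to': ip} for ip in to_add]
        delete_rules = [{'access_to': ip} for ip in to_delete]

        # A driver would normally delegate to its helper like this; here the
        # helper is a plain Mock, so the call is only recorded and asserted.
        helper.update_access('/', share, add_rules, delete_rules)

        helper.update_access.assert_called_once_with(
            '/', share, add_rules, delete_rules)
# ---------------------------------------------------------------------------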
import doctest from unittest import mock from lxml import doctestcompare import six CHECKER = doctestcompare.LXMLOutputChecker() PARSE_XML = doctest.register_optionflag('PARSE_XML') class RequestSideEffect(object): def __init__(self): self.actions = [] self.started = False def append(self, resp=None, ex=None): if not self.started: self.actions.append((resp, ex)) def __call__(self, *args, **kwargs): if not self.started: self.started = True self.actions.reverse() item = self.actions.pop() if item[1]: raise item[1] else: return item[0] class SSHSideEffect(object): def __init__(self): self.actions = [] self.started = False def append(self, resp=None, err=None, ex=None): if not self.started: self.actions.append((resp, err, ex)) def __call__(self, rel_url, req_data=None, method=None, return_rest_err=True, *args, **kwargs): if not self.started: self.started = True self.actions.reverse() item = self.actions.pop() if item[2]: raise item[2] else: if return_rest_err: return item[0:2] else: return item[1] class EMCMock(mock.Mock): def _get_req_from_call(self, call): if len(call) == 3: return call[1][0] elif len(call) == 2: return call[0][0] def assert_has_calls(self, calls): if len(calls) != len(self.mock_calls): raise AssertionError( 'Mismatch error.\nExpected: %r\n' 'Actual: %r' % (calls, self.mock_calls) ) iter_expect = iter(calls) iter_actual = iter(self.mock_calls) while True: try: expect = self._get_req_from_call(next(iter_expect)) actual = self._get_req_from_call(next(iter_actual)) except StopIteration: return True if not isinstance(expect, six.binary_type): expect = six.b(expect) if not isinstance(actual, six.binary_type): actual = six.b(actual) if not CHECKER.check_output(expect, actual, PARSE_XML): raise AssertionError( 'Mismatch error.\nExpected: %r\n' 'Actual: %r' % (calls, self.mock_calls) ) class EMCNFSShareMock(mock.Mock): def assert_has_calls(self, calls): if len(calls) != len(self.mock_calls): raise AssertionError( 'Mismatch error.\nExpected: %r\n' 'Actual: %r' % (calls, self.mock_calls) ) iter_expect = iter(calls) iter_actual = iter(self.mock_calls) while True: try: expect = next(iter_expect)[1][0] actual = next(iter_actual)[1][0] except StopIteration: return True if not self._option_check(expect, actual): raise AssertionError( 'Mismatch error.\nExpected: %r\n' 'Actual: %r' % (calls, self.mock_calls) ) def _option_parser(self, option): option_map = {} for item in option.split(','): key, value = item.split('=') option_map[key] = value return option_map @staticmethod def _opt_value_from_map(opt_map, key): value = opt_map.get(key) if value: ret = set(value.split(':')) else: ret = set() return ret def _option_check(self, expect, actual): if '-option' in actual and '-option' in expect: exp_option = expect[expect.index('-option') + 1] act_option = actual[actual.index('-option') + 1] exp_opt_map = self._option_parser(exp_option) act_opt_map = self._option_parser(act_option) for key in exp_opt_map: exp_set = self._opt_value_from_map(exp_opt_map, key) act_set = self._opt_value_from_map(act_opt_map, key) if exp_set != act_set: return False return True def patch_get_managed_ports_vnx(*arg, **kwargs): return mock.patch('manila.share.drivers.dell_emc.plugins.vnx.connection.' 'VNXStorageConnection.get_managed_ports', mock.Mock(*arg, **kwargs)) def patch_get_managed_ports_powermax(*arg, **kwargs): return mock.patch( 'manila.share.drivers.dell_emc.plugins.powermax.connection.' 
'PowerMaxStorageConnection.get_managed_ports', mock.Mock(*arg, **kwargs)) manila-10.0.0/manila/tests/share/drivers/dell_emc/common/enas/__init__.py0000664000175000017500000000000013656750227026334 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/common/enas/test_connector.py0000664000175000017500000001754613656750227027655 0ustar zuulzuul00000000000000# Copyright (c) 2015 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock from eventlet import greenthread from oslo_concurrency import processutils from six.moves.urllib import error as url_error from six.moves.urllib import request as url_request from manila import exception from manila.share import configuration as conf from manila.share.drivers.dell_emc.common.enas import connector from manila import test from manila.tests.share.drivers.dell_emc.common.enas import fakes from manila.tests.share.drivers.dell_emc.common.enas import utils as enas_utils from manila import utils class XMLAPIConnectorTestData(object): FAKE_BODY = '' FAKE_RESP = '' FAKE_METHOD = 'fake_method' FAKE_KEY = 'key' FAKE_VALUE = 'value' @staticmethod def req_auth_url(): return 'https://' + fakes.FakeData.emc_nas_server + '/Login' @staticmethod def req_credential(): return ( 'user=' + fakes.FakeData.emc_nas_login + '&password=' + fakes.FakeData.emc_nas_password + '&Login=Login' ).encode() @staticmethod def req_url_encode(): return {'Content-Type': 'application/x-www-form-urlencoded'} @staticmethod def req_url(): return ( 'https://' + fakes.FakeData.emc_nas_server + '/servlets/CelerraManagementServices' ) XML_CONN_TD = XMLAPIConnectorTestData class XMLAPIConnectorTest(test.TestCase): @mock.patch.object(url_request, 'Request', mock.Mock()) def setUp(self): super(XMLAPIConnectorTest, self).setUp() emc_share_driver = fakes.FakeEMCShareDriver() self.configuration = emc_share_driver.configuration xml_socket = mock.Mock() xml_socket.read = mock.Mock(return_value=XML_CONN_TD.FAKE_RESP) opener = mock.Mock() opener.open = mock.Mock(return_value=xml_socket) with mock.patch.object(url_request, 'build_opener', mock.Mock(return_value=opener)): self.XmlConnector = connector.XMLAPIConnector( configuration=self.configuration, debug=False) expected_calls = [ mock.call(XML_CONN_TD.req_auth_url(), XML_CONN_TD.req_credential(), XML_CONN_TD.req_url_encode()), ] url_request.Request.assert_has_calls(expected_calls) def test_request_with_debug(self): self.XmlConnector.debug = True request = mock.Mock() request.headers = {XML_CONN_TD.FAKE_KEY: XML_CONN_TD.FAKE_VALUE} request.get_full_url = mock.Mock( return_value=XML_CONN_TD.FAKE_VALUE) with mock.patch.object(url_request, 'Request', mock.Mock(return_value=request)): rsp = self.XmlConnector.request(XML_CONN_TD.FAKE_BODY, XML_CONN_TD.FAKE_METHOD) self.assertEqual(XML_CONN_TD.FAKE_RESP, rsp) def test_request_with_no_authorized_exception(self): xml_socket = mock.Mock() xml_socket.read = mock.Mock(return_value=XML_CONN_TD.FAKE_RESP) hook = enas_utils.RequestSideEffect() 
hook.append(ex=url_error.HTTPError(XML_CONN_TD.req_url(), '403', 'fake_message', None, None)) hook.append(xml_socket) hook.append(xml_socket) self.XmlConnector.url_opener.open = mock.Mock(side_effect=hook) self.XmlConnector.request(XML_CONN_TD.FAKE_BODY) def test_request_with_general_exception(self): hook = enas_utils.RequestSideEffect() hook.append(ex=url_error.HTTPError(XML_CONN_TD.req_url(), 'error_code', 'fake_message', None, None)) self.XmlConnector.url_opener.open = mock.Mock(side_effect=hook) self.assertRaises(exception.ManilaException, self.XmlConnector.request, XML_CONN_TD.FAKE_BODY) class MockSSH(object): def __enter__(self): return self def __exit__(self, type, value, traceback): pass class MockSSHPool(object): def __init__(self): self.ssh = MockSSH() def item(self): try: return self.ssh finally: pass class CmdConnectorTest(test.TestCase): def setUp(self): super(CmdConnectorTest, self).setUp() self.configuration = conf.Configuration(None) self.configuration.append_config_values = mock.Mock(return_value=0) self.configuration.emc_nas_login = fakes.FakeData.emc_nas_login self.configuration.emc_nas_password = fakes.FakeData.emc_nas_password self.configuration.emc_nas_server = fakes.FakeData.emc_nas_server self.configuration.emc_ssl_cert_verify = False self.configuration.emc_ssl_cert_path = None self.sshpool = MockSSHPool() with mock.patch.object(utils, "SSHPool", mock.Mock(return_value=self.sshpool)): self.CmdHelper = connector.SSHConnector( configuration=self.configuration, debug=False) utils.SSHPool.assert_called_once_with( ip=fakes.FakeData.emc_nas_server, port=22, conn_timeout=None, login=fakes.FakeData.emc_nas_login, password=fakes.FakeData.emc_nas_password) def test_run_ssh(self): with mock.patch.object(processutils, "ssh_execute", mock.Mock(return_value=('fake_output', ''))): cmd_list = ['fake', 'cmd'] self.CmdHelper.run_ssh(cmd_list) processutils.ssh_execute.assert_called_once_with( self.sshpool.item(), 'fake cmd', check_exit_code=False) def test_run_ssh_with_debug(self): self.CmdHelper.debug = True with mock.patch.object(processutils, "ssh_execute", mock.Mock(return_value=('fake_output', ''))): cmd_list = ['fake', 'cmd'] self.CmdHelper.run_ssh(cmd_list) processutils.ssh_execute.assert_called_once_with( self.sshpool.item(), 'fake cmd', check_exit_code=False) @mock.patch.object( processutils, "ssh_execute", mock.Mock(side_effect=processutils.ProcessExecutionError)) def test_run_ssh_exception(self): cmd_list = ['fake', 'cmd'] self.mock_object(greenthread, 'sleep', mock.Mock()) sshpool = MockSSHPool() with mock.patch.object(utils, "SSHPool", mock.Mock(return_value=sshpool)): self.CmdHelper = connector.SSHConnector(self.configuration) self.assertRaises(processutils.ProcessExecutionError, self.CmdHelper.run_ssh, cmd_list, True) utils.SSHPool.assert_called_once_with( ip=fakes.FakeData.emc_nas_server, port=22, conn_timeout=None, login=fakes.FakeData.emc_nas_login, password=fakes.FakeData.emc_nas_password) processutils.ssh_execute.assert_called_once_with( sshpool.item(), 'fake cmd', check_exit_code=True) manila-10.0.0/manila/tests/share/drivers/dell_emc/common/enas/test_utils.py0000664000175000017500000001256713656750227027021 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ssl from unittest import mock import ddt from manila.share.drivers.dell_emc.common.enas import utils from manila import test @ddt.ddt class ENASUtilsTestCase(test.TestCase): @ddt.data({'full': ['cge-1-0', 'cge-1-1', 'cge-3-0', 'cge-3-1', 'cge-12-3'], 'matchers': ['cge-?-0', 'cge-3*', 'foo'], 'matched': set(['cge-1-0', 'cge-3-0', 'cge-3-1']), 'unmatched': set(['cge-1-1', 'cge-12-3'])}, {'full': ['cge-1-0', 'cge-1-1'], 'matchers': ['cge-1-0'], 'matched': set(['cge-1-0']), 'unmatched': set(['cge-1-1'])}, {'full': ['cge-1-0', 'cge-1-1'], 'matchers': ['foo'], 'matched': set([]), 'unmatched': set(['cge-1-0', 'cge-1-1'])}) @ddt.unpack def test_do_match_any(self, full, matchers, matched, unmatched): real_matched, real_unmatched = utils.do_match_any( full, matchers) self.assertEqual(matched, real_matched) self.assertEqual(unmatched, real_unmatched) class SslContextTestCase(test.TestCase): def test_create_ssl_context(self): configuration = mock.Mock() configuration.emc_ssl_cert_verify = True configuration.emc_ssl_cert_path = "./cert_path/" self.mock_object(ssl, 'create_default_context') context = utils.create_ssl_context(configuration) self.assertIsNotNone(context) def test_create_ssl_context_no_verify(self): configuration = mock.Mock() configuration.emc_ssl_cert_verify = False self.mock_object(ssl, 'create_default_context') context = utils.create_ssl_context(configuration) self.assertFalse(context.check_hostname) def test_no_create_default_context(self): """Test scenario of running on python 2.7.8 or earlier.""" configuration = mock.Mock() configuration.emc_ssl_cert_verify = False self.mock_object(ssl, 'create_default_context', mock.Mock(side_effect=AttributeError)) context = utils.create_ssl_context(configuration) self.assertIsNone(context) @ddt.ddt class ParseIpaddrTestCase(test.TestCase): @ddt.data({'lst_ipaddr': ['192.168.100.101', '192.168.100.102', '192.168.100.103']}, {'lst_ipaddr': ['[fdf8:f53b:82e4::57]', '[fdf8:f53b:82e4::54]', '[fdf8:f53b:82e4::55]']}, {'lst_ipaddr': ['[fdf8:f53b:82e4::57]', '[fdf8:f53b:82e4::54]', '192.168.100.103', '[fdf8:f53b:82e4::55]']}, {'lst_ipaddr': ['192.168.100.101', '[fdf8:f53b:82e4::57]', '[fdf8:f53b:82e4::54]', '192.168.100.101', '[fdf8:f53b:82e4::55]', '192.168.100.102']},) @ddt.unpack def test_parse_ipv4_addr(self, lst_ipaddr): self.assertEqual(lst_ipaddr, utils.parse_ipaddr(':'.join(lst_ipaddr))) @ddt.ddt class ConvertIPv6FormatTestCase(test.TestCase): @ddt.data({'ip_addr': 'fdf8:f53b:82e4::55'}, {'ip_addr': 'fdf8:f53b:82e4::55/64'}, {'ip_addr': 'fdf8:f53b:82e4::55/128'}) @ddt.unpack def test_ipv6_addr(self, ip_addr): expected_ip_addr = '[%s]' % ip_addr self.assertEqual(expected_ip_addr, utils.convert_ipv6_format_if_needed(ip_addr)) @ddt.data({'ip_addr': '192.168.1.100'}, {'ip_addr': '192.168.1.100/24'}, {'ip_addr': '192.168.1.100/32'}, {'ip_addr': '[fdf8:f53b:82e4::55]'}) @ddt.unpack def test_invalid_ipv6_addr(self, ip_addr): self.assertEqual(ip_addr, utils.convert_ipv6_format_if_needed(ip_addr)) @ddt.ddt class ExportUncPathTestCase(test.TestCase): @ddt.data({'ip_addr': 'fdf8:f53b:82e4::55'}, {'ip_addr': 'fdf8:f53b:82e4::'}, {'ip_addr': '2018::'}) 
@ddt.unpack def test_ipv6_addr(self, ip_addr): expected_ip_addr = '%s.ipv6-literal.net' % ip_addr.replace(':', '-') self.assertEqual(expected_ip_addr, utils.export_unc_path(ip_addr)) @ddt.data({'ip_addr': '192.168.1.100'}, {'ip_addr': '192.168.1.100/24'}, {'ip_addr': '192.168.1.100/32'}, {'ip_addr': 'fdf8:f53b:82e4::55/64'}, {'ip_addr': 'fdf8:f53b:82e4::55/128'}, {'ip_addr': '[fdf8:f53b:82e4::55]'}) @ddt.unpack def test_invalid_ipv6_addr(self, ip_addr): self.assertEqual(ip_addr, utils.export_unc_path(ip_addr)) manila-10.0.0/manila/tests/share/drivers/dell_emc/common/enas/fakes.py0000664000175000017500000017233413656750227025712 0ustar zuulzuul00000000000000# Copyright (c) 2015 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock from oslo_utils import units from manila.common import constants as const from manila.share import configuration as conf from manila.share.drivers.dell_emc.common.enas import utils from manila.tests import fake_share def query(func): def inner(*args, **kwargs): return ( '' '' + func(*args, **kwargs) + '' ) return inner def start_task(func): def inner(*args, **kwargs): return ( '' '' + func(*args, **kwargs) + '') return inner def response(func): def inner(*args, **kwargs): return ( '' '' + func(*args, **kwargs) + '' ).encode() return inner class FakeData(object): # Share information share_id = '7cf7c200_d3af_4e05_b87e_9167c95df4f9' host = 'HostA@BackendB#fake_pool_name' share_name = share_id share_size = 10 new_size = 20 src_share_name = '7cf7c200_d3af_4e05_b87e_9167c95df4f0' # Snapshot information snapshot_name = 'de4c9050-e2f9-4ce1-ade4-5ed0c9f26451' src_snap_name = 'de4c9050-e2f9-4ce1-ade4-5ed0c9f26452' snapshot_id = 'fake_snap_id' snapshot_size = 10 * units.Ki # Share network information share_network_id = 'c5b3a865-56d0-4d88-abe5-879965e099c9' cidr = '192.168.1.0/24' cidr_v6 = 'fdf8:f53b:82e1::/64' segmentation_id = 100 network_allocations_id1 = '132dbb10-9a36-46f2-8d89-3d909830c356' network_allocations_id2 = '7eabdeed-bad2-46ea-bd0f-a33884c869e0' network_allocations_id3 = '98c9e490-a842-4e59-b59a-a6042069d35b' network_allocations_id4 = '6319a917-ab95-4b65-a498-773ae33c5550' network_allocations_ip1 = '192.168.1.1' network_allocations_ip2 = '192.168.1.2' network_allocations_ip3 = 'fdf8:f53b:82e1::1' network_allocations_ip4 = 'fdf8:f53b:82e1::2' network_allocations_ip_version1 = 4 network_allocations_ip_version2 = 4 network_allocations_ip_version3 = 6 network_allocations_ip_version4 = 6 domain_name = 'fake_domain' domain_user = 'administrator' domain_password = 'password' dns_ip_address = '192.168.1.200' dns_ipv6_address = 'fdf8:f53b:82e1::f' # Share server information share_server_id = '56aafd02-4d44-43d7-b784-57fc88167224' # Filesystem information filesystem_name = share_name filesystem_id = 'fake_filesystem_id' filesystem_size = 10 * units.Ki filesystem_new_size = 20 * units.Ki # Mountpoint information path = '/' + share_name # Mover information mover_name = 'server_2' mover_id = 'fake_mover_id' interface_name1 = 
network_allocations_id1[-12:] interface_name2 = network_allocations_id2[-12:] interface_name3 = network_allocations_id3[-12:] interface_name4 = network_allocations_id4[-12:] long_interface_name = network_allocations_id1 net_mask = '255.255.255.0' net_mask_v6 = 64 device_name = 'cge-1-0' interconnect_id = '2001' # VDM information vdm_name = share_server_id vdm_id = 'fake_vdm_id' # Pool information pool_name = 'fake_pool_name' pool_id = 'fake_pool_id' pool_used_size = 20480 pool_total_size = 511999 # NFS share access information rw_hosts = ['192.168.1.1', '192.168.1.2'] ro_hosts = ['192.168.1.3', '192.168.1.4'] nfs_host_ip = '192.168.1.5' rw_hosts_ipv6 = ['fdf8:f53b:82e1::1', 'fdf8:f53b:82e1::2'] ro_hosts_ipv6 = ['fdf8:f53b:82e1::3', 'fdf8:f53b:82e1::4'] nfs_host_ipv6 = 'fdf8:f53b:82e1::5' fake_output = '' fake_error_msg = 'fake error message' emc_share_backend = 'vnx' powermax_share_backend = 'powermax' emc_nas_server = '192.168.1.20' emc_nas_login = 'fakename' emc_nas_password = 'fakepassword' share_backend_name = 'EMC_NAS_Storage' cifs_access = """ 1478607389: SMB:11: Unix user 'Guest' UID=32769 1478607389: SMB:10: FindUserUid:Access_Password 'Guest',1=0x8001 T=0 1478607389: SHARE: 6: ALLOWED:fullcontrol:S-1-5-15-3399d125-6dcdf5f4 1478607389: SMB:11: Unix user 'Administrator' UID=32768 1478607389: SMB:10: FindUserUid:Access_Password 'Administrator', 1478607389: SHARE: 6: ALLOWED:fullcontrol:S-1-5-15-3399d125 """ class StorageObjectTestData(object): def __init__(self): self.share_name = FakeData.share_name self.filesystem_name = FakeData.filesystem_name self.filesystem_id = FakeData.filesystem_id self.filesystem_size = 10 * units.Ki self.filesystem_new_size = 20 * units.Ki self.path = FakeData.path self.snapshot_name = FakeData.snapshot_name self.snapshot_id = FakeData.snapshot_id self.snapshot_size = 10 * units.Ki self.src_snap_name = FakeData.src_snap_name self.src_fileystems_name = FakeData.src_share_name self.mover_name = FakeData.mover_name self.mover_id = FakeData.mover_id self.vdm_name = FakeData.vdm_name self.vdm_id = FakeData.vdm_id self.pool_name = FakeData.pool_name self.pool_id = FakeData.pool_id self.pool_used_size = FakeData.pool_used_size self.pool_total_size = FakeData.pool_total_size self.interface_name1 = FakeData.interface_name1 self.interface_name2 = FakeData.interface_name2 self.interface_name3 = FakeData.interface_name3 self.interface_name4 = FakeData.interface_name4 self.long_interface_name = FakeData.long_interface_name self.ip_address1 = FakeData.network_allocations_ip1 self.ip_address2 = FakeData.network_allocations_ip2 self.ip_address3 = FakeData.network_allocations_ip3 self.ip_address4 = FakeData.network_allocations_ip4 self.net_mask = FakeData.net_mask self.net_mask_v6 = FakeData.net_mask_v6 self.vlan_id = FakeData.segmentation_id self.cifs_server_name = FakeData.vdm_name self.domain_name = FakeData.domain_name self.domain_user = FakeData.domain_user self.domain_password = FakeData.domain_password self.dns_ip_address = FakeData.dns_ip_address self.device_name = FakeData.device_name self.interconnect_id = FakeData.interconnect_id self.rw_hosts = FakeData.rw_hosts self.ro_hosts = FakeData.ro_hosts self.nfs_host_ip = FakeData.nfs_host_ip self.rw_hosts_ipv6 = FakeData.rw_hosts_ipv6 self.ro_hosts_ipv6 = FakeData.ro_hosts_ipv6 self.nfs_host_ipv6 = FakeData.nfs_host_ipv6 self.fake_output = FakeData.fake_output @response def resp_get_error(self): return ( '' '' 'Fake description.' 'Fake action.' 'Fake diagnostics.' '' '' 'Fake description.' 'Fake action.' 
'Fake diagnostics.' '' ' ' ) @response def resp_get_without_value(self): return ( '' ) @response def resp_task_succeed(self): return ( '' '' '' ) @response def resp_task_error(self): return ( '' '' '' ) @response def resp_invalid_mover_id(self): return ( '' '' 'The Mover ID supplied with the request is invalid.' '' 'Refer to the XML API v2 schema/documentation and correct ' 'your user program logic.' ' Exception tag: 14fb692e556 Exception ' 'message: com.emc.nas.ccmd.common.MessageInstanceImpl@5004000d ' '' '' ' ' ) @response def resp_need_retry(self): return ('' '' '' ' fake desp. ' 'fake action ' '') @start_task def req_fake_start_task(self): return '' class FileSystemTestData(StorageObjectTestData): def __init__(self): super(FileSystemTestData, self).__init__() @start_task def req_create_on_vdm(self): return ( '' '' '' '' % {'name': self.filesystem_name, 'id': self.vdm_id, 'pool_id': self.pool_id, 'size': self.filesystem_size} ) @start_task def req_create_on_mover(self): return ( '' '' '' '' % {'name': self.filesystem_name, 'id': self.mover_id, 'pool_id': self.pool_id, 'size': self.filesystem_size} ) @response def resp_create_but_already_exist(self): return ( ' ' '' '' '' '' '' '' ' ' ) @start_task def req_delete(self): return ( '' % {'id': self.filesystem_id} ) @response def resp_delete_but_failed(self): return ( '' '' 'The file system ID supplied with the request is ' 'invalid.' 'Refer to the XML API v2 schema/documentation and correct ' 'your user program logic.' ' Exception tag: 14fb6b6a7b8 Exception ' 'message: com.emc.nas.ccmd.common.MessageInstanceImpl@5004000e ' '' '' ' ' ) @start_task def req_extend(self): return ( '' '' '' % {'id': self.filesystem_id, 'pool_id': self.pool_id, 'size': self.filesystem_new_size - self.filesystem_size} ) @response def resp_extend_but_error(self): return ( '' '' 'Fake description.' 'Fake action.' ' Fake diagnostics.' '' ' ' ) @query def req_get(self): return ( '' '' '' '' % {'name': self.filesystem_name} ) @response def resp_get_succeed(self): return ( '' '' '' '' '' % {'name': self.filesystem_name, 'id': self.filesystem_id, 'size': self.filesystem_size, 'pool_id': self.pool_id} ) @response def resp_get_but_miss_property(self): return ( '' '' '' '' '' % {'name': self.filesystem_name, 'id': self.filesystem_id, 'size': self.filesystem_size, 'pool_id': self.pool_id} ) @response def resp_get_but_not_found(self): return ( '' '' 'The query may be incomplete because some of the ' 'Celerra components are unavailable or do not exist. Another ' 'reason may be application error. ' 'If the entire Celerra is functioning correctly, ' 'check your client application logic. ' 'File system not found.' '' '' 'The query may be incomplete because some of the ' 'Celerra components are unavailable or do not exist. Another ' 'reason may be application error.' 'If the entire Celerra is functioning correctly, ' 'check your client application logic.' 'Migration file system not found.' 
'' ' ' ) def cmd_create_from_ckpt(self): return [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_fs', '-name', self.filesystem_name, '-type', 'uxfs', '-create', 'samesize=' + self.src_fileystems_name, 'pool=' + self.pool_name, 'storage=SINGLE', 'worm=off', '-thin', 'no', '-option', 'slice=y', ] def cmd_copy_ckpt(self): session_name = self.filesystem_name + ':' + self.src_snap_name return [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_copy', '-name', session_name[0:63], '-source', '-ckpt', self.src_snap_name, '-destination', '-fs', self.filesystem_name, '-interconnect', "id=" + self.interconnect_id, '-overwrite_destination', '-full_copy', ] output_copy_ckpt = "OK" error_copy_ckpt = "ERROR" def cmd_nas_fs_info(self): return [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_fs', '-info', self.filesystem_name, ] def output_info(self): return ( """output = id = 515 name = %(share_name)s acl = 0 in_use = True type = uxfs worm = off volume = v993 deduplication = Off thin_storage = True tiering_policy = Auto-Tier/Optimize Pool compressed= False mirrored = False ckpts = %(ckpt)s stor_devs = FNM00124500890-004B disks = d7 disk=d7 fakeinfo""" % {'share_name': self.filesystem_name, 'ckpt': self.snapshot_name}) def cmd_delete(self): return [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_fs', '-delete', self.snapshot_name, '-Force', ] class SnapshotTestData(StorageObjectTestData): def __init__(self): super(SnapshotTestData, self).__init__() @start_task def req_create(self): return ( '' '' '' % {'fsid': self.filesystem_id, 'name': self.snapshot_name, 'pool_id': self.pool_id} ) @start_task def req_create_with_size(self): return ( '' '' '' '' % {'fsid': self.filesystem_id, 'name': self.snapshot_name, 'pool_id': self.pool_id, 'size': self.snapshot_size} ) @response def resp_create_but_already_exist(self): return ( '' '' '' '' '' '' ) @query def req_get(self): return ( '' '' % {'name': self.snapshot_name} ) @response def resp_get_succeed(self): return ( '' '' % {'name': self.snapshot_name, 'fs_id': self.filesystem_id, 'snap_id': self.snapshot_id} ) @start_task def req_delete(self): return ( '' % {'id': self.snapshot_id} ) class MountPointTestData(StorageObjectTestData): def __init__(self): super(MountPointTestData, self).__init__() @start_task def req_create(self, mover_id, is_vdm=True): return ( '' '' '' % {'path': self.path, 'fs_id': self.filesystem_id, 'mover_id': mover_id, 'is_vdm': 'true' if is_vdm else 'false'} ) @response def resp_create_but_already_exist(self): return ( '' '' '' ' ' '' '' ' ') @start_task def req_delete(self, mover_id, is_vdm=True): return ( '' % {'path': self.path, 'mover_id': mover_id, 'is_vdm': 'true' if is_vdm else 'false'} ) @response def resp_delete_but_nonexistent(self): return ( '' ' ' ' ' '' '' ' ' ) @query def req_get(self, mover_id, is_vdm=True): return ( '' % {'mover_id': mover_id, 'is_vdm': 'true' if is_vdm else 'false'} ) @response def resp_get_succeed(self, mover_id, is_vdm=True): return ( '' '' '' '' % {'path': self.path, 'fsID': self.filesystem_id, 'mover_id': mover_id, 'is_vdm': 'true' if is_vdm else 'false'} ) def cmd_server_mount(self, mode): return [ 'env', 'NAS_DB=/nas', '/nas/bin/server_mount', self.vdm_name, '-option', mode, self.filesystem_name, self.path, ] def cmd_server_umount(self): return [ 'env', 'NAS_DB=/nas', '/nas/bin/server_umount', self.vdm_name, '-perm', self.snapshot_name, ] class VDMTestData(StorageObjectTestData): def __init__(self): super(VDMTestData, self).__init__() @start_task def req_create(self): return ( '' % {'mover_id': self.mover_id, 'vdm_name': self.vdm_name} ) @response def 
resp_create_but_already_exist(self): return ( '' '' '' 'Duplicate name specified' 'Specify a unqiue name' '' '' 'Duplicate name specified' 'Specify a unqiue name' '' '' ' ' ) @query def req_get(self): return '' @response def resp_get_succeed(self, name=None, interface1=None, interface2=None): if name is None: name = self.vdm_name if interface1 is None: interface1 = self.interface_name1 if interface2 is None: interface2 = self.interface_name2 return ( '' '' '' '
'%(interface1)s'
'%(interface2)s'
' '
    ' % {'vdm_name': name, 'vdm_id': self.vdm_id, 'mover_id': self.mover_id, 'interface1': interface1, 'interface2': interface2} ) @response def resp_get_but_not_found(self): return ( '' ) @start_task def req_delete(self): return '' % {'vdmid': self.vdm_id} def cmd_attach_nfs_interface(self, interface=None): if interface is None: interface = self.interface_name2 return [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_server', '-vdm', self.vdm_name, '-attach', interface, ] def cmd_detach_nfs_interface(self): return [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_server', '-vdm', self.vdm_name, '-detach', self.interface_name2, ] def cmd_get_interfaces(self): return [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_server', '-i', '-vdm', self.vdm_name, ] def output_get_interfaces_vdm(self, cifs_interface=FakeData.interface_name1, nfs_interface=FakeData.interface_name2): return ( """id = %(vdmid)s name = %(name)s acl = 0 type = vdm server = server_2 rootfs = root_fs_vdm_vdm-fakeid I18N mode = UNICODE mountedfs = member_of = status : defined = enabled actual = loaded, active Interfaces to services mapping: interface=%(nfs_if_name)s :vdm interface=%(cifs_if_name)s :cifs""" % {'vdmid': self.vdm_id, 'name': self.vdm_name, 'nfs_if_name': nfs_interface, 'cifs_if_name': cifs_interface} ) def output_get_interfaces_nfs(self, cifs_interface=FakeData.interface_name1, nfs_interface=FakeData.interface_name2): return ( """id = %(vdmid)s name = %(name)s acl = 0 type = vdm server = server_2 rootfs = root_fs_vdm_vdm-fakeid I18N mode = UNICODE mountedfs = member_of = status : defined = enabled actual = loaded, active Interfaces to services mapping: interface=%(nfs_if_name)s :nfs interface=%(cifs_if_name)s :cifs""" % {'vdmid': self.vdm_id, 'name': self.vdm_name, 'nfs_if_name': nfs_interface, 'cifs_if_name': cifs_interface} ) class PoolTestData(StorageObjectTestData): def __init__(self): super(PoolTestData, self).__init__() @query def req_get(self): return ( '' ) @response def resp_get_succeed(self, name=None, id=None): if not name: name = self.pool_name if not id: id = self.pool_id return ( '' '' '' '' '' '' '' % {'name': name, 'id': id, 'pool_used_size': self.pool_used_size, 'pool_total_size': self.pool_total_size} ) class MoverTestData(StorageObjectTestData): def __init__(self): super(MoverTestData, self).__init__() @query def req_get_ref(self): return ( '' '' '' ) @response def resp_get_ref_succeed(self, name=None): if not name: name = self.mover_name return ( '' '' 'The query may be incomplete because some of the ' 'Celerra components are unavailable or do not exist. Another ' 'reason may be application error.' 'If the entire Celerra is functioning correctly, ' 'check your client application logic.' 'Standby Data Mover server_2.faulted.server_3 is ' 'out of service.' 
'' '' '' '' % {'name': name, 'id': self.mover_id} ) @query def req_get(self): return ( '' '' '' % {'id': self.mover_id} ) @response def resp_get_succeed(self, name=None): if not name: name = self.mover_name return ( '' '' '' '' '' '' '' '' '' % {'id': self.mover_id, 'name': name, 'long_interface_name': self.long_interface_name[:31], 'interface_name1': self.interface_name1, 'interface_name2': self.interface_name2} ) @start_task def req_create_interface(self, if_name=FakeData.interface_name1, ip=FakeData.network_allocations_ip1): return ( '' % {'if_name': if_name, 'vlan': self.vlan_id, 'ip': ip, 'mover_id': self.mover_id, 'device_name': self.device_name, 'net_mask': self.net_mask} ) @start_task def req_create_interface_with_ipv6(self, if_name=FakeData.interface_name3, ip=FakeData.network_allocations_ip3): return ( '' % {'if_name': if_name, 'vlan': self.vlan_id, 'ip': ip, 'mover_id': self.mover_id, 'device_name': self.device_name, 'net_mask': self.net_mask_v6} ) @response def resp_create_interface_but_name_already_exist(self): return ( '' '' 'Duplicate name specified' 'Specify a unqiue name' '' '' % {'interface_name': self.interface_name1} ) @response def resp_create_interface_but_ip_already_exist(self): return ( '' '' '' '' '' % {'ip': self.ip_address1} ) @response def resp_create_interface_with_conflicted_vlan_id(self): return ( '' '' 'The operation cannot complete because other ' 'interfaces on the same subnet are in a different VLAN. ' 'The Data Mover requires all interfaces in the same subnet ' 'to be in the same VLAN.' 'Specify a VLAN to match other interfaces in the same ' 'subnet. To move multiple interfaces to a different VLAN, ' 'first set the VLAN id on each interface to 0, ' 'and then set their VLAN id\'s to the new VLAN number.' '' '' ) @start_task def req_delete_interface(self, ip=FakeData.network_allocations_ip1): return ( '' % {'ip': ip, 'mover_id': self.mover_id, } ) @response def resp_delete_interface_but_nonexistent(self): return ( '' '' '' '' '' '' ) def cmd_get_interconnect_id(self): return [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_cel', '-interconnect', '-l', ] def output_get_interconnect_id(self): return ( 'id name source_server destination_system destination_server\n' '%(id)s loopback %(src_server)s nas149 %(dest_server)s\n' % {'id': self.interconnect_id, 'src_server': self.mover_name, 'dest_server': self.mover_name} ) def cmd_get_physical_devices(self): return [ 'env', 'NAS_DB=/nas', '/nas/bin/server_sysconfig', self.mover_name, '-pci', ] def output_get_physical_devices(self): return ( 'server_2 : PCI DEVICES:\n' 'On Board:\n' ' PMC QE8 Fibre Channel Controller\n' ' 0: fcp-0-0 IRQ: 20 addr: 5006016047a00245\n' ' 0: fcp-0-1 IRQ: 21 addr: 5006016147a00245\n' ' 0: fcp-0-2 IRQ: 22 addr: 5006016247a00245\n' ' 0: fcp-0-3 IRQ: 23 addr: 5006016347a00245\n' ' Broadcom Gigabit Ethernet Controller\n' ' 0: cge-1-0 IRQ: 24\n' ' speed=auto duplex=auto txflowctl=disable rxflowctl=disable\n' ' Link: Up\n' ' 0: cge-1-1 IRQ: 25\n' ' speed=auto duplex=auto txflowctl=disable rxflowctl=disable\n' ' Link: Down\n' ' 0: cge-1-2 IRQ: 26\n' ' speed=auto duplex=auto txflowctl=disable rxflowctl=disable\n' ' Link: Down\n' ' 0: cge-1-3 IRQ: 27\n' ' speed=auto duplex=auto txflowctl=disable rxflowctl=disable\n' ' Link: Up\n' 'Slot: 4\n' ' PLX PCI-Express Switch Controller\n' ' 1: PLX PEX8648 IRQ: 10\n' ) class DNSDomainTestData(StorageObjectTestData): def __init__(self): super(DNSDomainTestData, self).__init__() @start_task def req_create(self, ip_addr=None): if ip_addr is None: ip_addr = 
self.dns_ip_address return ( '' % {'mover_id': self.mover_id, 'domain_name': self.domain_name, 'server_ips': ip_addr} ) @start_task def req_delete(self): return ( '' % {'mover_id': self.mover_id, 'domain_name': self.domain_name} ) class CIFSServerTestData(StorageObjectTestData): def __init__(self): super(CIFSServerTestData, self).__init__() @start_task def req_create(self, mover_id, is_vdm=True, ip_addr=None): if ip_addr is None: ip_addr = self.ip_address1 return ( '' '' '
'%(alias)s'
' '
    ' % {'ip': ip_addr, 'comp_name': self.cifs_server_name, 'name': self.cifs_server_name[-14:], 'mover_id': mover_id, 'alias': self.cifs_server_name[-12:], 'domain_user': self.domain_user, 'domain_password': self.domain_password, 'domain': self.domain_name, 'is_vdm': 'true' if is_vdm else 'false'} ) @query def req_get(self, mover_id, is_vdm=True): return ( '' '' '' % {'mover_id': mover_id, 'is_vdm': 'true' if is_vdm else 'false'} ) @response def resp_get_succeed(self, mover_id, is_vdm, join_domain, cifs_server_name=None, ip_addr=None): if cifs_server_name is None: cifs_server_name = self.cifs_server_name if ip_addr is None: ip_addr = self.ip_address1 return ( '' '' '
'%(alias)s'
  • ' % {'mover_id': mover_id, 'cifsserver': self.cifs_server_name[-14:], 'ip': ip_addr, 'is_vdm': 'true' if is_vdm else 'false', 'alias': self.cifs_server_name[-12:], 'domain': self.domain_name, 'join_domain': 'true' if join_domain else 'false', 'comp_name': cifs_server_name} ) @response def resp_get_without_interface(self, mover_id, is_vdm, join_domain): return ( '' '' '
'%(alias)s'
' '
    ' % {'mover_id': mover_id, 'cifsserver': self.cifs_server_name[-14:], 'is_vdm': 'true' if is_vdm else 'false', 'alias': self.cifs_server_name[-12:], 'domain': self.domain_name, 'join_domain': 'true' if join_domain else 'false', 'comp_name': self.cifs_server_name} ) @start_task def req_modify(self, mover_id, is_vdm=True, join_domain=False): return ( '' '' '' % {'mover_id': mover_id, 'is_vdm': 'true' if is_vdm else 'false', 'join_domain': 'true' if join_domain else 'false', 'cifsserver': self.cifs_server_name[-14:], 'username': self.domain_user, 'pw': self.domain_password} ) @response def resp_modify_but_already_join_domain(self): return ( ' ' '' 'Fake description' 'Fake action.' '' ' ' ) @response def resp_modify_but_unjoin_domain(self): return ( ' ' '' 'Fake description' 'Fake action.' '' ' ' ) @start_task def req_delete(self, mover_id, is_vdm=True): return ( '' % {'mover_id': mover_id, 'is_vdm': 'true' if is_vdm else 'false', 'cifsserver': self.cifs_server_name[-14:]} ) class CIFSShareTestData(StorageObjectTestData): def __init__(self): super(CIFSShareTestData, self).__init__() @start_task def req_create(self, mover_id, is_vdm=True): return ( '' '' '
'%(cifsserver)s'
' '
    ' % {'path': '/' + self.share_name, 'share_name': self.share_name, 'mover_id': mover_id, 'is_vdm': 'true' if is_vdm else 'false', 'cifsserver': self.cifs_server_name[-14:]} ) @start_task def req_delete(self, mover_id, is_vdm=True): return ( '' '
'%(cifsserver)s'
' '
    ' % {'share_name': self.share_name, 'mover_id': mover_id, 'is_vdm': 'true' if is_vdm else 'false', 'cifsserver': self.cifs_server_name[-12:]} ) @query def req_get(self): return '' % self.share_name @response def resp_get_succeed(self, mover_id, is_vdm=True): return ( '' '' '
'%(alias)s'
' '
' '
    ' % {'path': self.path, 'fsid': self.filesystem_id, 'name': self.share_name, 'moverid': mover_id, 'is_vdm': 'true' if is_vdm else 'false', 'alias': self.cifs_server_name[-12:]} ) def cmd_disable_access(self): cmd_str = 'sharesd %s set noaccess' % self.share_name return [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', self.vdm_name, '-v', '%s' % cmd_str, ] def cmd_change_access(self, access_level=const.ACCESS_LEVEL_RW, action='grant', user=None): if user is None: user = self.domain_user account = user + '@' + self.domain_name if access_level == const.ACCESS_LEVEL_RW: str_access = 'fullcontrol' else: str_access = 'read' allow_str = ( 'sharesd %(share_name)s %(action)s %(account)s=%(access)s' % {'share_name': self.share_name, 'action': action, 'account': account, 'access': str_access} ) return [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', self.vdm_name, '-v', '%s' % allow_str, ] def cmd_get_access(self): get_str = 'sharesd %s dump' % self.share_name return [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', self.vdm_name, '-v', '%s' % get_str, ] def output_allow_access(self): return ( "Command succeeded: :3 sharesd %(share)s grant " "%(user)s@%(domain)s=fullcontrol" % {'share': self.share_name, 'user': self.domain_user, 'domain': self.domain_name} ) def output_allow_access_but_duplicate_ace(self): return ( '%(vdm_name)s : commands processed: 1' 'output is complete' '1443422844: SMB: 6: ACE for %(domain)s\\%(user)s ' 'unchanged' '1443422844: ADMIN: 3: ' 'Command failed: :23 ' 'sharesd %(share)s grant %(user)s@%(domain)s=read' 'Error 4020: %(vdm_name)s : failed to complete command"' % {'share': self.share_name, 'user': self.domain_user, 'domain': self.domain_name, 'vdm_name': self.vdm_name} ) def output_deny_access_but_no_ace(self): return ( '%(vdm_name)s : commands processed: 1' 'output is complete' '1443515516: SMB: 6: No ACE found for %(domain)s\\%(user)s ' '1443515516: ADMIN: 3: ' 'Command failed: :26 ' 'sharesd %(share)s revoke %(user)s@%(domain)s=read' 'Error 4020: %(vdm_name)s : failed to complete command"' % {'share': self.share_name, 'user': self.domain_user, 'domain': self.domain_name, 'vdm_name': self.vdm_name} ) def output_deny_access_but_no_user_found(self): return ( '%(vdm_name)s : commands processed: 1' 'output is complete' '1443520322: SMB: 6: Cannot get mapping for %(domain)s\\%(user)s ' '1443520322: ADMIN: 3: ' 'Command failed: :26 ' 'sharesd %(share)s revoke %(user)s@%(domain)s=read' 'Error 4020: %(vdm_name)s : failed to complete command"' % {'share': self.share_name, 'user': self.domain_user, 'domain': self.domain_name, 'vdm_name': self.vdm_name} ) class NFSShareTestData(StorageObjectTestData): def __init__(self): super(NFSShareTestData, self).__init__() def cmd_create(self): default_access = 'access=-0.0.0.0/0.0.0.0' return [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', self.vdm_name, '-option', default_access, self.path, ] def output_create(self): return "%s : done" % self.vdm_name def cmd_get(self): return [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', self.vdm_name, '-P', 'nfs', '-list', self.path, ] def output_get_succeed(self, rw_hosts, ro_hosts): rw_hosts = [utils.convert_ipv6_format_if_needed(ip_addr) for ip_addr in rw_hosts] ro_hosts = [utils.convert_ipv6_format_if_needed(ip_addr) for ip_addr in ro_hosts] if rw_hosts and ro_hosts: return ( '%(mover_name)s :\nexport "%(path)s" ' 'access=%(host)s:-0.0.0.0/0.0.0.0 root=%(host)s ' 'rw=%(rw_host)s ro=%(ro_host)s\n' % {'mover_name': self.vdm_name, 'path': self.path, 'host': ":".join(rw_hosts + ro_hosts), 
'rw_host': ":".join(rw_hosts), 'ro_host': ":".join(ro_hosts)} ) elif rw_hosts: return ( '%(mover_name)s :\nexport "%(path)s" ' 'access=%(host)s:-0.0.0.0/0.0.0.0 root=%(host)s ' 'rw=%(rw_host)s\n' % {'mover_name': self.vdm_name, 'host': ":".join(rw_hosts), 'path': self.path, 'rw_host': ":".join(rw_hosts)} ) elif ro_hosts: return ( '%(mover_name)s :\nexport "%(path)s" ' 'access=%(host)s:-0.0.0.0/0.0.0.0 root=%(host)s ' 'ro=%(ro_host)s\n' % {'mover_name': self.vdm_name, 'host': ":".join(ro_hosts), 'path': self.path, 'ro_host': ":".join(ro_hosts)} ) else: return ( '%(mover_name)s :\nexport "%(path)s" ' 'access=-0.0.0.0/0.0.0.0\n' % {'mover_name': self.vdm_name, 'path': self.path} ) def output_get_but_not_found(self): return ( '%(mover_name)s : \nError 2: %(mover_name)s : ' 'No such file or directory \n' % {'mover_name': self.vdm_name} ) def cmd_delete(self): return [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', self.vdm_name, '-unexport', '-perm', self.path, ] def output_delete_succeed(self): return "%s : done" % self.vdm_name def output_delete_but_locked(self): return ("Error 2201: %s : unable to acquire lock(s), try later" % self.vdm_name) def cmd_set_access(self, rw_hosts, ro_hosts): rw_hosts = [utils.convert_ipv6_format_if_needed(ip_addr) for ip_addr in rw_hosts] ro_hosts = [utils.convert_ipv6_format_if_needed(ip_addr) for ip_addr in ro_hosts] access_str = ("access=%(access_hosts)s:-0.0.0.0/0.0.0.0," "root=%(root_hosts)s,rw=%(rw_hosts)s,ro=%(ro_hosts)s" % {'rw_hosts': ":".join(rw_hosts), 'ro_hosts': ":".join(ro_hosts), 'root_hosts': ":".join(rw_hosts + ro_hosts), 'access_hosts': ":".join(rw_hosts + ro_hosts)}) return [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', self.vdm_name, '-ignore', '-option', access_str, self.path, ] def output_set_access_success(self): return "%s : done" % self.vdm_name class FakeEMCShareDriver(object): def __init__(self, enas_type='vnx'): self.configuration = conf.Configuration(None) self.configuration.append_config_values = mock.Mock(return_value=0) self.configuration.emc_share_backend = FakeData.emc_share_backend self.configuration.vnx_server_container = FakeData.mover_name if enas_type == 'powermax': self.configuration.emc_share_backend = ( FakeData.powermax_share_backend) self.configuration.vmax_server_container = FakeData.mover_name self.configuration.emc_nas_server = FakeData.emc_nas_server self.configuration.emc_nas_login = FakeData.emc_nas_login self.configuration.emc_nas_password = FakeData.emc_nas_password self.configuration.share_backend_name = FakeData.share_backend_name CIFS_SHARE = fake_share.fake_share( id=FakeData.share_id, name=FakeData.share_name, size=FakeData.share_size, share_network_id=FakeData.share_network_id, share_server_id=FakeData.share_server_id, host=FakeData.host, share_proto='CIFS') NFS_SHARE = fake_share.fake_share( id=FakeData.share_id, name=FakeData.share_name, size=FakeData.share_size, share_network_id=FakeData.share_network_id, share_server_id=FakeData.share_server_id, host=FakeData.host, share_proto='NFS') CIFS_RW_ACCESS = fake_share.fake_access( access_type='user', access_to=FakeData.domain_user, access_level='rw') CIFS_RO_ACCESS = fake_share.fake_access( access_type='user', access_to=FakeData.domain_user, access_level='ro') NFS_RW_ACCESS = fake_share.fake_access( access_type='ip', access_to=FakeData.nfs_host_ip, access_level='rw') NFS_RW_ACCESS_IPV6 = fake_share.fake_access( access_type='ip', access_to=FakeData.nfs_host_ipv6, access_level='rw') NFS_RO_ACCESS = fake_share.fake_access( access_type='ip', 
access_to=FakeData.nfs_host_ip, access_level='ro') NFS_RO_ACCESS_IPV6 = fake_share.fake_access( access_type='ip', access_to=FakeData.nfs_host_ipv6, access_level='ro') SHARE_SERVER = { 'id': FakeData.share_server_id, 'share_network': { 'name': 'fake_share_network', 'id': FakeData.share_network_id }, 'share_network_id': FakeData.share_network_id, 'backend_details': { 'share_server_name': FakeData.vdm_name, 'cifs_if': FakeData.network_allocations_ip1, 'nfs_if': FakeData.network_allocations_ip2, } } SHARE_SERVER_IPV6 = { 'id': FakeData.share_server_id, 'share_network': { 'name': 'fake_share_network', 'id': FakeData.share_network_id }, 'share_network_id': FakeData.share_network_id, 'backend_details': { 'share_server_name': FakeData.vdm_name, 'cifs_if': FakeData.network_allocations_ip3, 'nfs_if': FakeData.network_allocations_ip4, } } SERVER_DETAIL = { 'share_server_name': FakeData.vdm_name, 'cifs_if': FakeData.network_allocations_ip1, 'nfs_if': FakeData.network_allocations_ip2, } SERVER_DETAIL_IPV6 = { 'share_server_name': FakeData.vdm_name, 'cifs_if': FakeData.network_allocations_ip3, 'nfs_if': FakeData.network_allocations_ip4, } SECURITY_SERVICE = [ { 'type': 'active_directory', 'domain': FakeData.domain_name, 'dns_ip': FakeData.dns_ip_address, 'user': FakeData.domain_user, 'password': FakeData.domain_password }, ] SECURITY_SERVICE_IPV6 = [ { 'type': 'active_directory', 'domain': FakeData.domain_name, 'dns_ip': FakeData.dns_ipv6_address, 'user': FakeData.domain_user, 'password': FakeData.domain_password }, ] NETWORK_INFO = { 'server_id': FakeData.share_server_id, 'cidr': FakeData.cidr, 'security_services': [ {'type': 'active_directory', 'domain': FakeData.domain_name, 'dns_ip': FakeData.dns_ip_address, 'user': FakeData.domain_user, 'password': FakeData.domain_password}, ], 'segmentation_id': FakeData.segmentation_id, 'network_type': 'vlan', 'network_allocations': [ {'id': FakeData.network_allocations_id1, 'ip_address': FakeData.network_allocations_ip1, 'ip_version': FakeData.network_allocations_ip_version1}, {'id': FakeData.network_allocations_id2, 'ip_address': FakeData.network_allocations_ip2, 'ip_version': FakeData.network_allocations_ip_version2} ] } NETWORK_INFO_IPV6 = { 'server_id': FakeData.share_server_id, 'cidr': FakeData.cidr_v6, 'security_services': [ {'type': 'active_directory', 'domain': FakeData.domain_name, 'dns_ip': FakeData.dns_ipv6_address, 'user': FakeData.domain_user, 'password': FakeData.domain_password}, ], 'segmentation_id': FakeData.segmentation_id, 'network_type': 'vlan', 'network_allocations': [ {'id': FakeData.network_allocations_id3, 'ip_address': FakeData.network_allocations_ip3, 'ip_version': FakeData.network_allocations_ip_version3}, {'id': FakeData.network_allocations_id4, 'ip_address': FakeData.network_allocations_ip4, 'ip_version': FakeData.network_allocations_ip_version4} ] } STATS = dict( share_backend_name='VNX', vendor_name='EMC', storage_protocol='NFS_CIFS', driver_version='2.0.0,') STATS_VMAX = dict( share_backend_name='VMAX', vendor_name='EMC', storage_protocol='NFS_CIFS', driver_version='2.0.0,') manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/0000775000175000017500000000000013656750362023500 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/__init__.py0000664000175000017500000000000013656750227025577 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/unity/0000775000175000017500000000000013656750362024650 5ustar 
zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/unity/utils.py0000664000175000017500000000226313656750227026365 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from os import path from unittest import mock import yaml from oslo_log import log LOG = log.getLogger(__name__) patch_system = mock.patch('storops.UnitySystem') def load_yaml(file_name): yaml_file = '{}/{}'.format(path.dirname(path.abspath(__file__)), file_name) with open(yaml_file) as f: res = yaml.safe_load(f) LOG.debug('Loaded yaml mock objects from %s.', yaml_file) return res patch_find_ports_by_mtu = mock.patch('manila.share.drivers.dell_emc.plugins.' 'unity.utils.find_ports_by_mtu') manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/unity/__init__.py0000664000175000017500000000136213656750227026763 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys from unittest import mock sys.modules['storops'] = mock.Mock() sys.modules['storops.unity'] = mock.Mock() manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/unity/res_mock.py0000664000175000017500000003107413656750227027031 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
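# --- Illustration (not part of the original module): why the unity test
# package's __init__ above drops mock.Mock() objects into sys.modules for
# 'storops', and why utils.patch_system targets 'storops.UnitySystem'. The
# Unity driver imports the storops SDK at module scope, so stubbing the
# module lets these unit tests import the driver without the real library,
# while the patch swaps in a per-test fake system. A minimal standalone
# sketch of the same pattern; the serial number and credentials are made up:
import sys
from unittest import mock

sys.modules.setdefault('storops', mock.Mock())   # make "import storops" succeed

import storops

with mock.patch('storops.UnitySystem') as patched_system:
    # Whatever the test configures on the patched class is what the code
    # under test sees when it builds a UnitySystem.
    patched_system.return_value.serial_number = 'APM00000000001'
    unity = storops.UnitySystem('10.0.0.1', 'admin', 'secret')
    assert unity.serial_number == 'APM00000000001'
# --- End of illustration.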
from unittest import mock from oslo_config import cfg from oslo_log import log from manila.share import configuration as conf from manila.share.drivers.dell_emc.plugins.unity import client from manila.share.drivers.dell_emc.plugins.unity import connection from manila.tests.db import fakes as db_fakes from manila.tests import fake_share from manila.tests.share.drivers.dell_emc.plugins.unity import fake_exceptions from manila.tests.share.drivers.dell_emc.plugins.unity import utils client.storops_ex = fake_exceptions connection.storops_ex = fake_exceptions LOG = log.getLogger(__name__) SYMBOL_TYPE = '_type' SYMBOL_PROPERTIES = '_properties' SYMBOL_METHODS = '_methods' SYMBOL_SIDE_EFFECT = '_side_effect' SYMBOL_RAISE = '_raise' CONF = cfg.CONF def _has_side_effect(node): return isinstance(node, dict) and SYMBOL_SIDE_EFFECT in node def _has_raise(node): return isinstance(node, dict) and SYMBOL_RAISE in node def fake_share_server(**kwargs): share_server = { 'instance_id': 'fake_instance_id', 'backend_details': {}, } share_server.update(kwargs) return db_fakes.FakeModel(share_server) def fake_network_info(**kwargs): network_info = { 'id': 'fake_net_id', 'name': 'net_name', 'subnet': [], } network_info.update(kwargs) return network_info def fake_server_detail(**kwargs): server_detail = { 'share_server_name': 'fake_server_name', } server_detail.update(kwargs) return server_detail def fake_security_services(**kwargs): return kwargs['services'] def fake_access(**kwargs): access = {} access.update(kwargs) return access class FakeEMCShareDriver(object): def __init__(self, dhss=None): if dhss in (True, False): CONF.set_default('driver_handles_share_servers', dhss) self.configuration = conf.Configuration(None) self.configuration.emc_share_backend = 'unity' self.configuration.emc_nas_server = '192.168.1.1' self.configuration.emc_nas_login = 'fake_user' self.configuration.emc_nas_password = 'fake_password' self.configuration.share_backend_name = 'EMC_NAS_Storage' self.configuration.vnx_server_meta_pool = 'nas_server_pool' self.configuration.unity_server_meta_pool = 'nas_server_pool' self.configuration.local_conf.max_over_subscription_ratio = 20 class FakeEMCShareDriverIPv6(object): def __init__(self, dhss=None): if dhss in (True, False): CONF.set_default('driver_handles_share_servers', dhss) self.configuration = conf.Configuration(None) self.configuration.emc_share_backend = 'unity' self.configuration.emc_nas_server = 'fa27:2a95:e734:0:0:0:0:01' self.configuration.emc_nas_login = 'fake_user' self.configuration.emc_nas_password = 'fake_password' self.configuration.share_backend_name = 'EMC_NAS_Storage' self.configuration.vnx_server_meta_pool = 'nas_server_pool' self.configuration.unity_server_meta_pool = 'nas_server_pool' self.configuration.local_conf.max_over_subscription_ratio = 20 STATS = dict( share_backend_name='Unity', vendor_name='EMC', storage_protocol='NFS_CIFS', driver_version='2.0.0,', pools=[], ) class DriverResourceMock(dict): fake_func_mapping = {} def __init__(self, yaml_file): yaml_dict = utils.load_yaml(yaml_file) if isinstance(yaml_dict, dict): for name, body in yaml_dict.items(): if isinstance(body, dict): props = body[SYMBOL_PROPERTIES] if isinstance(props, dict): for prop_name, prop_value in props.items(): if isinstance(prop_value, dict) and prop_value: # get the first key as the convert function func_name = list(prop_value.keys())[0] if func_name.startswith('_'): func = getattr(self, func_name) props[prop_name] = ( func(**prop_value[func_name])) if body[SYMBOL_TYPE] in 
self.fake_func_mapping: self[name] = ( self.fake_func_mapping[body[SYMBOL_TYPE]](**props)) class ManilaResourceMock(DriverResourceMock): fake_func_mapping = { 'share': fake_share.fake_share, 'snapshot': fake_share.fake_snapshot, 'network_info': fake_network_info, 'share_server': fake_share_server, 'server_detail': fake_server_detail, 'security_services': fake_security_services, 'access': fake_access, } def __init__(self, yaml_file): super(ManilaResourceMock, self).__init__(yaml_file) class StorageObjectMock(object): PROPS = 'props' def __init__(self, yaml_dict): self.__dict__[StorageObjectMock.PROPS] = {} props = yaml_dict.get(SYMBOL_PROPERTIES, None) if props: for k, v in props.items(): setattr(self, k, StoragePropertyMock(k, v)()) methods = yaml_dict.get(SYMBOL_METHODS, None) if methods: for k, v in methods.items(): setattr(self, k, StorageMethodMock(k, v)) def __setattr__(self, key, value): self.__dict__[StorageObjectMock.PROPS][key] = value def __getattr__(self, item): try: super(StorageObjectMock, self).__getattr__(item) except AttributeError: return self.__dict__[StorageObjectMock.PROPS][item] except KeyError: raise KeyError('No such method or property for mock object.') class StoragePropertyMock(mock.PropertyMock): def __init__(self, name, property_body): return_value = property_body side_effect = None # only support return_value and side_effect for property if _has_side_effect(property_body): side_effect = property_body[SYMBOL_SIDE_EFFECT] return_value = None if side_effect: super(StoragePropertyMock, self).__init__( name=name, side_effect=side_effect) elif return_value: super(StoragePropertyMock, self).__init__( name=name, return_value=_build_mock_object(return_value)) else: super(StoragePropertyMock, self).__init__( name=name, return_value=return_value) class StorageMethodMock(mock.Mock): def __init__(self, name, method_body): return_value = method_body exception = None side_effect = None # support return_value, side_effect and exception for method if _has_side_effect(method_body) or _has_raise(method_body): exception = method_body.get(SYMBOL_RAISE, None) side_effect = method_body.get(SYMBOL_SIDE_EFFECT, None) return_value = None if exception: if isinstance(exception, dict) and exception: ex_name = list(exception.keys())[0] ex = getattr(fake_exceptions, ex_name) super(StorageMethodMock, self).__init__( name=name, side_effect=ex(exception[ex_name])) elif side_effect: super(StorageMethodMock, self).__init__( name=name, side_effect=_build_mock_object(side_effect)) elif return_value is not None: super(StorageMethodMock, self).__init__( name=name, return_value=_build_mock_object(return_value)) else: super(StorageMethodMock, self).__init__( name=name, return_value=None) class StorageResourceMock(dict): def __init__(self, yaml_file): yaml_dict = utils.load_yaml(yaml_file) if isinstance(yaml_dict, dict): for section, sec_body in yaml_dict.items(): self[section] = {} if isinstance(sec_body, dict): for obj_name, obj_body in sec_body.items(): self[section][obj_name] = _build_mock_object(obj_body) def _is_mock_object(yaml_info): return (isinstance(yaml_info, dict) and (SYMBOL_PROPERTIES in yaml_info or SYMBOL_METHODS in yaml_info)) def _build_mock_object(yaml_dict): if _is_mock_object(yaml_dict): return StorageObjectMock(yaml_dict) elif isinstance(yaml_dict, dict): return {k: _build_mock_object(v) for k, v in yaml_dict.items()} elif isinstance(yaml_dict, list): return [_build_mock_object(each) for each in yaml_dict] else: return yaml_dict manila_res = ManilaResourceMock('mocked_manila.yaml') 
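# --- Illustration (not part of the original module): what _build_mock_object
# produces for a single YAML node. Keys under '_properties' become attributes
# on a StorageObjectMock, keys under '_methods' become Mock callables whose
# value (if any) is the return value; '_raise' and '_side_effect' switch the
# callable to raising or iterating instead. The inline node below is made up
# but has the same shape as the entries in mocked_unity.yaml:
_example_node = {
    SYMBOL_PROPERTIES: {'name': 'pool_1', 'size_total': 500000},
    SYMBOL_METHODS: {'get_id': 'pool_1'},
}
_example_pool = _build_mock_object(_example_node)
assert _example_pool.name == 'pool_1'        # property lookup
assert _example_pool.get_id() == 'pool_1'    # method call returns the value
# --- End of illustration.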
unity_res = StorageResourceMock('mocked_unity.yaml') STORAGE_RES_MAPPING = { 'TestClient': unity_res, 'TestConnection': unity_res, 'TestConnectionDHSSFalse': unity_res, } def mock_input(resource): def inner_dec(func): def decorated(cls, *args, **kwargs): if cls._testMethodName in resource: storage_res = resource[cls._testMethodName] return func(cls, storage_res, *args, **kwargs) return decorated return inner_dec mock_client_input = mock_input(unity_res) def patch_client(func): def client_decorator(cls, *args, **kwargs): storage_res = {} if func.__name__ in STORAGE_RES_MAPPING[cls.__class__.__name__]: storage_res = ( STORAGE_RES_MAPPING[cls.__class__.__name__][func.__name__]) with utils.patch_system as patched_system: if 'unity' in storage_res: patched_system.return_value = storage_res['unity'] _client = client.UnityClient(host='fake_host', username='fake_user', password='fake_passwd') return func(cls, _client, *args, **kwargs) return client_decorator def mock_driver_input(resource): def inner_dec(func): def decorated(cls, *args, **kwargs): return func(cls, resource, *args, **kwargs) return decorated return inner_dec mock_manila_input = mock_driver_input(manila_res) def patch_connection_init(func): def connection_decorator(cls, *args, **kwargs): storage_res = {} if func.__name__ in STORAGE_RES_MAPPING[cls.__class__.__name__]: storage_res = ( STORAGE_RES_MAPPING[cls.__class__.__name__][func.__name__]) with utils.patch_system as patched_system: if 'unity' in storage_res: patched_system.return_value = storage_res['unity'] conn = connection.UnityStorageConnection(LOG) return func(cls, conn, *args, **kwargs) return connection_decorator def do_connection_connect(conn, res): conn.config = None conn.client = client.UnityClient(host='fake_host', username='fake_user', password='fake_passwd') conn.pool_conf = ['pool_1', 'pool_2'] conn.pool_set = set(['pool_1', 'pool_2']) conn.reserved_percentage = 0 conn.max_over_subscription_ratio = 20 conn.port_set = set(['spa_eth1', 'spa_eth2']) conn.nas_server_pool = StorageObjectMock(res['nas_server_pool']) conn.storage_processor = StorageObjectMock(res['sp_a']) def patch_connection(func): def connection_decorator(cls, *args, **kwargs): storage_res = {} if func.__name__ in STORAGE_RES_MAPPING[cls.__class__.__name__]: storage_res = ( STORAGE_RES_MAPPING[cls.__class__.__name__][func.__name__]) with utils.patch_system as patched_system: conn = connection.UnityStorageConnection(LOG) if 'unity' in storage_res: patched_system.return_value = storage_res['unity'] do_connection_connect( conn, STORAGE_RES_MAPPING[cls.__class__.__name__]) return func(cls, conn, *args, **kwargs) return connection_decorator manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/unity/mocked_unity.yaml0000664000175000017500000010463313656750227030235 0ustar zuulzuul00000000000000sp_a: &sp_a _properties: name: 'SPA' id: 'SPA' existed: true _methods: get_id: 'SPA' sp_b: &sp_b _properties: name: 'SPB' id: 'SPB' existed: true _methods: get_id: 'SPB' sp_c: &sp_invalid _properties: id: 'SPC' existed: false interface_1: &interface_1 _properties: ip_address: 'fake_ip_addr_1' interface_2: &interface_2 _properties: ip_address: 'fake_ip_addr_2' interface_ipv6: &interface_ipv6 _properties: ip_addr: '2001:db8:0:1:f816:3eff:fe76:35c4' gateway: '2001:db8:0:1::1' prefix_length: '64' vlan_id: '201' nas_server: &nas_server _properties: &nas_server_prop name: '78fd845f-8e7d-487f-bfde-051d83e78103' file_interface: [*interface_1, *interface_2] current_sp: *sp_a home_sp: *sp_a nas_server_ipv6: &nas_server_ipv6 
_properties: &nas_server_ipv6_prop name: 'af1eef2f-be66-4df1-8f25-9720f087da05' file_interface: [*interface_ipv6] current_sp: *sp_a home_sp: *sp_a filesystem_base: &filesystem_base _properties: &filesystem_base_prop name: 'fake_filesystem_name' id: 'fake_filesystem_id' size_total: 50000000000 is_thin_enabled: true pool: null nas_server: null cifs_share: [] nfs_share: [] _methods: has_snap: False snap_base: _properties: &snap_base_prop name: 'fake_snap_name' id: 'fake_snap_id' size: 50000000000 filesystem: *filesystem_base share_base: _properties: &share_base_prop name: 'fake_share_name' id: 'fake_share_id' filesystem: null snap: null cifs_share_base: &cifs_share_base _properties: &cifs_share_base_prop <<: *share_base_prop nfs_share_base: &nfs_share_base _properties: &nfs_share_base_prop <<: *share_base_prop pool_base: _properties: &pool_base_prop name: 'fake_pool_name' pool_id: 0 state: Ready user_capacity_gbs: 1311 total_subscribed_capacity_gbs: 131 available_capacity_gbs: 132 percent_full_threshold: 70 fast_cache: True pool_1: &pool_1 _properties: &pool_1_prop <<: *pool_base_prop name: 'pool_1' size_total: 500000 size_used: 10000 size_subscribed: 30000 pool_2: &pool_2 _properties: &pool_2_prop <<: *pool_base_prop name: 'pool_2' size_total: 600000 size_used: 20000 size_subscribed: 40000 nas_server_pool: &nas_server_pool _properties: <<: *pool_base_prop name: 'nas_server_pool' port_base: _properties: &port_base_prop is_link_up: true id: 'fake_name' parent_storage_processor: *sp_a port_1: &port_1 _properties: <<: *port_base_prop is_link_up: true id: 'spa_eth1' parent_storage_processor: *sp_a _methods: get_id: 'spa_eth1' port_2: &port_2 _properties: <<: *port_base_prop is_link_up: true id: 'spa_eth2' parent_storage_processor: *sp_a _methods: get_id: 'spa_eth2' port_3: &port_internal_port _properties: <<: *port_base_prop is_link_up: true id: 'internal_port' parent_storage_processor: *sp_a _methods: get_id: 'internal_port' port_4: &port_4 _properties: <<: *port_base_prop is_link_up: true id: 'spb_eth1' parent_storage_processor: *sp_b _methods: get_id: 'spb_eth1' la_port: &la_port _properties: is_link_up: true id: 'spa_la_4' parent_storage_processor: *sp_a _methods: get_id: 'spa_la_4' tenant_1: &tenant_1 _properties: id: "tenant_1" name: "Tenant1" uuid: "173ca6c3-5952-427d-82a6-df88f49e3926" vlans: [2] snapshot_1: &snapshot_1 _properties: id: "snapshot_1" name: "Snapshot_1" _methods: restore: True unity_base: &unity_base _methods: &unity_base_method get_sp: *sp_a get_pool: _side_effect: [[*pool_1, *pool_2, *nas_server_pool], *nas_server_pool] get_file_port: [*port_1, *port_2] test_connect: &test_connect unity: *unity_base test_connect_with_ipv6: &test_connect_with_ipv6 unity: *unity_base test_dhss_false_connect: &test_dhss_false_connect unity: *unity_base test_connect__invalid_sp_configuration: unity: _methods: <<: *unity_base_method get_sp: *sp_invalid test_connect__invalid_pool_configuration: *test_connect test_create_nfs_share: nfs_share: &nfs_share__test_create_nfs_share _properties: <<: *nfs_share_base_prop name: 'cb532599-8dc6-4c3e-bb21-74ea54be566c' pool: &pool__test_create_nfs_share _properties: <<: *pool_base_prop name: 'Pool_2' _methods: create_nfs_share: None unity: _methods: <<: *unity_base_method get_pool: _side_effect: [*pool__test_create_nfs_share] get_nas_server: *nas_server test_create_cifs_share: cifs_share: &cifs_share__test_create_cifs_share _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: enable_ace: filesystem: 
&filesystem__test_create_cifs_share _properties: &filesystem_prop__test_create_cifs_share <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd4587340' _methods: create_cifs_share: *cifs_share__test_create_cifs_share pool: &pool__test_create_cifs_share _properties: <<: *pool_base_prop name: 'Pool_2' _methods: create_filesystem: *filesystem__test_create_cifs_share unity: _methods: <<: *unity_base_method get_pool: _side_effect: [*pool__test_create_cifs_share] get_nas_server: *nas_server test_dhss_false_create_nfs_share: nfs_share: &nfs_share__test_dhss_false_create_nfs_share _properties: <<: *nfs_share_base_prop name: 'cb532599-8dc6-4c3e-bb21-74ea54be566c' pool: &pool__test_dhss_false_create_nfs_share _properties: <<: *pool_base_prop name: 'Pool_2' _methods: create_nfs_share: None unity: _methods: <<: *unity_base_method get_pool: _side_effect: [*pool__test_dhss_false_create_nfs_share] get_nas_server: *nas_server test_dhss_false_create_cifs_share: cifs_share: &cifs_share__test_dhss_false_create_cifs_share _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: enable_ace: filesystem: &filesystem__test_dhss_false_create_cifs_share _properties: &filesystem_prop__test_dhss_false_create_cifs_share <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd4587340' _methods: create_cifs_share: *cifs_share__test_dhss_false_create_cifs_share pool: &pool__test_dhss_false_create_cifs_share _properties: <<: *pool_base_prop name: 'Pool_2' _methods: create_filesystem: *filesystem__test_dhss_false_create_cifs_share unity: _methods: <<: *unity_base_method get_pool: _side_effect: [*pool__test_dhss_false_create_cifs_share] get_nas_server: *nas_server test_create_share_with_invalid_share_server: pool: &pool__test_create_share_with_invalid_share_server _properties: <<: *pool_base_prop name: 'Pool_2' unity: _methods: <<: *unity_base_method get_pool: _side_effect: [*pool__test_create_share_with_invalid_share_server] get_nas_server: _raise: UnityResourceNotFoundError: 'Failed to get NAS server.' test_delete_share: filesystem: &filesystem__test_delete_share _properties: &filesystem_prop__test_delete_share <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: delete: update: has_snap: False cifs_share: &cifs_share__test_delete_share _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_delete_share _methods: delete: unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_delete_share test_delete_share__with_invalid_share: unity: _methods: <<: *unity_base_method get_cifs_share: _raise: UnityResourceNotFoundError: 'Failed to get CIFS share.' 
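# --- Illustration (not part of the original fixture; no test consumes this
# key): anatomy of one entry. The top-level key matches a test method name;
# values under '_properties' become attributes on the generated mock, values
# under '_methods' become callables returning the given value (or None when
# the value is empty), and '_raise' maps an exception class name from
# fake_exceptions to the message it is raised with. Anchors (&name) and merge
# keys (<<: *name) reuse the *_base definitions earlier in this file. The
# entry below is a made-up example in that shape:
test_example_entry_illustration_only:
  cifs_share: &cifs_share__example_illustration
    _properties:
      <<: *cifs_share_base_prop
      name: 'example-share-name'
    _methods:
      delete:
      enable_ace:
        _raise:
          UnityException: 'enable_ace failed (made-up message)'
  unity:
    _methods:
      <<: *unity_base_method
      get_cifs_share: *cifs_share__example_illustration
# --- End of illustration.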
test_delete_share__create_from_snap: filesystem: &filesystem__test_delete_share__create_from_snap _properties: &filesystem_prop__test_delete_share__create_from_snap <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd4587340' _methods: delete: update: has_snap: False snap: &snap__test_delete_share__create_from_snap _properties: &snap_prop__test_delete_share__create_from_snap <<: *snap_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_delete_share__create_from_snap _methods: delete: cifs_share: &cifs_share__test_delete_share__create_from_snap _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' snap: *snap__test_delete_share__create_from_snap _methods: delete: unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_delete_share__create_from_snap get_snap: *snap__test_delete_share__create_from_snap test_delete_share__create_from_snap_but_not_isolated: filesystem: &filesystem__test_delete_share__create_from_snap_but_not_isolated _properties: &filesystem_prop__test_delete_share__create_from_snap_but_not_isolated <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd4587340' cifs_share: [*cifs_share_base] nfs_share: [*nfs_share_base] _methods: delete: update: has_snap: True snap: &snap__test_delete_share__create_from_snap_but_not_isolated _properties: &snap_prop__test_delete_share__create_from_snap_but_not_isolated <<: *snap_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_delete_share__create_from_snap_but_not_isolated _methods: delete: cifs_share: &cifs_share__test_delete_share__create_from_snap_but_not_isolated _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' snap: *snap__test_delete_share__create_from_snap_but_not_isolated _methods: delete: unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_delete_share__create_from_snap_but_not_isolated test_delete_share__but_not_isolated: filesystem: &filesystem__test_delete_share__but_not_isolated _properties: &filesystem_prop__test_delete_share__but_not_isolated <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd4587340' _methods: update: has_snap: True cifs_share: &cifs_share__test_delete_share__but_not_isolated _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_delete_share__but_not_isolated _methods: delete: unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_delete_share__but_not_isolated test_extend_cifs_share: filesystem: &filesystem__test_extend_cifs_share _properties: &filesystem_prop__test_extend_cifs_share <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: extend: cifs_share: &cifs_share__test_extend_cifs_share _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_extend_cifs_share unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_extend_cifs_share test_extend_nfs_share: filesystem: &filesystem__test_extend_nfs_share _properties: &filesystem_prop__test_extend_nfs_share <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: extend: cifs_share: &cifs_share__test_extend_nfs_share _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_extend_nfs_share unity: _methods: <<: *unity_base_method get_nfs_share: *cifs_share__test_extend_nfs_share test_shrink_cifs_share: 
filesystem: &filesystem__test_shrink_cifs_share _properties: &filesystem_prop__test_shrink_cifs_share <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: shrink: cifs_share: &cifs_share__test_shrink_cifs_share _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_shrink_cifs_share unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_shrink_cifs_share test_shrink_nfs_share: filesystem: &filesystem__test_shrink_nfs_share _properties: &filesystem_prop__test_shrink_nfs_share <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: shrink: cifs_share: &cifs_share__test_shrink_nfs_share _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_shrink_nfs_share unity: _methods: <<: *unity_base_method get_nfs_share: *cifs_share__test_shrink_nfs_share test_extend_share__create_from_snap: snap: &snap__test_extend_share__create_from_snap _properties: &snap_prop__test_extend_share__create_from_snap <<: *snap_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' cifs_share: &cifs_share__test_extend_share__create_from_snap _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' snap: *snap__test_extend_share__create_from_snap unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_extend_share__create_from_snap test_shrink_share_create_from_snap: snap: &snap__test_shrink_share_create_from_snap _properties: &snap_prop__test_shrink_share__create_from_snap <<: *snap_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' cifs_share: &cifs_share__test_shrink_share__create_from_snap _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' snap: *snap__test_shrink_share_create_from_snap unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_shrink_share__create_from_snap test_create_snapshot_from_filesystem: filesystem: &filesystem__test_create_snapshot_from_filesystem _properties: &filesystem_prop__test_create_snapshot_from_filesystem <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: create_snap: cifs_share: &cifs_share__test_create_snapshot_from_filesystem _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_create_snapshot_from_filesystem unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_create_snapshot_from_filesystem test_create_snapshot_from_snapshot: snap: &snap__test_create_snapshot_from_snapshot _properties: &snap_prop__test_create_snapshot_from_snapshot <<: *snap_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: create_snap: cifs_share: &cifs_share__test_create_snapshot_from_snapshot _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' snap: *snap__test_create_snapshot_from_snapshot unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_create_snapshot_from_snapshot get_snap: *snap__test_create_snapshot_from_snapshot test_delete_snapshot: snap: &snap__test_delete_snapshot _properties: &snap_prop__test_delete_snapshot <<: *snap_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: delete: unity: _methods: <<: *unity_base_method get_snap: *snap__test_delete_snapshot test_ensure_share_exists: cifs_share: &cifs_share_ensure_share_exists _properties: existed: True unity: _methods: <<: *unity_base_method get_cifs_share: 
*cifs_share_ensure_share_exists test_ensure_share_not_exists: cifs_share: &cifs_share_ensure_share_not_exists _properties: existed: False unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share_ensure_share_not_exists test_update_share_stats: unity: _methods: <<: *unity_base_method get_pool: _side_effect: [[*pool_1, *pool_2]] test_update_share_stats__nonexistent_pools: unity: _methods: <<: *unity_base_method get_pool: _side_effect: [[]] test_get_pool: filesystem: &filesystem__test_get_pool _properties: &filesystem_prop__test_get_pool <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' pool: *pool_1 cifs_share: &cifs_share__test_get_pool _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_get_pool unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_get_pool test_setup_server: &test_setup_server nas_server_1: &nas_server_1__test_setup_server _properties: <<: *nas_server_prop existed: false home_sp: *sp_a ip_port: &ip_port _methods: set_mtu: nas_server_2: &nas_server_2__test_setup_server _properties: <<: *nas_server_prop _methods: &nas_server_2__test_setup_server_mehtod create_file_interface: enable_nfs_service: unity: _methods: &unity_method__test_setup_server <<: *unity_base_method get_nas_server: *nas_server_1__test_setup_server create_nas_server: *nas_server_2__test_setup_server get_ip_port: *ip_port test_setup_server__vlan_network: <<: *test_setup_server nas_server: &nas_server__test_setup_server_flat_network _properties: <<: *nas_server_prop existed: true _methods: create_file_interface: create_dns_server: enable_nfs_service: unity: _methods: <<: *unity_method__test_setup_server get_nas_server: *nas_server__test_setup_server_flat_network create_tenant: *tenant_1 test_setup_server__vxlan_network: <<: *test_setup_server nas_server_2: &nas_server_2__test_setup_server__vxlan_network _properties: <<: *nas_server_prop _methods: delete: unity: _methods: <<: *unity_method__test_setup_server get_nas_server: *nas_server_2__test_setup_server__vxlan_network test_setup_server__active_directory: <<: *test_setup_server nas_server_2: &nas_server_2__test_setup_server__active_directory _properties: <<: *nas_server_prop _methods: create_file_interface: create_dns_server: enable_cifs_service: enable_nfs_service: unity: _methods: &unity_method__test_setup_server__active_directory <<: *unity_method__test_setup_server create_nas_server: *nas_server_2__test_setup_server__active_directory create_tenant: *tenant_1 test_setup_server__kerberos: *test_setup_server test_setup_server__throw_exception: <<: *test_setup_server nas_server_1: &nas_server_1__test_setup_server__throw_exception _properties: <<: *nas_server_prop existed: false nas_server_2: &nas_server_2__test_setup_server__throw_exception _properties: <<: *nas_server_prop tenant: _methods: create_file_interface: create_dns_server: enable_cifs_service: enable_nfs_service: _raise: UnityException: 'Failed to enable NFS service.' 
delete: unity: _methods: <<: *unity_method__test_setup_server get_nas_server: *nas_server_2__test_setup_server__throw_exception create_nas_server: *nas_server_2__test_setup_server__throw_exception create_tenant: *tenant_1 test_teardown_server: tenant: _properties: nas_servers: [] _methods: delete: nas_server: &nas_server__test_teardown_server _properties: <<: *nas_server_prop tenant: _methods: delete: unity: _methods: <<: *unity_base_method get_nas_server: *nas_server__test_teardown_server test__get_managed_pools: &test__get_managed_pools unity: _methods: <<: *unity_base_method get_pool: [*pool_1, *pool_2, *nas_server_pool] test__get_managed_pools__invalid_pool_configuration: *test__get_managed_pools test_validate_port_configuration: &test_validate_port_configuration unity: _methods: <<: *unity_base_method get_file_port: [*port_1, *port_2, *port_internal_port, *port_4, *la_port] test_validate_port_configuration_exception: *test_validate_port_configuration test__get_managed_pools__invalid_port_configuration: *test_validate_port_configuration test_create_cifs_share_from_snapshot: cifs_share: &cifs_share__test_create_cifs_share_from_snapshot _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: enable_ace: snapshot_1: &snapshot_1__test_create_cifs_share_from_snapshot _properties: &snapshot_1_prop__test_create_cifs_share_from_snapshot <<: *snap_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: create_cifs_share: *cifs_share__test_create_cifs_share_from_snapshot snapshot_2: &snapshot_2__test_create_cifs_share_from_snapshot _properties: &snapshot__prop__test_create_cifs_share_from_snapshot <<: *snap_base_prop name: '716100cc-e0b4-416b-ac27-d38dd4587340' _methods: create_snap: *snapshot_1__test_create_cifs_share_from_snapshot unity: _methods: <<: *unity_base_method get_nas_server: *nas_server get_snap: *snapshot_2__test_create_cifs_share_from_snapshot test_create_nfs_share_from_snapshot: nfs_share: &nfs_share__test_create_nfs_share_from_snapshot _properties: <<: *nfs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: enable_ace: snapshot_1: &snapshot_1__test_create_nfs_share_from_snapshot _properties: &snapshot_1_prop__test_create_nfs_share_from_snapshot <<: *snap_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: create_nfs_share: *nfs_share__test_create_nfs_share_from_snapshot snapshot_2: &snapshot_2__test_create_nfs_share_from_snapshot _properties: &snapshot__prop__test_create_nfs_share_from_snapshot <<: *snap_base_prop name: '716100cc-e0b4-416b-ac27-d38dd4587340' _methods: create_snap: *snapshot_1__test_create_nfs_share_from_snapshot unity: _methods: <<: *unity_base_method get_nas_server: *nas_server get_snap: *snapshot_2__test_create_nfs_share_from_snapshot test_create_share_from_snapshot_no_server_name: unity: _methods: <<: *unity_base_method get_nas_server: _raise: UnityResourceNotFoundError: 'NAS server is not found' test_clear_share_access_cifs: cifs_share: &cifs_share__test_clear_share_access_cifs _methods: clear_access: _raise: UnityException: 'clear cifs access invoked' unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_clear_share_access_cifs test_clear_share_access_nfs: nfs_share: &nfs_share__test_clear_share_access_nfs _methods: clear_access: _raise: UnityException: 'clear nfs access invoked' unity: _methods: <<: *unity_base_method get_nfs_share: *nfs_share__test_clear_share_access_nfs test_allow_rw_cifs_share_access: &test_allow_rw_cifs_share_access cifs_share: 
&cifs_share__test_allow_rw_cifs_share_access _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: add_ace: unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_allow_rw_cifs_share_access test_update_access_allow_rw: *test_allow_rw_cifs_share_access test_update_access_recovery: cifs_share: &cifs_share__test_update_access_recovery _methods: add_ace: clear_access: unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_update_access_recovery test_allow_ro_cifs_share_access: *test_allow_rw_cifs_share_access test_allow_rw_nfs_share_access: nfs_share: &nfs_share__test_allow_rw_nfs_share_access _properties: <<: *nfs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: allow_read_write_access: allow_root_access: unity: _methods: <<: *unity_base_method get_nfs_share: *nfs_share__test_allow_rw_nfs_share_access test_allow_ro_nfs_share_access: nfs_share: &nfs_share__test_allow_ro_nfs_share_access _properties: <<: *nfs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: allow_read_only_access: unity: _methods: <<: *unity_base_method get_nfs_share: *nfs_share__test_allow_ro_nfs_share_access test_deny_cifs_share_access: cifs_share: &cifs_share__test_deny_cifs_share_access _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: delete_ace: unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_deny_cifs_share_access test_deny_nfs_share_access: &test_deny_nfs_share_access nfs_share: &nfs_share__test_deny_nfs_share_access _properties: <<: *nfs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' _methods: delete_access: unity: _methods: <<: *unity_base_method get_nfs_share: *nfs_share__test_deny_nfs_share_access test_update_access_deny_nfs: *test_deny_nfs_share_access # The following test cases are for client.py test_create_cifs_share__existed_expt: filesystem: _methods: create_cifs_share: _raise: UnitySmbShareNameExistedError: 'CIFS share already exists.' cifs_share: &cifs_share__test_create_cifs_share__existed_expt _properties: name: '716100cc-e0b4-416b-ac27-d38dd019330d' unity: _methods: get_cifs_share: *cifs_share__test_create_cifs_share__existed_expt test_create_nfs_share__existed_expt: filesystem: _methods: create_nfs_share: _raise: UnityNfsShareNameExistedError: 'NFS share already exists.' nfs_share: &nfs_share__test_create_nfs_share__existed_expt _properties: name: '716100cc-e0b4-416b-ac27-d38dd019330d' unity: _methods: get_nfs_share: *nfs_share__test_create_nfs_share__existed_expt test_create_nfs_share_batch: nfs_share: &nfs_share__test_create_nfs_share_batch _properties: name: '716100cc-e0b4-416b-ac27-d38dd019330d' unity: _methods: get_nfs_share: *nfs_share__test_create_nfs_share_batch pool: _methods: create_nfs_share: nas_server: _properties: <<: *nas_server_prop nfs_share: _properties: name: '716100cc-e0b4-416b-ac27-d38dd019330d' size: 151081080 test_get_share_with_invalid_proto: share: _properties: <<: *share_base_prop test_create_filesystem__existed_expt: filesystem: &filesystem__test_create_filesystem__existed_expt _properties: name: '716100cc-e0b4-416b-ac27-d38dd019330d' size: 10 proto: 'CIFS' pool: _methods: create_filesystem: _raise: UnityFileSystemNameAlreadyExisted: 'Pool already exists.' 
nas_server: _properties: <<: *nas_server_prop unity: _methods: get_filesystem: *filesystem__test_create_filesystem__existed_expt test_delete_filesystem__nonexistent_expt: filesystem: _properties: name: already removed filsystem _methods: delete: _raise: UnityResourceNotFoundError: 'Filesystem is non-existent.' test_create_nas_server__existed_expt: sp: _properites: name: 'SP' pool: _properites: name: 'fake_pool' nas_server: &nas_server__test_create_nas_server__existed_expt _properties: <<: *nas_server_prop unity: _methods: create_nas_server: _raise: UnityNasServerNameUsedError: 'NAS Server already exists.' get_nas_server: *nas_server__test_create_nas_server__existed_expt test_delete_nas_server__nonexistent_expt: nas_server: &nas_server__test_delete_nas_server__nonexistent_expt _properties: <<: *nas_server_prop tenant: _methods: delete: _raise: UnityResourceNotFoundError: 'NAS server is non-existent.' unity: _methods: get_nas_server: *nas_server__test_delete_nas_server__nonexistent_expt test_create_dns_server__existed_expt: nas_server: _methods: create_dns_server: _raise: UnityOneDnsPerNasServerError: 'DNS server already exists.' test_create_interface__existed_expt: nas_server: _properties: <<: *nas_server_prop _methods: create_file_interface: _raise: UnityIpAddressUsedError: 'IP address is already used.' test_enable_cifs_service__existed_expt: nas_server: _properties: <<: *nas_server_prop _methods: enable_cifs_service: _raise: UnitySmbNameInUseError: 'CIFS server already exists.' test_enable_nfs_service__existed_expt: nas_server: _properties: <<: *nas_server_prop _methods: enable_nfs_service: _raise: UnityNfsAlreadyEnabledError: 'NFS server already exists.' test_create_snapshot__existed_expt: filesystem: _properties: <<: *filesystem_base_prop _methods: create_snap: _raise: UnitySnapNameInUseError: 'Snapshot already exists.' snapshot: _properties: <<: *snap_base_prop test_create_snap_of_snap__existed_expt: src_snapshot: _methods: create_snap: _raise: UnitySnapNameInUseError: 'Snapshot already exists.' dest_snapshot: &dest_snapshot__test_create_snap_of_snap__existed_expt _properties: <<: *snap_base_prop unity: _methods: get_snap: *dest_snapshot__test_create_snap_of_snap__existed_expt test_delete_snapshot__nonexistent_expt: snapshot: _properties: <<: *snap_base_prop _methods: delete: _raise: UnityResourceNotFoundError: 'Snapshot is non-existent.' 
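# --- Illustration (not part of the original fixture; no test consumes this
# key): '_side_effect' versus '_raise'. A list under '_side_effect' is passed
# to mock.Mock(side_effect=...), so consecutive calls return consecutive
# items (unity_base relies on this: the first get_pool() call returns the
# pool list, the second returns nas_server_pool). '_raise' instantiates the
# named fake_exceptions class with the given message and raises it on every
# call. A made-up entry showing both:
test_side_effect_illustration_only:
  unity:
    _methods:
      get_pool:
        _side_effect: [[*pool_1, *pool_2], *nas_server_pool]
      get_snap:
        _raise:
          UnityResourceNotFoundError: 'made-up: snapshot not found'
# --- End of illustration.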
test_nfs_deny_access__nonexistent_expt: nfs_share: &nfs_share__test_nfs_deny_access__nonexistent_expt _methods: delete_access: _raise: UnityHostNotFoundException: "Unity Host is non-existent" unity: _methods: get_nfs_share: *nfs_share__test_nfs_deny_access__nonexistent_expt test_get_storage_processor: unity: _methods: get_sp: *sp_a test_extend_filesystem: fs: _methods: get_id: 'svc_12' extend: _raise: UnityNothingToModifyError: test_shrink_filesystem: fs: _methods: get_id: 'svc_11' shrink: _raise: UnityNothingToModifyError: test_shrink_filesystem_size_too_small: fs: _methods: get_id: 'svc_10' shrink: _raise: UnityShareShrinkSizeTooSmallError: test_get_tenant: unity: _methods: create_tenant: *tenant_1 test_get_tenant_preexist: unity: _methods: create_tenant: _raise: UnityVLANUsedByOtherTenantError: get_tenant_use_vlan: *tenant_1 test_get_tenant_name_inuse_but_vlan_not_used: unity: _methods: create_tenant: _raise: UnityTenantNameInUseError: get_tenant_use_vlan: test_get_tenant_for_vlan_already_has_interfaces: unity: _methods: create_tenant: _raise: UnityVLANAlreadyHasInterfaceError: get_tenant_use_vlan: *tenant_1 test_get_file_ports: link_down_port: &down_port _properties: <<: *port_base_prop is_link_up: false id: 'down_port' _methods: get_id: 'down_port' unity: _methods: get_file_port: [*port_1, *port_internal_port, *down_port, *la_port] test_create_file_interface_ipv6: file_interface: *interface_ipv6 nas_server: _methods: create_file_interface: test_get_snapshot: unity: _methods: get_snap: *snapshot_1 test_get_snapshot_nonexistent_expt: unity: _methods: get_snap: _raise: UnityResourceNotFoundError: test_restore_snapshot: unity: _methods: get_snap: *snapshot_1 test_manage_cifs_share_with_server: filesystem: &filesystem__test_manage_cifs_share_with_server _properties: &filesystem_prop__test_manage_cifs_share_with_server <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' size_total: 5368709120 _methods: shrink: cifs_share: &cifs_share__test_manage_cifs_share_with_server _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_manage_cifs_share_with_server unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_manage_cifs_share_with_server test_manage_cifs_share: filesystem: &filesystem__test_manage_cifs_share _properties: &filesystem_prop__test_manage_cifs_share <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' size_total: 5368709120 _methods: shrink: cifs_share: &cifs_share__test_manage_cifs_share _properties: <<: *cifs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_manage_cifs_share unity: _methods: <<: *unity_base_method get_cifs_share: *cifs_share__test_manage_cifs_share test_manage_nfs_share_with_server: filesystem: &filesystem__test_manage_nfs_share_with_server _properties: &filesystem_prop__test_manage_nfs_share_with_server <<: *filesystem_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' size_total: 5368709120 _methods: extend: nfs_share: &nfs_share__test_manage_nfs_share_with_server _properties: <<: *nfs_share_base_prop name: '716100cc-e0b4-416b-ac27-d38dd019330d' filesystem: *filesystem__test_manage_nfs_share_with_server unity: _methods: <<: *unity_base_method get_nfs_share: *nfs_share__test_manage_nfs_share_with_server test_manage_nfs_share: filesystem: &filesystem__test_manage_nfs_share _properties: &filesystem_prop__test_manage_nfs_share <<: *filesystem_base_prop size_total: 5368709120 _methods: shrink: nfs_share: 
&nfs_share__test_manage_nfs_share _properties: <<: *nfs_share_base_prop filesystem: *filesystem__test_manage_nfs_share unity: _methods: <<: *unity_base_method get_nfs_share: *nfs_share__test_manage_nfs_share test_get_share_server_network_info: unity: _methods: <<: *unity_base_method get_nas_server: *nas_server manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/unity/test_utils.py0000664000175000017500000001431713656750227027427 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt from oslo_utils import units from manila.share.drivers.dell_emc.plugins.unity import utils from manila import test class MockSP(object): def __init__(self, sp_id): self.sp_id = sp_id def get_id(self): return self.sp_id SPA = MockSP('spa') SPB = MockSP('spb') class MockPort(object): def __init__(self, sp, port_id, mtu): self._sp = sp self.port_id = port_id self.mtu = mtu def get_id(self): return self.port_id @property def parent_storage_processor(self): return self._sp SPA_ETH0 = MockPort(SPA, 'spa_eth0', 1500) SPA_ETH1 = MockPort(SPA, 'spa_eth1', 9000) SPB_ETH0 = MockPort(SPB, 'spb_eth0', 1500) SPB_ETH1 = MockPort(SPB, 'spb_eth1', 9000) SPA_LA1 = MockPort(SPA, 'spa_la_1', 1500) SPB_LA1 = MockPort(SPB, 'spb_la_1', 1500) @ddt.ddt class TestUtils(test.TestCase): @ddt.data({'matcher': None, 'matched': {'pool_1', 'pool_2', 'nas_server_pool'}, 'not_matched': set()}, {'matcher': ['*'], 'matched': {'pool_1', 'pool_2', 'nas_server_pool'}, 'not_matched': set()}, {'matcher': ['pool_*'], 'matched': {'pool_1', 'pool_2'}, 'not_matched': {'nas_server_pool'}}, {'matcher': ['*pool'], 'matched': {'nas_server_pool'}, 'not_matched': {'pool_1', 'pool_2'}}, {'matcher': ['nas_server_pool'], 'matched': {'nas_server_pool'}, 'not_matched': {'pool_1', 'pool_2'}}, {'matcher': ['nas_*', 'pool_*'], 'matched': {'pool_1', 'pool_2', 'nas_server_pool'}, 'not_matched': set()}) def test_do_match(self, data): full = ['pool_1 ', ' pool_2', ' nas_server_pool '] matcher = data['matcher'] expected_matched = data['matched'] expected_not_matched = data['not_matched'] matched, not_matched = utils.do_match(full, matcher) self.assertEqual(expected_matched, matched) self.assertEqual(expected_not_matched, not_matched) @ddt.data({'ports': [SPA_ETH0, SPB_ETH0], 'ids_conf': None, 'port_map': {'spa': {'spa_eth0'}, 'spb': {'spb_eth0'}}, 'unmanaged': set()}, {'ports': [SPA_ETH0, SPB_ETH0], 'ids_conf': [' '], 'port_map': {'spa': {'spa_eth0'}, 'spb': {'spb_eth0'}}, 'unmanaged': set()}, {'ports': [SPA_ETH0, SPB_ETH0, SPA_ETH1], 'ids_conf': ['spa*'], 'port_map': {'spa': {'spa_eth0', 'spa_eth1'}}, 'unmanaged': {'spb_eth0'}}, ) @ddt.unpack def test_match_ports(self, ports, ids_conf, port_map, unmanaged): sp_ports_map, unmanaged_port_ids = utils.match_ports(ports, ids_conf) self.assertEqual(port_map, sp_ports_map) self.assertEqual(unmanaged, unmanaged_port_ids) def test_find_ports_by_mtu(self): all_ports = [SPA_ETH0, SPB_ETH0, SPA_ETH1, SPB_ETH1, SPA_LA1, SPB_LA1] port_ids_conf = '*' 
port_map = utils.find_ports_by_mtu(all_ports, port_ids_conf, 1500) self.assertEqual({'spa': {'spa_eth0', 'spa_la_1'}, 'spb': {'spb_eth0', 'spb_la_1'}}, port_map) def test_gb_to_byte(self): self.assertEqual(3 * units.Gi, utils.gib_to_byte(3)) def test_get_snapshot_id(self): snapshot = {'provider_location': '23047-ef2344-4563cvw-r4323cwed', 'id': 'test_id'} result = utils.get_snapshot_id(snapshot) expected = '23047-ef2344-4563cvw-r4323cwed' self.assertEqual(expected, result) def test_get_snapshot_id_without_pl(self): snapshot = {'provider_location': '', 'id': 'test_id'} result = utils.get_snapshot_id(snapshot) expected = 'test_id' self.assertEqual(expected, result) def test_get_nfs_share_id(self): nfs_share = {'export_locations': [{'path': '10.10.1.12:/addf-97e-46c-8ac6-55922f', 'share_instance_id': 'e24-457e-47-12c6-gf345'}], 'share_proto': 'NFS', 'id': 'test_nfs_id'} result = utils.get_share_backend_id(nfs_share) expected = 'addf-97e-46c-8ac6-55922f' self.assertEqual(expected, result) def test_get_nfs_share_id_without_path(self): nfs_share = {'export_locations': [{'path': '', 'share_instance_id': 'ev24-7e-4-12c6-g45245'}], 'share_proto': 'NFS', 'id': 'test_nfs_id'} result = utils.get_share_backend_id(nfs_share) expected = 'test_nfs_id' self.assertEqual(expected, result) def test_get_cifs_share_id(self): cifs_share = {'export_locations': [{'path': '\\\\17.66.5.3\\bdf-h4e-42c-122c5-b212', 'share_instance_id': 'ev4-47e-48-126-gfbh452'}], 'share_proto': 'CIFS', 'id': 'test_cifs_id'} result = utils.get_share_backend_id(cifs_share) expected = 'bdf-h4e-42c-122c5-b212' self.assertEqual(expected, result) def test_get_cifs_share_id_without_path(self): cifs_share = {'export_locations': [{'path': '', 'share_instance_id': 'ef4-47e-48-12c6-gf452'}], 'share_proto': 'CIFS', 'id': 'test_cifs_id'} result = utils.get_share_backend_id(cifs_share) expected = 'test_cifs_id' self.assertEqual(expected, result) manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/unity/test_client.py0000664000175000017500000002344413656750227027546 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
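# --- Illustration (not part of the original module): the decorator pattern
# used throughout the test class below. res_mock.patch_client and
# res_mock.mock_client_input look up a bundle of YAML-backed fakes keyed by
# the test method's name, patch storops.UnitySystem via utils.patch_system,
# and hand a ready-made client plus the fixtures to the test. A minimal
# standalone sketch of the same idea; the fixture table, patch target and
# test method here are made up:
import socket
from unittest import mock

_EXAMPLE_FIXTURES = {'test_hostname_example': {'expected': 'unity-sim'}}


def _example_with_fixture(func):
    # Resolve fixtures by the wrapped test's name, patch the boundary,
    # then call the test with the resolved resources.
    def wrapper(self, *args, **kwargs):
        res = _EXAMPLE_FIXTURES.get(func.__name__, {})
        with mock.patch('socket.gethostname', return_value='unity-sim'):
            return func(self, res, *args, **kwargs)
    return wrapper


class _ExampleFixtureTest(object):
    @_example_with_fixture
    def test_hostname_example(self, res):
        # The patched call and the named fixture arrive together.
        assert socket.gethostname() == res['expected']
# --- End of illustration.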
from unittest import mock import ddt from oslo_utils import units from manila import exception from manila import test from manila.tests.share.drivers.dell_emc.plugins.unity import fake_exceptions from manila.tests.share.drivers.dell_emc.plugins.unity import res_mock @ddt.ddt class TestClient(test.TestCase): @res_mock.mock_client_input @res_mock.patch_client def test_create_cifs_share__existed_expt(self, client, mocked_input): resource = mocked_input['filesystem'] share = mocked_input['cifs_share'] new_share = client.create_cifs_share(resource, share.name) self.assertEqual(share.name, new_share.name) @res_mock.mock_client_input @res_mock.patch_client def test_create_nfs_share__existed_expt(self, client, mocked_input): resource = mocked_input['filesystem'] share = mocked_input['nfs_share'] new_share = client.create_nfs_share(resource, share.name) self.assertEqual(share.name, new_share.name) @res_mock.mock_client_input @res_mock.patch_client def test_create_nfs_filesystem_and_share(self, client, mocked_input): pool = mocked_input['pool'] nas_server = mocked_input['nas_server'] share = mocked_input['nfs_share'] client.create_nfs_filesystem_and_share( pool, nas_server, share.name, share.size) @res_mock.mock_client_input @res_mock.patch_client def test_get_share_with_invalid_proto(self, client, mocked_input): share = mocked_input['share'] self.assertRaises(exception.BadConfigurationException, client.get_share, share.name, 'fake_proto') @res_mock.mock_client_input @res_mock.patch_client def test_create_filesystem__existed_expt(self, client, mocked_input): pool = mocked_input['pool'] nas_server = mocked_input['nas_server'] filesystem = mocked_input['filesystem'] new_filesystem = client.create_filesystem(pool, nas_server, filesystem.name, filesystem.size, filesystem.proto) self.assertEqual(filesystem.name, new_filesystem.name) @res_mock.mock_client_input @res_mock.patch_client def test_delete_filesystem__nonexistent_expt(self, client, mocked_input): filesystem = mocked_input['filesystem'] client.delete_filesystem(filesystem) @res_mock.mock_client_input @res_mock.patch_client def test_create_nas_server__existed_expt(self, client, mocked_input): sp = mocked_input['sp'] pool = mocked_input['pool'] nas_server = mocked_input['nas_server'] new_nas_server = client.create_nas_server(nas_server.name, sp, pool) self.assertEqual(nas_server.name, new_nas_server.name) @res_mock.mock_client_input @res_mock.patch_client def test_delete_nas_server__nonexistent_expt(self, client, mocked_input): nas_server = mocked_input['nas_server'] client.delete_nas_server(nas_server.name) @res_mock.mock_client_input @res_mock.patch_client def test_create_dns_server__existed_expt(self, client, mocked_input): nas_server = mocked_input['nas_server'] client.create_dns_server(nas_server, 'fake_domain', 'fake_dns_ip') @res_mock.mock_client_input @res_mock.patch_client def test_create_interface__existed_expt(self, client, mocked_input): nas_server = mocked_input['nas_server'] self.assertRaises(exception.IPAddressInUse, client.create_interface, nas_server, 'fake_ip_addr', 'fake_mask', 'fake_gateway', port_id='fake_port_id') @res_mock.mock_client_input @res_mock.patch_client def test_enable_cifs_service__existed_expt(self, client, mocked_input): nas_server = mocked_input['nas_server'] client.enable_cifs_service( nas_server, 'domain_name', 'fake_user', 'fake_passwd') @res_mock.mock_client_input @res_mock.patch_client def test_enable_nfs_service__existed_expt(self, client, mocked_input): nas_server = mocked_input['nas_server'] 
client.enable_nfs_service(nas_server) @res_mock.mock_client_input @res_mock.patch_client def test_create_snapshot__existed_expt(self, client, mocked_input): nas_server = mocked_input['filesystem'] exp_snap = mocked_input['snapshot'] client.create_snapshot(nas_server, exp_snap.name) @res_mock.mock_client_input @res_mock.patch_client def test_create_snap_of_snap__existed_expt(self, client, mocked_input): snapshot = mocked_input['src_snapshot'] dest_snap = mocked_input['dest_snapshot'] new_snap = client.create_snap_of_snap(snapshot, dest_snap.name) self.assertEqual(dest_snap.name, new_snap.name) @res_mock.mock_client_input @res_mock.patch_client def test_delete_snapshot__nonexistent_expt(self, client, mocked_input): snapshot = mocked_input['snapshot'] client.delete_snapshot(snapshot) @res_mock.patch_client def test_cifs_deny_access__nonexistentuser_expt(self, client): try: client.cifs_deny_access('fake_share_name', 'fake_username') except fake_exceptions.UnityAclUserNotFoundError: self.fail("UnityAclUserNotFoundError raised unexpectedly!") @res_mock.patch_client def test_nfs_deny_access__nonexistent_expt(self, client): client.nfs_deny_access('fake_share_name', 'fake_ip_addr') @res_mock.patch_client def test_get_storage_processor(self, client): sp = client.get_storage_processor(sp_id='SPA') self.assertEqual('SPA', sp.name) @res_mock.mock_client_input @res_mock.patch_client def test_extend_filesystem(self, client, mocked_input): fs = mocked_input['fs'] size = client.extend_filesystem(fs, 5) self.assertEqual(5 * units.Gi, size) @res_mock.mock_client_input @res_mock.patch_client def test_shrink_filesystem(self, client, mocked_input): fs = mocked_input['fs'] size = client.shrink_filesystem('fake_share_id_1', fs, 4) self.assertEqual(4 * units.Gi, size) @res_mock.mock_client_input @res_mock.patch_client def test_shrink_filesystem_size_too_small(self, client, mocked_input): fs = mocked_input['fs'] self.assertRaises(exception.ShareShrinkingPossibleDataLoss, client.shrink_filesystem, 'fake_share_id_2', fs, 4) @res_mock.patch_client def test_get_file_ports(self, client): ports = client.get_file_ports() self.assertEqual(2, len(ports)) @res_mock.patch_client def test_get_tenant(self, client): tenant = client.get_tenant('test', 5) self.assertEqual('tenant_1', tenant.id) @res_mock.patch_client def test_get_tenant_preexist(self, client): tenant = client.get_tenant('test', 6) self.assertEqual('tenant_1', tenant.id) @res_mock.patch_client def test_get_tenant_name_inuse_but_vlan_not_used(self, client): self.assertRaises(fake_exceptions.UnityTenantNameInUseError, client.get_tenant, 'test', 7) @res_mock.patch_client def test_get_tenant_for_vlan_0(self, client): tenant = client.get_tenant('tenant', 0) self.assertIsNone(tenant) @res_mock.patch_client def test_get_tenant_for_vlan_already_has_interfaces(self, client): tenant = client.get_tenant('tenant', 3) self.assertEqual('tenant_1', tenant.id) @res_mock.mock_client_input @res_mock.patch_client def test_create_file_interface_ipv6(self, client, mocked_input): mock_nas_server = mock.Mock() mock_nas_server.create_file_interface = mock.Mock(return_value=None) mock_file_interface = mocked_input['file_interface'] mock_port_id = mock.Mock() client.create_interface(mock_nas_server, mock_file_interface.ip_addr, netmask=None, gateway=mock_file_interface.gateway, port_id=mock_port_id, vlan_id=mock_file_interface.vlan_id, prefix_length=mock_file_interface.prefix_length ) mock_nas_server.create_file_interface.assert_called_once_with( mock_port_id, mock_file_interface.ip_addr, 
netmask=None, v6_prefix_length=mock_file_interface.prefix_length, gateway=mock_file_interface.gateway, vlan_id=mock_file_interface.vlan_id) @res_mock.patch_client def test_get_snapshot(self, client): snapshot = client.get_snapshot('Snapshot_1') self.assertEqual('snapshot_1', snapshot.id) @res_mock.patch_client def test_restore_snapshot(self, client): snapshot = client.get_snapshot('Snapshot_1') rst = client.restore_snapshot(snapshot.name) self.assertIs(True, rst) snapshot.restore.assert_called_once_with(delete_backup=True) manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/unity/mocked_manila.yaml0000664000175000017500000002645013656750227030326 0ustar zuulzuul00000000000000network_allocations: _type: 'network_allocations' _properties: &network_allocations_prop - id: '04ac4c27-9cf7-4406-809c-13edc93e4849' ip_address: 'fake_ip_addr_1' cidr: '192.168.1.0/24' segmentation_id: null gateway: '192.168.1.1' network_type: flat mtu: 1500 - id: '0cf87de7-5c65-4036-8b6a-e8176c356958' ip_address: 'fake_ip_addr_2' cidr: '192.168.1.0/24' segmentation_id: null gateway: '192.168.1.1' network_type: flat mtu: 1500 network_allocations_vlan: _type: 'network_allocations' _properties: &network_allocations_vlan_prop - id: '04ac4c27-9cf7-4406-809c-13edc93e4849' ip_address: 'fake_ip_addr_1' cidr: '192.168.1.0/24' segmentation_id: 160 gateway: '192.168.1.1' network_type: vlan mtu: 1500 - id: '0cf87de7-5c65-4036-8b6a-e8176c356958' ip_address: 'fake_ip_addr_2' cidr: '192.168.1.0/24' segmentation_id: 160 gateway: '192.168.1.1' network_type: vlan mtu: 1500 network_allocations_vxlan: _type: 'network_allocations' _properties: &network_allocations_vxlan_prop - id: '04ac4c27-9cf7-4406-809c-13edc93e4849' ip_address: 'fake_ip_addr_1' cidr: '192.168.1.0/24' segmentation_id: 123 gateway: '192.168.1.1' network_type: vxlan mtu: 1500 network_allocations_ipv6: _type: 'network_allocations' _properties: &network_allocations_ipv6_prop - id: '04ac4c27-9cf7-4406-809c-13edc93e9844' ip_address: '2001:db8:0:1:f816:3eff:fe76:35c4' cidr: '2001:db8:0:1:f816:3eff:fe76:35c4/64' segmentation_id: 170 gateway: '2001:db8:0:1::1' network_type: vlan mtu: 1500 active_directory: _type: 'security_service' _properties: &active_directory_prop type: 'active_directory' domain: 'fake_domain_name' dns_ip: 'fake_dns_ip' user: 'fake_user' password: 'fake_password' kerberos: _type: 'security_service' _properties: &kerberos_prop <<: *active_directory_prop type: 'kerberos' server: 'fake_server' security_services: _type: 'security_services' _properties: &security_services_prop services: [*active_directory_prop, *kerberos_prop] network_info__flat: _type: 'network_info' _properties: &network_info_flat_prop name: 'share_network' neutron_subnet_id: 'a3f3eeac-0b16-4932-8c03-0a37003644ff' network_type: 'flat' neutron_net_id: 'e6c96730-2bcf-4ce3-86fa-7cb7740086cb' ip_version: 4 id: '232d8218-2743-41d1-832b-4194626e691e' mtu: 1500 network_allocations: *network_allocations_prop server_id: '78fd845f-8e7d-487f-bfde-051d83e78103' segmentation_id: 0 security_services: [] network_info__vlan: _type: 'network_info' _properties: &network_info__vlan_prop <<: *network_info_flat_prop network_type: 'vlan' network_allocations: *network_allocations_vlan_prop segmentation_id: 160 network_info__vxlan: _type: 'network_info' _properties: &network_info__vxlan_prop <<: *network_info_flat_prop network_type: 'vxlan' network_allocations: *network_allocations_vxlan_prop network_info__ipv6: _type: 'network_info' _properties: &network_info__ipv6_prop <<: *network_info_flat_prop 
network_allocations: *network_allocations_ipv6_prop segmentation_id: 170 network_info__active_directory: _type: 'network_info' _properties: <<: *network_info__vlan_prop security_services: [*active_directory_prop] network_info__kerberos: _type: 'network_info' _properties: <<: *network_info_flat_prop security_services: [*kerberos_prop] share_server: _type: 'share_server' _properties: &share_server_prop status: 'active' share_network: *network_info_flat_prop share_network_id: '232d8218-2743-41d1-832b-4194626e691e' host: 'openstack@VNX' backend_details: share_server_name: '78fd845f-8e7d-487f-bfde-051d83e78103' network_allocations: *network_allocations_prop id: '78fd845f-8e7d-487f-bfde-051d83e78103' identifier: 'c2e48947-98ed-4eae-999b-fa0b83731dfd' share_server__no_share_server_name: _type: 'share_server' _properties: <<: *share_server_prop backend_details: share_server_name: None id: '78fd845f-8e7d-487f-bfde-051d83e78103' server_detail: _type: 'server_detail' _properties: &server_detail_prop share_server_name: '78fd845f-8e7d-487f-bfde-051d83e78103' cifs_share: _type: 'share' _properties: &cifs_share_prop share_id: '708e753c-aacb-411f-9c8a-8b8175da4e73' availability_zone_id: 'de628fb6-1c99-41f6-a06a-adb61ff693b5' share_network_id: '232d8218-2743-41d1-832b-4194626e691e' share_server_id: '78fd845f-8e7d-487f-bfde-051d83e78103' id: '716100cc-e0b4-416b-ac27-d38dd019330d' size: 1 user_id: '19bbda71b578471a93363653dcb4c61d' status: 'creating' share_type_id: '57679eab-3e67-4052-b180-62b609670e93' host: 'openstack@VNX#Pool_2' display_name: 'cifs_share' share_proto: 'CIFS' export_locations: [] is_public: False managed_cifs_share: _type: 'share' _properties: &managed_cifs_share_share_prop share_id: '708e753c-aacb-411f-9c8a-8b8175da4e73' availability_zone_id: 'de628fb6-1c99-41f6-a06a-adb61ff693b5' share_network_id: '232d8218-2743-41d1-832b-4194626e691e' share_server_id: '78fd845f-8e7d-487f-bfde-051d83e78103' id: '716100cc-e0b4-416b-ac27-d38dd019330d' size: 10 user_id: '19bbda71b578471a93363653dcb4c61d' status: 'creating' share_type_id: '57679eab-3e67-4052-b180-62b609670e93' host: 'openstack@VNX#Pool_2' display_name: 'cifs_share' share_proto: 'CIFS' export_locations: [path: '\\10.0.0.1\bd23121f-hg4e-432c-12cd2c5-bb93dfghe212'] is_public: False snapshot_support: False nfs_share: _type: 'share' _properties: &nfs_share_prop share_id: '12eb3777-7008-4721-8243-422507db8f9d' availability_zone_id: 'de628fb6-1c99-41f6-a06a-adb61ff693b5' share_network_id: '232d8218-2743-41d1-832b-4194626e691e' share_server_id: '78fd845f-8e7d-487f-bfde-051d83e78103' id: 'cb532599-8dc6-4c3e-bb21-74ea54be566c' size: 1 user_id: '19bbda71b578471a93363653dcb4c61d' status: 'creating' share_type_id: '57679eab-3e67-4052-b180-62b609670e93' host: 'openstack@VNX#Pool_2' display_name: 'nfs_share' share_proto: 'NFS' export_locations: null is_public: False managed_nfs_share: _type: 'share' _properties: &managed_nfs_share_prop share_id: '12eb3777-7008-4721-8243-422507db8f9d' availability_zone_id: 'de628fb6-1c99-41f6-a06a-adb61ff693b5' share_network_id: '232d8218-2743-41d1-832b-4194626e691e' share_server_id: '78fd845f-8e7d-487f-bfde-051d83e78103' id: 'cb532599-8dc6-4c3e-bb21-74ea54be566c' size: 9 user_id: '19bbda71b578471a93363653dcb4c61d' status: 'creating' share_type_id: '57679eab-3e67-4052-b180-62b609670e93' host: 'openstack@VNX#Pool_2' display_name: 'nfs_share' share_proto: 'NFS' export_locations: [path: '172.168.201.201:/ad1caddf-097e-462c-8ac6-5592ed6fe22f'] is_public: False snapshot_support: False dhss_false_cifs_share: _type: 'share' 
_properties: &dhss_false_cifs_share_prop share_id: '708e753c-aacb-411f-9c8a-8b8175da4e73' availability_zone_id: 'de628fb6-1c99-41f6-a06a-adb61ff693b5' share_network_id: '232d8218-2743-41d1-832b-4194626e691e' share_server_id: 'test-dhss-false-427f-b4de-0ad83el5j8' id: '716100cc-e0b4-416b-ac27-d38dd019330d' size: 1 user_id: '19bbda71b578471a93363653dcb4c61d' status: 'creating' share_type_id: '57679eab-3e67-4052-b180-62b609670e93' host: 'openstack@VNX#Pool_2' display_name: 'cifs_share' share_proto: 'CIFS' export_locations: [] is_public: False dhss_false_nfs_share: _type: 'share' _properties: &dhss_false_nfs_share_prop share_id: '12eb3777-7008-4721-8243-422507db8f9d' availability_zone_id: 'de628fb6-1c99-41f6-a06a-adb61ff693b5' share_network_id: '232d8218-2743-41d1-832b-4194626e691e' share_server_id: 'test-dhss-false-427f-b4de-0ad83el5j8' id: 'cb532599-8dc6-4c3e-bb21-74ea54be566c' size: 1 user_id: '19bbda71b578471a93363653dcb4c61d' status: 'creating' share_type_id: '57679eab-3e67-4052-b180-62b609670e93' host: 'openstack@VNX#Pool_2' display_name: 'nfs_share' share_proto: 'NFS' export_locations: [] is_public: False shrink_cifs_share: _type: 'share' _properties: &shrink_cifs_share_prop share_id: '708e753c-aacb-411f-9c8a-8b8175da4e73' availability_zone_id: 'de628fb6-1c99-41f6-a06a-adb61ff693b5' share_network_id: '232d8218-2743-41d1-832b-4194626e691e' share_server_id: '78fd845f-8e7d-487f-bfde-051d83e78103' id: '716100cc-e0b4-416b-ac27-d38dd019330d' size: 9 user_id: '19bbda71b578471a93363653dcb4c61d' status: 'creating' share_type_id: '57679eab-3e67-4052-b180-62b609670e93' host: 'openstack@VNX#Pool_2' display_name: 'cifs_share' share_proto: 'CIFS' export_locations: [] is_public: False shrink_nfs_share: _type: 'share' _properties: &shrink_nfs_share_prop share_id: '12eb3777-7008-4721-8243-422507db8f9d' availability_zone_id: 'de628fb6-1c99-41f6-a06a-adb61ff693b5' share_network_id: '232d8218-2743-41d1-832b-4194626e691e' share_server_id: '78fd845f-8e7d-487f-bfde-051d83e78103' id: 'cb532599-8dc6-4c3e-bb21-74ea54be566c' size: 9 user_id: '19bbda71b578471a93363653dcb4c61d' status: 'creating' share_type_id: '57679eab-3e67-4052-b180-62b609670e93' host: 'openstack@VNX#Pool_2' display_name: 'nfs_share' share_proto: 'NFS' export_locations: [] is_public: False invalid_share: _type: 'share' _properties: &invalid_share_prop share_id: '12eb3777-7008-4721-8243-422507db8f9d' availability_zone_id: 'de628fb6-1c99-41f6-a06a-adb61ff693b5' share_network_id: '232d8218-2743-41d1-832b-4194626e691e' share_server_id: '78fd845f-8e7d-487f-bfde-051d83e78103' id: 'cb532599-8dc6-4c3e-bb21-74ea54be566c' size: 1 user_id: '19bbda71b578471a93363653dcb4c61d' status: 'creating' share_type_id: '57679eab-3e67-4052-b180-62b609670e93' host: 'openstack@VNX#Pool_2' display_name: 'nfs_share' share_proto: 'fake_proto' export_locations: [] is_public: False snapshot: _type: 'snapshot' _properties: &snapshot_prop status: 'creating' share_instance_id: '27e4625e-c336-4749-85bc-634216755fbc' share: share_proto: 'CIFS' id: 's24r-3fgw2-g039ef-j029f0-nrver' snapshot_id: '345476cc-32ab-4565-ba88-e4733b7ffa0e' progress: '0%' id: 'ab411797-b1cf-4035-bf14-8771a7bf1805' share_id: '27e4625e-c336-4749-85bc-634216755fbc' provider_location: '23047-ef2344-4563cvw-r4323cwed' cifs_rw_access: _type: 'access' _properties: access_level: 'rw' access_to: 'administrator' access_type: 'user' cifs_ro_access: _type: 'access' _properties: access_level: 'ro' access_to: 'administrator' access_type: 'user' nfs_rw_access: _type: 'access' _properties: access_level: 'rw' access_to: 
'192.168.1.1' access_type: 'ip' nfs_rw_access_cidr: _type: 'access' _properties: access_level: 'rw' access_to: '192.168.1.0/24' access_type: 'ip' nfs_ro_access: _type: 'access' _properties: access_level: 'ro' access_to: '192.168.1.1' access_type: 'ip' invalid_access: _type: 'access' _properties: access_level: 'fake_access_level' access_to: 'fake_access_to' access_type: 'fake_type' manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/unity/test_connection.py0000664000175000017500000010703213656750227030423 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from unittest import mock import ddt from oslo_utils import units import six from manila import exception from manila import test from manila.tests.share.drivers.dell_emc.plugins.unity import fake_exceptions from manila.tests.share.drivers.dell_emc.plugins.unity import res_mock from manila.tests.share.drivers.dell_emc.plugins.unity import utils @ddt.ddt class TestConnection(test.TestCase): client = None @classmethod def setUpClass(cls): cls.emc_share_driver = res_mock.FakeEMCShareDriver() @res_mock.patch_connection_init def test_connect(self, connection): connection.connect(res_mock.FakeEMCShareDriver(dhss=True), None) @res_mock.patch_connection_init def test_connect_with_ipv6(self, connection): connection.connect(res_mock.FakeEMCShareDriverIPv6( dhss=True), None) @res_mock.patch_connection def test_connect__invalid_pool_configuration(self, connection): f = connection.client.system.get_pool f.side_effect = fake_exceptions.UnityResourceNotFoundError() self.assertRaises(exception.BadConfigurationException, connection._config_pool, 'faked_pool_name') @res_mock.mock_manila_input @res_mock.patch_connection def test_create_nfs_share(self, connection, mocked_input): share = mocked_input['nfs_share'] share_server = mocked_input['share_server'] location = connection.create_share(None, share, share_server) exp_location = [ {'path': 'fake_ip_addr_1:/cb532599-8dc6-4c3e-bb21-74ea54be566c'}, {'path': 'fake_ip_addr_2:/cb532599-8dc6-4c3e-bb21-74ea54be566c'}, ] exp_location = sorted(exp_location, key=lambda x: sorted(x['path'])) location = sorted(location, key=lambda x: sorted(x['path'])) self.assertEqual(exp_location, location) @res_mock.mock_manila_input @res_mock.patch_connection def test_create_cifs_share(self, connection, mocked_input): share = mocked_input['cifs_share'] share_server = mocked_input['share_server'] location = connection.create_share(None, share, share_server) exp_location = [ {'path': r'\\fake_ip_addr_1\716100cc-e0b4-416b-ac27-d38dd019330d'}, {'path': r'\\fake_ip_addr_2\716100cc-e0b4-416b-ac27-d38dd019330d'}, ] exp_location = sorted(exp_location, key=lambda x: sorted(x['path'])) location = sorted(location, key=lambda x: sorted(x['path'])) self.assertEqual(exp_location, location) @res_mock.mock_manila_input @res_mock.patch_connection def test_create_share_with_invalid_proto(self, connection, mocked_input): share = mocked_input['invalid_share'] 
share_server = mocked_input['share_server'] self.assertRaises(exception.InvalidShare, connection.create_share, None, share, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_create_share_without_share_server(self, connection, mocked_input): share = mocked_input['cifs_share'] self.assertRaises(exception.InvalidInput, connection.create_share, None, share, None) @res_mock.mock_manila_input @res_mock.patch_connection def test_create_share__no_server_name_in_backend_details(self, connection, mocked_input): share = mocked_input['cifs_share'] share_server = { 'backend_details': {'share_server_name': None}, 'id': 'test', 'identifier': '', } self.assertRaises(exception.InvalidInput, connection.create_share, None, share, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_create_share_with_invalid_share_server(self, connection, mocked_input): share = mocked_input['cifs_share'] share_server = mocked_input['share_server'] self.assertRaises(exception.EMCUnityError, connection.create_share, None, share, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_delete_share(self, connection, mocked_input): share = mocked_input['cifs_share'] share_server = mocked_input['share_server'] connection.delete_share(None, share, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_delete_share__with_invalid_share(self, connection, mocked_input): share = mocked_input['cifs_share'] connection.delete_share(None, share, None) @res_mock.mock_manila_input @res_mock.patch_connection def test_delete_share__create_from_snap(self, connection, mocked_input): share = mocked_input['cifs_share'] share_server = mocked_input['share_server'] connection.delete_share(None, share, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_delete_share__create_from_snap_but_not_isolated(self, connection, mocked_input): share = mocked_input['cifs_share'] share_server = mocked_input['share_server'] connection.delete_share(None, share, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_delete_share__but_not_isolated(self, connection, mocked_input): share = mocked_input['cifs_share'] share_server = mocked_input['share_server'] connection.delete_share(None, share, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_shrink_cifs_share(self, connection, mocked_input): share = mocked_input['shrink_cifs_share'] new_size = 4 * units.Gi connection.shrink_share(share, new_size) @res_mock.mock_manila_input @res_mock.patch_connection def test_shrink_nfs_share(self, connection, mocked_input): share = mocked_input['shrink_nfs_share'] new_size = 4 * units.Gi connection.shrink_share(share, new_size) @res_mock.mock_manila_input @res_mock.patch_connection def test_extend_cifs_share(self, connection, mocked_input): share = mocked_input['cifs_share'] share_server = mocked_input['share_server'] new_size = 50 * units.Gi connection.extend_share(share, new_size, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_extend_nfs_share(self, connection, mocked_input): share = mocked_input['nfs_share'] share_server = mocked_input['share_server'] new_size = 50 * units.Gi connection.extend_share(share, new_size, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_extend_share__create_from_snap(self, connection, mocked_input): share = mocked_input['cifs_share'] share_server = mocked_input['share_server'] new_size = 50 * units.Gi 
self.assertRaises(exception.ShareExtendingError, connection.extend_share, share, new_size, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_shrink_share_create_from_snap(self, connection, mocked_input): share = mocked_input['shrink_cifs_share'] share_server = mocked_input['share_server'] new_size = 4 * units.Gi self.assertRaises(exception.ShareShrinkingError, connection.shrink_share, share, new_size, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_create_snapshot_from_filesystem(self, connection, mocked_input): snapshot = mocked_input['snapshot'] share_server = mocked_input['share_server'] result = connection.create_snapshot(None, snapshot, share_server) self.assertEqual('ab411797-b1cf-4035-bf14-8771a7bf1805', result['provider_location']) @res_mock.mock_manila_input @res_mock.patch_connection def test_create_snapshot_from_snapshot(self, connection, mocked_input): snapshot = mocked_input['snapshot'] share_server = mocked_input['share_server'] connection.create_snapshot(None, snapshot, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_delete_snapshot(self, connection, mocked_input): snapshot = mocked_input['snapshot'] share_server = mocked_input['share_server'] connection.delete_snapshot(None, snapshot, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_ensure_share_exists(self, connection, mocked_input): share = mocked_input['cifs_share'] connection.ensure_share(None, share, None) @res_mock.mock_manila_input @res_mock.patch_connection def test_ensure_share_not_exists(self, connection, mocked_input): share = mocked_input['cifs_share'] self.assertRaises(exception.ShareNotFound, connection.ensure_share, None, share, None) @res_mock.patch_connection def test_update_share_stats(self, connection): stat_dict = copy.deepcopy(res_mock.STATS) connection.update_share_stats(stat_dict) self.assertEqual(5, len(stat_dict)) pool = stat_dict['pools'][0] self.assertEqual('pool_1', pool['pool_name']) self.assertEqual(500000.0, pool['total_capacity_gb']) self.assertEqual(False, pool['qos']) self.assertEqual(30000.0, pool['provisioned_capacity_gb']) self.assertEqual(20, pool['max_over_subscription_ratio']) self.assertEqual(10000.0, pool['allocated_capacity_gb']) self.assertEqual(0, pool['reserved_percentage']) self.assertTrue(pool['thin_provisioning']) self.assertEqual(490000.0, pool['free_capacity_gb']) @res_mock.patch_connection def test_update_share_stats__nonexistent_pools(self, connection): stat_dict = copy.deepcopy(res_mock.STATS) self.assertRaises(exception.EMCUnityError, connection.update_share_stats, stat_dict) @res_mock.mock_manila_input @res_mock.patch_connection def test_get_pool(self, connection, mocked_input): share = mocked_input['cifs_share'] connection.get_pool(share) @utils.patch_find_ports_by_mtu @res_mock.mock_manila_input @res_mock.patch_connection def test_setup_server(self, connection, mocked_input, find_ports): find_ports.return_value = {'SPA': {'spa_eth1'}} network_info = mocked_input['network_info__flat'] server_info = connection.setup_server(network_info) self.assertEqual( {'share_server_name': '78fd845f-8e7d-487f-bfde-051d83e78103'}, server_info) self.assertIsNone(connection.client.system.create_nas_server. 
call_args[1]['tenant']) @utils.patch_find_ports_by_mtu @res_mock.mock_manila_input @res_mock.patch_connection def test_setup_server__vlan_network(self, connection, mocked_input, find_ports): find_ports.return_value = {'SPA': {'spa_eth1'}} network_info = mocked_input['network_info__vlan'] connection.setup_server(network_info) self.assertEqual('tenant_1', connection.client.system.create_nas_server .call_args[1]['tenant'].id) @utils.patch_find_ports_by_mtu @res_mock.mock_manila_input @res_mock.patch_connection def test_setup_server__vxlan_network(self, connection, mocked_input, find_ports): find_ports.return_value = {'SPA': {'spa_eth1'}} network_info = mocked_input['network_info__vxlan'] self.assertRaises(exception.NetworkBadConfigurationException, connection.setup_server, network_info) @utils.patch_find_ports_by_mtu @res_mock.mock_manila_input @res_mock.patch_connection def test_setup_server__active_directory(self, connection, mocked_input, find_ports): find_ports.return_value = {'SPA': {'spa_eth1'}} network_info = mocked_input['network_info__active_directory'] connection.setup_server(network_info) @utils.patch_find_ports_by_mtu @res_mock.mock_manila_input @res_mock.patch_connection def test_setup_server__kerberos(self, connection, mocked_input, find_ports): find_ports.return_value = {'SPA': {'spa_eth1'}} network_info = mocked_input['network_info__kerberos'] connection.setup_server(network_info) @utils.patch_find_ports_by_mtu @res_mock.mock_manila_input @res_mock.patch_connection def test_setup_server__throw_exception(self, connection, mocked_input, find_ports): find_ports.return_value = {'SPA': {'spa_eth1'}} network_info = mocked_input['network_info__flat'] self.assertRaises(fake_exceptions.UnityException, connection.setup_server, network_info) @res_mock.mock_manila_input @res_mock.patch_connection def test_teardown_server(self, connection, mocked_input): server_detail = mocked_input['server_detail'] security_services = mocked_input['security_services'] connection.teardown_server(server_detail, security_services) @res_mock.mock_manila_input @res_mock.patch_connection def test_teardown_server__no_server_detail(self, connection, mocked_input): security_services = mocked_input['security_services'] connection.teardown_server(None, security_services) @res_mock.mock_manila_input @res_mock.patch_connection def test_teardown_server__no_share_server_name(self, connection, mocked_input): server_detail = {'share_server_name': None} security_services = mocked_input['security_services'] connection.teardown_server(server_detail, security_services) @ddt.data({'configured_pools': None, 'matched_pools': {'pool_1', 'pool_2', 'nas_server_pool'}}, {'configured_pools': ['*'], 'matched_pools': {'pool_1', 'pool_2', 'nas_server_pool'}}, {'configured_pools': ['pool_*'], 'matched_pools': {'pool_1', 'pool_2'}}, {'configured_pools': ['*pool'], 'matched_pools': {'nas_server_pool'}}, {'configured_pools': ['nas_server_pool'], 'matched_pools': {'nas_server_pool'}}, {'configured_pools': ['nas_*', 'pool_*'], 'matched_pools': {'pool_1', 'pool_2', 'nas_server_pool'}}) @res_mock.patch_connection @ddt.unpack def test__get_managed_pools(self, connection, mocked_input): configured_pools = mocked_input['configured_pools'] matched_pool = mocked_input['matched_pools'] pools = connection._get_managed_pools(configured_pools) self.assertEqual(matched_pool, pools) @res_mock.patch_connection def test__get_managed_pools__invalid_pool_configuration(self, connection): configured_pools = 'fake_pool' 
self.assertRaises(exception.BadConfigurationException, connection._get_managed_pools, configured_pools) @res_mock.patch_connection def test_validate_port_configuration(self, connection): sp_ports_map = connection.validate_port_configuration(['sp*']) self.assertEqual({'spa_eth1', 'spa_eth2', 'spa_la_4'}, sp_ports_map['SPA']) self.assertEqual({'spb_eth1'}, sp_ports_map['SPB']) @res_mock.patch_connection def test_validate_port_configuration_exception(self, connection): self.assertRaises(exception.BadConfigurationException, connection.validate_port_configuration, ['xxxx*']) @res_mock.patch_connection def test__get_pool_name_from_host__no_pool_name(self, connection): host = 'openstack@Unity' self.assertRaises(exception.InvalidHost, connection._get_pool_name_from_host, host) @res_mock.mock_manila_input @res_mock.patch_connection def test_create_cifs_share_from_snapshot(self, connection, mocked_input): share = mocked_input['cifs_share'] snapshot = mocked_input['snapshot'] share_server = mocked_input['share_server'] connection.create_share_from_snapshot(None, share, snapshot, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_create_nfs_share_from_snapshot(self, connection, mocked_input): share = mocked_input['nfs_share'] snapshot = mocked_input['snapshot'] share_server = mocked_input['share_server'] connection.create_share_from_snapshot(None, share, snapshot, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_create_share_from_snapshot_no_server_name(self, connection, mocked_input): share = mocked_input['nfs_share'] snapshot = mocked_input['snapshot'] share_server = mocked_input['share_server__no_share_server_name'] self.assertRaises(exception.EMCUnityError, connection.create_share_from_snapshot, None, share, snapshot, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_clear_share_access_cifs(self, connection, mocked_input): share = mocked_input['cifs_share'] self.assertRaises(fake_exceptions.UnityException, connection.clear_access, share) @res_mock.mock_manila_input @res_mock.patch_connection def test_clear_share_access_nfs(self, connection, mocked_input): share = mocked_input['nfs_share'] self.assertRaises(fake_exceptions.UnityException, connection.clear_access, share) @res_mock.mock_manila_input @res_mock.patch_connection def test_allow_rw_cifs_share_access(self, connection, mocked_input): share = mocked_input['cifs_share'] rw_access = mocked_input['cifs_rw_access'] share_server = mocked_input['share_server'] connection.allow_access(None, share, rw_access, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_update_access_allow_rw(self, connection, mocked_input): share = mocked_input['cifs_share'] rw_access = mocked_input['cifs_rw_access'] share_server = mocked_input['share_server'] connection.update_access(None, share, None, [rw_access], None, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_update_access_recovery(self, connection, mocked_input): share = mocked_input['cifs_share'] rw_access = mocked_input['cifs_rw_access'] share_server = mocked_input['share_server'] connection.update_access(None, share, [rw_access], None, None, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_allow_ro_cifs_share_access(self, connection, mocked_input): share = mocked_input['cifs_share'] rw_access = mocked_input['cifs_ro_access'] share_server = mocked_input['share_server'] connection.allow_access(None, share, rw_access, share_server) 
@res_mock.mock_manila_input @res_mock.patch_connection def test_allow_rw_nfs_share_access(self, connection, mocked_input): share = mocked_input['nfs_share'] rw_access = mocked_input['nfs_rw_access'] share_server = mocked_input['share_server'] connection.allow_access(None, share, rw_access, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_allow_rw_nfs_share_access_cidr(self, connection, mocked_input): share = mocked_input['nfs_share'] rw_access = mocked_input['nfs_rw_access_cidr'] share_server = mocked_input['share_server'] connection.allow_access(None, share, rw_access, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_allow_ro_nfs_share_access(self, connection, mocked_input): share = mocked_input['nfs_share'] ro_access = mocked_input['nfs_ro_access'] share_server = mocked_input['share_server'] connection.allow_access(None, share, ro_access, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_deny_cifs_share_access(self, connection, mocked_input): share = mocked_input['cifs_share'] rw_access = mocked_input['cifs_rw_access'] share_server = mocked_input['share_server'] connection.deny_access(None, share, rw_access, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_deny_nfs_share_access(self, connection, mocked_input): share = mocked_input['nfs_share'] rw_access = mocked_input['nfs_rw_access'] share_server = mocked_input['share_server'] connection.deny_access(None, share, rw_access, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_update_access_deny_nfs(self, connection, mocked_input): share = mocked_input['nfs_share'] rw_access = mocked_input['nfs_rw_access'] connection.update_access(None, share, None, None, [rw_access], None) @res_mock.mock_manila_input @res_mock.patch_connection def test__validate_cifs_share_access_type(self, connection, mocked_input): share = mocked_input['cifs_share'] rw_access = mocked_input['invalid_access'] self.assertRaises(exception.InvalidShareAccess, connection._validate_share_access_type, share, rw_access) @res_mock.mock_manila_input @res_mock.patch_connection def test__validate_nfs_share_access_type(self, connection, mocked_input): share = mocked_input['nfs_share'] rw_access = mocked_input['invalid_access'] self.assertRaises(exception.InvalidShareAccess, connection._validate_share_access_type, share, rw_access) @res_mock.patch_connection def test_get_network_allocations_number(self, connection): self.assertEqual(1, connection.get_network_allocations_number()) @res_mock.patch_connection def test_get_proto_enum(self, connection): self.assertIn('FSSupportedProtocolEnum.CIFS', six.text_type(connection._get_proto_enum('CIFS'))) self.assertIn('FSSupportedProtocolEnum.NFS', six.text_type(connection._get_proto_enum('nfs'))) @res_mock.mock_manila_input @res_mock.patch_connection def test_allow_access_error_access_level(self, connection, mocked_input): share = mocked_input['nfs_share'] rw_access = mocked_input['invalid_access'] self.assertRaises(exception.InvalidShareAccessLevel, connection.allow_access, None, share, rw_access) @res_mock.patch_connection def test__create_network_interface_ipv6(self, connection): connection.client.create_interface = mock.Mock(return_value=None) nas_server = mock.Mock() network = {'ip_address': '2001:db8:0:1:f816:3eff:fe76:35c4', 'cidr': '2001:db8:0:1:f816:3eff:fe76:35c4/64', 'gateway': '2001:db8:0:1::1', 'segmentation_id': '201'} port_id = mock.Mock() connection._create_network_interface(nas_server, 
network, port_id) expected = {'ip_addr': '2001:db8:0:1:f816:3eff:fe76:35c4', 'netmask': None, 'gateway': '2001:db8:0:1::1', 'port_id': port_id, 'vlan_id': '201', 'prefix_length': '64'} connection.client.create_interface.assert_called_once_with(nas_server, **expected) @res_mock.patch_connection def test__create_network_interface_ipv4(self, connection): connection.client.create_interface = mock.Mock(return_value=None) nas_server = mock.Mock() network = {'ip_address': '192.168.1.10', 'cidr': '192.168.1.10/24', 'gateway': '192.168.1.1', 'segmentation_id': '201'} port_id = mock.Mock() connection._create_network_interface(nas_server, network, port_id) expected = {'ip_addr': '192.168.1.10', 'netmask': '255.255.255.0', 'gateway': '192.168.1.1', 'port_id': port_id, 'vlan_id': '201'} connection.client.create_interface.assert_called_once_with(nas_server, **expected) @res_mock.mock_manila_input @res_mock.patch_connection def test_revert_to_snapshot(self, connection, mocked_input): context = mock.Mock() snapshot = mocked_input['snapshot'] share_access_rules = [mocked_input['nfs_rw_access'], ] snapshot_access_rules = [mocked_input['nfs_rw_access'], ] connection.revert_to_snapshot(context, snapshot, share_access_rules, snapshot_access_rules) @res_mock.patch_connection_init def test_dhss_false_connect_without_nas_server(self, connection): self.assertRaises(exception.BadConfigurationException, connection.connect, res_mock.FakeEMCShareDriver(dhss=False), None) @res_mock.mock_manila_input @res_mock.patch_connection def test_dhss_false_create_nfs_share(self, connection, mocked_input): connection.driver_handles_share_servers = False connection.unity_share_server = 'test-dhss-false-427f-b4de-0ad83el5j8' share = mocked_input['dhss_false_nfs_share'] share_server = mocked_input['share_server'] location = connection.create_share(None, share, share_server) exp_location = [ {'path': 'fake_ip_addr_1:/cb532599-8dc6-4c3e-bb21-74ea54be566c'}, {'path': 'fake_ip_addr_2:/cb532599-8dc6-4c3e-bb21-74ea54be566c'}, ] exp_location = sorted(exp_location, key=lambda x: sorted(x['path'])) location = sorted(location, key=lambda x: sorted(x['path'])) self.assertEqual(exp_location, location) @res_mock.mock_manila_input @res_mock.patch_connection def test_dhss_false_create_cifs_share(self, connection, mocked_input): connection.driver_handles_share_servers = False connection.unity_share_server = 'test-dhss-false-427f-b4de-0ad83el5j8' share = mocked_input['dhss_false_cifs_share'] share_server = mocked_input['share_server'] location = connection.create_share(None, share, share_server) exp_location = [ {'path': r'\\fake_ip_addr_1\716100cc-e0b4-416b-ac27-d38dd019330d'}, {'path': r'\\fake_ip_addr_2\716100cc-e0b4-416b-ac27-d38dd019330d'}, ] exp_location = sorted(exp_location, key=lambda x: sorted(x['path'])) location = sorted(location, key=lambda x: sorted(x['path'])) self.assertEqual(exp_location, location) @res_mock.mock_manila_input @res_mock.patch_connection def test_get_share_server_id(self, connection, mocked_input): share_server = mocked_input['share_server'] result = connection._get_server_name(share_server) expected = 'c2e48947-98ed-4eae-999b-fa0b83731dfd' self.assertEqual(expected, result) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_snapshot(self, connection, mocked_input): snapshot = mocked_input['snapshot'] driver_options = {'size': 8} result = connection.manage_existing_snapshot(snapshot, driver_options, None) expected = {'provider_location': '23047-ef2344-4563cvw-r4323cwed', 'size': 8} 
self.assertEqual(expected, result) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_snapshot_wrong_size_type(self, connection, mocked_input): snapshot = mocked_input['snapshot'] driver_options = {'size': 'str_size'} self.assertRaises(exception.ManageInvalidShareSnapshot, connection.manage_existing_snapshot, snapshot, driver_options, None) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_snapshot_with_server(self, connection, mocked_input): share_server = mocked_input['share_server'] snapshot = mocked_input['snapshot'] driver_options = {} result = connection.manage_existing_snapshot_with_server( snapshot, driver_options, share_server) expected = {'provider_location': '23047-ef2344-4563cvw-r4323cwed', 'size': 1} self.assertEqual(expected, result) @res_mock.mock_manila_input @res_mock.patch_connection def test_get_share_server_network_info(self, connection, mocked_input): share_server = mocked_input['share_server'] identifier = 'test_manage_nas_server' result = connection.get_share_server_network_info(None, share_server, identifier, None) expected = ['fake_ip_addr_1', 'fake_ip_addr_2'] self.assertEqual(expected, result) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_server(self, connection, mocked_input): share_server = mocked_input['share_server'] identifier = 'test_manage_nas_server' result = connection.manage_server(None, share_server, identifier, None) expected = (identifier, None) self.assertEqual(expected, result) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_nfs_share(self, connection, mocked_input): share = mocked_input['managed_nfs_share'] driver_options = {'size': 3} result = connection.manage_existing(share, driver_options) path = '172.168.201.201:/ad1caddf-097e-462c-8ac6-5592ed6fe22f' expected = {'export_locations': {'path': path}, 'size': 3} self.assertEqual(expected, result) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_nfs_share_with_server(self, connection, mocked_input): share = mocked_input['managed_nfs_share'] share_server = mocked_input['share_server'] driver_options = {'size': 8} result = connection.manage_existing_with_server(share, driver_options, share_server) path = '172.168.201.201:/ad1caddf-097e-462c-8ac6-5592ed6fe22f' expected = {'export_locations': {'path': path}, 'size': 8} self.assertEqual(expected, result) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_cifs_share(self, connection, mocked_input): share = mocked_input['managed_cifs_share'] driver_options = {'size': 3} result = connection.manage_existing(share, driver_options) path = '\\\\10.0.0.1\\bd23121f-hg4e-432c-12cd2c5-bb93dfghe212' expected = {'export_locations': {'path': path}, 'size': 3} self.assertEqual(expected, result) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_cifs_share_with_server(self, connection, mocked_input): connection.client.create_interface = mock.Mock(return_value=None) share = mocked_input['managed_cifs_share'] share_server = mocked_input['share_server'] driver_options = {'size': 3} result = connection.manage_existing_with_server(share, driver_options, share_server) path = '\\\\10.0.0.1\\bd23121f-hg4e-432c-12cd2c5-bb93dfghe212' expected = {'export_locations': {'path': path}, 'size': 3} self.assertEqual(expected, result) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_with_wrong_size_data_type(self, connection, mocked_input): connection.client.create_interface = mock.Mock(return_value=None) share = 
mocked_input['managed_nfs_share'] share_server = mocked_input['share_server'] driver_options = {'size': 'str_size'} self.assertRaises(exception.ManageInvalidShare, connection.manage_existing_with_server, share, driver_options, share_server) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_without_size(self, connection, mocked_input): connection.client.create_interface = mock.Mock(return_value=None) share = mocked_input['managed_nfs_share'] share_server = mocked_input['share_server'] driver_options = {'size': 0} result = connection.manage_existing_with_server(share, driver_options, share_server) path = '172.168.201.201:/ad1caddf-097e-462c-8ac6-5592ed6fe22f' expected = {'export_locations': {'path': path}, 'size': 1} self.assertEqual(expected, result) @res_mock.mock_manila_input @res_mock.patch_connection def test_manage_without_export_locations(self, connection, mocked_input): connection.client.create_interface = mock.Mock(return_value=None) share = mocked_input['nfs_share'] share_server = mocked_input['share_server'] driver_options = {'size': 3} self.assertRaises(exception.ManageInvalidShare, connection.manage_existing_with_server, share, driver_options, share_server) manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/unity/fake_exceptions.py0000664000175000017500000000347413656750227030401 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
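
# Descriptive note (editorial assumption about intent): the fake exception
# hierarchy below mirrors the Unity client (storops) exception names that the
# Unity plugin code handles, so the error-path tests can run in isolation
# without the real client library.
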
class UnityFakeException(Exception):
    pass


class UnityException(UnityFakeException):
    pass


class UnitySmbShareNameExistedError(UnityException):
    pass


class UnityFileSystemNameAlreadyExisted(UnityException):
    pass


class UnityNasServerNameUsedError(UnityException):
    pass


class UnityNfsShareNameExistedError(UnityException):
    pass


class UnitySnapNameInUseError(UnityException):
    pass


class UnityIpAddressUsedError(UnityException):
    pass


class UnityResourceNotFoundError(UnityException):
    pass


class UnityOneDnsPerNasServerError(UnityException):
    pass


class UnitySmbNameInUseError(UnityException):
    pass


class UnityNfsAlreadyEnabledError(UnityException):
    pass


class UnityHostNotFoundException(UnityException):
    pass


class UnityNothingToModifyError(UnityException):
    pass


class UnityShareShrinkSizeTooSmallError(UnityException):
    pass


class UnityTenantNameInUseError(UnityException):
    pass


class UnityVLANUsedByOtherTenantError(UnityException):
    pass


class SystemAPINotSupported(UnityException):
    pass


class UnityVLANAlreadyHasInterfaceError(UnityException):
    pass


class UnityAclUserNotFoundError(UnityException):
    pass
manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/vnx/0000775000175000017500000000000013656750362024313 5ustar zuulzuul00000000000000
manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/vnx/__init__.py0000664000175000017500000000000013656750227026412 0ustar zuulzuul00000000000000
manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/vnx/test_object_manager.py0000664000175000017500000037546413656750227030677 0ustar zuulzuul00000000000000
# Copyright (c) 2015 EMC Corporation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
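
# Descriptive note (editorial summary): unit tests for the VNX object manager.
# Each test case replays canned XML API or SSH responses built from the enas
# fakes module, then asserts the exact sequence of requests or SSH commands
# that the storage objects issue.
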
import copy
import time
from unittest import mock

import ddt
from lxml import builder
from oslo_concurrency import processutils

from manila.common import constants as const
from manila import exception
from manila.share.drivers.dell_emc.common.enas import connector
from manila.share.drivers.dell_emc.common.enas import constants
from manila.share.drivers.dell_emc.common.enas import xml_api_parser as parser
from manila.share.drivers.dell_emc.plugins.vnx import object_manager as manager
from manila import test
from manila.tests.share.drivers.dell_emc.common.enas import fakes
from manila.tests.share.drivers.dell_emc.common.enas import utils


class StorageObjectManagerTestCase(test.TestCase):
    @mock.patch.object(connector, "XMLAPIConnector", mock.Mock())
    @mock.patch.object(connector, "SSHConnector", mock.Mock())
    def setUp(self):
        super(StorageObjectManagerTestCase, self).setUp()

        emd_share_driver = fakes.FakeEMCShareDriver()

        self.manager = manager.StorageObjectManager(
            emd_share_driver.configuration)

    def test_get_storage_context(self):
        type_map = {
            'FileSystem': manager.FileSystem,
            'StoragePool': manager.StoragePool,
            'MountPoint': manager.MountPoint,
            'Mover': manager.Mover,
            'VDM': manager.VDM,
            'Snapshot': manager.Snapshot,
            'MoverInterface': manager.MoverInterface,
            'DNSDomain': manager.DNSDomain,
            'CIFSServer': manager.CIFSServer,
            'CIFSShare': manager.CIFSShare,
            'NFSShare': manager.NFSShare,
        }

        for key, value in type_map.items():
            self.assertTrue(
                isinstance(self.manager.getStorageContext(key), value))

        for key in self.manager.context.keys():
            self.assertIn(key, type_map)

    def test_get_storage_context_invalid_type(self):
        fake_type = 'fake_type'

        self.assertRaises(exception.EMCVnxXMLAPIError,
                          self.manager.getStorageContext,
                          fake_type)


class StorageObjectTestCaseBase(test.TestCase):
    @mock.patch.object(connector, "XMLAPIConnector", mock.Mock())
    @mock.patch.object(connector, "SSHConnector", mock.Mock())
    def setUp(self):
        super(StorageObjectTestCaseBase, self).setUp()

        emd_share_driver = fakes.FakeEMCShareDriver()

        self.manager = manager.StorageObjectManager(
            emd_share_driver.configuration)
        self.base = fakes.StorageObjectTestData()
        self.pool = fakes.PoolTestData()
        self.vdm = fakes.VDMTestData()
        self.mover = fakes.MoverTestData()
        self.fs = fakes.FileSystemTestData()
        self.mount = fakes.MountPointTestData()
        self.snap = fakes.SnapshotTestData()
        self.cifs_share = fakes.CIFSShareTestData()
        self.nfs_share = fakes.NFSShareTestData()
        self.cifs_server = fakes.CIFSServerTestData()
        self.dns = fakes.DNSDomainTestData()


class StorageObjectTestCase(StorageObjectTestCaseBase):

    def test_xml_api_retry(self):
        hook = utils.RequestSideEffect()
        hook.append(self.base.resp_need_retry())
        hook.append(self.base.resp_task_succeed())

        elt_maker = builder.ElementMaker(
            nsmap={None: constants.XML_NAMESPACE})
        xml_parser = parser.XMLAPIParser()

        storage_object = manager.StorageObject(self.manager.connectors,
                                               elt_maker, xml_parser,
                                               self.manager)
        storage_object.conn['XML'].request = utils.EMCMock(side_effect=hook)
        fake_req = storage_object._build_task_package(
            elt_maker.StartFake(name='foo')
        )
        self.mock_object(time, 'sleep')
        resp = storage_object._send_request(fake_req)
        self.assertEqual('ok', resp['maxSeverity'])

        expected_calls = [
            mock.call(self.base.req_fake_start_task()),
            mock.call(self.base.req_fake_start_task())
        ]
        storage_object.conn['XML'].request.assert_has_calls(expected_calls)


class FileSystemTestCase(StorageObjectTestCaseBase):
    def setUp(self):
        super(FileSystemTestCase, self).setUp()
        self.hook = utils.RequestSideEffect()
        self.ssh_hook =
utils.SSHSideEffect() def test_create_file_system_on_vdm(self): self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.fs.resp_task_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.fs.filesystem_name, size=self.fs.filesystem_size, pool_name=self.pool.pool_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.vdm.req_get()), mock.call(self.fs.req_create_on_vdm()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_file_system_on_mover(self): self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.fs.resp_task_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.fs.filesystem_name, size=self.fs.filesystem_size, pool_name=self.pool.pool_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.fs.req_create_on_mover()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_file_system_but_already_exist(self): self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.fs.resp_create_but_already_exist()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.fs.filesystem_name, size=self.fs.filesystem_size, pool_name=self.pool.pool_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.vdm.req_get()), mock.call(self.fs.req_create_on_vdm()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_file_system_invalid_mover_id(self, sleep_mock): self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.fs.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.fs.resp_task_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.fs.filesystem_name, size=self.fs.filesystem_size, pool_name=self.pool.pool_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.fs.req_create_on_mover()), mock.call(self.mover.req_get_ref()), mock.call(self.fs.req_create_on_mover()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_file_system_with_error(self): self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.fs.resp_task_error()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.create, name=self.fs.filesystem_name, size=self.fs.filesystem_size, pool_name=self.pool.pool_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.vdm.req_get()), mock.call(self.fs.req_create_on_vdm()), ] 
context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_file_system(self): self.hook.append(self.fs.resp_get_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.fs.filesystem_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.fs.filesystem_name, context.filesystem_map) property_map = [ 'name', 'pools_id', 'volume_id', 'size', 'id', 'type', 'dataServicePolicies', ] for prop in property_map: self.assertIn(prop, out) id = context.get_id(self.fs.filesystem_name) self.assertEqual(self.fs.filesystem_id, id) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_file_system_but_not_found(self): self.hook.append(self.fs.resp_get_but_not_found()) self.hook.append(self.fs.resp_get_without_value()) self.hook.append(self.fs.resp_get_error()) self.hook.append(self.fs.resp_get_but_not_found()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.fs.filesystem_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) status, out = context.get(self.fs.filesystem_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) status, out = context.get(self.fs.filesystem_name) self.assertEqual(constants.STATUS_ERROR, status) self.assertRaises(exception.EMCVnxXMLAPIError, context.get_id, self.fs.filesystem_name) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.fs.req_get()), mock.call(self.fs.req_get()), mock.call(self.fs.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_file_system_but_miss_property(self): self.hook.append(self.fs.resp_get_but_miss_property()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.fs.filesystem_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.fs.filesystem_name, context.filesystem_map) property_map = [ 'name', 'pools_id', 'volume_id', 'size', 'id', 'type', 'dataServicePolicies', ] for prop in property_map: self.assertIn(prop, out) self.assertIsNone(out['dataServicePolicies']) id = context.get_id(self.fs.filesystem_name) self.assertEqual(self.fs.filesystem_id, id) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_file_system(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.fs.resp_task_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(self.fs.filesystem_name) self.assertNotIn(self.fs.filesystem_name, context.filesystem_map) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertNotIn(self.fs.filesystem_name, context.filesystem_map) def test_delete_file_system_but_not_found(self): self.hook.append(self.fs.resp_get_but_not_found()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(self.fs.filesystem_name) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_file_system_but_get_file_system_error(self): self.hook.append(self.fs.resp_get_error()) context 
= self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.delete, self.fs.filesystem_name) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_file_system_with_error(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.fs.resp_delete_but_failed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.delete, self.fs.filesystem_name) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertIn(self.fs.filesystem_name, context.filesystem_map) def test_extend_file_system(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.fs.resp_task_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.extend(name=self.fs.filesystem_name, pool_name=self.pool.pool_name, new_size=self.fs.filesystem_new_size) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), mock.call(self.fs.req_extend()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_extend_file_system_but_not_found(self): self.hook.append(self.fs.resp_get_but_not_found()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.extend, name=self.fs.filesystem_name, pool_name=self.fs.pool_name, new_size=self.fs.filesystem_new_size) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_extend_file_system_with_small_size(self): self.hook.append(self.fs.resp_get_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.extend, name=self.fs.filesystem_name, pool_name=self.pool.pool_name, new_size=1) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_extend_file_system_with_same_size(self): self.hook.append(self.fs.resp_get_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.extend(name=self.fs.filesystem_name, pool_name=self.pool.pool_name, new_size=self.fs.filesystem_size) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_extend_file_system_with_error(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.fs.resp_extend_but_error()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.extend, name=self.fs.filesystem_name, pool_name=self.pool.pool_name, new_size=self.fs.filesystem_new_size) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), mock.call(self.fs.req_extend()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_filesystem_from_snapshot(self): 
self.ssh_hook.append() self.ssh_hook.append() self.ssh_hook.append(self.fs.output_copy_ckpt) self.ssh_hook.append(self.fs.output_info()) self.ssh_hook.append() self.ssh_hook.append() self.ssh_hook.append() context = self.manager.getStorageContext('FileSystem') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.create_from_snapshot(self.fs.filesystem_name, self.snap.src_snap_name, self.fs.src_fileystems_name, self.pool.pool_name, self.vdm.vdm_name, self.mover.interconnect_id,) ssh_calls = [ mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_create_filesystem_from_snapshot_with_error(self): self.ssh_hook.append() self.ssh_hook.append() self.ssh_hook.append(ex=processutils.ProcessExecutionError( stdout=self.fs.fake_output, stderr=None)) self.ssh_hook.append(self.fs.output_info()) self.ssh_hook.append() self.ssh_hook.append() self.ssh_hook.append() context = self.manager.getStorageContext('FileSystem') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.create_from_snapshot( self.fs.filesystem_name, self.snap.src_snap_name, self.fs.src_fileystems_name, self.pool.pool_name, self.vdm.vdm_name, self.mover.interconnect_id, ) ssh_calls = [ mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) class MountPointTestCase(StorageObjectTestCaseBase): def setUp(self): super(MountPointTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_create_mount_point_on_vdm(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mount_path=self.mount.path, fs_name=self.fs.filesystem_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.vdm.req_get()), mock.call(self.mount.req_create(self.vdm.vdm_id, True)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_mount_point_on_mover(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mount_path=self.mount.path, fs_name=self.fs.filesystem_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_create(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_mount_point_but_already_exist(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) 
self.hook.append(self.mount.resp_create_but_already_exist()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mount_path=self.mount.path, fs_name=self.fs.filesystem_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.vdm.req_get()), mock.call(self.mount.req_create(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_mount_point_invalid_mover_id(self, sleep_mock): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mount_path=self.mount.path, fs_name=self.fs.filesystem_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_create(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_create(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_mount_point_with_error(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_task_error()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.create, mount_path=self.mount.path, fs_name=self.fs.filesystem_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.vdm.req_get()), mock.call(self.mount.req_create(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_mount_point_on_vdm(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mount_path=self.mount.path, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_delete(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_mount_point_on_mover(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mount_path=self.mount.path, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_delete(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_mount_point_but_nonexistent(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_delete_but_nonexistent()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mount_path=self.mount.path, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ 
mock.call(self.vdm.req_get()), mock.call(self.mount.req_delete(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_delete_mount_point_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mount_path=self.mount.path, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_delete(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_delete(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_delete_mount_point_with_error(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_task_error()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.delete, mount_path=self.mount.path, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_delete(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mount_points(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_get_succeed(self.vdm.vdm_id)) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_get_succeed(self.mover.mover_id, False)) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.vdm.vdm_name) self.assertEqual(constants.STATUS_OK, status) property_map = [ 'path', 'mover', 'moverIdIsVdm', 'fileSystem', ] for item in out: for prop in property_map: self.assertIn(prop, item) status, out = context.get(self.mover.mover_name, False) self.assertEqual(constants.STATUS_OK, status) property_map = [ 'path', 'mover', 'moverIdIsVdm', 'fileSystem', ] for item in out: for prop in property_map: self.assertIn(prop, item) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_get(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mount_points_but_not_found(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_get_without_value()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.mover.mover_name, False) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_get_mount_points_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_get_succeed(self.mover.mover_id, False)) context = 
self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.mover.mover_name, False) self.assertEqual(constants.STATUS_OK, status) property_map = [ 'path', 'mover', 'moverIdIsVdm', 'fileSystem', ] for item in out: for prop in property_map: self.assertIn(prop, item) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_get(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_get_mount_points_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_get_error()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.mover.mover_name, False) self.assertEqual(constants.STATUS_ERROR, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @ddt.ddt class VDMTestCase(StorageObjectTestCaseBase): def setUp(self): super(VDMTestCase, self).setUp() self.hook = utils.RequestSideEffect() self.ssh_hook = utils.SSHSideEffect() def test_create_vdm(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.vdm.resp_task_succeed()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(self.vdm.vdm_name, self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_vdm_but_already_exist(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.vdm.resp_create_but_already_exist()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Create VDM which already exists. 
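        # The queued resp_create_but_already_exist() reply should be swallowed
        # by create(); a duplicate VDM is not treated as a failure.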
        context.create(self.vdm.vdm_name, self.mover.mover_name)

        expected_calls = [
            mock.call(self.mover.req_get_ref()),
            mock.call(self.vdm.req_create()),
        ]
        context.conn['XML'].request.assert_has_calls(expected_calls)

    @mock.patch('time.sleep')
    def test_create_vdm_invalid_mover_id(self, sleep_mock):
        self.hook.append(self.mover.resp_get_ref_succeed())
        self.hook.append(self.vdm.resp_invalid_mover_id())
        self.hook.append(self.mover.resp_get_ref_succeed())
        self.hook.append(self.vdm.resp_task_succeed())

        context = self.manager.getStorageContext('VDM')
        context.conn['XML'].request = utils.EMCMock(side_effect=self.hook)

        # Create VDM with invalid mover ID
        context.create(self.vdm.vdm_name, self.mover.mover_name)

        expected_calls = [
            mock.call(self.mover.req_get_ref()),
            mock.call(self.vdm.req_create()),
            mock.call(self.mover.req_get_ref()),
            mock.call(self.vdm.req_create()),
        ]
        context.conn['XML'].request.assert_has_calls(expected_calls)
        self.assertTrue(sleep_mock.called)

    def test_create_vdm_with_error(self):
        self.hook.append(self.mover.resp_get_ref_succeed())
        self.hook.append(self.vdm.resp_task_error())

        context = self.manager.getStorageContext('VDM')
        context.conn['XML'].request = utils.EMCMock(side_effect=self.hook)

        # Create VDM which fails with an error response
        self.assertRaises(exception.EMCVnxXMLAPIError,
                          context.create,
                          name=self.vdm.vdm_name,
                          mover_name=self.mover.mover_name)

        expected_calls = [
            mock.call(self.mover.req_get_ref()),
            mock.call(self.vdm.req_create()),
        ]
        context.conn['XML'].request.assert_has_calls(expected_calls)

    def test_get_vdm(self):
        self.hook.append(self.vdm.resp_get_succeed())

        context = self.manager.getStorageContext('VDM')
        context.conn['XML'].request = utils.EMCMock(side_effect=self.hook)

        status, out = context.get(self.vdm.vdm_name)
        self.assertEqual(constants.STATUS_OK, status)
        self.assertIn(self.vdm.vdm_name, context.vdm_map)

        property_map = [
            'name',
            'id',
            'state',
            'host_mover_id',
            'interfaces',
        ]
        for prop in property_map:
            self.assertIn(prop, out)

        expected_calls = [mock.call(self.vdm.req_get())]
        context.conn['XML'].request.assert_has_calls(expected_calls)

    def test_get_vdm_with_error(self):
        self.hook.append(self.vdm.resp_get_error())

        context = self.manager.getStorageContext('VDM')
        context.conn['XML'].request = utils.EMCMock(side_effect=self.hook)

        # Get VDM with error
        status, out = context.get(self.vdm.vdm_name)
        self.assertEqual(constants.STATUS_ERROR, status)

        expected_calls = [mock.call(self.vdm.req_get())]
        context.conn['XML'].request.assert_has_calls(expected_calls)

    def test_get_vdm_but_not_found(self):
        self.hook.append(self.vdm.resp_get_without_value())
        self.hook.append(self.vdm.resp_get_succeed('fake'))

        context = self.manager.getStorageContext('VDM')
        context.conn['XML'].request = utils.EMCMock(side_effect=self.hook)

        # Get VDM which does not exist
        status, out = context.get(self.vdm.vdm_name)
        self.assertEqual(constants.STATUS_NOT_FOUND, status)

        status, out = context.get(self.vdm.vdm_name)
        self.assertEqual(constants.STATUS_NOT_FOUND, status)

        expected_calls = [
            mock.call(self.vdm.req_get()),
            mock.call(self.vdm.req_get()),
        ]
        context.conn['XML'].request.assert_has_calls(expected_calls)

    def test_get_vdm_id_with_error(self):
        self.hook.append(self.vdm.resp_get_error())

        context = self.manager.getStorageContext('VDM')
        context.conn['XML'].request = utils.EMCMock(side_effect=self.hook)

        self.assertRaises(exception.EMCVnxXMLAPIError,
                          context.get_id,
                          self.vdm.vdm_name)

        expected_calls = [mock.call(self.vdm.req_get())]
        context.conn['XML'].request.assert_has_calls(expected_calls)

    def test_delete_vdm(self):
        self.hook.append(self.vdm.resp_get_succeed())
        self.hook.append(self.vdm.resp_task_succeed())

        context = self.manager.getStorageContext('VDM')
        context.conn['XML'].request = utils.EMCMock(side_effect=self.hook)

        context.delete(self.vdm.vdm_name)

        expected_calls = [
            mock.call(self.vdm.req_get()),
            mock.call(self.vdm.req_delete()),
        ]
        context.conn['XML'].request.assert_has_calls(expected_calls)

    def test_delete_vdm_but_not_found(self):
        self.hook.append(self.vdm.resp_get_but_not_found())

        context = self.manager.getStorageContext('VDM')
        context.conn['XML'].request = utils.EMCMock(side_effect=self.hook)

        context.delete(self.vdm.vdm_name)

        expected_calls = [mock.call(self.vdm.req_get())]
        context.conn['XML'].request.assert_has_calls(expected_calls)

    def test_delete_vdm_but_failed_to_get_vdm(self):
        self.hook.append(self.vdm.resp_get_error())

        context = self.manager.getStorageContext('VDM')
        context.conn['XML'].request = utils.EMCMock(side_effect=self.hook)

        self.assertRaises(exception.EMCVnxXMLAPIError,
                          context.delete,
                          self.vdm.vdm_name)

        expected_calls = [mock.call(self.vdm.req_get())]
        context.conn['XML'].request.assert_has_calls(expected_calls)

    def test_delete_vdm_with_error(self):
        self.hook.append(self.vdm.resp_get_succeed())
        self.hook.append(self.vdm.resp_task_error())

        context = self.manager.getStorageContext('VDM')
        context.conn['XML'].request = utils.EMCMock(side_effect=self.hook)

        self.assertRaises(exception.EMCVnxXMLAPIError,
                          context.delete,
                          self.vdm.vdm_name)

        expected_calls = [
            mock.call(self.vdm.req_get()),
            mock.call(self.vdm.req_delete()),
        ]
        context.conn['XML'].request.assert_has_calls(expected_calls)

    def test_attach_detach_nfs_interface(self):
        self.ssh_hook.append()
        self.ssh_hook.append()

        context = self.manager.getStorageContext('VDM')
        context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook)

        context.attach_nfs_interface(self.vdm.vdm_name,
                                     self.mover.interface_name2)
        context.detach_nfs_interface(self.vdm.vdm_name,
                                     self.mover.interface_name2)

        ssh_calls = [
            mock.call(self.vdm.cmd_attach_nfs_interface(), False),
            mock.call(self.vdm.cmd_detach_nfs_interface(), True),
        ]
        context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls)

    def test_detach_nfs_interface_with_error(self):
        self.ssh_hook.append(ex=processutils.ProcessExecutionError(
            stdout=self.vdm.fake_output))
        self.ssh_hook.append(self.vdm.output_get_interfaces_vdm(
            self.mover.interface_name2))
        self.ssh_hook.append(ex=processutils.ProcessExecutionError(
            stdout=self.vdm.fake_output))
        self.ssh_hook.append(self.vdm.output_get_interfaces_vdm(
            nfs_interface=fakes.FakeData.interface_name1))

        context = self.manager.getStorageContext('VDM')
        context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook)

        self.assertRaises(exception.EMCVnxXMLAPIError,
                          context.detach_nfs_interface,
                          self.vdm.vdm_name,
                          self.mover.interface_name2)

        context.detach_nfs_interface(self.vdm.vdm_name,
                                     self.mover.interface_name2)

        ssh_calls = [
            mock.call(self.vdm.cmd_detach_nfs_interface(), True),
            mock.call(self.vdm.cmd_get_interfaces(), False),
            mock.call(self.vdm.cmd_detach_nfs_interface(), True),
            mock.call(self.vdm.cmd_get_interfaces(), False),
        ]
        context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls)

    @ddt.data(fakes.VDMTestData().output_get_interfaces_vdm(),
              fakes.VDMTestData().output_get_interfaces_nfs())
    def test_get_cifs_nfs_interface(self, fake_output):
        self.ssh_hook.append(fake_output)

        context = self.manager.getStorageContext('VDM')
        context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook)

        interfaces = context.get_interfaces(self.vdm.vdm_name)
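        # Whichever fake interface listing ddt supplies (the VDM-style or the
        # NFS-style output), get_interfaces() is expected to report both a
        # CIFS and an NFS entry.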
self.assertIsNotNone(interfaces['cifs']) self.assertIsNotNone(interfaces['nfs']) ssh_calls = [mock.call(self.vdm.cmd_get_interfaces(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) class StoragePoolTestCase(StorageObjectTestCaseBase): def setUp(self): super(StoragePoolTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_get_pool(self): self.hook.append(self.pool.resp_get_succeed()) context = self.manager.getStorageContext('StoragePool') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.pool.pool_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.pool.pool_name, context.pool_map) property_map = [ 'name', 'movers_id', 'total_size', 'used_size', 'diskType', 'dataServicePolicies', 'id', ] for prop in property_map: self.assertIn(prop, out) expected_calls = [mock.call(self.pool.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_pool_with_error(self): self.hook.append(self.pool.resp_get_error()) self.hook.append(self.pool.resp_get_without_value()) self.hook.append(self.pool.resp_get_succeed(name='other')) context = self.manager.getStorageContext('StoragePool') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.pool.pool_name) self.assertEqual(constants.STATUS_ERROR, status) status, out = context.get(self.pool.pool_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) status, out = context.get(self.pool.pool_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.pool.req_get()), mock.call(self.pool.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_pool_id_with_error(self): self.hook.append(self.pool.resp_get_error()) context = self.manager.getStorageContext('StoragePool') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.get_id, self.pool.pool_name) expected_calls = [mock.call(self.pool.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) class MoverTestCase(StorageObjectTestCaseBase): def setUp(self): super(MoverTestCase, self).setUp() self.hook = utils.RequestSideEffect() self.ssh_hook = utils.SSHSideEffect() def test_get_mover(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_succeed()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.mover.mover_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.mover.mover_name, context.mover_map) property_map = [ 'name', 'id', 'Status', 'version', 'uptime', 'role', 'interfaces', 'devices', 'dns_domain', ] for prop in property_map: self.assertIn(prop, out) status, out = context.get(self.mover.mover_name) self.assertEqual(constants.STATUS_OK, status) status, out = context.get(self.mover.mover_name, True) self.assertEqual(constants.STATUS_OK, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_ref_not_found(self): self.hook.append(self.mover.resp_get_ref_succeed(name='other')) context = 
self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get_ref(self.mover.mover_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [mock.call(self.mover.req_get_ref())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_ref_with_error(self): self.hook.append(self.mover.resp_get_error()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get_ref(self.mover.mover_name) self.assertEqual(constants.STATUS_ERROR, status) expected_calls = [mock.call(self.mover.req_get_ref())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_ref_and_mover(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_succeed()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get_ref(self.mover.mover_name) self.assertEqual(constants.STATUS_OK, status) property_map = ['name', 'id'] for prop in property_map: self.assertIn(prop, out) status, out = context.get(self.mover.mover_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.mover.mover_name, context.mover_map) property_map = [ 'name', 'id', 'Status', 'version', 'uptime', 'role', 'interfaces', 'devices', 'dns_domain', ] for prop in property_map: self.assertIn(prop, out) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_failed_to_get_mover_ref(self): self.hook.append(self.mover.resp_get_error()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.get, self.mover.mover_name) expected_calls = [mock.call(self.mover.req_get_ref())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_but_not_found(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_without_value()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(name=self.mover.mover_name, force=True) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_error()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.mover.mover_name) self.assertEqual(constants.STATUS_ERROR, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_interconnect_id(self): self.ssh_hook.append(self.mover.output_get_interconnect_id()) context = self.manager.getStorageContext('Mover') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) conn_id = context.get_interconnect_id(self.mover.mover_name, self.mover.mover_name) self.assertEqual(self.mover.interconnect_id, conn_id) ssh_calls = [mock.call(self.mover.cmd_get_interconnect_id(), False)] 
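        # utils.SSHSideEffect appears to mirror utils.RequestSideEffect: each
        # append() queues one canned reply that mock.Mock(side_effect=...)
        # pops in order, so the single output queued above is consumed by the
        # get_interconnect_id() call; the trailing False is simply the flag
        # the context forwards to run_ssh() together with the command.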
context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_physical_devices(self): self.ssh_hook.append(self.mover.output_get_physical_devices()) context = self.manager.getStorageContext('Mover') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) devices = context.get_physical_devices(self.mover.mover_name) self.assertIn(self.mover.device_name, devices) ssh_calls = [mock.call(self.mover.cmd_get_physical_devices(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) class SnapshotTestCase(StorageObjectTestCaseBase): def setUp(self): super(SnapshotTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_create_snapshot(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.snap.resp_task_succeed()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.snap.snapshot_name, fs_name=self.fs.filesystem_name, pool_id=self.pool.pool_id) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.snap.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_snapshot_but_already_exist(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.snap.resp_create_but_already_exist()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.snap.snapshot_name, fs_name=self.fs.filesystem_name, pool_id=self.pool.pool_id, ckpt_size=self.snap.snapshot_size) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.snap.req_create_with_size()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_snapshot_with_error(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.snap.resp_task_error()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.create, name=self.snap.snapshot_name, fs_name=self.fs.filesystem_name, pool_id=self.pool.pool_id, ckpt_size=self.snap.snapshot_size) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.snap.req_create_with_size()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_snapshot(self): self.hook.append(self.snap.resp_get_succeed()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.snap.snapshot_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.snap.snapshot_name, context.snap_map) property_map = [ 'name', 'id', 'checkpointOf', 'state', ] for prop in property_map: self.assertIn(prop, out) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_snapshot_but_not_found(self): self.hook.append(self.snap.resp_get_without_value()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.snap.snapshot_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_snapshot_with_error(self): self.hook.append(self.snap.resp_get_error()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = 
context.get(self.snap.snapshot_name) self.assertEqual(constants.STATUS_ERROR, status) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_snapshot(self): self.hook.append(self.snap.resp_get_succeed()) self.hook.append(self.snap.resp_task_succeed()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(self.snap.snapshot_name) self.assertNotIn(self.snap.snapshot_name, context.snap_map) expected_calls = [ mock.call(self.snap.req_get()), mock.call(self.snap.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_snapshot_failed_to_get_snapshot(self): self.hook.append(self.snap.resp_get_error()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.delete, self.snap.snapshot_name) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_snapshot_but_not_found(self): self.hook.append(self.snap.resp_get_without_value()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(self.snap.snapshot_name) self.assertNotIn(self.snap.snapshot_name, context.snap_map) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_snapshot_with_error(self): self.hook.append(self.snap.resp_get_succeed()) self.hook.append(self.snap.resp_task_error()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.delete, self.snap.snapshot_name) expected_calls = [ mock.call(self.snap.req_get()), mock.call(self.snap.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_snapshot_id(self): self.hook.append(self.snap.resp_get_succeed()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) id = context.get_id(self.snap.snapshot_name) self.assertEqual(self.snap.snapshot_id, id) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_snapshot_id_with_error(self): self.hook.append(self.snap.resp_get_error()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.get_id, self.snap.snapshot_name) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) @ddt.ddt class MoverInterfaceTestCase(StorageObjectTestCaseBase): def setUp(self): super(MoverInterfaceTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_create_mover_interface(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_succeed()) self.hook.append(self.mover.resp_task_succeed()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } 
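        # create() is exercised twice: first with a regular interface name and
        # then with long_interface_name, which the driver is expected to
        # truncate to 31 characters (note the [:31] slice in the expected
        # request below).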
context.create(interface) interface['name'] = self.mover.long_interface_name context.create(interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), mock.call(self.mover.req_create_interface( self.mover.long_interface_name[:31])), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_mover_interface_name_already_exist(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append( self.mover.resp_create_interface_but_name_already_exist()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } context.create(interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_mover_interface_ip_already_exist(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append( self.mover.resp_create_interface_but_ip_already_exist()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } context.create(interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @ddt.data(fakes.MoverTestData().resp_task_succeed(), fakes.MoverTestData().resp_task_error()) def test_create_mover_interface_with_conflict_vlan_id(self, xml_resp): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append( self.mover.resp_create_interface_with_conflicted_vlan_id()) self.hook.append(xml_resp) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } self.assertRaises(exception.EMCVnxXMLAPIError, context.create, interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), mock.call(self.mover.req_delete_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_mover_interface_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_succeed()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } context.create(interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), 
mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_mover_interface_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_error()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } self.assertRaises(exception.EMCVnxXMLAPIError, context.create, interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_interface(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_succeed()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(name=self.mover.interface_name1, mover_name=self.mover.mover_name) self.assertEqual(constants.STATUS_OK, status) property_map = [ 'name', 'device', 'up', 'ipVersion', 'netMask', 'ipAddress', 'vlanid', ] for prop in property_map: self.assertIn(prop, out) context.get(name=self.mover.long_interface_name, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_interface_not_found(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_without_value()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(name=self.mover.interface_name1, mover_name=self.mover.mover_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_mover_interface(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_succeed()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(ip_addr=self.mover.ip_address1, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_mover_interface_but_nonexistent(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_delete_interface_but_nonexistent()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(ip_addr=self.mover.ip_address1, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def 
test_delete_mover_interface_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_succeed()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(ip_addr=self.mover.ip_address1, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_delete_mover_interface_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_error()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.delete, ip_addr=self.mover.ip_address1, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) class DNSDomainTestCase(StorageObjectTestCaseBase): def setUp(self): super(DNSDomainTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_create_dns_domain(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_task_succeed()) context = self.manager.getStorageContext('DNSDomain') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mover_name=self.mover.mover_name, name=self.dns.domain_name, servers=self.dns.dns_ip_address) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_dns_domain_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_task_succeed()) context = self.manager.getStorageContext('DNSDomain') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mover_name=self.mover.mover_name, name=self.dns.domain_name, servers=self.dns.dns_ip_address) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_create()), mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_dns_domain_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_task_error()) context = self.manager.getStorageContext('DNSDomain') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.create, mover_name=self.mover.mover_name, name=self.mover.domain_name, servers=self.dns.dns_ip_address) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_dns_domain(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_task_succeed()) self.hook.append(self.dns.resp_task_error()) context = 
self.manager.getStorageContext('DNSDomain') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mover_name=self.mover.mover_name, name=self.mover.domain_name) context.delete(mover_name=self.mover.mover_name, name=self.mover.domain_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_delete()), mock.call(self.dns.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_delete_dns_domain_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_task_succeed()) context = self.manager.getStorageContext('DNSDomain') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mover_name=self.mover.mover_name, name=self.mover.domain_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_delete()), mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) class CIFSServerTestCase(StorageObjectTestCaseBase): def setUp(self): super(CIFSServerTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_create_cifs_server(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) self.hook.append(self.cifs_server.resp_task_error()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Create CIFS server on mover cifs_server_args = { 'name': self.cifs_server.cifs_server_name, 'interface_ip': self.cifs_server.ip_address1, 'domain_name': self.cifs_server.domain_name, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.mover.mover_name, 'is_vdm': False, } context.create(cifs_server_args) # Create CIFS server on VDM cifs_server_args = { 'name': self.cifs_server.cifs_server_name, 'interface_ip': self.cifs_server.ip_address1, 'domain_name': self.cifs_server.domain_name, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, 'is_vdm': True, } context.create(cifs_server_args) # Create CIFS server on VDM cifs_server_args = { 'name': self.cifs_server.cifs_server_name, 'interface_ip': self.cifs_server.ip_address1, 'domain_name': self.cifs_server.domain_name, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, 'is_vdm': True, } context.create(cifs_server_args) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_create(self.mover.mover_id, False)), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_create(self.vdm.vdm_id)), mock.call(self.cifs_server.req_create(self.vdm.vdm_id)), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_cifs_server_already_exist(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_task_error()) self.hook.append(self.cifs_server.resp_get_succeed( 
mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) @mock.patch('time.sleep') def test_create_cifs_server_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Create CIFS server on mover cifs_server_args = { 'name': self.cifs_server.cifs_server_name, 'interface_ip': self.cifs_server.ip_address1, 'domain_name': self.cifs_server.domain_name, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.mover.mover_name, 'is_vdm': False, } context.create(cifs_server_args) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_create(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_create(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_cifs_server_with_error(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_task_error()) self.hook.append(self.cifs_server.resp_get_error()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Create CIFS server on VDM cifs_server_args = { 'name': self.cifs_server.cifs_server_name, 'interface_ip': self.cifs_server.ip_address1, 'domain_name': self.cifs_server.domain_name, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, 'is_vdm': True, } self.assertRaises(exception.EMCVnxXMLAPIError, context.create, cifs_server_args) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_create(self.vdm.vdm_id)), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_all_cifs_server(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get_all(self.vdm.vdm_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.vdm.vdm_name, context.cifs_server_map) # Get CIFS server from the cache status, out = context.get_all(self.vdm.vdm_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.vdm.vdm_name, context.cifs_server_map) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_get_all_cifs_server_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( 
mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get_all(self.mover.mover_name, False) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.mover.mover_name, context.cifs_server_map) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_get(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_get_cifs_server(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(name=self.cifs_server.cifs_server_name, mover_name=self.vdm.vdm_name) self.assertEqual(constants.STATUS_OK, status) property_map = { 'name', 'compName', 'Aliases', 'type', 'interfaces', 'domain', 'domainJoined', 'mover', 'moverIdIsVdm', } for prop in property_map: self.assertIn(prop, out) context.get(name=self.cifs_server.cifs_server_name, mover_name=self.vdm.vdm_name) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_modify_cifs_server(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': True, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.mover.mover_name, 'is_vdm': False, } context.modify(cifs_server_args) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': False, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, } context.modify(cifs_server_args) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_modify( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_modify_cifs_server_but_unjoin_domain(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_modify_but_unjoin_domain()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': False, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, } context.modify(cifs_server_args) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def 
test_modify_cifs_server_but_already_join_domain(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append( self.cifs_server.resp_modify_but_already_join_domain()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': True, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, } context.modify(cifs_server_args) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_modify_cifs_server_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': True, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.mover.mover_name, 'is_vdm': False, } context.modify(cifs_server_args) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_modify( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_modify( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_modify_cifs_server_with_error(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_task_error()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': False, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, } self.assertRaises(exception.EMCVnxXMLAPIError, context.modify, cifs_server_args) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_cifs_server(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)) self.hook.append(self.cifs_server.resp_task_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)) self.hook.append(self.cifs_server.resp_task_succeed()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(computer_name=self.cifs_server.cifs_server_name, mover_name=self.mover.mover_name, is_vdm=False) context.delete(computer_name=self.cifs_server.cifs_server_name, mover_name=self.vdm.vdm_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_get(self.mover.mover_id, False)), 
mock.call(self.cifs_server.req_delete(self.mover.mover_id, False)), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_server.req_delete(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_cifs_server_but_not_found(self): self.hook.append(self.mover.resp_get_without_value()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_get_without_value()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(computer_name=self.cifs_server.cifs_server_name, mover_name=self.mover.mover_name, is_vdm=False) context.delete(computer_name=self.cifs_server.cifs_server_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_cifs_server_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)) self.hook.append(self.cifs_server.resp_task_error()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.delete, computer_name=self.cifs_server.cifs_server_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_get(self.mover.mover_id, False)), mock.call(self.cifs_server.req_delete(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) class CIFSShareTestCase(StorageObjectTestCaseBase): def setUp(self): super(CIFSShareTestCase, self).setUp() self.hook = utils.RequestSideEffect() self.ssh_hook = utils.SSHSideEffect() def test_create_cifs_share(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.cifs_share.share_name, server_name=self.cifs_share.cifs_server_name[-14:], mover_name=self.vdm.vdm_name, is_vdm=True) context.create(name=self.cifs_share.share_name, server_name=self.cifs_share.cifs_server_name[-14:], mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_create(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_cifs_share_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.cifs_share.share_name, server_name=self.cifs_share.cifs_server_name[-14:], mover_name=self.mover.mover_name, is_vdm=False) expected_calls = 
[ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_create(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_create(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_cifs_share_with_error(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_share.resp_task_error()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.create, name=self.cifs_share.share_name, server_name=self.cifs_share.cifs_server_name[-14:], mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_cifs_share(self): self.hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) self.hook.append(self.cifs_share.resp_get_succeed(self.mover.mover_id, False)) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name, is_vdm=True) context.delete(name=self.cifs_share.share_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_delete(self.vdm.vdm_id)), mock.call(self.cifs_share.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_delete(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_cifs_share_not_found(self): self.hook.append(self.cifs_share.resp_get_error()) self.hook.append(self.cifs_share.resp_get_without_value()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.delete, name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name, is_vdm=True) context.delete(name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.cifs_share.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_delete_cifs_share_invalid_mover_id(self, sleep_mock): self.hook.append(self.cifs_share.resp_get_succeed(self.mover.mover_id, False)) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(name=self.cifs_share.share_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_delete(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_delete(self.mover.mover_id, False)), ] 
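# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original test module). The
# invalid-mover-id tests in this class queue an "invalid mover id" response,
# then a fresh mover lookup and a success, and finally assert that time.sleep
# was called: the code under test is expected to refresh the stale mover id
# and retry the request. A stand-alone version of that retry-and-refresh
# pattern, using hypothetical helper names, could look like this:
import time
from unittest import mock


def create_with_refresh(get_mover_id, do_request, is_invalid_id, retries=2):
    """Retry a request once per refreshed mover id (hypothetical helper)."""
    mover_id = get_mover_id()
    for _attempt in range(retries):
        resp = do_request(mover_id)
        if not is_invalid_id(resp):
            return resp
        time.sleep(1)              # back off, mirroring the mocked time.sleep
        mover_id = get_mover_id()  # refresh the possibly stale mover id
    raise RuntimeError('request still failing after refreshing the mover id')


def _demo_retry():
    get_mover_id = mock.Mock(side_effect=['mover_1', 'mover_1'])
    do_request = mock.Mock(side_effect=['INVALID_MOVER_ID', 'OK'])
    with mock.patch('time.sleep') as sleep_mock:
        result = create_with_refresh(get_mover_id, do_request,
                                     lambda r: r == 'INVALID_MOVER_ID')
    assert result == 'OK' and sleep_mock.called
# ---------------------------------------------------------------------------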
context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_delete_cifs_share_with_error(self): self.hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_share.resp_task_error()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.delete, name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_delete(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_cifs_share(self): self.hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.get(self.cifs_share.share_name) expected_calls = [mock.call(self.cifs_share.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_disable_share_access(self): self.ssh_hook.append('Command succeeded') context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.disable_share_access(share_name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.cifs_share.cmd_disable_access(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_disable_share_access_with_error(self): self.ssh_hook.append(ex=processutils.ProcessExecutionError( stdout=self.cifs_share.fake_output)) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.disable_share_access, share_name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.cifs_share.cmd_disable_access(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_allow_share_access(self): self.ssh_hook.append(self.cifs_share.output_allow_access()) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.allow_share_access(mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [mock.call(self.cifs_share.cmd_change_access(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_allow_share_access_duplicate_ACE(self): expt_dup_ace = processutils.ProcessExecutionError( stdout=self.cifs_share.output_allow_access_but_duplicate_ace()) self.ssh_hook.append(ex=expt_dup_ace) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.allow_share_access(mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [mock.call(self.cifs_share.cmd_change_access(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_allow_share_access_with_error(self): expt_err = processutils.ProcessExecutionError( self.cifs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('CIFSShare') 
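# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original test module). The
# allow_share_access tests around here feed processutils.ProcessExecutionError
# instances whose stdout carries the CLI output: a "duplicate ACE" message is
# tolerated as success, while unrecognized output is turned into
# EMCVnxXMLAPIError. A minimal sketch of that classification, with a
# hypothetical marker string and a stand-in exception type:
class FakeBackendError(Exception):
    """Stand-in for exception.EMCVnxXMLAPIError in this sketch."""


def apply_ace(run_cli, command, duplicate_marker='ACE already exists'):
    try:
        return run_cli(command)
    except Exception as exc:           # ProcessExecutionError in the driver
        output = getattr(exc, 'stdout', '') or ''
        if duplicate_marker in output:
            return output              # access already granted: not an error
        raise FakeBackendError('failed to change share access: %s' % output)
# ---------------------------------------------------------------------------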
context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.allow_share_access, mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [mock.call(self.cifs_share.cmd_change_access(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_share_access(self): self.ssh_hook.append('Command succeeded') context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.deny_share_access(mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_share_access_no_ace(self): expt_no_ace = processutils.ProcessExecutionError( stdout=self.cifs_share.output_deny_access_but_no_ace()) self.ssh_hook.append(ex=expt_no_ace) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.deny_share_access(mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_share_access_but_no_user_found(self): expt_no_user = processutils.ProcessExecutionError( stdout=self.cifs_share.output_deny_access_but_no_user_found()) self.ssh_hook.append(ex=expt_no_user) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.deny_share_access(mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_share_access_with_error(self): expt_err = processutils.ProcessExecutionError( self.cifs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.deny_share_access, mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_share_access(self): self.ssh_hook.append(fakes.FakeData.cifs_access) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) ret = context.get_share_access( mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name) ssh_calls = [ mock.call(self.cifs_share.cmd_get_access(), True), ] self.assertEqual(2, len(ret)) self.assertEqual(constants.CIFS_ACL_FULLCONTROL, ret['administrator']) self.assertEqual(constants.CIFS_ACL_FULLCONTROL, ret['guest']) 
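# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original test module).
# test_get_share_access above expects a dict keyed by user name with the ACL
# level as the value (both 'administrator' and 'guest' map to
# CIFS_ACL_FULLCONTROL). The real CLI output lives in
# fakes.FakeData.cifs_access and is not reproduced here; the parser below
# assumes a hypothetical "user:level" per-line layout purely for illustration:
def parse_share_access(cli_output):
    access = {}
    for line in cli_output.splitlines():
        line = line.strip()
        if ':' not in line:
            continue
        user, level = line.split(':', 1)
        access[user.strip().lower()] = level.strip()
    return access


assert parse_share_access('Administrator:fullcontrol\nguest:fullcontrol') == {
    'administrator': 'fullcontrol', 'guest': 'fullcontrol'}
# ---------------------------------------------------------------------------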
context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_share_access_failed(self): expt_err = processutils.ProcessExecutionError( stdout=self.nfs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.get_share_access, mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name) ssh_calls = [ mock.call(self.cifs_share.cmd_get_access(), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_clear_share_access_has_white_list(self): self.ssh_hook.append(fakes.FakeData.cifs_access) self.ssh_hook.append('Command succeeded') context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) to_remove = context.clear_share_access( mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, domain=self.cifs_server.domain_name, white_list_users=['guest']) ssh_calls = [ mock.call(self.cifs_share.cmd_get_access(), True), mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] self.assertEqual({'administrator'}, to_remove) context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) class NFSShareTestCase(StorageObjectTestCaseBase): def setUp(self): super(NFSShareTestCase, self).setUp() self.ssh_hook = utils.SSHSideEffect() def test_create_nfs_share(self): self.ssh_hook.append(self.nfs_share.output_create()) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.create(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_create(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_create_nfs_share_with_error(self): expt_err = processutils.ProcessExecutionError( stdout=self.nfs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.create, name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_create(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_delete_nfs_share(self): self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) self.ssh_hook.append(self.nfs_share.output_delete_succeed()) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.delete(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_delete(), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_delete_nfs_share_not_found(self): expt_not_found = processutils.ProcessExecutionError( stdout=self.nfs_share.output_get_but_not_found()) self.ssh_hook.append(ex=expt_not_found) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.delete(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_get(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) @mock.patch('time.sleep') def test_delete_nfs_share_locked(self, sleep_mock): self.ssh_hook.append(self.nfs_share.output_get_succeed( 
rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) expt_locked = processutils.ProcessExecutionError( stdout=self.nfs_share.output_delete_but_locked()) self.ssh_hook.append(ex=expt_locked) self.ssh_hook.append(self.nfs_share.output_delete_succeed()) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.delete(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_delete(), True), mock.call(self.nfs_share.cmd_delete(), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) self.assertTrue(sleep_mock.called) def test_delete_nfs_share_with_error(self): self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) expt_err = processutils.ProcessExecutionError( stdout=self.nfs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.delete, name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_delete(), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_nfs_share(self): self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.get(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) # Get NFS share from cache context.get(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_get(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_nfs_share_not_found(self): expt_not_found = processutils.ProcessExecutionError( stdout=self.nfs_share.output_get_but_not_found()) self.ssh_hook.append(ex=expt_not_found) self.ssh_hook.append(self.nfs_share.output_get_but_not_found()) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.get(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) context.get(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_get(), False), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_nfs_share_with_error(self): expt_err = processutils.ProcessExecutionError( stdout=self.nfs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.get, name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_get(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_allow_share_access(self): rw_hosts = copy.deepcopy(self.nfs_share.rw_hosts) rw_hosts.append(self.nfs_share.nfs_host_ip) ro_hosts = copy.deepcopy(self.nfs_share.ro_hosts) ro_hosts.append(self.nfs_share.nfs_host_ip) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) 
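# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original test module). The NFS access
# tests in this class model every change as: read the current rw/ro host
# lists, add or remove the client IP, and push a single "set access" command
# only when something actually changed (the final allow call in this test
# issues just a "get", because the host is already in the rw list). A small
# host-list update helper written under those assumptions:
def updated_host_lists(rw_hosts, ro_hosts, host_ip, access_level):
    rw, ro = set(rw_hosts), set(ro_hosts)
    if access_level == 'rw':           # assumed spelling of ACCESS_LEVEL_RW
        rw.add(host_ip)
        ro.discard(host_ip)
    else:
        ro.add(host_ip)
        rw.discard(host_ip)
    changed = rw != set(rw_hosts) or ro != set(ro_hosts)
    return sorted(rw), sorted(ro), changed


rw, ro, changed = updated_host_lists(['192.168.1.1'], [], '192.168.1.2', 'rw')
assert changed and rw == ['192.168.1.1', '192.168.1.2'] and ro == []
# ---------------------------------------------------------------------------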
self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=ro_hosts)) self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) context.allow_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name, access_level=const.ACCESS_LEVEL_RW) context.allow_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name, access_level=const.ACCESS_LEVEL_RO) context.allow_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name, access_level=const.ACCESS_LEVEL_RW) context.allow_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name, access_level=const.ACCESS_LEVEL_RW) ssh_calls = [ mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)), mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=ro_hosts)), mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)), mock.call(self.nfs_share.cmd_get()), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_allow_share_access_not_found(self): expt_not_found = processutils.ProcessExecutionError( stdout=self.nfs_share.output_get_but_not_found()) self.ssh_hook.append(ex=expt_not_found) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.allow_share_access, share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name, access_level=const.ACCESS_LEVEL_RW) ssh_calls = [mock.call(self.nfs_share.cmd_get())] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_rw_share_access(self): rw_hosts = copy.deepcopy(self.nfs_share.rw_hosts) rw_hosts.append(self.nfs_share.nfs_host_ip) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) context.deny_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access(self.nfs_share.rw_hosts, self.nfs_share.ro_hosts)), mock.call(self.nfs_share.cmd_get()), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_clear_share_access(self): hosts = ['192.168.1.1', '192.168.1.3'] self.ssh_hook.append(self.nfs_share.output_get_succeed( 
rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=[hosts[0]], ro_hosts=[hosts[1]])) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) context.clear_share_access(share_name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name, white_list_hosts=hosts) ssh_calls = [ mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access( rw_hosts=[hosts[0]], ro_hosts=[hosts[1]])), mock.call(self.nfs_share.cmd_get()), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_ro_share_access(self): ro_hosts = copy.deepcopy(self.nfs_share.ro_hosts) ro_hosts.append(self.nfs_share.nfs_host_ip) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=ro_hosts)) self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) context.deny_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name) context.deny_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access(self.nfs_share.rw_hosts, self.nfs_share.ro_hosts)), mock.call(self.nfs_share.cmd_get()), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_share_not_found(self): expt_not_found = processutils.ProcessExecutionError( stdout=self.nfs_share.output_get_but_not_found()) self.ssh_hook.append(ex=expt_not_found) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.deny_share_access, share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_get())] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_rw_share_with_error(self): rw_hosts = copy.deepcopy(self.nfs_share.rw_hosts) rw_hosts.append(self.nfs_share.nfs_host_ip) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) expt_not_found = processutils.ProcessExecutionError( stdout=self.nfs_share.output_get_but_not_found()) self.ssh_hook.append(ex=expt_not_found) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, context.deny_share_access, share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access(self.nfs_share.rw_hosts, self.nfs_share.ro_hosts)), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_clear_share_access_failed_to_get_share(self): self.ssh_hook.append("no output.") context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCVnxXMLAPIError, 
context.clear_share_access, share_name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name, white_list_hosts=None) context.conn['SSH'].run_ssh.assert_called_once_with( self.nfs_share.cmd_get(), False) manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/vnx/test_connection.py0000664000175000017500000027725513656750227030105 0ustar zuulzuul00000000000000# Copyright (c) 2015 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from unittest import mock import ddt from oslo_log import log from manila import exception from manila.share.drivers.dell_emc.common.enas import connector from manila.share.drivers.dell_emc.plugins.vnx import connection from manila.share.drivers.dell_emc.plugins.vnx import object_manager from manila import test from manila.tests import fake_share from manila.tests.share.drivers.dell_emc.common.enas import fakes from manila.tests.share.drivers.dell_emc.common.enas import utils LOG = log.getLogger(__name__) @ddt.ddt class StorageConnectionTestCase(test.TestCase): @mock.patch.object(connector.XMLAPIConnector, "_do_setup", mock.Mock()) def setUp(self): super(StorageConnectionTestCase, self).setUp() self.emc_share_driver = fakes.FakeEMCShareDriver() self.connection = connection.VNXStorageConnection(LOG) self.pool = fakes.PoolTestData() self.vdm = fakes.VDMTestData() self.mover = fakes.MoverTestData() self.fs = fakes.FileSystemTestData() self.mount = fakes.MountPointTestData() self.snap = fakes.SnapshotTestData() self.cifs_share = fakes.CIFSShareTestData() self.nfs_share = fakes.NFSShareTestData() self.cifs_server = fakes.CIFSServerTestData() self.dns = fakes.DNSDomainTestData() with mock.patch.object(connector.XMLAPIConnector, 'request', mock.Mock()): self.connection.connect(self.emc_share_driver, None) def test_check_for_setup_error(self): hook = utils.RequestSideEffect() hook.append(self.mover.resp_get_ref_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock with mock.patch.object(connection.VNXStorageConnection, '_get_managed_storage_pools', mock.Mock()): self.connection.check_for_setup_error() expected_calls = [mock.call(self.mover.req_get_ref())] xml_req_mock.assert_has_calls(expected_calls) def test_check_for_setup_error_with_invalid_mover_name(self): hook = utils.RequestSideEffect() hook.append(self.mover.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.InvalidParameterValue, self.connection.check_for_setup_error) expected_calls = [mock.call(self.mover.req_get_ref())] xml_req_mock.assert_has_calls(expected_calls) @ddt.data({'pool_conf': None, 'real_pools': ['fake_pool', 'nas_pool'], 'matched_pool': set()}, {'pool_conf': [], 'real_pools': ['fake_pool', 'nas_pool'], 'matched_pool': set()}, {'pool_conf': ['*'], 'real_pools': ['fake_pool', 'nas_pool'], 'matched_pool': {'fake_pool', 'nas_pool'}}, {'pool_conf': ['fake_*'], 'real_pools': ['fake_pool', 
'nas_pool', 'Perf_Pool'], 'matched_pool': {'fake_pool'}}, {'pool_conf': ['*pool'], 'real_pools': ['fake_pool', 'NAS_Pool', 'Perf_POOL'], 'matched_pool': {'fake_pool'}}, {'pool_conf': ['nas_pool'], 'real_pools': ['fake_pool', 'nas_pool', 'perf_pool'], 'matched_pool': {'nas_pool'}}) @ddt.unpack def test__get_managed_storage_pools(self, pool_conf, real_pools, matched_pool): with mock.patch.object(object_manager.StoragePool, 'get_all', mock.Mock(return_value=('ok', real_pools))): pool = self.connection._get_managed_storage_pools(pool_conf) self.assertEqual(matched_pool, pool) def test__get_managed_storage_pools_failed_to_get_pool_info(self): hook = utils.RequestSideEffect() hook.append(self.pool.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock pool_conf = fakes.FakeData.pool_name self.assertRaises(exception.EMCVnxXMLAPIError, self.connection._get_managed_storage_pools, pool_conf) expected_calls = [mock.call(self.pool.req_get())] xml_req_mock.assert_has_calls(expected_calls) @ddt.data( {'pool_conf': ['fake_*'], 'real_pools': ['nas_pool', 'Perf_Pool']}, {'pool_conf': ['*pool'], 'real_pools': ['NAS_Pool', 'Perf_POOL']}, {'pool_conf': ['nas_pool'], 'real_pools': ['fake_pool', 'perf_pool']}, ) @ddt.unpack def test__get_managed_storage_pools_without_matched_pool(self, pool_conf, real_pools): with mock.patch.object(object_manager.StoragePool, 'get_all', mock.Mock(return_value=('ok', real_pools))): self.assertRaises(exception.InvalidParameterValue, self.connection._get_managed_storage_pools, pool_conf) def test_create_cifs_share(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) hook.append(self.pool.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) hook.append(self.cifs_share.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share(None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.pool.req_get()), mock.call(self.fs.req_create_on_vdm()), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [mock.call(self.cifs_share.cmd_disable_access(), True)] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': r'\\%s\%s' % ( fakes.FakeData.network_allocations_ip1, share['name'])}], 'CIFS export path is incorrect') def test_create_cifs_share_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) hook.append(self.pool.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) hook.append(self.cifs_share.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = 
utils.SSHSideEffect() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share(None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.pool.req_get()), mock.call(self.fs.req_create_on_vdm()), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [mock.call(self.cifs_share.cmd_disable_access(), True)] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual( location, [{'path': r'\\%s.ipv6-literal.net\%s' % ( fakes.FakeData.network_allocations_ip3.replace(':', '-'), share['name'])}], 'CIFS export path is incorrect' ) def test_create_nfs_share(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE hook = utils.RequestSideEffect() hook.append(self.pool.resp_get_succeed()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_create()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share(None, share, share_server) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.vdm.req_get()), mock.call(self.fs.req_create_on_vdm()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [mock.call(self.nfs_share.cmd_create(), True)] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': '192.168.1.2:/%s' % share['name']}], 'NFS export path is incorrect') def test_create_nfs_share_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE hook = utils.RequestSideEffect() hook.append(self.pool.resp_get_succeed()) hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_create()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share(None, share, share_server) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.vdm.req_get()), mock.call(self.fs.req_create_on_vdm()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [mock.call(self.nfs_share.cmd_create(), True)] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': '[%s]:/%s' % ( fakes.FakeData.network_allocations_ip4, share['name'])}], 'NFS export path is incorrect') def test_create_cifs_share_without_share_server(self): share = fakes.CIFS_SHARE self.assertRaises(exception.InvalidInput, self.connection.create_share, None, share, None) def test_create_cifs_share_without_share_server_name(self): share = fakes.CIFS_SHARE share_server = copy.deepcopy(fakes.SHARE_SERVER) share_server['backend_details']['share_server_name'] = None self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.create_share, None, share, share_server) def test_create_cifs_share_with_invalide_cifs_server_name(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() 
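# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original test module). The create_share
# tests above pin down the export-path formats the driver must return: NFS
# uses "ip:/share" (with the address bracketed for IPv6), while CIFS uses
# "\\ip\share" and rewrites an IPv6 address into the "ipv6-literal.net" form
# with ':' replaced by '-'. A small formatter reproducing just those formats
# (the helper name is hypothetical):
import ipaddress


def format_export_path(proto, ip, share_name):
    is_v6 = ipaddress.ip_address(ip).version == 6
    if proto.upper() == 'NFS':
        host = '[%s]' % ip if is_v6 else ip
        return '%s:/%s' % (host, share_name)
    host = '%s.ipv6-literal.net' % ip.replace(':', '-') if is_v6 else ip
    return r'\\%s\%s' % (host, share_name)


assert format_export_path('NFS', '192.168.1.2', 's1') == '192.168.1.2:/s1'
assert (format_export_path('CIFS', 'fd00::2', 's1')
        == r'\\fd00--2.ipv6-literal.net\s1')
# ---------------------------------------------------------------------------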
hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.create_share, None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) def test_create_cifs_share_without_interface_in_cifs_server(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_without_interface( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) hook.append(self.pool.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.create_share, None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.pool.req_get()), mock.call(self.fs.req_create_on_vdm()), ] xml_req_mock.assert_has_calls(expected_calls) def test_create_cifs_share_without_pool_name(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(host='HostA@BackendB', share_proto='CIFS') self.assertRaises(exception.InvalidHost, self.connection.create_share, None, share, share_server) def test_create_cifs_share_from_snapshot(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE snapshot = fake_share.fake_snapshot( name=fakes.FakeData.src_snap_name, share_name=fakes.FakeData.src_share_name, share_id=fakes.FakeData.src_share_name, id=fakes.FakeData.src_snap_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) hook.append(self.cifs_share.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.mover.output_get_interconnect_id()) ssh_hook.append() ssh_hook.append() ssh_hook.append(self.fs.output_copy_ckpt) ssh_hook.append(self.fs.output_info()) ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share_from_snapshot( None, share, snapshot, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.mover.cmd_get_interconnect_id(), False), mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), mock.call(self.cifs_share.cmd_disable_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': r'\\192.168.1.1\%s' % share['name']}], 'CIFS export path is 
incorrect') def test_create_cifs_share_from_snapshot_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE snapshot = fake_share.fake_snapshot( name=fakes.FakeData.src_snap_name, share_name=fakes.FakeData.src_share_name, share_id=fakes.FakeData.src_share_name, id=fakes.FakeData.src_snap_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) hook.append(self.cifs_share.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.mover.output_get_interconnect_id()) ssh_hook.append() ssh_hook.append() ssh_hook.append(self.fs.output_copy_ckpt) ssh_hook.append(self.fs.output_info()) ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share_from_snapshot( None, share, snapshot, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.mover.cmd_get_interconnect_id(), False), mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), mock.call(self.cifs_share.cmd_disable_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual( location, [{'path': r'\\%s.ipv6-literal.net\%s' % ( fakes.FakeData.network_allocations_ip3.replace(':', '-'), share['name'])}], 'CIFS export path is incorrect') def test_create_nfs_share_from_snapshot(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE snapshot = fake_share.fake_snapshot( name=fakes.FakeData.src_snap_name, share_name=fakes.FakeData.src_share_name, share_id=fakes.FakeData.src_share_name, id=fakes.FakeData.src_snap_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.mover.output_get_interconnect_id()) ssh_hook.append() ssh_hook.append() ssh_hook.append(self.fs.output_copy_ckpt) ssh_hook.append(self.fs.output_info()) ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_hook.append(self.nfs_share.output_create()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share_from_snapshot( None, share, snapshot, share_server) expected_calls = [mock.call(self.fs.req_get())] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.mover.cmd_get_interconnect_id(), False), mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), 
mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), mock.call(self.nfs_share.cmd_create(), True) ] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': '192.168.1.2:/%s' % share['name']}], 'NFS export path is incorrect') def test_create_nfs_share_from_snapshot_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE snapshot = fake_share.fake_snapshot( name=fakes.FakeData.src_snap_name, share_name=fakes.FakeData.src_share_name, share_id=fakes.FakeData.src_share_name, id=fakes.FakeData.src_snap_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.mover.output_get_interconnect_id()) ssh_hook.append() ssh_hook.append() ssh_hook.append(self.fs.output_copy_ckpt) ssh_hook.append(self.fs.output_info()) ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_hook.append(self.nfs_share.output_create()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share_from_snapshot( None, share, snapshot, share_server) expected_calls = [mock.call(self.fs.req_get())] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.mover.cmd_get_interconnect_id(), False), mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), mock.call(self.nfs_share.cmd_create(), True) ] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': '[%s]:/%s' % ( fakes.FakeData.network_allocations_ip4, share['name'])}], 'NFS export path is incorrect') def test_create_share_with_incorrect_proto(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(share_proto='FAKE_PROTO') self.assertRaises(exception.InvalidShare, self.connection.create_share, context=None, share=share, share_server=share_server) def test_create_share_from_snapshot_with_incorrect_proto(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(share_proto='FAKE_PROTO') snapshot = fake_share.fake_snapshot() self.assertRaises(exception.InvalidShare, self.connection.create_share_from_snapshot, None, share, snapshot, share_server) def test_create_share_from_snapshot_without_pool_name(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(host='HostA@BackendB', share_proto='CIFS') snapshot = fake_share.fake_snapshot() self.assertRaises(exception.InvalidHost, self.connection.create_share_from_snapshot, None, share, snapshot, share_server) def test_delete_cifs_share(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_share.resp_task_succeed()) hook.append(self.mount.resp_task_succeed()) hook.append(self.fs.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock 
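# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original test module). delete_share for
# CIFS is verified here as an ordered teardown: look up the share and the VDM,
# delete the CIFS share, delete the mount point, then look up and delete the
# backing filesystem. A later test in this class (the one with a nonexistent
# mount and filesystem) shows that errors from the mount and filesystem steps
# are absorbed, so the deletion stays best-effort. A generic best-effort
# cleanup loop, with hypothetical step callables:
import logging

LOG = logging.getLogger(__name__)


def best_effort_teardown(steps):
    """Run every (name, callable) cleanup step; log failures, never abort."""
    for name, step in steps:
        try:
            step()
        except Exception:
            LOG.warning('cleanup step %s failed; continuing', name)
# ---------------------------------------------------------------------------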
self.connection.delete_share(None, share, share_server) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_delete(self.vdm.vdm_id)), mock.call(self.mount.req_delete(self.vdm.vdm_id)), mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) def test_delete_cifs_share_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_share.resp_task_succeed()) hook.append(self.mount.resp_task_succeed()) hook.append(self.fs.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.delete_share(None, share, share_server) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_delete(self.vdm.vdm_id)), mock.call(self.mount.req_delete(self.vdm.vdm_id)), mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) def test_delete_nfs_share(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.mount.resp_task_succeed()) hook.append(self.fs.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) ssh_hook.append(self.nfs_share.output_delete_succeed()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.delete_share(None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_delete(self.vdm.vdm_id)), mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_delete(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_delete_nfs_share_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.mount.resp_task_succeed()) hook.append(self.fs.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) ssh_hook.append(self.nfs_share.output_delete_succeed()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.delete_share(None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_delete(self.vdm.vdm_id)), mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] 
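# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original test module). Every test in
# this class follows the same three-step pattern: queue canned backend
# responses on a side-effect hook, swap the XML/SSH connector for a mock that
# replays them in order, then verify the exact request sequence with
# assert_has_calls, exactly as done on the next line. Stripped down to plain
# unittest.mock, the pattern is simply:
from unittest import mock


def _pattern_demo():
    # canned responses replayed in order by the mocked connector
    request = mock.Mock(side_effect=['<Vdm .../>', '<MountPoint .../>'])

    # the code under test would issue these requests in this order
    request('<GetVdm/>')
    request('<DeleteMount/>')

    request.assert_has_calls([
        mock.call('<GetVdm/>'),
        mock.call('<DeleteMount/>'),
    ])
# ---------------------------------------------------------------------------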
xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_delete(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_delete_share_without_share_server(self): share = fakes.CIFS_SHARE self.connection.delete_share(None, share) def test_delete_share_with_incorrect_proto(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(share_proto='FAKE_PROTO') self.assertRaises(exception.InvalidShare, self.connection.delete_share, context=None, share=share, share_server=share_server) def test_delete_cifs_share_with_nonexistent_mount_and_filesystem(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_share.resp_task_succeed()) hook.append(self.mount.resp_task_error()) hook.append(self.fs.resp_get_succeed()) hook.append(self.fs.resp_task_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.delete_share(None, share, share_server) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_delete(self.vdm.vdm_id)), mock.call(self.mount.req_delete(self.vdm.vdm_id)), mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) def test_extend_share(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE new_size = fakes.FakeData.new_size hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.pool.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.extend_share(share, new_size, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), mock.call(self.fs.req_extend()), ] xml_req_mock.assert_has_calls(expected_calls) def test_extend_share_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE new_size = fakes.FakeData.new_size hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.pool.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.extend_share(share, new_size, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), mock.call(self.fs.req_extend()), ] xml_req_mock.assert_has_calls(expected_calls) def test_extend_share_without_pool_name(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(host='HostA@BackendB', share_proto='CIFS') new_size = fakes.FakeData.new_size self.assertRaises(exception.InvalidHost, self.connection.extend_share, share, new_size, share_server) def test_create_snapshot(self): share_server = fakes.SHARE_SERVER snapshot = fake_share.fake_snapshot( id=fakes.FakeData.snapshot_name, share_id=fakes.FakeData.filesystem_name, share_name=fakes.FakeData.share_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.snap.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.create_snapshot(None, snapshot, share_server) expected_calls = [ 
mock.call(self.fs.req_get()), mock.call(self.snap.req_create()), ] xml_req_mock.assert_has_calls(expected_calls) def test_create_snapshot_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 snapshot = fake_share.fake_snapshot( id=fakes.FakeData.snapshot_name, share_id=fakes.FakeData.filesystem_name, share_name=fakes.FakeData.share_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.snap.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.create_snapshot(None, snapshot, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.snap.req_create()), ] xml_req_mock.assert_has_calls(expected_calls) def test_create_snapshot_with_incorrect_share_info(self): share_server = fakes.SHARE_SERVER snapshot = fake_share.fake_snapshot( id=fakes.FakeData.snapshot_name, share_id=fakes.FakeData.filesystem_name, share_name=fakes.FakeData.share_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_but_not_found()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.create_snapshot, None, snapshot, share_server) expected_calls = [mock.call(self.fs.req_get())] xml_req_mock.assert_has_calls(expected_calls) def test_delete_snapshot(self): share_server = fakes.SHARE_SERVER snapshot = fake_share.fake_snapshot( id=fakes.FakeData.snapshot_name, share_id=fakes.FakeData.filesystem_name, share_name=fakes.FakeData.share_name) hook = utils.RequestSideEffect() hook.append(self.snap.resp_get_succeed()) hook.append(self.snap.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.delete_snapshot(None, snapshot, share_server) expected_calls = [ mock.call(self.snap.req_get()), mock.call(self.snap.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) def test_delete_snapshot_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 snapshot = fake_share.fake_snapshot( id=fakes.FakeData.snapshot_name, share_id=fakes.FakeData.filesystem_name, share_name=fakes.FakeData.share_name) hook = utils.RequestSideEffect() hook.append(self.snap.resp_get_succeed()) hook.append(self.snap.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.delete_snapshot(None, snapshot, share_server) expected_calls = [ mock.call(self.snap.req_get()), mock.call(self.snap.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) @utils.patch_get_managed_ports_vnx(return_value=['cge-1-0']) def test_setup_server(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_but_not_found()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.vdm.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.dns.resp_task_succeed()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.setup_server(fakes.NETWORK_INFO, None) if_name_1 = fakes.FakeData.interface_name1 
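# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original test module). setup_server is
# exercised here as an ordered sequence: check for (or create) the VDM, create
# two mover interfaces, configure DNS, create the CIFS server, and attach the
# NFS interface over SSH. test_setup_server_with_exception below additionally
# verifies that a failure part-way through removes what was already created
# (the first interface and the VDM). One common way to express such rollback
# is an undo stack; the helper below is a hypothetical sketch of that idea,
# not the driver's actual code:
def run_with_rollback(steps):
    """steps: iterable of (do, undo) callables; undo in reverse on failure."""
    undo_stack = []
    try:
        for do, undo in steps:
            do()
            if undo is not None:
                undo_stack.append(undo)
    except Exception:
        for undo in reversed(undo_stack):
            try:
                undo()
            except Exception:
                pass                   # rollback itself is best-effort
        raise
# ---------------------------------------------------------------------------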
if_name_2 = fakes.FakeData.interface_name2 expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), mock.call(self.mover.req_create_interface( if_name=if_name_1, ip=fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_create_interface( if_name=if_name_2, ip=fakes.FakeData.network_allocations_ip2)), mock.call(self.dns.req_create()), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_attach_nfs_interface(), False), ] ssh_cmd_mock.assert_has_calls(ssh_calls) @utils.patch_get_managed_ports_vnx(return_value=['cge-1-0']) def test_setup_server_with_ipv6(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_but_not_found()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.vdm.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.dns.resp_task_succeed()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.setup_server(fakes.NETWORK_INFO_IPV6, None) if_name_1 = fakes.FakeData.interface_name3 if_name_2 = fakes.FakeData.interface_name4 expect_ip_1 = fakes.FakeData.network_allocations_ip3 expect_ip_2 = fakes.FakeData.network_allocations_ip4 expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), mock.call(self.mover.req_create_interface_with_ipv6( if_name=if_name_1, ip=expect_ip_1)), mock.call(self.mover.req_create_interface_with_ipv6( if_name=if_name_2, ip=expect_ip_2)), mock.call(self.dns.req_create( ip_addr=fakes.FakeData.dns_ipv6_address)), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_create( self.vdm.vdm_id, ip_addr=fakes.FakeData.network_allocations_ip3)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_attach_nfs_interface( interface=fakes.FakeData.interface_name4), False), ] ssh_cmd_mock.assert_has_calls(ssh_calls) @utils.patch_get_managed_ports_vnx(return_value=['cge-1-0']) def test_setup_server_with_existing_vdm(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.dns.resp_task_succeed()) hook.append(self.cifs_server.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.setup_server(fakes.NETWORK_INFO, None) if_name_1 = fakes.FakeData.network_allocations_id1[-12:] if_name_2 = fakes.FakeData.network_allocations_id2[-12:] expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface( if_name=if_name_1, ip=fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_create_interface( if_name=if_name_2, ip=fakes.FakeData.network_allocations_ip2)), 
mock.call(self.dns.req_create()), mock.call(self.cifs_server.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_attach_nfs_interface(), False), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_setup_server_with_invalid_security_service(self): network_info = copy.deepcopy(fakes.NETWORK_INFO) network_info['security_services'][0]['type'] = 'fake_type' self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.setup_server, network_info, None) @utils.patch_get_managed_ports_vnx( side_effect=exception.EMCVnxXMLAPIError( err="Get managed ports fail.")) def test_setup_server_without_valid_physical_device(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_but_not_found()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.vdm.resp_task_succeed()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_without_value()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm(nfs_interface='')) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.setup_server, fakes.NETWORK_INFO, None) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), ] ssh_cmd_mock.assert_has_calls(ssh_calls) @utils.patch_get_managed_ports_vnx(return_value=['cge-1-0']) def test_setup_server_with_exception(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_but_not_found()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.vdm.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_error()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_without_value()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm(nfs_interface='')) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.setup_server, fakes.NETWORK_INFO, None) if_name_1 = fakes.FakeData.network_allocations_id1[-12:] if_name_2 = fakes.FakeData.network_allocations_id2[-12:] expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), mock.call(self.mover.req_create_interface( if_name=if_name_1, ip=fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_create_interface( if_name=if_name_2, ip=fakes.FakeData.network_allocations_ip2)), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip1)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), ] 
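# --- Illustrative sketch (not part of the original test module) -----------
# test_setup_server_with_exception above expects that, once a later setup
# step fails, the interfaces and the VDM created so far are deleted again.
# The sketch below shows that rollback pattern with a hypothetical
# ``backend`` object; it is not the driver's actual implementation.
from unittest import mock


def _setup_server_sketch(backend, ips):
    created = []
    try:
        for ip in ips:
            backend.create_interface(ip)
            created.append(ip)
        backend.create_cifs_server()
    except Exception:
        # Undo the partial work before propagating the error.
        for ip in reversed(created):
            backend.delete_interface(ip)
        backend.delete_vdm()
        raise


def _sketch_rollback_is_asserted():
    backend = mock.Mock()
    backend.create_cifs_server.side_effect = RuntimeError('task error')
    try:
        _setup_server_sketch(backend, ['192.168.1.1'])
    except RuntimeError:
        pass
    backend.delete_interface.assert_called_once_with('192.168.1.1')
    backend.delete_vdm.assert_called_once_with()


_sketch_rollback_is_asserted()
# ---------------------------------------------------------------------------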
ssh_cmd_mock.assert_has_calls(ssh_calls) def test_teardown_server(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) hook.append(self.cifs_server.resp_task_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm()) ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.teardown_server(fakes.SERVER_DETAIL, fakes.SECURITY_SERVICE) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)), mock.call(self.cifs_server.req_delete(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip2)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_teardown_server_with_ipv6(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) hook.append(self.cifs_server.resp_task_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm()) ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.teardown_server(fakes.SERVER_DETAIL_IPV6, fakes.SECURITY_SERVICE_IPV6) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)), mock.call(self.cifs_server.req_delete(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip3)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip4)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_teardown_server_without_server_detail(self): self.connection.teardown_server(None, fakes.SECURITY_SERVICE) def test_teardown_server_without_security_services(self): 
hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm()) ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.teardown_server(fakes.SERVER_DETAIL, []) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip2)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_teardown_server_without_share_server_name_in_server_detail(self): server_detail = { 'cifs_if': fakes.FakeData.network_allocations_ip1, 'nfs_if': fakes.FakeData.network_allocations_ip2, } self.connection.teardown_server(server_detail, fakes.SECURITY_SERVICE) def test_teardown_server_with_invalid_server_name(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.teardown_server(fakes.SERVER_DETAIL, fakes.SECURITY_SERVICE) expected_calls = [mock.call(self.vdm.req_get())] xml_req_mock.assert_has_calls(expected_calls) def test_teardown_server_without_cifs_server(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_error()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.cifs_server.resp_task_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm()) ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.teardown_server(fakes.SERVER_DETAIL, fakes.SECURITY_SERVICE) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip2)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_teardown_server_with_invalid_cifs_server_modification(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) 
hook.append(self.cifs_server.resp_task_error()) hook.append(self.cifs_server.resp_task_succeed()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm()) ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.teardown_server(fakes.SERVER_DETAIL, fakes.SECURITY_SERVICE) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_server.req_modify(self.vdm.vdm_id)), mock.call(self.cifs_server.req_delete(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip2)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_add_cifs_rw(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [], [access], [], share_server=share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_add_cifs_rw_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [], [access], [], share_server=share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_deny_nfs(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = 
fakes.NFS_RW_ACCESS rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [], [], [access], share_server=share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_deny_nfs_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts_ipv6) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts_ipv6, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [], [], [access], share_server=share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=self.nfs_share.rw_hosts_ipv6, ro_hosts=self.nfs_share.ro_hosts_ipv6), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_recover_nfs_rule(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS hosts = ['192.168.1.5'] rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=hosts, ro_hosts=[])) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [access], [], [], share_server=share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=hosts, ro_hosts=[]), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_recover_nfs_rule_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS_IPV6 hosts = ['fdf8:f53b:82e1::5'] rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts_ipv6) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=hosts, ro_hosts=[])) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) 
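# --- Illustrative sketch (not part of the original test module) -----------
# The NFS deny tests above follow a read-modify-write flow: read the current
# export, drop the denied host from the rw list, write the export back, and
# re-read it.  A hypothetical helper capturing just the list arithmetic:


def _recompute_nfs_hosts(rw_hosts, ro_hosts, denied_host):
    """Return new (rw, ro) host lists with ``denied_host`` removed."""
    new_rw = [host for host in rw_hosts if host != denied_host]
    new_ro = [host for host in ro_hosts if host != denied_host]
    return new_rw, new_ro


# Example: denying 192.168.1.2 leaves the remaining rw hosts untouched.
assert _recompute_nfs_hosts(
    ['192.168.1.1', '192.168.1.2'], ['192.168.1.3'], '192.168.1.2') == (
        ['192.168.1.1'], ['192.168.1.3'])
# ---------------------------------------------------------------------------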
self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [access], [], [], share_server=share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=hosts, ro_hosts=[]), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_recover_cifs_rule(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_hook.append(fakes.FakeData.cifs_access) ssh_hook.append('Command succeeded') ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [access], [], [], share_server=share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), mock.call(self.cifs_share.cmd_get_access(), True), mock.call(self.cifs_share.cmd_change_access( action='revoke', user='guest'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_recover_cifs_rule_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_hook.append(fakes.FakeData.cifs_access) ssh_hook.append('Command succeeded') ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [access], [], [], share_server=share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), mock.call(self.cifs_share.cmd_get_access(), True), mock.call(self.cifs_share.cmd_change_access( action='revoke', user='guest'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_cifs_clear_access_server_not_found(self): server = fakes.SHARE_SERVER hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, cifs_server_name='cifs_server_name')) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection._cifs_clear_access, 'share_name', server, None) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] 
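# --- Illustrative sketch (not part of the original test module) -----------
# The "recover cifs rule" tests above apply the desired rules, read the
# share's current access list back, and revoke any user that is not part of
# the desired rule set (the canned output contains a stale 'guest' entry).
# A hypothetical helper for that reconciliation step:


def _users_to_revoke(current_users, desired_users):
    """Users present on the share but absent from the desired rules."""
    return sorted(set(current_users) - set(desired_users))


# Example: only the stale 'guest' entry is revoked.
assert _users_to_revoke(['administrator', 'guest'],
                        ['administrator']) == ['guest']
# ---------------------------------------------------------------------------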
xml_req_mock.assert_has_calls(expected_calls) def test_allow_cifs_rw_access(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_cifs_rw_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_cifs_ro_access(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RO_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access('ro'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_cifs_ro_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RO_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) 
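# --- Illustrative sketch (not part of the original test module) -----------
# The CIFS allow/deny tests above expect cmd_change_access() to be driven by
# the rule's access level ('rw' vs. 'ro') and by whether the rule is being
# granted or revoked.  A hypothetical mapping of those two inputs onto the
# (level, action) arguments seen in the assertions; it is a sketch, not the
# driver's real argument handling.


def _cifs_change_access_args(access_level, revoke=False):
    if access_level not in ('rw', 'ro'):
        # The driver raises exception.InvalidShareAccessLevel here.
        raise ValueError('unsupported access level: %s' % access_level)
    args = {}
    if access_level == 'ro':
        args['level'] = 'ro'
    if revoke:
        args['action'] = 'revoke'
    return args


assert _cifs_change_access_args('rw') == {}
assert _cifs_change_access_args('ro', revoke=True) == {'level': 'ro',
                                                       'action': 'revoke'}
# ---------------------------------------------------------------------------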
self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access('ro'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_ro_access_without_share_server_name(self): share = fakes.CIFS_SHARE share_server = copy.deepcopy(fakes.SHARE_SERVER) share_server['backend_details'].pop('share_server_name') access = fakes.CIFS_RO_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access('ro'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_access_with_invalid_access_level(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fake_share.fake_access(access_level='fake_level') self.assertRaises(exception.InvalidShareAccessLevel, self.connection.allow_access, None, share, access, share_server) def test_allow_access_with_invalid_share_server_name(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.allow_access, None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) def test_allow_nfs_access(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def 
test_allow_nfs_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS_IPV6 rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts_ipv6) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts_ipv6, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts_ipv6), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_cifs_access_with_incorrect_access_type(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fake_share.fake_access(access_type='fake_type') self.assertRaises(exception.InvalidShareAccess, self.connection.allow_access, None, share, access, share_server) def test_allow_nfs_access_with_incorrect_access_type(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = fake_share.fake_access(access_type='fake_type') self.assertRaises(exception.InvalidShareAccess, self.connection.allow_access, None, share, access, share_server) def test_allow_access_with_incorrect_proto(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(share_proto='FAKE_PROTO') access = fake_share.fake_access() self.assertRaises(exception.InvalidShare, self.connection.allow_access, None, share, access, share_server) def test_deny_cifs_rw_access(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.deny_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_deny_cifs_rw_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) 
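# --- Illustrative sketch (not part of the original test module) -----------
# The "incorrect proto" / "incorrect access type" tests above pin down the
# validation order: an unknown share protocol raises InvalidShare, while a
# known protocol with an unsupported access type raises InvalidShareAccess.
# A hypothetical validator showing that dispatch (the accepted access types
# per protocol are an assumption, not taken from this excerpt):


class _InvalidShare(Exception):
    pass


class _InvalidShareAccess(Exception):
    pass


_SUPPORTED_ACCESS_TYPES = {'CIFS': ('user',), 'NFS': ('ip',)}


def _validate_access(share_proto, access_type):
    if share_proto not in _SUPPORTED_ACCESS_TYPES:
        raise _InvalidShare('unsupported protocol: %s' % share_proto)
    if access_type not in _SUPPORTED_ACCESS_TYPES[share_proto]:
        raise _InvalidShareAccess('unsupported access type: %s' % access_type)


for _exc, _args in ((_InvalidShare, ('FAKE_PROTO', 'ip')),
                    (_InvalidShareAccess, ('NFS', 'fake_type'))):
    try:
        _validate_access(*_args)
    except _exc:
        pass
    else:
        raise AssertionError('expected %s' % _exc.__name__)
# ---------------------------------------------------------------------------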
self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.deny_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_deny_cifs_ro_access(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RO_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.deny_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access('ro', 'revoke'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_deny_cifs_ro_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RO_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.deny_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access('ro', 'revoke'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_deny_cifs_access_with_invliad_share_server_name(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.deny_access, None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) def test_deny_nfs_access(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_hook.append(self.nfs_share.output_set_access_success()) 
ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.deny_access(None, share, access, share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_deny_nfs_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS_IPV6 rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts_ipv6) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts_ipv6, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.deny_access(None, share, access, share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=self.nfs_share.rw_hosts_ipv6, ro_hosts=self.nfs_share.ro_hosts_ipv6), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_deny_access_with_incorrect_proto(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(share_proto='FAKE_PROTO') access = fakes.CIFS_RW_ACCESS self.assertRaises(exception.InvalidShare, self.connection.deny_access, None, share, access, share_server) def test_deny_cifs_access_with_incorrect_access_type(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fake_share.fake_access(access_type='fake_type') self.assertRaises(exception.InvalidShareAccess, self.connection.deny_access, None, share, access, share_server) def test_deny_nfs_access_with_incorrect_access_type(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = fake_share.fake_access(access_type='fake_type') self.assertRaises(exception.InvalidShareAccess, self.connection.deny_access, None, share, access, share_server) def test_update_share_stats(self): hook = utils.RequestSideEffect() hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.pool.resp_get_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.update_share_stats(fakes.STATS) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.pool.req_get()), ] xml_req_mock.assert_has_calls(expected_calls) for pool in fakes.STATS['pools']: if pool['pool_name'] == fakes.FakeData.pool_name: self.assertEqual(fakes.FakeData.pool_total_size, pool['total_capacity_gb']) free_size = (fakes.FakeData.pool_total_size - fakes.FakeData.pool_used_size) self.assertEqual(free_size, pool['free_capacity_gb']) def test_update_share_stats_without_matched_config_pools(self): self.connection.pools = set('fake_pool') hook = utils.RequestSideEffect() hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.pool.resp_get_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock 
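# --- Illustrative sketch (not part of the original test module) -----------
# test_update_share_stats above checks the pool capacity arithmetic: the
# reported free capacity is the pool's total size minus its used size.
# A hypothetical helper building one pool entry of the stats dict:


def _pool_stats(pool_name, total_size_gb, used_size_gb):
    return {
        'pool_name': pool_name,
        'total_capacity_gb': total_size_gb,
        'free_capacity_gb': total_size_gb - used_size_gb,
    }


assert _pool_stats('fake_pool', 100, 30) == {
    'pool_name': 'fake_pool',
    'total_capacity_gb': 100,
    'free_capacity_gb': 70,
}
# ---------------------------------------------------------------------------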
self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.update_share_stats, fakes.STATS) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.pool.req_get()), ] xml_req_mock.assert_has_calls(expected_calls) def test_get_pool(self): share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.pool.resp_get_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock pool_name = self.connection.get_pool(share) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), ] xml_req_mock.assert_has_calls(expected_calls) self.assertEqual(fakes.FakeData.pool_name, pool_name) def test_get_pool_failed_to_get_filesystem_info(self): share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.get_pool, share) expected_calls = [mock.call(self.fs.req_get())] xml_req_mock.assert_has_calls(expected_calls) def test_get_pool_failed_to_get_pool_info(self): share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.pool.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.get_pool, share) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), ] xml_req_mock.assert_has_calls(expected_calls) def test_get_pool_failed_to_find_matched_pool_name(self): share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.pool.resp_get_succeed(name='unmatch_pool_name', id='unmatch_pool_id')) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.get_pool, share) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), ] xml_req_mock.assert_has_calls(expected_calls) @ddt.data({'port_conf': None, 'managed_ports': ['cge-1-0', 'cge-1-3']}, {'port_conf': '*', 'managed_ports': ['cge-1-0', 'cge-1-3']}, {'port_conf': ['cge-1-*'], 'managed_ports': ['cge-1-0', 'cge-1-3']}, {'port_conf': ['cge-1-3'], 'managed_ports': ['cge-1-3']}) @ddt.unpack def test_get_managed_ports_one_port(self, port_conf, managed_ports): hook = utils.SSHSideEffect() hook.append(self.mover.output_get_physical_devices()) ssh_cmd_mock = mock.Mock(side_effect=hook) expected_calls = [ mock.call(self.mover.cmd_get_physical_devices(), False), ] self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.port_conf = port_conf ports = self.connection.get_managed_ports() self.assertIsInstance(ports, list) self.assertEqual(sorted(managed_ports), sorted(ports)) ssh_cmd_mock.assert_has_calls(expected_calls) def test_get_managed_ports_no_valid_port(self): hook = utils.SSHSideEffect() hook.append(self.mover.output_get_physical_devices()) ssh_cmd_mock = mock.Mock(side_effect=hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.port_conf = ['cge-2-0'] self.assertRaises(exception.BadConfigurationException, self.connection.get_managed_ports) def test_get_managed_ports_query_devices_failed(self): hook = utils.SSHSideEffect() 
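# --- Illustrative sketch (not part of the original test module) -----------
# The get_managed_ports cases above exercise wildcard port selection: a
# ``None``/'*' configuration keeps every physical port, 'cge-1-*' keeps the
# matching subset, and a configuration that matches nothing is an error.
# A hypothetical filter with the same semantics, based on fnmatch:
import fnmatch


def _filter_ports(available_ports, port_conf):
    if not port_conf or port_conf == '*':
        return list(available_ports)
    patterns = port_conf if isinstance(port_conf, list) else [port_conf]
    matched = [port for port in available_ports
               if any(fnmatch.fnmatch(port, pat) for pat in patterns)]
    if not matched:
        # The driver raises exception.BadConfigurationException here.
        raise ValueError('no physical port matches %s' % patterns)
    return matched


_ports = ['cge-1-0', 'cge-1-3']
assert _filter_ports(_ports, None) == _ports
assert _filter_ports(_ports, ['cge-1-*']) == _ports
assert _filter_ports(_ports, ['cge-1-3']) == ['cge-1-3']
# ---------------------------------------------------------------------------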
hook.append(self.mover.fake_output) ssh_cmd_mock = mock.Mock(side_effect=hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.port_conf = ['cge-2-0'] self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.get_managed_ports) manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/powermax/0000775000175000017500000000000013656750362025342 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/powermax/__init__.py0000664000175000017500000000000013656750227027441 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/powermax/test_object_manager.py0000664000175000017500000037560213656750227031730 0ustar zuulzuul00000000000000# Copyright (c) 2016 Dell Inc. or its subsidiaries. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from unittest import mock import ddt from lxml import builder from oslo_concurrency import processutils from manila import exception from manila.common import constants as const from manila.share.drivers.dell_emc.common.enas import connector from manila.share.drivers.dell_emc.common.enas import constants from manila.share.drivers.dell_emc.common.enas import xml_api_parser as parser from manila.share.drivers.dell_emc.plugins.powermax import ( object_manager as manager) from manila import test from manila.tests.share.drivers.dell_emc.common.enas import fakes from manila.tests.share.drivers.dell_emc.common.enas import utils class StorageObjectManagerTestCase(test.TestCase): @mock.patch.object(connector, "XMLAPIConnector", mock.Mock()) @mock.patch.object(connector, "SSHConnector", mock.Mock()) def setUp(self): super(StorageObjectManagerTestCase, self).setUp() emd_share_driver = fakes.FakeEMCShareDriver('powermax') self.manager = manager.StorageObjectManager( emd_share_driver.configuration) def test_get_storage_context(self): type_map = { 'FileSystem': manager.FileSystem, 'StoragePool': manager.StoragePool, 'MountPoint': manager.MountPoint, 'Mover': manager.Mover, 'VDM': manager.VDM, 'Snapshot': manager.Snapshot, 'MoverInterface': manager.MoverInterface, 'DNSDomain': manager.DNSDomain, 'CIFSServer': manager.CIFSServer, 'CIFSShare': manager.CIFSShare, 'NFSShare': manager.NFSShare, } for key, value in type_map.items(): self.assertTrue( isinstance(self.manager.getStorageContext(key), value)) for key in self.manager.context.keys(): self.assertIn(key, type_map) def test_get_storage_context_invalid_type(self): fake_type = 'fake_type' self.assertRaises(exception.EMCPowerMaxXMLAPIError, self.manager.getStorageContext, fake_type) class StorageObjectTestCaseBase(test.TestCase): @mock.patch.object(connector, "XMLAPIConnector", mock.Mock()) @mock.patch.object(connector, "SSHConnector", mock.Mock()) def setUp(self): super(StorageObjectTestCaseBase, self).setUp() emd_share_driver = fakes.FakeEMCShareDriver('powermax') self.manager = manager.StorageObjectManager( emd_share_driver.configuration) self.base = fakes.StorageObjectTestData() self.pool = fakes.PoolTestData() 
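# --- Illustrative sketch (not part of the original test module) -----------
# test_get_storage_context above verifies a dictionary-based factory: each
# supported type name maps to one storage-object instance, and an unknown
# name is rejected.  A hypothetical, stripped-down version of that lookup:


class _FileSystem(object):
    pass


class _Snapshot(object):
    pass


class _StorageObjectManagerSketch(object):
    def __init__(self):
        self.context = {'FileSystem': _FileSystem(), 'Snapshot': _Snapshot()}

    def getStorageContext(self, type_name):
        if type_name not in self.context:
            # The real manager raises exception.EMCPowerMaxXMLAPIError.
            raise ValueError('invalid storage object type: %s' % type_name)
        return self.context[type_name]


_mgr = _StorageObjectManagerSketch()
assert isinstance(_mgr.getStorageContext('FileSystem'), _FileSystem)
try:
    _mgr.getStorageContext('fake_type')
except ValueError:
    pass
else:
    raise AssertionError('expected ValueError')
# ---------------------------------------------------------------------------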
self.vdm = fakes.VDMTestData() self.mover = fakes.MoverTestData() self.fs = fakes.FileSystemTestData() self.mount = fakes.MountPointTestData() self.snap = fakes.SnapshotTestData() self.cifs_share = fakes.CIFSShareTestData() self.nfs_share = fakes.NFSShareTestData() self.cifs_server = fakes.CIFSServerTestData() self.dns = fakes.DNSDomainTestData() class StorageObjectTestCase(StorageObjectTestCaseBase): def test_xml_api_retry(self): hook = utils.RequestSideEffect() hook.append(self.base.resp_need_retry()) hook.append(self.base.resp_task_succeed()) elt_maker = builder.ElementMaker(nsmap={None: constants.XML_NAMESPACE}) xml_parser = parser.XMLAPIParser() storage_object = manager.StorageObject(self.manager.connectors, elt_maker, xml_parser, self.manager) storage_object.conn['XML'].request = utils.EMCMock(side_effect=hook) fake_req = storage_object._build_task_package( elt_maker.StartFake(name='foo') ) resp = storage_object._send_request(fake_req) self.assertEqual('ok', resp['maxSeverity']) expected_calls = [ mock.call(self.base.req_fake_start_task()), mock.call(self.base.req_fake_start_task()) ] storage_object.conn['XML'].request.assert_has_calls(expected_calls) class FileSystemTestCase(StorageObjectTestCaseBase): def setUp(self): super(FileSystemTestCase, self).setUp() self.hook = utils.RequestSideEffect() self.ssh_hook = utils.SSHSideEffect() def test_create_file_system_on_vdm(self): self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.fs.resp_task_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.fs.filesystem_name, size=self.fs.filesystem_size, pool_name=self.pool.pool_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.vdm.req_get()), mock.call(self.fs.req_create_on_vdm()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_file_system_on_mover(self): self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.fs.resp_task_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.fs.filesystem_name, size=self.fs.filesystem_size, pool_name=self.pool.pool_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.fs.req_create_on_mover()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_file_system_but_already_exist(self): self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.fs.resp_create_but_already_exist()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.fs.filesystem_name, size=self.fs.filesystem_size, pool_name=self.pool.pool_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.vdm.req_get()), mock.call(self.fs.req_create_on_vdm()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_file_system_invalid_mover_id(self, sleep_mock): self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) 
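# --- Illustrative sketch (not part of the original test module) -----------
# test_xml_api_retry above feeds a "needs retry" reply followed by a success
# reply and expects the same request to be sent twice.  A hypothetical retry
# loop with that behaviour (the retry marker and the attempt limit are
# assumptions, not taken from this excerpt):
from unittest import mock


def _send_with_retry(request_fn, request, max_attempts=3):
    for _ in range(max_attempts):
        response = request_fn(request)
        if response.get('maxSeverity') != 'needs_retry':
            return response
    return response


def _sketch_retry_resends_same_request():
    replies = [{'maxSeverity': 'needs_retry'}, {'maxSeverity': 'ok'}]
    fake_request = mock.Mock(side_effect=replies)
    resp = _send_with_retry(fake_request, '<start task/>')
    assert resp['maxSeverity'] == 'ok'
    fake_request.assert_has_calls(
        [mock.call('<start task/>'), mock.call('<start task/>')])


_sketch_retry_resends_same_request()
# ---------------------------------------------------------------------------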
self.hook.append(self.fs.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.fs.resp_task_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.fs.filesystem_name, size=self.fs.filesystem_size, pool_name=self.pool.pool_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.fs.req_create_on_mover()), mock.call(self.mover.req_get_ref()), mock.call(self.fs.req_create_on_mover()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_file_system_with_error(self): self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.fs.resp_task_error()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.create, name=self.fs.filesystem_name, size=self.fs.filesystem_size, pool_name=self.pool.pool_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.vdm.req_get()), mock.call(self.fs.req_create_on_vdm()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_file_system(self): self.hook.append(self.fs.resp_get_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.fs.filesystem_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.fs.filesystem_name, context.filesystem_map) property_map = [ 'name', 'pools_id', 'volume_id', 'size', 'id', 'type', 'dataServicePolicies', ] for prop in property_map: self.assertIn(prop, out) id = context.get_id(self.fs.filesystem_name) self.assertEqual(self.fs.filesystem_id, id) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_file_system_but_not_found(self): self.hook.append(self.fs.resp_get_but_not_found()) self.hook.append(self.fs.resp_get_without_value()) self.hook.append(self.fs.resp_get_error()) self.hook.append(self.fs.resp_get_but_not_found()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.fs.filesystem_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) status, out = context.get(self.fs.filesystem_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) status, out = context.get(self.fs.filesystem_name) self.assertEqual(constants.STATUS_ERROR, status) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.get_id, self.fs.filesystem_name) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.fs.req_get()), mock.call(self.fs.req_get()), mock.call(self.fs.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_file_system_but_miss_property(self): self.hook.append(self.fs.resp_get_but_miss_property()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.fs.filesystem_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.fs.filesystem_name, context.filesystem_map) property_map = [ 'name', 'pools_id', 'volume_id', 'size', 'id', 'type', 
'dataServicePolicies', ] for prop in property_map: self.assertIn(prop, out) self.assertIsNone(out['dataServicePolicies']) id = context.get_id(self.fs.filesystem_name) self.assertEqual(self.fs.filesystem_id, id) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_file_system(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.fs.resp_task_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(self.fs.filesystem_name) self.assertNotIn(self.fs.filesystem_name, context.filesystem_map) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertNotIn(self.fs.filesystem_name, context.filesystem_map) def test_delete_file_system_but_not_found(self): self.hook.append(self.fs.resp_get_but_not_found()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(self.fs.filesystem_name) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_file_system_but_get_file_system_error(self): self.hook.append(self.fs.resp_get_error()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, self.fs.filesystem_name) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_file_system_with_error(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.fs.resp_delete_but_failed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, self.fs.filesystem_name) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertIn(self.fs.filesystem_name, context.filesystem_map) def test_extend_file_system(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.fs.resp_task_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.extend(name=self.fs.filesystem_name, pool_name=self.pool.pool_name, new_size=self.fs.filesystem_new_size) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), mock.call(self.fs.req_extend()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_extend_file_system_but_not_found(self): self.hook.append(self.fs.resp_get_but_not_found()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.extend, name=self.fs.filesystem_name, pool_name=self.fs.pool_name, new_size=self.fs.filesystem_new_size) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_extend_file_system_with_small_size(self): self.hook.append(self.fs.resp_get_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) 
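# --- Illustrative sketch (not part of the original test module) -----------
# The extend tests above (and the "same size" / "small size" cases that
# follow) encode a simple size policy: growing issues an extend request,
# asking for the current size is a no-op, and shrinking is an error.
# A hypothetical decision helper for that policy:


def _extend_decision(current_size, new_size):
    if new_size < current_size:
        # The driver raises exception.EMCPowerMaxXMLAPIError here.
        raise ValueError('shrinking a filesystem is not supported')
    if new_size == current_size:
        return 'noop'
    return 'extend'


assert _extend_decision(10, 20) == 'extend'
assert _extend_decision(10, 10) == 'noop'
try:
    _extend_decision(10, 1)
except ValueError:
    pass
else:
    raise AssertionError('expected ValueError')
# ---------------------------------------------------------------------------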
self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.extend, name=self.fs.filesystem_name, pool_name=self.pool.pool_name, new_size=1) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_extend_file_system_with_same_size(self): self.hook.append(self.fs.resp_get_succeed()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.extend(name=self.fs.filesystem_name, pool_name=self.pool.pool_name, new_size=self.fs.filesystem_size) expected_calls = [mock.call(self.fs.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_extend_file_system_with_error(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.pool.resp_get_succeed()) self.hook.append(self.fs.resp_extend_but_error()) context = self.manager.getStorageContext('FileSystem') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.extend, name=self.fs.filesystem_name, pool_name=self.pool.pool_name, new_size=self.fs.filesystem_new_size) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), mock.call(self.fs.req_extend()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_filesystem_from_snapshot(self): self.ssh_hook.append() self.ssh_hook.append() self.ssh_hook.append(self.fs.output_copy_ckpt) self.ssh_hook.append(self.fs.output_info()) self.ssh_hook.append() self.ssh_hook.append() self.ssh_hook.append() context = self.manager.getStorageContext('FileSystem') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.create_from_snapshot(self.fs.filesystem_name, self.snap.src_snap_name, self.fs.src_fileystems_name, self.pool.pool_name, self.vdm.vdm_name, self.mover.interconnect_id) ssh_calls = [ mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_create_filesystem_from_snapshot_with_error(self): self.ssh_hook.append() self.ssh_hook.append() self.ssh_hook.append(ex=processutils.ProcessExecutionError( stdout=self.fs.fake_output, stderr=None)) self.ssh_hook.append(self.fs.output_info()) self.ssh_hook.append() self.ssh_hook.append() self.ssh_hook.append() context = self.manager.getStorageContext('FileSystem') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.create_from_snapshot( self.fs.filesystem_name, self.snap.src_snap_name, self.fs.src_fileystems_name, self.pool.pool_name, self.vdm.vdm_name, self.mover.interconnect_id, ) ssh_calls = [ mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) class MountPointTestCase(StorageObjectTestCaseBase): def setUp(self): super(MountPointTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_create_mount_point_on_vdm(self): 
self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mount_path=self.mount.path, fs_name=self.fs.filesystem_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.vdm.req_get()), mock.call(self.mount.req_create(self.vdm.vdm_id, True)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_mount_point_on_mover(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mount_path=self.mount.path, fs_name=self.fs.filesystem_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_create(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_mount_point_but_already_exist(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_create_but_already_exist()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mount_path=self.mount.path, fs_name=self.fs.filesystem_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.vdm.req_get()), mock.call(self.mount.req_create(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_mount_point_invalid_mover_id(self, sleep_mock): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mount_path=self.mount.path, fs_name=self.fs.filesystem_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_create(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_create(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_mount_point_with_error(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_task_error()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.create, mount_path=self.mount.path, fs_name=self.fs.filesystem_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.vdm.req_get()), mock.call(self.mount.req_create(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_mount_point_on_vdm(self): 
self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mount_path=self.mount.path, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_delete(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_mount_point_on_mover(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mount_path=self.mount.path, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_delete(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_mount_point_but_nonexistent(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_delete_but_nonexistent()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mount_path=self.mount.path, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_delete(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_delete_mount_point_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_task_succeed()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mount_path=self.mount.path, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_delete(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_delete(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_delete_mount_point_with_error(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_task_error()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, mount_path=self.mount.path, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_delete(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mount_points(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.mount.resp_get_succeed(self.vdm.vdm_id)) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_get_succeed(self.mover.mover_id, False)) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.vdm.vdm_name) self.assertEqual(constants.STATUS_OK, status) property_map = [ 'path', 'mover', 'moverIdIsVdm', 'fileSystem', ] for item in out: for prop in property_map: self.assertIn(prop, item) status, out = 
context.get(self.mover.mover_name, False) self.assertEqual(constants.STATUS_OK, status) property_map = [ 'path', 'mover', 'moverIdIsVdm', 'fileSystem', ] for item in out: for prop in property_map: self.assertIn(prop, item) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_get(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mount_points_but_not_found(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_get_without_value()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.mover.mover_name, False) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_get_mount_points_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_get_succeed(self.mover.mover_id, False)) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.mover.mover_name, False) self.assertEqual(constants.STATUS_OK, status) property_map = [ 'path', 'mover', 'moverIdIsVdm', 'fileSystem', ] for item in out: for prop in property_map: self.assertIn(prop, item) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_get(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_get_mount_points_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mount.resp_get_error()) context = self.manager.getStorageContext('MountPoint') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.mover.mover_name, False) self.assertEqual(constants.STATUS_ERROR, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mount.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) class VDMTestCase(StorageObjectTestCaseBase): def setUp(self): super(VDMTestCase, self).setUp() self.hook = utils.RequestSideEffect() self.ssh_hook = utils.SSHSideEffect() def test_create_vdm(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.vdm.resp_task_succeed()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(self.vdm.vdm_name, self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_vdm_but_already_exist(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.vdm.resp_create_but_already_exist()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Create VDM which already exists. 
context.create(self.vdm.vdm_name, self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_vdm_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.vdm.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.vdm.resp_task_succeed()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Create VDM with invalid mover ID context.create(self.vdm.vdm_name, self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_vdm_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.vdm.resp_task_error()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Create VDM with invalid mover ID self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.create, name=self.vdm.vdm_name, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_vdm(self): self.hook.append(self.vdm.resp_get_succeed()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.vdm.vdm_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.vdm.vdm_name, context.vdm_map) property_map = [ 'name', 'id', 'state', 'host_mover_id', 'interfaces', ] for prop in property_map: self.assertIn(prop, out) expected_calls = [mock.call(self.vdm.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_vdm_with_error(self): self.hook.append(self.vdm.resp_get_error()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Get VDM with error status, out = context.get(self.vdm.vdm_name) self.assertEqual(constants.STATUS_ERROR, status) expected_calls = [mock.call(self.vdm.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_vdm_but_not_found(self): self.hook.append(self.vdm.resp_get_without_value()) self.hook.append(self.vdm.resp_get_succeed('fake')) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Get VDM which does not exist status, out = context.get(self.vdm.vdm_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) status, out = context.get(self.vdm.vdm_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.vdm.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_vdm_id_with_error(self): self.hook.append(self.vdm.resp_get_error()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.get_id, self.vdm.vdm_name) expected_calls = [mock.call(self.vdm.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_vdm(self): 
self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.vdm.resp_task_succeed()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(self.vdm.vdm_name) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.vdm.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_vdm_but_not_found(self): self.hook.append(self.vdm.resp_get_but_not_found()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(self.vdm.vdm_name) expected_calls = [mock.call(self.vdm.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_vdm_but_failed_to_get_vdm(self): self.hook.append(self.vdm.resp_get_error()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, self.vdm.vdm_name) expected_calls = [mock.call(self.vdm.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_vdm_with_error(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.vdm.resp_task_error()) context = self.manager.getStorageContext('VDM') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, self.vdm.vdm_name) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.vdm.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_attach_detach_nfs_interface(self): self.ssh_hook.append() self.ssh_hook.append() context = self.manager.getStorageContext('VDM') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.attach_nfs_interface(self.vdm.vdm_name, self.mover.interface_name2) context.detach_nfs_interface(self.vdm.vdm_name, self.mover.interface_name2) ssh_calls = [ mock.call(self.vdm.cmd_attach_nfs_interface(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_detach_nfs_interface_with_error(self): self.ssh_hook.append(ex=processutils.ProcessExecutionError( stdout=self.vdm.fake_output)) self.ssh_hook.append(self.vdm.output_get_interfaces_vdm( self.mover.interface_name2)) self.ssh_hook.append(ex=processutils.ProcessExecutionError( stdout=self.vdm.fake_output)) self.ssh_hook.append(self.vdm.output_get_interfaces_vdm( nfs_interface=fakes.FakeData.interface_name1)) context = self.manager.getStorageContext('VDM') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.detach_nfs_interface, self.vdm.vdm_name, self.mover.interface_name2) context.detach_nfs_interface(self.vdm.vdm_name, self.mover.interface_name2) ssh_calls = [ mock.call(self.vdm.cmd_detach_nfs_interface(), True), mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), mock.call(self.vdm.cmd_get_interfaces(), False), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_cifs_nfs_interface(self): self.ssh_hook.append(self.vdm.output_get_interfaces_vdm()) context = self.manager.getStorageContext('VDM') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) interfaces = context.get_interfaces(self.vdm.vdm_name) self.assertIsNotNone(interfaces['cifs']) self.assertIsNotNone(interfaces['nfs']) ssh_calls = 
[mock.call(self.vdm.cmd_get_interfaces(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) class StoragePoolTestCase(StorageObjectTestCaseBase): def setUp(self): super(StoragePoolTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_get_pool(self): self.hook.append(self.pool.resp_get_succeed()) context = self.manager.getStorageContext('StoragePool') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.pool.pool_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.pool.pool_name, context.pool_map) property_map = [ 'name', 'movers_id', 'total_size', 'used_size', 'diskType', 'dataServicePolicies', 'id', ] for prop in property_map: self.assertIn(prop, out) expected_calls = [mock.call(self.pool.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_pool_with_error(self): self.hook.append(self.pool.resp_get_error()) self.hook.append(self.pool.resp_get_without_value()) self.hook.append(self.pool.resp_get_succeed(name='other')) context = self.manager.getStorageContext('StoragePool') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.pool.pool_name) self.assertEqual(constants.STATUS_ERROR, status) status, out = context.get(self.pool.pool_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) status, out = context.get(self.pool.pool_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.pool.req_get()), mock.call(self.pool.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_pool_id_with_error(self): self.hook.append(self.pool.resp_get_error()) context = self.manager.getStorageContext('StoragePool') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.get_id, self.pool.pool_name) expected_calls = [mock.call(self.pool.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) class MoverTestCase(StorageObjectTestCaseBase): def setUp(self): super(MoverTestCase, self).setUp() self.hook = utils.RequestSideEffect() self.ssh_hook = utils.SSHSideEffect() def test_get_mover(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_succeed()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.mover.mover_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.mover.mover_name, context.mover_map) property_map = [ 'name', 'id', 'Status', 'version', 'uptime', 'role', 'interfaces', 'devices', 'dns_domain', ] for prop in property_map: self.assertIn(prop, out) status, out = context.get(self.mover.mover_name) self.assertEqual(constants.STATUS_OK, status) status, out = context.get(self.mover.mover_name, True) self.assertEqual(constants.STATUS_OK, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_ref_not_found(self): self.hook.append(self.mover.resp_get_ref_succeed(name='other')) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = 
context.get_ref(self.mover.mover_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [mock.call(self.mover.req_get_ref())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_ref_with_error(self): self.hook.append(self.mover.resp_get_error()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get_ref(self.mover.mover_name) self.assertEqual(constants.STATUS_ERROR, status) expected_calls = [mock.call(self.mover.req_get_ref())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_ref_and_mover(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_succeed()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get_ref(self.mover.mover_name) self.assertEqual(constants.STATUS_OK, status) property_map = ['name', 'id'] for prop in property_map: self.assertIn(prop, out) status, out = context.get(self.mover.mover_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.mover.mover_name, context.mover_map) property_map = [ 'name', 'id', 'Status', 'version', 'uptime', 'role', 'interfaces', 'devices', 'dns_domain', ] for prop in property_map: self.assertIn(prop, out) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_failed_to_get_mover_ref(self): self.hook.append(self.mover.resp_get_error()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.get, self.mover.mover_name) expected_calls = [mock.call(self.mover.req_get_ref())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_but_not_found(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_without_value()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(name=self.mover.mover_name, force=True) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_error()) context = self.manager.getStorageContext('Mover') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.mover.mover_name) self.assertEqual(constants.STATUS_ERROR, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_interconnect_id(self): self.ssh_hook.append(self.mover.output_get_interconnect_id()) context = self.manager.getStorageContext('Mover') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) conn_id = context.get_interconnect_id(self.mover.mover_name, self.mover.mover_name) self.assertEqual(self.mover.interconnect_id, conn_id) ssh_calls = [mock.call(self.mover.cmd_get_interconnect_id(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_physical_devices(self): 
self.ssh_hook.append(self.mover.output_get_physical_devices()) context = self.manager.getStorageContext('Mover') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) devices = context.get_physical_devices(self.mover.mover_name) self.assertIn(self.mover.device_name, devices) ssh_calls = [mock.call(self.mover.cmd_get_physical_devices(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) class SnapshotTestCase(StorageObjectTestCaseBase): def setUp(self): super(SnapshotTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_create_snapshot(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.snap.resp_task_succeed()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.snap.snapshot_name, fs_name=self.fs.filesystem_name, pool_id=self.pool.pool_id) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.snap.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_snapshot_but_already_exist(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.snap.resp_create_but_already_exist()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.snap.snapshot_name, fs_name=self.fs.filesystem_name, pool_id=self.pool.pool_id, ckpt_size=self.snap.snapshot_size) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.snap.req_create_with_size()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_snapshot_with_error(self): self.hook.append(self.fs.resp_get_succeed()) self.hook.append(self.snap.resp_task_error()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.create, name=self.snap.snapshot_name, fs_name=self.fs.filesystem_name, pool_id=self.pool.pool_id, ckpt_size=self.snap.snapshot_size) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.snap.req_create_with_size()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_snapshot(self): self.hook.append(self.snap.resp_get_succeed()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.snap.snapshot_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.snap.snapshot_name, context.snap_map) property_map = [ 'name', 'id', 'checkpointOf', 'state', ] for prop in property_map: self.assertIn(prop, out) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_snapshot_but_not_found(self): self.hook.append(self.snap.resp_get_without_value()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.snap.snapshot_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_snapshot_with_error(self): self.hook.append(self.snap.resp_get_error()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(self.snap.snapshot_name) self.assertEqual(constants.STATUS_ERROR, status) expected_calls = 
[mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_snapshot(self): self.hook.append(self.snap.resp_get_succeed()) self.hook.append(self.snap.resp_task_succeed()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(self.snap.snapshot_name) self.assertNotIn(self.snap.snapshot_name, context.snap_map) expected_calls = [ mock.call(self.snap.req_get()), mock.call(self.snap.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_snapshot_failed_to_get_snapshot(self): self.hook.append(self.snap.resp_get_error()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, self.snap.snapshot_name) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_snapshot_but_not_found(self): self.hook.append(self.snap.resp_get_without_value()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(self.snap.snapshot_name) self.assertNotIn(self.snap.snapshot_name, context.snap_map) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_snapshot_with_error(self): self.hook.append(self.snap.resp_get_succeed()) self.hook.append(self.snap.resp_task_error()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, self.snap.snapshot_name) expected_calls = [ mock.call(self.snap.req_get()), mock.call(self.snap.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_snapshot_id(self): self.hook.append(self.snap.resp_get_succeed()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) id = context.get_id(self.snap.snapshot_name) self.assertEqual(self.snap.snapshot_id, id) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_snapshot_id_with_error(self): self.hook.append(self.snap.resp_get_error()) context = self.manager.getStorageContext('Snapshot') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.get_id, self.snap.snapshot_name) expected_calls = [mock.call(self.snap.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) @ddt.ddt class MoverInterfaceTestCase(StorageObjectTestCaseBase): def setUp(self): super(MoverInterfaceTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_create_mover_interface(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_succeed()) self.hook.append(self.mover.resp_task_succeed()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } context.create(interface) interface['name'] = self.mover.long_interface_name 
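# The expectation assembled just below assumes the driver truncates interface
# names to 31 characters before issuing the create request (note the
# ``long_interface_name[:31]`` slice in ``expected_calls``).  A minimal sketch
# of that behaviour; ``MAX_INTERFACE_NAME_LEN`` and ``_safe_interface_name``
# are illustrative names, not the plugin's real helpers.
MAX_INTERFACE_NAME_LEN = 31


def _safe_interface_name(name):
    """Return an interface name the backend accepts, truncating if needed."""
    return name[:MAX_INTERFACE_NAME_LEN]


assert _safe_interface_name('short-if') == 'short-if'
assert len(_safe_interface_name('x' * 40)) == MAX_INTERFACE_NAME_LEN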
context.create(interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), mock.call(self.mover.req_create_interface( self.mover.long_interface_name[:31])), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_mover_interface_name_already_exist(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append( self.mover.resp_create_interface_but_name_already_exist()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } context.create(interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_mover_interface_ip_already_exist(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append( self.mover.resp_create_interface_but_ip_already_exist()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } context.create(interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @ddt.data(fakes.MoverTestData().resp_task_succeed(), fakes.MoverTestData().resp_task_error()) def test_create_mover_interface_with_conflict_vlan_id(self, xml_resp): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append( self.mover.resp_create_interface_with_conflicted_vlan_id()) self.hook.append(xml_resp) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.create, interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), mock.call(self.mover.req_delete_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_mover_interface_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_succeed()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } context.create(interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), ] 
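# ``test_create_mover_interface_invalid_mover_id`` here (and the other
# ``*_invalid_mover_id`` tests) exercises a refresh-and-retry path: when the
# backend rejects a cached mover ID, the driver re-resolves the mover
# reference, sleeps, and retries the request.  Patching ``time.sleep`` keeps
# the retry path fast and lets the test assert that the back-off actually
# happened.  A minimal sketch of that pattern; ``InvalidMoverID``,
# ``resolve_id`` and ``send`` are illustrative stand-ins, not driver APIs.
import time
from unittest import mock


class InvalidMoverID(Exception):
    """Raised when the backend reports a stale mover ID."""


def create_with_refresh(resolve_id, send, retries=1):
    mover_id = resolve_id()
    for attempt in range(retries + 1):
        try:
            return send(mover_id)
        except InvalidMoverID:
            if attempt == retries:
                raise
            time.sleep(1)          # back off before re-resolving
            mover_id = resolve_id()


@mock.patch('time.sleep')
def _sketch_retry_test(sleep_mock):
    send = mock.Mock(side_effect=[InvalidMoverID(), 'ok'])
    assert create_with_refresh(lambda: 'mover-1', send) == 'ok'
    assert sleep_mock.called     # back-off taken, but no real delay in tests


_sketch_retry_test()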
context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_mover_interface_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_error()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) interface = { 'name': self.mover.interface_name1, 'device_name': self.mover.device_name, 'ip': self.mover.ip_address1, 'mover_name': self.mover.mover_name, 'net_mask': self.mover.net_mask, 'vlan_id': self.mover.vlan_id, } self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.create, interface) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_interface(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_succeed()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(name=self.mover.interface_name1, mover_name=self.mover.mover_name) self.assertEqual(constants.STATUS_OK, status) property_map = [ 'name', 'device', 'up', 'ipVersion', 'netMask', 'ipAddress', 'vlanid', ] for prop in property_map: self.assertIn(prop, out) context.get(name=self.mover.long_interface_name, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_mover_interface_not_found(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_get_without_value()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(name=self.mover.interface_name1, mover_name=self.mover.mover_name) self.assertEqual(constants.STATUS_NOT_FOUND, status) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_mover_interface(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_succeed()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(ip_addr=self.mover.ip_address1, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_mover_interface_but_nonexistent(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_delete_interface_but_nonexistent()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(ip_addr=self.mover.ip_address1, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_delete_mover_interface_invalid_mover_id(self, sleep_mock): 
self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_succeed()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(ip_addr=self.mover.ip_address1, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_delete_mover_interface_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.mover.resp_task_error()) context = self.manager.getStorageContext('MoverInterface') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, ip_addr=self.mover.ip_address1, mover_name=self.mover.mover_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface()), ] context.conn['XML'].request.assert_has_calls(expected_calls) class DNSDomainTestCase(StorageObjectTestCaseBase): def setUp(self): super(DNSDomainTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_create_dns_domain(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_task_succeed()) context = self.manager.getStorageContext('DNSDomain') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mover_name=self.mover.mover_name, name=self.dns.domain_name, servers=self.dns.dns_ip_address) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_dns_domain_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_task_succeed()) context = self.manager.getStorageContext('DNSDomain') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(mover_name=self.mover.mover_name, name=self.dns.domain_name, servers=self.dns.dns_ip_address) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_create()), mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_dns_domain_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_task_error()) context = self.manager.getStorageContext('DNSDomain') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.create, mover_name=self.mover.mover_name, name=self.mover.domain_name, servers=self.dns.dns_ip_address) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_create()), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_dns_domain(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_task_succeed()) self.hook.append(self.dns.resp_task_error()) context = self.manager.getStorageContext('DNSDomain') context.conn['XML'].request = 
utils.EMCMock(side_effect=self.hook) context.delete(mover_name=self.mover.mover_name, name=self.mover.domain_name) context.delete(mover_name=self.mover.mover_name, name=self.mover.domain_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_delete()), mock.call(self.dns.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_delete_dns_domain_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.dns.resp_task_succeed()) context = self.manager.getStorageContext('DNSDomain') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(mover_name=self.mover.mover_name, name=self.mover.domain_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_delete()), mock.call(self.mover.req_get_ref()), mock.call(self.dns.req_delete()), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) class CIFSServerTestCase(StorageObjectTestCaseBase): def setUp(self): super(CIFSServerTestCase, self).setUp() self.hook = utils.RequestSideEffect() def test_create_cifs_server(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) self.hook.append(self.cifs_server.resp_task_error()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Create CIFS server on mover cifs_server_args = { 'name': self.cifs_server.cifs_server_name, 'interface_ip': self.cifs_server.ip_address1, 'domain_name': self.cifs_server.domain_name, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.mover.mover_name, 'is_vdm': False, } context.create(cifs_server_args) # Create CIFS server on VDM cifs_server_args = { 'name': self.cifs_server.cifs_server_name, 'interface_ip': self.cifs_server.ip_address1, 'domain_name': self.cifs_server.domain_name, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, 'is_vdm': True, } context.create(cifs_server_args) # Create CIFS server on VDM cifs_server_args = { 'name': self.cifs_server.cifs_server_name, 'interface_ip': self.cifs_server.ip_address1, 'domain_name': self.cifs_server.domain_name, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, 'is_vdm': True, } context.create(cifs_server_args) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_create(self.mover.mover_id, False)), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_create(self.vdm.vdm_id)), mock.call(self.cifs_server.req_create(self.vdm.vdm_id)), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_create_cifs_server_already_exist(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_task_error()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) context = 
self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) @mock.patch('time.sleep') def test_create_cifs_server_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Create CIFS server on mover cifs_server_args = { 'name': self.cifs_server.cifs_server_name, 'interface_ip': self.cifs_server.ip_address1, 'domain_name': self.cifs_server.domain_name, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.mover.mover_name, 'is_vdm': False, } context.create(cifs_server_args) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_create(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_create(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_cifs_server_with_error(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_task_error()) self.hook.append(self.cifs_server.resp_get_error()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) # Create CIFS server on VDM cifs_server_args = { 'name': self.cifs_server.cifs_server_name, 'interface_ip': self.cifs_server.ip_address1, 'domain_name': self.cifs_server.domain_name, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, 'is_vdm': True, } self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.create, cifs_server_args) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_create(self.vdm.vdm_id)), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_all_cifs_server(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get_all(self.vdm.vdm_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.vdm.vdm_name, context.cifs_server_map) # Get CIFS server from the cache status, out = context.get_all(self.vdm.vdm_name) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.vdm.vdm_name, context.cifs_server_map) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_get_all_cifs_server_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)) context = 
self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get_all(self.mover.mover_name, False) self.assertEqual(constants.STATUS_OK, status) self.assertIn(self.mover.mover_name, context.cifs_server_map) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_get(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_get_cifs_server(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) status, out = context.get(name=self.cifs_server.cifs_server_name, mover_name=self.vdm.vdm_name) self.assertEqual(constants.STATUS_OK, status) property_map = { 'name', 'compName', 'Aliases', 'type', 'interfaces', 'domain', 'domainJoined', 'mover', 'moverIdIsVdm', } for prop in property_map: self.assertIn(prop, out) context.get(name=self.cifs_server.cifs_server_name, mover_name=self.vdm.vdm_name) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_modify_cifs_server(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': True, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.mover.mover_name, 'is_vdm': False, } context.modify(cifs_server_args) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': False, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, } context.modify(cifs_server_args) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_modify( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_modify_cifs_server_but_unjoin_domain(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_modify_but_unjoin_domain()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': False, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, } context.modify(cifs_server_args) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_modify_cifs_server_but_already_join_domain(self): 
self.hook.append(self.vdm.resp_get_succeed()) self.hook.append( self.cifs_server.resp_modify_but_already_join_domain()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': True, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, } context.modify(cifs_server_args) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_modify_cifs_server_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_task_succeed()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': True, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.mover.mover_name, 'is_vdm': False, } context.modify(cifs_server_args) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_modify( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_modify( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_modify_cifs_server_with_error(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_task_error()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) cifs_server_args = { 'name': self.cifs_server.cifs_server_name[-14:], 'join_domain': False, 'user_name': self.cifs_server.domain_user, 'password': self.cifs_server.domain_password, 'mover_name': self.vdm.vdm_name, } self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.modify, cifs_server_args) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_cifs_server(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)) self.hook.append(self.cifs_server.resp_task_succeed()) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)) self.hook.append(self.cifs_server.resp_task_succeed()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(computer_name=self.cifs_server.cifs_server_name, mover_name=self.mover.mover_name, is_vdm=False) context.delete(computer_name=self.cifs_server.cifs_server_name, mover_name=self.vdm.vdm_name) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_get(self.mover.mover_id, False)), mock.call(self.cifs_server.req_delete(self.mover.mover_id, 
False)), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_server.req_delete(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_cifs_server_but_not_found(self): self.hook.append(self.mover.resp_get_without_value()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_get_without_value()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(computer_name=self.cifs_server.cifs_server_name, mover_name=self.mover.mover_name, is_vdm=False) context.delete(computer_name=self.cifs_server.cifs_server_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_get(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_cifs_server_with_error(self): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_server.resp_get_succeed( mover_id=self.mover.mover_id, is_vdm=False, join_domain=True)) self.hook.append(self.cifs_server.resp_task_error()) context = self.manager.getStorageContext('CIFSServer') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, computer_name=self.cifs_server.cifs_server_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.cifs_server.req_get(self.mover.mover_id, False)), mock.call(self.cifs_server.req_delete(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) class CIFSShareTestCase(StorageObjectTestCaseBase): def setUp(self): super(CIFSShareTestCase, self).setUp() self.hook = utils.RequestSideEffect() self.ssh_hook = utils.SSHSideEffect() def test_create_cifs_share(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.cifs_share.share_name, server_name=self.cifs_share.cifs_server_name[-14:], mover_name=self.vdm.vdm_name, is_vdm=True) context.create(name=self.cifs_share.share_name, server_name=self.cifs_share.cifs_server_name[-14:], mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_create(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_create_cifs_share_invalid_mover_id(self, sleep_mock): self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.create(name=self.cifs_share.share_name, server_name=self.cifs_share.cifs_server_name[-14:], mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.mover.req_get_ref()), 
mock.call(self.cifs_share.req_create(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_create(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) self.assertTrue(sleep_mock.called) def test_create_cifs_share_with_error(self): self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_share.resp_task_error()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.create, name=self.cifs_share.share_name, server_name=self.cifs_share.cifs_server_name[-14:], mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_cifs_share(self): self.hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) self.hook.append(self.cifs_share.resp_get_succeed(self.mover.mover_id, False)) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name, is_vdm=True) context.delete(name=self.cifs_share.share_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_delete(self.vdm.vdm_id)), mock.call(self.cifs_share.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_delete(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_delete_cifs_share_not_found(self): self.hook.append(self.cifs_share.resp_get_error()) self.hook.append(self.cifs_share.resp_get_without_value()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name, is_vdm=True) context.delete(name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.cifs_share.req_get()), ] context.conn['XML'].request.assert_has_calls(expected_calls) @mock.patch('time.sleep') def test_delete_cifs_share_invalid_mover_id(self, sleep_mock): self.hook.append(self.cifs_share.resp_get_succeed(self.mover.mover_id, False)) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_invalid_mover_id()) self.hook.append(self.mover.resp_get_ref_succeed()) self.hook.append(self.cifs_share.resp_task_succeed()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.delete(name=self.cifs_share.share_name, mover_name=self.mover.mover_name, is_vdm=False) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_delete(self.mover.mover_id, False)), mock.call(self.mover.req_get_ref()), mock.call(self.cifs_share.req_delete(self.mover.mover_id, False)), ] context.conn['XML'].request.assert_has_calls(expected_calls) 
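        # A stale mover id forces one retry: the mover reference is re-fetched
        # and the delete request reissued, which is why time.sleep is patched
        # and asserted below.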
self.assertTrue(sleep_mock.called) def test_delete_cifs_share_with_error(self): self.hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) self.hook.append(self.vdm.resp_get_succeed()) self.hook.append(self.cifs_share.resp_task_error()) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name, is_vdm=True) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_delete(self.vdm.vdm_id)), ] context.conn['XML'].request.assert_has_calls(expected_calls) def test_get_cifs_share(self): self.hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) context = self.manager.getStorageContext('CIFSShare') context.conn['XML'].request = utils.EMCMock(side_effect=self.hook) context.get(self.cifs_share.share_name) expected_calls = [mock.call(self.cifs_share.req_get())] context.conn['XML'].request.assert_has_calls(expected_calls) def test_disable_share_access(self): self.ssh_hook.append('Command succeeded') context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.disable_share_access(share_name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.cifs_share.cmd_disable_access(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_disable_share_access_with_error(self): self.ssh_hook.append(ex=processutils.ProcessExecutionError( stdout=self.cifs_share.fake_output)) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.disable_share_access, share_name=self.cifs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.cifs_share.cmd_disable_access(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_allow_share_access(self): self.ssh_hook.append(self.cifs_share.output_allow_access()) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.allow_share_access(mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [mock.call(self.cifs_share.cmd_change_access(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_allow_share_access_duplicate_ACE(self): expt_dup_ace = processutils.ProcessExecutionError( stdout=self.cifs_share.output_allow_access_but_duplicate_ace()) self.ssh_hook.append(ex=expt_dup_ace) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.allow_share_access(mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [mock.call(self.cifs_share.cmd_change_access(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_allow_share_access_with_error(self): expt_err = processutils.ProcessExecutionError( self.cifs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) 
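        # An unrecognized failure from the access-change command should
        # surface as EMCPowerMaxXMLAPIError rather than being swallowed.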
self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.allow_share_access, mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [mock.call(self.cifs_share.cmd_change_access(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_share_access(self): self.ssh_hook.append('Command succeeded') context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.deny_share_access(mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_share_access_no_ace(self): expt_no_ace = processutils.ProcessExecutionError( stdout=self.cifs_share.output_deny_access_but_no_ace()) self.ssh_hook.append(ex=expt_no_ace) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.deny_share_access(mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_share_access_but_no_user_found(self): expt_no_user = processutils.ProcessExecutionError( stdout=self.cifs_share.output_deny_access_but_no_user_found()) self.ssh_hook.append(ex=expt_no_user) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.deny_share_access(mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_share_access_with_error(self): expt_err = processutils.ProcessExecutionError( self.cifs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.deny_share_access, mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, user_name=self.cifs_server.domain_user, domain=self.cifs_server.domain_name, access=constants.CIFS_ACL_FULLCONTROL) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_share_access(self): self.ssh_hook.append(fakes.FakeData.cifs_access) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) ret = context.get_share_access( mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name) ssh_calls = [ mock.call(self.cifs_share.cmd_get_access(), True), ] self.assertEqual(2, len(ret)) self.assertEqual(constants.CIFS_ACL_FULLCONTROL, ret['administrator']) self.assertEqual(constants.CIFS_ACL_FULLCONTROL, ret['guest']) context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def 
test_get_share_access_failed(self): expt_err = processutils.ProcessExecutionError( stdout=self.nfs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.get_share_access, mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name) ssh_calls = [ mock.call(self.cifs_share.cmd_get_access(), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_clear_share_access_has_white_list(self): self.ssh_hook.append(fakes.FakeData.cifs_access) self.ssh_hook.append('Command succeeded') context = self.manager.getStorageContext('CIFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) to_remove = context.clear_share_access( mover_name=self.vdm.vdm_name, share_name=self.cifs_share.share_name, domain=self.cifs_server.domain_name, white_list_users=['guest']) ssh_calls = [ mock.call(self.cifs_share.cmd_get_access(), True), mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] self.assertEqual({'administrator'}, to_remove) context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) class NFSShareTestCase(StorageObjectTestCaseBase): def setUp(self): super(NFSShareTestCase, self).setUp() self.ssh_hook = utils.SSHSideEffect() def test_create_nfs_share(self): self.ssh_hook.append(self.nfs_share.output_create()) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.create(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_create(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_create_nfs_share_with_error(self): expt_err = processutils.ProcessExecutionError( stdout=self.nfs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.create, name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_create(), True)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_delete_nfs_share(self): self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) self.ssh_hook.append(self.nfs_share.output_delete_succeed()) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.delete(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_delete(), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_delete_nfs_share_not_found(self): expt_not_found = processutils.ProcessExecutionError( stdout=self.nfs_share.output_get_but_not_found()) self.ssh_hook.append(ex=expt_not_found) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.delete(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_get(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) @mock.patch('time.sleep') def test_delete_nfs_share_locked(self, sleep_mock): self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, 
ro_hosts=self.nfs_share.ro_hosts)) expt_locked = processutils.ProcessExecutionError( stdout=self.nfs_share.output_delete_but_locked()) self.ssh_hook.append(ex=expt_locked) self.ssh_hook.append(self.nfs_share.output_delete_succeed()) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.delete(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_delete(), True), mock.call(self.nfs_share.cmd_delete(), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) self.assertTrue(sleep_mock.called) def test_delete_nfs_share_with_error(self): self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) expt_err = processutils.ProcessExecutionError( stdout=self.nfs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.delete, name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_delete(), True), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_nfs_share(self): self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.get(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) # Get NFS share from cache context.get(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_get(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_nfs_share_not_found(self): expt_not_found = processutils.ProcessExecutionError( stdout=self.nfs_share.output_get_but_not_found()) self.ssh_hook.append(ex=expt_not_found) self.ssh_hook.append(self.nfs_share.output_get_but_not_found()) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) context.get(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) context.get(name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_get(), False), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_get_nfs_share_with_error(self): expt_err = processutils.ProcessExecutionError( stdout=self.nfs_share.fake_output) self.ssh_hook.append(ex=expt_err) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.get, name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_get(), False)] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_allow_share_access(self): rw_hosts = copy.deepcopy(self.nfs_share.rw_hosts) rw_hosts.append(self.nfs_share.nfs_host_ip) ro_hosts = copy.deepcopy(self.nfs_share.ro_hosts) ro_hosts.append(self.nfs_share.nfs_host_ip) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) self.ssh_hook.append(self.nfs_share.output_set_access_success()) 
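        # The queued outputs alternate between export queries and successful
        # access updates; the final query already lists the new host as
        # read-write, so the fourth allow call issues no further update.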
self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=ro_hosts)) self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) context.allow_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name, access_level=const.ACCESS_LEVEL_RW) context.allow_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name, access_level=const.ACCESS_LEVEL_RO) context.allow_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name, access_level=const.ACCESS_LEVEL_RW) context.allow_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name, access_level=const.ACCESS_LEVEL_RW) ssh_calls = [ mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)), mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=ro_hosts)), mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)), mock.call(self.nfs_share.cmd_get()), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_allow_share_access_not_found(self): expt_not_found = processutils.ProcessExecutionError( stdout=self.nfs_share.output_get_but_not_found()) self.ssh_hook.append(ex=expt_not_found) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.allow_share_access, share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name, access_level=const.ACCESS_LEVEL_RW) ssh_calls = [mock.call(self.nfs_share.cmd_get())] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_rw_share_access(self): rw_hosts = copy.deepcopy(self.nfs_share.rw_hosts) rw_hosts.append(self.nfs_share.nfs_host_ip) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) context.deny_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access(self.nfs_share.rw_hosts, self.nfs_share.ro_hosts)), mock.call(self.nfs_share.cmd_get()), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_clear_share_access(self): hosts = ['192.168.1.1', '192.168.1.3'] self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) 
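        # clear_share_access keeps only the white-listed hosts; the expected
        # set command below retains 192.168.1.1 as read-write and 192.168.1.3
        # as read-only.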
self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=[hosts[0]], ro_hosts=[hosts[1]])) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) context.clear_share_access(share_name=self.nfs_share.share_name, mover_name=self.vdm.vdm_name, white_list_hosts=hosts) ssh_calls = [ mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access( rw_hosts=[hosts[0]], ro_hosts=[hosts[1]])), mock.call(self.nfs_share.cmd_get()), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_ro_share_access(self): ro_hosts = copy.deepcopy(self.nfs_share.ro_hosts) ro_hosts.append(self.nfs_share.nfs_host_ip) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=ro_hosts)) self.ssh_hook.append(self.nfs_share.output_set_access_success()) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) context.deny_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name) context.deny_share_access(share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access(self.nfs_share.rw_hosts, self.nfs_share.ro_hosts)), mock.call(self.nfs_share.cmd_get()), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_share_not_found(self): expt_not_found = processutils.ProcessExecutionError( stdout=self.nfs_share.output_get_but_not_found()) self.ssh_hook.append(ex=expt_not_found) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.deny_share_access, share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name) ssh_calls = [mock.call(self.nfs_share.cmd_get())] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_deny_rw_share_with_error(self): rw_hosts = copy.deepcopy(self.nfs_share.rw_hosts) rw_hosts.append(self.nfs_share.nfs_host_ip) self.ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) expt_not_found = processutils.ProcessExecutionError( stdout=self.nfs_share.output_get_but_not_found()) self.ssh_hook.append(ex=expt_not_found) context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = utils.EMCNFSShareMock( side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.deny_share_access, share_name=self.nfs_share.share_name, host_ip=self.nfs_share.nfs_host_ip, mover_name=self.vdm.vdm_name) ssh_calls = [ mock.call(self.nfs_share.cmd_get()), mock.call(self.nfs_share.cmd_set_access(self.nfs_share.rw_hosts, self.nfs_share.ro_hosts)), ] context.conn['SSH'].run_ssh.assert_has_calls(ssh_calls) def test_clear_share_access_failed_to_get_share(self): self.ssh_hook.append("no output.") context = self.manager.getStorageContext('NFSShare') context.conn['SSH'].run_ssh = mock.Mock(side_effect=self.ssh_hook) self.assertRaises(exception.EMCPowerMaxXMLAPIError, context.clear_share_access, share_name=self.nfs_share.share_name, 
mover_name=self.vdm.vdm_name, white_list_hosts=None) context.conn['SSH'].run_ssh.assert_called_once_with( self.nfs_share.cmd_get(), False) manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/powermax/test_connection.py0000664000175000017500000027721313656750227031126 0ustar zuulzuul00000000000000# Copyright (c) 2019 Dell Inc. or its subsidiaries. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from unittest import mock import ddt from oslo_log import log from manila import exception from manila.share.drivers.dell_emc.common.enas import connector from manila.share.drivers.dell_emc.plugins.vnx import connection from manila.share.drivers.dell_emc.plugins.vnx import object_manager from manila import test from manila.tests import fake_share from manila.tests.share.drivers.dell_emc.common.enas import fakes from manila.tests.share.drivers.dell_emc.common.enas import utils LOG = log.getLogger(__name__) @ddt.ddt class StorageConnectionTestCase(test.TestCase): @mock.patch.object(connector.XMLAPIConnector, "_do_setup", mock.Mock()) def setUp(self): super(StorageConnectionTestCase, self).setUp() self.emc_share_driver = fakes.FakeEMCShareDriver() self.connection = connection.VNXStorageConnection(LOG) self.pool = fakes.PoolTestData() self.vdm = fakes.VDMTestData() self.mover = fakes.MoverTestData() self.fs = fakes.FileSystemTestData() self.mount = fakes.MountPointTestData() self.snap = fakes.SnapshotTestData() self.cifs_share = fakes.CIFSShareTestData() self.nfs_share = fakes.NFSShareTestData() self.cifs_server = fakes.CIFSServerTestData() self.dns = fakes.DNSDomainTestData() with mock.patch.object(connector.XMLAPIConnector, 'request', mock.Mock()): self.connection.connect(self.emc_share_driver, None) def test_check_for_setup_error(self): hook = utils.RequestSideEffect() hook.append(self.mover.resp_get_ref_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock with mock.patch.object(connection.VNXStorageConnection, '_get_managed_storage_pools', mock.Mock()): self.connection.check_for_setup_error() expected_calls = [mock.call(self.mover.req_get_ref())] xml_req_mock.assert_has_calls(expected_calls) def test_check_for_setup_error_with_invalid_mover_name(self): hook = utils.RequestSideEffect() hook.append(self.mover.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.InvalidParameterValue, self.connection.check_for_setup_error) expected_calls = [mock.call(self.mover.req_get_ref())] xml_req_mock.assert_has_calls(expected_calls) @ddt.data({'pool_conf': None, 'real_pools': ['fake_pool', 'nas_pool'], 'matched_pool': set()}, {'pool_conf': [], 'real_pools': ['fake_pool', 'nas_pool'], 'matched_pool': set()}, {'pool_conf': ['*'], 'real_pools': ['fake_pool', 'nas_pool'], 'matched_pool': {'fake_pool', 'nas_pool'}}, {'pool_conf': ['fake_*'], 'real_pools': ['fake_pool', 'nas_pool', 'Perf_Pool'], 'matched_pool': 
{'fake_pool'}}, {'pool_conf': ['*pool'], 'real_pools': ['fake_pool', 'NAS_Pool', 'Perf_POOL'], 'matched_pool': {'fake_pool'}}, {'pool_conf': ['nas_pool'], 'real_pools': ['fake_pool', 'nas_pool', 'perf_pool'], 'matched_pool': {'nas_pool'}}) @ddt.unpack def test__get_managed_storage_pools(self, pool_conf, real_pools, matched_pool): with mock.patch.object(object_manager.StoragePool, 'get_all', mock.Mock(return_value=('ok', real_pools))): pool = self.connection._get_managed_storage_pools(pool_conf) self.assertEqual(matched_pool, pool) def test__get_managed_storage_pools_failed_to_get_pool_info(self): hook = utils.RequestSideEffect() hook.append(self.pool.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock pool_conf = fakes.FakeData.pool_name self.assertRaises(exception.EMCVnxXMLAPIError, self.connection._get_managed_storage_pools, pool_conf) expected_calls = [mock.call(self.pool.req_get())] xml_req_mock.assert_has_calls(expected_calls) @ddt.data( {'pool_conf': ['fake_*'], 'real_pools': ['nas_pool', 'Perf_Pool']}, {'pool_conf': ['*pool'], 'real_pools': ['NAS_Pool', 'Perf_POOL']}, {'pool_conf': ['nas_pool'], 'real_pools': ['fake_pool', 'perf_pool']}, ) @ddt.unpack def test__get_managed_storage_pools_without_matched_pool(self, pool_conf, real_pools): with mock.patch.object(object_manager.StoragePool, 'get_all', mock.Mock(return_value=('ok', real_pools))): self.assertRaises(exception.InvalidParameterValue, self.connection._get_managed_storage_pools, pool_conf) def test_create_cifs_share(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) hook.append(self.pool.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) hook.append(self.cifs_share.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share(None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.pool.req_get()), mock.call(self.fs.req_create_on_vdm()), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [mock.call(self.cifs_share.cmd_disable_access(), True)] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': r'\\%s\%s' % ( fakes.FakeData.network_allocations_ip1, share['name'])}], 'CIFS export path is incorrect') def test_create_cifs_share_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) hook.append(self.pool.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) hook.append(self.cifs_share.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append() 
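        # The single empty SSH reply stands in for the successful
        # 'disable access' command issued right after share creation.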
ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share(None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.pool.req_get()), mock.call(self.fs.req_create_on_vdm()), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [mock.call(self.cifs_share.cmd_disable_access(), True)] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual( location, [{'path': r'\\%s.ipv6-literal.net\%s' % ( fakes.FakeData.network_allocations_ip3.replace(':', '-'), share['name'])}], 'CIFS export path is incorrect') def test_create_nfs_share(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE hook = utils.RequestSideEffect() hook.append(self.pool.resp_get_succeed()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_create()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share(None, share, share_server) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.vdm.req_get()), mock.call(self.fs.req_create_on_vdm()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [mock.call(self.nfs_share.cmd_create(), True)] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': '192.168.1.2:/%s' % share['name']}], 'NFS export path is incorrect') def test_create_nfs_share_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE hook = utils.RequestSideEffect() hook.append(self.pool.resp_get_succeed()) hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_create()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share(None, share, share_server) expected_calls = [ mock.call(self.pool.req_get()), mock.call(self.vdm.req_get()), mock.call(self.fs.req_create_on_vdm()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [mock.call(self.nfs_share.cmd_create(), True)] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': '[%s]:/%s' % ( fakes.FakeData.network_allocations_ip4, share['name'])}], 'NFS export path is incorrect') def test_create_cifs_share_without_share_server(self): share = fakes.CIFS_SHARE self.assertRaises(exception.InvalidInput, self.connection.create_share, None, share, None) def test_create_cifs_share_without_share_server_name(self): share = fakes.CIFS_SHARE share_server = copy.deepcopy(fakes.SHARE_SERVER) share_server['backend_details']['share_server_name'] = None self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.create_share, None, share, share_server) def test_create_cifs_share_with_invalide_cifs_server_name(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) 
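        # A failed CIFS server lookup aborts share creation before any
        # filesystem request is made, as the expected_calls below show.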
hook.append(self.cifs_server.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.create_share, None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) def test_create_cifs_share_without_interface_in_cifs_server(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_without_interface( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) hook.append(self.pool.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.create_share, None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.pool.req_get()), mock.call(self.fs.req_create_on_vdm()), ] xml_req_mock.assert_has_calls(expected_calls) def test_create_cifs_share_without_pool_name(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(host='HostA@BackendB', share_proto='CIFS') self.assertRaises(exception.InvalidHost, self.connection.create_share, None, share, share_server) def test_create_cifs_share_from_snapshot(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE snapshot = fake_share.fake_snapshot( name=fakes.FakeData.src_snap_name, share_name=fakes.FakeData.src_share_name, share_id=fakes.FakeData.src_share_name, id=fakes.FakeData.src_snap_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) hook.append(self.cifs_share.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.mover.output_get_interconnect_id()) ssh_hook.append() ssh_hook.append() ssh_hook.append(self.fs.output_copy_ckpt) ssh_hook.append(self.fs.output_info()) ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share_from_snapshot( None, share, snapshot, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.mover.cmd_get_interconnect_id(), False), mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), mock.call(self.cifs_share.cmd_disable_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': r'\\192.168.1.1\%s' % share['name']}], 'CIFS export path is incorrect') def 
test_create_cifs_share_from_snapshot_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE snapshot = fake_share.fake_snapshot( name=fakes.FakeData.src_snap_name, share_name=fakes.FakeData.src_share_name, share_id=fakes.FakeData.src_share_name, id=fakes.FakeData.src_snap_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) hook.append(self.cifs_share.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.mover.output_get_interconnect_id()) ssh_hook.append() ssh_hook.append() ssh_hook.append(self.fs.output_copy_ckpt) ssh_hook.append(self.fs.output_info()) ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share_from_snapshot( None, share, snapshot, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_share.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.mover.cmd_get_interconnect_id(), False), mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), mock.call(self.cifs_share.cmd_disable_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual( location, [{'path': r'\\%s.ipv6-literal.net\%s' % ( fakes.FakeData.network_allocations_ip3.replace(':', '-'), share['name'])}], 'CIFS export path is incorrect') def test_create_nfs_share_from_snapshot(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE snapshot = fake_share.fake_snapshot( name=fakes.FakeData.src_snap_name, share_name=fakes.FakeData.src_share_name, share_id=fakes.FakeData.src_share_name, id=fakes.FakeData.src_snap_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.mover.output_get_interconnect_id()) ssh_hook.append() ssh_hook.append() ssh_hook.append(self.fs.output_copy_ckpt) ssh_hook.append(self.fs.output_info()) ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_hook.append(self.nfs_share.output_create()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share_from_snapshot( None, share, snapshot, share_server) expected_calls = [mock.call(self.fs.req_get())] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.mover.cmd_get_interconnect_id(), False), mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), 
False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), mock.call(self.nfs_share.cmd_create(), True) ] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual(location, [{'path': '192.168.1.2:/%s' % share['name']}], 'NFS export path is incorrect') def test_create_nfs_share_from_snapshot_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE snapshot = fake_share.fake_snapshot( name=fakes.FakeData.src_snap_name, share_name=fakes.FakeData.src_share_name, share_id=fakes.FakeData.src_share_name, id=fakes.FakeData.src_snap_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.mover.output_get_interconnect_id()) ssh_hook.append() ssh_hook.append() ssh_hook.append(self.fs.output_copy_ckpt) ssh_hook.append(self.fs.output_info()) ssh_hook.append() ssh_hook.append() ssh_hook.append() ssh_hook.append(self.nfs_share.output_create()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock location = self.connection.create_share_from_snapshot( None, share, snapshot, share_server) expected_calls = [mock.call(self.fs.req_get())] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.mover.cmd_get_interconnect_id(), False), mock.call(self.fs.cmd_create_from_ckpt(), False), mock.call(self.mount.cmd_server_mount('ro'), False), mock.call(self.fs.cmd_copy_ckpt(), True), mock.call(self.fs.cmd_nas_fs_info(), False), mock.call(self.mount.cmd_server_umount(), False), mock.call(self.fs.cmd_delete(), False), mock.call(self.mount.cmd_server_mount('rw'), False), mock.call(self.nfs_share.cmd_create(), True) ] ssh_cmd_mock.assert_has_calls(ssh_calls) self.assertEqual( location, [{'path': '[%s]:/%s' % ( fakes.FakeData.network_allocations_ip4, share['name'])}], 'NFS export path is incorrect') def test_create_share_with_incorrect_proto(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(share_proto='FAKE_PROTO') self.assertRaises(exception.InvalidShare, self.connection.create_share, context=None, share=share, share_server=share_server) def test_create_share_from_snapshot_with_incorrect_proto(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(share_proto='FAKE_PROTO') snapshot = fake_share.fake_snapshot() self.assertRaises(exception.InvalidShare, self.connection.create_share_from_snapshot, None, share, snapshot, share_server) def test_create_share_from_snapshot_without_pool_name(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(host='HostA@BackendB', share_proto='CIFS') snapshot = fake_share.fake_snapshot() self.assertRaises(exception.InvalidHost, self.connection.create_share_from_snapshot, None, share, snapshot, share_server) def test_delete_cifs_share(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_share.resp_task_succeed()) hook.append(self.mount.resp_task_succeed()) hook.append(self.fs.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.delete_share(None, share, 
share_server) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_delete(self.vdm.vdm_id)), mock.call(self.mount.req_delete(self.vdm.vdm_id)), mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) def test_delete_cifs_share_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_share.resp_task_succeed()) hook.append(self.mount.resp_task_succeed()) hook.append(self.fs.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.delete_share(None, share, share_server) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_delete(self.vdm.vdm_id)), mock.call(self.mount.req_delete(self.vdm.vdm_id)), mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) def test_delete_nfs_share(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.mount.resp_task_succeed()) hook.append(self.fs.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) ssh_hook.append(self.nfs_share.output_delete_succeed()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.delete_share(None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_delete(self.vdm.vdm_id)), mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_delete(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_delete_nfs_share_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.mount.resp_task_succeed()) hook.append(self.fs.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts)) ssh_hook.append(self.nfs_share.output_delete_succeed()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.delete_share(None, share, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mount.req_delete(self.vdm.vdm_id)), mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ 
mock.call(self.nfs_share.cmd_get(), False), mock.call(self.nfs_share.cmd_delete(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_delete_share_without_share_server(self): share = fakes.CIFS_SHARE self.connection.delete_share(None, share) def test_delete_share_with_incorrect_proto(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(share_proto='FAKE_PROTO') self.assertRaises(exception.InvalidShare, self.connection.delete_share, context=None, share=share, share_server=share_server) def test_delete_cifs_share_with_nonexistent_mount_and_filesystem(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.cifs_share.resp_get_succeed(self.vdm.vdm_id)) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_share.resp_task_succeed()) hook.append(self.mount.resp_task_error()) hook.append(self.fs.resp_get_succeed()) hook.append(self.fs.resp_task_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.delete_share(None, share, share_server) expected_calls = [ mock.call(self.cifs_share.req_get()), mock.call(self.vdm.req_get()), mock.call(self.cifs_share.req_delete(self.vdm.vdm_id)), mock.call(self.mount.req_delete(self.vdm.vdm_id)), mock.call(self.fs.req_get()), mock.call(self.fs.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) def test_extend_share(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE new_size = fakes.FakeData.new_size hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.pool.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.extend_share(share, new_size, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), mock.call(self.fs.req_extend()), ] xml_req_mock.assert_has_calls(expected_calls) def test_extend_share_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE new_size = fakes.FakeData.new_size hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.pool.resp_get_succeed()) hook.append(self.fs.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.extend_share(share, new_size, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), mock.call(self.fs.req_extend()), ] xml_req_mock.assert_has_calls(expected_calls) def test_extend_share_without_pool_name(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(host='HostA@BackendB', share_proto='CIFS') new_size = fakes.FakeData.new_size self.assertRaises(exception.InvalidHost, self.connection.extend_share, share, new_size, share_server) def test_create_snapshot(self): share_server = fakes.SHARE_SERVER snapshot = fake_share.fake_snapshot( id=fakes.FakeData.snapshot_name, share_id=fakes.FakeData.filesystem_name, share_name=fakes.FakeData.share_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.snap.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.create_snapshot(None, snapshot, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.snap.req_create()), ] 
xml_req_mock.assert_has_calls(expected_calls) def test_create_snapshot_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 snapshot = fake_share.fake_snapshot( id=fakes.FakeData.snapshot_name, share_id=fakes.FakeData.filesystem_name, share_name=fakes.FakeData.share_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.snap.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.create_snapshot(None, snapshot, share_server) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.snap.req_create()), ] xml_req_mock.assert_has_calls(expected_calls) def test_create_snapshot_with_incorrect_share_info(self): share_server = fakes.SHARE_SERVER snapshot = fake_share.fake_snapshot( id=fakes.FakeData.snapshot_name, share_id=fakes.FakeData.filesystem_name, share_name=fakes.FakeData.share_name) hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_but_not_found()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.create_snapshot, None, snapshot, share_server) expected_calls = [mock.call(self.fs.req_get())] xml_req_mock.assert_has_calls(expected_calls) def test_delete_snapshot(self): share_server = fakes.SHARE_SERVER snapshot = fake_share.fake_snapshot( id=fakes.FakeData.snapshot_name, share_id=fakes.FakeData.filesystem_name, share_name=fakes.FakeData.share_name) hook = utils.RequestSideEffect() hook.append(self.snap.resp_get_succeed()) hook.append(self.snap.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.delete_snapshot(None, snapshot, share_server) expected_calls = [ mock.call(self.snap.req_get()), mock.call(self.snap.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) def test_delete_snapshot_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 snapshot = fake_share.fake_snapshot( id=fakes.FakeData.snapshot_name, share_id=fakes.FakeData.filesystem_name, share_name=fakes.FakeData.share_name) hook = utils.RequestSideEffect() hook.append(self.snap.resp_get_succeed()) hook.append(self.snap.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.delete_snapshot(None, snapshot, share_server) expected_calls = [ mock.call(self.snap.req_get()), mock.call(self.snap.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) @utils.patch_get_managed_ports_vnx(return_value=['cge-1-0']) def test_setup_server(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_but_not_found()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.vdm.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.dns.resp_task_succeed()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.setup_server(fakes.NETWORK_INFO, None) if_name_1 = fakes.FakeData.interface_name1 if_name_2 = fakes.FakeData.interface_name2 expected_calls = [ 
mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), mock.call(self.mover.req_create_interface( if_name=if_name_1, ip=fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_create_interface( if_name=if_name_2, ip=fakes.FakeData.network_allocations_ip2)), mock.call(self.dns.req_create()), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_attach_nfs_interface(), False), ] ssh_cmd_mock.assert_has_calls(ssh_calls) @utils.patch_get_managed_ports_vnx(return_value=['cge-1-0']) def test_setup_server_with_ipv6(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_but_not_found()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.vdm.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.dns.resp_task_succeed()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.setup_server(fakes.NETWORK_INFO_IPV6, None) if_name_1 = fakes.FakeData.interface_name3 if_name_2 = fakes.FakeData.interface_name4 expect_ip_1 = fakes.FakeData.network_allocations_ip3 expect_ip_2 = fakes.FakeData.network_allocations_ip4 expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), mock.call(self.mover.req_create_interface_with_ipv6( if_name=if_name_1, ip=expect_ip_1)), mock.call(self.mover.req_create_interface_with_ipv6( if_name=if_name_2, ip=expect_ip_2)), mock.call(self.dns.req_create( ip_addr=fakes.FakeData.dns_ipv6_address)), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_create( self.vdm.vdm_id, ip_addr=fakes.FakeData.network_allocations_ip3)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_attach_nfs_interface( interface=fakes.FakeData.interface_name4), False), ] ssh_cmd_mock.assert_has_calls(ssh_calls) @utils.patch_get_managed_ports_vnx(return_value=['cge-1-0']) def test_setup_server_with_existing_vdm(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.dns.resp_task_succeed()) hook.append(self.cifs_server.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.setup_server(fakes.NETWORK_INFO, None) if_name_1 = fakes.FakeData.network_allocations_id1[-12:] if_name_2 = fakes.FakeData.network_allocations_id2[-12:] expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_create_interface( if_name=if_name_1, ip=fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_create_interface( if_name=if_name_2, ip=fakes.FakeData.network_allocations_ip2)), mock.call(self.dns.req_create()), 
mock.call(self.cifs_server.req_create(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_attach_nfs_interface(), False), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_setup_server_with_invalid_security_service(self): network_info = copy.deepcopy(fakes.NETWORK_INFO) network_info['security_services'][0]['type'] = 'fake_type' self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.setup_server, network_info, None) @utils.patch_get_managed_ports_vnx( side_effect=exception.EMCVnxXMLAPIError( err="Get managed ports fail.")) def test_setup_server_without_valid_physical_device(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_but_not_found()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.vdm.resp_task_succeed()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_without_value()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm(nfs_interface='')) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.setup_server, fakes.NETWORK_INFO, None) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), ] ssh_cmd_mock.assert_has_calls(ssh_calls) @utils.patch_get_managed_ports_vnx(return_value=['cge-1-0']) def test_setup_server_with_exception(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_but_not_found()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.vdm.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_error()) hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_without_value()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm(nfs_interface='')) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.setup_server, fakes.NETWORK_INFO, None) if_name_1 = fakes.FakeData.network_allocations_id1[-12:] if_name_2 = fakes.FakeData.network_allocations_id2[-12:] expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.vdm.req_create()), mock.call(self.mover.req_create_interface( if_name=if_name_1, ip=fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_create_interface( if_name=if_name_2, ip=fakes.FakeData.network_allocations_ip2)), mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip1)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), ] 
ssh_cmd_mock.assert_has_calls(ssh_calls) def test_teardown_server(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) hook.append(self.cifs_server.resp_task_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm()) ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.teardown_server(fakes.SERVER_DETAIL, fakes.SECURITY_SERVICE) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)), mock.call(self.cifs_server.req_delete(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip2)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_teardown_server_with_ipv6(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) hook.append(self.cifs_server.resp_task_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm()) ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.teardown_server(fakes.SERVER_DETAIL_IPV6, fakes.SECURITY_SERVICE_IPV6) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_server.req_modify( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)), mock.call(self.cifs_server.req_delete(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip3)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip4)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_teardown_server_without_server_detail(self): self.connection.teardown_server(None, fakes.SECURITY_SERVICE) def test_teardown_server_without_security_services(self): 
hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm()) ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.teardown_server(fakes.SERVER_DETAIL, []) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip2)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_teardown_server_without_share_server_name_in_server_detail(self): server_detail = { 'cifs_if': fakes.FakeData.network_allocations_ip1, 'nfs_if': fakes.FakeData.network_allocations_ip2, } self.connection.teardown_server(server_detail, fakes.SECURITY_SERVICE) def test_teardown_server_with_invalid_server_name(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.teardown_server(fakes.SERVER_DETAIL, fakes.SECURITY_SERVICE) expected_calls = [mock.call(self.vdm.req_get())] xml_req_mock.assert_has_calls(expected_calls) def test_teardown_server_without_cifs_server(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_error()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.cifs_server.resp_task_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=False)) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm()) ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.teardown_server(fakes.SERVER_DETAIL, fakes.SECURITY_SERVICE) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip2)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_teardown_server_with_invalid_cifs_server_modification(self): hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) 
hook.append(self.cifs_server.resp_task_error()) hook.append(self.cifs_server.resp_task_succeed()) hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.mover.resp_task_succeed()) hook.append(self.vdm.resp_task_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.vdm.output_get_interfaces_vdm()) ssh_hook.append() ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.teardown_server(fakes.SERVER_DETAIL, fakes.SECURITY_SERVICE) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), mock.call(self.cifs_server.req_modify(self.vdm.vdm_id)), mock.call(self.cifs_server.req_delete(self.vdm.vdm_id)), mock.call(self.mover.req_get_ref()), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip1)), mock.call(self.mover.req_delete_interface( fakes.FakeData.network_allocations_ip2)), mock.call(self.vdm.req_delete()), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.vdm.cmd_get_interfaces(), False), mock.call(self.vdm.cmd_detach_nfs_interface(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_add_cifs_rw(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [], [access], [], share_server=share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_add_cifs_rw_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [], [access], [], share_server=share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_deny_nfs(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = 
fakes.NFS_RW_ACCESS rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [], [], [access], share_server=share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_deny_nfs_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts_ipv6) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts_ipv6, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [], [], [access], share_server=share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=self.nfs_share.rw_hosts_ipv6, ro_hosts=self.nfs_share.ro_hosts_ipv6), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_recover_nfs_rule(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS hosts = ['192.168.1.5'] rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=hosts, ro_hosts=[])) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [access], [], [], share_server=share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=hosts, ro_hosts=[]), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_recover_nfs_rule_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS_IPV6 hosts = ['fdf8:f53b:82e1::5'] rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts_ipv6) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=hosts, ro_hosts=[])) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) 
self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [access], [], [], share_server=share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=hosts, ro_hosts=[]), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_recover_cifs_rule(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_hook.append(fakes.FakeData.cifs_access) ssh_hook.append('Command succeeded') ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [access], [], [], share_server=share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), mock.call(self.cifs_share.cmd_get_access(), True), mock.call(self.cifs_share.cmd_change_access( action='revoke', user='guest'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_update_access_recover_cifs_rule_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_hook.append(fakes.FakeData.cifs_access) ssh_hook.append('Command succeeded') ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.update_access(None, share, [access], [], [], share_server=share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), mock.call(self.cifs_share.cmd_get_access(), True), mock.call(self.cifs_share.cmd_change_access( action='revoke', user='guest'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_cifs_clear_access_server_not_found(self): server = fakes.SHARE_SERVER hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, cifs_server_name='cifs_server_name')) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection._cifs_clear_access, 'share_name', server, None) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] 
xml_req_mock.assert_has_calls(expected_calls) def test_allow_cifs_rw_access(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_cifs_rw_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_cifs_ro_access(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RO_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access('ro'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_cifs_ro_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RO_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) 
self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access('ro'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_ro_access_without_share_server_name(self): share = fakes.CIFS_SHARE share_server = copy.deepcopy(fakes.SHARE_SERVER) share_server['backend_details'].pop('share_server_name') access = fakes.CIFS_RO_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access('ro'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_access_with_invalid_access_level(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fake_share.fake_access(access_level='fake_level') self.assertRaises(exception.InvalidShareAccessLevel, self.connection.allow_access, None, share, access, share_server) def test_allow_access_with_invalid_share_server_name(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.allow_access, None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) def test_allow_nfs_access(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def 
test_allow_nfs_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS_IPV6 rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts_ipv6) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts_ipv6, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.allow_access(None, share, access, share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=rw_hosts, ro_hosts=self.nfs_share.ro_hosts_ipv6), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_allow_cifs_access_with_incorrect_access_type(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fake_share.fake_access(access_type='fake_type') self.assertRaises(exception.InvalidShareAccess, self.connection.allow_access, None, share, access, share_server) def test_allow_nfs_access_with_incorrect_access_type(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = fake_share.fake_access(access_type='fake_type') self.assertRaises(exception.InvalidShareAccess, self.connection.allow_access, None, share, access, share_server) def test_allow_access_with_incorrect_proto(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(share_proto='FAKE_PROTO') access = fake_share.fake_access() self.assertRaises(exception.InvalidShare, self.connection.allow_access, None, share, access, share_server) def test_deny_cifs_rw_access(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed()) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.deny_access(None, share, access, share_server) expected_calls = [ mock.call(self.vdm.req_get()), mock.call(self.cifs_server.req_get(self.vdm.vdm_id)), ] xml_req_mock.assert_has_calls(expected_calls) ssh_calls = [ mock.call(self.cifs_share.cmd_change_access(action='revoke'), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_deny_cifs_rw_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.CIFS_SHARE access = fakes.CIFS_RW_ACCESS hook = utils.RequestSideEffect() hook.append(self.vdm.resp_get_succeed( interface1=fakes.FakeData.interface_name3, interface2=fakes.FakeData.interface_name4)) hook.append(self.cifs_server.resp_get_succeed( mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True, ip_addr=fakes.FakeData.network_allocations_ip3)) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.cifs_share.output_allow_access()) ssh_cmd_mock = mock.Mock(side_effect=ssh_hook) 
        self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock

        self.connection.deny_access(None, share, access, share_server)

        expected_calls = [
            mock.call(self.vdm.req_get()),
            mock.call(self.cifs_server.req_get(self.vdm.vdm_id)),
        ]
        xml_req_mock.assert_has_calls(expected_calls)

        ssh_calls = [
            mock.call(self.cifs_share.cmd_change_access(action='revoke'),
                      True),
        ]
        ssh_cmd_mock.assert_has_calls(ssh_calls)

    def test_deny_cifs_ro_access(self):
        share_server = fakes.SHARE_SERVER
        share = fakes.CIFS_SHARE
        access = fakes.CIFS_RO_ACCESS

        hook = utils.RequestSideEffect()
        hook.append(self.vdm.resp_get_succeed())
        hook.append(self.cifs_server.resp_get_succeed(
            mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True))
        xml_req_mock = utils.EMCMock(side_effect=hook)
        self.connection.manager.connectors['XML'].request = xml_req_mock

        ssh_hook = utils.SSHSideEffect()
        ssh_hook.append(self.cifs_share.output_allow_access())
        ssh_cmd_mock = mock.Mock(side_effect=ssh_hook)
        self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock

        self.connection.deny_access(None, share, access, share_server)

        expected_calls = [
            mock.call(self.vdm.req_get()),
            mock.call(self.cifs_server.req_get(self.vdm.vdm_id)),
        ]
        xml_req_mock.assert_has_calls(expected_calls)

        ssh_calls = [
            mock.call(self.cifs_share.cmd_change_access('ro', 'revoke'),
                      True),
        ]
        ssh_cmd_mock.assert_has_calls(ssh_calls)

    def test_deny_cifs_ro_access_with_ipv6(self):
        share_server = fakes.SHARE_SERVER_IPV6
        share = fakes.CIFS_SHARE
        access = fakes.CIFS_RO_ACCESS

        hook = utils.RequestSideEffect()
        hook.append(self.vdm.resp_get_succeed(
            interface1=fakes.FakeData.interface_name3,
            interface2=fakes.FakeData.interface_name4))
        hook.append(self.cifs_server.resp_get_succeed(
            mover_id=self.vdm.vdm_id, is_vdm=True, join_domain=True,
            ip_addr=fakes.FakeData.network_allocations_ip3))
        xml_req_mock = utils.EMCMock(side_effect=hook)
        self.connection.manager.connectors['XML'].request = xml_req_mock

        ssh_hook = utils.SSHSideEffect()
        ssh_hook.append(self.cifs_share.output_allow_access())
        ssh_cmd_mock = mock.Mock(side_effect=ssh_hook)
        self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock

        self.connection.deny_access(None, share, access, share_server)

        expected_calls = [
            mock.call(self.vdm.req_get()),
            mock.call(self.cifs_server.req_get(self.vdm.vdm_id)),
        ]
        xml_req_mock.assert_has_calls(expected_calls)

        ssh_calls = [
            mock.call(self.cifs_share.cmd_change_access('ro', 'revoke'),
                      True),
        ]
        ssh_cmd_mock.assert_has_calls(ssh_calls)

    def test_deny_cifs_access_with_invalid_share_server_name(self):
        share_server = fakes.SHARE_SERVER
        share = fakes.CIFS_SHARE
        access = fakes.CIFS_RW_ACCESS

        hook = utils.RequestSideEffect()
        hook.append(self.vdm.resp_get_succeed())
        hook.append(self.cifs_server.resp_get_error())
        xml_req_mock = utils.EMCMock(side_effect=hook)
        self.connection.manager.connectors['XML'].request = xml_req_mock

        self.assertRaises(exception.EMCVnxXMLAPIError,
                          self.connection.deny_access,
                          None, share, access, share_server)

        expected_calls = [
            mock.call(self.vdm.req_get()),
            mock.call(self.cifs_server.req_get(self.vdm.vdm_id)),
        ]
        xml_req_mock.assert_has_calls(expected_calls)

    def test_deny_nfs_access(self):
        share_server = fakes.SHARE_SERVER
        share = fakes.NFS_SHARE
        access = fakes.NFS_RW_ACCESS

        rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts)
        rw_hosts.append(access['access_to'])

        ssh_hook = utils.SSHSideEffect()
        ssh_hook.append(self.nfs_share.output_get_succeed(
            rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts))
        ssh_hook.append(self.nfs_share.output_set_access_success())
ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts, ro_hosts=fakes.FakeData.ro_hosts)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.deny_access(None, share, access, share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=self.nfs_share.rw_hosts, ro_hosts=self.nfs_share.ro_hosts), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_deny_nfs_access_with_ipv6(self): share_server = fakes.SHARE_SERVER_IPV6 share = fakes.NFS_SHARE access = fakes.NFS_RW_ACCESS_IPV6 rw_hosts = copy.deepcopy(fakes.FakeData.rw_hosts_ipv6) rw_hosts.append(access['access_to']) ssh_hook = utils.SSHSideEffect() ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=rw_hosts, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_hook.append(self.nfs_share.output_set_access_success()) ssh_hook.append(self.nfs_share.output_get_succeed( rw_hosts=fakes.FakeData.rw_hosts_ipv6, ro_hosts=fakes.FakeData.ro_hosts_ipv6)) ssh_cmd_mock = utils.EMCNFSShareMock(side_effect=ssh_hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.deny_access(None, share, access, share_server) ssh_calls = [ mock.call(self.nfs_share.cmd_get(), True), mock.call(self.nfs_share.cmd_set_access( rw_hosts=self.nfs_share.rw_hosts_ipv6, ro_hosts=self.nfs_share.ro_hosts_ipv6), True), mock.call(self.nfs_share.cmd_get(), True), ] ssh_cmd_mock.assert_has_calls(ssh_calls) def test_deny_access_with_incorrect_proto(self): share_server = fakes.SHARE_SERVER share = fake_share.fake_share(share_proto='FAKE_PROTO') access = fakes.CIFS_RW_ACCESS self.assertRaises(exception.InvalidShare, self.connection.deny_access, None, share, access, share_server) def test_deny_cifs_access_with_incorrect_access_type(self): share_server = fakes.SHARE_SERVER share = fakes.CIFS_SHARE access = fake_share.fake_access(access_type='fake_type') self.assertRaises(exception.InvalidShareAccess, self.connection.deny_access, None, share, access, share_server) def test_deny_nfs_access_with_incorrect_access_type(self): share_server = fakes.SHARE_SERVER share = fakes.NFS_SHARE access = fake_share.fake_access(access_type='fake_type') self.assertRaises(exception.InvalidShareAccess, self.connection.deny_access, None, share, access, share_server) def test_update_share_stats(self): hook = utils.RequestSideEffect() hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.pool.resp_get_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.connection.update_share_stats(fakes.STATS) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.pool.req_get()), ] xml_req_mock.assert_has_calls(expected_calls) for pool in fakes.STATS['pools']: if pool['pool_name'] == fakes.FakeData.pool_name: self.assertEqual(fakes.FakeData.pool_total_size, pool['total_capacity_gb']) free_size = (fakes.FakeData.pool_total_size - fakes.FakeData.pool_used_size) self.assertEqual(free_size, pool['free_capacity_gb']) def test_update_share_stats_without_matched_config_pools(self): self.connection.pools = set('fake_pool') hook = utils.RequestSideEffect() hook.append(self.mover.resp_get_ref_succeed()) hook.append(self.pool.resp_get_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock 
self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.update_share_stats, fakes.STATS) expected_calls = [ mock.call(self.mover.req_get_ref()), mock.call(self.pool.req_get()), ] xml_req_mock.assert_has_calls(expected_calls) def test_get_pool(self): share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.pool.resp_get_succeed()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock pool_name = self.connection.get_pool(share) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), ] xml_req_mock.assert_has_calls(expected_calls) self.assertEqual(fakes.FakeData.pool_name, pool_name) def test_get_pool_failed_to_get_filesystem_info(self): share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.get_pool, share) expected_calls = [mock.call(self.fs.req_get())] xml_req_mock.assert_has_calls(expected_calls) def test_get_pool_failed_to_get_pool_info(self): share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.pool.resp_get_error()) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.get_pool, share) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), ] xml_req_mock.assert_has_calls(expected_calls) def test_get_pool_failed_to_find_matched_pool_name(self): share = fakes.CIFS_SHARE hook = utils.RequestSideEffect() hook.append(self.fs.resp_get_succeed()) hook.append(self.pool.resp_get_succeed(name='unmatch_pool_name', id='unmatch_pool_id')) xml_req_mock = utils.EMCMock(side_effect=hook) self.connection.manager.connectors['XML'].request = xml_req_mock self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.get_pool, share) expected_calls = [ mock.call(self.fs.req_get()), mock.call(self.pool.req_get()), ] xml_req_mock.assert_has_calls(expected_calls) @ddt.data({'port_conf': None, 'managed_ports': ['cge-1-0', 'cge-1-3']}, {'port_conf': '*', 'managed_ports': ['cge-1-0', 'cge-1-3']}, {'port_conf': ['cge-1-*'], 'managed_ports': ['cge-1-0', 'cge-1-3']}, {'port_conf': ['cge-1-3'], 'managed_ports': ['cge-1-3']}) @ddt.unpack def test_get_managed_ports_one_port(self, port_conf, managed_ports): hook = utils.SSHSideEffect() hook.append(self.mover.output_get_physical_devices()) ssh_cmd_mock = mock.Mock(side_effect=hook) expected_calls = [ mock.call(self.mover.cmd_get_physical_devices(), False), ] self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.port_conf = port_conf ports = self.connection.get_managed_ports() self.assertIsInstance(ports, list) self.assertEqual(sorted(managed_ports), sorted(ports)) ssh_cmd_mock.assert_has_calls(expected_calls) def test_get_managed_ports_no_valid_port(self): hook = utils.SSHSideEffect() hook.append(self.mover.output_get_physical_devices()) ssh_cmd_mock = mock.Mock(side_effect=hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.port_conf = ['cge-2-0'] self.assertRaises(exception.BadConfigurationException, self.connection.get_managed_ports) def test_get_managed_ports_query_devices_failed(self): hook = utils.SSHSideEffect() 
hook.append(self.mover.fake_output) ssh_cmd_mock = mock.Mock(side_effect=hook) self.connection.manager.connectors['SSH'].run_ssh = ssh_cmd_mock self.connection.port_conf = ['cge-2-0'] self.assertRaises(exception.EMCVnxXMLAPIError, self.connection.get_managed_ports) manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/isilon/0000775000175000017500000000000013656750362024775 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/isilon/__init__.py0000664000175000017500000000000013656750227027074 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/isilon/test_isilon_api.py0000664000175000017500000010163713656750227030544 0ustar zuulzuul00000000000000# Copyright (c) 2015 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt from oslo_serialization import jsonutils as json import requests import requests_mock import six from manila import exception from manila.share.drivers.dell_emc.plugins.isilon import isilon_api from manila import test @ddt.ddt class IsilonApiTest(test.TestCase): def setUp(self): super(IsilonApiTest, self).setUp() self._mock_url = 'https://localhost:8080' _mock_auth = ('admin', 'admin') self.isilon_api = isilon_api.IsilonApi( self._mock_url, _mock_auth ) @ddt.data(False, True) def test_create_directory(self, is_recursive): with requests_mock.Mocker() as m: path = '/ifs/test' self.assertEqual(0, len(m.request_history)) self._add_create_directory_response(m, path, is_recursive) r = self.isilon_api.create_directory(path, recursive=is_recursive) self.assertTrue(r) self.assertEqual(1, len(m.request_history)) request = m.request_history[0] self._verify_dir_creation_request(request, path, is_recursive) @requests_mock.mock() def test_clone_snapshot(self, m): snapshot_name = 'snapshot01' fq_target_dir = '/ifs/admin/target' self.assertEqual(0, len(m.request_history)) self._add_create_directory_response(m, fq_target_dir, False) snapshots_json = ( '{"snapshots": ' '[{"name": "snapshot01", "path": "/ifs/admin/source"}]' '}' ) self._add_get_snapshot_response(m, snapshot_name, snapshots_json) # In order to test cloning a snapshot, we build out a mock # source directory tree. After the method under test is called we # will verify the necessary calls are made to clone a snapshot. 
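        # Illustrative sketch (not part of the original test) of the mocked
        # snapshot source tree described by the JSON listings that follow;
        # every file in it is expected to be cloned under /ifs/admin/target:
        #
        #   /ifs/.snapshot/snapshot01/admin/source
        #   |-- file1
        #   |-- file2
        #   |-- dir1/  (file11, file12)
        #   `-- dir2/  (file21, file22)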
source_dir_listing_json = ( '{"children": [' '{"name": "dir1", "type": "container"},' '{"name": "dir2", "type": "container"},' '{"name": "file1", "type": "object"},' '{"name": "file2", "type": "object"}' ']}' ) self._add_get_directory_listing_response( m, '/ifs/.snapshot/{0}/admin/source'.format(snapshot_name), source_dir_listing_json) # Add request responses for creating directories and cloning files # to the destination tree self._add_file_clone_response(m, '/ifs/admin/target/file1', snapshot_name) self._add_file_clone_response(m, '/ifs/admin/target/file2', snapshot_name) self._add_create_directory_response(m, fq_target_dir + '/dir1', False) self._add_get_directory_listing_response( m, '/ifs/.snapshot/{0}/admin/source/dir1'.format(snapshot_name), '{"children": [' '{"name": "file11", "type": "object"}, ' '{"name": "file12", "type": "object"}' ']}') self._add_file_clone_response(m, '/ifs/admin/target/dir1/file11', snapshot_name) self._add_file_clone_response(m, '/ifs/admin/target/dir1/file12', snapshot_name) self._add_create_directory_response(m, fq_target_dir + '/dir2', False) self._add_get_directory_listing_response( m, '/ifs/.snapshot/{0}/admin/source/dir2'.format(snapshot_name), '{"children": [' '{"name": "file21", "type": "object"}, ' '{"name": "file22", "type": "object"}' ']}') self._add_file_clone_response(m, '/ifs/admin/target/dir2/file21', snapshot_name) self._add_file_clone_response(m, '/ifs/admin/target/dir2/file22', snapshot_name) # Call method under test self.isilon_api.clone_snapshot(snapshot_name, fq_target_dir) # Verify calls needed to clone the source snapshot to the target dir expected_calls = [] clone_path_list = [ 'file1', 'file2', 'dir1/file11', 'dir1/file12', 'dir2/file21', 'dir2/file22'] for path in clone_path_list: expected_call = IsilonApiTest.ExpectedCall( IsilonApiTest.ExpectedCall.FILE_CLONE, self._mock_url + '/namespace/ifs/admin/target/' + path, ['/ifs/admin/target/' + path, '/ifs/admin/source/' + path, snapshot_name]) expected_calls.append(expected_call) dir_path_list = [ ('/dir1?recursive', '/dir1'), ('/dir2?recursive', '/dir2'), ('?recursive=', '')] for url, path in dir_path_list: expected_call = IsilonApiTest.ExpectedCall( IsilonApiTest.ExpectedCall.DIR_CREATION, self._mock_url + '/namespace/ifs/admin/target' + url, ['/ifs/admin/target' + path, False]) expected_calls.append(expected_call) self._verify_clone_snapshot_calls(expected_calls, m.request_history) class ExpectedCall(object): DIR_CREATION = 'dir_creation' FILE_CLONE = 'file_clone' def __init__(self, request_type, match_url, verify_args): self.request_type = request_type self.match_url = match_url self.verify_args = verify_args def _verify_clone_snapshot_calls(self, expected_calls, response_calls): actual_calls = [] for call in response_calls: actual_calls.append(call) for expected_call in expected_calls: # Match the expected call to the actual call, then verify match_found = False for call in actual_calls: if call.url.startswith(expected_call.match_url): match_found = True if expected_call.request_type == 'dir_creation': self._verify_dir_creation_request( call, *expected_call.verify_args) elif expected_call.request_type == 'file_clone': pass else: self.fail('Invalid request type') actual_calls.remove(call) self.assertTrue(match_found) @requests_mock.mock() def test_get_directory_listing(self, m): self.assertEqual(0, len(m.request_history)) fq_dir_path = 'ifs/admin/test' json_str = '{"my_json": "test123"}' self._add_get_directory_listing_response(m, fq_dir_path, json_str) actual_json = 
self.isilon_api.get_directory_listing(fq_dir_path) self.assertEqual(1, len(m.request_history)) self.assertEqual(json.loads(json_str), actual_json) @ddt.data((200, True), (404, False)) def test_is_path_existent(self, data): status_code, expected_return_value = data with requests_mock.mock() as m: self.assertEqual(0, len(m.request_history)) path = '/ifs/home/admin' m.head('{0}/namespace{1}'.format(self._mock_url, path), status_code=status_code) r = self.isilon_api.is_path_existent(path) self.assertEqual(expected_return_value, r) self.assertEqual(1, len(m.request_history)) @requests_mock.mock() def test_is_path_existent_unexpected_error(self, m): path = '/ifs/home/admin' m.head('{0}/namespace{1}'.format(self._mock_url, path), status_code=400) self.assertRaises( requests.exceptions.HTTPError, self.isilon_api.is_path_existent, '/ifs/home/admin') @ddt.data( (200, '{"snapshots": [{"path": "/ifs/home/test"}]}', {'path': '/ifs/home/test'}), (404, '{"errors": []}', None) ) def test_get_snapshot(self, data): status_code, json_body, expected_return_value = data with requests_mock.mock() as m: self.assertEqual(0, len(m.request_history)) snapshot_name = 'foo1' self._add_get_snapshot_response(m, snapshot_name, json_body, status=status_code) r = self.isilon_api.get_snapshot(snapshot_name) self.assertEqual(1, len(m.request_history)) self.assertEqual(expected_return_value, r) @requests_mock.mock() def test_get_snapshot_unexpected_error(self, m): snapshot_name = 'foo1' json_body = '{"snapshots": [{"path": "/ifs/home/test"}]}' self._add_get_snapshot_response( m, snapshot_name, json_body, status=400) self.assertRaises( requests.exceptions.HTTPError, self.isilon_api.get_snapshot, snapshot_name) @requests_mock.mock() def test_get_snapshots(self, m): self.assertEqual(0, len(m.request_history)) snapshot_json = '{"snapshots": [{"path": "/ifs/home/test"}]}' m.get('{0}/platform/1/snapshot/snapshots'.format(self._mock_url), status_code=200, json=json.loads(snapshot_json)) r = self.isilon_api.get_snapshots() self.assertEqual(1, len(m.request_history)) self.assertEqual(json.loads(snapshot_json), r) @requests_mock.mock() def test_get_snapshots_error_occurred(self, m): self.assertEqual(0, len(m.request_history)) m.get('{0}/platform/1/snapshot/snapshots'.format(self._mock_url), status_code=404) self.assertRaises(requests.exceptions.HTTPError, self.isilon_api.get_snapshots) self.assertEqual(1, len(m.request_history)) @ddt.data( ('/ifs/home/admin', '{"exports": [{"id": 42, "paths": ["/ifs/home/admin"]}]}', 42), ('/ifs/home/test', '{"exports": [{"id": 42, "paths": ["/ifs/home/admin"]}]}', None) ) def test_lookup_nfs_export(self, data): share_path, response_json, expected_return = data with requests_mock.mock() as m: self.assertEqual(0, len(m.request_history)) m.get('{0}/platform/1/protocols/nfs/exports' .format(self._mock_url), json=json.loads(response_json)) r = self.isilon_api.lookup_nfs_export(share_path) self.assertEqual(1, len(m.request_history)) self.assertEqual(expected_return, r) @requests_mock.mock() def test_get_nfs_export(self, m): self.assertEqual(0, len(m.request_history)) export_id = 42 response_json = '{"exports": [{"id": 1}]}' status_code = 200 m.get('{0}/platform/1/protocols/nfs/exports/{1}' .format(self._mock_url, export_id), json=json.loads(response_json), status_code=status_code) r = self.isilon_api.get_nfs_export(export_id) self.assertEqual(1, len(m.request_history)) self.assertEqual(json.loads('{"id": 1}'), r) @requests_mock.mock() def test_get_nfs_export_error(self, m): self.assertEqual(0, 
len(m.request_history)) export_id = 3 response_json = '{}' status_code = 404 m.get('{0}/platform/1/protocols/nfs/exports/{1}' .format(self._mock_url, export_id), json=json.loads(response_json), status_code=status_code) r = self.isilon_api.get_nfs_export(export_id) self.assertEqual(1, len(m.request_history)) self.assertIsNone(r) @requests_mock.mock() def test_lookup_smb_share(self, m): self.assertEqual(0, len(m.request_history)) share_name = 'my_smb_share' share_json = '{"id": "my_smb_share"}' response_json = '{{"shares": [{0}]}}'.format(share_json) m.get('{0}/platform/1/protocols/smb/shares/{1}' .format(self._mock_url, share_name), status_code=200, json=json.loads(response_json)) r = self.isilon_api.lookup_smb_share(share_name) self.assertEqual(1, len(m.request_history)) self.assertEqual(json.loads(share_json), r) @requests_mock.mock() def test_lookup_smb_share_error(self, m): self.assertEqual(0, len(m.request_history)) share_name = 'my_smb_share' m.get('{0}/platform/1/protocols/smb/shares/{1}'.format( self._mock_url, share_name), status_code=404) r = self.isilon_api.lookup_smb_share(share_name) self.assertEqual(1, len(m.request_history)) self.assertIsNone(r) @ddt.data((201, True), (404, False)) def test_create_nfs_export(self, data): status_code, expected_return_value = data with requests_mock.mock() as m: self.assertEqual(0, len(m.request_history)) export_path = '/ifs/home/test' m.post(self._mock_url + '/platform/1/protocols/nfs/exports', status_code=status_code) r = self.isilon_api.create_nfs_export(export_path) self.assertEqual(1, len(m.request_history)) call = m.request_history[0] expected_request_body = '{"paths": ["/ifs/home/test"]}' self.assertEqual(json.loads(expected_request_body), json.loads(call.body)) self.assertEqual(expected_return_value, r) @ddt.data((201, True), (404, False)) def test_create_smb_share(self, data): status_code, expected_return_value = data with requests_mock.mock() as m: self.assertEqual(0, len(m.request_history)) share_name = 'my_smb_share' share_path = '/ifs/home/admin/smb_share' m.post(self._mock_url + '/platform/1/protocols/smb/shares', status_code=status_code) r = self.isilon_api.create_smb_share(share_name, share_path) self.assertEqual(expected_return_value, r) self.assertEqual(1, len(m.request_history)) expected_request_data = { 'name': share_name, 'path': share_path, 'permissions': [] } self.assertEqual(expected_request_data, json.loads(m.request_history[0].body)) @requests_mock.mock() def test_create_snapshot(self, m): self.assertEqual(0, len(m.request_history)) snapshot_name = 'my_snapshot_01' snapshot_path = '/ifs/home/admin' m.post(self._mock_url + '/platform/1/snapshot/snapshots', status_code=201) r = self.isilon_api.create_snapshot(snapshot_name, snapshot_path) self.assertEqual(1, len(m.request_history)) self.assertTrue(r) expected_request_body = json.loads( '{{"name": "{0}", "path": "{1}"}}' .format(snapshot_name, snapshot_path) ) self.assertEqual(expected_request_body, json.loads(m.request_history[0].body)) @requests_mock.mock() def test_create_snapshot_error_case(self, m): self.assertEqual(0, len(m.request_history)) snapshot_name = 'my_snapshot_01' snapshot_path = '/ifs/home/admin' m.post(self._mock_url + '/platform/1/snapshot/snapshots', status_code=404) self.assertRaises(requests.exceptions.HTTPError, self.isilon_api.create_snapshot, snapshot_name, snapshot_path) self.assertEqual(1, len(m.request_history)) @ddt.data(True, False) def test_delete(self, is_recursive_delete): with requests_mock.mock() as m: self.assertEqual(0, 
len(m.request_history)) fq_path = '/ifs/home/admin/test' m.delete(self._mock_url + '/namespace' + fq_path + '?recursive=' + six.text_type(is_recursive_delete), status_code=204) self.isilon_api.delete(fq_path, recursive=is_recursive_delete) self.assertEqual(1, len(m.request_history)) @requests_mock.mock() def test_delete_error_case(self, m): fq_path = '/ifs/home/admin/test' m.delete(self._mock_url + '/namespace' + fq_path + '?recursive=False', status_code=403) self.assertRaises(requests.exceptions.HTTPError, self.isilon_api.delete, fq_path, recursive=False) @ddt.data((204, True), (404, False)) def test_delete_nfs_share(self, data): status_code, expected_return_value = data with requests_mock.mock() as m: self.assertEqual(0, len(m.request_history)) share_number = 42 m.delete('{0}/platform/1/protocols/nfs/exports/{1}' .format(self._mock_url, share_number), status_code=status_code) r = self.isilon_api.delete_nfs_share(share_number) self.assertEqual(1, len(m.request_history)) self.assertEqual(expected_return_value, r) @ddt.data((204, True), (404, False)) def test_delete_smb_shares(self, data): status_code, expected_return_value = data with requests_mock.mock() as m: self.assertEqual(0, len(m.request_history)) share_name = 'smb_share_42' m.delete('{0}/platform/1/protocols/smb/shares/{1}' .format(self._mock_url, share_name), status_code=status_code) r = self.isilon_api.delete_smb_share(share_name) self.assertEqual(1, len(m.request_history)) self.assertEqual(expected_return_value, r) @requests_mock.mock() def test_delete_snapshot(self, m): self.assertEqual(0, len(m.request_history)) m.delete(self._mock_url + '/platform/1/snapshot/snapshots/my_snapshot', status_code=204) self.isilon_api.delete_snapshot("my_snapshot") self.assertEqual(1, len(m.request_history)) @requests_mock.mock() def test_delete_snapshot_error_case(self, m): m.delete(self._mock_url + '/platform/1/snapshot/snapshots/my_snapshot', status_code=403) self.assertRaises(requests.exceptions.HTTPError, self.isilon_api.delete_snapshot, "my_snapshot") @requests_mock.mock() def test_quota_create(self, m): quota_path = '/ifs/manila/test' quota_size = 256 self.assertEqual(0, len(m.request_history)) m.post(self._mock_url + '/platform/1/quota/quotas', status_code=201) self.isilon_api.quota_create(quota_path, 'directory', quota_size) self.assertEqual(1, len(m.request_history)) expected_request_json = { 'path': quota_path, 'type': 'directory', 'include_snapshots': False, 'thresholds_include_overhead': False, 'enforced': True, 'thresholds': {'hard': quota_size}, } call_body = m.request_history[0].body self.assertEqual(expected_request_json, json.loads(call_body)) @requests_mock.mock() def test_quota_create__path_does_not_exist(self, m): quota_path = '/ifs/test2' self.assertEqual(0, len(m.request_history)) m.post(self._mock_url + '/platform/1/quota/quotas', status_code=400) self.assertRaises( requests.exceptions.HTTPError, self.isilon_api.quota_create, quota_path, 'directory', 2 ) @requests_mock.mock() def test_quota_get(self, m): self.assertEqual(0, len(m.request_history)) response_json = {'quotas': [{}]} m.get(self._mock_url + '/platform/1/quota/quotas', json=response_json, status_code=200) quota_path = "/ifs/manila/test" quota_type = "directory" self.isilon_api.quota_get(quota_path, quota_type) self.assertEqual(1, len(m.request_history)) request_query_string = m.request_history[0].qs expected_query_string = {'path': [quota_path]} self.assertEqual(expected_query_string, request_query_string) @requests_mock.mock() def 
test_quota_get__path_does_not_exist(self, m): self.assertEqual(0, len(m.request_history)) m.get(self._mock_url + '/platform/1/quota/quotas', status_code=404) response = self.isilon_api.quota_get( '/ifs/does_not_exist', 'directory') self.assertIsNone(response) @requests_mock.mock() def test_quota_modify(self, m): self.assertEqual(0, len(m.request_history)) quota_id = "ADEF1G" new_size = 1024 m.put('{0}/platform/1/quota/quotas/{1}'.format( self._mock_url, quota_id), status_code=204) self.isilon_api.quota_modify_size(quota_id, new_size) self.assertEqual(1, len(m.request_history)) expected_request_body = {'thresholds': {'hard': new_size}} request_body = m.request_history[0].body self.assertEqual(expected_request_body, json.loads(request_body)) @requests_mock.mock() def test_quota_modify__given_id_does_not_exist(self, m): quota_id = 'ADE2F' m.put('{0}/platform/1/quota/quotas/{1}'.format( self._mock_url, quota_id), status_code=404) self.assertRaises( requests.exceptions.HTTPError, self.isilon_api.quota_modify_size, quota_id, 1024 ) @requests_mock.mock() def test_quota_set__quota_already_exists(self, m): self.assertEqual(0, len(m.request_history)) quota_path = '/ifs/manila/test' quota_type = 'directory' quota_size = 256 quota_id = 'AFE2C' m.get('{0}/platform/1/quota/quotas'.format( self._mock_url), json={'quotas': [{'id': quota_id}]}, status_code=200) m.put( '{0}/platform/1/quota/quotas/{1}'.format(self._mock_url, quota_id), status_code=204 ) self.isilon_api.quota_set(quota_path, quota_type, quota_size) expected_quota_modify_json = {'thresholds': {'hard': quota_size}} quota_put_json = json.loads(m.request_history[1].body) self.assertEqual(expected_quota_modify_json, quota_put_json) @requests_mock.mock() def test_quota_set__quota_does_not_already_exist(self, m): self.assertEqual(0, len(m.request_history)) m.get('{0}/platform/1/quota/quotas'.format( self._mock_url), status_code=404) m.post('{0}/platform/1/quota/quotas'.format(self._mock_url), status_code=201) quota_path = '/ifs/manila/test' quota_type = 'directory' quota_size = 256 self.isilon_api.quota_set(quota_path, quota_type, quota_size) # verify a call is made to create a quota expected_create_json = { six.text_type('path'): quota_path, six.text_type('type'): 'directory', six.text_type('include_snapshots'): False, six.text_type('thresholds_include_overhead'): False, six.text_type('enforced'): True, six.text_type('thresholds'): {six.text_type('hard'): quota_size}, } create_request_json = json.loads(m.request_history[1].body) self.assertEqual(expected_create_json, create_request_json) @requests_mock.mock() def test_quota_set__path_does_not_already_exist(self, m): m.get(self._mock_url + '/platform/1/quota/quotas', status_code=400) e = self.assertRaises( requests.exceptions.HTTPError, self.isilon_api.quota_set, '/ifs/does_not_exist', 'directory', 2048 ) self.assertEqual(400, e.response.status_code) @ddt.data( ('foouser', isilon_api.SmbPermission.rw), ('testuser', isilon_api.SmbPermission.ro), ) def test_smb_permission_add(self, data): user, smb_permission = data share_name = 'testshare' with requests_mock.mock() as m: papi_share_url = '{0}/platform/1/protocols/smb/shares/{1}'.format( self._mock_url, share_name) share_data = { 'shares': [ {'permissions': []} ] } m.get(papi_share_url, status_code=200, json=share_data) auth_url = ('{0}/platform/1/auth/mapping/users/lookup?user={1}' ''.format(self._mock_url, user)) example_sid = 'SID:S-1-5-21' sid_json = { 'id': example_sid, 'name': user, 'type': 'user' } auth_json = {'mapping': [ {'user': {'sid': 
sid_json}} ]} m.get(auth_url, status_code=200, json=auth_json) m.put(papi_share_url) self.isilon_api.smb_permissions_add(share_name, user, smb_permission) perms_put_request = m.request_history[2] expected_perm_request_json = { 'permissions': [ {'permission': smb_permission.value, 'permission_type': 'allow', 'trustee': sid_json } ] } self.assertEqual(expected_perm_request_json, json.loads(perms_put_request.body)) @requests_mock.mock() def test_smb_permission_add_with_multiple_users_found(self, m): user = 'foouser' smb_permission = isilon_api.SmbPermission.rw share_name = 'testshare' papi_share_url = '{0}/platform/1/protocols/smb/shares/{1}'.format( self._mock_url, share_name) share_data = { 'shares': [ {'permissions': []} ] } m.get(papi_share_url, status_code=200, json=share_data) auth_url = ('{0}/platform/1/auth/mapping/users/lookup?user={1}' ''.format(self._mock_url, user)) example_sid = 'SID:S-1-5-21' sid_json = { 'id': example_sid, 'name': user, 'type': 'user' } auth_json = {'mapping': [ {'user': {'sid': sid_json}}, {'user': {'sid': sid_json}}, ]} m.get(auth_url, status_code=200, json=auth_json) m.put(papi_share_url) self.assertRaises(exception.ShareBackendException, self.isilon_api.smb_permissions_add, share_name, user, smb_permission) @requests_mock.mock() def test_smb_permission_remove(self, m): share_name = 'testshare' user = 'testuser' share_data = { 'permissions': [{ 'permission': 'change', 'permission_type': 'allow', 'trustee': { 'id': 'SID:S-1-5-21', 'name': user, 'type': 'user', } }] } papi_share_url = '{0}/platform/1/protocols/smb/shares/{1}'.format( self._mock_url, share_name) m.get(papi_share_url, status_code=200, json={'shares': [share_data]}) num_existing_perms = len(self.isilon_api.lookup_smb_share(share_name)) self.assertEqual(1, num_existing_perms) m.put(papi_share_url) self.isilon_api.smb_permissions_remove(share_name, user) smb_put_request = m.request_history[2] expected_body = {'permissions': []} expected_body = json.dumps(expected_body) self.assertEqual(expected_body, smb_put_request.body) @requests_mock.mock() def test_smb_permission_remove_with_multiple_existing_perms(self, m): share_name = 'testshare' user = 'testuser' foouser_perms = { 'permission': 'change', 'permission_type': 'allow', 'trustee': { 'id': 'SID:S-1-5-21', 'name': 'foouser', 'type': 'user', } } user_perms = { 'permission': 'change', 'permission_type': 'allow', 'trustee': { 'id': 'SID:S-1-5-22', 'name': user, 'type': 'user', } } share_data = { 'permissions': [ foouser_perms, user_perms, ] } papi_share_url = '{0}/platform/1/protocols/smb/shares/{1}'.format( self._mock_url, share_name) m.get(papi_share_url, status_code=200, json={'shares': [share_data]}) num_existing_perms = len(self.isilon_api.lookup_smb_share( share_name)['permissions']) self.assertEqual(2, num_existing_perms) m.put(papi_share_url) self.isilon_api.smb_permissions_remove(share_name, user) smb_put_request = m.request_history[2] expected_body = {'permissions': [foouser_perms]} expected_body = json.dumps(expected_body) self.assertEqual(json.loads(expected_body), json.loads(smb_put_request.body)) @requests_mock.mock() def test_smb_permission_remove_with_empty_perms_list(self, m): share_name = 'testshare' user = 'testuser' share_data = {'permissions': []} papi_share_url = '{0}/platform/1/protocols/smb/shares/{1}'.format( self._mock_url, share_name) m.get(papi_share_url, status_code=200, json={'shares': [share_data]}) m.put(papi_share_url) self.assertRaises(exception.ShareBackendException, self.isilon_api.smb_permissions_remove, 
share_name, user) @requests_mock.mock() def test_auth_lookup_user(self, m): user = 'foo' auth_url = '{0}/platform/1/auth/mapping/users/lookup?user={1}'.format( self._mock_url, user) example_sid = 'SID:S-1-5-21' sid_json = { 'id': example_sid, 'name': user, 'type': 'user' } auth_json = { 'mapping': [ {'user': {'sid': sid_json}} ] } m.get(auth_url, status_code=200, json=auth_json) returned_auth_json = self.isilon_api.auth_lookup_user(user) self.assertEqual(auth_json, returned_auth_json) @requests_mock.mock() def test_auth_lookup_user_with_nonexistent_user(self, m): user = 'nonexistent' auth_url = '{0}/platform/1/auth/mapping/users/lookup?user={1}'.format( self._mock_url, user) m.get(auth_url, status_code=404) self.assertRaises(exception.ShareBackendException, self.isilon_api.auth_lookup_user, user) @requests_mock.mock() def test_auth_lookup_user_with_backend_error(self, m): user = 'foo' auth_url = '{0}/platform/1/auth/mapping/users/lookup?user={1}'.format( self._mock_url, user) m.get(auth_url, status_code=400) self.assertRaises(requests.exceptions.HTTPError, self.isilon_api.auth_lookup_user, user) def _add_create_directory_response(self, m, path, is_recursive): url = '{0}/namespace{1}?recursive={2}'.format( self._mock_url, path, six.text_type(is_recursive)) m.put(url, status_code=200) def _add_file_clone_response(self, m, fq_dest_path, snapshot_name): url = '{0}/namespace{1}?clone=true&snapshot={2}'.format( self._mock_url, fq_dest_path, snapshot_name) m.put(url) def _add_get_directory_listing_response(self, m, fq_dir_path, json_str): url = '{0}/namespace{1}?detail=default'.format( self._mock_url, fq_dir_path) m.get(url, json=json.loads(json_str), status_code=200) def _add_get_snapshot_response( self, m, snapshot_name, json_str, status=200): url = '{0}/platform/1/snapshot/snapshots/{1}'.format( self._mock_url, snapshot_name ) m.get(url, status_code=status, json=json.loads(json_str)) def _verify_dir_creation_request(self, request, path, is_recursive): self.assertEqual('PUT', request.method) expected_url = '{0}/namespace{1}?recursive={2}'.format( self._mock_url, path, six.text_type(is_recursive)) self.assertEqual(expected_url, request.url) self.assertIn("x-isi-ifs-target-type", request.headers) self.assertEqual("container", request.headers['x-isi-ifs-target-type']) def _verify_clone_file_from_snapshot( self, request, fq_file_path, fq_dest_path, snapshot_name): self.assertEqual('PUT', request.method) expected_url = '{0}/namespace{1}?clone=true&snapshot={2}'.format( self._mock_url, fq_dest_path, snapshot_name ) self.assertEqual(expected_url, request.request.url) self.assertIn("x-isi-ifs-copy-source", request.headers) self.assertEqual('/namespace' + fq_file_path, request.headers['x-isi-ifs-copy-source']) manila-10.0.0/manila/tests/share/drivers/dell_emc/plugins/isilon/test_isilon.py0000664000175000017500000013412413656750227027710 0ustar zuulzuul00000000000000# Copyright (c) 2015 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
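# ---------------------------------------------------------------------------
# Editor's note -- illustrative sketch only, not part of the Manila source
# tree.  The test_quota_set__* cases in test_isilon_api.py above verify an
# "update or create" flow against the OneFS quota endpoint: look up the quota
# for a path, PUT a new hard threshold if one already exists, otherwise POST
# a new directory quota, and surface unexpected backend errors.  The helper
# below restates that flow with a requests-style session object; the function
# name and the ``session``/``base_url`` parameters are assumptions made
# purely for illustration.
# ---------------------------------------------------------------------------
def quota_set_sketch(session, base_url, path, quota_type, size):
    """Update the hard quota for ``path``, creating the quota if missing."""
    quotas_url = '{0}/platform/1/quota/quotas'.format(base_url)
    lookup = session.get(quotas_url, params={'path': path})
    if lookup.status_code not in (200, 404):
        # Mirrors test_quota_set__path_does_not_already_exist: a 400 from
        # the lookup is surfaced as an HTTPError.
        lookup.raise_for_status()
    existing = lookup.status_code == 200 and lookup.json().get('quotas')
    if existing:
        quota_id = existing[0]['id']
        resp = session.put('{0}/{1}'.format(quotas_url, quota_id),
                           json={'thresholds': {'hard': size}})
    else:
        resp = session.post(quotas_url, json={
            'path': path,
            'type': quota_type,
            'include_snapshots': False,
            'thresholds_include_overhead': False,
            'enforced': True,
            'thresholds': {'hard': size},
        })
    resp.raise_for_status()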
from unittest import mock import ddt from oslo_log import log from oslo_utils import units from requests.exceptions import HTTPError import six from manila.common import constants as const from manila import exception from manila.i18n import _ from manila.share.drivers.dell_emc.plugins.isilon import isilon from manila.share.drivers.dell_emc.plugins.isilon import isilon_api from manila import test LOG = log.getLogger(__name__) @ddt.ddt class IsilonTest(test.TestCase): """Integration test for the Isilon Manila driver.""" ISILON_ADDR = '10.0.0.1' API_URL = 'https://%s:8080' % ISILON_ADDR AUTH = ('admin', 'admin') ROOT_DIR = '/ifs/manila-test' SHARE_NAME = 'share-foo' SHARE_DIR = ROOT_DIR + '/' + SHARE_NAME ADMIN_HOME_DIR = '/ifs/home/admin' CLONE_DIR = ROOT_DIR + '/clone-dir' class MockConfig(object): def safe_get(self, value): if value == 'emc_nas_server': return '10.0.0.1' elif value == 'emc_nas_server_port': return '8080' elif value == 'emc_nas_login': return 'admin' elif value == 'emc_nas_password': return 'a' elif value == 'emc_nas_root_dir': return '/ifs/manila-test' else: return None @mock.patch( 'manila.share.drivers.dell_emc.plugins.isilon.isilon.isilon_api.' 'IsilonApi', autospec=True) def setUp(self, mock_isi_api): super(IsilonTest, self).setUp() self._mock_isilon_api = mock_isi_api.return_value self.storage_connection = isilon.IsilonStorageConnection(LOG) self.mock_context = mock.Mock('Context') self.mock_emc_driver = mock.Mock('EmcDriver') self.mock_emc_driver.attach_mock(self.MockConfig(), 'configuration') self.storage_connection.connect( self.mock_emc_driver, self.mock_context) def test_allow_access_single_ip_nfs(self): # setup share = {'name': self.SHARE_NAME, 'share_proto': 'NFS'} access = {'access_type': 'ip', 'access_to': '10.1.1.10', 'access_level': const.ACCESS_LEVEL_RW} share_server = None fake_export_id = 1 self._mock_isilon_api.lookup_nfs_export.return_value = fake_export_id self._mock_isilon_api.get_nfs_export.return_value = { 'clients': []} self.assertFalse(self._mock_isilon_api.request.called) # call method under test self.storage_connection.allow_access(self.mock_context, share, access, share_server) # verify expected REST API call is executed expected_url = (self.API_URL + '/platform/1/protocols/nfs/exports/' + str(fake_export_id)) expected_data = {'clients': ['10.1.1.10']} self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data=expected_data) def test_allow_access_with_nfs_readonly(self): # setup share = {'name': self.SHARE_NAME, 'share_proto': 'NFS'} access = {'access_type': 'ip', 'access_to': '10.1.1.10', 'access_level': const.ACCESS_LEVEL_RO} fake_export_id = 70 self._mock_isilon_api.lookup_nfs_export.return_value = fake_export_id self._mock_isilon_api.get_nfs_export.return_value = { 'read_only_clients': []} self.assertFalse(self._mock_isilon_api.request.called) self.storage_connection.allow_access( self.mock_context, share, access, None) # verify expected REST API call is executed expected_url = (self.API_URL + '/platform/1/protocols/nfs/exports/' + six.text_type(fake_export_id)) expected_data = {'read_only_clients': ['10.1.1.10']} self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data=expected_data) def test_allow_access_with_nfs_readwrite(self): # setup share = {'name': self.SHARE_NAME, 'share_proto': 'NFS'} access = {'access_type': 'ip', 'access_to': '10.1.1.10', 'access_level': const.ACCESS_LEVEL_RW} fake_export_id = 70 self._mock_isilon_api.lookup_nfs_export.return_value = fake_export_id 
self._mock_isilon_api.get_nfs_export.return_value = { 'clients': []} self.assertFalse(self._mock_isilon_api.request.called) self.storage_connection.allow_access( self.mock_context, share, access, None) # verify expected REST API call is executed expected_url = (self.API_URL + '/platform/1/protocols/nfs/exports/' + six.text_type(fake_export_id)) expected_data = {'clients': ['10.1.1.10']} self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data=expected_data) def test_allow_access_with_cifs_ip_readonly(self): # Note: Driver does not currently support readonly access for "ip" type share = {'name': self.SHARE_NAME, 'share_proto': 'CIFS'} access = {'access_type': 'ip', 'access_to': '10.1.1.10', 'access_level': const.ACCESS_LEVEL_RO} self.assertRaises( exception.InvalidShareAccess, self.storage_connection.allow_access, self.mock_context, share, access, None) def test_deny_access__ip_nfs_readwrite(self): """Verifies that an IP will be remove from a whitelist.""" fake_export_id = 1 self._mock_isilon_api.lookup_nfs_export.return_value = fake_export_id # simulate an IP added to the whitelist ip_addr = '10.0.0.4' self._mock_isilon_api.get_nfs_export.return_value = { 'clients': [ip_addr]} share = {'name': self.SHARE_NAME, 'share_proto': 'NFS'} access = {'access_type': 'ip', 'access_to': ip_addr, 'access_level': const.ACCESS_LEVEL_RW} share_server = None # call method under test self.assertFalse(self._mock_isilon_api.request.called) self.storage_connection.deny_access(self.mock_context, share, access, share_server) # verify that a call is made to remove an existing IP from the list expected_url = (self.API_URL + '/platform/1/protocols/nfs/exports/' + str(fake_export_id)) expected_data = {'clients': []} self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data=expected_data ) def test_deny_access__nfs_ip_readonly(self): fake_export_id = 1 self._mock_isilon_api.lookup_nfs_export.return_value = fake_export_id # simulate an IP added to the whitelist ip_addr = '10.0.0.4' self._mock_isilon_api.get_nfs_export.return_value = { 'read_only_clients': [ip_addr]} share = {'name': self.SHARE_NAME, 'share_proto': 'NFS'} access = {'access_type': 'ip', 'access_to': ip_addr, 'access_level': const.ACCESS_LEVEL_RO} share_server = None # call method under test self.assertFalse(self._mock_isilon_api.request.called) self.storage_connection.deny_access(self.mock_context, share, access, share_server) # verify that a call is made to remove an existing IP from the list expected_url = (self.API_URL + '/platform/1/protocols/nfs/exports/' + six.text_type(fake_export_id)) expected_data = {'read_only_clients': []} self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data=expected_data ) def test_deny_access_ip_cifs(self): """Verifies that an IP will be remove from a whitelist. Precondition: the IP to be removed exists in the whitelist. Otherwise, do nothing. 
""" # setup ip_addr = '10.1.1.10' share = {'name': self.SHARE_NAME, 'share_proto': 'CIFS'} self._mock_isilon_api.lookup_smb_share.return_value = { 'host_acl': ['allow:' + ip_addr]} self.assertFalse(self._mock_isilon_api.request.called) # call method under test access = {'access_type': 'ip', 'access_to': ip_addr, 'access_level': const.ACCESS_LEVEL_RW} share_server = None self.storage_connection.deny_access(self.mock_context, share, access, share_server) # verify API call is made to remove IP is removed from whitelist expected_url = (self.API_URL + '/platform/1/protocols/smb/shares/' + self.SHARE_NAME) expected_data = {'host_acl': []} self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data=expected_data) def test_deny_access_nfs_invalid_access_type(self): share = {'name': self.SHARE_NAME, 'share_proto': 'NFS'} access = {'access_type': 'foo_access_type', 'access_to': '10.0.0.1'} # This operation should return silently self.storage_connection.deny_access( self.mock_context, share, access, None) def test_deny_access_cifs_invalid_access_type(self): share = {'name': self.SHARE_NAME, 'share_proto': 'CIFS'} access = {'access_type': 'foo_access_type', 'access_to': '10.0.0.1'} # This operation should return silently self.storage_connection.deny_access(self.mock_context, share, access, None) def test_deny_access_invalid_share_protocol(self): share = {'name': self.SHARE_NAME, 'share_proto': 'FOO'} access = {'access_type': 'ip', 'access_to': '10.0.0.1', 'access_level': const.ACCESS_LEVEL_RW} # This operation should return silently self.storage_connection.deny_access( self.mock_context, share, access, None) def test_deny_access_nfs_export_does_not_exist(self): share = {'name': self.SHARE_NAME, 'share_proto': 'NFS'} access = {'access_type': 'ip', 'access_to': '10.0.0.1', 'access_level': const.ACCESS_LEVEL_RW} self._mock_isilon_api.lookup_nfs_export.return_value = 1 self._mock_isilon_api.get_nfs_export.return_value = None self.assertRaises( exception.ShareBackendException, self.storage_connection.deny_access, self.mock_context, share, access, None ) def test_deny_access_nfs_share_does_not_exist(self): share = {'name': self.SHARE_NAME, 'share_proto': 'NFS'} access = {'access_type': 'ip', 'access_to': '10.0.0.1', 'access_level': const.ACCESS_LEVEL_RW} self._mock_isilon_api.lookup_nfs_export.return_value = None self.assertRaises( exception.ShareBackendException, self.storage_connection.deny_access, self.mock_context, share, access, None) def test_deny_access_nfs_share_does_not_contain_required_key(self): share = {'name': self.SHARE_NAME, 'share_proto': 'NFS'} access = { 'access_type': 'ip', 'access_to': '10.0.0.1', 'access_level': const.ACCESS_LEVEL_RW, } self._mock_isilon_api.get_nfs_export.return_value = {} self.assertRaises(exception.ShareBackendException, self.storage_connection.deny_access, self.mock_context, share, access, None) def test_allow_access_multiple_ip_nfs(self): """Verifies adding an IP to a whitelist with pre-existing ips. Verifies that when adding an additional IP to a whitelist which already contains IPs, the Isilon driver successfully appends the IP to the whitelist. 
""" # setup fake_export_id = 42 new_allowed_ip = '10.7.7.8' self._mock_isilon_api.lookup_nfs_export.return_value = fake_export_id existing_ips = ['10.0.0.1', '10.1.1.1', '10.0.0.2'] export_json = { 'clients': existing_ips, 'access_level': const.ACCESS_LEVEL_RW, } self._mock_isilon_api.get_nfs_export.return_value = export_json self.assertFalse(self._mock_isilon_api.request.called) # call method under test share = {'name': self.SHARE_NAME, 'share_proto': 'NFS'} access = {'access_type': 'ip', 'access_to': new_allowed_ip, 'access_level': const.ACCESS_LEVEL_RW} share_server = None self.storage_connection.allow_access( self.mock_context, share, access, share_server) # verify access rule is applied expected_url = (self.API_URL + '/platform/1/protocols/nfs/exports/' + str(fake_export_id)) self.assertTrue(self._mock_isilon_api.request.called) args, kwargs = self._mock_isilon_api.request.call_args action, url = args self.assertEqual('PUT', action) self.assertEqual(expected_url, url) self.assertEqual(1, len(kwargs)) self.assertIn('data', kwargs) actual_clients = set(kwargs['data']['clients']) expected_clients = set(existing_ips) expected_clients.add(new_allowed_ip) self.assertEqual(expected_clients, actual_clients) def test_allow_access_multiple_ip_cifs(self): """Verifies adding an IP to a whitelist with pre-existing ips. Verifies that when adding an additional IP to a whitelist which already contains IPs, the Isilon driver successfully appends the IP to the whitelist. """ # setup share_name = self.SHARE_NAME new_allowed_ip = '10.101.1.1' existing_ips = ['allow:10.0.0.1', 'allow:10.1.1.1', 'allow:10.0.0.2'] share_json = {'name': share_name, 'host_acl': existing_ips} self._mock_isilon_api.lookup_smb_share.return_value = share_json self.assertFalse(self._mock_isilon_api.request.called) # call method under test share = {'name': share_name, 'share_proto': 'CIFS'} access = {'access_type': 'ip', 'access_to': new_allowed_ip, 'access_level': const.ACCESS_LEVEL_RW} share_server = None self.storage_connection.allow_access(self.mock_context, share, access, share_server) # verify access rule is applied expected_url = (self.API_URL + '/platform/1/protocols/smb/shares/' + share_name) self.assertTrue(self._mock_isilon_api.request.called) args, kwargs = self._mock_isilon_api.request.call_args action, url = args self.assertEqual('PUT', action) self.assertEqual(expected_url, url) self.assertEqual(1, len(kwargs)) self.assertIn('data', kwargs) actual_clients = set(kwargs['data']['host_acl']) expected_clients = set(existing_ips) expected_clients.add('allow:' + new_allowed_ip) self.assertEqual(expected_clients, actual_clients) def test_allow_access_single_ip_cifs(self): # setup share_name = self.SHARE_NAME share = {'name': share_name, 'share_proto': 'CIFS'} allow_ip = '10.1.1.10' access = {'access_type': 'ip', 'access_to': allow_ip, 'access_level': const.ACCESS_LEVEL_RW} share_server = None self._mock_isilon_api.lookup_smb_share.return_value = { 'name': share_name, 'host_acl': []} self.assertFalse(self._mock_isilon_api.request.called) # call method under test self.storage_connection.allow_access(self.mock_context, share, access, share_server) # verify access rule is applied expected_url = (self.API_URL + '/platform/1/protocols/smb/shares/' + self.SHARE_NAME) expected_data = {'host_acl': ['allow:' + allow_ip]} self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data=expected_data) @ddt.data( ('foo', const.ACCESS_LEVEL_RW, isilon_api.SmbPermission.rw), ('testuser', const.ACCESS_LEVEL_RO, 
isilon_api.SmbPermission.ro), ) def test_allow_access_with_cifs_user(self, data): # setup share_name = self.SHARE_NAME user, access_level, expected_smb_perm = data share = {'name': share_name, 'share_proto': 'CIFS'} access = {'access_type': 'user', 'access_to': user, 'access_level': access_level} self.storage_connection.allow_access(self.mock_context, share, access, None) self._mock_isilon_api.smb_permissions_add.assert_called_once_with( share_name, user, expected_smb_perm) def test_allow_access_with_cifs_user_invalid_access_level(self): share = {'name': self.SHARE_NAME, 'share_proto': 'CIFS'} access = { 'access_type': 'user', 'access_to': 'foo', 'access_level': 'everything', } self.assertRaises(exception.InvalidShareAccess, self.storage_connection.allow_access, self.mock_context, share, access, None) def test_allow_access_with_cifs_invalid_access_type(self): share_name = self.SHARE_NAME share = {'name': share_name, 'share_proto': 'CIFS'} access = {'access_type': 'fooaccesstype', 'access_to': 'testuser', 'access_level': const.ACCESS_LEVEL_RW} self.assertRaises(exception.InvalidShareAccess, self.storage_connection.allow_access, self.mock_context, share, access, None) def test_deny_access_with_cifs_user(self): share_name = self.SHARE_NAME user_to_remove = 'testuser' share = {'name': share_name, 'share_proto': 'CIFS'} access = {'access_type': 'user', 'access_to': user_to_remove, 'access_level': const.ACCESS_LEVEL_RW} self.assertFalse(self._mock_isilon_api.smb_permissions_remove.called) self.storage_connection.deny_access(self.mock_context, share, access, None) self._mock_isilon_api.smb_permissions_remove.assert_called_with( share_name, user_to_remove) def test_allow_access_invalid_access_type(self): # setup share_name = self.SHARE_NAME share = {'name': share_name, 'share_proto': 'NFS'} allow_ip = '10.1.1.10' access = {'access_type': 'foo_access_type', 'access_to': allow_ip} # verify method under test throws the expected exception self.assertRaises( exception.InvalidShareAccess, self.storage_connection.allow_access, self.mock_context, share, access, None) def test_allow_access_invalid_share_protocol(self): # setup share_name = self.SHARE_NAME share = {'name': share_name, 'share_proto': 'FOO_PROTOCOL'} allow_ip = '10.1.1.10' access = {'access_type': 'ip', 'access_to': allow_ip} # verify method under test throws the expected exception self.assertRaises( exception.InvalidShare, self.storage_connection.allow_access, self.mock_context, share, access, None) def test_create_share_nfs(self): share_path = self.SHARE_DIR self.assertFalse(self._mock_isilon_api.create_directory.called) self.assertFalse(self._mock_isilon_api.create_nfs_export.called) # create the share share = {"name": self.SHARE_NAME, "share_proto": 'NFS', "size": 8} location = self.storage_connection.create_share(self.mock_context, share, None) # verify location and API call made expected_location = '%s:%s' % (self.ISILON_ADDR, self.SHARE_DIR) self.assertEqual(expected_location, location) self._mock_isilon_api.create_directory.assert_called_with(share_path) self._mock_isilon_api.create_nfs_export.assert_called_with(share_path) # verify directory quota call made self._mock_isilon_api.quota_create.assert_called_with( share_path, 'directory', 8 * units.Gi) def test_create_share_cifs(self): self.assertFalse(self._mock_isilon_api.create_directory.called) self.assertFalse(self._mock_isilon_api.create_smb_share.called) # create the share share = {"name": self.SHARE_NAME, "share_proto": 'CIFS', "size": 8} location = 
self.storage_connection.create_share(self.mock_context, share, None) expected_location = '\\\\{0}\\{1}'.format( self.ISILON_ADDR, self.SHARE_NAME) self.assertEqual(expected_location, location) self._mock_isilon_api.create_directory.assert_called_once_with( self.SHARE_DIR) self._mock_isilon_api.create_smb_share.assert_called_once_with( self.SHARE_NAME, self.SHARE_DIR) # verify directory quota call made self._mock_isilon_api.quota_create.assert_called_with( self.SHARE_DIR, 'directory', 8 * units.Gi) def test_create_share_invalid_share_protocol(self): share = {"name": self.SHARE_NAME, "share_proto": 'FOO_PROTOCOL'} self.assertRaises( exception.InvalidShare, self.storage_connection.create_share, self.mock_context, share, share_server=None) def test_create_share_nfs_backend_failure(self): share = {"name": self.SHARE_NAME, "share_proto": 'NFS'} self._mock_isilon_api.create_nfs_export.return_value = False self.assertRaises( exception.ShareBackendException, self.storage_connection.create_share, self.mock_context, share, share_server=None) def test_create_snapshot(self): # create snapshot snapshot_name = "snapshot01" snapshot_path = '/ifs/home/admin' snapshot = {'name': snapshot_name, 'share_name': snapshot_path} self.storage_connection.create_snapshot(self.mock_context, snapshot, None) # verify the create snapshot API call is executed self._mock_isilon_api.create_snapshot.assert_called_with(snapshot_name, snapshot_path) def test_create_share_from_snapshot_nfs(self): # assertions self.assertFalse(self._mock_isilon_api.create_nfs_export.called) self.assertFalse(self._mock_isilon_api.clone_snapshot.called) snapshot_name = "snapshot01" snapshot_path = '/ifs/home/admin' # execute method under test snapshot = {'name': snapshot_name, 'share_name': snapshot_path} share = {"name": self.SHARE_NAME, "share_proto": 'NFS', 'size': 5} location = self.storage_connection.create_share_from_snapshot( self.mock_context, share, snapshot, None) # verify NFS export created at expected location self._mock_isilon_api.create_nfs_export.assert_called_with( self.SHARE_DIR) # verify clone_directory(container_path) method called self._mock_isilon_api.clone_snapshot.assert_called_once_with( snapshot_name, self.SHARE_DIR) expected_location = '{0}:{1}'.format( self.ISILON_ADDR, self.SHARE_DIR) self.assertEqual(expected_location, location) # verify directory quota call made self._mock_isilon_api.quota_create.assert_called_with( self.SHARE_DIR, 'directory', 5 * units.Gi) def test_create_share_from_snapshot_cifs(self): # assertions self.assertFalse(self._mock_isilon_api.create_smb_share.called) self.assertFalse(self._mock_isilon_api.clone_snapshot.called) # setup snapshot_name = "snapshot01" snapshot_path = '/ifs/home/admin' new_share_name = 'clone-dir' # execute method under test snapshot = {'name': snapshot_name, 'share_name': snapshot_path} share = {"name": new_share_name, "share_proto": 'CIFS', "size": 2} location = self.storage_connection.create_share_from_snapshot( self.mock_context, share, snapshot, None) # verify call made to create new CIFS share self._mock_isilon_api.create_smb_share.assert_called_once_with( new_share_name, self.CLONE_DIR) self._mock_isilon_api.clone_snapshot.assert_called_once_with( snapshot_name, self.CLONE_DIR) expected_location = '\\\\{0}\\{1}'.format(self.ISILON_ADDR, new_share_name) self.assertEqual(expected_location, location) # verify directory quota call made expected_share_path = '{0}/{1}'.format(self.ROOT_DIR, new_share_name) self._mock_isilon_api.quota_create.assert_called_with( 
expected_share_path, 'directory', 2 * units.Gi) def test_delete_share_nfs(self): share = {"name": self.SHARE_NAME, "share_proto": 'NFS'} fake_share_num = 42 self._mock_isilon_api.lookup_nfs_export.return_value = fake_share_num self.assertFalse(self._mock_isilon_api.delete_nfs_share.called) # delete the share self.storage_connection.delete_share(self.mock_context, share, None) # verify share delete self._mock_isilon_api.delete_nfs_share.assert_called_with( fake_share_num) def test_delete_share_cifs(self): self.assertFalse(self._mock_isilon_api.delete_smb_share.called) # delete the share share = {"name": self.SHARE_NAME, "share_proto": 'CIFS'} self.storage_connection.delete_share(self.mock_context, share, None) # verify share deleted self._mock_isilon_api.delete_smb_share.assert_called_with( self.SHARE_NAME) def test_delete_share_invalid_share_proto(self): share = {"name": self.SHARE_NAME, "share_proto": 'FOO_PROTOCOL'} self.assertRaises( exception.InvalidShare, self.storage_connection.delete_share, self.mock_context, share, None ) def test_delete_nfs_share_backend_failure(self): share = {"name": self.SHARE_NAME, "share_proto": 'NFS'} self._mock_isilon_api.delete_nfs_share.return_value = False self.assertRaises( exception.ShareBackendException, self.storage_connection.delete_share, self.mock_context, share, None ) def test_delete_nfs_share_share_does_not_exist(self): self._mock_isilon_api.lookup_nfs_export.return_value = None share = {"name": self.SHARE_NAME, "share_proto": 'NFS'} # verify the calling delete on a non-existent share returns and does # not throw exception self.storage_connection.delete_share(self.mock_context, share, None) def test_delete_cifs_share_backend_failure(self): share = {"name": self.SHARE_NAME, "share_proto": 'CIFS'} self._mock_isilon_api.delete_smb_share.return_value = False self.assertRaises( exception.ShareBackendException, self.storage_connection.delete_share, self.mock_context, share, None ) def test_delete_cifs_share_share_does_not_exist(self): share = {"name": self.SHARE_NAME, "share_proto": 'CIFS'} self._mock_isilon_api.lookup_smb_share.return_value = None # verify the calling delete on a non-existent share returns and does # not throw exception self.storage_connection.delete_share(self.mock_context, share, None) def test_delete_snapshot(self): # create a snapshot snapshot_name = "snapshot01" snapshot_path = '/ifs/home/admin' snapshot = {'name': snapshot_name, 'share_name': snapshot_path} self.assertFalse(self._mock_isilon_api.delete_snapshot.called) # delete the created snapshot self.storage_connection.delete_snapshot(self.mock_context, snapshot, None) # verify the API call was made to delete the snapshot self._mock_isilon_api.delete_snapshot.assert_called_once_with( snapshot_name) def test_ensure_share(self): share = {"name": self.SHARE_NAME, "share_proto": 'CIFS'} self.storage_connection.ensure_share(self.mock_context, share, None) @mock.patch( 'manila.share.drivers.dell_emc.plugins.isilon.isilon.isilon_api.' 
'IsilonApi', autospec=True) def test_connect(self, mock_isi_api): storage_connection = isilon.IsilonStorageConnection(LOG) # execute method under test storage_connection.connect( self.mock_emc_driver, self.mock_context) # verify connect sets driver params appropriately mock_config = self.MockConfig() server_addr = mock_config.safe_get('emc_nas_server') self.assertEqual(server_addr, storage_connection._server) expected_port = int(mock_config.safe_get('emc_nas_server_port')) self.assertEqual(expected_port, storage_connection._port) self.assertEqual('https://{0}:{1}'.format(server_addr, expected_port), storage_connection._server_url) expected_username = mock_config.safe_get('emc_nas_login') self.assertEqual(expected_username, storage_connection._username) expected_password = mock_config.safe_get('emc_nas_password') self.assertEqual(expected_password, storage_connection._password) self.assertFalse(storage_connection._verify_ssl_cert) @mock.patch( 'manila.share.drivers.dell_emc.plugins.isilon.isilon.isilon_api.' 'IsilonApi', autospec=True) def test_connect_root_dir_does_not_exist(self, mock_isi_api): mock_isilon_api = mock_isi_api.return_value mock_isilon_api.is_path_existent.return_value = False storage_connection = isilon.IsilonStorageConnection(LOG) # call method under test storage_connection.connect(self.mock_emc_driver, self.mock_context) mock_isilon_api.create_directory.assert_called_once_with( self.ROOT_DIR, recursive=True) def test_update_share_stats(self): stats_dict = {} self.storage_connection.update_share_stats(stats_dict) expected_version = isilon.VERSION self.assertEqual({'driver_version': expected_version}, stats_dict) def test_get_network_allocations_number(self): # call method under test num = self.storage_connection.get_network_allocations_number() self.assertEqual(0, num) def test_extend_share(self): quota_id = 'abcdef' new_share_size = 8 share = { "name": self.SHARE_NAME, "share_proto": 'NFS', "size": new_share_size } self._mock_isilon_api.quota_get.return_value = {'id': quota_id} self.assertFalse(self._mock_isilon_api.quota_set.called) self.storage_connection.extend_share(share, new_share_size) share_path = '{0}/{1}'.format(self.ROOT_DIR, self.SHARE_NAME) expected_quota_size = new_share_size * units.Gi self._mock_isilon_api.quota_set.assert_called_once_with( share_path, 'directory', expected_quota_size) def test_update_access_add_nfs(self): share = { "name": self.SHARE_NAME, "share_proto": 'NFS', } fake_export_id = 4 self._mock_isilon_api.lookup_nfs_export.return_value = fake_export_id self._mock_isilon_api.get_nfs_export.return_value = { 'clients': [], 'read_only_clients': [] } nfs_access = { 'access_type': 'ip', 'access_to': '10.1.1.10', 'access_level': const.ACCESS_LEVEL_RW, 'access_id': '09960614-8574-4e03-89cf-7cf267b0bd08' } access_rules = [nfs_access] add_rules = [nfs_access] delete_rules = [] rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, add_rules, delete_rules, share_server=None) expected_url = (self.API_URL + '/platform/1/protocols/nfs/exports/' + str(fake_export_id)) expected_data = {'clients': ['10.1.1.10'], 'read_only_clients': []} expected_rule_map = { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'active' } } self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data=expected_data) self.assertEqual(expected_rule_map, rule_map) def test_update_access_add_cifs(self): share = { "name": self.SHARE_NAME, "share_proto": 'CIFS', } access = { 'access_type': 'user', 'access_to': 'foo', 
'access_level': const.ACCESS_LEVEL_RW, 'access_id': '09960614-8574-4e03-89cf-7cf267b0bd08' } add_rules = [access] access_rules = [access] rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, add_rules, []) self._mock_isilon_api.smb_permissions_add.assert_called_once_with( self.SHARE_NAME, 'foo', isilon_api.SmbPermission.rw) expected_rule_map = { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'active' } } self.assertEqual(expected_rule_map, rule_map) def test_update_access_delete_nfs(self): share = { "name": self.SHARE_NAME, "share_proto": 'NFS', } fake_export_id = 4 self._mock_isilon_api.lookup_nfs_export.return_value = fake_export_id # simulate an IP added to the whitelist ip_addr = '10.0.0.4' ip_addr_ro = '10.0.0.50' self._mock_isilon_api.get_nfs_export.return_value = { 'clients': [ip_addr], 'read_only_clients': [ip_addr_ro]} nfs_access_del_1 = { 'access_type': 'ip', 'access_to': ip_addr, 'access_level': const.ACCESS_LEVEL_RW } nfs_access_del_2 = { 'access_type': 'ip', 'access_to': ip_addr, 'access_level': const.ACCESS_LEVEL_RW } access_rules = [] delete_rules = [nfs_access_del_1, nfs_access_del_2] rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, [], delete_rules) expected_url = (self.API_URL + '/platform/1/protocols/nfs/exports/' + six.text_type(fake_export_id)) expected_data = {'clients': [], 'read_only_clients': []} self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data=expected_data) self.assertEqual({}, rule_map) def test_update_access_delete_cifs(self): share = { "name": self.SHARE_NAME, "share_proto": 'CIFS', } delete_rule = { 'access_type': 'user', 'access_to': 'newuser', 'access_level': const.ACCESS_LEVEL_RW, 'access_id': '29960614-8574-4e03-89cf-7cf267b0bd08' } access_rules = [] delete_rules = [delete_rule] self._mock_isilon_api.lookup_smb_share.return_value = { 'permissions': [ { 'permission': 'change', 'permission_type': 'allow', 'trustee': { 'id': 'SID:S-1-5-21', 'name': 'newuser', 'type': 'user', } } ] } rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, [], delete_rules) expected_url = (self.API_URL + '/platform/1/protocols/smb/shares/' + self.SHARE_NAME) self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data={'permissions': [], 'host_acl': []}) self.assertEqual({}, rule_map) def test_update_access_nfs_share_not_found(self): share = { "name": self.SHARE_NAME, "share_proto": 'NFS', } access = { 'access_type': 'user', 'access_to': 'foouser', 'access_level': const.ACCESS_LEVEL_RW, 'access_id': '09960614-8574-4e03-89cf-7cf267b0bd08' } access_rules = [access] add_rules = [access] self._mock_isilon_api.lookup_nfs_export.return_value = None rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, add_rules, []) expected_rule_map = { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'error' } } self.assertEqual(expected_rule_map, rule_map) def test_update_access_nfs_http_error_on_clear_rules(self): share = { "name": self.SHARE_NAME, "share_proto": 'NFS', } access = { 'access_type': 'user', 'access_to': 'foouser', 'access_level': const.ACCESS_LEVEL_RW, 'access_id': '09960614-8574-4e03-89cf-7cf267b0bd08' } access_rules = [access] add_rules = [access] (self._mock_isilon_api.request.return_value.raise_for_status. 
side_effect) = HTTPError rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, add_rules, []) expected_rule_map = { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'error' } } self.assertEqual(expected_rule_map, rule_map) def test_update_access_cifs_http_error_on_clear_rules(self): share = { "name": self.SHARE_NAME, "share_proto": 'CIFS', } access = { 'access_type': 'user', 'access_to': 'foo', 'access_level': const.ACCESS_LEVEL_RW, 'access_id': '09960614-8574-4e03-89cf-7cf267b0bd08' } add_rules = [access] access_rules = [access] (self._mock_isilon_api.request.return_value.raise_for_status. side_effect) = HTTPError rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, add_rules, []) expected_rule_map = { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'error' } } self.assertEqual(expected_rule_map, rule_map) def test_update_access_cifs_share_backend_error(self): share = { "name": self.SHARE_NAME, "share_proto": 'CIFS', } access = { 'access_type': 'user', 'access_to': 'foo', 'access_level': const.ACCESS_LEVEL_RW, 'access_id': '09960614-8574-4e03-89cf-7cf267b0bd08' } add_rules = [access] access_rules = [access] message = _('Only "RW" and "RO" access levels are supported.') self._mock_isilon_api.smb_permissions_add.side_effect = ( exception.ShareBackendException(message=message)) rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, add_rules, []) expected_rule_map = { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'error' } } self.assertEqual(expected_rule_map, rule_map) def test_update_access_cifs_invalid_access_type(self): share = { "name": self.SHARE_NAME, "share_proto": 'CIFS', } access = { 'access_type': 'foo', 'access_to': 'foo', 'access_level': const.ACCESS_LEVEL_RW, 'access_id': '09960614-8574-4e03-89cf-7cf267b0bd08' } add_rules = [access] access_rules = [access] rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, add_rules, []) expected_rule_map = { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'error' } } self.assertEqual(expected_rule_map, rule_map) def test_update_access_recover_nfs(self): # verify that new ips are added and ips not in rules are removed share = { "name": self.SHARE_NAME, "share_proto": 'NFS', } fake_export_id = 4 self._mock_isilon_api.lookup_nfs_export.return_value = fake_export_id self._mock_isilon_api.get_nfs_export.return_value = { 'clients': ['10.1.1.8'], 'read_only_clients': ['10.2.0.2'] } nfs_access_1 = { 'access_type': 'ip', 'access_to': '10.1.1.10', 'access_level': const.ACCESS_LEVEL_RW, 'access_id': '09960614-8574-4e03-89cf-7cf267b0bd08' } nfs_access_2 = { 'access_type': 'ip', 'access_to': '10.1.1.2', 'access_level': const.ACCESS_LEVEL_RO, 'access_id': '19960614-8574-4e03-89cf-7cf267b0bd08' } access_rules = [nfs_access_1, nfs_access_2] rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, [], []) expected_url = (self.API_URL + '/platform/1/protocols/nfs/exports/' + six.text_type(fake_export_id)) expected_data = { 'clients': ['10.1.1.10'], 'read_only_clients': ['10.1.1.2'] } expected_rule_map = { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'active' }, '19960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'active' } } self._mock_isilon_api.request.assert_called_once_with( 'PUT', expected_url, data=expected_data) self.assertEqual(expected_rule_map, rule_map) def test_update_access_recover_cifs(self): share = { "name": self.SHARE_NAME, "share_proto": 'CIFS', } 
mock_smb_share_1 = { 'host_acl': ['allow:10.1.2.3', 'allow:10.1.1.12'], 'permissions': [ { 'permission': 'change', 'permission_type': 'allow', 'trustee': { 'id': 'SID:S-1-5-21', 'name': 'foouser', 'type': 'user', } }, { 'permission': 'ro', 'permission_type': 'allow', 'trustee': { 'id': 'SID:S-1-5-22', 'name': 'testuser', 'type': 'user', } } ] } mock_smb_share_2 = { 'host_acl': ['allow:10.1.2.3', 'allow:10.1.1.12', 'allow:10.1.1.10'], 'permissions': [ { 'permission': 'change', 'permission_type': 'allow', 'trustee': { 'id': 'SID:S-1-5-21', 'name': 'foouser', 'type': 'user', } }, { 'permission': 'ro', 'permission_type': 'allow', 'trustee': { 'id': 'SID:S-1-5-22', 'name': 'testuser', 'type': 'user', } } ] } self._mock_isilon_api.lookup_smb_share.side_effect = [ mock_smb_share_1, mock_smb_share_2] access_1 = { 'access_type': 'ip', 'access_to': '10.1.1.10', 'access_level': const.ACCESS_LEVEL_RW, 'access_id': '09960614-8574-4e03-89cf-7cf267b0bd08' } access_2 = { 'access_type': 'user', 'access_to': 'testuser', 'access_level': const.ACCESS_LEVEL_RO, 'access_id': '19960614-8574-4e03-89cf-7cf267b0bd08' } access_rules = [access_1, access_2] rule_map = self.storage_connection.update_access( self.mock_context, share, access_rules, [], []) expected_url = (self.API_URL + '/platform/1/protocols/smb/shares/' + self.SHARE_NAME) expected_data = { 'host_acl': ['allow:10.1.1.10'], 'permissions': [ { 'permission': 'ro', 'permission_type': 'allow', 'trustee': { 'id': 'SID:S-1-5-22', 'name': 'testuser', 'type': 'user', } } ] } expected_rule_map = { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'active' }, '19960614-8574-4e03-89cf-7cf267b0bd08': { 'state': 'active' } } self.assertEqual(2, self._mock_isilon_api.lookup_smb_share.call_count) http_method, url = self._mock_isilon_api.request.call_args[0] data = self._mock_isilon_api.request.call_args[1]['data'] self.assertEqual('PUT', http_method) self.assertEqual(expected_url, url) self.assertEqual(expected_data['host_acl'], data['host_acl']) self.assertEqual(1, len(data['permissions'])) self.assertEqual(expected_data['permissions'][0], data['permissions'][0]) self.assertEqual(expected_rule_map, rule_map) manila-10.0.0/manila/tests/share/drivers/dell_emc/test_driver.py0000664000175000017500000001765513656750227024741 0ustar zuulzuul00000000000000# Copyright (c) 2014 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
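# ---------------------------------------------------------------------------
# Editor's note -- illustrative sketch only, not part of the Manila source
# tree.  The test_update_access_* cases in test_isilon.py above assert that
# the driver answers update_access() with a per-rule status map of the form
# {access_id: {'state': 'active' | 'error'}}.  The helper below isolates that
# reporting pattern; ``apply_rule`` is an assumed callable standing in for
# whatever backend call the driver issues for each rule.
# ---------------------------------------------------------------------------
def build_access_rule_map(access_rules, apply_rule):
    """Apply each access rule and report whether it became active."""
    rule_map = {}
    for rule in access_rules:
        try:
            apply_rule(rule)
            state = 'active'
        except Exception:
            # In this sketch a failure is recorded for the offending rule
            # only, and the remaining rules are still attempted.
            state = 'error'
        rule_map[rule['access_id']] = {'state': state}
    return rule_map


# Hypothetical usage: build_access_rule_map(rules, backend.allow) yields an
# 'active' entry per rule that backend.allow accepts and 'error' otherwise.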
from unittest import mock from stevedore import extension from manila.share import configuration as conf from manila.share.drivers.dell_emc import driver as emcdriver from manila.share.drivers.dell_emc.plugins import base from manila import test class FakeConnection(base.StorageConnection): def __init__(self, *args, **kwargs): self.ipv6_implemented = True pass @property def driver_handles_share_servers(self): return True def create_share(self, context, share, share_server): """Is called to create share.""" def create_snapshot(self, context, snapshot, share_server): """Is called to create snapshot.""" def delete_share(self, context, share, share_server): """Is called to remove share.""" def extend_share(self, share, new_size, share_server): """Is called to extend share.""" def shrink_share(self, share, new_size, share_server): """Is called to shrink share.""" def delete_snapshot(self, context, snapshot, share_server): """Is called to remove snapshot.""" def ensure_share(self, context, share, share_server): """Invoked to sure that share is exported.""" def allow_access(self, context, share, access, share_server): """Allow access to the share.""" def deny_access(self, context, share, access, share_server): """Deny access to the share.""" def raise_connect_error(self): """Check for setup error.""" def connect(self, emc_share_driver, context): """Any initialization the share driver does while starting.""" raise NotImplementedError() def update_share_stats(self, stats_dict): """Add key/values to stats_dict.""" def get_network_allocations_number(self): """Returns number of network allocations for creating VIFs.""" return 0 def setup_server(self, network_info, metadata=None): """Set up and configures share server with given network parameters.""" def teardown_server(self, server_details, security_services=None): """Teardown share server.""" FAKE_BACKEND = 'fake_backend' class FakeEMCExtensionManager(object): def __init__(self): self.extensions = [] self.extensions.append( extension.Extension(name=FAKE_BACKEND, plugin=FakeConnection, entry_point=None, obj=None)) class EMCShareFrameworkTestCase(test.TestCase): @mock.patch('stevedore.extension.ExtensionManager', mock.Mock(return_value=FakeEMCExtensionManager())) def setUp(self): super(EMCShareFrameworkTestCase, self).setUp() self.configuration = conf.Configuration(None) self.configuration.append_config_values = mock.Mock(return_value=0) self.configuration.share_backend_name = FAKE_BACKEND self.mock_object(self.configuration, 'safe_get', self._fake_safe_get) self.driver = emcdriver.EMCShareDriver( configuration=self.configuration) def test_driver_setup(self): FakeConnection.connect = mock.Mock() self.driver.do_setup(None) self.assertIsInstance(self.driver.plugin, FakeConnection, "Not an instance of FakeConnection") FakeConnection.connect.assert_called_with(self.driver, None) def test_update_share_stats(self): data = {} self.driver.plugin = mock.Mock() self.driver._update_share_stats() data["share_backend_name"] = FAKE_BACKEND data["driver_handles_share_servers"] = True data["vendor_name"] = 'Dell EMC' data["driver_version"] = '1.0' data["storage_protocol"] = 'NFS_CIFS' data['total_capacity_gb'] = 'unknown' data['free_capacity_gb'] = 'unknown' data['reserved_percentage'] = 0 data['qos'] = False data['pools'] = None data['snapshot_support'] = True data['create_share_from_snapshot_support'] = True data['revert_to_snapshot_support'] = False data['share_group_stats'] = {'consistent_snapshot_support': None} data['mount_snapshot_support'] = False 
data['replication_domain'] = None data['filter_function'] = None data['goodness_function'] = None data['snapshot_support'] = True data['create_share_from_snapshot_support'] = True data['ipv4_support'] = True data['ipv6_support'] = False self.assertEqual(data, self.driver._stats) def _fake_safe_get(self, value): if value in ['emc_share_backend', 'share_backend_name']: return FAKE_BACKEND elif value == 'driver_handles_share_servers': return True return None def test_support_manage(self): share = mock.Mock() driver_options = mock.Mock() share_server = mock.Mock() snapshot = mock.Mock() context = mock.Mock() identifier = mock.Mock() self.driver.plugin = mock.Mock() self.driver.manage_existing_support = True self.driver.manage_existing_with_server_support = True self.driver.manage_existing_snapshot_support = True self.driver.manage_snapshot_with_server_support = True self.driver.manage_server_support = True self.driver.manage_existing(share, driver_options) self.driver.manage_existing_with_server(share, driver_options, share_server) self.driver.manage_existing_snapshot(snapshot, driver_options) self.driver.manage_existing_snapshot_with_server(snapshot, driver_options, share_server) self.driver.manage_server(context, share_server, identifier, driver_options) def test_not_support_manage(self): share = mock.Mock() driver_options = {} share_server = mock.Mock() snapshot = mock.Mock() identifier = mock.Mock() self.driver.plugin = mock.Mock() result = self.driver.manage_existing(share, driver_options) self.assertIsInstance(result, NotImplementedError) result = self.driver.manage_existing_with_server( share, driver_options, share_server) self.assertIsInstance(result, NotImplementedError) result = self.driver.manage_existing_snapshot(snapshot, driver_options) self.assertIsInstance(result, NotImplementedError) result = self.driver.manage_existing_snapshot_with_server( snapshot, driver_options, share_server) self.assertIsInstance(result, NotImplementedError) result = self.driver.manage_server(None, share_server, identifier, driver_options) self.assertIsInstance(result, NotImplementedError) def test_unmanage_manage(self): share = mock.Mock() server_details = {} share_server = mock.Mock() snapshot = mock.Mock() self.driver.plugin = mock.Mock(share) self.driver.unmanage(share) self.driver.unmanage_with_server(share, share_server) self.driver.unmanage_snapshot(snapshot) self.driver.unmanage_snapshot_with_server(snapshot, share_server) self.driver.unmanage_server(server_details) manila-10.0.0/manila/tests/share/drivers/maprfs/0000775000175000017500000000000013656750362021543 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/maprfs/__init__.py0000664000175000017500000000000013656750227023642 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/maprfs/test_maprfs.py0000664000175000017500000011335013656750227024447 0ustar zuulzuul00000000000000# Copyright (c) 2016, MapR Technologies # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
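# ---------------------------------------------------------------------------
# Editor's note -- illustrative sketch only, not part of the Manila source
# tree.  EMCShareFrameworkTestCase in test_driver.py above substitutes a fake
# stevedore ExtensionManager so that the configured backend name resolves to
# FakeConnection.  The snippet below shows the name-to-plugin lookup that the
# fake stands in for; the ``namespace`` default is an assumption used purely
# for illustration.
# ---------------------------------------------------------------------------
from stevedore import extension


def load_backend_plugin(backend_name, namespace='manila.share.plugins'):
    """Return the plugin class registered under ``backend_name``, if any."""
    manager = extension.ExtensionManager(namespace=namespace)
    for ext in manager.extensions:
        if ext.name == backend_name:
            return ext.plugin
    return None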
"""Unit tests for MapRFS native protocol driver module.""" from unittest import mock from oslo_concurrency import processutils from oslo_config import cfg import six from manila import context from manila import exception import manila.share.configuration as config from manila.share.drivers.maprfs import driver_util as mapru import manila.share.drivers.maprfs.maprfs_native as maprfs from manila import test from manila.tests import fake_share from manila import utils CONF = cfg.CONF class MapRFSNativeShareDriverTestCase(test.TestCase): """Tests MapRFSNativeShareDriver.""" def setUp(self): super(MapRFSNativeShareDriverTestCase, self).setUp() self._context = context.get_admin_context() self._hdfs_execute = mock.Mock(return_value=('', '')) self.local_ip = '192.168.1.1' CONF.set_default('driver_handles_share_servers', False) CONF.set_default('maprfs_clinode_ip', [self.local_ip]) CONF.set_default('maprfs_ssh_name', 'fake_sshname') CONF.set_default('maprfs_ssh_pw', 'fake_sshpw') CONF.set_default('maprfs_ssh_private_key', 'fake_sshkey') CONF.set_default('maprfs_rename_managed_volume', True) self.fake_conf = config.Configuration(None) self.cluster_name = 'fake' export_locations = {0: {'path': '/share-0'}} export_locations[0]['el_metadata'] = { 'volume-name': 'share-0'} self.share = fake_share.fake_share(share_proto='MAPRFS', name='share-0', size=2, share_id=1, export_locations=export_locations, export_location='/share-0') self.snapshot = fake_share.fake_snapshot(share_proto='MAPRFS', name='fake', share_name=self.share['name'], share_id=self.share['id'], share=self.share, share_instance=self.share, provider_location='fake') self.access = fake_share.fake_access(access_type='user', access_to='fake', access_level='rw') self.snapshot = self.snapshot.values self.snapshot.update(share_instance=self.share) self.export_path = 'maprfs:///share-0 -C -Z -N fake' self.fakesnapshot_path = '/share-0/.snapshot/snapshot-0' self.hadoop_bin = '/usr/bin/hadoop' self.maprcli_bin = '/usr/bin/maprcli' self.mock_object(utils, 'execute') self.mock_object( mapru.socket, 'gethostname', mock.Mock(return_value='testserver')) self.mock_object( mapru.socket, 'gethostbyname_ex', mock.Mock(return_value=( 'localhost', ['localhost.localdomain', mapru.socket.gethostname.return_value], ['127.0.0.1', self.local_ip]))) self._driver = maprfs.MapRFSNativeShareDriver( configuration=self.fake_conf) self._driver.do_setup(self._context) self._driver.api.get_share_metadata = mock.Mock(return_value={}) self._driver.api.update_share_metadata = mock.Mock() def test_do_setup(self): self._driver.do_setup(self._context) self.assertIsNotNone(self._driver._maprfs_util) self.assertEqual([self.local_ip], self._driver._maprfs_util.hosts) def test_check_for_setup_error(self): self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver._maprfs_util.check_state = mock.Mock(return_value=True) self._driver._maprfs_util.maprfs_ls = mock.Mock() self._driver.check_for_setup_error() def test_check_for_setup_error_exception_config(self): self._driver.configuration.maprfs_clinode_ip = None self.assertRaises(exception.MapRFSException, self._driver.check_for_setup_error) def test_check_for_setup_error_exception_no_dir(self): self._driver._maprfs_util.check_state = mock.Mock(return_value=True) self._driver._maprfs_util.maprfs_ls = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.MapRFSException, self._driver.check_for_setup_error) def test_check_for_setup_error_exception_cldb_state(self): 
self._driver._check_maprfs_state = mock.Mock(return_value=False) self.assertRaises(exception.MapRFSException, self._driver.check_for_setup_error) def test__check_maprfs_state_healthy(self): fake_out = """Found 8 items drwxr-xr-x - mapr mapr 0 2016-07-29 05:38 /apps""" self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_out, '')) result = self._driver._check_maprfs_state() self._driver._maprfs_util._execute.assert_called_once_with( self.hadoop_bin, 'fs', '-ls', '/', check_exit_code=False) self.assertTrue(result) def test__check_maprfs_state_down(self): fake_out = "No CLDB" self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_out, '')) result = self._driver._check_maprfs_state() self._driver._maprfs_util._execute.assert_called_once_with( self.hadoop_bin, 'fs', '-ls', '/', check_exit_code=False) self.assertFalse(result) def test__check_maprfs_state_exception(self): self._driver._maprfs_util._execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.MapRFSException, self._driver._check_maprfs_state) self._driver._maprfs_util._execute.assert_called_once_with( self.hadoop_bin, 'fs', '-ls', '/', check_exit_code=False) def test_create_share_unsupported_proto(self): self._driver.api.get_share_metadata = mock.Mock(return_value={}) self._driver._get_share_path = mock.Mock() self.assertRaises(exception.MapRFSException, self._driver.create_share, self._context, fake_share.fake_share(share_id=1), share_server=None) self.assertFalse(self._driver._get_share_path.called) def test_manage_existing(self): self._driver._maprfs_util.get_volume_info_by_path = mock.Mock( return_value={'quota': 1024, 'totalused': 966, 'volumename': 'fake'}) self._driver._maprfs_util._execute = mock.Mock() self._driver._maprfs_util.get_cluster_name = mock.Mock( return_value="fake") def test_manage_existing_no_rename(self): self._driver._maprfs_util.get_volume_info_by_path = mock.Mock( return_value={'quota': 1024, 'totalused': 966, 'volumename': 'fake'}) self._driver._maprfs_util._execute = mock.Mock() self._driver._maprfs_util.get_cluster_name = mock.Mock( return_value="fake") result = self._driver.manage_existing(self.share, {'rename': 'no'}) self.assertEqual(1, result['size']) def test_manage_existing_exception(self): self._driver._maprfs_util.get_volume_info_by_path = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.MapRFSException, self._driver.manage_existing, self.share, {}) def test_manage_existing_invalid_share(self): def fake_execute(self, *cmd, **kwargs): check_exit_code = kwargs.get('check_exit_code', True) if check_exit_code: raise exception.ProcessExecutionError else: return 'No such volume', 0 self._driver._maprfs_util._execute = fake_execute mock_execute = self._driver.manage_existing self.assertRaises(exception.ManageInvalidShare, mock_execute, self.share, {}) def test_manage_existing_snapshot(self): self._driver._maprfs_util.get_snapshot_list = mock.Mock( return_value=[self.snapshot['provider_location']]) self._driver._maprfs_util.maprfs_du = mock.Mock(return_value=11) update = self._driver.manage_existing_snapshot(self.snapshot, {}) self.assertEqual(1, update['size']) def test_manage_existing_snapshot_invalid(self): self._driver._maprfs_util.get_snapshot_list = mock.Mock( return_value=[]) mock_execute = self._driver.manage_existing_snapshot self.assertRaises(exception.ManageInvalidShareSnapshot, mock_execute, self.snapshot, {}) def test_manage_existing_snapshot_exception(self): 
self._driver._maprfs_util.get_snapshot_list = mock.Mock( side_effect=exception.ProcessExecutionError) mock_execute = self._driver.manage_existing_snapshot self.assertRaises(exception.MapRFSException, mock_execute, self.snapshot, {}) def test_manage_existing_with_no_quota(self): self._driver._maprfs_util.get_volume_info_by_path = mock.Mock( return_value={'quota': 0, 'totalused': 1999, 'volumename': 'fake'}) self._driver._maprfs_util.rename_volume = mock.Mock() self._driver._maprfs_util.get_cluster_name = mock.Mock( return_value="fake") result = self._driver.manage_existing(self.share, {}) self.assertEqual(2, result['size']) def test__set_volume_size(self): volume = self._driver._volume_name(self.share['name']) sizestr = six.text_type(self.share['size']) + 'G' self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver._maprfs_util.set_volume_size(volume, self.share['size']) self._driver._maprfs_util._execute.assert_called_once_with( self.maprcli_bin, 'volume', 'modify', '-name', volume, '-quota', sizestr) def test_extend_share(self): volume = self._driver._volume_name(self.share['name']) self._driver._maprfs_util.set_volume_size = mock.Mock() self._driver.extend_share(self.share, self.share['size']) self._driver._maprfs_util.set_volume_size.assert_called_once_with( volume, self.share['size']) def test_extend_exception(self): self._driver._maprfs_util.set_volume_size = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.MapRFSException, self._driver.extend_share, self.share, self.share['size']) def test_shrink_share(self): volume = self._driver._volume_name(self.share['name']) self._driver._maprfs_util.set_volume_size = mock.Mock() self._driver._maprfs_util.get_volume_info = mock.Mock( return_value={'total_user': 0}) self._driver.shrink_share(self.share, self.share['size']) self._driver._maprfs_util.set_volume_size.assert_called_once_with( volume, self.share['size']) def test_update_access_add(self): aces = { 'volumeAces': { 'readAce': 'u:fake|fake:fake', 'writeAce': 'u:fake', } } volume = self._driver._volume_name(self.share['name']) self._driver._maprfs_util.get_volume_info = mock.Mock( return_value=aces) self._driver._maprfs_util.group_exists = mock.Mock(return_value=True) self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver.update_access(self._context, self.share, [self.access], [self.access], []) self._driver._maprfs_util._execute.assert_any_call( self.maprcli_bin, 'volume', 'modify', '-name', volume, '-readAce', 'g:' + self.access['access_to'], '-writeAce', 'g:' + self.access['access_to']) def test_update_access_add_no_user_no_group_exists(self): aces = { 'volumeAces': { 'readAce': 'u:fake|fake:fake', 'writeAce': 'u:fake', } } volume = self._driver._volume_name(self.share['name']) self._driver._maprfs_util.get_volume_info = mock.Mock( return_value=aces) self._driver._maprfs_util.group_exists = mock.Mock(return_value=False) self._driver._maprfs_util.user_exists = mock.Mock(return_value=False) self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver.update_access(self._context, self.share, [self.access], [self.access], []) self._driver._maprfs_util._execute.assert_any_call( self.maprcli_bin, 'volume', 'modify', '-name', volume, '-readAce', 'g:' + self.access['access_to'], '-writeAce', 'g:' + self.access['access_to']) def test_update_access_delete(self): aces = { 'volumeAces': { 'readAce': 'p', 'writeAce': 'p', } } volume = self._driver._volume_name(self.share['name']) 
self._driver._maprfs_util.get_volume_info = mock.Mock( return_value=aces) self._driver._maprfs_util.group_exists = mock.Mock(return_value=True) self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver.update_access(self._context, self.share, [], [], [self.access]) self._driver._maprfs_util._execute.assert_any_call( self.maprcli_bin, 'volume', 'modify', '-name', volume, '-readAce', '', '-writeAce', '') def test_update_access_recover(self): aces = { 'volumeAces': { 'readAce': 'u:fake', 'writeAce': 'u:fake', } } volume = self._driver._volume_name(self.share['name']) self._driver._maprfs_util.get_volume_info = mock.Mock( return_value=aces) self._driver._maprfs_util.group_exists = mock.Mock(return_value=False) self._driver._maprfs_util.user_exists = mock.Mock(return_value=True) self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver.update_access(self._context, self.share, [self.access], [], []) self._driver._maprfs_util._execute.assert_any_call( self.maprcli_bin, 'volume', 'modify', '-name', volume, '-readAce', 'u:' + self.access['access_to'], '-writeAce', 'u:' + self.access['access_to']) def test_update_access_share_not_exists(self): self._driver._maprfs_util.volume_exists = mock.Mock( return_value=False) self._driver._maprfs_util.group_exists = mock.Mock(return_value=True) self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver.update_access(self._context, self.share, [self.access], [], []) self._driver._maprfs_util._execute.assert_not_called() def test_update_access_exception(self): aces = { 'volumeAces': { 'readAce': 'p', 'writeAce': 'p', } } self._driver._maprfs_util.get_volume_info = mock.Mock( return_value=aces) self._driver._maprfs_util.group_exists = mock.Mock(return_value=True) utils.execute = mock.Mock( side_effect=exception.ProcessExecutionError(stdout='ERROR')) self.assertRaises(exception.MapRFSException, self._driver.update_access, self._context, self.share, [self.access], [], []) def test_update_access_invalid_access(self): access = fake_share.fake_access(access_type='ip', access_to='fake', access_level='rw') self.assertRaises(exception.InvalidShareAccess, self._driver.update_access, self._context, self.share, [access], [], []) def test_ensure_share(self): self._driver._maprfs_util.volume_exists = mock.Mock( return_value=True) self._driver._maprfs_util.get_volume_info = mock.Mock( return_value={'mountdir': self.share['export_location']}) self._driver._maprfs_util.get_cluster_name = mock.Mock( return_value=self.cluster_name) result = self._driver.ensure_share(self._context, self.share) self.assertEqual(self.export_path, result[0]['path']) def test_create_share(self): size_str = six.text_type(self.share['size']) + 'G' path = self._driver._share_dir(self.share['name']) self._driver.api.get_share_metadata = mock.Mock( return_value={'_fake': 'fake'}) self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver._maprfs_util.set_volume_size = mock.Mock() self._driver._maprfs_util.maprfs_chmod = mock.Mock() self._driver._maprfs_util.get_cluster_name = mock.Mock( return_value=self.cluster_name) self._driver.create_share(self._context, self.share) self._driver._maprfs_util._execute.assert_called_once_with( self.maprcli_bin, 'volume', 'create', '-name', self.share['name'], '-path', path, '-quota', size_str, '-readAce', '', '-writeAce', '', '-fake', 'fake') self._driver._maprfs_util.maprfs_chmod.assert_called_once_with(path, '777') def test_create_share_with_custom_name(self): size_str = 
six.text_type(self.share['size']) + 'G' self._driver.api.get_share_metadata = mock.Mock( return_value={'_name': 'fake', '_path': 'fake'}) self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver._maprfs_util.set_volume_size = mock.Mock() self._driver._maprfs_util.maprfs_chmod = mock.Mock() self._driver._maprfs_util.get_cluster_name = mock.Mock( return_value=self.cluster_name) self._driver.create_share(self._context, self.share) self._driver._maprfs_util._execute.assert_called_once_with( self.maprcli_bin, 'volume', 'create', '-name', 'fake', '-path', 'fake', '-quota', size_str, '-readAce', '', '-writeAce', '') self._driver._maprfs_util.maprfs_chmod.assert_called_once_with('fake', '777') def test_create_share_exception(self): self._driver.api.get_share_metadata = mock.Mock(return_value={}) self._driver._maprfs_util._execute = mock.Mock( side_effect=exception.ProcessExecutionError) self._driver._maprfs_util.set_volume_size = mock.Mock() self._driver._maprfs_util.maprfs_chmod = mock.Mock() self._driver._maprfs_util.get_cluster_name = mock.Mock( return_value=self.cluster_name) self.assertRaises(exception.MapRFSException, self._driver.create_share, self._context, self.share) def test_create_share_from_snapshot(self): fake_snapshot = dict(self.snapshot) fake_snapshot.update(share_instance={'share_id': 1}) size_str = six.text_type(self.share['size']) + 'G' path = self._driver._share_dir(self.share['name']) snapthot_path = self._driver._get_snapshot_path(self.snapshot) + '/*' self._driver._maprfs_util._execute = mock.Mock( return_value=('Found', 0)) self._driver._maprfs_util.set_volume_size = mock.Mock() self._driver._maprfs_util.get_cluster_name = mock.Mock( return_value=self.cluster_name) self._driver.api.get_share_metadata = mock.Mock( return_value={'_fake': 'fake', 'fake2': 'fake2'}) mock_execute = self._driver._maprfs_util._execute self._driver.create_share_from_snapshot(self._context, self.share, self.snapshot) mock_execute.assert_any_call(self.hadoop_bin, 'fs', '-cp', '-p', snapthot_path, path) mock_execute.assert_any_call(self.maprcli_bin, 'volume', 'create', '-name', self.share['name'], '-path', path, '-quota', size_str, '-readAce', '', '-writeAce', '', '-fake', 'fake') def test_create_share_from_snapshot_wrong_tenant(self): fake_snapshot = dict(self.snapshot) fake_snapshot.update(share_instance={'share_id': 10}) self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver._maprfs_util.set_volume_size = mock.Mock() self._driver._maprfs_util.get_cluster_name = mock.Mock( return_value=self.cluster_name) def fake_meta(context, share): return {'_tenantuser': 'fake'} if share['id'] == 10 else {} self._driver.api.get_share_metadata = fake_meta self.assertRaises(exception.MapRFSException, self._driver.create_share_from_snapshot, self._context, self.share, fake_snapshot) def test_create_share_from_snapshot_exception(self): fake_snapshot = dict(self.snapshot) fake_snapshot.update(share_instance={'share_id': 10}) self._driver._maprfs_util._execute = mock.Mock( return_value=('Found 0', 0)) self._driver._maprfs_util.maprfs_cp = mock.Mock( side_effect=exception.ProcessExecutionError) self._driver.api.get_share_metadata = mock.Mock( return_value={'_tenantuser': 'fake'}) self.assertRaises(exception.MapRFSException, self._driver.create_share_from_snapshot, self._context, self.share, self.snapshot) def test_delete_share(self): self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver.delete_share(self._context, self.share) 
self._driver._maprfs_util._execute.assert_called_once_with( self.maprcli_bin, 'volume', 'remove', '-name', self.share['name'], '-force', 'true', check_exit_code=False) def test_delete_share_skip(self): self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver.api.get_share_metadata = mock.Mock( return_value={'_name': 'error'}) self._driver.delete_share(self._context, self.share) self._driver._maprfs_util._execute.assert_not_called() def test_delete_share_exception(self): self._driver._maprfs_util._execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.MapRFSException, self._driver.delete_share, self._context, self.share) def test_delete_share_not_exist(self): self._driver._maprfs_util._execute = mock.Mock( return_value=('No such volume', 0)) self._driver.delete_share(self._context, self.share) def test_create_snapshot(self): volume = self._driver._volume_name(self.share['name']) self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver.create_snapshot(self._context, self.snapshot) self._driver._maprfs_util._execute.assert_called_once_with( self.maprcli_bin, 'volume', 'snapshot', 'create', '-snapshotname', self.snapshot['name'], '-volume', volume) def test_create_snapshot_exception(self): self._driver._maprfs_util._execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.MapRFSException, self._driver.create_snapshot, self._context, self.snapshot) def test_delete_snapshot(self): volume = self._driver._volume_name(self.share['name']) self._driver._maprfs_util._execute = mock.Mock(return_value=('', 0)) self._driver.delete_snapshot(self._context, self.snapshot) self._driver._maprfs_util._execute.assert_called_once_with( self.maprcli_bin, 'volume', 'snapshot', 'remove', '-snapshotname', self.snapshot['name'], '-volume', volume, check_exit_code=False) def test_delete_snapshot_exception(self): self._driver._maprfs_util._execute = mock.Mock( return_value=('ERROR (fake)', None)) self.assertRaises(exception.MapRFSException, self._driver.delete_snapshot, self._context, self.snapshot) def test__execute(self): first_host_skip = 'first' available_host = 'available' hosts = [first_host_skip, self.local_ip, available_host, 'extra'] test_config = mock.Mock() test_config.maprfs_clinode_ip = hosts test_config.maprfs_ssh_name = 'fake_maprfs_ssh_name' test_maprfs_util = mapru.get_version_handler(test_config) # mutable container done = [False] skips = [] def fake_ssh_run(host, cmd, check_exit_code): if host == available_host: done[0] = True return '', 0 else: skips.append(host) raise Exception() test_maprfs_util._run_ssh = fake_ssh_run test_maprfs_util._execute('fake', 'cmd') self.assertTrue(done[0]) self.assertEqual(available_host, test_maprfs_util.hosts[0]) self.assertEqual(first_host_skip, test_maprfs_util.hosts[2]) self.assertEqual([first_host_skip], skips) utils.execute.assert_called_once_with( 'sudo', 'su', '-', 'fake_maprfs_ssh_name', '-c', 'fake cmd', check_exit_code=True) def test__execute_exeption(self): utils.execute = mock.Mock(side_effect=Exception) self.assertRaises(exception.ProcessExecutionError, self._driver._maprfs_util._execute, "fake", "cmd") def test__execute_native_exeption(self): utils.execute = mock.Mock( side_effect=exception.ProcessExecutionError(stdout='fake')) self.assertRaises(exception.ProcessExecutionError, self._driver._maprfs_util._execute, "fake", "cmd") def test__execute_local(self): self.mock_object(utils, 'execute', mock.Mock(return_value=("fake", 0))) 
self._driver._maprfs_util._execute("fake", "cmd") utils.execute.assert_called_once_with('sudo', 'su', '-', 'fake_sshname', '-c', 'fake cmd', check_exit_code=True) def test_share_shrink_error(self): fake_info = { 'totalused': 1024, 'quota': 2024 } self._driver._maprfs_util._execute = mock.Mock() self._driver._maprfs_util.get_volume_info = mock.Mock( return_value=fake_info) self.assertRaises(exception.ShareShrinkingPossibleDataLoss, self._driver.shrink_share, self.share, 1) def test__get_volume_info(self): fake_out = """ {"data": [{"mounted":1,"quota":"1024","used":"0","totalused":"0"}]} """ self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_out, 0)) result = self._driver._maprfs_util.get_volume_info('fake_name') self.assertEqual('0', result['used']) def test__get_volume_info_by_path(self): fake_out = """ {"data": [{"mounted":1,"quota":"1024","used":"0","totalused":"0"}]} """ self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_out, 0)) result = self._driver._maprfs_util.get_volume_info_by_path('fake_path') self.assertEqual('0', result['used']) def test__get_volume_info_by_path_not_exist(self): fake_out = "No such volume" self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_out, 0)) result = self._driver._maprfs_util.get_volume_info_by_path( 'fake_path', check_if_exists=True) self.assertIsNone(result) def test_get_share_stats_refresh_false(self): self._driver._stats = {'fake_key': 'fake_value'} result = self._driver.get_share_stats(False) self.assertEqual(self._driver._stats, result) def test_get_share_stats_refresh_true(self): self._driver._maprfs_util.fs_capacity = mock.Mock( return_value=(1143554.0, 124111.0)) result = self._driver.get_share_stats(True) expected_keys = [ 'qos', 'driver_version', 'share_backend_name', 'free_capacity_gb', 'total_capacity_gb', 'driver_handles_share_servers', 'reserved_percentage', 'vendor_name', 'storage_protocol', ] for key in expected_keys: self.assertIn(key, result) self.assertEqual('MAPRFS', result['storage_protocol']) self._driver._maprfs_util.fs_capacity.assert_called_once_with() def test_get_share_stats_refresh_exception(self): self._driver._maprfs_util.fs_capacity = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.MapRFSException, self._driver.get_share_stats, True) def test__get_available_capacity(self): fake_out = """Filesystem Size Used Available Use% maprfs:/// 26367492096 1231028224 25136463872 5% """ self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_out, '')) total, free = self._driver._maprfs_util.fs_capacity() self._driver._maprfs_util._execute.assert_called_once_with( self.hadoop_bin, 'fs', '-df') self.assertEqual(26367492096, total) self.assertEqual(25136463872, free) def test__get_available_capacity_exception(self): fake_out = 'fake' self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_out, '')) self.assertRaises(exception.ProcessExecutionError, self._driver._maprfs_util.fs_capacity) def test__get_snapshot_list(self): fake_out = """{"data":[{"snapshotname":"fake-snapshot"}]}""" self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_out, None)) snapshot_list = self._driver._maprfs_util.get_snapshot_list( volume_name='fake', volume_path='fake') self.assertEqual(['fake-snapshot'], snapshot_list) def test__cluster_name(self): fake_info = """{ "data":[ { "version":"fake", "cluster":{ "name":"fake", "secure":false, "ip":"10.10.10.10", "id":"7133813101868836065", "nodesUsed":1, "totalNodesAllowed":-1 } } ] } """ 
self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_info, 0)) name = self._driver._maprfs_util.get_cluster_name() self.assertEqual('fake', name) def test__cluster_name_exception(self): fake_info = 'fake' self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_info, 0)) self.assertRaises(exception.ProcessExecutionError, self._driver._maprfs_util.get_cluster_name) def test__run_ssh(self): ssh_output = 'fake_ssh_output' cmd_list = ['fake', 'cmd'] ssh = mock.Mock() ssh.get_transport = mock.Mock() ssh.get_transport().is_active = mock.Mock(return_value=False) ssh_pool = mock.Mock() ssh_pool.create = mock.Mock(return_value=ssh) self.mock_object(utils, 'SSHPool', mock.Mock(return_value=ssh_pool)) self.mock_object(processutils, 'ssh_execute', mock.Mock(return_value=ssh_output)) result = self._driver._maprfs_util._run_ssh( self.local_ip, cmd_list, check_exit_code=False) utils.SSHPool.assert_called_once_with( self._driver.configuration.maprfs_clinode_ip[0], self._driver.configuration.maprfs_ssh_port, self._driver.configuration.ssh_conn_timeout, self._driver.configuration.maprfs_ssh_name, password=self._driver.configuration.maprfs_ssh_pw, privatekey=self._driver.configuration.maprfs_ssh_private_key, min_size=self._driver.configuration.ssh_min_pool_conn, max_size=self._driver.configuration.ssh_max_pool_conn) ssh_pool.create.assert_called() ssh.get_transport().is_active.assert_called_once_with() processutils.ssh_execute.assert_called_once_with( ssh, 'fake cmd', check_exit_code=False) self.assertEqual(ssh_output, result) def test__run_ssh_exception(self): cmd_list = ['fake', 'cmd'] ssh = mock.Mock() ssh.get_transport = mock.Mock() ssh.get_transport().is_active = mock.Mock(return_value=True) ssh_pool = mock.Mock() ssh_pool.create = mock.Mock(return_value=ssh) self.mock_object(utils, 'SSHPool', mock.Mock(return_value=ssh_pool)) self.mock_object(processutils, 'ssh_execute', mock.Mock( side_effect=exception.ProcessExecutionError)) self.assertRaises(exception.ProcessExecutionError, self._driver._maprfs_util._run_ssh, self.local_ip, cmd_list) utils.SSHPool.assert_called_once_with( self._driver.configuration.maprfs_clinode_ip[0], self._driver.configuration.maprfs_ssh_port, self._driver.configuration.ssh_conn_timeout, self._driver.configuration.maprfs_ssh_name, password=self._driver.configuration.maprfs_ssh_pw, privatekey=self._driver.configuration.maprfs_ssh_private_key, min_size=self._driver.configuration.ssh_min_pool_conn, max_size=self._driver.configuration.ssh_max_pool_conn) ssh_pool.create.assert_called_once_with() ssh.get_transport().is_active.assert_called_once_with() processutils.ssh_execute.assert_called_once_with( ssh, 'fake cmd', check_exit_code=False) def test__share_dir(self): self._driver._base_volume_dir = '/volumes' share_dir = '/volumes/' + self.share['name'] actual_dir = self._driver._share_dir(self.share['name']) self.assertEqual(share_dir, actual_dir) def test__get_volume_name(self): volume_name = self._driver._get_volume_name("fake", self.share) self.assertEqual('share-0', volume_name) def test__maprfs_du(self): self._driver._maprfs_util._execute = mock.Mock( return_value=('1024 /', 0)) size = self._driver._maprfs_util.maprfs_du('/') self._driver._maprfs_util._execute.assert_called() self.assertEqual(1024, size) def test__maprfs_ls(self): self._driver._maprfs_util._execute = mock.Mock( return_value=('fake', 0)) self._driver._maprfs_util.maprfs_ls('/') self._driver._maprfs_util._execute.assert_called_with(self.hadoop_bin, 'fs', '-ls', '/') def 
test_rename_volume(self): self._driver._maprfs_util._execute = mock.Mock( return_value=('fake', 0)) self._driver._maprfs_util.rename_volume('fake', 'newfake') self._driver._maprfs_util._execute.assert_called_with(self.maprcli_bin, 'volume', 'rename', '-name', 'fake', '-newname', 'newfake') def test__run_as_user(self): cmd = ['fake', 'cmd'] u_cmd = self._driver._maprfs_util._as_user(cmd, 'user') self.assertEqual(['sudo', 'su', '-', 'user', '-c', 'fake cmd'], u_cmd) def test__add_params(self): params = {'p1': 1, 'p2': 2, 'p3': '3'} cmd = ['fake', 'cmd'] cmd_with_params = self._driver._maprfs_util._add_params(cmd, **params) self.assertEqual(cmd[:2], cmd_with_params[:2]) def test_get_network_allocations_number(self): number = self._driver.get_admin_network_allocations_number() self.assertEqual(0, number) def test__user_exists(self): fake_out = 'user:x:1000:1000::/opt/user:/bin/bash' self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_out, 0)) result = self._driver._maprfs_util.user_exists('user') self.assertTrue(result) def test__group_exists(self): fake_out = 'user:x:1000:' self._driver._maprfs_util._execute = mock.Mock( return_value=(fake_out, 0)) result = self._driver._maprfs_util.group_exists('user') self.assertTrue(result) manila-10.0.0/manila/tests/share/drivers/test_ganesha.py0000664000175000017500000006437313656750227023307 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
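"""Unit tests for the Ganesha NAS helpers (GaneshaNASHelper and GaneshaNASHelper2)."""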
import copy import errno import os from unittest import mock import ddt from oslo_config import cfg from manila import context from manila import exception from manila.share import configuration as config from manila.share.drivers import ganesha from manila import test from manila.tests import fake_share CONF = cfg.CONF fake_basepath = '/fakepath' fake_export_name = 'fakename--fakeaccid' fake_output_template = { 'EXPORT': { 'Export_Id': 101, 'Path': '/fakepath/fakename', 'Pseudo': '/fakepath/fakename--fakeaccid', 'Tag': 'fakeaccid', 'CLIENT': { 'Clients': '10.0.0.1' }, 'FSAL': 'fakefsal' } } @ddt.ddt class GaneshaNASHelperTestCase(test.TestCase): """Tests GaneshaNASHElper.""" def setUp(self): super(GaneshaNASHelperTestCase, self).setUp() CONF.set_default('ganesha_config_path', '/fakedir0/fakeconfig') CONF.set_default('ganesha_db_path', '/fakedir1/fake.db') CONF.set_default('ganesha_export_dir', '/fakedir0/export.d') CONF.set_default('ganesha_export_template_dir', '/fakedir2/faketempl.d') CONF.set_default('ganesha_service_name', 'ganesha.fakeservice') self._context = context.get_admin_context() self._execute = mock.Mock(return_value=('', '')) self.fake_conf = config.Configuration(None) self.fake_conf_dir_path = '/fakedir0/exports.d' self._helper = ganesha.GaneshaNASHelper( self._execute, self.fake_conf, tag='faketag') self._helper.ganesha = mock.Mock() self._helper.export_template = {'key': 'value'} self.share = fake_share.fake_share() self.access = fake_share.fake_access() def test_load_conf_dir(self): fake_template1 = {'key': 'value1'} fake_template2 = {'key': 'value2'} fake_ls_dir = ['fakefile0.conf', 'fakefile1.json', 'fakefile2.txt'] mock_ganesha_utils_patch = mock.Mock() def fake_patch_run(tmpl1, tmpl2): mock_ganesha_utils_patch( copy.deepcopy(tmpl1), copy.deepcopy(tmpl2)) tmpl1.update(tmpl2) self.mock_object(ganesha.os, 'listdir', mock.Mock(return_value=fake_ls_dir)) self.mock_object(ganesha.LOG, 'info') self.mock_object(ganesha.ganesha_manager, 'parseconf', mock.Mock(side_effect=[fake_template1, fake_template2])) self.mock_object(ganesha.ganesha_utils, 'patch', mock.Mock(side_effect=fake_patch_run)) with mock.patch('six.moves.builtins.open', mock.mock_open()) as mockopen: mockopen().read.side_effect = ['fakeconf0', 'fakeconf1'] ret = self._helper._load_conf_dir(self.fake_conf_dir_path) ganesha.os.listdir.assert_called_once_with( self.fake_conf_dir_path) ganesha.LOG.info.assert_called_once_with( mock.ANY, self.fake_conf_dir_path) mockopen.assert_has_calls([ mock.call('/fakedir0/exports.d/fakefile0.conf'), mock.call('/fakedir0/exports.d/fakefile1.json')], any_order=True) ganesha.ganesha_manager.parseconf.assert_has_calls([ mock.call('fakeconf0'), mock.call('fakeconf1')]) mock_ganesha_utils_patch.assert_has_calls([ mock.call({}, fake_template1), mock.call(fake_template1, fake_template2)]) self.assertEqual(fake_template2, ret) def test_load_conf_dir_no_conf_dir_must_exist_false(self): self.mock_object( ganesha.os, 'listdir', mock.Mock(side_effect=OSError(errno.ENOENT, os.strerror(errno.ENOENT)))) self.mock_object(ganesha.LOG, 'info') self.mock_object(ganesha.ganesha_manager, 'parseconf') self.mock_object(ganesha.ganesha_utils, 'patch') with mock.patch('six.moves.builtins.open', mock.mock_open(read_data='fakeconf')) as mockopen: ret = self._helper._load_conf_dir(self.fake_conf_dir_path, must_exist=False) ganesha.os.listdir.assert_called_once_with( self.fake_conf_dir_path) ganesha.LOG.info.assert_called_once_with( mock.ANY, self.fake_conf_dir_path) self.assertFalse(mockopen.called) 
self.assertFalse(ganesha.ganesha_manager.parseconf.called) self.assertFalse(ganesha.ganesha_utils.patch.called) self.assertEqual({}, ret) def test_load_conf_dir_error_no_conf_dir_must_exist_true(self): self.mock_object( ganesha.os, 'listdir', mock.Mock(side_effect=OSError(errno.ENOENT, os.strerror(errno.ENOENT)))) self.assertRaises(OSError, self._helper._load_conf_dir, self.fake_conf_dir_path) ganesha.os.listdir.assert_called_once_with(self.fake_conf_dir_path) def test_load_conf_dir_error_conf_dir_present_must_exist_false(self): self.mock_object( ganesha.os, 'listdir', mock.Mock(side_effect=OSError(errno.EACCES, os.strerror(errno.EACCES)))) self.assertRaises(OSError, self._helper._load_conf_dir, self.fake_conf_dir_path, must_exist=False) ganesha.os.listdir.assert_called_once_with(self.fake_conf_dir_path) def test_load_conf_dir_error(self): self.mock_object( ganesha.os, 'listdir', mock.Mock(side_effect=RuntimeError('fake error'))) self.assertRaises(RuntimeError, self._helper._load_conf_dir, self.fake_conf_dir_path) ganesha.os.listdir.assert_called_once_with(self.fake_conf_dir_path) def test_init_helper(self): mock_template = mock.Mock() mock_ganesha_manager = mock.Mock() self.mock_object(ganesha.ganesha_manager, 'GaneshaManager', mock.Mock(return_value=mock_ganesha_manager)) self.mock_object(self._helper, '_load_conf_dir', mock.Mock(return_value=mock_template)) self.mock_object(self._helper, '_default_config_hook') ret = self._helper.init_helper() ganesha.ganesha_manager.GaneshaManager.assert_called_once_with( self._execute, 'faketag', ganesha_config_path='/fakedir0/fakeconfig', ganesha_export_dir='/fakedir0/export.d', ganesha_db_path='/fakedir1/fake.db', ganesha_service_name='ganesha.fakeservice') self._helper._load_conf_dir.assert_called_once_with( '/fakedir2/faketempl.d', must_exist=False) self.assertFalse(self._helper._default_config_hook.called) self.assertEqual(mock_ganesha_manager, self._helper.ganesha) self.assertEqual(mock_template, self._helper.export_template) self.assertIsNone(ret) def test_init_helper_conf_dir_empty(self): mock_template = mock.Mock() mock_ganesha_manager = mock.Mock() self.mock_object(ganesha.ganesha_manager, 'GaneshaManager', mock.Mock(return_value=mock_ganesha_manager)) self.mock_object(self._helper, '_load_conf_dir', mock.Mock(return_value={})) self.mock_object(self._helper, '_default_config_hook', mock.Mock(return_value=mock_template)) ret = self._helper.init_helper() ganesha.ganesha_manager.GaneshaManager.assert_called_once_with( self._execute, 'faketag', ganesha_config_path='/fakedir0/fakeconfig', ganesha_export_dir='/fakedir0/export.d', ganesha_db_path='/fakedir1/fake.db', ganesha_service_name='ganesha.fakeservice') self._helper._load_conf_dir.assert_called_once_with( '/fakedir2/faketempl.d', must_exist=False) self._helper._default_config_hook.assert_called_once_with() self.assertEqual(mock_ganesha_manager, self._helper.ganesha) self.assertEqual(mock_template, self._helper.export_template) self.assertIsNone(ret) def test_default_config_hook(self): fake_template = {'key': 'value'} self.mock_object(ganesha.ganesha_utils, 'path_from', mock.Mock(return_value='/fakedir3/fakeconfdir')) self.mock_object(self._helper, '_load_conf_dir', mock.Mock(return_value=fake_template)) ret = self._helper._default_config_hook() ganesha.ganesha_utils.path_from.assert_called_once_with( ganesha.__file__, 'conf') self._helper._load_conf_dir.assert_called_once_with( '/fakedir3/fakeconfdir') self.assertEqual(fake_template, ret) def test_fsal_hook(self): ret = 
self._helper._fsal_hook('/fakepath', self.share, self.access) self.assertEqual({}, ret) def test_cleanup_fsal_hook(self): ret = self._helper._cleanup_fsal_hook('/fakepath', self.share, self.access) self.assertIsNone(ret) def test_allow_access(self): mock_ganesha_utils_patch = mock.Mock() def fake_patch_run(tmpl1, tmpl2, tmpl3): mock_ganesha_utils_patch(copy.deepcopy(tmpl1), tmpl2, tmpl3) tmpl1.update(tmpl3) self.mock_object(self._helper.ganesha, 'get_export_id', mock.Mock(return_value=101)) self.mock_object(self._helper, '_fsal_hook', mock.Mock(return_value='fakefsal')) self.mock_object(ganesha.ganesha_utils, 'patch', mock.Mock(side_effect=fake_patch_run)) self.mock_object(ganesha.ganesha_utils, 'validate_access_rule', mock.Mock(return_value=True)) ret = self._helper._allow_access(fake_basepath, self.share, self.access) self._helper.ganesha.get_export_id.assert_called_once_with() self._helper._fsal_hook.assert_called_once_with( fake_basepath, self.share, self.access) mock_ganesha_utils_patch.assert_called_once_with( {}, self._helper.export_template, fake_output_template) self._helper._fsal_hook.assert_called_once_with( fake_basepath, self.share, self.access) self._helper.ganesha.add_export.assert_called_once_with( fake_export_name, fake_output_template) self.assertIsNone(ret) def test_allow_access_error_invalid_share(self): access = fake_share.fake_access(access_type='notip') self.assertRaises(exception.InvalidShareAccess, self._helper._allow_access, '/fakepath', self.share, access) def test_deny_access(self): ret = self._helper._deny_access('/fakepath', self.share, self.access) self._helper.ganesha.remove_export.assert_called_once_with( 'fakename--fakeaccid') self.assertIsNone(ret) def test_update_access_for_allow(self): self.mock_object(self._helper, '_allow_access') self.mock_object(self._helper, '_deny_access') self._helper.update_access( self._context, self.share, access_rules=[self.access], add_rules=[self.access], delete_rules=[]) self._helper._allow_access.assert_called_once_with( '/', self.share, self.access) self.assertFalse(self._helper._deny_access.called) self.assertFalse(self._helper.ganesha.reset_exports.called) self.assertFalse(self._helper.ganesha.restart_service.called) def test_update_access_for_deny(self): self.mock_object(self._helper, '_allow_access') self.mock_object(self._helper, '_deny_access') self._helper.update_access( self._context, self.share, access_rules=[], add_rules=[], delete_rules=[self.access]) self._helper._deny_access.assert_called_once_with( '/', self.share, self.access) self.assertFalse(self._helper._allow_access.called) self.assertFalse(self._helper.ganesha.reset_exports.called) self.assertFalse(self._helper.ganesha.restart_service.called) def test_update_access_recovery(self): self.mock_object(self._helper, '_allow_access') self.mock_object(self._helper, '_deny_access') self._helper.update_access( self._context, self.share, access_rules=[self.access], add_rules=[], delete_rules=[]) self._helper._allow_access.assert_called_once_with( '/', self.share, self.access) self.assertFalse(self._helper._deny_access.called) self.assertTrue(self._helper.ganesha.reset_exports.called) self.assertTrue(self._helper.ganesha.restart_service.called) def test_update_access_invalid_share_access_type(self): bad_rule = fake_share.fake_access(access_type='notip', id='fakeid') expected = {'fakeid': {'state': 'error'}} result = self._helper.update_access(self._context, self.share, access_rules=[bad_rule], add_rules=[], delete_rules=[]) self.assertEqual(expected, result) def 
test_update_access_invalid_share_access_level(self): bad_rule = fake_share.fake_access(access_level='RO', id='fakeid') expected = {'fakeid': {'state': 'error'}} result = self._helper.update_access(self._context, self.share, access_rules=[bad_rule], add_rules=[], delete_rules=[]) self.assertEqual(expected, result) @ddt.ddt class GaneshaNASHelper2TestCase(test.TestCase): """Tests GaneshaNASHelper2.""" def setUp(self): super(GaneshaNASHelper2TestCase, self).setUp() CONF.set_default('ganesha_config_path', '/fakedir0/fakeconfig') CONF.set_default('ganesha_db_path', '/fakedir1/fake.db') CONF.set_default('ganesha_export_dir', '/fakedir0/export.d') CONF.set_default('ganesha_export_template_dir', '/fakedir2/faketempl.d') CONF.set_default('ganesha_service_name', 'ganesha.fakeservice') CONF.set_default('ganesha_rados_store_enable', True) CONF.set_default('ganesha_rados_store_pool_name', 'ceph_pool') CONF.set_default('ganesha_rados_export_index', 'fake_index') CONF.set_default('ganesha_rados_export_counter', 'fake_counter') self._context = context.get_admin_context() self._execute = mock.Mock(return_value=('', '')) self.ceph_vol_client = mock.Mock() self.fake_conf = config.Configuration(None) self.fake_conf_dir_path = '/fakedir0/exports.d' self._helper = ganesha.GaneshaNASHelper2( self._execute, self.fake_conf, tag='faketag', ceph_vol_client=self.ceph_vol_client) self._helper.ganesha = mock.Mock() self._helper.export_template = {} self.share = fake_share.fake_share() self.rule1 = fake_share.fake_access(access_level='ro') self.rule2 = fake_share.fake_access(access_level='rw', access_to='10.0.0.2') @ddt.data(False, True) def test_init_helper_with_rados_store(self, rados_store_enable): CONF.set_default('ganesha_rados_store_enable', rados_store_enable) mock_template = mock.Mock() mock_ganesha_manager = mock.Mock() self.mock_object(ganesha.ganesha_manager, 'GaneshaManager', mock.Mock(return_value=mock_ganesha_manager)) self.mock_object(self._helper, '_load_conf_dir', mock.Mock(return_value={})) self.mock_object(self._helper, '_default_config_hook', mock.Mock(return_value=mock_template)) ret = self._helper.init_helper() if rados_store_enable: kwargs = { 'ganesha_config_path': '/fakedir0/fakeconfig', 'ganesha_export_dir': '/fakedir0/export.d', 'ganesha_service_name': 'ganesha.fakeservice', 'ganesha_rados_store_enable': True, 'ganesha_rados_store_pool_name': 'ceph_pool', 'ganesha_rados_export_index': 'fake_index', 'ganesha_rados_export_counter': 'fake_counter', 'ceph_vol_client': self.ceph_vol_client } else: kwargs = { 'ganesha_config_path': '/fakedir0/fakeconfig', 'ganesha_export_dir': '/fakedir0/export.d', 'ganesha_service_name': 'ganesha.fakeservice', 'ganesha_db_path': '/fakedir1/fake.db' } ganesha.ganesha_manager.GaneshaManager.assert_called_once_with( self._execute, '', **kwargs) self._helper._load_conf_dir.assert_called_once_with( '/fakedir2/faketempl.d', must_exist=False) self.assertEqual(mock_ganesha_manager, self._helper.ganesha) self._helper._default_config_hook.assert_called_once_with() self.assertEqual(mock_template, self._helper.export_template) self.assertIsNone(ret) @ddt.data(False, True) def test_init_helper_conf_dir_empty(self, conf_dir_empty): mock_template = mock.Mock() mock_ganesha_manager = mock.Mock() self.mock_object(ganesha.ganesha_manager, 'GaneshaManager', mock.Mock(return_value=mock_ganesha_manager)) if conf_dir_empty: self.mock_object(self._helper, '_load_conf_dir', mock.Mock(return_value={})) else: self.mock_object(self._helper, '_load_conf_dir', 
mock.Mock(return_value=mock_template)) self.mock_object(self._helper, '_default_config_hook', mock.Mock(return_value=mock_template)) ret = self._helper.init_helper() ganesha.ganesha_manager.GaneshaManager.assert_called_once_with( self._execute, '', ganesha_config_path='/fakedir0/fakeconfig', ganesha_export_dir='/fakedir0/export.d', ganesha_service_name='ganesha.fakeservice', ganesha_rados_store_enable=True, ganesha_rados_store_pool_name='ceph_pool', ganesha_rados_export_index='fake_index', ganesha_rados_export_counter='fake_counter', ceph_vol_client=self.ceph_vol_client) self._helper._load_conf_dir.assert_called_once_with( '/fakedir2/faketempl.d', must_exist=False) self.assertEqual(mock_ganesha_manager, self._helper.ganesha) if conf_dir_empty: self._helper._default_config_hook.assert_called_once_with() else: self.assertFalse(self._helper._default_config_hook.called) self.assertEqual(mock_template, self._helper.export_template) self.assertIsNone(ret) def test_init_helper_with_rados_store_pool_name_not_set(self): self.mock_object(ganesha.ganesha_manager, 'GaneshaManager') self.mock_object(self._helper, '_load_conf_dir') self.mock_object(self._helper, '_default_config_hook') self._helper.configuration.ganesha_rados_store_pool_name = None self.assertRaises( exception.GaneshaException, self._helper.init_helper) self.assertFalse(ganesha.ganesha_manager.GaneshaManager.called) self.assertFalse(self._helper._load_conf_dir.called) self.assertFalse(self._helper._default_config_hook.called) def test_update_access_add_export(self): mock_gh = self._helper.ganesha self.mock_object(mock_gh, 'check_export_exists', mock.Mock(return_value=False)) self.mock_object(mock_gh, 'get_export_id', mock.Mock(return_value=100)) self.mock_object(self._helper, '_get_export_path', mock.Mock(return_value='/fakepath')) self.mock_object(self._helper, '_get_export_pseudo_path', mock.Mock(return_value='/fakepath')) self.mock_object(self._helper, '_fsal_hook', mock.Mock(return_value={'Name': 'fake'})) self.mock_object(ganesha.ganesha_utils, 'validate_access_rule', mock.Mock(return_value=True)) result_confdict = { 'EXPORT': { 'Export_Id': 100, 'Path': '/fakepath', 'Pseudo': '/fakepath', 'Tag': 'fakename', 'CLIENT': [{ 'Access_Type': 'ro', 'Clients': '10.0.0.1'}], 'FSAL': {'Name': 'fake'} } } self._helper.update_access( self._context, self.share, access_rules=[self.rule1], add_rules=[], delete_rules=[]) mock_gh.check_export_exists.assert_called_once_with('fakename') mock_gh.get_export_id.assert_called_once_with() self._helper._get_export_path.assert_called_once_with(self.share) (self._helper._get_export_pseudo_path.assert_called_once_with( self.share)) self._helper._fsal_hook.assert_called_once_with( None, self.share, None) mock_gh.add_export.assert_called_once_with( 'fakename', result_confdict) self.assertFalse(mock_gh.update_export.called) self.assertFalse(mock_gh.remove_export.called) @ddt.data({'Access_Type': 'ro', 'Clients': '10.0.0.1'}, [{'Access_Type': 'ro', 'Clients': '10.0.0.1'}]) def test_update_access_update_export(self, client): mock_gh = self._helper.ganesha self.mock_object(mock_gh, 'check_export_exists', mock.Mock(return_value=True)) self.mock_object( mock_gh, '_read_export', mock.Mock(return_value={'EXPORT': {'CLIENT': client}}) ) self.mock_object(ganesha.ganesha_utils, 'validate_access_rule', mock.Mock(return_value=True)) result_confdict = { 'EXPORT': { 'CLIENT': [ {'Access_Type': 'ro', 'Clients': '10.0.0.1'}, {'Access_Type': 'rw', 'Clients': '10.0.0.2'}] } } self._helper.update_access( self._context, self.share, 
access_rules=[self.rule1, self.rule2], add_rules=[self.rule2], delete_rules=[]) mock_gh.check_export_exists.assert_called_once_with('fakename') mock_gh.update_export.assert_called_once_with('fakename', result_confdict) self.assertFalse(mock_gh.add_export.called) self.assertFalse(mock_gh.remove_export.called) def test_update_access_remove_export(self): mock_gh = self._helper.ganesha self.mock_object(mock_gh, 'check_export_exists', mock.Mock(return_value=True)) self.mock_object(self._helper, '_cleanup_fsal_hook') client = {'Access_Type': 'ro', 'Clients': '10.0.0.1'} self.mock_object( mock_gh, '_read_export', mock.Mock(return_value={'EXPORT': {'CLIENT': client}}) ) self._helper.update_access( self._context, self.share, access_rules=[], add_rules=[], delete_rules=[self.rule1]) mock_gh.check_export_exists.assert_called_once_with('fakename') mock_gh.remove_export.assert_called_once_with('fakename') self._helper._cleanup_fsal_hook.assert_called_once_with( None, self.share, None) self.assertFalse(mock_gh.add_export.called) self.assertFalse(mock_gh.update_export.called) def test_update_access_export_file_already_removed(self): mock_gh = self._helper.ganesha self.mock_object(mock_gh, 'check_export_exists', mock.Mock(return_value=False)) self.mock_object(ganesha.LOG, 'warning') self.mock_object(self._helper, '_cleanup_fsal_hook') self._helper.update_access( self._context, self.share, access_rules=[], add_rules=[], delete_rules=[self.rule1]) mock_gh.check_export_exists.assert_called_once_with('fakename') ganesha.LOG.warning.assert_called_once_with(mock.ANY, mock.ANY) self.assertFalse(mock_gh.add_export.called) self.assertFalse(mock_gh.update_export.called) self.assertFalse(mock_gh.remove_export.called) self.assertFalse(self._helper._cleanup_fsal_hook.called) def test_update_access_invalid_share_access_type(self): mock_gh = self._helper.ganesha self.mock_object(mock_gh, 'check_export_exists', mock.Mock(return_value=False)) bad_rule = fake_share.fake_access(access_type='notip', id='fakeid') expected = {'fakeid': {'state': 'error'}} result = self._helper.update_access(self._context, self.share, access_rules=[bad_rule], add_rules=[], delete_rules=[]) self.assertEqual(expected, result) def test_update_access_invalid_share_access_level(self): bad_rule = fake_share.fake_access(access_level='NOT_RO_OR_RW', id='fakeid') expected = {'fakeid': {'state': 'error'}} mock_gh = self._helper.ganesha self.mock_object(mock_gh, 'check_export_exists', mock.Mock(return_value=False)) result = self._helper.update_access(self._context, self.share, access_rules=[bad_rule], add_rules=[], delete_rules=[]) self.assertEqual(expected, result) manila-10.0.0/manila/tests/share/drivers/infortrend/0000775000175000017500000000000013656750362022425 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/infortrend/__init__.py0000664000175000017500000000000013656750227024524 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/infortrend/fake_infortrend_nas_data.py0000664000175000017500000003263213656750227027777 0ustar zuulzuul00000000000000# Copyright (c) 2019 Infortrend Technology, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. class InfortrendNASTestData(object): fake_share_id = ['5a0aa06e-1c57-4996-be46-b81e360e8866', # NFS 'aac4fe64-7a9c-472a-b156-9adbb50b4d29'] # CIFS fake_share_name = [fake_share_id[0].replace('-', ''), fake_share_id[1].replace('-', '')] fake_channel_ip = ['172.27.112.223', '172.27.113.209'] fake_service_status_data = ('(64175, 1234, 272, 0)\n\n' '{"cliCode": ' '[{"Return": "0x0000", "CLI": "Successful"}], ' '"returnCode": [], ' '"data": ' '[{"A": ' '{"NFS": ' '{"displayName": "NFS", ' '"state_time": "2017-05-04 14:19:53", ' '"enabled": true, ' '"cpu_rate": "0.0", ' '"mem_rate": "0.0", ' '"state": "exited", ' '"type": "share"}}}]}\n\n') fake_folder_status_data = ('(64175, 1234, 1017, 0)\n\n' '{"cliCode": ' '[{"Return": "0x0000", "CLI": "Successful"}], ' '"returnCode": [], ' '"data": ' '[{"utility": "1.00", ' '"used": "33886208", ' '"subshare": true, ' '"share": false, ' '"worm": "", ' '"free": "321931374592", ' '"fsType": "xfs", ' '"owner": "A", ' '"readOnly": false, ' '"modifyTime": "2017-04-27 16:16", ' '"directory": "/share-pool-01/LV-1", ' '"volumeId": "6541BAFB2E6C57B6", ' '"mounted": true, ' '"size": "321965260800"}, ' '{"utility": "1.00", ' '"used": "33779712", ' '"subshare": false, ' '"share": false, ' '"worm": "", ' '"free": "107287973888", ' '"fsType": "xfs", ' '"owner": "A", ' '"readOnly": false, ' '"modifyTime": "2017-04-27 15:45", ' '"directory": "/share-pool-02/LV-1", ' '"volumeId": "147A8FB67DA39914", ' '"mounted": true, ' '"size": "107321753600"}]}\n\n') fake_nfs_status_off = [{ 'A': { 'NFS': { 'displayName': 'NFS', 'state_time': '2017-05-04 14:19:53', 'enabled': False, 'cpu_rate': '0.0', 'mem_rate': '0.0', 'state': 'exited', 'type': 'share', } } }] fake_folder_status = [{ 'utility': '1.00', 'used': '33886208', 'subshare': True, 'share': False, 'worm': '', 'free': '321931374592', 'fsType': 'xfs', 'owner': 'A', 'readOnly': False, 'modifyTime': '2017-04-27 16:16', 'directory': '/share-pool-01/LV-1', 'volumeId': '6541BAFB2E6C57B6', 'mounted': True, 'size': '321965260800'}, { 'utility': '1.00', 'used': '33779712', 'subshare': False, 'share': False, 'worm': '', 'free': '107287973888', 'fsType': 'xfs', 'owner': 'A', 'readOnly': False, 'modifyTime': '2017-04-27 15:45', 'directory': '/share-pool-02/LV-1', 'volumeId': '147A8FB67DA39914', 'mounted': True, 'size': '107321753600', }] def fake_get_channel_status(self, ch1_status='UP'): return [{ 'datalink': 'mgmt0', 'status': 'UP', 'typeConfig': 'DHCP', 'IP': '172.27.112.125', 'MAC': '00:d0:23:00:15:a6', 'netmask': '255.255.240.0', 'type': 'dhcp', 'gateway': '172.27.127.254'}, { 'datalink': 'CH0', 'status': 'UP', 'typeConfig': 'DHCP', 'IP': self.fake_channel_ip[0], 'MAC': '00:d0:23:80:15:a6', 'netmask': '255.255.240.0', 'type': 'dhcp', 'gateway': '172.27.127.254'}, { 'datalink': 'CH1', 'status': ch1_status, 'typeConfig': 'DHCP', 'IP': self.fake_channel_ip[1], 'MAC': '00:d0:23:40:15:a6', 'netmask': '255.255.240.0', 'type': 'dhcp', 'gateway': '172.27.127.254'}, { 'datalink': 'CH2', 'status': 'DOWN', 'typeConfig': 'DHCP', 'IP': '', 'MAC': '00:d0:23:c0:15:a6', 'netmask': '', 'type': '', 'gateway': ''}, { 'datalink': 'CH3', 'status': 'DOWN', 
'typeConfig': 'DHCP', 'IP': '', 'MAC': '00:d0:23:20:15:a6', 'netmask': '', 'type': '', 'gateway': '', }] fake_fquota_status = [{ 'quota': '21474836480', 'used': '0', 'name': 'test-folder', 'type': 'subfolder', 'id': '537178178'}, { 'quota': '32212254720', 'used': '0', 'name': fake_share_name[0], 'type': 'subfolder', 'id': '805306752'}, { 'quota': '53687091200', 'used': '21474836480', 'name': fake_share_name[1], 'type': 'subfolder', 'id': '69'}, { 'quota': '94091997184', 'used': '0', 'type': 'subfolder', 'id': '70', "name": 'test-folder-02' }] fake_fquota_status_with_no_settings = [] def fake_get_share_status_nfs(self, status=False): fake_share_status_nfs = [{ 'ftp': False, 'cifs': False, 'oss': False, 'sftp': False, 'nfs': status, 'directory': '/LV-1/share-pool-01/' + self.fake_share_name[0], 'exist': True, 'afp': False, 'webdav': False }] if status: fake_share_status_nfs[0]['nfs_detail'] = { 'hostList': [{ 'uid': '65534', 'insecure': 'insecure', 'squash': 'all', 'access': 'ro', 'host': '*', 'gid': '65534', 'mode': 'async', 'no_subtree_check': 'no_subtree_check', }] } return fake_share_status_nfs def fake_get_share_status_cifs(self, status=False): fake_share_status_cifs = [{ 'ftp': False, 'cifs': status, 'oss': False, 'sftp': False, 'nfs': False, 'directory': '/share-pool-01/LV-1/' + self.fake_share_name[1], 'exist': True, 'afp': False, 'webdav': False }] if status: fake_share_status_cifs[0]['cifs_detail'] = { 'available': True, 'encrypt': False, 'description': '', 'sharename': 'cifs-01', 'failover': '', 'AIO': True, 'priv': 'None', 'recycle_bin': False, 'ABE': True, } return fake_share_status_cifs fake_subfolder_data = [{ 'size': '6', 'index': '34', 'description': '', 'encryption': '', 'isEnd': False, 'share': False, 'volumeId': '6541BAFB2E6C57B6', 'quota': '', 'modifyTime': '2017-04-06 11:35', 'owner': 'A', 'path': '/share-pool-01/LV-1/UserHome', 'subshare': True, 'type': 'subfolder', 'empty': False, 'name': 'UserHome'}, { 'size': '6', 'index': '39', 'description': '', 'encryption': '', 'isEnd': False, 'share': False, 'volumeId': '6541BAFB2E6C57B6', 'quota': '21474836480', 'modifyTime': '2017-04-27 15:44', 'owner': 'A', 'path': '/share-pool-01/LV-1/test-folder', 'subshare': False, 'type': 'subfolder', 'empty': True, 'name': 'test-folder'}, { 'size': '6', 'index': '45', 'description': '', 'encryption': '', 'isEnd': False, 'share': True, 'volumeId': '6541BAFB2E6C57B6', 'quota': '32212254720', 'modifyTime': '2017-04-27 16:15', 'owner': 'A', 'path': '/share-pool-01/LV-1/' + fake_share_name[0], 'subshare': False, 'type': 'subfolder', 'empty': True, 'name': fake_share_name[0]}, { 'size': '6', 'index': '512', 'description': '', 'encryption': '', 'isEnd': True, 'share': True, 'volumeId': '6541BAFB2E6C57B6', 'quota': '53687091200', 'modifyTime': '2017-04-27 16:16', 'owner': 'A', 'path': '/share-pool-01/LV-1/' + fake_share_name[1], 'subshare': False, 'type': 'subfolder', 'empty': True, 'name': fake_share_name[1]}, { 'size': '6', 'index': '777', 'description': '', 'encryption': '', 'isEnd': False, 'share': False, 'volumeId': '6541BAFB2E6C57B6', 'quota': '94091997184', 'modifyTime': '2017-04-28 15:44', 'owner': 'A', 'path': '/share-pool-01/LV-1/test-folder-02', 'subshare': False, 'type': 'subfolder', 'empty': True, 'name': 'test-folder-02' }] fake_cifs_user_list = [{ 'Superuser': 'No', 'Group': 'users', 'Description': '', 'Quota': 'none', 'PWD Expiry Date': '2291-01-19', 'Home Directory': '/share-pool-01/LV-1/UserHome/user01', 'UID': '100001', 'Type': 'Local', 'Name': 'user01'}, { 'Superuser': 'No', 
'Group': 'users', 'Description': '', 'Quota': 'none', 'PWD Expiry Date': '2017-08-07', 'Home Directory': '/share-pool-01/LV-1/UserHome/user02', 'UID': '100002', 'Type': 'Local', 'Name': 'user02' }] fake_share_status_nfs_with_rules = [{ 'ftp': False, 'cifs': False, 'oss': False, 'sftp': False, 'nfs': True, 'directory': '/share-pool-01/LV-1/' + fake_share_name[0], 'exist': True, 'nfs_detail': { 'hostList': [{ 'uid': '65534', 'insecure': 'insecure', 'squash': 'all', 'access': 'ro', 'host': '*', 'gid': '65534', 'mode': 'async', 'no_subtree_check': 'no_subtree_check'}, { 'uid': '65534', 'insecure': 'insecure', 'squash': 'all', 'access': 'rw', 'host': '172.27.1.1', 'gid': '65534', 'mode': 'async', 'no_subtree_check': 'no_subtree_check'}, { 'uid': '65534', 'insecure': 'insecure', 'squash': 'all', 'access': 'rw', 'host': '172.27.1.2', 'gid': '65534', 'mode': 'async', 'no_subtree_check': 'no_subtree_check'}] }, 'afp': False, 'webdav': False, }] fake_share_status_cifs_with_rules = [ { 'permission': { 'Read': True, 'Write': True, 'Execute': True}, 'type': 'user', 'id': '100001', 'name': 'user01' }, { 'permission': { 'Read': True, 'Write': False, 'Execute': True}, 'type': 'user', 'id': '100002', 'name': 'user02' }, { 'permission': { 'Read': True, 'Write': False, 'Execute': True}, 'type': 'group@', 'id': '100', 'name': 'users' }, { 'permission': { 'Read': True, 'Write': False, 'Execute': True}, 'type': 'other@', 'id': '', 'name': '' } ] manila-10.0.0/manila/tests/share/drivers/infortrend/test_infortrend_nas.py0000664000175000017500000005322113656750227027054 0ustar zuulzuul00000000000000# Copyright (c) 2019 Infortrend Technology, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
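"""Unit tests for the Infortrend NAS share driver."""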
from unittest import mock import ddt from oslo_config import cfg from manila import context from manila import exception from manila.share import configuration from manila.share.drivers.infortrend import driver from manila.share.drivers.infortrend import infortrend_nas from manila import test from manila.tests.share.drivers.infortrend import fake_infortrend_manila_data from manila.tests.share.drivers.infortrend import fake_infortrend_nas_data CONF = cfg.CONF SUCCEED = (0, []) @ddt.ddt class InfortrendNASDriverTestCase(test.TestCase): def __init__(self, *args, **kwargs): super(InfortrendNASDriverTestCase, self).__init__(*args, **kwargs) self._ctxt = context.get_admin_context() self.nas_data = fake_infortrend_nas_data.InfortrendNASTestData() self.m_data = fake_infortrend_manila_data.InfortrendManilaTestData() def setUp(self): CONF.set_default('driver_handles_share_servers', False) CONF.set_default('infortrend_nas_ip', '172.27.1.1') CONF.set_default('infortrend_nas_user', 'fake_user') CONF.set_default('infortrend_nas_password', 'fake_password') CONF.set_default('infortrend_nas_ssh_key', 'fake_sshkey') CONF.set_default('infortrend_share_pools', 'share-pool-01') CONF.set_default('infortrend_share_channels', '0,1') self.fake_conf = configuration.Configuration(None) super(InfortrendNASDriverTestCase, self).setUp() def _get_driver(self, fake_conf, init_dict=False): self._driver = driver.InfortrendNASDriver( configuration=fake_conf) self._iftnas = self._driver.ift_nas self.pool_id = ['6541BAFB2E6C57B6'] self.pool_path = ['/share-pool-01/LV-1/'] if init_dict: self._iftnas.pool_dict = { 'share-pool-01': { 'id': self.pool_id[0], 'path': self.pool_path[0], } } self._iftnas.channel_dict = { '0': self.nas_data.fake_channel_ip[0], '1': self.nas_data.fake_channel_ip[1], } def test_no_login_ssh_key_and_pass(self): self.fake_conf.set_default('infortrend_nas_password', None) self.fake_conf.set_default('infortrend_nas_ssh_key', None) self.assertRaises( exception.InvalidParameterValue, self._get_driver, self.fake_conf) def test_parser_with_service_status(self): self._get_driver(self.fake_conf) expect_service_status = [{ 'A': { 'NFS': { 'displayName': 'NFS', 'state_time': '2017-05-04 14:19:53', 'enabled': True, 'cpu_rate': '0.0', 'mem_rate': '0.0', 'state': 'exited', 'type': 'share', } } }] rc, service_status = self._iftnas._parser( self.nas_data.fake_service_status_data) self.assertEqual(0, rc) self.assertDictListMatch(expect_service_status, service_status) def test_parser_with_folder_status(self): self._get_driver(self.fake_conf) expect_folder_status = [{ 'utility': '1.00', 'used': '33886208', 'subshare': True, 'share': False, 'worm': '', 'free': '321931374592', 'fsType': 'xfs', 'owner': 'A', 'readOnly': False, 'modifyTime': '2017-04-27 16:16', 'directory': self.pool_path[0][:-1], 'volumeId': self.pool_id[0], 'mounted': True, 'size': '321965260800'}, { 'utility': '1.00', 'used': '33779712', 'subshare': False, 'share': False, 'worm': '', 'free': '107287973888', 'fsType': 'xfs', 'owner': 'A', 'readOnly': False, 'modifyTime': '2017-04-27 15:45', 'directory': '/share-pool-02/LV-1', 'volumeId': '147A8FB67DA39914', 'mounted': True, 'size': '107321753600' }] rc, folder_status = self._iftnas._parser( self.nas_data.fake_folder_status_data) self.assertEqual(0, rc) self.assertDictListMatch(expect_folder_status, folder_status) def test_ensure_service_on(self): self._get_driver(self.fake_conf) mock_execute = mock.Mock( side_effect=[(0, self.nas_data.fake_nfs_status_off), SUCCEED]) self._iftnas._execute = mock_execute 
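# The first _execute() call reports NFS as stopped (fake_nfs_status_off), so
# _ensure_service_on('nfs') is expected to follow up with a
# 'service restart nfs' command, which the assertion below verifies.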
self._iftnas._ensure_service_on('nfs') mock_execute.assert_called_with(['service', 'restart', 'nfs']) def test_check_channels_status(self): self._get_driver(self.fake_conf) expect_channel_dict = { '0': self.nas_data.fake_channel_ip[0], '1': self.nas_data.fake_channel_ip[1], } self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_get_channel_status())) self._iftnas._check_channels_status() self.assertDictMatch(expect_channel_dict, self._iftnas.channel_dict) @mock.patch.object(infortrend_nas.LOG, 'warning') def test_channel_status_down(self, log_warning): self._get_driver(self.fake_conf) self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_get_channel_status('DOWN'))) self._iftnas._check_channels_status() self.assertEqual(1, log_warning.call_count) @mock.patch.object(infortrend_nas.LOG, 'error') def test_invalid_channel(self, log_error): self.fake_conf.set_default('infortrend_share_channels', '0, 6') self._get_driver(self.fake_conf) self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_get_channel_status())) self.assertRaises( exception.InfortrendNASException, self._iftnas._check_channels_status) def test_check_pools_setup(self): self._get_driver(self.fake_conf) expect_pool_dict = { 'share-pool-01': { 'id': self.pool_id[0], 'path': self.pool_path[0], } } self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_folder_status)) self._iftnas._check_pools_setup() self.assertDictMatch(expect_pool_dict, self._iftnas.pool_dict) def test_unknow_pools_setup(self): self.fake_conf.set_default( 'infortrend_share_pools', 'chengwei, share-pool-01') self._get_driver(self.fake_conf) self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_folder_status)) self.assertRaises( exception.InfortrendNASException, self._iftnas._check_pools_setup) @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_get_pool_quota_used(self, mock_execute): self._get_driver(self.fake_conf, True) mock_execute.return_value = (0, self.nas_data.fake_fquota_status) pool_quota = self._iftnas._get_pool_quota_used('share-pool-01') mock_execute.assert_called_with( ['fquota', 'status', self.pool_id[0], 'LV-1', '-t', 'folder']) self.assertEqual(201466179584, pool_quota) @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_create_share_nfs(self, mock_execute): self._get_driver(self.fake_conf, True) fake_share_id = self.m_data.fake_share_nfs['id'] fake_share_name = fake_share_id.replace('-', '') expect_locations = [ self.nas_data.fake_channel_ip[0] + ':/share-pool-01/LV-1/' + fake_share_name, self.nas_data.fake_channel_ip[1] + ':/share-pool-01/LV-1/' + fake_share_name, ] mock_execute.side_effect = [ SUCCEED, # create folder SUCCEED, # set size (0, self.nas_data.fake_get_share_status_nfs()), # check proto SUCCEED, # enable proto (0, self.nas_data.fake_get_channel_status()) # update channel ] locations = self._driver.create_share( self._ctxt, self.m_data.fake_share_nfs) self.assertEqual(expect_locations, locations) mock_execute.assert_any_call( ['share', self.pool_path[0] + fake_share_name, 'nfs', 'on']) @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_create_share_cifs(self, mock_execute): self._get_driver(self.fake_conf, True) fake_share_id = self.m_data.fake_share_cifs['id'] fake_share_name = fake_share_id.replace('-', '') expect_locations = [ '\\\\' + self.nas_data.fake_channel_ip[0] + '\\' + fake_share_name, '\\\\' + self.nas_data.fake_channel_ip[1] + '\\' + fake_share_name, ] mock_execute.side_effect = [ 
SUCCEED, # create folder SUCCEED, # set size (0, self.nas_data.fake_get_share_status_cifs()), # check proto SUCCEED, # enable proto (0, self.nas_data.fake_get_channel_status()) # update channel ] locations = self._driver.create_share( self._ctxt, self.m_data.fake_share_cifs) self.assertEqual(expect_locations, locations) mock_execute.assert_any_call( ['share', self.pool_path[0] + fake_share_name, 'cifs', 'on', '-n', fake_share_name]) @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_delete_share_nfs(self, mock_execute): self._get_driver(self.fake_conf, True) fake_share_id = self.m_data.fake_share_nfs['id'] fake_share_name = fake_share_id.replace('-', '') mock_execute.side_effect = [ (0, self.nas_data.fake_subfolder_data), # pagelist folder SUCCEED, # delete folder ] self._driver.delete_share( self._ctxt, self.m_data.fake_share_nfs) mock_execute.assert_any_call( ['folder', 'options', self.pool_id[0], 'LV-1', '-d', fake_share_name]) @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_delete_share_cifs(self, mock_execute): self._get_driver(self.fake_conf, True) fake_share_id = self.m_data.fake_share_cifs['id'] fake_share_name = fake_share_id.replace('-', '') mock_execute.side_effect = [ (0, self.nas_data.fake_subfolder_data), # pagelist folder SUCCEED, # delete folder ] self._driver.delete_share( self._ctxt, self.m_data.fake_share_cifs) mock_execute.assert_any_call( ['folder', 'options', self.pool_id[0], 'LV-1', '-d', fake_share_name]) @mock.patch.object(infortrend_nas.LOG, 'warning') @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_delete_non_exist_share(self, mock_execute, log_warning): self._get_driver(self.fake_conf, True) mock_execute.side_effect = [ (0, self.nas_data.fake_subfolder_data), # pagelist folder ] self._driver.delete_share( self._ctxt, self.m_data.fake_non_exist_share) self.assertEqual(1, log_warning.call_count) def test_get_pool(self): self._get_driver(self.fake_conf, True) pool = self._driver.get_pool(self.m_data.fake_share_nfs) self.assertEqual('share-pool-01', pool) def test_get_pool_without_host(self): self._get_driver(self.fake_conf, True) self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_subfolder_data)) pool = self._driver.get_pool(self.m_data.fake_share_cifs_no_host) self.assertEqual('share-pool-01', pool) def test_ensure_share_nfs(self): self._get_driver(self.fake_conf, True) share_id = self.m_data.fake_share_nfs['id'] share_name = share_id.replace('-', '') share_path = self.pool_path[0] + share_name expect_locations = [ self.nas_data.fake_channel_ip[0] + ':' + share_path, self.nas_data.fake_channel_ip[1] + ':' + share_path, ] self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_get_channel_status())) locations = self._driver.ensure_share( self._ctxt, self.m_data.fake_share_nfs) self.assertEqual(expect_locations, locations) def test_ensure_share_cifs(self): self._get_driver(self.fake_conf, True) share_id = self.m_data.fake_share_cifs['id'] share_name = share_id.replace('-', '') expect_locations = [ '\\\\' + self.nas_data.fake_channel_ip[0] + '\\' + share_name, '\\\\' + self.nas_data.fake_channel_ip[1] + '\\' + share_name, ] self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_get_channel_status())) locations = self._driver.ensure_share( self._ctxt, self.m_data.fake_share_cifs) self.assertEqual(expect_locations, locations) def test_extend_share(self): self._get_driver(self.fake_conf, True) share_id = self.m_data.fake_share_nfs['id'] share_name = 
share_id.replace('-', '') self._iftnas._execute = mock.Mock(return_value=SUCCEED) self._driver.extend_share(self.m_data.fake_share_nfs, 100) self._iftnas._execute.assert_called_once_with( ['fquota', 'create', self.pool_id[0], 'LV-1', share_name, '100G', '-t', 'folder']) @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_shrink_share(self, mock_execute): self._get_driver(self.fake_conf, True) share_id = self.m_data.fake_share_nfs['id'] share_name = share_id.replace('-', '') mock_execute.side_effect = [ (0, self.nas_data.fake_fquota_status), # check used SUCCEED, ] self._driver.shrink_share(self.m_data.fake_share_nfs, 10) mock_execute.assert_has_calls([ mock.call(['fquota', 'status', self.pool_id[0], 'LV-1', '-t', 'folder']), mock.call(['fquota', 'create', self.pool_id[0], 'LV-1', share_name, '10G', '-t', 'folder'])]) @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_shrink_share_smaller_than_used_size(self, mock_execute): self._get_driver(self.fake_conf, True) mock_execute.side_effect = [ (0, self.nas_data.fake_fquota_status), # check used ] self.assertRaises( exception.ShareShrinkingPossibleDataLoss, self._driver.shrink_share, self.m_data.fake_share_cifs, 10) def test_get_share_size(self): self._get_driver(self.fake_conf, True) self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_fquota_status)) size = self._iftnas._get_share_size('', '', 'test-folder-02') self.assertEqual(87.63, size) @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_manage_existing_nfs(self, mock_execute): self._get_driver(self.fake_conf, True) share_id = self.m_data.fake_share_for_manage_nfs['id'] share_name = share_id.replace('-', '') origin_share_path = self.pool_path[0] + 'test-folder' export_share_path = self.pool_path[0] + share_name expect_result = { 'size': 20.0, 'export_locations': [ self.nas_data.fake_channel_ip[0] + ':' + export_share_path, self.nas_data.fake_channel_ip[1] + ':' + export_share_path, ] } mock_execute.side_effect = [ (0, self.nas_data.fake_subfolder_data), # pagelist folder (0, self.nas_data.fake_get_share_status_nfs()), # check proto SUCCEED, # enable nfs (0, self.nas_data.fake_fquota_status), # get share size SUCCEED, # rename share (0, self.nas_data.fake_get_channel_status()) # update channel ] result = self._driver.manage_existing( self.m_data.fake_share_for_manage_nfs, {} ) self.assertEqual(expect_result, result) mock_execute.assert_has_calls([ mock.call(['pagelist', 'folder', self.pool_path[0]]), mock.call(['share', 'status', '-f', origin_share_path]), mock.call(['share', origin_share_path, 'nfs', 'on']), mock.call(['fquota', 'status', self.pool_id[0], origin_share_path.split('/')[3], '-t', 'folder']), mock.call(['folder', 'options', self.pool_id[0], 'LV-1', '-k', 'test-folder', share_name]), mock.call(['ifconfig', 'inet', 'show']), ]) @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_manage_existing_cifs(self, mock_execute): self._get_driver(self.fake_conf, True) share_id = self.m_data.fake_share_for_manage_cifs['id'] share_name = share_id.replace('-', '') origin_share_path = self.pool_path[0] + 'test-folder-02' expect_result = { 'size': 87.63, 'export_locations': [ '\\\\' + self.nas_data.fake_channel_ip[0] + '\\' + share_name, '\\\\' + self.nas_data.fake_channel_ip[1] + '\\' + share_name, ] } mock_execute.side_effect = [ (0, self.nas_data.fake_subfolder_data), # pagelist folder (0, self.nas_data.fake_get_share_status_cifs()), # check proto SUCCEED, # enable cifs (0, 
self.nas_data.fake_fquota_status), # get share size SUCCEED, # rename share (0, self.nas_data.fake_get_channel_status()) # update channel ] result = self._driver.manage_existing( self.m_data.fake_share_for_manage_cifs, {} ) self.assertEqual(expect_result, result) mock_execute.assert_has_calls([ mock.call(['pagelist', 'folder', self.pool_path[0]]), mock.call(['share', 'status', '-f', origin_share_path]), mock.call(['share', origin_share_path, 'cifs', 'on', '-n', share_name]), mock.call(['fquota', 'status', self.pool_id[0], origin_share_path.split('/')[3], '-t', 'folder']), mock.call(['folder', 'options', self.pool_id[0], 'LV-1', '-k', 'test-folder-02', share_name]), mock.call(['ifconfig', 'inet', 'show']), ]) def test_manage_existing_with_no_location(self): self._get_driver(self.fake_conf, True) fake_share = self.m_data._get_fake_share_for_manage('') self.assertRaises( exception.InfortrendNASException, self._driver.manage_existing, fake_share, {}) @ddt.data('172.27.1.1:/share-pool-01/LV-1/test-folder', '172.27.112.223:/share-pool-01/LV-1/some-folder') def test_manage_existing_wrong_ip_or_name(self, fake_share_path): self._get_driver(self.fake_conf, True) fake_share = self.m_data._get_fake_share_for_manage(fake_share_path) self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_subfolder_data)) self.assertRaises( exception.InfortrendNASException, self._driver.manage_existing, fake_share, {}) @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_manage_existing_with_no_size_setting(self, mock_execute): self._get_driver(self.fake_conf, True) mock_execute.side_effect = [ (0, self.nas_data.fake_subfolder_data), # pagelist folder (0, self.nas_data.fake_get_share_status_nfs()), # check proto SUCCEED, # enable nfs (0, self.nas_data.fake_fquota_status_with_no_settings), ] self.assertRaises( exception.InfortrendNASException, self._driver.manage_existing, self.m_data.fake_share_for_manage_nfs, {}) @ddt.data('NFS', 'CIFS') @mock.patch.object(infortrend_nas.InfortrendNAS, '_execute') def test_unmanage(self, protocol, mock_execute): share_to_unmanage = (self.m_data.fake_share_nfs if protocol == 'NFS' else self.m_data.fake_share_cifs) self._get_driver(self.fake_conf, True) mock_execute.side_effect = [ (0, self.nas_data.fake_subfolder_data), # pagelist folder ] self._driver.unmanage(share_to_unmanage) mock_execute.assert_called_once_with( ['pagelist', 'folder', self.pool_path[0]], ) @mock.patch.object(infortrend_nas.LOG, 'warning') def test_unmanage_share_not_exist(self, log_warning): self._get_driver(self.fake_conf, True) self._iftnas._execute = mock.Mock( return_value=(0, self.nas_data.fake_subfolder_data)) self._driver.unmanage( self.m_data.fake_share_for_manage_nfs, ) self.assertEqual(1, log_warning.call_count) manila-10.0.0/manila/tests/share/drivers/infortrend/fake_infortrend_manila_data.py0000664000175000017500000003650713656750227030464 0ustar zuulzuul00000000000000# Copyright (c) 2019 Infortrend Technology, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
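# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the upstream fake data): the driver tests
# build their expected export locations from a share's UUID and the NAS
# channel IPs.  Assuming the conventions asserted in test_create_share_nfs and
# test_create_share_cifs, that mapping can be written as the hypothetical
# helper below; the real driver derives the locations itself.
def _example_export_locations(share_id, share_proto, channel_ips,
                              pool_path='/share-pool-01/LV-1/'):
    """Derive export locations the way the unit tests expect them."""
    share_name = share_id.replace('-', '')  # share UUID with dashes stripped
    if share_proto == 'NFS':
        return [ip + ':' + pool_path + share_name for ip in channel_ips]
    # CIFS shares are exported as \\<channel ip>\<share name>
    return ['\\\\' + ip + '\\' + share_name for ip in channel_ips]
# ---------------------------------------------------------------------------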
class InfortrendManilaTestData(object): fake_share_id = ['4d6984fd-8572-4467-964f-24936a8c4ea2', # NFS 'a7b933e6-bb77-4823-a86f-f2c3ab41a8a5'] # CIFS fake_id = ['iftt8862-2226-0126-7610-chengweichou', '987c8763-3333-4444-5555-666666666666'] fake_share_nfs = { 'share_id': fake_share_id[0], 'availability_zone': 'nova', 'terminated_at': 'datetime.datetime(2017, 5, 8, 8, 27, 25)', 'availability_zone_id': 'fd32d76d-b5a8-4c5c-93d7-8f09fc2a8ad3', 'updated_at': 'datetime.datetime(2017, 5, 8, 8, 27, 25)', 'share_network_id': None, 'export_locations': [], 'share_server_id': None, 'snapshot_id': None, 'deleted_at': None, 'id': '5a0aa06e-1c57-4996-be46-b81e360e8866', 'size': 30, 'replica_state': None, 'user_id': '4944594433f0405588928a4212964658', 'export_location': '172.27.112.223:/share-pool-01/LV-1/' + fake_share_id[0], 'display_description': None, 'consistency_group_id': None, 'project_id': '0e63326c50a246ac81fa1a0c8e003d5b', 'launched_at': 'datetime.datetime(2017, 5, 8, 8, 23, 33)', 'scheduled_at': 'datetime.datetime(2017, 5, 8, 8, 23, 29)', 'status': 'deleting', 'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f', 'deleted': 'False', 'host': 'compute@ift-manila#share-pool-01', 'access_rules_status': 'active', 'display_name': 'nfs-01', 'name': 'share-5a0aa06e-1c57-4996-be46-b81e360e8866', 'created_at': 'datetime.datetime(2017, 5, 8, 8, 23, 29)', 'share_proto': 'NFS', 'is_public': False, 'source_cgsnapshot_member_id': None } fake_share_cifs = { 'share_id': fake_share_id[1], 'availability_zone': 'nova', 'terminated_at': None, 'availability_zone_id': 'fd32d76d-b5a8-4c5c-93d7-8f09fc2a8ad3', 'updated_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)', 'share_network_id': None, 'export_locations': [], 'share_server_id': None, 'snapshot_id': None, 'deleted_at': None, 'id': 'aac4fe64-7a9c-472a-b156-9adbb50b4d29', 'size': 50, 'replica_state': None, 'user_id': '4944594433f0405588928a4212964658', 'export_location': None, 'display_description': None, 'consistency_group_id': None, 'project_id': '0e63326c50a246ac81fa1a0c8e003d5b', 'launched_at': None, 'scheduled_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)', 'status': 'creating', 'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f', 'deleted': 'False', 'host': 'compute@ift-manila#share-pool-01', 'access_rules_status': 'active', 'display_name': 'cifs-01', 'name': 'share-aac4fe64-7a9c-472a-b156-9adbb50b4d29', 'created_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)', 'share_proto': 'CIFS', 'is_public': False, 'source_cgsnapshot_member_id': None } fake_share_cifs_no_host = { 'share_id': fake_share_id[1], 'availability_zone': 'nova', 'terminated_at': None, 'availability_zone_id': 'fd32d76d-b5a8-4c5c-93d7-8f09fc2a8ad3', 'updated_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)', 'share_network_id': None, 'export_locations': [], 'share_server_id': None, 'snapshot_id': None, 'deleted_at': None, 'id': 'aac4fe64-7a9c-472a-b156-9adbb50b4d29', 'size': 50, 'replica_state': None, 'user_id': '4944594433f0405588928a4212964658', 'export_location': None, 'display_description': None, 'consistency_group_id': None, 'project_id': '0e63326c50a246ac81fa1a0c8e003d5b', 'launched_at': None, 'scheduled_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)', 'status': 'creating', 'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f', 'deleted': 'False', 'host': '', 'access_rules_status': 'active', 'display_name': 'cifs-01', 'name': 'share-aac4fe64-7a9c-472a-b156-9adbb50b4d29', 'created_at': 'datetime.datetime(2017, 5, 9, 2, 28, 35)', 'share_proto': 'CIFS', 'is_public': False, 
'source_cgsnapshot_member_id': None } fake_non_exist_share = { 'share_id': fake_id[0], 'availability_zone': 'nova', 'terminated_at': 'datetime.datetime(2017, 5, 8, 8, 27, 25)', 'availability_zone_id': 'fd32d76d-b5a8-4c5c-93d7-8f09fc2a8ad3', 'updated_at': 'datetime.datetime(2017, 5, 8, 8, 27, 25)', 'share_network_id': None, 'export_locations': [], 'share_server_id': None, 'snapshot_id': None, 'deleted_at': None, 'id': fake_id[1], 'size': 30, 'replica_state': None, 'user_id': '4944594433f0405588928a4212964658', 'export_location': '172.27.112.223:/share-pool-01/LV-1/' + fake_id[0], 'display_description': None, 'consistency_group_id': None, 'project_id': '0e63326c50a246ac81fa1a0c8e003d5b', 'launched_at': 'datetime.datetime(2017, 5, 8, 8, 23, 33)', 'scheduled_at': 'datetime.datetime(2017, 5, 8, 8, 23, 29)', 'status': 'available', 'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f', 'deleted': 'False', 'host': 'compute@ift-manila#share-pool-01', 'access_rules_status': 'active', 'display_name': 'nfs-01', 'name': 'share-5a0aa06e-1c57-4996-be46-b81e360e8866', 'created_at': 'datetime.datetime(2017, 5, 8, 8, 23, 29)', 'share_proto': 'NFS', 'is_public': False, 'source_cgsnapshot_member_id': None } fake_access_rules_nfs = [{ 'share_id': fake_share_id[0], 'deleted': 'False', 'created_at': 'datetime.datetime(2017, 5, 9, 8, 41, 21)', 'updated_at': None, 'access_type': 'ip', 'access_to': '172.27.1.1', 'access_level': 'rw', 'instance_mappings': [], 'deleted_at': None, 'id': 'fa60b50f-1428-44a2-9931-7e31f0c5b033'}, { 'share_id': fake_share_id[0], 'deleted': 'False', 'created_at': 'datetime.datetime(2017, 5, 9, 8, 45, 37)', 'updated_at': None, 'access_type': 'ip', 'access_to': '172.27.1.2', 'access_level': 'rw', 'instance_mappings': [], 'deleted_at': None, 'id': '9bcdd5e6-11c7-4f8f-939c-84fa2f3334bc' }] fake_rule_ip_1 = [{ 'share_id': fake_share_id[0], 'deleted': 'False', 'created_at': 'datetime.datetime(2017, 5, 9, 8, 41, 21)', 'updated_at': None, 'access_type': 'ip', 'access_to': '172.27.1.1', 'access_level': 'rw', 'instance_mappings': [], 'deleted_at': None, 'id': 'fa60b50f-1428-44a2-9931-7e31f0c5b033' }] fake_rule_ip_2 = [{ 'share_id': fake_share_id[0], 'deleted': 'False', 'created_at': 'datetime.datetime(2017, 5, 9, 8, 45, 37)', 'updated_at': None, 'access_type': 'ip', 'access_to': '172.27.1.2', 'access_level': 'rw', 'instance_mappings': [], 'deleted_at': None, 'id': '9bcdd5e6-11c7-4f8f-939c-84fa2f3334bc' }] fake_access_rules_cifs = [{ 'share_id': fake_share_id[1], 'deleted': 'False', 'created_at': 'datetime.datetime(2017, 5, 9, 9, 39, 18)', 'updated_at': None, 'access_type': 'user', 'access_to': 'user02', 'access_level': 'ro', 'instance_mappings': [], 'deleted_at': None, 'id': '6e8bc969-51c9-4bbb-8e8b-020dc5fec81e'}, { 'share_id': fake_share_id[1], 'deleted': 'False', 'created_at': 'datetime.datetime(2017, 5, 9, 9, 38, 59)', 'updated_at': None, 'access_type': 'user', 'access_to': 'user01', 'access_level': 'rw', 'instance_mappings': [], 'deleted_at': None, 'id': '0cd9926d-fac4-4122-a523-538e98752e78' }] fake_rule_user01 = [{ 'share_id': fake_share_id[1], 'deleted': 'False', 'created_at': 'datetime.datetime(2017, 5, 9, 9, 38, 59)', 'updated_at': None, 'access_type': 'user', 'access_to': 'user01', 'access_level': 'rw', 'instance_mappings': [], 'deleted_at': None, 'id': '0cd9926d-fac4-4122-a523-538e98752e78' }] fake_rule_user02 = [{ 'share_id': fake_share_id[1], 'deleted': 'False', 'created_at': 'datetime.datetime(2017, 5, 9, 9, 39, 18)', 'updated_at': None, 'access_type': 'user', 'access_to': 'user02', 
'access_level': 'ro', 'instance_mappings': [], 'deleted_at': None, 'id': '6e8bc969-51c9-4bbb-8e8b-020dc5fec81e' }] fake_rule_user03 = [{ 'share_id': fake_id[0], 'deleted': 'False', 'created_at': 'datetime.datetime(2017, 5, 9, 9, 39, 18)', 'updated_at': None, 'access_type': 'user', 'access_to': 'user03', 'access_level': 'rw', 'instance_mappings': [], 'deleted_at': None, 'id': fake_id[1] }] fake_share_for_manage_nfs = { 'share_id': '419ab73c-c0fc-4e73-b56a-70756e0b6d27', 'availability_zone': None, 'terminated_at': None, 'availability_zone_id': None, 'updated_at': None, 'share_network_id': None, 'export_locations': [{ 'uuid': '0ebd59e4-e65e-4fda-9457-320375efd0be', 'deleted': 0, 'created_at': 'datetime.datetime(2017, 5, 10, 10, 0, 3)', 'updated_at': 'datetime.datetime(2017, 5, 10, 10, 0, 3)', 'is_admin_only': False, 'share_instance_id': 'd3cfe195-85cf-41e6-be4f-a96f7e7db192', 'path': '172.27.112.223:/share-pool-01/LV-1/test-folder', 'el_metadata': {}, 'deleted_at': None, 'id': 83 }], 'share_server_id': None, 'snapshot_id': None, 'deleted_at': None, 'id': '615ac1ed-e808-40b5-8d7b-87018c6f66eb', 'size': None, 'replica_state': None, 'user_id': '4944594433f0405588928a4212964658', 'export_location': '172.27.112.223:/share-pool-01/LV-1/test-folder', 'display_description': '', 'consistency_group_id': None, 'project_id': '0e63326c50a246ac81fa1a0c8e003d5b', 'launched_at': None, 'scheduled_at': 'datetime.datetime(2017, 5, 10, 9, 22, 5)', 'status': 'manage_starting', 'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f', 'deleted': 'False', 'host': 'compute@ift-manila#share-pool-01', 'access_rules_status': 'active', 'display_name': 'test-manage', 'name': 'share-615ac1ed-e808-40b5-8d7b-87018c6f66eb', 'created_at': 'datetime.datetime(2017, 5, 10, 9, 22, 5)', 'share_proto': 'NFS', 'is_public': False, 'source_cgsnapshot_member_id': None } def _get_fake_share_for_manage(self, location=''): return { 'share_id': '419ab73c-c0fc-4e73-b56a-70756e0b6d27', 'availability_zone': None, 'terminated_at': None, 'availability_zone_id': None, 'updated_at': None, 'share_network_id': None, 'export_locations': [{ 'uuid': '0ebd59e4-e65e-4fda-9457-320375efd0be', 'deleted': 0, 'created_at': 'datetime.datetime(2017, 5, 10, 10, 0, 3)', 'updated_at': 'datetime.datetime(2017, 5, 10, 10, 0, 3)', 'is_admin_only': False, 'share_instance_id': 'd3cfe195-85cf-41e6-be4f-a96f7e7db192', 'path': location, 'el_metadata': {}, 'deleted_at': None, 'id': 83 }], 'share_server_id': None, 'snapshot_id': None, 'deleted_at': None, 'id': '615ac1ed-e808-40b5-8d7b-87018c6f66eb', 'size': None, 'replica_state': None, 'user_id': '4944594433f0405588928a4212964658', 'export_location': location, 'display_description': '', 'consistency_group_id': None, 'project_id': '0e63326c50a246ac81fa1a0c8e003d5b', 'launched_at': None, 'scheduled_at': 'datetime.datetime(2017, 5, 10, 9, 22, 5)', 'status': 'manage_starting', 'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f', 'deleted': 'False', 'host': 'compute@ift-manila#share-pool-01', 'access_rules_status': 'active', 'display_name': 'test-manage', 'name': 'share-615ac1ed-e808-40b5-8d7b-87018c6f66eb', 'created_at': 'datetime.datetime(2017, 5, 10, 9, 22, 5)', 'share_proto': 'NFS', 'is_public': False, 'source_cgsnapshot_member_id': None } fake_share_for_manage_cifs = { 'share_id': '3a1222d3-c981-490a-9390-4d560ced68eb', 'availability_zone': None, 'terminated_at': None, 'availability_zone_id': None, 'updated_at': None, 'share_network_id': None, 'export_locations': [{ 'uuid': '0ebd59e4-e65e-4fda-9457-320375efd0de', 
'deleted': 0, 'created_at': 'datetime.datetime(2017, 5, 11, 10, 10, 3)', 'updated_at': 'datetime.datetime(2017, 5, 11, 10, 10, 3)', 'is_admin_only': False, 'share_instance_id': 'd3cfe195-85cf-41e6-be4f-a96f7e7db192', 'path': '\\\\172.27.113.209\\test-folder-02', 'el_metadata': {}, 'deleted_at': None, 'id': 87 }], 'share_server_id': None, 'snapshot_id': None, 'deleted_at': None, 'id': 'd156baf7-5422-4c9b-8c78-ee7943d000ec', 'size': None, 'replica_state': None, 'user_id': '4944594433f0405588928a4212964658', 'export_location': '\\\\172.27.113.209\\test-folder-02', 'display_description': '', 'consistency_group_id': None, 'project_id': '0e63326c50a246ac81fa1a0c8e003d5b', 'launched_at': None, 'scheduled_at': 'datetime.datetime(2017, 5, 11, 3, 7, 59)', 'status': 'manage_starting', 'share_type_id': '23d8c637-0192-47fa-b921-958f22ed772f', 'deleted': 'False', 'host': 'compute@ift-manila#share-pool-01', 'access_rules_status': 'active', 'display_name': 'test-manage-02', 'name': 'share-d156baf7-5422-4c9b-8c78-ee7943d000ec', 'created_at': 'datetime.datetime(2017, 5, 11, 3, 7, 59)', 'share_proto': 'CIFS', 'is_public': False, 'source_cgsnapshot_member_id': None } manila-10.0.0/manila/tests/share/drivers/test_service_instance.py0000664000175000017500000033146513656750227025224 0ustar zuulzuul00000000000000# Copyright (c) 2014 NetApp, Inc. # Copyright (c) 2015 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
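# Note on the scaffolding below: ServiceInstanceManager is exercised without
# any real OpenStack services.  The compute API is swapped for
# fake_compute.API, the Neutron helper for FakeNetworkHelper, and
# configuration lookups are answered by fake_get_config_option, wired in with
# mock.Mock(side_effect=fake_get_config_option) (see setUp and
# _get_common_server below), so each test controls exactly which option
# values the manager sees.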
"""Unit tests for the instance module.""" import os import time from unittest import mock import ddt import netaddr from oslo_config import cfg from oslo_utils import importutils import six from manila import exception from manila.share import configuration from manila.share import driver # noqa from manila.share.drivers import service_instance from manila import test from manila.tests import fake_compute from manila.tests import fake_network from manila.tests import utils as test_utils CONF = cfg.CONF def fake_get_config_option(key): if key == 'driver_handles_share_servers': return True elif key == 'service_instance_password': return None elif key == 'service_instance_user': return 'fake_user' elif key == 'service_network_name': return 'fake_service_network_name' elif key == 'service_instance_flavor_id': return '100' elif key == 'service_instance_name_template': return 'fake_manila_service_instance_%s' elif key == 'service_image_name': return 'fake_service_image_name' elif key == 'manila_service_keypair_name': return 'fake_manila_service_keypair_name' elif key == 'path_to_private_key': return 'fake_path_to_private_key' elif key == 'path_to_public_key': return 'fake_path_to_public_key' elif key == 'max_time_to_build_instance': return 500 elif key == 'connect_share_server_to_tenant_network': return False elif key == 'service_network_cidr': return '99.254.0.0/24' elif key == 'service_network_division_mask': return 27 elif key == 'service_network_name': return 'fake_service_network_name' elif key == 'interface_driver': return 'i.am.fake.VifDriver' elif key == 'admin_network_id': return None elif key == 'admin_subnet_id': return None elif key == 'backend_availability_zone': return None else: return mock.Mock() class FakeServiceInstance(object): def __init__(self, driver_config=None): super(FakeServiceInstance, self).__init__() self.compute_api = service_instance.compute.API() self.admin_context = service_instance.context.get_admin_context() self.driver_config = driver_config def get_config_option(self, key): return fake_get_config_option(key) class FakeNetworkHelper(service_instance.BaseNetworkhelper): @property def NAME(self): return service_instance.NEUTRON_NAME @property def neutron_api(self): if not hasattr(self, '_neutron_api'): self._neutron_api = mock.Mock() return self._neutron_api def __init__(self, service_instance_manager): self.get_config_option = service_instance_manager.get_config_option def get_network_name(self, network_info): """Return name of network.""" return 'fake_network_name' def setup_connectivity_with_service_instances(self): """Nothing to do in fake network helper.""" def setup_network(self, network_info): """Combine fake network data.""" return dict() def teardown_network(self, server_details): """Nothing to do in fake network helper.""" @ddt.ddt class ServiceInstanceManagerTestCase(test.TestCase): """Test suite for service instance manager.""" def setUp(self): super(ServiceInstanceManagerTestCase, self).setUp() self.instance_id = 'fake_instance_id' self.config = configuration.Configuration(None) self.config.safe_get = mock.Mock(side_effect=fake_get_config_option) self.mock_object(service_instance.compute, 'API', fake_compute.API) self.mock_object( service_instance.os.path, 'exists', mock.Mock(return_value=True)) self.mock_object(service_instance, 'NeutronNetworkHelper', mock.Mock(side_effect=FakeNetworkHelper)) self._manager = service_instance.ServiceInstanceManager(self.config) self._manager._execute = mock.Mock(return_value=('', '')) self.mock_object(time, 
'sleep') def test_get_config_option_from_driver_config(self): username1 = 'fake_username_1_%s' % self.id() username2 = 'fake_username_2_%s' % self.id() config_data = dict( DEFAULT=dict(service_instance_user=username1), CUSTOM=dict(service_instance_user=username2)) with test_utils.create_temp_config_with_opts(config_data): self.config = configuration.Configuration( service_instance.common_opts, config_group='CUSTOM') self._manager = service_instance.ServiceInstanceManager( self.config) result = self._manager.get_config_option('service_instance_user') self.assertEqual(username2, result) def test_get_config_option_from_common_config(self): username = 'fake_username_%s' % self.id() config_data = dict(DEFAULT=dict(service_instance_user=username)) with test_utils.create_temp_config_with_opts(config_data): self._manager = service_instance.ServiceInstanceManager() result = self._manager.get_config_option('service_instance_user') self.assertEqual(username, result) def test_get_neutron_network_helper(self): # Mock it again, because it was called in setUp method. self.mock_object(service_instance, 'NeutronNetworkHelper') config_data = dict(DEFAULT=dict(service_instance_user='fake_username', driver_handles_share_servers=True)) with test_utils.create_temp_config_with_opts(config_data): self._manager = service_instance.ServiceInstanceManager() self._manager.network_helper service_instance.NeutronNetworkHelper.assert_called_once_with( self._manager) def test_init_with_driver_config_and_handling_of_share_servers(self): self.mock_object(service_instance, 'NeutronNetworkHelper') config_data = dict(CUSTOM=dict( driver_handles_share_servers=True, service_instance_user='fake_user')) opts = service_instance.common_opts + driver.share_opts with test_utils.create_temp_config_with_opts(config_data): self.config = configuration.Configuration(opts, 'CUSTOM') self._manager = service_instance.ServiceInstanceManager( self.config) self.assertTrue( self._manager.get_config_option("driver_handles_share_servers")) self.assertIsNotNone(self._manager.driver_config) self.assertTrue(hasattr(self._manager, 'network_helper')) self.assertTrue(service_instance.NeutronNetworkHelper.called) def test_init_with_driver_config_and_wo_handling_of_share_servers(self): self.mock_object(service_instance, 'NeutronNetworkHelper') config_data = dict(CUSTOM=dict( driver_handles_share_servers=False, service_instance_user='fake_user')) opts = service_instance.common_opts + driver.share_opts with test_utils.create_temp_config_with_opts(config_data): self.config = configuration.Configuration(opts, 'CUSTOM') self._manager = service_instance.ServiceInstanceManager( self.config) self.assertIsNotNone(self._manager.driver_config) self.assertFalse(hasattr(self._manager, 'network_helper')) self.assertFalse(service_instance.NeutronNetworkHelper.called) def test_init_with_common_config_and_handling_of_share_servers(self): self.mock_object(service_instance, 'NeutronNetworkHelper') config_data = dict(DEFAULT=dict( service_instance_user='fake_username', driver_handles_share_servers=True)) with test_utils.create_temp_config_with_opts(config_data): self._manager = service_instance.ServiceInstanceManager() self.assertTrue( self._manager.get_config_option("driver_handles_share_servers")) self.assertIsNone(self._manager.driver_config) self.assertTrue(hasattr(self._manager, 'network_helper')) self.assertTrue(service_instance.NeutronNetworkHelper.called) def test_init_with_common_config_and_wo_handling_of_share_servers(self): self.mock_object(service_instance, 
'NeutronNetworkHelper') config_data = dict(DEFAULT=dict( service_instance_user='fake_username', driver_handles_share_servers=False)) with test_utils.create_temp_config_with_opts(config_data): self._manager = service_instance.ServiceInstanceManager() self.assertEqual( False, self._manager.get_config_option("driver_handles_share_servers")) self.assertIsNone(self._manager.driver_config) self.assertFalse(hasattr(self._manager, 'network_helper')) self.assertFalse(service_instance.NeutronNetworkHelper.called) def test_no_service_user_defined(self): group_name = 'GROUP_%s' % self.id() config_data = {group_name: dict()} with test_utils.create_temp_config_with_opts(config_data): config = configuration.Configuration( service_instance.common_opts, config_group=group_name) self.assertRaises( exception.ServiceInstanceException, service_instance.ServiceInstanceManager, config) def test_get_service_instance_name_using_driver_config(self): fake_server_id = 'fake_share_server_id_%s' % self.id() self.mock_object(service_instance, 'NeutronNetworkHelper') config_data = dict(CUSTOM=dict( driver_handles_share_servers=True, service_instance_user='fake_user')) opts = service_instance.common_opts + driver.share_opts with test_utils.create_temp_config_with_opts(config_data): self.config = configuration.Configuration(opts, 'CUSTOM') self._manager = service_instance.ServiceInstanceManager( self.config) result = self._manager._get_service_instance_name(fake_server_id) self.assertIsNotNone(self._manager.driver_config) self.assertEqual( self._manager.get_config_option( "service_instance_name_template") % "%s_%s" % ( self._manager.driver_config.config_group, fake_server_id), result) self.assertTrue( self._manager.get_config_option("driver_handles_share_servers")) self.assertTrue(hasattr(self._manager, 'network_helper')) self.assertTrue(service_instance.NeutronNetworkHelper.called) def test_get_service_instance_name_using_default_config(self): fake_server_id = 'fake_share_server_id_%s' % self.id() config_data = dict(CUSTOM=dict( service_instance_user='fake_user')) with test_utils.create_temp_config_with_opts(config_data): self._manager = service_instance.ServiceInstanceManager() result = self._manager._get_service_instance_name(fake_server_id) self.assertIsNone(self._manager.driver_config) self.assertEqual( self._manager.get_config_option( "service_instance_name_template") % fake_server_id, result) def test__check_server_availability_available_from_start(self): fake_server = dict(id='fake_server', ip='127.0.0.1') self.mock_object(service_instance.socket.socket, 'connect') self.mock_object(service_instance.time, 'sleep') self.mock_object(service_instance.time, 'time', mock.Mock(return_value=0)) result = self._manager._check_server_availability(fake_server) self.assertTrue(result) service_instance.socket.socket.connect.assert_called_once_with( (fake_server['ip'], 22)) service_instance.time.time.assert_has_calls([ mock.call(), mock.call()]) service_instance.time.time.assert_has_calls([]) @ddt.data(True, False) def test__check_server_availability_with_recall(self, is_ok): fake_server = dict(id='fake_server', ip='fake_ip_address') self.fake_time = 0 def fake_connect(addr): if not(is_ok and self.fake_time > 1): raise service_instance.socket.error def fake_time(): return self.fake_time def fake_sleep(time): self.fake_time += 5 self.mock_object(service_instance.time, 'sleep', mock.Mock(side_effect=fake_sleep)) self.mock_object(service_instance.socket.socket, 'connect', mock.Mock(side_effect=fake_connect)) 
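# fake_connect() keeps raising socket.error until is_ok is True and the
# simulated clock has passed 1 second; each fake_sleep() advances that clock
# by 5 seconds.  With max_time_to_build_instance set to 6 below, the check
# succeeds on the second connect attempt when is_ok is True and gives up
# after the timeout when it is False.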
self.mock_object(service_instance.time, 'time', mock.Mock(side_effect=fake_time)) self._manager.max_time_to_build_instance = 6 result = self._manager._check_server_availability(fake_server) if is_ok: self.assertTrue(result) else: self.assertFalse(result) service_instance.socket.socket.connect.assert_has_calls([ mock.call((fake_server['ip'], 22)), mock.call((fake_server['ip'], 22))]) service_instance.time.time.assert_has_calls([ mock.call(), mock.call(), mock.call()]) service_instance.time.time.assert_has_calls([mock.call()]) def test_get_server_ip_found_in_networks_section(self): ip = '10.0.0.1' net_name = self._manager.get_config_option('service_network_name') fake_server = dict(networks={net_name: [ip]}) result = self._manager._get_server_ip(fake_server, net_name) self.assertEqual(ip, result) def test_get_server_ip_found_in_addresses_section(self): ip = '10.0.0.1' net_name = self._manager.get_config_option('service_network_name') fake_server = dict(addresses={net_name: [dict(addr=ip, version=4)]}) result = self._manager._get_server_ip(fake_server, net_name) self.assertEqual(ip, result) @ddt.data( {}, {'networks': {fake_get_config_option('service_network_name'): []}}, {'addresses': {fake_get_config_option('service_network_name'): []}}) def test_get_server_ip_not_found(self, data): self.assertRaises( exception.ManilaException, self._manager._get_server_ip, data, fake_get_config_option('service_network_name')) def test_security_group_name_not_specified(self): self.mock_object(self._manager, 'get_config_option', mock.Mock(return_value=None)) result = self._manager._get_or_create_security_groups( self._manager.admin_context) self.assertIsNone(result) self._manager.get_config_option.assert_called_once_with( 'service_instance_security_group') def test_security_group_name_from_config_and_sg_exist(self): name = "fake_sg_name_from_config" desc = "fake_sg_description" fake_secgroup = {'id': 'fake_sg_id', 'name': name, 'description': desc} self.mock_object(self._manager, 'get_config_option', mock.Mock(return_value=name)) neutron_api = self._manager.network_helper.neutron_api neutron_api.security_group_list.return_value = { 'security_groups': [fake_secgroup]} result = self._manager._get_or_create_security_groups( self._manager.admin_context) self.assertEqual([fake_secgroup, ], result) self._manager.get_config_option.assert_called_once_with( 'service_instance_security_group') neutron_api.security_group_list.assert_called_once_with({"name": name}) @ddt.data(None, 'fake_name') def test_security_group_creation_with_name_from_config(self, name): config_name = "fake_sg_name_from_config" desc = "fake_sg_description" fake_secgroup = {'id': 'fake_sg_id', 'name': name, 'description': desc} self.mock_object(self._manager, 'get_config_option', mock.Mock(return_value=name or config_name)) neutron_api = self._manager.network_helper.neutron_api neutron_api.security_group_list.return_value = {'security_groups': []} neutron_api.security_group_create.return_value = { 'security_group': fake_secgroup, } result = self._manager._get_or_create_security_groups( context=self._manager.admin_context, name=name, description=desc, ) self.assertEqual([fake_secgroup, ], result) if not name: self._manager.get_config_option.assert_called_once_with( 'service_instance_security_group') neutron_api.security_group_list.assert_called_once_with( {"name": name or config_name}) neutron_api.security_group_create.assert_called_once_with( name or config_name, desc) @ddt.data(None, 'fake_name') def 
test_security_group_creation_with_name_from_conf_allow_ssh(self, name): def fake_secgroup(*args, **kwargs): return {'security_group': {'id': 'fake_sg_id', 'name': args[0], 'description': args[1]}} config_name = "fake_sg_name_from_config" desc = "fake_sg_description" self.mock_object(self._manager, 'get_config_option', mock.Mock(return_value=name or config_name)) neutron_api = self._manager.network_helper.neutron_api neutron_api.security_group_list.return_value = {'security_groups': []} self.mock_object(neutron_api, 'security_group_create', mock.Mock(side_effect=fake_secgroup)) fake_ssh_allow_subnet = dict(cidr="10.254.0.1/24", id='allow_subnet_id') ssh_sg_name = 'manila-service-subnet-{}'.format( fake_ssh_allow_subnet['id']) result = self._manager._get_or_create_security_groups( context=self._manager.admin_context, name=name, description=desc, allow_ssh_subnet=fake_ssh_allow_subnet ) self.assertEqual([fake_secgroup(name if name else config_name, desc)['security_group'], fake_secgroup(ssh_sg_name, desc)['security_group']], result) if not name: self._manager.get_config_option.assert_called_with( 'service_instance_security_group') neutron_api.security_group_list.assert_has_calls([ mock.call({"name": name or config_name}), mock.call({"name": ssh_sg_name})]) neutron_api.security_group_create.assert_has_calls([ mock.call(name or config_name, desc), mock.call(ssh_sg_name, desc)]) def test_security_group_limit_ssh_invalid_subnet(self): def fake_secgroup(*args, **kwargs): return {'security_group': {'id': 'fake_sg_id', 'name': args[0], 'description': args[1]}} config_name = "fake_sg_name_from_config" desc = "fake_sg_description" self.mock_object(self._manager, 'get_config_option', mock.Mock(config_name)) neutron_api = self._manager.network_helper.neutron_api neutron_api.security_group_list.return_value = {'security_groups': []} self.mock_object(neutron_api, 'security_group_create', mock.Mock(side_effect=fake_secgroup)) fake_ssh_allow_subnet = dict(id='allow_subnet_id') self.assertRaises(exception.ManilaException, self._manager._get_or_create_security_groups, context=self._manager.admin_context, name=None, description=desc, allow_ssh_subnet=fake_ssh_allow_subnet) def test_security_group_two_sg_in_list(self): name = "fake_name" fake_secgroup1 = {'id': 'fake_sg_id1', 'name': name} fake_secgroup2 = {'id': 'fake_sg_id2', 'name': name} neutron_api = self._manager.network_helper.neutron_api neutron_api.security_group_list.return_value = { 'security_groups': [fake_secgroup1, fake_secgroup2]} self.assertRaises(exception.ServiceInstanceException, self._manager._get_or_create_security_groups, self._manager.admin_context, name) neutron_api.security_group_list.assert_called_once_with( {"name": name}) @ddt.data( dict(), dict(service_port_id='fake_service_port_id'), dict(public_port_id='fake_public_port_id'), dict(service_port_id='fake_service_port_id', public_port_id='fake_public_port_id'), ) def test_set_up_service_instance(self, update_data): fake_network_info = {'foo': 'bar', 'server_id': 'fake_server_id'} fake_server = { 'id': 'fake', 'ip': '1.2.3.4', 'public_address': '1.2.3.4', 'pk_path': None, 'subnet_id': 'fake-subnet-id', 'router_id': 'fake-router-id', 'username': self._manager.get_config_option( 'service_instance_user'), 'admin_ip': 'admin_ip'} fake_server.update(update_data) expected_details = fake_server.copy() expected_details.pop('pk_path') expected_details['instance_id'] = expected_details.pop('id') self.mock_object(self._manager, '_create_service_instance', mock.Mock(return_value=fake_server)) 
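# With _create_service_instance and _check_server_availability stubbed out,
# set_up_service_instance() should hand back the server details with 'id'
# renamed to 'instance_id' and 'pk_path' dropped, which is exactly what
# expected_details above models.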
self.mock_object(self._manager, '_check_server_availability') result = self._manager.set_up_service_instance( self._manager.admin_context, fake_network_info) self._manager._create_service_instance.assert_called_once_with( self._manager.admin_context, fake_network_info['server_id'], fake_network_info) self._manager._check_server_availability.assert_called_once_with( expected_details) self.assertEqual(expected_details, result) def test_set_up_service_instance_not_available(self): fake_network_info = {'foo': 'bar', 'server_id': 'fake_server_id'} fake_server = { 'id': 'fake', 'ip': '1.2.3.4', 'public_address': '1.2.3.4', 'pk_path': None, 'subnet_id': 'fake-subnet-id', 'router_id': 'fake-router-id', 'username': self._manager.get_config_option( 'service_instance_user'), 'admin_ip': 'admin_ip'} expected_details = fake_server.copy() expected_details.pop('pk_path') expected_details['instance_id'] = expected_details.pop('id') self.mock_object(self._manager, '_create_service_instance', mock.Mock(return_value=fake_server)) self.mock_object(self._manager, '_check_server_availability', mock.Mock(return_value=False)) result = self.assertRaises( exception.ServiceInstanceException, self._manager.set_up_service_instance, self._manager.admin_context, fake_network_info) self.assertTrue(hasattr(result, 'detail_data')) self.assertEqual( {'server_details': expected_details}, result.detail_data) self._manager._create_service_instance.assert_called_once_with( self._manager.admin_context, fake_network_info['server_id'], fake_network_info) self._manager._check_server_availability.assert_called_once_with( expected_details) def test_ensure_server(self): server_details = {'instance_id': 'fake_inst_id', 'ip': '1.2.3.4'} fake_server = fake_compute.FakeServer() self.mock_object(self._manager, '_check_server_availability', mock.Mock(return_value=True)) self.mock_object(self._manager.compute_api, 'server_get', mock.Mock(return_value=fake_server)) result = self._manager.ensure_service_instance( self._manager.admin_context, server_details) self._manager.compute_api.server_get.assert_called_once_with( self._manager.admin_context, server_details['instance_id']) self._manager._check_server_availability.assert_called_once_with( server_details) self.assertTrue(result) def test_ensure_server_not_exists(self): server_details = {'instance_id': 'fake_inst_id', 'ip': '1.2.3.4'} self.mock_object(self._manager, '_check_server_availability', mock.Mock(return_value=True)) self.mock_object(self._manager.compute_api, 'server_get', mock.Mock(side_effect=exception.InstanceNotFound( instance_id=server_details['instance_id']))) result = self._manager.ensure_service_instance( self._manager.admin_context, server_details) self._manager.compute_api.server_get.assert_called_once_with( self._manager.admin_context, server_details['instance_id']) self.assertFalse(self._manager._check_server_availability.called) self.assertFalse(result) def test_ensure_server_exception(self): server_details = {'instance_id': 'fake_inst_id', 'ip': '1.2.3.4'} self.mock_object(self._manager, '_check_server_availability', mock.Mock(return_value=True)) self.mock_object(self._manager.compute_api, 'server_get', mock.Mock(side_effect=exception.ManilaException)) self.assertRaises(exception.ManilaException, self._manager.ensure_service_instance, self._manager.admin_context, server_details) self._manager.compute_api.server_get.assert_called_once_with( self._manager.admin_context, server_details['instance_id']) self.assertFalse(self._manager._check_server_availability.called) def 
test_ensure_server_non_active(self): server_details = {'instance_id': 'fake_inst_id', 'ip': '1.2.3.4'} fake_server = fake_compute.FakeServer(status='ERROR') self.mock_object(self._manager.compute_api, 'server_get', mock.Mock(return_value=fake_server)) self.mock_object(self._manager, '_check_server_availability', mock.Mock(return_value=True)) result = self._manager.ensure_service_instance( self._manager.admin_context, server_details) self.assertFalse(self._manager._check_server_availability.called) self.assertFalse(result) def test_ensure_server_no_instance_id(self): # Tests that we avoid a KeyError if the share details don't have an # instance_id key set (so we can't find the share instance). self.assertFalse(self._manager.ensure_service_instance( self._manager.admin_context, {'ip': '1.2.3.4'})) def test_get_key_create_new(self): keypair_name = self._manager.get_config_option( 'manila_service_keypair_name') fake_keypair = fake_compute.FakeKeypair(name=keypair_name) self.mock_object(self._manager.compute_api, 'keypair_list', mock.Mock(return_value=[])) self.mock_object(self._manager.compute_api, 'keypair_import', mock.Mock(return_value=fake_keypair)) result = self._manager._get_key(self._manager.admin_context) self.assertEqual( (fake_keypair.name, os.path.expanduser(self._manager.get_config_option( 'path_to_private_key'))), result) self._manager.compute_api.keypair_list.assert_called_once_with( self._manager.admin_context) self._manager.compute_api.keypair_import.assert_called_once_with( self._manager.admin_context, keypair_name, '') def test_get_key_exists(self): fake_keypair = fake_compute.FakeKeypair( name=self._manager.get_config_option( 'manila_service_keypair_name'), public_key='fake_public_key') self.mock_object(self._manager.compute_api, 'keypair_list', mock.Mock(return_value=[fake_keypair])) self.mock_object(self._manager.compute_api, 'keypair_import', mock.Mock(return_value=fake_keypair)) self.mock_object(self._manager, '_execute', mock.Mock(return_value=('fake_public_key', ''))) result = self._manager._get_key(self._manager.admin_context) self._manager.compute_api.keypair_list.assert_called_once_with( self._manager.admin_context) self.assertFalse(self._manager.compute_api.keypair_import.called) self.assertEqual( (fake_keypair.name, os.path.expanduser(self._manager.get_config_option( 'path_to_private_key'))), result) def test_get_key_exists_recreate(self): fake_keypair = fake_compute.FakeKeypair( name=self._manager.get_config_option( 'manila_service_keypair_name'), public_key='fake_public_key1') self.mock_object(self._manager.compute_api, 'keypair_list', mock.Mock(return_value=[fake_keypair])) self.mock_object(self._manager.compute_api, 'keypair_import', mock.Mock(return_value=fake_keypair)) self.mock_object(self._manager.compute_api, 'keypair_delete') self.mock_object(self._manager, '_execute', mock.Mock(return_value=('fake_public_key2', ''))) result = self._manager._get_key(self._manager.admin_context) self._manager.compute_api.keypair_list.assert_called_once_with( self._manager.admin_context) self._manager.compute_api.keypair_delete.assert_called_once_with( self._manager.admin_context, fake_keypair.id) self._manager.compute_api.keypair_import.assert_called_once_with( self._manager.admin_context, fake_keypair.name, 'fake_public_key2') self.assertEqual( (fake_keypair.name, os.path.expanduser(self._manager.get_config_option( 'path_to_private_key'))), result) def test_get_key_more_than_one_exist(self): fake_keypair = fake_compute.FakeKeypair( name=self._manager.get_config_option( 
'manila_service_keypair_name'), public_key='fake_public_key1') self.mock_object(self._manager.compute_api, 'keypair_list', mock.Mock(return_value=[fake_keypair, fake_keypair])) self.assertRaises( exception.ServiceInstanceException, self._manager._get_key, self._manager.admin_context) self._manager.compute_api.keypair_list.assert_called_once_with( self._manager.admin_context) def test_get_key_keypath_to_public_not_set(self): self._manager.path_to_public_key = None result = self._manager._get_key(self._manager.admin_context) self.assertEqual((None, None), result) def test_get_key_keypath_to_private_not_set(self): self._manager.path_to_private_key = None result = self._manager._get_key(self._manager.admin_context) self.assertEqual((None, None), result) def test_get_key_incorrect_keypath_to_public(self): def exists_side_effect(path): return False if path == 'fake_path' else True self._manager.path_to_public_key = 'fake_path' os_path_exists_mock = mock.Mock(side_effect=exists_side_effect) with mock.patch.object(os.path, 'exists', os_path_exists_mock): with mock.patch.object(os.path, 'expanduser', mock.Mock(side_effect=lambda value: value)): result = self._manager._get_key(self._manager.admin_context) self.assertEqual((None, None), result) def test_get_key_incorrect_keypath_to_private(self): def exists_side_effect(path): return False if path == 'fake_path' else True self._manager.path_to_private_key = 'fake_path' os_path_exists_mock = mock.Mock(side_effect=exists_side_effect) with mock.patch.object(os.path, 'exists', os_path_exists_mock): with mock.patch.object(os.path, 'expanduser', mock.Mock(side_effect=lambda value: value)): result = self._manager._get_key(self._manager.admin_context) self.assertEqual((None, None), result) def test_get_service_image(self): fake_image = fake_compute.FakeImage( name=self._manager.get_config_option('service_image_name'), status='active') self.mock_object(self._manager.compute_api, 'image_get', mock.Mock(return_value=fake_image)) result = self._manager._get_service_image(self._manager.admin_context) self.assertEqual(fake_image, result) def test_get_service_image_not_active(self): fake_error_image = fake_compute.FakeImage( name='service_image_name', status='error') self.mock_object(self._manager.compute_api, 'image_get', mock.Mock(return_value=fake_error_image)) self.assertRaises( exception.ServiceInstanceException, self._manager._get_service_image, self._manager.admin_context) def test__delete_server_not_found(self): self.mock_object(self._manager.compute_api, 'server_delete') self.mock_object( self._manager.compute_api, 'server_get', mock.Mock(side_effect=exception.InstanceNotFound( instance_id=self.instance_id))) self._manager._delete_server( self._manager.admin_context, self.instance_id) self.assertFalse(self._manager.compute_api.server_delete.called) self._manager.compute_api.server_get.assert_called_once_with( self._manager.admin_context, self.instance_id) def test__delete_server(self): def fake_server_get(*args, **kwargs): ctx = args[0] if not hasattr(ctx, 'called'): ctx.called = True return else: raise exception.InstanceNotFound(instance_id=self.instance_id) self.mock_object(self._manager.compute_api, 'server_delete') self.mock_object(self._manager.compute_api, 'server_get', mock.Mock(side_effect=fake_server_get)) self._manager._delete_server( self._manager.admin_context, self.instance_id) self._manager.compute_api.server_delete.assert_called_once_with( self._manager.admin_context, self.instance_id) self._manager.compute_api.server_get.assert_has_calls([ 
mock.call(self._manager.admin_context, self.instance_id), mock.call(self._manager.admin_context, self.instance_id)]) def test__delete_server_found_always(self): self.fake_time = 0 def fake_time(): return self.fake_time def fake_sleep(time): self.fake_time += 1 server_details = {'instance_id': 'fake_inst_id', 'status': 'ACTIVE'} self.mock_object(self._manager.compute_api, 'server_delete') self.mock_object(self._manager.compute_api, 'server_get', mock.Mock(return_value=server_details)) self.mock_object(service_instance, 'time') self.mock_object( service_instance.time, 'time', mock.Mock(side_effect=fake_time)) self.mock_object( service_instance.time, 'sleep', mock.Mock(side_effect=fake_sleep)) self.mock_object(self._manager, 'max_time_to_build_instance', 2) self.assertRaises( exception.ServiceInstanceException, self._manager._delete_server, self._manager.admin_context, self.instance_id) self._manager.compute_api.server_delete.assert_called_once_with( self._manager.admin_context, self.instance_id) service_instance.time.sleep.assert_has_calls( [mock.call(mock.ANY) for i in range(2)]) service_instance.time.time.assert_has_calls( [mock.call() for i in range(4)]) self._manager.compute_api.server_get.assert_has_calls( [mock.call(self._manager.admin_context, self.instance_id) for i in range(3)]) def test_delete_server_soft_deleted(self): server_details = {'instance_id': 'fake_inst_id', 'status': 'SOFT_DELETED'} self.mock_object(self._manager.compute_api, 'server_delete') self.mock_object(self._manager.compute_api, 'server_get', mock.Mock(return_value=server_details)) self._manager._delete_server( self._manager.admin_context, self.instance_id) self._manager.compute_api.server_delete.assert_called_once_with( self._manager.admin_context, self.instance_id) self._manager.compute_api.server_get.assert_has_calls([ mock.call(self._manager.admin_context, self.instance_id), mock.call(self._manager.admin_context, self.instance_id)]) def test_delete_service_instance(self): fake_server_details = dict( router_id='foo', subnet_id='bar', instance_id='quuz') self.mock_object(self._manager, '_delete_server') self.mock_object(self._manager.network_helper, 'teardown_network') self._manager.delete_service_instance( self._manager.admin_context, fake_server_details) self._manager._delete_server.assert_called_once_with( self._manager.admin_context, fake_server_details['instance_id']) self._manager.network_helper.teardown_network.assert_called_once_with( fake_server_details) @ddt.data( *[{'service_config': service_config, 'tenant_config': tenant_config, 'server': server} for service_config, tenant_config in ( ('fake_net_s', 'fake_net_t'), ('fake_net_s', '12.34.56.78'), ('98.76.54.123', 'fake_net_t'), ('98.76.54.123', '12.34.56.78')) for server in ( {'networks': { 'fake_net_s': ['foo', '98.76.54.123', 'bar'], 'fake_net_t': ['baar', '12.34.56.78', 'quuz']}}, {'addresses': { 'fake_net_s': [ {'addr': 'fake1'}, {'addr': '98.76.54.123'}, {'addr': 'fake2'}], 'fake_net_t': [ {'addr': 'fake3'}, {'addr': '12.34.56.78'}, {'addr': 'fake4'}], }})]) @ddt.unpack def test_get_common_server_valid_cases(self, service_config, tenant_config, server): self._get_common_server(service_config, tenant_config, server, '98.76.54.123', '12.34.56.78', True) @ddt.data( *[{'service_config': service_config, 'tenant_config': tenant_config, 'server': server} for service_config, tenant_config in ( ('fake_net_s', 'fake'), ('fake', 'fake_net_t'), ('fake', 'fake'), ('98.76.54.123', '12.12.12.1212'), ('12.12.12.1212', '12.34.56.78'), ('12.12.12.1212', 
'12.12.12.1212'), ('1001::1001', '1001::100G'), ('1001::10G1', '1001::1001'), ) for server in ( {'networks': { 'fake_net_s': ['foo', '98.76.54.123', 'bar'], 'fake_net_t': ['baar', '12.34.56.78', 'quuz']}}, {'addresses': { 'fake_net_s': [ {'addr': 'fake1'}, {'addr': '98.76.54.123'}, {'addr': 'fake2'}], 'fake_net_t': [ {'addr': 'fake3'}, {'addr': '12.34.56.78'}, {'addr': 'fake4'}], }})]) @ddt.unpack def test_get_common_server_invalid_cases(self, service_config, tenant_config, server): self._get_common_server(service_config, tenant_config, server, '98.76.54.123', '12.34.56.78', False) @ddt.data( *[{'service_config': service_config, 'tenant_config': tenant_config, 'server': server} for service_config, tenant_config in ( ('fake_net_s', '1001::1002'), ('1001::1001', 'fake_net_t'), ('1001::1001', '1001::1002')) for server in ( {'networks': { 'fake_net_s': ['foo', '1001::1001'], 'fake_net_t': ['bar', '1001::1002']}}, {'addresses': { 'fake_net_s': [{'addr': 'foo'}, {'addr': '1001::1001'}], 'fake_net_t': [{'addr': 'bar'}, {'addr': '1001::1002'}]}})]) @ddt.unpack def test_get_common_server_valid_ipv6_address(self, service_config, tenant_config, server): self._get_common_server(service_config, tenant_config, server, '1001::1001', '1001::1002', True) def _get_common_server(self, service_config, tenant_config, server, service_address, network_address, is_valid=True): fake_instance_id = 'fake_instance_id' fake_user = 'fake_user' fake_pass = 'fake_pass' fake_server = {'id': fake_instance_id} fake_server.update(server) expected = { 'backend_details': { 'username': fake_user, 'password': fake_pass, 'pk_path': self._manager.path_to_private_key, 'ip': service_address, 'public_address': network_address, 'instance_id': fake_instance_id, } } def fake_get_config_option(attr): if attr == 'service_net_name_or_ip': return service_config elif attr == 'tenant_net_name_or_ip': return tenant_config elif attr == 'service_instance_name_or_id': return fake_instance_id elif attr == 'service_instance_user': return fake_user elif attr == 'service_instance_password': return fake_pass else: raise exception.ManilaException("Wrong test data provided.") self.mock_object( self._manager.compute_api, 'server_get_by_name_or_id', mock.Mock(return_value=fake_server)) self.mock_object( self._manager, 'get_config_option', mock.Mock(side_effect=fake_get_config_option)) if is_valid: actual = self._manager.get_common_server() self.assertEqual(expected, actual) else: self.assertRaises( exception.ManilaException, self._manager.get_common_server) self.assertTrue( self._manager.compute_api.server_get_by_name_or_id.called) def test___create_service_instance_with_sg_success(self): self.mock_object(service_instance, 'NeutronNetworkHelper', mock.Mock(side_effect=FakeNetworkHelper)) config_data = dict(DEFAULT=dict( driver_handles_share_servers=True, service_instance_user='fake_user', limit_ssh_access=True)) with test_utils.create_temp_config_with_opts(config_data): self._manager = service_instance.ServiceInstanceManager() server_create = dict(id='fakeid', status='CREATING', networks=dict()) net_name = self._manager.get_config_option("service_network_name") sg = [{'id': 'fakeid', 'name': 'fakename'}, ] ip_address = 'fake_ip_address' service_image_id = 'fake_service_image_id' key_data = 'fake_key_name', 'fake_key_path' instance_name = 'fake_instance_name' network_info = dict() network_data = {'nics': ['fake_nic1', 'fake_nic2']} network_data['router'] = dict(id='fake_router_id') server_get = dict( id='fakeid', status='ACTIVE', networks={net_name: 
[ip_address]}) network_data.update(dict( router_id='fake_router_id', subnet_id='fake_subnet_id', public_port=dict(id='fake_public_port', fixed_ips=[dict(ip_address=ip_address)]), service_port=dict(id='fake_service_port', fixed_ips=[{'ip_address': ip_address}]), admin_port={'id': 'fake_admin_port', 'fixed_ips': [{'ip_address': ip_address}]}, service_subnet={'id': 'fake_subnet_id', 'cidr': '10.254.0.0/28'}) ) self.mock_object(service_instance.time, 'time', mock.Mock(return_value=5)) self.mock_object(self._manager.network_helper, 'setup_network', mock.Mock(return_value=network_data)) self.mock_object(self._manager.network_helper, 'get_network_name', mock.Mock(return_value=net_name)) self.mock_object(self._manager, '_get_service_image', mock.Mock(return_value=service_image_id)) self.mock_object(self._manager, '_get_key', mock.Mock(return_value=key_data)) self.mock_object(self._manager, '_get_or_create_security_groups', mock.Mock(return_value=sg)) self.mock_object(self._manager.compute_api, 'server_create', mock.Mock(return_value=server_create)) self.mock_object(self._manager.compute_api, 'server_get', mock.Mock(return_value=server_get)) self.mock_object(self._manager.compute_api, 'add_security_group_to_server') expected = { 'id': server_get['id'], 'status': server_get['status'], 'pk_path': key_data[1], 'public_address': ip_address, 'router_id': network_data.get('router_id'), 'subnet_id': network_data.get('subnet_id'), 'instance_id': server_get['id'], 'ip': ip_address, 'networks': server_get['networks'], 'public_port_id': 'fake_public_port', 'service_port_id': 'fake_service_port', 'admin_port_id': 'fake_admin_port', 'admin_ip': 'fake_ip_address', } result = self._manager._create_service_instance( self._manager.admin_context, instance_name, network_info) self.assertEqual(expected, result) self.assertTrue(service_instance.time.time.called) self._manager.network_helper.setup_network.assert_called_once_with( network_info) self._manager._get_service_image.assert_called_once_with( self._manager.admin_context) self._manager._get_key.assert_called_once_with( self._manager.admin_context) self._manager._get_or_create_security_groups.assert_called_once_with( self._manager.admin_context, allow_ssh_subnet=network_data['service_subnet']) self._manager.compute_api.server_create.assert_called_once_with( self._manager.admin_context, name=instance_name, image=service_image_id, flavor='100', key_name=key_data[0], nics=network_data['nics'], availability_zone=service_instance.CONF.storage_availability_zone) self._manager.compute_api.server_get.assert_called_once_with( self._manager.admin_context, server_create['id']) (self._manager.compute_api.add_security_group_to_server. 
assert_called_once_with( self._manager.admin_context, server_get['id'], sg[0]['id'])) self._manager.network_helper.get_network_name.assert_has_calls([]) def test___create_service_instance_neutron_no_admin_ip(self): self.mock_object(service_instance, 'NeutronNetworkHelper', mock.Mock(side_effect=FakeNetworkHelper)) config_data = {'DEFAULT': { 'driver_handles_share_servers': True, 'service_instance_user': 'fake_user', 'limit_ssh_access': True}} with test_utils.create_temp_config_with_opts(config_data): self._manager = service_instance.ServiceInstanceManager() server_create = {'id': 'fakeid', 'status': 'CREATING', 'networks': {}} net_name = self._manager.get_config_option("service_network_name") sg = {'id': 'fakeid', 'name': 'fakename'} ip_address = 'fake_ip_address' service_image_id = 'fake_service_image_id' key_data = 'fake_key_name', 'fake_key_path' instance_name = 'fake_instance_name' network_info = {} network_data = { 'nics': ['fake_nic1', 'fake_nic2'], 'router_id': 'fake_router_id', 'subnet_id': 'fake_subnet_id', 'public_port': {'id': 'fake_public_port', 'fixed_ips': [{'ip_address': ip_address}]}, 'service_port': {'id': 'fake_service_port', 'fixed_ips': [{'ip_address': ip_address}]}, 'admin_port': {'id': 'fake_admin_port', 'fixed_ips': []}, 'router': {'id': 'fake_router_id'}, 'service_subnet': {'id': 'fake_id', 'cidr': '10.254.0.0/28'} } server_get = { 'id': 'fakeid', 'status': 'ACTIVE', 'networks': {net_name: [ip_address]}} self.mock_object(service_instance.time, 'time', mock.Mock(return_value=5)) self.mock_object(self._manager.network_helper, 'setup_network', mock.Mock(return_value=network_data)) self.mock_object(self._manager.network_helper, 'get_network_name', mock.Mock(return_value=net_name)) self.mock_object(self._manager, '_get_service_image', mock.Mock(return_value=service_image_id)) self.mock_object(self._manager, '_get_key', mock.Mock(return_value=key_data)) self.mock_object(self._manager, '_get_or_create_security_groups', mock.Mock(return_value=[sg, ])) self.mock_object(self._manager.compute_api, 'server_create', mock.Mock(return_value=server_create)) self.mock_object(self._manager.compute_api, 'server_get', mock.Mock(return_value=server_get)) self.mock_object(self._manager.compute_api, 'add_security_group_to_server') self.assertRaises( exception.AdminIPNotFound, self._manager._create_service_instance, self._manager.admin_context, instance_name, network_info) self.assertTrue(service_instance.time.time.called) self._manager.network_helper.setup_network.assert_called_once_with( network_info) self._manager._get_service_image.assert_called_once_with( self._manager.admin_context) self._manager._get_key.assert_called_once_with( self._manager.admin_context) self._manager._get_or_create_security_groups.assert_called_once_with( self._manager.admin_context, allow_ssh_subnet=network_data['service_subnet']) self._manager.compute_api.server_create.assert_called_once_with( self._manager.admin_context, name=instance_name, image=service_image_id, flavor='100', key_name=key_data[0], nics=network_data['nics'], availability_zone=service_instance.CONF.storage_availability_zone) self._manager.compute_api.server_get.assert_called_once_with( self._manager.admin_context, server_create['id']) (self._manager.compute_api.add_security_group_to_server. 
assert_called_once_with( self._manager.admin_context, server_get['id'], sg['id'])) self._manager.network_helper.get_network_name.assert_has_calls([]) @ddt.data( dict( instance_id_included=False, mockobj=mock.Mock(side_effect=exception.ServiceInstanceException)), dict( instance_id_included=True, mockobj=mock.Mock(return_value=dict(id='fakeid', status='ERROR')))) @ddt.unpack def test___create_service_instance_failed_to_create( self, instance_id_included, mockobj): service_image_id = 'fake_service_image_id' key_data = 'fake_key_name', 'fake_key_path' instance_name = 'fake_instance_name' network_info = dict() network_data = dict( nics=['fake_nic1', 'fake_nic2'], router_id='fake_router_id', subnet_id='fake_subnet_id') self.mock_object(self._manager.network_helper, 'setup_network', mock.Mock(return_value=network_data)) self.mock_object(self._manager, '_get_service_image', mock.Mock(return_value=service_image_id)) self.mock_object(self._manager, '_get_key', mock.Mock(return_value=key_data)) self.mock_object( self._manager.compute_api, 'server_create', mockobj) self.mock_object( self._manager, 'wait_for_instance_to_be_active', mock.Mock(side_effect=exception.ServiceInstanceException)) try: self._manager._create_service_instance( self._manager.admin_context, instance_name, network_info) except exception.ServiceInstanceException as e: expected = dict(server_details=dict( subnet_id=network_data['subnet_id'], router_id=network_data['router_id'])) if instance_id_included: expected['server_details']['instance_id'] = 'fakeid' self.assertEqual(expected, e.detail_data) else: raise exception.ManilaException('Expected error was not raised.') self._manager.network_helper.setup_network.assert_called_once_with( network_info) self._manager._get_service_image.assert_called_once_with( self._manager.admin_context) self._manager._get_key.assert_called_once_with( self._manager.admin_context) self._manager.compute_api.server_create.assert_called_once_with( self._manager.admin_context, name=instance_name, image=service_image_id, flavor='100', key_name=key_data[0], nics=network_data['nics'], availability_zone=service_instance.CONF.storage_availability_zone) def test___create_service_instance_limit_ssh_no_service_subnet(self): self.mock_object(service_instance, 'NeutronNetworkHelper', mock.Mock(side_effect=FakeNetworkHelper)) config_data = dict(DEFAULT=dict( driver_handles_share_servers=True, service_instance_user='fake_user', limit_ssh_access=True)) with test_utils.create_temp_config_with_opts(config_data): self._manager = service_instance.ServiceInstanceManager() server_create = dict(id='fakeid', status='CREATING', networks=dict()) net_name = self._manager.get_config_option("service_network_name") ip_address = 'fake_ip_address' service_image_id = 'fake_service_image_id' key_data = 'fake_key_name', 'fake_key_path' instance_name = 'fake_instance_name' network_info = dict() network_data = {'nics': ['fake_nic1', 'fake_nic2']} network_data['router'] = dict(id='fake_router_id') server_get = dict( id='fakeid', status='ACTIVE', networks={net_name: [ip_address]}) network_data.update(dict( router_id='fake_router_id', subnet_id='fake_subnet_id', public_port=dict(id='fake_public_port', fixed_ips=[dict(ip_address=ip_address)]), service_port=dict(id='fake_service_port', fixed_ips=[{'ip_address': ip_address}]), admin_port={'id': 'fake_admin_port', 'fixed_ips': [{'ip_address': ip_address}]},) ) self.mock_object(service_instance.time, 'time', mock.Mock(return_value=5)) self.mock_object(self._manager.network_helper, 'setup_network', 
mock.Mock(return_value=network_data)) self.mock_object(self._manager.network_helper, 'get_network_name', mock.Mock(return_value=net_name)) self.mock_object(self._manager, '_get_service_image', mock.Mock(return_value=service_image_id)) self.mock_object(self._manager, '_get_key', mock.Mock(return_value=key_data)) self.mock_object(self._manager.compute_api, 'server_create', mock.Mock(return_value=server_create)) self.mock_object(self._manager.compute_api, 'server_get', mock.Mock(return_value=server_get)) self.assertRaises(exception.ManilaException, self._manager._create_service_instance, self._manager.admin_context, instance_name, network_info) def test___create_service_instance_failed_to_build(self): server_create = dict(id='fakeid', status='CREATING', networks=dict()) service_image_id = 'fake_service_image_id' key_data = 'fake_key_name', 'fake_key_path' instance_name = 'fake_instance_name' network_info = dict() network_data = dict( nics=['fake_nic1', 'fake_nic2'], router_id='fake_router_id', subnet_id='fake_subnet_id') self.mock_object(self._manager.network_helper, 'setup_network', mock.Mock(return_value=network_data)) self.mock_object(self._manager, '_get_service_image', mock.Mock(return_value=service_image_id)) self.mock_object(self._manager, '_get_key', mock.Mock(return_value=key_data)) self.mock_object(self._manager.compute_api, 'server_create', mock.Mock(return_value=server_create)) self.mock_object( self._manager, 'wait_for_instance_to_be_active', mock.Mock(side_effect=exception.ServiceInstanceException)) try: self._manager._create_service_instance( self._manager.admin_context, instance_name, network_info) except exception.ServiceInstanceException as e: self.assertEqual( dict(server_details=dict(subnet_id=network_data['subnet_id'], router_id=network_data['router_id'], instance_id=server_create['id'])), e.detail_data) else: raise exception.ManilaException('Expected error was not raised.') self._manager.network_helper.setup_network.assert_called_once_with( network_info) self._manager._get_service_image.assert_called_once_with( self._manager.admin_context) self._manager._get_key.assert_called_once_with( self._manager.admin_context) self._manager.compute_api.server_create.assert_called_once_with( self._manager.admin_context, name=instance_name, image=service_image_id, flavor='100', key_name=key_data[0], nics=network_data['nics'], availability_zone=service_instance.CONF.storage_availability_zone) @ddt.data( dict(name=None, path=None), dict(name=None, path='/tmp')) @ddt.unpack def test__create_service_instance_no_key_and_no_path(self, name, path): key_data = name, path self.mock_object(self._manager, '_get_service_image') self.mock_object(self._manager, '_get_key', mock.Mock(return_value=key_data)) self.assertRaises( exception.ServiceInstanceException, self._manager._create_service_instance, self._manager.admin_context, 'fake_instance_name', dict()) self._manager._get_service_image.assert_called_once_with( self._manager.admin_context) self._manager._get_key.assert_called_once_with( self._manager.admin_context) @mock.patch('time.sleep') @mock.patch('time.time') def _test_wait_for_instance(self, mock_time, mock_sleep, server_get_side_eff=None, expected_try_count=1, expected_sleep_count=0, expected_ret_val=None, expected_exc=None): mock_server_get = mock.Mock(side_effect=server_get_side_eff) self.mock_object(self._manager.compute_api, 'server_get', mock_server_get) self.fake_time = 0 def fake_time(): return self.fake_time def fake_sleep(sleep_time): self.fake_time += sleep_time # 
Note(lpetrut): LOG methods can call time.time mock_time.side_effect = fake_time mock_sleep.side_effect = fake_sleep timeout = 3 if expected_exc: self.assertRaises( expected_exc, self._manager.wait_for_instance_to_be_active, instance_id=mock.sentinel.instance_id, timeout=timeout) else: instance = self._manager.wait_for_instance_to_be_active( instance_id=mock.sentinel.instance_id, timeout=timeout) self.assertEqual(expected_ret_val, instance) mock_server_get.assert_has_calls( [mock.call(self._manager.admin_context, mock.sentinel.instance_id)] * expected_try_count) mock_sleep.assert_has_calls([mock.call(1)] * expected_sleep_count) def test_wait_for_instance_timeout(self): server_get_side_eff = [ exception.InstanceNotFound( instance_id=mock.sentinel.instance_id), {'status': 'BUILDING'}, {'status': 'ACTIVE'}] # Note that in this case, although the status is active, the # 'networks' field is missing. self._test_wait_for_instance( # pylint: disable=no-value-for-parameter server_get_side_eff=server_get_side_eff, expected_exc=exception.ServiceInstanceException, expected_try_count=3, expected_sleep_count=3) def test_wait_for_instance_error_state(self): mock_instance = {'status': 'ERROR'} self._test_wait_for_instance( # pylint: disable=no-value-for-parameter server_get_side_eff=[mock_instance], expected_exc=exception.ServiceInstanceException, expected_try_count=1) def test_wait_for_instance_available(self): mock_instance = {'status': 'ACTIVE', 'networks': mock.sentinel.networks} self._test_wait_for_instance( # pylint: disable=no-value-for-parameter server_get_side_eff=[mock_instance], expected_try_count=1, expected_ret_val=mock_instance) def test_reboot_server(self): fake_server = {'instance_id': mock.sentinel.instance_id} soft_reboot = True mock_reboot = mock.Mock() self.mock_object(self._manager.compute_api, 'server_reboot', mock_reboot) self._manager.reboot_server(fake_server, soft_reboot) mock_reboot.assert_called_once_with(self._manager.admin_context, fake_server['instance_id'], soft_reboot) class BaseNetworkHelperTestCase(test.TestCase): """Tests Base network helper for service instance.""" def test_instantiate_valid(self): class FakeNetworkHelper(service_instance.BaseNetworkhelper): @property def NAME(self): return 'fake_NAME' def __init__(self, service_instance_manager): self.fake_init = 'fake_init_value' def get_network_name(self, network_info): return 'fake_network_name' def setup_connectivity_with_service_instances(self): return 'fake_setup_connectivity_with_service_instances' def setup_network(self, network_info): return 'fake_setup_network' def teardown_network(self, server_details): return 'fake_teardown_network' instance = FakeNetworkHelper('fake') attrs = [ 'fake_init', 'NAME', 'get_network_name', 'teardown_network', 'setup_connectivity_with_service_instances', 'setup_network', ] for attr in attrs: self.assertTrue(hasattr(instance, attr)) self.assertEqual('fake_init_value', instance.fake_init) self.assertEqual('fake_NAME', instance.NAME) self.assertEqual( 'fake_network_name', instance.get_network_name('fake')) self.assertEqual( 'fake_setup_connectivity_with_service_instances', instance.setup_connectivity_with_service_instances()) self.assertEqual('fake_setup_network', instance.setup_network('fake')) self.assertEqual( 'fake_teardown_network', instance.teardown_network('fake')) def test_instantiate_invalid(self): self.assertRaises( TypeError, service_instance.BaseNetworkhelper, 'fake') @ddt.ddt class NeutronNetworkHelperTestCase(test.TestCase): """Tests Neutron network helper for service 
instance.""" def setUp(self): super(NeutronNetworkHelperTestCase, self).setUp() self.mock_object(importutils, 'import_class') self.fake_manager = FakeServiceInstance() def _init_neutron_network_plugin(self): self.mock_object( service_instance.NeutronNetworkHelper, '_get_service_network_id', mock.Mock(return_value='fake_service_network_id')) return service_instance.NeutronNetworkHelper(self.fake_manager) def test_init_neutron_network_plugin(self): instance = self._init_neutron_network_plugin() self.assertEqual(service_instance.NEUTRON_NAME, instance.NAME) attrs = [ 'neutron_api', 'vif_driver', 'service_network_id', 'connect_share_server_to_tenant_network', 'get_config_option'] for attr in attrs: self.assertTrue(hasattr(instance, attr), "No attr '%s'" % attr) (service_instance.NeutronNetworkHelper._get_service_network_id. assert_called_once_with()) self.assertEqual('DEFAULT', instance.neutron_api.config_group_name) def test_init_neutron_network_plugin_with_driver_config_group(self): self.fake_manager.driver_config = mock.Mock() self.fake_manager.driver_config.config_group = ( 'fake_config_group') self.fake_manager.driver_config.network_config_group = None instance = self._init_neutron_network_plugin() self.assertEqual('fake_config_group', instance.neutron_api.config_group_name) def test_init_neutron_network_plugin_with_network_config_group(self): self.fake_manager.driver_config = mock.Mock() self.fake_manager.driver_config.config_group = ( "fake_config_group") self.fake_manager.driver_config.network_config_group = ( "fake_network_config_group") instance = self._init_neutron_network_plugin() self.assertEqual('fake_network_config_group', instance.neutron_api.config_group_name) def test_admin_project_id(self): instance = self._init_neutron_network_plugin() admin_project_id = 'fake_admin_project_id' self.mock_class('manila.network.neutron.api.API', mock.Mock()) instance.neutron_api.admin_project_id = admin_project_id self.assertEqual(admin_project_id, instance.admin_project_id) def test_get_network_name(self): network_info = dict(neutron_net_id='fake_neutron_net_id') network = dict(name='fake_network_name') instance = self._init_neutron_network_plugin() self.mock_object( instance.neutron_api, 'get_network', mock.Mock(return_value=network)) result = instance.get_network_name(network_info) self.assertEqual(network['name'], result) instance.neutron_api.get_network.assert_called_once_with( network_info['neutron_net_id']) def test_get_service_network_id_none_exist(self): service_network_name = fake_get_config_option('service_network_name') network = dict(id='fake_network_id') admin_project_id = 'fake_admin_project_id' self.mock_object( service_instance.neutron.API, 'get_all_admin_project_networks', mock.Mock(return_value=[])) self.mock_object( service_instance.neutron.API, 'admin_project_id', mock.Mock(return_value=admin_project_id)) self.mock_object( service_instance.neutron.API, 'network_create', mock.Mock(return_value=network)) instance = service_instance.NeutronNetworkHelper(self.fake_manager) result = instance._get_service_network_id() self.assertEqual(network['id'], result) self.assertTrue(service_instance.neutron.API. 
get_all_admin_project_networks.called) service_instance.neutron.API.network_create.assert_has_calls([ mock.call(instance.admin_project_id, service_network_name)]) def test_get_service_network_id_one_exist(self): service_network_name = fake_get_config_option('service_network_name') network = dict(id='fake_network_id', name=service_network_name) admin_project_id = 'fake_admin_project_id' self.mock_object( service_instance.neutron.API, 'get_all_admin_project_networks', mock.Mock(return_value=[network])) self.mock_object( service_instance.neutron.API, 'admin_project_id', mock.Mock(return_value=admin_project_id)) instance = service_instance.NeutronNetworkHelper(self.fake_manager) result = instance._get_service_network_id() self.assertEqual(network['id'], result) self.assertTrue(service_instance.neutron.API. get_all_admin_project_networks.called) def test_get_service_network_id_two_exist(self): service_network_name = fake_get_config_option('service_network_name') network = dict(id='fake_network_id', name=service_network_name) self.mock_object( service_instance.neutron.API, 'get_all_admin_project_networks', mock.Mock(return_value=[network, network])) helper = service_instance.NeutronNetworkHelper(self.fake_manager) self.assertRaises(exception.ManilaException, lambda: helper.service_network_id) (service_instance.neutron.API.get_all_admin_project_networks. assert_has_calls([mock.call()])) @ddt.data(dict(), dict(subnet_id='foo'), dict(router_id='bar')) def test_teardown_network_no_service_data(self, server_details): instance = self._init_neutron_network_plugin() self.mock_object( service_instance.neutron.API, 'router_remove_interface') instance.teardown_network(server_details) self.assertFalse( service_instance.neutron.API.router_remove_interface.called) @ddt.data( *[dict(server_details=sd, fail=f) for f in (True, False) for sd in (dict(service_port_id='fake_service_port_id'), dict(public_port_id='fake_public_port_id'), dict(service_port_id='fake_service_port_id', public_port_id='fake_public_port_id'))] ) @ddt.unpack def test_teardown_network_with_ports(self, server_details, fail): instance = self._init_neutron_network_plugin() self.mock_object( service_instance.neutron.API, 'router_remove_interface') if fail: delete_port_mock = mock.Mock( side_effect=exception.NetworkException(code=404)) else: delete_port_mock = mock.Mock() self.mock_object(instance.neutron_api, 'delete_port', delete_port_mock) self.mock_object(service_instance.LOG, 'debug') instance.teardown_network(server_details) self.assertFalse(instance.neutron_api.router_remove_interface.called) self.assertEqual( len(server_details), len(instance.neutron_api.delete_port.mock_calls)) for k, v in server_details.items(): self.assertIn( mock.call(v), instance.neutron_api.delete_port.mock_calls) if fail: service_instance.LOG.debug.assert_has_calls([ mock.call(mock.ANY, mock.ANY) for sd in server_details ]) else: service_instance.LOG.debug.assert_has_calls([]) @ddt.data( dict(service_port_id='fake_service_port_id'), dict(public_port_id='fake_public_port_id'), dict(service_port_id='fake_service_port_id', public_port_id='fake_public_port_id'), ) def test_teardown_network_with_ports_unhandled_exception(self, server_details): instance = self._init_neutron_network_plugin() self.mock_object( service_instance.neutron.API, 'router_remove_interface') delete_port_mock = mock.Mock( side_effect=exception.NetworkException(code=500)) self.mock_object( service_instance.neutron.API, 'delete_port', delete_port_mock) self.mock_object(service_instance.LOG, 'debug') 
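        # The surrounding teardown tests pin down the port-cleanup contract:
        # a NetworkException carrying code 404 (the port is already gone) is
        # swallowed and only logged at debug level, while any other code, as
        # asserted just below for 500, must propagate to the caller.  A
        # minimal sketch of that pattern, for illustration only (the helper
        # name and the e.kwargs lookup are assumptions, not a quote of the
        # driver code):
        #
        #     def _delete_port_best_effort(neutron_api, port_id):
        #         try:
        #             neutron_api.delete_port(port_id)
        #         except exception.NetworkException as e:
        #             if e.kwargs.get('code') != 404:
        #                 raise
        #             LOG.debug("Port %s not found during teardown, "
        #                       "skipping.", port_id)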
self.assertRaises( exception.NetworkException, instance.teardown_network, server_details, ) self.assertFalse( service_instance.neutron.API.router_remove_interface.called) service_instance.neutron.API.delete_port.assert_called_once_with( mock.ANY) service_instance.LOG.debug.assert_has_calls([]) def test_teardown_network_with_wrong_ports(self): instance = self._init_neutron_network_plugin() self.mock_object( service_instance.neutron.API, 'router_remove_interface') self.mock_object( service_instance.neutron.API, 'delete_port') self.mock_object(service_instance.LOG, 'debug') instance.teardown_network(dict(foo_id='fake_service_port_id')) service_instance.neutron.API.router_remove_interface.assert_has_calls( []) service_instance.neutron.API.delete_port.assert_has_calls([]) service_instance.LOG.debug.assert_has_calls([]) def test_teardown_network_subnet_is_used(self): server_details = dict(subnet_id='foo', router_id='bar') fake_ports = [ {'fixed_ips': [{'subnet_id': server_details['subnet_id']}], 'device_id': 'fake_device_id', 'device_owner': 'compute:foo'}, ] instance = self._init_neutron_network_plugin() self.mock_object( service_instance.neutron.API, 'router_remove_interface') self.mock_object( service_instance.neutron.API, 'update_subnet') self.mock_object( service_instance.neutron.API, 'list_ports', mock.Mock(return_value=fake_ports)) instance.teardown_network(server_details) self.assertFalse( service_instance.neutron.API.router_remove_interface.called) self.assertFalse(service_instance.neutron.API.update_subnet.called) service_instance.neutron.API.list_ports.assert_called_once_with( fields=['fixed_ips', 'device_id', 'device_owner']) def test_teardown_network_subnet_not_used(self): server_details = dict(subnet_id='foo', router_id='bar') fake_ports = [ {'fixed_ips': [{'subnet_id': server_details['subnet_id']}], 'device_id': 'fake_device_id', 'device_owner': 'network:router_interface'}, {'fixed_ips': [{'subnet_id': 'bar' + server_details['subnet_id']}], 'device_id': 'fake_device_id', 'device_owner': 'compute'}, {'fixed_ips': [{'subnet_id': server_details['subnet_id']}], 'device_id': '', 'device_owner': 'compute'}, ] instance = self._init_neutron_network_plugin() self.mock_object( service_instance.neutron.API, 'router_remove_interface') self.mock_object( service_instance.neutron.API, 'update_subnet') self.mock_object( service_instance.neutron.API, 'list_ports', mock.Mock(return_value=fake_ports)) instance.teardown_network(server_details) (service_instance.neutron.API.router_remove_interface. assert_called_once_with('bar', 'foo')) (service_instance.neutron.API.update_subnet. assert_called_once_with('foo', '')) service_instance.neutron.API.list_ports.assert_called_once_with( fields=['fixed_ips', 'device_id', 'device_owner']) def test_teardown_network_subnet_not_used_and_get_error_404(self): server_details = dict(subnet_id='foo', router_id='bar') fake_ports = [ {'fixed_ips': [{'subnet_id': server_details['subnet_id']}], 'device_id': 'fake_device_id', 'device_owner': 'fake'}, ] instance = self._init_neutron_network_plugin() self.mock_object( service_instance.neutron.API, 'router_remove_interface', mock.Mock(side_effect=exception.NetworkException(code=404))) self.mock_object( service_instance.neutron.API, 'update_subnet') self.mock_object( service_instance.neutron.API, 'list_ports', mock.Mock(return_value=fake_ports)) instance.teardown_network(server_details) (service_instance.neutron.API.router_remove_interface. assert_called_once_with('bar', 'foo')) (service_instance.neutron.API.update_subnet. 
assert_called_once_with('foo', '')) service_instance.neutron.API.list_ports.assert_called_once_with( fields=['fixed_ips', 'device_id', 'device_owner']) def test_teardown_network_subnet_not_used_get_unhandled_error(self): server_details = dict(subnet_id='foo', router_id='bar') fake_ports = [ {'fixed_ips': [{'subnet_id': server_details['subnet_id']}], 'device_id': 'fake_device_id', 'device_owner': 'fake'}, ] instance = self._init_neutron_network_plugin() self.mock_object( service_instance.neutron.API, 'router_remove_interface', mock.Mock(side_effect=exception.NetworkException(code=500))) self.mock_object( service_instance.neutron.API, 'update_subnet') self.mock_object( service_instance.neutron.API, 'list_ports', mock.Mock(return_value=fake_ports)) self.assertRaises( exception.NetworkException, instance.teardown_network, server_details) (service_instance.neutron.API.router_remove_interface. assert_called_once_with('bar', 'foo')) self.assertFalse(service_instance.neutron.API.update_subnet.called) service_instance.neutron.API.list_ports.assert_called_once_with( fields=['fixed_ips', 'device_id', 'device_owner']) def test_setup_network_and_connect_share_server_to_tenant_net(self): def fake_create_port(*aargs, **kwargs): if aargs[1] == 'fake_service_network_id': return self.service_port elif aargs[1] == 'fake_tenant_network_id': return self.public_port else: raise exception.ManilaException('Got unexpected data') admin_project_id = 'fake_admin_project_id' network_info = dict( neutron_net_id='fake_tenant_network_id', neutron_subnet_id='fake_tenant_subnet_id') cidr = '13.0.0.0/24' self.service_port = dict( id='fake_service_port_id', fixed_ips=[dict(ip_address='fake_service_port_ip_address')]) self.public_port = dict( id='fake_tenant_port_id', fixed_ips=[dict(ip_address='fake_public_port_ip_address')]) service_subnet = dict(id='fake_service_subnet') instance = self._init_neutron_network_plugin() instance.connect_share_server_to_tenant_network = True self.mock_object(instance, '_get_service_network_id', mock.Mock(return_value='fake_service_network_id')) self.mock_object( service_instance.neutron.API, 'admin_project_id', mock.Mock(return_value=admin_project_id)) self.mock_object( service_instance.neutron.API, 'create_port', mock.Mock(side_effect=fake_create_port)) self.mock_object( service_instance.neutron.API, 'subnet_create', mock.Mock(return_value=service_subnet)) self.mock_object( instance, 'setup_connectivity_with_service_instances', mock.Mock(return_value=service_subnet)) self.mock_object( instance, '_get_cidr_for_subnet', mock.Mock(return_value=cidr)) self.mock_object( instance, '_get_service_subnet', mock.Mock(return_value=None)) expected = { 'ip_address': self.public_port['fixed_ips'][0]['ip_address'], 'public_port': self.public_port, 'service_port': self.service_port, 'service_subnet': service_subnet, 'ports': [self.public_port, self.service_port], 'nics': [{'port-id': self.public_port['id']}, {'port-id': self.service_port['id']}]} result = instance.setup_network(network_info) self.assertEqual(expected, result) (instance.setup_connectivity_with_service_instances. 
assert_called_once_with()) instance._get_service_subnet.assert_called_once_with(mock.ANY) instance._get_cidr_for_subnet.assert_called_once_with() self.assertTrue(service_instance.neutron.API.subnet_create.called) self.assertTrue(service_instance.neutron.API.create_port.called) def test_setup_network_and_connect_share_server_to_tenant_net_admin(self): def fake_create_port(*aargs, **kwargs): if aargs[1] == 'fake_admin_network_id': return self.admin_port elif aargs[1] == 'fake_tenant_network_id': return self.public_port else: raise exception.ManilaException('Got unexpected data') admin_project_id = 'fake_admin_project_id' network_info = { 'neutron_net_id': 'fake_tenant_network_id', 'neutron_subnet_id': 'fake_tenant_subnet_id'} self.admin_port = { 'id': 'fake_admin_port_id', 'fixed_ips': [{'ip_address': 'fake_admin_port_ip_address'}]} self.public_port = { 'id': 'fake_tenant_port_id', 'fixed_ips': [{'ip_address': 'fake_public_port_ip_address'}]} instance = self._init_neutron_network_plugin() instance.use_admin_port = True instance.use_service_network = False instance.admin_network_id = 'fake_admin_network_id' instance.admin_subnet_id = 'fake_admin_subnet_id' instance.connect_share_server_to_tenant_network = True self.mock_object( service_instance.neutron.API, 'admin_project_id', mock.Mock(return_value=admin_project_id)) self.mock_object( service_instance.neutron.API, 'create_port', mock.Mock(side_effect=fake_create_port)) self.mock_object( instance, 'setup_connectivity_with_service_instances') expected = { 'ip_address': self.public_port['fixed_ips'][0]['ip_address'], 'public_port': self.public_port, 'admin_port': self.admin_port, 'ports': [self.public_port, self.admin_port], 'nics': [{'port-id': self.public_port['id']}, {'port-id': self.admin_port['id']}]} result = instance.setup_network(network_info) self.assertEqual(expected, result) (instance.setup_connectivity_with_service_instances. 
assert_called_once_with()) self.assertTrue(service_instance.neutron.API.create_port.called) @ddt.data(None, exception.NetworkException(code=400)) def test_setup_network_using_router_success(self, return_obj): admin_project_id = 'fake_admin_project_id' network_info = dict( neutron_net_id='fake_tenant_network_id', neutron_subnet_id='fake_tenant_subnet_id') cidr = '13.0.0.0/24' self.admin_port = { 'id': 'fake_admin_port_id', 'fixed_ips': [{'ip_address': 'fake_admin_port_ip_address'}]} self.service_port = dict( id='fake_service_port_id', fixed_ips=[dict(ip_address='fake_service_port_ip_address')]) service_subnet = dict(id='fake_service_subnet') instance = self._init_neutron_network_plugin() instance.use_admin_port = True instance.admin_network_id = 'fake_admin_network_id' instance.admin_subnet_id = 'fake_admin_subnet_id' instance.connect_share_server_to_tenant_network = False self.mock_object(instance, '_get_service_network_id', mock.Mock(return_value='fake_service_network_id')) router = dict(id='fake_router_id') self.mock_object( service_instance.neutron.API, 'admin_project_id', mock.Mock(return_value=admin_project_id)) self.mock_object( service_instance.neutron.API, 'create_port', mock.Mock(side_effect=[self.service_port, self.admin_port])) self.mock_object( service_instance.neutron.API, 'subnet_create', mock.Mock(return_value=service_subnet)) self.mock_object( instance, '_get_private_router', mock.Mock(return_value=router)) self.mock_object( service_instance.neutron.API, 'router_add_interface', mock.Mock(side_effect=return_obj)) self.mock_object(instance, 'setup_connectivity_with_service_instances') self.mock_object( instance, '_get_cidr_for_subnet', mock.Mock(return_value=cidr)) self.mock_object( instance, '_get_service_subnet', mock.Mock(return_value=None)) expected = { 'ip_address': self.service_port['fixed_ips'][0]['ip_address'], 'service_port': self.service_port, 'service_subnet': service_subnet, 'admin_port': self.admin_port, 'router': router, 'ports': [self.service_port, self.admin_port], 'nics': [{'port-id': self.service_port['id']}, {'port-id': self.admin_port['id']}]} result = instance.setup_network(network_info) self.assertEqual(expected, result) (instance.setup_connectivity_with_service_instances. assert_called_once_with()) instance._get_service_subnet.assert_called_once_with(mock.ANY) instance._get_cidr_for_subnet.assert_called_once_with() self.assertTrue(service_instance.neutron.API.subnet_create.called) self.assertTrue(service_instance.neutron.API.create_port.called) instance._get_private_router.assert_called_once_with( network_info['neutron_net_id'], network_info['neutron_subnet_id']) (service_instance.neutron.API.router_add_interface. 
assert_called_once_with(router['id'], service_subnet['id'])) def test_setup_network_using_router_addon_of_interface_failed(self): network_info = dict( neutron_net_id='fake_tenant_network_id', neutron_subnet_id='fake_tenant_subnet_id') service_subnet = dict(id='fake_service_subnet') instance = self._init_neutron_network_plugin() instance.connect_share_server_to_tenant_network = False self.mock_object(instance, '_get_service_network_id', mock.Mock(return_value='fake_service_network_id')) router = dict(id='fake_router_id') self.mock_object( instance, '_get_private_router', mock.Mock(return_value=router)) self.mock_object( service_instance.neutron.API, 'router_add_interface', mock.Mock(side_effect=exception.NetworkException(code=500))) self.mock_object( instance, '_get_service_subnet', mock.Mock(return_value=service_subnet)) self.assertRaises( exception.NetworkException, instance.setup_network, network_info) instance._get_service_subnet.assert_called_once_with(mock.ANY) instance._get_private_router.assert_called_once_with( network_info['neutron_net_id'], network_info['neutron_subnet_id']) (service_instance.neutron.API.router_add_interface. assert_called_once_with(router['id'], service_subnet['id'])) def test_setup_network_using_router_connectivity_verification_fail(self): admin_project_id = 'fake_admin_project_id' network_info = dict( neutron_net_id='fake_tenant_network_id', neutron_subnet_id='fake_tenant_subnet_id') cidr = '13.0.0.0/24' self.service_port = dict( id='fake_service_port_id', fixed_ips=[dict(ip_address='fake_service_port_ip_address')]) service_subnet = dict(id='fake_service_subnet') instance = self._init_neutron_network_plugin() instance.connect_share_server_to_tenant_network = False self.mock_object(instance, '_get_service_network_id', mock.Mock(return_value='fake_service_network_id')) router = dict(id='fake_router_id') self.mock_object( service_instance.neutron.API, 'admin_project_id', mock.Mock(return_value=admin_project_id)) self.mock_object( service_instance.neutron.API, 'create_port', mock.Mock(return_value=self.service_port)) self.mock_object( service_instance.neutron.API, 'subnet_create', mock.Mock(return_value=service_subnet)) self.mock_object(service_instance.neutron.API, 'delete_port') self.mock_object( instance, '_get_private_router', mock.Mock(return_value=router)) self.mock_object( service_instance.neutron.API, 'router_add_interface') self.mock_object( instance, 'setup_connectivity_with_service_instances', mock.Mock(side_effect=exception.ManilaException('Fake'))) self.mock_object( instance, '_get_cidr_for_subnet', mock.Mock(return_value=cidr)) self.mock_object( instance, '_get_service_subnet', mock.Mock(return_value=None)) self.assertRaises( exception.ManilaException, instance.setup_network, network_info) (instance.setup_connectivity_with_service_instances. assert_called_once_with()) instance._get_service_subnet.assert_called_once_with(mock.ANY) instance._get_cidr_for_subnet.assert_called_once_with() self.assertTrue(service_instance.neutron.API.subnet_create.called) self.assertTrue(service_instance.neutron.API.create_port.called) instance._get_private_router.assert_called_once_with( network_info['neutron_net_id'], network_info['neutron_subnet_id']) (service_instance.neutron.API.router_add_interface. 
assert_called_once_with(router['id'], service_subnet['id'])) service_instance.neutron.API.delete_port.assert_has_calls([ mock.call(self.service_port['id'])]) def test__get_cidr_for_subnet_success(self): expected = ( fake_get_config_option('service_network_cidr').split('/')[0] + '/' + six.text_type( fake_get_config_option('service_network_division_mask'))) instance = self._init_neutron_network_plugin() self.mock_object( instance, '_get_all_service_subnets', mock.Mock(return_value=[])) result = instance._get_cidr_for_subnet() self.assertEqual(expected, result) instance._get_all_service_subnets.assert_called_once_with() def test__get_cidr_for_subnet_failure(self): subnets = [] serv_cidr = netaddr.IPNetwork( fake_get_config_option('service_network_cidr')) division_mask = fake_get_config_option('service_network_division_mask') for subnet in serv_cidr.subnet(division_mask): subnets.append(dict(cidr=six.text_type(subnet.cidr))) instance = self._init_neutron_network_plugin() self.mock_object( instance, '_get_all_service_subnets', mock.Mock(return_value=subnets)) self.assertRaises( exception.ServiceInstanceException, instance._get_cidr_for_subnet) instance._get_all_service_subnets.assert_called_once_with() def test_setup_connectivity_with_service_instances(self): instance = self._init_neutron_network_plugin() instance.use_admin_port = True instance.admin_network_id = 'fake_admin_network_id' instance.admin_subnet_id = 'fake_admin_subnet_id' interface_name_service = 'fake_interface_name_service' interface_name_admin = 'fake_interface_name_admin' fake_division_mask = fake_get_config_option( 'service_network_division_mask') fake_subnet_service = fake_network.FakeSubnet( cidr='10.254.0.0/%s' % fake_division_mask) fake_subnet_admin = fake_network.FakeSubnet(id='fake_admin_subnet_id', cidr='10.0.0.0/24') fake_service_port = fake_network.FakePort(fixed_ips=[ {'subnet_id': fake_subnet_service['id'], 'ip_address': '10.254.0.2'}], mac_address='fake_mac_address') fake_admin_port = fake_network.FakePort(fixed_ips=[ {'subnet_id': fake_subnet_admin['id'], 'ip_address': '10.0.0.4'}], mac_address='fake_mac_address') self.mock_object(instance, '_get_service_port', mock.Mock(side_effect=[fake_service_port, fake_admin_port])) self.mock_object(instance, '_add_fixed_ips_to_service_port', mock.Mock(return_value=fake_service_port)) self.mock_object(instance.vif_driver, 'get_device_name', mock.Mock(side_effect=[interface_name_service, interface_name_admin])) self.mock_object(instance.neutron_api, 'get_subnet', mock.Mock(side_effect=[fake_subnet_service, fake_subnet_admin, fake_subnet_admin])) self.mock_object(instance.vif_driver, 'plug') device_mock = mock.Mock() self.mock_object(service_instance.ip_lib, 'IPDevice', mock.Mock(return_value=device_mock)) instance.setup_connectivity_with_service_instances() instance._get_service_port.assert_has_calls([ mock.call(instance.service_network_id, None, 'manila-share'), mock.call('fake_admin_network_id', 'fake_admin_subnet_id', 'manila-admin-share')]) instance.vif_driver.get_device_name.assert_has_calls([ mock.call(fake_service_port), mock.call(fake_admin_port)]) instance.vif_driver.plug.assert_has_calls([ mock.call(interface_name_service, fake_service_port['id'], fake_service_port['mac_address']), mock.call(interface_name_admin, fake_admin_port['id'], fake_admin_port['mac_address'])]) instance.neutron_api.get_subnet.assert_has_calls([ mock.call(fake_subnet_service['id']), mock.call(fake_subnet_admin['id'])]) instance.vif_driver.init_l3.assert_has_calls([ 
mock.call(interface_name_service, ['10.254.0.2/%s' % fake_division_mask], clear_cidrs=[]), mock.call(interface_name_admin, ['10.0.0.4/24'], clear_cidrs=[fake_subnet_admin['cidr']])]) service_instance.ip_lib.IPDevice.assert_has_calls([ mock.call(interface_name_service), mock.call(interface_name_admin)]) def test__get_service_port_none_exist(self): instance = self._init_neutron_network_plugin() admin_project_id = 'fake_admin_project_id' fake_port_values = {'device_id': 'manila-share', 'binding:host_id': 'fake_host'} self.mock_object( service_instance.neutron.API, 'admin_project_id', mock.Mock(return_value=admin_project_id)) fake_service_port = fake_network.FakePort(device_id='manila-share') self.mock_object(instance.neutron_api, 'list_ports', mock.Mock(return_value=[])) self.mock_object(service_instance.socket, 'gethostname', mock.Mock(return_value='fake_host')) self.mock_object(instance.neutron_api, 'create_port', mock.Mock(return_value=fake_service_port)) self.mock_object(instance.neutron_api, 'update_port_fixed_ips', mock.Mock(return_value=fake_service_port)) result = instance._get_service_port(instance.service_network_id, None, 'manila-share') instance.neutron_api.list_ports.assert_called_once_with( **fake_port_values) instance.neutron_api.create_port.assert_called_once_with( instance.admin_project_id, instance.service_network_id, device_id='manila-share', device_owner='manila:share', host_id='fake_host', subnet_id=None, port_security_enabled=False) service_instance.socket.gethostname.assert_called_once_with() self.assertFalse(instance.neutron_api.update_port_fixed_ips.called) self.assertEqual(fake_service_port, result) def test__get_service_port_one_exist_on_same_host(self): instance = self._init_neutron_network_plugin() fake_port_values = {'device_id': 'manila-share', 'binding:host_id': 'fake_host'} fake_service_port = fake_network.FakePort(**fake_port_values) self.mock_object(service_instance.socket, 'gethostname', mock.Mock(return_value='fake_host')) self.mock_object(instance.neutron_api, 'list_ports', mock.Mock(return_value=[fake_service_port])) self.mock_object(instance.neutron_api, 'create_port', mock.Mock(return_value=fake_service_port)) self.mock_object(instance.neutron_api, 'update_port_fixed_ips', mock.Mock(return_value=fake_service_port)) result = instance._get_service_port(instance.service_network_id, None, 'manila-share') instance.neutron_api.list_ports.assert_called_once_with( **fake_port_values) self.assertFalse(instance.neutron_api.create_port.called) self.assertFalse(instance.neutron_api.update_port_fixed_ips.called) self.assertEqual(fake_service_port, result) def test__get_service_port_one_exist_on_different_host(self): instance = self._init_neutron_network_plugin() admin_project_id = 'fake_admin_project_id' fake_port = {'device_id': 'manila-share', 'binding:host_id': 'fake_host'} self.mock_object( service_instance.neutron.API, 'admin_project_id', mock.Mock(return_value=admin_project_id)) fake_service_port = fake_network.FakePort(**fake_port) self.mock_object(instance.neutron_api, 'list_ports', mock.Mock(return_value=[])) self.mock_object(service_instance.socket, 'gethostname', mock.Mock(return_value='fake_host')) self.mock_object(instance.neutron_api, 'create_port', mock.Mock(return_value=fake_service_port)) self.mock_object(instance.neutron_api, 'update_port_fixed_ips', mock.Mock(return_value=fake_service_port)) result = instance._get_service_port(instance.service_network_id, None, 'manila-share') instance.neutron_api.list_ports.assert_called_once_with( **fake_port) 
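        # _get_service_port() looks existing ports up by device_id and by
        # 'binding:host_id' (the local hostname), so a service port bound to
        # a different host is never returned by the query and a fresh port
        # is created with port security disabled, as asserted below.
        # Roughly, as an illustrative sketch rather than the driver code:
        #
        #     ports = neutron_api.list_ports(
        #         device_id='manila-share', **{'binding:host_id': host})
        #     if len(ports) > 1:
        #         raise exception.ServiceInstanceException(
        #             'Ambiguous service ports.')
        #     port = ports[0] if ports else neutron_api.create_port(
        #         project_id, network_id, device_id='manila-share',
        #         device_owner='manila:share', host_id=host,
        #         subnet_id=None, port_security_enabled=False)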
instance.neutron_api.create_port.assert_called_once_with( instance.admin_project_id, instance.service_network_id, device_id='manila-share', device_owner='manila:share', host_id='fake_host', subnet_id=None, port_security_enabled=False) service_instance.socket.gethostname.assert_called_once_with() self.assertFalse(instance.neutron_api.update_port_fixed_ips.called) self.assertEqual(fake_service_port, result) def test__get_service_port_two_exist_on_same_host(self): instance = self._init_neutron_network_plugin() fake_service_port = fake_network.FakePort(**{ 'device_id': 'manila-share', 'binding:host_id': 'fake_host'}) self.mock_object( instance.neutron_api, 'list_ports', mock.Mock(return_value=[fake_service_port, fake_service_port])) self.mock_object(service_instance.socket, 'gethostname', mock.Mock(return_value='fake_host')) self.mock_object(instance.neutron_api, 'create_port', mock.Mock(return_value=fake_service_port)) self.assertRaises( exception.ServiceInstanceException, instance._get_service_port, instance.service_network_id, None, 'manila-share') self.assertFalse(instance.neutron_api.create_port.called) def test__add_fixed_ips_to_service_port(self): ip_address1 = '13.0.0.13' subnet_id1 = 'fake_subnet_id1' subnet_id2 = 'fake_subnet_id2' port = dict(id='fooport', fixed_ips=[dict( subnet_id=subnet_id1, ip_address=ip_address1)]) expected = mock.Mock() network = dict(subnets=[subnet_id1, subnet_id2]) instance = self._init_neutron_network_plugin() self.mock_object(instance.neutron_api, 'get_network', mock.Mock(return_value=network)) self.mock_object(instance.neutron_api, 'update_port_fixed_ips', mock.Mock(return_value=expected)) result = instance._add_fixed_ips_to_service_port(port) self.assertEqual(expected, result) instance.neutron_api.get_network.assert_called_once_with( instance.service_network_id) instance.neutron_api.update_port_fixed_ips.assert_called_once_with( port['id'], dict(fixed_ips=[ dict(subnet_id=subnet_id1, ip_address=ip_address1), dict(subnet_id=subnet_id2)])) def test__get_private_router_success(self): instance = self._init_neutron_network_plugin() network = fake_network.FakeNetwork() subnet = fake_network.FakeSubnet(gateway_ip='fake_ip') router = fake_network.FakeRouter(id='fake_router_id') port = fake_network.FakePort(fixed_ips=[ dict(subnet_id=subnet['id'], ip_address=subnet['gateway_ip'])], device_id=router['id']) self.mock_object(instance.neutron_api, 'get_subnet', mock.Mock(return_value=subnet)) self.mock_object(instance.neutron_api, 'list_ports', mock.Mock(return_value=[port])) self.mock_object(instance.neutron_api, 'show_router', mock.Mock(return_value=router)) result = instance._get_private_router(network['id'], subnet['id']) self.assertEqual(router, result) instance.neutron_api.get_subnet.assert_called_once_with(subnet['id']) instance.neutron_api.list_ports.assert_called_once_with( network_id=network['id']) instance.neutron_api.show_router.assert_called_once_with(router['id']) def test__get_private_router_no_gateway(self): instance = self._init_neutron_network_plugin() subnet = fake_network.FakeSubnet(gateway_ip='') self.mock_object(instance.neutron_api, 'get_subnet', mock.Mock(return_value=subnet)) self.assertRaises( exception.ServiceInstanceException, instance._get_private_router, 'fake_network_id', subnet['id']) instance.neutron_api.get_subnet.assert_called_once_with( subnet['id']) def test__get_private_router_subnet_is_not_attached_to_the_router(self): instance = self._init_neutron_network_plugin() network_id = 'fake_network_id' subnet = 
fake_network.FakeSubnet(gateway_ip='fake_ip') self.mock_object(instance.neutron_api, 'get_subnet', mock.Mock(return_value=subnet)) self.mock_object(instance.neutron_api, 'list_ports', mock.Mock(return_value=[])) self.assertRaises( exception.ServiceInstanceException, instance._get_private_router, network_id, subnet['id']) instance.neutron_api.get_subnet.assert_called_once_with( subnet['id']) instance.neutron_api.list_ports.assert_called_once_with( network_id=network_id) def test__get_service_subnet_none_found(self): subnet_name = 'fake_subnet_name' instance = self._init_neutron_network_plugin() self.mock_object(instance, '_get_all_service_subnets', mock.Mock(return_value=[])) result = instance._get_service_subnet(subnet_name) self.assertIsNone(result) instance._get_all_service_subnets.assert_called_once_with() def test__get_service_subnet_unused_found(self): subnet_name = 'fake_subnet_name' subnets = [fake_network.FakeSubnet(id='foo', name=''), fake_network.FakeSubnet(id='bar', name='quuz')] instance = self._init_neutron_network_plugin() self.mock_object(instance.neutron_api, 'update_subnet') self.mock_object(instance, '_get_all_service_subnets', mock.Mock(return_value=subnets)) result = instance._get_service_subnet(subnet_name) self.assertEqual(subnets[0], result) instance._get_all_service_subnets.assert_called_once_with() instance.neutron_api.update_subnet.assert_called_once_with( subnets[0]['id'], subnet_name) def test__get_service_subnet_one_found(self): subnet_name = 'fake_subnet_name' subnets = [fake_network.FakeSubnet(id='foo', name='quuz'), fake_network.FakeSubnet(id='bar', name=subnet_name)] instance = self._init_neutron_network_plugin() self.mock_object(instance, '_get_all_service_subnets', mock.Mock(return_value=subnets)) result = instance._get_service_subnet(subnet_name) self.assertEqual(subnets[1], result) instance._get_all_service_subnets.assert_called_once_with() def test__get_service_subnet_two_found(self): subnet_name = 'fake_subnet_name' subnets = [fake_network.FakeSubnet(id='foo', name=subnet_name), fake_network.FakeSubnet(id='bar', name=subnet_name)] instance = self._init_neutron_network_plugin() self.mock_object(instance, '_get_all_service_subnets', mock.Mock(return_value=subnets)) self.assertRaises( exception.ServiceInstanceException, instance._get_service_subnet, subnet_name) instance._get_all_service_subnets.assert_called_once_with() def test__get_all_service_subnets(self): subnet_id1 = 'fake_subnet_id1' subnet_id2 = 'fake_subnet_id2' instance = self._init_neutron_network_plugin() network = dict(subnets=[subnet_id1, subnet_id2]) self.mock_object(instance.neutron_api, 'get_subnet', mock.Mock(side_effect=lambda s_id: dict(id=s_id))) self.mock_object(instance.neutron_api, 'get_network', mock.Mock(return_value=network)) result = instance._get_all_service_subnets() self.assertEqual([dict(id=subnet_id1), dict(id=subnet_id2)], result) instance.neutron_api.get_network.assert_called_once_with( instance.service_network_id) instance.neutron_api.get_subnet.assert_has_calls([ mock.call(subnet_id1), mock.call(subnet_id2)]) manila-10.0.0/manila/tests/share/drivers/zfssa/0000775000175000017500000000000013656750362021401 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/zfssa/__init__.py0000664000175000017500000000000013656750227023500 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/zfssa/test_zfssashare.py0000664000175000017500000004735113656750227025175 0ustar zuulzuul00000000000000# Copyright (c) 2014, Oracle and/or its affiliates. 
All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for Oracle's ZFSSA Manila driver. """ from unittest import mock from oslo_config import cfg from oslo_utils import units from manila import context from manila import exception from manila.share import configuration as conf from manila.share.drivers.zfssa import zfssashare from manila import test from manila.tests import fake_zfssa CONF = cfg.CONF class ZFSSAShareDriverTestCase(test.TestCase): """Tests ZFSSAShareDriver.""" share = { 'id': 'fakeid', 'name': 'fakename', 'size': 1, 'share_proto': 'NFS', 'export_location': '/mnt/nfs/volume-00002', } share2 = { 'id': 'fakeid2', 'name': 'fakename2', 'size': 4, 'share_proto': 'CIFS', 'export_location': '/mnt/nfs/volume-00003', 'space_data': 3006477107 } snapshot = { 'id': 'fakesnapshotid', 'share_name': 'fakename', 'share_id': 'fakeid', 'name': 'fakesnapshotname', 'share_size': 1, 'share_proto': 'NFS', } access = { 'id': 'fakeaccid', 'access_type': 'ip', 'access_to': '10.0.0.2', 'state': 'active', } @mock.patch.object(zfssashare, 'factory_zfssa') def setUp(self, _factory_zfssa): super(ZFSSAShareDriverTestCase, self).setUp() self._create_fake_config() lcfg = self.configuration self.mountpoint = '/export/' + lcfg.zfssa_nas_mountpoint _factory_zfssa.return_value = fake_zfssa.FakeZFSSA() _factory_zfssa.set_host(lcfg.zfssa_host) _factory_zfssa.login(lcfg.zfssa_auth_user) self._context = context.get_admin_context() self._driver = zfssashare.ZFSSAShareDriver(False, configuration=lcfg) self._driver.do_setup(self._context) self.fake_proto_share = { 'id': self.share['id'], 'share_proto': 'fake_proto', 'export_locations': [{'path': self.share['export_location']}], } self.test_share = { 'id': self.share['id'], 'share_proto': 'NFS', 'export_locations': [{'path': self.share['export_location']}], } self.test_share2 = { 'id': self.share2['id'], 'share_proto': 'CIFS', 'export_locations': [{'path': self.share2['export_location']}], } self.driver_options = {'zfssa_name': self.share['name']} def _create_fake_config(self): def _safe_get(opt): return getattr(self.configuration, opt) self.configuration = mock.Mock(spec=conf.Configuration) self.configuration.safe_get = mock.Mock(side_effect=_safe_get) self.configuration.zfssa_host = '1.1.1.1' self.configuration.zfssa_data_ip = '1.1.1.1' self.configuration.zfssa_auth_user = 'user' self.configuration.zfssa_auth_password = 'passwd' self.configuration.zfssa_pool = 'pool' self.configuration.zfssa_project = 'project' self.configuration.zfssa_nas_mountpoint = 'project' self.configuration.zfssa_nas_checksum = 'fletcher4' self.configuration.zfssa_nas_logbias = 'latency' self.configuration.zfssa_nas_compression = 'off' self.configuration.zfssa_nas_vscan = 'false' self.configuration.zfssa_nas_rstchown = 'true' self.configuration.zfssa_nas_quota_snap = 'true' self.configuration.zfssa_rest_timeout = 60 self.configuration.network_config_group = 'fake_network_config_group' self.configuration.admin_network_config_group = ( 'fake_admin_network_config_group') 
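# The fake configuration below disables share servers and selects the
# 'strict' manage policy, so the manage/unmanage tests further down expect
# the custom:manila_managed schema property to be enforced.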
self.configuration.driver_handles_share_servers = False self.configuration.zfssa_manage_policy = 'strict' def test_create_share(self): self.mock_object(self._driver.zfssa, 'create_share') self.mock_object(self._driver, '_export_location') lcfg = self.configuration arg = { 'host': lcfg.zfssa_data_ip, 'mountpoint': self.mountpoint, 'name': self.share['id'], } location = ("%(host)s:%(mountpoint)s/%(name)s" % arg) self._driver._export_location.return_value = location arg = self._driver.create_arg(self.share['size']) arg.update(self._driver.default_args) arg.update({'name': self.share['id']}) ret = self._driver.create_share(self._context, self.share) self._driver.zfssa.create_share.assert_called_with(lcfg.zfssa_pool, lcfg.zfssa_project, arg) self.assertEqual(location, ret) self.assertEqual(1, self._driver.zfssa.create_share.call_count) self.assertEqual(1, self._driver._export_location.call_count) def test_create_share_from_snapshot(self): self.mock_object(self._driver.zfssa, 'clone_snapshot') self.mock_object(self._driver, '_export_location') lcfg = self.configuration arg = { 'host': lcfg.zfssa_data_ip, 'mountpoint': self.mountpoint, 'name': self.share['id'], } location = ("%(host)s:%(mountpoint)s/%(name)s" % arg) self._driver._export_location.return_value = location arg = self._driver.create_arg(self.share['size']) details = { 'share': self.share['id'], 'project': lcfg.zfssa_project, } arg.update(details) ret = self._driver.create_share_from_snapshot(self._context, self.share, self.snapshot) self.assertEqual(location, ret) self.assertEqual(1, self._driver.zfssa.clone_snapshot.call_count) self.assertEqual(1, self._driver._export_location.call_count) self._driver.zfssa.clone_snapshot.assert_called_with( lcfg.zfssa_pool, lcfg.zfssa_project, self.snapshot, self.share, arg) def test_delete_share(self): self.mock_object(self._driver.zfssa, 'delete_share') self._driver.delete_share(self._context, self.share) self.assertEqual(1, self._driver.zfssa.delete_share.call_count) lcfg = self.configuration self._driver.zfssa.delete_share.assert_called_with(lcfg.zfssa_pool, lcfg.zfssa_project, self.share['id']) def test_create_snapshot(self): self.mock_object(self._driver.zfssa, 'create_snapshot') lcfg = self.configuration self._driver.create_snapshot(self._context, self.snapshot) self.assertEqual(1, self._driver.zfssa.create_snapshot.call_count) self._driver.zfssa.create_snapshot.assert_called_with( lcfg.zfssa_pool, lcfg.zfssa_project, self.snapshot['share_id'], self.snapshot['id']) def test_delete_snapshot(self): self.mock_object(self._driver.zfssa, 'delete_snapshot') self._driver.delete_snapshot(self._context, self.snapshot) self.assertEqual(1, self._driver.zfssa.delete_snapshot.call_count) def test_delete_snapshot_negative(self): self.mock_object(self._driver.zfssa, 'has_clones') self._driver.zfssa.has_clones.return_value = True self.assertRaises(exception.ShareSnapshotIsBusy, self._driver.delete_snapshot, self._context, self.snapshot) def test_ensure_share(self): self.mock_object(self._driver.zfssa, 'get_share') lcfg = self.configuration self._driver.ensure_share(self._context, self.share) self.assertEqual(1, self._driver.zfssa.get_share.call_count) self._driver.zfssa.get_share.assert_called_with( lcfg.zfssa_pool, lcfg.zfssa_project, self.share['id']) self._driver.zfssa.get_share.return_value = None self.assertRaises(exception.ManilaException, self._driver.ensure_share, self._context, self.share) def test_allow_access(self): self.mock_object(self._driver.zfssa, 'allow_access_nfs') lcfg = self.configuration 
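# allow_access() is expected to hand the IP rule straight to the
# appliance's NFS access call without rewriting it.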
self._driver.allow_access(self._context, self.share, self.access) self.assertEqual(1, self._driver.zfssa.allow_access_nfs.call_count) self._driver.zfssa.allow_access_nfs.assert_called_with( lcfg.zfssa_pool, lcfg.zfssa_project, self.share['id'], self.access) def test_deny_access(self): self.mock_object(self._driver.zfssa, 'deny_access_nfs') lcfg = self.configuration self._driver.deny_access(self._context, self.share, self.access) self.assertEqual(1, self._driver.zfssa.deny_access_nfs.call_count) self._driver.zfssa.deny_access_nfs.assert_called_with( lcfg.zfssa_pool, lcfg.zfssa_project, self.share['id'], self.access) def test_extend_share_negative(self): self.mock_object(self._driver.zfssa, 'modify_share') new_size = 3 # Not enough space in project, expect an exception: self.mock_object(self._driver.zfssa, 'get_project_stats') self._driver.zfssa.get_project_stats.return_value = 1 * units.Gi self.assertRaises(exception.ShareExtendingError, self._driver.extend_share, self.share, new_size) def test_extend_share(self): self.mock_object(self._driver.zfssa, 'modify_share') new_size = 3 lcfg = self.configuration self.mock_object(self._driver.zfssa, 'get_project_stats') self._driver.zfssa.get_project_stats.return_value = 10 * units.Gi arg = self._driver.create_arg(new_size) self._driver.extend_share(self.share, new_size) self.assertEqual(1, self._driver.zfssa.modify_share.call_count) self._driver.zfssa.modify_share.assert_called_with( lcfg.zfssa_pool, lcfg.zfssa_project, self.share['id'], arg) def test_shrink_share_negative(self): self.mock_object(self._driver.zfssa, 'modify_share') # Used space is larger than 2GB new_size = 2 self.mock_object(self._driver.zfssa, 'get_share') self._driver.zfssa.get_share.return_value = self.share2 self.assertRaises(exception.ShareShrinkingPossibleDataLoss, self._driver.shrink_share, self.share2, new_size) def test_shrink_share(self): self.mock_object(self._driver.zfssa, 'modify_share') new_size = 3 lcfg = self.configuration self.mock_object(self._driver.zfssa, 'get_share') self._driver.zfssa.get_share.return_value = self.share2 arg = self._driver.create_arg(new_size) self._driver.shrink_share(self.share2, new_size) self.assertEqual(1, self._driver.zfssa.modify_share.call_count) self._driver.zfssa.modify_share.assert_called_with( lcfg.zfssa_pool, lcfg.zfssa_project, self.share2['id'], arg) def test_manage_invalid_option(self): self.mock_object(self._driver, '_get_share_details') # zfssa_name not in driver_options: self.assertRaises(exception.ShareBackendException, self._driver.manage_existing, self.share, {}) def test_manage_no_share_details(self): self.mock_object(self._driver, '_get_share_details') self._driver._get_share_details.side_effect = ( exception.ShareResourceNotFound(share_id=self.share['name'])) self.assertRaises(exception.ShareResourceNotFound, self._driver.manage_existing, self.share, self.driver_options) def test_manage_invalid_size(self): details = { 'quota': 10, # 10 bytes 'reservation': 10, } self.mock_object(self._driver, '_get_share_details') self._driver._get_share_details.return_value = details self.mock_object(self._driver.zfssa, 'get_project_stats') self._driver.zfssa.get_project_stats.return_value = 900 # Share size is less than 1GB, but there is not enough free space self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, self.test_share, self.driver_options) def test_manage_invalid_protocol(self): self.mock_object(self._driver, '_get_share_details') self._driver._get_share_details.return_value = { 'quota': 
self.share['size'] * units.Gi, 'reservation': self.share['size'] * units.Gi, 'custom:manila_managed': False, } self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, self.fake_proto_share, self.driver_options) def test_manage_unmanage_no_schema(self): self.mock_object(self._driver, '_get_share_details') self._driver._get_share_details.return_value = {} # Share does not have custom:manila_managed property # Test manage_existing(): self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, self.test_share, self.driver_options) # Test unmanage(): self.assertRaises(exception.UnmanageInvalidShare, self._driver.unmanage, self.test_share) def test_manage_round_up_size(self): details = { 'quota': 100, 'reservation': 50, 'custom:manila_managed': False, } self.mock_object(self._driver, '_get_share_details') self._driver._get_share_details.return_value = details self.mock_object(self._driver.zfssa, 'get_project_stats') self._driver.zfssa.get_project_stats.return_value = 1 * units.Gi ret = self._driver.manage_existing(self.test_share, self.driver_options) # Expect share size is 1GB self.assertEqual(1, ret['size']) def test_manage_not_enough_space(self): details = { 'quota': 3.5 * units.Gi, 'reservation': 3.5 * units.Gi, 'custom:manila_managed': False, } self.mock_object(self._driver, '_get_share_details') self._driver._get_share_details.return_value = details self.mock_object(self._driver.zfssa, 'get_project_stats') self._driver.zfssa.get_project_stats.return_value = 0.1 * units.Gi self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, self.test_share, self.driver_options) def test_manage_unmanage_NFS(self): lcfg = self.configuration details = { # Share size is 1GB 'quota': self.share['size'] * units.Gi, 'reservation': self.share['size'] * units.Gi, 'custom:manila_managed': False, } arg = { 'host': lcfg.zfssa_data_ip, 'mountpoint': self.share['export_location'], 'name': self.share['id'], } export_loc = "%(host)s:%(mountpoint)s/%(name)s" % arg self.mock_object(self._driver, '_get_share_details') self._driver._get_share_details.return_value = details ret = self._driver.manage_existing(self.test_share, self.driver_options) self.assertEqual(export_loc, ret['export_locations']) self.assertEqual(1, ret['size']) def test_manage_unmanage_CIFS(self): lcfg = self.configuration details = { # Share size is 1GB 'quota': self.share2['size'] * units.Gi, 'reservation': self.share2['size'] * units.Gi, 'custom:manila_managed': False, } arg = { 'host': lcfg.zfssa_data_ip, 'name': self.share2['id'], } export_loc = "\\\\%(host)s\\%(name)s" % arg self.mock_object(self._driver, '_get_share_details') self._driver._get_share_details.return_value = details ret = self._driver.manage_existing(self.test_share2, self.driver_options) self.assertEqual(export_loc, ret['export_locations']) self.assertEqual(4, ret['size']) def test_unmanage_NFS(self): self.mock_object(self._driver.zfssa, 'modify_share') lcfg = self.configuration details = { 'quota': self.share['size'] * units.Gi, 'reservation': self.share['size'] * units.Gi, 'custom:manila_managed': True, } arg = { 'custom:manila_managed': False, 'sharenfs': 'off', } self.mock_object(self._driver, '_get_share_details') self._driver._get_share_details.return_value = details self._driver.unmanage(self.test_share) self._driver.zfssa.modify_share.assert_called_with( lcfg.zfssa_pool, lcfg.zfssa_project, self.test_share['id'], arg) def test_unmanage_CIFS(self): self.mock_object(self._driver.zfssa, 'modify_share') lcfg = 
self.configuration details = { 'quota': self.share2['size'] * units.Gi, 'reservation': self.share2['size'] * units.Gi, 'custom:manila_managed': True, } arg = { 'custom:manila_managed': False, 'sharesmb': 'off', } self.mock_object(self._driver, '_get_share_details') self._driver._get_share_details.return_value = details self._driver.unmanage(self.test_share2) self._driver.zfssa.modify_share.assert_called_with( lcfg.zfssa_pool, lcfg.zfssa_project, self.test_share2['id'], arg) def test_verify_share_to_manage_loose_policy(self): # Temporarily change policy to loose self.configuration.zfssa_manage_policy = 'loose' ret = self._driver._verify_share_to_manage('sharename', {}) self.assertIsNone(ret) # Change it back to strict self.configuration.zfssa_manage_policy = 'strict' def test_verify_share_to_manage_no_property(self): self.configuration.zfssa_manage_policy = 'strict' self.assertRaises(exception.ManageInvalidShare, self._driver._verify_share_to_manage, 'sharename', {}) def test_verify_share_to_manage_already_managed(self): details = {'custom:manila_managed': True} self.assertRaises(exception.ManageInvalidShare, self._driver._verify_share_to_manage, 'sharename', details) manila-10.0.0/manila/tests/share/drivers/zfssa/test_zfssarest.py0000664000175000017500000004374013656750227025046 0ustar zuulzuul00000000000000# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for Oracle's ZFSSA REST API. 
""" from unittest import mock from manila import exception from manila.share.drivers.zfssa import restclient from manila.share.drivers.zfssa import zfssarest from manila import test from manila.tests import fake_zfssa class ZFSSAApiTestCase(test.TestCase): """Tests ZFSSAApi.""" @mock.patch.object(zfssarest, 'factory_restclient') def setUp(self, _restclient): super(ZFSSAApiTestCase, self).setUp() self.host = 'fakehost' self.user = 'fakeuser' self.url = None self.pool = 'fakepool' self.project = 'fakeproject' self.share = 'fakeshare' self.snap = 'fakesnapshot' _restclient.return_value = fake_zfssa.FakeRestClient() self._zfssa = zfssarest.ZFSSAApi() self._zfssa.set_host('fakehost') self.schema = { 'property': 'manila_managed', 'description': 'Managed by Manila', 'type': 'Boolean', } def _create_response(self, status): response = fake_zfssa.FakeResponse(status) return response def test_enable_service(self): self.mock_object(self._zfssa.rclient, 'put') self._zfssa.rclient.put.return_value = self._create_response( restclient.Status.ACCEPTED) self._zfssa.enable_service('nfs') self.assertEqual(1, self._zfssa.rclient.put.call_count) self._zfssa.rclient.put.return_value = self._create_response( restclient.Status.OK) self.assertRaises(exception.ShareBackendException, self._zfssa.enable_service, 'nfs') def test_verify_avail_space(self): self.mock_object(self._zfssa, 'verify_project') self.mock_object(self._zfssa, 'get_project_stats') self._zfssa.get_project_stats.return_value = 2000 self._zfssa.verify_avail_space(self.pool, self.project, self.share, 1000) self.assertEqual(1, self._zfssa.verify_project.call_count) self.assertEqual(1, self._zfssa.get_project_stats.call_count) self._zfssa.verify_project.assert_called_with(self.pool, self.project) self._zfssa.get_project_stats.assert_called_with(self.pool, self.project) self._zfssa.get_project_stats.return_value = 900 self.assertRaises(exception.ShareBackendException, self._zfssa.verify_avail_space, self.pool, self.project, self.share, 1000) def test_create_project(self): self.mock_object(self._zfssa, 'verify_pool') self.mock_object(self._zfssa.rclient, 'get') self.mock_object(self._zfssa.rclient, 'post') arg = { 'name': self.project, 'sharesmb': 'off', 'sharenfs': 'off', 'mountpoint': 'fakemnpt', } self._zfssa.rclient.get.return_value = self._create_response( restclient.Status.NOT_FOUND) self._zfssa.rclient.post.return_value = self._create_response( restclient.Status.CREATED) self._zfssa.create_project(self.pool, self.project, arg) self.assertEqual(1, self._zfssa.rclient.get.call_count) self.assertEqual(1, self._zfssa.rclient.post.call_count) self.assertEqual(1, self._zfssa.verify_pool.call_count) self._zfssa.verify_pool.assert_called_with(self.pool) self._zfssa.rclient.post.return_value = self._create_response( restclient.Status.NOT_FOUND) self.assertRaises(exception.ShareBackendException, self._zfssa.create_project, self.pool, self.project, arg) def test_create_share(self): self.mock_object(self._zfssa, 'verify_avail_space') self.mock_object(self._zfssa.rclient, 'get') self.mock_object(self._zfssa.rclient, 'post') self._zfssa.rclient.get.return_value = self._create_response( restclient.Status.NOT_FOUND) self._zfssa.rclient.post.return_value = self._create_response( restclient.Status.CREATED) arg = { "name": self.share, "quota": 1, } self._zfssa.create_share(self.pool, self.project, arg) self.assertEqual(1, self._zfssa.rclient.get.call_count) self.assertEqual(1, self._zfssa.rclient.post.call_count) self.assertEqual(1, 
self._zfssa.verify_avail_space.call_count) self._zfssa.verify_avail_space.assert_called_with(self.pool, self.project, arg, arg['quota']) self._zfssa.rclient.post.return_value = self._create_response( restclient.Status.NOT_FOUND) self.assertRaises(exception.ShareBackendException, self._zfssa.create_share, self.pool, self.project, arg) self._zfssa.rclient.get.return_value = self._create_response( restclient.Status.OK) self.assertRaises(exception.ShareBackendException, self._zfssa.create_share, self.pool, self.project, arg) def test_modify_share(self): self.mock_object(self._zfssa.rclient, 'put') self._zfssa.rclient.put.return_value = self._create_response( restclient.Status.ACCEPTED) arg = {"name": "dummyname"} svc = self._zfssa.share_path % (self.pool, self.project, self.share) self._zfssa.modify_share(self.pool, self.project, self.share, arg) self.assertEqual(1, self._zfssa.rclient.put.call_count) self._zfssa.rclient.put.assert_called_with(svc, arg) self._zfssa.rclient.put.return_value = self._create_response( restclient.Status.BAD_REQUEST) self.assertRaises(exception.ShareBackendException, self._zfssa.modify_share, self.pool, self.project, self.share, arg) def test_delete_share(self): self.mock_object(self._zfssa.rclient, 'delete') self._zfssa.rclient.delete.return_value = self._create_response( restclient.Status.NO_CONTENT) svc = self._zfssa.share_path % (self.pool, self.project, self.share) self._zfssa.delete_share(self.pool, self.project, self.share) self.assertEqual(1, self._zfssa.rclient.delete.call_count) self._zfssa.rclient.delete.assert_called_with(svc) def test_create_snapshot(self): self.mock_object(self._zfssa.rclient, 'post') self._zfssa.rclient.post.return_value = self._create_response( restclient.Status.CREATED) arg = {"name": self.snap} svc = self._zfssa.snapshots_path % (self.pool, self.project, self.share) self._zfssa.create_snapshot(self.pool, self.project, self.share, self.snap) self.assertEqual(1, self._zfssa.rclient.post.call_count) self._zfssa.rclient.post.assert_called_with(svc, arg) self._zfssa.rclient.post.return_value = self._create_response( restclient.Status.BAD_REQUEST) self.assertRaises(exception.ShareBackendException, self._zfssa.create_snapshot, self.pool, self.project, self.share, self.snap) def test_delete_snapshot(self): self.mock_object(self._zfssa.rclient, 'delete') self._zfssa.rclient.delete.return_value = self._create_response( restclient.Status.NO_CONTENT) svc = self._zfssa.snapshot_path % (self.pool, self.project, self.share, self.snap) self._zfssa.delete_snapshot(self.pool, self.project, self.share, self.snap) self.assertEqual(1, self._zfssa.rclient.delete.call_count) self._zfssa.rclient.delete.assert_called_with(svc) self._zfssa.rclient.delete.return_value = self._create_response( restclient.Status.BAD_REQUEST) self.assertRaises(exception.ShareBackendException, self._zfssa.delete_snapshot, self.pool, self.project, self.share, self.snap) def test_clone_snapshot(self): self.mock_object(self._zfssa, 'verify_avail_space') self.mock_object(self._zfssa.rclient, 'put') self._zfssa.rclient.put.return_value = self._create_response( restclient.Status.CREATED) snapshot = { "id": self.snap, "share_id": self.share, } clone = { "id": "cloneid", "size": 1, } arg = { "name": "dummyname", "quota": 1, } self._zfssa.clone_snapshot(self.pool, self.project, snapshot, clone, arg) self.assertEqual(1, self._zfssa.rclient.put.call_count) self.assertEqual(1, self._zfssa.verify_avail_space.call_count) self._zfssa.verify_avail_space.assert_called_with(self.pool, 
self.project, clone['id'], clone['size']) self._zfssa.rclient.put.return_value = self._create_response( restclient.Status.NOT_FOUND) self.assertRaises(exception.ShareBackendException, self._zfssa.clone_snapshot, self.pool, self.project, snapshot, clone, arg) def _create_entry(self, sharenfs, ip): if sharenfs == 'off': sharenfs = 'sec=sys' entry = (',rw=@%s' % ip) if '/' not in ip: entry = entry + '/32' arg = {'sharenfs': sharenfs + entry} return arg def test_allow_access_nfs(self): self.mock_object(self._zfssa, 'get_share') self.mock_object(self._zfssa, 'modify_share') details = {"sharenfs": "off"} access = { "access_type": "nonip", "access_to": "foo", } # invalid access type self.assertRaises(exception.InvalidShareAccess, self._zfssa.allow_access_nfs, self.pool, self.project, self.share, access) # valid entry access.update({"access_type": "ip"}) arg = self._create_entry("off", access['access_to']) self._zfssa.get_share.return_value = details self._zfssa.allow_access_nfs(self.pool, self.project, self.share, access) self.assertEqual(1, self._zfssa.get_share.call_count) self.assertEqual(1, self._zfssa.modify_share.call_count) self._zfssa.get_share.assert_called_with(self.pool, self.project, self.share) self._zfssa.modify_share.assert_called_with(self.pool, self.project, self.share, arg) # add another entry access.update({"access_to": "10.0.0.1/24"}) arg = self._create_entry("off", access['access_to']) self._zfssa.allow_access_nfs(self.pool, self.project, self.share, access) self.assertEqual(2, self._zfssa.modify_share.call_count) self._zfssa.modify_share.assert_called_with(self.pool, self.project, self.share, arg) # verify modify_share is not called if sharenfs='on' details = {"sharenfs": "on"} self._zfssa.get_share.return_value = details self._zfssa.allow_access_nfs(self.pool, self.project, self.share, access) self.assertEqual(2, self._zfssa.modify_share.call_count) # verify modify_share is not called if ip is already in the list access.update({"access_to": "10.0.0.1/24"}) details = self._create_entry("off", access['access_to']) self._zfssa.get_share.return_value = details self._zfssa.allow_access_nfs(self.pool, self.project, self.share, access) self.assertEqual(2, self._zfssa.modify_share.call_count) def test_deny_access_nfs(self): self.mock_object(self._zfssa, 'get_share') self.mock_object(self._zfssa, 'modify_share') data1 = self._create_entry("off", "10.0.0.1") access = { "access_type": "nonip", "access_to": "foo", } # invalid access_type self.assertRaises(exception.InvalidShareAccess, self._zfssa.deny_access_nfs, self.pool, self.project, self.share, access) # valid entry access.update({"access_type": "ip"}) self._zfssa.get_share.return_value = data1 self._zfssa.deny_access_nfs(self.pool, self.project, self.share, access) self.assertEqual(1, self._zfssa.get_share.call_count) self.assertEqual(0, self._zfssa.modify_share.call_count) self._zfssa.get_share.assert_called_with(self.pool, self.project, self.share) # another valid entry data1 = self._create_entry(data1['sharenfs'], '10.0.0.2/24') data2 = self._create_entry(data1['sharenfs'], access['access_to']) self._zfssa.get_share.return_value = data2 self._zfssa.deny_access_nfs(self.pool, self.project, self.share, access) self.assertEqual(2, self._zfssa.get_share.call_count) self.assertEqual(1, self._zfssa.modify_share.call_count) self._zfssa.get_share.assert_called_with(self.pool, self.project, self.share) self._zfssa.modify_share.assert_called_with(self.pool, self.project, self.share, data1) def test_create_schema_negative(self): 
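# When the schema POST does not return CREATED, the error should surface
# as a ShareBackendException instead of being swallowed.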
self.mock_object(self._zfssa.rclient, 'get') self.mock_object(self._zfssa.rclient, 'post') self._zfssa.rclient.post.return_value = self._create_response( restclient.Status.NOT_FOUND) self.assertRaises(exception.ShareBackendException, self._zfssa.create_schema, self.schema) def test_create_schema_property_exists(self): self.mock_object(self._zfssa.rclient, 'get') self.mock_object(self._zfssa.rclient, 'post') self._zfssa.rclient.get.return_value = self._create_response( restclient.Status.OK) self._zfssa.create_schema(self.schema) self.assertEqual(1, self._zfssa.rclient.get.call_count) self.assertEqual(0, self._zfssa.rclient.post.call_count) def test_create_schema(self): self.mock_object(self._zfssa.rclient, 'get') self.mock_object(self._zfssa.rclient, 'post') self._zfssa.rclient.get.return_value = self._create_response( restclient.Status.NOT_FOUND) self._zfssa.rclient.post.return_value = self._create_response( restclient.Status.CREATED) self._zfssa.create_schema(self.schema) self.assertEqual(1, self._zfssa.rclient.get.call_count) self.assertEqual(1, self._zfssa.rclient.post.call_count) manila-10.0.0/manila/tests/share/drivers/tegile/0000775000175000017500000000000013656750362021524 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/tegile/__init__.py0000664000175000017500000000000013656750227023623 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/tegile/test_tegile.py0000664000175000017500000007744213656750227024424 0ustar zuulzuul00000000000000# Copyright (c) 2016 by Tegile Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Share driver Test for Tegile storage. 
""" from unittest import mock import ddt from oslo_config import cfg import requests import six from manila.common import constants as const from manila import context from manila import exception from manila.share import configuration from manila.share.drivers.tegile import tegile from manila import test CONF = cfg.CONF test_config = configuration.Configuration(None) test_config.tegile_nas_server = 'some-ip' test_config.tegile_nas_login = 'some-user' test_config.tegile_nas_password = 'some-password' test_config.reserved_share_percentage = 10 test_config.max_over_subscription_ratio = 30.0 test_share = { 'host': 'node#fake_pool', 'name': 'testshare', 'id': 'a24c2ee8-525a-4406-8ccd-8d38688f8e9e', 'share_proto': 'NFS', 'size': 10, } test_share_cifs = { 'host': 'node#fake_pool', 'name': 'testshare', 'id': 'a24c2ee8-525a-4406-8ccd-8d38688f8e9e', 'share_proto': 'CIFS', 'size': 10, } test_share_fail = { 'host': 'node#fake_pool', 'name': 'testshare', 'id': 'a24c2ee8-525a-4406-8ccd-8d38688f8e9e', 'share_proto': 'OTHER', 'size': 10, } test_snapshot = { 'name': 'testSnap', 'id': '07ae9978-5445-405e-8881-28f2adfee732', 'share': test_share, 'share_name': 'snapshotted', 'display_name': 'disp', 'display_description': 'disp-desc', } array_stats = { 'total_capacity_gb': 4569.199686084874, 'free_capacity_gb': 4565.381390112452, 'pools': [ { 'total_capacity_gb': 913.5, 'QoS_support': False, 'free_capacity_gb': 911.812650680542, 'reserved_percentage': 0, 'pool_name': 'pyramid', }, { 'total_capacity_gb': 2742.1996604874, 'QoS_support': False, 'free_capacity_gb': 2740.148867149747, 'reserved_percentage': 0, 'pool_name': 'cobalt', }, { 'total_capacity_gb': 913.5, 'QoS_support': False, 'free_capacity_gb': 913.4198722839355, 'reserved_percentage': 0, 'pool_name': 'test', }, ], } fake_tegile_backend_fail = mock.Mock( side_effect=exception.TegileAPIException(response="Fake Exception")) class FakeResponse(object): def __init__(self, status, json_output): self.status_code = status self.text = 'Random text' self._json = json_output def json(self): return self._json def close(self): pass @ddt.ddt class TegileShareDriverTestCase(test.TestCase): def __init__(self, *args, **kwds): super(TegileShareDriverTestCase, self).__init__(*args, **kwds) self._ctxt = context.get_admin_context() self.configuration = test_config def setUp(self): CONF.set_default('driver_handles_share_servers', False) self._driver = tegile.TegileShareDriver( configuration=self.configuration) self._driver._default_project = 'fake_project' super(TegileShareDriverTestCase, self).setUp() def test_create_share(self): api_return_value = (test_config.tegile_nas_server + " " + test_share['name']) mock_api = self.mock_object(self._driver, '_api', mock.Mock( return_value=api_return_value)) result = self._driver.create_share(self._ctxt, test_share) expected = { 'is_admin_only': False, 'metadata': { 'preferred': True, }, 'path': 'some-ip:testshare', } self.assertEqual(expected, result) create_params = ( 'fake_pool', 'fake_project', test_share['name'], test_share['share_proto'], ) mock_api.assert_called_once_with('createShare', create_params) def test_create_share_fail(self): mock_api = self.mock_object(self._driver, '_api', mock.Mock( side_effect=( exception.TegileAPIException( response="Fake Exception")))) self.assertRaises(exception.TegileAPIException, self._driver.create_share, self._ctxt, test_share) create_params = ( 'fake_pool', 'fake_project', test_share['name'], test_share['share_proto'], ) mock_api.assert_called_once_with('createShare', create_params) def 
test_delete_share(self): fake_share_info = ('fake_pool', 'fake_project', test_share['name']) mock_params = self.mock_object(self._driver, '_get_pool_project_share_name', mock.Mock(return_value=fake_share_info)) mock_api = self.mock_object(self._driver, '_api') self._driver.delete_share(self._ctxt, test_share) delete_path = '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name']) delete_params = (delete_path, True, False) mock_api.assert_called_once_with('deleteShare', delete_params) mock_params.assert_called_once_with(test_share) def test_delete_share_fail(self): mock_api = self.mock_object(self._driver, '_api', mock.Mock( side_effect=( exception.TegileAPIException( response="Fake Exception")))) self.assertRaises(exception.TegileAPIException, self._driver.delete_share, self._ctxt, test_share) delete_path = '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name']) delete_params = (delete_path, True, False) mock_api.assert_called_once_with('deleteShare', delete_params) def test_create_snapshot(self): mock_api = self.mock_object(self._driver, '_api') fake_share_info = ('fake_pool', 'fake_project', test_share['name']) mock_params = self.mock_object(self._driver, '_get_pool_project_share_name', mock.Mock(return_value=fake_share_info)) self._driver.create_snapshot(self._ctxt, test_snapshot) share = { 'poolName': 'fake_pool', 'projectName': 'fake_project', 'name': test_share['name'], 'availableSize': 0, 'totalSize': 0, 'datasetPath': '%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', ), 'mountpoint': test_share['name'], 'local': 'true', } create_params = (share, test_snapshot['name'], False) mock_api.assert_called_once_with('createShareSnapshot', create_params) mock_params.assert_called_once_with(test_share) def test_create_snapshot_fail(self): mock_api = self.mock_object(self._driver, '_api', mock.Mock( side_effect=( exception.TegileAPIException( response="Fake Exception")))) self.assertRaises(exception.TegileAPIException, self._driver.create_snapshot, self._ctxt, test_snapshot) share = { 'poolName': 'fake_pool', 'projectName': 'fake_project', 'name': test_share['name'], 'availableSize': 0, 'totalSize': 0, 'datasetPath': '%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', ), 'mountpoint': test_share['name'], 'local': 'true', } create_params = (share, test_snapshot['name'], False) mock_api.assert_called_once_with('createShareSnapshot', create_params) def test_delete_snapshot(self): fake_share_info = ('fake_pool', 'fake_project', test_share['name']) mock_params = self.mock_object(self._driver, '_get_pool_project_share_name', mock.Mock(return_value=fake_share_info)) mock_api = self.mock_object(self._driver, '_api') self._driver.delete_snapshot(self._ctxt, test_snapshot) delete_snap_path = ('%s/%s/%s/%s@%s%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name'], 'Manual-S-', test_snapshot['name'], )) delete_params = (delete_snap_path, False) mock_api.assert_called_once_with('deleteShareSnapshot', delete_params) mock_params.assert_called_once_with(test_share) def test_delete_snapshot_fail(self): mock_api = self.mock_object(self._driver, '_api', mock.Mock( side_effect=( exception.TegileAPIException( response="Fake Exception")))) self.assertRaises(exception.TegileAPIException, self._driver.delete_snapshot, self._ctxt, test_snapshot) delete_snap_path = ('%s/%s/%s/%s@%s%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name'], 'Manual-S-', test_snapshot['name'], )) delete_params = (delete_snap_path, False) 
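# The clone tests below rely on the same
# '<pool>/Local/<project>/<share>@Manual-S-<snapshot>' dataset path that
# the snapshot tests above construct by hand.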
mock_api.assert_called_once_with('deleteShareSnapshot', delete_params) def test_create_share_from_snapshot(self): api_return_value = (test_config.tegile_nas_server + " " + test_share['name']) mock_api = self.mock_object(self._driver, '_api', mock.Mock( return_value=api_return_value)) fake_share_info = ('fake_pool', 'fake_project', test_share['name']) mock_params = self.mock_object(self._driver, '_get_pool_project_share_name', mock.Mock(return_value=fake_share_info)) result = self._driver.create_share_from_snapshot(self._ctxt, test_share, test_snapshot) expected = { 'is_admin_only': False, 'metadata': { 'preferred': True, }, 'path': 'some-ip:testshare', } self.assertEqual(expected, result) create_params = ( '%s/%s/%s/%s@%s%s' % ( 'fake_pool', 'Local', 'fake_project', test_snapshot['share_name'], 'Manual-S-', test_snapshot['name'], ), test_share['name'], True, ) mock_api.assert_called_once_with('cloneShareSnapshot', create_params) mock_params.assert_called_once_with(test_share) def test_create_share_from_snapshot_fail(self): mock_api = self.mock_object(self._driver, '_api', mock.Mock( side_effect=( exception.TegileAPIException( response="Fake Exception")))) self.assertRaises(exception.TegileAPIException, self._driver.create_share_from_snapshot, self._ctxt, test_share, test_snapshot) create_params = ( '%s/%s/%s/%s@%s%s' % ( 'fake_pool', 'Local', 'fake_project', test_snapshot['share_name'], 'Manual-S-', test_snapshot['name'], ), test_share['name'], True, ) mock_api.assert_called_once_with('cloneShareSnapshot', create_params) def test_ensure_share(self): api_return_value = (test_config.tegile_nas_server + " " + test_share['name']) mock_api = self.mock_object(self._driver, '_api', mock.Mock( return_value=api_return_value)) fake_share_info = ('fake_pool', 'fake_project', test_share['name']) mock_params = self.mock_object(self._driver, '_get_pool_project_share_name', mock.Mock(return_value=fake_share_info)) result = self._driver.ensure_share(self._ctxt, test_share) expected = [ { 'is_admin_only': False, 'metadata': { 'preferred': True, }, 'path': 'some-ip:testshare', }, ] self.assertEqual(expected, result) ensure_params = [ '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name'])] mock_api.assert_called_once_with('getShareIPAndMountPoint', ensure_params) mock_params.assert_called_once_with(test_share) def test_ensure_share_fail(self): mock_api = self.mock_object(self._driver, '_api', mock.Mock( side_effect=( exception.TegileAPIException( response="Fake Exception")))) self.assertRaises(exception.TegileAPIException, self._driver.ensure_share, self._ctxt, test_share) ensure_params = [ '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name'])] mock_api.assert_called_once_with('getShareIPAndMountPoint', ensure_params) def test_get_share_stats(self): mock_api = self.mock_object(self._driver, '_api', mock.Mock( return_value=array_stats)) result_dict = self._driver.get_share_stats(True) expected_dict = { 'driver_handles_share_servers': False, 'driver_version': '1.0.0', 'free_capacity_gb': 4565.381390112452, 'pools': [ { 'allocated_capacity_gb': 0.0, 'compression': True, 'dedupe': True, 'free_capacity_gb': 911.812650680542, 'pool_name': 'pyramid', 'qos': False, 'reserved_percentage': 10, 'thin_provisioning': True, 'max_over_subscription_ratio': 30.0, 'total_capacity_gb': 913.5}, { 'allocated_capacity_gb': 0.0, 'compression': True, 'dedupe': True, 'free_capacity_gb': 2740.148867149747, 'pool_name': 'cobalt', 'qos': False, 'reserved_percentage': 10, 'thin_provisioning': 
True, 'max_over_subscription_ratio': 30.0, 'total_capacity_gb': 2742.1996604874 }, { 'allocated_capacity_gb': 0.0, 'compression': True, 'dedupe': True, 'free_capacity_gb': 913.4198722839355, 'pool_name': 'test', 'qos': False, 'reserved_percentage': 10, 'thin_provisioning': True, 'max_over_subscription_ratio': 30.0, 'total_capacity_gb': 913.5}, ], 'qos': False, 'reserved_percentage': 0, 'replication_domain': None, 'share_backend_name': 'Tegile', 'snapshot_support': True, 'storage_protocol': 'NFS_CIFS', 'total_capacity_gb': 4569.199686084874, 'vendor_name': 'Tegile Systems Inc.', } self.assertSubDictMatch(expected_dict, result_dict) mock_api.assert_called_once_with(fine_logging=False, method='getArrayStats', request_type='get') def test_get_share_stats_fail(self): mock_api = self.mock_object(self._driver, '_api', mock.Mock( side_effect=( exception.TegileAPIException( response="Fake Exception")))) self.assertRaises(exception.TegileAPIException, self._driver.get_share_stats, True) mock_api.assert_called_once_with(fine_logging=False, method='getArrayStats', request_type='get') def test_get_pool(self): result = self._driver.get_pool(test_share) expected = 'fake_pool' self.assertEqual(expected, result) def test_extend_share(self): fake_share_info = ('fake_pool', 'fake_project', test_share['name']) mock_params = self.mock_object(self._driver, '_get_pool_project_share_name', mock.Mock(return_value=fake_share_info)) mock_api = self.mock_object(self._driver, '_api') self._driver.extend_share(test_share, 12) extend_path = '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name']) extend_params = (extend_path, six.text_type(12), 'GB') mock_api.assert_called_once_with('resizeShare', extend_params) mock_params.assert_called_once_with(test_share) def test_extend_share_fail(self): mock_api = self.mock_object(self._driver, '_api', mock.Mock( side_effect=( exception.TegileAPIException( response="Fake Exception")))) self.assertRaises(exception.TegileAPIException, self._driver.extend_share, test_share, 30) extend_path = '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name']) extend_params = (extend_path, six.text_type(30), 'GB') mock_api.assert_called_once_with('resizeShare', extend_params) def test_shrink_share(self): fake_share_info = ('fake_pool', 'fake_project', test_share['name']) mock_params = self.mock_object(self._driver, '_get_pool_project_share_name', mock.Mock(return_value=fake_share_info)) mock_api = self.mock_object(self._driver, '_api') self._driver.shrink_share(test_share, 15) shrink_path = '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name']) shrink_params = (shrink_path, six.text_type(15), 'GB') mock_api.assert_called_once_with('resizeShare', shrink_params) mock_params.assert_called_once_with(test_share) def test_shrink_share_fail(self): mock_api = self.mock_object(self._driver, '_api', mock.Mock( side_effect=( exception.TegileAPIException( response="Fake Exception")))) self.assertRaises(exception.TegileAPIException, self._driver.shrink_share, test_share, 30) shrink_path = '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name']) shrink_params = (shrink_path, six.text_type(30), 'GB') mock_api.assert_called_once_with('resizeShare', shrink_params) @ddt.data('ip', 'user') def test_allow_access(self, access_type): fake_share_info = ('fake_pool', 'fake_project', test_share['name']) mock_params = self.mock_object(self._driver, '_get_pool_project_share_name', mock.Mock(return_value=fake_share_info)) mock_api = 
self.mock_object(self._driver, '_api') access = { 'access_type': access_type, 'access_level': const.ACCESS_LEVEL_RW, 'access_to': 'some-ip', } self._driver._allow_access(self._ctxt, test_share, access) allow_params = ( '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name'], ), test_share['share_proto'], access_type, access['access_to'], access['access_level'], ) mock_api.assert_called_once_with('shareAllowAccess', allow_params) mock_params.assert_called_once_with(test_share) @ddt.data({'access_type': 'other', 'to': 'some-ip', 'share': test_share, 'exception_type': exception.InvalidShareAccess}, {'access_type': 'ip', 'to': 'some-ip', 'share': test_share, 'exception_type': exception.TegileAPIException}, {'access_type': 'ip', 'to': 'some-ip', 'share': test_share_cifs, 'exception_type': exception.InvalidShareAccess}, {'access_type': 'ip', 'to': 'some-ip', 'share': test_share_fail, 'exception_type': exception.InvalidShareAccess}) @ddt.unpack def test_allow_access_fail(self, access_type, to, share, exception_type): self.mock_object(self._driver, '_api', mock.Mock( side_effect=exception.TegileAPIException( response="Fake Exception"))) access = { 'access_type': access_type, 'access_level': const.ACCESS_LEVEL_RW, 'access_to': to, } self.assertRaises(exception_type, self._driver._allow_access, self._ctxt, share, access) @ddt.data('ip', 'user') def test_deny_access(self, access_type): fake_share_info = ('fake_pool', 'fake_project', test_share['name']) mock_params = self.mock_object(self._driver, '_get_pool_project_share_name', mock.Mock(return_value=fake_share_info)) mock_api = self.mock_object(self._driver, '_api') access = { 'access_type': access_type, 'access_level': const.ACCESS_LEVEL_RW, 'access_to': 'some-ip', } self._driver._deny_access(self._ctxt, test_share, access) deny_params = ( '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name'], ), test_share['share_proto'], access_type, access['access_to'], access['access_level'], ) mock_api.assert_called_once_with('shareDenyAccess', deny_params) mock_params.assert_called_once_with(test_share) @ddt.data({'access_type': 'other', 'to': 'some-ip', 'share': test_share, 'exception_type': exception.InvalidShareAccess}, {'access_type': 'ip', 'to': 'some-ip', 'share': test_share, 'exception_type': exception.TegileAPIException}, {'access_type': 'ip', 'to': 'some-ip', 'share': test_share_cifs, 'exception_type': exception.InvalidShareAccess}, {'access_type': 'ip', 'to': 'some-ip', 'share': test_share_fail, 'exception_type': exception.InvalidShareAccess}) @ddt.unpack def test_deny_access_fail(self, access_type, to, share, exception_type): self.mock_object(self._driver, '_api', mock.Mock( side_effect=exception.TegileAPIException( response="Fake Exception"))) access = { 'access_type': access_type, 'access_level': const.ACCESS_LEVEL_RW, 'access_to': to, } self.assertRaises(exception_type, self._driver._deny_access, self._ctxt, share, access) @ddt.data({'access_rules': [{'access_type': 'ip', 'access_level': const.ACCESS_LEVEL_RW, 'access_to': 'some-ip', }, ], 'add_rules': None, 'delete_rules': None, 'call_name': 'shareAllowAccess'}, {'access_rules': [], 'add_rules': [{'access_type': 'ip', 'access_level': const.ACCESS_LEVEL_RW, 'access_to': 'some-ip'}, ], 'delete_rules': [], 'call_name': 'shareAllowAccess'}, {'access_rules': [], 'add_rules': [], 'delete_rules': [{'access_type': 'ip', 'access_level': const.ACCESS_LEVEL_RW, 'access_to': 'some-ip', }, ], 'call_name': 'shareDenyAccess'}) @ddt.unpack def test_update_access(self, 
access_rules, add_rules, delete_rules, call_name): fake_share_info = ('fake_pool', 'fake_project', test_share['name']) mock_params = self.mock_object(self._driver, '_get_pool_project_share_name', mock.Mock(return_value=fake_share_info)) mock_api = self.mock_object(self._driver, '_api') self._driver.update_access(self._ctxt, test_share, access_rules=access_rules, add_rules=add_rules, delete_rules=delete_rules) allow_params = ( '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name'], ), test_share['share_proto'], 'ip', 'some-ip', const.ACCESS_LEVEL_RW, ) if not (add_rules or delete_rules): clear_params = ( '%s/%s/%s/%s' % ( 'fake_pool', 'Local', 'fake_project', test_share['name'], ), test_share['share_proto'], ) mock_api.assert_has_calls([mock.call('clearAccessRules', clear_params), mock.call(call_name, allow_params)]) mock_params.assert_called_with(test_share) else: mock_api.assert_called_once_with(call_name, allow_params) mock_params.assert_called_once_with(test_share) @ddt.data({'path': r'\\some-ip\shareName', 'share_proto': 'CIFS', 'host': 'some-ip'}, {'path': 'some-ip:shareName', 'share_proto': 'NFS', 'host': 'some-ip'}, {'path': 'some-ip:shareName', 'share_proto': 'NFS', 'host': None}) @ddt.unpack def test_get_location_path(self, path, share_proto, host): self._driver._hostname = 'some-ip' result = self._driver._get_location_path('shareName', share_proto, host) expected = { 'is_admin_only': False, 'metadata': { 'preferred': True, }, 'path': path, } self.assertEqual(expected, result) def test_get_location_path_fail(self): self.assertRaises(exception.InvalidInput, self._driver._get_location_path, 'shareName', 'SOME', 'some-ip') def test_get_network_allocations_number(self): result = self._driver.get_network_allocations_number() expected = 0 self.assertEqual(expected, result) class TegileAPIExecutorTestCase(test.TestCase): def setUp(self): self._api = tegile.TegileAPIExecutor("TestCase", test_config.tegile_nas_server, test_config.tegile_nas_login, test_config.tegile_nas_password) super(TegileAPIExecutorTestCase, self).setUp() def test_send_api_post(self): json_output = {'value': 'abc'} self.mock_object(requests, 'post', mock.Mock(return_value=FakeResponse(200, json_output))) result = self._api(method="Test", request_type='post', params='[]', fine_logging=True) self.assertEqual(json_output, result) def test_send_api_get(self): json_output = {'value': 'abc'} self.mock_object(requests, 'get', mock.Mock(return_value=FakeResponse(200, json_output))) result = self._api(method="Test", request_type='get', fine_logging=False) self.assertEqual(json_output, result) def test_send_api_get_fail(self): self.mock_object(requests, 'get', mock.Mock(return_value=FakeResponse(404, []))) self.assertRaises(exception.TegileAPIException, self._api, method="Test", request_type='get', fine_logging=False) def test_send_api_value_error_fail(self): json_output = {'value': 'abc'} self.mock_object(requests, 'post', mock.Mock(return_value=FakeResponse(200, json_output))) self.mock_object(FakeResponse, 'json', mock.Mock(side_effect=ValueError)) result = self._api(method="Test", request_type='post', fine_logging=False) expected = '' self.assertEqual(expected, result) manila-10.0.0/manila/tests/share/drivers/test_helpers.py0000664000175000017500000007715413656750227023344 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from unittest import mock import ddt from oslo_config import cfg from manila.common import constants as const from manila import exception import manila.share.configuration from manila.share.drivers import helpers from manila import test from manila.tests import fake_compute from manila.tests import fake_utils from manila.tests.share.drivers import test_generic CONF = cfg.CONF @ddt.ddt class NFSHelperTestCase(test.TestCase): """Test case for NFS helper.""" def setUp(self): super(NFSHelperTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) self.fake_conf = manila.share.configuration.Configuration(None) self._ssh_exec = mock.Mock(return_value=('', '')) self._execute = mock.Mock(return_value=('', '')) self._helper = helpers.NFSHelper(self._execute, self._ssh_exec, self.fake_conf) ip = '10.254.0.3' self.server = fake_compute.FakeServer( ip=ip, public_address=ip, instance_id='fake_instance_id') self.share_name = 'fake_share_name' def test_init_helper(self): # mocks self.mock_object( self._helper, '_ssh_exec', mock.Mock(side_effect=exception.ProcessExecutionError( stderr='command not found'))) # run self.assertRaises(exception.ManilaException, self._helper.init_helper, self.server) # asserts self._helper._ssh_exec.assert_called_once_with( self.server, ['sudo', 'exportfs']) def test_init_helper_log(self): # mocks self.mock_object( self._helper, '_ssh_exec', mock.Mock(side_effect=exception.ProcessExecutionError( stderr='fake'))) # run self._helper.init_helper(self.server) # asserts self._helper._ssh_exec.assert_called_once_with( self.server, ['sudo', 'exportfs']) @ddt.data( {"server": {"public_address": "1.2.3.4"}, "version": 4}, {"server": {"public_address": "1001::1002"}, "version": 6}, {"server": {"public_address": "1.2.3.4", "admin_ip": "5.6.7.8"}, "version": 4}, {"server": {"public_address": "1.2.3.4", "ip": "9.10.11.12"}, "version": 4}, {"server": {"public_address": "1001::1001", "ip": "1001::1002"}, "version": 6}, {"server": {"public_address": "1001::1002", "admin_ip": "1001::1002"}, "version": 6}, {"server": {"public_addresses": ["1001::1002"]}, "version": 6}, {"server": {"public_addresses": ["1.2.3.4", "1001::1002"]}, "version": {"1.2.3.4": 4, "1001::1002": 6}}, ) @ddt.unpack def test_create_exports(self, server, version): result = self._helper.create_exports(server, self.share_name) expected_export_locations = [] path = os.path.join(CONF.share_mount_path, self.share_name) service_address = server.get("admin_ip", server.get("ip")) version_copy = version def convert_address(address, version): if version == 4: return address return "[%s]" % address if 'public_addresses' in server: pairs = list(map(lambda addr: (addr, False), server['public_addresses'])) else: pairs = [(server['public_address'], False)] service_address = server.get("admin_ip", server.get("ip")) if service_address: pairs.append((service_address, True)) for ip, is_admin in pairs: if isinstance(version_copy, dict): version = version_copy.get(ip) expected_export_locations.append({ "path": "%s:%s" % (convert_address(ip, version), path), "is_admin_only": is_admin, "metadata": { 
"export_location_metadata_example": "example", }, }) self.assertEqual(expected_export_locations, result) @ddt.data(const.ACCESS_LEVEL_RW, const.ACCESS_LEVEL_RO) def test_update_access(self, access_level): expected_mount_options = '%s,no_subtree_check,no_root_squash' self.mock_object(self._helper, '_sync_nfs_temp_and_perm_files') local_path = os.path.join(CONF.share_mount_path, self.share_name) exec_result = ' '.join([local_path, '2.2.2.3']) self.mock_object(self._helper, '_ssh_exec', mock.Mock(return_value=(exec_result, ''))) access_rules = [ test_generic.get_fake_access_rule('1.1.1.1', access_level), test_generic.get_fake_access_rule('2.2.2.2', access_level), test_generic.get_fake_access_rule('2.2.2.3', access_level)] add_rules = [ test_generic.get_fake_access_rule('2.2.2.2', access_level), test_generic.get_fake_access_rule('2.2.2.3', access_level), test_generic.get_fake_access_rule('5.5.5.0/24', access_level)] delete_rules = [ test_generic.get_fake_access_rule('3.3.3.3', access_level), test_generic.get_fake_access_rule('4.4.4.4', access_level, 'user'), test_generic.get_fake_access_rule('0.0.0.0/0', access_level)] self._helper.update_access(self.server, self.share_name, access_rules, add_rules=add_rules, delete_rules=delete_rules) local_path = os.path.join(CONF.share_mount_path, self.share_name) self._helper._ssh_exec.assert_has_calls([ mock.call(self.server, ['sudo', 'exportfs']), mock.call(self.server, ['sudo', 'exportfs', '-u', ':'.join(['3.3.3.3', local_path])]), mock.call(self.server, ['sudo', 'exportfs', '-u', ':'.join(['*', local_path])]), mock.call(self.server, ['sudo', 'exportfs', '-o', expected_mount_options % access_level, ':'.join(['2.2.2.2', local_path])]), mock.call(self.server, ['sudo', 'exportfs', '-o', expected_mount_options % access_level, ':'.join(['5.5.5.0/24', local_path])]), ]) self._helper._sync_nfs_temp_and_perm_files.assert_has_calls([ mock.call(self.server), mock.call(self.server)]) @ddt.data({'access': '10.0.0.1', 'result': '10.0.0.1'}, {'access': '10.0.0.1/32', 'result': '10.0.0.1'}, {'access': '10.0.0.0/24', 'result': '10.0.0.0/24'}, {'access': '1001::1001', 'result': '[1001::1001]'}, {'access': '1001::1000/128', 'result': '[1001::1000]'}, {'access': '1001::1000/124', 'result': '[1001::1000]/124'}) @ddt.unpack def test__get_parsed_address_or_cidr(self, access, result): self.assertEqual(result, self._helper._get_parsed_address_or_cidr(access)) @ddt.data('10.0.0.265', '10.0.0.1/33', '1001::10069', '1001::1000/129') def test__get_parsed_address_or_cidr_with_invalid_access(self, access): self.assertRaises(ValueError, self._helper._get_parsed_address_or_cidr, access) def test_update_access_invalid_type(self): access_rules = [test_generic.get_fake_access_rule( '2.2.2.2', const.ACCESS_LEVEL_RW, access_type='fake'), ] self.assertRaises( exception.InvalidShareAccess, self._helper.update_access, self.server, self.share_name, access_rules, [], []) def test_update_access_invalid_level(self): access_rules = [test_generic.get_fake_access_rule( '2.2.2.2', 'fake_level', access_type='ip'), ] self.assertRaises( exception.InvalidShareAccessLevel, self._helper.update_access, self.server, self.share_name, access_rules, [], []) def test_update_access_delete_invalid_rule(self): delete_rules = [test_generic.get_fake_access_rule( 'lala', 'fake_level', access_type='user'), ] self.mock_object(self._helper, '_sync_nfs_temp_and_perm_files') self._helper.update_access(self.server, self.share_name, [], [], delete_rules) self._helper._sync_nfs_temp_and_perm_files.assert_called_with( 
self.server) def test_get_host_list(self): fake_exportfs = ('/shares/share-1\n\t\t20.0.0.3\n' '/shares/share-1\n\t\t20.0.0.6\n' '/shares/share-2\n\t\t10.0.0.2\n' '/shares/share-2\n\t\t10.0.0.5\n' '/shares/share-3\n\t\t30.0.0.4\n' '/shares/share-3\n\t\t30.0.0.7\n') expected = ['20.0.0.3', '20.0.0.6'] result = self._helper.get_host_list(fake_exportfs, '/shares/share-1') self.assertEqual(expected, result) @ddt.data({"level": const.ACCESS_LEVEL_RW, "ip": "1.1.1.1", "expected": "1.1.1.1"}, {"level": const.ACCESS_LEVEL_RO, "ip": "1.1.1.1", "expected": "1.1.1.1"}, {"level": const.ACCESS_LEVEL_RW, "ip": "fd12:abcd::10", "expected": "[fd12:abcd::10]"}, {"level": const.ACCESS_LEVEL_RO, "ip": "fd12:abcd::10", "expected": "[fd12:abcd::10]"}) @ddt.unpack def test_update_access_recovery_mode(self, level, ip, expected): expected_mount_options = '%s,no_subtree_check,no_root_squash' access_rules = [test_generic.get_fake_access_rule( ip, level), ] self.mock_object(self._helper, '_sync_nfs_temp_and_perm_files') self.mock_object(self._helper, 'get_host_list', mock.Mock(return_value=[ip])) self._helper.update_access(self.server, self.share_name, access_rules, [], []) local_path = os.path.join(CONF.share_mount_path, self.share_name) self._ssh_exec.assert_has_calls([ mock.call(self.server, ['sudo', 'exportfs']), mock.call( self.server, ['sudo', 'exportfs', '-u', ':'.join([expected, local_path])]), mock.call(self.server, ['sudo', 'exportfs', '-o', expected_mount_options % level, ':'.join([expected, local_path])]), ]) self._helper._sync_nfs_temp_and_perm_files.assert_called_with( self.server) def test_sync_nfs_temp_and_perm_files(self): self._helper._sync_nfs_temp_and_perm_files(self.server) self._helper._ssh_exec.assert_has_calls( [mock.call(self.server, mock.ANY) for i in range(1)]) @ddt.data('/foo/bar', '5.6.7.8:/bar/quuz', '5.6.7.9:/foo/quuz', '[1001::1001]:/foo/bar', '[1001::1000]/:124:/foo/bar') def test_get_exports_for_share_single_ip(self, export_location): server = dict(public_address='1.2.3.4') result = self._helper.get_exports_for_share(server, export_location) path = export_location.split(':')[-1] expected_export_locations = [ {"is_admin_only": False, "path": "%s:%s" % (server["public_address"], path), "metadata": {"export_location_metadata_example": "example"}} ] self.assertEqual(expected_export_locations, result) @ddt.data('/foo/bar', '5.6.7.8:/bar/quuz', '5.6.7.9:/foo/quuz') def test_get_exports_for_share_multi_ip(self, export_location): server = dict(public_addresses=['1.2.3.4', '1.2.3.5']) result = self._helper.get_exports_for_share(server, export_location) path = export_location.split(':')[-1] expected_export_locations = list(map( lambda addr: { "is_admin_only": False, "path": "%s:%s" % (addr, path), "metadata": {"export_location_metadata_example": "example"} }, server['public_addresses']) ) self.assertEqual(expected_export_locations, result) @ddt.data( {'public_address_with_suffix': 'foo'}, {'with_prefix_public_address': 'bar'}, {'with_prefix_public_address_and_with_suffix': 'quuz'}, {}) def test_get_exports_for_share_with_error(self, server): export_location = '1.2.3.4:/foo/bar' self.assertRaises( exception.ManilaException, self._helper.get_exports_for_share, server, export_location) @ddt.data('/foo/bar', '5.6.7.8:/foo/bar', '5.6.7.88:fake:/foo/bar', '[1001::1002]:/foo/bar', '[1001::1000]/124:/foo/bar') def test_get_share_path_by_export_location(self, export_location): result = self._helper.get_share_path_by_export_location( dict(), export_location) self.assertEqual('/foo/bar', result) 
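    # The update_access tests in this class assert the exact ``exportfs``
    # invocations NFSHelper is expected to run over SSH on the share server.
    # A minimal sketch of the command shapes, using the fake values from these
    # tests (illustrative only, not code from the helper itself):
    #
    #   grant:  sudo exportfs -o rw,no_subtree_check,no_root_squash \
    #               1.1.1.1:<share_mount_path>/fake_share_name
    #   revoke: sudo exportfs -u 1.1.1.1:<share_mount_path>/fake_share_name
    #
    # IPv6 hosts are bracketed before being handed to exportfs, e.g.
    # ``[fd12:abcd::10]:<share_mount_path>/fake_share_name``, which is what
    # test_update_access_recovery_mode checks for the IPv6 cases.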
@ddt.data( ('/shares/fake_share1\n\t\t1.1.1.10\n' '/shares/fake_share2\n\t\t1.1.1.16\n' '/mnt/fake_share1 1.1.1.11', False), ('/shares/fake_share_name\n\t\t1.1.1.10\n' '/shares/fake_share_name\n\t\t1.1.1.16\n' '/mnt/fake_share1\n\t\t1.1.1.11', True), ('/mnt/fake_share_name\n\t\t1.1.1.11\n' '/shares/fake_share_name\n\t\t1.1.1.10\n' '/shares/fake_share_name\n\t\t1.1.1.16\n', True)) @ddt.unpack def test_disable_access_for_maintenance(self, output, hosts_match): fake_maintenance_path = "fake.path" self._helper.configuration.share_mount_path = '/shares' local_path = os.path.join(self._helper.configuration.share_mount_path, self.share_name) def fake_ssh_exec(*args, **kwargs): if 'exportfs' in args[1] and '-u' not in args[1]: return output, '' else: return '', '' self.mock_object(self._helper, '_ssh_exec', mock.Mock(side_effect=fake_ssh_exec)) self.mock_object(self._helper, '_sync_nfs_temp_and_perm_files') self.mock_object(self._helper, '_get_maintenance_file_path', mock.Mock(return_value=fake_maintenance_path)) self._helper.disable_access_for_maintenance( self.server, self.share_name) self._helper._ssh_exec.assert_any_call( self.server, ['cat', const.NFS_EXPORTS_FILE, '|', 'grep', self.share_name, '|', 'sudo', 'tee', fake_maintenance_path] ) self._helper._ssh_exec.assert_has_calls([ mock.call(self.server, ['sudo', 'exportfs']), ]) if hosts_match: self._helper._ssh_exec.assert_has_calls([ mock.call(self.server, ['sudo', 'exportfs', '-u', ':'.join(['1.1.1.10', local_path])]), mock.call(self.server, ['sudo', 'exportfs', '-u', ':'.join(['1.1.1.16', local_path])]), ]) self._helper._sync_nfs_temp_and_perm_files.assert_called_once_with( self.server ) def test_restore_access_after_maintenance(self): fake_maintenance_path = "fake.path" self.mock_object(self._helper, '_get_maintenance_file_path', mock.Mock(return_value=fake_maintenance_path)) self.mock_object(self._helper, '_ssh_exec') self._helper.restore_access_after_maintenance( self.server, self.share_name) self._helper._ssh_exec.assert_called_once_with( self.server, ['cat', fake_maintenance_path, '|', 'sudo', 'tee', '-a', const.NFS_EXPORTS_FILE, '&&', 'sudo', 'exportfs', '-r', '&&', 'sudo', 'rm', '-f', fake_maintenance_path] ) @ddt.ddt class CIFSHelperIPAccessTestCase(test.TestCase): """Test case for CIFS helper with IP access.""" def setUp(self): super(CIFSHelperIPAccessTestCase, self).setUp() self.server_details = {'instance_id': 'fake', 'public_address': '1.2.3.4', } self.share_name = 'fake_share_name' self.fake_conf = manila.share.configuration.Configuration(None) self._ssh_exec = mock.Mock(return_value=('', '')) self._execute = mock.Mock(return_value=('', '')) self._helper = helpers.CIFSHelperIPAccess(self._execute, self._ssh_exec, self.fake_conf) self.access = dict( access_level=const.ACCESS_LEVEL_RW, access_type='ip', access_to='1.1.1.1') def test_init_helper(self): self._helper.init_helper(self.server_details) self._helper._ssh_exec.assert_called_once_with( self.server_details, ['sudo', 'net', 'conf', 'list'], ) def test_create_export_share_does_not_exist(self): def fake_ssh_exec(*args, **kwargs): if 'showshare' in args[1]: raise exception.ProcessExecutionError() else: return '', '' self.mock_object(self._helper, '_ssh_exec', mock.Mock(side_effect=fake_ssh_exec)) ret = self._helper.create_exports(self.server_details, self.share_name) expected_location = [{ "is_admin_only": False, "path": "\\\\%s\\%s" % ( self.server_details['public_address'], self.share_name), "metadata": {"export_location_metadata_example": "example"} }] 
self.assertEqual(expected_location, ret) share_path = os.path.join( self._helper.configuration.share_mount_path, self.share_name) self._helper._ssh_exec.assert_has_calls([ mock.call( self.server_details, ['sudo', 'net', 'conf', 'showshare', self.share_name, ] ), mock.call( self.server_details, [ 'sudo', 'net', 'conf', 'addshare', self.share_name, share_path, 'writeable=y', 'guest_ok=y', ] ), mock.call(self.server_details, mock.ANY), ]) def test_create_export_share_does_not_exist_exception(self): self.mock_object(self._helper, '_ssh_exec', mock.Mock( side_effect=[exception.ProcessExecutionError(), Exception('')] )) self.assertRaises( exception.ManilaException, self._helper.create_exports, self.server_details, self.share_name) def test_create_exports_share_exist_recreate_true(self): ret = self._helper.create_exports( self.server_details, self.share_name, recreate=True) expected_location = [{ "is_admin_only": False, "path": "\\\\%s\\%s" % ( self.server_details['public_address'], self.share_name), "metadata": {"export_location_metadata_example": "example"} }] self.assertEqual(expected_location, ret) share_path = os.path.join( self._helper.configuration.share_mount_path, self.share_name) self._helper._ssh_exec.assert_has_calls([ mock.call( self.server_details, ['sudo', 'net', 'conf', 'showshare', self.share_name, ] ), mock.call( self.server_details, ['sudo', 'net', 'conf', 'delshare', self.share_name, ] ), mock.call( self.server_details, [ 'sudo', 'net', 'conf', 'addshare', self.share_name, share_path, 'writeable=y', 'guest_ok=y', ] ), mock.call(self.server_details, mock.ANY), ]) def test_create_export_share_exist_recreate_false(self): self.assertRaises( exception.ShareBackendException, self._helper.create_exports, self.server_details, self.share_name, recreate=False, ) self._helper._ssh_exec.assert_has_calls([ mock.call( self.server_details, ['sudo', 'net', 'conf', 'showshare', self.share_name, ] ), ]) def test_remove_exports(self): self._helper.remove_exports(self.server_details, self.share_name) self._helper._ssh_exec.assert_called_once_with( self.server_details, ['sudo', 'net', 'conf', 'delshare', self.share_name], ) def test_remove_export_forcibly(self): delshare_command = ['sudo', 'net', 'conf', 'delshare', self.share_name] def fake_ssh_exec(*args, **kwargs): if delshare_command == args[1]: raise exception.ProcessExecutionError() else: return ('', '') self.mock_object(self._helper, '_ssh_exec', mock.Mock(side_effect=fake_ssh_exec)) self._helper.remove_exports(self.server_details, self.share_name) self._helper._ssh_exec.assert_has_calls([ mock.call( self.server_details, ['sudo', 'net', 'conf', 'delshare', self.share_name], ), mock.call( self.server_details, ['sudo', 'smbcontrol', 'all', 'close-share', self.share_name], ), ]) def test_update_access_wrong_access_level(self): access_rules = [test_generic.get_fake_access_rule( '2.2.2.2', const.ACCESS_LEVEL_RO), ] self.assertRaises( exception.InvalidShareAccessLevel, self._helper.update_access, self.server_details, self.share_name, access_rules, [], []) def test_update_access_wrong_access_type(self): access_rules = [test_generic.get_fake_access_rule( '2.2.2.2', const.ACCESS_LEVEL_RW, access_type='fake'), ] self.assertRaises( exception.InvalidShareAccess, self._helper.update_access, self.server_details, self.share_name, access_rules, [], []) def test_update_access(self): access_rules = [test_generic.get_fake_access_rule( '1.1.1.1', const.ACCESS_LEVEL_RW), ] self._helper.update_access(self.server_details, self.share_name, access_rules, [], []) 
self._helper._ssh_exec.assert_called_once_with( self.server_details, ['sudo', 'net', 'conf', 'setparm', self.share_name, 'hosts allow', '1.1.1.1']) def test_get_allow_hosts(self): self.mock_object(self._helper, '_ssh_exec', mock.Mock( return_value=('1.1.1.1 2.2.2.2 3.3.3.3', ''))) expected = ['1.1.1.1', '2.2.2.2', '3.3.3.3'] result = self._helper._get_allow_hosts( self.server_details, self.share_name) self.assertEqual(expected, result) cmd = ['sudo', 'net', 'conf', 'getparm', self.share_name, 'hosts allow'] self._helper._ssh_exec.assert_called_once_with( self.server_details, cmd) @ddt.data( '', '1.2.3.4:/nfs/like/export', '/1.2.3.4/foo', '\\1.2.3.4\\foo', '//1.2.3.4\\mixed_slashes_and_backslashes_one', '\\\\1.2.3.4/mixed_slashes_and_backslashes_two') def test__get_share_group_name_from_export_location(self, export_location): self.assertRaises( exception.InvalidShare, self._helper._get_share_group_name_from_export_location, export_location) @ddt.data('//5.6.7.8/foo', '\\\\5.6.7.8\\foo') def test_get_exports_for_share(self, export_location): server = dict(public_address='1.2.3.4') self.mock_object( self._helper, '_get_share_group_name_from_export_location', mock.Mock(side_effect=( self._helper._get_share_group_name_from_export_location))) result = self._helper.get_exports_for_share(server, export_location) expected_export_location = [{ "is_admin_only": False, "path": "\\\\%s\\foo" % server['public_address'], "metadata": {"export_location_metadata_example": "example"} }] self.assertEqual(expected_export_location, result) (self._helper._get_share_group_name_from_export_location. assert_called_once_with(export_location)) @ddt.data( {'public_address_with_suffix': 'foo'}, {'with_prefix_public_address': 'bar'}, {'with_prefix_public_address_and_with_suffix': 'quuz'}, {}) def test_get_exports_for_share_with_exception(self, server): export_location = '1.2.3.4:/foo/bar' self.assertRaises( exception.ManilaException, self._helper.get_exports_for_share, server, export_location) @ddt.data('//5.6.7.8/foo', '\\\\5.6.7.8\\foo') def test_get_share_path_by_export_location(self, export_location): fake_path = ' /bar/quuz\n ' fake_server = dict() self.mock_object( self._helper, '_ssh_exec', mock.Mock(return_value=(fake_path, 'fake'))) self.mock_object( self._helper, '_get_share_group_name_from_export_location', mock.Mock(side_effect=( self._helper._get_share_group_name_from_export_location))) result = self._helper.get_share_path_by_export_location( fake_server, export_location) self.assertEqual('/bar/quuz', result) self._helper._ssh_exec.assert_called_once_with( fake_server, ['sudo', 'net', 'conf', 'getparm', 'foo', 'path']) (self._helper._get_share_group_name_from_export_location. 
assert_called_once_with(export_location)) def test_disable_access_for_maintenance(self): allowed_hosts = ['test', 'test2'] maintenance_path = os.path.join( self._helper.configuration.share_mount_path, "%s.maintenance" % self.share_name) self.mock_object(self._helper, '_set_allow_hosts') self.mock_object(self._helper, '_get_allow_hosts', mock.Mock(return_value=allowed_hosts)) self._helper.disable_access_for_maintenance( self.server_details, self.share_name) self._helper._get_allow_hosts.assert_called_once_with( self.server_details, self.share_name) self._helper._set_allow_hosts.assert_called_once_with( self.server_details, [], self.share_name) valid_cmd = ['echo', "'test test2'", '|', 'sudo', 'tee', maintenance_path] self._helper._ssh_exec.assert_called_once_with( self.server_details, valid_cmd) def test_restore_access_after_maintenance(self): fake_maintenance_path = "test.path" self.mock_object(self._helper, '_set_allow_hosts') self.mock_object(self._helper, '_get_maintenance_file_path', mock.Mock(return_value=fake_maintenance_path)) self.mock_object(self._helper, '_ssh_exec', mock.Mock(side_effect=[("fake fake2", 0), "fake"])) self._helper.restore_access_after_maintenance( self.server_details, self.share_name) self._helper._set_allow_hosts.assert_called_once_with( self.server_details, ['fake', 'fake2'], self.share_name) self._helper._ssh_exec.assert_any_call( self.server_details, ['cat', fake_maintenance_path]) self._helper._ssh_exec.assert_any_call( self.server_details, ['sudo', 'rm', '-f', fake_maintenance_path]) @ddt.ddt class CIFSHelperUserAccessTestCase(test.TestCase): """Test case for CIFS helper with user access.""" access_rw = dict( access_level=const.ACCESS_LEVEL_RW, access_type='user', access_to='manila-user') access_ro = dict( access_level=const.ACCESS_LEVEL_RO, access_type='user', access_to='manila-user') def setUp(self): super(CIFSHelperUserAccessTestCase, self).setUp() self.server_details = {'instance_id': 'fake', 'public_address': '1.2.3.4', } self.share_name = 'fake_share_name' self.fake_conf = manila.share.configuration.Configuration(None) self._ssh_exec = mock.Mock(return_value=('', '')) self._execute = mock.Mock(return_value=('', '')) self._helper = helpers.CIFSHelperUserAccess( self._execute, self._ssh_exec, self.fake_conf) def test_update_access_exception_type(self): access_rules = [test_generic.get_fake_access_rule( 'user1', const.ACCESS_LEVEL_RW, access_type='ip')] self.assertRaises(exception.InvalidShareAccess, self._helper.update_access, self.server_details, self.share_name, access_rules, [], []) def test_update_access(self): access_list = [test_generic.get_fake_access_rule( 'user1', const.ACCESS_LEVEL_RW, access_type='user'), test_generic.get_fake_access_rule( 'user2', const.ACCESS_LEVEL_RO, access_type='user')] self._helper.update_access(self.server_details, self.share_name, access_list, [], []) self._helper._ssh_exec.assert_has_calls([ mock.call(self.server_details, ['sudo', 'net', 'conf', 'setparm', self.share_name, 'valid users', 'user1']), mock.call(self.server_details, ['sudo', 'net', 'conf', 'setparm', self.share_name, 'read list', 'user2']) ]) def test_update_access_exception_level(self): access_rules = [test_generic.get_fake_access_rule( 'user1', 'fake_level', access_type='user'), ] self.assertRaises( exception.InvalidShareAccessLevel, self._helper.update_access, self.server_details, self.share_name, access_rules, [], []) @ddt.ddt class NFSSynchronizedTestCase(test.TestCase): @helpers.nfs_synchronized def wrapped_method(self, server, share_name): return 
server['instance_id'] + share_name @ddt.data( ({'lock_name': 'FOO', 'instance_id': 'QUUZ'}, 'nfs-FOO'), ({'instance_id': 'QUUZ'}, 'nfs-QUUZ'), ) @ddt.unpack def test_with_lock_name(self, server, expected_lock_name): share_name = 'fake_share_name' self.mock_object( helpers.utils, 'synchronized', mock.Mock(side_effect=helpers.utils.synchronized)) result = self.wrapped_method(server, share_name) self.assertEqual(server['instance_id'] + share_name, result) helpers.utils.synchronized.assert_called_once_with( expected_lock_name, external=True) manila-10.0.0/manila/tests/share/drivers/zfsonlinux/0000775000175000017500000000000013656750362022472 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/zfsonlinux/__init__.py0000664000175000017500000000000013656750227024571 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/zfsonlinux/test_utils.py0000664000175000017500000004604613656750227025255 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time from unittest import mock import ddt from oslo_config import cfg from manila import exception from manila.share.drivers.ganesha import utils as ganesha_utils from manila.share.drivers.zfsonlinux import utils as zfs_utils from manila import test CONF = cfg.CONF def get_fake_configuration(*args, **kwargs): fake_config_options = { "zfs_use_ssh": kwargs.get("zfs_use_ssh", False), "zfs_share_export_ip": kwargs.get( "zfs_share_export_ip", "240.241.242.243"), "zfs_service_ip": kwargs.get("zfs_service_ip", "240.241.242.244"), "ssh_conn_timeout": kwargs.get("ssh_conn_timeout", 123), "zfs_ssh_username": kwargs.get( "zfs_ssh_username", 'fake_username'), "zfs_ssh_user_password": kwargs.get( "zfs_ssh_user_password", 'fake_pass'), "zfs_ssh_private_key_path": kwargs.get( "zfs_ssh_private_key_path", '/fake/path'), "append_config_values": mock.Mock(), } return type("FakeConfig", (object, ), fake_config_options) class FakeShareDriver(zfs_utils.ExecuteMixin): def __init__(self, *args, **kwargs): self.configuration = get_fake_configuration(*args, **kwargs) self.init_execute_mixin(*args, **kwargs) @ddt.ddt class ExecuteMixinTestCase(test.TestCase): def setUp(self): super(ExecuteMixinTestCase, self).setUp() self.ssh_executor = self.mock_object(ganesha_utils, 'SSHExecutor') self.driver = FakeShareDriver() def test_init(self): self.assertIsNone(self.driver.ssh_executor) self.assertEqual(0, self.ssh_executor.call_count) def test_init_ssh(self): driver = FakeShareDriver(zfs_use_ssh=True) self.assertIsNotNone(driver.ssh_executor) self.ssh_executor.assert_called_once_with( ip=driver.configuration.zfs_service_ip, port=22, conn_timeout=driver.configuration.ssh_conn_timeout, login=driver.configuration.zfs_ssh_username, password=driver.configuration.zfs_ssh_user_password, privatekey=driver.configuration.zfs_ssh_private_key_path, max_size=10, ) def test_execute_with_provided_executor(self): self.mock_object(self.driver, '_execute') fake_executor = mock.Mock() 
self.driver.execute('fake', '--foo', '--bar', executor=fake_executor) self.assertFalse(self.driver._execute.called) self.assertFalse(self.ssh_executor.called) fake_executor.assert_called_once_with('fake', '--foo', '--bar') def test_local_shell_execute(self): self.mock_object(self.driver, '_execute') self.driver.execute('fake', '--foo', '--bar') self.assertEqual(0, self.ssh_executor.call_count) self.driver._execute.assert_called_once_with( 'fake', '--foo', '--bar') def test_local_shell_execute_with_sudo(self): self.mock_object(self.driver, '_execute') self.driver.execute('sudo', 'fake', '--foo', '--bar') self.assertEqual(0, self.ssh_executor.call_count) self.driver._execute.assert_called_once_with( 'fake', '--foo', '--bar', run_as_root=True) def test_ssh_execute(self): driver = FakeShareDriver(zfs_use_ssh=True) self.mock_object(driver, '_execute') driver.execute('fake', '--foo', '--bar') self.assertEqual(0, driver._execute.call_count) self.ssh_executor.return_value.assert_called_once_with( 'fake', '--foo', '--bar') def test_ssh_execute_with_sudo(self): driver = FakeShareDriver(zfs_use_ssh=True) self.mock_object(driver, '_execute') driver.execute('sudo', 'fake', '--foo', '--bar') self.assertEqual(0, driver._execute.call_count) self.ssh_executor.return_value.assert_called_once_with( 'fake', '--foo', '--bar', run_as_root=True) def test_execute_with_retry(self): self.mock_object(time, 'sleep') self.mock_object(self.driver, 'execute', mock.Mock( side_effect=[exception.ProcessExecutionError('FAKE'), None])) self.driver.execute_with_retry('foo', 'bar') self.assertEqual(2, self.driver.execute.call_count) self.driver.execute.assert_has_calls( [mock.call('foo', 'bar'), mock.call('foo', 'bar')]) def test_execute_with_retry_exceeded(self): self.mock_object(time, 'sleep') self.mock_object(self.driver, 'execute', mock.Mock( side_effect=exception.ProcessExecutionError('FAKE'))) self.assertRaises( exception.ProcessExecutionError, self.driver.execute_with_retry, 'foo', 'bar', ) self.assertEqual(36, self.driver.execute.call_count) @ddt.data(True, False) def test__get_option(self, pool_level): out = """NAME PROPERTY VALUE SOURCE\n foo_resource_name bar_option_name some_value local""" self.mock_object( self.driver, '_execute', mock.Mock(return_value=(out, ''))) res_name = 'foo_resource_name' opt_name = 'bar_option_name' result = self.driver._get_option( res_name, opt_name, pool_level=pool_level) self.assertEqual('some_value', result) self.driver._execute.assert_called_once_with( 'zpool' if pool_level else 'zfs', 'get', opt_name, res_name, run_as_root=True) def test_parse_zfs_answer(self): not_parsed_str = '' not_parsed_str = """NAME PROPERTY VALUE SOURCE\n foo_res opt_1 bar local foo_res opt_2 foo default foo_res opt_3 some_value local""" expected = [ {'NAME': 'foo_res', 'PROPERTY': 'opt_1', 'VALUE': 'bar', 'SOURCE': 'local'}, {'NAME': 'foo_res', 'PROPERTY': 'opt_2', 'VALUE': 'foo', 'SOURCE': 'default'}, {'NAME': 'foo_res', 'PROPERTY': 'opt_3', 'VALUE': 'some_value', 'SOURCE': 'local'}, ] result = self.driver.parse_zfs_answer(not_parsed_str) self.assertEqual(expected, result) def test_parse_zfs_answer_empty(self): result = self.driver.parse_zfs_answer('') self.assertEqual([], result) def test_get_zpool_option(self): self.mock_object(self.driver, '_get_option') zpool_name = 'foo_resource_name' opt_name = 'bar_option_name' result = self.driver.get_zpool_option(zpool_name, opt_name) self.assertEqual(self.driver._get_option.return_value, result) self.driver._get_option.assert_called_once_with( zpool_name, 
opt_name, True) def test_get_zfs_option(self): self.mock_object(self.driver, '_get_option') dataset_name = 'foo_resource_name' opt_name = 'bar_option_name' result = self.driver.get_zfs_option(dataset_name, opt_name) self.assertEqual(self.driver._get_option.return_value, result) self.driver._get_option.assert_called_once_with( dataset_name, opt_name, False) def test_zfs(self): self.mock_object(self.driver, 'execute') self.mock_object(self.driver, 'execute_with_retry') self.driver.zfs('foo', 'bar') self.assertEqual(0, self.driver.execute_with_retry.call_count) self.driver.execute.assert_called_once_with( 'sudo', 'zfs', 'foo', 'bar') def test_zfs_with_retry(self): self.mock_object(self.driver, 'execute') self.mock_object(self.driver, 'execute_with_retry') self.driver.zfs_with_retry('foo', 'bar') self.assertEqual(0, self.driver.execute.call_count) self.driver.execute_with_retry.assert_called_once_with( 'sudo', 'zfs', 'foo', 'bar') @ddt.ddt class NFSviaZFSHelperTestCase(test.TestCase): def setUp(self): super(NFSviaZFSHelperTestCase, self).setUp() configuration = get_fake_configuration() self.out = "fake_out" self.mock_object( zfs_utils.utils, "execute", mock.Mock(return_value=(self.out, ""))) self.helper = zfs_utils.NFSviaZFSHelper(configuration) def test_init(self): zfs_utils.utils.execute.assert_has_calls([ mock.call("which", "exportfs"), mock.call("exportfs", run_as_root=True), ]) def test_verify_setup_exportfs_not_installed(self): zfs_utils.utils.execute.reset_mock() zfs_utils.utils.execute.side_effect = [('', '')] self.assertRaises( exception.ZFSonLinuxException, self.helper.verify_setup) zfs_utils.utils.execute.assert_called_once_with("which", "exportfs") def test_verify_setup_error_calling_exportfs(self): zfs_utils.utils.execute.reset_mock() zfs_utils.utils.execute.side_effect = [ ('fake_out', ''), exception.ProcessExecutionError('Fake')] self.assertRaises( exception.ProcessExecutionError, self.helper.verify_setup) zfs_utils.utils.execute.assert_has_calls([ mock.call("which", "exportfs"), mock.call("exportfs", run_as_root=True), ]) def test_is_kernel_version_true(self): delattr(self.helper, '_is_kernel_version') zfs_utils.utils.execute.reset_mock() self.assertTrue(self.helper.is_kernel_version) zfs_utils.utils.execute.assert_has_calls([ mock.call("modinfo", "zfs"), ]) def test_is_kernel_version_false(self): delattr(self.helper, '_is_kernel_version') zfs_utils.utils.execute.reset_mock() zfs_utils.utils.execute.side_effect = ( exception.ProcessExecutionError('Fake')) self.assertFalse(self.helper.is_kernel_version) zfs_utils.utils.execute.assert_has_calls([ mock.call("modinfo", "zfs"), ]) def test_is_kernel_version_second_call(self): delattr(self.helper, '_is_kernel_version') zfs_utils.utils.execute.reset_mock() self.assertTrue(self.helper.is_kernel_version) self.assertTrue(self.helper.is_kernel_version) zfs_utils.utils.execute.assert_has_calls([ mock.call("modinfo", "zfs"), ]) def test_create_exports(self): self.mock_object(self.helper, 'get_exports') result = self.helper.create_exports('foo') self.assertEqual( self.helper.get_exports.return_value, result) def test_get_exports(self): self.mock_object( self.helper, 'get_zfs_option', mock.Mock(return_value='fake_mp')) expected = [ { "path": "%s:fake_mp" % ip, "metadata": {}, "is_admin_only": is_admin_only, } for ip, is_admin_only in ( (self.helper.configuration.zfs_share_export_ip, False), (self.helper.configuration.zfs_service_ip, True)) ] result = self.helper.get_exports('foo') self.assertEqual(expected, result) 
self.helper.get_zfs_option.assert_called_once_with( 'foo', 'mountpoint', executor=None) def test_remove_exports(self): zfs_utils.utils.execute.reset_mock() self.mock_object( self.helper, 'get_zfs_option', mock.Mock(return_value='bar')) self.helper.remove_exports('foo') self.helper.get_zfs_option.assert_called_once_with( 'foo', 'sharenfs', executor=None) zfs_utils.utils.execute.assert_called_once_with( 'zfs', 'set', 'sharenfs=off', 'foo', run_as_root=True) def test_remove_exports_that_absent(self): zfs_utils.utils.execute.reset_mock() self.mock_object( self.helper, 'get_zfs_option', mock.Mock(return_value='off')) self.helper.remove_exports('foo') self.helper.get_zfs_option.assert_called_once_with( 'foo', 'sharenfs', executor=None) self.assertEqual(0, zfs_utils.utils.execute.call_count) @ddt.data( (('fake_modinfo_result', ''), ('sharenfs=rw=1.1.1.1:3.3.3.0/255.255.255.0,no_root_squash,' 'ro=2.2.2.2,no_root_squash'), False), (('fake_modinfo_result', ''), ('sharenfs=ro=1.1.1.1:2.2.2.2:3.3.3.0/255.255.255.0,no_root_squash'), True), (exception.ProcessExecutionError('Fake'), ('sharenfs=1.1.1.1:rw,no_root_squash 3.3.3.0/255.255.255.0:rw,' 'no_root_squash 2.2.2.2:ro,no_root_squash'), False), (exception.ProcessExecutionError('Fake'), ('sharenfs=1.1.1.1:ro,no_root_squash 2.2.2.2:ro,' 'no_root_squash 3.3.3.0/255.255.255.0:ro,no_root_squash'), True), ) @ddt.unpack def test_update_access_rw_and_ro(self, modinfo_response, access_str, make_all_ro): delattr(self.helper, '_is_kernel_version') zfs_utils.utils.execute.reset_mock() dataset_name = 'zpoolz/foo_dataset_name/fake' zfs_utils.utils.execute.side_effect = [ modinfo_response, ("""NAME USED AVAIL REFER MOUNTPOINT\n %(dn)s 2.58M 14.8G 27.5K /%(dn)s\n %(dn)s_some_other 3.58M 15.8G 28.5K /%(dn)s\n """ % {'dn': dataset_name}, ''), ('fake_set_opt_result', ''), ("""NAME PROPERTY VALUE SOURCE\n %s mountpoint /%s default\n """ % (dataset_name, dataset_name), ''), ('fake_1_result', ''), ('fake_2_result', ''), ('fake_3_result', ''), ('fake_4_result', ''), ('fake_5_result', ''), ] access_rules = [ {'access_type': 'ip', 'access_level': 'rw', 'access_to': '1.1.1.1'}, {'access_type': 'ip', 'access_level': 'ro', 'access_to': '2.2.2.2'}, {'access_type': 'ip', 'access_level': 'rw', 'access_to': '3.3.3.0/24'}, ] delete_rules = [ {'access_type': 'ip', 'access_level': 'rw', 'access_to': '4.4.4.4'}, {'access_type': 'ip', 'access_level': 'ro', 'access_to': '5.5.5.5/32'}, {'access_type': 'ip', 'access_level': 'ro', 'access_to': '5.5.5.6/16'}, {'access_type': 'ip', 'access_level': 'ro', 'access_to': '5.5.5.7/0'}, {'access_type': 'user', 'access_level': 'rw', 'access_to': '6.6.6.6'}, {'access_type': 'user', 'access_level': 'ro', 'access_to': '7.7.7.7'}, ] self.helper.update_access( dataset_name, access_rules, [], delete_rules, make_all_ro=make_all_ro) zfs_utils.utils.execute.assert_has_calls([ mock.call('modinfo', 'zfs'), mock.call('zfs', 'list', '-r', 'zpoolz', run_as_root=True), mock.call( 'zfs', 'set', access_str, dataset_name, run_as_root=True), mock.call( 'zfs', 'get', 'mountpoint', dataset_name, run_as_root=True), mock.call( 'exportfs', '-u', '4.4.4.4:/%s' % dataset_name, run_as_root=True), mock.call( 'exportfs', '-u', '5.5.5.5:/%s' % dataset_name, run_as_root=True), mock.call( 'exportfs', '-u', '5.5.5.6/255.255.0.0:/%s' % dataset_name, run_as_root=True), mock.call( 'exportfs', '-u', '5.5.5.7/0.0.0.0:/%s' % dataset_name, run_as_root=True), ]) def test_update_access_dataset_not_found(self): self.mock_object(zfs_utils.LOG, 'warning') zfs_utils.utils.execute.reset_mock() 
dataset_name = 'zpoolz/foo_dataset_name/fake' zfs_utils.utils.execute.side_effect = [ ('fake_modinfo_result', ''), ('fake_dataset_not_found_result', ''), ('fake_set_opt_result', ''), ] access_rules = [ {'access_type': 'ip', 'access_level': 'rw', 'access_to': '1.1.1.1'}, {'access_type': 'ip', 'access_level': 'ro', 'access_to': '1.1.1.2'}, ] self.helper.update_access(dataset_name, access_rules, [], []) zfs_utils.utils.execute.assert_has_calls([ mock.call('zfs', 'list', '-r', 'zpoolz', run_as_root=True), ]) zfs_utils.LOG.warning.assert_called_once_with( mock.ANY, {'name': dataset_name}) @ddt.data(exception.ProcessExecutionError('Fake'), ('Ok', '')) def test_update_access_no_rules(self, first_execute_result): zfs_utils.utils.execute.reset_mock() dataset_name = 'zpoolz/foo_dataset_name/fake' zfs_utils.utils.execute.side_effect = [ ("""NAME USED AVAIL REFER MOUNTPOINT\n %s 2.58M 14.8G 27.5K /%s\n """ % (dataset_name, dataset_name), ''), ('fake_set_opt_result', ''), ] self.helper.update_access(dataset_name, [], [], []) zfs_utils.utils.execute.assert_has_calls([ mock.call('zfs', 'list', '-r', 'zpoolz', run_as_root=True), mock.call('zfs', 'set', 'sharenfs=off', dataset_name, run_as_root=True), ]) @ddt.data('user', 'cert', 'cephx', '', 'fake', 'i', 'p') def test_update_access_not_ip_access_type(self, access_type): zfs_utils.utils.execute.reset_mock() dataset_name = 'zpoolz/foo_dataset_name/fake' access_rules = [ {'access_type': access_type, 'access_level': 'rw', 'access_to': '1.1.1.1'}, {'access_type': 'ip', 'access_level': 'ro', 'access_to': '1.1.1.2'}, ] self.assertRaises( exception.InvalidShareAccess, self.helper.update_access, dataset_name, access_rules, access_rules, [], ) self.assertEqual(0, zfs_utils.utils.execute.call_count) @ddt.data('', 'r', 'o', 'w', 'fake', 'su') def test_update_access_neither_rw_nor_ro_access_level(self, access_level): zfs_utils.utils.execute.reset_mock() dataset_name = 'zpoolz/foo_dataset_name/fake' access_rules = [ {'access_type': 'ip', 'access_level': access_level, 'access_to': '1.1.1.1'}, {'access_type': 'ip', 'access_level': 'ro', 'access_to': '1.1.1.2'}, ] self.assertRaises( exception.InvalidShareAccess, self.helper.update_access, dataset_name, access_rules, access_rules, [], ) self.assertEqual(0, zfs_utils.utils.execute.call_count) manila-10.0.0/manila/tests/share/drivers/zfsonlinux/test_driver.py0000664000175000017500000032064313656750227025406 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
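# The tests below cover manila.share.drivers.zfsonlinux.driver. They stub out
# SSH access and oslo.config with the FakeConfig / FakeDriverPrivateStorage
# helpers defined in this module, and exercise backend configuration lookup
# (get_backend_configuration), driver initialization, pool and capability
# reporting, and the share/snapshot lifecycle: create, delete, ensure and
# extend, plus create_share_from_snapshot, which these tests expect to run as
# a "zfs send | zfs receive" pipeline between the source and destination
# hosts.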
from unittest import mock import ddt from oslo_config import cfg from manila import context from manila import exception from manila.share.drivers.ganesha import utils as ganesha_utils from manila.share.drivers.zfsonlinux import driver as zfs_driver from manila import test from manila.tests import db_utils CONF = cfg.CONF class FakeConfig(object): def __init__(self, *args, **kwargs): self.driver_handles_share_servers = False self.share_driver = 'fake_share_driver_name' self.share_backend_name = 'FAKE_BACKEND_NAME' self.zfs_share_export_ip = kwargs.get( "zfs_share_export_ip", "1.1.1.1") self.zfs_service_ip = kwargs.get("zfs_service_ip", "2.2.2.2") self.zfs_zpool_list = kwargs.get( "zfs_zpool_list", ["foo", "bar/subbar", "quuz"]) self.zfs_use_ssh = kwargs.get("zfs_use_ssh", False) self.zfs_share_export_ip = kwargs.get( "zfs_share_export_ip", "240.241.242.243") self.zfs_service_ip = kwargs.get("zfs_service_ip", "240.241.242.244") self.ssh_conn_timeout = kwargs.get("ssh_conn_timeout", 123) self.zfs_ssh_username = kwargs.get( "zfs_ssh_username", 'fake_username') self.zfs_ssh_user_password = kwargs.get( "zfs_ssh_user_password", 'fake_pass') self.zfs_ssh_private_key_path = kwargs.get( "zfs_ssh_private_key_path", '/fake/path') self.zfs_replica_snapshot_prefix = kwargs.get( "zfs_replica_snapshot_prefix", "tmp_snapshot_for_replication_") self.zfs_migration_snapshot_prefix = kwargs.get( "zfs_migration_snapshot_prefix", "tmp_snapshot_for_migration_") self.zfs_dataset_creation_options = kwargs.get( "zfs_dataset_creation_options", ["fook=foov", "bark=barv"]) self.network_config_group = kwargs.get( "network_config_group", "fake_network_config_group") self.admin_network_config_group = kwargs.get( "admin_network_config_group", "fake_admin_network_config_group") self.config_group = kwargs.get("config_group", "fake_config_group") self.reserved_share_percentage = kwargs.get( "reserved_share_percentage", 0) self.max_over_subscription_ratio = kwargs.get( "max_over_subscription_ratio", 15.0) self.filter_function = kwargs.get("filter_function", None) self.goodness_function = kwargs.get("goodness_function", None) def safe_get(self, key): return getattr(self, key) def append_config_values(self, *args, **kwargs): pass class FakeDriverPrivateStorage(object): def __init__(self): self.storage = {} def update(self, entity_id, data): if entity_id not in self.storage: self.storage[entity_id] = {} self.storage[entity_id].update(data) def get(self, entity_id, key): return self.storage.get(entity_id, {}).get(key) def delete(self, entity_id): self.storage.pop(entity_id, None) class FakeTempDir(object): def __enter__(self, *args, **kwargs): return '/foo/path' def __exit__(self, *args, **kwargs): pass class GetBackendConfigurationTestCase(test.TestCase): def test_get_backend_configuration_success(self): backend_name = 'fake_backend_name' self.mock_object( zfs_driver.CONF, 'list_all_sections', mock.Mock(return_value=['fake1', backend_name, 'fake2'])) mock_config = self.mock_object( zfs_driver.configuration, 'Configuration') result = zfs_driver.get_backend_configuration(backend_name) self.assertEqual(mock_config.return_value, result) mock_config.assert_called_once_with( zfs_driver.driver.share_opts, config_group=backend_name) mock_config.return_value.append_config_values.assert_has_calls([ mock.call(zfs_driver.zfsonlinux_opts), mock.call(zfs_driver.share_manager_opts), mock.call(zfs_driver.driver.ssh_opts), ]) def test_get_backend_configuration_error(self): backend_name = 'fake_backend_name' self.mock_object( zfs_driver.CONF, 
'list_all_sections', mock.Mock(return_value=['fake1', 'fake2'])) mock_config = self.mock_object( zfs_driver.configuration, 'Configuration') self.assertRaises( exception.BadConfigurationException, zfs_driver.get_backend_configuration, backend_name, ) self.assertFalse(mock_config.called) self.assertFalse(mock_config.return_value.append_config_values.called) @ddt.ddt class ZFSonLinuxShareDriverTestCase(test.TestCase): def setUp(self): self.mock_object(zfs_driver.CONF, '_check_required_opts') super(ZFSonLinuxShareDriverTestCase, self).setUp() self._context = context.get_admin_context() self.ssh_executor = self.mock_object(ganesha_utils, 'SSHExecutor') self.configuration = FakeConfig() self.private_storage = FakeDriverPrivateStorage() self.driver = zfs_driver.ZFSonLinuxShareDriver( configuration=self.configuration, private_storage=self.private_storage) def test_init(self): self.assertTrue(hasattr(self.driver, 'replica_snapshot_prefix')) self.assertEqual( self.driver.replica_snapshot_prefix, self.configuration.zfs_replica_snapshot_prefix) self.assertEqual( self.driver.backend_name, self.configuration.share_backend_name) self.assertEqual( self.driver.zpool_list, ['foo', 'bar', 'quuz']) self.assertEqual( self.driver.dataset_creation_options, self.configuration.zfs_dataset_creation_options) self.assertEqual( self.driver.share_export_ip, self.configuration.zfs_share_export_ip) self.assertEqual( self.driver.service_ip, self.configuration.zfs_service_ip) self.assertEqual( self.driver.private_storage, self.private_storage) self.assertTrue(hasattr(self.driver, '_helpers')) self.assertEqual(self.driver._helpers, {}) for attr_name in ('execute', 'execute_with_retry', 'parse_zfs_answer', 'get_zpool_option', 'get_zfs_option', 'zfs'): self.assertTrue(hasattr(self.driver, attr_name)) def test_init_error_with_duplicated_zpools(self): configuration = FakeConfig( zfs_zpool_list=['foo', 'bar', 'foo/quuz']) self.assertRaises( exception.BadConfigurationException, zfs_driver.ZFSonLinuxShareDriver, configuration=configuration, private_storage=self.private_storage ) def test__setup_helpers(self): mock_import_class = self.mock_object( zfs_driver.importutils, 'import_class') self.configuration.zfs_share_helpers = ['FOO=foo.module.WithHelper'] result = self.driver._setup_helpers() self.assertIsNone(result) mock_import_class.assert_called_once_with('foo.module.WithHelper') mock_import_class.return_value.assert_called_once_with( self.configuration) self.assertEqual( self.driver._helpers, {'FOO': mock_import_class.return_value.return_value}) def test__setup_helpers_error(self): self.configuration.zfs_share_helpers = [] self.assertRaises( exception.BadConfigurationException, self.driver._setup_helpers) def test__get_share_helper(self): self.driver._helpers = {'FOO': 'BAR'} result = self.driver._get_share_helper('FOO') self.assertEqual('BAR', result) @ddt.data({}, {'foo': 'bar'}) def test__get_share_helper_error(self, share_proto): self.assertRaises( exception.InvalidShare, self.driver._get_share_helper, 'NFS') @ddt.data(True, False) def test_do_setup(self, use_ssh): self.mock_object(self.driver, '_setup_helpers') self.mock_object(self.driver, 'ssh_executor') self.configuration.zfs_use_ssh = use_ssh self.driver.do_setup('fake_context') self.driver._setup_helpers.assert_called_once_with() if use_ssh: self.assertEqual(4, self.driver.ssh_executor.call_count) else: self.assertEqual(3, self.driver.ssh_executor.call_count) @ddt.data( ('foo', '127.0.0.1'), ('127.0.0.1', 'foo'), ('256.0.0.1', '127.0.0.1'), ('::1/128', '127.0.0.1'), 
('127.0.0.1', '::1/128'), ) @ddt.unpack def test_do_setup_error_on_ip_addresses_configuration( self, share_export_ip, service_ip): self.mock_object(self.driver, '_setup_helpers') self.driver.share_export_ip = share_export_ip self.driver.service_ip = service_ip self.assertRaises( exception.BadConfigurationException, self.driver.do_setup, 'fake_context') self.driver._setup_helpers.assert_called_once_with() @ddt.data([], '', None) def test_do_setup_no_zpools_configured(self, zpool_list): self.mock_object(self.driver, '_setup_helpers') self.driver.zpool_list = zpool_list self.assertRaises( exception.BadConfigurationException, self.driver.do_setup, 'fake_context') self.driver._setup_helpers.assert_called_once_with() @ddt.data(None, '', 'foo_replication_domain') def test__get_pools_info(self, replication_domain): self.mock_object( self.driver, 'get_zpool_option', mock.Mock(side_effect=['2G', '3G', '5G', '4G'])) self.configuration.replication_domain = replication_domain self.driver.zpool_list = ['foo', 'bar'] expected = [ {'pool_name': 'foo', 'total_capacity_gb': 3.0, 'free_capacity_gb': 2.0, 'reserved_percentage': 0, 'compression': [True, False], 'dedupe': [True, False], 'thin_provisioning': [True], 'max_over_subscription_ratio': ( self.driver.configuration.max_over_subscription_ratio), 'qos': [False]}, {'pool_name': 'bar', 'total_capacity_gb': 4.0, 'free_capacity_gb': 5.0, 'reserved_percentage': 0, 'compression': [True, False], 'dedupe': [True, False], 'thin_provisioning': [True], 'max_over_subscription_ratio': ( self.driver.configuration.max_over_subscription_ratio), 'qos': [False]}, ] if replication_domain: for pool in expected: pool['replication_type'] = 'readable' result = self.driver._get_pools_info() self.assertEqual(expected, result) self.driver.get_zpool_option.assert_has_calls([ mock.call('foo', 'free'), mock.call('foo', 'size'), mock.call('bar', 'free'), mock.call('bar', 'size'), ]) @ddt.data( ([], {'compression': [True, False], 'dedupe': [True, False]}), (['dedup=off'], {'compression': [True, False], 'dedupe': [False]}), (['dedup=on'], {'compression': [True, False], 'dedupe': [True]}), (['compression=on'], {'compression': [True], 'dedupe': [True, False]}), (['compression=off'], {'compression': [False], 'dedupe': [True, False]}), (['compression=fake'], {'compression': [True], 'dedupe': [True, False]}), (['compression=fake', 'dedup=off'], {'compression': [True], 'dedupe': [False]}), (['compression=off', 'dedup=on'], {'compression': [False], 'dedupe': [True]}), ) @ddt.unpack def test__init_common_capabilities( self, dataset_creation_options, expected_part): self.driver.dataset_creation_options = ( dataset_creation_options) expected = { 'thin_provisioning': [True], 'qos': [False], 'max_over_subscription_ratio': ( self.driver.configuration.max_over_subscription_ratio), } expected.update(expected_part) self.driver._init_common_capabilities() self.assertEqual(expected, self.driver.common_capabilities) @ddt.data(None, '', 'foo_replication_domain') def test__update_share_stats(self, replication_domain): self.configuration.replication_domain = replication_domain self.mock_object(self.driver, '_get_pools_info') self.assertEqual({}, self.driver._stats) expected = { 'driver_handles_share_servers': False, 'driver_name': 'ZFS', 'driver_version': '1.0', 'free_capacity_gb': 'unknown', 'pools': self.driver._get_pools_info.return_value, 'qos': False, 'replication_domain': replication_domain, 'reserved_percentage': 0, 'share_backend_name': self.driver.backend_name, 'share_group_stats': 
{'consistent_snapshot_support': None}, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'storage_protocol': 'NFS', 'total_capacity_gb': 'unknown', 'vendor_name': 'Open Source', 'filter_function': None, 'goodness_function': None, 'ipv4_support': True, 'ipv6_support': False, } if replication_domain: expected['replication_type'] = 'readable' self.driver._update_share_stats() self.assertEqual(expected, self.driver._stats) self.driver._get_pools_info.assert_called_once_with() @ddt.data('', 'foo', 'foo-bar', 'foo_bar', 'foo-bar_quuz') def test__get_share_name(self, share_id): prefix = 'fake_prefix_' self.configuration.zfs_dataset_name_prefix = prefix self.configuration.zfs_dataset_snapshot_name_prefix = 'quuz' expected = prefix + share_id.replace('-', '_') result = self.driver._get_share_name(share_id) self.assertEqual(expected, result) @ddt.data('', 'foo', 'foo-bar', 'foo_bar', 'foo-bar_quuz') def test__get_snapshot_name(self, snapshot_id): prefix = 'fake_prefix_' self.configuration.zfs_dataset_name_prefix = 'quuz' self.configuration.zfs_dataset_snapshot_name_prefix = prefix expected = prefix + snapshot_id.replace('-', '_') result = self.driver._get_snapshot_name(snapshot_id) self.assertEqual(expected, result) def test__get_dataset_creation_options_not_set(self): self.driver.dataset_creation_options = [] mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) share = {'size': '5'} result = self.driver._get_dataset_creation_options(share=share) self.assertIsInstance(result, list) self.assertEqual(2, len(result)) for v in ('quota=5G', 'readonly=off'): self.assertIn(v, result) mock_get_extra_specs_from_share.assert_called_once_with(share) @ddt.data(True, False) def test__get_dataset_creation_options(self, is_readonly): mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) self.driver.dataset_creation_options = [ 'readonly=quuz', 'sharenfs=foo', 'sharesmb=bar', 'k=v', 'q=w', ] share = {'size': 5} readonly = 'readonly=%s' % ('on' if is_readonly else 'off') expected = [readonly, 'k=v', 'q=w', 'quota=5G'] result = self.driver._get_dataset_creation_options( share=share, is_readonly=is_readonly) self.assertEqual(sorted(expected), sorted(result)) mock_get_extra_specs_from_share.assert_called_once_with(share) @ddt.data( (' True', [True, False], ['dedup=off'], 'dedup=on'), ('True', [True, False], ['dedup=off'], 'dedup=on'), ('on', [True, False], ['dedup=off'], 'dedup=on'), ('yes', [True, False], ['dedup=off'], 'dedup=on'), ('1', [True, False], ['dedup=off'], 'dedup=on'), ('True', [True], [], 'dedup=on'), (' False', [True, False], [], 'dedup=off'), ('False', [True, False], [], 'dedup=off'), ('False', [False], ['dedup=on'], 'dedup=off'), ('off', [False], ['dedup=on'], 'dedup=off'), ('no', [False], ['dedup=on'], 'dedup=off'), ('0', [False], ['dedup=on'], 'dedup=off'), ) @ddt.unpack def test__get_dataset_creation_options_with_updated_dedupe( self, dedupe_extra_spec, dedupe_capability, driver_options, expected): mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value={'dedupe': dedupe_extra_spec})) self.driver.dataset_creation_options = driver_options self.driver.common_capabilities['dedupe'] = dedupe_capability share = {'size': 5} expected_options = ['quota=5G', 'readonly=off'] 
expected_options.append(expected) result = self.driver._get_dataset_creation_options(share=share) self.assertEqual(sorted(expected_options), sorted(result)) mock_get_extra_specs_from_share.assert_called_once_with(share) @ddt.data( ('on', [True, False], ['compression=off'], 'compression=on'), ('on', [True], [], 'compression=on'), ('off', [False], ['compression=on'], 'compression=off'), ('off', [True, False], [], 'compression=off'), ('foo', [True, False], [], 'compression=foo'), ('bar', [True], [], 'compression=bar'), ) @ddt.unpack def test__get_dataset_creation_options_with_updated_compression( self, extra_spec, capability, driver_options, expected_option): mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value={'zfsonlinux:compression': extra_spec})) self.driver.dataset_creation_options = driver_options self.driver.common_capabilities['compression'] = capability share = {'size': 5} expected_options = ['quota=5G', 'readonly=off'] expected_options.append(expected_option) result = self.driver._get_dataset_creation_options(share=share) self.assertEqual(sorted(expected_options), sorted(result)) mock_get_extra_specs_from_share.assert_called_once_with(share) @ddt.data( ({'dedupe': 'fake'}, {'dedupe': [True, False]}), ({'dedupe': 'on'}, {'dedupe': [False]}), ({'dedupe': 'off'}, {'dedupe': [True]}), ({'zfsonlinux:compression': 'fake'}, {'compression': [False]}), ({'zfsonlinux:compression': 'on'}, {'compression': [False]}), ({'zfsonlinux:compression': 'off'}, {'compression': [True]}), ) @ddt.unpack def test__get_dataset_creation_options_error( self, extra_specs, common_capabilities): mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value=extra_specs)) share = {'size': 5} self.driver.common_capabilities.update(common_capabilities) self.assertRaises( exception.ZFSonLinuxException, self.driver._get_dataset_creation_options, share=share ) mock_get_extra_specs_from_share.assert_called_once_with(share) @ddt.data('bar/quuz', 'bar/quuz/', 'bar') def test__get_dataset_name(self, second_zpool): self.configuration.zfs_zpool_list = ['foo', second_zpool] prefix = 'fake_prefix_' self.configuration.zfs_dataset_name_prefix = prefix share = {'id': 'abc-def_ghi', 'host': 'hostname@backend_name#bar'} result = self.driver._get_dataset_name(share) if second_zpool[-1] == '/': second_zpool = second_zpool[0:-1] expected = '%s/%sabc_def_ghi' % (second_zpool, prefix) self.assertEqual(expected, result) def test_create_share(self): mock_get_helper = self.mock_object(self.driver, '_get_share_helper') self.mock_object(self.driver, 'zfs') mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) context = 'fake_context' share = { 'id': 'fake_share_id', 'host': 'hostname@backend_name#bar', 'share_proto': 'NFS', 'size': 4, } self.configuration.zfs_dataset_name_prefix = 'some_prefix_' self.configuration.zfs_ssh_username = 'someuser' self.driver.share_export_ip = '1.1.1.1' self.driver.service_ip = '2.2.2.2' dataset_name = 'bar/subbar/some_prefix_fake_share_id' result = self.driver.create_share(context, share, share_server=None) self.assertEqual( mock_get_helper.return_value.create_exports.return_value, result, ) self.assertEqual( 'share', self.driver.private_storage.get(share['id'], 'entity_type')) self.assertEqual( dataset_name, self.driver.private_storage.get(share['id'], 'dataset_name')) self.assertEqual( 
'someuser@2.2.2.2', self.driver.private_storage.get(share['id'], 'ssh_cmd')) self.assertEqual( 'bar', self.driver.private_storage.get(share['id'], 'pool_name')) self.driver.zfs.assert_called_once_with( 'create', '-o', 'quota=4G', '-o', 'fook=foov', '-o', 'bark=barv', '-o', 'readonly=off', 'bar/subbar/some_prefix_fake_share_id') mock_get_helper.assert_has_calls([ mock.call('NFS'), mock.call().create_exports(dataset_name) ]) mock_get_extra_specs_from_share.assert_called_once_with(share) def test_create_share_with_share_server(self): self.assertRaises( exception.InvalidInput, self.driver.create_share, 'fake_context', 'fake_share', share_server={'id': 'fake_server'}, ) def test_delete_share(self): dataset_name = 'bar/subbar/some_prefix_fake_share_id' mock_delete = self.mock_object( self.driver, '_delete_dataset_or_snapshot_with_retry') self.mock_object(self.driver, '_get_share_helper') self.mock_object(zfs_driver.LOG, 'warning') self.mock_object( self.driver, 'zfs', mock.Mock(return_value=('a', 'b'))) snap_name = '%s@%s' % ( dataset_name, self.driver.replica_snapshot_prefix) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock( side_effect=[ [{'NAME': 'fake_dataset_name'}, {'NAME': dataset_name}], [{'NAME': 'snap_name'}, {'NAME': '%s@foo' % dataset_name}, {'NAME': snap_name}], ])) context = 'fake_context' share = { 'id': 'fake_share_id', 'host': 'hostname@backend_name#bar', 'share_proto': 'NFS', 'size': 4, } self.configuration.zfs_dataset_name_prefix = 'some_prefix_' self.configuration.zfs_ssh_username = 'someuser' self.driver.share_export_ip = '1.1.1.1' self.driver.service_ip = '2.2.2.2' self.driver.private_storage.update( share['id'], {'pool_name': 'bar', 'dataset_name': dataset_name} ) self.driver.delete_share(context, share, share_server=None) self.driver.zfs.assert_has_calls([ mock.call('list', '-r', 'bar'), mock.call('list', '-r', '-t', 'snapshot', 'bar'), ]) self.driver._get_share_helper.assert_has_calls([ mock.call('NFS'), mock.call().remove_exports(dataset_name)]) self.driver.parse_zfs_answer.assert_has_calls([ mock.call('a'), mock.call('a')]) mock_delete.assert_has_calls([ mock.call(snap_name), mock.call(dataset_name), ]) self.assertEqual(0, zfs_driver.LOG.warning.call_count) def test_delete_share_absent(self): dataset_name = 'bar/subbar/some_prefix_fake_share_id' mock_delete = self.mock_object( self.driver, '_delete_dataset_or_snapshot_with_retry') self.mock_object(self.driver, '_get_share_helper') self.mock_object(zfs_driver.LOG, 'warning') self.mock_object( self.driver, 'zfs', mock.Mock(return_value=('a', 'b'))) snap_name = '%s@%s' % ( dataset_name, self.driver.replica_snapshot_prefix) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock(side_effect=[[], [{'NAME': snap_name}]])) context = 'fake_context' share = { 'id': 'fake_share_id', 'host': 'hostname@backend_name#bar', 'size': 4, } self.configuration.zfs_dataset_name_prefix = 'some_prefix_' self.configuration.zfs_ssh_username = 'someuser' self.driver.share_export_ip = '1.1.1.1' self.driver.service_ip = '2.2.2.2' self.driver.private_storage.update(share['id'], {'pool_name': 'bar'}) self.driver.delete_share(context, share, share_server=None) self.assertEqual(0, self.driver._get_share_helper.call_count) self.assertEqual(0, mock_delete.call_count) self.driver.zfs.assert_called_once_with('list', '-r', 'bar') self.driver.parse_zfs_answer.assert_called_once_with('a') zfs_driver.LOG.warning.assert_called_once_with( mock.ANY, {'id': share['id'], 'name': dataset_name}) def test_delete_share_with_share_server(self): 
self.assertRaises( exception.InvalidInput, self.driver.delete_share, 'fake_context', 'fake_share', share_server={'id': 'fake_server'}, ) def test_create_snapshot(self): self.configuration.zfs_dataset_snapshot_name_prefix = 'prefx_' self.mock_object(self.driver, 'zfs') snapshot = { 'id': 'fake_snapshot_instance_id', 'snapshot_id': 'fake_snapshot_id', 'host': 'hostname@backend_name#bar', 'size': 4, 'share_instance_id': 'fake_share_id' } snapshot_name = 'foo_data_set_name@prefx_%s' % snapshot['id'] self.driver.private_storage.update( snapshot['share_instance_id'], {'dataset_name': 'foo_data_set_name'}) result = self.driver.create_snapshot('fake_context', snapshot) self.driver.zfs.assert_called_once_with( 'snapshot', snapshot_name) self.assertEqual( snapshot_name.split('@')[-1], self.driver.private_storage.get( snapshot['snapshot_id'], 'snapshot_tag')) self.assertEqual({"provider_location": snapshot_name}, result) def test_delete_snapshot(self): snapshot = { 'id': 'fake_snapshot_instance_id', 'snapshot_id': 'fake_snapshot_id', 'host': 'hostname@backend_name#bar', 'size': 4, 'share_instance_id': 'fake_share_id', } dataset_name = 'foo_zpool/bar_dataset_name' snap_tag = 'prefix_%s' % snapshot['id'] snap_name = '%(dataset)s@%(tag)s' % { 'dataset': dataset_name, 'tag': snap_tag} mock_delete = self.mock_object( self.driver, '_delete_dataset_or_snapshot_with_retry') self.mock_object(zfs_driver.LOG, 'warning') self.mock_object( self.driver, 'zfs', mock.Mock(return_value=('a', 'b'))) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock(side_effect=[ [{'NAME': 'some_other_dataset@snapshot_name'}, {'NAME': snap_name}], []])) context = 'fake_context' self.driver.private_storage.update( snapshot['id'], {'snapshot_name': snap_name}) self.driver.private_storage.update( snapshot['snapshot_id'], {'snapshot_tag': snap_tag}) self.driver.private_storage.update( snapshot['share_instance_id'], {'dataset_name': dataset_name}) self.assertEqual( snap_tag, self.driver.private_storage.get( snapshot['snapshot_id'], 'snapshot_tag')) self.driver.delete_snapshot(context, snapshot, share_server=None) self.assertIsNone( self.driver.private_storage.get( snapshot['snapshot_id'], 'snapshot_tag')) self.assertEqual(0, zfs_driver.LOG.warning.call_count) self.driver.zfs.assert_called_once_with( 'list', '-r', '-t', 'snapshot', snap_name) self.driver.parse_zfs_answer.assert_called_once_with('a') mock_delete.assert_called_once_with(snap_name) def test_delete_snapshot_absent(self): snapshot = { 'id': 'fake_snapshot_instance_id', 'snapshot_id': 'fake_snapshot_id', 'host': 'hostname@backend_name#bar', 'size': 4, 'share_instance_id': 'fake_share_id', } dataset_name = 'foo_zpool/bar_dataset_name' snap_tag = 'prefix_%s' % snapshot['id'] snap_name = '%(dataset)s@%(tag)s' % { 'dataset': dataset_name, 'tag': snap_tag} mock_delete = self.mock_object( self.driver, '_delete_dataset_or_snapshot_with_retry') self.mock_object(zfs_driver.LOG, 'warning') self.mock_object( self.driver, 'zfs', mock.Mock(return_value=('a', 'b'))) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock(side_effect=[[], [{'NAME': snap_name}]])) context = 'fake_context' self.driver.private_storage.update( snapshot['id'], {'snapshot_name': snap_name}) self.driver.private_storage.update( snapshot['snapshot_id'], {'snapshot_tag': snap_tag}) self.driver.private_storage.update( snapshot['share_instance_id'], {'dataset_name': dataset_name}) self.driver.delete_snapshot(context, snapshot, share_server=None) self.assertEqual(0, mock_delete.call_count) 
self.driver.zfs.assert_called_once_with( 'list', '-r', '-t', 'snapshot', snap_name) self.driver.parse_zfs_answer.assert_called_once_with('a') zfs_driver.LOG.warning.assert_called_once_with( mock.ANY, {'id': snapshot['id'], 'name': snap_name}) def test_delete_snapshot_with_share_server(self): self.assertRaises( exception.InvalidInput, self.driver.delete_snapshot, 'fake_context', 'fake_snapshot', share_server={'id': 'fake_server'}, ) @ddt.data({'src_backend_name': 'backend_a', 'src_user': 'someuser', 'src_ip': '2.2.2.2'}, {'src_backend_name': 'backend_b', 'src_user': 'someuser2', 'src_ip': '3.3.3.3'}) @ddt.unpack def test_create_share_from_snapshot(self, src_backend_name, src_user, src_ip): mock_get_helper = self.mock_object(self.driver, '_get_share_helper') self.mock_object(self.driver, 'zfs') self.mock_object(self.driver, 'execute') mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) context = 'fake_context' dst_backend_name = 'backend_a' parent_share = db_utils.create_share_without_instance( id='fake_share_id_1', size=4 ) parent_instance = db_utils.create_share_instance( id='fake_parent_instance', share_id=parent_share['id'], host='hostname@%s#bar' % src_backend_name ) share = db_utils.create_share( id='fake_share_id_2', host='hostname@%s#bar' % dst_backend_name, size=4 ) snapshot = db_utils.create_snapshot( id='fake_snap_id_1', share_id='fake_share_id_1' ) snap_instance = db_utils.create_snapshot_instance( id='fake_snap_instance', snapshot_id=snapshot['id'], share_instance_id=parent_instance['id'] ) dataset_name = 'bar/subbar/some_prefix_%s' % share['id'] snap_tag = 'prefix_%s' % snapshot['id'] snap_name = '%(dataset)s@%(tag)s' % { 'dataset': dataset_name, 'tag': snap_tag} self.configuration.zfs_dataset_name_prefix = 'some_prefix_' self.configuration.zfs_ssh_username = 'someuser' self.driver.share_export_ip = '1.1.1.1' self.driver.service_ip = '2.2.2.2' self.driver.private_storage.update( snap_instance['id'], {'snapshot_name': snap_name}) self.driver.private_storage.update( snap_instance['snapshot_id'], {'snapshot_tag': snap_tag}) self.driver.private_storage.update( snap_instance['share_instance_id'], {'dataset_name': dataset_name}) self.mock_object( zfs_driver, 'get_backend_configuration', mock.Mock(return_value=type( 'FakeConfig', (object,), { 'zfs_ssh_username': src_user, 'zfs_service_ip': src_ip }))) result = self.driver.create_share_from_snapshot( context, share, snap_instance, share_server=None) self.assertEqual( mock_get_helper.return_value.create_exports.return_value, result, ) dst_ssh_host = (self.configuration.zfs_ssh_username + '@' + self.driver.service_ip) src_ssh_host = src_user + '@' + src_ip self.assertEqual( 'share', self.driver.private_storage.get(share['id'], 'entity_type')) self.assertEqual( dataset_name, self.driver.private_storage.get( snap_instance['share_instance_id'], 'dataset_name')) self.assertEqual( dst_ssh_host, self.driver.private_storage.get(share['id'], 'ssh_cmd')) self.assertEqual( 'bar', self.driver.private_storage.get(share['id'], 'pool_name')) self.driver.execute.assert_has_calls([ mock.call( 'ssh', src_ssh_host, 'sudo', 'zfs', 'send', '-vD', snap_name, '|', 'ssh', dst_ssh_host, 'sudo', 'zfs', 'receive', '-v', '%s' % dataset_name), mock.call( 'sudo', 'zfs', 'destroy', '%s@%s' % (dataset_name, snap_tag)), ]) self.driver.zfs.assert_has_calls([ mock.call('set', opt, '%s' % dataset_name) for opt in ('quota=4G', 'bark=barv', 'readonly=off', 'fook=foov') ], any_order=True) 
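# --- Illustrative aside (not part of the original test module) ---
# test_create_share_from_snapshot above expects the cross-backend copy to be
# issued as an "ssh <src> zfs send | ssh <dst> zfs receive" pipeline. A sketch
# of composing that argument list exactly as the mock.call above asserts it
# (both host strings are placeholders of the form "user@ip"):
def _example_send_receive_cmd(src_ssh_host, dst_ssh_host, snap_name,
                              dataset_name):
    # The '|' token is passed through as-is; the resulting pipeline is handed
    # to execute() as a single command line.
    return ('ssh', src_ssh_host, 'sudo', 'zfs', 'send', '-vD', snap_name,
            '|',
            'ssh', dst_ssh_host, 'sudo', 'zfs', 'receive', '-v', dataset_name)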
mock_get_helper.assert_has_calls([ mock.call('NFS'), mock.call().create_exports(dataset_name) ]) mock_get_extra_specs_from_share.assert_called_once_with(share) def test_create_share_from_snapshot_with_share_server(self): self.assertRaises( exception.InvalidInput, self.driver.create_share_from_snapshot, 'fake_context', 'fake_share', 'fake_snapshot', share_server={'id': 'fake_server'}, ) def test_get_pool(self): share = {'host': 'hostname@backend_name#bar'} result = self.driver.get_pool(share) self.assertEqual('bar', result) @ddt.data('on', 'off', 'rw=1.1.1.1') def test_ensure_share(self, get_zfs_option_answer): share = { 'id': 'fake_share_id', 'host': 'hostname@backend_name#bar', 'share_proto': 'NFS', } dataset_name = 'foo_zpool/foo_fs' self.mock_object( self.driver, '_get_dataset_name', mock.Mock(return_value=dataset_name)) self.mock_object( self.driver, 'get_zfs_option', mock.Mock(return_value=get_zfs_option_answer)) mock_helper = self.mock_object(self.driver, '_get_share_helper') self.mock_object( self.driver, 'zfs', mock.Mock(return_value=('a', 'b'))) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock(side_effect=[[{'NAME': 'fake1'}, {'NAME': dataset_name}, {'NAME': 'fake2'}]] * 2)) for s in ('1', '2'): self.driver.zfs.reset_mock() self.driver.get_zfs_option.reset_mock() mock_helper.reset_mock() self.driver.parse_zfs_answer.reset_mock() self.driver._get_dataset_name.reset_mock() self.driver.share_export_ip = '1.1.1.%s' % s self.driver.service_ip = '2.2.2.%s' % s self.configuration.zfs_ssh_username = 'user%s' % s result = self.driver.ensure_share('fake_context', share) self.assertEqual( 'user%(s)s@2.2.2.%(s)s' % {'s': s}, self.driver.private_storage.get(share['id'], 'ssh_cmd')) self.driver.get_zfs_option.assert_called_once_with( dataset_name, 'sharenfs') mock_helper.assert_called_once_with( share['share_proto']) mock_helper.return_value.get_exports.assert_called_once_with( dataset_name) expected_calls = [mock.call('list', '-r', 'bar')] if get_zfs_option_answer != 'off': expected_calls.append(mock.call('share', dataset_name)) self.driver.zfs.assert_has_calls(expected_calls) self.driver.parse_zfs_answer.assert_called_once_with('a') self.driver._get_dataset_name.assert_called_once_with(share) self.assertEqual( mock_helper.return_value.get_exports.return_value, result, ) def test_ensure_share_absent(self): share = {'id': 'fake_share_id', 'host': 'hostname@backend_name#bar'} dataset_name = 'foo_zpool/foo_fs' self.driver.private_storage.update( share['id'], {'dataset_name': dataset_name}) self.mock_object(self.driver, 'get_zfs_option') self.mock_object(self.driver, '_get_share_helper') self.mock_object( self.driver, 'zfs', mock.Mock(return_value=('a', 'b'))) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock(side_effect=[[], [{'NAME': dataset_name}]])) self.assertRaises( exception.ShareResourceNotFound, self.driver.ensure_share, 'fake_context', share, ) self.assertEqual(0, self.driver.get_zfs_option.call_count) self.assertEqual(0, self.driver._get_share_helper.call_count) self.driver.zfs.assert_called_once_with('list', '-r', 'bar') self.driver.parse_zfs_answer.assert_called_once_with('a') def test_ensure_share_with_share_server(self): self.assertRaises( exception.InvalidInput, self.driver.ensure_share, 'fake_context', 'fake_share', share_server={'id': 'fake_server'}, ) def test_get_network_allocations_number(self): self.assertEqual(0, self.driver.get_network_allocations_number()) def test_extend_share(self): dataset_name = 'foo_zpool/foo_fs' self.mock_object( self.driver, 
'_get_dataset_name', mock.Mock(return_value=dataset_name)) self.mock_object(self.driver, 'zfs') self.driver.extend_share('fake_share', 5) self.driver._get_dataset_name.assert_called_once_with('fake_share') self.driver.zfs.assert_called_once_with( 'set', 'quota=5G', dataset_name) def test_extend_share_with_share_server(self): self.assertRaises( exception.InvalidInput, self.driver.extend_share, 'fake_context', 'fake_share', 5, share_server={'id': 'fake_server'}, ) def test_shrink_share(self): dataset_name = 'foo_zpool/foo_fs' self.mock_object( self.driver, '_get_dataset_name', mock.Mock(return_value=dataset_name)) self.mock_object(self.driver, 'zfs') self.mock_object( self.driver, 'get_zfs_option', mock.Mock(return_value='4G')) share = {'id': 'fake_share_id'} self.driver.shrink_share(share, 5) self.driver._get_dataset_name.assert_called_once_with(share) self.driver.get_zfs_option.assert_called_once_with( dataset_name, 'used') self.driver.zfs.assert_called_once_with( 'set', 'quota=5G', dataset_name) def test_shrink_share_data_loss(self): dataset_name = 'foo_zpool/foo_fs' self.mock_object( self.driver, '_get_dataset_name', mock.Mock(return_value=dataset_name)) self.mock_object(self.driver, 'zfs') self.mock_object( self.driver, 'get_zfs_option', mock.Mock(return_value='6G')) share = {'id': 'fake_share_id'} self.assertRaises( exception.ShareShrinkingPossibleDataLoss, self.driver.shrink_share, share, 5) self.driver._get_dataset_name.assert_called_once_with(share) self.driver.get_zfs_option.assert_called_once_with( dataset_name, 'used') self.assertEqual(0, self.driver.zfs.call_count) def test_shrink_share_with_share_server(self): self.assertRaises( exception.InvalidInput, self.driver.shrink_share, 'fake_context', 'fake_share', 5, share_server={'id': 'fake_server'}, ) def test__get_replication_snapshot_prefix(self): replica = {'id': 'foo-_bar-_id'} self.driver.replica_snapshot_prefix = 'PrEfIx' result = self.driver._get_replication_snapshot_prefix(replica) self.assertEqual('PrEfIx_foo__bar__id', result) def test__get_replication_snapshot_tag(self): replica = {'id': 'foo-_bar-_id'} self.driver.replica_snapshot_prefix = 'PrEfIx' mock_utcnow = self.mock_object(zfs_driver.timeutils, 'utcnow') result = self.driver._get_replication_snapshot_tag(replica) self.assertEqual( ('PrEfIx_foo__bar__id_time_' '%s' % mock_utcnow.return_value.isoformat.return_value), result) mock_utcnow.assert_called_once_with() mock_utcnow.return_value.isoformat.assert_called_once_with() def test__get_active_replica(self): replica_list = [ {'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, 'id': '1'}, {'replica_state': zfs_driver.constants.REPLICA_STATE_ACTIVE, 'id': '2'}, {'replica_state': zfs_driver.constants.REPLICA_STATE_OUT_OF_SYNC, 'id': '3'}, ] result = self.driver._get_active_replica(replica_list) self.assertEqual(replica_list[1], result) def test__get_active_replica_not_found(self): replica_list = [ {'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, 'id': '1'}, {'replica_state': zfs_driver.constants.REPLICA_STATE_OUT_OF_SYNC, 'id': '3'}, ] self.assertRaises( exception.ReplicationException, self.driver._get_active_replica, replica_list, ) def test_update_access(self): self.mock_object(self.driver, '_get_dataset_name') mock_helper = self.mock_object(self.driver, '_get_share_helper') mock_shell_executor = self.mock_object( self.driver, '_get_shell_executor_by_host') share = { 'share_proto': 'NFS', 'host': 'foo_host@bar_backend@quuz_pool', } result = self.driver.update_access( 'fake_context', share, [1], 
[2], [3]) self.driver._get_dataset_name.assert_called_once_with(share) mock_shell_executor.assert_called_once_with(share['host']) self.assertEqual( mock_helper.return_value.update_access.return_value, result, ) def test_update_access_with_share_server(self): self.assertRaises( exception.InvalidInput, self.driver.update_access, 'fake_context', 'fake_share', [], [], [], share_server={'id': 'fake_server'}, ) @ddt.data( ({}, True), ({"size": 5}, True), ({"size": 5, "foo": "bar"}, False), ({"size": "5", "foo": "bar"}, True), ) @ddt.unpack def test_manage_share_success_expected(self, driver_options, mount_exists): old_dataset_name = "foopool/path/to/old/dataset/name" new_dataset_name = "foopool/path/to/new/dataset/name" share = { "id": "fake_share_instance_id", "share_id": "fake_share_id", "export_locations": [{"path": "1.1.1.1:/%s" % old_dataset_name}], "host": "foobackend@foohost#foopool", "share_proto": "NFS", } mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) self.mock_object(zfs_driver.time, "sleep") mock__get_dataset_name = self.mock_object( self.driver, "_get_dataset_name", mock.Mock(return_value=new_dataset_name)) mock_helper = self.mock_object(self.driver, "_get_share_helper") mock_zfs = self.mock_object( self.driver, "zfs", mock.Mock(return_value=("fake_out", "fake_error"))) mock_zfs_with_retry = self.mock_object(self.driver, "zfs_with_retry") mock_execute_side_effects = [ ("%s " % old_dataset_name, "fake_err") if mount_exists else ("foo", "bar") ] * 3 if mount_exists: # After three retries, assume the mount goes away mock_execute_side_effects.append((("foo", "bar"))) mock_execute = self.mock_object( self.driver, "execute", mock.Mock(side_effect=iter(mock_execute_side_effects))) mock_parse_zfs_answer = self.mock_object( self.driver, "parse_zfs_answer", mock.Mock(return_value=[ {"NAME": "some_other_dataset_1"}, {"NAME": old_dataset_name}, {"NAME": "some_other_dataset_2"}, ])) mock_get_zfs_option = self.mock_object( self.driver, 'get_zfs_option', mock.Mock(return_value="4G")) result = self.driver.manage_existing(share, driver_options) self.assertTrue(mock_helper.return_value.get_exports.called) self.assertTrue(mock_zfs_with_retry.called) self.assertEqual(2, len(result)) self.assertIn("size", result) self.assertIn("export_locations", result) self.assertEqual(5, result["size"]) self.assertEqual( mock_helper.return_value.get_exports.return_value, result["export_locations"]) mock_execute.assert_called_with("sudo", "mount") if mount_exists: self.assertEqual(4, mock_execute.call_count) else: self.assertEqual(1, mock_execute.call_count) mock_parse_zfs_answer.assert_called_once_with(mock_zfs.return_value[0]) if driver_options.get("size"): self.assertFalse(mock_get_zfs_option.called) else: mock_get_zfs_option.assert_called_once_with( old_dataset_name, "used") mock__get_dataset_name.assert_called_once_with(share) mock_get_extra_specs_from_share.assert_called_once_with(share) def test_manage_share_wrong_pool(self): old_dataset_name = "foopool/path/to/old/dataset/name" new_dataset_name = "foopool/path/to/new/dataset/name" share = { "id": "fake_share_instance_id", "share_id": "fake_share_id", "export_locations": [{"path": "1.1.1.1:/%s" % old_dataset_name}], "host": "foobackend@foohost#barpool", "share_proto": "NFS", } mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) mock__get_dataset_name = self.mock_object( self.driver, 
"_get_dataset_name", mock.Mock(return_value=new_dataset_name)) mock_get_zfs_option = self.mock_object( self.driver, 'get_zfs_option', mock.Mock(return_value="4G")) self.assertRaises( exception.ZFSonLinuxException, self.driver.manage_existing, share, {} ) mock__get_dataset_name.assert_called_once_with(share) mock_get_zfs_option.assert_called_once_with(old_dataset_name, "used") mock_get_extra_specs_from_share.assert_called_once_with(share) def test_manage_share_dataset_not_found(self): old_dataset_name = "foopool/path/to/old/dataset/name" new_dataset_name = "foopool/path/to/new/dataset/name" share = { "id": "fake_share_instance_id", "share_id": "fake_share_id", "export_locations": [{"path": "1.1.1.1:/%s" % old_dataset_name}], "host": "foobackend@foohost#foopool", "share_proto": "NFS", } mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) mock__get_dataset_name = self.mock_object( self.driver, "_get_dataset_name", mock.Mock(return_value=new_dataset_name)) mock_get_zfs_option = self.mock_object( self.driver, 'get_zfs_option', mock.Mock(return_value="4G")) mock_zfs = self.mock_object( self.driver, "zfs", mock.Mock(return_value=("fake_out", "fake_error"))) mock_parse_zfs_answer = self.mock_object( self.driver, "parse_zfs_answer", mock.Mock(return_value=[{"NAME": "some_other_dataset_1"}])) self.assertRaises( exception.ZFSonLinuxException, self.driver.manage_existing, share, {} ) mock__get_dataset_name.assert_called_once_with(share) mock_get_zfs_option.assert_called_once_with(old_dataset_name, "used") mock_zfs.assert_called_once_with( "list", "-r", old_dataset_name.split("/")[0]) mock_parse_zfs_answer.assert_called_once_with(mock_zfs.return_value[0]) mock_get_extra_specs_from_share.assert_called_once_with(share) def test_manage_unmount_exception(self): old_ds_name = "foopool/path/to/old/dataset/name" new_ds_name = "foopool/path/to/new/dataset/name" share = { "id": "fake_share_instance_id", "share_id": "fake_share_id", "export_locations": [{"path": "1.1.1.1:/%s" % old_ds_name}], "host": "foobackend@foohost#foopool", "share_proto": "NFS", } mock_get_extra_specs_from_share = self.mock_object( zfs_driver.share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) self.mock_object(zfs_driver.time, "sleep") mock__get_dataset_name = self.mock_object( self.driver, "_get_dataset_name", mock.Mock(return_value=new_ds_name)) mock_helper = self.mock_object(self.driver, "_get_share_helper") mock_zfs = self.mock_object( self.driver, "zfs", mock.Mock(return_value=("fake_out", "fake_error"))) mock_zfs_with_retry = self.mock_object(self.driver, "zfs_with_retry") # 10 Retries, would mean 20 calls to check the mount still exists mock_execute_side_effects = [("%s " % old_ds_name, "fake_err")] * 21 mock_execute = self.mock_object( self.driver, "execute", mock.Mock(side_effect=mock_execute_side_effects)) mock_parse_zfs_answer = self.mock_object( self.driver, "parse_zfs_answer", mock.Mock(return_value=[ {"NAME": "some_other_dataset_1"}, {"NAME": old_ds_name}, {"NAME": "some_other_dataset_2"}, ])) mock_get_zfs_option = self.mock_object( self.driver, 'get_zfs_option', mock.Mock(return_value="4G")) self.assertRaises(exception.ZFSonLinuxException, self.driver.manage_existing, share, {'size': 10}) self.assertFalse(mock_helper.return_value.get_exports.called) mock_zfs_with_retry.assert_called_with("umount", "-f", old_ds_name) mock_execute.assert_called_with("sudo", "mount") self.assertEqual(10, mock_zfs_with_retry.call_count) 
self.assertEqual(20, mock_execute.call_count) mock_parse_zfs_answer.assert_called_once_with(mock_zfs.return_value[0]) self.assertFalse(mock_get_zfs_option.called) mock__get_dataset_name.assert_called_once_with(share) mock_get_extra_specs_from_share.assert_called_once_with(share) def test_unmanage(self): share = {'id': 'fake_share_id'} self.mock_object(self.driver.private_storage, 'delete') self.driver.unmanage(share) self.driver.private_storage.delete.assert_called_once_with(share['id']) @ddt.data( {}, {"size": 5}, {"size": "5"}, ) def test_manage_existing_snapshot(self, driver_options): dataset_name = "path/to/dataset" old_provider_location = dataset_name + "@original_snapshot_tag" snapshot_instance = { "id": "fake_snapshot_instance_id", "share_instance_id": "fake_share_instance_id", "snapshot_id": "fake_snapshot_id", "provider_location": old_provider_location, } new_snapshot_tag = "fake_new_snapshot_tag" new_provider_location = ( old_provider_location.split("@")[0] + "@" + new_snapshot_tag) self.mock_object(self.driver, "zfs") self.mock_object( self.driver, "get_zfs_option", mock.Mock(return_value="5G")) self.mock_object( self.driver, '_get_snapshot_name', mock.Mock(return_value=new_snapshot_tag)) self.driver.private_storage.update( snapshot_instance["share_instance_id"], {"dataset_name": dataset_name}) result = self.driver.manage_existing_snapshot( snapshot_instance, driver_options) expected_result = { "size": 5, "provider_location": new_provider_location, } self.assertEqual(expected_result, result) self.driver._get_snapshot_name.assert_called_once_with( snapshot_instance["id"]) self.driver.zfs.assert_has_calls([ mock.call("list", "-r", "-t", "snapshot", old_provider_location), mock.call("rename", old_provider_location, new_provider_location), ]) def test_manage_existing_snapshot_not_found(self): dataset_name = "path/to/dataset" old_provider_location = dataset_name + "@original_snapshot_tag" new_snapshot_tag = "fake_new_snapshot_tag" snapshot_instance = { "id": "fake_snapshot_instance_id", "snapshot_id": "fake_snapshot_id", "provider_location": old_provider_location, } self.mock_object( self.driver, "_get_snapshot_name", mock.Mock(return_value=new_snapshot_tag)) self.mock_object( self.driver, "zfs", mock.Mock(side_effect=exception.ProcessExecutionError("FAKE"))) self.assertRaises( exception.ManageInvalidShareSnapshot, self.driver.manage_existing_snapshot, snapshot_instance, {}, ) self.driver.zfs.assert_called_once_with( "list", "-r", "-t", "snapshot", old_provider_location) self.driver._get_snapshot_name.assert_called_once_with( snapshot_instance["id"]) def test_unmanage_snapshot(self): snapshot_instance = { "id": "fake_snapshot_instance_id", "snapshot_id": "fake_snapshot_id", } self.mock_object(self.driver.private_storage, "delete") self.driver.unmanage_snapshot(snapshot_instance) self.driver.private_storage.delete.assert_called_once_with( snapshot_instance["snapshot_id"]) def test__delete_dataset_or_snapshot_with_retry_snapshot(self): self.mock_object(self.driver, 'get_zfs_option') self.mock_object(self.driver, 'zfs') self.driver._delete_dataset_or_snapshot_with_retry('foo@bar') self.driver.get_zfs_option.assert_called_once_with( 'foo@bar', 'mountpoint') self.driver.zfs.assert_called_once_with( 'destroy', '-f', 'foo@bar') def test__delete_dataset_or_snapshot_with_retry_of(self): self.mock_object(self.driver, 'get_zfs_option') self.mock_object( self.driver, 'execute', mock.Mock(return_value=('a', 'b'))) self.mock_object(zfs_driver.time, 'sleep') self.mock_object(zfs_driver.LOG, 
'debug') self.mock_object( zfs_driver.time, 'time', mock.Mock(side_effect=range(1, 70, 2))) dataset_name = 'fake/dataset/name' self.assertRaises( exception.ZFSonLinuxException, self.driver._delete_dataset_or_snapshot_with_retry, dataset_name, ) self.driver.get_zfs_option.assert_called_once_with( dataset_name, 'mountpoint') self.assertEqual(29, zfs_driver.LOG.debug.call_count) def test__delete_dataset_or_snapshot_with_retry_temp_of(self): self.mock_object(self.driver, 'get_zfs_option') self.mock_object(self.driver, 'zfs') self.mock_object( self.driver, 'execute', mock.Mock(side_effect=[ ('a', 'b'), exception.ProcessExecutionError( 'FAKE lsof returns not found')])) self.mock_object(zfs_driver.time, 'sleep') self.mock_object(zfs_driver.LOG, 'debug') self.mock_object( zfs_driver.time, 'time', mock.Mock(side_effect=range(1, 70, 2))) dataset_name = 'fake/dataset/name' self.driver._delete_dataset_or_snapshot_with_retry(dataset_name) self.driver.get_zfs_option.assert_called_once_with( dataset_name, 'mountpoint') self.assertEqual(2, self.driver.execute.call_count) self.assertEqual(1, zfs_driver.LOG.debug.call_count) zfs_driver.LOG.debug.assert_called_once_with( mock.ANY, {'name': dataset_name, 'out': 'a'}) zfs_driver.time.sleep.assert_called_once_with(2) self.driver.zfs.assert_called_once_with('destroy', '-f', dataset_name) def test__delete_dataset_or_snapshot_with_retry_busy(self): self.mock_object(self.driver, 'get_zfs_option') self.mock_object( self.driver, 'execute', mock.Mock( side_effect=exception.ProcessExecutionError( 'FAKE lsof returns not found'))) self.mock_object( self.driver, 'zfs', mock.Mock(side_effect=[ exception.ProcessExecutionError( 'cannot destroy FAKE: dataset is busy\n'), None, None])) self.mock_object(zfs_driver.time, 'sleep') self.mock_object(zfs_driver.LOG, 'info') dataset_name = 'fake/dataset/name' self.driver._delete_dataset_or_snapshot_with_retry(dataset_name) self.driver.get_zfs_option.assert_called_once_with( dataset_name, 'mountpoint') self.assertEqual(2, zfs_driver.time.sleep.call_count) self.assertEqual(2, self.driver.execute.call_count) self.assertEqual(1, zfs_driver.LOG.info.call_count) self.assertEqual(2, self.driver.zfs.call_count) def test_create_replica(self): active_replica = { 'id': 'fake_active_replica_id', 'host': 'hostname1@backend_name1#foo', 'size': 5, 'replica_state': zfs_driver.constants.REPLICA_STATE_ACTIVE, } replica_list = [active_replica] new_replica = { 'id': 'fake_new_replica_id', 'host': 'hostname2@backend_name2#bar', 'share_proto': 'NFS', 'replica_state': None, } dst_dataset_name = ( 'bar/subbar/fake_dataset_name_prefix%s' % new_replica['id']) access_rules = ['foo_rule', 'bar_rule'] self.driver.private_storage.update( active_replica['id'], {'dataset_name': 'fake/active/dataset/name', 'ssh_cmd': 'fake_ssh_cmd'} ) self.mock_object( self.driver, 'execute', mock.Mock(side_effect=[('a', 'b'), ('c', 'd')])) self.mock_object(self.driver, 'zfs') mock_helper = self.mock_object(self.driver, '_get_share_helper') self.configuration.zfs_dataset_name_prefix = 'fake_dataset_name_prefix' mock_utcnow = self.mock_object(zfs_driver.timeutils, 'utcnow') mock_utcnow.return_value.isoformat.return_value = 'some_time' result = self.driver.create_replica( 'fake_context', replica_list, new_replica, access_rules, []) expected = { 'export_locations': ( mock_helper.return_value.create_exports.return_value), 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, 'access_rules_status': zfs_driver.constants.STATUS_ACTIVE, } self.assertEqual(expected, result) 
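# --- Illustrative aside (not part of the original test module) ---
# The replication tests around here build temporary snapshot tags such as the
# 'tmp_snapshot_for_replication__..._time_some_time' name asserted just below.
# test__get_replication_snapshot_prefix/_tag further above pin the scheme
# down: hyphens in the replica id become underscores and a UTC timestamp is
# appended. A minimal sketch, assuming the oslo timeutils helper the driver
# module already uses:
from oslo_utils import timeutils  # assumption: same module as zfs_driver.timeutils


def _example_replication_snapshot_tag(prefix, replica_id):
    # ('PrEfIx', 'foo-_bar-_id') -> 'PrEfIx_foo__bar__id_time_<utc isoformat>'
    safe_id = replica_id.replace('-', '_')
    return '%s_%s_time_%s' % (prefix, safe_id,
                              timeutils.utcnow().isoformat())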
mock_helper.assert_has_calls([ mock.call('NFS'), mock.call().update_access( dst_dataset_name, access_rules, add_rules=[], delete_rules=[], make_all_ro=True), mock.call('NFS'), mock.call().create_exports(dst_dataset_name), ]) self.driver.zfs.assert_has_calls([ mock.call('set', 'readonly=on', dst_dataset_name), mock.call('set', 'quota=%sG' % active_replica['size'], dst_dataset_name), ]) src_snapshot_name = ( 'fake/active/dataset/name@' 'tmp_snapshot_for_replication__fake_new_replica_id_time_some_time') self.driver.execute.assert_has_calls([ mock.call('ssh', 'fake_ssh_cmd', 'sudo', 'zfs', 'snapshot', src_snapshot_name), mock.call( 'ssh', 'fake_ssh_cmd', 'sudo', 'zfs', 'send', '-vDR', src_snapshot_name, '|', 'ssh', 'fake_username@240.241.242.244', 'sudo', 'zfs', 'receive', '-v', dst_dataset_name ), ]) mock_utcnow.assert_called_once_with() mock_utcnow.return_value.isoformat.assert_called_once_with() def test_delete_replica_not_found(self): dataset_name = 'foo/dataset/name' pool_name = 'foo_pool' replica = {'id': 'fake_replica_id'} replica_list = [replica] replica_snapshots = [] self.mock_object( self.driver, '_get_dataset_name', mock.Mock(return_value=dataset_name)) self.mock_object( self.driver, 'zfs', mock.Mock(side_effect=[('a', 'b'), ('c', 'd')])) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock(side_effect=[[], []])) self.mock_object(self.driver, '_delete_dataset_or_snapshot_with_retry') self.mock_object(zfs_driver.LOG, 'warning') self.mock_object(self.driver, '_get_share_helper') self.driver.private_storage.update( replica['id'], {'pool_name': pool_name}) self.driver.delete_replica('fake_context', replica_list, replica_snapshots, replica) zfs_driver.LOG.warning.assert_called_once_with( mock.ANY, {'id': replica['id'], 'name': dataset_name}) self.assertEqual(0, self.driver._get_share_helper.call_count) self.assertEqual( 0, self.driver._delete_dataset_or_snapshot_with_retry.call_count) self.driver._get_dataset_name.assert_called_once_with(replica) self.driver.zfs.assert_has_calls([ mock.call('list', '-r', '-t', 'snapshot', pool_name), mock.call('list', '-r', pool_name), ]) self.driver.parse_zfs_answer.assert_has_calls([ mock.call('a'), mock.call('c'), ]) def test_delete_replica(self): dataset_name = 'foo/dataset/name' pool_name = 'foo_pool' replica = {'id': 'fake_replica_id', 'share_proto': 'NFS'} replica_list = [replica] self.mock_object( self.driver, '_get_dataset_name', mock.Mock(return_value=dataset_name)) self.mock_object( self.driver, 'zfs', mock.Mock(side_effect=[('a', 'b'), ('c', 'd')])) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock(side_effect=[ [{'NAME': 'some_other_dataset@snapshot'}, {'NAME': dataset_name + '@foo_snap'}], [{'NAME': 'some_other_dataset'}, {'NAME': dataset_name}], ])) mock_helper = self.mock_object(self.driver, '_get_share_helper') self.mock_object(self.driver, '_delete_dataset_or_snapshot_with_retry') self.mock_object(zfs_driver.LOG, 'warning') self.driver.private_storage.update( replica['id'], {'pool_name': pool_name, 'dataset_name': dataset_name}) self.driver.delete_replica('fake_context', replica_list, [], replica) self.assertEqual(0, zfs_driver.LOG.warning.call_count) self.assertEqual(0, self.driver._get_dataset_name.call_count) self.driver._delete_dataset_or_snapshot_with_retry.assert_has_calls([ mock.call(dataset_name + '@foo_snap'), mock.call(dataset_name), ]) self.driver.zfs.assert_has_calls([ mock.call('list', '-r', '-t', 'snapshot', pool_name), mock.call('list', '-r', pool_name), ]) 
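# --- Illustrative aside (an assumption, not manila's parse_zfs_answer) ---
# The parse_zfs_answer mocks used throughout these tests always return a list
# of dicts keyed by the `zfs list` column names (only 'NAME' is consumed
# here). One way a parser of plain `zfs list` output (header row plus one row
# per dataset) could produce that shape:
def _example_parse_zfs_list(out):
    lines = out.splitlines()
    if len(lines) < 2:
        return []
    header = lines[0].split()
    return [dict(zip(header, line.split()))
            for line in lines[1:] if line.strip()]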
self.driver.parse_zfs_answer.assert_has_calls([ mock.call('a'), mock.call('c'), ]) mock_helper.assert_called_once_with(replica['share_proto']) mock_helper.return_value.remove_exports.assert_called_once_with( dataset_name) def test_update_replica(self): active_replica = { 'id': 'fake_active_replica_id', 'host': 'hostname1@backend_name1#foo', 'size': 5, 'replica_state': zfs_driver.constants.REPLICA_STATE_ACTIVE, } replica = { 'id': 'fake_new_replica_id', 'host': 'hostname2@backend_name2#bar', 'share_proto': 'NFS', 'replica_state': None, } replica_list = [replica, active_replica] replica_snapshots = [] dst_dataset_name = ( 'bar/subbar/fake_dataset_name_prefix%s' % replica['id']) src_dataset_name = ( 'bar/subbar/fake_dataset_name_prefix%s' % active_replica['id']) access_rules = ['foo_rule', 'bar_rule'] old_repl_snapshot_tag = ( self.driver._get_replication_snapshot_prefix( active_replica) + 'foo') snap_tag_prefix = self.driver._get_replication_snapshot_prefix( replica) self.driver.private_storage.update( active_replica['id'], {'dataset_name': src_dataset_name, 'ssh_cmd': 'fake_src_ssh_cmd', 'repl_snapshot_tag': old_repl_snapshot_tag} ) self.driver.private_storage.update( replica['id'], {'dataset_name': dst_dataset_name, 'ssh_cmd': 'fake_dst_ssh_cmd', 'repl_snapshot_tag': old_repl_snapshot_tag} ) self.mock_object( self.driver, 'execute', mock.Mock(side_effect=[('a', 'b'), ('c', 'd'), ('e', 'f')])) self.mock_object(self.driver, 'execute_with_retry', mock.Mock(side_effect=[('g', 'h')])) self.mock_object(self.driver, 'zfs', mock.Mock(side_effect=[('j', 'k'), ('l', 'm')])) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock(side_effect=[ ({'NAME': dst_dataset_name + '@' + old_repl_snapshot_tag}, {'NAME': dst_dataset_name + '@%s_time_some_time' % snap_tag_prefix}, {'NAME': 'other/dataset/name1@' + old_repl_snapshot_tag}), ({'NAME': src_dataset_name + '@' + old_repl_snapshot_tag}, {'NAME': src_dataset_name + '@' + snap_tag_prefix + 'quuz'}, {'NAME': 'other/dataset/name2@' + old_repl_snapshot_tag}), ]) ) mock_helper = self.mock_object(self.driver, '_get_share_helper') self.configuration.zfs_dataset_name_prefix = 'fake_dataset_name_prefix' mock_utcnow = self.mock_object(zfs_driver.timeutils, 'utcnow') mock_utcnow.return_value.isoformat.return_value = 'some_time' mock_delete_snapshot = self.mock_object( self.driver, '_delete_dataset_or_snapshot_with_retry') result = self.driver.update_replica_state( 'fake_context', replica_list, replica, access_rules, replica_snapshots) self.assertEqual(zfs_driver.constants.REPLICA_STATE_IN_SYNC, result) mock_helper.assert_called_once_with('NFS') mock_helper.return_value.update_access.assert_called_once_with( dst_dataset_name, access_rules, add_rules=[], delete_rules=[], make_all_ro=True) self.driver.execute_with_retry.assert_called_once_with( 'ssh', 'fake_src_ssh_cmd', 'sudo', 'zfs', 'destroy', '-f', src_dataset_name + '@' + snap_tag_prefix + 'quuz') self.driver.execute.assert_has_calls([ mock.call( 'ssh', 'fake_src_ssh_cmd', 'sudo', 'zfs', 'snapshot', src_dataset_name + '@' + self.driver._get_replication_snapshot_tag(replica)), mock.call( 'ssh', 'fake_src_ssh_cmd', 'sudo', 'zfs', 'send', '-vDRI', old_repl_snapshot_tag, src_dataset_name + '@%s' % snap_tag_prefix + '_time_some_time', '|', 'ssh', 'fake_dst_ssh_cmd', 'sudo', 'zfs', 'receive', '-vF', dst_dataset_name), mock.call( 'ssh', 'fake_src_ssh_cmd', 'sudo', 'zfs', 'list', '-r', '-t', 'snapshot', 'bar'), ]) mock_delete_snapshot.assert_called_once_with( dst_dataset_name + '@' + old_repl_snapshot_tag) 
self.driver.parse_zfs_answer.assert_has_calls( [mock.call('l'), mock.call('e')]) def test_promote_replica_active_available(self): active_replica = { 'id': 'fake_active_replica_id', 'host': 'hostname1@backend_name1#foo', 'size': 5, 'replica_state': zfs_driver.constants.REPLICA_STATE_ACTIVE, } replica = { 'id': 'fake_first_replica_id', 'host': 'hostname2@backend_name2#bar', 'share_proto': 'NFS', 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, } second_replica = { 'id': 'fake_second_replica_id', 'host': 'hostname3@backend_name3#quuz', 'share_proto': 'NFS', 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, } replica_list = [replica, active_replica, second_replica] dst_dataset_name = ( 'bar/subbar/fake_dataset_name_prefix%s' % replica['id']) src_dataset_name = ( 'bar/subbar/fake_dataset_name_prefix%s' % active_replica['id']) access_rules = ['foo_rule', 'bar_rule'] old_repl_snapshot_tag = ( self.driver._get_replication_snapshot_prefix( active_replica) + 'foo') snap_tag_prefix = self.driver._get_replication_snapshot_prefix( active_replica) + '_time_some_time' self.driver.private_storage.update( active_replica['id'], {'dataset_name': src_dataset_name, 'ssh_cmd': 'fake_src_ssh_cmd', 'repl_snapshot_tag': old_repl_snapshot_tag} ) for repl in (replica, second_replica): self.driver.private_storage.update( repl['id'], {'dataset_name': ( 'bar/subbar/fake_dataset_name_prefix%s' % repl['id']), 'ssh_cmd': 'fake_dst_ssh_cmd', 'repl_snapshot_tag': old_repl_snapshot_tag} ) self.mock_object( self.driver, 'execute', mock.Mock(side_effect=[ ('a', 'b'), ('c', 'd'), ('e', 'f'), exception.ProcessExecutionError('Second replica sync failure'), ])) self.mock_object(self.driver, 'zfs', mock.Mock(side_effect=[('g', 'h')])) mock_helper = self.mock_object(self.driver, '_get_share_helper') self.configuration.zfs_dataset_name_prefix = 'fake_dataset_name_prefix' mock_utcnow = self.mock_object(zfs_driver.timeutils, 'utcnow') mock_utcnow.return_value.isoformat.return_value = 'some_time' mock_delete_snapshot = self.mock_object( self.driver, '_delete_dataset_or_snapshot_with_retry') result = self.driver.promote_replica( 'fake_context', replica_list, replica, access_rules) expected = [ {'access_rules_status': zfs_driver.constants.SHARE_INSTANCE_RULES_SYNCING, 'id': 'fake_active_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC}, {'access_rules_status': zfs_driver.constants.STATUS_ACTIVE, 'id': 'fake_first_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_ACTIVE}, {'access_rules_status': zfs_driver.constants.SHARE_INSTANCE_RULES_SYNCING, 'id': 'fake_second_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_OUT_OF_SYNC}, ] for repl in expected: self.assertIn(repl, result) self.assertEqual(3, len(result)) mock_helper.assert_called_once_with('NFS') mock_helper.return_value.update_access.assert_called_once_with( dst_dataset_name, access_rules, add_rules=[], delete_rules=[]) self.driver.zfs.assert_called_once_with( 'set', 'readonly=off', dst_dataset_name) self.assertEqual(0, mock_delete_snapshot.call_count) for repl in (active_replica, replica): self.assertEqual( snap_tag_prefix, self.driver.private_storage.get( repl['id'], 'repl_snapshot_tag')) self.assertEqual( old_repl_snapshot_tag, self.driver.private_storage.get( second_replica['id'], 'repl_snapshot_tag')) def test_promote_replica_active_not_available(self): active_replica = { 'id': 'fake_active_replica_id', 'host': 'hostname1@backend_name1#foo', 'size': 5, 'replica_state': 
zfs_driver.constants.REPLICA_STATE_ACTIVE, } replica = { 'id': 'fake_first_replica_id', 'host': 'hostname2@backend_name2#bar', 'share_proto': 'NFS', 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, } second_replica = { 'id': 'fake_second_replica_id', 'host': 'hostname3@backend_name3#quuz', 'share_proto': 'NFS', 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, } third_replica = { 'id': 'fake_third_replica_id', 'host': 'hostname4@backend_name4#fff', 'share_proto': 'NFS', 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, } replica_list = [replica, active_replica, second_replica, third_replica] dst_dataset_name = ( 'bar/subbar/fake_dataset_name_prefix%s' % replica['id']) src_dataset_name = ( 'bar/subbar/fake_dataset_name_prefix%s' % active_replica['id']) access_rules = ['foo_rule', 'bar_rule'] old_repl_snapshot_tag = ( self.driver._get_replication_snapshot_prefix( active_replica) + 'foo') snap_tag_prefix = self.driver._get_replication_snapshot_prefix( replica) + '_time_some_time' self.driver.private_storage.update( active_replica['id'], {'dataset_name': src_dataset_name, 'ssh_cmd': 'fake_src_ssh_cmd', 'repl_snapshot_tag': old_repl_snapshot_tag} ) for repl in (replica, second_replica, third_replica): self.driver.private_storage.update( repl['id'], {'dataset_name': ( 'bar/subbar/fake_dataset_name_prefix%s' % repl['id']), 'ssh_cmd': 'fake_dst_ssh_cmd', 'repl_snapshot_tag': old_repl_snapshot_tag} ) self.mock_object( self.driver, 'execute', mock.Mock(side_effect=[ exception.ProcessExecutionError('Active replica failure'), ('a', 'b'), exception.ProcessExecutionError('Second replica sync failure'), ('c', 'd'), ])) self.mock_object(self.driver, 'zfs', mock.Mock(side_effect=[('g', 'h'), ('i', 'j')])) mock_helper = self.mock_object(self.driver, '_get_share_helper') self.configuration.zfs_dataset_name_prefix = 'fake_dataset_name_prefix' mock_utcnow = self.mock_object(zfs_driver.timeutils, 'utcnow') mock_utcnow.return_value.isoformat.return_value = 'some_time' mock_delete_snapshot = self.mock_object( self.driver, '_delete_dataset_or_snapshot_with_retry') result = self.driver.promote_replica( 'fake_context', replica_list, replica, access_rules) expected = [ {'access_rules_status': zfs_driver.constants.SHARE_INSTANCE_RULES_SYNCING, 'id': 'fake_active_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_OUT_OF_SYNC}, {'access_rules_status': zfs_driver.constants.STATUS_ACTIVE, 'id': 'fake_first_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_ACTIVE}, {'access_rules_status': zfs_driver.constants.SHARE_INSTANCE_RULES_SYNCING, 'id': 'fake_second_replica_id'}, {'access_rules_status': zfs_driver.constants.SHARE_INSTANCE_RULES_SYNCING, 'id': 'fake_third_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_OUT_OF_SYNC}, ] for repl in expected: self.assertIn(repl, result) self.assertEqual(4, len(result)) mock_helper.assert_called_once_with('NFS') mock_helper.return_value.update_access.assert_called_once_with( dst_dataset_name, access_rules, add_rules=[], delete_rules=[]) self.driver.zfs.assert_has_calls([ mock.call('snapshot', dst_dataset_name + '@' + snap_tag_prefix), mock.call('set', 'readonly=off', dst_dataset_name), ]) self.assertEqual(0, mock_delete_snapshot.call_count) for repl in (second_replica, replica): self.assertEqual( snap_tag_prefix, self.driver.private_storage.get( repl['id'], 'repl_snapshot_tag')) for repl in (active_replica, third_replica): self.assertEqual( old_repl_snapshot_tag, self.driver.private_storage.get( repl['id'], 
'repl_snapshot_tag')) def test_create_replicated_snapshot(self): active_replica = { 'id': 'fake_active_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_ACTIVE, } replica = { 'id': 'fake_first_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, } second_replica = { 'id': 'fake_second_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, } replica_list = [replica, active_replica, second_replica] snapshot_instances = [ {'id': 'si_%s' % r['id'], 'share_instance_id': r['id'], 'snapshot_id': 'some_snapshot_id'} for r in replica_list ] src_dataset_name = ( 'bar/subbar/fake_dataset_name_prefix%s' % active_replica['id']) old_repl_snapshot_tag = ( self.driver._get_replication_snapshot_prefix( active_replica) + 'foo') self.driver.private_storage.update( active_replica['id'], {'dataset_name': src_dataset_name, 'ssh_cmd': 'fake_src_ssh_cmd', 'repl_snapshot_tag': old_repl_snapshot_tag} ) for repl in (replica, second_replica): self.driver.private_storage.update( repl['id'], {'dataset_name': ( 'bar/subbar/fake_dataset_name_prefix%s' % repl['id']), 'ssh_cmd': 'fake_dst_ssh_cmd', 'repl_snapshot_tag': old_repl_snapshot_tag} ) self.mock_object( self.driver, 'execute', mock.Mock(side_effect=[ ('a', 'b'), ('c', 'd'), ('e', 'f'), exception.ProcessExecutionError('Second replica sync failure'), ])) self.configuration.zfs_dataset_name_prefix = 'fake_dataset_name_prefix' self.configuration.zfs_dataset_snapshot_name_prefix = ( 'fake_dataset_snapshot_name_prefix') snap_tag_prefix = ( self.configuration.zfs_dataset_snapshot_name_prefix + 'si_%s' % active_replica['id']) repl_snap_tag = 'fake_repl_tag' self.mock_object( self.driver, '_get_replication_snapshot_tag', mock.Mock(return_value=repl_snap_tag)) result = self.driver.create_replicated_snapshot( 'fake_context', replica_list, snapshot_instances) expected = [ {'id': 'si_fake_active_replica_id', 'status': zfs_driver.constants.STATUS_AVAILABLE}, {'id': 'si_fake_first_replica_id', 'status': zfs_driver.constants.STATUS_AVAILABLE}, {'id': 'si_fake_second_replica_id', 'status': zfs_driver.constants.STATUS_ERROR}, ] for repl in expected: self.assertIn(repl, result) self.assertEqual(3, len(result)) for repl in (active_replica, replica): self.assertEqual( repl_snap_tag, self.driver.private_storage.get( repl['id'], 'repl_snapshot_tag')) self.assertEqual( old_repl_snapshot_tag, self.driver.private_storage.get( second_replica['id'], 'repl_snapshot_tag')) self.assertEqual( snap_tag_prefix, self.driver.private_storage.get( snapshot_instances[0]['snapshot_id'], 'snapshot_tag')) self.driver._get_replication_snapshot_tag.assert_called_once_with( active_replica) def test_delete_replicated_snapshot(self): active_replica = { 'id': 'fake_active_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_ACTIVE, } replica = { 'id': 'fake_first_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, } second_replica = { 'id': 'fake_second_replica_id', 'replica_state': zfs_driver.constants.REPLICA_STATE_IN_SYNC, } replica_list = [replica, active_replica, second_replica] active_snapshot_instance = { 'id': 'si_%s' % active_replica['id'], 'share_instance_id': active_replica['id'], 'snapshot_id': 'some_snapshot_id', 'share_id': 'some_share_id', } snapshot_instances = [ {'id': 'si_%s' % r['id'], 'share_instance_id': r['id'], 'snapshot_id': active_snapshot_instance['snapshot_id'], 'share_id': active_snapshot_instance['share_id']} for r in (replica, second_replica) ] snapshot_instances.append(active_snapshot_instance) 
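# --- Illustrative aside (not manila's real private storage implementation) ---
# Nearly every test in this class drives private_storage through the same
# three calls: update(id, dict), get(id, key) and delete(id). A tiny
# dict-backed stand-in with that interface:
class _ExamplePrivateStorage(object):
    def __init__(self):
        self._data = {}

    def update(self, entity_id, details):
        self._data.setdefault(entity_id, {}).update(details)

    def get(self, entity_id, key):
        # Returns None for unknown ids/keys, matching the "assertIsNone after
        # delete" checks in the snapshot tests above.
        return self._data.get(entity_id, {}).get(key)

    def delete(self, entity_id):
        self._data.pop(entity_id, None)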
for si in snapshot_instances: self.driver.private_storage.update( si['id'], {'snapshot_name': 'fake_snap_name_%s' % si['id']}) src_dataset_name = ( 'bar/subbar/fake_dataset_name_prefix%s' % active_replica['id']) old_repl_snapshot_tag = ( self.driver._get_replication_snapshot_prefix( active_replica) + 'foo') new_repl_snapshot_tag = 'foo_snapshot_tag' dataset_name = 'some_dataset_name' self.driver.private_storage.update( active_replica['id'], {'dataset_name': src_dataset_name, 'ssh_cmd': 'fake_src_ssh_cmd', 'repl_snapshot_tag': old_repl_snapshot_tag} ) for replica in (replica, second_replica): self.driver.private_storage.update( replica['id'], {'dataset_name': dataset_name, 'ssh_cmd': 'fake_ssh_cmd'} ) self.driver.private_storage.update( snapshot_instances[0]['snapshot_id'], {'snapshot_tag': new_repl_snapshot_tag} ) snap_name = 'fake_snap_name' self.mock_object( self.driver, 'zfs', mock.Mock(return_value=['out', 'err'])) self.mock_object( self.driver, 'execute', mock.Mock(side_effect=[ ('a', 'b'), ('c', 'd'), exception.ProcessExecutionError('Second replica sync failure'), ])) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock(side_effect=[ ({'NAME': 'foo'}, {'NAME': snap_name}), ({'NAME': 'bar'}, {'NAME': snap_name}), [], ])) expected = sorted([ {'id': si['id'], 'status': 'deleted'} for si in snapshot_instances ], key=lambda item: item['id']) self.assertEqual( new_repl_snapshot_tag, self.driver.private_storage.get( snapshot_instances[0]['snapshot_id'], 'snapshot_tag')) result = self.driver.delete_replicated_snapshot( 'fake_context', replica_list, snapshot_instances) self.assertIsNone( self.driver.private_storage.get( snapshot_instances[0]['snapshot_id'], 'snapshot_tag')) self.driver.execute.assert_has_calls([ mock.call('ssh', 'fake_ssh_cmd', 'sudo', 'zfs', 'list', '-r', '-t', 'snapshot', dataset_name + '@' + new_repl_snapshot_tag) for i in (0, 1) ]) self.assertIsInstance(result, list) self.assertEqual(3, len(result)) self.assertEqual(expected, sorted(result, key=lambda item: item['id'])) self.driver.parse_zfs_answer.assert_has_calls([ mock.call('out'), ]) @ddt.data( ({'NAME': 'fake'}, zfs_driver.constants.STATUS_ERROR), ({'NAME': 'fake_snap_name'}, zfs_driver.constants.STATUS_AVAILABLE), ) @ddt.unpack def test_update_replicated_snapshot(self, parse_answer, expected_status): snap_name = 'fake_snap_name' self.mock_object(self.driver, '_update_replica_state') self.mock_object( self.driver, '_get_saved_snapshot_name', mock.Mock(return_value=snap_name)) self.mock_object( self.driver, 'zfs', mock.Mock(side_effect=[('a', 'b')])) self.mock_object( self.driver, 'parse_zfs_answer', mock.Mock(side_effect=[ [parse_answer] ])) fake_context = 'fake_context' replica_list = ['foo', 'bar'] share_replica = 'quuz' snapshot_instance = {'id': 'fake_snapshot_instance_id'} snapshot_instances = ['q', 'w', 'e', 'r', 't', 'y'] result = self.driver.update_replicated_snapshot( fake_context, replica_list, share_replica, snapshot_instances, snapshot_instance) self.driver._update_replica_state.assert_called_once_with( fake_context, replica_list, share_replica) self.driver._get_saved_snapshot_name.assert_called_once_with( snapshot_instance) self.driver.zfs.assert_called_once_with( 'list', '-r', '-t', 'snapshot', snap_name) self.driver.parse_zfs_answer.assert_called_once_with('a') self.assertIsInstance(result, dict) self.assertEqual(2, len(result)) self.assertIn('status', result) self.assertIn('id', result) self.assertEqual(expected_status, result['status']) self.assertEqual(snapshot_instance['id'], result['id']) 
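# --- Illustrative aside (a hedged sketch, not the driver source) ---
# The two test__get_shell_executor_by_host_* cases below pin down a
# local-versus-remote dispatch keyed on the backend name parsed out of
# "host@backend#pool": backends listed in enabled_share_backends use the
# plain local execute(), anything else gets a remote SSH executor that is
# built once per backend and then cached. All parameter names here are
# placeholders:
def _example_get_shell_executor(host, enabled_backends, local_execute,
                                executor_cache, build_remote_executor):
    backend_name = host.split('@', 1)[1].split('#', 1)[0]
    if backend_name in enabled_backends:
        return local_execute
    if backend_name not in executor_cache:
        # Built only on the first lookup; the remote test below asserts the
        # factory and the backend configuration are queried exactly once.
        executor_cache[backend_name] = build_remote_executor(backend_name)
    return executor_cache[backend_name]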
def test__get_shell_executor_by_host_local(self): backend_name = 'foobackend' host = 'foohost@%s#foopool' % backend_name CONF.set_default( 'enabled_share_backends', 'fake1,%s,fake2,fake3' % backend_name) self.assertIsNone(self.driver._shell_executors.get(backend_name)) result = self.driver._get_shell_executor_by_host(host) self.assertEqual(self.driver.execute, result) def test__get_shell_executor_by_host_remote(self): backend_name = 'foobackend' host = 'foohost@%s#foopool' % backend_name CONF.set_default('enabled_share_backends', 'fake1,fake2,fake3') mock_get_remote_shell_executor = self.mock_object( zfs_driver.zfs_utils, 'get_remote_shell_executor') mock_config = self.mock_object(zfs_driver, 'get_backend_configuration') self.assertIsNone(self.driver._shell_executors.get(backend_name)) for i in (1, 2): result = self.driver._get_shell_executor_by_host(host) self.assertEqual( mock_get_remote_shell_executor.return_value, result) mock_get_remote_shell_executor.assert_called_once_with( ip=mock_config.return_value.zfs_service_ip, port=22, conn_timeout=mock_config.return_value.ssh_conn_timeout, login=mock_config.return_value.zfs_ssh_username, password=mock_config.return_value.zfs_ssh_user_password, privatekey=mock_config.return_value.zfs_ssh_private_key_path, max_size=10, ) zfs_driver.get_backend_configuration.assert_called_once_with( backend_name) def test__get_migration_snapshot_tag(self): share_instance = {'id': 'fake-share_instance_id'} current_time = 'fake_current_time' mock_utcnow = self.mock_object(zfs_driver.timeutils, 'utcnow') mock_utcnow.return_value.isoformat.return_value = current_time expected_value = ( self.driver.migration_snapshot_prefix + '_fake_share_instance_id_time_' + current_time) result = self.driver._get_migration_snapshot_tag(share_instance) self.assertEqual(expected_value, result) def test_migration_check_compatibility(self): src_share = {'host': 'foohost@foobackend#foopool'} dst_backend_name = 'barbackend' dst_share = {'host': 'barhost@%s#barpool' % dst_backend_name} expected = { 'compatible': True, 'writable': False, 'preserve_metadata': True, 'nondisruptive': True, } self.mock_object( zfs_driver, 'get_backend_configuration', mock.Mock(return_value=type( 'FakeConfig', (object,), { 'share_driver': self.driver.configuration.share_driver}))) actual = self.driver.migration_check_compatibility( 'fake_context', src_share, dst_share) self.assertEqual(expected, actual) zfs_driver.get_backend_configuration.assert_called_once_with( dst_backend_name) def test_migration_start(self): username = self.driver.configuration.zfs_ssh_username hostname = self.driver.configuration.zfs_service_ip dst_username = username + '_dst' dst_hostname = hostname + '_dst' src_share = { 'id': 'fake_src_share_id', 'host': 'foohost@foobackend#foopool', } src_dataset_name = 'foo_dataset_name' dst_share = { 'id': 'fake_dst_share_id', 'host': 'barhost@barbackend#barpool', } dst_dataset_name = 'bar_dataset_name' snapshot_tag = 'fake_migration_snapshot_tag' self.mock_object( self.driver, '_get_dataset_name', mock.Mock(return_value=dst_dataset_name)) self.mock_object( self.driver, '_get_migration_snapshot_tag', mock.Mock(return_value=snapshot_tag)) self.mock_object( zfs_driver, 'get_backend_configuration', mock.Mock(return_value=type( 'FakeConfig', (object,), { 'zfs_ssh_username': dst_username, 'zfs_service_ip': dst_hostname, }))) self.mock_object(self.driver, 'execute') self.mock_object( zfs_driver.utils, 'tempdir', mock.MagicMock(side_effect=FakeTempDir)) self.driver.private_storage.update( src_share['id'], 
{'dataset_name': src_dataset_name, 'ssh_cmd': username + '@' + hostname}) src_snapshot_name = ( '%(dataset_name)s@%(snapshot_tag)s' % { 'snapshot_tag': snapshot_tag, 'dataset_name': src_dataset_name, } ) with mock.patch("six.moves.builtins.open", mock.mock_open(read_data="data")) as mock_file: self.driver.migration_start( self._context, src_share, dst_share, None, None) expected_file_content = ( 'ssh %(ssh_cmd)s sudo zfs send -vDR %(snap)s | ' 'ssh %(dst_ssh_cmd)s sudo zfs receive -v %(dst_dataset)s' ) % { 'ssh_cmd': self.driver.private_storage.get( src_share['id'], 'ssh_cmd'), 'dst_ssh_cmd': self.driver.private_storage.get( dst_share['id'], 'ssh_cmd'), 'snap': src_snapshot_name, 'dst_dataset': dst_dataset_name, } mock_file.assert_called_with("/foo/path/bar_dataset_name.sh", "w") mock_file.return_value.write.assert_called_once_with( expected_file_content) self.driver.execute.assert_has_calls([ mock.call('sudo', 'zfs', 'snapshot', src_snapshot_name), mock.call('sudo', 'chmod', '755', mock.ANY), mock.call('nohup', mock.ANY, '&'), ]) self.driver._get_migration_snapshot_tag.assert_called_once_with( dst_share) self.driver._get_dataset_name.assert_called_once_with( dst_share) for k, v in (('dataset_name', dst_dataset_name), ('migr_snapshot_tag', snapshot_tag), ('pool_name', 'barpool'), ('ssh_cmd', dst_username + '@' + dst_hostname)): self.assertEqual( v, self.driver.private_storage.get(dst_share['id'], k)) def test_migration_continue_success(self): dst_share = { 'id': 'fake_dst_share_id', 'host': 'barhost@barbackend#barpool', } dst_dataset_name = 'bar_dataset_name' snapshot_tag = 'fake_migration_snapshot_tag' self.driver.private_storage.update( dst_share['id'], { 'migr_snapshot_tag': snapshot_tag, 'dataset_name': dst_dataset_name, }) mock_executor = self.mock_object( self.driver, '_get_shell_executor_by_host') self.mock_object( self.driver, 'execute', mock.Mock(return_value=('fake_out', 'fake_err'))) result = self.driver.migration_continue( self._context, 'fake_src_share', dst_share, None, None) self.assertTrue(result) mock_executor.assert_called_once_with(dst_share['host']) self.driver.execute.assert_has_calls([ mock.call('ps', 'aux'), mock.call('sudo', 'zfs', 'get', 'quota', dst_dataset_name, executor=mock_executor.return_value), ]) def test_migration_continue_pending(self): dst_share = { 'id': 'fake_dst_share_id', 'host': 'barhost@barbackend#barpool', } dst_dataset_name = 'bar_dataset_name' snapshot_tag = 'fake_migration_snapshot_tag' self.driver.private_storage.update( dst_share['id'], { 'migr_snapshot_tag': snapshot_tag, 'dataset_name': dst_dataset_name, }) mock_executor = self.mock_object( self.driver, '_get_shell_executor_by_host') self.mock_object( self.driver, 'execute', mock.Mock(return_value=('foo@%s' % snapshot_tag, 'fake_err'))) result = self.driver.migration_continue( self._context, 'fake_src_share', dst_share, None, None) self.assertIsNone(result) self.assertFalse(mock_executor.called) self.driver.execute.assert_called_once_with('ps', 'aux') def test_migration_continue_exception(self): dst_share = { 'id': 'fake_dst_share_id', 'host': 'barhost@barbackend#barpool', } dst_dataset_name = 'bar_dataset_name' snapshot_tag = 'fake_migration_snapshot_tag' self.driver.private_storage.update( dst_share['id'], { 'migr_snapshot_tag': snapshot_tag, 'dataset_name': dst_dataset_name, }) mock_executor = self.mock_object( self.driver, '_get_shell_executor_by_host') self.mock_object( self.driver, 'execute', mock.Mock(side_effect=[ ('fake_out', 'fake_err'), exception.ProcessExecutionError('fake'), 
])) self.assertRaises( exception.ZFSonLinuxException, self.driver.migration_continue, self._context, 'fake_src_share', dst_share, None, None ) mock_executor.assert_called_once_with(dst_share['host']) self.driver.execute.assert_has_calls([ mock.call('ps', 'aux'), mock.call('sudo', 'zfs', 'get', 'quota', dst_dataset_name, executor=mock_executor.return_value), ]) def test_migration_complete(self): src_share = {'id': 'fake_src_share_id'} dst_share = { 'id': 'fake_dst_share_id', 'host': 'barhost@barbackend#barpool', 'share_proto': 'fake_share_proto', } dst_dataset_name = 'bar_dataset_name' snapshot_tag = 'fake_migration_snapshot_tag' self.driver.private_storage.update( dst_share['id'], { 'migr_snapshot_tag': snapshot_tag, 'dataset_name': dst_dataset_name, }) dst_snapshot_name = ( '%(dataset_name)s@%(snapshot_tag)s' % { 'snapshot_tag': snapshot_tag, 'dataset_name': dst_dataset_name, } ) mock_helper = self.mock_object(self.driver, '_get_share_helper') mock_executor = self.mock_object( self.driver, '_get_shell_executor_by_host') self.mock_object( self.driver, 'execute', mock.Mock(return_value=('fake_out', 'fake_err'))) self.mock_object(self.driver, 'delete_share') result = self.driver.migration_complete( self._context, src_share, dst_share, None, None) expected_result = { 'export_locations': (mock_helper.return_value. create_exports.return_value) } self.assertEqual(expected_result, result) mock_executor.assert_called_once_with(dst_share['host']) self.driver.execute.assert_called_once_with( 'sudo', 'zfs', 'destroy', dst_snapshot_name, executor=mock_executor.return_value, ) self.driver.delete_share.assert_called_once_with( self._context, src_share) mock_helper.assert_called_once_with(dst_share['share_proto']) mock_helper.return_value.create_exports.assert_called_once_with( dst_dataset_name, executor=self.driver._get_shell_executor_by_host.return_value) def test_migration_cancel_success(self): src_dataset_name = 'fake_src_dataset_name' src_share = { 'id': 'fake_src_share_id', 'dataset_name': src_dataset_name, } dst_share = { 'id': 'fake_dst_share_id', 'host': 'barhost@barbackend#barpool', 'share_proto': 'fake_share_proto', } dst_dataset_name = 'fake_dst_dataset_name' snapshot_tag = 'fake_migration_snapshot_tag' dst_ssh_cmd = 'fake_dst_ssh_cmd' self.driver.private_storage.update( src_share['id'], {'dataset_name': src_dataset_name}) self.driver.private_storage.update( dst_share['id'], { 'migr_snapshot_tag': snapshot_tag, 'dataset_name': dst_dataset_name, 'ssh_cmd': dst_ssh_cmd, }) self.mock_object(zfs_driver.time, 'sleep') mock_delete_dataset = self.mock_object( self.driver, '_delete_dataset_or_snapshot_with_retry') ps_output = ( "fake_line1\nfoo_user 12345 foo_dataset_name@%s\n" "fake_line2") % snapshot_tag self.mock_object( self.driver, 'execute', mock.Mock(return_value=(ps_output, 'fake_err')) ) self.driver.migration_cancel( self._context, src_share, dst_share, [], {}) self.driver.execute.assert_has_calls([ mock.call('ps', 'aux'), mock.call('sudo', 'kill', '-9', '12345'), mock.call('ssh', dst_ssh_cmd, 'sudo', 'zfs', 'destroy', '-r', dst_dataset_name), ]) zfs_driver.time.sleep.assert_called_once_with(2) mock_delete_dataset.assert_called_once_with( src_dataset_name + '@' + snapshot_tag) def test_migration_cancel_error(self): src_dataset_name = 'fake_src_dataset_name' src_share = { 'id': 'fake_src_share_id', 'dataset_name': src_dataset_name, } dst_share = { 'id': 'fake_dst_share_id', 'host': 'barhost@barbackend#barpool', 'share_proto': 'fake_share_proto', } dst_dataset_name = 'fake_dst_dataset_name' 
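# --- Illustrative aside (an assumption grounded in the ps_output fixture
# used by test_migration_cancel_success above) ---
# Cancelling a running migration means finding the nohup'ed send/receive
# process in `ps aux` output by its migration snapshot tag and killing it;
# the PID is the second whitespace-separated column of a `ps aux` row:
def _example_find_migration_pid(ps_output, snapshot_tag):
    for line in ps_output.splitlines():
        if snapshot_tag in line:
            return line.split()[1]
    return None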
snapshot_tag = 'fake_migration_snapshot_tag' dst_ssh_cmd = 'fake_dst_ssh_cmd' self.driver.private_storage.update( src_share['id'], {'dataset_name': src_dataset_name}) self.driver.private_storage.update( dst_share['id'], { 'migr_snapshot_tag': snapshot_tag, 'dataset_name': dst_dataset_name, 'ssh_cmd': dst_ssh_cmd, }) self.mock_object(zfs_driver.time, 'sleep') mock_delete_dataset = self.mock_object( self.driver, '_delete_dataset_or_snapshot_with_retry') self.mock_object( self.driver, 'execute', mock.Mock(side_effect=exception.ProcessExecutionError), ) self.driver.migration_cancel( self._context, src_share, dst_share, [], {}) self.driver.execute.assert_has_calls([ mock.call('ps', 'aux'), mock.call('ssh', dst_ssh_cmd, 'sudo', 'zfs', 'destroy', '-r', dst_dataset_name), ]) zfs_driver.time.sleep.assert_called_once_with(2) mock_delete_dataset.assert_called_once_with( src_dataset_name + '@' + snapshot_tag) manila-10.0.0/manila/tests/share/drivers/test_generic.py0000664000175000017500000023717313656750227023315 0ustar zuulzuul00000000000000# Copyright (c) 2014 NetApp, Inc. # Copyright (c) 2015 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the Generic driver module.""" import os import time from unittest import mock import ddt from oslo_concurrency import processutils from oslo_config import cfg from six import moves from manila.common import constants as const from manila import compute from manila import context from manila import exception import manila.share.configuration from manila.share.drivers import generic from manila.share import share_types from manila import test from manila.tests import fake_compute from manila.tests import fake_service_instance from manila.tests import fake_share from manila.tests import fake_volume from manila import utils from manila import volume CONF = cfg.CONF def get_fake_manage_share(): return { 'id': 'fake', 'share_proto': 'NFS', 'share_type_id': 'fake', 'export_locations': [ {'path': '10.0.0.1:/foo/fake/path'}, {'path': '11.0.0.1:/bar/fake/path'}, ], } def get_fake_snap_dict(): snap_dict = { 'status': 'available', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': '0', 'created_at': '2015-08-10 00:05:58', 'updated_at': '2015-08-10 00:05:58', 'consistency_group_id': None, 'deleted_at': None, 'id': 'f6aa3b59-57eb-421e-965c-4e182538e36a', 'name': None, } return snap_dict def get_fake_access_rule(access_to, access_level, access_type='ip'): return { 'access_type': access_type, 'access_to': access_to, 'access_level': access_level, } @ddt.ddt class GenericShareDriverTestCase(test.TestCase): """Tests GenericShareDriver.""" def setUp(self): super(GenericShareDriverTestCase, self).setUp() self._context = context.get_admin_context() self._execute = mock.Mock(return_value=('', '')) self._helper_cifs = mock.Mock() self._helper_nfs = mock.Mock() CONF.set_default('driver_handles_share_servers', True) self.fake_conf = 
manila.share.configuration.Configuration(None) self.fake_private_storage = mock.Mock() self.mock_object(self.fake_private_storage, 'get', mock.Mock(return_value=None)) with mock.patch.object( generic.service_instance, 'ServiceInstanceManager', fake_service_instance.FakeServiceInstanceManager): self._driver = generic.GenericShareDriver( private_storage=self.fake_private_storage, execute=self._execute, configuration=self.fake_conf) self._driver.service_tenant_id = 'service tenant id' self._driver.service_network_id = 'service network id' self._driver.compute_api = fake_compute.API() self._driver.volume_api = fake_volume.API() self._driver.share_networks_locks = {} self._driver.get_service_instance = mock.Mock() self._driver.share_networks_servers = {} self._driver.admin_context = self._context self.fake_sn = {"id": "fake_sn_id"} self.fake_net_info = { "id": "fake_srv_id", "share_network_id": "fake_sn_id" } fsim = fake_service_instance.FakeServiceInstanceManager() sim = mock.Mock(return_value=fsim) self._driver.instance_manager = sim self._driver.service_instance_manager = sim self.fake_server = sim._create_service_instance( context="fake", instance_name="fake", share_network_id=self.fake_sn["id"], old_server_ip="fake") self.mock_object(utils, 'synchronized', mock.Mock(return_value=lambda f: f)) self.mock_object(generic.os.path, 'exists', mock.Mock(return_value=True)) self._driver._helpers = { 'CIFS': self._helper_cifs, 'NFS': self._helper_nfs, } self.share = fake_share.fake_share(share_proto='NFS') self.server = { 'instance_id': 'fake_instance_id', 'ip': 'fake_ip', 'username': 'fake_username', 'password': 'fake_password', 'pk_path': 'fake_pk_path', 'backend_details': { 'ip': '1.2.3.4', 'public_address': 'fake_public_address', 'instance_id': 'fake', 'service_ip': 'fake_ip', }, 'availability_zone': 'fake_az', } self.access = fake_share.fake_access() self.snapshot = fake_share.fake_snapshot() self.mock_object(time, 'sleep') self.mock_debug_log = self.mock_object(generic.LOG, 'debug') self.mock_warning_log = self.mock_object(generic.LOG, 'warning') self.mock_error_log = self.mock_object(generic.LOG, 'error') self.mock_exception_log = self.mock_object(generic.LOG, 'exception') @ddt.data(True, False) def test_do_setup_with_dhss(self, dhss): CONF.set_default('driver_handles_share_servers', dhss) fake_server = {'id': 'fake_server_id'} self.mock_object(volume, 'API') self.mock_object(compute, 'API') self.mock_object(self._driver, '_setup_helpers') self.mock_object( self._driver, '_is_share_server_active', mock.Mock(return_value=True)) self.mock_object( self._driver.service_instance_manager, 'get_common_server', mock.Mock(return_value=fake_server)) self._driver.do_setup(self._context) volume.API.assert_called_once_with() compute.API.assert_called_once_with() self._driver._setup_helpers.assert_called_once_with() if not dhss: (self._driver.service_instance_manager.get_common_server. assert_called_once_with()) self._driver._is_share_server_active.assert_called_once_with( self._context, fake_server) else: self.assertFalse( self._driver.service_instance_manager.get_common_server.called) self.assertFalse(self._driver._is_share_server_active.called) @mock.patch('time.sleep') def test_do_setup_dhss_false_server_avail_after_retry(self, mock_sleep): # This tests the scenario in which the common share server cannot be # retrieved during the first attempt, is not active during the second, # becoming active during the third attempt. 
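        # Per the assertions below, get_common_server is polled three times,
        # _is_share_server_active twice, and time.sleep(5) runs twice while
        # the driver waits for the common share server to become usable.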
CONF.set_default('driver_handles_share_servers', False) fake_server = {'id': 'fake_server_id'} self.mock_object(volume, 'API') self.mock_object(compute, 'API') self.mock_object(self._driver, '_setup_helpers') self.mock_object( self._driver, '_is_share_server_active', mock.Mock(side_effect=[False, True])) self.mock_object( self._driver.service_instance_manager, 'get_common_server', mock.Mock(side_effect=[exception.ManilaException, fake_server, fake_server])) self._driver.do_setup(self._context) volume.API.assert_called_once_with() compute.API.assert_called_once_with() self._driver._setup_helpers.assert_called_once_with() (self._driver.service_instance_manager.get_common_server. assert_has_calls([mock.call()] * 3)) self._driver._is_share_server_active.assert_has_calls( [mock.call(self._context, fake_server)] * 2) mock_sleep.assert_has_calls([mock.call(5)] * 2) def test_setup_helpers(self): self._driver._helpers = {} CONF.set_default('share_helpers', ['NFS=fakenfs']) self.mock_object(generic.importutils, 'import_class', mock.Mock(return_value=self._helper_nfs)) self._driver._setup_helpers() generic.importutils.import_class.assert_has_calls([ mock.call('fakenfs') ]) self._helper_nfs.assert_called_once_with( self._execute, self._driver._ssh_exec, self.fake_conf ) self.assertEqual(1, len(self._driver._helpers)) def test_setup_helpers_no_helpers(self): self._driver._helpers = {} CONF.set_default('share_helpers', []) self.assertRaises(exception.ManilaException, self._driver._setup_helpers) def test_create_share(self): volume = 'fake_volume' volume2 = 'fake_volume2' self.mock_object(self._driver, '_allocate_container', mock.Mock(return_value=volume)) self.mock_object(self._driver, '_attach_volume', mock.Mock(return_value=volume2)) self.mock_object(self._driver, '_format_device') self.mock_object(self._driver, '_mount_device') result = self._driver.create_share( self._context, self.share, share_server=self.server) self.assertEqual(self._helper_nfs.create_exports.return_value, result) self._driver._allocate_container.assert_called_once_with( self._driver.admin_context, self.share, snapshot=None) self._driver._attach_volume.assert_called_once_with( self._driver.admin_context, self.share, self.server['backend_details']['instance_id'], volume) self._driver._format_device.assert_called_once_with( self.server['backend_details'], volume2) self._driver._mount_device.assert_called_once_with( self.share, self.server['backend_details'], volume2) def test_create_share_exception(self): share = fake_share.fake_share(share_network_id=None) self.assertRaises(exception.ManilaException, self._driver.create_share, self._context, share) def test_create_share_invalid_helper(self): self._driver._helpers = {'CIFS': self._helper_cifs} self.assertRaises(exception.InvalidShare, self._driver.create_share, self._context, self.share, share_server=self.server) def test_is_device_file_available(self): volume = {'mountpoint': 'fake_mount_point'} self.mock_object(self._driver, '_ssh_exec', mock.Mock(return_value=None)) self._driver._is_device_file_available(self.server, volume) self._driver._ssh_exec.assert_called_once_with( self.server, ['sudo', 'test', '-b', volume['mountpoint']]) def test_format_device(self): volume = {'mountpoint': 'fake_mount_point'} self.mock_object(self._driver, '_ssh_exec', mock.Mock(return_value=('', ''))) self.mock_object(self._driver, '_is_device_file_available') self._driver._format_device(self.server, volume) self._driver._is_device_file_available.assert_called_once_with( self.server, volume) 
self._driver._ssh_exec.assert_called_once_with( self.server, ['sudo', 'mkfs.%s' % self.fake_conf.share_volume_fstype, volume['mountpoint']]) def test_mount_device_not_present(self): server = {'instance_id': 'fake_server_id'} mount_path = self._driver._get_mount_path(self.share) volume = {'mountpoint': 'fake_mount_point'} self.mock_object(self._driver, '_is_device_mounted', mock.Mock(return_value=False)) self.mock_object(self._driver, '_add_mount_permanently') self.mock_object(self._driver, '_ssh_exec', mock.Mock(return_value=('', ''))) self._driver._mount_device(self.share, server, volume) self._driver._is_device_mounted.assert_called_once_with( mount_path, server, volume) self._driver._add_mount_permanently.assert_called_once_with( self.share.id, server) self._driver._ssh_exec.assert_called_once_with( server, ( 'sudo', 'mkdir', '-p', mount_path, '&&', 'sudo', 'mount', volume['mountpoint'], mount_path, '&&', 'sudo', 'chmod', '777', mount_path, '&&', 'sudo', 'umount', mount_path, '&&', 'sudo', 'e2fsck', '-y', '-f', volume['mountpoint'], '&&', 'sudo', 'tune2fs', '-U', 'random', volume['mountpoint'], '&&', 'sudo', 'mount', volume['mountpoint'], mount_path, ), ) def test_mount_device_present(self): mount_path = '/fake/mount/path' volume = {'mountpoint': 'fake_mount_point'} self.mock_object(self._driver, '_is_device_mounted', mock.Mock(return_value=True)) self.mock_object(self._driver, '_get_mount_path', mock.Mock(return_value=mount_path)) self.mock_object(generic.LOG, 'warning') self._driver._mount_device(self.share, self.server, volume) self._driver._get_mount_path.assert_called_once_with(self.share) self._driver._is_device_mounted.assert_called_once_with( mount_path, self.server, volume) generic.LOG.warning.assert_called_once_with(mock.ANY, mock.ANY) def test_mount_device_exception_raised(self): volume = {'mountpoint': 'fake_mount_point'} self.mock_object( self._driver, '_is_device_mounted', mock.Mock(side_effect=exception.ProcessExecutionError)) self.assertRaises( exception.ShareBackendException, self._driver._mount_device, self.share, self.server, volume, ) self._driver._is_device_mounted.assert_called_once_with( self._driver._get_mount_path(self.share), self.server, volume) def test_unmount_device_present(self): mount_path = '/fake/mount/path' self.mock_object(self._driver, '_is_device_mounted', mock.Mock(return_value=True)) self.mock_object(self._driver, '_remove_mount_permanently') self.mock_object(self._driver, '_get_mount_path', mock.Mock(return_value=mount_path)) self.mock_object(self._driver, '_ssh_exec', mock.Mock(return_value=('', ''))) self._driver._unmount_device(self.share, self.server) self._driver._get_mount_path.assert_called_once_with(self.share) self._driver._is_device_mounted.assert_called_once_with( mount_path, self.server) self._driver._remove_mount_permanently.assert_called_once_with( self.share.id, self.server) self._driver._ssh_exec.assert_called_once_with( self.server, ['sudo', 'umount', mount_path, '&&', 'sudo', 'rmdir', mount_path], ) def test_unmount_device_retry_once(self): self.counter = 0 def _side_effect(*args): self.counter += 1 if self.counter < 2: raise exception.ProcessExecutionError mount_path = '/fake/mount/path' self.mock_object(self._driver, '_is_device_mounted', mock.Mock(return_value=True)) self.mock_object(self._driver, '_remove_mount_permanently') self.mock_object(self._driver, '_get_mount_path', mock.Mock(return_value=mount_path)) self.mock_object(self._driver, '_ssh_exec', mock.Mock(side_effect=_side_effect)) 
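        # _side_effect makes the first umount attempt raise
        # ProcessExecutionError and the second attempt succeed, so the
        # assertions below expect a single sleep and two passes through
        # _get_mount_path, _is_device_mounted and _ssh_exec.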
self._driver._unmount_device(self.share, self.server) self.assertEqual(1, time.sleep.call_count) self.assertEqual([mock.call(self.share) for i in moves.range(2)], self._driver._get_mount_path.mock_calls) self.assertEqual([mock.call(mount_path, self.server) for i in moves.range(2)], self._driver._is_device_mounted.mock_calls) self._driver._remove_mount_permanently.assert_called_once_with( self.share.id, self.server) self.assertEqual( [mock.call(self.server, ['sudo', 'umount', mount_path, '&&', 'sudo', 'rmdir', mount_path]) for i in moves.range(2)], self._driver._ssh_exec.mock_calls, ) def test_unmount_device_not_present(self): mount_path = '/fake/mount/path' self.mock_object(self._driver, '_is_device_mounted', mock.Mock(return_value=False)) self.mock_object(self._driver, '_get_mount_path', mock.Mock(return_value=mount_path)) self.mock_object(generic.LOG, 'warning') self._driver._unmount_device(self.share, self.server) self._driver._get_mount_path.assert_called_once_with(self.share) self._driver._is_device_mounted.assert_called_once_with( mount_path, self.server) generic.LOG.warning.assert_called_once_with(mock.ANY, mock.ANY) def test_is_device_mounted_true(self): volume = {'mountpoint': 'fake_mount_point', 'id': 'fake_id'} mount_path = '/fake/mount/path' mounts = "%(dev)s on %(path)s" % {'dev': volume['mountpoint'], 'path': mount_path} self.mock_object(self._driver, '_ssh_exec', mock.Mock(return_value=(mounts, ''))) result = self._driver._is_device_mounted( mount_path, self.server, volume) self._driver._ssh_exec.assert_called_once_with( self.server, ['sudo', 'mount']) self.assertTrue(result) def test_is_device_mounted_true_no_volume_provided(self): mount_path = '/fake/mount/path' mounts = "/fake/dev/path on %(path)s type fake" % {'path': mount_path} self.mock_object(self._driver, '_ssh_exec', mock.Mock(return_value=(mounts, ''))) result = self._driver._is_device_mounted(mount_path, self.server) self._driver._ssh_exec.assert_called_once_with( self.server, ['sudo', 'mount']) self.assertTrue(result) def test_is_device_mounted_false(self): mount_path = '/fake/mount/path' volume = {'mountpoint': 'fake_mount_point', 'id': 'fake_id'} mounts = "%(dev)s on %(path)s" % {'dev': '/fake', 'path': mount_path} self.mock_object(self._driver, '_ssh_exec', mock.Mock(return_value=(mounts, ''))) result = self._driver._is_device_mounted( mount_path, self.server, volume) self._driver._ssh_exec.assert_called_once_with( self.server, ['sudo', 'mount']) self.assertFalse(result) def test_is_device_mounted_false_no_volume_provided(self): mount_path = '/fake/mount/path' mounts = "%(path)s" % {'path': 'fake'} self.mock_object(self._driver, '_ssh_exec', mock.Mock(return_value=(mounts, ''))) self.mock_object(self._driver, '_get_mount_path', mock.Mock(return_value=mount_path)) result = self._driver._is_device_mounted(mount_path, self.server) self._driver._ssh_exec.assert_called_once_with( self.server, ['sudo', 'mount']) self.assertFalse(result) def test_add_mount_permanently(self): self.mock_object(self._driver, '_ssh_exec') self._driver._add_mount_permanently(self.share.id, self.server) self._driver._ssh_exec.assert_has_calls([ mock.call( self.server, ['grep', self.share.id, const.MOUNT_FILE_TEMP, '|', 'sudo', 'tee', '-a', const.MOUNT_FILE]), mock.call(self.server, ['sudo', 'mount', '-a']) ]) def test_add_mount_permanently_raise_error_on_add(self): self.mock_object( self._driver, '_ssh_exec', mock.Mock(side_effect=exception.ProcessExecutionError)) self.assertRaises( exception.ShareBackendException, 
self._driver._add_mount_permanently, self.share.id, self.server ) self._driver._ssh_exec.assert_called_once_with( self.server, ['grep', self.share.id, const.MOUNT_FILE_TEMP, '|', 'sudo', 'tee', '-a', const.MOUNT_FILE], ) def test_remove_mount_permanently(self): self.mock_object(self._driver, '_ssh_exec') self._driver._remove_mount_permanently(self.share.id, self.server) self._driver._ssh_exec.assert_called_once_with( self.server, ['sudo', 'sed', '-i', '\'/%s/d\'' % self.share.id, const.MOUNT_FILE], ) def test_remove_mount_permanently_raise_error_on_remove(self): self.mock_object( self._driver, '_ssh_exec', mock.Mock(side_effect=exception.ProcessExecutionError)) self.assertRaises( exception.ShareBackendException, self._driver._remove_mount_permanently, self.share.id, self.server ) self._driver._ssh_exec.assert_called_once_with( self.server, ['sudo', 'sed', '-i', '\'/%s/d\'' % self.share.id, const.MOUNT_FILE], ) def test_get_mount_path(self): result = self._driver._get_mount_path(self.share) self.assertEqual(os.path.join(CONF.share_mount_path, self.share['name']), result) def test_attach_volume_not_attached(self): available_volume = fake_volume.FakeVolume() attached_volume = fake_volume.FakeVolume(status='in-use') self.mock_object(self._driver.compute_api, 'instance_volume_attach') self.mock_object(self._driver.volume_api, 'get', mock.Mock(return_value=attached_volume)) result = self._driver._attach_volume(self._context, self.share, 'fake_inst_id', available_volume) (self._driver.compute_api.instance_volume_attach. assert_called_once_with(self._context, 'fake_inst_id', available_volume['id'])) self._driver.volume_api.get.assert_called_once_with( self._context, attached_volume['id']) self.assertEqual(attached_volume, result) def test_attach_volume_attached_correct(self): fake_server = fake_compute.FakeServer() attached_volume = fake_volume.FakeVolume(status='in-use') self.mock_object(self._driver.compute_api, 'instance_volumes_list', mock.Mock(return_value=[attached_volume])) result = self._driver._attach_volume(self._context, self.share, fake_server, attached_volume) self.assertEqual(attached_volume, result) def test_attach_volume_attached_incorrect(self): fake_server = fake_compute.FakeServer() attached_volume = fake_volume.FakeVolume(status='in-use') anoter_volume = fake_volume.FakeVolume(id='fake_id2', status='in-use') self.mock_object(self._driver.compute_api, 'instance_volumes_list', mock.Mock(return_value=[anoter_volume])) self.assertRaises(exception.ManilaException, self._driver._attach_volume, self._context, self.share, fake_server, attached_volume) @ddt.data(exception.ManilaException, exception.Invalid) def test_attach_volume_failed_attach(self, side_effect): fake_server = fake_compute.FakeServer() available_volume = fake_volume.FakeVolume() self.mock_object(self._driver.compute_api, 'instance_volume_attach', mock.Mock(side_effect=side_effect)) self.assertRaises(exception.ManilaException, self._driver._attach_volume, self._context, self.share, fake_server, available_volume) self.assertEqual( 3, self._driver.compute_api.instance_volume_attach.call_count) def test_attach_volume_attached_retry_correct(self): fake_server = fake_compute.FakeServer() attached_volume = fake_volume.FakeVolume(status='available') in_use_volume = fake_volume.FakeVolume(status='in-use') side_effect = [exception.Invalid("Fake"), attached_volume] attach_mock = mock.Mock(side_effect=side_effect) self.mock_object(self._driver.compute_api, 'instance_volume_attach', attach_mock) 
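        # The first instance_volume_attach call raises Invalid and the retry
        # succeeds, so two attach attempts are expected and the returned
        # volume is the 'in-use' one reported by volume_api.get.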
self.mock_object(self._driver.compute_api, 'instance_volumes_list', mock.Mock(return_value=[attached_volume])) self.mock_object(self._driver.volume_api, 'get', mock.Mock(return_value=in_use_volume)) result = self._driver._attach_volume(self._context, self.share, fake_server, attached_volume) self.assertEqual(in_use_volume, result) self.assertEqual( 2, self._driver.compute_api.instance_volume_attach.call_count) def test_attach_volume_error(self): fake_server = fake_compute.FakeServer() available_volume = fake_volume.FakeVolume() error_volume = fake_volume.FakeVolume(status='error') self.mock_object(self._driver.compute_api, 'instance_volume_attach') self.mock_object(self._driver.volume_api, 'get', mock.Mock(return_value=error_volume)) self.assertRaises(exception.ManilaException, self._driver._attach_volume, self._context, self.share, fake_server, available_volume) def test_get_volume(self): volume = fake_volume.FakeVolume( name=CONF.volume_name_template % self.share['id']) self.mock_object(self._driver.volume_api, 'get_all', mock.Mock(return_value=[volume])) result = self._driver._get_volume(self._context, self.share['id']) self.assertEqual(volume, result) self._driver.volume_api.get_all.assert_called_once_with( self._context, {'all_tenants': True, 'name': volume['name']}) def test_get_volume_with_private_data(self): volume = fake_volume.FakeVolume() self.mock_object(self._driver.volume_api, 'get', mock.Mock(return_value=volume)) self.mock_object(self.fake_private_storage, 'get', mock.Mock(return_value=volume['id'])) result = self._driver._get_volume(self._context, self.share['id']) self.assertEqual(volume, result) self._driver.volume_api.get.assert_called_once_with( self._context, volume['id']) self.fake_private_storage.get.assert_called_once_with( self.share['id'], 'volume_id' ) def test_get_volume_none(self): vol_name = ( self._driver.configuration.volume_name_template % self.share['id']) self.mock_object(self._driver.volume_api, 'get_all', mock.Mock(return_value=[])) result = self._driver._get_volume(self._context, self.share['id']) self.assertIsNone(result) self._driver.volume_api.get_all.assert_called_once_with( self._context, {'all_tenants': True, 'name': vol_name}) def test_get_volume_error(self): volume = fake_volume.FakeVolume( name=CONF.volume_name_template % self.share['id']) self.mock_object(self._driver.volume_api, 'get_all', mock.Mock(return_value=[volume, volume])) self.assertRaises(exception.ManilaException, self._driver._get_volume, self._context, self.share['id']) self._driver.volume_api.get_all.assert_called_once_with( self._context, {'all_tenants': True, 'name': volume['name']}) def test_get_volume_snapshot(self): volume_snapshot = fake_volume.FakeVolumeSnapshot( name=self._driver.configuration.volume_snapshot_name_template % self.snapshot['id']) self.mock_object(self._driver.volume_api, 'get_all_snapshots', mock.Mock(return_value=[volume_snapshot])) result = self._driver._get_volume_snapshot(self._context, self.snapshot['id']) self.assertEqual(volume_snapshot, result) self._driver.volume_api.get_all_snapshots.assert_called_once_with( self._context, {'name': volume_snapshot['name']}) def test_get_volume_snapshot_with_private_data(self): volume_snapshot = fake_volume.FakeVolumeSnapshot() self.mock_object(self._driver.volume_api, 'get_snapshot', mock.Mock(return_value=volume_snapshot)) self.mock_object(self.fake_private_storage, 'get', mock.Mock(return_value=volume_snapshot['id'])) result = self._driver._get_volume_snapshot(self._context, self.snapshot['id']) 
self.assertEqual(volume_snapshot, result) self._driver.volume_api.get_snapshot.assert_called_once_with( self._context, volume_snapshot['id']) self.fake_private_storage.get.assert_called_once_with( self.snapshot['id'], 'volume_snapshot_id' ) def test_get_volume_snapshot_none(self): snap_name = ( self._driver.configuration.volume_snapshot_name_template % self.share['id']) self.mock_object(self._driver.volume_api, 'get_all_snapshots', mock.Mock(return_value=[])) result = self._driver._get_volume_snapshot(self._context, self.share['id']) self.assertIsNone(result) self._driver.volume_api.get_all_snapshots.assert_called_once_with( self._context, {'name': snap_name}) def test_get_volume_snapshot_error(self): volume_snapshot = fake_volume.FakeVolumeSnapshot( name=self._driver.configuration.volume_snapshot_name_template % self.snapshot['id']) self.mock_object( self._driver.volume_api, 'get_all_snapshots', mock.Mock(return_value=[volume_snapshot, volume_snapshot])) self.assertRaises( exception.ManilaException, self._driver._get_volume_snapshot, self._context, self.snapshot['id']) self._driver.volume_api.get_all_snapshots.assert_called_once_with( self._context, {'name': volume_snapshot['name']}) def test_detach_volume(self): available_volume = fake_volume.FakeVolume() attached_volume = fake_volume.FakeVolume(status='in-use') self.mock_object(self._driver, '_get_volume', mock.Mock(return_value=attached_volume)) self.mock_object(self._driver.compute_api, 'instance_volumes_list', mock.Mock(return_value=[attached_volume])) self.mock_object(self._driver.compute_api, 'instance_volume_detach') self.mock_object(self._driver.volume_api, 'get', mock.Mock(return_value=available_volume)) self._driver._detach_volume(self._context, self.share, self.server['backend_details']) (self._driver.compute_api.instance_volume_detach. 
assert_called_once_with( self._context, self.server['backend_details']['instance_id'], available_volume['id'])) self._driver.volume_api.get.assert_called_once_with( self._context, available_volume['id']) def test_detach_volume_detached(self): available_volume = fake_volume.FakeVolume() attached_volume = fake_volume.FakeVolume(status='in-use') self.mock_object(self._driver, '_get_volume', mock.Mock(return_value=attached_volume)) self.mock_object(self._driver.compute_api, 'instance_volumes_list', mock.Mock(return_value=[])) self.mock_object(self._driver.volume_api, 'get', mock.Mock(return_value=available_volume)) self.mock_object(self._driver.compute_api, 'instance_volume_detach') self._driver._detach_volume(self._context, self.share, self.server['backend_details']) self.assertFalse(self._driver.volume_api.get.called) self.assertFalse( self._driver.compute_api.instance_volume_detach.called) def test_allocate_container(self): fake_vol = fake_volume.FakeVolume() self.fake_conf.cinder_volume_type = 'fake_volume_type' self.mock_object(self._driver.volume_api, 'create', mock.Mock(return_value=fake_vol)) result = self._driver._allocate_container(self._context, self.share) self.assertEqual(fake_vol, result) self._driver.volume_api.create.assert_called_once_with( self._context, self.share['size'], CONF.volume_name_template % self.share['id'], '', snapshot=None, volume_type='fake_volume_type', availability_zone=self.share['availability_zone']) def test_allocate_container_with_snaphot(self): fake_vol = fake_volume.FakeVolume() fake_vol_snap = fake_volume.FakeVolumeSnapshot() self.mock_object(self._driver, '_get_volume_snapshot', mock.Mock(return_value=fake_vol_snap)) self.mock_object(self._driver.volume_api, 'create', mock.Mock(return_value=fake_vol)) result = self._driver._allocate_container(self._context, self.share, self.snapshot) self.assertEqual(fake_vol, result) self._driver.volume_api.create.assert_called_once_with( self._context, self.share['size'], CONF.volume_name_template % self.share['id'], '', snapshot=fake_vol_snap, volume_type=None, availability_zone=self.share['availability_zone']) def test_allocate_container_error(self): fake_vol = fake_volume.FakeVolume(status='error') self.mock_object(self._driver.volume_api, 'create', mock.Mock(return_value=fake_vol)) self.assertRaises(exception.ManilaException, self._driver._allocate_container, self._context, self.share) def test_wait_for_available_volume(self): fake_volume = {'status': 'creating', 'id': 'fake'} fake_available_volume = {'status': 'available', 'id': 'fake'} self.mock_object(self._driver.volume_api, 'get', mock.Mock(return_value=fake_available_volume)) actual_result = self._driver._wait_for_available_volume( fake_volume, 5, "error", "timeout") self.assertEqual(fake_available_volume, actual_result) self._driver.volume_api.get.assert_called_once_with( mock.ANY, fake_volume['id']) @mock.patch('time.sleep') def test_wait_for_available_volume_error_extending(self, mock_sleep): fake_volume = {'status': 'error_extending', 'id': 'fake'} self.assertRaises(exception.ManilaException, self._driver._wait_for_available_volume, fake_volume, 5, 'error', 'timeout') self.assertFalse(mock_sleep.called) @mock.patch('time.sleep') def test_wait_for_extending_volume(self, mock_sleep): initial_size = 1 expected_size = 2 mock_volume = fake_volume.FakeVolume(status='available', size=initial_size) mock_extending_vol = fake_volume.FakeVolume(status='extending', size=initial_size) mock_extended_vol = fake_volume.FakeVolume(status='available', 
size=expected_size) self.mock_object(self._driver.volume_api, 'get', mock.Mock(side_effect=[mock_extending_vol, mock_extended_vol])) result = self._driver._wait_for_available_volume( mock_volume, 5, "error", "timeout", expected_size=expected_size) expected_get_count = 2 self.assertEqual(mock_extended_vol, result) self._driver.volume_api.get.assert_has_calls( [mock.call(self._driver.admin_context, mock_volume['id'])] * expected_get_count) mock_sleep.assert_has_calls([mock.call(1)] * expected_get_count) @ddt.data(mock.Mock(return_value={'status': 'creating', 'id': 'fake'}), mock.Mock(return_value={'status': 'error', 'id': 'fake'})) def test_wait_for_available_volume_invalid(self, volume_get_mock): fake_volume = {'status': 'creating', 'id': 'fake'} self.mock_object(self._driver.volume_api, 'get', volume_get_mock) self.mock_object(time, 'time', mock.Mock(side_effect=[1.0, 1.33, 1.67, 2.0])) self.assertRaises( exception.ManilaException, self._driver._wait_for_available_volume, fake_volume, 1, "error", "timeout" ) def test_deallocate_container(self): fake_vol = fake_volume.FakeVolume() self.mock_object(self._driver, '_get_volume', mock.Mock(return_value=fake_vol)) self.mock_object(self._driver.volume_api, 'delete') self.mock_object(self._driver.volume_api, 'get', mock.Mock( side_effect=exception.VolumeNotFound(volume_id=fake_vol['id']))) self._driver._deallocate_container(self._context, self.share) self._driver._get_volume.assert_called_once_with( self._context, self.share['id']) self._driver.volume_api.delete.assert_called_once_with( self._context, fake_vol['id']) self._driver.volume_api.get.assert_called_once_with( self._context, fake_vol['id']) def test_deallocate_container_with_volume_not_found(self): fake_vol = fake_volume.FakeVolume() self.mock_object(self._driver, '_get_volume', mock.Mock(side_effect=exception.VolumeNotFound( volume_id=fake_vol['id']))) self.mock_object(self._driver.volume_api, 'delete') self._driver._deallocate_container(self._context, self.share) self._driver._get_volume.assert_called_once_with( self._context, self.share['id']) self.assertFalse(self._driver.volume_api.delete.called) def test_create_share_from_snapshot(self): vol1 = 'fake_vol1' vol2 = 'fake_vol2' self.mock_object(self._driver, '_allocate_container', mock.Mock(return_value=vol1)) self.mock_object(self._driver, '_attach_volume', mock.Mock(return_value=vol2)) self.mock_object(self._driver, '_mount_device') result = self._driver.create_share_from_snapshot( self._context, self.share, self.snapshot, share_server=self.server) self.assertEqual(self._helper_nfs.create_exports.return_value, result) self._driver._allocate_container.assert_called_once_with( self._driver.admin_context, self.share, snapshot=self.snapshot) self._driver._attach_volume.assert_called_once_with( self._driver.admin_context, self.share, self.server['backend_details']['instance_id'], vol1) self._driver._mount_device.assert_called_once_with( self.share, self.server['backend_details'], vol2) self._helper_nfs.create_exports.assert_called_once_with( self.server['backend_details'], self.share['name']) def test_create_share_from_snapshot_invalid_helper(self): self._driver._helpers = {'CIFS': self._helper_cifs} self.assertRaises(exception.InvalidShare, self._driver.create_share_from_snapshot, self._context, self.share, self.snapshot, share_server=self.server) def test_delete_share_no_share_servers_handling(self): self.mock_object(self._driver, '_deallocate_container') self.mock_object( self._driver.service_instance_manager, 'get_common_server', 
mock.Mock(return_value=self.server)) self.mock_object( self._driver.service_instance_manager, 'ensure_service_instance', mock.Mock(return_value=False)) CONF.set_default('driver_handles_share_servers', False) self._driver.delete_share(self._context, self.share) (self._driver.service_instance_manager.get_common_server. assert_called_once_with()) self._driver._deallocate_container.assert_called_once_with( self._driver.admin_context, self.share) (self._driver.service_instance_manager.ensure_service_instance. assert_called_once_with( self._context, self.server['backend_details'])) def test_delete_share(self): self.mock_object(self._driver, '_unmount_device') self.mock_object(self._driver, '_detach_volume') self.mock_object(self._driver, '_deallocate_container') self._driver.delete_share( self._context, self.share, share_server=self.server) self._helper_nfs.remove_exports.assert_called_once_with( self.server['backend_details'], self.share['name']) self._driver._unmount_device.assert_called_once_with( self.share, self.server['backend_details']) self._driver._detach_volume.assert_called_once_with( self._driver.admin_context, self.share, self.server['backend_details']) self._driver._deallocate_container.assert_called_once_with( self._driver.admin_context, self.share) (self._driver.service_instance_manager.ensure_service_instance. assert_called_once_with( self._context, self.server['backend_details'])) def test_detach_volume_with_volume_not_found(self): fake_vol = fake_volume.FakeVolume() fake_server_details = mock.MagicMock() self.mock_object(self._driver.compute_api, 'instance_volumes_list', mock.Mock(return_value=[])) self.mock_object(self._driver, '_get_volume', mock.Mock(side_effect=exception.VolumeNotFound( volume_id=fake_vol['id']))) self._driver._detach_volume(self._context, self.share, fake_server_details) (self._driver.compute_api.instance_volumes_list. assert_called_once_with(self._driver.admin_context, fake_server_details['instance_id'])) (self._driver._get_volume. 
assert_called_once_with(self._driver.admin_context, self.share['id'])) self.assertEqual(1, self.mock_warning_log.call_count) def test_delete_share_without_share_server(self): self.mock_object(self._driver, '_unmount_device') self.mock_object(self._driver, '_detach_volume') self.mock_object(self._driver, '_deallocate_container') self._driver.delete_share( self._context, self.share, share_server=None) self.assertFalse(self._helper_nfs.remove_export.called) self.assertFalse(self._driver._unmount_device.called) self.assertFalse(self._driver._detach_volume.called) self._driver._deallocate_container.assert_called_once_with( self._driver.admin_context, self.share) def test_delete_share_without_server_backend_details(self): self.mock_object(self._driver, '_unmount_device') self.mock_object(self._driver, '_detach_volume') self.mock_object(self._driver, '_deallocate_container') fake_share_server = { 'instance_id': 'fake_instance_id', 'ip': 'fake_ip', 'username': 'fake_username', 'password': 'fake_password', 'pk_path': 'fake_pk_path', 'backend_details': {} } self._driver.delete_share( self._context, self.share, share_server=fake_share_server) self.assertFalse(self._helper_nfs.remove_export.called) self.assertFalse(self._driver._unmount_device.called) self.assertFalse(self._driver._detach_volume.called) self._driver._deallocate_container.assert_called_once_with( self._driver.admin_context, self.share) def test_delete_share_without_server_availability(self): self.mock_object(self._driver, '_unmount_device') self.mock_object(self._driver, '_detach_volume') self.mock_object(self._driver, '_deallocate_container') self.mock_object( self._driver.service_instance_manager, 'ensure_service_instance', mock.Mock(return_value=False)) self._driver.delete_share( self._context, self.share, share_server=self.server) self.assertFalse(self._helper_nfs.remove_export.called) self.assertFalse(self._driver._unmount_device.called) self.assertFalse(self._driver._detach_volume.called) self._driver._deallocate_container.assert_called_once_with( self._driver.admin_context, self.share) (self._driver.service_instance_manager.ensure_service_instance. 
assert_called_once_with( self._context, self.server['backend_details'])) def test_delete_share_invalid_helper(self): self._driver._helpers = {'CIFS': self._helper_cifs} self.assertRaises(exception.InvalidShare, self._driver.delete_share, self._context, self.share, share_server=self.server) def test_create_snapshot(self): fake_vol = fake_volume.FakeVolume() fake_vol_snap = fake_volume.FakeVolumeSnapshot( share_instance_id=fake_vol['id']) self.mock_object(self._driver, '_get_volume', mock.Mock(return_value=fake_vol)) self.mock_object(self._driver.volume_api, 'create_snapshot_force', mock.Mock(return_value=fake_vol_snap)) self._driver.create_snapshot(self._context, fake_vol_snap, share_server=self.server) self._driver._get_volume.assert_called_once_with( self._driver.admin_context, fake_vol_snap['share_instance_id']) self._driver.volume_api.create_snapshot_force.assert_called_once_with( self._context, fake_vol['id'], CONF.volume_snapshot_name_template % fake_vol_snap['id'], '' ) def test_delete_snapshot(self): fake_vol_snap = fake_volume.FakeVolumeSnapshot() fake_vol_snap2 = {'id': 'fake_vol_snap2'} self.mock_object(self._driver, '_get_volume_snapshot', mock.Mock(return_value=fake_vol_snap2)) self.mock_object(self._driver.volume_api, 'delete_snapshot') self.mock_object( self._driver.volume_api, 'get_snapshot', mock.Mock(side_effect=exception.VolumeSnapshotNotFound( snapshot_id=fake_vol_snap['id']))) self._driver.delete_snapshot(self._context, fake_vol_snap, share_server=self.server) self._driver._get_volume_snapshot.assert_called_once_with( self._driver.admin_context, fake_vol_snap['id']) self._driver.volume_api.delete_snapshot.assert_called_once_with( self._driver.admin_context, fake_vol_snap2['id']) self._driver.volume_api.get_snapshot.assert_called_once_with( self._driver.admin_context, fake_vol_snap2['id']) def test_ensure_share(self): vol1 = 'fake_vol1' vol2 = 'fake_vol2' self._helper_nfs.create_export.return_value = 'fakelocation' self.mock_object(self._driver, '_get_volume', mock.Mock(return_value=vol1)) self.mock_object(self._driver, '_attach_volume', mock.Mock(return_value=vol2)) self.mock_object(self._driver, '_mount_device') self._driver.ensure_share( self._context, self.share, share_server=self.server) self._driver._get_volume.assert_called_once_with( self._context, self.share['id']) self._driver._attach_volume.assert_called_once_with( self._context, self.share, self.server['backend_details']['instance_id'], vol1) self._driver._mount_device.assert_called_once_with( self.share, self.server['backend_details'], vol2) self._helper_nfs.create_exports.assert_called_once_with( self.server['backend_details'], self.share['name'], recreate=True) def test_ensure_share_volume_is_absent(self): self.mock_object( self._driver, '_get_volume', mock.Mock(return_value=None)) self.mock_object(self._driver, '_attach_volume') self._driver.ensure_share( self._context, self.share, share_server=self.server) self._driver._get_volume.assert_called_once_with( self._context, self.share['id']) self.assertFalse(self._driver._attach_volume.called) def test_ensure_share_invalid_helper(self): self._driver._helpers = {'CIFS': self._helper_cifs} self.assertRaises(exception.InvalidShare, self._driver.ensure_share, self._context, self.share, share_server=self.server) @ddt.data(const.ACCESS_LEVEL_RW, const.ACCESS_LEVEL_RO) def test_update_access(self, access_level): # fakes access_rules = [get_fake_access_rule('1.1.1.1', access_level), get_fake_access_rule('2.2.2.2', access_level)] add_rules = 
[get_fake_access_rule('2.2.2.2', access_level), ] delete_rules = [get_fake_access_rule('3.3.3.3', access_level), ] # run self._driver.update_access(self._context, self.share, access_rules, add_rules=add_rules, delete_rules=delete_rules, share_server=self.server) # asserts (self._driver._helpers[self.share['share_proto']]. update_access.assert_called_once_with( self.server['backend_details'], self.share['name'], access_rules, add_rules=add_rules, delete_rules=delete_rules)) @ddt.data(fake_share.fake_share(), fake_share.fake_share(share_proto='NFSBOGUS'), fake_share.fake_share(share_proto='CIFSBOGUS')) def test__get_helper_with_wrong_proto(self, share): self.assertRaises(exception.InvalidShare, self._driver._get_helper, share) def test__setup_server(self): sim = self._driver.instance_manager net_info = { 'server_id': 'fake', 'neutron_net_id': 'fake-net-id', 'neutron_subnet_id': 'fake-subnet-id', } self._driver.setup_server(net_info) sim.set_up_service_instance.assert_called_once_with( self._context, net_info) def test__setup_server_revert(self): def raise_exception(*args, **kwargs): raise exception.ServiceInstanceException net_info = {'server_id': 'fake', 'neutron_net_id': 'fake-net-id', 'neutron_subnet_id': 'fake-subnet-id'} self.mock_object(self._driver.service_instance_manager, 'set_up_service_instance', mock.Mock(side_effect=raise_exception)) self.assertRaises(exception.ServiceInstanceException, self._driver.setup_server, net_info) def test__teardown_server(self): server_details = { 'instance_id': 'fake_instance_id', 'subnet_id': 'fake_subnet_id', 'router_id': 'fake_router_id', } self._driver.teardown_server(server_details) (self._driver.service_instance_manager.delete_service_instance. assert_called_once_with( self._driver.admin_context, server_details)) def test_ssh_exec_connection_not_exist(self): ssh_conn_timeout = 30 CONF.set_default('ssh_conn_timeout', ssh_conn_timeout) ssh_output = 'fake_ssh_output' cmd = ['fake', 'command'] ssh = mock.Mock() ssh.get_transport = mock.Mock() ssh.get_transport().is_active = mock.Mock(return_value=True) ssh_pool = mock.Mock() ssh_pool.create = mock.Mock(return_value=ssh) self.mock_object(utils, 'SSHPool', mock.Mock(return_value=ssh_pool)) self.mock_object(processutils, 'ssh_execute', mock.Mock(return_value=ssh_output)) self._driver.ssh_connections = {} result = self._driver._ssh_exec(self.server, cmd) utils.SSHPool.assert_called_once_with( self.server['ip'], 22, ssh_conn_timeout, self.server['username'], self.server['password'], self.server['pk_path'], max_size=1) ssh_pool.create.assert_called_once_with() processutils.ssh_execute.assert_called_once_with( ssh, 'fake command', check_exit_code=True) ssh.get_transport().is_active.assert_called_once_with() self.assertEqual( self._driver.ssh_connections, {self.server['instance_id']: (ssh_pool, ssh)} ) self.assertEqual(ssh_output, result) def test_ssh_exec_connection_exist(self): ssh_output = 'fake_ssh_output' cmd = ['fake', 'command'] ssh = mock.Mock() ssh.get_transport = mock.Mock() ssh.get_transport().is_active = mock.Mock(side_effect=lambda: True) ssh_pool = mock.Mock() self.mock_object(processutils, 'ssh_execute', mock.Mock(return_value=ssh_output)) self._driver.ssh_connections = { self.server['instance_id']: (ssh_pool, ssh) } result = self._driver._ssh_exec(self.server, cmd) processutils.ssh_execute.assert_called_once_with( ssh, 'fake command', check_exit_code=True) ssh.get_transport().is_active.assert_called_once_with() self.assertEqual( self._driver.ssh_connections, {self.server['instance_id']: 
(ssh_pool, ssh)} ) self.assertEqual(ssh_output, result) def test_ssh_exec_connection_recreation(self): ssh_output = 'fake_ssh_output' cmd = ['fake', 'command'] ssh = mock.Mock() ssh.get_transport = mock.Mock() ssh.get_transport().is_active = mock.Mock(side_effect=lambda: False) ssh_pool = mock.Mock() ssh_pool.create = mock.Mock(side_effect=lambda: ssh) ssh_pool.remove = mock.Mock() self.mock_object(processutils, 'ssh_execute', mock.Mock(return_value=ssh_output)) self._driver.ssh_connections = { self.server['instance_id']: (ssh_pool, ssh) } result = self._driver._ssh_exec(self.server, cmd) processutils.ssh_execute.assert_called_once_with( ssh, 'fake command', check_exit_code=True) ssh.get_transport().is_active.assert_called_once_with() ssh_pool.create.assert_called_once_with() ssh_pool.remove.assert_called_once_with(ssh) self.assertEqual( self._driver.ssh_connections, {self.server['instance_id']: (ssh_pool, ssh)} ) self.assertEqual(ssh_output, result) def test__ssh_exec_check_list_comprehensions_still_work(self): ssh_output = 'fake_ssh_output' cmd = ['fake', 'command spaced'] ssh = mock.Mock() ssh_pool = mock.Mock() ssh_pool.create = mock.Mock(side_effect=lambda: ssh) ssh_pool.remove = mock.Mock() self.mock_object(processutils, 'ssh_execute', mock.Mock(return_value=ssh_output)) self._driver.ssh_connections = { self.server['instance_id']: (ssh_pool, ssh) } self._driver._ssh_exec(self.server, cmd) processutils.ssh_execute.assert_called_once_with( ssh, 'fake "command spaced"', check_exit_code=True) def test_get_share_stats_refresh_false(self): self._driver._stats = {'fake_key': 'fake_value'} result = self._driver.get_share_stats(False) self.assertEqual(self._driver._stats, result) def test_get_share_stats_refresh_true(self): fake_stats = {'fake_key': 'fake_value'} self._driver._stats = fake_stats expected_keys = [ 'qos', 'driver_version', 'share_backend_name', 'free_capacity_gb', 'total_capacity_gb', 'driver_handles_share_servers', 'reserved_percentage', 'vendor_name', 'storage_protocol', ] result = self._driver.get_share_stats(True) self.assertNotEqual(fake_stats, result) for key in expected_keys: self.assertIn(key, result) self.assertTrue(result['driver_handles_share_servers']) self.assertEqual('Open Source', result['vendor_name']) def _setup_manage_mocks(self, get_share_type_extra_specs='False', is_device_mounted=True, server_details=None): CONF.set_default('driver_handles_share_servers', False) self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value=get_share_type_extra_specs)) self.mock_object(self._driver, '_is_device_mounted', mock.Mock(return_value=is_device_mounted)) self.mock_object(self._driver, 'service_instance_manager') server = {'backend_details': server_details} self.mock_object(self._driver.service_instance_manager, 'get_common_server', mock.Mock(return_value=server)) def test_manage_invalid_protocol(self): share = {'share_proto': 'fake_proto'} self._setup_manage_mocks() self.assertRaises(exception.InvalidShare, self._driver.manage_existing, share, {}) def test_manage_not_mounted_share(self): share = get_fake_manage_share() fake_path = '/foo/bar' self._setup_manage_mocks(is_device_mounted=False) self.mock_object( self._driver._helpers[share['share_proto']], 'get_share_path_by_export_location', mock.Mock(return_value=fake_path)) self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, share, {}) self.assertEqual( 1, self._driver.service_instance_manager.get_common_server.call_count) 
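        # The None values asserted below come from _setup_manage_mocks(),
        # which is called here without server_details, so the common server's
        # backend_details are None.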
self._driver._is_device_mounted.assert_called_once_with( fake_path, None) (self._driver._helpers[share['share_proto']]. get_share_path_by_export_location.assert_called_once_with( None, share['export_locations'][0]['path'])) def test_manage_share_not_attached_to_cinder_volume_invalid_size(self): share = get_fake_manage_share() server_details = {} fake_path = '/foo/bar' self._setup_manage_mocks(server_details=server_details) self.mock_object(self._driver, '_get_volume', mock.Mock(return_value=None)) error = exception.ManageInvalidShare(reason="fake") self.mock_object( self._driver, '_get_mounted_share_size', mock.Mock(side_effect=error)) self.mock_object( self._driver._helpers[share['share_proto']], 'get_share_path_by_export_location', mock.Mock(return_value=fake_path)) self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, share, {}) self._driver._get_mounted_share_size.assert_called_once_with( fake_path, server_details) (self._driver._helpers[share['share_proto']]. get_share_path_by_export_location.assert_called_once_with( server_details, share['export_locations'][0]['path'])) def test_manage_share_not_attached_to_cinder_volume(self): share = get_fake_manage_share() share_size = "fake" fake_path = '/foo/bar' fake_exports = ['foo', 'bar'] server_details = {} self._setup_manage_mocks(server_details=server_details) self.mock_object(self._driver, '_get_volume') self.mock_object(self._driver, '_get_mounted_share_size', mock.Mock(return_value=share_size)) self.mock_object( self._driver._helpers[share['share_proto']], 'get_share_path_by_export_location', mock.Mock(return_value=fake_path)) self.mock_object( self._driver._helpers[share['share_proto']], 'get_exports_for_share', mock.Mock(return_value=fake_exports)) result = self._driver.manage_existing(share, {}) self.assertEqual( {'size': share_size, 'export_locations': fake_exports}, result) (self._driver._helpers[share['share_proto']].get_exports_for_share. assert_called_once_with( server_details, share['export_locations'][0]['path'])) (self._driver._helpers[share['share_proto']]. 
get_share_path_by_export_location.assert_called_once_with( server_details, share['export_locations'][0]['path'])) self._driver._get_mounted_share_size.assert_called_once_with( fake_path, server_details) self.assertFalse(self._driver._get_volume.called) def test_manage_share_attached_to_cinder_volume_not_found(self): share = get_fake_manage_share() server_details = {} driver_options = {'volume_id': 'fake'} self._setup_manage_mocks(server_details=server_details) self.mock_object( self._driver.volume_api, 'get', mock.Mock(side_effect=exception.VolumeNotFound(volume_id="fake")) ) self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, share, driver_options) self._driver.volume_api.get.assert_called_once_with( mock.ANY, driver_options['volume_id']) def test_manage_share_attached_to_cinder_volume_not_mounted_to_srv(self): share = get_fake_manage_share() server_details = {'instance_id': 'fake'} driver_options = {'volume_id': 'fake'} volume = {'id': 'fake'} self._setup_manage_mocks(server_details=server_details) self.mock_object(self._driver.volume_api, 'get', mock.Mock(return_value=volume)) self.mock_object(self._driver.compute_api, 'instance_volumes_list', mock.Mock(return_value=[])) self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, share, driver_options) self._driver.volume_api.get.assert_called_once_with( mock.ANY, driver_options['volume_id']) self._driver.compute_api.instance_volumes_list.assert_called_once_with( mock.ANY, server_details['instance_id']) def test_manage_share_attached_to_cinder_volume(self): share = get_fake_manage_share() fake_size = 'foobar' fake_exports = ['foo', 'bar'] server_details = {'instance_id': 'fake'} driver_options = {'volume_id': 'fake'} volume = {'id': 'fake', 'name': 'fake_volume_1', 'size': fake_size} self._setup_manage_mocks(server_details=server_details) self.mock_object(self._driver.volume_api, 'get', mock.Mock(return_value=volume)) self._driver.volume_api.update = mock.Mock() fake_volume = mock.Mock() fake_volume.id = 'fake' self.mock_object(self._driver.compute_api, 'instance_volumes_list', mock.Mock(return_value=[fake_volume])) self.mock_object( self._driver._helpers[share['share_proto']], 'get_exports_for_share', mock.Mock(return_value=fake_exports)) result = self._driver.manage_existing(share, driver_options) self.assertEqual( {'size': fake_size, 'export_locations': fake_exports}, result) (self._driver._helpers[share['share_proto']].get_exports_for_share. 
assert_called_once_with( server_details, share['export_locations'][0]['path'])) expected_volume_update = { 'name': self._driver._get_volume_name(share['id']) } self._driver.volume_api.update.assert_called_once_with( mock.ANY, volume['id'], expected_volume_update) self.fake_private_storage.update.assert_called_once_with( share['id'], {'volume_id': volume['id']} ) def test_get_mounted_share_size(self): output = ("Filesystem blocks Used Available Capacity Mounted on\n" "/dev/fake 1G 1G 1G 4% /shares/share-fake") self.mock_object(self._driver, '_ssh_exec', mock.Mock(return_value=(output, ''))) actual_result = self._driver._get_mounted_share_size('/fake/path', {}) self.assertEqual(1, actual_result) @ddt.data("fake\nfake\n", "fake", "fake\n") def test_get_mounted_share_size_invalid_output(self, output): self.mock_object(self._driver, '_ssh_exec', mock.Mock(return_value=(output, ''))) self.assertRaises(exception.ManageInvalidShare, self._driver._get_mounted_share_size, '/fake/path', {}) def test_get_consumed_space(self): mount_path = "fake_path" server_details = {} index = 2 valid_result = 1 self.mock_object(self._driver, '_get_mount_stats_by_index', mock.Mock(return_value=valid_result * 1024)) actual_result = self._driver._get_consumed_space( mount_path, server_details) self.assertEqual(valid_result, actual_result) self._driver._get_mount_stats_by_index.assert_called_once_with( mount_path, server_details, index, block_size='M' ) def test_get_consumed_space_invalid(self): self.mock_object( self._driver, '_get_mount_stats_by_index', mock.Mock(side_effect=exception.ManilaException("fake")) ) self.assertRaises( exception.InvalidShare, self._driver._get_consumed_space, "fake", "fake" ) @ddt.data(100, 130, 123) def test_extend_share(self, volume_size): fake_volume = { "name": "fake", "size": volume_size, } fake_share = { 'id': 'fake', 'share_proto': 'NFS', 'name': 'test_share', } new_size = 123 srv_details = self.server['backend_details'] self.mock_object( self._driver.service_instance_manager, 'get_common_server', mock.Mock(return_value=self.server) ) self.mock_object(self._driver, '_unmount_device') self.mock_object(self._driver, '_detach_volume') self.mock_object(self._driver, '_extend_volume') self.mock_object(self._driver, '_attach_volume') self.mock_object(self._driver, '_mount_device') self.mock_object(self._driver, '_resize_filesystem') self.mock_object( self._driver, '_get_volume', mock.Mock(return_value=fake_volume) ) CONF.set_default('driver_handles_share_servers', False) self._driver.extend_share(fake_share, new_size) self.assertTrue( self._driver.service_instance_manager.get_common_server.called) self._driver._unmount_device.assert_called_once_with( fake_share, srv_details) self._driver._get_volume.assert_called_once_with( mock.ANY, fake_share['id']) if new_size > volume_size: self._driver._detach_volume.assert_called_once_with( mock.ANY, fake_share, srv_details) self._driver._extend_volume.assert_called_once_with( mock.ANY, fake_volume, new_size) self._driver._attach_volume.assert_called_once_with( mock.ANY, fake_share, srv_details['instance_id'], mock.ANY) else: self.assertFalse(self._driver._detach_volume.called) self.assertFalse(self._driver._extend_volume.called) self.assertFalse(self._driver._attach_volume.called) (self._helper_nfs.disable_access_for_maintenance. assert_called_once_with(srv_details, 'test_share')) (self._helper_nfs.restore_access_after_maintenance. 
assert_called_once_with(srv_details, 'test_share')) self.assertTrue(self._driver._resize_filesystem.called) def test_extend_volume(self): fake_volume = {'id': 'fake'} new_size = 123 self.mock_object(self._driver.volume_api, 'extend') self.mock_object(self._driver, '_wait_for_available_volume') self._driver._extend_volume(self._context, fake_volume, new_size) self._driver.volume_api.extend.assert_called_once_with( self._context, fake_volume['id'], new_size ) self._driver._wait_for_available_volume.assert_called_once_with( fake_volume, mock.ANY, msg_timeout=mock.ANY, msg_error=mock.ANY, expected_size=new_size ) def test_resize_filesystem(self): fake_server_details = {'fake': 'fake'} fake_volume = {'mountpoint': '/dev/fake'} self.mock_object(self._driver, '_ssh_exec') self._driver._resize_filesystem( fake_server_details, fake_volume, new_size=123) self._driver._ssh_exec.assert_any_call( fake_server_details, ['sudo', 'fsck', '-pf', '/dev/fake']) self._driver._ssh_exec.assert_any_call( fake_server_details, ['sudo', 'resize2fs', '/dev/fake', "%sG" % 123] ) self.assertEqual(2, self._driver._ssh_exec.call_count) @ddt.data( { 'source': processutils.ProcessExecutionError( stderr="resize2fs: New size smaller than minimum (123456)"), 'target': exception.Invalid }, { 'source': processutils.ProcessExecutionError(stderr="fake_error"), 'target': exception.ManilaException } ) @ddt.unpack def test_resize_filesystem_invalid_new_size(self, source, target): fake_server_details = {'fake': 'fake'} fake_volume = {'mountpoint': '/dev/fake'} ssh_mock = mock.Mock(side_effect=["fake", source]) self.mock_object(self._driver, '_ssh_exec', ssh_mock) self.assertRaises( target, self._driver._resize_filesystem, fake_server_details, fake_volume, new_size=123 ) def test_shrink_share_invalid_size(self): fake_share = {'id': 'fake', 'export_locations': [{'path': 'test'}]} new_size = 123 self.mock_object( self._driver.service_instance_manager, 'get_common_server', mock.Mock(return_value=self.server) ) self.mock_object(self._driver, '_get_helper') self.mock_object(self._driver, '_get_consumed_space', mock.Mock(return_value=200)) CONF.set_default('driver_handles_share_servers', False) self.assertRaises( exception.ShareShrinkingPossibleDataLoss, self._driver.shrink_share, fake_share, new_size ) self._driver._get_helper.assert_called_once_with(fake_share) self._driver._get_consumed_space.assert_called_once_with( mock.ANY, self.server['backend_details']) def _setup_shrink_mocks(self): share = {'id': 'fake', 'export_locations': [{'path': 'test'}], 'name': 'fake'} volume = {'id': 'fake'} new_size = 123 server_details = self.server['backend_details'] self.mock_object( self._driver.service_instance_manager, 'get_common_server', mock.Mock(return_value=self.server) ) helper = mock.Mock() self.mock_object(self._driver, '_get_helper', mock.Mock(return_value=helper)) self.mock_object(self._driver, '_get_consumed_space', mock.Mock(return_value=100)) self.mock_object(self._driver, '_get_volume', mock.Mock(return_value=volume)) self.mock_object(self._driver, '_unmount_device') self.mock_object(self._driver, '_mount_device') CONF.set_default('driver_handles_share_servers', False) return share, volume, new_size, server_details, helper @ddt.data({'source': exception.Invalid("fake"), 'target': exception.ShareShrinkingPossibleDataLoss}, {'source': exception.ManilaException("fake"), 'target': exception.Invalid}) @ddt.unpack def test_shrink_share_error_on_resize_fs(self, source, target): share, vol, size, server_details, _ = self._setup_shrink_mocks() 
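        # Per the @ddt.data pairs above, an Invalid raised by
        # _resize_filesystem is expected to surface as
        # ShareShrinkingPossibleDataLoss, and a plain ManilaException as
        # Invalid.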
resize_mock = mock.Mock(side_effect=source) self.mock_object(self._driver, '_resize_filesystem', resize_mock) self.assertRaises(target, self._driver.shrink_share, share, size) resize_mock.assert_called_once_with(server_details, vol, new_size=size) def test_shrink_share(self): share, vol, size, server_details, helper = self._setup_shrink_mocks() self.mock_object(self._driver, '_resize_filesystem') self._driver.shrink_share(share, size) self._driver._get_helper.assert_called_once_with(share) self._driver._get_consumed_space.assert_called_once_with( mock.ANY, server_details) self._driver._get_volume.assert_called_once_with(mock.ANY, share['id']) self._driver._unmount_device.assert_called_once_with(share, server_details) self._driver._resize_filesystem( server_details, vol, new_size=size) self._driver._mount_device(share, server_details, vol) self.assertTrue(helper.disable_access_for_maintenance.called) self.assertTrue(helper.restore_access_after_maintenance.called) @ddt.data({'share_servers': [], 'result': None}, {'share_servers': None, 'result': None}, {'share_servers': ['fake'], 'result': 'fake'}, {'share_servers': ['fake', 'test'], 'result': 'fake'}) @ddt.unpack def tests_choose_share_server_compatible_with_share(self, share_servers, result): fake_share = "fake" actual_result = self._driver.choose_share_server_compatible_with_share( self._context, share_servers, fake_share ) self.assertEqual(result, actual_result) def test_manage_snapshot_not_found(self): snapshot_instance = {'id': 'snap_instance_id', 'provider_location': 'vol_snap_id'} driver_options = {} self.mock_object( self._driver.volume_api, 'get_snapshot', mock.Mock(side_effect=exception.VolumeSnapshotNotFound( snapshot_id='vol_snap_id'))) self.assertRaises(exception.ManageInvalidShareSnapshot, self._driver.manage_existing_snapshot, snapshot_instance, driver_options) self._driver.volume_api.get_snapshot.assert_called_once_with( self._context, 'vol_snap_id') def test_manage_snapshot_valid(self): snapshot_instance = {'id': 'snap_instance_id', 'provider_location': 'vol_snap_id'} volume_snapshot = {'id': 'vol_snap_id', 'size': 1} self.mock_object(self._driver.volume_api, 'get_snapshot', mock.Mock(return_value=volume_snapshot)) ret_manage = self._driver.manage_existing_snapshot( snapshot_instance, {}) self.assertEqual({'provider_location': 'vol_snap_id', 'size': 1}, ret_manage) self._driver.volume_api.get_snapshot.assert_called_once_with( self._context, 'vol_snap_id') def test_unmanage_snapshot(self): snapshot_instance = {'id': 'snap_instance_id', 'provider_location': 'vol_snap_id'} self.mock_object(self._driver.private_storage, 'delete') self._driver.unmanage_snapshot(snapshot_instance) self._driver.private_storage.delete.assert_called_once_with( 'snap_instance_id') @generic.ensure_server def fake(driver_instance, context, share_server=None): return share_server @ddt.ddt class GenericDriverEnsureServerTestCase(test.TestCase): def setUp(self): super(GenericDriverEnsureServerTestCase, self).setUp() self._context = context.get_admin_context() self.server = {'id': 'fake_id', 'backend_details': {'foo': 'bar'}} self.dhss_false = type( 'Fake', (object,), {'driver_handles_share_servers': False}) self.dhss_true = type( 'Fake', (object,), {'driver_handles_share_servers': True}) def test_share_servers_are_not_handled_server_not_provided(self): self.dhss_false.service_instance_manager = mock.Mock() self.dhss_false.service_instance_manager.get_common_server = ( mock.Mock(return_value=self.server)) 
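        # 'fake' is the module-level function decorated with
        # @generic.ensure_server; with driver_handles_share_servers=False the
        # decorator is expected to fetch the common server itself and pass it
        # in as share_server.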
self.dhss_false.service_instance_manager.ensure_service_instance = ( mock.Mock(return_value=True)) actual = fake(self.dhss_false, self._context) self.assertEqual(self.server, actual) (self.dhss_false.service_instance_manager. get_common_server.assert_called_once_with()) (self.dhss_false.service_instance_manager.ensure_service_instance. assert_called_once_with( self._context, self.server['backend_details'])) @ddt.data({'id': 'without_details'}, {'id': 'with_details', 'backend_details': {'foo': 'bar'}}) def test_share_servers_are_not_handled_server_provided(self, server): self.assertRaises( exception.ManilaException, fake, self.dhss_false, self._context, share_server=server) def test_share_servers_are_handled_server_provided(self): self.dhss_true.service_instance_manager = mock.Mock() self.dhss_true.service_instance_manager.ensure_service_instance = ( mock.Mock(return_value=True)) actual = fake(self.dhss_true, self._context, share_server=self.server) self.assertEqual(self.server, actual) (self.dhss_true.service_instance_manager.ensure_service_instance. assert_called_once_with( self._context, self.server['backend_details'])) def test_share_servers_are_handled_invalid_server_provided(self): server = {'id': 'without_details'} self.assertRaises( exception.ManilaException, fake, self.dhss_true, self._context, share_server=server) def test_share_servers_are_handled_server_not_provided(self): self.assertRaises( exception.ManilaException, fake, self.dhss_true, self._context) manila-10.0.0/manila/tests/share/drivers/container/0000775000175000017500000000000013656750362022235 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/container/__init__.py0000664000175000017500000000000013656750227024334 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/container/test_storage_helper.py0000664000175000017500000001757713656750227026672 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
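# The tests in this module verify the exact shell command lines that
# LVMHelper assembles (lvcreate, mkfs.ext4, lvextend, e2fsck, resize2fs)
# by swapping the helper's ``_execute`` method for a recorder built with
# ``functools.partial`` and comparing the captured argument tuples with an
# expected list.  Below is a minimal, self-contained sketch of that
# recording pattern; the ``_example_*`` names are illustrative stand-ins and
# not part of manila itself.

import functools


def _example_provide_storage(execute, share_name, size_gb):
    # Toy equivalent of LVMHelper.provide_storage, used only to show how
    # the recorder below captures command vectors.
    execute('lvcreate', '-p', 'rw', '-L', '%sG' % size_gb, '-n', share_name,
            'manila_docker_volumes')
    execute('mkfs.ext4', '/dev/manila_docker_volumes/%s' % share_name)


def _example_recorder(*args, **kwargs):
    # Stores every positional-argument tuple it is called with.
    kwargs['calls'].append(args)
    return ''


_example_calls = []
_example_provide_storage(
    functools.partial(_example_recorder, calls=_example_calls),
    'fakeshareid', 1)
assert _example_calls[0][0] == 'lvcreate'
assert _example_calls[1][0] == 'mkfs.ext4'
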
"""Unit tests for the Storage helper module.""" import functools from unittest import mock import ddt from manila import exception from manila.share import configuration from manila.share.drivers.container import storage_helper from manila import test from manila.tests.share.drivers.container.fakes import fake_share @ddt.ddt class LVMHelperTestCase(test.TestCase): """Tests ContainerShareDriver""" def setUp(self): super(LVMHelperTestCase, self).setUp() self.share = fake_share() self.fake_conf = configuration.Configuration(None) self.LVMHelper = storage_helper.LVMHelper(configuration=self.fake_conf) def fake_exec_sync(self, *args, **kwargs): kwargs['execute_arguments'].append(args) try: ret_val = kwargs['ret_val'] except KeyError: ret_val = None return ret_val def test_lvmhelper_setup_explodes_in_gore_on_no_config_supplied(self): self.assertRaises(exception.ManilaException, storage_helper.LVMHelper, None) @ddt.data("62.50g 72.50g", " 72.50g 62.50g\n", " <62.50g <72.50g\n") def test_get_share_server_pools(self, ret_vgs): expected_result = [{'reserved_percentage': 0, 'pool_name': 'manila_docker_volumes', 'total_capacity_gb': 72.5, 'free_capacity_gb': 62.5}] self.mock_object(self.LVMHelper, "_execute", mock.Mock(return_value=(ret_vgs, 0))) result = self.LVMHelper.get_share_server_pools() self.assertEqual(expected_result, result) def test__get_lv_device(self): fake_share_name = 'fakeshareid' self.assertEqual("/dev/manila_docker_volumes/%s" % fake_share_name, self.LVMHelper._get_lv_device(fake_share_name)) def test__get_lv_folder(self): fake_share_name = 'fakeshareid' self.assertEqual("/tmp/shares/%s" % fake_share_name, self.LVMHelper._get_lv_folder(fake_share_name)) def test_provide_storage(self): actual_arguments = [] fake_share_name = 'fakeshareid' expected_arguments = [ ('lvcreate', '-p', 'rw', '-L', '1G', '-n', 'fakeshareid', 'manila_docker_volumes'), ('mkfs.ext4', '/dev/manila_docker_volumes/fakeshareid'), ] self.LVMHelper._execute = functools.partial( self.fake_exec_sync, execute_arguments=actual_arguments, ret_val='') self.LVMHelper.provide_storage(fake_share_name, 1) self.assertEqual(expected_arguments, actual_arguments) @ddt.data(None, exception.ProcessExecutionError) def test__try_to_unmount_device(self, side_effect): device = {} mock_warning = self.mock_object(storage_helper.LOG, 'warning') mock_execute = self.mock_object(self.LVMHelper, '_execute', mock.Mock(side_effect=side_effect)) self.LVMHelper._try_to_unmount_device(device) mock_execute.assert_called_once_with( "umount", device, run_as_root=True ) if side_effect is not None: mock_warning.assert_called_once() def test_remove_storage(self): fake_share_name = 'fakeshareid' fake_device = {} mock_get_lv_device = self.mock_object( self.LVMHelper, '_get_lv_device', mock.Mock(return_value=fake_device)) mock_try_to_umount = self.mock_object(self.LVMHelper, '_try_to_unmount_device') mock_execute = self.mock_object(self.LVMHelper, '_execute') self.LVMHelper.remove_storage(fake_share_name) mock_get_lv_device.assert_called_once_with( fake_share_name ) mock_try_to_umount.assert_called_once_with(fake_device) mock_execute.assert_called_once_with( 'lvremove', '-f', '--autobackup', 'n', fake_device, run_as_root=True ) def test_remove_storage_lvremove_failed(self): fake_share_name = 'fakeshareid' def fake_execute(*args, **kwargs): if 'lvremove' in args: raise exception.ProcessExecutionError() self.mock_object(storage_helper.LOG, "warning") self.mock_object(self.LVMHelper, "_execute", fake_execute) self.LVMHelper.remove_storage(fake_share_name) 
        self.assertTrue(storage_helper.LOG.warning.called)

    @ddt.data(None, exception.ProcessExecutionError)
    def test_rename_storage(self, side_effect):
        fake_old_share_name = 'fake_old_name'
        fake_new_share_name = 'fake_new_name'
        fake_new_device = "/dev/new_device"
        fake_old_device = "/dev/old_device"
        mock_get_lv_device = self.mock_object(
            self.LVMHelper, '_get_lv_device',
            mock.Mock(side_effect=[fake_old_device, fake_new_device]))
        mock_try_to_umount = self.mock_object(self.LVMHelper,
                                              '_try_to_unmount_device')
        mock_execute = self.mock_object(self.LVMHelper, '_execute',
                                        mock.Mock(side_effect=side_effect))

        if side_effect is None:
            self.LVMHelper.rename_storage(fake_old_share_name,
                                          fake_new_share_name)
        else:
            self.assertRaises(exception.ProcessExecutionError,
                              self.LVMHelper.rename_storage,
                              fake_old_share_name, fake_new_share_name)

        mock_try_to_umount.assert_called_once_with(fake_old_device)
        mock_execute.assert_called_once_with(
            "lvrename", "--autobackup", "n", fake_old_device, fake_new_device,
            run_as_root=True
        )
        mock_get_lv_device.assert_has_calls([
            mock.call(fake_old_share_name), mock.call(fake_new_share_name)
        ])

    def test_extend_share(self):
        actual_arguments = []
        expected_arguments = [
            ('lvextend', '-L', 'shareG', '-n',
             '/dev/manila_docker_volumes/fakeshareid'),
            ('e2fsck', '-f', '-y', '/dev/manila_docker_volumes/fakeshareid'),
            ('resize2fs', '/dev/manila_docker_volumes/fakeshareid'),
        ]
        fake_share_name = 'fakeshareid'
        self.LVMHelper._execute = functools.partial(
            self.fake_exec_sync, execute_arguments=actual_arguments,
            ret_val='')

        self.LVMHelper.extend_share(fake_share_name, 'share', 3)

        self.assertEqual(expected_arguments, actual_arguments)

    def test_get_size(self):
        share_name = 'fakeshareid'
        fake_old_device = {}
        mock_get_lv_device = self.mock_object(
            self.LVMHelper, '_get_lv_device',
            mock.Mock(return_value=fake_old_device))
        mock_execute = self.mock_object(self.LVMHelper, '_execute',
                                        mock.Mock(return_value=[1, "args"]))

        result = self.LVMHelper.get_size(share_name)

        mock_execute.assert_called_once_with(
            "lvs", "-o", "lv_size", "--noheadings", "--nosuffix",
            "--units", "g", fake_old_device, run_as_root=True
        )
        mock_get_lv_device.assert_called_once_with(share_name)
        self.assertEqual(result, 1)
manila-10.0.0/manila/tests/share/drivers/container/fakes.py0000664000175000017500000000670613656750227023705 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
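# The factories below wrap plain dicts in manila.tests.db.fakes.FakeModel so
# the container driver tests can use item access (share['share_proto']) and
# attribute access (share.share_id) interchangeably, and every factory
# accepts keyword overrides.  A hedged usage sketch (the values shown are
# just the defaults defined further down in this module):
#
#     from manila.tests.share.drivers.container import fakes
#
#     share = fakes.fake_share(share_proto='CIFS', size=5)
#     share['share_proto']   # -> 'CIFS' (overridden)
#     share.size             # -> 5
#     share.share_id         # -> 'fakeshareid' (default)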
"""Some useful fakes.""" from manila.tests.db import fakes as db_fakes FAKE_VSCTL_LIST_INTERFACES_X = ( 'fake stuff\n' 'foo not_a_veth something_fake bar\n' 'foo veth11b2c34 something_fake bar\n' 'foo veth25f6g7h manila-container="fake1" bar\n' 'foo veth3jd83j7 manila-container="my_container" bar\n' 'foo veth4i9j10k manila-container="fake2" bar\n' 'more fake stuff\n' ) FAKE_VSCTL_LIST_INTERFACES = ( 'fake stuff\n' 'foo not_a_veth something_fake bar\n' 'foo veth11b2c34 something_fake bar\n' 'foo veth25f6g7h manila-container="fake1" bar\n' 'foo veth3jd83j7 manila-container="manila_my_container" bar\n' 'foo veth4i9j10k manila-container="fake2" bar\n' 'more fake stuff\n' ) FAKE_VSCTL_LIST_INTERFACE_1 = ( 'fake stuff\n' 'foo veth11b2c34 something_fake bar\n' 'more fake stuff\n' ) FAKE_VSCTL_LIST_INTERFACE_2 = ( 'fake stuff\n' 'foo veth25f6g7h manila-container="fake1" bar\n' 'more fake stuff\n' ) FAKE_VSCTL_LIST_INTERFACE_3_X = ( 'fake stuff\n' 'foo veth3jd83j7 manila-container="my_container" bar\n' 'more fake stuff\n' ) FAKE_VSCTL_LIST_INTERFACE_3 = ( 'fake stuff\n' 'foo veth3jd83j7 manila-container="manila_my_container" bar\n' 'more fake stuff\n' ) FAKE_VSCTL_LIST_INTERFACE_4 = ( 'fake stuff\n' 'foo veth4i9j10k manila-container="fake2" bar\n' 'more fake stuff\n' ) def fake_share(**kwargs): share = { 'id': 'fakeid', 'share_id': 'fakeshareid', 'name': 'fakename', 'size': 1, 'share_proto': 'NFS', 'export_location': '127.0.0.1:/mnt/nfs/volume-00002', } share.update(kwargs) return db_fakes.FakeModel(share) def fake_access(**kwargs): access = { 'id': 'fakeaccid', 'access_type': 'ip', 'access_to': '10.0.0.2', 'access_level': 'rw', 'state': 'active', } access.update(kwargs) return db_fakes.FakeModel(access) def fake_network(**kwargs): allocations = db_fakes.FakeModel({'id': 'fake_allocation_id', 'ip_address': '127.0.0.0.1', 'mac_address': 'fe:16:3e:61:e0:58'}) network = { 'id': 'fake_network_id', 'server_id': 'fake_server_id', 'network_allocations': [allocations], 'neutron_net_id': 'fake_net', 'neutron_subnet_id': 'fake_subnet', } network.update(kwargs) return db_fakes.FakeModel(network) def fake_share_server(**kwargs): share_server = { 'id': 'fake' } share_server.update(kwargs) return db_fakes.FakeModel(share_server) def fake_identifier(): return '7cf7c200-d3af-4e05-b87e-9167c95dfcad' def fake_share_no_export_location(**kwargs): share = { 'share_id': 'fakeshareid', } share.update(kwargs) return db_fakes.FakeModel(share) manila-10.0.0/manila/tests/share/drivers/container/test_protocol_helper.py0000664000175000017500000003033613656750227027053 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Unit tests for the Protocol helper module.""" import functools from unittest import mock import ddt from manila.common import constants as const from manila import exception from manila.share.drivers.container import protocol_helper from manila import test from manila.tests.share.drivers.container.fakes import fake_share @ddt.ddt class DockerCIFSHelperTestCase(test.TestCase): """Tests ContainerShareDriver""" def setUp(self): super(DockerCIFSHelperTestCase, self).setUp() self._helper = mock.Mock() self.fake_conf = mock.Mock() self.fake_conf.container_cifs_guest_ok = "yes" self.DockerCIFSHelper = protocol_helper.DockerCIFSHelper( self._helper, share=fake_share(), config=self.fake_conf) def fake_exec_sync(self, *args, **kwargs): kwargs["execute_arguments"].append(args) try: ret_val = kwargs["ret_val"] except KeyError: ret_val = None return [ret_val] def test_create_share_guest_ok(self): expected_arguments = [ ("fakeserver", ["net", "conf", "addshare", "fakeshareid", "/shares/fakeshareid", "writeable=y", "guest_ok=y"]), ("fakeserver", ["net", "conf", "setparm", "fakeshareid", "browseable", "yes"]), ("fakeserver", ["net", "conf", "setparm", "fakeshareid", "hosts allow", "127.0.0.1"]), ("fakeserver", ["net", "conf", "setparm", "fakeshareid", "read only", "no"]), ("fakeserver", ["net", "conf", "setparm", "fakeshareid", "hosts deny", "0.0.0.0/0"]), ("fakeserver", ["net", "conf", "setparm", "fakeshareid", "create mask", "0755"])] actual_arguments = [] self._helper.execute = functools.partial( self.fake_exec_sync, execute_arguments=actual_arguments, ret_val=" fake 192.0.2.2/24 more fake \n" * 20) self.DockerCIFSHelper.share = fake_share() self.DockerCIFSHelper.create_share("fakeserver") self.assertEqual(expected_arguments.sort(), actual_arguments.sort()) def test_create_share_guest_not_ok(self): self.DockerCIFSHelper.conf = mock.Mock() self.DockerCIFSHelper.conf.container_cifs_guest_ok = False expected_arguments = [ ("fakeserver", ["net", "conf", "addshare", "fakeshareid", "/shares/fakeshareid", "writeable=y", "guest_ok=n"]), ("fakeserver", ["net", "conf", "setparm", "fakeshareid", "browseable", "yes"]), ("fakeserver", ["net", "conf", "setparm", "fakeshareid", "hosts allow", "192.0.2.2"]), ("fakeserver", ["net", "conf", "setparm", "fakeshareid", "read only", "no"]), ("fakeserver", ["net", "conf", "setparm", "fakeshareid", "hosts deny", "0.0.0.0/0"]), ("fakeserver", ["net", "conf", "setparm", "fakeshareid", "create mask", "0755"])] actual_arguments = [] self._helper.execute = functools.partial( self.fake_exec_sync, execute_arguments=actual_arguments, ret_val=" fake 192.0.2.2/24 more fake \n" * 20) self.DockerCIFSHelper.share = fake_share() self.DockerCIFSHelper.create_share("fakeserver") self.assertEqual(expected_arguments.sort(), actual_arguments.sort()) def test_delete_share(self): self.DockerCIFSHelper.share = fake_share() self.DockerCIFSHelper.delete_share("fakeserver", "fakeshareid") self.DockerCIFSHelper.container.execute.assert_called_with( "fakeserver", ["net", "conf", "delshare", "fakeshareid"], ignore_errors=False) def test__get_access_group_ro(self): result = self.DockerCIFSHelper._get_access_group(const.ACCESS_LEVEL_RO) self.assertEqual("read list", result) def test__get_access_group_rw(self): result = self.DockerCIFSHelper._get_access_group(const.ACCESS_LEVEL_RW) self.assertEqual("valid users", result) def test__get_access_group_other(self): self.assertRaises(exception.InvalidShareAccessLevel, self.DockerCIFSHelper._get_access_group, "fake_level") def test__get_existing_users(self): 
self.DockerCIFSHelper.container.execute = mock.Mock( return_value=("fake_user", "")) result = self.DockerCIFSHelper._get_existing_users("fake_server_id", "fake_share", "fake_access") self.assertEqual("fake_user", result) self.DockerCIFSHelper.container.execute.assert_called_once_with( "fake_server_id", ["net", "conf", "getparm", "fake_share", "fake_access"], ignore_errors=True) def test__set_users(self): self.DockerCIFSHelper.container.execute = mock.Mock() self.DockerCIFSHelper._set_users("fake_server_id", "fake_share", "fake_access", "fake_user") self.DockerCIFSHelper.container.execute.assert_called_once_with( "fake_server_id", ["net", "conf", "setparm", "fake_share", "fake_access", "fake_user"]) def test__allow_access_ok(self): self.DockerCIFSHelper._get_access_group = mock.Mock( return_value="valid users") self.DockerCIFSHelper._get_existing_users = mock.Mock( return_value="fake_user") self.DockerCIFSHelper._set_users = mock.Mock() self.DockerCIFSHelper._allow_access("fake_share", "fake_server_id", "fake_user2", "rw") self.DockerCIFSHelper._get_access_group.assert_called_once_with("rw") self.DockerCIFSHelper._get_existing_users.assert_called_once_with( "fake_server_id", "fake_share", "valid users") self.DockerCIFSHelper._set_users.assert_called_once_with( "fake_server_id", "fake_share", "valid users", "fake_user fake_user2") def test__allow_access_not_ok(self): self.DockerCIFSHelper._get_access_group = mock.Mock( return_value="valid users") self.DockerCIFSHelper._get_existing_users = mock.Mock() self.DockerCIFSHelper._get_existing_users.side_effect = TypeError self.DockerCIFSHelper._set_users = mock.Mock() self.DockerCIFSHelper._allow_access("fake_share", "fake_server_id", "fake_user2", "rw") self.DockerCIFSHelper._get_access_group.assert_called_once_with("rw") self.DockerCIFSHelper._get_existing_users.assert_called_once_with( "fake_server_id", "fake_share", "valid users") self.DockerCIFSHelper._set_users.assert_called_once_with( "fake_server_id", "fake_share", "valid users", "fake_user2") def test__deny_access_ok(self): self.DockerCIFSHelper._get_access_group = mock.Mock( return_value="valid users") self.DockerCIFSHelper._get_existing_users = mock.Mock( return_value="fake_user fake_user2") self.DockerCIFSHelper._set_users = mock.Mock() self.DockerCIFSHelper._deny_access("fake_share", "fake_server_id", "fake_user2", "rw") self.DockerCIFSHelper._get_access_group.assert_called_once_with("rw") self.DockerCIFSHelper._get_existing_users.assert_called_once_with( "fake_server_id", "fake_share", "valid users") self.DockerCIFSHelper._set_users.assert_called_once_with( "fake_server_id", "fake_share", "valid users", "fake_user") def test__deny_access_ok_so_many_users(self): self.DockerCIFSHelper._get_access_group = mock.Mock( return_value="valid users") self.DockerCIFSHelper._get_existing_users = mock.Mock( return_value="joost jaap huub dirk") self.DockerCIFSHelper._set_users = mock.Mock() # Sorry, Jaap. 
self.DockerCIFSHelper._deny_access("fake_share", "fake_server_id", "jaap", "rw") self.DockerCIFSHelper._get_access_group.assert_called_once_with("rw") self.DockerCIFSHelper._get_existing_users.assert_called_once_with( "fake_server_id", "fake_share", "valid users") self.DockerCIFSHelper._set_users.assert_called_once_with( "fake_server_id", "fake_share", "valid users", "dirk huub joost") def test__deny_access_not_ok(self): self.DockerCIFSHelper._get_access_group = mock.Mock( return_value="valid users") self.DockerCIFSHelper._get_existing_users = mock.Mock() self.DockerCIFSHelper._get_existing_users.side_effect = TypeError self.DockerCIFSHelper._set_users = mock.Mock() self.mock_object(protocol_helper.LOG, "warning") self.DockerCIFSHelper._deny_access("fake_share", "fake_server_id", "fake_user2", "rw") self.DockerCIFSHelper._get_access_group.assert_called_once_with("rw") self.DockerCIFSHelper._get_existing_users.assert_called_once_with( "fake_server_id", "fake_share", "valid users") self.assertFalse(self.DockerCIFSHelper._set_users.called) self.assertTrue(protocol_helper.LOG.warning.called) def test_update_access_access_rules_wrong_type(self): allow_rules = [{ "access_to": "192.0.2.2", "access_level": "ro", "access_type": "fake" }] self.mock_object(self.DockerCIFSHelper, "_allow_access") self.assertRaises(exception.InvalidShareAccess, self.DockerCIFSHelper.update_access, "fakeserver", "fakeshareid", allow_rules, [], []) def test_update_access_access_rules_ok(self): access_rules = [{ "access_to": "fakeuser", "access_level": "ro", "access_type": "user" }] self.mock_object(self.DockerCIFSHelper, "_allow_access") self.DockerCIFSHelper.container.execute = mock.Mock() self.DockerCIFSHelper.update_access("fakeserver", "fakeshareid", access_rules, [], []) self.DockerCIFSHelper._allow_access.assert_called_once_with( "fakeshareid", "fakeserver", "fakeuser", "ro") self.DockerCIFSHelper.container.execute.assert_called_once_with( "fakeserver", ["net", "conf", "setparm", "fakeshareid", "valid users", ""]) def test_update_access_add_rules(self): add_rules = [{ "access_to": "fakeuser", "access_level": "ro", "access_type": "user" }] self.mock_object(self.DockerCIFSHelper, "_allow_access") self.DockerCIFSHelper.update_access("fakeserver", "fakeshareid", [], add_rules, []) self.DockerCIFSHelper._allow_access.assert_called_once_with( "fakeshareid", "fakeserver", "fakeuser", "ro") def test_update_access_delete_rules(self): delete_rules = [{ "access_to": "fakeuser", "access_level": "ro", "access_type": "user" }] self.mock_object(self.DockerCIFSHelper, "_deny_access") self.DockerCIFSHelper.update_access("fakeserver", "fakeshareid", [], [], delete_rules) self.DockerCIFSHelper._deny_access.assert_called_once_with( "fakeshareid", "fakeserver", "fakeuser", "ro") manila-10.0.0/manila/tests/share/drivers/container/test_driver.py0000664000175000017500000005113413656750227025145 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
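# These driver tests lean heavily on the ``ddt`` package: ``@ddt.ddt`` marks
# the test class, ``@ddt.data`` supplies one input set per generated test,
# and ``@ddt.unpack`` spreads dict or tuple data into the test method's
# arguments.  A self-contained sketch of that pattern follows; the example
# class is illustrative and not part of manila.

import unittest

import ddt


@ddt.ddt
class _ExampleDdtUsageTest(unittest.TestCase):

    @ddt.data({'left': 1, 'right': 2, 'expected': 3},
              {'left': 2, 'right': 2, 'expected': 4})
    @ddt.unpack
    def test_addition(self, left, right, expected):
        # Each dict above becomes its own generated test method.
        self.assertEqual(expected, left + right)
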
"""Unit tests for the Container driver module.""" import functools from unittest import mock import ddt from oslo_config import cfg from manila.common import constants as const from manila import context from manila import exception from manila.share import configuration from manila.share.drivers.container import driver from manila.share.drivers.container import protocol_helper from manila import test from manila.tests import fake_utils from manila.tests.share.drivers.container import fakes as cont_fakes CONF = cfg.CONF CONF.import_opt('lvm_share_export_ips', 'manila.share.drivers.lvm') @ddt.ddt class ContainerShareDriverTestCase(test.TestCase): """Tests ContainerShareDriver""" def setUp(self): super(ContainerShareDriverTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) self._context = context.get_admin_context() self._db = mock.Mock() self.fake_conf = configuration.Configuration(None) CONF.set_default('driver_handles_share_servers', True) self._driver = driver.ContainerShareDriver( configuration=self.fake_conf) self.share = cont_fakes.fake_share() self.access = cont_fakes.fake_access() self.server = { 'public_address': self.fake_conf.lvm_share_export_ips, 'instance_id': 'LVM', } # Used only to test compatibility with share manager self.share_server = "fake_share_server" def fake_exec_sync(self, *args, **kwargs): kwargs['execute_arguments'].append(args) try: ret_val = kwargs['ret_val'] except KeyError: ret_val = None return ret_val def test__get_helper_ok(self): share = cont_fakes.fake_share(share_proto='CIFS') expected = protocol_helper.DockerCIFSHelper(None) actual = self._driver._get_helper(share) self.assertEqual(type(expected), type(actual)) def test__get_helper_existing_ok(self): share = cont_fakes.fake_share(share_proto='CIFS') expected = protocol_helper.DockerCIFSHelper self._driver._helpers = {'CIFS': expected} actual = self._driver._get_helper(share) self.assertEqual(expected, type(actual)) def test__get_helper_not_ok(self): share = cont_fakes.fake_share() self.assertRaises(exception.InvalidShare, self._driver._get_helper, share) def test_update_share_stats(self): self.mock_object(self._driver.storage, 'get_share_server_pools', mock.Mock(return_value='test-pool')) self._driver._update_share_stats() self.assertEqual('Docker', self._driver._stats['share_backend_name']) self.assertEqual('CIFS', self._driver._stats['storage_protocol']) self.assertEqual(0, self._driver._stats['reserved_percentage']) self.assertIsNone(self._driver._stats['consistency_group_support']) self.assertEqual(False, self._driver._stats['snapshot_support']) self.assertEqual('ContainerShareDriver', self._driver._stats['driver_name']) self.assertEqual('test-pool', self._driver._stats['pools']) self.assertTrue(self._driver._stats['ipv4_support']) self.assertFalse(self._driver._stats['ipv6_support']) def test_create_share(self): share_server = {'id': 'fake'} fake_container_name = 'manila_fake_container' mock_provide_storage = self.mock_object(self._driver.storage, 'provide_storage') mock_get_container_name = self.mock_object( self._driver, '_get_container_name', mock.Mock(return_value=fake_container_name)) mock_create_and_mount = self.mock_object( self._driver, '_create_export_and_mount_storage', mock.Mock(return_value='export_location')) self.assertEqual('export_location', self._driver.create_share(self._context, self.share, share_server)) mock_provide_storage.assert_called_once_with( self.share.share_id, self.share.size ) mock_create_and_mount.assert_called_once_with( self.share, fake_container_name, 
self.share.share_id ) mock_get_container_name.assert_called_once_with( share_server['id'] ) def test__create_export_and_mount_storage(self): helper = mock.Mock() server_id = 'fake_id' share_name = 'fake_name' mock_create_share = self.mock_object( helper, 'create_share', mock.Mock(return_value='export_location')) mock__get_helper = self.mock_object( self._driver, "_get_helper", mock.Mock(return_value=helper)) self.mock_object(self._driver.storage, "_get_lv_device", mock.Mock(return_value={})) mock_execute = self.mock_object(self._driver.container, 'execute') self.assertEqual('export_location', self._driver._create_export_and_mount_storage( self.share, server_id, share_name)) mock_create_share.assert_called_once_with(server_id) mock__get_helper.assert_called_once_with(self.share) mock_execute.assert_has_calls([ mock.call(server_id, ["mkdir", "-m", "750", "/shares/%s" % share_name]), mock.call(server_id, ["mount", {}, "/shares/%s" % share_name]) ]) def test__delete_export_and_umount_storage(self): helper = mock.Mock() server_id = 'fake_id' share_name = 'fake_name' mock__get_helper = self.mock_object( self._driver, "_get_helper", mock.Mock(return_value=helper)) mock_delete_share = self.mock_object(helper, 'delete_share') mock_execute = self.mock_object(self._driver.container, 'execute') self._driver._delete_export_and_umount_storage( self.share, server_id, share_name) mock__get_helper.assert_called_once_with(self.share) mock_delete_share.assert_called_once_with( server_id, share_name, ignore_errors=False) mock_execute.assert_has_calls([ mock.call(server_id, ["umount", "/shares/%s" % share_name], ignore_errors=False), mock.call(server_id, ["rm", "-fR", "/shares/%s" % share_name], ignore_errors=True)] ) def test_delete_share(self): fake_server_id = "manila_container_name" fake_share_name = "fake_share_name" fake_share_server = {'id': 'fake'} mock_get_container_name = self.mock_object( self._driver, '_get_container_name', mock.Mock(return_value=fake_server_id)) mock_get_share_name = self.mock_object( self._driver, '_get_share_name', mock.Mock(return_value=fake_share_name)) self.mock_object(self._driver.storage, 'remove_storage') mock_delete_and_umount = self.mock_object( self._driver, '_delete_export_and_umount_storage') self._driver.delete_share(self._context, self.share, fake_share_server) mock_get_container_name.assert_called_once_with( fake_share_server['id'] ) mock_get_share_name.assert_called_with( self.share ) mock_delete_and_umount.assert_called_once_with( self.share, fake_server_id, fake_share_name, ignore_errors=True ) @ddt.data(True, False) def test__get_share_name(self, has_export_location): if not has_export_location: fake_share = cont_fakes.fake_share_no_export_location() expected_result = fake_share.share_id else: fake_share = cont_fakes.fake_share() expected_result = fake_share['export_location'].split('/')[-1] result = self._driver._get_share_name(fake_share) self.assertEqual(expected_result, result) def test_extend_share(self): fake_new_size = 2 fake_share_server = {'id': 'fake-server'} share = cont_fakes.fake_share() share_name = self._driver._get_share_name(share) actual_arguments = [] expected_arguments = [ ('manila_fake_server', ['umount', '/shares/%s' % share_name]), ('manila_fake_server', ['mount', '/dev/manila_docker_volumes/%s' % share_name, '/shares/%s' % share_name]) ] mock_extend_share = self.mock_object(self._driver.storage, "extend_share") self._driver.container.execute = functools.partial( self.fake_exec_sync, execute_arguments=actual_arguments, ret_val='') 
self._driver.extend_share(share, fake_new_size, fake_share_server) self.assertEqual(expected_arguments, actual_arguments) mock_extend_share.assert_called_once_with(share_name, fake_new_size, fake_share_server) def test_ensure_share(self): # Does effectively nothing by design. self.assertEqual(1, 1) def test_update_access_access_rules_ok(self): helper = mock.Mock() fake_share_name = self._driver._get_share_name(self.share) self.mock_object(self._driver, "_get_helper", mock.Mock(return_value=helper)) self._driver.update_access(self._context, self.share, [{'access_level': const.ACCESS_LEVEL_RW}], [], [], {"id": "fake"}) helper.update_access.assert_called_with('manila_fake', fake_share_name, [{'access_level': 'rw'}], [], []) def test_get_network_allocation_numer(self): # Does effectively nothing by design. self.assertEqual(1, self._driver.get_network_allocations_number()) def test__get_container_name(self): self.assertEqual("manila_fake_server", self._driver._get_container_name("fake-server")) def test_do_setup(self): # Does effectively nothing by design. self.assertEqual(1, 1) def test_check_for_setup_error_host_not_ok_class_ok(self): setattr(self._driver.configuration.local_conf, 'neutron_host_id', None) self.assertRaises(exception.ManilaException, self._driver.check_for_setup_error) def test_check_for_setup_error_host_not_ok_class_some_other(self): setattr(self._driver.configuration.local_conf, 'neutron_host_id', None) setattr(self._driver.configuration.local_conf, 'network_api_class', 'manila.share.drivers.container.driver.ContainerShareDriver') self.mock_object(driver.LOG, "warning") self._driver.check_for_setup_error() setattr(self._driver.configuration.local_conf, 'network_api_class', 'manila.network.neutron.neutron_network_plugin.' 'NeutronNetworkPlugin') self.assertTrue(driver.LOG.warning.called) def test__connect_to_network(self): network_info = cont_fakes.fake_network() helper = mock.Mock() self.mock_object(self._driver, "_execute", mock.Mock(return_value=helper)) self.mock_object(self._driver.container, "execute") self._driver._connect_to_network("fake-server", network_info, "fake-veth") @ddt.data({'veth': "fake_veth", 'exception': None}, {'veth': "fake_veth", 'exception': exception.ProcessExecutionError('fake')}, {'veth': None, 'exception': None}) @ddt.unpack def test__teardown_server(self, veth, exception): fake_server_details = {"id": "b5afb5c1-6011-43c4-8a37-29820e6951a7"} container_name = self._driver._get_container_name( fake_server_details['id']) mock_stop_container = self.mock_object( self._driver.container, "stop_container") mock_find_container = self.mock_object( self._driver.container, "find_container_veth", mock.Mock(return_value=veth)) mock_execute = self.mock_object(self._driver, "_execute", mock.Mock(side_effect=exception)) self._driver._teardown_server( server_details=fake_server_details) mock_stop_container.assert_called_once_with( container_name ) mock_find_container.assert_called_once_with( container_name ) if exception is None and veth is not None: mock_execute.assert_called_once_with( "ovs-vsctl", "--", "del-port", self._driver.configuration.container_ovs_bridge_name, veth, run_as_root=True) def test__get_veth_state(self): retval = ('veth0000000\n', '') self.mock_object(self._driver, "_execute", mock.Mock(return_value=retval)) result = self._driver._get_veth_state() self.assertEqual(['veth0000000'], result) def test__get_corresponding_veth_ok(self): before = ['veth0000000'] after = ['veth0000000', 'veth0000001'] result = 
self._driver._get_corresponding_veth(before, after) self.assertEqual('veth0000001', result) def test__get_corresponding_veth_raises(self): before = ['veth0000000'] after = ['veth0000000', 'veth0000001', 'veth0000002'] self.assertRaises(exception.ManilaException, self._driver._get_corresponding_veth, before, after) def test__setup_server_container_fails(self): network_info = cont_fakes.fake_network() self.mock_object(self._driver.container, 'start_container') self._driver.container.start_container.side_effect = KeyError() self.assertRaises(exception.ManilaException, self._driver._setup_server, network_info) def test__setup_server_ok(self): network_info = cont_fakes.fake_network() server_id = self._driver._get_container_name(network_info["server_id"]) self.mock_object(self._driver.container, 'start_container') self.mock_object(self._driver, '_get_veth_state') self.mock_object(self._driver, '_get_corresponding_veth', mock.Mock(return_value='veth0')) self.mock_object(self._driver, '_connect_to_network') self.assertEqual(network_info['server_id'], self._driver._setup_server(network_info)['id']) self._driver.container.start_container.assert_called_once_with( server_id) self._driver._connect_to_network.assert_called_once_with(server_id, network_info, 'veth0') def test_manage_existing(self): fake_container_name = "manila_fake_container" fake_export_location = 'export_location' expected_result = { 'size': 1, 'export_locations': [fake_export_location] } fake_share_server = cont_fakes.fake_share() fake_share_name = self._driver._get_share_name(self.share) mock_get_container_name = self.mock_object( self._driver, '_get_container_name', mock.Mock(return_value=fake_container_name)) mock_get_share_name = self.mock_object( self._driver, '_get_share_name', mock.Mock(return_value=fake_share_name)) mock_rename_storage = self.mock_object( self._driver.storage, 'rename_storage') mock_get_size = self.mock_object( self._driver.storage, 'get_size', mock.Mock(return_value=1)) mock_delete_and_umount = self.mock_object( self._driver, '_delete_export_and_umount_storage') mock_create_and_mount = self.mock_object( self._driver, '_create_export_and_mount_storage', mock.Mock(return_value=fake_export_location) ) result = self._driver.manage_existing_with_server( self.share, {}, fake_share_server) mock_rename_storage.assert_called_once_with( fake_share_name, self.share.share_id ) mock_get_size.assert_called_once_with( fake_share_name ) mock_delete_and_umount.assert_called_once_with( self.share, fake_container_name, fake_share_name ) mock_create_and_mount.assert_called_once_with( self.share, fake_container_name, self.share.share_id ) mock_get_container_name.assert_called_once_with( fake_share_server['id'] ) mock_get_share_name.assert_called_with( self.share ) self.assertEqual(expected_result, result) def test_manage_existing_no_share_server(self): self.assertRaises(exception.ShareBackendException, self._driver.manage_existing_with_server, self.share, {}) def test_unmanage(self): self.assertIsNone(self._driver.unmanage_with_server(self.share)) def test_get_share_server_network_info(self): fake_share_server = cont_fakes.fake_share_server() fake_id = cont_fakes.fake_identifier() expected_result = ['veth11b2c34'] interfaces = [cont_fakes.FAKE_VSCTL_LIST_INTERFACE_1, cont_fakes.FAKE_VSCTL_LIST_INTERFACE_2, cont_fakes.FAKE_VSCTL_LIST_INTERFACE_4, cont_fakes.FAKE_VSCTL_LIST_INTERFACE_3] self.mock_object(self._driver.container, 'execute', mock.Mock(return_value=interfaces)) result = 
self._driver.get_share_server_network_info(self._context, fake_share_server, fake_id, {}) self.assertEqual(expected_result, result) def test_manage_server(self): fake_id = cont_fakes.fake_identifier() fake_share_server = cont_fakes.fake_share_server() fake_container_name = "manila_fake_container" fake_container_old_name = "fake_old_name" mock_get_container_name = self.mock_object( self._driver, '_get_container_name', mock.Mock(return_value=fake_container_name)) mock_get_correct_container_old_name = self.mock_object( self._driver, '_get_correct_container_old_name', mock.Mock(return_value=fake_container_old_name) ) mock_rename_container = self.mock_object(self._driver.container, 'rename_container') expected_result = {'id': fake_share_server['id']} new_identifier, new_backend_details = self._driver.manage_server( self._context, fake_share_server, fake_id, {}) self.assertEqual(expected_result, new_backend_details) self.assertEqual(fake_container_name, new_identifier) mock_rename_container.assert_called_once_with( fake_container_old_name, fake_container_name) mock_get_container_name.assert_called_with( fake_share_server['id'] ) mock_get_correct_container_old_name.assert_called_once_with( fake_id ) @ddt.data(True, False) def test__get_correct_container_old_name(self, container_exists): expected_name = 'fake-name' fake_name = 'fake-name' mock_container_exists = self.mock_object( self._driver.container, 'container_exists', mock.Mock(return_value=container_exists)) if not container_exists: expected_name = 'manila_fake_name' result = self._driver._get_correct_container_old_name(fake_name) self.assertEqual(expected_name, result) mock_container_exists.assert_called_once_with( fake_name ) manila-10.0.0/manila/tests/share/drivers/container/test_container_helper.py0000664000175000017500000002616113656750227027175 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
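# DockerExecHelper does not use a Docker API binding; it assembles plain
# ``docker`` CLI argument vectors and hands them to a root-capable executor,
# which is why the tests below assert on exact command lists.  A minimal
# sketch of that command-assembly style; ``_example_docker_exec_cmd`` is
# illustrative only (the real helper raises a manila exception rather than
# ValueError on bad input).


def _example_docker_exec_cmd(container_name, command):
    # Shape asserted in test_execute: ['docker', 'exec', '-i', <name>, <cmd>]
    if not container_name or not isinstance(command, list):
        raise ValueError('need a container name and a command list')
    return ['docker', 'exec', '-i', container_name] + command


assert _example_docker_exec_cmd('fake_container', ['fake_script']) == [
    'docker', 'exec', '-i', 'fake_container', 'fake_script']
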
"""Unit tests for the Container helper module.""" from unittest import mock import uuid import ddt from manila import exception from manila.share import configuration from manila.share.drivers.container import container_helper from manila import test from manila.tests.share.drivers.container import fakes @ddt.ddt class DockerExecHelperTestCase(test.TestCase): """Tests DockerExecHelper""" def setUp(self): super(DockerExecHelperTestCase, self).setUp() self.fake_conf = configuration.Configuration(None) self.fake_conf.container_image_name = "fake_image" self.DockerExecHelper = container_helper.DockerExecHelper( configuration=self.fake_conf) def test_start_container(self): self.mock_object(self.DockerExecHelper, "_inner_execute", mock.Mock(return_value=['fake_output', ''])) uuid.uuid1 = mock.Mock(return_value='') expected = ['docker', 'run', '-d', '-i', '-t', '--privileged', '-v', '/dev:/dev', '--name=manila_cifs_docker_container', '-v', '/tmp/shares:/shares', 'fake_image'] self.DockerExecHelper.start_container() self.DockerExecHelper._inner_execute.assert_called_once_with(expected) def test_start_container_impossible_failure(self): self.mock_object(self.DockerExecHelper, "_inner_execute", mock.Mock(side_effect=OSError())) self.assertRaises(exception.ShareBackendException, self.DockerExecHelper.start_container) def test_stop_container(self): self.mock_object(self.DockerExecHelper, "_inner_execute", mock.Mock(return_value=['fake_output', ''])) expected = ['docker', 'stop', 'manila-fake-conainer'] self.DockerExecHelper.stop_container("manila-fake-conainer") self.DockerExecHelper._inner_execute.assert_called_once_with(expected) def test_stop_container_oh_noes(self): self.mock_object(self.DockerExecHelper, "_inner_execute", mock.Mock(side_effect=OSError)) self.assertRaises(exception.ShareBackendException, self.DockerExecHelper.stop_container, "manila-fake-container") def test_execute(self): self.mock_object(self.DockerExecHelper, "_inner_execute", mock.Mock(return_value='fake_output')) expected = ['docker', 'exec', '-i', 'fake_container', 'fake_script'] self.DockerExecHelper.execute("fake_container", ["fake_script"]) self.DockerExecHelper._inner_execute.assert_called_once_with( expected, ignore_errors=False) def test_execute_name_not_there(self): self.assertRaises(exception.ManilaException, self.DockerExecHelper.execute, None, ['do', 'stuff']) def test_execute_command_not_there(self): self.assertRaises(exception.ManilaException, self.DockerExecHelper.execute, 'fake-name', None) def test_execute_bad_command_format(self): self.assertRaises(exception.ManilaException, self.DockerExecHelper.execute, 'fake-name', 'do stuff') def test__inner_execute_ok(self): self.DockerExecHelper._execute = mock.Mock(return_value='fake') result = self.DockerExecHelper._inner_execute("fake_command") self.assertEqual(result, 'fake') def test__inner_execute_not_ok(self): self.DockerExecHelper._execute = mock.Mock(side_effect=[OSError()]) self.assertRaises(OSError, self.DockerExecHelper._inner_execute, "fake_command") def test__inner_execute_not_ok_ignore_errors(self): self.DockerExecHelper._execute = mock.Mock(side_effect=OSError()) result = self.DockerExecHelper._inner_execute("fake_command", ignore_errors=True) self.assertIsNone(result) @ddt.data(('inet', "192.168.0.254", ["5: br0 inet 192.168.0.254/24 brd 192.168.0.255 " "scope global br0 valid_lft forever preferred_lft forever"]), ("inet6", "2001:470:8:c82:6600:6aff:fe84:8dda", ["5: br0 inet6 2001:470:8:c82:6600:6aff:fe84:8dda/64 " "scope global valid_lft forever 
preferred_lft forever"]), ) @ddt.unpack def test_fetch_container_address(self, address_family, expected_address, return_value): fake_name = "fakeserver" mock_execute = self.DockerExecHelper.execute = mock.Mock( return_value=return_value) address = self.DockerExecHelper.fetch_container_address( fake_name, address_family) self.assertEqual(expected_address, address) mock_execute.assert_called_once_with( fake_name, ["ip", "-oneline", "-family", address_family, "address", "show", "scope", "global", "dev", "eth0"] ) def test_rename_container(self): fake_old_name = "old_name" fake_new_name = "new_name" fake_veth_name = "veth_fake" self.DockerExecHelper.find_container_veth = mock.Mock( return_value=fake_veth_name) mock__inner_execute = self.DockerExecHelper._inner_execute = mock.Mock( return_value=['fake', '']) self.DockerExecHelper.rename_container(fake_old_name, fake_new_name) self.DockerExecHelper.find_container_veth.assert_called_once_with( fake_old_name ) mock__inner_execute.assert_has_calls([ mock.call(["docker", "rename", fake_old_name, fake_new_name]), mock.call(["ovs-vsctl", "set", "interface", fake_veth_name, "external-ids:manila-container=%s" % fake_new_name]) ]) def test_rename_container_exception_veth(self): self.DockerExecHelper.find_container_veth = mock.Mock( return_value=None) self.assertRaises(exception.ManilaException, self.DockerExecHelper.rename_container, "old_name", "new_name") @ddt.data([['fake', ''], OSError, ['fake', '']], [['fake', ''], OSError, OSError], [OSError]) def test_rename_container_exception_cmds(self, side_effect): fake_old_name = "old_name" fake_new_name = "new_name" fake_veth_name = "veth_fake" self.DockerExecHelper.find_container_veth = mock.Mock( return_value=fake_veth_name) mock__inner_execute = self.DockerExecHelper._inner_execute = mock.Mock( side_effect=side_effect) self.assertRaises(exception.ShareBackendException, self.DockerExecHelper.rename_container, fake_old_name, fake_new_name) if len(side_effect) > 1: mock__inner_execute.assert_has_calls([ mock.call(["docker", "rename", fake_old_name, fake_new_name]), mock.call(["ovs-vsctl", "set", "interface", fake_veth_name, "external-ids:manila-container=%s" % fake_new_name]) ]) else: mock__inner_execute.assert_has_calls([ mock.call(["docker", "rename", fake_old_name, fake_new_name]), ]) @ddt.data('my_container', 'manila_my_container') def test_find_container_veth(self, name): interfaces = [fakes.FAKE_VSCTL_LIST_INTERFACE_1, fakes.FAKE_VSCTL_LIST_INTERFACE_2, fakes.FAKE_VSCTL_LIST_INTERFACE_4] if 'manila_' in name: list_interfaces = [fakes.FAKE_VSCTL_LIST_INTERFACES] interfaces.append(fakes.FAKE_VSCTL_LIST_INTERFACE_3) else: list_interfaces = [fakes.FAKE_VSCTL_LIST_INTERFACES_X] interfaces.append(fakes.FAKE_VSCTL_LIST_INTERFACE_3_X) def get_interface_data_according_to_veth(*args, **kwargs): if len(args) == 4: for interface in interfaces: if args[3] in interface: return [interface] else: return list_interfaces self.DockerExecHelper._execute = mock.Mock( side_effect=get_interface_data_according_to_veth) result = self.DockerExecHelper.find_container_veth(name) self.assertEqual("veth3jd83j7", result) @ddt.data(True, False) def test_find_container_veth_not_found(self, remove_veth): if remove_veth: list_executes = [[fakes.FAKE_VSCTL_LIST_INTERFACES], [fakes.FAKE_VSCTL_LIST_INTERFACE_1], OSError, [fakes.FAKE_VSCTL_LIST_INTERFACE_3], [fakes.FAKE_VSCTL_LIST_INTERFACE_4]] else: list_executes = [[fakes.FAKE_VSCTL_LIST_INTERFACES], [fakes.FAKE_VSCTL_LIST_INTERFACE_1], [fakes.FAKE_VSCTL_LIST_INTERFACE_2], 
[fakes.FAKE_VSCTL_LIST_INTERFACE_3], [fakes.FAKE_VSCTL_LIST_INTERFACE_4]] self.DockerExecHelper._execute = mock.Mock( side_effect=list_executes) list_veths = ['veth11b2c34', 'veth25f6g7h', 'veth3jd83j7', 'veth4i9j10k'] self.assertIsNone( self.DockerExecHelper.find_container_veth("foo_bar")) list_calls = [mock.call("ovs-vsctl", "list", "interface", run_as_root=True)] for veth in list_veths: list_calls.append( mock.call("ovs-vsctl", "list", "interface", veth, run_as_root=True) ) self.DockerExecHelper._execute.assert_has_calls( list_calls, any_order=True ) @ddt.data((["wrong_name\nfake\nfake_container\nfake_name'"], True), (["wrong_name\nfake_container\nfake'"], False), ("\n", False)) @ddt.unpack def test_container_exists(self, fake_return_value, expected_result): self.DockerExecHelper._execute = mock.Mock( return_value=fake_return_value) result = self.DockerExecHelper.container_exists("fake_name") self.DockerExecHelper._execute.assert_called_once_with( "docker", "ps", "--no-trunc", "--format='{{.Names}}'", run_as_root=True) self.assertEqual(expected_result, result) manila-10.0.0/manila/tests/share/drivers/glusterfs/0000775000175000017500000000000013656750362022271 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/glusterfs/__init__.py0000664000175000017500000000000013656750227024370 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/glusterfs/test_layout.py0000664000175000017500000003126613656750227025227 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
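# GlusterfsShareDriverBase delegates most share operations to a pluggable
# "layout" object which it loads by dotted name via oslo.utils importutils
# (see test_init_subclass and test_delegated_methods below).  A minimal,
# dependency-free sketch of that delegation pattern; the ``_Example*``
# classes are illustrative and not part of manila.


class _ExampleLayout(object):
    def delete_share(self, context, share, share_server=None):
        return ('deleted by layout', share)


class _ExampleDriver(object):
    def __init__(self, layout):
        self.layout = layout

    def delete_share(self, *args, **kwargs):
        # Keep the public driver API, forward the actual work to the layout.
        return self.layout.delete_share(*args, **kwargs)


assert _ExampleDriver(_ExampleLayout()).delete_share(
    'fake_context', 'fake_share')[0] == 'deleted by layout'
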
import errno import os from unittest import mock import ddt from oslo_config import cfg from oslo_utils import importutils from manila import exception from manila.share import configuration as config from manila.share import driver from manila.share.drivers.glusterfs import layout from manila import test from manila.tests import fake_share from manila.tests import fake_utils CONF = cfg.CONF fake_local_share_path = '/mnt/nfs/testvol/fakename' fake_path_to_private_key = '/fakepath/to/privatekey' fake_remote_server_password = 'fakepassword' def fake_access(kwargs): fake_access_rule = fake_share.fake_access(**kwargs) fake_access_rule.to_dict = lambda: fake_access_rule.values return fake_access_rule class GlusterfsFakeShareDriver(layout.GlusterfsShareDriverBase): supported_layouts = ('layout_fake.FakeLayout', 'layout_something.SomeLayout') supported_protocols = ('NFS,') _supported_access_types = ('ip',) _supported_access_levels = ('rw',) @ddt.ddt class GlusterfsShareDriverBaseTestCase(test.TestCase): """Tests GlusterfsShareDriverBase.""" def setUp(self): super(GlusterfsShareDriverBaseTestCase, self).setUp() CONF.set_default('driver_handles_share_servers', False) fake_conf, __ = self._setup() self._driver = GlusterfsFakeShareDriver(False, configuration=fake_conf) self.fake_share = mock.Mock(name='fake_share') self.fake_context = mock.Mock(name='fake_context') self.fake_access = mock.Mock(name='fake_access') def _setup(self): fake_conf = config.Configuration(None) fake_layout = mock.Mock() self.mock_object(importutils, "import_object", mock.Mock(return_value=fake_layout)) return fake_conf, fake_layout def test_init(self): self.assertRaises(IndexError, layout.GlusterfsShareDriverBase, False, configuration=config.Configuration(None)) @ddt.data({'has_snap': None, 'layout_name': None}, {'has_snap': False, 'layout_name': 'layout_fake.FakeLayout'}, {'has_snap': True, 'layout_name': 'layout_something.SomeLayout'}) @ddt.unpack def test_init_subclass(self, has_snap, layout_name): conf, _layout = self._setup() if layout_name is not None: conf.glusterfs_share_layout = layout_name if has_snap is None: del(_layout._snapshots_are_supported) else: _layout._snapshots_are_supported = has_snap _driver = GlusterfsFakeShareDriver(False, configuration=conf) snap_result = {None: False}.get(has_snap, has_snap) layout_result = {None: 'layout_fake.FakeLayout'}.get(layout_name, layout_name) importutils.import_object.assert_called_once_with( 'manila.share.drivers.glusterfs.%s' % layout_result, _driver, configuration=conf) self.assertEqual(_layout, _driver.layout) self.assertEqual(snap_result, _driver.snapshots_are_supported) def test_init_nosupp_layout(self): conf = config.Configuration(None) conf.glusterfs_share_layout = 'nonsense_layout' self.assertRaises(exception.GlusterfsException, GlusterfsFakeShareDriver, False, configuration=conf) def test_setup_via_manager(self): self.assertIsNone(self._driver._setup_via_manager(mock.Mock())) def test_supported_access_types(self): self.assertEqual(('ip',), self._driver.supported_access_types) def test_supported_access_levels(self): self.assertEqual(('rw',), self._driver.supported_access_levels) def test_access_rule_validator(self): rule = mock.Mock() abort = mock.Mock() valid = mock.Mock() self.mock_object(layout.ganesha_utils, 'validate_access_rule', mock.Mock(return_value=valid)) ret = self._driver._access_rule_validator(abort)(rule) self.assertEqual(valid, ret) layout.ganesha_utils.validate_access_rule.assert_called_once_with( ('ip',), ('rw',), rule, abort) @ddt.data({'inset': 
([], ['ADD'], []), 'outset': (['ADD'], []), 'recovery': False}, {'inset': ([], [], ['DELETE']), 'outset': ([], ['DELETE']), 'recovery': False}, {'inset': (['EXISTING'], ['ADD'], ['DELETE']), 'outset': (['ADD'], ['DELETE']), 'recovery': False}, {'inset': (['EXISTING'], [], []), 'outset': (['EXISTING'], []), 'recovery': True}) @ddt.unpack def test_update_access(self, inset, outset, recovery): conf, _layout = self._setup() gluster_mgr = mock.Mock(name='gluster_mgr') self.mock_object(_layout, '_share_manager', mock.Mock(return_value=gluster_mgr)) _driver = GlusterfsFakeShareDriver(False, configuration=conf) self.mock_object(_driver, '_update_access_via_manager', mock.Mock()) rulemap = {t: fake_access({'access_type': "ip", 'access_level': "rw", 'access_to': t}) for t in ( 'EXISTING', 'ADD', 'DELETE')} in_rules, out_rules = ( [ [ rulemap[t] for t in r ] for r in rs ] for rs in (inset, outset)) _driver.update_access(self.fake_context, self.fake_share, *in_rules) _layout._share_manager.assert_called_once_with(self.fake_share) _driver._update_access_via_manager.assert_called_once_with( gluster_mgr, self.fake_context, self.fake_share, *out_rules, recovery=recovery) def test_update_access_via_manager(self): self.assertRaises(NotImplementedError, self._driver._update_access_via_manager, mock.Mock(), self.fake_context, self.fake_share, [self.fake_access], [self.fake_access]) @ddt.data('NFS', 'PROTATO') def test_check_proto_baseclass(self, proto): self.assertRaises(exception.ShareBackendException, layout.GlusterfsShareDriverBase._check_proto, {'share_proto': proto}) def test_check_proto(self): GlusterfsFakeShareDriver._check_proto({'share_proto': 'NFS'}) def test_check_proto_notsupported(self): self.assertRaises(exception.ShareBackendException, GlusterfsFakeShareDriver._check_proto, {'share_proto': 'PROTATO'}) @ddt.data('', '_from_snapshot') def test_create_share(self, variant): conf, _layout = self._setup() _driver = GlusterfsFakeShareDriver(False, configuration=conf) self.mock_object(_driver, '_check_proto', mock.Mock()) getattr(_driver, 'create_share%s' % variant)(self.fake_context, self.fake_share) _driver._check_proto.assert_called_once_with(self.fake_share) getattr(_layout, 'create_share%s' % variant).assert_called_once_with( self.fake_context, self.fake_share) @ddt.data(True, False) def test_update_share_stats(self, internal_exception): data = mock.Mock() conf, _layout = self._setup() def raise_exception(*args, **kwargs): raise NotImplementedError layoutstats = mock.Mock() mock_kw = ({'side_effect': raise_exception} if internal_exception else {'return_value': layoutstats}) self.mock_object(_layout, '_update_share_stats', mock.Mock(**mock_kw)) self.mock_object(driver.ShareDriver, '_update_share_stats', mock.Mock()) _driver = GlusterfsFakeShareDriver(False, configuration=conf) _driver._update_share_stats(data) if internal_exception: self.assertFalse(data.update.called) else: data.update.assert_called_once_with(layoutstats) driver.ShareDriver._update_share_stats.assert_called_once_with( data) @ddt.data('do_setup', 'create_snapshot', 'delete_share', 'delete_snapshot', 'ensure_share', 'manage_existing', 'unmanage', 'extend_share', 'shrink_share') def test_delegated_methods(self, method): conf, _layout = self._setup() _driver = GlusterfsFakeShareDriver(False, configuration=conf) fake_args = (mock.Mock(), mock.Mock(), mock.Mock()) getattr(_driver, method)(*fake_args) getattr(_layout, method).assert_called_once_with(*fake_args) @ddt.ddt class GlusterfsShareLayoutBaseTestCase(test.TestCase): """Tests 
GlusterfsShareLayoutBaseTestCase.""" def setUp(self): super(GlusterfsShareLayoutBaseTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) self._execute = fake_utils.fake_execute self.addCleanup(fake_utils.fake_execute_set_repliers, []) self.addCleanup(fake_utils.fake_execute_clear_log) self.fake_driver = mock.Mock() self.mock_object(self.fake_driver, '_execute', self._execute) class FakeLayout(layout.GlusterfsShareLayoutBase): def _share_manager(self, share): """Return GlusterManager object representing share's backend.""" def do_setup(self, context): """Any initialization the share driver does while starting.""" def create_share(self, context, share, share_server=None): """Is called to create share.""" def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create share from snapshot.""" def create_snapshot(self, context, snapshot, share_server=None): """Is called to create snapshot.""" def delete_share(self, context, share, share_server=None): """Is called to remove share.""" def delete_snapshot(self, context, snapshot, share_server=None): """Is called to remove snapshot.""" def ensure_share(self, context, share, share_server=None): """Invoked to ensure that share is exported.""" def manage_existing(self, share, driver_options): """Brings an existing share under Manila management.""" def unmanage(self, share): """Removes the specified share from Manila management.""" def extend_share(self, share, new_size, share_server=None): """Extends size of existing share.""" def shrink_share(self, share, new_size, share_server=None): """Shrinks size of existing share.""" def test_init_invalid(self): self.assertRaises(TypeError, layout.GlusterfsShareLayoutBase, mock.Mock()) def test_subclass(self): fake_conf = mock.Mock() _layout = self.FakeLayout(self.fake_driver, configuration=fake_conf) self.assertEqual(fake_conf, _layout.configuration) self.assertRaises(NotImplementedError, _layout._update_share_stats) def test_check_mount_glusterfs(self): fake_conf = mock.Mock() _driver = mock.Mock() _driver._execute = mock.Mock() _layout = self.FakeLayout(_driver, configuration=fake_conf) _layout._check_mount_glusterfs() _driver._execute.assert_called_once_with( 'mount.glusterfs', check_exit_code=False) @ddt.data({'_errno': errno.ENOENT, '_exception': exception.GlusterfsException}, {'_errno': errno.EACCES, '_exception': OSError}) @ddt.unpack def test_check_mount_glusterfs_not_installed(self, _errno, _exception): fake_conf = mock.Mock() _layout = self.FakeLayout(self.fake_driver, configuration=fake_conf) def exec_runner(*ignore_args, **ignore_kwargs): raise OSError(_errno, os.strerror(_errno)) expected_exec = ['mount.glusterfs'] fake_utils.fake_execute_set_repliers([(expected_exec[0], exec_runner)]) self.assertRaises(_exception, _layout._check_mount_glusterfs) manila-10.0.0/manila/tests/share/drivers/glusterfs/test_common.py0000664000175000017500000010337013656750227025176 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Test cases for GlusterFS common routines.""" from unittest import mock import ddt from oslo_config import cfg from manila import exception from manila.share.drivers.glusterfs import common from manila import test from manila.tests import fake_utils CONF = cfg.CONF fake_gluster_manager_attrs = { 'export': '127.0.0.1:/testvol', 'host': '127.0.0.1', 'qualified': 'testuser@127.0.0.1:/testvol', 'user': 'testuser', 'volume': 'testvol', 'path_to_private_key': '/fakepath/to/privatekey', 'remote_server_password': 'fakepassword', } fake_args = ('foo', 'bar') fake_kwargs = {'key1': 'value1', 'key2': 'value2'} fake_path_to_private_key = '/fakepath/to/privatekey' fake_remote_server_password = 'fakepassword' NFS_EXPORT_DIR = 'nfs.export-dir' fakehost = 'example.com' fakevol = 'testvol' fakeexport = ':/'.join((fakehost, fakevol)) fakemnt = '/mnt/glusterfs' @ddt.ddt class GlusterManagerTestCase(test.TestCase): """Tests GlusterManager.""" def setUp(self): super(GlusterManagerTestCase, self).setUp() self.fake_execf = mock.Mock() self.fake_executor = mock.Mock(return_value=('', '')) with mock.patch.object(common.GlusterManager, 'make_gluster_call', return_value=self.fake_executor): self._gluster_manager = common.GlusterManager( 'testuser@127.0.0.1:/testvol', self.fake_execf, fake_path_to_private_key, fake_remote_server_password) fake_gluster_manager_dict = { 'host': '127.0.0.1', 'user': 'testuser', 'volume': 'testvol' } self._gluster_manager_dict = common.GlusterManager( fake_gluster_manager_dict, self.fake_execf, fake_path_to_private_key, fake_remote_server_password) self._gluster_manager_array = [self._gluster_manager, self._gluster_manager_dict] def test_check_volume_presence(self): common._check_volume_presence(mock.Mock())(self._gluster_manager) def test_check_volume_presence_error(self): gmgr = common.GlusterManager('testuser@127.0.0.1') self.assertRaises( exception.GlusterfsException, common._check_volume_presence(mock.Mock()), gmgr) def test_volxml_get(self): xmlout = mock.Mock() value = mock.Mock() value.text = 'foobar' xmlout.find = mock.Mock(return_value=value) ret = common.volxml_get(xmlout, 'some/path') self.assertEqual('foobar', ret) @ddt.data(None, 'some-value') def test_volxml_get_notfound_fallback(self, default): xmlout = mock.Mock() xmlout.find = mock.Mock(return_value=None) ret = common.volxml_get(xmlout, 'some/path', default=default) self.assertEqual(default, ret) def test_volxml_get_multiple(self): xmlout = mock.Mock() value = mock.Mock() value.text = 'foobar' xmlout.find = mock.Mock(side_effect=(None, value)) ret = common.volxml_get(xmlout, 'some/path', 'better/path') self.assertEqual('foobar', ret) def test_volxml_get_notfound(self): xmlout = mock.Mock() xmlout.find = mock.Mock(return_value=None) self.assertRaises(exception.InvalidShare, common.volxml_get, xmlout, 'some/path') def test_gluster_manager_common_init(self): for gmgr in self._gluster_manager_array: self.assertEqual( fake_gluster_manager_attrs['user'], gmgr.user) self.assertEqual( fake_gluster_manager_attrs['host'], gmgr.host) self.assertEqual( fake_gluster_manager_attrs['volume'], gmgr.volume) self.assertEqual( fake_gluster_manager_attrs['qualified'], gmgr.qualified) self.assertEqual( fake_gluster_manager_attrs['export'], gmgr.export) self.assertEqual( fake_gluster_manager_attrs['path_to_private_key'], gmgr.path_to_private_key) self.assertEqual( fake_gluster_manager_attrs['remote_server_password'], 
gmgr.remote_server_password) self.assertEqual( self.fake_executor, gmgr.gluster_call) @ddt.data({'user': 'testuser', 'host': '127.0.0.1', 'volume': 'testvol', 'path': None}, {'user': None, 'host': '127.0.0.1', 'volume': 'testvol', 'path': '/testpath'}, {'user': None, 'host': '127.0.0.1', 'volume': 'testvol', 'path': None}, {'user': None, 'host': '127.0.0.1', 'volume': None, 'path': None}, {'user': 'testuser', 'host': '127.0.0.1', 'volume': None, 'path': None}, {'user': 'testuser', 'host': '127.0.0.1', 'volume': 'testvol', 'path': '/testpath'}) def test_gluster_manager_init_check(self, test_addr_dict): test_gluster_manager = common.GlusterManager( test_addr_dict, self.fake_execf) self.assertEqual(test_addr_dict, test_gluster_manager.components) @ddt.data(None, True) def test_gluster_manager_init_has_vol(self, has_volume): test_gluster_manager = common.GlusterManager( 'testuser@127.0.0.1:/testvol', self.fake_execf, requires={'volume': has_volume}) self.assertEqual('testvol', test_gluster_manager.volume) @ddt.data(None, True) def test_gluster_manager_dict_init_has_vol(self, has_volume): test_addr_dict = {'user': 'testuser', 'host': '127.0.0.1', 'volume': 'testvol', 'path': '/testdir'} test_gluster_manager = common.GlusterManager( test_addr_dict, self.fake_execf, requires={'volume': has_volume}) self.assertEqual('testvol', test_gluster_manager.volume) @ddt.data(None, False) def test_gluster_manager_init_no_vol(self, has_volume): test_gluster_manager = common.GlusterManager( 'testuser@127.0.0.1', self.fake_execf, requires={'volume': has_volume}) self.assertIsNone(test_gluster_manager.volume) @ddt.data(None, False) def test_gluster_manager_dict_init_no_vol(self, has_volume): test_addr_dict = {'user': 'testuser', 'host': '127.0.0.1'} test_gluster_manager = common.GlusterManager( test_addr_dict, self.fake_execf, requires={'volume': has_volume}) self.assertIsNone(test_gluster_manager.volume) def test_gluster_manager_init_has_shouldnt_have_vol(self): self.assertRaises(exception.GlusterfsException, common.GlusterManager, 'testuser@127.0.0.1:/testvol', self.fake_execf, requires={'volume': False}) def test_gluster_manager_dict_init_has_shouldnt_have_vol(self): test_addr_dict = {'user': 'testuser', 'host': '127.0.0.1', 'volume': 'testvol'} self.assertRaises(exception.GlusterfsException, common.GlusterManager, test_addr_dict, self.fake_execf, requires={'volume': False}) def test_gluster_manager_hasnt_should_have_vol(self): self.assertRaises(exception.GlusterfsException, common.GlusterManager, 'testuser@127.0.0.1', self.fake_execf, requires={'volume': True}) def test_gluster_manager_dict_hasnt_should_have_vol(self): test_addr_dict = {'user': 'testuser', 'host': '127.0.0.1'} self.assertRaises(exception.GlusterfsException, common.GlusterManager, test_addr_dict, self.fake_execf, requires={'volume': True}) def test_gluster_manager_invalid(self): self.assertRaises(exception.GlusterfsException, common.GlusterManager, '127.0.0.1:vol', 'self.fake_execf') def test_gluster_manager_dict_invalid_req_host(self): test_addr_dict = {'user': 'testuser', 'volume': 'testvol'} self.assertRaises(exception.GlusterfsException, common.GlusterManager, test_addr_dict, 'self.fake_execf') @ddt.data({'user': 'testuser'}, {'host': 'johndoe@example.com'}, {'host': 'example.com/so', 'volume': 'me/path'}, {'user': 'user@error', 'host': "example.com", 'volume': 'vol'}, {'host': 'example.com', 'volume': 'vol', 'pith': '/path'}, {'host': 'example.com', 'path': '/path'}, {'user': 'user@error', 'host': "example.com", 'path': '/path'}) def 
test_gluster_manager_dict_invalid_input(self, test_addr_dict): self.assertRaises(exception.GlusterfsException, common.GlusterManager, test_addr_dict, 'self.fake_execf') def test_gluster_manager_getattr(self): self.assertEqual('testvol', self._gluster_manager.volume) def test_gluster_manager_getattr_called(self): class FakeGlusterManager(common.GlusterManager): pass _gluster_manager = FakeGlusterManager('127.0.0.1:/testvol', self.fake_execf) FakeGlusterManager.__getattr__ = mock.Mock() _gluster_manager.volume _gluster_manager.__getattr__.assert_called_once_with('volume') def test_gluster_manager_getattr_noattr(self): self.assertRaises(AttributeError, getattr, self._gluster_manager, 'fakeprop') @ddt.data({'mockargs': {}, 'kwargs': {}}, {'mockargs': {'side_effect': exception.ProcessExecutionError}, 'kwargs': {'error_policy': 'suppress'}}, {'mockargs': { 'side_effect': exception.ProcessExecutionError(exit_code=2)}, 'kwargs': {'error_policy': (2,)}}) @ddt.unpack def test_gluster_manager_make_gluster_call_local(self, mockargs, kwargs): fake_obj = mock.Mock(**mockargs) fake_execute = mock.Mock() kwargs.update(fake_kwargs) with mock.patch.object(common.ganesha_utils, 'RootExecutor', mock.Mock(return_value=fake_obj)): gluster_manager = common.GlusterManager( '127.0.0.1:/testvol', self.fake_execf) gluster_manager.make_gluster_call(fake_execute)(*fake_args, **kwargs) common.ganesha_utils.RootExecutor.assert_called_with( fake_execute) fake_obj.assert_called_once_with( *(('gluster',) + fake_args), **fake_kwargs) def test_gluster_manager_make_gluster_call_remote(self): fake_obj = mock.Mock() fake_execute = mock.Mock() with mock.patch.object(common.ganesha_utils, 'SSHExecutor', mock.Mock(return_value=fake_obj)): gluster_manager = common.GlusterManager( 'testuser@127.0.0.1:/testvol', self.fake_execf, fake_path_to_private_key, fake_remote_server_password) gluster_manager.make_gluster_call(fake_execute)(*fake_args, **fake_kwargs) common.ganesha_utils.SSHExecutor.assert_called_with( gluster_manager.host, 22, None, gluster_manager.user, password=gluster_manager.remote_server_password, privatekey=gluster_manager.path_to_private_key) fake_obj.assert_called_once_with( *(('gluster',) + fake_args), **fake_kwargs) @ddt.data({'trouble': exception.ProcessExecutionError, '_exception': exception.GlusterfsException, 'xkw': {}}, {'trouble': exception.ProcessExecutionError(exit_code=2), '_exception': exception.GlusterfsException, 'xkw': {'error_policy': (1,)}}, {'trouble': exception.ProcessExecutionError, '_exception': exception.GlusterfsException, 'xkw': {'error_policy': 'coerce'}}, {'trouble': exception.ProcessExecutionError, '_exception': exception.ProcessExecutionError, 'xkw': {'error_policy': 'raw'}}, {'trouble': RuntimeError, '_exception': RuntimeError, 'xkw': {}}) @ddt.unpack def test_gluster_manager_make_gluster_call_error(self, trouble, _exception, xkw): fake_obj = mock.Mock(side_effect=trouble) fake_execute = mock.Mock() kwargs = fake_kwargs.copy() kwargs.update(xkw) with mock.patch.object(common.ganesha_utils, 'RootExecutor', mock.Mock(return_value=fake_obj)): gluster_manager = common.GlusterManager( '127.0.0.1:/testvol', self.fake_execf) self.assertRaises(_exception, gluster_manager.make_gluster_call(fake_execute), *fake_args, **kwargs) common.ganesha_utils.RootExecutor.assert_called_with( fake_execute) fake_obj.assert_called_once_with( *(('gluster',) + fake_args), **fake_kwargs) def test_gluster_manager_make_gluster_call_bad_policy(self): fake_obj = mock.Mock() fake_execute = mock.Mock() with 
mock.patch.object(common.ganesha_utils, 'RootExecutor', mock.Mock(return_value=fake_obj)): gluster_manager = common.GlusterManager( '127.0.0.1:/testvol', self.fake_execf) self.assertRaises(TypeError, gluster_manager.make_gluster_call(fake_execute), *fake_args, error_policy='foobar') @ddt.data({}, {'opErrstr': None}, {'opErrstr': 'error'}) def test_xml_response_check(self, xdict): fdict = {'opRet': '0', 'opErrno': '0', 'some/count': '1'} fdict.update(xdict) def vxget(x, e, *a): if a: return fdict.get(e, a[0]) else: return fdict[e] xtree = mock.Mock() command = ['volume', 'command', 'fake'] with mock.patch.object(common, 'volxml_get', side_effect=vxget): self._gluster_manager.xml_response_check(xtree, command, 'some/count') self.assertTrue(common.volxml_get.called) @ddt.data('1', '2') def test_xml_response_check_failure(self, count): fdict = {'opRet': '-1', 'opErrno': '0', 'some/count': count} def vxget(x, e, *a): if a: return fdict.get(e, a[0]) else: return fdict[e] xtree = mock.Mock() command = ['volume', 'command', 'fake'] with mock.patch.object(common, 'volxml_get', side_effect=vxget): self.assertRaises(exception.GlusterfsException, self._gluster_manager.xml_response_check, xtree, command, 'some/count') self.assertTrue(common.volxml_get.called) @ddt.data({'opRet': '-2', 'opErrno': '0', 'some/count': '1'}, {'opRet': '0', 'opErrno': '1', 'some/count': '1'}, {'opRet': '0', 'opErrno': '0', 'some/count': '0'}, {'opRet': '0', 'opErrno': '0', 'some/count': '2'}) def test_xml_response_check_invalid(self, fdict): def vxget(x, *e, **kw): if kw: return fdict.get(e[0], kw['default']) else: return fdict[e[0]] xtree = mock.Mock() command = ['volume', 'command', 'fake'] with mock.patch.object(common, 'volxml_get', side_effect=vxget): self.assertRaises(exception.InvalidShare, self._gluster_manager.xml_response_check, xtree, command, 'some/count') self.assertTrue(common.volxml_get.called) @ddt.data({'opRet': '0', 'opErrno': '0'}, {'opRet': '0', 'opErrno': '0', 'some/count': '2'}) def test_xml_response_check_count_ignored(self, fdict): def vxget(x, e, *a): if a: return fdict.get(e, a[0]) else: return fdict[e] xtree = mock.Mock() command = ['volume', 'command', 'fake'] with mock.patch.object(common, 'volxml_get', side_effect=vxget): self._gluster_manager.xml_response_check(xtree, command) self.assertTrue(common.volxml_get.called) def test_get_vol_option_via_info_empty_volinfo(self): args = ('--xml', 'volume', 'info', self._gluster_manager.volume) self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock(return_value=('', {}))) self.assertRaises(exception.GlusterfsException, self._gluster_manager._get_vol_option_via_info, 'foobar') self._gluster_manager.gluster_call.assert_called_once_with( *args, log=mock.ANY) def test_get_vol_option_via_info_ambiguous_volinfo(self): def xml_output(*ignore_args, **ignore_kwargs): return """ 0 0 0 """, '' args = ('--xml', 'volume', 'info', self._gluster_manager.volume) self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock(side_effect=xml_output)) self.assertRaises(exception.InvalidShare, self._gluster_manager._get_vol_option_via_info, 'foobar') self._gluster_manager.gluster_call.assert_called_once_with( *args, log=mock.ANY) def test_get_vol_option_via_info_trivial_volinfo(self): def xml_output(*ignore_args, **ignore_kwargs): return """ 0 0 1 """, '' args = ('--xml', 'volume', 'info', self._gluster_manager.volume) self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock(side_effect=xml_output)) ret = 
self._gluster_manager._get_vol_option_via_info('foobar') self.assertIsNone(ret) self._gluster_manager.gluster_call.assert_called_once_with( *args, log=mock.ANY) def test_get_vol_option_via_info(self): def xml_output(*ignore_args, **ignore_kwargs): return """ 0 0 1 """, '' args = ('--xml', 'volume', 'info', self._gluster_manager.volume) self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock(side_effect=xml_output)) ret = self._gluster_manager._get_vol_option_via_info('foobar') self.assertEqual('FIRE MONKEY!', ret) self._gluster_manager.gluster_call.assert_called_once_with( *args, log=mock.ANY) def test_get_vol_user_option(self): self.mock_object(self._gluster_manager, '_get_vol_option_via_info', mock.Mock(return_value='VALUE')) ret = self._gluster_manager._get_vol_user_option('OPT') self.assertEqual(ret, 'VALUE') (self._gluster_manager._get_vol_option_via_info. assert_called_once_with('user.OPT')) def test_get_vol_regular_option_empty_reponse(self): args = ('--xml', 'volume', 'get', self._gluster_manager.volume, NFS_EXPORT_DIR) self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock(return_value=('', {}))) ret = self._gluster_manager._get_vol_regular_option(NFS_EXPORT_DIR) self.assertIsNone(ret) self._gluster_manager.gluster_call.assert_called_once_with( *args, check_exit_code=False) @ddt.data(0, 2) def test_get_vol_regular_option_ambiguous_volinfo(self, count): def xml_output(*ignore_args, **ignore_kwargs): return """ 0 0 %d """ % count, '' args = ('--xml', 'volume', 'get', self._gluster_manager.volume, NFS_EXPORT_DIR) self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock(side_effect=xml_output)) self.assertRaises(exception.InvalidShare, self._gluster_manager._get_vol_regular_option, NFS_EXPORT_DIR) self._gluster_manager.gluster_call.assert_called_once_with( *args, check_exit_code=False) @ddt.data({'start': "", 'end': ""}, {'start': "", 'end': ""}) def test_get_vol_regular_option(self, extratag): def xml_output(*ignore_args, **ignore_kwargs): return """ 0 0 1 %(start)s /foo(10.0.0.1|10.0.0.2),/bar(10.0.0.1) %(end)s """ % extratag, '' args = ('--xml', 'volume', 'get', self._gluster_manager.volume, NFS_EXPORT_DIR) self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock(side_effect=xml_output)) ret = self._gluster_manager._get_vol_regular_option(NFS_EXPORT_DIR) self.assertEqual('/foo(10.0.0.1|10.0.0.2),/bar(10.0.0.1)', ret) self._gluster_manager.gluster_call.assert_called_once_with( *args, check_exit_code=False) def test_get_vol_regular_option_not_suppored(self): args = ('--xml', 'volume', 'get', self._gluster_manager.volume, NFS_EXPORT_DIR) self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock(return_value=( """Ceci n'est pas un XML.""", ''))) self.mock_object(self._gluster_manager, '_get_vol_option_via_info', mock.Mock(return_value="VALUE")) ret = self._gluster_manager._get_vol_regular_option(NFS_EXPORT_DIR) self.assertEqual("VALUE", ret) self._gluster_manager.gluster_call.assert_called_once_with( *args, check_exit_code=False) (self._gluster_manager._get_vol_option_via_info. 
assert_called_once_with(NFS_EXPORT_DIR)) @ddt.data({'opt': 'some.option', 'opttype': 'regular', 'lowopt': 'some.option'}, {'opt': 'user.param', 'opttype': 'user', 'lowopt': 'param'}) @ddt.unpack def test_get_vol_option(self, opt, opttype, lowopt): for t in ('user', 'regular'): self.mock_object(self._gluster_manager, '_get_vol_%s_option' % t, mock.Mock(return_value='value-%s' % t)) ret = self._gluster_manager.get_vol_option(opt) self.assertEqual('value-%s' % opttype, ret) for t in ('user', 'regular'): func = getattr(self._gluster_manager, '_get_vol_%s_option' % t) if opttype == t: func.assert_called_once_with(lowopt) else: self.assertFalse(func.called) def test_get_vol_option_unset(self): self.mock_object(self._gluster_manager, '_get_vol_regular_option', mock.Mock(return_value=None)) ret = self._gluster_manager.get_vol_option('some.option') self.assertIsNone(ret) @ddt.data({'value': '0', 'boolval': False}, {'value': 'Off', 'boolval': False}, {'value': 'no', 'boolval': False}, {'value': '1', 'boolval': True}, {'value': 'true', 'boolval': True}, {'value': 'enAble', 'boolval': True}, {'value': None, 'boolval': None}) @ddt.unpack def test_get_vol_option_boolean(self, value, boolval): self.mock_object(self._gluster_manager, '_get_vol_regular_option', mock.Mock(return_value=value)) ret = self._gluster_manager.get_vol_option('some.option', boolean=True) self.assertEqual(boolval, ret) def test_get_vol_option_boolean_bad(self): self.mock_object(self._gluster_manager, '_get_vol_regular_option', mock.Mock(return_value='jabberwocky')) self.assertRaises(exception.GlusterfsException, self._gluster_manager.get_vol_option, 'some.option', boolean=True) @ddt.data({'setting': 'some_value', 'args': ('set', 'some_value')}, {'setting': None, 'args': ('reset',)}, {'setting': True, 'args': ('set', 'ON')}, {'setting': False, 'args': ('set', 'OFF')}) @ddt.unpack def test_set_vol_option(self, setting, args): self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock()) self._gluster_manager.set_vol_option('an_option', setting) self._gluster_manager.gluster_call.assert_called_once_with( 'volume', args[0], 'testvol', 'an_option', *args[1:], error_policy=mock.ANY) @ddt.data({}, {'ignore_failure': False}) def test_set_vol_option_error(self, kwargs): fake_obj = mock.Mock( side_effect=exception.ProcessExecutionError(exit_code=1)) with mock.patch.object(common.ganesha_utils, 'RootExecutor', mock.Mock(return_value=fake_obj)): gluster_manager = common.GlusterManager( '127.0.0.1:/testvol', self.fake_execf) self.assertRaises(exception.GlusterfsException, gluster_manager.set_vol_option, 'an_option', "some_value", **kwargs) self.assertTrue(fake_obj.called) def test_set_vol_option_error_relaxed(self): fake_obj = mock.Mock( side_effect=exception.ProcessExecutionError(exit_code=1)) with mock.patch.object(common.ganesha_utils, 'RootExecutor', mock.Mock(return_value=fake_obj)): gluster_manager = common.GlusterManager( '127.0.0.1:/testvol', self.fake_execf) gluster_manager.set_vol_option('an_option', "some_value", ignore_failure=True) self.assertTrue(fake_obj.called) def test_get_gluster_version(self): self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock(return_value=('glusterfs 3.6.2beta3', ''))) ret = self._gluster_manager.get_gluster_version() self.assertEqual(['3', '6', '2beta3'], ret) self._gluster_manager.gluster_call.assert_called_once_with( '--version', log=mock.ANY) @ddt.data("foo 1.1.1", "glusterfs 3-6", "glusterfs 3.6beta3") def test_get_gluster_version_exception(self, versinfo): 
self.mock_object(self._gluster_manager, 'gluster_call', mock.Mock(return_value=(versinfo, ''))) self.assertRaises(exception.GlusterfsException, self._gluster_manager.get_gluster_version) self._gluster_manager.gluster_call.assert_called_once_with( '--version', log=mock.ANY) def test_check_gluster_version(self): self.mock_object(self._gluster_manager, 'get_gluster_version', mock.Mock(return_value=('3', '6'))) ret = self._gluster_manager.check_gluster_version((3, 5, 2)) self.assertIsNone(ret) self._gluster_manager.get_gluster_version.assert_called_once_with() def test_check_gluster_version_unmet(self): self.mock_object(self._gluster_manager, 'get_gluster_version', mock.Mock(return_value=('3', '5', '2'))) self.assertRaises(exception.GlusterfsException, self._gluster_manager.check_gluster_version, (3, 6)) self._gluster_manager.get_gluster_version.assert_called_once_with() @ddt.data(('3', '6'), ('3', '6', '2beta'), ('3', '6', '2beta', '4')) def test_numreduct(self, vers): ret = common.numreduct(vers) self.assertEqual((3, 6), ret) @ddt.ddt class GlusterFSCommonTestCase(test.TestCase): """Tests common GlusterFS utility functions.""" def setUp(self): super(GlusterFSCommonTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) self._execute = fake_utils.fake_execute self.addCleanup(fake_utils.fake_execute_set_repliers, []) self.addCleanup(fake_utils.fake_execute_clear_log) self.mock_object(common.GlusterManager, 'make_gluster_call') @staticmethod def _mount_exec(vol, mnt): return ['mkdir -p %s' % mnt, 'mount -t glusterfs %(exp)s %(mnt)s' % {'exp': vol, 'mnt': mnt}] def test_mount_gluster_vol(self): expected_exec = self._mount_exec(fakeexport, fakemnt) ret = common._mount_gluster_vol(self._execute, fakeexport, fakemnt, False) self.assertEqual(fake_utils.fake_execute_get_log(), expected_exec) self.assertIsNone(ret) def test_mount_gluster_vol_mounted_noensure(self): def exec_runner(*ignore_args, **ignore_kwargs): raise exception.ProcessExecutionError(stderr='already mounted') expected_exec = self._mount_exec(fakeexport, fakemnt) fake_utils.fake_execute_set_repliers([('mount', exec_runner)]) self.assertRaises(exception.GlusterfsException, common._mount_gluster_vol, self._execute, fakeexport, fakemnt, False) self.assertEqual(fake_utils.fake_execute_get_log(), expected_exec) def test_mount_gluster_vol_mounted_ensure(self): def exec_runner(*ignore_args, **ignore_kwargs): raise exception.ProcessExecutionError(stderr='already mounted') expected_exec = self._mount_exec(fakeexport, fakemnt) common.LOG.warning = mock.Mock() fake_utils.fake_execute_set_repliers([('mount', exec_runner)]) ret = common._mount_gluster_vol(self._execute, fakeexport, fakemnt, True) self.assertEqual(fake_utils.fake_execute_get_log(), expected_exec) self.assertIsNone(ret) common.LOG.warning.assert_called_with( "%s is already mounted.", fakeexport) @ddt.data(True, False) def test_mount_gluster_vol_fail(self, ensure): def exec_runner(*ignore_args, **ignore_kwargs): raise RuntimeError('fake error') expected_exec = self._mount_exec(fakeexport, fakemnt) fake_utils.fake_execute_set_repliers([('mount', exec_runner)]) self.assertRaises(RuntimeError, common._mount_gluster_vol, self._execute, fakeexport, fakemnt, ensure) self.assertEqual(fake_utils.fake_execute_get_log(), expected_exec) def test_umount_gluster_vol(self): expected_exec = ['umount %s' % fakemnt] ret = common._umount_gluster_vol(self._execute, fakemnt) self.assertEqual(fake_utils.fake_execute_get_log(), expected_exec) self.assertIsNone(ret) @ddt.data({'in_exc': 
exception.ProcessExecutionError, 'out_exc': exception.GlusterfsException}, {'in_exc': RuntimeError, 'out_exc': RuntimeError}) @ddt.unpack def test_umount_gluster_vol_fail(self, in_exc, out_exc): def exec_runner(*ignore_args, **ignore_kwargs): raise in_exc('fake error') expected_exec = ['umount %s' % fakemnt] fake_utils.fake_execute_set_repliers([('umount', exec_runner)]) self.assertRaises(out_exc, common._umount_gluster_vol, self._execute, fakemnt) self.assertEqual(fake_utils.fake_execute_get_log(), expected_exec) def test_restart_gluster_vol(self): gmgr = common.GlusterManager(fakeexport, self._execute, None, None) test_args = [(('volume', 'stop', fakevol, '--mode=script'), {'log': mock.ANY}), (('volume', 'start', fakevol), {'log': mock.ANY})] common._restart_gluster_vol(gmgr) self.assertEqual( [mock.call(*arg[0], **arg[1]) for arg in test_args], gmgr.gluster_call.call_args_list) manila-10.0.0/manila/tests/share/drivers/glusterfs/test_layout_volume.py0000664000175000017500000012572713656750227026624 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ GlusterFS volume mapped share layout testcases. """ import re import shutil import tempfile from unittest import mock import ddt from oslo_config import cfg from manila.common import constants from manila import context from manila import exception from manila.share import configuration as config from manila.share.drivers.glusterfs import common from manila.share.drivers.glusterfs import layout_volume from manila import test from manila.tests import fake_utils CONF = cfg.CONF def new_share(**kwargs): share = { 'id': 'fakeid', 'name': 'fakename', 'size': 1, 'share_proto': 'glusterfs', } share.update(kwargs) return share def glusterXMLOut(**kwargs): template = """ %(ret)d %(errno)d fake error """ return template % kwargs, '' FAKE_UUID1 = '11111111-1111-1111-1111-111111111111' FAKE_UUID2 = '22222222-2222-2222-2222-222222222222' @ddt.ddt class GlusterfsVolumeMappedLayoutTestCase(test.TestCase): """Tests GlusterfsVolumeMappedLayout.""" def setUp(self): super(GlusterfsVolumeMappedLayoutTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) self._execute = fake_utils.fake_execute self._context = context.get_admin_context() self.glusterfs_target1 = 'root@host1:/gv1' self.glusterfs_target2 = 'root@host2:/gv2' self.glusterfs_server1 = 'root@host1' self.glusterfs_server2 = 'root@host2' self.glusterfs_server1_volumes = 'manila-share-1-1G\nshare1' self.glusterfs_server2_volumes = 'manila-share-2-2G\nshare2' self.share1 = new_share( export_location=self.glusterfs_target1, status=constants.STATUS_AVAILABLE) self.share2 = new_share( export_location=self.glusterfs_target2, status=constants.STATUS_AVAILABLE) gmgr = common.GlusterManager self.gmgr1 = gmgr(self.glusterfs_server1, self._execute, None, None, requires={'volume': False}) self.gmgr2 = gmgr(self.glusterfs_server2, self._execute, None, None, requires={'volume': False}) self.glusterfs_volumes_dict = ( 
{'root@host1:/manila-share-1-1G': {'size': 1}, 'root@host2:/manila-share-2-2G': {'size': 2}}) self.glusterfs_used_vols = set([ 'root@host1:/manila-share-1-1G', 'root@host2:/manila-share-2-2G']) CONF.set_default('glusterfs_servers', [self.glusterfs_server1, self.glusterfs_server2]) CONF.set_default('glusterfs_server_password', 'fake_password') CONF.set_default('glusterfs_path_to_private_key', '/fakepath/to/privatekey') CONF.set_default('glusterfs_volume_pattern', r'manila-share-\d+-#{size}G$') CONF.set_default('driver_handles_share_servers', False) self.fake_driver = mock.Mock() self.mock_object(self.fake_driver, '_execute', self._execute) self.fake_driver.GLUSTERFS_VERSION_MIN = (3, 6) self.fake_conf = config.Configuration(None) self.mock_object(tempfile, 'mkdtemp', mock.Mock(return_value='/tmp/tmpKGHKJ')) self.mock_object(common.GlusterManager, 'make_gluster_call') self.fake_private_storage = mock.Mock() with mock.patch.object(layout_volume.GlusterfsVolumeMappedLayout, '_glustermanager', side_effect=[self.gmgr1, self.gmgr2]): self._layout = layout_volume.GlusterfsVolumeMappedLayout( self.fake_driver, configuration=self.fake_conf, private_storage=self.fake_private_storage) self._layout.glusterfs_versions = {self.glusterfs_server1: ('3', '6'), self.glusterfs_server2: ('3', '7')} self.addCleanup(fake_utils.fake_execute_set_repliers, []) self.addCleanup(fake_utils.fake_execute_clear_log) @ddt.data({"test_kwargs": {}, "requires": {"volume": True}}, {"test_kwargs": {'req_volume': False}, "requires": {"volume": False}}) @ddt.unpack def test_glustermanager(self, test_kwargs, requires): fake_obj = mock.Mock() self.mock_object(common, 'GlusterManager', mock.Mock(return_value=fake_obj)) ret = self._layout._glustermanager(self.glusterfs_target1, **test_kwargs) common.GlusterManager.assert_called_once_with( self.glusterfs_target1, self._execute, self._layout.configuration.glusterfs_path_to_private_key, self._layout.configuration.glusterfs_server_password, requires=requires) self.assertEqual(fake_obj, ret) def test_compile_volume_pattern(self): volume_pattern = r'manila-share-\d+-(?P\d+)G$' ret = self._layout._compile_volume_pattern() self.assertEqual(re.compile(volume_pattern), ret) @ddt.data({'root@host1:/manila-share-1-1G': 'NONE', 'root@host2:/manila-share-2-2G': None}, {'root@host1:/manila-share-1-1G': FAKE_UUID1, 'root@host2:/manila-share-2-2G': None}, {'root@host1:/manila-share-1-1G': 'foobarbaz', 'root@host2:/manila-share-2-2G': FAKE_UUID2}, {'root@host1:/manila-share-1-1G': FAKE_UUID1, 'root@host2:/manila-share-2-2G': FAKE_UUID2}) def test_fetch_gluster_volumes(self, sharemark): vol1_qualified = 'root@host1:/manila-share-1-1G' gmgr_vol1 = common.GlusterManager(vol1_qualified) gmgr_vol1.get_vol_option = mock.Mock( return_value=sharemark[vol1_qualified]) vol2_qualified = 'root@host2:/manila-share-2-2G' gmgr_vol2 = common.GlusterManager(vol2_qualified) gmgr_vol2.get_vol_option = mock.Mock( return_value=sharemark[vol2_qualified]) self.mock_object( self.gmgr1, 'gluster_call', mock.Mock(return_value=(self.glusterfs_server1_volumes, ''))) self.mock_object( self.gmgr2, 'gluster_call', mock.Mock(return_value=(self.glusterfs_server2_volumes, ''))) _glustermanager_calls = (self.gmgr1, gmgr_vol1, self.gmgr2, gmgr_vol2) self.mock_object(self._layout, '_glustermanager', mock.Mock(side_effect=_glustermanager_calls)) expected_output = {} for q, d in self.glusterfs_volumes_dict.items(): if sharemark[q] not in (FAKE_UUID1, FAKE_UUID2): expected_output[q] = d ret = self._layout._fetch_gluster_volumes() test_args 
= ('volume', 'list') self.gmgr1.gluster_call.assert_called_once_with(*test_args, log=mock.ANY) self.gmgr2.gluster_call.assert_called_once_with(*test_args, log=mock.ANY) gmgr_vol1.get_vol_option.assert_called_once_with( 'user.manila-share') gmgr_vol2.get_vol_option.assert_called_once_with( 'user.manila-share') self.assertEqual(expected_output, ret) def test_fetch_gluster_volumes_no_filter_used(self): vol1_qualified = 'root@host1:/manila-share-1-1G' gmgr_vol1 = common.GlusterManager(vol1_qualified) gmgr_vol1.get_vol_option = mock.Mock() vol2_qualified = 'root@host2:/manila-share-2-2G' gmgr_vol2 = common.GlusterManager(vol2_qualified) gmgr_vol2.get_vol_option = mock.Mock() self.mock_object( self.gmgr1, 'gluster_call', mock.Mock(return_value=(self.glusterfs_server1_volumes, ''))) self.mock_object( self.gmgr2, 'gluster_call', mock.Mock(return_value=(self.glusterfs_server2_volumes, ''))) _glustermanager_calls = (self.gmgr1, gmgr_vol1, self.gmgr2, gmgr_vol2) self.mock_object(self._layout, '_glustermanager', mock.Mock(side_effect=_glustermanager_calls)) expected_output = self.glusterfs_volumes_dict ret = self._layout._fetch_gluster_volumes(filter_used=False) test_args = ('volume', 'list') self.gmgr1.gluster_call.assert_called_once_with(*test_args, log=mock.ANY) self.gmgr2.gluster_call.assert_called_once_with(*test_args, log=mock.ANY) self.assertFalse(gmgr_vol1.get_vol_option.called) self.assertFalse(gmgr_vol2.get_vol_option.called) self.assertEqual(expected_output, ret) def test_fetch_gluster_volumes_no_keymatch(self): vol1_qualified = 'root@host1:/manila-share-1' gmgr_vol1 = common.GlusterManager(vol1_qualified) gmgr_vol1.get_vol_option = mock.Mock(return_value=None) self._layout.configuration.glusterfs_servers = [self.glusterfs_server1] self.mock_object( self.gmgr1, 'gluster_call', mock.Mock(return_value=('manila-share-1', ''))) _glustermanager_calls = (self.gmgr1, gmgr_vol1) self.mock_object(self._layout, '_glustermanager', mock.Mock(side_effect=_glustermanager_calls)) self.mock_object(self._layout, 'volume_pattern', re.compile(r'manila-share-\d+(-(?P\d+)G)?$')) expected_output = {'root@host1:/manila-share-1': {'size': None}} ret = self._layout._fetch_gluster_volumes() test_args = ('volume', 'list') self.gmgr1.gluster_call.assert_called_once_with(*test_args, log=mock.ANY) self.assertEqual(expected_output, ret) def test_fetch_gluster_volumes_error(self): test_args = ('volume', 'list') def raise_exception(*args, **kwargs): if(args == test_args): raise exception.GlusterfsException() self._layout.configuration.glusterfs_servers = [self.glusterfs_server1] self.mock_object(self.gmgr1, 'gluster_call', mock.Mock(side_effect=raise_exception)) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=self.gmgr1)) self.mock_object(layout_volume.LOG, 'error') self.assertRaises(exception.GlusterfsException, self._layout._fetch_gluster_volumes) self.gmgr1.gluster_call.assert_called_once_with(*test_args, log=mock.ANY) def test_do_setup(self): self._layout.configuration.glusterfs_servers = [self.glusterfs_server1] self.mock_object(self.gmgr1, 'get_gluster_version', mock.Mock(return_value=('3', '6'))) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=self.gmgr1)) self.mock_object(self._layout, '_fetch_gluster_volumes', mock.Mock(return_value=self.glusterfs_volumes_dict)) self.mock_object(self._layout, '_check_mount_glusterfs') self._layout.gluster_used_vols = self.glusterfs_used_vols self.mock_object(layout_volume.LOG, 'warning') self._layout.do_setup(self._context) 
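# A minimal, self-contained sketch (not the driver code under test) of the
# volume-selection idea exercised by the _fetch_gluster_volumes and
# volume_pattern tests nearby: glusterfs_volume_pattern names candidate
# volumes, optionally embedding '#{size}' as a size placeholder, and volumes
# already tagged with a share id via the 'user.manila-share' option count as
# used. Function names and data below are illustrative assumptions.
import re


def compile_volume_pattern(pattern):
    # '#{size}' becomes a named regex group capturing the size in GB.
    return re.compile(pattern.replace('#{size}', r'(?P<size>\d+)'))


def filter_unused_volumes(addresses, sharemarks, pattern):
    """Return {address: {'size': int or None}} for free, pattern-matching volumes."""
    regex = compile_volume_pattern(pattern)
    unused = {}
    for addr in addresses:
        if ':/' not in addr:
            continue
        name = addr.rsplit(':/', 1)[1]
        match = regex.match(name)
        if not match:
            continue
        if sharemarks.get(addr) not in (None, 'NONE'):
            continue  # already claimed by an existing share
        size = match.groupdict().get('size')
        unused[addr] = {'size': int(size) if size else None}
    return unused


# Example with assumed data:
# filter_unused_volumes(
#     ['root@host1:/manila-share-1-1G', 'root@host2:/manila-share-2-2G'],
#     {'root@host2:/manila-share-2-2G': 'some-share-uuid'},
#     r'manila-share-\d+-#{size}G$')
# -> {'root@host1:/manila-share-1-1G': {'size': 1}}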
self._layout._fetch_gluster_volumes.assert_called_once_with( filter_used=False) self._layout._check_mount_glusterfs.assert_called_once_with() self.gmgr1.get_gluster_version.assert_called_once_with() def test_do_setup_unsupported_glusterfs_version(self): self._layout.configuration.glusterfs_servers = [self.glusterfs_server1] self.mock_object(self.gmgr1, 'get_gluster_version', mock.Mock(return_value=('3', '5'))) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=self.gmgr1)) self.assertRaises(exception.GlusterfsException, self._layout.do_setup, self._context) self.gmgr1.get_gluster_version.assert_called_once_with() @ddt.data(exception.GlusterfsException, RuntimeError) def test_do_setup_get_gluster_version_fails(self, exc): def raise_exception(*args, **kwargs): raise exc self._layout.configuration.glusterfs_servers = [self.glusterfs_server1] self.mock_object(self.gmgr1, 'get_gluster_version', mock.Mock(side_effect=raise_exception)) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=self.gmgr1)) self.assertRaises(exc, self._layout.do_setup, self._context) self.gmgr1.get_gluster_version.assert_called_once_with() def test_do_setup_glusterfs_no_volumes_provided_by_backend(self): self._layout.configuration.glusterfs_servers = [self.glusterfs_server1] self.mock_object(self.gmgr1, 'get_gluster_version', mock.Mock(return_value=('3', '6'))) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=self.gmgr1)) self.mock_object(self._layout, '_fetch_gluster_volumes', mock.Mock(return_value={})) self.assertRaises(exception.GlusterfsException, self._layout.do_setup, self._context) self._layout._fetch_gluster_volumes.assert_called_once_with( filter_used=False) def test_share_manager(self): self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=self.gmgr1)) self.mock_object(self._layout.private_storage, 'get', mock.Mock(return_value='host1:/gv1')) ret = self._layout._share_manager(self.share1) self._layout.private_storage.get.assert_called_once_with( self.share1['id'], 'volume') self._layout._glustermanager.assert_called_once_with('host1:/gv1') self.assertEqual(self.gmgr1, ret) def test_share_manager_no_privdata(self): self.mock_object(self._layout.private_storage, 'get', mock.Mock(return_value=None)) ret = self._layout._share_manager(self.share1) self._layout.private_storage.get.assert_called_once_with( self.share1['id'], 'volume') self.assertIsNone(ret) def test_ensure_share(self): share = self.share1 gmgr1 = common.GlusterManager(self.glusterfs_target1, self._execute, None, None) gmgr1.set_vol_option = mock.Mock() self.mock_object(self._layout, '_share_manager', mock.Mock(return_value=gmgr1)) self._layout.ensure_share(self._context, share) self._layout._share_manager.assert_called_once_with(share) self.assertIn(self.glusterfs_target1, self._layout.gluster_used_vols) gmgr1.set_vol_option.assert_called_once_with( 'user.manila-share', share['id']) @ddt.data({"voldict": {"host:/share2G": {"size": 2}}, "used_vols": set(), "size": 1, "expected": "host:/share2G"}, {"voldict": {"host:/share2G": {"size": 2}}, "used_vols": set(), "size": 2, "expected": "host:/share2G"}, {"voldict": {"host:/share2G": {"size": 2}}, "used_vols": set(), "size": None, "expected": "host:/share2G"}, {"voldict": {"host:/share2G": {"size": 2}, "host:/share": {"size": None}}, "used_vols": set(["host:/share2G"]), "size": 1, "expected": "host:/share"}, {"voldict": {"host:/share2G": {"size": 2}, "host:/share": {"size": None}}, "used_vols": set(["host:/share2G"]), 
"size": 2, "expected": "host:/share"}, {"voldict": {"host:/share2G": {"size": 2}, "host:/share": {"size": None}}, "used_vols": set(["host:/share2G"]), "size": 3, "expected": "host:/share"}, {"voldict": {"host:/share2G": {"size": 2}, "host:/share": {"size": None}}, "used_vols": set(["host:/share2G"]), "size": None, "expected": "host:/share"}, {"voldict": {"host:/share": {}}, "used_vols": set(), "size": 1, "expected": "host:/share"}, {"voldict": {"host:/share": {}}, "used_vols": set(), "size": None, "expected": "host:/share"}) @ddt.unpack def test_pop_gluster_vol(self, voldict, used_vols, size, expected): gmgr = common.GlusterManager gmgr1 = gmgr(expected, self._execute, None, None) self._layout._fetch_gluster_volumes = mock.Mock(return_value=voldict) self._layout.gluster_used_vols = used_vols self._layout._glustermanager = mock.Mock(return_value=gmgr1) self._layout.volume_pattern_keys = list(voldict.values())[0].keys() result = self._layout._pop_gluster_vol(size=size) self.assertEqual(expected, result) self.assertIn(result, used_vols) self._layout._fetch_gluster_volumes.assert_called_once_with() self._layout._glustermanager.assert_called_once_with(result) @ddt.data({"voldict": {"share2G": {"size": 2}}, "used_vols": set(), "size": 3}, {"voldict": {"share2G": {"size": 2}}, "used_vols": set(["share2G"]), "size": None}) @ddt.unpack def test_pop_gluster_vol_excp(self, voldict, used_vols, size): self._layout._fetch_gluster_volumes = mock.Mock(return_value=voldict) self._layout.gluster_used_vols = used_vols self._layout.volume_pattern_keys = list(voldict.values())[0].keys() self.assertRaises(exception.GlusterfsException, self._layout._pop_gluster_vol, size=size) self._layout._fetch_gluster_volumes.assert_called_once_with() self.assertFalse( self.fake_driver._setup_via_manager.called) def test_push_gluster_vol(self): self._layout.gluster_used_vols = set([ self.glusterfs_target1, self.glusterfs_target2]) self._layout._push_gluster_vol(self.glusterfs_target2) self.assertEqual(1, len(self._layout.gluster_used_vols)) self.assertFalse( self.glusterfs_target2 in self._layout.gluster_used_vols) def test_push_gluster_vol_excp(self): self._layout.gluster_used_vols = set([self.glusterfs_target1]) self._layout.gluster_unused_vols_dict = {} self.assertRaises(exception.GlusterfsException, self._layout._push_gluster_vol, self.glusterfs_target2) @ddt.data({'vers_minor': '6', 'cmd': ['find', '/tmp/tmpKGHKJ', '-mindepth', '1', '-delete']}, {'vers_minor': '7', 'cmd': ['find', '/tmp/tmpKGHKJ', '-mindepth', '1', '!', '-path', '/tmp/tmpKGHKJ/.trashcan', '!', '-path', '/tmp/tmpKGHKJ/.trashcan/internal_op', '-delete']}) @ddt.unpack def test_wipe_gluster_vol(self, vers_minor, cmd): tmpdir = '/tmp/tmpKGHKJ' gmgr = common.GlusterManager gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None) self._layout.glusterfs_versions = { self.glusterfs_server1: ('3', vers_minor)} self.mock_object(tempfile, 'mkdtemp', mock.Mock(return_value=tmpdir)) self.mock_object(self.fake_driver, '_execute', mock.Mock()) self.mock_object(common, '_mount_gluster_vol', mock.Mock()) self.mock_object(common, '_umount_gluster_vol', mock.Mock()) self.mock_object(shutil, 'rmtree', mock.Mock()) self._layout._wipe_gluster_vol(gmgr1) tempfile.mkdtemp.assert_called_once_with() common._mount_gluster_vol.assert_called_once_with( self.fake_driver._execute, gmgr1.export, tmpdir) kwargs = {'run_as_root': True} self.fake_driver._execute.assert_called_once_with( *cmd, **kwargs) common._umount_gluster_vol.assert_called_once_with( self.fake_driver._execute, 
tmpdir) kwargs = {'ignore_errors': True} shutil.rmtree.assert_called_once_with(tmpdir, **kwargs) def test_wipe_gluster_vol_mount_fail(self): tmpdir = '/tmp/tmpKGHKJ' gmgr = common.GlusterManager gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None) self._layout.glusterfs_versions = { self.glusterfs_server1: ('3', '6')} self.mock_object(tempfile, 'mkdtemp', mock.Mock(return_value=tmpdir)) self.mock_object(self.fake_driver, '_execute', mock.Mock()) self.mock_object(common, '_mount_gluster_vol', mock.Mock(side_effect=exception.GlusterfsException)) self.mock_object(common, '_umount_gluster_vol', mock.Mock()) self.mock_object(shutil, 'rmtree', mock.Mock()) self.assertRaises(exception.GlusterfsException, self._layout._wipe_gluster_vol, gmgr1) tempfile.mkdtemp.assert_called_once_with() common._mount_gluster_vol.assert_called_once_with( self.fake_driver._execute, gmgr1.export, tmpdir) self.assertFalse(self.fake_driver._execute.called) self.assertFalse(common._umount_gluster_vol.called) kwargs = {'ignore_errors': True} shutil.rmtree.assert_called_once_with(tmpdir, **kwargs) def test_wipe_gluster_vol_error_wiping_gluster_vol(self): tmpdir = '/tmp/tmpKGHKJ' gmgr = common.GlusterManager gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None) self._layout.glusterfs_versions = { self.glusterfs_server1: ('3', '6')} cmd = ['find', '/tmp/tmpKGHKJ', '-mindepth', '1', '-delete'] self.mock_object(tempfile, 'mkdtemp', mock.Mock(return_value=tmpdir)) self.mock_object( self.fake_driver, '_execute', mock.Mock(side_effect=exception.ProcessExecutionError)) self.mock_object(common, '_mount_gluster_vol', mock.Mock()) self.mock_object(common, '_umount_gluster_vol', mock.Mock()) self.mock_object(shutil, 'rmtree', mock.Mock()) self.assertRaises(exception.GlusterfsException, self._layout._wipe_gluster_vol, gmgr1) tempfile.mkdtemp.assert_called_once_with() common._mount_gluster_vol.assert_called_once_with( self.fake_driver._execute, gmgr1.export, tmpdir) kwargs = {'run_as_root': True} self.fake_driver._execute.assert_called_once_with( *cmd, **kwargs) common._umount_gluster_vol.assert_called_once_with( self.fake_driver._execute, tmpdir) kwargs = {'ignore_errors': True} shutil.rmtree.assert_called_once_with(tmpdir, **kwargs) def test_create_share(self): self._layout._pop_gluster_vol = mock.Mock( return_value=self.glusterfs_target1) gmgr1 = common.GlusterManager(self.glusterfs_target1) gmgr1.set_vol_option = mock.Mock() self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=gmgr1)) self.mock_object(self.fake_driver, '_setup_via_manager', mock.Mock(return_value='host1:/gv1')) share = new_share() exp_locn = self._layout.create_share(self._context, share) self._layout._pop_gluster_vol.assert_called_once_with(share['size']) self.fake_driver._setup_via_manager.assert_called_once_with( {'manager': gmgr1, 'share': share}) self._layout.private_storage.update.assert_called_once_with( share['id'], {'volume': self.glusterfs_target1}) gmgr1.set_vol_option.assert_called_once_with( 'user.manila-share', share['id']) self.assertEqual('host1:/gv1', exp_locn) def test_create_share_error(self): self._layout._pop_gluster_vol = mock.Mock( side_effect=exception.GlusterfsException) share = new_share() self.assertRaises(exception.GlusterfsException, self._layout.create_share, self._context, share) self._layout._pop_gluster_vol.assert_called_once_with( share['size']) @ddt.data(None, '', 'Eeyore') def test_delete_share(self, clone_of): self._layout._push_gluster_vol = mock.Mock() self._layout._wipe_gluster_vol = 
mock.Mock() gmgr = common.GlusterManager gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None) gmgr1.set_vol_option = mock.Mock() gmgr1.get_vol_option = mock.Mock(return_value=clone_of) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=gmgr1)) self._layout.gluster_used_vols = set([self.glusterfs_target1]) self._layout.delete_share(self._context, self.share1) gmgr1.get_vol_option.assert_called_once_with( 'user.manila-cloned-from') self._layout._wipe_gluster_vol.assert_called_once_with(gmgr1) self._layout._push_gluster_vol.assert_called_once_with( self.glusterfs_target1) self._layout.private_storage.delete.assert_called_once_with( self.share1['id']) gmgr1.set_vol_option.assert_called_once_with( 'user.manila-share', 'NONE') def test_delete_share_clone(self): self._layout._push_gluster_vol = mock.Mock() self._layout._wipe_gluster_vol = mock.Mock() gmgr = common.GlusterManager gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None) gmgr1.gluster_call = mock.Mock() gmgr1.get_vol_option = mock.Mock(return_value=FAKE_UUID1) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=gmgr1)) self._layout.gluster_used_vols = set([self.glusterfs_target1]) self._layout.delete_share(self._context, self.share1) gmgr1.get_vol_option.assert_called_once_with( 'user.manila-cloned-from') self.assertFalse(self._layout._wipe_gluster_vol.called) self._layout._push_gluster_vol.assert_called_once_with( self.glusterfs_target1) self._layout.private_storage.delete.assert_called_once_with( self.share1['id']) gmgr1.gluster_call.assert_called_once_with( 'volume', 'delete', 'gv1') def test_delete_share_error(self): self._layout._wipe_gluster_vol = mock.Mock() self._layout._wipe_gluster_vol.side_effect = ( exception.GlusterfsException) self._layout._push_gluster_vol = mock.Mock() gmgr = common.GlusterManager gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None) gmgr1.get_vol_option = mock.Mock(return_value=None) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=gmgr1)) self._layout.gluster_used_vols = set([self.glusterfs_target1]) self.assertRaises(exception.GlusterfsException, self._layout.delete_share, self._context, self.share1) self._layout._wipe_gluster_vol.assert_called_once_with(gmgr1) self.assertFalse(self._layout._push_gluster_vol.called) def test_delete_share_missing_record(self): self.mock_object(self._layout, '_share_manager', mock.Mock(return_value=None)) self._layout.delete_share(self._context, self.share1) self._layout._share_manager.assert_called_once_with(self.share1) def test_create_snapshot(self): self._layout.gluster_nosnap_vols_dict = {} self._layout.glusterfs_versions = {self.glusterfs_server1: ('3', '6')} gmgr = common.GlusterManager gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None) self._layout.gluster_used_vols = set([self.glusterfs_target1]) self.mock_object(gmgr1, 'gluster_call', mock.Mock( side_effect=(glusterXMLOut(ret=0, errno=0),))) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=gmgr1)) snapshot = { 'id': 'fake_snap_id', 'share_id': self.share1['id'], 'share': self.share1 } ret = self._layout.create_snapshot(self._context, snapshot) self.assertIsNone(ret) args = ('--xml', 'snapshot', 'create', 'manila-fake_snap_id', gmgr1.volume) gmgr1.gluster_call.assert_called_once_with(*args, log=mock.ANY) @ddt.data({'side_effect': (glusterXMLOut(ret=-1, errno=2),), '_exception': exception.GlusterfsException}, {'side_effect': (('', ''),), '_exception': 
exception.GlusterfsException}) @ddt.unpack def test_create_snapshot_error(self, side_effect, _exception): self._layout.gluster_nosnap_vols_dict = {} self._layout.glusterfs_versions = {self.glusterfs_server1: ('3', '6')} gmgr = common.GlusterManager gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None) self._layout.gluster_used_vols = set([self.glusterfs_target1]) self.mock_object(gmgr1, 'gluster_call', mock.Mock(side_effect=side_effect)) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=gmgr1)) snapshot = { 'id': 'fake_snap_id', 'share_id': self.share1['id'], 'share': self.share1 } self.assertRaises(_exception, self._layout.create_snapshot, self._context, snapshot) args = ('--xml', 'snapshot', 'create', 'manila-fake_snap_id', gmgr1.volume) gmgr1.gluster_call.assert_called_once_with(*args, log=mock.ANY) @ddt.data({"vers_minor": '6', "exctype": exception.GlusterfsException}, {"vers_minor": '7', "exctype": exception.ShareSnapshotNotSupported}) @ddt.unpack def test_create_snapshot_no_snap(self, vers_minor, exctype): self._layout.gluster_nosnap_vols_dict = {} self._layout.glusterfs_versions = { self.glusterfs_server1: ('3', vers_minor)} gmgr = common.GlusterManager gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None) self._layout.gluster_used_vols = set([self.glusterfs_target1]) self.mock_object(gmgr1, 'gluster_call', mock.Mock( side_effect=(glusterXMLOut(ret=-1, errno=0),))) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=gmgr1)) snapshot = { 'id': 'fake_snap_id', 'share_id': self.share1['id'], 'share': self.share1 } self.assertRaises(exctype, self._layout.create_snapshot, self._context, snapshot) args = ('--xml', 'snapshot', 'create', 'manila-fake_snap_id', gmgr1.volume) gmgr1.gluster_call.assert_called_once_with(*args, log=mock.ANY) @ddt.data({"vers_minor": '6', "exctype": exception.GlusterfsException}, {"vers_minor": '7', "exctype": exception.ShareSnapshotNotSupported}) @ddt.unpack def test_create_snapshot_no_snap_cached(self, vers_minor, exctype): self._layout.gluster_nosnap_vols_dict = { self.glusterfs_target1: 'fake error'} self._layout.glusterfs_versions = { self.glusterfs_server1: ('3', vers_minor)} self._layout.gluster_used_vols = set([self.glusterfs_target1]) gmgr = common.GlusterManager gmgr1 = gmgr(self.glusterfs_target1, self._execute, None, None) self.mock_object(self._layout, '_share_manager', mock.Mock(return_value=gmgr1)) snapshot = { 'id': 'fake_snap_id', 'share_id': self.share1['id'], 'share': self.share1 } self.assertRaises(exctype, self._layout.create_snapshot, self._context, snapshot) def test_find_actual_backend_snapshot_name(self): gmgr = common.GlusterManager gmgr1 = gmgr(self.share1['export_location'], self._execute, None, None) self.mock_object(gmgr1, 'gluster_call', mock.Mock(return_value=('fake_snap_id_xyz', ''))) snapshot = { 'id': 'fake_snap_id', 'share_id': self.share1['id'], 'share': self.share1 } ret = self._layout._find_actual_backend_snapshot_name(gmgr1, snapshot) args = ('snapshot', 'list', gmgr1.volume, '--mode=script') gmgr1.gluster_call.assert_called_once_with(*args, log=mock.ANY) self.assertEqual('fake_snap_id_xyz', ret) @ddt.data('this is too bad', 'fake_snap_id_xyx\nfake_snap_id_pqr') def test_find_actual_backend_snapshot_name_bad_snap_list(self, snaplist): gmgr = common.GlusterManager gmgr1 = gmgr(self.share1['export_location'], self._execute, None, None) self.mock_object(gmgr1, 'gluster_call', mock.Mock(return_value=(snaplist, ''))) snapshot = { 'id': 'fake_snap_id', 'share_id': 
self.share1['id'], 'share': self.share1 } self.assertRaises(exception.GlusterfsException, self._layout._find_actual_backend_snapshot_name, gmgr1, snapshot) args = ('snapshot', 'list', gmgr1.volume, '--mode=script') gmgr1.gluster_call.assert_called_once_with(*args, log=mock.ANY) @ddt.data({'glusterfs_target': 'root@host1:/gv1', 'glusterfs_server': 'root@host1'}, {'glusterfs_target': 'host1:/gv1', 'glusterfs_server': 'host1'}) @ddt.unpack def test_create_share_from_snapshot(self, glusterfs_target, glusterfs_server): share = new_share() snapshot = { 'id': 'fake_snap_id', 'share_instance': new_share(export_location=glusterfs_target), 'share_id': 'fake_share_id', } volume = ''.join(['manila-', share['id']]) new_vol_addr = ':/'.join([glusterfs_server, volume]) gmgr = common.GlusterManager old_gmgr = gmgr(glusterfs_target, self._execute, None, None) new_gmgr = gmgr(new_vol_addr, self._execute, None, None) self._layout.gluster_used_vols = set([glusterfs_target]) self._layout.glusterfs_versions = {glusterfs_server: ('3', '7')} self.mock_object(old_gmgr, 'gluster_call', mock.Mock(side_effect=[('', ''), ('', '')])) self.mock_object(new_gmgr, 'gluster_call', mock.Mock(side_effect=[('', ''), ('', ''), ('', '')])) self.mock_object(new_gmgr, 'get_vol_option', mock.Mock()) new_gmgr.get_vol_option.return_value = ( 'glusterfs-server-1,client') self.mock_object(self._layout, '_find_actual_backend_snapshot_name', mock.Mock(return_value='fake_snap_id_xyz')) self.mock_object(self._layout, '_share_manager', mock.Mock(return_value=old_gmgr)) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=new_gmgr)) self.mock_object(self.fake_driver, '_setup_via_manager', mock.Mock(return_value='host1:/gv1')) ret = self._layout.create_share_from_snapshot( self._context, share, snapshot, None) (self._layout._find_actual_backend_snapshot_name. 
assert_called_once_with(old_gmgr, snapshot)) args = (('snapshot', 'activate', 'fake_snap_id_xyz', 'force', '--mode=script'), ('snapshot', 'clone', volume, 'fake_snap_id_xyz')) old_gmgr.gluster_call.assert_has_calls( [mock.call(*a, log=mock.ANY) for a in args]) args = (('volume', 'start', volume), ('volume', 'set', volume, 'user.manila-share', share['id']), ('volume', 'set', volume, 'user.manila-cloned-from', snapshot['share_id'])) new_gmgr.gluster_call.assert_has_calls( [mock.call(*a, log=mock.ANY) for a in args], any_order=True) self._layout._share_manager.assert_called_once_with( snapshot['share_instance']) self._layout._glustermanager.assert_called_once_with( gmgr.parse(new_vol_addr)) self._layout.driver._setup_via_manager.assert_called_once_with( {'manager': new_gmgr, 'share': share}, {'manager': old_gmgr, 'share': snapshot['share_instance']}) self._layout.private_storage.update.assert_called_once_with( share['id'], {'volume': new_vol_addr}) self.assertIn( new_vol_addr, self._layout.gluster_used_vols) self.assertEqual('host1:/gv1', ret) def test_create_share_from_snapshot_error_unsupported_gluster_version( self): glusterfs_target = 'root@host1:/gv1' glusterfs_server = 'root@host1' share = new_share() volume = ''.join(['manila-', share['id']]) new_vol_addr = ':/'.join([glusterfs_server, volume]) gmgr = common.GlusterManager old_gmgr = gmgr(glusterfs_target, self._execute, None, None) new_gmgr = gmgr(new_vol_addr, self._execute, None, None) self._layout.gluster_used_vols_dict = {glusterfs_target: old_gmgr} self._layout.glusterfs_versions = {glusterfs_server: ('3', '6')} self.mock_object( old_gmgr, 'gluster_call', mock.Mock(side_effect=[('', ''), ('', '')])) self.mock_object(new_gmgr, 'get_vol_option', mock.Mock()) new_gmgr.get_vol_option.return_value = ( 'glusterfs-server-1,client') self.mock_object(self._layout, '_find_actual_backend_snapshot_name', mock.Mock(return_value='fake_snap_id_xyz')) self.mock_object(self._layout, '_share_manager', mock.Mock(return_value=old_gmgr)) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=new_gmgr)) snapshot = { 'id': 'fake_snap_id', 'share_instance': new_share(export_location=glusterfs_target) } self.assertRaises(exception.GlusterfsException, self._layout.create_share_from_snapshot, self._context, share, snapshot) self.assertFalse( self._layout._find_actual_backend_snapshot_name.called) self.assertFalse(old_gmgr.gluster_call.called) self._layout._share_manager.assert_called_once_with( snapshot['share_instance']) self.assertFalse(self._layout._glustermanager.called) self.assertFalse(new_gmgr.get_vol_option.called) self.assertFalse(new_gmgr.gluster_call.called) self.assertNotIn(new_vol_addr, self._layout.glusterfs_versions.keys()) def test_delete_snapshot(self): self._layout.gluster_nosnap_vols_dict = {} gmgr = common.GlusterManager gmgr1 = gmgr(self.share1['export_location'], self._execute, None, None) self._layout.gluster_used_vols = set([self.glusterfs_target1]) self.mock_object(self._layout, '_find_actual_backend_snapshot_name', mock.Mock(return_value='fake_snap_id_xyz')) self.mock_object( gmgr1, 'gluster_call', mock.Mock(return_value=glusterXMLOut(ret=0, errno=0))) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=gmgr1)) snapshot = { 'id': 'fake_snap_id', 'share_id': self.share1['id'], 'share': self.share1 } ret = self._layout.delete_snapshot(self._context, snapshot) self.assertIsNone(ret) args = ('--xml', 'snapshot', 'delete', 'fake_snap_id_xyz', '--mode=script') 
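# The snapshot create/delete tests around this point drive the gluster CLI
# with '--xml' and stub its reply via glusterXMLOut(); the code under test
# then inspects the opRet/opErrno fields of the returned cliOutput document.
# A minimal sketch of that success check, with an assumed (typical) XML shape:
from xml.etree import ElementTree


def gluster_xml_call_succeeded(xml_text):
    """Return True when a '--xml' gluster call reports opRet == opErrno == 0."""
    root = ElementTree.fromstring(xml_text)
    return root.findtext('opRet') == '0' and root.findtext('opErrno') == '0'


# gluster_xml_call_succeeded(
#     '<cliOutput><opRet>0</opRet><opErrno>0</opErrno><opErrstr/></cliOutput>')
# -> True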
gmgr1.gluster_call.assert_called_once_with(*args, log=mock.ANY) (self._layout._find_actual_backend_snapshot_name. assert_called_once_with(gmgr1, snapshot)) @ddt.data({'side_effect': (glusterXMLOut(ret=-1, errno=0),), '_exception': exception.GlusterfsException}, {'side_effect': (('', ''),), '_exception': exception.GlusterfsException}) @ddt.unpack def test_delete_snapshot_error(self, side_effect, _exception): self._layout.gluster_nosnap_vols_dict = {} gmgr = common.GlusterManager gmgr1 = gmgr(self.share1['export_location'], self._execute, None, None) self._layout.gluster_used_vols = set([self.glusterfs_target1]) self.mock_object(self._layout, '_find_actual_backend_snapshot_name', mock.Mock(return_value='fake_snap_id_xyz')) args = ('--xml', 'snapshot', 'delete', 'fake_snap_id_xyz', '--mode=script') self.mock_object( gmgr1, 'gluster_call', mock.Mock(side_effect=side_effect)) self.mock_object(self._layout, '_glustermanager', mock.Mock(return_value=gmgr1)) snapshot = { 'id': 'fake_snap_id', 'share_id': self.share1['id'], 'share': self.share1 } self.assertRaises(_exception, self._layout.delete_snapshot, self._context, snapshot) gmgr1.gluster_call.assert_called_once_with(*args, log=mock.ANY) (self._layout._find_actual_backend_snapshot_name. assert_called_once_with(gmgr1, snapshot)) @ddt.data( ('manage_existing', ('share', 'driver_options'), {}), ('unmanage', ('share',), {}), ('extend_share', ('share', 'new_size'), {'share_server': None}), ('shrink_share', ('share', 'new_size'), {'share_server': None})) def test_nonimplemented_methods(self, method_invocation): method, args, kwargs = method_invocation self.assertRaises(NotImplementedError, getattr(self._layout, method), *args, **kwargs) manila-10.0.0/manila/tests/share/drivers/glusterfs/test_layout_directory.py0000664000175000017500000004772113656750227027316 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
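# NOTE: The tests below exercise GlusterfsDirectoryMappedLayout, which maps
# each Manila share to a directory on a single GlusterFS volume and enforces
# share size through directory quotas. As the assertions in this module show,
# sizing operations reduce to gluster CLI calls of the form (illustration
# only, not part of the driver code):
#
#     gluster_manager.gluster_call(
#         'volume', 'quota', 'testvol', 'limit-usage', '/fakename', '1GB')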
import os from unittest import mock import ddt from oslo_config import cfg from manila import context from manila import exception from manila.share import configuration as config from manila.share.drivers.glusterfs import common from manila.share.drivers.glusterfs import layout_directory from manila import test from manila.tests import fake_share from manila.tests import fake_utils CONF = cfg.CONF fake_gluster_manager_attrs = { 'export': '127.0.0.1:/testvol', 'host': '127.0.0.1', 'qualified': 'testuser@127.0.0.1:/testvol', 'user': 'testuser', 'volume': 'testvol', 'path_to_private_key': '/fakepath/to/privatekey', 'remote_server_password': 'fakepassword', 'components': {'user': 'testuser', 'host': '127.0.0.1', 'volume': 'testvol', 'path': None} } fake_local_share_path = '/mnt/nfs/testvol/fakename' fake_path_to_private_key = '/fakepath/to/privatekey' fake_remote_server_password = 'fakepassword' @ddt.ddt class GlusterfsDirectoryMappedLayoutTestCase(test.TestCase): """Tests GlusterfsDirectoryMappedLayout.""" def setUp(self): super(GlusterfsDirectoryMappedLayoutTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) self._execute = fake_utils.fake_execute self._context = context.get_admin_context() self.addCleanup(fake_utils.fake_execute_set_repliers, []) self.addCleanup(fake_utils.fake_execute_clear_log) CONF.set_default('glusterfs_target', '127.0.0.1:/testvol') CONF.set_default('glusterfs_mount_point_base', '/mnt/nfs') CONF.set_default('glusterfs_server_password', fake_remote_server_password) CONF.set_default('glusterfs_path_to_private_key', fake_path_to_private_key) self.fake_driver = mock.Mock() self.mock_object(self.fake_driver, '_execute', self._execute) self.fake_driver.GLUSTERFS_VERSION_MIN = (3, 6) self.fake_conf = config.Configuration(None) self.mock_object(common.GlusterManager, 'make_gluster_call') self._layout = layout_directory.GlusterfsDirectoryMappedLayout( self.fake_driver, configuration=self.fake_conf) self._layout.gluster_manager = mock.Mock(**fake_gluster_manager_attrs) self.share = fake_share.fake_share(share_proto='NFS') def test_do_setup(self): fake_gluster_manager = mock.Mock(**fake_gluster_manager_attrs) self.mock_object(fake_gluster_manager, 'get_gluster_version', mock.Mock(return_value=('3', '5'))) methods = ('_check_mount_glusterfs', '_ensure_gluster_vol_mounted') for method in methods: self.mock_object(self._layout, method) self.mock_object(common, 'GlusterManager', mock.Mock(return_value=fake_gluster_manager)) self._layout.do_setup(self._context) self.assertEqual(fake_gluster_manager, self._layout.gluster_manager) common.GlusterManager.assert_called_once_with( self._layout.configuration.glusterfs_target, self._execute, self._layout.configuration.glusterfs_path_to_private_key, self._layout.configuration.glusterfs_server_password, requires={'volume': True}) self._layout.gluster_manager.gluster_call.assert_called_once_with( 'volume', 'quota', 'testvol', 'enable') self._layout._check_mount_glusterfs.assert_called_once_with() self._layout._ensure_gluster_vol_mounted.assert_called_once_with() def test_do_setup_glusterfs_target_not_set(self): self._layout.configuration.glusterfs_target = None self.assertRaises(exception.GlusterfsException, self._layout.do_setup, self._context) def test_do_setup_error_enabling_creation_share_specific_size(self): attrs = {'volume': 'testvol', 'gluster_call.side_effect': exception.GlusterfsException, 'get_vol_option.return_value': 'off'} fake_gluster_manager = mock.Mock(**attrs) self.mock_object(layout_directory.LOG, 'exception') 
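        # With gluster_call failing and 'features.quota' reporting 'off', the
        # layout cannot confirm that quota support is enabled, so do_setup is
        # expected to log the exception and re-raise GlusterfsException
        # (asserted below).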
methods = ('_check_mount_glusterfs', '_ensure_gluster_vol_mounted') for method in methods: self.mock_object(self._layout, method) self.mock_object(common, 'GlusterManager', mock.Mock(return_value=fake_gluster_manager)) self.assertRaises(exception.GlusterfsException, self._layout.do_setup, self._context) self.assertEqual(fake_gluster_manager, self._layout.gluster_manager) common.GlusterManager.assert_called_once_with( self._layout.configuration.glusterfs_target, self._execute, self._layout.configuration.glusterfs_path_to_private_key, self._layout.configuration.glusterfs_server_password, requires={'volume': True}) self._layout.gluster_manager.gluster_call.assert_called_once_with( 'volume', 'quota', 'testvol', 'enable') (self._layout.gluster_manager.get_vol_option. assert_called_once_with('features.quota')) layout_directory.LOG.exception.assert_called_once_with(mock.ANY) self._layout._check_mount_glusterfs.assert_called_once_with() self.assertFalse(self._layout._ensure_gluster_vol_mounted.called) def test_do_setup_error_already_enabled_creation_share_specific_size(self): attrs = {'volume': 'testvol', 'gluster_call.side_effect': exception.GlusterfsException, 'get_vol_option.return_value': 'on'} fake_gluster_manager = mock.Mock(**attrs) self.mock_object(layout_directory.LOG, 'error') methods = ('_check_mount_glusterfs', '_ensure_gluster_vol_mounted') for method in methods: self.mock_object(self._layout, method) self.mock_object(common, 'GlusterManager', mock.Mock(return_value=fake_gluster_manager)) self._layout.do_setup(self._context) self.assertEqual(fake_gluster_manager, self._layout.gluster_manager) common.GlusterManager.assert_called_once_with( self._layout.configuration.glusterfs_target, self._execute, self._layout.configuration.glusterfs_path_to_private_key, self._layout.configuration.glusterfs_server_password, requires={'volume': True}) self._layout.gluster_manager.gluster_call.assert_called_once_with( 'volume', 'quota', 'testvol', 'enable') (self._layout.gluster_manager.get_vol_option. 
assert_called_once_with('features.quota')) self.assertFalse(layout_directory.LOG.error.called) self._layout._check_mount_glusterfs.assert_called_once_with() self._layout._ensure_gluster_vol_mounted.assert_called_once_with() def test_share_manager(self): self._layout._glustermanager = mock.Mock() self._layout._share_manager(self.share) self._layout._glustermanager.assert_called_once_with( {'user': 'testuser', 'host': '127.0.0.1', 'volume': 'testvol', 'path': '/fakename'}) def test_ensure_gluster_vol_mounted(self): common._mount_gluster_vol = mock.Mock() self._layout._ensure_gluster_vol_mounted() self.assertTrue(common._mount_gluster_vol.called) def test_ensure_gluster_vol_mounted_error(self): common._mount_gluster_vol = ( mock.Mock(side_effect=exception.GlusterfsException)) self.assertRaises(exception.GlusterfsException, self._layout._ensure_gluster_vol_mounted) def test_get_local_share_path(self): with mock.patch.object(os, 'access', return_value=True): ret = self._layout._get_local_share_path(self.share) self.assertEqual('/mnt/nfs/testvol/fakename', ret) def test_local_share_path_not_exists(self): with mock.patch.object(os, 'access', return_value=False): self.assertRaises(exception.GlusterfsException, self._layout._get_local_share_path, self.share) def test_update_share_stats(self): test_statvfs = mock.Mock(f_frsize=4096, f_blocks=524288, f_bavail=524288) self._layout._get_mount_point_for_gluster_vol = ( mock.Mock(return_value='/mnt/nfs/testvol')) some_no = 42 not_some_no = some_no + 1 os_stat = (lambda path: mock.Mock(st_dev=some_no) if path == '/mnt/nfs' else mock.Mock(st_dev=not_some_no)) with mock.patch.object(os, 'statvfs', return_value=test_statvfs): with mock.patch.object(os, 'stat', os_stat): ret = self._layout._update_share_stats() test_data = { 'total_capacity_gb': 2, 'free_capacity_gb': 2, } self.assertEqual(test_data, ret) def test_update_share_stats_gluster_mnt_unavailable(self): self._layout._get_mount_point_for_gluster_vol = ( mock.Mock(return_value='/mnt/nfs/testvol')) some_no = 42 with mock.patch.object(os, 'stat', return_value=mock.Mock(st_dev=some_no)): self.assertRaises(exception.GlusterfsException, self._layout._update_share_stats) @ddt.data((), (None,)) def test_create_share(self, extra_args): exec_cmd1 = 'mkdir %s' % fake_local_share_path expected_exec = [exec_cmd1, ] expected_ret = 'testuser@127.0.0.1:/testvol/fakename' self.mock_object( self._layout, '_get_local_share_path', mock.Mock(return_value=fake_local_share_path)) gmgr = mock.Mock() self.mock_object( self._layout, '_glustermanager', mock.Mock(return_value=gmgr)) self.mock_object( self._layout.driver, '_setup_via_manager', mock.Mock(return_value=expected_ret)) ret = self._layout.create_share(self._context, self.share, *extra_args) self._layout._get_local_share_path.called_once_with(self.share) self._layout.gluster_manager.gluster_call.assert_called_once_with( 'volume', 'quota', 'testvol', 'limit-usage', '/fakename', '1GB') self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) self._layout._glustermanager.assert_called_once_with( {'user': 'testuser', 'host': '127.0.0.1', 'volume': 'testvol', 'path': '/fakename'}) self._layout.driver._setup_via_manager.assert_called_once_with( {'share': self.share, 'manager': gmgr}) self.assertEqual(expected_ret, ret) @ddt.data(exception.ProcessExecutionError, exception.GlusterfsException) def test_create_share_unable_to_create_share(self, trouble): def exec_runner(*ignore_args, **ignore_kw): raise trouble self.mock_object( self._layout, '_get_local_share_path', 
mock.Mock(return_value=fake_local_share_path)) self.mock_object(self._layout, '_cleanup_create_share') self.mock_object(layout_directory.LOG, 'error') expected_exec = ['mkdir %s' % fake_local_share_path] fake_utils.fake_execute_set_repliers([(expected_exec[0], exec_runner)]) self.assertRaises( exception.GlusterfsException, self._layout.create_share, self._context, self.share) self._layout._get_local_share_path.called_once_with(self.share) self._layout._cleanup_create_share.assert_called_once_with( fake_local_share_path, self.share['name']) layout_directory.LOG.error.assert_called_once_with( mock.ANY, mock.ANY) def test_create_share_unable_to_create_share_weird(self): def exec_runner(*ignore_args, **ignore_kw): raise RuntimeError self.mock_object( self._layout, '_get_local_share_path', mock.Mock(return_value=fake_local_share_path)) self.mock_object(self._layout, '_cleanup_create_share') self.mock_object(layout_directory.LOG, 'error') expected_exec = ['mkdir %s' % fake_local_share_path] fake_utils.fake_execute_set_repliers([(expected_exec[0], exec_runner)]) self.assertRaises( RuntimeError, self._layout.create_share, self._context, self.share) self._layout._get_local_share_path.called_once_with(self.share) self.assertFalse(self._layout._cleanup_create_share.called) def test_cleanup_create_share_local_share_path_exists(self): expected_exec = ['rm -rf %s' % fake_local_share_path] self.mock_object(os.path, 'exists', mock.Mock(return_value=True)) ret = self._layout._cleanup_create_share(fake_local_share_path, self.share['name']) os.path.exists.assert_called_once_with(fake_local_share_path) self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) self.assertIsNone(ret) def test_cleanup_create_share_cannot_cleanup_unusable_share(self): def exec_runner(*ignore_args, **ignore_kw): raise exception.ProcessExecutionError expected_exec = ['rm -rf %s' % fake_local_share_path] fake_utils.fake_execute_set_repliers([(expected_exec[0], exec_runner)]) self.mock_object(layout_directory.LOG, 'error') self.mock_object(os.path, 'exists', mock.Mock(return_value=True)) self.assertRaises(exception.GlusterfsException, self._layout._cleanup_create_share, fake_local_share_path, self.share['name']) os.path.exists.assert_called_once_with(fake_local_share_path) layout_directory.LOG.error.assert_called_once_with(mock.ANY, mock.ANY) def test_cleanup_create_share_local_share_path_does_not_exist(self): self.mock_object(os.path, 'exists', mock.Mock(return_value=False)) ret = self._layout._cleanup_create_share(fake_local_share_path, self.share['name']) os.path.exists.assert_called_once_with(fake_local_share_path) self.assertIsNone(ret) def test_delete_share(self): self._layout._get_local_share_path = ( mock.Mock(return_value='/mnt/nfs/testvol/fakename')) self._layout.delete_share(self._context, self.share) self.assertEqual(['rm -rf /mnt/nfs/testvol/fakename'], fake_utils.fake_execute_get_log()) def test_cannot_delete_share(self): self._layout._get_local_share_path = ( mock.Mock(return_value='/mnt/nfs/testvol/fakename')) def exec_runner(*ignore_args, **ignore_kw): raise exception.ProcessExecutionError expected_exec = ['rm -rf %s' % (self._layout._get_local_share_path())] fake_utils.fake_execute_set_repliers([(expected_exec[0], exec_runner)]) self.assertRaises(exception.ProcessExecutionError, self._layout.delete_share, self._context, self.share) self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) def test_delete_share_can_be_called_with_extra_arg_share_server(self): self._layout._get_local_share_path = 
mock.Mock() share_server = None ret = self._layout.delete_share(self._context, self.share, share_server) self.assertIsNone(ret) self._layout._get_local_share_path.assert_called_once_with(self.share) def test_ensure_share(self): self.assertIsNone(self._layout.ensure_share(self._context, self.share)) def test_extend_share(self): self._layout.extend_share(self.share, 3) self._layout.gluster_manager.gluster_call.assert_called_once_with( 'volume', 'quota', 'testvol', 'limit-usage', '/fakename', '3GB') def test_shrink_share(self): self.mock_object(self._layout, '_get_directory_usage', mock.Mock(return_value=10.0)) self._layout.shrink_share(self.share, 11) self._layout.gluster_manager.gluster_call.assert_called_once_with( 'volume', 'quota', 'testvol', 'limit-usage', '/fakename', '11GB') def test_shrink_share_data_loss(self): self.mock_object(self._layout, '_get_directory_usage', mock.Mock(return_value=10.0)) shrink_on_gluster = self.mock_object(self._layout, '_set_directory_quota') self.assertRaises(exception.ShareShrinkingPossibleDataLoss, self._layout.shrink_share, self.share, 9) shrink_on_gluster.assert_not_called() def test_set_directory_quota(self): self._layout._set_directory_quota(self.share, 3) self._layout.gluster_manager.gluster_call.assert_called_once_with( 'volume', 'quota', 'testvol', 'limit-usage', '/fakename', '3GB') def test_set_directory_quota_unable_to_set(self): self.mock_object(self._layout.gluster_manager, 'gluster_call', mock.Mock(side_effect=exception.GlusterfsException)) self.assertRaises(exception.GlusterfsException, self._layout._set_directory_quota, self.share, 3) self._layout.gluster_manager.gluster_call.assert_called_once_with( 'volume', 'quota', 'testvol', 'limit-usage', '/fakename', '3GB') def test_get_directory_usage(self): def xml_output(*ignore_args, **ignore_kwargs): return """ 0 0 10737418240 """, '' self.mock_object(self._layout.gluster_manager, 'gluster_call', mock.Mock(side_effect=xml_output)) ret = self._layout._get_directory_usage(self.share) self.assertEqual(10.0, ret) share_dir = '/' + self.share['name'] self._layout.gluster_manager.gluster_call.assert_called_once_with( '--xml', 'volume', 'quota', self._layout.gluster_manager.volume, 'list', share_dir) def test_get_directory_usage_unable_to_get(self): self.mock_object(self._layout.gluster_manager, 'gluster_call', mock.Mock(side_effect=exception.GlusterfsException)) self.assertRaises(exception.GlusterfsException, self._layout._get_directory_usage, self.share) share_dir = '/' + self.share['name'] self._layout.gluster_manager.gluster_call.assert_called_once_with( '--xml', 'volume', 'quota', self._layout.gluster_manager.volume, 'list', share_dir) @ddt.data( ('create_share_from_snapshot', ('context', 'share', 'snapshot'), {'share_server': None}), ('create_snapshot', ('context', 'snapshot'), {'share_server': None}), ('delete_snapshot', ('context', 'snapshot'), {'share_server': None}), ('manage_existing', ('share', 'driver_options'), {}), ('unmanage', ('share',), {})) def test_nonimplemented_methods(self, method_invocation): method, args, kwargs = method_invocation self.assertRaises(NotImplementedError, getattr(self._layout, method), *args, **kwargs) manila-10.0.0/manila/tests/share/drivers/glusterfs/test_glusterfs_native.py0000664000175000017500000002543313656750227027275 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ GlusterFS native protocol (glusterfs) driver for shares. Test cases for GlusterFS native protocol driver. """ from unittest import mock import ddt from oslo_config import cfg from manila.common import constants from manila import context from manila import exception from manila.share import configuration as config from manila.share.drivers.glusterfs import common from manila.share.drivers.glusterfs import glusterfs_native from manila import test from manila.tests import fake_utils CONF = cfg.CONF def new_share(**kwargs): share = { 'id': 'fakeid', 'name': 'fakename', 'size': 1, 'share_proto': 'glusterfs', } share.update(kwargs) return share @ddt.ddt class GlusterfsNativeShareDriverTestCase(test.TestCase): """Tests GlusterfsNativeShareDriver.""" def setUp(self): super(GlusterfsNativeShareDriverTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) self._execute = fake_utils.fake_execute self._context = context.get_admin_context() self.glusterfs_target1 = 'root@host1:/gv1' self.glusterfs_target2 = 'root@host2:/gv2' self.glusterfs_server1 = 'root@host1' self.glusterfs_server2 = 'root@host2' self.glusterfs_server1_volumes = 'manila-share-1-1G\nshare1' self.glusterfs_server2_volumes = 'manila-share-2-2G\nshare2' self.share1 = new_share( export_location=self.glusterfs_target1, status=constants.STATUS_AVAILABLE) self.share2 = new_share( export_location=self.glusterfs_target2, status=constants.STATUS_AVAILABLE) self.gmgr1 = common.GlusterManager(self.glusterfs_server1, self._execute, None, None, requires={'volume': False}) self.gmgr2 = common.GlusterManager(self.glusterfs_server2, self._execute, None, None, requires={'volume': False}) self.glusterfs_volumes_dict = ( {'root@host1:/manila-share-1-1G': {'size': 1}, 'root@host2:/manila-share-2-2G': {'size': 2}}) self.glusterfs_used_vols = set([ 'root@host1:/manila-share-1-1G', 'root@host2:/manila-share-2-2G']) CONF.set_default('glusterfs_volume_pattern', r'manila-share-\d+-#{size}G$') CONF.set_default('driver_handles_share_servers', False) self.fake_conf = config.Configuration(None) self.mock_object(common.GlusterManager, 'make_gluster_call') self._driver = glusterfs_native.GlusterfsNativeShareDriver( execute=self._execute, configuration=self.fake_conf) self.addCleanup(fake_utils.fake_execute_set_repliers, []) self.addCleanup(fake_utils.fake_execute_clear_log) def test_supported_protocols(self): self.assertEqual(('GLUSTERFS', ), self._driver.supported_protocols) def test_setup_via_manager(self): gmgr = mock.Mock() gmgr.gluster_call = mock.Mock() gmgr.set_vol_option = mock.Mock() gmgr.volume = 'fakevol' gmgr.export = 'fakehost:/fakevol' gmgr.get_vol_option = mock.Mock( return_value='glusterfs-server-name,some-other-name') share = mock.Mock() settings = ( ('nfs.export-volumes', False, {}), ('client.ssl', True, {}), ('server.ssl', True, {}), ('server.dynamic-auth', True, {'ignore_failure': True}), ) call_args = ( ('volume', 'stop', 'fakevol', '--mode=script', {'log': mock.ANY}), ('volume', 'start', 'fakevol', {'log': mock.ANY}), ) ret = self._driver._setup_via_manager({'manager': gmgr, 'share': share}) 
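        # _setup_via_manager is expected to read the volume's auth.ssl-allow
        # list, apply the SSL/export settings enumerated above, restart the
        # volume (stop/start in script mode), and return its export location;
        # each of those interactions is verified below.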
gmgr.get_vol_option.assert_called_once_with('auth.ssl-allow') gmgr.set_vol_option.assert_has_calls( [mock.call(*a[:-1], **a[-1]) for a in settings]) gmgr.gluster_call.assert_has_calls( [mock.call(*a[:-1], **a[-1]) for a in call_args]) self.assertEqual(ret, gmgr.export) def test_setup_via_manager_with_parent(self): gmgr = mock.Mock() gmgr.set_vol_option = mock.Mock() gmgr.volume = 'fakevol' gmgr.export = 'fakehost:/fakevol' gmgr_parent = mock.Mock() gmgr_parent.get_vol_option = mock.Mock( return_value=( 'glusterfs-server-name,some-other-name,manila-host.com')) share = mock.Mock() share_parent = mock.Mock() settings = ( ('auth.ssl-allow', 'glusterfs-server-name,manila-host.com', {}), ('nfs.export-volumes', False, {}), ('client.ssl', True, {}), ('server.ssl', True, {}), ('server.dynamic-auth', True, {'ignore_failure': True}), ) ret = self._driver._setup_via_manager( {'manager': gmgr, 'share': share}, {'manager': gmgr_parent, 'share': share_parent}) gmgr_parent.get_vol_option.assert_called_once_with( 'auth.ssl-allow') gmgr.set_vol_option.assert_has_calls( [mock.call(*a[:-1], **a[-1]) for a in settings]) self.assertEqual(ret, gmgr.export) @ddt.data(True, False) def test_setup_via_manager_no_option_data(self, has_parent): share = mock.Mock() gmgr = mock.Mock() if has_parent: share_parent = mock.Mock() gmgr_parent = mock.Mock() share_mgr_parent = {'share': share_parent, 'manager': gmgr_parent} gmgr_queried = gmgr_parent else: share_mgr_parent = None gmgr_queried = gmgr gmgr_queried.get_vol_option = mock.Mock(return_value='') self.assertRaises(exception.GlusterfsException, self._driver._setup_via_manager, {'share': share, 'manager': gmgr}, share_mgr_parent=share_mgr_parent) gmgr_queried.get_vol_option.assert_called_once_with( 'auth.ssl-allow') def test_snapshots_are_supported(self): self.assertTrue(self._driver.snapshots_are_supported) @ddt.data({'delta': (["oldCN"], []), 'expected': "glusterCN,oldCN"}, {'delta': (["newCN"], []), 'expected': "glusterCN,newCN,oldCN"}, {'delta': ([], ["newCN"]), 'expected': "glusterCN,oldCN"}, {'delta': ([], ["oldCN"]), 'expected': "glusterCN"}) @ddt.unpack def test_update_access_via_manager(self, delta, expected): gluster_mgr = common.GlusterManager(self.glusterfs_target1, self._execute, None, None) self.mock_object( gluster_mgr, 'get_vol_option', mock.Mock( side_effect=lambda a, *x, **kw: { 'auth.ssl-allow': "glusterCN,oldCN", 'server.dynamic-auth': True}[a])) self.mock_object(gluster_mgr, 'set_vol_option') add_rules, delete_rules = ( map(lambda a: {'access_to': a}, r) for r in delta) self._driver._update_access_via_manager( gluster_mgr, self._context, self.share1, add_rules, delete_rules) argseq = [('auth.ssl-allow', {})] if delete_rules: argseq.append(('server.dynamic-auth', {'boolean': True})) self.assertEqual([mock.call(a[0], **a[1]) for a in argseq], gluster_mgr.get_vol_option.call_args_list) gluster_mgr.set_vol_option.assert_called_once_with('auth.ssl-allow', expected) def test_update_access_via_manager_restart(self): gluster_mgr = common.GlusterManager(self.glusterfs_target1, self._execute, None, None) self.mock_object( gluster_mgr, 'get_vol_option', mock.Mock( side_effect=lambda a, *x, **kw: { 'auth.ssl-allow': "glusterCN,oldCN", 'server.dynamic-auth': False}[a])) self.mock_object(gluster_mgr, 'set_vol_option') self.mock_object(common, '_restart_gluster_vol') self._driver._update_access_via_manager( gluster_mgr, self._context, self.share1, [], [{'access_to': "oldCN"}]) common._restart_gluster_vol.assert_called_once_with(gluster_mgr) @ddt.data('common name 
with space', 'comma,nama') def test_update_access_via_manager_badcn(self, common_name): gluster_mgr = common.GlusterManager(self.glusterfs_target1, self._execute, None, None) self.mock_object(gluster_mgr, 'get_vol_option', mock.Mock( return_value="glusterCN,oldCN")) self.assertRaises(exception.GlusterfsException, self._driver._update_access_via_manager, gluster_mgr, self._context, self.share1, [{'access_to': common_name}], []) def test_update_share_stats(self): self._driver._update_share_stats() test_data = { 'share_backend_name': 'GlusterFS-Native', 'driver_handles_share_servers': False, 'vendor_name': 'Red Hat', 'driver_version': '1.1', 'storage_protocol': 'glusterfs', 'reserved_percentage': 0, 'qos': False, 'total_capacity_gb': 'unknown', 'free_capacity_gb': 'unknown', 'pools': None, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'share_group_stats': { 'consistent_snapshot_support': None, }, 'replication_domain': None, 'filter_function': None, 'goodness_function': None, 'ipv4_support': True, 'ipv6_support': False, } self.assertEqual(test_data, self._driver._stats) def test_get_network_allocations_number(self): self.assertEqual(0, self._driver.get_network_allocations_number()) manila-10.0.0/manila/tests/share/drivers/huawei/0000775000175000017500000000000013656750362021535 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/huawei/__init__.py0000664000175000017500000000000013656750227023634 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/huawei/test_huawei_nas.py0000664000175000017500000057576613656750227025323 0ustar zuulzuul00000000000000# Copyright (c) 2014 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
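# NOTE: These tests drive the Huawei V3 connection code against
# FakeHuaweiNasHelper (defined further below), which overrides do_call() to
# strip the device URL prefix and return canned REST JSON payloads per
# endpoint. Individual tests may override a response through the helper's
# custom_results map, keyed by endpoint and HTTP method, e.g. (illustration
# only):
#
#     helper.custom_results['/filesystem/4'] = {
#         "GET": '{"error":{"code":0},"data":{"ID":"4"}}'}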
"""Unit tests for the Huawei nas driver module.""" import os import requests import shutil import six import tempfile import time from unittest import mock import xml.dom.minidom import ddt from oslo_serialization import jsonutils from xml.etree import ElementTree as ET from manila.common import constants as common_constants from manila import context from manila.data import utils as data_utils from manila import db from manila import exception from manila import rpc from manila.share import configuration as conf from manila.share.drivers.huawei import constants from manila.share.drivers.huawei import huawei_nas from manila.share.drivers.huawei.v3 import connection from manila.share.drivers.huawei.v3 import helper from manila.share.drivers.huawei.v3 import replication from manila.share.drivers.huawei.v3 import rpcapi from manila.share.drivers.huawei.v3 import smartx from manila import test from manila import utils def fake_sleep(time): pass def data_session(url): if url == "/xx/sessions": data = """{"error":{"code":0}, "data":{"username":"admin", "iBaseToken":"2001031430", "deviceid":"210235G7J20000000000"}}""" if url == "sessions": data = '{"error":{"code":0},"data":{"ID":11}}' return data def filesystem(method, data, fs_status_flag): extend_share_flag = False shrink_share_flag = False if method == "PUT": if data == """{"CAPACITY": 10485760}""": data = """{"error":{"code":0}, "data":{"ID":"4", "CAPACITY":"8388608"}}""" extend_share_flag = True elif data == """{"CAPACITY": 2097152}""": data = """{"error":{"code":0}, "data":{"ID":"4", "CAPACITY":"2097152"}}""" shrink_share_flag = True elif data == """{"NAME": "share_fake_manage_uuid"}""": data = """{"error":{"code":0}, "data":{"ID":"4", "CAPACITY":"8388608"}}""" elif data == jsonutils.dumps({"ENABLEDEDUP": True, "ENABLECOMPRESSION": True}): data = """{"error":{"code":0}, "data":{"ID":"4", "CAPACITY":"8388608"}}""" elif data == jsonutils.dumps({"ENABLEDEDUP": False, "ENABLECOMPRESSION": False}): data = """{"error":{"code":0}, "data":{"ID":"4", "CAPACITY":"8388608"}}""" elif data == """{"IOPRIORITY": "3"}""": data = """{"error":{"code":0}}""" elif method == "DELETE": data = """{"error":{"code":0}}""" elif method == "GET": if fs_status_flag: data = """{"error":{"code":0}, "data":{"HEALTHSTATUS":"1", "RUNNINGSTATUS":"27", "ALLOCTYPE":"1", "CAPACITY":"8388608", "PARENTNAME":"OpenStack_Pool", "ENABLECOMPRESSION":"false", "ENABLEDEDUP":"false", "CACHEPARTITIONID":"", "SMARTCACHEPARTITIONID":"", "IOCLASSID":"11"}}""" else: data = """{"error":{"code":0}, "data":{"HEALTHSTATUS":"0", "RUNNINGSTATUS":"27", "ALLOCTYPE":"0", "CAPACITY":"8388608", "PARENTNAME":"OpenStack_Pool", "ENABLECOMPRESSION":"false", "ENABLEDEDUP":"false", "CACHEPARTITIONID":"", "SMARTCACHEPARTITIONID":"", "IOCLASSID":"11"}}""" else: data = '{"error":{"code":31755596}}' return (data, extend_share_flag, shrink_share_flag) def filesystem_thick(method, data, fs_status_flag): extend_share_flag = False shrink_share_flag = False if method == "PUT": if data == """{"CAPACITY": 10485760}""": data = """{"error":{"code":0}, "data":{"ID":"5", "CAPACITY":"8388608"}}""" extend_share_flag = True elif data == """{"CAPACITY": 2097152}""": data = """{"error":{"code":0}, "data":{"ID":"5", "CAPACITY":"2097152"}}""" shrink_share_flag = True elif data == """{"NAME": "share_fake_uuid_thickfs"}""": data = """{"error":{"code":0}, "data":{"ID":"5", "CAPACITY":"8388608"}}""" elif data == jsonutils.dumps({"ENABLEDEDUP": False, "ENABLECOMPRESSION": False}): data = """{"error":{"code":0}, "data":{"ID":"5", 
"CAPACITY":"8388608"}}""" elif method == "DELETE": data = """{"error":{"code":0}}""" elif method == "GET": if fs_status_flag: data = """{"error":{"code":0}, "data":{"HEALTHSTATUS":"1", "RUNNINGSTATUS":"27", "ALLOCTYPE":"0", "CAPACITY":"8388608", "PARENTNAME":"OpenStack_Pool_Thick", "ENABLECOMPRESSION":"false", "ENABLEDEDUP":"false", "CACHEPARTITIONID":"", "SMARTCACHEPARTITIONID":"", "IOCLASSID":"11"}}""" else: data = """{"error":{"code":0}, "data":{"HEALTHSTATUS":"0", "RUNNINGSTATUS":"27", "ALLOCTYPE":"0", "CAPACITY":"8388608", "PARENTNAME":"OpenStack_Pool_Thick", "ENABLECOMPRESSION":"false", "ENABLEDEDUP":"false", "CACHEPARTITIONID":"", "SMARTCACHEPARTITIONID":"", "IOCLASSID":"11"}}""" else: data = '{"error":{"code":31755596}}' return (data, extend_share_flag, shrink_share_flag) def filesystem_inpartition(method, data, fs_status_flag): extend_share_flag = False shrink_share_flag = False if method == "PUT": if data == """{"CAPACITY": 10485760}""": data = """{"error":{"code":0}, "data":{"ID":"6", "CAPACITY":"8388608"}}""" extend_share_flag = True elif data == """{"CAPACITY": 2097152}""": data = """{"error":{"code":0}, "data":{"ID":"6", "CAPACITY":"2097152"}}""" shrink_share_flag = True elif data == """{"NAME": "share_fake_manage_uuid"}""": data = """{"error":{"code":0}, "data":{"ID":"6", "CAPACITY":"8388608"}}""" elif data == """{"NAME": "share_fake_uuid_inpartition"}""": data = """{"error":{"code":0}, "data":{"ID":"6", "CAPACITY":"8388608"}}""" elif data == jsonutils.dumps({"ENABLEDEDUP": True, "ENABLECOMPRESSION": True}): data = """{"error":{"code":0}, "data":{"ID":"6", "CAPACITY":"8388608"}}""" elif data == jsonutils.dumps({"ENABLEDEDUP": False, "ENABLECOMPRESSION": False}): data = """{"error":{"code":0}, "data":{"ID":"6", "CAPACITY":"8388608"}}""" elif method == "DELETE": data = """{"error":{"code":0}}""" elif method == "GET": if fs_status_flag: data = """{"error":{"code":0}, "data":{"HEALTHSTATUS":"1", "RUNNINGSTATUS":"27", "ALLOCTYPE":"1", "CAPACITY":"8388608", "PARENTNAME":"OpenStack_Pool", "ENABLECOMPRESSION":"false", "ENABLEDEDUP":"false", "CACHEPARTITIONID":"1", "SMARTCACHEPARTITIONID":"1", "IOCLASSID":"11"}}""" else: data = """{"error":{"code":0}, "data":{"HEALTHSTATUS":"0", "RUNNINGSTATUS":"27", "ALLOCTYPE":"0", "CAPACITY":"8388608", "PARENTNAME":"OpenStack_Pool", "ENABLECOMPRESSION":"false", "ENABLEDEDUP":"false", "CACHEPARTITIONID":"1", "SMARTCACHEPARTITIONID":"1", "IOCLASSID":"11"}}""" else: data = '{"error":{"code":31755596}}' return (data, extend_share_flag, shrink_share_flag) def allow_access(type, method, data): allow_ro_flag = False allow_rw_flag = False request_data = jsonutils.loads(data) success_data = """{"error":{"code":0}}""" fail_data = """{"error":{"code":1077939723}}""" ret = None if type == "NFS": if request_data['ACCESSVAL'] == '0': allow_ro_flag = True ret = success_data elif request_data['ACCESSVAL'] == '1': allow_rw_flag = True ret = success_data elif type == "CIFS": if request_data['PERMISSION'] == '0': allow_ro_flag = True ret = success_data elif request_data['PERMISSION'] == '1': allow_rw_flag = True ret = success_data # Group name should start with '@'. 
if ('group' in request_data['NAME'] and not request_data['NAME'].startswith('@')): ret = fail_data if ret is None: ret = fail_data return (ret, allow_ro_flag, allow_rw_flag) def dec_driver_handles_share_servers(func): def wrapper(*args, **kw): self = args[0] self.configuration.driver_handles_share_servers = True self.recreate_fake_conf_file(logical_port='CTE0.A.H0') self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.driver.plugin.helper.login() return func(*args, **kw) return wrapper def QoS_response(method): if method == "GET": data = """{"error":{"code":0}, "data":{"NAME": "OpenStack_Fake_QoS", "MAXIOPS": "100", "FSLIST": "4", "LUNLIST": "", "RUNNINGSTATUS": "2"}}""" elif method == "PUT": data = """{"error":{"code":0}}""" else: data = """{"error":{"code":0}, "data":{"ID": "11"}}""" return data class FakeHuaweiNasHelper(helper.RestHelper): def __init__(self, *args, **kwargs): helper.RestHelper.__init__(self, *args, **kwargs) self.test_normal = True self.deviceid = None self.delete_flag = False self.allow_flag = False self.deny_flag = False self.create_snapflag = False self.setupserver_flag = False self.fs_status_flag = True self.create_share_flag = False self.snapshot_flag = True self.service_status_flag = True self.share_exist = True self.service_nfs_status_flag = True self.create_share_data_flag = False self.allow_ro_flag = False self.allow_rw_flag = False self.extend_share_flag = False self.shrink_share_flag = False self.add_fs_to_partition_flag = False self.add_fs_to_cache_flag = False self.test_multi_url_flag = 0 self.cache_exist = True self.partition_exist = True self.alloc_type = None self.custom_results = {} def _change_file_mode(self, filepath): pass def do_call(self, url, data, method, calltimeout=4): url = url.replace('http://100.115.10.69:8082/deviceManager/rest', '') url = url.replace('/210235G7J20000000000/', '') if self.custom_results and self.custom_results.get(url): result = self.custom_results[url] if isinstance(result, six.string_types): return jsonutils.loads(result) if isinstance(result, dict) and result.get(method): return jsonutils.loads(result[method]) if self.test_normal: if self.test_multi_url_flag == 1: data = '{"error":{"code":-403}}' res_json = jsonutils.loads(data) return res_json elif self.test_multi_url_flag == 2: if ('http://100.115.10.70:8082/deviceManager/rest/xx/' 'sessions' == url): self.url = url data = data_session("/xx/sessions") res_json = jsonutils.loads(data) return res_json elif (('/xx/sessions' == url) or (self.url is not None and 'http://100.115.10.69:8082/deviceManager/rest' in self.url)): data = '{"error":{"code":-403}}' res_json = jsonutils.loads(data) return res_json if url == "/xx/sessions" or url == "/sessions": data = data_session(url) if url == "/storagepool": data = """{"error":{"code":0}, "data":[{"USERFREECAPACITY":"2097152", "ID":"1", "NAME":"OpenStack_Pool", "USERTOTALCAPACITY":"4194304", "USAGETYPE":"2", "USERCONSUMEDCAPACITY":"2097152", "TIER0CAPACITY":"100", "TIER1CAPACITY":"0", "TIER2CAPACITY":"0"}, {"USERFREECAPACITY":"2097152", "ID":"2", "NAME":"OpenStack_Pool_Thick", "USERTOTALCAPACITY":"4194304", "USAGETYPE":"2", "USERCONSUMEDCAPACITY":"2097152", "TIER0CAPACITY":"100", "TIER1CAPACITY":"0", "TIER2CAPACITY":"0"}]}""" if url == "/filesystem": request_data = jsonutils.loads(data) self.alloc_type = request_data.get('ALLOCTYPE') data = """{"error":{"code":0},"data":{ "ID":"4"}}""" if url == "/system/": data = """{"error":{"code":0}, "data":{"PRODUCTVERSION": "V300R003C10", "wwn": 
"fake_wwn"}}""" if url == "/remote_device": data = """{"error":{"code":0}, "data":[{"ID": "0", "NAME": "fake_name", "WWN": "fake_wwn"}]}""" if url == "/ioclass" or url == "/ioclass/11": data = QoS_response(method) if url == "/ioclass/active/11": data = """{"error":{"code":0}, "data":[{"ID": "11", "MAXIOPS": "100", "FSLIST": ""}]}""" if url == "/NFSHARE" or url == "/CIFSHARE": if self.create_share_flag: data = '{"error":{"code":31755596}}' elif self.create_share_data_flag: data = '{"error":{"code":0}}' else: data = """{"error":{"code":0},"data":{ "ID":"10"}}""" if url == "/NFSHARE?range=[100-200]": if self.share_exist: data = """{"error":{"code":0}, "data":[{"ID":"1", "FSID":"4", "NAME":"test", "SHAREPATH":"/share_fake_uuid/"}, {"ID":"2", "FSID":"5", "NAME":"test", "SHAREPATH":"/share_fake_uuid_thickfs/"}, {"ID":"3", "FSID":"6", "NAME":"test", "SHAREPATH":"/share_fake_uuid_inpartition/"}]}""" else: data = """{"error":{"code":0}, "data":[{"ID":"1", "FSID":"4", "NAME":"test", "SHAREPATH":"/share_fake_uuid_fail/"}]}""" if url == "/CIFSHARE?range=[100-200]": data = """{"error":{"code":0}, "data":[{"ID":"2", "FSID":"4", "NAME":"test", "SHAREPATH":"/share_fake_uuid/"}]}""" if url == "/NFSHARE?range=[0-100]": data = """{"error":{"code":0}, "data":[{"ID":"1", "FSID":"4", "NAME":"test_fail", "SHAREPATH":"/share_fake_uuid_fail/"}]}""" if url == "/CIFSHARE?range=[0-100]": data = """{"error":{"code":0}, "data":[{"ID":"2", "FSID":"4", "NAME":"test_fail", "SHAREPATH":"/share_fake_uuid_fail/"}]}""" if url == "/NFSHARE/1" or url == "/CIFSHARE/2": data = """{"error":{"code":0}}""" self.delete_flag = True if url == "/FSSNAPSHOT": data = """{"error":{"code":0},"data":{ "ID":"3"}}""" self.create_snapflag = True if url == "/FSSNAPSHOT/4@share_snapshot_fake_snapshot_uuid": if self.snapshot_flag: data = """{"error":{"code":0}, "data":{"ID":"4@share_snapshot_fake_snapshot_uuid"}}""" else: data = '{"error":{"code":1073754118}}' self.delete_flag = True if url == "/FSSNAPSHOT/4@fake_storage_snapshot_name": if self.snapshot_flag: data = """{"error":{"code":0}, "data":{"ID":"4@share_snapshot_fake_snapshot_uuid", "NAME":"share_snapshot_fake_snapshot_uuid", "HEALTHSTATUS":"1"}}""" else: data = '{"error":{"code":1073754118}}' if url == "/FSSNAPSHOT/3": data = """{"error":{"code":0}}""" self.delete_flag = True if url == "/NFS_SHARE_AUTH_CLIENT": data, self.allow_ro_flag, self.allow_rw_flag = ( allow_access('NFS', method, data)) self.allow_flag = True if url == "/CIFS_SHARE_AUTH_CLIENT": data, self.allow_ro_flag, self.allow_rw_flag = ( allow_access('CIFS', method, data)) self.allow_flag = True if url == ("/FSSNAPSHOT?TYPE=48&PARENTID=4" "&&sortby=TIMESTAMP,d&range=[0-2000]"): data = """{"error":{"code":0}, "data":[{"ID":"3", "NAME":"share_snapshot_fake_snapshot_uuid"}]}""" self.delete_flag = True if url == ("/NFS_SHARE_AUTH_CLIENT?" "filter=PARENTID::1&range=[0-100]"): data = """{"error":{"code":0}, "data":[{"ID":"0", "NAME":"100.112.0.1_fail"}]}""" if url == ("/CIFS_SHARE_AUTH_CLIENT?" "filter=PARENTID::2&range=[0-100]"): data = """{"error":{"code":0}, "data":[{"ID":"0", "NAME":"user_name_fail"}]}""" if url == ("/NFS_SHARE_AUTH_CLIENT?" "filter=PARENTID::1&range=[100-200]"): data = """{"error":{"code":0}, "data":[{"ID":"5", "NAME":"100.112.0.2"}]}""" if url == ("/CIFS_SHARE_AUTH_CLIENT?" 
"filter=PARENTID::2&range=[100-200]"): data = """{"error":{"code":0}, "data":[{"ID":"6", "NAME":"user_exist"}]}""" if url in ("/NFS_SHARE_AUTH_CLIENT/0", "/NFS_SHARE_AUTH_CLIENT/5", "/CIFS_SHARE_AUTH_CLIENT/0", "/CIFS_SHARE_AUTH_CLIENT/6"): if method == "DELETE": data = """{"error":{"code":0}}""" self.deny_flag = True elif method == "GET": if 'CIFS' in url: data = """{"error":{"code":0}, "data":{"'PERMISSION'":"0"}}""" else: data = """{"error":{"code":0}, "data":{"ACCESSVAL":"0"}}""" else: data = """{"error":{"code":0}}""" self.allow_rw_flagg = True if url == "/NFSHARE/count" or url == "/CIFSHARE/count": data = """{"error":{"code":0},"data":{ "COUNT":"196"}}""" if (url == "/NFS_SHARE_AUTH_CLIENT/count?filter=PARENTID::1" or url == ("/CIFS_SHARE_AUTH_CLIENT/count?filter=" "PARENTID::2")): data = """{"error":{"code":0},"data":{ "COUNT":"196"}}""" if url == "/CIFSSERVICE": if self.service_status_flag: data = """{"error":{"code":0},"data":{ "RUNNINGSTATUS":"2"}}""" else: data = """{"error":{"code":0},"data":{ "RUNNINGSTATUS":"1"}}""" if url == "/NFSSERVICE": if self.service_nfs_status_flag: data = """{"error":{"code":0}, "data":{"RUNNINGSTATUS":"2", "SUPPORTV3":"true", "SUPPORTV4":"true"}}""" else: data = """{"error":{"code":0}, "data":{"RUNNINGSTATUS":"1", "SUPPORTV3":"true", "SUPPORTV4":"true"}}""" self.setupserver_flag = True if "/FILESYSTEM?filter=NAME::" in url: data = """{"error":{"code":0}, "data":[{"ID":"4", "NAME":"share_fake_uuid"}, {"ID":"8", "NAME":"share_fake_new_uuid"}]}""" if url == "/filesystem/4": data, self.extend_share_flag, self.shrink_share_flag = ( filesystem(method, data, self.fs_status_flag)) self.delete_flag = True if url == "/filesystem/5": data, self.extend_share_flag, self.shrink_share_flag = ( filesystem_thick(method, data, self.fs_status_flag)) self.delete_flag = True if url == "/filesystem/6": data, self.extend_share_flag, self.shrink_share_flag = ( filesystem_inpartition(method, data, self.fs_status_flag)) self.delete_flag = True if url == "/cachepartition": if self.partition_exist: data = """{"error":{"code":0}, "data":[{"ID":"7", "NAME":"test_partition_name"}]}""" else: data = """{"error":{"code":0}, "data":[{"ID":"7", "NAME":"test_partition_name_fail"}]}""" if url == "/cachepartition/1": if self.partition_exist: data = """{"error":{"code":0}, "data":{"ID":"7", "NAME":"test_partition_name"}}""" else: data = """{"error":{"code":0}, "data":{"ID":"7", "NAME":"test_partition_name_fail"}}""" if url == "/SMARTCACHEPARTITION": if self.cache_exist: data = """{"error":{"code":0}, "data":[{"ID":"8", "NAME":"test_cache_name"}]}""" else: data = """{"error":{"code":0}, "data":[{"ID":"8", "NAME":"test_cache_name_fail"}]}""" if url == "/SMARTCACHEPARTITION/1": if self.cache_exist: data = """{"error":{"code":0}, "data":{"ID":"8", "NAME":"test_cache_name"}}""" else: data = """{"error":{"code":0}, "data":{"ID":"8", "NAME":"test_cache_name_fail"}}""" if url == "/filesystem/associate/cachepartition": data = """{"error":{"code":0}}""" self.add_fs_to_partition_flag = True if url == "/SMARTCACHEPARTITION/CREATE_ASSOCIATE": data = """{"error":{"code":0}}""" self.add_fs_to_cache_flag = True if url == "/SMARTCACHEPARTITION/REMOVE_ASSOCIATE": data = """{"error":{"code":0}}""" if url == "/smartPartition/removeFs": data = """{"error":{"code":0}}""" if url == "/ETH_PORT": data = """{"error":{"code":0}, "data":[{"ID": "4", "LOCATION":"CTE0.A.H0", "IPV4ADDR":"", "BONDNAME":"", "BONDID":"", "RUNNINGSTATUS":"10"}, {"ID": "6", "LOCATION":"CTE0.A.H1", "IPV4ADDR":"", "BONDNAME":"fake_bond", 
"BONDID":"5", "RUNNINGSTATUS":"10"}]}""" if url == "/ETH_PORT/6": data = """{"error":{"code":0}, "data":{"ID": "6", "LOCATION":"CTE0.A.H1", "IPV4ADDR":"", "BONDNAME":"fake_bond", "BONDID":"5", "RUNNINGSTATUS":"10"}}""" if url == "/BOND_PORT": data = "{\"error\":{\"code\":0},\ \"data\":[{\"ID\": \"5\",\ \"NAME\":\"fake_bond\",\ \"PORTIDLIST\": \"[\\\"6\\\"]\",\ \"RUNNINGSTATUS\":\"10\"}]}" if url == "/vlan": if method == "GET": data = """{"error":{"code":0}}""" else: data = """{"error":{"code":0},"data":{ "ID":"4"}}""" if url == "/LIF": if method == "GET": data = """{"error":{"code":0}}""" else: data = """{"error":{"code":0},"data":{ "ID":"4"}}""" if url == "/DNS_Server": if method == "GET": data = "{\"error\":{\"code\":0},\"data\":{\ \"ADDRESS\":\"[\\\"\\\"]\"}}" else: data = """{"error":{"code":0}}""" if url == "/AD_CONFIG": if method == "GET": data = """{"error":{"code":0},"data":{ "DOMAINSTATUS":"1", "FULLDOMAINNAME":"huawei.com"}}""" else: data = """{"error":{"code":0}}""" if url == "/LDAP_CONFIG": if method == "GET": data = """{"error":{"code":0},"data":{ "BASEDN":"dc=huawei,dc=com", "LDAPSERVER": "100.97.5.87"}}""" else: data = """{"error":{"code":0}}""" if url == "/REPLICATIONPAIR": data = """{"error":{"code":0},"data":{ "ID":"fake_pair_id"}}""" if url == "/REPLICATIONPAIR/sync": data = """{"error":{"code":0}}""" if url == "/REPLICATIONPAIR/switch": data = """{"error":{"code":0}}""" if url == "/REPLICATIONPAIR/split": data = """{"error":{"code":0}}""" if url == "/REPLICATIONPAIR/CANCEL_SECODARY_WRITE_LOCK": data = """{"error":{"code":0}}""" if url == "/REPLICATIONPAIR/SET_SECODARY_WRITE_LOCK": data = """{"error":{"code":0}}""" if url == "/REPLICATIONPAIR/fake_pair_id": data = """{"error":{"code":0},"data":{ "ID": "fake_pair_id", "HEALTHSTATUS": "1", "SECRESDATASTATUS": "1", "ISPRIMARY": "false", "SECRESACCESS": "1", "RUNNINGSTATUS": "1"}}""" else: data = '{"error":{"code":31755596}}' res_json = jsonutils.loads(data) return res_json class FakeRpcClient(rpcapi.HuaweiV3API): def __init__(self, helper): super(FakeRpcClient, self).__init__() self.replica_mgr = replication.ReplicaPairManager(helper) class fake_call_context(object): def __init__(self, replica_mgr): self.replica_mgr = replica_mgr def call(self, context, func_name, **kwargs): if func_name == 'create_replica_pair': return self.replica_mgr.create_replica_pair( context, **kwargs) def create_replica_pair(self, context, host, local_share_info, remote_device_wwn, remote_fs_id): self.client.prepare = mock.Mock( return_value=self.fake_call_context(self.replica_mgr)) return super(FakeRpcClient, self).create_replica_pair( context, host, local_share_info, remote_device_wwn, remote_fs_id) class FakeRpcServer(object): def start(self): pass class FakePrivateStorage(object): def __init__(self): self.map = {} def get(self, entity_id, key=None, default=None): if self.map.get(entity_id): return self.map[entity_id].get(key, default) return default def update(self, entity_id, details, delete_existing=False): self.map[entity_id] = details def delete(self, entity_id, key=None): self.map.pop(entity_id) class FakeHuaweiNasDriver(huawei_nas.HuaweiNasDriver): """Fake HuaweiNasDriver.""" def __init__(self, *args, **kwargs): huawei_nas.HuaweiNasDriver.__init__(self, *args, **kwargs) self.plugin = connection.V3StorageConnection(self.configuration) self.plugin.helper = FakeHuaweiNasHelper(self.configuration) self.plugin.replica_mgr = replication.ReplicaPairManager( self.plugin.helper) self.plugin.rpc_client = FakeRpcClient(self.plugin.helper) 
self.plugin.private_storage = FakePrivateStorage() class FakeConfigParseTree(object): class FakeNode(object): def __init__(self, text): self._text = text @property def text(self): return self._text @text.setter def text(self, text): self._text = text class FakeRoot(object): def __init__(self): self._node_map = {} def findtext(self, path, default=None): if path in self._node_map: return self._node_map[path].text return default def find(self, path): if path in self._node_map: return self._node_map[path] return None def __init__(self, path_value): self.root = self.FakeRoot() for k in path_value: self.root._node_map[k] = self.FakeNode(path_value[k]) def getroot(self): return self.root def write(self, filename, format): pass @ddt.ddt class HuaweiShareDriverTestCase(test.TestCase): """Tests GenericShareDriver.""" def setUp(self): super(HuaweiShareDriverTestCase, self).setUp() self._context = context.get_admin_context() def _safe_get(opt): return getattr(self.configuration, opt) self.configuration = mock.Mock(spec=conf.Configuration) self.configuration.safe_get = mock.Mock(side_effect=_safe_get) self.configuration.network_config_group = 'fake_network_config_group' self.configuration.admin_network_config_group = ( 'fake_admin_network_config_group') self.configuration.config_group = 'fake_share_backend_name' self.configuration.share_backend_name = 'fake_share_backend_name' self.configuration.huawei_share_backend = 'V3' self.configuration.max_over_subscription_ratio = 1 self.configuration.driver_handles_share_servers = False self.configuration.replication_domain = None self.configuration.filter_function = None self.configuration.goodness_function = None self.tmp_dir = tempfile.mkdtemp() self.fake_conf_file = self.tmp_dir + '/manila_huawei_conf.xml' self.addCleanup(shutil.rmtree, self.tmp_dir) self.create_fake_conf_file(self.fake_conf_file) self.addCleanup(os.remove, self.fake_conf_file) self.configuration.manila_huawei_conf_file = self.fake_conf_file self._helper_fake = mock.Mock() self.mock_object(huawei_nas.importutils, 'import_object', mock.Mock(return_value=self._helper_fake)) self.mock_object(time, 'sleep', fake_sleep) self.driver = FakeHuaweiNasDriver(configuration=self.configuration) self.driver.plugin.helper.test_normal = True self.share_nfs = { 'id': 'fake_uuid', 'share_id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-uuid', 'size': 1, 'share_proto': 'NFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'export_locations': [ {'path': '100.115.10.68:/share_fake_uuid'}, ], 'host': 'fake_host@fake_backend#OpenStack_Pool', 'share_type_id': 'fake_id', } self.share_nfs_thick = { 'id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-uuid', 'size': 1, 'share_proto': 'NFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'host': 'fake_host@fake_backend#OpenStack_Pool_Thick', 'export_locations': [ {'path': '100.115.10.68:/share_fake_uuid'}, ], 'share_type_id': 'fake_id', } self.share_nfs_thickfs = { 'id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-uuid-thickfs', 'size': 1, 'share_proto': 'NFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'host': 'fake_host@fake_backend#OpenStack_Pool', 'export_locations': [ {'path': '100.115.10.68:/share_fake_uuid_thickfs'}, ], 'share_type_id': 'fake_id', } self.share_nfs_thick_thickfs = { 'id': 'fake_uuid', 'project_id': 'fake_tenant_id', 
'display_name': 'fake', 'name': 'share-fake-uuid-thickfs', 'size': 1, 'share_proto': 'NFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'host': 'fake_host@fake_backend#OpenStack_Pool_Thick', 'export_locations': [ {'path': '100.115.10.68:/share_fake_uuid_thickfs'}, ], 'share_type_id': 'fake_id', } self.share_nfs_inpartition = { 'id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-uuid-inpartition', 'size': 1, 'share_proto': 'NFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'host': 'fake_host@fake_backend#OpenStack_Pool', 'export_locations': [ {'path': '100.115.10.68:/share_fake_uuid_inpartition'}, ], 'share_type_id': 'fake_id', } self.share_manage_nfs = { 'id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-manage-uuid', 'size': 1, 'share_proto': 'NFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'export_locations': [ {'path': '100.115.10.68:/share_fake_uuid'}, ], 'host': 'fake_host@fake_backend#OpenStack_Pool', 'share_type_id': 'fake_id', } self.share_pool_name_not_match = { 'id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-manage-uuid', 'size': 1, 'share_proto': 'NFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'export_locations': [ {'path': '100.115.10.68:/share_fake_uuid'}, ], 'host': 'fake_host@fake_backend#OpenStack_Pool_not_match', 'share_type_id': 'fake_id', } self.share_proto_fail = { 'id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-uuid', 'size': 1, 'share_proto': 'proto_fail', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'host': 'fake_host@fake_backend#OpenStack_Pool', } self.share_cifs = { 'id': 'fake_uuid', 'share_id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-uuid', 'size': 1, 'share_proto': 'CIFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'export_locations': [ {'path': 'share_fake_uuid'}, ], 'host': 'fake_host@fake_backend#OpenStack_Pool', 'share_type_id': 'fake_id', } self.share_manage_cifs = { 'id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-manage-uuid', 'size': 1, 'share_proto': 'CIFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'export_locations': [ {'path': '\\\\100.115.10.68\\share_fake_uuid'}, ], 'host': 'fake_host@fake_backend#OpenStack_Pool', 'share_type_id': 'fake_id', } self.nfs_snapshot = { 'id': 'fake_snapshot_uuid', 'snapshot_id': 'fake_snapshot_uuid', 'display_name': 'snapshot', 'name': 'fake_snapshot_name', 'size': 1, 'share_name': 'share_fake_uuid', 'share_id': 'fake_uuid', 'share': { 'share_name': 'share_fake_uuid', 'share_id': 'fake_uuid', 'share_size': 1, 'share_proto': 'NFS', }, } self.cifs_snapshot = { 'id': 'fake_snapshot_uuid', 'snapshot_id': 'fake_snapshot_uuid', 'display_name': 'snapshot', 'name': 'fake_snapshot_name', 'size': 1, 'share_name': 'share_fake_uuid', 'share_id': 'fake_uuid', 'share': { 'share_name': 'share_fake_uuid', 'share_id': 'fake_uuid', 'share_size': 1, 'share_proto': 'CIFS', }, } self.storage_nfs_snapshot = { 'id': 'fake_snapshot_uuid', 'snapshot_id': 'fake_snapshot_uuid', 'display_name': 'snapshot', 'name': 'fake_snapshot_name', 'provider_location': 'fake_storage_snapshot_name', 'size': 1, 'share_name': 'share_fake_uuid', 'share_id': 'fake_uuid', 
'share': { 'share_name': 'share_fake_uuid', 'share_id': 'fake_uuid', 'share_size': 1, 'share_proto': 'NFS', }, } self.storage_cifs_snapshot = { 'id': 'fake_snapshot_uuid', 'snapshot_id': 'fake_snapshot_uuid', 'display_name': 'snapshot', 'name': 'fake_snapshot_name', 'provider_location': 'fake_storage_snapshot_name', 'size': 1, 'share_name': 'share_fake_uuid', 'share_id': 'fake_uuid', 'share': { 'share_name': 'share_fake_uuid', 'share_id': 'fake_uuid', 'share_size': 1, 'share_proto': 'CIFS', }, } self.security_service = { 'id': 'fake_id', 'domain': 'FAKE', 'server': 'fake_server', 'user': 'fake_user', 'password': 'fake_password', } self.access_ip = { 'access_type': 'ip', 'access_to': '100.112.0.1', 'access_level': 'rw', } self.access_ip_exist = { 'access_type': 'ip', 'access_to': '100.112.0.2', 'access_level': 'rw', } self.access_user = { 'access_type': 'user', 'access_to': 'user_name', 'access_level': 'rw', } self.access_user_exist = { 'access_type': 'user', 'access_to': 'user_exist', 'access_level': 'rw', } self.access_group = { 'access_type': 'user', 'access_to': 'group_name', 'access_level': 'rw', } self.access_cert = { 'access_type': 'cert', 'access_to': 'fake_cert', 'access_level': 'rw', } self.driver_options = { 'volume_id': 'fake', } self.share_server = None self.driver._licenses = ['fake'] self.fake_network_allocations = [{ 'id': 'fake_network_allocation_id', 'ip_address': '111.111.111.109', }] self.fake_network_info = { 'server_id': '0', 'segmentation_id': '2', 'cidr': '111.111.111.0/24', 'neutron_net_id': 'fake_neutron_net_id', 'neutron_subnet_id': 'fake_neutron_subnet_id', 'security_services': '', 'network_allocations': self.fake_network_allocations, 'network_type': 'vlan', } self.fake_active_directory = { 'type': 'active_directory', 'dns_ip': '100.97.5.5', 'user': 'ad_user', 'password': 'ad_password', 'domain': 'huawei.com' } self.fake_ldap = { 'type': 'ldap', 'server': '100.97.5.87', 'domain': 'dc=huawei,dc=com' } fake_share_type_id_not_extra = 'fake_id' self.fake_type_not_extra = { 'test_with_extra': { 'created_at': 'fake_time', 'deleted': '0', 'deleted_at': None, 'extra_specs': {}, 'required_extra_specs': {}, 'id': fake_share_type_id_not_extra, 'name': 'test_with_extra', 'updated_at': None } } fake_extra_specs = { 'capabilities:dedupe': ' True', 'capabilities:compression': ' True', 'capabilities:huawei_smartcache': ' True', 'huawei_smartcache:cachename': 'test_cache_name', 'capabilities:huawei_smartpartition': ' True', 'huawei_smartpartition:partitionname': 'test_partition_name', 'capabilities:thin_provisioning': ' True', 'test:test:test': 'test', } fake_share_type_id = 'fooid-2' self.fake_type_w_extra = { 'test_with_extra': { 'created_at': 'fake_time', 'deleted': '0', 'deleted_at': None, 'extra_specs': fake_extra_specs, 'required_extra_specs': {}, 'id': fake_share_type_id, 'name': 'test_with_extra', 'updated_at': None } } fake_extra_specs = { 'capabilities:dedupe': ' True', 'capabilities:compression': ' True', 'capabilities:huawei_smartcache': ' False', 'huawei_smartcache:cachename': None, 'capabilities:huawei_smartpartition': ' False', 'huawei_smartpartition:partitionname': None, 'capabilities:thin_provisioning': ' True', 'test:test:test': 'test', } fake_share_type_id = 'fooid-3' self.fake_type_fake_extra = { 'test_with_extra': { 'created_at': 'fake_time', 'deleted': '0', 'deleted_at': None, 'extra_specs': fake_extra_specs, 'required_extra_specs': {}, 'id': fake_share_type_id, 'name': 'test_with_extra', 'updated_at': None } } fake_extra_specs = { 'capabilities:dedupe': ' 
True', 'capabilities:compression': ' True', 'capabilities:huawei_smartcache': ' False', 'huawei_smartcache:cachename': None, 'capabilities:huawei_smartpartition': ' False', 'huawei_smartpartition:partitionname': None, 'capabilities:thin_provisioning': ' False', 'test:test:test': 'test', } fake_share_type_id = 'fooid-4' self.fake_type_thin_extra = { 'test_with_extra': { 'created_at': 'fake_time', 'deleted': '0', 'deleted_at': None, 'extra_specs': fake_extra_specs, 'required_extra_specs': {}, 'id': fake_share_type_id, 'name': 'test_with_extra', 'updated_at': None } } self.share_nfs_host_not_exist = { 'id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-uuid', 'size': 1, 'share_proto': 'NFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'host': 'fake_host@fake_backend#', } self.share_nfs_storagepool_fail = { 'id': 'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-uuid', 'size': 1, 'share_proto': 'NFS', 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'host': 'fake_host@fake_backend#OpenStack_Pool2', } fake_extra_specs = { 'driver_handles_share_servers': 'False', } fake_share_type_id = 'fake_id' self.fake_type_extra = { 'test_with_extra': { 'created_at': 'fake_time', 'deleted': '0', 'deleted_at': None, 'extra_specs': fake_extra_specs, 'required_extra_specs': {}, 'id': fake_share_type_id, 'name': 'test_with_extra', 'updated_at': None } } self.active_replica = { 'id': 'fake_active_replica_id', 'share_id': 'fake_share_id', 'name': 'share_fake_uuid', 'host': 'hostname1@backend_name1#OpenStack_Pool', 'size': 5, 'share_proto': 'NFS', 'replica_state': common_constants.REPLICA_STATE_ACTIVE, } self.new_replica = { 'id': 'fake_new_replica_id', 'share_id': 'fake_share_id', 'name': 'share_fake_new_uuid', 'host': 'hostname2@backend_name2#OpenStack_Pool', 'size': 5, 'share_proto': 'NFS', 'replica_state': common_constants.REPLICA_STATE_OUT_OF_SYNC, 'share_type_id': 'fake_id', } def _get_share_by_proto(self, share_proto): if share_proto == "NFS": share = self.share_nfs elif share_proto == "CIFS": share = self.share_cifs else: share = None return share def mock_share_type(self, share_type): self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) def test_no_configuration(self): self.mock_object(huawei_nas.HuaweiNasDriver, 'driver_handles_share_servers', True) self.assertRaises(exception.InvalidInput, huawei_nas.HuaweiNasDriver) def test_conf_product_fail(self): self.recreate_fake_conf_file(product_flag=False) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.assertRaises(exception.InvalidInput, self.driver.plugin.check_conf_file) def test_conf_pool_node_fail(self): self.recreate_fake_conf_file(pool_node_flag=False) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.assertRaises(exception.InvalidInput, self.driver.plugin.check_conf_file) def test_conf_username_fail(self): self.recreate_fake_conf_file(username_flag=False) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.assertRaises(exception.InvalidInput, self.driver.plugin.check_conf_file) def test_conf_timeout_fail(self): self.recreate_fake_conf_file(timeout_flag=False) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) timeout = self.driver.plugin._get_timeout() self.assertEqual(60, timeout) def test_conf_wait_interval_fail(self): 
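        # Note (editorial assumption based on the assertion below): when the
        # WaitInterval tag is dropped from the fake Huawei config XML, the
        # plugin is expected to fall back to a 3-second default, mirroring the
        # 60-second default asserted for a missing Timeout tag above.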
        self.recreate_fake_conf_file(wait_interval_flag=False)
        self.driver.plugin.configuration.manila_huawei_conf_file = (
            self.fake_conf_file)
        wait_interval = self.driver.plugin._get_wait_interval()
        self.assertEqual(3, wait_interval)

    def test_conf_logical_ip_fail(self):
        self.configuration.driver_handles_share_servers = True
        self.recreate_fake_conf_file(logical_port="fake_port")
        self.driver.plugin.configuration.manila_huawei_conf_file = (
            self.fake_conf_file)
        self.configuration.driver_handles_share_servers = False
        self.assertRaises(exception.InvalidInput,
                          self.driver.plugin.check_conf_file)

    def test_conf_snapshot_replication_conflict(self):
        self.recreate_fake_conf_file(snapshot_support=True,
                                     replication_support=True)
        self.driver.plugin.configuration.manila_huawei_conf_file = (
            self.fake_conf_file)
        self.driver.plugin._setup_conf()
        self.assertRaises(exception.BadConfigurationException,
                          self.driver.plugin.check_conf_file)

    def test_get_backend_driver_fail(self):
        test_fake_conf_file = None
        self.driver.plugin.configuration.manila_huawei_conf_file = (
            test_fake_conf_file)
        self.assertRaises(exception.InvalidInput,
                          self.driver.get_backend_driver)

    def test_get_backend_driver_fail_driver_none(self):
        self.recreate_fake_conf_file(product_flag=False)
        self.driver.plugin.configuration.manila_huawei_conf_file = (
            self.fake_conf_file)
        self.assertRaises(exception.InvalidInput,
                          self.driver.get_backend_driver)

    def test_create_share_storagepool_not_exist(self):
        self.driver.plugin.helper.login()
        self.assertRaises(exception.InvalidHost,
                          self.driver.create_share,
                          self._context,
                          self.share_nfs_host_not_exist,
                          self.share_server)

    def test_create_share_nfs_storagepool_fail(self):
        self.driver.plugin.helper.login()
        self.assertRaises(exception.InvalidHost,
                          self.driver.create_share,
                          self._context,
                          self.share_nfs_storagepool_fail,
                          self.share_server)

    def test_create_share_nfs_no_data_fail(self):
        self.driver.plugin.helper.create_share_data_flag = True
        self.driver.plugin.helper.login()
        self.assertRaises(exception.InvalidShare,
                          self.driver.create_share,
                          self._context,
                          self.share_nfs,
                          self.share_server)

    def test_read_xml_fail(self):
        test_fake_conf_file = None
        self.driver.plugin.configuration.manila_huawei_conf_file = (
            test_fake_conf_file)
        self.assertRaises(exception.InvalidInput,
                          self.driver.plugin.helper._read_xml)

    def test_connect_success(self):
        FakeRpcServer.start = mock.Mock()
        rpc.get_server = mock.Mock(return_value=FakeRpcServer())
        self.driver.plugin.connect()
        FakeRpcServer.start.assert_called_once()

    def test_connect_fail(self):
        self.driver.plugin.helper.test_multi_url_flag = 1
        self.assertRaises(exception.InvalidShare,
                          self.driver.plugin.connect)

    def test_login_success(self):
        deviceid = self.driver.plugin.helper.login()
        self.assertEqual("210235G7J20000000000", deviceid)

    def test_check_for_setup_success(self):
        self.driver.plugin.helper.login()
        self.driver.check_for_setup_error()

    def test_check_for_setup_service_down(self):
        self.driver.plugin.helper.service_status_flag = False
        self.driver.plugin.helper.login()
        self.driver.check_for_setup_error()

    def test_check_for_setup_nfs_down(self):
        self.driver.plugin.helper.service_nfs_status_flag = False
        self.driver.plugin.helper.login()
        self.driver.check_for_setup_error()

    def test_check_for_setup_service_false(self):
        self.driver.plugin.helper.login()
        self.driver.plugin.helper.test_normal = False
        self.assertRaises(exception.InvalidShare,
                          self.driver.check_for_setup_error)

    def test_create_share_no_extra(self):
        share_type = self.fake_type_not_extra['test_with_extra']
        self.mock_object(db,
'share_type_get', mock.Mock(return_value=share_type)) location = self.driver.create_share(self._context, self.share_nfs, self.share_server) self.assertEqual("100.115.10.68:/share_fake_uuid", location) self.assertEqual(constants.ALLOC_TYPE_THIN_FLAG, self.driver.plugin.helper.alloc_type) def test_create_share_with_extra_thin(self): share_type = { 'extra_specs': { 'capabilities:thin_provisioning': ' True' }, } self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() location = self.driver.create_share(self._context, self.share_nfs, self.share_server) self.assertEqual("100.115.10.68:/share_fake_uuid", location) self.assertEqual(constants.ALLOC_TYPE_THIN_FLAG, self.driver.plugin.helper.alloc_type) def test_create_share_with_extra_thick(self): share_type = { 'extra_specs': { 'capabilities:thin_provisioning': ' False' }, } self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() location = self.driver.create_share(self._context, self.share_nfs, self.share_server) self.assertEqual("100.115.10.68:/share_fake_uuid", location) self.assertEqual(constants.ALLOC_TYPE_THICK_FLAG, self.driver.plugin.helper.alloc_type) @ddt.data(*constants.VALID_SECTOR_SIZES) def test_create_share_with_sectorsize_in_type(self, sectorsize): share_type = { 'extra_specs': { 'capabilities:huawei_sectorsize': " true", 'huawei_sectorsize:sectorsize': sectorsize, }, } self.mock_share_type(share_type) self.driver.plugin.helper.login() location = self.driver.create_share(self._context, self.share_nfs, self.share_server) self.assertEqual("100.115.10.68:/share_fake_uuid", location) self.assertTrue(db.share_type_get.called) @ddt.data('128', 'xx', 'None', ' ') def test_create_share_with_illegal_sectorsize_in_type(self, sectorsize): share_type = { 'extra_specs': { 'capabilities:huawei_sectorsize': " true", 'huawei_sectorsize:sectorsize': sectorsize, }, } self.mock_share_type(share_type) self.driver.plugin.helper.login() self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_nfs, self.share_server) @ddt.data({'extra_specs': {'capabilities:huawei_sectorsize': " false", 'huawei_sectorsize:sectorsize': '0'}, 'xmlvalue': '4'}, {'extra_specs': {'capabilities:huawei_sectorsize': " False", 'huawei_sectorsize:sectorsize': '128'}, 'xmlvalue': '8'}, {'extra_specs': {'capabilities:huawei_sectorsize': "false", 'huawei_sectorsize:sectorsize': 'a'}, 'xmlvalue': '16'}, {'extra_specs': {'capabilities:huawei_sectorsize': "False", 'huawei_sectorsize:sectorsize': 'xx'}, 'xmlvalue': '32'}, {'extra_specs': {'capabilities:huawei_sectorsize': "true", 'huawei_sectorsize:sectorsize': 'None'}, 'xmlvalue': '64'}, {'extra_specs': {'capabilities:huawei_sectorsize': "True", 'huawei_sectorsize:sectorsize': ' '}, 'xmlvalue': ' '}, {'extra_specs': {'capabilities:huawei_sectorsize': "True", 'huawei_sectorsize:sectorsize': ''}, 'xmlvalue': ''}) @ddt.unpack def test_create_share_with_invalid_type_valid_xml(self, extra_specs, xmlvalue): fake_share_type = {} fake_share_type['extra_specs'] = extra_specs self.mock_share_type(fake_share_type) self.recreate_fake_conf_file(sectorsize_value=xmlvalue) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.driver.plugin.helper.login() location = self.driver.create_share(self._context, self.share_nfs, self.share_server) self.assertEqual("100.115.10.68:/share_fake_uuid", location) self.assertTrue(db.share_type_get.called) @ddt.data({'extra_specs': 
{'capabilities:huawei_sectorsize': " false", 'huawei_sectorsize:sectorsize': '4'}, 'xmlvalue': '0'}, {'extra_specs': {'capabilities:huawei_sectorsize': " False", 'huawei_sectorsize:sectorsize': '8'}, 'xmlvalue': '128'}, {'extra_specs': {'capabilities:huawei_sectorsize': "false", 'huawei_sectorsize:sectorsize': '16'}, 'xmlvalue': 'a'}, {'extra_specs': {'capabilities:huawei_sectorsize': "False", 'huawei_sectorsize:sectorsize': '32'}, 'xmlvalue': 'xx'}, {'extra_specs': {'capabilities:huawei_sectorsize': "true", 'huawei_sectorsize:sectorsize': '64'}, 'xmlvalue': 'None'}) @ddt.unpack def test_create_share_with_invalid_type_illegal_xml(self, extra_specs, xmlvalue): fake_share_type = {} fake_share_type['extra_specs'] = extra_specs self.mock_share_type(fake_share_type) self.recreate_fake_conf_file(sectorsize_value=xmlvalue) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.driver.plugin.helper.login() self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_nfs, self.share_server) def test_shrink_share_success(self): self.driver.plugin.helper.shrink_share_flag = False self.driver.plugin.helper.login() self.driver.shrink_share(self.share_nfs, 1, self.share_server) self.assertTrue(self.driver.plugin.helper.shrink_share_flag) def test_shrink_share_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.shrink_share, self.share_nfs, 1, self.share_server) def test_shrink_share_size_fail(self): self.driver.plugin.helper.login() self.assertRaises(exception.InvalidShare, self.driver.shrink_share, self.share_nfs, 5, self.share_server) def test_shrink_share_alloctype_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.fs_status_flag = False self.assertRaises(exception.InvalidShare, self.driver.shrink_share, self.share_nfs, 1, self.share_server) def test_shrink_share_not_exist(self): self.driver.plugin.helper.login() self.driver.plugin.helper.share_exist = False self.assertRaises(exception.InvalidShare, self.driver.shrink_share, self.share_nfs, 1, self.share_server) def test_extend_share_success(self): self.driver.plugin.helper.extend_share_flag = False self.driver.plugin.helper.login() self.driver.extend_share(self.share_nfs, 5, self.share_server) self.assertTrue(self.driver.plugin.helper.extend_share_flag) def test_extend_share_fail(self): self.driver.plugin.helper.login() self.assertRaises(exception.InvalidInput, self.driver.extend_share, self.share_nfs, 3, self.share_server) def test_extend_share_not_exist(self): self.driver.plugin.helper.login() self.driver.plugin.helper.share_exist = False self.assertRaises(exception.InvalidShareAccess, self.driver.extend_share, self.share_nfs, 4, self.share_server) def test_create_share_nfs_success(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() location = self.driver.create_share(self._context, self.share_nfs, self.share_server) self.assertEqual("100.115.10.68:/share_fake_uuid", location) def test_create_share_cifs_success(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() location = self.driver.create_share(self._context, self.share_cifs, self.share_server) self.assertEqual("\\\\100.115.10.68\\share_fake_uuid", location) def 
test_create_share_with_extra(self): self.driver.plugin.helper.add_fs_to_partition_flag = False self.driver.plugin.helper.add_fs_to_cache_flag = False share_type = self.fake_type_w_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) location = self.driver.create_share(self._context, self.share_nfs, self.share_server) self.assertEqual("100.115.10.68:/share_fake_uuid", location) self.assertTrue(self.driver.plugin.helper.add_fs_to_partition_flag) self.assertTrue(self.driver.plugin.helper.add_fs_to_cache_flag) @ddt.data({'capabilities:dedupe': ' True', 'capabilities:thin_provisioning': ' False'}, {'capabilities:dedupe': ' True', 'capabilities:compression': ' True', 'capabilities:thin_provisioning': ' False'}, {'capabilities:huawei_smartcache': ' True', 'huawei_smartcache:cachename': None}, {'capabilities:huawei_smartpartition': ' True', 'huawei_smartpartition:partitionname': None}, {'capabilities:huawei_smartcache': ' True'}, {'capabilities:huawei_smartpartition': ' True'}) def test_create_share_with_extra_error(self, fake_extra_specs): fake_share_type_id = 'fooid-2' fake_type_error_extra = { 'test_with_extra': { 'created_at': 'fake_time', 'deleted': '0', 'deleted_at': None, 'extra_specs': fake_extra_specs, 'required_extra_specs': {}, 'id': fake_share_type_id, 'name': 'test_with_extra', 'updated_at': None } } share_type = fake_type_error_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_nfs_thick, self.share_server) @ddt.data({"fake_extra_specs_qos": {"qos:maxIOPS": "100", "qos:maxBandWidth": "50", "qos:IOType": "0"}, "fake_qos_info": {"MAXIOPS": "100", "MAXBANDWIDTH": "50", "IOTYPE": "0", "LATENCY": "0", "NAME": "OpenStack_fake_qos"}}, {"fake_extra_specs_qos": {"qos:maxIOPS": "100", "qos:IOType": "1"}, "fake_qos_info": {"NAME": "fake_qos", "MAXIOPS": "100", "IOTYPE": "1", "LATENCY": "0"}}, {"fake_extra_specs_qos": {"qos:minIOPS": "100", "qos:minBandWidth": "50", 'qos:latency': "50", "qos:IOType": "0"}, "fake_qos_info": {"MINIOPS": "100", "MINBANDWIDTH": "50", "IOTYPE": "0", "LATENCY": "50", "NAME": "OpenStack_fake_qos"}}) @ddt.unpack def test_create_share_with_qos(self, fake_extra_specs_qos, fake_qos_info): fake_share_type_id = 'fooid-2' fake_extra_specs = {"capabilities:qos": " True"} fake_extra_specs.update(fake_extra_specs_qos) fake_type_error_extra = { 'test_with_extra': { 'created_at': 'fake_time', 'deleted': '0', 'deleted_at': None, 'extra_specs': fake_extra_specs, 'required_extra_specs': {}, 'id': fake_share_type_id, 'name': 'test_with_extra', 'updated_at': None } } fake_qos_info_respons = { "error": { "code": 0 }, "data": [{ "ID": "11", "FSLIST": u'["1", "2", "3", "4"]', "LUNLIST": '[""]', "RUNNINGSTATUS": "2", }] } fake_qos_info_respons["data"][0].update(fake_qos_info) share_type = fake_type_error_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.mock_object(helper.RestHelper, 'get_qos', mock.Mock(return_value=fake_qos_info_respons)) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.driver.plugin.helper.login() location = self.driver.create_share(self._context, self.share_nfs, self.share_server) self.assertEqual("100.115.10.68:/share_fake_uuid", location) @ddt.data({'capabilities:qos': ' True', 'qos:maxIOPS': -1}, {'capabilities:qos': ' True', 'qos:IOTYPE': 4}, 
{'capabilities:qos': ' True', 'qos:IOTYPE': 100}, {'capabilities:qos': ' True', 'qos:maxIOPS': 0}, {'capabilities:qos': ' True', 'qos:minIOPS': 0}, {'capabilities:qos': ' True', 'qos:minBandWidth': 0}, {'capabilities:qos': ' True', 'qos:maxBandWidth': 0}, {'capabilities:qos': ' True', 'qos:latency': 0}, {'capabilities:qos': ' True', 'qos:maxIOPS': 100}, {'capabilities:qos': ' True', 'qos:maxIOPS': 100, 'qos:minBandWidth': 100, 'qos:IOType': '0'}) def test_create_share_with_invalid_qos(self, fake_extra_specs): fake_share_type_id = 'fooid-2' fake_type_error_extra = { 'test_with_extra': { 'created_at': 'fake_time', 'deleted': '0', 'deleted_at': None, 'extra_specs': fake_extra_specs, 'required_extra_specs': {}, 'id': fake_share_type_id, 'name': 'test_with_extra', 'updated_at': None } } share_type = fake_type_error_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.driver.plugin.helper.login() self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_nfs, self.share_server) def test_create_share_cache_not_exist(self): self.driver.plugin.helper.cache_exist = False share_type = self.fake_type_w_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_nfs, self.share_server) def test_add_share_to_cache_fail(self): opts = dict( huawei_smartcache='true', cachename=None, ) fsid = 4 smartcache = smartx.SmartCache(self.driver.plugin.helper) self.assertRaises(exception.InvalidInput, smartcache.add, opts, fsid) def test_create_share_partition_not_exist(self): self.driver.plugin.helper.partition_exist = False share_type = self.fake_type_w_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_nfs, self.share_server) def test_add_share_to_partition_fail(self): opts = dict( huawei_smartpartition='true', partitionname=None, ) fsid = 4 smartpartition = smartx.SmartPartition(self.driver.plugin.helper) self.assertRaises(exception.InvalidInput, smartpartition.add, opts, fsid) def test_login_fail(self): self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.plugin.helper.login) def test_create_share_nfs_fs_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_nfs, self.share_server) def test_create_share_nfs_status_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.fs_status_flag = False self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_nfs, self.share_server) def test_create_share_cifs_fs_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_cifs, self.share_server) def test_create_share_cifs_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.create_share_flag = True self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_cifs, self.share_server) def 
test_create_share_nfs_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.create_share_flag = True self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_nfs, self.share_server) @ddt.data({"share_proto": "NFS", "fake_qos_info_respons": {"ID": "11", "MAXIOPS": "100", "IOType": "2", "FSLIST": u'["0", "1", "4"]'}}, {"share_proto": "CIFS", "fake_qos_info_respons": {"ID": "11", "MAXIOPS": "100", "IOType": "2", "FSLIST": u'["4"]', "RUNNINGSTATUS": "2"}}) @ddt.unpack def test_delete_share_success(self, share_proto, fake_qos_info_respons): self.driver.plugin.helper.login() self.driver.plugin.helper.delete_flag = False if share_proto == 'NFS': share = self.share_nfs else: share = self.share_cifs with mock.patch.object(helper.RestHelper, 'get_qos_info', return_value=fake_qos_info_respons): self.driver.delete_share(self._context, share, self.share_server) self.assertTrue(self.driver.plugin.helper.delete_flag) def test_delete_share_withoutqos_success(self): self.driver.plugin.helper.login() self.driver.plugin.helper.delete_flag = False self.driver.plugin.qos_support = True self.driver.delete_share(self._context, self.share_nfs, self.share_server) self.assertTrue(self.driver.plugin.helper.delete_flag) def test_check_snapshot_id_exist_fail(self): snapshot_id = "4@share_snapshot_not_exist" self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False snapshot_info = self.driver.plugin.helper._get_snapshot_by_id( snapshot_id) self.assertRaises(exception.InvalidShareSnapshot, self.driver.plugin.helper._check_snapshot_id_exist, snapshot_info) def test_delete_share_nfs_fail_not_exist(self): self.driver.plugin.helper.login() self.driver.plugin.helper.delete_flag = False self.driver.plugin.helper.share_exist = False self.driver.delete_share(self._context, self.share_nfs, self.share_server) self.assertTrue(self.driver.plugin.helper.delete_flag) def test_delete_share_cifs_success(self): self.driver.plugin.helper.delete_flag = False fake_qos_info_respons = { "ID": "11", "FSLIST": u'["1", "2", "3", "4"]', "LUNLIST": '[""]', "RUNNINGSTATUS": "2", } self.mock_object(helper.RestHelper, 'get_qos_info', mock.Mock(return_value=fake_qos_info_respons)) self.driver.plugin.helper.login() self.driver.delete_share(self._context, self.share_cifs, self.share_server) self.assertTrue(self.driver.plugin.helper.delete_flag) def test_get_network_allocations_number_dhss_true(self): self.configuration.driver_handles_share_servers = True number = self.driver.get_network_allocations_number() self.assertEqual(1, number) def test_get_network_allocations_number_dhss_false(self): self.configuration.driver_handles_share_servers = False number = self.driver.get_network_allocations_number() self.assertEqual(0, number) def test_create_nfsshare_from_nfssnapshot_success(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.mock_object(self.driver.plugin, 'mount_share_to_host', mock.Mock(return_value={})) self.mock_object(self.driver.plugin, 'copy_snapshot_data', mock.Mock(return_value=True)) self.mock_object(self.driver.plugin, 'umount_share_from_host', mock.Mock(return_value={})) self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = True location = self.driver.create_share_from_snapshot(self._context, self.share_nfs, self.nfs_snapshot, self.share_server) self.assertTrue(db.share_type_get.called) self.assertEqual(2, self.driver.plugin. 
mount_share_to_host.call_count) self.assertTrue(self.driver.plugin. copy_snapshot_data.called) self.assertEqual(2, self.driver.plugin. umount_share_from_host.call_count) self.assertEqual("100.115.10.68:/share_fake_uuid", location) def test_create_cifsshare_from_cifssnapshot_success(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.mock_object(self.driver.plugin, 'mount_share_to_host', mock.Mock(return_value={})) self.mock_object(self.driver.plugin, 'copy_snapshot_data', mock.Mock(return_value=True)) self.mock_object(self.driver.plugin, 'umount_share_from_host', mock.Mock(return_value={})) self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = True location = self.driver.create_share_from_snapshot(self._context, self.share_cifs, self.cifs_snapshot, self.share_server) self.assertTrue(db.share_type_get.called) self.assertEqual(2, self.driver.plugin. mount_share_to_host.call_count) self.assertTrue(self.driver.plugin. copy_snapshot_data.called) self.assertEqual(2, self.driver.plugin. umount_share_from_host.call_count) self.assertEqual("\\\\100.115.10.68\\share_fake_uuid", location) def test_create_nfsshare_from_cifssnapshot_success(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.mock_object(self.driver.plugin, '_get_access_id', mock.Mock(return_value={})) self.mock_object(self.driver.plugin, 'mount_share_to_host', mock.Mock(return_value={})) self.mock_object(self.driver.plugin, 'copy_snapshot_data', mock.Mock(return_value=True)) self.mock_object(self.driver.plugin, 'umount_share_from_host', mock.Mock(return_value={})) self.driver.plugin.helper.login() self.driver.plugin.helper.access_id = None self.driver.plugin.helper.snapshot_flag = True location = self.driver.create_share_from_snapshot(self._context, self.share_nfs, self.cifs_snapshot, self.share_server) self.assertTrue(db.share_type_get.called) self.assertTrue(self.driver.plugin. _get_access_id.called) self.assertEqual(2, self.driver.plugin. mount_share_to_host.call_count) self.assertTrue(self.driver.plugin. copy_snapshot_data.called) self.assertEqual(2, self.driver.plugin. umount_share_from_host.call_count) self.assertEqual("100.115.10.68:/share_fake_uuid", location) def test_create_cifsshare_from_nfssnapshot_success(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.mock_object(self.driver.plugin, '_get_access_id', mock.Mock(return_value={})) self.mock_object(utils, 'execute', mock.Mock(return_value=("", ""))) self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = True location = self.driver.create_share_from_snapshot(self._context, self.share_cifs, self.nfs_snapshot, self.share_server) self.assertTrue(db.share_type_get.called) self.assertTrue(self.driver.plugin. _get_access_id.called) self.assertEqual(7, utils.execute.call_count) self.assertEqual("\\\\100.115.10.68\\share_fake_uuid", location) def test_create_share_from_snapshot_nonefs(self): self.driver.plugin.helper.login() self.mock_object(self.driver.plugin.helper, 'get_fsid_by_name', mock.Mock(return_value={})) self.assertRaises(exception.StorageResourceNotFound, self.driver.create_share_from_snapshot, self._context, self.share_nfs, self.nfs_snapshot, self.share_server) self.assertTrue(self.driver.plugin.helper. 
get_fsid_by_name.called) def test_create_share_from_notexistingsnapshot_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = False self.assertRaises(exception.ShareSnapshotNotFound, self.driver.create_share_from_snapshot, self._context, self.share_nfs, self.nfs_snapshot, self.share_server) def test_create_share_from_share_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = True self.mock_object(self.driver.plugin, 'check_fs_status', mock.Mock(return_value={})) self.assertRaises(exception.StorageResourceException, self.driver.create_share_from_snapshot, self._context, self.share_nfs, self.nfs_snapshot, self.share_server) self.assertTrue(self.driver.plugin.check_fs_status.called) def test_create_share_from_snapshot_share_error(self): self.mock_object(self.driver.plugin, '_get_share_proto', mock.Mock(return_value={})) self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = True self.assertRaises(exception.ShareResourceNotFound, self.driver.create_share_from_snapshot, self._context, self.share_nfs, self.nfs_snapshot, self.share_server) self.assertTrue(self.driver.plugin. _get_share_proto.called) def test_create_share_from_snapshot_allow_oldaccess_fail(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.mock_object(self.driver.plugin, '_get_share_proto', mock.Mock(return_value='NFS')) self.mock_object(self.driver.plugin, '_get_access_id', mock.Mock(return_value={})) self.mock_object(self.driver.plugin.helper, '_get_share_by_name', mock.Mock(return_value={})) self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = True self.assertRaises(exception.ShareResourceNotFound, self.driver.create_share_from_snapshot, self._context, self.share_nfs, self.nfs_snapshot, self.share_server) self.assertTrue(db.share_type_get.called) self.assertTrue(self.driver.plugin._get_share_proto.called) self.assertTrue(self.driver.plugin._get_access_id.called) self.assertTrue(self.driver.plugin.helper._get_share_by_name.called) def test_create_share_from_snapshot_mountshare_fail(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.mock_object(self.driver.plugin, 'mount_share_to_host', mock.Mock(side_effect=exception. ShareMountException('err'))) self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = True self.assertRaises(exception.ShareMountException, self.driver.create_share_from_snapshot, self._context, self.share_nfs, self.nfs_snapshot, self.share_server) self.assertTrue(db.share_type_get.called) self.assertEqual(1, self.driver.plugin. 
mount_share_to_host.call_count) def test_create_share_from_snapshot_allow_newaccess_fail(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.mock_object(self.driver.plugin, '_get_share_proto', mock.Mock(return_value='NFS')) self.mock_object(self.driver.plugin, '_get_access_id', mock.Mock(return_value='5')) self.mock_object(self.driver.plugin, 'mount_share_to_host', mock.Mock(return_value={})) self.mock_object(self.driver.plugin.helper, '_get_share_by_name', mock.Mock(return_value={})) self.mock_object(self.driver.plugin, 'umount_share_from_host', mock.Mock(return_value={})) self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = True self.assertRaises(exception.ShareResourceNotFound, self.driver.create_share_from_snapshot, self._context, self.share_nfs, self.nfs_snapshot, self.share_server) self.assertTrue(db.share_type_get.called) self.assertTrue(self.driver.plugin._get_share_proto.called) self.assertTrue(self.driver.plugin._get_access_id.called) self.assertEqual(1, self.driver.plugin. mount_share_to_host.call_count) self.assertTrue(self.driver.plugin.helper. _get_share_by_name.called) self.assertEqual(1, self.driver.plugin. umount_share_from_host.call_count) def test_create_nfsshare_from_nfssnapshot_copydata_fail(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.mock_object(self.driver.plugin, 'mount_share_to_host', mock.Mock(return_value={})) self.mock_object(data_utils, 'Copy', mock.Mock(side_effect=Exception('err'))) self.mock_object(utils, 'execute', mock.Mock(return_value={})) self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = True self.assertRaises(exception.ShareCopyDataException, self.driver.create_share_from_snapshot, self._context, self.share_nfs, self.nfs_snapshot, self.share_server) self.assertTrue(db.share_type_get.called) self.assertEqual(2, self.driver.plugin. mount_share_to_host.call_count) self.assertTrue(data_utils.Copy.called) self.assertEqual(2, utils.execute.call_count) def test_create_nfsshare_from_nfssnapshot_umountshare_fail(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.mock_object(self.driver.plugin, 'mount_share_to_host', mock.Mock(return_value={})) self.mock_object(self.driver.plugin, 'copy_snapshot_data', mock.Mock(return_value=True)) self.mock_object(self.driver.plugin, 'umount_share_from_host', mock.Mock(side_effect=exception. ShareUmountException('err'))) self.mock_object(os, 'rmdir', mock.Mock(side_effect=Exception('err'))) self.driver.plugin.helper.login() self.driver.plugin.helper.snapshot_flag = True location = self.driver.create_share_from_snapshot(self._context, self.share_nfs, self.cifs_snapshot, self.share_server) self.assertTrue(db.share_type_get.called) self.assertEqual(2, self.driver.plugin. mount_share_to_host.call_count) self.assertTrue(self.driver.plugin.copy_snapshot_data.called) self.assertEqual(2, self.driver.plugin. 
umount_share_from_host.call_count) self.assertTrue(os.rmdir.called) self.assertEqual("100.115.10.68:/share_fake_uuid", location) def test_get_share_stats_refresh_pool_not_exist(self): self.recreate_fake_conf_file(pool_node_flag=False) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.assertRaises(exception.InvalidInput, self.driver._update_share_stats) @ddt.data({"snapshot_support": True, "replication_support": False}, {"snapshot_support": False, "replication_support": True}) @ddt.unpack def test_get_share_stats_refresh(self, snapshot_support, replication_support): self.recreate_fake_conf_file(snapshot_support=snapshot_support, replication_support=replication_support) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.driver.plugin._setup_conf() self.driver._update_share_stats() expected = { "share_backend_name": "fake_share_backend_name", "driver_handles_share_servers": False, "vendor_name": "Huawei", "driver_version": "1.3", "storage_protocol": "NFS_CIFS", "reserved_percentage": 0, "total_capacity_gb": 0.0, "free_capacity_gb": 0.0, "qos": True, "snapshot_support": snapshot_support, "create_share_from_snapshot_support": snapshot_support, "revert_to_snapshot_support": snapshot_support, "mount_snapshot_support": False, "replication_domain": None, "filter_function": None, "goodness_function": None, "pools": [], "share_group_stats": {"consistent_snapshot_support": None}, "ipv4_support": True, "ipv6_support": False, } if replication_support: expected['replication_type'] = 'dr' pool = dict( pool_name='OpenStack_Pool', total_capacity_gb=2.0, free_capacity_gb=1.0, allocated_capacity_gb=1.0, qos=True, reserved_percentage=0, compression=[True, False], dedupe=[True, False], max_over_subscription_ratio=1, provisioned_capacity_gb=1.0, thin_provisioning=[True, False], huawei_smartcache=[True, False], huawei_smartpartition=[True, False], huawei_sectorsize=[True, False], huawei_disk_type='ssd', ) expected["pools"].append(pool) self.assertEqual(expected, self.driver._stats) @ddt.data({'TIER0CAPACITY': '100', 'TIER1CAPACITY': '0', 'TIER2CAPACITY': '0', 'disktype': 'ssd'}, {'TIER0CAPACITY': '0', 'TIER1CAPACITY': '100', 'TIER2CAPACITY': '0', 'disktype': 'sas'}, {'TIER0CAPACITY': '0', 'TIER1CAPACITY': '0', 'TIER2CAPACITY': '100', 'disktype': 'nl_sas'}, {'TIER0CAPACITY': '100', 'TIER1CAPACITY': '100', 'TIER2CAPACITY': '100', 'disktype': 'mix'}, {'TIER0CAPACITY': '0', 'TIER1CAPACITY': '0', 'TIER2CAPACITY': '0', 'disktype': ''}) def test_get_share_stats_disk_type(self, disk_type_value): self.driver.plugin.helper.login() storage_pool_info = {"error": {"code": 0}, "data": [{"USERFREECAPACITY": "2097152", "ID": "1", "NAME": "OpenStack_Pool", "USERTOTALCAPACITY": "4194304", "USAGETYPE": "2", "USERCONSUMEDCAPACITY": "2097152"}]} storage_pool_info['data'][0]['TIER0CAPACITY'] = ( disk_type_value['TIER0CAPACITY']) storage_pool_info['data'][0]['TIER1CAPACITY'] = ( disk_type_value['TIER1CAPACITY']) storage_pool_info['data'][0]['TIER2CAPACITY'] = ( disk_type_value['TIER2CAPACITY']) self.mock_object(self.driver.plugin.helper, '_find_all_pool_info', mock.Mock(return_value=storage_pool_info)) self.driver._update_share_stats() if disk_type_value['disktype']: self.assertEqual( disk_type_value['disktype'], self.driver._stats['pools'][0]['huawei_disk_type']) else: self.assertIsNone( self.driver._stats['pools'][0].get('huawei_disk_type')) def test_get_disk_type_pool_info_none(self): self.driver.plugin.helper.login() self.mock_object(self.driver.plugin.helper, 
'_find_pool_info', mock.Mock(return_value=None)) self.assertRaises(exception.InvalidInput, self.driver._update_share_stats) def test_allow_access_proto_fail(self): self.driver.plugin.helper.login() self.assertRaises(exception.InvalidInput, self.driver.allow_access, self._context, self.share_proto_fail, self.access_ip, self.share_server) def test_allow_access_ip_rw_success(self): self.driver.plugin.helper.login() self.allow_flag = False self.allow_rw_flag = False self.driver.allow_access(self._context, self.share_nfs, self.access_ip, self.share_server) self.assertTrue(self.driver.plugin.helper.allow_flag) self.assertTrue(self.driver.plugin.helper.allow_rw_flag) def test_allow_access_ip_ro_success(self): access_ro = { 'access_type': 'ip', 'access_to': '1.2.3.4', 'access_level': 'ro', } self.driver.plugin.helper.login() self.allow_flag = False self.allow_ro_flag = False self.driver.allow_access(self._context, self.share_nfs, access_ro, self.share_server) self.assertTrue(self.driver.plugin.helper.allow_flag) self.assertTrue(self.driver.plugin.helper.allow_ro_flag) def test_allow_access_nfs_user_success(self): self.driver.plugin.helper.login() self.allow_flag = False self.allow_rw_flag = False self.driver.allow_access(self._context, self.share_nfs, self.access_user, self.share_server) self.assertTrue(self.driver.plugin.helper.allow_flag) self.assertTrue(self.driver.plugin.helper.allow_rw_flag) @ddt.data( { 'access_type': 'user', 'access_to': 'user_name', 'access_level': 'rw', }, { 'access_type': 'user', 'access_to': 'group_name', 'access_level': 'rw', }, { 'access_type': 'user', 'access_to': 'domain\\user_name', 'access_level': 'rw', }, { 'access_type': 'user', 'access_to': 'domain\\group_name', 'access_level': 'rw', }, ) def test_allow_access_cifs_rw_success(self, access_user): self.driver.plugin.helper.login() self.allow_flag = False self.allow_rw_flag = False self.driver.allow_access(self._context, self.share_cifs, access_user, self.share_server) self.assertTrue(self.driver.plugin.helper.allow_flag) self.assertTrue(self.driver.plugin.helper.allow_rw_flag) def test_allow_access_cifs_user_ro_success(self): access_ro = { 'access_type': 'user', 'access_to': 'user_name', 'access_level': 'ro', } self.driver.plugin.helper.login() self.allow_flag = False self.allow_ro_flag = False self.driver.allow_access(self._context, self.share_cifs, access_ro, self.share_server) self.assertTrue(self.driver.plugin.helper.allow_flag) self.assertTrue(self.driver.plugin.helper.allow_ro_flag) def test_allow_access_level_fail(self): access_fail = { 'access_type': 'user', 'access_to': 'user_name', 'access_level': 'fail', } self.driver.plugin.helper.login() self.assertRaises(exception.InvalidShareAccess, self.driver.allow_access, self._context, self.share_cifs, access_fail, self.share_server) def test_update_access_add_delete(self): self.driver.plugin.helper.login() self.allow_flag = False self.allow_rw_flag = False self.deny_flag = False add_rules = [self.access_ip] delete_rules = [self.access_ip_exist] self.driver.update_access(self._context, self.share_nfs, None, add_rules, delete_rules, self.share_server) self.assertTrue(self.driver.plugin.helper.allow_flag) self.assertTrue(self.driver.plugin.helper.allow_rw_flag) self.assertTrue(self.driver.plugin.helper.deny_flag) def test_update_access_nfs(self): self.driver.plugin.helper.login() self.allow_flag = False self.allow_rw_flag = False rules = [self.access_ip, self.access_ip_exist] self.driver.update_access(self._context, self.share_nfs, rules, None, None, 
self.share_server) self.assertTrue(self.driver.plugin.helper.allow_flag) self.assertTrue(self.driver.plugin.helper.allow_rw_flag) def test_update_access_cifs(self): self.driver.plugin.helper.login() self.allow_flag = False self.allow_rw_flag = False rules = [self.access_user, self.access_user_exist] self.driver.update_access(self._context, self.share_cifs, rules, None, None, self.share_server) self.assertTrue(self.driver.plugin.helper.allow_flag) self.assertTrue(self.driver.plugin.helper.allow_rw_flag) def test_update_access_rules_share_not_exist(self): self.driver.plugin.helper.login() rules = [self.access_ip] self.driver.plugin.helper.share_exist = False self.assertRaises(exception.ShareResourceNotFound, self.driver.update_access, self._context, self.share_nfs, rules, None, None, self.share_server) @ddt.data(True, False) def test_nfs_access_for_all_ip_addresses(self, is_allow): access_all = { 'access_type': 'ip', 'access_to': '0.0.0.0/0', 'access_level': 'rw', } self.driver.plugin.helper.login() method = (self.driver.allow_access if is_allow else self.driver.deny_access) with mock.patch.object(self.driver.plugin.helper, '_get_access_from_share') as mock_call: mock_call.return_value = None method(self._context, self.share_nfs, access_all, self.share_server) mock_call.assert_called_with('1', '*', 'NFS') def test_get_share_client_type_fail(self): share_proto = 'fake_proto' self.assertRaises(exception.InvalidInput, self.driver.plugin.helper._get_share_client_type, share_proto) @ddt.data("NFS", "CIFS") def test_get_share_url_type(self, share_proto): share_url_type = self.driver.plugin.helper._get_share_url_type( share_proto) self.assertEqual(share_proto + 'HARE', share_url_type) def test_get_location_path_fail(self): share_name = 'share-fake-uuid' share_proto = 'fake_proto' self.assertRaises(exception.InvalidShareAccess, self.driver.plugin._get_location_path, share_name, share_proto) def test_allow_access_nfs_fail(self): self.driver.plugin.helper.login() self.assertRaises(exception.InvalidShareAccess, self.driver.allow_access, self._context, self.share_nfs, self.access_cert, self.share_server) def test_allow_access_cifs_fail(self): self.driver.plugin.helper.login() self.assertRaises(exception.InvalidShareAccess, self.driver.allow_access, self._context, self.share_cifs, self.access_ip, self.share_server) def test_deny_access_nfs_fail(self): self.driver.plugin.helper.login() result = self.driver.deny_access(self._context, self.share_nfs, self.access_cert, self.share_server) self.assertIsNone(result) def test_deny_access_not_exist_fail(self): self.driver.plugin.helper.login() access_ip_not_exist = { 'access_type': 'ip', 'access_to': '100.112.0.99', 'access_level': 'rw', } result = self.driver.deny_access(self._context, self.share_nfs, access_ip_not_exist, self.share_server) self.assertIsNone(result) def test_deny_access_cifs_fail(self): self.driver.plugin.helper.login() result = self.driver.deny_access(self._context, self.share_cifs, self.access_ip, self.share_server) self.assertIsNone(result) def test_allow_access_ip_share_not_exist(self): self.driver.plugin.helper.login() self.driver.plugin.helper.share_exist = False self.assertRaises(exception.ShareResourceNotFound, self.driver.allow_access, self._context, self.share_nfs, self.access_ip, self.share_server) def test_deny_access_ip_share_not_exist(self): self.driver.plugin.helper.login() self.driver.plugin.helper.share_exist = False self.driver.deny_access(self._context, self.share_nfs, self.access_ip, self.share_server) def 
test_allow_access_ip_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.allow_access, self._context, self.share_nfs, self.access_ip, self.share_server) def test_allow_access_user_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.allow_access, self._context, self.share_cifs, self.access_user, self.share_server) def test_deny_access_ip_success(self): self.driver.plugin.helper.login() self.deny_flag = False self.driver.deny_access(self._context, self.share_nfs, self.access_ip_exist, self.share_server) self.assertTrue(self.driver.plugin.helper.deny_flag) def test_deny_access_user_success(self): self.driver.plugin.helper.login() self.deny_flag = False self.driver.deny_access(self._context, self.share_cifs, self.access_user_exist, self.share_server) self.assertTrue(self.driver.plugin.helper.deny_flag) def test_deny_access_ip_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.deny_access, self._context, self.share_nfs, self.access_ip, self.share_server) def test_deny_access_user_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.deny_access, self._context, self.share_cifs, self.access_user, self.share_server) def test_create_nfs_snapshot_success(self): self.driver.plugin.helper.login() self.driver.plugin.helper.create_snapflag = False self.driver.create_snapshot(self._context, self.nfs_snapshot, self.share_server) self.assertTrue(self.driver.plugin.helper.create_snapflag) def test_create_nfs_snapshot_share_not_exist(self): self.driver.plugin.helper.login() self.driver.plugin.helper.share_exist = False self.assertRaises(exception.InvalidInput, self.driver.create_snapshot, self._context, self.nfs_snapshot, self.share_server) def test_create_cifs_snapshot_success(self): self.driver.plugin.helper.login() self.driver.plugin.helper.create_snapflag = False self.driver.create_snapshot(self._context, self.cifs_snapshot, self.share_server) self.assertTrue(self.driver.plugin.helper.create_snapflag) def test_delete_snapshot_success(self): self.driver.plugin.helper.login() self.driver.plugin.helper.delete_flag = False self.driver.plugin.helper.snapshot_flag = True self.driver.delete_snapshot(self._context, self.nfs_snapshot, self.share_server) self.assertTrue(self.driver.plugin.helper.delete_flag) def test_delete_snapshot_not_exist_success(self): self.driver.plugin.helper.login() self.driver.plugin.helper.delete_flag = False self.driver.plugin.helper.snapshot_flag = False self.driver.delete_snapshot(self._context, self.nfs_snapshot, self.share_server) self.assertTrue(self.driver.plugin.helper.delete_flag) def test_create_nfs_snapshot_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.create_snapshot, self._context, self.nfs_snapshot, self.share_server) def test_create_cifs_snapshot_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.create_snapshot, self._context, self.cifs_snapshot, self.share_server) def test_delete_nfs_snapshot_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False 
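        # Clearing test_normal makes the fake REST helper answer with error
        # codes, so the snapshot deletion below is expected to surface
        # InvalidShare (the same pattern used by the other *_fail tests above).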
self.assertRaises(exception.InvalidShare, self.driver.delete_snapshot, self._context, self.nfs_snapshot, self.share_server) def test_delete_cifs_snapshot_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.test_normal = False self.assertRaises(exception.InvalidShare, self.driver.delete_snapshot, self._context, self.cifs_snapshot, self.share_server) @ddt.data({"share_proto": "NFS", "path": ["100.115.10.68:/share_fake_manage_uuid"]}, {"share_proto": "CIFS", "path": ["\\\\100.115.10.68\\share_fake_manage_uuid"]}) @ddt.unpack def test_manage_share_nfs_success(self, share_proto, path): if share_proto == "NFS": share = self.share_manage_nfs elif share_proto == "CIFS": share = self.share_manage_cifs share_type = self.fake_type_w_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() share_info = self.driver.manage_existing(share, self.driver_options) self.assertEqual(4, share_info["size"]) self.assertEqual(path, share_info["export_locations"]) @ddt.data({"fs_alloctype": "THIN", "path": ["100.115.10.68:/share_fake_manage_uuid"]}, {"fs_alloctype": "THICK", "path": ["100.115.10.68:/share_fake_uuid_thickfs"]}) @ddt.unpack def test_manage_share_with_default_type(self, fs_alloctype, path): if fs_alloctype == "THIN": share = self.share_manage_nfs elif fs_alloctype == "THICK": share = self.share_nfs_thick_thickfs share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() share_info = self.driver.manage_existing(share, self.driver_options) self.assertEqual(4, share_info["size"]) self.assertEqual(path, share_info["export_locations"]) @ddt.data({"path": ["100.115.10.68:/share_fake_uuid_inpartition"]}) @ddt.unpack def test_manage_share_remove_from_partition(self, path): share = self.share_nfs_inpartition share_type = self.fake_type_fake_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() share_info = self.driver.manage_existing(share, self.driver_options) self.assertEqual(4, share_info["size"]) self.assertEqual(path, share_info["export_locations"]) @ddt.data({"flag": "share_not_exist", "exc": exception.InvalidShare}, {"flag": "fs_status_error", "exc": exception.InvalidShare}, {"flag": "poolname_not_match", "exc": exception.InvalidHost}) @ddt.unpack def test_manage_share_fail(self, flag, exc): share = None if flag == "share_not_exist": self.driver.plugin.helper.share_exist = False share = self.share_nfs elif flag == "fs_status_error": self.driver.plugin.helper.fs_status_flag = False share = self.share_nfs elif flag == "poolname_not_match": share = self.share_pool_name_not_match self.driver.plugin.helper.login() share_type = self.fake_type_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.assertRaises(exc, self.driver.manage_existing, share, self.driver_options) def test_manage_share_thickfs_set_dedupe_fail(self): share = self.share_nfs_thick_thickfs self.driver.plugin.helper.login() share_type = self.fake_type_thin_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.assertRaises(exception.InvalidInput, self.driver.manage_existing, share, self.driver_options) def test_manage_share_thickfs_not_match_thinpool_fail(self): share = self.share_nfs_thickfs self.driver.plugin.helper.login() share_type = 
self.fake_type_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.assertRaises(exception.InvalidHost, self.driver.manage_existing, share, self.driver_options) @ddt.data({"flag": "old_cache_id", "exc": exception.InvalidInput}, {"flag": "not_old_cache_id", "exc": exception.InvalidInput}) @ddt.unpack def test_manage_share_cache_not_exist(self, flag, exc): share = None if flag == "old_cache_id": share = self.share_nfs_inpartition elif flag == "not_old_cache_id": share = self.share_nfs self.driver.plugin.helper.cache_exist = False share_type = self.fake_type_w_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() self.assertRaises(exc, self.driver.manage_existing, share, self.share_server) def test_manage_add_share_to_cache_fail(self): opts = dict( huawei_smartcache='true', huawei_smartpartition='true', cachename='test_cache_name_fake', partitionname='test_partition_name_fake', ) fs = dict( SMARTCACHEID='6', SMARTPARTITIONID=None, ) poolinfo = dict( type='Thin', ) self.assertRaises(exception.InvalidInput, self.driver.plugin.check_retype_change_opts, opts, poolinfo, fs) def test_manage_notsetcache_fail(self): opts = dict( huawei_smartcache='true', huawei_smartpartition='true', cachename=None, partitionname='test_partition_name_fake', ) fs = dict( SMARTCACHEID='6', SMARTPARTITIONID='6', ) poolinfo = dict( type='Thin', ) self.assertRaises(exception.InvalidInput, self.driver.plugin.check_retype_change_opts, opts, poolinfo, fs) @ddt.data({"flag": "old_partition_id", "exc": exception.InvalidInput}, {"flag": "not_old_partition_id", "exc": exception.InvalidInput}) @ddt.unpack def test_manage_share_partition_not_exist(self, flag, exc): share = None if flag == "old_partition_id": share = self.share_nfs_inpartition elif flag == "not_old_partition_id": share = self.share_nfs self.driver.plugin.helper.partition_exist = False share_type = self.fake_type_w_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() self.assertRaises(exc, self.driver.manage_existing, share, self.share_server) def test_manage_add_share_to_partition_fail(self): opts = dict( huawei_smartcache='true', huawei_smartpartition='true', cachename='test_cache_name_fake', partitionname='test_partition_name_fake', ) fs = dict( SMARTCACHEID=None, SMARTPARTITIONID='6', ) poolinfo = dict( type='Thin', ) self.assertRaises(exception.InvalidInput, self.driver.plugin.check_retype_change_opts, opts, poolinfo, fs) def test_manage_notset_partition_fail(self): opts = dict( huawei_smartcache='true', huawei_smartpartition='true', cachename='test_cache_name_fake', partitionname=None, ) fs = dict( SMARTCACHEID=None, SMARTPARTITIONID='6', ) poolinfo = dict( type='Thin', ) self.assertRaises(exception.InvalidInput, self.driver.plugin.check_retype_change_opts, opts, poolinfo, fs) @ddt.data({"share_proto": "NFS", "export_path": "fake_ip:/share_fake_uuid"}, {"share_proto": "NFS", "export_path": "fake_ip:/"}, {"share_proto": "NFS", "export_path": "100.112.0.1://share_fake_uuid"}, {"share_proto": "NFS", "export_path": None}, {"share_proto": "NFS", "export_path": "\\share_fake_uuid"}, {"share_proto": "CIFS", "export_path": "\\\\fake_ip\\share_fake_uuid"}, {"share_proto": "CIFS", "export_path": "\\dd\\100.115.10.68\\share_fake_uuid"}) @ddt.unpack def test_manage_export_path_fail(self, share_proto, export_path): share_manage_nfs_export_path_fail = { 'id': 
'fake_uuid', 'project_id': 'fake_tenant_id', 'display_name': 'fake', 'name': 'share-fake-manage-uuid', 'size': 1, 'share_proto': share_proto, 'share_network_id': 'fake_net_id', 'share_server_id': 'fake-share-srv-id', 'export_locations': [ {'path': export_path}, ], 'host': 'fake_host@fake_backend#OpenStack_Pool', 'share_type_id': 'fake_id' } share_type = self.fake_type_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.login() self.assertRaises(exception.InvalidInput, self.driver.manage_existing, share_manage_nfs_export_path_fail, self.driver_options) def test_manage_logical_port_ip_fail(self): self.recreate_fake_conf_file(logical_port="") self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.driver.plugin.helper.login() share_type = self.fake_type_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.assertRaises(exception.InvalidInput, self.driver.manage_existing, self.share_nfs, self.driver_options) @ddt.data({"share_proto": "NFS", "provider_location": "share_snapshot_fake_snapshot_uuid"}, {"share_proto": "CIFS", "provider_location": "share_snapshot_fake_snapshot_uuid"}) @ddt.unpack def test_manage_existing_snapshot_success(self, share_proto, provider_location): if share_proto == "NFS": snapshot = self.storage_nfs_snapshot elif share_proto == "CIFS": snapshot = self.storage_cifs_snapshot self.driver.plugin.helper.login() snapshot_info = self.driver.manage_existing_snapshot( snapshot, self.driver_options) self.assertEqual(provider_location, snapshot_info['provider_location']) def test_manage_existing_snapshot_share_not_exist(self): self.driver.plugin.helper.login() self.mock_object(self.driver.plugin.helper, '_get_share_by_name', mock.Mock(return_value={})) self.assertRaises(exception.InvalidShare, self.driver.manage_existing_snapshot, self.storage_nfs_snapshot, self.driver_options) def test_manage_existing_snapshot_sharesnapshot_not_exist(self): self.driver.plugin.helper.login() self.mock_object(self.driver.plugin.helper, '_check_snapshot_id_exist', mock.Mock(return_value={})) self.assertRaises(exception.ManageInvalidShareSnapshot, self.driver.manage_existing_snapshot, self.storage_nfs_snapshot, self.driver_options) def test_manage_existing_snapshot_sharesnapshot_not_normal(self): snapshot_info = {"error": {"code": 0}, "data": {"ID": "4@share_snapshot_fake_snapshot_uuid", "NAME": "share_snapshot_fake_snapshot_uuid", "HEALTHSTATUS": "2"}} self.driver.plugin.helper.login() self.mock_object(self.driver.plugin.helper, '_get_snapshot_by_id', mock.Mock(return_value=snapshot_info)) self.assertRaises(exception.ManageInvalidShareSnapshot, self.driver.manage_existing_snapshot, self.storage_nfs_snapshot, self.driver_options) def test_get_pool_success(self): self.driver.plugin.helper.login() pool_name = self.driver.get_pool(self.share_nfs_host_not_exist) self.assertEqual('OpenStack_Pool', pool_name) def test_get_pool_fail(self): self.driver.plugin.helper.login() self.driver.plugin.helper.share_exist = False pool_name = self.driver.get_pool(self.share_nfs_host_not_exist) self.assertIsNone(pool_name) def test_multi_resturls_success(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.recreate_fake_conf_file(multi_url=True) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.driver.plugin.helper.test_multi_url_flag = 2 
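        # With multi_url enabled, a flag value of 2 appears to make the fake
        # helper reject the first RestURL and accept the second, so the share
        # creation below should still succeed; the companion failure test sets
        # the flag to 1 (no reachable URL) and expects InvalidShare.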
location = self.driver.create_share(self._context, self.share_nfs, self.share_server) self.assertEqual("100.115.10.68:/share_fake_uuid", location) def test_multi_resturls_fail(self): self.recreate_fake_conf_file(multi_url=True) self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.driver.plugin.helper.test_multi_url_flag = 1 self.assertRaises(exception.InvalidShare, self.driver.create_share, self._context, self.share_nfs, self.share_server) @dec_driver_handles_share_servers def test_setup_server_success(self): backend_details = self.driver.setup_server(self.fake_network_info) fake_share_server = { 'backend_details': backend_details } share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) location = self.driver.create_share(self._context, self.share_nfs, fake_share_server) self.assertTrue(db.share_type_get.called) self.assertEqual((self.fake_network_allocations[0]['ip_address'] + ":/share_fake_uuid"), location) @dec_driver_handles_share_servers def test_setup_server_with_bond_port_success(self): self.recreate_fake_conf_file(logical_port='fake_bond') self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) backend_details = self.driver.setup_server(self.fake_network_info) fake_share_server = { 'backend_details': backend_details } share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) location = self.driver.create_share(self._context, self.share_nfs, fake_share_server) self.assertTrue(db.share_type_get.called) self.assertEqual((self.fake_network_allocations[0]['ip_address'] + ":/share_fake_uuid"), location) @dec_driver_handles_share_servers def test_setup_server_logical_port_exist(self): def call_logical_port_exist(*args, **kwargs): url = args[0] method = args[2] if url == "/LIF" and method == "GET": data = """{"error":{"code":0},"data":[{ "ID":"4", "HOMEPORTID":"4", "IPV4ADDR":"111.111.111.109", "IPV4MASK":"255.255.255.0", "OPERATIONALSTATUS":"false"}]}""" elif url == "/LIF/4" and method == "PUT": data = """{"error":{"code":0}}""" else: return self.driver.plugin.helper.do_call(*args, **kwargs) res_json = jsonutils.loads(data) return res_json self.mock_object(self.driver.plugin.helper, "create_logical_port") with mock.patch.object(self.driver.plugin.helper, 'call') as mock_call: mock_call.side_effect = call_logical_port_exist backend_details = self.driver.setup_server(self.fake_network_info) self.assertEqual(backend_details['ip'], self.fake_network_allocations[0]['ip_address']) self.assertEqual( 0, self.driver.plugin.helper.create_logical_port.call_count) @dec_driver_handles_share_servers def test_setup_server_vlan_exist(self): def call_vlan_exist(*args, **kwargs): url = args[0] method = args[2] if url == "/vlan" and method == "GET": data = """{"error":{"code":0},"data":[{ "ID":"4", "NAME":"fake_vlan", "PORTID":"4", "TAG":"2"}]}""" else: return self.driver.plugin.helper.do_call(*args, **kwargs) res_json = jsonutils.loads(data) return res_json self.mock_object(self.driver.plugin.helper, "create_vlan") with mock.patch.object(self.driver.plugin.helper, 'call') as mock_call: mock_call.side_effect = call_vlan_exist backend_details = self.driver.setup_server(self.fake_network_info) self.assertEqual(backend_details['ip'], self.fake_network_allocations[0]['ip_address']) self.assertEqual( 0, self.driver.plugin.helper.create_vlan.call_count) def test_setup_server_invalid_ipv4(self): 
netwot_info_invali_ipv4 = self.fake_network_info netwot_info_invali_ipv4['network_allocations'][0]['ip_address'] = ( "::1/128") self.assertRaises(exception.InvalidInput, self.driver._setup_server, netwot_info_invali_ipv4) @dec_driver_handles_share_servers def test_setup_server_network_type_error(self): vxlan_netwotk_info = self.fake_network_info vxlan_netwotk_info['network_type'] = 'vxlan' self.assertRaises(exception.NetworkBadConfigurationException, self.driver.setup_server, vxlan_netwotk_info) @dec_driver_handles_share_servers def test_setup_server_port_conf_miss(self): self.recreate_fake_conf_file(logical_port='') self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) backend_details = self.driver.setup_server(self.fake_network_info) self.assertEqual(self.fake_network_allocations[0]['ip_address'], backend_details['ip']) @dec_driver_handles_share_servers def test_setup_server_port_offline_error(self): self.mock_object(self.driver.plugin, '_get_online_port', mock.Mock(return_value=(None, None))) self.assertRaises(exception.InvalidInput, self.driver.setup_server, self.fake_network_info) self.assertTrue(self.driver.plugin._get_online_port.called) @dec_driver_handles_share_servers def test_setup_server_port_not_exist(self): self.mock_object(self.driver.plugin.helper, 'get_port_id', mock.Mock(return_value=None)) self.assertRaises(exception.InvalidInput, self.driver.setup_server, self.fake_network_info) self.assertTrue(self.driver.plugin.helper.get_port_id.called) @dec_driver_handles_share_servers def test_setup_server_port_type_not_exist(self): self.mock_object(self.driver.plugin, '_get_optimal_port', mock.Mock(return_value=('CTE0.A.H2', '8'))) self.assertRaises(exception.InvalidInput, self.driver.setup_server, self.fake_network_info) self.assertTrue(self.driver.plugin._get_optimal_port.called) @dec_driver_handles_share_servers def test_setup_server_choose_eth_port(self): self.recreate_fake_conf_file(logical_port='CTE0.A.H0;fake_bond') self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.mock_object(self.driver.plugin.helper, 'get_all_vlan', mock.Mock(return_value=[{'NAME': 'fake_bond.10'}])) fake_network_info = self.fake_network_info backend_details = self.driver.setup_server(fake_network_info) self.assertTrue(self.driver.plugin.helper.get_all_vlan.called) self.assertEqual(self.fake_network_allocations[0]['ip_address'], backend_details['ip']) @dec_driver_handles_share_servers def test_setup_server_choose_bond_port(self): self.recreate_fake_conf_file(logical_port='CTE0.A.H0;fake_bond') self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) self.mock_object(self.driver.plugin.helper, 'get_all_vlan', mock.Mock(return_value=[{'NAME': 'CTE0.A.H0.10'}])) fake_network_info = self.fake_network_info backend_details = self.driver.setup_server(fake_network_info) self.assertTrue(self.driver.plugin.helper.get_all_vlan.called) self.assertEqual(self.fake_network_allocations[0]['ip_address'], backend_details['ip']) @dec_driver_handles_share_servers def test_setup_server_choose_least_logic_port(self): self.recreate_fake_conf_file( logical_port='CTE0.A.H0;CTE0.A.H2;CTE0.B.H0;BOND0') self.driver.plugin.configuration.manila_huawei_conf_file = ( self.fake_conf_file) fake_network_info = { 'server_id': '0', 'segmentation_id': None, 'cidr': '111.111.111.0/24', 'network_allocations': self.fake_network_allocations, 'network_type': None, } self.mock_object(self.driver.plugin, '_get_online_port', 
mock.Mock(return_value=(['CTE0.A.H0', 'CTE0.A.H2', 'CTE0.B.H0'], ['BOND0']))) self.mock_object(self.driver.plugin.helper, 'get_all_logical_port', mock.Mock(return_value=[ {'HOMEPORTTYPE': constants.PORT_TYPE_ETH, 'HOMEPORTNAME': 'CTE0.A.H0'}, {'HOMEPORTTYPE': constants.PORT_TYPE_VLAN, 'HOMEPORTNAME': 'CTE0.B.H0.10'}, {'HOMEPORTTYPE': constants.PORT_TYPE_BOND, 'HOMEPORTNAME': 'BOND0'}])) self.mock_object(self.driver.plugin.helper, 'get_port_id', mock.Mock(return_value=4)) backend_details = self.driver.setup_server(fake_network_info) self.assertEqual(self.fake_network_allocations[0]['ip_address'], backend_details['ip']) self.driver.plugin._get_online_port.assert_called_once_with( ['CTE0.A.H0', 'CTE0.A.H2', 'CTE0.B.H0', 'BOND0']) self.assertTrue(self.driver.plugin.helper.get_all_logical_port.called) self.driver.plugin.helper.get_port_id.assert_called_once_with( 'CTE0.A.H2', constants.PORT_TYPE_ETH) @dec_driver_handles_share_servers def test_setup_server_create_vlan_fail(self): def call_create_vlan_fail(*args, **kwargs): url = args[0] method = args[2] if url == "/vlan" and method == "POST": data = """{"error":{"code":1}}""" res_json = jsonutils.loads(data) return res_json else: return self.driver.plugin.helper.do_call(*args, **kwargs) with mock.patch.object(self.driver.plugin.helper, 'call') as mock_call: mock_call.side_effect = call_create_vlan_fail self.assertRaises(exception.InvalidShare, self.driver.setup_server, self.fake_network_info) @dec_driver_handles_share_servers def test_setup_server_create_logical_port_fail(self): def call_create_logical_port_fail(*args, **kwargs): url = args[0] method = args[2] if url == "/LIF" and method == "POST": data = """{"error":{"code":1}}""" res_json = jsonutils.loads(data) return res_json else: return self.driver.plugin.helper.do_call(*args, **kwargs) fake_network_info = self.fake_network_info fake_network_info['security_services'] = [ self.fake_active_directory, self.fake_ldap] self.mock_object(self.driver.plugin.helper, "delete_vlan") self.mock_object(self.driver.plugin.helper, "delete_AD_config") self.mock_object(self.driver.plugin.helper, "delete_LDAP_config") self.mock_object(self.driver.plugin.helper, "get_AD_config", mock.Mock(side_effect=[None, {'DOMAINSTATUS': '1'}, {'DOMAINSTATUS': '0'}])) self.mock_object( self.driver.plugin.helper, "get_LDAP_config", mock.Mock( side_effect=[None, {'BASEDN': 'dc=huawei,dc=com'}])) with mock.patch.object(self.driver.plugin.helper, 'call') as mock_call: mock_call.side_effect = call_create_logical_port_fail self.assertRaises(exception.InvalidShare, self.driver.setup_server, fake_network_info) self.assertTrue(self.driver.plugin.helper.get_AD_config.called) self.assertTrue(self.driver.plugin.helper.get_LDAP_config.called) self.assertEqual( 1, self.driver.plugin.helper.delete_vlan.call_count) self.assertEqual( 1, self.driver.plugin.helper.delete_AD_config.call_count) self.assertEqual( 1, self.driver.plugin.helper.delete_LDAP_config.call_count) @dec_driver_handles_share_servers def test_setup_server_with_ad_domain_success(self): fake_network_info = self.fake_network_info fake_network_info['security_services'] = [self.fake_active_directory] self.mock_object(self.driver.plugin.helper, "get_AD_config", mock.Mock( side_effect=[None, {'DOMAINSTATUS': '0', 'FULLDOMAINNAME': 'huawei.com'}, {'DOMAINSTATUS': '1', 'FULLDOMAINNAME': 'huawei.com'}])) backend_details = self.driver.setup_server(fake_network_info) self.assertEqual(self.fake_network_allocations[0]['ip_address'], backend_details['ip']) 
self.assertTrue(self.driver.plugin.helper.get_AD_config.called) @ddt.data( "100.97.5.87", "100.97.5.87,100.97.5.88", "100.97.5.87,100.97.5.88,100.97.5.89" ) @dec_driver_handles_share_servers def test_setup_server_with_ldap_domain_success(self, server_ips): fake_network_info = self.fake_network_info fake_network_info['security_services'] = [self.fake_ldap] fake_network_info['security_services'][0]['server'] = server_ips self.mock_object( self.driver.plugin.helper, "get_LDAP_config", mock.Mock( side_effect=[None, {'BASEDN': 'dc=huawei,dc=com'}])) backend_details = self.driver.setup_server(fake_network_info) self.assertEqual(self.fake_network_allocations[0]['ip_address'], backend_details['ip']) self.assertTrue(self.driver.plugin.helper.get_LDAP_config.called) @dec_driver_handles_share_servers def test_setup_server_with_ldap_domain_fail(self): server_ips = "100.97.5.87,100.97.5.88,100.97.5.89,100.97.5.86" fake_network_info = self.fake_network_info fake_network_info['security_services'] = [self.fake_ldap] fake_network_info['security_services'][0]['server'] = server_ips self.mock_object( self.driver.plugin.helper, "get_LDAP_config", mock.Mock( side_effect=[None, {'BASEDN': 'dc=huawei,dc=com'}])) self.assertRaises(exception.InvalidInput, self.driver.setup_server, fake_network_info) self.assertTrue(self.driver.plugin.helper.get_LDAP_config.called) @ddt.data( {'type': 'fake_unsupport'}, {'type': 'active_directory', 'dns_ip': '', 'user': '', 'password': '', 'domain': ''}, {'type': 'ldap', 'server': '', 'domain': ''}, ) @dec_driver_handles_share_servers def test_setup_server_with_security_service_invalid(self, data): fake_network_info = self.fake_network_info fake_network_info['security_services'] = [data] self.assertRaises(exception.InvalidInput, self.driver.setup_server, fake_network_info) @dec_driver_handles_share_servers def test_setup_server_with_security_service_number_invalid(self): fake_network_info = self.fake_network_info ss = [ {'type': 'fake_unsupport'}, {'type': 'active_directory', 'dns_ip': '', 'user': '', 'password': '', 'domain': ''}, {'type': 'ldap', 'server': '', 'domain': ''}, ] fake_network_info['security_services'] = ss self.assertRaises(exception.InvalidInput, self.driver.setup_server, fake_network_info) @dec_driver_handles_share_servers def test_setup_server_dns_exist_error(self): fake_network_info = self.fake_network_info fake_network_info['security_services'] = [self.fake_active_directory] self.mock_object(self.driver.plugin.helper, "get_DNS_ip_address", mock.Mock(return_value=['100.97.5.85'])) self.assertRaises(exception.InvalidInput, self.driver.setup_server, fake_network_info) self.assertTrue(self.driver.plugin.helper.get_DNS_ip_address.called) @dec_driver_handles_share_servers def test_setup_server_ad_exist_error(self): fake_network_info = self.fake_network_info fake_network_info['security_services'] = [self.fake_active_directory] self.mock_object(self.driver.plugin.helper, "get_AD_config", mock.Mock( return_value={'DOMAINSTATUS': '1', 'FULLDOMAINNAME': 'huawei.com'})) self.assertRaises(exception.InvalidInput, self.driver.setup_server, fake_network_info) self.assertTrue(self.driver.plugin.helper.get_AD_config.called) @dec_driver_handles_share_servers def test_setup_server_ldap_exist_error(self): fake_network_info = self.fake_network_info fake_network_info['security_services'] = [self.fake_ldap] self.mock_object(self.driver.plugin.helper, "get_LDAP_config", mock.Mock( return_value={'LDAPSERVER': '100.97.5.87'})) self.assertRaises(exception.InvalidInput, 
self.driver.setup_server, fake_network_info) self.assertTrue(self.driver.plugin.helper.get_LDAP_config.called) @dec_driver_handles_share_servers def test_setup_server_with_dns_fail(self): fake_network_info = self.fake_network_info fake_active_directory = self.fake_active_directory ip_list = "100.97.5.5,100.97.5.6,100.97.5.7,100.97.5.8" fake_active_directory['dns_ip'] = ip_list fake_network_info['security_services'] = [fake_active_directory] self.mock_object( self.driver.plugin.helper, "get_AD_config", mock.Mock(side_effect=[None, {'DOMAINSTATUS': '1'}])) self.assertRaises(exception.InvalidInput, self.driver.setup_server, fake_network_info) self.assertTrue(self.driver.plugin.helper.get_AD_config.called) @dec_driver_handles_share_servers def test_setup_server_with_ad_domain_fail(self): fake_network_info = self.fake_network_info fake_network_info['security_services'] = [self.fake_active_directory] self.mock_object(self.driver.plugin, '_get_wait_interval', mock.Mock(return_value=1)) self.mock_object(self.driver.plugin, '_get_timeout', mock.Mock(return_value=1)) self.mock_object( self.driver.plugin.helper, "get_AD_config", mock.Mock(side_effect=[None, {'DOMAINSTATUS': '0', 'FULLDOMAINNAME': 'huawei.com'}])) self.mock_object(self.driver.plugin.helper, "set_DNS_ip_address") self.assertRaises(exception.InvalidShare, self.driver.setup_server, fake_network_info) self.assertTrue(self.driver.plugin.helper.get_AD_config.called) self.assertTrue(self.driver.plugin._get_wait_interval.called) self.assertTrue(self.driver.plugin._get_timeout.called) self.assertEqual( 2, self.driver.plugin.helper.set_DNS_ip_address.call_count) def test_teardown_server_success(self): server_details = { "logical_port_id": "1", "vlan_id": "2", "ad_created": "1", "ldap_created": "1", } security_services = [ self.fake_ldap, self.fake_active_directory ] self.logical_port_deleted = False self.vlan_deleted = False self.ad_deleted = False self.ldap_deleted = False self.dns_deleted = False def fake_teardown_call(*args, **kwargs): url = args[0] method = args[2] if url.startswith("/LIF"): if method == "GET": data = """{"error":{"code":0},"data":[{ "ID":"1"}]}""" elif method == "DELETE": data = """{"error":{"code":0}}""" self.logical_port_deleted = True elif url.startswith("/vlan"): if method == "GET": data = """{"error":{"code":0},"data":[{ "ID":"2"}]}""" elif method == "DELETE": data = """{"error":{"code":1073813505}}""" self.vlan_deleted = True elif url == "/AD_CONFIG": if method == "PUT": data = """{"error":{"code":0}}""" self.ad_deleted = True elif method == "GET": if self.ad_deleted: data = """{"error":{"code":0},"data":{ "DOMAINSTATUS":"0"}}""" else: data = """{"error":{"code":0},"data":{ "DOMAINSTATUS":"1", "FULLDOMAINNAME":"huawei.com"}}""" else: data = """{"error":{"code":0}}""" elif url == "/LDAP_CONFIG": if method == "DELETE": data = """{"error":{"code":0}}""" self.ldap_deleted = True elif method == "GET": if self.ldap_deleted: data = """{"error":{"code":0}}""" else: data = """{"error":{"code":0},"data":{ "LDAPSERVER":"100.97.5.87", "BASEDN":"dc=huawei,dc=com"}}""" else: data = """{"error":{"code":0}}""" elif url == "/DNS_Server": if method == "GET": data = "{\"error\":{\"code\":0},\"data\":{\ \"ADDRESS\":\"[\\\"100.97.5.5\\\",\\\"\\\"]\"}}" elif method == "PUT": data = """{"error":{"code":0}}""" self.dns_deleted = True else: data = """{"error":{"code":0}}""" else: return self.driver.plugin.helper.do_call(*args, **kwargs) res_json = jsonutils.loads(data) return res_json with mock.patch.object(self.driver.plugin.helper, 'call') 
as mock_call: mock_call.side_effect = fake_teardown_call self.driver._teardown_server(server_details, security_services) self.assertTrue(self.logical_port_deleted) self.assertTrue(self.vlan_deleted) self.assertTrue(self.ad_deleted) self.assertTrue(self.ldap_deleted) self.assertTrue(self.dns_deleted) def test_teardown_server_with_already_deleted(self): server_details = { "logical_port_id": "1", "vlan_id": "2", "ad_created": "1", "ldap_created": "1", } security_services = [ self.fake_ldap, self.fake_active_directory ] self.mock_object(self.driver.plugin.helper, "check_logical_port_exists_by_id", mock.Mock(return_value=False)) self.mock_object(self.driver.plugin.helper, "check_vlan_exists_by_id", mock.Mock(return_value=False)) self.mock_object(self.driver.plugin.helper, "get_DNS_ip_address", mock.Mock(return_value=None)) self.mock_object(self.driver.plugin.helper, "get_AD_domain_name", mock.Mock(return_value=(False, None))) self.mock_object(self.driver.plugin.helper, "get_LDAP_domain_server", mock.Mock(return_value=(False, None))) self.driver._teardown_server(server_details, security_services) self.assertEqual(1, (self.driver.plugin.helper. check_logical_port_exists_by_id.call_count)) self.assertEqual(1, (self.driver.plugin.helper. check_vlan_exists_by_id.call_count)) self.assertEqual(1, (self.driver.plugin.helper. get_DNS_ip_address.call_count)) self.assertEqual(1, (self.driver.plugin.helper. get_AD_domain_name.call_count)) self.assertEqual(1, (self.driver.plugin.helper. get_LDAP_domain_server.call_count)) def test_teardown_server_with_vlan_logical_port_deleted(self): server_details = { "logical_port_id": "1", "vlan_id": "2", } self.mock_object(self.driver.plugin.helper, 'get_all_logical_port', mock.Mock(return_value=[{'ID': '4'}])) self.mock_object(self.driver.plugin.helper, 'get_all_vlan', mock.Mock(return_value=[{'ID': '4'}])) self.driver._teardown_server(server_details, None) self.assertEqual(1, (self.driver.plugin.helper. get_all_logical_port.call_count)) self.assertEqual(1, (self.driver.plugin.helper. 
get_all_vlan.call_count)) def test_teardown_server_with_empty_detail(self): server_details = {} with mock.patch.object(connection.LOG, 'debug') as mock_debug: self.driver._teardown_server(server_details, None) mock_debug.assert_called_with('Server details are empty.') @ddt.data({"share_proto": "NFS", "path": ["100.115.10.68:/share_fake_uuid"]}, {"share_proto": "CIFS", "path": ["\\\\100.115.10.68\\share_fake_uuid"]}) @ddt.unpack def test_ensure_share_sucess(self, share_proto, path): share = self._get_share_by_proto(share_proto) self.driver.plugin.helper.login() location = self.driver.ensure_share(self._context, share, self.share_server) self.assertEqual(path, location) @ddt.data({"share_proto": "NFS", "path": ["111.111.111.109:/share_fake_uuid"]}, {"share_proto": "CIFS", "path": ["\\\\111.111.111.109\\share_fake_uuid"]}) @ddt.unpack @dec_driver_handles_share_servers def test_ensure_share_with_share_server_sucess(self, share_proto, path): share = self._get_share_by_proto(share_proto) backend_details = self.driver.setup_server(self.fake_network_info) fake_share_server = {'backend_details': backend_details} location = self.driver.ensure_share(self._context, share, fake_share_server) self.assertEqual(path, location) @ddt.data({"share_proto": "NFS"}, {"share_proto": "CIFS"}) @ddt.unpack def test_ensure_share_get_share_fail(self, share_proto): share = self._get_share_by_proto(share_proto) self.mock_object(self.driver.plugin.helper, '_get_share_by_name', mock.Mock(return_value={})) self.driver.plugin.helper.login() self.assertRaises(exception.ShareResourceNotFound, self.driver.ensure_share, self._context, share, self.share_server) def test_ensure_share_get_filesystem_status_fail(self): self.driver.plugin.helper.fs_status_flag = False share = self.share_nfs_thickfs self.driver.plugin.helper.login() self.assertRaises(exception.StorageResourceException, self.driver.ensure_share, self._context, share, self.share_server) def _add_conf_file_element(self, doc, parent_element, name, value=None): new_element = doc.createElement(name) if value: new_text = doc.createTextNode(value) new_element.appendChild(new_text) parent_element.appendChild(new_element) def create_fake_conf_file(self, fake_conf_file, product_flag=True, username_flag=True, pool_node_flag=True, timeout_flag=True, wait_interval_flag=True, sectorsize_value='4', multi_url=False, logical_port='100.115.10.68', snapshot_support=True, replication_support=False): doc = xml.dom.minidom.Document() config = doc.createElement('Config') doc.appendChild(config) storage = doc.createElement('Storage') config.appendChild(storage) if self.configuration.driver_handles_share_servers: port0 = doc.createElement('Port') port0_text = doc.createTextNode(logical_port) port0.appendChild(port0_text) storage.appendChild(port0) else: controllerip0 = doc.createElement('LogicalPortIP') controllerip0_text = doc.createTextNode(logical_port) controllerip0.appendChild(controllerip0_text) storage.appendChild(controllerip0) if product_flag: product_text = doc.createTextNode('V3') else: product_text = doc.createTextNode('V3_fail') product = doc.createElement('Product') product.appendChild(product_text) storage.appendChild(product) if username_flag: username_text = doc.createTextNode('admin') else: username_text = doc.createTextNode('') username = doc.createElement('UserName') username.appendChild(username_text) storage.appendChild(username) userpassword = doc.createElement('UserPassword') userpassword_text = doc.createTextNode('Admin@storage') 
userpassword.appendChild(userpassword_text) storage.appendChild(userpassword) url = doc.createElement('RestURL') if multi_url: url_text = doc.createTextNode('http://100.115.10.69:8082/' 'deviceManager/rest/;' 'http://100.115.10.70:8082/' 'deviceManager/rest/') else: url_text = doc.createTextNode('http://100.115.10.69:8082/' 'deviceManager/rest/') url.appendChild(url_text) storage.appendChild(url) if snapshot_support: self._add_conf_file_element( doc, storage, 'SnapshotSupport', 'True') if replication_support: self._add_conf_file_element( doc, storage, 'ReplicationSupport', 'True') lun = doc.createElement('Filesystem') config.appendChild(lun) storagepool = doc.createElement('StoragePool') if pool_node_flag: pool_text = doc.createTextNode('OpenStack_Pool;OpenStack_Pool2; ;') else: pool_text = doc.createTextNode('') storagepool.appendChild(pool_text) timeout = doc.createElement('Timeout') if timeout_flag: timeout_text = doc.createTextNode('60') else: timeout_text = doc.createTextNode('') timeout.appendChild(timeout_text) waitinterval = doc.createElement('WaitInterval') if wait_interval_flag: waitinterval_text = doc.createTextNode('3') else: waitinterval_text = doc.createTextNode('') waitinterval.appendChild(waitinterval_text) NFSClient = doc.createElement('NFSClient') virtualip = doc.createElement('IP') virtualip_text = doc.createTextNode('100.112.0.1') virtualip.appendChild(virtualip_text) NFSClient.appendChild(virtualip) CIFSClient = doc.createElement('CIFSClient') username = doc.createElement('UserName') username_text = doc.createTextNode('user_name') username.appendChild(username_text) CIFSClient.appendChild(username) userpassword = doc.createElement('UserPassword') userpassword_text = doc.createTextNode('user_password') userpassword.appendChild(userpassword_text) CIFSClient.appendChild(userpassword) lun.appendChild(NFSClient) lun.appendChild(CIFSClient) lun.appendChild(timeout) lun.appendChild(waitinterval) lun.appendChild(storagepool) if sectorsize_value: sectorsize = doc.createElement('SectorSize') sectorsize_text = doc.createTextNode(sectorsize_value) sectorsize.appendChild(sectorsize_text) lun.appendChild(sectorsize) prefetch = doc.createElement('Prefetch') prefetch.setAttribute('Type', '0') prefetch.setAttribute('Value', '0') lun.appendChild(prefetch) fakefile = open(fake_conf_file, 'w') fakefile.write(doc.toprettyxml(indent='')) fakefile.close() def recreate_fake_conf_file(self, product_flag=True, username_flag=True, pool_node_flag=True, timeout_flag=True, wait_interval_flag=True, sectorsize_value='4', multi_url=False, logical_port='100.115.10.68', snapshot_support=True, replication_support=False): self.tmp_dir = tempfile.mkdtemp() self.fake_conf_file = self.tmp_dir + '/manila_huawei_conf.xml' self.addCleanup(shutil.rmtree, self.tmp_dir) self.create_fake_conf_file(self.fake_conf_file, product_flag, username_flag, pool_node_flag, timeout_flag, wait_interval_flag, sectorsize_value, multi_url, logical_port, snapshot_support, replication_support) self.addCleanup(os.remove, self.fake_conf_file) @ddt.data(common_constants.STATUS_ERROR, common_constants.REPLICA_STATE_IN_SYNC, common_constants.REPLICA_STATE_OUT_OF_SYNC) def test_create_replica_success(self, replica_state): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) if replica_state == common_constants.STATUS_ERROR: self.driver.plugin.helper.custom_results[ '/REPLICATIONPAIR/fake_pair_id'] = { "GET": """{"error":{"code":0}, "data":{"HEALTHSTATUS": 
"2"}}"""} elif replica_state == common_constants.REPLICA_STATE_OUT_OF_SYNC: self.driver.plugin.helper.custom_results[ '/REPLICATIONPAIR/fake_pair_id'] = { "GET": """{"error":{"code":0}, "data":{"HEALTHSTATUS": "1", "RUNNINGSTATUS": "1", "SECRESDATASTATUS": "5"}}"""} result = self.driver.create_replica( self._context, [self.active_replica, self.new_replica], self.new_replica, [], [], None) expected = { 'export_locations': ['100.115.10.68:/share_fake_new_uuid'], 'replica_state': replica_state, 'access_rules_status': common_constants.STATUS_ACTIVE, } self.assertEqual(expected, result) self.assertEqual('fake_pair_id', self.driver.plugin.private_storage.get( 'fake_share_id', 'replica_pair_id')) @ddt.data({'url': '/FILESYSTEM?filter=NAME::share_fake_uuid' '&range=[0-8191]', 'url_result': '{"error":{"code":0}}', 'expected_exception': exception.ReplicationException}, {'url': '/NFSHARE', 'url_result': '{"error":{"code":-403}}', 'expected_exception': exception.InvalidShare}, {'url': '/REPLICATIONPAIR', 'url_result': '{"error":{"code":-403}}', 'expected_exception': exception.InvalidShare},) @ddt.unpack def test_create_replica_fail(self, url, url_result, expected_exception): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.custom_results[url] = url_result self.assertRaises(expected_exception, self.driver.create_replica, self._context, [self.active_replica, self.new_replica], self.new_replica, [], [], None) self.assertIsNone(self.driver.plugin.private_storage.get( 'fake_share_id', 'replica_pair_id')) def test_create_replica_with_get_state_fail(self): share_type = self.fake_type_not_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) self.driver.plugin.helper.custom_results[ '/REPLICATIONPAIR/fake_pair_id'] = { "GET": """{"error":{"code":-403}}"""} result = self.driver.create_replica( self._context, [self.active_replica, self.new_replica], self.new_replica, [], [], None) expected = { 'export_locations': ['100.115.10.68:/share_fake_new_uuid'], 'replica_state': common_constants.STATUS_ERROR, 'access_rules_status': common_constants.STATUS_ACTIVE, } self.assertEqual(expected, result) self.assertEqual('fake_pair_id', self.driver.plugin.private_storage.get( 'fake_share_id', 'replica_pair_id')) def test_create_replica_with_already_exists(self): self.driver.plugin.private_storage.update( 'fake_share_id', {'replica_pair_id': 'fake_pair_id'}) self.assertRaises(exception.ReplicationException, self.driver.create_replica, self._context, [self.active_replica, self.new_replica], self.new_replica, [], [], None) @ddt.data({'pair_info': """{"HEALTHSTATUS": "2", "SECRESDATASTATUS": "2", "ISPRIMARY": "false", "SECRESACCESS": "1", "RUNNINGSTATUS": "1"}""", 'assert_method': 'get_replication_pair_by_id'}, {'pair_info': """{"HEALTHSTATUS": "1", "SECRESDATASTATUS": "2", "ISPRIMARY": "true", "SECRESACCESS": "1", "RUNNINGSTATUS": "1"}""", 'assert_method': 'switch_replication_pair'}, {'pair_info': """{"HEALTHSTATUS": "1", "SECRESDATASTATUS": "2", "ISPRIMARY": "false", "SECRESACCESS": "3", "RUNNINGSTATUS": "1"}""", 'assert_method': 'set_pair_secondary_write_lock'}, {'pair_info': """{"HEALTHSTATUS": "1", "SECRESDATASTATUS": "2", "ISPRIMARY": "false", "SECRESACCESS": "1", "RUNNINGSTATUS": "33"}""", 'assert_method': 'sync_replication_pair'},) @ddt.unpack def test_update_replica_state_success(self, pair_info, assert_method): self.driver.plugin.private_storage.update( 
'fake_share_id', {'replica_pair_id': 'fake_pair_id'}) helper_method = getattr(self.driver.plugin.helper, assert_method) mocker = self.mock_object(self.driver.plugin.helper, assert_method, mock.Mock(wraps=helper_method)) self.driver.plugin.helper.custom_results[ '/REPLICATIONPAIR/fake_pair_id'] = { "GET": """{"error":{"code":0}, "data":%s}""" % pair_info} self.driver.update_replica_state( self._context, [self.active_replica, self.new_replica], self.new_replica, [], [], None) mocker.assert_called_with('fake_pair_id') @ddt.data({'pair_info': """{"HEALTHSTATUS": "1", "SECRESDATASTATUS": "2", "ISPRIMARY": "true", "SECRESACCESS": "1", "RUNNINGSTATUS": "1"}""", 'assert_method': 'switch_replication_pair', 'error_url': '/REPLICATIONPAIR/switch'}, {'pair_info': """{"HEALTHSTATUS": "1", "SECRESDATASTATUS": "2", "ISPRIMARY": "false", "SECRESACCESS": "3", "RUNNINGSTATUS": "1"}""", 'assert_method': 'set_pair_secondary_write_lock', 'error_url': '/REPLICATIONPAIR/SET_SECODARY_WRITE_LOCK'}, {'pair_info': """{"HEALTHSTATUS": "1", "SECRESDATASTATUS": "2", "ISPRIMARY": "false", "SECRESACCESS": "1", "RUNNINGSTATUS": "26"}""", 'assert_method': 'sync_replication_pair', 'error_url': '/REPLICATIONPAIR/sync'},) @ddt.unpack def test_update_replica_state_with_exception_ignore( self, pair_info, assert_method, error_url): self.driver.plugin.private_storage.update( 'fake_share_id', {'replica_pair_id': 'fake_pair_id'}) helper_method = getattr(self.driver.plugin.helper, assert_method) mocker = self.mock_object(self.driver.plugin.helper, assert_method, mock.Mock(wraps=helper_method)) self.driver.plugin.helper.custom_results[ error_url] = """{"error":{"code":-403}}""" self.driver.plugin.helper.custom_results[ '/REPLICATIONPAIR/fake_pair_id'] = { "GET": """{"error":{"code":0}, "data":%s}""" % pair_info} self.driver.update_replica_state( self._context, [self.active_replica, self.new_replica], self.new_replica, [], [], None) mocker.assert_called_once_with('fake_pair_id') def test_update_replica_state_with_replication_abnormal(self): self.driver.plugin.private_storage.update( 'fake_share_id', {'replica_pair_id': 'fake_pair_id'}) self.driver.plugin.helper.custom_results[ '/REPLICATIONPAIR/fake_pair_id'] = { "GET": """{"error":{"code":0}, "data":{"HEALTHSTATUS": "2"}}"""} result = self.driver.update_replica_state( self._context, [self.active_replica, self.new_replica], self.new_replica, [], [], None) self.assertEqual(common_constants.STATUS_ERROR, result) def test_update_replica_state_with_no_pair_id(self): result = self.driver.update_replica_state( self._context, [self.active_replica, self.new_replica], self.new_replica, [], [], None) self.assertEqual(common_constants.STATUS_ERROR, result) @ddt.data('true', 'false') def test_promote_replica_success(self, is_primary): self.driver.plugin.private_storage.update( 'fake_share_id', {'replica_pair_id': 'fake_pair_id'}) self.driver.plugin.helper.custom_results[ '/REPLICATIONPAIR/fake_pair_id'] = { "GET": """{"error": {"code": 0}, "data": {"HEALTHSTATUS": "1", "RUNNINGSTATUS": "1", "SECRESDATASTATUS": "2", "ISPRIMARY": "%s"}}""" % is_primary} result = self.driver.promote_replica( self._context, [self.active_replica, self.new_replica], self.new_replica, [], None) expected = [ {'id': self.new_replica['id'], 'replica_state': common_constants.REPLICA_STATE_ACTIVE, 'access_rules_status': common_constants.STATUS_ACTIVE}, {'id': self.active_replica['id'], 'replica_state': common_constants.REPLICA_STATE_IN_SYNC, 'access_rules_status': common_constants.SHARE_INSTANCE_RULES_SYNCING}, ] 
self.assertEqual(expected, result) @ddt.data({'mock_method': 'update_access', 'new_access_status': common_constants.SHARE_INSTANCE_RULES_SYNCING, 'old_access_status': common_constants.SHARE_INSTANCE_RULES_SYNCING}, {'mock_method': 'clear_access', 'new_access_status': common_constants.SHARE_INSTANCE_RULES_SYNCING, 'old_access_status': common_constants.STATUS_ACTIVE},) @ddt.unpack def test_promote_replica_with_access_update_error( self, mock_method, new_access_status, old_access_status): self.driver.plugin.private_storage.update( 'fake_share_id', {'replica_pair_id': 'fake_pair_id'}) self.driver.plugin.helper.custom_results[ '/REPLICATIONPAIR/fake_pair_id'] = { "GET": """{"error": {"code": 0}, "data": {"HEALTHSTATUS": "1", "RUNNINGSTATUS": "1", "SECRESDATASTATUS": "2", "ISPRIMARY": "false"}}"""} mocker = self.mock_object(self.driver.plugin, mock_method, mock.Mock(side_effect=Exception('err'))) result = self.driver.promote_replica( self._context, [self.active_replica, self.new_replica], self.new_replica, [], None) expected = [ {'id': self.new_replica['id'], 'replica_state': common_constants.REPLICA_STATE_ACTIVE, 'access_rules_status': new_access_status}, {'id': self.active_replica['id'], 'replica_state': common_constants.REPLICA_STATE_IN_SYNC, 'access_rules_status': old_access_status}, ] self.assertEqual(expected, result) mocker.assert_called() @ddt.data({'error_url': '/REPLICATIONPAIR/split', 'assert_method': 'split_replication_pair'}, {'error_url': '/REPLICATIONPAIR/switch', 'assert_method': 'switch_replication_pair'}, {'error_url': '/REPLICATIONPAIR/SET_SECODARY_WRITE_LOCK', 'assert_method': 'set_pair_secondary_write_lock'}, {'error_url': '/REPLICATIONPAIR/sync', 'assert_method': 'sync_replication_pair'},) @ddt.unpack def test_promote_replica_with_error_ignore(self, error_url, assert_method): self.driver.plugin.private_storage.update( 'fake_share_id', {'replica_pair_id': 'fake_pair_id'}) helper_method = getattr(self.driver.plugin.helper, assert_method) mocker = self.mock_object(self.driver.plugin.helper, assert_method, mock.Mock(wraps=helper_method)) self.driver.plugin.helper.custom_results[ error_url] = '{"error":{"code":-403}}' fake_pair_infos = [{'ISPRIMARY': 'False', 'HEALTHSTATUS': '1', 'RUNNINGSTATUS': '1', 'SECRESDATASTATUS': '1'}, {'HEALTHSTATUS': '2'}] self.mock_object(self.driver.plugin.replica_mgr, '_get_replication_pair_info', mock.Mock(side_effect=fake_pair_infos)) result = self.driver.promote_replica( self._context, [self.active_replica, self.new_replica], self.new_replica, [], None) expected = [ {'id': self.new_replica['id'], 'replica_state': common_constants.REPLICA_STATE_ACTIVE, 'access_rules_status': common_constants.STATUS_ACTIVE}, {'id': self.active_replica['id'], 'replica_state': common_constants.STATUS_ERROR, 'access_rules_status': common_constants.SHARE_INSTANCE_RULES_SYNCING}, ] self.assertEqual(expected, result) mocker.assert_called_once_with('fake_pair_id') @ddt.data({'error_url': '/REPLICATIONPAIR/fake_pair_id', 'url_result': """{"error":{"code":0}, "data":{"HEALTHSTATUS": "1", "ISPRIMARY": "false", "RUNNINGSTATUS": "1", "SECRESDATASTATUS": "5"}}""", 'expected_exception': exception.ReplicationException}, {'error_url': '/REPLICATIONPAIR/CANCEL_SECODARY_WRITE_LOCK', 'url_result': """{"error":{"code":-403}}""", 'expected_exception': exception.InvalidShare},) @ddt.unpack def test_promote_replica_fail(self, error_url, url_result, expected_exception): self.driver.plugin.private_storage.update( 'fake_share_id', {'replica_pair_id': 'fake_pair_id'}) 
self.driver.plugin.helper.custom_results[error_url] = url_result self.assertRaises(expected_exception, self.driver.promote_replica, self._context, [self.active_replica, self.new_replica], self.new_replica, [], None) def test_promote_replica_with_no_pair_id(self): self.assertRaises(exception.ReplicationException, self.driver.promote_replica, self._context, [self.active_replica, self.new_replica], self.new_replica, [], None) @ddt.data({'url': '/REPLICATIONPAIR/split', 'url_result': '{"error":{"code":-403}}'}, {'url': '/REPLICATIONPAIR/fake_pair_id', 'url_result': '{"error":{"code":1077937923}}'}, {'url': '/REPLICATIONPAIR/fake_pair_id', 'url_result': '{"error":{"code":0}}'},) @ddt.unpack def test_delete_replica_success(self, url, url_result): self.driver.plugin.private_storage.update( 'fake_share_id', {'replica_pair_id': 'fake_pair_id'}) self.driver.plugin.helper.custom_results['/filesystem/8'] = { "DELETE": '{"error":{"code":0}}'} self.driver.plugin.helper.custom_results[url] = url_result self.driver.delete_replica(self._context, [self.active_replica, self.new_replica], [], self.new_replica, None) self.assertIsNone(self.driver.plugin.private_storage.get( 'fake_share_id', 'replica_pair_id')) @ddt.data({'url': '/REPLICATIONPAIR/fake_pair_id', 'expected': 'fake_pair_id'}, {'url': '/filesystem/8', 'expected': None},) @ddt.unpack def test_delete_replica_fail(self, url, expected): self.driver.plugin.private_storage.update( 'fake_share_id', {'replica_pair_id': 'fake_pair_id'}) self.driver.plugin.helper.custom_results[url] = { "DELETE": '{"error":{"code":-403}}'} self.assertRaises(exception.InvalidShare, self.driver.delete_replica, self._context, [self.active_replica, self.new_replica], [], self.new_replica, None) self.assertEqual(expected, self.driver.plugin.private_storage.get( 'fake_share_id', 'replica_pair_id')) def test_delete_replica_with_no_pair_id(self): self.driver.plugin.helper.custom_results['/filesystem/8'] = { "DELETE": '{"error":{"code":0}}'} self.driver.delete_replica(self._context, [self.active_replica, self.new_replica], [], self.new_replica, None) @ddt.data({'pair_info': """{"HEALTHSTATUS": "2"}""", 'expected_state': common_constants.STATUS_ERROR}, {'pair_info': """{"HEALTHSTATUS": "1", "RUNNINGSTATUS": "26"}""", 'expected_state': common_constants.REPLICA_STATE_OUT_OF_SYNC}, {'pair_info': """{"HEALTHSTATUS": "1", "RUNNINGSTATUS": "33"}""", 'expected_state': common_constants.REPLICA_STATE_OUT_OF_SYNC}, {'pair_info': """{"HEALTHSTATUS": "1", "RUNNINGSTATUS": "34"}""", 'expected_state': common_constants.STATUS_ERROR}, {'pair_info': """{"HEALTHSTATUS": "1", "RUNNINGSTATUS": "35"}""", 'expected_state': common_constants.STATUS_ERROR}, {'pair_info': """{"HEALTHSTATUS": "1", "SECRESDATASTATUS": "1", "RUNNINGSTATUS": "1"}""", 'expected_state': common_constants.REPLICA_STATE_IN_SYNC}, {'pair_info': """{"HEALTHSTATUS": "1", "SECRESDATASTATUS": "2", "RUNNINGSTATUS": "1"}""", 'expected_state': common_constants.REPLICA_STATE_IN_SYNC}, {'pair_info': """{"HEALTHSTATUS": "1", "SECRESDATASTATUS": "5", "RUNNINGSTATUS": "1"}""", 'expected_state': common_constants.REPLICA_STATE_OUT_OF_SYNC}) @ddt.unpack def test_get_replica_state(self, pair_info, expected_state): self.driver.plugin.helper.custom_results[ '/REPLICATIONPAIR/fake_pair_id'] = { "GET": """{"error":{"code":0}, "data":%s}""" % pair_info} result_state = self.driver.plugin.replica_mgr.get_replica_state( 'fake_pair_id') self.assertEqual(expected_state, result_state) @ddt.data(*constants.QOS_STATUSES) def test_delete_qos(self, qos_status): 
self.driver.plugin.helper.custom_results['/ioclass/11'] = { "GET": """{"error":{"code":0}, "data":{"RUNNINGSTATUS": "%s"}}""" % qos_status } activate_deactivate_qos_mock = self.mock_object( self.driver.plugin.helper, 'activate_deactivate_qos') delete_qos_mock = self.mock_object( self.driver.plugin.helper, 'delete_qos_policy') qos = smartx.SmartQos(self.driver.plugin.helper) qos.delete_qos('11') if qos_status == constants.STATUS_QOS_INACTIVATED: activate_deactivate_qos_mock.assert_not_called() else: activate_deactivate_qos_mock.assert_called_once_with('11', False) delete_qos_mock.assert_called_once_with('11') def test_username_password_encode_decode(self): for i in (1, 2): # First loop will encode the username/password and # write back to configuration. # Second loop will get the encoded username/password and # decode them. logininfo = self.driver.plugin.helper._get_login_info() self.assertEqual('admin', logininfo['UserName']) self.assertEqual('Admin@storage', logininfo['UserPassword']) @ddt.data({ 'username': 'abc', 'password': '123456', 'expect_username': 'abc', 'expect_password': '123456', }, { 'username': '!$$$YWJj', 'password': '!$$$MTIzNDU2', 'expect_username': 'abc', 'expect_password': '123456', }, { 'username': 'ab!$$$c', 'password': '123!$$$456', 'expect_username': 'ab!$$$c', 'expect_password': '123!$$$456', }) @ddt.unpack def test__get_login_info(self, username, password, expect_username, expect_password): configs = { 'Storage/RestURL': 'https://123456', 'Storage/UserName': username, 'Storage/UserPassword': password, } self.mock_object( ET, 'parse', mock.Mock(return_value=FakeConfigParseTree(configs))) result = self.driver.plugin.helper._get_login_info() self.assertEqual(expect_username, result['UserName']) self.assertEqual(expect_password, result['UserPassword']) ET.parse.assert_called_once_with(self.fake_conf_file) def test_revert_to_snapshot_success(self): snapshot = {'id': 'fake-fs-id', 'share_name': 'share_fake_uuid'} with mock.patch.object( self.driver.plugin.helper, 'call') as mock_call: mock_call.return_value = { "error": {"code": 0}, "data": [{"ID": "4", "NAME": "share_fake_uuid"}] } self.driver.revert_to_snapshot(None, snapshot, None, None) expect_snapshot_id = "4@share_snapshot_fake_fs_id" mock_call.assert_called_with( "/FSSNAPSHOT/ROLLBACK_FSSNAPSHOT", jsonutils.dumps({"ID": expect_snapshot_id}), 'PUT') def test_revert_to_snapshot_exception(self): snapshot = {'id': 'fake-snap-id', 'share_name': 'not_exist_share_name', 'share_id': 'fake_share_id'} self.assertRaises(exception.ShareResourceNotFound, self.driver.revert_to_snapshot, None, snapshot, None, None) @ddt.data({'name': 'fake_name', 'share_proto': 'NFS', 'mount_path': 'fake_nfs_mount_path', 'mount_src': '/mnt/test'}, {'name': 'fake_name', 'share_proto': 'CIFS', 'mount_path': 'fake_cifs_mount_path', 'mount_src': '/mnt/test'}, ) def test_mount_share_to_host(self, share): access = {'access_to': 'cifs_user', 'access_password': 'cifs_password'} mocker = self.mock_object(utils, 'execute') self.driver.plugin.mount_share_to_host(share, access) if share['share_proto'] == 'NFS': mocker.assert_called_once_with( 'mount', '-t', 'nfs', 'fake_nfs_mount_path', '/mnt/test', run_as_root=True) else: mocker.assert_called_once_with( 'mount', '-t', 'cifs', 'fake_cifs_mount_path', '/mnt/test', '-o', 'username=cifs_user,password=cifs_password', run_as_root=True) @ddt.ddt class HuaweiDriverHelperTestCase(test.TestCase): def setUp(self): super(HuaweiDriverHelperTestCase, self).setUp() self.helper = helper.RestHelper(None) def 
test_init_http_head(self): self.helper.init_http_head() self.assertIsNone(self.helper.url) self.assertFalse(self.helper.session.verify) self.assertEqual("keep-alive", self.helper.session.headers["Connection"]) self.assertEqual("application/json", self.helper.session.headers["Content-Type"]) @ddt.data(('fake_data', 'POST'), (None, 'POST'), (None, 'PUT'), (None, 'GET'), ('fake_data', 'PUT'), (None, 'DELETE'), ) @ddt.unpack def test_do_call_with_valid_method(self, data, method): self.helper.init_http_head() mocker = self.mock_object(self.helper.session, method.lower()) self.helper.do_call("fake-rest-url", data, method) kwargs = {'timeout': constants.SOCKET_TIMEOUT} if data: kwargs['data'] = data mocker.assert_called_once_with("fake-rest-url", **kwargs) def test_do_call_with_invalid_method(self): self.assertRaises(exception.ShareBackendException, self.helper.do_call, "fake-rest-url", None, 'fake-method') def test_do_call_with_http_error(self): self.helper.init_http_head() fake_res = requests.Response() fake_res.reason = 'something wrong' fake_res.status_code = 500 fake_res.url = "fake-rest-url" self.mock_object(self.helper.session, 'post', mock.Mock(return_value=fake_res)) res = self.helper.do_call("fake-rest-url", None, 'POST') expected = { "error": { "code": 500, "description": '500 Server Error: something wrong for ' 'url: fake-rest-url'} } self.assertDictEqual(expected, res) manila-10.0.0/manila/tests/share/drivers/infinidat/0000775000175000017500000000000013656750362022220 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/infinidat/__init__.py0000664000175000017500000000000013656750227024317 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/infinidat/test_infinidat.py0000664000175000017500000010316713656750227025606 0ustar zuulzuul00000000000000# Copyright 2017 Infinidat Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Unit tests for INFINIDAT InfiniBox share driver.""" import copy import functools from unittest import mock from oslo_utils import units from manila.common import constants from manila import exception from manila.share import configuration from manila.share.drivers.infinidat import infinibox from manila import test from manila import version _MOCK_SHARE_ID = 1 _MOCK_SNAPSHOT_ID = 2 _MOCK_CLONE_ID = 3 _MOCK_SHARE_SIZE = 4 _MOCK_NETWORK_SPACE_IP_1 = '1.2.3.4' _MOCK_NETWORK_SPACE_IP_2 = '1.2.3.5' def _create_mock__getitem__(mock): def mock__getitem__(self, key, default=None): return getattr(mock, key, default) return mock__getitem__ test_share = mock.Mock(id=_MOCK_SHARE_ID, size=_MOCK_SHARE_SIZE, share_proto='NFS') test_share.__getitem__ = _create_mock__getitem__(test_share) test_snapshot = mock.Mock(id=_MOCK_SNAPSHOT_ID, size=test_share.size, share=test_share, share_proto='NFS') test_snapshot.__getitem__ = _create_mock__getitem__(test_snapshot) original_test_clone = mock.Mock(id=_MOCK_CLONE_ID, size=test_share.size, share=test_snapshot, share_proto='NFS') original_test_clone.__getitem__ = _create_mock__getitem__(original_test_clone) def skip_driver_setup(func): @functools.wraps(func) def f(*args, **kwargs): return func(*args, **kwargs) f.__skip_driver_setup = True return f class FakeInfinisdkException(Exception): pass class FakeInfinisdkPermission(object): def __init__(self, permission): self._permission = permission def __getattr__(self, attr): return self._permission[attr] def __getitem__(self, key): return self._permission[key] class InfiniboxDriverTestCaseBase(test.TestCase): def _test_skips_driver_setup(self): test_method_name = self.id().split('.')[-1] test_method = getattr(self, test_method_name) return getattr(test_method, '__skip_driver_setup', False) def setUp(self): super(InfiniboxDriverTestCaseBase, self).setUp() # create mock configuration self.configuration = mock.Mock(spec=configuration.Configuration) self.configuration.infinibox_hostname = 'mockbox' self.configuration.infinidat_pool_name = 'mockpool' self.configuration.infinidat_nas_network_space_name = 'mockspace' self.configuration.infinidat_thin_provision = True self.configuration.infinibox_login = 'user' self.configuration.infinibox_password = 'pass' self.configuration.network_config_group = 'test_network_config_group' self.configuration.admin_network_config_group = ( 'test_admin_network_config_group') self.configuration.reserved_share_percentage = 0 self.configuration.filter_function = None self.configuration.goodness_function = None self.configuration.driver_handles_share_servers = False self.configuration.max_over_subscription_ratio = 2 self.mock_object(self.configuration, 'safe_get', self._fake_safe_get) self.driver = infinibox.InfiniboxShareDriver( configuration=self.configuration) # mock external library dependencies infinisdk = self._patch( "manila.share.drivers.infinidat.infinibox.infinisdk") self._capacity_module = ( self._patch("manila.share.drivers.infinidat.infinibox.capacity")) self._capacity_module.byte = 1 self._capacity_module.GiB = units.Gi self._system = self._infinibox_mock() infinisdk.core.exceptions.InfiniSDKException = FakeInfinisdkException infinisdk.InfiniBox.return_value = self._system if not self._test_skips_driver_setup(): self.driver.do_setup(None) def _infinibox_mock(self): result = mock.Mock() self._mock_export_permissions = [] self._mock_export = mock.Mock() self._mock_export.get_export_path.return_value = '/mock_export' self._mock_export.get_permissions = self._fake_get_permissions 
self._mock_export.update_permissions = self._fake_update_permissions self._mock_filesystem = mock.Mock() self._mock_filesystem.has_children.return_value = False self._mock_filesystem.create_snapshot.return_value = ( self._mock_filesystem) self._mock_filesystem.get_exports.return_value = [self._mock_export, ] self._mock_filesystem.size = 4 * self._capacity_module.GiB self._mock_filesystem.get_size.return_value = ( self._mock_filesystem.size) self._mock_pool = mock.Mock() self._mock_pool.get_free_physical_capacity.return_value = units.Gi self._mock_pool.get_physical_capacity.return_value = units.Gi self._mock_pool.get_virtual_capacity.return_value = units.Gi self._mock_pool.get_free_virtual_capacity.return_value = units.Gi self._mock_network_space = mock.Mock() self._mock_network_space.get_ips.return_value = ( [mock.Mock(ip_address=_MOCK_NETWORK_SPACE_IP_1, enabled=True), mock.Mock(ip_address=_MOCK_NETWORK_SPACE_IP_2, enabled=True)]) result.network_spaces.safe_get.return_value = self._mock_network_space result.pools.safe_get.return_value = self._mock_pool result.filesystems.safe_get.return_value = self._mock_filesystem result.filesystems.create.return_value = self._mock_filesystem result.components.nodes.get_all.return_value = [] return result def _raise_infinisdk(self, *args, **kwargs): raise FakeInfinisdkException() def _fake_safe_get(self, value): return getattr(self.configuration, value, None) def _fake_get_permissions(self): return self._mock_export_permissions def _fake_update_permissions(self, new_export_permissions): self._mock_export_permissions = [ FakeInfinisdkPermission(permission) for permission in new_export_permissions] def _patch(self, path, *args, **kwargs): patcher = mock.patch(path, *args, **kwargs) result = patcher.start() self.addCleanup(patcher.stop) return result class InfiniboxDriverTestCase(InfiniboxDriverTestCaseBase): def _generate_mock_metadata(self, share): return {"system": "openstack", "openstack_version": version.version_info.release_string(), "manila_id": share['id'], "manila_name": share['name'], "host.created_by": infinibox._INFINIDAT_MANILA_IDENTIFIER} def _validate_metadata(self, share): self._mock_filesystem.set_metadata_from_dict.assert_called_once_with( self._generate_mock_metadata(share)) @mock.patch("manila.share.drivers.infinidat.infinibox.infinisdk", None) def test_no_infinisdk_module(self): self.assertRaises(exception.ManilaException, self.driver.do_setup, None) def test_no_auth_parameters(self): self.configuration.infinibox_login = None self.configuration.infinibox_password = None self.assertRaises(exception.BadConfigurationException, self.driver.do_setup, None) def test_empty_auth_parameters(self): self.configuration.infinibox_login = "" self.configuration.infinibox_password = "" self.assertRaises(exception.BadConfigurationException, self.driver.do_setup, None) @skip_driver_setup def test__setup_and_get_system_object(self): # This test should skip the driver setup, as it generates more calls to # the add_auto_retry, set_source_identifier and login methods: auth = (self.configuration.infinibox_login, self.configuration.infinibox_password) self.driver._setup_and_get_system_object( self.configuration.infinibox_hostname, auth) self._system.api.add_auto_retry.assert_called_once() self._system.api.set_source_identifier.assert_called_once_with( infinibox._INFINIDAT_MANILA_IDENTIFIER) self._system.login.assert_called_once() def test_get_share_stats_refreshes(self): self.driver._update_share_stats() result = self.driver.get_share_stats() 
self.assertEqual(1, result["free_capacity_gb"]) # change the "free space" in the pool self._mock_pool.get_free_physical_capacity.return_value = 0 # no refresh - free capacity should stay the same result = self.driver.get_share_stats(refresh=False) self.assertEqual(1, result["free_capacity_gb"]) # refresh - free capacity should change to 0 result = self.driver.get_share_stats(refresh=True) self.assertEqual(0, result["free_capacity_gb"]) def test_get_share_stats_pool_not_found(self): self._system.pools.safe_get.return_value = None self.assertRaises(exception.ManilaException, self.driver._update_share_stats) def test__verify_share_protocol(self): # test_share is NFS by default: self.driver._verify_share_protocol(test_share) def test__verify_share_protocol_fails_for_non_nfs(self): # set test_share protocol for non-NFS (CIFS, for that matter) and see # that we fail: cifs_share = copy.deepcopy(test_share) cifs_share.share_proto = 'CIFS' # also need to re-define getitem, otherwise we'll get attributes from # test_share: cifs_share.__getitem__ = _create_mock__getitem__(cifs_share) self.assertRaises(exception.InvalidShare, self.driver._verify_share_protocol, cifs_share) def test__verify_access_type_ip(self): self.assertTrue(self.driver._verify_access_type({'access_type': 'ip'})) def test__verify_access_type_fails_for_type_user(self): self.assertRaises( exception.InvalidShareAccess, self.driver._verify_access_type, {'access_type': 'user'}) def test__verify_access_type_fails_for_type_cert(self): self.assertRaises( exception.InvalidShareAccess, self.driver._verify_access_type, {'access_type': 'cert'}) def test__get_ip_address_range_single_ip(self): ip_address = self.driver._get_ip_address_range('1.2.3.4') self.assertEqual('1.2.3.4', ip_address) def test__get_ip_address_range_ip_range(self): ip_address_range = self.driver._get_ip_address_range('5.6.7.8/28') self.assertEqual('5.6.7.1-5.6.7.14', ip_address_range) def test__get_ip_address_range_invalid_address(self): self.assertRaises(ValueError, self.driver._get_ip_address_range, 'invalid') def test__get_infinidat_pool(self): self.driver._get_infinidat_pool() self._system.pools.safe_get.assert_called_once() def test__get_infinidat_pool_no_pools(self): self._system.pools.safe_get.return_value = None self.assertRaises(exception.ShareBackendException, self.driver._get_infinidat_pool) def test__get_infinidat_pool_api_error(self): self._system.pools.safe_get.side_effect = ( self._raise_infinisdk) self.assertRaises(exception.ShareBackendException, self.driver._get_infinidat_pool) def test__get_infinidat_nas_network_space_ips(self): network_space_ips = self.driver._get_infinidat_nas_network_space_ips() self._system.network_spaces.safe_get.assert_called_once() self._mock_network_space.get_ips.assert_called_once() for network_space_ip in \ (_MOCK_NETWORK_SPACE_IP_1, _MOCK_NETWORK_SPACE_IP_2): self.assertIn(network_space_ip, network_space_ips) def test__get_infinidat_nas_network_space_ips_only_one_ip_enabled(self): self._mock_network_space.get_ips.return_value = ( [mock.Mock(ip_address=_MOCK_NETWORK_SPACE_IP_1, enabled=True), mock.Mock(ip_address=_MOCK_NETWORK_SPACE_IP_2, enabled=False)]) self.assertEqual([_MOCK_NETWORK_SPACE_IP_1], self.driver._get_infinidat_nas_network_space_ips()) def test__get_infinidat_nas_network_space_ips_no_network_space(self): self._system.network_spaces.safe_get.return_value = None self.assertRaises(exception.ShareBackendException, self.driver._get_infinidat_nas_network_space_ips) def 
test__get_infinidat_nas_network_space_ips_no_ips(self): self._mock_network_space.get_ips.return_value = [] self.assertRaises(exception.ShareBackendException, self.driver._get_infinidat_nas_network_space_ips) def test__get_infinidat_nas_network_space_ips_no_ips_enabled(self): self._mock_network_space.get_ips.return_value = ( [mock.Mock(ip_address=_MOCK_NETWORK_SPACE_IP_1, enabled=False), mock.Mock(ip_address=_MOCK_NETWORK_SPACE_IP_2, enabled=False)]) self.assertRaises(exception.ShareBackendException, self.driver._get_infinidat_nas_network_space_ips) def test__get_infinidat_nas_network_space_api_error(self): self._system.network_spaces.safe_get.side_effect = ( self._raise_infinisdk) self.assertRaises(exception.ShareBackendException, self.driver._get_infinidat_nas_network_space_ips) def test__get_full_nfs_export_paths(self): export_paths = self.driver._get_full_nfs_export_paths( self._mock_export.get_export_path()) for network_space_ip in \ (_MOCK_NETWORK_SPACE_IP_1, _MOCK_NETWORK_SPACE_IP_2): self.assertIn( "{network_space_ip}:{export_path}".format( network_space_ip=network_space_ip, export_path=self._mock_export.get_export_path()), export_paths) def test__get_export(self): # The default return value of get_exports is [mock_export, ]: export = self.driver._get_export(self._mock_filesystem) self._mock_filesystem.get_exports.assert_called_once() self.assertEqual(self._mock_export, export) def test__get_export_no_filesystem_exports(self): self._mock_filesystem.get_exports.return_value = [] self.assertRaises(exception.ShareBackendException, self.driver._get_export, self._mock_filesystem) def test__get_export_too_many_filesystem_exports(self): self._mock_filesystem.get_exports.return_value = [ self._mock_export, self._mock_export] self.assertRaises(exception.ShareBackendException, self.driver._get_export, self._mock_filesystem) def test__get_export_api_error(self): self._mock_filesystem.get_exports.side_effect = self._raise_infinisdk self.assertRaises(exception.ShareBackendException, self.driver._get_export, self._mock_filesystem) def test__get_infinidat_access_level_rw(self): access_level = ( self.driver._get_infinidat_access_level( {'access_level': constants.ACCESS_LEVEL_RW})) self.assertEqual('RW', access_level) def test__get_infinidat_access_level_ro(self): access_level = ( self.driver._get_infinidat_access_level( {'access_level': constants.ACCESS_LEVEL_RO})) self.assertEqual('RO', access_level) def test__get_infinidat_access_level_fails_for_invalid_level(self): self.assertRaises(exception.InvalidShareAccessLevel, self.driver._get_infinidat_access_level, {'access_level': 'invalid'}) def test_create_share(self): # This test uses the default infinidat_thin_provision = True setting: self.driver.create_share(None, test_share) self._system.filesystems.create.assert_called_once() self._validate_metadata(test_share) self._mock_filesystem.add_export.assert_called_once_with( permissions=[]) def test_create_share_thick_provisioning(self): self.configuration.infinidat_thin_provision = False self.driver.create_share(None, test_share) self._system.filesystems.create.assert_called_once() self._validate_metadata(test_share) self._mock_filesystem.add_export.assert_called_once_with( permissions=[]) def test_create_share_pool_not_found(self): self._system.pools.safe_get.return_value = None self.assertRaises(exception.ManilaException, self.driver.create_share, None, test_share) def test_create_share_fails_non_nfs(self): # set test_share protocol for non-NFS (CIFS, for that matter) and see # that we fail: 
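        # test_share is a mock that the driver reads both as attributes and
        # as dict items, so a deep copy must have __getitem__ re-bound to the
        # copy; otherwise item access would still return the original NFS
        # values.  Illustrative lookup, assuming the module's
        # _create_mock__getitem__ helper simply wraps getattr on the copy:
        #
        #     cifs_share['share_proto']   # -> 'CIFS' only after re-binding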
cifs_share = copy.deepcopy(test_share) cifs_share.share_proto = 'CIFS' # also need to re-define getitem, otherwise we'll get attributes from # test_share: cifs_share.__getitem__ = _create_mock__getitem__(cifs_share) self.assertRaises(exception.InvalidShare, self.driver.create_share, None, cifs_share) def test_create_share_pools_api_fail(self): # will fail when trying to get pool for share creation: self._system.pools.safe_get.side_effect = self._raise_infinisdk self.assertRaises(exception.ShareBackendException, self.driver.create_share, None, test_share) def test_create_share_network_spaces_api_fail(self): # will fail when trying to get full export path to the new share: self._system.network_spaces.safe_get.side_effect = ( self._raise_infinisdk) self.assertRaises(exception.ShareBackendException, self.driver.create_share, None, test_share) def test_delete_share(self): self.driver.delete_share(None, test_share) self._mock_filesystem.safe_delete.assert_called_once() self._mock_export.safe_delete.assert_called_once() def test_delete_share_doesnt_exist(self): self._system.shares.safe_get.return_value = None # should not raise an exception self.driver.delete_share(None, test_share) def test_delete_share_export_doesnt_exist(self): self._mock_filesystem.get_exports.return_value = [] # should not raise an exception self.driver.delete_share(None, test_share) def test_delete_share_with_snapshots(self): # deleting a share with snapshots should succeed: self._mock_filesystem.has_children.return_value = True self.driver.delete_share(None, test_share) self._mock_filesystem.safe_delete.assert_called_once() self._mock_export.safe_delete.assert_called_once() def test_delete_share_wrong_share_protocol(self): # set test_share protocol for non-NFS (CIFS, for that matter) and see # that delete_share doesn't fail: cifs_share = copy.deepcopy(test_share) cifs_share.share_proto = 'CIFS' # also need to re-define getitem, otherwise we'll get attributes from # test_share: cifs_share.__getitem__ = _create_mock__getitem__(cifs_share) self.driver.delete_share(None, cifs_share) def test_extend_share(self): self.driver.extend_share(test_share, _MOCK_SHARE_SIZE * 2) self._mock_filesystem.resize.assert_called_once() def test_extend_share_api_fail(self): self._mock_filesystem.resize.side_effect = self._raise_infinisdk self.assertRaises(exception.ShareBackendException, self.driver.extend_share, test_share, 8) def test_create_snapshot(self): self.driver.create_snapshot(None, test_snapshot) self._mock_filesystem.create_snapshot.assert_called_once() self._validate_metadata(test_snapshot) self._mock_filesystem.add_export.assert_called_once_with( permissions=[]) def test_create_snapshot_metadata(self): self._mock_filesystem.create_snapshot.return_value = ( self._mock_filesystem) self.driver.create_snapshot(None, test_snapshot) self._validate_metadata(test_snapshot) def test_create_snapshot_share_doesnt_exist(self): self._system.filesystems.safe_get.return_value = None self.assertRaises(exception.ShareResourceNotFound, self.driver.create_snapshot, None, test_snapshot) def test_create_snapshot_create_snapshot_api_fail(self): # will fail when trying to create a child to the original share: self._mock_filesystem.create_snapshot.side_effect = ( self._raise_infinisdk) self.assertRaises(exception.ShareBackendException, self.driver.create_snapshot, None, test_snapshot) def test_create_snapshot_network_spaces_api_fail(self): # will fail when trying to get full export path to the new snapshot: self._system.network_spaces.safe_get.side_effect 
= ( self._raise_infinisdk) self.assertRaises(exception.ShareBackendException, self.driver.create_snapshot, None, test_snapshot) def test_create_share_from_snapshot(self): self.driver.create_share_from_snapshot(None, original_test_clone, test_snapshot) self._mock_filesystem.create_snapshot.assert_called_once() self._mock_filesystem.add_export.assert_called_once_with( permissions=[]) def test_create_share_from_snapshot_bigger_size(self): test_clone = copy.copy(original_test_clone) test_clone.size = test_share.size * 2 # also need to re-define getitem, otherwise we'll get attributes from # original_get_clone: test_clone.__getitem__ = _create_mock__getitem__(test_clone) self.driver.create_share_from_snapshot(None, test_clone, test_snapshot) def test_create_share_from_snapshot_doesnt_exist(self): self._system.filesystems.safe_get.return_value = None self.assertRaises(exception.ShareSnapshotNotFound, self.driver.create_share_from_snapshot, None, original_test_clone, test_snapshot) def test_create_share_from_snapshot_create_fails(self): self._mock_filesystem.create_snapshot.side_effect = ( self._raise_infinisdk) self.assertRaises(exception.ShareBackendException, self.driver.create_share_from_snapshot, None, original_test_clone, test_snapshot) def test_delete_snapshot(self): self.driver.delete_snapshot(None, test_snapshot) self._mock_filesystem.safe_delete.assert_called_once() self._mock_export.safe_delete.assert_called_once() def test_delete_snapshot_with_snapshots(self): # deleting a snapshot with snapshots should succeed: self._mock_filesystem.has_children.return_value = True self.driver.delete_snapshot(None, test_snapshot) self._mock_filesystem.safe_delete.assert_called_once() self._mock_export.safe_delete.assert_called_once() def test_delete_snapshot_doesnt_exist(self): self._system.filesystems.safe_get.return_value = None # should not raise an exception self.driver.delete_snapshot(None, test_snapshot) def test_delete_snapshot_api_fail(self): self._mock_filesystem.safe_delete.side_effect = self._raise_infinisdk self.assertRaises(exception.ShareBackendException, self.driver.delete_snapshot, None, test_snapshot) def test_ensure_share(self): self.driver.ensure_share(None, test_share) self._mock_filesystem.get_exports.assert_called_once() self._mock_export.get_export_path.assert_called_once() def test_ensure_share_export_missing(self): self._mock_filesystem.get_exports.return_value = [] self.driver.ensure_share(None, test_share) self._mock_filesystem.get_exports.assert_called_once() self._mock_filesystem.add_export.assert_called_once_with( permissions=[]) def test_ensure_share_share_doesnt_exist(self): self._system.filesystems.safe_get.return_value = None self.assertRaises(exception.ShareResourceNotFound, self.driver.ensure_share, None, test_share) def test_ensure_share_get_exports_api_fail(self): self._mock_filesystem.get_exports.side_effect = self._raise_infinisdk self._mock_filesystem.add_export.side_effect = self._raise_infinisdk self.assertRaises(exception.ShareBackendException, self.driver.ensure_share, None, test_share) def test_ensure_share_network_spaces_api_fail(self): self._system.network_spaces.safe_get.side_effect = ( self._raise_infinisdk) self.assertRaises(exception.ShareBackendException, self.driver.ensure_share, None, test_share) def test_get_network_allocations_number(self): # Mostly to increase test coverage. 
The return value should always be 0 # for our driver (see method documentation in base class code): self.assertEqual(0, self.driver.get_network_allocations_number()) def test_revert_to_snapshot(self): self.driver.revert_to_snapshot(None, test_snapshot, [], []) self._mock_filesystem.restore.assert_called_once() def test_revert_to_snapshot_snapshot_doesnt_exist(self): self._system.filesystems.safe_get.return_value = None self.assertRaises(exception.ShareSnapshotNotFound, self.driver.revert_to_snapshot, None, test_snapshot, [], []) def test_revert_to_snapshot_api_fail(self): self._mock_filesystem.restore.side_effect = self._raise_infinisdk self.assertRaises(exception.ShareBackendException, self.driver.revert_to_snapshot, None, test_snapshot, [], []) def test_update_access(self): access_rules = [ {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '1.2.3.4', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RW, 'access_to': '1.2.3.5', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '5.6.7.8/28', 'access_type': 'ip'}] self.driver.update_access(None, test_share, access_rules, [], []) permissions = self._mock_filesystem.get_exports()[0].get_permissions() # now we are supposed to have three permissions: # 1. for 1.2.3.4 # 2. for 1.2.3.5 # 3. for 5.6.7.1-5.6.7.14 self.assertEqual(3, len(permissions)) # sorting according to clients, to avoid mismatch errors: permissions = sorted(permissions, key=lambda permission: permission.client) self.assertEqual('RO', permissions[0].access) self.assertEqual('1.2.3.4', permissions[0].client) self.assertTrue(permissions[0].no_root_squash) self.assertEqual('RW', permissions[1].access) self.assertEqual('1.2.3.5', permissions[1].client) self.assertTrue(permissions[1].no_root_squash) self.assertEqual('RO', permissions[2].access) self.assertEqual('5.6.7.1-5.6.7.14', permissions[2].client) self.assertTrue(permissions[2].no_root_squash) def test_update_access_share_doesnt_exist(self): self._system.filesystems.safe_get.return_value = None access_rules = [ {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '1.2.3.4', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RW, 'access_to': '1.2.3.5', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '5.6.7.8/28', 'access_type': 'ip'}] self.assertRaises(exception.ShareResourceNotFound, self.driver.update_access, None, test_share, access_rules, [], []) def test_update_access_api_fail(self): self._mock_filesystem.get_exports.side_effect = self._raise_infinisdk access_rules = [ {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '1.2.3.4', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RW, 'access_to': '1.2.3.5', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '5.6.7.8/28', 'access_type': 'ip'}] self.assertRaises(exception.ShareBackendException, self.driver.update_access, None, test_share, access_rules, [], []) def test_update_access_fails_non_ip_access_type(self): access_rules = [ {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '1.2.3.4', 'access_type': 'user'}] self.assertRaises(exception.InvalidShareAccess, self.driver.update_access, None, test_share, access_rules, [], []) def test_update_access_fails_invalid_ip(self): access_rules = [ {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': 'invalid', 'access_type': 'ip'}] self.assertRaises(ValueError, self.driver.update_access, None, test_share, access_rules, [], []) def test_snapshot_update_access(self): access_rules = [ 
{'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '1.2.3.4', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RW, 'access_to': '1.2.3.5', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '5.6.7.8/28', 'access_type': 'ip'}] self.driver.snapshot_update_access(None, test_snapshot, access_rules, [], []) permissions = self._mock_filesystem.get_exports()[0].get_permissions() # now we are supposed to have three permissions: # 1. for 1.2.3.4 # 2. for 1.2.3.5 # 3. for 5.6.7.1-5.6.7.14 self.assertEqual(3, len(permissions)) # sorting according to clients, to avoid mismatch errors: permissions = sorted(permissions, key=lambda permission: permission.client) self.assertEqual('RO', permissions[0].access) self.assertEqual('1.2.3.4', permissions[0].client) self.assertTrue(permissions[0].no_root_squash) # despite sending a RW rule, all rules are converted to RO: self.assertEqual('RO', permissions[1].access) self.assertEqual('1.2.3.5', permissions[1].client) self.assertTrue(permissions[1].no_root_squash) self.assertEqual('RO', permissions[2].access) self.assertEqual('5.6.7.1-5.6.7.14', permissions[2].client) self.assertTrue(permissions[2].no_root_squash) def test_snapshot_update_access_snapshot_doesnt_exist(self): self._system.filesystems.safe_get.return_value = None access_rules = [ {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '1.2.3.4', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RW, 'access_to': '1.2.3.5', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '5.6.7.8/28', 'access_type': 'ip'}] self.assertRaises(exception.ShareSnapshotNotFound, self.driver.snapshot_update_access, None, test_snapshot, access_rules, [], []) def test_snapshot_update_access_api_fail(self): self._mock_filesystem.get_exports.side_effect = self._raise_infinisdk access_rules = [ {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '1.2.3.4', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RW, 'access_to': '1.2.3.5', 'access_type': 'ip'}, {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '5.6.7.8/28', 'access_type': 'ip'}] self.assertRaises(exception.ShareBackendException, self.driver.snapshot_update_access, None, test_snapshot, access_rules, [], []) def test_snapshot_update_access_fails_non_ip_access_type(self): access_rules = [ {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': '1.2.3.4', 'access_type': 'user'}] self.assertRaises(exception.InvalidSnapshotAccess, self.driver.snapshot_update_access, None, test_share, access_rules, [], []) def test_snapshot_update_access_fails_invalid_ip(self): access_rules = [ {'access_level': constants.ACCESS_LEVEL_RO, 'access_to': 'invalid', 'access_type': 'ip'}] self.assertRaises(ValueError, self.driver.snapshot_update_access, None, test_share, access_rules, [], []) manila-10.0.0/manila/tests/share/drivers/ibm/0000775000175000017500000000000013656750362021022 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/ibm/__init__.py0000664000175000017500000000000013656750227023121 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/ibm/test_gpfs.py0000664000175000017500000022457113656750227023405 0ustar zuulzuul00000000000000# Copyright (c) 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the IBM GPFS driver module.""" import re import socket from unittest import mock import ddt from oslo_config import cfg from manila import context from manila import exception import manila.share.configuration as config import manila.share.drivers.ibm.gpfs as gpfs from manila.share import share_types from manila import test from manila.tests import fake_share from manila import utils CONF = cfg.CONF @ddt.ddt class GPFSShareDriverTestCase(test.TestCase): """Tests GPFSShareDriver.""" def setUp(self): super(GPFSShareDriverTestCase, self).setUp() self._context = context.get_admin_context() self._gpfs_execute = mock.Mock(return_value=('', '')) self.GPFS_PATH = '/usr/lpp/mmfs/bin/' self._helper_fake = mock.Mock() CONF.set_default('driver_handles_share_servers', False) CONF.set_default('share_backend_name', 'GPFS') self.fake_conf = config.Configuration(None) self._driver = gpfs.GPFSShareDriver(execute=self._gpfs_execute, configuration=self.fake_conf) self._knfs_helper = gpfs.KNFSHelper(self._gpfs_execute, self.fake_conf) self._ces_helper = gpfs.CESHelper(self._gpfs_execute, self.fake_conf) self.fakedev = "/dev/gpfs0" self.fakefspath = "/gpfs0" self.fakesharepath = "/gpfs0/share-fakeid" self.fakeexistingshare = "existingshare" self.fakesnapshotpath = "/gpfs0/.snapshots/snapshot-fakesnapshotid" self.fake_ces_exports = """ mmcesnfslsexport:nfsexports:HEADER:version:reserved:reserved:Path:Delegations:Clients:Access_Type:Protocols:Transports:Squash:Anonymous_uid:Anonymous_gid:SecType:PrivilegedPort:DefaultDelegations:Manage_Gids:NFS_Commit: mmcesnfslsexport:nfsexports:0:1:::/gpfs0/share-fakeid:none:44.3.2.11:RW:3,4:TCP:ROOT_SQUASH:-2:-2:SYS:FALSE:none:FALSE:FALSE: mmcesnfslsexport:nfsexports:0:1:::/gpfs0/share-fakeid:none:1:2:3:4:5:6:7:8:RW:3,4:TCP:ROOT_SQUASH:-2:-2:SYS:FALSE:none:FALSE:FALSE: mmcesnfslsexport:nfsexports:0:1:::/gpfs0/share-fakeid:none:10.0.0.1:RW:3,4:TCP:ROOT_SQUASH:-2:-2:SYS:FALSE:none:FALSE:FALSE: """ self.fake_ces_exports_not_found = """ mmcesnfslsexport:nfsexports:HEADER:version:reserved:reserved:Path:Delegations:Clients:Access_Type:Protocols:Transports:Squash:Anonymous_uid:Anonymous_gid:SecType:PrivilegedPort:DefaultDelegations:Manage_Gids:NFS_Commit: """ self.mock_object(gpfs.os.path, 'exists', mock.Mock(return_value=True)) self._driver._helpers = { 'CES': self._helper_fake } self.share = fake_share.fake_share(share_proto='NFS', host='fakehost@fakehost#GPFS') self.server = { 'backend_details': { 'ip': '1.2.3.4', 'instance_id': 'fake' } } self.access = fake_share.fake_access() self.snapshot = fake_share.fake_snapshot() self.local_ip = "192.11.22.1" self.remote_ip = "192.11.22.2" self.remote_ip2 = "2.2.2.2" gpfs_nfs_server_list = [self.remote_ip, self.local_ip, self.remote_ip2, "fake_location"] self._knfs_helper.configuration.gpfs_nfs_server_list = ( gpfs_nfs_server_list) self._ces_helper.configuration.gpfs_nfs_server_list = ( gpfs_nfs_server_list) self._ces_helper.configuration.ganesha_config_path = ( "fake_ganesha_config_path") self.sshlogin = "fake_login" self.sshkey = "fake_sshkey" self.gservice = "fake_ganesha_service" 
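        # The CES helper below is given SSH credentials and the NFS-Ganesha
        # service name, and socket.gethostname/gethostbyname_ex are mocked so
        # that self.local_ip counts as an address of this host.  That lets
        # the tests exercise both the local and the SSH (remote) execution
        # paths.  For reference, gethostbyname_ex() returns a
        # (hostname, aliaslist, ipaddrlist) tuple, e.g. with the mock below:
        #
        #     >>> socket.gethostbyname_ex('testserver')
        #     ('localhost', ['localhost.localdomain', 'testserver'],
        #      ['127.0.0.1', '192.11.22.1'])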
self._ces_helper.configuration.gpfs_ssh_login = self.sshlogin self._ces_helper.configuration.gpfs_ssh_private_key = self.sshkey self._ces_helper.configuration.ganesha_service_name = self.gservice self.mock_object(socket, 'gethostname', mock.Mock(return_value="testserver")) self.mock_object(socket, 'gethostbyname_ex', mock.Mock( return_value=('localhost', ['localhost.localdomain', 'testserver'], ['127.0.0.1', self.local_ip]) )) def test__run_ssh(self): cmd_list = ['fake', 'cmd'] expected_cmd = 'fake cmd' ssh_pool = mock.Mock() ssh = mock.Mock() self.mock_object(utils, 'SSHPool', mock.Mock(return_value=ssh_pool)) ssh_pool.item = mock.Mock(return_value=ssh) setattr(ssh, '__enter__', mock.Mock()) setattr(ssh, '__exit__', mock.Mock()) self.mock_object(self._driver, '_gpfs_ssh_execute') self._driver._run_ssh(self.local_ip, cmd_list) self._driver._gpfs_ssh_execute.assert_called_once_with( mock.ANY, expected_cmd, check_exit_code=True, ignore_exit_code=None) def test__run_ssh_exception(self): cmd_list = ['fake', 'cmd'] ssh_pool = mock.Mock() ssh = mock.Mock() self.mock_object(utils, 'SSHPool', mock.Mock(return_value=ssh_pool)) ssh_pool.item = mock.Mock(return_value=ssh) self.mock_object(self._driver, '_gpfs_ssh_execute') self.assertRaises(exception.GPFSException, self._driver._run_ssh, self.local_ip, cmd_list) def test__gpfs_ssh_execute(self): cmd = 'fake cmd' expected_out = 'cmd successful' expected_err = 'cmd error' ssh = mock.Mock() stdin_stream = mock.Mock() stdout_stream = mock.Mock() stderr_stream = mock.Mock() ssh.exec_command = mock.Mock(return_value=(stdin_stream, stdout_stream, stderr_stream)) stdout_stream.channel.recv_exit_status = mock.Mock(return_value=-1) stdout_stream.read = mock.Mock(return_value=expected_out) stderr_stream.read = mock.Mock(return_value=expected_err) stdin_stream.close = mock.Mock() actual_out, actual_err = self._driver._gpfs_ssh_execute(ssh, cmd) self.assertEqual(actual_out, expected_out) self.assertEqual(actual_err, expected_err) def test__gpfs_ssh_execute_exception(self): cmd = 'fake cmd' ssh = mock.Mock() stdin_stream = mock.Mock() stdout_stream = mock.Mock() stderr_stream = mock.Mock() ssh.exec_command = mock.Mock(return_value=(stdin_stream, stdout_stream, stderr_stream)) stdout_stream.channel.recv_exit_status = mock.Mock(return_value=1) stdout_stream.read = mock.Mock() stderr_stream.read = mock.Mock() stdin_stream.close = mock.Mock() self.assertRaises(exception.ProcessExecutionError, self._driver._gpfs_ssh_execute, ssh, cmd) def test_get_share_stats_refresh_false(self): self._driver._stats = {'fake_key': 'fake_value'} result = self._driver.get_share_stats(False) self.assertEqual(self._driver._stats, result) def test_get_share_stats_refresh_true(self): self.mock_object( self._driver, '_get_available_capacity', mock.Mock(return_value=(11111.0, 12345.0))) result = self._driver.get_share_stats(True) expected_keys = [ 'qos', 'driver_version', 'share_backend_name', 'free_capacity_gb', 'total_capacity_gb', 'driver_handles_share_servers', 'reserved_percentage', 'vendor_name', 'storage_protocol', ] for key in expected_keys: self.assertIn(key, result) self.assertFalse(result['driver_handles_share_servers']) self.assertEqual('IBM', result['vendor_name']) self._driver._get_available_capacity.assert_called_once_with( self._driver.configuration.gpfs_mount_point_base) def test_do_setup(self): self.mock_object(self._driver, '_setup_helpers') self._driver.do_setup(self._context) self.assertEqual(self._driver._gpfs_execute, self._driver._gpfs_remote_execute) 
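        # With the default is_gpfs_node=False the driver is expected to bind
        # _gpfs_execute to the SSH-based remote executor; the next test flips
        # is_gpfs_node to True and expects the local executor instead.
        # Illustrative sketch of the dispatch being asserted (a hypothetical
        # simplification of do_setup, not its literal code):
        #
        #     if configuration.is_gpfs_node:
        #         self._gpfs_execute = self._gpfs_local_execute
        #     else:
        #         self._gpfs_execute = self._gpfs_remote_execute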
self._driver._setup_helpers.assert_called_once_with() def test_do_setup_gpfs_local_execute(self): self.mock_object(self._driver, '_setup_helpers') self._driver.configuration.is_gpfs_node = True self._driver.do_setup(self._context) self.assertEqual(self._driver._gpfs_execute, self._driver._gpfs_local_execute) self._driver._setup_helpers.assert_called_once_with() def test_setup_helpers(self): self._driver._helpers = {} CONF.set_default('gpfs_share_helpers', ['CES=fakenfs']) self.mock_object(gpfs.importutils, 'import_class', mock.Mock(return_value=self._helper_fake)) self._driver._setup_helpers() gpfs.importutils.import_class.assert_has_calls( [mock.call('fakenfs')] ) self.assertEqual(len(self._driver._helpers), 1) @ddt.data(fake_share.fake_share(), fake_share.fake_share(share_proto='NFSBOGUS')) def test__get_helper_with_wrong_proto(self, share): self.assertRaises(exception.InvalidShare, self._driver._get_helper, share) def test__local_path(self): sharename = 'fakesharename' self._driver.configuration.gpfs_mount_point_base = ( self.fakefspath) local_path = self._driver._local_path(sharename) self.assertEqual(self.fakefspath + '/' + sharename, local_path) def test__get_share_path(self): self._driver.configuration.gpfs_mount_point_base = ( self.fakefspath) share_path = self._driver._get_share_path(self.share) self.assertEqual(self.fakefspath + '/' + self.share['name'], share_path) def test__get_snapshot_path(self): self._driver.configuration.gpfs_mount_point_base = ( self.fakefspath) snapshot_path = self._driver._get_snapshot_path(self.snapshot) self.assertEqual(self.fakefspath + '/' + self.snapshot['share_name'] + '/.snapshots/' + self.snapshot['name'], snapshot_path) def test_check_for_setup_error_for_gpfs_state(self): self.mock_object(self._driver, '_check_gpfs_state', mock.Mock(return_value=False)) self.assertRaises(exception.GPFSException, self._driver.check_for_setup_error) def test_check_for_setup_error_for_export_ip(self): self.mock_object(self._driver, '_check_gpfs_state', mock.Mock(return_value=True)) self._driver.configuration.gpfs_share_export_ip = None self.assertRaises(exception.InvalidParameterValue, self._driver.check_for_setup_error) def test_check_for_setup_error_for_gpfs_mount_point_base(self): self.mock_object(self._driver, '_check_gpfs_state', mock.Mock(return_value=True)) self._driver.configuration.gpfs_share_export_ip = self.local_ip self._driver.configuration.gpfs_mount_point_base = 'test' self.assertRaises(exception.GPFSException, self._driver.check_for_setup_error) def test_check_for_setup_error_for_directory_check(self): self.mock_object(self._driver, '_check_gpfs_state', mock.Mock(return_value=True)) self._driver.configuration.gpfs_share_export_ip = self.local_ip self._driver.configuration.gpfs_mount_point_base = self.fakefspath self.mock_object(self._driver, '_is_dir', mock.Mock(return_value=False)) self.assertRaises(exception.GPFSException, self._driver.check_for_setup_error) def test_check_for_setup_error_for_gpfs_path_check(self): self.mock_object(self._driver, '_check_gpfs_state', mock.Mock(return_value=True)) self._driver.configuration.gpfs_share_export_ip = self.local_ip self._driver.configuration.gpfs_mount_point_base = self.fakefspath self.mock_object(self._driver, '_is_dir', mock.Mock(return_value=True)) self.mock_object(self._driver, '_is_gpfs_path', mock.Mock(return_value=False)) self.assertRaises(exception.GPFSException, self._driver.check_for_setup_error) def test_check_for_setup_error_for_nfs_server_type(self): self.mock_object(self._driver, 
'_check_gpfs_state', mock.Mock(return_value=True)) self._driver.configuration.gpfs_share_export_ip = self.local_ip self._driver.configuration.gpfs_mount_point_base = self.fakefspath self.mock_object(self._driver, '_is_dir', mock.Mock(return_value=True)) self.mock_object(self._driver, '_is_gpfs_path', mock.Mock(return_value=True)) self._driver.configuration.gpfs_nfs_server_type = 'test' self.assertRaises(exception.InvalidParameterValue, self._driver.check_for_setup_error) def test_check_for_setup_error_for_nfs_server_list(self): self.mock_object(self._driver, '_check_gpfs_state', mock.Mock(return_value=True)) self._driver.configuration.gpfs_share_export_ip = self.local_ip self._driver.configuration.gpfs_mount_point_base = self.fakefspath self.mock_object(self._driver, '_is_dir', mock.Mock(return_value=True)) self.mock_object(self._driver, '_is_gpfs_path', mock.Mock(return_value=True)) self._driver.configuration.gpfs_nfs_server_type = 'KNFS' self._driver.configuration.gpfs_nfs_server_list = None self.assertRaises(exception.InvalidParameterValue, self._driver.check_for_setup_error) def test__get_available_capacity(self): path = self.fakefspath mock_out = "Filesystem 1-blocks Used Available Capacity Mounted on\n\ /dev/gpfs0 100 30 70 30% /gpfs0" self.mock_object(self._driver, '_gpfs_execute', mock.Mock(return_value=(mock_out, ''))) available, size = self._driver._get_available_capacity(path) self.assertEqual(70, available) self.assertEqual(100, size) def test_create_share(self): self._helper_fake.create_export.return_value = 'fakelocation' methods = ('_create_share', '_get_share_path') for method in methods: self.mock_object(self._driver, method) result = self._driver.create_share(self._context, self.share, share_server=self.server) self._driver._create_share.assert_called_once_with(self.share) self._driver._get_share_path.assert_called_once_with(self.share) self.assertEqual(result, 'fakelocation') def test_create_share_from_snapshot(self): self._helper_fake.create_export.return_value = 'fakelocation' self._driver._get_share_path = mock.Mock(return_value=self. 
fakesharepath) self._driver._create_share_from_snapshot = mock.Mock() result = self._driver.create_share_from_snapshot(self._context, self.share, self.snapshot, share_server=None) self._driver._get_share_path.assert_called_once_with(self.share) self._driver._create_share_from_snapshot.assert_called_once_with( self.share, self.snapshot, self.fakesharepath ) self.assertEqual(result, 'fakelocation') def test_create_snapshot(self): self._driver._create_share_snapshot = mock.Mock() self._driver.create_snapshot(self._context, self.snapshot, share_server=None) self._driver._create_share_snapshot.assert_called_once_with( self.snapshot ) def test_delete_share(self): self._driver._get_share_path = mock.Mock( return_value=self.fakesharepath ) self._driver._delete_share = mock.Mock() self._driver.delete_share(self._context, self.share, share_server=None) self._driver._get_share_path.assert_called_once_with(self.share) self._driver._delete_share.assert_called_once_with(self.share) self._helper_fake.remove_export.assert_called_once_with( self.fakesharepath, self.share ) def test_delete_snapshot(self): self._driver._delete_share_snapshot = mock.Mock() self._driver.delete_snapshot(self._context, self.snapshot, share_server=None) self._driver._delete_share_snapshot.assert_called_once_with( self.snapshot ) def test__delete_share_snapshot(self): self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._gpfs_execute = mock.Mock(return_value=0) self._driver._delete_share_snapshot(self.snapshot) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmdelsnapshot', self.fakedev, self.snapshot['name'], '-j', self.snapshot['share_name'] ) self._driver._get_gpfs_device.assert_called_once_with() def test__delete_share_snapshot_exception(self): self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError ) self.assertRaises(exception.GPFSException, self._driver._delete_share_snapshot, self.snapshot) self._driver._get_gpfs_device.assert_called_once_with() self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmdelsnapshot', self.fakedev, self.snapshot['name'], '-j', self.snapshot['share_name'] ) def test_extend_share(self): self._driver._extend_share = mock.Mock() self._driver.extend_share(self.share, 10) self._driver._extend_share.assert_called_once_with(self.share, 10) def test__extend_share(self): self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._gpfs_execute = mock.Mock(return_value=True) self._driver._extend_share(self.share, 10) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmsetquota', self.fakedev + ':' + self.share['name'], '--block', '0:10G') self._driver._get_gpfs_device.assert_called_once_with() def test__extend_share_exception(self): self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError ) self.assertRaises(exception.GPFSException, self._driver._extend_share, self.share, 10) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmsetquota', self.fakedev + ':' + self.share['name'], '--block', '0:10G') self._driver._get_gpfs_device.assert_called_once_with() def test_update_access_allow(self): """Test allow_access functionality via update_access.""" self._driver._get_share_path = mock.Mock( return_value=self.fakesharepath ) self._helper_fake.allow_access = mock.Mock() 
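        # update_access() receives the full rule list plus explicit add and
        # delete lists.  In these tests the driver is expected to apply only
        # the deltas when add_rules/delete_rules are provided, and to fall
        # back to resync_access() from the full list when both are empty
        # (see test_update_access_resync).  Call shape, for reference:
        #
        #     update_access(context, share, access_rules,
        #                   add_rules, delete_rules, share_server=None)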
self._driver.update_access(self._context, self.share, ["ignored"], [self.access], [], share_server=None) self._helper_fake.allow_access.assert_called_once_with( self.fakesharepath, self.share, self.access) self.assertFalse(self._helper_fake.resync_access.called) self._driver._get_share_path.assert_called_once_with(self.share) def test_update_access_deny(self): """Test deny_access functionality via update_access.""" self._driver._get_share_path = mock.Mock(return_value=self. fakesharepath) self._helper_fake.deny_access = mock.Mock() self._driver.update_access(self._context, self.share, ["ignored"], [], [self.access], share_server=None) self._helper_fake.deny_access.assert_called_once_with( self.fakesharepath, self.share, self.access) self.assertFalse(self._helper_fake.resync_access.called) self._driver._get_share_path.assert_called_once_with(self.share) def test_update_access_both(self): """Test update_access with allow and deny lists.""" self._driver._get_share_path = mock.Mock(return_value=self. fakesharepath) self._helper_fake.deny_access = mock.Mock() self._helper_fake.allow_access = mock.Mock() self._helper_fake.resync_access = mock.Mock() access_1 = fake_share.fake_access(access_to="1.1.1.1") access_2 = fake_share.fake_access(access_to="2.2.2.2") self._driver.update_access(self._context, self.share, ["ignore"], [access_1], [access_2], share_server=None) self.assertFalse(self._helper_fake.resync_access.called) self._helper_fake.allow_access.assert_called_once_with( self.fakesharepath, self.share, access_1) self._helper_fake.deny_access.assert_called_once_with( self.fakesharepath, self.share, access_2) self._driver._get_share_path.assert_called_once_with(self.share) def test_update_access_resync(self): """Test recovery mode update_access.""" self._driver._get_share_path = mock.Mock(return_value=self. 
fakesharepath) self._helper_fake.deny_access = mock.Mock() self._helper_fake.allow_access = mock.Mock() self._helper_fake.resync_access = mock.Mock() access_1 = fake_share.fake_access(access_to="1.1.1.1") access_2 = fake_share.fake_access(access_to="2.2.2.2") self._driver.update_access(self._context, self.share, [access_1, access_2], [], [], share_server=None) self._helper_fake.resync_access.assert_called_once_with( self.fakesharepath, self.share, [access_1, access_2]) self.assertFalse(self._helper_fake.allow_access.called) self.assertFalse(self._helper_fake.allow_access.called) self._driver._get_share_path.assert_called_once_with(self.share) def test__check_gpfs_state_active(self): fakeout = "mmgetstate::state:\nmmgetstate::active:" self._driver._gpfs_execute = mock.Mock(return_value=(fakeout, '')) result = self._driver._check_gpfs_state() self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmgetstate', '-Y') self.assertEqual(result, True) def test__check_gpfs_state_down(self): fakeout = "mmgetstate::state:\nmmgetstate::down:" self._driver._gpfs_execute = mock.Mock(return_value=(fakeout, '')) result = self._driver._check_gpfs_state() self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmgetstate', '-Y') self.assertEqual(result, False) def test__check_gpfs_state_wrong_output_exception(self): fakeout = "mmgetstate fake out" self._driver._gpfs_execute = mock.Mock(return_value=(fakeout, '')) self.assertRaises(exception.GPFSException, self._driver._check_gpfs_state) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmgetstate', '-Y') def test__check_gpfs_state_exception(self): self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError ) self.assertRaises(exception.GPFSException, self._driver._check_gpfs_state) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmgetstate', '-Y') def test__is_dir_success(self): fakeoutput = "directory" self._driver._gpfs_execute = mock.Mock(return_value=(fakeoutput, '')) result = self._driver._is_dir(self.fakefspath) self._driver._gpfs_execute.assert_called_once_with( 'stat', '--format=%F', self.fakefspath, run_as_root=False ) self.assertEqual(result, True) def test__is_dir_failure(self): fakeoutput = "regular file" self._driver._gpfs_execute = mock.Mock(return_value=(fakeoutput, '')) result = self._driver._is_dir(self.fakefspath) self._driver._gpfs_execute.assert_called_once_with( 'stat', '--format=%F', self.fakefspath, run_as_root=False ) self.assertEqual(result, False) def test__is_dir_exception(self): self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError ) self.assertRaises(exception.GPFSException, self._driver._is_dir, self.fakefspath) self._driver._gpfs_execute.assert_called_once_with( 'stat', '--format=%F', self.fakefspath, run_as_root=False ) def test__is_gpfs_path_ok(self): self._driver._gpfs_execute = mock.Mock(return_value=0) result = self._driver._is_gpfs_path(self.fakefspath) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmlsattr', self.fakefspath) self.assertEqual(result, True) def test__is_gpfs_path_exception(self): self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError ) self.assertRaises(exception.GPFSException, self._driver._is_gpfs_path, self.fakefspath) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmlsattr', self.fakefspath) def test__get_gpfs_device(self): fakeout = "Filesystem\n" + self.fakedev orig_val = 
self._driver.configuration.gpfs_mount_point_base self._driver.configuration.gpfs_mount_point_base = self.fakefspath self._driver._gpfs_execute = mock.Mock(return_value=(fakeout, '')) result = self._driver._get_gpfs_device() self._driver._gpfs_execute.assert_called_once_with('df', self.fakefspath) self.assertEqual(result, self.fakedev) self._driver.configuration.gpfs_mount_point_base = orig_val def test__get_gpfs_device_exception(self): self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.GPFSException, self._driver._get_gpfs_device) def test__create_share(self): sizestr = '%sG' % self.share['size'] self._driver._gpfs_execute = mock.Mock(return_value=True) self._driver._local_path = mock.Mock(return_value=self.fakesharepath) self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._create_share(self.share) self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmcrfileset', self.fakedev, self.share['name'], '--inode-space', 'new') self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmlinkfileset', self.fakedev, self.share['name'], '-J', self.fakesharepath) self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmsetquota', self.fakedev + ':' + self.share['name'], '--block', '0:' + sizestr) self._driver._gpfs_execute.assert_any_call( 'chmod', '777', self.fakesharepath) self._driver._local_path.assert_called_once_with(self.share['name']) self._driver._get_gpfs_device.assert_called_once_with() def test__create_share_exception(self): self._driver._local_path = mock.Mock(return_value=self.fakesharepath) self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError ) self.assertRaises(exception.GPFSException, self._driver._create_share, self.share) self._driver._get_gpfs_device.assert_called_once_with() self._driver._local_path.assert_called_once_with(self.share['name']) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmcrfileset', self.fakedev, self.share['name'], '--inode-space', 'new') def test__delete_share(self): self._driver._gpfs_execute = mock.Mock(return_value=True) self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._delete_share(self.share) self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.share['name'], '-f', ignore_exit_code=[2]) self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmdelfileset', self.fakedev, self.share['name'], '-f', ignore_exit_code=[2]) self._driver._get_gpfs_device.assert_called_once_with() def test__delete_share_exception(self): self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError ) self.assertRaises(exception.GPFSException, self._driver._delete_share, self.share) self._driver._get_gpfs_device.assert_called_once_with() self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.share['name'], '-f', ignore_exit_code=[2]) def test__create_share_snapshot(self): self._driver._gpfs_execute = mock.Mock(return_value=True) self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._create_share_snapshot(self.snapshot) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmcrsnapshot', self.fakedev, self.snapshot['name'], '-j', self.snapshot['share_name'] ) 
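        # The snapshot is expected to be taken per fileset: the asserted
        # command corresponds to something like
        #
        #     /usr/lpp/mmfs/bin/mmcrsnapshot /dev/gpfs0 <snapshot name> \
        #         -j <share fileset name>
        #
        # where '-j' names the fileset that backs the share.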
self._driver._get_gpfs_device.assert_called_once_with() def test__create_share_snapshot_exception(self): self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError ) self.assertRaises(exception.GPFSException, self._driver._create_share_snapshot, self.snapshot) self._driver._get_gpfs_device.assert_called_once_with() self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmcrsnapshot', self.fakedev, self.snapshot['name'], '-j', self.snapshot['share_name'] ) def test__create_share_from_snapshot(self): self._driver._gpfs_execute = mock.Mock(return_value=True) self._driver._create_share = mock.Mock(return_value=True) self._driver._get_snapshot_path = mock.Mock(return_value=self. fakesnapshotpath) self._driver._create_share_from_snapshot(self.share, self.snapshot, self.fakesharepath) self._driver._gpfs_execute.assert_called_once_with( 'rsync', '-rp', self.fakesnapshotpath + '/', self.fakesharepath ) self._driver._create_share.assert_called_once_with(self.share) self._driver._get_snapshot_path.assert_called_once_with(self.snapshot) def test__create_share_from_snapshot_exception(self): self._driver._create_share = mock.Mock(return_value=True) self._driver._get_snapshot_path = mock.Mock(return_value=self. fakesnapshotpath) self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError ) self.assertRaises(exception.GPFSException, self._driver._create_share_from_snapshot, self.share, self.snapshot, self.fakesharepath) self._driver._create_share.assert_called_once_with(self.share) self._driver._get_snapshot_path.assert_called_once_with(self.snapshot) self._driver._gpfs_execute.assert_called_once_with( 'rsync', '-rp', self.fakesnapshotpath + '/', self.fakesharepath ) @ddt.data("mmlsfileset::allocInodes:\nmmlsfileset::100096:", "mmlsfileset::allocInodes:\nmmlsfileset::0:") def test__is_share_valid_with_quota(self, fakeout): self._driver._gpfs_execute = mock.Mock(return_value=(fakeout, '')) result = self._driver._is_share_valid(self.fakedev, self.fakesharepath) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmlsfileset', self.fakedev, '-J', self.fakesharepath, '-L', '-Y') if fakeout == "mmlsfileset::allocInodes:\nmmlsfileset::100096:": self.assertTrue(result) else: self.assertFalse(result) def test__is_share_valid_exception(self): self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.ManageInvalidShare, self._driver._is_share_valid, self.fakedev, self.fakesharepath) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmlsfileset', self.fakedev, '-J', self.fakesharepath, '-L', '-Y') def test__is_share_valid_no_share_exist_exception(self): fakeout = "mmlsfileset::allocInodes:" self._driver._gpfs_execute = mock.Mock(return_value=(fakeout, '')) self.assertRaises(exception.GPFSException, self._driver._is_share_valid, self.fakedev, self.fakesharepath) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmlsfileset', self.fakedev, '-J', self.fakesharepath, '-L', '-Y') def test__get_share_name(self): fakeout = "mmlsfileset::filesetName:\nmmlsfileset::existingshare:" self._driver._gpfs_execute = mock.Mock(return_value=(fakeout, '')) result = self._driver._get_share_name(self.fakedev, self.fakesharepath) self.assertEqual('existingshare', result) def test__get_share_name_exception(self): self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) 
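        # _get_share_name() parses the machine-readable (-Y, colon-delimited)
        # mmlsfileset listing used when adopting an existing fileset.  A
        # failing command is surfaced as ManageInvalidShare below, while a
        # listing with a header but no data row (see the next test) is
        # treated as GPFSException.  Sample -Y output, as mocked above:
        #
        #     mmlsfileset::filesetName:
        #     mmlsfileset::existingshare: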
self.assertRaises(exception.ManageInvalidShare, self._driver._get_share_name, self.fakedev, self.fakesharepath) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmlsfileset', self.fakedev, '-J', self.fakesharepath, '-L', '-Y') def test__get_share_name_no_share_exist_exception(self): fakeout = "mmlsfileset::filesetName:" self._driver._gpfs_execute = mock.Mock(return_value=(fakeout, '')) self.assertRaises(exception.GPFSException, self._driver._get_share_name, self.fakedev, self.fakesharepath) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmlsfileset', self.fakedev, '-J', self.fakesharepath, '-L', '-Y') @ddt.data("mmlsquota::blockLimit:\nmmlsquota::1048577", "mmlsquota::blockLimit:\nmmlsquota::1048576", "mmlsquota::blockLimit:\nmmlsquota::0") def test__manage_existing(self, fakeout): self._driver._gpfs_execute = mock.Mock(return_value=(fakeout, '')) self._helper_fake.create_export.return_value = 'fakelocation' self._driver._local_path = mock.Mock(return_value=self.fakesharepath) actual_size, actual_path = self._driver._manage_existing( self.fakedev, self.share, self.fakeexistingshare) self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.fakeexistingshare, '-f') self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmchfileset', self.fakedev, self.fakeexistingshare, '-j', self.share['name']) self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmlinkfileset', self.fakedev, self.share['name'], '-J', self.fakesharepath) self._driver._gpfs_execute.assert_any_call( 'chmod', '777', self.fakesharepath) if fakeout == "mmlsquota::blockLimit:\nmmlsquota::1048577": self._driver._gpfs_execute.assert_called_with( self.GPFS_PATH + 'mmsetquota', self.fakedev + ':' + self.share['name'], '--block', '0:2G') self.assertEqual(2, actual_size) self.assertEqual('fakelocation', actual_path) elif fakeout == "mmlsquota::blockLimit:\nmmlsquota::0": self._driver._gpfs_execute.assert_called_with( self.GPFS_PATH + 'mmsetquota', self.fakedev + ':' + self.share['name'], '--block', '0:1G') self.assertEqual(1, actual_size) self.assertEqual('fakelocation', actual_path) else: self.assertEqual(1, actual_size) self.assertEqual('fakelocation', actual_path) def test__manage_existing_fileset_unlink_exception(self): self._driver._local_path = mock.Mock(return_value=self.fakesharepath) self._driver._gpfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.GPFSException, self._driver._manage_existing, self.fakedev, self.share, self.fakeexistingshare) self._driver._local_path.assert_called_once_with(self.share['name']) self._driver._gpfs_execute.assert_called_once_with( self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.fakeexistingshare, '-f') def test__manage_existing_fileset_creation_exception(self): self._driver._local_path = mock.Mock(return_value=self.fakesharepath) self.mock_object(self._driver, '_gpfs_execute', mock.Mock( side_effect=['', exception.ProcessExecutionError])) self.assertRaises(exception.GPFSException, self._driver._manage_existing, self.fakedev, self.share, self.fakeexistingshare) self._driver._local_path.assert_any_call(self.share['name']) self._driver._gpfs_execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.fakeexistingshare, '-f'), mock.call(self.GPFS_PATH + 'mmchfileset', self.fakedev, self.fakeexistingshare, '-j', self.share['name'])]) def test__manage_existing_fileset_relink_exception(self): self._driver._local_path = 
mock.Mock(return_value=self.fakesharepath) self.mock_object(self._driver, '_gpfs_execute', mock.Mock( side_effect=['', '', exception.ProcessExecutionError])) self.assertRaises(exception.GPFSException, self._driver._manage_existing, self.fakedev, self.share, self.fakeexistingshare) self._driver._local_path.assert_any_call(self.share['name']) self._driver._gpfs_execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.fakeexistingshare, '-f'), mock.call(self.GPFS_PATH + 'mmchfileset', self.fakedev, self.fakeexistingshare, '-j', self.share['name']), mock.call(self.GPFS_PATH + 'mmlinkfileset', self.fakedev, self.share['name'], '-J', self.fakesharepath)]) def test__manage_existing_permission_change_exception(self): self._driver._local_path = mock.Mock(return_value=self.fakesharepath) self.mock_object(self._driver, '_gpfs_execute', mock.Mock( side_effect=['', '', '', exception.ProcessExecutionError])) self.assertRaises(exception.GPFSException, self._driver._manage_existing, self.fakedev, self.share, self.fakeexistingshare) self._driver._local_path.assert_any_call(self.share['name']) self._driver._gpfs_execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.fakeexistingshare, '-f'), mock.call(self.GPFS_PATH + 'mmchfileset', self.fakedev, self.fakeexistingshare, '-j', self.share['name']), mock.call(self.GPFS_PATH + 'mmlinkfileset', self.fakedev, self.share['name'], '-J', self.fakesharepath), mock.call('chmod', '777', self.fakesharepath)]) def test__manage_existing_checking_quota_of_fileset_exception(self): self._driver._local_path = mock.Mock(return_value=self.fakesharepath) self.mock_object(self._driver, '_gpfs_execute', mock.Mock( side_effect=['', '', '', '', exception.ProcessExecutionError])) self.assertRaises(exception.GPFSException, self._driver._manage_existing, self.fakedev, self.share, self.fakeexistingshare) self._driver._local_path.assert_any_call(self.share['name']) self._driver._gpfs_execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.fakeexistingshare, '-f'), mock.call(self.GPFS_PATH + 'mmchfileset', self.fakedev, self.fakeexistingshare, '-j', self.share['name']), mock.call(self.GPFS_PATH + 'mmlinkfileset', self.fakedev, self.share['name'], '-J', self.fakesharepath), mock.call('chmod', '777', self.fakesharepath), mock.call(self.GPFS_PATH + 'mmlsquota', '-j', self.share['name'], '-Y', self.fakedev)]) def test__manage_existing_unable_to_get_quota_of_fileset_exception(self): fakeout = "mmlsquota::blockLimit:" self._driver._local_path = mock.Mock(return_value=self.fakesharepath) self._driver._gpfs_execute = mock.Mock(return_value=(fakeout, '')) self.assertRaises(exception.GPFSException, self._driver._manage_existing, self.fakedev, self.share, self.fakeexistingshare) self._driver._local_path.assert_any_call(self.share['name']) self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.fakeexistingshare, '-f') self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmchfileset', self.fakedev, self.fakeexistingshare, '-j', self.share['name']) self._driver._gpfs_execute.assert_any_call( self.GPFS_PATH + 'mmlinkfileset', self.fakedev, self.share['name'], '-J', self.fakesharepath) self._driver._gpfs_execute.assert_any_call( 'chmod', '777', self.fakesharepath) self._driver._gpfs_execute.assert_called_with( self.GPFS_PATH + 'mmlsquota', '-j', self.share['name'], '-Y', self.fakedev) def 
test__manage_existing_set_quota_of_fileset_less_than_1G_exception( self): sizestr = '1G' mock_out = "mmlsquota::blockLimit:\nmmlsquota::0:", None self._driver._local_path = mock.Mock(return_value=self.fakesharepath) self.mock_object(self._driver, '_gpfs_execute', mock.Mock( side_effect=['', '', '', '', mock_out, exception.ProcessExecutionError])) self.assertRaises(exception.GPFSException, self._driver._manage_existing, self.fakedev, self.share, self.fakeexistingshare) self._driver._local_path.assert_any_call(self.share['name']) self._driver._gpfs_execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.fakeexistingshare, '-f'), mock.call(self.GPFS_PATH + 'mmchfileset', self.fakedev, self.fakeexistingshare, '-j', self.share['name']), mock.call(self.GPFS_PATH + 'mmlinkfileset', self.fakedev, self.share['name'], '-J', self.fakesharepath), mock.call('chmod', '777', self.fakesharepath), mock.call(self.GPFS_PATH + 'mmlsquota', '-j', self.share['name'], '-Y', self.fakedev), mock.call(self.GPFS_PATH + 'mmsetquota', self.fakedev + ':' + self.share['name'], '--block', '0:' + sizestr)]) def test__manage_existing_set_quota_of_fileset_grater_than_1G_exception( self): sizestr = '2G' mock_out = "mmlsquota::blockLimit:\nmmlsquota::1048577:", None self._driver._local_path = mock.Mock(return_value=self.fakesharepath) self.mock_object(self._driver, '_gpfs_execute', mock.Mock( side_effect=['', '', '', '', mock_out, exception.ProcessExecutionError])) self.assertRaises(exception.GPFSException, self._driver._manage_existing, self.fakedev, self.share, self.fakeexistingshare) self._driver._local_path.assert_any_call(self.share['name']) self._driver._gpfs_execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmunlinkfileset', self.fakedev, self.fakeexistingshare, '-f'), mock.call(self.GPFS_PATH + 'mmchfileset', self.fakedev, self.fakeexistingshare, '-j', self.share['name']), mock.call(self.GPFS_PATH + 'mmlinkfileset', self.fakedev, self.share['name'], '-J', self.fakesharepath), mock.call('chmod', '777', self.fakesharepath), mock.call(self.GPFS_PATH + 'mmlsquota', '-j', self.share['name'], '-Y', self.fakedev), mock.call(self.GPFS_PATH + 'mmsetquota', self.fakedev + ':' + self.share['name'], '--block', '0:' + sizestr)]) def test_manage_existing(self): self._driver._manage_existing = mock.Mock(return_value=('1', 'fakelocation')) self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._is_share_valid = mock.Mock(return_value=True) self._driver._get_share_name = mock.Mock(return_value=self. 
fakeexistingshare) self._helper_fake._has_client_access = mock.Mock(return_value=[]) result = self._driver.manage_existing(self.share, {}) self.assertEqual('1', result['size']) self.assertEqual('fakelocation', result['export_locations']) def test_manage_existing_incorrect_path_exception(self): share = fake_share.fake_share(export_location="wrong_ip::wrong_path") self.assertRaises(exception.ShareBackendException, self._driver.manage_existing, share, {}) def test_manage_existing_incorrect_ip_exception(self): share = fake_share.fake_share(export_location="wrong_ip:wrong_path") self.assertRaises(exception.ShareBackendException, self._driver.manage_existing, share, {}) def test__manage_existing_invalid_export_exception(self): share = fake_share.fake_share(export_location="wrong_ip/wrong_path") self.assertRaises(exception.ShareBackendException, self._driver.manage_existing, share, {}) @ddt.data(True, False) def test_manage_existing_invalid_share_exception(self, valid_share): self._driver._get_gpfs_device = mock.Mock(return_value=self.fakedev) self._driver._is_share_valid = mock.Mock(return_value=valid_share) if valid_share: self._driver._get_share_name = mock.Mock(return_value=self. fakeexistingshare) self._helper_fake._has_client_access = mock.Mock() else: self.assertFalse(self._helper_fake._has_client_access.called) self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, self.share, {}) def test__gpfs_local_execute(self): self.mock_object(utils, 'execute', mock.Mock(return_value=True)) cmd = "testcmd" self._driver._gpfs_local_execute(cmd, ignore_exit_code=[2]) utils.execute.assert_called_once_with(cmd, run_as_root=True, check_exit_code=[2, 0]) def test__gpfs_remote_execute(self): self._driver._run_ssh = mock.Mock(return_value=True) cmd = "testcmd" orig_value = self._driver.configuration.gpfs_share_export_ip self._driver.configuration.gpfs_share_export_ip = self.local_ip self._driver._gpfs_remote_execute(cmd, check_exit_code=True) self._driver._run_ssh.assert_called_once_with( self.local_ip, tuple([cmd]), None, True ) self._driver.configuration.gpfs_share_export_ip = orig_value def test_knfs_resync_access(self): self._knfs_helper.allow_access = mock.Mock() path = self.fakesharepath to_remove = '3.3.3.3' fake_exportfs_before = ('%(path)s\n\t\t%(ip)s\n' '/other/path\n\t\t4.4.4.4\n' % {'path': path, 'ip': to_remove}) fake_exportfs_after = '/other/path\n\t\t4.4.4.4\n' self._knfs_helper._execute = mock.Mock( return_value=(fake_exportfs_before, '')) self._knfs_helper._publish_access = mock.Mock( side_effect=[[(fake_exportfs_before, '')], [(fake_exportfs_after, '')]]) access_1 = fake_share.fake_access(access_to="1.1.1.1") access_2 = fake_share.fake_access(access_to="2.2.2.2") self._knfs_helper.resync_access(path, self.share, [access_1, access_2]) self._knfs_helper.allow_access.assert_has_calls([ mock.call(path, self.share, access_1, error_on_exists=False), mock.call(path, self.share, access_2, error_on_exists=False)]) self._knfs_helper._execute.assert_called_once_with( 'exportfs', run_as_root=True) self._knfs_helper._publish_access.assert_has_calls([ mock.call('exportfs', '-u', '%(ip)s:%(path)s' % {'ip': to_remove, 'path': path}, check_exit_code=[0, 1]), mock.call('exportfs')]) @ddt.data('rw', 'ro') def test_knfs_get_export_options(self, access_level): mock_out = {"knfs:export_options": "no_root_squash"} self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=mock_out)) access = fake_share.fake_access(access_level=access_level) out = 
self._knfs_helper.get_export_options(self.share, access, 'KNFS') self.assertEqual("no_root_squash,%s" % access_level, out) def test_knfs_get_export_options_default(self): self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) access = self.access out = self._knfs_helper.get_export_options(self.share, access, 'KNFS') self.assertEqual("rw", out) def test_knfs_get_export_options_invalid_option_ro(self): mock_out = {"knfs:export_options": "ro"} self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=mock_out)) access = self.access share = fake_share.fake_share(share_type="fake_share_type") self.assertRaises(exception.InvalidInput, self._knfs_helper.get_export_options, share, access, 'KNFS') def test_knfs_get_export_options_invalid_option_rw(self): mock_out = {"knfs:export_options": "rw"} self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=mock_out)) access = self.access share = fake_share.fake_share(share_type="fake_share_type") self.assertRaises(exception.InvalidInput, self._knfs_helper.get_export_options, share, access, 'KNFS') @ddt.data(("/gpfs0/share-fakeid\t10.0.0.1", None), ("", None), ("/gpfs0/share-fakeid\t10.0.0.1", "10.0.0.1"), ("/gpfs0/share-fakeid\t10.0.0.1", "10.0.0.2")) @ddt.unpack def test_knfs__has_client_access(self, mock_out, access_to): self._knfs_helper._execute = mock.Mock(return_value=[mock_out, 0]) result = self._knfs_helper._has_client_access(self.fakesharepath, access_to) self._ces_helper._execute.assert_called_once_with('exportfs', check_exit_code=True, run_as_root=True) if mock_out == "/gpfs0/share-fakeid\t10.0.0.1": if access_to in (None, "10.0.0.1"): self.assertTrue(result) else: self.assertFalse(result) else: self.assertFalse(result) def test_knfs_allow_access(self): self._knfs_helper._execute = mock.Mock( return_value=['/fs0 ', 0] ) self.mock_object(re, 'search', mock.Mock(return_value=None)) export_opts = None self._knfs_helper.get_export_options = mock.Mock( return_value=export_opts ) self._knfs_helper._publish_access = mock.Mock() access = self.access local_path = self.fakesharepath self._knfs_helper.allow_access(local_path, self.share, access) self._knfs_helper._execute.assert_called_once_with('exportfs', run_as_root=True) self.assertTrue(re.search.called) self._knfs_helper.get_export_options.assert_any_call( self.share, access, 'KNFS') cmd = ['exportfs', '-o', export_opts, ':'.join([access['access_to'], local_path])] self._knfs_helper._publish_access.assert_called_once_with(*cmd) def test_knfs_allow_access_access_exists(self): out = ['/fs0 ', 0] self._knfs_helper._execute = mock.Mock(return_value=out) self.mock_object(re, 'search', mock.Mock(return_value="fake")) self._knfs_helper.get_export_options = mock.Mock() access = self.access local_path = self.fakesharepath self.assertRaises(exception.ShareAccessExists, self._knfs_helper.allow_access, local_path, self.share, access) self._knfs_helper._execute.assert_any_call('exportfs', run_as_root=True) self.assertTrue(re.search.called) self.assertFalse(self._knfs_helper.get_export_options.called) def test_knfs_allow_access_publish_exception(self): self._knfs_helper.get_export_options = mock.Mock() self._knfs_helper._publish_access = mock.Mock( side_effect=exception.ProcessExecutionError('boom')) self.assertRaises(exception.GPFSException, self._knfs_helper.allow_access, self.fakesharepath, self.share, self.access, error_on_exists=False) self.assertTrue(self._knfs_helper.get_export_options.called) 
self.assertTrue(self._knfs_helper._publish_access.called) def test_knfs_allow_access_invalid_access(self): access = fake_share.fake_access(access_type='test') self.assertRaises(exception.InvalidShareAccess, self._knfs_helper.allow_access, self.fakesharepath, self.share, access) def test_knfs_allow_access_exception(self): self._knfs_helper._execute = mock.Mock( side_effect=exception.ProcessExecutionError ) access = self.access local_path = self.fakesharepath self.assertRaises(exception.GPFSException, self._knfs_helper.allow_access, local_path, self.share, access) self._knfs_helper._execute.assert_called_once_with('exportfs', run_as_root=True) def test_knfs__verify_denied_access_pass(self): local_path = self.fakesharepath ip = self.access['access_to'] fake_exportfs = ('/shares/share-1\n\t\t1.1.1.1\n' '/shares/share-2\n\t\t2.2.2.2\n') self._knfs_helper._publish_access = mock.Mock( return_value=[(fake_exportfs, '')]) self._knfs_helper._verify_denied_access(local_path, self.share, ip) self._knfs_helper._publish_access.assert_called_once_with('exportfs') def test_knfs__verify_denied_access_fail(self): local_path = self.fakesharepath ip = self.access['access_to'] data = {'path': local_path, 'ip': ip} fake_exportfs = ('/shares/share-1\n\t\t1.1.1.1\n' '%(path)s\n\t\t%(ip)s\n' '/shares/share-2\n\t\t2.2.2.2\n') % data self._knfs_helper._publish_access = mock.Mock( return_value=[(fake_exportfs, '')]) self.assertRaises(exception.GPFSException, self._knfs_helper._verify_denied_access, local_path, self.share, ip) self._knfs_helper._publish_access.assert_called_once_with('exportfs') def test_knfs__verify_denied_access_exception(self): self._knfs_helper._publish_access = mock.Mock( side_effect=exception.ProcessExecutionError ) ip = self.access['access_to'] local_path = self.fakesharepath self.assertRaises(exception.GPFSException, self._knfs_helper._verify_denied_access, local_path, self.share, ip) self._knfs_helper._publish_access.assert_called_once_with('exportfs') @ddt.data((None, False), ('', False), (' ', False), ('Some error to log', True)) @ddt.unpack def test_knfs__verify_denied_access_stderr(self, stderr, is_logged): """Stderr debug logging should only happen when not empty.""" outputs = [('', stderr)] self._knfs_helper._publish_access = mock.Mock(return_value=outputs) gpfs.LOG.debug = mock.Mock() self._knfs_helper._verify_denied_access( self.fakesharepath, self.share, self.remote_ip) self._knfs_helper._publish_access.assert_called_once_with('exportfs') self.assertEqual(is_logged, gpfs.LOG.debug.called) def test_knfs_deny_access(self): self._knfs_helper._publish_access = mock.Mock(return_value=[('', '')]) access = self.access local_path = self.fakesharepath self._knfs_helper.deny_access(local_path, self.share, access) deny = ['exportfs', '-u', ':'.join([access['access_to'], local_path])] self._knfs_helper._publish_access.assert_has_calls([ mock.call(*deny, check_exit_code=[0, 1]), mock.call('exportfs')]) def test_knfs_deny_access_exception(self): self._knfs_helper._publish_access = mock.Mock( side_effect=exception.ProcessExecutionError ) access = self.access local_path = self.fakesharepath cmd = ['exportfs', '-u', ':'.join([access['access_to'], local_path])] self.assertRaises(exception.GPFSException, self._knfs_helper.deny_access, local_path, self.share, access) self._knfs_helper._publish_access.assert_called_once_with( *cmd, check_exit_code=[0, 1]) def test_knfs__publish_access(self): self.mock_object(utils, 'execute') fake_command = 'fakecmd' cmd = [fake_command] 
self._knfs_helper._publish_access(*cmd) utils.execute.assert_any_call(*cmd, run_as_root=True, check_exit_code=True) remote_login = self.sshlogin + '@' + self.remote_ip remote_login2 = self.sshlogin + '@' + self.remote_ip2 utils.execute.assert_has_calls([ mock.call('ssh', remote_login, fake_command, check_exit_code=True, run_as_root=False), mock.call(fake_command, check_exit_code=True, run_as_root=True), mock.call('ssh', remote_login2, fake_command, check_exit_code=True, run_as_root=False)]) self.assertTrue(socket.gethostbyname_ex.called) self.assertTrue(socket.gethostname.called) def test_knfs__publish_access_exception(self): self.mock_object( utils, 'execute', mock.Mock(side_effect=(0, exception.ProcessExecutionError))) fake_command = 'fakecmd' cmd = [fake_command] self.assertRaises(exception.ProcessExecutionError, self._knfs_helper._publish_access, *cmd) self.assertTrue(socket.gethostbyname_ex.called) self.assertTrue(socket.gethostname.called) remote_login = self.sshlogin + '@' + self.remote_ip utils.execute.assert_has_calls([ mock.call('ssh', remote_login, fake_command, check_exit_code=True, run_as_root=False), mock.call(fake_command, check_exit_code=True, run_as_root=True)]) @ddt.data('rw', 'ro') def test_ces_get_export_options(self, access_level): mock_out = {"ces:export_options": "squash=no_root_squash"} self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=mock_out)) access = fake_share.fake_access(access_level=access_level) out = self._ces_helper.get_export_options(self.share, access, 'CES') self.assertEqual("squash=no_root_squash,access_type=%s" % access_level, out) def test_ces_get_export_options_default(self): self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) access = self.access out = self._ces_helper.get_export_options(self.share, access, 'CES') self.assertEqual("access_type=rw", out) def test_ces_get_export_options_invalid_option_ro(self): mock_out = {"ces:export_options": "access_type=ro"} self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=mock_out)) access = self.access share = fake_share.fake_share(share_type="fake_share_type") self.assertRaises(exception.InvalidInput, self._ces_helper.get_export_options, share, access, 'CES') def test_ces_get_export_options_invalid_option_rw(self): mock_out = {"ces:export_options": "access_type=rw"} self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=mock_out)) access = self.access share = fake_share.fake_share(share_type="fake_share_type") self.assertRaises(exception.InvalidInput, self._ces_helper.get_export_options, share, access, 'CES') def test__get_nfs_client_exports_exception(self): self._ces_helper._execute = mock.Mock(return_value=('junk', '')) local_path = self.fakesharepath self.assertRaises(exception.GPFSException, self._ces_helper._get_nfs_client_exports, local_path) self._ces_helper._execute.assert_called_once_with( self.GPFS_PATH + 'mmnfs', 'export', 'list', '-n', local_path, '-Y') @ddt.data('44.3.2.11', '1:2:3:4:5:6:7:8') def test__fix_export_data(self, ip): data = None for line in self.fake_ces_exports.splitlines(): if "HEADER" in line: headers = line.split(':') if ip in line: data = line.split(':') break self.assertIsNotNone( data, "Test data did not contain a line with the test IP.") result_data = self._ces_helper._fix_export_data(data, headers) self.assertEqual(ip, result_data[headers.index('Clients')]) @ddt.data((None, True), ('44.3.2.11', True), ('44.3.2.1', False), ('4.3.2.1', False), 
('4.3.2.11', False), ('1.2.3.4', False), ('', False), ('*', False), ('.', False), ('1:2:3:4:5:6:7:8', True)) @ddt.unpack def test_ces__has_client_access(self, ip, has_access): mock_out = self.fake_ces_exports self._ces_helper._execute = mock.Mock( return_value=(mock_out, '')) local_path = self.fakesharepath self.assertEqual(has_access, self._ces_helper._has_client_access(local_path, ip)) self._ces_helper._execute.assert_called_once_with( self.GPFS_PATH + 'mmnfs', 'export', 'list', '-n', local_path, '-Y') def test_ces_remove_export_no_exports(self): mock_out = self.fake_ces_exports_not_found self._ces_helper._execute = mock.Mock( return_value=(mock_out, '')) local_path = self.fakesharepath self._ces_helper.remove_export(local_path, self.share) self._ces_helper._execute.assert_called_once_with( self.GPFS_PATH + 'mmnfs', 'export', 'list', '-n', local_path, '-Y') def test_ces_remove_export_existing_exports(self): mock_out = self.fake_ces_exports self._ces_helper._execute = mock.Mock( return_value=(mock_out, '')) local_path = self.fakesharepath self._ces_helper.remove_export(local_path, self.share) self._ces_helper._execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'list', '-n', local_path, '-Y'), mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'remove', local_path), ]) def test_ces_remove_export_exception(self): local_path = self.fakesharepath self._ces_helper._execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.GPFSException, self._ces_helper.remove_export, local_path, self.share) def test_ces_allow_access(self): mock_out = self.fake_ces_exports_not_found self._ces_helper._execute = mock.Mock( return_value=(mock_out, '')) export_opts = "access_type=rw" self._ces_helper.get_export_options = mock.Mock( return_value=export_opts) access = self.access local_path = self.fakesharepath self._ces_helper.allow_access(local_path, self.share, access) self._ces_helper._execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'list', '-n', local_path, '-Y'), mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'add', local_path, '-c', access['access_to'] + '(' + export_opts + ')')]) def test_ces_allow_access_existing_exports(self): mock_out = self.fake_ces_exports self._ces_helper._execute = mock.Mock( return_value=(mock_out, '')) export_opts = "access_type=rw" self._ces_helper.get_export_options = mock.Mock( return_value=export_opts) access = self.access local_path = self.fakesharepath self._ces_helper.allow_access(self.fakesharepath, self.share, self.access) self._ces_helper._execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'list', '-n', local_path, '-Y'), mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'change', local_path, '--nfsadd', access['access_to'] + '(' + export_opts + ')')]) def test_ces_allow_access_invalid_access_type(self): access = fake_share.fake_access(access_type='test') self.assertRaises(exception.InvalidShareAccess, self._ces_helper.allow_access, self.fakesharepath, self.share, access) def test_ces_allow_access_exception(self): access = self.access local_path = self.fakesharepath self._ces_helper._execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.GPFSException, self._ces_helper.allow_access, local_path, self.share, access) def test_ces_deny_access(self): mock_out = self.fake_ces_exports self._ces_helper._execute = mock.Mock( return_value=(mock_out, '')) access = self.access local_path = self.fakesharepath self._ces_helper.deny_access(local_path, 
self.share, access) self._ces_helper._execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'list', '-n', local_path, '-Y'), mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'change', local_path, '--nfsremove', access['access_to'])]) def test_ces_deny_access_exception(self): access = self.access local_path = self.fakesharepath self._ces_helper._execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.GPFSException, self._ces_helper.deny_access, local_path, self.share, access) def test_ces_resync_access_add(self): mock_out = self.fake_ces_exports_not_found self._ces_helper._execute = mock.Mock(return_value=(mock_out, '')) self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) access_rules = [self.access] local_path = self.fakesharepath self._ces_helper.resync_access(local_path, self.share, access_rules) self._ces_helper._execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'list', '-n', local_path, '-Y'), mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'add', local_path, '-c', self.access['access_to'] + '(' + "access_type=rw" + ')') ]) share_types.get_extra_specs_from_share.assert_called_once_with( self.share) def test_ces_resync_access_change(self): class SortedMatch(object): def __init__(self, f, expected): self.assertEqual = f self.expected = expected def __eq__(self, actual): expected_list = self.expected.split(',') actual_list = actual.split(',') self.assertEqual(sorted(expected_list), sorted(actual_list)) return True mock_out = self.fake_ces_exports self._ces_helper._execute = mock.Mock( return_value=(mock_out, '')) self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) access_rules = [fake_share.fake_access(access_to='1.1.1.1'), fake_share.fake_access( access_to='10.0.0.1', access_level='ro')] local_path = self.fakesharepath self._ces_helper.resync_access(local_path, self.share, access_rules) share_types.get_extra_specs_from_share.assert_called_once_with( self.share) to_remove = '1:2:3:4:5:6:7:8,44.3.2.11' to_add = access_rules[0]['access_to'] + '(' + "access_type=rw" + ')' to_change = access_rules[1]['access_to'] + '(' + "access_type=ro" + ')' self._ces_helper._execute.assert_has_calls([ mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'list', '-n', local_path, '-Y'), mock.call(self.GPFS_PATH + 'mmnfs', 'export', 'change', local_path, '--nfsremove', SortedMatch(self.assertEqual, to_remove), '--nfsadd', to_add, '--nfschange', to_change) ]) def test_ces_resync_nothing(self): """Test that hits the add-no-rules case.""" mock_out = self.fake_ces_exports_not_found self._ces_helper._execute = mock.Mock(return_value=(mock_out, '')) local_path = self.fakesharepath self._ces_helper.resync_access(local_path, None, []) self._ces_helper._execute.assert_called_once_with( self.GPFS_PATH + 'mmnfs', 'export', 'list', '-n', local_path, '-Y') manila-10.0.0/manila/tests/share/drivers/hdfs/0000775000175000017500000000000013656750362021177 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/hdfs/__init__.py0000664000175000017500000000000013656750227023276 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/hdfs/test_hdfs_native.py0000664000175000017500000005445213656750227025114 0ustar zuulzuul00000000000000# Copyright 2015 Intel, Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for HDFS native protocol driver module.""" import socket from unittest import mock from oslo_concurrency import processutils from oslo_config import cfg import six from manila import context from manila import exception import manila.share.configuration as config import manila.share.drivers.hdfs.hdfs_native as hdfs_native from manila import test from manila.tests import fake_share from manila import utils CONF = cfg.CONF class HDFSNativeShareDriverTestCase(test.TestCase): """Tests HDFSNativeShareDriver.""" def setUp(self): super(HDFSNativeShareDriverTestCase, self).setUp() self._context = context.get_admin_context() self._hdfs_execute = mock.Mock(return_value=('', '')) self.local_ip = '192.168.1.1' CONF.set_default('driver_handles_share_servers', False) CONF.set_default('hdfs_namenode_ip', self.local_ip) CONF.set_default('hdfs_ssh_name', 'fake_sshname') CONF.set_default('hdfs_ssh_pw', 'fake_sshpw') CONF.set_default('hdfs_ssh_private_key', 'fake_sshkey') self.fake_conf = config.Configuration(None) self._driver = hdfs_native.HDFSNativeShareDriver( execute=self._hdfs_execute, configuration=self.fake_conf) self.hdfs_bin = 'hdfs' self._driver._hdfs_bin = 'fake_hdfs_bin' self.share = fake_share.fake_share(share_proto='HDFS') self.snapshot = fake_share.fake_snapshot(share_proto='HDFS') self.access = fake_share.fake_access(access_type='user') self.fakesharepath = 'hdfs://1.2.3.4:5/share-0' self.fakesnapshotpath = '/share-0/.snapshot/snapshot-0' socket.gethostname = mock.Mock(return_value='testserver') socket.gethostbyname_ex = mock.Mock(return_value=( 'localhost', ['localhost.localdomain', 'testserver'], ['127.0.0.1', self.local_ip])) def test_do_setup(self): self._driver.do_setup(self._context) self.assertEqual(self._driver._hdfs_bin, self.hdfs_bin) def test_create_share(self): self._driver._create_share = mock.Mock() self._driver._get_share_path = mock.Mock( return_value=self.fakesharepath) result = self._driver.create_share(self._context, self.share, share_server=None) self._driver._create_share.assert_called_once_with(self.share) self._driver._get_share_path.assert_called_once_with(self.share) self.assertEqual(self.fakesharepath, result) def test_create_share_unsupported_proto(self): self._driver._get_share_path = mock.Mock() self.assertRaises(exception.HDFSException, self._driver.create_share, self._context, fake_share.fake_share(), share_server=None) self.assertFalse(self._driver._get_share_path.called) def test__set_share_size(self): share_dir = '/' + self.share['name'] sizestr = six.text_type(self.share['size']) + 'g' self._driver._hdfs_execute = mock.Mock(return_value=True) self._driver._set_share_size(self.share) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfsadmin', '-setSpaceQuota', sizestr, share_dir) def test__set_share_size_exception(self): share_dir = '/' + self.share['name'] sizestr = six.text_type(self.share['size']) + 'g' self._driver._hdfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.HDFSException, self._driver._set_share_size, self.share) self._driver._hdfs_execute.assert_called_once_with( 
'fake_hdfs_bin', 'dfsadmin', '-setSpaceQuota', sizestr, share_dir) def test__set_share_size_with_new_size(self): share_dir = '/' + self.share['name'] new_size = 'fake_size' sizestr = new_size + 'g' self._driver._hdfs_execute = mock.Mock(return_value=True) self._driver._set_share_size(self.share, new_size) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfsadmin', '-setSpaceQuota', sizestr, share_dir) def test__create_share(self): share_dir = '/' + self.share['name'] self._driver._hdfs_execute = mock.Mock(return_value=True) self._driver._set_share_size = mock.Mock() self._driver._create_share(self.share) self._driver._hdfs_execute.assert_any_call( 'fake_hdfs_bin', 'dfs', '-mkdir', share_dir) self._driver._set_share_size.assert_called_once_with(self.share) self._driver._hdfs_execute.assert_any_call( 'fake_hdfs_bin', 'dfsadmin', '-allowSnapshot', share_dir) def test__create_share_exception(self): share_dir = '/' + self.share['name'] self._driver._hdfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.HDFSException, self._driver._create_share, self.share) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfs', '-mkdir', share_dir) def test_create_share_from_empty_snapshot(self): return_hdfs_execute = (None, None) self._driver._hdfs_execute = mock.Mock( return_value=return_hdfs_execute) self._driver._create_share = mock.Mock(return_value=True) self._driver._get_share_path = mock.Mock(return_value=self. fakesharepath) self._driver._get_snapshot_path = mock.Mock(return_value=self. fakesnapshotpath) result = self._driver.create_share_from_snapshot(self._context, self.share, self.snapshot, share_server=None) self._driver._create_share.assert_called_once_with(self.share) self._driver._get_snapshot_path.assert_called_once_with( self.snapshot) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfs', '-ls', self.fakesnapshotpath) self._driver._get_share_path.assert_called_once_with(self.share) self.assertEqual(self.fakesharepath, result) def test_create_share_from_snapshot(self): return_hdfs_execute = ("fake_content", None) self._driver._hdfs_execute = mock.Mock( return_value=return_hdfs_execute) self._driver._create_share = mock.Mock(return_value=True) self._driver._get_share_path = mock.Mock(return_value=self. fakesharepath) self._driver._get_snapshot_path = mock.Mock(return_value=self. fakesnapshotpath) result = self._driver.create_share_from_snapshot(self._context, self.share, self.snapshot, share_server=None) self._driver._create_share.assert_called_once_with(self.share) self._driver._get_snapshot_path.assert_called_once_with( self.snapshot) calls = [mock.call('fake_hdfs_bin', 'dfs', '-ls', self.fakesnapshotpath), mock.call('fake_hdfs_bin', 'dfs', '-cp', self.fakesnapshotpath + '/*', '/' + self.share['name'])] self._driver._hdfs_execute.assert_has_calls(calls) self._driver._get_share_path.assert_called_once_with(self.share) self.assertEqual(self.fakesharepath, result) def test_create_share_from_snapshot_exception(self): self._driver._create_share = mock.Mock(return_value=True) self._driver._get_snapshot_path = mock.Mock(return_value=self. fakesnapshotpath) self._driver._get_share_path = mock.Mock(return_value=self. 
fakesharepath) self._driver._hdfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.HDFSException, self._driver.create_share_from_snapshot, self._context, self.share, self.snapshot, share_server=None) self._driver._create_share.assert_called_once_with(self.share) self._driver._get_snapshot_path.assert_called_once_with(self.snapshot) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfs', '-ls', self.fakesnapshotpath) self.assertFalse(self._driver._get_share_path.called) def test_create_snapshot(self): self._driver._hdfs_execute = mock.Mock(return_value=True) self._driver.create_snapshot(self._context, self.snapshot, share_server=None) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfs', '-createSnapshot', '/' + self.snapshot['share_name'], self.snapshot['name']) def test_create_snapshot_exception(self): self._driver._hdfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.HDFSException, self._driver.create_snapshot, self._context, self.snapshot, share_server=None) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfs', '-createSnapshot', '/' + self.snapshot['share_name'], self.snapshot['name']) def test_delete_share(self): self._driver._hdfs_execute = mock.Mock(return_value=True) self._driver.delete_share(self._context, self.share, share_server=None) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfs', '-rm', '-r', '/' + self.share['name']) def test_delete_share_exception(self): self._driver._hdfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.HDFSException, self._driver.delete_share, self._context, self.share, share_server=None) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfs', '-rm', '-r', '/' + self.share['name']) def test_delete_snapshot(self): self._driver._hdfs_execute = mock.Mock(return_value=True) self._driver.delete_snapshot(self._context, self.snapshot, share_server=None) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfs', '-deleteSnapshot', '/' + self.snapshot['share_name'], self.snapshot['name']) def test_delete_snapshot_exception(self): self._driver._hdfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.HDFSException, self._driver.delete_snapshot, self._context, self.snapshot, share_server=None) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfs', '-deleteSnapshot', '/' + self.snapshot['share_name'], self.snapshot['name']) def test_allow_access(self): self._driver._hdfs_execute = mock.Mock( return_value=['', '']) share_dir = '/' + self.share['name'] user_access = ':'.join([self.access['access_type'], self.access['access_to'], 'rwx']) cmd = ['fake_hdfs_bin', 'dfs', '-setfacl', '-m', '-R', user_access, share_dir] self._driver.allow_access(self._context, self.share, self.access, share_server=None) self._driver._hdfs_execute.assert_called_once_with( *cmd, check_exit_code=True) def test_allow_access_invalid_access_type(self): self.assertRaises(exception.InvalidShareAccess, self._driver.allow_access, self._context, self.share, fake_share.fake_access( access_type='invalid_access_type'), share_server=None) def test_allow_access_invalid_access_level(self): self.assertRaises(exception.InvalidShareAccess, self._driver.allow_access, self._context, self.share, fake_share.fake_access( access_level='invalid_access_level'), share_server=None) def 
test_allow_access_exception(self): self._driver._hdfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) share_dir = '/' + self.share['name'] user_access = ':'.join([self.access['access_type'], self.access['access_to'], 'rwx']) cmd = ['fake_hdfs_bin', 'dfs', '-setfacl', '-m', '-R', user_access, share_dir] self.assertRaises(exception.HDFSException, self._driver.allow_access, self._context, self.share, self.access, share_server=None) self._driver._hdfs_execute.assert_called_once_with( *cmd, check_exit_code=True) def test_deny_access(self): self._driver._hdfs_execute = mock.Mock(return_value=['', '']) share_dir = '/' + self.share['name'] access_name = ':'.join([self.access['access_type'], self.access['access_to']]) cmd = ['fake_hdfs_bin', 'dfs', '-setfacl', '-x', '-R', access_name, share_dir] self._driver.deny_access(self._context, self.share, self.access, share_server=None) self._driver._hdfs_execute.assert_called_once_with( *cmd, check_exit_code=True) def test_deny_access_exception(self): self._driver._hdfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) share_dir = '/' + self.share['name'] access_name = ':'.join([self.access['access_type'], self.access['access_to']]) cmd = ['fake_hdfs_bin', 'dfs', '-setfacl', '-x', '-R', access_name, share_dir] self.assertRaises(exception.HDFSException, self._driver.deny_access, self._context, self.share, self.access, share_server=None) self._driver._hdfs_execute.assert_called_once_with( *cmd, check_exit_code=True) def test_extend_share(self): new_size = "fake_size" self._driver._set_share_size = mock.Mock() self._driver.extend_share(self.share, new_size) self._driver._set_share_size.assert_called_once_with( self.share, new_size) def test__check_hdfs_state_healthy(self): fake_out = "fakeinfo\n...Status: HEALTHY" self._driver._hdfs_execute = mock.Mock(return_value=(fake_out, '')) result = self._driver._check_hdfs_state() self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'fsck', '/') self.assertTrue(result) def test__check_hdfs_state_down(self): fake_out = "fakeinfo\n...Status: DOWN" self._driver._hdfs_execute = mock.Mock(return_value=(fake_out, '')) result = self._driver._check_hdfs_state() self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'fsck', '/') self.assertFalse(result) def test__check_hdfs_state_exception(self): self._driver._hdfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.HDFSException, self._driver._check_hdfs_state) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'fsck', '/') def test__get_available_capacity(self): fake_out = ('Configured Capacity: 2.4\n' + 'Total Capacity: 2\n' + 'DFS free: 1') self._driver._hdfs_execute = mock.Mock(return_value=(fake_out, '')) total, free = self._driver._get_available_capacity() self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfsadmin', '-report') self.assertEqual(2, total) self.assertEqual(1, free) def test__get_available_capacity_exception(self): self._driver._hdfs_execute = mock.Mock( side_effect=exception.ProcessExecutionError) self.assertRaises(exception.HDFSException, self._driver._get_available_capacity) self._driver._hdfs_execute.assert_called_once_with( 'fake_hdfs_bin', 'dfsadmin', '-report') def test_get_share_stats_refresh_false(self): self._driver._stats = {'fake_key': 'fake_value'} result = self._driver.get_share_stats(False) self.assertEqual(self._driver._stats, result) def test_get_share_stats_refresh_true(self): 
self._driver._get_available_capacity = mock.Mock( return_value=(11111.0, 12345.0)) result = self._driver.get_share_stats(True) expected_keys = [ 'qos', 'driver_version', 'share_backend_name', 'free_capacity_gb', 'total_capacity_gb', 'driver_handles_share_servers', 'reserved_percentage', 'vendor_name', 'storage_protocol', 'ipv4_support', 'ipv6_support' ] for key in expected_keys: self.assertIn(key, result) self.assertTrue(result['ipv4_support']) self.assertFalse(result['ipv6_support']) self.assertEqual('HDFS', result['storage_protocol']) self._driver._get_available_capacity.assert_called_once_with() def test__hdfs_local_execute(self): cmd = 'testcmd' self.mock_object(utils, 'execute', mock.Mock(return_value=True)) self._driver._hdfs_local_execute(cmd) utils.execute.assert_called_once_with(cmd, run_as_root=False) def test__hdfs_remote_execute(self): self._driver._run_ssh = mock.Mock(return_value=True) cmd = 'testcmd' self._driver._hdfs_remote_execute(cmd, check_exit_code=True) self._driver._run_ssh.assert_called_once_with( self.local_ip, tuple([cmd]), True) def test__run_ssh(self): ssh_output = 'fake_ssh_output' cmd_list = ['fake', 'cmd'] ssh = mock.Mock() ssh.get_transport = mock.Mock() ssh.get_transport().is_active = mock.Mock(return_value=True) ssh_pool = mock.Mock() ssh_pool.create = mock.Mock(return_value=ssh) self.mock_object(utils, 'SSHPool', mock.Mock(return_value=ssh_pool)) self.mock_object(processutils, 'ssh_execute', mock.Mock(return_value=ssh_output)) result = self._driver._run_ssh(self.local_ip, cmd_list) utils.SSHPool.assert_called_once_with( self._driver.configuration.hdfs_namenode_ip, self._driver.configuration.hdfs_ssh_port, self._driver.configuration.ssh_conn_timeout, self._driver.configuration.hdfs_ssh_name, password=self._driver.configuration.hdfs_ssh_pw, privatekey=self._driver.configuration.hdfs_ssh_private_key, min_size=self._driver.configuration.ssh_min_pool_conn, max_size=self._driver.configuration.ssh_max_pool_conn) ssh_pool.create.assert_called_once_with() ssh.get_transport().is_active.assert_called_once_with() processutils.ssh_execute.assert_called_once_with( ssh, 'fake cmd', check_exit_code=False) self.assertEqual(ssh_output, result) def test__run_ssh_exception(self): cmd_list = ['fake', 'cmd'] ssh = mock.Mock() ssh.get_transport = mock.Mock() ssh.get_transport().is_active = mock.Mock(return_value=True) ssh_pool = mock.Mock() ssh_pool.create = mock.Mock(return_value=ssh) self.mock_object(utils, 'SSHPool', mock.Mock(return_value=ssh_pool)) self.mock_object(processutils, 'ssh_execute', mock.Mock(side_effect=Exception)) self.assertRaises(exception.HDFSException, self._driver._run_ssh, self.local_ip, cmd_list) utils.SSHPool.assert_called_once_with( self._driver.configuration.hdfs_namenode_ip, self._driver.configuration.hdfs_ssh_port, self._driver.configuration.ssh_conn_timeout, self._driver.configuration.hdfs_ssh_name, password=self._driver.configuration.hdfs_ssh_pw, privatekey=self._driver.configuration.hdfs_ssh_private_key, min_size=self._driver.configuration.ssh_min_pool_conn, max_size=self._driver.configuration.ssh_max_pool_conn) ssh_pool.create.assert_called_once_with() ssh.get_transport().is_active.assert_called_once_with() processutils.ssh_execute.assert_called_once_with( ssh, 'fake cmd', check_exit_code=False) manila-10.0.0/manila/tests/share/drivers/dummy.py0000664000175000017500000007201513656750227021765 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Dummy share driver for testing Manila APIs and other interfaces. This driver simulates support of: - Both available driver modes: DHSS=True/False - NFS and CIFS protocols - IP access for NFS shares and USER access for CIFS shares - CIFS shares in DHSS=True driver mode - Creation and deletion of share snapshots - Share replication (readable) - Share migration - Consistency groups - Resize of a share (extend/shrink) """ import functools import time from oslo_config import cfg from oslo_log import log from oslo_utils import timeutils from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import configuration from manila.share import driver from manila.share.manager import share_manager_opts # noqa from manila.share import utils as share_utils LOG = log.getLogger(__name__) dummy_opts = [ cfg.FloatOpt( "dummy_driver_default_driver_method_delay", help="Defines default time delay in seconds for each dummy driver " "method. To redefine some specific method delay use other " "'dummy_driver_driver_methods_delays' config opt. Optional.", default=2.0, min=0, ), cfg.DictOpt( "dummy_driver_driver_methods_delays", help="It is dictionary-like config option, that consists of " "driver method names as keys and integer/float values that are " "time delay in seconds. Optional.", default={ "ensure_share": "1.05", "create_share": "3.98", "get_pool": "0.5", "do_setup": "0.05", "_get_pools_info": "0.1", "_update_share_stats": "0.3", "create_replica": "3.99", "delete_replica": "2.98", "promote_replica": "0.75", "update_replica_state": "0.85", "create_replicated_snapshot": "4.15", "delete_replicated_snapshot": "3.16", "update_replicated_snapshot": "1.17", "migration_start": 1.01, "migration_continue": 1.02, # it will be called 2 times "migration_complete": 1.03, "migration_cancel": 1.04, "migration_get_progress": 1.05, "migration_check_compatibility": 0.05, }, ), ] CONF = cfg.CONF def slow_me_down(f): @functools.wraps(f) def wrapped_func(self, *args, **kwargs): sleep_time = self.configuration.safe_get( "dummy_driver_driver_methods_delays").get( f.__name__, self.configuration.safe_get( "dummy_driver_default_driver_method_delay") ) time.sleep(float(sleep_time)) return f(self, *args, **kwargs) return wrapped_func def get_backend_configuration(backend_name): config_stanzas = CONF.list_all_sections() if backend_name not in config_stanzas: msg = _("Could not find backend stanza %(backend_name)s in " "configuration which is required for share replication and " "migration. 
Available stanzas are %(stanzas)s") params = { "stanzas": config_stanzas, "backend_name": backend_name, } raise exception.BadConfigurationException(reason=msg % params) config = configuration.Configuration( driver.share_opts, config_group=backend_name) config.append_config_values(dummy_opts) config.append_config_values(share_manager_opts) config.append_config_values(driver.ssh_opts) return config class DummyDriver(driver.ShareDriver): """Dummy share driver that implements all share driver interfaces.""" def __init__(self, *args, **kwargs): """Do initialization.""" super(DummyDriver, self).__init__( [False, True], *args, config_opts=[dummy_opts], **kwargs) self._verify_configuration() self.private_storage = kwargs.get('private_storage') self.backend_name = self.configuration.safe_get( "share_backend_name") or "DummyDriver" self.migration_progress = {} def _verify_configuration(self): allowed_driver_methods = [m for m in dir(self) if m[0] != '_'] allowed_driver_methods.extend([ "_setup_server", "_teardown_server", "_get_pools_info", "_update_share_stats", ]) disallowed_driver_methods = ( "get_admin_network_allocations_number", "get_network_allocations_number", "get_share_server_pools", ) for k, v in self.configuration.safe_get( "dummy_driver_driver_methods_delays").items(): if k not in allowed_driver_methods: raise exception.BadConfigurationException(reason=( "Dummy driver does not have '%s' method." % k )) elif k in disallowed_driver_methods: raise exception.BadConfigurationException(reason=( "Method '%s' does not support delaying." % k )) try: float(v) except (TypeError, ValueError): raise exception.BadConfigurationException(reason=( "Wrong value (%(v)s) for '%(k)s' dummy driver method time " "delay is set in 'dummy_driver_driver_methods_delays' " "config option." 
% {"k": k, "v": v} )) def _get_share_name(self, share): return "share_%(s_id)s_%(si_id)s" % { "s_id": share["share_id"].replace("-", "_"), "si_id": share["id"].replace("-", "_")} def _get_snapshot_name(self, snapshot): return "snapshot_%(s_id)s_%(si_id)s" % { "s_id": snapshot["snapshot_id"].replace("-", "_"), "si_id": snapshot["id"].replace("-", "_")} def _generate_export_locations(self, mountpoint, share_server=None): details = share_server["backend_details"] if share_server else { "primary_public_ip": "10.0.0.10", "secondary_public_ip": "10.0.0.20", "service_ip": "11.0.0.11", } return [ { "path": "%(ip)s:%(mp)s" % {"ip": ip, "mp": mountpoint}, "metadata": { "preferred": preferred, }, "is_admin_only": is_admin_only, } for ip, is_admin_only, preferred in ( (details["primary_public_ip"], False, True), (details["secondary_public_ip"], False, False), (details["service_ip"], True, False)) ] def _create_share(self, share, share_server=None): share_proto = share["share_proto"] if share_proto not in ("NFS", "CIFS"): msg = _("Unsupported share protocol provided - %s.") % share_proto raise exception.InvalidShareAccess(reason=msg) share_name = self._get_share_name(share) mountpoint = "/path/to/fake/share/%s" % share_name self.private_storage.update( share["id"], { "fake_provider_share_name": share_name, "fake_provider_location": mountpoint, } ) return self._generate_export_locations( mountpoint, share_server=share_server) @slow_me_down def create_share(self, context, share, share_server=None): """Is called to create share.""" return self._create_share(share, share_server=share_server) @slow_me_down def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create share from snapshot.""" export_locations = self._create_share(share, share_server=share_server) return { 'export_locations': export_locations, 'status': constants.STATUS_AVAILABLE } def _create_snapshot(self, snapshot, share_server=None): snapshot_name = self._get_snapshot_name(snapshot) mountpoint = "/path/to/fake/snapshot/%s" % snapshot_name self.private_storage.update( snapshot["id"], { "fake_provider_snapshot_name": snapshot_name, "fake_provider_location": mountpoint, } ) return { 'fake_key1': 'fake_value1', 'fake_key2': 'fake_value2', 'fake_key3': 'fake_value3', "provider_location": mountpoint, "export_locations": self._generate_export_locations( mountpoint, share_server=share_server) } @slow_me_down def create_snapshot(self, context, snapshot, share_server=None): """Is called to create snapshot.""" return self._create_snapshot(snapshot, share_server) @slow_me_down def delete_share(self, context, share, share_server=None): """Is called to remove share.""" self.private_storage.delete(share["id"]) @slow_me_down def delete_snapshot(self, context, snapshot, share_server=None): """Is called to remove snapshot.""" LOG.debug('Deleting snapshot with following data: %s', snapshot) self.private_storage.delete(snapshot["id"]) @slow_me_down def get_pool(self, share): """Return pool name where the share resides on.""" pool_name = share_utils.extract_host(share["host"], level="pool") return pool_name @slow_me_down def ensure_share(self, context, share, share_server=None): """Invoked to ensure that share is exported.""" @slow_me_down def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share.""" for rule in add_rules + access_rules: share_proto = share["share_proto"].lower() access_type = 
rule["access_type"].lower() if not ( (share_proto == "nfs" and access_type == "ip") or (share_proto == "cifs" and access_type == "user")): msg = _("Unsupported '%(access_type)s' access type provided " "for '%(share_proto)s' share protocol.") % { "access_type": access_type, "share_proto": share_proto} raise exception.InvalidShareAccess(reason=msg) @slow_me_down def snapshot_update_access(self, context, snapshot, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given snapshot.""" self.update_access(context, snapshot['share'], access_rules, add_rules, delete_rules, share_server) @slow_me_down def do_setup(self, context): """Any initialization the share driver does while starting.""" @slow_me_down def manage_existing(self, share, driver_options): """Brings an existing share under Manila management.""" new_export = share['export_location'] old_share_id = self._get_share_id_from_export(new_export) old_export = self.private_storage.get( old_share_id, key='export_location') if old_export.split(":/")[-1] == new_export.split(":/")[-1]: result = {"size": 1, "export_locations": self._create_share(share)} self.private_storage.delete(old_share_id) return result else: msg = ("Invalid export specified, existing share %s" " could not be found" % old_share_id) raise exception.ShareBackendException(msg=msg) @slow_me_down def manage_existing_with_server( self, share, driver_options, share_server=None): return self.manage_existing(share, driver_options) def _get_share_id_from_export(self, export_location): values = export_location.split('share_') if len(values) > 1: return values[1][37:].replace("_", "-") else: return export_location @slow_me_down def unmanage(self, share): """Removes the specified share from Manila management.""" self.private_storage.update( share['id'], {'export_location': share['export_location']}) @slow_me_down def unmanage_with_server(self, share, share_server=None): self.unmanage(share) @slow_me_down def manage_existing_snapshot_with_server(self, snapshot, driver_options, share_server=None): return self.manage_existing_snapshot(snapshot, driver_options) @slow_me_down def manage_existing_snapshot(self, snapshot, driver_options): """Brings an existing snapshot under Manila management.""" old_snap_id = self._get_snap_id_from_provider_location( snapshot['provider_location']) old_provider_location = self.private_storage.get( old_snap_id, key='provider_location') if old_provider_location == snapshot['provider_location']: self._create_snapshot(snapshot) self.private_storage.delete(old_snap_id) return {"size": 1, "provider_location": snapshot["provider_location"]} else: msg = ("Invalid provider location specified, existing snapshot %s" " could not be found" % old_snap_id) raise exception.ShareBackendException(msg=msg) def _get_snap_id_from_provider_location(self, provider_location): values = provider_location.split('snapshot_') if len(values) > 1: return values[1][37:].replace("_", "-") else: return provider_location @slow_me_down def unmanage_snapshot(self, snapshot): """Removes the specified snapshot from Manila management.""" self.private_storage.update( snapshot['id'], {'provider_location': snapshot['provider_location']}) @slow_me_down def unmanage_snapshot_with_server(self, snapshot, share_server=None): self.unmanage_snapshot(snapshot) @slow_me_down def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server=None): """Reverts a share (in place) to the specified snapshot.""" @slow_me_down def extend_share(self, 
share, new_size, share_server=None): """Extends size of existing share.""" @slow_me_down def shrink_share(self, share, new_size, share_server=None): """Shrinks size of existing share.""" def get_network_allocations_number(self): """Returns number of network allocations for creating VIFs.""" return 2 def get_admin_network_allocations_number(self): return 1 @slow_me_down def _setup_server(self, network_info, metadata=None): """Sets up and configures share server with given network parameters. Redefine it within share driver when it is going to handle share servers. """ server_details = { "primary_public_ip": network_info[ "network_allocations"][0]["ip_address"], "secondary_public_ip": network_info[ "network_allocations"][1]["ip_address"], "service_ip": network_info[ "admin_network_allocations"][0]["ip_address"], "username": "fake_username", "server_id": network_info['server_id'] } return server_details @slow_me_down def _teardown_server(self, server_details, security_services=None): """Tears down share server.""" @slow_me_down def _get_pools_info(self): pools = [{ "pool_name": "fake_pool_for_%s" % self.backend_name, "total_capacity_gb": 1230.0, "free_capacity_gb": 1210.0, "reserved_percentage": self.configuration.reserved_share_percentage }] if self.configuration.replication_domain: pools[0]["replication_type"] = "readable" return pools @slow_me_down def _update_share_stats(self, data=None): """Retrieve stats info from share group.""" data = { "share_backend_name": self.backend_name, "storage_protocol": "NFS_CIFS", "reserved_percentage": self.configuration.reserved_share_percentage, "snapshot_support": True, "create_share_from_snapshot_support": True, "revert_to_snapshot_support": True, "mount_snapshot_support": True, "driver_name": "Dummy", "pools": self._get_pools_info(), "share_group_stats": { "consistent_snapshot_support": "pool", } } if self.configuration.replication_domain: data["replication_type"] = "readable" super(DummyDriver, self)._update_share_stats(data) def get_share_server_pools(self, share_server): """Return list of pools related to a particular share server.""" return [] @slow_me_down def create_consistency_group(self, context, cg_dict, share_server=None): """Create a consistency group.""" LOG.debug( "Successfully created dummy Consistency Group with ID: %s.", cg_dict["id"]) @slow_me_down def delete_consistency_group(self, context, cg_dict, share_server=None): """Delete a consistency group.""" LOG.debug( "Successfully deleted dummy consistency group with ID %s.", cg_dict["id"]) @slow_me_down def create_cgsnapshot(self, context, snap_dict, share_server=None): """Create a consistency group snapshot.""" LOG.debug("Successfully created CG snapshot %s.", snap_dict["id"]) return None, None @slow_me_down def delete_cgsnapshot(self, context, snap_dict, share_server=None): """Delete a consistency group snapshot.""" LOG.debug("Successfully deleted CG snapshot %s.", snap_dict["id"]) return None, None @slow_me_down def create_consistency_group_from_cgsnapshot( self, context, cg_dict, cgsnapshot_dict, share_server=None): """Create a consistency group from a cgsnapshot.""" LOG.debug( ("Successfully created dummy Consistency Group (%(cg_id)s) " "from CG snapshot (%(cg_snap_id)s)."), {"cg_id": cg_dict["id"], "cg_snap_id": cgsnapshot_dict["id"]}) return None, [] @slow_me_down def create_replica(self, context, replica_list, new_replica, access_rules, replica_snapshots, share_server=None): """Replicate the active replica to a new replica on this backend.""" replica_name = 
self._get_share_name(new_replica) mountpoint = "/path/to/fake/share/%s" % replica_name self.private_storage.update( new_replica["id"], { "fake_provider_replica_name": replica_name, "fake_provider_location": mountpoint, } ) return { "export_locations": self._generate_export_locations( mountpoint, share_server=share_server), "replica_state": constants.REPLICA_STATE_IN_SYNC, "access_rules_status": constants.STATUS_ACTIVE, } @slow_me_down def delete_replica(self, context, replica_list, replica_snapshots, replica, share_server=None): """Delete a replica.""" self.private_storage.delete(replica["id"]) @slow_me_down def promote_replica(self, context, replica_list, replica, access_rules, share_server=None): """Promote a replica to 'active' replica state.""" return_replica_list = [] for r in replica_list: if r["id"] == replica["id"]: replica_state = constants.REPLICA_STATE_ACTIVE else: replica_state = constants.REPLICA_STATE_IN_SYNC return_replica_list.append( {"id": r["id"], "replica_state": replica_state}) return return_replica_list @slow_me_down def update_replica_state(self, context, replica_list, replica, access_rules, replica_snapshots, share_server=None): """Update the replica_state of a replica.""" return constants.REPLICA_STATE_IN_SYNC @slow_me_down def create_replicated_snapshot(self, context, replica_list, replica_snapshots, share_server=None): """Create a snapshot on active instance and update across the replicas. """ return_replica_snapshots = [] for r in replica_snapshots: return_replica_snapshots.append( {"id": r["id"], "status": constants.STATUS_AVAILABLE}) return return_replica_snapshots @slow_me_down def revert_to_replicated_snapshot(self, context, active_replica, replica_list, active_replica_snapshot, replica_snapshots, share_access_rules, snapshot_access_rules, share_server=None): """Reverts a replicated share (in place) to the specified snapshot.""" @slow_me_down def delete_replicated_snapshot(self, context, replica_list, replica_snapshots, share_server=None): """Delete a snapshot by deleting its instances across the replicas.""" return_replica_snapshots = [] for r in replica_snapshots: return_replica_snapshots.append( {"id": r["id"], "status": constants.STATUS_DELETED}) return return_replica_snapshots @slow_me_down def update_replicated_snapshot(self, context, replica_list, share_replica, replica_snapshots, replica_snapshot, share_server=None): """Update the status of a snapshot instance that lives on a replica.""" return { "id": replica_snapshot["id"], "status": constants.STATUS_AVAILABLE} @slow_me_down def migration_check_compatibility( self, context, source_share, destination_share, share_server=None, destination_share_server=None): """Is called to test compatibility with destination backend.""" backend_name = share_utils.extract_host( destination_share['host'], level='backend_name') config = get_backend_configuration(backend_name) compatible = 'Dummy' in config.share_driver return { 'compatible': compatible, 'writable': compatible, 'preserve_metadata': compatible, 'nondisruptive': False, 'preserve_snapshots': compatible, } @slow_me_down def migration_start( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Is called to perform 1st phase of driver migration of a given share. 
""" LOG.debug( "Migration of dummy share with ID '%s' has been started.", source_share["id"]) self.migration_progress[source_share['share_id']] = 0 @slow_me_down def migration_continue( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): if source_share["id"] not in self.migration_progress: self.migration_progress[source_share["id"]] = 0 self.migration_progress[source_share["id"]] += 50 LOG.debug( "Migration of dummy share with ID '%s' is continuing, %s.", source_share["id"], self.migration_progress[source_share["id"]]) return self.migration_progress[source_share["id"]] == 100 @slow_me_down def migration_complete( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Is called to perform 2nd phase of driver migration of a given share. """ snapshot_updates = {} for src_snap_ins, dest_snap_ins in snapshot_mappings.items(): snapshot_updates[dest_snap_ins['id']] = self._create_snapshot( dest_snap_ins) return { 'snapshot_updates': snapshot_updates, 'export_locations': self._do_migration( source_share, destination_share, share_server) } def _do_migration(self, source_share_ref, dest_share_ref, share_server): share_name = self._get_share_name(dest_share_ref) mountpoint = "/path/to/fake/share/%s" % share_name self.private_storage.delete(source_share_ref["id"]) self.private_storage.update( dest_share_ref["id"], { "fake_provider_share_name": share_name, "fake_provider_location": mountpoint, } ) LOG.debug( "Migration of dummy share with ID '%s' has been completed.", source_share_ref["id"]) self.migration_progress.pop(source_share_ref["id"], None) return self._generate_export_locations( mountpoint, share_server=share_server) @slow_me_down def migration_cancel( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Is called to cancel driver migration.""" LOG.debug( "Migration of dummy share with ID '%s' has been canceled.", source_share["id"]) self.migration_progress.pop(source_share["id"], None) @slow_me_down def migration_get_progress( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Is called to get migration progress.""" # Simulate migration progress. if source_share["id"] not in self.migration_progress: self.migration_progress[source_share["id"]] = 0 total_progress = self.migration_progress[source_share["id"]] LOG.debug("Progress of current dummy share migration " "with ID '%(id)s' is %(progress)s.", { "id": source_share["id"], "progress": total_progress }) return {"total_progress": total_progress} def update_share_usage_size(self, context, shares): share_updates = [] gathered_at = timeutils.utcnow() for s in shares: share_updates.append({'id': s['id'], 'used_size': 1, 'gathered_at': gathered_at}) return share_updates @slow_me_down def get_share_server_network_info( self, context, share_server, identifier, driver_options): try: server_details = self.private_storage.get(identifier) except Exception: msg = ("Unable to find share server %s in " "private storage." 
% identifier) raise exception.ShareBackendException(msg=msg) return [server_details['primary_public_ip'], server_details['secondary_public_ip'], server_details['service_ip']] @slow_me_down def manage_server(self, context, share_server, identifier, driver_options): server_details = self.private_storage.get(identifier) self.private_storage.delete(identifier) return identifier, server_details def unmanage_server(self, server_details, security_services=None): server_details = server_details or {} if not server_details or 'server_id' not in server_details: # This share server doesn't have any network details. Since it's # just being cleaned up, we'll log a warning and return without # errors. LOG.warning("Share server does not have network information. " "It is being unmanaged, but cannot be re-managed " "without first creating network allocations in this " "driver's private storage.") return self.private_storage.update(server_details['server_id'], server_details) def get_share_status(self, share, share_server=None): return { 'status': constants.STATUS_AVAILABLE, 'export_locations': self.private_storage.get(share['id'], key='export_location') } manila-10.0.0/manila/tests/share/drivers/nexenta/0000775000175000017500000000000013656750362021715 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/nexenta/__init__.py0000664000175000017500000000000013656750227024014 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/nexenta/ns5/0000775000175000017500000000000013656750362022422 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/nexenta/ns5/__init__.py0000664000175000017500000000000013656750227024521 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/nexenta/ns5/test_jsonrpc.py0000664000175000017500000012437213656750227025522 0ustar zuulzuul00000000000000# Copyright 2019 Nexenta by DDN, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
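# A minimal usage sketch of the REST helper exercised by these tests,
# inferred from the FakeNefProxy fixture and the test cases below; treat it
# as an assumption about typical usage rather than documented API:
#
#     proxy = FakeNefProxy()                      # stand-in for jsonrpc.NefProxy
#     request = jsonrpc.NefRequest(proxy, 'get')  # one instance per HTTP verb
#     result = request('parent/child', {'key': 'value'})
#
# The response hook re-authenticates on 401, fails over to another host on
# 404 or connection timeouts, follows the 'monitor' link for asynchronous
# 202 jobs, and walks 'next' links to collect paginated 'data' responses.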
""" Unit tests for NexentaStor 5 REST API helper """ import copy import hashlib import json import posixpath from unittest import mock import uuid import requests import six from manila.share import configuration as conf from manila.share.drivers.nexenta.ns5 import jsonrpc from manila import test class FakeNefProxy(object): def __init__(self): self.scheme = 'https' self.port = 8443 self.hosts = ['1.1.1.1', '2.2.2.2'] self.host = self.hosts[0] self.root = 'pool/share' self.username = 'username' self.password = 'password' self.retries = 3 self.timeout = 5 self.session = mock.Mock() self.session.headers = {} def __getattr__(self, name): pass def delay(self, interval): pass def delete_bearer(self): pass def update_lock(self): pass def update_token(self, token): pass def update_host(self, host): pass def url(self, path): return '%s://%s:%s/%s' % (self.scheme, self.host, self.port, path) class TestNefException(test.TestCase): def test_message(self): message = 'test message 1' result = jsonrpc.NefException(message) self.assertIn(message, result.msg) def test_message_kwargs(self): code = 'EAGAIN' message = 'test message 2' result = jsonrpc.NefException(message, code=code) self.assertEqual(code, result.code) self.assertIn(message, result.msg) def test_no_message_kwargs(self): code = 'ESRCH' message = 'test message 3' result = jsonrpc.NefException(None, code=code, message=message) self.assertEqual(code, result.code) self.assertIn(message, result.msg) def test_message_plus_kwargs(self): code = 'ENODEV' message1 = 'test message 4' message2 = 'test message 5' result = jsonrpc.NefException(message1, code=code, message=message2) self.assertEqual(code, result.code) self.assertIn(message2, result.msg) def test_dict(self): code = 'ENOENT' message = 'test message 4' result = jsonrpc.NefException({'code': code, 'message': message}) self.assertEqual(code, result.code) self.assertIn(message, result.msg) def test_kwargs(self): code = 'EPERM' message = 'test message 5' result = jsonrpc.NefException(code=code, message=message) self.assertEqual(code, result.code) self.assertIn(message, result.msg) def test_dict_kwargs(self): code = 'EINVAL' message = 'test message 6' result = jsonrpc.NefException({'code': code}, message=message) self.assertEqual(code, result.code) self.assertIn(message, result.msg) def test_defaults(self): code = 'EBADMSG' message = 'NexentaError' result = jsonrpc.NefException() self.assertEqual(code, result.code) self.assertIn(message, result.msg) class TestNefRequest(test.TestCase): def setUp(self): super(TestNefRequest, self).setUp() self.proxy = FakeNefProxy() def fake_response(self, method, path, payload, code, content): request = requests.PreparedRequest() request.method = method request.url = self.proxy.url(path) request.headers = {'Content-Type': 'application/json'} request.body = None if method in ['get', 'delete']: request.params = payload elif method in ['put', 'post']: request.data = json.dumps(payload) response = requests.Response() response.request = request response.status_code = code response._content = json.dumps(content) if content else '' return response def test___call___invalid_method(self): method = 'unsupported' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' self.assertRaises(jsonrpc.NefException, instance, path) def test___call___none_path(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) self.assertRaises(jsonrpc.NefException, instance, None) def test___call___empty_path(self): method = 'get' instance = 
jsonrpc.NefRequest(self.proxy, method) self.assertRaises(jsonrpc.NefException, instance, '') @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test___call___get(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {} content = {'name': 'snapshot'} response = self.fake_response(method, path, payload, 200, content) request.return_value = response result = instance(path, payload) request.assert_called_with(method, path) self.assertEqual(content, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test___call___get_payload(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'name': 'snapshot'} response = self.fake_response(method, path, payload, 200, content) request.return_value = response result = instance(path, payload) params = {'params': payload} request.assert_called_with(method, path, **params) self.assertEqual(content, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test___call___get_data_payload(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} data = [ { 'name': 'fs1', 'path': 'pool/fs1' }, { 'name': 'fs2', 'path': 'pool/fs2' } ] content = {'data': data} response = self.fake_response(method, path, payload, 200, content) request.return_value = response instance.data = data result = instance(path, payload) params = {'params': payload} request.assert_called_with(method, path, **params) self.assertEqual(data, result) def test___call___get_invalid_payload(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = 'bad data' self.assertRaises(jsonrpc.NefException, instance, path, payload) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test___call___delete(self, request): method = 'delete' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {} content = {'name': 'snapshot'} response = self.fake_response(method, path, payload, 200, content) request.return_value = response result = instance(path, payload) request.assert_called_with(method, path) self.assertEqual(content, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test___call___delete_payload(self, request): method = 'delete' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'name': 'snapshot'} response = self.fake_response(method, path, payload, 200, content) request.return_value = response result = instance(path, payload) params = {'params': payload} request.assert_called_with(method, path, **params) self.assertEqual(content, result) def test___call___delete_invalid_payload(self): method = 'delete' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = 'bad data' self.assertRaises(jsonrpc.NefException, instance, path, payload) @mock.patch('manila.share.drivers.nexenta.ns5.' 
'jsonrpc.NefRequest.request') def test___call___post(self, request): method = 'post' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {} content = None response = self.fake_response(method, path, payload, 200, content) request.return_value = response result = instance(path, payload) request.assert_called_with(method, path) self.assertEqual(content, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test___call___post_payload(self, request): method = 'post' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = None response = self.fake_response(method, path, payload, 200, content) request.return_value = response result = instance(path, payload) params = {'data': json.dumps(payload)} request.assert_called_with(method, path, **params) self.assertEqual(content, result) def test___call___post_invalid_payload(self): method = 'post' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = 'bad data' self.assertRaises(jsonrpc.NefException, instance, path, payload) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test___call___put(self, request): method = 'put' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {} content = None response = self.fake_response(method, path, payload, 200, content) request.return_value = response result = instance(path, payload) request.assert_called_with(method, path) self.assertEqual(content, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test___call___put_payload(self, request): method = 'put' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = None response = self.fake_response(method, path, payload, 200, content) request.return_value = response result = instance(path, payload) params = {'data': json.dumps(payload)} request.assert_called_with(method, path, **params) self.assertEqual(content, result) def test___call___put_invalid_payload(self): method = 'put' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = 'bad data' self.assertRaises(jsonrpc.NefException, instance, path, payload) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test___call___non_ok_response(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'code': 'ENOENT', 'message': 'error'} response = self.fake_response(method, path, payload, 500, content) request.return_value = response self.assertRaises(jsonrpc.NefException, instance, path, payload) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.failover') @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test___call___request_after_failover(self, request, failover): method = 'post' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = None response = self.fake_response(method, path, payload, 200, content) request.side_effect = [requests.exceptions.Timeout, response] failover.return_value = True result = instance(path, payload) params = {'data': json.dumps(payload)} request.assert_called_with(method, path, **params) self.assertEqual(content, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.failover') @mock.patch('manila.share.drivers.nexenta.ns5.' 
'jsonrpc.NefRequest.request') def test___call___request_failover_error(self, request, failover): method = 'put' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} request.side_effect = requests.exceptions.Timeout failover.return_value = False self.assertRaises(requests.exceptions.Timeout, instance, path, payload) def test_hook_default(self): method = 'post' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'name': 'dataset'} response = self.fake_response(method, path, payload, 303, content) result = instance.hook(response) self.assertEqual(response, result) def test_hook_200_empty(self): method = 'delete' instance = jsonrpc.NefRequest(self.proxy, method) path = 'storage/filesystems' payload = {'force': True} content = None response = self.fake_response(method, path, payload, 200, content) result = instance.hook(response) self.assertEqual(response, result) def test_hook_201_empty(self): method = 'post' instance = jsonrpc.NefRequest(self.proxy, method) path = 'storage/snapshots' payload = {'path': 'parent/child@name'} content = None response = self.fake_response(method, path, payload, 201, content) result = instance.hook(response) self.assertEqual(response, result) def test_hook_500_empty(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'storage/pools' payload = {'poolName': 'tank'} content = None response = self.fake_response(method, path, payload, 500, content) self.assertRaises(jsonrpc.NefException, instance.hook, response) def test_hook_200_bad_content(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'storage/volumes' payload = {'name': 'test'} content = None response = self.fake_response(method, path, payload, 200, content) response._content = 'bad_content' self.assertRaises(jsonrpc.NefException, instance.hook, response) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') @mock.patch('manila.share.drivers.nexenta.ns5.' 
'jsonrpc.NefRequest.auth') def test_hook_401(self, auth, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'code': 'EAUTH'} response = self.fake_response(method, path, payload, 401, content) auth.return_value = True content2 = {'name': 'test'} response2 = self.fake_response(method, path, payload, 200, content2) request.return_value = response2 self.proxy.session.send.return_value = content2 result = instance.hook(response) self.assertEqual(content2, result) def test_hook_401_max_retries(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) instance.stat[401] = self.proxy.retries path = 'parent/child' payload = {'key': 'value'} content = {'code': 'EAUTH'} response = self.fake_response(method, path, payload, 401, content) self.assertRaises(jsonrpc.NefException, instance.hook, response) def test_hook_404_nested(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) instance.lock = True path = 'parent/child' payload = {'key': 'value'} content = {'code': 'ENOENT'} response = self.fake_response(method, path, payload, 404, content) result = instance.hook(response) self.assertEqual(response, result) def test_hook_404_max_retries(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) instance.stat[404] = self.proxy.retries path = 'parent/child' payload = {'key': 'value'} content = {'code': 'ENOENT'} response = self.fake_response(method, path, payload, 404, content) self.assertRaises(jsonrpc.NefException, instance.hook, response) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.failover') def test_hook_404_failover_error(self, failover): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'code': 'ENOENT'} response = self.fake_response(method, path, payload, 404, content) failover.return_value = False result = instance.hook(response) self.assertEqual(response, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.failover') def test_hook_404_failover_ok(self, failover, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'code': 'ENOENT'} response = self.fake_response(method, path, payload, 404, content) failover.return_value = True content2 = {'name': 'test'} response2 = self.fake_response(method, path, payload, 200, content2) request.return_value = response2 result = instance.hook(response) self.assertEqual(response2, result) def test_hook_500_permanent(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'code': 'EINVAL'} response = self.fake_response(method, path, payload, 500, content) self.assertRaises(jsonrpc.NefException, instance.hook, response) def test_hook_500_busy_max_retries(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) instance.stat[500] = self.proxy.retries path = 'parent/child' payload = {'key': 'value'} content = {'code': 'EBUSY'} response = self.fake_response(method, path, payload, 500, content) self.assertRaises(jsonrpc.NefException, instance.hook, response) @mock.patch('manila.share.drivers.nexenta.ns5.' 
'jsonrpc.NefRequest.request') def test_hook_500_busy_ok(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'code': 'EBUSY'} response = self.fake_response(method, path, payload, 500, content) content2 = {'name': 'test'} response2 = self.fake_response(method, path, payload, 200, content2) request.return_value = response2 result = instance.hook(response) self.assertEqual(response2, result) def test_hook_201_no_monitor(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'monitor': 'unknown'} response = self.fake_response(method, path, payload, 202, content) self.assertRaises(jsonrpc.NefException, instance.hook, response) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test_hook_201_ok(self, request): method = 'delete' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = { 'links': [{ 'rel': 'monitor', 'href': '/jobStatus/jobID' }] } response = self.fake_response(method, path, payload, 202, content) content2 = None response2 = self.fake_response(method, path, payload, 201, content2) request.return_value = response2 result = instance.hook(response) self.assertEqual(response2, result) def test_200_no_data(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'name': 'test'} response = self.fake_response(method, path, payload, 200, content) result = instance.hook(response) self.assertEqual(response, result) def test_200_pagination_end(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = {'data': 'value'} response = self.fake_response(method, path, payload, 200, content) result = instance.hook(response) self.assertEqual(response, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test_200_pagination_next(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} content = { 'data': [{ 'name': 'test' }], 'links': [{ 'rel': 'next', 'href': path }] } response = self.fake_response(method, path, payload, 200, content) response2 = self.fake_response(method, path, payload, 200, content) request.return_value = response2 result = instance.hook(response) self.assertEqual(response2, result) def test_request(self): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = 'parent/child' payload = {'key': 'value'} expected = {'name': 'dataset'} url = self.proxy.url(path) kwargs = payload.copy() kwargs['timeout'] = self.proxy.timeout kwargs['hooks'] = {'response': instance.hook} self.proxy.session.request.return_value = expected result = instance.request(method, path, **payload) self.proxy.session.request.assert_called_with(method, url, **kwargs) self.assertEqual(expected, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 
'jsonrpc.NefRequest.request') def test_auth(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) method = 'post' path = 'auth/login' payload = { 'data': json.dumps({ 'username': self.proxy.username, 'password': self.proxy.password }) } content = {'token': 'test'} response = self.fake_response(method, path, payload, 200, content) request.return_value = response instance.auth() request.assert_called_once_with(method, path, **payload) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test_auth_error(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) method = 'post' path = 'auth/login' payload = { 'data': json.dumps({ 'username': self.proxy.username, 'password': self.proxy.password }) } content = {'data': 'noauth'} response = self.fake_response(method, path, payload, 200, content) request.return_value = response self.assertRaises(jsonrpc.NefException, instance.auth) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test_failover(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = self.proxy.root payload = {} content = {'path': path} response = self.fake_response(method, path, payload, 200, content) request.return_value = response result = instance.failover() request.assert_called_once_with(method, path) expected = True self.assertEqual(expected, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test_failover_timeout(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = self.proxy.root payload = {} content = {'path': path} response = self.fake_response(method, path, payload, 200, content) request.side_effect = [requests.exceptions.Timeout, response] result = instance.failover() request.assert_called_once_with(method, path) expected = False self.assertEqual(expected, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefRequest.request') def test_failover_404(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = self.proxy.root payload = {} content = {} response = self.fake_response(method, path, payload, 404, content) request.side_effect = [response, response] result = instance.failover() request.assert_called_once_with(method, path) expected = False self.assertEqual(expected, result) @mock.patch('manila.share.drivers.nexenta.ns5.' 
'jsonrpc.NefRequest.request') def test_failover_error(self, request): method = 'get' instance = jsonrpc.NefRequest(self.proxy, method) path = self.proxy.root request.side_effect = [ requests.exceptions.Timeout, requests.exceptions.ConnectionError ] result = instance.failover() request.assert_called_with(method, path) expected = False self.assertEqual(expected, result) def test_getpath(self): method = 'get' rel = 'monitor' href = 'jobStatus/jobID' content = { 'links': [ [1, 2], 'bad link', { 'rel': 'next', 'href': href }, { 'rel': rel, 'href': href } ] } instance = jsonrpc.NefRequest(self.proxy, method) result = instance.getpath(content, rel) expected = href self.assertEqual(expected, result) def test_getpath_no_content(self): method = 'get' rel = 'next' content = None instance = jsonrpc.NefRequest(self.proxy, method) result = instance.getpath(content, rel) self.assertIsNone(result) def test_getpath_no_links(self): method = 'get' rel = 'next' content = {'a': 'b'} instance = jsonrpc.NefRequest(self.proxy, method) result = instance.getpath(content, rel) self.assertIsNone(result) def test_getpath_no_rel(self): method = 'get' rel = 'next' content = { 'links': [ { 'rel': 'monitor', 'href': '/jobs/jobID' } ] } instance = jsonrpc.NefRequest(self.proxy, method) result = instance.getpath(content, rel) self.assertIsNone(result) def test_getpath_no_href(self): method = 'get' rel = 'next' content = { 'links': [ { 'rel': rel } ] } instance = jsonrpc.NefRequest(self.proxy, method) result = instance.getpath(content, rel) self.assertIsNone(result) class TestNefCollections(test.TestCase): def setUp(self): super(TestNefCollections, self).setUp() self.proxy = mock.Mock() self.instance = jsonrpc.NefCollections(self.proxy) def test_path(self): path = 'path/to/item name + - & # $ = 0' result = self.instance.path(path) quoted_path = six.moves.urllib.parse.quote_plus(path) expected = posixpath.join(self.instance.root, quoted_path) self.assertEqual(expected, result) def test_get(self): name = 'parent/child' payload = {'key': 'value'} expected = {'name': 'dataset'} path = self.instance.path(name) self.proxy.get.return_value = expected result = self.instance.get(name, payload) self.proxy.get.assert_called_with(path, payload) self.assertEqual(expected, result) def test_set(self): name = 'parent/child' payload = {'key': 'value'} expected = None path = self.instance.path(name) self.proxy.put.return_value = expected result = self.instance.set(name, payload) self.proxy.put.assert_called_with(path, payload) self.assertIsNone(result) def test_list(self): payload = {'key': 'value'} expected = [{'name': 'dataset'}] self.proxy.get.return_value = expected result = self.instance.list(payload) self.proxy.get.assert_called_with(self.instance.root, payload) self.assertEqual(expected, result) def test_create(self): payload = {'key': 'value'} expected = None self.proxy.post.return_value = expected result = self.instance.create(payload) self.proxy.post.assert_called_with(self.instance.root, payload) self.assertIsNone(result) def test_create_exist(self): payload = {'key': 'value'} self.proxy.post.side_effect = jsonrpc.NefException(code='EEXIST') result = self.instance.create(payload) self.proxy.post.assert_called_with(self.instance.root, payload) self.assertIsNone(result) def test_create_error(self): payload = {'key': 'value'} self.proxy.post.side_effect = jsonrpc.NefException(code='EBUSY') self.assertRaises(jsonrpc.NefException, self.instance.create, payload) self.proxy.post.assert_called_with(self.instance.root, payload) def 
test_delete(self): name = 'parent/child' payload = {'key': 'value'} expected = None path = self.instance.path(name) self.proxy.delete.return_value = expected result = self.instance.delete(name, payload) self.proxy.delete.assert_called_with(path, payload) self.assertIsNone(result) def test_delete_not_found(self): name = 'parent/child' payload = {'key': 'value'} path = self.instance.path(name) self.proxy.delete.side_effect = jsonrpc.NefException(code='ENOENT') result = self.instance.delete(name, payload) self.proxy.delete.assert_called_with(path, payload) self.assertIsNone(result) def test_delete_error(self): name = 'parent/child' payload = {'key': 'value'} path = self.instance.path(name) self.proxy.delete.side_effect = jsonrpc.NefException(code='EINVAL') self.assertRaises(jsonrpc.NefException, self.instance.delete, name, payload) self.proxy.delete.assert_called_with(path, payload) class TestNefSettings(test.TestCase): def setUp(self): super(TestNefSettings, self).setUp() self.proxy = mock.Mock() self.instance = jsonrpc.NefSettings(self.proxy) def test_create(self): payload = {'key': 'value'} result = self.instance.create(payload) expected = NotImplemented self.assertEqual(expected, result) def test_delete(self): name = 'parent/child' payload = {'key': 'value'} result = self.instance.delete(name, payload) expected = NotImplemented self.assertEqual(expected, result) class TestNefDatasets(test.TestCase): def setUp(self): super(TestNefDatasets, self).setUp() self.proxy = mock.Mock() self.instance = jsonrpc.NefDatasets(self.proxy) def test_rename(self): name = 'parent/child' payload = {'key': 'value'} expected = None path = self.instance.path(name) path = posixpath.join(path, 'rename') self.proxy.post.return_value = expected result = self.instance.rename(name, payload) self.proxy.post.assert_called_with(path, payload) self.assertIsNone(result) class TestNefSnapshots(test.TestCase): def setUp(self): super(TestNefSnapshots, self).setUp() self.proxy = mock.Mock() self.instance = jsonrpc.NefSnapshots(self.proxy) def test_clone(self): name = 'parent/child' payload = {'key': 'value'} expected = None path = self.instance.path(name) path = posixpath.join(path, 'clone') self.proxy.post.return_value = expected result = self.instance.clone(name, payload) self.proxy.post.assert_called_with(path, payload) self.assertIsNone(result) class TestNefFilesystems(test.TestCase): def setUp(self): super(TestNefFilesystems, self).setUp() self.proxy = mock.Mock() self.instance = jsonrpc.NefFilesystems(self.proxy) def test_mount(self): name = 'parent/child' payload = {'key': 'value'} expected = None path = self.instance.path(name) path = posixpath.join(path, 'mount') self.proxy.post.return_value = expected result = self.instance.mount(name, payload) self.proxy.post.assert_called_with(path, payload) self.assertIsNone(result) def test_unmount(self): name = 'parent/child' payload = {'key': 'value'} expected = None path = self.instance.path(name) path = posixpath.join(path, 'unmount') self.proxy.post.return_value = expected result = self.instance.unmount(name, payload) self.proxy.post.assert_called_with(path, payload) self.assertIsNone(result) def test_acl(self): name = 'parent/child' payload = {'key': 'value'} expected = None path = self.instance.path(name) path = posixpath.join(path, 'acl') self.proxy.post.return_value = expected result = self.instance.acl(name, payload) self.proxy.post.assert_called_with(path, payload) self.assertIsNone(result) def test_promote(self): name = 'parent/child' payload = {'key': 'value'} 
expected = None path = self.instance.path(name) path = posixpath.join(path, 'promote') self.proxy.post.return_value = expected result = self.instance.promote(name, payload) self.proxy.post.assert_called_with(path, payload) self.assertIsNone(result) def test_rollback(self): name = 'parent/child' payload = {'key': 'value'} expected = None path = self.instance.path(name) path = posixpath.join(path, 'rollback') self.proxy.post.return_value = expected result = self.instance.rollback(name, payload) self.proxy.post.assert_called_with(path, payload) self.assertIsNone(result) class TestNefHpr(test.TestCase): def setUp(self): super(TestNefHpr, self).setUp() self.proxy = mock.Mock() self.instance = jsonrpc.NefHpr(self.proxy) def test_activate(self): payload = {'key': 'value'} expected = None path = posixpath.join(self.instance.root, 'activate') self.proxy.post.return_value = expected result = self.instance.activate(payload) self.proxy.post.assert_called_with(path, payload) self.assertIsNone(result) def test_start(self): name = 'parent/child' payload = {'key': 'value'} expected = None path = posixpath.join(self.instance.path(name), 'start') self.proxy.post.return_value = expected result = self.instance.start(name, payload) self.proxy.post.assert_called_with(path, payload) self.assertIsNone(result) class TestNefProxy(test.TestCase): def setUp(self): super(TestNefProxy, self).setUp() self.cfg = mock.Mock(spec=conf.Configuration) self.cfg.nexenta_use_https = True self.cfg.nexenta_ssl_cert_verify = True self.cfg.nexenta_user = 'user' self.cfg.nexenta_password = 'pass' self.cfg.nexenta_rest_addresses = ['1.1.1.1', '2.2.2.2'] self.cfg.nexenta_rest_port = 8443 self.cfg.nexenta_rest_backoff_factor = 1 self.cfg.nexenta_rest_retry_count = 3 self.cfg.nexenta_rest_connect_timeout = 1 self.cfg.nexenta_rest_read_timeout = 1 self.cfg.nexenta_nas_host = '3.3.3.3' self.cfg.nexenta_folder = 'pool/path/to/share' self.nef_mock = mock.Mock() self.mock_object(jsonrpc, 'NefRequest') self.proto = 'nfs' self.proxy = jsonrpc.NefProxy(self.proto, self.cfg.nexenta_folder, self.cfg) def test___init___http(self): proto = 'nfs' cfg = copy.copy(self.cfg) cfg.nexenta_use_https = False result = jsonrpc.NefProxy(proto, cfg.nexenta_folder, cfg) self.assertIsInstance(result, jsonrpc.NefProxy) def test___init___no_rest_port_http(self): proto = 'nfs' cfg = copy.copy(self.cfg) cfg.nexenta_rest_port = 0 cfg.nexenta_use_https = False result = jsonrpc.NefProxy(proto, cfg.nexenta_folder, cfg) self.assertIsInstance(result, jsonrpc.NefProxy) def test___init___no_rest_port_https(self): proto = 'nfs' cfg = copy.copy(self.cfg) cfg.nexenta_rest_port = 0 cfg.nexenta_use_https = True result = jsonrpc.NefProxy(proto, cfg.nexenta_folder, cfg) self.assertIsInstance(result, jsonrpc.NefProxy) def test___init___iscsi(self): proto = 'iscsi' cfg = copy.copy(self.cfg) result = jsonrpc.NefProxy(proto, cfg.nexenta_folder, cfg) self.assertIsInstance(result, jsonrpc.NefProxy) def test___init___nfs_no_rest_address(self): proto = 'nfs' cfg = copy.copy(self.cfg) cfg.nexenta_rest_addresses = '' result = jsonrpc.NefProxy(proto, cfg.nexenta_folder, cfg) self.assertIsInstance(result, jsonrpc.NefProxy) def test___init___iscsi_no_rest_address(self): proto = 'iscsi' cfg = copy.copy(self.cfg) cfg.nexenta_rest_addresses = '' cfg.nexenta_host = '4.4.4.4' result = jsonrpc.NefProxy(proto, cfg.nexenta_folder, cfg) self.assertIsInstance(result, jsonrpc.NefProxy) @mock.patch('requests.packages.urllib3.disable_warnings') def test___init___no_ssl_cert_verify(self, disable_warnings): 
proto = 'nfs' cfg = copy.copy(self.cfg) cfg.nexenta_ssl_cert_verify = False disable_warnings.return_value = None result = jsonrpc.NefProxy(proto, cfg.nexenta_folder, cfg) disable_warnings.assert_called() self.assertIsInstance(result, jsonrpc.NefProxy) def test_delete_bearer(self): self.assertIsNone(self.proxy.delete_bearer()) self.assertNotIn('Authorization', self.proxy.session.headers) self.proxy.session.headers['Authorization'] = 'Bearer token' self.assertIsNone(self.proxy.delete_bearer()) self.assertNotIn('Authorization', self.proxy.session.headers) def test_update_bearer(self): token = 'token' bearer = 'Bearer %s' % token self.assertNotIn('Authorization', self.proxy.session.headers) self.assertIsNone(self.proxy.update_bearer(token)) self.assertIn('Authorization', self.proxy.session.headers) self.assertEqual(self.proxy.session.headers['Authorization'], bearer) def test_update_token(self): token = 'token' bearer = 'Bearer %s' % token self.assertIsNone(self.proxy.update_token(token)) self.assertEqual(self.proxy.tokens[self.proxy.host], token) self.assertEqual(self.proxy.session.headers['Authorization'], bearer) def test_update_host(self): token = 'token' bearer = 'Bearer %s' % token host = self.cfg.nexenta_rest_addresses[0] self.proxy.tokens[host] = token self.assertIsNone(self.proxy.update_host(host)) self.assertEqual(self.proxy.session.headers['Authorization'], bearer) def test_skip_update_host(self): host = 'nonexistent' self.assertIsNone(self.proxy.update_host(host)) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefSettings.get') def test_update_lock(self, get_settings): guid = uuid.uuid4().hex settings = {'value': guid} get_settings.return_value = settings self.assertIsNone(self.proxy.update_lock()) path = '%s:%s' % (guid, self.proxy.path) if isinstance(path, six.text_type): path = path.encode('utf-8') expected = hashlib.md5(path).hexdigest() self.assertEqual(expected, self.proxy.lock) def test_url(self): path = '/path/to/api' result = self.proxy.url(path) expected = '%s://%s:%s%s' % (self.proxy.scheme, self.proxy.host, self.proxy.port, path) self.assertEqual(expected, result) @mock.patch('eventlet.greenthread.sleep') def test_delay(self, sleep): sleep.return_value = None for attempt in range(0, 10): expected = int(self.proxy.backoff_factor * (2 ** (attempt - 1))) self.assertIsNone(self.proxy.delay(attempt)) sleep.assert_called_with(expected) manila-10.0.0/manila/tests/share/drivers/nexenta/ns5/test_nexenta_nas.py0000664000175000017500000005314113656750227026342 0ustar zuulzuul00000000000000# Copyright 2019 Nexenta by DDN, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
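# A minimal sketch of how the NexentaStor 5 NAS driver under test is wired
# up, mirroring the setUp() below (an assumed usage pattern; the REST
# transport is mocked throughout these tests):
#
#     cfg = ...  # manila share Configuration with the nexenta_* options set
#     ctx = context.get_admin_context()
#     drv = nexenta_nas.NexentaNasDriver(configuration=cfg)
#     drv.do_setup(ctx)
#     locations = drv.create_share(ctx, {'share_id': 'uuid', 'size': 1,
#                                        'share_proto': 'NFS'})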
from unittest import mock import ddt from oslo_utils import units from manila import context from manila.share.drivers.nexenta.ns5 import jsonrpc from manila.share.drivers.nexenta.ns5 import nexenta_nas from manila import test RPC_PATH = 'manila.share.drivers.nexenta.ns5.jsonrpc' DRV_PATH = 'manila.share.drivers.nexenta.ns5.nexenta_nas.NexentaNasDriver' DRIVER_VERSION = '1.1' SHARE = {'share_id': 'uuid', 'size': 1, 'share_proto': 'NFS'} SHARE_PATH = 'pool1/nfs_share/share-uuid' SHARE2 = {'share_id': 'uuid2', 'size': 2, 'share_proto': 'NFS'} SHARE2_PATH = 'pool1/nfs_share/share-uuid2' SNAPSHOT = { 'snapshot_id': 'snap_id', 'share': SHARE, 'snapshot_path': '%s@%s' % (SHARE_PATH, 'snapshot-snap_id')} @ddt.ddt class TestNexentaNasDriver(test.TestCase): def setUp(self): def _safe_get(opt): return getattr(self.cfg, opt) self.cfg = mock.Mock() self.mock_object( self.cfg, 'safe_get', mock.Mock(side_effect=_safe_get)) super(TestNexentaNasDriver, self).setUp() self.cfg.nexenta_nas_host = '1.1.1.1' self.cfg.nexenta_rest_addresses = ['2.2.2.2'] self.ctx = context.get_admin_context() self.cfg.nexenta_rest_port = 8080 self.cfg.nexenta_rest_protocol = 'auto' self.cfg.nexenta_pool = 'pool1' self.cfg.nexenta_dataset_record_size = 131072 self.cfg.reserved_share_percentage = 0 self.cfg.nexenta_folder = 'nfs_share' self.cfg.nexenta_user = 'user' self.cfg.share_backend_name = 'NexentaStor5' self.cfg.nexenta_password = 'password' self.cfg.nexenta_thin_provisioning = False self.cfg.nexenta_mount_point_base = 'mnt' self.cfg.nexenta_rest_retry_count = 3 self.cfg.nexenta_share_name_prefix = 'share-' self.cfg.max_over_subscription_ratio = 20.0 self.cfg.enabled_share_protocols = 'NFS' self.cfg.nexenta_mount_point_base = '$state_path/mnt' self.cfg.nexenta_dataset_compression = 'on' self.cfg.network_config_group = 'DEFAULT' self.cfg.admin_network_config_group = ( 'fake_admin_network_config_group') self.cfg.driver_handles_share_servers = False self.cfg.safe_get = self.fake_safe_get self.nef_mock = mock.Mock() self.mock_object(jsonrpc, 'NefRequest') self.drv = nexenta_nas.NexentaNasDriver(configuration=self.cfg) self.drv.do_setup(self.ctx) def fake_safe_get(self, key): try: value = getattr(self.cfg, key) except AttributeError: value = None return value def test_backend_name(self): self.assertEqual('NexentaStor5', self.drv.share_backend_name) @mock.patch('%s._get_provisioned_capacity' % DRV_PATH) @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefServices.get') @mock.patch('manila.share.drivers.nexenta.ns5.' 'jsonrpc.NefFilesystems.set') @mock.patch('manila.share.drivers.nexenta.ns5.' 
'jsonrpc.NefFilesystems.get') def test_check_for_setup_error(self, get_filesystem, set_filesystem, get_service, prov_capacity): prov_capacity.return_value = 1 get_filesystem.return_value = { 'mountPoint': '/path/to/volume', 'nonBlockingMandatoryMode': False, 'smartCompression': False, 'isMounted': True } get_service.return_value = { 'state': 'online' } self.assertIsNone(self.drv.check_for_setup_error()) get_filesystem.assert_called_with(self.drv.root_path) set_filesystem.assert_not_called() get_service.assert_called_with('nfs') get_filesystem.return_value = { 'mountPoint': '/path/to/volume', 'nonBlockingMandatoryMode': True, 'smartCompression': True, 'isMounted': True } set_filesystem.return_value = {} payload = { 'nonBlockingMandatoryMode': False, 'smartCompression': False } self.assertIsNone(self.drv.check_for_setup_error()) get_filesystem.assert_called_with(self.drv.root_path) set_filesystem.assert_called_with(self.drv.root_path, payload) get_service.assert_called_with('nfs') get_filesystem.return_value = { 'mountPoint': '/path/to/volume', 'nonBlockingMandatoryMode': False, 'smartCompression': True, 'isMounted': True } payload = { 'smartCompression': False } set_filesystem.return_value = {} self.assertIsNone(self.drv.check_for_setup_error()) get_filesystem.assert_called_with(self.drv.root_path) set_filesystem.assert_called_with(self.drv.root_path, payload) get_service.assert_called_with('nfs') get_filesystem.return_value = { 'mountPoint': '/path/to/volume', 'nonBlockingMandatoryMode': True, 'smartCompression': False, 'isMounted': True } payload = { 'nonBlockingMandatoryMode': False } set_filesystem.return_value = {} self.assertIsNone(self.drv.check_for_setup_error()) get_filesystem.assert_called_with(self.drv.root_path) set_filesystem.assert_called_with(self.drv.root_path, payload) get_service.assert_called_with('nfs') get_filesystem.return_value = { 'mountPoint': 'none', 'nonBlockingMandatoryMode': False, 'smartCompression': False, 'isMounted': False } self.assertRaises(jsonrpc.NefException, self.drv.check_for_setup_error) get_filesystem.return_value = { 'mountPoint': '/path/to/volume', 'nonBlockingMandatoryMode': False, 'smartCompression': False, 'isMounted': False } self.assertRaises(jsonrpc.NefException, self.drv.check_for_setup_error) get_service.return_value = { 'state': 'online' } self.assertRaises(jsonrpc.NefException, self.drv.check_for_setup_error) @mock.patch('%s.NefFilesystems.get' % RPC_PATH) def test__get_provisioned_capacity(self, fs_get): fs_get.return_value = { 'path': 'pool1/nfs_share/123', 'referencedQuotaSize': 1 * units.Gi } self.drv._get_provisioned_capacity() self.assertEqual(1 * units.Gi, self.drv.provisioned_capacity) @mock.patch('%s._mount_filesystem' % DRV_PATH) @mock.patch('%s.NefFilesystems.create' % RPC_PATH) @mock.patch('%s.NefFilesystems.delete' % RPC_PATH) def test_create_share(self, delete_fs, create_fs, mount_fs): mount_path = '%s:/%s' % (self.cfg.nexenta_nas_host, SHARE_PATH) mount_fs.return_value = mount_path size = int(1 * units.Gi * 1.1) self.assertEqual( [{ 'path': mount_path, 'id': 'share-uuid' }], self.drv.create_share(self.ctx, SHARE)) payload = { 'recordSize': 131072, 'compressionMode': self.cfg.nexenta_dataset_compression, 'path': SHARE_PATH, 'referencedQuotaSize': size, 'nonBlockingMandatoryMode': False, 'referencedReservationSize': size } self.drv.nef.filesystems.create.assert_called_with(payload) mount_fs.side_effect = jsonrpc.NefException('some error') self.assertRaises(jsonrpc.NefException, self.drv.create_share, self.ctx, SHARE) 
delete_payload = {'force': True} self.drv.nef.filesystems.delete.assert_called_with( SHARE_PATH, delete_payload) @mock.patch('%s.NefFilesystems.promote' % RPC_PATH) @mock.patch('%s.NefSnapshots.get' % RPC_PATH) @mock.patch('%s.NefSnapshots.list' % RPC_PATH) @mock.patch('%s.NefFilesystems.delete' % RPC_PATH) def test_delete_share(self, fs_delete, snap_list, snap_get, fs_promote): delete_payload = {'force': True, 'snapshots': True} snapshots_payload = {'parent': SHARE_PATH, 'fields': 'path'} clones_payload = {'fields': 'clones,creationTxg'} clone_path = '%s:/%s' % (self.cfg.nexenta_nas_host, 'path_to_fs') fs_delete.side_effect = [ jsonrpc.NefException({ 'message': 'some_error', 'code': 'EEXIST'}), None] snap_list.return_value = [{'path': '%s@snap1' % SHARE_PATH}] snap_get.return_value = {'clones': [clone_path], 'creationTxg': 1} self.assertIsNone(self.drv.delete_share(self.ctx, SHARE)) fs_delete.assert_called_with(SHARE_PATH, delete_payload) fs_promote.assert_called_with(clone_path) snap_get.assert_called_with('%s@snap1' % SHARE_PATH, clones_payload) snap_list.assert_called_with(snapshots_payload) @mock.patch('%s.NefFilesystems.mount' % RPC_PATH) @mock.patch('%s.NefFilesystems.get' % RPC_PATH) def test_mount_filesystem(self, fs_get, fs_mount): mount_path = '%s:/%s' % (self.cfg.nexenta_nas_host, SHARE_PATH) fs_get.return_value = { 'mountPoint': '/%s' % SHARE_PATH, 'isMounted': False} self.assertEqual(mount_path, self.drv._mount_filesystem(SHARE)) self.drv.nef.filesystems.mount.assert_called_with(SHARE_PATH) @mock.patch('%s.NefHpr.activate' % RPC_PATH) @mock.patch('%s.NefFilesystems.mount' % RPC_PATH) @mock.patch('%s.NefFilesystems.get' % RPC_PATH) def test_mount_filesystem_with_activate( self, fs_get, fs_mount, hpr_activate): mount_path = '%s:/%s' % (self.cfg.nexenta_nas_host, SHARE_PATH) fs_get.side_effect = [ {'mountPoint': 'none', 'isMounted': False}, {'mountPoint': '/%s' % SHARE_PATH, 'isMounted': False}] self.assertEqual(mount_path, self.drv._mount_filesystem(SHARE)) payload = {'datasetName': SHARE_PATH} self.drv.nef.hpr.activate.assert_called_once_with(payload) @mock.patch('%s.NefFilesystems.mount' % RPC_PATH) @mock.patch('%s.NefFilesystems.unmount' % RPC_PATH) def test_remount_filesystem(self, fs_unmount, fs_mount): self.drv._remount_filesystem(SHARE_PATH) fs_unmount.assert_called_once_with(SHARE_PATH) fs_mount.assert_called_once_with(SHARE_PATH) def parse_fqdn(self, fqdn): address_mask = fqdn.strip().split('/', 1) address = address_mask[0] ls = {"allow": True, "etype": "fqdn", "entity": address} if len(address_mask) == 2: ls['mask'] = address_mask[1] ls['etype'] = 'network' return ls @ddt.data({'key': 'value'}, {}) @mock.patch('%s.NefNfs.list' % RPC_PATH) @mock.patch('%s.NefNfs.set' % RPC_PATH) @mock.patch('%s.NefFilesystems.acl' % RPC_PATH) def test_update_nfs_access(self, acl, nfs_set, nfs_list, list_data): security_contexts = {'securityModes': ['sys']} nfs_list.return_value = list_data rw_list = ['1.1.1.1/24', '2.2.2.2'] ro_list = ['3.3.3.3', '4.4.4.4/30'] security_contexts['readWriteList'] = [] security_contexts['readOnlyList'] = [] for fqdn in rw_list: ls = self.parse_fqdn(fqdn) if ls.get('mask'): ls['mask'] = int(ls['mask']) security_contexts['readWriteList'].append(ls) for fqdn in ro_list: ls = self.parse_fqdn(fqdn) if ls.get('mask'): ls['mask'] = int(ls['mask']) security_contexts['readOnlyList'].append(ls) self.assertIsNone(self.drv._update_nfs_access(SHARE, rw_list, ro_list)) payload = { 'flags': ['file_inherit', 'dir_inherit'], 'permissions': ['full_set'], 'principal': 
'everyone@', 'type': 'allow' } self.drv.nef.filesystems.acl.assert_called_with(SHARE_PATH, payload) payload = {'securityContexts': [security_contexts]} if list_data: self.drv.nef.nfs.set.assert_called_with(SHARE_PATH, payload) else: payload['filesystem'] = SHARE_PATH self.drv.nef.nfs.create.assert_called_with(payload) def test_update_nfs_access_bad_mask(self): security_contexts = {'securityModes': ['sys']} rw_list = ['1.1.1.1/24', '2.2.2.2/1a'] ro_list = ['3.3.3.3', '4.4.4.4/30'] security_contexts['readWriteList'] = [] security_contexts['readOnlyList'] = [] for fqdn in rw_list: security_contexts['readWriteList'].append(self.parse_fqdn(fqdn)) for fqdn in ro_list: security_contexts['readOnlyList'].append(self.parse_fqdn(fqdn)) self.assertRaises(ValueError, self.drv._update_nfs_access, SHARE, rw_list, ro_list) @mock.patch('%s._update_nfs_access' % DRV_PATH) def test_update_access__ip_rw(self, update_nfs_access): access = { 'access_type': 'ip', 'access_to': '1.1.1.1', 'access_level': 'rw', 'access_id': 'fake_id' } self.assertEqual( {'fake_id': {'state': 'active'}}, self.drv.update_access( self.ctx, SHARE, [access], None, None)) self.drv._update_nfs_access.assert_called_with(SHARE, ['1.1.1.1'], []) @mock.patch('%s._update_nfs_access' % DRV_PATH) def test_update_access__ip_ro(self, update_nfs_access): access = { 'access_type': 'ip', 'access_to': '1.1.1.1', 'access_level': 'ro', 'access_id': 'fake_id' } expected = {'fake_id': {'state': 'active'}} self.assertEqual( expected, self.drv.update_access( self.ctx, SHARE, [access], None, None)) self.drv._update_nfs_access.assert_called_with(SHARE, [], ['1.1.1.1']) @ddt.data('rw', 'ro') def test_update_access__not_ip(self, access_level): access = { 'access_type': 'username', 'access_to': 'some_user', 'access_level': access_level, 'access_id': 'fake_id' } expected = {'fake_id': {'state': 'error'}} self.assertEqual(expected, self.drv.update_access( self.ctx, SHARE, [access], None, None)) @mock.patch('%s._get_capacity_info' % DRV_PATH) @mock.patch('manila.share.driver.ShareDriver._update_share_stats') def test_update_share_stats(self, super_stats, info): info.return_value = (100, 90, 10) stats = { 'vendor_name': 'Nexenta', 'storage_protocol': 'NFS', 'nfs_mount_point_base': self.cfg.nexenta_mount_point_base, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': True, 'snapshot_support': True, 'driver_version': DRIVER_VERSION, 'share_backend_name': self.cfg.share_backend_name, 'pools': [{ 'compression': True, 'pool_name': 'pool1', 'total_capacity_gb': 100, 'free_capacity_gb': 90, 'provisioned_capacity_gb': 0, 'max_over_subscription_ratio': 20.0, 'reserved_percentage': ( self.cfg.reserved_share_percentage), 'thin_provisioning': self.cfg.nexenta_thin_provisioning, }], } self.drv._update_share_stats() self.assertEqual(stats, self.drv._stats) def test_get_capacity_info(self): self.drv.nef.get.return_value = { 'bytesAvailable': 9 * units.Gi, 'bytesUsed': 1 * units.Gi} self.assertEqual((10, 9, 1), self.drv._get_capacity_info()) @mock.patch('%s._set_reservation' % DRV_PATH) @mock.patch('%s._set_quota' % DRV_PATH) @mock.patch('%s.NefFilesystems.rename' % RPC_PATH) @mock.patch('%s.NefFilesystems.get' % RPC_PATH) def test_manage_existing(self, fs_get, fs_rename, set_res, set_quota): fs_get.return_value = {'referencedQuotaSize': 1073741824} old_path = '%s:/%s' % (self.cfg.nexenta_nas_host, 'path_to_fs') new_path = '%s:/%s' % (self.cfg.nexenta_nas_host, SHARE_PATH) SHARE['export_locations'] = [{'path': old_path}] expected = {'size': 2, 
'export_locations': [{ 'path': new_path }]} self.assertEqual(expected, self.drv.manage_existing(SHARE, None)) fs_rename.assert_called_with('path_to_fs', {'newPath': SHARE_PATH}) set_res.assert_called_with(SHARE, 2) set_quota.assert_called_with(SHARE, 2) @mock.patch('%s.NefSnapshots.create' % RPC_PATH) def test_create_snapshot(self, snap_create): self.assertIsNone(self.drv.create_snapshot(self.ctx, SNAPSHOT)) snap_create.assert_called_once_with({ 'path': SNAPSHOT['snapshot_path']}) @mock.patch('%s.NefSnapshots.delete' % RPC_PATH) def test_delete_snapshot(self, snap_delete): self.assertIsNone(self.drv.delete_snapshot(self.ctx, SNAPSHOT)) payload = {'defer': True} snap_delete.assert_called_once_with( SNAPSHOT['snapshot_path'], payload) @mock.patch('%s._mount_filesystem' % DRV_PATH) @mock.patch('%s._remount_filesystem' % DRV_PATH) @mock.patch('%s.NefFilesystems.delete' % RPC_PATH) @mock.patch('%s.NefSnapshots.clone' % RPC_PATH) def test_create_share_from_snapshot( self, snap_clone, fs_delete, remount_fs, mount_fs): mount_fs.return_value = 'mount_path' location = { 'path': 'mount_path', 'id': 'share-uuid2' } self.assertEqual([location], self.drv.create_share_from_snapshot( self.ctx, SHARE2, SNAPSHOT)) size = int(SHARE2['size'] * units.Gi * 1.1) payload = { 'targetPath': SHARE2_PATH, 'referencedQuotaSize': size, 'recordSize': self.cfg.nexenta_dataset_record_size, 'compressionMode': self.cfg.nexenta_dataset_compression, 'nonBlockingMandatoryMode': False, 'referencedReservationSize': size } snap_clone.assert_called_once_with(SNAPSHOT['snapshot_path'], payload) @mock.patch('%s._mount_filesystem' % DRV_PATH) @mock.patch('%s._remount_filesystem' % DRV_PATH) @mock.patch('%s.NefFilesystems.delete' % RPC_PATH) @mock.patch('%s.NefSnapshots.clone' % RPC_PATH) def test_create_share_from_snapshot_error( self, snap_clone, fs_delete, remount_fs, mount_fs): fs_delete.side_effect = jsonrpc.NefException('delete error') mount_fs.side_effect = jsonrpc.NefException('create error') self.assertRaises( jsonrpc.NefException, self.drv.create_share_from_snapshot, self.ctx, SHARE2, SNAPSHOT) size = int(SHARE2['size'] * units.Gi * 1.1) payload = { 'targetPath': SHARE2_PATH, 'referencedQuotaSize': size, 'recordSize': self.cfg.nexenta_dataset_record_size, 'compressionMode': self.cfg.nexenta_dataset_compression, 'nonBlockingMandatoryMode': False, 'referencedReservationSize': size } snap_clone.assert_called_once_with(SNAPSHOT['snapshot_path'], payload) payload = {'force': True} fs_delete.assert_called_once_with(SHARE2_PATH, payload) @mock.patch('%s.NefFilesystems.rollback' % RPC_PATH) def test_revert_to_snapshot(self, fs_rollback): self.assertIsNone(self.drv.revert_to_snapshot( self.ctx, SNAPSHOT, [], [])) payload = {'snapshot': 'snapshot-snap_id'} fs_rollback.assert_called_once_with( SHARE_PATH, payload) @mock.patch('%s._set_reservation' % DRV_PATH) @mock.patch('%s._set_quota' % DRV_PATH) def test_extend_share(self, set_quota, set_reservation): self.assertIsNone(self.drv.extend_share( SHARE, 2)) set_quota.assert_called_once_with( SHARE, 2) set_reservation.assert_called_once_with( SHARE, 2) @mock.patch('%s.NefFilesystems.get' % RPC_PATH) @mock.patch('%s._set_reservation' % DRV_PATH) @mock.patch('%s._set_quota' % DRV_PATH) def test_shrink_share(self, set_quota, set_reservation, fs_get): fs_get.return_value = { 'bytesUsedBySelf': 0.5 * units.Gi } self.assertIsNone(self.drv.shrink_share( SHARE2, 1)) set_quota.assert_called_once_with( SHARE2, 1) set_reservation.assert_called_once_with( SHARE2, 1) 
@mock.patch('%s.NefFilesystems.set' % RPC_PATH) def test_set_quota(self, fs_set): quota = int(2 * units.Gi * 1.1) payload = {'referencedQuotaSize': quota} self.assertIsNone(self.drv._set_quota( SHARE, 2)) fs_set.assert_called_once_with(SHARE_PATH, payload) @mock.patch('%s.NefFilesystems.set' % RPC_PATH) def test_set_reservation(self, fs_set): reservation = int(2 * units.Gi * 1.1) payload = {'referencedReservationSize': reservation} self.assertIsNone(self.drv._set_reservation( SHARE, 2)) fs_set.assert_called_once_with(SHARE_PATH, payload) manila-10.0.0/manila/tests/share/drivers/nexenta/test_utils.py0000664000175000017500000000257513656750227024477 0ustar zuulzuul00000000000000# Copyright 2016 Nexenta Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt from oslo_utils import units from manila.share.drivers.nexenta import utils from manila import test @ddt.ddt class TestNexentaUtils(test.TestCase): @ddt.data( # Test empty value (None, 0), ('', 0), ('0', 0), ('12', 12), # Test int values (10, 10), # Test bytes string ('1b', 1), ('1B', 1), ('1023b', 1023), ('0B', 0), # Test other units ('1M', units.Mi), ('1.0M', units.Mi), ) @ddt.unpack def test_str2size(self, value, result): self.assertEqual(result, utils.str2size(value)) def test_str2size_input_error(self): # Invalid format value self.assertRaises(ValueError, utils.str2size, 'A') manila-10.0.0/manila/tests/share/drivers/nexenta/ns4/0000775000175000017500000000000013656750362022421 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/nexenta/ns4/__init__.py0000664000175000017500000000000013656750227024520 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/nexenta/ns4/test_jsonrpc.py0000664000175000017500000000251613656750227025514 0ustar zuulzuul00000000000000# Copyright 2016 Nexenta Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
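# A minimal sketch of the NexentaStor 4 JSON-RPC proxy exercised by the test
# below (assumed from that test; the proxy object is callable and raises
# exception.NexentaException when the NMS response carries an error):
#
#     nms = jsonrpc.NexentaJSONProxy('http', '1.1.1.1', '8080',
#                                    'user', 'pass', 'obj',
#                                    auto=False, method='get')
#     nms()  # POSTs a JSON-RPC request to the NMS REST endpoint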
from unittest import mock from oslo_serialization import jsonutils import requests from manila import exception from manila.share.drivers.nexenta.ns4 import jsonrpc from manila import test class TestNexentaJSONProxy(test.TestCase): @mock.patch('requests.post') def test_call(self, post): nms_post = jsonrpc.NexentaJSONProxy( 'http', '1.1.1.1', '8080', 'user', 'pass', 'obj', auto=False, method='get') data = {'error': {'message': 'some_error'}} post.return_value = requests.Response() post.return_value.__setstate__({ 'status_code': 500, '_content': jsonutils.dumps(data)}) self.assertRaises(exception.NexentaException, nms_post) manila-10.0.0/manila/tests/share/drivers/nexenta/ns4/test_nexenta_nas.py0000664000175000017500000005270413656750227026345 0ustar zuulzuul00000000000000# Copyright 2016 Nexenta Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import base64 import json from unittest import mock from oslo_serialization import jsonutils from oslo_utils import units from manila import context from manila import exception from manila.share import configuration as conf from manila.share.drivers.nexenta.ns4 import nexenta_nas from manila import test PATH_TO_RPC = 'requests.post' CODE = mock.PropertyMock(return_value=200) class FakeResponse(object): def __init__(self, response={}): self.content = json.dumps(response) super(FakeResponse, self).__init__() def close(self): pass class RequestParams(object): def __init__(self, scheme, host, port, path, user, password): self.scheme = scheme.lower() self.host = host self.port = port self.path = path self.user = user self.password = password @property def url(self): return '%s://%s:%s%s' % (self.scheme, self.host, self.port, self.path) @property def headers(self): auth = base64.b64encode( ('%s:%s' % (self.user, self.password)).encode('utf-8')) headers = { 'Content-Type': 'application/json', 'Authorization': 'Basic %s' % auth, } return headers def build_post_args(self, obj, method, *args): data = jsonutils.dumps({ 'object': obj, 'method': method, 'params': args, }) return data class TestNexentaNasDriver(test.TestCase): def _get_share_path(self, share_name): return '%s/%s/%s' % (self.volume, self.share, share_name) def setUp(self): def _safe_get(opt): return getattr(self.cfg, opt) self.cfg = mock.Mock(spec=conf.Configuration) self.cfg.nexenta_host = '1.1.1.1' super(TestNexentaNasDriver, self).setUp() self.ctx = context.get_admin_context() self.cfg.safe_get = mock.Mock(side_effect=_safe_get) self.cfg.nexenta_rest_port = 1000 self.cfg.reserved_share_percentage = 0 self.cfg.max_over_subscription_ratio = 0 self.cfg.nexenta_rest_protocol = 'auto' self.cfg.nexenta_volume = 'volume' self.cfg.nexenta_nfs_share = 'nfs_share' self.cfg.nexenta_user = 'user' self.cfg.nexenta_password = 'password' self.cfg.nexenta_thin_provisioning = False self.cfg.enabled_share_protocols = 'NFS' self.cfg.nexenta_mount_point_base = '$state_path/mnt' self.cfg.share_backend_name = 'NexentaStor' self.cfg.nexenta_dataset_compression = 'on' self.cfg.nexenta_smb = 'on' 
self.cfg.nexenta_nfs = 'on' self.cfg.nexenta_dataset_dedupe = 'on' self.cfg.network_config_group = 'DEFAULT' self.cfg.admin_network_config_group = ( 'fake_admin_network_config_group') self.cfg.driver_handles_share_servers = False self.request_params = RequestParams( 'http', self.cfg.nexenta_host, self.cfg.nexenta_rest_port, '/rest/nms/', self.cfg.nexenta_user, self.cfg.nexenta_password) self.drv = nexenta_nas.NexentaNasDriver(configuration=self.cfg) self.drv.do_setup(self.ctx) self.volume = self.cfg.nexenta_volume self.share = self.cfg.nexenta_nfs_share @mock.patch(PATH_TO_RPC) def test_check_for_setup_error__volume_doesnt_exist(self, post): post.return_value = FakeResponse() self.assertRaises( exception.NexentaException, self.drv.check_for_setup_error) @mock.patch(PATH_TO_RPC) def test_check_for_setup_error__folder_doesnt_exist(self, post): folder = '%s/%s' % (self.volume, self.share) create_folder_props = { 'recordsize': '4K', 'quota': '1G', 'compression': self.cfg.nexenta_dataset_compression, 'sharesmb': self.cfg.nexenta_smb, 'sharenfs': self.cfg.nexenta_nfs, } share_opts = { 'read_write': '*', 'read_only': '', 'root': 'nobody', 'extra_options': 'anon=0', 'recursive': 'true', 'anonymous_rw': 'true', } def my_side_effect(*args, **kwargs): if kwargs['data'] == self.request_params.build_post_args( 'volume', 'object_exists', self.volume): return FakeResponse({'result': 'OK'}) elif kwargs['data'] == self.request_params.build_post_args( 'folder', 'object_exists', folder): return FakeResponse() elif kwargs['data'] == self.request_params.build_post_args( 'folder', 'create_with_props', self.volume, self.share, create_folder_props): return FakeResponse() elif kwargs['data'] == self.request_params.build_post_args( 'netstorsvc', 'share_folder', 'svc:/network/nfs/server:default', folder, share_opts): return FakeResponse() else: raise exception.ManilaException('Unexpected request') post.side_effect = my_side_effect self.assertRaises( exception.ManilaException, self.drv.check_for_setup_error) post.assert_any_call( self.request_params.url, data=self.request_params.build_post_args( 'volume', 'object_exists', self.volume), headers=self.request_params.headers) post.assert_any_call( self.request_params.url, data=self.request_params.build_post_args( 'folder', 'object_exists', folder), headers=self.request_params.headers) @mock.patch(PATH_TO_RPC) def test_create_share(self, post): share = { 'name': 'share', 'size': 1, 'share_proto': self.cfg.enabled_share_protocols } self.cfg.nexenta_thin_provisioning = False path = '%s/%s/%s' % (self.volume, self.share, share['name']) location = {'path': '%s:/volumes/%s' % (self.cfg.nexenta_host, path)} post.return_value = FakeResponse() self.assertEqual([location], self.drv.create_share(self.ctx, share)) @mock.patch(PATH_TO_RPC) def test_create_share__wrong_proto(self, post): share = { 'name': 'share', 'size': 1, 'share_proto': 'A_VERY_WRONG_PROTO' } post.return_value = FakeResponse() self.assertRaises(exception.InvalidShare, self.drv.create_share, self.ctx, share) @mock.patch(PATH_TO_RPC) def test_create_share__thin_provisioning(self, post): share = {'name': 'share', 'size': 1, 'share_proto': self.cfg.enabled_share_protocols} create_folder_props = { 'recordsize': '4K', 'quota': '1G', 'compression': self.cfg.nexenta_dataset_compression, } parent_path = '%s/%s' % (self.volume, self.share) post.return_value = FakeResponse() self.cfg.nexenta_thin_provisioning = True self.drv.create_share(self.ctx, share) post.assert_called_with( self.request_params.url, 
data=self.request_params.build_post_args( 'folder', 'create_with_props', parent_path, share['name'], create_folder_props), headers=self.request_params.headers) @mock.patch(PATH_TO_RPC) def test_create_share__thick_provisioning(self, post): share = { 'name': 'share', 'size': 1, 'share_proto': self.cfg.enabled_share_protocols } quota = '%sG' % share['size'] create_folder_props = { 'recordsize': '4K', 'quota': quota, 'compression': self.cfg.nexenta_dataset_compression, 'reservation': quota, } parent_path = '%s/%s' % (self.volume, self.share) post.return_value = FakeResponse() self.cfg.nexenta_thin_provisioning = False self.drv.create_share(self.ctx, share) post.assert_called_with( self.request_params.url, data=self.request_params.build_post_args( 'folder', 'create_with_props', parent_path, share['name'], create_folder_props), headers=self.request_params.headers) @mock.patch(PATH_TO_RPC) def test_create_share_from_snapshot(self, post): share = { 'name': 'share', 'size': 1, 'share_proto': self.cfg.enabled_share_protocols } snapshot = {'name': 'sn1', 'share_name': share['name']} post.return_value = FakeResponse() path = '%s/%s/%s' % (self.volume, self.share, share['name']) location = {'path': '%s:/volumes/%s' % (self.cfg.nexenta_host, path)} snapshot_name = '%s/%s/%s@%s' % ( self.volume, self.share, snapshot['share_name'], snapshot['name']) self.assertEqual([location], self.drv.create_share_from_snapshot( self.ctx, share, snapshot)) post.assert_any_call( self.request_params.url, data=self.request_params.build_post_args( 'folder', 'clone', snapshot_name, '%s/%s/%s' % (self.volume, self.share, share['name'])), headers=self.request_params.headers) @mock.patch(PATH_TO_RPC) def test_delete_share(self, post): share = { 'name': 'share', 'size': 1, 'share_proto': self.cfg.enabled_share_protocols } post.return_value = FakeResponse() folder = '%s/%s/%s' % (self.volume, self.share, share['name']) self.drv.delete_share(self.ctx, share) post.assert_any_call( self.request_params.url, data=self.request_params.build_post_args( 'folder', 'destroy', folder.strip(), '-r'), headers=self.request_params.headers) @mock.patch(PATH_TO_RPC) def test_delete_share__exists_error(self, post): share = { 'name': 'share', 'size': 1, 'share_proto': self.cfg.enabled_share_protocols } post.return_value = FakeResponse() post.side_effect = exception.NexentaException('does not exist') self.drv.delete_share(self.ctx, share) @mock.patch(PATH_TO_RPC) def test_delete_share__some_error(self, post): share = { 'name': 'share', 'size': 1, 'share_proto': self.cfg.enabled_share_protocols } post.return_value = FakeResponse() post.side_effect = exception.ManilaException('Some error') self.assertRaises( exception.ManilaException, self.drv.delete_share, self.ctx, share) @mock.patch(PATH_TO_RPC) def test_extend_share__thin_provisoning(self, post): share = { 'name': 'share', 'size': 1, 'share_proto': self.cfg.enabled_share_protocols } new_size = 5 quota = '%sG' % new_size post.return_value = FakeResponse() self.cfg.nexenta_thin_provisioning = True self.drv.extend_share(share, new_size) post.assert_called_with( self.request_params.url, data=self.request_params.build_post_args( 'folder', 'set_child_prop', '%s/%s/%s' % (self.volume, self.share, share['name']), 'quota', quota), headers=self.request_params.headers) @mock.patch(PATH_TO_RPC) def test_extend_share__thick_provisoning(self, post): share = { 'name': 'share', 'size': 1, 'share_proto': self.cfg.enabled_share_protocols } new_size = 5 post.return_value = FakeResponse() 
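# NOTE(editorial): in contrast to the thin-provisioning case above, which expects extend_share() to push an updated 'quota' property through folder.set_child_prop, the thick-provisioning case below asserts that no REST request is issued at all (post.assert_not_called()).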
self.cfg.nexenta_thin_provisioning = False self.drv.extend_share(share, new_size) post.assert_not_called() @mock.patch(PATH_TO_RPC) def test_create_snapshot(self, post): snapshot = {'share_name': 'share', 'name': 'share@first'} post.return_value = FakeResponse() folder = '%s/%s/%s' % (self.volume, self.share, snapshot['share_name']) self.drv.create_snapshot(self.ctx, snapshot) post.assert_called_with( self.request_params.url, data=self.request_params.build_post_args( 'folder', 'create_snapshot', folder, snapshot['name'], '-r'), headers=self.request_params.headers) @mock.patch(PATH_TO_RPC) def test_delete_snapshot(self, post): snapshot = {'share_name': 'share', 'name': 'share@first'} post.return_value = FakeResponse() self.drv.delete_snapshot(self.ctx, snapshot) post.assert_called_with( self.request_params.url, data=self.request_params.build_post_args( 'snapshot', 'destroy', '%s@%s' % ( self._get_share_path(snapshot['share_name']), snapshot['name']), ''), headers=self.request_params.headers) @mock.patch(PATH_TO_RPC) def test_delete_snapshot__nexenta_error_1(self, post): snapshot = {'share_name': 'share', 'name': 'share@first'} post.return_value = FakeResponse() post.side_effect = exception.NexentaException('does not exist') self.drv.delete_snapshot(self.ctx, snapshot) @mock.patch(PATH_TO_RPC) def test_delete_snapshot__nexenta_error_2(self, post): snapshot = {'share_name': 'share', 'name': 'share@first'} post.return_value = FakeResponse() post.side_effect = exception.NexentaException('has dependent clones') self.drv.delete_snapshot(self.ctx, snapshot) @mock.patch(PATH_TO_RPC) def test_delete_snapshot__some_error(self, post): snapshot = {'share_name': 'share', 'name': 'share@first'} post.return_value = FakeResponse() post.side_effect = exception.ManilaException('Some error') self.assertRaises(exception.ManilaException, self.drv.delete_snapshot, self.ctx, snapshot) @mock.patch(PATH_TO_RPC) def test_update_access__unsupported_access_type(self, post): share = { 'name': 'share', 'share_proto': self.cfg.enabled_share_protocols } access = { 'access_type': 'group', 'access_to': 'ordinary_users', 'access_level': 'rw' } self.assertRaises(exception.InvalidShareAccess, self.drv.update_access, self.ctx, share, [access], None, None) @mock.patch(PATH_TO_RPC) def test_update_access__cidr(self, post): share = { 'name': 'share', 'share_proto': self.cfg.enabled_share_protocols } access1 = { 'access_type': 'ip', 'access_to': '1.1.1.1/24', 'access_level': 'rw' } access2 = { 'access_type': 'ip', 'access_to': '1.2.3.4', 'access_level': 'rw' } access_rules = [access1, access2] share_opts = { 'auth_type': 'none', 'read_write': '%s:%s' % ( access1['access_to'], access2['access_to']), 'read_only': '', 'recursive': 'true', 'anonymous_rw': 'true', 'anonymous': 'true', 'extra_options': 'anon=0', } def my_side_effect(*args, **kwargs): if kwargs['data'] == self.request_params.build_post_args( 'netstorsvc', 'share_folder', 'svc:/network/nfs/server:default', self._get_share_path(share['name']), share_opts): return FakeResponse() else: raise exception.ManilaException('Unexpected request') post.return_value = FakeResponse() post.side_effect = my_side_effect self.drv.update_access(self.ctx, share, access_rules, None, None) post.assert_called_with( self.request_params.url, data=self.request_params.build_post_args( 'netstorsvc', 'share_folder', 'svc:/network/nfs/server:default', self._get_share_path(share['name']), share_opts), headers=self.request_params.headers) self.assertRaises(exception.ManilaException, 
self.drv.update_access, self.ctx, share, [access1, {'access_type': 'ip', 'access_to': '2.2.2.2', 'access_level': 'rw'}], None, None) @mock.patch(PATH_TO_RPC) def test_update_access__add_one_ip_to_empty_access_list(self, post): share = {'name': 'share', 'share_proto': self.cfg.enabled_share_protocols} access = { 'access_type': 'ip', 'access_to': '1.1.1.1', 'access_level': 'rw' } rw_list = None share_opts = { 'auth_type': 'none', 'read_write': access['access_to'], 'read_only': '', 'recursive': 'true', 'anonymous_rw': 'true', 'anonymous': 'true', 'extra_options': 'anon=0', } def my_side_effect(*args, **kwargs): if kwargs['data'] == self.request_params.build_post_args( 'netstorsvc', 'get_shareopts', 'svc:/network/nfs/server:default', self._get_share_path(share['name'])): return FakeResponse({'result': {'read_write': rw_list}}) elif kwargs['data'] == self.request_params.build_post_args( 'netstorsvc', 'share_folder', 'svc:/network/nfs/server:default', self._get_share_path(share['name']), share_opts): return FakeResponse() else: raise exception.ManilaException('Unexpected request') post.return_value = FakeResponse() self.drv.update_access(self.ctx, share, [access], None, None) post.assert_called_with( self.request_params.url, data=self.request_params.build_post_args( 'netstorsvc', 'share_folder', 'svc:/network/nfs/server:default', self._get_share_path(share['name']), share_opts), headers=self.request_params.headers) post.side_effect = my_side_effect self.assertRaises(exception.ManilaException, self.drv.update_access, self.ctx, share, [{'access_type': 'ip', 'access_to': '1111', 'access_level': 'rw'}], None, None) @mock.patch(PATH_TO_RPC) def test_deny_access__unsupported_access_type(self, post): share = {'name': 'share', 'share_proto': self.cfg.enabled_share_protocols} access = { 'access_type': 'group', 'access_to': 'ordinary_users', 'access_level': 'rw' } self.assertRaises(exception.InvalidShareAccess, self.drv.update_access, self.ctx, share, [access], None, None) def test_share_backend_name(self): self.assertEqual('NexentaStor', self.drv.share_backend_name) @mock.patch(PATH_TO_RPC) def test_get_capacity_info(self, post): post.return_value = FakeResponse({'result': { 'available': 9 * units.Gi, 'used': 1 * units.Gi}}) self.assertEqual( (10, 9, 1), self.drv.helper._get_capacity_info()) @mock.patch('manila.share.drivers.nexenta.ns4.nexenta_nfs_helper.' 
'NFSHelper._get_capacity_info') @mock.patch('manila.share.driver.ShareDriver._update_share_stats') def test_update_share_stats(self, super_stats, info): info.return_value = (100, 90, 10) stats = { 'vendor_name': 'Nexenta', 'storage_protocol': 'NFS', 'nfs_mount_point_base': self.cfg.nexenta_mount_point_base, 'driver_version': '1.0', 'share_backend_name': self.cfg.share_backend_name, 'pools': [{ 'total_capacity_gb': 100, 'free_capacity_gb': 90, 'pool_name': 'volume', 'reserved_percentage': ( self.cfg.reserved_share_percentage), 'compression': True, 'dedupe': True, 'thin_provisioning': self.cfg.nexenta_thin_provisioning, 'max_over_subscription_ratio': ( self.cfg.safe_get( 'max_over_subscription_ratio')), }], } self.drv._update_share_stats() self.assertEqual(stats, self.drv._stats) manila-10.0.0/manila/tests/share/drivers/qnap/0000775000175000017500000000000013656750362021212 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/qnap/__init__.py0000664000175000017500000000000013656750227023311 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/qnap/test_api.py0000664000175000017500000011006613656750227023400 0ustar zuulzuul00000000000000# Copyright (c) 2016 QNAP Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
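# NOTE(editorial sketch): the API tests below reconstruct the exact query
# strings the QNAP driver is expected to send (login, share and snapshot
# management, host access control) and compare them with the sequence of
# HTTPConnection.request() calls recorded by mock.  The helper below is a
# stand-alone illustration of how the expected authLogin.cgi URL is built,
# mirroring the _sanitize_params() logic defined in this module; the helper
# itself is illustrative and not part of the driver or of these tests.

import base64

import six
from six.moves import urllib


def _example_expected_login_url(user, password):
    """Return an authLogin.cgi URL in the form these tests assert against."""
    params = {
        'user': user,
        'pwd': base64.b64encode(password.encode('utf-8')),
        'serviceKey': 1,
    }
    # Stringify every value, then urlencode, as _sanitize_params() does.
    sanitized = {key: six.text_type(value) for key, value in params.items()}
    return '/cgi-bin/authLogin.cgi?%s' % urllib.parse.urlencode(sanitized)

# Example (dict order is preserved on Python 3.7+):
# _example_expected_login_url('admin', 'qnapadmin')
# -> '/cgi-bin/authLogin.cgi?user=admin&pwd=...&serviceKey=1'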
import base64 import time from unittest import mock import ddt import six from six.moves import urllib from manila import exception from manila.share.drivers.qnap import qnap from manila import test from manila.tests import fake_share from manila.tests.share.drivers.qnap import fakes def create_configuration(management_url, qnap_share_ip, qnap_nas_login, qnap_nas_password, qnap_poolname): """Create configuration.""" configuration = mock.Mock() configuration.qnap_management_url = management_url configuration.qnap_share_ip = qnap_share_ip configuration.qnap_nas_login = qnap_nas_login configuration.qnap_nas_password = qnap_nas_password configuration.qnap_poolname = qnap_poolname configuration.safe_get.return_value = False return configuration class QnapShareDriverBaseTestCase(test.TestCase): """Base Class for the QnapShareDriver Tests.""" def setUp(self): """Setup the Qnap Driver Base TestCase.""" super(QnapShareDriverBaseTestCase, self).setUp() self.driver = None self.share_api = None def _do_setup(self, management_url, share_ip, nas_login, nas_password, poolname, **kwargs): """Config do setup configurations.""" self.driver = qnap.QnapShareDriver( configuration=create_configuration( management_url, share_ip, nas_login, nas_password, poolname), private_storage=kwargs.get('private_storage')) self.driver.do_setup('context') @ddt.ddt class QnapAPITestCase(QnapShareDriverBaseTestCase): """Tests QNAP api functions.""" login_url = ('/cgi-bin/authLogin.cgi?') get_basic_info_url = ('/cgi-bin/authLogin.cgi') fake_password = 'qnapadmin' def setUp(self): """Setup the Qnap API TestCase.""" super(QnapAPITestCase, self).setUp() fake_parms = {} fake_parms['user'] = 'admin' fake_parms['pwd'] = base64.b64encode( self.fake_password.encode("utf-8")) fake_parms['serviceKey'] = 1 sanitized_params = self._sanitize_params(fake_parms) self.login_url = ('/cgi-bin/authLogin.cgi?%s' % sanitized_params) self.mock_object(six.moves.http_client, 'HTTPConnection') self.share = fake_share.fake_share( share_proto='NFS', id='shareId', display_name='fakeDisplayName', export_locations=[{'path': '1.2.3.4:/share/fakeShareName'}], host='QnapShareDriver', size=10) def _sanitize_params(self, params, doseq=False): sanitized_params = {} for key in params: value = params[key] if value is not None: if isinstance(value, list): sanitized_params[key] = [six.text_type(v) for v in value] else: sanitized_params[key] = six.text_type(value) sanitized_params = urllib.parse.urlencode(sanitized_params, doseq) return sanitized_params @ddt.data('fake_share_name', 'fakeLabel') def test_create_share_api(self, fake_name): """Test create share api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeCreateShareResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.create_share( self.share, 'Storage Pool 1', fake_name, 'NFS', qnap_deduplication=False, qnap_compression=True, qnap_thin_provision=True, qnap_ssd_cache=False) fake_params = { 'wiz_func': 'share_create', 'action': 'add_share', 'vol_name': fake_name, 'vol_size': '10' + 'GB', 'threshold': '80', 'dedup': 'off', 'compression': '1', 'thin_pro': '1', 'cache': '0', 'cifs_enable': '0', 'nfs_enable': '1', 'afp_enable': '0', 'ftp_enable': '0', 'encryption': '0', 'hidden': '0', 'oplocks': '1', 'sync': 'always', 'userrw0': 'admin', 'userrd_len': '0', 
'userrw_len': '1', 'userno_len': '0', 'access_r': 'setup_users', 'path_type': 'auto', 'recycle_bin': '1', 'recycle_bin_administrators_only': '0', 'pool_name': 'Storage Pool 1', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ('/cgi-bin/wizReq.cgi?%s' % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_api_delete_share(self): """Test delete share api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeDeleteShareResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.delete_share( 'fakeId') fake_params = { 'func': 'volume_mgmt', 'vol_remove': '1', 'volumeID': 'fakeId', 'stop_service': 'no', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( '/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_get_specific_poolinfo(self): """Test get specific poolinfo api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeSpecificPoolInfoResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.get_specific_poolinfo( 'fakePoolId') fake_params = { 'store': 'poolInfo', 'func': 'extra_get', 'poolID': 'fakePoolId', 'Pool_Info': '1', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( '/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) @ddt.data({'pool_id': "Storage Pool 1"}, {'pool_id': "Storage Pool 1", 'vol_no': 'fakeNo'}, {'pool_id': "Storage Pool 1", 'vol_label': 'fakeShareName'}) def test_get_share_info(self, dict_parm): """Test get share info api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeShareInfoResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.get_share_info(**dict_parm) fake_params = { 'store': 'poolVolumeList', 'poolID': 'Storage Pool 1', 'func': 'extra_get', 'Pool_Vol_Info': '1', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( '/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( 
expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_get_specific_volinfo(self): """Test get specific volume info api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeSpecificVolInfoResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.get_specific_volinfo( 'fakeNo') fake_params = { 'store': 'volumeInfo', 'volumeID': 'fakeNo', 'func': 'extra_get', 'Volume_Info': '1', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( '/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_get_snapshot_info_es(self): """Test get snapshot info api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeSnapshotInfoResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.get_snapshot_info( volID='volId', snapshot_name='fakeSnapshotName') fake_params = { 'func': 'extra_get', 'volumeID': 'volId', 'snapshot_list': '1', 'snap_start': '0', 'snap_count': '100', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( '/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_create_snapshot_api(self): """Test create snapshot api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeCreateSnapshotResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.create_snapshot_api( 'fakeVolumeId', 'fakeSnapshotName') fake_params = { 'func': 'create_snapshot', 'volumeID': 'fakeVolumeId', 'snapshot_name': 'fakeSnapshotName', 'expire_min': '0', 'vital': '1', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( '/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) @ddt.data(fakes.FakeDeleteSnapshotResponse(), fakes.FakeDeleteSnapshotResponseSnapshotNotExist(), fakes.FakeDeleteSnapshotResponseShareNotExist()) def test_delete_snapshot_api(self, fakeDeleteSnapshotResponse): """Test delete snapshot api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), 
fakes.FakeLoginResponse(), fakeDeleteSnapshotResponse] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.delete_snapshot_api( 'fakeSnapshotId') fake_params = { 'func': 'del_snapshots', 'snapshotID': 'fakeSnapshotId', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( '/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_clone_snapshot_api(self): """Test clone snapshot api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeDeleteSnapshotResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.clone_snapshot( 'fakeSnapshotId', 'fakeNewShareName', 'fakeCloneSize') fake_params = { 'func': 'clone_qsnapshot', 'by_vol': '1', 'snapshotID': 'fakeSnapshotId', 'new_name': 'fakeNewShareName', 'clone_size': '{}g'.format('fakeCloneSize'), 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( '/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_edit_share_api(self): """Test edit share api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseTs_4_3_0(), fakes.FakeLoginResponse(), fakes.FakeCreateSnapshotResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') expect_share_dict = { "sharename": 'fakeVolId', "old_sharename": 'fakeVolId', "new_size": 100, "deduplication": False, "compression": True, "thin_provision": True, "ssd_cache": False, "share_proto": "NFS" } self.driver.api_executor.edit_share( expect_share_dict) fake_params = { 'wiz_func': 'share_property', 'action': 'share_property', 'sharename': 'fakeVolId', 'old_sharename': 'fakeVolId', 'dedup': 'off', 'compression': '1', 'thin_pro': '1', 'cache': '0', 'cifs_enable': '0', 'nfs_enable': '1', 'afp_enable': '0', 'ftp_enable': '0', 'hidden': '0', 'oplocks': '1', 'sync': 'always', 'recycle_bin': '1', 'recycle_bin_administrators_only': '0', 'sid': 'fakeSid', } if expect_share_dict.get('new_size'): fake_params['vol_size'] = '100GB' sanitized_params = self._sanitize_params(fake_params) fake_url = ( '/cgi-bin/priv/privWizard.cgi?%s' % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) @ddt.data(fakes.FakeGetHostListResponse(), fakes.FakeGetNoHostListResponse()) def test_get_host_list(self, fakeGetHostListResponse): """Test get host list api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ 
fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakeGetHostListResponse] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.get_host_list() fake_params = { 'module': 'hosts', 'func': 'get_hostlist', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( ('/cgi-bin/accessrights/accessrightsRequest.cgi?%s') % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_add_host(self): """Test add host api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeGetHostListResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.add_host( 'fakeHostName', 'fakeIpV4') fake_params = { 'module': 'hosts', 'func': 'apply_addhost', 'name': 'fakeHostName', 'ipaddr_v4': 'fakeIpV4', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( ('/cgi-bin/accessrights/accessrightsRequest.cgi?%s') % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_edit_host(self): """Test edit host api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeGetHostListResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.edit_host( 'fakeHostName', ['fakeIpV4']) fake_params = { 'module': 'hosts', 'func': 'apply_sethost', 'name': 'fakeHostName', 'ipaddr_v4': ['fakeIpV4'], 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params, doseq=True) fake_url = ( ('/cgi-bin/accessrights/accessrightsRequest.cgi?%s') % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_delete_host(self): """Test delete host api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakes.FakeGetHostListResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.delete_host('fakeHostName') fake_params = { 'module': 'hosts', 'func': 'apply_delhost', 'host_name': 'fakeHostName', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( ('/cgi-bin/accessrights/accessrightsRequest.cgi?%s') % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), 
mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) @ddt.data(fakes.FakeGetHostListResponse()) def test_set_nfs_access(self, fakeGetHostListResponse): """Test set nfs access api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fakeGetHostListResponse] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.set_nfs_access( 'fakeShareName', 'fakeAccess', 'fakeHostName') fake_params = { 'wiz_func': 'share_nfs_control', 'action': 'share_nfs_control', 'sharename': 'fakeShareName', 'access': 'fakeAccess', 'host_name': 'fakeHostName', 'sid': 'fakeSid', } sanitized_params = self._sanitize_params(fake_params) fake_url = ( ('/cgi-bin/priv/privWizard.cgi?%s') % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) def test_get_snapshot_info_ts_api(self): """Test get snapshot info api.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseTs_4_3_0(), fakes.FakeLoginResponse(), fakes.FakeSnapshotInfoResponse()] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.driver.api_executor.get_snapshot_info( snapshot_name='fakeSnapshotName', lun_index='fakeLunIndex') fake_params = { 'func': 'extra_get', 'LUNIndex': 'fakeLunIndex', 'smb_snapshot_list': '1', 'smb_snapshot': '1', 'snapshot_list': '1', 'sid': 'fakeSid'} sanitized_params = self._sanitize_params(fake_params) fake_url = ( ('/cgi-bin/disk/snapshot.cgi?%s') % sanitized_params) expected_call_list = [ mock.call('GET', self.login_url), mock.call('GET', self.get_basic_info_url), mock.call('GET', self.login_url), mock.call('GET', fake_url)] self.assertEqual( expected_call_list, mock_http_connection.return_value.request.call_args_list) @ddt.data(fakes.FakeAuthPassFailResponse(), fakes.FakeEsResCodeNegativeResponse()) def test_api_create_share_with_fail_response(self, fake_fail_response): """Test create share api with fail response.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3(), fakes.FakeLoginResponse(), fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response] self.mock_object(time, 'sleep') self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.assertRaises( exception.ShareBackendException, self.driver.api_executor.create_share, share=self.share, pool_name='Storage Pool 1', create_share_name='fake_share_name', share_proto='NFS', qnap_deduplication=False, qnap_compression=True, qnap_thin_provision=True, qnap_ssd_cache=False) @ddt.unpack @ddt.data(['self.driver.api_executor.get_share_info', {'pool_id': 'fakeId'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.get_specific_volinfo', {'vol_id': 
'fakeId'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.create_snapshot_api', {'volumeID': 'fakeVolumeId', 'snapshot_name': 'fakeSnapshotName'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.create_snapshot_api', {'volumeID': 'fakeVolumeId', 'snapshot_name': 'fakeSnapshotName'}, fakes.FakeEsResCodeNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.get_snapshot_info', {'volID': 'volId'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.get_snapshot_info', {'volID': 'volId'}, fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.get_specific_poolinfo', {'pool_id': 'Storage Pool 1'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.get_specific_poolinfo', {'pool_id': 'Storage Pool 1'}, fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.delete_share', {'vol_id': 'fakeId'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.delete_share', {'vol_id': 'fakeId'}, fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.delete_snapshot_api', {'snapshot_id': 'fakeSnapshotId'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.delete_snapshot_api', {'snapshot_id': 'fakeSnapshotId'}, fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.clone_snapshot', {'snapshot_id': 'fakeSnapshotId', 'new_sharename': 'fakeNewShareName', 'clone_size': 'fakeCloneSize'}, fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.clone_snapshot', {'snapshot_id': 'fakeSnapshotId', 'new_sharename': 'fakeNewShareName', 'clone_size': 'fakeCloneSize'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.edit_share', {'share_dict': {"sharename": 'fakeVolId', "old_sharename": 'fakeVolId', "new_size": 100, "deduplication": False, "compression": True, "thin_provision": False, "ssd_cache": False, "share_proto": "NFS"}}, fakes.FakeEsResCodeNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.edit_share', {'share_dict': {"sharename": 'fakeVolId', "old_sharename": 'fakeVolId', "new_size": 100, "deduplication": False, "compression": True, "thin_provision": False, "ssd_cache": False, "share_proto": "NFS"}}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.add_host', {'hostname': 'fakeHostName', 'ipv4': 'fakeIpV4'}, fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.add_host', {'hostname': 'fakeHostName', 'ipv4': 'fakeIpV4'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.edit_host', {'hostname': 'fakeHostName', 'ipv4_list': 'fakeIpV4List'}, fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.edit_host', {'hostname': 'fakeHostName', 'ipv4_list': 'fakeIpV4List'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.delete_host', {'hostname': 'fakeHostName'}, 
fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.delete_host', {'hostname': 'fakeHostName'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.get_host_list', {}, fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.get_host_list', {}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.set_nfs_access', {'sharename': 'fakeShareName', 'access': 'fakeAccess', 'host_name': 'fakeHostName'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.set_nfs_access', {'sharename': 'fakeShareName', 'access': 'fakeAccess', 'host_name': 'fakeHostName'}, fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseEs_1_1_3()], ['self.driver.api_executor.get_snapshot_info', {'snapshot_name': 'fakeSnapshoName', 'lun_index': 'fakeLunIndex'}, fakes.FakeAuthPassFailResponse(), fakes.FakeGetBasicInfoResponseTs_4_3_0()], ['self.driver.api_executor.get_snapshot_info', {'snapshot_name': 'fakeSnapshoName', 'lun_index': 'fakeLunIndex'}, fakes.FakeResultNegativeResponse(), fakes.FakeGetBasicInfoResponseTs_4_3_0()]) def test_get_snapshot_info_ts_with_fail_response( self, api, dict_parm, fake_fail_response, fake_basic_info): """Test get snapshot info api with fail response.""" mock_http_connection = six.moves.http_client.HTTPConnection mock_http_connection.return_value.getresponse.side_effect = [ fakes.FakeLoginResponse(), fake_basic_info, fakes.FakeLoginResponse(), fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response, fake_fail_response] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.mock_object(time, 'sleep') self.assertRaises( exception.ShareBackendException, eval(api), **dict_parm) manila-10.0.0/manila/tests/share/drivers/qnap/test_qnap.py0000664000175000017500000017671413656750227023602 0ustar zuulzuul00000000000000# Copyright (c) 2016 QNAP Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
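# NOTE(editorial sketch): the driver tests in this module stub
# six.moves.http_client.HTTPConnection / HTTPSConnection and feed
# getresponse() a list of canned replies through Mock.side_effect, so each
# REST round trip (login, basic info, login again, then the call under test)
# consumes the next fake response in order.  The function below is a minimal,
# self-contained illustration of that sequencing pattern; it is not part of
# the QNAP driver or of the test cases that follow.

from unittest import mock


def _example_side_effect_sequencing():
    """Show how successive calls return successive canned responses."""
    connection = mock.Mock()
    connection.getresponse.side_effect = ['login-reply', 'basic-info-reply']
    first = connection.getresponse()   # -> 'login-reply'
    second = connection.getresponse()  # -> 'basic-info-reply'
    return first, second

# _example_side_effect_sequencing() == ('login-reply', 'basic-info-reply')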
import time from unittest import mock try: import xml.etree.cElementTree as ET except ImportError: import xml.etree.ElementTree as ET import ddt from eventlet import greenthread from oslo_config import cfg import six from manila import exception from manila.share.drivers.qnap import api from manila.share.drivers.qnap import qnap from manila.share import share_types from manila import test from manila.tests import fake_share from manila.tests.share.drivers.qnap import fakes CONF = cfg.CONF def create_configuration(management_url, qnap_share_ip, qnap_nas_login, qnap_nas_password, qnap_poolname): """Create configuration.""" configuration = mock.Mock() configuration.qnap_management_url = management_url configuration.qnap_share_ip = qnap_share_ip configuration.qnap_nas_login = qnap_nas_login configuration.qnap_nas_password = qnap_nas_password configuration.qnap_poolname = qnap_poolname configuration.safe_get.return_value = False return configuration class QnapShareDriverBaseTestCase(test.TestCase): """Base Class for the QnapShareDriver Tests.""" def setUp(self): """Setup the Qnap Driver Base TestCase.""" super(QnapShareDriverBaseTestCase, self).setUp() self.driver = None self.share_api = None def _do_setup(self, management_url, share_ip, nas_login, nas_password, poolname, **kwargs): """Config do setup configurations.""" self.driver = qnap.QnapShareDriver( configuration=create_configuration( management_url, share_ip, nas_login, nas_password, poolname), private_storage=kwargs.get('private_storage')) self.driver.do_setup('context') @ddt.ddt class QnapShareDriverLoginTestCase(QnapShareDriverBaseTestCase): """Tests do_setup api.""" def setUp(self): """Setup the Qnap Share Driver login TestCase.""" super(QnapShareDriverLoginTestCase, self).setUp() self.mock_object(six.moves.http_client, 'HTTPConnection') self.mock_object(six.moves.http_client, 'HTTPSConnection') @ddt.unpack @ddt.data({'mng_url': 'http://1.2.3.4:8080', 'port': '8080', 'ssl': False}, {'mng_url': 'https://1.2.3.4:443', 'port': '443', 'ssl': True}) def test_do_setup_positive(self, mng_url, port, ssl): """Test do_setup with http://1.2.3.4:8080.""" fake_login_response = fakes.FakeLoginResponse() fake_get_basic_info_response_es = ( fakes.FakeGetBasicInfoResponseEs_1_1_3()) if ssl: mock_connection = six.moves.http_client.HTTPSConnection else: mock_connection = six.moves.http_client.HTTPConnection mock_connection.return_value.getresponse.side_effect = [ fake_login_response, fake_get_basic_info_response_es, fake_login_response] self._do_setup(mng_url, '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.assertEqual( mng_url, self.driver.configuration.qnap_management_url) self.assertEqual( '1.2.3.4', self.driver.configuration.qnap_share_ip) self.assertEqual( 'admin', self.driver.configuration.qnap_nas_login) self.assertEqual( 'qnapadmin', self.driver.configuration.qnap_nas_password) self.assertEqual( 'Storage Pool 1', self.driver.configuration.qnap_poolname) self.assertEqual('fakeSid', self.driver.api_executor.sid) self.assertEqual('admin', self.driver.api_executor.username) self.assertEqual('qnapadmin', self.driver.api_executor.password) self.assertEqual('1.2.3.4', self.driver.api_executor.ip) self.assertEqual(port, self.driver.api_executor.port) self.assertEqual(ssl, self.driver.api_executor.ssl) @ddt.data(fakes.FakeGetBasicInfoResponseTs_4_3_0(), fakes.FakeGetBasicInfoResponseTesTs_4_3_0(), fakes.FakeGetBasicInfoResponseTesEs_1_1_3()) def test_do_setup_positive_with_diff_nas(self, fake_basic_info): """Test do_setup with different NAS 
model.""" fake_login_response = fakes.FakeLoginResponse() mock_connection = six.moves.http_client.HTTPSConnection mock_connection.return_value.getresponse.side_effect = [ fake_login_response, fake_basic_info, fake_login_response] self._do_setup('https://1.2.3.4:443', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.assertEqual('fakeSid', self.driver.api_executor.sid) self.assertEqual('admin', self.driver.api_executor.username) self.assertEqual('qnapadmin', self.driver.api_executor.password) self.assertEqual('1.2.3.4', self.driver.api_executor.ip) self.assertEqual('443', self.driver.api_executor.port) self.assertTrue(self.driver.api_executor.ssl) @ddt.data({ 'fake_basic_info': fakes.FakeGetBasicInfoResponseTs_4_3_0(), 'expect_result': api.QnapAPIExecutorTS }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseTesTs_4_3_0(), 'expect_result': api.QnapAPIExecutorTS }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseTesEs_1_1_3(), 'expect_result': api.QnapAPIExecutor }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseTesEs_2_0_0(), 'expect_result': api.QnapAPIExecutor }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseTesEs_2_1_0(), 'expect_result': api.QnapAPIExecutor }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseEs_1_1_3(), 'expect_result': api.QnapAPIExecutor }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseEs_2_0_0(), 'expect_result': api.QnapAPIExecutor }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseEs_2_1_0(), 'expect_result': api.QnapAPIExecutor }) @ddt.unpack def test_create_api_executor(self, fake_basic_info, expect_result): """Test do_setup with different NAS model.""" fake_login_response = fakes.FakeLoginResponse() mock_connection = six.moves.http_client.HTTPSConnection mock_connection.return_value.getresponse.side_effect = [ fake_login_response, fake_basic_info, fake_login_response] self._do_setup('https://1.2.3.4:443', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') self.assertIsInstance(self.driver.api_executor, expect_result) @ddt.data({ 'fake_basic_info': fakes.FakeGetBasicInfoResponseTs_4_0_0(), 'expect_result': exception.ShareBackendException }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseTesTs_4_0_0(), 'expect_result': exception.ShareBackendException }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseTesEs_1_1_1(), 'expect_result': exception.ShareBackendException }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseTesEs_2_2_0(), 'expect_result': exception.ShareBackendException }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseEs_1_1_1(), 'expect_result': exception.ShareBackendException }, { 'fake_basic_info': fakes.FakeGetBasicInfoResponseEs_2_2_0(), 'expect_result': exception.ShareBackendException }) @ddt.unpack def test_create_api_executor_negative(self, fake_basic_info, expect_result): """Test do_setup with different NAS model.""" fake_login_response = fakes.FakeLoginResponse() mock_connection = six.moves.http_client.HTTPSConnection mock_connection.return_value.getresponse.side_effect = [ fake_login_response, fake_basic_info, fake_login_response] self.assertRaises( exception.ShareBackendException, self._do_setup, 'https://1.2.3.4:443', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1') def test_do_setup_with_exception(self): """Test do_setup with exception.""" fake_login_response = fakes.FakeLoginResponse() fake_get_basic_info_response_error = ( fakes.FakeGetBasicInfoResponseError()) mock_connection = six.moves.http_client.HTTPSConnection mock_connection.return_value.getresponse.side_effect = [ fake_login_response, 
fake_get_basic_info_response_error, fake_login_response] self.driver = qnap.QnapShareDriver( configuration=create_configuration( 'https://1.2.3.4:443', '1.2.3.4', 'admin', 'qnapadmin', 'Pool1')) self.assertRaises( exception.ShareBackendException, self.driver.do_setup, context='context') def test_check_for_setup_error(self): """Test do_setup with exception.""" self.driver = qnap.QnapShareDriver( configuration=create_configuration( 'https://1.2.3.4:443', '1.2.3.4', 'admin', 'qnapadmin', 'Pool1')) self.assertRaises( exception.ShareBackendException, self.driver.check_for_setup_error) @ddt.ddt class QnapShareDriverTestCase(QnapShareDriverBaseTestCase): """Tests share driver functions.""" def setUp(self): """Setup the Qnap Driver Base TestCase.""" super(QnapShareDriverTestCase, self).setUp() self.mock_object(qnap.QnapShareDriver, '_create_api_executor') self.share = fake_share.fake_share( share_proto='NFS', id='shareId', display_name='fakeDisplayName', export_locations=[{'path': '1.2.3.4:/share/fakeShareName'}], host='QnapShareDriver', size=10) def get_share_info_return_value(self): """Return the share info form get_share_info method.""" root = ET.fromstring(fakes.FAKE_RES_DETAIL_DATA_SHARE_INFO) share_list = root.find('Volume_Info') share_info_tree = share_list.findall('row') for share in share_info_tree: return share def get_snapshot_info_return_value(self): """Return the snapshot info form get_snapshot_info method.""" root = ET.fromstring(fakes.FAKE_RES_DETAIL_DATA_SNAPSHOT) snapshot_list = root.find('SnapshotList') snapshot_info_tree = snapshot_list.findall('row') for snapshot in snapshot_info_tree: return snapshot def get_specific_volinfo_return_value(self): """Return the volume info form get_specific_volinfo method.""" root = ET.fromstring(fakes.FAKE_RES_DETAIL_DATA_VOLUME_INFO) volume_list = root.find('Volume_Info') volume_info_tree = volume_list.findall('row') for volume in volume_info_tree: return volume def get_specific_poolinfo_return_value(self): """Get specific pool info.""" root = ET.fromstring(fakes.FAKE_RES_DETAIL_DATA_SPECIFIC_POOL_INFO) pool_list = root.find('Pool_Index') pool_info_tree = pool_list.findall('row') for pool in pool_info_tree: return pool def get_host_list_return_value(self): """Get host list.""" root = ET.fromstring(fakes.FAKE_RES_DETAIL_DATA_GET_HOST_LIST) hosts = [] host_list = root.find('host_list') host_tree = host_list.findall('host') for host in host_tree: hosts.append(host) return hosts @ddt.data({ 'fake_extra_spec': {}, 'expect_extra_spec': { 'qnap_thin_provision': True, 'qnap_compression': True, 'qnap_deduplication': False, 'qnap_ssd_cache': False } }, { 'fake_extra_spec': { 'thin_provisioning': u'true', 'compression': u'true', 'qnap_ssd_cache': u'true' }, 'expect_extra_spec': { 'qnap_thin_provision': True, 'qnap_compression': True, 'qnap_deduplication': False, 'qnap_ssd_cache': True } }, { 'fake_extra_spec': { 'thin_provisioning': u' False', 'compression': u' True', 'qnap_ssd_cache': u' True' }, 'expect_extra_spec': { 'qnap_thin_provision': False, 'qnap_compression': True, 'qnap_deduplication': False, 'qnap_ssd_cache': True } }, { 'fake_extra_spec': { 'thin_provisioning': u'true', 'dedupe': u' True', 'qnap_ssd_cache': u'False' }, 'expect_extra_spec': { 'qnap_thin_provision': True, 'qnap_compression': True, 'qnap_deduplication': True, 'qnap_ssd_cache': False } }, { 'fake_extra_spec': { 'thin_provisioning': u' False', 'compression': u'false', 'dedupe': u' False', 'qnap_ssd_cache': u' False' }, 'expect_extra_spec': { 'qnap_thin_provision': False, 
'qnap_compression': False, 'qnap_deduplication': False, 'qnap_ssd_cache': False } }) @ddt.unpack @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') @mock.patch.object(qnap.QnapShareDriver, '_gen_random_name') def test_create_share_positive( self, mock_gen_random_name, mock_get_location_path, fake_extra_spec, expect_extra_spec): """Test create share.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.side_effect = [ None, self.get_share_info_return_value()] mock_gen_random_name.return_value = 'fakeShareName' mock_api_executor.return_value.create_share.return_value = ( 'fakeCreateShareId') mock_get_location_path.return_value = None mock_private_storage = mock.Mock() self.mock_object(greenthread, 'sleep') self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value=fake_extra_spec)) self.driver.create_share('context', self.share) mock_api_return = mock_api_executor.return_value expected_call_list = [ mock.call('Storage Pool 1', vol_label='fakeShareName'), mock.call('Storage Pool 1', vol_label='fakeShareName')] self.assertEqual( expected_call_list, mock_api_return.get_share_info.call_args_list) mock_api_executor.return_value.create_share.assert_called_once_with( self.share, self.driver.configuration.qnap_poolname, 'fakeShareName', 'NFS', **expect_extra_spec) mock_get_location_path.assert_called_once_with( 'fakeShareName', 'NFS', '1.2.3.4', 'fakeNo') @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') @mock.patch.object(qnap.QnapShareDriver, '_gen_random_name') def test_create_share_negative_share_exist( self, mock_gen_random_name, mock_get_location_path): """Test create share.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_gen_random_name.return_value = 'fakeShareName' mock_get_location_path.return_value = None mock_private_storage = mock.Mock() self.mock_object(time, 'sleep') self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) self.assertRaises( exception.ShareBackendException, self.driver.create_share, context='context', share=self.share) @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') @mock.patch.object(qnap.QnapShareDriver, '_gen_random_name') def test_create_share_negative_create_fail( self, mock_gen_random_name, mock_get_location_path): """Test create share.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = None mock_gen_random_name.return_value = 'fakeShareName' mock_get_location_path.return_value = None mock_private_storage = mock.Mock() self.mock_object(time, 'sleep') self.mock_object(greenthread, 'sleep') self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) self.assertRaises( exception.ShareBackendException, self.driver.create_share, context='context', share=self.share) @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') @mock.patch.object(qnap.QnapShareDriver, '_gen_random_name') def 
test_create_share_negative_configuration( self, mock_gen_random_name, mock_get_location_path): """Test create share with invalid extra specs.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.side_effect = [ None, self.get_share_info_return_value()] mock_gen_random_name.return_value = 'fakeShareName' mock_get_location_path.return_value = None mock_private_storage = mock.Mock() self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value={ 'dedupe': 'true', 'thin_provisioning': 'false'})) self.assertRaises( exception.InvalidExtraSpec, self.driver.create_share, context='context', share=self.share) def test_delete_share_positive(self): """Test delete share with fake_share.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_api_executor.return_value.delete_share.return_value = ( 'fakeCreateShareId') mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeVolNo' self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.delete_share('context', self.share, share_server=None) mock_api_executor.return_value.get_share_info.assert_called_once_with( 'Storage Pool 1', vol_no='fakeVolNo') mock_api_executor.return_value.delete_share.assert_called_once_with( 'fakeNo') def test_delete_share_no_volid(self): """Test delete share with fake_share and no volID.""" mock_private_storage = mock.Mock() mock_private_storage.get.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.delete_share('context', self.share, share_server=None) mock_private_storage.get.assert_called_once_with( 'shareId', 'volID') def test_delete_share_no_delete_share(self): """Test delete share when the share is not found on the backend.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = None mock_api_executor.return_value.delete_share.return_value = ( 'fakeCreateShareId') mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeVolNo' self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.delete_share('context', self.share, share_server=None) mock_api_executor.return_value.get_share_info.assert_called_once_with( 'Storage Pool 1', vol_no='fakeVolNo') def test_extend_share(self): """Test extend share with fake_share.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_api_executor.return_value.edit_share.return_value = None mock_private_storage = mock.Mock() mock_private_storage.get.side_effect = [ 'fakeVolName', 'True', 'True', 'False', 'False'] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.extend_share(self.share, 100, share_server=None) expect_share_dict = { 'sharename': 'fakeVolName', 'old_sharename': 'fakeVolName', 'new_size': 100, 'thin_provision': True, 'compression': True, 'deduplication': False, 'ssd_cache': False, 'share_proto': 'NFS' } 
mock_api_executor.return_value.edit_share.assert_called_once_with( expect_share_dict) def test_extend_share_without_share_name(self): """Test extend share without share name.""" mock_private_storage = mock.Mock() mock_private_storage.get.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.ShareResourceNotFound, self.driver.extend_share, share=self.share, new_size=100, share_server=None) @mock.patch.object(qnap.QnapShareDriver, '_gen_random_name') def test_create_snapshot( self, mock_gen_random_name): """Test create snapshot with fake_snapshot.""" fake_snapshot = fakes.SnapshotClass( 10, 'fakeShareName@fakeSnapshotName') mock_gen_random_name.return_value = 'fakeSnapshotName' mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_snapshot_info.side_effect = [ None, self.get_snapshot_info_return_value()] mock_api_executor.return_value.create_snapshot_api.return_value = ( 'fakeCreateShareId') mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeVolId' self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.create_snapshot( 'context', fake_snapshot, share_server=None) mock_api_return = mock_api_executor.return_value expected_call_list = [ mock.call(volID='fakeVolId', snapshot_name='fakeSnapshotName'), mock.call(volID='fakeVolId', snapshot_name='fakeSnapshotName')] self.assertEqual( expected_call_list, mock_api_return.get_snapshot_info.call_args_list) mock_api_return.create_snapshot_api.assert_called_once_with( 'fakeVolId', 'fakeSnapshotName') def test_create_snapshot_without_volid(self): """Test create snapshot with fake_snapshot.""" fake_snapshot = fakes.SnapshotClass(10, None) mock_private_storage = mock.Mock() mock_private_storage.get.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.ShareResourceNotFound, self.driver.create_snapshot, context='context', snapshot=fake_snapshot, share_server=None) def test_delete_snapshot(self): """Test delete snapshot with fakeSnapshot.""" fake_snapshot = fakes.SnapshotClass( 10, 'fakeShareName@fakeSnapshotName') mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.delete_snapshot_api.return_value = ( 'fakeCreateShareId') mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeSnapshotId' self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.delete_snapshot( 'context', fake_snapshot, share_server=None) mock_api_return = mock_api_executor.return_value mock_api_return.delete_snapshot_api.assert_called_once_with( 'fakeShareName@fakeSnapshotName') def test_delete_snapshot_without_snapshot_id(self): """Test delete snapshot with fakeSnapshot and no snapshot id.""" fake_snapshot = fakes.SnapshotClass(10, None) mock_private_storage = mock.Mock() mock_private_storage.get.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.delete_snapshot( 'context', fake_snapshot, share_server=None) mock_private_storage.get.assert_called_once_with( 'fakeSnapshotId', 'snapshot_id') @mock.patch.object(qnap.QnapShareDriver, 
'_get_location_path') @mock.patch('manila.share.API') @mock.patch.object(qnap.QnapShareDriver, '_gen_random_name') def test_create_share_from_snapshot( self, mock_gen_random_name, mock_share_api, mock_get_location_path): """Test create share from snapshot.""" fake_snapshot = fakes.SnapshotClass( 10, 'fakeShareName@fakeSnapshotName') mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_gen_random_name.return_value = 'fakeShareName' mock_api_executor.return_value.get_share_info.side_effect = [ None, self.get_share_info_return_value()] mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeSnapshotId' mock_share_api.return_value.get.return_value = {'size': 10} self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.create_share_from_snapshot( 'context', self.share, fake_snapshot, share_server=None) mock_gen_random_name.assert_called_once_with( 'share') mock_api_return = mock_api_executor.return_value expected_call_list = [ mock.call('Storage Pool 1', vol_label='fakeShareName'), mock.call('Storage Pool 1', vol_label='fakeShareName')] self.assertEqual( expected_call_list, mock_api_return.get_share_info.call_args_list) mock_api_return.clone_snapshot.assert_called_once_with( 'fakeShareName@fakeSnapshotName', 'fakeShareName', 10) @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') @mock.patch.object(qnap.QnapShareDriver, '_gen_random_name') def test_create_share_from_snapshot_diff_size( self, mock_gen_random_name, mock_get_location_path): """Test create share from snapshot.""" fake_snapshot = fakes.SnapshotClass( 10, 'fakeShareName@fakeSnapshotName') mock_gen_random_name.return_value = 'fakeShareName' mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.side_effect = [ None, self.get_share_info_return_value()] mock_private_storage = mock.Mock() mock_private_storage.get.side_effect = [ 'True', 'True', 'False', 'False', 'fakeVolName'] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.create_share_from_snapshot( 'context', self.share, fake_snapshot, share_server=None) mock_gen_random_name.assert_called_once_with( 'share') mock_api_return = mock_api_executor.return_value expected_call_list = [ mock.call('Storage Pool 1', vol_label='fakeShareName'), mock.call('Storage Pool 1', vol_label='fakeShareName')] self.assertEqual( expected_call_list, mock_api_return.get_share_info.call_args_list) mock_api_return.clone_snapshot.assert_called_once_with( 'fakeShareName@fakeSnapshotName', 'fakeShareName', 10) def test_create_share_from_snapshot_without_snapshot_id(self): """Test create share from snapshot.""" fake_snapshot = fakes.SnapshotClass(10, None) mock_private_storage = mock.Mock() mock_private_storage.get.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.SnapshotResourceNotFound, self.driver.create_share_from_snapshot, context='context', share=self.share, snapshot=fake_snapshot, share_server=None) @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') @mock.patch('manila.share.API') @mock.patch.object(qnap.QnapShareDriver, '_gen_random_name') def test_create_share_from_snapshot_negative_name_exist( self, mock_gen_random_name, mock_share_api, mock_get_location_path): """Test create share from snapshot.""" 
fake_snapshot = fakes.SnapshotClass( 10, 'fakeShareName@fakeSnapshotName') mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_gen_random_name.return_value = 'fakeShareName' mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeSnapshotId' mock_share_api.return_value.get.return_value = {'size': 10} self.mock_object(time, 'sleep') self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.ShareBackendException, self.driver.create_share_from_snapshot, context='context', share=self.share, snapshot=fake_snapshot, share_server=None) @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') @mock.patch('manila.share.API') @mock.patch.object(qnap.QnapShareDriver, '_gen_random_name') def test_create_share_from_snapshot_negative_clone_fail( self, mock_gen_random_name, mock_share_api, mock_get_location_path): """Test create share from snapshot.""" fake_snapshot = fakes.SnapshotClass( 10, 'fakeShareName@fakeSnapshotName') mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_gen_random_name.return_value = 'fakeShareName' mock_api_executor.return_value.get_share_info.return_value = None mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeSnapshotId' mock_share_api.return_value.get.return_value = {'size': 10} self.mock_object(time, 'sleep') self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.ShareBackendException, self.driver.create_share_from_snapshot, context='context', share=self.share, snapshot=fake_snapshot, share_server=None) @mock.patch.object(qnap.QnapShareDriver, '_get_timestamp_from_vol_name') @mock.patch.object(qnap.QnapShareDriver, '_allow_access') @ddt.data('fakeHostName', 'fakeHostNameNotMatch') def test_update_access_allow_access( self, fakeHostName, mock_allow_access, mock_get_timestamp_from_vol_name): """Test update access with allow access rules.""" mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeVolName' mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_host_list.return_value = ( self.get_host_list_return_value()) mock_api_executor.return_value.set_nfs_access.return_value = None mock_api_executor.return_value.delete_host.return_value = None mock_allow_access.return_value = None mock_get_timestamp_from_vol_name.return_value = fakeHostName self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.update_access( 'context', self.share, 'access_rules', None, None, share_server=None) mock_api_executor.return_value.set_nfs_access.assert_called_once_with( 'fakeVolName', 2, 'all') @mock.patch.object(qnap.QnapShareDriver, '_allow_access') @mock.patch.object(qnap.QnapShareDriver, '_deny_access') def test_update_access_deny_and_allow_access( self, mock_deny_access, mock_allow_access): """Test update access with deny and allow access rules.""" mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeVolName' mock_deny_access.return_value = None mock_allow_access.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) delete_rules = [] 
delete_rules.append('access1') add_rules = [] add_rules.append('access1') self.driver.update_access( 'context', self.share, None, add_rules, delete_rules, share_server=None) mock_deny_access.assert_called_once_with( 'context', self.share, 'access1', None) mock_allow_access.assert_called_once_with( 'context', self.share, 'access1', None) def test_update_access_without_volname(self): """Test update access without volName.""" mock_private_storage = mock.Mock() mock_private_storage.get.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.ShareResourceNotFound, self.driver.update_access, context='context', share=self.share, access_rules='access_rules', add_rules=None, delete_rules=None, share_server=None) @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') def test_manage_existing_nfs( self, mock_get_location_path): """Test manage existing.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_private_storage = mock.Mock() mock_private_storage.update.return_value = None mock_private_storage.get.side_effect = [ 'fakeVolId', 'fakeVolName'] mock_api_executor.return_value.get_specific_volinfo.return_value = ( self.get_specific_volinfo_return_value()) mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_get_location_path.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value={})) self.driver.manage_existing(self.share, 'driver_options') mock_api_return = mock_api_executor.return_value mock_api_return.get_share_info.assert_called_once_with( 'Storage Pool 1', vol_label='fakeShareName') mock_api_return.get_specific_volinfo.assert_called_once_with( 'fakeNo') mock_get_location_path.assert_called_once_with( 'fakeShareName', 'NFS', '1.2.3.4', 'fakeNo') @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') def test_manage_existing_nfs_negative_configuration( self, mock_get_location_path): """Test manage existing with invalid extra specs.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_private_storage = mock.Mock() mock_private_storage.update.return_value = None mock_private_storage.get.side_effect = [ 'fakeVolId', 'fakeVolName'] mock_api_executor.return_value.get_specific_volinfo.return_value = ( self.get_specific_volinfo_return_value()) mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_get_location_path.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.mock_object(share_types, 'get_extra_specs_from_share', mock.Mock(return_value={ 'dedupe': 'true', 'thin_provisioning': 'false'})) self.assertRaises( exception.InvalidExtraSpec, self.driver.manage_existing, share=self.share, driver_options='driver_options') def test_manage_invalid_protocol(self): """Test manage existing with an invalid share protocol.""" share = fake_share.fake_share( share_proto='fakeProtocol', id='fakeId', display_name='fakeDisplayName', export_locations=[{'path': ''}], host='QnapShareDriver', size=10) mock_private_storage = mock.Mock() 
self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.InvalidInput, self.driver.manage_existing, share=share, driver_options='driver_options') def test_manage_existing_nfs_without_export_locations(self): """Test manage existing nfs without export locations.""" share = fake_share.fake_share( share_proto='NFS', id='fakeId', display_name='fakeDisplayName', export_locations=[{'path': ''}], host='QnapShareDriver', size=10) mock_private_storage = mock.Mock() self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.ShareBackendException, self.driver.manage_existing, share=share, driver_options='driver_options') @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') def test_manage_existing_nfs_ip_not_equal_share_ip( self, mock_get_location_path): """Test manage existing with nfs ip not equal to share ip.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_private_storage = mock.Mock() mock_private_storage.update.return_value = None mock_private_storage.get.side_effect = [ 'fakeVolId', 'fakeVolName'] mock_api_executor.return_value.get_specific_volinfo.return_value = ( self.get_specific_volinfo_return_value()) mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_get_location_path.return_value = None self._do_setup('http://1.2.3.4:8080', '1.1.1.1', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.ShareBackendException, self.driver.manage_existing, share=self.share, driver_options='driver_options') @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') def test_manage_existing_nfs_without_existing_share( self, mock_get_location_path): """Test manage existing nfs without existing share.""" mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_private_storage = mock.Mock() mock_private_storage.update.return_value = None mock_private_storage.get.side_effect = [ 'fakeVolId', 'fakeVolName'] mock_api_executor.return_value.get_specific_volinfo.return_value = ( self.get_specific_volinfo_return_value()) mock_api_executor.return_value.get_share_info.return_value = ( None) mock_get_location_path.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.ManageInvalidShare, self.driver.manage_existing, share=self.share, driver_options='driver_options') def test_unmanage(self): """Test unmanage.""" mock_private_storage = mock.Mock() mock_private_storage.delete.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.unmanage(self.share) mock_private_storage.delete.assert_called_once_with( 'shareId') @mock.patch.object(qnap.QnapShareDriver, '_get_location_path') def test_manage_existing_snapshot( self, mock_get_location_path): """Test manage existing snapshot.""" fake_snapshot = fakes.SnapshotClass( 10, 'fakeShareName@fakeSnapshotName') mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) 
mock_api_executor.return_value.get_snapshot_info.return_value = ( self.get_snapshot_info_return_value()) mock_private_storage = mock.Mock() mock_private_storage.update.return_value = None mock_private_storage.get.side_effect = [ 'fakeVolId', 'fakeVolName'] self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.manage_existing_snapshot(fake_snapshot, 'driver_options') mock_api_return = mock_api_executor.return_value mock_api_return.get_share_info.assert_called_once_with( 'Storage Pool 1', vol_no='fakeVolId') fake_metadata = { 'snapshot_id': 'fakeShareName@fakeSnapshotName'} mock_private_storage.update.assert_called_once_with( 'fakeSnapshotId', fake_metadata) def test_unmanage_snapshot(self): """Test unmanage snapshot.""" fake_snapshot = fakes.SnapshotClass( 10, 'fakeShareName@fakeSnapshotName') mock_private_storage = mock.Mock() mock_private_storage.delete.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver.unmanage_snapshot(fake_snapshot) mock_private_storage.delete.assert_called_once_with( 'fakeSnapshotId') @ddt.data( {'expect_result': 'manila-shr-fake_time', 'test_string': 'share'}, {'expect_result': 'manila-snp-fake_time', 'test_string': 'snapshot'}, {'expect_result': 'manila-hst-fake_time', 'test_string': 'host'}, {'expect_result': 'manila-fake_time', 'test_string': ''}) @ddt.unpack @mock.patch('oslo_utils.timeutils.utcnow') def test_gen_random_name( self, mock_utcnow, expect_result, test_string): """Test gen random name.""" mock_private_storage = mock.Mock() mock_utcnow.return_value.strftime.return_value = 'fake_time' self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertEqual( expect_result, self.driver._gen_random_name(test_string)) def test_get_location_path(self): """Test get location path name.""" mock_private_storage = mock.Mock() mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_share_info.return_value = ( self.get_share_info_return_value()) mock_api_executor.return_value.get_specific_volinfo.return_value = ( self.get_specific_volinfo_return_value()) self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) location = 'fakeIp:fakeMountPath' expect_result = { 'path': location, 'is_admin_only': False, } self.assertEqual( expect_result, self.driver._get_location_path( 'fakeShareName', 'NFS', 'fakeIp', 'fakeVolId')) self.assertRaises( exception.InvalidInput, self.driver._get_location_path, share_name='fakeShareName', share_proto='fakeProto', ip='fakeIp', vol_id='fakeVolId') def test_update_share_stats(self): """Test update share stats.""" mock_private_storage = mock.Mock() mock_api_return = ( qnap.QnapShareDriver._create_api_executor.return_value) mock_api_return.get_specific_poolinfo.return_value = ( self.get_specific_poolinfo_return_value()) mock_api_return.get_share_info.return_value = ( self.get_share_info_return_value()) mock_api_return.get_specific_volinfo.return_value = ( self.get_specific_volinfo_return_value()) self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver._update_share_stats() mock_api_return.get_specific_poolinfo.assert_called_once_with( self.driver.configuration.qnap_poolname) def 
test_get_vol_host(self): """Test get manila host IPV4s.""" mock_private_storage = mock.Mock() self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) expect_host_dict_ips = [] host_list = self.get_host_list_return_value() for host in host_list: host_dict = { 'index': host.find('index').text, 'hostid': host.find('hostid').text, 'name': host.find('name').text, 'ipv4': [host.find('netaddrs').find('ipv4').text] } expect_host_dict_ips.append(host_dict) self.assertEqual( expect_host_dict_ips, self.driver._get_vol_host( host_list, 'fakeHostName')) @mock.patch.object(qnap.QnapShareDriver, '_gen_host_name') @mock.patch.object(qnap.QnapShareDriver, '_get_timestamp_from_vol_name') @mock.patch.object(qnap.QnapShareDriver, '_check_share_access') def test_allow_access_ro( self, mock_check_share_access, mock_get_timestamp_from_vol_name, mock_gen_host_name): """Test allow_access with access type ro.""" fake_access = fakes.AccessClass('fakeAccessType', 'ro', 'fakeIp') mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeVolName' mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_host_list.return_value = [] mock_get_timestamp_from_vol_name.return_value = 'fakeHostName' mock_gen_host_name.return_value = 'manila-fakeHostName-ro' mock_api_executor.return_value.add_host.return_value = None mock_api_executor.return_value.set_nfs_access.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver._allow_access( 'context', self.share, fake_access, share_server=None) mock_check_share_access.assert_called_once_with( 'NFS', 'fakeAccessType') mock_api_executor.return_value.add_host.assert_called_once_with( 'manila-fakeHostName-ro', 'fakeIp') @mock.patch.object(qnap.QnapShareDriver, '_gen_host_name') @mock.patch.object(qnap.QnapShareDriver, '_get_timestamp_from_vol_name') @mock.patch.object(qnap.QnapShareDriver, '_check_share_access') def test_allow_access_ro_with_hostlist( self, mock_check_share_access, mock_get_timestamp_from_vol_name, mock_gen_host_name): """Test allow_access_ro_with_hostlist.""" host_dict_ips = [] for host in self.get_host_list_return_value(): if host.find('netaddrs/ipv4').text is not None: host_dict = { 'index': host.find('index').text, 'hostid': host.find('hostid').text, 'name': host.find('name').text, 'ipv4': [host.find('netaddrs').find('ipv4').text]} host_dict_ips.append(host_dict) for host in host_dict_ips: fake_access_to = host['ipv4'] fake_access = fakes.AccessClass( 'fakeAccessType', 'ro', fake_access_to) mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeVolName' mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_host_list.return_value = ( self.get_host_list_return_value()) mock_get_timestamp_from_vol_name.return_value = 'fakeHostName' mock_gen_host_name.return_value = 'manila-fakeHostName' mock_api_executor.return_value.set_nfs_access.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver._allow_access( 'context', self.share, fake_access, share_server=None) mock_check_share_access.assert_called_once_with( 'NFS', 'fakeAccessType') @mock.patch.object(qnap.QnapShareDriver, '_gen_host_name') @mock.patch.object(qnap.QnapShareDriver, '_get_timestamp_from_vol_name') 
@mock.patch.object(qnap.QnapShareDriver, '_check_share_access') def test_allow_access_rw_with_hostlist_invalid_access( self, mock_check_share_access, mock_get_timestamp_from_vol_name, mock_gen_host_name): """Test allow_access_rw_invalid_access.""" host_dict_ips = [] for host in self.get_host_list_return_value(): if host.find('netaddrs/ipv4').text is not None: host_dict = { 'index': host.find('index').text, 'hostid': host.find('hostid').text, 'name': host.find('name').text, 'ipv4': [host.find('netaddrs').find('ipv4').text]} host_dict_ips.append(host_dict) for host in host_dict_ips: fake_access_to = host['ipv4'] fake_access = fakes.AccessClass( 'fakeAccessType', 'rw', fake_access_to) mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeVolName' mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_host_list.return_value = ( self.get_host_list_return_value()) mock_get_timestamp_from_vol_name.return_value = 'fakeHostName' mock_gen_host_name.return_value = 'manila-fakeHostName-rw' self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.InvalidShareAccess, self.driver._allow_access, context='context', share=self.share, access=fake_access, share_server=None) @mock.patch.object(qnap.QnapShareDriver, '_get_timestamp_from_vol_name') @mock.patch.object(qnap.QnapShareDriver, '_check_share_access') def test_allow_access_rw( self, mock_check_share_access, mock_get_timestamp_from_vol_name): """Test allow_access with access type rw.""" fake_access = fakes.AccessClass('fakeAccessType', 'rw', 'fakeIp') mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeVolName' mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_host_list.return_value = [] mock_get_timestamp_from_vol_name.return_value = 'fakeHostName' mock_api_executor.return_value.add_host.return_value = None mock_api_executor.return_value.set_nfs_access.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver._allow_access( 'context', self.share, fake_access, share_server=None) mock_check_share_access.assert_called_once_with( 'NFS', 'fakeAccessType') mock_api_executor.return_value.add_host.assert_called_once_with( 'manila-fakeHostName-rw', 'fakeIp') @mock.patch.object(qnap.QnapShareDriver, '_gen_host_name') @mock.patch.object(qnap.QnapShareDriver, '_check_share_access') def test_allow_access_ro_without_hostlist( self, mock_check_share_access, mock_gen_host_name): """Test allow access without host list.""" fake_access = fakes.AccessClass('fakeAccessType', 'ro', 'fakeIp') mock_private_storage = mock.Mock() mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_host_list.return_value = None mock_gen_host_name.return_value = 'fakeHostName' mock_api_executor.return_value.add_host.return_value = None mock_api_executor.return_value.set_nfs_access.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) share_name = self.driver._gen_random_name('share') mock_private_storage.get.return_value = share_name self.driver._allow_access( 'context', self.share, fake_access, share_server=None) mock_check_share_access.assert_called_once_with( 'NFS', 'fakeAccessType') 
mock_api_executor.return_value.add_host.assert_called_once_with( 'fakeHostName', 'fakeIp') @mock.patch.object(qnap.QnapShareDriver, '_get_vol_host') @mock.patch.object(qnap.QnapShareDriver, '_gen_host_name') @mock.patch.object(qnap.QnapShareDriver, '_get_timestamp_from_vol_name') @mock.patch.object(qnap.QnapShareDriver, '_check_share_access') def test_deny_access_with_hostlist( self, mock_check_share_access, mock_get_timestamp_from_vol_name, mock_gen_host_name, mock_get_vol_host): """Test deny access.""" host_dict_ips = [] for host in self.get_host_list_return_value(): if host.find('netaddrs/ipv4').text is not None: host_dict = { 'index': host.find('index').text, 'hostid': host.find('hostid').text, 'name': host.find('name').text, 'ipv4': [host.find('netaddrs').find('ipv4').text]} host_dict_ips.append(host_dict) for host in host_dict_ips: fake_access_to = host['ipv4'][0] fake_access = fakes.AccessClass('fakeAccessType', 'ro', fake_access_to) mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'vol_name' mock_api_return = ( qnap.QnapShareDriver._create_api_executor.return_value) mock_api_return.get_host_list.return_value = ( self.get_host_list_return_value()) mock_get_timestamp_from_vol_name.return_value = 'fakeTimeStamp' mock_gen_host_name.return_value = 'manila-fakeHostName' mock_get_vol_host.return_value = host_dict_ips mock_api_return.add_host.return_value = None mock_api_return.set_nfs_access.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver._deny_access( 'context', self.share, fake_access, share_server=None) mock_check_share_access.assert_called_once_with( 'NFS', 'fakeAccessType') @mock.patch.object(qnap.QnapShareDriver, '_get_timestamp_from_vol_name') @mock.patch.object(qnap.QnapShareDriver, '_check_share_access') def test_deny_access_with_hostlist_not_equal_access_to( self, mock_check_share_access, mock_get_timestamp_from_vol_name): """Test deny access with hostlist not equal to access_to.""" fake_access = fakes.AccessClass('fakeAccessType', 'ro', 'fakeIp') mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'vol_name' mock_api_return = ( qnap.QnapShareDriver._create_api_executor.return_value) mock_api_return.get_host_list.return_value = ( self.get_host_list_return_value()) mock_api_return.add_host.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.driver._deny_access( 'context', self.share, fake_access, share_server=None) mock_check_share_access.assert_called_once_with( 'NFS', 'fakeAccessType') @mock.patch.object(qnap.QnapShareDriver, '_get_timestamp_from_vol_name') @mock.patch.object(qnap.QnapShareDriver, '_check_share_access') def test_deny_access_without_hostlist( self, mock_check_share_access, mock_get_timestamp_from_vol_name): """Test deny access without hostlist.""" fake_access = fakes.AccessClass('fakeAccessType', 'ro', 'fakeIp') mock_private_storage = mock.Mock() mock_private_storage.get.return_value = 'fakeVolName' mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_host_list.return_value = None mock_get_timestamp_from_vol_name.return_value = 'fakeHostName' mock_api_executor.return_value.add_host.return_value = None mock_api_executor.return_value.set_nfs_access.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) 
self.driver._deny_access( 'context', self.share, fake_access, share_server=None) mock_check_share_access.assert_called_once_with( 'NFS', 'fakeAccessType') @ddt.data('NFS', 'CIFS', 'proto') def test_check_share_access(self, test_proto): """Test check_share_access.""" mock_private_storage = mock.Mock() mock_api_executor = qnap.QnapShareDriver._create_api_executor mock_api_executor.return_value.get_host_list.return_value = None mock_api_executor.return_value.add_host.return_value = None mock_api_executor.return_value.set_nfs_access.return_value = None self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', 'Storage Pool 1', private_storage=mock_private_storage) self.assertRaises( exception.InvalidShareAccess, self.driver._check_share_access, share_proto=test_proto, access_type='notser') def test_get_ts_model_pool_id(self): """Test get ts model pool id.""" mock_private_storage = mock.Mock() self._do_setup('http://1.2.3.4:8080', '1.2.3.4', 'admin', 'qnapadmin', '1', private_storage=mock_private_storage) self.assertEqual('1', self.driver._get_ts_model_pool_id('1')) manila-10.0.0/manila/tests/share/drivers/qnap/fakes.py0000664000175000017500000005474713656750227022676 0ustar zuulzuul00000000000000# Copyright (c) 2016 QNAP Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
FAKE_RES_DETAIL_DATA_LOGIN = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ES_1_1_1 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ES_1_1_3 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ES_2_0_0 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ES_2_1_0 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ES_2_2_0 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TS_4_0_0 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TS_4_3_0 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_TS_4_0_0 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_TS_4_3_0 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_ES_1_1_1 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_ES_1_1_3 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_ES_2_0_0 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_ES_2_1_0 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_ES_2_2_0 = """ """ FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ERROR = """ """ FAKE_RES_DETAIL_DATA_SHARE_INFO = """ """ FAKE_RES_DETAIL_DATA_VOLUME_INFO = """ fakeMountPath """ FAKE_RES_DETAIL_DATA_SNAPSHOT = """ 10 """ FAKE_RES_DETAIL_DATA_SPECIFIC_POOL_INFO = """ """ FAKE_RES_DETAIL_DATA_GET_HOST_LIST = """ """ FAKE_RES_DETAIL_DATA_CREATE_SHARE = """ """ FAKE_RES_DETAIL_DATA_ES_RET_CODE_NEGATIVE = """ """ FAKE_RES_DETAIL_DATA_RESULT_NEGATIVE = """ """ FAKE_RES_DETAIL_DATA_AUTHPASS_FAIL = """ """ FAKE_RES_DETAIL_DATA_DELETE_SHARE = """ 0 """ FAKE_RES_DETAIL_DATA_DELETE_SNAPSHOT = """ 0 """ FAKE_RES_DETAIL_DATA_DELETE_SNAPSHOT_SNAPSHOT_NOT_EXIST = """ -206021 """ FAKE_RES_DETAIL_DATA_DELETE_SNAPSHOT_SHARE_NOT_EXIST = """ -200005 """ FAKE_RES_DETAIL_DATA_GET_HOST_LIST_API = """ """ FAKE_RES_DETAIL_DATA_GET_NO_HOST_LIST_API = """ """ FAKE_RES_DETAIL_DATA_CREATE_SNAPSHOT = """ """ class SnapshotClass(object): """Snapshot Class.""" size = 0 provider_location = 'fakeShareName@fakeSnapshotName' def __init__(self, size, provider_location=None): """Init.""" self.size = size self.provider_location = provider_location def get(self, provider_location): """Get function.""" return self.provider_location def __getitem__(self, arg): """Getitem.""" return { 'display_name': 'fakeSnapshotDisplayName', 'id': 'fakeSnapshotId', 'share': {'share_id': 'fakeShareId', 'id': 'fakeId'}, 'share_instance': {'share_id': 'fakeShareId', 'id': 'fakeId'}, 'size': self.size, 'share_instance_id': 'fakeShareId' }[arg] def __setitem__(self, key, value): """Setitem.""" if key == 'provider_location': self.provider_location = value class ShareNfsClass(object): """Share Class.""" share_proto = 'NFS' id = '' size = 0 def __init__(self, share_id, size): """Init.""" self.id = share_id self.size = size def __getitem__(self, arg): """Getitem.""" return { 'share_proto': self.share_proto, 'id': self.id, 'display_name': 'fakeDisplayName', 'export_locations': [{'path': '1.2.3.4:/share/fakeShareName'}], 'host': 'QnapShareDriver', 'size': self.size }[arg] def __setitem__(self, key, value): """Setitem.""" if key == 'share_proto': self.share_proto = value class ShareCifsClass(object): """Share Class.""" share_proto = 'CIFS' id = '' size = 0 def __init__(self, share_id, size): """Init.""" self.id = share_id self.size = size def __getitem__(self, arg): """Getitem.""" return { 'share_proto': self.share_proto, 'id': self.id, 'display_name': 'fakeDisplayName', 'export_locations': [{'path': '\\\\1.2.3.4\\fakeShareName'}], 'host': 'QnapShareDriver', 'size': self.size }[arg] def __setitem__(self, key, value): """Setitem.""" if key == 'share_proto': self.share_proto = value class AccessClass(object): """Access Class.""" access_type = 'fakeAccessType' access_level = 'ro' 
access_to = 'fakeIp' def __init__(self, access_type, access_level, access_to): """Init.""" self.access_type = access_type self.access_level = access_level self.access_to = access_to def __getitem__(self, arg): """Getitem.""" return { 'access_type': self.access_type, 'access_level': self.access_level, 'access_to': self.access_to, }[arg] class FakeGetBasicInfoResponseEs_1_1_1(object): """Fake GetBasicInfo response from ES nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ES_1_1_1 class FakeGetBasicInfoResponseEs_1_1_3(object): """Fake GetBasicInfo response from ES nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ES_1_1_3 class FakeGetBasicInfoResponseEs_2_0_0(object): """Fake GetBasicInfo response from ES nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ES_2_0_0 class FakeGetBasicInfoResponseEs_2_1_0(object): """Fake GetBasicInfo response from ES nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ES_2_1_0 class FakeGetBasicInfoResponseEs_2_2_0(object): """Fake GetBasicInfo response from ES nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ES_2_2_0 class FakeGetBasicInfoResponseTs_4_0_0(object): """Fake GetBasicInfoTS response from TS nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TS_4_0_0 class FakeGetBasicInfoResponseTs_4_3_0(object): """Fake GetBasicInfoTS response from TS nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TS_4_3_0 class FakeGetBasicInfoResponseTesTs_4_0_0(object): """Fake GetBasicInfoTS response from TS nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_TS_4_0_0 class FakeGetBasicInfoResponseTesTs_4_3_0(object): """Fake GetBasicInfoTS response from TS nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_TS_4_3_0 class FakeGetBasicInfoResponseTesEs_1_1_1(object): """Fake GetBasicInfoTS response from TS nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_ES_1_1_1 class FakeGetBasicInfoResponseTesEs_1_1_3(object): """Fake GetBasicInfoTS response from TS nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_ES_1_1_3 class FakeGetBasicInfoResponseTesEs_2_0_0(object): """Fake GetBasicInfoTS response from TS nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_ES_2_0_0 class FakeGetBasicInfoResponseTesEs_2_1_0(object): """Fake GetBasicInfoTS response from TS nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_ES_2_1_0 class FakeGetBasicInfoResponseTesEs_2_2_0(object): """Fake GetBasicInfoTS response from TS nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_TES_ES_2_2_0 class FakeGetBasicInfoResponseError(object): """Fake GetBasicInfoTS response from TS nas.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GETBASIC_INFO_ERROR class 
FakeCreateShareResponse(object): """Fake login response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_CREATE_SHARE class FakeDeleteShareResponse(object): """Fake login response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_DELETE_SHARE class FakeDeleteSnapshotResponse(object): """Fake delete snapshot response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_DELETE_SNAPSHOT class FakeDeleteSnapshotResponseSnapshotNotExist(object): """Fake delete snapshot response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_DELETE_SNAPSHOT_SNAPSHOT_NOT_EXIST class FakeDeleteSnapshotResponseShareNotExist(object): """Fake delete snapshot response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_DELETE_SNAPSHOT_SHARE_NOT_EXIST class FakeGetHostListResponse(object): """Fake host info response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GET_HOST_LIST_API class FakeGetNoHostListResponse(object): """Fake host info response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_GET_NO_HOST_LIST_API class FakeAuthPassFailResponse(object): """Fake pool info response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_AUTHPASS_FAIL class FakeEsResCodeNegativeResponse(object): """Fake pool info response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_ES_RET_CODE_NEGATIVE class FakeResultNegativeResponse(object): """Fake pool info response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_RESULT_NEGATIVE class FakeLoginResponse(object): """Fake login response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_LOGIN class FakeSpecificPoolInfoResponse(object): """Fake pool info response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_SPECIFIC_POOL_INFO class FakeShareInfoResponse(object): """Fake pool info response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_SHARE_INFO class FakeSnapshotInfoResponse(object): """Fake pool info response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_SNAPSHOT class FakeSpecificVolInfoResponse(object): """Fake pool info response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_VOLUME_INFO class FakeCreateSnapshotResponse(object): """Fake pool info response.""" status = 'fackStatus' def read(self): """Mock response.read.""" return FAKE_RES_DETAIL_DATA_CREATE_SNAPSHOT manila-10.0.0/manila/tests/share/drivers/inspur/0000775000175000017500000000000013656750362021573 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/inspur/__init__.py0000664000175000017500000000000013656750227023672 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/inspur/instorage/0000775000175000017500000000000013656750362023566 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/inspur/instorage/__init__.py0000664000175000017500000000000013656750227025665 0ustar 
zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/inspur/instorage/test_instorage.py0000664000175000017500000014457113656750227027206 0ustar zuulzuul00000000000000# Copyright 2019 Inspur Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Share driver test for Inspur InStorage """ from unittest import mock import ddt from eventlet import greenthread from oslo_concurrency import processutils from oslo_config import cfg import paramiko from manila import context from manila import exception from manila.share import driver from manila.share.drivers.inspur.instorage import cli_helper from manila.share.drivers.inspur.instorage import instorage from manila import test from manila.tests import fake_share from manila import utils as manila_utils CONF = cfg.CONF class FakeConfig(object): def __init__(self, *args, **kwargs): self.driver_handles_share_servers = False self.share_driver = 'fake_share_driver_name' self.share_backend_name = 'fake_instorage' self.instorage_nas_ip = kwargs.get( 'instorage_nas_ip', 'some_ip') self.instorage_nas_port = kwargs.get( 'instorage_nas_port', 'some_port') self.instorage_nas_login = kwargs.get( 'instorage_nas_login', 'username') self.instorage_nas_password = kwargs.get( 'instorage_nas_password', 'password') self.instorage_nas_pools = kwargs.get( 'instorage_nas_pools', ['fakepool']) self.network_config_group = kwargs.get( "network_config_group", "fake_network_config_group") self.admin_network_config_group = kwargs.get( "admin_network_config_group", "fake_admin_network_config_group") self.config_group = kwargs.get("config_group", "fake_config_group") self.reserved_share_percentage = kwargs.get( "reserved_share_percentage", 0) self.max_over_subscription_ratio = kwargs.get( "max_over_subscription_ratio", 0) self.filter_function = kwargs.get("filter_function", None) self.goodness_function = kwargs.get("goodness_function", None) def safe_get(self, key): return getattr(self, key) def append_config_values(self, *args, **kwargs): pass @ddt.ddt class InStorageShareDriverTestCase(test.TestCase): def __init__(self, *args, **kwargs): super(InStorageShareDriverTestCase, self).__init__(*args, **kwargs) self._ctxt = context.get_admin_context() self.configuration = FakeConfig() self.share = fake_share.fake_share() self.share_instance = fake_share.fake_share_instance( self.share, host='H@B#P' ) def setUp(self): self.mock_object(instorage.CONF, '_check_required_opts') self.driver = instorage.InStorageShareDriver( configuration=self.configuration ) super(InStorageShareDriverTestCase, self).setUp() def test_check_for_setup_error_failed_no_nodes(self): mock_gni = mock.Mock(return_value={}) self.mock_object( instorage.InStorageAssistant, 'get_nodes_info', mock_gni ) self.assertRaises( exception.ShareBackendException, self.driver.check_for_setup_error ) def test_check_for_setup_error_failed_pool_invalid(self): mock_gni = mock.Mock(return_value={'node1': {}}) self.mock_object( instorage.InStorageAssistant, 'get_nodes_info', mock_gni ) mock_gap = 
mock.Mock(return_value=['pool0']) self.mock_object( instorage.InStorageAssistant, 'get_available_pools', mock_gap ) self.assertRaises( exception.InvalidParameterValue, self.driver.check_for_setup_error ) def test_check_for_setup_error_success(self): mock_gni = mock.Mock(return_value={'node1': {}}) self.mock_object( instorage.InStorageAssistant, 'get_nodes_info', mock_gni ) mock_gap = mock.Mock(return_value=['fakepool', 'pool0']) self.mock_object( instorage.InStorageAssistant, 'get_available_pools', mock_gap ) self.driver.check_for_setup_error() mock_gni.assert_called_once() mock_gap.assert_called_once() def test__update_share_stats(self): pool_attr = { 'pool0': { 'pool_name': 'pool0', 'total_capacity_gb': 110, 'free_capacity_gb': 100, 'allocated_capacity_gb': 10, 'reserved_percentage': 0, 'qos': False, 'dedupe': False, 'compression': False, 'thin_provisioning': False, 'max_over_subscription_ratio': 0 } } mock_gpa = mock.Mock(return_value=pool_attr) self.mock_object( instorage.InStorageAssistant, 'get_pools_attr', mock_gpa ) mock_uss = mock.Mock() self.mock_object(driver.ShareDriver, '_update_share_stats', mock_uss) self.driver._update_share_stats() mock_gpa.assert_called_once_with(['fakepool']) stats = { 'share_backend_name': 'fake_instorage', 'vendor_name': 'INSPUR', 'driver_version': '1.0.0', 'storage_protocol': 'NFS_CIFS', 'reserved_percentage': 0, 'max_over_subscription_ratio': 0, 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'qos': False, 'total_capacity_gb': 110, 'free_capacity_gb': 100, 'pools': [pool_attr['pool0']] } mock_uss.assert_called_once_with(stats) @ddt.data( {'id': 'abc-123', 'real': 'abc123'}, {'id': '123-abc', 'real': 'B23abc'}) @ddt.unpack def test_generate_share_name(self, id, real): ret = self.driver.generate_share_name({'id': id}) self.assertEqual(real, ret) def test_get_network_allocations_number(self): ret = self.driver.get_network_allocations_number() self.assertEqual(0, ret) def test_create_share(self): mock_cs = self.mock_object( instorage.InStorageAssistant, 'create_share' ) mock_gel = self.mock_object( instorage.InStorageAssistant, 'get_export_locations', mock.Mock(return_value=['fake_export_location']) ) ret = self.driver.create_share(self._ctxt, self.share_instance) self.assertEqual(['fake_export_location'], ret) mock_cs.assert_called_once_with('fakeinstanceid', 'P', 1, 'fake_proto') mock_gel.assert_called_once_with('fakeinstanceid', 'fake_proto') def test_delete_share(self): mock_ds = self.mock_object( instorage.InStorageAssistant, 'delete_share' ) self.driver.delete_share(self._ctxt, self.share_instance) mock_ds.assert_called_once_with('fakeinstanceid', 'fake_proto') def test_extend_share(self): mock_es = self.mock_object( instorage.InStorageAssistant, 'extend_share' ) self.driver.extend_share(self.share_instance, 3) mock_es.assert_called_once_with('fakeinstanceid', 3) def test_ensure_share(self): mock_gel = self.mock_object( instorage.InStorageAssistant, 'get_export_locations', mock.Mock(return_value=['fake_export_location']) ) ret = self.driver.ensure_share(self._ctxt, self.share_instance) self.assertEqual(['fake_export_location'], ret) mock_gel.assert_called_once_with('fakeinstanceid', 'fake_proto') def test_update_access(self): mock_ua = self.mock_object( instorage.InStorageAssistant, 'update_access' ) self.driver.update_access(self._ctxt, self.share_instance, [], [], []) mock_ua.assert_called_once_with( 'fakeinstanceid', 'fake_proto', [], [], [] ) class FakeSSH(object): def 
__enter__(self): return self def __exit__(self, exec_type, exec_val, exec_tb): if exec_val: raise class FakeSSHPool(object): def __init__(self, ssh): self.fakessh = ssh def item(self): return self.fakessh class SSHRunnerTestCase(test.TestCase): def setUp(self): self.fakessh = FakeSSH() self.fakePool = FakeSSHPool(self.fakessh) super(SSHRunnerTestCase, self).setUp() def test___call___success(self): mock_csi = self.mock_object(manila_utils, 'check_ssh_injection') mock_sshpool = mock.Mock(return_value=self.fakePool) self.mock_object(manila_utils, 'SSHPool', mock_sshpool) mock_se = mock.Mock(return_value='fake_value') self.mock_object(cli_helper.SSHRunner, '_ssh_execute', mock_se) runner = cli_helper.SSHRunner( '127.0.0.1', '22', 'fakeuser', 'fakepassword' ) ret = runner(['mcsinq', 'lsvdisk']) mock_csi.assert_called_once_with(['mcsinq', 'lsvdisk']) mock_sshpool.assert_called_once_with( '127.0.0.1', '22', 60, 'fakeuser', password='fakepassword', privatekey=None, min_size=1, max_size=10 ) mock_se.assert_called_once_with( self.fakePool, 'mcsinq lsvdisk', True, 1 ) self.assertEqual('fake_value', ret) def test___call___ssh_pool_failed(self): mock_csi = self.mock_object(manila_utils, 'check_ssh_injection') mock_sshpool = mock.Mock(side_effect=paramiko.SSHException()) self.mock_object(manila_utils, 'SSHPool', mock_sshpool) runner = cli_helper.SSHRunner( '127.0.0.1', '22', 'fakeuser', 'fakepassword' ) self.assertRaises(paramiko.SSHException, runner, ['mcsinq', 'lsvdisk']) mock_csi.assert_called_once_with(['mcsinq', 'lsvdisk']) def test___call___ssh_exec_failed(self): mock_csi = self.mock_object(manila_utils, 'check_ssh_injection') mock_sshpool = mock.Mock(return_value=self.fakePool) self.mock_object(manila_utils, 'SSHPool', mock_sshpool) exception = processutils.ProcessExecutionError() mock_se = mock.Mock(side_effect=exception) self.mock_object(cli_helper.SSHRunner, '_ssh_execute', mock_se) runner = cli_helper.SSHRunner( '127.0.0.1', '22', 'fakeuser', 'fakepassword' ) self.assertRaises( processutils.ProcessExecutionError, runner, ['mcsinq', 'lsvdisk'] ) mock_csi.assert_called_once_with(['mcsinq', 'lsvdisk']) mock_sshpool.assert_called_once_with( '127.0.0.1', '22', 60, 'fakeuser', password='fakepassword', privatekey=None, min_size=1, max_size=10 ) def test__ssh_execute_success(self): mock_se = mock.Mock(return_value='fake_value') self.mock_object(processutils, 'ssh_execute', mock_se) runner = cli_helper.SSHRunner( '127.0.0.1', '22', 'fakeuser', 'fakepassword' ) ret = runner._ssh_execute(self.fakePool, 'mcsinq lsvdisk') mock_se.assert_called_once_with( self.fakessh, 'mcsinq lsvdisk', check_exit_code=True ) self.assertEqual('fake_value', ret) def test__ssh_execute_success_run_again(self): mock_se = mock.Mock(side_effect=[Exception(), 'fake_value']) self.mock_object(processutils, 'ssh_execute', mock_se) mock_sleep = self.mock_object(greenthread, 'sleep') runner = cli_helper.SSHRunner( '127.0.0.1', '22', 'fakeuser', 'fakepassword' ) ret = runner._ssh_execute( self.fakePool, 'mcsinq lsvdisk', check_exit_code=True, attempts=2 ) call = mock.call(self.fakessh, 'mcsinq lsvdisk', check_exit_code=True) mock_se.assert_has_calls([call, call]) mock_sleep.assert_called_once() self.assertEqual('fake_value', ret) def test__ssh_execute_failed_exec_failed(self): exception = Exception() exception.exit_code = '1' exception.stdout = 'fake_stdout' exception.stderr = 'fake_stderr' exception.cmd = 'fake_cmd_list' mock_se = mock.Mock(side_effect=exception) self.mock_object(processutils, 'ssh_execute', mock_se) mock_sleep = 
self.mock_object(greenthread, 'sleep') runner = cli_helper.SSHRunner( '127.0.0.1', '22', 'fakeuser', 'fakepassword' ) self.assertRaises( processutils.ProcessExecutionError, runner._ssh_execute, self.fakePool, 'mcsinq lsvdisk', check_exit_code=True, attempts=1 ) mock_se.assert_called_once_with( self.fakessh, 'mcsinq lsvdisk', check_exit_code=True ) mock_sleep.assert_called_once() def test__ssh_execute_failed_exec_failed_exception_error(self): mock_se = mock.Mock(side_effect=Exception()) self.mock_object(processutils, 'ssh_execute', mock_se) mock_sleep = self.mock_object(greenthread, 'sleep') runner = cli_helper.SSHRunner( '127.0.0.1', '22', 'fakeuser', 'fakepassword' ) self.assertRaises( processutils.ProcessExecutionError, runner._ssh_execute, self.fakePool, 'mcsinq lsvdisk', check_exit_code=True, attempts=1 ) mock_se.assert_called_once_with( self.fakessh, 'mcsinq lsvdisk', check_exit_code=True ) mock_sleep.assert_called_once() class CLIParserTestCase(test.TestCase): def test_cliparser_with_header(self): cmdlist = ['mcsinq', 'lsnasportip', '-delim', '!'] response = [ 'head1!head2', 'r1c1!r1c2', 'r2c1!r2c2' ] response = '\n'.join(response) ret = cli_helper.CLIParser( response, cmdlist, delim='!', with_header=True ) self.assertEqual(2, len(ret)) self.assertEqual('r1c1', ret[0]['head1']) self.assertEqual('r1c2', ret[0]['head2']) self.assertEqual('r2c1', ret[1]['head1']) self.assertEqual('r2c2', ret[1]['head2']) value = [(v['head1'], v['head2']) for v in ret] self.assertEqual([('r1c1', 'r1c2'), ('r2c1', 'r2c2')], value) def test_cliparser_without_header(self): cmdlist = ['mcsinq', 'lsnasportip', '-delim', '!'] response = [ 'head1!p1v1', 'head2!p1v2', '', 'head1!p2v1', 'head2!p2v2' ] response = '\n'.join(response) ret = cli_helper.CLIParser( response, cmdlist, delim='!', with_header=False ) self.assertEqual(2, len(ret)) self.assertEqual('p1v1', ret[0]['head1']) self.assertEqual('p1v2', ret[0]['head2']) self.assertEqual('p2v1', ret[1]['head1']) self.assertEqual('p2v2', ret[1]['head2']) @ddt.ddt class InStorageSSHTestCase(test.TestCase): def setUp(self): self.sshMock = mock.Mock() self.ssh = cli_helper.InStorageSSH(self.sshMock) super(InStorageSSHTestCase, self).setUp() def tearDown(self): super(InStorageSSHTestCase, self).tearDown() @ddt.data(None, 'node1') def test_lsnode(self, node_id): if node_id: cmd = ['mcsinq', 'lsnode', '-delim', '!', node_id] response = [ 'id!1', 'name!node1' ] else: cmd = ['mcsinq', 'lsnode', '-delim', '!'] response = [ 'id!name', '1!node1', '2!node2' ] response = '\n'.join(response) self.sshMock.return_value = (response, '') ret = self.ssh.lsnode(node_id) if node_id: self.sshMock.assert_called_once_with(cmd) self.assertEqual('node1', ret[0]['name']) else: self.sshMock.assert_called_once_with(cmd) self.assertEqual('node1', ret[0]['name']) self.assertEqual('node2', ret[1]['name']) @ddt.data(None, 'Pool0') def test_lsnaspool(self, pool_id): response = [ 'pool_name!available_capacity', 'Pool0!2GB' ] if pool_id is None: response.append('Pool1!3GB') response = '\n'.join(response) self.sshMock.return_value = (response, '') ret = self.ssh.lsnaspool(pool_id) if pool_id is None: cmd = ['mcsinq', 'lsnaspool', '-delim', '!'] self.sshMock.assert_called_once_with(cmd) self.assertEqual('Pool0', ret[0]['pool_name']) self.assertEqual('2GB', ret[0]['available_capacity']) self.assertEqual('Pool1', ret[1]['pool_name']) self.assertEqual('3GB', ret[1]['available_capacity']) else: cmd = ['mcsinq', 'lsnaspool', '-delim', '!', pool_id] self.sshMock.assert_called_once_with(cmd) 
self.assertEqual('Pool0', ret[0]['pool_name']) self.assertEqual('2GB', ret[0]['available_capacity']) @ddt.data({'node_name': 'node1', 'fsname': 'fs1'}, {'node_name': 'node1', 'fsname': None}, {'node_name': None, 'fsname': 'fs1'}, {'node_name': None, 'fsname': None}) @ddt.unpack def test_lsfs(self, node_name, fsname): response = [ 'pool_name!fs_name!total_capacity!used_capacity', 'pool0!fs0!10GB!1GB', 'pool1!fs1!8GB!3GB' ] response = '\n'.join(response) self.sshMock.return_value = (response, '') if fsname and not node_name: self.assertRaises(exception.InvalidParameterValue, self.ssh.lsfs, node_name=node_name, fsname=fsname) else: ret = self.ssh.lsfs(node_name, fsname) cmdlist = [] if node_name and not fsname: cmdlist = ['mcsinq', 'lsfs', '-delim', '!', '-node', '"node1"'] elif node_name and fsname: cmdlist = ['mcsinq', 'lsfs', '-delim', '!', '-node', '"node1"', '-name', '"fs1"'] else: cmdlist = ['mcsinq', 'lsfs', '-delim', '!', '-all'] self.sshMock.assert_called_once_with(cmdlist) self.assertEqual('pool0', ret[0]['pool_name']) self.assertEqual('fs0', ret[0]['fs_name']) self.assertEqual('10GB', ret[0]['total_capacity']) self.assertEqual('1GB', ret[0]['used_capacity']) self.assertEqual('pool1', ret[1]['pool_name']) self.assertEqual('fs1', ret[1]['fs_name']) self.assertEqual('8GB', ret[1]['total_capacity']) self.assertEqual('3GB', ret[1]['used_capacity']) def test_addfs(self): self.sshMock.return_value = ('', '') self.ssh.addfs('fsname', 'fake_pool', 1, 'node1') cmdlist = ['mcsop', 'addfs', '-name', '"fsname"', '-pool', '"fake_pool"', '-size', '1g', '-node', '"node1"'] self.sshMock.assert_called_once_with(cmdlist) def test_rmfs(self): self.sshMock.return_value = ('', '') self.ssh.rmfs('fsname') cmdlist = ['mcsop', 'rmfs', '-name', '"fsname"'] self.sshMock.assert_called_once_with(cmdlist) def test_expandfs(self): self.sshMock.return_value = ('', '') self.ssh.expandfs('fsname', 2) cmdlist = ['mcsop', 'expandfs', '-name', '"fsname"', '-size', '2g'] self.sshMock.assert_called_once_with(cmdlist) def test_lsnasdir(self): response = [ 'parent_dir!name', '/fs/test_01!share_01' ] response = '\n'.join(response) self.sshMock.return_value = (response, '') ret = self.ssh.lsnasdir('/fs/test_01') cmdlist = ['mcsinq', 'lsnasdir', '-delim', '!', '"/fs/test_01"'] self.sshMock.assert_called_once_with(cmdlist) self.assertEqual('/fs/test_01', ret[0]['parent_dir']) self.assertEqual('share_01', ret[0]['name']) def test_addnasdir(self): self.sshMock.return_value = ('', '') self.ssh.addnasdir('/fs/test_01/share_01') cmdlist = ['mcsop', 'addnasdir', '"/fs/test_01/share_01"'] self.sshMock.assert_called_once_with(cmdlist) def test_chnasdir(self): self.sshMock.return_value = ('', '') self.ssh.chnasdir('/fs/test_01/share_01', '/fs/test_01/share_02') cmdlist = ['mcsop', 'chnasdir', '-oldpath', '"/fs/test_01/share_01"', '-newpath', '"/fs/test_01/share_02"'] self.sshMock.assert_called_once_with(cmdlist) def test_rmnasdir(self): self.sshMock.return_value = ('', '') self.ssh.rmnasdir('/fs/test_01/share_01') cmdlist = ['mcsop', 'rmnasdir', '"/fs/test_01/share_01"'] self.sshMock.assert_called_once_with(cmdlist) def test_rmnfs(self): self.sshMock.return_value = ('', '') self.ssh.rmnfs('/fs/test_01/share_01') cmdlist = ['mcsop', 'rmnfs', '"/fs/test_01/share_01"'] self.sshMock.assert_called_once_with(cmdlist) @ddt.data(None, '/fs/test_01') def test_lsnfslist(self, prefix): cmdlist = ['mcsinq', 'lsnfslist', '-delim', '!'] if prefix: cmdlist.append('"/fs/test_01"') response = '\n'.join([ 'path', '/fs/test_01/share_01', 
'/fs/test_01/share_02' ]) self.sshMock.return_value = (response, '') ret = self.ssh.lsnfslist(prefix) self.sshMock.assert_called_once_with(cmdlist) self.assertEqual('/fs/test_01/share_01', ret[0]['path']) self.assertEqual('/fs/test_01/share_02', ret[1]['path']) def test_lsnfsinfo(self): cmdlist = [ 'mcsinq', 'lsnfsinfo', '-delim', '!', '"/fs/test_01/share_01"' ] response = '\n'.join([ 'ip!mask!rights!root_squash!all_squash', '192.168.1.0!255.255.255.0!rw!root_squash!all_squash' ]) self.sshMock.return_value = (response, '') ret = self.ssh.lsnfsinfo('/fs/test_01/share_01') self.sshMock.assert_called_once_with(cmdlist) self.assertEqual('192.168.1.0', ret[0]['ip']) self.assertEqual('255.255.255.0', ret[0]['mask']) self.assertEqual('rw', ret[0]['rights']) def test_addnfsclient(self): self.sshMock.return_value = ('', '') cmdlist = [ 'mcsop', 'addnfsclient', '-path', '"/fs/test_01/share_01"', '-client', '192.168.1.0/255.255.255.0:rw:ALL_SQUASH:ROOT_SQUASH' ] self.ssh.addnfsclient( '/fs/test_01/share_01', '192.168.1.0/255.255.255.0:rw:ALL_SQUASH:ROOT_SQUASH' ) self.sshMock.assert_called_once_with(cmdlist) def test_chnfsclient(self): self.sshMock.return_value = ('', '') cmdlist = [ 'mcsop', 'chnfsclient', '-path', '"/fs/test_01/share_01"', '-client', '192.168.1.0/255.255.255.0:rw:ALL_SQUASH:ROOT_SQUASH' ] self.ssh.chnfsclient( '/fs/test_01/share_01', '192.168.1.0/255.255.255.0:rw:ALL_SQUASH:ROOT_SQUASH' ) self.sshMock.assert_called_once_with(cmdlist) def test_rmnfsclient(self): self.sshMock.return_value = ('', '') cmdlist = [ 'mcsop', 'rmnfsclient', '-path', '"/fs/test_01/share_01"', '-client', '192.168.1.0/255.255.255.0' ] self.ssh.rmnfsclient( '/fs/test_01/share_01', '192.168.1.0/255.255.255.0:rw:ALL_SQUASH:ROOT_SQUASH' ) self.sshMock.assert_called_once_with(cmdlist) @ddt.data(None, 'cifs') def test_lscifslist(self, filter): cmdlist = ['mcsinq', 'lscifslist', '-delim', '!'] if filter: cmdlist.append('"%s"' % filter) response = '\n'.join([ 'name!path', 'cifs!/fs/test_01/share_01' ]) self.sshMock.return_value = (response, '') ret = self.ssh.lscifslist(filter) self.sshMock.assert_called_once_with(cmdlist) self.assertEqual('cifs', ret[0]['name']) self.assertEqual('/fs/test_01/share_01', ret[0]['path']) def test_lscifsinfo(self): cmdlist = ['mcsinq', 'lscifsinfo', '-delim', '!', '"cifs"'] response = '\n'.join([ 'path!oplocks!type!name!rights', '/fs/test_01/share_01!on!LU!user1!rw' ]) self.sshMock.return_value = (response, '') ret = self.ssh.lscifsinfo('cifs') self.sshMock.assert_called_once_with(cmdlist) self.assertEqual('/fs/test_01/share_01', ret[0]['path']) self.assertEqual('on', ret[0]['oplocks']) self.assertEqual('LU', ret[0]['type']) self.assertEqual('user1', ret[0]['name']) self.assertEqual('rw', ret[0]['rights']) def test_addcifs(self): self.sshMock.return_value = ('', '') cmdlist = [ 'mcsop', 'addcifs', '-name', 'cifs', '-path', '/fs/test_01/share_01', '-oplocks', 'off' ] self.ssh.addcifs('cifs', '/fs/test_01/share_01', 'off') self.sshMock.assert_called_once_with(cmdlist) def test_rmcifs(self): self.sshMock.return_value = ('', '') cmdlist = ['mcsop', 'rmcifs', 'cifs'] self.ssh.rmcifs('cifs') self.sshMock.assert_called_once_with(cmdlist) def test_chcifs(self): self.sshMock.return_value = ('', '') cmdlist = ['mcsop', 'chcifs', '-name', 'cifs', '-oplocks', 'off'] self.ssh.chcifs('cifs', 'off') self.sshMock.assert_called_once_with(cmdlist) def test_addcifsuser(self): self.sshMock.return_value = ('', '') cmdlist = [ 'mcsop', 'addcifsuser', '-name', 'cifs', '-rights', 'LU:user1:rw' ] 
self.ssh.addcifsuser('cifs', 'LU:user1:rw') self.sshMock.assert_called_once_with(cmdlist) def test_chcifsuser(self): self.sshMock.return_value = ('', '') cmdlist = [ 'mcsop', 'chcifsuser', '-name', 'cifs', '-rights', 'LU:user1:rw' ] self.ssh.chcifsuser('cifs', 'LU:user1:rw') self.sshMock.assert_called_once_with(cmdlist) def test_rmcifsuser(self): self.sshMock.return_value = ('', '') cmdlist = [ 'mcsop', 'rmcifsuser', '-name', 'cifs', '-rights', 'LU:user1' ] self.ssh.rmcifsuser('cifs', 'LU:user1:rw') self.sshMock.assert_called_once_with(cmdlist) def test_lsnasportip(self): cmdlist = ['mcsinq', 'lsnasportip', '-delim', '!'] response = '\n'.join([ 'node_name!id!ip!mask!gw!link_state', 'node1!1!192.168.10.1!255.255.255.0!192.168.10.254!active', 'node2!1!192.168.10.2!255.255.255.0!192.168.10.254!inactive' ]) self.sshMock.return_value = (response, '') ret = self.ssh.lsnasportip() self.sshMock.assert_called_once_with(cmdlist) self.assertEqual('node1', ret[0]['node_name']) self.assertEqual('1', ret[0]['id']) self.assertEqual('192.168.10.1', ret[0]['ip']) self.assertEqual('255.255.255.0', ret[0]['mask']) self.assertEqual('192.168.10.254', ret[0]['gw']) self.assertEqual('active', ret[0]['link_state']) self.assertEqual('node2', ret[1]['node_name']) self.assertEqual('1', ret[1]['id']) self.assertEqual('192.168.10.2', ret[1]['ip']) self.assertEqual('255.255.255.0', ret[1]['mask']) self.assertEqual('192.168.10.254', ret[1]['gw']) self.assertEqual('inactive', ret[1]['link_state']) @ddt.ddt class InStorageAssistantTestCase(test.TestCase): def setUp(self): self.sshMock = mock.Mock() self.assistant = instorage.InStorageAssistant(self.sshMock) super(InStorageAssistantTestCase, self).setUp() def tearDown(self): super(InStorageAssistantTestCase, self).tearDown() @ddt.data( {'size': '1000MB', 'gb_size': 1}, {'size': '3GB', 'gb_size': 3}, {'size': '4TB', 'gb_size': 4096}, {'size': '5PB', 'gb_size': 5242880}) @ddt.unpack def test_size_to_gb(self, size, gb_size): ret = self.assistant.size_to_gb(size) self.assertEqual(gb_size, ret) def test_get_available_pools(self): response_for_lsnaspool = ('\n'.join([ 'pool_name!available_capacity', 'pool0!100GB', 'pool1!150GB' ]), '') cmdlist = ['mcsinq', 'lsnaspool', '-delim', '!'] self.sshMock.return_value = response_for_lsnaspool ret = self.assistant.get_available_pools() pools = ['pool0', 'pool1'] self.assertEqual(pools, ret) self.sshMock.assert_called_once_with(cmdlist) def test_get_pools_attr(self): response_for_lsfs = ('\n'.join([ 'pool_name!fs_name!total_capacity!used_capacity', 'pool0!fs0!10GB!1GB', 'pool1!fs1!8GB!3GB' ]), '') call_for_lsfs = mock.call(['mcsinq', 'lsfs', '-delim', '!', '-all']) response_for_lsnaspool = ('\n'.join([ 'pool_name!available_capacity', 'pool0!100GB', 'pool1!150GB' ]), '') call_for_lsnaspool = mock.call(['mcsinq', 'lsnaspool', '-delim', '!']) self.sshMock.side_effect = [ response_for_lsfs, response_for_lsnaspool ] ret = self.assistant.get_pools_attr(['pool0']) pools = { 'pool0': { 'pool_name': 'pool0', 'total_capacity_gb': 110, 'free_capacity_gb': 100, 'allocated_capacity_gb': 10, 'qos': False, 'reserved_percentage': 0, 'dedupe': False, 'compression': False, 'thin_provisioning': False, 'max_over_subscription_ratio': 0 } } self.assertEqual(pools, ret) self.sshMock.assert_has_calls([call_for_lsfs, call_for_lsnaspool]) def test_get_nodes_info(self): response_for_lsnasportip = ('\n'.join([ 'node_name!id!ip!mask!gw!link_state', 'node1!1!192.168.10.1!255.255.255.0!192.168.10.254!active', 'node2!1!192.168.10.2!255.255.255.0!192.168.10.254!inactive', 
'node1!2!!!!inactive', 'node2!2!!!!inactive' ]), '') call_for_lsnasportip = mock.call([ 'mcsinq', 'lsnasportip', '-delim', '!' ]) self.sshMock.side_effect = [response_for_lsnasportip] ret = self.assistant.get_nodes_info() nodes = { 'node1': { '1': { 'node_name': 'node1', 'id': '1', 'ip': '192.168.10.1', 'mask': '255.255.255.0', 'gw': '192.168.10.254', 'link_state': 'active' } }, 'node2': { '1': { 'node_name': 'node2', 'id': '1', 'ip': '192.168.10.2', 'mask': '255.255.255.0', 'gw': '192.168.10.254', 'link_state': 'inactive' } } } self.assertEqual(nodes, ret) self.sshMock.assert_has_calls([call_for_lsnasportip]) @ddt.data( {'name': '1' * 30, 'fsname': '1' * 30}, {'name': '1' * 40, 'fsname': '1' * 32}) @ddt.unpack def test_get_fsname_by_name(self, name, fsname): ret = self.assistant.get_fsname_by_name(name) self.assertEqual(fsname, ret) @ddt.data( {'name': '1' * 30, 'dirname': '1' * 30}, {'name': '1' * 40, 'dirname': '1' * 32}) @ddt.unpack def test_get_dirsname_by_name(self, name, dirname): ret = self.assistant.get_dirname_by_name(name) self.assertEqual(dirname, ret) @ddt.data( {'name': '1' * 30, 'dirpath': '/fs/' + '1' * 30 + '/' + '1' * 30}, {'name': '1' * 40, 'dirpath': '/fs/' + '1' * 32 + '/' + '1' * 32}) @ddt.unpack def test_get_dirpath_by_name(self, name, dirpath): ret = self.assistant.get_dirpath_by_name(name) self.assertEqual(dirpath, ret) @ddt.data('CIFS', 'NFS') def test_create_share(self, proto): response_for_lsnasportip = ('\n'.join([ 'node_name!id!ip!mask!gw!link_state', 'node1!1!192.168.10.1!255.255.255.0!192.168.10.254!active' ]), '') call_for_lsnasportip = mock.call([ 'mcsinq', 'lsnasportip', '-delim', '!' ]) response_for_addfs = ('', '') call_for_addfs = mock.call([ 'mcsop', 'addfs', '-name', '"fakename"', '-pool', '"fakepool"', '-size', '10g', '-node', '"node1"' ]) response_for_addnasdir = ('', '') call_for_addnasdir = mock.call([ 'mcsop', 'addnasdir', '"/fs/fakename/fakename"' ]) response_for_addcifs = ('', '') call_for_addcifs = mock.call([ 'mcsop', 'addcifs', '-name', 'fakename', '-path', '/fs/fakename/fakename', '-oplocks', 'off' ]) side_effect = [ response_for_lsnasportip, response_for_addfs, response_for_addnasdir ] calls = [call_for_lsnasportip, call_for_addfs, call_for_addnasdir] if proto == 'CIFS': side_effect.append(response_for_addcifs) calls.append(call_for_addcifs) self.sshMock.side_effect = side_effect self.assistant.create_share('fakename', 'fakepool', 10, proto) self.sshMock.assert_has_calls(calls) @ddt.data(True, False) def test_check_share_exist(self, exist): response_for_lsfs = ('\n'.join([ 'pool_name!fs_name!total_capacity!used_capacity', 'pool0!fs0!10GB!1GB', 'pool1!fs1!8GB!3GB' ]), '') call_for_lsfs = mock.call([ 'mcsinq', 'lsfs', '-delim', '!', '-all' ]) self.sshMock.side_effect = [ response_for_lsfs ] share_name = 'fs0' if exist else 'fs2' ret = self.assistant.check_share_exist(share_name) self.assertEqual(exist, ret) self.sshMock.assert_has_calls([call_for_lsfs]) @ddt.data({'proto': 'CIFS', 'share_exist': False}, {'proto': 'CIFS', 'share_exist': True}, {'proto': 'NFS', 'share_exist': False}, {'proto': 'NFS', 'share_exist': True}) @ddt.unpack def test_delete_share(self, proto, share_exist): mock_cse = self.mock_object( instorage.InStorageAssistant, 'check_share_exist', mock.Mock(return_value=share_exist) ) response_for_rmcifs = ('', '') call_for_rmcifs = mock.call([ 'mcsop', 'rmcifs', 'fakename' ]) response_for_rmnasdir = ('', '') call_for_rmnasdir = mock.call([ 'mcsop', 'rmnasdir', '"/fs/fakename/fakename"' ]) response_for_rmfs = ('', '') 
call_for_rmfs = mock.call([ 'mcsop', 'rmfs', '-name', '"fakename"' ]) side_effect = [response_for_rmnasdir, response_for_rmfs] calls = [call_for_rmnasdir, call_for_rmfs] if proto == 'CIFS': side_effect.insert(0, response_for_rmcifs) calls.insert(0, call_for_rmcifs) self.sshMock.side_effect = side_effect self.assistant.delete_share('fakename', proto) mock_cse.assert_called_once_with('fakename') if share_exist: self.sshMock.assert_has_calls(calls) else: self.sshMock.assert_not_called() def test_extend_share(self): response_for_lsfs = ('\n'.join([ 'pool_name!fs_name!total_capacity!used_capacity', 'pool0!fs0!10GB!1GB', 'pool1!fs1!8GB!3GB' ]), '') call_for_lsfs = mock.call([ 'mcsinq', 'lsfs', '-delim', '!', '-all' ]) response_for_expandfs = ('', '') call_for_expandfs = mock.call([ 'mcsop', 'expandfs', '-name', '"fs0"', '-size', '2g' ]) self.sshMock.side_effect = [response_for_lsfs, response_for_expandfs] self.assistant.extend_share('fs0', 12) self.sshMock.assert_has_calls([call_for_lsfs, call_for_expandfs]) @ddt.data('CIFS', 'NFS') def test_get_export_locations(self, proto): response_for_lsnode = ('\n'.join([ 'id!name', '1!node1', '2!node2' ]), '') call_for_lsnode = mock.call([ 'mcsinq', 'lsnode', '-delim', '!' ]) response_for_lsfs_node1 = ('\n'.join([ 'pool_name!fs_name!total_capacity!used_capacity', 'pool0!fs0!10GB!1GB' ]), '') call_for_lsfs_node1 = mock.call([ 'mcsinq', 'lsfs', '-delim', '!', '-node', '"node1"' ]) response_for_lsfs_node2 = ('\n'.join([ 'pool_name!fs_name!total_capacity!used_capacity', 'pool1!fs1!10GB!1GB' ]), '') call_for_lsfs_node2 = mock.call([ 'mcsinq', 'lsfs', '-delim', '!', '-node', '"node2"' ]) response_for_lsnasportip = ('\n'.join([ 'node_name!id!ip!mask!gw!link_state', 'node1!1!192.168.10.1!255.255.255.0!192.168.10.254!active', 'node1!2!192.168.10.2!255.255.255.0!192.168.10.254!active', 'node1!3!!!!inactive', 'node2!1!192.168.10.3!255.255.255.0!192.168.10.254!active', 'node2!2!192.168.10.4!255.255.255.0!192.168.10.254!active', 'node2!3!!!!inactive' ]), '') call_for_lsnasportip = mock.call([ 'mcsinq', 'lsnasportip', '-delim', '!' 
]) self.sshMock.side_effect = [ response_for_lsnode, response_for_lsfs_node1, response_for_lsfs_node2, response_for_lsnasportip ] calls = [ call_for_lsnode, call_for_lsfs_node1, call_for_lsfs_node2, call_for_lsnasportip ] ret = self.assistant.get_export_locations('fs1', proto) if proto == 'CIFS': locations = [ { 'path': '\\\\192.168.10.3\\fs1', 'is_admin_only': False, 'metadata': {} }, { 'path': '\\\\192.168.10.4\\fs1', 'is_admin_only': False, 'metadata': {} } ] else: locations = [ { 'path': '192.168.10.3:/fs/fs1/fs1', 'is_admin_only': False, 'metadata': {} }, { 'path': '192.168.10.4:/fs/fs1/fs1', 'is_admin_only': False, 'metadata': {} } ] self.assertEqual(locations, ret) self.sshMock.assert_has_calls(calls) def test_classify_nfs_client_spec_has_nfsinfo(self): response_for_lsnfslist = ('\n'.join([ 'path', '/fs/fs01/fs01' ]), '') call_for_lsnfslist = mock.call([ 'mcsinq', 'lsnfslist', '-delim', '!', '"/fs/fs01/fs01"' ]) response_for_lsnfsinfo = ('\n'.join([ 'ip!mask!rights!all_squash!root_squash', '192.168.1.0!255.255.255.0!rw!all_squash!root_squash', '192.168.2.0!255.255.255.0!rw!all_squash!root_squash' ]), '') call_for_lsnfsinfo = mock.call([ 'mcsinq', 'lsnfsinfo', '-delim', '!', '"/fs/fs01/fs01"' ]) self.sshMock.side_effect = [ response_for_lsnfslist, response_for_lsnfsinfo ] calls = [call_for_lsnfslist, call_for_lsnfsinfo] client_spec = [ '192.168.2.0/255.255.255.0:rw:all_squash:root_squash', '192.168.3.0/255.255.255.0:rw:all_squash:root_squash' ] add_spec, del_spec = self.assistant.classify_nfs_client_spec( client_spec, '/fs/fs01/fs01' ) self.assertEqual( add_spec, ['192.168.3.0/255.255.255.0:rw:all_squash:root_squash'] ) self.assertEqual( del_spec, ['192.168.1.0/255.255.255.0:rw:all_squash:root_squash'] ) self.sshMock.assert_has_calls(calls) def test_classify_nfs_client_spec_has_no_nfsinfo(self): cmdlist = [ 'mcsinq', 'lsnfslist', '-delim', '!', '"/fs/fs01/fs01"' ] self.sshMock.return_value = ('', '') client_spec = [ '192.168.2.0/255.255.255.0:rw:all_squash:root_squash', ] add_spec, del_spec = self.assistant.classify_nfs_client_spec( client_spec, '/fs/fs01/fs01' ) self.assertEqual(client_spec, add_spec) self.assertEqual([], del_spec) self.sshMock.assert_called_once_with(cmdlist) def test_access_rule_to_client_spec(self): rule = { 'access_type': 'ip', 'access_to': '192.168.10.0/24', 'access_level': 'rw' } ret = self.assistant.access_rule_to_client_spec(rule) spec = '192.168.10.0/255.255.255.0:rw:all_squash:root_squash' self.assertEqual(spec, ret) def test_access_rule_to_client_spec_type_failed(self): rule = { 'access_type': 'user', 'access_to': 'test01', 'access_level': 'rw' } self.assertRaises( exception.ShareBackendException, self.assistant.access_rule_to_client_spec, rule ) def test_access_rule_to_client_spec_ipversion_failed(self): rule = { 'access_type': 'ip', 'access_to': '2001:db8::/64', 'access_level': 'rw' } self.assertRaises( exception.ShareBackendException, self.assistant.access_rule_to_client_spec, rule ) @ddt.data(True, False) def test_update_nfs_access(self, check_del_add): response_for_rmnfsclient = ('', '') call_for_rmnfsclient = mock.call( ['mcsop', 'rmnfsclient', '-path', '"/fs/fs01/fs01"', '-client', '192.168.1.0/255.255.255.0'] ) response_for_addnfsclient = ('', '') call_for_addnfsclient = mock.call( ['mcsop', 'addnfsclient', '-path', '"/fs/fs01/fs01"', '-client', '192.168.3.0/255.255.255.0:rw:all_squash:root_squash'] ) access_rules = [ { 'access_type': 'ip', 'access_to': '192.168.2.0/24', 'access_level': 'rw' }, { 'access_type': 'ip', 'access_to': '192.168.3.0/24', 
'access_level': 'rw' } ] add_rules = [ { 'access_type': 'ip', 'access_to': '192.168.3.0/24', 'access_level': 'rw' } ] del_rules = [ { 'access_type': 'ip', 'access_to': '192.168.1.0/24', 'access_level': 'rw' }, { 'access_type': 'ip', 'access_to': '192.168.4.0/24', 'access_level': 'rw' } ] cncs_mock = mock.Mock(return_value=( ['192.168.3.0/255.255.255.0:rw:all_squash:root_squash'], ['192.168.1.0/255.255.255.0:rw:all_squash:root_squash'] )) self.mock_object(self.assistant, 'classify_nfs_client_spec', cncs_mock) self.sshMock.side_effect = [ response_for_rmnfsclient, response_for_addnfsclient ] if check_del_add: self.assistant.update_nfs_access('fs01', [], add_rules, del_rules) else: self.assistant.update_nfs_access('fs01', access_rules, [], []) if check_del_add: cncs_mock.assert_called_once_with( [], '/fs/fs01/fs01' ) else: cncs_mock.assert_called_once_with( [ '192.168.2.0/255.255.255.0:rw:all_squash:root_squash', '192.168.3.0/255.255.255.0:rw:all_squash:root_squash' ], '/fs/fs01/fs01' ) self.sshMock.assert_has_calls( [call_for_rmnfsclient, call_for_addnfsclient] ) def test_classify_cifs_rights(self): cmdlist = ['mcsinq', 'lscifsinfo', '-delim', '!', '"fs01"'] response_for_lscifsinfo = '\n'.join([ 'path!oplocks!type!name!rights', '/fs/fs01/fs01!on!LU!user1!rw', '/fs/fs01/fs01!on!LU!user2!rw' ]) self.sshMock.return_value = (response_for_lscifsinfo, '') access_rights = [ 'LU:user2:rw', 'LU:user3:rw' ] add_rights, del_rights = self.assistant.classify_cifs_rights( access_rights, 'fs01' ) self.sshMock.assert_called_once_with(cmdlist) self.assertEqual(['LU:user3:rw'], add_rights) self.assertEqual(['LU:user1:rw'], del_rights) def test_access_rule_to_rights(self): rule = { 'access_type': 'user', 'access_to': 'test01', 'access_level': 'rw' } ret = self.assistant.access_rule_to_rights(rule) self.assertEqual('LU:test01:rw', ret) def test_access_rule_to_rights_fail_type(self): rule = { 'access_type': 'ip', 'access_to': '192.168.1.0/24', 'access_level': 'rw' } self.assertRaises( exception.ShareBackendException, self.assistant.access_rule_to_rights, rule ) @ddt.data(True, False) def test_update_cifs_access(self, check_del_add): response_for_rmcifsuser = ('', None) call_for_rmcifsuser = mock.call( ['mcsop', 'rmcifsuser', '-name', 'fs01', '-rights', 'LU:user1'] ) response_for_addcifsuser = ('', None) call_for_addcifsuser = mock.call( ['mcsop', 'addcifsuser', '-name', 'fs01', '-rights', 'LU:user3:rw'] ) access_rules = [ { 'access_type': 'user', 'access_to': 'user2', 'access_level': 'rw' }, { 'access_type': 'user', 'access_to': 'user3', 'access_level': 'rw' } ] add_rules = [ { 'access_type': 'user', 'access_to': 'user3', 'access_level': 'rw' } ] del_rules = [ { 'access_type': 'user', 'access_to': 'user1', 'access_level': 'rw' } ] ccr_mock = mock.Mock(return_value=(['LU:user3:rw'], ['LU:user1:rw'])) self.mock_object(self.assistant, 'classify_cifs_rights', ccr_mock) self.sshMock.side_effect = [ response_for_rmcifsuser, response_for_addcifsuser ] if check_del_add: self.assistant.update_cifs_access('fs01', [], add_rules, del_rules) else: self.assistant.update_cifs_access('fs01', access_rules, [], []) if not check_del_add: ccr_mock.assert_called_once_with( ['LU:user2:rw', 'LU:user3:rw'], 'fs01' ) self.sshMock.assert_has_calls( [call_for_rmcifsuser, call_for_addcifsuser] ) def test_check_access_type(self): rules1 = { 'access_type': 'ip', 'access_to': '192.168.1.0/24', 'access_level': 'rw' } rules2 = { 'access_type': 'ip', 'access_to': '192.168.2.0/24', 'access_level': 'rw' } rules3 = { 'access_type': 'user', 
'access_to': 'user1', 'access_level': 'rw' } rules4 = { 'access_type': 'user', 'access_to': 'user2', 'access_level': 'rw' } ret = self.assistant.check_access_type('ip', [rules1], [rules2]) self.assertTrue(ret) ret = self.assistant.check_access_type('user', [rules3], [rules4]) self.assertTrue(ret) ret = self.assistant.check_access_type('ip', [rules1], [rules3]) self.assertFalse(ret) ret = self.assistant.check_access_type('user', [rules3], [rules1]) self.assertFalse(ret) @ddt.data( {'proto': 'CIFS', 'ret': True}, {'proto': 'CIFS', 'ret': False}, {'proto': 'NFS', 'ret': True}, {'proto': 'NFS', 'ret': False}, {'proto': 'unknown', 'ret': True}) @ddt.unpack def test_update_access(self, proto, ret): uca_mock = self.mock_object( self.assistant, 'update_cifs_access', mock.Mock() ) una_mock = self.mock_object( self.assistant, 'update_nfs_access', mock.Mock() ) cat_mock = self.mock_object( self.assistant, 'check_access_type', mock.Mock(return_value=ret) ) if proto == 'unknown': self.assertRaises( exception.ShareBackendException, self.assistant.update_access, 'fs01', proto, [], [], [] ) cat_mock.assert_not_called() elif ret is False: self.assertRaises( exception.InvalidShareAccess, self.assistant.update_access, 'fs01', proto, [], [], [] ) cat_mock.assert_called_once() else: self.assistant.update_access( 'fs01', proto, [], [], [] ) if proto == 'CIFS': uca_mock.assert_called_once_with('fs01', [], [], []) una_mock.assert_not_called() else: una_mock.assert_called_once_with('fs01', [], [], []) uca_mock.assert_not_called() cat_mock.assert_called_once() manila-10.0.0/manila/tests/share/drivers/inspur/as13000/0000775000175000017500000000000013656750362022562 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/inspur/as13000/__init__.py0000664000175000017500000000000013656750227024661 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/inspur/as13000/test_as13000_nas.py0000664000175000017500000014325313656750227026033 0ustar zuulzuul00000000000000# Copyright 2018 Inspur Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
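# The tests below cover two layers of the AS13000 driver: RestAPIExecutor
# (token handling and REST calls against the array's /rest endpoint) and
# AS13000ShareDriver (share lifecycle, access rules and pool stats). No real
# HTTP traffic is issued; requests.<verb> is replaced with mocks returning
# FakeResponse objects, and RestAPIExecutor.send_rest_api is mocked in the
# driver tests. A minimal sketch of the pattern used throughout (names mirror
# the tests below, not new fixtures):
#
#     fake_response = FakeResponse(200, {'code': 0, 'data': 'fake_data'})
#     self.mock_object(requests, 'get',
#                      mock.Mock(return_value=fake_response))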
""" Share driver test for Inspur AS13000 """ import json import time from unittest import mock import ddt from oslo_config import cfg import requests from manila import context from manila import exception from manila.share import driver from manila.share.drivers.inspur.as13000 import as13000_nas from manila import test from manila.tests import fake_share CONF = cfg.CONF class FakeConfig(object): def __init__(self, *args, **kwargs): self.driver_handles_share_servers = False self.share_driver = 'fake_share_driver_name' self.share_backend_name = 'fake_as13000' self.as13000_nas_ip = kwargs.get( 'as13000_nas_ip', 'some_ip') self.as13000_nas_port = kwargs.get( 'as13000_nas_port', 'some_port') self.as13000_nas_login = kwargs.get( 'as13000_nas_login', 'username') self.as13000_nas_password = kwargs.get( 'as13000_nas_password', 'password') self.as13000_share_pools = kwargs.get( 'as13000_share_pools', ['fakepool']) self.as13000_token_available_time = kwargs.get( 'as13000_token_available_time', 3600) self.network_config_group = kwargs.get( "network_config_group", "fake_network_config_group") self.admin_network_config_group = kwargs.get( "admin_network_config_group", "fake_admin_network_config_group") self.config_group = kwargs.get("config_group", "fake_config_group") self.reserved_share_percentage = kwargs.get( "reserved_share_percentage", 0) self.max_over_subscription_ratio = kwargs.get( "max_over_subscription_ratio", 20.0) self.filter_function = kwargs.get("filter_function", None) self.goodness_function = kwargs.get("goodness_function", None) def safe_get(self, key): return getattr(self, key) def append_config_values(self, *args, **kwargs): pass test_config = FakeConfig() class FakeResponse(object): def __init__(self, status, output): self.status_code = status self.text = 'return message' self._json = output def json(self): return self._json def close(self): pass @ddt.ddt class RestAPIExecutorTestCase(test.TestCase): def setUp(self): self.rest_api = as13000_nas.RestAPIExecutor( test_config.as13000_nas_ip, test_config.as13000_nas_port, test_config.as13000_nas_login, test_config.as13000_nas_password) super(RestAPIExecutorTestCase, self).setUp() def test_logins(self): mock_login = self.mock_object(self.rest_api, 'login', mock.Mock(return_value='fake_token')) self.rest_api.logins() mock_login.assert_called_once() def test_login(self): fake_response = { 'token': 'fake_token', 'expireTime': '7200', 'type': 0} mock_sra = self.mock_object(self.rest_api, 'send_rest_api', mock.Mock(return_value=fake_response)) result = self.rest_api.login() self.assertEqual('fake_token', result) login_params = {'name': test_config.as13000_nas_login, 'password': test_config.as13000_nas_password} mock_sra.assert_called_once_with(method='security/token', params=login_params, request_type='post') def test_logout(self): mock_sra = self.mock_object(self.rest_api, 'send_rest_api', mock.Mock(return_value=None)) self.rest_api.logout() mock_sra.assert_called_once_with( method='security/token', request_type='delete') @ddt.data(True, False) def test_refresh_token(self, force): mock_login = self.mock_object(self.rest_api, 'login', mock.Mock(return_value='fake_token')) mock_logout = self.mock_object(self.rest_api, 'logout', mock.Mock()) self.rest_api.refresh_token(force) if force is not True: mock_logout.assert_called_once_with() mock_login.assert_called_once_with() def test_send_rest_api(self): expected = {'value': 'abc'} mock_sa = self.mock_object(self.rest_api, 'send_api', mock.Mock(return_value=expected)) result = 
self.rest_api.send_rest_api( method='fake_method', params='fake_params', request_type='fake_type') self.assertEqual(expected, result) mock_sa.assert_called_once_with( 'fake_method', 'fake_params', 'fake_type') def test_send_rest_api_retry(self): expected = {'value': 'abc'} mock_sa = self.mock_object( self.rest_api, 'send_api', mock.Mock( side_effect=( exception.NetworkException, expected))) # mock.Mock(side_effect=exception.NetworkException)) mock_rt = self.mock_object(self.rest_api, 'refresh_token', mock.Mock()) result = self.rest_api.send_rest_api( method='fake_method', params='fake_params', request_type='fake_type' ) self.assertEqual(expected, result) mock_sa.assert_called_with( 'fake_method', 'fake_params', 'fake_type') mock_rt.assert_called_with(force=True) def test_send_rest_api_3times_fail(self): mock_sa = self.mock_object( self.rest_api, 'send_api', mock.Mock( side_effect=(exception.NetworkException))) mock_rt = self.mock_object(self.rest_api, 'refresh_token', mock.Mock()) self.assertRaises( exception.ShareBackendException, self.rest_api.send_rest_api, method='fake_method', params='fake_params', request_type='fake_type') mock_sa.assert_called_with('fake_method', 'fake_params', 'fake_type') mock_rt.assert_called_with(force=True) def test_send_rest_api_backend_error_fail(self): mock_sa = self.mock_object(self.rest_api, 'send_api', mock.Mock( side_effect=(exception.ShareBackendException( 'fake_error_message')))) mock_rt = self.mock_object(self.rest_api, 'refresh_token') self.assertRaises( exception.ShareBackendException, self.rest_api.send_rest_api, method='fake_method', params='fake_params', request_type='fake_type') mock_sa.assert_called_with('fake_method', 'fake_params', 'fake_type') mock_rt.assert_not_called() @ddt.data( {'method': 'fake_method', 'request_type': 'post', 'params': {'fake_param': 'fake_value'}}, {'method': 'fake_method', 'request_type': 'get', 'params': {'fake_param': 'fake_value'}}, {'method': 'fake_method', 'request_type': 'delete', 'params': {'fake_param': 'fake_value'}}, {'method': 'fake_method', 'request_type': 'put', 'params': {'fake_param': 'fake_value'}}, ) @ddt.unpack def test_send_api(self, method, params, request_type): self.rest_api._token_pool = ['fake_token'] if request_type in ('post', 'delete', 'put'): fake_output = {'code': 0, 'message': 'success'} elif request_type == 'get': fake_output = {'code': 0, 'data': 'fake_date'} fake_response = FakeResponse(200, fake_output) mock_request = self.mock_object(requests, request_type, mock.Mock(return_value=fake_response)) self.rest_api.send_api(method, params=params, request_type=request_type) url = 'http://%s:%s/rest/%s' % (test_config.as13000_nas_ip, test_config.as13000_nas_port, method) headers = {'X-Auth-Token': 'fake_token'} mock_request.assert_called_once_with(url, data=json.dumps(params), headers=headers) @ddt.data({'method': r'security/token', 'params': {'name': test_config.as13000_nas_login, 'password': test_config.as13000_nas_password}, 'request_type': 'post'}, {'method': r'security/token', 'params': None, 'request_type': 'delete'}) @ddt.unpack def test_send_api_access_success(self, method, params, request_type): if request_type == 'post': fake_value = {'code': 0, 'data': { 'token': 'fake_token', 'expireTime': '7200', 'type': 0}} mock_requests = self.mock_object( requests, 'post', mock.Mock( return_value=FakeResponse( 200, fake_value))) result = self.rest_api.send_api(method, params, request_type) self.assertEqual(fake_value['data'], result) mock_requests.assert_called_once_with( 
'http://%s:%s/rest/%s' % (test_config.as13000_nas_ip, test_config.as13000_nas_port, method), data=json.dumps(params), headers=None) if request_type == 'delete': fake_value = {'code': 0, 'message': 'Success!'} self.rest_api._token_pool = ['fake_token'] mock_requests = self.mock_object( requests, 'delete', mock.Mock( return_value=FakeResponse( 200, fake_value))) self.rest_api.send_api(method, params, request_type) mock_requests.assert_called_once_with( 'http://%s:%s/rest/%s' % (test_config.as13000_nas_ip, test_config.as13000_nas_port, method), data=None, headers={'X-Auth-Token': 'fake_token'}) def test_send_api_wrong_access_fail(self): req_params = {'method': r'security/token', 'params': {'name': test_config.as13000_nas_login, 'password': 'fake_password'}, 'request_type': 'post'} fake_value = {'message': ' User name or password error.', 'code': 400} mock_request = self.mock_object( requests, 'post', mock.Mock( return_value=FakeResponse( 200, fake_value))) self.assertRaises( exception.ShareBackendException, self.rest_api.send_api, method=req_params['method'], params=req_params['params'], request_type=req_params['request_type']) mock_request.assert_called_once_with( 'http://%s:%s/rest/%s' % (test_config.as13000_nas_ip, test_config.as13000_nas_port, req_params['method']), data=json.dumps( req_params['params']), headers=None) def test_send_api_token_overtime_fail(self): self.rest_api._token_pool = ['fake_token'] fake_value = {'method': 'fake_url', 'params': 'fake_params', 'reuest_type': 'post'} fake_out_put = {'message': 'Unauthorized access!', 'code': 301} mock_requests = self.mock_object( requests, 'post', mock.Mock( return_value=FakeResponse( 200, fake_out_put))) self.assertRaises(exception.NetworkException, self.rest_api.send_api, method='fake_url', params='fake_params', request_type='post') mock_requests.assert_called_once_with( 'http://%s:%s/rest/%s' % (test_config.as13000_nas_ip, test_config.as13000_nas_port, fake_value['method']), data=json.dumps('fake_params'), headers={ 'X-Auth-Token': 'fake_token'}) def test_send_api_fail(self): self.rest_api._token_pool = ['fake_token'] fake_output = {'code': 100, 'message': 'fake_message'} mock_request = self.mock_object( requests, 'post', mock.Mock( return_value=FakeResponse( 200, fake_output))) self.assertRaises( exception.ShareBackendException, self.rest_api.send_api, method='fake_method', params='fake_params', request_type='post') mock_request.assert_called_once_with( 'http://%s:%s/rest/%s' % (test_config.as13000_nas_ip, test_config.as13000_nas_port, 'fake_method'), data=json.dumps('fake_params'), headers={'X-Auth-Token': 'fake_token'} ) @ddt.ddt class AS13000ShareDriverTestCase(test.TestCase): def __init__(self, *args, **kwds): super(AS13000ShareDriverTestCase, self).__init__(*args, **kwds) self._ctxt = context.get_admin_context() self.configuration = FakeConfig() def setUp(self): self.mock_object(as13000_nas.CONF, '_check_required_opts') self.driver = as13000_nas.AS13000ShareDriver( configuration=self.configuration) super(AS13000ShareDriverTestCase, self).setUp() def test_do_setup(self): mock_login = self.mock_object( as13000_nas.RestAPIExecutor, 'logins', mock.Mock()) mock_vpe = self.mock_object( self.driver, '_validate_pools_exist', mock.Mock()) mock_gdd = self.mock_object( self.driver, '_get_directory_detail', mock.Mock( return_value='{}')) mock_gni = self.mock_object( self.driver, '_get_nodes_ips', mock.Mock( return_value=['fake_ips'])) self.driver.do_setup(self._ctxt) mock_login.assert_called_once() mock_vpe.assert_called_once() 
mock_gdd.assert_called_once_with( test_config.as13000_share_pools[0]) mock_gni.assert_called_once() def test_do_setup_login_fail(self): mock_login = self.mock_object( as13000_nas.RestAPIExecutor, 'logins', mock.Mock( side_effect=exception.ShareBackendException('fake_exception'))) self.assertRaises( exception.ShareBackendException, self.driver.do_setup, self._ctxt) mock_login.assert_called_once() def test_do_setup_vpe_failed(self): mock_login = self.mock_object(as13000_nas.RestAPIExecutor, 'logins', mock.Mock()) side_effect = exception.InvalidInput(reason='fake_exception') mock_vpe = self.mock_object(self.driver, '_validate_pools_exist', mock.Mock(side_effect=side_effect)) self.assertRaises(exception.InvalidInput, self.driver.do_setup, self._ctxt) mock_login.assert_called_once() mock_vpe.assert_called_once() def test_check_for_setup_error_base_dir_detail_failed(self): self.driver.base_dir_detail = None self.driver.ips = ['fake_ip'] self.assertRaises( exception.ShareBackendException, self.driver.check_for_setup_error) def test_check_for_setup_error_node_status_fail(self): self.driver.base_dir_detail = 'fakepool' self.driver.ips = [] self.assertRaises(exception.ShareBackendException, self.driver.check_for_setup_error) @ddt.data('nfs', 'cifs') def test_create_share(self, share_proto): share = fake_share.fake_share(share_proto=share_proto) share_instance = fake_share.fake_share_instance(share, host="H@B#P") mock_cd = self.mock_object(self.driver, '_create_directory', mock.Mock(return_value='/fake/path')) mock_cns = self.mock_object(self.driver, '_create_nfs_share') mock_ccs = self.mock_object(self.driver, '_create_cifs_share') mock_sdq = self.mock_object(self.driver, '_set_directory_quota') self.driver.ips = ['127.0.0.1'] locations = self.driver.create_share(self._ctxt, share_instance) if share_proto == 'nfs': expect_locations = [{'path': r'127.0.0.1:/fake/path'}] self.assertEqual(locations, expect_locations) else: expect_locations = [{'path': r'\\127.0.0.1\share_fakeinstanceid'}] self.assertEqual(locations, expect_locations) mock_cd.assert_called_once_with(share_name='share_fakeinstanceid', pool_name='P') if share_proto == 'nfs': mock_cns.assert_called_once_with(share_path='/fake/path') elif share['share_proto'] == 'cifs': mock_ccs.assert_called_once_with(share_path='/fake/path', share_name='share_fakeinstanceid') mock_sdq.assert_called_once_with('/fake/path', share['size']) @ddt.data('nfs', 'cifs') def test_create_share_from_snapshot(self, share_proto): share = fake_share.fake_share(share_proto=share_proto) share_instance = fake_share.fake_share_instance(share, host="H@B#P") mock_cd = self.mock_object(self.driver, '_create_directory', mock.Mock(return_value='/fake/path')) mock_cns = self.mock_object(self.driver, '_create_nfs_share') mock_ccs = self.mock_object(self.driver, '_create_cifs_share') mock_sdq = self.mock_object(self.driver, '_set_directory_quota') mock_cdtd = self.mock_object(self.driver, '_clone_directory_to_dest') self.driver.ips = ['127.0.0.1'] locations = self.driver.create_share_from_snapshot( self._ctxt, share_instance, None) if share_proto == 'nfs': expect_locations = [{'path': r'127.0.0.1:/fake/path'}] self.assertEqual(locations, expect_locations) else: expect_locations = [{'path': r'\\127.0.0.1\share_fakeinstanceid'}] self.assertEqual(locations, expect_locations) mock_cd.assert_called_once_with(share_name='share_fakeinstanceid', pool_name='P') if share_proto == 'nfs': mock_cns.assert_called_once_with(share_path='/fake/path') elif share['share_proto'] == 'cifs': 
mock_ccs.assert_called_once_with(share_path='/fake/path', share_name='share_fakeinstanceid') mock_sdq.assert_called_once_with('/fake/path', share['size']) mock_cdtd.assert_called_once_with(snapshot=None, dest_path='/fake/path') @ddt.data('nfs', 'cifs') def test_delete_share(self, share_proto): share = fake_share.fake_share(share_proto=share_proto) share_instance = fake_share.fake_share_instance(share, host="H@B#P") expect_share_path = r'/P/share_fakeinstanceid' mock_gns = self.mock_object(self.driver, '_get_nfs_share', mock.Mock(return_value=['fake_share'])) mock_dns = self.mock_object(self.driver, '_delete_nfs_share') mock_gcs = self.mock_object(self.driver, '_get_cifs_share', mock.Mock(return_value=['fake_share'])) mock_dcs = self.mock_object(self.driver, '_delete_cifs_share') mock_dd = self.mock_object(self.driver, '_delete_directory') self.driver.delete_share(self._ctxt, share_instance) if share_proto == 'nfs': mock_gns.assert_called_once_with(expect_share_path) mock_dns.assert_called_once_with(expect_share_path) else: mock_gcs.assert_called_once_with('share_fakeinstanceid') mock_dcs.assert_called_once_with('share_fakeinstanceid') mock_dd.assert_called_once_with(expect_share_path) @ddt.data('nfs', 'cifs') def test_delete_share_not_exist(self, share_proto): share = fake_share.fake_share(share_proto=share_proto) share_instance = fake_share.fake_share_instance(share, host="H@B#P") expect_share_path = r'/P/share_fakeinstanceid' mock_gns = self.mock_object(self.driver, '_get_nfs_share', mock.Mock(return_value=[])) mock_gcs = self.mock_object(self.driver, '_get_cifs_share', mock.Mock(return_value=[])) self.driver.delete_share(self._ctxt, share_instance) if share_proto == 'nfs': mock_gns.assert_called_once_with(expect_share_path) elif share_proto == 'cifs': mock_gcs.assert_called_once_with('share_fakeinstanceid') def test_extend_share(self): share = fake_share.fake_share() share_instance = fake_share.fake_share_instance(share, host="H@B#P") expect_share_path = r'/P/share_fakeinstanceid' mock_sdq = self.mock_object(self.driver, '_set_directory_quota') self.driver.extend_share(share_instance, 2) mock_sdq.assert_called_once_with(expect_share_path, 2) @ddt.data('nfs', 'cifs') def test_ensure_share(self, share_proto): share = fake_share.fake_share(share_proto=share_proto) share_instance = fake_share.fake_share_instance(share, host="H@B#P") mock_gns = self.mock_object(self.driver, '_get_nfs_share', mock.Mock(return_value=['fake_share'])) mock_gcs = self.mock_object(self.driver, '_get_cifs_share', mock.Mock(return_value=['fake_share'])) self.driver.ips = ['127.0.0.1'] locations = self.driver.ensure_share(self._ctxt, share_instance) if share_proto == 'nfs': expect_locations = [{'path': r'127.0.0.1:/P/share_fakeinstanceid'}] self.assertEqual(locations, expect_locations) mock_gns.assert_called_once_with(r'/P/share_fakeinstanceid') else: expect_locations = [{'path': r'\\127.0.0.1\share_fakeinstanceid'}] self.assertEqual(locations, expect_locations) mock_gcs.assert_called_once_with(r'share_fakeinstanceid') def test_ensure_share_fail_1(self): share = fake_share.fake_share() share_instance = fake_share.fake_share_instance(share, host="H@B#P") self.assertRaises(exception.InvalidInput, self.driver.ensure_share, self._ctxt, share_instance) @ddt.data('nfs', 'cifs') def test_ensure_share_None_share_fail(self, share_proto): share = fake_share.fake_share(share_proto=share_proto) share_instance = fake_share.fake_share_instance(share, host="H@B#P") mock_gns = self.mock_object(self.driver, '_get_nfs_share', 
mock.Mock(return_value=[])) mock_gcs = self.mock_object(self.driver, '_get_cifs_share', mock.Mock(return_value=[])) self.assertRaises(exception.ShareResourceNotFound, self.driver.ensure_share, self._ctxt, share_instance) if share_proto == 'nfs': mock_gns.assert_called_once_with(r'/P/share_fakeinstanceid') elif share['share_proto'] == 'cifs': mock_gcs.assert_called_once_with(r'share_fakeinstanceid') def test_create_snapshot(self): share = fake_share.fake_share() share_instance = fake_share.fake_share_instance(share, host="H@B#P") snapshot_instance_pseudo = { 'share': share_instance, 'id': 'fakesnapid' } mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver.create_snapshot(self._ctxt, snapshot_instance_pseudo) method = 'snapshot/directory' request_type = 'post' params = {'path': r'/P/share_fakeinstanceid', 'snapName': 'snap_fakesnapid'} mock_rest.assert_called_once_with(method=method, request_type=request_type, params=params) def test_delete_snapshot_normal(self): share = fake_share.fake_share() share_instance = fake_share.fake_share_instance(share, host="H@B#P") snapshot_instance_pseudo = { 'share': share_instance, 'id': 'fakesnapid' } mock_gsfs = self.mock_object(self.driver, '_get_snapshots_from_share', mock.Mock(return_value=['fakesnapshot'])) mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver.delete_snapshot(self._ctxt, snapshot_instance_pseudo) mock_gsfs.assert_called_once_with('/P/share_fakeinstanceid') method = ('snapshot/directory?' 'path=/P/share_fakeinstanceid&snapName=snap_fakesnapid') request_type = 'delete' mock_rest.assert_called_once_with(method=method, request_type=request_type) def test_delete_snapshot_not_exist(self): share = fake_share.fake_share() share_instance = fake_share.fake_share_instance(share, host="H@B#P") snapshot_instance_pseudo = { 'share': share_instance, 'snapshot_id': 'fakesnapid' } mock_gsfs = self.mock_object(self.driver, '_get_snapshots_from_share', mock.Mock(return_value=[])) mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver.delete_snapshot(self._ctxt, snapshot_instance_pseudo) mock_gsfs.assert_called_once_with('/P/share_fakeinstanceid') mock_rest.assert_not_called() @ddt.data('nfs', 'icfs', 'cifs') def test_transfer_rule_to_client(self, proto): rule = {'access_to': '1.1.1.1', 'access_level': 'rw'} result = self.driver.transfer_rule_to_client(proto, rule) client = {'name': '1.1.1.1', 'authority': 'rwx' if proto == 'cifs' else 'rw'} if proto == 'nfs': client.update({'type': 0}) else: client.update({'type': 1}) self.assertEqual(client, result) @ddt.data({'share_proto': 'nfs', 'use_access': True}, {'share_proto': 'nfs', 'use_access': False}, {'share_proto': 'cifs', 'use_access': True}, {'share_proto': 'cifs', 'use_access': False}) @ddt.unpack def test_update_access(self, share_proto, use_access): share = fake_share.fake_share(share_proto=share_proto) share_instance = fake_share.fake_share_instance(share, host="H@B#P") access_rules = [{'access_to': 'fakename1', 'access_level': 'fakelevel1'}, {'access_to': 'fakename2', 'access_level': 'fakelevel2'}] add_rules = [{'access_to': 'fakename1', 'access_level': 'fakelevel1'}] del_rules = [{'access_to': 'fakename2', 'access_level': 'fakelevel2'}] mock_ca = self.mock_object(self.driver, '_clear_access') fake_share_backend = {'pathAuthority': 'fakepathAuthority'} mock_gns = self.mock_object(self.driver, '_get_nfs_share', mock.Mock(return_value=fake_share_backend)) mock_rest = 
self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') if use_access: self.driver.update_access(self._ctxt, share_instance, access_rules, [], []) else: self.driver.update_access(self._ctxt, share_instance, [], add_rules, del_rules) access_clients = [{'name': rule['access_to'], 'type': 0 if share_proto == 'nfs' else 1, 'authority': rule['access_level'] } for rule in access_rules] add_clients = [{'name': rule['access_to'], 'type': 0 if share_proto == 'nfs' else 1, 'authority': rule['access_level'] } for rule in add_rules] del_clients = [{'name': rule['access_to'], 'type': 0 if share_proto == 'nfs' else 1, 'authority': rule['access_level'] } for rule in del_rules] params = { 'path': r'/P/share_fakeinstanceid', 'addedClientList': [], 'deletedClientList': [], 'editedClientList': [] } if share_proto == 'nfs': mock_gns.assert_called_once_with(r'/P/share_fakeinstanceid') params['pathAuthority'] = fake_share_backend['pathAuthority'] else: params['name'] = 'share_fakeinstanceid' if use_access: mock_ca.assert_called_once_with(share_instance) params['addedClientList'] = access_clients else: params['addedClientList'] = add_clients params['deletedClientList'] = del_clients mock_rest.assert_called_once_with( method=('file/share/%s' % share_proto), params=params, request_type='put') def test__update_share_stats(self): mock_sg = self.mock_object(FakeConfig, 'safe_get', mock.Mock(return_value='fake_as13000')) self.driver.pools = ['fake_pool'] mock_gps = self.mock_object(self.driver, '_get_pool_stats', mock.Mock(return_value='fake_pool')) self.driver._token_time = time.time() mock_rt = self.mock_object(as13000_nas.RestAPIExecutor, 'refresh_token') mock_uss = self.mock_object(driver.ShareDriver, '_update_share_stats') self.driver._update_share_stats() data = {} data['vendor_name'] = self.driver.VENDOR data['driver_version'] = self.driver.VERSION data['storage_protocol'] = self.driver.PROTOCOL data['share_backend_name'] = 'fake_as13000' data['snapshot_support'] = True data['create_share_from_snapshot_support'] = True data['pools'] = ['fake_pool'] mock_sg.assert_called_once_with('share_backend_name') mock_gps.assert_called_once_with('fake_pool') mock_rt.assert_not_called() mock_uss.assert_called_once_with(data) def test__update_share_stats_refresh_token(self): mock_sg = self.mock_object(FakeConfig, 'safe_get', mock.Mock(return_value='fake_as13000')) self.driver.pools = ['fake_pool'] mock_gps = self.mock_object(self.driver, '_get_pool_stats', mock.Mock(return_value='fake_pool')) self.driver._token_time = ( time.time() - self.driver.token_available_time - 1) mock_rt = self.mock_object(as13000_nas.RestAPIExecutor, 'refresh_token') mock_uss = self.mock_object(driver.ShareDriver, '_update_share_stats') self.driver._update_share_stats() data = {} data['vendor_name'] = self.driver.VENDOR data['driver_version'] = self.driver.VERSION data['storage_protocol'] = self.driver.PROTOCOL data['share_backend_name'] = 'fake_as13000' data['snapshot_support'] = True data['create_share_from_snapshot_support'] = True data['pools'] = ['fake_pool'] mock_sg.assert_called_once_with('share_backend_name') mock_gps.assert_called_once_with('fake_pool') mock_rt.assert_called_once() mock_uss.assert_called_once_with(data) @ddt.data('nfs', 'cifs') def test__clear_access(self, share_proto): share = fake_share.fake_share(share_proto=share_proto) share_instance = fake_share.fake_share_instance(share, host="H@B#P") fake_share_backend = {'pathAuthority': 'fakepathAuthority', 'clientList': ['fakeclient'], 'userList': ['fakeuser']} mock_gns = 
self.mock_object(self.driver, '_get_nfs_share', mock.Mock(return_value=fake_share_backend)) mock_gcs = self.mock_object(self.driver, '_get_cifs_share', mock.Mock(return_value=fake_share_backend)) mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver._clear_access(share_instance) method = 'file/share/%s' % share_proto request_type = 'put' params = { 'path': r'/P/share_fakeinstanceid', 'addedClientList': [], 'deletedClientList': [], 'editedClientList': [] } if share_proto == 'nfs': mock_gns.assert_called_once_with(r'/P/share_fakeinstanceid') params['deletedClientList'] = fake_share_backend['clientList'] params['pathAuthority'] = fake_share_backend['pathAuthority'] else: mock_gcs.assert_called_once_with('share_fakeinstanceid') params['deletedClientList'] = fake_share_backend['userList'] params['name'] = 'share_fakeinstanceid' mock_rest.assert_called_once_with(method=method, request_type=request_type, params=params) def test__validate_pools_exist(self): self.driver.pools = ['fakepool'] mock_gdl = self.mock_object(self.driver, '_get_directory_list', mock.Mock(return_value=['fakepool'])) self.driver._validate_pools_exist() mock_gdl.assert_called_once_with('/') def test__validate_pools_exist_fail(self): self.driver.pools = ['fakepool_fail'] mock_gdl = self.mock_object(self.driver, '_get_directory_list', mock.Mock(return_value=['fakepool'])) self.assertRaises(exception.InvalidInput, self.driver._validate_pools_exist) mock_gdl.assert_called_once_with('/') @ddt.data(0, 1) def test__get_directory_quota(self, hardunit): fake_data = {'hardthreshold': 200, 'hardunit': hardunit, 'capacity': '50GB'} mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api', mock.Mock(return_value=fake_data)) total, used = (self.driver._get_directory_quota('fakepath')) if hardunit == 0: self.assertEqual((200, 50), (total, used)) else: self.assertEqual((200 * 1024, 50), (total, used)) method = 'file/quota/directory?path=/fakepath' request_type = 'get' mock_rest.assert_called_once_with(method=method, request_type=request_type) def test__get_directory_quota_fail(self): fake_data = {'hardthreshold': None, 'hardunit': 0, 'capacity': '50GB'} mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api', mock.Mock(return_value=fake_data)) self.assertRaises(exception.ShareBackendException, self.driver._get_directory_quota, 'fakepath') method = 'file/quota/directory?path=/fakepath' request_type = 'get' mock_rest.assert_called_once_with(method=method, request_type=request_type) def test__get_pool_stats(self): mock_gdq = self.mock_object(self.driver, '_get_directory_quota', mock.Mock(return_value=(200, 50))) pool = dict() pool['pool_name'] = 'fakepath' pool['reserved_percentage'] = 0 pool['max_over_subscription_ratio'] = 20.0 pool['dedupe'] = False pool['compression'] = False pool['qos'] = False pool['thin_provisioning'] = True pool['total_capacity_gb'] = 200 pool['free_capacity_gb'] = 150 pool['allocated_capacity_gb'] = 50 pool['snapshot_support'] = True pool['create_share_from_snapshot_support'] = True result = self.driver._get_pool_stats('fakepath') self.assertEqual(pool, result) mock_gdq.assert_called_once_with('fakepath') def test__get_directory_list(self): fake_dir_list = [{'name': 'fakedirectory1', 'size': 20}, {'name': 'fakedirectory2', 'size': 30}] mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api', mock.Mock(return_value=fake_dir_list)) expected = ['fakedirectory1', 'fakedirectory2'] result = self.driver._get_directory_list('/fakepath') 
self.assertEqual(expected, result) method = 'file/directory?path=/fakepath' mock_rest.assert_called_once_with(method=method, request_type='get') def test__create_directory(self): base_dir_detail = { 'path': '/fakepath', 'authorityInfo': {'user': 'root', 'group': 'root', 'authority': 'rwxrwxrwx' }, 'dataProtection': {'type': 0, 'dc': 2, 'cc': 1, 'rn': 0, 'st': 4}, 'poolName': 'storage_pool' } mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver.base_dir_detail = base_dir_detail result = self.driver._create_directory('fakename', 'fakepool') self.assertEqual('/fakepool/fakename', result) method = 'file/directory' request_type = 'post' params = {'name': 'fakename', 'parentPath': base_dir_detail['path'], 'authorityInfo': base_dir_detail['authorityInfo'], 'dataProtection': base_dir_detail['dataProtection'], 'poolName': base_dir_detail['poolName']} mock_rest.assert_called_once_with(method=method, request_type=request_type, params=params) def test__delete_directory(self): mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver._delete_directory('/fakepath') method = 'file/directory?path=/fakepath' request_type = 'delete' mock_rest.assert_called_once_with(method=method, request_type=request_type) def test__set_directory_quota(self): mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver._set_directory_quota('fakepath', 200) method = 'file/quota/directory' request_type = 'put' params = {'path': 'fakepath', 'hardthreshold': 200, 'hardunit': 2} mock_rest.assert_called_once_with(method=method, request_type=request_type, params=params) def test__create_nfs_share(self): mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver._create_nfs_share('fakepath') method = 'file/share/nfs' request_type = 'post' params = {'path': 'fakepath', 'pathAuthority': 'rw', 'client': []} mock_rest.assert_called_once_with(method=method, request_type=request_type, params=params) def test__delete_nfs_share(self): mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver._delete_nfs_share('/fakepath') method = 'file/share/nfs?path=/fakepath' request_type = 'delete' mock_rest.assert_called_once_with(method=method, request_type=request_type) def test__get_nfs_share(self): mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api', mock.Mock(return_value='fakebackend')) result = self.driver._get_nfs_share('/fakepath') self.assertEqual('fakebackend', result) method = 'file/share/nfs?path=/fakepath' request_type = 'get' mock_rest.assert_called_once_with(method=method, request_type=request_type) def test__create_cifs_share(self): mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver._create_cifs_share('fakename', 'fakepath') method = 'file/share/cifs' request_type = 'post' params = {'path': 'fakepath', 'name': 'fakename', 'userlist': []} mock_rest.assert_called_once_with(method=method, request_type=request_type, params=params) def test__delete_cifs_share(self): mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver._delete_cifs_share('fakename') method = 'file/share/cifs?name=fakename' request_type = 'delete' mock_rest.assert_called_once_with(method=method, request_type=request_type) def test__get_cifs_share(self): mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api', mock.Mock(return_value='fakebackend')) result = self.driver._get_cifs_share('fakename') 
self.assertEqual('fakebackend', result) method = 'file/share/cifs?name=fakename' request_type = 'get' mock_rest.assert_called_once_with(method=method, request_type=request_type) def test__clone_directory_to_dest(self): share = fake_share.fake_share() share_instance = fake_share.fake_share_instance(share, host="H@B#P") snapshot_instance_pseudo = { 'id': 'fakesnapid', 'share_instance': share_instance } mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api') self.driver._clone_directory_to_dest(snapshot_instance_pseudo, 'fakepath') method = 'snapshot/directory/clone' request_type = 'post' params = {'path': '/P/share_fakeinstanceid', 'snapName': 'snap_fakesnapid', 'destPath': 'fakepath'} mock_rest.assert_called_once_with(method=method, request_type=request_type, params=params) def test__get_snapshots_from_share(self): mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api', mock.Mock(return_value=['fakesnap'])) result = self.driver._get_snapshots_from_share('/fakepath') self.assertEqual(['fakesnap'], result) method = 'snapshot/directory?path=/fakepath' request_type = 'get' mock_rest.assert_called_once_with(method=method, request_type=request_type) @ddt.data('nfs', 'cifs') def test__get_location_path(self, proto): self.driver.ips = ['ip1', 'ip2'] result = self.driver._get_location_path('fake_name', '/fake/path', proto) if proto == 'nfs': expect = [{'path': 'ip1:/fake/path'}, {'path': 'ip2:/fake/path'}] else: expect = [{'path': r'\\ip1\fake_name'}, {'path': r'\\ip2\fake_name'}] self.assertEqual(expect, result) def test__get_nodes_virtual_ips(self): ctdb_set = { 'virtualIpList': [{'ip': 'fakeip1/24'}, {'ip': 'fakeip2/24'}] } mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api', mock.Mock(return_value=ctdb_set)) result = self.driver._get_nodes_virtual_ips() self.assertEqual(result, ['fakeip1', 'fakeip2']) mock_rest.assert_called_once_with(method='ctdb/set', request_type='get') def test__get_nodes_physical_ips(self): nodes = [{'nodeIp': 'fakeip1', 'runningStatus': 1, 'healthStatus': 1}, {'nodeIp': 'fakeip2', 'runningStatus': 1, 'healthStatus': 0}, {'nodeIp': 'fakeip3', 'runningStatus': 0, 'healthStatus': 1}, {'nodeIp': 'fakeip4', 'runningStatus': 0, 'healthStatus': 0}] mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api', mock.Mock(return_value=nodes)) result = self.driver._get_nodes_physical_ips() expect = ['fakeip1'] self.assertEqual(expect, result) mock_rest.assert_called_once_with(method='cluster/node/cache', request_type='get') def test__get_nodes_ips(self): mock_virtual = self.mock_object(self.driver, '_get_nodes_virtual_ips', mock.Mock(return_value=['ip1'])) mock_physical = self.mock_object(self.driver, '_get_nodes_physical_ips', mock.Mock(return_value=['ip2'])) result = self.driver._get_nodes_ips() self.assertEqual(['ip1', 'ip2'], result) mock_virtual.assert_called_once() mock_physical.assert_called_once() @ddt.data('nfs', 'cifs') def test__get_share_instance_pnsp(self, share_proto): share = fake_share.fake_share(share_proto=share_proto) share_instance = fake_share.fake_share_instance(share, host="H@B#P") result = self.driver._get_share_instance_pnsp(share_instance) self.assertEqual(('P', 'share_fakeinstanceid', 1, share_proto), result) @ddt.data('5000000000', '5000000k', '5000mb', '50G', '5TB') def test__unit_convert(self, capacity): trans = {'5000000000': '%.0f' % (float(5000000000) / 1024 ** 3), '5000000k': '%.0f' % (float(5000000) / 1024 ** 2), '5000mb': '%.0f' % (float(5000) / 1024), '50G': '%.0f' % 
float(50), '5TB': '%.0f' % (float(5) * 1024)} expect = float(trans[capacity]) result = self.driver._unit_convert(capacity) self.assertEqual(expect, result) def test__format_name(self): a = 'atest-1234567890-1234567890-1234567890' expect = 'atest_1234567890_1234567890_1234' result = self.driver._format_name(a) self.assertEqual(expect, result) def test__generate_share_name(self): share = fake_share.fake_share() share_instance = fake_share.fake_share_instance(share, host="H@B#P") result = self.driver._generate_share_name(share_instance) self.assertEqual('share_fakeinstanceid', result) def test__generate_snapshot_name(self): snapshot_instance_pesudo = {'id': 'fakesnapinstanceid'} result = self.driver._generate_snapshot_name(snapshot_instance_pesudo) self.assertEqual('snap_fakesnapinstanceid', result) def test__generate_share_path(self): result = self.driver._generate_share_path('fakepool', 'fakename') self.assertEqual('/fakepool/fakename', result) def test__get_directory_detail(self): details = [{'poolName': 'fakepool1'}, {'poolName': 'fakepool2'}] mock_rest = self.mock_object(as13000_nas.RestAPIExecutor, 'send_rest_api', mock.Mock(return_value=details)) result = self.driver._get_directory_detail('fakepath') self.assertEqual(details[0], result) method = 'file/directory/detail?path=/fakepath' request_type = 'get' mock_rest.assert_called_once_with(method=method, request_type=request_type) manila-10.0.0/manila/tests/share/drivers/quobyte/0000775000175000017500000000000013656750362021743 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/quobyte/__init__.py0000664000175000017500000000000013656750227024042 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/quobyte/test_jsonrpc.py0000664000175000017500000002211013656750227025026 0ustar zuulzuul00000000000000# Copyright (c) 2015 Quobyte, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
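# Unit tests for the Quobyte JSON-RPC client wrapper
# (manila.share.drivers.quobyte.jsonrpc). The cases below stub requests.post
# with a local FakeResponse object and exercise basic-auth request generation,
# HTTPS setup with and without CA/cert/key files, and the mapping of
# application-level error codes to exception.QBRpcException.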
import tempfile import time from unittest import mock import requests from requests import auth from requests import exceptions import six from manila import exception from manila.share.drivers.quobyte import jsonrpc from manila import test class FakeResponse(object): def __init__(self, status, body): self.status_code = status self.reason = "HTTP reason" self.body = body self.text = six.text_type(body) def json(self): return self.body class QuobyteJsonRpcTestCase(test.TestCase): def setUp(self): super(QuobyteJsonRpcTestCase, self).setUp() self.rpc = jsonrpc.JsonRpc(url="http://test", user_credentials=("me", "team")) self.mock_object(time, 'sleep') @mock.patch.object(requests, 'post', return_value=FakeResponse(200, {"result": "yes"})) def test_request_generation_and_basic_auth(self, req_get_mock): self.rpc.call('method', {'param': 'value'}) req_get_mock.assert_called_once_with( url='http://test', auth=auth.HTTPBasicAuth("me", "team"), json=mock.ANY) def test_jsonrpc_init_with_ca(self): foofile = tempfile.TemporaryFile() fake_url = "https://foo.bar/" fake_credentials = ('fakeuser', 'fakepwd') fake_cert_file = tempfile.TemporaryFile() fake_key_file = tempfile.TemporaryFile() self.rpc = jsonrpc.JsonRpc(url=fake_url, user_credentials=fake_credentials, ca_file=foofile, key_file=fake_key_file, cert_file=fake_cert_file) self.assertEqual("https", self.rpc._url_scheme) self.assertEqual(fake_url, self.rpc._url) self.assertEqual(foofile, self.rpc._ca_file) self.assertEqual(fake_cert_file, self.rpc._cert_file) self.assertEqual(fake_key_file, self.rpc._key_file) @mock.patch.object(jsonrpc.LOG, "warning") def test_jsonrpc_init_without_ca(self, mock_warning): self.rpc = jsonrpc.JsonRpc("https://foo.bar/", ('fakeuser', 'fakepwd'), None) mock_warning.assert_called_once_with( "Will not verify the server certificate of the API service" " because the CA certificate is not available.") def test_jsonrpc_init_no_ssl(self): self.rpc = jsonrpc.JsonRpc("http://foo.bar/", ('fakeuser', 'fakepwd')) self.assertEqual("http", self.rpc._url_scheme) @mock.patch.object(requests, "post", return_value=FakeResponse( 200, {"result": "Sweet gorilla of Manila"})) def test_successful_call(self, mock_req_get): result = self.rpc.call('method', {'param': 'value'}) mock_req_get.assert_called_once_with( url=self.rpc._url, json=mock.ANY, # not checking here as of undefined order in dict auth=self.rpc._credentials) self.assertEqual("Sweet gorilla of Manila", result) @mock.patch.object(requests, "post", return_value=FakeResponse( 200, {"result": "Sweet gorilla of Manila"})) def test_https_call_with_cert(self, mock_req_get): fake_cert_file = tempfile.TemporaryFile() fake_key_file = tempfile.TemporaryFile() self.rpc = jsonrpc.JsonRpc(url="https://test", user_credentials=("me", "team"), cert_file=fake_cert_file, key_file=fake_key_file) result = self.rpc.call('method', {'param': 'value'}) mock_req_get.assert_called_once_with( url=self.rpc._url, json=mock.ANY, # not checking here as of undefined order in dict auth=self.rpc._credentials, verify=False, cert=(fake_cert_file, fake_key_file)) self.assertEqual("Sweet gorilla of Manila", result) @mock.patch.object(requests, "post", return_value=FakeResponse( 200, {"result": "Sweet gorilla of Manila"})) def test_https_call_verify(self, mock_req_get): fake_ca_file = tempfile.TemporaryFile() self.rpc = jsonrpc.JsonRpc(url="https://test", user_credentials=("me", "team"), ca_file=fake_ca_file) result = self.rpc.call('method', {'param': 'value'}) mock_req_get.assert_called_once_with( url=self.rpc._url, 
json=mock.ANY, # not checking here as of undefined order in dict auth=self.rpc._credentials, verify=fake_ca_file) self.assertEqual("Sweet gorilla of Manila", result) @mock.patch.object(jsonrpc.JsonRpc, "_checked_for_application_error", return_value="Sweet gorilla of Manila") @mock.patch.object(requests, "post", return_value=FakeResponse( 200, {"result": "Sweet gorilla of Manila"})) def test_https_call_verify_expected_error(self, mock_req_get, mock_check): fake_ca_file = tempfile.TemporaryFile() self.rpc = jsonrpc.JsonRpc(url="https://test", user_credentials=("me", "team"), ca_file=fake_ca_file) result = self.rpc.call('method', {'param': 'value'}, expected_errors=[42]) mock_req_get.assert_called_once_with( url=self.rpc._url, json=mock.ANY, # not checking here as of undefined order in dict auth=self.rpc._credentials, verify=fake_ca_file) mock_check.assert_called_once_with( {'result': 'Sweet gorilla of Manila'}, [42]) self.assertEqual("Sweet gorilla of Manila", result) @mock.patch.object(requests, "post", side_effect=exceptions.HTTPError) def test_jsonrpc_call_http_exception(self, req_get_mock): self.assertRaises(exceptions.HTTPError, self.rpc.call, 'method', {'param': 'value'}) req_get_mock.assert_called_once_with( url=self.rpc._url, json=mock.ANY, # not checking here as of undefined order in dict auth=self.rpc._credentials) @mock.patch.object(requests, "post", return_value=FakeResponse( 200, {"error": {"code": 28, "message": "text"}})) def test_application_error(self, req_get_mock): self.assertRaises(exception.QBRpcException, self.rpc.call, 'method', {'param': 'value'}) req_get_mock.assert_called_once_with( url=self.rpc._url, json=mock.ANY, # not checking here as of undefined order in dict auth=self.rpc._credentials) def test_checked_for_application_error(self): resultdict = {"result": "Sweet gorilla of Manila"} self.assertEqual("Sweet gorilla of Manila", (self.rpc._checked_for_application_error( result=resultdict))) def test_checked_for_application_error_enf(self): resultdict = {"result": "Sweet gorilla of Manila", "error": {"message": "No Gorilla", "code": jsonrpc.ERROR_ENTITY_NOT_FOUND}} self.assertIsNone( self.rpc._checked_for_application_error( result=resultdict, expected_errors=[jsonrpc.ERROR_ENTITY_NOT_FOUND])) def test_checked_for_application_error_no_entry(self): resultdict = {"result": "Sweet gorilla of Manila", "error": {"message": "No Gorilla", "code": jsonrpc.ERROR_ENOENT}} self.assertIsNone( self.rpc._checked_for_application_error( result=resultdict, expected_errors=[jsonrpc.ERROR_ENOENT])) def test_checked_for_application_error_exception(self): self.assertRaises(exception.QBRpcException, self.rpc._checked_for_application_error, {"result": "Sweet gorilla of Manila", "error": {"message": "No Gorilla", "code": 666 } } ) manila-10.0.0/manila/tests/share/drivers/quobyte/test_quobyte.py0000664000175000017500000007321613656750227025055 0ustar zuulzuul00000000000000# Copyright (c) 2015 Quobyte, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
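# Unit tests for the Quobyte share driver
# (manila.share.drivers.quobyte.quobyte). fake_rpc_handler below emulates the
# backend JSON-RPC replies (resolveVolumeName, createVolume, exportVolume,
# getConfiguration) so that share creation/deletion, access-rule updates and
# capacity reporting can be verified without a live Quobyte API endpoint.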
from unittest import mock from oslo_config import cfg from oslo_utils import units import six from manila import context from manila import exception from manila.share import configuration as config from manila.share import driver from manila.share.drivers.quobyte import jsonrpc from manila.share.drivers.quobyte import quobyte from manila import test from manila.tests import fake_share CONF = cfg.CONF def fake_rpc_handler(name, *args, **kwargs): if name == 'resolveVolumeName': return None elif name == 'createVolume': return {'volume_uuid': 'voluuid'} elif name == 'exportVolume': return {'nfs_server_ip': 'fake_location', 'nfs_export_path': '/fake_share'} elif name == 'getConfiguration': return { "tenant_configuration": [{ "domain_name": "fake_domain_name", "volume_access": [ {"volume_uuid": "fake_id_1", "restrict_to_network": "10.0.0.1", "read_only": False}, {"volume_uuid": "fake_id_1", "restrict_to_network": "10.0.0.2", "read_only": False}, {"volume_uuid": "fake_id_2", "restrict_to_network": "10.0.0.3", "read_only": False} ]}, {"domain_name": "fake_domain_name_2", "volume_access": [ {"volume_uuid": "fake_id_3", "restrict_to_network": "10.0.0.4", "read_only": False}, {"volume_uuid": "fake_id_3", "restrict_to_network": "10.0.0.5", "read_only": True}, {"volume_uuid": "fake_id_4", "restrict_to_network": "10.0.0.6", "read_only": False} ]} ] } else: return "Unknown fake rpc handler call" def create_fake_access(access_adr, access_id='fake_access_id', access_type='ip', access_level='rw'): return { 'access_id': access_id, 'access_type': access_type, 'access_to': access_adr, 'access_level': access_level } class QuobyteShareDriverTestCase(test.TestCase): """Tests QuobyteShareDriver.""" def setUp(self): super(QuobyteShareDriverTestCase, self).setUp() self._context = context.get_admin_context() CONF.set_default('driver_handles_share_servers', False) self.fake_conf = config.Configuration(None) self._driver = quobyte.QuobyteShareDriver(configuration=self.fake_conf) self._driver.rpc = mock.Mock() self.share = fake_share.fake_share( share_proto='NFS', export_location='fake_location:/quobyte/fake_share') self.access = fake_share.fake_access() @mock.patch('manila.share.drivers.quobyte.jsonrpc.JsonRpc', mock.Mock()) def test_do_setup_success(self): self._driver.rpc.call = mock.Mock(return_value=None) self._driver.do_setup(self._context) self._driver.rpc.call.assert_called_with('getInformation', {}) @mock.patch('manila.share.drivers.quobyte.jsonrpc.JsonRpc.__init__', mock.Mock(return_value=None)) @mock.patch.object(jsonrpc.JsonRpc, 'call', side_effect=exception.QBRpcException) def test_do_setup_failure(self, mock_call): self.assertRaises(exception.QBException, self._driver.do_setup, self._context) @mock.patch.object(quobyte.QuobyteShareDriver, "_resize_share") def test_create_share_new_volume(self, qb_resize_mock): self._driver.rpc.call = mock.Mock(wraps=fake_rpc_handler) result = self._driver.create_share(self._context, self.share) self.assertEqual(self.share['export_location'], result) self._driver.rpc.call.assert_has_calls([ mock.call('createVolume', dict( name=self.share['name'], tenant_domain=self.share['project_id'], root_user_id=self.fake_conf.quobyte_default_volume_user, root_group_id=self.fake_conf.quobyte_default_volume_group, configuration_name=self.fake_conf.quobyte_volume_configuration )), mock.call('exportVolume', dict(protocol='NFS', volume_uuid='voluuid'))]) qb_resize_mock.assert_called_once_with(self.share, self.share['size']) @mock.patch.object(quobyte.QuobyteShareDriver, "_resize_share") def 
test_create_share_existing_volume(self, qb_resize_mock): self._driver.rpc.call = mock.Mock(wraps=fake_rpc_handler) result = self._driver.create_share(self._context, self.share) self.assertEqual(self.share['export_location'], result) resolv_params = {'tenant_domain': 'fake_project_uuid', 'volume_name': 'fakename'} sett_params = {'tenant': {'tenant_id': 'fake_project_uuid'}} create_params = dict( name='fakename', tenant_domain='fake_project_uuid', root_user_id='root', root_group_id='root', configuration_name='BASE') self._driver.rpc.call.assert_has_calls([ mock.call('resolveVolumeName', resolv_params, [jsonrpc.ERROR_ENOENT, jsonrpc.ERROR_ENTITY_NOT_FOUND]), mock.call('setTenant', sett_params, expected_errors=[jsonrpc.ERROR_GARBAGE_ARGS]), mock.call('createVolume', create_params), mock.call('exportVolume', dict(protocol='NFS', volume_uuid='voluuid'))]) qb_resize_mock.assert_called_once_with(self.share, self.share['size']) def test_create_share_wrong_protocol(self): share = {'share_proto': 'WRONG_PROTOCOL'} self.assertRaises(exception.QBException, self._driver.create_share, context=None, share=share) def test_delete_share_existing_volume(self): def rpc_handler(name, *args): if name == 'resolveVolumeName': return {'volume_uuid': 'voluuid'} elif name == 'exportVolume': return {} self._driver.configuration.quobyte_delete_shares = True self._driver.rpc.call = mock.Mock(wraps=rpc_handler) self._driver.delete_share(self._context, self.share) resolv_params = {'volume_name': 'fakename', 'tenant_domain': 'fake_project_uuid'} self._driver.rpc.call.assert_has_calls([ mock.call('resolveVolumeName', resolv_params, [jsonrpc.ERROR_ENOENT, jsonrpc.ERROR_ENTITY_NOT_FOUND]), mock.call('deleteVolume', {'volume_uuid': 'voluuid'})]) def test_delete_share_existing_volume_disabled(self): def rpc_handler(name, *args): if name == 'resolveVolumeName': return {'volume_uuid': 'voluuid'} elif name == 'exportVolume': return {} CONF.set_default('quobyte_delete_shares', False) self._driver.rpc.call = mock.Mock(wraps=rpc_handler) self._driver.delete_share(self._context, self.share) self._driver.rpc.call.assert_called_with( 'exportVolume', {'volume_uuid': 'voluuid', 'remove_export': True}) @mock.patch.object(quobyte.LOG, 'warning') def test_delete_share_nonexisting_volume(self, mock_warning): def rpc_handler(name, *args): if name == 'resolveVolumeName': return None self._driver.rpc.call = mock.Mock(wraps=rpc_handler) self._driver.delete_share(self._context, self.share) mock_warning.assert_called_with( 'No volume found for share %(project_id)s/%(name)s', {'project_id': 'fake_project_uuid', 'name': 'fakename'}) def test_allow_access(self): def rpc_handler(name, *args): if name == 'resolveVolumeName': return {'volume_uuid': 'voluuid'} elif name == 'exportVolume': return {'nfs_server_ip': '10.10.1.1', 'nfs_export_path': '/voluuid'} self._driver.rpc.call = mock.Mock(wraps=rpc_handler) self._driver._allow_access(self._context, self.share, self.access) exp_params = {'volume_uuid': 'voluuid', 'read_only': False, 'add_allow_ip': '10.0.0.1'} self._driver.rpc.call.assert_called_with('exportVolume', exp_params) def test_allow_ro_access(self): def rpc_handler(name, *args): if name == 'resolveVolumeName': return {'volume_uuid': 'voluuid'} elif name == 'exportVolume': return {'nfs_server_ip': '10.10.1.1', 'nfs_export_path': '/voluuid'} self._driver.rpc.call = mock.Mock(wraps=rpc_handler) ro_access = fake_share.fake_access(access_level='ro') self._driver._allow_access(self._context, self.share, ro_access) exp_params = {'volume_uuid': 
'voluuid', 'read_only': True, 'add_allow_ip': '10.0.0.1'} self._driver.rpc.call.assert_called_with('exportVolume', exp_params) def test_allow_access_nonip(self): self._driver.rpc.call = mock.Mock(wraps=fake_rpc_handler) self.access = fake_share.fake_access(**{"access_type": "non_existant_access_type"}) self.assertRaises(exception.InvalidShareAccess, self._driver._allow_access, self._context, self.share, self.access) def test_deny_access(self): def rpc_handler(name, *args): if name == 'resolveVolumeName': return {'volume_uuid': 'voluuid'} elif name == 'exportVolume': return {'nfs_server_ip': '10.10.1.1', 'nfs_export_path': '/voluuid'} self._driver.rpc.call = mock.Mock(wraps=rpc_handler) self._driver._deny_access(self._context, self.share, self.access) self._driver.rpc.call.assert_called_with( 'exportVolume', {'volume_uuid': 'voluuid', 'remove_allow_ip': '10.0.0.1'}) @mock.patch.object(quobyte.LOG, 'debug') def test_deny_access_nonip(self, mock_debug): self._driver.rpc.call = mock.Mock(wraps=fake_rpc_handler) self.access = fake_share.fake_access( access_type="non_existant_access_type") self._driver._deny_access(self._context, self.share, self.access) mock_debug.assert_called_with( 'Quobyte driver only supports ip access control. ' 'Ignoring deny access call for %s , %s', 'fakename', 'fake_project_uuid') def test_resolve_volume_name(self): self._driver.rpc.call = mock.Mock( return_value={'volume_uuid': 'fake_uuid'}) self._driver._resolve_volume_name('fake_vol_name', 'fake_domain_name') exp_params = {'volume_name': 'fake_vol_name', 'tenant_domain': 'fake_domain_name'} self._driver.rpc.call.assert_called_with( 'resolveVolumeName', exp_params, [jsonrpc.ERROR_ENOENT, jsonrpc.ERROR_ENTITY_NOT_FOUND]) def test_resolve_volume_name_NOENT(self): self._driver.rpc.call = mock.Mock( return_value=None) self.assertIsNone( self._driver._resolve_volume_name('fake_vol_name', 'fake_domain_name')) self._driver.rpc.call.assert_called_once_with( 'resolveVolumeName', dict(volume_name='fake_vol_name', tenant_domain='fake_domain_name'), [jsonrpc.ERROR_ENOENT, jsonrpc.ERROR_ENTITY_NOT_FOUND] ) def test_resolve_volume_name_other_error(self): self._driver.rpc.call = mock.Mock( side_effect=exception.QBRpcException( result='fubar', qbcode=666)) self.assertRaises(exception.QBRpcException, self._driver._resolve_volume_name, volume_name='fake_vol_name', tenant_domain='fake_domain_name') @mock.patch.object(driver.ShareDriver, '_update_share_stats') def test_update_share_stats(self, mock_uss): self._driver._get_capacities = mock.Mock(return_value=[42, 23]) self._driver._update_share_stats() mock_uss.assert_called_once_with( dict(storage_protocol='NFS', vendor_name='Quobyte', share_backend_name=self._driver.backend_name, driver_version=self._driver.DRIVER_VERSION, total_capacity_gb=42, free_capacity_gb=23, reserved_percentage=0)) def test_get_capacities_gb(self): capval = 42115548133 useval = 19695128917 replfact = 3 self._driver._get_qb_replication_factor = mock.Mock( return_value=replfact) self._driver.rpc.call = mock.Mock( return_value={'total_physical_capacity': six.text_type(capval), 'total_physical_usage': six.text_type(useval)}) self.assertEqual((39.223160718, 6.960214182), self._driver._get_capacities()) def test_get_capacities_gb_full(self): capval = 1024 * 1024 * 1024 * 3 useval = 1024 * 1024 * 1024 * 3 + 1 replfact = 1 self._driver._get_qb_replication_factor = mock.Mock( return_value=replfact) self._driver.rpc.call = mock.Mock( return_value={'total_physical_capacity': six.text_type(capval), 'total_physical_usage': 
six.text_type(useval)}) self.assertEqual((3.0, 0), self._driver._get_capacities()) def test_get_replication(self): fakerepl = 42 self._driver.configuration.quobyte_volume_configuration = 'fakeVolConf' self._driver.rpc.call = mock.Mock( return_value={'configuration': {'volume_metadata_configuration': {'replication_factor': six.text_type(fakerepl)}}}) self.assertEqual(fakerepl, self._driver._get_qb_replication_factor()) @mock.patch.object(quobyte.QuobyteShareDriver, "_resolve_volume_name", return_value="fake_uuid") def test_ensure_share(self, mock_qb_resolve_volname): self._driver.rpc.call = mock.Mock(wraps=fake_rpc_handler) result = self._driver.ensure_share(self._context, self.share, None) self.assertEqual(self.share["export_location"], result) (mock_qb_resolve_volname. assert_called_once_with(self.share['name'], self.share['project_id'])) self._driver.rpc.call.assert_has_calls([ mock.call('exportVolume', dict( volume_uuid="fake_uuid", protocol='NFS' ))]) @mock.patch.object(quobyte.QuobyteShareDriver, "_resolve_volume_name", return_value=None) def test_ensure_deleted_share(self, mock_qb_resolve_volname): self._driver.rpc.call = mock.Mock(wraps=fake_rpc_handler) self.assertRaises(exception.ShareResourceNotFound, self._driver.ensure_share, self._context, self.share, None) (mock_qb_resolve_volname. assert_called_once_with(self.share['name'], self.share['project_id'])) @mock.patch.object(quobyte.QuobyteShareDriver, "_resize_share") def test_extend_share(self, mock_qsd_resize_share): self._driver.extend_share(ext_share=self.share, ext_size=2, share_server=None) mock_qsd_resize_share.assert_called_once_with(share=self.share, new_size=2) @mock.patch.object(quobyte.QuobyteShareDriver, "_resolve_volume_name", return_value="fake_volume_uuid") def test_resize_share(self, mock_qb_resolv): self._driver.rpc.call = mock.Mock(wraps=fake_rpc_handler) manila_size = 7 newsize_bytes = manila_size * units.Gi self._driver._resize_share(share=self.share, new_size=manila_size) exp_params = { "quotas": [{ "consumer": [{ "type": "VOLUME", "identifier": "fake_volume_uuid", "tenant_id": self.share["project_id"] }], "limits": [{ "type": "LOGICAL_DISK_SPACE", "value": newsize_bytes, }], }]} self._driver.rpc.call.assert_has_calls([ mock.call('setQuota', exp_params)]) mock_qb_resolv.assert_called_once_with(self.share['name'], self.share['project_id']) @mock.patch.object(quobyte.QuobyteShareDriver, "_resolve_volume_name", return_value="fake_id_3") def test_fetch_existing_access(self, mock_qb_resolve_volname): self._driver.rpc.call = mock.Mock(wraps=fake_rpc_handler) old_access_1 = create_fake_access(access_id="old_1", access_adr="10.0.0.4") old_access_2 = create_fake_access(access_id="old_2", access_adr="10.0.0.5") exist_list = self._driver._fetch_existing_access(context=self._context, share=self.share) # assert expected result here self.assertEqual([old_access_1['access_to'], old_access_2['access_to']], [e.get('access_to') for e in exist_list]) (mock_qb_resolve_volname. 
assert_called_once_with(self.share['name'], self.share['project_id'])) @mock.patch.object(quobyte.QuobyteShareDriver, "_resize_share") def test_shrink_share(self, mock_qsd_resize_share): self._driver.shrink_share(shrink_share=self.share, shrink_size=3, share_server=None) mock_qsd_resize_share.assert_called_once_with(share=self.share, new_size=3) def test_subtract_access_lists(self): access_1 = create_fake_access(access_id="new_1", access_adr="10.0.0.5", access_type="rw",) access_2 = create_fake_access(access_id="old_1", access_adr="10.0.0.1", access_type="rw") access_3 = create_fake_access(access_id="old_2", access_adr="10.0.0.3", access_type="ro") access_4 = create_fake_access(access_id="new_2", access_adr="10.0.0.6", access_type="rw") access_5 = create_fake_access(access_id="old_3", access_adr="10.0.0.4", access_type="rw") min_list = [access_1, access_2, access_3, access_4] sub_list = [access_5, access_3, access_2] self.assertEqual([access_1, access_4], self._driver._subtract_access_lists(min_list, sub_list)) def test_subtract_access_lists_level(self): access_1 = create_fake_access(access_id="new_1", access_adr="10.0.0.5", access_level="rw") access_2 = create_fake_access(access_id="old_1", access_adr="10.0.0.1", access_level="rw") access_3 = create_fake_access(access_id="old_2", access_adr="10.0.0.3", access_level="rw") access_4 = create_fake_access(access_id="new_2", access_adr="10.0.0.6", access_level="rw") access_5 = create_fake_access(access_id="old_2_ro", access_adr="10.0.0.3", access_level="ro") min_list = [access_1, access_2, access_3, access_4] sub_list = [access_5, access_2] self.assertEqual([access_1, access_3, access_4], self._driver._subtract_access_lists(min_list, sub_list)) def test_subtract_access_lists_type(self): access_1 = create_fake_access(access_id="new_1", access_adr="10.0.0.5", access_type="ip") access_2 = create_fake_access(access_id="old_1", access_adr="10.0.0.1", access_type="ip") access_3 = create_fake_access(access_id="old_2", access_adr="10.0.0.3", access_type="ip") access_4 = create_fake_access(access_id="new_2", access_adr="10.0.0.6", access_type="ip") access_5 = create_fake_access(access_id="old_2_ro", access_adr="10.0.0.3", access_type="other") min_list = [access_1, access_2, access_3, access_4] sub_list = [access_5, access_2] self.assertEqual([access_1, access_3, access_4], self._driver._subtract_access_lists(min_list, sub_list)) @mock.patch.object(quobyte.QuobyteShareDriver, "_allow_access") @mock.patch.object(quobyte.QuobyteShareDriver, "_deny_access") def test_update_access_add_delete(self, qb_deny_mock, qb_allow_mock): access_1 = create_fake_access(access_id="new_1", access_adr="10.0.0.5", access_level="rw") access_2 = create_fake_access(access_id="old_1", access_adr="10.0.0.1", access_level="rw") access_3 = create_fake_access(access_id="old_2", access_adr="10.0.0.3", access_level="rw") self._driver.update_access(self._context, self.share, access_rules=None, add_rules=[access_1], delete_rules=[access_2, access_3]) qb_allow_mock.assert_called_once_with(self._context, self.share, access_1) deny_calls = [mock.call(self._context, self.share, access_2), mock.call(self._context, self.share, access_3)] qb_deny_mock.assert_has_calls(deny_calls) @mock.patch.object(quobyte.LOG, "warning") def test_update_access_no_rules(self, qb_log_mock): self._driver.update_access(context=None, share=None, access_rules=[], add_rules=[], delete_rules=[]) qb_log_mock.assert_has_calls([mock.ANY]) @mock.patch.object(quobyte.QuobyteShareDriver, "_subtract_access_lists") 
@mock.patch.object(quobyte.QuobyteShareDriver, "_fetch_existing_access") @mock.patch.object(quobyte.QuobyteShareDriver, "_allow_access") def test_update_access_recovery_additionals(self, qb_allow_mock, qb_exist_mock, qb_subtr_mock): new_access_1 = create_fake_access(access_id="new_1", access_adr="10.0.0.2") old_access = create_fake_access(access_id="fake_access_id", access_adr="10.0.0.1") new_access_2 = create_fake_access(access_id="new_2", access_adr="10.0.0.3") add_access_rules = [new_access_1, old_access, new_access_2] qb_exist_mock.return_value = [old_access] qb_subtr_mock.side_effect = [[new_access_1, new_access_2], []] self._driver.update_access(self._context, self.share, access_rules=add_access_rules, add_rules=[], delete_rules=[]) assert_calls = [mock.call(self._context, self.share, new_access_1), mock.call(self._context, self.share, new_access_2)] qb_allow_mock.assert_has_calls(assert_calls, any_order=True) qb_exist_mock.assert_called_once_with(self._context, self.share) @mock.patch.object(quobyte.QuobyteShareDriver, "_subtract_access_lists") @mock.patch.object(quobyte.QuobyteShareDriver, "_fetch_existing_access") @mock.patch.object(quobyte.QuobyteShareDriver, "_deny_access") def test_update_access_recovery_superfluous(self, qb_deny_mock, qb_exist_mock, qb_subtr_mock): old_access_1 = create_fake_access(access_id="old_1", access_adr="10.0.0.1") missing_access_1 = create_fake_access(access_id="mis_1", access_adr="10.0.0.2") old_access_2 = create_fake_access(access_id="old_2", access_adr="10.0.0.3") qb_exist_mock.side_effect = [[old_access_1, old_access_2]] qb_subtr_mock.side_effect = [[], [missing_access_1]] old_access_rules = [old_access_1, old_access_2] self._driver.update_access(self._context, self.share, access_rules=old_access_rules, add_rules=[], delete_rules=[]) qb_deny_mock.assert_called_once_with(self._context, self.share, (missing_access_1)) qb_exist_mock.assert_called_once_with(self._context, self.share) @mock.patch.object(quobyte.QuobyteShareDriver, "_subtract_access_lists") @mock.patch.object(quobyte.QuobyteShareDriver, "_fetch_existing_access") @mock.patch.object(quobyte.QuobyteShareDriver, "_deny_access") @mock.patch.object(quobyte.QuobyteShareDriver, "_allow_access") def test_update_access_recovery_add_superfluous(self, qb_allow_mock, qb_deny_mock, qb_exist_mock, qb_subtr_mock): new_access_1 = create_fake_access(access_id="new_1", access_adr="10.0.0.5") old_access_1 = create_fake_access(access_id="old_1", access_adr="10.0.0.1") old_access_2 = create_fake_access(access_id="old_2", access_adr="10.0.0.3") old_access_3 = create_fake_access(access_id="old_3", access_adr="10.0.0.4") miss_access_1 = create_fake_access(access_id="old_3", access_adr="10.0.0.4") new_access_2 = create_fake_access(access_id="new_2", access_adr="10.0.0.3", access_level="ro") new_access_rules = [new_access_1, old_access_1, old_access_2, old_access_3, new_access_2] qb_exist_mock.return_value = [old_access_1, old_access_2, old_access_3, miss_access_1] qb_subtr_mock.side_effect = [[new_access_1, new_access_2], [miss_access_1, old_access_2]] self._driver.update_access(self._context, self.share, new_access_rules, add_rules=[], delete_rules=[]) a_calls = [mock.call(self._context, self.share, new_access_1), mock.call(self._context, self.share, new_access_2)] qb_allow_mock.assert_has_calls(a_calls) b_calls = [mock.call(self._context, self.share, miss_access_1), mock.call(self._context, self.share, old_access_2)] qb_deny_mock.assert_has_calls(b_calls) qb_exist_mock.assert_called_once_with(self._context, 
self.share) manila-10.0.0/manila/tests/share/drivers/hpe/0000775000175000017500000000000013656750362021027 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/hpe/__init__.py0000664000175000017500000000000013656750227023126 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/hpe/test_hpe_3par_driver.py0000664000175000017500000012105313656750227025516 0ustar zuulzuul00000000000000# Copyright 2015 Hewlett Packard Enterprise Development LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from copy import deepcopy import sys from unittest import mock import ddt if 'hpe3parclient' not in sys.modules: sys.modules['hpe3parclient'] = mock.Mock() from manila import exception from manila.share.drivers.hpe import hpe_3par_driver as hpe3pardriver from manila.share.drivers.hpe import hpe_3par_mediator as hpe3parmediator from manila import test from manila.tests.share.drivers.hpe import test_hpe_3par_constants as constants @ddt.ddt class HPE3ParDriverFPGTestCase(test.TestCase): @ddt.data((-1, 4), (0, 5), (0, -1)) @ddt.unpack def test_FPG_init_args_failure(self, min_ip, max_ip): self.assertRaises(exception.HPE3ParInvalid, hpe3pardriver.FPG, min_ip, max_ip) @ddt.data(('invalid_ip_fpg, 10.256.0.1', 0, 4), (None, 0, 4), (' ', 0, 4), ('', 0, 4), ('max_ip_fpg, 10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.4, 10.0.0.5', 0, 4), ('min_1_ip_fpg', 1, 4)) @ddt.unpack def test_FPG_type_failures(self, value, min_ip, max_ip): fpg_type_obj = hpe3pardriver.FPG(min_ip=min_ip, max_ip=max_ip) self.assertRaises(exception.HPE3ParInvalid, fpg_type_obj, value) @ddt.data(('samplefpg, 10.0.0.1', {'samplefpg': ['10.0.0.1']}), ('samplefpg', {'samplefpg': []}), ('samplefpg, 10.0.0.1, 10.0.0.2', {'samplefpg': ['10.0.0.1', '10.0.0.2']})) @ddt.unpack def test_FPG_type_success(self, value, expected_fpg): fpg_type_obj = hpe3pardriver.FPG() fpg = fpg_type_obj(value) self.assertEqual(expected_fpg, fpg) @ddt.ddt class HPE3ParDriverTestCase(test.TestCase): def setUp(self): super(HPE3ParDriverTestCase, self).setUp() # Create a mock configuration with attributes and a safe_get() self.conf = mock.Mock() self.conf.driver_handles_share_servers = True self.conf.hpe3par_debug = constants.EXPECTED_HPE_DEBUG self.conf.hpe3par_username = constants.USERNAME self.conf.hpe3par_password = constants.PASSWORD self.conf.hpe3par_api_url = constants.API_URL self.conf.hpe3par_san_login = constants.SAN_LOGIN self.conf.hpe3par_san_password = constants.SAN_PASSWORD self.conf.hpe3par_san_ip = constants.EXPECTED_IP_1234 self.conf.hpe3par_fpg = constants.EXPECTED_FPG_CONF self.conf.hpe3par_san_ssh_port = constants.PORT self.conf.ssh_conn_timeout = constants.TIMEOUT self.conf.hpe3par_fstore_per_share = False self.conf.hpe3par_require_cifs_ip = False self.conf.hpe3par_cifs_admin_access_username = constants.USERNAME, self.conf.hpe3par_cifs_admin_access_password = constants.PASSWORD, self.conf.hpe3par_cifs_admin_access_domain = ( constants.EXPECTED_CIFS_DOMAIN), self.conf.hpe3par_share_mount_path = constants.EXPECTED_MOUNT_PATH, self.conf.my_ip = 
constants.EXPECTED_IP_1234 self.conf.network_config_group = 'test_network_config_group' self.conf.admin_network_config_group = ( 'test_admin_network_config_group') self.conf.filter_function = None self.conf.goodness_function = None def safe_get(attr): try: return self.conf.__getattribute__(attr) except AttributeError: return None self.conf.safe_get = safe_get self.real_hpe_3par_mediator = hpe3parmediator.HPE3ParMediator self.mock_object(hpe3parmediator, 'HPE3ParMediator') self.mock_mediator_constructor = hpe3parmediator.HPE3ParMediator self.mock_mediator = self.mock_mediator_constructor() # restore needed static methods self.mock_mediator.ensure_supported_protocol = ( self.real_hpe_3par_mediator.ensure_supported_protocol) self.mock_mediator.build_export_locations = ( self.real_hpe_3par_mediator.build_export_locations) self.driver = hpe3pardriver.HPE3ParShareDriver( configuration=self.conf) def test_driver_setup_success(self, get_vfs_ret_val=constants.EXPECTED_GET_VFS): """Driver do_setup without any errors.""" self.mock_mediator.get_vfs.return_value = get_vfs_ret_val self.driver.do_setup(None) conf = self.conf self.mock_mediator_constructor.assert_has_calls([ mock.call(hpe3par_san_ssh_port=conf.hpe3par_san_ssh_port, hpe3par_san_password=conf.hpe3par_san_password, hpe3par_username=conf.hpe3par_username, hpe3par_san_login=conf.hpe3par_san_login, hpe3par_debug=conf.hpe3par_debug, hpe3par_api_url=conf.hpe3par_api_url, hpe3par_password=conf.hpe3par_password, hpe3par_san_ip=conf.hpe3par_san_ip, hpe3par_fstore_per_share=conf.hpe3par_fstore_per_share, hpe3par_require_cifs_ip=conf.hpe3par_require_cifs_ip, hpe3par_cifs_admin_access_username=( conf.hpe3par_cifs_admin_access_username), hpe3par_cifs_admin_access_password=( conf.hpe3par_cifs_admin_access_password), hpe3par_cifs_admin_access_domain=( conf.hpe3par_cifs_admin_access_domain), hpe3par_share_mount_path=conf.hpe3par_share_mount_path, my_ip=self.conf.my_ip, ssh_conn_timeout=conf.ssh_conn_timeout)]) self.mock_mediator.assert_has_calls([ mock.call.do_setup(), mock.call.get_vfs(constants.EXPECTED_FPG)]) def test_driver_setup_dhss_success(self): """Driver do_setup without any errors with dhss=True.""" self.test_driver_setup_success() self.assertEqual(constants.EXPECTED_FPG_MAP, self.driver.fpgs) def test_driver_setup_no_dhss_success(self): """Driver do_setup without any errors with dhss=False.""" self.conf.driver_handles_share_servers = False self.test_driver_setup_success() self.assertEqual(constants.EXPECTED_FPG_MAP, self.driver.fpgs) def test_driver_setup_no_dhss_multi_getvfs_success(self): """Driver do_setup when dhss=False, getvfs returns multiple IPs.""" self.conf.driver_handles_share_servers = False self.test_driver_setup_success( get_vfs_ret_val=constants.EXPECTED_GET_VFS_MULTIPLES) self.assertEqual(constants.EXPECTED_FPG_MAP, self.driver.fpgs) def test_driver_setup_success_no_dhss_no_conf_ss_ip(self): """test driver's do_setup() Driver do_setup with dhss=False, share server ip not set in config file but discoverable at 3par array """ self.conf.driver_handles_share_servers = False # ss ip not provided in conf original_fpg = deepcopy(self.conf.hpe3par_fpg) self.conf.hpe3par_fpg[0][constants.EXPECTED_FPG] = [] self.test_driver_setup_success() self.assertEqual(constants.EXPECTED_FPG_MAP, self.driver.fpgs) constants.EXPECTED_FPG_CONF = original_fpg def test_driver_setup_failure_no_dhss_no_conf_ss_ip(self): """Configured IP address is required for dhss=False.""" self.conf.driver_handles_share_servers = False # ss ip not provided in conf 
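# Neither the manila configuration nor the array-side VFS reports a
# share-server IP in this scenario, so do_setup() is expected to raise
# HPE3ParInvalid.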
fpg_without_ss_ip = deepcopy(self.conf.hpe3par_fpg) self.conf.hpe3par_fpg[0][constants.EXPECTED_FPG] = [] # ss ip not configured on array vfs_without_ss_ip = deepcopy(constants.EXPECTED_GET_VFS) vfs_without_ss_ip['vfsip']['address'] = [] self.mock_mediator.get_vfs.return_value = vfs_without_ss_ip self.assertRaises(exception.HPE3ParInvalid, self.driver.do_setup, None) constants.EXPECTED_FPG_CONF = fpg_without_ss_ip def test_driver_setup_mediator_error(self): """Driver do_setup when the mediator setup fails.""" self.mock_mediator.do_setup.side_effect = ( exception.ShareBackendException('fail')) self.assertRaises(exception.ShareBackendException, self.driver.do_setup, None) conf = self.conf self.mock_mediator_constructor.assert_has_calls([ mock.call(hpe3par_san_ssh_port=conf.hpe3par_san_ssh_port, hpe3par_san_password=conf.hpe3par_san_password, hpe3par_username=conf.hpe3par_username, hpe3par_san_login=conf.hpe3par_san_login, hpe3par_debug=conf.hpe3par_debug, hpe3par_api_url=conf.hpe3par_api_url, hpe3par_password=conf.hpe3par_password, hpe3par_san_ip=conf.hpe3par_san_ip, hpe3par_fstore_per_share=conf.hpe3par_fstore_per_share, hpe3par_require_cifs_ip=conf.hpe3par_require_cifs_ip, hpe3par_cifs_admin_access_username=( conf.hpe3par_cifs_admin_access_username), hpe3par_cifs_admin_access_password=( conf.hpe3par_cifs_admin_access_password), hpe3par_cifs_admin_access_domain=( conf.hpe3par_cifs_admin_access_domain), hpe3par_share_mount_path=conf.hpe3par_share_mount_path, my_ip=self.conf.my_ip, ssh_conn_timeout=conf.ssh_conn_timeout)]) self.mock_mediator.assert_has_calls([mock.call.do_setup()]) def test_driver_setup_with_vfs_error(self): """Driver do_setup when the get_vfs fails.""" self.mock_mediator.get_vfs.side_effect = ( exception.ShareBackendException('fail')) self.assertRaises(exception.ShareBackendException, self.driver.do_setup, None) conf = self.conf self.mock_mediator_constructor.assert_has_calls([ mock.call(hpe3par_san_ssh_port=conf.hpe3par_san_ssh_port, hpe3par_san_password=conf.hpe3par_san_password, hpe3par_username=conf.hpe3par_username, hpe3par_san_login=conf.hpe3par_san_login, hpe3par_debug=conf.hpe3par_debug, hpe3par_api_url=conf.hpe3par_api_url, hpe3par_password=conf.hpe3par_password, hpe3par_san_ip=conf.hpe3par_san_ip, hpe3par_fstore_per_share=conf.hpe3par_fstore_per_share, hpe3par_require_cifs_ip=conf.hpe3par_require_cifs_ip, hpe3par_cifs_admin_access_username=( conf.hpe3par_cifs_admin_access_username), hpe3par_cifs_admin_access_password=( conf.hpe3par_cifs_admin_access_password), hpe3par_cifs_admin_access_domain=( conf.hpe3par_cifs_admin_access_domain), hpe3par_share_mount_path=conf.hpe3par_share_mount_path, my_ip=self.conf.my_ip, ssh_conn_timeout=conf.ssh_conn_timeout)]) self.mock_mediator.assert_has_calls([ mock.call.do_setup(), mock.call.get_vfs(constants.EXPECTED_FPG)]) def test_driver_setup_conf_ips_validation_fails(self): """Driver do_setup when the _validate_pool_ips fails.""" self.conf.driver_handles_share_servers = False vfs_with_ss_ip = deepcopy(constants.EXPECTED_GET_VFS) vfs_with_ss_ip['vfsip']['address'] = ['10.100.100.100'] self.mock_mediator.get_vfs.return_value = vfs_with_ss_ip self.assertRaises(exception.HPE3ParInvalid, self.driver.do_setup, None) conf = self.conf self.mock_mediator_constructor.assert_has_calls([ mock.call(hpe3par_san_ssh_port=conf.hpe3par_san_ssh_port, hpe3par_san_password=conf.hpe3par_san_password, hpe3par_username=conf.hpe3par_username, hpe3par_san_login=conf.hpe3par_san_login, hpe3par_debug=conf.hpe3par_debug, hpe3par_api_url=conf.hpe3par_api_url, 
hpe3par_password=conf.hpe3par_password, hpe3par_san_ip=conf.hpe3par_san_ip, hpe3par_fstore_per_share=conf.hpe3par_fstore_per_share, hpe3par_require_cifs_ip=conf.hpe3par_require_cifs_ip, hpe3par_cifs_admin_access_username=( conf.hpe3par_cifs_admin_access_username), hpe3par_cifs_admin_access_password=( conf.hpe3par_cifs_admin_access_password), hpe3par_cifs_admin_access_domain=( conf.hpe3par_cifs_admin_access_domain), hpe3par_share_mount_path=conf.hpe3par_share_mount_path, my_ip=self.conf.my_ip, ssh_conn_timeout=conf.ssh_conn_timeout)]) self.mock_mediator.assert_has_calls([ mock.call.do_setup(), mock.call.get_vfs(constants.EXPECTED_FPG)]) def init_driver(self): """Simple driver setup for re-use with tests that need one.""" self.driver._hpe3par = self.mock_mediator self.driver.fpgs = constants.EXPECTED_FPG_MAP self.mock_object(hpe3pardriver, 'share_types') get_extra_specs = hpe3pardriver.share_types.get_extra_specs_from_share get_extra_specs.return_value = constants.EXPECTED_EXTRA_SPECS def test_driver_check_for_setup_error_success(self): """check_for_setup_error when things go well.""" # Generally this is always mocked, but here we reference the class. hpe3parmediator.HPE3ParMediator = self.real_hpe_3par_mediator self.mock_object(hpe3pardriver, 'LOG') self.init_driver() self.driver.check_for_setup_error() expected_calls = [ mock.call.debug('HPE3ParShareDriver SHA1: %s', mock.ANY), mock.call.debug('HPE3ParMediator SHA1: %s', mock.ANY) ] hpe3pardriver.LOG.assert_has_calls(expected_calls) def test_driver_check_for_setup_error_exception(self): """check_for_setup_error catch and log any exceptions.""" # Since HPE3ParMediator is mocked, we'll hit the except/log. self.mock_object(hpe3pardriver, 'LOG') self.init_driver() self.driver.check_for_setup_error() expected_calls = [ mock.call.debug('HPE3ParShareDriver SHA1: %s', mock.ANY), mock.call.debug('Source code SHA1 not logged due to: %s', mock.ANY) ] hpe3pardriver.LOG.assert_has_calls(expected_calls) @ddt.data(([constants.SHARE_SERVER], constants.SHARE_SERVER), ([], None),) @ddt.unpack def test_choose_share_server_compatible_with_share(self, share_servers, expected_share_sever): context = None share_server = self.driver.choose_share_server_compatible_with_share( context, share_servers, constants.NFS_SHARE_INFO, None, None) self.assertEqual(expected_share_sever, share_server) def test_choose_share_server_compatible_with_share_with_cg(self): context = None cg_ref = {'id': 'dummy'} self.assertRaises( exception.InvalidRequest, self.driver.choose_share_server_compatible_with_share, context, [constants.SHARE_SERVER], constants.NFS_SHARE_INFO, None, cg_ref) def do_create_share(self, protocol, share_type_id, expected_project_id, expected_share_id, expected_size): """Re-usable code for create share.""" context = None share = { 'display_name': constants.EXPECTED_SHARE_NAME, 'host': constants.EXPECTED_HOST, 'project_id': expected_project_id, 'id': expected_share_id, 'share_proto': protocol, 'share_type_id': share_type_id, 'size': expected_size, } location = self.driver.create_share(context, share, constants.SHARE_SERVER) return location def do_create_share_from_snapshot(self, protocol, share_type_id, snapshot_instance, expected_share_id, expected_size): """Re-usable code for create share from snapshot.""" context = None share = { 'project_id': constants.EXPECTED_PROJECT_ID, 'display_name': constants.EXPECTED_SHARE_NAME, 'host': constants.EXPECTED_HOST, 'id': expected_share_id, 'share_proto': protocol, 'share_type_id': share_type_id, 'size': expected_size, } 
location = self.driver.create_share_from_snapshot( context, share, snapshot_instance, constants.SHARE_SERVER) return location @ddt.data((constants.UNEXPECTED_HOST, exception.InvalidHost), (constants.HOST_WITHOUT_POOL_1, exception.InvalidHost), (constants.HOST_WITHOUT_POOL_2, exception.InvalidHost)) @ddt.unpack def test_driver_create_share_fails_get_pool_location(self, host, expected_exception): """get_pool_location fails to extract pool name from host""" self.init_driver() context = None share_server = None share = { 'display_name': constants.EXPECTED_SHARE_NAME, 'host': host, 'project_id': constants.EXPECTED_PROJECT_ID, 'id': constants.EXPECTED_SHARE_ID, 'share_proto': constants.CIFS, 'share_type_id': constants.SHARE_TYPE_ID, 'size': constants.EXPECTED_SIZE_2, } self.assertRaises(expected_exception, self.driver.create_share, context, share, share_server) def test_driver_create_cifs_share(self): self.init_driver() expected_location = '\\\\%s\\%s' % (constants.EXPECTED_IP_10203040, constants.EXPECTED_SHARE_NAME) self.mock_mediator.create_share.return_value = ( constants.EXPECTED_SHARE_NAME) hpe3parmediator.HPE3ParMediator = self.real_hpe_3par_mediator location = self.do_create_share(constants.CIFS, constants.SHARE_TYPE_ID, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_2) self.assertIn(expected_location, location) expected_calls = [mock.call.create_share( constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, comment=mock.ANY, size=constants.EXPECTED_SIZE_2)] self.mock_mediator.assert_has_calls(expected_calls) def test_driver_create_nfs_share(self): self.init_driver() expected_location = ':'.join((constants.EXPECTED_IP_10203040, constants.EXPECTED_SHARE_PATH)) self.mock_mediator.create_share.return_value = ( constants.EXPECTED_SHARE_PATH) hpe3parmediator.HPE3ParMediator = self.real_hpe_3par_mediator location = self.do_create_share(constants.NFS, constants.SHARE_TYPE_ID, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_1) self.assertIn(expected_location, location) expected_calls = [ mock.call.create_share(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, comment=mock.ANY, size=constants.EXPECTED_SIZE_1)] self.mock_mediator.assert_has_calls(expected_calls) def test_driver_create_cifs_share_from_snapshot(self): self.init_driver() expected_location = '\\\\%s\\%s' % (constants.EXPECTED_IP_10203040, constants.EXPECTED_SHARE_NAME) self.mock_mediator.create_share_from_snapshot.return_value = ( constants.EXPECTED_SHARE_NAME) hpe3parmediator.HPE3ParMediator = self.real_hpe_3par_mediator snapshot_instance = constants.SNAPSHOT_INSTANCE.copy() snapshot_instance['protocol'] = constants.CIFS location = self.do_create_share_from_snapshot( constants.CIFS, constants.SHARE_TYPE_ID, snapshot_instance, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_2) self.assertIn(expected_location, location) expected_calls = [ mock.call.create_share_from_snapshot( constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_FSTORE, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS, [constants.EXPECTED_IP_10203040], comment=mock.ANY, size=constants.EXPECTED_SIZE_2), ] self.mock_mediator.assert_has_calls(expected_calls) def 
test_driver_create_nfs_share_from_snapshot(self): self.init_driver() expected_location = ':'.join((constants.EXPECTED_IP_10203040, constants.EXPECTED_SHARE_PATH)) self.mock_mediator.create_share_from_snapshot.return_value = ( constants.EXPECTED_SHARE_PATH) hpe3parmediator.HPE3ParMediator = self.real_hpe_3par_mediator location = self.do_create_share_from_snapshot( constants.NFS, constants.SHARE_TYPE_ID, constants.SNAPSHOT_INSTANCE, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_1) self.assertIn(expected_location, location) expected_calls = [ mock.call.create_share_from_snapshot( constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS, [constants.EXPECTED_IP_10203040], comment=mock.ANY, size=constants.EXPECTED_SIZE_1), ] self.mock_mediator.assert_has_calls(expected_calls) def test_driver_delete_share(self): self.init_driver() context = None share_server = None share = { 'project_id': constants.EXPECTED_PROJECT_ID, 'id': constants.EXPECTED_SHARE_ID, 'share_proto': constants.CIFS, 'size': constants.EXPECTED_SIZE_1, 'host': constants.EXPECTED_HOST } self.driver.delete_share(context, share, share_server) expected_calls = [ mock.call.delete_share(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_1, constants.CIFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_IP_10203040)] self.mock_mediator.assert_has_calls(expected_calls) def test_driver_create_snapshot(self): self.init_driver() context = None share_server = None self.driver.create_snapshot(context, constants.SNAPSHOT_INFO, share_server) expected_calls = [ mock.call.create_snapshot(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS)] self.mock_mediator.assert_has_calls(expected_calls) def test_driver_delete_snapshot(self): self.init_driver() context = None share_server = None self.driver.delete_snapshot(context, constants.SNAPSHOT_INFO, share_server) expected_calls = [ mock.call.delete_snapshot(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS) ] self.mock_mediator.assert_has_calls(expected_calls) def test_driver_update_access_add_rule(self): self.init_driver() context = None self.driver.update_access(context, constants.NFS_SHARE_INFO, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_IP], [], constants.SHARE_SERVER) expected_calls = [ mock.call.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_IP], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) ] self.mock_mediator.assert_has_calls(expected_calls) def test_driver_update_access_delete_rule(self): self.init_driver() context = None self.driver.update_access(context, constants.NFS_SHARE_INFO, [constants.ACCESS_RULE_NFS], [], [constants.DELETE_RULE_IP], constants.SHARE_SERVER) expected_calls = [ mock.call.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [], [constants.DELETE_RULE_IP], constants.EXPECTED_FPG, constants.EXPECTED_VFS) ] self.mock_mediator.assert_has_calls(expected_calls) def test_driver_extend_share(self): self.init_driver() old_size = 
constants.NFS_SHARE_INFO['size'] new_size = old_size * 2 share_server = None self.driver.extend_share(constants.NFS_SHARE_INFO, new_size, share_server) self.mock_mediator.resize_share.assert_called_once_with( constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, new_size, old_size, constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_driver_shrink_share(self): self.init_driver() old_size = constants.NFS_SHARE_INFO['size'] new_size = old_size / 2 share_server = None self.driver.shrink_share(constants.NFS_SHARE_INFO, new_size, share_server) self.mock_mediator.resize_share.assert_called_once_with( constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, new_size, old_size, constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_driver_get_share_stats_not_ready(self): """Protect against stats update before driver is ready.""" self.mock_object(hpe3pardriver, 'LOG') expected_result = { 'driver_handles_share_servers': True, 'qos': False, 'driver_version': self.driver.VERSION, 'free_capacity_gb': 0, 'max_over_subscription_ratio': None, 'reserved_percentage': 0, 'provisioned_capacity_gb': 0, 'share_backend_name': 'HPE_3PAR', 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'share_group_stats': { 'consistent_snapshot_support': None, }, 'storage_protocol': 'NFS_CIFS', 'thin_provisioning': True, 'total_capacity_gb': 0, 'vendor_name': 'HPE', 'pools': None, 'replication_domain': None, 'filter_function': None, 'goodness_function': None, 'ipv4_support': True, 'ipv6_support': False, } result = self.driver.get_share_stats(refresh=True) self.assertEqual(expected_result, result) expected_calls = [ mock.call.info('Skipping capacity and capabilities update. 
' 'Setup has not completed.') ] hpe3pardriver.LOG.assert_has_calls(expected_calls) def test_driver_get_share_stats_no_refresh(self): """Driver does not call mediator when refresh=False.""" self.init_driver() self.driver._stats = constants.EXPECTED_STATS result = self.driver.get_share_stats(refresh=False) self.assertEqual(constants.EXPECTED_STATS, result) self.assertEqual([], self.mock_mediator.mock_calls) def test_driver_get_share_stats_with_refresh(self): """Driver adds stats from mediator to expected structure.""" self.init_driver() expected_free = constants.EXPECTED_SIZE_1 expected_capacity = constants.EXPECTED_SIZE_2 expected_version = self.driver.VERSION self.mock_mediator.get_fpg_status.return_value = { 'pool_name': constants.EXPECTED_FPG, 'total_capacity_gb': expected_capacity, 'free_capacity_gb': expected_free, 'thin_provisioning': True, 'dedupe': False, 'hpe3par_flash_cache': False, 'hp3par_flash_cache': False, 'reserved_percentage': 0, 'provisioned_capacity_gb': expected_capacity } expected_result = { 'share_backend_name': 'HPE_3PAR', 'vendor_name': 'HPE', 'driver_version': expected_version, 'storage_protocol': 'NFS_CIFS', 'driver_handles_share_servers': True, 'total_capacity_gb': 0, 'free_capacity_gb': 0, 'provisioned_capacity_gb': 0, 'reserved_percentage': 0, 'max_over_subscription_ratio': None, 'qos': False, 'thin_provisioning': True, 'pools': [{ 'pool_name': constants.EXPECTED_FPG, 'total_capacity_gb': expected_capacity, 'free_capacity_gb': expected_free, 'thin_provisioning': True, 'dedupe': False, 'hpe3par_flash_cache': False, 'hp3par_flash_cache': False, 'reserved_percentage': 0, 'provisioned_capacity_gb': expected_capacity}], 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'share_group_stats': { 'consistent_snapshot_support': None, }, 'replication_domain': None, 'filter_function': None, 'goodness_function': None, 'ipv4_support': True, 'ipv6_support': False, } result = self.driver.get_share_stats(refresh=True) self.assertEqual(expected_result, result) expected_calls = [ mock.call.get_fpg_status(constants.EXPECTED_FPG) ] self.mock_mediator.assert_has_calls(expected_calls) self.assertTrue(self.mock_mediator.get_fpg_status.called) def test_driver_get_share_stats_premature(self): """Driver init stats before init_driver completed.""" expected_version = self.driver.VERSION self.mock_mediator.get_fpg_status.return_value = {'not_called': 1} expected_result = { 'qos': False, 'driver_handles_share_servers': True, 'driver_version': expected_version, 'free_capacity_gb': 0, 'max_over_subscription_ratio': None, 'pools': None, 'provisioned_capacity_gb': 0, 'reserved_percentage': 0, 'share_backend_name': 'HPE_3PAR', 'storage_protocol': 'NFS_CIFS', 'thin_provisioning': True, 'total_capacity_gb': 0, 'vendor_name': 'HPE', 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'share_group_stats': { 'consistent_snapshot_support': None, }, 'replication_domain': None, 'filter_function': None, 'goodness_function': None, 'ipv4_support': True, 'ipv6_support': False, } result = self.driver.get_share_stats(refresh=True) self.assertEqual(expected_result, result) self.assertFalse(self.mock_mediator.get_fpg_status.called) @ddt.data(('test"dquote', 'test_dquote'), ("test'squote", "test_squote"), ('test-:;,.punc', 'test-:_punc'), ('test with spaces ', 'test with spaces '), ('x' * 300, 'x' * 300)) @ddt.unpack def 
test_build_comment(self, display_name, clean_name): host = 'test-stack1@backend#pool' share = { 'host': host, 'display_name': display_name } comment = self.driver.build_share_comment(share) cleaned = { 'host': host, 'clean_name': clean_name } expected = ("OpenStack Manila - host=%(host)s " "orig_name=%(clean_name)s created=" % cleaned)[:254] self.assertLess(len(comment), 255) self.assertTrue(comment.startswith(expected)) # Test for some chars that are not allowed. # Don't test with same regex as the code uses. for c in "'\".,;": self.assertNotIn(c, comment) def test_get_network_allocations_number(self): self.assertEqual(1, self.driver.get_network_allocations_number()) def test_setup_server(self): """Setup server by creating a new FSIP.""" self.init_driver() network_info = { 'network_allocations': [ {'ip_address': constants.EXPECTED_IP_1234}], 'cidr': '/'.join((constants.EXPECTED_IP_1234, constants.CIDR_PREFIX)), 'network_type': constants.EXPECTED_VLAN_TYPE, 'segmentation_id': constants.EXPECTED_VLAN_TAG, 'server_id': constants.EXPECTED_SERVER_ID, } expected_result = { 'share_server_name': constants.EXPECTED_SERVER_ID, 'share_server_id': constants.EXPECTED_SERVER_ID, 'ip': constants.EXPECTED_IP_1234, 'subnet': constants.EXPECTED_SUBNET, 'vlantag': constants.EXPECTED_VLAN_TAG, 'fpg': constants.EXPECTED_FPG, 'vfs': constants.EXPECTED_VFS, } metadata = {'request_host': constants.EXPECTED_HOST} result = self.driver._setup_server(network_info, metadata) expected_calls = [ mock.call.create_fsip(constants.EXPECTED_IP_1234, constants.EXPECTED_SUBNET, constants.EXPECTED_VLAN_TAG, constants.EXPECTED_FPG, constants.EXPECTED_VFS) ] self.mock_mediator.assert_has_calls(expected_calls) self.assertEqual(expected_result, result) def test_setup_server_fails_for_unsupported_network_type(self): """Setup server fails for unsupported network type""" self.init_driver() network_info = { 'network_allocations': [ {'ip_address': constants.EXPECTED_IP_1234}], 'cidr': '/'.join((constants.EXPECTED_IP_1234, constants.CIDR_PREFIX)), 'network_type': constants.EXPECTED_VXLAN_TYPE, 'segmentation_id': constants.EXPECTED_VLAN_TAG, 'server_id': constants.EXPECTED_SERVER_ID, } metadata = {'request_host': constants.EXPECTED_HOST} self.assertRaises(exception.NetworkBadConfigurationException, self.driver._setup_server, network_info, metadata) def test_setup_server_fails_for_exceed_pool_max_supported_ips(self): """Setup server fails when the VFS has reached max supported IPs""" self.init_driver() network_info = { 'network_allocations': [ {'ip_address': constants.EXPECTED_IP_1234}], 'cidr': '/'.join((constants.EXPECTED_IP_1234, constants.CIDR_PREFIX)), 'network_type': constants.EXPECTED_VLAN_TYPE, 'segmentation_id': constants.EXPECTED_VLAN_TAG, 'server_id': constants.EXPECTED_SERVER_ID, } metadata = {'request_host': constants.EXPECTED_HOST} expected_vfs = self.driver.fpgs[ constants.EXPECTED_FPG][constants.EXPECTED_VFS] self.driver.fpgs[constants.EXPECTED_FPG][constants.EXPECTED_VFS] = [ '10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.4'] self.assertRaises(exception.Invalid, self.driver._setup_server, network_info, metadata) self.driver.fpgs[constants.EXPECTED_FPG][constants.EXPECTED_VFS ] = expected_vfs def test_teardown_server(self): """Test tear down server""" self.init_driver() server_details = { 'ip': constants.EXPECTED_IP_10203040, 'fpg': constants.EXPECTED_FPG, 'vfs': constants.EXPECTED_VFS, } self.driver._teardown_server(server_details) expected_calls = [ mock.call.remove_fsip(constants.EXPECTED_IP_10203040, 
constants.EXPECTED_FPG, constants.EXPECTED_VFS) ] self.mock_mediator.assert_has_calls(expected_calls) manila-10.0.0/manila/tests/share/drivers/hpe/test_hpe_3par_mediator.py0000664000175000017500000037552313656750227026044 0ustar zuulzuul00000000000000# Copyright 2015 Hewlett Packard Enterprise Development LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys from unittest import mock import ddt if 'hpe3parclient' not in sys.modules: sys.modules['hpe3parclient'] = mock.Mock() from oslo_utils import units import six from manila.data import utils as data_utils from manila import exception from manila.share.drivers.hpe import hpe_3par_mediator as hpe3parmediator from manila import test from manila.tests.share.drivers.hpe import test_hpe_3par_constants as constants from manila import utils CLIENT_VERSION_MIN_OK = hpe3parmediator.MIN_CLIENT_VERSION TEST_WSAPI_VERSION_STR = '30201292' @ddt.ddt class HPE3ParMediatorTestCase(test.TestCase): def setUp(self): super(HPE3ParMediatorTestCase, self).setUp() # Fake utils.execute self.mock_object(utils, 'execute', mock.Mock(return_value={})) # Fake data_utils.Copy class FakeCopy(object): def run(self): pass def get_progress(self): return {'total_progress': 100} self.mock_copy = self.mock_object( data_utils, 'Copy', mock.Mock(return_value=FakeCopy())) # This is the fake client to use. self.mock_client = mock.Mock() # Take over the hpe3parclient module and stub the constructor. hpe3parclient = sys.modules['hpe3parclient'] hpe3parclient.version_tuple = CLIENT_VERSION_MIN_OK # Need a fake constructor to return the fake client. # This is also be used for constructor error tests. self.mock_object(hpe3parclient.file_client, 'HPE3ParFilePersonaClient') self.mock_client_constructor = ( hpe3parclient.file_client.HPE3ParFilePersonaClient ) self.mock_client = self.mock_client_constructor() # Set the mediator to use in tests. self.mediator = hpe3parmediator.HPE3ParMediator( hpe3par_username=constants.USERNAME, hpe3par_password=constants.PASSWORD, hpe3par_api_url=constants.API_URL, hpe3par_debug=constants.EXPECTED_HPE_DEBUG, hpe3par_san_ip=constants.EXPECTED_IP_1234, hpe3par_san_login=constants.SAN_LOGIN, hpe3par_san_password=constants.SAN_PASSWORD, hpe3par_san_ssh_port=constants.PORT, hpe3par_cifs_admin_access_username=constants.USERNAME, hpe3par_cifs_admin_access_password=constants.PASSWORD, hpe3par_cifs_admin_access_domain=constants.EXPECTED_CIFS_DOMAIN, hpe3par_share_mount_path=constants.EXPECTED_MOUNT_PATH, ssh_conn_timeout=constants.TIMEOUT, my_ip=constants.EXPECTED_MY_IP) def test_mediator_no_client(self): """Test missing hpe3parclient error.""" mock_log = self.mock_object(hpe3parmediator, 'LOG') self.mock_object(hpe3parmediator.HPE3ParMediator, 'no_client', None) self.assertRaises(exception.HPE3ParInvalidClient, self.mediator.do_setup) mock_log.error.assert_called_once_with(mock.ANY) def test_mediator_setup_client_init_error(self): """Any client init exceptions should result in a ManilaException.""" self.mock_client_constructor.side_effect = ( Exception('Any exception. 
E.g., bad version or some other ' 'non-Manila Exception.')) self.assertRaises(exception.ManilaException, self.mediator.do_setup) def test_mediator_setup_client_ssh_error(self): # This could be anything the client comes up with, but the # mediator should turn it into a ManilaException. non_manila_exception = Exception('non-manila-except') self.mock_client.setSSHOptions.side_effect = non_manila_exception self.assertRaises(exception.ManilaException, self.mediator.do_setup) self.mock_client.assert_has_calls( [mock.call.setSSHOptions(constants.EXPECTED_IP_1234, constants.SAN_LOGIN, constants.SAN_PASSWORD, port=constants.PORT, conn_timeout=constants.TIMEOUT)]) def test_mediator_vfs_exception(self): """Backend exception during get_vfs.""" self.init_mediator() self.mock_client.getvfs.side_effect = Exception('non-manila-except') self.assertRaises(exception.ManilaException, self.mediator.get_vfs, fpg=constants.EXPECTED_FPG) expected_calls = [ mock.call.getvfs(fpg=constants.EXPECTED_FPG, vfs=None), ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_vfs_not_found(self): """VFS not found.""" self.init_mediator() self.mock_client.getvfs.return_value = {'total': 0} self.assertRaises(exception.ManilaException, self.mediator.get_vfs, fpg=constants.EXPECTED_FPG) expected_calls = [ mock.call.getvfs(fpg=constants.EXPECTED_FPG, vfs=None), ] self.mock_client.assert_has_calls(expected_calls) @ddt.data((constants.EXPECTED_CLIENT_GET_VFS_RETURN_VALUE, constants.EXPECTED_MEDIATOR_GET_VFS_RET_VAL), (constants.EXPECTED_CLIENT_GET_VFS_RETURN_VALUE_MULTI, constants.EXPECTED_MEDIATOR_GET_VFS_RET_VAL_MULTI)) @ddt.unpack def test_mediator_get_vfs(self, get_vfs_val, exp_vfs_val): """VFS not found.""" self.init_mediator() self.mock_client.getvfs.return_value = get_vfs_val ret_val = self.mediator.get_vfs(constants.EXPECTED_FPG) self.assertEqual(exp_vfs_val, ret_val) expected_calls = [ mock.call.getvfs(fpg=constants.EXPECTED_FPG, vfs=None), ] self.mock_client.assert_has_calls(expected_calls) def init_mediator(self): """Basic mediator setup for re-use with tests that need one.""" self.mock_client.getWsApiVersion.return_value = { 'build': TEST_WSAPI_VERSION_STR, } self.mock_client.getvfs.return_value = { 'total': 1, 'members': [{'vfsname': constants.EXPECTED_VFS}] } self.mock_client.getfshare.return_value = { 'total': 1, 'members': [ {'fstoreName': constants.EXPECTED_FSTORE, 'shareName': constants.EXPECTED_SHARE_ID, 'shareDir': constants.EXPECTED_SHARE_PATH, 'share_proto': constants.NFS, 'sharePath': constants.EXPECTED_SHARE_PATH, 'comment': constants.EXPECTED_COMMENT, }] } self.mock_client.setfshare.return_value = [] self.mock_client.setfsquota.return_value = [] self.mock_client.getfsquota.return_value = constants.GET_FSQUOTA self.mediator.do_setup() def test_mediator_setup_success(self): """Do a mediator setup without errors.""" self.init_mediator() self.assertIsNotNone(self.mediator._client) expected_calls = [ mock.call.setSSHOptions(constants.EXPECTED_IP_1234, constants.SAN_LOGIN, constants.SAN_PASSWORD, port=constants.PORT, conn_timeout=constants.TIMEOUT), mock.call.getWsApiVersion(), mock.call.debug_rest(constants.EXPECTED_HPE_DEBUG) ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_client_login_error(self): """Test exception during login.""" self.init_mediator() self.mock_client.login.side_effect = constants.FAKE_EXCEPTION self.assertRaises(exception.ShareBackendException, self.mediator._wsapi_login) expected_calls = [mock.call.login(constants.USERNAME, constants.PASSWORD)] 
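# Any client login failure is surfaced as a ShareBackendException; the call
# check below confirms the credentials were actually passed to the mocked
# WSAPI client.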
self.mock_client.assert_has_calls(expected_calls) def test_mediator_client_logout_error(self): """Test exception during logout.""" self.init_mediator() mock_log = self.mock_object(hpe3parmediator, 'LOG') fake_exception = constants.FAKE_EXCEPTION self.mock_client.http.unauthenticate.side_effect = fake_exception self.mediator._wsapi_logout() # Warning is logged (no exception thrown). self.assertTrue(mock_log.warning.called) expected_calls = [mock.call.http.unauthenticate()] self.mock_client.assert_has_calls(expected_calls) def test_mediator_client_version_unsupported(self): """Try a client with version less than minimum.""" self.hpe3parclient = sys.modules['hpe3parclient'] self.hpe3parclient.version_tuple = (CLIENT_VERSION_MIN_OK[0], CLIENT_VERSION_MIN_OK[1], CLIENT_VERSION_MIN_OK[2] - 1) mock_log = self.mock_object(hpe3parmediator, 'LOG') self.assertRaises(exception.HPE3ParInvalidClient, self.init_mediator) mock_log.error.assert_called_once_with(mock.ANY) def test_mediator_client_version_supported(self): """Try a client with a version greater than the minimum.""" # The setup success already tests the min version. Try version > min. self.hpe3parclient = sys.modules['hpe3parclient'] self.hpe3parclient.version_tuple = (CLIENT_VERSION_MIN_OK[0], CLIENT_VERSION_MIN_OK[1], CLIENT_VERSION_MIN_OK[2] + 1) self.init_mediator() expected_calls = [ mock.call.setSSHOptions(constants.EXPECTED_IP_1234, constants.SAN_LOGIN, constants.SAN_PASSWORD, port=constants.PORT, conn_timeout=constants.TIMEOUT), mock.call.getWsApiVersion(), mock.call.debug_rest(constants.EXPECTED_HPE_DEBUG) ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_client_version_exception(self): """Test the getWsApiVersion exception handling.""" self.mock_client.getWsApiVersion.side_effect = constants.FAKE_EXCEPTION self.assertRaises(exception.ShareBackendException, self.init_mediator) def test_mediator_client_version_bad_return_value(self): """Test the getWsApiVersion exception handling with bad value.""" # Expecting a dict with 'build' in it. This would fail badly. self.mock_client.getWsApiVersion.return_value = 'bogus' self.assertRaises(exception.ShareBackendException, self.mediator.do_setup) def get_expected_calls_for_create_share(self, client_version, expected_fpg, expected_vfsname, expected_protocol, extra_specs, expected_project_id, expected_share_id): expected_sharedir = expected_share_id createfshare_kwargs = dict(comment=mock.ANY, fpg=expected_fpg, sharedir=expected_sharedir, fstore=expected_project_id) if expected_protocol == constants.NFS_LOWER: createfshare_kwargs['clientip'] = '127.0.0.1' # Options from extra-specs. opt_string = extra_specs.get('hpe3par:nfs_options', []) opt_list = opt_string.split(',') # Options that the mediator adds. 
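# These defaults are listed first; any hpe3par:nfs_options supplied via
# extra-specs are appended after them before being joined into a single
# comma-separated options string.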
nfs_options = ['rw', 'no_root_squash', 'insecure'] nfs_options += opt_list expected_options = ','.join(nfs_options) createfshare_kwargs['options'] = OptionMatcher( self.assertListEqual, expected_options) expected_calls = [ mock.call.createfstore(expected_vfsname, expected_project_id, comment=mock.ANY, fpg=expected_fpg), mock.call.getfsquota(fpg=expected_fpg, vfs=expected_vfsname, fstore=expected_project_id), mock.call.setfsquota(expected_vfsname, fpg=expected_fpg, hcapacity='2048', scapacity='2048', fstore=expected_project_id), mock.call.createfshare(expected_protocol, expected_vfsname, expected_share_id, **createfshare_kwargs), mock.call.getfshare(expected_protocol, expected_share_id, fpg=expected_fpg, vfs=expected_vfsname, fstore=expected_project_id)] else: smb_opts = (hpe3parmediator.ACCESS_BASED_ENUM, hpe3parmediator.CONTINUOUS_AVAIL, hpe3parmediator.CACHE) for smb_opt in smb_opts: opt_value = extra_specs.get('hpe3par:smb_%s' % smb_opt) if opt_value: opt_key = hpe3parmediator.SMB_EXTRA_SPECS_MAP[smb_opt] createfshare_kwargs[opt_key] = opt_value expected_calls = [ mock.call.createfstore(expected_vfsname, expected_project_id, comment=mock.ANY, fpg=expected_fpg), mock.call.getfsquota(fpg=expected_fpg, vfs=expected_vfsname, fstore=expected_project_id), mock.call.setfsquota(expected_vfsname, fpg=expected_fpg, hcapacity='2048', scapacity='2048', fstore=expected_project_id), mock.call.createfshare(expected_protocol, expected_vfsname, expected_share_id, **createfshare_kwargs), mock.call.getfshare(expected_protocol, expected_share_id, fpg=expected_fpg, vfs=expected_vfsname, fstore=expected_project_id)] return expected_calls @staticmethod def _build_smb_extra_specs(**kwargs): extra_specs = {'driver_handles_share_servers': False} for k, v in kwargs.items(): extra_specs['hpe3par:smb_%s' % k] = v return extra_specs @ddt.data(((4, 0, 0), None, None, None), ((4, 0, 0), 'true', None, None), ((4, 0, 0), None, 'false', None), ((4, 0, 0), None, 'false', None), ((4, 0, 0), None, None, 'optimized'), ((4, 0, 0), 'true', 'false', 'optimized')) @ddt.unpack def test_mediator_create_cifs_share(self, client_version, abe, ca, cache): self.hpe3parclient = sys.modules['hpe3parclient'] self.hpe3parclient.version_tuple = client_version self.init_mediator() self.mock_client.getfshare.return_value = { 'message': None, 'total': 1, 'members': [{'shareName': constants.EXPECTED_SHARE_NAME}] } extra_specs = self._build_smb_extra_specs(access_based_enum=abe, continuous_avail=ca, cache=cache) location = self.mediator.create_share(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.CIFS, extra_specs, constants.EXPECTED_FPG, constants.EXPECTED_VFS, size=constants.EXPECTED_SIZE_1) self.assertEqual(constants.EXPECTED_SHARE_NAME, location) expected_calls = self.get_expected_calls_for_create_share( client_version, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.SMB_LOWER, extra_specs, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID) self.mock_client.assert_has_calls(expected_calls) @ddt.data('ro', 'rw', 'no_root_squash', 'root_squash', 'secure', 'insecure', 'hide,insecure,no_wdelay,ro,bogus,root_squash,test') def test_mediator_create_nfs_share_bad_options(self, nfs_options): self.init_mediator() extra_specs = {'hpe3par:nfs_options': nfs_options} self.assertRaises(exception.InvalidInput, self.mediator.create_share, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS.lower(), extra_specs, constants.EXPECTED_FPG, constants.EXPECTED_VFS, size=constants.EXPECTED_SIZE_1) 
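# Disallowed or conflicting NFS options (ro/rw, root_squash variants,
# secure/insecure, etc.) must be rejected with InvalidInput before any
# createfshare call reaches the backend.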
self.assertFalse(self.mock_client.createfshare.called) @ddt.data('sync', 'no_wdelay,sec=sys,hide,sync') def test_mediator_create_nfs_share(self, nfs_options): self.init_mediator() self.mock_client.getfshare.return_value = { 'message': None, 'total': 1, 'members': [{'sharePath': constants.EXPECTED_SHARE_PATH}] } extra_specs = {'hpe3par:nfs_options': nfs_options} location = self.mediator.create_share(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS.lower(), extra_specs, constants.EXPECTED_FPG, constants.EXPECTED_VFS, size=constants.EXPECTED_SIZE_1) self.assertEqual(constants.EXPECTED_SHARE_PATH, location) expected_calls = self.get_expected_calls_for_create_share( hpe3parmediator.MIN_CLIENT_VERSION, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.NFS.lower(), extra_specs, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID) self.mock_client.assert_has_calls(expected_calls) def test_mediator_create_nfs_share_get_exception(self): self.init_mediator() self.mock_client.getfshare.side_effect = constants.FAKE_EXCEPTION self.assertRaises(exception.ShareBackendException, self.mediator.create_share, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS.lower(), constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, size=constants.EXPECTED_SIZE_1) @ddt.data(0, 2) def test_mediator_create_nfs_share_get_fail(self, count): self.init_mediator() self.mock_client.getfshare.return_value = {'total': count} self.assertRaises(exception.ShareBackendException, self.mediator.create_share, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS.lower(), constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, size=constants.EXPECTED_SIZE_1) @ddt.data(True, False) def test_mediator_create_cifs_share_from_snapshot(self, require_cifs_ip): self.init_mediator() self.mediator.hpe3par_require_cifs_ip = require_cifs_ip self.mock_client.getfsnap.return_value = { 'message': None, 'total': 1, 'members': [{'snapName': constants.EXPECTED_SNAP_ID, 'fstoreName': constants.EXPECTED_FSTORE}] } location = self.mediator.create_share_from_snapshot( constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS, [constants.EXPECTED_IP_10203040]) self.assertEqual(constants.EXPECTED_SHARE_ID, location) expected_kwargs_ro = { 'comment': mock.ANY, 'fpg': constants.EXPECTED_FPG, 'fstore': constants.EXPECTED_FSTORE, } expected_kwargs_rw = expected_kwargs_ro.copy() expected_kwargs_ro['sharedir'] = '.snapshot/%s/%s' % ( constants.EXPECTED_SNAP_ID, constants.EXPECTED_SHARE_ID) expected_kwargs_rw['sharedir'] = constants.EXPECTED_SHARE_ID if require_cifs_ip: expected_kwargs_ro['allowip'] = constants.EXPECTED_MY_IP expected_kwargs_rw['allowip'] = ( ','.join((constants.EXPECTED_MY_IP, constants.EXPECTED_IP_127))) expected_calls = [ mock.call.getfsnap('*_%s' % constants.EXPECTED_SNAP_ID, vfs=constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, pat=True, fstore=constants.EXPECTED_FSTORE), mock.call.createfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, **expected_kwargs_ro), mock.call.getfshare(constants.SMB_LOWER, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_FSTORE), mock.call.createfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, 
**expected_kwargs_rw), mock.call.getfshare(constants.SMB_LOWER, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_FSTORE), mock.call.setfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, allowperm=constants.ADD_USERNAME, comment=mock.ANY, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), mock.call.setfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, allowperm=constants.ADD_USERNAME, comment=mock.ANY, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), mock.call.setfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SUPER_SHARE, allowperm=constants.DROP_USERNAME, comment=mock.ANY, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), mock.call.removefshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_create_cifs_share_from_snapshot_ro(self): self.init_mediator() # RO because CIFS admin access username is not configured self.mediator.hpe3par_cifs_admin_access_username = None self.mock_client.getfsnap.return_value = { 'message': None, 'total': 1, 'members': [{'snapName': constants.EXPECTED_SNAP_ID, 'fstoreName': constants.EXPECTED_FSTORE}] } location = self.mediator.create_share_from_snapshot( constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS, [constants.EXPECTED_IP_10203040], comment=constants.EXPECTED_COMMENT) self.assertEqual(constants.EXPECTED_SHARE_ID, location) share_dir = '.snapshot/%s/%s' % ( constants.EXPECTED_SNAP_ID, constants.EXPECTED_SHARE_ID) expected_kwargs_ro = { 'comment': constants.EXPECTED_COMMENT, 'fpg': constants.EXPECTED_FPG, 'fstore': constants.EXPECTED_FSTORE, 'sharedir': share_dir, } self.mock_client.createfshare.assert_called_once_with( constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, **expected_kwargs_ro ) def test_mediator_create_nfs_share_from_snapshot(self): self.init_mediator() self.mock_client.getfsnap.return_value = { 'message': None, 'total': 1, 'members': [{'snapName': constants.EXPECTED_SNAP_ID, 'fstoreName': constants.EXPECTED_FSTORE}] } location = self.mediator.create_share_from_snapshot( constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS, [constants.EXPECTED_IP_10203040]) self.assertEqual(constants.EXPECTED_SHARE_PATH, location) expected_calls = [ mock.call.getfsnap('*_%s' % constants.EXPECTED_SNAP_ID, vfs=constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, pat=True, fstore=constants.EXPECTED_FSTORE), mock.call.createfshare(constants.NFS_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, comment=mock.ANY, fpg=constants.EXPECTED_FPG, sharedir='.snapshot/%s/%s' % (constants.EXPECTED_SNAP_ID, constants.EXPECTED_SHARE_ID), fstore=constants.EXPECTED_FSTORE, clientip=constants.EXPECTED_MY_IP, options='ro,no_root_squash,insecure'), mock.call.getfshare(constants.NFS_LOWER, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_FSTORE), mock.call.createfshare(constants.NFS_LOWER, constants.EXPECTED_VFS, 
constants.EXPECTED_SHARE_ID, comment=mock.ANY, fpg=constants.EXPECTED_FPG, sharedir=constants.EXPECTED_SHARE_ID, fstore=constants.EXPECTED_FSTORE, clientip=','.join(( constants.EXPECTED_MY_IP, constants.EXPECTED_IP_127)), options='rw,no_root_squash,insecure'), mock.call.getfshare(constants.NFS_LOWER, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_FSTORE), mock.call.getfshare(constants.NFS_LOWER, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_FSTORE), mock.call.setfshare(constants.NFS_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, clientip=''.join(('-', constants.EXPECTED_MY_IP)), comment=mock.ANY, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), mock.call.removefshare(constants.NFS_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_create_share_from_snap_copy_incomplete(self): self.init_mediator() self.mock_client.getfsnap.return_value = { 'message': None, 'total': 1, 'members': [{'snapName': constants.EXPECTED_SNAP_ID, 'fstoreName': constants.EXPECTED_FSTORE}] } mock_bad_copy = mock.Mock() mock_bad_copy.get_progress.return_value = {'total_progress': 99} self.mock_object( data_utils, 'Copy', mock.Mock(return_value=mock_bad_copy)) self.assertRaises(exception.ShareBackendException, self.mediator.create_share_from_snapshot, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS, [constants.EXPECTED_IP_10203040]) self.assertTrue(mock_bad_copy.run.called) self.assertTrue(mock_bad_copy.get_progress.called) def test_mediator_create_share_from_snap_copy_exception(self): self.init_mediator() self.mock_client.getfsnap.return_value = { 'message': None, 'total': 1, 'members': [{'snapName': constants.EXPECTED_SNAP_ID, 'fstoreName': constants.EXPECTED_FSTORE}] } mock_bad_copy = mock.Mock() mock_bad_copy.run.side_effect = Exception('run exception') self.mock_object( data_utils, 'Copy', mock.Mock(return_value=mock_bad_copy)) self.assertRaises(exception.ShareBackendException, self.mediator.create_share_from_snapshot, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS, [constants.EXPECTED_IP_10203040]) self.assertTrue(mock_bad_copy.run.called) def test_mediator_create_share_from_snap_not_found(self): self.init_mediator() self.mock_client.getfsnap.return_value = { 'message': None, 'total': 0, 'members': [] } self.assertRaises(exception.ShareBackendException, self.mediator.create_share_from_snapshot, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS, [constants.EXPECTED_IP_10203040]) def test_mediator_delete_nfs_share(self): self.init_mediator() share_id = 'foo' osf_share_id = '-'.join(('osf', share_id)) osf_ro_share_id = '-ro-'.join(('osf', share_id)) fstore = osf_share_id self.mock_object(self.mediator, '_find_fstore', mock.Mock(return_value=fstore)) self.mock_object(self.mediator, '_delete_file_tree') self.mock_object(self.mediator, 
'_update_capacity_quotas') self.mediator.delete_share(constants.EXPECTED_PROJECT_ID, share_id, constants.EXPECTED_SIZE_1, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_IP) expected_calls = [ mock.call.removefshare(constants.NFS_LOWER, constants.EXPECTED_VFS, osf_share_id, fpg=constants.EXPECTED_FPG, fstore=fstore), mock.call.removefshare(constants.NFS_LOWER, constants.EXPECTED_VFS, osf_ro_share_id, fpg=constants.EXPECTED_FPG, fstore=fstore), mock.call.removefstore(constants.EXPECTED_VFS, fstore, fpg=constants.EXPECTED_FPG), ] self.mock_client.assert_has_calls(expected_calls) self.assertFalse(self.mediator._delete_file_tree.called) self.assertFalse(self.mediator._update_capacity_quotas.called) def test_mediator_delete_share_not_found(self): self.init_mediator() self.mock_object(self.mediator, '_find_fstore', mock.Mock(return_value=None)) self.mock_object(self.mediator, '_delete_file_tree') self.mock_object(self.mediator, '_update_capacity_quotas') self.mediator.delete_share(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_1, constants.CIFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_IP_10203040) self.assertFalse(self.mock_client.removefshare.called) self.assertFalse(self.mediator._delete_file_tree.called) self.assertFalse(self.mediator._update_capacity_quotas.called) def test_mediator_delete_nfs_share_only_readonly(self): self.init_mediator() fstores = (None, constants.EXPECTED_FSTORE) self.mock_object(self.mediator, '_find_fstore', mock.Mock(side_effect=fstores)) self.mock_object(self.mediator, '_delete_file_tree') self.mock_object(self.mediator, '_update_capacity_quotas') self.mediator.delete_share(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_1, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_IP_10203040) self.mock_client.removefshare.assert_called_once_with( constants.NFS_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE ) self.assertFalse(self.mediator._delete_file_tree.called) self.assertFalse(self.mediator._update_capacity_quotas.called) def test_mediator_delete_share_exception(self): self.init_mediator() self.mock_client.removefshare.side_effect = Exception( 'removeshare fail.') self.assertRaises(exception.ShareBackendException, self.mediator.delete_share, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_1, constants.CIFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_IP_10203040) expected_calls = [ mock.call.removefshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_delete_fstore_exception(self): self.init_mediator() self.mock_object(self.mediator, '_find_fstore', mock.Mock(return_value=constants.EXPECTED_SHARE_ID)) self.mock_object(self.mediator, '_delete_file_tree') self.mock_object(self.mediator, '_update_capacity_quotas') self.mock_client.removefstore.side_effect = Exception( 'removefstore fail.') self.assertRaises(exception.ShareBackendException, self.mediator.delete_share, constants.EXPECTED_PROJECT_ID, constants.SHARE_ID, constants.EXPECTED_SIZE_1, constants.CIFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_IP_10203040) expected_calls = [ mock.call.removefshare(constants.SMB_LOWER, 
constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_SHARE_ID), mock.call.removefshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID_RO, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_SHARE_ID), mock.call.removefstore(constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG), ] self.mock_client.assert_has_calls(expected_calls) self.assertFalse(self.mediator._delete_file_tree.called) self.assertFalse(self.mediator._update_capacity_quotas.called) def test_mediator_delete_file_tree_exception(self): self.init_mediator() mock_log = self.mock_object(hpe3parmediator, 'LOG') self.mock_object(self.mediator, '_find_fstore', mock.Mock(return_value=constants.EXPECTED_FSTORE)) self.mock_object(self.mediator, '_delete_file_tree', mock.Mock(side_effect=Exception('test'))) self.mock_object(self.mediator, '_update_capacity_quotas') self.mediator.delete_share(constants.EXPECTED_PROJECT_ID, constants.SHARE_ID, constants.EXPECTED_SIZE_1, constants.CIFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_IP_10203040) expected_calls = [ mock.call.removefshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), mock.call.removefshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID_RO, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), ] self.mock_client.assert_has_calls(expected_calls) self.assertTrue(self.mediator._delete_file_tree.called) self.assertFalse(self.mediator._update_capacity_quotas.called) mock_log.warning.assert_called_once_with(mock.ANY, mock.ANY) def test_mediator_delete_cifs_share(self): self.init_mediator() self.mock_object(self.mediator, '_find_fstore', mock.Mock(return_value=constants.EXPECTED_FSTORE)) self.mock_object(self.mediator, '_create_mount_directory', mock.Mock(return_value={})) self.mock_object(self.mediator, '_mount_super_share', mock.Mock(return_value={})) self.mock_object(self.mediator, '_delete_share_directory', mock.Mock(return_value={})) self.mock_object(self.mediator, '_unmount_share', mock.Mock(return_value={})) self.mock_object(self.mediator, '_update_capacity_quotas', mock.Mock(return_value={})) self.mediator.delete_share(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_1, constants.CIFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_IP_10203040) expected_calls = [ mock.call.removefshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), mock.call.createfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SUPER_SHARE, allowip=constants.EXPECTED_MY_IP, comment=( constants.EXPECTED_SUPER_SHARE_COMMENT), fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE, sharedir=''), mock.call.setfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SUPER_SHARE, comment=( constants.EXPECTED_SUPER_SHARE_COMMENT), allowperm=( '+' + constants.USERNAME + ':fullcontrol'), fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), ] self.mock_client.assert_has_calls(expected_calls) expected_mount_path = constants.EXPECTED_MOUNT_PATH + ( constants.EXPECTED_SHARE_ID) expected_share_path = '/'.join((expected_mount_path, constants.EXPECTED_SHARE_ID)) self.mediator._create_mount_directory.assert_called_once_with( expected_mount_path) 
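# CIFS delete walks the full cleanup path verified below: mount the hidden
# super share, remove the share directory and its mount point, unmount, and
# finally release the capacity quota for the deleted share.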
self.mediator._mount_super_share.assert_called_once_with( constants.SMB_LOWER, expected_mount_path, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_FSTORE, constants.EXPECTED_IP_10203040) self.mediator._delete_share_directory.assert_has_calls([ mock.call(expected_share_path), mock.call(expected_mount_path), ]) self.mediator._unmount_share.assert_called_once_with( expected_mount_path) self.mediator._update_capacity_quotas.assert_called_once_with( constants.EXPECTED_FSTORE, 0, constants.EXPECTED_SIZE_1, constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_mediator_delete_cifs_share_and_fstore(self): self.init_mediator() self.mock_object(self.mediator, '_find_fstore', mock.Mock(return_value=constants.EXPECTED_SHARE_ID)) self.mock_object(self.mediator, '_delete_file_tree') self.mock_object(self.mediator, '_update_capacity_quotas') self.mediator.delete_share(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_1, constants.CIFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_IP_10203040) expected_calls = [ mock.call.removefshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_SHARE_ID), mock.call.removefstore(constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG), ] self.mock_client.assert_has_calls(expected_calls) self.assertFalse(self.mediator._delete_file_tree.called) self.assertFalse(self.mediator._update_capacity_quotas.called) def test_mediator_delete_share_with_fstore_per_share_false(self): self.init_mediator() self.mediator.hpe3par_fstore_per_share = False share_size = int(constants.EXPECTED_SIZE_1) fstore_init_size = int( constants.GET_FSQUOTA['members'][0]['hardBlock']) expected_capacity = (0 - share_size) * units.Ki + fstore_init_size self.mock_object(self.mediator, '_find_fstore', mock.Mock(return_value=constants.EXPECTED_FSTORE)) self.mock_object(self.mediator, '_create_mount_directory', mock.Mock(return_value={})) self.mock_object(self.mediator, '_mount_super_share', mock.Mock(return_value={})) self.mock_object(self.mediator, '_delete_share_directory', mock.Mock(return_value={})) self.mock_object(self.mediator, '_unmount_share', mock.Mock(return_value={})) self.mediator.delete_share(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.EXPECTED_SIZE_1, constants.CIFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_IP_10203040) expected_calls = [ mock.call.removefshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), mock.call.createfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SUPER_SHARE, allowip=constants.EXPECTED_MY_IP, comment=( constants.EXPECTED_SUPER_SHARE_COMMENT), fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE, sharedir=''), mock.call.setfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SUPER_SHARE, comment=( constants.EXPECTED_SUPER_SHARE_COMMENT), allowperm=( '+' + constants.USERNAME + ':fullcontrol'), fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE), mock.call.getfsquota(fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE, vfs=constants.EXPECTED_VFS), mock.call.setfsquota(constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE, scapacity=six.text_type(expected_capacity), hcapacity=six.text_type(expected_capacity))] 
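# With hpe3par_fstore_per_share disabled the fstore is shared, so instead of
# removing it the quota is shrunk: the share size (times units.Ki) is
# subtracted from the existing hard-block quota, as computed above in
# expected_capacity.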
self.mock_client.assert_has_calls(expected_calls) expected_mount_path = constants.EXPECTED_MOUNT_PATH + ( constants.EXPECTED_SHARE_ID) self.mediator._create_mount_directory.assert_called_with( expected_mount_path) self.mediator._mount_super_share.assert_called_with( constants.SMB_LOWER, expected_mount_path, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_FSTORE, constants.EXPECTED_IP_10203040) self.mediator._delete_share_directory.assert_called_with( expected_mount_path) self.mediator._unmount_share.assert_called_with( expected_mount_path) def test_mediator_create_snapshot(self): self.init_mediator() self.mediator.create_snapshot(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_NAME, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.createfsnap(constants.EXPECTED_VFS, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SNAP_NAME, fpg=constants.EXPECTED_FPG) ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_create_snapshot_not_allowed(self): self.init_mediator() self.mock_client.getfshare.return_value['members'][0]['shareDir'] = ( None) self.mock_client.getfshare.return_value['members'][0]['sharePath'] = ( 'foo/.snapshot/foo') self.assertRaises(exception.ShareBackendException, self.mediator.create_snapshot, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_NAME, constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_mediator_create_snapshot_share_not_found(self): self.init_mediator() mock_find_fshare = self.mock_object(self.mediator, '_find_fshare', mock.Mock(return_value=None)) self.assertRaises(exception.ShareBackendException, self.mediator.create_snapshot, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_NAME, constants.EXPECTED_FPG, constants.EXPECTED_VFS) mock_find_fshare.assert_called_once_with(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_mediator_create_snapshot_backend_exception(self): self.init_mediator() # createfsnap exception self.mock_client.createfsnap.side_effect = Exception( 'createfsnap fail.') self.assertRaises(exception.ShareBackendException, self.mediator.create_snapshot, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_NAME, constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_mediator_delete_snapshot(self): self.init_mediator() expected_name_from_array = 'name-from-array' self.mock_client.getfsnap.return_value = { 'total': 1, 'members': [ { 'snapName': expected_name_from_array, 'fstoreName': constants.EXPECTED_PROJECT_ID, } ], 'message': None } self.mock_client.getfshare.side_effect = [ # some typical independent NFS share (path) and SMB share (dir) { 'total': 1, 'members': [{'sharePath': '/anyfpg/anyvfs/anyfstore'}] }, { 'total': 1, 'members': [{'shareDir': []}], } ] self.mediator.delete_snapshot(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_NAME, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.getfsnap('*_%s' % constants.EXPECTED_SNAP_NAME, vfs=constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, pat=True, fstore=constants.EXPECTED_PROJECT_ID), mock.call.getfshare(constants.NFS_LOWER, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_PROJECT_ID), mock.call.getfshare(constants.SMB_LOWER, 
fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_PROJECT_ID), mock.call.removefsnap(constants.EXPECTED_VFS, constants.EXPECTED_PROJECT_ID, fpg=constants.EXPECTED_FPG, snapname=expected_name_from_array), mock.call.startfsnapclean(constants.EXPECTED_FPG, reclaimStrategy='maxspeed') ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_delete_snapshot_not_found(self): self.init_mediator() self.mock_client.getfsnap.return_value = { 'total': 0, 'members': [], } self.mediator.delete_snapshot(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_NAME, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.getfsnap('*_%s' % constants.EXPECTED_SNAP_NAME, vfs=constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, pat=True, fstore=constants.EXPECTED_SHARE_ID), ] # Code coverage for early exit when nothing to delete. self.mock_client.assert_has_calls(expected_calls) self.assertFalse(self.mock_client.getfshare.called) self.assertFalse(self.mock_client.removefsnap.called) self.assertFalse(self.mock_client.startfsnapclean.called) def test_mediator_delete_snapshot_shared_nfs(self): self.init_mediator() # Mock a share under this snapshot for NFS snapshot_dir = '.snapshot/DT_%s' % constants.EXPECTED_SNAP_NAME snapshot_path = '%s/%s' % (constants.EXPECTED_SHARE_PATH, snapshot_dir) self.mock_client.getfsnap.return_value = { 'total': 1, 'members': [{'snapName': constants.EXPECTED_SNAP_NAME}] } self.mock_client.getfshare.side_effect = [ # some typical independent NFS share (path) and SMB share (dir) { 'total': 1, 'members': [{'sharePath': snapshot_path}], }, { 'total': 0, 'members': [], } ] self.assertRaises(exception.Invalid, self.mediator.delete_snapshot, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_NAME, constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_mediator_delete_snapshot_shared_smb(self): self.init_mediator() # Mock a share under this snapshot for SMB snapshot_dir = '.snapshot/DT_%s' % constants.EXPECTED_SNAP_NAME self.mock_client.getfsnap.return_value = { 'total': 1, 'members': [{'snapName': constants.EXPECTED_SNAP_NAME}] } self.mock_client.getfshare.side_effect = [ # some typical independent NFS share (path) and SMB share (dir) { 'total': 1, 'members': [{'sharePath': constants.EXPECTED_SHARE_PATH}], }, { 'total': 1, 'members': [{'shareDir': snapshot_dir}], } ] self.assertRaises(exception.Invalid, self.mediator.delete_snapshot, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_NAME, constants.EXPECTED_FPG, constants.EXPECTED_VFS) def _assert_delete_snapshot_raises(self): self.assertRaises(exception.ShareBackendException, self.mediator.delete_snapshot, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_NAME, constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_mediator_delete_snapshot_backend_exceptions(self): self.init_mediator() # getfsnap exception self.mock_client.getfsnap.side_effect = Exception('getfsnap fail.') self._assert_delete_snapshot_raises() # getfsnap OK self.mock_client.getfsnap.side_effect = None self.mock_client.getfsnap.return_value = { 'total': 1, 'members': [{'snapName': constants.EXPECTED_SNAP_NAME, 'fstoreName': constants.EXPECTED_FSTORE}] } # getfshare exception self.mock_client.getfshare.side_effect = Exception('getfshare fail.') self._assert_delete_snapshot_raises() # getfshare OK def mock_fshare(*args, 
**kwargs): if args[0] == constants.NFS_LOWER: return { 'total': 1, 'members': [{'sharePath': '/anyfpg/anyvfs/anyfstore', 'fstoreName': constants.EXPECTED_FSTORE}] } else: return { 'total': 1, 'members': [{'shareDir': [], 'fstoreName': constants.EXPECTED_FSTORE}] } self.mock_client.getfshare.side_effect = mock_fshare # removefsnap exception self.mock_client.removefsnap.side_effect = Exception( 'removefsnap fail.') self._assert_delete_snapshot_raises() # removefsnap OK self.mock_client.removefsnap.side_effect = None self.mock_client.removefsnap.return_value = [] # startfsnapclean exception (logged, not raised) self.mock_client.startfsnapclean.side_effect = Exception( 'startfsnapclean fail.') mock_log = self.mock_object(hpe3parmediator, 'LOG') self.mediator.delete_snapshot(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_NAME, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.getfsnap('*_%s' % constants.EXPECTED_SNAP_NAME, vfs=constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, pat=True, fstore=constants.EXPECTED_FSTORE), mock.call.getfshare(constants.NFS_LOWER, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_FSTORE), mock.call.getfshare(constants.SMB_LOWER, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_FSTORE), mock.call.removefsnap(constants.EXPECTED_VFS, constants.EXPECTED_FSTORE, fpg=constants.EXPECTED_FPG, snapname=constants.EXPECTED_SNAP_NAME), mock.call.startfsnapclean(constants.EXPECTED_FPG, reclaimStrategy='maxspeed'), ] self.mock_client.assert_has_calls(expected_calls) self.assertTrue(mock_log.debug.called) self.assertTrue(mock_log.exception.called) @ddt.data(six.text_type('volname.1'), ['volname.2', 'volname.3']) def test_mediator_get_fpg_status(self, volume_name_or_list): """Mediator converts client stats to capacity result.""" expected_capacity = constants.EXPECTED_SIZE_2 expected_free = constants.EXPECTED_SIZE_1 self.init_mediator() self.mock_client.getfpg.return_value = { 'total': 1, 'members': [ { 'capacityKiB': str(expected_capacity * units.Mi), 'availCapacityKiB': str(expected_free * units.Mi), 'vvs': volume_name_or_list, } ], 'message': None, } self.mock_client.getfsquota.return_value = { 'total': 3, 'members': [ {'hardBlock': 1 * units.Ki}, {'hardBlock': 2 * units.Ki}, {'hardBlock': 3 * units.Ki}, ], 'message': None, } self.mock_client.getVolume.return_value = { 'provisioningType': hpe3parmediator.DEDUPE} expected_result = { 'pool_name': constants.EXPECTED_FPG, 'free_capacity_gb': expected_free, 'hpe3par_flash_cache': False, 'hp3par_flash_cache': False, 'dedupe': True, 'thin_provisioning': True, 'total_capacity_gb': expected_capacity, 'provisioned_capacity_gb': 6, } result = self.mediator.get_fpg_status(constants.EXPECTED_FPG) self.assertEqual(expected_result, result) expected_calls = [ mock.call.getfpg(constants.EXPECTED_FPG) ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_get_fpg_status_exception(self): """Exception during get_fpg_status call to getfpg.""" self.init_mediator() self.mock_client.getfpg.side_effect = constants.FAKE_EXCEPTION self.assertRaises(exception.ShareBackendException, self.mediator.get_fpg_status, constants.EXPECTED_FPG) expected_calls = [mock.call.getfpg(constants.EXPECTED_FPG)] self.mock_client.assert_has_calls(expected_calls) def test_mediator_get_fpg_status_error(self): """Unexpected result from getfpg during get_fpg_status.""" self.init_mediator() 
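# An empty getfpg reply ('total' of 0) is an unexpected backend result and
# must be reported as a ShareBackendException.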
self.mock_client.getfpg.return_value = {'total': 0} self.assertRaises(exception.ShareBackendException, self.mediator.get_fpg_status, constants.EXPECTED_FPG) expected_calls = [mock.call.getfpg(constants.EXPECTED_FPG)] self.mock_client.assert_has_calls(expected_calls) def test_mediator_get_fpg_status_bad_prov_type(self): """Test get_fpg_status handling of unexpected provisioning type.""" self.init_mediator() self.mock_client.getfpg.return_value = { 'total': 1, 'members': [ { 'capacityKiB': '1', 'availCapacityKiB': '1', 'vvs': 'foo', } ], 'message': None, } self.mock_client.getVolume.return_value = { 'provisioningType': 'BOGUS'} self.assertRaises(exception.ShareBackendException, self.mediator.get_fpg_status, constants.EXPECTED_FPG) expected_calls = [mock.call.getfpg(constants.EXPECTED_FPG)] self.mock_client.assert_has_calls(expected_calls) def test_mediator_get_provisioned_error(self): """Test error during get provisioned GB.""" self.init_mediator() error_return = {'message': 'Some error happened.'} self.mock_client.getfsquota.return_value = error_return self.assertRaises(exception.ShareBackendException, self.mediator.get_provisioned_gb, constants.EXPECTED_FPG) expected_calls = [mock.call.getfsquota(fpg=constants.EXPECTED_FPG)] self.mock_client.assert_has_calls(expected_calls) def test_mediator_get_provisioned_exception(self): """Test exception during get provisioned GB.""" self.init_mediator() self.mock_client.getfsquota.side_effect = constants.FAKE_EXCEPTION self.assertRaises(exception.ShareBackendException, self.mediator.get_provisioned_gb, constants.EXPECTED_FPG) expected_calls = [mock.call.getfsquota(fpg=constants.EXPECTED_FPG)] self.mock_client.assert_has_calls(expected_calls) def test_update_access_resync_rules_nfs(self): self.init_mediator() getfshare_result = { 'shareName': constants.EXPECTED_SHARE_NAME, 'fstoreName': constants.EXPECTED_FSTORE, 'clients': [constants.EXPECTED_IP_127], 'comment': constants.EXPECTED_COMMENT, } self.mock_client.getfshare.return_value = { 'total': 1, 'members': [getfshare_result], 'message': None, } self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], None, None, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.setfshare( constants.NFS_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_NAME, clientip='+' + constants.EXPECTED_IP_1234, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE, comment=constants.EXPECTED_COMMENT), ] self.mock_client.assert_has_calls(expected_calls) def test_update_access_resync_rules_cifs(self): self.init_mediator() getfshare_result = { 'shareName': constants.EXPECTED_SHARE_NAME, 'fstoreName': constants.EXPECTED_FSTORE, 'allowPerm': [['foo_user', 'fullcontrol']], 'allowIP': '', 'comment': constants.EXPECTED_COMMENT, } self.mock_client.getfshare.return_value = { 'total': 1, 'members': [getfshare_result], 'message': None, } self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_CIFS], None, None, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.setfshare( constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_NAME, allowperm='+' + constants.USERNAME + ':fullcontrol', fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE, comment=constants.EXPECTED_COMMENT), ] self.mock_client.assert_has_calls(expected_calls) def 
test_mediator_allow_ip_ro_access_cifs_error(self): self.init_mediator() self.assertRaises(exception.InvalidShareAccess, self.mediator.update_access, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_IP_RO], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) @ddt.data(constants.CIFS, constants.NFS) def test_mediator_allow_rw_snapshot_error(self, proto): self.init_mediator() getfshare_result = { 'shareName': 'foo_ro_name', 'fstoreName': 'foo_fstore', 'comment': 'foo_comment', } path = 'foo/.snapshot/foo' if proto == constants.NFS: getfshare_result['sharePath'] = path else: getfshare_result['shareDir'] = path self.mock_client.getfshare.return_value = { 'total': 1, 'members': [getfshare_result], 'message': None, } self.assertRaises(exception.InvalidShareAccess, self.mediator.update_access, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_IP], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) @ddt.data((constants.READ_WRITE, True), (constants.READ_WRITE, False), (constants.READ_ONLY, True), (constants.READ_ONLY, False)) @ddt.unpack def test_mediator_allow_user_access_cifs(self, access_level, use_other): """"Allow user access to cifs share.""" self.init_mediator() if use_other: # Don't find share until second attempt. findings = (None, self.mock_client.getfshare.return_value['members'][0]) mock_find_fshare = self.mock_object( self.mediator, '_find_fshare', mock.Mock(side_effect=findings)) if access_level == constants.READ_ONLY: expected_allowperm = '+%s:read' % constants.USERNAME else: expected_allowperm = '+%s:fullcontrol' % constants.USERNAME constants.ADD_RULE_USER['access_level'] = access_level self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_CIFS], [constants.ADD_RULE_USER], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.setfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, allowperm=expected_allowperm, comment=constants.EXPECTED_COMMENT, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE) ] self.mock_client.assert_has_calls(expected_calls) if use_other: readonly = access_level == constants.READ_ONLY expected_find_calls = [ mock.call(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.SMB_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, readonly=readonly), mock.call(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.SMB_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, readonly=not readonly), ] mock_find_fshare.assert_has_calls(expected_find_calls) @ddt.data(constants.CIFS, constants.NFS) def test_mediator_deny_rw_snapshot_error(self, proto): self.init_mediator() getfshare_result = { 'shareName': 'foo_ro_name', 'fstoreName': 'foo_fstore', 'comment': 'foo_comment', } path = 'foo/.snapshot/foo' if proto == constants.NFS: getfshare_result['sharePath'] = path else: getfshare_result['shareDir'] = path self.mock_client.getfshare.return_value = { 'total': 1, 'members': [getfshare_result], 'message': None, } mock_log = self.mock_object(hpe3parmediator, 'LOG') self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, proto, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [], [constants.DELETE_RULE_IP], 
constants.EXPECTED_FPG, constants.EXPECTED_VFS) self.assertFalse(self.mock_client.setfshare.called) self.assertTrue(mock_log.error.called) def test_mediator_deny_user_access_cifs(self): """"Deny user access to cifs share.""" self.init_mediator() expected_denyperm = '-%s:fullcontrol' % constants.USERNAME self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_CIFS], [], [constants.DELETE_RULE_USER], constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.setfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, allowperm=expected_denyperm, comment=constants.EXPECTED_COMMENT, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE) ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_allow_ip_access_cifs(self): """"Allow ip access to cifs share.""" self.init_mediator() expected_allowip = '+%s' % constants.EXPECTED_IP_1234 self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_IP], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.setfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, allowip=expected_allowip, comment=constants.EXPECTED_COMMENT, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE) ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_deny_ip_access_cifs(self): """"Deny ip access to cifs share.""" self.init_mediator() expected_denyip = '-%s' % constants.EXPECTED_IP_1234 self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [], [constants.DELETE_RULE_IP], constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.setfshare(constants.SMB_LOWER, constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, allowip=expected_denyip, comment=constants.EXPECTED_COMMENT, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE) ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_allow_ip_access_nfs(self): """"Allow ip access to nfs share.""" self.init_mediator() already_exists = (hpe3parmediator.IP_ALREADY_EXISTS % constants.EXPECTED_IP_1234) self.mock_client.setfshare.side_effect = ([], [already_exists]) expected_clientip = '+%s' % constants.EXPECTED_IP_1234 for _ in range(2): # Test 2nd allow w/ already exists message. 
self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_IP], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = 2 * [ mock.call.setfshare(constants.NFS.lower(), constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, clientip=expected_clientip, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE, comment=constants.EXPECTED_COMMENT), ] self.mock_client.assert_has_calls(expected_calls, any_order=True) def test_mediator_deny_ip_access_nfs(self): """"Deny ip access to nfs share.""" self.init_mediator() expected_clientip = '-%s' % constants.EXPECTED_IP_1234 self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [], [constants.DELETE_RULE_IP], constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.setfshare(constants.NFS.lower(), constants.EXPECTED_VFS, constants.EXPECTED_SHARE_ID, clientip=expected_clientip, fpg=constants.EXPECTED_FPG, fstore=constants.EXPECTED_FSTORE, comment=constants.EXPECTED_COMMENT) ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_deny_ip_ro_access_nfs_legacy(self): self.init_mediator() # Fail to find share with new naming. Succeed finding legacy naming. legacy = { 'shareName': 'foo_name', 'fstoreName': 'foo_fstore', 'comment': 'foo_comment', 'sharePath': 'foo/.snapshot/foo', } fshares = (None, legacy) mock_find_fshare = self.mock_object(self.mediator, '_find_fshare', mock.Mock(side_effect=fshares)) expected_clientip = '-%s' % constants.EXPECTED_IP_1234 self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [], [constants.DELETE_RULE_IP_RO], constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.setfshare(constants.NFS.lower(), constants.EXPECTED_VFS, legacy['shareName'], clientip=expected_clientip, fpg=constants.EXPECTED_FPG, fstore=legacy['fstoreName'], comment=legacy['comment']) ] self.mock_client.assert_has_calls(expected_calls) expected_find_fshare_calls = [ mock.call(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, readonly=True), mock.call(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, readonly=False), ] mock_find_fshare.assert_has_calls(expected_find_fshare_calls) def test_mediator_allow_user_access_nfs(self): """"Allow user access to nfs share is not supported.""" self.init_mediator() self.assertRaises(exception.HPE3ParInvalid, self.mediator.update_access, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_USER], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_mediator_allow_access_bad_proto(self): """"Allow user access to unsupported protocol.""" self.init_mediator() self.assertRaises(exception.InvalidShareAccess, self.mediator.update_access, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, 'unsupported_other_protocol', constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_IP], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_mediator_allow_access_bad_type(self): """"Allow user access to unsupported access 
type.""" self.init_mediator() self.assertRaises(exception.InvalidInput, self.mediator.update_access, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.CIFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_BAD_TYPE], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) def test_mediator_allow_access_missing_nfs_share(self): self.init_mediator() mock_find_fshare = self.mock_object(self.mediator, '_find_fshare', mock.Mock(return_value=None)) self.assertRaises(exception.HPE3ParInvalid, self.mediator.update_access, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_IP], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, readonly=False), mock.call(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, readonly=True), ] mock_find_fshare.assert_has_calls(expected_calls) def test_mediator_allow_nfs_ro_access(self): self.init_mediator() getfshare_result = { 'shareName': 'foo_ro_name', 'fstoreName': 'foo_fstore', 'shareDir': 'foo_dir', 'comment': 'foo_comment', } findings = (None, getfshare_result) mock_find_fshare = self.mock_object(self.mediator, '_find_fshare', mock.Mock(side_effect=findings)) self.mock_client.getfshare.return_value = { 'total': 1, 'members': [getfshare_result], 'message': None, } share_id = 'foo' self.mediator.update_access(constants.EXPECTED_PROJECT_ID, share_id, constants.NFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [constants.ADD_RULE_IP_RO], [], constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call(constants.EXPECTED_PROJECT_ID, share_id, constants.NFS_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, readonly=True), mock.call(constants.EXPECTED_PROJECT_ID, share_id, constants.NFS_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, readonly=False), ] mock_find_fshare.assert_has_calls(expected_calls) ro_share = 'osf-ro-%s' % share_id expected_calls = [ mock.call.createfshare(constants.NFS_LOWER, constants.EXPECTED_VFS, ro_share, clientip=constants.EXPECTED_IP_127_2, comment=getfshare_result['comment'], fpg=constants.EXPECTED_FPG, fstore=getfshare_result['fstoreName'], options='ro,no_root_squash,insecure', sharedir=getfshare_result['shareDir']), mock.call.getfshare(constants.NFS_LOWER, ro_share, fstore=getfshare_result['fstoreName'], fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS), mock.call.setfshare(constants.NFS_LOWER, constants.EXPECTED_VFS, getfshare_result['shareName'], clientip='+%s' % constants.EXPECTED_IP_1234, comment=getfshare_result['comment'], fpg=constants.EXPECTED_FPG, fstore=getfshare_result['fstoreName']), ] self.mock_client.assert_has_calls(expected_calls) def test_mediator_deny_access_missing_nfs_share(self): self.init_mediator() mock_find_fshare = self.mock_object(self.mediator, '_find_fshare', mock.Mock(return_value=None)) self.mediator.update_access(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_EXTRA_SPECS, [constants.ACCESS_RULE_NFS], [], [constants.DELETE_RULE_IP], constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS_LOWER, constants.EXPECTED_FPG, 
constants.EXPECTED_VFS, readonly=False), ] mock_find_fshare.assert_has_calls(expected_calls) @ddt.data((hpe3parmediator.ALLOW, 'ip', True, ['IP address foo already exists']), (hpe3parmediator.ALLOW, 'ip', False, ['Another share already exists for this path and client']), (hpe3parmediator.ALLOW, 'user', True, ['"allow" permission already exists for "foo"']), (hpe3parmediator.DENY, 'ip', True, ['foo does not exist, cannot be removed']), (hpe3parmediator.DENY, 'user', True, ['foo:fullcontrol" does not exist, cannot delete it.']), (hpe3parmediator.DENY, 'user', False, ['SMB share osf-foo does not exist']), (hpe3parmediator.ALLOW, 'ip', True, ['\r']), (hpe3parmediator.ALLOW, 'user', True, ['\r']), (hpe3parmediator.DENY, 'ip', True, ['\r']), (hpe3parmediator.DENY, 'user', True, ['\r']), (hpe3parmediator.ALLOW, 'ip', True, []), (hpe3parmediator.ALLOW, 'user', True, []), (hpe3parmediator.DENY, 'ip', True, []), (hpe3parmediator.DENY, 'user', True, [])) @ddt.unpack def test_ignore_benign_access_results(self, access, access_type, expect_false, results): returned = self.mediator.ignore_benign_access_results( access, access_type, 'foo', results) if expect_false: self.assertFalse(returned) else: self.assertEqual(results, returned) @ddt.data((2, 1, True), (2, 1, False), (1, 2, True), (1, 2, False), (1024, 2048, True), (1024, 2048, False), (2048, 1024, True), (2048, 1024, False), (99999999, 1, True), (99999999, 1, False), (1, 99999999, True), (1, 99999999, False), ) @ddt.unpack def test_mediator_resize_share(self, new_size, old_size, fstore_per_share): self.init_mediator() fstore = 'foo_fstore' mock_find_fstore = self.mock_object(self.mediator, '_find_fstore', mock.Mock(return_value=fstore)) fstore_init_size = int( constants.GET_FSQUOTA['members'][0]['hardBlock']) self.mediator.hpe3par_fstore_per_share = fstore_per_share if fstore_per_share: expected_capacity = new_size * units.Ki else: expected_capacity = ( (new_size - old_size) * units.Ki + fstore_init_size) self.mediator.resize_share( constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, new_size, old_size, constants.EXPECTED_FPG, constants.EXPECTED_VFS) mock_find_fstore.assert_called_with(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, allow_cross_protocol=False) self.mock_client.setfsquota.assert_called_with( constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, fstore=fstore, scapacity=six.text_type(expected_capacity), hcapacity=six.text_type(expected_capacity)) @ddt.data(['This is a fake setfsquota returned error'], Exception('boom')) def test_mediator_resize_share_setfsquota_side_effects(self, side_effect): self.init_mediator() fstore_init_size = int( constants.GET_FSQUOTA['members'][0]['hardBlock']) fstore = 'foo_fstore' new_size = 2 old_size = 1 expected_capacity = (new_size - old_size) * units.Ki + fstore_init_size mock_find_fstore = self.mock_object(self.mediator, '_find_fstore', mock.Mock(return_value=fstore)) self.mock_client.setfsquota.side_effect = side_effect self.assertRaises(exception.ShareBackendException, self.mediator.resize_share, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, new_size, old_size, constants.EXPECTED_FPG, constants.EXPECTED_VFS) mock_find_fstore.assert_called_with(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, allow_cross_protocol=False) self.mock_client.setfsquota.assert_called_with( constants.EXPECTED_VFS, 
fpg=constants.EXPECTED_FPG, fstore=fstore, scapacity=six.text_type(expected_capacity), hcapacity=six.text_type(expected_capacity)) def test_mediator_resize_share_not_found(self): self.init_mediator() mock_find_fshare = self.mock_object(self.mediator, '_find_fshare', mock.Mock(return_value=None)) self.assertRaises(exception.InvalidShare, self.mediator.resize_share, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, 999, 99, constants.EXPECTED_FPG, constants.EXPECTED_VFS) mock_find_fshare.assert_called_with(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, allow_cross_protocol=False) @ddt.data((('nfs', 'NFS', 'nFs'), 'smb'), (('smb', 'SMB', 'SmB', 'CIFS', 'cifs', 'CiFs'), 'nfs')) @ddt.unpack def test_other_protocol(self, protocols, expected_other): for protocol in protocols: self.assertEqual(expected_other, hpe3parmediator.HPE3ParMediator().other_protocol( protocol)) @ddt.data('', 'bogus') def test_other_protocol_exception(self, protocol): self.assertRaises(exception.InvalidShareAccess, hpe3parmediator.HPE3ParMediator().other_protocol, protocol) @ddt.data(('osf-uid', None, None, 'osf-uid'), ('uid', None, True, 'osf-ro-uid'), ('uid', None, False, 'osf-uid'), ('uid', 'smb', True, 'osf-smb-ro-uid'), ('uid', 'smb', False, 'osf-smb-uid'), ('uid', 'nfs', True, 'osf-nfs-ro-uid'), ('uid', 'nfs', False, 'osf-nfs-uid')) @ddt.unpack def test_ensure_prefix(self, uid, protocol, readonly, expected): self.assertEqual(expected, hpe3parmediator.HPE3ParMediator().ensure_prefix( uid, protocol=protocol, readonly=readonly)) def test_find_fstore_search(self): self.init_mediator() mock_find_fshare = self.mock_object(self.mediator, '_find_fshare', mock.Mock(return_value=None)) result = self.mediator._find_fstore(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS) mock_find_fshare.assert_called_once_with(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, allow_cross_protocol=False) self.assertIsNone(result) def test_find_fstore_search_xproto(self): self.init_mediator() mock_find_fshare = self.mock_object(self.mediator, '_find_fshare_with_proto', mock.Mock(return_value=None)) result = self.mediator._find_fstore(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, allow_cross_protocol=True) expected_calls = [ mock.call(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS, readonly=False), mock.call(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.SMB_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, readonly=False), ] mock_find_fshare.assert_has_calls(expected_calls) self.assertIsNone(result) def test_find_fshare_search(self): self.init_mediator() self.mock_client.getfshare.return_value = {} result = self.mediator._find_fshare(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.getfshare(constants.NFS_LOWER, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_PROJECT_ID), mock.call.getfshare(constants.NFS_LOWER, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_SHARE_ID), 
mock.call.getfshare(constants.NFS_LOWER, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG), mock.call.getfshare(constants.NFS_LOWER, constants.EXPECTED_SHARE_ID), ] self.mock_client.assert_has_calls(expected_calls) self.assertIsNone(result) def test_find_fshare_exception(self): self.init_mediator() self.mock_client.getfshare.side_effect = Exception('test unexpected') self.assertRaises(exception.ShareBackendException, self.mediator._find_fshare, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS) self.mock_client.getfshare.assert_called_once_with( constants.NFS_LOWER, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_PROJECT_ID) def test_find_fshare_hit(self): self.init_mediator() expected_result = {'shareName': 'hit'} self.mock_client.getfshare.return_value = { 'total': 1, 'members': [expected_result] } result = self.mediator._find_fshare(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_FPG, constants.EXPECTED_VFS) self.mock_client.getfshare.assert_called_once_with( constants.NFS_LOWER, constants.EXPECTED_SHARE_ID, fpg=constants.EXPECTED_FPG, vfs=constants.EXPECTED_VFS, fstore=constants.EXPECTED_PROJECT_ID), self.assertEqual(expected_result, result) def test_find_fsnap_search(self): self.init_mediator() self.mock_client.getfsnap.return_value = {} result = self.mediator._find_fsnap(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_snap_pattern = '*_%s' % constants.EXPECTED_SNAP_ID expected_calls = [ mock.call.getfsnap(expected_snap_pattern, vfs=constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, pat=True, fstore=constants.EXPECTED_PROJECT_ID), mock.call.getfsnap(expected_snap_pattern, vfs=constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, pat=True, fstore=constants.EXPECTED_SHARE_ID), mock.call.getfsnap(expected_snap_pattern, fpg=constants.EXPECTED_FPG, pat=True), mock.call.getfsnap(expected_snap_pattern, pat=True), ] self.mock_client.assert_has_calls(expected_calls) self.assertIsNone(result) def test_find_fsnap_exception(self): self.init_mediator() self.mock_client.getfsnap.side_effect = Exception('test unexpected') self.assertRaises(exception.ShareBackendException, self.mediator._find_fsnap, constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_snap_pattern = '*_%s' % constants.EXPECTED_SNAP_ID self.mock_client.getfsnap.assert_called_once_with( expected_snap_pattern, vfs=constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, pat=True, fstore=constants.EXPECTED_PROJECT_ID) def test_find_fsnap_hit(self): self.init_mediator() expected_result = {'snapName': 'hit'} self.mock_client.getfsnap.return_value = { 'total': 1, 'members': [expected_result] } result = self.mediator._find_fsnap(constants.EXPECTED_PROJECT_ID, constants.EXPECTED_SHARE_ID, constants.NFS, constants.EXPECTED_SNAP_ID, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_snap_pattern = '*_%s' % constants.EXPECTED_SNAP_ID self.mock_client.getfsnap.assert_called_once_with( expected_snap_pattern, vfs=constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, pat=True, fstore=constants.EXPECTED_PROJECT_ID) self.assertEqual(expected_result, result) def test_fsip_exists(self): self.init_mediator() # Make the result member a superset of the fsip 
items. fsip_plus = constants.EXPECTED_FSIP.copy() fsip_plus.update({'k': 'v', 'k2': 'v2'}) self.mock_client.getfsip.return_value = { 'total': 3, 'members': [{'bogus1': 1}, fsip_plus, {'bogus2': '2'}] } self.assertTrue(self.mediator.fsip_exists(constants.EXPECTED_FSIP)) self.mock_client.getfsip.assert_called_once_with( constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG) def test_fsip_does_not_exist(self): self.init_mediator() self.mock_client.getfsip.return_value = { 'total': 3, 'members': [{'bogus1': 1}, constants.OTHER_FSIP, {'bogus2': '2'}] } self.assertFalse(self.mediator.fsip_exists(constants.EXPECTED_FSIP)) self.mock_client.getfsip.assert_called_once_with( constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG) def test_fsip_exists_exception(self): self.init_mediator() class FakeException(Exception): pass self.mock_client.getfsip.side_effect = FakeException() self.assertRaises(exception.ShareBackendException, self.mediator.fsip_exists, constants.EXPECTED_FSIP) self.mock_client.getfsip.assert_called_once_with( constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG) def test_create_fsip_success(self): self.init_mediator() # Make the result member a superset of the fsip items. fsip_plus = constants.EXPECTED_FSIP.copy() fsip_plus.update({'k': 'v', 'k2': 'v2'}) self.mock_client.getfsip.return_value = { 'total': 3, 'members': [{'bogus1': 1}, fsip_plus, {'bogus2': '2'}] } self.mediator.create_fsip(constants.EXPECTED_IP_1234, constants.EXPECTED_SUBNET, constants.EXPECTED_VLAN_TAG, constants.EXPECTED_FPG, constants.EXPECTED_VFS) self.mock_client.getfsip.assert_called_once_with( constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG) expected_calls = [ mock.call.createfsip(constants.EXPECTED_IP_1234, constants.EXPECTED_SUBNET, constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, vlantag=constants.EXPECTED_VLAN_TAG), mock.call.getfsip(constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG), ] self.mock_client.assert_has_calls(expected_calls) def test_create_fsip_exception(self): self.init_mediator() class FakeException(Exception): pass self.mock_client.createfsip.side_effect = FakeException() self.assertRaises(exception.ShareBackendException, self.mediator.create_fsip, constants.EXPECTED_IP_1234, constants.EXPECTED_SUBNET, constants.EXPECTED_VLAN_TAG, constants.EXPECTED_FPG, constants.EXPECTED_VFS) self.mock_client.createfsip.assert_called_once_with( constants.EXPECTED_IP_1234, constants.EXPECTED_SUBNET, constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, vlantag=constants.EXPECTED_VLAN_TAG) def test_create_fsip_get_none(self): self.init_mediator() self.mock_client.getfsip.return_value = {'members': []} self.assertRaises(exception.ShareBackendException, self.mediator.create_fsip, constants.EXPECTED_IP_1234, constants.EXPECTED_SUBNET, constants.EXPECTED_VLAN_TAG, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.createfsip(constants.EXPECTED_IP_1234, constants.EXPECTED_SUBNET, constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG, vlantag=constants.EXPECTED_VLAN_TAG), mock.call.getfsip(constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG), ] self.mock_client.assert_has_calls(expected_calls) def test_remove_fsip_success(self): self.init_mediator() self.mock_client.getfsip.return_value = { 'members': [constants.OTHER_FSIP] } self.mediator.remove_fsip(constants.EXPECTED_IP_1234, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.removefsip(constants.EXPECTED_VFS, constants.EXPECTED_IP_1234, fpg=constants.EXPECTED_FPG), mock.call.getfsip(constants.EXPECTED_VFS, 
fpg=constants.EXPECTED_FPG), ] self.mock_client.assert_has_calls(expected_calls) @ddt.data(('ip', None), ('ip', ''), (None, 'vfs'), ('', 'vfs'), (None, None), ('', '')) @ddt.unpack def test_remove_fsip_without_ip_or_vfs(self, ip, vfs): self.init_mediator() self.mediator.remove_fsip(ip, constants.EXPECTED_FPG, vfs) self.assertFalse(self.mock_client.removefsip.called) def test_remove_fsip_not_gone(self): self.init_mediator() self.mock_client.getfsip.return_value = { 'members': [constants.EXPECTED_FSIP] } self.assertRaises(exception.ShareBackendException, self.mediator.remove_fsip, constants.EXPECTED_IP_1234, constants.EXPECTED_FPG, constants.EXPECTED_VFS) expected_calls = [ mock.call.removefsip(constants.EXPECTED_VFS, constants.EXPECTED_IP_1234, fpg=constants.EXPECTED_FPG), mock.call.getfsip(constants.EXPECTED_VFS, fpg=constants.EXPECTED_FPG), ] self.mock_client.assert_has_calls(expected_calls) def test_remove_fsip_exception(self): self.init_mediator() class FakeException(Exception): pass self.mock_client.removefsip.side_effect = FakeException() self.assertRaises(exception.ShareBackendException, self.mediator.remove_fsip, constants.EXPECTED_IP_1234, constants.EXPECTED_FPG, constants.EXPECTED_VFS) self.mock_client.removefsip.assert_called_once_with( constants.EXPECTED_VFS, constants.EXPECTED_IP_1234, fpg=constants.EXPECTED_FPG) def test__create_mount_directory(self): self.init_mediator() mount_location = '/mnt/foo' self.mediator._create_mount_directory(mount_location) utils.execute.assert_called_with('mkdir', mount_location, run_as_root=True) def test__create_mount_directory_error(self): self.init_mediator() self.mock_object(utils, 'execute', mock.Mock(side_effect=Exception('mkdir error.'))) mock_log = self.mock_object(hpe3parmediator, 'LOG') mount_location = '/mnt/foo' self.mediator._create_mount_directory(mount_location) utils.execute.assert_called_with('mkdir', mount_location, run_as_root=True) # Warning is logged (no exception thrown). self.assertTrue(mock_log.warning.called) def test__mount_super_share(self): self.init_mediator() # Test mounting NFS share. protocol = 'nfs' mount_location = '/mnt/foo' fpg = 'foo-fpg' vfs = 'bar-vfs' fstore = 'fstore' mount_path = '%s:/%s/%s/%s/' % (constants.EXPECTED_IP_10203040, fpg, vfs, fstore) self.mediator._mount_super_share(protocol, mount_location, fpg, vfs, fstore, constants.EXPECTED_IP_10203040) utils.execute.assert_called_with('mount', '-t', protocol, mount_path, mount_location, run_as_root=True) # Test mounting CIFS share. protocol = 'smb' mount_path = '//%s/%s/' % (constants.EXPECTED_IP_10203040, constants.EXPECTED_SUPER_SHARE) user = 'username=%s,password=%s,domain=%s' % ( constants.USERNAME, constants.PASSWORD, constants.EXPECTED_CIFS_DOMAIN) self.mediator._mount_super_share(protocol, mount_location, fpg, vfs, fstore, constants.EXPECTED_IP_10203040) utils.execute.assert_called_with('mount', '-t', 'cifs', mount_path, mount_location, '-o', user, run_as_root=True) def test__mount_super_share_error(self): self.init_mediator() self.mock_object(utils, 'execute', mock.Mock(side_effect=Exception('mount error.'))) mock_log = self.mock_object(hpe3parmediator, 'LOG') protocol = 'nfs' mount_location = '/mnt/foo' fpg = 'foo-fpg' vfs = 'bar-vfs' fstore = 'fstore' self.mediator._mount_super_share(protocol, mount_location, fpg, vfs, fstore, constants.EXPECTED_IP_10203040) # Warning is logged (no exception thrown). 
self.assertTrue(mock_log.warning.called) def test__delete_share_directory(self): self.init_mediator() mount_location = '/mnt/foo' self.mediator._delete_share_directory(mount_location) utils.execute.assert_called_with('rm', '-rf', mount_location, run_as_root=True) def test__delete_share_directory_error(self): self.init_mediator() self.mock_object(utils, 'execute', mock.Mock(side_effect=Exception('rm error.'))) mock_log = self.mock_object(hpe3parmediator, 'LOG') mount_location = '/mnt/foo' self.mediator._delete_share_directory(mount_location) # Warning is logged (no exception thrown). self.assertTrue(mock_log.warning.called) def test__unmount_share(self): self.init_mediator() mount_dir = '/mnt/foo' self.mediator._unmount_share(mount_dir) utils.execute.assert_called_with('umount', mount_dir, run_as_root=True) def test__unmount_share_error(self): self.init_mediator() self.mock_object(utils, 'execute', mock.Mock(side_effect=Exception('umount error.'))) mock_log = self.mock_object(hpe3parmediator, 'LOG') mount_dir = '/mnt/foo' self.mediator._unmount_share(mount_dir) # Warning is logged (no exception thrown). self.assertTrue(mock_log.warning.called) def test__delete_file_tree_no_config_options(self): self.init_mediator() mock_log = self.mock_object(hpe3parmediator, 'LOG') self.mediator.hpe3par_cifs_admin_access_username = None self.mediator._delete_file_tree( constants.EXPECTED_SHARE_ID, constants.SMB_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_FSTORE, constants.EXPECTED_SHARE_IP) # Warning is logged (no exception thrown). self.assertTrue(mock_log.warning.called) def test__create_super_share_createfshare_exception(self): self.init_mediator() self.mock_client.createfshare.side_effect = ( Exception("createfshare error.")) self.assertRaises( exception.ShareBackendException, self.mediator._create_super_share, constants.NFS_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_FSTORE) def test__create_super_share_setfshare_exception(self): self.init_mediator() self.mock_client.setfshare.side_effect = ( Exception("setfshare error.")) self.assertRaises( exception.ShareBackendException, self.mediator._create_super_share, constants.SMB_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_FSTORE) def test__revoke_admin_smb_access_error(self): self.init_mediator() self.mock_client.setfshare.side_effect = ( Exception("setfshare error")) self.assertRaises( exception.ShareBackendException, self.mediator._revoke_admin_smb_access, constants.SMB_LOWER, constants.EXPECTED_FPG, constants.EXPECTED_VFS, constants.EXPECTED_FSTORE, constants.EXPECTED_COMMENT) def test_build_export_locations_bad_protocol(self): self.assertRaises(exception.InvalidShareAccess, self.mediator.build_export_locations, "BOGUS", [constants.EXPECTED_IP_1234], constants.EXPECTED_SHARE_PATH) def test_build_export_locations_bad_ip(self): self.assertRaises(exception.InvalidInput, self.mediator.build_export_locations, constants.NFS, None, None) def test_build_export_locations_bad_path(self): self.assertRaises(exception.InvalidInput, self.mediator.build_export_locations, constants.NFS, [constants.EXPECTED_IP_1234], None) class OptionMatcher(object): """Options string order can vary. 
Compare as lists.""" def __init__(self, assert_func, expected_string): self.assert_func = assert_func self.expected = expected_string.split(',') def __eq__(self, actual_string): actual = actual_string.split(',') self.assert_func(sorted(self.expected), sorted(actual)) return True manila-10.0.0/manila/tests/share/drivers/hpe/test_hpe_3par_constants.py0000664000175000017500000001654713656750227026252 0ustar zuulzuul00000000000000# Copyright 2015 Hewlett Packard Enterprise Development LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. CIFS = 'CIFS' SMB_LOWER = 'smb' NFS = 'NFS' NFS_LOWER = 'nfs' IP = 'ip' USER = 'user' USERNAME = 'USERNAME_0' ADD_USERNAME = '+USERNAME_0:fullcontrol' DROP_USERNAME = '-USERNAME_0:fullcontrol' PASSWORD = 'PASSWORD_0' READ_WRITE = 'rw' READ_ONLY = 'ro' SAN_LOGIN = 'testlogin4san' SAN_PASSWORD = 'testpassword4san' API_URL = 'https://1.2.3.4:8080/api/v1' TIMEOUT = 60 PORT = 22 SHARE_TYPE_ID = 123456789 CIDR_PREFIX = '24' # Constants to use with Mock and expect in results EXPECTED_IP_10203040 = '10.20.30.40' EXPECTED_IP_10203041 = '10.20.30.41' EXPECTED_IP_1234 = '1.2.3.4' EXPECTED_MY_IP = '9.8.7.6' EXPECTED_IP_127 = '127.0.0.1' EXPECTED_IP_127_2 = '127.0.0.2' EXPECTED_ACCESS_LEVEL = 'foo_access' EXPECTED_SUBNET = '255.255.255.0' # based on CIDR_PREFIX above EXPECTED_VLAN_TYPE = 'vlan' EXPECTED_VXLAN_TYPE = 'vxlan' EXPECTED_VLAN_TAG = '101' EXPECTED_SERVER_ID = '1a1a1a1a-2b2b-3c3c-4d4d-5e5e5e5e5e5e' EXPECTED_PROJECT_ID = 'osf-nfs-project-id' SHARE_ID = 'share-id' EXPECTED_SHARE_ID = 'osf-share-id' EXPECTED_SHARE_ID_RO = 'osf-ro-share-id' EXPECTED_SHARE_NAME = 'share-name' EXPECTED_NET_NAME = 'testnet' EXPECTED_FPG = 'pool' EXPECTED_HOST = 'hostname@backend#' + EXPECTED_FPG UNEXPECTED_FPG = 'not_a_pool' UNEXPECTED_HOST = 'hostname@backend#' + UNEXPECTED_FPG HOST_WITHOUT_POOL_1 = 'hostname@backend' HOST_WITHOUT_POOL_2 = 'hostname@backend#' EXPECTED_SHARE_PATH = '/anyfpg/anyvfs/anyfstore' EXPECTED_SIZE_1 = 1 EXPECTED_SIZE_2 = 2 EXPECTED_SNAP_NAME = 'osf-snap-name' EXPECTED_SNAP_ID = 'osf-snap-id' EXPECTED_STATS = {'test': 'stats'} EXPECTED_FPG_CONF = [{EXPECTED_FPG: [EXPECTED_IP_10203040]}] EXPECTED_FSTORE = EXPECTED_PROJECT_ID EXPECTED_VFS = 'test_vfs' EXPECTED_GET_VFS = {'vfsname': EXPECTED_VFS, 'vfsip': {'address': [EXPECTED_IP_10203040]}} EXPECTED_GET_VFS_MULTIPLES = { 'vfsname': EXPECTED_VFS, 'vfsip': {'address': [EXPECTED_IP_10203041, EXPECTED_IP_10203040]}} EXPECTED_CLIENT_GET_VFS_MEMBERS_MULTI = { 'fspname': EXPECTED_VFS, 'vfsip': [ {'networkName': EXPECTED_NET_NAME, 'fspool': EXPECTED_VFS, 'address': EXPECTED_IP_10203040, 'prefixLen': EXPECTED_SUBNET, 'vfs': EXPECTED_VFS, 'vlanTag': EXPECTED_VLAN_TAG, }, {'networkName': EXPECTED_NET_NAME, 'fspool': EXPECTED_VFS, 'address': EXPECTED_IP_10203041, 'prefixLen': EXPECTED_SUBNET, 'vfs': EXPECTED_VFS, 'vlanTag': EXPECTED_VLAN_TAG, }, ], 'vfsname': EXPECTED_VFS, } EXPECTED_MEDIATOR_GET_VFS_RET_VAL_MULTI = { 'fspname': EXPECTED_VFS, 'vfsip': { 'networkName': EXPECTED_NET_NAME, 'fspool': EXPECTED_VFS, 'address': [ EXPECTED_IP_10203040, 
EXPECTED_IP_10203041, ], 'prefixLen': EXPECTED_SUBNET, 'vfs': EXPECTED_VFS, 'vlanTag': EXPECTED_VLAN_TAG }, 'vfsname': EXPECTED_VFS, } EXPECTED_CLIENT_GET_VFS_MEMBERS = { 'fspname': EXPECTED_VFS, 'vfsip': { 'networkName': EXPECTED_NET_NAME, 'fspool': EXPECTED_VFS, 'address': EXPECTED_IP_10203040, 'prefixLen': EXPECTED_SUBNET, 'vfs': EXPECTED_VFS, 'vlanTag': EXPECTED_VLAN_TAG, }, 'vfsname': EXPECTED_VFS, } EXPECTED_MEDIATOR_GET_VFS_RET_VAL = { 'fspname': EXPECTED_VFS, 'vfsip': { 'networkName': EXPECTED_NET_NAME, 'fspool': EXPECTED_VFS, 'address': [EXPECTED_IP_10203040], 'prefixLen': EXPECTED_SUBNET, 'vfs': EXPECTED_VFS, 'vlanTag': EXPECTED_VLAN_TAG, }, 'vfsname': EXPECTED_VFS, } EXPECTED_CLIENT_GET_VFS_RETURN_VALUE = { 'total': 1, 'members': [EXPECTED_CLIENT_GET_VFS_MEMBERS], } EXPECTED_CLIENT_GET_VFS_RETURN_VALUE_MULTI = { 'total': 1, 'members': [EXPECTED_CLIENT_GET_VFS_MEMBERS_MULTI], } EXPECTED_FPG_MAP = {EXPECTED_FPG: {EXPECTED_VFS: [EXPECTED_IP_10203040]}} EXPECTED_FPG_MAP_MULTI_VFS = {EXPECTED_FPG: { EXPECTED_VFS: [EXPECTED_IP_10203041, EXPECTED_IP_10203040]}} EXPECTED_SHARE_IP = '10.50.3.8' EXPECTED_HPE_DEBUG = True EXPECTED_COMMENT = "OpenStack Manila - foo-comment" EXPECTED_EXTRA_SPECS = {} EXPECTED_LOCATION = ':'.join((EXPECTED_IP_1234, EXPECTED_SHARE_PATH)) EXPECTED_SUPER_SHARE = 'OPENSTACK_SUPER_SHARE' EXPECTED_SUPER_SHARE_COMMENT = ('OpenStack super share used to delete nested ' 'shares.') EXPECTED_CIFS_DOMAIN = 'LOCAL_CLUSTER' EXPECTED_MOUNT_PATH = '/mnt/' SHARE_SERVER = { 'backend_details': { 'ip': EXPECTED_IP_10203040, 'fpg': EXPECTED_FPG, 'vfs': EXPECTED_VFS, }, } # Access rules. Allow for overwrites. ACCESS_RULE_NFS = { 'access_type': IP, 'access_to': EXPECTED_IP_1234, 'access_level': READ_WRITE, } ACCESS_RULE_CIFS = { 'access_type': USER, 'access_to': USERNAME, 'access_level': READ_WRITE, } ADD_RULE_BAD_TYPE = { 'access_type': 'unsupported_other_type', 'access_to': USERNAME, 'access_level': READ_WRITE, } ADD_RULE_IP = { 'access_type': IP, 'access_to': EXPECTED_IP_1234, 'access_level': READ_WRITE, } ADD_RULE_IP_RO = { 'access_type': IP, 'access_to': EXPECTED_IP_1234, 'access_level': READ_ONLY, } ADD_RULE_USER = { 'access_type': USER, 'access_to': USERNAME, 'access_level': READ_WRITE, } DELETE_RULE_IP = { 'access_type': IP, 'access_to': EXPECTED_IP_1234, 'access_level': READ_WRITE, } DELETE_RULE_USER = { 'access_type': USER, 'access_to': USERNAME, 'access_level': READ_WRITE, } DELETE_RULE_IP_RO = { 'access_type': IP, 'access_to': EXPECTED_IP_1234, 'access_level': READ_ONLY, } GET_FSQUOTA = {'message': None, 'total': 1, 'members': [{'hardBlock': '1024', 'softBlock': '1024'}]} EXPECTED_FSIP = { 'fspool': EXPECTED_FPG, 'vfs': EXPECTED_VFS, 'address': EXPECTED_IP_1234, 'prefixLen': EXPECTED_SUBNET, 'vlanTag': EXPECTED_VLAN_TAG, } OTHER_FSIP = { 'fspool': EXPECTED_FPG, 'vfs': EXPECTED_VFS, 'address': '9.9.9.9', 'prefixLen': EXPECTED_SUBNET, 'vlanTag': EXPECTED_VLAN_TAG, } NFS_SHARE_INFO = { 'project_id': EXPECTED_PROJECT_ID, 'id': EXPECTED_SHARE_ID, 'share_proto': NFS, 'export_location': EXPECTED_LOCATION, 'size': 1234, 'host': EXPECTED_HOST, } SNAPSHOT_INFO = { 'name': EXPECTED_SNAP_NAME, 'id': EXPECTED_SNAP_ID, 'share': { 'project_id': EXPECTED_PROJECT_ID, 'id': EXPECTED_SHARE_ID, 'share_proto': NFS, 'export_location': EXPECTED_LOCATION, 'host': EXPECTED_HOST, }, } SNAPSHOT_INSTANCE = { 'name': EXPECTED_SNAP_NAME, 'id': EXPECTED_SNAP_ID, 'share_id': EXPECTED_SHARE_ID, 'share_proto': NFS, } class FakeException(Exception): pass FAKE_EXCEPTION = FakeException("Fake exception for 
testing.") manila-10.0.0/manila/tests/share/drivers/test_lvm.py0000664000175000017500000006763513656750227022503 0ustar zuulzuul00000000000000# Copyright 2012 NetApp # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the LVM driver module.""" import os from unittest import mock import ddt from oslo_concurrency import processutils from oslo_config import cfg from oslo_utils import timeutils from manila.common import constants as const from manila import context from manila import exception from manila.share import configuration from manila.share.drivers import lvm from manila import test from manila.tests.db import fakes as db_fakes from manila.tests import fake_utils from manila.tests.share.drivers import test_generic CONF = cfg.CONF def fake_share(**kwargs): share = { 'id': 'fakeid', 'name': 'fakename', 'size': 1, 'share_proto': 'NFS', 'export_location': '127.0.0.1:/mnt/nfs/volume-00002', } share.update(kwargs) return db_fakes.FakeModel(share) def fake_snapshot(**kwargs): snapshot = { 'id': 'fakesnapshotid', 'share_name': 'fakename', 'share_id': 'fakeid', 'name': 'fakesnapshotname', 'share_proto': 'NFS', 'export_location': '127.0.0.1:/mnt/nfs/volume-00002', 'share': { 'id': 'fakeid', 'name': 'fakename', 'size': 1, 'share_proto': 'NFS', }, } snapshot.update(kwargs) return db_fakes.FakeModel(snapshot) def fake_access(**kwargs): access = { 'id': 'fakeaccid', 'access_type': 'ip', 'access_to': '10.0.0.2', 'access_level': 'rw', 'state': 'active', } access.update(kwargs) return db_fakes.FakeModel(access) @ddt.ddt class LVMShareDriverTestCase(test.TestCase): """Tests LVMShareDriver.""" def setUp(self): super(LVMShareDriverTestCase, self).setUp() fake_utils.stub_out_utils_execute(self) self._context = context.get_admin_context() CONF.set_default('lvm_share_volume_group', 'fakevg') CONF.set_default('lvm_share_export_ips', ['10.0.0.1', '10.0.0.2']) CONF.set_default('driver_handles_share_servers', False) CONF.set_default('reserved_share_percentage', 50) self._helper_cifs = mock.Mock() self._helper_nfs = mock.Mock() self.fake_conf = configuration.Configuration(None) self._db = mock.Mock() self._os = lvm.os = mock.Mock() self._os.path.join = os.path.join self._driver = lvm.LVMShareDriver(self._db, configuration=self.fake_conf) self._driver._helpers = { 'CIFS': self._helper_cifs, 'NFS': self._helper_nfs, } self.share = fake_share() self.access = fake_access() self.snapshot = fake_snapshot() self.server = { 'public_addresses': self.fake_conf.lvm_share_export_ips, 'instance_id': 'LVM', 'lock_name': 'manila_lvm', } # Used only to test compatibility with share manager self.share_server = "fake_share_server" def tearDown(self): super(LVMShareDriverTestCase, self).tearDown() fake_utils.fake_execute_set_repliers([]) fake_utils.fake_execute_clear_log() def test_do_setup(self): CONF.set_default('lvm_share_helpers', ['NFS=fakenfs']) lvm.importutils = mock.Mock() lvm.importutils.import_class.return_value = self._helper_nfs self._driver.do_setup(self._context) 
lvm.importutils.import_class.assert_has_calls([ mock.call('fakenfs') ]) def test_check_for_setup_error(self): def exec_runner(*ignore_args, **ignore_kwargs): return '\n fake1\n fakevg\n fake2\n', '' expected_exec = ['vgs --noheadings -o name'] fake_utils.fake_execute_set_repliers([(expected_exec[0], exec_runner)]) self._driver.check_for_setup_error() self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) def test_check_for_setup_error_no_vg(self): def exec_runner(*ignore_args, **ignore_kwargs): return '\n fake0\n fake1\n fake2\n', '' fake_utils.fake_execute_set_repliers([('vgs --noheadings -o name', exec_runner)]) self.assertRaises(exception.InvalidParameterValue, self._driver.check_for_setup_error) def test_check_for_setup_error_no_export_ips(self): def exec_runner(*ignore_args, **ignore_kwargs): return '\n fake1\n fakevg\n fake2\n', '' fake_utils.fake_execute_set_repliers([('vgs --noheadings -o name', exec_runner)]) CONF.set_default('lvm_share_export_ips', None) self.assertRaises(exception.InvalidParameterValue, self._driver.check_for_setup_error) def test_local_path_normal(self): share = fake_share(name='fake_sharename') CONF.set_default('lvm_share_volume_group', 'fake_vg') ret = self._driver._get_local_path(share) self.assertEqual('/dev/mapper/fake_vg-fake_sharename', ret) def test_local_path_escapes(self): share = fake_share(name='fake-sharename') CONF.set_default('lvm_share_volume_group', 'fake-vg') ret = self._driver._get_local_path(share) self.assertEqual('/dev/mapper/fake--vg-fake--sharename', ret) def test_create_share(self): CONF.set_default('lvm_share_mirrors', 0) self._driver._mount_device = mock.Mock() ret = self._driver.create_share(self._context, self.share, self.share_server) self._driver._mount_device.assert_called_with( self.share, '/dev/mapper/fakevg-fakename') expected_exec = [ 'lvcreate -L 1G -n fakename fakevg', 'mkfs.ext4 /dev/mapper/fakevg-fakename', ] self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) self.assertEqual(self._helper_nfs.create_exports.return_value, ret) def test_create_share_from_snapshot(self): CONF.set_default('lvm_share_mirrors', 0) self._driver._mount_device = mock.Mock() snapshot_instance = { 'snapshot_id': 'fakesnapshotid', 'name': 'fakename' } mount_share = '/dev/mapper/fakevg-fakename' mount_snapshot = '/dev/mapper/fakevg-fakename' self._helper_nfs.create_export.return_value = 'fakelocation' self._driver.create_share_from_snapshot(self._context, self.share, snapshot_instance, self.share_server) self._driver._mount_device.assert_called_with(self.share, mount_snapshot) expected_exec = [ 'lvcreate -L 1G -n fakename fakevg', 'mkfs.ext4 /dev/mapper/fakevg-fakename', 'e2fsck -y -f %s' % mount_share, 'tune2fs -U random %s' % mount_share, ("dd count=0 if=%s of=%s iflag=direct oflag=direct" % (mount_snapshot, mount_share)), ("dd if=%s of=%s count=1024 bs=1M iflag=direct oflag=direct" % (mount_snapshot, mount_share)), ] self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) def test_create_share_mirrors(self): share = fake_share(size='2048') CONF.set_default('lvm_share_mirrors', 2) self._driver._mount_device = mock.Mock() ret = self._driver.create_share(self._context, share, self.share_server) self._driver._mount_device.assert_called_with( share, '/dev/mapper/fakevg-fakename') expected_exec = [ 'lvcreate -L 2048G -n fakename fakevg -m 2 --nosync -R 2', 'mkfs.ext4 /dev/mapper/fakevg-fakename', ] self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) 
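# --- Illustrative aside (hypothetical, not the driver's actual code) --------
# The expected_exec entries above show the shape of the lvcreate call the LVM
# driver issues: plain shares get 'lvcreate -L <size>G -n <name> <vg>', and
# configured mirrors append '-m <mirrors> --nosync -R 2'.  A minimal sketch of
# assembling that argument list; the function name is illustrative only.
def build_lvcreate_cmd(name, size_gb, vg, mirrors=0):
    cmd = ['lvcreate', '-L', '%sG' % size_gb, '-n', name, vg]
    if mirrors > 0:
        cmd += ['-m', str(mirrors), '--nosync', '-R', '2']
    return cmd


assert (' '.join(build_lvcreate_cmd('fakename', 2048, 'fakevg', mirrors=2))
        == 'lvcreate -L 2048G -n fakename fakevg -m 2 --nosync -R 2')
# ----------------------------------------------------------------------------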
self.assertEqual(self._helper_nfs.create_exports.return_value, ret) def test_deallocate_container(self): expected_exec = ['lvremove -f fakevg/fakename'] self._driver._deallocate_container(self.share['name']) self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) def test_deallocate_container_error(self): def _fake_exec(*args, **kwargs): raise exception.ProcessExecutionError(stderr="error") self.mock_object(self._driver, '_try_execute', _fake_exec) self.assertRaises(exception.ProcessExecutionError, self._driver._deallocate_container, self.share['name']) def test_deallocate_container_not_found_error(self): def _fake_exec(*args, **kwargs): raise exception.ProcessExecutionError(stderr="not found") self.mock_object(self._driver, '_try_execute', _fake_exec) self._driver._deallocate_container(self.share['name']) @mock.patch.object(lvm.LVMShareDriver, '_update_share_stats', mock.Mock()) def test_get_share_stats(self): with mock.patch.object(self._driver, '_stats', mock.Mock) as stats: self.assertEqual(stats, self._driver.get_share_stats()) self.assertFalse(self._driver._update_share_stats.called) @mock.patch.object(lvm.LVMShareDriver, '_update_share_stats', mock.Mock()) def test_get_share_stats_refresh(self): with mock.patch.object(self._driver, '_stats', mock.Mock) as stats: self.assertEqual(stats, self._driver.get_share_stats(refresh=True)) self._driver._update_share_stats.assert_called_once_with() def test__unmount_device_is_busy_error(self): def exec_runner(*ignore_args, **ignore_kwargs): raise exception.ProcessExecutionError(stderr='device is busy') self._os.path.exists.return_value = True mount_path = self._get_mount_path(self.share) expected_exec = [ "umount -f %s" % (mount_path), ] fake_utils.fake_execute_set_repliers([(expected_exec[0], exec_runner)]) self.assertRaises(exception.ShareBusyException, self._driver._unmount_device, self.share) self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) def test__unmount_device_error(self): def exec_runner(*ignore_args, **ignore_kwargs): raise exception.ProcessExecutionError(stderr='fake error') mount_path = self._get_mount_path(self.share) self._os.path.exists.return_value = True cmd = "umount -f %s" % (mount_path) fake_utils.fake_execute_set_repliers([(cmd, exec_runner)]) self.assertRaises(processutils.ProcessExecutionError, self._driver._unmount_device, self.share) self._os.path.exists.assert_called_with(mount_path) def test__unmount_device_rmdir_error(self): def exec_runner(*ignore_args, **ignore_kwargs): raise exception.ProcessExecutionError(stderr='fake error') mount_path = self._get_mount_path(self.share) self._os.path.exists.return_value = True cmd = "rmdir %s" % (mount_path) fake_utils.fake_execute_set_repliers([(cmd, exec_runner)]) self.assertRaises(processutils.ProcessExecutionError, self._driver._unmount_device, self.share) self._os.path.exists.assert_called_with(mount_path) def test_create_snapshot(self): self._driver.create_snapshot(self._context, self.snapshot, self.share_server) mount_path = self._get_mount_path(self.snapshot) expected_exec = [ ("lvcreate -L 1G --name fakesnapshotname --snapshot " "%s/fakename" % (CONF.lvm_share_volume_group,)), "e2fsck -y -f /dev/mapper/fakevg-%s" % self.snapshot['name'], "tune2fs -U random /dev/mapper/fakevg-%s" % self.snapshot['name'], "mkdir -p " + mount_path, "mount /dev/mapper/fakevg-fakesnapshotname " + mount_path, "chmod 777 " + mount_path, ] self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) def test_ensure_share(self): device_name = 
'/dev/mapper/fakevg-fakename' with mock.patch.object(self._driver, '_mount_device', mock.Mock(return_value='fake_location')): self._driver.ensure_share(self._context, self.share, self.share_server) self._driver._mount_device.assert_called_with(self.share, device_name) self._helper_nfs.create_exports.assert_called_once_with( self.server, self.share['name'], recreate=True) def test_delete_share(self): mount_path = self._get_mount_path(self.share) self._helper_nfs.remove_export(mount_path, self.share['name']) self._driver._delete_share(self._context, self.share) def test_delete_snapshot(self): mount_path = self._get_mount_path(self.snapshot) expected_exec = [ 'umount -f %s' % mount_path, 'rmdir %s' % mount_path, 'lvremove -f fakevg/fakesnapshotname', ] self._driver.delete_snapshot(self._context, self.snapshot, self.share_server) self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) def test_delete_share_invalid_share(self): self._driver._get_helper = mock.Mock( side_effect=exception.InvalidShare(reason='fake')) self._driver.delete_share(self._context, self.share, self.share_server) def test_delete_share_process_execution_error(self): self.mock_object( self._helper_nfs, 'remove_export', mock.Mock(side_effect=exception.ProcessExecutionError)) self._driver._delete_share(self._context, self.share) self._helper_nfs.remove_exports.assert_called_once_with( self.server, self.share['name']) @ddt.data(const.ACCESS_LEVEL_RW, const.ACCESS_LEVEL_RO) def test_update_access(self, access_level): access_rules = [test_generic.get_fake_access_rule( '1.1.1.1', access_level), ] add_rules = [test_generic.get_fake_access_rule( '2.2.2.2', access_level), ] delete_rules = [test_generic.get_fake_access_rule( '3.3.3.3', access_level), ] self._driver.update_access(self._context, self.share, access_rules, add_rules=add_rules, delete_rules=delete_rules, share_server=self.server) (self._driver._helpers[self.share['share_proto']]. 
update_access.assert_called_once_with( self.server, self.share['name'], access_rules, add_rules=add_rules, delete_rules=delete_rules)) @ddt.data((['1001::1001/129'], False), (['1.1.1.256'], False), (['1001::1001'], [6]), ('1.1.1.0', [4]), (['1001::1001', '1.1.1.0'], [6, 4]), (['1001::1001/129', '1.1.1.0'], False)) @ddt.unpack def test_get_configured_ip_versions(self, configured_ips, configured_ip_version): CONF.set_default('lvm_share_export_ips', configured_ips) if configured_ip_version: self.assertEqual(configured_ip_version, self._driver.get_configured_ip_versions()) else: self.assertRaises(exception.InvalidInput, self._driver.get_configured_ip_versions) def test_mount_device(self): mount_path = self._get_mount_path(self.share) ret = self._driver._mount_device(self.share, 'fakedevice') expected_exec = [ "mkdir -p %s" % (mount_path,), "mount fakedevice %s" % (mount_path,), "chmod 777 %s" % (mount_path,), ] self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) self.assertEqual(mount_path, ret) def test_mount_device_already(self): def exec_runner(*args, **kwargs): if 'mount' in args and '-l' not in args: raise exception.ProcessExecutionError() else: return 'fakedevice', '' self.mock_object(self._driver, '_execute', exec_runner) mount_path = self._get_mount_path(self.share) ret = self._driver._mount_device(self.share, 'fakedevice') self.assertEqual(mount_path, ret) def test_mount_device_error(self): def exec_runner(*args, **kwargs): if 'mount' in args and '-l' not in args: raise exception.ProcessExecutionError() else: return 'fake', '' self.mock_object(self._driver, '_execute', exec_runner) self.assertRaises(exception.ProcessExecutionError, self._driver._mount_device, self.share, 'fakedevice') def test_get_helper(self): share_cifs = fake_share(share_proto='CIFS') share_nfs = fake_share(share_proto='NFS') share_fake = fake_share(share_proto='FAKE') self.assertEqual(self._driver._get_helper(share_cifs), self._helper_cifs) self.assertEqual(self._driver._get_helper(share_nfs), self._helper_nfs) self.assertRaises(exception.InvalidShare, self._driver._get_helper, share_fake) def _get_mount_path(self, share): return os.path.join(CONF.lvm_share_export_root, share['name']) def test__unmount_device(self): mount_path = self._get_mount_path(self.share) self._os.path.exists.return_value = True self.mock_object(self._driver, '_execute') self._driver._unmount_device(self.share) self._driver._execute.assert_any_call('umount', '-f', mount_path, run_as_root=True) self._driver._execute.assert_any_call('rmdir', mount_path, run_as_root=True) self._os.path.exists.assert_called_with(mount_path) def test_extend_share(self): local_path = self._driver._get_local_path(self.share) self.mock_object(self._driver, '_extend_container') self.mock_object(self._driver, '_execute') self._driver.extend_share(self.share, 3) self._driver._extend_container.assert_called_once_with(self.share, local_path, 3) self._driver._execute.assert_called_once_with('resize2fs', local_path, run_as_root=True) def test_ssh_exec_as_root(self): command = ['fake_command'] self.mock_object(self._driver, '_execute') self._driver._ssh_exec_as_root('fake_server', command) self._driver._execute.assert_called_once_with('fake_command', check_exit_code=True) def test_ssh_exec_as_root_with_sudo(self): command = ['sudo', 'fake_command'] self.mock_object(self._driver, '_execute') self._driver._ssh_exec_as_root('fake_server', command) self._driver._execute.assert_called_once_with( 'fake_command', run_as_root=True, check_exit_code=True) def 
test_extend_container(self): self.mock_object(self._driver, '_try_execute') self._driver._extend_container(self.share, 'device_name', 3) self._driver._try_execute.assert_called_once_with( 'lvextend', '-L', '3G', '-n', 'device_name', run_as_root=True) def test_get_share_server_pools(self): expected_result = [{ 'pool_name': 'lvm-single-pool', 'total_capacity_gb': 33, 'free_capacity_gb': 22, 'reserved_percentage': 0, }, ] self.mock_object( self._driver, '_execute', mock.Mock(return_value=("VSize 33g VFree 22g", None))) self.assertEqual(expected_result, self._driver.get_share_server_pools()) self._driver._execute.assert_called_once_with( 'vgs', 'fakevg', '--rows', '--units', 'g', run_as_root=True) def test_copy_volume_error(self): def _fake_exec(*args, **kwargs): if 'count=0' in args: raise exception.ProcessExecutionError() self.mock_object(self._driver, '_execute', mock.Mock(side_effect=_fake_exec)) self._driver._copy_volume('src', 'dest', 1) self._driver._execute.assert_any_call('dd', 'count=0', 'if=src', 'of=dest', 'iflag=direct', 'oflag=direct', run_as_root=True) self._driver._execute.assert_any_call('dd', 'if=src', 'of=dest', 'count=1024', 'bs=1M', run_as_root=True) @ddt.data((['1.1.1.1'], 4), (['1001::1001'], 6)) @ddt.unpack def test_update_share_stats(self, configured_ip, version): CONF.set_default('lvm_share_export_ips', configured_ip) self.mock_object(self._driver, 'get_share_server_pools', mock.Mock(return_value='test-pool')) self._driver._update_share_stats() self.assertEqual('LVM', self._driver._stats['share_backend_name']) self.assertEqual('NFS_CIFS', self._driver._stats['storage_protocol']) self.assertEqual(50, self._driver._stats['reserved_percentage']) self.assertTrue(self._driver._stats['snapshot_support']) self.assertEqual('LVMShareDriver', self._driver._stats['driver_name']) self.assertEqual('test-pool', self._driver._stats['pools']) self.assertEqual(version == 4, self._driver._stats['ipv4_support']) self.assertEqual(version == 6, self._driver._stats['ipv6_support']) def test_revert_to_snapshot(self): mock_update_access = self.mock_object(self._helper_nfs, 'update_access') self._driver.revert_to_snapshot(self._context, self.snapshot, [], [], self.share_server) snap_lv = "%s/fakesnapshotname" % (CONF.lvm_share_volume_group) share_lv = "%s/fakename" % (CONF.lvm_share_volume_group) share_mount_path = self._get_mount_path(self.snapshot['share']) snapshot_mount_path = self._get_mount_path(self.snapshot) expected_exec = [ ('umount -f %s' % snapshot_mount_path), ("rmdir %s" % snapshot_mount_path), ("umount -f %s" % share_mount_path), ("rmdir %s" % share_mount_path), ("lvconvert --merge %s" % snap_lv), ("lvcreate -L 1G --name fakesnapshotname --snapshot %s" % share_lv), ("e2fsck -y -f /dev/mapper/%s-fakesnapshotname" % CONF.lvm_share_volume_group), ("tune2fs -U random /dev/mapper/%s-fakesnapshotname" % CONF.lvm_share_volume_group), ("mkdir -p %s" % share_mount_path), ("mount /dev/mapper/%s-fakename %s" % (CONF.lvm_share_volume_group, share_mount_path)), ("chmod 777 %s" % share_mount_path), ("mkdir -p %s" % snapshot_mount_path), ("mount /dev/mapper/fakevg-fakesnapshotname " "%s" % snapshot_mount_path), ("chmod 777 %s" % snapshot_mount_path), ] self.assertEqual(expected_exec, fake_utils.fake_execute_get_log()) self.assertEqual(4, mock_update_access.call_count) def test_snapshot_update_access(self): access_rules = [{ 'access_type': 'ip', 'access_to': '1.1.1.1', 'access_level': 'ro', }] add_rules = [{ 'access_type': 'ip', 'access_to': '2.2.2.2', 'access_level': 'ro', }] delete_rules = 
[{ 'access_type': 'ip', 'access_to': '3.3.3.3', 'access_level': 'ro', }] self._driver.snapshot_update_access(self._context, self.snapshot, access_rules, add_rules, delete_rules) (self._driver._helpers[self.snapshot['share']['share_proto']]. update_access.assert_called_once_with( self.server, self.snapshot['name'], access_rules, add_rules=add_rules, delete_rules=delete_rules)) @mock.patch.object(timeutils, 'utcnow', mock.Mock( return_value='fake_date')) def test_update_share_usage_size(self): mount_path = self._get_mount_path(self.share) self._os.path.exists.return_value = True self.mock_object( self._driver, '_execute', mock.Mock(return_value=( "Mounted on Used " + mount_path + " 1G", None))) update_shares = self._driver.update_share_usage_size( self._context, [self.share, ]) self._os.path.exists.assert_called_with(mount_path) self.assertEqual( [{'id': 'fakeid', 'used_size': '1', 'gathered_at': 'fake_date'}], update_shares) self._driver._execute.assert_called_once_with( 'df', '-l', '--output=target,used', '--block-size=g') @mock.patch.object(timeutils, 'utcnow', mock.Mock( return_value='fake_date')) def test_update_share_usage_size_multiple_share(self): share1 = fake_share(id='fakeid_get_fail', name='get_fail') share2 = fake_share(id='fakeid_success', name='get_success') share3 = fake_share(id='fakeid_not_exist', name='get_not_exist') mount_path2 = self._get_mount_path(share2) mount_path3 = self._get_mount_path(share3) self._os.path.exists.side_effect = [True, True, False] self.mock_object( self._driver, '_execute', mock.Mock(return_value=( "Mounted on Used " + mount_path2 + " 1G", None))) update_shares = self._driver.update_share_usage_size( self._context, [share1, share2, share3]) self._os.path.exists.assert_called_with(mount_path3) self.assertEqual( [{'gathered_at': 'fake_date', 'id': 'fakeid_success', 'used_size': '1'}], update_shares) self._driver._execute.assert_called_with( 'df', '-l', '--output=target,used', '--block-size=g') def test_update_share_usage_size_fail(self): def _fake_exec(*args, **kwargs): raise exception.ProcessExecutionError(stderr="error") self.mock_object(self._driver, '_execute', _fake_exec) self.assertRaises(exception.ProcessExecutionError, self._driver.update_share_usage_size, self._context, [self.share]) def test_get_backend_info(self): backend_info = self._driver.get_backend_info(self._context) self.assertEqual( {'export_ips': ','.join(self.server['public_addresses']), 'db_version': mock.ANY}, backend_info) manila-10.0.0/manila/tests/share/drivers/hitachi/0000775000175000017500000000000013656750362021664 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/hitachi/__init__.py0000664000175000017500000000000013656750227023763 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/hitachi/hsp/0000775000175000017500000000000013656750362022456 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/hitachi/hsp/__init__.py0000664000175000017500000000000013656750227024555 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/hitachi/hsp/test_rest.py0000664000175000017500000003110313656750227025042 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt import json import requests import time from unittest import mock from manila import exception from manila.share.drivers.hitachi.hsp import rest from manila import test from manila.tests.share.drivers.hitachi.hsp import fakes class FakeRequests(object): status_code = 0 headers = {} content = "" def __init__(self, status_code, content='null'): self.status_code = status_code self.headers = {'location': 'fake_location'} self.content = content def json(self): return {'messages': [{'message': 'fake_msg'}]} @ddt.ddt class HitachiHSPRestTestCase(test.TestCase): def setUp(self): super(HitachiHSPRestTestCase, self).setUp() self.hitachi_hsp_host = '172.24.47.190' self.hitachi_hsp_username = 'hds_hnas_user' self.hitachi_hsp_password = 'hds_hnas_password' self._driver = rest.HSPRestBackend(self.hitachi_hsp_host, self.hitachi_hsp_username, self.hitachi_hsp_password) @ddt.data(202, 500) def test__send_post(self, code): self.mock_object(requests, "post", mock.Mock( return_value=FakeRequests(code))) if code == 202: self.mock_object(rest.HSPRestBackend, "_wait_job_status", mock.Mock()) self._driver._send_post('fake_url') rest.HSPRestBackend._wait_job_status.assert_called_once_with( 'fake_location', 'COMPLETE') else: self.assertRaises(exception.HSPBackendException, self._driver._send_post, 'fake_url') @ddt.data({'code': 200, 'content': 'null'}, {'code': 200, 'content': 'fake_content'}, {'code': 500, 'content': 'null'}) @ddt.unpack def test__send_get(self, code, content): self.mock_object(requests, "get", mock.Mock( return_value=FakeRequests(code, content))) if code == 200: result = self._driver._send_get('fake_url') if content == 'null': self.assertIsNone(result) else: self.assertEqual(FakeRequests(code, content).json(), result) else: self.assertRaises(exception.HSPBackendException, self._driver._send_get, 'fake_url') @ddt.data(202, 500) def test__send_delete(self, code): self.mock_object(requests, "delete", mock.Mock( return_value=FakeRequests(code))) if code == 202: self.mock_object(rest.HSPRestBackend, "_wait_job_status", mock.Mock()) self._driver._send_delete('fake_url') rest.HSPRestBackend._wait_job_status.assert_called_once_with( 'fake_location', 'COMPLETE') else: self.assertRaises(exception.HSPBackendException, self._driver._send_delete, 'fake_url') def test_add_file_system(self): url = "https://172.24.47.190/hspapi/file-systems/" payload = { 'quota': fakes.file_system['properties']['quota'], 'auto-access': False, 'enabled': True, 'description': '', 'record-access-time': True, 'tags': '', 'space-hwm': 90, 'space-lwm': 70, 'name': fakes.file_system['properties']['name'], } self.mock_object(rest.HSPRestBackend, "_send_post", mock.Mock()) self._driver.add_file_system(fakes.file_system['properties']['name'], fakes.file_system['properties']['quota']) rest.HSPRestBackend._send_post.assert_called_once_with( url, payload=json.dumps(payload)) def test_get_file_system(self): url = ("https://172.24.47.190/hspapi/file-systems/list?name=%s" % fakes.file_system['properties']['name']) self.mock_object(rest.HSPRestBackend, "_send_get", mock.Mock( return_value={'list': [fakes.file_system]})) result = 
self._driver.get_file_system( fakes.file_system['properties']['name']) self.assertEqual(fakes.file_system, result) rest.HSPRestBackend._send_get.assert_called_once_with(url) def test_get_file_system_exception(self): url = ("https://172.24.47.190/hspapi/file-systems/list?name=%s" % fakes.file_system['properties']['name']) self.mock_object(rest.HSPRestBackend, "_send_get", mock.Mock(return_value=None)) self.assertRaises(exception.HSPItemNotFoundException, self._driver.get_file_system, fakes.file_system['properties']['name']) rest.HSPRestBackend._send_get.assert_called_once_with(url) def test_delete_file_system(self): url = ("https://172.24.47.190/hspapi/file-systems/%s" % fakes.file_system['id']) self.mock_object(rest.HSPRestBackend, "_send_delete", mock.Mock()) self._driver.delete_file_system(fakes.file_system['id']) rest.HSPRestBackend._send_delete.assert_called_once_with(url) def test_resize_file_system(self): url = ("https://172.24.47.190/hspapi/file-systems/%s" % fakes.file_system['id']) new_size = 53687091200 payload = {'quota': new_size} self.mock_object(rest.HSPRestBackend, "_send_post", mock.Mock()) self._driver.resize_file_system(fakes.file_system['id'], new_size) rest.HSPRestBackend._send_post.assert_called_once_with( url, payload=json.dumps(payload)) def test_rename_file_system(self): url = ("https://172.24.47.190/hspapi/file-systems/%s" % fakes.file_system['id']) new_name = "fs_rename" payload = {'name': new_name} self.mock_object(rest.HSPRestBackend, "_send_post", mock.Mock()) self._driver.rename_file_system(fakes.file_system['id'], new_name) rest.HSPRestBackend._send_post.assert_called_once_with( url, payload=json.dumps(payload)) def test_add_share(self): url = "https://172.24.47.190/hspapi/shares/" payload = { 'description': '', 'type': 'NFS', 'enabled': True, 'tags': '', 'name': fakes.share['name'], 'file-system-id': fakes.share['properties']['file-system-id'], } self.mock_object(rest.HSPRestBackend, "_send_post", mock.Mock()) self._driver.add_share(fakes.share['name'], fakes.share['properties']['file-system-id']) rest.HSPRestBackend._send_post.assert_called_once_with( url, payload=json.dumps(payload)) @ddt.data({'fs_id': None, 'name': fakes.share['name'], 'url': 'https://172.24.47.190/hspapi/shares/list?' 'name=aa4a7710-f326-41fb-ad18-b4ad587fc87a'}, {'fs_id': fakes.share['properties']['file-system-id'], 'name': None, 'url': 'https://172.24.47.190/hspapi/shares/list?' 'file-system-id=33689245-1806-45d0-8507-0700b5f89750'}) @ddt.unpack def test_get_share(self, fs_id, name, url): self.mock_object(rest.HSPRestBackend, "_send_get", mock.Mock(return_value={'list': [fakes.share]})) result = self._driver.get_share(fs_id, name) self.assertEqual(fakes.share, result) rest.HSPRestBackend._send_get.assert_called_once_with(url) def test_get_share_exception(self): url = ("https://172.24.47.190/hspapi/shares/list?" 
"name=aa4a7710-f326-41fb-ad18-b4ad587fc87a") self.mock_object(rest.HSPRestBackend, "_send_get", mock.Mock( return_value=None)) self.assertRaises(exception.HSPItemNotFoundException, self._driver.get_share, None, fakes.share['name']) rest.HSPRestBackend._send_get.assert_called_once_with(url) def test_delete_share(self): url = "https://172.24.47.190/hspapi/shares/%s" % fakes.share['id'] self.mock_object(rest.HSPRestBackend, "_send_delete") self._driver.delete_share(fakes.share['id']) rest.HSPRestBackend._send_delete.assert_called_once_with(url) def test_add_access_rule(self): url = "https://172.24.47.190/hspapi/shares/%s/" % fakes.share['id'] payload = { "action": "add-access-rule", "name": fakes.share['id'] + fakes.access_rule['access_to'], "host-specification": fakes.access_rule['access_to'], "read-write": fakes.access_rule['access_level'], } self.mock_object(rest.HSPRestBackend, "_send_post", mock.Mock()) self._driver.add_access_rule(fakes.share['id'], fakes.access_rule['access_to'], fakes.access_rule['access_level']) rest.HSPRestBackend._send_post.assert_called_once_with( url, payload=json.dumps(payload)) def test_delete_access_rule(self): url = "https://172.24.47.190/hspapi/shares/%s/" % fakes.share['id'] payload = { "action": "delete-access-rule", "name": fakes.hsp_rules[0]['name'], } self.mock_object(rest.HSPRestBackend, "_send_post", mock.Mock()) self._driver.delete_access_rule(fakes.share['id'], fakes.hsp_rules[0]['name']) rest.HSPRestBackend._send_post.assert_called_once_with( url, payload=json.dumps(payload)) @ddt.data({'value': {'list': fakes.hsp_rules}, 'res': fakes.hsp_rules}, {'value': None, 'res': []}) @ddt.unpack def test_get_access_rules(self, value, res): url = ("https://172.24.47.190/hspapi/shares/%s/access-rules" % fakes.share['id']) self.mock_object(rest.HSPRestBackend, "_send_get", mock.Mock( return_value=value)) result = self._driver.get_access_rules(fakes.share['id']) self.assertEqual(res, result) rest.HSPRestBackend._send_get.assert_called_once_with(url) @ddt.data({'list': [fakes.hsp_cluster]}, None) def test_get_clusters(self, value): url = "https://172.24.47.190/hspapi/clusters/list" self.mock_object(rest.HSPRestBackend, "_send_get", mock.Mock( return_value=value)) if value: result = self._driver.get_cluster() self.assertEqual(fakes.hsp_cluster, result) else: self.assertRaises(exception.HSPBackendException, self._driver.get_cluster) rest.HSPRestBackend._send_get.assert_called_once_with(url) @ddt.data('COMPLETE', 'ERROR', 'RUNNING') def test__wait_job_status(self, stat): url = "fake_job_url" json = { 'id': 'fake_id', 'properties': { 'completion-details': 'Duplicate NFS access rule exists', 'completion-status': stat, }, 'messages': [{ 'id': 'fake_id', 'message': 'fake_msg', }] } self.mock_object(rest.HSPRestBackend, "_send_get", mock.Mock( return_value=json)) self.mock_object(time, "sleep") if stat == 'COMPLETE': self._driver._wait_job_status(url, 'COMPLETE') rest.HSPRestBackend._send_get.assert_called_once_with(url) elif stat == 'ERROR': self.assertRaises(exception.HSPBackendException, self._driver._wait_job_status, url, 'COMPLETE') rest.HSPRestBackend._send_get.assert_called_once_with(url) else: self.assertRaises(exception.HSPTimeoutException, self._driver._wait_job_status, url, 'COMPLETE') rest.HSPRestBackend._send_get.assert_has_calls([ mock.call(url), mock.call(url), mock.call(url), mock.call(url), mock.call(url), ]) manila-10.0.0/manila/tests/share/drivers/hitachi/hsp/fakes.py0000664000175000017500000000472313656750227024127 0ustar zuulzuul00000000000000# 
Copyright (c) 2016 Hitachi Data Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. file_system = { 'id': '33689245-1806-45d0-8507-0700b5f89750', 'properties': { 'cluster-id': '85d5b9e2-27f3-11e6-8b50-005056a75f66', 'quota': 107374182400, 'name': '07c966f9-fea2-4e12-ab72-97cb3c529bb5', 'used-capacity': 53687091200, 'free-capacity': 53687091200 }, } share = { 'id': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'name': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'properties': { 'file-system-id': '33689245-1806-45d0-8507-0700b5f89750', 'file-system-name': 'fake_name', }, } invalid_share = { 'id': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'name': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'size': 100, 'host': 'hsp', 'share_proto': 'CIFS', } access_rule = { 'id': 'acdc7172b-fe07-46c4-b78f-df3e0324ccd0', 'access_type': 'ip', 'access_to': '172.24.44.200', 'access_level': 'rw', } hsp_rules = [{ 'name': 'qa_access', 'host-specification': '172.24.44.200', 'read-write': 'true', }] hsp_cluster = { 'id': '835e7c00-9d04-11e5-a935-f4521480e990', 'properties': { 'total-storage-capacity': 107374182400, 'total-storage-used': 53687091200, 'total-storage-available': 53687091200, 'total-file-system-capacity': 107374182400, 'total-file-system-space-used': 53687091200, 'total-file-system-space-available': 53687091200 }, } stats_data = { 'share_backend_name': 'HSP', 'vendor_name': 'Hitachi', 'driver_version': '1.0.0', 'storage_protocol': 'NFS', 'pools': [{ 'reserved_percentage': 0, 'pool_name': 'HSP', 'thin_provisioning': True, 'total_capacity_gb': 100, 'free_capacity_gb': 50, 'max_over_subscription_ratio': 20, 'qos': False, 'dedupe': False, 'compression': False, }], } manila-10.0.0/manila/tests/share/drivers/hitachi/hsp/test_driver.py0000664000175000017500000005456413656750227025400 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
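# ---------------------------------------------------------------------------
# Editor's note (illustrative aside, not part of the upstream module): the
# Hitachi HSP driver tests below all follow the same pattern -- patch a
# rest.HSPRestBackend method with mock.Mock(return_value=...) or
# side_effect=..., invoke the driver entry point, then verify the backend
# was called exactly once with the expected arguments.  The helper below is
# a minimal, hedged sketch of that pattern using plain unittest.mock only;
# every name in it is hypothetical and nothing in the real test cases uses it.
def _example_backend_mock_pattern():
    from unittest import mock

    # A bare Mock stands in for the REST backend; attribute access
    # auto-creates the child mock for get_file_system.
    backend = mock.Mock()
    backend.get_file_system.return_value = {'quota': 10 * 1024 ** 3}

    # Driver-side code typically reads a property off the backend reply.
    quota = backend.get_file_system('fake_fs')['quota']

    # The test then asserts both the computed value and the exact call.
    assert quota == 10 * 1024 ** 3
    backend.get_file_system.assert_called_once_with('fake_fs')
# ---------------------------------------------------------------------------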
from unittest import mock import ddt from oslo_config import cfg from manila import exception import manila.share.configuration import manila.share.driver from manila.share.drivers.hitachi.hsp import driver from manila.share.drivers.hitachi.hsp import rest from manila import test from manila.tests import fake_share from manila.tests.share.drivers.hitachi.hsp import fakes from manila.common import constants from oslo_utils import units CONF = cfg.CONF @ddt.ddt class HitachiHSPTestCase(test.TestCase): def setUp(self): super(HitachiHSPTestCase, self).setUp() CONF.set_default('driver_handles_share_servers', False) CONF.hitachi_hsp_host = '172.24.47.190' CONF.hitachi_hsp_username = 'hsp_user' CONF.hitachi_hsp_password = 'hsp_password' CONF.hitachi_hsp_job_timeout = 300 self.fake_el = [{ "path": CONF.hitachi_hsp_host + ":/fakeinstanceid", "metadata": {}, "is_admin_only": False, }] self.fake_share = fake_share.fake_share(share_proto='nfs') self.fake_share_instance = fake_share.fake_share_instance( base_share=self.fake_share, export_locations=self.fake_el) self.fake_conf = manila.share.configuration.Configuration(None) self.fake_private_storage = mock.Mock() self.mock_object(rest.HSPRestBackend, "get_cluster", mock.Mock(return_value=fakes.hsp_cluster)) self._driver = driver.HitachiHSPDriver( configuration=self.fake_conf, private_storage=self.fake_private_storage) self._driver.backend_name = "HSP" self.mock_log = self.mock_object(driver, 'LOG') @ddt.data(None, exception.HSPBackendException( message="Duplicate NFS access rule exists.")) def test_update_access_add(self, add_rule): access = { 'access_type': 'ip', 'access_to': '172.24.10.10', 'access_level': 'rw', } access_list = [access] self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "get_share", mock.Mock(return_value=fakes.share)) self.mock_object(rest.HSPRestBackend, "add_access_rule", mock.Mock( side_effect=add_rule)) self._driver.update_access('context', self.fake_share_instance, [], access_list, []) self.assertTrue(self.mock_log.debug.called) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.get_share.assert_called_once_with( fakes.file_system['id']) rest.HSPRestBackend.add_access_rule.assert_called_once_with( fakes.share['id'], access['access_to'], (access['access_level'] == constants.ACCESS_LEVEL_RW)) def test_update_access_add_exception(self): access = { 'access_type': 'ip', 'access_to': '172.24.10.10', 'access_level': 'rw', } access_list = [access] self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "get_share", mock.Mock(return_value=fakes.share)) self.mock_object(rest.HSPRestBackend, "add_access_rule", mock.Mock(side_effect=exception.HSPBackendException( message="HSP Backend Exception: error adding " "rule."))) self.assertRaises(exception.HSPBackendException, self._driver.update_access, 'context', self.fake_share_instance, [], access_list, []) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.get_share.assert_called_once_with( fakes.file_system['id']) rest.HSPRestBackend.add_access_rule.assert_called_once_with( fakes.share['id'], access['access_to'], (access['access_level'] == constants.ACCESS_LEVEL_RW)) def test_update_access_recovery(self): access1 = { 'access_type': 'ip', 'access_to': '172.24.10.10', 'access_level': 'rw', } access2 = { 
'access_type': 'ip', 'access_to': '188.100.20.10', 'access_level': 'ro', } access_list = [access1, access2] self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "get_share", mock.Mock(return_value=fakes.share)) self.mock_object(rest.HSPRestBackend, "get_access_rules", mock.Mock(side_effect=[fakes.hsp_rules, []])) self.mock_object(rest.HSPRestBackend, "delete_access_rule") self.mock_object(rest.HSPRestBackend, "add_access_rule") self._driver.update_access('context', self.fake_share_instance, access_list, [], []) self.assertTrue(self.mock_log.debug.called) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.get_share.assert_called_once_with( fakes.file_system['id']) rest.HSPRestBackend.get_access_rules.assert_has_calls([ mock.call(fakes.share['id'])]) rest.HSPRestBackend.delete_access_rule.assert_called_once_with( fakes.share['id'], fakes.share['id'] + fakes.hsp_rules[0]['host-specification']) rest.HSPRestBackend.add_access_rule.assert_has_calls([ mock.call(fakes.share['id'], access1['access_to'], True), mock.call(fakes.share['id'], access2['access_to'], False) ], any_order=True) @ddt.data(None, exception.HSPBackendException( message="No matching access rule found.")) def test_update_access_delete(self, delete_rule): access1 = { 'access_type': 'ip', 'access_to': '172.24.44.200', 'access_level': 'rw', } access2 = { 'access_type': 'something', 'access_to': '188.100.20.10', 'access_level': 'ro', } delete_rules = [access1, access2] self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "get_share", mock.Mock(return_value=fakes.share)) self.mock_object(rest.HSPRestBackend, "delete_access_rule", mock.Mock(side_effect=delete_rule)) self.mock_object(rest.HSPRestBackend, "get_access_rules", mock.Mock(return_value=fakes.hsp_rules)) self._driver.update_access('context', self.fake_share_instance, [], [], delete_rules) self.assertTrue(self.mock_log.debug.called) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.get_share.assert_called_once_with( fakes.file_system['id']) rest.HSPRestBackend.delete_access_rule.assert_called_once_with( fakes.share['id'], fakes.hsp_rules[0]['name']) rest.HSPRestBackend.get_access_rules.assert_called_once_with( fakes.share['id']) def test_update_access_delete_exception(self): access1 = { 'access_type': 'ip', 'access_to': '172.24.10.10', 'access_level': 'rw', } access2 = { 'access_type': 'something', 'access_to': '188.100.20.10', 'access_level': 'ro', } delete_rules = [access1, access2] self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "get_share", mock.Mock(return_value=fakes.share)) self.mock_object(rest.HSPRestBackend, "delete_access_rule", mock.Mock(side_effect=exception.HSPBackendException( message="HSP Backend Exception: error deleting " "rule."))) self.mock_object(rest.HSPRestBackend, 'get_access_rules', mock.Mock(return_value=[])) self.assertRaises(exception.HSPBackendException, self._driver.update_access, 'context', self.fake_share_instance, [], [], delete_rules) self.assertTrue(self.mock_log.debug.called) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.get_share.assert_called_once_with( fakes.file_system['id']) 
rest.HSPRestBackend.delete_access_rule.assert_called_once_with( fakes.share['id'], fakes.share['id'] + access1['access_to']) rest.HSPRestBackend.get_access_rules.assert_called_once_with( fakes.share['id']) @ddt.data(True, False) def test_update_access_ip_exception(self, is_recovery): access = { 'access_type': 'something', 'access_to': '172.24.10.10', 'access_level': 'rw', } access_list = [access] self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "get_share", mock.Mock(return_value=fakes.share)) self.mock_object(rest.HSPRestBackend, "get_access_rules", mock.Mock(return_value=fakes.hsp_rules)) if is_recovery: access_args = [access_list, [], []] else: access_args = [[], access_list, []] self.assertRaises(exception.InvalidShareAccess, self._driver.update_access, 'context', self.fake_share_instance, *access_args) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.get_share.assert_called_once_with( fakes.file_system['id']) if is_recovery: rest.HSPRestBackend.get_access_rules.assert_called_once_with( fakes.share['id']) def test_update_access_not_found_exception(self): access_list = [] self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock( side_effect=exception.HSPItemNotFoundException(msg='fake'))) self.assertRaises(exception.ShareResourceNotFound, self._driver.update_access, 'context', self.fake_share_instance, access_list, [], []) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) def test_create_share(self): self.mock_object(rest.HSPRestBackend, "add_file_system", mock.Mock()) self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "add_share", mock.Mock()) result = self._driver.create_share('context', self.fake_share_instance) self.assertEqual(self.fake_el, result) self.assertTrue(self.mock_log.debug.called) rest.HSPRestBackend.add_file_system.assert_called_once_with( self.fake_share_instance['id'], self.fake_share_instance['size'] * units.Gi) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.add_share.assert_called_once_with( self.fake_share_instance['id'], fakes.file_system['id']) def test_create_share_export_error(self): self.mock_object(rest.HSPRestBackend, "add_file_system", mock.Mock()) self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "add_share", mock.Mock( side_effect=exception.HSPBackendException(msg='fake'))) self.mock_object(rest.HSPRestBackend, "delete_file_system", mock.Mock()) self.assertRaises(exception.HSPBackendException, self._driver.create_share, 'context', self.fake_share_instance) self.assertTrue(self.mock_log.debug.called) self.assertTrue(self.mock_log.exception.called) rest.HSPRestBackend.add_file_system.assert_called_once_with( self.fake_share_instance['id'], self.fake_share_instance['size'] * units.Gi) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.add_share.assert_called_once_with( self.fake_share_instance['id'], fakes.file_system['id']) rest.HSPRestBackend.delete_file_system.assert_called_once_with( fakes.file_system['id']) def test_create_share_invalid_share_protocol(self): self.assertRaises(exception.InvalidShare, self._driver.create_share, 'context', 
fakes.invalid_share) @ddt.data(None, exception.HSPBackendException( message="No matching access rule found.")) def test_delete_share(self, delete_rule): self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "get_share", mock.Mock(return_value=fakes.share)) self.mock_object(rest.HSPRestBackend, "delete_share") self.mock_object(rest.HSPRestBackend, "delete_file_system") self.mock_object(rest.HSPRestBackend, "get_access_rules", mock.Mock(return_value=[fakes.hsp_rules[0]])) self.mock_object(rest.HSPRestBackend, "delete_access_rule", mock.Mock( side_effect=[exception.HSPBackendException( message="No matching access rule found."), delete_rule])) self._driver.delete_share('context', self.fake_share_instance) self.assertTrue(self.mock_log.debug.called) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.get_share.assert_called_once_with( fakes.file_system['id']) rest.HSPRestBackend.delete_share.assert_called_once_with( fakes.share['id']) rest.HSPRestBackend.delete_file_system.assert_called_once_with( fakes.file_system['id']) rest.HSPRestBackend.get_access_rules.assert_called_once_with( fakes.share['id']) rest.HSPRestBackend.delete_access_rule.assert_called_once_with( fakes.share['id'], fakes.hsp_rules[0]['name']) def test_delete_share_rule_exception(self): self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "get_share", mock.Mock(return_value=fakes.share)) self.mock_object(rest.HSPRestBackend, "get_access_rules", mock.Mock(return_value=[fakes.hsp_rules[0]])) self.mock_object(rest.HSPRestBackend, "delete_access_rule", mock.Mock(side_effect=exception.HSPBackendException( message="Internal Server Error."))) self.assertRaises(exception.HSPBackendException, self._driver.delete_share, 'context', self.fake_share_instance) self.assertTrue(self.mock_log.debug.called) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.get_share.assert_called_once_with( fakes.file_system['id']) rest.HSPRestBackend.get_access_rules.assert_called_once_with( fakes.share['id']) rest.HSPRestBackend.delete_access_rule.assert_called_once_with( fakes.share['id'], fakes.hsp_rules[0]['name']) def test_delete_share_already_deleted(self): self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock( side_effect=exception.HSPItemNotFoundException(msg='fake'))) self.mock_object(driver.LOG, "info") self._driver.delete_share('context', self.fake_share_instance) self.assertTrue(self.mock_log.info.called) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) def test_extend_share(self): new_size = 2 self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "resize_file_system", mock.Mock()) self._driver.extend_share(self.fake_share_instance, new_size) self.assertTrue(self.mock_log.info.called) rest.HSPRestBackend.get_cluster.assert_called_once_with() rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.resize_file_system.assert_called_once_with( fakes.file_system['id'], new_size * units.Gi) def test_extend_share_with_no_available_space_in_fs(self): new_size = 150 self.assertRaises(exception.HSPBackendException, self._driver.extend_share, self.fake_share_instance, new_size) 
rest.HSPRestBackend.get_cluster.assert_called_once_with() def test_shrink_share(self): new_size = 70 self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.mock_object(rest.HSPRestBackend, "resize_file_system", mock.Mock()) self._driver.shrink_share(self.fake_share_instance, new_size) self.assertTrue(self.mock_log.info.called) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) rest.HSPRestBackend.resize_file_system.assert_called_once_with( fakes.file_system['id'], new_size * units.Gi) def test_shrink_share_new_size_lower_than_usage(self): new_size = 20 self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) self.assertRaises(exception.ShareShrinkingPossibleDataLoss, self._driver.shrink_share, self.fake_share_instance, new_size) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) def test_manage_existing(self): self.mock_object(self.fake_private_storage, "update") self.mock_object(rest.HSPRestBackend, "get_share", mock.Mock(return_value=fakes.share)) self.mock_object(rest.HSPRestBackend, "rename_file_system", mock.Mock()) self.mock_object(rest.HSPRestBackend, "get_file_system", mock.Mock(return_value=fakes.file_system)) result = self._driver.manage_existing(self.fake_share_instance, 'option') expected = { 'size': fakes.file_system['properties']['quota'] / units.Gi, 'export_locations': self.fake_el, } self.assertTrue(self.mock_log.info.called) self.assertEqual(expected, result) rest.HSPRestBackend.get_share.assert_called_once_with( name=self.fake_share_instance['id']) rest.HSPRestBackend.rename_file_system.assert_called_once_with( fakes.file_system['id'], self.fake_share_instance['id']) rest.HSPRestBackend.get_file_system.assert_called_once_with( self.fake_share_instance['id']) def test_manage_existing_wrong_share_id(self): self.mock_object(rest.HSPRestBackend, "get_share", mock.Mock( side_effect=exception.HSPItemNotFoundException(msg='fake'))) self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, self.fake_share_instance, 'option') rest.HSPRestBackend.get_share.assert_called_once_with( name=self.fake_share_instance['id']) def test_unmanage(self): self.mock_object(self.fake_private_storage, "get", mock.Mock( return_value='original_name')) self.mock_object(self.fake_private_storage, "delete") self._driver.unmanage(self.fake_share_instance) self.assertTrue(self.mock_log.info.called) def test__update_share_stats(self): mock__update_share_stats = self.mock_object( manila.share.driver.ShareDriver, '_update_share_stats') self.mock_object(self.fake_private_storage, 'get', mock.Mock( return_value={'provisioned': 0} )) self._driver._update_share_stats() rest.HSPRestBackend.get_cluster.assert_called_once_with() mock__update_share_stats.assert_called_once_with(fakes.stats_data) self.assertTrue(self.mock_log.info.called) def test_get_default_filter_function(self): expected = "share.size >= 128" actual = self._driver.get_default_filter_function() self.assertEqual(expected, actual) manila-10.0.0/manila/tests/share/drivers/hitachi/hnas/0000775000175000017500000000000013656750362022615 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/hitachi/hnas/__init__.py0000664000175000017500000000000013656750227024714 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/share/drivers/hitachi/hnas/test_ssh.py0000664000175000017500000020014613656750227025026 0ustar zuulzuul00000000000000# Copyright 
(c) 2015 Hitachi Data Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time from unittest import mock import ddt from oslo_concurrency import processutils as putils from oslo_config import cfg import paramiko import six from manila import exception from manila.share.drivers.hitachi.hnas import ssh from manila import test from manila import utils as mutils CONF = cfg.CONF HNAS_RESULT_empty = "" HNAS_RESULT_limits = """ Filesystem Ensure on span fake_fs: Current capacity 50GiB Thin provision: disabled Filesystem is confined to: 100GiB (Run 'filesystem-confine') Free space on span allows expansion to: 143GiB (Run 'span-expand') Chunk size allows growth to: 1069GiB (This is a conservative estimate) Largest filesystem that can be checked: 262144GiB (This is a hard limit) This server model allows growth to: 262144GiB (Upgrade the server) """ HNAS_RESULT_expdel = """Deleting the export '/dir1' on fs 'fake_fs'... NFS Export Delete: Export successfully deleted""" HNAS_RESULT_vvoldel = """ Warning: Clearing dangling space trackers from empty vivol""" HNAS_RESULT_selectfs = "Current selected file system: fake_fs, number(1)" HNAS_RESULT_expadd = "NFS Export Add: Export added successfully" HNAS_RESULT_vvol = """vvol_test email : root : /vvol_test tag : 39 usage bytes : 0 B files: 1 last modified: 2015-06-23 22:36:12.830698800+00:00""" HNAS_RESULT_vvol_error = "The virtual volume does not exist." HNAS_RESULT_mount = """ \ Request to mount file system fake_fs submitted successfully. 
File system fake_fs successfully mounted.""" HNAS_RESULT_quota = """Type : Explicit Target : ViVol: vvol_test Usage : 1 GB Limit : 5 GB (Hard) Warning : Unset Critical : Unset Reset : 5% (51.2 MB) File Count : 1 Limit : Unset Warning : Unset Critical : Unset Reset : 5% (0) Generate Events : Disabled Global id : 28a3c9f8-ae05-11d0-9025-836896aada5d Last modified : 2015-06-23 22:37:17.363660800+00:00 """ HNAS_RESULT_quota_tb = """Type : Explicit Target : ViVol: vvol_test Usage : 1 TB Limit : 1 TB (Hard) Warning : Unset Critical : Unset Reset : 5% (51.2 MB) File Count : 1 Limit : Unset Warning : Unset Critical : Unset Reset : 5% (0) Generate Events : Disabled Global id : 28a3c9f8-ae05-11d0-9025-836896aada5d Last modified : 2015-06-23 22:37:17.363660800+00:00 """ HNAS_RESULT_quota_mb = """Type : Explicit Target : ViVol: vvol_test Usage : 20 MB Limit : 500 MB (Hard) Warning : Unset Critical : Unset Reset : 5% (51.2 MB) File Count : 1 Limit : Unset Warning : Unset Critical : Unset Reset : 5% (0) Generate Events : Disabled Global id : 28a3c9f8-ae05-11d0-9025-836896aada5d Last modified : 2015-06-23 22:37:17.363660800+00:00 """ HNAS_RESULT_quota_unset = """Type : Explicit Target : ViVol: vvol_test Usage : 0 B Limit : Unset Warning : Unset Critical : Unset Reset : 5% (51.2 MB) File Count : 1 Limit : Unset Warning : Unset Critical : Unset Reset : 5% (0) Generate Events : Disabled Global id : 28a3c9f8-ae05-11d0-9025-836896aada5d Last modified : 2015-06-23 22:37:17.363660800+00:00 """ HNAS_RESULT_quota_err = """No quotas matching specified filter criteria. """ HNAS_RESULT_export = """Export name: vvol_test Export path: /vvol_test File system label: file_system File system size: 3.969 GB File system free space: 1.848 GB File system state: formatted = Yes mounted = Yes failed = No thin provisioned = No Access snapshots: No Display snapshots: No Read Caching: Disabled Disaster recovery setting: Recovered = No Transfer setting = Use file system default \n Export configuration:\n 127.0.0.2 """ HNAS_RESULT_wrong_export = """Export name: wrong_name Export path: /vvol_test File system label: file_system File system size: 3.969 GB File system free space: 1.848 GB File system state: formatted = Yes mounted = Yes failed = No thin provisioned = No Access snapshots: No Display snapshots: No Read Caching: Disabled Disaster recovery setting: Recovered = No Transfer setting = Use file system default Export configuration: 127.0.0.1""" HNAS_RESULT_exp_no_fs = """ Export name: no_fs Export path: /export_without_fs File system info: *** not available *** Access snapshots: Yes Display snapshots: Yes Read Caching: Disabled Disaster recovery setting: Recovered = No Transfer setting = Use file system default Export configuration: """ HNAS_RESULT_export_ip = """ Export name: vvol_test Export path: /vvol_test File system label: fake_fs File system size: 3.969 GB File system free space: 1.848 GB File system state: formatted = Yes mounted = Yes failed = No thin provisioned = No Access snapshots: No Display snapshots: No Read Caching: Disabled Disaster recovery setting: Recovered = No Transfer setting = Use file system default Export configuration: 127.0.0.1(rw) """ HNAS_RESULT_export_ip2 = """ Export name: vvol_test Export path: /vvol_test File system label: fake_fs File system size: 3.969 GB File system free space: 1.848 GB File system state: formatted = Yes mounted = Yes failed = No thin provisioned = No Access snapshots: No Display snapshots: No Read Caching: Disabled Disaster recovery setting: Recovered = No Transfer setting 
= Use file system default Export configuration: 127.0.0.1(ro) """ HNAS_RESULT_expmod = """Modifying the export '/fake_export' on fs 'fake_fs'... NFS Export Modify: changing configuration options to: 127.0.0.2 NFS Export Modify: Export modified successfully""" HNAS_RESULT_expnotmod = "Export not modified." HNAS_RESULT_job = """tree-operation-job-submit: Request submitted successfully. tree-operation-job-submit: Job id = d933100a-b5f6-11d0-91d9-836896aada5d""" HNAS_RESULT_vvol_list = """vol1 email : root : /shares/vol1 tag : 10 usage bytes : 0 B files: 1 last modified: 2015-07-27 22:25:02.746426000+00:00 vol2 email : root : /shares/vol2 tag : 13 usage bytes : 0 B files: 1 last modified: 2015-07-28 01:30:21.125671700+00:00 vol3 email : root : /shares/vol3 tag : 14 usage bytes : 5 GB (5368709120 B) files: 2 last modified: 2015-07-28 20:23:05.672404600+00:00""" HNAS_RESULT_tree_job_status_fail = """JOB ID : d933100a-b5f6-11d0-91d9-836896aada5d Job request Physical node : 1 EVS : 1 Volume number : 1 File system id : 2ea361c20ed0f80d0000000000000000 File system name : fs1 Source path : "/foo" Creation time : 2013-09-05 23:16:48-07:00 Destination path : "/clone/bar" Ensure destination path exists : true Job state : Job failed Job info Started : 2013-09-05 23:16:48-07:00 Ended : 2013-09-05 23:17:02-07:00 Status : Success Error details : Directories processed : 220 Files processed : 910 Data bytes processed : 34.5 MB (36174754 B) Source directories missing : 0 Source files missing : 0 Source files skipped : 801 Skipping details : 104 symlinks, 452 hard links, 47 block special devices, 25 character devices""" HNAS_RESULT_job_completed = """JOB ID : ab4211b8-aac8-11ce-91af-39e0822ea368 Job request Physical node : 1 EVS : 1 Volume number : 1 File system id : 2ea361c20ed0f80d0000000000000000 File system name : fs1 Source path : "/foo" Creation time : 2013-09-05 23:16:48-07:00 Destination path : "/clone/bar" Ensure destination path exists : true Job state : Job was completed Job info Started : 2013-09-05 23:16:48-07:00 Ended : 2013-09-05 23:17:02-07:00 Status : Success Error details : Directories processed : 220 Files processed : 910 Data bytes processed : 34.5 MB (36174754 B) Source directories missing : 0 Source files missing : 0 Source files skipped : 801 Skipping details : 104 symlinks, 452 hard links, 47 \ block special devices, 25 character devices """ HNAS_RESULT_job_running = """JOB ID : ab4211b8-aac8-11ce-91af-39e0822ea368 Job request Physical node : 1 EVS : 1 Volume number : 1 File system id : 2ea361c20ed0f80d0000000000000000 File system name : fs1 Source path : "/foo" Creation time : 2013-09-05 23:16:48-07:00 Destination path : "/clone/bar" Ensure destination path exists : true Job state : Job is running Job info Started : 2013-09-05 23:16:48-07:00 Ended : 2013-09-05 23:17:02-07:00 Status : Success Error details : Directories processed : 220 Files processed : 910 Data bytes processed : 34.5 MB (36174754 B) Source directories missing : 0 Source files missing : 0 Source files skipped : 801 Skipping details : 104 symlinks, 452 hard links, 47 \ block special devices, 25 character devices """ HNAS_RESULT_df = """ ID Label EVS Size Used Snapshots Deduped \ Avail Thin ThinSize ThinAvail FS Type ---- ------------- --- -------- -------------- --------- ------- \ ------------- ---- -------- --------- ------------------- 1051 FS-ManilaDev1 3 70.00 GB 10.00 GB (75%) 0 B (0%) NA \ 18.3 GB (25%) No 4 KB,WFS-2,128 DSBs """ HNAS_RESULT_df_tb = """ ID Label EVS Size Used Snapshots Deduped \ Avail Thin ThinSize 
ThinAvail FS Type ---- ------------- --- -------- -------------- --------- ------- \ ------------- ---- -------- --------- ------------------- 1051 FS-ManilaDev1 3.00 7.00 TB 2 TB (75%) 0 B (0%) NA \ 18.3 GB (25%) No 4 KB,WFS-2,128 DSBs """ HNAS_RESULT_df_dedupe_on = """ ID Label EVS Size Used Snapshots Deduped \ Avail Thin ThinSize ThinAvail FS Type ---- ------------- --- -------- -------------- --------- ------- \ ------------- ---- -------- --------- ------------------- 1051 FS-ManilaDev1 3.00 7.00 TB 2 TB (75%) NA 0 B (0%) \ 18.3 GB (25%) No 4 KB,WFS-2,128 DSBs,dedupe enabled """ HNAS_RESULT_df_unmounted = """ ID Label EVS Size Used Snapshots Deduped \ Avail Thin ThinSize ThinAvail FS Type ---- ------------- --- -------- -------------- --------- ------- \ ------------- ---- -------- --------- ------------------- 1051 FS-ManilaDev1 3 70.00 GB Not mounted 0 B (0%) NA \ 18.3 GB (25%) No 4 KB,WFS-2,128 DSBs """ HNAS_RESULT_df_error = """File system file_system not found""" HNAS_RESULT_mounted_filesystem = """ file_system 1055 fake_span Mount 2 4 5 1 """ HNAS_RESULT_unmounted_filesystem = """ file_system 1055 fake_span Umount 2 4 5 1 """ HNAS_RESULT_cifs_list = """ Share name: vvol_test Share path: \\\\shares\\vvol_test Share users: 2 Share online: Yes Share comment: Cache options: Manual local caching for documents ABE enabled: Yes Continuous Availability: No Access snapshots: No Display snapshots: No ShadowCopy enabled: Yes Lower case on create: No Follow symlinks: Yes Follow global symlinks: No Scan for viruses: Yes File system label: file_system File system size: 9.938 GB File system free space: 6.763 GB File system state: formatted = Yes mounted = Yes failed = No thin provisioned = No Disaster recovery setting: Recovered = No Transfer setting = Use file system default Home directories: Off Mount point options: """ HNAS_RESULT_different_fs_cifs_list = """ Share name: vvol_test Share path: \\\\shares\\vvol_test Share users: 0 Share online: Yes Share comment: Cache options: Manual local caching for documents ABE enabled: Yes Continuous Availability: No Access snapshots: No Display snapshots: No ShadowCopy enabled: Yes Lower case on create: No Follow symlinks: Yes Follow global symlinks: No Scan for viruses: Yes File system label: different_filesystem File system size: 9.938 GB File system free space: 6.763 GB File system state: formatted = Yes mounted = Yes failed = No thin provisioned = No Disaster recovery setting: Recovered = No Transfer setting = Use file system default Home directories: Off Mount point options: """ HNAS_RESULT_list_cifs_permissions = """ \ Displaying the details of the share 'vvol_test' on file system 'filesystem' ... Maximum user count is unlimited Type Permission User/Group U Deny Read NFSv4 user\\user1@domain.com G Deny Change & Read Unix user\\1087 U Allow Full Control Unix user\\1088 U Allow Read Unix user\\1089 ? 
Deny Full Control NFSv4 user\\user2@company.com X Allow Change & Read Unix user\\1090 """ HNAS_RESULT_check_snap_error = """ \ path-to-object-number/FS-TestCG: Unable to locate component: share1 path-to-object-number/FS-TestCG: Failed to resolve object number""" @ddt.ddt class HNASSSHTestCase(test.TestCase): def setUp(self): super(HNASSSHTestCase, self).setUp() self.ip = '192.168.1.1' self.port = 22 self.user = 'hnas_user' self.password = 'hnas_password' self.default_commands = ['ssc', '127.0.0.1'] self.fs_name = 'file_system' self.evs_ip = '172.24.44.1' self.evs_id = 2 self.ssh_private_key = 'private_key' self.cluster_admin_ip0 = 'fake' self.job_timeout = 30 self.mock_log = self.mock_object(ssh, 'LOG') self._driver_ssh = ssh.HNASSSHBackend(self.ip, self.user, self.password, self.ssh_private_key, self.cluster_admin_ip0, self.evs_id, self.evs_ip, self.fs_name, self.job_timeout) self.vvol = { 'id': 'vvol_test', 'share_proto': 'nfs', 'size': 4, 'host': '127.0.0.1', } self.snapshot = { 'id': 'snapshot_test', 'share_proto': 'nfs', 'size': 4, 'share_id': 'vvol_test', 'host': 'ubuntu@hitachi2#HITACHI2', } self.mock_log.debug.reset_mock() def test_get_stats(self): fake_list_command = ['df', '-a', '-f', self.fs_name] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock(return_value=(HNAS_RESULT_df_tb, ""))) total, free, dedupe = self._driver_ssh.get_stats() ssh.HNASSSHBackend._execute.assert_called_with(fake_list_command) self.assertEqual(7168.0, total) self.assertEqual(5120.0, free) self.assertFalse(dedupe) def test_get_stats_dedupe_on(self): fake_list_command = ['df', '-a', '-f', self.fs_name] self.mock_object( ssh.HNASSSHBackend, '_execute', mock.Mock(return_value=(HNAS_RESULT_df_dedupe_on, ""))) total, free, dedupe = self._driver_ssh.get_stats() ssh.HNASSSHBackend._execute.assert_called_with(fake_list_command) self.assertEqual(7168.0, total) self.assertEqual(5120.0, free) self.assertTrue(dedupe) def test_get_stats_error(self): fake_list_command = ['df', '-a', '-f', self.fs_name] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=putils.ProcessExecutionError)) self.assertRaises(exception.HNASBackendException, self._driver_ssh.get_stats) ssh.HNASSSHBackend._execute.assert_called_with(fake_list_command) @ddt.data(True, False) def test_nfs_export_add(self, is_snapshot): if is_snapshot: name = '/snapshots/fake_snap' path = '/snapshots/fake_share/fake_snap' else: name = path = '/shares/fake_share' fake_nfs_command = ['nfs-export', 'add', '-S', 'disable', '-c', '127.0.0.1', name, self.fs_name, path] self.mock_object(ssh.HNASSSHBackend, '_execute') if is_snapshot: self._driver_ssh.nfs_export_add('fake_share', snapshot_id='fake_snap') else: self._driver_ssh.nfs_export_add('fake_share') self._driver_ssh._execute.assert_called_with(fake_nfs_command) def test_nfs_export_add_error(self): self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError(stderr='')])) self.assertRaises(exception.HNASBackendException, self._driver_ssh.nfs_export_add, 'vvol_test') self.assertTrue(self.mock_log.exception.called) @ddt.data(True, False) def test_nfs_export_del(self, is_snapshot): if is_snapshot: name = '/snapshots/vvol_test' args = {'snapshot_id': 'vvol_test'} else: name = '/shares/vvol_test' args = {'share_id': 'vvol_test'} fake_nfs_command = ['nfs-export', 'del', name] self.mock_object(ssh.HNASSSHBackend, '_execute') self._driver_ssh.nfs_export_del(**args) self._driver_ssh._execute.assert_called_with(fake_nfs_command) def 
test_nfs_export_del_inexistent_export(self): self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError( stderr='does not exist')])) self._driver_ssh.nfs_export_del('vvol_test') self.assertTrue(self.mock_log.warning.called) def test_nfs_export_del_exception(self): self.assertRaises(exception.HNASBackendException, self._driver_ssh.nfs_export_del) def test_nfs_export_del_execute_error(self): self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError(stderr='')])) self.assertRaises(exception.HNASBackendException, self._driver_ssh.nfs_export_del, 'vvol_test') self.assertTrue(self.mock_log.exception.called) @ddt.data(True, False) def test_cifs_share_add(self, is_snapshot): if is_snapshot: name = 'fake_snap' path = r'\\snapshots\\fake_share\\fake_snap' else: name = 'fake_share' path = r'\\shares\\fake_share' fake_cifs_add_command = ['cifs-share', 'add', '-S', 'disable', '--enable-abe', '--nodefaultsaa', name, self.fs_name, path] self.mock_object(ssh.HNASSSHBackend, '_execute') if is_snapshot: self._driver_ssh.cifs_share_add('fake_share', snapshot_id='fake_snap') else: self._driver_ssh.cifs_share_add('fake_share') self._driver_ssh._execute.assert_called_with(fake_cifs_add_command) def test_cifs_share_add_error(self): self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError(stderr='')])) self.assertRaises(exception.HNASBackendException, self._driver_ssh.cifs_share_add, 'vvol_test') self.assertTrue(self.mock_log.exception.called) def test_cifs_share_del(self): fake_cifs_del_command = ['cifs-share', 'del', '--target-label', self.fs_name, 'vvol_test'] self.mock_object(ssh.HNASSSHBackend, '_execute') self._driver_ssh.cifs_share_del('vvol_test') self._driver_ssh._execute.assert_called_with(fake_cifs_del_command) def test_cifs_share_del_inexistent_share(self): fake_cifs_del_command = ['cifs-share', 'del', '--target-label', self.fs_name, 'vvol_test'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=putils.ProcessExecutionError( exit_code=1))) self._driver_ssh.cifs_share_del('vvol_test') self._driver_ssh._execute.assert_called_with(fake_cifs_del_command) self.assertTrue(self.mock_log.warning.called) def test_cifs_share_del_exception(self): fake_cifs_del_command = ['cifs-share', 'del', '--target-label', self.fs_name, 'vvol_test'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=putils.ProcessExecutionError)) self.assertRaises(exception.HNASBackendException, self._driver_ssh.cifs_share_del, 'vvol_test') self._driver_ssh._execute.assert_called_with(fake_cifs_del_command) def test_get_nfs_host_list(self): self.mock_object(ssh.HNASSSHBackend, "_get_export", mock.Mock( return_value=[ssh.Export(HNAS_RESULT_export)])) host_list = self._driver_ssh.get_nfs_host_list('fake_id') self.assertEqual(['127.0.0.2'], host_list) def test_update_nfs_access_rule_empty_host_list(self): fake_export_command = ['nfs-export', 'mod', '-c', '127.0.0.1', '/snapshots/fake_id'] self.mock_object(ssh.HNASSSHBackend, "_execute") self._driver_ssh.update_nfs_access_rule([], snapshot_id="fake_id") self._driver_ssh._execute.assert_called_with(fake_export_command) def test_update_nfs_access_rule(self): fake_export_command = ['nfs-export', 'mod', '-c', u'"127.0.0.1,127.0.0.2"', '/shares/fake_id'] self.mock_object(ssh.HNASSSHBackend, "_execute") self._driver_ssh.update_nfs_access_rule(['127.0.0.1', '127.0.0.2'], share_id="fake_id") 
self._driver_ssh._execute.assert_called_with(fake_export_command) def test_update_nfs_access_rule_exception_no_share_provided(self): self.assertRaises(exception.HNASBackendException, self._driver_ssh.update_nfs_access_rule, ['127.0.0.1']) def test_update_nfs_access_rule_exception_error(self): fake_export_command = ['nfs-export', 'mod', '-c', u'"127.0.0.1,127.0.0.2"', '/shares/fake_id'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=putils.ProcessExecutionError)) self.assertRaises(exception.HNASBackendException, self._driver_ssh.update_nfs_access_rule, ['127.0.0.1', '127.0.0.2'], share_id="fake_id") self._driver_ssh._execute.assert_called_with(fake_export_command) def test_cifs_allow_access(self): fake_cifs_allow_command = ['cifs-saa', 'add', '--target-label', self.fs_name, 'vvol_test', 'fake_user', 'ar'] self.mock_object(ssh.HNASSSHBackend, '_execute') self._driver_ssh.cifs_allow_access('vvol_test', 'fake_user', 'ar') self._driver_ssh._execute.assert_called_with(fake_cifs_allow_command) @ddt.data(True, False) def test_cifs_allow_access_already_allowed_user(self, is_snapshot): fake_cifs_allow_command = ['cifs-saa', 'add', '--target-label', self.fs_name, 'vvol_test', 'fake_user', 'acr'] if not is_snapshot: fake_cifs_allow_command2 = ['cifs-saa', 'change', '--target-label', 'file_system', 'vvol_test', 'fake_user', 'acr'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=[putils.ProcessExecutionError( stderr='already listed as a user'), "Rule modified."])) self._driver_ssh.cifs_allow_access('vvol_test', 'fake_user', 'acr', is_snapshot=is_snapshot) _execute_calls = [mock.call(fake_cifs_allow_command)] if not is_snapshot: _execute_calls.append(mock.call(fake_cifs_allow_command2)) self._driver_ssh._execute.assert_has_calls(_execute_calls) self.assertTrue(self.mock_log.debug.called) @ddt.data(True, False) def test_cifs_allow_access_exception(self, is_snapshot): fake_cifs_allow_command = ['cifs-saa', 'add', '--target-label', self.fs_name, 'vvol_test', 'fake_user', 'acr'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=[putils.ProcessExecutionError( stderr='Could not add user/group fake_user to ' 'share \'vvol_test\'')])) self.assertRaises(exception.HNASBackendException, self._driver_ssh.cifs_allow_access, 'vvol_test', 'fake_user', 'acr', is_snapshot=is_snapshot) self._driver_ssh._execute.assert_called_with(fake_cifs_allow_command) def test_cifs_update_access_level_exception(self): fake_cifs_allow_command = ['cifs-saa', 'add', '--target-label', self.fs_name, 'vvol_test', 'fake_user', 'acr'] fake_cifs_allow_command2 = ['cifs-saa', 'change', '--target-label', 'file_system', 'vvol_test', 'fake_user', 'acr'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=[putils.ProcessExecutionError( stderr='already listed as a user'), putils.ProcessExecutionError( stderr='Error when trying to modify rule.')])) self.assertRaises(exception.HNASBackendException, self._driver_ssh.cifs_allow_access, 'vvol_test', 'fake_user', 'acr') self._driver_ssh._execute.assert_has_calls( [mock.call(fake_cifs_allow_command), mock.call(fake_cifs_allow_command2)]) self.assertTrue(self.mock_log.debug.called) def test_cifs_deny_access(self): fake_cifs_deny_command = ['cifs-saa', 'delete', '--target-label', self.fs_name, 'vvol_test', 'fake_user'] self.mock_object(ssh.HNASSSHBackend, '_execute') self._driver_ssh.cifs_deny_access('vvol_test', 'fake_user') self._driver_ssh._execute.assert_called_with(fake_cifs_deny_command) @ddt.data(True, False) def 
test_cifs_deny_access_already_deleted_user(self, is_snapshot): fake_cifs_deny_command = ['cifs-saa', 'delete', '--target-label', self.fs_name, 'vvol_test', 'fake_user'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError( stderr='not listed as a user')])) self._driver_ssh.cifs_deny_access('vvol_test', 'fake_user', is_snapshot=is_snapshot) self._driver_ssh._execute.assert_called_with(fake_cifs_deny_command) self.assertTrue(self.mock_log.warning.called) def test_cifs_deny_access_backend_exception(self): fake_cifs_deny_command = ['cifs-saa', 'delete', '--target-label', self.fs_name, 'vvol_test', 'fake_user'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=[putils.ProcessExecutionError( stderr='Unexpected error')])) self.assertRaises(exception.HNASBackendException, self._driver_ssh.cifs_deny_access, 'vvol_test', 'fake_user') self._driver_ssh._execute.assert_called_with(fake_cifs_deny_command) def test_list_cifs_permission(self): fake_cifs_list_command = ['cifs-saa', 'list', '--target-label', self.fs_name, 'vvol_test'] expected_out = ssh.CIFSPermissions(HNAS_RESULT_list_cifs_permissions) self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( return_value=(HNAS_RESULT_list_cifs_permissions, ''))) out = self._driver_ssh.list_cifs_permissions('vvol_test') for i in range(len(expected_out.permission_list)): self.assertEqual(expected_out.permission_list[i], out[i]) self._driver_ssh._execute.assert_called_with(fake_cifs_list_command) def test_list_cifs_no_permissions_added(self): fake_cifs_list_command = ['cifs-saa', 'list', '--target-label', self.fs_name, 'vvol_test'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError( stderr='No entries for this share')])) out = self._driver_ssh.list_cifs_permissions('vvol_test') self.assertEqual([], out) self._driver_ssh._execute.assert_called_with(fake_cifs_list_command) self.assertTrue(self.mock_log.debug.called) def test_list_cifs_exception(self): fake_cifs_list_command = ['cifs-saa', 'list', '--target-label', self.fs_name, 'vvol_test'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError( stderr='Error.')])) self.assertRaises(exception.HNASBackendException, self._driver_ssh.list_cifs_permissions, "vvol_test") self._driver_ssh._execute.assert_called_with(fake_cifs_list_command) self.assertTrue(self.mock_log.exception.called) def test_tree_clone_nothing_to_clone(self): fake_tree_clone_command = ['tree-clone-job-submit', '-e', '-f', self.fs_name, '/src', '/dst'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=[putils.ProcessExecutionError( stderr='Cannot find any clonable files in the source directory' )])) self.assertRaises(exception.HNASNothingToCloneException, self._driver_ssh.tree_clone, "/src", "/dst") self._driver_ssh._execute.assert_called_with(fake_tree_clone_command) def test_tree_clone_error_cloning(self): fake_tree_clone_command = ['tree-clone-job-submit', '-e', '-f', self.fs_name, '/src', '/dst'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=[putils.ProcessExecutionError(stderr='')])) self.assertRaises(exception.HNASBackendException, self._driver_ssh.tree_clone, "/src", "/dst") self._driver_ssh._execute.assert_called_with(fake_tree_clone_command) self.assertTrue(self.mock_log.exception.called) def test_tree_clone(self): fake_tree_clone_command = ['tree-clone-job-submit', '-e', '-f', self.fs_name, '/src', '/dst'] 
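# tree_clone is expected to submit the clone job and then poll the backend for its status: the
# two-element side_effect below emulates the submit acknowledgement (HNAS_RESULT_job) followed
# by a completed-job report (HNAS_RESULT_job_completed). The failure and timeout variants that
# follow swap in a failed status or repeated 'running' output with time.time/time.sleep mocked.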
self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=[(HNAS_RESULT_job, ''), (HNAS_RESULT_job_completed, '')])) self._driver_ssh.tree_clone("/src", "/dst") self._driver_ssh._execute.assert_any_call(fake_tree_clone_command) self.assertTrue(self.mock_log.debug.called) def test_tree_clone_job_failed(self): fake_tree_clone_command = ['tree-clone-job-submit', '-e', '-f', self.fs_name, '/src', '/dst'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=[(HNAS_RESULT_job, ''), (HNAS_RESULT_tree_job_status_fail, '')])) self.assertRaises(exception.HNASBackendException, self._driver_ssh.tree_clone, "/src", "/dst") self._driver_ssh._execute.assert_any_call(fake_tree_clone_command) self.assertTrue(self.mock_log.error.called) def test_tree_clone_job_timeout(self): fake_tree_clone_command = ['tree-clone-job-submit', '-e', '-f', self.fs_name, '/src', '/dst'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=[(HNAS_RESULT_job, ''), (HNAS_RESULT_job_running, ''), (HNAS_RESULT_job_running, ''), (HNAS_RESULT_job_running, ''), (HNAS_RESULT_empty, '')])) self.mock_object(time, "time", mock.Mock(side_effect=[0, 0, 200, 200])) self.mock_object(time, "sleep") self.assertRaises(exception.HNASBackendException, self._driver_ssh.tree_clone, "/src", "/dst") self._driver_ssh._execute.assert_any_call(fake_tree_clone_command) self.assertTrue(self.mock_log.error.called) def test_tree_delete_path_does_not_exist(self): fake_tree_delete_command = ['tree-delete-job-submit', '--confirm', '-f', self.fs_name, '/path'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=[putils.ProcessExecutionError( stderr='Source path: Cannot access')] )) self._driver_ssh.tree_delete("/path") self.assertTrue(self.mock_log.warning.called) self._driver_ssh._execute.assert_called_with(fake_tree_delete_command) def test_tree_delete_error(self): fake_tree_delete_command = ['tree-delete-job-submit', '--confirm', '-f', self.fs_name, '/path'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=[putils.ProcessExecutionError( stderr='')] )) self.assertRaises(exception.HNASBackendException, self._driver_ssh.tree_delete, "/path") self.assertTrue(self.mock_log.exception.called) self._driver_ssh._execute.assert_called_with(fake_tree_delete_command) def test_create_directory(self): locked_selectfs_args = ['create', '/path'] self.mock_object(ssh.HNASSSHBackend, "_locked_selectfs") self.mock_object(ssh.HNASSSHBackend, "check_directory", mock.Mock(return_value=True)) self._driver_ssh.create_directory("/path") self._driver_ssh._locked_selectfs.assert_called_with( *locked_selectfs_args) ssh.HNASSSHBackend.check_directory.assert_called_once_with('/path') self.assertFalse(self.mock_log.warning.called) def test_create_directory_context_change_fail(self): locked_selectfs_args = ['create', '/path'] self.mock_object(time, 'sleep') self.mock_object(ssh.HNASSSHBackend, "_locked_selectfs") self.mock_object(ssh.HNASSSHBackend, "check_directory", mock.Mock(return_value=False)) self.assertRaises(exception.HNASSSCContextChange, self._driver_ssh.create_directory, "/path") self._driver_ssh._locked_selectfs.assert_called_with( *locked_selectfs_args) ssh.HNASSSHBackend.check_directory.assert_called_with('/path') self.assertTrue(self.mock_log.warning.called) def test_create_directory_context_change_success(self): locked_selectfs_args = ['create', '/path'] self.mock_object(time, 'sleep') self.mock_object(ssh.HNASSSHBackend, "_locked_selectfs") self.mock_object(ssh.HNASSSHBackend, 
"check_directory", mock.Mock(side_effect=[False, False, True])) self._driver_ssh.create_directory("/path") self._driver_ssh._locked_selectfs.assert_called_with( *locked_selectfs_args) ssh.HNASSSHBackend.check_directory.assert_called_with('/path') self.assertTrue(self.mock_log.warning.called) def test_delete_directory(self): locked_selectfs_args = ['delete', '/path'] self.mock_object(ssh.HNASSSHBackend, "_locked_selectfs") self.mock_object(ssh.HNASSSHBackend, "check_directory", mock.Mock(return_value=False)) self._driver_ssh.delete_directory("/path") self._driver_ssh._locked_selectfs.assert_called_with( *locked_selectfs_args) ssh.HNASSSHBackend.check_directory.assert_called_once_with('/path') self.assertFalse(self.mock_log.debug.called) def test_delete_directory_directory_not_empty(self): locked_selectfs_args = ['delete', '/path'] self.mock_object(ssh.HNASSSHBackend, "_locked_selectfs", mock.Mock( side_effect=exception.HNASDirectoryNotEmpty(msg='fake'))) self.mock_object(ssh.HNASSSHBackend, "check_directory") self._driver_ssh.delete_directory("/path") self._driver_ssh._locked_selectfs.assert_called_with( *locked_selectfs_args) ssh.HNASSSHBackend.check_directory.assert_not_called() self.assertFalse(self.mock_log.debug.called) def test_delete_directory_context_change_fail(self): locked_selectfs_args = ['delete', '/path'] self.mock_object(time, 'sleep') self.mock_object(ssh.HNASSSHBackend, "_locked_selectfs") self.mock_object(ssh.HNASSSHBackend, "check_directory", mock.Mock(return_value=True)) self.assertRaises(exception.HNASSSCContextChange, self._driver_ssh.delete_directory, "/path") self._driver_ssh._locked_selectfs.assert_called_with( *locked_selectfs_args) ssh.HNASSSHBackend.check_directory.assert_called_with('/path') self.assertTrue(self.mock_log.debug.called) def test_delete_directory_context_change_success(self): locked_selectfs_args = ['delete', '/path'] self.mock_object(time, 'sleep') self.mock_object(ssh.HNASSSHBackend, "_locked_selectfs") self.mock_object(ssh.HNASSSHBackend, "check_directory", mock.Mock(side_effect=[True, True, False])) self._driver_ssh.delete_directory("/path") self._driver_ssh._locked_selectfs.assert_called_with( *locked_selectfs_args) ssh.HNASSSHBackend.check_directory.assert_called_with('/path') self.assertTrue(self.mock_log.debug.called) def test_check_directory(self): path = ("/snapshots/" + self.snapshot['share_id'] + "/" + self.snapshot['id']) check_snap_args = ['path-to-object-number', '-f', self.fs_name, path] self.mock_object(ssh.HNASSSHBackend, '_execute') out = self._driver_ssh.check_directory(path) self.assertTrue(out) self._driver_ssh._execute.assert_called_with(check_snap_args) def test_check_directory_retry(self): error_msg = ("Unable to run path-to-object-number as " "path-to-object-number is currently running on volume " "39.") path = ("/snapshots/" + self.snapshot['share_id'] + "/" + self.snapshot['id']) check_snap_args = ['path-to-object-number', '-f', self.fs_name, path] self.mock_object(time, "sleep") self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=[putils.ProcessExecutionError( stdout=error_msg), putils.ProcessExecutionError( stdout=error_msg), 'Object number: 0x45a4'])) out = self._driver_ssh.check_directory(path) self.assertIs(True, out) self._driver_ssh._execute.assert_called_with(check_snap_args) def test_check_inexistent_snapshot(self): path = "/path/snap1/snapshot07-08-2016" check_snap_args = ['path-to-object-number', '-f', self.fs_name, path] self.mock_object(ssh.HNASSSHBackend, '_execute', 
mock.Mock(side_effect=putils.ProcessExecutionError( stdout=HNAS_RESULT_check_snap_error))) out = self._driver_ssh.check_directory(path) self.assertFalse(out) self._driver_ssh._execute.assert_called_with(check_snap_args) def test_check_directory_error(self): path = "/path/snap1/snapshot07-08-2016" check_snap_args = ['path-to-object-number', '-f', self.fs_name, path] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=putils.ProcessExecutionError( stdout="Internal Server Error."))) self.assertRaises(exception.HNASBackendException, self._driver_ssh.check_directory, path) self._driver_ssh._execute.assert_called_with(check_snap_args) def test_check_fs_mounted_true(self): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock(return_value=(HNAS_RESULT_df, ''))) self.assertTrue(self._driver_ssh.check_fs_mounted()) def test_check_fs_mounted_false(self): self.mock_object( ssh.HNASSSHBackend, "_execute", mock.Mock(return_value=(HNAS_RESULT_df_unmounted, ''))) self.assertFalse(self._driver_ssh.check_fs_mounted()) def test_check_fs_mounted_error(self): self.mock_object( ssh.HNASSSHBackend, "_execute", mock.Mock(return_value=(HNAS_RESULT_df_error, ''))) self.assertRaises(exception.HNASItemNotFoundException, self._driver_ssh.check_fs_mounted) def test_mount_already_mounted(self): fake_mount_command = ['mount', self.fs_name] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=putils.ProcessExecutionError(stderr=''))) self.assertRaises( exception.HNASBackendException, self._driver_ssh.mount) self._driver_ssh._execute.assert_called_with(fake_mount_command) def test_vvol_create(self): fake_vvol_create_command = ['virtual-volume', 'add', '--ensure', self.fs_name, 'vvol', '/shares/vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute") self._driver_ssh.vvol_create("vvol") self._driver_ssh._execute.assert_called_with(fake_vvol_create_command) def test_vvol_create_error(self): fake_vvol_create_command = ['virtual-volume', 'add', '--ensure', self.fs_name, 'vvol', '/shares/vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock(side_effect=putils.ProcessExecutionError)) self.assertRaises(exception.HNASBackendException, self._driver_ssh.vvol_create, "vvol") self._driver_ssh._execute.assert_called_with(fake_vvol_create_command) def test_vvol_delete_vvol_does_not_exist(self): fake_vvol_delete_command = ['tree-delete-job-submit', '--confirm', '-f', self.fs_name, '/shares/vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=[putils.ProcessExecutionError( stderr='Source path: Cannot access')] )) self._driver_ssh.vvol_delete("vvol") self.assertTrue(self.mock_log.warning.called) self._driver_ssh._execute.assert_called_with(fake_vvol_delete_command) def test_vvol_delete_error(self): fake_vvol_delete_command = ['tree-delete-job-submit', '--confirm', '-f', self.fs_name, '/shares/vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=[putils.ProcessExecutionError( stderr='')] )) self.assertRaises(exception.HNASBackendException, self._driver_ssh.vvol_delete, "vvol") self.assertTrue(self.mock_log.exception.called) self._driver_ssh._execute.assert_called_with(fake_vvol_delete_command) def test_quota_add(self): fake_add_quota_command = ['quota', 'add', '--usage-limit', '1G', '--usage-hard-limit', 'yes', self.fs_name, 'vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute") self._driver_ssh.quota_add('vvol', 1) self._driver_ssh._execute.assert_called_with(fake_add_quota_command) def test_modify_quota(self): 
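# Quota sizes are given to the backend in GB and are assumed to be rendered as '<n>G' on the
# ssc command line, as the fake quota commands in these tests encode (quota_add('vvol', 1)
# and modify_quota('vvol', 1) both expect '--usage-limit 1G'). A minimal sketch of that
# formatting, assuming plain string concatenation rather than any particular helper:
#     usage_limit = six.text_type(size_gb) + 'G'   # e.g. 1 -> '1G'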
fake_modify_quota_command = ['quota', 'mod', '--usage-limit', '1G', self.fs_name, 'vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute") self._driver_ssh.modify_quota('vvol', 1) self._driver_ssh._execute.assert_called_with(fake_modify_quota_command) def test_quota_add_error(self): fake_add_quota_command = ['quota', 'add', '--usage-limit', '1G', '--usage-hard-limit', 'yes', self.fs_name, 'vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock(side_effect=putils.ProcessExecutionError)) self.assertRaises(exception.HNASBackendException, self._driver_ssh.quota_add, 'vvol', 1) self._driver_ssh._execute.assert_called_with(fake_add_quota_command) def test_modify_quota_error(self): fake_modify_quota_command = ['quota', 'mod', '--usage-limit', '1G', self.fs_name, 'vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock(side_effect=putils.ProcessExecutionError)) self.assertRaises(exception.HNASBackendException, self._driver_ssh.modify_quota, 'vvol', 1) self._driver_ssh._execute.assert_called_with(fake_modify_quota_command) def test_check_vvol(self): fake_check_vvol_command = ['virtual-volume', 'list', '--verbose', self.fs_name, 'vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=putils.ProcessExecutionError(stderr=''))) self.assertRaises(exception.HNASItemNotFoundException, self._driver_ssh.check_vvol, 'vvol') self._driver_ssh._execute.assert_called_with(fake_check_vvol_command) def test_check_quota(self): fake_check_quota_command = ['quota', 'list', '--verbose', self.fs_name, 'vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( return_value=('No quotas matching specified filter criteria', ''))) self.assertRaises(exception.HNASItemNotFoundException, self._driver_ssh.check_quota, 'vvol') self._driver_ssh._execute.assert_called_with(fake_check_quota_command) def test_check_quota_error(self): fake_check_quota_command = ['quota', 'list', '--verbose', self.fs_name, 'vvol'] self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=putils.ProcessExecutionError)) self.assertRaises(exception.HNASBackendException, self._driver_ssh.check_quota, 'vvol') self._driver_ssh._execute.assert_called_with(fake_check_quota_command) @ddt.data(True, False) def test_check_export(self, is_snapshot): self.mock_object(ssh.HNASSSHBackend, "_get_export", mock.Mock( return_value=[ssh.Export(HNAS_RESULT_export)])) self._driver_ssh.check_export("vvol_test", is_snapshot) def test_check_export_error(self): self.mock_object(ssh.HNASSSHBackend, "_get_export", mock.Mock( return_value=[ssh.Export(HNAS_RESULT_wrong_export)])) self.assertRaises(exception.HNASItemNotFoundException, self._driver_ssh.check_export, "vvol_test") def test_check_cifs(self): check_cifs_share_command = ['cifs-share', 'list', 'vvol_test'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( return_value=[HNAS_RESULT_cifs_list, ''])) self._driver_ssh.check_cifs('vvol_test') self._driver_ssh._execute.assert_called_with(check_cifs_share_command) def test_check_cifs_inexistent_share(self): check_cifs_share_command = ['cifs-share', 'list', 'wrong_vvol'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError( stderr='Export wrong_vvol does not exist on backend ' 'anymore.')])) self.assertRaises(exception.HNASItemNotFoundException, self._driver_ssh.check_cifs, 'wrong_vvol') self._driver_ssh._execute.assert_called_with(check_cifs_share_command) def test_check_cifs_exception(self): check_cifs_share_command = ['cifs-share', 'list', 
'wrong_vvol'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError(stderr='Error.')])) self.assertRaises(exception.HNASBackendException, self._driver_ssh.check_cifs, 'wrong_vvol') self._driver_ssh._execute.assert_called_with(check_cifs_share_command) def test_check_cifs_different_fs_exception(self): check_cifs_share_command = ['cifs-share', 'list', 'vvol_test'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( return_value=[HNAS_RESULT_different_fs_cifs_list, ''])) self.assertRaises(exception.HNASItemNotFoundException, self._driver_ssh.check_cifs, 'vvol_test') self._driver_ssh._execute.assert_called_with(check_cifs_share_command) def test_is_cifs_in_use(self): check_cifs_share_command = ['cifs-share', 'list', 'vvol_test'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( return_value=[HNAS_RESULT_cifs_list, ''])) out = self._driver_ssh.is_cifs_in_use('vvol_test') self.assertTrue(out) self._driver_ssh._execute.assert_called_with(check_cifs_share_command) def test_is_cifs_without_use(self): check_cifs_share_command = ['cifs-share', 'list', 'vvol_test'] self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( return_value=[HNAS_RESULT_different_fs_cifs_list, ''])) out = self._driver_ssh.is_cifs_in_use('vvol_test') self.assertFalse(out) self._driver_ssh._execute.assert_called_with(check_cifs_share_command) def test_get_share_quota(self): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( return_value=(HNAS_RESULT_quota, ''))) result = self._driver_ssh.get_share_quota("vvol_test") self.assertEqual(5, result) @ddt.data(HNAS_RESULT_quota_unset, HNAS_RESULT_quota_err) def test_get_share_quota_errors(self, hnas_output): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( return_value=(hnas_output, ''))) result = self._driver_ssh.get_share_quota("vvol_test") self.assertIsNone(result) def test_get_share_quota_tb(self): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( return_value=(HNAS_RESULT_quota_tb, ''))) result = self._driver_ssh.get_share_quota("vvol_test") self.assertEqual(1024, result) def test_get_share_quota_mb(self): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( return_value=(HNAS_RESULT_quota_mb, ''))) self.assertRaises(exception.HNASBackendException, self._driver_ssh.get_share_quota, "vvol_test") def test_get_share_usage(self): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( return_value=(HNAS_RESULT_quota, ''))) self.assertEqual(1, self._driver_ssh.get_share_usage("vvol_test")) def test_get_share_usage_error(self): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( return_value=(HNAS_RESULT_quota_err, ''))) self.assertRaises(exception.HNASItemNotFoundException, self._driver_ssh.get_share_usage, "vvol_test") def test_get_share_usage_mb(self): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( return_value=(HNAS_RESULT_quota_mb, ''))) self.assertEqual(0.01953125, self._driver_ssh.get_share_usage( "vvol_test")) def test_get_share_usage_tb(self): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( return_value=(HNAS_RESULT_quota_tb, ''))) self.assertEqual(1024, self._driver_ssh.get_share_usage("vvol_test")) @ddt.data(True, False) def test__get_share_export(self, is_snapshot): self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( return_value=[HNAS_RESULT_export_ip, ''])) export_list = self._driver_ssh._get_export( name='fake_name', is_snapshot=is_snapshot) path = '/shares/fake_name' if is_snapshot: path = '/snapshots/fake_name' command 
= ['nfs-export', 'list ', path] self._driver_ssh._execute.assert_called_with(command) self.assertEqual('vvol_test', export_list[0].export_name) self.assertEqual('/vvol_test', export_list[0].export_path) self.assertEqual('fake_fs', export_list[0].file_system_label) self.assertEqual('Yes', export_list[0].mounted) self.assertIn('rw', export_list[0].export_configuration[0]) def test__get_share_export_fs_not_available(self): self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( return_value=[HNAS_RESULT_exp_no_fs, ''])) export_list = self._driver_ssh._get_export(name='fake_name') path = '/shares/fake_name' command = ['nfs-export', 'list ', path] self._driver_ssh._execute.assert_called_with(command) self.assertEqual('no_fs', export_list[0].export_name) self.assertEqual('/export_without_fs', export_list[0].export_path) self.assertEqual('*** not available ***', export_list[0].file_system_info) self.assertEqual([], export_list[0].export_configuration) not_in_keys = ['file_system_label', 'file_system_size', 'formatted', 'file_system_free_space', 'file_system_state', 'failed', 'mounted', 'thin_provisioned'] for key in not_in_keys: self.assertNotIn(key, export_list[0].__dict__) def test__get_share_export_exception_not_found(self): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=putils.ProcessExecutionError( stderr="NFS Export List: Export 'id' does not exist."))) self.assertRaises(exception.HNASItemNotFoundException, self._driver_ssh._get_export, 'fake_id') def test__get_share_export_exception_error(self): self.mock_object(ssh.HNASSSHBackend, "_execute", mock.Mock( side_effect=putils.ProcessExecutionError(stderr="Some error.") )) self.assertRaises(exception.HNASBackendException, self._driver_ssh._get_export, 'fake_id') def test__execute(self): key = self.ssh_private_key commands = ['tree-clone-job-submit', '-e', '/src', '/dst'] concat_command = ('ssc --smuauth fake console-context --evs 2 ' 'tree-clone-job-submit -e /src /dst') self.mock_object(paramiko.SSHClient, 'connect') self.mock_object(putils, 'ssh_execute', mock.Mock(return_value=[HNAS_RESULT_job, ''])) output, err = self._driver_ssh._execute(commands) putils.ssh_execute.assert_called_once_with(mock.ANY, concat_command, check_exit_code=True) paramiko.SSHClient.connect.assert_called_with(self.ip, username=self.user, key_filename=key, look_for_keys=False, timeout=None, password=self.password, port=self.port, banner_timeout=None) self.assertIn('Request submitted successfully.', output) def test__execute_ssh_exception(self): commands = ['tree-clone-job-submit', '-e', '/src', '/dst'] concat_command = ('ssc --smuauth fake console-context --evs 2 ' 'tree-clone-job-submit -e /src /dst') msg = 'Failed to establish SSC connection' self.mock_object(time, "sleep") self.mock_object(paramiko.SSHClient, 'connect') self.mock_object(putils, 'ssh_execute', mock.Mock(side_effect=[ putils.ProcessExecutionError(stderr=msg), putils.ProcessExecutionError(stderr='Invalid!')])) self.mock_object(mutils.SSHPool, "item", mock.Mock(return_value=paramiko.SSHClient())) self.mock_object(paramiko.SSHClient, "set_missing_host_key_policy") self.assertRaises(putils.ProcessExecutionError, self._driver_ssh._execute, commands) putils.ssh_execute.assert_called_with(mock.ANY, concat_command, check_exit_code=True) self.assertTrue(self.mock_log.debug.called) def test__locked_selectfs_create_operation(self): exec_command = ['selectfs', self.fs_name, '\n', 'ssc', '127.0.0.1', 'console-context', '--evs', six.text_type(self.evs_id), 'mkdir', '-p', '/path'] 
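# _locked_selectfs is expected to chain two console commands: 'selectfs <fs_name>' to switch
# the SSC context to the target file system, then an EVS-scoped 'ssc ... console-context
# --evs <evs_id>' call running the actual operation ('mkdir -p' here, 'rmdir' for delete).
# The tests below exercise the plain error branch and the 'Current file system invalid'
# output, which is surfaced as HNASSSCContextChange.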
self.mock_object(ssh.HNASSSHBackend, '_execute') self._driver_ssh._locked_selectfs('create', '/path') self._driver_ssh._execute.assert_called_with(exec_command) def test__locked_selectfs_create_operation_error(self): exec_command = ['selectfs', self.fs_name, '\n', 'ssc', '127.0.0.1', 'console-context', '--evs', six.text_type(self.evs_id), 'mkdir', '-p', '/path'] self.mock_object( ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=putils.ProcessExecutionError( stderr="some error"))) self.assertRaises(exception.HNASBackendException, self._driver_ssh._locked_selectfs, 'create', '/path') self._driver_ssh._execute.assert_called_with(exec_command) def test__locked_selectfs_create_operation_context_change(self): exec_command = ['selectfs', self.fs_name, '\n', 'ssc', '127.0.0.1', 'console-context', '--evs', six.text_type(self.evs_id), 'mkdir', '-p', '/path'] self.mock_object( ssh.HNASSSHBackend, '_execute', mock.Mock(side_effect=putils.ProcessExecutionError( stderr="Current file system invalid: VolumeNotFound"))) self.assertRaises(exception.HNASSSCContextChange, self._driver_ssh._locked_selectfs, 'create', '/path') self._driver_ssh._execute.assert_called_with(exec_command) self.assertTrue(self.mock_log.debug.called) def test__locked_selectfs_delete_operation_successful(self): exec_command = ['selectfs', self.fs_name, '\n', 'ssc', '127.0.0.1', 'console-context', '--evs', six.text_type(self.evs_id), 'rmdir', '/path'] self.mock_object(ssh.HNASSSHBackend, '_execute') self._driver_ssh._locked_selectfs('delete', '/path') self._driver_ssh._execute.assert_called_with(exec_command) def test__locked_selectfs_deleting_not_empty_directory(self): msg = 'This path has more snapshot. Currenty DirectoryNotEmpty' self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError(stderr=msg)])) self.assertRaises(exception.HNASDirectoryNotEmpty, self._driver_ssh._locked_selectfs, 'delete', '/path') self.assertTrue(self.mock_log.debug.called) def test__locked_selectfs_delete_exception(self): msg = "rmdir: cannot remove '/path'" self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError(stderr=msg)])) self.assertRaises(exception.HNASBackendException, self._driver_ssh._locked_selectfs, 'delete', 'path') self.assertTrue(self.mock_log.exception.called) def test__locked_selectfs_delete_not_found(self): msg = "rmdir: cannot remove '/path': NotFound" self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError(stderr=msg)])) self._driver_ssh._locked_selectfs('delete', 'path') self.assertTrue(self.mock_log.warning.called) def test__locked_selectfs_delete_context_change(self): msg = "Current file system invalid: VolumeNotFound" self.mock_object(ssh.HNASSSHBackend, '_execute', mock.Mock( side_effect=[putils.ProcessExecutionError(stderr=msg)])) self.assertRaises(exception.HNASSSCContextChange, self._driver_ssh._locked_selectfs, 'delete', 'path') self.assertTrue(self.mock_log.debug.called) manila-10.0.0/manila/tests/share/drivers/hitachi/hnas/test_driver.py0000664000175000017500000015113313656750227025525 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hitachi Data Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from oslo_config import cfg from manila import exception import manila.share.configuration import manila.share.driver from manila.share.drivers.hitachi.hnas import driver from manila.share.drivers.hitachi.hnas import ssh from manila import test CONF = cfg.CONF share_nfs = { 'id': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'name': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'size': 50, 'host': 'hnas', 'share_proto': 'NFS', 'share_type_id': 1, 'share_network_id': 'bb329e24-3bdb-491d-acfd-dfe70c09b98d', 'share_server_id': 'cc345a53-491d-acfd-3bdb-dfe70c09b98d', 'export_locations': [{'path': '172.24.44.10:/shares/' 'aa4a7710-f326-41fb-ad18-b4ad587fc87a'}], } share_cifs = { 'id': 'f5cadaf2-afbe-4cc4-9021-85491b6b76f7', 'name': 'f5cadaf2-afbe-4cc4-9021-85491b6b76f7', 'size': 50, 'host': 'hnas', 'share_proto': 'CIFS', 'share_type_id': 1, 'share_network_id': 'bb329e24-3bdb-491d-acfd-dfe70c09b98d', 'share_server_id': 'cc345a53-491d-acfd-3bdb-dfe70c09b98d', 'export_locations': [{'path': '\\\\172.24.44.10\\' 'f5cadaf2-afbe-4cc4-9021-85491b6b76f7'}], } share_invalid_host = { 'id': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'name': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'size': 50, 'host': 'invalid', 'share_proto': 'NFS', 'share_type_id': 1, 'share_network_id': 'bb329e24-3bdb-491d-acfd-dfe70c09b98d', 'share_server_id': 'cc345a53-491d-acfd-3bdb-dfe70c09b98d', 'export_locations': [{'path': '172.24.44.10:/shares/' 'aa4a7710-f326-41fb-ad18-b4ad587fc87a'}], } share_mount_support_nfs = { 'id': '62125744-fcdd-4f55-a8c1-d1498102f634', 'name': '62125744-fcdd-4f55-a8c1-d1498102f634', 'size': 50, 'host': 'hnas', 'share_proto': 'NFS', 'share_type_id': 1, 'share_network_id': 'bb329e24-3bdb-491d-acfd-dfe70c09b98d', 'share_server_id': 'cc345a53-491d-acfd-3bdb-dfe70c09b98d', 'export_locations': [{'path': '172.24.44.10:/shares/' '62125744-fcdd-4f55-a8c1-d1498102f634'}], 'mount_snapshot_support': True, } share_mount_support_cifs = { 'id': 'd6e7dc6b-f65f-49d9-968d-936f75474f29', 'name': 'd6e7dc6b-f65f-49d9-968d-936f75474f29', 'size': 50, 'host': 'hnas', 'share_proto': 'CIFS', 'share_type_id': 1, 'share_network_id': 'bb329e24-3bdb-491d-acfd-dfe70c09b98d', 'share_server_id': 'cc345a53-491d-acfd-3bdb-dfe70c09b98d', 'export_locations': [{'path': '172.24.44.10:/shares/' 'd6e7dc6b-f65f-49d9-968d-936f75474f29'}], 'mount_snapshot_support': True, } access_nfs_rw = { 'id': 'acdc7172b-fe07-46c4-b78f-df3e0324ccd0', 'access_type': 'ip', 'access_to': '172.24.44.200', 'access_level': 'rw', 'state': 'active', } access_cifs_rw = { 'id': '43167594-40e9-b899-1f4f-b9c2176b7564', 'access_type': 'user', 'access_to': 'fake_user', 'access_level': 'rw', 'state': 'active', } access_cifs_ro = { 'id': '32407088-1f4f-40e9-b899-b9a4176b574d', 'access_type': 'user', 'access_to': 'fake_user', 'access_level': 'ro', 'state': 'active', } snapshot_nfs = { 'id': 'abba6d9b-f29c-4bf7-aac1-618cda7aaf0f', 'share_id': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'share': share_nfs, 'provider_location': '/snapshots/aa4a7710-f326-41fb-ad18-b4ad587fc87a/' 'abba6d9b-f29c-4bf7-aac1-618cda7aaf0f', 'size': 2, } snapshot_cifs = { 'id': 
'91bc6e1b-1ba5-f29c-abc1-da7618cabf0a', 'share_id': 'f5cadaf2-afbe-4cc4-9021-85491b6b76f7', 'share': share_cifs, 'provider_location': '/snapshots/f5cadaf2-afbe-4cc4-9021-85491b6b76f7/' '91bc6e1b-1ba5-f29c-abc1-da7618cabf0a', 'size': 2, } manage_snapshot = { 'id': 'bc168eb-fa71-beef-153a-3d451aa1351f', 'share_id': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'share': share_nfs, 'provider_location': '/snapshots/aa4a7710-f326-41fb-ad18-b4ad587fc87a' '/snapshot18-05-2106', } snapshot_mount_support_nfs = { 'id': '3377b015-a695-4a5a-8aa5-9b931b023380', 'share_id': '62125744-fcdd-4f55-a8c1-d1498102f634', 'share': share_mount_support_nfs, 'provider_location': '/snapshots/62125744-fcdd-4f55-a8c1-d1498102f634' '/3377b015-a695-4a5a-8aa5-9b931b023380', } snapshot_mount_support_cifs = { 'id': 'f9916515-5cb8-4612-afa6-7f2baa74223a', 'share_id': 'd6e7dc6b-f65f-49d9-968d-936f75474f29', 'share': share_mount_support_cifs, 'provider_location': '/snapshots/d6e7dc6b-f65f-49d9-968d-936f75474f29' '/f9916515-5cb8-4612-afa6-7f2baa74223a', } invalid_share = { 'id': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'name': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'size': 100, 'host': 'hnas', 'share_proto': 'HDFS', } invalid_snapshot = { 'id': '24dcdcb5-a582-4bcc-b462-641da143afee', 'share_id': 'aa4a7710-f326-41fb-ad18-b4ad587fc87a', 'share': invalid_share, } invalid_access_type = { 'id': 'acdc7172b-fe07-46c4-b78f-df3e0324ccd0', 'access_type': 'cert', 'access_to': 'manila_user', 'access_level': 'rw', 'state': 'active', } invalid_access_level = { 'id': 'acdc7172b-fe07-46c4-b78f-df3e0324ccd0', 'access_type': 'ip', 'access_to': 'manila_user', 'access_level': '777', 'state': 'active', } invalid_protocol_msg = ("Share backend error: Only NFS or CIFS protocol are " "currently supported. Share provided %(id)s with " "protocol %(proto)s." % {'id': invalid_share['id'], 'proto': invalid_share['share_proto']}) @ddt.ddt class HitachiHNASTestCase(test.TestCase): def setUp(self): super(HitachiHNASTestCase, self).setUp() CONF.set_default('driver_handles_share_servers', False) CONF.hitachi_hnas_evs_id = '2' CONF.hitachi_hnas_evs_ip = '172.24.44.10' CONF.hitachi_hnas_admin_network_ip = '10.20.30.40' CONF.hitachi_hnas_ip = '172.24.44.1' CONF.hitachi_hnas_ip_port = 'hitachi_hnas_ip_port' CONF.hitachi_hnas_user = 'hitachi_hnas_user' CONF.hitachi_hnas_password = 'hitachi_hnas_password' CONF.hitachi_hnas_file_system_name = 'file_system' CONF.hitachi_hnas_ssh_private_key = 'private_key' CONF.hitachi_hnas_cluster_admin_ip0 = None CONF.hitachi_hnas_stalled_job_timeout = 10 CONF.hitachi_hnas_driver_helper = ('manila.share.drivers.hitachi.hnas.' 
'ssh.HNASSSHBackend') self.fake_conf = manila.share.configuration.Configuration(None) self.fake_private_storage = mock.Mock() self.mock_object(self.fake_private_storage, 'get', mock.Mock(return_value=None)) self.mock_object(self.fake_private_storage, 'delete', mock.Mock(return_value=None)) self._driver = driver.HitachiHNASDriver( private_storage=self.fake_private_storage, configuration=self.fake_conf) self._driver.backend_name = "hnas" self.mock_log = self.mock_object(driver, 'LOG') # mocking common backend calls self.mock_object(ssh.HNASSSHBackend, "check_fs_mounted", mock.Mock( return_value=True)) self.mock_object(ssh.HNASSSHBackend, "check_vvol") self.mock_object(ssh.HNASSSHBackend, "check_quota") self.mock_object(ssh.HNASSSHBackend, "check_cifs") self.mock_object(ssh.HNASSSHBackend, "check_export") self.mock_object(ssh.HNASSSHBackend, 'check_directory') @ddt.data('hitachi_hnas_driver_helper', 'hitachi_hnas_evs_id', 'hitachi_hnas_evs_ip', 'hitachi_hnas_ip', 'hitachi_hnas_user') def test_init_invalid_conf_parameters(self, attr_name): self.mock_object(manila.share.driver.ShareDriver, '__init__') setattr(CONF, attr_name, None) self.assertRaises(exception.InvalidParameterValue, self._driver.__init__) def test_init_invalid_credentials(self): self.mock_object(manila.share.driver.ShareDriver, '__init__') CONF.hitachi_hnas_password = None CONF.hitachi_hnas_ssh_private_key = None self.assertRaises(exception.InvalidParameterValue, self._driver.__init__) @ddt.data(True, False) def test_update_access_nfs(self, empty_rules): if not empty_rules: access1 = { 'access_type': 'ip', 'access_to': '172.24.10.10', 'access_level': 'rw' } access2 = { 'access_type': 'ip', 'access_to': '188.100.20.10', 'access_level': 'ro' } access_list = [access1, access2] access_list_updated = ( [access1['access_to'] + '(' + access1['access_level'] + ',norootsquash)', access2['access_to'] + '(' + access2['access_level'] + ')', ]) else: access_list = [] access_list_updated = [] self.mock_object(ssh.HNASSSHBackend, "update_nfs_access_rule", mock.Mock()) self._driver.update_access('context', share_nfs, access_list, [], []) ssh.HNASSSHBackend.update_nfs_access_rule.assert_called_once_with( access_list_updated, share_id=share_nfs['id']) self.assertTrue(self.mock_log.debug.called) def test_update_access_ip_exception(self): access1 = { 'access_type': 'ip', 'access_to': '188.100.20.10', 'access_level': 'ro' } access2 = { 'access_type': 'something', 'access_to': '172.24.10.10', 'access_level': 'rw' } access_list = [access1, access2] self.assertRaises(exception.InvalidShareAccess, self._driver.update_access, 'context', share_nfs, access_list, [], []) def test_update_access_not_found_exception(self): access1 = { 'access_type': 'ip', 'access_to': '188.100.20.10', 'access_level': 'ro' } access2 = { 'access_type': 'something', 'access_to': '172.24.10.10', 'access_level': 'rw' } access_list = [access1, access2] self.mock_object(self._driver, '_ensure_share', mock.Mock( side_effect=exception.HNASItemNotFoundException(msg='fake'))) self.assertRaises(exception.ShareResourceNotFound, self._driver.update_access, 'context', share_nfs, access_list, add_rules=[], delete_rules=[]) @ddt.data([access_cifs_rw, 'acr'], [access_cifs_ro, 'ar']) @ddt.unpack def test_allow_access_cifs(self, access_cifs, permission): access_list_allow = [access_cifs] self.mock_object(ssh.HNASSSHBackend, 'cifs_allow_access') self._driver.update_access('context', share_cifs, [], access_list_allow, []) ssh.HNASSSHBackend.cifs_allow_access.assert_called_once_with( 
share_cifs['id'], 'fake_user', permission, is_snapshot=False) self.assertTrue(self.mock_log.debug.called) def test_allow_access_cifs_invalid_type(self): access_cifs_type_ip = { 'id': '43167594-40e9-b899-1f4f-b9c2176b7564', 'access_type': 'ip', 'access_to': 'fake_user', 'access_level': 'rw', 'state': 'active', } access_list_allow = [access_cifs_type_ip] self.assertRaises(exception.InvalidShareAccess, self._driver.update_access, 'context', share_cifs, [], access_list_allow, []) def test_deny_access_cifs(self): access_list_deny = [access_cifs_rw] self.mock_object(ssh.HNASSSHBackend, 'cifs_deny_access') self._driver.update_access('context', share_cifs, [], [], access_list_deny) ssh.HNASSSHBackend.cifs_deny_access.assert_called_once_with( share_cifs['id'], 'fake_user', is_snapshot=False) self.assertTrue(self.mock_log.debug.called) def test_deny_access_cifs_unsupported_type(self): access_cifs_type_ip = { 'id': '43167594-40e9-b899-1f4f-b9c2176b7564', 'access_type': 'ip', 'access_to': 'fake_user', 'access_level': 'rw', 'state': 'active', } access_list_deny = [access_cifs_type_ip] self.mock_object(ssh.HNASSSHBackend, 'cifs_deny_access') self._driver.update_access('context', share_cifs, [], [], access_list_deny) self.assertTrue(self.mock_log.warning.called) def test_update_access_invalid_share_protocol(self): self.mock_object(self._driver, '_ensure_share') ex = self.assertRaises(exception.ShareBackendException, self._driver.update_access, 'context', invalid_share, [], [], []) self.assertEqual(invalid_protocol_msg, ex.msg) def test_update_access_cifs_recovery_mode(self): access_list = [access_cifs_rw, access_cifs_ro] permission_list = [('fake_user1', 'acr'), ('fake_user2', 'ar')] self.mock_object(ssh.HNASSSHBackend, 'list_cifs_permissions', mock.Mock(return_value=permission_list)) self.mock_object(ssh.HNASSSHBackend, 'cifs_deny_access') self.mock_object(ssh.HNASSSHBackend, 'cifs_allow_access') self._driver.update_access('context', share_cifs, access_list, [], []) ssh.HNASSSHBackend.list_cifs_permissions.assert_called_once_with( share_cifs['id']) self.assertTrue(self.mock_log.debug.called) def _get_export(self, id, share_proto, ip, is_admin_only, is_snapshot=False): if share_proto.lower() == 'nfs': if is_snapshot: path = '/snapshots/' + id else: path = '/shares/' + id export = ':'.join((ip, path)) else: export = r'\\%s\%s' % (ip, id) return { "path": export, "is_admin_only": is_admin_only, "metadata": {}, } @ddt.data(share_nfs, share_cifs) def test_create_share(self, share): self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted", mock.Mock()) self.mock_object(ssh.HNASSSHBackend, "vvol_create") self.mock_object(ssh.HNASSSHBackend, "quota_add") self.mock_object(ssh.HNASSSHBackend, "nfs_export_add", mock.Mock( return_value='/shares/' + share['id'])) self.mock_object(ssh.HNASSSHBackend, "cifs_share_add") result = self._driver.create_share('context', share) self.assertTrue(self.mock_log.debug.called) ssh.HNASSSHBackend.vvol_create.assert_called_once_with(share['id']) ssh.HNASSSHBackend.quota_add.assert_called_once_with(share['id'], share['size']) expected = [ self._get_export( share['id'], share['share_proto'], self._driver.hnas_evs_ip, False), self._get_export( share['id'], share['share_proto'], self._driver.hnas_admin_network_ip, True)] if share['share_proto'].lower() == 'nfs': ssh.HNASSSHBackend.nfs_export_add.assert_called_once_with( share_nfs['id'], snapshot_id=None) self.assertFalse(ssh.HNASSSHBackend.cifs_share_add.called) else: ssh.HNASSSHBackend.cifs_share_add.assert_called_once_with( 
share_cifs['id'], snapshot_id=None) self.assertFalse(ssh.HNASSSHBackend.nfs_export_add.called) self.assertEqual(expected, result) def test_create_share_export_error(self): self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted", mock.Mock()) self.mock_object(ssh.HNASSSHBackend, "vvol_create") self.mock_object(ssh.HNASSSHBackend, "quota_add") self.mock_object(ssh.HNASSSHBackend, "nfs_export_add", mock.Mock( side_effect=exception.HNASBackendException('msg'))) self.mock_object(ssh.HNASSSHBackend, "vvol_delete") self.assertRaises(exception.HNASBackendException, self._driver.create_share, 'context', share_nfs) self.assertTrue(self.mock_log.debug.called) ssh.HNASSSHBackend.vvol_create.assert_called_once_with(share_nfs['id']) ssh.HNASSSHBackend.quota_add.assert_called_once_with(share_nfs['id'], share_nfs['size']) ssh.HNASSSHBackend.nfs_export_add.assert_called_once_with( share_nfs['id'], snapshot_id=None) ssh.HNASSSHBackend.vvol_delete.assert_called_once_with(share_nfs['id']) def test_create_share_invalid_share_protocol(self): self.mock_object(driver.HitachiHNASDriver, "_create_share", mock.Mock(return_value="path")) ex = self.assertRaises(exception.ShareBackendException, self._driver.create_share, 'context', invalid_share) self.assertEqual(invalid_protocol_msg, ex.msg) @ddt.data(share_nfs, share_cifs) def test_delete_share(self, share): self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted", mock.Mock()) self.mock_object(ssh.HNASSSHBackend, "nfs_export_del") self.mock_object(ssh.HNASSSHBackend, "cifs_share_del") self.mock_object(ssh.HNASSSHBackend, "vvol_delete") self._driver.delete_share('context', share) self.assertTrue(self.mock_log.debug.called) ssh.HNASSSHBackend.vvol_delete.assert_called_once_with(share['id']) if share['share_proto'].lower() == 'nfs': ssh.HNASSSHBackend.nfs_export_del.assert_called_once_with( share['id']) self.assertFalse(ssh.HNASSSHBackend.cifs_share_del.called) else: ssh.HNASSSHBackend.cifs_share_del.assert_called_once_with( share['id']) self.assertFalse(ssh.HNASSSHBackend.nfs_export_del.called) @ddt.data(snapshot_nfs, snapshot_cifs, snapshot_mount_support_nfs, snapshot_mount_support_cifs) def test_create_snapshot(self, snapshot): hnas_id = snapshot['share_id'] access_list = ['172.24.44.200(rw,norootsquash)', '172.24.49.180(all_squash,read_write,secure)', '172.24.49.110(ro, secure)', '172.24.49.112(secure,readwrite,norootsquash)', '172.24.49.142(read_only, secure)', '172.24.49.201(rw,read_write,readwrite)', '172.24.49.218(rw)'] ro_list = ['172.24.44.200(ro,norootsquash)', '172.24.49.180(all_squash,ro,secure)', '172.24.49.110(ro, secure)', '172.24.49.112(secure,ro,norootsquash)', '172.24.49.142(read_only, secure)', '172.24.49.201(ro,ro,ro)', '172.24.49.218(ro)'] export_locations = [ self._get_export( snapshot['id'], snapshot['share']['share_proto'], self._driver.hnas_evs_ip, False, is_snapshot=True), self._get_export( snapshot['id'], snapshot['share']['share_proto'], self._driver.hnas_admin_network_ip, True, is_snapshot=True)] expected = {'provider_location': '/snapshots/' + hnas_id + '/' + snapshot['id']} if snapshot['share'].get('mount_snapshot_support'): expected['export_locations'] = export_locations self.mock_object(ssh.HNASSSHBackend, "get_nfs_host_list", mock.Mock( return_value=access_list)) self.mock_object(ssh.HNASSSHBackend, "update_nfs_access_rule", mock.Mock()) self.mock_object(ssh.HNASSSHBackend, "is_cifs_in_use", mock.Mock( return_value=False)) self.mock_object(ssh.HNASSSHBackend, "tree_clone") self.mock_object(ssh.HNASSSHBackend, 
"nfs_export_add") self.mock_object(ssh.HNASSSHBackend, "cifs_share_add") out = self._driver.create_snapshot('context', snapshot) ssh.HNASSSHBackend.tree_clone.assert_called_once_with( '/shares/' + hnas_id, '/snapshots/' + hnas_id + '/' + snapshot['id']) self.assertEqual(expected, out) if snapshot['share']['share_proto'].lower() == 'nfs': ssh.HNASSSHBackend.get_nfs_host_list.assert_called_once_with( hnas_id) ssh.HNASSSHBackend.update_nfs_access_rule.assert_any_call( ro_list, share_id=hnas_id) ssh.HNASSSHBackend.update_nfs_access_rule.assert_any_call( access_list, share_id=hnas_id) else: ssh.HNASSSHBackend.is_cifs_in_use.assert_called_once_with( hnas_id) def test_create_snapshot_invalid_protocol(self): self.mock_object(self._driver, '_ensure_share') ex = self.assertRaises(exception.ShareBackendException, self._driver.create_snapshot, 'context', invalid_snapshot) self.assertEqual(invalid_protocol_msg, ex.msg) def test_create_snapshot_cifs_exception(self): cifs_excep_msg = ("Share backend error: CIFS snapshot when share is " "mounted is disabled. Set " "hitachi_hnas_allow_cifs_snapshot_while_mounted to " "True or unmount the share to take a snapshot.") self.mock_object(ssh.HNASSSHBackend, "is_cifs_in_use", mock.Mock( return_value=True)) ex = self.assertRaises(exception.ShareBackendException, self._driver.create_snapshot, 'context', snapshot_cifs) self.assertEqual(cifs_excep_msg, ex.msg) def test_create_snapshot_first_snapshot(self): hnas_id = snapshot_nfs['share_id'] self.mock_object(ssh.HNASSSHBackend, "get_nfs_host_list", mock.Mock( return_value=['172.24.44.200(rw)'])) self.mock_object(ssh.HNASSSHBackend, "update_nfs_access_rule", mock.Mock()) self.mock_object(ssh.HNASSSHBackend, "tree_clone", mock.Mock( side_effect=exception.HNASNothingToCloneException('msg'))) self.mock_object(ssh.HNASSSHBackend, "create_directory") self.mock_object(ssh.HNASSSHBackend, "nfs_export_add") self.mock_object(ssh.HNASSSHBackend, "cifs_share_add") self._driver.create_snapshot('context', snapshot_nfs) self.assertTrue(self.mock_log.warning.called) ssh.HNASSSHBackend.get_nfs_host_list.assert_called_once_with( hnas_id) ssh.HNASSSHBackend.update_nfs_access_rule.assert_any_call( ['172.24.44.200(ro)'], share_id=hnas_id) ssh.HNASSSHBackend.update_nfs_access_rule.assert_any_call( ['172.24.44.200(rw)'], share_id=hnas_id) ssh.HNASSSHBackend.create_directory.assert_called_once_with( '/snapshots/' + hnas_id + '/' + snapshot_nfs['id']) @ddt.data(snapshot_nfs, snapshot_cifs, snapshot_mount_support_nfs, snapshot_mount_support_cifs) def test_delete_snapshot(self, snapshot): hnas_share_id = snapshot['share_id'] hnas_snapshot_id = snapshot['id'] self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted") self.mock_object(ssh.HNASSSHBackend, "tree_delete") self.mock_object(ssh.HNASSSHBackend, "delete_directory") self.mock_object(ssh.HNASSSHBackend, "nfs_export_del") self.mock_object(ssh.HNASSSHBackend, "cifs_share_del") self._driver.delete_snapshot('context', snapshot) self.assertTrue(self.mock_log.debug.called) self.assertTrue(self.mock_log.info.called) driver.HitachiHNASDriver._check_fs_mounted.assert_called_once_with() ssh.HNASSSHBackend.tree_delete.assert_called_once_with( '/snapshots/' + hnas_share_id + '/' + snapshot['id']) ssh.HNASSSHBackend.delete_directory.assert_called_once_with( '/snapshots/' + hnas_share_id) if snapshot['share']['share_proto'].lower() == 'nfs': if snapshot['share'].get('mount_snapshot_support'): ssh.HNASSSHBackend.nfs_export_del.assert_called_once_with( snapshot_id=hnas_snapshot_id) else: 
ssh.HNASSSHBackend.nfs_export_del.assert_not_called() else: if snapshot['share'].get('mount_snapshot_support'): ssh.HNASSSHBackend.cifs_share_del.assert_called_once_with( hnas_snapshot_id) else: ssh.HNASSSHBackend.cifs_share_del.assert_not_called() def test_delete_managed_snapshot(self): hnas_id = manage_snapshot['share_id'] self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted") self.mock_object(ssh.HNASSSHBackend, "tree_delete") self.mock_object(ssh.HNASSSHBackend, "delete_directory") self.mock_object(ssh.HNASSSHBackend, "nfs_export_del") self.mock_object(ssh.HNASSSHBackend, "cifs_share_del") self._driver.delete_snapshot('context', manage_snapshot) self.assertTrue(self.mock_log.debug.called) self.assertTrue(self.mock_log.info.called) driver.HitachiHNASDriver._check_fs_mounted.assert_called_once_with() ssh.HNASSSHBackend.tree_delete.assert_called_once_with( manage_snapshot['provider_location']) ssh.HNASSSHBackend.delete_directory.assert_called_once_with( '/snapshots/' + hnas_id) @ddt.data(share_nfs, share_cifs) def test_ensure_share(self, share): result = self._driver.ensure_share('context', share) ssh.HNASSSHBackend.check_vvol.assert_called_once_with(share['id']) ssh.HNASSSHBackend.check_quota.assert_called_once_with(share['id']) expected = [ self._get_export( share['id'], share['share_proto'], self._driver.hnas_evs_ip, False), self._get_export( share['id'], share['share_proto'], self._driver.hnas_admin_network_ip, True)] if share['share_proto'].lower() == 'nfs': ssh.HNASSSHBackend.check_export.assert_called_once_with( share['id']) self.assertFalse(ssh.HNASSSHBackend.check_cifs.called) else: ssh.HNASSSHBackend.check_cifs.assert_called_once_with(share['id']) self.assertFalse(ssh.HNASSSHBackend.check_export.called) self.assertEqual(expected, result) def test_ensure_share_invalid_protocol(self): ex = self.assertRaises(exception.ShareBackendException, self._driver.ensure_share, 'context', invalid_share) self.assertEqual(invalid_protocol_msg, ex.msg) def test_shrink_share(self): self.mock_object(ssh.HNASSSHBackend, "get_share_usage", mock.Mock( return_value=10)) self.mock_object(ssh.HNASSSHBackend, "modify_quota") self._driver.shrink_share(share_nfs, 11) ssh.HNASSSHBackend.get_share_usage.assert_called_once_with( share_nfs['id']) ssh.HNASSSHBackend.modify_quota.assert_called_once_with( share_nfs['id'], 11) def test_shrink_share_new_size_lower_than_usage(self): self.mock_object(ssh.HNASSSHBackend, "get_share_usage", mock.Mock( return_value=10)) self.assertRaises(exception.ShareShrinkingPossibleDataLoss, self._driver.shrink_share, share_nfs, 9) ssh.HNASSSHBackend.get_share_usage.assert_called_once_with( share_nfs['id']) def test_extend_share(self): self.mock_object(ssh.HNASSSHBackend, "get_stats", mock.Mock( return_value=(500, 200, True))) self.mock_object(ssh.HNASSSHBackend, "modify_quota") self._driver.extend_share(share_nfs, 150) ssh.HNASSSHBackend.get_stats.assert_called_once_with() ssh.HNASSSHBackend.modify_quota.assert_called_once_with( share_nfs['id'], 150) def test_extend_share_with_no_available_space_in_fs(self): self.mock_object(ssh.HNASSSHBackend, "get_stats", mock.Mock( return_value=(500, 200, False))) self.mock_object(ssh.HNASSSHBackend, "modify_quota") self.assertRaises(exception.HNASBackendException, self._driver.extend_share, share_nfs, 1000) ssh.HNASSSHBackend.get_stats.assert_called_once_with() @ddt.data(share_nfs, share_cifs) def test_manage_existing(self, share): expected_exports = [ self._get_export( share['id'], share['share_proto'], self._driver.hnas_evs_ip, 
False), self._get_export( share['id'], share['share_proto'], self._driver.hnas_admin_network_ip, True)] expected_out = {'size': share['size'], 'export_locations': expected_exports} self.mock_object(ssh.HNASSSHBackend, "get_share_quota", mock.Mock( return_value=share['size'])) out = self._driver.manage_existing(share, 'option') self.assertEqual(expected_out, out) ssh.HNASSSHBackend.get_share_quota.assert_called_once_with( share['id']) def test_manage_existing_no_quota(self): self.mock_object(ssh.HNASSSHBackend, "get_share_quota", mock.Mock( return_value=None)) self.assertRaises(exception.ManageInvalidShare, self._driver.manage_existing, share_nfs, 'option') ssh.HNASSSHBackend.get_share_quota.assert_called_once_with( share_nfs['id']) def test_manage_existing_wrong_share_id(self): self.mock_object(self.fake_private_storage, 'get', mock.Mock(return_value='Wrong_share_id')) self.assertRaises(exception.HNASBackendException, self._driver.manage_existing, share_nfs, 'option') @ddt.data(':/', '1.1.1.1:/share_id', '1.1.1.1:/shares', '1.1.1.1:shares/share_id', ':/share_id') def test_manage_existing_wrong_path_format_nfs(self, wrong_location): expected_exception = ("Share backend error: Incorrect path. It " "should have the following format: " "IP:/shares/share_id.") self._test_manage_existing_wrong_path( share_nfs.copy(), expected_exception, wrong_location) @ddt.data('\\\\1.1.1.1', '1.1.1.1\\share_id', '1.1.1.1\\shares\\share_id', '\\\\1.1.1.1\\shares\\share_id', '\\\\share_id') def test_manage_existing_wrong_path_format_cifs(self, wrong_location): expected_exception = ("Share backend error: Incorrect path. It should " "have the following format: \\\\IP\\share_id.") self._test_manage_existing_wrong_path( share_cifs.copy(), expected_exception, wrong_location) def _test_manage_existing_wrong_path( self, share, expected_exception, wrong_location): share['export_locations'] = [{'path': wrong_location}] ex = self.assertRaises(exception.ShareBackendException, self._driver.manage_existing, share, 'option') self.assertEqual(expected_exception, ex.msg) def test_manage_existing_wrong_evs_ip(self): share_nfs['export_locations'] = [{'path': '172.24.44.189:/shares/' 'aa4a7710-f326-41fb-ad18-'}] self.assertRaises(exception.ShareBackendException, self._driver.manage_existing, share_nfs, 'option') def test_manage_existing_invalid_host(self): self.assertRaises(exception.ShareBackendException, self._driver.manage_existing, share_invalid_host, 'option') def test_manage_existing_invalid_protocol(self): self.assertRaises(exception.ShareBackendException, self._driver.manage_existing, invalid_share, 'option') @ddt.data(True, False) def test_unmanage(self, has_export_locations): share_copy = share_nfs.copy() if not has_export_locations: share_copy['export_locations'] = [] self._driver.unmanage(share_copy) self.assertTrue(self.fake_private_storage.delete.called) self.assertTrue(self.mock_log.info.called) def test_get_network_allocations_number(self): result = self._driver.get_network_allocations_number() self.assertEqual(0, result) @ddt.data([share_nfs, snapshot_nfs], [share_cifs, snapshot_cifs]) @ddt.unpack def test_create_share_from_snapshot(self, share, snapshot): self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted", mock.Mock()) self.mock_object(ssh.HNASSSHBackend, "vvol_create") self.mock_object(ssh.HNASSSHBackend, "quota_add") self.mock_object(ssh.HNASSSHBackend, "tree_clone") self.mock_object(ssh.HNASSSHBackend, "cifs_share_add") self.mock_object(ssh.HNASSSHBackend, "nfs_export_add") result = 
self._driver.create_share_from_snapshot('context', share, snapshot) ssh.HNASSSHBackend.vvol_create.assert_called_once_with(share['id']) ssh.HNASSSHBackend.quota_add.assert_called_once_with(share['id'], share['size']) ssh.HNASSSHBackend.tree_clone.assert_called_once_with( '/snapshots/' + share['id'] + '/' + snapshot['id'], '/shares/' + share['id']) expected = [ self._get_export( share['id'], share['share_proto'], self._driver.hnas_evs_ip, False), self._get_export( share['id'], share['share_proto'], self._driver.hnas_admin_network_ip, True)] if share['share_proto'].lower() == 'nfs': ssh.HNASSSHBackend.nfs_export_add.assert_called_once_with( share['id']) self.assertFalse(ssh.HNASSSHBackend.cifs_share_add.called) else: ssh.HNASSSHBackend.cifs_share_add.assert_called_once_with( share['id']) self.assertFalse(ssh.HNASSSHBackend.nfs_export_add.called) self.assertEqual(expected, result) def test_create_share_from_snapshot_empty_snapshot(self): self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted", mock.Mock()) self.mock_object(ssh.HNASSSHBackend, "vvol_create") self.mock_object(ssh.HNASSSHBackend, "quota_add") self.mock_object(ssh.HNASSSHBackend, "tree_clone", mock.Mock( side_effect=exception.HNASNothingToCloneException('msg'))) self.mock_object(ssh.HNASSSHBackend, "nfs_export_add") result = self._driver.create_share_from_snapshot('context', share_nfs, snapshot_nfs) expected = [ self._get_export( share_nfs['id'], share_nfs['share_proto'], self._driver.hnas_evs_ip, False), self._get_export( share_nfs['id'], share_nfs['share_proto'], self._driver.hnas_admin_network_ip, True)] self.assertEqual(expected, result) self.assertTrue(self.mock_log.warning.called) ssh.HNASSSHBackend.vvol_create.assert_called_once_with(share_nfs['id']) ssh.HNASSSHBackend.quota_add.assert_called_once_with(share_nfs['id'], share_nfs['size']) ssh.HNASSSHBackend.tree_clone.assert_called_once_with( '/snapshots/' + share_nfs['id'] + '/' + snapshot_nfs['id'], '/shares/' + share_nfs['id']) ssh.HNASSSHBackend.nfs_export_add.assert_called_once_with( share_nfs['id']) def test_create_share_from_snapshot_invalid_protocol(self): self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted", mock.Mock()) self.mock_object(ssh.HNASSSHBackend, "vvol_create") self.mock_object(ssh.HNASSSHBackend, "quota_add") self.mock_object(ssh.HNASSSHBackend, "tree_clone") ex = self.assertRaises(exception.ShareBackendException, self._driver.create_share_from_snapshot, 'context', invalid_share, snapshot_nfs) self.assertEqual(invalid_protocol_msg, ex.msg) def test_create_share_from_snapshot_cleanup(self): dest_path = '/snapshots/' + share_nfs['id'] + '/' + snapshot_nfs['id'] src_path = '/shares/' + share_nfs['id'] self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted", mock.Mock()) self.mock_object(ssh.HNASSSHBackend, "vvol_create") self.mock_object(ssh.HNASSSHBackend, "quota_add") self.mock_object(ssh.HNASSSHBackend, "tree_clone") self.mock_object(ssh.HNASSSHBackend, "vvol_delete") self.mock_object(ssh.HNASSSHBackend, "nfs_export_add", mock.Mock( side_effect=exception.HNASBackendException( msg='Error adding nfs export.'))) self.assertRaises(exception.HNASBackendException, self._driver.create_share_from_snapshot, 'context', share_nfs, snapshot_nfs) ssh.HNASSSHBackend.vvol_create.assert_called_once_with( share_nfs['id']) ssh.HNASSSHBackend.quota_add.assert_called_once_with( share_nfs['id'], share_nfs['size']) ssh.HNASSSHBackend.tree_clone.assert_called_once_with( dest_path, src_path) ssh.HNASSSHBackend.nfs_export_add.assert_called_once_with( 
share_nfs['id']) ssh.HNASSSHBackend.vvol_delete.assert_called_once_with( share_nfs['id']) def test__check_fs_mounted(self): self._driver._check_fs_mounted() ssh.HNASSSHBackend.check_fs_mounted.assert_called_once_with() def test__check_fs_mounted_not_mounted(self): self.mock_object(ssh.HNASSSHBackend, 'check_fs_mounted', mock.Mock( return_value=False)) self.assertRaises(exception.HNASBackendException, self._driver._check_fs_mounted) ssh.HNASSSHBackend.check_fs_mounted.assert_called_once_with() def test__update_share_stats(self): fake_data = { 'share_backend_name': self._driver.backend_name, 'driver_handles_share_servers': self._driver.driver_handles_share_servers, 'vendor_name': 'Hitachi', 'driver_version': '4.0.0', 'storage_protocol': 'NFS_CIFS', 'total_capacity_gb': 1000, 'free_capacity_gb': 200, 'reserved_percentage': driver.CONF.reserved_share_percentage, 'qos': False, 'thin_provisioning': True, 'dedupe': True, 'revert_to_snapshot_support': True, 'mount_snapshot_support': True, } self.mock_object(ssh.HNASSSHBackend, 'get_stats', mock.Mock( return_value=(1000, 200, True))) self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted", mock.Mock()) self.mock_object(manila.share.driver.ShareDriver, '_update_share_stats') self._driver._update_share_stats() self.assertTrue(self._driver.hnas.get_stats.called) (manila.share.driver.ShareDriver._update_share_stats. assert_called_once_with(fake_data)) self.assertTrue(self.mock_log.info.called) @ddt.data(snapshot_nfs, snapshot_cifs, snapshot_mount_support_nfs, snapshot_mount_support_cifs) def test_ensure_snapshot(self, snapshot): result = self._driver.ensure_snapshot('context', snapshot) if snapshot['share'].get('mount_snapshot_support'): expected = [ self._get_export( snapshot['id'], snapshot['share']['share_proto'], self._driver.hnas_evs_ip, False, is_snapshot=True), self._get_export( snapshot['id'], snapshot['share']['share_proto'], self._driver.hnas_admin_network_ip, True, is_snapshot=True)] if snapshot['share']['share_proto'].lower() == 'nfs': ssh.HNASSSHBackend.check_export.assert_called_once_with( snapshot['id'], is_snapshot=True) self.assertFalse(ssh.HNASSSHBackend.check_cifs.called) else: ssh.HNASSSHBackend.check_cifs.assert_called_once_with( snapshot['id']) self.assertFalse(ssh.HNASSSHBackend.check_export.called) else: expected = None ssh.HNASSSHBackend.check_directory.assert_called_once_with( snapshot['provider_location']) self.assertEqual(expected, result) def test_manage_existing_snapshot(self): self.mock_object(ssh.HNASSSHBackend, 'check_directory', mock.Mock(return_value=True)) self.mock_object(self._driver, '_ensure_snapshot', mock.Mock(return_value=[])) path_info = manage_snapshot['provider_location'].split('/') hnas_snapshot_id = path_info[3] out = self._driver.manage_existing_snapshot(manage_snapshot, {'size': 20}) ssh.HNASSSHBackend.check_directory.assert_called_with( '/snapshots/aa4a7710-f326-41fb-ad18-b4ad587fc87a' '/snapshot18-05-2106') self._driver._ensure_snapshot.assert_called_with( manage_snapshot, hnas_snapshot_id) self.assertEqual(20, out['size']) self.assertTrue(self.mock_log.debug.called) self.assertTrue(self.mock_log.info.called) @ddt.data(None, exception.HNASItemNotFoundException('Fake error.')) def test_manage_existing_snapshot_with_mount_support(self, exc): export_locations = [{ 'path': '172.24.44.10:/snapshots/' '3377b015-a695-4a5a-8aa5-9b931b023380'}] self.mock_object(ssh.HNASSSHBackend, 'check_directory', mock.Mock(return_value=True)) self.mock_object(self._driver, '_ensure_snapshot', 
mock.Mock(return_value=[], side_effect=exc)) self.mock_object(self._driver, '_get_export_locations', mock.Mock(return_value=export_locations)) if exc: self.mock_object(self._driver, '_create_export') path_info = snapshot_mount_support_nfs['provider_location'].split('/') hnas_snapshot_id = path_info[3] out = self._driver.manage_existing_snapshot( snapshot_mount_support_nfs, {'size': 20, 'export_locations': export_locations}) ssh.HNASSSHBackend.check_directory.assert_called_with( '/snapshots/62125744-fcdd-4f55-a8c1-d1498102f634' '/3377b015-a695-4a5a-8aa5-9b931b023380') self._driver._ensure_snapshot.assert_called_with( snapshot_mount_support_nfs, hnas_snapshot_id) self._driver._get_export_locations.assert_called_with( snapshot_mount_support_nfs['share']['share_proto'], hnas_snapshot_id, is_snapshot=True) if exc: self._driver._create_export.assert_called_with( snapshot_mount_support_nfs['share_id'], snapshot_mount_support_nfs['share']['share_proto'], snapshot_id=hnas_snapshot_id) self.assertEqual(20, out['size']) self.assertEqual(export_locations, out['export_locations']) self.assertTrue(self.mock_log.debug.called) self.assertTrue(self.mock_log.info.called) @ddt.data('fake_size', '128GB', '512 GB', {'size': 128}) def test_manage_snapshot_invalid_size_exception(self, size): self.assertRaises(exception.ManageInvalidShareSnapshot, self._driver.manage_existing_snapshot, manage_snapshot, {'size': size}) def test_manage_snapshot_size_not_provided_exception(self): self.assertRaises(exception.ManageInvalidShareSnapshot, self._driver.manage_existing_snapshot, manage_snapshot, {}) @ddt.data('/root/snapshot_id', '/snapshots/share1/snapshot_id', '/directory1', 'snapshots/share1/snapshot_id') def test_manage_snapshot_invalid_path_exception(self, path): snap_copy = manage_snapshot.copy() snap_copy['provider_location'] = path self.assertRaises(exception.ManageInvalidShareSnapshot, self._driver.manage_existing_snapshot, snap_copy, {'size': 20}) self.assertTrue(self.mock_log.debug.called) def test_manage_inexistent_snapshot_exception(self): self.mock_object(ssh.HNASSSHBackend, 'check_directory', mock.Mock(return_value=False)) self.assertRaises(exception.ManageInvalidShareSnapshot, self._driver.manage_existing_snapshot, manage_snapshot, {'size': 20}) self.assertTrue(self.mock_log.debug.called) def test_unmanage_snapshot(self): self._driver.unmanage_snapshot(snapshot_nfs) self.assertTrue(self.mock_log.info.called) @ddt.data({'snap': snapshot_nfs, 'exc': None}, {'snap': snapshot_cifs, 'exc': None}, {'snap': snapshot_nfs, 'exc': exception.HNASNothingToCloneException('fake')}, {'snap': snapshot_cifs, 'exc': exception.HNASNothingToCloneException('fake')}) @ddt.unpack def test_revert_to_snapshot(self, exc, snap): self.mock_object(driver.HitachiHNASDriver, "_check_fs_mounted") self.mock_object(ssh.HNASSSHBackend, 'tree_delete') self.mock_object(ssh.HNASSSHBackend, 'vvol_create') self.mock_object(ssh.HNASSSHBackend, 'quota_add') self.mock_object(ssh.HNASSSHBackend, 'tree_clone', mock.Mock(side_effect=exc)) self._driver.revert_to_snapshot('context', snap, None, None) driver.HitachiHNASDriver._check_fs_mounted.assert_called_once_with() ssh.HNASSSHBackend.tree_delete.assert_called_once_with( '/'.join(('/shares', snap['share_id']))) ssh.HNASSSHBackend.vvol_create.assert_called_once_with( snap['share_id']) ssh.HNASSSHBackend.quota_add.assert_called_once_with( snap['share_id'], 2) ssh.HNASSSHBackend.tree_clone.assert_called_once_with( '/'.join(('/snapshots', snap['share_id'], snap['id'])), '/'.join(('/shares', 
snap['share_id']))) ssh.HNASSSHBackend.check_directory.assert_called_once_with( snap['provider_location']) if exc: self.assertTrue(self.mock_log.warning.called) self.assertTrue(self.mock_log.info.called) def test_nfs_snapshot_update_access_allow(self): access1 = { 'access_type': 'ip', 'access_to': '172.24.10.10', } access2 = { 'access_type': 'ip', 'access_to': '172.31.20.20', } access_list = [access1, access2] self.mock_object(ssh.HNASSSHBackend, "update_nfs_access_rule") self._driver.snapshot_update_access('ctxt', snapshot_nfs, access_list, access_list, []) ssh.HNASSSHBackend.update_nfs_access_rule.assert_called_once_with( [access1['access_to'] + '(ro)', access2['access_to'] + '(ro)'], snapshot_id=snapshot_nfs['id']) ssh.HNASSSHBackend.check_directory.assert_called_once_with( snapshot_nfs['provider_location']) self.assertTrue(self.mock_log.debug.called) def test_nfs_snapshot_update_access_deny(self): access1 = { 'access_type': 'ip', 'access_to': '172.24.10.10', } self.mock_object(ssh.HNASSSHBackend, "update_nfs_access_rule") self._driver.snapshot_update_access('ctxt', snapshot_nfs, [], [], [access1]) ssh.HNASSSHBackend.update_nfs_access_rule.assert_called_once_with( [], snapshot_id=snapshot_nfs['id']) ssh.HNASSSHBackend.check_directory.assert_called_once_with( snapshot_nfs['provider_location']) self.assertTrue(self.mock_log.debug.called) def test_nfs_snapshot_update_access_invalid_access_type(self): access1 = { 'access_type': 'user', 'access_to': 'user1', } self.assertRaises(exception.InvalidSnapshotAccess, self._driver.snapshot_update_access, 'ctxt', snapshot_nfs, [access1], [], []) ssh.HNASSSHBackend.check_directory.assert_called_once_with( snapshot_nfs['provider_location']) def test_cifs_snapshot_update_access_allow(self): access1 = { 'access_type': 'user', 'access_to': 'fake_user1', } self.mock_object(ssh.HNASSSHBackend, 'cifs_allow_access') self._driver.snapshot_update_access('ctxt', snapshot_cifs, [access1], [access1], []) ssh.HNASSSHBackend.cifs_allow_access.assert_called_with( snapshot_cifs['id'], access1['access_to'], 'ar', is_snapshot=True) ssh.HNASSSHBackend.check_directory.assert_called_once_with( snapshot_cifs['provider_location']) self.assertTrue(self.mock_log.debug.called) def test_cifs_snapshot_update_access_deny(self): access1 = { 'access_type': 'user', 'access_to': 'fake_user1', } self.mock_object(ssh.HNASSSHBackend, 'cifs_deny_access') self._driver.snapshot_update_access('ctxt', snapshot_cifs, [], [], [access1]) ssh.HNASSSHBackend.cifs_deny_access.assert_called_with( snapshot_cifs['id'], access1['access_to'], is_snapshot=True) ssh.HNASSSHBackend.check_directory.assert_called_once_with( snapshot_cifs['provider_location']) self.assertTrue(self.mock_log.debug.called) def test_cifs_snapshot_update_access_recovery_mode(self): access1 = { 'access_type': 'user', 'access_to': 'fake_user1', } access2 = { 'access_type': 'user', 'access_to': 'HDS\\fake_user2', } access_list = [access1, access2] permission_list = [('fake_user1', 'ar'), ('HDS\\fake_user2', 'ar')] formatted_user = r'"\{1}{0}\{1}"'.format(access2['access_to'], '"') self.mock_object(ssh.HNASSSHBackend, 'list_cifs_permissions', mock.Mock(return_value=permission_list)) self.mock_object(ssh.HNASSSHBackend, 'cifs_deny_access') self.mock_object(ssh.HNASSSHBackend, 'cifs_allow_access') self._driver.snapshot_update_access('ctxt', snapshot_cifs, access_list, [], []) ssh.HNASSSHBackend.list_cifs_permissions.assert_called_once_with( snapshot_cifs['id']) ssh.HNASSSHBackend.cifs_deny_access.assert_called_with( snapshot_cifs['id'], 
formatted_user, is_snapshot=True) ssh.HNASSSHBackend.cifs_allow_access.assert_called_with( snapshot_cifs['id'], access2['access_to'].replace('\\', '\\\\'), 'ar', is_snapshot=True) ssh.HNASSSHBackend.check_directory.assert_called_once_with( snapshot_cifs['provider_location']) self.assertTrue(self.mock_log.debug.called) manila-10.0.0/manila/tests/share/test_share_utils.py0000664000175000017500000002054413656750227022535 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright (c) 2015 Rushil Chugh # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests For miscellaneous util methods used with share.""" from unittest import mock import ddt from manila.common import constants from manila.share import utils as share_utils from manila import test @ddt.ddt class ShareUtilsTestCase(test.TestCase): def test_extract_host_without_pool(self): host = 'Host@Backend' self.assertEqual( 'Host@Backend', share_utils.extract_host(host)) def test_extract_host_only_return_host(self): host = 'Host@Backend' self.assertEqual( 'Host', share_utils.extract_host(host, 'host')) def test_extract_host_only_return_pool(self): host = 'Host@Backend' self.assertIsNone( share_utils.extract_host(host, 'pool')) def test_extract_host_only_return_backend(self): host = 'Host@Backend' self.assertEqual( 'Host@Backend', share_utils.extract_host(host, 'backend')) def test_extract_host_missing_backend_and_pool(self): host = 'Host' # Default level is 'backend' self.assertEqual( 'Host', share_utils.extract_host(host)) def test_extract_host_only_return_backend_name(self): host = 'Host@Backend#Pool' self.assertEqual( 'Backend', share_utils.extract_host(host, 'backend_name')) def test_extract_host_only_return_backend_name_index_error(self): host = 'Host#Pool' self.assertRaises(IndexError, share_utils.extract_host, host, 'backend_name') def test_extract_host_missing_backend(self): host = 'Host#Pool' self.assertEqual( 'Host', share_utils.extract_host(host)) self.assertEqual( 'Host', share_utils.extract_host(host, 'host')) def test_extract_host_missing_backend_only_return_backend(self): host = 'Host#Pool' self.assertEqual( 'Host', share_utils.extract_host(host, 'backend')) def test_extract_host_missing_backend_only_return_pool(self): host = 'Host#Pool' self.assertEqual( 'Pool', share_utils.extract_host(host, 'pool')) self.assertEqual( 'Pool', share_utils.extract_host(host, 'pool', True)) def test_extract_host_missing_pool(self): host = 'Host@Backend' self.assertIsNone( share_utils.extract_host(host, 'pool')) def test_extract_host_missing_pool_use_default_pool(self): host = 'Host@Backend' self.assertEqual( '_pool0', share_utils.extract_host(host, 'pool', True)) def test_extract_host_with_default_pool(self): host = 'Host' # Default_pool_name doesn't work for level other than 'pool' self.assertEqual( 'Host', share_utils.extract_host(host, 'host', True)) self.assertEqual( 'Host', share_utils.extract_host(host, 'host', False)) self.assertEqual( 'Host', share_utils.extract_host(host, 'backend', True)) self.assertEqual( 
'Host', share_utils.extract_host(host, 'backend', False)) def test_extract_host_with_pool(self): host = 'Host@Backend#Pool' self.assertEqual( 'Host@Backend', share_utils.extract_host(host)) self.assertEqual( 'Host', share_utils.extract_host(host, 'host')) self.assertEqual( 'Host@Backend', share_utils.extract_host(host, 'backend'),) self.assertEqual( 'Pool', share_utils.extract_host(host, 'pool')) self.assertEqual( 'Pool', share_utils.extract_host(host, 'pool', True)) def test_append_host_with_host_and_pool(self): host = 'Host' pool = 'Pool' expected = 'Host#Pool' self.assertEqual(expected, share_utils.append_host(host, pool)) def test_append_host_with_host(self): host = 'Host' pool = None expected = 'Host' self.assertEqual(expected, share_utils.append_host(host, pool)) def test_append_host_with_pool(self): host = None pool = 'pool' expected = None self.assertEqual(expected, share_utils.append_host(host, pool)) def test_append_host_with_no_values(self): host = None pool = None expected = None self.assertEqual(expected, share_utils.append_host(host, pool)) def test_get_active_replica_success(self): replica_list = [{'id': '123456', 'replica_state': constants.REPLICA_STATE_IN_SYNC}, {'id': '654321', 'replica_state': constants.REPLICA_STATE_ACTIVE}, ] replica = share_utils.get_active_replica(replica_list) self.assertEqual('654321', replica['id']) def test_get_active_replica_not_exist(self): replica_list = [{'id': '123456', 'replica_state': constants.REPLICA_STATE_IN_SYNC}, {'id': '654321', 'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC}, ] replica = share_utils.get_active_replica(replica_list) self.assertIsNone(replica) class NotifyUsageTestCase(test.TestCase): @mock.patch('manila.share.utils._usage_from_share') @mock.patch('manila.share.utils.CONF') @mock.patch('manila.share.utils.rpc') def test_notify_about_share_usage(self, mock_rpc, mock_conf, mock_usage): mock_conf.host = 'host1' output = share_utils.notify_about_share_usage(mock.sentinel.context, mock.sentinel.share, mock.sentinel. share_instance, 'test_suffix') self.assertIsNone(output) mock_usage.assert_called_once_with(mock.sentinel.share, mock.sentinel.share_instance) mock_rpc.get_notifier.assert_called_once_with('share', 'host1') mock_rpc.get_notifier.return_value.info.assert_called_once_with( mock.sentinel.context, 'share.test_suffix', mock_usage.return_value) @mock.patch('manila.share.utils._usage_from_share') @mock.patch('manila.share.utils.CONF') @mock.patch('manila.share.utils.rpc') def test_notify_about_share_usage_with_kwargs(self, mock_rpc, mock_conf, mock_usage): mock_conf.host = 'host1' output = share_utils.notify_about_share_usage(mock.sentinel.context, mock.sentinel.share, mock.sentinel. share_instance, 'test_suffix', extra_usage_info={ 'a': 'b', 'c': 'd'}, host='host2') self.assertIsNone(output) mock_usage.assert_called_once_with(mock.sentinel.share, mock.sentinel.share_instance, a='b', c='d') mock_rpc.get_notifier.assert_called_once_with('share', 'host2') mock_rpc.get_notifier.return_value.info.assert_called_once_with( mock.sentinel.context, 'share.test_suffix', mock_usage.return_value) manila-10.0.0/manila/tests/share/test_access.py0000664000175000017500000010645713656750227021464 0ustar zuulzuul00000000000000# Copyright 2016 Hitachi Data Systems inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import itertools import random from unittest import mock import ddt import six from manila.common import constants from manila import context from manila import db from manila import exception from manila.share import access from manila import test from manila.tests import db_utils from manila import utils class LockedOperationsTestCase(test.TestCase): class FakeAccessHelper(object): @access.locked_access_rules_operation def some_access_rules_operation(self, context, share_instance_id=None): pass def setUp(self): super(LockedOperationsTestCase, self).setUp() self.access_helper = self.FakeAccessHelper() self.context = context.RequestContext('fake_user', 'fake_project') self.lock_call = self.mock_object( utils, 'synchronized', mock.Mock(return_value=lambda f: f)) def test_locked_access_rules_operation(self, **replica): self.access_helper.some_access_rules_operation( self.context, share_instance_id='FAKE_INSTANCE_ID') self.lock_call.assert_called_once_with( "locked_access_rules_operation_by_share_instance_FAKE_INSTANCE_ID", external=True) @ddt.ddt class ShareInstanceAccessDatabaseMixinTestCase(test.TestCase): def setUp(self): super(ShareInstanceAccessDatabaseMixinTestCase, self).setUp() self.driver = mock.Mock() self.access_helper = access.ShareInstanceAccess(db, self.driver) self.context = context.RequestContext('fake_user', 'fake_project') self.mock_object( utils, 'synchronized', mock.Mock(return_value=lambda f: f)) def test_get_and_update_access_rules_status_force_status(self): share = db_utils.create_share( access_rule_status=constants.STATUS_ACTIVE, status=constants.STATUS_AVAILABLE) share = db.share_get(self.context, share['id']) self.assertEqual(constants.STATUS_ACTIVE, share['access_rules_status']) self.access_helper.get_and_update_share_instance_access_rules_status( self.context, status=constants.SHARE_INSTANCE_RULES_SYNCING, share_instance_id=share['instance']['id']) share = db.share_get(self.context, share['id']) self.assertEqual(constants.SHARE_INSTANCE_RULES_SYNCING, share['access_rules_status']) @ddt.data((constants.SHARE_INSTANCE_RULES_SYNCING, True), (constants.STATUS_ERROR, False)) @ddt.unpack def test_get_and_update_access_rules_status_conditionally_change( self, initial_status, change_allowed): share = db_utils.create_share(access_rules_status=initial_status, status=constants.STATUS_AVAILABLE) share = db.share_get(self.context, share['id']) self.assertEqual(initial_status, share['access_rules_status']) conditionally_change = { constants.SHARE_INSTANCE_RULES_SYNCING: constants.STATUS_ACTIVE, } updated_instance = ( self.access_helper. 
get_and_update_share_instance_access_rules_status( self.context, conditionally_change=conditionally_change, share_instance_id=share['instance']['id']) ) share = db.share_get(self.context, share['id']) if change_allowed: self.assertEqual(constants.STATUS_ACTIVE, share['access_rules_status']) self.assertIsNotNone(updated_instance) else: self.assertEqual(initial_status, share['access_rules_status']) self.assertIsNone(updated_instance) def test_get_and_update_all_access_rules_just_get(self): share = db_utils.create_share(status=constants.STATUS_AVAILABLE) rule_1 = db_utils.create_access(share_id=share['id']) rule_2 = db_utils.create_access(share_id=share['id']) self.mock_object(db, 'share_instance_access_update') rules = self.access_helper.get_and_update_share_instance_access_rules( self.context, share_instance_id=share['instance']['id']) self.assertEqual(2, len(rules)) rule_ids = [r['access_id'] for r in rules] self.assertIn(rule_1['id'], rule_ids) self.assertIn(rule_2['id'], rule_ids) self.assertFalse(db.share_instance_access_update.called) @ddt.data( ([constants.ACCESS_STATE_QUEUED_TO_APPLY], 2), ([constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.STATUS_ACTIVE], 1), ([constants.ACCESS_STATE_APPLYING], 2), ([constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_ERROR], 1), ([constants.ACCESS_STATE_ACTIVE, constants.ACCESS_STATE_DENYING], 0)) @ddt.unpack def test_get_and_update_all_access_rules_updates_conditionally_changed( self, statuses, changes_allowed): share = db_utils.create_share(status=constants.STATUS_AVAILABLE) db_utils.create_access(share_id=share['id'], state=statuses[0]) db_utils.create_access(share_id=share['id'], state=statuses[-1]) self.mock_object(db, 'share_instance_access_update', mock.Mock( side_effect=db.share_instance_access_update)) updates = { 'access_key': 'renfrow2stars' } expected_updates = { 'access_key': 'renfrow2stars', 'state': constants.ACCESS_STATE_QUEUED_TO_DENY, } conditionally_change = { constants.ACCESS_STATE_APPLYING: constants.ACCESS_STATE_QUEUED_TO_DENY, constants.ACCESS_STATE_QUEUED_TO_APPLY: constants.ACCESS_STATE_QUEUED_TO_DENY, } rules = self.access_helper.get_and_update_share_instance_access_rules( self.context, share_instance_id=share['instance']['id'], updates=updates, conditionally_change=conditionally_change) state_changed_rules = [ r for r in rules if r['state'] == constants.ACCESS_STATE_QUEUED_TO_DENY ] self.assertEqual(changes_allowed, len(state_changed_rules)) self.assertEqual(2, db.share_instance_access_update.call_count) db.share_instance_access_update.assert_has_calls([ mock.call(self.context, mock.ANY, share['instance']['id'], expected_updates), ] * changes_allowed) def test_get_and_update_access_rule_just_get(self): share = db_utils.create_share(status=constants.STATUS_AVAILABLE) expected_rule = db_utils.create_access(share_id=share['id']) self.mock_object(db, 'share_instance_access_update') actual_rule = ( self.access_helper.get_and_update_share_instance_access_rule( self.context, expected_rule['id'], share_instance_id=share['instance']['id']) ) self.assertEqual(expected_rule['id'], actual_rule['access_id']) self.assertFalse(db.share_instance_access_update.called) @ddt.data(constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_DENYING, constants.ACCESS_STATE_ACTIVE, constants.ACCESS_STATE_QUEUED_TO_APPLY) def test_get_and_update_access_rule_updates_conditionally_changed( self, initial_state): mock_debug_log = self.mock_object(access.LOG, 'debug') share = db_utils.create_share(status=constants.STATUS_AVAILABLE) rule = 
db_utils.create_access(share_id=share['id'], state=initial_state) self.mock_object(db, 'share_instance_access_update', mock.Mock( side_effect=db.share_instance_access_update)) updates = { 'access_key': 'renfrow2stars' } conditionally_change = { constants.ACCESS_STATE_APPLYING: constants.ACCESS_STATE_QUEUED_TO_DENY, constants.ACCESS_STATE_DENYING: constants.ACCESS_STATE_QUEUED_TO_DENY, } actual_rule = ( self.access_helper.get_and_update_share_instance_access_rule( self.context, rule['id'], updates=updates, share_instance_id=share['instance']['id'], conditionally_change=conditionally_change) ) self.assertEqual(rule['id'], actual_rule['access_id']) if 'ing' in initial_state: self.assertEqual(constants.ACCESS_STATE_QUEUED_TO_DENY, actual_rule['state']) self.assertFalse(mock_debug_log.called) else: self.assertEqual(initial_state, actual_rule['state']) mock_debug_log.assert_called_once() @ddt.ddt class ShareInstanceAccessTestCase(test.TestCase): def setUp(self): super(ShareInstanceAccessTestCase, self).setUp() self.driver = self.mock_class("manila.share.driver.ShareDriver", mock.Mock()) self.access_helper = access.ShareInstanceAccess(db, self.driver) self.context = context.RequestContext('fake_user', 'fake_project') @ddt.data(constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_DENYING) def test_update_access_rules_an_update_is_in_progress(self, initial_state): share = db_utils.create_share(status=constants.STATUS_AVAILABLE) share_instance = share['instance'] db_utils.create_access(share_id=share['id'], state=initial_state) mock_debug_log = self.mock_object(access.LOG, 'debug') self.mock_object(self.access_helper, '_update_access_rules') get_and_update_call = self.mock_object( self.access_helper, 'get_and_update_share_instance_access_rules', mock.Mock(side_effect=self.access_helper. get_and_update_share_instance_access_rules)) retval = self.access_helper.update_access_rules( self.context, share_instance['id']) expected_filters = { 'state': (constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_DENYING), } self.assertIsNone(retval) mock_debug_log.assert_called_once() get_and_update_call.assert_called_once_with( self.context, filters=expected_filters, share_instance_id=share_instance['id']) self.assertFalse(self.access_helper._update_access_rules.called) def test_update_access_rules_nothing_to_update(self): share = db_utils.create_share(status=constants.STATUS_AVAILABLE) share_instance = share['instance'] db_utils.create_access(share_id=share['id'], state=constants.STATUS_ACTIVE) mock_debug_log = self.mock_object(access.LOG, 'debug') self.mock_object(self.access_helper, '_update_access_rules') get_and_update_call = self.mock_object( self.access_helper, 'get_and_update_share_instance_access_rules', mock.Mock(side_effect=self.access_helper. 
get_and_update_share_instance_access_rules)) retval = self.access_helper.update_access_rules( self.context, share_instance['id']) expected_rule_filter_1 = { 'state': (constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_DENYING), } expected_rule_filter_2 = { 'state': (constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_QUEUED_TO_DENY), } expected_conditionally_change = { constants.ACCESS_STATE_QUEUED_TO_APPLY: constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_QUEUED_TO_DENY: constants.ACCESS_STATE_DENYING, } self.assertIsNone(retval) mock_debug_log.assert_called_once() get_and_update_call.assert_has_calls( [ mock.call(self.context, filters=expected_rule_filter_1, share_instance_id=share_instance['id']), mock.call(self.context, filters=expected_rule_filter_2, share_instance_id=share_instance['id'], conditionally_change=expected_conditionally_change), ]) self.assertFalse(self.access_helper._update_access_rules.called) @ddt.data(True, False) def test_update_access_rules_delete_all_rules(self, delete_all_rules): share = db_utils.create_share(status=constants.STATUS_AVAILABLE) share_instance = share['instance'] db_utils.create_access( share_id=share['id'], state=constants.STATUS_ACTIVE) db_utils.create_access( share_id=share['id'], state=constants.ACCESS_STATE_QUEUED_TO_APPLY) db_utils.create_access( share_id=share['id'], state=constants.ACCESS_STATE_QUEUED_TO_DENY) mock_debug_log = self.mock_object(access.LOG, 'debug') self.mock_object(self.access_helper, '_update_access_rules') get_and_update_call = self.mock_object( self.access_helper, 'get_and_update_share_instance_access_rules', mock.Mock(side_effect=self.access_helper. get_and_update_share_instance_access_rules)) retval = self.access_helper.update_access_rules( self.context, share_instance['id'], delete_all_rules=delete_all_rules) expected_rule_filter_1 = { 'state': (constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_DENYING), } expected_rule_filter_2 = { 'state': (constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_QUEUED_TO_DENY), } expected_conditionally_change = { constants.ACCESS_STATE_QUEUED_TO_APPLY: constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_QUEUED_TO_DENY: constants.ACCESS_STATE_DENYING, } expected_get_and_update_calls = [] if delete_all_rules: deny_all_updates = { 'state': constants.ACCESS_STATE_QUEUED_TO_DENY, } expected_get_and_update_calls = [ mock.call(self.context, updates=deny_all_updates, share_instance_id=share_instance['id']), ] expected_get_and_update_calls.extend([ mock.call(self.context, filters=expected_rule_filter_1, share_instance_id=share_instance['id']), mock.call(self.context, filters=expected_rule_filter_2, share_instance_id=share_instance['id'], conditionally_change=expected_conditionally_change), ]) self.assertIsNone(retval) mock_debug_log.assert_called_once() get_and_update_call.assert_has_calls(expected_get_and_update_calls) self.access_helper._update_access_rules.assert_called_once_with( self.context, share_instance['id'], share_server=None) @ddt.data(*itertools.product( (True, False), (constants.ACCESS_STATE_ERROR, constants.ACCESS_STATE_ACTIVE))) @ddt.unpack def test__update_access_rules_with_driver_updates( self, driver_returns_updates, access_state): expected_access_rules_status = ( constants.STATUS_ACTIVE if access_state == constants.ACCESS_STATE_ACTIVE else constants.SHARE_INSTANCE_RULES_ERROR ) share = db_utils.create_share( status=constants.STATUS_AVAILABLE, access_rules_status=expected_access_rules_status) share_instance_id = 
share['instance']['id'] rule_1 = db_utils.create_access( share_id=share['id'], state=access_state) rule_1 = db.share_instance_access_get( self.context, rule_1['id'], share_instance_id) rule_2 = db_utils.create_access( share_id=share['id'], state=constants.ACCESS_STATE_APPLYING) rule_2 = db.share_instance_access_get( self.context, rule_2['id'], share_instance_id) rule_3 = db_utils.create_access( share_id=share['id'], state=constants.ACCESS_STATE_DENYING) rule_3 = db.share_instance_access_get( self.context, rule_3['id'], share_instance_id) if driver_returns_updates: driver_rule_updates = { rule_3['access_id']: {'access_key': 'alic3h4sAcc355'}, rule_2['access_id']: {'state': access_state} } else: driver_rule_updates = None shr_instance_access_rules_status_update_call = self.mock_object( self.access_helper, 'get_and_update_share_instance_access_rules_status', mock.Mock(side_effect=self.access_helper. get_and_update_share_instance_access_rules_status)) all_access_rules_update_call = self.mock_object( self.access_helper, 'get_and_update_share_instance_access_rules', mock.Mock(side_effect=self.access_helper. get_and_update_share_instance_access_rules)) one_access_rule_update_call = self.mock_object( self.access_helper, 'get_and_update_share_instance_access_rule', mock.Mock(side_effect=self.access_helper. get_and_update_share_instance_access_rule)) driver_call = self.mock_object( self.access_helper.driver, 'update_access', mock.Mock(return_value=driver_rule_updates)) self.mock_object(self.access_helper, '_check_needs_refresh', mock.Mock(return_value=False)) retval = self.access_helper._update_access_rules( self.context, share_instance_id, share_server='fake_server') # Expected Values: if access_state != constants.ACCESS_STATE_ERROR: expected_rules_to_be_on_share = [r['id'] for r in (rule_1, rule_2)] else: expected_rules_to_be_on_share = [rule_2['id']] expected_filters_1 = { 'state': (constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_ACTIVE, constants.ACCESS_STATE_DENYING), } expected_filters_2 = {'state': constants.STATUS_ERROR} expected_get_and_update_calls = [ mock.call(self.context, filters=expected_filters_1, share_instance_id=share_instance_id), mock.call(self.context, filters=expected_filters_2, share_instance_id=share_instance_id), ] expected_access_rules_status_change_cond1 = { constants.STATUS_ACTIVE: constants.SHARE_INSTANCE_RULES_SYNCING, } if access_state == constants.SHARE_INSTANCE_RULES_ERROR: expected_access_rules_status_change_cond2 = { constants.SHARE_INSTANCE_RULES_SYNCING: constants.SHARE_INSTANCE_RULES_ERROR, } else: expected_access_rules_status_change_cond2 = { constants.SHARE_INSTANCE_RULES_SYNCING: constants.STATUS_ACTIVE, constants.SHARE_INSTANCE_RULES_ERROR: constants.STATUS_ACTIVE, } call_args = driver_call.call_args_list[0][0] call_kwargs = driver_call.call_args_list[0][1] access_rules_to_be_on_share = [r['id'] for r in call_args[2]] # Asserts self.assertIsNone(retval) self.assertEqual(share_instance_id, call_args[1]['id']) self.assertTrue(isinstance(access_rules_to_be_on_share, list)) self.assertEqual(len(expected_rules_to_be_on_share), len(access_rules_to_be_on_share)) for pool in expected_rules_to_be_on_share: self.assertIn(pool, access_rules_to_be_on_share) self.assertEqual(1, len(call_kwargs['add_rules'])) self.assertEqual(rule_2['id'], call_kwargs['add_rules'][0]['id']) self.assertEqual(1, len(call_kwargs['delete_rules'])) self.assertEqual(rule_3['id'], call_kwargs['delete_rules'][0]['id']) self.assertEqual('fake_server', call_kwargs['share_server']) 
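# --- Editorial aside (kept as a comment so it does not alter this module) ---
# The assertions immediately below verify the ``conditionally_change``
# mappings handed to the access helper: a dict of {current_status: new_status}
# that is applied only when the row is already in one of the listed states.
# A minimal, hypothetical sketch of that pattern, using names local to this
# comment only (nothing here is a manila API):
#
#     def conditionally_change(current, transitions):
#         # Return the new status if a transition applies, else keep the old.
#         return transitions.get(current, current)
#
#     assert conditionally_change('syncing', {'syncing': 'active'}) == 'active'
#     assert conditionally_change('error', {'syncing': 'active'}) == 'error'
# -----------------------------------------------------------------------------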
shr_instance_access_rules_status_update_call.assert_has_calls([ mock.call( self.context, share_instance_id=share_instance_id, conditionally_change=expected_access_rules_status_change_cond1 ), mock.call( self.context, share_instance_id=share_instance_id, conditionally_change=expected_access_rules_status_change_cond2 ), ]) if driver_returns_updates: expected_conditional_state_updates = { constants.ACCESS_STATE_APPLYING: access_state, constants.ACCESS_STATE_DENYING: access_state, constants.ACCESS_STATE_ACTIVE: access_state, } expected_access_rule_update_calls = [ mock.call( self.context, rule_3['access_id'], updates={'access_key': 'alic3h4sAcc355'}, share_instance_id=share_instance_id, conditionally_change={}), mock.call( self.context, rule_2['access_id'], updates=mock.ANY, share_instance_id=share_instance_id, conditionally_change=expected_conditional_state_updates) ] one_access_rule_update_call.assert_has_calls( expected_access_rule_update_calls, any_order=True) else: self.assertFalse(one_access_rule_update_call.called) expected_conditionally_change = { constants.ACCESS_STATE_APPLYING: constants.ACCESS_STATE_ACTIVE, } expected_get_and_update_calls.append( mock.call(self.context, share_instance_id=share_instance_id, conditionally_change=expected_conditionally_change)) all_access_rules_update_call.assert_has_calls( expected_get_and_update_calls, any_order=True) share_instance = db.share_instance_get( self.context, share_instance_id) self.assertEqual(expected_access_rules_status, share_instance['access_rules_status']) @ddt.data(True, False) def test__update_access_rules_recursive_driver_exception(self, drv_exc): other = access.ShareInstanceAccess(db, None) share = db_utils.create_share( status=constants.STATUS_AVAILABLE, access_rules_status=constants.SHARE_INSTANCE_RULES_SYNCING) share_instance_id = share['instance']['id'] rule_4 = [] get_and_update_count = [1] drv_count = [1] def _get_and_update_side_effect(*args, **kwargs): # The third call to this method needs to create a new access rule mtd = other.get_and_update_share_instance_access_rules if get_and_update_count[0] == 3: rule_4.append( db_utils.create_access( state=constants.ACCESS_STATE_QUEUED_TO_APPLY, share_id=share['id'])) get_and_update_count[0] += 1 return mtd(*args, **kwargs) def _driver_side_effect(*args, **kwargs): if drv_exc and drv_count[0] == 2: raise exception.ManilaException('fake') drv_count[0] += 1 rule_kwargs = {'share_id': share['id'], 'access_level': 'rw'} rule_1 = db_utils.create_access(state=constants.ACCESS_STATE_APPLYING, **rule_kwargs) rule_2 = db_utils.create_access(state=constants.ACCESS_STATE_ACTIVE, **rule_kwargs) rule_3 = db_utils.create_access(state=constants.ACCESS_STATE_DENYING, **rule_kwargs) self.mock_object(self.access_helper, 'get_and_update_share_instance_access_rules', mock.Mock(side_effect=_get_and_update_side_effect)) self.mock_object(self.access_helper.driver, 'update_access', mock.Mock(side_effect=_driver_side_effect)) if drv_exc: self.assertRaises(exception.ManilaException, self.access_helper._update_access_rules, self.context, share_instance_id) else: retval = self.access_helper._update_access_rules(self.context, share_instance_id) self.assertIsNone(retval) expected_filters_1 = { 'state': (constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_ACTIVE, constants.ACCESS_STATE_DENYING), } conditionally_change_2 = { constants.ACCESS_STATE_APPLYING: constants.ACCESS_STATE_ACTIVE, } expected_filters_3 = { 'state': (constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_QUEUED_TO_DENY), } 
expected_conditionally_change_3 = { constants.ACCESS_STATE_QUEUED_TO_APPLY: constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_QUEUED_TO_DENY: constants.ACCESS_STATE_DENYING, } expected_conditionally_change_4 = { constants.ACCESS_STATE_APPLYING: constants.ACCESS_STATE_ERROR, constants.ACCESS_STATE_DENYING: constants.ACCESS_STATE_ERROR, } expected_get_and_update_calls = [ mock.call(self.context, filters=expected_filters_1, share_instance_id=share_instance_id), mock.call(self.context, share_instance_id=share_instance_id, conditionally_change=conditionally_change_2), mock.call(self.context, filters=expected_filters_3, share_instance_id=share_instance_id, conditionally_change=expected_conditionally_change_3), mock.call(self.context, filters=expected_filters_1, share_instance_id=share_instance_id), ] if drv_exc: expected_get_and_update_calls.append( mock.call( self.context, share_instance_id=share_instance_id, conditionally_change=expected_conditionally_change_4)) else: expected_get_and_update_calls.append( mock.call(self.context, share_instance_id=share_instance_id, conditionally_change=conditionally_change_2)) # Verify rule changes: # 'denying' rule must not exist self.assertRaises(exception.NotFound, db.share_access_get, self.context, rule_3['id']) # 'applying' rule must be set to 'active' rules_that_must_be_active = (rule_1, rule_2) if not drv_exc: rules_that_must_be_active += (rule_4[0], ) for rule in rules_that_must_be_active: rule = db.share_access_get(self.context, rule['id']) self.assertEqual(constants.ACCESS_STATE_ACTIVE, rule['state']) # access_rules_status must be as expected expected_access_rules_status = ( constants.SHARE_INSTANCE_RULES_ERROR if drv_exc else constants.STATUS_ACTIVE) share_instance = db.share_instance_get(self.context, share_instance_id) self.assertEqual( expected_access_rules_status, share_instance['access_rules_status']) def test__update_access_rules_for_migration(self): share = db_utils.create_share() instance = db_utils.create_share_instance( status=constants.STATUS_MIGRATING, access_rules_status=constants.STATUS_ACTIVE, cast_rules_to_readonly=True, share_id=share['id']) rule_kwargs = {'share_id': share['id'], 'access_level': 'rw'} rule_1 = db_utils.create_access( state=constants.ACCESS_STATE_ACTIVE, **rule_kwargs) rule_1 = db.share_instance_access_get( self.context, rule_1['id'], instance['id']) rule_2 = db_utils.create_access( state=constants.ACCESS_STATE_APPLYING, share_id=share['id'], access_level='ro') rule_2 = db.share_instance_access_get( self.context, rule_2['id'], instance['id']) driver_call = self.mock_object( self.access_helper.driver, 'update_access', mock.Mock(return_value=None)) self.mock_object(self.access_helper, '_check_needs_refresh', mock.Mock(return_value=False)) retval = self.access_helper._update_access_rules( self.context, instance['id'], share_server='fake_server') call_args = driver_call.call_args_list[0][0] call_kwargs = driver_call.call_args_list[0][1] access_rules_to_be_on_share = [r['id'] for r in call_args[2]] access_levels = [r['access_level'] for r in call_args[2]] expected_rules_to_be_on_share = ([rule_1['id'], rule_2['id']]) self.assertIsNone(retval) self.assertEqual(instance['id'], call_args[1]['id']) self.assertTrue(isinstance(access_rules_to_be_on_share, list)) self.assertEqual(len(expected_rules_to_be_on_share), len(access_rules_to_be_on_share)) for pool in expected_rules_to_be_on_share: self.assertIn(pool, access_rules_to_be_on_share) self.assertEqual(['ro'] * len(expected_rules_to_be_on_share), access_levels) 
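# --- Editorial aside (kept as a comment so it does not alter this module) ---
# The migration test above creates the instance with
# ``cast_rules_to_readonly=True`` and then checks that every rule passed to
# the driver carries the 'ro' access level, regardless of the level stored in
# the database.  A rough, hypothetical sketch of that cast, with names local
# to this comment (it is not the manila helper itself):
#
#     def cast_to_readonly(rules):
#         # Copy each rule, forcing the access level down to read-only.
#         return [dict(rule, access_level='ro') for rule in rules]
#
#     rules = [{'id': 'r1', 'access_level': 'rw'},
#              {'id': 'r2', 'access_level': 'ro'}]
#     assert {r['access_level'] for r in cast_to_readonly(rules)} == {'ro'}
# -----------------------------------------------------------------------------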
self.assertEqual(0, len(call_kwargs['add_rules'])) self.assertEqual(0, len(call_kwargs['delete_rules'])) self.assertEqual('fake_server', call_kwargs['share_server']) @ddt.data(True, False) def test__check_needs_refresh(self, expected_needs_refresh): states = ( [constants.ACCESS_STATE_QUEUED_TO_DENY, constants.ACCESS_STATE_QUEUED_TO_APPLY] if expected_needs_refresh else [constants.ACCESS_STATE_ACTIVE] ) share = db_utils.create_share( status=constants.STATUS_AVAILABLE, access_rules_status=constants.SHARE_INSTANCE_RULES_SYNCING) share_instance_id = share['instance']['id'] rule_kwargs = {'share_id': share['id'], 'access_level': 'rw'} rule_1 = db_utils.create_access(state=states[0], **rule_kwargs) db_utils.create_access(state=constants.ACCESS_STATE_ACTIVE, **rule_kwargs) db_utils.create_access(state=constants.ACCESS_STATE_DENYING, **rule_kwargs) rule_4 = db_utils.create_access(state=states[-1], **rule_kwargs) get_and_update_call = self.mock_object( self.access_helper, 'get_and_update_share_instance_access_rules', mock.Mock(side_effect=self.access_helper. get_and_update_share_instance_access_rules)) needs_refresh = self.access_helper._check_needs_refresh( self.context, share_instance_id) expected_filter = { 'state': (constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_QUEUED_TO_DENY), } expected_conditionally_change = { constants.ACCESS_STATE_QUEUED_TO_APPLY: constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_QUEUED_TO_DENY: constants.ACCESS_STATE_DENYING, } self.assertEqual(expected_needs_refresh, needs_refresh) get_and_update_call.assert_called_once_with( self.context, filters=expected_filter, share_instance_id=share_instance_id, conditionally_change=expected_conditionally_change) rule_1 = db.share_instance_access_get( self.context, rule_1['id'], share_instance_id) rule_4 = db.share_instance_access_get( self.context, rule_4['id'], share_instance_id) if expected_needs_refresh: self.assertEqual(constants.ACCESS_STATE_DENYING, rule_1['state']) self.assertEqual(constants.ACCESS_STATE_APPLYING, rule_4['state']) else: self.assertEqual(states[0], rule_1['state']) self.assertEqual(states[-1], rule_4['state']) @ddt.data(('nfs', True, False), ('nfs', False, True), ('cifs', True, False), ('cifs', False, False), ('cephx', True, False), ('cephx', False, False)) @ddt.unpack def test__update_rules_through_share_driver(self, proto, enable_ipv6, filtered): self.driver.ipv6_implemented = enable_ipv6 share_instance = {'share_proto': proto} pass_rules, fail_rules = self._get_pass_rules_and_fail_rules() pass_add_rules, fail_add_rules = self._get_pass_rules_and_fail_rules() pass_delete_rules, fail_delete_rules = ( self._get_pass_rules_and_fail_rules()) test_rules = pass_rules + fail_rules test_add_rules = pass_add_rules + fail_add_rules test_delete_rules = pass_delete_rules + fail_delete_rules fake_expect_driver_update_rules = pass_rules update_access_call = self.mock_object( self.access_helper.driver, 'update_access', mock.Mock(return_value=pass_rules)) driver_update_rules = ( self.access_helper._update_rules_through_share_driver( self.context, share_instance=share_instance, access_rules_to_be_on_share=test_rules, add_rules=test_add_rules, delete_rules=test_delete_rules, rules_to_be_removed_from_db=test_rules, share_server=None)) if filtered: update_access_call.assert_called_once_with( self.context, share_instance, pass_rules, add_rules=pass_add_rules, delete_rules=pass_delete_rules, share_server=None) else: update_access_call.assert_called_once_with( self.context, share_instance, test_rules, 
add_rules=test_add_rules, delete_rules=test_delete_rules, share_server=None) self.assertEqual(fake_expect_driver_update_rules, driver_update_rules) def _get_pass_rules_and_fail_rules(self): random_value = six.text_type(random.randint(10, 32)) pass_rules = [ { 'access_type': 'ip', 'access_to': '1.1.1.' + random_value, }, { 'access_type': 'ip', 'access_to': '1.1.%s.0/24' % random_value, }, { 'access_type': 'user', 'access_to': 'fake_user' + random_value, }, ] fail_rules = [ { 'access_type': 'ip', 'access_to': '1001::' + random_value, }, { 'access_type': 'ip', 'access_to': '%s::/64' % random_value, }, ] return pass_rules, fail_rules manila-10.0.0/manila/tests/share/test_snapshot_access.py0000664000175000017500000001557213656750227023400 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from unittest import mock import ddt from manila.common import constants from manila import context from manila import db from manila import exception from manila.share import snapshot_access from manila import test from manila.tests import db_utils from manila import utils @ddt.ddt class SnapshotAccessTestCase(test.TestCase): def setUp(self): super(SnapshotAccessTestCase, self).setUp() self.driver = self.mock_class("manila.share.driver.ShareDriver", mock.Mock()) self.snapshot_access = snapshot_access.ShareSnapshotInstanceAccess( db, self.driver) self.context = context.get_admin_context() share = db_utils.create_share() self.snapshot = db_utils.create_snapshot(share_id=share['id']) self.snapshot_instance = db_utils.create_snapshot_instance( snapshot_id=self.snapshot['id'], share_instance_id=self.snapshot['share']['instance']['id']) @ddt.data(constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_QUEUED_TO_DENY) def test_update_access_rules(self, state): rules = [] for i in range(2): rules.append({ 'id': 'id-%s' % i, 'state': state, 'access_id': 'rule_id%s' % i }) all_rules = copy.deepcopy(rules) all_rules.append({ 'id': 'id-3', 'state': constants.ACCESS_STATE_ERROR, 'access_id': 'rule_id3' }) snapshot_instance_get = self.mock_object( db, 'share_snapshot_instance_get', mock.Mock(return_value=self.snapshot_instance)) snap_get_all_for_snap_instance = self.mock_object( db, 'share_snapshot_access_get_all_for_snapshot_instance', mock.Mock(return_value=all_rules)) self.mock_object(db, 'share_snapshot_instance_access_update') self.mock_object(self.driver, 'snapshot_update_access') self.mock_object(self.snapshot_access, '_check_needs_refresh', mock.Mock(return_value=False)) self.mock_object(db, 'share_snapshot_instance_access_delete') self.snapshot_access.update_access_rules(self.context, self.snapshot_instance['id']) snapshot_instance_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.snapshot_instance['id'], with_share_data=True) snap_get_all_for_snap_instance.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.snapshot_instance['id']) if state == 
constants.ACCESS_STATE_QUEUED_TO_APPLY: self.driver.snapshot_update_access.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.snapshot_instance, rules, add_rules=rules, delete_rules=[], share_server=None) else: self.driver.snapshot_update_access.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.snapshot_instance, [], add_rules=[], delete_rules=rules, share_server=None) def test_update_access_rules_delete_all_rules(self): rules = [] for i in range(2): rules.append({ 'id': 'id-%s' % i, 'state': constants.ACCESS_STATE_QUEUED_TO_DENY, 'access_id': 'rule_id%s' % i }) snapshot_instance_get = self.mock_object( db, 'share_snapshot_instance_get', mock.Mock(return_value=self.snapshot_instance)) snap_get_all_for_snap_instance = self.mock_object( db, 'share_snapshot_access_get_all_for_snapshot_instance', mock.Mock(side_effect=[rules, []])) self.mock_object(db, 'share_snapshot_instance_access_update') self.mock_object(self.driver, 'snapshot_update_access') self.mock_object(db, 'share_snapshot_instance_access_delete') self.snapshot_access.update_access_rules(self.context, self.snapshot_instance['id'], delete_all_rules=True) snapshot_instance_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.snapshot_instance['id'], with_share_data=True) snap_get_all_for_snap_instance.assert_called_with( utils.IsAMatcher(context.RequestContext), self.snapshot_instance['id']) self.driver.snapshot_update_access.assert_called_with( utils.IsAMatcher(context.RequestContext), self.snapshot_instance, [], add_rules=[], delete_rules=rules, share_server=None) def test_update_access_rules_exception(self): rules = [] for i in range(2): rules.append({ 'id': 'id-%s' % i, 'state': constants.ACCESS_STATE_APPLYING, 'access_id': 'rule_id%s' % i }) snapshot_instance_get = self.mock_object( db, 'share_snapshot_instance_get', mock.Mock(return_value=self.snapshot_instance)) snap_get_all_for_snap_instance = self.mock_object( db, 'share_snapshot_access_get_all_for_snapshot_instance', mock.Mock(return_value=rules)) self.mock_object(db, 'share_snapshot_instance_access_update') self.mock_object(self.driver, 'snapshot_update_access', mock.Mock(side_effect=exception.NotFound)) self.assertRaises(exception.NotFound, self.snapshot_access.update_access_rules, self.context, self.snapshot_instance['id']) snapshot_instance_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.snapshot_instance['id'], with_share_data=True) snap_get_all_for_snap_instance.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.snapshot_instance['id']) self.driver.snapshot_update_access.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.snapshot_instance, rules, add_rules=rules, delete_rules=[], share_server=None) manila-10.0.0/manila/tests/share/test_rpcapi.py0000664000175000017500000004055613656750227021476 0ustar zuulzuul00000000000000# Copyright 2015 Alex Meade # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit Tests for manila.share.rpcapi. 
""" import copy from oslo_config import cfg from oslo_serialization import jsonutils from manila.common import constants from manila import context from manila.share import rpcapi as share_rpcapi from manila import test from manila.tests import db_utils CONF = cfg.CONF class ShareRpcAPITestCase(test.TestCase): def setUp(self): super(ShareRpcAPITestCase, self).setUp() share = db_utils.create_share( availability_zone=CONF.storage_availability_zone, status=constants.STATUS_AVAILABLE ) snapshot = db_utils.create_snapshot(share_id=share['id']) share_replica = db_utils.create_share_replica( id='fake_replica', share_id='fake_share_id', host='fake_host', ) share_server = db_utils.create_share_server() share_group = {'id': 'fake_share_group_id', 'host': 'fake_host'} share_group_snapshot = {'id': 'fake_share_group_id'} host = 'fake_host' self.fake_share = jsonutils.to_primitive(share) # mock out the getattr on the share db model object since jsonutils # doesn't know about those extra attributes to pull in self.fake_share['instance'] = jsonutils.to_primitive(share.instance) self.fake_share_replica = jsonutils.to_primitive(share_replica) self.fake_snapshot = jsonutils.to_primitive(snapshot) self.fake_snapshot['share_instance'] = jsonutils.to_primitive( snapshot.instance) self.fake_share_server = jsonutils.to_primitive(share_server) self.fake_share_group = jsonutils.to_primitive(share_group) self.fake_share_group_snapshot = jsonutils.to_primitive( share_group_snapshot) self.fake_host = jsonutils.to_primitive(host) self.ctxt = context.RequestContext('fake_user', 'fake_project') self.rpcapi = share_rpcapi.ShareAPI() def test_serialized_share_has_id(self): self.assertIn('id', self.fake_share) def _test_share_api(self, method, rpc_method, **kwargs): expected_retval = 'foo' if method == 'call' else None target = { "version": kwargs.pop('version', self.rpcapi.BASE_RPC_API_VERSION) } expected_msg = copy.deepcopy(kwargs) if 'share' in expected_msg and method != 'get_connection_info': share = expected_msg['share'] del expected_msg['share'] expected_msg['share_id'] = share['id'] if 'share_instance' in expected_msg: share_instance = expected_msg.pop('share_instance', None) expected_msg['share_instance_id'] = share_instance['id'] if 'share_group' in expected_msg: share_group = expected_msg['share_group'] del expected_msg['share_group'] expected_msg['share_group_id'] = share_group['id'] if 'share_group_snapshot' in expected_msg: snap = expected_msg['share_group_snapshot'] del expected_msg['share_group_snapshot'] expected_msg['share_group_snapshot_id'] = snap['id'] if 'host' in expected_msg: del expected_msg['host'] if 'snapshot' in expected_msg: snapshot = expected_msg['snapshot'] del expected_msg['snapshot'] expected_msg['snapshot_id'] = snapshot['id'] if 'dest_host' in expected_msg: del expected_msg['dest_host'] expected_msg['dest_host'] = self.fake_host if 'share_replica' in expected_msg: share_replica = expected_msg.pop('share_replica', None) expected_msg['share_replica_id'] = share_replica['id'] expected_msg['share_id'] = share_replica['share_id'] if 'replicated_snapshot' in expected_msg: snapshot = expected_msg.pop('replicated_snapshot', None) expected_msg['snapshot_id'] = snapshot['id'] expected_msg['share_id'] = snapshot['share_id'] if 'src_share_instance' in expected_msg: share_instance = expected_msg.pop('src_share_instance', None) expected_msg['src_instance_id'] = share_instance['id'] if 'update_access' in expected_msg: share_instance = expected_msg.pop('share_instance', None) 
expected_msg['share_instance_id'] = share_instance['id'] if 'snapshot_instance' in expected_msg: snapshot_instance = expected_msg.pop('snapshot_instance', None) expected_msg['snapshot_instance_id'] = snapshot_instance['id'] if ('share_server' in expected_msg and (method == 'manage_share_server') or method == 'unmanage_share_server'): share_server = expected_msg.pop('share_server', None) expected_msg['share_server_id'] = share_server['id'] if 'host' in kwargs: host = kwargs['host'] elif 'share_group' in kwargs: host = kwargs['share_group']['host'] elif 'share_instance' in kwargs: host = kwargs['share_instance']['host'] elif 'share_server' in kwargs: host = kwargs['share_server']['host'] elif 'share_replica' in kwargs: host = kwargs['share_replica']['host'] elif 'replicated_snapshot' in kwargs: host = kwargs['share']['instance']['host'] elif 'share' in kwargs: host = kwargs['share']['host'] else: host = self.fake_host target['server'] = host target['topic'] = '%s.%s' % (CONF.share_topic, host) self.fake_args = None self.fake_kwargs = None def _fake_prepare_method(*args, **kwds): for kwd in kwds: self.assertEqual(target[kwd], kwds[kwd]) return self.rpcapi.client def _fake_rpc_method(*args, **kwargs): self.fake_args = args self.fake_kwargs = kwargs if expected_retval: return expected_retval self.mock_object(self.rpcapi.client, "prepare", _fake_prepare_method) self.mock_object(self.rpcapi.client, rpc_method, _fake_rpc_method) retval = getattr(self.rpcapi, method)(self.ctxt, **kwargs) self.assertEqual(expected_retval, retval) expected_args = [self.ctxt, method] for arg, expected_arg in zip(self.fake_args, expected_args): self.assertEqual(expected_arg, arg) for kwarg, value in self.fake_kwargs.items(): self.assertEqual(expected_msg[kwarg], value) def test_create_share_instance(self): self._test_share_api('create_share_instance', rpc_method='cast', version='1.4', share_instance=self.fake_share, host='fake_host1', snapshot_id='fake_snapshot_id', filter_properties=None, request_spec=None) def test_delete_share_instance(self): self._test_share_api('delete_share_instance', rpc_method='cast', version='1.4', share_instance=self.fake_share, force=False) def test_update_access(self): self._test_share_api('update_access', rpc_method='cast', version='1.14', share_instance=self.fake_share) def test_create_snapshot(self): self._test_share_api('create_snapshot', rpc_method='cast', share=self.fake_share, snapshot=self.fake_snapshot) def test_delete_snapshot(self): self._test_share_api('delete_snapshot', rpc_method='cast', snapshot=self.fake_snapshot, host='fake_host', force=False) def test_delete_share_server(self): self._test_share_api('delete_share_server', rpc_method='cast', share_server=self.fake_share_server) def test_extend_share(self): self._test_share_api('extend_share', rpc_method='cast', version='1.2', share=self.fake_share, new_size=123, reservations={'fake': 'fake'}) def test_shrink_share(self): self._test_share_api('shrink_share', rpc_method='cast', version='1.3', share=self.fake_share, new_size=123) def test_create_share_group(self): self._test_share_api('create_share_group', version='1.16', rpc_method='cast', share_group=self.fake_share_group, host='fake_host1') def test_delete_share_group(self): self._test_share_api('delete_share_group', version='1.16', rpc_method='cast', share_group=self.fake_share_group) def test_create_share_group_snapshot(self): self._test_share_api( 'create_share_group_snapshot', version='1.16', rpc_method='cast', share_group_snapshot=self.fake_share_group_snapshot, 
host='fake_host1') def test_delete_share_group_snapshot(self): self._test_share_api( 'delete_share_group_snapshot', version='1.16', rpc_method='cast', share_group_snapshot=self.fake_share_group_snapshot, host='fake_host1') def test_migration_start(self): self._test_share_api('migration_start', rpc_method='cast', version='1.15', share=self.fake_share, dest_host=self.fake_host, force_host_assisted_migration=True, preserve_metadata=True, writable=True, nondisruptive=False, preserve_snapshots=True, new_share_network_id='fake_net_id', new_share_type_id='fake_type_id') def test_connection_get_info(self): self._test_share_api('connection_get_info', rpc_method='call', version='1.12', share_instance=self.fake_share) def test_migration_complete(self): self._test_share_api('migration_complete', rpc_method='cast', version='1.12', src_share_instance=self.fake_share['instance'], dest_instance_id='new_fake_ins_id') def test_migration_cancel(self): self._test_share_api('migration_cancel', rpc_method='cast', version='1.12', src_share_instance=self.fake_share['instance'], dest_instance_id='ins2_id') def test_migration_get_progress(self): self._test_share_api('migration_get_progress', rpc_method='call', version='1.12', src_share_instance=self.fake_share['instance'], dest_instance_id='ins2_id') def test_delete_share_replica(self): self._test_share_api('delete_share_replica', rpc_method='cast', version='1.8', share_replica=self.fake_share_replica, force=False) def test_promote_share_replica(self): self._test_share_api('promote_share_replica', rpc_method='cast', version='1.8', share_replica=self.fake_share_replica) def test_update_share_replica(self): self._test_share_api('update_share_replica', rpc_method='cast', version='1.8', share_replica=self.fake_share_replica) def test_manage_snapshot(self): self._test_share_api('manage_snapshot', rpc_method='cast', version='1.9', snapshot=self.fake_snapshot, host='fake_host', driver_options={'volume_snapshot_id': 'fake'}) def test_unmanage_snapshot(self): self._test_share_api('unmanage_snapshot', rpc_method='cast', version='1.9', snapshot=self.fake_snapshot, host='fake_host') def test_manage_share_server(self): self._test_share_api('manage_share_server', rpc_method='cast', version='1.19', share_server=self.fake_share_server, identifier='fake', driver_opts={}) def test_unmanage_share_server(self): self._test_share_api('unmanage_share_server', rpc_method='cast', version='1.19', share_server=self.fake_share_server, force='fake_force') def test_revert_to_snapshot(self): self._test_share_api('revert_to_snapshot', rpc_method='cast', version='1.18', share=self.fake_share, snapshot=self.fake_snapshot, host='fake_host', reservations={'fake': 'fake'}) def test_create_replicated_snapshot(self): self._test_share_api('create_replicated_snapshot', rpc_method='cast', version='1.11', replicated_snapshot=self.fake_snapshot, share=self.fake_share) def test_delete_replicated_snapshot(self): self._test_share_api('delete_replicated_snapshot', rpc_method='cast', version='1.11', replicated_snapshot=self.fake_snapshot, share_id=self.fake_snapshot['share_id'], force=False, host='fake_host') def test_provide_share_server(self): self._test_share_api('provide_share_server', rpc_method='call', version='1.12', share_instance=self.fake_share['instance'], share_network_id='fake_network_id', snapshot_id='fake_snapshot_id') def test_create_share_server(self): self._test_share_api('create_share_server', rpc_method='cast', version='1.12', share_instance=self.fake_share['instance'], 
share_server_id='fake_server_id') def test_snapshot_update_access(self): self._test_share_api('snapshot_update_access', rpc_method='cast', version='1.17', snapshot_instance=self.fake_snapshot[ 'share_instance']) manila-10.0.0/manila/tests/share/test_migration.py0000664000175000017500000003121413656750227022200 0ustar zuulzuul00000000000000# Copyright 2015 Hitachi Data Systems inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time from unittest import mock import ddt from manila.common import constants from manila import context from manila import db from manila import exception from manila.share import access as access_helper from manila.share import api as share_api from manila.share import migration from manila import test from manila.tests import db_utils from manila import utils @ddt.ddt class ShareMigrationHelperTestCase(test.TestCase): """Tests ShareMigrationHelper.""" def setUp(self): super(ShareMigrationHelperTestCase, self).setUp() self.share = db_utils.create_share() self.share_instance = db_utils.create_share_instance( share_id=self.share['id'], share_network_id='fake_network_id') self.access_helper = access_helper.ShareInstanceAccess(db, None) self.context = context.get_admin_context() self.helper = migration.ShareMigrationHelper( self.context, db, self.share, self.access_helper) def test_delete_instance_and_wait(self): # mocks self.mock_object(share_api.API, 'delete_instance') self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=[self.share_instance, exception.NotFound()])) self.mock_object(time, 'sleep') # run self.helper.delete_instance_and_wait(self.share_instance) # asserts share_api.API.delete_instance.assert_called_once_with( self.context, self.share_instance, True) db.share_instance_get.assert_has_calls([ mock.call(self.context, self.share_instance['id']), mock.call(self.context, self.share_instance['id'])]) time.sleep.assert_called_once_with(1.414) def test_delete_instance_and_wait_timeout(self): # mocks self.mock_object(share_api.API, 'delete_instance') self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=[self.share_instance, None])) self.mock_object(time, 'sleep') now = time.time() timeout = now + 310 self.mock_object(time, 'time', mock.Mock(side_effect=[now, timeout])) # run self.assertRaises(exception.ShareMigrationFailed, self.helper.delete_instance_and_wait, self.share_instance) # asserts share_api.API.delete_instance.assert_called_once_with( self.context, self.share_instance, True) db.share_instance_get.assert_called_once_with( self.context, self.share_instance['id']) time.time.assert_has_calls([mock.call(), mock.call()]) def test_delete_instance_and_wait_not_found(self): # mocks self.mock_object(share_api.API, 'delete_instance') self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=exception.NotFound)) # run self.helper.delete_instance_and_wait(self.share_instance) # asserts share_api.API.delete_instance.assert_called_once_with( self.context, self.share_instance, True) 
db.share_instance_get.assert_called_once_with( self.context, self.share_instance['id']) def test_create_instance_and_wait(self): host = 'fake_host' share_instance_creating = db_utils.create_share_instance( share_id=self.share['id'], status=constants.STATUS_CREATING, share_network_id='fake_network_id') share_instance_available = db_utils.create_share_instance( share_id=self.share['id'], status=constants.STATUS_AVAILABLE, share_network_id='fake_network_id') # mocks self.mock_object(share_api.API, 'create_instance', mock.Mock(return_value=share_instance_creating)) self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=[share_instance_creating, share_instance_available])) self.mock_object(time, 'sleep') # run self.helper.create_instance_and_wait( self.share, host, 'fake_net_id', 'fake_az_id', 'fake_type_id') # asserts share_api.API.create_instance.assert_called_once_with( self.context, self.share, 'fake_net_id', 'fake_host', 'fake_az_id', share_type_id='fake_type_id') db.share_instance_get.assert_has_calls([ mock.call(self.context, share_instance_creating['id'], with_share_data=True), mock.call(self.context, share_instance_creating['id'], with_share_data=True)]) time.sleep.assert_called_once_with(1.414) def test_create_instance_and_wait_status_error(self): host = 'fake_host' share_instance_error = db_utils.create_share_instance( share_id=self.share['id'], status=constants.STATUS_ERROR, share_network_id='fake_network_id') # mocks self.mock_object(share_api.API, 'create_instance', mock.Mock(return_value=share_instance_error)) self.mock_object(self.helper, 'cleanup_new_instance') self.mock_object(db, 'share_instance_get', mock.Mock(return_value=share_instance_error)) # run self.assertRaises( exception.ShareMigrationFailed, self.helper.create_instance_and_wait, self.share, host, 'fake_net_id', 'fake_az_id', 'fake_type_id') # asserts share_api.API.create_instance.assert_called_once_with( self.context, self.share, 'fake_net_id', 'fake_host', 'fake_az_id', share_type_id='fake_type_id') db.share_instance_get.assert_called_once_with( self.context, share_instance_error['id'], with_share_data=True) self.helper.cleanup_new_instance.assert_called_once_with( share_instance_error) def test_create_instance_and_wait_timeout(self): host = 'fake_host' share_instance_creating = db_utils.create_share_instance( share_id=self.share['id'], status=constants.STATUS_CREATING, share_network_id='fake_network_id') # mocks self.mock_object(share_api.API, 'create_instance', mock.Mock(return_value=share_instance_creating)) self.mock_object(self.helper, 'cleanup_new_instance') self.mock_object(db, 'share_instance_get', mock.Mock(return_value=share_instance_creating)) self.mock_object(time, 'sleep') now = time.time() timeout = now + 310 self.mock_object(time, 'time', mock.Mock(side_effect=[now, timeout])) # run self.assertRaises( exception.ShareMigrationFailed, self.helper.create_instance_and_wait, self.share, host, 'fake_net_id', 'fake_az_id', 'fake_type_id') # asserts share_api.API.create_instance.assert_called_once_with( self.context, self.share, 'fake_net_id', 'fake_host', 'fake_az_id', share_type_id='fake_type_id') db.share_instance_get.assert_called_once_with( self.context, share_instance_creating['id'], with_share_data=True) time.time.assert_has_calls([mock.call(), mock.call()]) self.helper.cleanup_new_instance.assert_called_once_with( share_instance_creating) @ddt.data(constants.STATUS_ACTIVE, constants.STATUS_ERROR, constants.STATUS_CREATING) def test_wait_for_share_server(self, status): server = 
db_utils.create_share_server(status=status) # mocks self.mock_object(db, 'share_server_get', mock.Mock(return_value=server)) # run if status == constants.STATUS_ACTIVE: result = self.helper.wait_for_share_server('fake_server_id') self.assertEqual(server, result) elif status == constants.STATUS_ERROR: self.assertRaises( exception.ShareServerNotCreated, self.helper.wait_for_share_server, 'fake_server_id') else: self.mock_object(time, 'sleep') self.assertRaises( exception.ShareServerNotReady, self.helper.wait_for_share_server, 'fake_server_id') # asserts db.share_server_get.assert_called_with(self.context, 'fake_server_id') def test_revert_access_rules(self): share_instance = db_utils.create_share_instance( share_id=self.share['id'], status=constants.STATUS_AVAILABLE) access = db_utils.create_access(share_id=self.share['id'], access_to='fake_ip', access_level='rw') server = db_utils.create_share_server(share_id=self.share['id']) # mocks self.mock_object(self.access_helper, 'update_access_rules') get_and_update_call = self.mock_object( self.access_helper, 'get_and_update_share_instance_access_rules', mock.Mock(return_value=[access])) # run self.helper.revert_access_rules(share_instance, server) # asserts get_and_update_call.assert_called_once_with( self.context, share_instance_id=share_instance['id'], updates={'state': constants.ACCESS_STATE_QUEUED_TO_APPLY}) self.access_helper.update_access_rules.assert_called_once_with( self.context, share_instance['id'], share_server=server) @ddt.data(True, False) def test_apply_new_access_rules_there_are_rules(self, prior_rules): new_share_instance = db_utils.create_share_instance( share_id=self.share['id'], status=constants.STATUS_AVAILABLE, access_rules_status='active') rules = None if prior_rules: rules = [ db_utils.create_access( share_id=self.share['id'], access_to='fake_ip') ] # mocks self.mock_object(db, 'share_instance_access_copy', mock.Mock( return_value=rules)) self.mock_object(share_api.API, 'allow_access_to_instance') self.mock_object(utils, 'wait_for_access_update') # run self.helper.apply_new_access_rules(new_share_instance) # asserts db.share_instance_access_copy.assert_called_once_with( self.context, self.share['id'], new_share_instance['id']) if prior_rules: share_api.API.allow_access_to_instance.assert_called_with( self.context, new_share_instance) utils.wait_for_access_update.assert_called_with( self.context, db, new_share_instance, self.helper.migration_wait_access_rules_timeout) else: self.assertFalse(share_api.API.allow_access_to_instance.called) self.assertFalse(utils.wait_for_access_update.called) @ddt.data(None, Exception('fake')) def test_cleanup_new_instance(self, exc): # mocks self.mock_object(self.helper, 'delete_instance_and_wait', mock.Mock(side_effect=exc)) self.mock_object(migration.LOG, 'warning') # run self.helper.cleanup_new_instance(self.share_instance) # asserts self.helper.delete_instance_and_wait.assert_called_once_with( self.share_instance) if exc: self.assertEqual(1, migration.LOG.warning.call_count) @ddt.data(None, Exception('fake')) def test_cleanup_access_rules(self, exc): # mocks server = db_utils.create_share_server() self.mock_object(self.helper, 'revert_access_rules', mock.Mock(side_effect=exc)) self.mock_object(migration.LOG, 'warning') # run self.helper.cleanup_access_rules(self.share_instance, server) # asserts self.helper.revert_access_rules.assert_called_once_with( self.share_instance, server) if exc: self.assertEqual(1, migration.LOG.warning.call_count) 
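# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the manila source tree: the
# delete_instance_and_wait / create_instance_and_wait / wait_for_share_server
# tests above all exercise a "poll until the condition holds, otherwise time
# out" pattern.  A minimal, self-contained version of that pattern is shown
# below; the names poll_until and WaitTimeout are hypothetical and exist only
# for this example (the 1.414 s interval mirrors the sleep value asserted in
# the tests above).
# ---------------------------------------------------------------------------
import time


class WaitTimeout(Exception):
    """Raised when the polled condition is not met before the deadline."""


def poll_until(predicate, timeout=300, interval=1.414):
    """Call predicate() repeatedly until it returns a truthy value.

    Returns the truthy result, or raises WaitTimeout once ``timeout``
    seconds have elapsed without success.
    """
    deadline = time.time() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if time.time() > deadline:
            raise WaitTimeout(
                'condition not met within %s seconds' % timeout)
        time.sleep(interval)


# Example usage with a hypothetical check function:
# poll_until(lambda: instance_is_gone('fake_instance_id'), timeout=300)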
manila-10.0.0/manila/tests/share/test_drivers_private_data.py0000664000175000017500000001324313656750227024412 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt from oslo_utils import uuidutils from manila.share import drivers_private_data as pd from manila import test @ddt.ddt class DriverPrivateDataTestCase(test.TestCase): """Tests DriverPrivateData.""" def setUp(self): super(DriverPrivateDataTestCase, self).setUp() self.fake_storage = mock.Mock() self.entity_id = uuidutils.generate_uuid() def test_default_storage_driver(self): private_data = pd.DriverPrivateData( storage=None, context="fake", backend_host="fake") self.assertIsInstance(private_data._storage, pd.SqlStorageDriver) def test_custom_storage_driver(self): private_data = pd.DriverPrivateData(storage=self.fake_storage) self.assertEqual(self.fake_storage, private_data._storage) def test_invalid_parameters(self): self.assertRaises(ValueError, pd.DriverPrivateData) @ddt.data({'context': 'fake'}, {'backend_host': 'fake'}) def test_invalid_single_parameter(self, test_args): self.assertRaises(ValueError, pd.DriverPrivateData, **test_args) @ddt.data("111", ["fake"], None) def test_validate_entity_id_invalid(self, entity_id): data = pd.DriverPrivateData(storage="fake") self.assertRaises(ValueError, data._validate_entity_id, entity_id) def test_validate_entity_id_valid(self): actual_result = ( pd.DriverPrivateData._validate_entity_id(self.entity_id) ) self.assertIsNone(actual_result) def test_update(self): data = pd.DriverPrivateData(storage=self.fake_storage) details = {"foo": "bar"} self.mock_object(self.fake_storage, 'update', mock.Mock(return_value=True)) actual_result = data.update( self.entity_id, details, delete_existing=True ) self.assertTrue(actual_result) self.fake_storage.update.assert_called_once_with( self.entity_id, details, True ) def test_update_invalid(self): data = pd.DriverPrivateData(storage=self.fake_storage) details = ["invalid"] self.mock_object(self.fake_storage, 'update', mock.Mock(return_value=True)) self.assertRaises( ValueError, data.update, self.entity_id, details) self.assertFalse(self.fake_storage.update.called) def test_get(self): data = pd.DriverPrivateData(storage=self.fake_storage) key = "fake_key" value = "fake_value" default_value = "def" self.mock_object(self.fake_storage, 'get', mock.Mock(return_value=value)) actual_result = data.get(self.entity_id, key, default_value) self.assertEqual(value, actual_result) self.fake_storage.get.assert_called_once_with( self.entity_id, key, default_value ) def test_delete(self): data = pd.DriverPrivateData(storage=self.fake_storage) key = "fake_key" self.mock_object(self.fake_storage, 'get', mock.Mock(return_value=True)) actual_result = data.delete(self.entity_id, key) self.assertTrue(actual_result) self.fake_storage.delete.assert_called_once_with( self.entity_id, key ) fake_storage_data = { "entity_id": "fake_id", "details": {"foo": "bar"}, "context": 
"fake_context", "backend_host": "fake_host", "default": "def", "delete_existing": True, "key": "fake_key", } def create_arg_list(key_names): return [fake_storage_data[key] for key in key_names] def create_arg_dict(key_names): return {key: fake_storage_data[key] for key in key_names} @ddt.ddt class SqlStorageDriverTestCase(test.TestCase): @ddt.data( { "method_name": 'update', "method_kwargs": create_arg_dict( ["entity_id", "details", "delete_existing"]), "valid_args": create_arg_list( ["context", "entity_id", "details", "delete_existing"] ) }, { "method_name": 'get', "method_kwargs": create_arg_dict(["entity_id", "key", "default"]), "valid_args": create_arg_list( ["context", "entity_id", "key", "default"]), }, { "method_name": 'delete', "method_kwargs": create_arg_dict(["entity_id", "key"]), "valid_args": create_arg_list( ["context", "entity_id", "key"]), }) @ddt.unpack def test_methods(self, method_kwargs, method_name, valid_args): method = method_name db_method = 'driver_private_data_' + method_name with mock.patch('manila.db.api.' + db_method) as db_method: storage_driver = pd.SqlStorageDriver( context=fake_storage_data['context'], backend_host=fake_storage_data['backend_host']) method = getattr(storage_driver, method) method(**method_kwargs) db_method.assert_called_once_with(*valid_args) manila-10.0.0/manila/tests/share/test_manager.py0000664000175000017500000130536713656750227021637 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Test of Share Manager for Manila.""" import datetime import hashlib import random from unittest import mock import ddt from oslo_concurrency import lockutils from oslo_serialization import jsonutils from oslo_utils import importutils from oslo_utils import timeutils import six from manila.common import constants from manila import context from manila import coordination from manila.data import rpcapi as data_rpc from manila import db from manila.db.sqlalchemy import models from manila import exception from manila.message import message_field from manila import quota from manila.share import api from manila.share import drivers_private_data from manila.share import manager from manila.share import migration as migration_api from manila.share import rpcapi from manila.share import share_types from manila import test from manila.tests.api import fakes as test_fakes from manila.tests import db_utils from manila.tests import fake_notifier from manila.tests import fake_share as fakes from manila.tests import fake_utils from manila.tests import utils as test_utils from manila import utils def fake_replica(**kwargs): return fakes.fake_replica(for_manager=True, **kwargs) class CustomTimeSleepException(Exception): pass class LockedOperationsTestCase(test.TestCase): class FakeManager(object): @manager.locked_share_replica_operation def fake_replica_operation(self, context, replica, share_id=None): pass def setUp(self): super(LockedOperationsTestCase, self).setUp() self.manager = self.FakeManager() self.fake_context = test_fakes.FakeRequestContext self.lock_call = self.mock_object( coordination, 'synchronized', mock.Mock(return_value=lambda f: f)) @ddt.data({'id': 'FAKE_REPLICA_ID'}, 'FAKE_REPLICA_ID') @ddt.unpack def test_locked_share_replica_operation(self, **replica): self.manager.fake_replica_operation(self.fake_context, replica, share_id='FAKE_SHARE_ID') self.assertTrue(self.lock_call.called) @ddt.ddt class ShareManagerTestCase(test.TestCase): def setUp(self): super(ShareManagerTestCase, self).setUp() self.flags(share_driver='manila.tests.fake_driver.FakeShareDriver') # Define class directly, because this test suite dedicated # to specific manager. 
self.share_manager = importutils.import_object( "manila.share.manager.ShareManager") self.mock_object(self.share_manager.driver, 'do_setup') self.mock_object(self.share_manager.driver, 'check_for_setup_error') self.share_manager.driver._stats = { 'share_group_stats': {'consistent_snapshot_support': None}, } self.mock_object(self.share_manager.message_api, 'create') self.context = context.get_admin_context() self.share_manager.driver.initialized = True mock.patch.object( lockutils, 'lock', fake_utils.get_fake_lock_context()) self.synchronized_lock_decorator_call = self.mock_object( coordination, 'synchronized', mock.Mock(return_value=lambda f: f)) def test_share_manager_instance(self): fake_service_name = "fake_service" importutils_mock = mock.Mock() self.mock_object(importutils, "import_object", importutils_mock) private_data_mock = mock.Mock() self.mock_object(drivers_private_data, "DriverPrivateData", private_data_mock) self.mock_object(manager.ShareManager, '_init_hook_drivers') share_manager = manager.ShareManager(service_name=fake_service_name) private_data_mock.assert_called_once_with( context=mock.ANY, backend_host=share_manager.host, config_group=fake_service_name ) self.assertTrue(importutils_mock.called) self.assertTrue(manager.ShareManager._init_hook_drivers.called) def test__init_hook_drivers(self): fake_service_name = "fake_service" importutils_mock = mock.Mock() self.mock_object(importutils, "import_object", importutils_mock) self.mock_object(drivers_private_data, "DriverPrivateData") share_manager = manager.ShareManager(service_name=fake_service_name) share_manager.configuration.safe_get = mock.Mock( return_value=["Foo", "Bar"]) self.assertEqual(0, len(share_manager.hooks)) importutils_mock.reset() share_manager._init_hook_drivers() self.assertEqual( len(share_manager.configuration.safe_get.return_value), len(share_manager.hooks)) importutils_mock.assert_has_calls([ mock.call( hook, configuration=share_manager.configuration, host=share_manager.host ) for hook in share_manager.configuration.safe_get.return_value ], any_order=True) def test__execute_periodic_hook(self): share_instances_mock = mock.Mock() hook_data_mock = mock.Mock() self.mock_object( self.share_manager.db, "share_instances_get_all_by_host", share_instances_mock) self.mock_object( self.share_manager.driver, "get_periodic_hook_data", hook_data_mock) self.share_manager.hooks = [mock.Mock(return_value=i) for i in (0, 1)] self.share_manager._execute_periodic_hook(self.context) share_instances_mock.assert_called_once_with( context=self.context, host=self.share_manager.host) hook_data_mock.assert_called_once_with( context=self.context, share_instances=share_instances_mock.return_value) for mock_hook in self.share_manager.hooks: mock_hook.execute_periodic_hook.assert_called_once_with( context=self.context, periodic_hook_data=hook_data_mock.return_value) def test_is_service_ready(self): self.assertTrue(self.share_manager.is_service_ready()) # switch it to false and check again self.share_manager.driver.initialized = False self.assertFalse(self.share_manager.is_service_ready()) def test_init_host_with_no_shares(self): self.mock_object(self.share_manager.db, 'share_instances_get_all_by_host', mock.Mock(return_value=[])) self.share_manager.init_host() self.assertTrue(self.share_manager.driver.initialized) (self.share_manager.db.share_instances_get_all_by_host. 
assert_called_once_with(utils.IsAMatcher(context.RequestContext), self.share_manager.host)) self.share_manager.driver.do_setup.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) (self.share_manager.driver.check_for_setup_error. assert_called_once_with()) @ddt.data( "connection_get_info", "migration_cancel", "migration_get_progress", "migration_complete", "migration_start", "create_share_instance", "manage_share", "unmanage_share", "delete_share_instance", "delete_free_share_servers", "create_snapshot", "delete_snapshot", "update_access", "_report_driver_status", "_execute_periodic_hook", "publish_service_capabilities", "delete_share_server", "extend_share", "shrink_share", "create_share_group", "delete_share_group", "create_share_group_snapshot", "delete_share_group_snapshot", "create_share_replica", "delete_share_replica", "promote_share_replica", "periodic_share_replica_update", "update_share_replica", "create_replicated_snapshot", "delete_replicated_snapshot", "periodic_share_replica_snapshot_update", ) def test_call_driver_when_its_init_failed(self, method_name): self.mock_object(self.share_manager.driver, 'do_setup', mock.Mock(side_effect=Exception())) # break the endless retry loop with mock.patch("time.sleep", side_effect=CustomTimeSleepException()): self.assertRaises(CustomTimeSleepException, self.share_manager.init_host) self.assertRaises( exception.DriverNotInitialized, getattr(self.share_manager, method_name), 'foo', 'bar', 'quuz' ) @ddt.data("do_setup", "check_for_setup_error") def test_init_host_with_driver_failure(self, method_name): self.mock_object(self.share_manager.driver, method_name, mock.Mock(side_effect=Exception())) self.mock_object(manager.LOG, 'exception') self.share_manager.driver.initialized = False with mock.patch("time.sleep", side_effect=CustomTimeSleepException()): self.assertRaises(CustomTimeSleepException, self.share_manager.init_host) manager.LOG.exception.assert_called_once_with( mock.ANY, "%(name)s@%(host)s" % {'name': self.share_manager.driver.__class__.__name__, 'host': self.share_manager.host}) self.assertFalse(self.share_manager.driver.initialized) def _setup_init_mocks(self, setup_access_rules=True): instances = [ db_utils.create_share(id='fake_id_1', status=constants.STATUS_AVAILABLE, display_name='fake_name_1').instance, db_utils.create_share(id='fake_id_2', status=constants.STATUS_ERROR, display_name='fake_name_2').instance, db_utils.create_share(id='fake_id_3', status=constants.STATUS_AVAILABLE, display_name='fake_name_3').instance, db_utils.create_share( id='fake_id_4', status=constants.STATUS_MIGRATING, task_state=constants.TASK_STATE_MIGRATION_IN_PROGRESS, display_name='fake_name_4').instance, db_utils.create_share(id='fake_id_5', status=constants.STATUS_AVAILABLE, display_name='fake_name_5').instance, db_utils.create_share( id='fake_id_6', status=constants.STATUS_MIGRATING, task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, display_name='fake_name_6').instance, db_utils.create_share( id='fake_id_7', status=constants.STATUS_CREATING_FROM_SNAPSHOT, display_name='fake_name_7').instance, ] instances[4]['access_rules_status'] = ( constants.SHARE_INSTANCE_RULES_SYNCING) if not setup_access_rules: return instances rules = [ db_utils.create_access(share_id='fake_id_1'), db_utils.create_access(share_id='fake_id_3'), ] return instances, rules @ddt.data(("some_hash", {"db_version": "test_version"}), ("ddd86ec90923b686597501e2f2431f3af59238c0", {"db_version": "test_version"}), (None, {"db_version": "test_version"}), (None, 
None)) @ddt.unpack def test_init_host_with_shares_and_rules( self, old_backend_info_hash, new_backend_info): # initialization of test data def raise_share_access_exists(*args, **kwargs): raise exception.ShareAccessExists( access_type='fake_access_type', access='fake_access') new_backend_info_hash = (hashlib.sha1(six.text_type( sorted(new_backend_info.items())).encode('utf-8')).hexdigest() if new_backend_info else None) old_backend_info = {'info_hash': old_backend_info_hash} share_server = fakes.fake_share_server_get() instances, rules = self._setup_init_mocks() fake_export_locations = ['fake/path/1', 'fake/path'] fake_update_instances = { instances[0]['id']: {'export_locations': fake_export_locations}, instances[2]['id']: {'export_locations': fake_export_locations} } instances[0]['access_rules_status'] = '' instances[2]['access_rules_status'] = '' self.mock_object(self.share_manager.db, 'backend_info_get', mock.Mock(return_value=old_backend_info)) mock_backend_info_update = self.mock_object( self.share_manager.db, 'backend_info_update') self.mock_object(self.share_manager.driver, 'get_backend_info', mock.Mock(return_value=new_backend_info)) mock_share_get_all_by_host = self.mock_object( self.share_manager.db, 'share_instances_get_all_by_host', mock.Mock(return_value=instances)) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(side_effect=[instances[0], instances[2], instances[4]])) self.mock_object(self.share_manager.db, 'share_export_locations_update') mock_ensure_shares = self.mock_object( self.share_manager.driver, 'ensure_shares', mock.Mock(return_value=fake_update_instances)) self.mock_object(self.share_manager, '_ensure_share_instance_has_pool') self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=share_server)) self.mock_object(self.share_manager, '_get_share_server_dict', mock.Mock(return_value=share_server)) self.mock_object(self.share_manager, 'publish_service_capabilities', mock.Mock()) self.mock_object(self.share_manager.db, 'share_access_get_all_for_share', mock.Mock(return_value=rules)) self.mock_object( self.share_manager.access_helper, 'update_access_rules', mock.Mock(side_effect=raise_share_access_exists) ) dict_instances = [self._get_share_instance_dict( instance, share_server=share_server) for instance in instances] # call of 'init_host' method self.share_manager.init_host() # verification of call exports_update = self.share_manager.db.share_export_locations_update self.share_manager.driver.do_setup.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) (self.share_manager.driver.check_for_setup_error. assert_called_once_with()) if new_backend_info_hash == old_backend_info_hash: mock_backend_info_update.assert_not_called() mock_ensure_shares.assert_not_called() mock_share_get_all_by_host.assert_not_called() else: mock_backend_info_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.share_manager.host, new_backend_info_hash) self.share_manager.driver.ensure_shares.assert_called_once_with( utils.IsAMatcher(context.RequestContext), [dict_instances[0], dict_instances[2], dict_instances[4]]) mock_share_get_all_by_host.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.share_manager.host) exports_update.assert_has_calls([ mock.call(mock.ANY, instances[0]['id'], fake_export_locations), mock.call(mock.ANY, instances[2]['id'], fake_export_locations) ]) (self.share_manager._ensure_share_instance_has_pool. 
assert_has_calls([ mock.call(utils.IsAMatcher(context.RequestContext), instances[0]), mock.call(utils.IsAMatcher(context.RequestContext), instances[2]), ])) self.share_manager._get_share_server.assert_has_calls([ mock.call(utils.IsAMatcher(context.RequestContext), instances[0]), mock.call(utils.IsAMatcher(context.RequestContext), instances[2]), ]) (self.share_manager.publish_service_capabilities. assert_called_once_with( utils.IsAMatcher(context.RequestContext))) (self.share_manager.access_helper.update_access_rules. assert_has_calls([ mock.call(mock.ANY, instances[0]['id'], share_server=share_server), mock.call(mock.ANY, instances[2]['id'], share_server=share_server), ])) @ddt.data(("some_hash", {"db_version": "test_version"}), ("ddd86ec90923b686597501e2f2431f3af59238c0", {"db_version": "test_version"}), (None, {"db_version": "test_version"}), (None, None)) @ddt.unpack def test_init_host_without_shares_and_rules( self, old_backend_info_hash, new_backend_info): old_backend_info = {'info_hash': old_backend_info_hash} new_backend_info_hash = (hashlib.sha1(six.text_type( sorted(new_backend_info.items())).encode('utf-8')).hexdigest() if new_backend_info else None) mock_backend_info_update = self.mock_object( self.share_manager.db, 'backend_info_update') self.mock_object( self.share_manager.db, 'backend_info_get', mock.Mock(return_value=old_backend_info)) self.mock_object(self.share_manager.driver, 'get_backend_info', mock.Mock(return_value=new_backend_info)) self.mock_object(self.share_manager, 'publish_service_capabilities', mock.Mock()) mock_ensure_shares = self.mock_object( self.share_manager.driver, 'ensure_shares') mock_share_instances_get_all_by_host = self.mock_object( self.share_manager.db, 'share_instances_get_all_by_host', mock.Mock(return_value=[])) # call of 'init_host' method self.share_manager.init_host() if new_backend_info_hash == old_backend_info_hash: mock_backend_info_update.assert_not_called() mock_ensure_shares.assert_not_called() mock_share_instances_get_all_by_host.assert_not_called() else: mock_backend_info_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.share_manager.host, new_backend_info_hash) self.share_manager.driver.do_setup.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) self.share_manager.db.backend_info_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.share_manager.host) self.share_manager.driver.get_backend_info.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) mock_ensure_shares.assert_not_called() mock_share_instances_get_all_by_host.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.share_manager.host) @ddt.data(exception.ManilaException, ['fake/path/1', 'fake/path']) def test_init_host_with_ensure_share(self, expected_ensure_share_result): def raise_NotImplementedError(*args, **kwargs): raise NotImplementedError instances = self._setup_init_mocks(setup_access_rules=False) share_server = fakes.fake_share_server_get() self.mock_object(self.share_manager.db, 'share_instances_get_all_by_host', mock.Mock(return_value=instances)) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(side_effect=[instances[0], instances[2], instances[3]])) self.mock_object( self.share_manager.driver, 'ensure_shares', mock.Mock(side_effect=raise_NotImplementedError)) self.mock_object(self.share_manager.driver, 'ensure_share', mock.Mock(side_effect=expected_ensure_share_result)) self.mock_object( self.share_manager, '_ensure_share_instance_has_pool') 
self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=share_server)) self.mock_object(self.share_manager, '_get_share_server_dict', mock.Mock(return_value=share_server)) self.mock_object(self.share_manager, 'publish_service_capabilities') self.mock_object(manager.LOG, 'error') self.mock_object(manager.LOG, 'info') dict_instances = [self._get_share_instance_dict( instance, share_server=share_server) for instance in instances] # call of 'init_host' method self.share_manager.init_host() # verification of call (self.share_manager.db.share_instances_get_all_by_host. assert_called_once_with(utils.IsAMatcher(context.RequestContext), self.share_manager.host)) self.share_manager.driver.do_setup.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) self.share_manager.driver.check_for_setup_error.assert_called_with() self.share_manager._ensure_share_instance_has_pool.assert_has_calls([ mock.call(utils.IsAMatcher(context.RequestContext), instances[0]), mock.call(utils.IsAMatcher(context.RequestContext), instances[2]), ]) self.share_manager.driver.ensure_shares.assert_called_once_with( utils.IsAMatcher(context.RequestContext), [dict_instances[0], dict_instances[2], dict_instances[3]]) self.share_manager._get_share_server.assert_has_calls([ mock.call(utils.IsAMatcher(context.RequestContext), instances[0]), mock.call(utils.IsAMatcher(context.RequestContext), instances[2]), ]) self.share_manager.driver.ensure_share.assert_has_calls([ mock.call(utils.IsAMatcher(context.RequestContext), dict_instances[0], share_server=share_server), mock.call(utils.IsAMatcher(context.RequestContext), dict_instances[2], share_server=share_server), ]) (self.share_manager.publish_service_capabilities. assert_called_once_with( utils.IsAMatcher(context.RequestContext))) manager.LOG.info.assert_any_call( mock.ANY, {'task': constants.TASK_STATE_MIGRATION_IN_PROGRESS, 'id': instances[3]['id']}, ) manager.LOG.info.assert_any_call( mock.ANY, {'id': instances[1]['id'], 'status': instances[1]['status']}, ) def _get_share_instance_dict(self, share_instance, **kwargs): # TODO(gouthamr): remove method when the db layer returns primitives share_instance_ref = { 'id': share_instance.get('id'), 'name': share_instance.get('name'), 'share_id': share_instance.get('share_id'), 'host': share_instance.get('host'), 'status': share_instance.get('status'), 'replica_state': share_instance.get('replica_state'), 'availability_zone_id': share_instance.get('availability_zone_id'), 'share_network_id': share_instance.get('share_network_id'), 'share_server_id': share_instance.get('share_server_id'), 'deleted': share_instance.get('deleted'), 'terminated_at': share_instance.get('terminated_at'), 'launched_at': share_instance.get('launched_at'), 'scheduled_at': share_instance.get('scheduled_at'), 'updated_at': share_instance.get('updated_at'), 'deleted_at': share_instance.get('deleted_at'), 'created_at': share_instance.get('created_at'), 'share_server': kwargs.get('share_server'), 'access_rules_status': share_instance.get('access_rules_status'), # Share details 'user_id': share_instance.get('user_id'), 'project_id': share_instance.get('project_id'), 'size': share_instance.get('size'), 'display_name': share_instance.get('display_name'), 'display_description': share_instance.get('display_description'), 'snapshot_id': share_instance.get('snapshot_id'), 'share_proto': share_instance.get('share_proto'), 'share_type_id': share_instance.get('share_type_id'), 'is_public': share_instance.get('is_public'), 'share_group_id': 
share_instance.get('share_group_id'), 'source_share_group_snapshot_member_id': share_instance.get( 'source_share_group_snapshot_member_id'), 'availability_zone': share_instance.get('availability_zone'), 'export_locations': share_instance.get('export_locations') or [], } return share_instance_ref def test_init_host_with_exception_on_ensure_shares(self): def raise_exception(*args, **kwargs): raise exception.ManilaException(message="Fake raise") instances = self._setup_init_mocks(setup_access_rules=False) mock_ensure_share = self.mock_object( self.share_manager.driver, 'ensure_share') self.mock_object(self.share_manager.db, 'share_instances_get_all_by_host', mock.Mock(return_value=instances)) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(side_effect=[instances[0], instances[2], instances[3]])) self.mock_object( self.share_manager.driver, 'ensure_shares', mock.Mock(side_effect=raise_exception)) self.mock_object( self.share_manager, '_ensure_share_instance_has_pool') self.mock_object(db, 'share_server_get', mock.Mock(return_value=fakes.fake_share_server_get())) dict_instances = [self._get_share_instance_dict(instance) for instance in instances] # call of 'init_host' method self.share_manager.init_host() # verification of call (self.share_manager.db.share_instances_get_all_by_host. assert_called_once_with(utils.IsAMatcher(context.RequestContext), self.share_manager.host)) self.share_manager.driver.do_setup.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) self.share_manager.driver.check_for_setup_error.assert_called_with() self.share_manager._ensure_share_instance_has_pool.assert_has_calls([ mock.call(utils.IsAMatcher(context.RequestContext), instances[0]), mock.call(utils.IsAMatcher(context.RequestContext), instances[2]), ]) self.share_manager.driver.ensure_shares.assert_called_once_with( utils.IsAMatcher(context.RequestContext), [dict_instances[0], dict_instances[2], dict_instances[3]]) mock_ensure_share.assert_not_called() def test_init_host_with_exception_on_get_backend_info(self): def raise_exception(*args, **kwargs): raise exception.ManilaException(message="Fake raise") old_backend_info = {'info_hash': "test_backend_info"} mock_ensure_share = self.mock_object( self.share_manager.driver, 'ensure_share') mock_ensure_shares = self.mock_object( self.share_manager.driver, 'ensure_shares') self.mock_object(self.share_manager.db, 'backend_info_get', mock.Mock(return_value=old_backend_info)) self.mock_object( self.share_manager.driver, 'get_backend_info', mock.Mock(side_effect=raise_exception)) # call of 'init_host' method self.assertRaises( exception.ManilaException, self.share_manager.init_host, ) # verification of call self.share_manager.db.backend_info_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.share_manager.host) self.share_manager.driver.get_backend_info.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) mock_ensure_share.assert_not_called() mock_ensure_shares.assert_not_called() def test_init_host_with_exception_on_update_access_rules(self): def raise_exception(*args, **kwargs): raise exception.ManilaException(message="Fake raise") instances, rules = self._setup_init_mocks() share_server = fakes.fake_share_server_get() fake_update_instances = { instances[0]['id']: {'status': 'available'}, instances[2]['id']: {'status': 'available'}, instances[4]['id']: {'status': 'available'} } smanager = self.share_manager self.mock_object(smanager.db, 'share_instances_get_all_by_host', 
mock.Mock(return_value=instances)) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(side_effect=[instances[0], instances[2], instances[4]])) self.mock_object(self.share_manager.driver, 'ensure_share', mock.Mock(return_value=None)) self.mock_object(self.share_manager.driver, 'ensure_shares', mock.Mock(return_value=fake_update_instances)) self.mock_object(smanager, '_ensure_share_instance_has_pool') self.mock_object(smanager, '_get_share_server', mock.Mock(return_value=share_server)) self.mock_object(smanager, 'publish_service_capabilities') self.mock_object(manager.LOG, 'exception') self.mock_object(manager.LOG, 'info') self.mock_object(smanager.db, 'share_access_get_all_for_share', mock.Mock(return_value=rules)) self.mock_object(smanager.access_helper, 'update_access_rules', mock.Mock(side_effect=raise_exception)) self.mock_object(smanager, '_get_share_server_dict', mock.Mock(return_value=share_server)) dict_instances = [self._get_share_instance_dict( instance, share_server=share_server) for instance in instances] # call of 'init_host' method smanager.init_host() # verification of call (smanager.db.share_instances_get_all_by_host. assert_called_once_with(utils.IsAMatcher(context.RequestContext), smanager.host)) smanager.driver.do_setup.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) smanager.driver.check_for_setup_error.assert_called_with() smanager._ensure_share_instance_has_pool.assert_has_calls([ mock.call(utils.IsAMatcher(context.RequestContext), instances[0]), mock.call(utils.IsAMatcher(context.RequestContext), instances[2]), ]) smanager.driver.ensure_shares.assert_called_once_with( utils.IsAMatcher(context.RequestContext), [dict_instances[0], dict_instances[2], dict_instances[4]]) (self.share_manager.publish_service_capabilities. 
assert_called_once_with( utils.IsAMatcher(context.RequestContext))) manager.LOG.info.assert_any_call( mock.ANY, {'task': constants.TASK_STATE_MIGRATION_IN_PROGRESS, 'id': instances[3]['id']}, ) manager.LOG.info.assert_any_call( mock.ANY, {'id': instances[1]['id'], 'status': instances[1]['status']}, ) smanager.access_helper.update_access_rules.assert_has_calls([ mock.call(utils.IsAMatcher(context.RequestContext), instances[4]['id'], share_server=share_server), ]) manager.LOG.exception.assert_has_calls([ mock.call(mock.ANY, mock.ANY), ]) def test_create_share_instance_from_snapshot_with_server(self): """Test share can be created from snapshot if server exists.""" network = db_utils.create_share_network() subnet = db_utils.create_share_network_subnet( share_network_id=network['id']) server = db_utils.create_share_server( share_network_subnet_id=subnet['id'], host='fake_host', backend_details=dict(fake='fake')) parent_share = db_utils.create_share(share_network_id='net-id', share_server_id=server['id']) share = db_utils.create_share() share_id = share['id'] snapshot = db_utils.create_snapshot(share_id=parent_share['id']) snapshot_id = snapshot['id'] self.share_manager.create_share_instance( self.context, share.instance['id'], snapshot_id=snapshot_id) self.assertEqual(share_id, db.share_get(context.get_admin_context(), share_id).id) shr = db.share_get(self.context, share_id) self.assertEqual(constants.STATUS_AVAILABLE, shr['status']) self.assertEqual(server['id'], shr['instance']['share_server_id']) def test_create_share_instance_from_snapshot_with_server_not_found(self): """Test creation from snapshot fails if server not found.""" parent_share = db_utils.create_share(share_network_id='net-id', share_server_id='fake-id') share = db_utils.create_share() share_id = share['id'] snapshot = db_utils.create_snapshot(share_id=parent_share['id']) snapshot_id = snapshot['id'] self.assertRaises(exception.ShareServerNotFound, self.share_manager.create_share_instance, self.context, share.instance['id'], snapshot_id=snapshot_id ) shr = db.share_get(self.context, share_id) self.assertEqual(constants.STATUS_ERROR, shr['status']) def test_create_share_instance_from_snapshot_status_creating(self): """Test share can be created from snapshot in asynchronous mode.""" share = db_utils.create_share() share_id = share['id'] snapshot = db_utils.create_snapshot(share_id=share_id) snapshot_id = snapshot['id'] create_from_snap_ret = { 'status': constants.STATUS_CREATING_FROM_SNAPSHOT, } driver_call = self.mock_object( self.share_manager.driver, 'create_share_from_snapshot', mock.Mock(return_value=create_from_snap_ret)) self.share_manager.create_share_instance( self.context, share.instance['id'], snapshot_id=snapshot_id) self.assertTrue(driver_call.called) self.assertEqual(share_id, db.share_get(context.get_admin_context(), share_id).id) shr = db.share_get(self.context, share_id) self.assertTrue(driver_call.called) self.assertEqual(constants.STATUS_CREATING_FROM_SNAPSHOT, shr['status']) self.assertEqual(0, len(shr['export_locations'])) def test_create_share_instance_from_snapshot_invalid_status(self): """Test share can't be created from snapshot with 'creating' status.""" share = db_utils.create_share() share_id = share['id'] snapshot = db_utils.create_snapshot(share_id=share_id) snapshot_id = snapshot['id'] create_from_snap_ret = { 'status': constants.STATUS_CREATING, } driver_call = self.mock_object( self.share_manager.driver, 'create_share_from_snapshot', mock.Mock(return_value=create_from_snap_ret)) 
self.assertRaises(exception.InvalidShareInstance, self.share_manager.create_share_instance, self.context, share.instance['id'], snapshot_id=snapshot_id) self.assertTrue(driver_call.called) shr = db.share_get(self.context, share_id) self.assertEqual(constants.STATUS_ERROR, shr['status']) def test_create_share_instance_from_snapshot_export_locations_only(self): """Test share can be created from snapshot on old driver interface.""" share = db_utils.create_share() share_id = share['id'] snapshot = db_utils.create_snapshot(share_id=share_id) snapshot_id = snapshot['id'] create_from_snap_ret = ['/path/fake', '/path/fake2', '/path/fake3'] driver_call = self.mock_object( self.share_manager.driver, 'create_share_from_snapshot', mock.Mock(return_value=create_from_snap_ret)) self.share_manager.create_share_instance( self.context, share.instance['id'], snapshot_id=snapshot_id) self.assertTrue(driver_call.called) self.assertEqual(share_id, db.share_get(context.get_admin_context(), share_id).id) shr = db.share_get(self.context, share_id) self.assertEqual(constants.STATUS_AVAILABLE, shr['status']) self.assertEqual(3, len(shr['export_locations'])) def test_create_share_instance_from_snapshot(self): """Test share can be created from snapshot.""" share = db_utils.create_share() share_id = share['id'] snapshot = db_utils.create_snapshot(share_id=share_id) snapshot_id = snapshot['id'] self.share_manager.create_share_instance( self.context, share.instance['id'], snapshot_id=snapshot_id) self.assertEqual(share_id, db.share_get(context.get_admin_context(), share_id).id) shr = db.share_get(self.context, share_id) self.assertEqual(constants.STATUS_AVAILABLE, shr['status']) self.assertGreater(len(shr['export_location']), 0) self.assertEqual(2, len(shr['export_locations'])) def test_create_share_instance_for_share_with_replication_support(self): """Test update call is made to update replica_state.""" share = db_utils.create_share(replication_type='writable') share_id = share['id'] self.share_manager.create_share_instance(self.context, share.instance['id']) self.assertEqual(share_id, db.share_get(context.get_admin_context(), share_id).id) shr = db.share_get(self.context, share_id) shr_instance = db.share_instance_get(self.context, share.instance['id']) self.assertEqual(constants.STATUS_AVAILABLE, shr['status'],) self.assertEqual(constants.REPLICA_STATE_ACTIVE, shr_instance['replica_state']) @ddt.data([], None) def test_create_share_replica_no_active_replicas(self, active_replicas): replica = fake_replica() self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=active_replicas)) self.mock_object( db, 'share_replica_get', mock.Mock(return_value=replica)) mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_driver_replica_call = self.mock_object( self.share_manager.driver, 'create_replica') self.assertRaises(exception.ReplicationException, self.share_manager.create_share_replica, self.context, replica) mock_replica_update_call.assert_called_once_with( mock.ANY, replica['id'], {'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR}) self.assertFalse(mock_driver_replica_call.called) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.CREATE, replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=replica['id'], detail=message_field.Detail.NO_ACTIVE_REPLICA) def test_create_share_replica_with_share_network_id_and_not_dhss(self): replica = 
fake_replica() manager.CONF.set_default('driver_handles_share_servers', False) self.mock_object(db, 'share_access_get_all_for_share', mock.Mock(return_value=[])) self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=fake_replica(id='fake2'))) self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_driver_replica_call = self.mock_object( self.share_manager.driver, 'create_replica') self.assertRaises(exception.InvalidDriverMode, self.share_manager.create_share_replica, self.context, replica) mock_replica_update_call.assert_called_once_with( mock.ANY, replica['id'], {'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR}) self.assertFalse(mock_driver_replica_call.called) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.CREATE, replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=replica['id'], detail=message_field.Detail.UNEXPECTED_NETWORK) def test_create_share_replica_with_share_server_exception(self): replica = fake_replica() share_network_subnet = db_utils.create_share_network_subnet( share_network_id=replica['share_network_id'], availability_zone_id=replica['availability_zone_id']) manager.CONF.set_default('driver_handles_share_servers', True) self.mock_object(db, 'share_instance_access_copy', mock.Mock(return_value=[])) self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=fake_replica(id='fake2'))) self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_network_subnet_get_by_availability_zone_id', mock.Mock(return_value=share_network_subnet)) mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_driver_replica_call = self.mock_object( self.share_manager.driver, 'create_replica') self.assertRaises(exception.NotFound, self.share_manager.create_share_replica, self.context, replica) mock_replica_update_call.assert_called_once_with( mock.ANY, replica['id'], {'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR}) self.assertFalse(mock_driver_replica_call.called) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.CREATE, replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=replica['id'], detail=message_field.Detail.NO_SHARE_SERVER) def test_create_share_replica_driver_error_on_creation(self): fake_access_rules = [{'id': '1'}, {'id': '2'}, {'id': '3'}] replica = fake_replica() replica_2 = fake_replica(id='fake2') self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_instance_access_copy', mock.Mock(return_value=fake_access_rules)) self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=replica_2)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, replica_2])) self.mock_object(self.share_manager, '_provide_share_server_for_share', mock.Mock(return_value=('FAKE_SERVER', replica))) self.mock_object(self.share_manager, '_get_replica_snapshots_for_snapshot', mock.Mock(return_value=[])) mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_export_locs_update_call = self.mock_object( db, 'share_export_locations_update') mock_log_error = self.mock_object(manager.LOG, 
'error') mock_log_info = self.mock_object(manager.LOG, 'info') self.mock_object(db, 'share_instance_access_get', mock.Mock(return_value=fake_access_rules[0])) mock_share_replica_access_update = self.mock_object( self.share_manager.access_helper, 'get_and_update_share_instance_access_rules_status') self.mock_object(self.share_manager, '_get_share_server') driver_call = self.mock_object( self.share_manager.driver, 'create_replica', mock.Mock(side_effect=exception.ManilaException)) self.assertRaises(exception.ManilaException, self.share_manager.create_share_replica, self.context, replica) mock_replica_update_call.assert_called_once_with( utils.IsAMatcher(context.RequestContext), replica['id'], {'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR}) mock_share_replica_access_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_instance_id=replica['id'], status=constants.SHARE_INSTANCE_RULES_ERROR) self.assertFalse(mock_export_locs_update_call.called) self.assertTrue(mock_log_error.called) self.assertFalse(mock_log_info.called) self.assertTrue(driver_call.called) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.CREATE, replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=replica['id'], exception=mock.ANY) def test_create_share_replica_invalid_locations_state(self): driver_retval = { 'export_locations': 'FAKE_EXPORT_LOC', } replica = fake_replica(share_network='', access_rules_status=constants.STATUS_ACTIVE) replica_2 = fake_replica(id='fake2') fake_access_rules = [{'id': '1'}, {'id': '2'}] self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=replica_2)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, replica_2])) self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_instance_access_copy', mock.Mock(return_value=fake_access_rules)) self.mock_object(self.share_manager, '_provide_share_server_for_share', mock.Mock(return_value=('FAKE_SERVER', replica))) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object(self.share_manager, '_get_replica_snapshots_for_snapshot', mock.Mock(return_value=[])) mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_export_locs_update_call = self.mock_object( db, 'share_export_locations_update') mock_log_info = self.mock_object(manager.LOG, 'info') mock_log_warning = self.mock_object(manager.LOG, 'warning') mock_log_error = self.mock_object(manager.LOG, 'error') driver_call = self.mock_object( self.share_manager.driver, 'create_replica', mock.Mock(return_value=driver_retval)) self.mock_object(db, 'share_instance_access_get', mock.Mock(return_value=fake_access_rules[0])) mock_share_replica_access_update = self.mock_object( self.share_manager.access_helper, 'get_and_update_share_instance_access_rules_status') self.share_manager.create_share_replica(self.context, replica) self.assertFalse(mock_replica_update_call.called) mock_share_replica_access_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_instance_id=replica['id'], status=constants.STATUS_ACTIVE) self.assertFalse(mock_export_locs_update_call.called) self.assertTrue(mock_log_info.called) self.assertTrue(mock_log_warning.called) self.assertFalse(mock_log_error.called) self.assertTrue(driver_call.called) call_args = 
driver_call.call_args_list[0][0] replica_list_arg = call_args[1] r_ids = [r['id'] for r in replica_list_arg] for r in (replica, replica_2): self.assertIn(r['id'], r_ids) self.assertEqual(2, len(r_ids)) def test_create_share_replica_no_availability_zone(self): replica = fake_replica( availability_zone=None, share_network='', replica_state=constants.REPLICA_STATE_OUT_OF_SYNC) replica_2 = fake_replica(id='fake2') self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, replica_2])) self.share_manager.availability_zone = 'fake_az' fake_access_rules = [{'id': '1'}, {'id': '2'}, {'id': '3'}] self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_instance_access_copy', mock.Mock(return_value=fake_access_rules)) self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=replica_2)) self.mock_object(self.share_manager, '_provide_share_server_for_share', mock.Mock(return_value=('FAKE_SERVER', replica))) self.mock_object(self.share_manager, '_get_replica_snapshots_for_snapshot', mock.Mock(return_value=[])) mock_replica_update_call = self.mock_object( db, 'share_replica_update', mock.Mock(return_value=replica)) mock_calls = [ mock.call(mock.ANY, replica['id'], {'availability_zone': 'fake_az'}, with_share_data=True), mock.call(mock.ANY, replica['id'], {'status': constants.STATUS_AVAILABLE, 'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC, 'progress': '100%'}), ] mock_export_locs_update_call = self.mock_object( db, 'share_export_locations_update') mock_log_info = self.mock_object(manager.LOG, 'info') mock_log_warning = self.mock_object(manager.LOG, 'warning') mock_log_error = self.mock_object(manager.LOG, 'warning') self.mock_object(db, 'share_instance_access_get', mock.Mock(return_value=fake_access_rules[0])) mock_share_replica_access_update = self.mock_object( self.share_manager, '_update_share_replica_access_rules_state') driver_call = self.mock_object( self.share_manager.driver, 'create_replica', mock.Mock(return_value=replica)) self.mock_object(self.share_manager, '_get_share_server', mock.Mock()) self.share_manager.create_share_replica(self.context, replica) mock_replica_update_call.assert_has_calls(mock_calls, any_order=False) mock_share_replica_access_update.assert_called_once_with( mock.ANY, replica['id'], replica['access_rules_status']) self.assertTrue(mock_export_locs_update_call.called) self.assertTrue(mock_log_info.called) self.assertFalse(mock_log_warning.called) self.assertFalse(mock_log_error.called) self.assertTrue(driver_call.called) @ddt.data(True, False) def test_create_share_replica(self, has_snapshots): replica = fake_replica( share_network='', replica_state=constants.REPLICA_STATE_IN_SYNC) replica_2 = fake_replica(id='fake2') snapshots = ([fakes.fake_snapshot(create_instance=True)] if has_snapshots else []) snapshot_instances = [ fakes.fake_snapshot_instance(share_instance_id=replica['id']), fakes.fake_snapshot_instance(share_instance_id='fake2'), ] fake_access_rules = [{'id': '1'}, {'id': '2'}, {'id': '3'}] self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_instance_access_copy', mock.Mock(return_value=fake_access_rules)) self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=replica_2)) self.mock_object(self.share_manager, '_provide_share_server_for_share', mock.Mock(return_value=('FAKE_SERVER', replica))) self.mock_object(db, 'share_replicas_get_all_by_share', 
mock.Mock(return_value=[replica, replica_2])) self.mock_object(db, 'share_snapshot_get_all_for_share', mock.Mock( return_value=snapshots)) mock_instance_get_call = self.mock_object( db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_export_locs_update_call = self.mock_object( db, 'share_export_locations_update') mock_log_info = self.mock_object(manager.LOG, 'info') mock_log_warning = self.mock_object(manager.LOG, 'warning') mock_log_error = self.mock_object(manager.LOG, 'warning') self.mock_object(db, 'share_instance_access_get', mock.Mock(return_value=fake_access_rules[0])) mock_share_replica_access_update = self.mock_object( self.share_manager.access_helper, 'get_and_update_share_instance_access_rules_status') driver_call = self.mock_object( self.share_manager.driver, 'create_replica', mock.Mock(return_value=replica)) self.mock_object(self.share_manager, '_get_share_server') self.share_manager.create_share_replica(self.context, replica) mock_replica_update_call.assert_called_once_with( utils.IsAMatcher(context.RequestContext), replica['id'], {'status': constants.STATUS_AVAILABLE, 'replica_state': constants.REPLICA_STATE_IN_SYNC, 'progress': '100%'}) mock_share_replica_access_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_instance_id=replica['id'], status=replica['access_rules_status']) self.assertTrue(mock_export_locs_update_call.called) self.assertTrue(mock_log_info.called) self.assertFalse(mock_log_warning.called) self.assertFalse(mock_log_error.called) self.assertTrue(driver_call.called) call_args = driver_call.call_args_list[0][0] replica_list_arg = call_args[1] snapshot_list_arg = call_args[4] r_ids = [r['id'] for r in replica_list_arg] for r in (replica, replica_2): self.assertIn(r['id'], r_ids) self.assertEqual(2, len(r_ids)) if has_snapshots: for snapshot_dict in snapshot_list_arg: self.assertIn('active_replica_snapshot', snapshot_dict) self.assertIn('share_replica_snapshot', snapshot_dict) else: self.assertFalse(mock_instance_get_call.called) def test_delete_share_replica_access_rules_exception(self): replica = fake_replica() replica_2 = fake_replica(id='fake_2') self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, replica_2])) active_replica = fake_replica( id='Current_active_replica', replica_state=constants.REPLICA_STATE_ACTIVE) mock_exception_log = self.mock_object(manager.LOG, 'exception') self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=active_replica)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(self.share_manager.access_helper, 'update_access_rules') mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_replica_delete_call = self.mock_object(db, 'share_replica_delete') mock_drv_delete_replica_call = self.mock_object( self.share_manager.driver, 'delete_replica') self.mock_object( self.share_manager.access_helper, 'update_access_rules', mock.Mock(side_effect=exception.ManilaException)) self.assertRaises(exception.ManilaException, self.share_manager.delete_share_replica, self.context, replica['id'], share_id=replica['share_id']) mock_replica_update_call.assert_called_once_with( mock.ANY, replica['id'], {'status': constants.STATUS_ERROR}) self.assertFalse(mock_drv_delete_replica_call.called) 
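        # The replica DB row must survive the failed access-rule cleanup and
        # no exception may be logged; instead a DELETE_ACCESS_RULES user
        # message is expected, as the remaining assertions verify.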
self.assertFalse(mock_replica_delete_call.called) self.assertFalse(mock_exception_log.called) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.DELETE_ACCESS_RULES, replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=replica['id'], exception=mock.ANY) def test_delete_share_replica_drv_misbehavior_ignored_with_the_force(self): replica = fake_replica() active_replica = fake_replica(id='Current_active_replica') mock_exception_log = self.mock_object(manager.LOG, 'exception') self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, active_replica])) self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=active_replica)) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object(self.share_manager.access_helper, 'update_access_rules') self.mock_object( db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=[])) mock_snap_instance_delete = self.mock_object( db, 'share_snapshot_instance_delete') mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_replica_delete_call = self.mock_object(db, 'share_replica_delete') mock_drv_delete_replica_call = self.mock_object( self.share_manager.driver, 'delete_replica', mock.Mock(side_effect=exception.ManilaException)) self.mock_object( self.share_manager.access_helper, 'update_access_rules') self.share_manager.delete_share_replica( self.context, replica['id'], share_id=replica['share_id'], force=True) self.assertFalse(mock_replica_update_call.called) self.assertTrue(mock_replica_delete_call.called) self.assertEqual(1, mock_exception_log.call_count) self.assertTrue(mock_drv_delete_replica_call.called) self.assertFalse(mock_snap_instance_delete.called) def test_delete_share_replica_driver_exception(self): replica = fake_replica() active_replica = fake_replica(id='Current_active_replica') self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, active_replica])) self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=active_replica)) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) mock_snapshot_get_call = self.mock_object( db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=[])) mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_replica_delete_call = self.mock_object(db, 'share_replica_delete') self.mock_object( self.share_manager.access_helper, 'update_access_rules') mock_drv_delete_replica_call = self.mock_object( self.share_manager.driver, 'delete_replica', mock.Mock(side_effect=exception.ManilaException)) self.assertRaises(exception.ManilaException, self.share_manager.delete_share_replica, self.context, replica['id'], share_id=replica['share_id']) self.assertTrue(mock_replica_update_call.called) self.assertFalse(mock_replica_delete_call.called) self.assertTrue(mock_drv_delete_replica_call.called) self.assertTrue(mock_snapshot_get_call.called) def test_delete_share_replica_both_exceptions_ignored_with_the_force(self): replica = fake_replica() active_replica = fake_replica(id='Current_active_replica') snapshots = [ fakes.fake_snapshot(share_id=replica['id'], status=constants.STATUS_AVAILABLE), 
fakes.fake_snapshot(share_id=replica['id'], id='test_creating_to_err', status=constants.STATUS_CREATING) ] self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, active_replica])) mock_exception_log = self.mock_object(manager.LOG, 'exception') self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=active_replica)) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object( db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshots)) mock_snapshot_instance_delete_call = self.mock_object( db, 'share_snapshot_instance_delete') mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_replica_delete_call = self.mock_object(db, 'share_replica_delete') self.mock_object( self.share_manager.access_helper, 'update_access_rules', mock.Mock(side_effect=exception.ManilaException)) mock_drv_delete_replica_call = self.mock_object( self.share_manager.driver, 'delete_replica', mock.Mock(side_effect=exception.ManilaException)) self.share_manager.delete_share_replica( self.context, replica['id'], share_id=replica['share_id'], force=True) mock_replica_update_call.assert_called_once_with( mock.ANY, replica['id'], {'status': constants.STATUS_ERROR}) self.assertTrue(mock_replica_delete_call.called) self.assertEqual(2, mock_exception_log.call_count) self.assertTrue(mock_drv_delete_replica_call.called) self.assertEqual(2, mock_snapshot_instance_delete_call.call_count) def test_delete_share_replica(self): replica = fake_replica() active_replica = fake_replica(id='current_active_replica') snapshots = [ fakes.fake_snapshot(share_id=replica['share_id'], status=constants.STATUS_AVAILABLE), fakes.fake_snapshot(share_id=replica['share_id'], id='test_creating_to_err', status=constants.STATUS_CREATING) ] self.mock_object( db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshots)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, active_replica])) self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=active_replica)) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) mock_info_log = self.mock_object(manager.LOG, 'info') mock_snapshot_instance_delete_call = self.mock_object( db, 'share_snapshot_instance_delete') mock_replica_update_call = self.mock_object(db, 'share_replica_update') mock_replica_delete_call = self.mock_object(db, 'share_replica_delete') self.mock_object( self.share_manager.access_helper, 'update_access_rules') mock_drv_delete_replica_call = self.mock_object( self.share_manager.driver, 'delete_replica') self.share_manager.delete_share_replica(self.context, replica) self.assertFalse(mock_replica_update_call.called) self.assertTrue(mock_replica_delete_call.called) self.assertTrue(mock_info_log.called) self.assertTrue(mock_drv_delete_replica_call.called) self.assertEqual(2, mock_snapshot_instance_delete_call.call_count) def test_promote_share_replica_no_active_replica(self): replica = fake_replica() replica_list = [replica] self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_replicas_get_available_active_replica', mock.Mock(return_value=replica_list)) 
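        # Without an eligible active replica, the promotion attempted below
        # must raise ReplicationException before the driver is ever called,
        # and the replica is reset back to 'available'.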
mock_info_log = self.mock_object(manager.LOG, 'info') mock_driver_call = self.mock_object(self.share_manager.driver, 'promote_replica') mock_replica_update = self.mock_object(db, 'share_replica_update') expected_update_call = mock.call( mock.ANY, replica['id'], {'status': constants.STATUS_AVAILABLE}) self.assertRaises(exception.ReplicationException, self.share_manager.promote_share_replica, self.context, replica) self.assertFalse(mock_info_log.called) self.assertFalse(mock_driver_call.called) mock_replica_update.assert_has_calls([expected_update_call]) def test_promote_share_replica_driver_exception(self): replica = fake_replica() active_replica = fake_replica( id='current_active_replica', replica_state=constants.REPLICA_STATE_ACTIVE) replica_list = [replica, active_replica] self.mock_object(db, 'share_access_get_all_for_share', mock.Mock(return_value=[])) self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replica_list)) self.mock_object(self.share_manager.driver, 'promote_replica', mock.Mock(side_effect=exception.ManilaException)) mock_info_log = self.mock_object(manager.LOG, 'info') mock_replica_update = self.mock_object(db, 'share_replica_update') expected_update_calls = [mock.call( mock.ANY, r['id'], {'status': constants.STATUS_ERROR}) for r in(replica, active_replica)] self.assertRaises(exception.ManilaException, self.share_manager.promote_share_replica, self.context, replica) mock_replica_update.assert_has_calls(expected_update_calls) self.assertFalse(mock_info_log.called) expected_message_calls = [ mock.call( utils.IsAMatcher(context.RequestContext), message_field.Action.PROMOTE, r['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=r['id'], exception=mock.ANY) for r in (replica, active_replica)] self.share_manager.message_api.create.assert_has_calls( expected_message_calls) @ddt.data([], None) def test_promote_share_replica_driver_update_nothing_has_snaps(self, retval): replica = fake_replica( replication_type=constants.REPLICATION_TYPE_READABLE) active_replica = fake_replica( id='current_active_replica', replica_state=constants.REPLICA_STATE_ACTIVE) snapshots_instances = [ fakes.fake_snapshot(create_instance=True, share_id=replica['share_id'], status=constants.STATUS_AVAILABLE), fakes.fake_snapshot(create_instance=True, share_id=replica['share_id'], id='test_creating_to_err', status=constants.STATUS_CREATING) ] replica_list = [replica, active_replica] self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_access_get_all_for_share', mock.Mock(return_value=[])) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replica_list)) self.mock_object( db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshots_instances)) self.mock_object( self.share_manager.driver, 'promote_replica', mock.Mock(return_value=retval)) mock_snap_instance_update = self.mock_object( db, 'share_snapshot_instance_update') mock_info_log = self.mock_object(manager.LOG, 'info') mock_export_locs_update = self.mock_object( db, 'share_export_locations_update') mock_replica_update = self.mock_object(db, 'share_replica_update') call_1 = mock.call(mock.ANY, replica['id'], {'status': constants.STATUS_AVAILABLE, 'replica_state': constants.REPLICA_STATE_ACTIVE, 
'cast_rules_to_readonly': False}) call_2 = mock.call( mock.ANY, 'current_active_replica', {'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC, 'cast_rules_to_readonly': True}) expected_update_calls = [call_1, call_2] self.share_manager.promote_share_replica(self.context, replica) self.assertFalse(mock_export_locs_update.called) mock_replica_update.assert_has_calls(expected_update_calls, any_order=True) mock_snap_instance_update.assert_called_once_with( mock.ANY, 'test_creating_to_err', {'status': constants.STATUS_ERROR}) self.assertEqual(2, mock_info_log.call_count) @ddt.data(constants.REPLICATION_TYPE_READABLE, constants.REPLICATION_TYPE_WRITABLE, constants.REPLICATION_TYPE_DR) def test_promote_share_replica_driver_updates_replica_list(self, rtype): replica = fake_replica(replication_type=rtype) active_replica = fake_replica( id='current_active_replica', replica_state=constants.REPLICA_STATE_ACTIVE) replica_list = [ replica, active_replica, fake_replica(id=3), fake_replica(id='one_more_replica'), ] updated_replica_list = [ { 'id': replica['id'], 'export_locations': ['TEST1', 'TEST2'], 'replica_state': constants.REPLICA_STATE_ACTIVE, }, { 'id': 'current_active_replica', 'export_locations': 'junk_return_value', 'replica_state': constants.REPLICA_STATE_IN_SYNC, }, { 'id': 'other_replica', 'export_locations': ['TEST3', 'TEST4'], }, { 'id': replica_list[3]['id'], 'export_locations': ['TEST5', 'TEST6'], 'replica_state': constants.REPLICA_STATE_IN_SYNC, }, ] self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object( db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=[])) self.mock_object(db, 'share_access_get_all_for_share', mock.Mock(return_value=[])) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replica_list)) mock_snap_instance_update = self.mock_object( db, 'share_snapshot_instance_update') self.mock_object( self.share_manager.driver, 'promote_replica', mock.Mock(return_value=updated_replica_list)) mock_info_log = self.mock_object(manager.LOG, 'info') mock_export_locs_update = self.mock_object( db, 'share_export_locations_update') mock_replica_update = self.mock_object(db, 'share_replica_update') reset_replication_change_updates = { 'replica_state': constants.STATUS_ACTIVE, 'status': constants.STATUS_AVAILABLE, 'cast_rules_to_readonly': False, } demoted_replica_updates = { 'replica_state': constants.REPLICA_STATE_IN_SYNC, 'cast_rules_to_readonly': False, } if rtype == constants.REPLICATION_TYPE_READABLE: demoted_replica_updates['cast_rules_to_readonly'] = True reset_replication_change_call = mock.call( mock.ANY, replica['id'], reset_replication_change_updates) demoted_replica_update_call = mock.call( mock.ANY, active_replica['id'], demoted_replica_updates ) additional_replica_update_call = mock.call( mock.ANY, replica_list[3]['id'], { 'replica_state': constants.REPLICA_STATE_IN_SYNC, } ) self.share_manager.promote_share_replica(self.context, replica) self.assertEqual(3, mock_export_locs_update.call_count) mock_replica_update.assert_has_calls([ demoted_replica_update_call, additional_replica_update_call, reset_replication_change_call, ]) self.assertTrue(mock_info_log.called) self.assertFalse(mock_snap_instance_update.called) @ddt.data('openstack1@watson#_pool0', 'openstack1@newton#_pool0') def test_periodic_share_replica_update(self, host): mock_debug_log = self.mock_object(manager.LOG, 'debug') replicas = [ 
fake_replica(host='openstack1@watson#pool4'), fake_replica(host='openstack1@watson#pool5'), fake_replica(host='openstack1@newton#pool5'), fake_replica(host='openstack1@newton#pool5'), ] self.mock_object(self.share_manager.db, 'share_replicas_get_all', mock.Mock(return_value=replicas)) mock_update_method = self.mock_object( self.share_manager, '_share_replica_update') self.share_manager.host = host self.share_manager.periodic_share_replica_update(self.context) self.assertEqual(2, mock_update_method.call_count) self.assertEqual(1, mock_debug_log.call_count) @ddt.data(constants.REPLICA_STATE_IN_SYNC, constants.REPLICA_STATE_OUT_OF_SYNC) def test__share_replica_update_driver_exception(self, replica_state): mock_debug_log = self.mock_object(manager.LOG, 'debug') replica = fake_replica(replica_state=replica_state) active_replica = fake_replica( replica_state=constants.REPLICA_STATE_ACTIVE) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, active_replica])) self.mock_object(self.share_manager.db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_server_get', mock.Mock(return_value=fakes.fake_share_server_get())) self.mock_object(self.share_manager.driver, 'update_replica_state', mock.Mock(side_effect=exception.ManilaException)) mock_db_update_call = self.mock_object( self.share_manager.db, 'share_replica_update') self.share_manager._share_replica_update( self.context, replica, share_id=replica['share_id']) mock_db_update_call.assert_called_once_with( self.context, replica['id'], {'replica_state': constants.STATUS_ERROR, 'status': constants.STATUS_ERROR} ) self.assertEqual(1, mock_debug_log.call_count) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.UPDATE, replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=replica['id'], exception=mock.ANY) def test__share_replica_update_driver_exception_ignored(self): mock_debug_log = self.mock_object(manager.LOG, 'debug') replica = fake_replica(replica_state=constants.STATUS_ERROR) active_replica = fake_replica(replica_state=constants.STATUS_ACTIVE) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, active_replica])) self.mock_object(self.share_manager.db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_server_get', mock.Mock(return_value={})) self.share_manager.host = replica['host'] self.mock_object(self.share_manager.driver, 'update_replica_state', mock.Mock(side_effect=exception.ManilaException)) mock_db_update_call = self.mock_object( self.share_manager.db, 'share_replica_update') self.share_manager._share_replica_update( self.context, replica, share_id=replica['share_id']) mock_db_update_call.assert_called_once_with( self.context, replica['id'], {'replica_state': constants.STATUS_ERROR, 'status': constants.STATUS_ERROR} ) self.assertEqual(1, mock_debug_log.call_count) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.UPDATE, replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=replica['id'], exception=mock.ANY) @ddt.data({'status': constants.STATUS_AVAILABLE, 'replica_state': constants.REPLICA_STATE_ACTIVE, }, {'status': constants.STATUS_DELETING, 'replica_state': constants.REPLICA_STATE_IN_SYNC, }, {'status': constants.STATUS_CREATING, 'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC, }, 
{'status': constants.STATUS_MANAGING, 'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC, }, {'status': constants.STATUS_UNMANAGING, 'replica_state': constants.REPLICA_STATE_ACTIVE, }, {'status': constants.STATUS_EXTENDING, 'replica_state': constants.REPLICA_STATE_IN_SYNC, }, {'status': constants.STATUS_SHRINKING, 'replica_state': constants.REPLICA_STATE_IN_SYNC, }) def test__share_replica_update_unqualified_replica(self, state): mock_debug_log = self.mock_object(manager.LOG, 'debug') mock_warning_log = self.mock_object(manager.LOG, 'warning') mock_driver_call = self.mock_object( self.share_manager.driver, 'update_replica_state') mock_db_update_call = self.mock_object( self.share_manager.db, 'share_replica_update') replica = fake_replica(**state) self.mock_object(db, 'share_server_get', mock.Mock(return_value='fake_share_server')) self.mock_object(db, 'share_replica_get', mock.Mock(return_value=replica)) self.share_manager._share_replica_update(self.context, replica, share_id=replica['share_id']) self.assertFalse(mock_debug_log.called) self.assertFalse(mock_warning_log.called) self.assertFalse(mock_driver_call.called) self.assertFalse(mock_db_update_call.called) @ddt.data(None, constants.REPLICA_STATE_IN_SYNC, constants.REPLICA_STATE_OUT_OF_SYNC, constants.REPLICA_STATE_ACTIVE, constants.STATUS_ERROR) def test__share_replica_update(self, retval): mock_debug_log = self.mock_object(manager.LOG, 'debug') mock_warning_log = self.mock_object(manager.LOG, 'warning') replica_states = [constants.REPLICA_STATE_IN_SYNC, constants.REPLICA_STATE_OUT_OF_SYNC] replica = fake_replica(replica_state=random.choice(replica_states), share_server=fakes.fake_share_server_get()) active_replica = fake_replica( id='fake2', replica_state=constants.STATUS_ACTIVE) snapshots = [fakes.fake_snapshot( create_instance=True, aggregate_status=constants.STATUS_AVAILABLE)] snapshot_instances = [ fakes.fake_snapshot_instance(share_instance_id=replica['id']), fakes.fake_snapshot_instance(share_instance_id='fake2'), ] del replica['availability_zone'] self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica, active_replica])) self.mock_object(db, 'share_server_get', mock.Mock(return_value=fakes.fake_share_server_get())) mock_db_update_calls = [] self.mock_object(self.share_manager.db, 'share_replica_get', mock.Mock(return_value=replica)) mock_driver_call = self.mock_object( self.share_manager.driver, 'update_replica_state', mock.Mock(return_value=retval)) mock_db_update_call = self.mock_object( self.share_manager.db, 'share_replica_update') self.mock_object(db, 'share_snapshot_get_all_for_share', mock.Mock(return_value=snapshots)) self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) self.share_manager._share_replica_update( self.context, replica, share_id=replica['share_id']) if retval == constants.REPLICA_STATE_ACTIVE: self.assertEqual(1, mock_warning_log.call_count) elif retval: self.assertEqual(0, mock_warning_log.call_count) self.assertTrue(mock_driver_call.called) # pylint: disable=unsubscriptable-object snapshot_list_arg = mock_driver_call.call_args[0][4] # pylint: enable=unsubscriptable-object self.assertIn('active_replica_snapshot', snapshot_list_arg[0]) self.assertIn('share_replica_snapshot', snapshot_list_arg[0]) mock_db_update_call.assert_has_calls(mock_db_update_calls) self.assertEqual(1, mock_debug_log.call_count) def test_update_share_replica_replica_not_found(self): replica = fake_replica() self.mock_object( 
self.share_manager.db, 'share_replica_get', mock.Mock( side_effect=exception.ShareReplicaNotFound(replica_id='fake'))) self.mock_object(self.share_manager, '_get_share_server') driver_call = self.mock_object( self.share_manager, '_share_replica_update') self.assertRaises( exception.ShareReplicaNotFound, self.share_manager.update_share_replica, self.context, replica, share_id=replica['share_id']) self.assertFalse(driver_call.called) def test_update_share_replica_replica(self): replica_update_call = self.mock_object( self.share_manager, '_share_replica_update') self.mock_object(self.share_manager.db, 'share_replica_get') retval = self.share_manager.update_share_replica( self.context, 'fake_replica_id', share_id='fake_share_id') self.assertIsNone(retval) self.assertTrue(replica_update_call.called) def _get_snapshot_instance_dict(self, snapshot_instance, share, snapshot=None): expected_snapshot_instance_dict = { 'status': constants.STATUS_CREATING, 'share_id': share['id'], 'share_name': snapshot_instance['share_name'], 'deleted': snapshot_instance['deleted'], 'share': share, 'updated_at': snapshot_instance['updated_at'], 'snapshot_id': snapshot_instance['snapshot_id'], 'id': snapshot_instance['id'], 'name': snapshot_instance['name'], 'created_at': snapshot_instance['created_at'], 'share_instance_id': snapshot_instance['share_instance_id'], 'progress': snapshot_instance['progress'], 'deleted_at': snapshot_instance['deleted_at'], 'provider_location': snapshot_instance['provider_location'], } if snapshot: expected_snapshot_instance_dict.update({ 'size': snapshot['size'], }) return expected_snapshot_instance_dict def test_create_snapshot_driver_exception(self): def _raise_not_found(self, *args, **kwargs): raise exception.NotFound() share_id = 'FAKE_SHARE_ID' share = fakes.fake_share(id=share_id, instance={'id': 'fake_id'}) snapshot_instance = fakes.fake_snapshot_instance( share_id=share_id, share=share, name='fake_snapshot') snapshot = fakes.fake_snapshot( share_id=share_id, share=share, instance=snapshot_instance, project_id=self.context.project_id) snapshot_id = snapshot['id'] self.mock_object(self.share_manager.driver, "create_snapshot", mock.Mock(side_effect=_raise_not_found)) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object(self.share_manager.db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) db_update = self.mock_object( self.share_manager.db, 'share_snapshot_instance_update') expected_snapshot_instance_dict = self._get_snapshot_instance_dict( snapshot_instance, share) self.assertRaises(exception.NotFound, self.share_manager.create_snapshot, self.context, share_id, snapshot_id) db_update.assert_called_once_with(self.context, snapshot_instance['id'], {'status': constants.STATUS_ERROR}) self.share_manager.driver.create_snapshot.assert_called_once_with( self.context, expected_snapshot_instance_dict, share_server=None) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.CREATE, snapshot['project_id'], resource_type=message_field.Resource.SHARE_SNAPSHOT, resource_id=snapshot_instance['id'], exception=mock.ANY) @ddt.data({'model_update': {}, 'mount_snapshot_support': True}, {'model_update': {}, 'mount_snapshot_support': False}, {'model_update': {'export_locations': [ {'path': '/path1', 'is_admin_only': True}, {'path': '/path2', 'is_admin_only': 
False} ]}, 'mount_snapshot_support': True}, {'model_update': {'export_locations': [ {'path': '/path1', 'is_admin_only': True}, {'path': '/path2', 'is_admin_only': False} ]}, 'mount_snapshot_support': False}) @ddt.unpack def test_create_snapshot(self, model_update, mount_snapshot_support): export_locations = model_update.get('export_locations') share_id = 'FAKE_SHARE_ID' share = fakes.fake_share( id=share_id, instance={'id': 'fake_id'}, mount_snapshot_support=mount_snapshot_support) snapshot_instance = fakes.fake_snapshot_instance( share_id=share_id, share=share, name='fake_snapshot') snapshot = fakes.fake_snapshot( share_id=share_id, share=share, instance=snapshot_instance) snapshot_id = snapshot['id'] self.mock_object(self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager.db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) mock_export_update = self.mock_object( self.share_manager.db, 'share_snapshot_instance_export_location_create') expected_update_calls = [ mock.call(self.context, snapshot_instance['id'], {'status': constants.STATUS_AVAILABLE, 'progress': '100%'}) ] expected_snapshot_instance_dict = self._get_snapshot_instance_dict( snapshot_instance, share) self.mock_object( self.share_manager.driver, 'create_snapshot', mock.Mock(return_value=model_update)) db_update = self.mock_object( self.share_manager.db, 'share_snapshot_instance_update') return_value = self.share_manager.create_snapshot( self.context, share_id, snapshot_id) self.assertIsNone(return_value) self.share_manager.driver.create_snapshot.assert_called_once_with( self.context, expected_snapshot_instance_dict, share_server=None) db_update.assert_has_calls(expected_update_calls, any_order=True) if mount_snapshot_support and export_locations: snap_ins_id = snapshot.instance['id'] for i in range(0, 2): export_locations[i]['share_snapshot_instance_id'] = snap_ins_id mock_export_update.assert_has_calls([ mock.call(utils.IsAMatcher(context.RequestContext), export_locations[0]), mock.call(utils.IsAMatcher(context.RequestContext), export_locations[1]), ]) else: mock_export_update.assert_not_called() @ddt.data(exception.ShareSnapshotIsBusy(snapshot_name='fake_name'), exception.NotFound()) def test_delete_snapshot_driver_exception(self, exc): share_id = 'FAKE_SHARE_ID' share = fakes.fake_share(id=share_id, instance={'id': 'fake_id'}, mount_snapshot_support=True) snapshot_instance = fakes.fake_snapshot_instance( share_id=share_id, share=share, name='fake_snapshot') snapshot = fakes.fake_snapshot( share_id=share_id, share=share, instance=snapshot_instance, project_id=self.context.project_id) snapshot_id = snapshot['id'] update_access = self.mock_object( self.share_manager.snapshot_access_helper, 'update_access_rules') self.mock_object(self.share_manager.driver, "delete_snapshot", mock.Mock(side_effect=exc)) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object(self.share_manager.db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object( self.share_manager.db, 'share_get', mock.Mock(return_value=share)) db_update = self.mock_object( self.share_manager.db, 'share_snapshot_instance_update') db_destroy_call = self.mock_object( self.share_manager.db, 'share_snapshot_instance_delete') 
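        # A driver failure while deleting the snapshot should flip the
        # snapshot instance to 'error_deleting', leave the DB row in place
        # and record a user message; the assertions below check each of
        # these outcomes.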
expected_snapshot_instance_dict = self._get_snapshot_instance_dict( snapshot_instance, share) mock_exception_log = self.mock_object(manager.LOG, 'exception') self.assertRaises(type(exc), self.share_manager.delete_snapshot, self.context, snapshot_id) db_update.assert_called_once_with( mock.ANY, snapshot_instance['id'], {'status': constants.STATUS_ERROR_DELETING}) self.share_manager.driver.delete_snapshot.assert_called_once_with( mock.ANY, expected_snapshot_instance_dict, share_server=None) update_access.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot_instance['id'], delete_all_rules=True, share_server=None) self.assertFalse(db_destroy_call.called) self.assertFalse(mock_exception_log.called) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.DELETE, snapshot['project_id'], resource_type=message_field.Resource.SHARE_SNAPSHOT, resource_id=snapshot_instance['id'], exception=mock.ANY) @ddt.data(True, False) def test_delete_snapshot_with_quota_error(self, quota_error): share_id = 'FAKE_SHARE_ID' share = fakes.fake_share(id=share_id) snapshot_instance = fakes.fake_snapshot_instance( share_id=share_id, share=share, name='fake_snapshot') snapshot = fakes.fake_snapshot( share_id=share_id, share=share, instance=snapshot_instance, project_id=self.context.project_id, size=1) snapshot_id = snapshot['id'] self.mock_object(self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager.db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(self.share_manager.db, 'share_get', mock.Mock(return_value=share)) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) mock_exception_log = self.mock_object(manager.LOG, 'exception') expected_exc_count = 1 if quota_error else 0 expected_snapshot_instance_dict = self._get_snapshot_instance_dict( snapshot_instance, share) self.mock_object(self.share_manager.driver, 'delete_snapshot') db_update_call = self.mock_object( self.share_manager.db, 'share_snapshot_instance_update') snapshot_destroy_call = self.mock_object( self.share_manager.db, 'share_snapshot_instance_delete') side_effect = exception.QuotaError(code=500) if quota_error else None self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(side_effect=side_effect)) quota_commit_call = self.mock_object(quota.QUOTAS, 'commit') retval = self.share_manager.delete_snapshot( self.context, snapshot_id) self.assertIsNone(retval) self.share_manager.driver.delete_snapshot.assert_called_once_with( mock.ANY, expected_snapshot_instance_dict, share_server=None) self.assertFalse(db_update_call.called) self.assertTrue(snapshot_destroy_call.called) self.assertTrue(manager.QUOTAS.reserve.called) quota.QUOTAS.reserve.assert_called_once_with( mock.ANY, project_id=self.context.project_id, snapshots=-1, snapshot_gigabytes=-snapshot['size'], user_id=snapshot['user_id'], share_type_id=share['instance']['share_type_id']) self.assertEqual(not quota_error, quota_commit_call.called) self.assertEqual(quota_error, mock_exception_log.called) self.assertEqual(expected_exc_count, mock_exception_log.call_count) @ddt.data(exception.ShareSnapshotIsBusy, exception.ManilaException) def test_delete_snapshot_ignore_exceptions_with_the_force(self, exc): def _raise_quota_error(): raise exception.QuotaError(code='500') share_id = 'FAKE_SHARE_ID' share = fakes.fake_share(id=share_id) snapshot_instance = fakes.fake_snapshot_instance( 
share_id=share_id, share=share, name='fake_snapshot') snapshot = fakes.fake_snapshot( share_id=share_id, share=share, instance=snapshot_instance, project_id=self.context.project_id, size=1) snapshot_id = snapshot['id'] self.mock_object(self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager.db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(self.share_manager.db, 'share_get', mock.Mock(return_value=share)) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) mock_exception_log = self.mock_object(manager.LOG, 'exception') self.mock_object(self.share_manager.driver, 'delete_snapshot', mock.Mock(side_effect=exc)) db_update_call = self.mock_object( self.share_manager.db, 'share_snapshot_instance_update') snapshot_destroy_call = self.mock_object( self.share_manager.db, 'share_snapshot_instance_delete') self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(side_effect=_raise_quota_error)) quota_commit_call = self.mock_object(quota.QUOTAS, 'commit') retval = self.share_manager.delete_snapshot( self.context, snapshot_id, force=True) self.assertIsNone(retval) self.assertEqual(2, mock_exception_log.call_count) snapshot_destroy_call.assert_called_once_with( mock.ANY, snapshot_instance['id']) self.assertFalse(quota_commit_call.called) self.assertFalse(db_update_call.called) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.DELETE, snapshot['project_id'], resource_type=message_field.Resource.SHARE_SNAPSHOT, resource_id=snapshot_instance['id'], exception=mock.ANY) def test_create_share_instance_with_share_network_dhss_false(self): manager.CONF.set_default('driver_handles_share_servers', False) self.mock_object( self.share_manager.driver.configuration, 'safe_get', mock.Mock(return_value=False)) share_network_id = 'fake_sn' share = db_utils.create_share(share_network_id=share_network_id) share_instance = share.instance self.mock_object( self.share_manager.db, 'share_instance_get', mock.Mock(return_value=share_instance)) self.mock_object(self.share_manager.db, 'share_instance_update') self.assertRaisesRegex( exception.ManilaException, '.*%s.*' % share_instance['id'], self.share_manager.create_share_instance, self.context, share_instance['id']) self.share_manager.db.share_instance_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_instance['id'], with_share_data=True ) self.share_manager.db.share_instance_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_instance['id'], {'status': constants.STATUS_ERROR}) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.CREATE, six.text_type(share.project_id), resource_type=message_field.Resource.SHARE, resource_id=share['id'], detail=mock.ANY) def test_create_share_instance_with_share_network_server_not_exists(self): """Test share can be created without share server.""" share_net = db_utils.create_share_network() share_net_subnet = db_utils.create_share_network_subnet( share_network_id=share_net['id'], availability_zone_id=None, ) share = db_utils.create_share(share_network_id=share_net['id']) share_id = share['id'] def fake_setup_server(context, share_network, *args, **kwargs): return db_utils.create_share_server( share_network_subnet_id=share_net_subnet['id'], host='fake_host') self.mock_object(manager.LOG, 'info') 
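        # The stubbed driver and fake_setup_server below simulate a backend
        # that has to provision a brand-new share server for this network;
        # creation should still succeed and log the share instance id.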
self.share_manager.driver.create_share = mock.Mock( return_value='fake_location') self.share_manager._setup_server = fake_setup_server self.share_manager.create_share_instance(self.context, share.instance['id']) self.assertEqual(share_id, db.share_get(context.get_admin_context(), share_id).id) manager.LOG.info.assert_called_with(mock.ANY, share.instance['id']) def test_create_share_instance_with_share_network_server_fail(self): share_network = db_utils.create_share_network(id='fake_sn_id') share_net_subnet = db_utils.create_share_network_subnet( id='fake_sns_id', share_network_id=share_network['id'] ) fake_share = db_utils.create_share( share_network_id=share_network['id'], size=1) fake_server = db_utils.create_share_server( id='fake_srv_id', status=constants.STATUS_CREATING) self.mock_object(db, 'share_server_create', mock.Mock(return_value=fake_server)) self.mock_object(db, 'share_instance_update', mock.Mock(return_value=fake_share.instance)) self.mock_object(db, 'share_instance_get', mock.Mock(return_value=fake_share.instance)) self.mock_object(db, 'share_network_subnet_get_by_availability_zone_id', mock.Mock(return_value=share_net_subnet)) self.mock_object(manager.LOG, 'error') def raise_share_server_not_found(*args, **kwargs): raise exception.ShareServerNotFound( share_server_id=fake_server['id']) def raise_manila_exception(*args, **kwargs): raise exception.ManilaException() self.mock_object(db, 'share_server_get_all_by_host_and_share_subnet_valid', mock.Mock(side_effect=raise_share_server_not_found)) self.mock_object(self.share_manager, '_setup_server', mock.Mock(side_effect=raise_manila_exception)) self.assertRaises( exception.ManilaException, self.share_manager.create_share_instance, self.context, fake_share.instance['id'], ) (db.share_server_get_all_by_host_and_share_subnet_valid. 
assert_called_once_with( utils.IsAMatcher(context.RequestContext), self.share_manager.host, share_net_subnet['id'], )) db.share_server_create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), mock.ANY) db.share_instance_update.assert_has_calls([ mock.call( utils.IsAMatcher(context.RequestContext), fake_share.instance['id'], {'status': constants.STATUS_ERROR}, ) ]) self.share_manager._setup_server.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_server, metadata={'request_host': 'fake_host'}) manager.LOG.error.assert_called_with(mock.ANY, fake_share.instance['id']) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.CREATE, six.text_type(fake_share.project_id), resource_type=message_field.Resource.SHARE, resource_id=fake_share['id'], detail=message_field.Detail.NO_SHARE_SERVER) def test_create_share_instance_with_share_network_subnet_not_found(self): """Test creation fails if share network not found.""" self.mock_object(manager.LOG, 'error') share = db_utils.create_share(share_network_id='fake-net-id') share_id = share['id'] self.assertRaises( exception.ShareNetworkSubnetNotFound, self.share_manager.create_share_instance, self.context, share.instance['id'] ) manager.LOG.error.assert_called_with(mock.ANY, share.instance['id']) shr = db.share_get(self.context, share_id) self.assertEqual(constants.STATUS_ERROR, shr['status']) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.CREATE, six.text_type(shr.project_id), resource_type=message_field.Resource.SHARE, resource_id=shr['id'], detail=message_field.Detail.NO_SHARE_SERVER) def test_create_share_instance_with_share_network_server_exists(self): """Test share can be created with existing share server.""" share_net = db_utils.create_share_network() share_net_subnet = db_utils.create_share_network_subnet( share_network_id=share_net['id'], availability_zone_id=None, ) share = db_utils.create_share(share_network_id=share_net['id']) share_srv = db_utils.create_share_server( share_network_subnet_id=share_net_subnet['id'], host=self.share_manager.host) share_id = share['id'] self.mock_object(manager.LOG, 'info') driver_mock = mock.Mock() driver_mock.create_share.return_value = "fake_location" driver_mock.choose_share_server_compatible_with_share.return_value = ( share_srv ) self.share_manager.driver = driver_mock self.share_manager.create_share_instance(self.context, share.instance['id']) self.assertFalse(self.share_manager.driver.setup_network.called) self.assertEqual(share_id, db.share_get(context.get_admin_context(), share_id).id) shr = db.share_get(self.context, share_id) self.assertEqual(shr['status'], constants.STATUS_AVAILABLE) self.assertEqual(shr['share_server_id'], share_srv['id']) self.assertGreater(len(shr['export_location']), 0) self.assertEqual(1, len(shr['export_locations'])) manager.LOG.info.assert_called_with(mock.ANY, share.instance['id']) @ddt.data('export_location', 'export_locations') def test_create_share_instance_with_error_in_driver(self, details_key): """Test db updates if share creation fails in driver.""" share = db_utils.create_share() share_id = share['id'] some_data = 'fake_location' self.share_manager.driver = mock.Mock() e = exception.ManilaException(detail_data={details_key: some_data}) self.share_manager.driver.create_share.side_effect = e self.assertRaises( exception.ManilaException, self.share_manager.create_share_instance, 
self.context, share.instance['id'] ) self.assertTrue(self.share_manager.driver.create_share.called) shr = db.share_get(self.context, share_id) self.assertEqual(some_data, shr['export_location']) self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.CREATE, six.text_type(share.project_id), resource_type=message_field.Resource.SHARE, resource_id=share['id'], exception=mock.ANY) def test_create_share_instance_with_server_created(self): """Test share can be created and share server is created.""" share_net = db_utils.create_share_network() share_net_subnet = db_utils.create_share_network_subnet( share_network_id=share_net['id'], availability_zone_id=None) share = db_utils.create_share(share_network_id=share_net['id'], availability_zone=None) db_utils.create_share_server( share_network_subnet_id=share_net_subnet['id'], host=self.share_manager.host, status=constants.STATUS_ERROR) share_id = share['id'] fake_server = { 'id': 'fake_srv_id', 'status': constants.STATUS_CREATING, } self.mock_object(db, 'share_server_create', mock.Mock(return_value=fake_server)) self.mock_object(self.share_manager, '_setup_server', mock.Mock(return_value=fake_server)) self.share_manager.create_share_instance(self.context, share.instance['id']) self.assertEqual(share_id, db.share_get(context.get_admin_context(), share_id).id) shr = db.share_get(self.context, share_id) self.assertEqual(constants.STATUS_AVAILABLE, shr['status']) self.assertEqual('fake_srv_id', shr['share_server_id']) db.share_server_create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), mock.ANY) self.share_manager._setup_server.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_server, metadata={'request_host': 'fake_host'}) def test_create_share_instance_update_replica_state(self): share_net = db_utils.create_share_network() share_net_subnet = db_utils.create_share_network_subnet( share_network_id=share_net['id'], availability_zone_id=None ) share = db_utils.create_share(share_network_id=share_net['id'], replication_type='dr', availability_zone=None) db_utils.create_share_server( share_network_subnet_id=share_net_subnet['id'], host=self.share_manager.host, status=constants.STATUS_ERROR) share_id = share['id'] fake_server = { 'id': 'fake_srv_id', 'status': constants.STATUS_CREATING, } self.mock_object(db, 'share_server_create', mock.Mock(return_value=fake_server)) self.mock_object(self.share_manager, '_setup_server', mock.Mock(return_value=fake_server)) self.share_manager.create_share_instance(self.context, share.instance['id']) self.assertEqual(share_id, db.share_get(context.get_admin_context(), share_id).id) shr = db.share_get(self.context, share_id) shr_instances = db.share_instances_get_all_by_share( self.context, shr['id']) self.assertEqual(1, len(shr_instances)) self.assertEqual(constants.STATUS_AVAILABLE, shr['status']) self.assertEqual( constants.REPLICA_STATE_ACTIVE, shr_instances[0]['replica_state']) self.assertEqual('fake_srv_id', shr['share_server_id']) db.share_server_create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), mock.ANY) self.share_manager._setup_server.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_server, metadata={'request_host': 'fake_host'}) @mock.patch('manila.tests.fake_notifier.FakeNotifier._notify') def test_create_delete_share_instance(self, mock_notify): """Test share can be created and deleted.""" share = db_utils.create_share() mock_notify.assert_not_called() 
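        # Nothing may have been notified before the first call; creating and
        # then deleting the instance below must emit the start/end
        # notifications in that order.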
        self.share_manager.create_share_instance(
            self.context, share.instance['id'])
        self.assert_notify_called(mock_notify,
                                  (['INFO', 'share.create.start'],
                                   ['INFO', 'share.create.end']))

        self.share_manager.delete_share_instance(
            self.context, share.instance['id'])
        self.assert_notify_called(mock_notify,
                                  (['INFO', 'share.create.start'],
                                   ['INFO', 'share.create.end'],
                                   ['INFO', 'share.delete.start'],
                                   ['INFO', 'share.delete.end']))

    @ddt.data(True, False)
    def test_create_delete_share_instance_error(self,
                                                exception_update_access):
        """Test share can be created and deleted with error."""

        def _raise_exception(self, *args, **kwargs):
            raise exception.ManilaException('fake')

        self.mock_object(self.share_manager.driver, "create_share",
                         mock.Mock(side_effect=_raise_exception))
        self.mock_object(self.share_manager.driver, "delete_share",
                         mock.Mock(side_effect=_raise_exception))
        if exception_update_access:
            self.mock_object(
                self.share_manager.access_helper, "update_access_rules",
                mock.Mock(side_effect=_raise_exception))

        share = db_utils.create_share()
        share_id = share['id']
        self.assertRaises(exception.ManilaException,
                          self.share_manager.create_share_instance,
                          self.context, share.instance['id'])

        shr = db.share_get(self.context, share_id)
        self.assertEqual(constants.STATUS_ERROR, shr['status'])

        self.assertRaises(exception.ManilaException,
                          self.share_manager.delete_share_instance,
                          self.context, share.instance['id'])

        shr = db.share_get(self.context, share_id)
        self.assertEqual(constants.STATUS_ERROR_DELETING, shr['status'])
        self.share_manager.driver.create_share.assert_called_once_with(
            utils.IsAMatcher(context.RequestContext),
            utils.IsAMatcher(models.ShareInstance),
            share_server=None)
        if not exception_update_access:
            self.share_manager.driver.delete_share.assert_called_once_with(
                utils.IsAMatcher(context.RequestContext),
                utils.IsAMatcher(models.ShareInstance),
                share_server=None)

    def test_create_share_instance_update_availability_zone(self):
        share = db_utils.create_share(availability_zone=None)
        share_id = share['id']
        self.share_manager.create_share_instance(
            self.context, share.instance['id'])
        actual_share = db.share_get(context.get_admin_context(), share_id)
        self.assertIsNotNone(actual_share.availability_zone)
        self.assertEqual(manager.CONF.storage_availability_zone,
                         actual_share.availability_zone)

    def test_provide_share_server_for_share_incompatible_servers(self):
        fake_exception = exception.ManilaException("fake")
        fake_share_network = db_utils.create_share_network(id='fake_sn_id')
        fake_share_net_subnet = db_utils.create_share_network_subnet(
            id='fake_sns_id', share_network_id=fake_share_network['id']
        )
        fake_share_server = db_utils.create_share_server(id='fake')
        share = db_utils.create_share()

        db_method_mock = self.mock_object(
            db, 'share_network_subnet_get_by_availability_zone_id',
            mock.Mock(return_value=fake_share_net_subnet))
        self.mock_object(
            db, 'share_server_get_all_by_host_and_share_subnet_valid',
            mock.Mock(return_value=[fake_share_server]))
        self.mock_object(
            self.share_manager.driver,
            "choose_share_server_compatible_with_share",
            mock.Mock(side_effect=fake_exception)
        )

        self.assertRaises(exception.ManilaException,
                          self.share_manager._provide_share_server_for_share,
                          self.context, fake_share_network['id'],
                          share.instance)

        db_method_mock.assert_called_once_with(
            self.context, fake_share_network['id'],
            availability_zone_id=share.instance.get('availability_zone_id')
        )
        driver_mock = self.share_manager.driver
        driver_method_mock = (
            driver_mock.choose_share_server_compatible_with_share
        )
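        # The driver's compatibility hook must have been handed the full
        # list of candidate servers for the subnet, as checked next.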
driver_method_mock.assert_called_once_with( self.context, [fake_share_server], share.instance, snapshot=None, share_group=None) def test_provide_share_server_for_share_invalid_arguments(self): self.assertRaises(ValueError, self.share_manager._provide_share_server_for_share, self.context, None, None) def test_provide_share_server_for_share_parent_ss_not_found(self): fake_parent_id = "fake_server_id" fake_share_network = db_utils.create_share_network(id='fake_sn_id') fake_exception = exception.ShareServerNotFound("fake") share = db_utils.create_share() fake_snapshot = { 'share': { 'instance': { 'share_server_id': fake_parent_id } } } self.mock_object(db, 'share_server_get', mock.Mock(side_effect=fake_exception)) self.assertRaises(exception.ShareServerNotFound, self.share_manager._provide_share_server_for_share, self.context, fake_share_network['id'], share.instance, snapshot=fake_snapshot) db.share_server_get.assert_called_once_with( self.context, fake_parent_id) def test_provide_share_server_for_share_parent_ss_invalid(self): fake_parent_id = "fake_server_id" fake_share_network = db_utils.create_share_network(id='fake_sn_id') share = db_utils.create_share() fake_snapshot = { 'share': { 'instance': { 'share_server_id': fake_parent_id } } } fake_parent_share_server = {'status': 'fake'} self.mock_object(db, 'share_server_get', mock.Mock(return_value=fake_parent_share_server)) self.assertRaises(exception.InvalidShareServer, self.share_manager._provide_share_server_for_share, self.context, fake_share_network['id'], share.instance, snapshot=fake_snapshot) db.share_server_get.assert_called_once_with( self.context, fake_parent_id) def test_provide_share_server_for_share_group_incompatible_servers(self): fake_exception = exception.ManilaException("fake") fake_share_server = {'id': 'fake'} sg = db_utils.create_share_group() self.mock_object(db, 'share_server_get_all_by_host_and_share_subnet_valid', mock.Mock(return_value=[fake_share_server])) self.mock_object( self.share_manager.driver, "choose_share_server_compatible_with_share_group", mock.Mock(side_effect=fake_exception) ) self.assertRaises( exception.ManilaException, self.share_manager._provide_share_server_for_share_group, self.context, "fake_sn_id", "fake_sns_id", sg) driver_mock = self.share_manager.driver driver_method_mock = ( driver_mock.choose_share_server_compatible_with_share_group) driver_method_mock.assert_called_once_with( self.context, [fake_share_server], sg, share_group_snapshot=None) def test_provide_share_server_for_share_group_invalid_arguments(self): self.assertRaises( exception.InvalidInput, self.share_manager._provide_share_server_for_share_group, self.context, None, None, None) def test_manage_share_driver_exception(self): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False CustomException = type('CustomException', (Exception,), dict()) self.mock_object(self.share_manager.driver, 'manage_existing', mock.Mock(side_effect=CustomException)) self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value='False')) self.mock_object( self.share_manager, '_get_extra_specs_from_share_type', mock.Mock(return_value={})) self.mock_object(self.share_manager.db, 'share_update', mock.Mock()) share = db_utils.create_share() share_id = share['id'] driver_options = {'fake': 'fake'} self.assertRaises( CustomException, self.share_manager.manage_share, self.context, share_id, driver_options) (self.share_manager.driver.manage_existing. 
assert_called_once_with(mock.ANY, driver_options)) self.share_manager.db.share_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_id, {'status': constants.STATUS_MANAGE_ERROR, 'size': 1}) (self.share_manager._get_extra_specs_from_share_type. assert_called_once_with( mock.ANY, share['instance']['share_type_id'])) def test_manage_share_invalid_size(self): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value='False')) self.mock_object(self.share_manager.driver, "manage_existing", mock.Mock(return_value=None)) self.mock_object(self.share_manager.db, 'share_update', mock.Mock()) self.mock_object( self.share_manager, '_get_extra_specs_from_share_type', mock.Mock(return_value={})) share = db_utils.create_share() share_id = share['id'] driver_options = {'fake': 'fake'} self.assertRaises( exception.InvalidShare, self.share_manager.manage_share, self.context, share_id, driver_options) (self.share_manager.driver.manage_existing. assert_called_once_with(mock.ANY, driver_options)) self.share_manager.db.share_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_id, {'status': constants.STATUS_MANAGE_ERROR, 'size': 1}) (self.share_manager._get_extra_specs_from_share_type. assert_called_once_with( mock.ANY, share['instance']['share_type_id'])) def test_manage_share_quota_error(self): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value='False')) self.mock_object(self.share_manager.driver, "manage_existing", mock.Mock(return_value={'size': 3})) self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(side_effect=exception.QuotaError)) self.mock_object(self.share_manager.db, 'share_update', mock.Mock()) self.mock_object( self.share_manager, '_get_extra_specs_from_share_type', mock.Mock(return_value={})) share = db_utils.create_share() share_id = share['id'] driver_options = {'fake': 'fake'} self.assertRaises( exception.QuotaError, self.share_manager.manage_share, self.context, share_id, driver_options) (self.share_manager.driver.manage_existing. assert_called_once_with(mock.ANY, driver_options)) self.share_manager.db.share_update.assert_called_once_with( mock.ANY, share_id, {'status': constants.STATUS_MANAGE_ERROR, 'size': 1}) (self.share_manager._get_extra_specs_from_share_type. assert_called_once_with( mock.ANY, share['instance']['share_type_id'])) def test_manage_share_incompatible_dhss(self): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False share = db_utils.create_share() self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value="True")) self.mock_object( self.share_manager, '_get_extra_specs_from_share_type', mock.Mock(return_value={})) self.assertRaises( exception.InvalidShare, self.share_manager.manage_share, self.context, share['id'], {}) (self.share_manager._get_extra_specs_from_share_type. 
assert_called_once_with( mock.ANY, share['instance']['share_type_id'])) @ddt.data({'dhss': True, 'driver_data': {'size': 1, 'replication_type': None}}, {'dhss': False, 'driver_data': {'size': 2, 'name': 'fake', 'replication_type': 'dr'}}, {'dhss': False, 'driver_data': {'size': 3, 'export_locations': ['foo', 'bar', 'quuz'], 'replication_type': 'writable'}}) @ddt.unpack def test_manage_share_valid_share(self, dhss, driver_data): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = dhss replication_type = driver_data.pop('replication_type') extra_specs = {} if replication_type is not None: extra_specs.update({'replication_type': replication_type}) export_locations = driver_data.get('export_locations') self.mock_object(self.share_manager.db, 'share_update', mock.Mock()) self.mock_object(quota.QUOTAS, 'reserve', mock.Mock()) self.mock_object( self.share_manager.db, 'share_export_locations_update', mock.Mock(side_effect=( self.share_manager.db.share_export_locations_update))) self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value=six.text_type(dhss))) self.mock_object( self.share_manager, '_get_extra_specs_from_share_type', mock.Mock(return_value=extra_specs)) if dhss: mock_manage = self.mock_object( self.share_manager.driver, "manage_existing_with_server", mock.Mock(return_value=driver_data)) else: mock_manage = self.mock_object( self.share_manager.driver, "manage_existing", mock.Mock(return_value=driver_data)) share = db_utils.create_share(replication_type=replication_type) share_id = share['id'] driver_options = {'fake': 'fake'} expected_deltas = { 'project_id': share['project_id'], 'user_id': self.context.user_id, 'shares': 1, 'gigabytes': driver_data['size'], 'share_type_id': share['instance']['share_type_id'], } if replication_type: expected_deltas.update({'share_replicas': 1, 'replica_gigabytes': driver_data['size']}) self.share_manager.manage_share(self.context, share_id, driver_options) if dhss: mock_manage.assert_called_once_with(mock.ANY, driver_options, None) else: mock_manage.assert_called_once_with(mock.ANY, driver_options) if export_locations: (self.share_manager.db.share_export_locations_update. assert_called_once_with( utils.IsAMatcher(context.RequestContext), share.instance['id'], export_locations, delete=True)) else: self.assertFalse( self.share_manager.db.share_export_locations_update.called) valid_share_data = { 'status': constants.STATUS_AVAILABLE, 'launched_at': mock.ANY} if replication_type: valid_share_data['replica_state'] = constants.REPLICA_STATE_ACTIVE valid_share_data.update(driver_data) self.share_manager.db.share_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_id, valid_share_data) quota.QUOTAS.reserve.assert_called_once_with( mock.ANY, **expected_deltas) (self.share_manager._get_extra_specs_from_share_type. 
assert_called_once_with( mock.ANY, share['instance']['share_type_id'])) def test_update_quota_usages_new(self): self.mock_object(self.share_manager.db, 'quota_usage_get', mock.Mock(return_value={'in_use': 1})) self.mock_object(self.share_manager.db, 'quota_usage_update') project_id = 'fake_project_id' resource_name = 'fake' usage = 1 self.share_manager._update_quota_usages( self.context, project_id, {resource_name: usage}) self.share_manager.db.quota_usage_get.assert_called_once_with( mock.ANY, project_id, resource_name, mock.ANY) self.share_manager.db.quota_usage_update.assert_called_once_with( mock.ANY, project_id, mock.ANY, resource_name, in_use=2) def test_update_quota_usages_update(self): project_id = 'fake_project_id' resource_name = 'fake' usage = 1 side_effect = exception.QuotaUsageNotFound(project_id=project_id) self.mock_object( self.share_manager.db, 'quota_usage_get', mock.Mock(side_effect=side_effect)) self.mock_object(self.share_manager.db, 'quota_usage_create') self.share_manager._update_quota_usages( self.context, project_id, {resource_name: usage}) self.share_manager.db.quota_usage_get.assert_called_once_with( mock.ANY, project_id, resource_name, mock.ANY) self.share_manager.db.quota_usage_create.assert_called_once_with( mock.ANY, project_id, mock.ANY, resource_name, usage) def _setup_unmanage_mocks(self, mock_driver=True, mock_unmanage=None, dhss=False, supports_replication=False): if mock_driver: self.mock_object(self.share_manager, 'driver') replicas_list = [] if supports_replication: replicas_list.append({'id': 'fake_id'}) if mock_unmanage: if dhss: self.mock_object( self.share_manager.driver, "unmanage_with_share_server", mock_unmanage) else: self.mock_object(self.share_manager.driver, "unmanage", mock_unmanage) self.mock_object(self.share_manager.db, 'share_update') self.mock_object(self.share_manager.db, 'share_instance_delete') self.mock_object( self.share_manager.db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas_list)) def test_unmanage_share_invalid_share(self): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False unmanage = mock.Mock(side_effect=exception.InvalidShare(reason="fake")) self._setup_unmanage_mocks(mock_driver=False, mock_unmanage=unmanage) share = db_utils.create_share() self.share_manager.unmanage_share(self.context, share['id']) self.share_manager.db.share_update.assert_called_once_with( mock.ANY, share['id'], {'status': constants.STATUS_UNMANAGE_ERROR}) (self.share_manager.db.share_replicas_get_all_by_share. assert_called_once_with(mock.ANY, share['id'])) @ddt.data(True, False) def test_unmanage_share_valid_share(self, supports_replication): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False self._setup_unmanage_mocks( mock_driver=False, mock_unmanage=mock.Mock(), supports_replication=supports_replication) self.mock_object(quota.QUOTAS, 'reserve') share = db_utils.create_share() share_id = share['id'] share_instance_id = share.instance['id'] self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=share.instance)) reservation_params = { 'project_id': share['project_id'], 'shares': -1, 'gigabytes': -share['size'], 'share_type_id': share['instance']['share_type_id'], } if supports_replication: reservation_params.update( {'share_replicas': -1, 'replica_gigabytes': -share['size']}) self.share_manager.unmanage_share(self.context, share_id) (self.share_manager.driver.unmanage. 
assert_called_once_with(share.instance)) self.share_manager.db.share_instance_delete.assert_called_once_with( mock.ANY, share_instance_id) quota.QUOTAS.reserve.assert_called_once_with( mock.ANY, **reservation_params) (self.share_manager.db.share_replicas_get_all_by_share. assert_called_once_with(mock.ANY, share['id'])) @ddt.data(True, False) def test_unmanage_share_valid_share_with_share_server( self, supports_replication): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = True self._setup_unmanage_mocks( mock_driver=False, mock_unmanage=mock.Mock(), dhss=True, supports_replication=supports_replication) server = db_utils.create_share_server(id='fake_server_id') share = db_utils.create_share(share_server_id='fake_server_id') self.mock_object(self.share_manager.db, 'share_server_update') self.mock_object(self.share_manager.db, 'share_server_get', mock.Mock(return_value=server)) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=share.instance)) self.mock_object(quota.QUOTAS, 'reserve') reservation_params = { 'project_id': share['project_id'], 'shares': -1, 'gigabytes': -share['size'], 'share_type_id': share['instance']['share_type_id'], } if supports_replication: reservation_params.update( {'share_replicas': -1, 'replica_gigabytes': -share['size']}) share_id = share['id'] share_instance_id = share.instance['id'] self.share_manager.unmanage_share(self.context, share_id) (self.share_manager.driver.unmanage_with_server. assert_called_once_with(share.instance, server)) self.share_manager.db.share_instance_delete.assert_called_once_with( mock.ANY, share_instance_id) self.share_manager.db.share_server_update.assert_called_once_with( mock.ANY, server['id'], {'is_auto_deletable': False}) quota.QUOTAS.reserve.assert_called_once_with( mock.ANY, **reservation_params) (self.share_manager.db.share_replicas_get_all_by_share .assert_called_once_with(mock.ANY, share['id'])) def test_unmanage_share_valid_share_with_quota_error(self): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False self._setup_unmanage_mocks(mock_driver=False, mock_unmanage=mock.Mock()) self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(side_effect=Exception())) share = db_utils.create_share() share_instance_id = share.instance['id'] self.share_manager.unmanage_share(self.context, share['id']) self.share_manager.driver.unmanage.assert_called_once_with(mock.ANY) self.share_manager.db.share_instance_delete.assert_called_once_with( mock.ANY, share_instance_id) (self.share_manager.db.share_replicas_get_all_by_share. assert_called_once_with(mock.ANY, share['id'])) def test_unmanage_share_remove_access_rules_error(self): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False manager.CONF.unmanage_remove_access_rules = True self._setup_unmanage_mocks(mock_driver=False, mock_unmanage=mock.Mock()) self.mock_object( self.share_manager.access_helper, 'update_access_rules', mock.Mock(side_effect=Exception()) ) self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value=[])) share = db_utils.create_share() self.share_manager.unmanage_share(self.context, share['id']) self.share_manager.db.share_update.assert_called_once_with( mock.ANY, share['id'], {'status': constants.STATUS_UNMANAGE_ERROR}) (self.share_manager.db.share_replicas_get_all_by_share. 
assert_called_once_with(mock.ANY, share['id'])) def test_unmanage_share_valid_share_remove_access_rules(self): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False manager.CONF.unmanage_remove_access_rules = True self._setup_unmanage_mocks(mock_driver=False, mock_unmanage=mock.Mock()) smanager = self.share_manager self.mock_object(smanager.access_helper, 'update_access_rules') self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value=[])) share = db_utils.create_share() share_id = share['id'] share_instance_id = share.instance['id'] smanager.unmanage_share(self.context, share_id) smanager.driver.unmanage.assert_called_once_with(mock.ANY) smanager.access_helper.update_access_rules.assert_called_once_with( mock.ANY, mock.ANY, delete_all_rules=True, share_server=None ) smanager.db.share_instance_delete.assert_called_once_with( mock.ANY, share_instance_id) (self.share_manager.db.share_replicas_get_all_by_share. assert_called_once_with(mock.ANY, share['id'])) def test_delete_share_instance_share_server_not_found(self): share_net = db_utils.create_share_network() share = db_utils.create_share(share_network_id=share_net['id'], share_server_id='fake-id') self.assertRaises( exception.ShareServerNotFound, self.share_manager.delete_share_instance, self.context, share.instance['id'] ) @ddt.data(True, False) def test_delete_share_instance_last_on_srv_with_sec_service( self, with_details): share_net = db_utils.create_share_network() sec_service = db_utils.create_security_service( share_network_id=share_net['id']) backend_details = dict( security_service_ldap=jsonutils.dumps(sec_service)) if with_details: share_srv = db_utils.create_share_server( share_network_id=share_net['id'], host=self.share_manager.host, backend_details=backend_details) else: share_srv = db_utils.create_share_server( share_network_id=share_net['id'], host=self.share_manager.host) db.share_server_backend_details_set( context.get_admin_context(), share_srv['id'], backend_details) share = db_utils.create_share(share_network_id=share_net['id'], share_server_id=share_srv['id']) mock_access_helper_call = self.mock_object( self.share_manager.access_helper, 'update_access_rules') self.share_manager.driver = mock.Mock() manager.CONF.delete_share_server_with_last_share = True self.share_manager.delete_share_instance(self.context, share.instance['id']) mock_access_helper_call.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share.instance['id'], delete_all_rules=True, share_server=mock.ANY) self.share_manager.driver.teardown_server.assert_called_once_with( server_details=backend_details, security_services=[jsonutils.loads( backend_details['security_service_ldap'])]) @ddt.data({'force': True, 'side_effect': 'update_access'}, {'force': True, 'side_effect': 'delete_share'}, {'force': False, 'side_effect': None}) @ddt.unpack def test_delete_share_instance_last_on_server(self, force, side_effect): share_net = db_utils.create_share_network() share_srv = db_utils.create_share_server( share_network_id=share_net['id'], host=self.share_manager.host ) share = db_utils.create_share(share_network_id=share_net['id'], share_server_id=share_srv['id']) share_srv = db.share_server_get(self.context, share_srv['id']) mock_access_helper_call = self.mock_object( self.share_manager.access_helper, 'update_access_rules') self.share_manager.driver = mock.Mock() if side_effect == 'update_access': mock_access_helper_call.side_effect = exception.ManilaException if side_effect == 
'delete_share': self.mock_object(self.share_manager.driver, 'delete_share', mock.Mock(side_effect=Exception('fake'))) self.mock_object(manager.LOG, 'error') manager.CONF.delete_share_server_with_last_share = True self.share_manager.delete_share_instance( self.context, share.instance['id'], force=force) mock_access_helper_call.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share.instance['id'], delete_all_rules=True, share_server=mock.ANY) self.share_manager.driver.teardown_server.assert_called_once_with( server_details=share_srv.get('backend_details'), security_services=[]) self.assertEqual(force, manager.LOG.error.called) def test_delete_share_instance_last_on_server_deletion_disabled(self): share_net = db_utils.create_share_network() share_srv = db_utils.create_share_server( share_network_id=share_net['id'], host=self.share_manager.host ) share = db_utils.create_share(share_network_id=share_net['id'], share_server_id=share_srv['id']) share_srv = db.share_server_get(self.context, share_srv['id']) manager.CONF.delete_share_server_with_last_share = False self.share_manager.driver = mock.Mock() mock_access_helper_call = self.mock_object( self.share_manager.access_helper, 'update_access_rules') self.mock_object(db, 'share_server_get', mock.Mock(return_value=share_srv)) self.share_manager.delete_share_instance(self.context, share.instance['id']) mock_access_helper_call.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share.instance['id'], delete_all_rules=True, share_server=share_srv) self.assertFalse(self.share_manager.driver.teardown_network.called) def test_delete_share_instance_not_last_on_server(self): share_net = db_utils.create_share_network() share_srv = db_utils.create_share_server( share_network_id=share_net['id'], host=self.share_manager.host ) share = db_utils.create_share(share_network_id=share_net['id'], share_server_id=share_srv['id']) db_utils.create_share(share_network_id=share_net['id'], share_server_id=share_srv['id']) share_srv = db.share_server_get(self.context, share_srv['id']) manager.CONF.delete_share_server_with_last_share = True self.share_manager.driver = mock.Mock() self.mock_object(db, 'share_server_get', mock.Mock(return_value=share_srv)) mock_access_helper_call = self.mock_object( self.share_manager.access_helper, 'update_access_rules') self.share_manager.delete_share_instance(self.context, share.instance['id']) mock_access_helper_call.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share.instance['id'], delete_all_rules=True, share_server=share_srv) self.assertFalse(self.share_manager.driver.teardown_network.called) @ddt.data('update_access', 'delete_share') def test_delete_share_instance_not_found(self, side_effect): share_net = db_utils.create_share_network() share_srv = db_utils.create_share_server( share_network_id=share_net['id'], host=self.share_manager.host) share = db_utils.create_share(share_network_id=share_net['id'], share_server_id=share_srv['id']) access = db_utils.create_access(share_id=share['id']) db_utils.create_share(share_network_id=share_net['id'], share_server_id=share_srv['id']) share_srv = db.share_server_get(self.context, share_srv['id']) manager.CONF.delete_share_server_with_last_share = False self.mock_object(db, 'share_server_get', mock.Mock(return_value=share_srv)) self.mock_object(db, 'share_instance_get', mock.Mock(return_value=share.instance)) self.mock_object(db, 'share_access_get_all_for_instance', mock.Mock(return_value=[access])) self.share_manager.driver = mock.Mock() 
        self.share_manager.access_helper.driver = mock.Mock()
        if side_effect == 'update_access':
            mock_access_helper_call = self.mock_object(
                self.share_manager.access_helper, 'update_access_rules',
                mock.Mock(side_effect=exception.ShareResourceNotFound(
                    share_id=share['id'])))
        if side_effect == 'delete_share':
            mock_access_helper_call = self.mock_object(
                self.share_manager.access_helper, 'update_access_rules',
                mock.Mock(return_value=None)
            )
            self.mock_object(
                self.share_manager.driver, 'delete_share',
                mock.Mock(side_effect=exception.ShareResourceNotFound(
                    share_id=share['id'])))
        self.mock_object(manager.LOG, 'warning')

        self.share_manager.delete_share_instance(self.context,
                                                 share.instance['id'])

        self.assertFalse(self.share_manager.driver.teardown_network.called)
        mock_access_helper_call.assert_called_once_with(
            utils.IsAMatcher(context.RequestContext), share.instance['id'],
            delete_all_rules=True, share_server=share_srv)
        self.assertTrue(manager.LOG.warning.called)

    def test_setup_server(self):
        # Setup required test data
        metadata = {'fake_metadata_key': 'fake_metadata_value'}
        share_network = db_utils.create_share_network(id='fake_sn_id')
        share_net_subnet = db_utils.create_share_network_subnet(
            id='fake_sns_id', share_network_id=share_network['id']
        )
        share_server = db_utils.create_share_server(
            id='fake_id', share_network_subnet_id=share_net_subnet['id'])
        network_info = {'security_services': []}
        for ss_type in constants.SECURITY_SERVICES_ALLOWED_TYPES:
            network_info['security_services'].append({
                'name': 'fake_name' + ss_type,
                'ou': 'fake_ou' + ss_type,
                'domain': 'fake_domain' + ss_type,
                'server': 'fake_server' + ss_type,
                'dns_ip': 'fake_dns_ip' + ss_type,
                'user': 'fake_user' + ss_type,
                'type': ss_type,
                'password': 'fake_password' + ss_type,
            })
        sec_services = network_info['security_services']
        server_info = {'fake_server_info_key': 'fake_server_info_value'}
        network_info['network_type'] = 'fake_network_type'

        # mock required stuff
        self.mock_object(self.share_manager.db, 'share_network_subnet_get',
                         mock.Mock(return_value=share_net_subnet))
        self.mock_object(self.share_manager.db, 'share_network_get',
                         mock.Mock(return_value=share_network))
        self.mock_object(self.share_manager.driver, 'allocate_network')
        self.mock_object(self.share_manager, '_form_server_setup_info',
                         mock.Mock(return_value=network_info))
        self.mock_object(self.share_manager, '_validate_segmentation_id')
        self.mock_object(self.share_manager.driver, 'setup_server',
                         mock.Mock(return_value=server_info))
        self.mock_object(self.share_manager.db,
                         'share_server_backend_details_set')
        self.mock_object(self.share_manager.db, 'share_server_update',
                         mock.Mock(return_value=share_server))

        # execute method _setup_server
        result = self.share_manager._setup_server(
            self.context, share_server, metadata=metadata)

        # verify results
        self.assertEqual(share_server, result)
        self.share_manager.db.share_network_get.assert_called_once_with(
            self.context, share_net_subnet['share_network_id'])
        self.share_manager.db.share_network_subnet_get.assert_called_once_with(
            self.context, share_server['share_network_subnet']['id'])
        self.share_manager.driver.allocate_network.assert_called_once_with(
            self.context, share_server, share_network,
            share_server['share_network_subnet'])
        self.share_manager._form_server_setup_info.assert_called_once_with(
            self.context, share_server, share_network, share_net_subnet)
        self.share_manager._validate_segmentation_id.assert_called_once_with(
            network_info)
        self.share_manager.driver.setup_server.assert_called_once_with(
            network_info, metadata=metadata)
(self.share_manager.db.share_server_backend_details_set. assert_has_calls([ mock.call(self.context, share_server['id'], {'security_service_' + sec_services[0]['type']: jsonutils.dumps(sec_services[0])}), mock.call(self.context, share_server['id'], {'security_service_' + sec_services[1]['type']: jsonutils.dumps(sec_services[1])}), mock.call(self.context, share_server['id'], {'security_service_' + sec_services[2]['type']: jsonutils.dumps(sec_services[2])}), mock.call(self.context, share_server['id'], server_info), ])) self.share_manager.db.share_server_update.assert_called_once_with( self.context, share_server['id'], {'status': constants.STATUS_ACTIVE, 'identifier': share_server['id']}) def test_setup_server_server_info_not_present(self): # Setup required test data metadata = {'fake_metadata_key': 'fake_metadata_value'} share_network = {'id': 'fake_sn_id'} share_net_subnet = {'id': 'fake_sns_id', 'share_network_id': share_network['id']} share_server = { 'id': 'fake_id', 'share_network_subnet': share_net_subnet, } network_info = { 'fake_network_info_key': 'fake_network_info_value', 'security_services': [], 'network_type': 'fake_network_type', } server_info = {} # mock required stuff self.mock_object(self.share_manager.db, 'share_network_subnet_get', mock.Mock(return_value=share_net_subnet)) self.mock_object(self.share_manager.db, 'share_network_get', mock.Mock(return_value=share_network)) self.mock_object(self.share_manager, '_form_server_setup_info', mock.Mock(return_value=network_info)) self.mock_object(self.share_manager.driver, 'setup_server', mock.Mock(return_value=server_info)) self.mock_object(self.share_manager.db, 'share_server_update', mock.Mock(return_value=share_server)) self.mock_object(self.share_manager.driver, 'allocate_network') # execute method _setup_server result = self.share_manager._setup_server( self.context, share_server, metadata=metadata) # verify results self.assertEqual(share_server, result) self.share_manager.db.share_network_get.assert_called_once_with( self.context, share_net_subnet['share_network_id']) self.share_manager.db.share_network_subnet_get.assert_called_once_with( self.context, share_server['share_network_subnet']['id']) self.share_manager._form_server_setup_info.assert_called_once_with( self.context, share_server, share_network, share_net_subnet) self.share_manager.driver.setup_server.assert_called_once_with( network_info, metadata=metadata) self.share_manager.db.share_server_update.assert_called_once_with( self.context, share_server['id'], {'status': constants.STATUS_ACTIVE, 'identifier': share_server['id']}) self.share_manager.driver.allocate_network.assert_called_once_with( self.context, share_server, share_network, share_net_subnet) def setup_server_raise_exception(self, detail_data_proper): # Setup required test data server_info = {'details_key': 'value'} share_network = {'id': 'fake_sn_id'} share_net_subnet = {'id': 'fake_sns_id', 'share_network_id': share_network['id']} share_server = { 'id': 'fake_id', 'share_network_subnet': share_net_subnet } network_info = { 'fake_network_info_key': 'fake_network_info_value', 'security_services': [], 'network_type': 'fake_network_type', } if detail_data_proper: detail_data = {'server_details': server_info} self.mock_object(self.share_manager.db, 'share_server_backend_details_set') else: detail_data = 'not dictionary detail data' # Mock required parameters self.mock_object(self.share_manager.db, 'share_network_get', mock.Mock(return_value=share_network)) self.mock_object(self.share_manager.db, 
'share_network_subnet_get', mock.Mock(return_value=share_net_subnet)) self.mock_object(self.share_manager.db, 'share_server_update') for m in ['deallocate_network', 'allocate_network']: self.mock_object(self.share_manager.driver, m) self.mock_object(self.share_manager, '_form_server_setup_info', mock.Mock(return_value=network_info)) self.mock_object(self.share_manager.db, 'share_server_backend_details_set') self.mock_object(self.share_manager.driver, 'setup_server', mock.Mock(side_effect=exception.ManilaException( detail_data=detail_data))) # execute method _setup_server self.assertRaises( exception.ManilaException, self.share_manager._setup_server, self.context, share_server, ) # verify results if detail_data_proper: (self.share_manager.db.share_server_backend_details_set. assert_called_once_with( self.context, share_server['id'], server_info)) self.share_manager._form_server_setup_info.assert_called_once_with( self.context, share_server, share_network, share_net_subnet) self.share_manager.db.share_server_update.assert_called_once_with( self.context, share_server['id'], {'status': constants.STATUS_ERROR}) self.share_manager.db.share_network_get.assert_called_once_with( self.context, share_net_subnet['share_network_id']) self.share_manager.db.share_network_subnet_get.assert_called_once_with( self.context, share_server['share_network_subnet']['id']) self.share_manager.driver.allocate_network.assert_has_calls([ mock.call(self.context, share_server, share_network, share_net_subnet)]) self.share_manager.driver.deallocate_network.assert_has_calls([ mock.call(self.context, share_server['id'])]) def test_setup_server_incorrect_detail_data(self): self.setup_server_raise_exception(detail_data_proper=False) def test_setup_server_exception_in_driver(self): self.setup_server_raise_exception(detail_data_proper=True) @ddt.data({}, {'detail_data': 'fake'}, {'detail_data': {'server_details': 'fake'}}, {'detail_data': {'server_details': {'fake': 'fake'}}}, {'detail_data': { 'server_details': {'fake': 'fake', 'fake2': 'fake2'}}},) def test_setup_server_exception_in_cleanup_after_error(self, data): def get_server_details_from_data(data): d = data.get('detail_data') if not isinstance(d, dict): return {} d = d.get('server_details') if not isinstance(d, dict): return {} return d share_net_subnet = db_utils.create_share_network_subnet( id='fake_subnet_id', share_network_id='fake_share_net_id' ) share_server = db_utils.create_share_server( id='fake', share_network_subnet_id=share_net_subnet['id']) details = get_server_details_from_data(data) exc_mock = mock.Mock(side_effect=exception.ManilaException(**data)) details_mock = mock.Mock(side_effect=exception.ManilaException()) self.mock_object(self.share_manager.db, 'share_network_get', exc_mock) self.mock_object(self.share_manager.db, 'share_server_backend_details_set', details_mock) self.mock_object(self.share_manager.db, 'share_server_update') self.mock_object(self.share_manager.driver, 'deallocate_network') self.mock_object(manager.LOG, 'debug') self.mock_object(manager.LOG, 'warning') self.assertRaises( exception.ManilaException, self.share_manager._setup_server, self.context, share_server, ) self.assertTrue(self.share_manager.db.share_network_get.called) if details: self.assertEqual(len(details), details_mock.call_count) expected = [mock.call(mock.ANY, share_server['id'], {k: v}) for k, v in details.items()] self.assertEqual(expected, details_mock.call_args_list) self.share_manager.db.share_server_update.assert_called_once_with( self.context, 
share_server['id'], {'status': constants.STATUS_ERROR}) self.share_manager.driver.deallocate_network.assert_called_once_with( self.context, share_server['id'] ) self.assertFalse(manager.LOG.warning.called) if get_server_details_from_data(data): self.assertTrue(manager.LOG.debug.called) def test_ensure_share_instance_has_pool_with_only_host(self): fake_share = { 'status': constants.STATUS_AVAILABLE, 'host': 'host1', 'id': 1} host = self.share_manager._ensure_share_instance_has_pool( context.get_admin_context(), fake_share) self.assertIsNone(host) def test_ensure_share_instance_has_pool_with_full_pool_name(self): fake_share = {'host': 'host1#pool0', 'id': 1, 'status': constants.STATUS_AVAILABLE} fake_share_expected_value = 'pool0' host = self.share_manager._ensure_share_instance_has_pool( context.get_admin_context(), fake_share) self.assertEqual(fake_share_expected_value, host) def test_ensure_share_instance_has_pool_unable_to_fetch_share(self): fake_share = {'host': 'host@backend', 'id': 1, 'status': constants.STATUS_AVAILABLE} with mock.patch.object(self.share_manager.driver, 'get_pool', side_effect=Exception): with mock.patch.object(manager, 'LOG') as mock_LOG: self.share_manager._ensure_share_instance_has_pool( context.get_admin_context(), fake_share) self.assertEqual(1, mock_LOG.exception.call_count) def test_ensure_share_instance_pool_notexist_and_get_from_driver(self): fake_share_instance = {'host': 'host@backend', 'id': 1, 'status': constants.STATUS_AVAILABLE} fake_host_expected_value = 'fake_pool' self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.driver, 'get_pool', mock.Mock(return_value='fake_pool')) host = self.share_manager._ensure_share_instance_has_pool( context.get_admin_context(), fake_share_instance) self.share_manager.db.share_instance_update.assert_any_call( mock.ANY, 1, {'host': 'host@backend#fake_pool'}) self.assertEqual(fake_host_expected_value, host) def test__form_server_setup_info(self): def fake_network_allocations_get_for_share_server(*args, **kwargs): if kwargs.get('label') != 'admin': return ['foo', 'bar'] return ['admin-foo', 'admin-bar'] self.mock_object( self.share_manager.db, 'network_allocations_get_for_share_server', mock.Mock( side_effect=fake_network_allocations_get_for_share_server)) fake_share_server = dict( id='fake_share_server_id', backend_details=dict(foo='bar')) fake_share_network = dict( security_services='fake_security_services' ) fake_share_network_subnet = dict( segmentation_id='fake_segmentation_id', cidr='fake_cidr', neutron_net_id='fake_neutron_net_id', neutron_subnet_id='fake_neutron_subnet_id', network_type='fake_network_type') expected = dict( server_id=fake_share_server['id'], segmentation_id=fake_share_network_subnet['segmentation_id'], cidr=fake_share_network_subnet['cidr'], neutron_net_id=fake_share_network_subnet['neutron_net_id'], neutron_subnet_id=fake_share_network_subnet['neutron_subnet_id'], security_services=fake_share_network['security_services'], network_allocations=( fake_network_allocations_get_for_share_server()), admin_network_allocations=( fake_network_allocations_get_for_share_server(label='admin')), backend_details=fake_share_server['backend_details'], network_type=fake_share_network_subnet['network_type']) network_info = self.share_manager._form_server_setup_info( self.context, fake_share_server, fake_share_network, fake_share_network_subnet) self.assertEqual(expected, network_info) (self.share_manager.db.network_allocations_get_for_share_server. 
assert_has_calls([ mock.call(self.context, fake_share_server['id'], label='user'), mock.call(self.context, fake_share_server['id'], label='admin') ])) @ddt.data( {'network_info': {'network_type': 'vlan', 'segmentation_id': '100'}}, {'network_info': {'network_type': 'vlan', 'segmentation_id': '1'}}, {'network_info': {'network_type': 'vlan', 'segmentation_id': '4094'}}, {'network_info': {'network_type': 'vxlan', 'segmentation_id': '100'}}, {'network_info': {'network_type': 'vxlan', 'segmentation_id': '1'}}, {'network_info': {'network_type': 'vxlan', 'segmentation_id': '16777215'}}, {'network_info': {'network_type': 'gre', 'segmentation_id': '100'}}, {'network_info': {'network_type': 'gre', 'segmentation_id': '1'}}, {'network_info': {'network_type': 'gre', 'segmentation_id': '4294967295'}}, {'network_info': {'network_type': 'flat', 'segmentation_id': None}}, {'network_info': {'network_type': 'flat', 'segmentation_id': 0}}, {'network_info': {'network_type': None, 'segmentation_id': None}}, {'network_info': {'network_type': None, 'segmentation_id': 0}}) @ddt.unpack def test_validate_segmentation_id_with_valid_values(self, network_info): self.share_manager._validate_segmentation_id(network_info) @ddt.data( {'network_info': {'network_type': 'vlan', 'segmentation_id': None}}, {'network_info': {'network_type': 'vlan', 'segmentation_id': -1}}, {'network_info': {'network_type': 'vlan', 'segmentation_id': 0}}, {'network_info': {'network_type': 'vlan', 'segmentation_id': '4095'}}, {'network_info': {'network_type': 'vxlan', 'segmentation_id': None}}, {'network_info': {'network_type': 'vxlan', 'segmentation_id': 0}}, {'network_info': {'network_type': 'vxlan', 'segmentation_id': '16777216'}}, {'network_info': {'network_type': 'gre', 'segmentation_id': None}}, {'network_info': {'network_type': 'gre', 'segmentation_id': 0}}, {'network_info': {'network_type': 'gre', 'segmentation_id': '4294967296'}}, {'network_info': {'network_type': 'flat', 'segmentation_id': '1000'}}, {'network_info': {'network_type': None, 'segmentation_id': '1000'}}) @ddt.unpack def test_validate_segmentation_id_with_invalid_values(self, network_info): self.assertRaises(exception.NetworkBadConfigurationException, self.share_manager._validate_segmentation_id, network_info) @ddt.data(10, 36, 60) def test_verify_server_cleanup_interval_valid_cases(self, val): data = dict(DEFAULT=dict(unused_share_server_cleanup_interval=val)) with test_utils.create_temp_config_with_opts(data): manager.ShareManager() @mock.patch.object(db, 'share_server_get_all_unused_deletable', mock.Mock()) @mock.patch.object(manager.ShareManager, 'delete_share_server', mock.Mock()) def test_delete_free_share_servers_cleanup_disabled(self): data = dict(DEFAULT=dict(automatic_share_server_cleanup=False)) with test_utils.create_temp_config_with_opts(data): share_manager = manager.ShareManager() share_manager.driver.initialized = True share_manager.delete_free_share_servers(self.context) self.assertFalse(db.share_server_get_all_unused_deletable.called) @mock.patch.object(db, 'share_server_get_all_unused_deletable', mock.Mock()) @mock.patch.object(manager.ShareManager, 'delete_share_server', mock.Mock()) def test_delete_free_share_servers_driver_handles_ss_disabled(self): data = dict(DEFAULT=dict(driver_handles_share_servers=False)) with test_utils.create_temp_config_with_opts(data): share_manager = manager.ShareManager() share_manager.driver.initialized = True share_manager.delete_free_share_servers(self.context) 
        self.assertFalse(db.share_server_get_all_unused_deletable.called)
        self.assertFalse(share_manager.delete_share_server.called)

    @mock.patch.object(db, 'share_server_get_all_unused_deletable',
                       mock.Mock(return_value=['server1', ]))
    @mock.patch.object(manager.ShareManager, 'delete_share_server',
                       mock.Mock())
    @mock.patch.object(timeutils, 'utcnow', mock.Mock(
        return_value=datetime.timedelta(minutes=20)))
    def test_delete_free_share_servers(self):
        self.share_manager.delete_free_share_servers(self.context)
        db.share_server_get_all_unused_deletable.assert_called_once_with(
            self.context, self.share_manager.host,
            datetime.timedelta(minutes=10))
        self.share_manager.delete_share_server.assert_called_once_with(
            self.context, 'server1')
        timeutils.utcnow.assert_called_once_with()

    @mock.patch('manila.tests.fake_notifier.FakeNotifier._notify')
    def test_extend_share_invalid(self, mock_notify):
        share = db_utils.create_share()
        share_id = share['id']
        reservations = {}

        mock_notify.assert_not_called()

        self.mock_object(self.share_manager, 'driver')
        self.mock_object(self.share_manager.db, 'share_update')
        self.mock_object(quota.QUOTAS, 'rollback')
        self.mock_object(self.share_manager.driver, 'extend_share',
                         mock.Mock(side_effect=Exception('fake')))

        self.assertRaises(
            exception.ShareExtendingError,
            self.share_manager.extend_share, self.context, share_id, 123, {})
        quota.QUOTAS.rollback.assert_called_once_with(
            mock.ANY, reservations,
            project_id=six.text_type(share['project_id']),
            user_id=six.text_type(share['user_id']),
            share_type_id=None,
        )

    @mock.patch('manila.tests.fake_notifier.FakeNotifier._notify')
    def test_extend_share(self, mock_notify):
        share = db_utils.create_share()
        share_id = share['id']
        new_size = 123
        shr_update = {
            'size': int(new_size),
            'status': constants.STATUS_AVAILABLE.lower()
        }
        reservations = {}
        fake_share_server = 'fake'

        mock_notify.assert_not_called()

        manager = self.share_manager
        self.mock_object(manager, 'driver')
        self.mock_object(manager.db, 'share_get',
                         mock.Mock(return_value=share))
        self.mock_object(manager.db, 'share_update',
                         mock.Mock(return_value=share))
        self.mock_object(quota.QUOTAS, 'commit')
        self.mock_object(manager.driver, 'extend_share')
        self.mock_object(manager, '_get_share_server',
                         mock.Mock(return_value=fake_share_server))

        self.share_manager.extend_share(self.context, share_id,
                                        new_size, reservations)

        self.assertTrue(manager._get_share_server.called)
        manager.driver.extend_share.assert_called_once_with(
            utils.IsAMatcher(models.ShareInstance),
            new_size, share_server=fake_share_server
        )
        quota.QUOTAS.commit.assert_called_once_with(
            mock.ANY, reservations, project_id=share['project_id'],
            user_id=share['user_id'], share_type_id=None)
        manager.db.share_update.assert_called_once_with(
            mock.ANY, share_id, shr_update
        )

        self.assert_notify_called(mock_notify,
                                  (['INFO', 'share.extend.start'],
                                   ['INFO', 'share.extend.end']))

    @ddt.data((True, [{'id': 'fake'}]), (False, []))
    @ddt.unpack
    def test_shrink_share_quota_error(self, supports_replication,
                                      replicas_list):
        size = 5
        new_size = 1
        share = db_utils.create_share(size=size)
        share_id = share['id']
        self.mock_object(self.share_manager.db, 'share_update')
        self.mock_object(quota.QUOTAS, 'reserve',
                         mock.Mock(side_effect=Exception('fake')))
        self.mock_object(
            self.share_manager.db, 'share_replicas_get_all_by_share',
            mock.Mock(return_value=replicas_list))
        deltas = {}
        if supports_replication:
            deltas.update({'replica_gigabytes': new_size - size})

        self.assertRaises(
            exception.ShareShrinkingError,
            self.share_manager.shrink_share, self.context, share_id, new_size)
quota.QUOTAS.reserve.assert_called_with( mock.ANY, project_id=six.text_type(share['project_id']), user_id=six.text_type(share['user_id']), share_type_id=None, gigabytes=new_size - size, **deltas ) self.assertTrue(self.share_manager.db.share_update.called) (self.share_manager.db.share_replicas_get_all_by_share .assert_called_once_with(mock.ANY, share['id'])) @ddt.data({'exc': exception.InvalidShare("fake"), 'status': constants.STATUS_SHRINKING_ERROR}, {'exc': exception.ShareShrinkingPossibleDataLoss("fake"), 'status': constants.STATUS_AVAILABLE}) @ddt.unpack def test_shrink_share_invalid(self, exc, status): share = db_utils.create_share() new_size = 1 share_id = share['id'] size_decrease = int(share['size']) - new_size self.mock_object(self.share_manager, 'driver') self.mock_object(self.share_manager.db, 'share_update') self.mock_object(self.share_manager.db, 'share_get', mock.Mock(return_value=share)) self.mock_object(quota.QUOTAS, 'reserve') self.mock_object(quota.QUOTAS, 'rollback') self.mock_object(self.share_manager.driver, 'shrink_share', mock.Mock(side_effect=exc)) self.assertRaises( exception.ShareShrinkingError, self.share_manager.shrink_share, self.context, share_id, new_size) self.share_manager.driver.shrink_share.assert_called_once_with( utils.IsAMatcher(models.ShareInstance), new_size, share_server=None ) self.share_manager.db.share_update.assert_called_once_with( mock.ANY, share_id, {'status': status} ) quota.QUOTAS.reserve.assert_called_once_with( mock.ANY, gigabytes=-size_decrease, project_id=share['project_id'], share_type_id=None, user_id=share['user_id'], ) quota.QUOTAS.rollback.assert_called_once_with( mock.ANY, mock.ANY, project_id=share['project_id'], share_type_id=None, user_id=share['user_id'], ) self.assertTrue(self.share_manager.db.share_get.called) if isinstance(exc, exception.ShareShrinkingPossibleDataLoss): self.share_manager.message_api.create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), message_field.Action.SHRINK, share['project_id'], resource_type=message_field.Resource.SHARE, resource_id=share_id, detail=message_field.Detail.DRIVER_REFUSED_SHRINK) @ddt.data(True, False) def test_shrink_share(self, supports_replication): share = db_utils.create_share() share_id = share['id'] new_size = 123 shr_update = { 'size': int(new_size), 'status': constants.STATUS_AVAILABLE } fake_share_server = 'fake' size_decrease = int(share['size']) - new_size mock_notify = self.mock_object(fake_notifier.FakeNotifier, '_notify') replicas_list = [] if supports_replication: replicas_list.append(share) replicas_list.append({'name': 'fake_replica'}) mock_notify.assert_not_called() manager = self.share_manager self.mock_object(manager, 'driver') self.mock_object(manager.db, 'share_get', mock.Mock(return_value=share)) self.mock_object(manager.db, 'share_update', mock.Mock(return_value=share)) self.mock_object(quota.QUOTAS, 'commit') self.mock_object(quota.QUOTAS, 'reserve') self.mock_object(manager.driver, 'shrink_share') self.mock_object(manager, '_get_share_server', mock.Mock(return_value=fake_share_server)) self.mock_object(manager.db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas_list)) reservation_params = { 'gigabytes': -size_decrease, 'project_id': share['project_id'], 'share_type_id': None, 'user_id': share['user_id'], } if supports_replication: reservation_params.update( {'replica_gigabytes': -size_decrease * 2}) self.share_manager.shrink_share(self.context, share_id, new_size) self.assertTrue(manager._get_share_server.called) 
manager.driver.shrink_share.assert_called_once_with( utils.IsAMatcher(models.ShareInstance), new_size, share_server=fake_share_server ) quota.QUOTAS.reserve.assert_called_once_with( mock.ANY, **reservation_params, ) quota.QUOTAS.commit.assert_called_once_with( mock.ANY, mock.ANY, project_id=share['project_id'], share_type_id=None, user_id=share['user_id'], ) manager.db.share_update.assert_called_once_with( mock.ANY, share_id, shr_update ) self.assert_notify_called(mock_notify, (['INFO', 'share.shrink.start'], ['INFO', 'share.shrink.end'])) (self.share_manager.db.share_replicas_get_all_by_share. assert_called_once_with(mock.ANY, share['id'])) def test_report_driver_status_driver_handles_ss_false(self): fake_stats = {'field': 'val'} fake_pool = {'name': 'pool1'} self.share_manager.last_capabilities = {'field': 'old_val'} self.mock_object(self.share_manager, 'driver', mock.Mock()) driver = self.share_manager.driver driver.get_share_stats = mock.Mock(return_value=fake_stats) self.mock_object(db, 'share_server_get_all_by_host', mock.Mock()) driver.driver_handles_share_servers = False driver.get_share_server_pools = mock.Mock(return_value=fake_pool) self.share_manager._report_driver_status(self.context) driver.get_share_stats.assert_called_once_with( refresh=True) self.assertFalse(db.share_server_get_all_by_host.called) self.assertFalse(driver.get_share_server_pools.called) self.assertEqual(fake_stats, self.share_manager.last_capabilities) def test_report_driver_status_driver_handles_ss(self): fake_stats = {'field': 'val'} fake_ss = {'id': '1234'} fake_pool = {'name': 'pool1'} self.mock_object(self.share_manager, 'driver', mock.Mock()) driver = self.share_manager.driver driver.get_share_stats = mock.Mock(return_value=fake_stats) self.mock_object(db, 'share_server_get_all_by_host', mock.Mock( return_value=[fake_ss])) driver.driver_handles_share_servers = True driver.get_share_server_pools = mock.Mock(return_value=fake_pool) self.share_manager._report_driver_status(self.context) driver.get_share_stats.assert_called_once_with(refresh=True) db.share_server_get_all_by_host.assert_called_once_with( self.context, self.share_manager.host) driver.get_share_server_pools.assert_called_once_with(fake_ss) expected_stats = { 'field': 'val', 'server_pools_mapping': { '1234': fake_pool}, } self.assertEqual(expected_stats, self.share_manager.last_capabilities) def test_report_driver_status_empty_share_stats(self): old_capabilities = {'field': 'old_val'} fake_pool = {'name': 'pool1'} self.share_manager.last_capabilities = old_capabilities self.mock_object(self.share_manager, 'driver', mock.Mock()) driver = self.share_manager.driver driver.get_share_stats = mock.Mock(return_value={}) self.mock_object(db, 'share_server_get_all_by_host', mock.Mock()) driver.driver_handles_share_servers = True driver.get_share_server_pools = mock.Mock(return_value=fake_pool) self.share_manager._report_driver_status(self.context) driver.get_share_stats.assert_called_once_with(refresh=True) self.assertFalse(db.share_server_get_all_by_host.called) self.assertFalse(driver.get_share_server_pools.called) self.assertEqual(old_capabilities, self.share_manager.last_capabilities) def test_create_share_group(self): fake_group = { 'id': 'fake_id', 'availability_zone_id': 'fake_az', } self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.driver, 'create_share_group', 
mock.Mock(return_value=None)) self.share_manager.create_share_group(self.context, "fake_id") self.share_manager.db.share_group_update.assert_called_once_with( mock.ANY, 'fake_id', { 'status': constants.STATUS_AVAILABLE, 'created_at': mock.ANY, 'consistent_snapshot_support': None, 'availability_zone_id': fake_group['availability_zone_id'], } ) def test_create_cg_with_share_network_driver_not_handles_servers(self): manager.CONF.set_default('driver_handles_share_servers', False) self.mock_object( self.share_manager.driver.configuration, 'safe_get', mock.Mock(return_value=False)) cg_id = 'fake_group_id' share_network_id = 'fake_sn' fake_group = {'id': 'fake_id', 'share_network_id': share_network_id} self.mock_object( self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_update') self.assertRaises( exception.ManilaException, self.share_manager.create_share_group, self.context, cg_id) self.share_manager.db.share_group_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), cg_id) self.share_manager.db.share_group_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), cg_id, {'status': constants.STATUS_ERROR}) def test_create_sg_with_share_network_driver_handles_servers(self): manager.CONF.set_default('driver_handles_share_servers', True) self.mock_object( self.share_manager.driver.configuration, 'safe_get', mock.Mock(return_value=True)) share_network_id = 'fake_sn' fake_group = { 'id': 'fake_id', 'share_network_id': share_network_id, 'host': "fake_host", 'availability_zone_id': 'fake_az', } fake_subnet = { 'id': 'fake_subnet_id' } self.mock_object( self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object( self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) self.mock_object( self.share_manager.db, 'share_network_subnet_get_by_availability_zone_id', mock.Mock(return_value=fake_subnet) ) self.mock_object( self.share_manager, '_provide_share_server_for_share_group', mock.Mock(return_value=({}, fake_group))) self.mock_object( self.share_manager.driver, 'create_share_group', mock.Mock(return_value=None)) self.share_manager.create_share_group(self.context, "fake_id") self.share_manager.db.share_group_update.assert_called_once_with( mock.ANY, 'fake_id', { 'status': constants.STATUS_AVAILABLE, 'created_at': mock.ANY, 'consistent_snapshot_support': None, 'availability_zone_id': fake_group['availability_zone_id'], } ) def test_create_share_group_with_update(self): fake_group = { 'id': 'fake_id', 'availability_zone_id': 'fake_az', } self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.driver, 'create_share_group', mock.Mock(return_value={'foo': 'bar'})) self.share_manager.create_share_group(self.context, "fake_id") (self.share_manager.db.share_group_update. 
assert_any_call(mock.ANY, 'fake_id', {'foo': 'bar'})) self.share_manager.db.share_group_update.assert_any_call( mock.ANY, 'fake_id', { 'status': constants.STATUS_AVAILABLE, 'created_at': mock.ANY, 'consistent_snapshot_support': None, 'availability_zone_id': fake_group['availability_zone_id'], } ) def test_create_share_group_with_error(self): fake_group = { 'id': 'fake_id', 'availability_zone_id': 'fake_az', } self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.driver, 'create_share_group', mock.Mock(side_effect=exception.Error)) self.assertRaises(exception.Error, self.share_manager.create_share_group, self.context, "fake_id") self.share_manager.db.share_group_update.assert_called_once_with( mock.ANY, 'fake_id', { 'status': constants.STATUS_ERROR, 'consistent_snapshot_support': None, 'availability_zone_id': fake_group['availability_zone_id'], } ) def test_create_share_group_from_sg_snapshot(self): fake_group = { 'id': 'fake_id', 'source_share_group_snapshot_id': 'fake_snap_id', 'shares': [], 'share_server_id': 'fake_ss_id', 'availability_zone_id': 'fake_az', } fake_sn = {'id': 'fake_sn_id'} fake_sns = {'id': 'fake_sns_id', 'share_network_id': fake_sn['id']} fake_ss = {'id': 'fake_ss_id', 'share_network_subnet': fake_sns} fake_snap = {'id': 'fake_snap_id', 'share_group_snapshot_members': [], 'share_group': {'share_server_id': fake_ss['id']}} self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) self.mock_object(self.share_manager.db, 'share_server_get', mock.Mock( return_value=fake_ss)) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) mock_create_sg_from_sg_snap = self.mock_object( self.share_manager.driver, 'create_share_group_from_share_group_snapshot', mock.Mock(return_value=(None, None))) self.share_manager.create_share_group(self.context, "fake_id") self.share_manager.db.share_group_update.assert_called_once_with( mock.ANY, 'fake_id', {'status': constants.STATUS_AVAILABLE, 'created_at': mock.ANY, 'availability_zone_id': fake_group['availability_zone_id'], 'consistent_snapshot_support': None}) self.share_manager.db.share_server_get(mock.ANY, 'fake_ss_id') mock_create_sg_from_sg_snap.assert_called_once_with( mock.ANY, fake_group, fake_snap, share_server=fake_ss) def test_create_sg_snapshot_share_network_driver_not_handles_servers(self): manager.CONF.set_default('driver_handles_share_servers', False) self.mock_object( self.share_manager.driver.configuration, 'safe_get', mock.Mock(return_value=False)) sg_id = 'fake_share_group_id' share_network_id = 'fake_sn' fake_group = { 'id': 'fake_id', 'source_share_group_snapshot_id': 'fake_snap_id', 'shares': [], 'share_network_id': share_network_id, 'host': "fake_host", } self.mock_object( self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) fake_snap = {'id': 'fake_snap_id', 'share_group_snapshot_members': []} self.mock_object(self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) self.mock_object(self.share_manager.db, 'share_group_update') self.assertRaises(exception.ManilaException, self.share_manager.create_share_group, self.context, sg_id) self.share_manager.db.share_group_get.assert_called_once_with( 
utils.IsAMatcher(context.RequestContext), sg_id) self.share_manager.db.share_group_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), sg_id, {'status': constants.STATUS_ERROR}) def test_create_share_group_from_sg_snapshot_share_network_dhss(self): manager.CONF.set_default('driver_handles_share_servers', True) self.mock_object(self.share_manager.driver.configuration, 'safe_get', mock.Mock(return_value=True)) share_network_id = 'fake_sn' share_network_subnet = { 'id': 'fake_subnet_id' } fake_group = { 'id': 'fake_id', 'source_share_group_snapshot_id': 'fake_snap_id', 'shares': [], 'share_network_id': share_network_id, 'availability_zone_id': 'fake_az', } fake_snap = {'id': 'fake_snap_id', 'share_group_snapshot_members': []} self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_network_subnet_get_by_availability_zone_id', mock.Mock(return_value=share_network_subnet)) self.mock_object( self.share_manager, '_provide_share_server_for_share_group', mock.Mock(return_value=({}, fake_group))) self.mock_object( self.share_manager.driver, 'create_share_group_from_share_group_snapshot', mock.Mock(return_value=(None, None))) self.share_manager.create_share_group(self.context, "fake_id") self.share_manager.db.share_group_update.assert_called_once_with( mock.ANY, 'fake_id', {'status': constants.STATUS_AVAILABLE, 'created_at': mock.ANY, 'consistent_snapshot_support': None, 'availability_zone_id': fake_group['availability_zone_id']}) def test_create_share_group_from_share_group_snapshot_with_update(self): fake_group = { 'id': 'fake_id', 'source_share_group_snapshot_id': 'fake_snap_id', 'shares': [], 'availability_zone_id': 'fake_az', } fake_snap = {'id': 'fake_snap_id', 'share_group_snapshot_members': []} self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.driver, 'create_share_group_from_share_group_snapshot', mock.Mock(return_value=({'foo': 'bar'}, None))) self.share_manager.create_share_group(self.context, "fake_id") self.share_manager.db.share_group_update.assert_any_call( mock.ANY, 'fake_id', {'foo': 'bar'}) self.share_manager.db.share_group_update.assert_any_call( mock.ANY, 'fake_id', { 'status': constants.STATUS_AVAILABLE, 'created_at': mock.ANY, 'consistent_snapshot_support': None, 'availability_zone_id': fake_group['availability_zone_id'], } ) @ddt.data(constants.STATUS_AVAILABLE, constants.STATUS_CREATING_FROM_SNAPSHOT, None) def test_create_share_group_from_sg_snapshot_with_share_update_status( self, share_status): fake_share = {'id': 'fake_share_id'} # if share_status is not None: # fake_share.update({'status': share_status}) fake_export_locations = ['my_export_location'] fake_group = { 'id': 'fake_id', 'source_share_group_snapshot_id': 'fake_snap_id', 'shares': [fake_share], 'availability_zone_id': 'fake_az', } fake_snap = {'id': 'fake_snap_id', 'share_group_snapshot_members': []} self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 
'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) self.mock_object(self.share_manager.db, 'share_group_update') self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.db, 'share_export_locations_update') fake_share_update = {'id': fake_share['id'], 'foo': 'bar', 'export_locations': fake_export_locations} if share_status is not None: fake_share_update.update({'status': share_status}) self.mock_object(self.share_manager.driver, 'create_share_group_from_share_group_snapshot', mock.Mock(return_value=(None, [fake_share_update]))) self.share_manager.create_share_group(self.context, "fake_id") exp_progress = ( '0%' if share_status == constants.STATUS_CREATING_FROM_SNAPSHOT else '100%') self.share_manager.db.share_instance_update.assert_any_call( mock.ANY, 'fake_share_id', {'foo': 'bar', 'status': share_status or constants.STATUS_AVAILABLE, 'progress': exp_progress}) self.share_manager.db.share_export_locations_update.assert_any_call( mock.ANY, 'fake_share_id', fake_export_locations) self.share_manager.db.share_group_update.assert_any_call( mock.ANY, 'fake_id', { 'status': constants.STATUS_AVAILABLE, 'created_at': mock.ANY, 'consistent_snapshot_support': None, 'availability_zone_id': fake_group['availability_zone_id'], } ) def test_create_share_group_from_sg_snapshot_with_error(self): fake_group = { 'id': 'fake_id', 'source_share_group_snapshot_id': 'fake_snap_id', 'shares': [], 'availability_zone_id': 'fake_az', } fake_snap = {'id': 'fake_snap_id', 'share_group_snapshot_members': []} self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) self.mock_object(self.share_manager.db, 'share_instances_get_all_by_share_group_id', mock.Mock(return_value=[])) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.driver, 'create_share_group_from_share_group_snapshot', mock.Mock(side_effect=exception.Error)) self.assertRaises(exception.Error, self.share_manager.create_share_group, self.context, "fake_id") self.share_manager.db.share_group_update.assert_called_once_with( mock.ANY, 'fake_id', { 'status': constants.STATUS_ERROR, 'consistent_snapshot_support': None, 'availability_zone_id': fake_group['availability_zone_id'], } ) def test_create_share_group_from_sg_snapshot_with_invalid_status(self): fake_share = {'id': 'fake_share_id', 'status': constants.STATUS_CREATING} fake_export_locations = ['my_export_location'] fake_group = { 'id': 'fake_id', 'source_share_group_snapshot_id': 'fake_snap_id', 'shares': [fake_share], 'availability_zone_id': 'fake_az', } fake_snap = {'id': 'fake_snap_id', 'share_group_snapshot_members': []} self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) self.mock_object(self.share_manager.db, 'share_instances_get_all_by_share_group_id', mock.Mock(return_value=[])) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) fake_share_update_list = [{'id': fake_share['id'], 'status': fake_share['status'], 'foo': 'bar', 'export_locations': fake_export_locations}] self.mock_object(self.share_manager.driver, 'create_share_group_from_share_group_snapshot', mock.Mock( return_value=(None, fake_share_update_list))) 
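        # The mocked driver reports a member still in 'creating' state; such a
        # transitional status is not a valid result for a share restored from
        # a group snapshot, so the manager is expected to raise
        # InvalidShareInstance and, as asserted below, leave the group in
        # 'error'.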
self.assertRaises(exception.InvalidShareInstance, self.share_manager.create_share_group, self.context, "fake_id") self.share_manager.db.share_group_update.assert_called_once_with( mock.ANY, 'fake_id', { 'status': constants.STATUS_ERROR, 'consistent_snapshot_support': None, 'availability_zone_id': fake_group['availability_zone_id'], } ) def test_create_share_group_from_sg_snapshot_with_share_error(self): fake_share = {'id': 'fake_share_id'} fake_group = { 'id': 'fake_id', 'source_share_group_snapshot_id': 'fake_snap_id', 'shares': [fake_share], 'availability_zone_id': 'fake_az', } fake_snap = {'id': 'fake_snap_id', 'share_group_snapshot_members': []} self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) self.mock_object(self.share_manager.db, 'share_instances_get_all_by_share_group_id', mock.Mock(return_value=[fake_share])) self.mock_object(self.share_manager.db, 'share_group_update') self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.driver, 'create_share_group_from_share_group_snapshot', mock.Mock(side_effect=exception.Error)) self.assertRaises(exception.Error, self.share_manager.create_share_group, self.context, "fake_id") self.share_manager.db.share_instance_update.assert_any_call( mock.ANY, 'fake_share_id', {'status': constants.STATUS_ERROR}) self.share_manager.db.share_group_update.assert_called_once_with( mock.ANY, 'fake_id', { 'status': constants.STATUS_ERROR, 'consistent_snapshot_support': None, 'availability_zone_id': fake_group['availability_zone_id'], } ) def test_delete_share_group(self): fake_group = {'id': 'fake_id'} self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_destroy', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.driver, 'delete_share_group', mock.Mock(return_value=None)) self.share_manager.delete_share_group(self.context, "fake_id") self.share_manager.db.share_group_destroy.assert_called_once_with( mock.ANY, 'fake_id') def test_delete_share_group_with_update(self): fake_group = {'id': 'fake_id'} self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_destroy', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.driver, 'delete_share_group', mock.Mock(return_value={'foo': 'bar'})) self.share_manager.delete_share_group(self.context, "fake_id") self.share_manager.db.share_group_update.assert_called_once_with( mock.ANY, 'fake_id', {'foo': 'bar'}) self.share_manager.db.share_group_destroy.assert_called_once_with( mock.ANY, 'fake_id') def test_delete_share_group_with_error(self): fake_group = {'id': 'fake_id'} self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.db, 'share_group_update', mock.Mock(return_value=fake_group)) self.mock_object(self.share_manager.driver, 'delete_share_group', mock.Mock(side_effect=exception.Error)) self.assertRaises(exception.Error, self.share_manager.delete_share_group, self.context, "fake_id") 
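        # The driver failure must surface as an exception, and the group
        # record is still expected to be updated to 'error' (asserted below)
        # so the failed deletion is visible through the API.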
self.share_manager.db.share_group_update.assert_called_once_with( mock.ANY, 'fake_id', {'status': constants.STATUS_ERROR}) def test_create_share_group_snapshot(self): fake_snap = { 'id': 'fake_snap_id', 'share_group': {}, 'share_group_snapshot_members': [], } self.mock_object( self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) mock_sg_snap_update = self.mock_object( self.share_manager.db, 'share_group_snapshot_update', mock.Mock(return_value=fake_snap)) self.mock_object( self.share_manager.driver, 'create_share_group_snapshot', mock.Mock(return_value=(None, None))) self.share_manager.create_share_group_snapshot( self.context, fake_snap['id']) mock_sg_snap_update.assert_called_once_with( mock.ANY, fake_snap['id'], {'status': constants.STATUS_AVAILABLE, 'updated_at': mock.ANY}) def test_create_share_group_snapshot_with_update(self): fake_snap = {'id': 'fake_snap_id', 'share_group': {}, 'share_group_snapshot_members': []} self.mock_object(self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) self.mock_object(self.share_manager.db, 'share_group_snapshot_update', mock.Mock(return_value=fake_snap)) self.mock_object(self.share_manager.driver, 'create_share_group_snapshot', mock.Mock(return_value=({'foo': 'bar'}, None))) self.share_manager.create_share_group_snapshot( self.context, fake_snap['id']) self.share_manager.db.share_group_snapshot_update.assert_any_call( mock.ANY, 'fake_snap_id', {'foo': 'bar'}) self.share_manager.db.share_group_snapshot_update.assert_any_call( mock.ANY, fake_snap['id'], {'status': constants.STATUS_AVAILABLE, 'updated_at': mock.ANY}) def test_create_share_group_snapshot_with_member_update(self): fake_member1 = {'id': 'fake_member_id_1', 'share_instance_id': 'si_1'} fake_member2 = {'id': 'fake_member_id_2', 'share_instance_id': 'si_2'} fake_member3 = {'id': 'fake_member_id_3', 'share_instance_id': 'si_3'} fake_member_update1 = { 'id': fake_member1['id'], 'provider_location': 'fake_provider_location_1', 'size': 13, 'export_locations': ['fake_el_1_1', 'fake_el_1_2'], 'should_not_be_used_k1': 'should_not_be_used_v1', } fake_member_update2 = { 'id': fake_member2['id'], 'provider_location': 'fake_provider_location_2', 'size': 31, 'export_locations': ['fake_el_2_1', 'fake_el_2_2'], 'status': 'fake_status_for_update', 'should_not_be_used_k2': 'should_not_be_used_k2', } fake_member_update3 = { 'provider_location': 'fake_provider_location_3', 'size': 42, 'export_locations': ['fake_el_3_1', 'fake_el_3_2'], 'should_not_be_used_k3': 'should_not_be_used_k3', } expected_member_update1 = { 'id': fake_member_update1['id'], 'provider_location': fake_member_update1['provider_location'], 'size': fake_member_update1['size'], } expected_member_update2 = { 'id': fake_member_update2['id'], 'provider_location': fake_member_update2['provider_location'], 'size': fake_member_update2['size'], 'status': fake_member_update2['status'], } fake_snap = { 'id': 'fake_snap_id', 'share_group': {}, 'share_group_snapshot_members': [ fake_member1, fake_member2, fake_member3], } self.mock_object( self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) mock_sg_snapshot_update = self.mock_object( self.share_manager.db, 'share_group_snapshot_update', mock.Mock(return_value=fake_snap)) mock_sg_snapshot_member_update = self.mock_object( self.share_manager.db, 'share_group_snapshot_member_update') self.mock_object( self.share_manager.db, 'share_instance_get', mock.Mock(return_value={'id': 'blah'})) self.mock_object( 
timeutils, 'utcnow', mock.Mock(side_effect=range(1, 10))) mock_driver_create_sg_snapshot = self.mock_object( self.share_manager.driver, 'create_share_group_snapshot', mock.Mock(return_value=( None, [fake_member_update1, fake_member_update2, fake_member_update3]))) self.share_manager.create_share_group_snapshot( self.context, fake_snap['id']) mock_driver_create_sg_snapshot.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_snap, share_server=None) mock_sg_snapshot_update.assert_called_once_with( mock.ANY, fake_snap['id'], {'status': constants.STATUS_AVAILABLE, 'updated_at': mock.ANY}) mock_sg_snapshot_member_update.assert_has_calls([ mock.call( utils.IsAMatcher(context.RequestContext), expected_member_update1['id'], {'provider_location': expected_member_update1[ 'provider_location'], 'size': expected_member_update1['size'], 'updated_at': 1, 'status': manager.constants.STATUS_AVAILABLE}), mock.call( utils.IsAMatcher(context.RequestContext), expected_member_update2['id'], {'provider_location': expected_member_update2[ 'provider_location'], 'size': expected_member_update2['size'], 'updated_at': 1, 'status': expected_member_update2['status']}), ]) def test_create_group_snapshot_with_error(self): fake_snap = {'id': 'fake_snap_id', 'share_group': {}, 'share_group_snapshot_members': []} self.mock_object( self.share_manager.db, 'share_group_snapshot_get', mock.Mock(return_value=fake_snap)) mock_sg_snap_update = self.mock_object( self.share_manager.db, 'share_group_snapshot_update', mock.Mock(return_value=fake_snap)) self.mock_object( self.share_manager.driver, 'create_share_group_snapshot', mock.Mock(side_effect=exception.Error)) self.assertRaises( exception.Error, self.share_manager.create_share_group_snapshot, self.context, fake_snap['id']) mock_sg_snap_update.assert_called_once_with( mock.ANY, fake_snap['id'], {'status': constants.STATUS_ERROR}) def test_connection_get_info(self): share_instance = {'share_server_id': 'fake_server_id'} share_instance_id = 'fake_id' share_server = 'fake_share_server' connection_info = 'fake_info' # mocks self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=share_instance)) self.mock_object(self.share_manager.db, 'share_server_get', mock.Mock(return_value=share_server)) self.mock_object(self.share_manager.driver, 'connection_get_info', mock.Mock(return_value=connection_info)) # run result = self.share_manager.connection_get_info( self.context, share_instance_id) # asserts self.assertEqual(connection_info, result) self.share_manager.db.share_instance_get.assert_called_once_with( self.context, share_instance_id, with_share_data=True) self.share_manager.driver.connection_get_info.assert_called_once_with( self.context, share_instance, share_server) @ddt.data(True, False) def test_migration_start(self, success): instance = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_AVAILABLE, share_server_id='fake_server_id', host='fake@backend#pool') share = db_utils.create_share(id='fake_id', instances=[instance]) fake_service = {'availability_zone_id': 'fake_az_id'} host = 'fake2@backend#pool' # mocks self.mock_object(self.share_manager.db, 'share_get', mock.Mock(return_value=share)) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=instance)) self.mock_object(self.share_manager.db, 'share_update') self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager, '_migration_start_driver', mock.Mock(return_value=success)) 
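        # _migration_start_driver is stubbed to report success or failure;
        # when it returns False, migration_start is expected to fall back to
        # the host-assisted path, which is mocked below only for that case.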
self.mock_object(self.share_manager.db, 'service_get_by_args', mock.Mock(return_value=fake_service)) if not success: self.mock_object( self.share_manager, '_migration_start_host_assisted') # run self.share_manager.migration_start( self.context, 'fake_id', host, False, False, False, False, False, 'fake_net_id', 'fake_type_id') # asserts self.share_manager.db.share_get.assert_called_once_with( self.context, share['id']) self.share_manager.db.share_instance_get.assert_called_once_with( self.context, instance['id'], with_share_data=True) share_update_calls = [ mock.call( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_IN_PROGRESS}), ] if not success: share_update_calls.append(mock.call( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_IN_PROGRESS})) self.share_manager.db.share_update.assert_has_calls(share_update_calls) self.share_manager._migration_start_driver.assert_called_once_with( self.context, share, instance, host, False, False, False, False, 'fake_net_id', 'fake_az_id', 'fake_type_id') if not success: (self.share_manager._migration_start_host_assisted. assert_called_once_with( self.context, share, instance, host, 'fake_net_id', 'fake_az_id', 'fake_type_id')) self.share_manager.db.service_get_by_args.assert_called_once_with( self.context, 'fake2@backend', 'manila-share') @ddt.data({'writable': False, 'preserve_metadata': False, 'nondisruptive': False, 'preserve_snapshots': True, 'has_snapshots': False}, {'writable': False, 'preserve_metadata': False, 'nondisruptive': True, 'preserve_snapshots': False, 'has_snapshots': False}, {'writable': False, 'preserve_metadata': True, 'nondisruptive': False, 'preserve_snapshots': False, 'has_snapshots': False}, {'writable': True, 'preserve_metadata': False, 'nondisruptive': False, 'preserve_snapshots': False, 'has_snapshots': False}, {'writable': False, 'preserve_metadata': False, 'nondisruptive': False, 'preserve_snapshots': False, 'has_snapshots': True} ) @ddt.unpack def test_migration_start_prevent_host_assisted( self, writable, preserve_metadata, nondisruptive, preserve_snapshots, has_snapshots): share = db_utils.create_share() instance = share.instance host = 'fake@backend#pool' fake_service = {'availability_zone_id': 'fake_az_id'} if has_snapshots: snapshot = db_utils.create_snapshot(share_id=share['id']) self.mock_object( self.share_manager.db, 'share_snapshot_get_all_for_share', mock.Mock(return_value=[snapshot])) # mocks self.mock_object(self.share_manager, '_reset_read_only_access_rules') self.mock_object(self.share_manager.db, 'service_get_by_args', mock.Mock(return_value=fake_service)) self.mock_object(self.share_manager.db, 'share_update') self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.db, 'share_get', mock.Mock(return_value=share)) # run self.assertRaises( exception.ShareMigrationFailed, self.share_manager.migration_start, self.context, 'share_id', host, True, writable, preserve_metadata, nondisruptive, preserve_snapshots, 'fake_net_id') self.share_manager.db.share_update.assert_has_calls([ mock.call( self.context, 'share_id', {'task_state': constants.TASK_STATE_MIGRATION_IN_PROGRESS}), mock.call( self.context, 'share_id', {'task_state': constants.TASK_STATE_MIGRATION_ERROR}), ]) self.share_manager.db.share_instance_update.assert_called_once_with( self.context, instance['id'], {'status': constants.STATUS_AVAILABLE}) self.share_manager.db.share_get.assert_called_once_with( self.context, 'share_id') 
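        # The destination service lookup asserted below uses the backend part
        # of the pool-qualified host ('fake@backend#pool' -> 'fake@backend')
        # to resolve the target availability zone.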
self.share_manager.db.service_get_by_args.assert_called_once_with( self.context, 'fake@backend', 'manila-share') (self.share_manager._reset_read_only_access_rules. assert_called_once_with(self.context, share, instance['id'])) def test_migration_start_exception(self): instance = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_AVAILABLE, share_server_id='fake_server_id', host='fake@backend#pool') share = db_utils.create_share(id='fake_id', instances=[instance]) host = 'fake2@backend#pool' fake_service = {'availability_zone_id': 'fake_az_id'} # mocks self.mock_object(self.share_manager.db, 'service_get_by_args', mock.Mock(return_value=fake_service)) self.mock_object(self.share_manager.db, 'share_get', mock.Mock(return_value=share)) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=instance)) self.mock_object(self.share_manager.db, 'share_update') self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager, '_migration_start_driver', mock.Mock(side_effect=Exception('fake_exc_1'))) self.mock_object(self.share_manager, '_migration_start_host_assisted', mock.Mock(side_effect=Exception('fake_exc_2'))) self.mock_object(self.share_manager, '_reset_read_only_access_rules') # run self.assertRaises( exception.ShareMigrationFailed, self.share_manager.migration_start, self.context, 'fake_id', host, False, False, False, False, False, 'fake_net_id', 'fake_type_id') # asserts self.share_manager.db.share_get.assert_called_once_with( self.context, share['id']) self.share_manager.db.share_instance_get.assert_called_once_with( self.context, instance['id'], with_share_data=True) share_update_calls = [ mock.call( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_IN_PROGRESS}), mock.call( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_ERROR}) ] (self.share_manager._reset_read_only_access_rules. 
assert_called_once_with(self.context, share, instance['id'])) self.share_manager.db.share_update.assert_has_calls(share_update_calls) self.share_manager.db.share_instance_update.assert_called_once_with( self.context, instance['id'], {'status': constants.STATUS_AVAILABLE}) self.share_manager._migration_start_driver.assert_called_once_with( self.context, share, instance, host, False, False, False, False, 'fake_net_id', 'fake_az_id', 'fake_type_id') self.share_manager.db.service_get_by_args.assert_called_once_with( self.context, 'fake2@backend', 'manila-share') @ddt.data(None, Exception('fake')) def test__migration_start_host_assisted(self, exc): instance = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_AVAILABLE, share_server_id='fake_server_id') new_instance = db_utils.create_share_instance( share_id='new_fake_id', status=constants.STATUS_AVAILABLE) share = db_utils.create_share(id='fake_id', instances=[instance]) server = 'share_server' src_connection_info = 'src_fake_info' dest_connection_info = 'dest_fake_info' instance_updates = [ mock.call( self.context, instance['id'], {'cast_rules_to_readonly': True}) ] # mocks helper = mock.Mock() self.mock_object(migration_api, 'ShareMigrationHelper', mock.Mock(return_value=helper)) self.mock_object(helper, 'cleanup_new_instance') self.mock_object(self.share_manager.db, 'share_server_get', mock.Mock(return_value=server)) self.mock_object(self.share_manager.db, 'share_instance_update', mock.Mock(return_value=server)) self.mock_object(self.share_manager.access_helper, 'get_and_update_share_instance_access_rules') self.mock_object(self.share_manager.access_helper, 'update_access_rules') self.mock_object(utils, 'wait_for_access_update') if exc is None: self.mock_object(helper, 'create_instance_and_wait', mock.Mock(return_value=new_instance)) self.mock_object(self.share_manager.driver, 'connection_get_info', mock.Mock(return_value=src_connection_info)) self.mock_object(rpcapi.ShareAPI, 'connection_get_info', mock.Mock(return_value=dest_connection_info)) self.mock_object(data_rpc.DataAPI, 'migration_start', mock.Mock(side_effect=Exception('fake'))) self.mock_object(helper, 'cleanup_new_instance') instance_updates.append( mock.call(self.context, new_instance['id'], {'status': constants.STATUS_MIGRATING_TO})) else: self.mock_object(helper, 'create_instance_and_wait', mock.Mock(side_effect=exc)) # run self.assertRaises( exception.ShareMigrationFailed, self.share_manager._migration_start_host_assisted, self.context, share, instance, 'fake_host', 'fake_net_id', 'fake_az_id', 'fake_type_id') # asserts self.share_manager.db.share_server_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), instance['share_server_id']) (self.share_manager.access_helper.update_access_rules. assert_called_once_with( self.context, instance['id'], share_server=server)) helper.create_instance_and_wait.assert_called_once_with( share, 'fake_host', 'fake_net_id', 'fake_az_id', 'fake_type_id') utils.wait_for_access_update.assert_called_once_with( self.context, self.share_manager.db, instance, self.share_manager.migration_wait_access_rules_timeout) if exc is None: (self.share_manager.driver.connection_get_info. 
assert_called_once_with(self.context, instance, server)) rpcapi.ShareAPI.connection_get_info.assert_called_once_with( self.context, new_instance) data_rpc.DataAPI.migration_start.assert_called_once_with( self.context, share['id'], ['lost+found'], instance['id'], new_instance['id'], src_connection_info, dest_connection_info) helper.cleanup_new_instance.assert_called_once_with(new_instance) @ddt.data({'share_network_id': 'fake_net_id', 'exc': None, 'has_snapshots': True}, {'share_network_id': None, 'exc': Exception('fake'), 'has_snapshots': True}, {'share_network_id': None, 'exc': None, 'has_snapshots': False}) @ddt.unpack def test__migration_start_driver( self, exc, share_network_id, has_snapshots): fake_dest_host = 'fake_host' src_server = db_utils.create_share_server() if share_network_id: dest_server = db_utils.create_share_server() else: dest_server = None share = db_utils.create_share( id='fake_id', share_server_id='fake_src_server_id', share_network_id=share_network_id) migrating_instance = db_utils.create_share_instance( share_id='fake_id', share_network_id=share_network_id) if has_snapshots: snapshot = db_utils.create_snapshot( status=(constants.STATUS_AVAILABLE if not exc else constants.STATUS_ERROR), share_id=share['id']) migrating_snap_instance = db_utils.create_snapshot( status=constants.STATUS_MIGRATING, share_id=share['id']) dest_snap_instance = db_utils.create_snapshot_instance( status=constants.STATUS_AVAILABLE, snapshot_id=snapshot['id'], share_instance_id=migrating_instance['id']) snapshot_mappings = {snapshot.instance['id']: dest_snap_instance} else: snapshot_mappings = {} src_instance = share.instance compatibility = { 'compatible': True, 'writable': False, 'preserve_metadata': False, 'nondisruptive': False, 'preserve_snapshots': has_snapshots, } # mocks self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=migrating_instance)) self.mock_object(self.share_manager.db, 'share_server_get', mock.Mock(return_value=src_server)) self.mock_object(self.share_manager.driver, 'migration_check_compatibility', mock.Mock(return_value=compatibility)) self.mock_object( api.API, 'create_share_instance_and_get_request_spec', mock.Mock(return_value=({}, migrating_instance))) self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.db, 'share_update') self.mock_object(rpcapi.ShareAPI, 'provide_share_server', mock.Mock(return_value='fake_dest_share_server_id')) self.mock_object(rpcapi.ShareAPI, 'create_share_server') self.mock_object( migration_api.ShareMigrationHelper, 'wait_for_share_server', mock.Mock(return_value=dest_server)) self.mock_object( self.share_manager.db, 'share_snapshot_get_all_for_share', mock.Mock(return_value=[snapshot] if has_snapshots else [])) if has_snapshots: self.mock_object( self.share_manager.db, 'share_snapshot_instance_create', mock.Mock(return_value=dest_snap_instance)) self.mock_object( self.share_manager.db, 'share_snapshot_instance_update') self.mock_object( self.share_manager.db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=[migrating_snap_instance])) self.mock_object(self.share_manager.driver, 'migration_start') self.mock_object(self.share_manager, '_migration_delete_instance') self.mock_object(self.share_manager.access_helper, 'update_access_rules') self.mock_object(utils, 'wait_for_access_update') # run if exc: self.assertRaises( exception.ShareMigrationFailed, self.share_manager._migration_start_driver, self.context, share, src_instance, fake_dest_host, 
False, False, False, False, share_network_id, 'fake_az_id', 'fake_type_id') else: result = self.share_manager._migration_start_driver( self.context, share, src_instance, fake_dest_host, False, False, False, False, share_network_id, 'fake_az_id', 'fake_type_id') # asserts if not exc: self.assertTrue(result) self.share_manager.db.share_update.assert_has_calls([ mock.call( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_DRIVER_STARTING}), mock.call( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS}) ]) (self.share_manager.db.share_instance_update.assert_has_calls([ mock.call(self.context, migrating_instance['id'], {'status': constants.STATUS_MIGRATING_TO}), mock.call(self.context, src_instance['id'], {'cast_rules_to_readonly': True})])) (self.share_manager.access_helper.update_access_rules. assert_called_once_with( self.context, src_instance['id'], share_server=src_server)) self.share_manager.driver.migration_start.assert_called_once_with( self.context, src_instance, migrating_instance, [snapshot.instance] if has_snapshots else [], snapshot_mappings, src_server, dest_server) self.share_manager.db.share_instance_get.assert_called_once_with( self.context, migrating_instance['id'], with_share_data=True) self.share_manager.db.share_server_get.assert_called_once_with( self.context, 'fake_src_server_id') (api.API.create_share_instance_and_get_request_spec. assert_called_once_with(self.context, share, 'fake_az_id', None, 'fake_host', share_network_id, 'fake_type_id')) (self.share_manager.driver.migration_check_compatibility. assert_called_once_with(self.context, src_instance, migrating_instance, src_server, dest_server)) (self.share_manager.db.share_snapshot_get_all_for_share. assert_called_once_with(self.context, share['id'])) if share_network_id: (rpcapi.ShareAPI.provide_share_server. assert_called_once_with( self.context, migrating_instance, share_network_id)) rpcapi.ShareAPI.create_share_server.assert_called_once_with( self.context, migrating_instance, 'fake_dest_share_server_id') (migration_api.ShareMigrationHelper.wait_for_share_server. assert_called_once_with('fake_dest_share_server_id')) if exc: (self.share_manager._migration_delete_instance. assert_called_once_with(self.context, migrating_instance['id'])) if has_snapshots: (self.share_manager.db.share_snapshot_instance_update. assert_called_once_with( self.context, migrating_snap_instance['id'], {'status': constants.STATUS_AVAILABLE})) (self.share_manager.db. share_snapshot_instance_get_all_with_filters( self.context, {'share_instance_ids': [src_instance['id']]})) else: if has_snapshots: snap_data = { 'status': constants.STATUS_MIGRATING_TO, 'progress': '0%', 'share_instance_id': migrating_instance['id'], } (self.share_manager.db.share_snapshot_instance_create. assert_called_once_with(self.context, snapshot['id'], snap_data)) (self.share_manager.db.share_snapshot_instance_update. 
assert_called_once_with( self.context, snapshot.instance['id'], {'status': constants.STATUS_MIGRATING})) @ddt.data({'writable': False, 'preserve_metadata': True, 'nondisruptive': True, 'compatible': True, 'preserve_snapshots': True, 'has_snapshots': False}, {'writable': True, 'preserve_metadata': False, 'nondisruptive': True, 'compatible': True, 'preserve_snapshots': True, 'has_snapshots': False}, {'writable': True, 'preserve_metadata': True, 'nondisruptive': False, 'compatible': True, 'preserve_snapshots': True, 'has_snapshots': False}, {'writable': True, 'preserve_metadata': True, 'nondisruptive': True, 'compatible': False, 'preserve_snapshots': True, 'has_snapshots': False}, {'writable': True, 'preserve_metadata': True, 'nondisruptive': True, 'compatible': True, 'preserve_snapshots': False, 'has_snapshots': False}, {'writable': True, 'preserve_metadata': True, 'nondisruptive': True, 'compatible': True, 'preserve_snapshots': False, 'has_snapshots': True}) @ddt.unpack def test__migration_start_driver_not_compatible( self, compatible, writable, preserve_metadata, nondisruptive, preserve_snapshots, has_snapshots): share = db_utils.create_share() src_instance = db_utils.create_share_instance( share_id='fake_id', share_server_id='src_server_id', share_network_id='fake_share_network_id') fake_dest_host = 'fake_host' src_server = db_utils.create_share_server() dest_server = db_utils.create_share_server() migrating_instance = db_utils.create_share_instance( share_id='fake_id', share_network_id='fake_net_id') compatibility = { 'compatible': compatible, 'writable': writable, 'preserve_metadata': preserve_metadata, 'nondisruptive': nondisruptive, 'preserve_snapshots': preserve_snapshots, } snapshot = db_utils.create_snapshot(share_id=share['id']) # mocks self.mock_object(self.share_manager.db, 'share_server_get', mock.Mock(return_value=src_server)) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=migrating_instance)) self.mock_object( api.API, 'create_share_instance_and_get_request_spec', mock.Mock(return_value=({}, migrating_instance))) self.mock_object(rpcapi.ShareAPI, 'provide_share_server', mock.Mock(return_value='fake_dest_share_server_id')) self.mock_object(rpcapi.ShareAPI, 'create_share_server') self.mock_object( migration_api.ShareMigrationHelper, 'wait_for_share_server', mock.Mock(return_value=dest_server)) self.mock_object(self.share_manager, '_migration_delete_instance') self.mock_object(self.share_manager.driver, 'migration_check_compatibility', mock.Mock(return_value=compatibility)) self.mock_object(utils, 'wait_for_access_update') self.mock_object( self.share_manager.db, 'share_snapshot_get_all_for_share', mock.Mock(return_value=[snapshot] if has_snapshots else [])) # run self.assertRaises( exception.ShareMigrationFailed, self.share_manager._migration_start_driver, self.context, share, src_instance, fake_dest_host, True, True, True, not has_snapshots, 'fake_net_id', 'fake_az_id', 'fake_new_type_id') # asserts self.share_manager.db.share_server_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), 'src_server_id') self.share_manager.db.share_instance_get.assert_called_once_with( self.context, migrating_instance['id'], with_share_data=True) (rpcapi.ShareAPI.provide_share_server. assert_called_once_with( self.context, migrating_instance, 'fake_net_id')) rpcapi.ShareAPI.create_share_server.assert_called_once_with( self.context, migrating_instance, 'fake_dest_share_server_id') (migration_api.ShareMigrationHelper.wait_for_share_server. 
assert_called_once_with('fake_dest_share_server_id')) (api.API.create_share_instance_and_get_request_spec. assert_called_once_with(self.context, share, 'fake_az_id', None, 'fake_host', 'fake_net_id', 'fake_new_type_id')) self.share_manager._migration_delete_instance.assert_called_once_with( self.context, migrating_instance['id']) @ddt.data(Exception('fake'), False, True) def test_migration_driver_continue(self, finished): src_server = db_utils.create_share_server() dest_server = db_utils.create_share_server() share = db_utils.create_share( task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, id='share_id', share_server_id=src_server['id'], status=constants.STATUS_MIGRATING) share_cancelled = db_utils.create_share( task_state=constants.TASK_STATE_MIGRATION_CANCELLED) if finished: share_cancelled = share regular_instance = db_utils.create_share_instance( status=constants.STATUS_AVAILABLE, share_id='other_id') dest_instance = db_utils.create_share_instance( share_id='share_id', host='fake_host', share_server_id=dest_server['id'], status=constants.STATUS_MIGRATING_TO) src_instance = share.instance snapshot = db_utils.create_snapshot(share_id=share['id']) dest_snap_instance = db_utils.create_snapshot_instance( snapshot_id=snapshot['id'], share_instance_id=dest_instance['id']) migrating_snap_instance = db_utils.create_snapshot( status=constants.STATUS_MIGRATING, share_id=share['id']) snapshot_mappings = {snapshot.instance['id']: dest_snap_instance} self.mock_object(manager.LOG, 'warning') self.mock_object(self.share_manager.db, 'share_instances_get_all_by_host', mock.Mock( return_value=[regular_instance, src_instance])) self.mock_object(self.share_manager.db, 'share_get', mock.Mock(side_effect=[share, share_cancelled])) self.mock_object(api.API, 'get_migrating_instances', mock.Mock(return_value=( src_instance['id'], dest_instance['id']))) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=dest_instance)) self.mock_object(self.share_manager.db, 'share_server_get', mock.Mock(side_effect=[src_server, dest_server])) self.mock_object(self.share_manager.driver, 'migration_continue', mock.Mock(side_effect=[finished])) self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.db, 'share_update') self.mock_object(self.share_manager, '_migration_delete_instance') side_effect = [[dest_snap_instance], [snapshot.instance]] if isinstance(finished, Exception): side_effect.append([migrating_snap_instance]) self.mock_object( self.share_manager.db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(side_effect=side_effect)) self.mock_object( self.share_manager.db, 'share_snapshot_instance_update') share_get_calls = [mock.call(self.context, 'share_id')] self.mock_object(self.share_manager, '_reset_read_only_access_rules') self.share_manager.migration_driver_continue(self.context) snapshot_instance_get_all_calls = [ mock.call(self.context, {'share_instance_ids': [dest_instance['id']]}), mock.call(self.context, {'share_instance_ids': [src_instance['id']]}) ] if isinstance(finished, Exception): self.share_manager.db.share_update.assert_called_once_with( self.context, 'share_id', {'task_state': constants.TASK_STATE_MIGRATION_ERROR}) (self.share_manager.db.share_instance_update. assert_called_once_with( self.context, src_instance['id'], {'status': constants.STATUS_AVAILABLE})) (self.share_manager._migration_delete_instance. 
assert_called_once_with(self.context, dest_instance['id'])) (self.share_manager._reset_read_only_access_rules. assert_called_once_with(self.context, share, src_instance['id'])) (self.share_manager.db.share_snapshot_instance_update. assert_called_once_with( self.context, migrating_snap_instance['id'], {'status': constants.STATUS_AVAILABLE})) snapshot_instance_get_all_calls.append( mock.call( self.context, {'share_instance_ids': [src_instance['id']]})) else: if finished: self.share_manager.db.share_update.assert_called_once_with( self.context, 'share_id', {'task_state': constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE}) else: share_get_calls.append(mock.call(self.context, 'share_id')) self.assertTrue(manager.LOG.warning.called) self.share_manager.db.share_instances_get_all_by_host( self.context, self.share_manager.host) self.share_manager.db.share_get.assert_has_calls(share_get_calls) api.API.get_migrating_instances.assert_called_once_with(share) self.share_manager.db.share_instance_get.assert_called_once_with( self.context, dest_instance['id'], with_share_data=True) self.share_manager.db.share_server_get.assert_has_calls([ mock.call(self.context, src_server['id']), mock.call(self.context, dest_server['id']), ]) self.share_manager.driver.migration_continue.assert_called_once_with( self.context, src_instance, dest_instance, [snapshot.instance], snapshot_mappings, src_server, dest_server) (self.share_manager.db.share_snapshot_instance_get_all_with_filters. assert_has_calls(snapshot_instance_get_all_calls)) @ddt.data({'task_state': constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, 'exc': None}, {'task_state': constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, 'exc': Exception('fake')}, {'task_state': constants.TASK_STATE_DATA_COPYING_COMPLETED, 'exc': None}, {'task_state': constants.TASK_STATE_DATA_COPYING_COMPLETED, 'exc': Exception('fake')}) @ddt.unpack def test_migration_complete(self, task_state, exc): instance_1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING, share_server_id='fake_server_id', share_type_id='fake_type_id') instance_2 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING_TO, share_server_id='fake_server_id', share_type_id='fake_type_id') share = db_utils.create_share( id='fake_id', instances=[instance_1, instance_2], task_state=task_state) model_type_update = {'create_share_from_snapshot_support': False} share_update = model_type_update share_update['task_state'] = constants.TASK_STATE_MIGRATION_SUCCESS # mocks self.mock_object(self.share_manager.db, 'share_get', mock.Mock(return_value=share)) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(side_effect=[instance_1, instance_2])) self.mock_object(api.API, 'get_share_attributes_from_share_type', mock.Mock(return_value=model_type_update)) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value='fake_type')) self.mock_object(self.share_manager.db, 'share_update') if task_state == constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE: self.mock_object( self.share_manager, '_migration_complete_driver', mock.Mock(side_effect=exc)) else: self.mock_object( self.share_manager, '_migration_complete_host_assisted', mock.Mock(side_effect=exc)) if exc: snapshot = db_utils.create_snapshot(share_id=share['id']) snapshot_ins1 = db_utils.create_snapshot_instance( snapshot_id=snapshot['id'], share_instance_id=instance_1['id'], status=constants.STATUS_MIGRATING,) snapshot_ins2 = db_utils.create_snapshot_instance( 
snapshot_id=snapshot['id'], share_instance_id=instance_2['id'], status=constants.STATUS_MIGRATING_TO) self.mock_object(manager.LOG, 'exception') self.mock_object(self.share_manager.db, 'share_update') self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.db, 'share_snapshot_instance_update') self.mock_object(self.share_manager.db, 'share_snapshot_instance_get_all_with_filters', mock.Mock( return_value=[snapshot_ins1, snapshot_ins2])) self.assertRaises( exception.ShareMigrationFailed, self.share_manager.migration_complete, self.context, instance_1['id'], instance_2['id']) else: self.share_manager.migration_complete( self.context, instance_1['id'], instance_2['id']) # asserts self.share_manager.db.share_get.assert_called_once_with( self.context, share['id']) self.share_manager.db.share_instance_get.assert_has_calls([ mock.call(self.context, instance_1['id'], with_share_data=True), mock.call(self.context, instance_2['id'], with_share_data=True)]) if task_state == constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE: (self.share_manager._migration_complete_driver. assert_called_once_with( self.context, share, instance_1, instance_2)) else: (self.share_manager._migration_complete_host_assisted. assert_called_once_with( self.context, share, instance_1['id'], instance_2['id'])) if exc: self.assertTrue(manager.LOG.exception.called) self.share_manager.db.share_update.assert_called_once_with( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_ERROR}) if task_state == constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE: share_instance_update_calls = [ mock.call(self.context, instance_1['id'], {'status': constants.STATUS_ERROR}), mock.call(self.context, instance_2['id'], {'status': constants.STATUS_ERROR}) ] else: share_instance_update_calls = [ mock.call(self.context, instance_1['id'], {'status': constants.STATUS_AVAILABLE}), ] self.share_manager.db.share_instance_update.assert_has_calls( share_instance_update_calls) if task_state == constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE: (self.share_manager.db.share_snapshot_instance_update. assert_has_calls([ mock.call(self.context, snapshot_ins1['id'], {'status': constants.STATUS_ERROR}), mock.call(self.context, snapshot_ins2['id'], {'status': constants.STATUS_ERROR})])) (self.share_manager.db. share_snapshot_instance_get_all_with_filters. assert_called_once_with( self.context, { 'share_instance_ids': [instance_1['id'], instance_2['id']] } )) else: (api.API.get_share_attributes_from_share_type. 
assert_called_once_with('fake_type')) share_types.get_share_type.assert_called_once_with( self.context, 'fake_type_id') self.share_manager.db.share_update.assert_called_once_with( self.context, share['id'], share_update) @ddt.data(constants.TASK_STATE_DATA_COPYING_ERROR, constants.TASK_STATE_DATA_COPYING_CANCELLED, constants.TASK_STATE_DATA_COPYING_COMPLETED, 'other') def test__migration_complete_host_assisted_status(self, status): instance = db_utils.create_share_instance( share_id='fake_id', share_server_id='fake_server_id') new_instance = db_utils.create_share_instance(share_id='fake_id') share = db_utils.create_share(id='fake_id', task_state=status) helper = mock.Mock() # mocks self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(side_effect=[instance, new_instance])) self.mock_object(helper, 'cleanup_new_instance') self.mock_object(migration_api, 'ShareMigrationHelper', mock.Mock(return_value=helper)) self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.db, 'share_update') self.mock_object(self.share_manager, '_reset_read_only_access_rules') if status == constants.TASK_STATE_DATA_COPYING_COMPLETED: self.mock_object(helper, 'apply_new_access_rules', mock.Mock(side_effect=Exception('fake'))) self.mock_object(manager.LOG, 'exception') # run if status == constants.TASK_STATE_DATA_COPYING_CANCELLED: self.share_manager._migration_complete_host_assisted( self.context, share, instance['id'], new_instance['id']) else: self.assertRaises( exception.ShareMigrationFailed, self.share_manager._migration_complete_host_assisted, self.context, share, instance['id'], new_instance['id']) # asserts self.share_manager.db.share_instance_get.assert_has_calls([ mock.call(self.context, instance['id'], with_share_data=True), mock.call(self.context, new_instance['id'], with_share_data=True) ]) cancelled = not(status == constants.TASK_STATE_DATA_COPYING_CANCELLED) if status != 'other': helper.cleanup_new_instance.assert_called_once_with(new_instance) (self.share_manager._reset_read_only_access_rules. assert_called_once_with(self.context, share, instance['id'], helper=helper, supress_errors=cancelled)) if status == constants.TASK_STATE_MIGRATION_CANCELLED: (self.share_manager.db.share_instance_update. assert_called_once_with( self.context, instance['id'], {'status': constants.STATUS_AVAILABLE, 'progress': '100%'})) self.share_manager.db.share_update.assert_called_once_with( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_CANCELLED}) if status == constants.TASK_STATE_DATA_COPYING_COMPLETED: helper.apply_new_access_rules. 
assert_called_once_with( new_instance) self.assertTrue(manager.LOG.exception.called) @ddt.data({'mount_snapshot_support': True, 'snapshot_els': False}, {'mount_snapshot_support': True, 'snapshot_els': True}, {'mount_snapshot_support': False, 'snapshot_els': False}, {'mount_snapshot_support': False, 'snapshot_els': True},) @ddt.unpack def test__migration_complete_driver( self, mount_snapshot_support, snapshot_els): fake_src_host = 'src_host' fake_dest_host = 'dest_host' fake_rules = 'fake_rules' src_server = db_utils.create_share_server() dest_server = db_utils.create_share_server() share_type = db_utils.create_share_type( extra_specs={'mount_snapshot_support': mount_snapshot_support}) share = db_utils.create_share( share_server_id='fake_src_server_id', host=fake_src_host) dest_instance = db_utils.create_share_instance( share_id=share['id'], share_server_id='fake_dest_server_id', host=fake_dest_host, share_type_id=share_type['id']) src_instance = share.instance snapshot = db_utils.create_snapshot(share_id=share['id']) dest_snap_instance = db_utils.create_snapshot_instance( snapshot_id=snapshot['id'], share_instance_id=dest_instance['id']) snapshot_mappings = {snapshot.instance['id']: dest_snap_instance} model_update = {'fake_keys': 'fake_values'} if snapshot_els: el = {'path': 'fake_path', 'is_admin_only': False} model_update['export_locations'] = [el] fake_return_data = { 'export_locations': 'fake_export_locations', 'snapshot_updates': {dest_snap_instance['id']: model_update}, } # mocks self.mock_object(self.share_manager.db, 'share_server_get', mock.Mock( side_effect=[src_server, dest_server])) self.mock_object( self.share_manager.db, 'share_access_get_all_for_instance', mock.Mock(return_value=fake_rules)) self.mock_object( self.share_manager.db, 'share_export_locations_update') self.mock_object(self.share_manager.driver, 'migration_complete', mock.Mock(return_value=fake_return_data)) self.mock_object( self.share_manager.access_helper, '_check_needs_refresh', mock.Mock(return_value=True)) self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.db, 'share_update') self.mock_object(self.share_manager, '_migration_delete_instance') self.mock_object(migration_api.ShareMigrationHelper, 'apply_new_access_rules') self.mock_object( self.share_manager.db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(side_effect=[[dest_snap_instance], [snapshot.instance]])) self.mock_object( self.share_manager.db, 'share_snapshot_instance_update') el_create = self.mock_object( self.share_manager.db, 'share_snapshot_instance_export_location_create') # run self.share_manager._migration_complete_driver( self.context, share, src_instance, dest_instance) # asserts self.share_manager.db.share_server_get.assert_has_calls([ mock.call(self.context, 'fake_src_server_id'), mock.call(self.context, 'fake_dest_server_id')]) (self.share_manager.db.share_export_locations_update. assert_called_once_with(self.context, dest_instance['id'], 'fake_export_locations')) self.share_manager.driver.migration_complete.assert_called_once_with( self.context, src_instance, dest_instance, [snapshot.instance], snapshot_mappings, src_server, dest_server) (migration_api.ShareMigrationHelper.apply_new_access_rules. 
assert_called_once_with(dest_instance)) self.share_manager._migration_delete_instance.assert_called_once_with( self.context, src_instance['id']) self.share_manager.db.share_instance_update.assert_has_calls([ mock.call(self.context, dest_instance['id'], {'status': constants.STATUS_AVAILABLE, 'progress': '100%'}), mock.call(self.context, src_instance['id'], {'status': constants.STATUS_INACTIVE})]) self.share_manager.db.share_update.assert_called_once_with( self.context, dest_instance['share_id'], {'task_state': constants.TASK_STATE_MIGRATION_COMPLETING}) (self.share_manager.db.share_snapshot_instance_get_all_with_filters. assert_has_calls([ mock.call(self.context, {'share_instance_ids': [dest_instance['id']]}), mock.call(self.context, {'share_instance_ids': [src_instance['id']]})])) snap_data_update = ( fake_return_data['snapshot_updates'][dest_snap_instance['id']]) snap_data_update.update({ 'status': constants.STATUS_AVAILABLE, 'progress': '100%', }) (self.share_manager.db.share_snapshot_instance_update. assert_called_once_with(self.context, dest_snap_instance['id'], snap_data_update)) if mount_snapshot_support and snapshot_els: el['share_snapshot_instance_id'] = dest_snap_instance['id'] el_create.assert_called_once_with(self.context, el) else: el_create.assert_not_called() def test__migration_complete_host_assisted(self): instance = db_utils.create_share_instance( share_id='fake_id', share_server_id='fake_server_id') new_instance = db_utils.create_share_instance(share_id='fake_id') share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_DATA_COPYING_COMPLETED) # mocks self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(side_effect=[instance, new_instance])) self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.db, 'share_update') delete_mock = self.mock_object(migration_api.ShareMigrationHelper, 'delete_instance_and_wait') self.mock_object(migration_api.ShareMigrationHelper, 'apply_new_access_rules') # run self.share_manager._migration_complete_host_assisted( self.context, share, instance['id'], new_instance['id']) # asserts self.share_manager.db.share_instance_get.assert_has_calls([ mock.call(self.context, instance['id'], with_share_data=True), mock.call(self.context, new_instance['id'], with_share_data=True) ]) self.share_manager.db.share_instance_update.assert_has_calls([ mock.call(self.context, new_instance['id'], {'status': constants.STATUS_AVAILABLE, 'progress': '100%'}), mock.call(self.context, instance['id'], {'status': constants.STATUS_INACTIVE}) ]) self.share_manager.db.share_update.assert_called_once_with( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_COMPLETING}) (migration_api.ShareMigrationHelper.apply_new_access_rules. 
assert_called_once_with(new_instance)) delete_mock.assert_called_once_with(instance) @ddt.data(constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, constants.TASK_STATE_DATA_COPYING_COMPLETED) def test_migration_cancel(self, task_state): dest_host = 'fake_host' server_1 = db_utils.create_share_server() server_2 = db_utils.create_share_server() share = db_utils.create_share(task_state=task_state) instance_1 = db_utils.create_share_instance( share_id=share['id'], share_server_id=server_1['id']) instance_2 = db_utils.create_share_instance( share_id=share['id'], share_server_id=server_2['id'], host=dest_host) helper = mock.Mock() self.mock_object(migration_api, 'ShareMigrationHelper', mock.Mock(return_value=helper)) self.mock_object(db, 'share_get', mock.Mock(return_value=share)) self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=[instance_1, instance_2])) self.mock_object(db, 'share_update') self.mock_object(db, 'share_instance_update') self.mock_object(self.share_manager, '_migration_delete_instance') self.mock_object(self.share_manager, '_restore_migrating_snapshots_status') self.mock_object(db, 'share_server_get', mock.Mock(side_effect=[server_1, server_2])) self.mock_object(self.share_manager.driver, 'migration_cancel') self.mock_object(helper, 'cleanup_new_instance') self.mock_object(self.share_manager, '_reset_read_only_access_rules') self.share_manager.migration_cancel( self.context, instance_1['id'], instance_2['id']) share_instance_update_calls = [] if task_state == constants.TASK_STATE_DATA_COPYING_COMPLETED: share_instance_update_calls.append(mock.call( self.context, instance_2['id'], {'status': constants.STATUS_INACTIVE})) (helper.cleanup_new_instance.assert_called_once_with(instance_2)) (self.share_manager._reset_read_only_access_rules. assert_called_once_with(self.context, share, instance_1['id'], helper=helper, supress_errors=False)) else: self.share_manager.driver.migration_cancel.assert_called_once_with( self.context, instance_1, instance_2, [], {}, server_1, server_2) (self.share_manager._migration_delete_instance. assert_called_once_with(self.context, instance_2['id'])) (self.share_manager._restore_migrating_snapshots_status. 
assert_called_once_with(self.context, instance_1['id'])) self.share_manager.db.share_get.assert_called_once_with( self.context, share['id']) self.share_manager.db.share_server_get.assert_has_calls([ mock.call(self.context, server_1['id']), mock.call(self.context, server_2['id']), ]) self.share_manager.db.share_instance_get.assert_has_calls([ mock.call(self.context, instance_1['id'], with_share_data=True), mock.call(self.context, instance_2['id'], with_share_data=True) ]) self.share_manager.db.share_update.assert_called_once_with( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_CANCELLED}) share_instance_update_calls.append(mock.call( self.context, instance_1['id'], {'status': constants.STATUS_AVAILABLE})) self.share_manager.db.share_instance_update.assert_has_calls( share_instance_update_calls) @ddt.data(True, False) def test__reset_read_only_access_rules(self, supress_errors): share = db_utils.create_share() server = db_utils.create_share_server() instance = db_utils.create_share_instance( share_id=share['id'], cast_rules_to_readonly=True, share_server_id=server['id']) # mocks self.mock_object(self.share_manager.db, 'share_server_get', mock.Mock(return_value=server)) self.mock_object(self.share_manager.db, 'share_instance_update') self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=instance)) self.mock_object(migration_api.ShareMigrationHelper, 'cleanup_access_rules') self.mock_object(migration_api.ShareMigrationHelper, 'revert_access_rules') # run self.share_manager._reset_read_only_access_rules( self.context, share, instance['id'], supress_errors=supress_errors) # asserts self.share_manager.db.share_server_get.assert_called_once_with( self.context, server['id']) self.share_manager.db.share_instance_update.assert_called_once_with( self.context, instance['id'], {'cast_rules_to_readonly': False}) self.share_manager.db.share_instance_get.assert_called_once_with( self.context, instance['id'], with_share_data=True) if supress_errors: (migration_api.ShareMigrationHelper.cleanup_access_rules. assert_called_once_with(instance, server)) else: (migration_api.ShareMigrationHelper.revert_access_rules. 
assert_called_once_with(instance, server)) def test__migration_delete_instance(self): share = db_utils.create_share(id='fake_id') instance = share.instance snapshot = db_utils.create_snapshot(share_id=share['id']) rules = [{'id': 'rule_id_1'}, {'id': 'rule_id_2'}] # mocks self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=instance)) mock_get_access_rules_call = self.mock_object( self.share_manager.access_helper, 'get_and_update_share_instance_access_rules', mock.Mock(return_value=rules)) mock_delete_access_rules_call = self.mock_object( self.share_manager.access_helper, 'delete_share_instance_access_rules') self.mock_object(self.share_manager.db, 'share_instance_delete') self.mock_object(self.share_manager.db, 'share_instance_access_delete') self.mock_object(self.share_manager, '_check_delete_share_server') self.mock_object(self.share_manager.db, 'share_snapshot_instance_delete') self.mock_object(self.share_manager.db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=[snapshot.instance])) # run self.share_manager._migration_delete_instance( self.context, instance['id']) # asserts self.share_manager.db.share_instance_get.assert_called_once_with( self.context, instance['id'], with_share_data=True) mock_get_access_rules_call.assert_called_once_with( self.context, share_instance_id=instance['id']) mock_delete_access_rules_call.assert_called_once_with( self.context, rules, instance['id']) self.share_manager.db.share_instance_delete.assert_called_once_with( self.context, instance['id']) self.share_manager._check_delete_share_server.assert_called_once_with( self.context, instance) (self.share_manager.db.share_snapshot_instance_get_all_with_filters. assert_called_once_with(self.context, {'share_instance_ids': [instance['id']]})) (self.share_manager.db.share_snapshot_instance_delete. assert_called_once_with(self.context, snapshot.instance['id'])) def test_migration_cancel_invalid(self): share = db_utils.create_share() self.mock_object(db, 'share_instance_get', mock.Mock(return_value=share.instance)) self.mock_object(db, 'share_get', mock.Mock(return_value=share)) self.assertRaises( exception.InvalidShare, self.share_manager.migration_cancel, self.context, 'ins1_id', 'ins2_id') def test_migration_get_progress(self): expected = 'fake_progress' dest_host = 'fake_host' server_1 = db_utils.create_share_server() server_2 = db_utils.create_share_server() share = db_utils.create_share( task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, share_server_id=server_1['id']) instance_1 = db_utils.create_share_instance( share_id=share['id'], share_server_id=server_1['id']) instance_2 = db_utils.create_share_instance( share_id=share['id'], share_server_id=server_2['id'], host=dest_host) self.mock_object(db, 'share_get', mock.Mock(return_value=share)) self.mock_object(db, 'share_instance_get', mock.Mock(side_effect=[instance_1, instance_2])) self.mock_object(db, 'share_server_get', mock.Mock(side_effect=[server_1, server_2])) self.mock_object(self.share_manager.driver, 'migration_get_progress', mock.Mock(return_value=expected)) result = self.share_manager.migration_get_progress( self.context, instance_1['id'], instance_2['id']) self.assertEqual(expected, result) (self.share_manager.driver.migration_get_progress. 
assert_called_once_with( self.context, instance_1, instance_2, [], {}, server_1, server_2)) self.share_manager.db.share_get.assert_called_once_with( self.context, share['id']) self.share_manager.db.share_server_get.assert_has_calls([ mock.call(self.context, server_1['id']), mock.call(self.context, server_2['id']), ]) self.share_manager.db.share_instance_get.assert_has_calls([ mock.call(self.context, instance_1['id'], with_share_data=True), mock.call(self.context, instance_2['id'], with_share_data=True) ]) def test_migration_get_progress_invalid(self): share = db_utils.create_share() self.mock_object(db, 'share_instance_get', mock.Mock(return_value=share.instance)) self.mock_object(db, 'share_get', mock.Mock(return_value=share)) self.assertRaises( exception.InvalidShare, self.share_manager.migration_get_progress, self.context, 'ins1_id', 'ins2_id') def test_provide_share_server(self): instance = db_utils.create_share_instance(share_id='fake_id', share_group_id='sg_id') snapshot = db_utils.create_snapshot(with_share=True) group = db_utils.create_share_group() server = db_utils.create_share_server() # mocks self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(return_value=instance)) self.mock_object(self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager.db, 'share_group_get', mock.Mock(return_value=group)) self.mock_object(self.share_manager, '_provide_share_server_for_share', mock.Mock(return_value=(server, instance))) # run result = self.share_manager.provide_share_server( self.context, 'ins_id', 'net_id', 'snap_id') # asserts self.assertEqual(server['id'], result) self.share_manager.db.share_instance_get.assert_called_once_with( self.context, 'ins_id', with_share_data=True) self.share_manager.db.share_snapshot_get.assert_called_once_with( self.context, 'snap_id') self.share_manager.db.share_group_get.assert_called_once_with( self.context, 'sg_id') (self.share_manager._provide_share_server_for_share. assert_called_once_with(self.context, 'net_id', instance, snapshot, group, create_on_backend=False)) def test_create_share_server(self): server = db_utils.create_share_server() # mocks self.mock_object(self.share_manager.db, 'share_server_get', mock.Mock(return_value=server)) self.mock_object(self.share_manager, '_create_share_server_in_backend') # run self.share_manager.create_share_server( self.context, 'server_id') # asserts self.share_manager.db.share_server_get.assert_called_once_with( self.context, 'server_id') (self.share_manager._create_share_server_in_backend. 
assert_called_once_with(self.context, server)) @ddt.data({'admin_network_api': mock.Mock(), 'driver_return': ('new_identifier', {'some_id': 'some_value'})}, {'admin_network_api': None, 'driver_return': (None, None)}) @ddt.unpack def test_manage_share_server(self, admin_network_api, driver_return): driver_opts = {} fake_share_server = fakes.fake_share_server_get() fake_list_network_info = [{}, {}] fake_list_empty_network_info = [] identifier = 'fake_id' ss_data = { 'name': 'fake_name', 'ou': 'fake_ou', 'domain': 'fake_domain', 'server': 'fake_server', 'dns_ip': 'fake_dns_ip', 'user': 'fake_user', 'type': 'FAKE', 'password': 'fake_pass', } mock_manage_admin_network_allocations = mock.Mock() share_server = db_utils.create_share_server(**fake_share_server) security_service = db_utils.create_security_service(**ss_data) share_network = db_utils.create_share_network() share_net_subnet = db_utils.create_share_network_subnet( share_network_id=share_network['id']) db.share_network_add_security_service(context.get_admin_context(), share_network['id'], security_service['id']) share_network = db.share_network_get(context.get_admin_context(), share_network['id']) self.share_manager.driver._admin_network_api = admin_network_api mock_share_server_update = self.mock_object( db, 'share_server_update') mock_share_server_get = self.mock_object( db, 'share_server_get', mock.Mock(return_value=share_server)) mock_share_network_get = self.mock_object( db, 'share_network_get', mock.Mock(return_value=share_network)) mock_share_net_subnet_get = self.mock_object( db, 'share_network_subnet_get', mock.Mock( return_value=share_net_subnet) ) mock_network_allocations_get = self.mock_object( self.share_manager.driver, 'get_network_allocations_number', mock.Mock(return_value=1)) mock_share_server_net_info = self.mock_object( self.share_manager.driver, 'get_share_server_network_info', mock.Mock(return_value=fake_list_network_info)) mock_manage_network_allocations = self.mock_object( self.share_manager.driver.network_api, 'manage_network_allocations', mock.Mock(return_value=fake_list_empty_network_info)) mock_manage_server = self.mock_object( self.share_manager.driver, 'manage_server', mock.Mock(return_value=driver_return)) mock_set_backend_details = self.mock_object( db, 'share_server_backend_details_set') ss_from_db = share_network['security_services'][0] ss_data_from_db = { 'name': ss_from_db['name'], 'ou': ss_from_db['ou'], 'domain': ss_from_db['domain'], 'server': ss_from_db['server'], 'dns_ip': ss_from_db['dns_ip'], 'user': ss_from_db['user'], 'type': ss_from_db['type'], 'password': ss_from_db['password'], } expected_backend_details = { 'security_service_FAKE': jsonutils.dumps(ss_data_from_db), } if driver_return[1]: expected_backend_details.update(driver_return[1]) if admin_network_api is not None: mock_manage_admin_network_allocations = self.mock_object( self.share_manager.driver.admin_network_api, 'manage_network_allocations', mock.Mock(return_value=fake_list_network_info)) self.share_manager.manage_share_server(self.context, fake_share_server['id'], identifier, driver_opts) mock_share_server_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'] ) mock_share_network_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_net_subnet['share_network_id'] ) mock_share_net_subnet_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['share_network_subnet_id'] ) mock_network_allocations_get.assert_called_once_with() 
mock_share_server_net_info.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_server, identifier, driver_opts ) mock_manage_network_allocations.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_list_network_info, share_server, share_network, share_net_subnet ) mock_manage_server.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_server, identifier, driver_opts ) mock_share_server_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'], {'status': constants.STATUS_ACTIVE, 'identifier': driver_return[0] or share_server['id']} ) mock_set_backend_details.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_server['id'], expected_backend_details ) if admin_network_api is not None: mock_manage_admin_network_allocations.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_list_network_info, share_server ) def test_manage_share_server_dhss_false(self): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False self.assertRaises( exception.ManageShareServerError, self.share_manager.manage_share_server, self.context, "fake_id", "foo", {}) def test_manage_share_server_without_allocations(self): driver_opts = {} fake_share_server = fakes.fake_share_server_get() fake_list_empty_network_info = [] identifier = 'fake_id' share_server = db_utils.create_share_server(**fake_share_server) share_network = db_utils.create_share_network() share_network_subnet = db_utils.create_share_network_subnet( share_network_id=share_network['id'] ) share_server['share_network_subnet_id'] = share_network_subnet['id'] self.share_manager.driver._admin_network_api = mock.Mock() mock_share_server_get = self.mock_object( db, 'share_server_get', mock.Mock(return_value=share_server)) mock_share_network_get = self.mock_object( db, 'share_network_get', mock.Mock(return_value=share_network)) mock_share_net_subnet_get = self.mock_object( db, 'share_network_subnet_get', mock.Mock( return_value=share_network_subnet)) mock_network_allocations_get = self.mock_object( self.share_manager.driver, 'get_network_allocations_number', mock.Mock(return_value=1)) mock_get_share_network_info = self.mock_object( self.share_manager.driver, 'get_share_server_network_info', mock.Mock(return_value=fake_list_empty_network_info)) self.assertRaises(exception.ManageShareServerError, self.share_manager.manage_share_server, context=self.context, share_server_id=fake_share_server['id'], identifier=identifier, driver_opts=driver_opts) mock_share_server_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'] ) mock_share_network_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_network_subnet['share_network_id'] ) mock_share_net_subnet_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_server['share_network_subnet_id'] ) mock_network_allocations_get.assert_called_once_with() mock_get_share_network_info.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_server, identifier, driver_opts ) def test_manage_share_server_allocations_not_managed(self): driver_opts = {} fake_share_server = fakes.fake_share_server_get() fake_list_network_info = [{}, {}] identifier = 'fake_id' share_server = db_utils.create_share_server(**fake_share_server) share_network = db_utils.create_share_network() share_network_subnet = db_utils.create_share_network_subnet( 
share_network_id=share_network['id'] ) share_server['share_network_subnet_id'] = share_network_subnet['id'] self.share_manager.driver._admin_network_api = mock.Mock() mock_share_server_get = self.mock_object( db, 'share_server_get', mock.Mock(return_value=share_server)) mock_share_network_get = self.mock_object( db, 'share_network_get', mock.Mock(return_value=share_network)) mock_share_net_subnet_get = self.mock_object( db, 'share_network_subnet_get', mock.Mock( return_value=share_network_subnet)) mock_network_allocations_get = self.mock_object( self.share_manager.driver, 'get_network_allocations_number', mock.Mock(return_value=1)) mock_get_share_network_info = self.mock_object( self.share_manager.driver, 'get_share_server_network_info', mock.Mock(return_value=fake_list_network_info)) mock_manage_admin_network_allocations = self.mock_object( self.share_manager.driver.admin_network_api, 'manage_network_allocations', mock.Mock(return_value=fake_list_network_info)) mock_manage_network_allocations = self.mock_object( self.share_manager.driver.network_api, 'manage_network_allocations', mock.Mock(return_value=fake_list_network_info)) self.assertRaises(exception.ManageShareServerError, self.share_manager.manage_share_server, context=self.context, share_server_id=fake_share_server['id'], identifier=identifier, driver_opts=driver_opts) mock_share_server_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'] ) mock_share_network_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_network_subnet['share_network_id'] ) mock_share_net_subnet_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_server['share_network_subnet_id'] ) mock_network_allocations_get.assert_called_once_with() mock_get_share_network_info.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_server, identifier, driver_opts ) mock_manage_admin_network_allocations.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_list_network_info, share_server ) mock_manage_network_allocations.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_list_network_info, share_server, share_network, share_network_subnet ) def test_manage_snapshot_driver_exception(self): CustomException = type('CustomException', (Exception,), {}) self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value="False")) mock_manage = self.mock_object(self.share_manager.driver, 'manage_existing_snapshot', mock.Mock(side_effect=CustomException)) share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id']) driver_options = {} mock_get = self.mock_object(self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.assertRaises( CustomException, self.share_manager.manage_snapshot, self.context, snapshot['id'], driver_options) mock_manage.assert_called_once_with(mock.ANY, driver_options) mock_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id']) def test_unmanage_share_server_no_allocations(self): fake_share_server = fakes.fake_share_server_get() ss_list = [ {'name': 'fake_AD'}, {'name': 'fake_LDAP'}, {'name': 'fake_kerberos'} ] db_utils.create_share_server(**fake_share_server) self.mock_object(self.share_manager.driver, 'unmanage_server', mock.Mock(side_effect=NotImplementedError())) self.mock_object(self.share_manager.db, 
'share_server_delete') mock_network_allocations_number = self.mock_object( self.share_manager.driver, 'get_network_allocations_number', mock.Mock(return_value=0) ) mock_admin_network_allocations_number = self.mock_object( self.share_manager.driver, 'get_admin_network_allocations_number', mock.Mock(return_value=0) ) self.share_manager.unmanage_share_server( self.context, fake_share_server['id'], True) mock_network_allocations_number.assert_called_once_with() mock_admin_network_allocations_number.assert_called_once_with() self.share_manager.driver.unmanage_server.assert_called_once_with( fake_share_server['backend_details'], ss_list) self.share_manager.db.share_server_delete.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id']) def test_unmanage_share_server_no_allocations_driver_not_implemented(self): fake_share_server = fakes.fake_share_server_get() fake_share_server['status'] = constants.STATUS_UNMANAGING ss_list = [ {'name': 'fake_AD'}, {'name': 'fake_LDAP'}, {'name': 'fake_kerberos'} ] db_utils.create_share_server(**fake_share_server) self.mock_object(self.share_manager.driver, 'unmanage_server', mock.Mock(side_effect=NotImplementedError())) self.mock_object(self.share_manager.db, 'share_server_update') self.share_manager.unmanage_share_server( self.context, fake_share_server['id'], False) self.share_manager.driver.unmanage_server.assert_called_once_with( fake_share_server['backend_details'], ss_list) self.share_manager.db.share_server_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'], {'status': constants.STATUS_UNMANAGE_ERROR}) def test_unmanage_share_server_with_network_allocations(self): fake_share_server = fakes.fake_share_server_get() db_utils.create_share_server(**fake_share_server) mock_unmanage_network_allocations = self.mock_object( self.share_manager.driver.network_api, 'unmanage_network_allocations' ) mock_network_allocations_number = self.mock_object( self.share_manager.driver, 'get_network_allocations_number', mock.Mock(return_value=1) ) self.share_manager.unmanage_share_server( self.context, fake_share_server['id'], True) mock_network_allocations_number.assert_called_once_with() mock_unmanage_network_allocations.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id']) def test_unmanage_share_server_with_admin_network_allocations(self): fake_share_server = fakes.fake_share_server_get() db_utils.create_share_server(**fake_share_server) mock_admin_network_allocations_number = self.mock_object( self.share_manager.driver, 'get_admin_network_allocations_number', mock.Mock(return_value=1) ) mock_network_allocations_number = self.mock_object( self.share_manager.driver, 'get_network_allocations_number', mock.Mock(return_value=0) ) self.share_manager.driver._admin_network_api = mock.Mock() self.share_manager.unmanage_share_server( self.context, fake_share_server['id'], True) mock_admin_network_allocations_number.assert_called_once_with() mock_network_allocations_number.assert_called_once_with() def test_unmanage_share_server_error(self): fake_share_server = fakes.fake_share_server_get() db_utils.create_share_server(**fake_share_server) mock_network_allocations_number = self.mock_object( self.share_manager.driver, 'get_network_allocations_number', mock.Mock(return_value=1) ) error = mock.Mock( side_effect=exception.ShareServerNotFound(share_server_id="fake")) mock_share_server_delete = self.mock_object( db, 'share_server_delete', error ) mock_share_server_update = 
self.mock_object( db, 'share_server_update' ) self.share_manager.driver._admin_network_api = mock.Mock() self.assertRaises(exception.ShareServerNotFound, self.share_manager.unmanage_share_server, self.context, fake_share_server['id'], True) mock_network_allocations_number.assert_called_once_with() mock_share_server_delete.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'] ) mock_share_server_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'], {'status': constants.STATUS_UNMANAGE_ERROR} ) def test_unmanage_share_server_network_allocations_error(self): fake_share_server = fakes.fake_share_server_get() db_utils.create_share_server(**fake_share_server) mock_network_allocations_number = self.mock_object( self.share_manager.driver, 'get_network_allocations_number', mock.Mock(return_value=1) ) error = mock.Mock( side_effect=exception.ShareNetworkNotFound(share_network_id="fake") ) mock_unmanage_network_allocations = self.mock_object( self.share_manager.driver.network_api, 'unmanage_network_allocations', error) mock_share_server_update = self.mock_object( db, 'share_server_update' ) self.share_manager.driver._admin_network_api = mock.Mock() self.assertRaises(exception.ShareNetworkNotFound, self.share_manager.unmanage_share_server, self.context, fake_share_server['id'], True) mock_network_allocations_number.assert_called_once_with() mock_unmanage_network_allocations.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'] ) mock_share_server_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'], {'status': constants.STATUS_UNMANAGE_ERROR} ) def test_unmanage_share_server_admin_network_allocations_error(self): fake_share_server = fakes.fake_share_server_get() db_utils.create_share_server(**fake_share_server) self.share_manager.driver._admin_network_api = mock.Mock() mock_network_allocations_number = self.mock_object( self.share_manager.driver, 'get_network_allocations_number', mock.Mock(return_value=0) ) mock_admin_network_allocations_number = self.mock_object( self.share_manager.driver, 'get_admin_network_allocations_number', mock.Mock(return_value=1) ) error = mock.Mock( side_effect=exception.ShareNetworkNotFound(share_network_id="fake") ) mock_unmanage_admin_network_allocations = self.mock_object( self.share_manager.driver._admin_network_api, 'unmanage_network_allocations', error ) mock_unmanage_network_allocations = self.mock_object( self.share_manager.driver.network_api, 'unmanage_network_allocations', error) mock_share_server_update = self.mock_object( db, 'share_server_update' ) self.assertRaises(exception.ShareNetworkNotFound, self.share_manager.unmanage_share_server, self.context, fake_share_server['id'], True) mock_network_allocations_number.assert_called_once_with() mock_admin_network_allocations_number.assert_called_once_with() mock_unmanage_network_allocations.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'] ) mock_unmanage_admin_network_allocations.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'] ) mock_share_server_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), fake_share_server['id'], {'status': constants.STATUS_UNMANAGE_ERROR} ) @ddt.data({'dhss': True, 'driver_data': {'size': 1}, 'mount_snapshot_support': False}, {'dhss': True, 'driver_data': {'size': 2, 'name': 'fake'}, 'mount_snapshot_support': False}, 
{'dhss': False, 'driver_data': {'size': 3}, 'mount_snapshot_support': False}, {'dhss': False, 'driver_data': {'size': 3, 'export_locations': [ {'path': '/path1', 'is_admin_only': True}, {'path': '/path2', 'is_admin_only': False} ]}, 'mount_snapshot_support': False}, {'dhss': False, 'driver_data': {'size': 3, 'export_locations': [ {'path': '/path1', 'is_admin_only': True}, {'path': '/path2', 'is_admin_only': False} ]}, 'mount_snapshot_support': True}) @ddt.unpack def test_manage_snapshot_valid_snapshot( self, driver_data, mount_snapshot_support, dhss): mock_get_share_server = self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object(self.share_manager.db, 'share_snapshot_update') self.mock_object(self.share_manager, 'driver') self.mock_object(quota.QUOTAS, 'reserve', mock.Mock()) self.share_manager.driver.driver_handles_share_servers = dhss if dhss: mock_manage = self.mock_object( self.share_manager.driver, "manage_existing_snapshot_with_server", mock.Mock(return_value=driver_data)) else: mock_manage = self.mock_object( self.share_manager.driver, "manage_existing_snapshot", mock.Mock(return_value=driver_data)) size = driver_data['size'] export_locations = driver_data.get('export_locations') share = db_utils.create_share( size=size, mount_snapshot_support=mount_snapshot_support) snapshot = db_utils.create_snapshot(share_id=share['id'], size=size) snapshot_id = snapshot['id'] driver_options = {} mock_get = self.mock_object(self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) mock_export_update = self.mock_object( self.share_manager.db, 'share_snapshot_instance_export_location_create') self.share_manager.manage_snapshot(self.context, snapshot_id, driver_options) if dhss: mock_manage.assert_called_once_with(mock.ANY, driver_options, None) else: mock_manage.assert_called_once_with(mock.ANY, driver_options) valid_snapshot_data = { 'status': constants.STATUS_AVAILABLE} valid_snapshot_data.update(driver_data) self.share_manager.db.share_snapshot_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot_id, valid_snapshot_data) if dhss: mock_get_share_server.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['share']) mock_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot_id) if mount_snapshot_support and export_locations: snap_ins_id = snapshot.instance['id'] for i in range(0, 2): export_locations[i]['share_snapshot_instance_id'] = snap_ins_id mock_export_update.assert_has_calls([ mock.call(utils.IsAMatcher(context.RequestContext), export_locations[0]), mock.call(utils.IsAMatcher(context.RequestContext), export_locations[1]), ]) else: mock_export_update.assert_not_called() def test_unmanage_snapshot_invalid_share(self): manager.CONF.unmanage_remove_access_rules = False self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False mock_unmanage = mock.Mock( side_effect=exception.UnmanageInvalidShareSnapshot(reason="fake")) self.mock_object(self.share_manager.driver, "unmanage_snapshot", mock_unmanage) mock_get_share_server = self.mock_object( self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object(self.share_manager.db, 'share_snapshot_update') share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id']) mock_get = self.mock_object(self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) 
self.share_manager.unmanage_snapshot(self.context, snapshot['id']) self.share_manager.db.share_snapshot_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id'], {'status': constants.STATUS_UNMANAGE_ERROR}) self.share_manager.driver.unmanage_snapshot.assert_called_once_with( mock.ANY) mock_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id']) mock_get_share_server.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['share']) @ddt.data({'dhss': False, 'quota_error': False}, {'dhss': True, 'quota_error': False}, {'dhss': False, 'quota_error': True}, {'dhss': True, 'quota_error': True}) @ddt.unpack def test_unmanage_snapshot_valid_snapshot(self, dhss, quota_error): if quota_error: self.mock_object(quota.QUOTAS, 'reserve', mock.Mock( side_effect=exception.ManilaException(message='error'))) manager.CONF.unmanage_remove_access_rules = True mock_log_warning = self.mock_object(manager.LOG, 'warning') self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = dhss mock_update_access = self.mock_object( self.share_manager.snapshot_access_helper, "update_access_rules") if dhss: mock_unmanage = self.mock_object( self.share_manager.driver, "unmanage_snapshot_with_server") else: mock_unmanage = self.mock_object( self.share_manager.driver, "unmanage_snapshot") mock_get_share_server = self.mock_object( self.share_manager, '_get_share_server', mock.Mock(return_value=None)) mock_snapshot_instance_destroy_call = self.mock_object( self.share_manager.db, 'share_snapshot_instance_delete') share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id']) mock_get = self.mock_object(self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) mock_snap_ins_get = self.mock_object( self.share_manager.db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot.instance)) self.share_manager.unmanage_snapshot(self.context, snapshot['id']) if dhss: mock_unmanage.assert_called_once_with(snapshot.instance, None) else: mock_unmanage.assert_called_once_with(snapshot.instance) mock_update_access.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot.instance['id'], delete_all_rules=True, share_server=None) mock_snapshot_instance_destroy_call.assert_called_once_with( mock.ANY, snapshot['instance']['id']) mock_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id']) mock_get_share_server.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['share']) mock_snap_ins_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot.instance['id'], with_share_data=True) if quota_error: self.assertTrue(mock_log_warning.called) @ddt.data(True, False) def test_revert_to_snapshot(self, has_replicas): reservations = 'fake_reservations' share_id = 'fake_share_id' snapshot_id = 'fake_snapshot_id' snapshot_instance_id = 'fake_snapshot_instance_id' share_instance_id = 'fake_share_instance_id' share_instance = fakes.fake_share_instance( id=share_instance_id, share_id=share_id) share = fakes.fake_share( id=share_id, instance=share_instance, project_id='fake_project', user_id='fake_user', size=2, has_replicas=has_replicas) snapshot_instance = fakes.fake_snapshot_instance( id=snapshot_instance_id, share_id=share_instance_id, share=share, name='fake_snapshot', share_instance=share_instance, share_instance_id=share_instance_id) snapshot = fakes.fake_snapshot( id=snapshot_id, 
share_id=share_id, share=share, instance=snapshot_instance, project_id='fake_project', user_id='fake_user', size=1) share_access_rules = ['fake_share_access_rule'] snapshot_access_rules = ['fake_snapshot_access_rule'] mock_share_snapshot_get = self.mock_object( self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) mock_share_access_get = self.mock_object( self.share_manager.access_helper, 'get_share_instance_access_rules', mock.Mock(return_value=share_access_rules)) mock_snapshot_access_get = self.mock_object( self.share_manager.snapshot_access_helper, 'get_snapshot_instance_access_rules', mock.Mock(return_value=snapshot_access_rules)) mock_revert_to_snapshot = self.mock_object( self.share_manager, '_revert_to_snapshot') mock_revert_to_replicated_snapshot = self.mock_object( self.share_manager, '_revert_to_replicated_snapshot') self.share_manager.revert_to_snapshot(self.context, snapshot_id, reservations) mock_share_snapshot_get.assert_called_once_with(mock.ANY, snapshot_id) mock_share_access_get.assert_called_once_with( mock.ANY, filters={'state': constants.STATUS_ACTIVE}, share_instance_id=share_instance_id) mock_snapshot_access_get.assert_called_once_with( mock.ANY, snapshot_instance_id) if not has_replicas: mock_revert_to_snapshot.assert_called_once_with( mock.ANY, share, snapshot, reservations, share_access_rules, snapshot_access_rules) self.assertFalse(mock_revert_to_replicated_snapshot.called) else: self.assertFalse(mock_revert_to_snapshot.called) mock_revert_to_replicated_snapshot.assert_called_once_with( mock.ANY, share, snapshot, reservations, share_access_rules, snapshot_access_rules, share_id=share_id) @ddt.data(None, 'fake_reservations') def test__revert_to_snapshot(self, reservations): mock_quotas_rollback = self.mock_object(quota.QUOTAS, 'rollback') mock_quotas_commit = self.mock_object(quota.QUOTAS, 'commit') self.mock_object( self.share_manager, '_get_share_server', mock.Mock(return_value=None)) mock_driver = self.mock_object(self.share_manager, 'driver') share_id = 'fake_share_id' share = fakes.fake_share( id=share_id, instance={'id': 'fake_instance_id', 'share_type_id': 'fake_share_type_id'}, project_id='fake_project', user_id='fake_user', size=2) snapshot_instance = fakes.fake_snapshot_instance( share_id=share_id, share=share, name='fake_snapshot', share_instance=share['instance']) snapshot = fakes.fake_snapshot( id='fake_snapshot_id', share_id=share_id, share=share, instance=snapshot_instance, project_id='fake_project', user_id='fake_user', size=1) share_access_rules = [] snapshot_access_rules = [] self.mock_object( self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object( self.share_manager.db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) mock_share_update = self.mock_object( self.share_manager.db, 'share_update') mock_share_snapshot_update = self.mock_object( self.share_manager.db, 'share_snapshot_update') self.share_manager._revert_to_snapshot(self.context, share, snapshot, reservations, share_access_rules, snapshot_access_rules) mock_driver.revert_to_snapshot.assert_called_once_with( mock.ANY, self._get_snapshot_instance_dict( snapshot_instance, share, snapshot=snapshot), share_access_rules, snapshot_access_rules, share_server=None) self.assertFalse(mock_quotas_rollback.called) if reservations: mock_quotas_commit.assert_called_once_with( mock.ANY, reservations, project_id='fake_project', user_id='fake_user', share_type_id=( 
snapshot_instance['share_instance']['share_type_id'])) else: self.assertFalse(mock_quotas_commit.called) mock_share_update.assert_called_once_with( mock.ANY, share_id, {'status': constants.STATUS_AVAILABLE, 'size': snapshot['size']}) mock_share_snapshot_update.assert_called_once_with( mock.ANY, 'fake_snapshot_id', {'status': constants.STATUS_AVAILABLE}) @ddt.data(None, 'fake_reservations') def test__revert_to_snapshot_driver_exception(self, reservations): mock_quotas_rollback = self.mock_object(quota.QUOTAS, 'rollback') mock_quotas_commit = self.mock_object(quota.QUOTAS, 'commit') self.mock_object( self.share_manager, '_get_share_server', mock.Mock(return_value=None)) mock_driver = self.mock_object(self.share_manager, 'driver') mock_driver.revert_to_snapshot.side_effect = exception.ManilaException share_id = 'fake_share_id' share = fakes.fake_share( id=share_id, instance={'id': 'fake_instance_id', 'share_type_id': 'fake_share_type_id'}, project_id='fake_project', user_id='fake_user', size=2) snapshot_instance = fakes.fake_snapshot_instance( share_id=share_id, share=share, name='fake_snapshot', share_instance=share['instance']) snapshot = fakes.fake_snapshot( id='fake_snapshot_id', share_id=share_id, share=share, instance=snapshot_instance, project_id='fake_project', user_id='fake_user', size=1) share_access_rules = [] snapshot_access_rules = [] self.mock_object( self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object( self.share_manager.db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) mock_share_update = self.mock_object( self.share_manager.db, 'share_update') mock_share_snapshot_update = self.mock_object( self.share_manager.db, 'share_snapshot_update') self.assertRaises(exception.ManilaException, self.share_manager._revert_to_snapshot, self.context, share, snapshot, reservations, share_access_rules, snapshot_access_rules) mock_driver.revert_to_snapshot.assert_called_once_with( mock.ANY, self._get_snapshot_instance_dict( snapshot_instance, share, snapshot=snapshot), share_access_rules, snapshot_access_rules, share_server=None) self.assertFalse(mock_quotas_commit.called) if reservations: mock_quotas_rollback.assert_called_once_with( mock.ANY, reservations, project_id='fake_project', user_id='fake_user', share_type_id=( snapshot_instance['share_instance']['share_type_id'])) else: self.assertFalse(mock_quotas_rollback.called) mock_share_update.assert_called_once_with( mock.ANY, share_id, {'status': constants.STATUS_REVERTING_ERROR}) mock_share_snapshot_update.assert_called_once_with( mock.ANY, 'fake_snapshot_id', {'status': constants.STATUS_AVAILABLE}) def test_unmanage_snapshot_update_access_rule_exception(self): self.mock_object(self.share_manager, 'driver') self.share_manager.driver.driver_handles_share_servers = False share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id']) manager.CONF.unmanage_remove_access_rules = True mock_get = self.mock_object( self.share_manager.db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) mock_get_share_server = self.mock_object( self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object(self.share_manager.snapshot_access_helper, 'update_access_rules', mock.Mock(side_effect=Exception)) mock_log_exception = self.mock_object(manager.LOG, 'exception') mock_update = self.mock_object(self.share_manager.db, 'share_snapshot_update') self.share_manager.unmanage_snapshot(self.context, snapshot['id']) 
self.assertTrue(mock_log_exception.called) mock_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id']) mock_get_share_server.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['share']) mock_update.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id'], {'status': constants.STATUS_UNMANAGE_ERROR}) def test_snapshot_update_access(self): snapshot = fakes.fake_snapshot(create_instance=True) snapshot_instance = fakes.fake_snapshot_instance( base_snapshot=snapshot) mock_instance_get = self.mock_object( db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) mock_get_share_server = self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) mock_update_access = self.mock_object( self.share_manager.snapshot_access_helper, 'update_access_rules') self.share_manager.snapshot_update_access(self.context, snapshot_instance['id']) mock_instance_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot_instance['id'], with_share_data=True) mock_get_share_server.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot_instance['share_instance']) mock_update_access.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot_instance['id'], share_server=None) def _setup_crud_replicated_snapshot_data(self): snapshot = fakes.fake_snapshot(create_instance=True) snapshot_instance = fakes.fake_snapshot_instance( base_snapshot=snapshot) snapshot_instances = [snapshot['instance'], snapshot_instance] replicas = [fake_replica(), fake_replica()] return snapshot, snapshot_instances, replicas def test_create_replicated_snapshot_driver_exception(self): snapshot, snapshot_instances, replicas = ( self._setup_crud_replicated_snapshot_data() ) self.mock_object( db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( self.share_manager.driver, 'create_replicated_snapshot', mock.Mock(side_effect=exception.ManilaException)) mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') self.assertRaises(exception.ManilaException, self.share_manager.create_replicated_snapshot, self.context, snapshot['id'], share_id='fake_share') mock_db_update_call.assert_has_calls([ mock.call( self.context, snapshot['instance']['id'], {'status': constants.STATUS_ERROR}), mock.call( self.context, snapshot_instances[1]['id'], {'status': constants.STATUS_ERROR}), ]) @ddt.data(None, []) def test_create_replicated_snapshot_driver_updates_nothing(self, retval): snapshot, snapshot_instances, replicas = ( self._setup_crud_replicated_snapshot_data() ) self.mock_object( db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( self.share_manager.driver, 'create_replicated_snapshot', mock.Mock(return_value=retval)) mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') return_value = self.share_manager.create_replicated_snapshot( self.context, snapshot['id'], 
share_id='fake_share') self.assertIsNone(return_value) self.assertFalse(mock_db_update_call.called) def test_create_replicated_snapshot_driver_updates_snapshot(self): snapshot, snapshot_instances, replicas = ( self._setup_crud_replicated_snapshot_data() ) snapshot_dict = { 'status': constants.STATUS_AVAILABLE, 'provider_location': 'spinners_end', 'progress': '100%', 'id': snapshot['instance']['id'], } self.mock_object( db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( self.share_manager.driver, 'create_replicated_snapshot', mock.Mock(return_value=[snapshot_dict])) mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') return_value = self.share_manager.create_replicated_snapshot( self.context, snapshot['id'], share_id='fake_share') self.assertIsNone(return_value) mock_db_update_call.assert_called_once_with( self.context, snapshot['instance']['id'], snapshot_dict) @ddt.data(None, 'fake_reservations') def test_revert_to_replicated_snapshot(self, reservations): share_id = 'id1' mock_quotas_rollback = self.mock_object(quota.QUOTAS, 'rollback') mock_quotas_commit = self.mock_object(quota.QUOTAS, 'commit') share = fakes.fake_share( id=share_id, project_id='fake_project', user_id='fake_user') snapshot = fakes.fake_snapshot( create_instance=True, share=share, size=1) snapshot_instance = fakes.fake_snapshot_instance( base_snapshot=snapshot) snapshot_instances = [snapshot['instance'], snapshot_instance] active_replica = fake_replica( id='rid1', share_id=share_id, host=self.share_manager.host, replica_state=constants.REPLICA_STATE_ACTIVE, as_primitive=False) replica = fake_replica( id='rid2', share_id=share_id, host='secondary', replica_state=constants.REPLICA_STATE_IN_SYNC, as_primitive=False) replicas = [active_replica, replica] share_access_rules = [] snapshot_access_rules = [] self.mock_object( db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object( self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object( db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(side_effect=[snapshot_instances, [snapshot_instances[0]]])) mock_driver = self.mock_object(self.share_manager, 'driver') mock_share_update = self.mock_object( self.share_manager.db, 'share_update') mock_share_replica_update = self.mock_object( self.share_manager.db, 'share_replica_update') mock_share_snapshot_instance_update = self.mock_object( self.share_manager.db, 'share_snapshot_instance_update') self.share_manager._revert_to_replicated_snapshot( self.context, share, snapshot, reservations, share_access_rules, snapshot_access_rules, share_id=share_id) self.assertTrue(mock_driver.revert_to_replicated_snapshot.called) self.assertFalse(mock_quotas_rollback.called) if reservations: mock_quotas_commit.assert_called_once_with( mock.ANY, reservations, project_id='fake_project', user_id='fake_user', share_type_id=None) else: self.assertFalse(mock_quotas_commit.called) mock_share_update.assert_called_once_with( mock.ANY, share_id, {'size': snapshot['size']}) mock_share_replica_update.assert_called_once_with( mock.ANY, active_replica['id'], {'status': constants.STATUS_AVAILABLE}) 
mock_share_snapshot_instance_update.assert_called_once_with( mock.ANY, snapshot['instance']['id'], {'status': constants.STATUS_AVAILABLE}) @ddt.data(None, 'fake_reservations') def test_revert_to_replicated_snapshot_driver_exception( self, reservations): mock_quotas_rollback = self.mock_object(quota.QUOTAS, 'rollback') mock_quotas_commit = self.mock_object(quota.QUOTAS, 'commit') share_id = 'id1' share = fakes.fake_share( id=share_id, project_id='fake_project', user_id='fake_user') snapshot = fakes.fake_snapshot( create_instance=True, share=share, size=1) snapshot_instance = fakes.fake_snapshot_instance( base_snapshot=snapshot) snapshot_instances = [snapshot['instance'], snapshot_instance] active_replica = fake_replica( id='rid1', share_id=share_id, host=self.share_manager.host, replica_state=constants.REPLICA_STATE_ACTIVE, as_primitive=False, share_type_id='fake_share_type_id') replica = fake_replica( id='rid2', share_id=share_id, host='secondary', replica_state=constants.REPLICA_STATE_IN_SYNC, as_primitive=False, share_type_id='fake_share_type_id') replicas = [active_replica, replica] share_access_rules = [] snapshot_access_rules = [] self.mock_object( db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object( self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object( db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(side_effect=[snapshot_instances, [snapshot_instances[0]]])) mock_driver = self.mock_object(self.share_manager, 'driver') mock_driver.revert_to_replicated_snapshot.side_effect = ( exception.ManilaException) mock_share_update = self.mock_object( self.share_manager.db, 'share_update') mock_share_replica_update = self.mock_object( self.share_manager.db, 'share_replica_update') mock_share_snapshot_instance_update = self.mock_object( self.share_manager.db, 'share_snapshot_instance_update') self.assertRaises(exception.ManilaException, self.share_manager._revert_to_replicated_snapshot, self.context, share, snapshot, reservations, share_access_rules, snapshot_access_rules, share_id=share_id) self.assertTrue(mock_driver.revert_to_replicated_snapshot.called) self.assertFalse(mock_quotas_commit.called) if reservations: mock_quotas_rollback.assert_called_once_with( mock.ANY, reservations, project_id='fake_project', user_id='fake_user', share_type_id=replica['share_type_id']) else: self.assertFalse(mock_quotas_rollback.called) self.assertFalse(mock_share_update.called) mock_share_replica_update.assert_called_once_with( mock.ANY, active_replica['id'], {'status': constants.STATUS_REVERTING_ERROR}) mock_share_snapshot_instance_update.assert_called_once_with( mock.ANY, snapshot['instance']['id'], {'status': constants.STATUS_AVAILABLE}) def delete_replicated_snapshot_driver_exception(self): snapshot, snapshot_instances, replicas = ( self._setup_crud_replicated_snapshot_data() ) self.mock_object( db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( self.share_manager.driver, 'delete_replicated_snapshot', mock.Mock(side_effect=exception.ManilaException)) mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') mock_db_delete_call = 
self.mock_object( db, 'share_snapshot_instance_delete') self.assertRaises(exception.ManilaException, self.share_manager.delete_replicated_snapshot, self.context, snapshot['id'], share_id='fake_share') mock_db_update_call.assert_has_calls([ mock.call( self.context, snapshot['instance']['id'], {'status': constants.STATUS_ERROR_DELETING}), mock.call( self.context, snapshot_instances[1]['id'], {'status': constants.STATUS_ERROR_DELETING}), ]) self.assertFalse(mock_db_delete_call.called) def delete_replicated_snapshot_driver_exception_ignored_with_force(self): snapshot, snapshot_instances, replicas = ( self._setup_crud_replicated_snapshot_data() ) self.mock_object( db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( self.share_manager.driver, 'delete_replicated_snapshot', mock.Mock(side_effect=exception.ManilaException)) mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') mock_db_delete_call = self.mock_object( db, 'share_snapshot_instance_delete') retval = self.share_manager.delete_replicated_snapshot( self.context, snapshot['id'], share_id='fake_share') self.assertIsNone(retval) mock_db_delete_call.assert_has_calls([ mock.call( self.context, snapshot['instance']['id']), mock.call( self.context, snapshot_instances[1]['id']), ]) self.assertFalse(mock_db_update_call.called) @ddt.data(None, []) def delete_replicated_snapshot_driver_updates_nothing(self, retval): snapshot, snapshot_instances, replicas = ( self._setup_crud_replicated_snapshot_data() ) self.mock_object( db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( self.share_manager.driver, 'delete_replicated_snapshot', mock.Mock(return_value=retval)) mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') mock_db_delete_call = self.mock_object( db, 'share_snapshot_instance_delete') return_value = self.share_manager.delete_replicated_snapshot( self.context, snapshot['id'], share_id='fake_share') self.assertIsNone(return_value) self.assertFalse(mock_db_delete_call.called) self.assertFalse(mock_db_update_call.called) def delete_replicated_snapshot_driver_deletes_snapshots(self): snapshot, snapshot_instances, replicas = ( self._setup_crud_replicated_snapshot_data() ) retval = [{ 'status': constants.STATUS_DELETED, 'id': snapshot['instance']['id'], }] self.mock_object( db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( self.share_manager.driver, 'delete_replicated_snapshot', mock.Mock(return_value=retval)) mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') mock_db_delete_call = self.mock_object( db, 'share_snapshot_instance_delete') return_value = self.share_manager.delete_replicated_snapshot( self.context, snapshot['id'], 
share_id='fake_share') self.assertIsNone(return_value) mock_db_delete_call.assert_called_once_with( self.context, snapshot['instance']['id']) self.assertFalse(mock_db_update_call.called) @ddt.data(True, False) def delete_replicated_snapshot_drv_del_and_updates_snapshots(self, force): snapshot, snapshot_instances, replicas = ( self._setup_crud_replicated_snapshot_data() ) updated_instance_details = { 'status': constants.STATUS_ERROR, 'id': snapshot_instances[1]['id'], 'provider_location': 'azkaban', } retval = [ { 'status': constants.STATUS_DELETED, 'id': snapshot['instance']['id'], }, ] retval.append(updated_instance_details) self.mock_object( db, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(self.share_manager, '_get_share_server') self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( self.share_manager.driver, 'delete_replicated_snapshot', mock.Mock(return_value=retval)) mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') mock_db_delete_call = self.mock_object( db, 'share_snapshot_instance_delete') return_value = self.share_manager.delete_replicated_snapshot( self.context, snapshot['id'], share_id='fake_share', force=force) self.assertIsNone(return_value) if force: self.assertEqual(2, mock_db_delete_call.call_count) self.assertFalse(mock_db_update_call.called) else: mock_db_delete_call.assert_called_once_with( self.context, snapshot['instance']['id']) mock_db_update_call.assert_called_once_with( self.context, snapshot_instances[1]['id'], updated_instance_details) def test_periodic_share_replica_snapshot_update(self): mock_debug_log = self.mock_object(manager.LOG, 'debug') replicas = 3 * [ fake_replica(host='malfoy@manor#_pool0', replica_state=constants.REPLICA_STATE_IN_SYNC) ] replicas.append(fake_replica(replica_state=constants.STATUS_ACTIVE)) snapshot = fakes.fake_snapshot(create_instance=True, status=constants.STATUS_DELETING) snapshot_instances = 3 * [ fakes.fake_snapshot_instance(base_snapshot=snapshot) ] self.mock_object( db, 'share_replicas_get_all', mock.Mock(return_value=replicas)) self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshot_instances)) mock_snapshot_update_call = self.mock_object( self.share_manager, '_update_replica_snapshot') retval = self.share_manager.periodic_share_replica_snapshot_update( self.context) self.assertIsNone(retval) self.assertEqual(1, mock_debug_log.call_count) self.assertEqual(0, mock_snapshot_update_call.call_count) @ddt.data(True, False) def test_periodic_share_replica_snapshot_update_nothing_to_update( self, has_instances): mock_debug_log = self.mock_object(manager.LOG, 'debug') replicas = 3 * [ fake_replica(host='malfoy@manor#_pool0', replica_state=constants.REPLICA_STATE_IN_SYNC) ] replicas.append(fake_replica(replica_state=constants.STATUS_ACTIVE)) snapshot = fakes.fake_snapshot(create_instance=True, status=constants.STATUS_DELETING) snapshot_instances = 3 * [ fakes.fake_snapshot_instance(base_snapshot=snapshot) ] self.mock_object(db, 'share_replicas_get_all', mock.Mock(side_effect=[[], replicas])) self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(side_effect=[snapshot_instances, []])) mock_snapshot_update_call = self.mock_object( self.share_manager, '_update_replica_snapshot') retval = self.share_manager.periodic_share_replica_snapshot_update( 
self.context) self.assertIsNone(retval) self.assertEqual(1, mock_debug_log.call_count) self.assertEqual(0, mock_snapshot_update_call.call_count) def test__update_replica_snapshot_replica_deleted_from_database(self): replica_not_found = exception.ShareReplicaNotFound(replica_id='xyzzy') self.mock_object(db, 'share_replica_get', mock.Mock( side_effect=replica_not_found)) mock_db_delete_call = self.mock_object( db, 'share_snapshot_instance_delete') mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') mock_driver_update_call = self.mock_object( self.share_manager.driver, 'update_replicated_snapshot') snapshot_instance = fakes.fake_snapshot_instance() retval = self.share_manager._update_replica_snapshot( self.context, snapshot_instance) self.assertIsNone(retval) mock_db_delete_call.assert_called_once_with( self.context, snapshot_instance['id']) self.assertFalse(mock_driver_update_call.called) self.assertFalse(mock_db_update_call.called) def test__update_replica_snapshot_both_deleted_from_database(self): replica_not_found = exception.ShareReplicaNotFound(replica_id='xyzzy') instance_not_found = exception.ShareSnapshotInstanceNotFound( instance_id='spoon!') self.mock_object(db, 'share_replica_get', mock.Mock( side_effect=replica_not_found)) mock_db_delete_call = self.mock_object( db, 'share_snapshot_instance_delete', mock.Mock( side_effect=instance_not_found)) mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') mock_driver_update_call = self.mock_object( self.share_manager.driver, 'update_replicated_snapshot') snapshot_instance = fakes.fake_snapshot_instance() retval = self.share_manager._update_replica_snapshot( self.context, snapshot_instance) self.assertIsNone(retval) mock_db_delete_call.assert_called_once_with( self.context, snapshot_instance['id']) self.assertFalse(mock_driver_update_call.called) self.assertFalse(mock_db_update_call.called) def test__update_replica_snapshot_driver_raises_Not_Found_exception(self): mock_debug_log = self.mock_object(manager.LOG, 'debug') replica = fake_replica() snapshot_instance = fakes.fake_snapshot_instance( status=constants.STATUS_DELETING) self.mock_object( db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica])) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object( self.share_manager.driver, 'update_replicated_snapshot', mock.Mock( side_effect=exception.SnapshotResourceNotFound(name='abc'))) mock_db_delete_call = self.mock_object( db, 'share_snapshot_instance_delete') mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') retval = self.share_manager._update_replica_snapshot( self.context, snapshot_instance, replica_snapshots=None) self.assertIsNone(retval) self.assertEqual(1, mock_debug_log.call_count) mock_db_delete_call.assert_called_once_with( self.context, snapshot_instance['id']) self.assertFalse(mock_db_update_call.called) @ddt.data(exception.NotFound, exception.ManilaException) def test__update_replica_snapshot_driver_raises_other_exception(self, exc): mock_debug_log = self.mock_object(manager.LOG, 'debug') mock_info_log = self.mock_object(manager.LOG, 'info') mock_exception_log = self.mock_object(manager.LOG, 'exception') replica = fake_replica()
snapshot_instance = fakes.fake_snapshot_instance( status=constants.STATUS_CREATING) self.mock_object( db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica])) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object(self.share_manager.driver, 'update_replicated_snapshot', mock.Mock(side_effect=exc)) mock_db_delete_call = self.mock_object( db, 'share_snapshot_instance_delete') mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') retval = self.share_manager._update_replica_snapshot( self.context, snapshot_instance) self.assertIsNone(retval) self.assertEqual(1, mock_exception_log.call_count) self.assertEqual(1, mock_debug_log.call_count) self.assertFalse(mock_info_log.called) mock_db_update_call.assert_called_once_with( self.context, snapshot_instance['id'], {'status': 'error'}) self.assertFalse(mock_db_delete_call.called) @ddt.data(True, False) def test__update_replica_snapshot_driver_updates_replica(self, update): replica = fake_replica() snapshot_instance = fakes.fake_snapshot_instance() driver_update = {} if update: driver_update = { 'id': snapshot_instance['id'], 'provider_location': 'knockturn_alley', 'status': constants.STATUS_AVAILABLE, } mock_debug_log = self.mock_object(manager.LOG, 'debug') mock_info_log = self.mock_object(manager.LOG, 'info') self.mock_object( db, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(db, 'share_snapshot_instance_get', mock.Mock(return_value=snapshot_instance)) self.mock_object(db, 'share_replicas_get_all_by_share', mock.Mock(return_value=[replica])) self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value=None)) self.mock_object(self.share_manager.driver, 'update_replicated_snapshot', mock.Mock(return_value=driver_update)) mock_db_delete_call = self.mock_object( db, 'share_snapshot_instance_delete') mock_db_update_call = self.mock_object( db, 'share_snapshot_instance_update') retval = self.share_manager._update_replica_snapshot( self.context, snapshot_instance, replica_snapshots=None) driver_update['progress'] = '100%' self.assertIsNone(retval) self.assertEqual(1, mock_debug_log.call_count) self.assertFalse(mock_info_log.called) if update: mock_db_update_call.assert_called_once_with( self.context, snapshot_instance['id'], driver_update) else: self.assertFalse(mock_db_update_call.called) self.assertFalse(mock_db_delete_call.called) def test_update_access(self): share_instance = fakes.fake_share_instance() self.mock_object(self.share_manager, '_get_share_server', mock.Mock(return_value='fake_share_server')) self.mock_object(self.share_manager, '_get_share_instance', mock.Mock(return_value=share_instance)) access_rules_update_method = self.mock_object( self.share_manager.access_helper, 'update_access_rules') retval = self.share_manager.update_access( self.context, share_instance['id']) self.assertIsNone(retval) access_rules_update_method.assert_called_once_with( self.context, share_instance['id'], share_server='fake_share_server') @mock.patch('manila.tests.fake_notifier.FakeNotifier._notify') def test_update_share_usage_size(self, mock_notify): instances = 
self._setup_init_mocks(setup_access_rules=False) update_shares = [{'id': 'fake_id', 'used_size': '3', 'gathered_at': 'fake'}] mock_notify.assert_not_called() manager = self.share_manager self.mock_object(manager, 'driver') self.mock_object(manager.db, 'share_instances_get_all_by_host', mock.Mock(return_value=instances)) self.mock_object(manager.db, 'share_instance_get', mock.Mock(side_effect=instances)) mock_driver_call = self.mock_object( manager.driver, 'update_share_usage_size', mock.Mock(return_value=update_shares)) self.share_manager.update_share_usage_size(self.context) self.assert_notify_called(mock_notify, (['INFO', 'share.consumed.size'], )) mock_driver_call.assert_called_once_with( self.context, instances) @mock.patch('manila.tests.fake_notifier.FakeNotifier._notify') def test_update_share_usage_size_fail(self, mock_notify): instances = self._setup_init_mocks(setup_access_rules=False) mock_notify.assert_not_called() self.mock_object(self.share_manager, 'driver') self.mock_object(self.share_manager.db, 'share_instances_get_all_by_host', mock.Mock(return_value=instances)) self.mock_object(self.share_manager.db, 'share_instance_get', mock.Mock(side_effect=instances)) self.mock_object( self.share_manager.driver, 'update_share_usage_size', mock.Mock(side_effect=exception.ProcessExecutionError)) mock_log_exception = self.mock_object(manager.LOG, 'exception') self.share_manager.update_share_usage_size(self.context) self.assertTrue(mock_log_exception.called) def test_periodic_share_status_update(self): instances = self._setup_init_mocks(setup_access_rules=False) instances_creating_from_snap = [ x for x in instances if x['status'] == constants.STATUS_CREATING_FROM_SNAPSHOT ] self.mock_object(self.share_manager, 'driver') self.mock_object(self.share_manager.db, 'share_instances_get_all_by_host', mock.Mock(return_value=instances_creating_from_snap)) mock_update_share_status = self.mock_object( self.share_manager, '_update_share_status') instances_dict = [ self.share_manager._get_share_instance_dict(self.context, si) for si in instances_creating_from_snap] self.share_manager.periodic_share_status_update(self.context) mock_update_share_status.assert_has_calls([ mock.call(self.context, share_instance) for share_instance in instances_dict ]) def test__update_share_status(self): instances = self._setup_init_mocks(setup_access_rules=False) fake_export_locations = ['fake/path/1', 'fake/path'] instance_model_update = { 'status': constants.STATUS_AVAILABLE, 'export_locations': fake_export_locations } expected_si_update_info = { 'status': constants.STATUS_AVAILABLE, 'progress': '100%' } driver_get_status = self.mock_object( self.share_manager.driver, 'get_share_status', mock.Mock(return_value=instance_model_update)) db_si_update = self.mock_object(self.share_manager.db, 'share_instance_update') db_el_update = self.mock_object(self.share_manager.db, 'share_export_locations_update') in_progress_instances = [x for x in instances if x['status'] == constants.STATUS_CREATING_FROM_SNAPSHOT] instance = self.share_manager.db.share_instance_get( self.context, in_progress_instances[0]['id'], with_share_data=True) self.share_manager._update_share_status(self.context, instance) driver_get_status.assert_called_once_with(instance, None) db_si_update.assert_called_once_with(self.context, instance['id'], expected_si_update_info) db_el_update.assert_called_once_with(self.context, instance['id'], fake_export_locations) @ddt.data(mock.Mock(return_value={'status': constants.STATUS_ERROR}), 
mock.Mock(side_effect=exception.ShareBackendException)) def test__update_share_status_share_with_error_or_exception(self, driver_error): instances = self._setup_init_mocks(setup_access_rules=False) expected_si_update_info = { 'status': constants.STATUS_ERROR, 'progress': None, } driver_get_status = self.mock_object( self.share_manager.driver, 'get_share_status', driver_error) db_si_update = self.mock_object(self.share_manager.db, 'share_instance_update') in_progress_instances = [x for x in instances if x['status'] == constants.STATUS_CREATING_FROM_SNAPSHOT] instance = self.share_manager.db.share_instance_get( self.context, in_progress_instances[0]['id'], with_share_data=True) self.share_manager._update_share_status(self.context, instance) driver_get_status.assert_called_once_with(instance, None) db_si_update.assert_called_once_with(self.context, instance['id'], expected_si_update_info) self.share_manager.message_api.create.assert_called_once_with( self.context, message_field.Action.UPDATE, instance['project_id'], resource_type=message_field.Resource.SHARE, resource_id=instance['share_id'], detail=message_field.Detail.DRIVER_FAILED_CREATING_FROM_SNAP) @ddt.ddt class HookWrapperTestCase(test.TestCase): def setUp(self): super(HookWrapperTestCase, self).setUp() self.configuration = mock.Mock() self.configuration.safe_get.return_value = True @manager.add_hooks def _fake_wrapped_method(self, some_arg, some_kwarg): return "foo" def test_hooks_enabled(self): self.hooks = [mock.Mock(return_value=i) for i in range(2)] result = self._fake_wrapped_method( "some_arg", some_kwarg="some_kwarg_value") self.assertEqual("foo", result) for i, mock_hook in enumerate(self.hooks): mock_hook.execute_pre_hook.assert_called_once_with( "some_arg", func_name="_fake_wrapped_method", some_kwarg="some_kwarg_value") mock_hook.execute_post_hook.assert_called_once_with( "some_arg", func_name="_fake_wrapped_method", driver_action_results="foo", pre_hook_data=self.hooks[i].execute_pre_hook.return_value, some_kwarg="some_kwarg_value") def test_hooks_disabled(self): self.hooks = [] result = self._fake_wrapped_method( "some_arg", some_kwarg="some_kwarg_value") self.assertEqual("foo", result) for mock_hook in self.hooks: self.assertFalse(mock_hook.execute_pre_hook.called) self.assertFalse(mock_hook.execute_post_hook.called) manila-10.0.0/manila/tests/share/test_api.py0000664000175000017500000057415113656750227020774 0ustar zuulzuul00000000000000# Copyright 2012 NetApp. All rights reserved. # Copyright (c) 2015 Tom Barron. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
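# NOTE (editor's illustration): the tests in this module all follow the same
# shape: stub a collaborator (db_api, quota, rpcapi) with unittest.mock,
# invoke the share API method under test, then assert on both the return
# value and how the stub was called. A minimal, self-contained sketch of
# that pattern follows; ``FakeShareAPI``, ``ExamplePatternTest`` and
# ``fake_db`` are illustrative names only and are not part of manila.
import unittest
from unittest import mock


class FakeShareAPI(object):
    """Toy stand-in for share.API, used only to show the test style."""

    def __init__(self, db):
        self.db = db

    def get(self, context, share_id):
        # Delegates to the (mocked) database layer, as share.API.get() does.
        return self.db.share_get(context, share_id)


class ExamplePatternTest(unittest.TestCase):
    def test_get_calls_db_once(self):
        # Arrange: replace the db dependency with a Mock and prime its
        # return value.
        fake_db = mock.Mock()
        fake_db.share_get.return_value = {'id': 'fake_share_id'}
        api = FakeShareAPI(fake_db)

        # Act: call the unit under test.
        result = api.get('fake_context', 'fake_share_id')

        # Assert: check the result and verify the stubbed call.
        self.assertEqual({'id': 'fake_share_id'}, result)
        fake_db.share_get.assert_called_once_with(
            'fake_context', 'fake_share_id')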
"""Unit tests for the Share API module.""" import copy import datetime from unittest import mock import ddt from oslo_config import cfg from oslo_utils import timeutils from oslo_utils import uuidutils from manila.common import constants from manila import context from manila.data import rpcapi as data_rpc from manila import db as db_api from manila.db.sqlalchemy import models from manila import exception from manila import policy from manila import quota from manila import share from manila.share import api as share_api from manila.share import share_types from manila import test from manila.tests import db_utils from manila.tests import fake_share as fakes from manila.tests import utils as test_utils from manila import utils CONF = cfg.CONF _FAKE_LIST_OF_ALL_SHARES = [ { 'name': 'foo', 'description': 'ds', 'status': constants.STATUS_AVAILABLE, 'project_id': 'fake_pid_1', 'share_server_id': 'fake_server_1', }, { 'name': 'bar', 'status': constants.STATUS_ERROR, 'project_id': 'fake_pid_2', 'share_server_id': 'fake_server_2', }, { 'name': 'foo1', 'description': 'ds1', 'status': constants.STATUS_AVAILABLE, 'project_id': 'fake_pid_2', 'share_server_id': 'fake_server_3', }, { 'name': 'bar', 'status': constants.STATUS_ERROR, 'project_id': 'fake_pid_2', 'share_server_id': 'fake_server_3', }, ] _FAKE_LIST_OF_ALL_SNAPSHOTS = [ { 'name': 'foo', 'status': constants.STATUS_AVAILABLE, 'project_id': 'fake_pid_1', 'share_id': 'fake_server_1', }, { 'name': 'bar', 'status': constants.STATUS_ERROR, 'project_id': 'fake_pid_2', 'share_id': 'fake_server_2', }, { 'name': 'foo', 'status': constants.STATUS_AVAILABLE, 'project_id': 'fake_pid_2', 'share_id': 'fake_share_id_3', }, { 'name': 'bar', 'status': constants.STATUS_ERROR, 'project_id': 'fake_pid_2', 'share_id': 'fake_share_id_3', }, ] @ddt.ddt class ShareAPITestCase(test.TestCase): def setUp(self): super(ShareAPITestCase, self).setUp() self.context = context.get_admin_context() self.scheduler_rpcapi = mock.Mock() self.share_rpcapi = mock.Mock() self.api = share.API() self.mock_object(self.api, 'scheduler_rpcapi', self.scheduler_rpcapi) self.mock_object(self.api, 'share_rpcapi', self.share_rpcapi) self.mock_object(quota.QUOTAS, 'reserve', lambda *args, **kwargs: None) self.dt_utc = datetime.datetime.utcnow() self.mock_object(timeutils, 'utcnow', mock.Mock(return_value=self.dt_utc)) self.mock_object(share_api.policy, 'check_policy') def _setup_create_mocks(self, protocol='nfs', **kwargs): share = db_utils.create_share( user_id=self.context.user_id, project_id=self.context.project_id, share_type_id=kwargs.pop('share_type_id', 'fake'), **kwargs ) share_data = { 'share_proto': protocol, 'size': 1, 'display_name': 'fakename', 'display_description': 'fakedesc', 'availability_zone': 'fakeaz' } self.mock_object(db_api, 'share_create', mock.Mock(return_value=share)) self.mock_object(self.api, 'create_instance') return share, share_data def _setup_create_instance_mocks(self): host = 'fake' share_type_id = "fake_share_type" share = db_utils.create_share( user_id=self.context.user_id, project_id=self.context.project_id, create_share_instance=False, ) share_instance = db_utils.create_share_instance( share_id=share['id'], share_type_id=share_type_id) share_type = {'fake': 'fake'} self.mock_object(db_api, 'share_instance_create', mock.Mock(return_value=share_instance)) self.mock_object(db_api, 'share_type_get', mock.Mock(return_value=share_type)) az_mock = mock.Mock() type(az_mock.return_value).id = mock.PropertyMock( return_value='fake_id') self.mock_object(db_api, 
'availability_zone_get', az_mock) self.mock_object(self.api.share_rpcapi, 'create_share_instance') self.mock_object(self.api.scheduler_rpcapi, 'create_share_instance') return host, share, share_instance def _setup_create_from_snapshot_mocks(self, use_scheduler=True, host=None): CONF.set_default("use_scheduler_creating_share_from_snapshot", use_scheduler) share_type = fakes.fake_share_type() original_share = db_utils.create_share( user_id=self.context.user_id, project_id=self.context.project_id, status=constants.STATUS_AVAILABLE, host=host if host else 'fake', size=1, share_type_id=share_type['id'], ) snapshot = db_utils.create_snapshot( share_id=original_share['id'], status=constants.STATUS_AVAILABLE, size=1 ) share, share_data = self._setup_create_mocks( snapshot_id=snapshot['id'], share_type_id=share_type['id']) request_spec = { 'share_properties': share.to_dict(), 'share_proto': share['share_proto'], 'share_id': share['id'], 'share_type': None, 'snapshot_id': share['snapshot_id'], } self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value='reservation')) self.mock_object(quota.QUOTAS, 'commit') self.mock_object( share_types, 'get_share_type', mock.Mock(return_value=share_type)) return snapshot, share, share_data, request_spec def _setup_delete_mocks(self, status, snapshots=None, **kwargs): if snapshots is None: snapshots = [] share = db_utils.create_share(status=status, **kwargs) self.mock_object(db_api, 'share_delete') self.mock_object(db_api, 'share_server_update') self.mock_object(db_api, 'share_snapshot_get_all_for_share', mock.Mock(return_value=snapshots)) self.mock_object(self.api, 'delete_instance') return share def _setup_delete_share_instance_mocks(self, **kwargs): share = db_utils.create_share(**kwargs) self.mock_object(db_api, 'share_instance_update', mock.Mock(return_value=share.instance)) self.mock_object(self.api.share_rpcapi, 'delete_share_instance') self.mock_object(db_api, 'share_server_update') return share.instance def test_get_all_admin_no_filters(self): self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[0])) ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=True) shares = self.api.get_all(ctx) share_api.policy.check_policy.assert_called_once_with( ctx, 'share', 'get_all') db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_1', filters={}, is_public=False ) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[0], shares) def test_get_all_admin_filter_by_all_tenants(self): ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=True) self.mock_object(db_api, 'share_get_all', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES)) shares = self.api.get_all(ctx, {'all_tenants': 1}) share_api.policy.check_policy.assert_called_once_with( ctx, 'share', 'get_all') db_api.share_get_all.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', filters={}) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES, shares) def test_get_all_admin_filter_by_all_tenants_with_blank(self): ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=True) self.mock_object(db_api, 'share_get_all', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES)) shares = self.api.get_all(ctx, {'all_tenants': ''}) share_api.policy.check_policy.assert_called_once_with( ctx, 'share', 'get_all') db_api.share_get_all.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', filters={}) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES, shares) def 
test_get_all_admin_filter_by_all_tenants_with_false(self): ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=True) self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[0])) shares = self.api.get_all(ctx, {'all_tenants': 'false'}) share_api.policy.check_policy.assert_called_once_with( ctx, 'share', 'get_all') db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_1', filters={}, is_public=False ) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[0], shares) def test_get_all_admin_filter_by_all_tenants_with_invalid_value(self): ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=True) self.mock_object(db_api, 'share_get_all') self.assertRaises( exception.InvalidInput, self.api.get_all, ctx, {'all_tenants': 'wonk'}) @ddt.data( ({'share_server_id': 'fake_share_server'}, 'list_by_share_server_id'), ({'host': 'fake_host'}, 'list_by_host'), ) @ddt.unpack def test_get_all_by_non_admin_using_admin_filter(self, filters, policy): def fake_policy_checker(*args, **kwargs): if policy == args[2] and not args[0].is_admin: raise exception.NotAuthorized ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=False) self.mock_object( share_api.policy, 'check_policy', mock.Mock(side_effect=fake_policy_checker)) self.assertRaises( exception.NotAuthorized, self.api.get_all, ctx, filters) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), mock.call(ctx, 'share', policy), ]) def test_get_all_admin_filter_by_share_server_and_all_tenants(self): # NOTE(vponomaryov): if share_server_id is provided, the 'all_tenants' opt # should not have any influence. ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=True) self.mock_object(db_api, 'share_get_all_by_share_server', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[2:])) self.mock_object(db_api, 'share_get_all') self.mock_object(db_api, 'share_get_all_by_project') shares = self.api.get_all( ctx, {'share_server_id': 'fake_server_3', 'all_tenants': 1}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), mock.call(ctx, 'share', 'list_by_share_server_id'), ]) db_api.share_get_all_by_share_server.assert_called_once_with( ctx, 'fake_server_3', sort_dir='desc', sort_key='created_at', filters={}, ) db_api.share_get_all_by_project.assert_has_calls([]) db_api.share_get_all.assert_has_calls([]) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[2:], shares) def test_get_all_admin_filter_by_name(self): ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=True) self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[1:])) shares = self.api.get_all(ctx, {'name': 'bar'}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_2', filters={}, is_public=False ) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[1::2], shares) @ddt.data(({'name': 'fo'}, 0), ({'description': 'd'}, 0), ({'name': 'foo', 'description': 'd'}, 0), ({'name': 'foo'}, 1), ({'description': 'ds'}, 1), ({'name~': 'foo', 'description~': 'ds'}, 2), ({'name': 'foo', 'description~': 'ds'}, 1), ({'name~': 'foo', 'description': 'ds'}, 1)) @ddt.unpack def test_get_all_admin_filter_by_name_and_description( self, search_opts, get_share_number): ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=True)
self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES)) shares = self.api.get_all(ctx, search_opts) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_2', filters={}, is_public=False ) self.assertEqual(get_share_number, len(shares)) if get_share_number == 2: self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[0::2], shares) elif get_share_number == 1: self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[:1], shares) @ddt.data('id', 'path') def test_get_all_admin_filter_by_export_location(self, type): ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=True) self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[1:])) shares = self.api.get_all(ctx, {'export_location_' + type: 'test'}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_2', filters={'export_location_' + type: 'test'}, is_public=False ) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[1:], shares) def test_get_all_admin_filter_by_name_and_all_tenants(self): ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=True) self.mock_object(db_api, 'share_get_all', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES)) shares = self.api.get_all(ctx, {'name': 'foo', 'all_tenants': 1}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', filters={}) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[:1], shares) def test_get_all_admin_filter_by_status(self): ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=True) self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[1:])) shares = self.api.get_all(ctx, {'status': constants.STATUS_AVAILABLE}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_2', filters={}, is_public=False ) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[2::4], shares) def test_get_all_admin_filter_by_status_and_all_tenants(self): ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=True) self.mock_object(db_api, 'share_get_all', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES)) shares = self.api.get_all( ctx, {'status': constants.STATUS_ERROR, 'all_tenants': 1}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', filters={}) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[1::2], shares) def test_get_all_non_admin_filter_by_all_tenants(self): # Expected share list only by project of non-admin user ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=False) self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[1:])) shares = self.api.get_all(ctx, {'all_tenants': 1}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_2', filters={}, is_public=False ) 
self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[1:], shares) def test_get_all_non_admin_with_name_and_status_filters(self): ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=False) self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[1:])) shares = self.api.get_all( ctx, {'name': 'bar', 'status': constants.STATUS_ERROR}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_2', filters={}, is_public=False ) # two items expected, one filtered self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[1::2], shares) # one item expected, two filtered shares = self.api.get_all( ctx, {'name': 'foo1', 'status': constants.STATUS_AVAILABLE}) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[2::4], shares) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all_by_project.assert_has_calls([ mock.call(ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_2', filters={}, is_public=False), mock.call(ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_2', filters={}, is_public=False), ]) @ddt.data('True', 'true', '1', 'yes', 'y', 'on', 't', True) def test_get_all_non_admin_public(self, is_public): ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=False) self.mock_object(db_api, 'share_get_all_by_project', mock.Mock( return_value=_FAKE_LIST_OF_ALL_SHARES[1:])) shares = self.api.get_all(ctx, {'is_public': is_public}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_2', filters={}, is_public=True ) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[1:], shares) @ddt.data('False', 'false', '0', 'no', 'n', 'off', 'f', False) def test_get_all_non_admin_not_public(self, is_public): ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=False) self.mock_object(db_api, 'share_get_all_by_project', mock.Mock( return_value=_FAKE_LIST_OF_ALL_SHARES[1:])) shares = self.api.get_all(ctx, {'is_public': is_public}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_2', filters={}, is_public=False ) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[1:], shares) @ddt.data('truefoo', 'bartrue') def test_get_all_invalid_public_value(self, is_public): ctx = context.RequestContext('fake_uid', 'fake_pid_2', is_admin=False) self.assertRaises(ValueError, self.api.get_all, ctx, {'is_public': is_public}) share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), ]) def test_get_all_with_sorting_valid(self): self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[0])) ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=False) shares = self.api.get_all(ctx, sort_key='status', sort_dir='asc') share_api.policy.check_policy.assert_called_once_with( ctx, 'share', 'get_all') db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='asc', sort_key='status', project_id='fake_pid_1', filters={}, is_public=False ) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[0], shares) def test_get_all_sort_key_invalid(self): 
self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[0])) ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=False) self.assertRaises( exception.InvalidInput, self.api.get_all, ctx, sort_key=1, ) share_api.policy.check_policy.assert_called_once_with( ctx, 'share', 'get_all') def test_get_all_sort_dir_invalid(self): self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[0])) ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=False) self.assertRaises( exception.InvalidInput, self.api.get_all, ctx, sort_dir=1, ) share_api.policy.check_policy.assert_called_once_with( ctx, 'share', 'get_all') def _get_all_filter_metadata_or_extra_specs_valid(self, key): self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[0])) ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=False) search_opts = {key: {'foo1': 'bar1', 'foo2': 'bar2'}} shares = self.api.get_all(ctx, search_opts=search_opts.copy()) if key == 'extra_specs': share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), mock.call(ctx, 'share_types_extra_spec', 'index'), ]) else: share_api.policy.check_policy.assert_called_once_with( ctx, 'share', 'get_all') db_api.share_get_all_by_project.assert_called_once_with( ctx, sort_dir='desc', sort_key='created_at', project_id='fake_pid_1', filters=search_opts, is_public=False) self.assertEqual(_FAKE_LIST_OF_ALL_SHARES[0], shares) def test_get_all_filter_by_metadata(self): self._get_all_filter_metadata_or_extra_specs_valid(key='metadata') def test_get_all_filter_by_extra_specs(self): self._get_all_filter_metadata_or_extra_specs_valid(key='extra_specs') def _get_all_filter_metadata_or_extra_specs_invalid(self, key): self.mock_object(db_api, 'share_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SHARES[0])) ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=False) search_opts = {key: "{'foo': 'bar'}"} self.assertRaises(exception.InvalidInput, self.api.get_all, ctx, search_opts=search_opts) if key == 'extra_specs': share_api.policy.check_policy.assert_has_calls([ mock.call(ctx, 'share', 'get_all'), mock.call(ctx, 'share_types_extra_spec', 'index'), ]) else: share_api.policy.check_policy.assert_called_once_with( ctx, 'share', 'get_all') def test_get_all_filter_by_invalid_metadata(self): self._get_all_filter_metadata_or_extra_specs_invalid(key='metadata') def test_get_all_filter_by_invalid_extra_specs(self): self._get_all_filter_metadata_or_extra_specs_invalid(key='extra_specs') @ddt.data(True, False) def test_create_public_and_private_share(self, is_public): share, share_data = self._setup_create_mocks(is_public=is_public) az = share_data.pop('availability_zone') self.api.create( self.context, share_data['share_proto'], share_data['size'], share_data['display_name'], share_data['display_description'], availability_zone=az ) share['status'] = constants.STATUS_CREATING share['host'] = None self.assertSubDictMatch(share_data, db_api.share_create.call_args[0][1]) @ddt.data( {}, { constants.ExtraSpecs.SNAPSHOT_SUPPORT: True, }, { constants.ExtraSpecs.SNAPSHOT_SUPPORT: False, constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT: False, }, { constants.ExtraSpecs.SNAPSHOT_SUPPORT: True, constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT: False, }, { constants.ExtraSpecs.SNAPSHOT_SUPPORT: True, constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT: True, } ) def 
test_create_default_snapshot_semantics(self, extra_specs): share, share_data = self._setup_create_mocks(is_public=False) az = share_data.pop('availability_zone') share_type = fakes.fake_share_type(extra_specs=extra_specs) self.api.create( self.context, share_data['share_proto'], share_data['size'], share_data['display_name'], share_data['display_description'], availability_zone=az, share_type=share_type ) share['status'] = constants.STATUS_CREATING share['host'] = None share_data.update(extra_specs) if extra_specs.get('snapshot_support') is None: share_data['snapshot_support'] = False if extra_specs.get('create_share_from_snapshot_support') is None: share_data['create_share_from_snapshot_support'] = False self.assertSubDictMatch(share_data, db_api.share_create.call_args[0][1]) @ddt.data(*constants.SUPPORTED_SHARE_PROTOCOLS) def test_create_share_valid_protocol(self, proto): share, share_data = self._setup_create_mocks(protocol=proto) az = share_data.pop('availability_zone') all_protos = ','.join( proto for proto in constants.SUPPORTED_SHARE_PROTOCOLS) data = dict(DEFAULT=dict(enabled_share_protocols=all_protos)) with test_utils.create_temp_config_with_opts(data): self.api.create( self.context, proto, share_data['size'], share_data['display_name'], share_data['display_description'], availability_zone=az) share['status'] = constants.STATUS_CREATING share['host'] = None self.assertSubDictMatch(share_data, db_api.share_create.call_args[0][1]) @ddt.data( {'get_all_azs_return': [], 'subnet_by_az_side_effect': []}, {'get_all_azs_return': [{'name': 'az1', 'id': 'az_id_1'}], 'subnet_by_az_side_effect': [None]}, {'get_all_azs_return': [{'name': 'az1', 'id': 'az_id_1'}], 'subnet_by_az_side_effect': ['fake_sns_1']}, {'get_all_azs_return': [{'name': 'az1', 'id': 'az_id_1'}, {'name': 'az2', 'id': 'az_id_2'}], 'subnet_by_az_side_effect': [None, 'fake_sns_2']} ) @ddt.unpack def test__get_all_availability_zones_with_subnets( self, get_all_azs_return, subnet_by_az_side_effect): fake_share_network_id = 'fake_sn_id' self.mock_object(db_api, 'availability_zone_get_all', mock.Mock(return_value=get_all_azs_return)) self.mock_object(db_api, 'share_network_subnet_get_by_availability_zone_id', mock.Mock(side_effect=subnet_by_az_side_effect)) expected_az_names = [] expected_get_az_calls = [] for index, value in enumerate(get_all_azs_return): expected_get_az_calls.append(mock.call( self.context, share_network_id=fake_share_network_id, availability_zone_id=value['id'])) if subnet_by_az_side_effect[index] is not None: expected_az_names.append(value['name']) get_all_subnets = self.api._get_all_availability_zones_with_subnets compatible_azs = get_all_subnets(self.context, fake_share_network_id) db_api.availability_zone_get_all.assert_called_once_with( self.context) db_get_azs_with_subnet = ( db_api.share_network_subnet_get_by_availability_zone_id) db_get_azs_with_subnet.assert_has_calls(expected_get_az_calls) self.assertEqual(expected_az_names, compatible_azs) @ddt.data( {'availability_zones': None, 'azs_with_subnet': ['fake_az_1']}, {'availability_zones': ['fake_az_2'], 'azs_with_subnet': ['fake_az_2']}, {'availability_zones': ['fake_az_1', 'faze_az_2', 'fake_az_3'], 'azs_with_subnet': ['fake_az_3']} ) @ddt.unpack def test_create_share_with_subnets(self, availability_zones, azs_with_subnet): share, share_data = self._setup_create_mocks() reservation = 'fake' self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value=reservation)) self.mock_object(self.api, '_get_all_availability_zones_with_subnets', 
mock.Mock(return_value=azs_with_subnet)) self.mock_object(quota.QUOTAS, 'commit') self.mock_object(self.api, 'create_instance') self.mock_object(db_api, 'share_get') fake_share_network_id = 'fake_sn_id' if availability_zones: expected_azs = ( [az for az in availability_zones if az in azs_with_subnet]) else: expected_azs = azs_with_subnet self.api.create( self.context, share_data['share_proto'], share_data['size'], share_data['display_name'], share_data['display_description'], share_network_id=fake_share_network_id, availability_zones=availability_zones) share['status'] = constants.STATUS_CREATING share['host'] = None quota.QUOTAS.reserve.assert_called_once() get_all_azs_sns = self.api._get_all_availability_zones_with_subnets get_all_azs_sns.assert_called_once_with( self.context, fake_share_network_id) quota.QUOTAS.commit.assert_called_once() self.api.create_instance.assert_called_once_with( self.context, share, share_network_id=fake_share_network_id, host=None, availability_zone=None, share_group=None, share_group_snapshot_member=None, share_type_id=None, availability_zones=expected_azs, snapshot_host=None ) db_api.share_get.assert_called_once() @ddt.data( {'availability_zones': None, 'azs_with_subnet': []}, {'availability_zones': ['fake_az_1'], 'azs_with_subnet': ['fake_az_2']} ) @ddt.unpack def test_create_share_with_subnets_invalid_azs(self, availability_zones, azs_with_subnet): share, share_data = self._setup_create_mocks() reservation = 'fake' self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value=reservation)) self.mock_object(self.api, '_get_all_availability_zones_with_subnets', mock.Mock(return_value=azs_with_subnet)) self.mock_object(quota.QUOTAS, 'commit') self.mock_object(self.api, 'create_instance') self.mock_object(db_api, 'share_get') fake_share_network_id = 'fake_sn_id' self.assertRaises( exception.InvalidInput, self.api.create, self.context, share_data['share_proto'], share_data['size'], share_data['display_name'], share_data['display_description'], share_network_id=fake_share_network_id, availability_zones=availability_zones) quota.QUOTAS.reserve.assert_called_once() get_all_azs_sns = self.api._get_all_availability_zones_with_subnets get_all_azs_sns.assert_called_once_with( self.context, fake_share_network_id) @ddt.data( None, '', 'fake', 'nfsfake', 'cifsfake', 'glusterfsfake', 'hdfsfake') def test_create_share_invalid_protocol(self, proto): share, share_data = self._setup_create_mocks(protocol=proto) all_protos = ','.join( proto for proto in constants.SUPPORTED_SHARE_PROTOCOLS) data = dict(DEFAULT=dict(enabled_share_protocols=all_protos)) with test_utils.create_temp_config_with_opts(data): self.assertRaises( exception.InvalidInput, self.api.create, self.context, proto, share_data['size'], share_data['display_name'], share_data['display_description']) @ddt.data({'overs': {'gigabytes': 'fake'}, 'expected_exception': exception.ShareSizeExceedsAvailableQuota}, {'overs': {'shares': 'fake'}, 'expected_exception': exception.ShareLimitExceeded}) @ddt.unpack def test_create_share_over_quota(self, overs, expected_exception): share, share_data = self._setup_create_mocks() usages = {'gigabytes': {'reserved': 5, 'in_use': 5}, 'shares': {'reserved': 10, 'in_use': 10}} quotas = {'gigabytes': 5, 'shares': 10} exc = exception.OverQuota(overs=overs, usages=usages, quotas=quotas) self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(side_effect=exc)) self.assertRaises( expected_exception, self.api.create, self.context, share_data['share_proto'], share_data['size'], 
share_data['display_name'], share_data['display_description'] ) quota.QUOTAS.reserve.assert_called_once_with( self.context, share_type_id=None, shares=1, gigabytes=share_data['size']) @ddt.data(exception.QuotaError, exception.InvalidShare) def test_create_share_error_on_quota_commit(self, expected_exception): share, share_data = self._setup_create_mocks() reservation = 'fake' self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value=reservation)) self.mock_object(quota.QUOTAS, 'commit', mock.Mock(side_effect=expected_exception('fake'))) self.mock_object(quota.QUOTAS, 'rollback') self.mock_object(db_api, 'share_delete') self.assertRaises( expected_exception, self.api.create, self.context, share_data['share_proto'], share_data['size'], share_data['display_name'], share_data['display_description'] ) quota.QUOTAS.rollback.assert_called_once_with( self.context, reservation, share_type_id=None) db_api.share_delete.assert_called_once_with(self.context, share['id']) def test_create_share_instance_with_host_and_az(self): host, share, share_instance = self._setup_create_instance_mocks() self.api.create_instance(self.context, share, host=host, availability_zone='fake', share_type_id='fake_share_type') db_api.share_instance_create.assert_called_once_with( self.context, share['id'], { 'share_network_id': None, 'status': constants.STATUS_CREATING, 'scheduled_at': self.dt_utc, 'host': host, 'availability_zone_id': 'fake_id', 'share_type_id': 'fake_share_type', 'cast_rules_to_readonly': False, } ) db_api.share_type_get.assert_called_once_with( self.context, share_instance['share_type_id']) self.api.share_rpcapi.create_share_instance.assert_called_once_with( self.context, share_instance, host, request_spec=mock.ANY, filter_properties={}, snapshot_id=share['snapshot_id'], ) self.assertFalse( self.api.scheduler_rpcapi.create_share_instance.called) def test_create_share_instance_without_host(self): _, share, share_instance = self._setup_create_instance_mocks() self.api.create_instance(self.context, share) (self.api.scheduler_rpcapi.create_share_instance. 
assert_called_once_with( self.context, request_spec=mock.ANY, filter_properties={})) self.assertFalse(self.api.share_rpcapi.create_share_instance.called) def test_create_share_instance_from_snapshot(self): snapshot, share, _, _ = self._setup_create_from_snapshot_mocks() request_spec, share_instance = ( self.api.create_share_instance_and_get_request_spec( self.context, share) ) self.assertIsNotNone(share_instance) self.assertEqual(share['id'], request_spec['share_instance_properties']['share_id']) self.assertEqual(share['snapshot_id'], request_spec['snapshot_id']) self.assertFalse( self.api.share_rpcapi.create_share_instance_and_get_request_spec .called) def test_create_instance_share_group_snapshot_member(self): fake_req_spec = { 'share_properties': 'fake_share_properties', 'share_instance_properties': 'fake_share_instance_properties', } share = fakes.fake_share() member_info = { 'host': 'host', 'share_network_id': 'share_network_id', 'share_server_id': 'share_server_id', } fake_instance = fakes.fake_share_instance( share_id=share['id'], **member_info) sg_snap_member = {'share_instance': fake_instance} self.mock_policy_check = self.mock_object( policy, 'check_policy', mock.Mock(return_value=True)) mock_share_rpcapi_call = self.mock_object(self.share_rpcapi, 'create_share_instance') mock_scheduler_rpcapi_call = self.mock_object(self.scheduler_rpcapi, 'create_share_instance') mock_db_share_instance_update = self.mock_object( db_api, 'share_instance_update') self.mock_object( share_api.API, 'create_share_instance_and_get_request_spec', mock.Mock(return_value=(fake_req_spec, fake_instance))) retval = self.api.create_instance( self.context, fakes.fake_share(), share_group_snapshot_member=sg_snap_member) self.assertIsNone(retval) mock_db_share_instance_update.assert_called_once_with( self.context, fake_instance['id'], member_info) self.assertFalse(mock_scheduler_rpcapi_call.called) self.assertFalse(mock_share_rpcapi_call.called) def test_get_share_attributes_from_share_type(self): share_type = { 'extra_specs': { 'snapshot_support': True, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'replication_type': 'dr', } } result = self.api.get_share_attributes_from_share_type(share_type) self.assertEqual(share_type['extra_specs'], result) @ddt.data({}, {'extra_specs': {}}, None) def test_get_share_attributes_from_share_type_defaults(self, share_type): result = self.api.get_share_attributes_from_share_type(share_type) expected = { 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'replication_type': None, } self.assertEqual(expected, result) @ddt.data({'extra_specs': {'snapshot_support': 'fake'}}, {'extra_specs': {'create_share_from_snapshot_support': 'fake'}}) def test_get_share_attributes_from_share_type_invalid(self, share_type): self.assertRaises(exception.InvalidExtraSpec, self.api.get_share_attributes_from_share_type, share_type) @ddt.data( {'replication_type': 'dr', 'dhss': False, 'share_server_id': None}, {'replication_type': 'readable', 'dhss': False, 'share_server_id': None}, {'replication_type': None, 'dhss': False, 'share_server_id': None}, {'replication_type': None, 'dhss': True, 'share_server_id': 'fake'} ) @ddt.unpack def test_manage_new(self, replication_type, dhss, share_server_id): share_data = { 'host': 'fake', 'export_location': 'fake', 'share_proto': 'fake', 'share_type_id': 'fake', } if dhss: share_data['share_server_id'] = 
share_server_id driver_options = {} date = datetime.datetime(1, 1, 1, 1, 1, 1) timeutils.utcnow.return_value = date fake_subnet = db_utils.create_share_network_subnet( share_network_id='fake') share_server = db_utils.create_share_server( status=constants.STATUS_ACTIVE, id=share_server_id, share_network_subnet_id=fake_subnet['id']) fake_share_data = { 'id': 'fakeid', 'status': constants.STATUS_CREATING, } fake_type = { 'id': 'fake_type_id', 'extra_specs': { 'snapshot_support': False, 'replication_type': replication_type, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'driver_handles_share_servers': dhss, }, } share = db_api.share_create(self.context, fake_share_data) self.mock_object(self.scheduler_rpcapi, 'manage_share') self.mock_object(db_api, 'share_create', mock.Mock(return_value=share)) self.mock_object(db_api, 'share_export_locations_update') self.mock_object(db_api, 'share_get', mock.Mock(return_value=share)) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=fake_type)) self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=share_server)) self.mock_object(db_api, 'share_network_subnet_get', mock.Mock(return_value=fake_subnet)) self.mock_object(self.api, 'get_all', mock.Mock(return_value=[])) self.api.manage(self.context, copy.deepcopy(share_data), driver_options) share_data.update({ 'user_id': self.context.user_id, 'project_id': self.context.project_id, 'status': constants.STATUS_MANAGING, 'scheduled_at': date, 'snapshot_support': fake_type['extra_specs']['snapshot_support'], 'create_share_from_snapshot_support': fake_type['extra_specs']['create_share_from_snapshot_support'], 'revert_to_snapshot_support': fake_type['extra_specs']['revert_to_snapshot_support'], 'mount_snapshot_support': fake_type['extra_specs']['mount_snapshot_support'], 'replication_type': replication_type, }) expected_request_spec = self._get_request_spec_dict( share, fake_type, size=0, share_proto=share_data['share_proto'], host=share_data['host']) if dhss: share_data.update({ 'share_network_id': fake_subnet['share_network_id']}) export_location = share_data.pop('export_location') self.api.get_all.assert_called_once_with(self.context, mock.ANY) db_api.share_create.assert_called_once_with(self.context, share_data) db_api.share_get.assert_called_once_with(self.context, share['id']) db_api.share_export_locations_update.assert_called_once_with( self.context, share.instance['id'], export_location ) self.scheduler_rpcapi.manage_share.assert_called_once_with( self.context, share['id'], driver_options, expected_request_spec) if dhss: db_api.share_server_get.assert_called_once_with( self.context, share_data['share_server_id']) db_api.share_network_subnet_get.assert_called_once_with( self.context, share_server['share_network_subnet_id']) @ddt.data((True, exception.InvalidInput, True), (True, exception.InvalidInput, False), (False, exception.InvalidInput, True), (True, exception.InvalidInput, True)) @ddt.unpack def test_manage_new_dhss_true_and_false(self, dhss, exception_type, has_share_server_id): share_data = { 'host': 'fake', 'export_location': 'fake', 'share_proto': 'fake', 'share_type_id': 'fake', } if has_share_server_id: share_data['share_server_id'] = 'fake' driver_options = {} date = datetime.datetime(1, 1, 1, 1, 1, 1) timeutils.utcnow.return_value = date fake_type = { 'id': 'fake_type_id', 'extra_specs': { 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 
'mount_snapshot_support': False, 'driver_handles_share_servers': dhss, }, } self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=fake_type)) self.mock_object(self.api, 'get_all', mock.Mock(return_value=[])) self.assertRaises(exception_type, self.api.manage, self.context, share_data=share_data, driver_options=driver_options ) share_types.get_share_type.assert_called_once_with( self.context, share_data['share_type_id'] ) self.api.get_all.assert_called_once_with( self.context, { 'host': share_data['host'], 'export_location': share_data['export_location'], 'share_proto': share_data['share_proto'], 'share_type_id': share_data['share_type_id'] } ) def test_manage_new_share_server_not_found(self): share_data = { 'host': 'fake', 'export_location': 'fake', 'share_proto': 'fake', 'share_type_id': 'fake', 'share_server_id': 'fake' } driver_options = {} date = datetime.datetime(1, 1, 1, 1, 1, 1) timeutils.utcnow.return_value = date fake_type = { 'id': 'fake_type_id', 'extra_specs': { 'snapshot_support': False, 'replication_type': 'dr', 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'driver_handles_share_servers': True, }, } self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=fake_type)) self.mock_object(self.api, 'get_all', mock.Mock(return_value=[])) self.assertRaises(exception.InvalidInput, self.api.manage, self.context, share_data=share_data, driver_options=driver_options ) share_types.get_share_type.assert_called_once_with( self.context, share_data['share_type_id'] ) self.api.get_all.assert_called_once_with( self.context, { 'host': share_data['host'], 'export_location': share_data['export_location'], 'share_proto': share_data['share_proto'], 'share_type_id': share_data['share_type_id'] } ) def test_manage_new_share_server_not_active(self): share_data = { 'host': 'fake', 'export_location': 'fake', 'share_proto': 'fake', 'share_type_id': 'fake', 'share_server_id': 'fake' } fake_share_data = { 'id': 'fakeid', 'status': constants.STATUS_ERROR, } driver_options = {} date = datetime.datetime(1, 1, 1, 1, 1, 1) timeutils.utcnow.return_value = date fake_type = { 'id': 'fake_type_id', 'extra_specs': { 'snapshot_support': False, 'replication_type': 'dr', 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'driver_handles_share_servers': True, }, } share = db_api.share_create(self.context, fake_share_data) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=fake_type)) self.mock_object(self.api, 'get_all', mock.Mock(return_value=[])) self.mock_object(db_api, 'share_server_get', mock.Mock(return_value=share)) self.assertRaises(exception.InvalidShareServer, self.api.manage, self.context, share_data=share_data, driver_options=driver_options ) share_types.get_share_type.assert_called_once_with( self.context, share_data['share_type_id'] ) self.api.get_all.assert_called_once_with( self.context, { 'host': share_data['host'], 'export_location': share_data['export_location'], 'share_proto': share_data['share_proto'], 'share_type_id': share_data['share_type_id'] } ) db_api.share_server_get.assert_called_once_with( self.context, share_data['share_server_id'] ) @ddt.data(constants.STATUS_MANAGE_ERROR, constants.STATUS_AVAILABLE) def test_manage_duplicate(self, status): share_data = { 'host': 'fake', 'export_location': 'fake', 'share_proto': 'fake', 'share_type_id': 'fake', } driver_options = {} fake_type = { 'id': 'fake_type_id', 
'extra_specs': { 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'driver_handles_share_servers': False, }, } shares = [{'id': 'fake', 'status': status}] self.mock_object(self.api, 'get_all', mock.Mock(return_value=shares)) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=fake_type)) self.assertRaises(exception.InvalidShare, self.api.manage, self.context, share_data, driver_options) def _get_request_spec_dict(self, share, share_type, **kwargs): if share is None: share = {'instance': {}} share_instance = share['instance'] share_properties = { 'size': kwargs.get('size', share.get('size')), 'user_id': kwargs.get('user_id', share.get('user_id')), 'project_id': kwargs.get('project_id', share.get('project_id')), 'snapshot_support': kwargs.get( 'snapshot_support', share_type['extra_specs']['snapshot_support']), 'create_share_from_snapshot_support': kwargs.get( 'create_share_from_snapshot_support', share_type['extra_specs'].get( 'create_share_from_snapshot_support')), 'revert_to_snapshot_support': kwargs.get( 'revert_to_snapshot_support', share_type['extra_specs'].get('revert_to_snapshot_support')), 'mount_snapshot_support': kwargs.get( 'mount_snapshot_support', share_type['extra_specs'].get('mount_snapshot_support')), 'share_proto': kwargs.get('share_proto', share.get('share_proto')), 'share_type_id': share_type['id'], 'is_public': kwargs.get('is_public', share.get('is_public')), 'share_group_id': kwargs.get( 'share_group_id', share.get('share_group_id')), 'source_share_group_snapshot_member_id': kwargs.get( 'source_share_group_snapshot_member_id', share.get('source_share_group_snapshot_member_id')), 'snapshot_id': kwargs.get('snapshot_id', share.get('snapshot_id')), } share_instance_properties = { 'availability_zone_id': kwargs.get( 'availability_zone_id', share_instance.get('availability_zone_id')), 'share_network_id': kwargs.get( 'share_network_id', share_instance.get('share_network_id')), 'share_server_id': kwargs.get( 'share_server_id', share_instance.get('share_server_id')), 'share_id': kwargs.get('share_id', share_instance.get('share_id')), 'host': kwargs.get('host', share_instance.get('host')), 'status': kwargs.get('status', share_instance.get('status')), } request_spec = { 'share_properties': share_properties, 'share_instance_properties': share_instance_properties, 'share_type': share_type, 'share_id': share.get('id'), } return request_spec def test_unmanage(self): share = db_utils.create_share( id='fakeid', host='fake', size='1', status=constants.STATUS_AVAILABLE, user_id=self.context.user_id, project_id=self.context.project_id, task_state=None) self.mock_object(db_api, 'share_update', mock.Mock()) self.api.unmanage(self.context, share) self.share_rpcapi.unmanage_share.assert_called_once_with( self.context, mock.ANY) db_api.share_update.assert_called_once_with( mock.ANY, share['id'], mock.ANY) def test_unmanage_task_state_busy(self): share = db_utils.create_share( id='fakeid', host='fake', size='1', status=constants.STATUS_AVAILABLE, user_id=self.context.user_id, project_id=self.context.project_id, task_state=constants.TASK_STATE_MIGRATION_IN_PROGRESS) self.assertRaises(exception.ShareBusyException, self.api.unmanage, self.context, share) @mock.patch.object(quota.QUOTAS, 'reserve', mock.Mock(return_value='reservation')) @mock.patch.object(quota.QUOTAS, 'commit', mock.Mock()) def test_create_snapshot(self): snapshot = db_utils.create_snapshot( with_share=True, status=constants.STATUS_CREATING, size=1) share = snapshot['share'] fake_name = 
'fakename' fake_desc = 'fakedesc' options = { 'share_id': share['id'], 'user_id': self.context.user_id, 'project_id': self.context.project_id, 'status': constants.STATUS_CREATING, 'progress': '0%', 'share_size': share['size'], 'size': 1, 'display_name': fake_name, 'display_description': fake_desc, 'share_proto': share['share_proto'], } with mock.patch.object(db_api, 'share_snapshot_create', mock.Mock(return_value=snapshot)): self.api.create_snapshot(self.context, share, fake_name, fake_desc) share_api.policy.check_policy.assert_called_once_with( self.context, 'share', 'create_snapshot', share) quota.QUOTAS.reserve.assert_called_once_with( self.context, share_type_id=None, snapshot_gigabytes=1, snapshots=1) quota.QUOTAS.commit.assert_called_once_with( self.context, 'reservation', share_type_id=None) db_api.share_snapshot_create.assert_called_once_with( self.context, options) def test_create_snapshot_space_quota_exceeded(self): share = fakes.fake_share( id=uuidutils.generate_uuid(), size=1, project_id='fake_project', user_id='fake_user', has_replicas=False, status='available') usages = {'snapshot_gigabytes': {'reserved': 10, 'in_use': 0}} quotas = {'snapshot_gigabytes': 10} side_effect = exception.OverQuota( overs='snapshot_gigabytes', usages=usages, quotas=quotas) self.mock_object( quota.QUOTAS, 'reserve', mock.Mock(side_effect=side_effect)) mock_snap_create = self.mock_object(db_api, 'share_snapshot_create') self.assertRaises(exception.SnapshotSizeExceedsAvailableQuota, self.api.create_snapshot, self.context, share, 'fake_name', 'fake_description') mock_snap_create.assert_not_called() def test_create_snapshot_count_quota_exceeded(self): share = fakes.fake_share( id=uuidutils.generate_uuid(), size=1, project_id='fake_project', user_id='fake_user', has_replicas=False, status='available') usages = {'snapshots': {'reserved': 10, 'in_use': 0}} quotas = {'snapshots': 10} side_effect = exception.OverQuota( overs='snapshots', usages=usages, quotas=quotas) self.mock_object( quota.QUOTAS, 'reserve', mock.Mock(side_effect=side_effect)) mock_snap_create = self.mock_object(db_api, 'share_snapshot_create') self.assertRaises(exception.SnapshotLimitExceeded, self.api.create_snapshot, self.context, share, 'fake_name', 'fake_description') mock_snap_create.assert_not_called() def test_manage_snapshot_share_not_found(self): snapshot = fakes.fake_snapshot(share_id='fake_share', as_primitive=True) mock_share_get_call = self.mock_object( db_api, 'share_get', mock.Mock(side_effect=exception.NotFound)) mock_db_snapshot_call = self.mock_object( db_api, 'share_snapshot_get_all_for_share') self.assertRaises(exception.ShareNotFound, self.api.manage_snapshot, self.context, snapshot, {}) self.assertFalse(mock_db_snapshot_call.called) mock_share_get_call.assert_called_once_with( self.context, snapshot['share_id']) def test_manage_snapshot_share_has_replicas(self): share_ref = fakes.fake_share( has_replicas=True, status=constants.STATUS_AVAILABLE) self.mock_object( db_api, 'share_get', mock.Mock(return_value=share_ref)) snapshot = fakes.fake_snapshot(create_instance=True, as_primitive=True) mock_db_snapshot_get_all_for_share_call = self.mock_object( db_api, 'share_snapshot_get_all_for_share') self.assertRaises(exception.InvalidShare, self.api.manage_snapshot, context, snapshot, {}) self.assertFalse(mock_db_snapshot_get_all_for_share_call.called) def test_manage_snapshot_already_managed(self): share_ref = fakes.fake_share( has_replicas=False, status=constants.STATUS_AVAILABLE) snapshot = 
fakes.fake_snapshot(create_instance=True, as_primitive=True) self.mock_object( db_api, 'share_get', mock.Mock(return_value=share_ref)) mock_db_snapshot_call = self.mock_object( db_api, 'share_snapshot_get_all_for_share', mock.Mock( return_value=[snapshot])) mock_db_snapshot_create_call = self.mock_object( db_api, 'share_snapshot_create') self.assertRaises(exception.ManageInvalidShareSnapshot, self.api.manage_snapshot, self.context, snapshot, {}) mock_db_snapshot_call.assert_called_once_with( self.context, snapshot['share_id']) self.assertFalse(mock_db_snapshot_create_call.called) def test_manage_snapshot(self): share_ref = fakes.fake_share( has_replicas=False, status=constants.STATUS_AVAILABLE, host='fake_host') existing_snapshot = fakes.fake_snapshot( create_instance=True, share_id=share_ref['id']) self.mock_object(db_api, 'share_snapshot_get_all_for_share', mock.Mock(return_value=[existing_snapshot])) snapshot_data = { 'share_id': share_ref['id'], 'provider_location': 'someproviderlocation', } expected_snapshot_data = { 'user_id': self.context.user_id, 'project_id': self.context.project_id, 'status': constants.STATUS_MANAGING, 'share_size': share_ref['size'], 'progress': '0%', 'share_proto': share_ref['share_proto'], } expected_snapshot_data.update(**snapshot_data) snapshot = fakes.fake_snapshot( create_instance=True, **expected_snapshot_data) self.mock_object( db_api, 'share_get', mock.Mock(return_value=share_ref)) mock_db_snapshot_create_call = self.mock_object( db_api, 'share_snapshot_create', mock.Mock(return_value=snapshot)) mock_rpc_call = self.mock_object(self.share_rpcapi, 'manage_snapshot', mock.Mock(return_value=snapshot)) new_snap = self.api.manage_snapshot( self.context, snapshot_data, {}) self.assertEqual(new_snap, snapshot) mock_db_snapshot_create_call.assert_called_once_with( self.context, expected_snapshot_data) mock_rpc_call.assert_called_once_with( self.context, snapshot, share_ref['host'], {}) def test_manage_share_server(self): """Tests manage share server""" host = 'fake_host' fake_share_network = { 'id': 'fake_net_id' } fake_share_net_subnet = { 'id': 'fake_subnet_id', 'share_network_id': fake_share_network['id'] } identifier = 'fake_identifier' values = { 'host': host, 'share_network_subnet_id': fake_share_net_subnet['id'], 'status': constants.STATUS_MANAGING, 'is_auto_deletable': False, 'identifier': identifier, } server_managing = { 'id': 'fake_server_id', 'status': constants.STATUS_MANAGING, 'host': host, 'share_network_subnet_id': fake_share_net_subnet['id'], 'is_auto_deletable': False, 'identifier': identifier, } mock_share_server_search = self.mock_object( db_api, 'share_server_search_by_identifier', mock.Mock(side_effect=exception.ShareServerNotFound('fake'))) mock_share_server_get = self.mock_object( db_api, 'share_server_get', mock.Mock( return_value=server_managing) ) mock_share_server_create = self.mock_object( db_api, 'share_server_create', mock.Mock(return_value=server_managing) ) result = self.api.manage_share_server( self.context, 'fake_identifier', host, fake_share_net_subnet, {'opt1': 'val1', 'opt2': 'val2'} ) mock_share_server_create.assert_called_once_with( self.context, values) mock_share_server_get.assert_called_once_with( self.context, 'fake_server_id') mock_share_server_search.assert_called_once_with( self.context, 'fake_identifier') result_dict = { 'host': result['host'], 'share_network_subnet_id': result['share_network_subnet_id'], 'status': result['status'], 'is_auto_deletable': result['is_auto_deletable'], 'identifier': 
result['identifier'], } self.assertEqual(values, result_dict) def test_manage_share_server_invalid(self): server = {'identifier': 'fake_server'} mock_share_server_search = self.mock_object( db_api, 'share_server_search_by_identifier', mock.Mock(return_value=[server])) self.assertRaises( exception.InvalidInput, self.api.manage_share_server, self.context, 'invalid_identifier', 'fake_host', 'fake_share_net', {}) mock_share_server_search.assert_called_once_with( self.context, 'invalid_identifier') def test_unmanage_snapshot(self): fake_host = 'fake_host' snapshot_data = { 'status': constants.STATUS_UNMANAGING, 'terminated_at': timeutils.utcnow(), } snapshot = fakes.fake_snapshot( create_instance=True, share_instance_id='id2', **snapshot_data) mock_db_snap_update_call = self.mock_object( db_api, 'share_snapshot_update', mock.Mock(return_value=snapshot)) mock_rpc_call = self.mock_object( self.share_rpcapi, 'unmanage_snapshot') retval = self.api.unmanage_snapshot( self.context, snapshot, fake_host) self.assertIsNone(retval) mock_db_snap_update_call.assert_called_once_with( self.context, snapshot['id'], snapshot_data) mock_rpc_call.assert_called_once_with( self.context, snapshot, fake_host) def test_unmanage_share_server(self): shr1 = {} share_server = db_utils.create_share_server(**shr1) update_data = {'status': constants.STATUS_UNMANAGING, 'terminated_at': timeutils.utcnow()} mock_share_instances_get_all = self.mock_object( db_api, 'share_instances_get_all_by_share_server', mock.Mock(return_value={})) mock_share_group_get_all = self.mock_object( db_api, 'share_group_get_all_by_share_server', mock.Mock(return_value={})) mock_share_server_update = self.mock_object( db_api, 'share_server_update', mock.Mock(return_value=share_server)) mock_rpc = self.mock_object( self.api.share_rpcapi, 'unmanage_share_server') self.api.unmanage_share_server(self.context, share_server, True) mock_share_instances_get_all.assert_called_once_with( self.context, share_server['id'] ) mock_share_group_get_all.assert_called_once_with( self.context, share_server['id'] ) mock_share_server_update.assert_called_once_with( self.context, share_server['id'], update_data ) mock_rpc.assert_called_once_with( self.context, share_server, force=True) def test_unmanage_share_server_in_use(self): fake_share = db_utils.create_share() fake_share_server = db_utils.create_share_server() fake_share_instance = db_utils.create_share_instance( share_id=fake_share['id']) share_instance_get_all_mock = self.mock_object( db_api, 'share_instances_get_all_by_share_server', mock.Mock(return_value=fake_share_instance) ) self.assertRaises(exception.ShareServerInUse, self.api.unmanage_share_server, self.context, fake_share_server, True) share_instance_get_all_mock.assert_called_once_with( self.context, fake_share_server['id'] ) def test_unmanage_share_server_in_use_share_groups(self): fake_share_server = db_utils.create_share_server() fake_share_groups = db_utils.create_share_group() share_instance_get_all_mock = self.mock_object( db_api, 'share_instances_get_all_by_share_server', mock.Mock(return_value={}) ) group_get_all_mock = self.mock_object( db_api, 'share_group_get_all_by_share_server', mock.Mock(return_value=fake_share_groups) ) self.assertRaises(exception.ShareServerInUse, self.api.unmanage_share_server, self.context, fake_share_server, True) share_instance_get_all_mock.assert_called_once_with( self.context, fake_share_server['id'] ) group_get_all_mock.assert_called_once_with( self.context, fake_share_server['id'] ) @ddt.data(True, False) def 
test_revert_to_snapshot(self, has_replicas): share = fakes.fake_share(id=uuidutils.generate_uuid(), has_replicas=has_replicas) self.mock_object(db_api, 'share_get', mock.Mock(return_value=share)) mock_handle_revert_to_snapshot_quotas = self.mock_object( self.api, '_handle_revert_to_snapshot_quotas', mock.Mock(return_value='fake_reservations')) mock_revert_to_replicated_snapshot = self.mock_object( self.api, '_revert_to_replicated_snapshot') mock_revert_to_snapshot = self.mock_object( self.api, '_revert_to_snapshot') snapshot = fakes.fake_snapshot(share_id=share['id']) self.api.revert_to_snapshot(self.context, share, snapshot) mock_handle_revert_to_snapshot_quotas.assert_called_once_with( self.context, share, snapshot) if not has_replicas: self.assertFalse(mock_revert_to_replicated_snapshot.called) mock_revert_to_snapshot.assert_called_once_with( self.context, share, snapshot, 'fake_reservations') else: mock_revert_to_replicated_snapshot.assert_called_once_with( self.context, share, snapshot, 'fake_reservations') self.assertFalse(mock_revert_to_snapshot.called) @ddt.data(None, 'fake_reservations') def test_revert_to_snapshot_exception(self, reservations): share = fakes.fake_share(id=uuidutils.generate_uuid(), has_replicas=False) self.mock_object(db_api, 'share_get', mock.Mock(return_value=share)) self.mock_object( self.api, '_handle_revert_to_snapshot_quotas', mock.Mock(return_value=reservations)) side_effect = exception.ReplicationException(reason='error') self.mock_object( self.api, '_revert_to_snapshot', mock.Mock(side_effect=side_effect)) mock_quotas_rollback = self.mock_object(quota.QUOTAS, 'rollback') snapshot = fakes.fake_snapshot(share_id=share['id']) self.assertRaises(exception.ReplicationException, self.api.revert_to_snapshot, self.context, share, snapshot) if reservations is not None: mock_quotas_rollback.assert_called_once_with( self.context, reservations, share_type_id=share['instance']['share_type_id']) else: self.assertFalse(mock_quotas_rollback.called) def test_handle_revert_to_snapshot_quotas(self): share = fakes.fake_share( id=uuidutils.generate_uuid(), size=1, project_id='fake_project', user_id='fake_user', has_replicas=False) snapshot = fakes.fake_snapshot( id=uuidutils.generate_uuid(), share_id=share['id'], size=1) mock_quotas_reserve = self.mock_object(quota.QUOTAS, 'reserve') result = self.api._handle_revert_to_snapshot_quotas( self.context, share, snapshot) self.assertIsNone(result) self.assertFalse(mock_quotas_reserve.called) def test_handle_revert_to_snapshot_quotas_different_size(self): share = fakes.fake_share( id=uuidutils.generate_uuid(), size=1, project_id='fake_project', user_id='fake_user', has_replicas=False) snapshot = fakes.fake_snapshot( id=uuidutils.generate_uuid(), share_id=share['id'], size=2) mock_quotas_reserve = self.mock_object( quota.QUOTAS, 'reserve', mock.Mock(return_value='fake_reservations')) result = self.api._handle_revert_to_snapshot_quotas( self.context, share, snapshot) self.assertEqual('fake_reservations', result) mock_quotas_reserve.assert_called_once_with( self.context, project_id='fake_project', gigabytes=1, share_type_id=share['instance']['share_type_id'], user_id='fake_user') def test_handle_revert_to_snapshot_quotas_quota_exceeded(self): share = fakes.fake_share( id=uuidutils.generate_uuid(), size=1, project_id='fake_project', user_id='fake_user', has_replicas=False) snapshot = fakes.fake_snapshot( id=uuidutils.generate_uuid(), share_id=share['id'], size=2) usages = {'gigabytes': {'reserved': 10, 'in_use': 0}} quotas = 
{'gigabytes': 10} side_effect = exception.OverQuota( overs='fake', usages=usages, quotas=quotas) self.mock_object( quota.QUOTAS, 'reserve', mock.Mock(side_effect=side_effect)) self.assertRaises(exception.ShareSizeExceedsAvailableQuota, self.api._handle_revert_to_snapshot_quotas, self.context, share, snapshot) def test__revert_to_snapshot(self): share = fakes.fake_share( id=uuidutils.generate_uuid(), size=1, project_id='fake_project', user_id='fake_user', has_replicas=False) snapshot = fakes.fake_snapshot( id=uuidutils.generate_uuid(), share_id=share['id'], size=2) mock_share_update = self.mock_object(db_api, 'share_update') mock_share_snapshot_update = self.mock_object( db_api, 'share_snapshot_update') mock_revert_rpc_call = self.mock_object( self.share_rpcapi, 'revert_to_snapshot') self.api._revert_to_snapshot( self.context, share, snapshot, 'fake_reservations') mock_share_update.assert_called_once_with( self.context, share['id'], {'status': constants.STATUS_REVERTING}) mock_share_snapshot_update.assert_called_once_with( self.context, snapshot['id'], {'status': constants.STATUS_RESTORING}) mock_revert_rpc_call.assert_called_once_with( self.context, share, snapshot, share['instance']['host'], 'fake_reservations') def test_revert_to_replicated_snapshot(self): share = fakes.fake_share( has_replicas=True, status=constants.STATUS_AVAILABLE) snapshot = fakes.fake_snapshot(share_instance_id='id1') snapshot_instance = fakes.fake_snapshot_instance( base_snapshot=snapshot, id='sid1') replicas = [ fakes.fake_replica( id='rid1', replica_state=constants.REPLICA_STATE_ACTIVE), fakes.fake_replica( id='rid2', replica_state=constants.REPLICA_STATE_IN_SYNC), ] self.mock_object( db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value=replicas[0])) self.mock_object( db_api, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=[snapshot_instance])) mock_share_replica_update = self.mock_object( db_api, 'share_replica_update') mock_share_snapshot_instance_update = self.mock_object( db_api, 'share_snapshot_instance_update') mock_revert_rpc_call = self.mock_object( self.share_rpcapi, 'revert_to_snapshot') self.api._revert_to_replicated_snapshot( self.context, share, snapshot, 'fake_reservations') mock_share_replica_update.assert_called_once_with( self.context, 'rid1', {'status': constants.STATUS_REVERTING}) mock_share_snapshot_instance_update.assert_called_once_with( self.context, 'sid1', {'status': constants.STATUS_RESTORING}) mock_revert_rpc_call.assert_called_once_with( self.context, share, snapshot, replicas[0]['host'], 'fake_reservations') def test_revert_to_replicated_snapshot_no_active_replica(self): share = fakes.fake_share( has_replicas=True, status=constants.STATUS_AVAILABLE) snapshot = fakes.fake_snapshot(share_instance_id='id1') self.mock_object( db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value=None)) self.assertRaises(exception.ReplicationException, self.api._revert_to_replicated_snapshot, self.context, share, snapshot, 'fake_reservations') def test_revert_to_replicated_snapshot_no_snapshot_instance(self): share = fakes.fake_share( has_replicas=True, status=constants.STATUS_AVAILABLE) snapshot = fakes.fake_snapshot(share_instance_id='id1') replicas = [ fakes.fake_replica( id='rid1', replica_state=constants.REPLICA_STATE_ACTIVE), fakes.fake_replica( id='rid2', replica_state=constants.REPLICA_STATE_IN_SYNC), ] self.mock_object( db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value=replicas[0])) 
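        # The snapshot-instance lookup mocked below yields no usable instance
        # for the active replica, so the revert must fail with
        # ReplicationException.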
self.mock_object( db_api, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=[None])) self.assertRaises(exception.ReplicationException, self.api._revert_to_replicated_snapshot, self.context, share, snapshot, 'fake_reservations') def test_create_snapshot_for_replicated_share(self): share = fakes.fake_share( has_replicas=True, status=constants.STATUS_AVAILABLE) snapshot = fakes.fake_snapshot( create_instance=True, share_instance_id='id2') replicas = [ fakes.fake_replica( id='id1', replica_state=constants.REPLICA_STATE_ACTIVE), fakes.fake_replica( id='id2', replica_state=constants.REPLICA_STATE_IN_SYNC) ] self.mock_object(share_api.policy, 'check_policy') self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value='reservation')) self.mock_object( db_api, 'share_snapshot_create', mock.Mock(return_value=snapshot)) self.mock_object(db_api, 'share_replicas_get_all_by_share', mock.Mock(return_value=replicas)) self.mock_object( db_api, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(quota.QUOTAS, 'commit') mock_instance_create_call = self.mock_object( db_api, 'share_snapshot_instance_create') mock_snapshot_rpc_call = self.mock_object( self.share_rpcapi, 'create_snapshot') mock_replicated_snapshot_rpc_call = self.mock_object( self.share_rpcapi, 'create_replicated_snapshot') snapshot_instance_args = { 'status': constants.STATUS_CREATING, 'progress': '0%', 'share_instance_id': 'id1', } retval = self.api.create_snapshot( self.context, share, 'fake_name', 'fake_description') self.assertEqual(snapshot['id'], retval['id']) mock_instance_create_call.assert_called_once_with( self.context, snapshot['id'], snapshot_instance_args) self.assertFalse(mock_snapshot_rpc_call.called) self.assertTrue(mock_replicated_snapshot_rpc_call.called) @mock.patch.object(db_api, 'share_instances_get_all_by_share_server', mock.Mock(return_value=[])) @mock.patch.object(db_api, 'share_group_get_all_by_share_server', mock.Mock(return_value=[])) def test_delete_share_server_no_dependent_shares(self): server = {'id': 'fake_share_server_id'} server_returned = { 'id': 'fake_share_server_id', } self.mock_object(db_api, 'share_server_update', mock.Mock(return_value=server_returned)) self.api.delete_share_server(self.context, server) db_api.share_instances_get_all_by_share_server.assert_called_once_with( self.context, server['id']) (db_api.share_group_get_all_by_share_server. 
assert_called_once_with(self.context, server['id'])) self.share_rpcapi.delete_share_server.assert_called_once_with( self.context, server_returned) @mock.patch.object(db_api, 'share_instances_get_all_by_share_server', mock.Mock(return_value=['fake_share', ])) @mock.patch.object(db_api, 'share_group_get_all_by_share_server', mock.Mock(return_value=[])) def test_delete_share_server_dependent_share_exists(self): server = {'id': 'fake_share_server_id'} self.assertRaises(exception.ShareServerInUse, self.api.delete_share_server, self.context, server) db_api.share_instances_get_all_by_share_server.assert_called_once_with( self.context, server['id']) @mock.patch.object(db_api, 'share_instances_get_all_by_share_server', mock.Mock(return_value=[])) @mock.patch.object(db_api, 'share_group_get_all_by_share_server', mock.Mock(return_value=['fake_group', ])) def test_delete_share_server_dependent_group_exists(self): server = {'id': 'fake_share_server_id'} self.assertRaises(exception.ShareServerInUse, self.api.delete_share_server, self.context, server) db_api.share_instances_get_all_by_share_server.assert_called_once_with( self.context, server['id']) (db_api.share_group_get_all_by_share_server. assert_called_once_with(self.context, server['id'])) @mock.patch.object(db_api, 'share_snapshot_instance_update', mock.Mock()) def test_delete_snapshot(self): snapshot = db_utils.create_snapshot( with_share=True, status=constants.STATUS_AVAILABLE) share = snapshot['share'] with mock.patch.object(db_api, 'share_get', mock.Mock(return_value=share)): self.api.delete_snapshot(self.context, snapshot) self.share_rpcapi.delete_snapshot.assert_called_once_with( self.context, snapshot, share['host'], force=False) share_api.policy.check_policy.assert_called_once_with( self.context, 'share', 'delete_snapshot', snapshot) db_api.share_snapshot_instance_update.assert_called_once_with( self.context, snapshot['instance']['id'], {'status': constants.STATUS_DELETING}) db_api.share_get.assert_called_once_with( self.context, snapshot['share_id']) def test_delete_snapshot_wrong_status(self): snapshot = db_utils.create_snapshot( with_share=True, status=constants.STATUS_CREATING) self.assertRaises(exception.InvalidShareSnapshot, self.api.delete_snapshot, self.context, snapshot) share_api.policy.check_policy.assert_called_once_with( self.context, 'share', 'delete_snapshot', snapshot) @ddt.data(constants.STATUS_MANAGING, constants.STATUS_ERROR_DELETING, constants.STATUS_CREATING, constants.STATUS_AVAILABLE) def test_delete_snapshot_force_delete(self, status): share = fakes.fake_share(id=uuidutils.generate_uuid(), has_replicas=False) snapshot = fakes.fake_snapshot(aggregate_status=status, share=share) snapshot_instance = fakes.fake_snapshot_instance( base_snapshot=snapshot) self.mock_object(db_api, 'share_get', mock.Mock(return_value=share)) self.mock_object( db_api, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=[snapshot_instance])) mock_instance_update_call = self.mock_object( db_api, 'share_snapshot_instance_update') mock_rpc_call = self.mock_object(self.share_rpcapi, 'delete_snapshot') retval = self.api.delete_snapshot(self.context, snapshot, force=True) self.assertIsNone(retval) mock_instance_update_call.assert_called_once_with( self.context, snapshot_instance['id'], {'status': constants.STATUS_DELETING}) mock_rpc_call.assert_called_once_with( self.context, snapshot, share['instance']['host'], force=True) @ddt.data(True, False) def test_delete_snapshot_replicated_snapshot(self, force): share = 
fakes.fake_share(has_replicas=True) snapshot = fakes.fake_snapshot( create_instance=True, share_id=share['id'], status=constants.STATUS_ERROR) snapshot_instance = fakes.fake_snapshot_instance( base_snapshot=snapshot) expected_update_calls = [ mock.call(self.context, x, {'status': constants.STATUS_DELETING}) for x in (snapshot['instance']['id'], snapshot_instance['id']) ] self.mock_object(db_api, 'share_get', mock.Mock(return_value=share)) self.mock_object( db_api, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=[snapshot['instance'], snapshot_instance])) mock_db_update_call = self.mock_object( db_api, 'share_snapshot_instance_update') mock_snapshot_rpc_call = self.mock_object( self.share_rpcapi, 'delete_snapshot') mock_replicated_snapshot_rpc_call = self.mock_object( self.share_rpcapi, 'delete_replicated_snapshot') retval = self.api.delete_snapshot(self.context, snapshot, force=force) self.assertIsNone(retval) self.assertEqual(2, mock_db_update_call.call_count) mock_db_update_call.assert_has_calls(expected_update_calls) mock_replicated_snapshot_rpc_call.assert_called_once_with( self.context, snapshot, share['instance']['host'], share_id=share['id'], force=force) self.assertFalse(mock_snapshot_rpc_call.called) def test_create_snapshot_if_share_not_available(self): share = db_utils.create_share(status=constants.STATUS_ERROR) self.assertRaises(exception.InvalidShare, self.api.create_snapshot, self.context, share, 'fakename', 'fakedesc') share_api.policy.check_policy.assert_called_once_with( self.context, 'share', 'create_snapshot', share) def test_create_snapshot_invalid_task_state(self): share = db_utils.create_share( status=constants.STATUS_AVAILABLE, task_state=constants.TASK_STATE_MIGRATION_IN_PROGRESS) self.assertRaises(exception.ShareBusyException, self.api.create_snapshot, self.context, share, 'fakename', 'fakedesc') share_api.policy.check_policy.assert_called_once_with( self.context, 'share', 'create_snapshot', share) @ddt.data({'use_scheduler': False, 'valid_host': 'fake', 'az': None}, {'use_scheduler': True, 'valid_host': None, 'az': None}, {'use_scheduler': True, 'valid_host': None, 'az': "fakeaz2"}) @ddt.unpack def test_create_from_snapshot(self, use_scheduler, valid_host, az): snapshot, share, share_data, request_spec = ( self._setup_create_from_snapshot_mocks( use_scheduler=use_scheduler, host=valid_host) ) share_type = fakes.fake_share_type() mock_get_share_type_call = self.mock_object( share_types, 'get_share_type', mock.Mock(return_value=share_type)) az = snapshot['share']['availability_zone'] if not az else az self.api.create( self.context, share_data['share_proto'], None, # NOTE(u_glide): Get share size from snapshot share_data['display_name'], share_data['display_description'], snapshot_id=snapshot['id'], availability_zone=az, ) share_data.pop('availability_zone') mock_get_share_type_call.assert_called_once_with( self.context, share['share_type_id']) self.assertSubDictMatch(share_data, db_api.share_create.call_args[0][1]) self.api.create_instance.assert_called_once_with( self.context, share, share_network_id=share['share_network_id'], host=valid_host, share_type_id=share_type['id'], availability_zone=az, share_group=None, share_group_snapshot_member=None, availability_zones=None, snapshot_host=snapshot['share']['instance']['host']) share_api.policy.check_policy.assert_called_once_with( self.context, 'share_snapshot', 'get_snapshot') quota.QUOTAS.reserve.assert_called_once_with( self.context, share_type_id=share_type['id'], gigabytes=1, shares=1) 
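        # The reservation made above is committed against the share type of
        # the source share.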
quota.QUOTAS.commit.assert_called_once_with( self.context, 'reservation', share_type_id=share_type['id']) def test_create_share_share_type_contains_replication_type(self): extra_specs = {'replication_type': constants.REPLICATION_TYPE_READABLE} share_type = db_utils.create_share_type(extra_specs=extra_specs) share_type = db_api.share_type_get(self.context, share_type['id']) share, share_data = self._setup_create_mocks( share_type_id=share_type['id']) az = share_data.pop('availability_zone') self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value='reservation')) self.mock_object(quota.QUOTAS, 'commit') self.api.create( self.context, share_data['share_proto'], share_data['size'], share_data['display_name'], share_data['display_description'], availability_zone=az, share_type=share_type ) quota.QUOTAS.reserve.assert_called_once_with( self.context, share_type_id=share_type['id'], gigabytes=1, shares=1, share_replicas=1, replica_gigabytes=1) quota.QUOTAS.commit.assert_called_once_with( self.context, 'reservation', share_type_id=share_type['id'] ) def test_create_from_snapshot_az_different_from_source(self): snapshot, share, share_data, request_spec = ( self._setup_create_from_snapshot_mocks(use_scheduler=False) ) self.assertRaises(exception.InvalidInput, self.api.create, self.context, share_data['share_proto'], share_data['size'], share_data['display_name'], share_data['display_description'], snapshot_id=snapshot['id'], availability_zone='fake_different_az') def test_create_from_snapshot_with_different_share_type(self): snapshot, share, share_data, request_spec = ( self._setup_create_from_snapshot_mocks() ) share_type = {'id': 'super_fake_share_type'} self.assertRaises(exception.InvalidInput, self.api.create, self.context, share_data['share_proto'], share_data['size'], share_data['display_name'], share_data['display_description'], snapshot_id=snapshot['id'], availability_zone=share_data['availability_zone'], share_type=share_type) def test_get_snapshot(self): fake_get_snap = {'fake_key': 'fake_val'} with mock.patch.object(db_api, 'share_snapshot_get', mock.Mock(return_value=fake_get_snap)): rule = self.api.get_snapshot(self.context, 'fakeid') self.assertEqual(fake_get_snap, rule) share_api.policy.check_policy.assert_called_once_with( self.context, 'share_snapshot', 'get_snapshot') db_api.share_snapshot_get.assert_called_once_with( self.context, 'fakeid') def test_create_from_snapshot_not_available(self): snapshot = db_utils.create_snapshot( with_share=True, status=constants.STATUS_ERROR) self.assertRaises(exception.InvalidShareSnapshot, self.api.create, self.context, 'nfs', '1', 'fakename', 'fakedesc', snapshot_id=snapshot['id'], availability_zone='fakeaz') def test_create_from_snapshot_larger_size(self): snapshot = db_utils.create_snapshot( size=100, status=constants.STATUS_AVAILABLE, with_share=True) self.assertRaises(exception.InvalidInput, self.api.create, self.context, 'nfs', 1, 'fakename', 'fakedesc', availability_zone='fakeaz', snapshot_id=snapshot['id']) def test_create_share_wrong_size_0(self): self.assertRaises(exception.InvalidInput, self.api.create, self.context, 'nfs', 0, 'fakename', 'fakedesc', availability_zone='fakeaz') def test_create_share_wrong_size_some(self): self.assertRaises(exception.InvalidInput, self.api.create, self.context, 'nfs', 'some', 'fakename', 'fakedesc', availability_zone='fakeaz') @ddt.data(constants.STATUS_AVAILABLE, constants.STATUS_ERROR) def test_delete(self, status): share = self._setup_delete_mocks(status) self.api.delete(self.context, share) 
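        # Deletion should cascade to the share instance and look up any
        # snapshots belonging to the share.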
self.api.delete_instance.assert_called_once_with( utils.IsAMatcher(context.RequestContext), utils.IsAMatcher(models.ShareInstance), force=False ) db_api.share_snapshot_get_all_for_share.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share['id']) def test_delete_quota_with_different_user(self): share = self._setup_delete_mocks(constants.STATUS_AVAILABLE) diff_user_context = context.RequestContext( user_id='fake2', project_id='fake', is_admin=False ) self.api.delete(diff_user_context, share) def test_delete_wrong_status(self): share = fakes.fake_share(status='wrongstatus') self.mock_object(db_api, 'share_get', mock.Mock(return_value=share)) self.assertRaises(exception.InvalidShare, self.api.delete, self.context, share) def test_delete_share_has_replicas(self): share = self._setup_delete_mocks(constants.STATUS_AVAILABLE, replication_type='writable') db_utils.create_share_replica(share_id=share['id'], replica_state='in_sync') db_utils.create_share_replica(share_id=share['id'], replica_state='out_of_sync') self.assertRaises(exception.Conflict, self.api.delete, self.context, share) @mock.patch.object(db_api, 'count_share_group_snapshot_members_in_share', mock.Mock(return_value=2)) def test_delete_dependent_share_group_snapshot_members(self): share_server_id = 'fake-ss-id' share = self._setup_delete_mocks(constants.STATUS_AVAILABLE, share_server_id) self.assertRaises(exception.InvalidShare, self.api.delete, self.context, share) @mock.patch.object(db_api, 'share_instance_delete', mock.Mock()) def test_delete_no_host(self): share = self._setup_delete_mocks(constants.STATUS_AVAILABLE, host=None) self.api.delete(self.context, share) db_api.share_instance_delete.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share.instance['id'], need_to_update_usages=True) def test_delete_share_with_snapshots(self): share = self._setup_delete_mocks(constants.STATUS_AVAILABLE, snapshots=['fake']) self.assertRaises( exception.InvalidShare, self.api.delete, self.context, share ) def test_delete_share_invalid_task_state(self): share = db_utils.create_share( status=constants.STATUS_AVAILABLE, task_state=constants.TASK_STATE_MIGRATION_IN_PROGRESS) self.assertRaises(exception.ShareBusyException, self.api.delete, self.context, share) def test_delete_share_quota_error(self): share = self._setup_delete_mocks(constants.STATUS_AVAILABLE) self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(side_effect=exception.QuotaError('fake'))) self.api.delete(self.context, share) @ddt.data({'status': constants.STATUS_AVAILABLE, 'force': False}, {'status': constants.STATUS_ERROR, 'force': True}) @ddt.unpack def test_delete_share_instance(self, status, force): instance = self._setup_delete_share_instance_mocks( status=status, share_server_id='fake') self.api.delete_instance(self.context, instance, force=force) db_api.share_instance_update.assert_called_once_with( self.context, instance['id'], {'status': constants.STATUS_DELETING, 'terminated_at': self.dt_utc} ) self.api.share_rpcapi.delete_share_instance.assert_called_once_with( self.context, instance, force=force ) db_api.share_server_update( self.context, instance['share_server_id'], {'updated_at': self.dt_utc} ) def test_delete_share_instance_invalid_status(self): instance = self._setup_delete_share_instance_mocks( status=constants.STATUS_CREATING, share_server_id='fake') self.assertRaises( exception.InvalidShareInstance, self.api.delete_instance, self.context, instance ) def test_get(self): share = db_utils.create_share() with 
mock.patch.object(db_api, 'share_get', mock.Mock(return_value=share)): result = self.api.get(self.context, 'fakeid') self.assertEqual(share, result) share_api.policy.check_policy.assert_called_once_with( self.context, 'share', 'get', share) db_api.share_get.assert_called_once_with( self.context, 'fakeid') @mock.patch.object(db_api, 'share_snapshot_get_all_by_project', mock.Mock()) def test_get_all_snapshots_admin_not_all_tenants(self): ctx = context.RequestContext('fakeuid', 'fakepid', is_admin=True) self.api.get_all_snapshots(ctx) share_api.policy.check_policy.assert_called_once_with( ctx, 'share_snapshot', 'get_all_snapshots') db_api.share_snapshot_get_all_by_project.assert_called_once_with( ctx, 'fakepid', sort_dir='desc', sort_key='share_id', filters={}) @mock.patch.object(db_api, 'share_snapshot_get_all', mock.Mock()) def test_get_all_snapshots_admin_all_tenants(self): self.api.get_all_snapshots(self.context, search_opts={'all_tenants': 1}) share_api.policy.check_policy.assert_called_once_with( self.context, 'share_snapshot', 'get_all_snapshots') db_api.share_snapshot_get_all.assert_called_once_with( self.context, sort_dir='desc', sort_key='share_id', filters={}) @mock.patch.object(db_api, 'share_snapshot_get_all_by_project', mock.Mock()) def test_get_all_snapshots_not_admin(self): ctx = context.RequestContext('fakeuid', 'fakepid', is_admin=False) self.api.get_all_snapshots(ctx) share_api.policy.check_policy.assert_called_once_with( ctx, 'share_snapshot', 'get_all_snapshots') db_api.share_snapshot_get_all_by_project.assert_called_once_with( ctx, 'fakepid', sort_dir='desc', sort_key='share_id', filters={}) def test_get_all_snapshots_not_admin_search_opts(self): search_opts = {'size': 'fakesize'} fake_objs = [{'name': 'fakename1'}, search_opts] ctx = context.RequestContext('fakeuid', 'fakepid', is_admin=False) self.mock_object(db_api, 'share_snapshot_get_all_by_project', mock.Mock(return_value=fake_objs)) result = self.api.get_all_snapshots(ctx, search_opts) self.assertEqual([search_opts], result) share_api.policy.check_policy.assert_called_once_with( ctx, 'share_snapshot', 'get_all_snapshots') db_api.share_snapshot_get_all_by_project.assert_called_once_with( ctx, 'fakepid', sort_dir='desc', sort_key='share_id', filters=search_opts) @ddt.data(({'name': 'fo'}, 0), ({'description': 'd'}, 0), ({'name': 'foo', 'description': 'd'}, 0), ({'name': 'foo'}, 1), ({'description': 'ds'}, 1), ({'name~': 'foo', 'description~': 'ds'}, 2), ({'name': 'foo', 'description~': 'ds'}, 1), ({'name~': 'foo', 'description': 'ds'}, 1)) @ddt.unpack def test_get_all_snapshots_filter_by_name_and_description( self, search_opts, get_snapshot_number): fake_objs = [{'name': 'fo2', 'description': 'd2'}, {'name': 'foo', 'description': 'ds'}, {'name': 'foo1', 'description': 'ds1'}] ctx = context.RequestContext('fakeuid', 'fakepid', is_admin=False) self.mock_object(db_api, 'share_snapshot_get_all_by_project', mock.Mock(return_value=fake_objs)) result = self.api.get_all_snapshots(ctx, search_opts) self.assertEqual(get_snapshot_number, len(result)) if get_snapshot_number == 2: self.assertEqual(fake_objs[1:], result) elif get_snapshot_number == 1: self.assertEqual(fake_objs[1:2], result) share_api.policy.check_policy.assert_called_once_with( ctx, 'share_snapshot', 'get_all_snapshots') db_api.share_snapshot_get_all_by_project.assert_called_once_with( ctx, 'fakepid', sort_dir='desc', sort_key='share_id', filters=search_opts) def test_get_all_snapshots_with_sorting_valid(self): self.mock_object( db_api, 
'share_snapshot_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SNAPSHOTS[0])) ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=False) snapshots = self.api.get_all_snapshots( ctx, sort_key='status', sort_dir='asc') share_api.policy.check_policy.assert_called_once_with( ctx, 'share_snapshot', 'get_all_snapshots') db_api.share_snapshot_get_all_by_project.assert_called_once_with( ctx, 'fake_pid_1', sort_dir='asc', sort_key='status', filters={}) self.assertEqual(_FAKE_LIST_OF_ALL_SNAPSHOTS[0], snapshots) def test_get_all_snapshots_sort_key_invalid(self): self.mock_object( db_api, 'share_snapshot_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SNAPSHOTS[0])) ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=False) self.assertRaises( exception.InvalidInput, self.api.get_all_snapshots, ctx, sort_key=1, ) share_api.policy.check_policy.assert_called_once_with( ctx, 'share_snapshot', 'get_all_snapshots') def test_get_all_snapshots_sort_dir_invalid(self): self.mock_object( db_api, 'share_snapshot_get_all_by_project', mock.Mock(return_value=_FAKE_LIST_OF_ALL_SNAPSHOTS[0])) ctx = context.RequestContext('fake_uid', 'fake_pid_1', is_admin=False) self.assertRaises( exception.InvalidInput, self.api.get_all_snapshots, ctx, sort_dir=1, ) share_api.policy.check_policy.assert_called_once_with( ctx, 'share_snapshot', 'get_all_snapshots') def test_allow_access_rule_already_exists(self): share = db_utils.create_share(status=constants.STATUS_AVAILABLE) fake_access = db_utils.create_access(share_id=share['id']) self.mock_object(self.api.db, 'share_access_create') self.assertRaises( exception.ShareAccessExists, self.api.allow_access, self.context, share, fake_access['access_type'], fake_access['access_to'], fake_access['access_level']) self.assertFalse(self.api.db.share_access_create.called) def test_allow_access_invalid_access_level(self): share = db_utils.create_share(status=constants.STATUS_AVAILABLE) self.mock_object(self.api.db, 'share_access_create') self.assertRaises( exception.InvalidShareAccess, self.api.allow_access, self.context, share, 'user', 'alice', access_level='execute') self.assertFalse(self.api.db.share_access_create.called) @ddt.data({'host': None}, {'status': constants.STATUS_ERROR_DELETING, 'access_rules_status': constants.STATUS_ACTIVE}, {'host': None, 'access_rules_status': constants.STATUS_ERROR}, {'access_rules_status': constants.STATUS_ERROR}) def test_allow_access_invalid_instance(self, params): share = db_utils.create_share(host='fake') db_utils.create_share_instance(share_id=share['id']) db_utils.create_share_instance(share_id=share['id'], **params) self.mock_object(self.api.db, 'share_access_create') self.assertRaises(exception.InvalidShare, self.api.allow_access, self.context, share, 'ip', '10.0.0.1') self.assertFalse(self.api.db.share_access_create.called) @ddt.data(*(constants.ACCESS_LEVELS + (None,))) def test_allow_access(self, level): share = db_utils.create_share(status=constants.STATUS_AVAILABLE) values = { 'share_id': share['id'], 'access_type': 'fake_access_type', 'access_to': 'fake_access_to', 'access_level': level, 'metadata': None, } fake_access = copy.deepcopy(values) fake_access.update({ 'id': 'fake_access_id', 'state': constants.STATUS_ACTIVE, 'deleted': 'fake_deleted', 'deleted_at': 'fake_deleted_at', 'instance_mappings': ['foo', 'bar'], }) self.mock_object(db_api, 'share_get', mock.Mock(return_value=share)) self.mock_object(db_api, 'share_access_create', mock.Mock(return_value=fake_access)) 
self.mock_object(db_api, 'share_access_get', mock.Mock(return_value=fake_access)) self.mock_object(db_api, 'share_access_get_all_by_type_and_access', mock.Mock(return_value=[])) self.mock_object(self.api, 'allow_access_to_instance') access = self.api.allow_access( self.context, share, fake_access['access_type'], fake_access['access_to'], level) self.assertEqual(fake_access, access) db_api.share_access_create.assert_called_once_with( self.context, values) self.api.allow_access_to_instance.assert_called_once_with( self.context, share.instance) def test_allow_access_to_instance(self): share = db_utils.create_share(host='fake') rpc_method = self.mock_object(self.api.share_rpcapi, 'update_access') self.api.allow_access_to_instance(self.context, share.instance) rpc_method.assert_called_once_with(self.context, share.instance) @ddt.data({'host': None}, {'status': constants.STATUS_ERROR_DELETING, 'access_rules_status': constants.STATUS_ACTIVE}, {'host': None, 'access_rules_status': constants.STATUS_ERROR}, {'access_rules_status': constants.STATUS_ERROR}) def test_deny_access_invalid_instance(self, params): share = db_utils.create_share(host='fake') db_utils.create_share_instance(share_id=share['id']) db_utils.create_share_instance(share_id=share['id'], **params) access_rule = db_utils.create_access(share_id=share['id']) self.mock_object(self.api, 'deny_access_to_instance') self.assertRaises(exception.InvalidShare, self.api.deny_access, self.context, share, access_rule) self.assertFalse(self.api.deny_access_to_instance.called) def test_deny_access(self): share = db_utils.create_share( host='fake', status=constants.STATUS_AVAILABLE, access_rules_status=constants.STATUS_ACTIVE) access_rule = db_utils.create_access(share_id=share['id']) self.mock_object(self.api, 'deny_access_to_instance') retval = self.api.deny_access(self.context, share, access_rule) self.assertIsNone(retval) self.api.deny_access_to_instance.assert_called_once_with( self.context, share.instance, access_rule) def test_deny_access_to_instance(self): share = db_utils.create_share(host='fake') share_instance = db_utils.create_share_instance( share_id=share['id'], host='fake') access = db_utils.create_access(share_id=share['id']) rpc_method = self.mock_object(self.api.share_rpcapi, 'update_access') self.mock_object(db_api, 'share_instance_access_get', mock.Mock(return_value=access.instance_mappings[0])) mock_share_instance_rules_status_update = self.mock_object( self.api.access_helper, 'get_and_update_share_instance_access_rules_status') mock_access_rule_state_update = self.mock_object( self.api.access_helper, 'get_and_update_share_instance_access_rule') self.api.deny_access_to_instance(self.context, share_instance, access) rpc_method.assert_called_once_with(self.context, share_instance) mock_access_rule_state_update.assert_called_once_with( self.context, access['id'], updates={'state': constants.ACCESS_STATE_QUEUED_TO_DENY}, share_instance_id=share_instance['id']) expected_conditional_change = { constants.STATUS_ACTIVE: constants.SHARE_INSTANCE_RULES_SYNCING, } mock_share_instance_rules_status_update.assert_called_once_with( self.context, share_instance_id=share_instance['id'], conditionally_change=expected_conditional_change) def test_access_get(self): with mock.patch.object(db_api, 'share_access_get', mock.Mock(return_value='fake')): rule = self.api.access_get(self.context, 'fakeid') self.assertEqual('fake', rule) db_api.share_access_get.assert_called_once_with( self.context, 'fakeid') def test_access_get_all(self): share = 
db_utils.create_share(id='fakeid') values = { 'fakeacc0id': { 'id': 'fakeacc0id', 'access_type': 'fakeacctype', 'access_to': 'fakeaccto', 'access_level': 'rw', 'share_id': share['id'], }, 'fakeacc1id': { 'id': 'fakeacc1id', 'access_type': 'fakeacctype', 'access_to': 'fakeaccto', 'access_level': 'rw', 'share_id': share['id'], }, } rules = [ db_utils.create_access(**values['fakeacc0id']), db_utils.create_access(**values['fakeacc1id']), ] # add state property values['fakeacc0id']['state'] = constants.STATUS_ACTIVE values['fakeacc1id']['state'] = constants.STATUS_ACTIVE self.mock_object(db_api, 'share_access_get_all_for_share', mock.Mock(return_value=rules)) actual = self.api.access_get_all(self.context, share) self.assertEqual(rules, actual) share_api.policy.check_policy.assert_called_once_with( self.context, 'share', 'access_get_all') db_api.share_access_get_all_for_share.assert_called_once_with( self.context, 'fakeid', filters=None) def test_share_metadata_get(self): metadata = {'a': 'b', 'c': 'd'} share_id = uuidutils.generate_uuid() db_api.share_create(self.context, {'id': share_id, 'metadata': metadata}) self.assertEqual(metadata, db_api.share_metadata_get(self.context, share_id)) def test_share_metadata_update(self): metadata1 = {'a': '1', 'c': '2'} metadata2 = {'a': '3', 'd': '5'} should_be = {'a': '3', 'c': '2', 'd': '5'} share_id = uuidutils.generate_uuid() db_api.share_create(self.context, {'id': share_id, 'metadata': metadata1}) db_api.share_metadata_update(self.context, share_id, metadata2, False) self.assertEqual(should_be, db_api.share_metadata_get(self.context, share_id)) def test_share_metadata_update_delete(self): metadata1 = {'a': '1', 'c': '2'} metadata2 = {'a': '3', 'd': '4'} should_be = metadata2 share_id = uuidutils.generate_uuid() db_api.share_create(self.context, {'id': share_id, 'metadata': metadata1}) db_api.share_metadata_update(self.context, share_id, metadata2, True) self.assertEqual(should_be, db_api.share_metadata_get(self.context, share_id)) def test_extend_invalid_status(self): invalid_status = 'fake' share = db_utils.create_share(status=invalid_status) new_size = 123 self.assertRaises(exception.InvalidShare, self.api.extend, self.context, share, new_size) def test_extend_invalid_task_state(self): share = db_utils.create_share( status=constants.STATUS_AVAILABLE, task_state=constants.TASK_STATE_MIGRATION_IN_PROGRESS) new_size = 123 self.assertRaises(exception.ShareBusyException, self.api.extend, self.context, share, new_size) def test_extend_invalid_size(self): share = db_utils.create_share(status=constants.STATUS_AVAILABLE, size=200) new_size = 123 self.assertRaises(exception.InvalidInput, self.api.extend, self.context, share, new_size) def _setup_extend_mocks(self, supports_replication): replica_list = [] if supports_replication: replica_list.append({'id': 'fake_replica_id'}) replica_list.append({'id': 'fake_replica_id_2'}) self.mock_object(db_api, 'share_replicas_get_all_by_share', mock.Mock(return_value=replica_list)) @ddt.data( (False, 'gigabytes', exception.ShareSizeExceedsAvailableQuota), (True, 'replica_gigabytes', exception.ShareReplicaSizeExceedsAvailableQuota) ) @ddt.unpack def test_extend_quota_error(self, supports_replication, quota_key, expected_exception): self._setup_extend_mocks(supports_replication) share = db_utils.create_share(status=constants.STATUS_AVAILABLE, size=100) new_size = 123 replica_amount = len( db_api.share_replicas_get_all_by_share.return_value) value_to_be_extended = new_size - share['size'] usages = {quota_key: {'reserved': 
11, 'in_use': 12}} quotas = {quota_key: 13} overs = {quota_key: new_size} exc = exception.OverQuota(usages=usages, quotas=quotas, overs=overs) expected_deltas = { 'project_id': share['project_id'], 'gigabytes': value_to_be_extended, 'user_id': share['user_id'], 'share_type_id': share['instance']['share_type_id'] } if supports_replication: expected_deltas.update( {'replica_gigabytes': value_to_be_extended * replica_amount}) self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(side_effect=exc)) self.assertRaises(expected_exception, self.api.extend, self.context, share, new_size) quota.QUOTAS.reserve.assert_called_once_with( mock.ANY, **expected_deltas ) def test_extend_quota_user(self): self._setup_extend_mocks(False) share = db_utils.create_share(status=constants.STATUS_AVAILABLE, size=100) diff_user_context = context.RequestContext( user_id='fake2', project_id='fake', is_admin=False ) new_size = 123 size_increase = int(new_size) - share['size'] self.mock_object(quota.QUOTAS, 'reserve') self.api.extend(diff_user_context, share, new_size) quota.QUOTAS.reserve.assert_called_once_with( diff_user_context, project_id=share['project_id'], gigabytes=size_increase, share_type_id=None, user_id=share['user_id'] ) @ddt.data(True, False) def test_extend_valid(self, supports_replication): self._setup_extend_mocks(supports_replication) share = db_utils.create_share(status=constants.STATUS_AVAILABLE, size=100) new_size = 123 size_increase = int(new_size) - share['size'] replica_amount = len( db_api.share_replicas_get_all_by_share.return_value) expected_deltas = { 'project_id': share['project_id'], 'gigabytes': size_increase, 'user_id': share['user_id'], 'share_type_id': share['instance']['share_type_id'] } if supports_replication: new_replica_size = size_increase * replica_amount expected_deltas.update({'replica_gigabytes': new_replica_size}) self.mock_object(self.api, 'update') self.mock_object(self.api.share_rpcapi, 'extend_share') self.mock_object(quota.QUOTAS, 'reserve') self.api.extend(self.context, share, new_size) self.api.update.assert_called_once_with( self.context, share, {'status': constants.STATUS_EXTENDING}) self.api.share_rpcapi.extend_share.assert_called_once_with( self.context, share, new_size, mock.ANY ) quota.QUOTAS.reserve.assert_called_once_with( self.context, **expected_deltas) def test_shrink_invalid_status(self): invalid_status = 'fake' share = db_utils.create_share(status=invalid_status) self.assertRaises(exception.InvalidShare, self.api.shrink, self.context, share, 123) def test_shrink_invalid_task_state(self): share = db_utils.create_share( status=constants.STATUS_AVAILABLE, task_state=constants.TASK_STATE_MIGRATION_IN_PROGRESS) self.assertRaises(exception.ShareBusyException, self.api.shrink, self.context, share, 123) @ddt.data(300, 0, -1) def test_shrink_invalid_size(self, new_size): share = db_utils.create_share(status=constants.STATUS_AVAILABLE, size=200) self.assertRaises(exception.InvalidInput, self.api.shrink, self.context, share, new_size) @ddt.data(constants.STATUS_AVAILABLE, constants.STATUS_SHRINKING_POSSIBLE_DATA_LOSS_ERROR) def test_shrink_valid(self, share_status): share = db_utils.create_share(status=share_status, size=100) new_size = 50 self.mock_object(self.api, 'update') self.mock_object(self.api.share_rpcapi, 'shrink_share') self.api.shrink(self.context, share, new_size) self.api.update.assert_called_once_with( self.context, share, {'status': constants.STATUS_SHRINKING}) self.api.share_rpcapi.shrink_share.assert_called_once_with( self.context, share, new_size ) 
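    # The tests below cover access rules and export locations on share
    # snapshots: allowing, denying and listing snapshot access entries.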
def test_snapshot_allow_access(self): access_to = '1.1.1.1' access_type = 'ip' share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id'], status=constants.STATUS_AVAILABLE) access = db_utils.create_snapshot_access( share_snapshot_id=snapshot['id']) values = {'share_snapshot_id': snapshot['id'], 'access_type': access_type, 'access_to': access_to} existing_access_check = self.mock_object( db_api, 'share_snapshot_check_for_existing_access', mock.Mock(return_value=False)) access_create = self.mock_object( db_api, 'share_snapshot_access_create', mock.Mock(return_value=access)) self.mock_object(self.api.share_rpcapi, 'snapshot_update_access') out = self.api.snapshot_allow_access(self.context, snapshot, access_type, access_to) self.assertEqual(access, out) existing_access_check.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id'], access_type, access_to) access_create.assert_called_once_with( utils.IsAMatcher(context.RequestContext), values) def test_snapshot_allow_access_instance_exception(self): access_to = '1.1.1.1' access_type = 'ip' share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id']) existing_access_check = self.mock_object( db_api, 'share_snapshot_check_for_existing_access', mock.Mock(return_value=False)) self.assertRaises(exception.InvalidShareSnapshotInstance, self.api.snapshot_allow_access, self.context, snapshot, access_type, access_to) existing_access_check.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id'], access_type, access_to) def test_snapshot_allow_access_access_exists_exception(self): access_to = '1.1.1.1' access_type = 'ip' share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id']) db_utils.create_snapshot_access( share_snapshot_id=snapshot['id'], access_to=access_to, access_type=access_type) existing_access_check = self.mock_object( db_api, 'share_snapshot_check_for_existing_access', mock.Mock(return_value=True)) self.assertRaises(exception.ShareSnapshotAccessExists, self.api.snapshot_allow_access, self.context, snapshot, access_type, access_to) existing_access_check.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['id'], access_type, access_to) def test_snapshot_deny_access(self): share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id'], status=constants.STATUS_AVAILABLE) access = db_utils.create_snapshot_access( share_snapshot_id=snapshot['id']) mapping = {'id': 'fake_id', 'state': constants.STATUS_ACTIVE, 'access_id': access['id']} access_get = self.mock_object( db_api, 'share_snapshot_instance_access_get', mock.Mock(return_value=mapping)) access_update_state = self.mock_object( db_api, 'share_snapshot_instance_access_update') update_access = self.mock_object(self.api.share_rpcapi, 'snapshot_update_access') self.api.snapshot_deny_access(self.context, snapshot, access) access_get.assert_called_once_with( utils.IsAMatcher(context.RequestContext), access['id'], snapshot['instance']['id']) access_update_state.assert_called_once_with( utils.IsAMatcher(context.RequestContext), access['id'], snapshot.instance['id'], {'state': constants.ACCESS_STATE_QUEUED_TO_DENY}) update_access.assert_called_once_with( utils.IsAMatcher(context.RequestContext), snapshot['instance']) def test_snapshot_deny_access_exception(self): share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id']) access = db_utils.create_snapshot_access( 
share_snapshot_id=snapshot['id']) self.assertRaises(exception.InvalidShareSnapshotInstance, self.api.snapshot_deny_access, self.context, snapshot, access) def test_snapshot_access_get_all(self): share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id']) access = [] access.append(db_utils.create_snapshot_access( share_snapshot_id=snapshot['id'])) self.mock_object( db_api, 'share_snapshot_access_get_all_for_share_snapshot', mock.Mock(return_value=access)) out = self.api.snapshot_access_get_all(self.context, snapshot) self.assertEqual(access, out) def test_snapshot_access_get(self): share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id']) access = db_utils.create_snapshot_access( share_snapshot_id=snapshot['id']) self.mock_object( db_api, 'share_snapshot_access_get', mock.Mock(return_value=access)) out = self.api.snapshot_access_get(self.context, access['id']) self.assertEqual(access, out) def test_snapshot_export_locations_get(self): share = db_utils.create_share() snapshot = db_utils.create_snapshot(share_id=share['id']) self.mock_object( db_api, 'share_snapshot_export_locations_get', mock.Mock(return_value='')) out = self.api.snapshot_export_locations_get(self.context, snapshot) self.assertEqual('', out) def test_snapshot_export_location_get(self): fake_el = '/fake_export_location' self.mock_object( db_api, 'share_snapshot_instance_export_location_get', mock.Mock(return_value=fake_el)) out = self.api.snapshot_export_location_get(self.context, 'fake_id') self.assertEqual(fake_el, out) @ddt.data({'share_type': True, 'share_net': True, 'dhss': True}, {'share_type': False, 'share_net': True, 'dhss': True}, {'share_type': False, 'share_net': False, 'dhss': True}, {'share_type': True, 'share_net': False, 'dhss': False}, {'share_type': False, 'share_net': False, 'dhss': False}) @ddt.unpack def test_migration_start(self, share_type, share_net, dhss): host = 'fake2@backend#pool' service = {'availability_zone_id': 'fake_az_id', 'availability_zone': {'name': 'fake_az1'}} share_network = None share_network_id = None if share_net: share_network = db_utils.create_share_network(id='fake_net_id') share_network_id = share_network['id'] fake_type = { 'id': 'fake_type_id', 'extra_specs': { 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'driver_handles_share_servers': dhss, }, } if share_type: fake_type_2 = { 'id': 'fake_type_2_id', 'extra_specs': { 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'driver_handles_share_servers': dhss, 'availability_zones': 'fake_az1,fake_az2', }, } else: fake_type_2 = fake_type share = db_utils.create_share( status=constants.STATUS_AVAILABLE, host='fake@backend#pool', share_type_id=fake_type['id'], share_network_id=share_network_id) request_spec = self._get_request_spec_dict( share, fake_type_2, size=0, availability_zone_id='fake_az_id', share_network_id=share_network_id) self.mock_object(self.scheduler_rpcapi, 'migrate_share_to_host') self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=fake_type)) self.mock_object(utils, 'validate_service_host') self.mock_object(db_api, 'share_instance_update') self.mock_object(db_api, 'share_update') self.mock_object(db_api, 'service_get_by_args', mock.Mock(return_value=service)) if share_type: self.api.migration_start(self.context, share, host, False, True, True, True, 
True, share_network, fake_type_2) else: self.api.migration_start(self.context, share, host, False, True, True, True, True, share_network, None) self.scheduler_rpcapi.migrate_share_to_host.assert_called_once_with( self.context, share['id'], host, False, True, True, True, True, share_network_id, fake_type_2['id'], request_spec) if not share_type: share_types.get_share_type.assert_called_once_with( self.context, fake_type['id']) utils.validate_service_host.assert_called_once_with( self.context, 'fake2@backend') db_api.service_get_by_args.assert_called_once_with( self.context, 'fake2@backend', 'manila-share') db_api.share_update.assert_called_once_with( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_STARTING}) db_api.share_instance_update.assert_called_once_with( self.context, share.instance['id'], {'status': constants.STATUS_MIGRATING}) def test_migration_start_destination_az_unsupported(self): host = 'fake2@backend#pool' host_without_pool = host.split('#')[0] service = {'availability_zone_id': 'fake_az_id', 'availability_zone': {'name': 'fake_az3'}} share_network = db_utils.create_share_network(id='fake_net_id') share_network_id = share_network['id'] existing_share_type = { 'id': '4b5b0920-a294-401b-bb7d-c55b425e1cad', 'name': 'fake_type_1', 'extra_specs': { 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'driver_handles_share_servers': 'true', 'availability_zones': 'fake_az3' }, } new_share_type = { 'id': 'fa844ae2-494d-4da9-95e7-37ac6a26f635', 'name': 'fake_type_2', 'extra_specs': { 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'driver_handles_share_servers': 'true', 'availability_zones': 'fake_az1,fake_az2', }, } share = db_utils.create_share( status=constants.STATUS_AVAILABLE, host='fake@backend#pool', share_type_id=existing_share_type['id'], share_network_id=share_network_id) self.mock_object(self.api, '_get_request_spec_dict') self.mock_object(self.scheduler_rpcapi, 'migrate_share_to_host') self.mock_object(share_types, 'get_share_type') self.mock_object(utils, 'validate_service_host') self.mock_object(db_api, 'share_instance_update') self.mock_object(db_api, 'share_update') self.mock_object(db_api, 'service_get_by_args', mock.Mock(return_value=service)) self.assertRaises(exception.InvalidShare, self.api.migration_start, self.context, share, host, False, True, True, True, False, new_share_network=share_network, new_share_type=new_share_type) utils.validate_service_host.assert_called_once_with( self.context, host_without_pool) share_types.get_share_type.assert_not_called() db_api.share_update.assert_not_called() db_api.service_get_by_args.assert_called_once_with( self.context, host_without_pool, 'manila-share') self.api._get_request_spec_dict.assert_not_called() db_api.share_instance_update.assert_not_called() self.scheduler_rpcapi.migrate_share_to_host.assert_not_called() @ddt.data({'force_host_assisted': True, 'writable': True, 'preserve_metadata': False, 'preserve_snapshots': False, 'nondisruptive': False}, {'force_host_assisted': True, 'writable': False, 'preserve_metadata': True, 'preserve_snapshots': False, 'nondisruptive': False}, {'force_host_assisted': True, 'writable': False, 'preserve_metadata': False, 'preserve_snapshots': True, 'nondisruptive': False}, {'force_host_assisted': True, 'writable': False, 'preserve_metadata': False, 'preserve_snapshots': False, 
'nondisruptive': True}) @ddt.unpack def test_migration_start_invalid_host_and_driver_assisted_params( self, force_host_assisted, writable, preserve_metadata, preserve_snapshots, nondisruptive): self.assertRaises( exception.InvalidInput, self.api.migration_start, self.context, 'some_share', 'some_host', force_host_assisted, preserve_metadata, writable, preserve_snapshots, nondisruptive) @ddt.data(True, False) def test_migration_start_invalid_share_network_type_combo(self, dhss): host = 'fake2@backend#pool' share_network = None if not dhss: share_network = db_utils.create_share_network(id='fake_net_id') fake_type = { 'id': 'fake_type_id', 'extra_specs': { 'snapshot_support': False, 'driver_handles_share_servers': not dhss, }, } fake_type_2 = { 'id': 'fake_type_2_id', 'extra_specs': { 'snapshot_support': False, 'driver_handles_share_servers': dhss, }, } share = db_utils.create_share( status=constants.STATUS_AVAILABLE, host='fake@backend#pool', share_type_id=fake_type['id']) self.mock_object(utils, 'validate_service_host') self.assertRaises( exception.InvalidInput, self.api.migration_start, self.context, share, host, False, True, True, True, True, share_network, fake_type_2) utils.validate_service_host.assert_called_once_with( self.context, 'fake2@backend') def test_migration_start_status_unavailable(self): host = 'fake2@backend#pool' share = db_utils.create_share( status=constants.STATUS_ERROR) self.assertRaises(exception.InvalidShare, self.api.migration_start, self.context, share, host, False, True, True, True, True) def test_migration_start_access_rules_status_error(self): host = 'fake2@backend#pool' instance = db_utils.create_share_instance( share_id='fake_share_id', access_rules_status=constants.STATUS_ERROR, status=constants.STATUS_AVAILABLE) share = db_utils.create_share( id='fake_share_id', instances=[instance]) self.assertRaises(exception.InvalidShare, self.api.migration_start, self.context, share, host, False, True, True, True, True) def test_migration_start_task_state_invalid(self): host = 'fake2@backend#pool' share = db_utils.create_share( status=constants.STATUS_AVAILABLE, task_state=constants.TASK_STATE_MIGRATION_IN_PROGRESS) self.assertRaises(exception.ShareBusyException, self.api.migration_start, self.context, share, host, False, True, True, True, True) def test_migration_start_host_assisted_with_snapshots(self): host = 'fake2@backend#pool' share = db_utils.create_share( host='fake@backend#pool', status=constants.STATUS_AVAILABLE) self.mock_object(db_api, 'share_snapshot_get_all_for_share', mock.Mock(return_value=True)) self.assertRaises(exception.Conflict, self.api.migration_start, self.context, share, host, True, False, False, False, False) def test_migration_start_with_snapshots(self): host = 'fake2@backend#pool' fake_type = { 'id': 'fake_type_id', 'extra_specs': { 'snapshot_support': True, 'driver_handles_share_servers': False, }, } service = {'availability_zone_id': 'fake_az_id', 'availability_zone': {'name': 'fake_az'}} self.mock_object(db_api, 'service_get_by_args', mock.Mock(return_value=service)) self.mock_object(utils, 'validate_service_host') self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=fake_type)) share = db_utils.create_share( host='fake@backend#pool', status=constants.STATUS_AVAILABLE, share_type_id=fake_type['id']) request_spec = self._get_request_spec_dict( share, fake_type, availability_zone_id='fake_az_id') self.api.migration_start(self.context, share, host, False, True, True, True, True) 
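        # The scheduler RPC API must receive the request spec built above.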
self.scheduler_rpcapi.migrate_share_to_host.assert_called_once_with( self.context, share['id'], host, False, True, True, True, True, None, 'fake_type_id', request_spec) def test_migration_start_has_replicas(self): host = 'fake2@backend#pool' share = db_utils.create_share( host='fake@backend#pool', status=constants.STATUS_AVAILABLE, replication_type='dr') for i in range(1, 4): db_utils.create_share_replica( share_id=share['id'], replica_state='in_sync') self.mock_object(db_api, 'share_snapshot_get_all_for_share', mock.Mock(return_value=True)) mock_log = self.mock_object(share_api, 'LOG') mock_snapshot_get_call = self.mock_object( db_api, 'share_snapshot_get_all_for_share') # Share was updated after adding replicas, grabbing it again. share = db_api.share_get(self.context, share['id']) self.assertRaises(exception.Conflict, self.api.migration_start, self.context, share, host, False, True, True, True, True) self.assertTrue(mock_log.error.called) self.assertFalse(mock_snapshot_get_call.called) def test_migration_start_is_member_of_group(self): group = db_utils.create_share_group() share = db_utils.create_share( host='fake@backend#pool', status=constants.STATUS_AVAILABLE, share_group_id=group['id']) mock_log = self.mock_object(share_api, 'LOG') self.assertRaises(exception.InvalidShare, self.api.migration_start, self.context, share, 'fake_host', False, True, True, True, True) self.assertTrue(mock_log.error.called) def test_migration_start_invalid_host(self): host = 'fake@backend#pool' share = db_utils.create_share( host='fake2@backend', status=constants.STATUS_AVAILABLE) self.mock_object(db_api, 'share_snapshot_get_all_for_share', mock.Mock(return_value=False)) self.assertRaises(exception.ServiceNotFound, self.api.migration_start, self.context, share, host, False, True, True, True, True) @ddt.data({'dhss': True, 'new_share_network_id': 'fake_net_id', 'new_share_type_id': 'fake_type_id'}, {'dhss': False, 'new_share_network_id': None, 'new_share_type_id': 'fake_type_id'}, {'dhss': True, 'new_share_network_id': 'fake_net_id', 'new_share_type_id': None}) @ddt. 
unpack def test_migration_start_same_data_as_source( self, dhss, new_share_network_id, new_share_type_id): host = 'fake@backend#pool' fake_type_src = { 'id': 'fake_type_id', 'extra_specs': { 'snapshot_support': True, 'driver_handles_share_servers': True, }, } new_share_type_param = None if new_share_type_id: new_share_type_param = { 'id': new_share_type_id, 'extra_specs': { 'snapshot_support': True, 'driver_handles_share_servers': dhss, }, } new_share_net_param = None if new_share_network_id: new_share_net_param = db_utils.create_share_network( id=new_share_network_id) share = db_utils.create_share( host='fake@backend#pool', status=constants.STATUS_AVAILABLE, share_type_id=fake_type_src['id'], share_network_id=new_share_network_id) self.mock_object(utils, 'validate_service_host') self.mock_object(db_api, 'share_update') self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=fake_type_src)) result = self.api.migration_start( self.context, share, host, False, True, True, True, True, new_share_net_param, new_share_type_param) self.assertEqual(200, result) db_api.share_update.assert_called_once_with( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_SUCCESS}) @ddt.data({}, {'replication_type': None}) def test_create_share_replica_invalid_share_type(self, attributes): share = fakes.fake_share(id='FAKE_SHARE_ID', **attributes) mock_request_spec_call = self.mock_object( self.api, 'create_share_instance_and_get_request_spec') mock_db_update_call = self.mock_object(db_api, 'share_replica_update') mock_scheduler_rpcapi_call = self.mock_object( self.api.scheduler_rpcapi, 'create_share_replica') self.assertRaises(exception.InvalidShare, self.api.create_share_replica, self.context, share) self.assertFalse(mock_request_spec_call.called) self.assertFalse(mock_db_update_call.called) self.assertFalse(mock_scheduler_rpcapi_call.called) def test_create_share_replica_busy_share(self): share = fakes.fake_share( id='FAKE_SHARE_ID', task_state='doing_something_real_important', is_busy=True, replication_type='dr') mock_request_spec_call = self.mock_object( self.api, 'create_share_instance_and_get_request_spec') mock_db_update_call = self.mock_object(db_api, 'share_replica_update') mock_scheduler_rpcapi_call = self.mock_object( self.api.scheduler_rpcapi, 'create_share_replica') self.assertRaises(exception.ShareBusyException, self.api.create_share_replica, self.context, share) self.assertFalse(mock_request_spec_call.called) self.assertFalse(mock_db_update_call.called) self.assertFalse(mock_scheduler_rpcapi_call.called) @ddt.data(None, []) def test_create_share_replica_no_active_replica(self, active_replicas): share = fakes.fake_share( id='FAKE_SHARE_ID', replication_type='dr') mock_request_spec_call = self.mock_object( self.api, 'create_share_instance_and_get_request_spec') mock_db_update_call = self.mock_object(db_api, 'share_replica_update') mock_scheduler_rpcapi_call = self.mock_object( self.api.scheduler_rpcapi, 'create_share_replica') self.mock_object(db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value=active_replicas)) self.assertRaises(exception.ReplicationException, self.api.create_share_replica, self.context, share) self.assertFalse(mock_request_spec_call.called) self.assertFalse(mock_db_update_call.called) self.assertFalse(mock_scheduler_rpcapi_call.called) @ddt.data(None, 'fake-share-type') def test_create_share_replica_type_doesnt_support_AZ(self, st_name): share_type = fakes.fake_share_type( name=st_name, 
extra_specs={'availability_zones': 'zone 1,zone 3'}) share = fakes.fake_share( id='FAKE_SHARE_ID', replication_type='dr', availability_zone='zone 2') share['instance'].update({ 'share_type': share_type, 'share_type_id': '359b9851-2bd5-4404-89a9-5cd22bbc5fb9', }) mock_request_spec_call = self.mock_object( self.api, 'create_share_instance_and_get_request_spec') mock_db_update_call = self.mock_object(db_api, 'share_replica_update') mock_scheduler_rpcapi_call = self.mock_object( self.api.scheduler_rpcapi, 'create_share_replica') self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=share_type)) self.mock_object(db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value=mock.Mock( return_value={'host': 'fake_ar_host'}))) self.assertRaises(exception.InvalidShare, self.api.create_share_replica, self.context, share, availability_zone='zone 2') share_types.get_share_type.assert_called_once_with( self.context, '359b9851-2bd5-4404-89a9-5cd22bbc5fb9') self.assertFalse(mock_request_spec_call.called) self.assertFalse(mock_db_update_call.called) self.assertFalse(mock_scheduler_rpcapi_call.called) def test_create_share_replica_subnet_not_found(self): request_spec = fakes.fake_replica_request_spec() replica = request_spec['share_instance_properties'] extra_specs = { 'availability_zones': 'FAKE_AZ,FAKE_AZ2', 'replication_type': constants.REPLICATION_TYPE_DR } share_type = db_utils.create_share_type(extra_specs=extra_specs) share_type = db_api.share_type_get(self.context, share_type['id']) az_name = 'FAKE_AZ' share = db_utils.create_share( id=replica['share_id'], replication_type='dr') self.mock_object(db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value=mock.Mock( return_value={'host': 'fake_ar_host'}))) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=share_type)) self.mock_object(db_api, 'availability_zone_get') self.mock_object(db_api, 'share_network_subnet_get_by_availability_zone_id', mock.Mock(return_value=None)) self.assertRaises(exception.InvalidShare, self.api.create_share_replica, self.context, share, availability_zone=az_name, share_network_id='fake_id') (db_api.share_replicas_get_available_active_replica .assert_called_once_with(self.context, share['id'])) self.assertTrue(share_types.get_share_type.called) db_api.availability_zone_get.assert_called_once_with( self.context, az_name) self.assertTrue( db_api.share_network_subnet_get_by_availability_zone_id.called) def test_create_share_replica_az_not_found(self): request_spec = fakes.fake_replica_request_spec() replica = request_spec['share_instance_properties'] extra_specs = { 'availability_zones': 'FAKE_AZ,FAKE_AZ2', 'replication_type': constants.REPLICATION_TYPE_DR } share_type = db_utils.create_share_type(extra_specs=extra_specs) share_type = db_api.share_type_get(self.context, share_type['id']) az_name = 'FAKE_AZ' share = db_utils.create_share( id=replica['share_id'], replication_type='dr') self.mock_object(db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value=mock.Mock( return_value={'host': 'fake_ar_host'}))) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=share_type)) side_effect = exception.AvailabilityZoneNotFound(id=az_name) self.mock_object(db_api, 'availability_zone_get', mock.Mock(side_effect=side_effect)) self.assertRaises(exception.InvalidInput, self.api.create_share_replica, self.context, share, availability_zone=az_name, share_network_id='fake_id') (db_api.share_replicas_get_available_active_replica 
.assert_called_once_with(self.context, share['id'])) self.assertTrue(share_types.get_share_type.called) db_api.availability_zone_get.assert_called_once_with( self.context, az_name) @ddt.data( {'availability_zones': '', 'azs_with_subnet': ['fake_az_1']}, {'availability_zones': 'fake_az_1,fake_az_2', 'azs_with_subnet': ['fake_az_2']} ) @ddt.unpack def test_create_share_replica_azs_with_subnets(self, availability_zones, azs_with_subnet): request_spec = fakes.fake_replica_request_spec() replica = request_spec['share_instance_properties'] share_network_id = 'fake_share_network_id' extra_specs = { 'availability_zones': availability_zones, 'replication_type': constants.REPLICATION_TYPE_DR } share_type = db_utils.create_share_type(extra_specs=extra_specs) share_type = db_api.share_type_get(self.context, share_type['id']) share = db_utils.create_share( id=replica['share_id'], replication_type='dr', share_type_id=share_type['id']) cast_rules_to_readonly = ( share['replication_type'] == constants.REPLICATION_TYPE_READABLE) fake_replica = fakes.fake_replica(id=replica['id']) fake_request_spec = fakes.fake_replica_request_spec() self.mock_object(db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value={'host': 'fake_ar_host'})) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=share_type)) mock_get_all_az_subnet = self.mock_object( self.api, '_get_all_availability_zones_with_subnets', mock.Mock(return_value=azs_with_subnet)) if availability_zones == '': expected_azs = azs_with_subnet else: availability_zones = [ t for t in availability_zones.split(',') if availability_zones] expected_azs = ( [az for az in availability_zones if az in azs_with_subnet]) self.mock_object( self.api, 'create_share_instance_and_get_request_spec', mock.Mock(return_value=(fake_request_spec, fake_replica))) self.mock_object(db_api, 'share_replica_update') mock_snapshot_get_all_call = self.mock_object( db_api, 'share_snapshot_get_all_for_share', mock.Mock(return_value=[])) mock_sched_rpcapi_call = self.mock_object( self.api.scheduler_rpcapi, 'create_share_replica') self.api.create_share_replica( self.context, share, share_network_id=share_network_id) (db_api.share_replicas_get_available_active_replica .assert_called_once_with(self.context, share['id'])) self.assertTrue(share_types.get_share_type.called) mock_get_all_az_subnet.assert_called_once_with( self.context, share_network_id ) (self.api.create_share_instance_and_get_request_spec. 
assert_called_once_with( self.context, share, availability_zone=None, share_network_id=share_network_id, share_type_id=share_type['id'], availability_zones=expected_azs, cast_rules_to_readonly=cast_rules_to_readonly)) db_api.share_replica_update.assert_called_once() mock_snapshot_get_all_call.assert_called_once() mock_sched_rpcapi_call.assert_called_once() @ddt.data( {'availability_zones': '', 'azs_with_subnet': []}, {'availability_zones': 'fake_az_1,fake_az_2', 'azs_with_subnet': ['fake_az_3']} ) @ddt.unpack def test_create_share_replica_azs_with_subnets_invalid_input( self, availability_zones, azs_with_subnet): request_spec = fakes.fake_replica_request_spec() replica = request_spec['share_instance_properties'] share_network_id = 'fake_share_network_id' extra_specs = { 'availability_zones': availability_zones, 'replication_type': constants.REPLICATION_TYPE_DR } share_type = db_utils.create_share_type(extra_specs=extra_specs) share_type = db_api.share_type_get(self.context, share_type['id']) share = db_utils.create_share( id=replica['share_id'], replication_type='dr', share_type_id=share_type['id']) self.mock_object(db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value={'host': 'fake_ar_host'})) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=share_type)) mock_get_all_az_subnet = self.mock_object( self.api, '_get_all_availability_zones_with_subnets', mock.Mock(return_value=azs_with_subnet)) self.assertRaises( exception.InvalidInput, self.api.create_share_replica, self.context, share, share_network_id=share_network_id) (db_api.share_replicas_get_available_active_replica .assert_called_once_with(self.context, share['id'])) self.assertTrue(share_types.get_share_type.called) mock_get_all_az_subnet.assert_called_once_with( self.context, share_network_id ) @ddt.data({'has_snapshots': True, 'extra_specs': { 'replication_type': constants.REPLICATION_TYPE_DR, }, 'share_network_id': None}, {'has_snapshots': False, 'extra_specs': { 'availability_zones': 'FAKE_AZ,FAKE_AZ2', 'replication_type': constants.REPLICATION_TYPE_DR, }, 'share_network_id': None}, {'has_snapshots': True, 'extra_specs': { 'availability_zones': 'FAKE_AZ,FAKE_AZ2', 'replication_type': constants.REPLICATION_TYPE_READABLE, }, 'share_network_id': None}, {'has_snapshots': False, 'extra_specs': { 'replication_type': constants.REPLICATION_TYPE_READABLE, }, 'share_network_id': 'fake_sn_id'}) @ddt.unpack def test_create_share_replica(self, has_snapshots, extra_specs, share_network_id): request_spec = fakes.fake_replica_request_spec() replication_type = extra_specs['replication_type'] replica = request_spec['share_instance_properties'] share_type = db_utils.create_share_type(extra_specs=extra_specs) share_type = db_api.share_type_get(self.context, share_type['id']) share = db_utils.create_share( id=replica['share_id'], replication_type=replication_type, share_type_id=share_type['id']) snapshots = ( [fakes.fake_snapshot(), fakes.fake_snapshot()] if has_snapshots else [] ) cast_rules_to_readonly = ( replication_type == constants.REPLICATION_TYPE_READABLE) fake_replica = fakes.fake_replica(id=replica['id']) fake_request_spec = fakes.fake_replica_request_spec() self.mock_object(db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value={'host': 'fake_ar_host'})) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=share_type)) self.mock_object(db_api, 'availability_zone_get') self.mock_object(db_api, 'share_network_subnet_get_by_availability_zone_id') 
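        # Stub out request-spec generation, DB updates and the scheduler RPC so only the API-level replica creation flow is exercised.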
self.mock_object( share_api.API, 'create_share_instance_and_get_request_spec', mock.Mock(return_value=(fake_request_spec, fake_replica))) self.mock_object(db_api, 'share_replica_update') mock_sched_rpcapi_call = self.mock_object( self.api.scheduler_rpcapi, 'create_share_replica') mock_snapshot_get_all_call = self.mock_object( db_api, 'share_snapshot_get_all_for_share', mock.Mock(return_value=snapshots)) mock_snapshot_instance_create_call = self.mock_object( db_api, 'share_snapshot_instance_create') expected_snap_instance_create_call_count = 2 if has_snapshots else 0 result = self.api.create_share_replica( self.context, share, availability_zone='FAKE_AZ', share_network_id=share_network_id) self.assertTrue(mock_sched_rpcapi_call.called) self.assertEqual(replica, result) share_types.get_share_type.assert_called_once_with( self.context, share_type['id']) mock_snapshot_get_all_call.assert_called_once_with( self.context, fake_replica['share_id']) self.assertEqual(expected_snap_instance_create_call_count, mock_snapshot_instance_create_call.call_count) expected_azs = extra_specs.get('availability_zones', '') expected_azs = expected_azs.split(',') if expected_azs else [] (share_api.API.create_share_instance_and_get_request_spec. assert_called_once_with( self.context, share, availability_zone='FAKE_AZ', share_network_id=share_network_id, share_type_id=share_type['id'], availability_zones=expected_azs, cast_rules_to_readonly=cast_rules_to_readonly)) def test_delete_last_active_replica(self): fake_replica = fakes.fake_replica( share_id='FAKE_SHARE_ID', replica_state=constants.REPLICA_STATE_ACTIVE) self.mock_object(db_api, 'share_replicas_get_all_by_share', mock.Mock(return_value=[fake_replica])) mock_log = self.mock_object(share_api.LOG, 'info') self.assertRaises( exception.ReplicationException, self.api.delete_share_replica, self.context, fake_replica) self.assertFalse(mock_log.called) @ddt.data(True, False) def test_delete_share_replica_no_host(self, has_snapshots): snapshots = [{'id': 'xyz'}, {'id': 'abc'}, {'id': 'pqr'}] snapshots = snapshots if has_snapshots else [] replica = fakes.fake_replica('FAKE_ID', host='') mock_sched_rpcapi_call = self.mock_object( self.share_rpcapi, 'delete_share_replica') mock_db_replica_delete_call = self.mock_object( db_api, 'share_replica_delete') mock_db_update_call = self.mock_object(db_api, 'share_replica_update') mock_snapshot_get_call = self.mock_object( db_api, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=snapshots)) mock_snapshot_instance_delete_call = self.mock_object( db_api, 'share_snapshot_instance_delete') self.api.delete_share_replica(self.context, replica) self.assertFalse(mock_sched_rpcapi_call.called) mock_db_replica_delete_call.assert_called_once_with( self.context, replica['id']) mock_db_update_call.assert_called_once_with( self.context, replica['id'], {'status': constants.STATUS_DELETING, 'terminated_at': mock.ANY}) mock_snapshot_get_call.assert_called_once_with( self.context, {'share_instance_ids': replica['id']}) self.assertEqual( len(snapshots), mock_snapshot_instance_delete_call.call_count) @ddt.data(True, False) def test_delete_share_replica(self, force): replica = fakes.fake_replica('FAKE_ID', host='HOSTA@BackendB#PoolC') mock_sched_rpcapi_call = self.mock_object( self.share_rpcapi, 'delete_share_replica') mock_db_update_call = self.mock_object(db_api, 'share_replica_update') self.api.delete_share_replica(self.context, replica, force=force) mock_sched_rpcapi_call.assert_called_once_with( self.context, replica, 
force=force) mock_db_update_call.assert_called_once_with( self.context, replica['id'], {'status': constants.STATUS_DELETING, 'terminated_at': mock.ANY}) @ddt.data(constants.STATUS_CREATING, constants.STATUS_DELETING, constants.STATUS_ERROR, constants.STATUS_EXTENDING, constants.STATUS_REPLICATION_CHANGE, constants.STATUS_MANAGING, constants.STATUS_ERROR_DELETING) def test_promote_share_replica_non_available_status(self, status): replica = fakes.fake_replica( status=status, replica_state=constants.REPLICA_STATE_IN_SYNC) mock_rpcapi_promote_share_replica_call = self.mock_object( self.share_rpcapi, 'promote_share_replica') self.assertRaises(exception.ReplicationException, self.api.promote_share_replica, self.context, replica) self.assertFalse(mock_rpcapi_promote_share_replica_call.called) @ddt.data(constants.REPLICA_STATE_OUT_OF_SYNC, constants.STATUS_ERROR) def test_promote_share_replica_out_of_sync_non_admin(self, replica_state): fake_user_context = context.RequestContext( user_id=None, project_id=None, is_admin=False, read_deleted='no', overwrite=False) replica = fakes.fake_replica( status=constants.STATUS_AVAILABLE, replica_state=replica_state) mock_rpcapi_promote_share_replica_call = self.mock_object( self.share_rpcapi, 'promote_share_replica') self.assertRaises(exception.AdminRequired, self.api.promote_share_replica, fake_user_context, replica) self.assertFalse(mock_rpcapi_promote_share_replica_call.called) @ddt.data(constants.REPLICA_STATE_OUT_OF_SYNC, constants.STATUS_ERROR) def test_promote_share_replica_admin_authorized(self, replica_state): replica = fakes.fake_replica( status=constants.STATUS_AVAILABLE, replica_state=replica_state, host='HOSTA@BackendB#PoolC') self.mock_object(db_api, 'share_replica_get', mock.Mock(return_value=replica)) mock_rpcapi_promote_share_replica_call = self.mock_object( self.share_rpcapi, 'promote_share_replica') mock_db_update_call = self.mock_object(db_api, 'share_replica_update') retval = self.api.promote_share_replica( self.context, replica) self.assertEqual(replica, retval) mock_db_update_call.assert_called_once_with( self.context, replica['id'], {'status': constants.STATUS_REPLICATION_CHANGE}) mock_rpcapi_promote_share_replica_call.assert_called_once_with( self.context, replica) def test_promote_share_replica(self): replica = fakes.fake_replica('FAKE_ID', host='HOSTA@BackendB#PoolC') self.mock_object(db_api, 'share_replica_get', mock.Mock(return_value=replica)) self.mock_object(db_api, 'share_replica_update') mock_sched_rpcapi_call = self.mock_object( self.share_rpcapi, 'promote_share_replica') result = self.api.promote_share_replica(self.context, replica) mock_sched_rpcapi_call.assert_called_once_with( self.context, replica) self.assertEqual(replica, result) def test_update_share_replica_no_host(self): replica = fakes.fake_replica('FAKE_ID') replica['host'] = None mock_rpcapi_update_share_replica_call = self.mock_object( self.share_rpcapi, 'update_share_replica') self.assertRaises(exception.InvalidHost, self.api.update_share_replica, self.context, replica) self.assertFalse(mock_rpcapi_update_share_replica_call.called) def test_update_share_replica(self): replica = fakes.fake_replica('FAKE_ID', host='HOSTA@BackendB#PoolC') mock_rpcapi_update_share_replica_call = self.mock_object( self.share_rpcapi, 'update_share_replica') retval = self.api.update_share_replica(self.context, replica) self.assertTrue(mock_rpcapi_update_share_replica_call.called) self.assertIsNone(retval) @ddt.data({'overs': {'replica_gigabytes': 'fake'}, 'expected_exception': 
exception.ShareReplicaSizeExceedsAvailableQuota}, {'overs': {'share_replicas': 'fake'}, 'expected_exception': exception.ShareReplicasLimitExceeded}) @ddt.unpack def test_create_share_replica_over_quota(self, overs, expected_exception): request_spec = fakes.fake_replica_request_spec() replica = request_spec['share_instance_properties'] share = db_utils.create_share(replication_type='dr', id=replica['share_id']) share_type = db_utils.create_share_type() share_type = db_api.share_type_get(self.context, share_type['id']) usages = {'replica_gigabytes': {'reserved': 5, 'in_use': 5}, 'share_replicas': {'reserved': 5, 'in_use': 5}} quotas = {'share_replicas': 5, 'replica_gigabytes': 5} exc = exception.OverQuota(overs=overs, usages=usages, quotas=quotas) self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(side_effect=exc)) self.mock_object(db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value={'host': 'fake_ar_host'})) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=share_type)) self.assertRaises( expected_exception, self.api.create_share_replica, self.context, share ) quota.QUOTAS.reserve.assert_called_once_with( self.context, share_type_id=share_type['id'], share_replicas=1, replica_gigabytes=share['size']) (db_api.share_replicas_get_available_active_replica .assert_called_once_with(self.context, share['id'])) share_types.get_share_type.assert_called_once_with( self.context, share['instance']['share_type_id']) def test_create_share_replica_error_on_quota_commit(self): request_spec = fakes.fake_replica_request_spec() replica = request_spec['share_instance_properties'] share_type = db_utils.create_share_type() fake_replica = fakes.fake_replica(id=replica['id']) share = db_utils.create_share(replication_type='dr', id=fake_replica['share_id'], share_type_id=share_type['id']) share_network_id = None share_type = db_api.share_type_get(self.context, share_type['id']) expected_azs = share_type['extra_specs'].get('availability_zones', '') expected_azs = expected_azs.split(',') if expected_azs else [] reservation = 'fake' self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value=reservation)) self.mock_object(quota.QUOTAS, 'commit', mock.Mock(side_effect=exception.QuotaError('fake'))) self.mock_object(db_api, 'share_replica_delete') self.mock_object(quota.QUOTAS, 'rollback') self.mock_object(db_api, 'share_replicas_get_available_active_replica', mock.Mock(return_value={'host': 'fake_ar_host'})) self.mock_object(share_types, 'get_share_type', mock.Mock(return_value=share_type)) self.mock_object( share_api.API, 'create_share_instance_and_get_request_spec', mock.Mock(return_value=(request_spec, fake_replica))) self.assertRaises( exception.QuotaError, self.api.create_share_replica, self.context, share ) db_api.share_replica_delete.assert_called_once_with( self.context, replica['id'], need_to_update_usages=False) quota.QUOTAS.rollback.assert_called_once_with( self.context, reservation, share_type_id=share['instance']['share_type_id']) (db_api.share_replicas_get_available_active_replica. assert_called_once_with(self.context, share['id'])) share_types.get_share_type.assert_called_once_with( self.context, share['instance']['share_type_id']) (share_api.API.create_share_instance_and_get_request_spec. 
assert_called_once_with(self.context, share, availability_zone=None, share_network_id=share_network_id, share_type_id=share_type['id'], availability_zones=expected_azs, cast_rules_to_readonly=False)) def test_migration_complete(self): instance1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING) instance2 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING_TO) share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_DATA_COPYING_COMPLETED, instances=[instance1, instance2]) self.mock_object(db_api, 'share_instance_get', mock.Mock(return_value=instance1)) self.mock_object(self.api.share_rpcapi, 'migration_complete') self.api.migration_complete(self.context, share) self.api.share_rpcapi.migration_complete.assert_called_once_with( self.context, instance1, instance2['id']) @ddt.data(constants.TASK_STATE_DATA_COPYING_STARTING, constants.TASK_STATE_MIGRATION_SUCCESS, constants.TASK_STATE_DATA_COPYING_IN_PROGRESS, constants.TASK_STATE_MIGRATION_ERROR, constants.TASK_STATE_MIGRATION_CANCELLED, None) def test_migration_complete_task_state_invalid(self, task_state): share = db_utils.create_share( id='fake_id', task_state=task_state) self.assertRaises(exception.InvalidShare, self.api.migration_complete, self.context, share) def test_migration_complete_status_invalid(self): instance1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_ERROR) instance2 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_ERROR) share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_DATA_COPYING_COMPLETED, instances=[instance1, instance2]) self.assertRaises(exception.ShareMigrationFailed, self.api.migration_complete, self.context, share) @ddt.data(None, Exception('fake')) def test_migration_cancel(self, exc): share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_DATA_COPYING_IN_PROGRESS) services = ['fake_service'] self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.mock_object(db_api, 'service_get_all_by_topic', mock.Mock(return_value=services)) self.mock_object(data_rpc.DataAPI, 'data_copy_cancel', mock.Mock(side_effect=[exc])) if exc: self.assertRaises( exception.ShareMigrationError, self.api.migration_cancel, self.context, share) else: self.api.migration_cancel(self.context, share) data_rpc.DataAPI.data_copy_cancel.assert_called_once_with( self.context, share['id']) db_api.service_get_all_by_topic.assert_called_once_with( self.context, 'manila-data') def test_migration_cancel_service_down(self): service = 'fake_service' instance1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING) instance2 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING_TO) share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_DATA_COPYING_IN_PROGRESS, instances=[instance1, instance2]) self.mock_object(utils, 'service_is_up', mock.Mock(return_value=False)) self.mock_object(db_api, 'share_instance_get', mock.Mock(return_value=instance1)) self.mock_object(db_api, 'service_get_all_by_topic', mock.Mock(return_value=service)) self.assertRaises(exception.InvalidShare, self.api.migration_cancel, self.context, share) def test_migration_cancel_driver(self): service = 'fake_service' instance1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING, host='some_host') instance2 = db_utils.create_share_instance( 
share_id='fake_id', status=constants.STATUS_MIGRATING_TO) share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, instances=[instance1, instance2]) self.mock_object(db_api, 'share_instance_get', mock.Mock(return_value=instance1)) self.mock_object(self.api.share_rpcapi, 'migration_cancel') self.mock_object(db_api, 'service_get_by_args', mock.Mock(return_value=service)) self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.api.migration_cancel(self.context, share) self.api.share_rpcapi.migration_cancel.assert_called_once_with( self.context, instance1, instance2['id']) db_api.service_get_by_args.assert_called_once_with( self.context, instance1['host'], 'manila-share') @ddt.data(constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, constants.TASK_STATE_DATA_COPYING_COMPLETED) def test_migration_cancel_driver_service_down(self, task_state): service = 'fake_service' instance1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING, host='some_host') instance2 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING_TO) share = db_utils.create_share( id='fake_id', task_state=task_state, instances=[instance1, instance2]) self.mock_object(utils, 'service_is_up', mock.Mock(return_value=False)) self.mock_object(db_api, 'share_instance_get', mock.Mock(return_value=instance1)) self.mock_object(db_api, 'service_get_by_args', mock.Mock(return_value=service)) self.assertRaises(exception.InvalidShare, self.api.migration_cancel, self.context, share) @ddt.data(constants.TASK_STATE_DATA_COPYING_STARTING, constants.TASK_STATE_MIGRATION_SUCCESS, constants.TASK_STATE_MIGRATION_ERROR, constants.TASK_STATE_MIGRATION_CANCELLED, None) def test_migration_cancel_task_state_invalid(self, task_state): share = db_utils.create_share( id='fake_id', task_state=task_state) self.assertRaises(exception.InvalidShare, self.api.migration_cancel, self.context, share) @ddt.data({'total_progress': 50}, Exception('fake')) def test_migration_get_progress(self, expected): share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_DATA_COPYING_IN_PROGRESS) services = ['fake_service'] self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.mock_object(db_api, 'service_get_all_by_topic', mock.Mock(return_value=services)) self.mock_object(data_rpc.DataAPI, 'data_copy_get_progress', mock.Mock(side_effect=[expected])) if not isinstance(expected, Exception): result = self.api.migration_get_progress(self.context, share) self.assertEqual(expected, result) else: self.assertRaises( exception.ShareMigrationError, self.api.migration_get_progress, self.context, share) data_rpc.DataAPI.data_copy_get_progress.assert_called_once_with( self.context, share['id']) db_api.service_get_all_by_topic.assert_called_once_with( self.context, 'manila-data') def test_migration_get_progress_service_down(self): instance1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING) instance2 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING_TO) share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_DATA_COPYING_IN_PROGRESS, instances=[instance1, instance2]) services = ['fake_service'] self.mock_object(utils, 'service_is_up', mock.Mock(return_value=False)) self.mock_object(db_api, 'service_get_all_by_topic', mock.Mock(return_value=services)) 
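        # The manila-data service is reported as down, so requesting migration progress must be rejected.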
self.mock_object(db_api, 'share_instance_get', mock.Mock(return_value=instance1)) self.assertRaises(exception.InvalidShare, self.api.migration_get_progress, self.context, share) def test_migration_get_progress_driver(self): expected = {'total_progress': 50} instance1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING, host='some_host') instance2 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING_TO) share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, instances=[instance1, instance2]) service = 'fake_service' self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.mock_object(db_api, 'service_get_by_args', mock.Mock(return_value=service)) self.mock_object(db_api, 'share_instance_get', mock.Mock(return_value=instance1)) self.mock_object(self.api.share_rpcapi, 'migration_get_progress', mock.Mock(return_value=expected)) result = self.api.migration_get_progress(self.context, share) self.assertEqual(expected, result) self.api.share_rpcapi.migration_get_progress.assert_called_once_with( self.context, instance1, instance2['id']) db_api.service_get_by_args.assert_called_once_with( self.context, instance1['host'], 'manila-share') def test_migration_get_progress_driver_error(self): instance1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING, host='some_host') instance2 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING_TO) share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, instances=[instance1, instance2]) service = 'fake_service' self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.mock_object(db_api, 'service_get_by_args', mock.Mock(return_value=service)) self.mock_object(db_api, 'share_instance_get', mock.Mock(return_value=instance1)) self.mock_object(self.api.share_rpcapi, 'migration_get_progress', mock.Mock(side_effect=Exception('fake'))) self.assertRaises(exception.ShareMigrationError, self.api.migration_get_progress, self.context, share) self.api.share_rpcapi.migration_get_progress.assert_called_once_with( self.context, instance1, instance2['id']) def test_migration_get_progress_driver_service_down(self): service = 'fake_service' instance1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING, host='some_host') instance2 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING_TO) share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, instances=[instance1, instance2]) self.mock_object(utils, 'service_is_up', mock.Mock(return_value=False)) self.mock_object(db_api, 'share_instance_get', mock.Mock(return_value=instance1)) self.mock_object(db_api, 'service_get_by_args', mock.Mock(return_value=service)) self.assertRaises(exception.InvalidShare, self.api.migration_get_progress, self.context, share) @ddt.data(constants.TASK_STATE_MIGRATION_STARTING, constants.TASK_STATE_MIGRATION_DRIVER_STARTING, constants.TASK_STATE_DATA_COPYING_STARTING, constants.TASK_STATE_MIGRATION_IN_PROGRESS) def test_migration_get_progress_task_state_progress_0(self, task_state): share = db_utils.create_share( id='fake_id', task_state=task_state) expected = {'total_progress': 0} result = self.api.migration_get_progress(self.context, share) self.assertEqual(expected, result) 
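    # These task states short-circuit to a fixed 100% progress value; no driver or data-service call is made.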
@ddt.data(constants.TASK_STATE_MIGRATION_SUCCESS, constants.TASK_STATE_DATA_COPYING_ERROR, constants.TASK_STATE_MIGRATION_CANCELLED, constants.TASK_STATE_MIGRATION_COMPLETING, constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, constants.TASK_STATE_DATA_COPYING_COMPLETED, constants.TASK_STATE_DATA_COPYING_COMPLETING, constants.TASK_STATE_DATA_COPYING_CANCELLED, constants.TASK_STATE_MIGRATION_ERROR) def test_migration_get_progress_task_state_progress_100(self, task_state): share = db_utils.create_share( id='fake_id', task_state=task_state) expected = {'total_progress': 100} result = self.api.migration_get_progress(self.context, share) self.assertEqual(expected, result) def test_migration_get_progress_task_state_None(self): share = db_utils.create_share(id='fake_id', task_state=None) self.assertRaises(exception.InvalidShare, self.api.migration_get_progress, self.context, share) @ddt.data(None, {'invalid_progress': None}, {}) def test_migration_get_progress_invalid(self, progress): instance1 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING, host='some_host') instance2 = db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_MIGRATING_TO) share = db_utils.create_share( id='fake_id', task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS, instances=[instance1, instance2]) service = 'fake_service' self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.mock_object(db_api, 'service_get_by_args', mock.Mock(return_value=service)) self.mock_object(db_api, 'share_instance_get', mock.Mock(return_value=instance1)) self.mock_object(self.api.share_rpcapi, 'migration_get_progress', mock.Mock(return_value=progress)) self.assertRaises(exception.InvalidShare, self.api.migration_get_progress, self.context, share) self.api.share_rpcapi.migration_get_progress.assert_called_once_with( self.context, instance1, instance2['id']) class OtherTenantsShareActionsTestCase(test.TestCase): def setUp(self): super(OtherTenantsShareActionsTestCase, self).setUp() self.api = share.API() def test_delete_other_tenants_public_share(self): share = db_utils.create_share(is_public=True) ctx = context.RequestContext(user_id='1111', project_id='2222') self.assertRaises(exception.PolicyNotAuthorized, self.api.delete, ctx, share) def test_update_other_tenants_public_share(self): share = db_utils.create_share(is_public=True) ctx = context.RequestContext(user_id='1111', project_id='2222') self.assertRaises(exception.PolicyNotAuthorized, self.api.update, ctx, share, {'display_name': 'newname'}) def test_get_other_tenants_public_share(self): share = db_utils.create_share(is_public=True) ctx = context.RequestContext(user_id='1111', project_id='2222') self.mock_object(db_api, 'share_get', mock.Mock(return_value=share)) result = self.api.get(ctx, 'fakeid') self.assertEqual(share, result) db_api.share_get.assert_called_once_with(ctx, 'fakeid') manila-10.0.0/manila/tests/share/test_driver.py0000664000175000017500000014501213656750227021504 0ustar zuulzuul00000000000000# Copyright 2012 NetApp # Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the Share driver module.""" import time from unittest import mock import ddt from manila.common import constants from manila import exception from manila import network from manila.share import configuration from manila.share import driver from manila import test from manila.tests import utils as test_utils from manila import utils def fake_execute_with_raise(*cmd, **kwargs): raise exception.ProcessExecutionError def fake_sleep(duration): pass class ShareDriverWithExecuteMixin(driver.ShareDriver, driver.ExecuteMixin): pass @ddt.ddt class ShareDriverTestCase(test.TestCase): _SNAPSHOT_METHOD_NAMES = ["create_snapshot", "delete_snapshot"] def setUp(self): super(ShareDriverTestCase, self).setUp() self.utils = utils self.mock_object(self.utils, 'execute', fake_execute_with_raise) self.time = time self.mock_object(self.time, 'sleep', fake_sleep) driver.CONF.set_default('driver_handles_share_servers', True) def test__try_execute(self): execute_mixin = ShareDriverWithExecuteMixin( True, configuration=configuration.Configuration(None)) self.assertRaises(exception.ProcessExecutionError, execute_mixin._try_execute) def test_verify_share_driver_mode_option_type(self): data = {'DEFAULT': {'driver_handles_share_servers': 'True'}} with test_utils.create_temp_config_with_opts(data): share_driver = driver.ShareDriver([True, False]) self.assertTrue(share_driver.driver_handles_share_servers) def _instantiate_share_driver(self, network_config_group, driver_handles_share_servers, admin_network_config_group=None): self.mock_object(network, 'API') config = mock.Mock() config.append_config_values = mock.Mock() config.config_group = 'fake_config_group' config.network_config_group = network_config_group if admin_network_config_group: config.admin_network_config_group = admin_network_config_group config.safe_get = mock.Mock(return_value=driver_handles_share_servers) share_driver = driver.ShareDriver([True, False], configuration=config) self.assertTrue(hasattr(share_driver, 'configuration')) config.append_config_values.assert_called_once_with(driver.share_opts) if driver_handles_share_servers: calls = [] if network_config_group: calls.append(mock.call( config_group_name=config.network_config_group)) else: calls.append(mock.call( config_group_name=config.config_group)) if admin_network_config_group: calls.append(mock.call( config_group_name=config.admin_network_config_group, label='admin')) network.API.assert_has_calls(calls) self.assertTrue(hasattr(share_driver, 'network_api')) self.assertTrue(hasattr(share_driver, 'admin_network_api')) self.assertIsNotNone(share_driver.network_api) self.assertIsNotNone(share_driver.admin_network_api) else: self.assertFalse(hasattr(share_driver, 'network_api')) self.assertTrue(hasattr(share_driver, 'admin_network_api')) self.assertIsNone(share_driver.admin_network_api) self.assertFalse(network.API.called) return share_driver def test_instantiate_share_driver(self): self._instantiate_share_driver(None, True) def test_instantiate_share_driver_another_config_group(self): self._instantiate_share_driver("fake_network_config_group", True) def 
test_instantiate_share_driver_with_admin_network(self): self._instantiate_share_driver( "fake_network_config_group", True, "fake_admin_network_config_group") def test_instantiate_share_driver_no_configuration(self): self.mock_object(network, 'API') share_driver = driver.ShareDriver(True, configuration=None) self.assertIsNone(share_driver.configuration) network.API.assert_called_once_with(config_group_name=None) def test_get_share_stats_refresh_false(self): share_driver = driver.ShareDriver(True, configuration=None) share_driver._stats = {'fake_key': 'fake_value'} result = share_driver.get_share_stats(False) self.assertEqual(share_driver._stats, result) def test_get_share_stats_refresh_true(self): conf = configuration.Configuration(None) expected_keys = [ 'qos', 'driver_version', 'share_backend_name', 'free_capacity_gb', 'total_capacity_gb', 'driver_handles_share_servers', 'reserved_percentage', 'vendor_name', 'storage_protocol', 'snapshot_support', 'mount_snapshot_support', ] share_driver = driver.ShareDriver(True, configuration=conf) fake_stats = {'fake_key': 'fake_value'} share_driver._stats = fake_stats result = share_driver.get_share_stats(True) self.assertNotEqual(fake_stats, result) for key in expected_keys: self.assertIn(key, result) self.assertEqual('Open Source', result['vendor_name']) def test_get_share_status(self): share_driver = self._instantiate_share_driver(None, False) self.assertRaises(NotImplementedError, share_driver.get_share_status, None, None) @ddt.data( {'opt': True, 'allowed': True}, {'opt': True, 'allowed': (True, False)}, {'opt': True, 'allowed': [True, False]}, {'opt': True, 'allowed': set([True, False])}, {'opt': False, 'allowed': False}, {'opt': False, 'allowed': (True, False)}, {'opt': False, 'allowed': [True, False]}, {'opt': False, 'allowed': set([True, False])}) @ddt.unpack def test__verify_share_server_handling_valid_cases(self, opt, allowed): conf = configuration.Configuration(None) self.mock_object(conf, 'safe_get', mock.Mock(return_value=opt)) share_driver = driver.ShareDriver(allowed, configuration=conf) self.assertTrue(conf.safe_get.called) self.assertEqual(opt, share_driver.driver_handles_share_servers) @ddt.data( {'opt': False, 'allowed': True}, {'opt': True, 'allowed': False}, {'opt': None, 'allowed': True}, {'opt': 'True', 'allowed': True}, {'opt': 'False', 'allowed': False}, {'opt': [], 'allowed': True}, {'opt': True, 'allowed': []}, {'opt': True, 'allowed': ['True']}, {'opt': False, 'allowed': ['False']}) @ddt.unpack def test__verify_share_server_handling_invalid_cases(self, opt, allowed): conf = configuration.Configuration(None) self.mock_object(conf, 'safe_get', mock.Mock(return_value=opt)) self.assertRaises( exception.ManilaException, driver.ShareDriver, allowed, configuration=conf) self.assertTrue(conf.safe_get.called) def test_setup_server_handling_disabled(self): share_driver = self._instantiate_share_driver(None, False) # We expect successful execution, nothing to assert share_driver.setup_server('Nothing is expected to happen.') def test_setup_server_handling_enabled(self): share_driver = self._instantiate_share_driver(None, True) self.assertRaises( NotImplementedError, share_driver.setup_server, 'fake_network_info') def test_teardown_server_handling_disabled(self): share_driver = self._instantiate_share_driver(None, False) # We expect successful execution, nothing to assert share_driver.teardown_server('Nothing is expected to happen.') def test_teardown_server_handling_enabled(self): share_driver = self._instantiate_share_driver(None, 
True) self.assertRaises( NotImplementedError, share_driver.teardown_server, 'fake_share_server_details') def _assert_is_callable(self, obj, attr): self.assertTrue(callable(getattr(obj, attr))) @ddt.data('manage_existing', 'unmanage') def test_drivers_methods_needed_by_manage_functionality(self, method): share_driver = self._instantiate_share_driver(None, False) self._assert_is_callable(share_driver, method) @ddt.data('manage_existing_snapshot', 'unmanage_snapshot') def test_drivers_methods_needed_by_manage_snapshot_functionality( self, method): share_driver = self._instantiate_share_driver(None, False) self._assert_is_callable(share_driver, method) @ddt.data('revert_to_snapshot', 'revert_to_replicated_snapshot') def test_drivers_methods_needed_by_share_revert_to_snapshot_functionality( self, method): share_driver = self._instantiate_share_driver(None, False) self._assert_is_callable(share_driver, method) @ddt.data(True, False) def test_get_share_server_pools(self, value): driver.CONF.set_default('driver_handles_share_servers', value) share_driver = driver.ShareDriver(value) self.assertEqual([], share_driver.get_share_server_pools('fake_server')) @ddt.data(0.8, 1.0, 10.5, 20.0, None, '1', '1.1') def test_check_for_setup_error(self, value): driver.CONF.set_default('driver_handles_share_servers', False) share_driver = driver.ShareDriver(False) share_driver.configuration = configuration.Configuration(None) self.mock_object(share_driver.configuration, 'safe_get', mock.Mock(return_value=value)) if value and float(value) >= 1.0: share_driver.check_for_setup_error() else: self.assertRaises(exception.InvalidParameterValue, share_driver.check_for_setup_error) def test_snapshot_support_exists(self): driver.CONF.set_default('driver_handles_share_servers', True) fake_method = lambda *args, **kwargs: None # noqa: E731 child_methods = { "create_snapshot": fake_method, "delete_snapshot": fake_method, } child_class_instance = type( "NotRedefined", (driver.ShareDriver, ), child_methods)(True) self.mock_object(child_class_instance, "configuration") child_class_instance._update_share_stats() self.assertTrue(child_class_instance._stats["snapshot_support"]) self.assertTrue(child_class_instance.configuration.safe_get.called) @ddt.data( ([], [], False), (_SNAPSHOT_METHOD_NAMES, [], True), (_SNAPSHOT_METHOD_NAMES, _SNAPSHOT_METHOD_NAMES, True), (_SNAPSHOT_METHOD_NAMES[0:1], _SNAPSHOT_METHOD_NAMES[1:], True), ([], _SNAPSHOT_METHOD_NAMES, True), ) @ddt.unpack def test_check_redefined_driver_methods(self, common_drv_meth_names, child_drv_meth_names, expected_result): # This test covers the case of drivers inheriting other drivers or # common classes. 
driver.CONF.set_default('driver_handles_share_servers', True) common_drv_methods, child_drv_methods = [ {method_name: lambda *args, **kwargs: None # noqa: E731 for method_name in method_names} for method_names in (common_drv_meth_names, child_drv_meth_names)] common_drv = type( "NotRedefinedCommon", (driver.ShareDriver, ), common_drv_methods) child_drv_instance = type("NotRedefined", (common_drv, ), child_drv_methods)(True) has_redefined_methods = ( child_drv_instance._has_redefined_driver_methods( self._SNAPSHOT_METHOD_NAMES)) self.assertEqual(expected_result, has_redefined_methods) @ddt.data( (), ("create_snapshot"), ("delete_snapshot"), ("create_snapshot", "delete_snapshotFOO"), ) def test_snapshot_support_absent(self, methods): driver.CONF.set_default('driver_handles_share_servers', True) fake_method = lambda *args, **kwargs: None # noqa: E731 child_methods = {} for method in methods: child_methods[method] = fake_method child_class_instance = type( "NotRedefined", (driver.ShareDriver, ), child_methods)(True) self.mock_object(child_class_instance, "configuration") child_class_instance._update_share_stats() self.assertFalse(child_class_instance._stats["snapshot_support"]) self.assertTrue(child_class_instance.configuration.safe_get.called) @ddt.data(True, False) def test_snapshot_support_not_exists_and_set_explicitly( self, snapshots_are_supported): driver.CONF.set_default('driver_handles_share_servers', True) child_class_instance = type( "NotRedefined", (driver.ShareDriver, ), {})(True) self.mock_object(child_class_instance, "configuration") child_class_instance._update_share_stats( {"snapshot_support": snapshots_are_supported}) self.assertEqual( snapshots_are_supported, child_class_instance._stats["snapshot_support"]) self.assertTrue(child_class_instance.configuration.safe_get.called) @ddt.data(True, False) def test_snapshot_support_exists_and_set_explicitly( self, snapshots_are_supported): driver.CONF.set_default('driver_handles_share_servers', True) fake_method = lambda *args, **kwargs: None # noqa: E731 child_methods = { "create_snapshot": fake_method, "delete_snapshot": fake_method, } child_class_instance = type( "NotRedefined", (driver.ShareDriver, ), child_methods)(True) self.mock_object(child_class_instance, "configuration") child_class_instance._update_share_stats( {"snapshot_support": snapshots_are_supported}) self.assertEqual( snapshots_are_supported, child_class_instance._stats["snapshot_support"]) self.assertTrue(child_class_instance.configuration.safe_get.called) def test_create_share_from_snapshot_support_exists(self): driver.CONF.set_default('driver_handles_share_servers', True) fake_method = lambda *args, **kwargs: None # noqa: E731 child_methods = { "create_share_from_snapshot": fake_method, "create_snapshot": fake_method, "delete_snapshot": fake_method, } child_class_instance = type( "NotRedefined", (driver.ShareDriver, ), child_methods)(True) self.mock_object(child_class_instance, "configuration") child_class_instance._update_share_stats() self.assertTrue( child_class_instance._stats["create_share_from_snapshot_support"]) self.assertTrue(child_class_instance.configuration.safe_get.called) @ddt.data( (), ("create_snapshot"), ("create_share_from_snapshotFOO"), ) def test_create_share_from_snapshot_support_absent(self, methods): driver.CONF.set_default('driver_handles_share_servers', True) fake_method = lambda *args, **kwargs: None # noqa: E731 child_methods = {} for method in methods: child_methods[method] = fake_method child_class_instance = type( "NotRedefined", 
(driver.ShareDriver, ), child_methods)(True) self.mock_object(child_class_instance, "configuration") child_class_instance._update_share_stats() self.assertFalse( child_class_instance._stats["create_share_from_snapshot_support"]) self.assertTrue(child_class_instance.configuration.safe_get.called) @ddt.data(True, False) def test_create_share_from_snapshot_not_exists_and_set_explicitly( self, creating_shares_from_snapshot_is_supported): driver.CONF.set_default('driver_handles_share_servers', True) child_class_instance = type( "NotRedefined", (driver.ShareDriver, ), {})(True) self.mock_object(child_class_instance, "configuration") child_class_instance._update_share_stats({ "create_share_from_snapshot_support": creating_shares_from_snapshot_is_supported, }) self.assertEqual( creating_shares_from_snapshot_is_supported, child_class_instance._stats["create_share_from_snapshot_support"]) self.assertTrue(child_class_instance.configuration.safe_get.called) @ddt.data(True, False) def test_create_share_from_snapshot_exists_and_set_explicitly( self, create_share_from_snapshot_supported): driver.CONF.set_default('driver_handles_share_servers', True) fake_method = lambda *args, **kwargs: None # noqa: E731 child_methods = {"create_share_from_snapshot": fake_method} child_class_instance = type( "NotRedefined", (driver.ShareDriver, ), child_methods)(True) self.mock_object(child_class_instance, "configuration") child_class_instance._update_share_stats({ "create_share_from_snapshot_support": create_share_from_snapshot_supported, }) self.assertEqual( create_share_from_snapshot_supported, child_class_instance._stats["create_share_from_snapshot_support"]) self.assertTrue(child_class_instance.configuration.safe_get.called) def test_get_periodic_hook_data(self): share_driver = self._instantiate_share_driver(None, False) share_instances = ["list", "of", "share", "instances"] result = share_driver.get_periodic_hook_data( "fake_context", share_instances) self.assertEqual(share_instances, result) def test_get_admin_network_allocations_number(self): share_driver = self._instantiate_share_driver(None, True) self.assertEqual( 0, share_driver.get_admin_network_allocations_number()) def test_allocate_admin_network_count_None(self): share_driver = self._instantiate_share_driver(None, True) ctxt = 'fake_context' share_server = 'fake_share_server' mock_get_admin_network_allocations_number = self.mock_object( share_driver, 'get_admin_network_allocations_number', mock.Mock(return_value=0)) self.mock_object( share_driver.admin_network_api, 'allocate_network', mock.Mock(side_effect=Exception('ShouldNotBeRaised'))) share_driver.allocate_admin_network(ctxt, share_server) mock_get_admin_network_allocations_number.assert_called_once_with() self.assertFalse( share_driver.admin_network_api.allocate_network.called) def test_allocate_admin_network_count_0(self): share_driver = self._instantiate_share_driver(None, True) ctxt = 'fake_context' share_server = 'fake_share_server' self.mock_object( share_driver, 'get_admin_network_allocations_number', mock.Mock(return_value=0)) self.mock_object( share_driver.admin_network_api, 'allocate_network', mock.Mock(side_effect=Exception('ShouldNotBeRaised'))) share_driver.allocate_admin_network(ctxt, share_server, count=0) self.assertFalse( share_driver.get_admin_network_allocations_number.called) self.assertFalse( share_driver.admin_network_api.allocate_network.called) def test_allocate_admin_network_count_1_api_initialized(self): share_driver = self._instantiate_share_driver(None, True) ctxt = 
'fake_context' share_server = 'fake_share_server' mock_get_admin_network_allocations_number = self.mock_object( share_driver, 'get_admin_network_allocations_number', mock.Mock(return_value=1)) self.mock_object( share_driver.admin_network_api, 'allocate_network', mock.Mock()) share_driver.allocate_admin_network(ctxt, share_server) mock_get_admin_network_allocations_number.assert_called_once_with() (share_driver.admin_network_api.allocate_network. assert_called_once_with(ctxt, share_server, count=1)) def test_allocate_admin_network_count_1_api_not_initialized(self): share_driver = self._instantiate_share_driver(None, True, None) ctxt = 'fake_context' share_server = 'fake_share_server' share_driver._admin_network_api = None mock_get_admin_network_allocations_number = self.mock_object( share_driver, 'get_admin_network_allocations_number', mock.Mock(return_value=1)) self.assertRaises( exception.NetworkBadConfigurationException, share_driver.allocate_admin_network, ctxt, share_server, ) mock_get_admin_network_allocations_number.assert_called_once_with() def test_migration_start(self): driver.CONF.set_default('driver_handles_share_servers', False) share_driver = driver.ShareDriver(False) self.assertRaises(NotImplementedError, share_driver.migration_start, None, None, None, None, None, None, None) def test_migration_continue(self): driver.CONF.set_default('driver_handles_share_servers', False) share_driver = driver.ShareDriver(False) self.assertRaises(NotImplementedError, share_driver.migration_continue, None, None, None, None, None, None, None) def test_migration_complete(self): driver.CONF.set_default('driver_handles_share_servers', False) share_driver = driver.ShareDriver(False) self.assertRaises(NotImplementedError, share_driver.migration_complete, None, None, None, None, None, None, None) def test_migration_cancel(self): driver.CONF.set_default('driver_handles_share_servers', False) share_driver = driver.ShareDriver(False) self.assertRaises(NotImplementedError, share_driver.migration_cancel, None, None, None, None, None, None, None) def test_migration_get_progress(self): driver.CONF.set_default('driver_handles_share_servers', False) share_driver = driver.ShareDriver(False) self.assertRaises(NotImplementedError, share_driver.migration_get_progress, None, None, None, None, None, None, None) @ddt.data(True, False) def test_connection_get_info(self, admin): expected = { 'mount': 'mount -vt nfs %(options)s /fake/fake_id %(path)s', 'unmount': 'umount -v %(path)s', 'access_mapping': { 'ip': ['nfs'] } } fake_share = { 'id': 'fake_id', 'share_proto': 'nfs', 'export_locations': [{ 'path': '/fake/fake_id', 'is_admin_only': admin }] } driver.CONF.set_default('driver_handles_share_servers', False) share_driver = driver.ShareDriver(False) share_driver.configuration = configuration.Configuration(None) connection_info = share_driver.connection_get_info( None, fake_share, "fake_server") self.assertEqual(expected, connection_info) def test_migration_check_compatibility(self): driver.CONF.set_default('driver_handles_share_servers', False) share_driver = driver.ShareDriver(False) share_driver.configuration = configuration.Configuration(None) expected = { 'compatible': False, 'writable': False, 'preserve_metadata': False, 'nondisruptive': False, 'preserve_snapshots': False, } result = share_driver.migration_check_compatibility( None, None, None, None, None) self.assertEqual(expected, result) def test_update_access(self): share_driver = driver.ShareDriver(True, configuration=None) self.assertRaises( 
NotImplementedError, share_driver.update_access, 'ctx', 'fake_share', 'fake_access_rules', 'fake_add_rules', 'fake_delete_rules' ) def test_create_replica(self): share_driver = self._instantiate_share_driver(None, True) self.assertRaises(NotImplementedError, share_driver.create_replica, 'fake_context', ['r1', 'r2'], 'fake_new_replica', [], []) def test_delete_replica(self): share_driver = self._instantiate_share_driver(None, True) self.assertRaises(NotImplementedError, share_driver.delete_replica, 'fake_context', ['r1', 'r2'], 'fake_replica', []) def test_promote_replica(self): share_driver = self._instantiate_share_driver(None, True) self.assertRaises(NotImplementedError, share_driver.promote_replica, 'fake_context', [], 'fake_replica', []) def test_update_replica_state(self): share_driver = self._instantiate_share_driver(None, True) self.assertRaises(NotImplementedError, share_driver.update_replica_state, 'fake_context', ['r1', 'r2'], 'fake_replica', [], []) def test_create_replicated_snapshot(self): share_driver = self._instantiate_share_driver(None, False) self.assertRaises(NotImplementedError, share_driver.create_replicated_snapshot, 'fake_context', ['r1', 'r2'], ['s1', 's2']) def test_delete_replicated_snapshot(self): share_driver = self._instantiate_share_driver(None, False) self.assertRaises(NotImplementedError, share_driver.delete_replicated_snapshot, 'fake_context', ['r1', 'r2'], ['s1', 's2']) def test_update_replicated_snapshot(self): share_driver = self._instantiate_share_driver(None, False) self.assertRaises(NotImplementedError, share_driver.update_replicated_snapshot, 'fake_context', ['r1', 'r2'], 'r1', ['s1', 's2'], 's1') @ddt.data(True, False) def test_share_group_snapshot_support_exists_and_equals_snapshot_support( self, snapshots_are_supported): driver.CONF.set_default('driver_handles_share_servers', True) child_class_instance = driver.ShareDriver(True) child_class_instance._snapshots_are_supported = snapshots_are_supported self.mock_object(child_class_instance, "configuration") child_class_instance._update_share_stats() self.assertEqual( snapshots_are_supported, child_class_instance._stats["snapshot_support"]) self.assertTrue(child_class_instance.configuration.safe_get.called) def test_create_share_group_from_share_group_snapshot(self): share_driver = self._instantiate_share_driver(None, False) fake_shares = [ {'id': 'fake_share_%d' % i, 'source_share_group_snapshot_member_id': 'fake_member_%d' % i} for i in (1, 2)] fake_share_group_dict = { 'source_share_group_snapshot_id': 'some_fake_uuid_abc', 'shares': fake_shares, 'id': 'some_fake_uuid_def', } fake_share_group_snapshot_dict = { 'share_group_snapshot_members': [ {'id': 'fake_member_1'}, {'id': 'fake_member_2'}], 'id': 'fake_share_group_snapshot_id', } mock_create = self.mock_object( share_driver, 'create_share_from_snapshot', mock.Mock(side_effect=['fake_export1', 'fake_export2'])) expected_share_updates = [ { 'id': 'fake_share_1', 'export_locations': 'fake_export1', }, { 'id': 'fake_share_2', 'export_locations': 'fake_export2', }, ] share_group_update, share_update = ( share_driver.create_share_group_from_share_group_snapshot( 'fake_context', fake_share_group_dict, fake_share_group_snapshot_dict)) mock_create.assert_has_calls([ mock.call( 'fake_context', {'id': 'fake_share_1', 'source_share_group_snapshot_member_id': 'fake_member_1'}, {'id': 'fake_member_1'}), mock.call( 'fake_context', {'id': 'fake_share_2', 'source_share_group_snapshot_member_id': 'fake_member_2'}, {'id': 'fake_member_2'}) ]) 
self.assertIsNone(share_group_update) self.assertEqual(expected_share_updates, share_update) def test_create_share_group_from_share_group_snapshot_dhss(self): share_driver = self._instantiate_share_driver(None, True) mock_share_server = mock.Mock() fake_shares = [ {'id': 'fake_share_1', 'source_share_group_snapshot_member_id': 'foo_member_1'}, {'id': 'fake_share_2', 'source_share_group_snapshot_member_id': 'foo_member_2'}] fake_share_group_dict = { 'source_share_group_snapshot_id': 'some_fake_uuid', 'shares': fake_shares, 'id': 'eda52174-0442-476d-9694-a58327466c14', } fake_share_group_snapshot_dict = { 'share_group_snapshot_members': [ {'id': 'foo_member_1'}, {'id': 'foo_member_2'}], 'id': 'fake_share_group_snapshot_id' } mock_create = self.mock_object( share_driver, 'create_share_from_snapshot', mock.Mock(side_effect=['fake_export1', 'fake_export2'])) expected_share_updates = [ {'id': 'fake_share_1', 'export_locations': 'fake_export1'}, {'id': 'fake_share_2', 'export_locations': 'fake_export2'}, ] share_group_update, share_update = ( share_driver.create_share_group_from_share_group_snapshot( 'fake_context', fake_share_group_dict, fake_share_group_snapshot_dict, share_server=mock_share_server, ) ) mock_create.assert_has_calls([ mock.call( 'fake_context', {'id': 'fake_share_%d' % i, 'source_share_group_snapshot_member_id': 'foo_member_%d' % i}, {'id': 'foo_member_%d' % i}, share_server=mock_share_server) for i in (1, 2) ]) self.assertIsNone(share_group_update) self.assertEqual(expected_share_updates, share_update) def test_create_share_group_from_share_group_snapshot_with_dict_raise( self): share_driver = self._instantiate_share_driver(None, False) fake_shares = [ {'id': 'fake_share_%d' % i, 'source_share_group_snapshot_member_id': 'fake_member_%d' % i} for i in (1, 2)] fake_share_group_dict = { 'source_share_group_snapshot_id': 'some_fake_uuid_abc', 'shares': fake_shares, 'id': 'some_fake_uuid_def', } fake_share_group_snapshot_dict = { 'share_group_snapshot_members': [ {'id': 'fake_member_1'}, {'id': 'fake_member_2'}], 'id': 'fake_share_group_snapshot_id', } self.mock_object( share_driver, 'create_share_from_snapshot', mock.Mock(side_effect=[{ 'export_locations': 'fake_export1', 'status': constants.STATUS_CREATING}, {'export_locations': 'fake_export2', 'status': constants.STATUS_CREATING}])) self.assertRaises( exception.InvalidShareInstance, share_driver.create_share_group_from_share_group_snapshot, 'fake_context', fake_share_group_dict, fake_share_group_snapshot_dict) def test_create_share_group_from_share_group_snapshot_with_dict( self): share_driver = self._instantiate_share_driver(None, False) fake_shares = [ {'id': 'fake_share_%d' % i, 'source_share_group_snapshot_member_id': 'fake_member_%d' % i} for i in (1, 2)] fake_share_group_dict = { 'source_share_group_snapshot_id': 'some_fake_uuid_abc', 'shares': fake_shares, 'id': 'some_fake_uuid_def', } fake_share_group_snapshot_dict = { 'share_group_snapshot_members': [ {'id': 'fake_member_1'}, {'id': 'fake_member_2'}], 'id': 'fake_share_group_snapshot_id', } mock_create = self.mock_object( share_driver, 'create_share_from_snapshot', mock.Mock(side_effect=[{ 'export_locations': 'fake_export1', 'status': constants.STATUS_CREATING_FROM_SNAPSHOT}, {'export_locations': 'fake_export2', 'status': constants.STATUS_AVAILABLE}])) expected_share_updates = [ { 'id': 'fake_share_1', 'status': constants.STATUS_CREATING_FROM_SNAPSHOT, 'export_locations': 'fake_export1', }, { 'id': 'fake_share_2', 'status': constants.STATUS_AVAILABLE, 'export_locations': 
'fake_export2', }, ] share_group_update, share_update = ( share_driver.create_share_group_from_share_group_snapshot( 'fake_context', fake_share_group_dict, fake_share_group_snapshot_dict)) mock_create.assert_has_calls([ mock.call( 'fake_context', {'id': 'fake_share_1', 'source_share_group_snapshot_member_id': 'fake_member_1'}, {'id': 'fake_member_1'}), mock.call( 'fake_context', {'id': 'fake_share_2', 'source_share_group_snapshot_member_id': 'fake_member_2'}, {'id': 'fake_member_2'}) ]) self.assertIsNone(share_group_update) self.assertEqual(expected_share_updates, share_update) def test_create_share_group_from_sg_snapshot_with_no_members(self): share_driver = self._instantiate_share_driver(None, False) fake_share_group_dict = {} fake_share_group_snapshot_dict = {'share_group_snapshot_members': []} share_group_update, share_update = ( share_driver.create_share_group_from_share_group_snapshot( 'fake_context', fake_share_group_dict, fake_share_group_snapshot_dict)) self.assertIsNone(share_group_update) self.assertIsNone(share_update) def test_create_share_group_snapshot(self): fake_snap_member_1 = { 'id': '6813e06b-a8f5-4784-b17d-f3e91afa370e', 'share_id': 'a3ebdba5-b4e1-46c8-a0ea-a9ac8daf5296', 'share_group_snapshot_id': 'fake_share_group_snapshot_id', 'share_instance_id': 'fake_share_instance_id_1', 'provider_location': 'should_not_be_used_1', 'share': { 'id': '420f978b-dbf6-4b3c-92fe-f5b17a0bb5e2', 'size': 3, 'share_proto': 'fake_share_proto', }, } fake_snap_member_2 = { 'id': '1e010dfe-545b-432d-ab95-4ef03cd82f89', 'share_id': 'a3ebdba5-b4e1-46c8-a0ea-a9ac8daf5296', 'share_group_snapshot_id': 'fake_share_group_snapshot_id', 'share_instance_id': 'fake_share_instance_id_2', 'provider_location': 'should_not_be_used_2', 'share': { 'id': '420f978b-dbf6-4b3c-92fe-f5b17a0bb5e2', 'size': '2', 'share_proto': 'fake_share_proto', }, } fake_snap_dict = { 'status': 'available', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': '0', 'share_group_id': '4b04fdc3-00b9-4909-ba1a-06e9b3f88b67', 'share_group_snapshot_members': [ fake_snap_member_1, fake_snap_member_2], 'deleted_at': None, 'id': 'f6aa3b59-57eb-421e-965c-4e182538e36a', 'name': None } share_driver = self._instantiate_share_driver(None, False) share_driver._stats['snapshot_support'] = True mock_create_snap = self.mock_object( share_driver, 'create_snapshot', mock.Mock(side_effect=lambda *args, **kwargs: { 'foo_k': 'foo_v', 'bar_k': 'bar_v_%s' % args[1]['id']})) share_group_snapshot_update, member_update_list = ( share_driver.create_share_group_snapshot( 'fake_context', fake_snap_dict)) mock_create_snap.assert_has_calls([ mock.call( 'fake_context', {'snapshot_id': member['share_group_snapshot_id'], 'share_id': member['share_id'], 'share_instance_id': member['share']['id'], 'id': member['id'], 'share': member['share'], 'size': member['share']['size'], 'share_size': member['share']['size'], 'share_proto': member['share']['share_proto'], 'provider_location': None}, share_server=None) for member in (fake_snap_member_1, fake_snap_member_2) ]) self.assertIsNone(share_group_snapshot_update) self.assertEqual( [{'id': member['id'], 'foo_k': 'foo_v', 'bar_k': 'bar_v_%s' % member['id']} for member in (fake_snap_member_1, fake_snap_member_2)], member_update_list, ) def test_create_share_group_snapshot_failed_snapshot(self): fake_snap_member_1 = { 'id': '6813e06b-a8f5-4784-b17d-f3e91afa370e', 'share_id': 'a3ebdba5-b4e1-46c8-a0ea-a9ac8daf5296', 'share_group_snapshot_id': 
'fake_share_group_snapshot_id', 'share_instance_id': 'fake_share_instance_id_1', 'provider_location': 'should_not_be_used_1', 'share': { 'id': '420f978b-dbf6-4b3c-92fe-f5b17a0bb5e2', 'size': 3, 'share_proto': 'fake_share_proto', }, } fake_snap_member_2 = { 'id': '1e010dfe-545b-432d-ab95-4ef03cd82f89', 'share_id': 'a3ebdba5-b4e1-46c8-a0ea-a9ac8daf5296', 'share_group_snapshot_id': 'fake_share_group_snapshot_id', 'share_instance_id': 'fake_share_instance_id_2', 'provider_location': 'should_not_be_used_2', 'share': { 'id': '420f978b-dbf6-4b3c-92fe-f5b17a0bb5e2', 'size': '2', 'share_proto': 'fake_share_proto', }, } fake_snap_dict = { 'status': 'available', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': '0', 'share_group_id': '4b04fdc3-00b9-4909-ba1a-06e9b3f88b67', 'share_group_snapshot_members': [ fake_snap_member_1, fake_snap_member_2], 'deleted_at': None, 'id': 'f6aa3b59-57eb-421e-965c-4e182538e36a', 'name': None } expected_exception = exception.ManilaException share_driver = self._instantiate_share_driver(None, False) share_driver._stats['snapshot_support'] = True mock_create_snap = self.mock_object( share_driver, 'create_snapshot', mock.Mock(side_effect=[None, expected_exception])) mock_delete_snap = self.mock_object(share_driver, 'delete_snapshot') self.assertRaises( expected_exception, share_driver.create_share_group_snapshot, 'fake_context', fake_snap_dict) fake_snap_member_1_expected = { 'snapshot_id': fake_snap_member_1['share_group_snapshot_id'], 'share_id': fake_snap_member_1['share_id'], 'share_instance_id': fake_snap_member_1['share']['id'], 'id': fake_snap_member_1['id'], 'share': fake_snap_member_1['share'], 'size': fake_snap_member_1['share']['size'], 'share_size': fake_snap_member_1['share']['size'], 'share_proto': fake_snap_member_1['share']['share_proto'], 'provider_location': None, } mock_create_snap.assert_has_calls([ mock.call( 'fake_context', {'snapshot_id': member['share_group_snapshot_id'], 'share_id': member['share_id'], 'share_instance_id': member['share']['id'], 'id': member['id'], 'share': member['share'], 'size': member['share']['size'], 'share_size': member['share']['size'], 'share_proto': member['share']['share_proto'], 'provider_location': None}, share_server=None) for member in (fake_snap_member_1, fake_snap_member_2) ]) mock_delete_snap.assert_called_with( 'fake_context', fake_snap_member_1_expected, share_server=None) def test_create_share_group_snapshot_no_support(self): fake_snap_dict = { 'status': 'available', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': '0', 'share_group_id': '4b04fdc3-00b9-4909-ba1a-06e9b3f88b67', 'share_group_snapshot_members': [ { 'status': 'available', 'share_type_id': '1a9ed31e-ee70-483d-93ba-89690e028d7f', 'user_id': 'a0314a441ca842019b0952224aa39192', 'deleted': 'False', 'share_proto': 'NFS', 'project_id': '13c0be6290934bd98596cfa004650049', 'share_group_snapshot_id': 'f6aa3b59-57eb-421e-965c-4e182538e36a', 'deleted_at': None, 'id': '6813e06b-a8f5-4784-b17d-f3e91afa370e', 'size': 1 }, ], 'deleted_at': None, 'id': 'f6aa3b59-57eb-421e-965c-4e182538e36a', 'name': None } share_driver = self._instantiate_share_driver(None, False) share_driver._stats['snapshot_support'] = False self.assertRaises( exception.ShareGroupSnapshotNotSupported, share_driver.create_share_group_snapshot, 'fake_context', fake_snap_dict) def test_create_share_group_snapshot_no_members(self): 
fake_snap_dict = { 'status': 'available', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': '0', 'share_group_id': '4b04fdc3-00b9-4909-ba1a-06e9b3f88b67', 'share_group_snapshot_members': [], 'deleted_at': None, 'id': 'f6aa3b59-57eb-421e-965c-4e182538e36a', 'name': None } share_driver = self._instantiate_share_driver(None, False) share_driver._stats['snapshot_support'] = True share_group_snapshot_update, member_update_list = ( share_driver.create_share_group_snapshot( 'fake_context', fake_snap_dict)) self.assertIsNone(share_group_snapshot_update) self.assertIsNone(member_update_list) def test_delete_share_group_snapshot(self): fake_snap_member_1 = { 'id': '6813e06b-a8f5-4784-b17d-f3e91afa370e', 'share_id': 'a3ebdba5-b4e1-46c8-a0ea-a9ac8daf5296', 'share_group_snapshot_id': 'fake_share_group_snapshot_id', 'share_instance_id': 'fake_share_instance_id_1', 'provider_location': 'fake_provider_location_2', 'share': { 'id': '420f978b-dbf6-4b3c-92fe-f5b17a0bb5e2', 'size': 3, 'share_proto': 'fake_share_proto', }, } fake_snap_member_2 = { 'id': '1e010dfe-545b-432d-ab95-4ef03cd82f89', 'share_id': 'a3ebdba5-b4e1-46c8-a0ea-a9ac8daf5296', 'share_group_snapshot_id': 'fake_share_group_snapshot_id', 'share_instance_id': 'fake_share_instance_id_2', 'provider_location': 'fake_provider_location_2', 'share': { 'id': '420f978b-dbf6-4b3c-92fe-f5b17a0bb5e2', 'size': '2', 'share_proto': 'fake_share_proto', }, } fake_snap_dict = { 'status': 'available', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': '0', 'share_group_id': '4b04fdc3-00b9-4909-ba1a-06e9b3f88b67', 'share_group_snapshot_members': [ fake_snap_member_1, fake_snap_member_2], 'deleted_at': None, 'id': 'f6aa3b59-57eb-421e-965c-4e182538e36a', 'name': None } share_driver = self._instantiate_share_driver(None, False) share_driver._stats['share_group_snapshot_support'] = True mock_delete_snap = self.mock_object(share_driver, 'delete_snapshot') share_group_snapshot_update, member_update_list = ( share_driver.delete_share_group_snapshot( 'fake_context', fake_snap_dict)) mock_delete_snap.assert_has_calls([ mock.call( 'fake_context', {'snapshot_id': member['share_group_snapshot_id'], 'share_id': member['share_id'], 'share_instance_id': member['share']['id'], 'id': member['id'], 'share': member['share'], 'size': member['share']['size'], 'share_size': member['share']['size'], 'share_proto': member['share']['share_proto'], 'provider_location': member['provider_location']}, share_server=None) for member in (fake_snap_member_1, fake_snap_member_2) ]) self.assertIsNone(share_group_snapshot_update) self.assertIsNone(member_update_list) def test_snapshot_update_access(self): share_driver = self._instantiate_share_driver(None, False) self.assertRaises(NotImplementedError, share_driver.snapshot_update_access, 'fake_context', 'fake_snapshot', ['r1', 'r2'], [], []) @ddt.data({'user_networks': set([4]), 'conf': [4], 'expected': {'ipv4': True, 'ipv6': False}}, {'user_networks': set([6]), 'conf': [4], 'expected': {'ipv4': False, 'ipv6': False}}, {'user_networks': set([4, 6]), 'conf': [4], 'expected': {'ipv4': True, 'ipv6': False}}, {'user_networks': set([4]), 'conf': [6], 'expected': {'ipv4': False, 'ipv6': False}}, {'user_networks': set([6]), 'conf': [6], 'expected': {'ipv4': False, 'ipv6': True}}, {'user_networks': set([4, 6]), 'conf': [6], 'expected': {'ipv4': False, 'ipv6': True}}, {'user_networks': set([4]), 'conf': 
[4, 6], 'expected': {'ipv4': True, 'ipv6': False}}, {'user_networks': set([6]), 'conf': [4, 6], 'expected': {'ipv4': False, 'ipv6': True}}, {'user_networks': set([4, 6]), 'conf': [4, 6], 'expected': {'ipv4': True, 'ipv6': True}}, ) @ddt.unpack def test_add_ip_version_capability_if_dhss_true(self, user_networks, conf, expected): share_driver = self._instantiate_share_driver(None, True) self.mock_object(share_driver, 'get_configured_ip_versions', mock.Mock(return_value=conf)) versions = mock.PropertyMock(return_value=user_networks) type(share_driver.network_api).enabled_ip_versions = versions data = {'share_backend_name': 'fake_backend'} result = share_driver.add_ip_version_capability(data) self.assertIsNotNone(result['ipv4_support']) self.assertEqual(expected['ipv4'], result['ipv4_support']) self.assertIsNotNone(result['ipv6_support']) self.assertEqual(expected['ipv6'], result['ipv6_support']) @ddt.data({'conf': [4], 'expected': {'ipv4': True, 'ipv6': False}}, {'conf': [6], 'expected': {'ipv4': False, 'ipv6': True}}, {'conf': [4, 6], 'expected': {'ipv4': True, 'ipv6': True}}, ) @ddt.unpack def test_add_ip_version_capability_if_dhss_false(self, conf, expected): share_driver = self._instantiate_share_driver(None, False) self.mock_object(share_driver, 'get_configured_ip_versions', mock.Mock(return_value=conf)) data = {'share_backend_name': 'fake_backend'} result = share_driver.add_ip_version_capability(data) self.assertIsNotNone(result['ipv4_support']) self.assertEqual(expected['ipv4'], result['ipv4_support']) self.assertIsNotNone(result['ipv6_support']) self.assertEqual(expected['ipv6'], result['ipv6_support']) manila-10.0.0/manila/tests/share/test_hook.py0000664000175000017500000003072713656750227021157 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
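# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the original source tree: the ddt cases in
# the add_ip_version_capability tests above expect ipv4_support/ipv6_support
# to be the intersection of the IP versions the driver is configured for and,
# when driver_handles_share_servers=True, the versions enabled on the user
# networks. A minimal standalone model of that rule (function name and
# parameters are hypothetical):
def ip_version_capability(configured, enabled_user_versions=None):
    """Return {'ipv4_support': bool, 'ipv6_support': bool} for a backend."""
    versions = set(configured)
    if enabled_user_versions is not None:  # the dhss=True case
        versions &= set(enabled_user_versions)
    return {'ipv4_support': 4 in versions, 'ipv6_support': 6 in versions}

# e.g. ip_version_capability([4, 6], {6}) gives ipv4_support=False and
# ipv6_support=True, matching the expected values exercised above.
# ---------------------------------------------------------------------------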
from unittest import mock import ddt from manila import context from manila.share import hook from manila import test class FakeHookImplementation(hook.HookBase): def _execute_pre_hook(self, context, func_name, *args, **kwargs): """Fake implementation of a pre hook action.""" def _execute_post_hook(self, context, func_name, pre_hook_data, driver_action_results, *args, **kwargs): """Fake implementation of a post hook action.""" def _execute_periodic_hook(self, context, periodic_hook_data, *args, **kwargs): """Fake implementation of a periodic hook action.""" @ddt.ddt class HookBaseTestCase(test.TestCase): def setUp(self): super(HookBaseTestCase, self).setUp() self.context = context.get_admin_context() self.default_config = { "enable_pre_hooks": True, "enable_post_hooks": True, "enable_periodic_hooks": True, "suppress_pre_hooks_errors": True, "suppress_post_hooks_errors": True, } for k, v in self.default_config.items(): hook.CONF.set_default(k, v) def _fake_safe_get(self, key): return self.default_config.get(key) def _get_hook_instance(self, set_configuration=True, host="fake_host"): if set_configuration: configuration = mock.Mock() configuration.safe_get.side_effect = self._fake_safe_get else: configuration = None instance = FakeHookImplementation( configuration=configuration, host=host) return instance def test_instantiate_hook_fail(self): self.assertRaises(TypeError, hook.HookBase) @ddt.data(True, False) def test_instantiate_hook_successfully_and_set_configuration( self, set_configuration): instance = self._get_hook_instance(set_configuration) self.assertTrue(hasattr(instance, 'host')) self.assertEqual("fake_host", instance.host) self.assertTrue(hasattr(instance, 'configuration')) if not set_configuration: self.assertIsNone(instance.configuration) for attr_name in ("pre_hooks_enabled", "post_hooks_enabled", "periodic_hooks_enabled", "suppress_pre_hooks_errors", "suppress_post_hooks_errors"): self.assertTrue(hasattr(instance, attr_name)) if set_configuration: instance.configuration.append_config_values.assert_has_calls([ mock.call(hook.hook_options)]) conf_func = self._fake_safe_get else: conf_func = self.default_config.get self.assertEqual( conf_func("enable_pre_hooks"), instance.pre_hooks_enabled) self.assertEqual( conf_func("enable_post_hooks"), instance.post_hooks_enabled) self.assertEqual( conf_func("enable_periodic_hooks"), instance.periodic_hooks_enabled) self.assertEqual( conf_func("suppress_pre_hooks_errors"), instance.suppress_pre_hooks_errors) self.assertEqual( conf_func("suppress_post_hooks_errors"), instance.suppress_post_hooks_errors) def test_execute_pre_hook_disabled(self): instance = self._get_hook_instance() instance.pre_hooks_enabled = False self.mock_object( instance, "_execute_pre_hook", mock.Mock(side_effect=Exception("I should not be raised."))) result = instance.execute_pre_hook( self.context, "fake_func_name", "some_arg", some_kwarg="foo") self.assertIsNone(result) @ddt.data(True, False) def test_execute_pre_hook_success(self, provide_context): instance = self._get_hook_instance() instance.pre_hooks_enabled = True instance.suppress_pre_hooks_errors = True expected = "fake_expected_result" some_arg = "some_arg" func_name = "fake_func_name" self.mock_object(hook.LOG, 'error') self.mock_object( instance, "_execute_pre_hook", mock.Mock(return_value=expected)) mock_ctxt = self.mock_object(context, 'get_admin_context') ctxt = self.context if provide_context else mock_ctxt result = instance.execute_pre_hook( ctxt, func_name, some_arg, some_kwarg="foo") 
self.assertEqual(expected, result) instance._execute_pre_hook.assert_called_once_with( some_arg, context=self.context if provide_context else mock_ctxt, func_name=func_name, some_kwarg="foo") self.assertFalse(hook.LOG.error.called) def test_execute_pre_hook_exception_with_suppression(self): instance = self._get_hook_instance() instance.pre_hooks_enabled = True instance.suppress_pre_hooks_errors = True some_arg = "some_arg" func_name = "fake_func_name" FakeException = type("FakeException", (Exception, ), {}) self.mock_object(hook.LOG, 'warning') self.mock_object( instance, "_execute_pre_hook", mock.Mock(side_effect=( FakeException("Some exception that should be suppressed.")))) result = instance.execute_pre_hook( self.context, func_name, some_arg, some_kwarg="foo") self.assertIsInstance(result, FakeException) instance._execute_pre_hook.assert_called_once_with( some_arg, context=self.context, func_name=func_name, some_kwarg="foo") self.assertTrue(hook.LOG.warning.called) def test_execute_pre_hook_exception_without_suppression(self): instance = self._get_hook_instance() instance.pre_hooks_enabled = True instance.suppress_pre_hooks_errors = False some_arg = "some_arg" func_name = "fake_func_name" FakeException = type("FakeException", (Exception, ), {}) self.mock_object(hook.LOG, 'warning') self.mock_object( instance, "_execute_pre_hook", mock.Mock(side_effect=( FakeException( "Some exception that should NOT be suppressed.")))) self.assertRaises( FakeException, instance.execute_pre_hook, self.context, func_name, some_arg, some_kwarg="foo") instance._execute_pre_hook.assert_called_once_with( some_arg, context=self.context, func_name=func_name, some_kwarg="foo") self.assertFalse(hook.LOG.warning.called) def test_execute_post_hook_disabled(self): instance = self._get_hook_instance() instance.post_hooks_enabled = False self.mock_object( instance, "_execute_post_hook", mock.Mock(side_effect=Exception("I should not be raised."))) result = instance.execute_post_hook( self.context, "fake_func_name", "some_pre_hook_data", "some_driver_action_results", "some_arg", some_kwarg="foo") self.assertIsNone(result) @ddt.data(True, False) def test_execute_post_hook_success(self, provide_context): instance = self._get_hook_instance() instance.post_hooks_enabled = True instance.suppress_post_hooks_errors = True expected = "fake_expected_result" some_arg = "some_arg" func_name = "fake_func_name" pre_hook_data = "some_pre_hook_data" driver_action_results = "some_driver_action_results" self.mock_object(hook.LOG, 'warning') self.mock_object( instance, "_execute_post_hook", mock.Mock(return_value=expected)) mock_ctxt = self.mock_object(context, 'get_admin_context') ctxt = self.context if provide_context else mock_ctxt result = instance.execute_post_hook( ctxt, func_name, pre_hook_data, driver_action_results, some_arg, some_kwarg="foo") self.assertEqual(expected, result) instance._execute_post_hook.assert_called_once_with( some_arg, context=self.context if provide_context else mock_ctxt, func_name=func_name, pre_hook_data=pre_hook_data, driver_action_results=driver_action_results, some_kwarg="foo") self.assertFalse(hook.LOG.warning.called) def test_execute_post_hook_exception_with_suppression(self): instance = self._get_hook_instance() instance.post_hooks_enabled = True instance.suppress_post_hooks_errors = True some_arg = "some_arg" func_name = "fake_func_name" pre_hook_data = "some_pre_hook_data" driver_action_results = "some_driver_action_results" FakeException = type("FakeException", (Exception, ), {}) 
self.mock_object(hook.LOG, 'warning') self.mock_object( instance, "_execute_post_hook", mock.Mock(side_effect=( FakeException("Some exception that should be suppressed.")))) result = instance.execute_post_hook( self.context, func_name, pre_hook_data, driver_action_results, some_arg, some_kwarg="foo") self.assertIsInstance(result, FakeException) instance._execute_post_hook.assert_called_once_with( some_arg, context=self.context, func_name=func_name, pre_hook_data=pre_hook_data, driver_action_results=driver_action_results, some_kwarg="foo") self.assertTrue(hook.LOG.warning.called) def test_execute_post_hook_exception_without_suppression(self): instance = self._get_hook_instance() instance.post_hooks_enabled = True instance.suppress_post_hooks_errors = False some_arg = "some_arg" func_name = "fake_func_name" pre_hook_data = "some_pre_hook_data" driver_action_results = "some_driver_action_results" FakeException = type("FakeException", (Exception, ), {}) self.mock_object(hook.LOG, 'error') self.mock_object( instance, "_execute_post_hook", mock.Mock(side_effect=( FakeException( "Some exception that should NOT be suppressed.")))) self.assertRaises( FakeException, instance.execute_post_hook, self.context, func_name, pre_hook_data, driver_action_results, some_arg, some_kwarg="foo") instance._execute_post_hook.assert_called_once_with( some_arg, context=self.context, func_name=func_name, pre_hook_data=pre_hook_data, driver_action_results=driver_action_results, some_kwarg="foo") self.assertFalse(hook.LOG.error.called) def test_execute_periodic_hook_disabled(self): instance = self._get_hook_instance() instance.periodic_hooks_enabled = False self.mock_object(instance, "_execute_periodic_hook") instance.execute_periodic_hook( self.context, "fake_periodic_hook_data", "some_arg", some_kwarg="foo") self.assertFalse(instance._execute_periodic_hook.called) @ddt.data(True, False) def test_execute_periodic_hook_enabled(self, provide_context): instance = self._get_hook_instance() instance.periodic_hooks_enabled = True expected = "some_expected_result" self.mock_object( instance, "_execute_periodic_hook", mock.Mock(return_value=expected)) mock_ctxt = self.mock_object(context, 'get_admin_context') ctxt = self.context if provide_context else mock_ctxt result = instance.execute_periodic_hook( ctxt, "fake_periodic_hook_data", "some_arg", some_kwarg="foo") instance._execute_periodic_hook.assert_called_once_with( ctxt, "fake_periodic_hook_data", "some_arg", some_kwarg="foo") self.assertEqual(expected, result) manila-10.0.0/manila/tests/share/test_share_types.py0000664000175000017500000004667213656750227022553 0ustar zuulzuul00000000000000# Copyright 2015 Deutsche Telekom AG. All rights reserved. # Copyright 2015 Tom Barron. All rights reserved. # Copyright 2015 Mirantis, Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
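# ---------------------------------------------------------------------------
# Illustrative sketch, not from the original tree: the hook tests above assert
# that when error suppression is enabled a failing hook logs a warning and the
# exception object is returned, and that it propagates otherwise. That
# behaviour can be modelled standalone like this (run_hook and suppress are
# hypothetical names, not manila.share.hook APIs):
import logging


def run_hook(hook_callable, *args, suppress=True, **kwargs):
    """Call a hook; return its result, or the exception when suppressed."""
    try:
        return hook_callable(*args, **kwargs)
    except Exception as exc:
        if not suppress:
            raise
        logging.getLogger(__name__).warning("Hook error suppressed: %s", exc)
        return exc
# ---------------------------------------------------------------------------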
"""Test of Share Type methods for Manila.""" import copy import datetime import itertools from unittest import mock import ddt from oslo_utils import strutils from manila.common import constants from manila import context from manila import db from manila import exception from manila.share import share_types from manila import test def create_share_type_dict(extra_specs=None): return { 'fake_type': { 'name': 'fake1', 'extra_specs': extra_specs } } def return_share_type_update(context, id, values): name = values.get('name') description = values.get('description') is_public = values.get('is_public') if id == '444': raise exception.ShareTypeUpdateFailed(id=id) else: st_update = { 'created_at': datetime.datetime(2019, 9, 9, 14, 40, 31), 'deleted': '0', 'deleted_at': None, 'extra_specs': {u'gold': u'True'}, 'required_extra_specs': {}, 'id': id, 'name': name, 'is_public': is_public, 'description': description, 'updated_at': None } return st_update @ddt.ddt class ShareTypesTestCase(test.TestCase): fake_type = { 'test': { 'created_at': datetime.datetime(2015, 1, 22, 11, 43, 24), 'deleted': '0', 'deleted_at': None, 'extra_specs': {}, 'required_extra_specs': {}, 'id': u'fooid-1', 'name': u'test', 'updated_at': None } } fake_extra_specs = {u'gold': u'True'} fake_share_type_id = u'fooid-2' fake_type_w_extra = { 'test_with_extra': { 'created_at': datetime.datetime(2015, 1, 22, 11, 45, 31), 'deleted': '0', 'deleted_at': None, 'extra_specs': fake_extra_specs, 'required_extra_specs': {}, 'id': fake_share_type_id, 'name': u'test_with_extra', 'updated_at': None } } fake_type_update = { 'test_type_update': { 'created_at': datetime.datetime(2019, 9, 9, 14, 40, 31), 'deleted': '0', 'deleted_at': None, 'extra_specs': {u'gold': u'True'}, 'required_extra_specs': {}, 'id': '888', 'name': 'new_name', 'is_public': True, 'description': 'new_description', 'updated_at': None } } fake_r_extra_specs = { u'gold': u'True', u'driver_handles_share_servers': u'True' } fake_r_required_extra_specs = { u'driver_handles_share_servers': u'True' } fake_r_type_extra = { 'test_with_extra': { 'created_at': datetime.datetime(2015, 1, 22, 11, 45, 31), 'deleted': '0', 'deleted_at': None, 'extra_specs': fake_r_extra_specs, 'required_extra_specs': fake_r_required_extra_specs, 'id': fake_share_type_id, 'name': u'test_with_extra', 'updated_at': None } } fake_required_extra_specs = { constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: 'true', } fake_optional_extra_specs = { constants.ExtraSpecs.SNAPSHOT_SUPPORT: 'true', constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT: 'false', constants.ExtraSpecs.REVERT_TO_SNAPSHOT_SUPPORT: 'false', } fake_type_w_valid_extra = { 'test_with_extra': { 'created_at': datetime.datetime(2015, 1, 22, 11, 45, 31), 'deleted': '0', 'deleted_at': None, 'extra_specs': fake_required_extra_specs, 'required_extra_specs': fake_required_extra_specs, 'id': u'fooid-2', 'name': u'test_with_extra', 'updated_at': None } } fake_types = fake_type.copy() fake_types.update(fake_type_w_extra) fake_types.update(fake_type_w_valid_extra) fake_share = {'id': u'fooid-1', 'share_type_id': fake_share_type_id} def setUp(self): super(ShareTypesTestCase, self).setUp() self.context = context.get_admin_context() @ddt.data({}, fake_type, fake_type_w_extra, fake_types) def test_get_all_types(self, share_type): self.mock_object(db, 'share_type_get_all', mock.Mock(return_value=copy.deepcopy(share_type))) returned_type = share_types.get_all_types(self.context) self.assertItemsEqual(share_type, returned_type) def test_get_all_types_search(self): 
share_type = self.fake_type_w_extra search_filter = {'extra_specs': {'gold': 'True'}, 'is_public': True} self.mock_object(db, 'share_type_get_all', mock.Mock(return_value=share_type)) returned_type = share_types.get_all_types(self.context, search_opts=search_filter) db.share_type_get_all.assert_called_once_with( mock.ANY, 0, filters={'is_public': True}) self.assertItemsEqual(share_type, returned_type) search_filter = {'extra_specs': {'gold': 'False'}} expected_types = {} returned_types = share_types.get_all_types(self.context, search_opts=search_filter) self.assertEqual(expected_types, returned_types) share_type = self.fake_r_type_extra search_filter = {'extra_specs': {'gold': 'True'}} returned_type = share_types.get_all_types(self.context, search_opts=search_filter) self.assertItemsEqual(share_type, returned_type) @ddt.data("nova", "supernova,nova", "supernova", "nova,hypernova,supernova") def test_get_all_types_search_by_availability_zone(self, search_azs): all_share_types = { 'gold': { 'extra_specs': { 'somepoolcap': 'somevalue', 'availability_zones': 'nova,supernova,hypernova', }, 'required_extra_specs': { 'driver_handles_share_servers': True, }, 'id': '1e8f93a8-9669-4467-88a0-7b8229a9a609', 'name': u'gold-share-type', 'is_public': True, }, 'silver': { 'extra_specs': { 'somepoolcap': 'somevalue', 'availability_zones': 'nova,supernova', }, 'required_extra_specs': { 'driver_handles_share_servers': False, }, 'id': '39a7b9a8-8c76-4b49-aed3-60b718d54325', 'name': u'silver-share-type', 'is_public': True, }, 'bronze': { 'extra_specs': { 'somepoolcap': 'somevalue', 'availability_zones': 'milkyway,andromeda', }, 'required_extra_specs': { 'driver_handles_share_servers': True, }, 'id': '5a55a54d-6688-49b4-9344-bfc2d9634f70', 'name': u'bronze-share-type', 'is_public': True, }, 'default': { 'extra_specs': { 'somepoolcap': 'somevalue', }, 'required_extra_specs': { 'driver_handles_share_servers': True, }, 'id': '5a55a54d-6688-49b4-9344-bfc2d9634f70', 'name': u'bronze-share-type', 'is_public': True, } } self.mock_object( db, 'share_type_get_all', mock.Mock(return_value=all_share_types)) self.mock_object(share_types, 'get_valid_required_extra_specs') search_opts = { 'extra_specs': { 'somepoolcap': 'somevalue', 'availability_zones': search_azs }, 'is_public': True, } returned_types = share_types.get_all_types( self.context, search_opts=search_opts) db.share_type_get_all.assert_called_once_with( mock.ANY, 0, filters={'is_public': True}) expected_return_types = (['gold', 'silver', 'default'] if len(search_azs.split(',')) < 3 else ['gold', 'default']) self.assertItemsEqual(expected_return_types, returned_types) def test_get_share_type_extra_specs(self): share_type = self.fake_type_w_extra['test_with_extra'] self.mock_object(db, 'share_type_get', mock.Mock(return_value=share_type)) id = share_type['id'] extra_spec = share_types.get_share_type_extra_specs(id, key='gold') self.assertEqual(share_type['extra_specs']['gold'], extra_spec) extra_spec = share_types.get_share_type_extra_specs(id) self.assertEqual(share_type['extra_specs'], extra_spec) def test_get_extra_specs_from_share(self): expected = self.fake_extra_specs self.mock_object(share_types, 'get_share_type_extra_specs', mock.Mock(return_value=expected)) spec_value = share_types.get_extra_specs_from_share(self.fake_share) self.assertEqual(expected, spec_value) share_types.get_share_type_extra_specs.assert_called_once_with( self.fake_share_type_id) def test_update_share_type(self): expected = self.fake_type_update['test_type_update'] 
self.mock_object(db, 'share_type_update', mock.Mock(side_effect=return_share_type_update)) self.mock_object(db, 'share_type_get', mock.Mock(return_value=expected)) new_name = "new_name" new_description = "new_description" is_public = True self.assertRaises(exception.ShareTypeUpdateFailed, share_types.update, self.context, id='444', name=new_name, description=new_description, is_public=is_public) share_types.update(self.context, '888', new_name, new_description, is_public) st_update = share_types.get_share_type(self.context, '888') self.assertEqual(new_name, st_update['name']) self.assertEqual(new_description, st_update['description']) self.assertEqual(is_public, st_update['is_public']) @ddt.data({}, {"fake": "fake"}) def test_create_without_required_extra_spec(self, optional_specs): specs = copy.copy(self.fake_required_extra_specs) del specs['driver_handles_share_servers'] specs.update(optional_specs) self.assertRaises(exception.InvalidShareType, share_types.create, self.context, "fake_share_type", specs) @ddt.data({"snapshot_support": "fake"}) def test_create_with_invalid_optional_extra_spec(self, optional_specs): specs = copy.copy(self.fake_required_extra_specs) specs.update(optional_specs) self.assertRaises(exception.InvalidShareType, share_types.create, self.context, "fake_share_type", specs) def test_get_required_extra_specs(self): result = share_types.get_required_extra_specs() self.assertEqual(constants.ExtraSpecs.REQUIRED, result) def test_get_optional_extra_specs(self): result = share_types.get_optional_extra_specs() self.assertEqual(constants.ExtraSpecs.OPTIONAL, result) def test_get_tenant_visible_extra_specs(self): result = share_types.get_tenant_visible_extra_specs() self.assertEqual(constants.ExtraSpecs.TENANT_VISIBLE, result) def test_get_boolean_extra_specs(self): result = share_types.get_boolean_extra_specs() self.assertEqual(constants.ExtraSpecs.BOOLEAN, result) def test_is_valid_required_extra_spec_other(self): actual_result = share_types.is_valid_required_extra_spec( 'fake', 'fake') self.assertIsNone(actual_result) @ddt.data(*itertools.product( constants.ExtraSpecs.REQUIRED, strutils.TRUE_STRINGS + strutils.FALSE_STRINGS)) @ddt.unpack def test_is_valid_required_extra_spec_valid(self, key, value): actual_result = share_types.is_valid_required_extra_spec(key, value) self.assertTrue(actual_result) @ddt.data('invalid', {}, '0000000000') def test_is_valid_required_extra_spec_invalid(self, value): key = constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS actual_result = share_types.is_valid_required_extra_spec(key, value) self.assertFalse(actual_result) @ddt.data({}, {'another_key': True}) def test_get_valid_required_extra_specs_valid(self, optional_specs): specs = copy.copy(self.fake_required_extra_specs) specs.update(optional_specs) actual_result = share_types.get_valid_required_extra_specs(specs) self.assertEqual(self.fake_required_extra_specs, actual_result) @ddt.data(None, {}, {constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: 'fake'}) def test_get_valid_required_extra_specs_invalid(self, extra_specs): self.assertRaises(exception.InvalidExtraSpec, share_types.get_valid_required_extra_specs, extra_specs) @ddt.data(*( list(itertools.product( (constants.ExtraSpecs.SNAPSHOT_SUPPORT, constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT, constants.ExtraSpecs.REVERT_TO_SNAPSHOT_SUPPORT, constants.ExtraSpecs.MOUNT_SNAPSHOT_SUPPORT), strutils.TRUE_STRINGS + strutils.FALSE_STRINGS)) + list(itertools.product( (constants.ExtraSpecs.REPLICATION_TYPE_SPEC,), 
constants.ExtraSpecs.REPLICATION_TYPES)) + [(constants.ExtraSpecs.AVAILABILITY_ZONES, 'zone a, zoneb$c'), (constants.ExtraSpecs.AVAILABILITY_ZONES, ' zonea, zoneb'), (constants.ExtraSpecs.AVAILABILITY_ZONES, 'zone1')] )) @ddt.unpack def test_is_valid_optional_extra_spec_valid(self, key, value): result = share_types.is_valid_optional_extra_spec(key, value) self.assertTrue(result) def test_is_valid_optional_extra_spec_valid_unknown_key(self): result = share_types.is_valid_optional_extra_spec('fake', 'fake') self.assertIsNone(result) def test_get_valid_optional_extra_specs(self): extra_specs = copy.copy(self.fake_required_extra_specs) extra_specs.update(self.fake_optional_extra_specs) extra_specs.update({'fake': 'fake'}) result = share_types.get_valid_optional_extra_specs(extra_specs) self.assertEqual(self.fake_optional_extra_specs, result) def test_get_valid_optional_extra_specs_empty(self): result = share_types.get_valid_optional_extra_specs({}) self.assertEqual({}, result) @ddt.data({constants.ExtraSpecs.SNAPSHOT_SUPPORT: 'fake'}, {constants.ExtraSpecs.AVAILABILITY_ZONES: 'ZoneA,'}) def test_get_valid_optional_extra_specs_invalid(self, extra_specs): self.assertRaises(exception.InvalidExtraSpec, share_types.get_valid_optional_extra_specs, extra_specs) @ddt.data(' az 1, az2 ,az 3 ', 'az 1,az2,az 3 ', None) def test_sanitize_extra_specs(self, spec_value): extra_specs = { constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: 'True', constants.ExtraSpecs.SNAPSHOT_SUPPORT: 'True', constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT: 'False' } expected_specs = copy.copy(extra_specs) if spec_value is not None: extra_specs[constants.ExtraSpecs.AVAILABILITY_ZONES] = spec_value expected_specs['availability_zones'] = 'az 1,az2,az 3' self.assertDictMatch(expected_specs, share_types.sanitize_extra_specs(extra_specs)) def test_add_access(self): project_id = '456' extra_specs = { constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: 'true', constants.ExtraSpecs.SNAPSHOT_SUPPORT: 'true', constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT: 'false', } share_type = share_types.create(self.context, 'type1', extra_specs) share_type_id = share_type.get('id') share_types.add_share_type_access(self.context, share_type_id, project_id) stype_access = db.share_type_access_get_all(self.context, share_type_id) self.assertIn(project_id, [a.project_id for a in stype_access]) def test_add_access_invalid(self): self.assertRaises(exception.InvalidShareType, share_types.add_share_type_access, 'fake', None, 'fake') def test_remove_access(self): project_id = '456' extra_specs = { constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: 'true', constants.ExtraSpecs.SNAPSHOT_SUPPORT: 'true', constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT: 'false', } share_type = share_types.create( self.context, 'type1', projects=['456'], extra_specs=extra_specs) share_type_id = share_type.get('id') share_types.remove_share_type_access(self.context, share_type_id, project_id) stype_access = db.share_type_access_get_all(self.context, share_type_id) self.assertNotIn(project_id, stype_access) def test_remove_access_invalid(self): self.assertRaises(exception.InvalidShareType, share_types.remove_share_type_access, 'fake', None, 'fake') @ddt.data({'spec_value': ' True', 'expected': True}, {'spec_value': 'true', 'expected': True}, {'spec_value': ' False', 'expected': False}, {'spec_value': 'false', 'expected': False}, {'spec_value': u' FaLsE ', 'expected': False}) @ddt.unpack def test_parse_boolean_extra_spec(self, spec_value, expected): result = 
share_types.parse_boolean_extra_spec('fake_key', spec_value) self.assertEqual(expected, result) @ddt.data(' True', ' Wrong', None, 5) def test_parse_boolean_extra_spec_invalid(self, spec_value): self.assertRaises(exception.InvalidExtraSpec, share_types.parse_boolean_extra_spec, 'fake_key', spec_value) manila-10.0.0/manila/tests/test_network.py0000664000175000017500000001565713656750227020613 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt from oslo_config import cfg from oslo_utils import importutils from manila import exception from manila import network from manila import test CONF = cfg.CONF @ddt.ddt class APITestCase(test.TestCase): def setUp(self): super(APITestCase, self).setUp() self.mock_object(importutils, 'import_class') def test_init_api_with_default_config_group_name(self): network.API() importutils.import_class.assert_called_once_with( CONF.network_api_class) importutils.import_class.return_value.assert_called_once_with( config_group_name=None, label='user') def test_init_api_with_custom_config_group_name(self): group_name = 'FOO_GROUP_NAME' network.API(config_group_name=group_name) importutils.import_class.assert_called_once_with( getattr(CONF, group_name).network_api_class) importutils.import_class.return_value.assert_called_once_with( config_group_name=group_name, label='user') def test_init_api_with_custom_config_group_name_and_label(self): group_name = 'FOO_GROUP_NAME' label = 'custom_label' network.API(config_group_name=group_name, label=label) importutils.import_class.assert_called_once_with( getattr(CONF, group_name).network_api_class) importutils.import_class.return_value.assert_called_once_with( config_group_name=group_name, label=label) @ddt.ddt class NetworkBaseAPITestCase(test.TestCase): def setUp(self): super(NetworkBaseAPITestCase, self).setUp() self.db_driver = 'fake_driver' self.mock_object(importutils, 'import_module') def test_inherit_network_base_api_no_redefinitions(self): class FakeNetworkAPI(network.NetworkBaseAPI): pass self.assertRaises(TypeError, FakeNetworkAPI) def test_inherit_network_base_api_deallocate_not_redefined(self): class FakeNetworkAPI(network.NetworkBaseAPI): def allocate_network(self, *args, **kwargs): pass def manage_network_allocations( self, context, allocations, share_server, share_network=None): pass def unmanage_network_allocations(self, context, share_server_id): pass self.assertRaises(TypeError, FakeNetworkAPI) def test_inherit_network_base_api_allocate_not_redefined(self): class FakeNetworkAPI(network.NetworkBaseAPI): def deallocate_network(self, *args, **kwargs): pass def manage_network_allocations( self, context, allocations, share_server, share_network=None): pass def unmanage_network_allocations(self, context, share_server_id): pass self.assertRaises(TypeError, FakeNetworkAPI) def test_inherit_network_base_api(self): class FakeNetworkAPI(network.NetworkBaseAPI): def allocate_network(self, *args, **kwargs): pass def deallocate_network(self, *args, 
**kwargs): pass def manage_network_allocations( self, context, allocations, share_server, share_network=None): pass def unmanage_network_allocations(self, context, share_server_id): pass result = FakeNetworkAPI() self.assertTrue(hasattr(result, '_verify_share_network')) self.assertTrue(hasattr(result, 'allocate_network')) self.assertTrue(hasattr(result, 'deallocate_network')) def test__verify_share_network_ok(self): class FakeNetworkAPI(network.NetworkBaseAPI): def allocate_network(self, *args, **kwargs): pass def deallocate_network(self, *args, **kwargs): pass def manage_network_allocations( self, context, allocations, share_server, share_network=None): pass def unmanage_network_allocations(self, context, share_server_id): pass result = FakeNetworkAPI() result._verify_share_network('foo_id', {'id': 'bar_id'}) def test__verify_share_network_fail(self): class FakeNetworkAPI(network.NetworkBaseAPI): def allocate_network(self, *args, **kwargs): pass def deallocate_network(self, *args, **kwargs): pass def manage_network_allocations( self, context, allocations, share_server, share_network=None): pass def unmanage_network_allocations(self, context, share_server_id): pass result = FakeNetworkAPI() self.assertRaises( exception.NetworkBadConfigurationException, result._verify_share_network, 'foo_id', None) @ddt.data((True, False, set([6])), (False, True, set([4])), (True, True, set([4, 6])), (False, False, set())) @ddt.unpack def test_enabled_ip_versions(self, network_plugin_ipv6_enabled, network_plugin_ipv4_enabled, enable_ip_versions): class FakeNetworkAPI(network.NetworkBaseAPI): def allocate_network(self, *args, **kwargs): pass def deallocate_network(self, *args, **kwargs): pass def manage_network_allocations( self, context, allocations, share_server, share_network=None): pass def unmanage_network_allocations(self, context, share_server_id): pass network.CONF.set_default('network_plugin_ipv6_enabled', network_plugin_ipv6_enabled) network.CONF.set_default('network_plugin_ipv4_enabled', network_plugin_ipv4_enabled) result = FakeNetworkAPI() if enable_ip_versions: self.assertTrue(hasattr(result, 'enabled_ip_versions')) self.assertEqual(enable_ip_versions, result.enabled_ip_versions) else: self.assertRaises(exception.NetworkBadConfigurationException, getattr, result, 'enabled_ip_versions') manila-10.0.0/manila/tests/network/0000775000175000017500000000000013656750362017164 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/network/__init__.py0000664000175000017500000000000013656750227021263 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/network/linux/0000775000175000017500000000000013656750362020323 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/network/linux/__init__.py0000664000175000017500000000000013656750227022422 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/network/linux/test_ovs_lib.py0000664000175000017500000000551513656750227023377 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
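# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the original source: the TypeError checks
# in the NetworkBaseAPI tests above come from Python's abc machinery -- a
# subclass that leaves any abstract method undefined cannot be instantiated.
# A minimal standalone example of the same pattern (all names hypothetical):
import abc


class BaseNetworkAPI(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def allocate_network(self, *args, **kwargs):
        """Allocate network resources for a share server."""

    @abc.abstractmethod
    def deallocate_network(self, *args, **kwargs):
        """Release previously allocated network resources."""


class IncompleteAPI(BaseNetworkAPI):
    def allocate_network(self, *args, **kwargs):
        return None
    # deallocate_network is not redefined, so IncompleteAPI() raises TypeError.
# ---------------------------------------------------------------------------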
from unittest import mock

from manila.network.linux import ovs_lib
from manila import test


class OVS_Lib_Test(test.TestCase):
    """A test suite to exercise the OVS libraries."""

    def setUp(self):
        super(OVS_Lib_Test, self).setUp()
        self.BR_NAME = "br-int"
        self.TO = "--timeout=2"

        self.br = ovs_lib.OVSBridge(self.BR_NAME)
        self.execute_p = mock.patch('manila.utils.execute')
        self.execute = self.execute_p.start()

    def tearDown(self):
        self.execute_p.stop()
        super(OVS_Lib_Test, self).tearDown()

    def test_reset_bridge(self):
        self.br.reset_bridge()
        self.execute.assert_has_calls(
            [mock.call("ovs-vsctl", self.TO, "--", "--if-exists",
                       "del-br", self.BR_NAME, run_as_root=True),
             mock.call("ovs-vsctl", self.TO, "add-br", self.BR_NAME,
                       run_as_root=True)])

    def test_delete_port(self):
        pname = "tap5"
        self.br.delete_port(pname)
        self.execute.assert_called_once_with(
            "ovs-vsctl", self.TO, "--", "--if-exists",
            "del-port", self.BR_NAME, pname, run_as_root=True)

    def test_port_id_regex(self):
        result = ('external_ids : {attached-mac="fa:16:3e:23:5b:f2",'
                  ' iface-id="5c1321a7-c73f-4a77-95e6-9f86402e5c8f",'
                  ' iface-status=active}\nname :'
                  ' "dhc5c1321a7-c7"\nofport : 2\n')
        match = self.br.re_id.search(result)
        vif_mac = match.group('vif_mac')
        vif_id = match.group('vif_id')
        port_name = match.group('port_name')
        ofport = int(match.group('ofport'))
        self.assertEqual('fa:16:3e:23:5b:f2', vif_mac)
        self.assertEqual('5c1321a7-c73f-4a77-95e6-9f86402e5c8f', vif_id)
        self.assertEqual('dhc5c1321a7-c7', port_name)
        self.assertEqual(2, ofport)
manila-10.0.0/manila/tests/network/linux/test_interface.py0000664000175000017500000002511113656750227023674 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
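# ---------------------------------------------------------------------------
# Illustrative sketch, not from the original tree: the OVS_Lib_Test assertions
# above check the exact ovs-vsctl argument lists that OVSBridge passes to
# manila.utils.execute. The command shape they assert can be modelled
# standalone like this (the build_* helpers are hypothetical names):
def build_reset_bridge_cmds(bridge, timeout=2):
    """Return the two ovs-vsctl invocations used to recreate a bridge."""
    to = '--timeout=%d' % timeout
    return [
        ('ovs-vsctl', to, '--', '--if-exists', 'del-br', bridge),
        ('ovs-vsctl', to, 'add-br', bridge),
    ]


def build_delete_port_cmd(bridge, port, timeout=2):
    """Return the ovs-vsctl invocation used to drop a port if it exists."""
    return ('ovs-vsctl', '--timeout=%d' % timeout, '--', '--if-exists',
            'del-port', bridge, port)
# ---------------------------------------------------------------------------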
from unittest import mock import six from manila.network.linux import interface from manila.network.linux import ip_lib from manila import test from manila.tests import conf_fixture from manila.tests import fake_network from manila import utils class BaseChild(interface.LinuxInterfaceDriver): def plug(self, *args): pass def unplug(self, *args): pass FakeSubnet = { 'cidr': '192.168.1.1/24', } FakeAllocation = { 'subnet': FakeSubnet, 'ip_address': '192.168.1.2', 'ip_version': 4, } FakePort = { 'id': 'abcdef01-1234-5678-90ab-ba0987654321', 'fixed_ips': [FakeAllocation], 'device_id': 'cccccccc-cccc-cccc-cccc-cccccccccccc', } class TestBase(test.TestCase): def setUp(self): super(TestBase, self).setUp() self.conf = conf_fixture.CONF self.conf.register_opts(interface.OPTS) self.ip_dev_p = mock.patch.object(ip_lib, 'IPDevice') self.ip_dev = self.ip_dev_p.start() self.ip_p = mock.patch.object(ip_lib, 'IPWrapper') self.ip = self.ip_p.start() self.device_exists_p = mock.patch.object(ip_lib, 'device_exists') self.device_exists = self.device_exists_p.start() self.addCleanup(self.ip_dev_p.stop) self.addCleanup(self.ip_p.stop) self.addCleanup(self.device_exists_p.stop) class TestABCDriver(TestBase): def test_verify_abs_class_has_abs_methods(self): class ICanNotBeInstancetiated(interface.LinuxInterfaceDriver): pass try: # pylint: disable=abstract-class-instantiated ICanNotBeInstancetiated() except TypeError: pass except Exception as e: self.fail("Unexpected exception thrown: '%s'" % six.text_type(e)) else: self.fail("ExpectedException 'TypeError' not thrown.") def test_get_device_name(self): bc = BaseChild() device_name = bc.get_device_name(FakePort) self.assertEqual('tapabcdef01-12', device_name) def test_l3_init(self): addresses = [dict(ip_version=4, scope='global', dynamic=False, cidr='172.16.77.240/24')] self.ip_dev().addr.list = mock.Mock(return_value=addresses) bc = BaseChild() self.mock_object(bc, '_remove_outdated_interfaces') ns = '12345678-1234-5678-90ab-ba0987654321' bc.init_l3('tap0', ['192.168.1.2/24'], namespace=ns, clear_cidrs=['192.168.0.0/16']) self.ip_dev.assert_has_calls( [mock.call('tap0', namespace=ns), mock.call().route.clear_outdated_routes('192.168.0.0/16'), mock.call().addr.list(scope='global', filters=['permanent']), mock.call().addr.add(4, '192.168.1.2/24', '192.168.1.255'), mock.call().addr.delete(4, '172.16.77.240/24'), mock.call().route.pullup_route('tap0')]) bc._remove_outdated_interfaces.assert_called_with(self.ip_dev()) def test__remove_outdated_interfaces(self): device = fake_network.FakeDevice( 'foobarquuz', [dict(ip_version=4, cidr='1.0.0.0/27')]) devices = [fake_network.FakeDevice('foobar')] self.ip().get_devices = mock.Mock(return_value=devices) bc = BaseChild() self.mock_object(bc, 'unplug') bc._remove_outdated_interfaces(device) bc.unplug.assert_called_once_with('foobar') def test__get_set_of_device_cidrs(self): device = fake_network.FakeDevice('foo') expected = set(('1.0.0.0/27', '2.0.0.0/27')) bc = BaseChild() result = bc._get_set_of_device_cidrs(device) self.assertEqual(expected, result) def test__get_set_of_device_cidrs_exception(self): device = fake_network.FakeDevice('foo') self.mock_object(device.addr, 'list', mock.Mock( side_effect=Exception('foo does not exist'))) bc = BaseChild() result = bc._get_set_of_device_cidrs(device) self.assertEqual(set(), result) class TestNoopInterfaceDriver(TestBase): def test_init_l3(self): self.ip.assert_not_called() self.ip_dev.assert_not_called() def test_plug(self): self.ip.assert_not_called() 
self.ip_dev.assert_not_called() def test_unplug(self): self.ip.assert_not_called() self.ip_dev.assert_not_called() class TestOVSInterfaceDriver(TestBase): def test_get_device_name(self): br = interface.OVSInterfaceDriver() device_name = br.get_device_name(FakePort) self.assertEqual('tapabcdef01-12', device_name) def test_plug_no_ns(self): self._test_plug() def test_plug_with_ns(self): self._test_plug(namespace='01234567-1234-1234-99') def test_plug_alt_bridge(self): self._test_plug(bridge='br-foo') def _test_plug(self, additional_expectation=None, bridge=None, namespace=None): if additional_expectation is None: additional_expectation = [] if not bridge: bridge = 'br-int' def device_exists(dev, namespace=None): return dev == bridge vsctl_cmd = ['ovs-vsctl', '--', '--may-exist', 'add-port', bridge, 'tap0', '--', 'set', 'Interface', 'tap0', 'type=internal', '--', 'set', 'Interface', 'tap0', 'external-ids:iface-id=port-1234', '--', 'set', 'Interface', 'tap0', 'external-ids:iface-status=active', '--', 'set', 'Interface', 'tap0', 'external-ids:attached-mac=aa:bb:cc:dd:ee:ff'] with mock.patch.object(utils, 'execute') as execute: ovs = interface.OVSInterfaceDriver() self.device_exists.side_effect = device_exists ovs.plug('tap0', 'port-1234', 'aa:bb:cc:dd:ee:ff', bridge=bridge, namespace=namespace) execute.assert_called_once_with(*vsctl_cmd, run_as_root=True) expected = [mock.call(), mock.call().device('tap0'), mock.call().device().link.set_address('aa:bb:cc:dd:ee:ff')] expected.extend(additional_expectation) if namespace: expected.extend( [mock.call().ensure_namespace(namespace), mock.call().ensure_namespace().add_device_to_namespace( mock.ANY)]) expected.extend([mock.call().device().link.set_up()]) self.ip.assert_has_calls(expected) def test_plug_reset_mac(self): fake_mac_addr = 'aa:bb:cc:dd:ee:ff' self.device_exists.return_value = True self.ip().device().link.address = mock.Mock(return_value=fake_mac_addr) ovs = interface.OVSInterfaceDriver() ovs.plug('tap0', 'port-1234', 'ff:ee:dd:cc:bb:aa', bridge='br-int') expected = [mock.call(), mock.call().device('tap0'), mock.call().device().link.set_address('ff:ee:dd:cc:bb:aa'), mock.call().device().link.set_up()] self.ip.assert_has_calls(expected) def test_unplug(self, bridge=None): if not bridge: bridge = 'br-int' with mock.patch('manila.network.linux.ovs_lib.OVSBridge') as ovs_br: ovs = interface.OVSInterfaceDriver() ovs.unplug('tap0') ovs_br.assert_has_calls([mock.call(bridge), mock.call().delete_port('tap0')]) class TestBridgeInterfaceDriver(TestBase): def test_get_device_name(self): br = interface.BridgeInterfaceDriver() device_name = br.get_device_name(FakePort) self.assertEqual('ns-abcdef01-12', device_name) def test_plug_no_ns(self): self._test_plug() def test_plug_with_ns(self): self._test_plug(namespace='01234567-1234-1234-99') def _test_plug(self, namespace=None, mtu=None): def device_exists(device, root_helper=None, namespace=None): return device.startswith('brq') root_veth = mock.Mock() ns_veth = mock.Mock() self.ip().add_veth = mock.Mock(return_value=(root_veth, ns_veth)) self.device_exists.side_effect = device_exists br = interface.BridgeInterfaceDriver() mac_address = 'aa:bb:cc:dd:ee:ff' br.plug('ns-0', 'port-1234', mac_address, namespace=namespace) ip_calls = [mock.call(), mock.call().add_veth('tap0', 'ns-0', namespace2=namespace)] ns_veth.assert_has_calls([mock.call.link.set_address(mac_address)]) self.ip.assert_has_calls(ip_calls) root_veth.assert_has_calls([mock.call.link.set_up()]) ns_veth.assert_has_calls([mock.call.link.set_up()]) 
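# NOTE(editor): illustrative sketch, not part of the upstream test module.
# The TestOVSInterfaceDriver and TestBridgeInterfaceDriver cases above pin
# down the LinuxInterfaceDriver contract used elsewhere in manila: plug()
# takes the device name, the Neutron port id and a MAC address (plus optional
# bridge and namespace), while unplug() takes just the device name. A minimal
# sketch, assuming only that contract:
#
#     from manila.network.linux import interface
#
#     def attach_share_port(device, port_id, mac, namespace=None):
#         # Hypothetical helper: wire a tap device into br-int and return a
#         # callable that detaches it again.
#         driver = interface.OVSInterfaceDriver()
#         driver.plug(device, port_id, mac, bridge='br-int',
#                     namespace=namespace)
#         return lambda: driver.unplug(device)
#
# The helper and its parameters are assumptions for illustration; the plug()
# and unplug() signatures are taken from the calls exercised in these tests.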
def test_plug_dev_exists(self): self.device_exists.return_value = True with mock.patch('manila.network.linux.interface.LOG.warning') as log: br = interface.BridgeInterfaceDriver() br.plug('port-1234', 'tap0', 'aa:bb:cc:dd:ee:ff') self.ip_dev.assert_has_calls([]) self.assertEqual(1, log.call_count) def test_unplug_no_device(self): self.device_exists.return_value = False self.ip_dev().link.delete.side_effect = RuntimeError with mock.patch('manila.network.linux.interface.LOG') as log: br = interface.BridgeInterfaceDriver() br.unplug('tap0') [mock.call(), mock.call('tap0'), mock.call().link.delete()] self.assertEqual(1, log.error.call_count) def test_unplug(self): self.device_exists.return_value = True with mock.patch('manila.network.linux.interface.LOG.debug') as log: br = interface.BridgeInterfaceDriver() br.unplug('tap0') self.assertTrue(log.called) self.ip_dev.assert_has_calls([mock.call('tap0', None), mock.call().link.delete()]) manila-10.0.0/manila/tests/network/linux/test_ip_lib.py0000664000175000017500000007172013656750227023201 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock from manila.network.linux import ip_lib from manila import test NETNS_SAMPLE = [ '12345678-1234-5678-abcd-1234567890ab', 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb', 'cccccccc-cccc-cccc-cccc-cccccccccccc'] LINK_SAMPLE = [ '1: lo: mtu 16436 qdisc noqueue state UNKNOWN \\' 'link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00', '2: eth0: mtu 1500 qdisc mq state UP ' 'qlen 1000\\ link/ether cc:dd:ee:ff:ab:cd brd ff:ff:ff:ff:ff:ff' '\\ alias openvswitch', '3: br-int: mtu 1500 qdisc noop state DOWN ' '\\ link/ether aa:bb:cc:dd:ee:ff brd ff:ff:ff:ff:ff:ff', '4: gw-ddc717df-49: mtu 1500 qdisc noop ' 'state DOWN \\ link/ether fe:dc:ba:fe:dc:ba brd ff:ff:ff:ff:ff:ff', '5: eth0.50@eth0: mtu 1500 qdisc ' ' noqueue master brq0b24798c-07 state UP mode DEFAULT' '\\ link/ether ab:04:49:b6:ab:a0 brd ff:ff:ff:ff:ff:ff'] ADDR_SAMPLE = (""" 2: eth0: mtu 1500 qdisc mq state UP qlen 1000 link/ether dd:cc:aa:b9:76:ce brd ff:ff:ff:ff:ff:ff inet 172.16.77.240/24 brd 172.16.77.255 scope global eth0 inet6 2001:470:9:1224:5595:dd51:6ba2:e788/64 scope global temporary dynamic valid_lft 14187sec preferred_lft 3387sec inet6 2001:470:9:1224:fd91:272:581e:3a32/64 scope global temporary """ """deprecated dynamic valid_lft 14187sec preferred_lft 0sec inet6 2001:470:9:1224:4508:b885:5fb:740b/64 scope global temporary """ """deprecated dynamic valid_lft 14187sec preferred_lft 0sec inet6 2001:470:9:1224:dfcc:aaff:feb9:76ce/64 scope global dynamic valid_lft 14187sec preferred_lft 3387sec inet6 fe80::dfcc:aaff:feb9:76ce/64 scope link valid_lft forever preferred_lft forever """) ADDR_SAMPLE2 = (""" 2: eth0: mtu 1500 qdisc mq state UP qlen 1000 link/ether dd:cc:aa:b9:76:ce brd ff:ff:ff:ff:ff:ff inet 172.16.77.240/24 scope global eth0 inet6 2001:470:9:1224:5595:dd51:6ba2:e788/64 scope global temporary dynamic valid_lft 14187sec preferred_lft 3387sec inet6 
2001:470:9:1224:fd91:272:581e:3a32/64 scope global temporary """ """deprecated dynamic valid_lft 14187sec preferred_lft 0sec inet6 2001:470:9:1224:4508:b885:5fb:740b/64 scope global temporary """ """deprecated dynamic valid_lft 14187sec preferred_lft 0sec inet6 2001:470:9:1224:dfcc:aaff:feb9:76ce/64 scope global dynamic valid_lft 14187sec preferred_lft 3387sec inet6 fe80::dfcc:aaff:feb9:76ce/64 scope link valid_lft forever preferred_lft forever """) GATEWAY_SAMPLE1 = (""" default via 10.35.19.254 metric 100 10.35.16.0/22 proto kernel scope link src 10.35.17.97 """) GATEWAY_SAMPLE2 = (""" default via 10.35.19.254 metric 100 """) GATEWAY_SAMPLE3 = (""" 10.35.16.0/22 proto kernel scope link src 10.35.17.97 """) GATEWAY_SAMPLE4 = (""" default via 10.35.19.254 """) GATEWAY_SAMPLE5 = (""" default via 172.24.47.1 dev eth0 10.0.0.0/24 dev tapc226b810-a0 proto kernel scope link src 10.0.0.3 10.254.0.0/28 dev tap6de90453-1c proto kernel scope link src 10.254.0.4 10.35.16.0/22 proto kernel scope link src 10.35.17.97 172.24.4.0/24 via 10.35.19.254 metric 100 """) DEVICE_ROUTE_SAMPLE = ("10.0.0.0/24 scope link src 10.0.0.2") SUBNET_SAMPLE1 = ("10.0.0.0/24 dev qr-23380d11-d2 scope link src 10.0.0.1\n" "10.0.0.0/24 dev tap1d7888a7-10 scope link src 10.0.0.2") SUBNET_SAMPLE2 = ("10.0.0.0/24 dev tap1d7888a7-10 scope link src 10.0.0.2\n" "10.0.0.0/24 dev qr-23380d11-d2 scope link src 10.0.0.1") class TestSubProcessBase(test.TestCase): def setUp(self): super(TestSubProcessBase, self).setUp() self.execute_p = mock.patch('manila.utils.execute') self.execute = self.execute_p.start() def tearDown(self): self.execute_p.stop() super(TestSubProcessBase, self).tearDown() def test_execute_wrapper(self): ip_lib.SubProcessBase._execute('o', 'link', ('list',)) self.execute.assert_called_once_with('ip', '-o', 'link', 'list', run_as_root=False) def test_execute_wrapper_int_options(self): ip_lib.SubProcessBase._execute([4], 'link', ('list',)) self.execute.assert_called_once_with('ip', '-4', 'link', 'list', run_as_root=False) def test_execute_wrapper_no_options(self): ip_lib.SubProcessBase._execute([], 'link', ('list',)) self.execute.assert_called_once_with('ip', 'link', 'list', run_as_root=False) def test_run_no_namespace(self): base = ip_lib.SubProcessBase() base._run([], 'link', ('list',)) self.execute.assert_called_once_with('ip', 'link', 'list', run_as_root=False) def test_run_namespace(self): base = ip_lib.SubProcessBase('ns') base._run([], 'link', ('list',)) self.execute.assert_called_once_with('ip', 'netns', 'exec', 'ns', 'ip', 'link', 'list', run_as_root=True) def test_as_root_namespace(self): base = ip_lib.SubProcessBase('ns') base._as_root([], 'link', ('list',)) self.execute.assert_called_once_with('ip', 'netns', 'exec', 'ns', 'ip', 'link', 'list', run_as_root=True) class TestIpWrapper(test.TestCase): def setUp(self): super(TestIpWrapper, self).setUp() self.execute_p = mock.patch.object(ip_lib.IPWrapper, '_execute') self.execute = self.execute_p.start() def tearDown(self): self.execute_p.stop() super(TestIpWrapper, self).tearDown() def test_get_devices(self): self.execute.return_value = '\n'.join(LINK_SAMPLE) retval = ip_lib.IPWrapper().get_devices() self.assertEqual([ip_lib.IPDevice('lo'), ip_lib.IPDevice('eth0'), ip_lib.IPDevice('br-int'), ip_lib.IPDevice('gw-ddc717df-49'), ip_lib.IPDevice('eth0.50')], retval) self.execute.assert_called_once_with('o', 'link', ('list',), None) def test_get_devices_malformed_line(self): self.execute.return_value = '\n'.join(LINK_SAMPLE + ['gibberish']) retval = 
ip_lib.IPWrapper().get_devices() self.assertEqual([ip_lib.IPDevice('lo'), ip_lib.IPDevice('eth0'), ip_lib.IPDevice('br-int'), ip_lib.IPDevice('gw-ddc717df-49'), ip_lib.IPDevice('eth0.50')], retval) self.execute.assert_called_once_with('o', 'link', ('list',), None) def test_get_namespaces(self): self.execute.return_value = '\n'.join(NETNS_SAMPLE) retval = ip_lib.IPWrapper.get_namespaces() self.assertEqual(['12345678-1234-5678-abcd-1234567890ab', 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb', 'cccccccc-cccc-cccc-cccc-cccccccccccc'], retval) self.execute.assert_called_once_with('', 'netns', ('list',)) def test_add_tuntap(self): ip_lib.IPWrapper().add_tuntap('tap0') self.execute.assert_called_once_with('', 'tuntap', ('add', 'tap0', 'mode', 'tap'), None, as_root=True) def test_add_veth(self): ip_lib.IPWrapper().add_veth('tap0', 'tap1') self.execute.assert_called_once_with('', 'link', ('add', 'tap0', 'type', 'veth', 'peer', 'name', 'tap1'), None, as_root=True) def test_add_veth_with_namespaces(self): ns2 = 'ns2' with mock.patch.object(ip_lib.IPWrapper, 'ensure_namespace') as en: ip_lib.IPWrapper().add_veth('tap0', 'tap1', namespace2=ns2) en.assert_has_calls([mock.call(ns2)]) self.execute.assert_called_once_with('', 'link', ('add', 'tap0', 'type', 'veth', 'peer', 'name', 'tap1', 'netns', ns2), None, as_root=True) def test_get_device(self): dev = ip_lib.IPWrapper('ns').device('eth0') self.assertEqual('ns', dev.namespace) self.assertEqual('eth0', dev.name) def test_ensure_namespace(self): with mock.patch.object(ip_lib, 'IPDevice') as ip_dev: ip = ip_lib.IPWrapper() with mock.patch.object(ip.netns, 'exists') as ns_exists: ns_exists.return_value = False ip.ensure_namespace('ns') self.execute.assert_has_calls( [mock.call([], 'netns', ('add', 'ns'), None, as_root=True)]) ip_dev.assert_has_calls([mock.call('lo', 'ns'), mock.call().link.set_up()]) def test_ensure_namespace_existing(self): with mock.patch.object(ip_lib, 'IpNetnsCommand') as ip_ns_cmd: ip_ns_cmd.exists.return_value = True ns = ip_lib.IPWrapper().ensure_namespace('ns') self.assertFalse(self.execute.called) self.assertEqual('ns', ns.namespace) def test_namespace_is_empty_no_devices(self): ip = ip_lib.IPWrapper('ns') with mock.patch.object(ip, 'get_devices') as get_devices: get_devices.return_value = [] self.assertTrue(ip.namespace_is_empty()) get_devices.assert_called_once_with(exclude_loopback=True) def test_namespace_is_empty(self): ip = ip_lib.IPWrapper('ns') with mock.patch.object(ip, 'get_devices') as get_devices: get_devices.return_value = [mock.Mock()] self.assertFalse(ip.namespace_is_empty()) get_devices.assert_called_once_with(exclude_loopback=True) def test_garbage_collect_namespace_does_not_exist(self): with mock.patch.object(ip_lib, 'IpNetnsCommand') as ip_ns_cmd_cls: ip_ns_cmd_cls.return_value.exists.return_value = False ip = ip_lib.IPWrapper('ns') with mock.patch.object(ip, 'namespace_is_empty') as mock_is_empty: self.assertFalse(ip.garbage_collect_namespace()) ip_ns_cmd_cls.assert_has_calls([mock.call().exists('ns')]) self.assertNotIn(mock.call().delete('ns'), ip_ns_cmd_cls.return_value.mock_calls) self.assertEqual([], mock_is_empty.mock_calls) def test_garbage_collect_namespace_existing_empty_ns(self): with mock.patch.object(ip_lib, 'IpNetnsCommand') as ip_ns_cmd_cls: ip_ns_cmd_cls.return_value.exists.return_value = True ip = ip_lib.IPWrapper('ns') with mock.patch.object(ip, 'namespace_is_empty') as mock_is_empty: mock_is_empty.return_value = True self.assertTrue(ip.garbage_collect_namespace()) 
mock_is_empty.assert_called_once_with() expected = [mock.call().exists('ns'), mock.call().delete('ns')] ip_ns_cmd_cls.assert_has_calls(expected) def test_garbage_collect_namespace_existing_not_empty(self): lo_device = mock.Mock() lo_device.name = 'lo' tap_device = mock.Mock() tap_device.name = 'tap1' with mock.patch.object(ip_lib, 'IpNetnsCommand') as ip_ns_cmd_cls: ip_ns_cmd_cls.return_value.exists.return_value = True ip = ip_lib.IPWrapper('ns') with mock.patch.object(ip, 'namespace_is_empty') as mock_is_empty: mock_is_empty.return_value = False self.assertFalse(ip.garbage_collect_namespace()) mock_is_empty.assert_called_once_with() expected = [mock.call(ip), mock.call().exists('ns')] self.assertEqual(expected, ip_ns_cmd_cls.mock_calls) self.assertNotIn(mock.call().delete('ns'), ip_ns_cmd_cls.mock_calls) def test_add_device_to_namespace(self): dev = mock.Mock() ip_lib.IPWrapper('ns').add_device_to_namespace(dev) dev.assert_has_calls([mock.call.link.set_netns('ns')]) def test_add_device_to_namespace_is_none(self): dev = mock.Mock() ip_lib.IPWrapper().add_device_to_namespace(dev) self.assertEqual([], dev.mock_calls) class TestIPDevice(test.TestCase): def test_eq_same_name(self): dev1 = ip_lib.IPDevice('tap0') dev2 = ip_lib.IPDevice('tap0') self.assertEqual(dev1, dev2) def test_eq_diff_name(self): dev1 = ip_lib.IPDevice('tap0') dev2 = ip_lib.IPDevice('tap1') self.assertNotEqual(dev1, dev2) def test_eq_same_namespace(self): dev1 = ip_lib.IPDevice('tap0', 'ns1') dev2 = ip_lib.IPDevice('tap0', 'ns1') self.assertEqual(dev1, dev2) def test_eq_diff_namespace(self): dev1 = ip_lib.IPDevice('tap0', 'ns1') dev2 = ip_lib.IPDevice('tap0', 'ns2') self.assertNotEqual(dev1, dev2) def test_eq_other_is_none(self): dev1 = ip_lib.IPDevice('tap0', 'ns1') self.assertIsNotNone(dev1) def test_str(self): self.assertEqual('tap0', str(ip_lib.IPDevice('tap0'))) class TestIPCommandBase(test.TestCase): def setUp(self): super(TestIPCommandBase, self).setUp() self.ip = mock.Mock() self.ip.namespace = 'namespace' self.ip_cmd = ip_lib.IpCommandBase(self.ip) self.ip_cmd.COMMAND = 'foo' def test_run(self): self.ip_cmd._run('link', 'show') self.ip.assert_has_calls([mock.call._run([], 'foo', ('link', 'show'))]) def test_run_with_options(self): self.ip_cmd._run('link', options='o') self.ip.assert_has_calls([mock.call._run('o', 'foo', ('link', ))]) def test_as_root(self): self.ip_cmd._as_root('link') self.ip.assert_has_calls( [mock.call._as_root([], 'foo', ('link', ), False)]) def test_as_root_with_options(self): self.ip_cmd._as_root('link', options='o') self.ip.assert_has_calls( [mock.call._as_root('o', 'foo', ('link', ), False)]) class TestIPDeviceCommandBase(test.TestCase): def setUp(self): super(TestIPDeviceCommandBase, self).setUp() self.ip_dev = mock.Mock() self.ip_dev.name = 'eth0' self.ip_dev._execute = mock.Mock(return_value='executed') self.ip_cmd = ip_lib.IpDeviceCommandBase(self.ip_dev) self.ip_cmd.COMMAND = 'foo' def test_name_property(self): self.assertEqual('eth0', self.ip_cmd.name) class TestIPCmdBase(test.TestCase): def setUp(self): super(TestIPCmdBase, self).setUp() self.parent = mock.Mock() self.parent.name = 'eth0' def _assert_call(self, options, args): self.parent.assert_has_calls([ mock.call._run(options, self.command, args)]) def _assert_sudo(self, options, args, force_root_namespace=False): self.parent.assert_has_calls( [mock.call._as_root(options, self.command, args, force_root_namespace)]) class TestIpLinkCommand(TestIPCmdBase): def setUp(self): super(TestIpLinkCommand, self).setUp() 
self.parent._run.return_value = LINK_SAMPLE[1] self.command = 'link' self.link_cmd = ip_lib.IpLinkCommand(self.parent) def test_set_address(self): self.link_cmd.set_address('aa:bb:cc:dd:ee:ff') self._assert_sudo([], ('set', 'eth0', 'address', 'aa:bb:cc:dd:ee:ff')) def test_set_mtu(self): self.link_cmd.set_mtu(1500) self._assert_sudo([], ('set', 'eth0', 'mtu', 1500)) def test_set_up(self): self.link_cmd.set_up() self._assert_sudo([], ('set', 'eth0', 'up')) def test_set_down(self): self.link_cmd.set_down() self._assert_sudo([], ('set', 'eth0', 'down')) def test_set_netns(self): self.link_cmd.set_netns('foo') self._assert_sudo([], ('set', 'eth0', 'netns', 'foo')) self.assertEqual('foo', self.parent.namespace) def test_set_name(self): self.link_cmd.set_name('tap1') self._assert_sudo([], ('set', 'eth0', 'name', 'tap1')) self.assertEqual('tap1', self.parent.name) def test_set_alias(self): self.link_cmd.set_alias('openvswitch') self._assert_sudo([], ('set', 'eth0', 'alias', 'openvswitch')) def test_delete(self): self.link_cmd.delete() self._assert_sudo([], ('delete', 'eth0')) def test_address_property(self): self.parent._execute = mock.Mock(return_value=LINK_SAMPLE[1]) self.assertEqual('cc:dd:ee:ff:ab:cd', self.link_cmd.address) def test_mtu_property(self): self.parent._execute = mock.Mock(return_value=LINK_SAMPLE[1]) self.assertEqual(1500, self.link_cmd.mtu) def test_qdisc_property(self): self.parent._execute = mock.Mock(return_value=LINK_SAMPLE[1]) self.assertEqual('mq', self.link_cmd.qdisc) def test_qlen_property(self): self.parent._execute = mock.Mock(return_value=LINK_SAMPLE[1]) self.assertEqual(1000, self.link_cmd.qlen) def test_alias_property(self): self.parent._execute = mock.Mock(return_value=LINK_SAMPLE[1]) self.assertEqual('openvswitch', self.link_cmd.alias) def test_state_property(self): self.parent._execute = mock.Mock(return_value=LINK_SAMPLE[1]) self.assertEqual('UP', self.link_cmd.state) def test_settings_property(self): expected = {'mtu': 1500, 'qlen': 1000, 'state': 'UP', 'qdisc': 'mq', 'brd': 'ff:ff:ff:ff:ff:ff', 'link/ether': 'cc:dd:ee:ff:ab:cd', 'alias': 'openvswitch'} self.parent._execute = mock.Mock(return_value=LINK_SAMPLE[1]) self.assertEqual(expected, self.link_cmd.attributes) self._assert_call('o', ('show', 'eth0')) class TestIpAddrCommand(TestIPCmdBase): def setUp(self): super(TestIpAddrCommand, self).setUp() self.parent.name = 'tap0' self.command = 'addr' self.addr_cmd = ip_lib.IpAddrCommand(self.parent) def test_add_address(self): self.addr_cmd.add(4, '192.168.45.100/24', '192.168.45.255') self._assert_sudo([4], ('add', '192.168.45.100/24', 'brd', '192.168.45.255', 'scope', 'global', 'dev', 'tap0')) def test_add_address_scoped(self): self.addr_cmd.add(4, '192.168.45.100/24', '192.168.45.255', scope='link') self._assert_sudo([4], ('add', '192.168.45.100/24', 'brd', '192.168.45.255', 'scope', 'link', 'dev', 'tap0')) def test_del_address(self): self.addr_cmd.delete(4, '192.168.45.100/24') self._assert_sudo([4], ('del', '192.168.45.100/24', 'dev', 'tap0')) def test_flush(self): self.addr_cmd.flush() self._assert_sudo([], ('flush', 'tap0')) def test_list(self): expected = [ dict(ip_version=4, scope='global', dynamic=False, cidr='172.16.77.240/24', broadcast='172.16.77.255'), dict(ip_version=6, scope='global', dynamic=True, cidr='2001:470:9:1224:5595:dd51:6ba2:e788/64', broadcast='::'), dict(ip_version=6, scope='global', dynamic=True, cidr='2001:470:9:1224:fd91:272:581e:3a32/64', broadcast='::'), dict(ip_version=6, scope='global', dynamic=True, 
cidr='2001:470:9:1224:4508:b885:5fb:740b/64', broadcast='::'), dict(ip_version=6, scope='global', dynamic=True, cidr='2001:470:9:1224:dfcc:aaff:feb9:76ce/64', broadcast='::'), dict(ip_version=6, scope='link', dynamic=False, cidr='fe80::dfcc:aaff:feb9:76ce/64', broadcast='::')] test_cases = [ADDR_SAMPLE, ADDR_SAMPLE2] for test_case in test_cases: self.parent._run = mock.Mock(return_value=test_case) self.assertEqual(expected, self.addr_cmd.list()) self._assert_call([], ('show', 'tap0')) def test_list_filtered(self): expected = [ dict(ip_version=4, scope='global', dynamic=False, cidr='172.16.77.240/24', broadcast='172.16.77.255')] test_cases = [ADDR_SAMPLE, ADDR_SAMPLE2] for test_case in test_cases: output = '\n'.join(test_case.split('\n')[0:4]) self.parent._run.return_value = output self.assertEqual(expected, self.addr_cmd.list('global', filters=['permanent'])) self._assert_call([], ('show', 'tap0', 'permanent', 'scope', 'global')) class TestIpRouteCommand(TestIPCmdBase): def setUp(self): super(TestIpRouteCommand, self).setUp() self.parent.name = 'eth0' self.command = 'route' self.route_cmd = ip_lib.IpRouteCommand(self.parent) def test_add_gateway(self): gateway = '192.168.45.100' metric = 100 self.route_cmd.add_gateway(gateway, metric) self._assert_sudo([], ('replace', 'default', 'via', gateway, 'metric', metric, 'dev', self.parent.name)) def test_del_gateway(self): gateway = '192.168.45.100' self.route_cmd.delete_gateway(gateway) self._assert_sudo([], ('del', 'default', 'via', gateway, 'dev', self.parent.name)) def test_get_gateway(self): test_cases = [{'sample': GATEWAY_SAMPLE1, 'expected': {'gateway': '10.35.19.254', 'metric': 100}}, {'sample': GATEWAY_SAMPLE2, 'expected': {'gateway': '10.35.19.254', 'metric': 100}}, {'sample': GATEWAY_SAMPLE3, 'expected': None}, {'sample': GATEWAY_SAMPLE4, 'expected': {'gateway': '10.35.19.254'}}] for test_case in test_cases: self.parent._run = mock.Mock(return_value=test_case['sample']) self.assertEqual(test_case['expected'], self.route_cmd.get_gateway()) def test_pullup_route(self): # interface is not the first in the list - requires # deleting and creating existing entries output = [DEVICE_ROUTE_SAMPLE, SUBNET_SAMPLE1] def pullup_side_effect(self, *args): result = output.pop(0) return result self.parent._run = mock.Mock(side_effect=pullup_side_effect) self.route_cmd.pullup_route('tap1d7888a7-10') self._assert_sudo([], ('del', '10.0.0.0/24', 'dev', 'qr-23380d11-d2')) self._assert_sudo([], ('append', '10.0.0.0/24', 'proto', 'kernel', 'src', '10.0.0.1', 'dev', 'qr-23380d11-d2')) def test_pullup_route_first(self): # interface is first in the list - no changes output = [DEVICE_ROUTE_SAMPLE, SUBNET_SAMPLE2] def pullup_side_effect(self, *args): result = output.pop(0) return result self.parent._run = mock.Mock(side_effect=pullup_side_effect) self.route_cmd.pullup_route('tap1d7888a7-10') # Check two calls - device get and subnet get self.assertEqual(2, len(self.parent._run.mock_calls)) def test_list(self): self.route_cmd._as_root = mock.Mock(return_value=GATEWAY_SAMPLE5) expected = [{'Destination': 'default', 'Device': 'eth0', 'Gateway': '172.24.47.1'}, {'Destination': '10.0.0.0/24', 'Device': 'tapc226b810-a0'}, {'Destination': '10.254.0.0/28', 'Device': 'tap6de90453-1c'}, {'Destination': '10.35.16.0/22'}, {'Destination': '172.24.4.0/24', 'Gateway': '10.35.19.254'}] result = self.route_cmd.list() self.assertEqual(expected, result) self.route_cmd._as_root.assert_called_once_with('list') def test_delete_net_route(self): self.route_cmd._as_root = mock.Mock() 
self.route_cmd.delete_net_route('10.0.0.0/24', 'br-ex') self.route_cmd._as_root.assert_called_once_with( 'delete', '10.0.0.0/24', 'dev', 'br-ex') def test_clear_outdated_routes(self): self.route_cmd.delete_net_route = mock.Mock() list_result = [{'Destination': 'default', 'Device': 'eth0', 'Gateway': '172.24.47.1'}, {'Destination': '10.0.0.0/24', 'Device': 'eth0'}, {'Destination': '10.0.0.0/24', 'Device': 'br-ex'}] self.route_cmd.list = mock.Mock(return_value=list_result) self.route_cmd.clear_outdated_routes('10.0.0.0/24') self.route_cmd.delete_net_route.assert_called_once_with( '10.0.0.0/24', 'br-ex') class TestIpNetnsCommand(TestIPCmdBase): def setUp(self): super(TestIpNetnsCommand, self).setUp() self.command = 'netns' self.netns_cmd = ip_lib.IpNetnsCommand(self.parent) def test_add_namespace(self): ns = self.netns_cmd.add('ns') self._assert_sudo([], ('add', 'ns'), force_root_namespace=True) self.assertEqual('ns', ns.namespace) def test_delete_namespace(self): with mock.patch('manila.utils.execute'): self.netns_cmd.delete('ns') self._assert_sudo([], ('delete', 'ns'), force_root_namespace=True) def test_namespace_exists(self): retval = '\n'.join(NETNS_SAMPLE) self.parent._as_root.return_value = retval self.assertTrue( self.netns_cmd.exists('bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb')) self._assert_sudo('o', ('list',), force_root_namespace=True) def test_namespace_doest_not_exist(self): retval = '\n'.join(NETNS_SAMPLE) self.parent._as_root.return_value = retval self.assertFalse( self.netns_cmd.exists('bbbbbbbb-1111-2222-3333-bbbbbbbbbbbb')) self._assert_sudo('o', ('list',), force_root_namespace=True) def test_execute(self): self.parent.namespace = 'ns' with mock.patch('manila.utils.execute') as execute: self.netns_cmd.execute(['ip', 'link', 'list']) execute.assert_called_once_with('ip', 'netns', 'exec', 'ns', 'ip', 'link', 'list', run_as_root=True, check_exit_code=True) def test_execute_env_var_prepend(self): self.parent.namespace = 'ns' with mock.patch('manila.utils.execute') as execute: env = dict(FOO=1, BAR=2) self.netns_cmd.execute(['ip', 'link', 'list'], env) execute.assert_called_once_with( 'ip', 'netns', 'exec', 'ns', 'env', 'BAR=2', 'FOO=1', 'ip', 'link', 'list', run_as_root=True, check_exit_code=True) class TestDeviceExists(test.TestCase): def test_device_exists(self): with mock.patch.object(ip_lib.IPDevice, '_execute') as _execute: _execute.return_value = LINK_SAMPLE[1] self.assertTrue(ip_lib.device_exists('eth0')) _execute.assert_called_once_with('o', 'link', ('show', 'eth0')) def test_device_does_not_exist(self): with mock.patch.object(ip_lib.IPDevice, '_execute') as _execute: _execute.return_value = '' _execute.side_effect = RuntimeError('Device does not exist.') self.assertFalse(ip_lib.device_exists('eth0')) manila-10.0.0/manila/tests/network/neutron/0000775000175000017500000000000013656750362020656 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/network/neutron/__init__.py0000664000175000017500000000000013656750227022755 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/network/neutron/test_neutron_plugin.py0000664000175000017500000023465013656750227025351 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2015 Mirantis, Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import time from unittest import mock import ddt from oslo_config import cfg from manila.common import constants from manila import context from manila.db import api as db_api from manila import exception from manila.network.neutron import api as neutron_api from manila.network.neutron import constants as neutron_constants from manila.network.neutron import neutron_network_plugin as plugin from manila import test from manila.tests import utils as test_utils CONF = cfg.CONF fake_neutron_port = { "status": "ACTIVE", "allowed_address_pairs": [], "admin_state_up": True, "network_id": "test_net_id", "tenant_id": "fake_tenant_id", "extra_dhcp_opts": [], "device_owner": "test", "binding:capabilities": {"port_filter": True}, "mac_address": "test_mac", "fixed_ips": [ {"subnet_id": "test_subnet_id", "ip_address": "203.0.113.100"}, ], "id": "test_port_id", "security_groups": ["fake_sec_group_id"], "device_id": "fake_device_id", } fake_neutron_network = { 'admin_state_up': True, 'availability_zone_hints': [], 'availability_zones': ['nova'], 'description': '', 'id': 'fake net id', 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'name': 'test_neutron_network', 'port_security_enabled': True, 'provider:network_type': 'vxlan', 'provider:physical_network': None, 'provider:segmentation_id': 1234, 'router:external': False, 'shared': False, 'status': 'ACTIVE', 'subnets': ['fake subnet id', 'fake subnet id 2'], } fake_ip_version = 4 fake_neutron_subnet = { 'cidr': '10.0.0.0/24', 'ip_version': fake_ip_version, 'gateway_ip': '10.0.0.1', } fake_share_network_subnet = { 'id': 'fake nw subnet id', 'neutron_subnet_id': fake_neutron_network['subnets'][0], 'neutron_net_id': fake_neutron_network['id'], 'network_type': 'fake_network_type', 'segmentation_id': 1234, 'ip_version': 4, 'cidr': 'fake_cidr', 'gateway': 'fake_gateway', 'mtu': 1509, } fake_share_network = { 'id': 'fake nw info id', 'project_id': 'fake project id', 'status': 'test_subnet_status', 'name': 'fake name', 'description': 'fake description', 'security_services': [], 'subnets': [fake_share_network_subnet], } fake_share_server = { 'id': 'fake nw info id', 'status': 'test_server_status', 'host': 'fake@host', 'network_allocations': [], 'shares': [], } fake_network_allocation = { 'id': fake_neutron_port['id'], 'share_server_id': fake_share_server['id'], 'ip_address': fake_neutron_port['fixed_ips'][0]['ip_address'], 'mac_address': fake_neutron_port['mac_address'], 'status': constants.STATUS_ACTIVE, 'label': 'user', 'network_type': fake_share_network_subnet['network_type'], 'segmentation_id': fake_share_network_subnet['segmentation_id'], 'ip_version': fake_share_network_subnet['ip_version'], 'cidr': fake_share_network_subnet['cidr'], 'gateway': fake_share_network_subnet['gateway'], 'mtu': 1509, } fake_nw_info = { 'segments': [ { 'provider:network_type': 'vlan', 'provider:physical_network': 'net1', 'provider:segmentation_id': 3926, }, { 'provider:network_type': 'vxlan', 'provider:physical_network': None, 'provider:segmentation_id': 2000, }, ], 'mtu': 1509, } fake_neutron_network_multi = { 'admin_state_up': True, 'availability_zone_hints': [], 
'availability_zones': ['nova'], 'description': '', 'id': 'fake net id', 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'name': 'test_neutron_network', 'port_security_enabled': True, 'router:external': False, 'shared': False, 'status': 'ACTIVE', 'subnets': ['fake subnet id', 'fake subnet id 2'], 'segments': fake_nw_info['segments'], 'mtu': fake_nw_info['mtu'], } fake_share_network_multi = { 'id': 'fake nw info id', 'neutron_subnet_id': fake_neutron_network_multi['subnets'][0], 'neutron_net_id': fake_neutron_network_multi['id'], 'project_id': 'fake project id', 'status': 'test_subnet_status', 'name': 'fake name', 'description': 'fake description', 'security_services': [], 'ip_version': None, 'cidr': 'fake_cidr', 'gateway': 'fake_gateway', 'mtu': fake_neutron_network_multi['mtu'] - 1, } fake_network_allocation_multi = { 'id': fake_neutron_port['id'], 'share_server_id': fake_share_server['id'], 'ip_address': fake_neutron_port['fixed_ips'][0]['ip_address'], 'mac_address': fake_neutron_port['mac_address'], 'status': constants.STATUS_ACTIVE, 'label': 'user', 'network_type': None, 'segmentation_id': None, 'ip_version': fake_neutron_subnet['ip_version'], 'cidr': fake_neutron_subnet['cidr'], 'gateway': fake_neutron_subnet['gateway_ip'], 'mtu': fake_neutron_network_multi['mtu'], } fake_binding_profile = { 'neutron_switch_id': 'fake switch id', 'neutron_port_id': 'fake port id', 'neutron_switch_info': 'fake switch info' } @ddt.ddt class NeutronNetworkPluginTest(test.TestCase): def setUp(self): super(NeutronNetworkPluginTest, self).setUp() self.plugin = self._get_neutron_network_plugin_instance() self.plugin.db = db_api self.fake_context = context.RequestContext(user_id='fake user', project_id='fake project', is_admin=False) def _get_neutron_network_plugin_instance(self, config_data=None): if config_data is None: return plugin.NeutronNetworkPlugin() with test_utils.create_temp_config_with_opts(config_data): return plugin.NeutronNetworkPlugin() @mock.patch.object(db_api, 'network_allocation_create', mock.Mock(return_values=fake_network_allocation)) @mock.patch.object(db_api, 'share_network_get', mock.Mock(return_value=fake_share_network)) @mock.patch.object(db_api, 'share_server_get', mock.Mock(return_value=fake_share_server)) def test_allocate_network_one_allocation(self): has_provider_nw_ext = mock.patch.object( self.plugin, '_has_provider_network_extension').start() has_provider_nw_ext.return_value = True save_nw_data = mock.patch.object(self.plugin, '_save_neutron_network_data').start() save_subnet_data = mock.patch.object( self.plugin, '_save_neutron_subnet_data').start() with mock.patch.object(self.plugin.neutron_api, 'create_port', mock.Mock(return_value=fake_neutron_port)): self.plugin.allocate_network( self.fake_context, fake_share_server, fake_share_network, fake_share_network_subnet, allocation_info={'count': 1}) has_provider_nw_ext.assert_any_call() save_nw_data.assert_called_once_with(self.fake_context, fake_share_network_subnet) save_subnet_data.assert_called_once_with(self.fake_context, fake_share_network_subnet) self.plugin.neutron_api.create_port.assert_called_once_with( fake_share_network['project_id'], network_id=fake_share_network_subnet['neutron_net_id'], subnet_id=fake_share_network_subnet['neutron_subnet_id'], device_owner='manila:share', device_id=fake_share_network['id']) db_api.network_allocation_create.assert_called_once_with( self.fake_context, fake_network_allocation) has_provider_nw_ext.stop() save_nw_data.stop() save_subnet_data.stop() @mock.patch.object(db_api, 
'network_allocation_create', mock.Mock(return_values=fake_network_allocation)) @mock.patch.object(db_api, 'share_network_get', mock.Mock(return_value=fake_share_network)) @mock.patch.object(db_api, 'share_server_get', mock.Mock(return_value=fake_share_server)) def test_allocate_network_two_allocation(self): has_provider_nw_ext = mock.patch.object( self.plugin, '_has_provider_network_extension').start() has_provider_nw_ext.return_value = True save_nw_data = mock.patch.object(self.plugin, '_save_neutron_network_data').start() save_subnet_data = mock.patch.object( self.plugin, '_save_neutron_subnet_data').start() with mock.patch.object(self.plugin.neutron_api, 'create_port', mock.Mock(return_value=fake_neutron_port)): self.plugin.allocate_network( self.fake_context, fake_share_server, fake_share_network, fake_share_network_subnet, count=2) neutron_api_calls = [ mock.call( fake_share_network['project_id'], network_id=fake_share_network_subnet['neutron_net_id'], subnet_id=fake_share_network_subnet['neutron_subnet_id'], device_owner='manila:share', device_id=fake_share_network['id']), mock.call( fake_share_network['project_id'], network_id=fake_share_network_subnet['neutron_net_id'], subnet_id=fake_share_network_subnet['neutron_subnet_id'], device_owner='manila:share', device_id=fake_share_network['id']), ] db_api_calls = [ mock.call(self.fake_context, fake_network_allocation), mock.call(self.fake_context, fake_network_allocation) ] self.plugin.neutron_api.create_port.assert_has_calls( neutron_api_calls) db_api.network_allocation_create.assert_has_calls(db_api_calls) has_provider_nw_ext.stop() save_nw_data.stop() save_subnet_data.stop() @mock.patch.object(db_api, 'share_network_update', mock.Mock()) def test_allocate_network_create_port_exception(self): has_provider_nw_ext = mock.patch.object( self.plugin, '_has_provider_network_extension').start() has_provider_nw_ext.return_value = True save_nw_data = mock.patch.object(self.plugin, '_save_neutron_network_data').start() save_subnet_data = mock.patch.object( self.plugin, '_save_neutron_subnet_data').start() create_port = mock.patch.object(self.plugin.neutron_api, 'create_port').start() create_port.side_effect = exception.NetworkException self.assertRaises(exception.NetworkException, self.plugin.allocate_network, self.fake_context, fake_share_server, fake_share_network) has_provider_nw_ext.stop() save_nw_data.stop() save_subnet_data.stop() create_port.stop() def _setup_manage_network_allocations(self): allocations = ['192.168.0.11', '192.168.0.12', 'fd12::2000'] neutron_ports = [ copy.deepcopy(fake_neutron_port), copy.deepcopy(fake_neutron_port), copy.deepcopy(fake_neutron_port), copy.deepcopy(fake_neutron_port), ] neutron_ports[0]['fixed_ips'][0]['ip_address'] = '192.168.0.10' neutron_ports[0]['id'] = 'fake_port_id_0' neutron_ports[1]['fixed_ips'][0]['ip_address'] = '192.168.0.11' neutron_ports[1]['id'] = 'fake_port_id_1' neutron_ports[2]['fixed_ips'][0]['ip_address'] = '192.168.0.12' neutron_ports[2]['id'] = 'fake_port_id_2' neutron_ports[3]['fixed_ips'][0]['ip_address'] = '192.168.0.13' neutron_ports[3]['id'] = 'fake_port_id_3' self.mock_object(self.plugin, '_verify_share_network_subnet') self.mock_object(self.plugin, '_store_neutron_net_info') self.mock_object(self.plugin.neutron_api, 'list_ports', mock.Mock(return_value=neutron_ports)) return neutron_ports, allocations @ddt.data({}, exception.NotFound) def test_manage_network_allocations_create_update(self, side_effect): neutron_ports, allocations = self._setup_manage_network_allocations() 
self.mock_object(db_api, 'network_allocation_get', mock.Mock( side_effect=[exception.NotFound, side_effect, exception.NotFound, side_effect])) if side_effect: self.mock_object(db_api, 'network_allocation_create') else: self.mock_object(db_api, 'network_allocation_update') result = self.plugin.manage_network_allocations( self.fake_context, allocations, fake_share_server, share_network_subnet=fake_share_network_subnet) self.assertEqual(['fd12::2000'], result) self.plugin.neutron_api.list_ports.assert_called_once_with( network_id=fake_share_network_subnet['neutron_net_id'], device_owner='manila:share', fixed_ips='subnet_id=' + fake_share_network_subnet['neutron_subnet_id']) db_api.network_allocation_get.assert_has_calls([ mock.call(self.fake_context, 'fake_port_id_1', read_deleted=False), mock.call(self.fake_context, 'fake_port_id_1', read_deleted=True), mock.call(self.fake_context, 'fake_port_id_2', read_deleted=False), mock.call(self.fake_context, 'fake_port_id_2', read_deleted=True), ]) port_dict_list = [{ 'share_server_id': fake_share_server['id'], 'ip_address': x, 'gateway': fake_share_network_subnet['gateway'], 'mac_address': fake_neutron_port['mac_address'], 'status': constants.STATUS_ACTIVE, 'label': 'user', 'network_type': fake_share_network_subnet['network_type'], 'segmentation_id': fake_share_network_subnet['segmentation_id'], 'ip_version': fake_share_network_subnet['ip_version'], 'cidr': fake_share_network_subnet['cidr'], 'mtu': fake_share_network_subnet['mtu'], } for x in ['192.168.0.11', '192.168.0.12']] if side_effect: port_dict_list[0]['id'] = 'fake_port_id_1' port_dict_list[1]['id'] = 'fake_port_id_2' db_api.network_allocation_create.assert_has_calls([ mock.call(self.fake_context, port_dict_list[0]), mock.call(self.fake_context, port_dict_list[1]) ]) else: for x in port_dict_list: x['deleted_at'] = None x['deleted'] = 'False' db_api.network_allocation_update.assert_has_calls([ mock.call(self.fake_context, 'fake_port_id_1', port_dict_list[0], read_deleted=True), mock.call(self.fake_context, 'fake_port_id_2', port_dict_list[1], read_deleted=True) ]) self.plugin._verify_share_network_subnet.assert_called_once_with( fake_share_server['id'], fake_share_network_subnet) self.plugin._store_neutron_net_info( self.fake_context, fake_share_network_subnet) def test__get_ports_respective_to_ips_multiple_fixed_ips(self): self.mock_object(plugin.LOG, 'warning') allocations = ['192.168.0.10', '192.168.0.11', '192.168.0.12'] neutron_ports = [ copy.deepcopy(fake_neutron_port), copy.deepcopy(fake_neutron_port), ] neutron_ports[0]['fixed_ips'][0]['ip_address'] = '192.168.0.10' neutron_ports[0]['id'] = 'fake_port_id_0' neutron_ports[0]['fixed_ips'].append({'ip_address': '192.168.0.11', 'subnet_id': 'test_subnet_id'}) neutron_ports[1]['fixed_ips'][0]['ip_address'] = '192.168.0.12' neutron_ports[1]['id'] = 'fake_port_id_2' expected = [{'port': neutron_ports[0], 'allocation': '192.168.0.10'}, {'port': neutron_ports[1], 'allocation': '192.168.0.12'}] result = self.plugin._get_ports_respective_to_ips(allocations, neutron_ports) self.assertEqual(expected, result) self.assertIs(True, plugin.LOG.warning.called) def test_manage_network_allocations_exception(self): neutron_ports, allocations = self._setup_manage_network_allocations() fake_allocation = { 'id': 'fake_port_id', 'share_server_id': 'fake_server_id' } self.mock_object(db_api, 'network_allocation_get', mock.Mock(return_value=fake_allocation)) self.assertRaises( exception.ManageShareServerError, self.plugin.manage_network_allocations, 
self.fake_context, allocations, fake_share_server, fake_share_network, fake_share_network_subnet) db_api.network_allocation_get.assert_called_once_with( self.fake_context, 'fake_port_id_1', read_deleted=False) def test_unmanage_network_allocations(self): neutron_ports = [ copy.deepcopy(fake_neutron_port), copy.deepcopy(fake_neutron_port), ] neutron_ports[0]['id'] = 'fake_port_id_0' neutron_ports[1]['id'] = 'fake_port_id_1' get_mock = self.mock_object( db_api, 'network_allocations_get_for_share_server', mock.Mock(return_value=neutron_ports)) self.mock_object(db_api, 'network_allocation_delete') self.plugin.unmanage_network_allocations( self.fake_context, fake_share_server['id']) get_mock.assert_called_once_with( self.fake_context, fake_share_server['id']) db_api.network_allocation_delete.assert_has_calls([ mock.call(self.fake_context, 'fake_port_id_0'), mock.call(self.fake_context, 'fake_port_id_1') ]) @mock.patch.object(db_api, 'network_allocation_delete', mock.Mock()) @mock.patch.object(db_api, 'share_network_update', mock.Mock()) @mock.patch.object(db_api, 'network_allocations_get_for_share_server', mock.Mock(return_value=[fake_network_allocation])) def test_deallocate_network_nominal(self): share_srv = {'id': fake_share_server['id']} share_srv['network_allocations'] = [fake_network_allocation] with mock.patch.object(self.plugin.neutron_api, 'delete_port', mock.Mock()): self.plugin.deallocate_network(self.fake_context, share_srv) self.plugin.neutron_api.delete_port.assert_called_once_with( fake_network_allocation['id']) db_api.network_allocation_delete.assert_called_once_with( self.fake_context, fake_network_allocation['id']) @mock.patch.object(db_api, 'share_network_update', mock.Mock(return_value=fake_share_network)) @mock.patch.object(db_api, 'network_allocation_update', mock.Mock()) @mock.patch.object(db_api, 'network_allocations_get_for_share_server', mock.Mock(return_value=[fake_network_allocation])) def test_deallocate_network_neutron_api_exception(self): share_srv = {'id': fake_share_server['id']} share_srv['network_allocations'] = [fake_network_allocation] delete_port = mock.patch.object(self.plugin.neutron_api, 'delete_port').start() delete_port.side_effect = exception.NetworkException self.assertRaises(exception.NetworkException, self.plugin.deallocate_network, self.fake_context, share_srv) db_api.network_allocation_update.assert_called_once_with( self.fake_context, fake_network_allocation['id'], {'status': constants.STATUS_ERROR}) delete_port.stop() @mock.patch.object(db_api, 'share_network_subnet_update', mock.Mock()) def test_save_neutron_network_data(self): neutron_nw_info = { 'provider:network_type': 'vlan', 'provider:segmentation_id': 1000, 'mtu': 1509, } share_nw_update_dict = { 'network_type': 'vlan', 'segmentation_id': 1000, 'mtu': 1509, } with mock.patch.object(self.plugin.neutron_api, 'get_network', mock.Mock(return_value=neutron_nw_info)): self.plugin._save_neutron_network_data(self.fake_context, fake_share_network_subnet) self.plugin.neutron_api.get_network.assert_called_once_with( fake_share_network_subnet['neutron_net_id']) self.plugin.db.share_network_subnet_update.assert_called_once_with( self.fake_context, fake_share_network_subnet['id'], share_nw_update_dict) @mock.patch.object(db_api, 'share_network_subnet_update', mock.Mock()) def test_save_neutron_network_data_multi_segment(self): share_nw_update_dict = { 'network_type': 'vlan', 'segmentation_id': 3926, 'mtu': 1509 } config_data = { 'DEFAULT': { 'neutron_physical_net_name': 'net1', } } 
self.mock_object(self.plugin.neutron_api, 'get_network') self.plugin.neutron_api.get_network.return_value = fake_nw_info with test_utils.create_temp_config_with_opts(config_data): self.plugin._save_neutron_network_data(self.fake_context, fake_share_network_subnet) self.plugin.neutron_api.get_network.assert_called_once_with( fake_share_network_subnet['neutron_net_id']) self.plugin.db.share_network_subnet_update.assert_called_once_with( self.fake_context, fake_share_network_subnet['id'], share_nw_update_dict) @mock.patch.object(db_api, 'share_network_update', mock.Mock()) def test_save_neutron_network_data_multi_segment_without_ident(self): config_data = { 'DEFAULT': { 'neutron_physical_net_name': 'net100', } } self.mock_object(self.plugin.neutron_api, 'get_network') self.plugin.neutron_api.get_network.return_value = fake_nw_info with test_utils.create_temp_config_with_opts(config_data): self.assertRaises(exception.NetworkBadConfigurationException, self.plugin._save_neutron_network_data, self.fake_context, fake_share_network_subnet) @mock.patch.object(db_api, 'share_network_update', mock.Mock()) def test_save_neutron_network_data_multi_segment_without_cfg(self): self.mock_object(self.plugin.neutron_api, 'get_network') self.plugin.neutron_api.get_network.return_value = fake_nw_info self.assertRaises(exception.NetworkBadConfigurationException, self.plugin._save_neutron_network_data, self.fake_context, fake_share_network_subnet) @mock.patch.object(db_api, 'share_network_subnet_update', mock.Mock()) def test_save_neutron_subnet_data(self): neutron_subnet_info = fake_neutron_subnet subnet_value = { 'cidr': '10.0.0.0/24', 'ip_version': 4, 'gateway': '10.0.0.1', } with mock.patch.object(self.plugin.neutron_api, 'get_subnet', mock.Mock(return_value=neutron_subnet_info)): self.plugin._save_neutron_subnet_data(self.fake_context, fake_share_network_subnet) self.plugin.neutron_api.get_subnet.assert_called_once_with( fake_share_network_subnet['neutron_subnet_id']) self.plugin.db.share_network_subnet_update.assert_called_once_with( self.fake_context, fake_share_network_subnet['id'], subnet_value) def test_has_network_provider_extension_true(self): extensions = {neutron_constants.PROVIDER_NW_EXT: {}} with mock.patch.object(self.plugin.neutron_api, 'list_extensions', mock.Mock(return_value=extensions)): result = self.plugin._has_provider_network_extension() self.plugin.neutron_api.list_extensions.assert_any_call() self.assertTrue(result) def test_has_network_provider_extension_false(self): with mock.patch.object(self.plugin.neutron_api, 'list_extensions', mock.Mock(return_value={})): result = self.plugin._has_provider_network_extension() self.plugin.neutron_api.list_extensions.assert_any_call() self.assertFalse(result) @ddt.ddt class NeutronSingleNetworkPluginTest(test.TestCase): def setUp(self): super(NeutronSingleNetworkPluginTest, self).setUp() self.context = 'fake_context' def test_init_valid(self): fake_net_id = 'fake_net_id' fake_subnet_id = 'fake_subnet_id' config_data = { 'DEFAULT': { 'neutron_net_id': fake_net_id, 'neutron_subnet_id': fake_subnet_id, } } fake_net = {'subnets': ['fake1', 'fake2', fake_subnet_id]} self.mock_object( neutron_api.API, 'get_network', mock.Mock(return_value=fake_net)) with test_utils.create_temp_config_with_opts(config_data): instance = plugin.NeutronSingleNetworkPlugin() self.assertEqual(fake_net_id, instance.net) self.assertEqual(fake_subnet_id, instance.subnet) neutron_api.API.get_network.assert_called_once_with(fake_net_id) @ddt.data( {'net': None, 'subnet': None}, 
{'net': 'fake_net_id', 'subnet': None}, {'net': None, 'subnet': 'fake_subnet_id'}) @ddt.unpack def test_init_invalid(self, net, subnet): config_data = dict() # Simulate absence of set values if net: config_data['neutron_net_id'] = net if subnet: config_data['neutron_subnet_id'] = subnet config_data = dict(DEFAULT=config_data) with test_utils.create_temp_config_with_opts(config_data): self.assertRaises( exception.NetworkBadConfigurationException, plugin.NeutronSingleNetworkPlugin) @ddt.data({}, {'subnets': []}, {'subnets': ['different_foo_subnet']}) def test_init_subnet_does_not_belong_to_net(self, fake_net): fake_net_id = 'fake_net_id' config_data = { 'DEFAULT': { 'neutron_net_id': fake_net_id, 'neutron_subnet_id': 'fake_subnet_id', } } self.mock_object( neutron_api.API, 'get_network', mock.Mock(return_value=fake_net)) with test_utils.create_temp_config_with_opts(config_data): self.assertRaises( exception.NetworkBadConfigurationException, plugin.NeutronSingleNetworkPlugin) neutron_api.API.get_network.assert_called_once_with(fake_net_id) def _get_neutron_network_plugin_instance( self, config_data=None, label=None): if not config_data: fake_subnet_id = 'fake_subnet_id' config_data = { 'DEFAULT': { 'neutron_net_id': 'fake_net_id', 'neutron_subnet_id': fake_subnet_id, } } fake_net = {'subnets': [fake_subnet_id]} self.mock_object( neutron_api.API, 'get_network', mock.Mock(return_value=fake_net)) with test_utils.create_temp_config_with_opts(config_data): instance = plugin.NeutronSingleNetworkPlugin(label=label) return instance def test___update_share_network_net_data_same_values(self): instance = self._get_neutron_network_plugin_instance() share_network = { 'neutron_net_id': instance.net, 'neutron_subnet_id': instance.subnet, } result = instance._update_share_network_net_data( self.context, share_network) self.assertEqual(share_network, result) def test___update_share_network_net_data_different_values_empty(self): instance = self._get_neutron_network_plugin_instance() share_network_input = { 'id': 'fake_share_network_id', } share_network_result = { 'neutron_net_id': instance.net, 'neutron_subnet_id': instance.subnet, } self.mock_object( instance.db, 'share_network_subnet_update', mock.Mock(return_value='foo')) instance._update_share_network_net_data( self.context, share_network_input) instance.db.share_network_subnet_update.assert_called_once_with( self.context, share_network_input['id'], share_network_result) @ddt.data( {'n': 'fake_net_id', 's': 'bar'}, {'n': 'foo', 's': 'fake_subnet_id'}) @ddt.unpack def test___update_share_network_net_data_different_values(self, n, s): instance = self._get_neutron_network_plugin_instance() share_network = { 'id': 'fake_share_network_id', 'neutron_net_id': n, 'neutron_subnet_id': s, } self.mock_object( instance.db, 'share_network_update', mock.Mock(return_value=share_network)) self.assertRaises( exception.NetworkBadConfigurationException, instance._update_share_network_net_data, self.context, share_network) self.assertFalse(instance.db.share_network_update.called) def test_allocate_network(self): self.mock_object(plugin.NeutronNetworkPlugin, 'allocate_network') plugin.NeutronNetworkPlugin.allocate_network.return_value = [ fake_neutron_port, fake_neutron_port] instance = self._get_neutron_network_plugin_instance() share_server = 'fake_share_server' share_network = {'id': 'fake_share_network'} share_network_subnet = {'id': 'fake_share_network_subnet'} share_network_subnet_upd = {'id': 'updated_fake_share_network_subnet'} count = 2 device_owner = 
'fake_device_owner' self.mock_object( instance, '_update_share_network_net_data', mock.Mock(return_value=share_network_subnet_upd)) instance.allocate_network( self.context, share_server, share_network, share_network_subnet, count=count, device_owner=device_owner) instance._update_share_network_net_data.assert_called_once_with( self.context, share_network_subnet) plugin.NeutronNetworkPlugin.allocate_network.assert_called_once_with( self.context, share_server, share_network, share_network_subnet_upd, count=count, device_owner=device_owner) def test_manage_network_allocations(self): allocations = ['192.168.10.10', 'fd12::2000'] instance = self._get_neutron_network_plugin_instance() parent = self.mock_object( plugin.NeutronNetworkPlugin, 'manage_network_allocations', mock.Mock(return_value=['fd12::2000'])) self.mock_object( instance, '_update_share_network_net_data', mock.Mock(return_value=fake_share_network_subnet)) result = instance.manage_network_allocations( self.context, allocations, fake_share_server, fake_share_network, fake_share_network_subnet) self.assertEqual(['fd12::2000'], result) instance._update_share_network_net_data.assert_called_once_with( self.context, fake_share_network_subnet) parent.assert_called_once_with( self.context, allocations, fake_share_server, fake_share_network, fake_share_network_subnet) def test_manage_network_allocations_admin(self): allocations = ['192.168.10.10', 'fd12::2000'] instance = self._get_neutron_network_plugin_instance(label='admin') parent = self.mock_object( plugin.NeutronNetworkPlugin, 'manage_network_allocations', mock.Mock(return_value=['fd12::2000'])) share_network_dict = { 'project_id': instance.neutron_api.admin_project_id, 'neutron_net_id': 'fake_net_id', 'neutron_subnet_id': 'fake_subnet_id', } result = instance.manage_network_allocations( self.context, allocations, fake_share_server, share_network_subnet=share_network_dict) self.assertEqual(['fd12::2000'], result) parent.assert_called_once_with( self.context, allocations, fake_share_server, None, share_network_dict) @ddt.ddt class NeutronBindNetworkPluginTest(test.TestCase): def setUp(self): super(NeutronBindNetworkPluginTest, self).setUp() self.fake_context = context.RequestContext(user_id='fake user', project_id='fake project', is_admin=False) self.has_binding_ext_mock = self.mock_object( neutron_api.API, '_has_port_binding_extension') self.has_binding_ext_mock.return_value = True self.bind_plugin = self._get_neutron_network_plugin_instance() self.bind_plugin.db = db_api self.sleep_mock = self.mock_object(time, 'sleep') self.fake_share_network_multi = dict(fake_share_network_multi) def _get_neutron_network_plugin_instance(self, config_data=None): if config_data is None: return plugin.NeutronBindNetworkPlugin() with test_utils.create_temp_config_with_opts(config_data): return plugin.NeutronBindNetworkPlugin() def test_wait_for_bind(self): self.mock_object(self.bind_plugin.neutron_api, 'show_port') self.bind_plugin.neutron_api.show_port.return_value = fake_neutron_port self.bind_plugin._wait_for_ports_bind([fake_neutron_port], fake_share_server) self.bind_plugin.neutron_api.show_port.assert_called_once_with( fake_neutron_port['id']) self.sleep_mock.assert_not_called() def test_wait_for_bind_error(self): fake_neut_port = copy.copy(fake_neutron_port) fake_neut_port['status'] = 'ERROR' self.mock_object(self.bind_plugin.neutron_api, 'show_port') self.bind_plugin.neutron_api.show_port.return_value = fake_neut_port self.assertRaises(exception.NetworkException, 
self.bind_plugin._wait_for_ports_bind, [fake_neut_port, fake_neut_port], fake_share_server) self.bind_plugin.neutron_api.show_port.assert_called_once_with( fake_neutron_port['id']) self.sleep_mock.assert_not_called() @ddt.data(('DOWN', 'ACTIVE'), ('DOWN', 'DOWN'), ('ACTIVE', 'DOWN')) def test_wait_for_bind_two_ports_no_bind(self, state): fake_neut_port1 = copy.copy(fake_neutron_port) fake_neut_port1['status'] = state[0] fake_neut_port2 = copy.copy(fake_neutron_port) fake_neut_port2['status'] = state[1] self.mock_object(self.bind_plugin.neutron_api, 'show_port') self.bind_plugin.neutron_api.show_port.side_effect = ( [fake_neut_port1, fake_neut_port2] * 20) self.assertRaises(exception.NetworkBindException, self.bind_plugin._wait_for_ports_bind, [fake_neut_port1, fake_neut_port2], fake_share_server) @mock.patch.object(db_api, 'share_network_get', mock.Mock(return_value=fake_share_network)) @mock.patch.object(db_api, 'share_server_get', mock.Mock(return_value=fake_share_server)) def test_allocate_network_one_allocation(self): self.mock_object(self.bind_plugin, '_has_provider_network_extension') self.bind_plugin._has_provider_network_extension.return_value = True save_nw_data = self.mock_object(self.bind_plugin, '_save_neutron_network_data') save_subnet_data = self.mock_object(self.bind_plugin, '_save_neutron_subnet_data') self.mock_object(self.bind_plugin, '_wait_for_ports_bind') neutron_host_id_opts = plugin.neutron_bind_network_plugin_opts[1] self.mock_object(neutron_host_id_opts, 'default') neutron_host_id_opts.default = 'foohost1' self.mock_object(db_api, 'network_allocation_create') db_api.network_allocation_create.return_value = fake_network_allocation self.mock_object(self.bind_plugin.neutron_api, 'get_network') self.bind_plugin.neutron_api.get_network.return_value = ( fake_neutron_network) with mock.patch.object(self.bind_plugin.neutron_api, 'create_port', mock.Mock(return_value=fake_neutron_port)): self.bind_plugin.allocate_network( self.fake_context, fake_share_server, fake_share_network, fake_share_network_subnet, allocation_info={'count': 1}) self.bind_plugin._has_provider_network_extension.assert_any_call() save_nw_data.assert_called_once_with(self.fake_context, fake_share_network_subnet) save_subnet_data.assert_called_once_with(self.fake_context, fake_share_network_subnet) expected_kwargs = { 'binding:vnic_type': 'baremetal', 'host_id': 'foohost1', 'network_id': fake_share_network_subnet['neutron_net_id'], 'subnet_id': fake_share_network_subnet['neutron_subnet_id'], 'device_owner': 'manila:share', 'device_id': fake_share_network['id'], } self.bind_plugin.neutron_api.create_port.assert_called_once_with( fake_share_network['project_id'], **expected_kwargs) db_api.network_allocation_create.assert_called_once_with( self.fake_context, fake_network_allocation) self.bind_plugin._wait_for_ports_bind.assert_called_once_with( [db_api.network_allocation_create( self.fake_context, fake_network_allocation)], fake_share_server) @mock.patch.object(db_api, 'network_allocation_create', mock.Mock(return_values=fake_network_allocation_multi)) @mock.patch.object(db_api, 'share_network_get', mock.Mock(return_value=fake_share_network_multi)) @mock.patch.object(db_api, 'share_server_get', mock.Mock(return_value=fake_share_server)) def test_allocate_network_multi_segment(self): network_allocation_update_data = { 'network_type': fake_nw_info['segments'][0]['provider:network_type'], 'segmentation_id': fake_nw_info['segments'][0]['provider:segmentation_id'], } network_update_data = 
dict(network_allocation_update_data) network_update_data['mtu'] = fake_nw_info['mtu'] fake_network_allocation_multi_updated = dict( fake_network_allocation_multi) fake_network_allocation_multi_updated.update( network_allocation_update_data) fake_share_network_multi_updated = dict(fake_share_network_multi) fake_share_network_multi_updated.update(network_update_data) fake_share_network_multi_updated.update(fake_neutron_subnet) config_data = { 'DEFAULT': { 'neutron_net_id': 'fake net id', 'neutron_subnet_id': 'fake subnet id', 'neutron_physical_net_name': 'net1', } } self.bind_plugin = self._get_neutron_network_plugin_instance( config_data) self.bind_plugin.db = db_api self.mock_object(self.bind_plugin, '_has_provider_network_extension') self.bind_plugin._has_provider_network_extension.return_value = True self.mock_object(self.bind_plugin, '_wait_for_ports_bind') neutron_host_id_opts = plugin.neutron_bind_network_plugin_opts[1] self.mock_object(neutron_host_id_opts, 'default') neutron_host_id_opts.default = 'foohost1' self.mock_object(db_api, 'network_allocation_create') db_api.network_allocation_create.return_value = ( fake_network_allocation_multi) self.mock_object(db_api, 'network_allocation_update') db_api.network_allocation_update.return_value = ( fake_network_allocation_multi_updated) self.mock_object(self.bind_plugin.neutron_api, 'get_network') self.bind_plugin.neutron_api.get_network.return_value = ( fake_neutron_network_multi) self.mock_object(self.bind_plugin.neutron_api, 'get_subnet') self.bind_plugin.neutron_api.get_subnet.return_value = ( fake_neutron_subnet) self.mock_object(db_api, 'share_network_subnet_update') with mock.patch.object(self.bind_plugin.neutron_api, 'create_port', mock.Mock(return_value=fake_neutron_port)): self.bind_plugin.allocate_network( self.fake_context, fake_share_server, fake_share_network, self.fake_share_network_multi, allocation_info={'count': 1}) self.bind_plugin._has_provider_network_extension.assert_any_call() expected_kwargs = { 'binding:vnic_type': 'baremetal', 'host_id': 'foohost1', 'network_id': fake_share_network_multi['neutron_net_id'], 'subnet_id': fake_share_network_multi['neutron_subnet_id'], 'device_owner': 'manila:share', 'device_id': fake_share_network_multi['id'] } self.bind_plugin.neutron_api.create_port.assert_called_once_with( fake_share_network_multi['project_id'], **expected_kwargs) db_api.network_allocation_create.assert_called_once_with( self.fake_context, fake_network_allocation_multi) db_api.share_network_subnet_update.assert_called_with( self.fake_context, fake_share_network_multi['id'], network_update_data) network_allocation_update_data['cidr'] = ( fake_neutron_subnet['cidr']) network_allocation_update_data['ip_version'] = ( fake_neutron_subnet['ip_version']) db_api.network_allocation_update.assert_called_once_with( self.fake_context, fake_neutron_port['id'], network_allocation_update_data) @ddt.data({ 'neutron_binding_profiles': None, 'binding_profiles': {} }, { 'neutron_binding_profiles': 'fake_profile', 'binding_profiles': {} }, { 'neutron_binding_profiles': 'fake_profile', 'binding_profiles': None }, { 'neutron_binding_profiles': 'fake_profile', 'binding_profiles': { 'fake_profile': { 'neutron_switch_id': 'fake switch id', 'neutron_port_id': 'fake port id', 'neutron_switch_info': 'switch_ip: 127.0.0.1' } } }, { 'neutron_binding_profiles': None, 'binding_profiles': { 'fake_profile': { 'neutron_switch_id': 'fake switch id', 'neutron_port_id': 'fake port id', 'neutron_switch_info': 'switch_ip: 127.0.0.1' } } }, { 
'neutron_binding_profiles': 'fake_profile_one,fake_profile_two', 'binding_profiles': { 'fake_profile_one': { 'neutron_switch_id': 'fake switch id 1', 'neutron_port_id': 'fake port id 1', 'neutron_switch_info': 'switch_ip: 127.0.0.1' }, 'fake_profile_two': { 'neutron_switch_id': 'fake switch id 2', 'neutron_port_id': 'fake port id 2', 'neutron_switch_info': 'switch_ip: 127.0.0.2' } } }, { 'neutron_binding_profiles': 'fake_profile_two', 'binding_profiles': { 'fake_profile_one': { 'neutron_switch_id': 'fake switch id 1', 'neutron_port_id': 'fake port id 1', 'neutron_switch_info': 'switch_ip: 127.0.0.1' }, 'fake_profile_two': { 'neutron_switch_id': 'fake switch id 2', 'neutron_port_id': 'fake port id 2', 'neutron_switch_info': 'switch_ip: 127.0.0.2' } } }) @ddt.unpack @mock.patch.object(db_api, 'share_network_get', mock.Mock(return_value=fake_share_network)) @mock.patch.object(db_api, 'share_server_get', mock.Mock(return_value=fake_share_server)) def test__get_port_create_args(self, neutron_binding_profiles, binding_profiles): fake_device_owner = 'share' fake_host_id = 'fake host' neutron_host_id_opts = plugin.neutron_bind_network_plugin_opts[1] self.mock_object(neutron_host_id_opts, 'default') neutron_host_id_opts.default = fake_host_id config_data = { 'DEFAULT': { 'neutron_net_id': fake_neutron_network['id'], 'neutron_subnet_id': fake_neutron_network['subnets'][0] } } # Simulate absence of set values if neutron_binding_profiles: config_data['DEFAULT'][ 'neutron_binding_profiles'] = neutron_binding_profiles if binding_profiles: for name, binding_profile in binding_profiles.items(): config_data[name] = binding_profile instance = self._get_neutron_network_plugin_instance(config_data) create_args = instance._get_port_create_args(fake_share_server, fake_share_network_subnet, fake_device_owner) expected_create_args = { 'binding:vnic_type': 'baremetal', 'host_id': fake_host_id, 'network_id': fake_share_network_subnet['neutron_net_id'], 'subnet_id': fake_share_network_subnet['neutron_subnet_id'], 'device_owner': 'manila:' + fake_device_owner, 'device_id': fake_share_server['id'] } if neutron_binding_profiles: expected_create_args['binding:profile'] = { 'local_link_information': [] } local_links = expected_create_args[ 'binding:profile']['local_link_information'] for profile in neutron_binding_profiles.split(','): if binding_profiles is None: binding_profile = {} else: binding_profile = binding_profiles.get(profile, {}) local_links.append({ 'port_id': binding_profile.get('neutron_port_id', None), 'switch_id': binding_profile.get('neutron_switch_id', None) }) switch_info = binding_profile.get('neutron_switch_info', None) if switch_info is None: local_links[-1]['switch_info'] = None else: local_links[-1]['switch_info'] = cfg.types.Dict()( switch_info) self.assertEqual(expected_create_args, create_args) @mock.patch.object(db_api, 'share_network_get', mock.Mock(return_value=fake_share_network)) @mock.patch.object(db_api, 'share_server_get', mock.Mock(return_value=fake_share_server)) def test__get_port_create_args_host_id(self): fake_device_owner = 'share' fake_host_id = 'fake host' config_data = { 'DEFAULT': { 'neutron_net_id': fake_neutron_network['id'], 'neutron_subnet_id': fake_neutron_network['subnets'][0], 'neutron_host_id': fake_host_id } } instance = self._get_neutron_network_plugin_instance(config_data) create_args = instance._get_port_create_args(fake_share_server, fake_share_network_subnet, fake_device_owner) expected_create_args = { 'binding:vnic_type': 'baremetal', 'host_id': fake_host_id, 
'network_id': fake_share_network_subnet['neutron_net_id'], 'subnet_id': fake_share_network_subnet['neutron_subnet_id'], 'device_owner': 'manila:' + fake_device_owner, 'device_id': fake_share_server['id'] } self.assertEqual(expected_create_args, create_args) @ddt.ddt class NeutronBindSingleNetworkPluginTest(test.TestCase): def setUp(self): super(NeutronBindSingleNetworkPluginTest, self).setUp() self.context = 'fake_context' self.fake_context = context.RequestContext(user_id='fake user', project_id='fake project', is_admin=False) self.has_binding_ext_mock = self.mock_object( neutron_api.API, '_has_port_binding_extension') self.has_binding_ext_mock.return_value = True self.bind_plugin = plugin.NeutronBindNetworkPlugin() self.bind_plugin.db = db_api self.sleep_mock = self.mock_object(time, 'sleep') self.bind_plugin = self._get_neutron_network_plugin_instance() self.bind_plugin.db = db_api def _get_neutron_network_plugin_instance(self, config_data=None): if not config_data: fake_net_id = 'fake net id' fake_subnet_id = 'fake subnet id' config_data = { 'DEFAULT': { 'neutron_net_id': fake_net_id, 'neutron_subnet_id': fake_subnet_id, 'neutron_physical_net_name': 'net1', } } fake_net = {'subnets': ['fake1', 'fake2', fake_subnet_id]} self.mock_object( neutron_api.API, 'get_network', mock.Mock(return_value=fake_net)) with test_utils.create_temp_config_with_opts(config_data): return plugin.NeutronBindSingleNetworkPlugin() def test_allocate_network(self): self.mock_object(plugin.NeutronNetworkPlugin, 'allocate_network') plugin.NeutronNetworkPlugin.allocate_network.return_value = [ 'port1', 'port2'] instance = self._get_neutron_network_plugin_instance() share_server = 'fake_share_server' share_network = {} share_network_subnet = {'neutron_net_id': {}} share_network_upd = {'neutron_net_id': {'upd': True}} count = 2 device_owner = 'fake_device_owner' self.mock_object( instance, '_update_share_network_net_data', mock.Mock(return_value=share_network_upd)) self.mock_object(instance, '_wait_for_ports_bind', mock.Mock()) instance.allocate_network( self.context, share_server, share_network, share_network_subnet, count=count, device_owner=device_owner) instance._update_share_network_net_data.assert_called_once_with( self.context, share_network_subnet) plugin.NeutronNetworkPlugin.allocate_network.assert_called_once_with( self.context, share_server, share_network, share_network_upd, count=count, device_owner=device_owner) instance._wait_for_ports_bind.assert_called_once_with( ['port1', 'port2'], share_server) def test_init_valid(self): fake_net_id = 'fake_net_id' fake_subnet_id = 'fake_subnet_id' config_data = { 'DEFAULT': { 'neutron_net_id': fake_net_id, 'neutron_subnet_id': fake_subnet_id, } } fake_net = {'subnets': ['fake1', 'fake2', fake_subnet_id]} self.mock_object( neutron_api.API, 'get_network', mock.Mock(return_value=fake_net)) with test_utils.create_temp_config_with_opts(config_data): instance = plugin.NeutronSingleNetworkPlugin() self.assertEqual(fake_net_id, instance.net) self.assertEqual(fake_subnet_id, instance.subnet) neutron_api.API.get_network.assert_called_once_with(fake_net_id) @ddt.data( {'net': None, 'subnet': None}, {'net': 'fake_net_id', 'subnet': None}, {'net': None, 'subnet': 'fake_subnet_id'}) @ddt.unpack def test_init_invalid(self, net, subnet): config_data = dict() # Simulate absence of set values if net: config_data['neutron_net_id'] = net if subnet: config_data['neutron_subnet_id'] = subnet config_data = dict(DEFAULT=config_data) with 
test_utils.create_temp_config_with_opts(config_data): self.assertRaises( exception.NetworkBadConfigurationException, plugin.NeutronSingleNetworkPlugin) @ddt.data({}, {'subnets': []}, {'subnets': ['different_foo_subnet']}) def test_init_subnet_does_not_belong_to_net(self, fake_net): fake_net_id = 'fake_net_id' config_data = { 'DEFAULT': { 'neutron_net_id': fake_net_id, 'neutron_subnet_id': 'fake_subnet_id', } } self.mock_object( neutron_api.API, 'get_network', mock.Mock(return_value=fake_net)) with test_utils.create_temp_config_with_opts(config_data): self.assertRaises( exception.NetworkBadConfigurationException, plugin.NeutronSingleNetworkPlugin) neutron_api.API.get_network.assert_called_once_with(fake_net_id) def _get_neutron_single_network_plugin_instance(self): fake_subnet_id = 'fake_subnet_id' config_data = { 'DEFAULT': { 'neutron_net_id': 'fake_net_id', 'neutron_subnet_id': fake_subnet_id, } } fake_net = {'subnets': [fake_subnet_id]} self.mock_object( neutron_api.API, 'get_network', mock.Mock(return_value=fake_net)) with test_utils.create_temp_config_with_opts(config_data): instance = plugin.NeutronSingleNetworkPlugin() return instance def test___update_share_network_net_data_same_values(self): instance = self._get_neutron_single_network_plugin_instance() share_network = { 'neutron_net_id': instance.net, 'neutron_subnet_id': instance.subnet, } result = instance._update_share_network_net_data( self.context, share_network) self.assertEqual(share_network, result) def test___update_share_network_net_data_different_values_empty(self): instance = self._get_neutron_single_network_plugin_instance() share_network_subnet_input = { 'id': 'fake_share_network_id', } share_network_result = { 'neutron_net_id': instance.net, 'neutron_subnet_id': instance.subnet, } self.mock_object( instance.db, 'share_network_subnet_update', mock.Mock(return_value='foo')) instance._update_share_network_net_data( self.context, share_network_subnet_input) instance.db.share_network_subnet_update.assert_called_once_with( self.context, share_network_subnet_input['id'], share_network_result) @ddt.data( {'n': 'fake_net_id', 's': 'bar'}, {'n': 'foo', 's': 'fake_subnet_id'}) @ddt.unpack def test___update_share_network_net_data_different_values(self, n, s): instance = self._get_neutron_single_network_plugin_instance() share_network = { 'id': 'fake_share_network_id', 'neutron_net_id': n, 'neutron_subnet_id': s, } self.mock_object( instance.db, 'share_network_update', mock.Mock(return_value=share_network)) self.assertRaises( exception.NetworkBadConfigurationException, instance._update_share_network_net_data, self.context, share_network) self.assertFalse(instance.db.share_network_update.called) def test_wait_for_bind(self): self.mock_object(self.bind_plugin.neutron_api, 'show_port') self.bind_plugin.neutron_api.show_port.return_value = fake_neutron_port self.bind_plugin._wait_for_ports_bind([fake_neutron_port], fake_share_server) self.bind_plugin.neutron_api.show_port.assert_called_once_with( fake_neutron_port['id']) self.sleep_mock.assert_not_called() def test_wait_for_bind_error(self): fake_neut_port = copy.copy(fake_neutron_port) fake_neut_port['status'] = 'ERROR' self.mock_object(self.bind_plugin.neutron_api, 'show_port') self.bind_plugin.neutron_api.show_port.return_value = fake_neut_port self.assertRaises(exception.NetworkException, self.bind_plugin._wait_for_ports_bind, [fake_neut_port, fake_neut_port], fake_share_server) self.bind_plugin.neutron_api.show_port.assert_called_once_with( fake_neutron_port['id']) 
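        # A port stuck in ERROR should fail fast: show_port is consulted only
        # once and no retry sleep is expected.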
        self.sleep_mock.assert_not_called()

    @ddt.data(('DOWN', 'ACTIVE'), ('DOWN', 'DOWN'), ('ACTIVE', 'DOWN'))
    def test_wait_for_bind_two_ports_no_bind(self, state):
        fake_neut_port1 = copy.copy(fake_neutron_port)
        fake_neut_port1['status'] = state[0]
        fake_neut_port2 = copy.copy(fake_neutron_port)
        fake_neut_port2['status'] = state[1]
        self.mock_object(self.bind_plugin.neutron_api, 'show_port')
        self.bind_plugin.neutron_api.show_port.side_effect = (
            [fake_neut_port1, fake_neut_port2] * 20)

        self.assertRaises(exception.NetworkBindException,
                          self.bind_plugin._wait_for_ports_bind,
                          [fake_neut_port1, fake_neut_port2],
                          fake_share_server)

    @mock.patch.object(db_api, 'network_allocation_create',
                       mock.Mock(return_value=fake_network_allocation))
    @mock.patch.object(db_api, 'share_network_get',
                       mock.Mock(return_value=fake_share_network))
    @mock.patch.object(db_api, 'share_server_get',
                       mock.Mock(return_value=fake_share_server))
    def test_allocate_network_one_allocation(self):
        self.mock_object(self.bind_plugin, '_has_provider_network_extension')
        self.bind_plugin._has_provider_network_extension.return_value = True
        save_nw_data = self.mock_object(self.bind_plugin,
                                        '_save_neutron_network_data')
        save_subnet_data = self.mock_object(self.bind_plugin,
                                            '_save_neutron_subnet_data')
        self.mock_object(self.bind_plugin, '_wait_for_ports_bind')
        neutron_host_id_opts = plugin.neutron_bind_network_plugin_opts[1]
        self.mock_object(neutron_host_id_opts, 'default')
        neutron_host_id_opts.default = 'foohost1'
        self.mock_object(db_api, 'network_allocation_create')

        with mock.patch.object(self.bind_plugin.neutron_api, 'create_port',
                               mock.Mock(return_value=fake_neutron_port)):
            self.bind_plugin.allocate_network(
                self.fake_context,
                fake_share_server,
                fake_share_network,
                fake_share_network_subnet,
                allocation_info={'count': 1})

            self.bind_plugin._has_provider_network_extension.assert_any_call()
            save_nw_data.assert_called_once_with(self.fake_context,
                                                 fake_share_network_subnet)
            save_subnet_data.assert_called_once_with(
                self.fake_context, fake_share_network_subnet)
            expected_kwargs = {
                'binding:vnic_type': 'baremetal',
                'host_id': 'foohost1',
                'network_id': fake_share_network_subnet['neutron_net_id'],
                'subnet_id': fake_share_network_subnet['neutron_subnet_id'],
                'device_owner': 'manila:share',
                'device_id': fake_share_network['id'],
            }
            self.bind_plugin.neutron_api.create_port.assert_called_once_with(
                fake_share_network['project_id'], **expected_kwargs)
            db_api.network_allocation_create.assert_called_once_with(
                self.fake_context, fake_network_allocation)
            self.bind_plugin._wait_for_ports_bind.assert_called_once_with(
                [db_api.network_allocation_create(
                    self.fake_context, fake_network_allocation)],
                fake_share_server)

    @ddt.data({
        'neutron_binding_profiles': None,
        'binding_profiles': {}
    }, {
        'neutron_binding_profiles': 'fake_profile',
        'binding_profiles': {}
    }, {
        'neutron_binding_profiles': 'fake_profile',
        'binding_profiles': None
    }, {
        'neutron_binding_profiles': 'fake_profile',
        'binding_profiles': {
            'fake_profile': {
                'neutron_switch_id': 'fake switch id',
                'neutron_port_id': 'fake port id',
                'neutron_switch_info': 'switch_ip: 127.0.0.1'
            }
        }
    }, {
        'neutron_binding_profiles': None,
        'binding_profiles': {
            'fake_profile': {
                'neutron_switch_id': 'fake switch id',
                'neutron_port_id': 'fake port id',
                'neutron_switch_info': 'switch_ip: 127.0.0.1'
            }
        }
    }, {
        'neutron_binding_profiles': 'fake_profile_one,fake_profile_two',
        'binding_profiles': {
            'fake_profile_one': {
                'neutron_switch_id': 'fake switch id 1',
                'neutron_port_id': 'fake port id 1',
                'neutron_switch_info': 'switch_ip: 
127.0.0.1' }, 'fake_profile_two': { 'neutron_switch_id': 'fake switch id 2', 'neutron_port_id': 'fake port id 2', 'neutron_switch_info': 'switch_ip: 127.0.0.2' } } }, { 'neutron_binding_profiles': 'fake_profile_two', 'binding_profiles': { 'fake_profile_one': { 'neutron_switch_id': 'fake switch id 1', 'neutron_port_id': 'fake port id 1', 'neutron_switch_info': 'switch_ip: 127.0.0.1' }, 'fake_profile_two': { 'neutron_switch_id': 'fake switch id 2', 'neutron_port_id': 'fake port id 2', 'neutron_switch_info': 'switch_ip: 127.0.0.2' } } }) @ddt.unpack @mock.patch.object(db_api, 'share_network_get', mock.Mock(return_value=fake_share_network)) @mock.patch.object(db_api, 'share_server_get', mock.Mock(return_value=fake_share_server)) def test__get_port_create_args(self, neutron_binding_profiles, binding_profiles): fake_device_owner = 'share' fake_host_id = 'fake host' neutron_host_id_opts = plugin.neutron_bind_network_plugin_opts[1] self.mock_object(neutron_host_id_opts, 'default') neutron_host_id_opts.default = fake_host_id config_data = { 'DEFAULT': { 'neutron_net_id': fake_neutron_network['id'], 'neutron_subnet_id': fake_neutron_network['subnets'][0] } } # Simulate absence of set values if neutron_binding_profiles: config_data['DEFAULT'][ 'neutron_binding_profiles'] = neutron_binding_profiles if binding_profiles: for name, binding_profile in binding_profiles.items(): config_data[name] = binding_profile instance = self._get_neutron_network_plugin_instance(config_data) create_args = instance._get_port_create_args(fake_share_server, fake_share_network_subnet, fake_device_owner) expected_create_args = { 'binding:vnic_type': 'baremetal', 'host_id': fake_host_id, 'network_id': fake_share_network_subnet['neutron_net_id'], 'subnet_id': fake_share_network_subnet['neutron_subnet_id'], 'device_owner': 'manila:' + fake_device_owner, 'device_id': fake_share_server['id'] } if neutron_binding_profiles: expected_create_args['binding:profile'] = { 'local_link_information': [] } local_links = expected_create_args[ 'binding:profile']['local_link_information'] for profile in neutron_binding_profiles.split(','): if binding_profiles is None: binding_profile = {} else: binding_profile = binding_profiles.get(profile, {}) local_links.append({ 'port_id': binding_profile.get('neutron_port_id', None), 'switch_id': binding_profile.get('neutron_switch_id', None) }) switch_info = binding_profile.get('neutron_switch_info', None) if switch_info is None: local_links[-1]['switch_info'] = None else: local_links[-1]['switch_info'] = cfg.types.Dict()( switch_info) self.assertEqual(expected_create_args, create_args) @mock.patch.object(db_api, 'share_network_get', mock.Mock(return_value=fake_share_network)) @mock.patch.object(db_api, 'share_server_get', mock.Mock(return_value=fake_share_server)) def test__get_port_create_args_host_id(self): fake_device_owner = 'share' fake_host_id = 'fake host' config_data = { 'DEFAULT': { 'neutron_net_id': fake_neutron_network['id'], 'neutron_subnet_id': fake_neutron_network['subnets'][0], 'neutron_host_id': fake_host_id } } instance = self._get_neutron_network_plugin_instance(config_data) create_args = instance._get_port_create_args(fake_share_server, fake_share_network_subnet, fake_device_owner) expected_create_args = { 'binding:vnic_type': 'baremetal', 'host_id': fake_host_id, 'network_id': fake_share_network_subnet['neutron_net_id'], 'subnet_id': fake_share_network_subnet['neutron_subnet_id'], 'device_owner': 'manila:' + fake_device_owner, 'device_id': fake_share_server['id'] } 
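        # A 'neutron_host_id' set in configuration should be passed through to
        # the port create arguments unchanged.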
        self.assertEqual(expected_create_args, create_args)


class NeutronBindNetworkPluginWithNormalTypeTest(test.TestCase):

    def setUp(self):
        super(NeutronBindNetworkPluginWithNormalTypeTest, self).setUp()
        config_data = {
            'DEFAULT': {
                'neutron_vnic_type': 'normal',
            }
        }
        self.plugin = plugin.NeutronNetworkPlugin()
        self.plugin.db = db_api
        self.fake_context = context.RequestContext(user_id='fake user',
                                                   project_id='fake project',
                                                   is_admin=False)
        with test_utils.create_temp_config_with_opts(config_data):
            self.bind_plugin = plugin.NeutronBindNetworkPlugin()
            self.bind_plugin.db = db_api

    @mock.patch.object(db_api, 'network_allocation_create',
                       mock.Mock(return_value=fake_network_allocation))
    @mock.patch.object(db_api, 'share_network_get',
                       mock.Mock(return_value=fake_share_network))
    @mock.patch.object(db_api, 'share_server_get',
                       mock.Mock(return_value=fake_share_server))
    def test_allocate_network_one_allocation(self):
        self.mock_object(self.bind_plugin, '_has_provider_network_extension')
        self.bind_plugin._has_provider_network_extension.return_value = True
        save_nw_data = self.mock_object(self.bind_plugin,
                                        '_save_neutron_network_data')
        save_subnet_data = self.mock_object(self.bind_plugin,
                                            '_save_neutron_subnet_data')
        self.mock_object(self.bind_plugin, '_wait_for_ports_bind')
        neutron_host_id_opts = plugin.neutron_bind_network_plugin_opts[1]
        self.mock_object(neutron_host_id_opts, 'default')
        neutron_host_id_opts.default = 'foohost1'
        self.mock_object(db_api, 'network_allocation_create')
        multi_seg = self.mock_object(
            self.bind_plugin, '_is_neutron_multi_segment')
        multi_seg.return_value = False

        with mock.patch.object(self.bind_plugin.neutron_api, 'create_port',
                               mock.Mock(return_value=fake_neutron_port)):
            self.bind_plugin.allocate_network(
                self.fake_context,
                fake_share_server,
                fake_share_network,
                fake_share_network_subnet,
                allocation_info={'count': 1})

            self.bind_plugin._has_provider_network_extension.assert_any_call()
            save_nw_data.assert_called_once_with(self.fake_context,
                                                 fake_share_network_subnet)
            save_subnet_data.assert_called_once_with(
                self.fake_context, fake_share_network_subnet)
            expected_kwargs = {
                'binding:vnic_type': 'normal',
                'host_id': 'foohost1',
                'network_id': fake_share_network_subnet['neutron_net_id'],
                'subnet_id': fake_share_network_subnet['neutron_subnet_id'],
                'device_owner': 'manila:share',
                'device_id': fake_share_server['id'],
            }
            self.bind_plugin.neutron_api.create_port.assert_called_once_with(
                fake_share_network['project_id'], **expected_kwargs)
            db_api.network_allocation_create.assert_called_once_with(
                self.fake_context, fake_network_allocation)
            self.bind_plugin._wait_for_ports_bind.assert_not_called()

    def test_update_network_allocation(self):
        self.mock_object(self.bind_plugin, '_wait_for_ports_bind')
        self.mock_object(db_api, 'network_allocations_get_for_share_server')
        db_api.network_allocations_get_for_share_server.return_value = [
            fake_neutron_port]

        self.bind_plugin.update_network_allocation(self.fake_context,
                                                   fake_share_server)

        self.bind_plugin._wait_for_ports_bind.assert_called_once_with(
            [fake_neutron_port], fake_share_server)


@ddt.ddt
class NeutronBindSingleNetworkPluginWithNormalTypeTest(test.TestCase):

    def setUp(self):
        super(NeutronBindSingleNetworkPluginWithNormalTypeTest, self).setUp()
        fake_net_id = 'fake net id'
        fake_subnet_id = 'fake subnet id'
        config_data = {
            'DEFAULT': {
                'neutron_net_id': fake_net_id,
                'neutron_subnet_id': fake_subnet_id,
                'neutron_vnic_type': 'normal',
            }
        }
        fake_net = {'subnets': ['fake1', 'fake2', fake_subnet_id]}
        self.mock_object(
            neutron_api.API, 'get_network',
            mock.Mock(return_value=fake_net))
        self.plugin = plugin.NeutronNetworkPlugin()
        self.plugin.db = db_api
        self.fake_context = context.RequestContext(user_id='fake user',
                                                   project_id='fake project',
                                                   is_admin=False)
        with test_utils.create_temp_config_with_opts(config_data):
            self.bind_plugin = plugin.NeutronBindSingleNetworkPlugin()
            self.bind_plugin.db = db_api

    @mock.patch.object(db_api, 'network_allocation_create',
                       mock.Mock(return_value=fake_network_allocation))
    @mock.patch.object(db_api, 'share_network_get',
                       mock.Mock(return_value=fake_share_network))
    @mock.patch.object(db_api, 'share_server_get',
                       mock.Mock(return_value=fake_share_server))
    def test_allocate_network_one_allocation(self):
        self.mock_object(self.bind_plugin, '_has_provider_network_extension')
        self.bind_plugin._has_provider_network_extension.return_value = True
        save_nw_data = self.mock_object(self.bind_plugin,
                                        '_save_neutron_network_data')
        save_subnet_data = self.mock_object(self.bind_plugin,
                                            '_save_neutron_subnet_data')
        self.mock_object(self.bind_plugin, '_wait_for_ports_bind')
        neutron_host_id_opts = plugin.neutron_bind_network_plugin_opts[1]
        self.mock_object(neutron_host_id_opts, 'default')
        neutron_host_id_opts.default = 'foohost1'
        self.mock_object(db_api, 'network_allocation_create')

        with mock.patch.object(self.bind_plugin.neutron_api, 'create_port',
                               mock.Mock(return_value=fake_neutron_port)):
            self.bind_plugin.allocate_network(
                self.fake_context,
                fake_share_server,
                fake_share_network,
                fake_share_network_subnet,
                allocation_info={'count': 1})

            self.bind_plugin._has_provider_network_extension.assert_any_call()
            save_nw_data.assert_called_once_with(self.fake_context,
                                                 fake_share_network_subnet)
            save_subnet_data.assert_called_once_with(
                self.fake_context, fake_share_network_subnet)
            expected_kwargs = {
                'binding:vnic_type': 'normal',
                'host_id': 'foohost1',
                'network_id': fake_share_network_subnet['neutron_net_id'],
                'subnet_id': fake_share_network_subnet['neutron_subnet_id'],
                'device_owner': 'manila:share',
                'device_id': fake_share_network['id'],
            }
            self.bind_plugin.neutron_api.create_port.assert_called_once_with(
                fake_share_network['project_id'], **expected_kwargs)
            db_api.network_allocation_create.assert_called_once_with(
                self.fake_context, fake_network_allocation)
            self.bind_plugin._wait_for_ports_bind.assert_not_called()

    def test_update_network_allocation(self):
        self.mock_object(self.bind_plugin, '_wait_for_ports_bind')
        self.mock_object(db_api, 'network_allocations_get_for_share_server')
        db_api.network_allocations_get_for_share_server.return_value = [
            fake_neutron_port]

        self.bind_plugin.update_network_allocation(self.fake_context,
                                                   fake_share_server)

        self.bind_plugin._wait_for_ports_bind.assert_called_once_with(
            [fake_neutron_port], fake_share_server)

    @ddt.data({'fix_ips': [{'ip_address': 'test_ip'},
                           {'ip_address': '10.78.223.129'}],
               'ip_version': 4},
              {'fix_ips': [{'ip_address': 'test_ip'},
                           {'ip_address': 'ad80::abaa:0:c2:2'}],
               'ip_version': 6},
              {'fix_ips': [{'ip_address': '10.78.223.129'},
                           {'ip_address': 'ad80::abaa:0:c2:2'}],
               'ip_version': 6},
              )
    @ddt.unpack
    def test__get_matched_ip_address(self, fix_ips, ip_version):
        result = self.bind_plugin._get_matched_ip_address(fix_ips, ip_version)
        self.assertEqual(fix_ips[1]['ip_address'], result)

    @ddt.data({'fix_ips': [{'ip_address': 'test_ip_1'},
                           {'ip_address': 'test_ip_2'}],
               'ip_version': (4, 6)},
              {'fix_ips': [{'ip_address': 'ad80::abaa:0:c2:1'},
                           {'ip_address': 'ad80::abaa:0:c2:2'}],
               'ip_version': (4, )},
              {'fix_ips': [{'ip_address': '192.0.0.2'},
                           {'ip_address': '192.0.0.3'}],
               'ip_version': (6, )},
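              # The remaining datasets mix malformed addresses and mismatched
              # IP families; each one must raise
              # NetworkBadConfigurationException in the test below.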
{'fix_ips': [{'ip_address': '192.0.0.2/12'}, {'ip_address': '192.0.0.330'}, {'ip_address': 'ad80::001::ad80'}, {'ip_address': 'ad80::abaa:0:c2:2/64'}], 'ip_version': (4, 6)}, ) @ddt.unpack def test__get_matched_ip_address_illegal(self, fix_ips, ip_version): for version in ip_version: self.assertRaises(exception.NetworkBadConfigurationException, self.bind_plugin._get_matched_ip_address, fix_ips, version) manila-10.0.0/manila/tests/network/neutron/test_neutron_api.py0000664000175000017500000006100613656750227024615 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2014 Mirantis Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock from neutronclient.common import exceptions as neutron_client_exc from neutronclient.v2_0 import client as clientv20 from oslo_config import cfg from manila.db import base from manila import exception from manila.network.neutron import api as neutron_api from manila.network.neutron import constants as neutron_constants from manila import test from manila.tests.db import fakes from manila.tests import utils as test_utils CONF = cfg.CONF class FakeNeutronClient(object): def create_port(self, body): return body def delete_port(self, port_id): pass def show_port(self, port_id): pass def list_ports(self, **search_opts): pass def list_networks(self): pass def show_network(self, network_uuid): pass def show_subnet(self, subnet_uuid): pass def create_router(self, body): return body def list_routers(self): pass def create_network(self, body): return body def create_subnet(self, body): return body def update_port(self, port_id, body): return body def add_interface_router(self, router_id, subnet_id, port_id): pass def update_router(self, router_id, body): return body def show_router(self, router_id): pass def list_extensions(self): pass class NeutronclientTestCase(test.TestCase): def test_no_auth_obj(self): mock_client_loader = self.mock_object( neutron_api.client_auth, 'AuthClientLoader') fake_context = 'fake_context' data = { 'neutron': { 'endpoint_type': 'foo_endpoint_type', 'region_name': 'foo_region_name', } } self.client = None with test_utils.create_temp_config_with_opts(data): self.client = neutron_api.API() self.client.get_client(fake_context) mock_client_loader.assert_called_once_with( client_class=neutron_api.clientv20.Client, exception_module=neutron_api.neutron_client_exc, cfg_group=neutron_api.NEUTRON_GROUP ) mock_client_loader.return_value.get_client.assert_called_once_with( self.client, fake_context, endpoint_type=data['neutron']['endpoint_type'], region_name=data['neutron']['region_name'], ) def test_with_auth_obj(self): fake_context = 'fake_context' data = { 'neutron': { 'endpoint_type': 'foo_endpoint_type', 'region_name': 'foo_region_name', } } self.client = None with test_utils.create_temp_config_with_opts(data): self.client = neutron_api.API() self.client.auth_obj = type( 'FakeAuthObj', (object, ), {'get_client': mock.Mock()}) self.client.get_client(fake_context) 
self.client.auth_obj.get_client.assert_called_once_with( self.client, fake_context, endpoint_type=data['neutron']['endpoint_type'], region_name=data['neutron']['region_name'], ) class NeutronApiTest(test.TestCase): def setUp(self): super(NeutronApiTest, self).setUp() self.mock_object(base, 'Base', fakes.FakeModel) self.mock_object( clientv20, 'Client', mock.Mock(return_value=FakeNeutronClient())) self.neutron_api = neutron_api.API() def test_create_api_object(self): # instantiate Neutron API object neutron_api_instance = neutron_api.API() # Verify results self.assertTrue(hasattr(neutron_api_instance, 'client')) self.assertTrue(hasattr(neutron_api_instance, 'configuration')) self.assertEqual('DEFAULT', neutron_api_instance.config_group_name) def test_create_port_with_all_args(self): # Set up test data self.mock_object(self.neutron_api, '_has_port_binding_extension', mock.Mock(return_value=True)) port_args = { 'tenant_id': 'test tenant', 'network_id': 'test net', 'host_id': 'test host', 'subnet_id': 'test subnet', 'fixed_ip': 'test ip', 'device_owner': 'test owner', 'device_id': 'test device', 'mac_address': 'test mac', 'security_group_ids': 'test group', 'dhcp_opts': 'test dhcp', } # Execute method 'create_port' port = self.neutron_api.create_port(**port_args) # Verify results self.assertEqual(port_args['tenant_id'], port['tenant_id']) self.assertEqual(port_args['network_id'], port['network_id']) self.assertEqual(port_args['host_id'], port['binding:host_id']) self.assertEqual(port_args['subnet_id'], port['fixed_ips'][0]['subnet_id']) self.assertEqual(port_args['fixed_ip'], port['fixed_ips'][0]['ip_address']) self.assertEqual(port_args['device_owner'], port['device_owner']) self.assertEqual(port_args['device_id'], port['device_id']) self.assertEqual(port_args['mac_address'], port['mac_address']) self.assertEqual(port_args['security_group_ids'], port['security_groups']) self.assertEqual(port_args['dhcp_opts'], port['extra_dhcp_opts']) self.neutron_api._has_port_binding_extension.assert_called_once_with() self.assertTrue(clientv20.Client.called) def test_create_port_with_required_args(self): # Set up test data port_args = {'tenant_id': 'test tenant', 'network_id': 'test net'} # Execute method 'create_port' port = self.neutron_api.create_port(**port_args) # Verify results self.assertEqual(port_args['tenant_id'], port['tenant_id']) self.assertEqual(port_args['network_id'], port['network_id']) self.assertTrue(clientv20.Client.called) def test_create_port_with_additional_kwargs(self): # Set up test data port_args = {'tenant_id': 'test tenant', 'network_id': 'test net', 'binding_arg': 'foo'} # Execute method 'create_port' port = self.neutron_api.create_port(**port_args) # Verify results self.assertEqual(port_args['tenant_id'], port['tenant_id']) self.assertEqual(port_args['network_id'], port['network_id']) self.assertEqual(port_args['binding_arg'], port['binding_arg']) self.assertTrue(clientv20.Client.called) def test_create_port_with_host_id_no_binding_ext(self): self.mock_object(self.neutron_api, '_has_port_binding_extension', mock.Mock(return_value=False)) port_args = { 'tenant_id': 'test tenant', 'network_id': 'test net', 'host_id': 'foohost' } self.assertRaises(exception.NetworkException, self.neutron_api.create_port, **port_args) @mock.patch.object(neutron_api.LOG, 'exception', mock.Mock()) def test_create_port_exception(self): self.mock_object( self.neutron_api.client, 'create_port', mock.Mock(side_effect=neutron_client_exc.NeutronClientException)) port_args = {'tenant_id': 'test tenant', 
'network_id': 'test net'} # Execute method 'create_port' self.assertRaises(exception.NetworkException, self.neutron_api.create_port, **port_args) # Verify results self.assertTrue(neutron_api.LOG.exception.called) self.assertTrue(clientv20.Client.called) self.assertTrue(self.neutron_api.client.create_port.called) @mock.patch.object(neutron_api.LOG, 'exception', mock.Mock()) def test_create_port_exception_status_409(self): # Set up test data self.mock_object( self.neutron_api.client, 'create_port', mock.Mock(side_effect=neutron_client_exc.NeutronClientException( status_code=409))) port_args = {'tenant_id': 'test tenant', 'network_id': 'test net'} # Execute method 'create_port' self.assertRaises(exception.PortLimitExceeded, self.neutron_api.create_port, **port_args) # Verify results self.assertTrue(neutron_api.LOG.exception.called) self.assertTrue(clientv20.Client.called) self.assertTrue(self.neutron_api.client.create_port.called) def test_delete_port(self): # Set up test data self.mock_object(self.neutron_api.client, 'delete_port') port_id = 'test port id' # Execute method 'delete_port' self.neutron_api.delete_port(port_id) # Verify results self.neutron_api.client.delete_port.assert_called_once_with(port_id) self.assertTrue(clientv20.Client.called) def test_list_ports(self): # Set up test data search_opts = {'test_option': 'test_value'} fake_ports = [{'fake port': 'fake port info'}] self.mock_object( self.neutron_api.client, 'list_ports', mock.Mock(return_value={'ports': fake_ports})) # Execute method 'list_ports' ports = self.neutron_api.list_ports(**search_opts) # Verify results self.assertEqual(fake_ports, ports) self.assertTrue(clientv20.Client.called) self.neutron_api.client.list_ports.assert_called_once_with( **search_opts) def test_show_port(self): # Set up test data port_id = 'test port id' fake_port = {'fake port': 'fake port info'} self.mock_object( self.neutron_api.client, 'show_port', mock.Mock(return_value={'port': fake_port})) # Execute method 'show_port' port = self.neutron_api.show_port(port_id) # Verify results self.assertEqual(fake_port, port) self.assertTrue(clientv20.Client.called) self.neutron_api.client.show_port.assert_called_once_with(port_id) def test_get_network(self): # Set up test data network_id = 'test network id' fake_network = {'fake network': 'fake network info'} self.mock_object( self.neutron_api.client, 'show_network', mock.Mock(return_value={'network': fake_network})) # Execute method 'get_network' network = self.neutron_api.get_network(network_id) # Verify results self.assertEqual(fake_network, network) self.assertTrue(clientv20.Client.called) self.neutron_api.client.show_network.assert_called_once_with( network_id) def test_get_subnet(self): # Set up test data subnet_id = 'fake subnet id' self.mock_object( self.neutron_api.client, 'show_subnet', mock.Mock(return_value={'subnet': {}})) # Execute method 'get_subnet' subnet = self.neutron_api.get_subnet(subnet_id) # Verify results self.assertEqual({}, subnet) self.assertTrue(clientv20.Client.called) self.neutron_api.client.show_subnet.assert_called_once_with( subnet_id) def test_get_all_network(self): # Set up test data fake_networks = [{'fake network': 'fake network info'}] self.mock_object( self.neutron_api.client, 'list_networks', mock.Mock(return_value={'networks': fake_networks})) # Execute method 'get_all_networks' networks = self.neutron_api.get_all_networks() # Verify results self.assertEqual(fake_networks, networks) self.assertTrue(clientv20.Client.called) 
self.neutron_api.client.list_networks.assert_called_once_with() def test_list_extensions(self): # Set up test data extensions = [ {'name': neutron_constants.PORTBINDING_EXT}, {'name': neutron_constants.PROVIDER_NW_EXT}, ] self.mock_object( self.neutron_api.client, 'list_extensions', mock.Mock(return_value={'extensions': extensions})) # Execute method 'list_extensions' result = self.neutron_api.list_extensions() # Verify results self.assertTrue(clientv20.Client.called) self.neutron_api.client.list_extensions.assert_called_once_with() self.assertIn(neutron_constants.PORTBINDING_EXT, result) self.assertIn(neutron_constants.PROVIDER_NW_EXT, result) self.assertEqual( extensions[0], result[neutron_constants.PORTBINDING_EXT]) self.assertEqual( extensions[1], result[neutron_constants.PROVIDER_NW_EXT]) def test_create_network(self): # Set up test data net_args = {'tenant_id': 'test tenant', 'name': 'test name'} # Execute method 'network_create' network = self.neutron_api.network_create(**net_args) # Verify results self.assertEqual(net_args['tenant_id'], network['tenant_id']) self.assertEqual(net_args['name'], network['name']) self.assertTrue(clientv20.Client.called) def test_create_subnet(self): # Set up test data subnet_args = { 'tenant_id': 'test tenant', 'name': 'test name', 'net_id': 'test net id', 'cidr': '10.0.0.0/24', } # Execute method 'subnet_create' subnet = self.neutron_api.subnet_create(**subnet_args) # Verify results self.assertEqual(subnet_args['tenant_id'], subnet['tenant_id']) self.assertEqual(subnet_args['name'], subnet['name']) self.assertTrue(clientv20.Client.called) def test_create_router(self): # Set up test data router_args = {'tenant_id': 'test tenant', 'name': 'test name'} # Execute method 'router_create' router = self.neutron_api.router_create(**router_args) # Verify results self.assertEqual(router_args['tenant_id'], router['tenant_id']) self.assertEqual(router_args['name'], router['name']) self.assertTrue(clientv20.Client.called) def test_list_routers(self): # Set up test data fake_routers = [{'fake router': 'fake router info'}] self.mock_object( self.neutron_api.client, 'list_routers', mock.Mock(return_value={'routers': fake_routers})) # Execute method 'router_list' networks = self.neutron_api.router_list() # Verify results self.assertEqual(fake_routers, networks) self.assertTrue(clientv20.Client.called) self.neutron_api.client.list_routers.assert_called_once_with() def test_create_network_exception(self): # Set up test data net_args = {'tenant_id': 'test tenant', 'name': 'test name'} self.mock_object( self.neutron_api.client, 'create_network', mock.Mock(side_effect=neutron_client_exc.NeutronClientException)) # Execute method 'network_create' self.assertRaises( exception.NetworkException, self.neutron_api.network_create, **net_args) # Verify results self.neutron_api.client.create_network.assert_called_once_with( {'network': net_args}) self.assertTrue(clientv20.Client.called) def test_create_subnet_exception(self): # Set up test data subnet_args = { 'tenant_id': 'test tenant', 'name': 'test name', 'net_id': 'test net id', 'cidr': '10.0.0.0/24', } self.mock_object( self.neutron_api.client, 'create_subnet', mock.Mock(side_effect=neutron_client_exc.NeutronClientException)) # Execute method 'subnet_create' self.assertRaises( exception.NetworkException, self.neutron_api.subnet_create, **subnet_args) # Verify results expected_data = { 'network_id': subnet_args['net_id'], 'tenant_id': subnet_args['tenant_id'], 'cidr': subnet_args['cidr'], 'name': subnet_args['name'], 
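            # The create_subnet body is expected to carry the default
            # ip_version of 4, even though subnet_args did not specify one.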
'ip_version': 4, } self.neutron_api.client.create_subnet.assert_called_once_with( {'subnet': expected_data}) self.assertTrue(clientv20.Client.called) def test_create_router_exception(self): # Set up test data router_args = {'tenant_id': 'test tenant', 'name': 'test name'} self.mock_object( self.neutron_api.client, 'create_router', mock.Mock(side_effect=neutron_client_exc.NeutronClientException)) # Execute method 'router_create' self.assertRaises( exception.NetworkException, self.neutron_api.router_create, **router_args) # Verify results self.neutron_api.client.create_router.assert_called_once_with( {'router': router_args}) self.assertTrue(clientv20.Client.called) def test_update_port_fixed_ips(self): # Set up test data port_id = 'test_port' fixed_ips = {'fixed_ips': [{'subnet_id': 'test subnet'}]} # Execute method 'update_port_fixed_ips' port = self.neutron_api.update_port_fixed_ips(port_id, fixed_ips) # Verify results self.assertEqual(fixed_ips, port) self.assertTrue(clientv20.Client.called) def test_update_port_fixed_ips_exception(self): # Set up test data port_id = 'test_port' fixed_ips = {'fixed_ips': [{'subnet_id': 'test subnet'}]} self.mock_object( self.neutron_api.client, 'update_port', mock.Mock(side_effect=neutron_client_exc.NeutronClientException)) # Execute method 'update_port_fixed_ips' self.assertRaises( exception.NetworkException, self.neutron_api.update_port_fixed_ips, port_id, fixed_ips) # Verify results self.neutron_api.client.update_port.assert_called_once_with( port_id, {'port': fixed_ips}) self.assertTrue(clientv20.Client.called) def test_router_update_routes(self): # Set up test data router_id = 'test_router' routes = { 'routes': [ {'destination': '0.0.0.0/0', 'nexthop': '8.8.8.8', }, ], } # Execute method 'router_update_routes' router = self.neutron_api.router_update_routes(router_id, routes) # Verify results self.assertEqual(routes, router) self.assertTrue(clientv20.Client.called) def test_router_update_routes_exception(self): # Set up test data router_id = 'test_router' routes = { 'routes': [ {'destination': '0.0.0.0/0', 'nexthop': '8.8.8.8', }, ], } self.mock_object( self.neutron_api.client, 'update_router', mock.Mock(side_effect=neutron_client_exc.NeutronClientException)) # Execute method 'router_update_routes' self.assertRaises( exception.NetworkException, self.neutron_api.router_update_routes, router_id, routes) # Verify results self.neutron_api.client.update_router.assert_called_once_with( router_id, {'router': routes}) self.assertTrue(clientv20.Client.called) def test_show_router(self): # Set up test data router_id = 'test router id' fake_router = {'fake router': 'fake router info'} self.mock_object( self.neutron_api.client, 'show_router', mock.Mock(return_value={'router': fake_router})) # Execute method 'show_router' port = self.neutron_api.show_router(router_id) # Verify results self.assertEqual(fake_router, port) self.assertTrue(clientv20.Client.called) self.neutron_api.client.show_router.assert_called_once_with(router_id) def test_router_add_interface(self): # Set up test data router_id = 'test port id' subnet_id = 'test subnet id' port_id = 'test port id' self.mock_object(self.neutron_api.client, 'add_interface_router') # Execute method 'router_add_interface' self.neutron_api.router_add_interface(router_id, subnet_id, port_id) # Verify results self.neutron_api.client.add_interface_router.assert_called_once_with( port_id, {'subnet_id': subnet_id, 'port_id': port_id}) self.assertTrue(clientv20.Client.called) def test_router_add_interface_exception(self): # 
Set up test data router_id = 'test port id' subnet_id = 'test subnet id' port_id = 'test port id' self.mock_object( self.neutron_api.client, 'add_interface_router', mock.Mock(side_effect=neutron_client_exc.NeutronClientException)) # Execute method 'router_add_interface' self.assertRaises( exception.NetworkException, self.neutron_api.router_add_interface, router_id, subnet_id, port_id) # Verify results self.neutron_api.client.add_interface_router.assert_called_once_with( router_id, {'subnet_id': subnet_id, 'port_id': port_id}) self.assertTrue(clientv20.Client.called) def test_admin_project_id_exist(self): fake_admin_project_id = 'fake_admin_project_id_value' self.neutron_api.client.httpclient = mock.Mock() self.neutron_api.client.httpclient.auth_token = mock.Mock() self.neutron_api.client.httpclient.get_project_id = mock.Mock( return_value=fake_admin_project_id) admin_project_id = self.neutron_api.admin_project_id self.assertEqual(fake_admin_project_id, admin_project_id) self.neutron_api.client.httpclient.auth_token.called def test_admin_project_id_not_exist(self): fake_admin_project_id = 'fake_admin_project_id_value' self.neutron_api.client.httpclient = mock.Mock() self.neutron_api.client.httpclient.auth_token = mock.Mock( return_value=None) self.neutron_api.client.httpclient.authenticate = mock.Mock() self.neutron_api.client.httpclient.get_project_id = mock.Mock( return_value=fake_admin_project_id) admin_project_id = self.neutron_api.admin_project_id self.assertEqual(fake_admin_project_id, admin_project_id) self.neutron_api.client.httpclient.auth_token.called self.neutron_api.client.httpclient.authenticate.called def test_admin_project_id_not_exist_with_failure(self): self.neutron_api.client.httpclient = mock.Mock() self.neutron_api.client.httpclient.auth_token = None self.neutron_api.client.httpclient.authenticate = mock.Mock( side_effect=neutron_client_exc.NeutronClientException) self.neutron_api.client.httpclient.auth_tenant_id = mock.Mock() try: self.neutron_api.admin_project_id except exception.NetworkException: pass else: raise Exception('Expected error was not raised') self.assertTrue(self.neutron_api.client.httpclient.authenticate.called) self.assertFalse( self.neutron_api.client.httpclient.auth_tenant_id.called) def test_get_all_admin_project_networks(self): fake_networks = {'networks': ['fake_net_1', 'fake_net_2']} self.mock_object( self.neutron_api.client, 'list_networks', mock.Mock(return_value=fake_networks)) self.neutron_api.client.httpclient = mock.Mock() self.neutron_api.client.httpclient.auth_token = mock.Mock() self.neutron_api.client.httpclient.auth_tenant_id = mock.Mock() networks = self.neutron_api.get_all_admin_project_networks() self.assertEqual(fake_networks['networks'], networks) self.neutron_api.client.httpclient.auth_token.called self.neutron_api.client.httpclient.auth_tenant_id.called self.neutron_api.client.list_networks.assert_called_once_with( tenant_id=self.neutron_api.admin_project_id, shared=False) manila-10.0.0/manila/tests/network/test_standalone_network_plugin.py0000664000175000017500000005361213656750227026063 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis, Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import ddt import netaddr from oslo_config import cfg import six from manila.common import constants from manila import context from manila import exception from manila.network import standalone_network_plugin as plugin from manila import test from manila.tests import utils as test_utils CONF = cfg.CONF fake_context = context.RequestContext( user_id='fake user', project_id='fake project', is_admin=False) fake_share_server = dict(id='fake_share_server_id') fake_share_network = dict(id='fake_share_network_id') fake_share_network_subnet = dict(id='fake_share_network_subnet_id') @ddt.ddt class StandaloneNetworkPluginTest(test.TestCase): @ddt.data('custom_config_group_name', 'DEFAULT') def test_init_only_with_required_data_v4(self, group_name): data = { group_name: { 'standalone_network_plugin_gateway': '10.0.0.1', 'standalone_network_plugin_mask': '24', }, } with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin( config_group_name=group_name) self.assertEqual('10.0.0.1', instance.gateway) self.assertEqual('24', instance.mask) self.assertIsNone(instance.segmentation_id) self.assertIsNone(instance.allowed_ip_ranges) self.assertEqual(4, instance.ip_version) self.assertEqual(netaddr.IPNetwork('10.0.0.1/24'), instance.net) self.assertEqual(['10.0.0.1/24'], instance.allowed_cidrs) self.assertEqual( ('10.0.0.0', '10.0.0.1', '10.0.0.255'), instance.reserved_addresses) @ddt.data('custom_config_group_name', 'DEFAULT') def test_init_with_all_data_v4(self, group_name): data = { group_name: { 'standalone_network_plugin_gateway': '10.0.0.1', 'standalone_network_plugin_mask': '255.255.0.0', 'standalone_network_plugin_network_type': 'vlan', 'standalone_network_plugin_segmentation_id': 1001, 'standalone_network_plugin_allowed_ip_ranges': ( '10.0.0.3-10.0.0.7,10.0.0.69-10.0.0.157,10.0.0.213'), 'network_plugin_ipv4_enabled': True, }, } allowed_cidrs = [ '10.0.0.3/32', '10.0.0.4/30', '10.0.0.69/32', '10.0.0.70/31', '10.0.0.72/29', '10.0.0.80/28', '10.0.0.96/27', '10.0.0.128/28', '10.0.0.144/29', '10.0.0.152/30', '10.0.0.156/31', '10.0.0.213/32', ] with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin( config_group_name=group_name) self.assertEqual(4, instance.ip_version) self.assertEqual('10.0.0.1', instance.gateway) self.assertEqual('255.255.0.0', instance.mask) self.assertEqual('vlan', instance.network_type) self.assertEqual(1001, instance.segmentation_id) self.assertEqual(allowed_cidrs, instance.allowed_cidrs) self.assertEqual( ['10.0.0.3-10.0.0.7', '10.0.0.69-10.0.0.157', '10.0.0.213'], instance.allowed_ip_ranges) self.assertEqual( netaddr.IPNetwork('10.0.0.1/255.255.0.0'), instance.net) self.assertEqual( ('10.0.0.0', '10.0.0.1', '10.0.255.255'), instance.reserved_addresses) @ddt.data('custom_config_group_name', 'DEFAULT') def test_init_only_with_required_data_v6(self, group_name): data = { group_name: { 'standalone_network_plugin_gateway': ( '2001:cdba::3257:9652'), 'standalone_network_plugin_mask': '48', 'network_plugin_ipv6_enabled': True, }, } with test_utils.create_temp_config_with_opts(data): instance 
= plugin.StandaloneNetworkPlugin( config_group_name=group_name) self.assertEqual( '2001:cdba::3257:9652', instance.gateway) self.assertEqual('48', instance.mask) self.assertIsNone(instance.segmentation_id) self.assertIsNone(instance.allowed_ip_ranges) self.assertEqual(6, instance.ip_version) self.assertEqual( netaddr.IPNetwork('2001:cdba::3257:9652/48'), instance.net) self.assertEqual( ['2001:cdba::3257:9652/48'], instance.allowed_cidrs) self.assertEqual( ('2001:cdba::', '2001:cdba::3257:9652', netaddr.IPAddress('2001:cdba:0:ffff:ffff:ffff:ffff:ffff').format() ), instance.reserved_addresses) @ddt.data('custom_config_group_name', 'DEFAULT') def test_init_with_all_data_v6(self, group_name): data = { group_name: { 'standalone_network_plugin_gateway': '2001:db8::0001', 'standalone_network_plugin_mask': '88', 'standalone_network_plugin_network_type': 'vlan', 'standalone_network_plugin_segmentation_id': 3999, 'standalone_network_plugin_allowed_ip_ranges': ( '2001:db8::-2001:db8:0000:0000:0000:007f:ffff:ffff'), 'network_plugin_ipv6_enabled': True, }, } with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin( config_group_name=group_name) self.assertEqual(6, instance.ip_version) self.assertEqual('2001:db8::0001', instance.gateway) self.assertEqual('88', instance.mask) self.assertEqual('vlan', instance.network_type) self.assertEqual(3999, instance.segmentation_id) self.assertEqual(['2001:db8::/89'], instance.allowed_cidrs) self.assertEqual( ['2001:db8::-2001:db8:0000:0000:0000:007f:ffff:ffff'], instance.allowed_ip_ranges) self.assertEqual( netaddr.IPNetwork('2001:db8::0001/88'), instance.net) self.assertEqual( ('2001:db8::', '2001:db8::0001', '2001:db8::ff:ffff:ffff'), instance.reserved_addresses) @ddt.data('flat', 'vlan', 'vxlan', 'gre') def test_init_with_valid_network_types_v4(self, network_type): data = { 'DEFAULT': { 'standalone_network_plugin_gateway': '10.0.0.1', 'standalone_network_plugin_mask': '255.255.0.0', 'standalone_network_plugin_network_type': network_type, 'standalone_network_plugin_segmentation_id': 1001, 'network_plugin_ipv4_enabled': True, }, } with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin( config_group_name='DEFAULT') self.assertEqual(instance.network_type, network_type) @ddt.data( 'foo', 'foovlan', 'vlanfoo', 'foovlanbar', 'None', 'Vlan', 'vlaN') def test_init_with_fake_network_types_v4(self, fake_network_type): data = { 'DEFAULT': { 'standalone_network_plugin_gateway': '10.0.0.1', 'standalone_network_plugin_mask': '255.255.0.0', 'standalone_network_plugin_network_type': fake_network_type, 'standalone_network_plugin_segmentation_id': 1001, 'network_plugin_ipv4_enabled': True, }, } with test_utils.create_temp_config_with_opts(data): self.assertRaises( cfg.ConfigFileValueError, plugin.StandaloneNetworkPlugin, config_group_name='DEFAULT', ) @ddt.data('custom_config_group_name', 'DEFAULT') def test_invalid_init_without_any_config_definitions(self, group_name): self.assertRaises( exception.NetworkBadConfigurationException, plugin.StandaloneNetworkPlugin, config_group_name=group_name) @ddt.data( {}, {'gateway': '20.0.0.1'}, {'mask': '8'}, {'gateway': '20.0.0.1', 'mask': '33'}, {'gateway': '20.0.0.256', 'mask': '16'}) def test_invalid_init_required_data_improper(self, data): group_name = 'custom_group_name' if 'gateway' in data: data['standalone_network_plugin_gateway'] = data.pop('gateway') if 'mask' in data: data['standalone_network_plugin_mask'] = data.pop('mask') data = {group_name: data} with 
test_utils.create_temp_config_with_opts(data): self.assertRaises( exception.NetworkBadConfigurationException, plugin.StandaloneNetworkPlugin, config_group_name=group_name) @ddt.data( 'fake', '11.0.0.0-11.0.0.5-11.0.0.11', '11.0.0.0-11.0.0.5', '10.0.10.0-10.0.10.5', '10.0.0.0-10.0.0.5,fake', '10.0.10.0-10.0.10.5,10.0.0.0-10.0.0.5', '10.0.10.0-10.0.10.5,10.0.0.10-10.0.10.5', '10.0.0.0-10.0.0.5,10.0.10.0-10.0.10.5') def test_invalid_init_incorrect_allowed_ip_ranges_v4(self, ip_range): group_name = 'DEFAULT' data = { group_name: { 'standalone_network_plugin_gateway': '10.0.0.1', 'standalone_network_plugin_mask': '255.255.255.0', 'standalone_network_plugin_allowed_ip_ranges': ip_range, }, } with test_utils.create_temp_config_with_opts(data): self.assertRaises( exception.NetworkBadConfigurationException, plugin.StandaloneNetworkPlugin, config_group_name=group_name) @ddt.data( {'gateway': '2001:db8::0001', 'vers': 4}, {'gateway': '10.0.0.1', 'vers': 6}) @ddt.unpack def test_invalid_init_mismatch_of_versions(self, gateway, vers): group_name = 'DEFAULT' data = { group_name: { 'standalone_network_plugin_gateway': gateway, 'standalone_network_plugin_mask': '25', }, } if vers == 4: data[group_name]['network_plugin_ipv4_enabled'] = True if vers == 6: data[group_name]['network_plugin_ipv4_enabled'] = False data[group_name]['network_plugin_ipv6_enabled'] = True with test_utils.create_temp_config_with_opts(data): self.assertRaises( exception.NetworkBadConfigurationException, plugin.StandaloneNetworkPlugin, config_group_name=group_name) def test_deallocate_network(self): share_server_id = 'fake_share_server_id' data = { 'DEFAULT': { 'standalone_network_plugin_gateway': '10.0.0.1', 'standalone_network_plugin_mask': '24', }, } fake_allocations = [{'id': 'fake1'}, {'id': 'fake2'}] with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin() self.mock_object( instance.db, 'network_allocations_get_for_share_server', mock.Mock(return_value=fake_allocations)) self.mock_object(instance.db, 'network_allocation_delete') instance.deallocate_network(fake_context, share_server_id) (instance.db.network_allocations_get_for_share_server. assert_called_once_with(fake_context, share_server_id)) (instance.db.network_allocation_delete. 
assert_has_calls([ mock.call(fake_context, 'fake1'), mock.call(fake_context, 'fake2'), ])) def test_allocate_network_zero_addresses_ipv4(self): data = { 'DEFAULT': { 'standalone_network_plugin_gateway': '10.0.0.1', 'standalone_network_plugin_mask': '24', }, } with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin() self.mock_object(instance.db, 'share_network_subnet_update') allocations = instance.allocate_network( fake_context, fake_share_server, fake_share_network, fake_share_network_subnet, count=0) self.assertEqual([], allocations) instance.db.share_network_subnet_update.assert_called_once_with( fake_context, fake_share_network_subnet['id'], dict(network_type=None, segmentation_id=None, cidr=six.text_type(instance.net.cidr), gateway=six.text_type(instance.gateway), ip_version=4, mtu=1500)) def test_allocate_network_zero_addresses_ipv6(self): data = { 'DEFAULT': { 'standalone_network_plugin_gateway': '2001:db8::0001', 'standalone_network_plugin_mask': '64', 'network_plugin_ipv6_enabled': True, }, } with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin() self.mock_object(instance.db, 'share_network_subnet_update') allocations = instance.allocate_network( fake_context, fake_share_server, fake_share_network, fake_share_network_subnet, count=0) self.assertEqual([], allocations) instance.db.share_network_subnet_update.assert_called_once_with( fake_context, fake_share_network_subnet['id'], dict(network_type=None, segmentation_id=None, cidr=six.text_type(instance.net.cidr), gateway=six.text_type(instance.gateway), ip_version=6, mtu=1500)) def test_allocate_network_one_ip_address_ipv4_no_usages_exist(self): data = { 'DEFAULT': { 'standalone_network_plugin_network_type': 'vlan', 'standalone_network_plugin_segmentation_id': 1003, 'standalone_network_plugin_gateway': '10.0.0.1', 'standalone_network_plugin_mask': '24', }, } with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin() self.mock_object(instance.db, 'share_network_subnet_update') self.mock_object(instance.db, 'network_allocation_create') self.mock_object( instance.db, 'network_allocations_get_by_ip_address', mock.Mock(return_value=[])) allocations = instance.allocate_network( fake_context, fake_share_server, fake_share_network, fake_share_network_subnet) self.assertEqual(1, len(allocations)) na_data = { 'network_type': 'vlan', 'segmentation_id': 1003, 'cidr': '10.0.0.0/24', 'gateway': '10.0.0.1', 'ip_version': 4, 'mtu': 1500, } instance.db.share_network_subnet_update.assert_called_once_with( fake_context, fake_share_network_subnet['id'], na_data) instance.db.network_allocations_get_by_ip_address.assert_has_calls( [mock.call(fake_context, '10.0.0.2')]) instance.db.network_allocation_create.assert_called_once_with( fake_context, dict(share_server_id=fake_share_server['id'], ip_address='10.0.0.2', status=constants.STATUS_ACTIVE, label='user', **na_data)) def test_allocate_network_two_ip_addresses_ipv4_two_usages_exist(self): ctxt = type('FakeCtxt', (object,), {'fake': ['10.0.0.2', '10.0.0.4']}) def fake_get_allocations_by_ip_address(context, ip_address): if ip_address not in context.fake: context.fake.append(ip_address) return [] else: return context.fake data = { 'DEFAULT': { 'standalone_network_plugin_gateway': '10.0.0.1', 'standalone_network_plugin_mask': '24', }, } with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin() self.mock_object(instance.db, 'share_network_subnet_update') 
self.mock_object(instance.db, 'network_allocation_create') self.mock_object( instance.db, 'network_allocations_get_by_ip_address', mock.Mock(side_effect=fake_get_allocations_by_ip_address)) allocations = instance.allocate_network( ctxt, fake_share_server, fake_share_network, fake_share_network_subnet, count=2) self.assertEqual(2, len(allocations)) na_data = { 'network_type': None, 'segmentation_id': None, 'cidr': six.text_type(instance.net.cidr), 'gateway': six.text_type(instance.gateway), 'ip_version': 4, 'mtu': 1500, } instance.db.share_network_subnet_update.assert_called_once_with( ctxt, fake_share_network_subnet['id'], dict(**na_data)) instance.db.network_allocations_get_by_ip_address.assert_has_calls( [mock.call(ctxt, '10.0.0.2'), mock.call(ctxt, '10.0.0.3'), mock.call(ctxt, '10.0.0.4'), mock.call(ctxt, '10.0.0.5')]) instance.db.network_allocation_create.assert_has_calls([ mock.call( ctxt, dict(share_server_id=fake_share_server['id'], ip_address='10.0.0.3', status=constants.STATUS_ACTIVE, label='user', **na_data)), mock.call( ctxt, dict(share_server_id=fake_share_server['id'], ip_address='10.0.0.5', status=constants.STATUS_ACTIVE, label='user', **na_data)), ]) def test_allocate_network_no_available_ipv4_addresses(self): data = { 'DEFAULT': { 'standalone_network_plugin_gateway': '10.0.0.1', 'standalone_network_plugin_mask': '30', }, } with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin() self.mock_object(instance.db, 'share_network_subnet_update') self.mock_object(instance.db, 'network_allocation_create') self.mock_object( instance.db, 'network_allocations_get_by_ip_address', mock.Mock(return_value=['not empty list'])) self.assertRaises( exception.NetworkBadConfigurationException, instance.allocate_network, fake_context, fake_share_server, fake_share_network, fake_share_network_subnet) instance.db.share_network_subnet_update.assert_called_once_with( fake_context, fake_share_network_subnet['id'], dict(network_type=None, segmentation_id=None, cidr=six.text_type(instance.net.cidr), gateway=six.text_type(instance.gateway), ip_version=4, mtu=1500)) instance.db.network_allocations_get_by_ip_address.assert_has_calls( [mock.call(fake_context, '10.0.0.2')]) def _setup_manage_network_allocations(self, label=None): data = { 'DEFAULT': { 'standalone_network_plugin_gateway': '192.168.0.1', 'standalone_network_plugin_mask': '24', }, } with test_utils.create_temp_config_with_opts(data): instance = plugin.StandaloneNetworkPlugin(label=label) return instance @ddt.data('admin', None) def test_manage_network_allocations(self, label): allocations = ['192.168.0.11', '192.168.0.12', 'fd12::2000'] instance = self._setup_manage_network_allocations(label=label) if not label: self.mock_object(instance, '_verify_share_network_subnet') self.mock_object(instance.db, 'share_network_subnet_update') self.mock_object(instance.db, 'network_allocation_create') result = instance.manage_network_allocations( fake_context, allocations, fake_share_server, fake_share_network, fake_share_network_subnet) self.assertEqual(['fd12::2000'], result) network_data = { 'network_type': instance.network_type, 'segmentation_id': instance.segmentation_id, 'cidr': six.text_type(instance.net.cidr), 'gateway': six.text_type(instance.gateway), 'ip_version': instance.ip_version, 'mtu': instance.mtu, } data_list = [{ 'share_server_id': fake_share_server['id'], 'ip_address': x, 'status': constants.STATUS_ACTIVE, 'label': instance.label, } for x in ['192.168.0.11', '192.168.0.12']] 
data_list[0].update(network_data) data_list[1].update(network_data) if not label: instance.db.share_network_subnet_update.assert_called_once_with( fake_context, fake_share_network_subnet['id'], network_data) instance._verify_share_network_subnet.assert_called_once_with( fake_share_server['id'], fake_share_network_subnet) instance.db.network_allocation_create.assert_has_calls([ mock.call(fake_context, data_list[0]), mock.call(fake_context, data_list[1]) ]) def test_unmanage_network_allocations(self): instance = self._setup_manage_network_allocations() self.mock_object(instance, 'deallocate_network') instance.unmanage_network_allocations('context', 'server_id') instance.deallocate_network.assert_called_once_with( 'context', 'server_id') manila-10.0.0/manila/tests/policy.json0000664000175000017500000001210313656750227017662 0ustar zuulzuul00000000000000{ "context_is_admin": "role:admin", "admin_api": "is_admin:True", "admin_or_owner": "is_admin:True or project_id:%(project_id)s", "default": "rule:admin_or_owner", "availability_zone:index": "rule:default", "quota_set:update": "rule:admin_api", "quota_set:show": "rule:default", "quota_set:delete": "rule:admin_api", "quota_class_set:show": "rule:default", "quota_class_set:update": "rule:admin_api", "service:index": "rule:admin_api", "service:update": "rule:admin_api", "share:create": "", "share:list_by_share_server_id": "rule:admin_api", "share:get": "", "share:get_all": "", "share:delete": "rule:default", "share:update": "rule:default", "share:snapshot_update": "", "share:create_snapshot": "", "share:delete_snapshot": "", "share:get_snapshot": "", "share:get_all_snapshots": "", "share:extend": "", "share:shrink": "", "share:manage": "rule:admin_api", "share:unmanage": "rule:admin_api", "share:force_delete": "rule:admin_api", "share:reset_status": "rule:admin_api", "share:migration_start": "rule:admin_api", "share:migration_complete": "rule:admin_api", "share:migration_cancel": "rule:admin_api", "share:migration_get_progress": "rule:admin_api", "share_export_location:index": "rule:default", "share_export_location:show": "rule:default", "share_type:index": "rule:default", "share_type:show": "rule:default", "share_type:default": "rule:default", "share_type:create": "rule:default", "share_type:delete": "rule:default", "share_type:add_project_access": "rule:admin_api", "share_type:list_project_access": "rule:admin_api", "share_type:remove_project_access": "rule:admin_api", "share_types_extra_spec:create": "rule:default", "share_types_extra_spec:update": "rule:default", "share_types_extra_spec:show": "rule:default", "share_types_extra_spec:index": "rule:default", "share_types_extra_spec:delete": "rule:default", "share_instance:index": "rule:admin_api", "share_instance:show": "rule:admin_api", "share_instance:force_delete": "rule:admin_api", "share_instance:reset_status": "rule:admin_api", "share_snapshot:force_delete": "rule:admin_api", "share_snapshot:reset_status": "rule:admin_api", "share_snapshot:manage_snapshot": "rule:admin_api", "share_snapshot:unmanage_snapshot": "rule:admin_api", "share_network:create": "", "share_network:index": "", "share_network:detail": "", "share_network:show": "", "share_network:update": "", "share_network:delete": "", "share_network:get_all_share_networks": "rule:admin_api", "share_server:index": "rule:admin_api", "share_server:show": "rule:admin_api", "share_server:details": "rule:admin_api", "share_server:delete": "rule:admin_api", "share:get_share_metadata": "", "share:delete_share_metadata": "", 
"share:update_share_metadata": "", "share_extension:availability_zones": "", "security_service:index": "", "security_service:get_all_security_services": "rule:admin_api", "scheduler_stats:pools:index": "rule:admin_api", "scheduler_stats:pools:detail": "rule:admin_api", "share_group:create" : "rule:default", "share_group:delete": "rule:default", "share_group:update": "rule:default", "share_group:get": "rule:default", "share_group:get_all": "rule:default", "share_group:force_delete": "rule:admin_api", "share_group:reset_status": "rule:admin_api", "share_group_snapshot:create" : "rule:default", "share_group_snapshot:delete": "rule:default", "share_group_snapshot:update" : "rule:default", "share_group_snapshot:get": "rule:default", "share_group_snapshot:get_all": "rule:default", "share_group_snapshot:force_delete": "rule:admin_api", "share_group_snapshot:reset_status": "rule:admin_api", "share_replica:get_all": "rule:default", "share_replica:show": "rule:default", "share_replica:create" : "rule:default", "share_replica:delete": "rule:default", "share_replica:promote": "rule:default", "share_replica:resync": "rule:admin_api", "share_replica:reset_status": "rule:admin_api", "share_replica:force_delete": "rule:admin_api", "share_replica:reset_replica_state": "rule:admin_api", "share_group_type:index": "rule:default", "share_group_type:show": "rule:default", "share_group_type:default": "rule:default", "share_group_type:create": "rule:admin_api", "share_group_type:delete": "rule:admin_api", "share_group_type:add_project_access": "rule:admin_api", "share_group_type:list_project_access": "rule:admin_api", "share_group_type:remove_project_access": "rule:admin_api", "share_group_types_spec:create": "rule:admin_api", "share_group_types_spec:update": "rule:admin_api", "share_group_types_spec:show": "rule:admin_api", "share_group_types_spec:index": "rule:admin_api", "share_group_types_spec:delete": "rule:admin_api", "message:delete": "rule:default", "message:get": "rule:default", "message:get_all": "rule:default" } manila-10.0.0/manila/tests/message/0000775000175000017500000000000013656750362017117 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/message/__init__.py0000664000175000017500000000000013656750227021216 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/message/test_api.py0000664000175000017500000000776613656750227021321 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime from unittest import mock from oslo_config import cfg from oslo_utils import timeutils from manila import context from manila.message import api as message_api from manila.message.message_field import Action as MsgAction from manila.message.message_field import Detail as MsgDetail from manila.message import message_levels from manila import test CONF = cfg.CONF class MessageApiTest(test.TestCase): def setUp(self): super(MessageApiTest, self).setUp() self.message_api = message_api.API() self.mock_object(self.message_api, 'db') self.ctxt = context.RequestContext('admin', 'fakeproject', True) self.ctxt.request_id = 'fakerequestid' @mock.patch.object(timeutils, 'utcnow') def test_create(self, mock_utcnow): CONF.set_override('message_ttl', 300) now = datetime.datetime.utcnow() mock_utcnow.return_value = now expected_expires_at = now + datetime.timedelta( seconds=300) expected_message_record = { 'project_id': 'fakeproject', 'request_id': 'fakerequestid', 'resource_type': 'fake_resource_type', 'resource_id': None, 'action_id': MsgAction.ALLOCATE_HOST[0], 'detail_id': MsgDetail.NO_VALID_HOST[0], 'message_level': message_levels.ERROR, 'expires_at': expected_expires_at, } self.message_api.create(self.ctxt, MsgAction.ALLOCATE_HOST, "fakeproject", detail=MsgDetail.NO_VALID_HOST, resource_type="fake_resource_type") self.message_api.db.message_create.assert_called_once_with( self.ctxt, expected_message_record) def test_create_swallows_exception(self): self.mock_object(self.message_api.db, 'message_create', mock.Mock(side_effect=Exception())) exception_log = self.mock_object(message_api.LOG, 'exception') self.message_api.create(self.ctxt, MsgAction.ALLOCATE_HOST, 'fakeproject', 'fake_resource') self.message_api.db.message_create.assert_called_once_with( self.ctxt, mock.ANY) exception_log.assert_called_once_with( 'Failed to create message record for request_id %s', self.ctxt.request_id) def test_get(self): self.message_api.get(self.ctxt, 'fake_id') self.message_api.db.message_get.assert_called_once_with(self.ctxt, 'fake_id') def test_get_all(self): self.message_api.get_all(self.ctxt) self.message_api.db.message_get_all.assert_called_once_with( self.ctxt, filters={}, limit=None, offset=None, sort_dir=None, sort_key=None) def test_delete(self): self.message_api.delete(self.ctxt, 'fake_id') self.message_api.db.message_destroy.assert_called_once_with( self.ctxt, 'fake_id') def test_cleanup_expired_messages(self): admin_context = mock.Mock() self.mock_object(self.ctxt, 'elevated', mock.Mock(return_value=admin_context)) self.message_api.cleanup_expired_messages(self.ctxt) self.message_api.db.cleanup_expired_messages.assert_called_once_with( admin_context) manila-10.0.0/manila/tests/message/test_message_field.py0000664000175000017500000000537413656750227023330 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
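# NOTE: The tests below cover the manila.message.message_field helpers:
# uniqueness of Action and Detail IDs, and the translate_action,
# translate_detail and translate_detail_id lookups, including their
# fallbacks ('unknown action' / 'An unknown error occurred.') for
# unrecognized IDs.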
import ddt from oslo_config import cfg from manila import exception from manila.message import message_field from manila import test CONF = cfg.CONF @ddt.ddt class MessageFieldTest(test.TestCase): @ddt.data(message_field.Action, message_field.Detail) def test_unique_ids(self, cls): """Assert that no action or detail id is duplicated.""" ids = [name[0] for name in cls.ALL] self.assertEqual(len(ids), len(set(ids))) @ddt.data({'id': '001', 'content': 'allocate host'}, {'id': 'invalid', 'content': None}) @ddt.unpack def test_translate_action(self, id, content): result = message_field.translate_action(id) if content is None: content = 'unknown action' self.assertEqual(content, result) @ddt.data({'id': '001', 'content': 'An unknown error occurred.'}, {'id': '002', 'content': 'No storage could be allocated for this share ' 'request. Trying again with a different size or ' 'share type may succeed.'}, {'id': 'invalid', 'content': None}) @ddt.unpack def test_translate_detail(self, id, content): result = message_field.translate_detail(id) if content is None: content = 'An unknown error occurred.' self.assertEqual(content, result) @ddt.data({'exception': exception.NoValidHost(reason='fake reason'), 'detail': '', 'expected': '002'}, {'exception': exception.NoValidHost( detail_data={'last_filter': 'CapacityFilter'}, reason='fake reason'), 'detail': '', 'expected': '009'}, {'exception': exception.NoValidHost( detail_data={'last_filter': 'FakeFilter'}, reason='fake reason'), 'detail': '', 'expected': '002'}, {'exception': None, 'detail': message_field.Detail.NO_VALID_HOST, 'expected': '002'}) @ddt.unpack def test_translate_detail_id(self, exception, detail, expected): result = message_field.translate_detail_id(exception, detail) self.assertEqual(expected, result) manila-10.0.0/manila/tests/db/0000775000175000017500000000000013656750362016060 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/db/__init__.py0000664000175000017500000000000013656750227020157 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/db/migrations/0000775000175000017500000000000013656750362020234 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/db/migrations/__init__.py0000664000175000017500000000000013656750227022333 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/db/migrations/test_utils.py0000664000175000017500000000176413656750227023015 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
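# NOTE: A single sanity check that manila.db.migrations.utils.load_table
# can reflect an existing table ('shares') from the current engine and
# returns it with the expected name.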
from manila.db.migrations import utils from manila.db.sqlalchemy import api from manila import test class MigrationUtilsTestCase(test.TestCase): def test_load_table(self): connection = api.get_engine() table_name = 'shares' actual_result = utils.load_table(table_name, connection) self.assertIsNotNone(actual_result) self.assertEqual(table_name, actual_result.name) manila-10.0.0/manila/tests/db/migrations/alembic/0000775000175000017500000000000013656750362021630 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/db/migrations/alembic/__init__.py0000664000175000017500000000000013656750227023727 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/db/migrations/alembic/test_migration.py0000664000175000017500000001657413656750227025247 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack, LLC # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for database migrations. """ from unittest import mock from alembic import script from oslo_db.sqlalchemy import test_base from oslo_db.sqlalchemy import test_migrations from oslo_log import log from sqlalchemy.sql import text from manila.db.migrations.alembic import migration from manila.tests.db.migrations.alembic import migrations_data_checks from manila.tests import utils as test_utils LOG = log.getLogger('manila.tests.test_migrations') class ManilaMigrationsCheckers(test_migrations.WalkVersionsMixin, migrations_data_checks.DbMigrationsData): """Test alembic migrations.""" @property def snake_walk(self): return True @property def downgrade(self): return True @property def INIT_VERSION(self): pass @property def REPOSITORY(self): pass @property def migration_api(self): return migration @property def migrate_engine(self): return self.engine def _walk_versions(self, snake_walk=False, downgrade=True): # Determine latest version script from the repo, then # upgrade from 1 through to the latest, with no data # in the databases. This just checks that the schema itself # upgrades successfully. # Place the database under version control alembic_cfg = migration._alembic_config() script_directory = script.ScriptDirectory.from_config(alembic_cfg) self.assertIsNone(self.migration_api.version()) versions = [ver for ver in script_directory.walk_revisions()] LOG.debug('latest version is %s', versions[0].revision) for version in reversed(versions): self._migrate_up(version.revision, with_data=True) if snake_walk: downgraded = self._migrate_down( version, with_data=True) if downgraded: self._migrate_up(version.revision) if downgrade: for version in versions: downgraded = self._migrate_down(version) if snake_walk and downgraded: self._migrate_up(version.revision) self._migrate_down(version) def _migrate_down(self, version, with_data=False): try: self.migration_api.downgrade(version.down_revision) except NotImplementedError: # NOTE(sirp): some migrations, namely release-level # migrations, don't support a downgrade. 
return False self.assertEqual(version.down_revision, self.migration_api.version()) if with_data: post_downgrade = getattr( self, "_post_downgrade_%s" % version.revision, None) if post_downgrade: post_downgrade(self.engine) return True def _migrate_up(self, version, with_data=False): """migrate up to a new version of the db. We allow for data insertion and post checks at every migration version with special _pre_upgrade_### and _check_### functions in the main test. """ # NOTE(sdague): try block is here because it's impossible to debug # where a failed data migration happens otherwise try: if with_data: data = None pre_upgrade = getattr( self, "_pre_upgrade_%s" % version, None) if pre_upgrade: data = pre_upgrade(self.engine) self.migration_api.upgrade(version) self.assertEqual(version, self.migration_api.version()) if with_data: check = getattr(self, "_check_%s" % version, None) if check: check(self.engine, data) except Exception as e: LOG.error("Failed to migrate to version %(version)s on engine " "%(engine)s. Exception while running the migration: " "%(exception)s", {'version': version, 'engine': self.engine, 'exception': e}) raise # NOTE(vponomaryov): set 10 minutes timeout for case of running it on # very slow nodes/VMs. Note, that this test becomes slower with each # addition of new DB migration. On fast nodes it can take about 5-10 secs # having Mitaka set of migrations. # 'pymysql' works much slower on slow nodes than 'psycopg2'. And such # timeout mostly required for testing of 'mysql' backend. @test_utils.set_timeout(600) def test_walk_versions(self): """Walks all version scripts for each tested database. While walking, ensure that there are no errors in the version scripts for each engine. """ with mock.patch('manila.db.sqlalchemy.api.get_engine', return_value=self.engine): self._walk_versions(snake_walk=self.snake_walk, downgrade=self.downgrade) def test_single_branch(self): alembic_cfg = migration._alembic_config() script_directory = script.ScriptDirectory.from_config(alembic_cfg) actual_result = script_directory.get_heads() self.assertEqual(1, len(actual_result), "Db migrations should have only one branch.") class TestManilaMigrationsMySQL(ManilaMigrationsCheckers, test_base.MySQLOpportunisticTestCase): """Run migration tests on MySQL backend.""" @test_utils.set_timeout(300) def test_mysql_innodb(self): """Test that table creation on mysql only builds InnoDB tables.""" with mock.patch('manila.db.sqlalchemy.api.get_engine', return_value=self.engine): self._walk_versions(snake_walk=False, downgrade=False) # sanity check sanity_check = """SELECT count(*) FROM information_schema.tables WHERE table_schema = :database;""" total = self.engine.execute( text(sanity_check), database=self.engine.url.database) self.assertGreater(total.scalar(), 0, "No tables found. Wrong schema?") noninnodb_query = """ SELECT count(*) FROM information_schema.TABLES WHERE table_schema = :database AND engine != 'InnoDB' AND table_name != 'alembic_version';""" count = self.engine.execute( text(noninnodb_query), database=self.engine.url.database ).scalar() self.assertEqual(0, count, "%d non InnoDB tables created" % count) class TestManilaMigrationsPostgreSQL( ManilaMigrationsCheckers, test_base.PostgreSQLOpportunisticTestCase): """Run migration tests on PostgreSQL backend.""" manila-10.0.0/manila/tests/db/migrations/alembic/migrations_data_checks.py0000664000175000017500000034115313656750227026676 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests data for database migrations. All database migrations with data manipulation (like moving data from column to the table) should have data check class: @map_to_migration('1f0bd302c1a6') # Revision of checked db migration class FooMigrationChecks(BaseMigrationChecks): def setup_upgrade_data(self, engine): ... def check_upgrade(self, engine, data): ... def check_downgrade(self, engine): ... See BaseMigrationChecks class for more information. """ import abc import copy import datetime from oslo_db import exception as oslo_db_exc from oslo_utils import uuidutils import six from sqlalchemy import exc as sa_exc from manila.common import constants from manila.db.migrations import utils class DbMigrationsData(object): migration_mappings = {} methods_mapping = { 'pre': 'setup_upgrade_data', 'check': 'check_upgrade', 'post': 'check_downgrade', } def __getattr__(self, item): parts = item.split('_') is_mapping_method = ( len(parts) > 2 and parts[0] == '' and parts[1] in self.methods_mapping ) if not is_mapping_method: return super(DbMigrationsData, self).__getattribute__(item) check_obj = self.migration_mappings.get(parts[-1], None) if check_obj is None: raise AttributeError check_obj.set_test_case(self) return getattr(check_obj, self.methods_mapping.get(parts[1])) def map_to_migration(revision): def decorator(cls): DbMigrationsData.migration_mappings[revision] = cls() return cls return decorator class BaseMigrationChecks(object): six.add_metaclass(abc.ABCMeta) def __init__(self): self.test_case = None def set_test_case(self, test_case): self.test_case = test_case @abc.abstractmethod def setup_upgrade_data(self, engine): """This method should be used to insert test data for migration. :param engine: SQLAlchemy engine :return: any data which will be passed to 'check_upgrade' as 'data' arg """ @abc.abstractmethod def check_upgrade(self, engine, data): """This method should be used to do assertions after upgrade method. To perform assertions use 'self.test_case' instance property: self.test_case.assertTrue(True) :param engine: SQLAlchemy engine :param data: data returned by 'setup_upgrade_data' """ @abc.abstractmethod def check_downgrade(self, engine): """This method should be used to do assertions after downgrade method. 
To perform assertions use 'self.test_case' instance property: self.test_case.assertTrue(True) :param engine: SQLAlchemy engine """ def fake_share(**kwargs): share = { 'id': uuidutils.generate_uuid(), 'display_name': 'fake_share', 'display_description': 'my fake share', 'snapshot_id': uuidutils.generate_uuid(), 'is_public': True, 'size': 1, 'deleted': 'False', 'share_proto': 'fake_proto', 'user_id': uuidutils.generate_uuid(), 'project_id': uuidutils.generate_uuid(), 'snapshot_support': True, 'task_state': None, } share.update(kwargs) return share def fake_instance(share_id=None, **kwargs): instance = { 'id': uuidutils.generate_uuid(), 'share_id': share_id or uuidutils.generate_uuid(), 'deleted': 'False', 'host': 'openstack@BackendZ#PoolA', 'status': 'available', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'access_rules_status': 'active', } instance.update(kwargs) return instance @map_to_migration('38e632621e5a') class ShareTypeMigrationChecks(BaseMigrationChecks): def _get_fake_data(self): extra_specs = [] self.share_type_ids = [] volume_types = [ { 'id': uuidutils.generate_uuid(), 'deleted': 'False', 'name': 'vol-type-A', }, { 'id': uuidutils.generate_uuid(), 'deleted': 'False', 'name': 'vol-type-B', }, ] for idx, volume_type in enumerate(volume_types): extra_specs.append({ 'volume_type_id': volume_type['id'], 'key': 'foo', 'value': 'bar%s' % idx, 'deleted': False, }) extra_specs.append({ 'volume_type_id': volume_type['id'], 'key': 'xyzzy', 'value': 'spoon_%s' % idx, 'deleted': False, }) self.share_type_ids.append(volume_type['id']) return volume_types, extra_specs def setup_upgrade_data(self, engine): (self.volume_types, self.extra_specs) = self._get_fake_data() volume_types_table = utils.load_table('volume_types', engine) engine.execute(volume_types_table.insert(self.volume_types)) extra_specs_table = utils.load_table('volume_type_extra_specs', engine) engine.execute(extra_specs_table.insert(self.extra_specs)) def check_upgrade(self, engine, data): # Verify table transformations share_types_table = utils.load_table('share_types', engine) share_types_specs_table = utils.load_table( 'share_type_extra_specs', engine) self.test_case.assertRaises(sa_exc.NoSuchTableError, utils.load_table, 'volume_types', engine) self.test_case.assertRaises(sa_exc.NoSuchTableError, utils.load_table, 'volume_type_extra_specs', engine) # Verify presence of data share_type_ids = [ st['id'] for st in engine.execute(share_types_table.select()) if st['id'] in self.share_type_ids ] self.test_case.assertEqual(sorted(self.share_type_ids), sorted(share_type_ids)) extra_specs = [ {'type': es['share_type_id'], 'key': es['spec_key']} for es in engine.execute(share_types_specs_table.select()) if es['share_type_id'] in self.share_type_ids ] self.test_case.assertEqual(4, len(extra_specs)) def check_downgrade(self, engine): # Verify table transformations volume_types_table = utils.load_table('volume_types', engine) volume_types_specs_table = utils.load_table( 'volume_type_extra_specs', engine) self.test_case.assertRaises(sa_exc.NoSuchTableError, utils.load_table, 'share_types', engine) self.test_case.assertRaises(sa_exc.NoSuchTableError, utils.load_table, 'share_type_extra_specs', engine) # Verify presence of data volume_type_ids = [ vt['id'] for vt in engine.execute(volume_types_table.select()) if vt['id'] in self.share_type_ids ] self.test_case.assertEqual(sorted(self.share_type_ids), sorted(volume_type_ids)) extra_specs = [ 
{'type': es['volume_type_id'], 'key': es['key']} for es in engine.execute(volume_types_specs_table.select()) if es['volume_type_id'] in self.share_type_ids ] self.test_case.assertEqual(4, len(extra_specs)) @map_to_migration('5077ffcc5f1c') class ShareInstanceMigrationChecks(BaseMigrationChecks): def _prepare_fake_data(self): time = datetime.datetime(2017, 1, 12, 12, 12, 12) self.share = { 'id': uuidutils.generate_uuid(), 'host': 'fake_host', 'status': 'fake_status', 'scheduled_at': time, 'launched_at': time, 'terminated_at': time, 'availability_zone': 'fake_az'} self.share_snapshot = { 'id': uuidutils.generate_uuid(), 'status': 'fake_status', 'share_id': self.share['id'], 'progress': 'fake_progress'} self.share_export_location = { 'id': 1001, 'share_id': self.share['id']} def setup_upgrade_data(self, engine): self._prepare_fake_data() share_table = utils.load_table('shares', engine) engine.execute(share_table.insert(self.share)) snapshot_table = utils.load_table('share_snapshots', engine) engine.execute(snapshot_table.insert(self.share_snapshot)) el_table = utils.load_table('share_export_locations', engine) engine.execute(el_table.insert(self.share_export_location)) def check_upgrade(self, engine, data): share_table = utils.load_table('shares', engine) s_instance_table = utils.load_table('share_instances', engine) ss_instance_table = utils.load_table('share_snapshot_instances', engine) snapshot_table = utils.load_table('share_snapshots', engine) instance_el_table = utils.load_table('share_instance_export_locations', engine) # Check shares table for column in ['host', 'status', 'scheduled_at', 'launched_at', 'terminated_at', 'share_network_id', 'share_server_id', 'availability_zone']: rows = engine.execute(share_table.select()) for row in rows: self.test_case.assertFalse(hasattr(row, column)) # Check share instance table s_instance_record = engine.execute(s_instance_table.select().where( s_instance_table.c.share_id == self.share['id'])).first() self.test_case.assertTrue(s_instance_record is not None) for column in ['host', 'status', 'scheduled_at', 'launched_at', 'terminated_at', 'availability_zone']: self.test_case.assertEqual(self.share[column], s_instance_record[column]) # Check snapshot table for column in ['status', 'progress']: rows = engine.execute(snapshot_table.select()) for row in rows: self.test_case.assertFalse(hasattr(row, column)) # Check snapshot instance table ss_instance_record = engine.execute(ss_instance_table.select().where( ss_instance_table.c.snapshot_id == self.share_snapshot['id']) ).first() self.test_case.assertEqual(s_instance_record['id'], ss_instance_record['share_instance_id']) for column in ['status', 'progress']: self.test_case.assertEqual(self.share_snapshot[column], ss_instance_record[column]) # Check share export location table self.test_case.assertRaises( sa_exc.NoSuchTableError, utils.load_table, 'share_export_locations', engine) # Check share instance export location table el_record = engine.execute(instance_el_table.select().where( instance_el_table.c.share_instance_id == s_instance_record['id']) ).first() self.test_case.assertFalse(el_record is None) self.test_case.assertTrue(hasattr(el_record, 'share_instance_id')) self.test_case.assertFalse(hasattr(el_record, 'share_id')) def check_downgrade(self, engine): self.test_case.assertRaises( sa_exc.NoSuchTableError, utils.load_table, 'share_snapshot_instances', engine) self.test_case.assertRaises( sa_exc.NoSuchTableError, utils.load_table, 'share_instances', engine) self.test_case.assertRaises( 
sa_exc.NoSuchTableError, utils.load_table, 'share_instance_export_locations', engine) share_table = utils.load_table('shares', engine) snapshot_table = utils.load_table('share_snapshots', engine) share_el_table = utils.load_table('share_export_locations', engine) for column in ['host', 'status', 'scheduled_at', 'launched_at', 'terminated_at', 'share_network_id', 'share_server_id', 'availability_zone']: rows = engine.execute(share_table.select()) for row in rows: self.test_case.assertTrue(hasattr(row, column)) for column in ['status', 'progress']: rows = engine.execute(snapshot_table.select()) for row in rows: self.test_case.assertTrue(hasattr(row, column)) rows = engine.execute(share_el_table.select()) for row in rows: self.test_case.assertFalse(hasattr(row, 'share_instance_id')) self.test_case.assertTrue( hasattr(row, 'share_id')) @map_to_migration('1f0bd302c1a6') class AvailabilityZoneMigrationChecks(BaseMigrationChecks): valid_az_names = ('az1', 'az2') def _get_service_data(self, options): base_dict = { 'binary': 'manila-share', 'topic': 'share', 'disabled': False, 'report_count': '100', } base_dict.update(options) return base_dict def setup_upgrade_data(self, engine): service_fixture = [ self._get_service_data( {'deleted': 0, 'host': 'fake1', 'availability_zone': 'az1'} ), self._get_service_data( {'deleted': 0, 'host': 'fake2', 'availability_zone': 'az1'} ), self._get_service_data( {'deleted': 1, 'host': 'fake3', 'availability_zone': 'az2'} ), ] services_table = utils.load_table('services', engine) for fixture in service_fixture: engine.execute(services_table.insert(fixture)) def check_upgrade(self, engine, _): az_table = utils.load_table('availability_zones', engine) for az in engine.execute(az_table.select()): self.test_case.assertTrue(uuidutils.is_uuid_like(az.id)) self.test_case.assertIn(az.name, self.valid_az_names) self.test_case.assertEqual('False', az.deleted) services_table = utils.load_table('services', engine) for service in engine.execute(services_table.select()): self.test_case.assertTrue( uuidutils.is_uuid_like(service.availability_zone_id) ) def check_downgrade(self, engine): services_table = utils.load_table('services', engine) for service in engine.execute(services_table.select()): self.test_case.assertIn( service.availability_zone, self.valid_az_names ) @map_to_migration('dda6de06349') class ShareInstanceExportLocationMetadataChecks(BaseMigrationChecks): el_table_name = 'share_instance_export_locations' elm_table_name = 'share_instance_export_locations_metadata' def setup_upgrade_data(self, engine): # Setup shares share_fixture = [{'id': 'foo_share_id'}, {'id': 'bar_share_id'}] share_table = utils.load_table('shares', engine) for fixture in share_fixture: engine.execute(share_table.insert(fixture)) # Setup share instances si_fixture = [ {'id': 'foo_share_instance_id_oof', 'share_id': share_fixture[0]['id']}, {'id': 'bar_share_instance_id_rab', 'share_id': share_fixture[1]['id']}, ] si_table = utils.load_table('share_instances', engine) for fixture in si_fixture: engine.execute(si_table.insert(fixture)) # Setup export locations el_fixture = [ {'id': 1, 'path': '/1', 'share_instance_id': si_fixture[0]['id']}, {'id': 2, 'path': '/2', 'share_instance_id': si_fixture[1]['id']}, ] el_table = utils.load_table(self.el_table_name, engine) for fixture in el_fixture: engine.execute(el_table.insert(fixture)) def check_upgrade(self, engine, data): el_table = utils.load_table( 'share_instance_export_locations', engine) for el in engine.execute(el_table.select()): 
self.test_case.assertTrue(hasattr(el, 'is_admin_only')) self.test_case.assertTrue(hasattr(el, 'uuid')) self.test_case.assertEqual(False, el.is_admin_only) self.test_case.assertTrue(uuidutils.is_uuid_like(el.uuid)) # Write export location metadata el_metadata = [ {'key': 'foo_key', 'value': 'foo_value', 'export_location_id': 1}, {'key': 'bar_key', 'value': 'bar_value', 'export_location_id': 2}, ] elm_table = utils.load_table(self.elm_table_name, engine) engine.execute(elm_table.insert(el_metadata)) # Verify values of written metadata for el_meta_datum in el_metadata: el_id = el_meta_datum['export_location_id'] records = engine.execute(elm_table.select().where( elm_table.c.export_location_id == el_id)) self.test_case.assertEqual(1, records.rowcount) record = records.first() expected_keys = ( 'id', 'created_at', 'updated_at', 'deleted_at', 'deleted', 'export_location_id', 'key', 'value', ) self.test_case.assertEqual(len(expected_keys), len(record.keys())) for key in expected_keys: self.test_case.assertIn(key, record.keys()) for k, v in el_meta_datum.items(): self.test_case.assertTrue(hasattr(record, k)) self.test_case.assertEqual(v, getattr(record, k)) def check_downgrade(self, engine): el_table = utils.load_table( 'share_instance_export_locations', engine) for el in engine.execute(el_table.select()): self.test_case.assertFalse(hasattr(el, 'is_admin_only')) self.test_case.assertFalse(hasattr(el, 'uuid')) self.test_case.assertRaises( sa_exc.NoSuchTableError, utils.load_table, self.elm_table_name, engine) @map_to_migration('344c1ac4747f') class AccessRulesStatusMigrationChecks(BaseMigrationChecks): def _get_instance_data(self, data): base_dict = {} base_dict.update(data) return base_dict def setup_upgrade_data(self, engine): share_table = utils.load_table('shares', engine) share = { 'id': 1, 'share_proto': "NFS", 'size': 0, 'snapshot_id': None, 'user_id': 'fake', 'project_id': 'fake', } engine.execute(share_table.insert(share)) rules1 = [ {'id': 'r1', 'share_instance_id': 1, 'state': 'active', 'deleted': 'False'}, {'id': 'r2', 'share_instance_id': 1, 'state': 'active', 'deleted': 'False'}, {'id': 'r3', 'share_instance_id': 1, 'state': 'deleting', 'deleted': 'False'}, ] rules2 = [ {'id': 'r4', 'share_instance_id': 2, 'state': 'active', 'deleted': 'False'}, {'id': 'r5', 'share_instance_id': 2, 'state': 'error', 'deleted': 'False'}, ] rules3 = [ {'id': 'r6', 'share_instance_id': 3, 'state': 'new', 'deleted': 'False'}, ] instance_fixtures = [ {'id': 1, 'deleted': 'False', 'host': 'fake1', 'share_id': 1, 'status': 'available', 'rules': rules1}, {'id': 2, 'deleted': 'False', 'host': 'fake2', 'share_id': 1, 'status': 'available', 'rules': rules2}, {'id': 3, 'deleted': 'False', 'host': 'fake3', 'share_id': 1, 'status': 'available', 'rules': rules3}, {'id': 4, 'deleted': 'False', 'host': 'fake4', 'share_id': 1, 'status': 'deleting', 'rules': []}, ] share_instances_table = utils.load_table('share_instances', engine) share_instances_rules_table = utils.load_table( 'share_instance_access_map', engine) for fixture in instance_fixtures: rules = fixture.pop('rules') engine.execute(share_instances_table.insert(fixture)) for rule in rules: engine.execute(share_instances_rules_table.insert(rule)) def check_upgrade(self, engine, _): instances_table = utils.load_table('share_instances', engine) valid_statuses = { '1': 'active', '2': 'error', '3': 'out_of_sync', '4': None, } instances = engine.execute(instances_table.select().where( instances_table.c.id in valid_statuses.keys())) for instance in instances: 
self.test_case.assertEqual(valid_statuses[instance['id']], instance['access_rules_status']) def check_downgrade(self, engine): share_instances_rules_table = utils.load_table( 'share_instance_access_map', engine) share_instance_rules_to_check = engine.execute( share_instances_rules_table.select().where( share_instances_rules_table.c.id.in_(('1', '2', '3', '4')))) valid_statuses = { '1': 'active', '2': 'error', '3': 'error', '4': None, } for rule in share_instance_rules_to_check: valid_state = valid_statuses[rule['share_instance_id']] self.test_case.assertEqual(valid_state, rule['state']) @map_to_migration('293fac1130ca') class ShareReplicationMigrationChecks(BaseMigrationChecks): valid_share_display_names = ('FAKE_SHARE_1', 'FAKE_SHARE_2', 'FAKE_SHARE_3') valid_share_ids = [] valid_replication_types = ('writable', 'readable', 'dr') def _load_tables_and_get_data(self, engine): share_table = utils.load_table('shares', engine) share_instances_table = utils.load_table('share_instances', engine) shares = engine.execute( share_table.select().where(share_table.c.id.in_( self.valid_share_ids)) ).fetchall() share_instances = engine.execute(share_instances_table.select().where( share_instances_table.c.share_id.in_(self.valid_share_ids)) ).fetchall() return shares, share_instances def setup_upgrade_data(self, engine): shares_data = [] instances_data = [] self.valid_share_ids = [] for share_display_name in self.valid_share_display_names: share_ref = fake_share(display_name=share_display_name) shares_data.append(share_ref) instances_data.append(fake_instance(share_id=share_ref['id'])) shares_table = utils.load_table('shares', engine) for share in shares_data: self.valid_share_ids.append(share['id']) engine.execute(shares_table.insert(share)) shares_instances_table = utils.load_table('share_instances', engine) for share_instance in instances_data: engine.execute(shares_instances_table.insert(share_instance)) def check_upgrade(self, engine, _): shares, share_instances = self._load_tables_and_get_data(engine) share_ids = [share['id'] for share in shares] share_instance_share_ids = [share_instance['share_id'] for share_instance in share_instances] # Assert no data is lost for sid in self.valid_share_ids: self.test_case.assertIn(sid, share_ids) self.test_case.assertIn(sid, share_instance_share_ids) for share in shares: self.test_case.assertIn(share['display_name'], self.valid_share_display_names) self.test_case.assertEqual('False', share.deleted) self.test_case.assertTrue(hasattr(share, 'replication_type')) for share_instance in share_instances: self.test_case.assertTrue(hasattr(share_instance, 'replica_state')) def check_downgrade(self, engine): shares, share_instances = self._load_tables_and_get_data(engine) share_ids = [share['id'] for share in shares] share_instance_share_ids = [share_instance['share_id'] for share_instance in share_instances] # Assert no data is lost for sid in self.valid_share_ids: self.test_case.assertIn(sid, share_ids) self.test_case.assertIn(sid, share_instance_share_ids) for share in shares: self.test_case.assertEqual('False', share.deleted) self.test_case.assertIn(share.display_name, self.valid_share_display_names) self.test_case.assertFalse(hasattr(share, 'replication_type')) for share_instance in share_instances: self.test_case.assertEqual('False', share_instance.deleted) self.test_case.assertIn(share_instance.share_id, self.valid_share_ids) self.test_case.assertFalse( hasattr(share_instance, 'replica_state')) @map_to_migration('5155c7077f99') class 
NetworkAllocationsNewLabelColumnChecks(BaseMigrationChecks): table_name = 'network_allocations' ids = ['fake_network_allocation_id_%d' % i for i in (1, 2, 3)] def setup_upgrade_data(self, engine): user_id = 'user_id' project_id = 'project_id' share_server_id = 'foo_share_server_id' # Create share network share_network_data = { 'id': 'foo_share_network_id', 'user_id': user_id, 'project_id': project_id, } sn_table = utils.load_table('share_networks', engine) engine.execute(sn_table.insert(share_network_data)) # Create share server share_server_data = { 'id': share_server_id, 'share_network_id': share_network_data['id'], 'host': 'fake_host', 'status': 'active', } ss_table = utils.load_table('share_servers', engine) engine.execute(ss_table.insert(share_server_data)) # Create network allocations network_allocations = [ {'id': self.ids[0], 'share_server_id': share_server_id, 'ip_address': '1.1.1.1'}, {'id': self.ids[1], 'share_server_id': share_server_id, 'ip_address': '2.2.2.2'}, ] na_table = utils.load_table(self.table_name, engine) for network_allocation in network_allocations: engine.execute(na_table.insert(network_allocation)) def check_upgrade(self, engine, data): na_table = utils.load_table(self.table_name, engine) for na in engine.execute(na_table.select()): self.test_case.assertTrue(hasattr(na, 'label')) self.test_case.assertEqual(na.label, 'user') # Create admin network allocation network_allocations = [ {'id': self.ids[2], 'share_server_id': na.share_server_id, 'ip_address': '3.3.3.3', 'label': 'admin', 'network_type': 'vlan', 'segmentation_id': 1005, 'ip_version': 4, 'cidr': '240.0.0.0/16'}, ] engine.execute(na_table.insert(network_allocations)) # Select admin network allocations for na in engine.execute( na_table.select().where(na_table.c.label == 'admin')): self.test_case.assertTrue(hasattr(na, 'label')) self.test_case.assertEqual('admin', na.label) for col_name in ('network_type', 'segmentation_id', 'ip_version', 'cidr'): self.test_case.assertTrue(hasattr(na, col_name)) self.test_case.assertEqual( network_allocations[0][col_name], getattr(na, col_name)) def check_downgrade(self, engine): na_table = utils.load_table(self.table_name, engine) db_result = engine.execute(na_table.select()) self.test_case.assertTrue(db_result.rowcount >= len(self.ids)) for na in db_result: for col_name in ('label', 'network_type', 'segmentation_id', 'ip_version', 'cidr'): self.test_case.assertFalse(hasattr(na, col_name)) @map_to_migration('eb6d5544cbbd') class ShareSnapshotInstanceNewProviderLocationColumnChecks( BaseMigrationChecks): table_name = 'share_snapshot_instances' def setup_upgrade_data(self, engine): # Setup shares share_data = {'id': 'new_share_id'} s_table = utils.load_table('shares', engine) engine.execute(s_table.insert(share_data)) # Setup share instances share_instance_data = { 'id': 'new_share_instance_id', 'share_id': share_data['id'] } si_table = utils.load_table('share_instances', engine) engine.execute(si_table.insert(share_instance_data)) # Setup share snapshots share_snapshot_data = { 'id': 'new_snapshot_id', 'share_id': share_data['id']} snap_table = utils.load_table('share_snapshots', engine) engine.execute(snap_table.insert(share_snapshot_data)) # Setup snapshot instances snapshot_instance_data = { 'id': 'new_snapshot_instance_id', 'snapshot_id': share_snapshot_data['id'], 'share_instance_id': share_instance_data['id'] } snap_i_table = utils.load_table('share_snapshot_instances', engine) engine.execute(snap_i_table.insert(snapshot_instance_data)) def check_upgrade(self, 
engine, data): ss_table = utils.load_table(self.table_name, engine) db_result = engine.execute(ss_table.select().where( ss_table.c.id == 'new_snapshot_instance_id')) self.test_case.assertTrue(db_result.rowcount > 0) for ss in db_result: self.test_case.assertTrue(hasattr(ss, 'provider_location')) self.test_case.assertEqual('new_snapshot_id', ss.snapshot_id) def check_downgrade(self, engine): ss_table = utils.load_table(self.table_name, engine) db_result = engine.execute(ss_table.select().where( ss_table.c.id == 'new_snapshot_instance_id')) self.test_case.assertTrue(db_result.rowcount > 0) for ss in db_result: self.test_case.assertFalse(hasattr(ss, 'provider_location')) self.test_case.assertEqual('new_snapshot_id', ss.snapshot_id) @map_to_migration('221a83cfd85b') class ShareNetworksFieldLengthChecks(BaseMigrationChecks): def setup_upgrade_data(self, engine): user_id = '123456789123456789' project_id = 'project_id' # Create share network data share_network_data = { 'id': 'foo_share_network_id_2', 'user_id': user_id, 'project_id': project_id, } sn_table = utils.load_table('share_networks', engine) engine.execute(sn_table.insert(share_network_data)) # Create security_service data security_services_data = { 'id': 'foo_security_services_id', 'type': 'foo_type', 'project_id': project_id } ss_table = utils.load_table('security_services', engine) engine.execute(ss_table.insert(security_services_data)) def _check_length_for_table_columns(self, table_name, engine, cols, length): table = utils.load_table(table_name, engine) db_result = engine.execute(table.select()) self.test_case.assertTrue(db_result.rowcount > 0) for col in cols: self.test_case.assertEqual(table.columns.get(col).type.length, length) def check_upgrade(self, engine, data): self._check_length_for_table_columns('share_networks', engine, ('user_id', 'project_id'), 255) self._check_length_for_table_columns('security_services', engine, ('project_id',), 255) def check_downgrade(self, engine): self._check_length_for_table_columns('share_networks', engine, ('user_id', 'project_id'), 36) self._check_length_for_table_columns('security_services', engine, ('project_id',), 36) @map_to_migration('fdfb668d19e1') class NewGatewayColumnChecks(BaseMigrationChecks): na_table_name = 'network_allocations' sn_table_name = 'share_networks' na_ids = ['network_allocation_id_fake_%d' % i for i in (1, 2, 3)] sn_ids = ['share_network_id_fake_%d' % i for i in (1, 2)] def setup_upgrade_data(self, engine): user_id = 'user_id' project_id = 'project_id' share_server_id = 'share_server_id_foo' # Create share network share_network_data = { 'id': self.sn_ids[0], 'user_id': user_id, 'project_id': project_id, } sn_table = utils.load_table(self.sn_table_name, engine) engine.execute(sn_table.insert(share_network_data)) # Create share server share_server_data = { 'id': share_server_id, 'share_network_id': share_network_data['id'], 'host': 'fake_host', 'status': 'active', } ss_table = utils.load_table('share_servers', engine) engine.execute(ss_table.insert(share_server_data)) # Create network allocations network_allocations = [ { 'id': self.na_ids[0], 'share_server_id': share_server_id, 'ip_address': '1.1.1.1', }, { 'id': self.na_ids[1], 'share_server_id': share_server_id, 'ip_address': '2.2.2.2', }, ] na_table = utils.load_table(self.na_table_name, engine) engine.execute(na_table.insert(network_allocations)) def check_upgrade(self, engine, data): na_table = utils.load_table(self.na_table_name, engine) for na in engine.execute(na_table.select()): 
self.test_case.assertTrue(hasattr(na, 'gateway')) # Create network allocation network_allocations = [ { 'id': self.na_ids[2], 'share_server_id': na.share_server_id, 'ip_address': '3.3.3.3', 'gateway': '3.3.3.1', 'network_type': 'vlan', 'segmentation_id': 1005, 'ip_version': 4, 'cidr': '240.0.0.0/16', }, ] engine.execute(na_table.insert(network_allocations)) # Select network allocations with gateway info for na in engine.execute( na_table.select().where(na_table.c.gateway == '3.3.3.1')): self.test_case.assertTrue(hasattr(na, 'gateway')) self.test_case.assertEqual(network_allocations[0]['gateway'], getattr(na, 'gateway')) sn_table = utils.load_table(self.sn_table_name, engine) for sn in engine.execute(sn_table.select()): self.test_case.assertTrue(hasattr(sn, 'gateway')) # Create share network share_networks = [ { 'id': self.sn_ids[1], 'user_id': sn.user_id, 'project_id': sn.project_id, 'gateway': '1.1.1.1', 'name': 'name_foo', }, ] engine.execute(sn_table.insert(share_networks)) # Select share network for sn in engine.execute( sn_table.select().where(sn_table.c.name == 'name_foo')): self.test_case.assertTrue(hasattr(sn, 'gateway')) self.test_case.assertEqual(share_networks[0]['gateway'], getattr(sn, 'gateway')) def check_downgrade(self, engine): for table_name, ids in ((self.na_table_name, self.na_ids), (self.sn_table_name, self.sn_ids)): table = utils.load_table(table_name, engine) db_result = engine.execute(table.select()) self.test_case.assertTrue(db_result.rowcount >= len(ids)) for record in db_result: self.test_case.assertFalse(hasattr(record, 'gateway')) @map_to_migration('e8ea58723178') class RemoveHostFromDriverPrivateDataChecks(BaseMigrationChecks): table_name = 'drivers_private_data' host_column_name = 'host' def setup_upgrade_data(self, engine): dpd_data = { 'created_at': datetime.datetime(2016, 7, 14, 22, 31, 22), 'deleted': 0, 'host': 'host1', 'entity_uuid': 'entity_uuid1', 'key': 'key1', 'value': 'value1' } dpd_table = utils.load_table(self.table_name, engine) engine.execute(dpd_table.insert(dpd_data)) def check_upgrade(self, engine, data): dpd_table = utils.load_table(self.table_name, engine) rows = engine.execute(dpd_table.select()) for row in rows: self.test_case.assertFalse(hasattr(row, self.host_column_name)) def check_downgrade(self, engine): dpd_table = utils.load_table(self.table_name, engine) rows = engine.execute(dpd_table.select()) for row in rows: self.test_case.assertTrue(hasattr(row, self.host_column_name)) self.test_case.assertEqual('unknown', row[self.host_column_name]) @map_to_migration('493eaffd79e1') class NewMTUColumnChecks(BaseMigrationChecks): na_table_name = 'network_allocations' sn_table_name = 'share_networks' na_ids = ['network_allocation_id_fake_3_%d' % i for i in (1, 2, 3)] sn_ids = ['share_network_id_fake_3_%d' % i for i in (1, 2)] def setup_upgrade_data(self, engine): user_id = 'user_id' project_id = 'project_id' share_server_id = 'share_server_id_foo_2' # Create share network share_network_data = { 'id': self.sn_ids[0], 'user_id': user_id, 'project_id': project_id, } sn_table = utils.load_table(self.sn_table_name, engine) engine.execute(sn_table.insert(share_network_data)) # Create share server share_server_data = { 'id': share_server_id, 'share_network_id': share_network_data['id'], 'host': 'fake_host', 'status': 'active', } ss_table = utils.load_table('share_servers', engine) engine.execute(ss_table.insert(share_server_data)) # Create network allocations network_allocations = [ { 'id': self.na_ids[0], 'share_server_id': share_server_id, 
'ip_address': '1.1.1.1', }, { 'id': self.na_ids[1], 'share_server_id': share_server_id, 'ip_address': '2.2.2.2', }, ] na_table = utils.load_table(self.na_table_name, engine) engine.execute(na_table.insert(network_allocations)) def check_upgrade(self, engine, data): na_table = utils.load_table(self.na_table_name, engine) for na in engine.execute(na_table.select()): self.test_case.assertTrue(hasattr(na, 'mtu')) # Create network allocation network_allocations = [ { 'id': self.na_ids[2], 'share_server_id': na.share_server_id, 'ip_address': '3.3.3.3', 'gateway': '3.3.3.1', 'network_type': 'vlan', 'segmentation_id': 1005, 'ip_version': 4, 'cidr': '240.0.0.0/16', 'mtu': 1509, }, ] engine.execute(na_table.insert(network_allocations)) # Select network allocations with mtu info for na in engine.execute( na_table.select().where(na_table.c.mtu == '1509')): self.test_case.assertTrue(hasattr(na, 'mtu')) self.test_case.assertEqual(network_allocations[0]['mtu'], getattr(na, 'mtu')) # Select all entries and check for the value for na in engine.execute(na_table.select()): self.test_case.assertTrue(hasattr(na, 'mtu')) if na['id'] == self.na_ids[2]: self.test_case.assertEqual(network_allocations[0]['mtu'], getattr(na, 'mtu')) else: self.test_case.assertIsNone(na['mtu']) sn_table = utils.load_table(self.sn_table_name, engine) for sn in engine.execute(sn_table.select()): self.test_case.assertTrue(hasattr(sn, 'mtu')) # Create share network share_networks = [ { 'id': self.sn_ids[1], 'user_id': sn.user_id, 'project_id': sn.project_id, 'gateway': '1.1.1.1', 'name': 'name_foo_2', 'mtu': 1509, }, ] engine.execute(sn_table.insert(share_networks)) # Select share network with MTU set for sn in engine.execute( sn_table.select().where(sn_table.c.name == 'name_foo_2')): self.test_case.assertTrue(hasattr(sn, 'mtu')) self.test_case.assertEqual(share_networks[0]['mtu'], getattr(sn, 'mtu')) # Select all entries and check for the value for sn in engine.execute(sn_table.select()): self.test_case.assertTrue(hasattr(sn, 'mtu')) if sn['id'] == self.sn_ids[1]: self.test_case.assertEqual(network_allocations[0]['mtu'], getattr(sn, 'mtu')) else: self.test_case.assertIsNone(sn['mtu']) def check_downgrade(self, engine): for table_name, ids in ((self.na_table_name, self.na_ids), (self.sn_table_name, self.sn_ids)): table = utils.load_table(table_name, engine) db_result = engine.execute(table.select()) self.test_case.assertTrue(db_result.rowcount >= len(ids)) for record in db_result: self.test_case.assertFalse(hasattr(record, 'mtu')) @map_to_migration('63809d875e32') class AddAccessKeyToShareAccessMapping(BaseMigrationChecks): table_name = 'share_access_map' access_key_column_name = 'access_key' def setup_upgrade_data(self, engine): share_data = { 'id': uuidutils.generate_uuid(), 'share_proto': "CEPHFS", 'size': 1, 'snapshot_id': None, 'user_id': 'fake', 'project_id': 'fake' } share_table = utils.load_table('shares', engine) engine.execute(share_table.insert(share_data)) share_instance_data = { 'id': uuidutils.generate_uuid(), 'deleted': 'False', 'host': 'fake', 'share_id': share_data['id'], 'status': 'available', 'access_rules_status': 'active' } share_instance_table = utils.load_table('share_instances', engine) engine.execute(share_instance_table.insert(share_instance_data)) share_access_data = { 'id': uuidutils.generate_uuid(), 'share_id': share_data['id'], 'access_type': 'cephx', 'access_to': 'alice', 'deleted': 'False' } share_access_table = utils.load_table(self.table_name, engine) 
engine.execute(share_access_table.insert(share_access_data)) share_instance_access_data = { 'id': uuidutils.generate_uuid(), 'share_instance_id': share_instance_data['id'], 'access_id': share_access_data['id'], 'deleted': 'False' } share_instance_access_table = utils.load_table( 'share_instance_access_map', engine) engine.execute(share_instance_access_table.insert( share_instance_access_data)) def check_upgrade(self, engine, data): share_access_table = utils.load_table(self.table_name, engine) rows = engine.execute(share_access_table.select()) for row in rows: self.test_case.assertTrue(hasattr(row, self.access_key_column_name)) def check_downgrade(self, engine): share_access_table = utils.load_table(self.table_name, engine) rows = engine.execute(share_access_table.select()) for row in rows: self.test_case.assertFalse(hasattr(row, self.access_key_column_name)) @map_to_migration('48a7beae3117') class MoveShareTypeIdToInstancesCheck(BaseMigrationChecks): some_shares = [ { 'id': 's1', 'share_type_id': 't1', }, { 'id': 's2', 'share_type_id': 't2', }, { 'id': 's3', 'share_type_id': 't3', }, ] share_ids = [x['id'] for x in some_shares] some_instances = [ { 'id': 'i1', 'share_id': 's3', }, { 'id': 'i2', 'share_id': 's2', }, { 'id': 'i3', 'share_id': 's2', }, { 'id': 'i4', 'share_id': 's1', }, ] instance_ids = [x['id'] for x in some_instances] some_share_types = [ {'id': 't1'}, {'id': 't2'}, {'id': 't3'}, ] def setup_upgrade_data(self, engine): shares_table = utils.load_table('shares', engine) share_instances_table = utils.load_table('share_instances', engine) share_types_table = utils.load_table('share_types', engine) for stype in self.some_share_types: engine.execute(share_types_table.insert(stype)) for share in self.some_shares: engine.execute(shares_table.insert(share)) for instance in self.some_instances: engine.execute(share_instances_table.insert(instance)) def check_upgrade(self, engine, data): shares_table = utils.load_table('shares', engine) share_instances_table = utils.load_table('share_instances', engine) for instance in engine.execute(share_instances_table.select().where( share_instances_table.c.id in self.instance_ids)): share = engine.execute(shares_table.select().where( instance['share_id'] == shares_table.c.id)).first() self.test_case.assertEqual( next((x for x in self.some_shares if share['id'] == x['id']), None)['share_type_id'], instance['share_type_id']) for share in engine.execute(share_instances_table.select().where( shares_table.c.id in self.share_ids)): self.test_case.assertNotIn('share_type_id', share) def check_downgrade(self, engine): shares_table = utils.load_table('shares', engine) share_instances_table = utils.load_table('share_instances', engine) for instance in engine.execute(share_instances_table.select().where( share_instances_table.c.id in self.instance_ids)): self.test_case.assertNotIn('share_type_id', instance) for share in engine.execute(share_instances_table.select().where( shares_table.c.id in self.share_ids)): self.test_case.assertEqual( next((x for x in self.some_shares if share['id'] == x['id']), None)['share_type_id'], share['share_type_id']) @map_to_migration('3e7d62517afa') class CreateFromSnapshotExtraSpecAndShareColumn(BaseMigrationChecks): expected_attr = constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT snap_support_attr = constants.ExtraSpecs.SNAPSHOT_SUPPORT def _get_fake_data(self): extra_specs = [] shares = [] share_instances = [] share_types = [ { 'id': uuidutils.generate_uuid(), 'deleted': 'False', 'name': 'share-type-1', 'is_public': 
False, }, { 'id': uuidutils.generate_uuid(), 'deleted': 'False', 'name': 'share-type-2', 'is_public': True, }, ] snapshot_support = (False, True) dhss = ('True', 'False') for idx, share_type in enumerate(share_types): extra_specs.append({ 'share_type_id': share_type['id'], 'spec_key': 'snapshot_support', 'spec_value': snapshot_support[idx], 'deleted': 0, }) extra_specs.append({ 'share_type_id': share_type['id'], 'spec_key': 'driver_handles_share_servers', 'spec_value': dhss[idx], 'deleted': 0, }) share = fake_share(snapshot_support=snapshot_support[idx]) shares.append(share) share_instances.append( fake_instance(share_id=share['id'], share_type_id=share_type['id']) ) return share_types, extra_specs, shares, share_instances def setup_upgrade_data(self, engine): (self.share_types, self.extra_specs, self.shares, self.share_instances) = self._get_fake_data() share_types_table = utils.load_table('share_types', engine) engine.execute(share_types_table.insert(self.share_types)) extra_specs_table = utils.load_table('share_type_extra_specs', engine) engine.execute(extra_specs_table.insert(self.extra_specs)) shares_table = utils.load_table('shares', engine) engine.execute(shares_table.insert(self.shares)) share_instances_table = utils.load_table('share_instances', engine) engine.execute(share_instances_table.insert(self.share_instances)) def check_upgrade(self, engine, data): share_type_ids = [st['id'] for st in self.share_types] share_ids = [s['id'] for s in self.shares] shares_table = utils.load_table('shares', engine) share_types_table = utils.load_table('share_types', engine) extra_specs_table = utils.load_table('share_type_extra_specs', engine) # Pre-existing Shares must be present shares_in_db = engine.execute(shares_table.select()).fetchall() share_ids_in_db = [s['id'] for s in shares_in_db] self.test_case.assertTrue(len(share_ids_in_db) > 1) for share_id in share_ids: self.test_case.assertIn(share_id, share_ids_in_db) # new shares attr must match snapshot support for share in shares_in_db: self.test_case.assertTrue(hasattr(share, self.expected_attr)) self.test_case.assertEqual(share[self.snap_support_attr], share[self.expected_attr]) # Pre-existing Share types must be present share_types_in_db = ( engine.execute(share_types_table.select()).fetchall()) share_type_ids_in_db = [s['id'] for s in share_types_in_db] for share_type_id in share_type_ids: self.test_case.assertIn(share_type_id, share_type_ids_in_db) # Pre-existing extra specs must be present extra_specs_in_db = ( engine.execute(extra_specs_table.select().where( extra_specs_table.c.deleted == 0)).fetchall()) self.test_case.assertGreaterEqual(len(extra_specs_in_db), len(self.extra_specs)) # New Extra spec for share types must match snapshot support for share_type_id in share_type_ids: new_extra_spec = [x for x in extra_specs_in_db if x['spec_key'] == self.expected_attr and x['share_type_id'] == share_type_id] snapshot_support_spec = [ x for x in extra_specs_in_db if x['spec_key'] == self.snap_support_attr and x['share_type_id'] == share_type_id] self.test_case.assertEqual(1, len(new_extra_spec)) self.test_case.assertEqual(1, len(snapshot_support_spec)) self.test_case.assertEqual( snapshot_support_spec[0]['spec_value'], new_extra_spec[0]['spec_value']) def check_downgrade(self, engine): share_type_ids = [st['id'] for st in self.share_types] share_ids = [s['id'] for s in self.shares] shares_table = utils.load_table('shares', engine) share_types_table = utils.load_table('share_types', engine) extra_specs_table = 
utils.load_table('share_type_extra_specs', engine) # Pre-existing Shares must be present shares_in_db = engine.execute(shares_table.select()).fetchall() share_ids_in_db = [s['id'] for s in shares_in_db] self.test_case.assertTrue(len(share_ids_in_db) > 1) for share_id in share_ids: self.test_case.assertIn(share_id, share_ids_in_db) # Shares should have no attr to create share from snapshot for share in shares_in_db: self.test_case.assertFalse(hasattr(share, self.expected_attr)) # Pre-existing Share types must be present share_types_in_db = ( engine.execute(share_types_table.select()).fetchall()) share_type_ids_in_db = [s['id'] for s in share_types_in_db] for share_type_id in share_type_ids: self.test_case.assertIn(share_type_id, share_type_ids_in_db) # Pre-existing extra specs must be present extra_specs_in_db = ( engine.execute(extra_specs_table.select().where( extra_specs_table.c.deleted == 0)).fetchall()) self.test_case.assertGreaterEqual(len(extra_specs_in_db), len(self.extra_specs)) # Share types must not have create share from snapshot extra spec for share_type_id in share_type_ids: new_extra_spec = [x for x in extra_specs_in_db if x['spec_key'] == self.expected_attr and x['share_type_id'] == share_type_id] self.test_case.assertEqual(0, len(new_extra_spec)) @map_to_migration('87ce15c59bbe') class RevertToSnapshotShareColumn(BaseMigrationChecks): expected_attr = constants.ExtraSpecs.REVERT_TO_SNAPSHOT_SUPPORT def _get_fake_data(self): extra_specs = [] shares = [] share_instances = [] share_types = [ { 'id': uuidutils.generate_uuid(), 'deleted': 'False', 'name': 'revert-1', 'is_public': False, }, { 'id': uuidutils.generate_uuid(), 'deleted': 'False', 'name': 'revert-2', 'is_public': True, }, ] snapshot_support = (False, True) dhss = ('True', 'False') for idx, share_type in enumerate(share_types): extra_specs.append({ 'share_type_id': share_type['id'], 'spec_key': 'snapshot_support', 'spec_value': snapshot_support[idx], 'deleted': 0, }) extra_specs.append({ 'share_type_id': share_type['id'], 'spec_key': 'driver_handles_share_servers', 'spec_value': dhss[idx], 'deleted': 0, }) share = fake_share(snapshot_support=snapshot_support[idx]) shares.append(share) share_instances.append( fake_instance(share_id=share['id'], share_type_id=share_type['id']) ) return share_types, extra_specs, shares, share_instances def setup_upgrade_data(self, engine): (self.share_types, self.extra_specs, self.shares, self.share_instances) = self._get_fake_data() share_types_table = utils.load_table('share_types', engine) engine.execute(share_types_table.insert(self.share_types)) extra_specs_table = utils.load_table('share_type_extra_specs', engine) engine.execute(extra_specs_table.insert(self.extra_specs)) shares_table = utils.load_table('shares', engine) engine.execute(shares_table.insert(self.shares)) share_instances_table = utils.load_table('share_instances', engine) engine.execute(share_instances_table.insert(self.share_instances)) def check_upgrade(self, engine, data): share_ids = [s['id'] for s in self.shares] shares_table = utils.load_table('shares', engine) # Pre-existing Shares must be present shares_in_db = engine.execute(shares_table.select().where( shares_table.c.deleted == 'False')).fetchall() share_ids_in_db = [s['id'] for s in shares_in_db] self.test_case.assertTrue(len(share_ids_in_db) > 1) for share_id in share_ids: self.test_case.assertIn(share_id, share_ids_in_db) # New shares attr must be present and set to False for share in shares_in_db: self.test_case.assertTrue(hasattr(share, 
self.expected_attr)) self.test_case.assertEqual(False, share[self.expected_attr]) def check_downgrade(self, engine): share_ids = [s['id'] for s in self.shares] shares_table = utils.load_table('shares', engine) # Pre-existing Shares must be present shares_in_db = engine.execute(shares_table.select()).fetchall() share_ids_in_db = [s['id'] for s in shares_in_db] self.test_case.assertTrue(len(share_ids_in_db) > 1) for share_id in share_ids: self.test_case.assertIn(share_id, share_ids_in_db) # Shares should have no attr to revert share to snapshot for share in shares_in_db: self.test_case.assertFalse(hasattr(share, self.expected_attr)) @map_to_migration('95e3cf760840') class RemoveNovaNetIdColumnFromShareNetworks(BaseMigrationChecks): table_name = 'share_networks' nova_net_column_name = 'nova_net_id' def setup_upgrade_data(self, engine): user_id = 'user_id' project_id = 'project_id' nova_net_id = 'foo_nova_net_id' share_network_data = { 'id': 'foo_share_network_id_3', 'user_id': user_id, 'project_id': project_id, 'nova_net_id': nova_net_id, } sn_table = utils.load_table(self.table_name, engine) engine.execute(sn_table.insert(share_network_data)) def check_upgrade(self, engine, data): sn_table = utils.load_table(self.table_name, engine) rows = engine.execute(sn_table.select()) self.test_case.assertGreater(rows.rowcount, 0) for row in rows: self.test_case.assertFalse(hasattr(row, self.nova_net_column_name)) def check_downgrade(self, engine): sn_table = utils.load_table(self.table_name, engine) rows = engine.execute(sn_table.select()) self.test_case.assertGreater(rows.rowcount, 0) for row in rows: self.test_case.assertTrue(hasattr(row, self.nova_net_column_name)) self.test_case.assertIsNone(row[self.nova_net_column_name]) @map_to_migration('54667b9cade7') class RestoreStateToShareInstanceAccessMap(BaseMigrationChecks): new_instance_mapping_state = { constants.STATUS_ACTIVE: constants.STATUS_ACTIVE, constants.SHARE_INSTANCE_RULES_SYNCING: constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.STATUS_OUT_OF_SYNC: constants.ACCESS_STATE_QUEUED_TO_APPLY, 'updating': constants.ACCESS_STATE_QUEUED_TO_APPLY, 'updating_multiple': constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.SHARE_INSTANCE_RULES_ERROR: constants.ACCESS_STATE_ERROR, } new_access_rules_status = { constants.STATUS_ACTIVE: constants.STATUS_ACTIVE, constants.STATUS_OUT_OF_SYNC: constants.SHARE_INSTANCE_RULES_SYNCING, 'updating': constants.SHARE_INSTANCE_RULES_SYNCING, 'updating_multiple': constants.SHARE_INSTANCE_RULES_SYNCING, constants.SHARE_INSTANCE_RULES_ERROR: constants.SHARE_INSTANCE_RULES_ERROR, } @staticmethod def generate_share_instance(sid, access_rules_status): share_instance_data = { 'id': uuidutils.generate_uuid(), 'deleted': 'False', 'host': 'fake', 'share_id': sid, 'status': constants.STATUS_AVAILABLE, 'access_rules_status': access_rules_status } return share_instance_data @staticmethod def generate_share_instance_access_map(share_access_data_id, share_instance_id): share_instance_access_data = { 'id': uuidutils.generate_uuid(), 'share_instance_id': share_instance_id, 'access_id': share_access_data_id, 'deleted': 'False' } return share_instance_access_data def setup_upgrade_data(self, engine): share_data = { 'id': uuidutils.generate_uuid(), 'share_proto': 'fake', 'size': 1, 'snapshot_id': None, 'user_id': 'fake', 'project_id': 'fake' } share_table = utils.load_table('shares', engine) engine.execute(share_table.insert(share_data)) share_instances = [ self.generate_share_instance( share_data['id'], constants.STATUS_ACTIVE), 
self.generate_share_instance( share_data['id'], constants.STATUS_OUT_OF_SYNC), self.generate_share_instance( share_data['id'], constants.STATUS_ERROR), self.generate_share_instance( share_data['id'], 'updating'), self.generate_share_instance( share_data['id'], 'updating_multiple'), ] self.updating_share_instance = share_instances[3] self.updating_multiple_share_instance = share_instances[4] share_instance_table = utils.load_table('share_instances', engine) for share_instance_data in share_instances: engine.execute(share_instance_table.insert(share_instance_data)) share_access_data = { 'id': uuidutils.generate_uuid(), 'share_id': share_data['id'], 'access_type': 'fake', 'access_to': 'alice', 'deleted': 'False' } share_access_table = utils.load_table('share_access_map', engine) engine.execute(share_access_table.insert(share_access_data)) share_instance_access_data = [] for share_instance in share_instances: sia_map = self.generate_share_instance_access_map( share_access_data['id'], share_instance['id']) share_instance_access_data.append(sia_map) share_instance_access_table = utils.load_table( 'share_instance_access_map', engine) for sia_map in share_instance_access_data: engine.execute(share_instance_access_table.insert(sia_map)) def check_upgrade(self, engine, data): share_instance_table = utils.load_table('share_instances', engine) sia_table = utils.load_table('share_instance_access_map', engine) for rule in engine.execute(sia_table.select()): self.test_case.assertTrue(hasattr(rule, 'state')) correlated_share_instances = engine.execute( share_instance_table.select().where( share_instance_table.c.id == rule['share_instance_id'])) access_rules_status = getattr(correlated_share_instances.first(), 'access_rules_status') self.test_case.assertEqual( self.new_instance_mapping_state[access_rules_status], rule['state']) for instance in engine.execute(share_instance_table.select()): self.test_case.assertTrue(instance['access_rules_status'] not in ('updating', 'updating_multiple', constants.STATUS_OUT_OF_SYNC)) if instance['id'] in (self.updating_share_instance['id'], self.updating_multiple_share_instance['id']): self.test_case.assertEqual( constants.SHARE_INSTANCE_RULES_SYNCING, instance['access_rules_status']) def check_downgrade(self, engine): share_instance_table = utils.load_table('share_instances', engine) sia_table = utils.load_table('share_instance_access_map', engine) for rule in engine.execute(sia_table.select()): self.test_case.assertFalse(hasattr(rule, 'state')) for instance in engine.execute(share_instance_table.select()): if instance['id'] in (self.updating_share_instance['id'], self.updating_multiple_share_instance['id']): self.test_case.assertEqual( constants.STATUS_OUT_OF_SYNC, instance['access_rules_status']) @map_to_migration('e9f79621d83f') class AddCastRulesToReadonlyToInstances(BaseMigrationChecks): share_type = { 'id': uuidutils.generate_uuid(), } shares = [ { 'id': uuidutils.generate_uuid(), 'replication_type': constants.REPLICATION_TYPE_READABLE, }, { 'id': uuidutils.generate_uuid(), 'replication_type': constants.REPLICATION_TYPE_READABLE, }, { 'id': uuidutils.generate_uuid(), 'replication_type': constants.REPLICATION_TYPE_WRITABLE, }, { 'id': uuidutils.generate_uuid(), }, ] share_ids = [x['id'] for x in shares] correct_instance = { 'id': uuidutils.generate_uuid(), 'share_id': share_ids[1], 'replica_state': constants.REPLICA_STATE_IN_SYNC, 'status': constants.STATUS_AVAILABLE, 'share_type_id': share_type['id'], } instances = [ { 'id': uuidutils.generate_uuid(), 'share_id': 
share_ids[0], 'replica_state': constants.REPLICA_STATE_ACTIVE, 'status': constants.STATUS_AVAILABLE, 'share_type_id': share_type['id'], }, { 'id': uuidutils.generate_uuid(), 'share_id': share_ids[0], 'replica_state': constants.REPLICA_STATE_IN_SYNC, 'status': constants.STATUS_REPLICATION_CHANGE, 'share_type_id': share_type['id'], }, { 'id': uuidutils.generate_uuid(), 'share_id': share_ids[1], 'replica_state': constants.REPLICA_STATE_ACTIVE, 'status': constants.STATUS_REPLICATION_CHANGE, 'share_type_id': share_type['id'], }, correct_instance, { 'id': uuidutils.generate_uuid(), 'share_id': share_ids[2], 'replica_state': constants.REPLICA_STATE_ACTIVE, 'status': constants.STATUS_REPLICATION_CHANGE, 'share_type_id': share_type['id'], }, { 'id': uuidutils.generate_uuid(), 'share_id': share_ids[2], 'replica_state': constants.REPLICA_STATE_IN_SYNC, 'status': constants.STATUS_AVAILABLE, 'share_type_id': share_type['id'], }, { 'id': uuidutils.generate_uuid(), 'share_id': share_ids[3], 'status': constants.STATUS_AVAILABLE, 'share_type_id': share_type['id'], }, ] instance_ids = share_ids = [x['id'] for x in instances] def setup_upgrade_data(self, engine): shares_table = utils.load_table('shares', engine) share_instances_table = utils.load_table('share_instances', engine) share_types_table = utils.load_table('share_types', engine) engine.execute(share_types_table.insert(self.share_type)) for share in self.shares: engine.execute(shares_table.insert(share)) for instance in self.instances: engine.execute(share_instances_table.insert(instance)) def check_upgrade(self, engine, data): shares_table = utils.load_table('shares', engine) share_instances_table = utils.load_table('share_instances', engine) for instance in engine.execute(share_instances_table.select().where( share_instances_table.c.id in self.instance_ids)): self.test_case.assertIn('cast_rules_to_readonly', instance) share = engine.execute(shares_table.select().where( instance['share_id'] == shares_table.c.id)).first() if (instance['replica_state'] != constants.REPLICA_STATE_ACTIVE and share['replication_type'] == constants.REPLICATION_TYPE_READABLE and instance['status'] != constants.STATUS_REPLICATION_CHANGE): self.test_case.assertTrue(instance['cast_rules_to_readonly']) self.test_case.assertEqual(instance['id'], self.correct_instance['id']) else: self.test_case.assertEqual( False, instance['cast_rules_to_readonly']) def check_downgrade(self, engine): share_instances_table = utils.load_table('share_instances', engine) for instance in engine.execute(share_instances_table.select()): self.test_case.assertNotIn('cast_rules_to_readonly', instance) @map_to_migration('03da71c0e321') class ShareGroupMigrationChecks(BaseMigrationChecks): def setup_upgrade_data(self, engine): # Create share type self.share_type_id = uuidutils.generate_uuid() st_fixture = { 'deleted': "False", 'id': self.share_type_id, } st_table = utils.load_table('share_types', engine) engine.execute(st_table.insert(st_fixture)) # Create CG self.cg_id = uuidutils.generate_uuid() cg_fixture = { 'deleted': "False", 'id': self.cg_id, 'user_id': 'fake_user', 'project_id': 'fake_project_id', } cg_table = utils.load_table('consistency_groups', engine) engine.execute(cg_table.insert(cg_fixture)) # Create share_type group mapping self.mapping_id = uuidutils.generate_uuid() mapping_fixture = { 'deleted': "False", 'id': self.mapping_id, 'consistency_group_id': self.cg_id, 'share_type_id': self.share_type_id, } mapping_table = utils.load_table( 'consistency_group_share_type_mappings', engine) 
engine.execute(mapping_table.insert(mapping_fixture)) # Create share self.share_id = uuidutils.generate_uuid() share_fixture = { 'deleted': "False", 'id': self.share_id, 'consistency_group_id': self.cg_id, 'user_id': 'fake_user', 'project_id': 'fake_project_id', } share_table = utils.load_table('shares', engine) engine.execute(share_table.insert(share_fixture)) # Create share instance self.share_instance_id = uuidutils.generate_uuid() share_instance_fixture = { 'deleted': "False", 'share_type_id': self.share_type_id, 'id': self.share_instance_id, 'share_id': self.share_id, 'cast_rules_to_readonly': False, } share_instance_table = utils.load_table('share_instances', engine) engine.execute(share_instance_table.insert(share_instance_fixture)) # Create cgsnapshot self.cgsnapshot_id = uuidutils.generate_uuid() cg_snap_fixture = { 'deleted': "False", 'id': self.cgsnapshot_id, 'consistency_group_id': self.cg_id, 'user_id': 'fake_user', 'project_id': 'fake_project_id', } cgsnapshots_table = utils.load_table('cgsnapshots', engine) engine.execute(cgsnapshots_table.insert(cg_snap_fixture)) # Create cgsnapshot member self.cgsnapshot_member_id = uuidutils.generate_uuid() cg_snap_member_fixture = { 'deleted': "False", 'id': self.cgsnapshot_member_id, 'cgsnapshot_id': self.cgsnapshot_id, 'share_type_id': self.share_type_id, 'share_instance_id': self.share_instance_id, 'share_id': self.share_id, 'user_id': 'fake_user', 'project_id': 'fake_project_id', } cgsnapshot_members_table = utils.load_table( 'cgsnapshot_members', engine) engine.execute(cgsnapshot_members_table.insert(cg_snap_member_fixture)) def check_upgrade(self, engine, data): sg_table = utils.load_table("share_groups", engine) db_result = engine.execute(sg_table.select().where( sg_table.c.id == self.cg_id)) self.test_case.assertEqual(1, db_result.rowcount) sg = db_result.first() self.test_case.assertIsNone(sg['source_share_group_snapshot_id']) share_table = utils.load_table("shares", engine) share_result = engine.execute(share_table.select().where( share_table.c.id == self.share_id)) self.test_case.assertEqual(1, share_result.rowcount) share = share_result.first() self.test_case.assertEqual(self.cg_id, share['share_group_id']) self.test_case.assertIsNone( share['source_share_group_snapshot_member_id']) mapping_table = utils.load_table( "share_group_share_type_mappings", engine) mapping_result = engine.execute(mapping_table.select().where( mapping_table.c.id == self.mapping_id)) self.test_case.assertEqual(1, mapping_result.rowcount) mapping_record = mapping_result.first() self.test_case.assertEqual( self.cg_id, mapping_record['share_group_id']) self.test_case.assertEqual( self.share_type_id, mapping_record['share_type_id']) sgs_table = utils.load_table("share_group_snapshots", engine) db_result = engine.execute(sgs_table.select().where( sgs_table.c.id == self.cgsnapshot_id)) self.test_case.assertEqual(1, db_result.rowcount) sgs = db_result.first() self.test_case.assertEqual(self.cg_id, sgs['share_group_id']) sgsm_table = utils.load_table("share_group_snapshot_members", engine) db_result = engine.execute(sgsm_table.select().where( sgsm_table.c.id == self.cgsnapshot_member_id)) self.test_case.assertEqual(1, db_result.rowcount) sgsm = db_result.first() self.test_case.assertEqual( self.cgsnapshot_id, sgsm['share_group_snapshot_id']) self.test_case.assertNotIn('share_type_id', sgsm) def check_downgrade(self, engine): cg_table = utils.load_table("consistency_groups", engine) db_result = engine.execute(cg_table.select().where( cg_table.c.id == 
self.cg_id)) self.test_case.assertEqual(1, db_result.rowcount) cg = db_result.first() self.test_case.assertIsNone(cg['source_cgsnapshot_id']) share_table = utils.load_table("shares", engine) share_result = engine.execute(share_table.select().where( share_table.c.id == self.share_id)) self.test_case.assertEqual(1, share_result.rowcount) share = share_result.first() self.test_case.assertEqual(self.cg_id, share['consistency_group_id']) self.test_case.assertIsNone( share['source_cgsnapshot_member_id']) mapping_table = utils.load_table( "consistency_group_share_type_mappings", engine) mapping_result = engine.execute(mapping_table.select().where( mapping_table.c.id == self.mapping_id)) self.test_case.assertEqual(1, mapping_result.rowcount) cg_st_mapping = mapping_result.first() self.test_case.assertEqual( self.cg_id, cg_st_mapping['consistency_group_id']) self.test_case.assertEqual( self.share_type_id, cg_st_mapping['share_type_id']) cg_snapshots_table = utils.load_table("cgsnapshots", engine) db_result = engine.execute(cg_snapshots_table.select().where( cg_snapshots_table.c.id == self.cgsnapshot_id)) self.test_case.assertEqual(1, db_result.rowcount) cgsnap = db_result.first() self.test_case.assertEqual(self.cg_id, cgsnap['consistency_group_id']) cg_snap_member_table = utils.load_table("cgsnapshot_members", engine) db_result = engine.execute(cg_snap_member_table.select().where( cg_snap_member_table.c.id == self.cgsnapshot_member_id)) self.test_case.assertEqual(1, db_result.rowcount) member = db_result.first() self.test_case.assertEqual( self.cgsnapshot_id, member['cgsnapshot_id']) self.test_case.assertIn('share_type_id', member) self.test_case.assertEqual(self.share_type_id, member['share_type_id']) @map_to_migration('927920b37453') class ShareGroupSnapshotMemberNewProviderLocationColumnChecks( BaseMigrationChecks): table_name = 'share_group_snapshot_members' share_group_type_id = uuidutils.generate_uuid() share_group_id = uuidutils.generate_uuid() share_id = uuidutils.generate_uuid() share_instance_id = uuidutils.generate_uuid() share_group_snapshot_id = uuidutils.generate_uuid() share_group_snapshot_member_id = uuidutils.generate_uuid() def setup_upgrade_data(self, engine): # Setup share group type sgt_data = { 'id': self.share_group_type_id, 'name': uuidutils.generate_uuid(), } sgt_table = utils.load_table('share_group_types', engine) engine.execute(sgt_table.insert(sgt_data)) # Setup share group sg_data = { 'id': self.share_group_id, 'project_id': 'fake_project_id', 'user_id': 'fake_user_id', 'share_group_type_id': self.share_group_type_id, } sg_table = utils.load_table('share_groups', engine) engine.execute(sg_table.insert(sg_data)) # Setup shares share_data = { 'id': self.share_id, 'share_group_id': self.share_group_id, } s_table = utils.load_table('shares', engine) engine.execute(s_table.insert(share_data)) # Setup share instances share_instance_data = { 'id': self.share_instance_id, 'share_id': share_data['id'], 'cast_rules_to_readonly': False, } si_table = utils.load_table('share_instances', engine) engine.execute(si_table.insert(share_instance_data)) # Setup share group snapshot sgs_data = { 'id': self.share_group_snapshot_id, 'share_group_id': self.share_group_id, 'project_id': 'fake_project_id', 'user_id': 'fake_user_id', } sgs_table = utils.load_table('share_group_snapshots', engine) engine.execute(sgs_table.insert(sgs_data)) # Setup share group snapshot member sgsm_data = { 'id': self.share_group_snapshot_member_id, 'share_group_snapshot_id': self.share_group_snapshot_id, 
'share_id': self.share_id, 'share_instance_id': self.share_instance_id, 'project_id': 'fake_project_id', 'user_id': 'fake_user_id', } sgsm_table = utils.load_table(self.table_name, engine) engine.execute(sgsm_table.insert(sgsm_data)) def check_upgrade(self, engine, data): sgsm_table = utils.load_table(self.table_name, engine) db_result = engine.execute(sgsm_table.select().where( sgsm_table.c.id == self.share_group_snapshot_member_id)) self.test_case.assertEqual(1, db_result.rowcount) for sgsm in db_result: self.test_case.assertTrue(hasattr(sgsm, 'provider_location')) # Check that we can write string data to the new field # pylint: disable=no-value-for-parameter engine.execute(sgsm_table.update().where( sgsm_table.c.id == self.share_group_snapshot_member_id, ).values({ 'provider_location': ('z' * 255), })) def check_downgrade(self, engine): sgsm_table = utils.load_table(self.table_name, engine) db_result = engine.execute(sgsm_table.select().where( sgsm_table.c.id == self.share_group_snapshot_member_id)) self.test_case.assertEqual(1, db_result.rowcount) for sgsm in db_result: self.test_case.assertFalse(hasattr(sgsm, 'provider_location')) @map_to_migration('d5db24264f5c') class ShareGroupNewConsistentSnapshotSupportColumnChecks(BaseMigrationChecks): table_name = 'share_groups' new_attr_name = 'consistent_snapshot_support' share_group_type_id = uuidutils.generate_uuid() share_group_id = uuidutils.generate_uuid() def setup_upgrade_data(self, engine): # Setup share group type sgt_data = { 'id': self.share_group_type_id, 'name': uuidutils.generate_uuid(), } sgt_table = utils.load_table('share_group_types', engine) engine.execute(sgt_table.insert(sgt_data)) # Setup share group sg_data = { 'id': self.share_group_id, 'project_id': 'fake_project_id', 'user_id': 'fake_user_id', 'share_group_type_id': self.share_group_type_id, } sg_table = utils.load_table('share_groups', engine) engine.execute(sg_table.insert(sg_data)) def check_upgrade(self, engine, data): sg_table = utils.load_table(self.table_name, engine) db_result = engine.execute(sg_table.select().where( sg_table.c.id == self.share_group_id)) self.test_case.assertEqual(1, db_result.rowcount) for sg in db_result: self.test_case.assertTrue(hasattr(sg, self.new_attr_name)) # Check that we can write proper enum data to the new field for value in (None, 'pool', 'host'): # pylint: disable=no-value-for-parameter engine.execute(sg_table.update().where( sg_table.c.id == self.share_group_id, ).values({self.new_attr_name: value})) # Check that we cannot write values that are not allowed by enum. 
for value in ('', 'fake', 'pool1', 'host1', '1pool', '1host'): # pylint: disable=no-value-for-parameter self.test_case.assertRaises( oslo_db_exc.DBError, engine.execute, sg_table.update().where( sg_table.c.id == self.share_group_id ).values({self.new_attr_name: value}) ) def check_downgrade(self, engine): sg_table = utils.load_table(self.table_name, engine) db_result = engine.execute(sg_table.select().where( sg_table.c.id == self.share_group_id)) self.test_case.assertEqual(1, db_result.rowcount) for sg in db_result: self.test_case.assertFalse(hasattr(sg, self.new_attr_name)) @map_to_migration('7d142971c4ef') class ReservationExpireIndexChecks(BaseMigrationChecks): def setup_upgrade_data(self, engine): pass def _get_reservations_expire_delete_index(self, engine): reservation_table = utils.load_table('reservations', engine) members = ['deleted', 'expire'] for idx in reservation_table.indexes: if sorted(idx.columns.keys()) == members: return idx def check_upgrade(self, engine, data): self.test_case.assertTrue( self._get_reservations_expire_delete_index(engine)) def check_downgrade(self, engine): self.test_case.assertFalse( self._get_reservations_expire_delete_index(engine)) @map_to_migration('5237b6625330') class ShareGroupNewAvailabilityZoneIDColumnChecks(BaseMigrationChecks): table_name = 'share_groups' new_attr_name = 'availability_zone_id' share_group_type_id = uuidutils.generate_uuid() share_group_id = uuidutils.generate_uuid() availability_zone_id = uuidutils.generate_uuid() def setup_upgrade_data(self, engine): # Setup AZ az_data = { 'id': self.availability_zone_id, 'name': uuidutils.generate_uuid(), } az_table = utils.load_table('availability_zones', engine) engine.execute(az_table.insert(az_data)) # Setup share group type sgt_data = { 'id': self.share_group_type_id, 'name': uuidutils.generate_uuid(), } sgt_table = utils.load_table('share_group_types', engine) engine.execute(sgt_table.insert(sgt_data)) # Setup share group sg_data = { 'id': self.share_group_id, 'project_id': 'fake_project_id', 'user_id': 'fake_user_id', 'share_group_type_id': self.share_group_type_id, } sg_table = utils.load_table('share_groups', engine) engine.execute(sg_table.insert(sg_data)) def check_upgrade(self, engine, data): sg_table = utils.load_table(self.table_name, engine) db_result = engine.execute(sg_table.select().where( sg_table.c.id == self.share_group_id)) self.test_case.assertEqual(1, db_result.rowcount) for sg in db_result: self.test_case.assertTrue(hasattr(sg, self.new_attr_name)) # Check that we can write proper data to the new field for value in (None, self.availability_zone_id): # pylint: disable=no-value-for-parameter engine.execute(sg_table.update().where( sg_table.c.id == self.share_group_id, ).values({self.new_attr_name: value})) def check_downgrade(self, engine): sg_table = utils.load_table(self.table_name, engine) db_result = engine.execute(sg_table.select().where( sg_table.c.id == self.share_group_id)) self.test_case.assertEqual(1, db_result.rowcount) for sg in db_result: self.test_case.assertFalse(hasattr(sg, self.new_attr_name)) @map_to_migration('31252d671ae5') class SquashSGSnapshotMembersAndSSIModelsChecks(BaseMigrationChecks): old_table_name = 'share_group_snapshot_members' new_table_name = 'share_snapshot_instances' share_group_type_id = uuidutils.generate_uuid() share_group_id = uuidutils.generate_uuid() share_id = uuidutils.generate_uuid() share_instance_id = uuidutils.generate_uuid() share_group_snapshot_id = uuidutils.generate_uuid() share_group_snapshot_member_id = 
uuidutils.generate_uuid() keys = ( 'user_id', 'project_id', 'size', 'share_proto', 'share_group_snapshot_id', ) def setup_upgrade_data(self, engine): # Setup share group type sgt_data = { 'id': self.share_group_type_id, 'name': uuidutils.generate_uuid(), } sgt_table = utils.load_table('share_group_types', engine) engine.execute(sgt_table.insert(sgt_data)) # Setup share group sg_data = { 'id': self.share_group_id, 'project_id': 'fake_project_id', 'user_id': 'fake_user_id', 'share_group_type_id': self.share_group_type_id, } sg_table = utils.load_table('share_groups', engine) engine.execute(sg_table.insert(sg_data)) # Setup shares share_data = { 'id': self.share_id, 'share_group_id': self.share_group_id, } s_table = utils.load_table('shares', engine) engine.execute(s_table.insert(share_data)) # Setup share instances share_instance_data = { 'id': self.share_instance_id, 'share_id': share_data['id'], 'cast_rules_to_readonly': False, } si_table = utils.load_table('share_instances', engine) engine.execute(si_table.insert(share_instance_data)) # Setup share group snapshot sgs_data = { 'id': self.share_group_snapshot_id, 'share_group_id': self.share_group_id, 'project_id': 'fake_project_id', 'user_id': 'fake_user_id', } sgs_table = utils.load_table('share_group_snapshots', engine) engine.execute(sgs_table.insert(sgs_data)) # Setup share group snapshot member sgsm_data = { 'id': self.share_group_snapshot_member_id, 'share_group_snapshot_id': self.share_group_snapshot_id, 'share_id': self.share_id, 'share_instance_id': self.share_instance_id, 'project_id': 'fake_project_id', 'user_id': 'fake_user_id', } sgsm_table = utils.load_table(self.old_table_name, engine) engine.execute(sgsm_table.insert(sgsm_data)) def check_upgrade(self, engine, data): ssi_table = utils.load_table(self.new_table_name, engine) db_result = engine.execute(ssi_table.select().where( ssi_table.c.id == self.share_group_snapshot_member_id)) self.test_case.assertEqual(1, db_result.rowcount) for ssi in db_result: for key in self.keys: self.test_case.assertTrue(hasattr(ssi, key)) # Check that we can write string data to the new fields # pylint: disable=no-value-for-parameter engine.execute(ssi_table.update().where( ssi_table.c.id == self.share_group_snapshot_member_id, ).values({ 'user_id': ('u' * 255), 'project_id': ('p' * 255), 'share_proto': ('s' * 255), 'size': 123456789, 'share_group_snapshot_id': self.share_group_snapshot_id, })) # Check that table 'share_group_snapshot_members' does not # exist anymore self.test_case.assertRaises( sa_exc.NoSuchTableError, utils.load_table, 'share_group_snapshot_members', engine) def check_downgrade(self, engine): sgsm_table = utils.load_table(self.old_table_name, engine) db_result = engine.execute(sgsm_table.select().where( sgsm_table.c.id == self.share_group_snapshot_member_id)) self.test_case.assertEqual(1, db_result.rowcount) for sgsm in db_result: for key in self.keys: self.test_case.assertTrue(hasattr(sgsm, key)) # Check that create SGS member is absent in SSI table ssi_table = utils.load_table(self.new_table_name, engine) db_result = engine.execute(ssi_table.select().where( ssi_table.c.id == self.share_group_snapshot_member_id)) self.test_case.assertEqual(0, db_result.rowcount) @map_to_migration('238720805ce1') class MessagesTableChecks(BaseMigrationChecks): new_table_name = 'messages' def setup_upgrade_data(self, engine): pass def check_upgrade(self, engine, data): message_data = { 'id': uuidutils.generate_uuid(), 'project_id': 'x' * 255, 'request_id': 'x' * 255, 'resource_type': 'x' * 
255, 'resource_id': 'y' * 36, 'action_id': 'y' * 10, 'detail_id': 'y' * 10, 'message_level': 'x' * 255, 'created_at': datetime.datetime(2017, 7, 10, 18, 5, 58), 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'expires_at': datetime.datetime(2017, 7, 11, 18, 5, 58), } new_table = utils.load_table(self.new_table_name, engine) engine.execute(new_table.insert(message_data)) def check_downgrade(self, engine): self.test_case.assertRaises(sa_exc.NoSuchTableError, utils.load_table, 'messages', engine) @map_to_migration('b516de97bfee') class ProjectShareTypesQuotasChecks(BaseMigrationChecks): new_table_name = 'project_share_type_quotas' usages_table = 'quota_usages' reservations_table = 'reservations' st_record_id = uuidutils.generate_uuid() def setup_upgrade_data(self, engine): # Create share type self.st_data = { 'id': self.st_record_id, 'name': uuidutils.generate_uuid(), 'deleted': "False", } st_table = utils.load_table('share_types', engine) engine.execute(st_table.insert(self.st_data)) def check_upgrade(self, engine, data): # Create share type quota self.quota_data = { 'project_id': 'x' * 255, 'resource': 'y' * 255, 'hard_limit': 987654321, 'created_at': datetime.datetime(2017, 4, 11, 18, 5, 58), 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'share_type_id': self.st_record_id, } new_table = utils.load_table(self.new_table_name, engine) engine.execute(new_table.insert(self.quota_data)) # Create usage record self.usages_data = { 'project_id': 'x' * 255, 'user_id': None, 'share_type_id': self.st_record_id, 'resource': 'y' * 255, 'in_use': 13, 'reserved': 15, } usages_table = utils.load_table(self.usages_table, engine) engine.execute(usages_table.insert(self.usages_data)) # Create reservation record self.reservations_data = { 'uuid': uuidutils.generate_uuid(), 'usage_id': 1, 'project_id': 'x' * 255, 'user_id': None, 'share_type_id': self.st_record_id, 'resource': 'y' * 255, 'delta': 13, 'expire': datetime.datetime(2399, 4, 11, 18, 5, 58), } reservations_table = utils.load_table(self.reservations_table, engine) engine.execute(reservations_table.insert(self.reservations_data)) def check_downgrade(self, engine): self.test_case.assertRaises( sa_exc.NoSuchTableError, utils.load_table, self.new_table_name, engine) for table_name in (self.usages_table, self.reservations_table): table = utils.load_table(table_name, engine) db_result = engine.execute(table.select()) self.test_case.assertGreater(db_result.rowcount, 0) for row in db_result: self.test_case.assertFalse(hasattr(row, 'share_type_id')) @map_to_migration('829a09b0ddd4') class FixProjectShareTypesQuotasUniqueConstraintChecks(BaseMigrationChecks): st_record_id = uuidutils.generate_uuid() def setup_upgrade_data(self, engine): # Create share type self.st_data = { 'id': self.st_record_id, 'name': uuidutils.generate_uuid(), 'deleted': "False", } st_table = utils.load_table('share_types', engine) engine.execute(st_table.insert(self.st_data)) def check_upgrade(self, engine, data): for project_id in ('x' * 255, 'x'): # Create share type quota self.quota_data = { 'project_id': project_id, 'resource': 'y' * 255, 'hard_limit': 987654321, 'created_at': datetime.datetime(2017, 4, 11, 18, 5, 58), 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'share_type_id': self.st_record_id, } new_table = utils.load_table('project_share_type_quotas', engine) engine.execute(new_table.insert(self.quota_data)) def check_downgrade(self, engine): pass @map_to_migration('27cb96d991fa') class NewDescriptionColumnChecks(BaseMigrationChecks): st_table_name = 
'share_types' st_ids = ['share_type_id_fake_3_%d' % i for i in (1, 2)] def setup_upgrade_data(self, engine): # Create share type share_type_data = { 'id': self.st_ids[0], 'name': 'name_1', } st_table = utils.load_table(self.st_table_name, engine) engine.execute(st_table.insert(share_type_data)) def check_upgrade(self, engine, data): st_table = utils.load_table(self.st_table_name, engine) for na in engine.execute(st_table.select()): self.test_case.assertTrue(hasattr(na, 'description')) share_type_data_ds = { 'id': self.st_ids[1], 'name': 'name_1', 'description': 'description_1', } engine.execute(st_table.insert(share_type_data_ds)) st = engine.execute(st_table.select().where( share_type_data_ds['id'] == st_table.c.id)).first() self.test_case.assertEqual( share_type_data_ds['description'], st['description']) def check_downgrade(self, engine): table = utils.load_table(self.st_table_name, engine) db_result = engine.execute(table.select()) for record in db_result: self.test_case.assertFalse(hasattr(record, 'description')) @map_to_migration('4a482571410f') class BackenInfoTableChecks(BaseMigrationChecks): new_table_name = 'backend_info' def setup_upgrade_data(self, engine): pass def check_upgrade(self, engine, data): data = { 'host': 'test_host', 'info_hash': 'test_hash', 'created_at': datetime.datetime(2017, 7, 10, 18, 5, 58), 'updated_at': None, 'deleted_at': None, 'deleted': 0, } new_table = utils.load_table(self.new_table_name, engine) engine.execute(new_table.insert(data)) def check_downgrade(self, engine): self.test_case.assertRaises(sa_exc.NoSuchTableError, utils.load_table, self.new_table_name, engine) @map_to_migration('579c267fbb4d') class ShareInstanceAccessMapTableChecks(BaseMigrationChecks): share_access_table = 'share_access_map' share_instance_access_table = 'share_instance_access_map' @staticmethod def generate_share_instance(share_id, **kwargs): share_instance_data = { 'id': uuidutils.generate_uuid(), 'deleted': 'False', 'host': 'fake', 'share_id': share_id, 'status': constants.STATUS_AVAILABLE, } share_instance_data.update(**kwargs) return share_instance_data @staticmethod def generate_share_access_map(share_id, **kwargs): share_access_data = { 'id': uuidutils.generate_uuid(), 'share_id': share_id, 'deleted': 'False', 'access_type': 'ip', 'access_to': '192.0.2.10', } share_access_data.update(**kwargs) return share_access_data def setup_upgrade_data(self, engine): share = { 'id': uuidutils.generate_uuid(), 'share_proto': 'fake', 'size': 1, 'snapshot_id': None, 'user_id': 'fake', 'project_id': 'fake' } share_table = utils.load_table('shares', engine) engine.execute(share_table.insert(share)) share_instances = [ self.generate_share_instance(share['id']), self.generate_share_instance(share['id']), ] share_instance_table = utils.load_table('share_instances', engine) for share_instance in share_instances: engine.execute(share_instance_table.insert(share_instance)) share_accesses = [ self.generate_share_access_map( share['id'], state=constants.ACCESS_STATE_ACTIVE), self.generate_share_access_map( share['id'], state=constants.ACCESS_STATE_ERROR), ] self.active_share_access = share_accesses[0] self.error_share_access = share_accesses[1] share_access_table = utils.load_table('share_access_map', engine) engine.execute(share_access_table.insert(share_accesses)) def check_upgrade(self, engine, data): share_access_table = utils.load_table( self.share_access_table, engine) share_instance_access_table = utils.load_table( self.share_instance_access_table, engine) share_accesses = 
engine.execute(share_access_table.select()) share_instance_accesses = engine.execute( share_instance_access_table.select()) for share_access in share_accesses: self.test_case.assertFalse(hasattr(share_access, 'state')) for si_access in share_instance_accesses: if si_access['access_id'] in (self.active_share_access['id'], self.error_share_access['id']): self.test_case.assertIn(si_access['state'], (self.active_share_access['state'], self.error_share_access['state'])) def check_downgrade(self, engine): self.test_case.assertRaises( sa_exc.NoSuchTableError, utils.load_table, self.share_instance_access_table, engine) share_access_table = utils.load_table( self.share_access_table, engine) share_accesses = engine.execute(share_access_table.select().where( share_access_table.c.id.in_((self.active_share_access['id'], self.error_share_access['id'])))) for share_access in share_accesses: self.test_case.assertTrue(hasattr(share_access, 'state')) if share_access['id'] == self.active_share_access['id']: self.test_case.assertEqual( constants.ACCESS_STATE_ACTIVE, share_access['state']) elif share_access['id'] == self.error_share_access['id']: self.test_case.assertEqual( constants.ACCESS_STATE_ERROR, share_access['state']) @map_to_migration('097fad24d2fc') class ShareInstancesShareIdIndexChecks(BaseMigrationChecks): def setup_upgrade_data(self, engine): pass def _get_share_instances_share_id_index(self, engine): share_instances_table = utils.load_table('share_instances', engine) for idx in share_instances_table.indexes: if idx.name == 'share_instances_share_id_idx': return idx def check_upgrade(self, engine, data): self.test_case.assertTrue( self._get_share_instances_share_id_index(engine)) def check_downgrade(self, engine): self.test_case.assertFalse( self._get_share_instances_share_id_index(engine)) @map_to_migration('11ee96se625f3') class AccessMetadataTableChecks(BaseMigrationChecks): new_table_name = 'share_access_rules_metadata' record_access_id = uuidutils.generate_uuid() def setup_upgrade_data(self, engine): share_data = { 'id': uuidutils.generate_uuid(), 'share_proto': "NFS", 'size': 1, 'snapshot_id': None, 'user_id': 'fake', 'project_id': 'fake' } share_table = utils.load_table('shares', engine) engine.execute(share_table.insert(share_data)) share_instance_data = { 'id': uuidutils.generate_uuid(), 'deleted': 'False', 'host': 'fake', 'share_id': share_data['id'], 'status': 'available', 'access_rules_status': 'active', 'cast_rules_to_readonly': False, } share_instance_table = utils.load_table('share_instances', engine) engine.execute(share_instance_table.insert(share_instance_data)) share_access_data = { 'id': self.record_access_id, 'share_id': share_data['id'], 'access_type': 'NFS', 'access_to': '10.0.0.1', 'deleted': 'False' } share_access_table = utils.load_table('share_access_map', engine) engine.execute(share_access_table.insert(share_access_data)) share_instance_access_data = { 'id': uuidutils.generate_uuid(), 'share_instance_id': share_instance_data['id'], 'access_id': share_access_data['id'], 'deleted': 'False' } share_instance_access_table = utils.load_table( 'share_instance_access_map', engine) engine.execute(share_instance_access_table.insert( share_instance_access_data)) def check_upgrade(self, engine, data): data = { 'id': 1, 'key': 't' * 255, 'value': 'v' * 1023, 'access_id': self.record_access_id, 'created_at': datetime.datetime(2017, 7, 10, 18, 5, 58), 'updated_at': None, 'deleted_at': None, 'deleted': 'False', } new_table = utils.load_table(self.new_table_name, engine) 
engine.execute(new_table.insert(data)) def check_downgrade(self, engine): self.test_case.assertRaises(sa_exc.NoSuchTableError, utils.load_table, self.new_table_name, engine) @map_to_migration('6a3fd2984bc31') class ShareServerIsAutoDeletableAndIdentifierChecks(BaseMigrationChecks): def setup_upgrade_data(self, engine): user_id = 'user_id' project_id = 'project_id' # Create share network share_network_data = { 'id': 'fake_sn_id', 'user_id': user_id, 'project_id': project_id, } sn_table = utils.load_table('share_networks', engine) engine.execute(sn_table.insert(share_network_data)) # Create share server share_server_data = { 'id': 'fake_ss_id', 'share_network_id': share_network_data['id'], 'host': 'fake_host', 'status': 'active', } ss_table = utils.load_table('share_servers', engine) engine.execute(ss_table.insert(share_server_data)) def check_upgrade(self, engine, data): ss_table = utils.load_table('share_servers', engine) for ss in engine.execute(ss_table.select()): self.test_case.assertTrue(hasattr(ss, 'is_auto_deletable')) self.test_case.assertEqual(1, ss.is_auto_deletable) self.test_case.assertTrue(hasattr(ss, 'identifier')) self.test_case.assertEqual(ss.id, ss.identifier) def check_downgrade(self, engine): ss_table = utils.load_table('share_servers', engine) for ss in engine.execute(ss_table.select()): self.test_case.assertFalse(hasattr(ss, 'is_auto_deletable')) self.test_case.assertFalse(hasattr(ss, 'identifier')) @map_to_migration('805685098bd2') class ShareNetworkSubnetMigrationChecks(BaseMigrationChecks): user_id = '6VFQ87wnV24lg1c2q1q0lJkTbQBPFZ1m4968' project_id = '19HAW8w58yeUPBy8zGex4EGulWZHd8zZGtHk' share_network = { 'id': uuidutils.generate_uuid(), 'user_id': user_id, 'project_id': project_id, 'neutron_net_id': uuidutils.generate_uuid(), 'neutron_subnet_id': uuidutils.generate_uuid(), 'cidr': '203.0.113.0/24', 'ip_version': 4, 'network_type': 'vxlan', 'segmentation_id': 100, 'gateway': 'fake_gateway', 'mtu': 1500, } share_networks = [share_network] sns_table_name = 'share_network_subnets' sn_table_name = 'share_networks' ss_table_name = 'share_servers' expected_keys = ['neutron_net_id', 'neutron_subnet_id', 'cidr', 'ip_version', 'network_type', 'segmentation_id', 'gateway', 'mtu'] def _setup_data_for_empty_neutron_net_and_subnet_id_test(self, network): network['id'] = uuidutils.generate_uuid() for key in self.expected_keys: network[key] = None return network def setup_upgrade_data(self, engine): share_network_data_without_net_info = ( self._setup_data_for_empty_neutron_net_and_subnet_id_test( copy.deepcopy(self.share_network))) self.share_networks.append(share_network_data_without_net_info) # Load the table to be used below sn_table = utils.load_table(self.sn_table_name, engine) ss_table = utils.load_table(self.ss_table_name, engine) # Share server data share_server_data = { 'host': 'acme@controller-ostk-0', 'status': 'active', } # Create share share networks and one share server for each of them for network in self.share_networks: share_server_data['share_network_id'] = network['id'] share_server_data['id'] = uuidutils.generate_uuid() engine.execute(sn_table.insert(network)) engine.execute(ss_table.insert(share_server_data)) def check_upgrade(self, engine, data): # Load the necessary tables sn_table = utils.load_table(self.sn_table_name, engine) sns_table = utils.load_table(self.sns_table_name, engine) ss_table = utils.load_table(self.ss_table_name, engine) for network in self.share_networks: sn_record = engine.execute(sn_table.select().where( sn_table.c.id == 
network['id'])).first() for key in self.expected_keys: self.test_case.assertFalse(hasattr(sn_record, key)) sns_record = engine.execute(sns_table.select().where( sns_table.c.share_network_id == network['id'])).first() for key in self.expected_keys: self.test_case.assertTrue(hasattr(sns_record, key)) self.test_case.assertEqual(network[key], sns_record[key]) ss_record = ( engine.execute( ss_table.select().where( ss_table.c.share_network_subnet_id == sns_record['id']) ).first()) self.test_case.assertIs( True, hasattr(ss_record, 'share_network_subnet_id')) self.test_case.assertEqual( ss_record['share_network_subnet_id'], sns_record['id']) self.test_case.assertIs( False, hasattr(ss_record, 'share_network_id')) def check_downgrade(self, engine): sn_table = utils.load_table(self.sn_table_name, engine) # Check if the share network table contains the expected keys for sn in engine.execute(sn_table.select()): for key in self.expected_keys: self.test_case.assertTrue(hasattr(sn, key)) ss_table = utils.load_table(self.ss_table_name, engine) for network in self.share_networks: for ss in engine.execute(ss_table.select().where( ss_table.c.share_network_id == network['id'])): self.test_case.assertFalse(hasattr(ss, 'share_network_subnet_id')) self.test_case.assertTrue(hasattr(ss, 'share_network_id')) self.test_case.assertEqual(network['id'], ss['id']) # Check if the created table doesn't exists anymore self.test_case.assertRaises( sa_exc.NoSuchTableError, utils.load_table, self.sns_table_name, engine) @map_to_migration('e6d88547b381') class ShareInstanceProgressFieldChecks(BaseMigrationChecks): si_table_name = 'share_instances' progress_field_name = 'progress' def setup_upgrade_data(self, engine): pass def check_upgrade(self, engine, data): si_table = utils.load_table(self.si_table_name, engine) for si_record in engine.execute(si_table.select()): self.test_case.assertTrue(hasattr(si_record, self.progress_field_name)) if si_record['status'] == constants.STATUS_AVAILABLE: self.test_case.assertEqual('100%', si_record[self.progress_field_name]) else: self.test_case.assertIsNone( si_record[self.progress_field_name]) def check_downgrade(self, engine): si_table = utils.load_table(self.si_table_name, engine) for si_record in engine.execute(si_table.select()): self.test_case.assertFalse(hasattr(si_record, self.progress_field_name)) manila-10.0.0/manila/tests/db/test_migration.py0000664000175000017500000000546313656750227021472 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
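# A minimal illustrative sketch (assumed helper names, not the real
# manila.db.migration code): the MigrationTestCase below asserts that the
# migration module is a thin wrapper forwarding to alembic.command, with
# 'head' as the default upgrade target and 'base' as the default downgrade
# target when no version is supplied.
import alembic.command


def _upgrade_sketch(alembic_config, version=None):
    # Upgrade to the requested revision, or to the newest one by default.
    alembic.command.upgrade(alembic_config, version or 'head')


def _downgrade_sketch(alembic_config, version=None):
    # Roll back to the requested revision, or to the very beginning by default.
    alembic.command.downgrade(alembic_config, version or 'base')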
from unittest import mock

import alembic

from manila.db import migration
from manila import test


class MigrationTestCase(test.TestCase):

    def setUp(self):
        super(MigrationTestCase, self).setUp()
        self.config_patcher = mock.patch(
            'manila.db.migrations.alembic.migration._alembic_config')
        self.config = self.config_patcher.start()
        self.config.return_value = 'fake_config'
        self.addCleanup(self.config_patcher.stop)

    @mock.patch('alembic.command.upgrade')
    def test_upgrade(self, upgrade):
        migration.upgrade('version_1')
        upgrade.assert_called_once_with('fake_config', 'version_1')

    @mock.patch('alembic.command.upgrade')
    def test_upgrade_none_version(self, upgrade):
        migration.upgrade(None)
        upgrade.assert_called_once_with('fake_config', 'head')

    @mock.patch('alembic.command.downgrade')
    def test_downgrade(self, downgrade):
        migration.downgrade('version_1')
        downgrade.assert_called_once_with('fake_config', 'version_1')

    @mock.patch('alembic.command.downgrade')
    def test_downgrade_none_version(self, downgrade):
        migration.downgrade(None)
        downgrade.assert_called_once_with('fake_config', 'base')

    @mock.patch('alembic.command.stamp')
    def test_stamp(self, stamp):
        migration.stamp('version_1')
        stamp.assert_called_once_with('fake_config', 'version_1')

    @mock.patch('alembic.command.stamp')
    def test_stamp_none_version(self, stamp):
        migration.stamp(None)
        stamp.assert_called_once_with('fake_config', 'head')

    @mock.patch('alembic.command.revision')
    def test_revision(self, revision):
        migration.revision('test_message', 'autogenerate_value')
        revision.assert_called_once_with(
            'fake_config', 'test_message', 'autogenerate_value')

    @mock.patch.object(alembic.migration.MigrationContext, 'configure',
                       mock.Mock())
    def test_version(self):
        context = mock.Mock()
        context.get_current_revision = mock.Mock()
        alembic.migration.MigrationContext.configure.return_value = context
        migration.version()
        context.get_current_revision.assert_called_once_with()
manila-10.0.0/manila/tests/db/test_api.py0000664000175000017500000000411413656750227020242 0ustar zuulzuul00000000000000
# Copyright (c) Goutham Pacha Ravi.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Unit Tests for the interface methods in the manila/db/api.py."""

import re

from manila.db import api as db_interface
from manila.db.sqlalchemy import api as db_api
from manila import test


class DBInterfaceTestCase(test.TestCase):
    """Test cases for the DB Interface methods."""

    def test_interface_methods(self):
        """Ensure that implementation methods match interfaces.

        The manila/db/api module is merely a shim layer between the database
        implementation and the other methods using these implementations.
        Bugs are introduced when the shims go out of sync with the actual
        implementation, so this test ensures that method names and signatures
        match between the interface and the implementation.
        """
        members = dir(db_interface)
        # Ignore private methods for the file and any other members that
        # need not match.
        ignore_members = re.compile(r'^_|CONF|IMPL')
        interfaces = [i for i in members if not ignore_members.match(i)]

        for interface in interfaces:
            method = getattr(db_interface, interface)
            if callable(method):
                mock_method_call = self.mock_object(db_api, interface)
                # kwargs always specify defaults, ignore them in the
                # signature.
                args = filter(
                    lambda x: x != 'kwargs', method.__code__.co_varnames)
                method(*args)
                self.assertTrue(mock_method_call.called)
manila-10.0.0/manila/tests/db/fakes.py0000664000175000017500000000307313656750227017526 0ustar zuulzuul00000000000000
# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# Copyright 2010 OpenStack, LLC
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Stubouts, mocks and fixtures for the test suite."""

from manila import db


class FakeModel(object):
    """Stubs out for model."""

    def __init__(self, values):
        self.values = values

    def __getattr__(self, name):
        return self.values.get(name)

    def __getitem__(self, key):
        if key in self.values:
            return self.values[key]
        else:
            raise NotImplementedError()

    def __repr__(self):
        return '<FakeModel: %s>' % self.values

    def get(self, key, default=None):
        return self.__getattr__(key) or default

    def __contains__(self, key):
        return self.__getattr__(key)

    def to_dict(self):
        return self.values


def stub_out(stubs, funcs):
    """Set the stubs in mapping in the db api."""
    for func in funcs:
        func_name = '_'.join(func.__name__.split('_')[1:])
        stubs.Set(db, func_name, func)
manila-10.0.0/manila/tests/db/sqlalchemy/0000775000175000017500000000000013656750362020222 5ustar zuulzuul00000000000000
manila-10.0.0/manila/tests/db/sqlalchemy/__init__.py0000664000175000017500000000000013656750227022321 0ustar zuulzuul00000000000000
manila-10.0.0/manila/tests/db/sqlalchemy/test_api.py0000664000175000017500000051232313656750227022412 0ustar zuulzuul00000000000000
# Copyright 2013 OpenStack Foundation
# Copyright (c) 2014 NetApp, Inc.
# Copyright (c) 2015 Rushil Chugh
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
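
# NOTE: The test cases in this module follow one pattern throughout: build
# fixture rows with the manila.tests.db_utils helpers, call the function
# under test in manila.db.sqlalchemy.api, and assert on the returned model
# objects. The sketch below is illustrative only; the class and test names
# are not part of the original module, but the imports and helper calls are
# the same ones the real tests rely on, and manila.test.TestCase provides
# the test database.

from manila import context
from manila.db.sqlalchemy import api as db_api
from manila import test
from manila.tests import db_utils


class ExampleShareQueryTestCase(test.TestCase):
    """Illustrative sketch of the fixture/call/assert pattern used below."""

    def test_share_get_returns_created_share(self):
        ctxt = context.get_admin_context()
        # db_utils.create_share() writes a share row (and its instance)
        # straight through the DB layer.
        share = db_utils.create_share(size=1)
        fetched = db_api.share_get(ctxt, share['id'])
        self.assertEqual(share['id'], fetched['id'])
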
"""Testing of SQLAlchemy backend.""" import copy import datetime import random from unittest import mock import ddt from oslo_db import exception as db_exception from oslo_utils import timeutils from oslo_utils import uuidutils import six from manila.common import constants from manila import context from manila.db.sqlalchemy import api as db_api from manila.db.sqlalchemy import models from manila import exception from manila import quota from manila import test from manila.tests import db_utils QUOTAS = quota.QUOTAS security_service_dict = { 'id': 'fake id', 'project_id': 'fake project', 'type': 'ldap', 'dns_ip': 'fake dns', 'server': 'fake ldap server', 'domain': 'fake ldap domain', 'ou': 'fake ldap ou', 'user': 'fake user', 'password': 'fake password', 'name': 'whatever', 'description': 'nevermind', } class BaseDatabaseAPITestCase(test.TestCase): def _check_fields(self, expected, actual): for key in expected: self.assertEqual(expected[key], actual[key]) @ddt.ddt class GenericDatabaseAPITestCase(test.TestCase): def setUp(self): """Run before each test.""" super(GenericDatabaseAPITestCase, self).setUp() self.ctxt = context.get_admin_context() @ddt.unpack @ddt.data( {'values': {'test': 'fake'}, 'call_count': 1}, {'values': {'test': 'fake', 'id': 'fake'}, 'call_count': 0}, {'values': {'test': 'fake', 'fooid': 'fake'}, 'call_count': 1}, {'values': {'test': 'fake', 'idfoo': 'fake'}, 'call_count': 1}, ) def test_ensure_model_values_has_id(self, values, call_count): self.mock_object(uuidutils, 'generate_uuid') db_api.ensure_model_dict_has_id(values) self.assertEqual(call_count, uuidutils.generate_uuid.call_count) self.assertIn('id', values) def test_custom_query(self): share = db_utils.create_share() share_access = db_utils.create_access(share_id=share['id']) db_api.share_instance_access_delete( self.ctxt, share_access.instance_mappings[0].id) self.assertRaises(exception.NotFound, db_api.share_access_get, self.ctxt, share_access.id) @ddt.ddt class ShareAccessDatabaseAPITestCase(test.TestCase): def setUp(self): """Run before each test.""" super(ShareAccessDatabaseAPITestCase, self).setUp() self.ctxt = context.get_admin_context() @ddt.data(0, 3) def test_share_access_get_all_for_share(self, len_rules): share = db_utils.create_share() rules = [db_utils.create_access(share_id=share['id']) for i in range(0, len_rules)] rule_ids = [r['id'] for r in rules] result = db_api.share_access_get_all_for_share(self.ctxt, share['id']) self.assertEqual(len_rules, len(result)) result_ids = [r['id'] for r in result] self.assertEqual(rule_ids, result_ids) def test_share_access_get_all_for_share_no_instance_mappings(self): share = db_utils.create_share() share_instance = share['instance'] rule = db_utils.create_access(share_id=share['id']) # Mark instance mapping soft deleted db_api.share_instance_access_update( self.ctxt, rule['id'], share_instance['id'], {'deleted': "True"}) result = db_api.share_access_get_all_for_share(self.ctxt, share['id']) self.assertEqual([], result) def test_share_instance_access_update(self): share = db_utils.create_share() access = db_utils.create_access(share_id=share['id']) instance_access_mapping = db_api.share_instance_access_get( self.ctxt, access['id'], share.instance['id']) self.assertEqual(constants.ACCESS_STATE_QUEUED_TO_APPLY, access['state']) self.assertIsNone(access['access_key']) db_api.share_instance_access_update( self.ctxt, access['id'], share.instance['id'], {'state': constants.STATUS_ERROR, 'access_key': 'watson4heisman'}) instance_access_mapping = 
db_api.share_instance_access_get( self.ctxt, access['id'], share.instance['id']) access = db_api.share_access_get(self.ctxt, access['id']) self.assertEqual(constants.STATUS_ERROR, instance_access_mapping['state']) self.assertEqual('watson4heisman', access['access_key']) @ddt.data(True, False) def test_share_access_get_all_for_instance_with_share_access_data( self, with_share_access_data): share = db_utils.create_share() access_1 = db_utils.create_access(share_id=share['id']) access_2 = db_utils.create_access(share_id=share['id']) share_access_keys = ('access_to', 'access_type', 'access_level', 'share_id') rules = db_api.share_access_get_all_for_instance( self.ctxt, share.instance['id'], with_share_access_data=with_share_access_data) share_access_keys_present = True if with_share_access_data else False actual_access_ids = [r['access_id'] for r in rules] self.assertTrue(isinstance(actual_access_ids, list)) expected = [access_1['id'], access_2['id']] self.assertEqual(len(expected), len(actual_access_ids)) for pool in expected: self.assertIn(pool, actual_access_ids) for rule in rules: for key in share_access_keys: self.assertEqual(share_access_keys_present, key in rule) self.assertIn('state', rule) def test_share_access_get_all_for_instance_with_filters(self): share = db_utils.create_share() new_share_instance = db_utils.create_share_instance( share_id=share['id']) access_1 = db_utils.create_access(share_id=share['id']) access_2 = db_utils.create_access(share_id=share['id']) share_access_keys = ('access_to', 'access_type', 'access_level', 'share_id') db_api.share_instance_access_update( self.ctxt, access_1['id'], new_share_instance['id'], {'state': constants.STATUS_ACTIVE}) rules = db_api.share_access_get_all_for_instance( self.ctxt, new_share_instance['id'], filters={'state': constants.ACCESS_STATE_QUEUED_TO_APPLY}) self.assertEqual(1, len(rules)) self.assertEqual(access_2['id'], rules[0]['access_id']) for rule in rules: for key in share_access_keys: self.assertIn(key, rule) def test_share_instance_access_delete(self): share = db_utils.create_share() access = db_utils.create_access(share_id=share['id'], metadata={'key1': 'v1'}) instance_access_mapping = db_api.share_instance_access_get( self.ctxt, access['id'], share.instance['id']) db_api.share_instance_access_delete( self.ctxt, instance_access_mapping['id']) rules = db_api.share_access_get_all_for_instance( self.ctxt, share.instance['id']) self.assertEqual([], rules) self.assertRaises(exception.NotFound, db_api.share_instance_access_get, self.ctxt, access['id'], share['instance']['id']) def test_one_share_with_two_share_instance_access_delete(self): metadata = {'key2': 'v2', 'key3': 'v3'} share = db_utils.create_share() instance = db_utils.create_share_instance(share_id=share['id']) access = db_utils.create_access(share_id=share['id'], metadata=metadata) instance_access_mapping1 = db_api.share_instance_access_get( self.ctxt, access['id'], share.instance['id']) instance_access_mapping2 = db_api.share_instance_access_get( self.ctxt, access['id'], instance['id']) self.assertEqual(instance_access_mapping1['access_id'], instance_access_mapping2['access_id']) db_api.share_instance_delete(self.ctxt, instance['id']) get_accesses = db_api.share_access_get_all_for_share(self.ctxt, share['id']) self.assertEqual(1, len(get_accesses)) get_metadata = ( get_accesses[0].get('share_access_rules_metadata') or {}) get_metadata = {item['key']: item['value'] for item in get_metadata} self.assertEqual(metadata, get_metadata) self.assertEqual(access['id'], 
get_accesses[0]['id']) db_api.share_instance_delete(self.ctxt, share['instance']['id']) self.assertRaises(exception.NotFound, db_api.share_instance_access_get, self.ctxt, access['id'], share['instance']['id']) get_accesses = db_api.share_access_get_all_for_share(self.ctxt, share['id']) self.assertEqual(0, len(get_accesses)) @ddt.data(True, False) def test_share_instance_access_get_with_share_access_data( self, with_share_access_data): share = db_utils.create_share() access = db_utils.create_access(share_id=share['id']) instance_access = db_api.share_instance_access_get( self.ctxt, access['id'], share['instance']['id'], with_share_access_data=with_share_access_data) for key in ('share_id', 'access_type', 'access_to', 'access_level', 'access_key'): self.assertEqual(with_share_access_data, key in instance_access) @ddt.data({'existing': {'access_type': 'cephx', 'access_to': 'alice'}, 'new': {'access_type': 'user', 'access_to': 'alice'}, 'result': False}, {'existing': {'access_type': 'user', 'access_to': 'bob'}, 'new': {'access_type': 'user', 'access_to': 'bob'}, 'result': True}, {'existing': {'access_type': 'ip', 'access_to': '10.0.0.10/32'}, 'new': {'access_type': 'ip', 'access_to': '10.0.0.10'}, 'result': True}, {'existing': {'access_type': 'ip', 'access_to': '10.10.0.11'}, 'new': {'access_type': 'ip', 'access_to': '10.10.0.11'}, 'result': True}, {'existing': {'access_type': 'ip', 'access_to': 'fd21::11'}, 'new': {'access_type': 'ip', 'access_to': 'fd21::11'}, 'result': True}, {'existing': {'access_type': 'ip', 'access_to': 'fd21::10'}, 'new': {'access_type': 'ip', 'access_to': 'fd21::10/128'}, 'result': True}, {'existing': {'access_type': 'ip', 'access_to': '10.10.0.0/22'}, 'new': {'access_type': 'ip', 'access_to': '10.10.0.0/24'}, 'result': False}, {'existing': {'access_type': 'ip', 'access_to': '2620:52::/48'}, 'new': {'access_type': 'ip', 'access_to': '2620:52:0:13b8::/64'}, 'result': False}) @ddt.unpack def test_share_access_check_for_existing_access(self, existing, new, result): share = db_utils.create_share() db_utils.create_access(share_id=share['id'], access_type=existing['access_type'], access_to=existing['access_to']) rule_exists = db_api.share_access_check_for_existing_access( self.ctxt, share['id'], new['access_type'], new['access_to']) self.assertEqual(result, rule_exists) def test_share_access_get_all_for_share_with_metadata(self): share = db_utils.create_share() rules = [db_utils.create_access( share_id=share['id'], metadata={'key1': i}) for i in range(0, 3)] rule_ids = [r['id'] for r in rules] result = db_api.share_access_get_all_for_share(self.ctxt, share['id']) self.assertEqual(3, len(result)) result_ids = [r['id'] for r in result] self.assertEqual(rule_ids, result_ids) result = db_api.share_access_get_all_for_share( self.ctxt, share['id'], {'metadata': {'key1': '2'}}) self.assertEqual(1, len(result)) self.assertEqual(rules[2]['id'], result[0]['id']) def test_share_access_metadata_update(self): share = db_utils.create_share() new_metadata = {'key1': 'test_update', 'key2': 'v2'} rule = db_utils.create_access(share_id=share['id'], metadata={'key1': 'v1'}) result_metadata = db_api.share_access_metadata_update( self.ctxt, rule['id'], metadata=new_metadata) result = db_api.share_access_get(self.ctxt, rule['id']) self.assertEqual(new_metadata, result_metadata) metadata = result.get('share_access_rules_metadata') if metadata: metadata = {item['key']: item['value'] for item in metadata} else: metadata = {} self.assertEqual(new_metadata, metadata) @ddt.ddt class 
ShareDatabaseAPITestCase(test.TestCase): def setUp(self): """Run before each test.""" super(ShareDatabaseAPITestCase, self).setUp() self.ctxt = context.get_admin_context() def test_share_filter_by_host_with_pools(self): share_instances = [[ db_api.share_create(self.ctxt, {'host': value}).instance for value in ('foo', 'foo#pool0')]] db_utils.create_share() self._assertEqualListsOfObjects(share_instances[0], db_api.share_instances_get_all_by_host( self.ctxt, 'foo'), ignored_keys=['share_type', 'share_type_id', 'export_locations']) def test_share_filter_all_by_host_with_pools_multiple_hosts(self): share_instances = [[ db_api.share_create(self.ctxt, {'host': value}).instance for value in ('foo', 'foo#pool0', 'foo', 'foo#pool1')]] db_utils.create_share() self._assertEqualListsOfObjects(share_instances[0], db_api.share_instances_get_all_by_host( self.ctxt, 'foo'), ignored_keys=['share_type', 'share_type_id', 'export_locations']) def test_share_filter_all_by_share_server(self): share_network = db_utils.create_share_network() share_server = db_utils.create_share_server( share_network_id=share_network['id']) share = db_utils.create_share(share_server_id=share_server['id'], share_network_id=share_network['id']) actual_result = db_api.share_get_all_by_share_server( self.ctxt, share_server['id']) self.assertEqual(1, len(actual_result)) self.assertEqual(share['id'], actual_result[0].id) def test_share_filter_all_by_share_group(self): group = db_utils.create_share_group() share = db_utils.create_share(share_group_id=group['id']) actual_result = db_api.share_get_all_by_share_group_id( self.ctxt, group['id']) self.assertEqual(1, len(actual_result)) self.assertEqual(share['id'], actual_result[0].id) def test_share_instance_delete_with_share(self): share = db_utils.create_share() self.assertIsNotNone(db_api.share_get(self.ctxt, share['id'])) self.assertIsNotNone(db_api.share_metadata_get(self.ctxt, share['id'])) db_api.share_instance_delete(self.ctxt, share.instance['id']) self.assertRaises(exception.NotFound, db_api.share_get, self.ctxt, share['id']) self.assertRaises(exception.NotFound, db_api.share_metadata_get, self.ctxt, share['id']) def test_share_instance_delete_with_share_need_to_update_usages(self): share = db_utils.create_share() self.assertIsNotNone(db_api.share_get(self.ctxt, share['id'])) self.assertIsNotNone(db_api.share_metadata_get(self.ctxt, share['id'])) self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value='reservation')) self.mock_object(quota.QUOTAS, 'commit') db_api.share_instance_delete( self.ctxt, share.instance['id'], need_to_update_usages=True) self.assertRaises(exception.NotFound, db_api.share_get, self.ctxt, share['id']) self.assertRaises(exception.NotFound, db_api.share_metadata_get, self.ctxt, share['id']) quota.QUOTAS.reserve.assert_called_once_with( self.ctxt, project_id=share['project_id'], shares=-1, gigabytes=-share['size'], share_type_id=None, user_id=share['user_id'] ) quota.QUOTAS.commit.assert_called_once_with( self.ctxt, mock.ANY, project_id=share['project_id'], share_type_id=None, user_id=share['user_id'] ) def test_share_instance_get(self): share = db_utils.create_share() instance = db_api.share_instance_get(self.ctxt, share.instance['id']) self.assertEqual('share-%s' % instance['id'], instance['name']) @ddt.data({'with_share_data': True, 'status': constants.STATUS_AVAILABLE}, {'with_share_data': False, 'status': None}) @ddt.unpack def test_share_instance_get_all_by_host(self, with_share_data, status): kwargs = {'status': status} if status else {} 
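        # The fixture share gets an explicit status only when the ddt
        # scenario supplies one; the None scenario exercises the call below
        # without a specific status value.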
db_utils.create_share(**kwargs) instances = db_api.share_instances_get_all_by_host( self.ctxt, 'fake_host', with_share_data=with_share_data, status=status) self.assertEqual(1, len(instances)) instance = instances[0] self.assertEqual('share-%s' % instance['id'], instance['name']) if with_share_data: self.assertEqual('NFS', instance['share_proto']) self.assertEqual(0, instance['size']) else: self.assertNotIn('share_proto', instance) def test_share_instance_get_all_by_host_not_found_exception(self): db_utils.create_share() self.mock_object(db_api, 'share_get', mock.Mock( side_effect=exception.NotFound)) instances = db_api.share_instances_get_all_by_host( self.ctxt, 'fake_host', True) self.assertEqual(0, len(instances)) def test_share_instance_get_all_by_share_group(self): group = db_utils.create_share_group() db_utils.create_share(share_group_id=group['id']) db_utils.create_share() instances = db_api.share_instances_get_all_by_share_group_id( self.ctxt, group['id']) self.assertEqual(1, len(instances)) instance = instances[0] self.assertEqual('share-%s' % instance['id'], instance['name']) @ddt.data('id', 'path') def test_share_instance_get_all_by_export_location(self, type): share = db_utils.create_share() initial_location = ['fake_export_location'] db_api.share_export_locations_update(self.ctxt, share.instance['id'], initial_location, False) if type == 'id': export_location = ( db_api.share_export_locations_get_by_share_id(self.ctxt, share['id'])) value = export_location[0]['uuid'] else: value = 'fake_export_location' instances = db_api.share_instances_get_all( self.ctxt, filters={'export_location_' + type: value}) self.assertEqual(1, len(instances)) instance = instances[0] self.assertEqual('share-%s' % instance['id'], instance['name']) @ddt.data('host', 'share_group_id') def test_share_get_all_sort_by_share_instance_fields(self, sort_key): shares = [db_utils.create_share(**{sort_key: n, 'size': 1}) for n in ('test1', 'test2')] actual_result = db_api.share_get_all( self.ctxt, sort_key=sort_key, sort_dir='desc') self.assertEqual(2, len(actual_result)) self.assertEqual(shares[0]['id'], actual_result[1]['id']) @ddt.data('id', 'path') def test_share_get_all_by_export_location(self, type): share = db_utils.create_share() initial_location = ['fake_export_location'] db_api.share_export_locations_update(self.ctxt, share.instance['id'], initial_location, False) if type == 'id': export_location = db_api.share_export_locations_get_by_share_id( self.ctxt, share['id']) value = export_location[0]['uuid'] else: value = 'fake_export_location' actual_result = db_api.share_get_all( self.ctxt, filters={'export_location_' + type: value}) self.assertEqual(1, len(actual_result)) self.assertEqual(share['id'], actual_result[0]['id']) @ddt.data('id', 'path') def test_share_get_all_by_export_location_not_exist(self, type): share = db_utils.create_share() initial_location = ['fake_export_location'] db_api.share_export_locations_update(self.ctxt, share.instance['id'], initial_location, False) filter = {'export_location_' + type: 'export_location_not_exist'} actual_result = db_api.share_get_all(self.ctxt, filters=filter) self.assertEqual(0, len(actual_result)) @ddt.data((10, 5), (20, 5)) @ddt.unpack def test_share_get_all_with_limit(self, limit, offset): for i in range(limit + 5): db_utils.create_share() filters = {'limit': offset, 'offset': 0} shares_not_requested = db_api.share_get_all( self.ctxt, filters=filters) filters = {'limit': limit, 'offset': offset} shares_requested = db_api.share_get_all(self.ctxt, 
filters=filters) shares_not_requested_ids = [s['id'] for s in shares_not_requested] shares_requested_ids = [s['id'] for s in shares_requested] self.assertEqual(offset, len(shares_not_requested_ids)) self.assertEqual(limit, len(shares_requested_ids)) self.assertEqual(0, len( set(shares_requested_ids) & set(shares_not_requested_ids))) @ddt.data(None, 'writable') def test_share_get_has_replicas_field(self, replication_type): share = db_utils.create_share(replication_type=replication_type) db_share = db_api.share_get(self.ctxt, share['id']) self.assertIn('has_replicas', db_share) @ddt.data({'with_share_data': False, 'with_share_server': False}, {'with_share_data': False, 'with_share_server': True}, {'with_share_data': True, 'with_share_server': False}, {'with_share_data': True, 'with_share_server': True}) @ddt.unpack def test_share_replicas_get_all(self, with_share_data, with_share_server): share_server = db_utils.create_share_server() share_1 = db_utils.create_share() share_2 = db_utils.create_share() db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_ACTIVE, share_id=share_1['id'], share_server_id=share_server['id']) db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_IN_SYNC, share_id=share_1['id'], share_server_id=share_server['id']) db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_OUT_OF_SYNC, share_id=share_2['id'], share_server_id=share_server['id']) db_utils.create_share_replica(share_id=share_2['id']) expected_ss_keys = { 'backend_details', 'host', 'id', 'share_network_subnet_id', 'status', } expected_share_keys = { 'project_id', 'share_type_id', 'display_name', 'name', 'share_proto', 'is_public', 'source_share_group_snapshot_member_id', } session = db_api.get_session() with session.begin(): share_replicas = db_api.share_replicas_get_all( self.ctxt, with_share_server=with_share_server, with_share_data=with_share_data, session=session) self.assertEqual(3, len(share_replicas)) for replica in share_replicas: if with_share_server: self.assertTrue(expected_ss_keys.issubset( replica['share_server'].keys())) else: self.assertNotIn('share_server', replica.keys()) self.assertEqual( with_share_data, expected_share_keys.issubset(replica.keys())) @ddt.data({'with_share_data': False, 'with_share_server': False}, {'with_share_data': False, 'with_share_server': True}, {'with_share_data': True, 'with_share_server': False}, {'with_share_data': True, 'with_share_server': True}) @ddt.unpack def test_share_replicas_get_all_by_share(self, with_share_data, with_share_server): share_server = db_utils.create_share_server() share = db_utils.create_share() db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_ACTIVE, share_id=share['id'], share_server_id=share_server['id']) db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_IN_SYNC, share_id=share['id'], share_server_id=share_server['id']) db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_OUT_OF_SYNC, share_id=share['id'], share_server_id=share_server['id']) expected_ss_keys = { 'backend_details', 'host', 'id', 'share_network_subnet_id', 'status', } expected_share_keys = { 'project_id', 'share_type_id', 'display_name', 'name', 'share_proto', 'is_public', 'source_share_group_snapshot_member_id', } session = db_api.get_session() with session.begin(): share_replicas = db_api.share_replicas_get_all_by_share( self.ctxt, share['id'], with_share_server=with_share_server, with_share_data=with_share_data, session=session) self.assertEqual(3, len(share_replicas)) 
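            # Each replica should expose the share_server details only when
            # with_share_server was requested, and the extra share fields
            # only when with_share_data was requested.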
for replica in share_replicas: if with_share_server: self.assertTrue(expected_ss_keys.issubset( replica['share_server'].keys())) else: self.assertNotIn('share_server', replica.keys()) self.assertEqual(with_share_data, expected_share_keys.issubset(replica.keys())) def test_share_replicas_get_available_active_replica(self): share_server = db_utils.create_share_server() share_1 = db_utils.create_share() share_2 = db_utils.create_share() share_3 = db_utils.create_share() db_utils.create_share_replica( id='Replica1', share_id=share_1['id'], status=constants.STATUS_AVAILABLE, replica_state=constants.REPLICA_STATE_ACTIVE, share_server_id=share_server['id']) db_utils.create_share_replica( id='Replica2', status=constants.STATUS_AVAILABLE, share_id=share_1['id'], replica_state=constants.REPLICA_STATE_ACTIVE, share_server_id=share_server['id']) db_utils.create_share_replica( id='Replica3', status=constants.STATUS_AVAILABLE, share_id=share_2['id'], replica_state=constants.REPLICA_STATE_ACTIVE) db_utils.create_share_replica( id='Replica4', status=constants.STATUS_ERROR, share_id=share_2['id'], replica_state=constants.REPLICA_STATE_ACTIVE) db_utils.create_share_replica( id='Replica5', status=constants.STATUS_AVAILABLE, share_id=share_2['id'], replica_state=constants.REPLICA_STATE_IN_SYNC) db_utils.create_share_replica( id='Replica6', share_id=share_3['id'], status=constants.STATUS_AVAILABLE, replica_state=constants.REPLICA_STATE_IN_SYNC) session = db_api.get_session() expected_ss_keys = { 'backend_details', 'host', 'id', 'share_network_subnet_id', 'status', } expected_share_keys = { 'project_id', 'share_type_id', 'display_name', 'name', 'share_proto', 'is_public', 'source_share_group_snapshot_member_id', } with session.begin(): replica_share_1 = ( db_api.share_replicas_get_available_active_replica( self.ctxt, share_1['id'], with_share_server=True, session=session) ) replica_share_2 = ( db_api.share_replicas_get_available_active_replica( self.ctxt, share_2['id'], with_share_data=True, session=session) ) replica_share_3 = ( db_api.share_replicas_get_available_active_replica( self.ctxt, share_3['id'], session=session) ) self.assertIn(replica_share_1.get('id'), ['Replica1', 'Replica2']) self.assertTrue(expected_ss_keys.issubset( replica_share_1['share_server'].keys())) self.assertFalse( expected_share_keys.issubset(replica_share_1.keys())) self.assertEqual(replica_share_2.get('id'), 'Replica3') self.assertFalse(replica_share_2['share_server']) self.assertTrue( expected_share_keys.issubset(replica_share_2.keys())) self.assertIsNone(replica_share_3) def test_share_replica_get_exception(self): replica = db_utils.create_share_replica(share_id='FAKE_SHARE_ID') self.assertRaises(exception.ShareReplicaNotFound, db_api.share_replica_get, self.ctxt, replica['id']) def test_share_replica_get_without_share_data(self): share = db_utils.create_share() replica = db_utils.create_share_replica( share_id=share['id'], replica_state=constants.REPLICA_STATE_ACTIVE) expected_extra_keys = { 'project_id', 'share_type_id', 'display_name', 'name', 'share_proto', 'is_public', 'source_share_group_snapshot_member_id', } share_replica = db_api.share_replica_get(self.ctxt, replica['id']) self.assertIsNotNone(share_replica['replica_state']) self.assertEqual(share['id'], share_replica['share_id']) self.assertFalse(expected_extra_keys.issubset(share_replica.keys())) def test_share_replica_get_with_share_data(self): share = db_utils.create_share() replica = db_utils.create_share_replica( share_id=share['id'], 
replica_state=constants.REPLICA_STATE_ACTIVE) expected_extra_keys = { 'project_id', 'share_type_id', 'display_name', 'name', 'share_proto', 'is_public', 'source_share_group_snapshot_member_id', } share_replica = db_api.share_replica_get( self.ctxt, replica['id'], with_share_data=True) self.assertIsNotNone(share_replica['replica_state']) self.assertEqual(share['id'], share_replica['share_id']) self.assertTrue(expected_extra_keys.issubset(share_replica.keys())) def test_share_replica_get_with_share_server(self): session = db_api.get_session() share_server = db_utils.create_share_server() share = db_utils.create_share() replica = db_utils.create_share_replica( share_id=share['id'], replica_state=constants.REPLICA_STATE_ACTIVE, share_server_id=share_server['id'] ) expected_extra_keys = { 'backend_details', 'host', 'id', 'share_network_subnet_id', 'status', } with session.begin(): share_replica = db_api.share_replica_get( self.ctxt, replica['id'], with_share_server=True, session=session) self.assertIsNotNone(share_replica['replica_state']) self.assertEqual( share_server['id'], share_replica['share_server_id']) self.assertTrue(expected_extra_keys.issubset( share_replica['share_server'].keys())) def test_share_replica_update(self): share = db_utils.create_share() replica = db_utils.create_share_replica( share_id=share['id'], replica_state=constants.REPLICA_STATE_ACTIVE) updated_replica = db_api.share_replica_update( self.ctxt, replica['id'], {'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC}) self.assertEqual(constants.REPLICA_STATE_OUT_OF_SYNC, updated_replica['replica_state']) def test_share_replica_delete(self): share = db_utils.create_share() share = db_api.share_get(self.ctxt, share['id']) self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value='reservation')) self.mock_object(quota.QUOTAS, 'commit') replica = db_utils.create_share_replica( share_id=share['id'], replica_state=constants.REPLICA_STATE_ACTIVE) self.assertEqual(1, len( db_api.share_replicas_get_all_by_share(self.ctxt, share['id']))) db_api.share_replica_delete(self.ctxt, replica['id']) self.assertEqual( [], db_api.share_replicas_get_all_by_share(self.ctxt, share['id'])) share_type_id = share['instances'][0].get('share_type_id', None) quota.QUOTAS.reserve.assert_called_once_with( self.ctxt, project_id=share['project_id'], user_id=share['user_id'], share_type_id=share_type_id, share_replicas=-1, replica_gigabytes=share['size']) quota.QUOTAS.commit.assert_called_once_with( self.ctxt, 'reservation', project_id=share['project_id'], user_id=share['user_id'], share_type_id=share_type_id) @ddt.data( (True, {"share_replicas": -1, "replica_gigabytes": 0}, 'active'), (False, {"shares": -1, "gigabytes": 0}, None), (False, {"shares": -1, "gigabytes": 0, "share_replicas": -1, "replica_gigabytes": 0}, 'active') ) @ddt.unpack def test_share_instance_delete_quota_error(self, is_replica, deltas, replica_state): share = db_utils.create_share(replica_state=replica_state) share = db_api.share_get(self.ctxt, share['id']) instance_id = share['instances'][0]['id'] if is_replica: replica = db_utils.create_share_replica( share_id=share['id'], replica_state=constants.REPLICA_STATE_ACTIVE) instance_id = replica['id'] reservation = 'fake' share_type_id = share['instances'][0]['share_type_id'] self.mock_object(quota.QUOTAS, 'reserve', mock.Mock(return_value=reservation)) self.mock_object(quota.QUOTAS, 'commit', mock.Mock( side_effect=exception.QuotaError('fake'))) self.mock_object(quota.QUOTAS, 'rollback') # NOTE(silvacarlose): not calling with 
assertRaises since the # _update_share_instance_usages method is not raising an exception db_api.share_instance_delete( self.ctxt, instance_id, session=None, need_to_update_usages=True) quota.QUOTAS.reserve.assert_called_once_with( self.ctxt, project_id=share['project_id'], user_id=share['user_id'], share_type_id=share_type_id, **deltas) quota.QUOTAS.commit.assert_called_once_with( self.ctxt, reservation, project_id=share['project_id'], user_id=share['user_id'], share_type_id=share_type_id) quota.QUOTAS.rollback.assert_called_once_with( self.ctxt, reservation, share_type_id=share_type_id) def test_share_instance_access_copy(self): share = db_utils.create_share() rules = [] for i in range(0, 5): rules.append(db_utils.create_access(share_id=share['id'])) instance = db_utils.create_share_instance(share_id=share['id']) share_access_rules = db_api.share_instance_access_copy( self.ctxt, share['id'], instance['id']) share_access_rule_ids = [a['id'] for a in share_access_rules] self.assertEqual(5, len(share_access_rules)) for rule_id in share_access_rule_ids: self.assertIsNotNone( db_api.share_instance_access_get( self.ctxt, rule_id, instance['id'])) @ddt.ddt class ShareGroupDatabaseAPITestCase(test.TestCase): def setUp(self): """Run before each test.""" super(ShareGroupDatabaseAPITestCase, self).setUp() self.ctxt = context.get_admin_context() def test_share_group_create_with_share_type(self): fake_share_types = ["fake_share_type"] share_group = db_utils.create_share_group(share_types=fake_share_types) share_group = db_api.share_group_get(self.ctxt, share_group['id']) self.assertEqual(1, len(share_group['share_types'])) def test_share_group_get(self): share_group = db_utils.create_share_group() self.assertDictMatch( dict(share_group), dict(db_api.share_group_get(self.ctxt, share_group['id']))) def test_count_share_groups_in_share_network(self): share_network = db_utils.create_share_network() db_utils.create_share_group() db_utils.create_share_group(share_network_id=share_network['id']) count = db_api.count_share_groups_in_share_network( self.ctxt, share_network_id=share_network['id']) self.assertEqual(1, count) def test_share_group_get_all(self): expected_share_group = db_utils.create_share_group() share_groups = db_api.share_group_get_all(self.ctxt, detailed=False) self.assertEqual(1, len(share_groups)) share_group = share_groups[0] self.assertEqual(2, len(dict(share_group).keys())) self.assertEqual(expected_share_group['id'], share_group['id']) self.assertEqual(expected_share_group['name'], share_group['name']) def test_share_group_get_all_with_detail(self): expected_share_group = db_utils.create_share_group() share_groups = db_api.share_group_get_all(self.ctxt, detailed=True) self.assertEqual(1, len(share_groups)) self.assertDictMatch(dict(expected_share_group), dict(share_groups[0])) def test_share_group_get_all_by_host(self): fake_host = 'my_fake_host' expected_share_group = db_utils.create_share_group(host=fake_host) db_utils.create_share_group() share_groups = db_api.share_group_get_all_by_host( self.ctxt, fake_host, detailed=False) self.assertEqual(1, len(share_groups)) share_group = share_groups[0] self.assertEqual(2, len(dict(share_group).keys())) self.assertEqual(expected_share_group['id'], share_group['id']) self.assertEqual(expected_share_group['name'], share_group['name']) def test_share_group_get_all_by_host_with_details(self): fake_host = 'my_fake_host' expected_share_group = db_utils.create_share_group(host=fake_host) db_utils.create_share_group() share_groups = 
db_api.share_group_get_all_by_host( self.ctxt, fake_host, detailed=True) self.assertEqual(1, len(share_groups)) share_group = share_groups[0] self.assertDictMatch(dict(expected_share_group), dict(share_group)) self.assertEqual(fake_host, share_group['host']) def test_share_group_get_all_by_project(self): fake_project = 'fake_project' expected_group = db_utils.create_share_group( project_id=fake_project) db_utils.create_share_group() groups = db_api.share_group_get_all_by_project(self.ctxt, fake_project, detailed=False) self.assertEqual(1, len(groups)) group = groups[0] self.assertEqual(2, len(dict(group).keys())) self.assertEqual(expected_group['id'], group['id']) self.assertEqual(expected_group['name'], group['name']) def test_share_group_get_all_by_share_server(self): fake_server = 123 expected_group = db_utils.create_share_group( share_server_id=fake_server) db_utils.create_share_group() groups = db_api.share_group_get_all_by_share_server(self.ctxt, fake_server) self.assertEqual(1, len(groups)) group = groups[0] self.assertEqual(expected_group['id'], group['id']) self.assertEqual(expected_group['name'], group['name']) def test_share_group_get_all_by_project_with_details(self): fake_project = 'fake_project' expected_group = db_utils.create_share_group( project_id=fake_project) db_utils.create_share_group() groups = db_api.share_group_get_all_by_project(self.ctxt, fake_project, detailed=True) self.assertEqual(1, len(groups)) group = groups[0] self.assertDictMatch(dict(expected_group), dict(group)) self.assertEqual(fake_project, group['project_id']) @ddt.data(({'name': 'fo'}, 0), ({'description': 'd'}, 0), ({'name': 'foo', 'description': 'd'}, 0), ({'name': 'foo'}, 1), ({'description': 'ds'}, 1), ({'name~': 'foo', 'description~': 'ds'}, 2), ({'name': 'foo', 'description~': 'ds'}, 1), ({'name~': 'foo', 'description': 'ds'}, 1)) @ddt.unpack def test_share_group_get_all_by_name_and_description( self, search_opts, group_number): db_utils.create_share_group(name='fo1', description='d1') expected_group1 = db_utils.create_share_group(name='foo', description='ds') expected_group2 = db_utils.create_share_group(name='foo1', description='ds2') groups = db_api.share_group_get_all( self.ctxt, detailed=True, filters=search_opts) self.assertEqual(group_number, len(groups)) if group_number == 1: self.assertDictMatch(dict(expected_group1), dict(groups[0])) elif group_number == 2: self.assertDictMatch(dict(expected_group1), dict(groups[1])) self.assertDictMatch(dict(expected_group2), dict(groups[0])) def test_share_group_update(self): fake_name = "my_fake_name" expected_group = db_utils.create_share_group() expected_group['name'] = fake_name db_api.share_group_update(self.ctxt, expected_group['id'], {'name': fake_name}) group = db_api.share_group_get(self.ctxt, expected_group['id']) self.assertEqual(fake_name, group['name']) def test_share_group_destroy(self): group = db_utils.create_share_group() db_api.share_group_get(self.ctxt, group['id']) db_api.share_group_destroy(self.ctxt, group['id']) self.assertRaises(exception.NotFound, db_api.share_group_get, self.ctxt, group['id']) def test_count_shares_in_share_group(self): sg = db_utils.create_share_group() db_utils.create_share(share_group_id=sg['id']) db_utils.create_share() count = db_api.count_shares_in_share_group(self.ctxt, sg['id']) self.assertEqual(1, count) def test_count_sg_snapshots_in_share_group(self): sg = db_utils.create_share_group() db_utils.create_share_group_snapshot(sg['id']) db_utils.create_share_group_snapshot(sg['id']) count = 
db_api.count_share_group_snapshots_in_share_group( self.ctxt, sg['id']) self.assertEqual(2, count) def test_share_group_snapshot_get(self): sg = db_utils.create_share_group() sg_snap = db_utils.create_share_group_snapshot(sg['id']) self.assertDictMatch( dict(sg_snap), dict(db_api.share_group_snapshot_get(self.ctxt, sg_snap['id']))) def test_share_group_snapshot_get_all(self): sg = db_utils.create_share_group() expected_sg_snap = db_utils.create_share_group_snapshot(sg['id']) snaps = db_api.share_group_snapshot_get_all(self.ctxt, detailed=False) self.assertEqual(1, len(snaps)) snap = snaps[0] self.assertEqual(2, len(dict(snap).keys())) self.assertEqual(expected_sg_snap['id'], snap['id']) self.assertEqual(expected_sg_snap['name'], snap['name']) def test_share_group_snapshot_get_all_with_detail(self): sg = db_utils.create_share_group() expected_sg_snap = db_utils.create_share_group_snapshot(sg['id']) snaps = db_api.share_group_snapshot_get_all(self.ctxt, detailed=True) self.assertEqual(1, len(snaps)) snap = snaps[0] self.assertDictMatch(dict(expected_sg_snap), dict(snap)) def test_share_group_snapshot_get_all_by_project(self): fake_project = uuidutils.generate_uuid() sg = db_utils.create_share_group() expected_sg_snap = db_utils.create_share_group_snapshot( sg['id'], project_id=fake_project) snaps = db_api.share_group_snapshot_get_all_by_project( self.ctxt, fake_project, detailed=False) self.assertEqual(1, len(snaps)) snap = snaps[0] self.assertEqual(2, len(dict(snap).keys())) self.assertEqual(expected_sg_snap['id'], snap['id']) self.assertEqual(expected_sg_snap['name'], snap['name']) def test_share_group_snapshot_get_all_by_project_with_details(self): fake_project = uuidutils.generate_uuid() sg = db_utils.create_share_group() expected_sg_snap = db_utils.create_share_group_snapshot( sg['id'], project_id=fake_project) snaps = db_api.share_group_snapshot_get_all_by_project( self.ctxt, fake_project, detailed=True) self.assertEqual(1, len(snaps)) snap = snaps[0] self.assertDictMatch(dict(expected_sg_snap), dict(snap)) self.assertEqual(fake_project, snap['project_id']) def test_share_group_snapshot_update(self): fake_name = "my_fake_name" sg = db_utils.create_share_group() expected_sg_snap = db_utils.create_share_group_snapshot(sg['id']) expected_sg_snap['name'] = fake_name db_api.share_group_snapshot_update( self.ctxt, expected_sg_snap['id'], {'name': fake_name}) sg_snap = db_api.share_group_snapshot_get( self.ctxt, expected_sg_snap['id']) self.assertEqual(fake_name, sg_snap['name']) def test_share_group_snapshot_destroy(self): sg = db_utils.create_share_group() sg_snap = db_utils.create_share_group_snapshot(sg['id']) db_api.share_group_snapshot_get(self.ctxt, sg_snap['id']) db_api.share_group_snapshot_destroy(self.ctxt, sg_snap['id']) self.assertRaises( exception.NotFound, db_api.share_group_snapshot_get, self.ctxt, sg_snap['id']) def test_share_group_snapshot_members_get_all(self): sg = db_utils.create_share_group() share = db_utils.create_share(share_group_id=sg['id']) si = db_utils.create_share_instance(share_id=share['id']) sg_snap = db_utils.create_share_group_snapshot(sg['id']) expected_member = db_utils.create_share_group_snapshot_member( sg_snap['id'], share_instance_id=si['id']) members = db_api.share_group_snapshot_members_get_all( self.ctxt, sg_snap['id']) self.assertEqual(1, len(members)) self.assertDictMatch(dict(expected_member), dict(members[0])) def test_count_share_group_snapshot_members_in_share(self): sg = db_utils.create_share_group() share = 
db_utils.create_share(share_group_id=sg['id']) si = db_utils.create_share_instance(share_id=share['id']) share2 = db_utils.create_share(share_group_id=sg['id']) si2 = db_utils.create_share_instance(share_id=share2['id']) sg_snap = db_utils.create_share_group_snapshot(sg['id']) db_utils.create_share_group_snapshot_member( sg_snap['id'], share_instance_id=si['id']) db_utils.create_share_group_snapshot_member( sg_snap['id'], share_instance_id=si2['id']) count = db_api.count_share_group_snapshot_members_in_share( self.ctxt, share['id']) self.assertEqual(1, count) def test_share_group_snapshot_members_get(self): sg = db_utils.create_share_group() share = db_utils.create_share(share_group_id=sg['id']) si = db_utils.create_share_instance(share_id=share['id']) sg_snap = db_utils.create_share_group_snapshot(sg['id']) expected_member = db_utils.create_share_group_snapshot_member( sg_snap['id'], share_instance_id=si['id']) member = db_api.share_group_snapshot_member_get( self.ctxt, expected_member['id']) self.assertDictMatch(dict(expected_member), dict(member)) def test_share_group_snapshot_members_get_not_found(self): self.assertRaises( exception.ShareGroupSnapshotMemberNotFound, db_api.share_group_snapshot_member_get, self.ctxt, 'fake_id') def test_share_group_snapshot_member_update(self): sg = db_utils.create_share_group() share = db_utils.create_share(share_group_id=sg['id']) si = db_utils.create_share_instance(share_id=share['id']) sg_snap = db_utils.create_share_group_snapshot(sg['id']) expected_member = db_utils.create_share_group_snapshot_member( sg_snap['id'], share_instance_id=si['id']) db_api.share_group_snapshot_member_update( self.ctxt, expected_member['id'], {'status': constants.STATUS_AVAILABLE}) member = db_api.share_group_snapshot_member_get( self.ctxt, expected_member['id']) self.assertEqual(constants.STATUS_AVAILABLE, member['status']) @ddt.ddt class ShareGroupTypeAPITestCase(test.TestCase): def setUp(self): super(ShareGroupTypeAPITestCase, self).setUp() self.ctxt = context.RequestContext( user_id='user_id', project_id='project_id', is_admin=True) @ddt.data(True, False) def test_share_type_destroy_in_use(self, used_by_groups): share_type_1 = db_utils.create_share_type(name='fike') share_type_2 = db_utils.create_share_type(name='bowman') share_group_type_1 = db_utils.create_share_group_type( name='orange', is_public=False, share_types=[share_type_1['id']], group_specs={'dabo': 'allin', 'cadence': 'count'}, override_defaults=True) db_api.share_group_type_access_add(self.ctxt, share_group_type_1['id'], "2018ndaetfigovnsaslcahfavmrpions") db_api.share_group_type_access_add(self.ctxt, share_group_type_1['id'], "2016ndaetfigovnsaslcahfavmrpions") share_group_type_2 = db_utils.create_share_group_type( name='regalia', share_types=[share_type_2['id']]) if used_by_groups: share_group_1 = db_utils.create_share_group( share_group_type_id=share_group_type_1['id'], share_types=[share_type_1['id']]) share_group_2 = db_utils.create_share_group( share_group_type_id=share_group_type_2['id'], share_types=[share_type_2['id']]) self.assertRaises(exception.ShareGroupTypeInUse, db_api.share_group_type_destroy, self.ctxt, share_group_type_1['id']) self.assertRaises(exception.ShareGroupTypeInUse, db_api.share_group_type_destroy, self.ctxt, share_group_type_2['id']) # Cleanup share groups db_api.share_group_destroy(self.ctxt, share_group_1['id']) db_api.share_group_destroy(self.ctxt, share_group_2['id']) # Let's cleanup share_group_type_1 and verify it is gone 
self.assertIsNone(db_api.share_group_type_destroy( self.ctxt, share_group_type_1['id'])) self.assertDictMatch( {}, db_api.share_group_type_specs_get( self.ctxt, share_group_type_1['id'])) self.assertRaises(exception.ShareGroupTypeNotFound, db_api.share_group_type_access_get_all, self.ctxt, share_group_type_1['id']) self.assertRaises(exception.ShareGroupTypeNotFound, db_api.share_group_type_get, self.ctxt, share_group_type_1['id']) # share_group_type_2 must still be around self.assertEqual(share_group_type_2['id'], db_api.share_group_type_get( self.ctxt, share_group_type_2['id'])['id']) @ddt.ddt class ShareSnapshotDatabaseAPITestCase(test.TestCase): def setUp(self): """Run before each test.""" super(ShareSnapshotDatabaseAPITestCase, self).setUp() self.ctxt = context.get_admin_context() self.share_instances = [ db_utils.create_share_instance( status=constants.STATUS_REPLICATION_CHANGE, share_id='fake_share_id_1'), db_utils.create_share_instance( status=constants.STATUS_AVAILABLE, share_id='fake_share_id_1'), db_utils.create_share_instance( status=constants.STATUS_ERROR_DELETING, share_id='fake_share_id_2'), db_utils.create_share_instance( status=constants.STATUS_MANAGING, share_id='fake_share_id_2'), ] self.share_1 = db_utils.create_share( id='fake_share_id_1', instances=self.share_instances[0:2]) self.share_2 = db_utils.create_share( id='fake_share_id_2', instances=self.share_instances[2:-1]) self.snapshot_instances = [ db_utils.create_snapshot_instance( 'fake_snapshot_id_1', status=constants.STATUS_CREATING, share_instance_id=self.share_instances[0]['id']), db_utils.create_snapshot_instance( 'fake_snapshot_id_1', status=constants.STATUS_ERROR, share_instance_id=self.share_instances[1]['id']), db_utils.create_snapshot_instance( 'fake_snapshot_id_1', status=constants.STATUS_DELETING, share_instance_id=self.share_instances[2]['id']), db_utils.create_snapshot_instance( 'fake_snapshot_id_2', status=constants.STATUS_AVAILABLE, id='fake_snapshot_instance_id', provider_location='hogsmeade:snapshot1', progress='87%', share_instance_id=self.share_instances[3]['id']), ] self.snapshot_1 = db_utils.create_snapshot( id='fake_snapshot_id_1', share_id=self.share_1['id'], instances=self.snapshot_instances[0:3]) self.snapshot_2 = db_utils.create_snapshot( id='fake_snapshot_id_2', share_id=self.share_2['id'], instances=self.snapshot_instances[3:4]) self.snapshot_instance_export_locations = [ db_utils.create_snapshot_instance_export_locations( self.snapshot_instances[0].id, path='1.1.1.1:/fake_path', is_admin_only=True), db_utils.create_snapshot_instance_export_locations( self.snapshot_instances[1].id, path='2.2.2.2:/fake_path', is_admin_only=True), db_utils.create_snapshot_instance_export_locations( self.snapshot_instances[2].id, path='3.3.3.3:/fake_path', is_admin_only=True), db_utils.create_snapshot_instance_export_locations( self.snapshot_instances[3].id, path='4.4.4.4:/fake_path', is_admin_only=True) ] def test_create(self): share = db_utils.create_share(size=1) values = { 'share_id': share['id'], 'size': share['size'], 'user_id': share['user_id'], 'project_id': share['project_id'], 'status': constants.STATUS_CREATING, 'progress': '0%', 'share_size': share['size'], 'display_name': 'fake', 'display_description': 'fake', 'share_proto': share['share_proto'] } actual_result = db_api.share_snapshot_create( self.ctxt, values, create_snapshot_instance=True) self.assertEqual(1, len(actual_result.instances)) self.assertSubDictMatch(values, actual_result.to_dict()) def 
test_share_snapshot_get_latest_for_share(self): share = db_utils.create_share(size=1) values = { 'share_id': share['id'], 'size': share['size'], 'user_id': share['user_id'], 'project_id': share['project_id'], 'status': constants.STATUS_CREATING, 'progress': '0%', 'share_size': share['size'], 'display_description': 'fake', 'share_proto': share['share_proto'], } values1 = copy.deepcopy(values) values1['display_name'] = 'snap1' db_api.share_snapshot_create(self.ctxt, values1) values2 = copy.deepcopy(values) values2['display_name'] = 'snap2' db_api.share_snapshot_create(self.ctxt, values2) values3 = copy.deepcopy(values) values3['display_name'] = 'snap3' db_api.share_snapshot_create(self.ctxt, values3) result = db_api.share_snapshot_get_latest_for_share(self.ctxt, share['id']) self.assertSubDictMatch(values3, result.to_dict()) def test_get_instance(self): snapshot = db_utils.create_snapshot(with_share=True) instance = db_api.share_snapshot_instance_get( self.ctxt, snapshot.instance['id'], with_share_data=True) instance_dict = instance.to_dict() self.assertTrue(hasattr(instance, 'name')) self.assertTrue(hasattr(instance, 'share_name')) self.assertTrue(hasattr(instance, 'share_id')) self.assertIn('name', instance_dict) self.assertIn('share_name', instance_dict) @ddt.data(None, constants.STATUS_ERROR) def test_share_snapshot_instance_get_all_with_filters_some(self, status): expected_status = status or (constants.STATUS_CREATING, constants.STATUS_DELETING) expected_number = 1 if status else 3 filters = { 'snapshot_ids': 'fake_snapshot_id_1', 'statuses': expected_status } instances = db_api.share_snapshot_instance_get_all_with_filters( self.ctxt, filters) for instance in instances: self.assertEqual('fake_snapshot_id_1', instance['snapshot_id']) self.assertIn(instance['status'], filters['statuses']) self.assertEqual(expected_number, len(instances)) def test_share_snapshot_instance_get_all_with_filters_all_filters(self): filters = { 'snapshot_ids': 'fake_snapshot_id_2', 'instance_ids': 'fake_snapshot_instance_id', 'statuses': constants.STATUS_AVAILABLE, 'share_instance_ids': self.share_instances[3]['id'], } instances = db_api.share_snapshot_instance_get_all_with_filters( self.ctxt, filters, with_share_data=True) self.assertEqual(1, len(instances)) self.assertEqual('fake_snapshot_instance_id', instances[0]['id']) self.assertEqual( self.share_2['id'], instances[0]['share_instance']['share_id']) def test_share_snapshot_instance_get_all_with_filters_wrong_filters(self): filters = { 'some_key': 'some_value', 'some_other_key': 'some_other_value', } instances = db_api.share_snapshot_instance_get_all_with_filters( self.ctxt, filters) self.assertEqual(6, len(instances)) def test_share_snapshot_instance_create(self): snapshot = db_utils.create_snapshot(with_share=True) share = snapshot['share'] share_instance = db_utils.create_share_instance(share_id=share['id']) values = { 'snapshot_id': snapshot['id'], 'share_instance_id': share_instance['id'], 'status': constants.STATUS_MANAGING, 'progress': '88%', 'provider_location': 'whomping_willow', } actual_result = db_api.share_snapshot_instance_create( self.ctxt, snapshot['id'], values) snapshot = db_api.share_snapshot_get(self.ctxt, snapshot['id']) self.assertSubDictMatch(values, actual_result.to_dict()) self.assertEqual(2, len(snapshot['instances'])) def test_share_snapshot_instance_update(self): snapshot = db_utils.create_snapshot(with_share=True) values = { 'snapshot_id': snapshot['id'], 'status': constants.STATUS_ERROR, 'progress': '18%', 'provider_location': 
'godrics_hollow', } actual_result = db_api.share_snapshot_instance_update( self.ctxt, snapshot['instance']['id'], values) self.assertSubDictMatch(values, actual_result.to_dict()) @ddt.data(2, 1) def test_share_snapshot_instance_delete(self, instances): snapshot = db_utils.create_snapshot(with_share=True) first_instance_id = snapshot['instance']['id'] if instances > 1: instance = db_utils.create_snapshot_instance( snapshot['id'], share_instance_id=snapshot['share']['instance']['id']) else: instance = snapshot['instance'] retval = db_api.share_snapshot_instance_delete( self.ctxt, instance['id']) self.assertIsNone(retval) if instances == 1: self.assertRaises(exception.ShareSnapshotNotFound, db_api.share_snapshot_get, self.ctxt, snapshot['id']) else: snapshot = db_api.share_snapshot_get(self.ctxt, snapshot['id']) self.assertEqual(1, len(snapshot['instances'])) self.assertEqual(first_instance_id, snapshot['instance']['id']) def test_share_snapshot_access_create(self): values = { 'share_snapshot_id': self.snapshot_1['id'], } actual_result = db_api.share_snapshot_access_create(self.ctxt, values) self.assertSubDictMatch(values, actual_result.to_dict()) def test_share_snapshot_instance_access_get_all(self): access = db_utils.create_snapshot_access( share_snapshot_id=self.snapshot_1['id']) session = db_api.get_session() values = {'share_snapshot_instance_id': self.snapshot_instances[0].id, 'access_id': access['id']} rules = db_api.share_snapshot_instance_access_get_all( self.ctxt, access['id'], session) self.assertSubDictMatch(values, rules[0].to_dict()) def test_share_snapshot_access_get(self): access = db_utils.create_snapshot_access( share_snapshot_id=self.snapshot_1['id']) values = {'share_snapshot_id': self.snapshot_1['id']} actual_value = db_api.share_snapshot_access_get( self.ctxt, access['id']) self.assertSubDictMatch(values, actual_value.to_dict()) def test_share_snapshot_access_get_all_for_share_snapshot(self): access = db_utils.create_snapshot_access( share_snapshot_id=self.snapshot_1['id']) values = {'access_type': access['access_type'], 'access_to': access['access_to'], 'share_snapshot_id': self.snapshot_1['id']} actual_value = db_api.share_snapshot_access_get_all_for_share_snapshot( self.ctxt, self.snapshot_1['id'], {}) self.assertSubDictMatch(values, actual_value[0].to_dict()) @ddt.data({'existing': {'access_type': 'cephx', 'access_to': 'alice'}, 'new': {'access_type': 'user', 'access_to': 'alice'}, 'result': False}, {'existing': {'access_type': 'user', 'access_to': 'bob'}, 'new': {'access_type': 'user', 'access_to': 'bob'}, 'result': True}, {'existing': {'access_type': 'ip', 'access_to': '10.0.0.10/32'}, 'new': {'access_type': 'ip', 'access_to': '10.0.0.10'}, 'result': True}, {'existing': {'access_type': 'ip', 'access_to': '10.10.0.11'}, 'new': {'access_type': 'ip', 'access_to': '10.10.0.11'}, 'result': True}, {'existing': {'access_type': 'ip', 'access_to': 'fd21::11'}, 'new': {'access_type': 'ip', 'access_to': 'fd21::11'}, 'result': True}, {'existing': {'access_type': 'ip', 'access_to': 'fd21::10'}, 'new': {'access_type': 'ip', 'access_to': 'fd21::10/128'}, 'result': True}, {'existing': {'access_type': 'ip', 'access_to': '10.10.0.0/22'}, 'new': {'access_type': 'ip', 'access_to': '10.10.0.0/24'}, 'result': False}, {'existing': {'access_type': 'ip', 'access_to': '2620:52::/48'}, 'new': {'access_type': 'ip', 'access_to': '2620:52:0:13b8::/64'}, 'result': False}) @ddt.unpack def test_share_snapshot_check_for_existing_access(self, existing, new, result): db_utils.create_snapshot_access( 
share_snapshot_id=self.snapshot_1['id'], access_type=existing['access_type'], access_to=existing['access_to']) rule_exists = db_api.share_snapshot_check_for_existing_access( self.ctxt, self.snapshot_1['id'], new['access_type'], new['access_to']) self.assertEqual(result, rule_exists) def test_share_snapshot_access_get_all_for_snapshot_instance(self): access = db_utils.create_snapshot_access( share_snapshot_id=self.snapshot_1['id']) values = {'access_type': access['access_type'], 'access_to': access['access_to'], 'share_snapshot_id': self.snapshot_1['id']} out = db_api.share_snapshot_access_get_all_for_snapshot_instance( self.ctxt, self.snapshot_instances[0].id) self.assertSubDictMatch(values, out[0].to_dict()) def test_share_snapshot_instance_access_update_state(self): access = db_utils.create_snapshot_access( share_snapshot_id=self.snapshot_1['id']) values = {'state': constants.STATUS_ACTIVE, 'access_id': access['id'], 'share_snapshot_instance_id': self.snapshot_instances[0].id} actual_result = db_api.share_snapshot_instance_access_update( self.ctxt, access['id'], self.snapshot_1.instance['id'], {'state': constants.STATUS_ACTIVE}) self.assertSubDictMatch(values, actual_result.to_dict()) def test_share_snapshot_instance_access_get(self): access = db_utils.create_snapshot_access( share_snapshot_id=self.snapshot_1['id']) values = {'access_id': access['id'], 'share_snapshot_instance_id': self.snapshot_instances[0].id} actual_result = db_api.share_snapshot_instance_access_get( self.ctxt, access['id'], self.snapshot_instances[0].id) self.assertSubDictMatch(values, actual_result.to_dict()) def test_share_snapshot_instance_access_delete(self): access = db_utils.create_snapshot_access( share_snapshot_id=self.snapshot_1['id']) db_api.share_snapshot_instance_access_delete( self.ctxt, access['id'], self.snapshot_1.instance['id']) def test_share_snapshot_instance_export_location_create(self): values = { 'share_snapshot_instance_id': self.snapshot_instances[0].id, } actual_result = db_api.share_snapshot_instance_export_location_create( self.ctxt, values) self.assertSubDictMatch(values, actual_result.to_dict()) def test_share_snapshot_export_locations_get(self): out = db_api.share_snapshot_export_locations_get( self.ctxt, self.snapshot_1['id']) keys = ['share_snapshot_instance_id', 'path', 'is_admin_only'] for expected, actual in zip(self.snapshot_instance_export_locations, out): [self.assertEqual(expected[k], actual[k]) for k in keys] def test_share_snapshot_instance_export_locations_get(self): out = db_api.share_snapshot_instance_export_locations_get_all( self.ctxt, self.snapshot_instances[0].id) keys = ['share_snapshot_instance_id', 'path', 'is_admin_only'] for key in keys: self.assertEqual(self.snapshot_instance_export_locations[0][key], out[0][key]) class ShareExportLocationsDatabaseAPITestCase(test.TestCase): def setUp(self): super(ShareExportLocationsDatabaseAPITestCase, self).setUp() self.ctxt = context.get_admin_context() def test_update_valid_order(self): share = db_utils.create_share() initial_locations = ['fake1/1/', 'fake2/2', 'fake3/3'] update_locations = ['fake4/4', 'fake2/2', 'fake3/3'] # add initial locations db_api.share_export_locations_update(self.ctxt, share.instance['id'], initial_locations, False) # update locations db_api.share_export_locations_update(self.ctxt, share.instance['id'], update_locations, True) actual_result = db_api.share_export_locations_get(self.ctxt, share['id']) # actual result should contain locations in exact same order self.assertEqual(actual_result, 
update_locations) def test_update_string(self): share = db_utils.create_share() initial_location = 'fake1/1/' db_api.share_export_locations_update(self.ctxt, share.instance['id'], initial_location, False) actual_result = db_api.share_export_locations_get(self.ctxt, share['id']) self.assertEqual(actual_result, [initial_location]) def test_get_admin_export_locations(self): ctxt_user = context.RequestContext( user_id='fake user', project_id='fake project', is_admin=False) share = db_utils.create_share() locations = [ {'path': 'fake1/1/', 'is_admin_only': True}, {'path': 'fake2/2/', 'is_admin_only': True}, {'path': 'fake3/3/', 'is_admin_only': True}, ] db_api.share_export_locations_update( self.ctxt, share.instance['id'], locations, delete=False) user_result = db_api.share_export_locations_get(ctxt_user, share['id']) self.assertEqual([], user_result) admin_result = db_api.share_export_locations_get( self.ctxt, share['id']) self.assertEqual(3, len(admin_result)) for location in locations: self.assertIn(location['path'], admin_result) def test_get_user_export_locations(self): ctxt_user = context.RequestContext( user_id='fake user', project_id='fake project', is_admin=False) share = db_utils.create_share() locations = [ {'path': 'fake1/1/', 'is_admin_only': False}, {'path': 'fake2/2/', 'is_admin_only': False}, {'path': 'fake3/3/', 'is_admin_only': False}, ] db_api.share_export_locations_update( self.ctxt, share.instance['id'], locations, delete=False) user_result = db_api.share_export_locations_get(ctxt_user, share['id']) self.assertEqual(3, len(user_result)) for location in locations: self.assertIn(location['path'], user_result) admin_result = db_api.share_export_locations_get( self.ctxt, share['id']) self.assertEqual(3, len(admin_result)) for location in locations: self.assertIn(location['path'], admin_result) def test_get_user_export_locations_old_view(self): ctxt_user = context.RequestContext( user_id='fake user', project_id='fake project', is_admin=False) share = db_utils.create_share() locations = ['fake1/1/', 'fake2/2', 'fake3/3'] db_api.share_export_locations_update( self.ctxt, share.instance['id'], locations, delete=False) user_result = db_api.share_export_locations_get(ctxt_user, share['id']) self.assertEqual(locations, user_result) admin_result = db_api.share_export_locations_get( self.ctxt, share['id']) self.assertEqual(locations, admin_result) @ddt.ddt class ShareInstanceExportLocationsMetadataDatabaseAPITestCase(test.TestCase): def setUp(self): clname = ShareInstanceExportLocationsMetadataDatabaseAPITestCase super(clname, self).setUp() self.ctxt = context.get_admin_context() share_id = 'fake_share_id' instances = [ db_utils.create_share_instance( share_id=share_id, status=constants.STATUS_AVAILABLE), db_utils.create_share_instance( share_id=share_id, status=constants.STATUS_MIGRATING), db_utils.create_share_instance( share_id=share_id, status=constants.STATUS_MIGRATING_TO), ] self.share = db_utils.create_share( id=share_id, instances=instances) self.initial_locations = ['/fake/foo/', '/fake/bar', '/fake/quuz'] self.shown_locations = ['/fake/foo/', '/fake/bar'] for i in range(0, 3): db_api.share_export_locations_update( self.ctxt, instances[i]['id'], self.initial_locations[i], delete=False) def _get_export_location_uuid_by_path(self, path): els = db_api.share_export_locations_get_by_share_id( self.ctxt, self.share.id) export_location_uuid = None for el in els: if el.path == path: export_location_uuid = el.uuid self.assertIsNotNone(export_location_uuid) return export_location_uuid def 
test_get_export_locations_by_share_id(self): els = db_api.share_export_locations_get_by_share_id( self.ctxt, self.share.id) self.assertEqual(3, len(els)) for path in self.shown_locations: self.assertTrue(any([path in el.path for el in els])) def test_get_export_locations_by_share_id_ignore_migration_dest(self): els = db_api.share_export_locations_get_by_share_id( self.ctxt, self.share.id, ignore_migration_destination=True) self.assertEqual(2, len(els)) for path in self.shown_locations: self.assertTrue(any([path in el.path for el in els])) def test_get_export_locations_by_share_instance_id(self): els = db_api.share_export_locations_get_by_share_instance_id( self.ctxt, self.share.instance.id) self.assertEqual(1, len(els)) for path in [self.shown_locations[1]]: self.assertTrue(any([path in el.path for el in els])) def test_export_location_metadata_update_delete(self): export_location_uuid = self._get_export_location_uuid_by_path( self.initial_locations[0]) metadata = { 'foo_key': 'foo_value', 'bar_key': 'bar_value', 'quuz_key': 'quuz_value', } db_api.export_location_metadata_update( self.ctxt, export_location_uuid, metadata, False) db_api.export_location_metadata_delete( self.ctxt, export_location_uuid, list(metadata.keys())[0:-1]) result = db_api.export_location_metadata_get( self.ctxt, export_location_uuid) key = list(metadata.keys())[-1] self.assertEqual({key: metadata[key]}, result) db_api.export_location_metadata_delete( self.ctxt, export_location_uuid) result = db_api.export_location_metadata_get( self.ctxt, export_location_uuid) self.assertEqual({}, result) def test_export_location_metadata_update_get(self): # Write metadata for target export location export_location_uuid = self._get_export_location_uuid_by_path( self.initial_locations[0]) metadata = {'foo_key': 'foo_value', 'bar_key': 'bar_value'} db_api.export_location_metadata_update( self.ctxt, export_location_uuid, metadata, False) # Write metadata for some concurrent export location other_export_location_uuid = self._get_export_location_uuid_by_path( self.initial_locations[1]) other_metadata = {'key_from_other_el': 'value_of_key_from_other_el'} db_api.export_location_metadata_update( self.ctxt, other_export_location_uuid, other_metadata, False) result = db_api.export_location_metadata_get( self.ctxt, export_location_uuid) self.assertEqual(metadata, result) updated_metadata = { 'foo_key': metadata['foo_key'], 'quuz_key': 'quuz_value', } db_api.export_location_metadata_update( self.ctxt, export_location_uuid, updated_metadata, True) result = db_api.export_location_metadata_get( self.ctxt, export_location_uuid) self.assertEqual(updated_metadata, result) @ddt.data( ("k", "v"), ("k" * 256, "v"), ("k", "v" * 1024), ("k" * 256, "v" * 1024), ) @ddt.unpack def test_set_metadata_with_different_length(self, key, value): export_location_uuid = self._get_export_location_uuid_by_path( self.initial_locations[1]) metadata = {key: value} db_api.export_location_metadata_update( self.ctxt, export_location_uuid, metadata, False) result = db_api.export_location_metadata_get( self.ctxt, export_location_uuid) self.assertEqual(metadata, result) @ddt.ddt class DriverPrivateDataDatabaseAPITestCase(test.TestCase): def setUp(self): """Run before each test.""" super(DriverPrivateDataDatabaseAPITestCase, self).setUp() self.ctxt = context.get_admin_context() def _get_driver_test_data(self): return uuidutils.generate_uuid() @ddt.data({"details": {"foo": "bar", "tee": "too"}, "valid": {"foo": "bar", "tee": "too"}}, {"details": {"foo": "bar", "tee": ["test"]}, 
"valid": {"foo": "bar", "tee": six.text_type(["test"])}}) @ddt.unpack def test_update(self, details, valid): test_id = self._get_driver_test_data() initial_data = db_api.driver_private_data_get(self.ctxt, test_id) db_api.driver_private_data_update(self.ctxt, test_id, details) actual_data = db_api.driver_private_data_get(self.ctxt, test_id) self.assertEqual({}, initial_data) self.assertEqual(valid, actual_data) @ddt.data({'with_deleted': True, 'append': False}, {'with_deleted': True, 'append': True}, {'with_deleted': False, 'append': False}, {'with_deleted': False, 'append': True}) @ddt.unpack def test_update_with_more_values(self, with_deleted, append): test_id = self._get_driver_test_data() details = {"tee": "too"} more_details = {"foo": "bar"} result = {"tee": "too", "foo": "bar"} db_api.driver_private_data_update(self.ctxt, test_id, details) if with_deleted: db_api.driver_private_data_delete(self.ctxt, test_id) if append: more_details.update(details) if with_deleted and not append: result.pop("tee") db_api.driver_private_data_update(self.ctxt, test_id, more_details) actual_result = db_api.driver_private_data_get(self.ctxt, test_id) self.assertEqual(result, actual_result) @ddt.data(True, False) def test_update_with_duplicate(self, with_deleted): test_id = self._get_driver_test_data() details = {"tee": "too"} db_api.driver_private_data_update(self.ctxt, test_id, details) if with_deleted: db_api.driver_private_data_delete(self.ctxt, test_id) db_api.driver_private_data_update(self.ctxt, test_id, details) actual_result = db_api.driver_private_data_get(self.ctxt, test_id) self.assertEqual(details, actual_result) def test_update_with_delete_existing(self): test_id = self._get_driver_test_data() details = {"key1": "val1", "key2": "val2", "key3": "val3"} details_update = {"key1": "val1_upd", "key4": "new_val"} # Create new details db_api.driver_private_data_update(self.ctxt, test_id, details) db_api.driver_private_data_update(self.ctxt, test_id, details_update, delete_existing=True) actual_result = db_api.driver_private_data_get( self.ctxt, test_id) self.assertEqual(details_update, actual_result) def test_get(self): test_id = self._get_driver_test_data() test_key = "foo" test_keys = [test_key, "tee"] details = {test_keys[0]: "val", test_keys[1]: "val", "mee": "foo"} db_api.driver_private_data_update(self.ctxt, test_id, details) actual_result_all = db_api.driver_private_data_get( self.ctxt, test_id) actual_result_single_key = db_api.driver_private_data_get( self.ctxt, test_id, test_key) actual_result_list = db_api.driver_private_data_get( self.ctxt, test_id, test_keys) self.assertEqual(details, actual_result_all) self.assertEqual(details[test_key], actual_result_single_key) self.assertEqual(dict.fromkeys(test_keys, "val"), actual_result_list) def test_delete_single(self): test_id = self._get_driver_test_data() test_key = "foo" details = {test_key: "bar", "tee": "too"} valid_result = {"tee": "too"} db_api.driver_private_data_update(self.ctxt, test_id, details) db_api.driver_private_data_delete(self.ctxt, test_id, test_key) actual_result = db_api.driver_private_data_get( self.ctxt, test_id) self.assertEqual(valid_result, actual_result) def test_delete_all(self): test_id = self._get_driver_test_data() details = {"foo": "bar", "tee": "too"} db_api.driver_private_data_update(self.ctxt, test_id, details) db_api.driver_private_data_delete(self.ctxt, test_id) actual_result = db_api.driver_private_data_get( self.ctxt, test_id) self.assertEqual({}, actual_result) @ddt.ddt class 
ShareNetworkDatabaseAPITestCase(BaseDatabaseAPITestCase): def __init__(self, *args, **kwargs): super(ShareNetworkDatabaseAPITestCase, self).__init__(*args, **kwargs) self.fake_context = context.RequestContext(user_id='fake user', project_id='fake project', is_admin=False) def setUp(self): super(ShareNetworkDatabaseAPITestCase, self).setUp() self.share_nw_dict = {'id': 'fake network id', 'project_id': self.fake_context.project_id, 'user_id': 'fake_user_id', 'name': 'whatever', 'description': 'fake description'} def test_create_one_network(self): result = db_api.share_network_create(self.fake_context, self.share_nw_dict) self._check_fields(expected=self.share_nw_dict, actual=result) self.assertEqual(0, len(result['share_instances'])) self.assertEqual(0, len(result['security_services'])) def test_create_two_networks_in_different_tenants(self): share_nw_dict2 = self.share_nw_dict.copy() share_nw_dict2['id'] = None share_nw_dict2['project_id'] = 'fake project 2' result1 = db_api.share_network_create(self.fake_context, self.share_nw_dict) result2 = db_api.share_network_create(self.fake_context.elevated(), share_nw_dict2) self._check_fields(expected=self.share_nw_dict, actual=result1) self._check_fields(expected=share_nw_dict2, actual=result2) def test_create_two_networks_in_one_tenant(self): share_nw_dict2 = self.share_nw_dict.copy() share_nw_dict2['id'] += "suffix" result1 = db_api.share_network_create(self.fake_context, self.share_nw_dict) result2 = db_api.share_network_create(self.fake_context, share_nw_dict2) self._check_fields(expected=self.share_nw_dict, actual=result1) self._check_fields(expected=share_nw_dict2, actual=result2) def test_create_with_duplicated_id(self): db_api.share_network_create(self.fake_context, self.share_nw_dict) self.assertRaises(db_exception.DBDuplicateEntry, db_api.share_network_create, self.fake_context, self.share_nw_dict) def test_get(self): db_api.share_network_create(self.fake_context, self.share_nw_dict) result = db_api.share_network_get(self.fake_context, self.share_nw_dict['id']) self._check_fields(expected=self.share_nw_dict, actual=result) self.assertEqual(0, len(result['share_instances'])) self.assertEqual(0, len(result['security_services'])) def _create_share_network_for_project(self, project_id): ctx = context.RequestContext(user_id='fake user', project_id=project_id, is_admin=False) share_data = self.share_nw_dict.copy() share_data['project_id'] = project_id db_api.share_network_create(ctx, share_data) return share_data def test_get_other_tenant_as_admin(self): expected = self._create_share_network_for_project('fake project 2') result = db_api.share_network_get(self.fake_context.elevated(), self.share_nw_dict['id']) self._check_fields(expected=expected, actual=result) self.assertEqual(0, len(result['share_instances'])) self.assertEqual(0, len(result['security_services'])) def test_get_other_tenant(self): self._create_share_network_for_project('fake project 2') self.assertRaises(exception.ShareNetworkNotFound, db_api.share_network_get, self.fake_context, self.share_nw_dict['id']) @ddt.data([{'id': 'fake share id1'}], [{'id': 'fake share id1'}, {'id': 'fake share id2'}],) def test_get_with_shares(self, shares): db_api.share_network_create(self.fake_context, self.share_nw_dict) share_instances = [] for share in shares: share.update({'share_network_id': self.share_nw_dict['id']}) share_instances.append( db_api.share_create(self.fake_context, share).instance ) result = db_api.share_network_get(self.fake_context, self.share_nw_dict['id']) 
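        # Note (added, illustrative): share_network_get() exposes the related
        # share_instances on the returned network, so every share created
        # above via db_api.share_create() should surface here carrying the
        # share_network_id it was created with. A minimal sketch of the
        # relationship asserted below (names reused from this test; the
        # snippet itself is not part of the original file):
        #
        #   net = db_api.share_network_get(ctxt, 'fake network id')
        #   linked = [si['share_network_id'] for si in net['share_instances']]
        #   # every entry in `linked` is expected to equal 'fake network id'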
self.assertEqual(len(shares), len(result['share_instances'])) for index, share_instance in enumerate(share_instances): self.assertEqual( share_instance['share_network_id'], result['share_instances'][index]['share_network_id'] ) @ddt.data([{'id': 'fake security service id1', 'type': 'fake type'}], [{'id': 'fake security service id1', 'type': 'fake type'}, {'id': 'fake security service id2', 'type': 'fake type'}]) def test_get_with_security_services(self, security_services): db_api.share_network_create(self.fake_context, self.share_nw_dict) for service in security_services: service.update({'project_id': self.fake_context.project_id}) db_api.security_service_create(self.fake_context, service) db_api.share_network_add_security_service( self.fake_context, self.share_nw_dict['id'], service['id']) result = db_api.share_network_get(self.fake_context, self.share_nw_dict['id']) self.assertEqual(len(security_services), len(result['security_services'])) for index, service in enumerate(security_services): self._check_fields(expected=service, actual=result['security_services'][index]) @ddt.data([{'id': 'fake_id_1', 'availability_zone_id': 'None'}], [{'id': 'fake_id_2', 'availability_zone_id': 'None'}, {'id': 'fake_id_3', 'availability_zone_id': 'fake_az_id'}]) def test_get_with_subnets(self, subnets): db_api.share_network_create(self.fake_context, self.share_nw_dict) for subnet in subnets: subnet['share_network_id'] = self.share_nw_dict['id'] db_api.share_network_subnet_create(self.fake_context, subnet) result = db_api.share_network_get(self.fake_context, self.share_nw_dict['id']) self.assertEqual(len(subnets), len(result['share_network_subnets'])) for index, subnet in enumerate(subnets): self._check_fields(expected=subnet, actual=result['share_network_subnets'][index]) def test_get_not_found(self): self.assertRaises(exception.ShareNetworkNotFound, db_api.share_network_get, self.fake_context, 'fake id') def test_delete(self): db_api.share_network_create(self.fake_context, self.share_nw_dict) db_api.share_network_delete(self.fake_context, self.share_nw_dict['id']) self.assertRaises(exception.ShareNetworkNotFound, db_api.share_network_get, self.fake_context, self.share_nw_dict['id']) def test_delete_not_found(self): self.assertRaises(exception.ShareNetworkNotFound, db_api.share_network_delete, self.fake_context, 'fake id') def test_update(self): new_name = 'fake_new_name' db_api.share_network_create(self.fake_context, self.share_nw_dict) result_update = db_api.share_network_update(self.fake_context, self.share_nw_dict['id'], {'name': new_name}) result_get = db_api.share_network_get(self.fake_context, self.share_nw_dict['id']) self.assertEqual(new_name, result_update['name']) self._check_fields(expected=dict(result_update.items()), actual=dict(result_get.items())) def test_update_not_found(self): self.assertRaises(exception.ShareNetworkNotFound, db_api.share_network_update, self.fake_context, 'fake id', {}) @ddt.data(1, 2) def test_get_all_one_record(self, records_count): index = 0 share_networks = [] while index < records_count: share_network_dict = dict(self.share_nw_dict) fake_id = 'fake_id%s' % index share_network_dict.update({'id': fake_id, 'project_id': fake_id}) share_networks.append(share_network_dict) db_api.share_network_create(self.fake_context.elevated(), share_network_dict) index += 1 result = db_api.share_network_get_all(self.fake_context.elevated()) self.assertEqual(len(share_networks), len(result)) for index, net in enumerate(share_networks): self._check_fields(expected=net, 
actual=result[index]) def test_get_all_by_project(self): db_api.share_network_create(self.fake_context, self.share_nw_dict) share_nw_dict2 = dict(self.share_nw_dict) share_nw_dict2['id'] = 'fake share nw id2' share_nw_dict2['project_id'] = 'fake project 2' new_context = context.RequestContext(user_id='fake user 2', project_id='fake project 2', is_admin=False) db_api.share_network_create(new_context, share_nw_dict2) result = db_api.share_network_get_all_by_project( self.fake_context.elevated(), share_nw_dict2['project_id']) self.assertEqual(1, len(result)) self._check_fields(expected=share_nw_dict2, actual=result[0]) def test_add_security_service(self): security_dict1 = {'id': 'fake security service id1', 'project_id': self.fake_context.project_id, 'type': 'fake type'} db_api.share_network_create(self.fake_context, self.share_nw_dict) db_api.security_service_create(self.fake_context, security_dict1) db_api.share_network_add_security_service(self.fake_context, self.share_nw_dict['id'], security_dict1['id']) result = (db_api.model_query( self.fake_context, models.ShareNetworkSecurityServiceAssociation). filter_by(security_service_id=security_dict1['id']). filter_by(share_network_id=self.share_nw_dict['id']). first()) self.assertIsNotNone(result) def test_add_security_service_not_found_01(self): security_service_id = 'unknown security service' db_api.share_network_create(self.fake_context, self.share_nw_dict) self.assertRaises(exception.SecurityServiceNotFound, db_api.share_network_add_security_service, self.fake_context, self.share_nw_dict['id'], security_service_id) def test_add_security_service_not_found_02(self): security_dict1 = {'id': 'fake security service id1', 'project_id': self.fake_context.project_id, 'type': 'fake type'} share_nw_id = 'unknown share network' db_api.security_service_create(self.fake_context, security_dict1) self.assertRaises(exception.ShareNetworkNotFound, db_api.share_network_add_security_service, self.fake_context, share_nw_id, security_dict1['id']) def test_add_security_service_association_error_already_associated(self): security_dict1 = {'id': 'fake security service id1', 'project_id': self.fake_context.project_id, 'type': 'fake type'} db_api.share_network_create(self.fake_context, self.share_nw_dict) db_api.security_service_create(self.fake_context, security_dict1) db_api.share_network_add_security_service(self.fake_context, self.share_nw_dict['id'], security_dict1['id']) self.assertRaises( exception.ShareNetworkSecurityServiceAssociationError, db_api.share_network_add_security_service, self.fake_context, self.share_nw_dict['id'], security_dict1['id']) def test_remove_security_service(self): security_dict1 = {'id': 'fake security service id1', 'project_id': self.fake_context.project_id, 'type': 'fake type'} db_api.share_network_create(self.fake_context, self.share_nw_dict) db_api.security_service_create(self.fake_context, security_dict1) db_api.share_network_add_security_service(self.fake_context, self.share_nw_dict['id'], security_dict1['id']) db_api.share_network_remove_security_service(self.fake_context, self.share_nw_dict['id'], security_dict1['id']) result = (db_api.model_query( self.fake_context, models.ShareNetworkSecurityServiceAssociation). filter_by(security_service_id=security_dict1['id']). 
filter_by(share_network_id=self.share_nw_dict['id']).first()) self.assertIsNone(result) share_nw_ref = db_api.share_network_get(self.fake_context, self.share_nw_dict['id']) self.assertEqual(0, len(share_nw_ref['security_services'])) def test_remove_security_service_not_found_01(self): security_service_id = 'unknown security service' db_api.share_network_create(self.fake_context, self.share_nw_dict) self.assertRaises(exception.SecurityServiceNotFound, db_api.share_network_remove_security_service, self.fake_context, self.share_nw_dict['id'], security_service_id) def test_remove_security_service_not_found_02(self): security_dict1 = {'id': 'fake security service id1', 'project_id': self.fake_context.project_id, 'type': 'fake type'} share_nw_id = 'unknown share network' db_api.security_service_create(self.fake_context, security_dict1) self.assertRaises(exception.ShareNetworkNotFound, db_api.share_network_remove_security_service, self.fake_context, share_nw_id, security_dict1['id']) def test_remove_security_service_dissociation_error(self): security_dict1 = {'id': 'fake security service id1', 'project_id': self.fake_context.project_id, 'type': 'fake type'} db_api.share_network_create(self.fake_context, self.share_nw_dict) db_api.security_service_create(self.fake_context, security_dict1) self.assertRaises( exception.ShareNetworkSecurityServiceDissociationError, db_api.share_network_remove_security_service, self.fake_context, self.share_nw_dict['id'], security_dict1['id']) def test_security_services_relation(self): security_dict1 = {'id': 'fake security service id1', 'project_id': self.fake_context.project_id, 'type': 'fake type'} db_api.share_network_create(self.fake_context, self.share_nw_dict) db_api.security_service_create(self.fake_context, security_dict1) result = db_api.share_network_get(self.fake_context, self.share_nw_dict['id']) self.assertEqual(0, len(result['security_services'])) def test_shares_relation(self): share_dict = {'id': 'fake share id1'} db_api.share_network_create(self.fake_context, self.share_nw_dict) db_api.share_create(self.fake_context, share_dict) result = db_api.share_network_get(self.fake_context, self.share_nw_dict['id']) self.assertEqual(0, len(result['share_instances'])) @ddt.ddt class ShareNetworkSubnetDatabaseAPITestCase(BaseDatabaseAPITestCase): def __init__(self, *args, **kwargs): super(ShareNetworkSubnetDatabaseAPITestCase, self).__init__( *args, **kwargs) self.fake_context = context.RequestContext(user_id='fake user', project_id='fake project', is_admin=False) def setUp(self): super(ShareNetworkSubnetDatabaseAPITestCase, self).setUp() self.subnet_dict = {'id': 'fake network id', 'neutron_net_id': 'fake net id', 'neutron_subnet_id': 'fake subnet id', 'network_type': 'vlan', 'segmentation_id': 1000, 'share_network_id': 'fake_id', 'cidr': '10.0.0.0/24', 'ip_version': 4, 'availability_zone_id': None} def test_create(self): result = db_api.share_network_subnet_create( self.fake_context, self.subnet_dict) self._check_fields(expected=self.subnet_dict, actual=result) def test_create_duplicated_id(self): db_api.share_network_subnet_create(self.fake_context, self.subnet_dict) self.assertRaises(db_exception.DBDuplicateEntry, db_api.share_network_subnet_create, self.fake_context, self.subnet_dict) def test_get(self): db_api.share_network_subnet_create(self.fake_context, self.subnet_dict) result = db_api.share_network_subnet_get(self.fake_context, self.subnet_dict['id']) self._check_fields(expected=self.subnet_dict, actual=result) @ddt.data([{'id': 'fake_id_1', 
'identifier': 'fake_identifier', 'host': 'fake_host'}], [{'id': 'fake_id_2', 'identifier': 'fake_identifier', 'host': 'fake_host'}, {'id': 'fake_id_3', 'identifier': 'fake_identifier', 'host': 'fake_host'}]) def test_get_with_share_servers(self, share_servers): db_api.share_network_subnet_create(self.fake_context, self.subnet_dict) for share_server in share_servers: share_server['share_network_subnet_id'] = self.subnet_dict['id'] db_api.share_server_create(self.fake_context, share_server) result = db_api.share_network_subnet_get(self.fake_context, self.subnet_dict['id']) self.assertEqual(len(share_servers), len(result['share_servers'])) for index, share_server in enumerate(share_servers): self._check_fields(expected=share_server, actual=result['share_servers'][index]) def test_get_not_found(self): db_api.share_network_subnet_create(self.fake_context, self.subnet_dict) self.assertRaises(exception.ShareNetworkSubnetNotFound, db_api.share_network_subnet_get, self.fake_context, 'fake_id') def test_delete(self): db_api.share_network_subnet_create(self.fake_context, self.subnet_dict) db_api.share_network_subnet_delete(self.fake_context, self.subnet_dict['id']) self.assertRaises(exception.ShareNetworkSubnetNotFound, db_api.share_network_subnet_delete, self.fake_context, self.subnet_dict['id']) def test_delete_not_found(self): self.assertRaises(exception.ShareNetworkSubnetNotFound, db_api.share_network_subnet_delete, self.fake_context, 'fake_id') def test_update(self): update_dict = { 'gateway': 'fake_gateway', 'ip_version': 6, 'mtu': '' } db_api.share_network_subnet_create(self.fake_context, self.subnet_dict) db_api.share_network_subnet_update( self.fake_context, self.subnet_dict['id'], update_dict) result = db_api.share_network_subnet_get(self.fake_context, self.subnet_dict['id']) self._check_fields(expected=update_dict, actual=result) def test_update_not_found(self): self.assertRaises(exception.ShareNetworkSubnetNotFound, db_api.share_network_subnet_update, self.fake_context, self.subnet_dict['id'], {}) @ddt.data([ { 'id': 'sn_id1', 'project_id': 'fake project', 'user_id': 'fake' } ], [ { 'id': 'fake_id', 'project_id': 'fake project', 'user_id': 'fake' }, { 'id': 'sn_id2', 'project_id': 'fake project', 'user_id': 'fake' } ]) def test_get_all_by_share_network(self, share_networks): for idx, share_network in enumerate(share_networks): self.subnet_dict['share_network_id'] = share_network['id'] self.subnet_dict['id'] = 'fake_id%s' % idx db_api.share_network_create(self.fake_context, share_network) db_api.share_network_subnet_create(self.fake_context, self.subnet_dict) for share_network in share_networks: subnets = db_api.share_network_subnet_get_all_by_share_network( self.fake_context, share_network['id']) self.assertEqual(1, len(subnets)) def test_get_by_availability_zone_id(self): az = db_api.availability_zone_create_if_not_exist(self.fake_context, 'fake_zone_id') self.subnet_dict['availability_zone_id'] = az['id'] db_api.share_network_subnet_create(self.fake_context, self.subnet_dict) result = db_api.share_network_subnet_get_by_availability_zone_id( self.fake_context, self.subnet_dict['share_network_id'], az['id']) self._check_fields(expected=self.subnet_dict, actual=result) def test_get_default_subnet(self): db_api.share_network_subnet_create(self.fake_context, self.subnet_dict) result = db_api.share_network_subnet_get_default_subnet( self.fake_context, self.subnet_dict['share_network_id']) self._check_fields(expected=self.subnet_dict, actual=result) @ddt.ddt class 
SecurityServiceDatabaseAPITestCase(BaseDatabaseAPITestCase): def __init__(self, *args, **kwargs): super(SecurityServiceDatabaseAPITestCase, self).__init__(*args, **kwargs) self.fake_context = context.RequestContext(user_id='fake user', project_id='fake project', is_admin=False) def _check_expected_fields(self, result, expected): for key in expected: self.assertEqual(expected[key], result[key]) def test_create(self): result = db_api.security_service_create(self.fake_context, security_service_dict) self._check_expected_fields(result, security_service_dict) def test_create_with_duplicated_id(self): db_api.security_service_create(self.fake_context, security_service_dict) self.assertRaises(db_exception.DBDuplicateEntry, db_api.security_service_create, self.fake_context, security_service_dict) def test_get(self): db_api.security_service_create(self.fake_context, security_service_dict) result = db_api.security_service_get(self.fake_context, security_service_dict['id']) self._check_expected_fields(result, security_service_dict) def test_get_not_found(self): self.assertRaises(exception.SecurityServiceNotFound, db_api.security_service_get, self.fake_context, 'wrong id') def test_delete(self): db_api.security_service_create(self.fake_context, security_service_dict) db_api.security_service_delete(self.fake_context, security_service_dict['id']) self.assertRaises(exception.SecurityServiceNotFound, db_api.security_service_get, self.fake_context, security_service_dict['id']) def test_update(self): update_dict = { 'dns_ip': 'new dns', 'server': 'new ldap server', 'domain': 'new ldap domain', 'ou': 'new ldap ou', 'user': 'new user', 'password': 'new password', 'name': 'new whatever', 'description': 'new nevermind', } db_api.security_service_create(self.fake_context, security_service_dict) result = db_api.security_service_update(self.fake_context, security_service_dict['id'], update_dict) self._check_expected_fields(result, update_dict) def test_update_no_updates(self): db_api.security_service_create(self.fake_context, security_service_dict) result = db_api.security_service_update(self.fake_context, security_service_dict['id'], {}) self._check_expected_fields(result, security_service_dict) def test_update_not_found(self): self.assertRaises(exception.SecurityServiceNotFound, db_api.security_service_update, self.fake_context, 'wrong id', {}) def test_get_all_no_records(self): result = db_api.security_service_get_all(self.fake_context) self.assertEqual(0, len(result)) @ddt.data(1, 2) def test_get_all(self, records_count): index = 0 services = [] while index < records_count: service_dict = dict(security_service_dict) service_dict.update({'id': 'fake_id%s' % index}) services.append(service_dict) db_api.security_service_create(self.fake_context, service_dict) index += 1 result = db_api.security_service_get_all(self.fake_context) self.assertEqual(len(services), len(result)) for index, service in enumerate(services): self._check_fields(expected=service, actual=result[index]) def test_get_all_two_records(self): dict1 = security_service_dict dict2 = security_service_dict.copy() dict2['id'] = 'fake id 2' db_api.security_service_create(self.fake_context, dict1) db_api.security_service_create(self.fake_context, dict2) result = db_api.security_service_get_all(self.fake_context) self.assertEqual(2, len(result)) def test_get_all_by_project(self): dict1 = security_service_dict dict2 = security_service_dict.copy() dict2['id'] = 'fake id 2' dict2['project_id'] = 'fake project 2' 
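        # Note (added, illustrative): security_service_get_all_by_project()
        # filters strictly on project_id, so each of the two records created
        # below should only be visible through a query for its own project.
        # Hedged sketch of the expected behaviour (not part of the original
        # test body):
        #
        #   db_api.security_service_get_all_by_project(ctxt, 'fake project')
        #   # -> [dict1]
        #   db_api.security_service_get_all_by_project(ctxt, 'fake project 2')
        #   # -> [dict2]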
db_api.security_service_create(self.fake_context, dict1) db_api.security_service_create(self.fake_context, dict2) result1 = db_api.security_service_get_all_by_project( self.fake_context, dict1['project_id']) self.assertEqual(1, len(result1)) self._check_expected_fields(result1[0], dict1) result2 = db_api.security_service_get_all_by_project( self.fake_context, dict2['project_id']) self.assertEqual(1, len(result2)) self._check_expected_fields(result2[0], dict2) @ddt.ddt class ShareServerDatabaseAPITestCase(test.TestCase): def setUp(self): super(ShareServerDatabaseAPITestCase, self).setUp() self.ctxt = context.RequestContext(user_id='user_id', project_id='project_id', is_admin=True) def test_share_server_get(self): expected = db_utils.create_share_server() server = db_api.share_server_get(self.ctxt, expected['id']) self.assertEqual(expected['id'], server['id']) self.assertEqual(expected.share_network_subnet_id, server.share_network_subnet_id) self.assertEqual(expected.host, server.host) self.assertEqual(expected.status, server.status) def test_get_not_found(self): fake_id = 'FAKE_UUID' self.assertRaises(exception.ShareServerNotFound, db_api.share_server_get, self.ctxt, fake_id) def test_create(self): server = db_utils.create_share_server() self.assertTrue(server['id']) self.assertEqual(server.share_network_subnet_id, server['share_network_subnet_id']) self.assertEqual(server.host, server['host']) self.assertEqual(server.status, server['status']) def test_delete(self): server = db_utils.create_share_server() num_records = len(db_api.share_server_get_all(self.ctxt)) db_api.share_server_delete(self.ctxt, server['id']) self.assertEqual(num_records - 1, len(db_api.share_server_get_all(self.ctxt))) def test_delete_not_found(self): fake_id = 'FAKE_UUID' self.assertRaises(exception.ShareServerNotFound, db_api.share_server_delete, self.ctxt, fake_id) def test_update(self): update = { 'share_network_id': 'update_net', 'host': 'update_host', 'status': constants.STATUS_ACTIVE, } server = db_utils.create_share_server() updated_server = db_api.share_server_update(self.ctxt, server['id'], update) self.assertEqual(server['id'], updated_server['id']) self.assertEqual(update['share_network_id'], updated_server.share_network_id) self.assertEqual(update['host'], updated_server.host) self.assertEqual(update['status'], updated_server.status) def test_update_not_found(self): fake_id = 'FAKE_UUID' self.assertRaises(exception.ShareServerNotFound, db_api.share_server_update, self.ctxt, fake_id, {}) def test_get_all_by_host_and_share_net_valid(self): subnet_1 = { 'id': '1', 'share_network_id': '1', } subnet_2 = { 'id': '2', 'share_network_id': '2', } valid = { 'share_network_subnet_id': '1', 'host': 'host1', 'status': constants.STATUS_ACTIVE, } invalid = { 'share_network_subnet_id': '2', 'host': 'host1', 'status': constants.STATUS_ERROR, } other = { 'share_network_subnet_id': '1', 'host': 'host2', 'status': constants.STATUS_ACTIVE, } db_utils.create_share_network_subnet(**subnet_1) db_utils.create_share_network_subnet(**subnet_2) valid = db_utils.create_share_server(**valid) db_utils.create_share_server(**invalid) db_utils.create_share_server(**other) servers = db_api.share_server_get_all_by_host_and_share_subnet_valid( self.ctxt, host='host1', share_subnet_id='1') self.assertEqual(valid['id'], servers[0]['id']) def test_get_all_by_host_and_share_net_not_found(self): self.assertRaises( exception.ShareServerNotFound, db_api.share_server_get_all_by_host_and_share_subnet_valid, self.ctxt, host='fake', 
share_subnet_id='fake' ) def test_get_all(self): srv1 = { 'share_network_id': '1', 'host': 'host1', 'status': constants.STATUS_ACTIVE, } srv2 = { 'share_network_id': '1', 'host': 'host1', 'status': constants.STATUS_ERROR, } srv3 = { 'share_network_id': '2', 'host': 'host2', 'status': constants.STATUS_ACTIVE, } servers = db_api.share_server_get_all(self.ctxt) self.assertEqual(0, len(servers)) to_delete = db_utils.create_share_server(**srv1) db_utils.create_share_server(**srv2) db_utils.create_share_server(**srv3) servers = db_api.share_server_get_all(self.ctxt) self.assertEqual(3, len(servers)) db_api.share_server_delete(self.ctxt, to_delete['id']) servers = db_api.share_server_get_all(self.ctxt) self.assertEqual(2, len(servers)) def test_backend_details_set(self): details = { 'value1': '1', 'value2': '2', } server = db_utils.create_share_server() db_api.share_server_backend_details_set(self.ctxt, server['id'], details) self.assertDictMatch( details, db_api.share_server_get(self.ctxt, server['id'])['backend_details'] ) def test_backend_details_set_not_found(self): fake_id = 'FAKE_UUID' self.assertRaises(exception.ShareServerNotFound, db_api.share_server_backend_details_set, self.ctxt, fake_id, {}) def test_get_with_details(self): values = { 'share_network_subnet_id': 'fake-share-net-id', 'host': 'hostname', 'status': constants.STATUS_ACTIVE, } details = { 'value1': '1', 'value2': '2', } srv_id = db_utils.create_share_server(**values)['id'] db_api.share_server_backend_details_set(self.ctxt, srv_id, details) server = db_api.share_server_get(self.ctxt, srv_id) self.assertEqual(srv_id, server['id']) self.assertEqual(values['share_network_subnet_id'], server.share_network_subnet_id) self.assertEqual(values['host'], server.host) self.assertEqual(values['status'], server.status) self.assertDictMatch(server['backend_details'], details) self.assertIn('backend_details', server.to_dict()) def test_delete_with_details(self): server = db_utils.create_share_server(backend_details={ 'value1': '1', 'value2': '2', }) num_records = len(db_api.share_server_get_all(self.ctxt)) db_api.share_server_delete(self.ctxt, server['id']) self.assertEqual(num_records - 1, len(db_api.share_server_get_all(self.ctxt))) @ddt.data('fake', '-fake-', 'foo_some_fake_identifier_bar', 'foo-some-fake-identifier-bar', 'foobar') def test_share_server_search_by_identifier(self, identifier): server = { 'share_network_id': 'fake-share-net-id', 'host': 'hostname', 'status': constants.STATUS_ACTIVE, 'is_auto_deletable': True, 'updated_at': datetime.datetime(2018, 5, 1), 'identifier': 'some_fake_identifier', } server = db_utils.create_share_server(**server) if identifier == 'foobar': self.assertRaises(exception.ShareServerNotFound, db_api.share_server_search_by_identifier, self.ctxt, identifier) else: result = db_api.share_server_search_by_identifier( self.ctxt, identifier) self.assertEqual(server['id'], result[0]['id']) @ddt.data((True, True, True, 3), (True, True, False, 2), (True, False, False, 1), (False, False, False, 0)) @ddt.unpack def test_share_server_get_all_unused_deletable(self, server_1_is_auto_deletable, server_2_is_auto_deletable, server_3_is_auto_deletable, expected_len): server1 = { 'share_network_id': 'fake-share-net-id', 'host': 'hostname', 'status': constants.STATUS_ACTIVE, 'is_auto_deletable': server_1_is_auto_deletable, 'updated_at': datetime.datetime(2018, 5, 1) } server2 = { 'share_network_id': 'fake-share-net-id', 'host': 'hostname', 'status': constants.STATUS_ACTIVE, 'is_auto_deletable': 
server_2_is_auto_deletable, 'updated_at': datetime.datetime(2018, 5, 1) } server3 = { 'share_network_id': 'fake-share-net-id', 'host': 'hostname', 'status': constants.STATUS_ACTIVE, 'is_auto_deletable': server_3_is_auto_deletable, 'updated_at': datetime.datetime(2018, 5, 1) } db_utils.create_share_server(**server1) db_utils.create_share_server(**server2) db_utils.create_share_server(**server3) host = 'hostname' updated_before = datetime.datetime(2019, 5, 1) unused_deletable = db_api.share_server_get_all_unused_deletable( self.ctxt, host, updated_before) self.assertEqual(expected_len, len(unused_deletable)) class ServiceDatabaseAPITestCase(test.TestCase): def setUp(self): super(ServiceDatabaseAPITestCase, self).setUp() self.ctxt = context.RequestContext(user_id='user_id', project_id='project_id', is_admin=True) self.service_data = {'host': "fake_host", 'binary': "fake_binary", 'topic': "fake_topic", 'report_count': 0, 'availability_zone': "fake_zone"} def test_create(self): service = db_api.service_create(self.ctxt, self.service_data) az = db_api.availability_zone_get(self.ctxt, "fake_zone") self.assertEqual(az.id, service.availability_zone_id) self.assertSubDictMatch(self.service_data, service.to_dict()) def test_update(self): az_name = 'fake_zone2' update_data = {"availability_zone": az_name} service = db_api.service_create(self.ctxt, self.service_data) db_api.service_update(self.ctxt, service['id'], update_data) service = db_api.service_get(self.ctxt, service['id']) az = db_api.availability_zone_get(self.ctxt, az_name) self.assertEqual(az.id, service.availability_zone_id) valid_values = self.service_data valid_values.update(update_data) self.assertSubDictMatch(valid_values, service.to_dict()) @ddt.ddt class AvailabilityZonesDatabaseAPITestCase(test.TestCase): def setUp(self): super(AvailabilityZonesDatabaseAPITestCase, self).setUp() self.ctxt = context.RequestContext(user_id='user_id', project_id='project_id', is_admin=True) @ddt.data({'fake': 'fake'}, {}, {'fakeavailability_zone': 'fake'}, {'availability_zone': None}, {'availability_zone': ''}) def test__ensure_availability_zone_exists_invalid(self, test_values): session = db_api.get_session() self.assertRaises(ValueError, db_api._ensure_availability_zone_exists, self.ctxt, test_values, session) def test_az_get(self): az_name = 'test_az' az = db_api.availability_zone_create_if_not_exist(self.ctxt, az_name) az_by_id = db_api.availability_zone_get(self.ctxt, az['id']) az_by_name = db_api.availability_zone_get(self.ctxt, az_name) self.assertEqual(az_name, az_by_id['name']) self.assertEqual(az_name, az_by_name['name']) self.assertEqual(az['id'], az_by_id['id']) self.assertEqual(az['id'], az_by_name['id']) def test_az_get_all(self): db_api.availability_zone_create_if_not_exist(self.ctxt, 'test1') db_api.availability_zone_create_if_not_exist(self.ctxt, 'test2') db_api.availability_zone_create_if_not_exist(self.ctxt, 'test3') db_api.service_create(self.ctxt, {'availability_zone': 'test2'}) actual_result = db_api.availability_zone_get_all(self.ctxt) self.assertEqual(1, len(actual_result)) self.assertEqual('test2', actual_result[0]['name']) @ddt.ddt class NetworkAllocationsDatabaseAPITestCase(test.TestCase): def setUp(self): super(NetworkAllocationsDatabaseAPITestCase, self).setUp() self.user_id = 'user_id' self.project_id = 'project_id' self.share_server_id = 'foo_share_server_id' self.ctxt = context.RequestContext( user_id=self.user_id, project_id=self.project_id, is_admin=True) self.user_network_allocations = [ {'share_server_id': 
self.share_server_id, 'ip_address': '1.1.1.1', 'status': constants.STATUS_ACTIVE, 'label': None}, {'share_server_id': self.share_server_id, 'ip_address': '2.2.2.2', 'status': constants.STATUS_ACTIVE, 'label': 'user'}, ] self.admin_network_allocations = [ {'share_server_id': self.share_server_id, 'ip_address': '3.3.3.3', 'status': constants.STATUS_ACTIVE, 'label': 'admin'}, {'share_server_id': self.share_server_id, 'ip_address': '4.4.4.4', 'status': constants.STATUS_ACTIVE, 'label': 'admin'}, ] def _setup_network_allocations_get_for_share_server(self): # Create share network share_network_data = { 'id': 'foo_share_network_id', 'user_id': self.user_id, 'project_id': self.project_id, } db_api.share_network_create(self.ctxt, share_network_data) # Create share server share_server_data = { 'id': self.share_server_id, 'share_network_id': share_network_data['id'], 'host': 'fake_host', 'status': 'active', } db_api.share_server_create(self.ctxt, share_server_data) # Create user network allocations for user_network_allocation in self.user_network_allocations: db_api.network_allocation_create( self.ctxt, user_network_allocation) # Create admin network allocations for admin_network_allocation in self.admin_network_allocations: db_api.network_allocation_create( self.ctxt, admin_network_allocation) def test_get_only_user_network_allocations(self): self._setup_network_allocations_get_for_share_server() result = db_api.network_allocations_get_for_share_server( self.ctxt, self.share_server_id, label='user') self.assertEqual( len(self.user_network_allocations), len(result)) for na in result: self.assertIn(na.label, (None, 'user')) def test_get_only_admin_network_allocations(self): self._setup_network_allocations_get_for_share_server() result = db_api.network_allocations_get_for_share_server( self.ctxt, self.share_server_id, label='admin') self.assertEqual( len(self.admin_network_allocations), len(result)) for na in result: self.assertEqual(na.label, 'admin') def test_get_all_network_allocations(self): self._setup_network_allocations_get_for_share_server() result = db_api.network_allocations_get_for_share_server( self.ctxt, self.share_server_id, label=None) self.assertEqual( len(self.user_network_allocations + self.admin_network_allocations), len(result) ) for na in result: self.assertIn(na.label, ('admin', 'user', None)) def test_network_allocation_get(self): self._setup_network_allocations_get_for_share_server() for allocation in self.admin_network_allocations: result = db_api.network_allocation_get(self.ctxt, allocation['id']) self.assertIsInstance(result, models.NetworkAllocation) self.assertEqual(allocation['id'], result.id) for allocation in self.user_network_allocations: result = db_api.network_allocation_get(self.ctxt, allocation['id']) self.assertIsInstance(result, models.NetworkAllocation) self.assertEqual(allocation['id'], result.id) def test_network_allocation_get_no_result(self): self._setup_network_allocations_get_for_share_server() self.assertRaises(exception.NotFound, db_api.network_allocation_get, self.ctxt, id='fake') @ddt.data(True, False) def test_network_allocation_get_read_deleted(self, read_deleted): self._setup_network_allocations_get_for_share_server() deleted_allocation = { 'share_server_id': self.share_server_id, 'ip_address': '1.1.1.1', 'status': constants.STATUS_ACTIVE, 'label': None, 'deleted': True, } new_obj = db_api.network_allocation_create(self.ctxt, deleted_allocation) if read_deleted: result = db_api.network_allocation_get(self.ctxt, new_obj.id, read_deleted=read_deleted) 
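            # Note (added, illustrative): passing read_deleted=True makes
            # network_allocation_get() return rows that were soft-deleted
            # (deleted=True), whereas the default behaviour, exercised in the
            # else branch below, raises exception.NotFound for such rows.
            # Sketch (assumption, not original code):
            #
            #   db_api.network_allocation_get(ctxt, alloc_id)
            #   # -> raises exception.NotFound for a soft-deleted allocation
            #   db_api.network_allocation_get(ctxt, alloc_id,
            #                                 read_deleted=True)
            #   # -> returns the soft-deleted row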
self.assertIsInstance(result, models.NetworkAllocation) self.assertEqual(new_obj.id, result.id) else: self.assertRaises(exception.NotFound, db_api.network_allocation_get, self.ctxt, id=self.share_server_id) def test_network_allocation_update(self): self._setup_network_allocations_get_for_share_server() for allocation in self.admin_network_allocations: old_obj = db_api.network_allocation_get(self.ctxt, allocation['id']) self.assertEqual('False', old_obj.deleted) updated_object = db_api.network_allocation_update( self.ctxt, allocation['id'], {'deleted': 'True'}) self.assertEqual('True', updated_object.deleted) @ddt.data(True, False) def test_network_allocation_update_read_deleted(self, read_deleted): self._setup_network_allocations_get_for_share_server() db_api.network_allocation_update( self.ctxt, self.admin_network_allocations[0]['id'], {'deleted': 'True'} ) if read_deleted: updated_object = db_api.network_allocation_update( self.ctxt, self.admin_network_allocations[0]['id'], {'deleted': 'False'}, read_deleted=read_deleted ) self.assertEqual('False', updated_object.deleted) else: self.assertRaises(exception.NotFound, db_api.network_allocation_update, self.ctxt, id=self.share_server_id, values={'deleted': read_deleted}, read_deleted=read_deleted) class ReservationDatabaseAPITest(test.TestCase): def setUp(self): super(ReservationDatabaseAPITest, self).setUp() self.context = context.get_admin_context() def test_reservation_expire(self): quota_usage = db_api.quota_usage_create(self.context, 'fake_project', 'fake_user', 'fake_resource', 0, 12, until_refresh=None) session = db_api.get_session() for time_s in (-1, 1): reservation = db_api._reservation_create( self.context, 'fake_uuid', quota_usage, 'fake_project', 'fake_user', 'fake_resource', 10, timeutils.utcnow() + datetime.timedelta(days=time_s), session=session) db_api.reservation_expire(self.context) reservations = db_api._quota_reservations_query(session, self.context, ['fake_uuid']).all() quota_usage = db_api.quota_usage_get(self.context, 'fake_project', 'fake_resource') self.assertEqual(1, len(reservations)) self.assertEqual(reservation['id'], reservations[0]['id']) self.assertEqual(2, quota_usage['reserved']) @ddt.ddt class PurgeDeletedTest(test.TestCase): def setUp(self): super(PurgeDeletedTest, self).setUp() self.context = context.get_admin_context() def _days_ago(self, begin, end): return timeutils.utcnow() - datetime.timedelta( days=random.randint(begin, end)) def _sqlite_has_fk_constraint(self): # SQLAlchemy doesn't support it at all with < SQLite 3.6.19 import sqlite3 tup = sqlite3.sqlite_version_info return tup[0] > 3 or (tup[0] == 3 and tup[1] >= 7) def _turn_on_foreign_key(self): engine = db_api.get_engine() connection = engine.raw_connection() try: cursor = connection.cursor() cursor.execute("PRAGMA foreign_keys = ON") finally: connection.close() @ddt.data({"del_days": 0, "num_left": 0}, {"del_days": 10, "num_left": 2}, {"del_days": 20, "num_left": 4}) @ddt.unpack def test_purge_records_with_del_days(self, del_days, num_left): fake_now = timeutils.utcnow() with mock.patch.object(timeutils, 'utcnow', mock.Mock(return_value=fake_now)): # create resources soft-deleted in 0~9, 10~19 days ago for start, end in ((0, 9), (10, 19)): for unused in range(2): # share type db_utils.create_share_type(id=uuidutils.generate_uuid(), deleted_at=self._days_ago(start, end)) # share share = db_utils.create_share_without_instance( metadata={}, deleted_at=self._days_ago(start, end)) # create share network network = db_utils.create_share_network( 
id=uuidutils.generate_uuid(), deleted_at=self._days_ago(start, end)) # create security service db_utils.create_security_service( id=uuidutils.generate_uuid(), share_network_id=network.id, deleted_at=self._days_ago(start, end)) # create share instance s_instance = db_utils.create_share_instance( id=uuidutils.generate_uuid(), share_network_id=network.id, share_id=share.id) # share access db_utils.create_share_access( id=uuidutils.generate_uuid(), share_id=share['id'], deleted_at=self._days_ago(start, end)) # create share server db_utils.create_share_server( id=uuidutils.generate_uuid(), deleted_at=self._days_ago(start, end), share_network_id=network.id) # create snapshot db_api.share_snapshot_create( self.context, {'share_id': share['id'], 'deleted_at': self._days_ago(start, end)}, create_snapshot_instance=False) # update share instance db_api.share_instance_update( self.context, s_instance.id, {'deleted_at': self._days_ago(start, end)}) db_api.purge_deleted_records(self.context, age_in_days=del_days) for model in [models.ShareTypes, models.Share, models.ShareNetwork, models.ShareAccessMapping, models.ShareInstance, models.ShareServer, models.ShareSnapshot, models.SecurityService]: rows = db_api.model_query(self.context, model).count() self.assertEqual(num_left, rows) def test_purge_records_with_illegal_args(self): self.assertRaises(TypeError, db_api.purge_deleted_records, self.context) self.assertRaises(exception.InvalidParameterValue, db_api.purge_deleted_records, self.context, age_in_days=-1) def test_purge_records_with_constraint(self): if not self._sqlite_has_fk_constraint(): self.skipTest( 'sqlite is too old for reliable SQLA foreign_keys') self._turn_on_foreign_key() type_id = uuidutils.generate_uuid() # create share type1 db_utils.create_share_type(id=type_id, deleted_at=self._days_ago(1, 1)) # create share type2 db_utils.create_share_type(id=uuidutils.generate_uuid(), deleted_at=self._days_ago(1, 1)) # create share share = db_utils.create_share(share_type_id=type_id) db_api.purge_deleted_records(self.context, age_in_days=0) type_row = db_api.model_query(self.context, models.ShareTypes).count() # share type1 should not be deleted self.assertEqual(1, type_row) db_api.model_query(self.context, models.ShareInstance).delete() db_api.share_delete(self.context, share['id']) db_api.purge_deleted_records(self.context, age_in_days=0) s_row = db_api.model_query(self.context, models.Share).count() type_row = db_api.model_query(self.context, models.ShareTypes).count() self.assertEqual(0, s_row + type_row) @ddt.ddt class ShareTypeAPITestCase(test.TestCase): def setUp(self): super(ShareTypeAPITestCase, self).setUp() self.ctxt = context.RequestContext( user_id='user_id', project_id='project_id', is_admin=True) @ddt.data({'used_by_shares': True, 'used_by_group_types': False}, {'used_by_shares': False, 'used_by_group_types': True}, {'used_by_shares': True, 'used_by_group_types': True}) @ddt.unpack def test_share_type_destroy_in_use(self, used_by_shares, used_by_group_types): share_type_1 = db_utils.create_share_type( name='orange', extra_specs={'somekey': 'someval'}, is_public=False, override_defaults=True) share_type_2 = db_utils.create_share_type( name='regalia', override_defaults=True) db_api.share_type_access_add(self.ctxt, share_type_1['id'], "2018ndaetfigovnsaslcahfavmrpions") db_api.share_type_access_add(self.ctxt, share_type_1['id'], "2016ndaetfigovnsaslcahfavmrpions") if used_by_shares: share_1 = db_utils.create_share(share_type_id=share_type_1['id']) 
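            # Note (added, illustrative): share_type_destroy() must refuse to
            # remove a share type while shares (or share group types) still
            # reference it; the assertions further down expect
            # exception.ShareTypeInUse until those users are cleaned up.
            # Hedged sketch of the guarded call (not part of the original):
            #
            #   db_api.share_type_destroy(ctxt, share_type_1['id'])
            #   # -> raises exception.ShareTypeInUse while share_1 exists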
db_utils.create_share(share_type_id=share_type_2['id']) if used_by_group_types: group_type_1 = db_utils.create_share_group_type( name='crimson', share_types=[share_type_1['id']]) db_utils.create_share_group_type( name='tide', share_types=[share_type_2['id']]) share_group_1 = db_utils.create_share_group( share_group_type_id=group_type_1['id'], share_types=[share_type_1['id']]) self.assertRaises(exception.ShareTypeInUse, db_api.share_type_destroy, self.ctxt, share_type_1['id']) self.assertRaises(exception.ShareTypeInUse, db_api.share_type_destroy, self.ctxt, share_type_2['id']) # Let's cleanup share_type_1 and verify it is gone if used_by_shares: db_api.share_instance_delete(self.ctxt, share_1.instance.id) if used_by_group_types: db_api.share_group_destroy(self.ctxt, share_group_1['id']) db_api.share_group_type_destroy(self.ctxt, group_type_1['id']) self.assertIsNone( db_api.share_type_destroy(self.ctxt, share_type_1['id'])) self.assertDictMatch( {}, db_api.share_type_extra_specs_get( self.ctxt, share_type_1['id'])) self.assertRaises(exception.ShareTypeNotFound, db_api.share_type_access_get_all, self.ctxt, share_type_1['id']) self.assertRaises(exception.ShareTypeNotFound, db_api.share_type_get, self.ctxt, share_type_1['id']) # share_type_2 must still be around self.assertEqual( share_type_2['id'], db_api.share_type_get(self.ctxt, share_type_2['id'])['id']) @ddt.data({'usages': False, 'reservations': False}, {'usages': False, 'reservations': True}, {'usages': True, 'reservations': False}) @ddt.unpack def test_share_type_destroy_quotas_and_reservations(self, usages, reservations): share_type = db_utils.create_share_type(name='clemsontigers') shares_quota = db_api.quota_create( self.ctxt, "fake-project-id", 'shares', 10, share_type_id=share_type['id']) snapshots_quota = db_api.quota_create( self.ctxt, "fake-project-id", 'snapshots', 30, share_type_id=share_type['id']) if reservations: resources = { 'shares': quota.ReservableResource('shares', '_sync_shares'), 'snapshots': quota.ReservableResource( 'snapshots', '_sync_snapshots'), } project_quotas = { 'shares': shares_quota.hard_limit, 'snapshots': snapshots_quota.hard_limit, } user_quotas = { 'shares': shares_quota.hard_limit, 'snapshots': snapshots_quota.hard_limit, } deltas = {'shares': 1, 'snapshots': 3} expire = timeutils.utcnow() + datetime.timedelta(seconds=86400) reservation_uuids = db_api.quota_reserve( self.ctxt, resources, project_quotas, user_quotas, project_quotas, deltas, expire, False, 30, project_id='fake-project-id', share_type_id=share_type['id']) db_session = db_api.get_session() q_reservations = db_api._quota_reservations_query( db_session, self.ctxt, reservation_uuids).all() # There should be 2 "user" reservations and 2 "share-type" # quota reservations self.assertEqual(4, len(q_reservations)) q_share_type_reservations = [qr for qr in q_reservations if qr['share_type_id'] is not None] # There should be exactly two "share type" quota reservations self.assertEqual(2, len(q_share_type_reservations)) for q_reservation in q_share_type_reservations: self.assertEqual(q_reservation['share_type_id'], share_type['id']) if usages: db_api.quota_usage_create(self.ctxt, 'fake-project-id', 'fake-user-id', 'shares', 3, 2, False, share_type_id=share_type['id']) db_api.quota_usage_create(self.ctxt, 'fake-project-id', 'fake-user-id', 'snapshots', 2, 2, False, share_type_id=share_type['id']) q_usages = db_api.quota_usage_get_all_by_project_and_share_type( self.ctxt, 'fake-project-id', share_type['id']) self.assertEqual(3, 
q_usages['shares']['in_use']) self.assertEqual(2, q_usages['shares']['reserved']) self.assertEqual(2, q_usages['snapshots']['in_use']) self.assertEqual(2, q_usages['snapshots']['reserved']) # Validate that quotas exist share_type_quotas = db_api.quota_get_all_by_project_and_share_type( self.ctxt, 'fake-project-id', share_type['id']) expected_quotas = { 'project_id': 'fake-project-id', 'share_type_id': share_type['id'], 'shares': 10, 'snapshots': 30, } self.assertDictMatch(expected_quotas, share_type_quotas) db_api.share_type_destroy(self.ctxt, share_type['id']) self.assertRaises(exception.ShareTypeNotFound, db_api.share_type_get, self.ctxt, share_type['id']) # Quotas must be gone share_type_quotas = db_api.quota_get_all_by_project_and_share_type( self.ctxt, 'fake-project-id', share_type['id']) self.assertEqual({'project_id': 'fake-project-id', 'share_type_id': share_type['id']}, share_type_quotas) # Check usages and reservations if usages: q_usages = db_api.quota_usage_get_all_by_project_and_share_type( self.ctxt, 'fake-project-id', share_type['id']) expected_q_usages = {'project_id': 'fake-project-id', 'share_type_id': share_type['id']} self.assertDictMatch(expected_q_usages, q_usages) if reservations: q_reservations = db_api._quota_reservations_query( db_session, self.ctxt, reservation_uuids).all() # just "user" quota reservations should be left, since we didn't # clean them up. self.assertEqual(2, len(q_reservations)) for q_reservation in q_reservations: self.assertIsNone(q_reservation['share_type_id']) @ddt.data( (None, None, 5), ('fake2', None, 2), (None, 'fake', 3), ) @ddt.unpack def test_share_replica_data_get_for_project( self, user_id, share_type_id, expected_result): kwargs = {} if share_type_id: kwargs.update({'id': share_type_id}) share_type_1 = db_utils.create_share_type(**kwargs) share_type_2 = db_utils.create_share_type() share_1 = db_utils.create_share(size=1, user_id='fake', share_type_id=share_type_1['id']) share_2 = db_utils.create_share(size=1, user_id='fake2', share_type_id=share_type_2['id']) project_id = share_1['project_id'] db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_ACTIVE, share_id=share_1['id'], share_type_id=share_type_1['id']) db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_IN_SYNC, share_id=share_1['id'], share_type_id=share_type_1['id']) db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_IN_SYNC, share_id=share_1['id'], share_type_id=share_type_1['id']) db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_ACTIVE, share_id=share_2['id'], share_type_id=share_type_2['id']) db_utils.create_share_replica( replica_state=constants.REPLICA_STATE_IN_SYNC, share_id=share_2['id'], share_type_id=share_type_2['id']) kwargs = {} if user_id: kwargs.update({'user_id': user_id}) if share_type_id: kwargs.update({'share_type_id': share_type_id}) total_amount, total_size = db_api.share_replica_data_get_for_project( self.ctxt, project_id, **kwargs) self.assertEqual(expected_result, total_amount) self.assertEqual(expected_result, total_size) def test_share_type_get_by_name_or_id_found_by_id(self): share_type = db_utils.create_share_type() result = db_api.share_type_get_by_name_or_id( self.ctxt, share_type['id']) self.assertIsNotNone(result) self.assertEqual(share_type['id'], result['id']) def test_share_type_get_by_name_or_id_found_by_name(self): name = uuidutils.generate_uuid() db_utils.create_share_type(name=name) result = db_api.share_type_get_by_name_or_id(self.ctxt, name) 
self.assertIsNotNone(result) self.assertEqual(name, result['name']) self.assertNotEqual(name, result['id']) def test_share_type_get_by_name_or_id_when_does_not_exist(self): fake_id = uuidutils.generate_uuid() result = db_api.share_type_get_by_name_or_id(self.ctxt, fake_id) self.assertIsNone(result) def test_share_type_get_with_none_id(self): self.assertRaises(exception.DefaultShareTypeNotConfigured, db_api.share_type_get, self.ctxt, None) @ddt.data( {'name': 'st_1', 'description': 'des_1', 'is_public': True}, {'name': 'st_2', 'description': 'des_2', 'is_public': None}, {'name': 'st_3', 'description': None, 'is_public': False}, {'name': None, 'description': 'des_4', 'is_public': True}, ) @ddt.unpack def test_share_type_update(self, name, description, is_public): values = {} if name: values.update({'name': name}) if description: values.update({'description': description}) if is_public is not None: values.update({'is_public': is_public}) share_type = db_utils.create_share_type(name='st_name') db_api.share_type_update(self.ctxt, share_type['id'], values) updated_st = db_api.share_type_get_by_name_or_id(self.ctxt, share_type['id']) if name: self.assertEqual(name, updated_st['name']) if description: self.assertEqual(description, updated_st['description']) if is_public is not None: self.assertEqual(is_public, updated_st['is_public']) def test_share_type_update_not_found(self): share_type = db_utils.create_share_type(name='st_update_test') db_api.share_type_destroy(self.ctxt, share_type['id']) values = {"name": "not_exist"} self.assertRaises(exception.ShareTypeNotFound, db_api.share_type_update, self.ctxt, share_type['id'], values) class MessagesDatabaseAPITestCase(test.TestCase): def setUp(self): super(MessagesDatabaseAPITestCase, self).setUp() self.user_id = uuidutils.generate_uuid() self.project_id = uuidutils.generate_uuid() self.ctxt = context.RequestContext( user_id=self.user_id, project_id=self.project_id, is_admin=False) def test_message_create(self): result = db_utils.create_message(project_id=self.project_id, action_id='001') self.assertIsNotNone(result['id']) def test_message_delete(self): result = db_utils.create_message(project_id=self.project_id, action_id='001') db_api.message_destroy(self.ctxt, result) self.assertRaises(exception.NotFound, db_api.message_get, self.ctxt, result['id']) def test_message_get(self): message = db_utils.create_message(project_id=self.project_id, action_id='001') result = db_api.message_get(self.ctxt, message['id']) self.assertEqual(message['id'], result['id']) self.assertEqual(message['action_id'], result['action_id']) self.assertEqual(message['detail_id'], result['detail_id']) self.assertEqual(message['project_id'], result['project_id']) self.assertEqual(message['message_level'], result['message_level']) def test_message_get_not_found(self): self.assertRaises(exception.MessageNotFound, db_api.message_get, self.ctxt, 'fake_id') def test_message_get_different_project(self): message = db_utils.create_message(project_id='another-project', action_id='001') self.assertRaises(exception.MessageNotFound, db_api.message_get, self.ctxt, message['id']) def test_message_get_all(self): db_utils.create_message(project_id=self.project_id, action_id='001') db_utils.create_message(project_id=self.project_id, action_id='001') db_utils.create_message(project_id='another-project', action_id='001') result = db_api.message_get_all(self.ctxt) self.assertEqual(2, len(result)) def test_message_get_all_as_admin(self): db_utils.create_message(project_id=self.project_id, 
action_id='001') db_utils.create_message(project_id=self.project_id, action_id='001') db_utils.create_message(project_id='another-project', action_id='001') result = db_api.message_get_all(self.ctxt.elevated()) self.assertEqual(3, len(result)) def test_message_get_all_with_filter(self): for i in ['001', '002', '002']: db_utils.create_message(project_id=self.project_id, action_id=i) result = db_api.message_get_all(self.ctxt, filters={'action_id': '002'}) self.assertEqual(2, len(result)) def test_message_get_all_with_created_since_or_before_filter(self): now = timeutils.utcnow() db_utils.create_message(project_id=self.project_id, action_id='001', created_at=now - datetime.timedelta(seconds=1)) db_utils.create_message(project_id=self.project_id, action_id='001', created_at=now + datetime.timedelta(seconds=1)) db_utils.create_message(project_id=self.project_id, action_id='001', created_at=now + datetime.timedelta(seconds=2)) result1 = db_api.message_get_all(self.ctxt, filters={'created_before': now}) result2 = db_api.message_get_all(self.ctxt, filters={'created_since': now}) self.assertEqual(1, len(result1)) self.assertEqual(2, len(result2)) def test_message_get_all_with_invalid_sort_key(self): self.assertRaises(exception.InvalidInput, db_api.message_get_all, self.ctxt, sort_key='invalid_key') def test_message_get_all_sorted_asc(self): ids = [] for i in ['001', '002', '003']: msg = db_utils.create_message(project_id=self.project_id, action_id=i) ids.append(msg.id) result = db_api.message_get_all(self.ctxt, sort_key='action_id', sort_dir='asc') result_ids = [r.id for r in result] self.assertEqual(result_ids, ids) def test_message_get_all_with_limit_and_offset(self): for i in ['001', '002']: db_utils.create_message(project_id=self.project_id, action_id=i) result = db_api.message_get_all(self.ctxt, limit=1, offset=1) self.assertEqual(1, len(result)) def test_message_get_all_sorted(self): ids = [] for i in ['003', '002', '001']: msg = db_utils.create_message(project_id=self.project_id, action_id=i) ids.append(msg.id) # Default the sort direction to descending result = db_api.message_get_all(self.ctxt, sort_key='action_id') result_ids = [r.id for r in result] self.assertEqual(result_ids, ids) def test_cleanup_expired_messages(self): adm_context = self.ctxt.elevated() now = timeutils.utcnow() db_utils.create_message(project_id=self.project_id, action_id='001', expires_at=now) db_utils.create_message(project_id=self.project_id, action_id='001', expires_at=now - datetime.timedelta(days=1)) db_utils.create_message(project_id=self.project_id, action_id='001', expires_at=now + datetime.timedelta(days=1)) with mock.patch.object(timeutils, 'utcnow') as mock_time_now: mock_time_now.return_value = now db_api.cleanup_expired_messages(adm_context) messages = db_api.message_get_all(adm_context) self.assertEqual(2, len(messages)) class BackendInfoDatabaseAPITestCase(test.TestCase): def setUp(self): """Run before each test.""" super(BackendInfoDatabaseAPITestCase, self).setUp() self.ctxt = context.get_admin_context() def test_create(self): host = "fake_host" value = "fake_hash_value" initial_data = db_api.backend_info_get(self.ctxt, host) db_api.backend_info_update(self.ctxt, host, value) actual_data = db_api.backend_info_get(self.ctxt, host) self.assertIsNone(initial_data) self.assertEqual(value, actual_data['info_hash']) self.assertEqual(host, actual_data['host']) def test_get(self): host = "fake_host" value = "fake_hash_value" db_api.backend_info_update(self.ctxt, host, value, False) actual_result = 
db_api.backend_info_get(self.ctxt, host) self.assertEqual(value, actual_result['info_hash']) self.assertEqual(host, actual_result['host']) def test_delete(self): host = "fake_host" value = "fake_hash_value" db_api.backend_info_update(self.ctxt, host, value) initial_data = db_api.backend_info_get(self.ctxt, host) db_api.backend_info_update(self.ctxt, host, delete_existing=True) actual_data = db_api.backend_info_get(self.ctxt, host) self.assertEqual(value, initial_data['info_hash']) self.assertEqual(host, initial_data['host']) self.assertIsNone(actual_data) def test_double_update(self): host = "fake_host" value_1 = "fake_hash_value_1" value_2 = "fake_hash_value_2" initial_data = db_api.backend_info_get(self.ctxt, host) db_api.backend_info_update(self.ctxt, host, value_1) db_api.backend_info_update(self.ctxt, host, value_2) actual_data = db_api.backend_info_get(self.ctxt, host) self.assertIsNone(initial_data) self.assertEqual(value_2, actual_data['info_hash']) self.assertEqual(host, actual_data['host']) @ddt.ddt class ShareInstancesTestCase(test.TestCase): def setUp(self): super(ShareInstancesTestCase, self).setUp() self.context = context.get_admin_context() @ddt.data('controller-100', 'controller-0@otherstore03', 'controller-0@otherstore01#pool200') def test_share_instances_host_update_no_matches(self, current_host): share_id = uuidutils.generate_uuid() if '@' in current_host: if '#' in current_host: new_host = 'new-controller-X@backendX#poolX' else: new_host = 'new-controller-X@backendX' else: new_host = 'new-controller-X' instances = [ db_utils.create_share_instance( share_id=share_id, host='controller-0@fancystore01#pool100', status=constants.STATUS_AVAILABLE), db_utils.create_share_instance( share_id=share_id, host='controller-0@otherstore02#pool100', status=constants.STATUS_ERROR), db_utils.create_share_instance( share_id=share_id, host='controller-2@beststore07#pool200', status=constants.STATUS_DELETING), ] db_utils.create_share(id=share_id, instances=instances) updates = db_api.share_instances_host_update(self.context, current_host, new_host) share_instances = db_api.share_instances_get_all( self.context, filters={'share_id': share_id}) self.assertEqual(0, updates) for share_instance in share_instances: self.assertTrue(not share_instance['host'].startswith(new_host)) @ddt.data({'current_host': 'controller-2', 'expected_updates': 1}, {'current_host': 'controller-0@fancystore01', 'expected_updates': 2}, {'current_host': 'controller-0@fancystore01#pool100', 'expected_updates': 1}) @ddt.unpack def test_share_instance_host_update_partial_matches(self, current_host, expected_updates): share_id = uuidutils.generate_uuid() if '@' in current_host: if '#' in current_host: new_host = 'new-controller-X@backendX#poolX' else: new_host = 'new-controller-X@backendX' else: new_host = 'new-controller-X' instances = [ db_utils.create_share_instance( share_id=share_id, host='controller-0@fancystore01#pool100', status=constants.STATUS_AVAILABLE), db_utils.create_share_instance( share_id=share_id, host='controller-0@fancystore01#pool200', status=constants.STATUS_ERROR), db_utils.create_share_instance( share_id=share_id, host='controller-2@beststore07#pool200', status=constants.STATUS_DELETING), ] db_utils.create_share(id=share_id, instances=instances) actual_updates = db_api.share_instances_host_update( self.context, current_host, new_host) share_instances = db_api.share_instances_get_all( self.context, filters={'share_id': share_id}) host_updates = [si for si in share_instances if 
si['host'].startswith(new_host)] self.assertEqual(actual_updates, expected_updates) self.assertEqual(expected_updates, len(host_updates)) manila-10.0.0/manila/tests/db/sqlalchemy/test_models.py0000664000175000017500000002657713656750227023137 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hitachi Data Systems. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Testing of SQLAlchemy model classes.""" import ddt from manila.common import constants from manila import context from manila.db.sqlalchemy import api as db_api from manila import test from manila.tests import db_utils @ddt.ddt class ShareTestCase(test.TestCase): """Testing of SQLAlchemy Share model class.""" @ddt.data(constants.STATUS_MANAGE_ERROR, constants.STATUS_CREATING, constants.STATUS_EXTENDING, constants.STATUS_DELETING, constants.STATUS_EXTENDING_ERROR, constants.STATUS_ERROR_DELETING, constants.STATUS_MANAGING, constants.STATUS_MANAGE_ERROR) def test_share_instance_available(self, status): instance_list = [ db_utils.create_share_instance(status=constants.STATUS_AVAILABLE, share_id='fake_id'), db_utils.create_share_instance(status=status, share_id='fake_id') ] share1 = db_utils.create_share(instances=instance_list) share2 = db_utils.create_share(instances=list(reversed(instance_list))) self.assertEqual(constants.STATUS_AVAILABLE, share1.instance['status']) self.assertEqual(constants.STATUS_AVAILABLE, share2.instance['status']) @ddt.data([constants.STATUS_MANAGE_ERROR, constants.STATUS_CREATING], [constants.STATUS_ERROR_DELETING, constants.STATUS_DELETING], [constants.STATUS_ERROR, constants.STATUS_MANAGING], [constants.STATUS_UNMANAGE_ERROR, constants.STATUS_UNMANAGING], [constants.STATUS_INACTIVE, constants.STATUS_EXTENDING], [constants.STATUS_SHRINKING_ERROR, constants.STATUS_SHRINKING]) @ddt.unpack def test_share_instance_not_transitional(self, status, trans_status): instance_list = [ db_utils.create_share_instance(status=status, share_id='fake_id'), db_utils.create_share_instance(status=trans_status, share_id='fake_id') ] share1 = db_utils.create_share(instances=instance_list) share2 = db_utils.create_share(instances=list(reversed(instance_list))) self.assertEqual(status, share1.instance['status']) self.assertEqual(status, share2.instance['status']) def test_share_instance_creating(self): share = db_utils.create_share(status=constants.STATUS_CREATING) self.assertEqual(constants.STATUS_CREATING, share.instance['status']) @ddt.data(constants.STATUS_REPLICATION_CHANGE, constants.STATUS_AVAILABLE, constants.STATUS_ERROR, constants.STATUS_CREATING) def test_share_instance_reverting(self, status): instance_list = [ db_utils.create_share_instance( status=constants.STATUS_REVERTING, share_id='fake_id'), db_utils.create_share_instance( status=status, share_id='fake_id'), db_utils.create_share_instance( status=constants.STATUS_ERROR_DELETING, share_id='fake_id'), ] share1 = db_utils.create_share(instances=instance_list) share2 = db_utils.create_share(instances=list(reversed(instance_list))) self.assertEqual( 
constants.STATUS_REVERTING, share1.instance['status']) self.assertEqual( constants.STATUS_REVERTING, share2.instance['status']) @ddt.data(constants.STATUS_AVAILABLE, constants.STATUS_ERROR, constants.STATUS_CREATING) def test_share_instance_replication_change(self, status): instance_list = [ db_utils.create_share_instance( status=constants.STATUS_REPLICATION_CHANGE, share_id='fake_id'), db_utils.create_share_instance( status=status, share_id='fake_id'), db_utils.create_share_instance( status=constants.STATUS_ERROR_DELETING, share_id='fake_id') ] share1 = db_utils.create_share(instances=instance_list) share2 = db_utils.create_share(instances=list(reversed(instance_list))) self.assertEqual( constants.STATUS_REPLICATION_CHANGE, share1.instance['status']) self.assertEqual( constants.STATUS_REPLICATION_CHANGE, share2.instance['status']) def test_share_instance_prefer_active_instance(self): instance_list = [ db_utils.create_share_instance( status=constants.STATUS_AVAILABLE, share_id='fake_id', replica_state=constants.REPLICA_STATE_IN_SYNC), db_utils.create_share_instance( status=constants.STATUS_CREATING, share_id='fake_id', replica_state=constants.REPLICA_STATE_OUT_OF_SYNC), db_utils.create_share_instance( status=constants.STATUS_ERROR, share_id='fake_id', replica_state=constants.REPLICA_STATE_ACTIVE), db_utils.create_share_instance( status=constants.STATUS_MANAGING, share_id='fake_id', replica_state=constants.REPLICA_STATE_ACTIVE), ] share1 = db_utils.create_share(instances=instance_list) share2 = db_utils.create_share(instances=list(reversed(instance_list))) self.assertEqual( constants.STATUS_ERROR, share1.instance['status']) self.assertEqual( constants.STATUS_ERROR, share2.instance['status']) def test_access_rules_status_no_instances(self): share = db_utils.create_share(instances=[]) self.assertEqual(constants.STATUS_ACTIVE, share.access_rules_status) @ddt.data(constants.STATUS_ACTIVE, constants.SHARE_INSTANCE_RULES_SYNCING, constants.SHARE_INSTANCE_RULES_ERROR) def test_access_rules_status(self, access_status): instances = [ db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_ERROR, access_rules_status=constants.STATUS_ACTIVE), db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_AVAILABLE, access_rules_status=constants.STATUS_ACTIVE), db_utils.create_share_instance( share_id='fake_id', status=constants.STATUS_AVAILABLE, access_rules_status=access_status), ] share = db_utils.create_share(instances=instances) self.assertEqual(access_status, share.access_rules_status) @ddt.ddt class ShareAccessTestCase(test.TestCase): """Testing of SQLAlchemy Share Access related model classes.""" @ddt.data(constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_ACTIVE, constants.ACCESS_STATE_ERROR, constants.ACCESS_STATE_APPLYING) def test_share_access_mapping_state(self, expected_status): ctxt = context.get_admin_context() share = db_utils.create_share() share_instances = [ share.instance, db_utils.create_share_instance(share_id=share['id']), db_utils.create_share_instance(share_id=share['id']), db_utils.create_share_instance(share_id=share['id']), ] access_rule = db_utils.create_access(share_id=share['id']) # Update the access mapping states db_api.share_instance_access_update( ctxt, access_rule['id'], share_instances[0]['id'], {'state': constants.ACCESS_STATE_ACTIVE}) db_api.share_instance_access_update( ctxt, access_rule['id'], share_instances[1]['id'], {'state': expected_status}) db_api.share_instance_access_update( ctxt, access_rule['id'], 
share_instances[2]['id'], {'state': constants.ACCESS_STATE_ACTIVE}) db_api.share_instance_access_update( ctxt, access_rule['id'], share_instances[3]['id'], {'deleted': 'True', 'state': constants.STATUS_DELETED}) access_rule = db_api.share_access_get(ctxt, access_rule['id']) self.assertEqual(expected_status, access_rule['state']) class ShareSnapshotTestCase(test.TestCase): """Testing of SQLAlchemy ShareSnapshot model class.""" def test_instance_and_proxified_properties(self): in_sync_replica_instance = db_utils.create_share_instance( status=constants.STATUS_AVAILABLE, share_id='fake_id', replica_state=constants.REPLICA_STATE_IN_SYNC) active_replica_instance = db_utils.create_share_instance( status=constants.STATUS_AVAILABLE, share_id='fake_id', replica_state=constants.REPLICA_STATE_ACTIVE) out_of_sync_replica_instance = db_utils.create_share_instance( status=constants.STATUS_ERROR, share_id='fake_id', replica_state=constants.REPLICA_STATE_OUT_OF_SYNC) non_replica_instance = db_utils.create_share_instance( status=constants.STATUS_CREATING, share_id='fake_id') share_instances = [ in_sync_replica_instance, active_replica_instance, out_of_sync_replica_instance, non_replica_instance, ] share = db_utils.create_share(instances=share_instances) snapshot_instance_list = [ db_utils.create_snapshot_instance( 'fake_snapshot_id', status=constants.STATUS_CREATING, share_instance_id=out_of_sync_replica_instance['id']), db_utils.create_snapshot_instance( 'fake_snapshot_id', status=constants.STATUS_ERROR, share_instance_id=in_sync_replica_instance['id']), db_utils.create_snapshot_instance( 'fake_snapshot_id', status=constants.STATUS_AVAILABLE, provider_location='hogsmeade:snapshot1', progress='87%', share_instance_id=active_replica_instance['id']), db_utils.create_snapshot_instance( 'fake_snapshot_id', status=constants.STATUS_MANAGING, share_instance_id=non_replica_instance['id']), ] snapshot = db_utils.create_snapshot( id='fake_snapshot_id', share_id=share['id'], instances=snapshot_instance_list) # Proxified properties self.assertEqual(constants.STATUS_AVAILABLE, snapshot['status']) self.assertEqual(constants.STATUS_ERROR, snapshot['aggregate_status']) self.assertEqual('hogsmeade:snapshot1', snapshot['provider_location']) self.assertEqual('87%', snapshot['progress']) # Snapshot properties expected_share_name = '-'.join(['share', share['id']]) self.assertEqual(expected_share_name, snapshot['share_name']) self.assertEqual(active_replica_instance['id'], snapshot['instance']['share_instance_id']) manila-10.0.0/manila/tests/test_service.py0000664000175000017500000002117013656750227020545 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2014 NetApp, Inc. # Copyright 2014 Mirantis, Inc. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
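# The service tests that follow stub out the service module's database layer
# with mock.patch.object() and then assert on the resulting state changes.
# A minimal, self-contained sketch of that pattern is shown here; FakeDb and
# FakeReportingService are names invented for this illustration and are not
# part of manila.

import unittest
from unittest import mock


class FakeDb(object):
    def service_get(self, ctxt, service_id):
        return {'id': service_id, 'report_count': 0}


class FakeReportingService(object):
    db = FakeDb()

    def __init__(self):
        self.model_disconnected = False

    def report_state(self):
        # Mirrors the behaviour asserted in the real tests below: any DB
        # failure flips the service into the "model disconnected" state.
        try:
            self.db.service_get(None, 1)
        except Exception:
            self.model_disconnected = True


class FakeReportingServiceTestCase(unittest.TestCase):
    def test_db_error_marks_service_disconnected(self):
        serv = FakeReportingService()
        with mock.patch.object(FakeReportingService.db, 'service_get',
                               side_effect=Exception('db down')):
            serv.report_state()
        self.assertTrue(serv.model_disconnected)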
""" Unit Tests for remote procedure calls using queue """ from unittest import mock import ddt from oslo_config import cfg from oslo_service import wsgi from manila import context from manila import db from manila import exception from manila import manager from manila import service from manila import test from manila import utils test_service_opts = [ cfg.StrOpt("fake_manager", default="manila.tests.test_service.FakeManager", help="Manager for testing"), cfg.StrOpt("test_service_listen", help="Host to bind test service to"), cfg.IntOpt("test_service_listen_port", default=0, help="Port number to bind test service to"), ] CONF = cfg.CONF CONF.register_opts(test_service_opts) class FakeManager(manager.Manager): """Fake manager for tests.""" RPC_API_VERSION = "1.0" def __init__(self, host=None, db_driver=None, service_name=None): super(FakeManager, self).__init__(host=host, db_driver=db_driver) def test_method(self): return 'manager' class ExtendedService(service.Service): def test_method(self): return 'service' class ServiceManagerTestCase(test.TestCase): """Test cases for Services.""" def test_message_gets_to_manager(self): serv = service.Service('test', 'test', 'test', CONF.fake_manager) serv.start() self.assertEqual('manager', serv.test_method()) def test_override_manager_method(self): serv = ExtendedService('test', 'test', 'test', CONF.fake_manager) serv.start() self.assertEqual('service', serv.test_method()) class ServiceFlagsTestCase(test.TestCase): def test_service_enabled_on_create_based_on_flag(self): self.flags(enable_new_services=True) host = 'foo' binary = 'manila-fake' app = service.Service.create(host=host, binary=binary) app.start() app.stop() ref = db.service_get(context.get_admin_context(), app.service_id) db.service_destroy(context.get_admin_context(), app.service_id) self.assertFalse(ref['disabled']) def test_service_disabled_on_create_based_on_flag(self): self.flags(enable_new_services=False) host = 'foo' binary = 'manila-fake' app = service.Service.create(host=host, binary=binary) app.start() app.stop() ref = db.service_get(context.get_admin_context(), app.service_id) db.service_destroy(context.get_admin_context(), app.service_id) self.assertTrue(ref['disabled']) def fake_service_get_by_args(*args, **kwargs): raise exception.NotFound() def fake_service_get(*args, **kwargs): raise Exception() host = 'foo' binary = 'bar' topic = 'test' service_create = { 'host': host, 'binary': binary, 'topic': topic, 'report_count': 0, 'availability_zone': 'nova', } service_ref = { 'host': host, 'binary': binary, 'topic': topic, 'report_count': 0, 'availability_zone': {'name': 'nova'}, 'id': 1, } @ddt.ddt class ServiceTestCase(test.TestCase): """Test cases for Services.""" def test_create(self): app = service.Service.create(host='foo', binary='manila-fake', topic='fake') self.assertTrue(app) @ddt.data(True, False) def test_periodic_tasks(self, raise_on_error): serv = service.Service(host, binary, topic, CONF.fake_manager) self.mock_object( context, 'get_admin_context', mock.Mock(side_effect=context.get_admin_context)) self.mock_object(serv.manager, 'periodic_tasks') serv.periodic_tasks(raise_on_error=raise_on_error) context.get_admin_context.assert_called_once_with() serv.manager.periodic_tasks.assert_called_once_with( utils.IsAMatcher(context.RequestContext), raise_on_error=raise_on_error) @mock.patch.object(service.db, 'service_get_by_args', mock.Mock(side_effect=fake_service_get_by_args)) @mock.patch.object(service.db, 'service_create', mock.Mock(return_value=service_ref)) 
@mock.patch.object(service.db, 'service_get', mock.Mock(side_effect=fake_service_get)) def test_report_state_newly_disconnected(self): serv = service.Service(host, binary, topic, CONF.fake_manager) serv.start() serv.report_state() self.assertTrue(serv.model_disconnected) service.db.service_get_by_args.assert_called_once_with( mock.ANY, host, binary) service.db.service_create.assert_called_once_with( mock.ANY, service_create) service.db.service_get.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(service.db, 'service_get_by_args', mock.Mock(side_effect=fake_service_get_by_args)) @mock.patch.object(service.db, 'service_create', mock.Mock(return_value=service_ref)) @mock.patch.object(service.db, 'service_get', mock.Mock(return_value=service_ref)) @mock.patch.object(service.db, 'service_update', mock.Mock(return_value=service_ref. update({'report_count': 1}))) def test_report_state_newly_connected(self): serv = service.Service(host, binary, topic, CONF.fake_manager) serv.start() serv.model_disconnected = True serv.report_state() self.assertFalse(serv.model_disconnected) service.db.service_get_by_args.assert_called_once_with( mock.ANY, host, binary) service.db.service_create.assert_called_once_with( mock.ANY, service_create) service.db.service_get.assert_called_once_with( mock.ANY, service_ref['id']) service.db.service_update.assert_called_once_with( mock.ANY, service_ref['id'], mock.ANY) def test_report_state_service_not_ready(self): with mock.patch.object(service, 'db') as mock_db: mock_db.service_get.return_value = service_ref serv = service.Service(host, binary, topic, CONF.fake_manager) serv.manager.is_service_ready = mock.Mock(return_value=False) serv.start() serv.report_state() serv.manager.is_service_ready.assert_called_once() mock_db.service_update.assert_not_called() class TestWSGIService(test.TestCase): def setUp(self): super(TestWSGIService, self).setUp() self.mock_object(wsgi.Loader, 'load_app') self.test_service = service.WSGIService("test_service") def test_service_random_port(self): self.assertEqual(0, self.test_service.port) self.test_service.start() self.assertNotEqual(0, self.test_service.port) self.test_service.stop() wsgi.Loader.load_app.assert_called_once_with("test_service") def test_reset_pool_size_to_default(self): self.test_service.start() # Stopping the service, which in turn sets pool size to 0 self.test_service.stop() self.assertEqual(0, self.test_service.server._pool.size) # Resetting pool size to default self.test_service.reset() self.test_service.start() self.assertGreater(self.test_service.server._pool.size, 0) wsgi.Loader.load_app.assert_called_once_with("test_service") @mock.patch('oslo_service.wsgi.Server') @mock.patch('oslo_service.wsgi.Loader') def test_ssl_enabled(self, mock_loader, mock_server): self.override_config('osapi_share_use_ssl', True) service.WSGIService("osapi_share") mock_server.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY, port=mock.ANY, host=mock.ANY, use_ssl=True) self.assertTrue(mock_loader.called) manila-10.0.0/manila/tests/cmd/0000775000175000017500000000000013656750362016236 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/cmd/__init__.py0000664000175000017500000000000013656750227020335 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/cmd/test_manage.py0000664000175000017500000004030613656750227021102 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import code import readline import sys from unittest import mock import ddt from oslo_config import cfg import six from manila.cmd import manage as manila_manage from manila import context from manila import db from manila.db import migration from manila import test from manila import version CONF = cfg.CONF @ddt.ddt class ManilaCmdManageTestCase(test.TestCase): def setUp(self): super(ManilaCmdManageTestCase, self).setUp() sys.argv = ['manila-share'] CONF(sys.argv[1:], project='manila', version=version.version_string()) self.shell_commands = manila_manage.ShellCommands() self.host_commands = manila_manage.HostCommands() self.db_commands = manila_manage.DbCommands() self.version_commands = manila_manage.VersionCommands() self.config_commands = manila_manage.ConfigCommands() self.get_log_cmds = manila_manage.GetLogCommands() self.service_cmds = manila_manage.ServiceCommands() self.share_cmds = manila_manage.ShareCommands() @mock.patch.object(manila_manage.ShellCommands, 'run', mock.Mock()) def test_shell_commands_bpython(self): self.shell_commands.bpython() manila_manage.ShellCommands.run.assert_called_once_with('bpython') @mock.patch.object(manila_manage.ShellCommands, 'run', mock.Mock()) def test_shell_commands_ipython(self): self.shell_commands.ipython() manila_manage.ShellCommands.run.assert_called_once_with('ipython') @mock.patch.object(manila_manage.ShellCommands, 'run', mock.Mock()) def test_shell_commands_python(self): self.shell_commands.python() manila_manage.ShellCommands.run.assert_called_once_with('python') @ddt.data({}, {'shell': 'bpython'}) def test_run_bpython(self, kwargs): try: import bpython except ImportError as e: self.skipTest(six.text_type(e)) self.mock_object(bpython, 'embed') self.shell_commands.run(**kwargs) bpython.embed.assert_called_once_with() def test_run_bpython_import_error(self): try: import bpython import IPython except ImportError as e: self.skipTest(six.text_type(e)) self.mock_object(bpython, 'embed', mock.Mock(side_effect=ImportError())) self.mock_object(IPython, 'embed') self.shell_commands.run(shell='bpython') IPython.embed.assert_called_once_with() def test_run(self): try: import bpython except ImportError as e: self.skipTest(six.text_type(e)) self.mock_object(bpython, 'embed') self.shell_commands.run() bpython.embed.assert_called_once_with() def test_run_ipython(self): try: import IPython except ImportError as e: self.skipTest(six.text_type(e)) self.mock_object(IPython, 'embed') self.shell_commands.run(shell='ipython') IPython.embed.assert_called_once_with() def test_run_ipython_import_error(self): try: import IPython if not hasattr(IPython, 'Shell'): setattr(IPython, 'Shell', mock.Mock()) setattr(IPython.Shell, 'IPShell', mock.Mock(side_effect=ImportError())) except ImportError as e: self.skipTest(six.text_type(e)) self.mock_object(IPython, 'embed', mock.Mock(side_effect=ImportError())) self.mock_object(readline, 'parse_and_bind') self.mock_object(code, 'interact') shell = IPython.embed.return_value 
self.shell_commands.run(shell='ipython') IPython.Shell.IPShell.assert_called_once_with(argv=[]) self.assertFalse(shell.mainloop.called) self.assertTrue(readline.parse_and_bind.called) code.interact.assert_called_once_with() def test_run_python(self): self.mock_object(readline, 'parse_and_bind') self.mock_object(code, 'interact') self.shell_commands.run(shell='python') readline.parse_and_bind.assert_called_once_with("tab:complete") code.interact.assert_called_once_with() def test_run_python_import_error(self): self.mock_object(readline, 'parse_and_bind') self.mock_object(code, 'interact') self.shell_commands.run(shell='python') readline.parse_and_bind.assert_called_once_with("tab:complete") code.interact.assert_called_once_with() @mock.patch('six.moves.builtins.print') def test_list(self, print_mock): serv_1 = { 'host': 'fake_host1', 'availability_zone': {'name': 'avail_zone1'}, } serv_2 = { 'host': 'fake_host2', 'availability_zone': {'name': 'avail_zone2'}, } self.mock_object(db, 'service_get_all', mock.Mock(return_value=[serv_1, serv_2])) self.mock_object(context, 'get_admin_context', mock.Mock(return_value='admin_ctxt')) self.host_commands.list(zone='avail_zone1') context.get_admin_context.assert_called_once_with() db.service_get_all.assert_called_once_with('admin_ctxt') print_mock.assert_has_calls([ mock.call(u'host \tzone '), mock.call('fake_host1 \tavail_zone1 ')]) @mock.patch('six.moves.builtins.print') def test_list_zone_is_none(self, print_mock): serv_1 = { 'host': 'fake_host1', 'availability_zone': {'name': 'avail_zone1'}, } serv_2 = { 'host': 'fake_host2', 'availability_zone': {'name': 'avail_zone2'}, } self.mock_object(db, 'service_get_all', mock.Mock(return_value=[serv_1, serv_2])) self.mock_object(context, 'get_admin_context', mock.Mock(return_value='admin_ctxt')) self.host_commands.list() context.get_admin_context.assert_called_once_with() db.service_get_all.assert_called_once_with('admin_ctxt') print_mock.assert_has_calls([ mock.call(u'host \tzone '), mock.call('fake_host1 \tavail_zone1 '), mock.call('fake_host2 \tavail_zone2 ')]) def test_sync(self): self.mock_object(migration, 'upgrade') self.db_commands.sync(version='123') migration.upgrade.assert_called_once_with('123') def test_version(self): self.mock_object(migration, 'version') self.db_commands.version() migration.version.assert_called_once_with() def test_downgrade(self): self.mock_object(migration, 'downgrade') self.db_commands.downgrade(version='123') migration.downgrade.assert_called_once_with('123') def test_revision(self): self.mock_object(migration, 'revision') self.db_commands.revision('message', True) migration.revision.assert_called_once_with('message', True) def test_stamp(self): self.mock_object(migration, 'stamp') self.db_commands.stamp(version='123') migration.stamp.assert_called_once_with('123') def test_version_commands_list(self): self.mock_object(version, 'version_string', mock.Mock(return_value='123')) with mock.patch('sys.stdout', new=six.StringIO()) as fake_out: self.version_commands.list() version.version_string.assert_called_once_with() self.assertEqual('123\n', fake_out.getvalue()) def test_version_commands_call(self): self.mock_object(version, 'version_string', mock.Mock(return_value='123')) with mock.patch('sys.stdout', new=six.StringIO()) as fake_out: self.version_commands() version.version_string.assert_called_once_with() self.assertEqual('123\n', fake_out.getvalue()) def test_get_log_commands_no_errors(self): with mock.patch('sys.stdout', new=six.StringIO()) as fake_out: 
CONF.set_override('log_dir', None) expected_out = 'No errors in logfiles!\n' self.get_log_cmds.errors() self.assertEqual(expected_out, fake_out.getvalue()) @mock.patch('six.moves.builtins.open') @mock.patch('os.listdir') def test_get_log_commands_errors(self, listdir, open): CONF.set_override('log_dir', 'fake-dir') listdir.return_value = ['fake-error.log'] with mock.patch('sys.stdout', new=six.StringIO()) as fake_out: open.return_value = six.StringIO( '[ ERROR ] fake-error-message') expected_out = ('fake-dir/fake-error.log:-\n' 'Line 1 : [ ERROR ] fake-error-message\n') self.get_log_cmds.errors() self.assertEqual(expected_out, fake_out.getvalue()) open.assert_called_once_with('fake-dir/fake-error.log', 'r') listdir.assert_called_once_with(CONF.log_dir) @mock.patch('six.moves.builtins.open') @mock.patch('os.path.exists') def test_get_log_commands_syslog_no_log_file(self, path_exists, open): path_exists.return_value = False exit = self.assertRaises(SystemExit, self.get_log_cmds.syslog) self.assertEqual(1, exit.code) path_exists.assert_any_call('/var/log/syslog') path_exists.assert_any_call('/var/log/messages') @mock.patch('manila.utils.service_is_up') @mock.patch('manila.db.service_get_all') @mock.patch('manila.context.get_admin_context') def test_service_commands_list(self, get_admin_context, service_get_all, service_is_up): ctxt = context.RequestContext('fake-user', 'fake-project') get_admin_context.return_value = ctxt service = {'binary': 'manila-binary', 'host': 'fake-host.fake-domain', 'availability_zone': {'name': 'fake-zone'}, 'updated_at': '2014-06-30 11:22:33', 'disabled': False} service_get_all.return_value = [service] service_is_up.return_value = True with mock.patch('sys.stdout', new=six.StringIO()) as fake_out: format = "%-16s %-36s %-16s %-10s %-5s %-10s" print_format = format % ('Binary', 'Host', 'Zone', 'Status', 'State', 'Updated At') service_format = format % (service['binary'], service['host'].partition('.')[0], service['availability_zone']['name'], 'enabled', ':-)', service['updated_at']) expected_out = print_format + '\n' + service_format + '\n' self.service_cmds.list() self.assertEqual(expected_out, fake_out.getvalue()) get_admin_context.assert_called_with() service_get_all.assert_called_with(ctxt) service_is_up.assert_called_with(service) def test_methods_of(self): obj = type('Fake', (object,), {name: lambda: 'fake_' for name in ('_a', 'b', 'c')}) expected = [('b', obj.b), ('c', obj.c)] self.assertEqual(expected, manila_manage.methods_of(obj)) @mock.patch('oslo_config.cfg.ConfigOpts.register_cli_opt') def test_main_argv_lt_2(self, register_cli_opt): script_name = 'manila-manage' sys.argv = [script_name] CONF(sys.argv[1:], project='manila', version=version.version_string()) exit = self.assertRaises(SystemExit, manila_manage.main) self.assertTrue(register_cli_opt.called) self.assertEqual(2, exit.code) @mock.patch('oslo_config.cfg.ConfigOpts.__call__') @mock.patch('oslo_log.log.register_options') @mock.patch('oslo_log.log.setup') @mock.patch('oslo_config.cfg.ConfigOpts.register_cli_opt') def test_main_sudo_failed(self, register_cli_opt, log_setup, register_log_opts, config_opts_call): script_name = 'manila-manage' sys.argv = [script_name, 'fake_category', 'fake_action'] config_opts_call.side_effect = cfg.ConfigFilesNotFoundError( mock.sentinel._namespace) exit = self.assertRaises(SystemExit, manila_manage.main) self.assertTrue(register_cli_opt.called) register_log_opts.assert_called_once_with(CONF) config_opts_call.assert_called_once_with( sys.argv[1:], project='manila', 
version=version.version_string()) self.assertFalse(log_setup.called) self.assertEqual(2, exit.code) @mock.patch('oslo_config.cfg.ConfigOpts.__call__') @mock.patch('oslo_config.cfg.ConfigOpts.register_cli_opt') @mock.patch('oslo_log.log.register_options') def test_main(self, register_log_opts, register_cli_opt, config_opts_call): script_name = 'manila-manage' sys.argv = [script_name, 'config', 'list'] action_fn = mock.MagicMock() CONF.category = mock.MagicMock(action_fn=action_fn) manila_manage.main() self.assertTrue(register_cli_opt.called) register_log_opts.assert_called_once_with(CONF) config_opts_call.assert_called_once_with( sys.argv[1:], project='manila', version=version.version_string()) self.assertTrue(action_fn.called) @ddt.data('bar', '-bar', '--bar') def test_get_arg_string(self, arg): parsed_arg = manila_manage.get_arg_string(arg) self.assertEqual('bar', parsed_arg) @ddt.data({'current_host': 'controller-0@fancystore01#pool100', 'new_host': 'controller-0@fancystore01'}, {'current_host': 'controller-0@fancystore01', 'new_host': 'controller-0'}) @ddt.unpack def test_share_update_host_fail_validation(self, current_host, new_host): self.mock_object(context, 'get_admin_context', mock.Mock(return_value='admin_ctxt')) self.mock_object(db, 'share_instances_host_update') self.assertRaises(SystemExit, self.share_cmds.update_host, current_host, new_host) self.assertFalse(db.share_instances_host_update.called) @ddt.data({'current_host': 'controller-0@fancystore01#pool100', 'new_host': 'controller-0@fancystore02#pool0'}, {'current_host': 'controller-0@fancystore01', 'new_host': 'controller-1@fancystore01'}, {'current_host': 'controller-0', 'new_host': 'controller-1'}, {'current_host': 'controller-0@fancystore01#pool100', 'new_host': 'controller-1@fancystore02', 'force': True}) @ddt.unpack def test_share_update_host(self, current_host, new_host, force=False): self.mock_object(context, 'get_admin_context', mock.Mock(return_value='admin_ctxt')) self.mock_object(db, 'share_instances_host_update', mock.Mock(return_value=20)) with mock.patch('sys.stdout', new=six.StringIO()) as intercepted_op: self.share_cmds.update_host(current_host, new_host, force) expected_op = ("Updated host of 20 share instances on " "%(chost)s to %(nhost)s." % {'chost': current_host, 'nhost': new_host}) self.assertEqual(expected_op, intercepted_op.getvalue().strip()) db.share_instances_host_update.assert_called_once_with( 'admin_ctxt', current_host, new_host) manila-10.0.0/manila/tests/cmd/test_data.py0000664000175000017500000000344513656750227020566 0ustar zuulzuul00000000000000# Copyright 2015, Hitachi Data Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
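# The test below checks entry-point wiring only: main() is expected to set up
# logging, create a service for the right binary and hand it to serve()/wait().
# A rough, self-contained sketch of the same style of assertion follows; the
# fake_launcher namespace and fake_main() are invented for the example and are
# not manila code.

import types
import unittest
from unittest import mock

fake_launcher = types.SimpleNamespace(
    create=lambda binary: 'server-for-%s' % binary,
    serve=lambda server: None,
    wait=lambda: None,
)


def fake_main():
    server = fake_launcher.create(binary='fake-data')
    fake_launcher.serve(server)
    fake_launcher.wait()


class FakeMainTestCase(unittest.TestCase):
    def test_main_wires_the_service_together(self):
        with mock.patch.object(fake_launcher, 'create',
                               return_value='server') as create_mock, \
                mock.patch.object(fake_launcher, 'serve') as serve_mock, \
                mock.patch.object(fake_launcher, 'wait') as wait_mock:
            fake_main()
        create_mock.assert_called_once_with(binary='fake-data')
        serve_mock.assert_called_once_with('server')
        wait_mock.assert_called_once_with()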
import sys from manila.cmd import data as manila_data from manila import test from manila import version CONF = manila_data.CONF class ManilaCmdDataTestCase(test.TestCase): def test_main(self): sys.argv = ['manila-data'] self.mock_object(manila_data.log, 'setup') self.mock_object(manila_data.log, 'register_options') self.mock_object(manila_data.utils, 'monkey_patch') self.mock_object(manila_data.service.Service, 'create') self.mock_object(manila_data.service, 'serve') self.mock_object(manila_data.service, 'wait') manila_data.main() self.assertEqual('manila', CONF.project) self.assertEqual(version.version_string(), CONF.version) manila_data.log.setup.assert_called_once_with(CONF, "manila") manila_data.log.register_options.assert_called_once_with(CONF) manila_data.utils.monkey_patch.assert_called_once_with() manila_data.service.Service.create.assert_called_once_with( binary='manila-data') manila_data.service.wait.assert_called_once_with() manila_data.service.serve.assert_called_once_with( manila_data.service.Service.create.return_value) manila-10.0.0/manila/tests/cmd/test_share.py0000664000175000017500000000515013656750227020752 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys from unittest import mock import ddt from manila.cmd import share as manila_share from manila import test CONF = manila_share.CONF @ddt.ddt class ManilaCmdShareTestCase(test.TestCase): @ddt.data(None, [], ['foo', ], ['foo', 'bar', ]) def test_main(self, backends): self.mock_object(manila_share.log, 'setup') self.mock_object(manila_share.log, 'register_options') self.mock_object(manila_share.utils, 'monkey_patch') self.mock_object(manila_share.service, 'process_launcher') self.mock_object(manila_share.service.Service, 'create') self.launcher = manila_share.service.process_launcher.return_value self.mock_object(self.launcher, 'launch_service') self.mock_object(self.launcher, 'wait') self.server = manila_share.service.Service.create.return_value fake_host = 'fake.host' CONF.set_override('enabled_share_backends', backends) CONF.set_override('host', fake_host) sys.argv = ['manila-share'] manila_share.main() manila_share.log.setup.assert_called_once_with(CONF, "manila") manila_share.log.register_options.assert_called_once_with(CONF) manila_share.utils.monkey_patch.assert_called_once_with() manila_share.service.process_launcher.assert_called_once_with() self.launcher.wait.assert_called_once_with() if backends: manila_share.service.Service.create.assert_has_calls([ mock.call( host=fake_host + '@' + backend, service_name=backend, binary='manila-share', coordination=True, ) for backend in backends ]) self.launcher.launch_service.assert_has_calls([ mock.call(self.server) for backend in backends]) else: manila_share.service.Service.create.assert_called_once_with( binary='manila-share') self.launcher.launch_service.assert_called_once_with(self.server) manila-10.0.0/manila/tests/cmd/test_scheduler.py0000664000175000017500000000361213656750227021627 0ustar 
zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys from manila.cmd import scheduler as manila_scheduler from manila import test from manila import version CONF = manila_scheduler.CONF class ManilaCmdSchedulerTestCase(test.TestCase): def test_main(self): sys.argv = ['manila-scheduler'] self.mock_object(manila_scheduler.log, 'setup') self.mock_object(manila_scheduler.log, 'register_options') self.mock_object(manila_scheduler.utils, 'monkey_patch') self.mock_object(manila_scheduler.service.Service, 'create') self.mock_object(manila_scheduler.service, 'serve') self.mock_object(manila_scheduler.service, 'wait') manila_scheduler.main() self.assertEqual('manila', CONF.project) self.assertEqual(version.version_string(), CONF.version) manila_scheduler.log.setup.assert_called_once_with(CONF, "manila") manila_scheduler.log.register_options.assert_called_once_with(CONF) manila_scheduler.utils.monkey_patch.assert_called_once_with() manila_scheduler.service.Service.create.assert_called_once_with( binary='manila-scheduler', coordination=True) manila_scheduler.service.wait.assert_called_once_with() manila_scheduler.service.serve.assert_called_once_with( manila_scheduler.service.Service.create.return_value) manila-10.0.0/manila/tests/cmd/test_api.py0000664000175000017500000000352513656750227020425 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import sys from manila.cmd import api as manila_api from manila import test from manila import version CONF = manila_api.CONF class ManilaCmdApiTestCase(test.TestCase): def setUp(self): super(ManilaCmdApiTestCase, self).setUp() sys.argv = ['manila-api'] def test_main(self): self.mock_object(manila_api.log, 'setup') self.mock_object(manila_api.log, 'register_options') self.mock_object(manila_api.utils, 'monkey_patch') self.mock_object(manila_api.service, 'process_launcher') self.mock_object(manila_api.service, 'WSGIService') manila_api.main() process_launcher = manila_api.service.process_launcher process_launcher.assert_called_once_with() self.assertTrue(process_launcher.return_value.launch_service.called) self.assertTrue(process_launcher.return_value.wait.called) self.assertEqual('manila', CONF.project) self.assertEqual(version.version_string(), CONF.version) manila_api.log.setup.assert_called_once_with(CONF, "manila") manila_api.log.register_options.assert_called_once_with(CONF) manila_api.utils.monkey_patch.assert_called_once_with() manila_api.service.WSGIService.assert_called_once_with('osapi_share') manila-10.0.0/manila/tests/cmd/test_status.py0000664000175000017500000000177413656750227021203 0ustar zuulzuul00000000000000# Copyright (c) 2018 NEC, Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_upgradecheck.upgradecheck import Code from manila.cmd import status from manila import test class TestUpgradeChecks(test.TestCase): def setUp(self): super(TestUpgradeChecks, self).setUp() self.cmd = status.Checks() def test__check_placeholder(self): check_result = self.cmd._check_placeholder() self.assertEqual( Code.SUCCESS, check_result.code) manila-10.0.0/manila/tests/scheduler/0000775000175000017500000000000013656750362017451 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/scheduler/__init__.py0000664000175000017500000000000013656750227021550 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/scheduler/evaluator/0000775000175000017500000000000013656750362021453 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/scheduler/evaluator/__init__.py0000664000175000017500000000000013656750227023552 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/scheduler/evaluator/test_evaluator.py0000664000175000017500000001437513656750227025100 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
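# manila's scheduler evaluator (exercised by the tests below) parses small
# arithmetic/boolean expressions such as "stats.iops + request.iops".  As a
# rough illustration of the idea only -- manila's real implementation differs
# and also supports operators such as '^' and '!' -- a minimal, safe evaluator
# for plain arithmetic and comparisons can be sketched with the standard
# library's ast module (tiny_evaluate is a name invented for this sketch).

import ast
import operator

_BIN_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}
_CMP_OPS = {
    ast.Lt: operator.lt,
    ast.Gt: operator.gt,
    ast.Eq: operator.eq,
    ast.NotEq: operator.ne,
}


def tiny_evaluate(expression):
    """Safely evaluate a small numeric/comparison expression string."""

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value,
                                                         (int, float)):
            return node.value
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        if isinstance(node, ast.BinOp) and type(node.op) in _BIN_OPS:
            return _BIN_OPS[type(node.op)](_eval(node.left),
                                           _eval(node.right))
        if isinstance(node, ast.Compare) and len(node.ops) == 1:
            op_type = type(node.ops[0])
            if op_type in _CMP_OPS:
                return _CMP_OPS[op_type](_eval(node.left),
                                         _eval(node.comparators[0]))
        raise ValueError('unsupported expression element: %r' % node)

    return _eval(ast.parse(expression, mode='eval'))


# For example, tiny_evaluate("1+1") returns 2, tiny_evaluate("22/11") returns
# 2.0 and tiny_evaluate("1 < 2") returns True.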
from manila import exception
from manila.scheduler.evaluator import evaluator
from manila import test


class EvaluatorTestCase(test.TestCase):
    def test_simple_integer(self):
        self.assertEqual(2, evaluator.evaluate("1+1"))
        self.assertEqual(9, evaluator.evaluate("2+3+4"))
        self.assertEqual(23, evaluator.evaluate("11+12"))
        self.assertEqual(30, evaluator.evaluate("5*6"))
        self.assertEqual(2, evaluator.evaluate("22/11"))
        self.assertEqual(38, evaluator.evaluate("109-71"))
        self.assertEqual(
            493, evaluator.evaluate("872 - 453 + 44 / 22 * 4 + 66"))

    def test_simple_float(self):
        self.assertEqual(2.0, evaluator.evaluate("1.0 + 1.0"))
        self.assertEqual(2.5, evaluator.evaluate("1.5 + 1.0"))
        self.assertEqual(3.0, evaluator.evaluate("1.5 * 2.0"))

    def test_int_float_mix(self):
        self.assertEqual(2.5, evaluator.evaluate("1.5 + 1"))
        self.assertEqual(4.25, evaluator.evaluate("8.5 / 2"))
        self.assertEqual(5.25, evaluator.evaluate("10/4+0.75 + 2"))

    def test_negative_numbers(self):
        self.assertEqual(-2, evaluator.evaluate("-2"))
        self.assertEqual(-1, evaluator.evaluate("-2+1"))
        self.assertEqual(3, evaluator.evaluate("5+-2"))

    def test_exponent(self):
        self.assertEqual(8, evaluator.evaluate("2^3"))
        self.assertEqual(-8, evaluator.evaluate("-2 ^ 3"))
        self.assertEqual(15.625, evaluator.evaluate("2.5 ^ 3"))
        self.assertEqual(8, evaluator.evaluate("4 ^ 1.5"))

    def test_function(self):
        self.assertEqual(5, evaluator.evaluate("abs(-5)"))
        self.assertEqual(2, evaluator.evaluate("abs(2)"))
        self.assertEqual(1, evaluator.evaluate("min(1, 100)"))
        self.assertEqual(100, evaluator.evaluate("max(1, 100)"))
        self.assertEqual(100, evaluator.evaluate("max(1, 2, 100)"))

    def test_parentheses(self):
        self.assertEqual(1, evaluator.evaluate("(1)"))
        self.assertEqual(-1, evaluator.evaluate("(-1)"))
        self.assertEqual(2, evaluator.evaluate("(1+1)"))
        self.assertEqual(15, evaluator.evaluate("(1+2) * 5"))
        self.assertEqual(3, evaluator.evaluate("(1+2)*(3-1)/((1+(2-1)))"))
        self.assertEqual(
            -8.0, evaluator. evaluate("((1.0 / 0.5) * (2)) *(-2)"))

    def test_comparisons(self):
        self.assertTrue(evaluator.evaluate("1 < 2"))
        self.assertTrue(evaluator.evaluate("2 > 1"))
        self.assertTrue(evaluator.evaluate("2 != 1"))
        self.assertFalse(evaluator.evaluate("1 > 2"))
        self.assertFalse(evaluator.evaluate("2 < 1"))
        self.assertFalse(evaluator.evaluate("2 == 1"))
        self.assertTrue(evaluator.evaluate("(1 == 1) == !(1 == 2)"))

    def test_logic_ops(self):
        self.assertTrue(evaluator.evaluate("(1 == 1) AND (2 == 2)"))
        self.assertTrue(evaluator.evaluate("(1 == 1) and (2 == 2)"))
        self.assertTrue(evaluator.evaluate("(1 == 1) && (2 == 2)"))
        self.assertFalse(evaluator.evaluate("(1 == 1) && (5 == 2)"))
        self.assertTrue(evaluator.evaluate("(1 == 1) OR (5 == 2)"))
        self.assertTrue(evaluator.evaluate("(1 == 1) or (5 == 2)"))
        self.assertTrue(evaluator.evaluate("(1 == 1) || (5 == 2)"))
        self.assertFalse(evaluator.evaluate("(5 == 1) || (5 == 2)"))
        self.assertFalse(evaluator.evaluate("(1 == 1) AND NOT (2 == 2)"))
        self.assertFalse(evaluator.evaluate("(1 == 1) AND not (2 == 2)"))
        self.assertFalse(evaluator.evaluate("(1 == 1) AND !(2 == 2)"))
        self.assertTrue(evaluator.evaluate("(1 == 1) AND NOT (5 == 2)"))
        self.assertTrue(evaluator.evaluate("(1 == 1) OR NOT (2 == 2) "
                                           "AND (5 == 5)"))

    def test_ternary_conditional(self):
        self.assertEqual(5, evaluator.evaluate("(1 < 2) ? 5 : 10"))
        self.assertEqual(10, evaluator.evaluate("(1 > 2) ? 5 : 10"))

    def test_variables_dict(self):
        stats = {'iops': 1000, 'usage': 0.65, 'count': 503, 'free_space': 407}
        request = {'iops': 500, 'size': 4}
        self.assertEqual(1500, evaluator.evaluate("stats.iops + request.iops",
                                                  stats=stats,
                                                  request=request))

    def test_missing_var(self):
        stats = {'iops': 1000, 'usage': 0.65, 'count': 503, 'free_space': 407}
        request = {'iops': 500, 'size': 4}
        self.assertRaises(exception.EvaluatorParseException,
                          evaluator.evaluate,
                          "foo.bob + 5",
                          stats=stats, request=request)
        self.assertRaises(exception.EvaluatorParseException,
                          evaluator.evaluate,
                          "stats.bob + 5",
                          stats=stats, request=request)
        self.assertRaises(exception.EvaluatorParseException,
                          evaluator.evaluate,
                          "fake.var + 1",
                          stats=stats, request=request, fake=None)

    def test_bad_expression(self):
        self.assertRaises(exception.EvaluatorParseException,
                          evaluator.evaluate,
                          "1/*1")

    def test_nonnumber_comparison(self):
        nonnumber = {'test': 'foo'}
        request = {'test': 'bar'}
        self.assertRaises(
            exception.EvaluatorParseException,
            evaluator.evaluate,
            "nonnumber.test != request.test",
            nonnumber=nonnumber, request=request)

    def test_div_zero(self):
        self.assertRaises(exception.EvaluatorParseException,
                          evaluator.evaluate,
                          "7 / 0")
manila-10.0.0/manila/tests/scheduler/drivers/0000775000175000017500000000000013656750362021127 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/scheduler/drivers/__init__.py0000664000175000017500000000000013656750227023226 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/scheduler/drivers/test_simple.py0000664000175000017500000001611213656750227024032 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" Tests For Simple Scheduler """ from unittest import mock from oslo_config import cfg from manila import context from manila import db from manila import exception from manila.scheduler.drivers import base from manila.scheduler.drivers import simple from manila.share import rpcapi as share_rpcapi from manila import test from manila.tests import db_utils from manila import utils CONF = cfg.CONF class SimpleSchedulerSharesTestCase(test.TestCase): """Test case for simple scheduler create share method.""" def setUp(self): super(SimpleSchedulerSharesTestCase, self).setUp() self.mock_object(share_rpcapi, 'ShareAPI') self.driver = simple.SimpleScheduler() self.context = context.RequestContext('fake_user', 'fake_project') self.admin_context = context.RequestContext('fake_admin_user', 'fake_project') self.admin_context.is_admin = True @mock.patch.object(utils, 'service_is_up', mock.Mock(return_value=True)) def test_create_share_if_two_services_up(self): share_id = 'fake' fake_share = {'id': share_id, 'size': 1} fake_service_1 = {'disabled': False, 'host': 'fake_host1'} fake_service_2 = {'disabled': False, 'host': 'fake_host2'} fake_result = [(fake_service_1, 2), (fake_service_2, 1)] fake_request_spec = { 'share_id': share_id, 'share_properties': fake_share, } self.mock_object(db, 'service_get_all_share_sorted', mock.Mock(return_value=fake_result)) self.mock_object(base, 'share_update_db', mock.Mock(return_value=db_utils.create_share())) self.driver.schedule_create_share(self.context, fake_request_spec, {}) utils.service_is_up.assert_called_once_with(utils.IsAMatcher(dict)) db.service_get_all_share_sorted.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) base.share_update_db.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_id, 'fake_host1') def test_create_share_if_services_not_available(self): share_id = 'fake' fake_share = {'id': share_id, 'size': 1} fake_result = [] fake_request_spec = { 'share_id': share_id, 'share_properties': fake_share, } with mock.patch.object(db, 'service_get_all_share_sorted', mock.Mock(return_value=fake_result)): self.assertRaises(exception.NoValidHost, self.driver.schedule_create_share, self.context, fake_request_spec, {}) db.service_get_all_share_sorted.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) def test_create_share_if_max_gigabytes_exceeded(self): share_id = 'fake' fake_share = {'id': share_id, 'size': 10001} fake_service_1 = {'disabled': False, 'host': 'fake_host1'} fake_service_2 = {'disabled': False, 'host': 'fake_host2'} fake_result = [(fake_service_1, 5), (fake_service_2, 7)] fake_request_spec = { 'share_id': share_id, 'share_properties': fake_share, } with mock.patch.object(db, 'service_get_all_share_sorted', mock.Mock(return_value=fake_result)): self.assertRaises(exception.NoValidHost, self.driver.schedule_create_share, self.context, fake_request_spec, {}) db.service_get_all_share_sorted.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) @mock.patch.object(utils, 'service_is_up', mock.Mock(return_value=True)) def test_create_share_availability_zone(self): share_id = 'fake' fake_share = { 'id': share_id, 'size': 1, } fake_instance = { 'availability_zone_id': 'fake', } fake_service_1 = { 'disabled': False, 'host': 'fake_host1', 'availability_zone_id': 'fake', } fake_service_2 = { 'disabled': False, 'host': 'fake_host2', 'availability_zone_id': 'super_fake', } fake_result = [(fake_service_1, 0), (fake_service_2, 1)] fake_request_spec = { 'share_id': share_id, 'share_properties': 
fake_share, 'share_instance_properties': fake_instance, } self.mock_object(db, 'service_get_all_share_sorted', mock.Mock(return_value=fake_result)) self.mock_object(base, 'share_update_db', mock.Mock(return_value=db_utils.create_share())) self.driver.schedule_create_share(self.context, fake_request_spec, {}) utils.service_is_up.assert_called_once_with(fake_service_1) base.share_update_db.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_id, fake_service_1['host']) db.service_get_all_share_sorted.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) @mock.patch.object(utils, 'service_is_up', mock.Mock(return_value=True)) def test_create_share_availability_zone_on_host(self): share_id = 'fake' fake_share = { 'id': share_id, 'availability_zone': 'fake:fake', 'size': 1, } fake_service = {'disabled': False, 'host': 'fake'} fake_request_spec = { 'share_id': share_id, 'share_properties': fake_share, } self.mock_object(db, 'service_get_all_share_sorted', mock.Mock(return_value=[(fake_service, 1)])) self.mock_object(base, 'share_update_db', mock.Mock(return_value=db_utils.create_share())) self.driver.schedule_create_share(self.admin_context, fake_request_spec, {}) utils.service_is_up.assert_called_once_with(fake_service) db.service_get_all_share_sorted.assert_called_once_with( utils.IsAMatcher(context.RequestContext)) base.share_update_db.assert_called_once_with( utils.IsAMatcher(context.RequestContext), share_id, 'fake') manila-10.0.0/manila/tests/scheduler/drivers/test_filter.py0000664000175000017500000006046113656750227024034 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Filter Scheduler. 
""" from unittest import mock import ddt from oslo_utils import strutils from manila.common import constants from manila import context from manila import exception from manila.scheduler.drivers import base from manila.scheduler.drivers import filter from manila.scheduler import host_manager from manila.tests.scheduler.drivers import test_base from manila.tests.scheduler import fakes SNAPSHOT_SUPPORT = constants.ExtraSpecs.SNAPSHOT_SUPPORT REPLICATION_TYPE_SPEC = constants.ExtraSpecs.REPLICATION_TYPE_SPEC @ddt.ddt class FilterSchedulerTestCase(test_base.SchedulerTestCase): """Test case for Filter Scheduler.""" driver_cls = filter.FilterScheduler def test___format_filter_properties_active_replica_host_is_provided(self): sched = fakes.FakeFilterScheduler() fake_context = context.RequestContext('user', 'project') request_spec = { 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {}, 'share_type': {'name': 'NFS'}, 'share_id': ['fake-id1'], 'active_replica_host': 'fake_ar_host', } hosts = [fakes.FakeHostState(host, {'replication_domain': 'xyzzy'}) for host in ('fake_ar_host', 'fake_host_2')] self.mock_object(sched.host_manager, 'get_all_host_states_share', mock.Mock(return_value=hosts)) self.mock_object(sched, 'populate_filter_properties_share') retval = sched._format_filter_properties( fake_context, {}, request_spec) self.assertIn('replication_domain', retval[0]) @ddt.data(True, False) def test__format_filter_properties_backend_specified_for_replica( self, has_share_backend_name): sched = fakes.FakeFilterScheduler() fake_context = context.RequestContext('user', 'project') request_spec = { 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {}, 'share_type': { 'name': 'NFS', 'extra_specs': {}, }, 'share_id': 'fake-id1', 'active_replica_host': 'fake_ar_host', } if has_share_backend_name: request_spec['share_type']['extra_specs'].update( {'share_backend_name': 'fake_backend'}) self.mock_object(sched.host_manager, 'get_all_host_states_share', mock.Mock(return_value=[])) retval = sched._format_filter_properties( fake_context, {}, request_spec) self.assertNotIn('share_backend_name', retval[0]['share_type']['extra_specs']) def test_create_share_no_hosts(self): # Ensure empty hosts/child_zones result in NoValidHosts exception. sched = fakes.FakeFilterScheduler() fake_context = context.RequestContext('user', 'project') request_spec = { 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {}, 'share_type': {'name': 'NFS'}, 'share_id': ['fake-id1'], } self.assertRaises(exception.NoValidHost, sched.schedule_create_share, fake_context, request_spec, {}) @mock.patch('manila.scheduler.host_manager.HostManager.' 'get_all_host_states_share') def test_create_share_non_admin(self, _mock_get_all_host_states): # Test creating a volume locally using create_volume, passing # a non-admin context. DB actions should work. self.was_admin = False def fake_get(context, *args, **kwargs): # Make sure this is called with admin context, even though # we're using user context below. 
self.was_admin = context.is_admin return {} sched = fakes.FakeFilterScheduler() _mock_get_all_host_states.side_effect = fake_get fake_context = context.RequestContext('user', 'project') request_spec = { 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {}, 'share_type': {'name': 'NFS'}, 'share_id': ['fake-id1'], } self.assertRaises(exception.NoValidHost, sched.schedule_create_share, fake_context, request_spec, {}) self.assertTrue(self.was_admin) @ddt.data( {'name': 'foo'}, {'name': 'foo', 'extra_specs': {}}, *[{'name': 'foo', 'extra_specs': {SNAPSHOT_SUPPORT: v}} for v in ('True', '<is> True', 'true', '1')] ) @mock.patch('manila.db.service_get_all_by_topic') def test__schedule_share_with_snapshot_support( self, share_type, _mock_service_get_all_by_topic): sched = fakes.FakeFilterScheduler() sched.host_manager = fakes.FakeHostManager() fake_context = context.RequestContext('user', 'project', is_admin=True) fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic) request_spec = { 'share_type': share_type, 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {}, } weighed_host = sched._schedule_share(fake_context, request_spec, {}) self.assertIsNotNone(weighed_host) self.assertIsNotNone(weighed_host.obj) self.assertTrue(hasattr(weighed_host.obj, SNAPSHOT_SUPPORT)) expected_snapshot_support = strutils.bool_from_string( share_type.get('extra_specs', {}).get( SNAPSHOT_SUPPORT, 'True').split()[-1]) self.assertEqual( expected_snapshot_support, getattr(weighed_host.obj, SNAPSHOT_SUPPORT)) self.assertTrue(_mock_service_get_all_by_topic.called) @ddt.data( *[{'name': 'foo', 'extra_specs': {SNAPSHOT_SUPPORT: v}} for v in ('False', '<is> False', 'false', '0')] ) @mock.patch('manila.db.service_get_all_by_topic') def test__schedule_share_without_snapshot_support( self, share_type, _mock_service_get_all_by_topic): sched = fakes.FakeFilterScheduler() sched.host_manager = fakes.FakeHostManager() fake_context = context.RequestContext('user', 'project', is_admin=True) fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic) request_spec = { 'share_type': share_type, 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {'project_id': 1, 'size': 1}, } self.assertRaises(exception.NoValidHost, sched._schedule_share, fake_context, request_spec, {}) self.assertTrue(_mock_service_get_all_by_topic.called) @ddt.data( *[{'name': 'foo', 'extra_specs': { SNAPSHOT_SUPPORT: 'True', REPLICATION_TYPE_SPEC: v }} for v in ('writable', 'readable', 'dr')] ) @mock.patch('manila.db.service_get_all_by_topic') def test__schedule_share_with_valid_replication_spec( self, share_type, _mock_service_get_all_by_topic): sched = fakes.FakeFilterScheduler() sched.host_manager = fakes.FakeHostManager() fake_context = context.RequestContext('user', 'project', is_admin=True) fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic) request_spec = { 'share_type': share_type, 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {'project_id': 1, 'size': 1}, } weighed_host = sched._schedule_share(fake_context, request_spec, {}) self.assertIsNotNone(weighed_host) self.assertIsNotNone(weighed_host.obj) self.assertTrue(hasattr(weighed_host.obj, REPLICATION_TYPE_SPEC)) expected_replication_type_support = ( share_type.get('extra_specs', {}).get(REPLICATION_TYPE_SPEC)) self.assertEqual( expected_replication_type_support, getattr(weighed_host.obj, REPLICATION_TYPE_SPEC)) self.assertTrue(_mock_service_get_all_by_topic.called) @ddt.data(
*[{'name': 'foo', 'extra_specs': { SNAPSHOT_SUPPORT: 'True', REPLICATION_TYPE_SPEC: v }} for v in ('None', 'readwrite', 'activesync')] ) @mock.patch('manila.db.service_get_all_by_topic') def test__schedule_share_with_invalid_replication_type_spec( self, share_type, _mock_service_get_all_by_topic): sched = fakes.FakeFilterScheduler() sched.host_manager = fakes.FakeHostManager() fake_context = context.RequestContext('user', 'project', is_admin=True) fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic) request_spec = { 'share_type': share_type, 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {'project_id': 1, 'size': 1}, } self.assertRaises(exception.NoValidHost, sched._schedule_share, fake_context, request_spec, {}) self.assertTrue(_mock_service_get_all_by_topic.called) def _setup_dedupe_fakes(self, extra_specs): sched = fakes.FakeFilterScheduler() sched.host_manager = fakes.FakeHostManager() fake_context = context.RequestContext('user', 'project', is_admin=True) share_type = {'name': 'foo', 'extra_specs': extra_specs} request_spec = { 'share_type': share_type, 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {'project_id': 1, 'size': 1}, } return sched, fake_context, request_spec @mock.patch('manila.db.service_get_all_by_topic') def test__schedule_share_with_default_dedupe_value( self, _mock_service_get_all_by_topic): sched, fake_context, request_spec = self._setup_dedupe_fakes( {'capabilities:dedupe': '<is> False'}) fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic) weighed_host = sched._schedule_share(fake_context, request_spec, {}) self.assertIsNotNone(weighed_host) self.assertIsNotNone(weighed_host.obj) self.assertTrue(hasattr(weighed_host.obj, 'dedupe')) self.assertFalse(weighed_host.obj.dedupe) self.assertTrue(_mock_service_get_all_by_topic.called) @ddt.data('True', '<is> True') @mock.patch('manila.db.service_get_all_by_topic') def test__schedule_share_with_default_dedupe_value_fail( self, capability, _mock_service_get_all_by_topic): sched, fake_context, request_spec = self._setup_dedupe_fakes( {'capabilities:dedupe': capability}) fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic) self.assertRaises(exception.NoValidHost, sched._schedule_share, fake_context, request_spec, {}) self.assertTrue(_mock_service_get_all_by_topic.called) def test_schedule_share_type_is_none(self): sched = fakes.FakeFilterScheduler() request_spec = { 'share_type': None, 'share_properties': {'project_id': 1, 'size': 1}, } self.assertRaises(exception.InvalidParameterValue, sched._schedule_share, self.context, request_spec) @mock.patch('manila.db.service_get_all_by_topic') def test_schedule_share_with_instance_properties( self, _mock_service_get_all_by_topic): sched = fakes.FakeFilterScheduler() sched.host_manager = fakes.FakeHostManager() fake_context = context.RequestContext('user', 'project', is_admin=True) fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic) share_type = {'name': 'foo'} request_spec = { 'share_type': share_type, 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {'availability_zone_id': "fake_az"}, } self.assertRaises(exception.NoValidHost, sched._schedule_share, fake_context, request_spec, {}) self.assertTrue(_mock_service_get_all_by_topic.called) def test_max_attempts(self): self.flags(scheduler_max_attempts=4) sched = fakes.FakeFilterScheduler() self.assertEqual(4, sched._max_attempts()) def test_invalid_max_attempts(self): self.flags(scheduler_max_attempts=0)
self.assertRaises(exception.InvalidParameterValue, fakes.FakeFilterScheduler) def test_retry_disabled(self): # Retry info should not get populated when re-scheduling is off. self.flags(scheduler_max_attempts=1) sched = fakes.FakeFilterScheduler() request_spec = { 'share_type': {'name': 'iSCSI'}, 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {}, } filter_properties = {} self.assertRaises(exception.NoValidHost, sched._schedule_share, self.context, request_spec, filter_properties=filter_properties) # Should not have retry info in the populated filter properties. self.assertNotIn("retry", filter_properties) def test_retry_attempt_one(self): # Test retry logic on initial scheduling attempt. self.flags(scheduler_max_attempts=2) sched = fakes.FakeFilterScheduler() request_spec = { 'share_type': {'name': 'iSCSI'}, 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {}, } filter_properties = {} self.assertRaises(exception.NoValidHost, sched._schedule_share, self.context, request_spec, filter_properties=filter_properties) num_attempts = filter_properties['retry']['num_attempts'] self.assertEqual(1, num_attempts) def test_retry_attempt_two(self): # Test retry logic when re-scheduling. self.flags(scheduler_max_attempts=2) sched = fakes.FakeFilterScheduler() request_spec = { 'share_type': {'name': 'iSCSI'}, 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {}, } retry = dict(num_attempts=1) filter_properties = dict(retry=retry) self.assertRaises(exception.NoValidHost, sched._schedule_share, self.context, request_spec, filter_properties=filter_properties) num_attempts = filter_properties['retry']['num_attempts'] self.assertEqual(2, num_attempts) def test_retry_exceeded_max_attempts(self): # Test for necessary explosion when max retries is exceeded. self.flags(scheduler_max_attempts=2) sched = fakes.FakeFilterScheduler() request_spec = { 'share_type': {'name': 'iSCSI'}, 'share_properties': {'project_id': 1, 'size': 1}, } retry = dict(num_attempts=2) filter_properties = dict(retry=retry) self.assertRaises(exception.NoValidHost, sched._schedule_share, self.context, request_spec, filter_properties=filter_properties) def test_add_retry_host(self): retry = dict(num_attempts=1, hosts=[]) filter_properties = dict(retry=retry) host = "fakehost" sched = fakes.FakeFilterScheduler() sched._add_retry_host(filter_properties, host) hosts = filter_properties['retry']['hosts'] self.assertEqual(1, len(hosts)) self.assertEqual(host, hosts[0]) def test_post_select_populate(self): # Test addition of certain filter props after a node is selected. retry = {'hosts': [], 'num_attempts': 1} filter_properties = {'retry': retry} sched = fakes.FakeFilterScheduler() host_state = host_manager.HostState('host') host_state.total_capacity_gb = 1024 sched._post_select_populate_filter_properties(filter_properties, host_state) self.assertEqual('host', filter_properties['retry']['hosts'][0]) self.assertEqual(1024, host_state.total_capacity_gb) def test_schedule_create_share_group(self): # Ensure empty hosts/child_zones result in NoValidHosts exception. 
sched = fakes.FakeFilterScheduler() fake_context = context.RequestContext('user', 'project') fake_host = 'fake_host' request_spec = {'share_types': [{'id': 'NFS'}]} self.mock_object(sched, "_get_best_host_for_share_group", mock.Mock(return_value=fake_host)) fake_updated_group = mock.Mock() self.mock_object(base, "share_group_update_db", mock.Mock( return_value=fake_updated_group)) self.mock_object(sched.share_rpcapi, "create_share_group") sched.schedule_create_share_group(fake_context, 'fake_id', request_spec, {}) sched._get_best_host_for_share_group.assert_called_once_with( fake_context, request_spec) base.share_group_update_db.assert_called_once_with( fake_context, 'fake_id', fake_host) sched.share_rpcapi.create_share_group.assert_called_once_with( fake_context, fake_updated_group, fake_host) def test_create_group_no_hosts(self): # Ensure empty hosts/child_zones result in NoValidHosts exception. sched = fakes.FakeFilterScheduler() fake_context = context.RequestContext('user', 'project') request_spec = {'share_types': [{'id': 'NFS'}]} self.assertRaises(exception.NoValidHost, sched.schedule_create_share_group, fake_context, 'fake_id', request_spec, {}) @mock.patch('manila.db.service_get_all_by_topic') def test_get_weighted_candidates_for_share_group( self, _mock_service_get_all_by_topic): sched = fakes.FakeFilterScheduler() sched.host_manager = fakes.FakeHostManager() fake_context = context.RequestContext('user', 'project') fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic) request_spec = {'share_types': [{'name': 'NFS', 'extra_specs': { SNAPSHOT_SUPPORT: 'True', }}]} hosts = sched._get_weighted_candidates_share_group( fake_context, request_spec) self.assertTrue(hosts) @mock.patch('manila.db.service_get_all_by_topic') def test_get_weighted_candidates_for_share_group_no_hosts( self, _mock_service_get_all_by_topic): sched = fakes.FakeFilterScheduler() sched.host_manager = fakes.FakeHostManager() fake_context = context.RequestContext('user', 'project') fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic) request_spec = {'share_types': [{'name': 'NFS', 'extra_specs': { SNAPSHOT_SUPPORT: 'False', }}]} hosts = sched._get_weighted_candidates_share_group( fake_context, request_spec) self.assertEqual([], hosts) @mock.patch('manila.db.service_get_all_by_topic') def test_get_weighted_candidates_for_share_group_many_hosts( self, _mock_service_get_all_by_topic): sched = fakes.FakeFilterScheduler() sched.host_manager = fakes.FakeHostManager() fake_context = context.RequestContext('user', 'project') fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic) request_spec = {'share_types': [{'name': 'NFS', 'extra_specs': { SNAPSHOT_SUPPORT: 'True', }}]} hosts = sched._get_weighted_candidates_share_group( fake_context, request_spec) self.assertEqual(6, len(hosts)) def _host_passes_filters_setup(self, mock_obj): sched = fakes.FakeFilterScheduler() sched.host_manager = fakes.FakeHostManager() fake_context = context.RequestContext('user', 'project', is_admin=True) fakes.mock_host_manager_db_calls(mock_obj) return (sched, fake_context) @mock.patch('manila.db.service_get_all_by_topic') def test_host_passes_filters_happy_day(self, _mock_service_get_topic): sched, ctx = self._host_passes_filters_setup( _mock_service_get_topic) request_spec = {'share_id': 1, 'share_type': {'name': 'fake_type'}, 'share_instance_properties': {}, 'share_properties': {'project_id': 1, 'size': 1}} ret_host = sched.host_passes_filters(ctx, 'host1#_pool0', request_spec, {}) self.assertEqual('host1#_pool0', 
ret_host.host) self.assertTrue(_mock_service_get_topic.called) @mock.patch('manila.db.service_get_all_by_topic') def test_host_passes_filters_no_capacity(self, _mock_service_get_topic): sched, ctx = self._host_passes_filters_setup( _mock_service_get_topic) request_spec = {'share_id': 1, 'share_type': {'name': 'fake_type'}, 'share_instance_properties': {}, 'share_properties': {'project_id': 1, 'size': 1024}} self.assertRaises(exception.NoValidHost, sched.host_passes_filters, ctx, 'host3#_pool0', request_spec, {}) self.assertTrue(_mock_service_get_topic.called) def test_schedule_create_replica_no_host(self): sched = fakes.FakeFilterScheduler() request_spec = { 'share_type': {'name': 'fake_type'}, 'share_properties': {'project_id': 1, 'size': 1}, 'share_instance_properties': {'project_id': 1, 'size': 1}, } self.mock_object(sched.host_manager, 'get_all_host_states_share', mock.Mock(return_value=[])) self.mock_object(sched.host_manager, 'get_filtered_hosts', mock.Mock(return_value=(None, 'filter'))) self.assertRaises(exception.NoValidHost, sched.schedule_create_replica, self.context, request_spec, {}) def test_schedule_create_replica(self): sched = fakes.FakeFilterScheduler() request_spec = fakes.fake_replica_request_spec() host = 'fake_host' replica_id = request_spec['share_instance_properties']['id'] mock_update_db_call = self.mock_object( base, 'share_replica_update_db', mock.Mock(return_value='replica')) mock_share_rpcapi_call = self.mock_object( sched.share_rpcapi, 'create_share_replica') self.mock_object( self.driver_cls, '_schedule_share', mock.Mock(return_value=fakes.get_fake_host(host_name=host))) retval = sched.schedule_create_replica( self.context, fakes.fake_replica_request_spec(), {}) self.assertIsNone(retval) mock_update_db_call.assert_called_once_with( self.context, replica_id, host) mock_share_rpcapi_call.assert_called_once_with( self.context, 'replica', host, request_spec=request_spec, filter_properties={}) manila-10.0.0/manila/tests/scheduler/drivers/test_base.py0000664000175000017500000000756413656750227023466 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Base Scheduler """ from unittest import mock from oslo_config import cfg from oslo_utils import timeutils from manila import context from manila import db from manila.scheduler.drivers import base from manila import test from manila import utils CONF = cfg.CONF class SchedulerTestCase(test.TestCase): """Test case for base scheduler driver class.""" # So we can subclass this test and re-use tests if we need. 
driver_cls = base.Scheduler def setUp(self): super(SchedulerTestCase, self).setUp() self.driver = self.driver_cls() self.context = context.RequestContext('fake_user', 'fake_project') self.topic = 'fake_topic' def test_update_service_capabilities(self): service_name = 'fake_service' host = 'fake_host' capabilities = {'fake_capability': 'fake_value'} with mock.patch.object(self.driver.host_manager, 'update_service_capabilities', mock.Mock()): self.driver.update_service_capabilities( service_name, host, capabilities) (self.driver.host_manager.update_service_capabilities. assert_called_once_with(service_name, host, capabilities)) def test_hosts_up(self): service1 = {'host': 'host1'} service2 = {'host': 'host2'} services = [service1, service2] def fake_service_is_up(*args, **kwargs): if args[0]['host'] == 'host1': return False return True with mock.patch.object(db, 'service_get_all_by_topic', mock.Mock(return_value=services)): with mock.patch.object(utils, 'service_is_up', mock.Mock(side_effect=fake_service_is_up)): result = self.driver.hosts_up(self.context, self.topic) self.assertEqual(['host2'], result) db.service_get_all_by_topic.assert_called_once_with( self.context, self.topic) class SchedulerDriverBaseTestCase(SchedulerTestCase): """Test cases for base scheduler driver class methods. These can't fail if the driver is changed. """ def test_unimplemented_schedule(self): fake_args = (1, 2, 3) fake_kwargs = {'cat': 'meow'} self.assertRaises(NotImplementedError, self.driver.schedule, self.context, self.topic, 'schedule_something', *fake_args, **fake_kwargs) class SchedulerDriverModuleTestCase(test.TestCase): """Test case for scheduler driver module methods.""" def setUp(self): super(SchedulerDriverModuleTestCase, self).setUp() self.context = context.RequestContext('fake_user', 'fake_project') @mock.patch.object(db, 'share_update', mock.Mock()) def test_share_host_update_db(self): with mock.patch.object(timeutils, 'utcnow', mock.Mock(return_value='fake-now')): base.share_update_db(self.context, 31337, 'fake_host') db.share_update.assert_called_once_with( self.context, 31337, {'host': 'fake_host', 'scheduled_at': 'fake-now'}) manila-10.0.0/manila/tests/scheduler/test_utils.py0000664000175000017500000000361413656750227022226 0ustar zuulzuul00000000000000# Copyright 2016 EMC Corporation OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For utils. 
""" import ddt from manila.scheduler import utils from manila import test @ddt.ddt class UtilsTestCase(test.TestCase): """Test case for utils.""" @ddt.data( ({'extra_specs': {'thin_provisioning': True}}, True), ({'extra_specs': {'thin_provisioning': False}}, False), ({'extra_specs': {'foo': 'bar'}}, True), ({'foo': 'bar'}, True), ({'extra_specs': {'thin_provisioning': ' True'}}, True), ({'extra_specs': {'thin_provisioning': ' False'}}, False), ({'extra_specs': {'thin_provisioning': ' True'}}, False), ({'extra_specs': {}}, True), ({}, True), ) @ddt.unpack def test_use_thin_logic(self, properties, use_thin): use_thin_logic = utils.use_thin_logic(properties) self.assertEqual(use_thin, use_thin_logic) @ddt.data( (True, True), (False, False), (None, False), ([True, False], True), ([True], True), ([False], False), ('wrong', False), ) @ddt.unpack def test_thin_provisioning(self, thin_capabilities, thin): thin_provisioning = utils.thin_provisioning(thin_capabilities) self.assertEqual(thin, thin_provisioning) manila-10.0.0/manila/tests/scheduler/test_rpcapi.py0000664000175000017500000001244313656750227022344 0ustar zuulzuul00000000000000# Copyright 2012, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit Tests for manila.scheduler.rpcapi """ import copy from unittest import mock from oslo_config import cfg from manila import context from manila.scheduler import rpcapi as scheduler_rpcapi from manila import test CONF = cfg.CONF class SchedulerRpcAPITestCase(test.TestCase): def tearDown(self): super(SchedulerRpcAPITestCase, self).tearDown() def _test_scheduler_api(self, method, rpc_method, fanout=False, **kwargs): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = scheduler_rpcapi.SchedulerAPI() expected_retval = 'foo' if method == 'call' else None target = { "fanout": fanout, "version": kwargs.pop('version', '1.0'), } expected_msg = copy.deepcopy(kwargs) self.fake_args = None self.fake_kwargs = None def _fake_prepare_method(*args, **kwds): for kwd in kwds: self.assertEqual(target[kwd], kwds[kwd]) return rpcapi.client def _fake_rpc_method(*args, **kwargs): self.fake_args = args self.fake_kwargs = kwargs if expected_retval: return expected_retval with mock.patch.object(rpcapi.client, "prepare") as mock_prepared: mock_prepared.side_effect = _fake_prepare_method with mock.patch.object(rpcapi.client, rpc_method) as mock_method: mock_method.side_effect = _fake_rpc_method retval = getattr(rpcapi, method)(ctxt, **kwargs) self.assertEqual(expected_retval, retval) expected_args = [ctxt, method, expected_msg] for arg, expected_arg in zip(self.fake_args, expected_args): self.assertEqual(expected_arg, arg) def test_update_service_capabilities(self): self._test_scheduler_api('update_service_capabilities', rpc_method='cast', service_name='fake_name', host='fake_host', capabilities='fake_capabilities', fanout=True) def test_create_share_instance(self): self._test_scheduler_api('create_share_instance', rpc_method='cast', request_spec='fake_request_spec', filter_properties='filter_properties', 
version='1.2') def test_get_pools(self): self._test_scheduler_api('get_pools', rpc_method='call', filters=None, version='1.9') def test_create_share_group(self): self._test_scheduler_api('create_share_group', rpc_method='cast', share_group_id='fake_share_group_id', request_spec='fake_request_spec', filter_properties='filter_properties', version='1.8') def test_migrate_share_to_host(self): self._test_scheduler_api('migrate_share_to_host', rpc_method='cast', share_id='share_id', host='host', force_host_assisted_migration=True, preserve_metadata=True, writable=True, nondisruptive=False, preserve_snapshots=True, new_share_network_id='fake_net_id', new_share_type_id='fake_type_id', request_spec='fake_request_spec', filter_properties='filter_properties', version='1.7') def test_create_share_replica(self): self._test_scheduler_api('create_share_replica', rpc_method='cast', request_spec='fake_request_spec', filter_properties='filter_properties', version='1.5') def test_manage_share(self): self._test_scheduler_api('manage_share', rpc_method='cast', share_id='share_id', driver_options='fake_driver_options', request_spec='fake_request_spec', filter_properties='filter_properties', version='1.6') manila-10.0.0/manila/tests/scheduler/filters/0000775000175000017500000000000013656750362021121 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/scheduler/filters/__init__.py0000664000175000017500000000000013656750227023220 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/scheduler/filters/test_json.py0000664000175000017500000003255613656750227023516 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For JsonFilter. 
""" from oslo_serialization import jsonutils from manila.scheduler.filters import json from manila import test from manila.tests.scheduler import fakes class HostFiltersTestCase(test.TestCase): """Test case for JsonFilter.""" def setUp(self): super(HostFiltersTestCase, self).setUp() self.json_query = jsonutils.dumps( ['and', ['>=', '$free_ram_mb', 1024], ['>=', '$free_disk_mb', 200 * 1024]]) self.filter = json.JsonFilter() def test_json_filter_passes(self): filter_properties = {'resource_type': {'memory_mb': 1024, 'root_gb': 200, 'ephemeral_gb': 0}, 'scheduler_hints': {'query': self.json_query}} capabilities = {'enabled': True} host = fakes.FakeHostState('host1', {'free_ram_mb': 1024, 'free_disk_mb': 200 * 1024, 'capabilities': capabilities}) self.assertTrue(self.filter.host_passes(host, filter_properties)) def test_json_filter_passes_with_no_query(self): filter_properties = {'resource_type': {'memory_mb': 1024, 'root_gb': 200, 'ephemeral_gb': 0}} capabilities = {'enabled': True} host = fakes.FakeHostState('host1', {'free_ram_mb': 0, 'free_disk_mb': 0, 'capabilities': capabilities}) self.assertTrue(self.filter.host_passes(host, filter_properties)) def test_json_filter_fails_on_memory(self): filter_properties = {'resource_type': {'memory_mb': 1024, 'root_gb': 200, 'ephemeral_gb': 0}, 'scheduler_hints': {'query': self.json_query}} capabilities = {'enabled': True} host = fakes.FakeHostState('host1', {'free_ram_mb': 1023, 'free_disk_mb': 200 * 1024, 'capabilities': capabilities}) self.assertFalse(self.filter.host_passes(host, filter_properties)) def test_json_filter_fails_on_disk(self): filter_properties = {'resource_type': {'memory_mb': 1024, 'root_gb': 200, 'ephemeral_gb': 0}, 'scheduler_hints': {'query': self.json_query}} capabilities = {'enabled': True} host = fakes.FakeHostState('host1', {'free_ram_mb': 1024, 'free_disk_mb': (200 * 1024) - 1, 'capabilities': capabilities}) self.assertFalse(self.filter.host_passes(host, filter_properties)) def test_json_filter_fails_on_caps_disabled(self): json_query = jsonutils.dumps( ['and', ['>=', '$free_ram_mb', 1024], ['>=', '$free_disk_mb', 200 * 1024], '$capabilities.enabled']) filter_properties = {'resource_type': {'memory_mb': 1024, 'root_gb': 200, 'ephemeral_gb': 0}, 'scheduler_hints': {'query': json_query}} capabilities = {'enabled': False} host = fakes.FakeHostState('host1', {'free_ram_mb': 1024, 'free_disk_mb': 200 * 1024, 'capabilities': capabilities}) self.assertFalse(self.filter.host_passes(host, filter_properties)) def test_json_filter_fails_on_service_disabled(self): json_query = jsonutils.dumps( ['and', ['>=', '$free_ram_mb', 1024], ['>=', '$free_disk_mb', 200 * 1024], ['not', '$service.disabled']]) filter_properties = {'resource_type': {'memory_mb': 1024, 'local_gb': 200}, 'scheduler_hints': {'query': json_query}} capabilities = {'enabled': True} host = fakes.FakeHostState('host1', {'free_ram_mb': 1024, 'free_disk_mb': 200 * 1024, 'capabilities': capabilities}) self.assertFalse(self.filter.host_passes(host, filter_properties)) def test_json_filter_happy_day(self): """Test json filter more thoroughly.""" raw = ['and', '$capabilities.enabled', ['=', '$capabilities.opt1', 'match'], ['or', ['and', ['<', '$free_ram_mb', 30], ['<', '$free_disk_mb', 300]], ['and', ['>', '$free_ram_mb', 30], ['>', '$free_disk_mb', 300]]]] filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } # Passes capabilities = {'enabled': True, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', {'free_ram_mb': 10, 
'free_disk_mb': 200, 'capabilities': capabilities, 'service': service}) self.assertTrue(self.filter.host_passes(host, filter_properties)) # Passes capabilities = {'enabled': True, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', {'free_ram_mb': 40, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertTrue(self.filter.host_passes(host, filter_properties)) # Fails due to capabilities being disabled capabilities = {'enabled': False, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', {'free_ram_mb': 40, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filter.host_passes(host, filter_properties)) # Fails due to being exact memory/disk we don't want capabilities = {'enabled': True, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', {'free_ram_mb': 30, 'free_disk_mb': 300, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filter.host_passes(host, filter_properties)) # Fails due to memory lower but disk higher capabilities = {'enabled': True, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', {'free_ram_mb': 20, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filter.host_passes(host, filter_properties)) # Fails due to capabilities 'opt1' not equal capabilities = {'enabled': True, 'opt1': 'no-match'} service = {'enabled': True} host = fakes.FakeHostState('host1', {'free_ram_mb': 20, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filter.host_passes(host, filter_properties)) def test_json_filter_basic_operators(self): host = fakes.FakeHostState('host1', {'capabilities': {'enabled': True}}) # (operator, arguments, expected_result) ops_to_test = [ ['=', [1, 1], True], ['=', [1, 2], False], ['<', [1, 2], True], ['<', [1, 1], False], ['<', [2, 1], False], ['>', [2, 1], True], ['>', [2, 2], False], ['>', [2, 3], False], ['<=', [1, 2], True], ['<=', [1, 1], True], ['<=', [2, 1], False], ['>=', [2, 1], True], ['>=', [2, 2], True], ['>=', [2, 3], False], ['in', [1, 1], True], ['in', [1, 1, 2, 3], True], ['in', [4, 1, 2, 3], False], ['not', [True], False], ['not', [False], True], ['or', [True, False], True], ['or', [False, False], False], ['and', [True, True], True], ['and', [False, False], False], ['and', [True, False], False], # Nested ((True or False) and (2 > 1)) == Passes ['and', [['or', True, False], ['>', 2, 1]], True]] for (op, args, expected) in ops_to_test: raw = [op] + args filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } self.assertEqual(expected, self.filter.host_passes(host, filter_properties)) # This results in [False, True, False, True] and if any are True # then it passes... 
raw = ['not', True, False, True, False] filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } self.assertTrue(self.filter.host_passes(host, filter_properties)) # This results in [False, False, False] and if any are True # then it passes...which this doesn't raw = ['not', True, True, True] filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } self.assertFalse(self.filter.host_passes(host, filter_properties)) def test_json_filter_unknown_operator_raises(self): raw = ['!=', 1, 2] filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } host = fakes.FakeHostState('host1', {'capabilities': {'enabled': True}}) self.assertRaises(KeyError, self.filter.host_passes, host, filter_properties) def test_json_filter_empty_filters_pass(self): host = fakes.FakeHostState('host1', {'capabilities': {'enabled': True}}) raw = [] filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } self.assertTrue(self.filter.host_passes(host, filter_properties)) raw = {} filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } self.assertTrue(self.filter.host_passes(host, filter_properties)) def test_json_filter_invalid_num_arguments_fails(self): host = fakes.FakeHostState('host1', {'capabilities': {'enabled': True}}) raw = ['>', ['and', ['or', ['not', ['<', ['>=', ['<=', ['in', ]]]]]]]] filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } self.assertFalse(self.filter.host_passes(host, filter_properties)) raw = ['>', 1] filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } self.assertFalse(self.filter.host_passes(host, filter_properties)) def test_json_filter_unknown_variable_ignored(self): host = fakes.FakeHostState('host1', {'capabilities': {'enabled': True}}) raw = ['=', '$........', 1, 1] filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } self.assertTrue(self.filter.host_passes(host, filter_properties)) raw = ['=', '$foo', 2, 2] filter_properties = { 'scheduler_hints': { 'query': jsonutils.dumps(raw), }, } self.assertTrue(self.filter.host_passes(host, filter_properties)) manila-10.0.0/manila/tests/scheduler/filters/test_extra_specs_ops.py0000664000175000017500000000522213656750227025734 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler Host Filters. 
""" import ddt from manila.scheduler.filters import extra_specs_ops from manila import test @ddt.ddt class ExtraSpecsOpsTestCase(test.TestCase): def _do_extra_specs_ops_test(self, value, req, matches): assertion = self.assertTrue if matches else self.assertFalse assertion(extra_specs_ops.match(value, req)) @ddt.unpack @ddt.data( ('1', '1', True), ('', '1', False), ('3', '1', False), ('222', '2', False), ('4', '> 2', False), ('123', '= 123', True), ('124', '= 123', True), ('34', '=234', False), ('34', '=', False), ('123', 's== 123', True), ('1234', 's== 123', False), ('1234', 's!= 123', True), ('123', 's!= 123', False), ('1000', 's>= 234', False), ('1234', 's<= 1000', False), ('2', 's< 12', False), ('12', 's> 2', False), ('12311321', ' 11', True), ('12311321', ' 12311321', True), ('12311321', ' 12311321 ', True), ('12310321', ' 11', False), ('12310321', ' 11 ', False), ('abc', ' ABC', True), (True, 'True', True), (True, ' True', True), (True, ' False', False), (False, 'False', True), (False, ' False', True), (False, ' True', False), (False, 'Nonsense', False), (False, ' Nonsense', True), (True, 'False', False), (False, 'True', False), ('12', ' 11 12', True), ('13', ' 11 12', False), ('13', ' 11 12 ', False), ('abc', ' ABC def', True), ('2', '<= 10', True), ('3', '<= 2', False), ('3', '>= 1', True), ('2', '>= 3', False), ('nfs', 'NFS', True), ('NFS', 'nfs', True), ('cifs', 'nfs', False), ) def test_extra_specs_matches_simple(self, value, req, matches): self._do_extra_specs_ops_test( value, req, matches) manila-10.0.0/manila/tests/scheduler/filters/test_ignore_attempted_hosts.py0000664000175000017500000000365113656750227027311 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For IgnoreAttemptedHost filter. """ from manila.scheduler.filters import ignore_attempted_hosts from manila import test from manila.tests.scheduler import fakes class HostFiltersTestCase(test.TestCase): """Test case for IgnoreAttemptedHost filter.""" def setUp(self): super(HostFiltersTestCase, self).setUp() self.filter = ignore_attempted_hosts.IgnoreAttemptedHostsFilter() def test_ignore_attempted_hosts_filter_disabled(self): # Test case where re-scheduling is disabled. host = fakes.FakeHostState('host1', {}) filter_properties = {} self.assertTrue(self.filter.host_passes(host, filter_properties)) def test_ignore_attempted_hosts_filter_pass(self): # Node not previously tried. host = fakes.FakeHostState('host1', {}) attempted = dict(num_attempts=2, hosts=['host2']) filter_properties = dict(retry=attempted) self.assertTrue(self.filter.host_passes(host, filter_properties)) def test_ignore_attempted_hosts_filter_fail(self): # Node was already tried. 
host = fakes.FakeHostState('host1', {}) attempted = dict(num_attempts=2, hosts=['host1']) filter_properties = dict(retry=attempted) self.assertFalse(self.filter.host_passes(host, filter_properties)) manila-10.0.0/manila/tests/scheduler/filters/test_availability_zone.py0000664000175000017500000000761113656750227026244 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For AvailabilityZoneFilter. """ import ddt from oslo_context import context from manila.scheduler.filters import availability_zone from manila import test from manila.tests.scheduler import fakes @ddt.ddt class HostFiltersTestCase(test.TestCase): """Test case for AvailabilityZoneFilter.""" def setUp(self): super(HostFiltersTestCase, self).setUp() self.filter = availability_zone.AvailabilityZoneFilter() self.az_id = 'e3ecad6f-e984-4cd1-b149-d83c962374a8' self.fake_service = { 'service': { 'availability_zone_id': self.az_id, 'availability_zone': { 'name': 'nova', 'id': self.az_id } } } @staticmethod def _make_zone_request(zone, is_admin=False): ctxt = context.RequestContext('fake', 'fake', is_admin=is_admin) return { 'context': ctxt, 'request_spec': { 'resource_properties': { 'availability_zone_id': zone } } } def test_availability_zone_filter_same(self): request = self._make_zone_request(self.az_id) host = fakes.FakeHostState('host1', self.fake_service) self.assertTrue(self.filter.host_passes(host, request)) def test_availability_zone_filter_different(self): request = self._make_zone_request('bad') host = fakes.FakeHostState('host1', self.fake_service) self.assertFalse(self.filter.host_passes(host, request)) def test_availability_zone_filter_empty(self): request = {} host = fakes.FakeHostState('host1', self.fake_service) self.assertTrue(self.filter.host_passes(host, request)) def test_availability_zone_filter_both_request_AZ_and_type_AZs_match(self): request = self._make_zone_request( '9382098d-d40f-42a2-8f31-8eb78ee18c02') request['request_spec']['availability_zones'] = [ 'nova', 'super nova', 'hypernova'] service = { 'availability_zone': { 'name': 'nova', 'id': '9382098d-d40f-42a2-8f31-8eb78ee18c02', }, 'availability_zone_id': '9382098d-d40f-42a2-8f31-8eb78ee18c02', } host = fakes.FakeHostState('host1', {'service': service}) self.assertTrue(self.filter.host_passes(host, request)) @ddt.data((['zone1', 'zone2', 'zone 4', 'zone3'], 'zone2', True), (['zone1zone2zone3'], 'zone2', False), (['zone1zone2zone3'], 'nova', False), (['zone1', 'zone2', 'zone 4', 'zone3'], 'zone 4', True)) @ddt.unpack def test_availability_zone_filter_only_share_type_AZs( self, supported_azs, request_az, host_passes): service = { 'availability_zone': { 'name': request_az, 'id': '9382098d-d40f-42a2-8f31-8eb78ee18c02', }, 'availability_zone_id': '9382098d-d40f-42a2-8f31-8eb78ee18c02', } request = self._make_zone_request(None) request['request_spec']['availability_zones'] = supported_azs host = fakes.FakeHostState('host1', {'service': service}) self.assertEqual(host_passes, 
self.filter.host_passes(host, request)) manila-10.0.0/manila/tests/scheduler/filters/test_retry.py0000664000175000017500000000342513656750227023703 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For RetryFilter. """ from manila.scheduler.filters import retry from manila import test from manila.tests.scheduler import fakes class HostFiltersTestCase(test.TestCase): """Test case for RetryFilter.""" def setUp(self): super(HostFiltersTestCase, self).setUp() self.filter = retry.RetryFilter() def test_retry_filter_disabled(self): # Test case where retry/re-scheduling is disabled. host = fakes.FakeHostState('host1', {}) filter_properties = {} self.assertTrue(self.filter.host_passes(host, filter_properties)) def test_retry_filter_pass(self): # Node not previously tried. host = fakes.FakeHostState('host1', {}) retry = dict(num_attempts=2, hosts=['host2']) filter_properties = dict(retry=retry) self.assertTrue(self.filter.host_passes(host, filter_properties)) def test_retry_filter_fail(self): # Node was already tried. host = fakes.FakeHostState('host1', {}) retry = dict(num_attempts=1, hosts=['host1']) filter_properties = dict(retry=retry) self.assertFalse(self.filter.host_passes(host, filter_properties)) manila-10.0.0/manila/tests/scheduler/filters/test_share_replication.py0000664000175000017500000001166113656750227026232 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for the ShareReplicationFilter. 
""" import ddt from oslo_context import context from manila.scheduler.filters import share_replication from manila import test from manila.tests.scheduler import fakes @ddt.ddt class ShareReplicationFilterTestCase(test.TestCase): """Test case for ShareReplicationFilter.""" def setUp(self): super(ShareReplicationFilterTestCase, self).setUp() self.filter = share_replication.ShareReplicationFilter() self.debug_log = self.mock_object(share_replication.LOG, 'debug') @staticmethod def _create_replica_request(replication_domain='kashyyyk', replication_type='dr', active_replica_host=fakes.FAKE_HOST_STRING_1, all_replica_hosts=fakes.FAKE_HOST_STRING_1, is_admin=False): ctxt = context.RequestContext('fake', 'fake', is_admin=is_admin) return { 'context': ctxt, 'request_spec': { 'active_replica_host': active_replica_host, 'all_replica_hosts': all_replica_hosts, }, 'resource_type': { 'extra_specs': { 'replication_type': replication_type, }, }, 'replication_domain': replication_domain, } @ddt.data('tatooine', '') def test_share_replication_filter_fails_incompatible_domain(self, domain): request = self._create_replica_request() host = fakes.FakeHostState('host1', { 'replication_domain': domain, }) self.assertFalse(self.filter.host_passes(host, request)) self.assertTrue(self.debug_log.called) def test_share_replication_filter_fails_no_replication_domain(self): request = self._create_replica_request() host = fakes.FakeHostState('host1', { 'replication_domain': None, }) self.assertFalse(self.filter.host_passes(host, request)) self.assertTrue(self.debug_log.called) def test_share_replication_filter_fails_host_has_replicas(self): all_replica_hosts = ','.join(['host1', fakes.FAKE_HOST_STRING_1]) request = self._create_replica_request( all_replica_hosts=all_replica_hosts) host = fakes.FakeHostState('host1', { 'replication_domain': 'kashyyyk', }) self.assertFalse(self.filter.host_passes(host, request)) self.assertTrue(self.debug_log.called) def test_share_replication_filter_passes_no_replication_type(self): request = self._create_replica_request(replication_type=None) host = fakes.FakeHostState('host1', { 'replication_domain': 'tatooine', }) self.assertTrue(self.filter.host_passes(host, request)) def test_share_replication_filter_passes_no_active_replica_host(self): request = self._create_replica_request(active_replica_host=None) host = fakes.FakeHostState('host1', { 'replication_domain': 'tatooine', }) self.assertTrue(self.filter.host_passes(host, request)) def test_share_replication_filter_passes_happy_day(self): all_replica_hosts = ','.join(['host1', fakes.FAKE_HOST_STRING_1]) request = self._create_replica_request( all_replica_hosts=all_replica_hosts) host = fakes.FakeHostState('host2', { 'replication_domain': 'kashyyyk', }) self.assertTrue(self.filter.host_passes(host, request)) def test_share_replication_filter_empty(self): request = {} host = fakes.FakeHostState('host1', { 'replication_domain': 'naboo', }) self.assertTrue(self.filter.host_passes(host, request)) manila-10.0.0/manila/tests/scheduler/filters/test_capacity.py0000664000175000017500000003316713656750227024341 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For CapacityFilter. """ import ddt from manila.scheduler.filters import capacity from manila import test from manila.tests.scheduler import fakes from manila import utils @ddt.ddt class HostFiltersTestCase(test.TestCase): """Test case CapacityFilter.""" def setUp(self): super(HostFiltersTestCase, self).setUp() self.filter = capacity.CapacityFilter() def _stub_service_is_up(self, ret_value): def fake_service_is_up(service): return ret_value self.mock_object(utils, 'service_is_up', fake_service_is_up) @ddt.data( {'size': 100, 'share_on': None, 'host': 'host1'}, {'size': 100, 'share_on': 'host1#pool1', 'host': 'host1#pools1'}) @ddt.unpack def test_capacity_filter_passes(self, size, share_on, host): self._stub_service_is_up(True) filter_properties = {'size': size, 'share_exists_on': share_on} service = {'disabled': False} host = fakes.FakeHostState(host, {'total_capacity_gb': 500, 'free_capacity_gb': 200, 'updated_at': None, 'service': service}) self.assertTrue(self.filter.host_passes(host, filter_properties)) @ddt.data( {'free_capacity': 120, 'total_capacity': 200, 'reserved': 20}, {'free_capacity': None, 'total_capacity': None, 'reserved': None}) @ddt.unpack def test_capacity_filter_fails(self, free_capacity, total_capacity, reserved): self._stub_service_is_up(True) filter_properties = {'size': 100} service = {'disabled': False} host = fakes.FakeHostState('host1', {'total_capacity_gb': total_capacity, 'free_capacity_gb': free_capacity, 'reserved_percentage': reserved, 'updated_at': None, 'service': service}) self.assertFalse(self.filter.host_passes(host, filter_properties)) def test_capacity_filter_passes_unknown(self): free = 'unknown' self._stub_service_is_up(True) filter_properties = {'size': 100} service = {'disabled': False} host = fakes.FakeHostState('host1', {'free_capacity_gb': free, 'updated_at': None, 'service': service}) self.assertTrue(self.filter.host_passes(host, filter_properties)) @ddt.data( {'free_capacity': 'unknown', 'total_capacity': 'unknown'}, {'free_capacity': 200, 'total_capacity': 'unknown'}) @ddt.unpack def test_capacity_filter_passes_total(self, free_capacity, total_capacity): self._stub_service_is_up(True) filter_properties = {'size': 100} service = {'disabled': False} host = fakes.FakeHostState('host1', {'free_capacity_gb': free_capacity, 'total_capacity_gb': total_capacity, 'reserved_percentage': 0, 'updated_at': None, 'service': service}) self.assertTrue(self.filter.host_passes(host, filter_properties)) @ddt.data( {'free': 200, 'total': 'unknown', 'reserved': 5}, {'free': 50, 'total': 'unknown', 'reserved': 0}, {'free': 200, 'total': 0, 'reserved': 0}) @ddt.unpack def test_capacity_filter_fails_total(self, free, total, reserved): self._stub_service_is_up(True) filter_properties = {'size': 100} service = {'disabled': False} host = fakes.FakeHostState('host1', {'free_capacity_gb': free, 'total_capacity_gb': total, 'reserved_percentage': reserved, 'updated_at': None, 'service': service}) self.assertFalse(self.filter.host_passes(host, filter_properties)) @ddt.data( {'size': 100, 'cap_thin': ' True', 'total': 500, 'free': 200, 'provisioned': 500, 
'max_ratio': 2.0, 'reserved': 5, 'thin_prov': True, 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 3000, 'cap_thin': ' True', 'total': 500, 'free': 200, 'provisioned': 7000, 'max_ratio': 20, 'reserved': 5, 'thin_prov': True, 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': ' False', 'total': 500, 'free': 200, 'provisioned': 300, 'max_ratio': 1.0, 'reserved': 5, 'thin_prov': False, 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': ' True', 'total': 500, 'free': 200, 'provisioned': 400, 'max_ratio': 1.0, 'reserved': 5, 'thin_prov': True, 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': ' True', 'total': 500, 'free': 125, 'provisioned': 400, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': True, 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': ' True', 'total': 500, 'free': 80, 'provisioned': 600, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': True, 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': ' True', 'total': 500, 'free': 100, 'provisioned': 400, 'max_ratio': 2.0, 'reserved': 0, 'thin_prov': True, 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': ' True', 'total': 500, 'free': 200, 'provisioned': 500, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': [True, False], 'cap_thin_key': 'thin_provisioning'}, {'size': 3000, 'cap_thin': ' True', 'total': 500, 'free': 200, 'provisioned': 7000, 'max_ratio': 20, 'reserved': 5, 'thin_prov': [True], 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': ' False', 'total': 500, 'free': 200, 'provisioned': 300, 'max_ratio': 1.0, 'reserved': 5, 'thin_prov': [False], 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': 'True', 'total': 500, 'free': 200, 'provisioned': 400, 'max_ratio': 1.0, 'reserved': 5, 'thin_prov': [False, True], 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': 'False', 'total': 500, 'free': 200, 'provisioned': 300, 'max_ratio': 1.0, 'reserved': 5, 'thin_prov': False, 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': 'true', 'total': 500, 'free': 125, 'provisioned': 400, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': [True, ], 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': 'false', 'total': 500, 'free': 200, 'provisioned': 300, 'max_ratio': 1.0, 'reserved': 5, 'thin_prov': [False, ], 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': None, 'total': 500, 'free': 80, 'provisioned': 600, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': True, 'cap_thin_key': None},) @ddt.unpack def test_filter_thin_passes(self, size, cap_thin, total, free, provisioned, max_ratio, reserved, thin_prov, cap_thin_key): self._stub_service_is_up(True) filter_properties = { 'size': size, 'share_type': { 'extra_specs': { cap_thin_key: cap_thin, } } } service = {'disabled': False} host = fakes.FakeHostState('host1', {'total_capacity_gb': total, 'free_capacity_gb': free, 'provisioned_capacity_gb': provisioned, 'max_over_subscription_ratio': max_ratio, 'reserved_percentage': reserved, 'thin_provisioning': thin_prov, 'updated_at': None, 'service': service}) self.assertTrue(self.filter.host_passes(host, filter_properties)) @ddt.data( {'size': 200, 'cap_thin': ' True', 'total': 500, 'free': 100, 'provisioned': 400, 'max_ratio': 0.8, 'reserved': 0, 'thin_prov': True, 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': ' True', 'total': 500, 'free': 200, 'provisioned': 700, 'max_ratio': 1.5, 'reserved': 
5, 'thin_prov': True, 'cap_thin_key': 'thin_provisioning'}, {'size': 2000, 'cap_thin': ' True', 'total': 500, 'free': 30, 'provisioned': 9000, 'max_ratio': 20.0, 'reserved': 0, 'thin_prov': True, 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': ' True', 'total': 500, 'free': 100, 'provisioned': 1000, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': True, 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': ' False', 'total': 500, 'free': 100, 'provisioned': 400, 'max_ratio': 1.0, 'reserved': 5, 'thin_prov': False, 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': ' True', 'total': 500, 'free': 0, 'provisioned': 800, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': True, 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': ' True', 'total': 500, 'free': 99, 'provisioned': 1000, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': True, 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 400, 'cap_thin': ' True', 'total': 500, 'free': 200, 'provisioned': 600, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': True, 'cap_thin_key': 'thin_provisioning'}, {'size': 200, 'cap_thin': ' True', 'total': 500, 'free': 100, 'provisioned': 400, 'max_ratio': 0.8, 'reserved': 0, 'thin_prov': [False, True], 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 2000, 'cap_thin': ' True', 'total': 500, 'free': 30, 'provisioned': 9000, 'max_ratio': 20.0, 'reserved': 0, 'thin_prov': [True], 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': ' False', 'total': 500, 'free': 100, 'provisioned': 400, 'max_ratio': 1.0, 'reserved': 5, 'thin_prov': [False], 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': 'False', 'total': 500, 'free': 100, 'provisioned': 400, 'max_ratio': 1.0, 'reserved': 5, 'thin_prov': False, 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': 'True', 'total': 500, 'free': 0, 'provisioned': 800, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': [False, True], 'cap_thin_key': 'thin_provisioning'}, {'size': 100, 'cap_thin': 'true', 'total': 500, 'free': 99, 'provisioned': 1000, 'max_ratio': 2.0, 'reserved': 5, 'thin_prov': [True, ], 'cap_thin_key': 'capabilities:thin_provisioning'}, {'size': 100, 'cap_thin': 'false', 'total': 500, 'free': 100, 'provisioned': 400, 'max_ratio': 1.0, 'reserved': 5, 'thin_prov': [False, ], 'cap_thin_key': 'thin_provisioning'}, {'size': 2000, 'cap_thin': None, 'total': 500, 'free': 30, 'provisioned': 9000, 'max_ratio': 20.0, 'reserved': 0, 'thin_prov': [True], 'cap_thin_key': None},) @ddt.unpack def test_filter_thin_fails(self, size, cap_thin, total, free, provisioned, max_ratio, reserved, thin_prov, cap_thin_key): self._stub_service_is_up(True) filter_properties = { 'size': size, 'share_type': { 'extra_specs': { cap_thin_key: cap_thin, } } } service = {'disabled': False} host = fakes.FakeHostState('host1', {'total_capacity_gb': total, 'free_capacity_gb': free, 'provisioned_capacity_gb': provisioned, 'max_over_subscription_ratio': max_ratio, 'reserved_percentage': reserved, 'thin_provisioning': thin_prov, 'updated_at': None, 'service': service}) self.assertFalse(self.filter.host_passes(host, filter_properties)) manila-10.0.0/manila/tests/scheduler/filters/test_create_from_snapshot.py0000664000175000017500000000707613656750227026751 0ustar zuulzuul00000000000000# Copyright 2020 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for the CreateFromSnapshotFilter. """ import ddt from manila.scheduler.filters import create_from_snapshot from manila import test from manila.tests.scheduler import fakes @ddt.ddt class CreateFromSnapshotFilterTestCase(test.TestCase): """Test case for CreateFromSnapshotFilter.""" def setUp(self): super(CreateFromSnapshotFilterTestCase, self).setUp() self.filter = create_from_snapshot.CreateFromSnapshotFilter() @staticmethod def _create_request(snapshot_id=None, snapshot_host=None, replication_domain=None): return { 'request_spec': { 'snapshot_id': snapshot_id, 'snapshot_host': snapshot_host, }, 'replication_domain': replication_domain, } @staticmethod def _create_host_state(host=None, rep_domain=None): return fakes.FakeHostState(host, { 'replication_domain': rep_domain, }) def test_without_snapshot_id(self): request = self._create_request() host = self._create_host_state(host='fake_host') self.assertTrue(self.filter.host_passes(host, request)) def test_without_snapshot_host(self): request = self._create_request(snapshot_id='fake_snapshot_id', replication_domain="fake_domain") host = self._create_host_state(host='fake_host', rep_domain='fake_domain_2') self.assertTrue(self.filter.host_passes(host, request)) @ddt.data(('host1@AAA#pool1', 'host1@AAA#pool1'), ('host1@AAA#pool1', 'host1@AAA#pool2')) @ddt.unpack def test_same_backend(self, request_host, host_state): request = self._create_request(snapshot_id='fake_snapshot_id', snapshot_host=request_host) host = self._create_host_state(host=host_state) self.assertTrue(self.filter.host_passes(host, request)) def test_same_availability_zone(self): request = self._create_request(snapshot_id='fake_snapshot_id', snapshot_host='fake_host', replication_domain="fake_domain") host = self._create_host_state(host='fake_host_2', rep_domain='fake_domain') self.assertTrue(self.filter.host_passes(host, request)) def test_different_backend_and_availability_zone(self): request = self._create_request(snapshot_id='fake_snapshot_id', snapshot_host='fake_host', replication_domain="fake_domain") host = self._create_host_state(host='fake_host_2', rep_domain='fake_domain_2') self.assertFalse(self.filter.host_passes(host, request)) manila-10.0.0/manila/tests/scheduler/filters/test_base.py0000664000175000017500000001263613656750227023454 0ustar zuulzuul00000000000000# Copyright (c) 2013 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
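# NOTE(editor): illustrative sketch only, not part of the module under test.
# As TestBaseFilter below exercises it, BaseFilter.filter_all() is expected to
# yield just the objects for which _filter_one() returns True, e.g.:
#
#     f = base.BaseFilter()
#     f._filter_one = lambda obj, props: obj in (2, 3)  # hypothetical stub
#     list(f.filter_all([1, 2, 3, 4], {'x': 'y'}))      # -> [2, 3]
#
# BaseFilterHandler.get_filtered_objects() chains such filters over the object
# list and also returns the name of the last filter that ran.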
from unittest import mock from manila.scheduler.filters import base from manila import test class TestBaseFilter(test.TestCase): def setUp(self): super(TestBaseFilter, self).setUp() self.filter = base.BaseFilter() def test_filter_one_is_called(self): filters = [1, 2, 3, 4] filter_properties = {'x': 'y'} side_effect = lambda value, props: value in [2, 3] # noqa: E731 self.mock_object(self.filter, '_filter_one', mock.Mock(side_effect=side_effect)) result = list(self.filter.filter_all(filters, filter_properties)) self.assertEqual([2, 3], result) class FakeExtension(object): def __init__(self, plugin): self.plugin = plugin class BaseFakeFilter(base.BaseFilter): pass class FakeFilter1(BaseFakeFilter): """Derives from BaseFakeFilter and has a fake entry point defined. Entry point is returned by fake ExtensionManager. Should be included in the output of all_classes. """ class FakeFilter2(BaseFakeFilter): """Derives from BaseFakeFilter but has no entry point. Should be not included in all_classes. """ class FakeFilter3(base.BaseFilter): """Does not derive from BaseFakeFilter. Should not be included. """ class FakeFilter4(BaseFakeFilter): """Derives from BaseFakeFilter and has an entry point. Should be included. """ class FakeFilter5(BaseFakeFilter): """Derives from BaseFakeFilter but has no entry point. Should not be included. """ run_filter_once_per_request = True class FakeExtensionManager(list): def __init__(self, namespace): classes = [FakeFilter1, FakeFilter3, FakeFilter4] exts = map(FakeExtension, classes) super(FakeExtensionManager, self).__init__(exts) self.namespace = namespace class TestBaseFilterHandler(test.TestCase): def setUp(self): super(TestBaseFilterHandler, self).setUp() self.mock_object(base.base_handler.extension, 'ExtensionManager', FakeExtensionManager) self.handler = base.BaseFilterHandler(BaseFakeFilter, 'fake_filters') def test_get_all_classes(self): # In order for a FakeFilter to be returned by get_all_classes, it has # to comply with these rules: # * It must be derived from BaseFakeFilter # AND # * It must have a python entrypoint assigned (returned by # FakeExtensionManager) expected = [FakeFilter1, FakeFilter4] result = self.handler.get_all_classes() self.assertEqual(expected, result) def _get_filtered_objects(self, filter_classes, index=0): filter_objs_initial = [1, 2, 3, 4] filter_properties = {'x': 'y'} return self.handler.get_filtered_objects(filter_classes, filter_objs_initial, filter_properties, index) @mock.patch.object(FakeFilter4, 'filter_all') @mock.patch.object(FakeFilter3, 'filter_all', return_value=None) def test_get_filtered_objects_return_none(self, fake3_filter_all, fake4_filter_all): filter_classes = [FakeFilter1, FakeFilter2, FakeFilter3, FakeFilter4] result, last_filter = self._get_filtered_objects(filter_classes) self.assertIsNone(result) self.assertFalse(fake4_filter_all.called) self.assertEqual('FakeFilter3', last_filter) def test_get_filtered_objects(self): filter_objs_expected = [1, 2, 3, 4] filter_classes = [FakeFilter1, FakeFilter2, FakeFilter3, FakeFilter4] result, last_filter = self._get_filtered_objects(filter_classes) self.assertEqual(filter_objs_expected, result) self.assertEqual('FakeFilter4', last_filter) def test_get_filtered_objects_with_filter_run_once(self): filter_objs_expected = [1, 2, 3, 4] filter_classes = [FakeFilter5] with mock.patch.object(FakeFilter5, 'filter_all', return_value=filter_objs_expected ) as fake5_filter_all: result, last_filter = self._get_filtered_objects(filter_classes) self.assertEqual(filter_objs_expected, 
result) self.assertEqual(1, fake5_filter_all.call_count) result, last_filter = self._get_filtered_objects( filter_classes, index=1) self.assertEqual(filter_objs_expected, result) self.assertEqual(1, fake5_filter_all.call_count) result, last_filter = self._get_filtered_objects( filter_classes, index=2) self.assertEqual(filter_objs_expected, result) self.assertEqual(1, fake5_filter_all.call_count) manila-10.0.0/manila/tests/scheduler/filters/test_base_host.py0000664000175000017500000000335413656750227024506 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler Host Filters. """ from oslo_serialization import jsonutils from manila.scheduler.filters import base_host from manila import test class TestFilter(test.TestCase): pass class TestBogusFilter(object): """Class that doesn't inherit from BaseHostFilter.""" pass class HostFiltersTestCase(test.TestCase): """Test case for host filters.""" def setUp(self): super(HostFiltersTestCase, self).setUp() self.json_query = jsonutils.dumps( ['and', ['>=', '$free_ram_mb', 1024], ['>=', '$free_disk_mb', 200 * 1024]]) namespace = 'manila.scheduler.filters' filter_handler = base_host.HostFilterHandler(namespace) classes = filter_handler.get_all_classes() self.class_map = {} for cls in classes: self.class_map[cls.__name__] = cls def test_all_filters(self): # Double check at least a couple of known filters exist self.assertIn('JsonFilter', self.class_map) self.assertIn('CapabilitiesFilter', self.class_map) self.assertIn('AvailabilityZoneFilter', self.class_map) manila-10.0.0/manila/tests/scheduler/filters/test_driver.py0000664000175000017500000001161213656750227024026 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
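# NOTE(editor): illustrative sketch only, not upstream code. As the tests
# below exercise it, DriverFilter evaluates the backend-supplied
# 'filter_function' string against the host and the request, for example:
#
#     host = fakes.FakeHostState('host1', {
#         'total_capacity_gb': 100,
#         'capabilities': {'filter_function': 'stats.total_capacity_gb < 200'},
#     })
#     # expected to pass, since 100 < 200
#
# The namespaces exercised here are stats.*, capabilities.*, extra.* (share
# type extra specs) and share.* (request resource properties); a missing
# filter_function passes the host, while an evaluation error such as
# '1 / 0 == 0' is expected to fail it.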
from manila.scheduler.filters import driver from manila import test from manila.tests.scheduler import fakes class HostFiltersTestCase(test.TestCase): def setUp(self): super(HostFiltersTestCase, self).setUp() self.filter = driver.DriverFilter() def test_passing_function(self): host1 = fakes.FakeHostState( 'host1', { 'capabilities': { 'filter_function': '1 == 1', } }) filter_properties = {'share_type': {}} self.assertTrue(self.filter.host_passes(host1, filter_properties)) def test_failing_function(self): host1 = fakes.FakeHostState( 'host1', { 'capabilities': { 'filter_function': '1 == 2', } }) filter_properties = {'share_type': {}} self.assertFalse(self.filter.host_passes(host1, filter_properties)) def test_no_filter_function(self): host1 = fakes.FakeHostState( 'host1', { 'capabilities': { 'filter_function': None, } }) filter_properties = {'share_type': {}} self.assertTrue(self.filter.host_passes(host1, filter_properties)) def test_not_implemented(self): host1 = fakes.FakeHostState( 'host1', { 'capabilities': {} }) filter_properties = {'share_type': {}} self.assertTrue(self.filter.host_passes(host1, filter_properties)) def test_no_share_extra_specs(self): host1 = fakes.FakeHostState( 'host1', { 'capabilities': { 'filter_function': '1 == 1', } }) filter_properties = {'share_type': {}} self.assertTrue(self.filter.host_passes(host1, filter_properties)) def test_function_extra_spec_replacement(self): host1 = fakes.FakeHostState( 'host1', { 'capabilities': { 'filter_function': 'extra.var == 1', } }) filter_properties = { 'share_type': { 'extra_specs': { 'var': 1, } } } self.assertTrue(self.filter.host_passes(host1, filter_properties)) def test_function_stats_replacement(self): host1 = fakes.FakeHostState( 'host1', { 'total_capacity_gb': 100, 'capabilities': { 'filter_function': 'stats.total_capacity_gb < 200', } }) filter_properties = {'share_type': {}} self.assertTrue(self.filter.host_passes(host1, filter_properties)) def test_function_share_replacement(self): host1 = fakes.FakeHostState( 'host1', { 'capabilities': { 'filter_function': 'share.size < 5', } }) filter_properties = { 'request_spec': { 'resource_properties': { 'size': 1 } } } self.assertTrue(self.filter.host_passes(host1, filter_properties)) def test_function_exception_caught(self): host1 = fakes.FakeHostState( 'host1', { 'capabilities': { 'filter_function': '1 / 0 == 0', } }) filter_properties = {} self.assertFalse(self.filter.host_passes(host1, filter_properties)) def test_capabilities(self): host1 = fakes.FakeHostState( 'host1', { 'capabilities': { 'foo': 10, 'filter_function': 'capabilities.foo == 10', }, }) filter_properties = {} self.assertTrue(self.filter.host_passes(host1, filter_properties)) def test_wrong_capabilities(self): host1 = fakes.FakeHostState( 'host1', { 'capabilities': { 'bar': 10, 'filter_function': 'capabilities.foo == 10', }, }) filter_properties = {} self.assertFalse(self.filter.host_passes(host1, filter_properties)) manila-10.0.0/manila/tests/scheduler/filters/test_capabilities.py0000664000175000017500000001520513656750227025166 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For CapabilitiesFilter. """ import ddt from manila.scheduler.filters import capabilities from manila import test from manila.tests.scheduler import fakes @ddt.ddt class HostFiltersTestCase(test.TestCase): """Test case for CapabilitiesFilter.""" def setUp(self): super(HostFiltersTestCase, self).setUp() self.filter = capabilities.CapabilitiesFilter() def _do_test_type_filter_extra_specs(self, ecaps, especs, passes): capabilities = {'enabled': True} capabilities.update(ecaps) service = {'disabled': False} filter_properties = {'resource_type': {'name': 'fake_type', 'extra_specs': especs}} host = fakes.FakeHostState('host1', {'free_capacity_gb': 1024, 'capabilities': capabilities, 'service': service}) assertion = self.assertTrue if passes else self.assertFalse assertion(self.filter.host_passes(host, filter_properties)) def test_capability_filter_passes_extra_specs_simple(self): self._do_test_type_filter_extra_specs( ecaps={'opt1': '1', 'opt2': '2'}, especs={'opt1': '1', 'opt2': '2'}, passes=True) def test_capability_filter_passes_extra_specs_ignore_azs_spec(self): self._do_test_type_filter_extra_specs( ecaps={'opt1': '1', 'opt2': '2'}, especs={'opt1': '1', 'opt2': '2', 'availability_zones': 'az1,az2'}, passes=True) def test_capability_filter_fails_extra_specs_simple(self): self._do_test_type_filter_extra_specs( ecaps={'opt1': '1', 'opt2': '2'}, especs={'opt1': '1', 'opt2': '222'}, passes=False) def test_capability_filter_passes_extra_specs_complex(self): self._do_test_type_filter_extra_specs( ecaps={'opt1': 10, 'opt2': 5}, especs={'opt1': '>= 2', 'opt2': '<= 8'}, passes=True) def test_capability_filter_fails_extra_specs_complex(self): self._do_test_type_filter_extra_specs( ecaps={'opt1': 10, 'opt2': 5}, especs={'opt1': '>= 2', 'opt2': '>= 8'}, passes=False) def test_capability_filter_passes_extra_specs_list_simple(self): self._do_test_type_filter_extra_specs( ecaps={'opt1': ['1', '2'], 'opt2': '2'}, especs={'opt1': '1', 'opt2': '2'}, passes=True) @ddt.data(' True', ' False') def test_capability_filter_passes_extra_specs_list_complex(self, opt1): self._do_test_type_filter_extra_specs( ecaps={'opt1': [True, False], 'opt2': ['1', '2']}, especs={'opt1': opt1, 'opt2': '<= 8'}, passes=True) def test_capability_filter_fails_extra_specs_list_simple(self): self._do_test_type_filter_extra_specs( ecaps={'opt1': ['1', '2'], 'opt2': ['2']}, especs={'opt1': '3', 'opt2': '2'}, passes=False) def test_capability_filter_fails_extra_specs_list_complex(self): self._do_test_type_filter_extra_specs( ecaps={'opt1': [True, False], 'opt2': ['1', '2']}, especs={'opt1': 'fake', 'opt2': '<= 8'}, passes=False) def test_capability_filter_passes_scope_extra_specs(self): self._do_test_type_filter_extra_specs( ecaps={'scope_lv1': {'opt1': 10}}, especs={'capabilities:scope_lv1:opt1': '>= 2'}, passes=True) def test_capability_filter_passes_fakescope_extra_specs(self): self._do_test_type_filter_extra_specs( ecaps={'scope_lv1': {'opt1': 10}, 'opt2': 5}, especs={'scope_lv1:opt1': '= 2', 'opt2': '>= 3'}, passes=True) def test_capability_filter_fails_scope_extra_specs(self): self._do_test_type_filter_extra_specs( 
ecaps={'scope_lv1': {'opt1': 10}}, especs={'capabilities:scope_lv1:opt1': '<= 2'}, passes=False) def test_capability_filter_passes_multi_level_scope_extra_specs(self): self._do_test_type_filter_extra_specs( ecaps={'scope_lv0': {'scope_lv1': {'scope_lv2': {'opt1': 10}}}}, especs={'capabilities:scope_lv0:scope_lv1:scope_lv2:opt1': '>= 2'}, passes=True) def test_capability_filter_fails_wrong_scope_extra_specs(self): self._do_test_type_filter_extra_specs( ecaps={'scope_lv0': {'opt1': 10}}, especs={'capabilities:scope_lv1:opt1': '>= 2'}, passes=False) def test_capability_filter_passes_multi_level_scope_extra_specs_list(self): self._do_test_type_filter_extra_specs( ecaps={ 'scope_lv0': { 'scope_lv1': { 'scope_lv2': { 'opt1': [True, False], }, }, }, }, especs={ 'capabilities:scope_lv0:scope_lv1:scope_lv2:opt1': ' True', }, passes=True) def test_capability_filter_fails_multi_level_scope_extra_specs_list(self): self._do_test_type_filter_extra_specs( ecaps={ 'scope_lv0': { 'scope_lv1': { 'scope_lv2': { 'opt1': [True, False], 'opt2': ['1', '2'], }, }, }, }, especs={ 'capabilities:scope_lv0:scope_lv1:scope_lv2:opt1': ' True', 'capabilities:scope_lv0:scope_lv1:scope_lv2:opt2': '3', }, passes=False) def test_capability_filter_fails_wrong_scope_extra_specs_list(self): self._do_test_type_filter_extra_specs( ecaps={'scope_lv0': {'opt1': [True, False]}}, especs={'capabilities:scope_lv1:opt1': ' True'}, passes=False) manila-10.0.0/manila/tests/scheduler/weighers/0000775000175000017500000000000013656750362021266 5ustar zuulzuul00000000000000manila-10.0.0/manila/tests/scheduler/weighers/__init__.py0000664000175000017500000000000013656750227023365 0ustar zuulzuul00000000000000manila-10.0.0/manila/tests/scheduler/weighers/test_host_affinity.py0000664000175000017500000001317013656750227025547 0ustar zuulzuul00000000000000# Copyright 2020 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for Host Affinity Weigher. 
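As the cases below assert, the weigher is expected to rank hosts for shares created from snapshots: 100 for the same back end and pool as the source share, 75 for the same back end but a different pool, 50 for a different back end in the same availability zone, 25 for a different back end and availability zone, and 0 when the request carries no snapshot id or snapshot host.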
""" from unittest import mock from manila.common import constants from manila.db import api as db_api from manila.scheduler.weighers import host_affinity from manila import test from manila.tests import db_utils from manila.tests.scheduler import fakes class HostAffinityWeigherTestCase(test.TestCase): def setUp(self): super(HostAffinityWeigherTestCase, self).setUp() self.weigher = host_affinity.HostAffinityWeigher() @staticmethod def _create_weight_properties(snapshot_id=None, snapshot_host=None, availability_zone_id=None): return { 'request_spec': { 'snapshot_id': snapshot_id, 'snapshot_host': snapshot_host, }, 'availability_zone_id': availability_zone_id, } def test_without_snapshot_id(self): host_state = fakes.FakeHostState('host1', { 'host': 'host1@AAA#pool2', }) weight_properties = self._create_weight_properties( snapshot_host='fake_snapshot_host') weight = self.weigher._weigh_object(host_state, weight_properties) self.assertEqual(0, weight) def test_without_snapshot_host(self): host_state = fakes.FakeHostState('host1', { 'host': 'host1@AAA#pool2', }) weight_properties = self._create_weight_properties( snapshot_id='fake_snapshot_id') weight = self.weigher._weigh_object(host_state, weight_properties) self.assertEqual(0, weight) def test_same_backend_and_pool(self): share = db_utils.create_share(host="host1@AAA#pool1", status=constants.STATUS_AVAILABLE) snapshot = db_utils.create_snapshot(share_id=share['id']) self.mock_object(db_api, 'share_snapshot_get', mock.Mock(return_value=snapshot)) host_state = fakes.FakeHostState('host1@AAA#pool1', {}) weight_properties = self._create_weight_properties( snapshot_id=snapshot['id'], snapshot_host=share['host']) weight = self.weigher._weigh_object(host_state, weight_properties) self.assertEqual(100, weight) def test_same_backend_different_pool(self): share = db_utils.create_share(host="host1@AAA#pool1", status=constants.STATUS_AVAILABLE) snapshot = db_utils.create_snapshot(share_id=share['id']) self.mock_object(db_api, 'share_snapshot_get', mock.Mock(return_value=snapshot)) host_state = fakes.FakeHostState('host1@AAA#pool2', {}) weight_properties = self._create_weight_properties( snapshot_id=snapshot['id'], snapshot_host=share['host']) weight = self.weigher._weigh_object(host_state, weight_properties) self.assertEqual(75, weight) def test_different_backend_same_availability_zone(self): share = db_utils.create_share( host="host1@AAA#pool1", status=constants.STATUS_AVAILABLE, availability_zone=fakes.FAKE_AZ_1['name']) snapshot = db_utils.create_snapshot(share_id=share['id']) self.mock_object(db_api, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(db_api, 'availability_zone_get', mock.Mock(return_value=type( 'FakeAZ', (object, ), { 'id': fakes.FAKE_AZ_1['id'], 'name': fakes.FAKE_AZ_1['name'], }))) host_state = fakes.FakeHostState('host2@BBB#pool1', {}) weight_properties = self._create_weight_properties( snapshot_id=snapshot['id'], snapshot_host=share['host'], availability_zone_id='zone1') weight = self.weigher._weigh_object(host_state, weight_properties) self.assertEqual(50, weight) def test_different_backend_and_availability_zone(self): share = db_utils.create_share( host="host1@AAA#pool1", status=constants.STATUS_AVAILABLE, availability_zone=fakes.FAKE_AZ_1['name']) snapshot = db_utils.create_snapshot(share_id=share['id']) self.mock_object(db_api, 'share_snapshot_get', mock.Mock(return_value=snapshot)) self.mock_object(db_api, 'availability_zone_get', mock.Mock(return_value=type( 'FakeAZ', (object,), { 'id': 
fakes.FAKE_AZ_2['id'], 'name': fakes.FAKE_AZ_2['name'], }))) host_state = fakes.FakeHostState('host2@BBB#pool1', {}) weight_properties = self._create_weight_properties( snapshot_id=snapshot['id'], snapshot_host=share['host'], availability_zone_id='zone1' ) weight = self.weigher._weigh_object(host_state, weight_properties) self.assertEqual(25, weight) manila-10.0.0/manila/tests/scheduler/weighers/test_goodness.py0000664000175000017500000001372013656750227024523 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Goodness Weigher. """ from manila.scheduler.weighers import goodness from manila import test from manila.tests.scheduler import fakes class GoodnessWeigherTestCase(test.TestCase): def test_goodness_weigher_with_no_goodness_function(self): weigher = goodness.GoodnessWeigher() host_state = fakes.FakeHostState('host1', { 'host': 'host.example.com', 'capabilities': { 'foo': '50' } }) weight_properties = {} weight = weigher._weigh_object(host_state, weight_properties) self.assertEqual(0, weight) def test_goodness_weigher_passing_host(self): weigher = goodness.GoodnessWeigher() host_state = fakes.FakeHostState('host1', { 'host': 'host.example.com', 'capabilities': { 'goodness_function': '100' } }) host_state_2 = fakes.FakeHostState('host2', { 'host': 'host2.example.com', 'capabilities': { 'goodness_function': '0' } }) host_state_3 = fakes.FakeHostState('host3', { 'host': 'host3.example.com', 'capabilities': { 'goodness_function': '100 / 2' } }) weight_properties = {} weight = weigher._weigh_object(host_state, weight_properties) self.assertEqual(100, weight) weight = weigher._weigh_object(host_state_2, weight_properties) self.assertEqual(0, weight) weight = weigher._weigh_object(host_state_3, weight_properties) self.assertEqual(50, weight) def test_goodness_weigher_capabilities_substitution(self): weigher = goodness.GoodnessWeigher() host_state = fakes.FakeHostState('host1', { 'host': 'host.example.com', 'capabilities': { 'foo': 50, 'goodness_function': '10 + capabilities.foo' } }) weight_properties = {} weight = weigher._weigh_object(host_state, weight_properties) self.assertEqual(60, weight) def test_goodness_weigher_extra_specs_substitution(self): weigher = goodness.GoodnessWeigher() host_state = fakes.FakeHostState('host1', { 'host': 'host.example.com', 'capabilities': { 'goodness_function': '10 + extra.foo' } }) weight_properties = { 'share_type': { 'extra_specs': { 'foo': 50 } } } weight = weigher._weigh_object(host_state, weight_properties) self.assertEqual(60, weight) def test_goodness_weigher_share_substitution(self): weigher = goodness.GoodnessWeigher() host_state = fakes.FakeHostState('host1', { 'host': 'host.example.com', 'capabilities': { 'goodness_function': '10 + share.foo' } }) weight_properties = { 'request_spec': { 'resource_properties': { 'foo': 50 } } } weight = weigher._weigh_object(host_state, weight_properties) self.assertEqual(60, weight) def 
test_goodness_weigher_stats_substitution(self): weigher = goodness.GoodnessWeigher() host_state = fakes.FakeHostState('host1', { 'host': 'host.example.com', 'capabilities': { 'goodness_function': 'stats.free_capacity_gb > 20' }, 'free_capacity_gb': 50 }) weight_properties = {} weight = weigher._weigh_object(host_state, weight_properties) self.assertEqual(100, weight) def test_goodness_weigher_invalid_substitution(self): weigher = goodness.GoodnessWeigher() host_state = fakes.FakeHostState('host1', { 'host': 'host.example.com', 'capabilities': { 'goodness_function': '10 + stats.my_val' }, 'foo': 50 }) weight_properties = {} weight = weigher._weigh_object(host_state, weight_properties) self.assertEqual(0, weight) def test_goodness_weigher_host_rating_out_of_bounds(self): weigher = goodness.GoodnessWeigher() host_state = fakes.FakeHostState('host1', { 'host': 'host.example.com', 'capabilities': { 'goodness_function': '-10' } }) host_state_2 = fakes.FakeHostState('host2', { 'host': 'host2.example.com', 'capabilities': { 'goodness_function': '200' } }) weight_properties = {} weight = weigher._weigh_object(host_state, weight_properties) self.assertEqual(0, weight) weight = weigher._weigh_object(host_state_2, weight_properties) self.assertEqual(0, weight) def test_goodness_weigher_invalid_goodness_function(self): weigher = goodness.GoodnessWeigher() host_state = fakes.FakeHostState('host1', { 'host': 'host.example.com', 'capabilities': { 'goodness_function': '50 / 0' } }) weight_properties = {} weight = weigher._weigh_object(host_state, weight_properties) self.assertEqual(0, weight) manila-10.0.0/manila/tests/scheduler/weighers/test_capacity.py0000664000175000017500000002654313656750227024506 0ustar zuulzuul00000000000000# Copyright 2011-2012 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Capacity Weigher. 
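The expected ranking follows the free-space formula spelled out in the test comments below: with thin provisioning enabled, free = floor(total * max_over_subscription_ratio - provisioned_capacity_gb - total * reserved); otherwise free = floor(free_capacity_gb - total * reserved). The result is then scaled by capacity_weight_multiplier and normalized across hosts.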
""" from unittest import mock import ddt from oslo_config import cfg from manila import context from manila.scheduler.weighers import base_host from manila.scheduler.weighers import capacity from manila.share import utils from manila import test from manila.tests.scheduler import fakes CONF = cfg.CONF @ddt.ddt class CapacityWeigherTestCase(test.TestCase): def setUp(self): super(CapacityWeigherTestCase, self).setUp() self.host_manager = fakes.FakeHostManager() self.weight_handler = base_host.HostWeightHandler( 'manila.scheduler.weighers') def _get_weighed_host(self, hosts, weight_properties=None, index=0): if weight_properties is None: weight_properties = {'size': 1} return self.weight_handler.get_weighed_objects( [capacity.CapacityWeigher], hosts, weight_properties)[index] @mock.patch('manila.db.api.IMPL.service_get_all_by_topic') def _get_all_hosts(self, _mock_service_get_all_by_topic, disabled=False): ctxt = context.get_admin_context() fakes.mock_host_manager_db_calls(_mock_service_get_all_by_topic, disabled=disabled) host_states = self.host_manager.get_all_host_states_share(ctxt) _mock_service_get_all_by_topic.assert_called_once_with( ctxt, CONF.share_topic) return host_states # NOTE(xyang): If thin_provisioning = True and # max_over_subscription_ratio >= 1, use the following formula: # free = math.floor(total * host_state.max_over_subscription_ratio # - host_state.provisioned_capacity_gb # - total * reserved) # Otherwise, use the following formula: # free = math.floor(free_space - total * reserved) @ddt.data( {'cap_thin': ' True', 'cap_thin_key': 'capabilities:thin_provisioning', 'winner': 'host2'}, {'cap_thin': ' False', 'cap_thin_key': 'thin_provisioning', 'winner': 'host1'}, {'cap_thin': 'True', 'cap_thin_key': 'capabilities:thin_provisioning', 'winner': 'host2'}, {'cap_thin': 'False', 'cap_thin_key': 'thin_provisioning', 'winner': 'host1'}, {'cap_thin': 'true', 'cap_thin_key': 'capabilities:thin_provisioning', 'winner': 'host2'}, {'cap_thin': 'false', 'cap_thin_key': 'thin_provisioning', 'winner': 'host1'}, {'cap_thin': None, 'cap_thin_key': None, 'winner': 'host2'}, ) @ddt.unpack def test_default_of_spreading_first(self, cap_thin, cap_thin_key, winner): hosts = self._get_all_hosts() # pylint: disable=no-value-for-parameter # Results for the 1st test # {'capabilities:thin_provisioning': ' True'}: # host1: thin_provisioning = False # free_capacity_gb = 1024 # free = math.floor(1024 - 1024 * 0.1) = 921.0 # weight = 0.40 # host2: thin_provisioning = True # max_over_subscription_ratio = 2.0 # free_capacity_gb = 300 # free = math.floor(2048 * 2.0 - 1748 - 2048 * 0.1)=2143.0 # weight = 1.0 # host3: thin_provisioning = [False] # free_capacity_gb = 512 # free = math.floor(256 - 512 * 0)=256.0 # weight = 0.08 # host4: thin_provisioning = [True] # max_over_subscription_ratio = 1.0 # free_capacity_gb = 200 # free = math.floor(2048 * 1.0 - 1848 - 2048 * 0.05) = 97.0 # weight = 0.0 # host5: thin_provisioning = [True, False] # max_over_subscription_ratio = 1.5 # free_capacity_gb = 500 # free = math.floor(2048 * 1.5 - 1548 - 2048 * 0.05) = 1421.0 # weight = 0.65 # host6: thin_provisioning = False # free = inf # weight = 0.0 # so, host2 should win: weight_properties = { 'size': 1, 'share_type': { 'extra_specs': { cap_thin_key: cap_thin, } } } weighed_host = self._get_weighed_host( hosts, weight_properties=weight_properties) self.assertEqual(1.0, weighed_host.weight) self.assertEqual( winner, utils.extract_host(weighed_host.obj.host)) def test_unknown_is_last(self): hosts = self._get_all_hosts() # 
pylint: disable=no-value-for-parameter last_host = self._get_weighed_host(hosts, index=-1) self.assertEqual( 'host6', utils.extract_host(last_host.obj.host)) self.assertEqual(0.0, last_host.weight) @ddt.data( {'cap_thin': ' True', 'cap_thin_key': 'capabilities:thin_provisioning', 'winner': 'host4'}, {'cap_thin': ' False', 'cap_thin_key': 'thin_provisioning', 'winner': 'host2'}, {'cap_thin': 'True', 'cap_thin_key': 'capabilities:thin_provisioning', 'winner': 'host4'}, {'cap_thin': 'False', 'cap_thin_key': 'thin_provisioning', 'winner': 'host2'}, {'cap_thin': 'true', 'cap_thin_key': 'capabilities:thin_provisioning', 'winner': 'host4'}, {'cap_thin': 'false', 'cap_thin_key': 'thin_provisioning', 'winner': 'host2'}, {'cap_thin': None, 'cap_thin_key': None, 'winner': 'host4'}, ) @ddt.unpack def test_capacity_weight_multiplier_negative_1(self, cap_thin, cap_thin_key, winner): self.flags(capacity_weight_multiplier=-1.0) hosts = self._get_all_hosts() # pylint: disable=no-value-for-parameter # Results for the 1st test # {'capabilities:thin_provisioning': ' True'}: # host1: thin_provisioning = False # free_capacity_gb = 1024 # free = math.floor(1024 - 1024 * 0.1) = 921.0 # free * (-1) = -921.0 # weight = -0.40 # host2: thin_provisioning = True # max_over_subscription_ratio = 2.0 # free_capacity_gb = 300 # free = math.floor(2048 * 2.0-1748-2048 * 0.1) = 2143.0 # free * (-1) = -2143.0 # weight = -1.0 # host3: thin_provisioning = [False] # free_capacity_gb = 512 # free = math.floor(256 - 512 * 0) = 256.0 # free * (-1) = -256.0 # weight = -0.08 # host4: thin_provisioning = [True] # max_over_subscription_ratio = 1.0 # free_capacity_gb = 200 # free = math.floor(2048 * 1.0 - 1848 - 2048 * 0.05) = 97.0 # free * (-1) = -97.0 # weight = 0.0 # host5: thin_provisioning = [True, False] # max_over_subscription_ratio = 1.5 # free_capacity_gb = 500 # free = math.floor(2048 * 1.5 - 1548 - 2048 * 0.05) = 1421.0 # free * (-1) = -1421.0 # weight = -0.65 # host6: thin_provisioning = False # free = inf # free * (-1) = -inf # weight = 0.0 # so, host4 should win: weight_properties = { 'size': 1, 'share_type': { 'extra_specs': { cap_thin_key: cap_thin, } } } weighed_host = self._get_weighed_host( hosts, weight_properties=weight_properties) self.assertEqual(0.0, weighed_host.weight) self.assertEqual( winner, utils.extract_host(weighed_host.obj.host)) @ddt.data( {'cap_thin': ' True', 'cap_thin_key': 'capabilities:thin_provisioning', 'winner': 'host2'}, {'cap_thin': ' False', 'cap_thin_key': 'thin_provisioning', 'winner': 'host1'}, {'cap_thin': 'True', 'cap_thin_key': 'capabilities:thin_provisioning', 'winner': 'host2'}, {'cap_thin': 'False', 'cap_thin_key': 'thin_provisioning', 'winner': 'host1'}, {'cap_thin': 'true', 'cap_thin_key': 'capabilities:thin_provisioning', 'winner': 'host2'}, {'cap_thin': 'false', 'cap_thin_key': 'thin_provisioning', 'winner': 'host1'}, {'cap_thin': None, 'cap_thin_key': None, 'winner': 'host2'}, ) @ddt.unpack def test_capacity_weight_multiplier_2(self, cap_thin, cap_thin_key, winner): self.flags(capacity_weight_multiplier=2.0) hosts = self._get_all_hosts() # pylint: disable=no-value-for-parameter # Results for the 1st test # {'capabilities:thin_provisioning': ' True'}: # host1: thin_provisioning = False # free_capacity_gb = 1024 # free = math.floor(1024-1024*0.1) = 921.0 # free * 2 = 1842.0 # weight = 0.81 # host2: thin_provisioning = True # max_over_subscription_ratio = 2.0 # free_capacity_gb = 300 # free = math.floor(2048 * 2.0 - 1748 - 2048 * 0.1) = 2143.0 # free * 2 = 4286.0 # weight = 2.0 # 
host3: thin_provisioning = [False] # free_capacity_gb = 512 # free = math.floor(256 - 512 * 0) = 256.0 # free * 2 = 512.0 # weight = 0.16 # host4: thin_provisioning = [True] # max_over_subscription_ratio = 1.0 # free_capacity_gb = 200 # free = math.floor(2048 * 1.0 - 1848 - 2048 * 0.05) = 97.0 # free * 2 = 194.0 # weight = 0.0 # host5: thin_provisioning = [True, False] # max_over_subscription_ratio = 1.5 # free_capacity_gb = 500 # free = math.floor(2048 * 1.5 - 1548 - 2048 * 0.05) = 1421.0 # free * 2 = 2842.0 # weight = 1.29 # host6: thin_provisioning = False # free = inf # weight = 0.0 # so, host2 should win: weight_properties = { 'size': 1, 'share_type': { 'extra_specs': { cap_thin_key: cap_thin, } } } weighed_host = self._get_weighed_host( hosts, weight_properties=weight_properties) self.assertEqual(2.0, weighed_host.weight) self.assertEqual( winner, utils.extract_host(weighed_host.obj.host)) manila-10.0.0/manila/tests/scheduler/weighers/test_pool.py0000664000175000017500000001513513656750227023655 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Pool Weigher. """ from unittest import mock from oslo_config import cfg from oslo_utils import timeutils from manila import context from manila.db import api as db_api from manila.scheduler.weighers import base_host from manila.scheduler.weighers import pool from manila.share import utils from manila import test from manila.tests.scheduler import fakes CONF = cfg.CONF class PoolWeigherTestCase(test.TestCase): def setUp(self): super(PoolWeigherTestCase, self).setUp() self.host_manager = fakes.FakeHostManager() self.weight_handler = base_host.HostWeightHandler( 'manila.scheduler.weighers') share_servers = [ {'id': 'fake_server_id0'}, {'id': 'fake_server_id1'}, {'id': 'fake_server_id2'}, {'id': 'fake_server_id3'}, {'id': 'fake_server_id4'}, ] services = [ dict(id=1, host='host1@AAA', topic='share', disabled=False, availability_zone='zone1', updated_at=timeutils.utcnow()), dict(id=2, host='host2@BBB', topic='share', disabled=False, availability_zone='zone1', updated_at=timeutils.utcnow()), dict(id=3, host='host3@CCC', topic='share', disabled=False, availability_zone='zone2', updated_at=timeutils.utcnow()), dict(id=4, host='host@DDD', topic='share', disabled=False, availability_zone='zone3', updated_at=timeutils.utcnow()), dict(id=5, host='host5@EEE', topic='share', disabled=False, availability_zone='zone3', updated_at=timeutils.utcnow()), ] self.host_manager.service_states = ( fakes.SHARE_SERVICE_STATES_WITH_POOLS) self.mock_object(db_api, 'share_server_get_all_by_host', mock.Mock(return_value=share_servers)) self.mock_object(db_api.IMPL, 'service_get_all_by_topic', mock.Mock(return_value=services)) def _get_weighed_host(self, hosts, weight_properties=None): if weight_properties is None: weight_properties = { 'server_pools_mapping': { 'fake_server_id2': [{'pool_name': 'pool2'}, ], }, } return self.weight_handler.get_weighed_objects( [pool.PoolWeigher], hosts, 
weight_properties)[0] def _get_all_hosts(self): ctxt = context.get_admin_context() host_states = self.host_manager.get_all_host_states_share(ctxt) db_api.IMPL.service_get_all_by_topic.assert_called_once_with( ctxt, CONF.share_topic) return host_states def test_no_server_pool_mapping(self): weight_properties = { 'server_pools_mapping': {}, } weighed_host = self._get_weighed_host(self._get_all_hosts(), weight_properties) self.assertEqual(0.0, weighed_host.weight) def test_choose_pool_with_existing_share_server(self): # host1: weight = 0*(1.0) # host2: weight = 1*(1.0) # host3: weight = 0*(1.0) # host4: weight = 0*(1.0) # host5: weight = 0*(1.0) # so, host2 should win: weighed_host = self._get_weighed_host(self._get_all_hosts()) self.assertEqual(1.0, weighed_host.weight) self.assertEqual( 'host2@BBB', utils.extract_host(weighed_host.obj.host)) def test_pool_weight_multiplier_positive(self): self.flags(pool_weight_multiplier=2.0) # host1: weight = 0*(2.0) # host2: weight = 1*(2.0) # host3: weight = 0*(2.0) # host4: weight = 0*(2.0) # host5: weight = 0*(2.0) # so, host2 should win: weighed_host = self._get_weighed_host(self._get_all_hosts()) self.assertEqual(2.0, weighed_host.weight) self.assertEqual( 'host2@BBB', utils.extract_host(weighed_host.obj.host)) def test_pool_weight_multiplier_negative(self): self.flags(pool_weight_multiplier=-1.0) weight_properties = { 'server_pools_mapping': { 'fake_server_id0': [{'pool_name': 'pool1'}], 'fake_server_id2': [{'pool_name': 'pool3'}], 'fake_server_id3': [ {'pool_name': 'pool4a'}, {'pool_name': 'pool4b'}, ], 'fake_server_id4': [ {'pool_name': 'pool5a'}, {'pool_name': 'pool5b'}, ], }, } # host1: weight = 1*(-1.0) # host2: weight = 0*(-1.0) # host3: weight = 1*(-1.0) # host4: weight = 1*(-1.0) # host5: weight = 1*(-1.0) # so, host2 should win: weighed_host = self._get_weighed_host(self._get_all_hosts(), weight_properties) self.assertEqual(0.0, weighed_host.weight) self.assertEqual( 'host2@BBB', utils.extract_host(weighed_host.obj.host)) def test_pool_weigher_all_pools_with_share_servers(self): weight_properties = { 'server_pools_mapping': { 'fake_server_id0': [{'pool_name': 'pool1'}], 'fake_server_id1': [{'pool_name': 'pool2'}], 'fake_server_id2': [{'pool_name': 'pool3'}], 'fake_server_id3': [ {'pool_name': 'pool4a'}, {'pool_name': 'pool4b'}, ], 'fake_server_id4': [ {'pool_name': 'pool5a'}, {'pool_name': 'pool5b'}, ], }, } # host1: weight = 1*(1.0) # host2: weight = 1*(1.0) # host3: weight = 1*(1.0) # host4: weight = 1*(1.0) # host5: weight = 1*(1.0) # But after normalization all weighers will be 0 weighed_host = self._get_weighed_host(self._get_all_hosts(), weight_properties) self.assertEqual(0.0, weighed_host.weight) manila-10.0.0/manila/tests/scheduler/weighers/test_base.py0000664000175000017500000000440613656750227023615 0ustar zuulzuul00000000000000# Copyright 2011-2012 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler weighers. 
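Covers weigher class discovery through BaseWeightHandler, the default weight multiplier of 1.0, and base.normalize(), which linearly rescales a sequence of raw weights onto [0, 1] relative to the supplied (or observed) minimum and maximum values, as the map in test_normalization below illustrates.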
""" from manila.scheduler.weighers import base from manila import test from manila.tests.scheduler import fakes class TestWeightHandler(test.TestCase): def test_get_all_classes(self): namespace = "manila.tests.scheduler.fakes" handler = base.BaseWeightHandler( base.BaseWeigher, namespace) classes = handler.get_all_classes() self.assertIn(fakes.FakeWeigher1, classes) self.assertIn(fakes.FakeWeigher2, classes) self.assertNotIn(fakes.FakeClass, classes) def test_no_multiplier(self): class FakeWeigher(base.BaseWeigher): def _weigh_object(self, *args, **kwargs): pass self.assertEqual(1.0, FakeWeigher().weight_multiplier()) def test_no_weight_object(self): class FakeWeigher(base.BaseWeigher): def weight_multiplier(self, *args, **kwargs): pass self.assertRaises(TypeError, FakeWeigher) def test_normalization(self): # weight_list, expected_result, minval, maxval map_ = ( ((), (), None, None), ((0.0, 0.0), (0.0, 0.0), None, None), ((1.0, 1.0), (0.0, 0.0), None, None), ((20.0, 50.0), (0.0, 1.0), None, None), ((20.0, 50.0), (0.0, 0.375), None, 100.0), ((20.0, 50.0), (0.4, 1.0), 0.0, None), ((20.0, 50.0), (0.2, 0.5), 0.0, 100.0), ) for seq, result, minval, maxval in map_: ret = base.normalize(seq, minval=minval, maxval=maxval) self.assertEqual(result, tuple(ret)) manila-10.0.0/manila/tests/scheduler/test_scheduler_options.py0000664000175000017500000001217113656750227024615 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For scheduler options. """ import datetime from oslo_serialization import jsonutils import six from manila.scheduler import scheduler_options from manila import test class FakeSchedulerOptions(scheduler_options.SchedulerOptions): def __init__(self, last_checked, now, file_old, file_now, data, filedata): super(FakeSchedulerOptions, self).__init__() # Change internals ... self.last_modified = file_old self.last_checked = last_checked self.data = data # For overrides ... 
self._time_now = now self._file_now = file_now self._file_data = six.b(filedata) self.file_was_loaded = False def _get_file_timestamp(self, filename): return self._file_now def _get_file_handle(self, filename): self.file_was_loaded = True if six.PY2: # pylint: disable=import-error import StringIO return StringIO.StringIO(self._file_data) else: import io return io.BytesIO(self._file_data) def _get_time_now(self): return self._time_now class SchedulerOptionsTestCase(test.TestCase): def test_get_configuration_first_time_no_flag(self): last_checked = None now = datetime.datetime(2012, 1, 1, 1, 1, 1) file_old = None file_now = datetime.datetime(2012, 1, 1, 1, 1, 1) data = dict(a=1, b=2, c=3) jdata = jsonutils.dumps(data) fake = FakeSchedulerOptions(last_checked, now, file_old, file_now, {}, jdata) self.assertEqual({}, fake.get_configuration()) self.assertFalse(fake.file_was_loaded) def test_get_configuration_first_time_empty_file(self): last_checked = None now = datetime.datetime(2012, 1, 1, 1, 1, 1) file_old = None file_now = datetime.datetime(2012, 1, 1, 1, 1, 1) jdata = "" fake = FakeSchedulerOptions(last_checked, now, file_old, file_now, {}, jdata) self.assertEqual({}, fake.get_configuration('foo.json')) self.assertTrue(fake.file_was_loaded) def test_get_configuration_first_time_happy_day(self): last_checked = None now = datetime.datetime(2012, 1, 1, 1, 1, 1) file_old = None file_now = datetime.datetime(2012, 1, 1, 1, 1, 1) data = dict(a=1, b=2, c=3) jdata = jsonutils.dumps(data) fake = FakeSchedulerOptions(last_checked, now, file_old, file_now, {}, jdata) self.assertEqual(data, fake.get_configuration('foo.json')) self.assertTrue(fake.file_was_loaded) def test_get_configuration_second_time_no_change(self): last_checked = datetime.datetime(2011, 1, 1, 1, 1, 1) now = datetime.datetime(2012, 1, 1, 1, 1, 1) file_old = datetime.datetime(2012, 1, 1, 1, 1, 1) file_now = datetime.datetime(2012, 1, 1, 1, 1, 1) data = dict(a=1, b=2, c=3) jdata = jsonutils.dumps(data) fake = FakeSchedulerOptions(last_checked, now, file_old, file_now, data, jdata) self.assertEqual(data, fake.get_configuration('foo.json')) self.assertFalse(fake.file_was_loaded) def test_get_configuration_second_time_too_fast(self): last_checked = datetime.datetime(2011, 1, 1, 1, 1, 1) now = datetime.datetime(2011, 1, 1, 1, 1, 2) file_old = datetime.datetime(2012, 1, 1, 1, 1, 1) file_now = datetime.datetime(2013, 1, 1, 1, 1, 1) old_data = dict(a=1, b=2, c=3) data = dict(a=11, b=12, c=13) jdata = jsonutils.dumps(data) fake = FakeSchedulerOptions(last_checked, now, file_old, file_now, old_data, jdata) self.assertEqual(old_data, fake.get_configuration('foo.json')) self.assertFalse(fake.file_was_loaded) def test_get_configuration_second_time_change(self): last_checked = datetime.datetime(2011, 1, 1, 1, 1, 1) now = datetime.datetime(2012, 1, 1, 1, 1, 1) file_old = datetime.datetime(2012, 1, 1, 1, 1, 1) file_now = datetime.datetime(2013, 1, 1, 1, 1, 1) old_data = dict(a=1, b=2, c=3) data = dict(a=11, b=12, c=13) jdata = jsonutils.dumps(data) fake = FakeSchedulerOptions(last_checked, now, file_old, file_now, old_data, jdata) self.assertEqual(data, fake.get_configuration('foo.json')) self.assertTrue(fake.file_was_loaded) manila-10.0.0/manila/tests/scheduler/test_manager.py0000664000175000017500000004157313656750227022506 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler Manager """ from importlib import reload from unittest import mock import ddt from oslo_config import cfg from manila.common import constants from manila import context from manila import db from manila import exception from manila.message import message_field from manila import quota from manila.scheduler.drivers import base from manila.scheduler.drivers import filter from manila.scheduler import manager from manila.share import rpcapi as share_rpcapi from manila import test from manila.tests import db_utils from manila.tests import fake_share as fakes CONF = cfg.CONF @ddt.ddt class SchedulerManagerTestCase(test.TestCase): """Test case for scheduler manager.""" driver_cls = base.Scheduler driver_cls_name = 'manila.scheduler.drivers.base.Scheduler' def setUp(self): super(SchedulerManagerTestCase, self).setUp() self.periodic_tasks = [] def _periodic_task(*args, **kwargs): def decorator(f): self.periodic_tasks.append(f) return f return mock.Mock(side_effect=decorator) self.mock_periodic_task = self.mock_object( manager.periodic_task, 'periodic_task', mock.Mock(side_effect=_periodic_task)) reload(manager) self.flags(scheduler_driver=self.driver_cls_name) self.manager = manager.SchedulerManager() self.context = context.RequestContext('fake_user', 'fake_project') self.topic = 'fake_topic' self.fake_args = (1, 2, 3) self.fake_kwargs = {'cat': 'meow', 'dog': 'woof'} def raise_no_valid_host(self, *args, **kwargs): raise exception.NoValidHost(reason="") def test_1_correct_init(self): # Correct scheduler driver manager = self.manager self.assertIsInstance(manager.driver, self.driver_cls) @ddt.data('manila.scheduler.filter_scheduler.FilterScheduler', 'manila.scheduler.drivers.filter.FilterScheduler') def test_scheduler_driver_mapper(self, driver_class): test_manager = manager.SchedulerManager(scheduler_driver=driver_class) self.assertIsInstance(test_manager.driver, filter.FilterScheduler) def test_init_host(self): self.mock_object(context, 'get_admin_context', mock.Mock(return_value='fake_admin_context')) self.mock_object(self.manager, 'request_service_capabilities') self.manager.init_host() self.manager.request_service_capabilities.assert_called_once_with( 'fake_admin_context') def test_get_host_list(self): self.mock_object(self.manager.driver, 'get_host_list') self.manager.get_host_list(context) self.manager.driver.get_host_list.assert_called_once_with() def test_get_service_capabilities(self): self.mock_object(self.manager.driver, 'get_service_capabilities') self.manager.get_service_capabilities(context) self.manager.driver.get_service_capabilities.assert_called_once_with() def test_update_service_capabilities(self): service_name = 'fake_service' host = 'fake_host' with mock.patch.object(self.manager.driver, 'update_service_capabilities', mock.Mock()): self.manager.update_service_capabilities( self.context, service_name=service_name, host=host) (self.manager.driver.update_service_capabilities. 
assert_called_once_with(service_name, host, {})) with mock.patch.object(self.manager.driver, 'update_service_capabilities', mock.Mock()): capabilities = {'fake_capability': 'fake_value'} self.manager.update_service_capabilities( self.context, service_name=service_name, host=host, capabilities=capabilities) (self.manager.driver.update_service_capabilities. assert_called_once_with(service_name, host, capabilities)) @mock.patch.object(db, 'share_update', mock.Mock()) @mock.patch('manila.message.api.API.create') def test_create_share_exception_puts_share_in_error_state( self, _mock_message_create): """Test NoValidHost exception for create_share. Puts the share in 'error' state and eats the exception. """ fake_share_id = 1 request_spec = {'share_id': fake_share_id} ex = exception.NoValidHost(reason='') with mock.patch.object( self.manager.driver, 'schedule_create_share', mock.Mock(side_effect=ex)): self.mock_object(manager.LOG, 'error') self.manager.create_share_instance( self.context, request_spec=request_spec, filter_properties={}) db.share_update.assert_called_once_with( self.context, fake_share_id, {'status': 'error'}) (self.manager.driver.schedule_create_share. assert_called_once_with(self.context, request_spec, {})) manager.LOG.error.assert_called_once_with(mock.ANY, mock.ANY) _mock_message_create.assert_called_once_with( self.context, message_field.Action.ALLOCATE_HOST, self.context.project_id, resource_type='SHARE', exception=ex, resource_id=fake_share_id) @mock.patch.object(db, 'share_update', mock.Mock()) def test_create_share_other_exception_puts_share_in_error_state(self): """Test any exception except NoValidHost for create_share. Puts the share in 'error' state and re-raises the exception. """ fake_share_id = 1 request_spec = {'share_id': fake_share_id} with mock.patch.object(self.manager.driver, 'schedule_create_share', mock.Mock(side_effect=exception.QuotaError)): self.mock_object(manager.LOG, 'error') self.assertRaises(exception.QuotaError, self.manager.create_share_instance, self.context, request_spec=request_spec, filter_properties={}) db.share_update.assert_called_once_with( self.context, fake_share_id, {'status': 'error'}) (self.manager.driver.schedule_create_share. 
assert_called_once_with(self.context, request_spec, {})) manager.LOG.error.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(quota.QUOTAS, 'expire') def test__expire_reservations(self, mock_expire): self.manager._expire_reservations(self.context) mock_expire.assert_called_once_with(self.context) @mock.patch('manila.message.api.API.cleanup_expired_messages') def test__clean_expired_messages(self, mock_expire): self.manager._clean_expired_messages(self.context) mock_expire.assert_called_once_with(self.context) def test_periodic_tasks(self): self.assertEqual(2, self.mock_periodic_task.call_count) self.assertEqual(2, len(self.periodic_tasks)) self.assertEqual( self.periodic_tasks[0].__name__, self.manager._expire_reservations.__name__) self.assertEqual( self.periodic_tasks[1].__name__, self.manager._clean_expired_messages.__name__) def test_get_pools(self): """Ensure get_pools exists and calls base_scheduler.get_pools.""" mock_get_pools = self.mock_object(self.manager.driver, 'get_pools', mock.Mock(return_value='fake_pools')) result = self.manager.get_pools(self.context, filters='fake_filters') mock_get_pools.assert_called_once_with(self.context, 'fake_filters', False) self.assertEqual('fake_pools', result) @mock.patch.object(db, 'share_group_update', mock.Mock()) def test_create_group_no_valid_host_puts_group_in_error_state(self): """Test that NoValidHost is raised for create_share_group. Puts the share in 'error' state and eats the exception. """ fake_group_id = 1 group_id = fake_group_id request_spec = {"share_group_id": group_id} with mock.patch.object( self.manager.driver, 'schedule_create_share_group', mock.Mock(side_effect=self.raise_no_valid_host)): self.manager.create_share_group(self.context, fake_group_id, request_spec=request_spec, filter_properties={}) db.share_group_update.assert_called_once_with( self.context, fake_group_id, {'status': 'error'}) (self.manager.driver.schedule_create_share_group. assert_called_once_with(self.context, group_id, request_spec, {})) @mock.patch.object(db, 'share_group_update', mock.Mock()) def test_create_group_exception_puts_group_in_error_state(self): """Test that exceptions for create_share_group. Puts the share in 'error' state and raises the exception. 
""" fake_group_id = 1 group_id = fake_group_id request_spec = {"share_group_id": group_id} with mock.patch.object(self.manager.driver, 'schedule_create_share_group', mock.Mock(side_effect=exception.NotFound)): self.assertRaises(exception.NotFound, self.manager.create_share_group, self.context, fake_group_id, request_spec=request_spec, filter_properties={}) def test_migrate_share_to_host(self): class fake_host(object): host = 'fake@backend#pool' share = db_utils.create_share() host = fake_host() self.mock_object(db, 'share_get', mock.Mock(return_value=share)) self.mock_object(share_rpcapi.ShareAPI, 'migration_start', mock.Mock(side_effect=TypeError)) self.mock_object(base.Scheduler, 'host_passes_filters', mock.Mock(return_value=host)) self.assertRaises( TypeError, self.manager.migrate_share_to_host, self.context, share['id'], 'fake@backend#pool', False, True, True, False, True, 'fake_net_id', 'fake_type_id', {}, None) db.share_get.assert_called_once_with(self.context, share['id']) base.Scheduler.host_passes_filters.assert_called_once_with( self.context, 'fake@backend#pool', {}, None) share_rpcapi.ShareAPI.migration_start.assert_called_once_with( self.context, share, host.host, False, True, True, False, True, 'fake_net_id', 'fake_type_id') @ddt.data(exception.NoValidHost(reason='fake'), TypeError) def test_migrate_share_to_host_exception(self, exc): share = db_utils.create_share(status=constants.STATUS_MIGRATING) host = 'fake@backend#pool' request_spec = {'share_id': share['id']} self.mock_object(db, 'share_get', mock.Mock(return_value=share)) self.mock_object( base.Scheduler, 'host_passes_filters', mock.Mock(side_effect=exc)) self.mock_object(db, 'share_update') self.mock_object(db, 'share_instance_update') capture = (exception.NoValidHost if isinstance(exc, exception.NoValidHost) else TypeError) self.assertRaises( capture, self.manager.migrate_share_to_host, self.context, share['id'], host, False, True, True, False, True, 'fake_net_id', 'fake_type_id', request_spec, None) base.Scheduler.host_passes_filters.assert_called_once_with( self.context, host, request_spec, None) db.share_get.assert_called_once_with(self.context, share['id']) db.share_update.assert_called_once_with( self.context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_ERROR}) db.share_instance_update.assert_called_once_with( self.context, share.instance['id'], {'status': constants.STATUS_AVAILABLE}) def test_manage_share(self): share = db_utils.create_share() self.mock_object(db, 'share_get', mock.Mock(return_value=share)) self.mock_object(share_rpcapi.ShareAPI, 'manage_share') self.mock_object(base.Scheduler, 'host_passes_filters') self.manager.manage_share(self.context, share['id'], 'driver_options', {}, None) def test_manage_share_exception(self): share = db_utils.create_share() db_update = self.mock_object(db, 'share_update', mock.Mock()) self.mock_object( base.Scheduler, 'host_passes_filters', mock.Mock(side_effect=exception.NoValidHost('fake'))) share_id = share['id'] self.assertRaises( exception.NoValidHost, self.manager.manage_share, self.context, share['id'], 'driver_options', {'share_id': share_id}, None) db_update.assert_called_once_with( self.context, share_id, {'status': constants.STATUS_MANAGE_ERROR, 'size': 1}) def test_create_share_replica_exception_path(self): """Test 'raisable' exceptions for create_share_replica.""" db_update = self.mock_object(db, 'share_replica_update') self.mock_object(db, 'share_snapshot_instance_get_all_with_filters', mock.Mock(return_value=[{'id': '123'}])) snap_update = 
self.mock_object(db, 'share_snapshot_instance_update') request_spec = fakes.fake_replica_request_spec() replica_id = request_spec.get('share_instance_properties').get('id') expected_updates = { 'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR, } with mock.patch.object(self.manager.driver, 'schedule_create_replica', mock.Mock(side_effect=exception.NotFound)): self.assertRaises(exception.NotFound, self.manager.create_share_replica, self.context, request_spec=request_spec, filter_properties={}) db_update.assert_called_once_with( self.context, replica_id, expected_updates) snap_update.assert_called_once_with( self.context, '123', {'status': constants.STATUS_ERROR}) def test_create_share_replica_no_valid_host(self): """Test the NoValidHost exception for create_share_replica.""" db_update = self.mock_object(db, 'share_replica_update') request_spec = fakes.fake_replica_request_spec() replica_id = request_spec.get('share_instance_properties').get('id') expected_updates = { 'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR, } with mock.patch.object( self.manager.driver, 'schedule_create_replica', mock.Mock(side_effect=self.raise_no_valid_host)): retval = self.manager.create_share_replica( self.context, request_spec=request_spec, filter_properties={}) self.assertIsNone(retval) db_update.assert_called_once_with( self.context, replica_id, expected_updates) def test_create_share_replica(self): """Test happy path for create_share_replica.""" db_update = self.mock_object(db, 'share_replica_update') mock_scheduler_driver_call = self.mock_object( self.manager.driver, 'schedule_create_replica') request_spec = fakes.fake_replica_request_spec() retval = self.manager.create_share_replica( self.context, request_spec=request_spec, filter_properties={}) mock_scheduler_driver_call.assert_called_once_with( self.context, request_spec, {}) self.assertFalse(db_update.called) self.assertIsNone(retval) manila-10.0.0/manila/tests/scheduler/fakes.py0000664000175000017500000004345013656750227021122 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Fakes For Scheduler tests. 
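Canned availability zones, share services, capability reports and host/pool
states shared by the scheduler unit tests.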
""" from oslo_utils import timeutils from manila.scheduler.drivers import filter from manila.scheduler import host_manager from manila.scheduler.weighers import base_host as base_host_weigher FAKE_AZ_1 = {'name': 'zone1', 'id': '24433438-c9b5-4cb5-a472-f78462aa5f31'} FAKE_AZ_2 = {'name': 'zone2', 'id': 'ebef050c-d20d-4c44-b272-1a0adce11cb5'} FAKE_AZ_3 = {'name': 'zone3', 'id': '18e7e6e2-39d6-466b-a706-2717bd1086e1'} FAKE_AZ_4 = {'name': 'zone4', 'id': '9ca40ee4-3c2a-4635-9a18-233cf6e0ad0b'} FAKE_AZ_5 = {'name': 'zone4', 'id': 'd76d921d-d6fa-41b4-a180-fb68952784bd'} FAKE_AZ_6 = {'name': 'zone4', 'id': 'bc09c3d6-671c-4d55-9f43-f00757aabc50'} SHARE_SERVICES_NO_POOLS = [ dict(id=1, host='host1', topic='share', disabled=False, updated_at=timeutils.utcnow(), availability_zone_id=FAKE_AZ_1['id'], availability_zone=FAKE_AZ_1), dict(id=2, host='host2@back1', topic='share', disabled=False, updated_at=timeutils.utcnow(), availability_zone_id=FAKE_AZ_2['id'], availability_zone=FAKE_AZ_2), dict(id=3, host='host2@back2', topic='share', disabled=False, updated_at=timeutils.utcnow(), availability_zone_id=FAKE_AZ_3['id'], availability_zone=FAKE_AZ_3), ] SERVICE_STATES_NO_POOLS = { 'host1': dict(share_backend_name='AAA', total_capacity_gb=512, free_capacity_gb=200, timestamp=None, reserved_percentage=0, provisioned_capacity_gb=312, max_over_subscription_ratio=1.0, thin_provisioning=False, snapshot_support=False, create_share_from_snapshot_support=False, revert_to_snapshot_support=True, mount_snapshot_support=True, driver_handles_share_servers=False), 'host2@back1': dict(share_backend_name='BBB', total_capacity_gb=256, free_capacity_gb=100, timestamp=None, reserved_percentage=0, provisioned_capacity_gb=400, max_over_subscription_ratio=2.0, thin_provisioning=True, snapshot_support=True, create_share_from_snapshot_support=True, revert_to_snapshot_support=False, mount_snapshot_support=False, driver_handles_share_servers=False), 'host2@back2': dict(share_backend_name='CCC', total_capacity_gb=10000, free_capacity_gb=700, timestamp=None, reserved_percentage=0, provisioned_capacity_gb=50000, max_over_subscription_ratio=20.0, thin_provisioning=True, snapshot_support=True, create_share_from_snapshot_support=True, revert_to_snapshot_support=False, mount_snapshot_support=False, driver_handles_share_servers=False), } SHARE_SERVICES_WITH_POOLS = [ dict(id=1, host='host1@AAA', topic='share', disabled=False, updated_at=timeutils.utcnow(), availability_zone_id=FAKE_AZ_1['id'], availability_zone=FAKE_AZ_1), dict(id=2, host='host2@BBB', topic='share', disabled=False, updated_at=timeutils.utcnow(), availability_zone_id=FAKE_AZ_2['id'], availability_zone=FAKE_AZ_2), dict(id=3, host='host3@CCC', topic='share', disabled=False, updated_at=timeutils.utcnow(), availability_zone_id=FAKE_AZ_3['id'], availability_zone=FAKE_AZ_3), dict(id=4, host='host4@DDD', topic='share', disabled=False, updated_at=timeutils.utcnow(), availability_zone_id=FAKE_AZ_4['id'], availability_zone=FAKE_AZ_4), # service on host5 is disabled dict(id=5, host='host5@EEE', topic='share', disabled=True, updated_at=timeutils.utcnow(), availability_zone_id=FAKE_AZ_5['id'], availability_zone=FAKE_AZ_5), dict(id=6, host='host6@FFF', topic='share', disabled=True, updated_at=timeutils.utcnow(), availability_zone_id=FAKE_AZ_6['id'], availability_zone=FAKE_AZ_6), ] SHARE_SERVICE_STATES_WITH_POOLS = { 'host1@AAA': dict(share_backend_name='AAA', timestamp=None, reserved_percentage=0, driver_handles_share_servers=False, snapshot_support=True, 
create_share_from_snapshot_support=True, revert_to_snapshot_support=True, replication_type=None, pools=[dict(pool_name='pool1', total_capacity_gb=51, free_capacity_gb=41, reserved_percentage=0, provisioned_capacity_gb=10, max_over_subscription_ratio=1.0, thin_provisioning=False)]), 'host2@BBB': dict(share_backend_name='BBB', timestamp=None, reserved_percentage=0, driver_handles_share_servers=False, snapshot_support=True, create_share_from_snapshot_support=True, revert_to_snapshot_support=False, replication_type=None, pools=[dict(pool_name='pool2', total_capacity_gb=52, free_capacity_gb=42, reserved_percentage=0, provisioned_capacity_gb=60, max_over_subscription_ratio=2.0, thin_provisioning=True)]), 'host3@CCC': dict(share_backend_name='CCC', timestamp=None, reserved_percentage=0, driver_handles_share_servers=False, snapshot_support=True, create_share_from_snapshot_support=True, revert_to_snapshot_support=False, replication_type=None, pools=[dict(pool_name='pool3', total_capacity_gb=53, free_capacity_gb=43, reserved_percentage=0, provisioned_capacity_gb=100, max_over_subscription_ratio=20.0, thin_provisioning=True)]), 'host4@DDD': dict(share_backend_name='DDD', timestamp=None, reserved_percentage=0, driver_handles_share_servers=False, snapshot_support=True, create_share_from_snapshot_support=True, revert_to_snapshot_support=False, replication_type=None, pools=[dict(pool_name='pool4a', total_capacity_gb=541, free_capacity_gb=441, reserved_percentage=0, provisioned_capacity_gb=800, max_over_subscription_ratio=2.0, thin_provisioning=True), dict(pool_name='pool4b', total_capacity_gb=542, free_capacity_gb=442, reserved_percentage=0, provisioned_capacity_gb=2000, max_over_subscription_ratio=10.0, thin_provisioning=True)]), 'host5@EEE': dict(share_backend_name='EEE', timestamp=None, reserved_percentage=0, driver_handles_share_servers=False, snapshot_support=True, create_share_from_snapshot_support=True, revert_to_snapshot_support=False, replication_type=None, pools=[dict(pool_name='pool5a', total_capacity_gb=551, free_capacity_gb=451, reserved_percentage=0, provisioned_capacity_gb=100, max_over_subscription_ratio=1.0, thin_provisioning=False), dict(pool_name='pool5b', total_capacity_gb=552, free_capacity_gb=452, reserved_percentage=0, provisioned_capacity_gb=100, max_over_subscription_ratio=1.0, thin_provisioning=False)]), 'host6@FFF': dict(share_backend_name='FFF', timestamp=None, reserved_percentage=0, driver_handles_share_servers=False, snapshot_support=True, create_share_from_snapshot_support=True, revert_to_snapshot_support=False, replication_type=None, pools=[dict(pool_name='pool6a', total_capacity_gb='unknown', free_capacity_gb='unknown', reserved_percentage=0, provisioned_capacity_gb=100, max_over_subscription_ratio=1.0, thin_provisioning=False), dict(pool_name='pool6b', total_capacity_gb='unknown', free_capacity_gb='unknown', reserved_percentage=0, provisioned_capacity_gb=100, max_over_subscription_ratio=1.0, thin_provisioning=False)]), } class FakeFilterScheduler(filter.FilterScheduler): def __init__(self, *args, **kwargs): super(FakeFilterScheduler, self).__init__(*args, **kwargs) self.host_manager = host_manager.HostManager() class FakeHostManager(host_manager.HostManager): def __init__(self): super(FakeHostManager, self).__init__() self.service_states = { 'host1': {'total_capacity_gb': 1024, 'free_capacity_gb': 1024, 'allocated_capacity_gb': 0, 'thin_provisioning': False, 'reserved_percentage': 10, 'timestamp': None, 'snapshot_support': True, 'create_share_from_snapshot_support': 
True, 'replication_type': 'writable', 'replication_domain': 'endor', }, 'host2': {'total_capacity_gb': 2048, 'free_capacity_gb': 300, 'allocated_capacity_gb': 1748, 'provisioned_capacity_gb': 1748, 'max_over_subscription_ratio': 2.0, 'thin_provisioning': True, 'reserved_percentage': 10, 'timestamp': None, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'replication_type': 'readable', 'replication_domain': 'kashyyyk', }, 'host3': {'total_capacity_gb': 512, 'free_capacity_gb': 256, 'allocated_capacity_gb': 256, 'provisioned_capacity_gb': 256, 'max_over_subscription_ratio': 2.0, 'thin_provisioning': [False], 'reserved_percentage': 0, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'timestamp': None, }, 'host4': {'total_capacity_gb': 2048, 'free_capacity_gb': 200, 'allocated_capacity_gb': 1848, 'provisioned_capacity_gb': 1848, 'max_over_subscription_ratio': 1.0, 'thin_provisioning': [True], 'reserved_percentage': 5, 'timestamp': None, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'replication_type': 'dr', 'replication_domain': 'naboo', }, 'host5': {'total_capacity_gb': 2048, 'free_capacity_gb': 500, 'allocated_capacity_gb': 1548, 'provisioned_capacity_gb': 1548, 'max_over_subscription_ratio': 1.5, 'thin_provisioning': [True, False], 'reserved_percentage': 5, 'timestamp': None, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'replication_type': None, }, 'host6': {'total_capacity_gb': 'unknown', 'free_capacity_gb': 'unknown', 'allocated_capacity_gb': 1548, 'thin_provisioning': False, 'reserved_percentage': 5, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'timestamp': None, }, } class FakeHostState(host_manager.HostState): def __init__(self, host, attribute_dict): super(FakeHostState, self).__init__(host) for (key, val) in attribute_dict.items(): setattr(self, key, val) FAKE_HOST_STRING_1 = 'openstack@BackendA#PoolX' FAKE_HOST_STRING_2 = 'openstack@BackendB#PoolY' FAKE_HOST_STRING_3 = 'openstack@BackendC#PoolZ' def mock_host_manager_db_calls(mock_obj, disabled=None): services = [ dict(id=1, host='host1', topic='share', disabled=False, availability_zone=FAKE_AZ_1, availability_zone_id=FAKE_AZ_1['id'], updated_at=timeutils.utcnow()), dict(id=2, host='host2', topic='share', disabled=False, availability_zone=FAKE_AZ_1, availability_zone_id=FAKE_AZ_1['id'], updated_at=timeutils.utcnow()), dict(id=3, host='host3', topic='share', disabled=False, availability_zone=FAKE_AZ_2, availability_zone_id=FAKE_AZ_2['id'], updated_at=timeutils.utcnow()), dict(id=4, host='host4', topic='share', disabled=False, availability_zone=FAKE_AZ_3, availability_zone_id=FAKE_AZ_3['id'], updated_at=timeutils.utcnow()), dict(id=5, host='host5', topic='share', disabled=False, availability_zone=FAKE_AZ_3, availability_zone_id=FAKE_AZ_3['id'], updated_at=timeutils.utcnow()), dict(id=6, host='host6', topic='share', disabled=False, availability_zone=FAKE_AZ_4, availability_zone_id=FAKE_AZ_4['id'], updated_at=timeutils.utcnow()), ] if disabled is None: mock_obj.return_value = services else: mock_obj.return_value = [service for service in services if service['disabled'] == disabled] class FakeWeigher1(base_host_weigher.BaseHostWeigher): def __init__(self): pass class FakeWeigher2(base_host_weigher.BaseHostWeigher): def __init__(self): pass class FakeClass(object): def __init__(self): pass def fake_replica_request_spec(**kwargs): request_spec = { 'share_properties': { 'id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'name': 
'fakename', 'size': 1, 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'availability_zone': 'fake_az', 'replication_type': 'dr', }, 'share_instance_properties': { 'id': '8d5566df-1e83-4373-84b8-6f8153a0ac41', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'host': 'openstack@BackendZ#PoolA', 'status': 'available', 'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': '53099868-65f1-11e5-9d70-feff819cdc9f', }, 'share_proto': 'nfs', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'snapshot_id': None, 'share_type': 'fake_share_type', 'share_group': None, } request_spec.update(kwargs) return request_spec def get_fake_host(host_name=None): class FakeHost(object): def __init__(self, host_name=None): self.host = host_name or 'openstack@BackendZ#PoolA' class FakeWeightedHost(object): def __init__(self, host_name=None): self.obj = FakeHost(host_name=host_name) return FakeWeightedHost(host_name=host_name) manila-10.0.0/manila/tests/scheduler/test_host_manager.py0000664000175000017500000014345313656750227023543 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack, LLC # Copyright (c) 2015 Rushil Chugh # Copyright (c) 2015 Clinton Knight # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Tests For HostManager """ import copy from unittest import mock import ddt from oslo_config import cfg from oslo_utils import timeutils from six import moves from manila import context from manila import db from manila import exception from manila.scheduler.filters import base_host from manila.scheduler import host_manager from manila.scheduler import utils as scheduler_utils from manila import test from manila.tests.scheduler import fakes from manila import utils CONF = cfg.CONF class FakeFilterClass1(base_host.BaseHostFilter): def host_passes(self, host_state, filter_properties): pass class FakeFilterClass2(base_host.BaseHostFilter): def host_passes(self, host_state, filter_properties): pass @ddt.ddt class HostManagerTestCase(test.TestCase): """Test case for HostManager class.""" def setUp(self): super(HostManagerTestCase, self).setUp() self.host_manager = host_manager.HostManager() self.fake_hosts = [host_manager.HostState('fake_host%s' % x) for x in moves.range(1, 5)] def test_choose_host_filters_not_found(self): self.flags(scheduler_default_filters='FakeFilterClass3') self.host_manager.filter_classes = [FakeFilterClass1, FakeFilterClass2] self.assertRaises(exception.SchedulerHostFilterNotFound, self.host_manager._choose_host_filters, None) def test_choose_host_filters(self): self.flags(scheduler_default_filters=['FakeFilterClass2']) self.host_manager.filter_classes = [FakeFilterClass1, FakeFilterClass2] # Test 'share' returns 1 correct function filter_classes = self.host_manager._choose_host_filters(None) self.assertEqual(1, len(filter_classes)) self.assertEqual('FakeFilterClass2', filter_classes[0].__name__) def _verify_result(self, info, result): for x in info['got_fprops']: self.assertEqual(info['expected_fprops'], x) self.assertEqual(set(info['expected_objs']), set(info['got_objs'])) self.assertEqual(set(info['got_objs']), set(result)) def test_get_filtered_hosts(self): fake_properties = {'moo': 1, 'cow': 2} info = { 'expected_objs': self.fake_hosts, 'expected_fprops': fake_properties, } with mock.patch.object(self.host_manager, '_choose_host_filters', mock.Mock(return_value=[FakeFilterClass1])): info['got_objs'] = [] info['got_fprops'] = [] def fake_filter_one(_self, obj, filter_props): info['got_objs'].append(obj) info['got_fprops'].append(filter_props) return True self.mock_object(FakeFilterClass1, '_filter_one', fake_filter_one) result, last_filter = self.host_manager.get_filtered_hosts( self.fake_hosts, fake_properties) self._verify_result(info, result) self.host_manager._choose_host_filters.assert_called_once_with( mock.ANY) def test_update_service_capabilities_for_shares(self): service_states = self.host_manager.service_states self.assertDictMatch(service_states, {}) host1_share_capabs = dict(free_capacity_gb=4321, timestamp=1) host2_share_capabs = dict(free_capacity_gb=5432, timestamp=1) host3_share_capabs = dict(free_capacity_gb=6543, timestamp=1) service_name = 'share' with mock.patch.object(timeutils, 'utcnow', mock.Mock(return_value=31337)): self.host_manager.update_service_capabilities( service_name, 'host1', host1_share_capabs) timeutils.utcnow.assert_called_once_with() with mock.patch.object(timeutils, 'utcnow', mock.Mock(return_value=31338)): self.host_manager.update_service_capabilities( service_name, 'host2', host2_share_capabs) timeutils.utcnow.assert_called_once_with() with mock.patch.object(timeutils, 'utcnow', mock.Mock(return_value=31339)): self.host_manager.update_service_capabilities( service_name, 'host3', host3_share_capabs) 
timeutils.utcnow.assert_called_once_with() # Make sure dictionary isn't re-assigned self.assertEqual(service_states, self.host_manager.service_states) # Make sure original dictionary wasn't copied self.assertEqual(1, host1_share_capabs['timestamp']) host1_share_capabs['timestamp'] = 31337 host2_share_capabs['timestamp'] = 31338 host3_share_capabs['timestamp'] = 31339 expected = { 'host1': host1_share_capabs, 'host2': host2_share_capabs, 'host3': host3_share_capabs, } self.assertDictMatch(service_states, expected) def test_get_all_host_states_share(self): fake_context = context.RequestContext('user', 'project') topic = CONF.share_topic tmp_pools = copy.deepcopy(fakes.SHARE_SERVICES_WITH_POOLS) tmp_enable_pools = tmp_pools[:-2] self.mock_object( db, 'service_get_all_by_topic', mock.Mock(return_value=tmp_enable_pools)) self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) with mock.patch.dict(self.host_manager.service_states, fakes.SHARE_SERVICE_STATES_WITH_POOLS): # Get service self.host_manager.get_all_host_states_share(fake_context) # Disabled one service tmp_enable_pools.pop() self.mock_object( db, 'service_get_all_by_topic', mock.Mock(return_value=tmp_enable_pools)) # Get service again self.host_manager.get_all_host_states_share(fake_context) host_state_map = self.host_manager.host_state_map self.assertEqual(3, len(host_state_map)) # Check that service is up for i in moves.range(3): share_node = fakes.SHARE_SERVICES_WITH_POOLS[i] host = share_node['host'] self.assertEqual(share_node, host_state_map[host].service) db.service_get_all_by_topic.assert_called_once_with( fake_context, topic) def test_get_pools_no_pools(self): fake_context = context.RequestContext('user', 'project') self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.mock_object( db, 'service_get_all_by_topic', mock.Mock(return_value=fakes.SHARE_SERVICES_NO_POOLS)) host_manager.LOG.warning = mock.Mock() with mock.patch.dict(self.host_manager.service_states, fakes.SERVICE_STATES_NO_POOLS): res = self.host_manager.get_pools(context=fake_context) expected = [ { 'name': 'host1#AAA', 'host': 'host1', 'backend': None, 'pool': 'AAA', 'capabilities': { 'timestamp': None, 'share_backend_name': 'AAA', 'free_capacity_gb': 200, 'driver_version': None, 'total_capacity_gb': 512, 'reserved_percentage': 0, 'provisioned_capacity_gb': 312, 'max_over_subscription_ratio': 1.0, 'thin_provisioning': False, 'vendor_name': None, 'storage_protocol': None, 'driver_handles_share_servers': False, 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': True, 'mount_snapshot_support': True, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, { 'name': 'host2@back1#BBB', 'host': 'host2', 'backend': 'back1', 'pool': 'BBB', 'capabilities': { 'timestamp': None, 'share_backend_name': 'BBB', 'free_capacity_gb': 100, 'driver_version': None, 'total_capacity_gb': 256, 'reserved_percentage': 0, 'provisioned_capacity_gb': 400, 'max_over_subscription_ratio': 2.0, 'thin_provisioning': True, 'vendor_name': None, 'storage_protocol': None, 'driver_handles_share_servers': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, { 'name': 'host2@back2#CCC', 'host': 'host2', 
'backend': 'back2', 'pool': 'CCC', 'capabilities': { 'timestamp': None, 'share_backend_name': 'CCC', 'free_capacity_gb': 700, 'driver_version': None, 'total_capacity_gb': 10000, 'reserved_percentage': 0, 'provisioned_capacity_gb': 50000, 'max_over_subscription_ratio': 20.0, 'thin_provisioning': True, 'vendor_name': None, 'storage_protocol': None, 'driver_handles_share_servers': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, ] self.assertIsInstance(res, list) self.assertEqual(len(expected), len(res)) for pool in expected: self.assertIn(pool, res) def test_get_pools(self): fake_context = context.RequestContext('user', 'project') self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.mock_object( db, 'service_get_all_by_topic', mock.Mock(return_value=fakes.SHARE_SERVICES_WITH_POOLS)) host_manager.LOG.warning = mock.Mock() with mock.patch.dict(self.host_manager.service_states, fakes.SHARE_SERVICE_STATES_WITH_POOLS): res = self.host_manager.get_pools(fake_context) expected = [ { 'name': 'host1@AAA#pool1', 'host': 'host1', 'backend': 'AAA', 'pool': 'pool1', 'capabilities': { 'pool_name': 'pool1', 'timestamp': None, 'share_backend_name': 'AAA', 'free_capacity_gb': 41, 'driver_version': None, 'total_capacity_gb': 51, 'reserved_percentage': 0, 'provisioned_capacity_gb': 10, 'max_over_subscription_ratio': 1.0, 'thin_provisioning': False, 'vendor_name': None, 'storage_protocol': None, 'driver_handles_share_servers': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': True, 'mount_snapshot_support': False, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, { 'name': 'host2@BBB#pool2', 'host': 'host2', 'backend': 'BBB', 'pool': 'pool2', 'capabilities': { 'pool_name': 'pool2', 'timestamp': None, 'share_backend_name': 'BBB', 'free_capacity_gb': 42, 'driver_version': None, 'total_capacity_gb': 52, 'reserved_percentage': 0, 'provisioned_capacity_gb': 60, 'max_over_subscription_ratio': 2.0, 'thin_provisioning': True, 'vendor_name': None, 'storage_protocol': None, 'driver_handles_share_servers': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, { 'name': 'host3@CCC#pool3', 'host': 'host3', 'backend': 'CCC', 'pool': 'pool3', 'capabilities': { 'pool_name': 'pool3', 'timestamp': None, 'share_backend_name': 'CCC', 'free_capacity_gb': 43, 'driver_version': None, 'total_capacity_gb': 53, 'reserved_percentage': 0, 'provisioned_capacity_gb': 100, 'max_over_subscription_ratio': 20.0, 'thin_provisioning': True, 'vendor_name': None, 'storage_protocol': None, 'driver_handles_share_servers': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, { 'name': 'host4@DDD#pool4a', 'host': 'host4', 'backend': 'DDD', 'pool': 'pool4a', 'capabilities': { 'pool_name': 'pool4a', 
'timestamp': None, 'share_backend_name': 'DDD', 'free_capacity_gb': 441, 'driver_version': None, 'total_capacity_gb': 541, 'reserved_percentage': 0, 'provisioned_capacity_gb': 800, 'max_over_subscription_ratio': 2.0, 'thin_provisioning': True, 'vendor_name': None, 'storage_protocol': None, 'driver_handles_share_servers': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, { 'name': 'host4@DDD#pool4b', 'host': 'host4', 'backend': 'DDD', 'pool': 'pool4b', 'capabilities': { 'pool_name': 'pool4b', 'timestamp': None, 'share_backend_name': 'DDD', 'free_capacity_gb': 442, 'driver_version': None, 'total_capacity_gb': 542, 'reserved_percentage': 0, 'provisioned_capacity_gb': 2000, 'max_over_subscription_ratio': 10.0, 'thin_provisioning': True, 'vendor_name': None, 'storage_protocol': None, 'driver_handles_share_servers': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, ] self.assertIsInstance(res, list) self.assertIsInstance(self.host_manager.host_state_map, dict) self.assertEqual(len(expected), len(res)) for pool in expected: self.assertIn(pool, res) def test_get_pools_host_down(self): fake_context = context.RequestContext('user', 'project') mock_service_is_up = self.mock_object(utils, 'service_is_up') self.mock_object( db, 'service_get_all_by_topic', mock.Mock(return_value=fakes.SHARE_SERVICES_NO_POOLS)) host_manager.LOG.warning = mock.Mock() with mock.patch.dict(self.host_manager.service_states, fakes.SERVICE_STATES_NO_POOLS): # Initialize host data with all services present mock_service_is_up.side_effect = [True, True, True] # Call once to update the host state map self.host_manager.get_pools(fake_context) self.assertEqual(len(fakes.SHARE_SERVICES_NO_POOLS), len(self.host_manager.host_state_map)) # Then mock one host as down mock_service_is_up.side_effect = [True, True, False] res = self.host_manager.get_pools(fake_context) expected = [ { 'name': 'host1#AAA', 'host': 'host1', 'backend': None, 'pool': 'AAA', 'capabilities': { 'timestamp': None, 'driver_handles_share_servers': False, 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': True, 'mount_snapshot_support': True, 'share_backend_name': 'AAA', 'free_capacity_gb': 200, 'driver_version': None, 'total_capacity_gb': 512, 'reserved_percentage': 0, 'vendor_name': None, 'storage_protocol': None, 'provisioned_capacity_gb': 312, 'max_over_subscription_ratio': 1.0, 'thin_provisioning': False, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, { 'name': 'host2@back1#BBB', 'host': 'host2', 'backend': 'back1', 'pool': 'BBB', 'capabilities': { 'timestamp': None, 'driver_handles_share_servers': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'share_backend_name': 'BBB', 'free_capacity_gb': 100, 'driver_version': None, 'total_capacity_gb': 256, 'reserved_percentage': 0, 'vendor_name': None, 'storage_protocol': None, 'provisioned_capacity_gb': 400, 
'max_over_subscription_ratio': 2.0, 'thin_provisioning': True, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, ] self.assertIsInstance(res, list) self.assertIsInstance(self.host_manager.host_state_map, dict) self.assertEqual(len(expected), len(res)) self.assertEqual(len(expected), len(self.host_manager.host_state_map)) for pool in expected: self.assertIn(pool, res) def test_get_pools_with_filters(self): fake_context = context.RequestContext('user', 'project') self.mock_object(utils, 'service_is_up', mock.Mock(return_value=True)) self.mock_object( db, 'service_get_all_by_topic', mock.Mock(return_value=fakes.SHARE_SERVICES_WITH_POOLS)) host_manager.LOG.warning = mock.Mock() with mock.patch.dict(self.host_manager.service_states, fakes.SHARE_SERVICE_STATES_WITH_POOLS): res = self.host_manager.get_pools( context=fake_context, filters={'host': 'host2', 'pool': 'pool*', 'capabilities': {'dedupe': 'False'}}) expected = [ { 'name': 'host2@BBB#pool2', 'host': 'host2', 'backend': 'BBB', 'pool': 'pool2', 'capabilities': { 'pool_name': 'pool2', 'timestamp': None, 'driver_handles_share_servers': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'share_backend_name': 'BBB', 'free_capacity_gb': 42, 'driver_version': None, 'total_capacity_gb': 52, 'reserved_percentage': 0, 'provisioned_capacity_gb': 60, 'max_over_subscription_ratio': 2.0, 'thin_provisioning': True, 'vendor_name': None, 'storage_protocol': None, 'dedupe': False, 'compression': False, 'replication_type': None, 'replication_domain': None, 'sg_consistent_snapshot_support': None, }, }, ] self.assertTrue(isinstance(res, list)) self.assertEqual(len(expected), len(res)) for pool in expected: self.assertIn(pool, res) @ddt.data( None, {}, {'key1': 'value1'}, {'capabilities': {'dedupe': 'False'}}, {'capabilities': {'dedupe': ' False'}}, {'key1': 'value1', 'key2': 'value*'}, {'key1': '.*', 'key2': '.*'}, ) def test_passes_filters_true(self, filter): data = { 'key1': 'value1', 'key2': 'value2', 'key3': 'value3', 'capabilities': {'dedupe': False}, } self.assertTrue(self.host_manager._passes_filters(data, filter)) @ddt.data( {'key1': 'value$'}, {'key4': 'value'}, {'capabilities': {'dedupe': 'True'}}, {'capabilities': {'dedupe': ' True'}}, {'key1': 'value1.+', 'key2': 'value*'}, ) def test_passes_filters_false(self, filter): data = { 'key1': 'value1', 'key2': 'value2', 'key3': 'value3', 'capabilities': {'dedupe': False}, } self.assertFalse(self.host_manager._passes_filters(data, filter)) class HostStateTestCase(test.TestCase): """Test case for HostState class.""" def test_update_from_share_capability_nopool(self): fake_context = context.RequestContext('user', 'project', is_admin=True) share_capability = {'total_capacity_gb': 0, 'free_capacity_gb': 100, 'reserved_percentage': 0, 'timestamp': None, 'ipv4_support': True, 'ipv6_support': False} fake_host = host_manager.HostState('host1', share_capability) self.assertIsNone(fake_host.free_capacity_gb) fake_host.update_from_share_capability(share_capability, context=fake_context) # Backend level stats remain uninitialized self.assertEqual(0, fake_host.total_capacity_gb) self.assertIsNone(fake_host.free_capacity_gb) self.assertTrue(fake_host.ipv4_support) self.assertFalse(fake_host.ipv6_support) # Pool stats has been updated self.assertEqual(0, fake_host.pools['_pool0'].total_capacity_gb) self.assertEqual(100, 
fake_host.pools['_pool0'].free_capacity_gb) self.assertTrue(fake_host.pools['_pool0'].ipv4_support) self.assertFalse(fake_host.pools['_pool0'].ipv6_support) # Test update for existing host state share_capability.update(dict(total_capacity_gb=1000)) fake_host.update_from_share_capability(share_capability, context=fake_context) self.assertEqual(1000, fake_host.pools['_pool0'].total_capacity_gb) # Test update for existing host state with different backend name share_capability.update(dict(share_backend_name='magic')) fake_host.update_from_share_capability(share_capability, context=fake_context) self.assertEqual(1000, fake_host.pools['magic'].total_capacity_gb) self.assertEqual(100, fake_host.pools['magic'].free_capacity_gb) # 'pool0' becomes nonactive pool, and is deleted self.assertRaises(KeyError, lambda: fake_host.pools['pool0']) def test_update_from_share_capability_with_pools(self): fake_context = context.RequestContext('user', 'project', is_admin=True) fake_host = host_manager.HostState('host1#pool1') self.assertIsNone(fake_host.free_capacity_gb) capability = { 'share_backend_name': 'Backend1', 'vendor_name': 'OpenStack', 'driver_version': '1.1', 'storage_protocol': 'NFS_CIFS', 'ipv4_support': True, 'ipv6_support': False, 'pools': [ {'pool_name': 'pool1', 'total_capacity_gb': 500, 'free_capacity_gb': 230, 'allocated_capacity_gb': 270, 'qos': 'False', 'reserved_percentage': 0, 'dying_disks': 100, 'super_hero_1': 'spider-man', 'super_hero_2': 'flash', 'super_hero_3': 'neoncat', }, {'pool_name': 'pool2', 'total_capacity_gb': 1024, 'free_capacity_gb': 1024, 'allocated_capacity_gb': 0, 'qos': 'False', 'reserved_percentage': 0, 'dying_disks': 200, 'super_hero_1': 'superman', 'super_hero_2': 'Hulk', } ], 'timestamp': None, } fake_host.update_from_share_capability(capability, context=fake_context) self.assertEqual('Backend1', fake_host.share_backend_name) self.assertEqual('NFS_CIFS', fake_host.storage_protocol) self.assertEqual('OpenStack', fake_host.vendor_name) self.assertEqual('1.1', fake_host.driver_version) self.assertTrue(fake_host.ipv4_support) self.assertFalse(fake_host.ipv6_support) # Backend level stats remain uninitialized self.assertEqual(0, fake_host.total_capacity_gb) self.assertIsNone(fake_host.free_capacity_gb) # Pool stats has been updated self.assertEqual(2, len(fake_host.pools)) self.assertEqual(500, fake_host.pools['pool1'].total_capacity_gb) self.assertEqual(230, fake_host.pools['pool1'].free_capacity_gb) self.assertTrue(fake_host.pools['pool1'].ipv4_support) self.assertFalse(fake_host.pools['pool1'].ipv6_support) self.assertEqual(1024, fake_host.pools['pool2'].total_capacity_gb) self.assertEqual(1024, fake_host.pools['pool2'].free_capacity_gb) self.assertTrue(fake_host.pools['pool2'].ipv4_support) self.assertFalse(fake_host.pools['pool2'].ipv6_support) capability = { 'share_backend_name': 'Backend1', 'vendor_name': 'OpenStack', 'driver_version': '1.0', 'storage_protocol': 'NFS_CIFS', 'pools': [ {'pool_name': 'pool3', 'total_capacity_gb': 10000, 'free_capacity_gb': 10000, 'allocated_capacity_gb': 0, 'qos': 'False', 'reserved_percentage': 0, }, ], 'timestamp': None, } # test update HostState Record fake_host.update_from_share_capability(capability, context=fake_context) self.assertEqual('1.0', fake_host.driver_version) # Non-active pool stats has been removed self.assertEqual(1, len(fake_host.pools)) self.assertRaises(KeyError, lambda: fake_host.pools['pool1']) self.assertRaises(KeyError, lambda: fake_host.pools['pool2']) self.assertEqual(10000, 
fake_host.pools['pool3'].total_capacity_gb) self.assertEqual(10000, fake_host.pools['pool3'].free_capacity_gb) def test_update_from_share_unknown_capability(self): share_capability = { 'total_capacity_gb': 'unknown', 'free_capacity_gb': 'unknown', 'allocated_capacity_gb': 1, 'reserved_percentage': 0, 'timestamp': None } fake_context = context.RequestContext('user', 'project', is_admin=True) fake_host = host_manager.HostState('host1#_pool0') self.assertIsNone(fake_host.free_capacity_gb) fake_host.update_from_share_capability(share_capability, context=fake_context) # Backend level stats remain uninitialized self.assertEqual(fake_host.total_capacity_gb, 0) self.assertIsNone(fake_host.free_capacity_gb) # Pool stats has been updated self.assertEqual(fake_host.pools['_pool0'].total_capacity_gb, 'unknown') self.assertEqual(fake_host.pools['_pool0'].free_capacity_gb, 'unknown') def test_consume_from_share_capability(self): fake_context = context.RequestContext('user', 'project', is_admin=True) share_size = 10 free_capacity = 100 provisioned_capacity_gb = 50 fake_share = {'id': 'foo', 'size': share_size} share_capability = { 'total_capacity_gb': free_capacity * 2, 'free_capacity_gb': free_capacity, 'provisioned_capacity_gb': provisioned_capacity_gb, 'allocated_capacity_gb': provisioned_capacity_gb, 'reserved_percentage': 0, 'timestamp': None } fake_host = host_manager.PoolState('host1', share_capability, '_pool0') fake_host.update_from_share_capability(share_capability, context=fake_context) fake_host.consume_from_share(fake_share) self.assertEqual(fake_host.free_capacity_gb, free_capacity - share_size) self.assertEqual(fake_host.provisioned_capacity_gb, provisioned_capacity_gb + share_size) self.assertEqual(fake_host.allocated_capacity_gb, provisioned_capacity_gb + share_size) def test_consume_from_share_unknown_capability(self): share_capability = { 'total_capacity_gb': 'unknown', 'free_capacity_gb': 'unknown', 'provisioned_capacity_gb': None, 'allocated_capacity_gb': 0, 'reserved_percentage': 0, 'timestamp': None } fake_context = context.RequestContext('user', 'project', is_admin=True) fake_host = host_manager.PoolState('host1', share_capability, '_pool0') share_size = 1000 fake_share = {'id': 'foo', 'size': share_size} fake_host.update_from_share_capability(share_capability, context=fake_context) fake_host.consume_from_share(fake_share) self.assertEqual(fake_host.total_capacity_gb, 'unknown') self.assertEqual(fake_host.free_capacity_gb, 'unknown') self.assertIsNone(fake_host.provisioned_capacity_gb) self.assertEqual(fake_host.allocated_capacity_gb, share_size) def test_consume_from_share_invalid_capacity(self): fake_host = host_manager.PoolState('host1', {}, '_pool0') fake_host.free_capacity_gb = 'invalid_foo_string' fake_host.provisioned_capacity_gb = None fake_host.allocated_capacity_gb = 0 fake_share = {'id': 'fake', 'size': 10} self.assertRaises(exception.InvalidCapacity, fake_host.consume_from_share, fake_share) def test_repr(self): capability = { 'share_backend_name': 'Backend1', 'vendor_name': 'OpenStack', 'driver_version': '1.0', 'storage_protocol': 'NFS_CIFS', 'total_capacity_gb': 20000, 'free_capacity_gb': 15000, 'allocated_capacity_gb': 5000, 'timestamp': None, 'reserved_percentage': 0, } fake_context = context.RequestContext('user', 'project', is_admin=True) fake_host = host_manager.HostState('host1') fake_host.update_from_share_capability(capability, context=fake_context) result = fake_host.__repr__() expected = ("host: 'host1', free_capacity_gb: None, " "pools: {'Backend1': host: 
'host1#Backend1', " "free_capacity_gb: 15000, pools: None}") self.assertEqual(expected, result) @ddt.ddt class PoolStateTestCase(test.TestCase): """Test case for HostState class.""" @ddt.data( { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'reserved_percentage': 0, 'timestamp': None, 'thin_provisioning': True, 'cap1': 'val1', 'cap2': 'val2'}, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 4, 'updated_at': timeutils.utcnow() }, { 'id': 2, 'host': 'host1', 'status': 'available', 'share_id': 12, 'size': None, 'updated_at': timeutils.utcnow() }, ] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'reserved_percentage': 0, 'timestamp': None, 'thin_provisioning': False, 'cap1': 'val1', 'cap2': 'val2'}, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 1, 'updated_at': timeutils.utcnow() }, { 'id': 2, 'host': 'host1', 'status': 'available', 'share_id': 12, 'size': None, 'updated_at': timeutils.utcnow() }, ] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'reserved_percentage': 0, 'timestamp': None, 'thin_provisioning': [False], 'cap1': 'val1', 'cap2': 'val2'}, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 1, 'updated_at': timeutils.utcnow() }, { 'id': 2, 'host': 'host1', 'status': 'available', 'share_id': 12, 'size': None, 'updated_at': timeutils.utcnow() }, ] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'reserved_percentage': 0, 'timestamp': None, 'thin_provisioning': [True, False], 'cap1': 'val1', 'cap2': 'val2'}, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 4, 'updated_at': timeutils.utcnow() }, { 'id': 2, 'host': 'host1', 'status': 'available', 'share_id': 12, 'size': None, 'updated_at': timeutils.utcnow() }, ] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'reserved_percentage': 0, 'timestamp': None, 'cap1': 'val1', 'cap2': 'val2', 'ipv4_support': True, 'ipv6_support': False}, 'instances': [] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'reserved_percentage': 0, 'timestamp': None, 'thin_provisioning': True, 'cap1': 'val1', 'cap2': 'val2', 'ipv4_support': True, 'ipv6_support': False}, 'instances': [] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'reserved_percentage': 0, 'timestamp': None, 'thin_provisioning': [False], 'cap1': 'val1', 'cap2': 'val2', 'ipv4_support': True, 'ipv6_support': False}, 'instances': [] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'reserved_percentage': 0, 'timestamp': None, 'thin_provisioning': [True, False], 'cap1': 'val1', 'cap2': 'val2', 'ipv4_support': True, 'ipv6_support': False}, 'instances': [] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'allocated_capacity_gb': 256, 'reserved_percentage': 0, 'timestamp': None, 'cap1': 'val1', 'cap2': 'val2', 'ipv4_support': False, 'ipv6_support': True }, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 4, 'updated_at': timeutils.utcnow() }, ] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'allocated_capacity_gb': 256, 'reserved_percentage': 0, 'timestamp': None, 'cap1': 'val1', 'cap2': 'val2', 'ipv4_support': True, 'ipv6_support': True}, 'instances': [] }, { 'share_capability': {'total_capacity_gb': 1024, 
'free_capacity_gb': 512, 'provisioned_capacity_gb': 256, 'reserved_percentage': 0, 'timestamp': None, 'cap1': 'val1', 'cap2': 'val2', 'ipv4_support': False, 'ipv6_support': False }, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 1, 'updated_at': timeutils.utcnow() }, ] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'allocated_capacity_gb': 256, 'provisioned_capacity_gb': 1, 'thin_provisioning': True, 'reserved_percentage': 0, 'timestamp': None, 'cap1': 'val1', 'cap2': 'val2'}, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 1, 'updated_at': timeutils.utcnow() }, ] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'allocated_capacity_gb': 256, 'provisioned_capacity_gb': 1, 'thin_provisioning': [False], 'reserved_percentage': 0, 'timestamp': None, 'cap1': 'val1', 'cap2': 'val2'}, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 1, 'updated_at': timeutils.utcnow() }, ] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'allocated_capacity_gb': 256, 'provisioned_capacity_gb': 1, 'thin_provisioning': [True, False], 'reserved_percentage': 0, 'timestamp': None, 'cap1': 'val1', 'cap2': 'val2'}, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 1, 'updated_at': timeutils.utcnow() }, ] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'allocated_capacity_gb': 256, 'provisioned_capacity_gb': 256, 'thin_provisioning': False, 'reserved_percentage': 0, 'timestamp': None, 'cap1': 'val1', 'cap2': 'val2'}, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 1, 'updated_at': timeutils.utcnow() }, ] }, { 'share_capability': {'total_capacity_gb': 1024, 'free_capacity_gb': 512, 'allocated_capacity_gb': 256, 'provisioned_capacity_gb': 256, 'thin_provisioning': [False], 'reserved_percentage': 0, 'timestamp': None, 'cap1': 'val1', 'cap2': 'val2'}, 'instances': [ { 'id': 1, 'host': 'host1', 'status': 'available', 'share_id': 11, 'size': 1, 'updated_at': timeutils.utcnow() }, ] }, ) @ddt.unpack def test_update_from_share_capability(self, share_capability, instances): fake_context = context.RequestContext('user', 'project', is_admin=True) self.mock_object( db, 'share_instances_get_all_by_host', mock.Mock(return_value=instances)) fake_pool = host_manager.PoolState('host1', None, 'pool0') self.assertIsNone(fake_pool.free_capacity_gb) fake_pool.update_from_share_capability(share_capability, context=fake_context) self.assertEqual('host1#pool0', fake_pool.host) self.assertEqual('pool0', fake_pool.pool_name) self.assertEqual(1024, fake_pool.total_capacity_gb) self.assertEqual(512, fake_pool.free_capacity_gb) self.assertDictMatch(share_capability, fake_pool.capabilities) if 'thin_provisioning' in share_capability: thin_provisioned = scheduler_utils.thin_provisioning( share_capability['thin_provisioning']) else: thin_provisioned = False if thin_provisioned: self.assertEqual(thin_provisioned, fake_pool.thin_provisioning) if 'provisioned_capacity_gb' not in share_capability or ( share_capability['provisioned_capacity_gb'] is None): db.share_instances_get_all_by_host.assert_called_once_with( fake_context, fake_pool.host, with_share_data=True) if len(instances) > 0: self.assertEqual(4, fake_pool.provisioned_capacity_gb) else: self.assertEqual(0, fake_pool.provisioned_capacity_gb) else: 
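                # The capability reported provisioned_capacity_gb itself, so
                # the DB should not be queried and the reported value is used
                # as-is (asserted on the next lines).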
self.assertFalse(db.share_instances_get_all_by_host.called) self.assertEqual(share_capability['provisioned_capacity_gb'], fake_pool.provisioned_capacity_gb) else: self.assertFalse(fake_pool.thin_provisioning) self.assertFalse(db.share_instances_get_all_by_host.called) if 'provisioned_capacity_gb' not in share_capability or ( share_capability['provisioned_capacity_gb'] is None): self.assertIsNone(fake_pool.provisioned_capacity_gb) else: self.assertEqual(share_capability['provisioned_capacity_gb'], fake_pool.provisioned_capacity_gb) if 'allocated_capacity_gb' in share_capability: self.assertEqual(share_capability['allocated_capacity_gb'], fake_pool.allocated_capacity_gb) else: self.assertEqual(0, fake_pool.allocated_capacity_gb) if 'ipv4_support' in share_capability: self.assertEqual(share_capability['ipv4_support'], fake_pool.ipv4_support) if 'ipv6_support' in share_capability: self.assertEqual(share_capability['ipv6_support'], fake_pool.ipv6_support) manila-10.0.0/manila/tests/fake_share.py0000664000175000017500000002376213656750227020147 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2015 Intel, Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from manila.api.openstack import api_version_request as api_version from manila.common import constants from manila.db.sqlalchemy import models from manila.tests.db import fakes as db_fakes from oslo_utils import uuidutils def fake_share(**kwargs): share = { 'id': 'fakeid', 'name': 'fakename', 'size': 1, 'share_proto': 'fake_proto', 'share_network_id': 'fake share network id', 'share_server_id': 'fake share server id', 'export_location': 'fake_location:/fake_share', 'project_id': 'fake_project_uuid', 'availability_zone': 'fake_az', 'snapshot_support': 'True', 'replication_type': None, 'is_busy': False, 'share_group_id': None, 'instance': { 'id': 'fake_share_instance_id', 'host': 'fakehost', 'share_type_id': '1', }, 'mount_snapshot_support': False, } share.update(kwargs) return db_fakes.FakeModel(share) def fake_share_instance(base_share=None, **kwargs): if base_share is None: share = fake_share() else: share = base_share share_instance = { 'share_id': share['id'], 'id': "fakeinstanceid", 'status': "active", 'host': 'fakehost', 'share_network_id': 'fakesharenetworkid', 'share_server_id': 'fakeshareserverid', 'share_type_id': '1', } for attr in models.ShareInstance._proxified_properties: share_instance[attr] = getattr(share, attr, None) share_instance.update(kwargs) return db_fakes.FakeModel(share_instance) def fake_share_type(**kwargs): share_type = { 'id': "fakesharetype", 'name': "fakesharetypename", 'is_public': False, 'extra_specs': { 'driver_handles_share_servers': 'False', } } extra_specs = kwargs.pop('extra_specs', {}) for key, value in extra_specs.items(): share_type['extra_specs'][key] = value share_type.update(kwargs) return db_fakes.FakeModel(share_type) def fake_snapshot(create_instance=False, **kwargs): instance_keys = ('instance_id', 'snapshot_id', 'share_instance_id', 'status', 
'progress', 'provider_location') snapshot_keys = ('id', 'share_name', 'share_id', 'name', 'share_size', 'share_proto', 'instance', 'aggregate_status', 'share', 'project_id', 'size') instance_kwargs = {k: kwargs.get(k) for k in instance_keys if k in kwargs} snapshot_kwargs = {k: kwargs.get(k) for k in snapshot_keys if k in kwargs} aggregate_status = snapshot_kwargs.get( 'aggregate_status', instance_kwargs.get( 'status', constants.STATUS_CREATING)) snapshot = { 'id': 'fakesnapshotid', 'share_name': 'fakename', 'share_id': 'fakeid', 'name': 'fakesnapshotname', 'share_size': 1, 'share_proto': 'fake_proto', 'instance': {}, 'share': 'fake_share', 'aggregate_status': aggregate_status, 'project_id': 'fakeprojectid', 'size': 1, 'user_id': 'xyzzy', } snapshot.update(snapshot_kwargs) if create_instance: if 'instance_id' in instance_kwargs: instance_kwargs['id'] = instance_kwargs.pop('instance_id') snapshot['instance'] = fake_snapshot_instance( base_snapshot=snapshot, **instance_kwargs) snapshot['status'] = snapshot['instance']['status'] snapshot['provider_location'] = ( snapshot['instance']['provider_location'] ) snapshot['progress'] = snapshot['instance']['progress'] snapshot['instances'] = snapshot['instance'], else: snapshot['status'] = constants.STATUS_AVAILABLE snapshot['progress'] = '0%' snapshot['provider_location'] = 'fake' snapshot.update(instance_kwargs) return db_fakes.FakeModel(snapshot) def fake_snapshot_instance(base_snapshot=None, as_primitive=False, **kwargs): if base_snapshot is None: base_snapshot = fake_snapshot() snapshot_instance = { 'id': 'fakesnapshotinstanceid', 'snapshot_id': base_snapshot['id'], 'status': constants.STATUS_CREATING, 'progress': '0%', 'provider_location': 'i_live_here_actually', 'share_name': 'fakename', 'share_id': 'fakeshareinstanceid', 'share_instance': { 'share_id': 'fakeshareid', 'share_type_id': '1', }, 'share_instance_id': 'fakeshareinstanceid', 'deleted': False, 'updated_at': datetime.datetime(2016, 3, 21, 0, 5, 58), 'created_at': datetime.datetime(2016, 3, 21, 0, 5, 58), 'deleted_at': None, 'share': fake_share(), } snapshot_instance.update(kwargs) if as_primitive: return snapshot_instance else: return db_fakes.FakeModel(snapshot_instance) def expected_snapshot(version=None, id='fake_snapshot_id', **kwargs): self_link = 'http://localhost/v1/fake/snapshots/%s' % id bookmark_link = 'http://localhost/fake/snapshots/%s' % id snapshot = { 'id': id, 'share_id': 'fakeshareid', 'created_at': datetime.datetime(1, 1, 1, 1, 1, 1), 'status': 'fakesnapstatus', 'name': 'displaysnapname', 'description': 'displaysnapdesc', 'share_size': 1, 'size': 1, 'share_proto': 'fakesnapproto', 'links': [ { 'href': self_link, 'rel': 'self', }, { 'href': bookmark_link, 'rel': 'bookmark', }, ], } if version and (api_version.APIVersionRequest(version) >= api_version.APIVersionRequest('2.17')): snapshot.update({ 'user_id': 'fakesnapuser', 'project_id': 'fakesnapproject', }) snapshot.update(kwargs) return {'snapshot': snapshot} def search_opts(**kwargs): search_opts = { 'name': 'fake_name', 'status': 'fake_status', 'share_id': 'fake_share_id', 'sort_key': 'fake_sort_key', 'sort_dir': 'fake_sort_dir', 'offset': '1', 'limit': '1', } search_opts.update(kwargs) return search_opts def fake_access(**kwargs): access = { 'id': 'fakeaccid', 'access_type': 'ip', 'access_to': '10.0.0.1', 'access_level': 'rw', 'state': 'active', } access.update(kwargs) return db_fakes.FakeModel(access) def fake_replica(id=None, as_primitive=True, for_manager=False, **kwargs): replica = { 'id': id or 
uuidutils.generate_uuid(), 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack@BackendZ#PoolA', 'status': 'available', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': None, 'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f', 'export_locations': [{'path': 'path1'}, {'path': 'path2'}], 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': '53099868-65f1-11e5-9d70-feff819cdc9f', 'access_rules_status': constants.SHARE_INSTANCE_RULES_SYNCING, } if for_manager: replica.update({ 'user_id': None, 'project_id': None, 'share_type_id': None, 'size': None, 'display_name': None, 'display_description': None, 'replication_type': None, 'snapshot_id': None, 'share_proto': None, 'is_public': None, 'share_group_id': None, 'source_share_group_snapshot_member_id': None, 'availability_zone': 'fake_az', }) replica.update(kwargs) if as_primitive: return replica else: return db_fakes.FakeModel(replica) def fake_replica_request_spec(as_primitive=True, **kwargs): replica = fake_replica(id='9c0db763-a109-4862-b010-10f2bd395295') all_replica_hosts = ','.join(['fake_active_replica_host', replica['host']]) request_spec = { 'share_properties': fake_share( id='f0e4bb5e-65f0-11e5-9d70-feff819cdc9f'), 'share_instance_properties': replica, 'share_proto': 'nfs', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'snapshot_id': None, 'share_type': 'fake_share_type', 'share_group': None, 'active_replica_host': 'fake_active_replica_host', 'all_replica_hosts': all_replica_hosts, } request_spec.update(kwargs) if as_primitive: return request_spec else: return db_fakes.FakeModel(request_spec) def fake_share_server_get(): fake_share_server = { 'status': constants.STATUS_ACTIVE, 'updated_at': None, 'host': 'fake_host', 'share_network_subnet_id': 'fake_sn_id', 'share_network_name': 'fake_sn_name', 'project_id': 'fake_project_id', 'id': 'fake_share_server_id', 'backend_details': { 'security_service_active_directory': '{"name": "fake_AD"}', 'security_service_ldap': '{"name": "fake_LDAP"}', 'security_service_kerberos': '{"name": "fake_kerberos"}', } } return fake_share_server manila-10.0.0/manila/tests/test_coordination.py0000664000175000017500000000730713656750227021603 0ustar zuulzuul00000000000000# Copyright 2015 Intel # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
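# Illustrative aside (not part of the upstream tree): the helpers in
# fake_share.py above all follow the same "defaults plus overrides" pattern:
# build a dict of sane defaults, overlay the caller's kwargs, and wrap the
# result in db_fakes.FakeModel.  Below is a minimal, hedged sketch of how a
# unit test might consume them; the identifiers passed in are made up purely
# for illustration and the helper itself is never called by the test suite.
def _fake_share_usage_sketch():
    from manila.tests import fake_share

    # Override only the fields the test cares about; everything else keeps
    # the factory defaults (project, availability zone, instance, ...).
    share = fake_share.fake_share(id='share-123', size=10, share_proto='NFS')

    # create_instance=True also builds a snapshot instance and mirrors its
    # status/progress/provider_location onto the snapshot dict itself.
    snapshot = fake_share.fake_snapshot(create_instance=True, size=10)
    return share, snapshot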
from unittest import mock import ddt from tooz import coordination as tooz_coordination from tooz import locking as tooz_locking from manila import coordination from manila import test class Locked(Exception): pass class MockToozLock(tooz_locking.Lock): active_locks = set() def acquire(self, blocking=True): if self.name not in self.active_locks: self.active_locks.add(self.name) return True elif not blocking: return False else: raise Locked def release(self): self.active_locks.remove(self.name) @ddt.ddt class CoordinatorTestCase(test.TestCase): def setUp(self): super(CoordinatorTestCase, self).setUp() self.get_coordinator = self.mock_object(tooz_coordination, 'get_coordinator') def test_coordinator_start(self): crd = self.get_coordinator.return_value agent = coordination.Coordinator() agent.start() self.assertTrue(self.get_coordinator.called) self.assertTrue(crd.start.called) self.assertTrue(agent.started) def test_coordinator_stop(self): crd = self.get_coordinator.return_value agent = coordination.Coordinator() agent.start() self.assertIsNotNone(agent.coordinator) agent.stop() self.assertTrue(crd.stop.called) self.assertIsNone(agent.coordinator) self.assertFalse(agent.started) def test_coordinator_lock(self): crd = self.get_coordinator.return_value crd.get_lock.side_effect = lambda n: MockToozLock(n) agent1 = coordination.Coordinator() agent1.start() agent2 = coordination.Coordinator() agent2.start() lock_string = 'lock' expected_lock = lock_string.encode('ascii') self.assertNotIn(expected_lock, MockToozLock.active_locks) with agent1.get_lock(lock_string): self.assertIn(expected_lock, MockToozLock.active_locks) self.assertRaises(Locked, agent1.get_lock(lock_string).acquire) self.assertRaises(Locked, agent2.get_lock(lock_string).acquire) self.assertNotIn(expected_lock, MockToozLock.active_locks) def test_coordinator_offline(self): crd = self.get_coordinator.return_value crd.start.side_effect = tooz_coordination.ToozConnectionError('err') agent = coordination.Coordinator() self.assertRaises(tooz_coordination.ToozError, agent.start) self.assertFalse(agent.started) @mock.patch.object(coordination.LOCK_COORDINATOR, 'get_lock') class CoordinationTestCase(test.TestCase): def test_lock(self, get_lock): with coordination.Lock('lock'): self.assertTrue(get_lock.called) def test_synchronized(self, get_lock): @coordination.synchronized('lock-{f_name}-{foo.val}-{bar[val]}') def func(foo, bar): pass foo = mock.Mock() foo.val = 7 bar = mock.MagicMock() bar.__getitem__.return_value = 8 func(foo, bar) get_lock.assert_called_with('lock-func-7-8') manila-10.0.0/manila/tests/fake_driver.py0000664000175000017500000001010213656750227020320 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # Copyright 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log import six from manila.common import constants from manila.share import driver from manila.tests import fake_service_instance LOG = log.getLogger(__name__) class FakeShareDriver(driver.ShareDriver): """Fake share driver. 
This fake driver can be also used as a test driver within a real running manila-share instance. To activate it use this in manila.conf:: enabled_share_backends = fake [fake] driver_handles_share_servers = True share_backend_name = fake share_driver = manila.tests.fake_driver.FakeShareDriver With it you basically mocked all backend driver calls but e.g. networking will still be activated. """ def __init__(self, *args, **kwargs): self._setup_service_instance_manager() super(FakeShareDriver, self).__init__([True, False], *args, **kwargs) def _setup_service_instance_manager(self): self.service_instance_manager = ( fake_service_instance.FakeServiceInstanceManager()) def manage_existing(self, share, driver_options, share_server=None): LOG.debug("Fake share driver: manage") LOG.debug("Fake share driver: driver options: %s", six.text_type(driver_options)) return {'size': 1} def unmanage(self, share, share_server=None): LOG.debug("Fake share driver: unmanage") @property def driver_handles_share_servers(self): if not isinstance(self.configuration.safe_get( 'driver_handles_share_servers'), bool): return True return self.configuration.driver_handles_share_servers def create_snapshot(self, context, snapshot, share_server=None): pass def delete_snapshot(self, context, snapshot, share_server=None): pass def create_share(self, context, share, share_server=None): return ['/fake/path', '/fake/path2'] def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): return { 'export_locations': ['/fake/path', '/fake/path2'], 'status': constants.STATUS_AVAILABLE, } def delete_share(self, context, share, share_server=None): pass def ensure_share(self, context, share, share_server=None): pass def allow_access(self, context, share, access, share_server=None): pass def deny_access(self, context, share, access, share_server=None): pass def get_share_stats(self, refresh=False): return None def do_setup(self, context): pass def setup_server(self, *args, **kwargs): pass def teardown_server(self, *args, **kwargs): pass def get_network_allocations_number(self): # NOTE(vponomaryov): Simulate drivers that use share servers and # do not use 'service_instance' module. return 2 def _verify_share_server_handling(self, driver_handles_share_servers): return super(FakeShareDriver, self)._verify_share_server_handling( driver_handles_share_servers) def create_share_group(self, context, group_id, share_server=None): pass def delete_share_group(self, context, group_id, share_server=None): pass def get_share_status(self, share, share_server=None): return { 'export_locations': ['/fake/path', '/fake/path2'], 'status': constants.STATUS_AVAILABLE, } manila-10.0.0/manila/tests/declare_conf.py0000664000175000017500000000152013656750227020447 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
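# Illustrative aside (not part of the upstream tree): the tiny module body
# below only registers a single integer option, 'answer', so config-driven
# tests have a known option to load and inspect.  A hedged sketch of how it
# might be exercised; the helper name here is invented for illustration and
# is never invoked.
def _declare_conf_usage_sketch():
    from oslo_config import cfg
    # Importing the module is what registers the option on the global CONF.
    from manila.tests import declare_conf  # noqa: F401
    return cfg.CONF.answer  # 42 unless overridden by a config file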
from oslo_config import cfg CONF = cfg.CONF CONF.register_opt(cfg.IntOpt('answer', default=42, help='test conf')) manila-10.0.0/manila/tests/fake_network.py0000664000175000017500000001447513656750227020537 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_utils import uuidutils CONF = cfg.CONF class FakeNetwork(object): def __init__(self, **kwargs): self.id = kwargs.pop('id', 'fake_net_id') self.name = kwargs.pop('name', 'net_name') self.subnets = kwargs.pop('subnets', []) for key, value in kwargs.items(): setattr(self, key, value) def __getitem__(self, attr): return getattr(self, attr) class FakeSubnet(object): def __init__(self, **kwargs): self.id = kwargs.pop('id', 'fake_subnet_id') self.network_id = kwargs.pop('network_id', 'fake_net_id') self.cidr = kwargs.pop('cidr', 'fake_cidr') for key, value in kwargs.items(): setattr(self, key, value) def __getitem__(self, attr): return getattr(self, attr) class FakePort(object): def __init__(self, **kwargs): self.id = kwargs.pop('id', 'fake_subnet_id') self.network_id = kwargs.pop('network_id', 'fake_net_id') self.fixed_ips = kwargs.pop('fixed_ips', []) for key, value in kwargs.items(): setattr(self, key, value) def __getitem__(self, attr): return getattr(self, attr) class FakeRouter(object): def __init__(self, **kwargs): self.id = kwargs.pop('id', 'fake_router_id') self.name = kwargs.pop('name', 'fake_router_name') for key, value in kwargs.items(): setattr(self, key, value) def __getitem__(self, attr): return getattr(self, attr) def __setitem__(self, attr, value): setattr(self, attr, value) class FakeDeviceAddr(object): def __init__(self, list_of_addresses=None): self.addresses = list_of_addresses or [ dict(ip_version=4, cidr='1.0.0.0/27'), dict(ip_version=4, cidr='2.0.0.0/27'), dict(ip_version=6, cidr='3.0.0.0/27'), ] def list(self): return self.addresses class FakeDevice(object): def __init__(self, name=None, list_of_addresses=None): self.addr = FakeDeviceAddr(list_of_addresses) self.name = name or 'fake_device_name' class API(object): """Fake Network API.""" admin_project_id = 'fake_admin_project_id' network = { "status": "ACTIVE", "subnets": ["fake_subnet_id"], "name": "fake_network", "tenant_id": "fake_tenant_id", "shared": False, "id": "fake_id", "router:external": False, } port = { "status": "ACTIVE", "allowed_address_pairs": [], "admin_state_up": True, "network_id": "fake_network_id", "tenant_id": "fake_tenant_id", "extra_dhcp_opts": [], "device_owner": "fake", "binding:capabilities": {"port_filter": True}, "mac_address": "00:00:00:00:00:00", "fixed_ips": [ {"subnet_id": "56537094-98d7-430a-b513-81c4dc6d9903", "ip_address": "10.12.12.10"} ], "id": "fake_port_id", "security_groups": ["fake_sec_group_id"], "device_id": "fake_device_id" } def get_all_admin_project_networks(self): net1 = self.network.copy() net1['tenant_id'] = self.admin_project_id net1['id'] = uuidutils.generate_uuid() net2 = self.network.copy() net2['tenant_id'] = 
self.admin_project_id net2['id'] = uuidutils.generate_uuid() return [net1, net2] def create_port(self, tenant_id, network_id, subnet_id=None, fixed_ip=None, device_owner=None, device_id=None): port = self.port.copy() port['network_id'] = network_id port['admin_state_up'] = True port['tenant_id'] = tenant_id if fixed_ip: fixed_ip_dict = {'ip_address': fixed_ip} if subnet_id: fixed_ip_dict.update({'subnet_id': subnet_id}) port['fixed_ips'] = [fixed_ip_dict] if device_owner: port['device_owner'] = device_owner if device_id: port['device_id'] = device_id return port def list_ports(self, **search_opts): """List ports for the client based on search options.""" ports = [] for i in range(2): ports.append(self.port.copy()) for port in ports: port['id'] = uuidutils.generate_uuid() for key, val in search_opts.items(): port[key] = val if 'id' in search_opts: return ports return ports def show_port(self, port_id): """Return the port for the client given the port id.""" port = self.port.copy() port['id'] = port_id return port def delete_port(self, port_id): pass def get_subnet(self, subnet_id): pass def subnet_create(self, *args, **kwargs): pass def router_add_interface(self, *args, **kwargs): pass def show_router(self, *args, **kwargs): pass def update_port_fixed_ips(self, *args, **kwargs): pass def router_remove_interface(self, *args, **kwargs): pass def update_subnet(self, *args, **kwargs): pass def get_all_networks(self): """Get all networks for client.""" net1 = self.network.copy() net2 = self.network.copy() net1['id'] = uuidutils.generate_uuid() net2['id'] = uuidutils.generate_uuid() return [net1, net2] def get_network(self, network_uuid): """Get specific network for client.""" network = self.network.copy() network['id'] = network_uuid return network def network_create(self, tenant_id, name): network = self.network.copy() network['tenant_id'] = tenant_id network['name'] = name return network manila-10.0.0/manila/tests/fake_notifier.py0000664000175000017500000000471013656750227020654 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
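# Illustrative aside (not part of the upstream tree): fake_network.API above
# stands in for the Neutron client by returning copies of its canned
# 'network' and 'port' dicts with only the caller-supplied fields overridden.
# A hedged usage sketch; the tenant and network identifiers are made up, and
# the helper is never called by the real tests.
def _fake_network_usage_sketch():
    from manila.tests import fake_network

    api = fake_network.API()
    port = api.create_port('fake_tenant', 'fake_net_id',
                           fixed_ip='10.0.0.5',
                           device_owner='manila:share')
    # The returned dict mirrors the requested network and fixed IP.
    return port['network_id'], port['fixed_ips'][0]['ip_address']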
import collections import functools import oslo_messaging as messaging from oslo_serialization import jsonutils from manila import rpc NOTIFICATIONS = [] def reset(): del NOTIFICATIONS[:] FakeMessage = collections.namedtuple( 'Message', ['publisher_id', 'priority', 'event_type', 'payload'], ) class FakeNotifier(object): def __init__(self, transport, publisher_id=None, serializer=None): self.transport = transport self.publisher_id = publisher_id for priority in ['debug', 'info', 'warn', 'error', 'critical']: setattr(self, priority, functools.partial(self._notify, priority.upper())) self._serializer = serializer or messaging.serializer.NoOpSerializer() def prepare(self, publisher_id=None): if publisher_id is None: publisher_id = self.publisher_id return self.__class__(self.transport, publisher_id, self._serializer) def _notify(self, priority, ctxt, event_type, payload): payload = self._serializer.serialize_entity(ctxt, payload) # NOTE(sileht): simulate the kombu serializer # this permit to raise an exception if something have not # been serialized correctly jsonutils.to_primitive(payload) msg = dict(publisher_id=self.publisher_id, priority=priority, event_type=event_type, payload=payload) NOTIFICATIONS.append(msg) def stub_notifier(testcase): testcase.mock_object(messaging, 'Notifier', FakeNotifier) if rpc.NOTIFIER: serializer = getattr(rpc.NOTIFIER, '_serializer', None) testcase.mock_object(rpc, 'NOTIFIER', FakeNotifier(rpc.NOTIFIER.transport, rpc.NOTIFIER.publisher_id, serializer=serializer)) manila-10.0.0/manila/share_group/0000775000175000017500000000000013656750362016647 5ustar zuulzuul00000000000000manila-10.0.0/manila/share_group/__init__.py0000664000175000017500000000000013656750227020746 0ustar zuulzuul00000000000000manila-10.0.0/manila/share_group/api.py0000664000175000017500000005212413656750227017776 0ustar zuulzuul00000000000000# Copyright (c) 2015 Alex Meade # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Handles all requests relating to share groups. 
""" from oslo_config import cfg from oslo_log import log from oslo_utils import excutils from oslo_utils import strutils import six from manila.common import constants from manila.db import base from manila import exception from manila.i18n import _ from manila import quota from manila.scheduler import rpcapi as scheduler_rpcapi from manila import share from manila.share import rpcapi as share_rpcapi from manila.share import share_types CONF = cfg.CONF LOG = log.getLogger(__name__) QUOTAS = quota.QUOTAS class API(base.Base): """API for interacting with the share manager.""" def __init__(self, db_driver=None): self.scheduler_rpcapi = scheduler_rpcapi.SchedulerAPI() self.share_rpcapi = share_rpcapi.ShareAPI() self.share_api = share.API() super(API, self).__init__(db_driver) def create(self, context, name=None, description=None, share_type_ids=None, source_share_group_snapshot_id=None, share_network_id=None, share_group_type_id=None, availability_zone_id=None, availability_zone=None): """Create new share group.""" share_group_snapshot = None original_share_group = None # NOTE(gouthamr): share_server_id is inherited from the # parent share group if a share group snapshot is specified, # else, it will be set in the share manager. share_server_id = None if source_share_group_snapshot_id: share_group_snapshot = self.db.share_group_snapshot_get( context, source_share_group_snapshot_id) if share_group_snapshot['status'] != constants.STATUS_AVAILABLE: msg = (_("Share group snapshot status must be %s.") % constants.STATUS_AVAILABLE) raise exception.InvalidShareGroupSnapshot(reason=msg) original_share_group = self.db.share_group_get( context, share_group_snapshot['share_group_id']) share_type_ids = [ s['share_type_id'] for s in original_share_group['share_types']] share_network_id = original_share_group['share_network_id'] share_server_id = original_share_group['share_server_id'] availability_zone_id = original_share_group['availability_zone_id'] # Get share_type_objects share_type_objects = [] driver_handles_share_servers = None for share_type_id in (share_type_ids or []): try: share_type_object = share_types.get_share_type( context, share_type_id) except exception.ShareTypeNotFound: msg = _("Share type with id %s could not be found.") raise exception.InvalidInput(msg % share_type_id) share_type_objects.append(share_type_object) extra_specs = share_type_object.get('extra_specs') if extra_specs: share_type_handle_ss = strutils.bool_from_string( extra_specs.get( constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS)) if driver_handles_share_servers is None: driver_handles_share_servers = share_type_handle_ss elif not driver_handles_share_servers == share_type_handle_ss: # NOTE(ameade): if the share types have conflicting values # for driver_handles_share_servers then raise bad request msg = _("The specified share_types cannot have " "conflicting values for the " "driver_handles_share_servers extra spec.") raise exception.InvalidInput(reason=msg) if (not share_type_handle_ss) and share_network_id: msg = _("When using a share types with the " "driver_handles_share_servers extra spec as " "False, a share_network_id must not be provided.") raise exception.InvalidInput(reason=msg) try: if share_network_id: self.db.share_network_get(context, share_network_id) except exception.ShareNetworkNotFound: msg = _("The specified share network does not exist.") raise exception.InvalidInput(reason=msg) if (driver_handles_share_servers and not (source_share_group_snapshot_id or share_network_id)): msg = _("When using a 
share type with the " "driver_handles_share_servers extra spec as " "True, a share_network_id must be provided.") raise exception.InvalidInput(reason=msg) try: share_group_type = self.db.share_group_type_get( context, share_group_type_id) except exception.ShareGroupTypeNotFound: msg = _("The specified share group type %s does not exist.") raise exception.InvalidInput(reason=msg % share_group_type_id) supported_share_types = set( [x['share_type_id'] for x in share_group_type['share_types']]) supported_share_type_objects = [ share_types.get_share_type(context, share_type_id) for share_type_id in supported_share_types ] if not set(share_type_ids or []) <= supported_share_types: msg = _("The specified share types must be a subset of the share " "types supported by the share group type.") raise exception.InvalidInput(reason=msg) # Grab share type AZs for scheduling share_types_of_new_group = ( share_type_objects or supported_share_type_objects ) stype_azs_of_new_group = [] stypes_unsupported_in_az = [] for stype in share_types_of_new_group: stype_azs = stype.get('extra_specs', {}).get( 'availability_zones', '') if stype_azs: stype_azs = stype_azs.split(',') stype_azs_of_new_group.extend(stype_azs) if availability_zone and availability_zone not in stype_azs: # If an AZ is requested, it must be supported by the AZs # configured in each of the share types requested stypes_unsupported_in_az.append((stype['name'], stype['id'])) if stypes_unsupported_in_az: msg = _("Share group cannot be created since the following share " "types are not supported within the availability zone " "'%(az)s': (%(stypes)s)") payload = {'az': availability_zone, 'stypes': ''} for type_name, type_id in set(stypes_unsupported_in_az): if payload['stypes']: payload['stypes'] += ', ' type_name = '%s ' % (type_name or '') payload['stypes'] += type_name + '(ID: %s)' % type_id raise exception.InvalidInput(reason=msg % payload) try: reservations = QUOTAS.reserve(context, share_groups=1) except exception.OverQuota as e: overs = e.kwargs['overs'] usages = e.kwargs['usages'] quotas = e.kwargs['quotas'] def _consumed(name): return (usages[name]['reserved'] + usages[name]['in_use']) if 'share_groups' in overs: msg = ("Quota exceeded for '%(s_uid)s' user in '%(s_pid)s' " "project. 
(%(d_consumed)d of " "%(d_quota)d already consumed).") LOG.warning(msg, { 's_pid': context.project_id, 's_uid': context.user_id, 'd_consumed': _consumed('share_groups'), 'd_quota': quotas['share_groups'], }) raise exception.ShareGroupsLimitExceeded() options = { 'share_group_type_id': share_group_type_id, 'source_share_group_snapshot_id': source_share_group_snapshot_id, 'share_network_id': share_network_id, 'share_server_id': share_server_id, 'availability_zone_id': availability_zone_id, 'name': name, 'description': description, 'user_id': context.user_id, 'project_id': context.project_id, 'status': constants.STATUS_CREATING, 'share_types': share_type_ids or supported_share_types } if original_share_group: options['host'] = original_share_group['host'] share_group = {} try: share_group = self.db.share_group_create(context, options) if share_group_snapshot: members = self.db.share_group_snapshot_members_get_all( context, source_share_group_snapshot_id) for member in members: share_instance = self.db.share_instance_get( context, member['share_instance_id']) share_type = share_types.get_share_type( context, share_instance['share_type_id']) self.share_api.create( context, member['share_proto'], member['size'], None, None, share_group_id=share_group['id'], share_group_snapshot_member=member, share_type=share_type, availability_zone=availability_zone_id, share_network_id=share_network_id) except Exception: with excutils.save_and_reraise_exception(): if share_group: self.db.share_group_destroy( context.elevated(), share_group['id']) QUOTAS.rollback(context, reservations) try: QUOTAS.commit(context, reservations) except Exception: with excutils.save_and_reraise_exception(): QUOTAS.rollback(context, reservations) request_spec = {'share_group_id': share_group['id']} request_spec.update(options) request_spec['availability_zones'] = set(stype_azs_of_new_group) request_spec['share_types'] = share_type_objects request_spec['resource_type'] = share_group_type if share_group_snapshot and original_share_group: self.share_rpcapi.create_share_group( context, share_group, original_share_group['host']) else: self.scheduler_rpcapi.create_share_group( context, share_group_id=share_group['id'], request_spec=request_spec, filter_properties={}) return share_group def delete(self, context, share_group): """Delete share group.""" share_group_id = share_group['id'] if not share_group['host']: self.db.share_group_destroy(context.elevated(), share_group_id) return statuses = (constants.STATUS_AVAILABLE, constants.STATUS_ERROR) if not share_group['status'] in statuses: msg = (_("Share group status must be one of %(statuses)s") % {"statuses": statuses}) raise exception.InvalidShareGroup(reason=msg) # NOTE(ameade): check for group_snapshots in the group if self.db.count_share_group_snapshots_in_share_group( context, share_group_id): msg = (_("Cannot delete a share group with snapshots")) raise exception.InvalidShareGroup(reason=msg) # NOTE(ameade): check for shares in the share group if self.db.count_shares_in_share_group(context, share_group_id): msg = (_("Cannot delete a share group with shares")) raise exception.InvalidShareGroup(reason=msg) share_group = self.db.share_group_update( context, share_group_id, {'status': constants.STATUS_DELETING}) try: reservations = QUOTAS.reserve( context, share_groups=-1, project_id=share_group['project_id'], user_id=share_group['user_id'], ) except exception.OverQuota as e: reservations = None LOG.exception( ("Failed to update quota for deleting share group: %s"), e) try: 
self.share_rpcapi.delete_share_group(context, share_group) except Exception: with excutils.save_and_reraise_exception(): QUOTAS.rollback(context, reservations) if reservations: QUOTAS.commit( context, reservations, project_id=share_group['project_id'], user_id=share_group['user_id'], ) def update(self, context, group, fields): return self.db.share_group_update(context, group['id'], fields) def get(self, context, share_group_id): return self.db.share_group_get(context, share_group_id) def get_all(self, context, detailed=True, search_opts=None, sort_key=None, sort_dir=None): if search_opts is None: search_opts = {} LOG.debug("Searching for share_groups by: %s", six.text_type(search_opts)) # Get filtered list of share_groups if search_opts.pop('all_tenants', 0) and context.is_admin: share_groups = self.db.share_group_get_all( context, detailed=detailed, filters=search_opts, sort_key=sort_key, sort_dir=sort_dir) else: share_groups = self.db.share_group_get_all_by_project( context, context.project_id, detailed=detailed, filters=search_opts, sort_key=sort_key, sort_dir=sort_dir) return share_groups def create_share_group_snapshot(self, context, name=None, description=None, share_group_id=None): """Create new share group snapshot.""" options = { 'share_group_id': share_group_id, 'name': name, 'description': description, 'user_id': context.user_id, 'project_id': context.project_id, 'status': constants.STATUS_CREATING, } share_group = self.db.share_group_get(context, share_group_id) # Check status of group, must be active if not share_group['status'] == constants.STATUS_AVAILABLE: msg = (_("Share group status must be %s") % constants.STATUS_AVAILABLE) raise exception.InvalidShareGroup(reason=msg) # Create members for every share in the group shares = self.db.share_get_all_by_share_group_id( context, share_group_id) # Check status of all shares, they must be active in order to snap # the group for s in shares: if not s['status'] == constants.STATUS_AVAILABLE: msg = (_("Share %(s)s in share group must have status " "of %(status)s in order to create a group snapshot") % {"s": s['id'], "status": constants.STATUS_AVAILABLE}) raise exception.InvalidShareGroup(reason=msg) try: reservations = QUOTAS.reserve(context, share_group_snapshots=1) except exception.OverQuota as e: overs = e.kwargs['overs'] usages = e.kwargs['usages'] quotas = e.kwargs['quotas'] def _consumed(name): return (usages[name]['reserved'] + usages[name]['in_use']) if 'share_group_snapshots' in overs: msg = ("Quota exceeded for '%(s_uid)s' user in '%(s_pid)s' " "project. 
(%(d_consumed)d of " "%(d_quota)d already consumed).") LOG.warning(msg, { 's_pid': context.project_id, 's_uid': context.user_id, 'd_consumed': _consumed('share_group_snapshots'), 'd_quota': quotas['share_group_snapshots'], }) raise exception.ShareGroupSnapshotsLimitExceeded() snap = {} try: snap = self.db.share_group_snapshot_create(context, options) members = [] for s in shares: member_options = { 'share_group_snapshot_id': snap['id'], 'user_id': context.user_id, 'project_id': context.project_id, 'status': constants.STATUS_CREATING, 'size': s['size'], 'share_proto': s['share_proto'], 'share_instance_id': s.instance['id'] } member = self.db.share_group_snapshot_member_create( context, member_options) members.append(member) # Cast to share manager self.share_rpcapi.create_share_group_snapshot( context, snap, share_group['host']) except Exception: with excutils.save_and_reraise_exception(): # This will delete the snapshot and all of it's members if snap: self.db.share_group_snapshot_destroy(context, snap['id']) QUOTAS.rollback(context, reservations) try: QUOTAS.commit(context, reservations) except Exception: with excutils.save_and_reraise_exception(): QUOTAS.rollback(context, reservations) return snap def delete_share_group_snapshot(self, context, snap): """Delete share group snapshot.""" snap_id = snap['id'] statuses = (constants.STATUS_AVAILABLE, constants.STATUS_ERROR) share_group = self.db.share_group_get(context, snap['share_group_id']) if not snap['status'] in statuses: msg = (_("Share group snapshot status must be one of" " %(statuses)s") % {"statuses": statuses}) raise exception.InvalidShareGroupSnapshot(reason=msg) self.db.share_group_snapshot_update( context, snap_id, {'status': constants.STATUS_DELETING}) try: reservations = QUOTAS.reserve( context, share_group_snapshots=-1, project_id=snap['project_id'], user_id=snap['user_id'], ) except exception.OverQuota as e: reservations = None LOG.exception( ("Failed to update quota for deleting share group snapshot: " "%s"), e) # Cast to share manager self.share_rpcapi.delete_share_group_snapshot( context, snap, share_group['host']) if reservations: QUOTAS.commit( context, reservations, project_id=snap['project_id'], user_id=snap['user_id'], ) def update_share_group_snapshot(self, context, share_group_snapshot, fields): return self.db.share_group_snapshot_update( context, share_group_snapshot['id'], fields) def get_share_group_snapshot(self, context, snapshot_id): return self.db.share_group_snapshot_get(context, snapshot_id) def get_all_share_group_snapshots(self, context, detailed=True, search_opts=None, sort_key=None, sort_dir=None): if search_opts is None: search_opts = {} LOG.debug("Searching for share group snapshots by: %s", six.text_type(search_opts)) # Get filtered list of share group snapshots if search_opts.pop('all_tenants', 0) and context.is_admin: share_group_snapshots = self.db.share_group_snapshot_get_all( context, detailed=detailed, filters=search_opts, sort_key=sort_key, sort_dir=sort_dir) else: share_group_snapshots = ( self.db.share_group_snapshot_get_all_by_project( context, context.project_id, detailed=detailed, filters=search_opts, sort_key=sort_key, sort_dir=sort_dir, ) ) return share_group_snapshots def get_all_share_group_snapshot_members(self, context, share_group_snapshot_id): members = self.db.share_group_snapshot_members_get_all( context, share_group_snapshot_id) return members manila-10.0.0/manila/share_group/share_group_types.py0000664000175000017500000001320413656750227022763 0ustar zuulzuul00000000000000# 
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from oslo_db import exception as db_exception
from oslo_log import log
from oslo_utils import uuidutils

from manila.common import constants
from manila import context
from manila import db
from manila import exception
from manila.i18n import _

CONF = cfg.CONF
LOG = log.getLogger(__name__)


def create(context, name, share_types, group_specs=None, is_public=True,
           projects=None):
    """Creates share group types."""
    group_specs = group_specs or {}
    projects = projects or []
    try:
        type_ref = db.share_group_type_create(
            context,
            {"name": name,
             "group_specs": group_specs,
             "is_public": is_public,
             "share_types": share_types},
            projects=projects)
    except db_exception.DBError:
        LOG.exception('DB error')
        raise exception.ShareGroupTypeCreateFailed(
            name=name, group_specs=group_specs)
    return type_ref


def destroy(context, type_id):
    """Marks share group types as deleted."""
    if type_id is None:
        msg = _("Share group type ID cannot be None.")
        raise exception.InvalidShareGroupType(reason=msg)
    else:
        db.share_group_type_destroy(context, type_id)


def get_all(context, inactive=0, search_opts=None):
    """Get all non-deleted share group types."""
    search_opts = search_opts or {}
    filters = {}

    if 'is_public' in search_opts:
        filters['is_public'] = search_opts.pop('is_public')

    share_group_types = db.share_group_type_get_all(
        context, inactive, filters=filters)

    if search_opts:
        LOG.debug("Searching by: %s", search_opts)

        def _check_group_specs_match(share_group_type, searchdict):
            for k, v in searchdict.items():
                if (k not in share_group_type['group_specs'].keys() or
                        share_group_type['group_specs'][k] != v):
                    return False
            return True

        # search_option to filter_name mapping.
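        # Illustrative aside (not upstream code): a hedged sketch of what the
        # nested matcher above does before the mapping below is applied.  The
        # type names and group-spec key are invented for illustration, and
        # the sketch function is never called.
        def _group_specs_filter_sketch():
            fake_types = {
                'gold': {'group_specs':
                         {'consistent_snapshot_support': 'pool'}},
                'bronze': {'group_specs': {}},
            }
            wanted = {'consistent_snapshot_support': 'pool'}
            # Only 'gold' survives, because its specs contain every requested
            # key with a matching value.
            return {name: gtype for name, gtype in fake_types.items()
                    if _check_group_specs_match(gtype, wanted)}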
filter_mapping = {'group_specs': _check_group_specs_match} result = {} for type_name, type_args in share_group_types.items(): # go over all filters in the list for opt, values in search_opts.items(): try: filter_func = filter_mapping[opt] except KeyError: # no such filter - ignore it, go to next filter continue else: if filter_func(type_args, values): result[type_name] = type_args break share_group_types = result return share_group_types def get(ctxt, type_id, expected_fields=None): """Retrieves single share group type by id.""" if type_id is None: msg = _("Share type ID cannot be None.") raise exception.InvalidShareGroupType(reason=msg) if ctxt is None: ctxt = context.get_admin_context() return db.share_group_type_get( ctxt, type_id, expected_fields=expected_fields) def get_by_name(context, name): """Retrieves single share group type by name.""" if name is None: msg = _("name cannot be None.") raise exception.InvalidShareGroupType(reason=msg) return db.share_group_type_get_by_name(context, name) def get_by_name_or_id(context, share_group_type=None): if not share_group_type: share_group_type_ref = get_default(context) if not share_group_type_ref: msg = _("Default share group type not found.") raise exception.ShareGroupTypeNotFound(msg) return share_group_type_ref if uuidutils.is_uuid_like(share_group_type): return get(context, share_group_type) else: return get_by_name(context, share_group_type) def get_default(ctxt=None): """Get the default share group type.""" name = CONF.default_share_group_type if name is None: return {} if ctxt is None: ctxt = context.get_admin_context() try: return get_by_name(ctxt, name) except exception.ShareGroupTypeNotFoundByName: LOG.exception( "Default share group type '%s' is not found, " "please check 'default_share_group_type' config.", name, ) def get_tenant_visible_group_specs(): return constants.ExtraSpecs.TENANT_VISIBLE def get_boolean_group_specs(): return constants.ExtraSpecs.BOOLEAN def add_share_group_type_access(context, share_group_type_id, project_id): """Add access to share group type for project_id.""" if share_group_type_id is None: msg = _("share_group_type_id cannot be None.") raise exception.InvalidShareGroupType(reason=msg) return db.share_group_type_access_add( context, share_group_type_id, project_id) def remove_share_group_type_access(context, share_group_type_id, project_id): """Remove access to share group type for project_id.""" if share_group_type_id is None: msg = _("share_group_type_id cannot be None.") raise exception.InvalidShareGroupType(reason=msg) return db.share_group_type_access_remove( context, share_group_type_id, project_id) manila-10.0.0/manila/manager.py0000664000175000017500000001310613656750227016316 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base Manager class. Managers are responsible for a certain aspect of the system. 
It is a logical grouping of code relating to a portion of the system. In general other components should be using the manager to make changes to the components that it is responsible for. For example, other components that need to deal with volumes in some way, should do so by calling methods on the VolumeManager instead of directly changing fields in the database. This allows us to keep all of the code relating to volumes in the same place. We have adopted a basic strategy of Smart managers and dumb data, which means rather than attaching methods to data objects, components should call manager methods that act on the data. Methods on managers that can be executed locally should be called directly. If a particular method must execute on a remote host, this should be done via rpc to the service that wraps the manager Managers should be responsible for most of the db access, and non-implementation specific data. Anything implementation specific that can't be generalized should be done by the Driver. In general, we prefer to have one manager with multiple drivers for different implementations, but sometimes it makes sense to have multiple managers. You can think of it this way: Abstract different overall strategies at the manager level(FlatNetwork vs VlanNetwork), and different implementations at the driver level(LinuxNetDriver vs CiscoNetDriver). Managers will often provide methods for initial setup of a host or periodic tasks to a wrapping service. This module provides Manager, a base class for managers. """ from oslo_config import cfg from oslo_log import log from oslo_service import periodic_task from manila.db import base from manila.scheduler import rpcapi as scheduler_rpcapi from manila import version CONF = cfg.CONF LOG = log.getLogger(__name__) class PeriodicTasks(periodic_task.PeriodicTasks): def __init__(self): super(PeriodicTasks, self).__init__(CONF) class Manager(base.Base, PeriodicTasks): @property def RPC_API_VERSION(self): """Redefine this in child classes.""" raise NotImplementedError @property def target(self): """This property is used by oslo_messaging. https://wiki.openstack.org/wiki/Oslo/Messaging#API_Version_Negotiation """ if not hasattr(self, '_target'): import oslo_messaging as messaging self._target = messaging.Target(version=self.RPC_API_VERSION) return self._target def __init__(self, host=None, db_driver=None): if not host: host = CONF.host self.host = host self.additional_endpoints = [] self.availability_zone = CONF.storage_availability_zone super(Manager, self).__init__(db_driver) def periodic_tasks(self, context, raise_on_error=False): """Tasks to be run at a periodic interval.""" return self.run_periodic_tasks(context, raise_on_error=raise_on_error) def init_host(self): """Handle initialization if this is a standalone service. Child classes should override this method. """ pass def service_version(self, context): return version.version_string() def service_config(self, context): config = {} for key in CONF: config[key] = CONF.get(key, None) return config def is_service_ready(self): """Method indicating if service is ready. This method should be overridden by subclasses which will return False when the back end is not ready yet. """ return True class SchedulerDependentManager(Manager): """Periodically send capability updates to the Scheduler services. Services that need to update the Scheduler of their capabilities should derive from this class. Otherwise they can derive from manager.Manager directly. 
Updates are only sent after update_service_capabilities is called with non-None values. """ def __init__(self, host=None, db_driver=None, service_name='undefined'): self.last_capabilities = None self.service_name = service_name self.scheduler_rpcapi = scheduler_rpcapi.SchedulerAPI() super(SchedulerDependentManager, self).__init__(host, db_driver) def update_service_capabilities(self, capabilities): """Remember these capabilities to send on next periodic update.""" self.last_capabilities = capabilities @periodic_task.periodic_task def _publish_service_capabilities(self, context): """Pass data back to the scheduler at a periodic interval.""" if self.last_capabilities: LOG.debug('Notifying Schedulers of capabilities ...') self.scheduler_rpcapi.update_service_capabilities( context, self.service_name, self.host, self.last_capabilities) manila-10.0.0/manila/coordination.py0000664000175000017500000001473713656750227017407 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tooz Coordination and locking utilities.""" import inspect import decorator from oslo_config import cfg from oslo_log import log from oslo_utils import uuidutils import six from tooz import coordination from tooz import locking from manila import exception from manila.i18n import _ LOG = log.getLogger(__name__) coordination_opts = [ cfg.StrOpt('backend_url', default='file://$state_path', help='The back end URL to use for distributed coordination.') ] CONF = cfg.CONF CONF.register_opts(coordination_opts, group='coordination') class Coordinator(object): """Tooz coordination wrapper. Coordination member id is created from concatenated `prefix` and `agent_id` parameters. :param str agent_id: Agent identifier :param str prefix: Used to provide member identifier with a meaningful prefix. """ def __init__(self, agent_id=None, prefix=''): self.coordinator = None self.agent_id = agent_id or uuidutils.generate_uuid() self.started = False self.prefix = prefix def start(self): """Connect to coordination back end.""" if self.started: return # NOTE(gouthamr): Tooz expects member_id as a byte string. member_id = (self.prefix + self.agent_id).encode('ascii') self.coordinator = coordination.get_coordinator( cfg.CONF.coordination.backend_url, member_id) self.coordinator.start(start_heart=True) self.started = True def stop(self): """Disconnect from coordination back end.""" msg = 'Stopped Coordinator (Agent ID: %(agent)s, prefix: %(prefix)s)' msg_args = {'agent': self.agent_id, 'prefix': self.prefix} if self.started: self.coordinator.stop() self.coordinator = None self.started = False LOG.info(msg, msg_args) def get_lock(self, name): """Return a Tooz back end lock. :param str name: The lock name that is used to identify it across all nodes. 
""" # NOTE(gouthamr): Tooz expects lock name as a byte string lock_name = (self.prefix + name).encode('ascii') if self.started: return self.coordinator.get_lock(lock_name) else: raise exception.LockCreationFailed(_('Coordinator uninitialized.')) LOCK_COORDINATOR = Coordinator(prefix='manila-') class Lock(locking.Lock): """Lock with dynamic name. :param str lock_name: Lock name. :param dict lock_data: Data for lock name formatting. :param coordinator: Coordinator object to use when creating lock. Defaults to the global coordinator. Using it like so:: with Lock('mylock'): ... ensures that only one process at a time will execute code in context. Lock name can be formatted using Python format string syntax:: Lock('foo-{share.id}, {'share': ...,}') Available field names are keys of lock_data. """ def __init__(self, lock_name, lock_data=None, coordinator=None): super(Lock, self).__init__(six.text_type(id(self))) lock_data = lock_data or {} self.coordinator = coordinator or LOCK_COORDINATOR self.blocking = True self.lock = self._prepare_lock(lock_name, lock_data) def _prepare_lock(self, lock_name, lock_data): if not isinstance(lock_name, six.string_types): raise ValueError(_('Not a valid string: %s') % lock_name) return self.coordinator.get_lock(lock_name.format(**lock_data)) def acquire(self, blocking=None): """Attempts to acquire lock. :param blocking: If True, blocks until the lock is acquired. If False, returns right away. Otherwise, the value is used as a timeout value and the call returns maximum after this number of seconds. :return: returns true if acquired (false if not) :rtype: bool """ blocking = self.blocking if blocking is None else blocking return self.lock.acquire(blocking=blocking) def release(self): """Attempts to release lock. The behavior of releasing a lock which was not acquired in the first place is undefined. """ self.lock.release() def synchronized(lock_name, blocking=True, coordinator=None): """Synchronization decorator. :param str lock_name: Lock name. :param blocking: If True, blocks until the lock is acquired. If False, raises exception when not acquired. Otherwise, the value is used as a timeout value and if lock is not acquired after this number of seconds exception is raised. :param coordinator: Coordinator object to use when creating lock. Defaults to the global coordinator. :raises tooz.coordination.LockAcquireFailed: if lock is not acquired Decorating a method like so:: @synchronized('mylock') def foo(self, *args): ... ensures that only one process will execute the foo method at a time. Different methods can share the same lock:: @synchronized('mylock') def foo(self, *args): ... @synchronized('mylock') def bar(self, *args): ... This way only one of either foo or bar can be executing at a time. Lock name can be formatted using Python format string syntax:: @synchronized('{f_name}-{shr.id}-{snap[name]}') def foo(self, shr, snap): ... Available field names are: decorated function parameters and `f_name` as a decorated function name. 
""" @decorator.decorator def _synchronized(f, *a, **k): call_args = inspect.getcallargs(f, *a, **k) call_args['f_name'] = f.__name__ lock = Lock(lock_name, call_args, coordinator) with lock(blocking): LOG.debug('Lock "%(name)s" acquired by "%(function)s".', {'name': lock_name, 'function': f.__name__}) return f(*a, **k) return _synchronized manila-10.0.0/manila/api/0000775000175000017500000000000013656750362015102 5ustar zuulzuul00000000000000manila-10.0.0/manila/api/__init__.py0000664000175000017500000000156513656750227017222 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import paste.urlmap def root_app_factory(loader, global_conf, **local_conf): return paste.urlmap.urlmap_factory(loader, global_conf, **local_conf) manila-10.0.0/manila/api/common.py0000664000175000017500000004361513656750227016755 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ipaddress import os import re import six import string from operator import xor from oslo_config import cfg from oslo_log import log from oslo_utils import encodeutils from oslo_utils import strutils from six.moves.urllib import parse import webob from manila.api.openstack import api_version_request as api_version from manila.api.openstack import versioned_method from manila.common import constants from manila import exception from manila.i18n import _ from manila import policy api_common_opts = [ cfg.IntOpt( 'osapi_max_limit', default=1000, help='The maximum number of items returned in a single response from ' 'a collection resource.'), cfg.StrOpt( 'osapi_share_base_URL', help='Base URL to be presented to users in links to the Share API'), ] CONF = cfg.CONF CONF.register_opts(api_common_opts) LOG = log.getLogger(__name__) # Regex that matches alphanumeric characters, periods, hypens, # colons and underscores: # ^ assert position at start of the string # [\w\.\-\:\_] match expression # $ assert position at end of the string VALID_KEY_NAME_REGEX = re.compile(r"^[\w\.\-\:\_]+$", re.UNICODE) def validate_key_names(key_names_list): """Validate each item of the list to match key name regex.""" for key_name in key_names_list: if not VALID_KEY_NAME_REGEX.match(key_name): return False return True def get_pagination_params(request): """Return marker, limit, offset tuple from request. 
:param request: `wsgi.Request` possibly containing 'marker' and 'limit' GET variables. 'marker' is the id of the last element the client has seen, and 'limit' is the maximum number of items to return. If 'limit' is not specified, 0, or > max_limit, we default to max_limit. Negative values for either marker or limit will cause exc.HTTPBadRequest() exceptions to be raised. """ params = {} if 'limit' in request.GET: params['limit'] = _get_limit_param(request) if 'marker' in request.GET: params['marker'] = _get_marker_param(request) if 'offset' in request.GET: params['offset'] = _get_offset_param(request) return params def _get_limit_param(request): """Extract integer limit from request or fail. Defaults to max_limit if not present and returns max_limit if present 'limit' is greater than max_limit. """ max_limit = CONF.osapi_max_limit try: limit = int(request.GET['limit']) except ValueError: msg = _('limit param must be an integer') raise webob.exc.HTTPBadRequest(explanation=msg) if limit < 0: msg = _('limit param must be positive') raise webob.exc.HTTPBadRequest(explanation=msg) limit = min(limit, max_limit) return limit def _get_marker_param(request): """Extract marker ID from request or fail.""" return request.GET['marker'] def _get_offset_param(request): """Extract offset id from request's dictionary (defaults to 0) or fail.""" offset = request.GET['offset'] return _validate_integer(offset, 'offset', 0, constants.DB_MAX_INT) def _validate_integer(value, name, min_value=None, max_value=None): """Make sure that value is a valid integer, potentially within range. :param value: the value of the integer :param name: the name of the integer :param min_value: the min_length of the integer :param max_value: the max_length of the integer :return: integer """ try: value = strutils.validate_integer(value, name, min_value, max_value) return value except ValueError as e: raise webob.exc.HTTPBadRequest(explanation=e) def _validate_pagination_query(request, max_limit=CONF.osapi_max_limit): """Validate the given request query and return limit and offset.""" try: offset = int(request.GET.get('offset', 0)) except ValueError: msg = _('offset param must be an integer') raise webob.exc.HTTPBadRequest(explanation=msg) try: limit = int(request.GET.get('limit', max_limit)) except ValueError: msg = _('limit param must be an integer') raise webob.exc.HTTPBadRequest(explanation=msg) if limit < 0: msg = _('limit param must be positive') raise webob.exc.HTTPBadRequest(explanation=msg) if offset < 0: msg = _('offset param must be positive') raise webob.exc.HTTPBadRequest(explanation=msg) return limit, offset def limited(items, request, max_limit=CONF.osapi_max_limit): """Return a slice of items according to requested offset and limit. :param items: A sliceable entity :param request: ``wsgi.Request`` possibly containing 'offset' and 'limit' GET variables. 'offset' is where to start in the list, and 'limit' is the maximum number of items to return. If 'limit' is not specified, 0, or > max_limit, we default to max_limit. Negative values for either offset or limit will cause exc.HTTPBadRequest() exceptions to be raised. :kwarg max_limit: The maximum number of items to return from 'items' """ limit, offset = _validate_pagination_query(request, max_limit) limit = min(max_limit, limit or max_limit) range_end = offset + limit return items[offset:range_end] def get_sort_params(params, default_key='created_at', default_dir='desc'): """Retrieves sort key/direction parameters. 
Processes the parameters to get the 'sort_key' and 'sort_dir' parameter values. :param params: webob.multidict of request parameters (from manila.api.openstack.wsgi.Request.params) :param default_key: default sort key value, will return if no sort key are supplied :param default_dir: default sort dir value, will return if no sort dir are supplied :returns: value of sort key, value of sort dir """ sort_key = params.pop('sort_key', default_key) sort_dir = params.pop('sort_dir', default_dir) return sort_key, sort_dir def remove_version_from_href(href): """Removes the first api version from the href. Given: 'http://manila.example.com/v1.1/123' Returns: 'http://manila.example.com/123' Given: 'http://www.manila.com/v1.1' Returns: 'http://www.manila.com' Given: 'http://manila.example.com/share/v1.1/123' Returns: 'http://manila.example.com/share/123' """ parsed_url = parse.urlsplit(href) url_parts = parsed_url.path.split('/') # NOTE: this should match vX.X or vX expression = re.compile(r'^v([0-9]+|[0-9]+\.[0-9]+)(/.*|$)') for x in range(len(url_parts)): if expression.match(url_parts[x]): del url_parts[x] break new_path = '/'.join(url_parts) if new_path == parsed_url.path: msg = 'href %s does not contain version' % href LOG.debug(msg) raise ValueError(msg) parsed_url = list(parsed_url) parsed_url[2] = new_path return parse.urlunsplit(parsed_url) def dict_to_query_str(params): # TODO(throughnothing): we should just use urllib.urlencode instead of this # But currently we don't work with urlencoded url's param_str = "" for key, val in params.items(): param_str = param_str + '='.join([str(key), str(val)]) + '&' return param_str.rstrip('&') def check_net_id_and_subnet_id(body): if xor('neutron_net_id' in body, 'neutron_subnet_id' in body): msg = _("When creating a new share network subnet you need to " "specify both neutron_net_id and neutron_subnet_id or " "none of them.") raise webob.exc.HTTPBadRequest(explanation=msg) class ViewBuilder(object): """Model API responses as dictionaries.""" _collection_name = None _detail_version_modifiers = [] def _get_links(self, request, identifier): return [{"rel": "self", "href": self._get_href_link(request, identifier), }, {"rel": "bookmark", "href": self._get_bookmark_link(request, identifier), }] def _get_next_link(self, request, identifier): """Return href string with proper limit and marker params.""" params = request.params.copy() params["marker"] = identifier prefix = self._update_link_prefix(request.application_url, CONF.osapi_share_base_URL) url = os.path.join(prefix, request.environ["manila.context"].project_id, self._collection_name) return "%s?%s" % (url, dict_to_query_str(params)) def _get_href_link(self, request, identifier): """Return an href string pointing to this object.""" prefix = self._update_link_prefix(request.application_url, CONF.osapi_share_base_URL) return os.path.join(prefix, request.environ["manila.context"].project_id, self._collection_name, str(identifier)) def _get_bookmark_link(self, request, identifier): """Create a URL that refers to a specific resource.""" base_url = remove_version_from_href(request.application_url) base_url = self._update_link_prefix(base_url, CONF.osapi_share_base_URL) return os.path.join(base_url, request.environ["manila.context"].project_id, self._collection_name, str(identifier)) def _get_collection_links(self, request, items, id_key="uuid"): """Retrieve 'next' link, if applicable.""" links = [] limit = int(request.params.get("limit", 0)) if limit and limit == len(items): last_item = items[-1] if id_key in 
last_item: last_item_id = last_item[id_key] else: last_item_id = last_item["id"] links.append({ "rel": "next", "href": self._get_next_link(request, last_item_id), }) return links def _update_link_prefix(self, orig_url, prefix): if not prefix: return orig_url url_parts = list(parse.urlsplit(orig_url)) prefix_parts = list(parse.urlsplit(prefix)) url_parts[0:2] = prefix_parts[0:2] return parse.urlunsplit(url_parts) def update_versioned_resource_dict(self, request, resource_dict, resource): """Updates the given resource dict for the given request version. This method calls every method, that is applicable to the request version, in _detail_version_modifiers. """ for method_name in self._detail_version_modifiers: method = getattr(self, method_name) if request.api_version_request.matches_versioned_method(method): request_context = request.environ['manila.context'] method.func(self, request_context, resource_dict, resource) @classmethod def versioned_method(cls, min_ver, max_ver=None, experimental=False): """Decorator for versioning API methods. :param min_ver: string representing minimum version :param max_ver: optional string representing maximum version :param experimental: flag indicating an API is experimental and is subject to change or removal at any time """ def decorator(f): obj_min_ver = api_version.APIVersionRequest(min_ver) if max_ver: obj_max_ver = api_version.APIVersionRequest(max_ver) else: obj_max_ver = api_version.APIVersionRequest() # Add to list of versioned methods registered func_name = f.__name__ new_func = versioned_method.VersionedMethod( func_name, obj_min_ver, obj_max_ver, experimental, f) return new_func return decorator def remove_invalid_options(context, search_options, allowed_search_options): """Remove search options that are not valid for non-admin API/context.""" if context.is_admin: # Allow all options return # Otherwise, strip out all unknown options unknown_options = [opt for opt in search_options if opt not in allowed_search_options] bad_options = ", ".join(unknown_options) LOG.debug("Removing options '%(bad_options)s' from query", {"bad_options": bad_options}) for opt in unknown_options: del search_options[opt] def validate_common_name(access): """Validate common name passed by user. 'access' is used as the certificate's CN (common name) to which access is allowed or denied by the backend. The standard allows for just about any string in the common name. The meaning of a string depends on its interpretation and is limited to 64 characters. """ if not(0 < len(access) < 65): exc_str = _('Invalid CN (common name). Must be 1-64 chars long.') raise webob.exc.HTTPBadRequest(explanation=exc_str) ''' for the reference specification for AD usernames, reference below links: 1:https://msdn.microsoft.com/en-us/library/bb726984.aspx 2:https://technet.microsoft.com/en-us/library/cc733146.aspx ''' def validate_username(access): sole_periods_spaces_re = r'[\s|\.]+$' valid_username_re = r'.[^\"\/\\\[\]\:\;\|\=\,\+\*\?\<\>]{3,254}$' username = access if re.match(sole_periods_spaces_re, username): exc_str = ('Invalid user or group name,cannot consist solely ' 'of periods or spaces.') raise webob.exc.HTTPBadRequest(explanation=exc_str) if not re.match(valid_username_re, username): exc_str = ('Invalid user or group name. 
Must be 4-255 characters ' 'and consist of alphanumeric characters and ' 'exclude special characters "/\\[]:;|=,+*?<>') raise webob.exc.HTTPBadRequest(explanation=exc_str) def validate_cephx_id(cephx_id): if not cephx_id: raise webob.exc.HTTPBadRequest(explanation=_( 'Ceph IDs may not be empty.')) # This restriction may be lifted in Ceph in the future: # http://tracker.ceph.com/issues/14626 if not set(cephx_id) <= set(string.printable): raise webob.exc.HTTPBadRequest(explanation=_( 'Ceph IDs must consist of ASCII printable characters.')) # Periods are technically permitted, but we restrict them here # to avoid confusion where users are unsure whether they should # include the "client." prefix: otherwise they could accidentally # create "client.client.foobar". if '.' in cephx_id: raise webob.exc.HTTPBadRequest(explanation=_( 'Ceph IDs may not contain periods.')) def validate_ip(access_to, enable_ipv6): try: if enable_ipv6: validator = ipaddress.ip_network else: validator = ipaddress.IPv4Network validator(six.text_type(access_to)) except ValueError as error: err_msg = encodeutils.exception_to_unicode(error) raise webob.exc.HTTPBadRequest(explanation=err_msg) def validate_access(*args, **kwargs): access_type = kwargs.get('access_type') access_to = kwargs.get('access_to') enable_ceph = kwargs.get('enable_ceph') enable_ipv6 = kwargs.get('enable_ipv6') if access_type == 'ip': validate_ip(access_to, enable_ipv6) elif access_type == 'user': validate_username(access_to) elif access_type == 'cert': validate_common_name(access_to.strip()) elif access_type == "cephx" and enable_ceph: validate_cephx_id(access_to) else: if enable_ceph: exc_str = _("Only 'ip', 'user', 'cert' or 'cephx' access " "types are supported.") else: exc_str = _("Only 'ip', 'user' or 'cert' access types " "are supported.") raise webob.exc.HTTPBadRequest(explanation=exc_str) def validate_public_share_policy(context, api_params, api='create'): """Validates if policy allows is_public parameter to be set to True. :arg api_params - A dictionary of values that may contain 'is_public' :returns api_params with 'is_public' item sanitized if present :raises exception.InvalidParameterValue if is_public is set but is Invalid exception.NotAuthorized if is_public is True but policy prevents it """ if 'is_public' not in api_params: return api_params policies = { 'create': 'create_public_share', 'update': 'set_public_share', } policy_to_check = policies[api] try: api_params['is_public'] = strutils.bool_from_string( api_params['is_public'], strict=True) except ValueError as e: raise exception.InvalidParameterValue(six.text_type(e)) public_shares_allowed = policy.check_policy( context, 'share', policy_to_check, do_raise=False) if api_params['is_public'] and not public_shares_allowed: message = _("User is not authorized to set 'is_public' to True in the " "request.") raise exception.NotAuthorized(message=message) return api_params manila-10.0.0/manila/api/contrib/0000775000175000017500000000000013656750362016542 5ustar zuulzuul00000000000000manila-10.0.0/manila/api/contrib/__init__.py0000664000175000017500000000226413656750227020657 0ustar zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Contrib contains extensions that are shipped with manila. It can't be called 'extensions' because that causes namespacing problems. """ from oslo_config import cfg from oslo_log import log from manila.api import extensions CONF = cfg.CONF LOG = log.getLogger(__name__) def standard_extensions(ext_mgr): extensions.load_standard_extensions(ext_mgr, LOG, __path__, __package__) def select_extensions(ext_mgr): extensions.load_standard_extensions(ext_mgr, LOG, __path__, __package__, CONF.osapi_share_ext_list) manila-10.0.0/manila/api/v1/0000775000175000017500000000000013656750362015430 5ustar zuulzuul00000000000000manila-10.0.0/manila/api/v1/__init__.py0000664000175000017500000000000013656750227017527 0ustar zuulzuul00000000000000manila-10.0.0/manila/api/v1/share_manage.py0000664000175000017500000001253213656750227020417 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from manila.api import common from manila.api.openstack import wsgi from manila.api.views import shares as share_views from manila import exception from manila.i18n import _ from manila import share from manila.share import share_types from manila.share import utils as share_utils from manila import utils class ShareManageMixin(object): @wsgi.Controller.authorize('manage') def _manage(self, req, body, allow_dhss_true=False): context = req.environ['manila.context'] share_data = self._validate_manage_parameters(context, body) share_data = common.validate_public_share_policy(context, share_data) # NOTE(vponomaryov): compatibility actions are required between API and # DB layers for 'name' and 'description' API params that are # represented in DB as 'display_name' and 'display_description' # appropriately. 
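        # For example (hypothetical values), a manage request carrying
        # {'name': 'accounting', 'description': 'managed NFS share'} is stored
        # with display_name='accounting' and display_description='managed NFS
        # share'; when both spellings are supplied, the 'display_*' variant
        # wins, as the two lookups below show.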
name = share_data.get('display_name', share_data.get('name')) description = share_data.get( 'display_description', share_data.get('description')) share = { 'host': share_data['service_host'], 'export_location': share_data['export_path'], 'share_proto': share_data['protocol'].upper(), 'share_type_id': share_data['share_type_id'], 'display_name': name, 'display_description': description, } if share_data.get('is_public') is not None: share['is_public'] = share_data['is_public'] driver_options = share_data.get('driver_options', {}) if allow_dhss_true: share['share_server_id'] = share_data.get('share_server_id') try: share_ref = self.share_api.manage(context, share, driver_options) except exception.PolicyNotAuthorized as e: raise exc.HTTPForbidden(explanation=e) except (exception.InvalidShare, exception.InvalidShareServer) as e: raise exc.HTTPConflict(explanation=e) except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e) return self._view_builder.detail(req, share_ref) def _validate_manage_parameters(self, context, body): if not (body and self.is_valid_body(body, 'share')): msg = _("Share entity not found in request body") raise exc.HTTPUnprocessableEntity(explanation=msg) required_parameters = ('export_path', 'service_host', 'protocol') data = body['share'] for parameter in required_parameters: if parameter not in data: msg = _("Required parameter %s not found") % parameter raise exc.HTTPUnprocessableEntity(explanation=msg) if not data.get(parameter): msg = _("Required parameter %s is empty") % parameter raise exc.HTTPUnprocessableEntity(explanation=msg) if not share_utils.extract_host(data['service_host'], 'pool'): msg = _("service_host parameter should contain pool.") raise exc.HTTPBadRequest(explanation=msg) try: utils.validate_service_host( context, share_utils.extract_host(data['service_host'])) except exception.ServiceNotFound as e: raise exc.HTTPNotFound(explanation=e) except exception.PolicyNotAuthorized as e: raise exc.HTTPForbidden(explanation=e) except exception.AdminRequired as e: raise exc.HTTPForbidden(explanation=e) except exception.ServiceIsDown as e: raise exc.HTTPBadRequest(explanation=e) data['share_type_id'] = self._get_share_type_id( context, data.get('share_type')) return data @staticmethod def _get_share_type_id(context, share_type): try: stype = share_types.get_share_type_by_name_or_id(context, share_type) return stype['id'] except exception.ShareTypeNotFound as e: raise exc.HTTPNotFound(explanation=e) class ShareManageController(ShareManageMixin, wsgi.Controller): """Allows existing share to be 'managed' by Manila.""" resource_name = "share" _view_builder_class = share_views.ViewBuilder def __init__(self, *args, **kwargs): super(ShareManageController, self).__init__(*args, **kwargs) self.share_api = share.API() @wsgi.Controller.api_version('1.0', '2.6') def create(self, req, body): """Legacy method for 'manage share' operation. Should be removed when minimum API version becomes equal to or greater than v2.7 """ body.get('share', {}).pop('is_public', None) return self._manage(req, body) def create_resource(): return wsgi.Resource(ShareManageController()) manila-10.0.0/manila/api/v1/share_unmanage.py0000664000175000017500000000656013656750227020766 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from six.moves import http_client import webob from webob import exc from manila.api.openstack import wsgi from manila.common import constants from manila import exception from manila.i18n import _ from manila import share LOG = log.getLogger(__name__) class ShareUnmanageMixin(object): @wsgi.Controller.authorize("unmanage") def _unmanage(self, req, id, body=None, allow_dhss_true=False): """Unmanage a share.""" context = req.environ['manila.context'] LOG.info("Unmanage share with id: %s", id, context=context) try: share = self.share_api.get(context, id) if share.get('has_replicas'): msg = _("Share %s has replicas. It cannot be unmanaged " "until all replicas are removed.") % share['id'] raise exc.HTTPConflict(explanation=msg) if (not allow_dhss_true and share['instance'].get('share_server_id')): msg = _("Operation 'unmanage' is not supported for shares " "that are created on top of share servers " "(created with share-networks).") raise exc.HTTPForbidden(explanation=msg) elif share['status'] in constants.TRANSITIONAL_STATUSES: msg = _("Share with transitional state can not be unmanaged. " "Share '%(s_id)s' is in '%(state)s' state.") % dict( state=share['status'], s_id=share['id']) raise exc.HTTPForbidden(explanation=msg) snapshots = self.share_api.db.share_snapshot_get_all_for_share( context, id) if snapshots: msg = _("Share '%(s_id)s' can not be unmanaged because it has " "'%(amount)s' dependent snapshot(s).") % { 's_id': id, 'amount': len(snapshots)} raise exc.HTTPForbidden(explanation=msg) self.share_api.unmanage(context, share) except exception.NotFound as e: raise exc.HTTPNotFound(explanation=e) except (exception.InvalidShare, exception.PolicyNotAuthorized) as e: raise exc.HTTPForbidden(explanation=e) return webob.Response(status_int=http_client.ACCEPTED) class ShareUnmanageController(ShareUnmanageMixin, wsgi.Controller): """The Unmanage API controller for the OpenStack API.""" resource_name = "share" def __init__(self, *args, **kwargs): super(ShareUnmanageController, self).__init__(*args, **kwargs) self.share_api = share.API() @wsgi.Controller.api_version('1.0', '2.6') def unmanage(self, req, id): return self._unmanage(req, id) def create_resource(): return wsgi.Resource(ShareUnmanageController()) manila-10.0.0/manila/api/v1/share_servers.py0000664000175000017500000001352113656750227020657 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
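# Minimal sketch, not used by this module: it summarizes the preconditions
# enforced by ShareUnmanageMixin._unmanage() in share_unmanage.py above. The
# function name is hypothetical and plain dicts stand in for the API objects;
# 'transitional_statuses' corresponds to constants.TRANSITIONAL_STATUSES.
def _can_unmanage_sketch(share, snapshots, transitional_statuses,
                         allow_dhss_true=False):
    if share.get('has_replicas'):
        return False  # all replicas must be removed first
    if (not allow_dhss_true and
            share['instance'].get('share_server_id')):
        return False  # share lives on top of a share server
    if share['status'] in transitional_statuses:
        return False  # share is in a transitional state
    if snapshots:
        return False  # dependent snapshots block unmanage
    return True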
from oslo_log import log from six.moves import http_client import webob from webob import exc from manila.api.openstack import wsgi from manila.api.views import share_servers as share_servers_views from manila.common import constants from manila.db import api as db_api from manila import exception from manila.i18n import _ from manila import share LOG = log.getLogger(__name__) class ShareServerController(wsgi.Controller): """The Share Server API controller for the OpenStack API.""" _view_builder_class = share_servers_views.ViewBuilder resource_name = 'share_server' def __init__(self): self.share_api = share.API() super(ShareServerController, self).__init__() @wsgi.Controller.authorize def index(self, req): """Returns a list of share servers.""" context = req.environ['manila.context'] search_opts = {} search_opts.update(req.GET) share_servers = db_api.share_server_get_all(context) for s in share_servers: try: s.share_network_id = s.share_network_subnet['share_network_id'] share_network = db_api.share_network_get( context, s.share_network_id) s.project_id = share_network['project_id'] if share_network['name']: s.share_network_name = share_network['name'] else: s.share_network_name = share_network['id'] except exception.ShareNetworkNotFound: # NOTE(dviroel): The share-network may already be deleted while # the share-server is in 'deleting' state. In this scenario, # we will return some empty values. LOG.debug("Unable to retrieve share network details for share " "server %(server)s, the network %(network)s was " "not found.", {'server': s.id, 'network': s.share_network_id}) s.project_id = '' s.share_network_name = '' if search_opts: for k, v in search_opts.items(): share_servers = [s for s in share_servers if (hasattr(s, k) and s[k] == v or k == 'share_network' and v in [s.share_network_name, s.share_network_id])] return self._view_builder.build_share_servers(req, share_servers) @wsgi.Controller.authorize def show(self, req, id): """Return data about the requested share server.""" context = req.environ['manila.context'] try: server = db_api.share_server_get(context, id) share_network = db_api.share_network_get( context, server.share_network_subnet['share_network_id']) server.share_network_id = share_network['id'] server.project_id = share_network['project_id'] if share_network['name']: server.share_network_name = share_network['name'] else: server.share_network_name = share_network['id'] except exception.ShareServerNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) except exception.ShareNetworkNotFound: msg = _("Share server %s could not be found. 
Its associated " "share network does not " "exist.") % server.share_network_subnet['share_network_id'] raise exc.HTTPNotFound(explanation=msg) return self._view_builder.build_share_server(req, server) @wsgi.Controller.authorize def details(self, req, id): """Return details for requested share server.""" context = req.environ['manila.context'] try: share_server = db_api.share_server_get(context, id) except exception.ShareServerNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) return self._view_builder.build_share_server_details( share_server['backend_details']) @wsgi.Controller.authorize def delete(self, req, id): """Delete specified share server.""" context = req.environ['manila.context'] try: share_server = db_api.share_server_get(context, id) except exception.ShareServerNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) allowed_statuses = [constants.STATUS_ERROR, constants.STATUS_ACTIVE] if share_server['status'] not in allowed_statuses: data = { 'status': share_server['status'], 'allowed_statuses': allowed_statuses, } msg = _("Share server's actual status is %(status)s, allowed " "statuses for deletion are %(allowed_statuses)s.") % (data) raise exc.HTTPForbidden(explanation=msg) LOG.debug("Deleting share server with id: %s.", id) try: self.share_api.delete_share_server(context, share_server) except exception.ShareServerInUse as e: raise exc.HTTPConflict(explanation=e.msg) return webob.Response(status_int=http_client.ACCEPTED) def create_resource(): return wsgi.Resource(ShareServerController()) manila-10.0.0/manila/api/v1/share_metadata.py0000664000175000017500000001264313656750227020752 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
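# Minimal sketch, not used by this module: it restates the filtering rule
# applied by ShareServerController.index() in share_servers.py above. Every
# query parameter must match the corresponding share-server attribute, except
# 'share_network', which matches either the network's name or its id. The
# function name is hypothetical and plain dicts stand in for the DB objects.
def _filter_share_servers_sketch(share_servers, search_opts):
    for k, v in search_opts.items():
        share_servers = [
            s for s in share_servers
            if (k in s and s[k] == v) or
               (k == 'share_network' and
                v in (s.get('share_network_name'), s.get('share_network_id')))
        ]
    return share_servers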
from six.moves import http_client import webob from webob import exc from manila.api.openstack import wsgi from manila import exception from manila.i18n import _ from manila import share class ShareMetadataController(object): """The share metadata API controller for the OpenStack API.""" def __init__(self): self.share_api = share.API() super(ShareMetadataController, self).__init__() def _get_metadata(self, context, share_id): try: share = self.share_api.get(context, share_id) meta = self.share_api.get_share_metadata(context, share) except exception.NotFound: msg = _('share does not exist') raise exc.HTTPNotFound(explanation=msg) return meta def index(self, req, share_id): """Returns the list of metadata for a given share.""" context = req.environ['manila.context'] return {'metadata': self._get_metadata(context, share_id)} def create(self, req, share_id, body): try: metadata = body['metadata'] except (KeyError, TypeError): msg = _("Malformed request body") raise exc.HTTPBadRequest(explanation=msg) context = req.environ['manila.context'] new_metadata = self._update_share_metadata(context, share_id, metadata, delete=False) return {'metadata': new_metadata} def update(self, req, share_id, id, body): try: meta_item = body['meta'] except (TypeError, KeyError): expl = _('Malformed request body') raise exc.HTTPBadRequest(explanation=expl) if id not in meta_item: expl = _('Request body and URI mismatch') raise exc.HTTPBadRequest(explanation=expl) if len(meta_item) > 1: expl = _('Request body contains too many items') raise exc.HTTPBadRequest(explanation=expl) context = req.environ['manila.context'] self._update_share_metadata(context, share_id, meta_item, delete=False) return {'meta': meta_item} def update_all(self, req, share_id, body): try: metadata = body['metadata'] except (TypeError, KeyError): expl = _('Malformed request body') raise exc.HTTPBadRequest(explanation=expl) context = req.environ['manila.context'] new_metadata = self._update_share_metadata(context, share_id, metadata, delete=True) return {'metadata': new_metadata} def _update_share_metadata(self, context, share_id, metadata, delete=False): try: share = self.share_api.get(context, share_id) return self.share_api.update_share_metadata(context, share, metadata, delete) except exception.NotFound: msg = _('share does not exist') raise exc.HTTPNotFound(explanation=msg) except (ValueError, AttributeError): msg = _("Malformed request body") raise exc.HTTPBadRequest(explanation=msg) except exception.InvalidMetadata as error: raise exc.HTTPBadRequest(explanation=error.msg) except exception.InvalidMetadataSize as error: raise exc.HTTPBadRequest(explanation=error.msg) def show(self, req, share_id, id): """Return a single metadata item.""" context = req.environ['manila.context'] data = self._get_metadata(context, share_id) try: return {'meta': {id: data[id]}} except KeyError: msg = _("Metadata item was not found") raise exc.HTTPNotFound(explanation=msg) def delete(self, req, share_id, id): """Deletes an existing metadata.""" context = req.environ['manila.context'] metadata = self._get_metadata(context, share_id) if id not in metadata: msg = _("Metadata item was not found") raise exc.HTTPNotFound(explanation=msg) try: share = self.share_api.get(context, share_id) self.share_api.delete_share_metadata(context, share, id) except exception.NotFound: msg = _('share does not exist') raise exc.HTTPNotFound(explanation=msg) return webob.Response(status_int=http_client.OK) def create_resource(): return wsgi.Resource(ShareMetadataController()) 
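# Illustrative request bodies for ShareMetadataController above; keys and
# values are hypothetical. A single-item update ('update') must name the same
# key in the URI and in the 'meta' dict, while 'update_all' replaces the whole
# metadata set, so keys absent from the body are deleted (delete=True).
EXAMPLE_UPDATE_ONE_BODY = {'meta': {'project': 'apollo'}}
EXAMPLE_UPDATE_ALL_BODY = {'metadata': {'project': 'apollo', 'tier': 'gold'}}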
manila-10.0.0/manila/api/v1/limits.py0000664000175000017500000003352313656750227017311 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Module dedicated functions/classes dealing with rate limiting requests. """ import collections import copy import math import re import time from oslo_serialization import jsonutils from oslo_utils import importutils from six.moves import http_client import webob.dec import webob.exc from manila.api.openstack import wsgi from manila.api.views import limits as limits_views from manila.i18n import _ from manila import quota from manila.wsgi import common as base_wsgi QUOTAS = quota.QUOTAS # Convenience constants for the limits dictionary passed to Limiter(). PER_SECOND = 1 PER_MINUTE = 60 PER_HOUR = 60 * 60 PER_DAY = 60 * 60 * 24 class LimitsController(wsgi.Controller): """Controller for accessing limits in the OpenStack API.""" def index(self, req): """Return all global and rate limit information.""" context = req.environ['manila.context'] quotas = QUOTAS.get_project_quotas(context, context.project_id, usages=True) abs_limits = {'in_use': {}, 'limit': {}} for k, v in quotas.items(): abs_limits['limit'][k] = v['limit'] abs_limits['in_use'][k] = v['in_use'] rate_limits = req.environ.get("manila.limits", []) builder = self._get_view_builder(req) return builder.build(req, rate_limits, abs_limits) def _get_view_builder(self, req): return limits_views.ViewBuilder() def create_resource(): return wsgi.Resource(LimitsController()) class Limit(object): """Stores information about a limit for HTTP requests.""" UNITS = { 1: "SECOND", 60: "MINUTE", 60 * 60: "HOUR", 60 * 60 * 24: "DAY", } UNIT_MAP = {v: k for k, v in UNITS.items()} def __init__(self, verb, uri, regex, value, unit): """Initialize a new `Limit`. @param verb: HTTP verb (POST, PUT, etc.) @param uri: Human-readable URI @param regex: Regular expression format for this limit @param value: Integer number of requests which can be made @param unit: Unit of measure for the value parameter """ self.verb = verb self.uri = uri self.regex = regex self.value = int(value) self.unit = unit self.unit_string = self.display_unit().lower() self.remaining = int(value) if value <= 0: raise ValueError("Limit value must be > 0") self.last_request = None self.next_request = None self.water_level = 0 self.capacity = self.unit self.request_value = float(self.capacity) / float(self.value) msg = (_("Only %(value)s %(verb)s request(s) can be " "made to %(uri)s every %(unit_string)s.") % {'value': self.value, 'verb': self.verb, 'uri': self.uri, 'unit_string': self.unit_string}) self.error_message = msg def __call__(self, verb, url): """Represents a call to this limit from a relevant request. @param verb: string http verb (POST, GET, etc.) 
@param url: string URL """ if self.verb != verb or not re.match(self.regex, url): return now = self._get_time() if self.last_request is None: self.last_request = now leak_value = now - self.last_request self.water_level -= leak_value self.water_level = max(self.water_level, 0) self.water_level += self.request_value difference = self.water_level - self.capacity self.last_request = now if difference > 0: self.water_level -= self.request_value self.next_request = now + difference return difference cap = self.capacity water = self.water_level val = self.value self.remaining = math.floor(((cap - water) / cap) * val) self.next_request = now def _get_time(self): """Retrieve the current time. Broken out for testability.""" return time.time() def display_unit(self): """Display the string name of the unit.""" return self.UNITS.get(self.unit, "UNKNOWN") def display(self): """Return a useful representation of this class.""" return { "verb": self.verb, "URI": self.uri, "regex": self.regex, "value": self.value, "remaining": int(self.remaining), "unit": self.display_unit(), "resetTime": int(self.next_request or self._get_time()), } # "Limit" format is a dictionary with the HTTP verb, human-readable URI, # a regular-expression to match, value and unit of measure (PER_DAY, etc.) DEFAULT_LIMITS = [ Limit("POST", "*", ".*", 10, PER_MINUTE), Limit("POST", "*/servers", "^/servers", 50, PER_DAY), Limit("PUT", "*", ".*", 10, PER_MINUTE), Limit("GET", "*changes-since*", ".*changes-since.*", 3, PER_MINUTE), Limit("DELETE", "*", ".*", 100, PER_MINUTE), ] class RateLimitingMiddleware(base_wsgi.Middleware): """Rate-limits requests passing through this middleware. All limit information is stored in memory for this implementation. """ def __init__(self, application, limits=None, limiter=None, **kwargs): """Initialize new `RateLimitingMiddleware`. `RateLimitingMiddleware` wraps the given WSGI application and sets up the given limits. @param application: WSGI application to wrap @param limits: String describing limits @param limiter: String identifying class for representing limits Other parameters are passed to the constructor for the limiter. """ base_wsgi.Middleware.__init__(self, application) # Select the limiter class if limiter is None: limiter = Limiter else: limiter = importutils.import_class(limiter) # Parse the limits, if any are provided if limits is not None: limits = limiter.parse_limits(limits) self._limiter = limiter(limits or DEFAULT_LIMITS, **kwargs) @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): """Represents a single call through this middleware. We should record the request if we have a limit relevant to it. If no limit is relevant to the request, ignore it. If the request should be rate limited, return a fault telling the user they are over the limit and need to retry later. """ verb = req.method url = req.url context = req.environ.get("manila.context") if context: username = context.user_id else: username = None delay, error = self._limiter.check_for_delay(verb, url, username) if delay: msg = _("This request was rate-limited.") retry = time.time() + delay return wsgi.OverLimitFault(msg, error, retry) req.environ["manila.limits"] = self._limiter.get_limits(username) return self.application class Limiter(object): """Rate-limit checking class which handles limits in memory.""" def __init__(self, limits, **kwargs): """Initialize the new `Limiter`. 
@param limits: List of `Limit` objects """ self.limits = copy.deepcopy(limits) self.levels = collections.defaultdict(lambda: copy.deepcopy(limits)) # Pick up any per-user limit information for key, value in kwargs.items(): if key.startswith('user:'): username = key[5:] self.levels[username] = self.parse_limits(value) def get_limits(self, username=None): """Return the limits for a given user.""" return [limit.display() for limit in self.levels[username]] def check_for_delay(self, verb, url, username=None): """Check the given verb/user/user triplet for limit. @return: Tuple of delay (in seconds) and error message (or None, None) """ delays = [] for limit in self.levels[username]: delay = limit(verb, url) if delay: delays.append((delay, limit.error_message)) if delays: delays.sort() return delays[0] return None, None # Note: This method gets called before the class is instantiated, # so this must be either a static method or a class method. It is # used to develop a list of limits to feed to the constructor. We # put this in the class so that subclasses can override the # default limit parsing. @staticmethod def parse_limits(limits): """Convert a string into a list of Limit instances. This implementation expects a semicolon-separated sequence of parenthesized groups, where each group contains a comma-separated sequence consisting of HTTP method, user-readable URI, a URI reg-exp, an integer number of requests which can be made, and a unit of measure. Valid values for the latter are "SECOND", "MINUTE", "HOUR", and "DAY". @return: List of Limit instances. """ # Handle empty limit strings limits = limits.strip() if not limits: return [] # Split up the limits by semicolon result = [] for group in limits.split(';'): group = group.strip() if group[:1] != '(' or group[-1:] != ')': raise ValueError("Limit rules must be surrounded by " "parentheses") group = group[1:-1] # Extract the Limit arguments args = [a.strip() for a in group.split(',')] if len(args) != 5: raise ValueError("Limit rules must contain the following " "arguments: verb, uri, regex, value, unit") # Pull out the arguments verb, uri, regex, value, unit = args # Upper-case the verb verb = verb.upper() # Convert value--raises ValueError if it's not integer value = int(value) # Convert unit unit = unit.upper() if unit not in Limit.UNIT_MAP: raise ValueError("Invalid units specified") unit = Limit.UNIT_MAP[unit] # Build a limit result.append(Limit(verb, uri, regex, value, unit)) return result class WsgiLimiter(object): """Rate-limit checking from a WSGI application. Uses an in-memory `Limiter`. To use, POST ``/`` with JSON data such as:: { "verb" : GET, "path" : "/servers" } and receive a 204 No Content, or a 403 Forbidden with an X-Wait-Seconds header containing the number of seconds to wait before the action would succeed. """ def __init__(self, limits=None): """Initialize the new `WsgiLimiter`. @param limits: List of `Limit` objects """ self._limiter = Limiter(limits or DEFAULT_LIMITS) @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, request): """Handles a call to this application. Returns 204 if the request is acceptable to the limiter, else a 403 is returned with a relevant header indicating when the request *will* succeed. 
""" if request.method != "POST": raise webob.exc.HTTPMethodNotAllowed() try: info = dict(jsonutils.loads(request.body)) except ValueError: raise webob.exc.HTTPBadRequest() username = request.path_info_pop() verb = info.get("verb") path = info.get("path") delay, error = self._limiter.check_for_delay(verb, path, username) if delay: headers = {"X-Wait-Seconds": "%.2f" % delay} return webob.exc.HTTPForbidden(headers=headers, explanation=error) else: return webob.exc.HTTPNoContent() class WsgiLimiterProxy(object): """Rate-limit requests based on answers from a remote source.""" def __init__(self, limiter_address): """Initialize the new `WsgiLimiterProxy`. @param limiter_address: IP/port combination of where to request limit """ self.limiter_address = limiter_address def check_for_delay(self, verb, path, username=None): body = jsonutils.dumps({"verb": verb, "path": path}) headers = {"Content-Type": "application/json"} conn = http_client.HTTPConnection(self.limiter_address) if username: conn.request("POST", "/%s" % (username), body, headers) else: conn.request("POST", "/", body, headers) resp = conn.getresponse() if 200 >= resp.status < 300: return None, None return resp.getheader("X-Wait-Seconds"), resp.read() or None # Note: This method gets called before the class is instantiated, # so this must be either a static method or a class method. It is # used to develop a list of limits to feed to the constructor. # This implementation returns an empty list, since all limit # decisions are made by a remote server. @staticmethod def parse_limits(limits): """Ignore a limits string. This simply doesn't apply for the limit proxy. @return: Empty list. """ return [] manila-10.0.0/manila/api/v1/shares.py0000664000175000017500000005560613656750227017303 0ustar zuulzuul00000000000000# Copyright 2013 NetApp # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""The shares api.""" import ast from oslo_log import log from oslo_utils import strutils from oslo_utils import uuidutils import six from six.moves import http_client import webob from webob import exc from manila.api import common from manila.api.openstack import wsgi from manila.api.views import share_accesses as share_access_views from manila.api.views import shares as share_views from manila.common import constants from manila import db from manila import exception from manila.i18n import _ from manila import share from manila.share import share_types from manila import utils LOG = log.getLogger(__name__) class ShareMixin(object): """Mixin class for Share API Controllers.""" def _update(self, *args, **kwargs): db.share_update(*args, **kwargs) def _get(self, *args, **kwargs): return self.share_api.get(*args, **kwargs) def _delete(self, *args, **kwargs): return self.share_api.delete(*args, **kwargs) def _migrate(self, *args, **kwargs): return self.share_api.migrate_share(*args, **kwargs) def show(self, req, id): """Return data about the given share.""" context = req.environ['manila.context'] try: share = self.share_api.get(context, id) except exception.NotFound: raise exc.HTTPNotFound() return self._view_builder.detail(req, share) def delete(self, req, id): """Delete a share.""" context = req.environ['manila.context'] LOG.info("Delete share with id: %s", id, context=context) try: share = self.share_api.get(context, id) # NOTE(ameade): If the share is in a share group, we require its # id be specified as a param. sg_id_key = 'share_group_id' if share.get(sg_id_key): share_group_id = req.params.get(sg_id_key) if not share_group_id: msg = _("Must provide '%s' as a request " "parameter when deleting a share in a share " "group.") % sg_id_key raise exc.HTTPBadRequest(explanation=msg) elif share_group_id != share.get(sg_id_key): msg = _("The specified '%s' does not match " "the share group id of the share.") % sg_id_key raise exc.HTTPBadRequest(explanation=msg) self.share_api.delete(context, share) except exception.NotFound: raise exc.HTTPNotFound() except exception.InvalidShare as e: raise exc.HTTPForbidden(explanation=six.text_type(e)) except exception.Conflict as e: raise exc.HTTPConflict(explanation=six.text_type(e)) return webob.Response(status_int=http_client.ACCEPTED) def index(self, req): """Returns a summary list of shares.""" req.GET.pop('export_location_id', None) req.GET.pop('export_location_path', None) req.GET.pop('name~', None) req.GET.pop('description~', None) req.GET.pop('description', None) req.GET.pop('with_count', None) return self._get_shares(req, is_detail=False) def detail(self, req): """Returns a detailed list of shares.""" req.GET.pop('export_location_id', None) req.GET.pop('export_location_path', None) req.GET.pop('name~', None) req.GET.pop('description~', None) req.GET.pop('description', None) req.GET.pop('with_count', None) return self._get_shares(req, is_detail=True) def _get_shares(self, req, is_detail): """Returns a list of shares, transformed through view builder.""" context = req.environ['manila.context'] common._validate_pagination_query(req) search_opts = {} search_opts.update(req.GET) # Remove keys that are not related to share attrs sort_key = search_opts.pop('sort_key', 'created_at') sort_dir = search_opts.pop('sort_dir', 'desc') show_count = False if 'with_count' in search_opts: show_count = utils.get_bool_from_api_params( 'with_count', search_opts) search_opts.pop('with_count') # Deserialize dicts if 'metadata' in search_opts: search_opts['metadata'] = 
ast.literal_eval(search_opts['metadata']) if 'extra_specs' in search_opts: search_opts['extra_specs'] = ast.literal_eval( search_opts['extra_specs']) # NOTE(vponomaryov): Manila stores in DB key 'display_name', but # allows to use both keys 'name' and 'display_name'. It is leftover # from Cinder v1 and v2 APIs. if 'name' in search_opts: search_opts['display_name'] = search_opts.pop('name') if 'description' in search_opts: search_opts['display_description'] = search_opts.pop( 'description') # like filter for key, db_key in (('name~', 'display_name~'), ('description~', 'display_description~')): if key in search_opts: search_opts[db_key] = search_opts.pop(key) if sort_key == 'name': sort_key = 'display_name' common.remove_invalid_options( context, search_opts, self._get_share_search_options()) shares = self.share_api.get_all( context, search_opts=search_opts, sort_key=sort_key, sort_dir=sort_dir) total_count = None if show_count: total_count = len(shares) if is_detail: shares = self._view_builder.detail_list(req, shares, total_count) else: shares = self._view_builder.summary_list(req, shares, total_count) return shares def _get_share_search_options(self): """Return share search options allowed by non-admin.""" # NOTE(vponomaryov): share_server_id depends on policy, allow search # by it for non-admins in case policy changed. # Also allow search by extra_specs in case policy # for it allows non-admin access. return ( 'display_name', 'status', 'share_server_id', 'volume_type_id', 'share_type_id', 'snapshot_id', 'host', 'share_network_id', 'is_public', 'metadata', 'extra_specs', 'sort_key', 'sort_dir', 'share_group_id', 'share_group_snapshot_id', 'export_location_id', 'export_location_path', 'display_name~', 'display_description~', 'display_description', 'limit', 'offset' ) @wsgi.Controller.authorize def update(self, req, id, body): """Update a share.""" context = req.environ['manila.context'] if not body or 'share' not in body: raise exc.HTTPUnprocessableEntity() share_data = body['share'] valid_update_keys = ( 'display_name', 'display_description', 'is_public', ) update_dict = {key: share_data[key] for key in valid_update_keys if key in share_data} try: share = self.share_api.get(context, id) except exception.NotFound: raise exc.HTTPNotFound() update_dict = common.validate_public_share_policy( context, update_dict, api='update') share = self.share_api.update(context, share, update_dict) share.update(update_dict) return self._view_builder.detail(req, share) def create(self, req, body): # Remove share group attributes body.get('share', {}).pop('share_group_id', None) share = self._create(req, body) return share @wsgi.Controller.authorize('create') def _create(self, req, body, check_create_share_from_snapshot_support=False, check_availability_zones_extra_spec=False): """Creates a new share.""" context = req.environ['manila.context'] if not self.is_valid_body(body, 'share'): raise exc.HTTPUnprocessableEntity() share = body['share'] share = common.validate_public_share_policy(context, share) # NOTE(rushiagr): Manila API allows 'name' instead of 'display_name'. if share.get('name'): share['display_name'] = share.get('name') del share['name'] # NOTE(rushiagr): Manila API allows 'description' instead of # 'display_description'. 
if share.get('description'): share['display_description'] = share.get('description') del share['description'] size = share['size'] share_proto = share['share_proto'].upper() msg = ("Create %(share_proto)s share of %(size)s GB" % {'share_proto': share_proto, 'size': size}) LOG.info(msg, context=context) availability_zone_id = None availability_zone = share.get('availability_zone') if availability_zone: try: availability_zone_id = db.availability_zone_get( context, availability_zone).id except exception.AvailabilityZoneNotFound as e: raise exc.HTTPNotFound(explanation=six.text_type(e)) share_group_id = share.get('share_group_id') if share_group_id: try: share_group = db.share_group_get(context, share_group_id) except exception.ShareGroupNotFound as e: raise exc.HTTPNotFound(explanation=six.text_type(e)) sg_az_id = share_group['availability_zone_id'] if availability_zone and availability_zone_id != sg_az_id: msg = _("Share cannot have AZ ('%(s_az)s') different than " "share group's one (%(sg_az)s).") % { 's_az': availability_zone_id, 'sg_az': sg_az_id} raise exception.InvalidInput(msg) availability_zone_id = sg_az_id kwargs = { 'availability_zone': availability_zone_id, 'metadata': share.get('metadata'), 'is_public': share.get('is_public', False), 'share_group_id': share_group_id, } snapshot_id = share.get('snapshot_id') if snapshot_id: snapshot = self.share_api.get_snapshot(context, snapshot_id) else: snapshot = None kwargs['snapshot_id'] = snapshot_id share_network_id = share.get('share_network_id') parent_share_type = {} if snapshot: # Need to check that share_network_id from snapshot's # parents share equals to share_network_id from args. # If share_network_id is empty then update it with # share_network_id of parent share. parent_share = self.share_api.get(context, snapshot['share_id']) parent_share_net_id = parent_share.instance['share_network_id'] parent_share_type = share_types.get_share_type( context, parent_share.instance['share_type_id']) if share_network_id: if share_network_id != parent_share_net_id: msg = ("Share network ID should be the same as snapshot's" " parent share's or empty") raise exc.HTTPBadRequest(explanation=msg) elif parent_share_net_id: share_network_id = parent_share_net_id # Verify that share can be created from a snapshot if (check_create_share_from_snapshot_support and not parent_share['create_share_from_snapshot_support']): msg = (_("A new share may not be created from snapshot '%s', " "because the snapshot's parent share does not have " "that capability.") % snapshot_id) LOG.error(msg) raise exc.HTTPBadRequest(explanation=msg) if share_network_id: try: self.share_api.get_share_network( context, share_network_id) except exception.ShareNetworkNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) kwargs['share_network_id'] = share_network_id if availability_zone_id: if not db.share_network_subnet_get_by_availability_zone_id( context, share_network_id, availability_zone_id=availability_zone_id): msg = _("A share network subnet was not found for the " "requested availability zone.") raise exc.HTTPBadRequest(explanation=msg) display_name = share.get('display_name') display_description = share.get('display_description') if 'share_type' in share and 'volume_type' in share: msg = 'Cannot specify both share_type and volume_type' raise exc.HTTPBadRequest(explanation=msg) req_share_type = share.get('share_type', share.get('volume_type')) share_type = None if req_share_type: try: if not uuidutils.is_uuid_like(req_share_type): share_type = 
share_types.get_share_type_by_name( context, req_share_type) else: share_type = share_types.get_share_type( context, req_share_type) except exception.ShareTypeNotFound: msg = _("Share type not found.") raise exc.HTTPNotFound(explanation=msg) elif not snapshot: def_share_type = share_types.get_default_share_type() if def_share_type: share_type = def_share_type # Only use in create share feature. Create share from snapshot # and create share with share group features not # need this check. if (not share_network_id and not snapshot and not share_group_id and share_type and share_type.get('extra_specs') and (strutils.bool_from_string(share_type.get('extra_specs'). get('driver_handles_share_servers')))): msg = _('Share network must be set when the ' 'driver_handles_share_servers is true.') raise exc.HTTPBadRequest(explanation=msg) type_chosen = share_type or parent_share_type if type_chosen and check_availability_zones_extra_spec: type_azs = type_chosen.get( 'extra_specs', {}).get('availability_zones', '') type_azs = type_azs.split(',') if type_azs else [] kwargs['availability_zones'] = type_azs if (availability_zone and type_azs and availability_zone not in type_azs): msg = _("Share type %(type)s is not supported within the " "availability zone chosen %(az)s.") type_chosen = ( req_share_type or "%s (from source snapshot)" % ( parent_share_type.get('name') or parent_share_type.get('id')) ) payload = {'type': type_chosen, 'az': availability_zone} raise exc.HTTPBadRequest(explanation=msg % payload) if share_type: kwargs['share_type'] = share_type new_share = self.share_api.create(context, share_proto, size, display_name, display_description, **kwargs) return self._view_builder.detail(req, new_share) @staticmethod def _any_instance_has_errored_rules(share): for instance in share['instances']: access_rules_status = instance['access_rules_status'] if access_rules_status == constants.SHARE_INSTANCE_RULES_ERROR: return True return False @wsgi.Controller.authorize('allow_access') def _allow_access(self, req, id, body, enable_ceph=False, allow_on_error_status=False, enable_ipv6=False, enable_metadata=False): """Add share access rule.""" context = req.environ['manila.context'] access_data = body.get('allow_access', body.get('os-allow_access')) if not enable_metadata: access_data.pop('metadata', None) share = self.share_api.get(context, id) if (not allow_on_error_status and self._any_instance_has_errored_rules(share)): msg = _("Access rules cannot be added while the share or any of " "its replicas or migration copies has its " "access_rules_status set to %(instance_rules_status)s. 
" "Deny any rules in %(rule_state)s state and try " "again.") % { 'instance_rules_status': constants.SHARE_INSTANCE_RULES_ERROR, 'rule_state': constants.ACCESS_STATE_ERROR, } raise webob.exc.HTTPBadRequest(explanation=msg) access_type = access_data['access_type'] access_to = access_data['access_to'] common.validate_access(access_type=access_type, access_to=access_to, enable_ceph=enable_ceph, enable_ipv6=enable_ipv6) try: access = self.share_api.allow_access( context, share, access_type, access_to, access_data.get('access_level'), access_data.get('metadata')) except exception.ShareAccessExists as e: raise webob.exc.HTTPBadRequest(explanation=e.msg) except exception.InvalidMetadata as error: raise exc.HTTPBadRequest(explanation=error.msg) except exception.InvalidMetadataSize as error: raise exc.HTTPBadRequest(explanation=error.msg) return self._access_view_builder.view(req, access) @wsgi.Controller.authorize('deny_access') def _deny_access(self, req, id, body): """Remove share access rule.""" context = req.environ['manila.context'] access_id = body.get( 'deny_access', body.get('os-deny_access'))['access_id'] try: access = self.share_api.access_get(context, access_id) if access.share_id != id: raise exception.NotFound() share = self.share_api.get(context, id) except exception.NotFound as error: raise webob.exc.HTTPNotFound(explanation=six.text_type(error)) self.share_api.deny_access(context, share, access) return webob.Response(status_int=http_client.ACCEPTED) def _access_list(self, req, id, body): """List share access rules.""" context = req.environ['manila.context'] share = self.share_api.get(context, id) access_rules = self.share_api.access_get_all(context, share) return self._access_view_builder.list_view(req, access_rules) def _extend(self, req, id, body): """Extend size of a share.""" context = req.environ['manila.context'] share, size = self._get_valid_resize_parameters( context, id, body, 'os-extend') try: self.share_api.extend(context, share, size) except (exception.InvalidInput, exception.InvalidShare) as e: raise webob.exc.HTTPBadRequest(explanation=six.text_type(e)) except exception.ShareSizeExceedsAvailableQuota as e: raise webob.exc.HTTPForbidden(explanation=six.text_type(e)) return webob.Response(status_int=http_client.ACCEPTED) def _shrink(self, req, id, body): """Shrink size of a share.""" context = req.environ['manila.context'] share, size = self._get_valid_resize_parameters( context, id, body, 'os-shrink') try: self.share_api.shrink(context, share, size) except (exception.InvalidInput, exception.InvalidShare) as e: raise webob.exc.HTTPBadRequest(explanation=six.text_type(e)) return webob.Response(status_int=http_client.ACCEPTED) def _get_valid_resize_parameters(self, context, id, body, action): try: share = self.share_api.get(context, id) except exception.NotFound as e: raise webob.exc.HTTPNotFound(explanation=six.text_type(e)) try: size = int(body.get(action, body.get(action.split('os-')[-1]))['new_size']) except (KeyError, ValueError, TypeError): msg = _("New share size must be specified as an integer.") raise webob.exc.HTTPBadRequest(explanation=msg) return share, size class ShareController(wsgi.Controller, ShareMixin, wsgi.AdminActionsMixin): """The Shares API v1 controller for the OpenStack API.""" resource_name = 'share' _view_builder_class = share_views.ViewBuilder def __init__(self): super(ShareController, self).__init__() self.share_api = share.API() self._access_view_builder = share_access_views.ViewBuilder() @wsgi.action('os-reset_status') def 
share_reset_status(self, req, id, body): """Reset status of a share.""" return self._reset_status(req, id, body) @wsgi.action('os-force_delete') def share_force_delete(self, req, id, body): """Delete a share, bypassing the check for status.""" return self._force_delete(req, id, body) @wsgi.action('os-allow_access') def allow_access(self, req, id, body): """Add share access rule.""" return self._allow_access(req, id, body) @wsgi.action('os-deny_access') def deny_access(self, req, id, body): """Remove share access rule.""" return self._deny_access(req, id, body) @wsgi.action('os-access_list') def access_list(self, req, id, body): """List share access rules.""" return self._access_list(req, id, body) @wsgi.action('os-extend') def extend(self, req, id, body): """Extend size of a share.""" return self._extend(req, id, body) @wsgi.action('os-shrink') def shrink(self, req, id, body): """Shrink size of a share.""" return self._shrink(req, id, body) def create_resource(): return wsgi.Resource(ShareController()) manila-10.0.0/manila/api/v1/router.py0000664000175000017500000001630713656750227017331 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # Copyright 2011 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ WSGI middleware for OpenStack Share API v1. """ from manila.api import extensions import manila.api.openstack from manila.api.v1 import limits from manila.api.v1 import scheduler_stats from manila.api.v1 import security_service from manila.api.v1 import share_manage from manila.api.v1 import share_metadata from manila.api.v1 import share_servers from manila.api.v1 import share_snapshots from manila.api.v1 import share_types_extra_specs from manila.api.v1 import share_unmanage from manila.api.v1 import shares from manila.api.v2 import availability_zones from manila.api.v2 import quota_class_sets from manila.api.v2 import quota_sets from manila.api.v2 import services from manila.api.v2 import share_networks from manila.api.v2 import share_types from manila.api import versions class APIRouter(manila.api.openstack.APIRouter): """Route API requests. Routes requests on the OpenStack API to the appropriate controller and method. 
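Resources registered by _setup_routes below include shares, share snapshots, share metadata, limits, security services, share networks, share servers, share manage/unmanage, share types with their extra specs, and scheduler statistics; the os- prefixed availability-zone, service, quota-set and quota-class-set resources are created through the legacy factories provided by the v2 modules.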
""" ExtensionManager = extensions.ExtensionManager def _setup_routes(self, mapper): self.resources['versions'] = versions.create_resource() mapper.connect("versions", "/", controller=self.resources['versions'], action='index') mapper.redirect("", "/") self.resources["availability_zones"] = ( availability_zones.create_resource_legacy()) mapper.resource("availability-zone", "os-availability-zone", controller=self.resources["availability_zones"]) self.resources["services"] = services.create_resource_legacy() mapper.resource("service", "os-services", controller=self.resources["services"]) self.resources["quota_sets"] = quota_sets.create_resource_legacy() mapper.resource("quota-set", "os-quota-sets", controller=self.resources["quota_sets"], member={'defaults': 'GET'}) self.resources["quota_class_sets"] = ( quota_class_sets.create_resource_legacy()) mapper.resource("quota-class-set", "os-quota-class-sets", controller=self.resources["quota_class_sets"]) self.resources["share_manage"] = share_manage.create_resource() mapper.resource("share_manage", "os-share-manage", controller=self.resources["share_manage"]) self.resources["share_unmanage"] = share_unmanage.create_resource() mapper.resource("share_unmanage", "os-share-unmanage", controller=self.resources["share_unmanage"], member={'unmanage': 'POST'}) self.resources['shares'] = shares.create_resource() mapper.resource("share", "shares", controller=self.resources['shares'], collection={'detail': 'GET'}, member={'action': 'POST'}) self.resources['snapshots'] = share_snapshots.create_resource() mapper.resource("snapshot", "snapshots", controller=self.resources['snapshots'], collection={'detail': 'GET'}, member={'action': 'POST'}) self.resources['share_metadata'] = share_metadata.create_resource() share_metadata_controller = self.resources['share_metadata'] mapper.resource("share_metadata", "metadata", controller=share_metadata_controller, parent_resource=dict(member_name='share', collection_name='shares')) mapper.connect("metadata", "/{project_id}/shares/{share_id}/metadata", controller=share_metadata_controller, action='update_all', conditions={"method": ['PUT']}) self.resources['limits'] = limits.create_resource() mapper.resource("limit", "limits", controller=self.resources['limits']) self.resources["security_services"] = ( security_service.create_resource()) mapper.resource("security-service", "security-services", controller=self.resources['security_services'], collection={'detail': 'GET'}) self.resources['share_networks'] = share_networks.create_resource() mapper.resource(share_networks.RESOURCE_NAME, 'share-networks', controller=self.resources['share_networks'], collection={'detail': 'GET'}, member={'action': 'POST'}) self.resources['share_servers'] = share_servers.create_resource() mapper.resource('share_server', 'share-servers', controller=self.resources['share_servers']) mapper.connect('details', '/{project_id}/share-servers/{id}/details', controller=self.resources['share_servers'], action='details', conditions={"method": ['GET']}) self.resources['types'] = share_types.create_resource() mapper.resource("type", "types", controller=self.resources['types'], collection={'detail': 'GET', 'default': 'GET'}, member={'action': 'POST', 'os-share-type-access': 'GET'}) self.resources['extra_specs'] = ( share_types_extra_specs.create_resource()) mapper.resource('extra_spec', 'extra_specs', controller=self.resources['extra_specs'], parent_resource=dict(member_name='type', collection_name='types')) self.resources['scheduler_stats'] = 
scheduler_stats.create_resource() mapper.connect('pools', '/{project_id}/scheduler-stats/pools', controller=self.resources['scheduler_stats'], action='pools_index', conditions={'method': ['GET']}) mapper.connect('pools', '/{project_id}/scheduler-stats/pools/detail', controller=self.resources['scheduler_stats'], action='pools_detail', conditions={'method': ['GET']}) manila-10.0.0/manila/api/v1/scheduler_stats.py0000664000175000017500000000636413656750227021207 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from manila.api.openstack import wsgi from manila.api.views import scheduler_stats as scheduler_stats_views from manila import exception from manila.i18n import _ from manila.scheduler import rpcapi from manila.share import share_types class SchedulerStatsController(wsgi.Controller): """The Scheduler Stats API controller for the OpenStack API.""" resource_name = 'scheduler_stats:pools' def __init__(self): self.scheduler_api = rpcapi.SchedulerAPI() self._view_builder_class = scheduler_stats_views.ViewBuilder super(SchedulerStatsController, self).__init__() @wsgi.Controller.api_version('1.0', '2.22') @wsgi.Controller.authorize('index') def pools_index(self, req): """Returns a list of storage pools known to the scheduler.""" return self._pools(req, action='index') @wsgi.Controller.api_version('2.23') # noqa @wsgi.Controller.authorize('index') def pools_index(self, req): # pylint: disable=function-redefined return self._pools(req, action='index', enable_share_type=True) @wsgi.Controller.api_version('1.0', '2.22') @wsgi.Controller.authorize('detail') def pools_detail(self, req): """Returns a detailed list of storage pools known to the scheduler.""" return self._pools(req, action='detail') @wsgi.Controller.api_version('2.23') # noqa @wsgi.Controller.authorize('detail') def pools_detail(self, req): # pylint: disable=function-redefined return self._pools(req, action='detail', enable_share_type=True) def _pools(self, req, action='index', enable_share_type=False): context = req.environ['manila.context'] search_opts = {} search_opts.update(req.GET) if enable_share_type: req_share_type = search_opts.pop('share_type', None) if req_share_type: try: share_type = share_types.get_share_type_by_name_or_id( context, req_share_type) search_opts['capabilities'] = share_type.get('extra_specs', {}) except exception.ShareTypeNotFound: msg = _("Share type %s not found.") % req_share_type raise exc.HTTPBadRequest(explanation=msg) pools = self.scheduler_api.get_pools(context, filters=search_opts, cached=True) detail = (action == 'detail') return self._view_builder.pools(pools, detail=detail) def create_resource(): return wsgi.Resource(SchedulerStatsController()) manila-10.0.0/manila/api/v1/share_types_extra_specs.py0000664000175000017500000001667313656750227022745 0ustar zuulzuul00000000000000# Copyright (c) 2011 Zadara Storage Inc. 
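# The SchedulerStatsController above (API microversion 2.23 and newer) lets
# callers filter pools by share type: the type's extra specs are handed to
# the scheduler as a 'capabilities' filter. A minimal standalone sketch of
# that translation, assuming a plain dict in place of the object returned by
# share_types.get_share_type_by_name_or_id() (illustration only, not
# manila's API):
def capabilities_filter_from_share_type(share_type, search_opts):
    """Merge a share type's extra specs into pool search options."""
    opts = dict(search_opts)
    opts['capabilities'] = share_type.get('extra_specs', {})
    return opts


if __name__ == '__main__':
    gold = {'name': 'gold',
            'extra_specs': {'driver_handles_share_servers': 'False',
                            'snapshot_support': 'True'}}
    print(capabilities_filter_from_share_type(gold, {'pool': 'pool1'}))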
# Copyright (c) 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from six.moves import http_client import webob from manila.api import common from manila.api.openstack import wsgi from manila.common import constants from manila import db from manila import exception from manila.i18n import _ from manila import rpc from manila.share import share_types class ShareTypeExtraSpecsController(wsgi.Controller): """The share type extra specs API controller for the OpenStack API.""" resource_name = 'share_types_extra_spec' def _get_extra_specs(self, context, type_id): extra_specs = db.share_type_extra_specs_get(context, type_id) specs_dict = {} for key, value in extra_specs.items(): specs_dict[key] = value return dict(extra_specs=specs_dict) def _check_type(self, context, type_id): try: share_types.get_share_type(context, type_id) except exception.NotFound as ex: raise webob.exc.HTTPNotFound(explanation=ex.msg) def _verify_extra_specs(self, extra_specs, verify_all_required=True): if verify_all_required: try: share_types.get_valid_required_extra_specs(extra_specs) except exception.InvalidExtraSpec as e: raise webob.exc.HTTPBadRequest(explanation=six.text_type(e)) def is_valid_string(v): return isinstance(v, six.string_types) and len(v) in range(1, 256) def is_valid_extra_spec(k, v): valid_extra_spec_key = is_valid_string(k) valid_type = is_valid_string(v) or isinstance(v, bool) valid_required_extra_spec = ( share_types.is_valid_required_extra_spec(k, v) in (None, True)) valid_optional_extra_spec = ( share_types.is_valid_optional_extra_spec(k, v) in (None, True)) return (valid_extra_spec_key and valid_type and valid_required_extra_spec and valid_optional_extra_spec) for k, v in extra_specs.items(): if is_valid_string(k) and isinstance(v, dict): self._verify_extra_specs( v, verify_all_required=verify_all_required) elif not is_valid_extra_spec(k, v): expl = _('Invalid extra_spec: %(key)s: %(value)s') % { 'key': k, 'value': v } raise webob.exc.HTTPBadRequest(explanation=expl) @wsgi.Controller.authorize def index(self, req, type_id): """Returns the list of extra specs for a given share type.""" context = req.environ['manila.context'] self._check_type(context, type_id) return self._get_extra_specs(context, type_id) @wsgi.Controller.authorize def create(self, req, type_id, body=None): context = req.environ['manila.context'] if not self.is_valid_body(body, 'extra_specs'): raise webob.exc.HTTPBadRequest() self._check_type(context, type_id) specs = body['extra_specs'] try: self._verify_extra_specs(specs, False) except exception.InvalidExtraSpec as e: raise webob.exc.HTTPBadRequest(e.message) self._check_key_names(specs.keys()) specs = share_types.sanitize_extra_specs(specs) db.share_type_extra_specs_update_or_create(context, type_id, specs) notifier_info = dict(type_id=type_id, specs=specs) notifier = rpc.get_notifier('shareTypeExtraSpecs') notifier.info(context, 'share_type_extra_specs.create', notifier_info) return body @wsgi.Controller.authorize def update(self, req, type_id, id, 
body=None): context = req.environ['manila.context'] if not body: expl = _('Request body empty') raise webob.exc.HTTPBadRequest(explanation=expl) self._check_type(context, type_id) if id not in body: expl = _('Request body and URI mismatch') raise webob.exc.HTTPBadRequest(explanation=expl) if len(body) > 1: expl = _('Request body contains too many items') raise webob.exc.HTTPBadRequest(explanation=expl) self._verify_extra_specs(body, False) specs = share_types.sanitize_extra_specs(body) db.share_type_extra_specs_update_or_create(context, type_id, specs) notifier_info = dict(type_id=type_id, id=id) notifier = rpc.get_notifier('shareTypeExtraSpecs') notifier.info(context, 'share_type_extra_specs.update', notifier_info) return specs @wsgi.Controller.authorize def show(self, req, type_id, id): """Return a single extra spec item.""" context = req.environ['manila.context'] self._check_type(context, type_id) specs = self._get_extra_specs(context, type_id) if id in specs['extra_specs']: return {id: specs['extra_specs'][id]} else: raise webob.exc.HTTPNotFound() @wsgi.Controller.api_version('1.0', '2.23') @wsgi.Controller.authorize def delete(self, req, type_id, id): """Deletes an existing extra spec.""" context = req.environ['manila.context'] self._check_type(context, type_id) if id == constants.ExtraSpecs.SNAPSHOT_SUPPORT: msg = _("Extra spec '%s' can't be deleted.") % id raise webob.exc.HTTPForbidden(explanation=msg) return self._delete(req, type_id, id) @wsgi.Controller.api_version('2.24') # noqa @wsgi.Controller.authorize def delete(self, req, type_id, id): # pylint: disable=function-redefined """Deletes an existing extra spec.""" context = req.environ['manila.context'] self._check_type(context, type_id) return self._delete(req, type_id, id) def _delete(self, req, type_id, id): """Deletes an existing extra spec.""" context = req.environ['manila.context'] if id in share_types.get_required_extra_specs(): msg = _("Extra spec '%s' can't be deleted.") % id raise webob.exc.HTTPForbidden(explanation=msg) try: db.share_type_extra_specs_delete(context, type_id, id) except exception.ShareTypeExtraSpecsNotFound as error: raise webob.exc.HTTPNotFound(explanation=error.msg) notifier_info = dict(type_id=type_id, id=id) notifier = rpc.get_notifier('shareTypeExtraSpecs') notifier.info(context, 'share_type_extra_specs.delete', notifier_info) return webob.Response(status_int=http_client.ACCEPTED) def _check_key_names(self, keys): if not common.validate_key_names(keys): expl = _('Key names can only contain alphanumeric characters, ' 'underscores, periods, colons and hyphens.') raise webob.exc.HTTPBadRequest(explanation=expl) def create_resource(): return wsgi.Resource(ShareTypeExtraSpecsController()) manila-10.0.0/manila/api/v1/security_service.py0000664000175000017500000002062613656750227021377 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
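# _verify_extra_specs() in share_types_extra_specs.py above accepts string
# keys of 1 to 255 characters whose values are strings in the same length
# range or booleans, with additional validators applied to the required and
# optional spec names. A simplified standalone sketch of the basic type and
# length check, using plain Python types rather than the WSGI controller:
def spec_is_acceptable(key, value):
    def is_valid_string(v):
        return isinstance(v, str) and 0 < len(v) < 256

    return is_valid_string(key) and (is_valid_string(value) or
                                     isinstance(value, bool))


# Example: spec_is_acceptable('driver_handles_share_servers', 'False') is
# True, while an empty key or an integer value is rejected.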
"""The security service api.""" from oslo_log import log from six.moves import http_client import webob from webob import exc from manila.api import common from manila.api.openstack import wsgi from manila.api.views import security_service as security_service_views from manila.common import constants from manila import db from manila import exception from manila.i18n import _ from manila import policy from manila import utils RESOURCE_NAME = 'security_service' LOG = log.getLogger(__name__) class SecurityServiceController(wsgi.Controller): """The Shares API controller for the OpenStack API.""" _view_builder_class = security_service_views.ViewBuilder def show(self, req, id): """Return data about the given security service.""" context = req.environ['manila.context'] try: security_service = db.security_service_get(context, id) policy.check_policy(context, RESOURCE_NAME, 'show', security_service) except exception.NotFound: raise exc.HTTPNotFound() return self._view_builder.detail(req, security_service) def delete(self, req, id): """Delete a security service.""" context = req.environ['manila.context'] LOG.info("Delete security service with id: %s", id, context=context) try: security_service = db.security_service_get(context, id) except exception.NotFound: raise exc.HTTPNotFound() share_nets = db.share_network_get_all_by_security_service( context, id) if share_nets: msg = _("Cannot delete security service. It is " "assigned to share network(s)") raise exc.HTTPForbidden(explanation=msg) policy.check_policy(context, RESOURCE_NAME, 'delete', security_service) db.security_service_delete(context, id) return webob.Response(status_int=http_client.ACCEPTED) def index(self, req): """Returns a summary list of security services.""" policy.check_policy(req.environ['manila.context'], RESOURCE_NAME, 'index') return self._get_security_services(req, is_detail=False) def detail(self, req): """Returns a detailed list of security services.""" policy.check_policy(req.environ['manila.context'], RESOURCE_NAME, 'detail') return self._get_security_services(req, is_detail=True) def _get_security_services(self, req, is_detail): """Returns a transformed list of security services. The list gets transformed through view builder. """ context = req.environ['manila.context'] search_opts = {} search_opts.update(req.GET) # NOTE(vponomaryov): remove 'status' from search opts # since it was removed from security service model. 
search_opts.pop('status', None) if 'share_network_id' in search_opts: share_nw = db.share_network_get(context, search_opts['share_network_id']) security_services = share_nw['security_services'] del search_opts['share_network_id'] else: if context.is_admin and utils.is_all_tenants(search_opts): policy.check_policy(context, RESOURCE_NAME, 'get_all_security_services') security_services = db.security_service_get_all(context) else: security_services = db.security_service_get_all_by_project( context, context.project_id) search_opts.pop('all_tenants', None) common.remove_invalid_options( context, search_opts, self._get_security_services_search_options()) if search_opts: results = [] not_found = object() for ss in security_services: if all(ss.get(opt, not_found) == value for opt, value in search_opts.items()): results.append(ss) security_services = results limited_list = common.limited(security_services, req) if is_detail: security_services = self._view_builder.detail_list( req, limited_list) for ss in security_services['security_services']: share_networks = db.share_network_get_all_by_security_service( context, ss['id']) ss['share_networks'] = [sn['id'] for sn in share_networks] else: security_services = self._view_builder.summary_list( req, limited_list) return security_services def _get_security_services_search_options(self): return ('name', 'id', 'type', 'user', 'server', 'dns_ip', 'domain', ) def _share_servers_dependent_on_sn_exist(self, context, security_service_id): share_networks = db.share_network_get_all_by_security_service( context, security_service_id) for sn in share_networks: for sns in sn['share_network_subnets']: if 'share_servers' in sns and sns['share_servers']: return True return False def update(self, req, id, body): """Update a security service.""" context = req.environ['manila.context'] if not body or 'security_service' not in body: raise exc.HTTPUnprocessableEntity() security_service_data = body['security_service'] valid_update_keys = ( 'description', 'name' ) try: security_service = db.security_service_get(context, id) policy.check_policy(context, RESOURCE_NAME, 'update', security_service) except exception.NotFound: raise exc.HTTPNotFound() if self._share_servers_dependent_on_sn_exist(context, id): for item in security_service_data: if item not in valid_update_keys: msg = _("Cannot update security service %s. It is " "attached to share network with share server " "associated. Only 'name' and 'description' " "fields are available for update.") % id raise exc.HTTPForbidden(explanation=msg) policy.check_policy(context, RESOURCE_NAME, 'update', security_service) security_service = db.security_service_update( context, id, security_service_data) return self._view_builder.detail(req, security_service) def create(self, req, body): """Creates a new security service.""" context = req.environ['manila.context'] policy.check_policy(context, RESOURCE_NAME, 'create') if not self.is_valid_body(body, 'security_service'): raise exc.HTTPUnprocessableEntity() security_service_args = body['security_service'] security_srv_type = security_service_args.get('type') allowed_types = constants.SECURITY_SERVICES_ALLOWED_TYPES if security_srv_type not in allowed_types: raise exception.InvalidInput( reason=(_("Invalid type %(type)s specified for security " "service. 
Valid types are %(types)s") % {'type': security_srv_type, 'types': ','.join(allowed_types)})) security_service_args['project_id'] = context.project_id security_service = db.security_service_create( context, security_service_args) return self._view_builder.detail(req, security_service) def create_resource(): return wsgi.Resource(SecurityServiceController()) manila-10.0.0/manila/api/v1/share_snapshots.py0000664000175000017500000001775513656750227021225 0ustar zuulzuul00000000000000# Copyright 2013 NetApp # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The share snapshots api.""" from oslo_log import log from six.moves import http_client import webob from webob import exc from manila.api import common from manila.api.openstack import wsgi from manila.api.views import share_snapshots as snapshot_views from manila import db from manila import exception from manila.i18n import _ from manila import share LOG = log.getLogger(__name__) class ShareSnapshotMixin(object): """Mixin class for Share Snapshot Controllers.""" def _update(self, *args, **kwargs): db.share_snapshot_update(*args, **kwargs) def _get(self, *args, **kwargs): return self.share_api.get_snapshot(*args, **kwargs) def _delete(self, *args, **kwargs): return self.share_api.delete_snapshot(*args, **kwargs) def show(self, req, id): """Return data about the given snapshot.""" context = req.environ['manila.context'] try: snapshot = self.share_api.get_snapshot(context, id) # Snapshot with no instances is filtered out. if(snapshot.get('status') is None): raise exc.HTTPNotFound() except exception.NotFound: raise exc.HTTPNotFound() return self._view_builder.detail(req, snapshot) def delete(self, req, id): """Delete a snapshot.""" context = req.environ['manila.context'] LOG.info("Delete snapshot with id: %s", id, context=context) try: snapshot = self.share_api.get_snapshot(context, id) self.share_api.delete_snapshot(context, snapshot) except exception.NotFound: raise exc.HTTPNotFound() return webob.Response(status_int=http_client.ACCEPTED) def index(self, req): """Returns a summary list of snapshots.""" req.GET.pop('name~', None) req.GET.pop('description~', None) req.GET.pop('description', None) return self._get_snapshots(req, is_detail=False) def detail(self, req): """Returns a detailed list of snapshots.""" req.GET.pop('name~', None) req.GET.pop('description~', None) req.GET.pop('description', None) return self._get_snapshots(req, is_detail=True) def _get_snapshots(self, req, is_detail): """Returns a list of snapshots.""" context = req.environ['manila.context'] search_opts = {} search_opts.update(req.GET) # Remove keys that are not related to share attrs search_opts.pop('limit', None) search_opts.pop('offset', None) sort_key = search_opts.pop('sort_key', 'created_at') sort_dir = search_opts.pop('sort_dir', 'desc') # NOTE(vponomaryov): Manila stores in DB key 'display_name', but # allows to use both keys 'name' and 'display_name'. It is leftover # from Cinder v1 and v2 APIs. 
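# For example, a 'name=snap1' query parameter becomes a 'display_name'
# filter below; 'description' and the '~' (like) variants are mapped the
# same way when the calling controller allows them through.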
if 'name' in search_opts: search_opts['display_name'] = search_opts.pop('name') if 'description' in search_opts: search_opts['display_description'] = search_opts.pop( 'description') # like filter for key, db_key in (('name~', 'display_name~'), ('description~', 'display_description~')): if key in search_opts: search_opts[db_key] = search_opts.pop(key) common.remove_invalid_options(context, search_opts, self._get_snapshots_search_options()) snapshots = self.share_api.get_all_snapshots( context, search_opts=search_opts, sort_key=sort_key, sort_dir=sort_dir, ) # Snapshots with no instances are filtered out. snapshots = list(filter(lambda x: x.get('status') is not None, snapshots)) limited_list = common.limited(snapshots, req) if is_detail: snapshots = self._view_builder.detail_list(req, limited_list) else: snapshots = self._view_builder.summary_list(req, limited_list) return snapshots def _get_snapshots_search_options(self): """Return share snapshot search options allowed by non-admin.""" return ('display_name', 'status', 'share_id', 'size', 'display_name~', 'display_description~', 'display_description') def update(self, req, id, body): """Update a snapshot.""" context = req.environ['manila.context'] if not body or 'snapshot' not in body: raise exc.HTTPUnprocessableEntity() snapshot_data = body['snapshot'] valid_update_keys = ( 'display_name', 'display_description', ) update_dict = {key: snapshot_data[key] for key in valid_update_keys if key in snapshot_data} try: snapshot = self.share_api.get_snapshot(context, id) except exception.NotFound: raise exc.HTTPNotFound() snapshot = self.share_api.snapshot_update(context, snapshot, update_dict) snapshot.update(update_dict) return self._view_builder.detail(req, snapshot) @wsgi.response(202) def create(self, req, body): """Creates a new snapshot.""" context = req.environ['manila.context'] if not self.is_valid_body(body, 'snapshot'): raise exc.HTTPUnprocessableEntity() snapshot = body['snapshot'] share_id = snapshot['share_id'] share = self.share_api.get(context, share_id) # Verify that share can be snapshotted if not share['snapshot_support']: msg = _("Snapshot cannot be created from share '%s', because " "share back end does not support it.") % share_id LOG.error(msg) raise exc.HTTPUnprocessableEntity(explanation=msg) LOG.info("Create snapshot from share %s", share_id, context=context) # NOTE(rushiagr): v2 API allows name instead of display_name if 'name' in snapshot: snapshot['display_name'] = snapshot.get('name') del snapshot['name'] # NOTE(rushiagr): v2 API allows description instead of # display_description if 'description' in snapshot: snapshot['display_description'] = snapshot.get('description') del snapshot['description'] new_snapshot = self.share_api.create_snapshot( context, share, snapshot.get('display_name'), snapshot.get('display_description')) return self._view_builder.detail( req, dict(new_snapshot.items())) class ShareSnapshotsController(ShareSnapshotMixin, wsgi.Controller, wsgi.AdminActionsMixin): """The Share Snapshots API controller for the OpenStack API.""" resource_name = 'share_snapshot' _view_builder_class = snapshot_views.ViewBuilder def __init__(self): super(ShareSnapshotsController, self).__init__() self.share_api = share.API() @wsgi.action('os-reset_status') def snapshot_reset_status_legacy(self, req, id, body): return self._reset_status(req, id, body) @wsgi.action('os-force_delete') def snapshot_force_delete_legacy(self, req, id, body): return self._force_delete(req, id, body) def create_resource(): return 
wsgi.Resource(ShareSnapshotsController()) manila-10.0.0/manila/api/middleware/0000775000175000017500000000000013656750362017217 5ustar zuulzuul00000000000000manila-10.0.0/manila/api/middleware/__init__.py0000664000175000017500000000000013656750227021316 0ustar zuulzuul00000000000000manila-10.0.0/manila/api/middleware/fault.py0000664000175000017500000000602113656750227020703 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log import six import webob.dec import webob.exc from manila.api.openstack import wsgi from manila.i18n import _ from manila import utils from manila.wsgi import common as base_wsgi LOG = log.getLogger(__name__) class FaultWrapper(base_wsgi.Middleware): """Calls down the middleware stack, making exceptions into faults.""" _status_to_type = {} @staticmethod def status_to_type(status): if not FaultWrapper._status_to_type: for clazz in utils.walk_class_hierarchy(webob.exc.HTTPError): FaultWrapper._status_to_type[clazz.code] = clazz return FaultWrapper._status_to_type.get( status, webob.exc.HTTPInternalServerError)() def _error(self, inner, req): if isinstance(inner, UnicodeDecodeError): msg = _("Error decoding your request. Either the URL or the " "request body contained characters that could not be " "decoded by Manila.") return wsgi.Fault(webob.exc.HTTPBadRequest(explanation=msg)) LOG.exception("Caught error: %s", inner) safe = getattr(inner, 'safe', False) headers = getattr(inner, 'headers', None) status = getattr(inner, 'code', 500) if status is None: status = 500 msg_dict = dict(url=req.url, status=status) LOG.info("%(url)s returned with HTTP %(status)d", msg_dict) outer = self.status_to_type(status) if headers: outer.headers = headers # NOTE(johannes): We leave the explanation empty here on # purpose. It could possibly have sensitive information # that should not be returned back to the user. See # bugs 868360 and 874472 # NOTE(eglynn): However, it would be over-conservative and # inconsistent with the EC2 API to hide every exception, # including those that are safe to expose, see bug 1021373 if safe: outer.explanation = '%s: %s' % (inner.__class__.__name__, six.text_type(inner)) return wsgi.Fault(outer) @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): try: return req.get_response(self.application) except Exception as ex: return self._error(ex, req) manila-10.0.0/manila/api/middleware/auth.py0000664000175000017500000001305113656750227020532 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Common Auth Middleware. """ import os from oslo_config import cfg from oslo_log import log from oslo_serialization import jsonutils import webob.dec import webob.exc from manila.api.openstack import wsgi from manila import context from manila.i18n import _ from manila.wsgi import common as base_wsgi use_forwarded_for_opt = cfg.BoolOpt( 'use_forwarded_for', default=False, help='Treat X-Forwarded-For as the canonical remote address. ' 'Only enable this if you have a sanitizing proxy.') CONF = cfg.CONF CONF.register_opt(use_forwarded_for_opt) LOG = log.getLogger(__name__) def pipeline_factory(loader, global_conf, **local_conf): """A paste pipeline replica that keys off of auth_strategy.""" pipeline = local_conf[CONF.auth_strategy] if not CONF.api_rate_limit: limit_name = CONF.auth_strategy + '_nolimit' pipeline = local_conf.get(limit_name, pipeline) pipeline = pipeline.split() filters = [loader.get_filter(n) for n in pipeline[:-1]] app = loader.get_app(pipeline[-1]) filters.reverse() for filter in filters: app = filter(app) return app class InjectContext(base_wsgi.Middleware): """Add a 'manila.context' to WSGI environ.""" def __init__(self, context, *args, **kwargs): self.context = context super(InjectContext, self).__init__(*args, **kwargs) @webob.dec.wsgify(RequestClass=base_wsgi.Request) def __call__(self, req): req.environ['manila.context'] = self.context return self.application class ManilaKeystoneContext(base_wsgi.Middleware): """Make a request context from keystone headers.""" @webob.dec.wsgify(RequestClass=base_wsgi.Request) def __call__(self, req): user_id = req.headers.get('X_USER') user_id = req.headers.get('X_USER_ID', user_id) if user_id is None: LOG.debug("Neither X_USER_ID nor X_USER found in request") return webob.exc.HTTPUnauthorized() # get the roles roles = [r.strip() for r in req.headers.get('X_ROLE', '').split(',')] if 'X_TENANT_ID' in req.headers: # This is the new header since Keystone went to ID/Name project_id = req.headers['X_TENANT_ID'] else: # This is for legacy compatibility project_id = req.headers['X_TENANT'] # Get the auth token auth_token = req.headers.get('X_AUTH_TOKEN', req.headers.get('X_STORAGE_TOKEN')) # Build a context, including the auth_token... 
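# (The X-Forwarded-For header is only honoured when the use_forwarded_for
# option is enabled, because the header is trivially spoofed unless a
# sanitizing proxy sits in front of the API.)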
remote_address = req.remote_addr if CONF.use_forwarded_for: remote_address = req.headers.get('X-Forwarded-For', remote_address) service_catalog = None if req.headers.get('X_SERVICE_CATALOG') is not None: try: catalog_header = req.headers.get('X_SERVICE_CATALOG') service_catalog = jsonutils.loads(catalog_header) except ValueError: raise webob.exc.HTTPInternalServerError( _('Invalid service catalog json.')) ctx = context.RequestContext(user_id, project_id, roles=roles, auth_token=auth_token, remote_address=remote_address, service_catalog=service_catalog) req.environ['manila.context'] = ctx return self.application class NoAuthMiddleware(base_wsgi.Middleware): """Return a fake token if one isn't specified.""" @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): if 'X-Auth-Token' not in req.headers: user_id = req.headers.get('X-Auth-User', 'admin') project_id = req.headers.get('X-Auth-Project-Id', 'admin') os_url = os.path.join(req.url, project_id) res = webob.Response() # NOTE(vish): This is expecting and returning Auth(1.1), whereas # keystone uses 2.0 auth. We should probably allow # 2.0 auth here as well. res.headers['X-Auth-Token'] = '%s:%s' % (user_id, project_id) res.headers['X-Server-Management-Url'] = os_url res.content_type = 'text/plain' res.status = '204' return res token = req.headers['X-Auth-Token'] user_id, _sep, project_id = token.partition(':') project_id = project_id or user_id remote_address = getattr(req, 'remote_address', '127.0.0.1') if CONF.use_forwarded_for: remote_address = req.headers.get('X-Forwarded-For', remote_address) ctx = context.RequestContext(user_id, project_id, is_admin=True, remote_address=remote_address) req.environ['manila.context'] = ctx return self.application manila-10.0.0/manila/api/urlmap.py0000664000175000017500000002375013656750227016763 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re try: from urllib.request import parse_http_list # noqa except ImportError: from urllib2 import parse_http_list # noqa import paste.urlmap from manila.api.openstack import wsgi _quoted_string_re = r'"[^"\\]*(?:\\.[^"\\]*)*"' _option_header_piece_re = re.compile( r';\s*([^\s;=]+|%s)\s*' r'(?:=\s*([^;]+|%s))?\s*' % (_quoted_string_re, _quoted_string_re)) def unquote_header_value(value): """Unquotes a header value. This does not use the real unquoting but what browsers are actually using for quoting. :param value: the header value to unquote. """ if value and value[0] == value[-1] == '"': # this is not the real unquoting, but fixing this so that the # RFC is met will result in bugs with internet explorer and # probably some other browsers as well. IE for example is # uploading files with "C:\foo\bar.txt" as filename value = value[1:-1] return value def parse_list_header(value): """Parse lists as described by RFC 2068 Section 2. In particular, parse comma-separated lists where the elements of the list may include quoted-strings. A quoted-string could contain a comma. 
A non-quoted string could have quotes in the middle. Quotes are removed automatically after parsing. The return value is a standard :class:`list`: >>> parse_list_header('token, "quoted value"') ['token', 'quoted value'] :param value: a string with a list header. :return: :class:`list` """ result = [] for item in parse_http_list(value): if item[:1] == item[-1:] == '"': item = unquote_header_value(item[1:-1]) result.append(item) return result def parse_options_header(value): """Parse header into content type and options. Parse a ``Content-Type`` like header into a tuple with the content type and the options: >>> parse_options_header('Content-Type: text/html; mimetype=text/html') ('Content-Type:', {'mimetype': 'text/html'}) :param value: the header to parse. :return: (str, options) """ def _tokenize(string): for match in _option_header_piece_re.finditer(string): key, value = match.groups() key = unquote_header_value(key) if value is not None: value = unquote_header_value(value) yield key, value if not value: return '', {} parts = _tokenize(';' + value) name = next(parts)[0] extra = dict(parts) return name, extra class Accept(object): def __init__(self, value): self._content_types = [parse_options_header(v) for v in parse_list_header(value)] def best_match(self, supported_content_types): # FIXME: Should we have a more sophisticated matching algorithm that # takes into account the version as well? best_quality = -1 best_content_type = None best_params = {} best_match = '*/*' for content_type in supported_content_types: for content_mask, params in self._content_types: try: quality = float(params.get('q', 1)) except ValueError: continue if quality < best_quality: continue elif best_quality == quality: if best_match.count('*') <= content_mask.count('*'): continue if self._match_mask(content_mask, content_type): best_quality = quality best_content_type = content_type best_params = params best_match = content_mask return best_content_type, best_params def content_type_params(self, best_content_type): """Find parameters in Accept header for given content type.""" for content_type, params in self._content_types: if best_content_type == content_type: return params return {} def _match_mask(self, mask, content_type): if '*' not in mask: return content_type == mask if mask == '*/*': return True mask_major = mask[:-2] content_type_major = content_type.split('/', 1)[0] return content_type_major == mask_major def urlmap_factory(loader, global_conf, **local_conf): if 'not_found_app' in local_conf: not_found_app = local_conf.pop('not_found_app') else: not_found_app = global_conf.get('not_found_app') if not_found_app: not_found_app = loader.get_app(not_found_app, global_conf=global_conf) urlmap = URLMap(not_found_app=not_found_app) for path, app_name in local_conf.items(): path = paste.urlmap.parse_path_expression(path) app = loader.get_app(app_name, global_conf=global_conf) urlmap[path] = app return urlmap class URLMap(paste.urlmap.URLMap): def _match(self, host, port, path_info): """Find longest match for a given URL path.""" for (domain, app_url), app in self.applications: if domain and domain != host and domain != host + ':' + port: continue if (path_info == app_url or path_info.startswith(app_url + '/')): return app, app_url return None, None def _set_script_name(self, app, app_url): def wrap(environ, start_response): environ['SCRIPT_NAME'] += app_url return app(environ, start_response) return wrap def _munge_path(self, app, path_info, app_url): def wrap(environ, start_response): environ['SCRIPT_NAME'] += 
app_url environ['PATH_INFO'] = path_info[len(app_url):] return app(environ, start_response) return wrap def _path_strategy(self, host, port, path_info): """Check path suffix for MIME type and path prefix for API version.""" mime_type = app = app_url = None parts = path_info.rsplit('.', 1) if len(parts) > 1: possible_type = 'application/' + parts[1] if possible_type in wsgi.SUPPORTED_CONTENT_TYPES: mime_type = possible_type parts = path_info.split('/') if len(parts) > 1: possible_app, possible_app_url = self._match(host, port, path_info) # Don't use prefix if it ends up matching default if possible_app and possible_app_url: app_url = possible_app_url app = self._munge_path(possible_app, path_info, app_url) return mime_type, app, app_url def _content_type_strategy(self, host, port, environ): """Check Content-Type header for API version.""" app = None params = parse_options_header(environ.get('CONTENT_TYPE', ''))[1] if 'version' in params: app, app_url = self._match(host, port, '/v' + params['version']) if app: app = self._set_script_name(app, app_url) return app def _accept_strategy(self, host, port, environ, supported_content_types): """Check Accept header for best matching MIME type and API version.""" accept = Accept(environ.get('HTTP_ACCEPT', '')) app = None # Find the best match in the Accept header mime_type, params = accept.best_match(supported_content_types) if 'version' in params: app, app_url = self._match(host, port, '/v' + params['version']) if app: app = self._set_script_name(app, app_url) return mime_type, app def __call__(self, environ, start_response): host = environ.get('HTTP_HOST', environ.get('SERVER_NAME')).lower() if ':' in host: host, port = host.split(':', 1) else: if environ['wsgi.url_scheme'] == 'http': port = '80' else: port = '443' path_info = environ['PATH_INFO'] path_info = self.normalize_url(path_info, False)[1] # The API version is determined in one of three ways: # 1) URL path prefix (eg /v1.1/tenant/servers/detail) # 2) Content-Type header (eg application/json;version=1.1) # 3) Accept header (eg application/json;q=0.8;version=1.1) # Manila supports only application/json as MIME type for the responses. supported_content_types = list(wsgi.SUPPORTED_CONTENT_TYPES) mime_type, app, app_url = self._path_strategy(host, port, path_info) if not app: app = self._content_type_strategy(host, port, environ) if not mime_type or not app: possible_mime_type, possible_app = self._accept_strategy( host, port, environ, supported_content_types) if possible_mime_type and not mime_type: mime_type = possible_mime_type if possible_app and not app: app = possible_app if not mime_type: mime_type = 'application/json' if not app: # Didn't match a particular version, probably matches default app, app_url = self._match(host, port, path_info) if app: app = self._munge_path(app, path_info, app_url) if app: environ['manila.best_content_type'] = mime_type return app(environ, start_response) environ['paste.urlmap_object'] = self return self.not_found_application(environ, start_response) manila-10.0.0/manila/api/v2/0000775000175000017500000000000013656750362015431 5ustar zuulzuul00000000000000manila-10.0.0/manila/api/v2/__init__.py0000664000175000017500000000000013656750227017530 0ustar zuulzuul00000000000000manila-10.0.0/manila/api/v2/share_replicas.py0000664000175000017500000002142113656750227020767 0ustar zuulzuul00000000000000# Copyright 2015 Goutham Pacha Ravi # All Rights Reserved. 
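# The Accept class in urlmap.py above picks, from the server's supported
# content types, the one matched by the Accept header entry with the highest
# 'q' value, preferring more specific masks on ties; the URL path prefix and
# the Content-Type header are consulted first when resolving the API
# version. A simplified standalone sketch of the quality matching (exact
# masks and */* only, not manila's full implementation):
def best_match(accept_header, supported_types):
    best_type, best_q, best_mask = None, -1.0, '*/*'
    for entry in accept_header.split(','):
        pieces = entry.strip().split(';')
        mask = pieces[0].strip()
        quality = 1.0
        for piece in pieces[1:]:
            name, _, value = piece.strip().partition('=')
            if name == 'q':
                try:
                    quality = float(value)
                except ValueError:
                    quality = 0.0
        for content_type in supported_types:
            if mask not in ('*/*', content_type):
                continue
            if quality > best_q or (quality == best_q and
                                    best_mask.count('*') > mask.count('*')):
                best_type, best_q, best_mask = content_type, quality, mask
    return best_type


if __name__ == '__main__':
    print(best_match('application/xml;q=0.9, application/json;q=0.8, */*',
                     ['application/json']))  # -> application/json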
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The Share Replication API.""" import six from six.moves import http_client import webob from webob import exc from manila.api import common from manila.api.openstack import wsgi from manila.api.views import share_replicas as replication_view from manila.common import constants from manila import db from manila import exception from manila.i18n import _ from manila import share MIN_SUPPORTED_API_VERSION = '2.11' class ShareReplicationController(wsgi.Controller, wsgi.AdminActionsMixin): """The Share Replication API controller for the OpenStack API.""" resource_name = 'share_replica' _view_builder_class = replication_view.ReplicationViewBuilder def __init__(self): super(ShareReplicationController, self).__init__() self.share_api = share.API() def _update(self, *args, **kwargs): db.share_replica_update(*args, **kwargs) def _get(self, *args, **kwargs): return db.share_replica_get(*args, **kwargs) def _delete(self, context, resource, force=True): try: self.share_api.delete_share_replica(context, resource, force=True) except exception.ReplicationException as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) @wsgi.Controller.api_version(MIN_SUPPORTED_API_VERSION, experimental=True) def index(self, req): """Return a summary list of replicas.""" return self._get_replicas(req) @wsgi.Controller.api_version(MIN_SUPPORTED_API_VERSION, experimental=True) def detail(self, req): """Returns a detailed list of replicas.""" return self._get_replicas(req, is_detail=True) @wsgi.Controller.authorize('get_all') def _get_replicas(self, req, is_detail=False): """Returns list of replicas.""" context = req.environ['manila.context'] share_id = req.params.get('share_id') if share_id: try: replicas = db.share_replicas_get_all_by_share( context, share_id) except exception.NotFound: msg = _("Share with share ID %s not found.") % share_id raise exc.HTTPNotFound(explanation=msg) else: replicas = db.share_replicas_get_all(context) limited_list = common.limited(replicas, req) if is_detail: replicas = self._view_builder.detail_list(req, limited_list) else: replicas = self._view_builder.summary_list(req, limited_list) return replicas @wsgi.Controller.api_version(MIN_SUPPORTED_API_VERSION, experimental=True) @wsgi.Controller.authorize def show(self, req, id): """Return data about the given replica.""" context = req.environ['manila.context'] try: replica = db.share_replica_get(context, id) except exception.ShareReplicaNotFound: msg = _("Replica %s not found.") % id raise exc.HTTPNotFound(explanation=msg) return self._view_builder.detail(req, replica) @wsgi.Controller.api_version(MIN_SUPPORTED_API_VERSION, experimental=True) @wsgi.response(202) @wsgi.Controller.authorize def create(self, req, body): """Add a replica to an existing share.""" context = req.environ['manila.context'] if not self.is_valid_body(body, 'share_replica'): msg = _("Body does not contain 'share_replica' information.") raise exc.HTTPUnprocessableEntity(explanation=msg) share_id = 
body.get('share_replica').get('share_id') availability_zone = body.get('share_replica').get('availability_zone') if not share_id: msg = _("Must provide Share ID to add replica.") raise exc.HTTPBadRequest(explanation=msg) try: share_ref = db.share_get(context, share_id) except exception.NotFound: msg = _("No share exists with ID %s.") raise exc.HTTPNotFound(explanation=msg % share_id) share_network_id = share_ref.get('share_network_id', None) try: new_replica = self.share_api.create_share_replica( context, share_ref, availability_zone=availability_zone, share_network_id=share_network_id) except exception.AvailabilityZoneNotFound as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) except exception.ReplicationException as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) except exception.ShareBusyException as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) return self._view_builder.detail(req, new_replica) @wsgi.Controller.api_version(MIN_SUPPORTED_API_VERSION, experimental=True) @wsgi.Controller.authorize def delete(self, req, id): """Delete a replica.""" context = req.environ['manila.context'] try: replica = db.share_replica_get(context, id) except exception.ShareReplicaNotFound: msg = _("No replica exists with ID %s.") raise exc.HTTPNotFound(explanation=msg % id) try: self.share_api.delete_share_replica(context, replica) except exception.ReplicationException as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) return webob.Response(status_int=http_client.ACCEPTED) @wsgi.Controller.api_version(MIN_SUPPORTED_API_VERSION, experimental=True) @wsgi.action('promote') @wsgi.response(202) @wsgi.Controller.authorize def promote(self, req, id, body): """Promote a replica to active state.""" context = req.environ['manila.context'] try: replica = db.share_replica_get(context, id) except exception.ShareReplicaNotFound: msg = _("No replica exists with ID %s.") raise exc.HTTPNotFound(explanation=msg % id) replica_state = replica.get('replica_state') if replica_state == constants.REPLICA_STATE_ACTIVE: return webob.Response(status_int=http_client.OK) try: replica = self.share_api.promote_share_replica(context, replica) except exception.ReplicationException as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) except exception.AdminRequired as e: raise exc.HTTPForbidden(explanation=six.text_type(e)) return self._view_builder.detail(req, replica) @wsgi.Controller.api_version(MIN_SUPPORTED_API_VERSION, experimental=True) @wsgi.action('reset_status') def reset_status(self, req, id, body): """Reset the 'status' attribute in the database.""" return self._reset_status(req, id, body) @wsgi.Controller.api_version(MIN_SUPPORTED_API_VERSION, experimental=True) @wsgi.action('force_delete') def force_delete(self, req, id, body): """Force deletion on the database, attempt on the backend.""" return self._force_delete(req, id, body) @wsgi.Controller.api_version(MIN_SUPPORTED_API_VERSION, experimental=True) @wsgi.action('reset_replica_state') @wsgi.Controller.authorize def reset_replica_state(self, req, id, body): """Reset the 'replica_state' attribute in the database.""" return self._reset_status(req, id, body, status_attr='replica_state') @wsgi.Controller.api_version(MIN_SUPPORTED_API_VERSION, experimental=True) @wsgi.action('resync') @wsgi.response(202) @wsgi.Controller.authorize def resync(self, req, id, body): """Attempt to update/sync the replica with its source.""" context = req.environ['manila.context'] try: replica = db.share_replica_get(context, id) except 
exception.ShareReplicaNotFound: msg = _("No replica exists with ID %s.") raise exc.HTTPNotFound(explanation=msg % id) replica_state = replica.get('replica_state') if replica_state == constants.REPLICA_STATE_ACTIVE: return webob.Response(status_int=http_client.OK) try: self.share_api.update_share_replica(context, replica) except exception.InvalidHost as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) def create_resource(): return wsgi.Resource(ShareReplicationController()) manila-10.0.0/manila/api/v2/share_instance_export_locations.py0000664000175000017500000000604113656750227024446 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from webob import exc from manila.api.openstack import wsgi from manila.api.views import export_locations as export_locations_views from manila.db import api as db_api from manila import exception from manila.i18n import _ from manila import policy class ShareInstanceExportLocationController(wsgi.Controller): """The Share Instance Export Locations API controller.""" def __init__(self): self._view_builder_class = export_locations_views.ViewBuilder self.resource_name = 'share_instance_export_location' super(ShareInstanceExportLocationController, self).__init__() def _verify_share_instance(self, context, share_instance_id): try: share_instance = db_api.share_instance_get(context, share_instance_id, with_share_data=True) if not share_instance['is_public']: policy.check_policy(context, 'share_instance', 'show', share_instance) except exception.NotFound: msg = _("Share instance '%s' not found.") % share_instance_id raise exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version('2.9') @wsgi.Controller.authorize def index(self, req, share_instance_id): """Return a list of export locations for the share instance.""" context = req.environ['manila.context'] self._verify_share_instance(context, share_instance_id) export_locations = ( db_api.share_export_locations_get_by_share_instance_id( context, share_instance_id)) return self._view_builder.summary_list(req, export_locations) @wsgi.Controller.api_version('2.9') @wsgi.Controller.authorize def show(self, req, share_instance_id, export_location_uuid): """Return data about the requested export location.""" context = req.environ['manila.context'] self._verify_share_instance(context, share_instance_id) try: export_location = db_api.share_export_location_get_by_uuid( context, export_location_uuid) return self._view_builder.detail(req, export_location) except exception.ExportLocationNotFound as e: raise exc.HTTPNotFound(explanation=six.text_type(e)) def create_resource(): return wsgi.Resource(ShareInstanceExportLocationController()) manila-10.0.0/manila/api/v2/share_instances.py0000664000175000017500000000762313656750227021164 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from manila.api import common from manila.api.openstack import wsgi from manila.api.views import share_instance as instance_view from manila import db from manila import exception from manila import share class ShareInstancesController(wsgi.Controller, wsgi.AdminActionsMixin): """The share instances API controller for the OpenStack API.""" resource_name = 'share_instance' _view_builder_class = instance_view.ViewBuilder def __init__(self): self.share_api = share.API() super(ShareInstancesController, self).__init__() def _get(self, *args, **kwargs): return db.share_instance_get(*args, **kwargs) def _update(self, *args, **kwargs): db.share_instance_update(*args, **kwargs) def _delete(self, *args, **kwargs): return self.share_api.delete_instance(*args, **kwargs) @wsgi.Controller.api_version('2.3', '2.6') @wsgi.action('os-reset_status') def instance_reset_status_legacy(self, req, id, body): return self._reset_status(req, id, body) @wsgi.Controller.api_version('2.7') @wsgi.action('reset_status') def instance_reset_status(self, req, id, body): return self._reset_status(req, id, body) @wsgi.Controller.api_version('2.3', '2.6') @wsgi.action('os-force_delete') def instance_force_delete_legacy(self, req, id, body): return self._force_delete(req, id, body) @wsgi.Controller.api_version('2.7') @wsgi.action('force_delete') def instance_force_delete(self, req, id, body): return self._force_delete(req, id, body) @wsgi.Controller.api_version("2.3", "2.34") # noqa @wsgi.Controller.authorize def index(self, req): # pylint: disable=function-redefined context = req.environ['manila.context'] req.GET.pop('export_location_id', None) req.GET.pop('export_location_path', None) instances = db.share_instances_get_all(context) return self._view_builder.detail_list(req, instances) @wsgi.Controller.api_version("2.35") # noqa @wsgi.Controller.authorize def index(self, req): # pylint: disable=function-redefined context = req.environ['manila.context'] filters = {} filters.update(req.GET) common.remove_invalid_options( context, filters, ('export_location_id', 'export_location_path')) instances = db.share_instances_get_all(context, filters) return self._view_builder.detail_list(req, instances) @wsgi.Controller.api_version("2.3") @wsgi.Controller.authorize def show(self, req, id): context = req.environ['manila.context'] try: instance = db.share_instance_get(context, id) except exception.NotFound: raise exc.HTTPNotFound() return self._view_builder.detail(req, instance) @wsgi.Controller.api_version("2.3") @wsgi.Controller.authorize('index') def get_share_instances(self, req, share_id): context = req.environ['manila.context'] try: share = self.share_api.get(context, share_id) except exception.NotFound: raise exc.HTTPNotFound() view = instance_view.ViewBuilder() return view.detail_list(req, share.instances) def create_resource(): return wsgi.Resource(ShareInstancesController()) manila-10.0.0/manila/api/v2/share_types.py0000664000175000017500000004132013656750227020331 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # Copyright (c) 2014 NetApp, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The share type API controller module..""" import ast from oslo_log import log from oslo_utils import strutils from oslo_utils import uuidutils import six from six.moves import http_client import webob from webob import exc from manila.api.openstack import api_version_request as api_version from manila.api.openstack import wsgi from manila.api.views import types as views_types from manila.common import constants from manila import exception from manila.i18n import _ from manila import rpc from manila.share import share_types LOG = log.getLogger(__name__) class ShareTypesController(wsgi.Controller): """The share types API controller for the OpenStack API.""" resource_name = 'share_type' _view_builder_class = views_types.ViewBuilder def __getattr__(self, key): if key == 'os-share-type-access': return self.share_type_access return super(ShareTypesController, self).__getattribute__(key) def _notify_share_type_error(self, context, method, payload): rpc.get_notifier('shareType').error(context, method, payload) def _notify_share_type_info(self, context, method, share_type): payload = dict(share_types=share_type) rpc.get_notifier('shareType').info(context, method, payload) def _check_body(self, body, action_name): if not self.is_valid_body(body, action_name): raise webob.exc.HTTPBadRequest() access = body[action_name] project = access.get('project') if not uuidutils.is_uuid_like(project): msg = _("Bad project format: " "project is not in proper format (%s)") % project raise webob.exc.HTTPBadRequest(explanation=msg) @wsgi.Controller.authorize def index(self, req): """Returns the list of share types.""" limited_types = self._get_share_types(req) req.cache_db_share_types(limited_types) return self._view_builder.index(req, limited_types) @wsgi.Controller.authorize def show(self, req, id): """Return a single share type item.""" context = req.environ['manila.context'] try: share_type = self._show_share_type_details(context, id) except exception.NotFound: msg = _("Share type not found.") raise exc.HTTPNotFound(explanation=msg) share_type['id'] = six.text_type(share_type['id']) req.cache_db_share_type(share_type) return self._view_builder.show(req, share_type) def _show_share_type_details(self, context, id): share_type = share_types.get_share_type(context, id) required_extra_specs = {} try: required_extra_specs = share_types.get_valid_required_extra_specs( share_type['extra_specs']) except exception.InvalidExtraSpec: LOG.exception('Share type %(share_type_id)s has invalid required' ' extra specs.', {'share_type_id': id}) share_type['required_extra_specs'] = required_extra_specs return share_type @wsgi.Controller.authorize def default(self, req): """Return default volume type.""" context = req.environ['manila.context'] try: share_type = share_types.get_default_share_type(context) except exception.NotFound: msg = _("Share type not found") raise exc.HTTPNotFound(explanation=msg) if not share_type: msg = _("Default share type not found") raise 
exc.HTTPNotFound(explanation=msg) share_type['id'] = six.text_type(share_type['id']) return self._view_builder.show(req, share_type) def _get_share_types(self, req): """Helper function that returns a list of type dicts.""" filters = {} context = req.environ['manila.context'] if context.is_admin: # Only admin has query access to all share types filters['is_public'] = self._parse_is_public( req.params.get('is_public')) else: filters['is_public'] = True extra_specs = req.params.get('extra_specs', {}) extra_specs_disallowed = (req.api_version_request < api_version.APIVersionRequest("2.43")) if extra_specs and extra_specs_disallowed: msg = _("Filter by 'extra_specs' is not supported by this " "microversion. Use 2.43 or greater microversion to " "be able to use filter search by 'extra_specs.") raise webob.exc.HTTPBadRequest(explanation=msg) elif extra_specs: extra_specs = ast.literal_eval(extra_specs) filters['extra_specs'] = share_types.sanitize_extra_specs( extra_specs) limited_types = share_types.get_all_types( context, search_opts=filters).values() return list(limited_types) @staticmethod def _parse_is_public(is_public): """Parse is_public into something usable. * True: API should list public share types only * False: API should list private share types only * None: API should list both public and private share types """ if is_public is None: # preserve default value of showing only public types return True elif six.text_type(is_public).lower() == "all": return None else: try: return strutils.bool_from_string(is_public, strict=True) except ValueError: msg = _('Invalid is_public filter [%s]') % is_public raise exc.HTTPBadRequest(explanation=msg) @wsgi.Controller.api_version("1.0", "2.23") @wsgi.action("create") def create(self, req, body): return self._create(req, body, set_defaults=True) @wsgi.Controller.api_version("2.24") # noqa @wsgi.action("create") def create(self, req, body): # pylint: disable=function-redefined return self._create(req, body, set_defaults=False) @wsgi.Controller.authorize('create') def _create(self, req, body, set_defaults=False): """Creates a new share type.""" context = req.environ['manila.context'] if (not self.is_valid_body(body, 'share_type') and not self.is_valid_body(body, 'volume_type')): raise webob.exc.HTTPBadRequest() elif self.is_valid_body(body, 'share_type'): share_type = body['share_type'] else: share_type = body['volume_type'] name = share_type.get('name') specs = share_type.get('extra_specs', {}) description = share_type.get('description') if (description and req.api_version_request < api_version.APIVersionRequest("2.41")): msg = _("'description' key is not supported by this " "microversion. Use 2.41 or greater microversion " "to be able to use 'description' in share type.") raise webob.exc.HTTPBadRequest(explanation=msg) is_public = share_type.get( 'os-share-type-access:is_public', share_type.get('share_type_access:is_public', True), ) if (name is None or name == "" or len(name) > 255 or (description and len(description) > 255)): msg = _("Type name or description is not valid.") raise webob.exc.HTTPBadRequest(explanation=msg) # Note(cknight): Set the default extra spec value for snapshot_support # for API versions before it was required. 
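        # Descriptive note (added): the version-gated 'create' wrappers above
        # pass set_defaults=True only for microversions earlier than 2.24, so
        # a missing snapshot_support extra spec is filled in as True for those
        # older requests, preserving the behaviour from before the spec was
        # required to be explicit.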
if set_defaults: if constants.ExtraSpecs.SNAPSHOT_SUPPORT not in specs: specs[constants.ExtraSpecs.SNAPSHOT_SUPPORT] = True try: required_extra_specs = ( share_types.get_valid_required_extra_specs(specs) ) share_types.create(context, name, specs, is_public, description=description) share_type = share_types.get_share_type_by_name(context, name) share_type['required_extra_specs'] = required_extra_specs req.cache_db_share_type(share_type) self._notify_share_type_info( context, 'share_type.create', share_type) except exception.InvalidExtraSpec as e: raise webob.exc.HTTPBadRequest(explanation=six.text_type(e)) except exception.ShareTypeExists as err: notifier_err = dict(share_types=share_type, error_message=six.text_type(err)) self._notify_share_type_error(context, 'share_type.create', notifier_err) raise webob.exc.HTTPConflict(explanation=six.text_type(err)) except exception.NotFound as err: notifier_err = dict(share_types=share_type, error_message=six.text_type(err)) self._notify_share_type_error(context, 'share_type.create', notifier_err) raise webob.exc.HTTPNotFound() return self._view_builder.show(req, share_type) @wsgi.action("delete") @wsgi.Controller.authorize('delete') def _delete(self, req, id): """Deletes an existing share type.""" context = req.environ['manila.context'] try: share_type = share_types.get_share_type(context, id) share_types.destroy(context, share_type['id']) self._notify_share_type_info( context, 'share_type.delete', share_type) except exception.ShareTypeInUse as err: notifier_err = dict(id=id, error_message=six.text_type(err)) self._notify_share_type_error(context, 'share_type.delete', notifier_err) msg = 'Target share type is still in use.' raise webob.exc.HTTPBadRequest(explanation=msg) except exception.NotFound as err: notifier_err = dict(id=id, error_message=six.text_type(err)) self._notify_share_type_error(context, 'share_type.delete', notifier_err) raise webob.exc.HTTPNotFound() return webob.Response(status_int=http_client.ACCEPTED) @wsgi.Controller.api_version("2.50") @wsgi.action("update") @wsgi.Controller.authorize def update(self, req, id, body): """Update name description is_public for a given share type.""" context = req.environ['manila.context'] if (not self.is_valid_body(body, 'share_type') and not self.is_valid_body(body, 'volume_type')): raise webob.exc.HTTPBadRequest() elif self.is_valid_body(body, 'share_type'): sha_type = body['share_type'] else: sha_type = body['volume_type'] name = sha_type.get('name') description = sha_type.get('description') is_public = sha_type.get('share_type_access:is_public', None) if is_public is not None: try: is_public = strutils.bool_from_string(is_public, strict=True) except ValueError: msg = _("share_type_access:is_public has a non-boolean" " value.") raise webob.exc.HTTPBadRequest(explanation=msg) # If name specified, name can not be empty or greater than 255. if name is not None: if len(name.strip()) == 0: msg = _("Share type name cannot be empty.") raise webob.exc.HTTPBadRequest(explanation=msg) if len(name) > 255: msg = _("Share type name cannot be greater than 255 " "characters in length.") raise webob.exc.HTTPBadRequest(explanation=msg) # If description specified, length can not greater than 255. if description and len(description) > 255: msg = _("Share type description cannot be greater than 255 " "characters in length.") raise webob.exc.HTTPBadRequest(explanation=msg) # Name, description and is_public can not be None. # Specify one of them, or a combination thereof. 
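        # Illustrative request body for this update (values are examples only;
        # any subset of the three keys may be supplied):
        #   {"share_type": {"name": "gold",
        #                   "description": "premium backend",
        #                   "share_type_access:is_public": false}}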
if name is None and description is None and is_public is None: msg = _("Specify share type name, description, " "share_type_access:is_public or a combination thereof.") raise webob.exc.HTTPBadRequest(explanation=msg) try: share_types.update(context, id, name, description, is_public=is_public) # Get the updated sha_type = self._show_share_type_details(context, id) req.cache_resource(sha_type, name='types') self._notify_share_type_info( context, 'share_type.update', sha_type) except exception.ShareTypeNotFound as err: notifier_err = {"id": id, "error_message": err} self._notify_share_type_error( context, 'share_type.update', notifier_err) # Not found exception will be handled at the wsgi level raise except exception.ShareTypeExists as err: notifier_err = {"share_type": sha_type, "error_message": err} self._notify_share_type_error( context, 'share_type.update', notifier_err) raise webob.exc.HTTPConflict(explanation=err.msg) except exception.ShareTypeUpdateFailed as err: notifier_err = {"share_type": sha_type, "error_message": err} self._notify_share_type_error( context, 'share_type.update', notifier_err) raise webob.exc.HTTPInternalServerError( explanation=err.msg) return self._view_builder.show(req, sha_type) @wsgi.Controller.authorize('list_project_access') def share_type_access(self, req, id): context = req.environ['manila.context'] try: share_type = share_types.get_share_type( context, id, expected_fields=['projects']) except exception.ShareTypeNotFound: explanation = _("Share type %s not found.") % id raise webob.exc.HTTPNotFound(explanation=explanation) if share_type['is_public']: expl = _("Access list not available for public share types.") raise webob.exc.HTTPNotFound(explanation=expl) return self._view_builder.share_type_access(req, share_type) @wsgi.action('addProjectAccess') @wsgi.Controller.authorize('add_project_access') def _add_project_access(self, req, id, body): context = req.environ['manila.context'] self._check_body(body, 'addProjectAccess') project = body['addProjectAccess']['project'] self._verify_if_non_public_share_type(context, id) try: share_types.add_share_type_access(context, id, project) except exception.ShareTypeAccessExists as err: raise webob.exc.HTTPConflict(explanation=six.text_type(err)) return webob.Response(status_int=http_client.ACCEPTED) @wsgi.action('removeProjectAccess') @wsgi.Controller.authorize('remove_project_access') def _remove_project_access(self, req, id, body): context = req.environ['manila.context'] self._check_body(body, 'removeProjectAccess') project = body['removeProjectAccess']['project'] self._verify_if_non_public_share_type(context, id) try: share_types.remove_share_type_access(context, id, project) except exception.ShareTypeAccessNotFound as err: raise webob.exc.HTTPNotFound(explanation=six.text_type(err)) return webob.Response(status_int=http_client.ACCEPTED) def _verify_if_non_public_share_type(self, context, share_type_id): try: share_type = share_types.get_share_type(context, share_type_id) if share_type['is_public']: msg = _("Type access modification is not applicable to " "public share type.") raise webob.exc.HTTPConflict(explanation=msg) except exception.ShareTypeNotFound as err: raise webob.exc.HTTPNotFound(explanation=six.text_type(err)) def create_resource(): return wsgi.Resource(ShareTypesController()) manila-10.0.0/manila/api/v2/share_group_snapshots.py0000664000175000017500000002511413656750227022426 0ustar zuulzuul00000000000000# Copyright 2015 Alex Meade # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from oslo_utils import uuidutils import six from six.moves import http_client import webob from webob import exc from manila.api import common from manila.api.openstack import wsgi import manila.api.views.share_group_snapshots as share_group_snapshots_views from manila import db from manila import exception from manila.i18n import _ import manila.share_group.api as share_group_api LOG = log.getLogger(__name__) SG_GRADUATION_VERSION = '2.55' class ShareGroupSnapshotController(wsgi.Controller, wsgi.AdminActionsMixin): """The share group snapshots API controller for the OpenStack API.""" resource_name = 'share_group_snapshot' _view_builder_class = ( share_group_snapshots_views.ShareGroupSnapshotViewBuilder) def __init__(self): super(ShareGroupSnapshotController, self).__init__() self.share_group_api = share_group_api.API() def _get_share_group_snapshot(self, context, sg_snapshot_id): try: return self.share_group_api.get_share_group_snapshot( context, sg_snapshot_id) except exception.NotFound: msg = _("Share group snapshot %s not found.") % sg_snapshot_id raise exc.HTTPNotFound(explanation=msg) @wsgi.Controller.authorize('get') def _show(self, req, id): """Return data about the given share group snapshot.""" context = req.environ['manila.context'] sg_snapshot = self._get_share_group_snapshot(context, id) return self._view_builder.detail(req, sg_snapshot) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def show(self, req, id): return self._show(req, id) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def show(self, req, id): # pylint: disable=function-redefined return self._show(req, id) @wsgi.Controller.authorize('delete') def _delete_group_snapshot(self, req, id): """Delete a share group snapshot.""" context = req.environ['manila.context'] LOG.info("Delete share group snapshot with id: %s", id, context=context) sg_snapshot = self._get_share_group_snapshot(context, id) try: self.share_group_api.delete_share_group_snapshot( context, sg_snapshot) except exception.InvalidShareGroupSnapshot as e: raise exc.HTTPConflict(explanation=six.text_type(e)) return webob.Response(status_int=http_client.ACCEPTED) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def delete(self, req, id): return self._delete_group_snapshot(req, id) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def delete(self, req, id): # pylint: disable=function-redefined return self._delete_group_snapshot(req, id) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def index(self, req): """Returns a summary list of share group snapshots.""" return self._get_share_group_snaps(req, is_detail=False) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def index(self, req): # pylint: disable=function-redefined """Returns a summary list of share group snapshots.""" return self._get_share_group_snaps(req, is_detail=False) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def detail(self, req): """Returns 
a detailed list of share group snapshots.""" return self._get_share_group_snaps(req, is_detail=True) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def detail(self, req): # pylint: disable=function-redefined """Returns a detailed list of share group snapshots.""" return self._get_share_group_snaps(req, is_detail=True) @wsgi.Controller.authorize('get_all') def _get_share_group_snaps(self, req, is_detail): """Returns a list of share group snapshots.""" context = req.environ['manila.context'] search_opts = {} search_opts.update(req.GET) # Remove keys that are not related to group attrs search_opts.pop('limit', None) search_opts.pop('offset', None) sort_key = search_opts.pop('sort_key', 'created_at') sort_dir = search_opts.pop('sort_dir', 'desc') snaps = self.share_group_api.get_all_share_group_snapshots( context, detailed=is_detail, search_opts=search_opts, sort_dir=sort_dir, sort_key=sort_key) limited_list = common.limited(snaps, req) if is_detail: snaps = self._view_builder.detail_list(req, limited_list) else: snaps = self._view_builder.summary_list(req, limited_list) return snaps @wsgi.Controller.authorize('update') def _update_group_snapshot(self, req, id, body): """Update a share group snapshot.""" context = req.environ['manila.context'] key = 'share_group_snapshot' if not self.is_valid_body(body, key): msg = _("'%s' is missing from the request body.") % key raise exc.HTTPBadRequest(explanation=msg) sg_snapshot_data = body[key] valid_update_keys = { 'name', 'description', } invalid_fields = set(sg_snapshot_data.keys()) - valid_update_keys if invalid_fields: msg = _("The fields %s are invalid or not allowed to be updated.") raise exc.HTTPBadRequest(explanation=msg % invalid_fields) sg_snapshot = self._get_share_group_snapshot(context, id) sg_snapshot = self.share_group_api.update_share_group_snapshot( context, sg_snapshot, sg_snapshot_data) return self._view_builder.detail(req, sg_snapshot) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def update(self, req, id, body): return self._update_group_snapshot(req, id, body) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def update(self, req, id, body): # pylint: disable=function-redefined return self._update_group_snapshot(req, id, body) @wsgi.Controller.authorize('create') def _create(self, req, body): """Creates a new share group snapshot.""" context = req.environ['manila.context'] if not self.is_valid_body(body, 'share_group_snapshot'): msg = _("'share_group_snapshot' is missing from the request body.") raise exc.HTTPBadRequest(explanation=msg) share_group_snapshot = body.get('share_group_snapshot', {}) share_group_id = share_group_snapshot.get('share_group_id') if not share_group_id: msg = _("Must supply 'share_group_id' attribute.") raise exc.HTTPBadRequest(explanation=msg) if not uuidutils.is_uuid_like(share_group_id): msg = _("The 'share_group_id' attribute must be a uuid.") raise exc.HTTPBadRequest(explanation=six.text_type(msg)) kwargs = {"share_group_id": share_group_id} if 'name' in share_group_snapshot: kwargs['name'] = share_group_snapshot.get('name') if 'description' in share_group_snapshot: kwargs['description'] = share_group_snapshot.get('description') try: new_snapshot = self.share_group_api.create_share_group_snapshot( context, **kwargs) except exception.ShareGroupNotFound as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) except exception.InvalidShareGroup as e: raise exc.HTTPConflict(explanation=six.text_type(e)) return self._view_builder.detail(req, 
dict(new_snapshot.items())) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) @wsgi.response(202) def create(self, req, body): return self._create(req, body) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa @wsgi.response(202) def create(self, req, body): # pylint: disable=function-redefined return self._create(req, body) @wsgi.Controller.authorize('get') def _members(self, req, id): """Returns a list of share group snapshot members.""" context = req.environ['manila.context'] snaps = self.share_group_api.get_all_share_group_snapshot_members( context, id) limited_list = common.limited(snaps, req) snaps = self._view_builder.member_list(req, limited_list) return snaps @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def members(self, req, id): return self._members(req, id) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def members(self, req, id): # pylint: disable=function-redefined return self._members(req, id) def _update(self, *args, **kwargs): db.share_group_snapshot_update(*args, **kwargs) def _get(self, *args, **kwargs): return self.share_group_api.get_share_group_snapshot(*args, **kwargs) def _delete(self, context, resource, force=True): db.share_group_snapshot_destroy(context.elevated(), resource['id']) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) @wsgi.action('reset_status') def share_group_snapshot_reset_status(self, req, id, body): return self._reset_status(req, id, body) # pylint: disable=function-redefined @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa @wsgi.action('reset_status') def share_group_snapshot_reset_status(self, req, id, body): return self._reset_status(req, id, body) # pylint: enable=function-redefined @wsgi.Controller.api_version('2.31', '2.54', experimental=True) @wsgi.action('force_delete') def share_group_snapshot_force_delete(self, req, id, body): return self._force_delete(req, id, body) # pylint: disable=function-redefined @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa @wsgi.action('force_delete') def share_group_snapshot_force_delete(self, req, id, body): return self._force_delete(req, id, body) def create_resource(): return wsgi.Resource(ShareGroupSnapshotController()) manila-10.0.0/manila/api/v2/share_snapshot_export_locations.py0000664000175000017500000000474113656750227024506 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
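# Descriptive note (added): the controller below lists and shows export
# locations of share snapshots. Both handlers require API microversion 2.32
# or later, and _verify_snapshot() enforces the share 'get' policy for
# snapshots of non-public shares before any location data is returned.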
from webob import exc from manila.api.openstack import wsgi from manila.api.views import share_snapshot_export_locations from manila.db import api as db_api from manila import exception from manila.i18n import _ from manila import policy class ShareSnapshotExportLocationController(wsgi.Controller): def __init__(self): self._view_builder_class = ( share_snapshot_export_locations.ViewBuilder) self.resource_name = 'share_snapshot_export_location' super(ShareSnapshotExportLocationController, self).__init__() @wsgi.Controller.api_version('2.32') @wsgi.Controller.authorize def index(self, req, snapshot_id): context = req.environ['manila.context'] snapshot = self._verify_snapshot(context, snapshot_id) return self._view_builder.list_export_locations( req, snapshot['export_locations']) @wsgi.Controller.api_version('2.32') @wsgi.Controller.authorize def show(self, req, snapshot_id, export_location_id): context = req.environ['manila.context'] self._verify_snapshot(context, snapshot_id) export_location = db_api.share_snapshot_instance_export_location_get( context, export_location_id) return self._view_builder.detail_export_location(req, export_location) def _verify_snapshot(self, context, snapshot_id): try: snapshot = db_api.share_snapshot_get(context, snapshot_id) share = db_api.share_get(context, snapshot['share_id']) if not share['is_public']: policy.check_policy(context, 'share', 'get', share) except exception.NotFound: msg = _("Snapshot '%s' not found.") % snapshot_id raise exc.HTTPNotFound(explanation=msg) return snapshot def create_resource(): return wsgi.Resource(ShareSnapshotExportLocationController()) manila-10.0.0/manila/api/v2/share_export_locations.py0000664000175000017500000000776313656750227022576 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
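# Descriptive note (added): share export locations are exposed starting with
# microversion 2.9. From microversion 2.47 onward, listings additionally skip
# export locations that belong to non-active (secondary) share replicas,
# which is why index() and show() below are each defined twice with different
# version ranges.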
from webob import exc from manila.api.openstack import wsgi from manila.api.views import export_locations as export_locations_views from manila.db import api as db_api from manila import exception from manila.i18n import _ from manila import policy class ShareExportLocationController(wsgi.Controller): """The Share Export Locations API controller.""" def __init__(self): self._view_builder_class = export_locations_views.ViewBuilder self.resource_name = 'share_export_location' super(ShareExportLocationController, self).__init__() def _verify_share(self, context, share_id): try: share = db_api.share_get(context, share_id) if not share['is_public']: policy.check_policy(context, 'share', 'get', share) except exception.NotFound: msg = _("Share '%s' not found.") % share_id raise exc.HTTPNotFound(explanation=msg) @wsgi.Controller.authorize('index') def _index(self, req, share_id, ignore_secondary_replicas=False): context = req.environ['manila.context'] self._verify_share(context, share_id) kwargs = { 'include_admin_only': context.is_admin, 'ignore_migration_destination': True, 'ignore_secondary_replicas': ignore_secondary_replicas, } export_locations = db_api.share_export_locations_get_by_share_id( context, share_id, **kwargs) return self._view_builder.summary_list(req, export_locations) @wsgi.Controller.authorize('show') def _show(self, req, share_id, export_location_uuid, ignore_secondary_replicas=False): context = req.environ['manila.context'] self._verify_share(context, share_id) try: export_location = db_api.share_export_location_get_by_uuid( context, export_location_uuid, ignore_secondary_replicas=ignore_secondary_replicas) except exception.ExportLocationNotFound: msg = _("Export location '%s' not found.") % export_location_uuid raise exc.HTTPNotFound(explanation=msg) if export_location.is_admin_only and not context.is_admin: raise exc.HTTPForbidden() return self._view_builder.detail(req, export_location) @wsgi.Controller.api_version('2.9', '2.46') def index(self, req, share_id): """Return a list of export locations for share.""" return self._index(req, share_id) @wsgi.Controller.api_version('2.47') # noqa: F811 def index(self, req, share_id): # pylint: disable=function-redefined """Return a list of export locations for share.""" return self._index(req, share_id, ignore_secondary_replicas=True) @wsgi.Controller.api_version('2.9', '2.46') def show(self, req, share_id, export_location_uuid): """Return data about the requested export location.""" return self._show(req, share_id, export_location_uuid) @wsgi.Controller.api_version('2.47') # noqa: F811 def show(self, req, share_id, # pylint: disable=function-redefined export_location_uuid): """Return data about the requested export location.""" return self._show(req, share_id, export_location_uuid, ignore_secondary_replicas=True) def create_resource(): return wsgi.Resource(ShareExportLocationController()) manila-10.0.0/manila/api/v2/share_snapshot_instances.py0000664000175000017500000000607713656750227023105 0ustar zuulzuul00000000000000# Copyright 2016 Huawei Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from manila.api.openstack import wsgi from manila.api.views import share_snapshot_instances as instance_view from manila import db from manila import exception from manila.i18n import _ from manila import share class ShareSnapshotInstancesController(wsgi.Controller, wsgi.AdminActionsMixin): """The share snapshot instances API controller for the OpenStack API.""" resource_name = 'share_snapshot_instance' _view_builder_class = instance_view.ViewBuilder def __init__(self): self.share_api = share.API() super(ShareSnapshotInstancesController, self).__init__() @wsgi.Controller.api_version('2.19') @wsgi.Controller.authorize def show(self, req, id): context = req.environ['manila.context'] try: snapshot_instance = db.share_snapshot_instance_get( context, id) except exception.ShareSnapshotInstanceNotFound: msg = (_("Snapshot instance %s not found.") % id) raise exc.HTTPNotFound(explanation=msg) return self._view_builder.detail(req, snapshot_instance) @wsgi.Controller.api_version('2.19') @wsgi.Controller.authorize def index(self, req): """Return a summary list of snapshot instances.""" return self._get_instances(req) @wsgi.Controller.api_version('2.19') @wsgi.Controller.authorize def detail(self, req): """Returns a detailed list of snapshot instances.""" return self._get_instances(req, is_detail=True) def _get_instances(self, req, is_detail=False): """Returns list of snapshot instances.""" context = req.environ['manila.context'] snapshot_id = req.params.get('snapshot_id') instances = db.share_snapshot_instance_get_all_with_filters( context, {'snapshot_ids': snapshot_id}) if is_detail: instances = self._view_builder.detail_list(req, instances) else: instances = self._view_builder.summary_list(req, instances) return instances @wsgi.Controller.api_version('2.19') @wsgi.action('reset_status') def reset_status(self, req, id, body): """Reset the 'status' attribute in the database.""" return self._reset_status(req, id, body) def _update(self, *args, **kwargs): db.share_snapshot_instance_update(*args, **kwargs) def create_resource(): return wsgi.Resource(ShareSnapshotInstancesController()) manila-10.0.0/manila/api/v2/messages.py0000664000175000017500000001150213656750227017611 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The messages API controller module. 
This module handles the following requests: GET /messages GET /messages/ DELETE /messages/ """ from oslo_utils import timeutils from six.moves import http_client import webob from webob import exc from manila.api import common from manila.api.openstack import wsgi from manila.api.views import messages as messages_view from manila import exception from manila.i18n import _ from manila.message import api as message_api MESSAGES_BASE_MICRO_VERSION = '2.37' MESSAGES_QUERY_BY_TIMESTAMP = '2.52' class MessagesController(wsgi.Controller): """The User Messages API controller for the OpenStack API.""" _view_builder_class = messages_view.ViewBuilder resource_name = 'message' def __init__(self): self.message_api = message_api.API() super(MessagesController, self).__init__() @wsgi.Controller.api_version(MESSAGES_BASE_MICRO_VERSION) @wsgi.Controller.authorize('get') def show(self, req, id): """Return the given message.""" context = req.environ['manila.context'] try: message = self.message_api.get(context, id) except exception.MessageNotFound as error: raise exc.HTTPNotFound(explanation=error.msg) return self._view_builder.detail(req, message) @wsgi.Controller.api_version(MESSAGES_BASE_MICRO_VERSION) @wsgi.Controller.authorize @wsgi.action("delete") def delete(self, req, id): """Delete a message.""" context = req.environ['manila.context'] try: message = self.message_api.get(context, id) self.message_api.delete(context, message) except exception.MessageNotFound as error: raise exc.HTTPNotFound(explanation=error.msg) return webob.Response(status_int=http_client.NO_CONTENT) @wsgi.Controller.api_version(MESSAGES_BASE_MICRO_VERSION, '2.51') @wsgi.Controller.authorize('get_all') def index(self, req): """Returns a list of messages, transformed through view builder.""" context = req.environ['manila.context'] filters = req.params.copy() params = common.get_pagination_params(req) limit, offset = [params.get('limit'), params.get('offset')] sort_key, sort_dir = common.get_sort_params(filters) filters.pop('created_since', None) filters.pop('created_before', None) messages = self.message_api.get_all(context, search_opts=filters, limit=limit, offset=offset, sort_key=sort_key, sort_dir=sort_dir) return self._view_builder.index(req, messages) @wsgi.Controller.api_version(MESSAGES_QUERY_BY_TIMESTAMP) # noqa: F811 @wsgi.Controller.authorize('get_all') def index(self, req): # pylint: disable=function-redefined """Returns a list of messages, transformed through view builder.""" context = req.environ['manila.context'] filters = req.params.copy() params = common.get_pagination_params(req) limit, offset = [params.get('limit'), params.get('offset')] sort_key, sort_dir = common.get_sort_params(filters) for time_comparison_filter in ['created_since', 'created_before']: if time_comparison_filter in filters: time_str = filters.get(time_comparison_filter) try: parsed_time = timeutils.parse_isotime(time_str) except ValueError: msg = _('Invalid value specified for the query ' 'key: %s') % time_comparison_filter raise exc.HTTPBadRequest(explanation=msg) filters[time_comparison_filter] = parsed_time messages = self.message_api.get_all(context, search_opts=filters, limit=limit, offset=offset, sort_key=sort_key, sort_dir=sort_dir) return self._view_builder.index(req, messages) def create_resource(): return wsgi.Resource(MessagesController()) manila-10.0.0/manila/api/v2/share_replica_export_locations.py0000664000175000017500000000556513656750227024273 0ustar zuulzuul00000000000000# All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from webob import exc from manila.api.openstack import wsgi from manila.api.views import export_locations as export_locations_views from manila.db import api as db_api from manila import exception from manila.i18n import _ class ShareReplicaExportLocationController(wsgi.Controller): """The Share Instance Export Locations API controller.""" def __init__(self): self._view_builder_class = export_locations_views.ViewBuilder self.resource_name = 'share_replica_export_location' super(ShareReplicaExportLocationController, self).__init__() def _verify_share_replica(self, context, share_replica_id): try: db_api.share_replica_get(context, share_replica_id) except exception.NotFound: msg = _("Share replica '%s' not found.") % share_replica_id raise exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version('2.47', experimental=True) @wsgi.Controller.authorize def index(self, req, share_replica_id): """Return a list of export locations for the share instance.""" context = req.environ['manila.context'] self._verify_share_replica(context, share_replica_id) export_locations = ( db_api.share_export_locations_get_by_share_instance_id( context, share_replica_id, include_admin_only=context.is_admin) ) return self._view_builder.summary_list(req, export_locations, replica=True) @wsgi.Controller.api_version('2.47', experimental=True) @wsgi.Controller.authorize def show(self, req, share_replica_id, export_location_uuid): """Return data about the requested export location.""" context = req.environ['manila.context'] self._verify_share_replica(context, share_replica_id) try: export_location = db_api.share_export_location_get_by_uuid( context, export_location_uuid) return self._view_builder.detail(req, export_location, replica=True) except exception.ExportLocationNotFound as e: raise exc.HTTPNotFound(explanation=six.text_type(e)) def create_resource(): return wsgi.Resource(ShareReplicaExportLocationController()) manila-10.0.0/manila/api/v2/share_access_metadata.py0000664000175000017500000000610613656750227022271 0ustar zuulzuul00000000000000# Copyright 2018 Huawei Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
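# Descriptive note (added): metadata can be set on an access rule via
# update() and removed one key at a time via delete(); both handlers require
# microversion 2.45. Illustrative update() request body (keys and values are
# arbitrary examples):
#   {"metadata": {"purpose": "ci", "owner": "qa-team"}}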
"""The share access rule metadata api.""" import webob from manila.api.openstack import wsgi from manila.api.views import share_accesses as share_access_views from manila import db from manila import exception from manila.i18n import _ from manila import share class ShareAccessMetadataController(wsgi.Controller): """The Share access rule metadata API V2 controller.""" resource_name = 'share_access_metadata' _view_builder_class = share_access_views.ViewBuilder def __init__(self): super(ShareAccessMetadataController, self).__init__() self.share_api = share.API() @wsgi.Controller.api_version('2.45') @wsgi.Controller.authorize def update(self, req, access_id, body=None): context = req.environ['manila.context'] if not self.is_valid_body(body, 'metadata'): raise webob.exc.HTTPBadRequest() metadata = body['metadata'] md = self._update_share_access_metadata(context, access_id, metadata) return self._view_builder.view_metadata(req, md) @wsgi.Controller.api_version('2.45') @wsgi.Controller.authorize @wsgi.response(200) def delete(self, req, access_id, key): """Deletes an existing access metadata.""" context = req.environ['manila.context'] self._assert_access_exists(context, access_id) try: db.share_access_metadata_delete(context, access_id, key) except exception.ShareAccessMetadataNotFound as error: raise webob.exc.HTTPNotFound(explanation=error.msg) def _update_share_access_metadata(self, context, access_id, metadata): self._assert_access_exists(context, access_id) try: return self.share_api.update_share_access_metadata( context, access_id, metadata) except (ValueError, AttributeError): msg = _("Malformed request body") raise webob.exc.HTTPBadRequest(explanation=msg) except exception.InvalidMetadata as error: raise webob.exc.HTTPBadRequest(explanation=error.msg) except exception.InvalidMetadataSize as error: raise webob.exc.HTTPBadRequest(explanation=error.msg) def _assert_access_exists(self, context, access_id): try: self.share_api.access_get(context, access_id) except exception.NotFound as ex: raise webob.exc.HTTPNotFound(explanation=ex.msg) def create_resource(): return wsgi.Resource(ShareAccessMetadataController()) manila-10.0.0/manila/api/v2/share_group_type_specs.py0000664000175000017500000001512613656750227022564 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import six from six.moves import http_client import webob from manila.api import common from manila.api.openstack import wsgi from manila import db from manila import exception from manila.i18n import _ from manila.share_group import share_group_types SG_GRADUATION_VERSION = '2.55' class ShareGroupTypeSpecsController(wsgi.Controller): """The share group type specs API controller for the OpenStack API.""" resource_name = 'share_group_types_spec' def _get_group_specs(self, context, type_id): specs = db.share_group_type_specs_get(context, type_id) return {"group_specs": copy.deepcopy(specs)} def _assert_share_group_type_exists(self, context, type_id): try: share_group_types.get(context, type_id) except exception.NotFound as ex: raise webob.exc.HTTPNotFound(explanation=ex.msg) def _verify_group_specs(self, group_specs): def is_valid_string(v): return isinstance(v, six.string_types) and len(v) in range(1, 256) def is_valid_spec(k, v): valid_spec_key = is_valid_string(k) valid_type = is_valid_string(v) or isinstance(v, bool) return valid_spec_key and valid_type for k, v in group_specs.items(): if is_valid_string(k) and isinstance(v, dict): self._verify_group_specs(v) elif not is_valid_spec(k, v): expl = _('Invalid extra_spec: %(key)s: %(value)s') % { 'key': k, 'value': v } raise webob.exc.HTTPBadRequest(explanation=expl) @wsgi.Controller.authorize('index') def _index(self, req, id): """Returns the list of group specs for a given share group type.""" context = req.environ['manila.context'] self._assert_share_group_type_exists(context, id) return self._get_group_specs(context, id) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def index(self, req, id): return self._index(req, id) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def index(self, req, id): # pylint: disable=function-redefined return self._index(req, id) @wsgi.Controller.authorize('create') def _create(self, req, id, body=None): context = req.environ['manila.context'] if not self.is_valid_body(body, 'group_specs'): raise webob.exc.HTTPBadRequest() self._assert_share_group_type_exists(context, id) specs = body['group_specs'] self._verify_group_specs(specs) self._check_key_names(specs.keys()) db.share_group_type_specs_update_or_create(context, id, specs) return body @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def create(self, req, id, body=None): return self._create(req, id, body) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def create(self, req, id, body=None): # pylint: disable=function-redefined return self._create(req, id, body) @wsgi.Controller.authorize('update') def _update(self, req, id, key, body=None): context = req.environ['manila.context'] if not body: expl = _('Request body empty.') raise webob.exc.HTTPBadRequest(explanation=expl) self._assert_share_group_type_exists(context, id) if key not in body: expl = _('Request body and URI mismatch.') raise webob.exc.HTTPBadRequest(explanation=expl) if len(body) > 1: expl = _('Request body contains too many items.') raise webob.exc.HTTPBadRequest(explanation=expl) self._verify_group_specs(body) db.share_group_type_specs_update_or_create(context, id, body) return body @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def update(self, req, id, key, body=None): return self._update(req, id, key, body) # pylint: disable=function-redefined @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def update(self, req, id, key, body=None): return self._update(req, id, key, body) 
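    # NOTE (added): the duplicated index/create/update/show/delete
    # definitions in this class are intentional, not dead code: the wsgi
    # layer dispatches to the variant whose @api_version range matches the
    # request, so the experimental 2.31-2.54 handlers and the graduated
    # 2.55+ handlers reuse the same underlying _index/_create/_update/...
    # implementations.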
@wsgi.Controller.authorize('show') def _show(self, req, id, key): """Return a single group spec item.""" context = req.environ['manila.context'] self._assert_share_group_type_exists(context, id) specs = self._get_group_specs(context, id) if key in specs['group_specs']: return {key: specs['group_specs'][key]} else: raise webob.exc.HTTPNotFound() # pylint: enable=function-redefined @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def show(self, req, id, key): return self._show(req, id, key) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def show(self, req, id, key): # pylint: disable=function-redefined return self._show(req, id, key) @wsgi.Controller.authorize('delete') def _delete(self, req, id, key): """Deletes an existing group spec.""" context = req.environ['manila.context'] self._assert_share_group_type_exists(context, id) try: db.share_group_type_specs_delete(context, id, key) except exception.ShareGroupTypeSpecsNotFound as error: raise webob.exc.HTTPNotFound(explanation=error.msg) return webob.Response(status_int=http_client.NO_CONTENT) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def delete(self, req, id, key): return self._delete(req, id, key) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def delete(self, req, id, key): # pylint: disable=function-redefined return self._delete(req, id, key) def _check_key_names(self, keys): if not common.validate_key_names(keys): expl = _('Key names can only contain alphanumeric characters, ' 'underscores, periods, colons and hyphens.') raise webob.exc.HTTPBadRequest(explanation=expl) def create_resource(): return wsgi.Resource(ShareGroupTypeSpecsController()) manila-10.0.0/manila/api/v2/share_networks.py0000664000175000017500000005144513656750227021052 0ustar zuulzuul00000000000000# Copyright 2014 NetApp # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
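# Descriptive note (added): the controller below guards delete() and update()
# with several invariants: a share network cannot be deleted while shares,
# share groups or additional subnets still reference it (or while its share
# servers are not auto-deletable), and once share servers exist only the
# 'name' and 'description' fields may be updated.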
"""The shares api.""" import copy from oslo_db import exception as db_exception from oslo_log import log from oslo_utils import timeutils import six from six.moves import http_client import webob from webob import exc from manila.api import common from manila.api.openstack import api_version_request as api_version from manila.api.openstack import wsgi from manila.api.views import share_networks as share_networks_views from manila.db import api as db_api from manila import exception from manila.i18n import _ from manila import policy from manila import quota from manila.share import rpcapi as share_rpcapi from manila import utils RESOURCE_NAME = 'share_network' RESOURCES_NAME = 'share_networks' LOG = log.getLogger(__name__) QUOTAS = quota.QUOTAS class ShareNetworkController(wsgi.Controller): """The Share Network API controller for the OpenStack API.""" _view_builder_class = share_networks_views.ViewBuilder def __init__(self): super(ShareNetworkController, self).__init__() self.share_rpcapi = share_rpcapi.ShareAPI() def show(self, req, id): """Return data about the requested network info.""" context = req.environ['manila.context'] policy.check_policy(context, RESOURCE_NAME, 'show') try: share_network = db_api.share_network_get(context, id) except exception.ShareNetworkNotFound as e: raise exc.HTTPNotFound(explanation=six.text_type(e)) return self._view_builder.build_share_network(req, share_network) def _all_share_servers_are_auto_deletable(self, share_network): return all([ss['is_auto_deletable'] for ss in share_network['share_servers']]) def _share_network_contains_subnets(self, share_network): return len(share_network['share_network_subnets']) > 1 def delete(self, req, id): """Delete specified share network.""" context = req.environ['manila.context'] policy.check_policy(context, RESOURCE_NAME, 'delete') try: share_network = db_api.share_network_get(context, id) except exception.ShareNetworkNotFound as e: raise exc.HTTPNotFound(explanation=six.text_type(e)) share_instances = ( db_api.share_instances_get_all_by_share_network(context, id) ) if share_instances: msg = _("Can not delete share network %(id)s, it has " "%(len)s share(s).") % {'id': id, 'len': len(share_instances)} LOG.error(msg) raise exc.HTTPConflict(explanation=msg) # NOTE(ameade): Do not allow deletion of share network used by share # group sg_count = db_api.count_share_groups_in_share_network(context, id) if sg_count: msg = _("Can not delete share network %(id)s, it has %(len)s " "share group(s).") % {'id': id, 'len': sg_count} LOG.error(msg) raise exc.HTTPConflict(explanation=msg) # NOTE(silvacarlose): Do not allow the deletion of share networks # if it still contains two or more subnets if self._share_network_contains_subnets(share_network): msg = _("The share network %(id)s has more than one subnet " "attached. Please remove the subnets untill you have one " "or no subnets remaining.") % {'id': id} LOG.error(msg) raise exc.HTTPConflict(explanation=msg) for subnet in share_network['share_network_subnets']: if not self._all_share_servers_are_auto_deletable(subnet): msg = _("The service cannot determine if there are any " "non-managed shares on the share network subnet " "%(id)s, so it cannot be deleted. 
Please contact the " "cloud administrator to rectify.") % { 'id': subnet['id']} LOG.error(msg) raise exc.HTTPConflict(explanation=msg) for subnet in share_network['share_network_subnets']: for share_server in subnet['share_servers']: self.share_rpcapi.delete_share_server(context, share_server) db_api.share_network_delete(context, id) try: reservations = QUOTAS.reserve( context, project_id=share_network['project_id'], share_networks=-1, user_id=share_network['user_id']) except Exception: LOG.exception("Failed to update usages deleting " "share-network.") else: QUOTAS.commit(context, reservations, project_id=share_network['project_id'], user_id=share_network['user_id']) return webob.Response(status_int=http_client.ACCEPTED) def _subnet_has_search_opt(self, key, value, network, exact_value=False): for subnet in network.get('share_network_subnets') or []: if subnet.get(key) == value or ( not exact_value and value in subnet.get(key.rstrip('~')) if key.endswith('~') and subnet.get(key.rstrip('~')) else ()): return True return False def _get_share_networks(self, req, is_detail=True): """Returns a list of share networks.""" context = req.environ['manila.context'] search_opts = {} search_opts.update(req.GET) if 'security_service_id' in search_opts: networks = db_api.share_network_get_all_by_security_service( context, search_opts['security_service_id']) elif context.is_admin and 'project_id' in search_opts: networks = db_api.share_network_get_all_by_project( context, search_opts['project_id']) elif context.is_admin and utils.is_all_tenants(search_opts): networks = db_api.share_network_get_all(context) else: networks = db_api.share_network_get_all_by_project( context, context.project_id) date_parsing_error_msg = '''%s is not in yyyy-mm-dd format.''' if 'created_since' in search_opts: try: created_since = timeutils.parse_strtime( search_opts['created_since'], fmt="%Y-%m-%d") except ValueError: msg = date_parsing_error_msg % search_opts['created_since'] raise exc.HTTPBadRequest(explanation=msg) networks = [network for network in networks if network['created_at'] >= created_since] if 'created_before' in search_opts: try: created_before = timeutils.parse_strtime( search_opts['created_before'], fmt="%Y-%m-%d") except ValueError: msg = date_parsing_error_msg % search_opts['created_before'] raise exc.HTTPBadRequest(explanation=msg) networks = [network for network in networks if network['created_at'] <= created_before] opts_to_remove = [ 'all_tenants', 'created_since', 'created_before', 'limit', 'offset', 'security_service_id', 'project_id' ] for opt in opts_to_remove: search_opts.pop(opt, None) if search_opts: for key, value in search_opts.items(): if key in ['ip_version', 'segmentation_id']: value = int(value) if (req.api_version_request >= api_version.APIVersionRequest("2.36")): networks = [ network for network in networks if network.get(key) == value or self._subnet_has_search_opt(key, value, network) or (value in network.get(key.rstrip('~')) if key.endswith('~') and network.get(key.rstrip('~')) else ())] else: networks = [ network for network in networks if network.get(key) == value or self._subnet_has_search_opt(key, value, network, exact_value=True)] limited_list = common.limited(networks, req) return self._view_builder.build_share_networks( req, limited_list, is_detail) def _share_network_subnets_contain_share_servers(self, share_network): for subnet in share_network['share_network_subnets']: if subnet['share_servers'] and len(subnet['share_servers']) > 0: return True return False def index(self, 
req):
        """Returns a summary list of share networks."""
        policy.check_policy(req.environ['manila.context'], RESOURCE_NAME,
                            'index')
        return self._get_share_networks(req, is_detail=False)

    def detail(self, req):
        """Returns a detailed list of share networks."""
        policy.check_policy(req.environ['manila.context'], RESOURCE_NAME,
                            'detail')
        return self._get_share_networks(req)

    def update(self, req, id, body):
        """Update specified share network."""
        context = req.environ['manila.context']
        policy.check_policy(context, RESOURCE_NAME, 'update')

        if not body or RESOURCE_NAME not in body:
            raise exc.HTTPUnprocessableEntity()

        try:
            share_network = db_api.share_network_get(context, id)
        except exception.ShareNetworkNotFound as e:
            raise exc.HTTPNotFound(explanation=six.text_type(e))

        update_values = body[RESOURCE_NAME]

        if 'nova_net_id' in update_values:
            msg = _("nova networking is not supported starting in Ocata.")
            raise exc.HTTPBadRequest(explanation=msg)

        if self._share_network_subnets_contain_share_servers(share_network):
            for value in update_values:
                if value not in ['name', 'description']:
                    msg = (_("Cannot update share network %s. It is used by "
                             "share servers. Only 'name' and 'description' "
                             "fields are available for update.")
                           % share_network['id'])
                    raise exc.HTTPForbidden(explanation=msg)

        try:
            if ('neutron_net_id' in update_values or
                    'neutron_subnet_id' in update_values):
                subnet = db_api.share_network_subnet_get_default_subnet(
                    context, id)
                if not subnet:
                    msg = _("The share network %(id)s does not have a "
                            "'default' subnet that serves all availability "
                            "zones, so subnet details "
                            "('neutron_net_id', 'neutron_subnet_id') cannot "
                            "be updated.") % {'id': id}
                    raise exc.HTTPBadRequest(explanation=msg)
                # NOTE(silvacarlose): If the default share network subnet has
                # the fields neutron_net_id and neutron_subnet_id set to
                # None, we need to make sure that in the update request the
                # user is passing both parameters, since a share network
                # subnet must have both fields filled or empty.
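                # Illustrative request shape (an assumption inferred from the
                # checks below, not quoted from the API reference): when the
                # default subnet has neither value set, the update body is
                # expected to carry both keys together, e.g.
                #
                #     {"share_network": {
                #         "neutron_net_id": "<neutron-net-uuid>",
                #         "neutron_subnet_id": "<neutron-subnet-uuid>"}}
                #
                # while a body naming only one of the two keys is rejected
                # with HTTP 400.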
subnet_neutron_net_and_subnet_id_are_empty = ( subnet['neutron_net_id'] is None and subnet['neutron_subnet_id'] is None) update_values_without_neutron_net_or_subnet = ( update_values.get('neutron_net_id') is None or update_values.get('neutron_subnet_id') is None) if (subnet_neutron_net_and_subnet_id_are_empty and update_values_without_neutron_net_or_subnet): msg = _( "To update the share network %(id)s you need to " "specify both 'neutron_net_id' and " "'neutron_subnet_id'.") % {'id': id} raise webob.exc.HTTPBadRequest(explanation=msg) db_api.share_network_subnet_update(context, subnet['id'], update_values) share_network = db_api.share_network_update(context, id, update_values) except db_exception.DBError: msg = "Could not save supplied data due to database error" raise exc.HTTPBadRequest(explanation=msg) return self._view_builder.build_share_network(req, share_network) def create(self, req, body): """Creates a new share network.""" context = req.environ['manila.context'] policy.check_policy(context, RESOURCE_NAME, 'create') if not body or RESOURCE_NAME not in body: raise exc.HTTPUnprocessableEntity() share_network_values = body[RESOURCE_NAME] share_network_subnet_values = copy.deepcopy(share_network_values) share_network_values['project_id'] = context.project_id share_network_values['user_id'] = context.user_id if 'nova_net_id' in share_network_values: msg = _("nova networking is not supported starting in Ocata.") raise exc.HTTPBadRequest(explanation=msg) share_network_values.pop('availability_zone', None) share_network_values.pop('neutron_net_id', None) share_network_values.pop('neutron_subnet_id', None) if req.api_version_request >= api_version.APIVersionRequest("2.51"): if 'availability_zone' in share_network_subnet_values: try: az = db_api.availability_zone_get( context, share_network_subnet_values['availability_zone']) share_network_subnet_values['availability_zone_id'] = ( az['id']) share_network_subnet_values.pop('availability_zone') except exception.AvailabilityZoneNotFound: msg = (_("The provided availability zone %s does not " "exist.") % share_network_subnet_values['availability_zone']) raise exc.HTTPBadRequest(explanation=msg) common.check_net_id_and_subnet_id(share_network_subnet_values) try: reservations = QUOTAS.reserve(context, share_networks=1) except exception.OverQuota as e: overs = e.kwargs['overs'] usages = e.kwargs['usages'] quotas = e.kwargs['quotas'] def _consumed(name): return (usages[name]['reserved'] + usages[name]['in_use']) if 'share_networks' in overs: LOG.warning("Quota exceeded for %(s_pid)s, " "tried to create " "share-network (%(d_consumed)d of %(d_quota)d " "already consumed).", { 's_pid': context.project_id, 'd_consumed': _consumed('share_networks'), 'd_quota': quotas['share_networks']}) raise exception.ShareNetworksLimitExceeded( allowed=quotas['share_networks']) else: # Tries to create the new share network try: share_network = db_api.share_network_create( context, share_network_values) except db_exception.DBError as e: LOG.exception(e) msg = "Could not create share network." raise exc.HTTPInternalServerError(explanation=msg) share_network_subnet_values['share_network_id'] = ( share_network['id']) share_network_subnet_values.pop('id', None) # Try to create the share network subnet. If it fails, the service # must rollback the share network creation. 
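        # In short (an illustrative restatement of the flow below): the
        # subnet row is inserted second, and a DBError at that point triggers
        # share_network_delete() on the network created above, so a failed
        # request does not leave an orphaned share network behind; the quota
        # reservation is committed only once both inserts have succeeded.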
try: db_api.share_network_subnet_create( context, share_network_subnet_values) except db_exception.DBError: db_api.share_network_delete(context, share_network['id']) msg = _('Could not create share network.') raise exc.HTTPInternalServerError(explanation=msg) QUOTAS.commit(context, reservations) share_network = db_api.share_network_get(context, share_network['id']) return self._view_builder.build_share_network(req, share_network) def action(self, req, id, body): _actions = { 'add_security_service': self._add_security_service, 'remove_security_service': self._remove_security_service } for action, data in body.items(): try: return _actions[action](req, id, data) except KeyError: msg = _("Share networks does not have %s action") % action raise exc.HTTPBadRequest(explanation=msg) def _add_security_service(self, req, id, data): """Associate share network with a given security service.""" context = req.environ['manila.context'] policy.check_policy(context, RESOURCE_NAME, 'add_security_service') share_network = db_api.share_network_get(context, id) if self._share_network_subnets_contain_share_servers(share_network): msg = _("Cannot add security services. Share network is used.") raise exc.HTTPForbidden(explanation=msg) security_service = db_api.security_service_get( context, data['security_service_id']) for attached_service in share_network['security_services']: if attached_service['type'] == security_service['type']: msg = _("Cannot add security service to share network. " "Security service with '%(ss_type)s' type already " "added to '%(sn_id)s' share network") % { 'ss_type': security_service['type'], 'sn_id': share_network['id']} raise exc.HTTPConflict(explanation=msg) try: share_network = db_api.share_network_add_security_service( context, id, data['security_service_id']) except KeyError: msg = "Malformed request body" raise exc.HTTPBadRequest(explanation=msg) except exception.NotFound as e: raise exc.HTTPNotFound(explanation=six.text_type(e)) except exception.ShareNetworkSecurityServiceAssociationError as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) return self._view_builder.build_share_network(req, share_network) def _remove_security_service(self, req, id, data): """Dissociate share network from a given security service.""" context = req.environ['manila.context'] policy.check_policy(context, RESOURCE_NAME, 'remove_security_service') share_network = db_api.share_network_get(context, id) if self._share_network_subnets_contain_share_servers(share_network): msg = _("Cannot remove security services. Share network is used.") raise exc.HTTPForbidden(explanation=msg) try: share_network = db_api.share_network_remove_security_service( context, id, data['security_service_id']) except KeyError: msg = "Malformed request body" raise exc.HTTPBadRequest(explanation=msg) except exception.NotFound as e: raise exc.HTTPNotFound(explanation=six.text_type(e)) except exception.ShareNetworkSecurityServiceDissociationError as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) return self._view_builder.build_share_network(req, share_network) def create_resource(): return wsgi.Resource(ShareNetworkController()) manila-10.0.0/manila/api/v2/share_servers.py0000664000175000017500000002005513656750227020660 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from six.moves import http_client import webob from webob import exc from manila.api.openstack import wsgi from manila.api.v1 import share_servers from manila.common import constants from manila.db import api as db_api from manila import exception from manila.i18n import _ from manila.share import utils as share_utils from manila import utils LOG = log.getLogger(__name__) class ShareServerController(share_servers.ShareServerController, wsgi.Controller, wsgi.AdminActionsMixin): """The Share Server API V2 controller for the OpenStack API.""" valid_statuses = { 'status': { constants.STATUS_ACTIVE, constants.STATUS_ERROR, constants.STATUS_DELETING, constants.STATUS_CREATING, constants.STATUS_MANAGING, constants.STATUS_UNMANAGING, constants.STATUS_UNMANAGE_ERROR, constants.STATUS_MANAGE_ERROR, } } def _update(self, context, id, update): db_api.share_server_update(context, id, update) @wsgi.Controller.api_version('2.49') @wsgi.action('reset_status') def share_server_reset_status(self, req, id, body): return self._reset_status(req, id, body) @wsgi.Controller.authorize('manage_share_server') def _manage(self, req, body): """Manage a share server.""" LOG.debug("Manage Share Server with id: %s", id) context = req.environ['manila.context'] identifier, host, share_network, driver_opts, network_subnet = ( self._validate_manage_share_server_parameters(context, body)) try: result = self.share_api.manage_share_server( context, identifier, host, network_subnet, driver_opts) except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.msg) except exception.PolicyNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.msg) result.project_id = share_network["project_id"] result.share_network_id = share_network["id"] if share_network['name']: result.share_network_name = share_network['name'] else: result.share_network_name = share_network['id'] return self._view_builder.build_share_server(req, result) @wsgi.Controller.api_version('2.51') @wsgi.response(202) def manage(self, req, body): return self._manage(req, body) @wsgi.Controller.api_version('2.49') # noqa @wsgi.response(202) def manage(self, req, body): # pylint: disable=function-redefined body.get('share_server', {}).pop('share_network_subnet_id', None) return self._manage(req, body) @wsgi.Controller.authorize('unmanage_share_server') def _unmanage(self, req, id, body=None): context = req.environ['manila.context'] LOG.debug("Unmanage Share Server with id: %s", id) # force's default value is False # force will be True if body is {'unmanage': {'force': True}} force = (body.get('unmanage') or {}).get('force', False) or False try: share_server = db_api.share_server_get( context, id) except exception.ShareServerNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) allowed_statuses = [constants.STATUS_ERROR, constants.STATUS_ACTIVE, constants.STATUS_MANAGE_ERROR, constants.STATUS_UNMANAGE_ERROR] if share_server['status'] not in allowed_statuses: data = { 'status': share_server['status'], 'allowed_statuses': ', '.join(allowed_statuses), } msg = _("Share server's actual status is %(status)s, allowed " "statuses for 
unmanaging are " "%(allowed_statuses)s.") % data raise exc.HTTPBadRequest(explanation=msg) try: self.share_api.unmanage_share_server( context, share_server, force=force) except (exception.ShareServerInUse, exception.PolicyNotAuthorized) as e: raise exc.HTTPBadRequest(explanation=e.msg) return webob.Response(status_int=http_client.ACCEPTED) @wsgi.Controller.api_version("2.49") @wsgi.action('unmanage') def unmanage(self, req, id, body=None): """Unmanage a share server.""" return self._unmanage(req, id, body) def _validate_manage_share_server_parameters(self, context, body): if not (body and self.is_valid_body(body, 'share_server')): msg = _("Share Server entity not found in request body") raise exc.HTTPUnprocessableEntity(explanation=msg) required_parameters = ('host', 'share_network_id', 'identifier') data = body['share_server'] for parameter in required_parameters: if parameter not in data: msg = _("Required parameter %s not found") % parameter raise exc.HTTPBadRequest(explanation=msg) if not data.get(parameter): msg = _("Required parameter %s is empty") % parameter raise exc.HTTPBadRequest(explanation=msg) identifier = data['identifier'] host, share_network_id = data['host'], data['share_network_id'] network_subnet_id = data.get('share_network_subnet_id') if network_subnet_id: try: network_subnet = db_api.share_network_subnet_get( context, network_subnet_id) except exception.ShareNetworkSubnetNotFound: msg = _("The share network subnet %s does not " "exist.") % network_subnet_id raise exc.HTTPBadRequest(explanation=msg) else: network_subnet = db_api.share_network_subnet_get_default_subnet( context, share_network_id) if network_subnet is None: msg = _("The share network %s does have a default subnet. Create " "one or use a specific subnet to manage this share server " "with API version >= 2.51.") % share_network_id raise exc.HTTPBadRequest(explanation=msg) if share_utils.extract_host(host, 'pool'): msg = _("Host parameter should not contain pool.") raise exc.HTTPBadRequest(explanation=msg) try: utils.validate_service_host( context, share_utils.extract_host(host)) except exception.ServiceNotFound as e: raise exc.HTTPBadRequest(explanation=e) except exception.PolicyNotAuthorized as e: raise exc.HTTPForbidden(explanation=e) except exception.AdminRequired as e: raise exc.HTTPForbidden(explanation=e) except exception.ServiceIsDown as e: raise exc.HTTPBadRequest(explanation=e) try: share_network = db_api.share_network_get( context, share_network_id) except exception.ShareNetworkNotFound as e: raise exc.HTTPBadRequest(explanation=e) driver_opts = data.get('driver_options') if driver_opts is not None and not isinstance(driver_opts, dict): msg = _("Driver options must be in dictionary format.") raise exc.HTTPBadRequest(explanation=msg) return identifier, host, share_network, driver_opts, network_subnet def create_resource(): return wsgi.Resource(ShareServerController()) manila-10.0.0/manila/api/v2/share_group_types.py0000664000175000017500000003064513656750227021555 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """The group type API controller module.""" from oslo_utils import strutils from oslo_utils import uuidutils import six from six.moves import http_client import webob from webob import exc from manila.api.openstack import wsgi from manila.api.views import share_group_types as views from manila import exception from manila.i18n import _ from manila.share_group import share_group_types SG_GRADUATION_VERSION = '2.55' class ShareGroupTypesController(wsgi.Controller): """The share group types API controller for the OpenStack API.""" resource_name = 'share_group_type' _view_builder_class = views.ShareGroupTypeViewBuilder def _check_body(self, body, action_name): if not self.is_valid_body(body, action_name): raise webob.exc.HTTPBadRequest() access = body[action_name] project = access.get('project') if not uuidutils.is_uuid_like(project): msg = _("Project value (%s) must be in uuid format.") % project raise webob.exc.HTTPBadRequest(explanation=msg) @wsgi.Controller.authorize('index') def _index(self, req): """Returns the list of share group types.""" limited_types = self._get_share_group_types(req) return self._view_builder.index(req, limited_types) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def index(self, req): return self._index(req) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def index(self, req): # pylint: disable=function-redefined return self._index(req) @wsgi.Controller.authorize('show') def _show(self, req, id): """Return a single share group type item.""" context = req.environ['manila.context'] try: share_group_type = share_group_types.get(context, id) except exception.NotFound: msg = _("Share group type with id %s not found.") raise exc.HTTPNotFound(explanation=msg % id) share_group_type['id'] = six.text_type(share_group_type['id']) return self._view_builder.show(req, share_group_type) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def show(self, req, id): return self._show(req, id) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def show(self, req, id): # pylint: disable=function-redefined return self._show(req, id) @wsgi.Controller.authorize('default') def _default(self, req): """Return default share group type.""" context = req.environ['manila.context'] share_group_type = share_group_types.get_default(context) if not share_group_type: msg = _("Default share group type not found.") raise exc.HTTPNotFound(explanation=msg) share_group_type['id'] = six.text_type(share_group_type['id']) return self._view_builder.show(req, share_group_type) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def default(self, req): return self._default(req) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def default(self, req): # pylint: disable=function-redefined return self._default(req) def _get_share_group_types(self, req): """Helper function that returns a list of share group type dicts.""" filters = {} context = req.environ['manila.context'] if context.is_admin: # Only admin has query access to all group types filters['is_public'] = self._parse_is_public( req.params.get('is_public')) else: filters['is_public'] = True limited_types = share_group_types.get_all( context, search_opts=filters).values() return list(limited_types) @staticmethod def _parse_is_public(is_public): """Parse is_public into something usable. 
:returns: - True: API should list public share group types only - False: API should list private share group types only - None: API should list both public and private share group types """ if is_public is None: # preserve default value of showing only public types return True elif six.text_type(is_public).lower() == "all": return None else: try: return strutils.bool_from_string(is_public, strict=True) except ValueError: msg = _('Invalid is_public filter [%s]') % is_public raise exc.HTTPBadRequest(explanation=msg) @wsgi.Controller.authorize('create') def _create(self, req, body): """Creates a new share group type.""" context = req.environ['manila.context'] if not self.is_valid_body(body, 'share_group_type'): raise webob.exc.HTTPBadRequest() share_group_type = body['share_group_type'] name = share_group_type.get('name') specs = share_group_type.get('group_specs', {}) is_public = share_group_type.get('is_public', True) if not share_group_type.get('share_types'): msg = _("Supported share types must be provided.") raise webob.exc.HTTPBadRequest(explanation=msg) share_types = share_group_type.get('share_types') if name is None or name == "" or len(name) > 255: msg = _("Share group type name is not valid.") raise webob.exc.HTTPBadRequest(explanation=msg) if not (specs is None or isinstance(specs, dict)): msg = _("Group specs can be either of 'None' or 'dict' types.") raise webob.exc.HTTPBadRequest(explanation=msg) if specs: for element in list(specs.keys()) + list(specs.values()): if not isinstance(element, six.string_types): msg = _("Group specs keys and values should be strings.") raise webob.exc.HTTPBadRequest(explanation=msg) try: share_group_types.create( context, name, share_types, specs, is_public) share_group_type = share_group_types.get_by_name( context, name) except exception.ShareGroupTypeExists as err: raise webob.exc.HTTPConflict(explanation=six.text_type(err)) except exception.ShareTypeDoesNotExist as err: raise webob.exc.HTTPNotFound(explanation=six.text_type(err)) except exception.NotFound: raise webob.exc.HTTPNotFound() return self._view_builder.show(req, share_group_type) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) @wsgi.action("create") def create(self, req, body): return self._create(req, body) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa @wsgi.action("create") def create(self, req, body): # pylint: disable=function-redefined return self._create(req, body) @wsgi.Controller.authorize('delete') def _delete(self, req, id): """Deletes an existing group type.""" context = req.environ['manila.context'] try: share_group_type = share_group_types.get(context, id) share_group_types.destroy(context, share_group_type['id']) except exception.ShareGroupTypeInUse: msg = _('Target share group type with id %s is still in use.') raise webob.exc.HTTPBadRequest(explanation=msg % id) except exception.NotFound: raise webob.exc.HTTPNotFound() return webob.Response(status_int=http_client.NO_CONTENT) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) @wsgi.action("delete") def delete(self, req, id): return self._delete(req, id) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa @wsgi.action("delete") def delete(self, req, id): # pylint: disable=function-redefined return self._delete(req, id) @wsgi.Controller.authorize('list_project_access') def _share_group_type_access(self, req, id): context = req.environ['manila.context'] try: share_group_type = share_group_types.get( context, id, expected_fields=['projects']) except 
exception.ShareGroupTypeNotFound: explanation = _("Share group type %s not found.") % id raise webob.exc.HTTPNotFound(explanation=explanation) if share_group_type['is_public']: expl = _("Access list not available for public share group types.") raise webob.exc.HTTPNotFound(explanation=expl) projects = [] for project_id in share_group_type['projects']: projects.append( {'share_group_type_id': share_group_type['id'], 'project_id': project_id} ) return {'share_group_type_access': projects} @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def share_group_type_access(self, req, id): return self._share_group_type_access(req, id) # pylint: disable=function-redefined @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def share_group_type_access(self, req, id): return self._share_group_type_access(req, id) @wsgi.Controller.authorize('add_project_access') def _add_project_access(self, req, id, body): context = req.environ['manila.context'] self._check_body(body, 'addProjectAccess') project = body['addProjectAccess']['project'] self._assert_non_public_share_group_type(context, id) try: share_group_types.add_share_group_type_access( context, id, project) except exception.ShareGroupTypeAccessExists as err: raise webob.exc.HTTPConflict(explanation=six.text_type(err)) return webob.Response(status_int=http_client.ACCEPTED) # pylint: enable=function-redefined @wsgi.Controller.api_version('2.31', '2.54', experimental=True) @wsgi.action('addProjectAccess') def add_project_access(self, req, id, body): return self._add_project_access(req, id, body) # pylint: disable=function-redefined @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa @wsgi.action('addProjectAccess') def add_project_access(self, req, id, body): return self._add_project_access(req, id, body) @wsgi.Controller.authorize('remove_project_access') def _remove_project_access(self, req, id, body): context = req.environ['manila.context'] self._check_body(body, 'removeProjectAccess') project = body['removeProjectAccess']['project'] self._assert_non_public_share_group_type(context, id) try: share_group_types.remove_share_group_type_access( context, id, project) except exception.ShareGroupTypeAccessNotFound as err: raise webob.exc.HTTPNotFound(explanation=six.text_type(err)) return webob.Response(status_int=http_client.ACCEPTED) # pylint: enable=function-redefined @wsgi.Controller.api_version('2.31', '2.54', experimental=True) @wsgi.action('removeProjectAccess') def remove_project_access(self, req, id, body): return self._remove_project_access(req, id, body) # pylint: disable=function-redefined @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa @wsgi.action('removeProjectAccess') def remove_project_access(self, req, id, body): return self._remove_project_access(req, id, body) def _assert_non_public_share_group_type(self, context, type_id): try: share_group_type = share_group_types.get( context, type_id) if share_group_type['is_public']: msg = _("Type access modification is not applicable to " "public share group type.") raise webob.exc.HTTPConflict(explanation=msg) except exception.ShareGroupTypeNotFound as err: raise webob.exc.HTTPNotFound(explanation=six.text_type(err)) def create_resource(): return wsgi.Resource(ShareGroupTypesController()) manila-10.0.0/manila/api/v2/share_accesses.py0000664000175000017500000000562713656750227020770 0ustar zuulzuul00000000000000# Copyright 2018 Huawei Corporation. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The share accesses api.""" import ast import webob from manila.api.openstack import wsgi from manila.api.views import share_accesses as share_access_views from manila import exception from manila.i18n import _ from manila import share class ShareAccessesController(wsgi.Controller, wsgi.AdminActionsMixin): """The Share accesses API V2 controller for the OpenStack API.""" resource_name = 'share_access_rule' _view_builder_class = share_access_views.ViewBuilder def __init__(self): super(ShareAccessesController, self).__init__() self.share_api = share.API() @wsgi.Controller.api_version('2.45') @wsgi.Controller.authorize('get') def show(self, req, id): """Return data about the given share access rule.""" context = req.environ['manila.context'] share_access = self._get_share_access(context, id) return self._view_builder.view(req, share_access) def _get_share_access(self, context, share_access_id): try: return self.share_api.access_get(context, share_access_id) except exception.NotFound: msg = _("Share access rule %s not found.") % share_access_id raise webob.exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version('2.45') @wsgi.Controller.authorize def index(self, req): """Returns the list of access rules for a given share.""" context = req.environ['manila.context'] search_opts = {} search_opts.update(req.GET) if 'share_id' not in search_opts: msg = _("The field 'share_id' has to be specified.") raise webob.exc.HTTPBadRequest(explanation=msg) share_id = search_opts.pop('share_id', None) if 'metadata' in search_opts: search_opts['metadata'] = ast.literal_eval( search_opts['metadata']) try: share = self.share_api.get(context, share_id) except exception.NotFound: msg = _("Share %s not found.") % share_id raise webob.exc.HTTPBadRequest(explanation=msg) access_rules = self.share_api.access_get_all( context, share, search_opts) return self._view_builder.list_view(req, access_rules) def create_resource(): return wsgi.Resource(ShareAccessesController()) manila-10.0.0/manila/api/v2/shares.py0000664000175000017500000004617513656750227017305 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
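# Illustrative usage note (assuming the standard manila microversion
# negotiation via the X-OpenStack-Manila-API-Version header): the controller
# below drops query parameters that the negotiated microversion does not
# support, e.g.
#
#     GET /v2/{project_id}/shares?with_count=true
#     X-OpenStack-Manila-API-Version: 2.42
#
# asks for a share count, while the same query under an older microversion is
# silently ignored (see index() further down).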
from oslo_log import log from six.moves import http_client import webob from webob import exc from manila.api.openstack import api_version_request as api_version from manila.api.openstack import wsgi from manila.api.v1 import share_manage from manila.api.v1 import share_unmanage from manila.api.v1 import shares from manila.api.views import share_accesses as share_access_views from manila.api.views import share_migration as share_migration_views from manila.api.views import shares as share_views from manila.common import constants from manila import db from manila import exception from manila.i18n import _ from manila import share from manila import utils LOG = log.getLogger(__name__) class ShareController(shares.ShareMixin, share_manage.ShareManageMixin, share_unmanage.ShareUnmanageMixin, wsgi.Controller, wsgi.AdminActionsMixin): """The Shares API v2 controller for the OpenStack API.""" resource_name = 'share' _view_builder_class = share_views.ViewBuilder def __init__(self): super(ShareController, self).__init__() self.share_api = share.API() self._access_view_builder = share_access_views.ViewBuilder() self._migration_view_builder = share_migration_views.ViewBuilder() @wsgi.Controller.authorize('revert_to_snapshot') def _revert(self, req, id, body=None): """Revert a share to a snapshot.""" context = req.environ['manila.context'] revert_data = self._validate_revert_parameters(context, body) try: share_id = id snapshot_id = revert_data['snapshot_id'] share = self.share_api.get(context, share_id) snapshot = self.share_api.get_snapshot(context, snapshot_id) # Ensure share supports reverting to a snapshot if not share['revert_to_snapshot_support']: msg_args = {'share_id': share_id, 'snap_id': snapshot_id} msg = _('Share %(share_id)s may not be reverted to snapshot ' '%(snap_id)s, because the share does not have that ' 'capability.') raise exc.HTTPBadRequest(explanation=msg % msg_args) # Ensure requested share & snapshot match. if share['id'] != snapshot['share_id']: msg_args = {'share_id': share_id, 'snap_id': snapshot_id} msg = _('Snapshot %(snap_id)s is not associated with share ' '%(share_id)s.') raise exc.HTTPBadRequest(explanation=msg % msg_args) # Ensure share status is 'available'. if share['status'] != constants.STATUS_AVAILABLE: msg_args = { 'share_id': share_id, 'state': share['status'], 'available': constants.STATUS_AVAILABLE, } msg = _("Share %(share_id)s is in '%(state)s' state, but it " "must be in '%(available)s' state to be reverted to a " "snapshot.") raise exc.HTTPConflict(explanation=msg % msg_args) # Ensure snapshot status is 'available'. if snapshot['status'] != constants.STATUS_AVAILABLE: msg_args = { 'snap_id': snapshot_id, 'state': snapshot['status'], 'available': constants.STATUS_AVAILABLE, } msg = _("Snapshot %(snap_id)s is in '%(state)s' state, but it " "must be in '%(available)s' state to be restored.") raise exc.HTTPConflict(explanation=msg % msg_args) # Ensure a long-running task isn't active on the share if share.is_busy: msg_args = {'share_id': share_id} msg = _("Share %(share_id)s may not be reverted while it has " "an active task.") raise exc.HTTPConflict(explanation=msg % msg_args) # Ensure the snapshot is the most recent one. 
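            # Put differently (illustrative summary of the checks below): if
            # a share has snapshots s1 and s2 with s2 the newer one, a revert
            # to s1 is rejected with HTTP 409; only the most recent snapshot
            # may be restored.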
latest_snapshot = self.share_api.get_latest_snapshot_for_share( context, share_id) if not latest_snapshot: msg_args = {'share_id': share_id} msg = _("Could not determine the latest snapshot for share " "%(share_id)s.") raise exc.HTTPBadRequest(explanation=msg % msg_args) if latest_snapshot['id'] != snapshot_id: msg_args = { 'share_id': share_id, 'snap_id': snapshot_id, 'latest_snap_id': latest_snapshot['id'], } msg = _("Snapshot %(snap_id)s may not be restored because " "it is not the most recent snapshot of share " "%(share_id)s. Currently the latest snapshot is " "%(latest_snap_id)s.") raise exc.HTTPConflict(explanation=msg % msg_args) # Ensure the access rules are not in the process of updating for instance in share['instances']: access_rules_status = instance['access_rules_status'] if access_rules_status != constants.ACCESS_STATE_ACTIVE: msg_args = { 'share_id': share_id, 'snap_id': snapshot_id, 'state': constants.ACCESS_STATE_ACTIVE } msg = _("Snapshot %(snap_id)s belongs to a share " "%(share_id)s which has access rules that are " "not %(state)s.") raise exc.HTTPConflict(explanation=msg % msg_args) msg_args = {'share_id': share_id, 'snap_id': snapshot_id} msg = 'Reverting share %(share_id)s to snapshot %(snap_id)s.' LOG.info(msg, msg_args) self.share_api.revert_to_snapshot(context, share, snapshot) except exception.ShareNotFound as e: raise exc.HTTPNotFound(explanation=e) except exception.ShareSnapshotNotFound as e: raise exc.HTTPBadRequest(explanation=e) except exception.ShareSizeExceedsAvailableQuota as e: raise exc.HTTPForbidden(explanation=e) except exception.ReplicationException as e: raise exc.HTTPBadRequest(explanation=e) return webob.Response(status_int=http_client.ACCEPTED) def _validate_revert_parameters(self, context, body): if not (body and self.is_valid_body(body, 'revert')): msg = _("Revert entity not found in request body.") raise exc.HTTPBadRequest(explanation=msg) required_parameters = ('snapshot_id',) data = body['revert'] for parameter in required_parameters: if parameter not in data: msg = _("Required parameter %s not found.") % parameter raise exc.HTTPBadRequest(explanation=msg) if not data.get(parameter): msg = _("Required parameter %s is empty.") % parameter raise exc.HTTPBadRequest(explanation=msg) return data @wsgi.Controller.api_version("2.48") def create(self, req, body): return self._create(req, body, check_create_share_from_snapshot_support=True, check_availability_zones_extra_spec=True) @wsgi.Controller.api_version("2.31", "2.47") # noqa def create(self, req, body): # pylint: disable=function-redefined return self._create( req, body, check_create_share_from_snapshot_support=True) @wsgi.Controller.api_version("2.24", "2.30") # noqa def create(self, req, body): # pylint: disable=function-redefined body.get('share', {}).pop('share_group_id', None) return self._create(req, body, check_create_share_from_snapshot_support=True) @wsgi.Controller.api_version("2.0", "2.23") # noqa def create(self, req, body): # pylint: disable=function-redefined body.get('share', {}).pop('share_group_id', None) return self._create(req, body) @wsgi.Controller.api_version('2.0', '2.6') @wsgi.action('os-reset_status') def share_reset_status_legacy(self, req, id, body): return self._reset_status(req, id, body) @wsgi.Controller.api_version('2.7') @wsgi.action('reset_status') def share_reset_status(self, req, id, body): return self._reset_status(req, id, body) @wsgi.Controller.api_version('2.0', '2.6') @wsgi.action('os-force_delete') def share_force_delete_legacy(self, req, id, body): 
return self._force_delete(req, id, body) @wsgi.Controller.api_version('2.7') @wsgi.action('force_delete') def share_force_delete(self, req, id, body): return self._force_delete(req, id, body) @wsgi.Controller.api_version('2.29', experimental=True) @wsgi.action("migration_start") @wsgi.Controller.authorize def migration_start(self, req, id, body): """Migrate a share to the specified host.""" context = req.environ['manila.context'] try: share = self.share_api.get(context, id) except exception.NotFound: msg = _("Share %s not found.") % id raise exc.HTTPNotFound(explanation=msg) params = body.get('migration_start') if not params: raise exc.HTTPBadRequest(explanation=_("Request is missing body.")) driver_assisted_params = ['preserve_metadata', 'writable', 'nondisruptive', 'preserve_snapshots'] bool_params = (driver_assisted_params + ['force_host_assisted_migration']) mandatory_params = driver_assisted_params + ['host'] utils.check_params_exist(mandatory_params, params) bool_param_values = utils.check_params_are_boolean(bool_params, params) new_share_network = None new_share_type = None new_share_network_id = params.get('new_share_network_id', None) if new_share_network_id: try: new_share_network = db.share_network_get( context, new_share_network_id) except exception.NotFound: msg = _("Share network %s not " "found.") % new_share_network_id raise exc.HTTPBadRequest(explanation=msg) new_share_type_id = params.get('new_share_type_id', None) if new_share_type_id: try: new_share_type = db.share_type_get( context, new_share_type_id) except exception.NotFound: msg = _("Share type %s not found.") % new_share_type_id raise exc.HTTPBadRequest(explanation=msg) try: return_code = self.share_api.migration_start( context, share, params['host'], bool_param_values['force_host_assisted_migration'], bool_param_values['preserve_metadata'], bool_param_values['writable'], bool_param_values['nondisruptive'], bool_param_values['preserve_snapshots'], new_share_network=new_share_network, new_share_type=new_share_type) except exception.Conflict as e: raise exc.HTTPConflict(explanation=e) return webob.Response(status_int=return_code) @wsgi.Controller.api_version('2.22', experimental=True) @wsgi.action("migration_complete") @wsgi.Controller.authorize def migration_complete(self, req, id, body): """Invokes 2nd phase of share migration.""" context = req.environ['manila.context'] try: share = self.share_api.get(context, id) except exception.NotFound: msg = _("Share %s not found.") % id raise exc.HTTPNotFound(explanation=msg) self.share_api.migration_complete(context, share) return webob.Response(status_int=http_client.ACCEPTED) @wsgi.Controller.api_version('2.22', experimental=True) @wsgi.action("migration_cancel") @wsgi.Controller.authorize def migration_cancel(self, req, id, body): """Attempts to cancel share migration.""" context = req.environ['manila.context'] try: share = self.share_api.get(context, id) except exception.NotFound: msg = _("Share %s not found.") % id raise exc.HTTPNotFound(explanation=msg) self.share_api.migration_cancel(context, share) return webob.Response(status_int=http_client.ACCEPTED) @wsgi.Controller.api_version('2.22', experimental=True) @wsgi.action("migration_get_progress") @wsgi.Controller.authorize def migration_get_progress(self, req, id, body): """Retrieve share migration progress for a given share.""" context = req.environ['manila.context'] try: share = self.share_api.get(context, id) except exception.NotFound: msg = _("Share %s not found.") % id raise exc.HTTPNotFound(explanation=msg) 
result = self.share_api.migration_get_progress(context, share) # refresh share model share = self.share_api.get(context, id) return self._migration_view_builder.get_progress(req, share, result) @wsgi.Controller.api_version('2.22', experimental=True) @wsgi.action("reset_task_state") @wsgi.Controller.authorize def reset_task_state(self, req, id, body): return self._reset_status(req, id, body, status_attr='task_state') @wsgi.Controller.api_version('2.0', '2.6') @wsgi.action('os-allow_access') def allow_access_legacy(self, req, id, body): """Add share access rule.""" return self._allow_access(req, id, body) @wsgi.Controller.api_version('2.7') @wsgi.action('allow_access') def allow_access(self, req, id, body): """Add share access rule.""" args = (req, id, body) kwargs = {} if req.api_version_request >= api_version.APIVersionRequest("2.13"): kwargs['enable_ceph'] = True if req.api_version_request >= api_version.APIVersionRequest("2.28"): kwargs['allow_on_error_status'] = True if req.api_version_request >= api_version.APIVersionRequest("2.38"): kwargs['enable_ipv6'] = True if req.api_version_request >= api_version.APIVersionRequest("2.45"): kwargs['enable_metadata'] = True return self._allow_access(*args, **kwargs) @wsgi.Controller.api_version('2.0', '2.6') @wsgi.action('os-deny_access') def deny_access_legacy(self, req, id, body): """Remove share access rule.""" return self._deny_access(req, id, body) @wsgi.Controller.api_version('2.7') @wsgi.action('deny_access') def deny_access(self, req, id, body): """Remove share access rule.""" return self._deny_access(req, id, body) @wsgi.Controller.api_version('2.0', '2.6') @wsgi.action('os-access_list') def access_list_legacy(self, req, id, body): """List share access rules.""" return self._access_list(req, id, body) @wsgi.Controller.api_version('2.7', '2.44') @wsgi.action('access_list') def access_list(self, req, id, body): """List share access rules.""" return self._access_list(req, id, body) @wsgi.Controller.api_version('2.0', '2.6') @wsgi.action('os-extend') def extend_legacy(self, req, id, body): """Extend size of a share.""" return self._extend(req, id, body) @wsgi.Controller.api_version('2.7') @wsgi.action('extend') def extend(self, req, id, body): """Extend size of a share.""" return self._extend(req, id, body) @wsgi.Controller.api_version('2.0', '2.6') @wsgi.action('os-shrink') def shrink_legacy(self, req, id, body): """Shrink size of a share.""" return self._shrink(req, id, body) @wsgi.Controller.api_version('2.7') @wsgi.action('shrink') def shrink(self, req, id, body): """Shrink size of a share.""" return self._shrink(req, id, body) @wsgi.Controller.api_version('2.7', '2.7') def manage(self, req, body): body.get('share', {}).pop('is_public', None) detail = self._manage(req, body, allow_dhss_true=False) return detail @wsgi.Controller.api_version("2.8", "2.48") # noqa def manage(self, req, body): # pylint: disable=function-redefined detail = self._manage(req, body, allow_dhss_true=False) return detail @wsgi.Controller.api_version("2.49") # noqa def manage(self, req, body): # pylint: disable=function-redefined detail = self._manage(req, body, allow_dhss_true=True) return detail @wsgi.Controller.api_version('2.7', '2.48') @wsgi.action('unmanage') def unmanage(self, req, id, body=None): return self._unmanage(req, id, body, allow_dhss_true=False) @wsgi.Controller.api_version('2.49') # noqa @wsgi.action('unmanage') def unmanage(self, req, id, body=None): # pylint: disable=function-redefined return self._unmanage(req, id, body, allow_dhss_true=True) 
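    # Pattern note (illustrative, mirroring the redefinitions above): each
    # public action is declared once per microversion range and the wsgi
    # layer picks the matching definition at dispatch time; for example,
    # unmanage() is wired with allow_dhss_true=False for 2.7-2.48 and with
    # allow_dhss_true=True from 2.49 onward.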
@wsgi.Controller.api_version('2.27') @wsgi.action('revert') def revert(self, req, id, body=None): return self._revert(req, id, body) @wsgi.Controller.api_version("2.0") def index(self, req): """Returns a summary list of shares.""" if req.api_version_request < api_version.APIVersionRequest("2.35"): req.GET.pop('export_location_id', None) req.GET.pop('export_location_path', None) if req.api_version_request < api_version.APIVersionRequest("2.36"): req.GET.pop('name~', None) req.GET.pop('description~', None) req.GET.pop('description', None) if req.api_version_request < api_version.APIVersionRequest("2.42"): req.GET.pop('with_count', None) return self._get_shares(req, is_detail=False) @wsgi.Controller.api_version("2.0") def detail(self, req): """Returns a detailed list of shares.""" if req.api_version_request < api_version.APIVersionRequest("2.35"): req.GET.pop('export_location_id', None) req.GET.pop('export_location_path', None) if req.api_version_request < api_version.APIVersionRequest("2.36"): req.GET.pop('name~', None) req.GET.pop('description~', None) req.GET.pop('description', None) return self._get_shares(req, is_detail=True) def create_resource(): return wsgi.Resource(ShareController()) manila-10.0.0/manila/api/v2/router.py0000664000175000017500000005450713656750227017336 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # Copyright 2011 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ WSGI middleware for OpenStack Share API v2. 
""" from manila.api import extensions import manila.api.openstack from manila.api.v1 import limits from manila.api.v1 import scheduler_stats from manila.api.v1 import security_service from manila.api.v1 import share_manage from manila.api.v1 import share_metadata from manila.api.v1 import share_types_extra_specs from manila.api.v1 import share_unmanage from manila.api.v2 import availability_zones from manila.api.v2 import messages from manila.api.v2 import quota_class_sets from manila.api.v2 import quota_sets from manila.api.v2 import services from manila.api.v2 import share_access_metadata from manila.api.v2 import share_accesses from manila.api.v2 import share_export_locations from manila.api.v2 import share_group_snapshots from manila.api.v2 import share_group_type_specs from manila.api.v2 import share_group_types from manila.api.v2 import share_groups from manila.api.v2 import share_instance_export_locations from manila.api.v2 import share_instances from manila.api.v2 import share_network_subnets from manila.api.v2 import share_networks from manila.api.v2 import share_replica_export_locations from manila.api.v2 import share_replicas from manila.api.v2 import share_servers from manila.api.v2 import share_snapshot_export_locations from manila.api.v2 import share_snapshot_instance_export_locations from manila.api.v2 import share_snapshot_instances from manila.api.v2 import share_snapshots from manila.api.v2 import share_types from manila.api.v2 import shares from manila.api import versions class APIRouter(manila.api.openstack.APIRouter): """Route API requests. Routes requests on the OpenStack API to the appropriate controller and method. """ ExtensionManager = extensions.ExtensionManager def _setup_routes(self, mapper): self.resources["versions"] = versions.create_resource() mapper.connect("versions", "/", controller=self.resources["versions"], action="index") mapper.redirect("", "/") self.resources["availability_zones_legacy"] = ( availability_zones.create_resource_legacy()) # TODO(vponomaryov): "os-availability-zone" is deprecated # since v2.7. Remove it when minimum API version becomes equal to # or greater than v2.7. mapper.resource("availability-zone", "os-availability-zone", controller=self.resources["availability_zones_legacy"]) self.resources["availability_zones"] = ( availability_zones.create_resource()) mapper.resource("availability-zone", "availability-zones", controller=self.resources["availability_zones"]) self.resources["services_legacy"] = services.create_resource_legacy() # TODO(vponomaryov): "os-services" is deprecated # since v2.7. Remove it when minimum API version becomes equal to # or greater than v2.7. mapper.resource("service", "os-services", controller=self.resources["services_legacy"]) self.resources["services"] = services.create_resource() mapper.resource("service", "services", controller=self.resources["services"]) self.resources["quota_sets_legacy"] = ( quota_sets.create_resource_legacy()) # TODO(vponomaryov): "os-quota-sets" is deprecated # since v2.7. Remove it when minimum API version becomes equal to # or greater than v2.7. 
mapper.resource("quota-set", "os-quota-sets", controller=self.resources["quota_sets_legacy"], member={"defaults": "GET"}) self.resources["quota_sets"] = quota_sets.create_resource() mapper.resource("quota-set", "quota-sets", controller=self.resources["quota_sets"], member={"defaults": "GET", "detail": "GET"}) self.resources["quota_class_sets_legacy"] = ( quota_class_sets.create_resource_legacy()) # TODO(vponomaryov): "os-quota-class-sets" is deprecated # since v2.7. Remove it when minimum API version becomes equal to # or greater than v2.7. mapper.resource("quota-class-set", "os-quota-class-sets", controller=self.resources["quota_class_sets_legacy"]) self.resources["quota_class_sets"] = quota_class_sets.create_resource() mapper.resource("quota-class-set", "quota-class-sets", controller=self.resources["quota_class_sets"]) self.resources["share_manage"] = share_manage.create_resource() # TODO(vponomaryov): "os-share-manage" is deprecated # since v2.7. Remove it when minimum API version becomes equal to # or greater than v2.7. mapper.resource("share_manage", "os-share-manage", controller=self.resources["share_manage"]) self.resources["share_unmanage"] = share_unmanage.create_resource() # TODO(vponomaryov): "os-share-unmanage" is deprecated # since v2.7. Remove it when minimum API version becomes equal to # or greater than v2.7. mapper.resource("share_unmanage", "os-share-unmanage", controller=self.resources["share_unmanage"], member={"unmanage": "POST"}) self.resources["shares"] = shares.create_resource() mapper.resource("share", "shares", controller=self.resources["shares"], collection={"detail": "GET"}, member={"action": "POST"}) mapper.connect("shares", "/{project_id}/shares/manage", controller=self.resources["shares"], action="manage", conditions={"method": ["POST"]}) self.resources["share_instances"] = share_instances.create_resource() mapper.resource("share_instance", "share_instances", controller=self.resources["share_instances"], collection={"detail": "GET"}, member={"action": "POST"}) self.resources["share_instance_export_locations"] = ( share_instance_export_locations.create_resource()) mapper.connect("share_instances", ("/{project_id}/share_instances/{share_instance_id}/" "export_locations"), controller=self.resources[ "share_instance_export_locations"], action="index", conditions={"method": ["GET"]}) mapper.connect("share_instances", ("/{project_id}/share_instances/{share_instance_id}/" "export_locations/{export_location_uuid}"), controller=self.resources[ "share_instance_export_locations"], action="show", conditions={"method": ["GET"]}) mapper.connect("share_instance", "/{project_id}/shares/{share_id}/instances", controller=self.resources["share_instances"], action="get_share_instances", conditions={"method": ["GET"]}) self.resources["share_export_locations"] = ( share_export_locations.create_resource()) mapper.connect("shares", "/{project_id}/shares/{share_id}/export_locations", controller=self.resources["share_export_locations"], action="index", conditions={"method": ["GET"]}) mapper.connect("shares", ("/{project_id}/shares/{share_id}/" "export_locations/{export_location_uuid}"), controller=self.resources["share_export_locations"], action="show", conditions={"method": ["GET"]}) self.resources["snapshots"] = share_snapshots.create_resource() mapper.resource("snapshot", "snapshots", controller=self.resources["snapshots"], collection={"detail": "GET"}, member={"action": "POST"}) mapper.connect("snapshots", "/{project_id}/snapshots/manage", controller=self.resources["snapshots"], 
action="manage", conditions={"method": ["POST"]}) mapper.connect("snapshots", "/{project_id}/snapshots/{snapshot_id}/access-list", controller=self.resources["snapshots"], action="access_list", conditions={"method": ["GET"]}) self.resources["share_snapshot_export_locations"] = ( share_snapshot_export_locations.create_resource()) mapper.connect("snapshots", "/{project_id}/snapshots/{snapshot_id}/" "export-locations", controller=self.resources[ "share_snapshot_export_locations"], action="index", conditions={"method": ["GET"]}) mapper.connect("snapshots", "/{project_id}/snapshots/{snapshot_id}/" "export-locations/{export_location_id}", controller=self.resources[ "share_snapshot_export_locations"], action="show", conditions={"method": ["GET"]}) self.resources['snapshot_instances'] = ( share_snapshot_instances.create_resource()) mapper.resource("snapshot-instance", "snapshot-instances", controller=self.resources['snapshot_instances'], collection={'detail': 'GET'}, member={'action': 'POST'}) self.resources["share_snapshot_instance_export_locations"] = ( share_snapshot_instance_export_locations.create_resource()) mapper.connect("snapshot-instance", "/{project_id}/snapshot-instances/" "{snapshot_instance_id}/export-locations", controller=self.resources[ "share_snapshot_instance_export_locations"], action="index", conditions={"method": ["GET"]}) mapper.connect("snapshot-instance", "/{project_id}/snapshot-instances/" "{snapshot_instance_id}/export-locations/" "{export_location_id}", controller=self.resources[ "share_snapshot_instance_export_locations"], action="show", conditions={"method": ["GET"]}) self.resources["share_metadata"] = share_metadata.create_resource() share_metadata_controller = self.resources["share_metadata"] mapper.resource("share_metadata", "metadata", controller=share_metadata_controller, parent_resource=dict(member_name="share", collection_name="shares")) mapper.connect("metadata", "/{project_id}/shares/{share_id}/metadata", controller=share_metadata_controller, action="update_all", conditions={"method": ["PUT"]}) self.resources["limits"] = limits.create_resource() mapper.resource("limit", "limits", controller=self.resources["limits"]) self.resources["security_services"] = ( security_service.create_resource()) mapper.resource("security-service", "security-services", controller=self.resources["security_services"], collection={"detail": "GET"}) self.resources["share_networks"] = share_networks.create_resource() mapper.resource(share_networks.RESOURCE_NAME, "share-networks", controller=self.resources["share_networks"], collection={"detail": "GET"}, member={"action": "POST"}) self.resources["share_network_subnets"] = ( share_network_subnets.create_resource()) mapper.connect("share-networks", "/{project_id}/share-networks/{share_network_id}/" "subnets", controller=self.resources["share_network_subnets"], action="create", conditions={"method": ["POST"]}) mapper.connect("share-networks", "/{project_id}/share-networks/{share_network_id}/" "subnets/{share_network_subnet_id}", controller=self.resources["share_network_subnets"], action="delete", conditions={"method": ["DELETE"]}) mapper.connect("share-networks", "/{project_id}/share-networks/{share_network_id}/" "subnets/{share_network_subnet_id}", controller=self.resources["share_network_subnets"], action="show", conditions={"method": ["GET"]}) mapper.connect("share-networks", "/{project_id}/share-networks/{share_network_id}/" "subnets", controller=self.resources["share_network_subnets"], action="index", conditions={"method": ["GET"]}) 
self.resources["share_servers"] = share_servers.create_resource() mapper.resource("share_server", "share-servers", controller=self.resources["share_servers"], member={"action": "POST"}) mapper.connect("details", "/{project_id}/share-servers/{id}/details", controller=self.resources["share_servers"], action="details", conditions={"method": ["GET"]}) mapper.connect("share_servers", "/{project_id}/share-servers/manage", controller=self.resources["share_servers"], action="manage", conditions={"method": ["POST"]}) self.resources["types"] = share_types.create_resource() mapper.resource("type", "types", controller=self.resources["types"], collection={"detail": "GET", "default": "GET"}, member={"action": "POST", "os-share-type-access": "GET", "share_type_access": "GET"}) self.resources["extra_specs"] = ( share_types_extra_specs.create_resource()) mapper.resource("extra_spec", "extra_specs", controller=self.resources["extra_specs"], parent_resource=dict(member_name="type", collection_name="types")) self.resources["scheduler_stats"] = scheduler_stats.create_resource() mapper.connect("pools", "/{project_id}/scheduler-stats/pools", controller=self.resources["scheduler_stats"], action="pools_index", conditions={"method": ["GET"]}) mapper.connect("pools", "/{project_id}/scheduler-stats/pools/detail", controller=self.resources["scheduler_stats"], action="pools_detail", conditions={"method": ["GET"]}) self.resources["share-groups"] = share_groups.create_resource() mapper.resource( "share-group", "share-groups", controller=self.resources["share-groups"], collection={"detail": "GET"}) mapper.connect( "share-groups", "/{project_id}/share-groups/{id}/action", controller=self.resources["share-groups"], action="action", conditions={"method": ["POST"]}) self.resources["share-group-types"] = ( share_group_types.create_resource()) mapper.resource( "share-group-type", "share-group-types", controller=self.resources["share-group-types"], collection={"detail": "GET", "default": "GET"}, member={"action": "POST"}) mapper.connect( "share-group-types", "/{project_id}/share-group-types/{id}/access", controller=self.resources["share-group-types"], action="share_group_type_access", conditions={"method": ["GET"]}) # NOTE(ameade): These routes can be simplified when the following # issue is fixed: https://github.com/bbangert/routes/issues/68 self.resources["group-specs"] = ( share_group_type_specs.create_resource()) mapper.connect( "share-group-types", "/{project_id}/share-group-types/{id}/group-specs", controller=self.resources["group-specs"], action="index", conditions={"method": ["GET"]}) mapper.connect( "share-group-types", "/{project_id}/share-group-types/{id}/group-specs", controller=self.resources["group-specs"], action="create", conditions={"method": ["POST"]}) mapper.connect( "share-group-types", "/{project_id}/share-group-types/{id}/group-specs/{key}", controller=self.resources["group-specs"], action="show", conditions={"method": ["GET"]}) mapper.connect( "share-group-types", "/{project_id}/share-group-types/{id}/group-specs/{key}", controller=self.resources["group-specs"], action="delete", conditions={"method": ["DELETE"]}) mapper.connect( "share-group-types", "/{project_id}/share-group-types/{id}/group-specs/{key}", controller=self.resources["group-specs"], action="update", conditions={"method": ["PUT"]}) self.resources["share-group-snapshots"] = ( share_group_snapshots.create_resource()) mapper.resource( "share-group-snapshot", "share-group-snapshots", controller=self.resources["share-group-snapshots"], 
collection={"detail": "GET"}, member={"members": "GET", "action": "POST"}) mapper.connect( "share-group-snapshots", "/{project_id}/share-group-snapshots/{id}/action", controller=self.resources["share-group-snapshots"], action="action", conditions={"method": ["POST"]}) self.resources['share-replicas'] = share_replicas.create_resource() mapper.resource("share-replica", "share-replicas", controller=self.resources['share-replicas'], collection={'detail': 'GET'}, member={'action': 'POST'}) self.resources["share-replica-export-locations"] = ( share_replica_export_locations.create_resource()) mapper.connect("share-replicas", ("/{project_id}/share-replicas/{share_replica_id}/" "export-locations"), controller=self.resources[ "share-replica-export-locations"], action="index", conditions={"method": ["GET"]}) mapper.connect("share-replicas", ("/{project_id}/share-replicas/{share_replica_id}/" "export-locations/{export_location_uuid}"), controller=self.resources[ "share-replica-export-locations"], action="show", conditions={"method": ["GET"]}) self.resources['messages'] = messages.create_resource() mapper.resource("message", "messages", controller=self.resources['messages']) self.resources["share-access-rules"] = share_accesses.create_resource() mapper.resource( "share-access-rule", "share-access-rules", controller=self.resources["share-access-rules"], collection={"detail": "GET"}) self.resources["access-metadata"] = ( share_access_metadata.create_resource()) access_metadata_controller = self.resources["access-metadata"] mapper.connect("share-access-rules", "/{project_id}/share-access-rules/{access_id}/metadata", controller=access_metadata_controller, action="update", conditions={"method": ["PUT"]}) mapper.connect("share-access-rules", "/{project_id}/share-access-rules/" "{access_id}/metadata/{key}", controller=access_metadata_controller, action="delete", conditions={"method": ["DELETE"]}) manila-10.0.0/manila/api/v2/quota_class_sets.py0000664000175000017500000000662313656750227021366 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack LLC. # Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from manila.api.openstack import wsgi from manila.api.views import quota_class_sets as quota_class_sets_views from manila import db from manila import exception from manila import quota QUOTAS = quota.QUOTAS class QuotaClassSetsMixin(object): """The Quota Class Sets API controller common logic. Mixin class that should be inherited by Quota Class Sets API controllers, which are used for different API URLs and microversions. 
""" resource_name = "quota_class_set" _view_builder_class = quota_class_sets_views.ViewBuilder @wsgi.Controller.authorize("show") def _show(self, req, id): context = req.environ['manila.context'] try: db.authorize_quota_class_context(context, id) except exception.NotAuthorized: raise webob.exc.HTTPForbidden() return self._view_builder.detail_list( req, QUOTAS.get_class_quotas(context, id), id) @wsgi.Controller.authorize("update") def _update(self, req, id, body): context = req.environ['manila.context'] quota_class = id for key in body.get(self.resource_name, {}).keys(): if key in QUOTAS: value = int(body[self.resource_name][key]) try: db.quota_class_update(context, quota_class, key, value) except exception.QuotaClassNotFound: db.quota_class_create(context, quota_class, key, value) except exception.AdminRequired: raise webob.exc.HTTPForbidden() return self._view_builder.detail_list( req, QUOTAS.get_class_quotas(context, quota_class)) class QuotaClassSetsControllerLegacy(QuotaClassSetsMixin, wsgi.Controller): """Deprecated Quota Class Sets API controller. Used by legacy API v1 and v2 microversions from 2.0 to 2.6. Registered under deprecated API URL 'os-quota-class-sets'. """ @wsgi.Controller.api_version('1.0', '2.6') def show(self, req, id): return self._show(req, id) @wsgi.Controller.api_version('1.0', '2.6') def update(self, req, id, body): return self._update(req, id, body) class QuotaClassSetsController(QuotaClassSetsMixin, wsgi.Controller): """Quota Class Sets API controller. Used only by API v2 starting from microversion 2.7. Registered under API URL 'quota-class-sets'. """ @wsgi.Controller.api_version('2.7') def show(self, req, id): return self._show(req, id) @wsgi.Controller.api_version('2.7') def update(self, req, id, body): return self._update(req, id, body) def create_resource_legacy(): return wsgi.Resource(QuotaClassSetsControllerLegacy()) def create_resource(): return wsgi.Resource(QuotaClassSetsController()) manila-10.0.0/manila/api/v2/share_groups.py0000664000175000017500000003370213656750227020511 0ustar zuulzuul00000000000000# Copyright 2015 Alex Meade # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_log import log from oslo_utils import uuidutils import six from six.moves import http_client import webob from webob import exc from manila.api import common from manila.api.openstack import api_version_request as api_version from manila.api.openstack import wsgi from manila.api.views import share_groups as share_group_views from manila import db from manila import exception from manila.i18n import _ from manila.share import share_types from manila.share_group import api as share_group_api from manila.share_group import share_group_types LOG = log.getLogger(__name__) SG_GRADUATION_VERSION = '2.55' class ShareGroupController(wsgi.Controller, wsgi.AdminActionsMixin): """The Share Groups API controller for the OpenStack API.""" resource_name = 'share_group' _view_builder_class = share_group_views.ShareGroupViewBuilder def __init__(self): super(ShareGroupController, self).__init__() self.share_group_api = share_group_api.API() def _get_share_group(self, context, share_group_id): try: return self.share_group_api.get(context, share_group_id) except exception.NotFound: msg = _("Share group %s not found.") % share_group_id raise exc.HTTPNotFound(explanation=msg) @wsgi.Controller.authorize('get') def _show(self, req, id): """Return data about the given share group.""" context = req.environ['manila.context'] share_group = self._get_share_group(context, id) return self._view_builder.detail(req, share_group) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def show(self, req, id): return self._show(req, id) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def show(self, req, id): # pylint: disable=function-redefined return self._show(req, id) @wsgi.Controller.authorize('delete') def _delete_share_group(self, req, id): """Delete a share group.""" context = req.environ['manila.context'] LOG.info("Delete share group with id: %s", id, context=context) share_group = self._get_share_group(context, id) try: self.share_group_api.delete(context, share_group) except exception.InvalidShareGroup as e: raise exc.HTTPConflict(explanation=six.text_type(e)) return webob.Response(status_int=http_client.ACCEPTED) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def delete(self, req, id): return self._delete_share_group(req, id) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def delete(self, req, id): # pylint: disable=function-redefined return self._delete_share_group(req, id) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def index(self, req): return self._get_share_groups(req, is_detail=False) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def index(self, req): # pylint: disable=function-redefined return self._get_share_groups(req, is_detail=False) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def detail(self, req): return self._get_share_groups(req, is_detail=True) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def detail(self, req): # pylint: disable=function-redefined return self._get_share_groups(req, is_detail=True) @wsgi.Controller.authorize('get_all') def _get_share_groups(self, req, is_detail): """Returns a summary or detail list of share groups.""" context = req.environ['manila.context'] search_opts = {} search_opts.update(req.GET) # Remove keys that are not related to share group attrs search_opts.pop('limit', None) search_opts.pop('offset', None) sort_key = search_opts.pop('sort_key', 'created_at') sort_dir = search_opts.pop('sort_dir', 'desc') if req.api_version_request < 
api_version.APIVersionRequest("2.36"): search_opts.pop('name~', None) search_opts.pop('description~', None) if 'group_type_id' in search_opts: search_opts['share_group_type_id'] = search_opts.pop( 'group_type_id') share_groups = self.share_group_api.get_all( context, detailed=is_detail, search_opts=search_opts, sort_dir=sort_dir, sort_key=sort_key, ) limited_list = common.limited(share_groups, req) if is_detail: share_groups = self._view_builder.detail_list(req, limited_list) else: share_groups = self._view_builder.summary_list(req, limited_list) return share_groups @wsgi.Controller.authorize('update') def _update_share_group(self, req, id, body): """Update a share group.""" context = req.environ['manila.context'] if not self.is_valid_body(body, 'share_group'): msg = _("'share_group' is missing from the request body.") raise exc.HTTPBadRequest(explanation=msg) share_group_data = body['share_group'] valid_update_keys = {'name', 'description'} invalid_fields = set(share_group_data.keys()) - valid_update_keys if invalid_fields: msg = _("The fields %s are invalid or not allowed to be updated.") raise exc.HTTPBadRequest(explanation=msg % invalid_fields) share_group = self._get_share_group(context, id) share_group = self.share_group_api.update( context, share_group, share_group_data) return self._view_builder.detail(req, share_group) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) def update(self, req, id, body): return self._update_share_group(req, id, body) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa def update(self, req, id, body): # pylint: disable=function-redefined return self._update_share_group(req, id, body) @wsgi.Controller.authorize('create') def _create(self, req, body): """Creates a new share group.""" context = req.environ['manila.context'] if not self.is_valid_body(body, 'share_group'): msg = _("'share_group' is missing from the request body.") raise exc.HTTPBadRequest(explanation=msg) share_group = body['share_group'] valid_fields = { 'name', 'description', 'share_types', 'share_group_type_id', 'source_share_group_snapshot_id', 'share_network_id', 'availability_zone', } invalid_fields = set(share_group.keys()) - valid_fields if invalid_fields: msg = _("The fields %s are invalid.") % invalid_fields raise exc.HTTPBadRequest(explanation=msg) if ('share_types' in share_group and 'source_share_group_snapshot_id' in share_group): msg = _("Cannot supply both 'share_types' and " "'source_share_group_snapshot_id' attributes.") raise exc.HTTPBadRequest(explanation=msg) if not (share_group.get('share_types') or 'source_share_group_snapshot_id' in share_group): default_share_type = share_types.get_default_share_type() if default_share_type: share_group['share_types'] = [default_share_type['id']] else: msg = _("Must specify at least one share type as a default " "share type has not been configured.") raise exc.HTTPBadRequest(explanation=msg) kwargs = {} if 'name' in share_group: kwargs['name'] = share_group.get('name') if 'description' in share_group: kwargs['description'] = share_group.get('description') _share_types = share_group.get('share_types') if _share_types: if not all([uuidutils.is_uuid_like(st) for st in _share_types]): msg = _("The 'share_types' attribute must be a list of uuids") raise exc.HTTPBadRequest(explanation=msg) kwargs['share_type_ids'] = _share_types if ('share_network_id' in share_group and 'source_share_group_snapshot_id' in share_group): msg = _("Cannot supply both 'share_network_id' and " "'source_share_group_snapshot_id' attributes as 
the share " "network is inherited from the source.") raise exc.HTTPBadRequest(explanation=msg) availability_zone = share_group.get('availability_zone') if availability_zone: if 'source_share_group_snapshot_id' in share_group: msg = _( "Cannot supply both 'availability_zone' and " "'source_share_group_snapshot_id' attributes as the " "availability zone is inherited from the source.") raise exc.HTTPBadRequest(explanation=msg) try: az = db.availability_zone_get(context, availability_zone) kwargs['availability_zone_id'] = az.id kwargs['availability_zone'] = az.name except exception.AvailabilityZoneNotFound as e: raise exc.HTTPNotFound(explanation=six.text_type(e)) if 'source_share_group_snapshot_id' in share_group: source_share_group_snapshot_id = share_group.get( 'source_share_group_snapshot_id') if not uuidutils.is_uuid_like(source_share_group_snapshot_id): msg = _("The 'source_share_group_snapshot_id' attribute " "must be a uuid.") raise exc.HTTPBadRequest(explanation=six.text_type(msg)) kwargs['source_share_group_snapshot_id'] = ( source_share_group_snapshot_id) elif 'share_network_id' in share_group: share_network_id = share_group.get('share_network_id') if not uuidutils.is_uuid_like(share_network_id): msg = _("The 'share_network_id' attribute must be a uuid.") raise exc.HTTPBadRequest(explanation=six.text_type(msg)) kwargs['share_network_id'] = share_network_id if 'share_group_type_id' in share_group: share_group_type_id = share_group.get('share_group_type_id') if not uuidutils.is_uuid_like(share_group_type_id): msg = _("The 'share_group_type_id' attribute must be a uuid.") raise exc.HTTPBadRequest(explanation=six.text_type(msg)) kwargs['share_group_type_id'] = share_group_type_id else: # get default def_share_group_type = share_group_types.get_default() if def_share_group_type: kwargs['share_group_type_id'] = def_share_group_type['id'] else: msg = _("Must specify a share group type as a default " "share group type has not been configured.") raise exc.HTTPBadRequest(explanation=msg) try: new_share_group = self.share_group_api.create(context, **kwargs) except exception.InvalidShareGroupSnapshot as e: raise exc.HTTPConflict(explanation=six.text_type(e)) except (exception.ShareGroupSnapshotNotFound, exception.InvalidInput) as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) return self._view_builder.detail( req, {k: v for k, v in new_share_group.items()}) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) @wsgi.response(202) def create(self, req, body): return self._create(req, body) @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa @wsgi.response(202) def create(self, req, body): # pylint: disable=function-redefined return self._create(req, body) def _update(self, *args, **kwargs): db.share_group_update(*args, **kwargs) def _get(self, *args, **kwargs): return self.share_group_api.get(*args, **kwargs) def _delete(self, context, resource, force=True): # Delete all share group snapshots for snap in resource['snapshots']: db.share_group_snapshot_destroy(context, snap['id']) # Delete all shares in share group for share in db.get_all_shares_by_share_group(context, resource['id']): db.share_delete(context, share['id']) db.share_group_destroy(context.elevated(), resource['id']) @wsgi.Controller.api_version('2.31', '2.54', experimental=True) @wsgi.action('reset_status') def share_group_reset_status(self, req, id, body): return self._reset_status(req, id, body) # pylint: disable=function-redefined @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa 
@wsgi.action('reset_status') def share_group_reset_status(self, req, id, body): return self._reset_status(req, id, body) # pylint: enable=function-redefined @wsgi.Controller.api_version('2.31', '2.54', experimental=True) @wsgi.action('force_delete') def share_group_force_delete(self, req, id, body): return self._force_delete(req, id, body) # pylint: disable=function-redefined @wsgi.Controller.api_version(SG_GRADUATION_VERSION) # noqa @wsgi.action('force_delete') def share_group_force_delete(self, req, id, body): return self._force_delete(req, id, body) def create_resource(): return wsgi.Resource(ShareGroupController()) manila-10.0.0/manila/api/v2/availability_zones.py0000664000175000017500000000433613656750227021701 0ustar zuulzuul00000000000000# Copyright (c) 2013 OpenStack Foundation # Copyright (c) 2015 Mirantis inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api.openstack import wsgi from manila.api.views import availability_zones as availability_zones_views from manila import db class AvailabilityZoneMixin(object): """The Availability Zone API controller common logic. Mixin class that should be inherited by Availability Zone API controllers, which are used for different API URLs and microversions. """ resource_name = "availability_zone" _view_builder_class = availability_zones_views.ViewBuilder @wsgi.Controller.authorize("index") def _index(self, req): """Describe all known availability zones.""" views = db.availability_zone_get_all(req.environ['manila.context']) return self._view_builder.detail_list(views) class AvailabilityZoneControllerLegacy(AvailabilityZoneMixin, wsgi.Controller): """Deprecated Availability Zone API controller. Used by legacy API v1 and v2 microversions from 2.0 to 2.6. Registered under deprecated API URL 'os-availability-zone'. """ @wsgi.Controller.api_version('1.0', '2.6') def index(self, req): return self._index(req) class AvailabilityZoneController(AvailabilityZoneMixin, wsgi.Controller): """Availability Zone API controller. Used only by API v2 starting from microversion 2.7. Registered under API URL 'availability-zones'. """ @wsgi.Controller.api_version('2.7') def index(self, req): return self._index(req) def create_resource_legacy(): return wsgi.Resource(AvailabilityZoneControllerLegacy()) def create_resource(): return wsgi.Resource(AvailabilityZoneController()) manila-10.0.0/manila/api/v2/quota_sets.py0000664000175000017500000003460013656750227020175 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from oslo_utils import strutils from six.moves import http_client from six.moves.urllib import parse import webob from manila.api.openstack import api_version_request as api_version from manila.api.openstack import wsgi from manila.api.views import quota_sets as quota_sets_views from manila import db from manila import exception from manila.i18n import _ from manila import quota QUOTAS = quota.QUOTAS LOG = log.getLogger(__name__) NON_QUOTA_KEYS = ('tenant_id', 'id', 'force', 'share_type') class QuotaSetsMixin(object): """The Quota Sets API controller common logic. Mixin class that should be inherited by Quota Sets API controllers, which are used for different API URLs and microversions. """ resource_name = "quota_set" _view_builder_class = quota_sets_views.ViewBuilder @staticmethod def _validate_quota_limit(limit, minimum, maximum, force_update): # NOTE: -1 is a flag value for unlimited if limit < -1: msg = _("Quota limit must be -1 or greater.") raise webob.exc.HTTPBadRequest(explanation=msg) if ((limit < minimum and not force_update) and (maximum != -1 or (maximum == -1 and limit != -1))): msg = _("Quota limit must be greater than %s.") % minimum raise webob.exc.HTTPBadRequest(explanation=msg) if maximum != -1 and limit > maximum and not force_update: msg = _("Quota limit must be less than %s.") % maximum raise webob.exc.HTTPBadRequest(explanation=msg) @staticmethod def _validate_user_id_and_share_type_args(user_id, share_type): if user_id and share_type: msg = _("'user_id' and 'share_type' values are mutually exclusive") raise webob.exc.HTTPBadRequest(explanation=msg) @staticmethod def _get_share_type_id(context, share_type_name_or_id): if share_type_name_or_id: share_type = db.share_type_get_by_name_or_id( context, share_type_name_or_id) if share_type: return share_type['id'] msg = _("Share type with name or id '%s' not found.") % ( share_type_name_or_id) raise webob.exc.HTTPNotFound(explanation=msg) @staticmethod def _ensure_share_type_arg_is_absent(req): params = parse.parse_qs(req.environ.get('QUERY_STRING', '')) share_type = params.get('share_type', [None])[0] if share_type: msg = _("'share_type' key is not supported by this microversion. " "Use 2.39 or greater microversion to be able " "to use 'share_type' quotas.") raise webob.exc.HTTPBadRequest(explanation=msg) @staticmethod def _ensure_specific_microversion_args_are_absent(body, keys, microversion): body = body.get('quota_set', body) for key in keys: if body.get(key): msg = (_("'%(key)s' key is not supported by this " "microversion. 
Use %(microversion)s or greater " "microversion to be able to use '%(key)s' quotas.") % {"key": key, "microversion": microversion}) raise webob.exc.HTTPBadRequest(explanation=msg) def _get_quotas(self, context, project_id, user_id=None, share_type_id=None, usages=False): self._validate_user_id_and_share_type_args(user_id, share_type_id) if user_id: values = QUOTAS.get_user_quotas( context, project_id, user_id, usages=usages) elif share_type_id: values = QUOTAS.get_share_type_quotas( context, project_id, share_type_id, usages=usages) else: values = QUOTAS.get_project_quotas( context, project_id, usages=usages) if usages: return values return {k: v['limit'] for k, v in values.items()} @wsgi.Controller.authorize("show") def _show(self, req, id, detail=False): context = req.environ['manila.context'] params = parse.parse_qs(req.environ.get('QUERY_STRING', '')) user_id = params.get('user_id', [None])[0] share_type = params.get('share_type', [None])[0] try: db.authorize_project_context(context, id) # _get_quotas use 'usages' to indicate whether retrieve additional # attributes, so pass detail to the argument. share_type_id = self._get_share_type_id(context, share_type) quotas = self._get_quotas( context, id, user_id, share_type_id, usages=detail) return self._view_builder.detail_list( req, quotas, id, share_type_id) except exception.NotAuthorized: raise webob.exc.HTTPForbidden() @wsgi.Controller.authorize('show') def _defaults(self, req, id): context = req.environ['manila.context'] return self._view_builder.detail_list( req, QUOTAS.get_defaults(context), id) @wsgi.Controller.authorize("update") def _update(self, req, id, body): context = req.environ['manila.context'] project_id = id bad_keys = [] force_update = False params = parse.parse_qs(req.environ.get('QUERY_STRING', '')) user_id = params.get('user_id', [None])[0] share_type = params.get('share_type', [None])[0] self._validate_user_id_and_share_type_args(user_id, share_type) share_type_id = self._get_share_type_id(context, share_type) body = body.get('quota_set', {}) if share_type and body.get('share_groups', body.get('share_group_snapshots')): msg = _("Share type quotas cannot constrain share groups and " "share group snapshots.") raise webob.exc.HTTPBadRequest(explanation=msg) try: settable_quotas = QUOTAS.get_settable_quotas( context, project_id, user_id=user_id, share_type_id=share_type_id) except exception.NotAuthorized: raise webob.exc.HTTPForbidden() for key, value in body.items(): if key == 'share_networks' and share_type_id: msg = _("'share_networks' quota cannot be set for share type. 
" "It can be set only for project or user.") raise webob.exc.HTTPBadRequest(explanation=msg) elif (key not in QUOTAS and key not in NON_QUOTA_KEYS): bad_keys.append(key) elif key == 'force': force_update = strutils.bool_from_string(value) elif key not in NON_QUOTA_KEYS and value: try: value = int(value) except (ValueError, TypeError): msg = _("Quota '%(value)s' for %(key)s should be " "integer.") % {'value': value, 'key': key} LOG.warning(msg) raise webob.exc.HTTPBadRequest(explanation=msg) LOG.debug("Force update quotas: %s.", force_update) if len(bad_keys) > 0: msg = _("Bad key(s) %s in quota_set.") % ",".join(bad_keys) raise webob.exc.HTTPBadRequest(explanation=msg) try: quotas = self._get_quotas( context, id, user_id=user_id, share_type_id=share_type_id, usages=True) except exception.NotAuthorized: raise webob.exc.HTTPForbidden() for key, value in body.items(): if key in NON_QUOTA_KEYS or (not value and value != 0): continue # validate whether already used and reserved exceeds the new # quota, this check will be ignored if admin want to force # update try: value = int(value) except (ValueError, TypeError): msg = _("Quota '%(value)s' for %(key)s should be " "integer.") % {'value': value, 'key': key} LOG.warning(msg) raise webob.exc.HTTPBadRequest(explanation=msg) if force_update is False and value >= 0: quota_value = quotas.get(key) if quota_value and quota_value['limit'] >= 0: quota_used = (quota_value['in_use'] + quota_value['reserved']) LOG.debug("Quota %(key)s used: %(quota_used)s, " "value: %(value)s.", {'key': key, 'quota_used': quota_used, 'value': value}) if quota_used > value: msg = (_("Quota value %(value)s for %(key)s is " "smaller than already used and reserved " "%(quota_used)s.") % {'value': value, 'key': key, 'quota_used': quota_used}) raise webob.exc.HTTPBadRequest(explanation=msg) minimum = settable_quotas[key]['minimum'] maximum = settable_quotas[key]['maximum'] self._validate_quota_limit(value, minimum, maximum, force_update) try: db.quota_create( context, project_id, key, value, user_id=user_id, share_type_id=share_type_id) except exception.QuotaExists: db.quota_update( context, project_id, key, value, user_id=user_id, share_type_id=share_type_id) except exception.AdminRequired: raise webob.exc.HTTPForbidden() return self._view_builder.detail_list( req, self._get_quotas( context, id, user_id=user_id, share_type_id=share_type_id), share_type=share_type_id, ) @wsgi.Controller.authorize("delete") def _delete(self, req, id): context = req.environ['manila.context'] params = parse.parse_qs(req.environ.get('QUERY_STRING', '')) user_id = params.get('user_id', [None])[0] share_type = params.get('share_type', [None])[0] self._validate_user_id_and_share_type_args(user_id, share_type) try: db.authorize_project_context(context, id) if user_id: QUOTAS.destroy_all_by_project_and_user(context, id, user_id) elif share_type: share_type_id = self._get_share_type_id(context, share_type) QUOTAS.destroy_all_by_project_and_share_type( context, id, share_type_id) else: QUOTAS.destroy_all_by_project(context, id) return webob.Response(status_int=http_client.ACCEPTED) except exception.NotAuthorized: raise webob.exc.HTTPForbidden() class QuotaSetsControllerLegacy(QuotaSetsMixin, wsgi.Controller): """Deprecated Quota Sets API controller. Used by legacy API v1 and v2 microversions from 2.0 to 2.6. Registered under deprecated API URL 'os-quota-sets'. 
""" @wsgi.Controller.api_version('1.0', '2.6') def show(self, req, id): self._ensure_share_type_arg_is_absent(req) return self._show(req, id) @wsgi.Controller.api_version('1.0', '2.6') def defaults(self, req, id): return self._defaults(req, id) @wsgi.Controller.api_version('1.0', '2.6') def update(self, req, id, body): self._ensure_share_type_arg_is_absent(req) self._ensure_specific_microversion_args_are_absent( body, ['share_groups', 'share_group_snapshots'], "2.40") self._ensure_specific_microversion_args_are_absent( body, ['share_replicas', 'replica_gigabytes'], "2.53") return self._update(req, id, body) @wsgi.Controller.api_version('1.0', '2.6') def delete(self, req, id): self._ensure_share_type_arg_is_absent(req) return self._delete(req, id) class QuotaSetsController(QuotaSetsMixin, wsgi.Controller): """Quota Sets API controller. Used only by API v2 starting from microversion 2.7. Registered under API URL 'quota-sets'. """ @wsgi.Controller.api_version('2.7') def show(self, req, id): if req.api_version_request < api_version.APIVersionRequest("2.39"): self._ensure_share_type_arg_is_absent(req) return self._show(req, id) @wsgi.Controller.api_version('2.25') def detail(self, req, id): if req.api_version_request < api_version.APIVersionRequest("2.39"): self._ensure_share_type_arg_is_absent(req) return self._show(req, id, True) @wsgi.Controller.api_version('2.7') def defaults(self, req, id): return self._defaults(req, id) @wsgi.Controller.api_version('2.7') def update(self, req, id, body): if req.api_version_request < api_version.APIVersionRequest("2.39"): self._ensure_share_type_arg_is_absent(req) elif req.api_version_request < api_version.APIVersionRequest("2.40"): self._ensure_specific_microversion_args_are_absent( body, ['share_groups', 'share_group_snapshots'], "2.40") elif req.api_version_request < api_version.APIVersionRequest("2.53"): self._ensure_specific_microversion_args_are_absent( body, ['share_replicas', 'replica_gigabytes'], "2.53") return self._update(req, id, body) @wsgi.Controller.api_version('2.7') def delete(self, req, id): if req.api_version_request < api_version.APIVersionRequest("2.39"): self._ensure_share_type_arg_is_absent(req) return self._delete(req, id) def create_resource_legacy(): return wsgi.Resource(QuotaSetsControllerLegacy()) def create_resource(): return wsgi.Resource(QuotaSetsController()) manila-10.0.0/manila/api/v2/share_network_subnets.py0000664000175000017500000002015713656750227022426 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from manila.api import common from oslo_db import exception as db_exception from oslo_log import log from six.moves import http_client import webob from webob import exc from manila.api.openstack import wsgi from manila.api.views import share_network_subnets as subnet_views from manila.db import api as db_api from manila import exception from manila.i18n import _ from manila.share import rpcapi as share_rpcapi LOG = log.getLogger(__name__) class ShareNetworkSubnetController(wsgi.Controller): """The Share Network Subnet API controller for the OpenStack API.""" resource_name = 'share_network_subnet' _view_builder_class = subnet_views.ViewBuilder def __init__(self): super(ShareNetworkSubnetController, self).__init__() self.share_rpcapi = share_rpcapi.ShareAPI() @wsgi.Controller.api_version("2.51") @wsgi.Controller.authorize def index(self, req, share_network_id): """Returns a list of share network subnets.""" context = req.environ['manila.context'] try: share_network = db_api.share_network_get(context, share_network_id) except exception.ShareNetworkNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) return self._view_builder.build_share_network_subnets( req, share_network.get('share_network_subnets')) def _all_share_servers_are_auto_deletable(self, share_network_subnet): return all([ss['is_auto_deletable'] for ss in share_network_subnet['share_servers']]) @wsgi.Controller.api_version('2.51') @wsgi.Controller.authorize def delete(self, req, share_network_id, share_network_subnet_id): """Delete specified share network subnet.""" context = req.environ['manila.context'] try: db_api.share_network_get(context, share_network_id) except exception.ShareNetworkNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) try: share_network_subnet = db_api.share_network_subnet_get( context, share_network_subnet_id) except exception.ShareNetworkSubnetNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) for share_server in share_network_subnet['share_servers'] or []: shares = db_api.share_instances_get_all_by_share_server( context, share_server['id']) if shares: msg = _("Cannot delete share network subnet %(id)s, it has " "one or more shares.") % { 'id': share_network_subnet_id} LOG.error(msg) raise exc.HTTPConflict(explanation=msg) # NOTE(silvacarlose): Do not allow the deletion of any share server # if any of them has the flag is_auto_deletable = False if not self._all_share_servers_are_auto_deletable( share_network_subnet): msg = _("The service cannot determine if there are any " "non-managed shares on the share network subnet %(id)s," "so it cannot be deleted. Please contact the cloud " "administrator to rectify.") % { 'id': share_network_subnet_id} LOG.error(msg) raise exc.HTTPConflict(explanation=msg) for share_server in share_network_subnet['share_servers']: self.share_rpcapi.delete_share_server(context, share_server) db_api.share_network_subnet_delete(context, share_network_subnet_id) return webob.Response(status_int=http_client.ACCEPTED) def _validate_subnet(self, context, share_network_id, az=None): """Validate the az for the given subnet. If az is None, the method will search for an existent default subnet. In case of a given AZ, validates if there's an existent subnet for it. """ msg = ("Another share network subnet was found in the " "specified availability zone. Only one share network " "subnet is allowed per availability zone for share " "network %s." 
% share_network_id) if az is None: default_subnet = db_api.share_network_subnet_get_default_subnet( context, share_network_id) if default_subnet is not None: raise exc.HTTPConflict(explanation=msg) else: az_subnet = ( db_api.share_network_subnet_get_by_availability_zone_id( context, share_network_id, az['id']) ) # If the 'availability_zone_id' is not None, we found a conflict, # otherwise we just have found the default subnet if az_subnet and az_subnet['availability_zone_id']: raise exc.HTTPConflict(explanation=msg) @wsgi.Controller.api_version("2.51") @wsgi.Controller.authorize def create(self, req, share_network_id, body): """Add a new share network subnet into the share network.""" context = req.environ['manila.context'] if not self.is_valid_body(body, 'share-network-subnet'): msg = _("Share Network Subnet is missing from the request body.") raise exc.HTTPBadRequest(explanation=msg) data = body['share-network-subnet'] data['share_network_id'] = share_network_id common.check_net_id_and_subnet_id(data) try: db_api.share_network_get(context, share_network_id) except exception.ShareNetworkNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) availability_zone = data.pop('availability_zone', None) subnet_az = None if availability_zone: try: subnet_az = db_api.availability_zone_get(context, availability_zone) except exception.AvailabilityZoneNotFound: msg = _("The provided availability zone %s does not " "exist.") % availability_zone raise exc.HTTPBadRequest(explanation=msg) self._validate_subnet(context, share_network_id, az=subnet_az) try: data['availability_zone_id'] = ( subnet_az['id'] if subnet_az is not None else None) share_network_subnet = db_api.share_network_subnet_create( context, data) except db_exception.DBError as e: msg = _('Could not create the share network subnet.') LOG.error(e) raise exc.HTTPInternalServerError(explanation=msg) share_network_subnet = db_api.share_network_subnet_get( context, share_network_subnet['id']) return self._view_builder.build_share_network_subnet( req, share_network_subnet) @wsgi.Controller.api_version('2.51') @wsgi.Controller.authorize def show(self, req, share_network_id, share_network_subnet_id): """Show share network subnet.""" context = req.environ['manila.context'] try: db_api.share_network_get(context, share_network_id) except exception.ShareNetworkNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) try: share_network_subnet = db_api.share_network_subnet_get( context, share_network_subnet_id) except exception.ShareNetworkSubnetNotFound as e: raise exc.HTTPNotFound(explanation=e.msg) return self._view_builder.build_share_network_subnet( req, share_network_subnet) def create_resource(): return wsgi.Resource(ShareNetworkSubnetController()) manila-10.0.0/manila/api/v2/services.py0000664000175000017500000001007413656750227017630 0ustar zuulzuul00000000000000# Copyright 2012 IBM Corp. # Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
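# NOTE: ServiceMixin._index() below lists all registered services and
# supports exact-match filtering via the 'host', 'binary', 'zone', 'state'
# and 'status' query parameters. ServiceMixin._update() enables or disables
# scheduling for a service; the URL id segment must be 'enable' or 'disable'
# and the request body must identify the target service, for example
# (illustrative values):
#
#   {"host": "<hostname>", "binary": "manila-share"}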
import webob.exc from manila.api.openstack import wsgi from manila.api.views import services as services_views from manila import db from manila import utils class ServiceMixin(object): """The Services API controller common logic. Mixin class that should be inherited by Services API controllers, which are used for different API URLs and microversions. """ resource_name = "service" _view_builder_class = services_views.ViewBuilder @wsgi.Controller.authorize("index") def _index(self, req): """Return a list of all running services.""" context = req.environ['manila.context'] all_services = db.service_get_all(context) services = [] for service in all_services: service = { 'id': service['id'], 'binary': service['binary'], 'host': service['host'], 'zone': service['availability_zone']['name'], 'status': 'disabled' if service['disabled'] else 'enabled', 'state': 'up' if utils.service_is_up(service) else 'down', 'updated_at': service['updated_at'], } services.append(service) search_opts = [ 'host', 'binary', 'zone', 'state', 'status', ] for search_opt in search_opts: if search_opt in req.GET: value = req.GET[search_opt] services = [s for s in services if s[search_opt] == value] if len(services) == 0: break return self._view_builder.detail_list(services) @wsgi.Controller.authorize("update") def _update(self, req, id, body): """Enable/Disable scheduling for a service.""" context = req.environ['manila.context'] if id == "enable": data = {'disabled': False} elif id == "disable": data = {'disabled': True} else: raise webob.exc.HTTPNotFound("Unknown action '%s'" % id) try: data['host'] = body['host'] data['binary'] = body['binary'] except (TypeError, KeyError): raise webob.exc.HTTPBadRequest() svc = db.service_get_by_args(context, data['host'], data['binary']) db.service_update( context, svc['id'], {'disabled': data['disabled']}) return self._view_builder.summary(data) class ServiceControllerLegacy(ServiceMixin, wsgi.Controller): """Deprecated Services API controller. Used by legacy API v1 and v2 microversions from 2.0 to 2.6. Registered under deprecated API URL 'os-services'. """ @wsgi.Controller.api_version('1.0', '2.6') def index(self, req): return self._index(req) @wsgi.Controller.api_version('1.0', '2.6') def update(self, req, id, body): return self._update(req, id, body) class ServiceController(ServiceMixin, wsgi.Controller): """Services API controller. Used only by API v2 starting from microversion 2.7. Registered under API URL 'services'. """ @wsgi.Controller.api_version('2.7') def index(self, req): return self._index(req) @wsgi.Controller.api_version('2.7') def update(self, req, id, body): return self._update(req, id, body) def create_resource_legacy(): return wsgi.Resource(ServiceControllerLegacy()) def create_resource(): return wsgi.Resource(ServiceController()) manila-10.0.0/manila/api/v2/share_snapshot_instance_export_locations.py0000664000175000017500000000547213656750227026374 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from manila.api.openstack import wsgi from manila.api.views import share_snapshot_export_locations from manila.db import api as db_api from manila import exception from manila.i18n import _ from manila import policy class ShareSnapshotInstanceExportLocationController(wsgi.Controller): def __init__(self): self._view_builder_class = ( share_snapshot_export_locations.ViewBuilder) self.resource_name = 'share_snapshot_instance_export_location' super(ShareSnapshotInstanceExportLocationController, self).__init__() @wsgi.Controller.api_version('2.32') @wsgi.Controller.authorize def index(self, req, snapshot_instance_id): context = req.environ['manila.context'] instance = self._verify_snapshot_instance( context, snapshot_instance_id) export_locations = ( db_api.share_snapshot_instance_export_locations_get_all( context, instance['id'])) return self._view_builder.list_export_locations(req, export_locations) @wsgi.Controller.api_version('2.32') @wsgi.Controller.authorize def show(self, req, snapshot_instance_id, export_location_id): context = req.environ['manila.context'] self._verify_snapshot_instance(context, snapshot_instance_id) export_location = db_api.share_snapshot_instance_export_location_get( context, export_location_id) return self._view_builder.detail_export_location(req, export_location) def _verify_snapshot_instance(self, context, snapshot_instance_id): try: snapshot_instance = db_api.share_snapshot_instance_get( context, snapshot_instance_id) share = db_api.share_get( context, snapshot_instance.share_instance['share_id']) if not share['is_public']: policy.check_policy(context, 'share', 'get', share) except exception.NotFound: msg = _("Snapshot instance '%s' not found.") % snapshot_instance_id raise exc.HTTPNotFound(explanation=msg) return snapshot_instance def create_resource(): return wsgi.Resource(ShareSnapshotInstanceExportLocationController()) manila-10.0.0/manila/api/v2/share_snapshots.py0000664000175000017500000003034613656750227021215 0ustar zuulzuul00000000000000# Copyright 2013 NetApp # Copyright 2015 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
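# NOTE: This controller extends the v1 share snapshot controller with, among
# other things, snapshot manage/unmanage (microversion 2.12+), access rules
# for mountable snapshots (allow_access/deny_access/access_list, microversion
# 2.32+), and the admin actions 'reset_status'/'force_delete' (exposed as
# 'os-reset_status'/'os-force_delete' before microversion 2.7).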
"""The share snapshots api.""" from oslo_log import log from six.moves import http_client import webob from webob import exc from manila.api import common from manila.api.openstack import api_version_request as api_version from manila.api.openstack import wsgi from manila.api.v1 import share_snapshots from manila.api.views import share_snapshots as snapshot_views from manila.common import constants from manila import exception from manila.i18n import _ from manila import share LOG = log.getLogger(__name__) class ShareSnapshotsController(share_snapshots.ShareSnapshotMixin, wsgi.Controller, wsgi.AdminActionsMixin): """The Share Snapshots API V2 controller for the OpenStack API.""" resource_name = 'share_snapshot' _view_builder_class = snapshot_views.ViewBuilder def __init__(self): super(ShareSnapshotsController, self).__init__() self.share_api = share.API() @wsgi.Controller.authorize('unmanage_snapshot') def _unmanage(self, req, id, body=None, allow_dhss_true=False): """Unmanage a share snapshot.""" context = req.environ['manila.context'] LOG.info("Unmanage share snapshot with id: %s.", id) try: snapshot = self.share_api.get_snapshot(context, id) share = self.share_api.get(context, snapshot['share_id']) if not allow_dhss_true and share.get('share_server_id'): msg = _("Operation 'unmanage_snapshot' is not supported for " "snapshots of shares that are created with share" " servers (created with share-networks).") raise exc.HTTPForbidden(explanation=msg) elif share.get('has_replicas'): msg = _("Share %s has replicas. Snapshots of this share " "cannot currently be unmanaged until all replicas " "are removed.") % share['id'] raise exc.HTTPConflict(explanation=msg) elif snapshot['status'] in constants.TRANSITIONAL_STATUSES: msg = _("Snapshot with transitional state cannot be " "unmanaged. Snapshot '%(s_id)s' is in '%(state)s' " "state.") % {'state': snapshot['status'], 's_id': snapshot['id']} raise exc.HTTPForbidden(explanation=msg) self.share_api.unmanage_snapshot(context, snapshot, share['host']) except (exception.ShareSnapshotNotFound, exception.ShareNotFound) as e: raise exc.HTTPNotFound(explanation=e) return webob.Response(status_int=http_client.ACCEPTED) @wsgi.Controller.authorize('manage_snapshot') def _manage(self, req, body): """Instruct Manila to manage an existing snapshot. Required HTTP Body: .. code-block:: json { "snapshot": { "share_id": , "provider_location": } } Optional elements in 'snapshot' are: name A name for the new snapshot. description A description for the new snapshot. driver_options Driver specific dicts for the existing snapshot. """ context = req.environ['manila.context'] snapshot_data = self._validate_manage_parameters(context, body) # NOTE(vponomaryov): compatibility actions are required between API and # DB layers for 'name' and 'description' API params that are # represented in DB as 'display_name' and 'display_description' # appropriately. 
name = snapshot_data.get('display_name', snapshot_data.get('name')) description = snapshot_data.get( 'display_description', snapshot_data.get('description')) snapshot = { 'share_id': snapshot_data['share_id'], 'provider_location': snapshot_data['provider_location'], 'display_name': name, 'display_description': description, } driver_options = snapshot_data.get('driver_options', {}) try: snapshot_ref = self.share_api.manage_snapshot(context, snapshot, driver_options) except (exception.ShareNotFound, exception.ShareSnapshotNotFound) as e: raise exc.HTTPNotFound(explanation=e) except (exception.InvalidShare, exception.ManageInvalidShareSnapshot) as e: raise exc.HTTPConflict(explanation=e) return self._view_builder.detail(req, snapshot_ref) def _validate_manage_parameters(self, context, body): if not (body and self.is_valid_body(body, 'snapshot')): msg = _("Snapshot entity not found in request body.") raise exc.HTTPUnprocessableEntity(explanation=msg) data = body['snapshot'] required_parameters = ('share_id', 'provider_location') self._validate_parameters(data, required_parameters) return data def _validate_parameters(self, data, required_parameters, fix_response=False): if fix_response: exc_response = exc.HTTPBadRequest else: exc_response = exc.HTTPUnprocessableEntity for parameter in required_parameters: if parameter not in data: msg = _("Required parameter %s not found.") % parameter raise exc_response(explanation=msg) if not data.get(parameter): msg = _("Required parameter %s is empty.") % parameter raise exc_response(explanation=msg) def _allow(self, req, id, body, enable_ipv6=False): context = req.environ['manila.context'] if not (body and self.is_valid_body(body, 'allow_access')): msg = _("Access data not found in request body.") raise exc.HTTPBadRequest(explanation=msg) access_data = body.get('allow_access') required_parameters = ('access_type', 'access_to') self._validate_parameters(access_data, required_parameters, fix_response=True) access_type = access_data['access_type'] access_to = access_data['access_to'] common.validate_access(access_type=access_type, access_to=access_to, enable_ipv6=enable_ipv6) snapshot = self.share_api.get_snapshot(context, id) self._check_mount_snapshot_support(context, snapshot) try: access = self.share_api.snapshot_allow_access( context, snapshot, access_type, access_to) except exception.ShareSnapshotAccessExists as e: raise webob.exc.HTTPBadRequest(explanation=e.msg) return self._view_builder.detail_access(req, access) def _deny(self, req, id, body): context = req.environ['manila.context'] if not (body and self.is_valid_body(body, 'deny_access')): msg = _("Access data not found in request body.") raise exc.HTTPBadRequest(explanation=msg) access_data = body.get('deny_access') self._validate_parameters( access_data, ('access_id',), fix_response=True) access_id = access_data['access_id'] snapshot = self.share_api.get_snapshot(context, id) self._check_mount_snapshot_support(context, snapshot) access = self.share_api.snapshot_access_get(context, access_id) if access['share_snapshot_id'] != snapshot['id']: msg = _("Access rule provided is not associated with given" " snapshot.") raise webob.exc.HTTPBadRequest(explanation=msg) self.share_api.snapshot_deny_access(context, snapshot, access) return webob.Response(status_int=http_client.ACCEPTED) def _check_mount_snapshot_support(self, context, snapshot): share = self.share_api.get(context, snapshot['share_id']) if not share['mount_snapshot_support']: msg = _("Cannot control access to the snapshot %(snap)s since the 
" "parent share %(share)s does not support mounting its " "snapshots.") % {'snap': snapshot['id'], 'share': share['id']} raise exc.HTTPBadRequest(explanation=msg) def _access_list(self, req, snapshot_id): context = req.environ['manila.context'] snapshot = self.share_api.get_snapshot(context, snapshot_id) self._check_mount_snapshot_support(context, snapshot) access_list = self.share_api.snapshot_access_get_all(context, snapshot) return self._view_builder.detail_list_access(req, access_list) @wsgi.Controller.api_version('2.0', '2.6') @wsgi.action('os-reset_status') def snapshot_reset_status_legacy(self, req, id, body): return self._reset_status(req, id, body) @wsgi.Controller.api_version('2.7') @wsgi.action('reset_status') def snapshot_reset_status(self, req, id, body): return self._reset_status(req, id, body) @wsgi.Controller.api_version('2.0', '2.6') @wsgi.action('os-force_delete') def snapshot_force_delete_legacy(self, req, id, body): return self._force_delete(req, id, body) @wsgi.Controller.api_version('2.7') @wsgi.action('force_delete') def snapshot_force_delete(self, req, id, body): return self._force_delete(req, id, body) @wsgi.Controller.api_version('2.12') @wsgi.response(202) def manage(self, req, body): return self._manage(req, body) @wsgi.Controller.api_version('2.12', '2.48') @wsgi.action('unmanage') def unmanage(self, req, id, body=None): return self._unmanage(req, id, body) @wsgi.Controller.api_version('2.49') # noqa @wsgi.action('unmanage') def unmanage(self, req, id, body=None): # pylint: disable=function-redefined return self._unmanage(req, id, body, allow_dhss_true=True) @wsgi.Controller.api_version('2.32') @wsgi.action('allow_access') @wsgi.response(202) @wsgi.Controller.authorize def allow_access(self, req, id, body=None): enable_ipv6 = False if req.api_version_request >= api_version.APIVersionRequest("2.38"): enable_ipv6 = True return self._allow(req, id, body, enable_ipv6) @wsgi.Controller.api_version('2.32') @wsgi.action('deny_access') @wsgi.Controller.authorize def deny_access(self, req, id, body=None): return self._deny(req, id, body) @wsgi.Controller.api_version('2.32') @wsgi.Controller.authorize def access_list(self, req, snapshot_id): return self._access_list(req, snapshot_id) @wsgi.Controller.api_version("2.0") def index(self, req): """Returns a summary list of shares.""" if req.api_version_request < api_version.APIVersionRequest("2.36"): req.GET.pop('name~', None) req.GET.pop('description~', None) req.GET.pop('description', None) return self._get_snapshots(req, is_detail=False) @wsgi.Controller.api_version("2.0") def detail(self, req): """Returns a detailed list of shares.""" if req.api_version_request < api_version.APIVersionRequest("2.36"): req.GET.pop('name~', None) req.GET.pop('description~', None) req.GET.pop('description', None) return self._get_snapshots(req, is_detail=True) def create_resource(): return wsgi.Resource(ShareSnapshotsController()) manila-10.0.0/manila/api/auth.py0000664000175000017500000000256513656750227016425 0ustar zuulzuul00000000000000# Copyright (c) 2013 OpenStack, LLC. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from manila.api.middleware import auth LOG = log.getLogger(__name__) class ManilaKeystoneContext(auth.ManilaKeystoneContext): def __init__(self, application): LOG.warning('manila.api.auth:ManilaKeystoneContext is deprecated. ' 'Please use ' 'manila.api.middleware.auth:ManilaKeystoneContext ' 'instead.') super(ManilaKeystoneContext, self).__init__(application) def pipeline_factory(loader, global_conf, **local_conf): LOG.warning('manila.api.auth:pipeline_factory is deprecated. ' 'Please use manila.api.middleware.auth:pipeline_factory ' 'instead.') auth.pipeline_factory(loader, global_conf, **local_conf) manila-10.0.0/manila/api/versions.py0000664000175000017500000000663713656750227017340 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack LLC. # Copyright 2015 Clinton Knight # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_config import cfg from manila.api import extensions from manila.api import openstack from manila.api.openstack import api_version_request from manila.api.openstack import wsgi from manila.api.views import versions as views_versions CONF = cfg.CONF _LINKS = [{ 'rel': 'describedby', 'type': 'text/html', 'href': 'http://docs.openstack.org/', }] _MEDIA_TYPES = [{ 'base': 'application/json', 'type': 'application/vnd.openstack.share+json;version=1', }] _KNOWN_VERSIONS = { 'v1.0': { 'id': 'v1.0', 'status': 'DEPRECATED', 'version': '', 'min_version': '', 'updated': '2015-08-27T11:33:21Z', 'links': _LINKS, 'media-types': _MEDIA_TYPES, }, 'v2.0': { 'id': 'v2.0', 'status': 'CURRENT', 'version': api_version_request._MAX_API_VERSION, 'min_version': api_version_request._MIN_API_VERSION, 'updated': '2015-08-27T11:33:21Z', 'links': _LINKS, 'media-types': _MEDIA_TYPES, }, } class VersionsRouter(openstack.APIRouter): """Route versions requests.""" ExtensionManager = extensions.ExtensionManager def _setup_routes(self, mapper): self.resources['versions'] = create_resource() mapper.connect('versions', '/', controller=self.resources['versions'], action='all') mapper.redirect('', '/') class VersionsController(wsgi.Controller): def __init__(self): super(VersionsController, self).__init__(None) @wsgi.Controller.api_version('1.0', '1.0') def index(self, req): """Return versions supported prior to the microversions epoch.""" builder = views_versions.get_view_builder(req) known_versions = copy.deepcopy(_KNOWN_VERSIONS) known_versions.pop('v2.0') return builder.build_versions(known_versions) @wsgi.Controller.api_version('2.0') # noqa def index(self, req): # pylint: disable=function-redefined """Return versions supported after the start of microversions.""" builder = views_versions.get_view_builder(req) known_versions = copy.deepcopy(_KNOWN_VERSIONS) known_versions.pop('v1.0') return builder.build_versions(known_versions) # NOTE (cknight): Calling the versions API without # /v1 or /v2 in the URL will lead to this unversioned # method, which 
should always return info about all # available versions. @wsgi.response(300) def all(self, req): """Return all known versions.""" builder = views_versions.get_view_builder(req) known_versions = copy.deepcopy(_KNOWN_VERSIONS) return builder.build_versions(known_versions) def create_resource(): return wsgi.Resource(VersionsController()) manila-10.0.0/manila/api/extensions.py0000664000175000017500000002671613656750227017667 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from oslo_config import cfg from oslo_log import log from oslo_utils import importutils import webob.dec import webob.exc import manila.api.openstack from manila.api.openstack import wsgi from manila import exception import manila.policy CONF = cfg.CONF LOG = log.getLogger(__name__) class ExtensionDescriptor(object): """Base class that defines the contract for extensions. Note that you don't have to derive from this class to have a valid extension; it is purely a convenience. """ # The name of the extension, e.g., 'Fox In Socks' name = None # The alias for the extension, e.g., 'FOXNSOX' alias = None # Description comes from the docstring for the class # The timestamp when the extension was last updated, e.g., # '2011-01-22T13:25:27-06:00' updated = None def __init__(self, ext_mgr): """Register extension with the extension manager.""" ext_mgr.register(self) self.ext_mgr = ext_mgr def get_resources(self): """List of extensions.ResourceExtension extension objects. Resources define new nouns, and are accessible through URLs. """ resources = [] return resources def get_controller_extensions(self): """List of extensions.ControllerExtension extension objects. Controller extensions are used to extend existing controllers. """ controller_exts = [] return controller_exts class ExtensionsResource(wsgi.Resource): def __init__(self, extension_manager): self.extension_manager = extension_manager super(ExtensionsResource, self).__init__(None) def _translate(self, ext): ext_data = {} ext_data['name'] = ext.name ext_data['alias'] = ext.alias ext_data['description'] = ext.__doc__ ext_data['updated'] = ext.updated ext_data['links'] = [] # TODO(dprince): implement extension links return ext_data def index(self, req): extensions = [] for _alias, ext in self.extension_manager.extensions.items(): extensions.append(self._translate(ext)) return dict(extensions=extensions) def show(self, req, id): try: # NOTE(dprince): the extensions alias is used as the 'id' for show ext = self.extension_manager.extensions[id] except KeyError: raise webob.exc.HTTPNotFound() return dict(extension=self._translate(ext)) def delete(self, req, id): raise webob.exc.HTTPNotFound() def create(self, req): raise webob.exc.HTTPNotFound() class ExtensionManager(object): """Load extensions from the configured extension path. See manila/tests/api/extensions/foxinsocks/extension.py for an example extension implementation. 
""" def __init__(self): LOG.info('Initializing extension manager.') self.cls_list = CONF.osapi_share_extension self.extensions = {} self._load_extensions() def register(self, ext): # Do nothing if the extension doesn't check out if not self._check_extension(ext): return alias = ext.alias LOG.info('Loaded extension: %s', alias) if alias in self.extensions: raise exception.Error("Found duplicate extension: %s" % alias) self.extensions[alias] = ext def get_resources(self): """Returns a list of ResourceExtension objects.""" resources = [] resources.append(ResourceExtension('extensions', ExtensionsResource(self))) for ext in self.extensions.values(): try: resources.extend(ext.get_resources()) except AttributeError: # NOTE(dprince): Extension aren't required to have resource # extensions pass return resources def get_controller_extensions(self): """Returns a list of ControllerExtension objects.""" controller_exts = [] for ext in self.extensions.values(): try: get_ext_method = ext.get_controller_extensions except AttributeError: # NOTE(Vek): Extensions aren't required to have # controller extensions continue controller_exts.extend(get_ext_method()) return controller_exts def _check_extension(self, extension): """Checks for required methods in extension objects.""" try: LOG.debug('Ext name: %s', extension.name) LOG.debug('Ext alias: %s', extension.alias) LOG.debug('Ext description: %s', ' '.join(extension.__doc__.strip().split())) LOG.debug('Ext updated: %s', extension.updated) except AttributeError: LOG.exception("Exception loading extension.") return False return True def load_extension(self, ext_factory): """Execute an extension factory. Loads an extension. The 'ext_factory' is the name of a callable that will be imported and called with one argument--the extension manager. The factory callable is expected to call the register() method at least once. """ LOG.debug("Loading extension %s", ext_factory) # Load the factory factory = importutils.import_class(ext_factory) # Call it LOG.debug("Calling extension factory %s", ext_factory) factory(self) def _load_extensions(self): """Load extensions specified on the command line.""" extensions = list(self.cls_list) # NOTE(thingee): Backwards compat for the old extension loader path. # We can drop this post-grizzly in the H release. old_contrib_path = ('manila.api.openstack.share.contrib.' 'standard_extensions') new_contrib_path = 'manila.api.contrib.standard_extensions' if old_contrib_path in extensions: LOG.warning('osapi_share_extension is set to deprecated path: ' '%s.', old_contrib_path) LOG.warning('Please set your flag or manila.conf settings for ' 'osapi_share_extension to: %s.', new_contrib_path) extensions = [e.replace(old_contrib_path, new_contrib_path) for e in extensions] for ext_factory in extensions: try: self.load_extension(ext_factory) except Exception as exc: LOG.warning('Failed to load extension %(ext_factory)s: ' '%(exc)s.', {"ext_factory": ext_factory, "exc": exc}) class ControllerExtension(object): """Extend core controllers of manila OpenStack API. Provide a way to extend existing manila OpenStack API core controllers. 
""" def __init__(self, extension, collection, controller): self.extension = extension self.collection = collection self.controller = controller class ResourceExtension(object): """Add top level resources to the OpenStack API in manila.""" def __init__(self, collection, controller, parent=None, collection_actions=None, member_actions=None, custom_routes_fn=None): if not collection_actions: collection_actions = {} if not member_actions: member_actions = {} self.collection = collection self.controller = controller self.parent = parent self.collection_actions = collection_actions self.member_actions = member_actions self.custom_routes_fn = custom_routes_fn def load_standard_extensions(ext_mgr, logger, path, package, ext_list=None): """Registers all standard API extensions.""" # Walk through all the modules in our directory... our_dir = path[0] for dirpath, dirnames, filenames in os.walk(our_dir): # Compute the relative package name from the dirpath relpath = os.path.relpath(dirpath, our_dir) if relpath == '.': relpkg = '' else: relpkg = '.%s' % '.'.join(relpath.split(os.sep)) # Now, consider each file in turn, only considering .py and .pyc files for fname in filenames: root, ext = os.path.splitext(fname) # Skip __init__ and anything that's not .py and .pyc if (ext not in ('.py', '.pyc')) or root == '__init__': continue # If .pyc and .py both exist, skip .pyc if ext == '.pyc' and ((root + '.py') in filenames): continue # Try loading it classname = "%s%s" % (root[0].upper(), root[1:]) classpath = ("%s%s.%s.%s" % (package, relpkg, root, classname)) if ext_list is not None and classname not in ext_list: logger.debug("Skipping extension: %s" % classpath) continue try: ext_mgr.load_extension(classpath) except Exception as exc: logger.warning('Failed to load extension %(classpath)s: ' '%(exc)s.', {"classpath": classpath, "exc": exc}) # Now, let's consider any subdirectories we may have... subdirs = [] for dname in dirnames: # Skip it if it does not have __init__.py if not os.path.exists(os.path.join(dirpath, dname, '__init__.py')): continue # If it has extension(), delegate... ext_name = ("%s%s.%s.extension" % (package, relpkg, dname)) try: ext = importutils.import_class(ext_name) except ImportError: # extension() doesn't exist on it, so we'll explore # the directory for ourselves subdirs.append(dname) else: try: ext(ext_mgr) except Exception as exc: logger.warning('Failed to load extension ' '%(ext_name)s: %(exc)s.', {"ext_name": ext_name, "exc": exc}) # Update the list of directories we'll explore... dirnames[:] = subdirs def extension_authorizer(api_name, extension_name): def authorize(context, target=None, action=None): if target is None: target = {'project_id': context.project_id, 'user_id': context.user_id} if action is None: act = '%s_extension:%s' % (api_name, extension_name) else: act = '%s_extension:%s:%s' % (api_name, extension_name, action) manila.policy.enforce(context, act, target) return authorize manila-10.0.0/manila/api/views/0000775000175000017500000000000013656750362016237 5ustar zuulzuul00000000000000manila-10.0.0/manila/api/views/__init__.py0000664000175000017500000000000013656750227020336 0ustar zuulzuul00000000000000manila-10.0.0/manila/api/views/share_replicas.py0000664000175000017500000000653213656750227021603 0ustar zuulzuul00000000000000# Copyright 2015 Goutham Pacha Ravi # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api import common class ReplicationViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = 'share_replicas' _collection_links = 'share_replica_links' _detail_version_modifiers = [ "add_cast_rules_to_readonly_field", ] def summary_list(self, request, replicas): """Summary view of a list of replicas.""" return self._list_view(self.summary, request, replicas) def detail_list(self, request, replicas): """Detailed view of a list of replicas.""" return self._list_view(self.detail, request, replicas) def summary(self, request, replica): """Generic, non-detailed view of a share replica.""" replica_dict = { 'id': replica.get('id'), 'share_id': replica.get('share_id'), 'status': replica.get('status'), 'replica_state': replica.get('replica_state'), } return {'share_replica': replica_dict} def detail(self, request, replica): """Detailed view of a single replica.""" context = request.environ['manila.context'] replica_dict = { 'id': replica.get('id'), 'share_id': replica.get('share_id'), 'availability_zone': replica.get('availability_zone'), 'created_at': replica.get('created_at'), 'status': replica.get('status'), 'share_network_id': replica.get('share_network_id'), 'replica_state': replica.get('replica_state'), 'updated_at': replica.get('updated_at'), } if context.is_admin: replica_dict['share_server_id'] = replica.get('share_server_id') replica_dict['host'] = replica.get('host') self.update_versioned_resource_dict(request, replica_dict, replica) return {'share_replica': replica_dict} def _list_view(self, func, request, replicas): """Provide a view for a list of replicas.""" replicas_list = [func(request, replica)['share_replica'] for replica in replicas] replica_links = self._get_collection_links(request, replicas, self._collection_name) replicas_dict = {self._collection_name: replicas_list} if replica_links: replicas_dict[self._collection_links] = replica_links return replicas_dict @common.ViewBuilder.versioned_method("2.30") def add_cast_rules_to_readonly_field(self, context, replica_dict, replica): if context.is_admin: replica_dict['cast_rules_to_readonly'] = replica.get( 'cast_rules_to_readonly', False) manila-10.0.0/manila/api/views/share_group_snapshots.py0000664000175000017500000001047413656750227023237 0ustar zuulzuul00000000000000# Copyright 2015 Alex Meade # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from manila.api import common class ShareGroupSnapshotViewBuilder(common.ViewBuilder): """Model a share group snapshot API response as a python dictionary.""" _collection_name = "share_group_snapshot" def summary_list(self, request, group_snaps): """Show a list of share_group_snapshots without many details.""" return self._list_view(self.summary, request, group_snaps) def detail_list(self, request, group_snaps): """Detailed view of a list of share_group_snapshots.""" return self._list_view(self.detail, request, group_snaps) def member_list(self, request, members): members_list = [] for member in members: member_dict = { 'id': member.get('id'), 'created_at': member.get('created_at'), 'size': member.get('size'), 'share_protocol': member.get('share_proto'), 'project_id': member.get('project_id'), 'share_group_snapshot_id': member.get( 'share_group_snapshot_id'), 'share_id': member.get('share_instance', {}).get('share_id'), # TODO(vponomaryov): add 'provider_location' key in Pike. } members_list.append(member_dict) members_links = self._get_collection_links( request, members, "share_group_snapshot_id") members_dict = {"share_group_snapshot_members": members_list} if members_links: members_dict["share_group_snapshot_members_links"] = members_links return members_dict def summary(self, request, share_group_snap): """Generic, non-detailed view of a share group snapshot.""" return { 'share_group_snapshot': { 'id': share_group_snap.get('id'), 'name': share_group_snap.get('name'), 'links': self._get_links(request, share_group_snap['id']), } } def detail(self, request, share_group_snap): """Detailed view of a single share group snapshot.""" members = self._format_member_list( share_group_snap.get('share_group_snapshot_members', [])) share_group_snap_dict = { 'id': share_group_snap.get('id'), 'name': share_group_snap.get('name'), 'created_at': share_group_snap.get('created_at'), 'status': share_group_snap.get('status'), 'description': share_group_snap.get('description'), 'project_id': share_group_snap.get('project_id'), 'share_group_id': share_group_snap.get('share_group_id'), 'members': members, 'links': self._get_links(request, share_group_snap['id']), } return {'share_group_snapshot': share_group_snap_dict} def _format_member_list(self, members): members_list = [] for member in members: member_dict = { 'id': member.get('id'), 'size': member.get('size'), 'share_id': member.get('share_instance', {}).get('share_id'), } members_list.append(member_dict) return members_list def _list_view(self, func, request, snaps): """Provide a view for a list of share group snapshots.""" snap_list = [func(request, snap)["share_group_snapshot"] for snap in snaps] snaps_links = self._get_collection_links(request, snaps, self._collection_name) snaps_dict = {"share_group_snapshots": snap_list} if snaps_links: snaps_dict["share_group_snapshot_links"] = snaps_links return snaps_dict manila-10.0.0/manila/api/views/export_locations.py0000664000175000017500000000701013656750227022203 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import strutils from manila.api import common class ViewBuilder(common.ViewBuilder): """Model export-locations API responses as a python dictionary.""" _collection_name = "export_locations" _detail_version_modifiers = [ 'add_preferred_path_attribute', ] def _get_export_location_view(self, request, export_location, detail=False, replica=False): context = request.environ['manila.context'] view = { 'id': export_location['uuid'], 'path': export_location['path'], } self.update_versioned_resource_dict(request, view, export_location) if context.is_admin: view['share_instance_id'] = export_location['share_instance_id'] view['is_admin_only'] = export_location['is_admin_only'] if detail: view['created_at'] = export_location['created_at'] view['updated_at'] = export_location['updated_at'] if replica: share_instance = export_location['share_instance'] view['replica_state'] = share_instance['replica_state'] view['availability_zone'] = share_instance['availability_zone'] return {'export_location': view} def summary(self, request, export_location, replica=False): """Summary view of a single export location.""" return self._get_export_location_view( request, export_location, detail=False, replica=replica) def detail(self, request, export_location, replica=False): """Detailed view of a single export location.""" return self._get_export_location_view( request, export_location, detail=True, replica=replica) def _list_export_locations(self, req, export_locations, detail=False, replica=False): """View of export locations list.""" view_method = self.detail if detail else self.summary return { self._collection_name: [ view_method(req, elocation, replica=replica)['export_location'] for elocation in export_locations ]} def detail_list(self, request, export_locations): """Detailed View of export locations list.""" return self._list_export_locations(request, export_locations, detail=True) def summary_list(self, request, export_locations, replica=False): """Summary View of export locations list.""" return self._list_export_locations(request, export_locations, detail=False, replica=replica) @common.ViewBuilder.versioned_method('2.14') def add_preferred_path_attribute(self, context, view_dict, export_location): view_dict['preferred'] = strutils.bool_from_string( export_location['el_metadata'].get('preferred')) manila-10.0.0/manila/api/views/share_snapshot_export_locations.py0000664000175000017500000000430513656750227025310 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from manila.api import common class ViewBuilder(common.ViewBuilder): _collection_name = "share_snapshot_export_locations" def _get_view(self, request, export_location, detail=False): context = request.environ['manila.context'] result = { 'share_snapshot_export_location': { 'id': export_location['id'], 'path': export_location['path'], 'links': self._get_links(request, export_location['id']), } } ss_el = result['share_snapshot_export_location'] if context.is_admin: ss_el['share_snapshot_instance_id'] = ( export_location['share_snapshot_instance_id']) ss_el['is_admin_only'] = export_location['is_admin_only'] if detail: ss_el['created_at'] = export_location['created_at'] ss_el['updated_at'] = export_location['updated_at'] return result def list_export_locations(self, request, export_locations): context = request.environ['manila.context'] result = {self._collection_name: []} for export_location in export_locations: if context.is_admin or not export_location['is_admin_only']: result[self._collection_name].append(self._get_view( request, export_location)['share_snapshot_export_location']) else: continue return result def detail_export_location(self, request, export_location): return self._get_view(request, export_location, detail=True) manila-10.0.0/manila/api/views/share_snapshot_instances.py0000664000175000017500000000476713656750227023717 0ustar zuulzuul00000000000000# Copyright 2016 Huawei Inc. # All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from manila.api import common class ViewBuilder(common.ViewBuilder): """Model the server API response as a python dictionary.""" _collection_name = 'snapshot_instances' def summary_list(self, request, instances): """Summary view of a list of share snapshot instances.""" return self._list_view(self.summary, request, instances) def detail_list(self, request, instances): """Detailed view of a list of share snapshot instances.""" return self._list_view(self.detail, request, instances) def summary(self, request, instance): """Generic, non-detailed view of a share snapshot instance.""" instance_dict = { 'id': instance.get('id'), 'snapshot_id': instance.get('snapshot_id'), 'status': instance.get('status'), } return {'snapshot_instance': instance_dict} def detail(self, request, instance): """Detailed view of a single share snapshot instance.""" instance_dict = { 'id': instance.get('id'), 'snapshot_id': instance.get('snapshot_id'), 'created_at': instance.get('created_at'), 'updated_at': instance.get('updated_at'), 'status': instance.get('status'), 'share_id': instance.get('share_instance').get('share_id'), 'share_instance_id': instance.get('share_instance_id'), 'progress': instance.get('progress'), 'provider_location': instance.get('provider_location'), } return {'snapshot_instance': instance_dict} def _list_view(self, func, request, instances): """Provide a view for a list of share snapshot instances.""" instances_list = [func(request, instance)['snapshot_instance'] for instance in instances] instances_dict = {self._collection_name: instances_list} return instances_dict manila-10.0.0/manila/api/views/messages.py0000664000175000017500000000540513656750227020424 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api import common from manila.message import message_field class ViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = "messages" def index(self, request, messages): """Show a list of messages.""" return self._list_view(self.detail, request, messages) def detail(self, request, message): """Detailed view of a single message.""" message_ref = { 'id': message.get('id'), 'project_id': message.get('project_id'), 'action_id': message.get('action_id'), 'detail_id': message.get('detail_id'), 'message_level': message.get('message_level'), 'created_at': message.get('created_at'), 'expires_at': message.get('expires_at'), 'request_id': message.get('request_id'), 'links': self._get_links(request, message['id']), 'resource_type': message.get('resource_type'), 'resource_id': message.get('resource_id'), 'user_message': "%s: %s" % ( message_field.translate_action(message.get('action_id')), message_field.translate_detail(message.get('detail_id'))), } return {'message': message_ref} def _list_view(self, func, request, messages, coll_name=_collection_name): """Provide a view for a list of messages. 
:param func: Function used to format the message data :param request: API request :param messages: List of messages in dictionary format :param coll_name: Name of collection, used to generate the next link for a pagination query :returns: message data in dictionary format """ messages_list = [func(request, message)['message'] for message in messages] messages_links = self._get_collection_links(request, messages, coll_name) messages_dict = dict({"messages": messages_list}) if messages_links: messages_dict['messages_links'] = messages_links return messages_dict manila-10.0.0/manila/api/views/share_networks.py0000664000175000017500000001127313656750227021653 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api import common class ViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = 'share_networks' _detail_version_modifiers = ["add_gateway", "add_mtu", "add_nova_net_id", "add_subnets"] def build_share_network(self, request, share_network): """View of a share network.""" return {'share_network': self._build_share_network_view( request, share_network)} def build_share_networks(self, request, share_networks, is_detail=True): return {'share_networks': [self._build_share_network_view( request, share_network, is_detail) for share_network in share_networks]} def _update_share_network_info(self, request, share_network): for sns in share_network.get('share_network_subnets') or []: if sns.get('is_default') and sns.get('is_default') is True: share_network.update({ 'neutron_net_id': sns.get('neutron_net_id'), 'neutron_subnet_id': sns.get('neutron_subnet_id'), 'network_type': sns.get('network_type'), 'segmentation_id': sns.get('segmentation_id'), 'cidr': sns.get('cidr'), 'ip_version': sns.get('ip_version'), 'gateway': sns.get('gateway'), 'mtu': sns.get('mtu'), }) def _build_share_network_view(self, request, share_network, is_detail=True): sn = { 'id': share_network.get('id'), 'name': share_network.get('name'), } if is_detail: self._update_share_network_info(request, share_network) sn.update({ 'project_id': share_network.get('project_id'), 'created_at': share_network.get('created_at'), 'updated_at': share_network.get('updated_at'), 'neutron_net_id': share_network.get('neutron_net_id'), 'neutron_subnet_id': share_network.get('neutron_subnet_id'), 'network_type': share_network.get('network_type'), 'segmentation_id': share_network.get('segmentation_id'), 'cidr': share_network.get('cidr'), 'ip_version': share_network.get('ip_version'), 'description': share_network.get('description'), }) self.update_versioned_resource_dict(request, sn, share_network) return sn @common.ViewBuilder.versioned_method("2.51") def add_subnets(self, context, network_dict, network): subnets = [{ 'id': sns.get('id'), 'availability_zone': sns.get('availability_zone'), 'created_at': sns.get('created_at'), 'updated_at': sns.get('updated_at'), 'segmentation_id': sns.get('segmentation_id'), 'neutron_net_id': 
sns.get('neutron_net_id'), 'neutron_subnet_id': sns.get('neutron_subnet_id'), 'ip_version': sns.get('ip_version'), 'cidr': sns.get('cidr'), 'network_type': sns.get('network_type'), 'mtu': sns.get('mtu'), 'gateway': sns.get('gateway'), } for sns in network.get('share_network_subnets')] network_dict['share_network_subnets'] = subnets attr_to_remove = [ 'neutron_net_id', 'neutron_subnet_id', 'network_type', 'segmentation_id', 'cidr', 'ip_version', 'gateway', 'mtu'] for attr in attr_to_remove: network_dict.pop(attr) @common.ViewBuilder.versioned_method("2.18") def add_gateway(self, context, network_dict, network): network_dict['gateway'] = network.get('gateway') @common.ViewBuilder.versioned_method("2.20") def add_mtu(self, context, network_dict, network): network_dict['mtu'] = network.get('mtu') @common.ViewBuilder.versioned_method("1.0", "2.25") def add_nova_net_id(self, context, network_dict, network): network_dict['nova_net_id'] = None manila-10.0.0/manila/api/views/share_servers.py0000664000175000017500000000552313656750227021471 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api import common class ViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = 'share_servers' _detail_version_modifiers = [ "add_is_auto_deletable_and_identifier_fields", "add_share_network_subnet_id_field" ] def build_share_server(self, request, share_server): """View of a share server.""" return { 'share_server': self._build_share_server_view( request, share_server, detailed=True) } def build_share_servers(self, request, share_servers): return { 'share_servers': [self._build_share_server_view(request, share_server) for share_server in share_servers] } def build_share_server_details(self, details): return {'details': details} def _build_share_server_view(self, request, share_server, detailed=False): share_server_dict = { 'id': share_server.id, 'project_id': share_server.project_id, 'updated_at': share_server.updated_at, 'status': share_server.status, 'host': share_server.host, 'share_network_name': share_server.share_network_name, 'share_network_id': share_server.share_network_id, } if detailed: share_server_dict['created_at'] = share_server.created_at share_server_dict['backend_details'] = share_server.backend_details self.update_versioned_resource_dict( request, share_server_dict, share_server) return share_server_dict @common.ViewBuilder.versioned_method("2.51") def add_share_network_subnet_id_field( self, context, share_server_dict, share_server): share_server_dict['share_network_subnet_id'] = ( share_server['share_network_subnet_id']) @common.ViewBuilder.versioned_method("2.49") def add_is_auto_deletable_and_identifier_fields( self, context, share_server_dict, share_server): share_server_dict['is_auto_deletable'] = ( share_server['is_auto_deletable']) share_server_dict['identifier'] = share_server['identifier'] 
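The view builders collected here (share_servers.py above, and those that follow) all rely on the same microversion-gating pattern: each class lists modifier methods in _detail_version_modifiers, decorates them with common.ViewBuilder.versioned_method, and calls update_versioned_resource_dict so that only the modifiers whose version range covers the request's microversion are applied. The snippet below is a minimal, self-contained sketch of that pattern for illustration only; it does not use manila's real common.ViewBuilder, and the APIVersionRequest stand-in, versioned_method decorator, and ShareServerViewBuilder class are simplified assumptions rather than the project's implementations.

# Minimal, self-contained sketch of the microversion-gating pattern used by
# the view builders in this archive. Illustrative only: all names below are
# simplified stand-ins, not manila's actual classes.


class APIVersionRequest(object):
    """Tiny stand-in that compares 'X.Y' microversion strings as tuples."""

    def __init__(self, version):
        self.ver = tuple(int(part) for part in version.split('.'))

    def __le__(self, other):
        return self.ver <= other.ver


def versioned_method(min_version, max_version=None):
    """Annotate a modifier with the version range in which it applies."""

    def decorator(func):
        func.min_ver = APIVersionRequest(min_version)
        func.max_ver = APIVersionRequest(max_version) if max_version else None
        return func

    return decorator


class ShareServerViewBuilder(object):
    _detail_version_modifiers = ["add_identifier_field"]

    def build(self, request_version, share_server):
        view = {'id': share_server['id'], 'status': share_server['status']}
        # Apply only the modifiers whose range includes the request version.
        for name in self._detail_version_modifiers:
            method = getattr(self, name)
            in_range = (method.min_ver <= request_version and
                        (method.max_ver is None or
                         request_version <= method.max_ver))
            if in_range:
                method(view, share_server)
        return {'share_server': view}

    @versioned_method("2.49")
    def add_identifier_field(self, view, share_server):
        view['identifier'] = share_server.get('identifier')


# A request at microversion 2.51 sees the extra field; one at 2.7 does not.
builder = ShareServerViewBuilder()
server = {'id': 'abc123', 'status': 'active', 'identifier': 'abc123'}
print(builder.build(APIVersionRequest("2.51"), server))
print(builder.build(APIVersionRequest("2.7"), server))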
manila-10.0.0/manila/api/views/share_group_types.py0000664000175000017500000000427613656750227022364 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from manila.api import common CONF = cfg.CONF class ShareGroupTypeViewBuilder(common.ViewBuilder): _collection_name = 'share_group_types' _detail_version_modifiers = [ "add_is_default_attr", ] def show(self, request, share_group_type, brief=False): """Trim away extraneous share group type attributes.""" group_specs = share_group_type.get('group_specs', {}) trimmed = { 'id': share_group_type.get('id'), 'name': share_group_type.get('name'), 'is_public': share_group_type.get('is_public'), 'group_specs': group_specs, 'share_types': [ st['share_type_id'] for st in share_group_type['share_types']], } self.update_versioned_resource_dict(request, trimmed, share_group_type) return trimmed if brief else {"share_group_type": trimmed} def index(self, request, share_group_types): """Index over trimmed share group types.""" share_group_types_list = [ self.show(request, share_group_type, True) for share_group_type in share_group_types ] return {"share_group_types": share_group_types_list} @common.ViewBuilder.versioned_method("2.46") def add_is_default_attr(self, context, share_group_type_dict, share_group_type): is_default = False type_name = share_group_type.get('name') default_name = CONF.default_share_group_type if default_name is not None: is_default = default_name == type_name share_group_type_dict['is_default'] = is_default manila-10.0.0/manila/api/views/limits.py0000664000175000017500000001064513656750227020120 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from manila.api import common from manila import utils class ViewBuilder(common.ViewBuilder): """OpenStack API base limits view builder.""" _collection_name = "limits" _detail_version_modifiers = [ "add_share_replica_quotas", ] def build(self, request, rate_limits, absolute_limits): rate_limits = self._build_rate_limits(rate_limits) absolute_limits = self._build_absolute_limits(request, absolute_limits) output = { "limits": { "rate": rate_limits, "absolute": absolute_limits, }, } return output def _build_absolute_limits(self, request, absolute_limits): """Builder for absolute limits. absolute_limits should be given as a dict of limits. For example: {"limit": {"shares": 10, "gigabytes": 1024}, "in_use": {"shares": 8, "gigabytes": 256}}. 
""" limit_names = { "limit": { "gigabytes": ["maxTotalShareGigabytes"], "snapshot_gigabytes": ["maxTotalSnapshotGigabytes"], "shares": ["maxTotalShares"], "snapshots": ["maxTotalShareSnapshots"], "share_networks": ["maxTotalShareNetworks"], }, "in_use": { "shares": ["totalSharesUsed"], "snapshots": ["totalShareSnapshotsUsed"], "share_networks": ["totalShareNetworksUsed"], "gigabytes": ["totalShareGigabytesUsed"], "snapshot_gigabytes": ["totalSnapshotGigabytesUsed"], }, } limits = {} self.update_versioned_resource_dict(request, limit_names, absolute_limits) for mapping_key in limit_names.keys(): for k, v in absolute_limits.get(mapping_key, {}).items(): if k in limit_names.get(mapping_key, []) and v is not None: for name in limit_names[mapping_key][k]: limits[name] = v return limits def _build_rate_limits(self, rate_limits): limits = [] for rate_limit in rate_limits: _rate_limit_key = None _rate_limit = self._build_rate_limit(rate_limit) # check for existing key for limit in limits: if (limit["uri"] == rate_limit["URI"] and limit["regex"] == rate_limit["regex"]): _rate_limit_key = limit break # ensure we have a key if we didn't find one if not _rate_limit_key: _rate_limit_key = { "uri": rate_limit["URI"], "regex": rate_limit["regex"], "limit": [], } limits.append(_rate_limit_key) _rate_limit_key["limit"].append(_rate_limit) return limits def _build_rate_limit(self, rate_limit): _get_utc = datetime.datetime.utcfromtimestamp next_avail = _get_utc(rate_limit["resetTime"]) return { "verb": rate_limit["verb"], "value": rate_limit["value"], "remaining": int(rate_limit["remaining"]), "unit": rate_limit["unit"], "next-available": utils.isotime(at=next_avail), } @common.ViewBuilder.versioned_method("2.53") def add_share_replica_quotas(self, request, limit_names, absolute_limits): limit_names["limit"]["share_replicas"] = ["maxTotalShareReplicas"] limit_names["limit"]["replica_gigabytes"] = ( ["maxTotalReplicaGigabytes"]) limit_names["in_use"]["share_replicas"] = ["totalShareReplicasUsed"] limit_names["in_use"]["replica_gigabytes"] = ( ["totalReplicaGigabytesUsed"]) manila-10.0.0/manila/api/views/share_accesses.py0000664000175000017500000000722413656750227021571 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from manila.api import common from manila.common import constants from manila.share import api as share_api class ViewBuilder(common.ViewBuilder): """Model a share access API response as a python dictionary.""" _collection_name = 'share_accesses' _detail_version_modifiers = [ "add_access_key", "translate_transitional_statuses", "add_created_at_and_updated_at", "add_access_rule_metadata_field", ] def list_view(self, request, accesses): """View of a list of share accesses.""" return {'access_list': [self.summary_view(request, access)['access'] for access in accesses]} def summary_view(self, request, access): """Summarized view of a single share access.""" access_dict = { 'id': access.get('id'), 'access_level': access.get('access_level'), 'access_to': access.get('access_to'), 'access_type': access.get('access_type'), 'state': access.get('state'), } self.update_versioned_resource_dict( request, access_dict, access) return {'access': access_dict} def view(self, request, access): """Generic view of a single share access.""" access_dict = { 'id': access.get('id'), 'share_id': access.get('share_id'), 'access_level': access.get('access_level'), 'access_to': access.get('access_to'), 'access_type': access.get('access_type'), 'state': access.get('state'), } self.update_versioned_resource_dict( request, access_dict, access) return {'access': access_dict} def view_metadata(self, request, metadata): """View of a share access rule metadata.""" return {'metadata': metadata} @common.ViewBuilder.versioned_method("2.21") def add_access_key(self, context, access_dict, access): access_dict['access_key'] = access.get('access_key') @common.ViewBuilder.versioned_method("2.33") def add_created_at_and_updated_at(self, context, access_dict, access): access_dict['created_at'] = access.get('created_at') access_dict['updated_at'] = access.get('updated_at') @common.ViewBuilder.versioned_method("2.45") def add_access_rule_metadata_field(self, context, access_dict, access): metadata = access.get('share_access_rules_metadata') or {} metadata = {item['key']: item['value'] for item in metadata} access_dict['metadata'] = metadata @common.ViewBuilder.versioned_method("1.0", "2.27") def translate_transitional_statuses(self, context, access_dict, access): """In 2.28, the per access rule status was (re)introduced.""" api = share_api.API() share = api.get(context, access['share_id']) if (share['access_rules_status'] == constants.SHARE_INSTANCE_RULES_SYNCING): access_dict['state'] = constants.STATUS_NEW else: access_dict['state'] = share['access_rules_status'] manila-10.0.0/manila/api/views/shares.py0000664000175000017500000001766513656750227020115 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from manila.api import common from manila.common import constants class ViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = 'shares' _detail_version_modifiers = [ "add_snapshot_support_field", "add_task_state_field", "modify_share_type_field", "remove_export_locations", "add_access_rules_status_field", "add_replication_fields", "add_user_id", "add_create_share_from_snapshot_support_field", "add_revert_to_snapshot_support_field", "translate_access_rules_status", "add_share_group_fields", "add_mount_snapshot_support_field", "add_progress_field", "translate_creating_from_snapshot_status", ] def summary_list(self, request, shares, count=None): """Show a list of shares without many details.""" return self._list_view(self.summary, request, shares, count) def detail_list(self, request, shares, count=None): """Detailed view of a list of shares.""" return self._list_view(self.detail, request, shares, count) def summary(self, request, share): """Generic, non-detailed view of a share.""" return { 'share': { 'id': share.get('id'), 'name': share.get('display_name'), 'links': self._get_links(request, share['id']) } } def detail(self, request, share): """Detailed view of a single share.""" context = request.environ['manila.context'] metadata = share.get('share_metadata') if metadata: metadata = {item['key']: item['value'] for item in metadata} else: metadata = {} export_locations = share.get('export_locations', []) share_instance = share.get('instance') or {} if share_instance.get('share_type'): share_type = share_instance.get('share_type').get('name') else: share_type = share_instance.get('share_type_id') share_dict = { 'id': share.get('id'), 'size': share.get('size'), 'availability_zone': share_instance.get('availability_zone'), 'created_at': share.get('created_at'), 'status': share.get('status'), 'name': share.get('display_name'), 'description': share.get('display_description'), 'project_id': share.get('project_id'), 'snapshot_id': share.get('snapshot_id'), 'share_network_id': share_instance.get('share_network_id'), 'share_proto': share.get('share_proto'), 'export_location': share.get('export_location'), 'metadata': metadata, 'share_type': share_type, 'volume_type': share_type, 'links': self._get_links(request, share['id']), 'is_public': share.get('is_public'), 'export_locations': export_locations, } self.update_versioned_resource_dict(request, share_dict, share) if context.is_admin: share_dict['share_server_id'] = share_instance.get( 'share_server_id') share_dict['host'] = share_instance.get('host') return {'share': share_dict} @common.ViewBuilder.versioned_method("2.2") def add_snapshot_support_field(self, context, share_dict, share): share_dict['snapshot_support'] = share.get('snapshot_support') @common.ViewBuilder.versioned_method("2.5") def add_task_state_field(self, context, share_dict, share): share_dict['task_state'] = share.get('task_state') @common.ViewBuilder.versioned_method("2.6") def modify_share_type_field(self, context, share_dict, share): share_instance = share.get('instance') or {} share_type = share_instance.get('share_type_id') share_type_name = None if share_instance.get('share_type'): share_type_name = share_instance.get('share_type').get('name') share_dict.update({ 'share_type_name': share_type_name, 'share_type': share_type, }) @common.ViewBuilder.versioned_method("2.9") def remove_export_locations(self, context, share_dict, share): share_dict.pop('export_location') share_dict.pop('export_locations') 
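    # NOTE: illustrative annotation; the request/response below is a sketch
    # with hypothetical values, not output captured from manila. Starting with
    # microversion 2.9 the modifier above drops 'export_location' and
    # 'export_locations' from the share view, and clients are expected to read
    # paths from the dedicated export-locations resource instead (shaped by
    # the export_locations view builder earlier in this archive), e.g.:
    #
    #     GET /v2/{project_id}/shares/{share_id}/export_locations
    #     {"export_locations": [{"id": "<uuid>", "path": "10.0.0.5:/share-xyz"}]}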
@common.ViewBuilder.versioned_method("2.10") def add_access_rules_status_field(self, context, share_dict, share): share_dict['access_rules_status'] = share.get('access_rules_status') @common.ViewBuilder.versioned_method('2.11') def add_replication_fields(self, context, share_dict, share): share_dict['replication_type'] = share.get('replication_type') share_dict['has_replicas'] = share['has_replicas'] @common.ViewBuilder.versioned_method("2.16") def add_user_id(self, context, share_dict, share): share_dict['user_id'] = share.get('user_id') @common.ViewBuilder.versioned_method("2.24") def add_create_share_from_snapshot_support_field(self, context, share_dict, share): share_dict['create_share_from_snapshot_support'] = share.get( 'create_share_from_snapshot_support') @common.ViewBuilder.versioned_method("2.27") def add_revert_to_snapshot_support_field(self, context, share_dict, share): share_dict['revert_to_snapshot_support'] = share.get( 'revert_to_snapshot_support') @common.ViewBuilder.versioned_method("2.10", "2.27") def translate_access_rules_status(self, context, share_dict, share): if (share['access_rules_status'] == constants.SHARE_INSTANCE_RULES_SYNCING): share_dict['access_rules_status'] = constants.STATUS_OUT_OF_SYNC @common.ViewBuilder.versioned_method("2.31") def add_share_group_fields(self, context, share_dict, share): share_dict['share_group_id'] = share.get( 'share_group_id') share_dict['source_share_group_snapshot_member_id'] = share.get( 'source_share_group_snapshot_member_id') @common.ViewBuilder.versioned_method("2.32") def add_mount_snapshot_support_field(self, context, share_dict, share): share_dict['mount_snapshot_support'] = share.get( 'mount_snapshot_support') def _list_view(self, func, request, shares, count=None): """Provide a view for a list of shares.""" shares_list = [func(request, share)['share'] for share in shares] shares_links = self._get_collection_links(request, shares, self._collection_name) shares_dict = dict(shares=shares_list) if count is not None: shares_dict['count'] = count if shares_links: shares_dict['shares_links'] = shares_links return shares_dict @common.ViewBuilder.versioned_method("1.0", "2.53") def translate_creating_from_snapshot_status(self, context, share_dict, share): if share.get('status') == constants.STATUS_CREATING_FROM_SNAPSHOT: share_dict['status'] = constants.STATUS_CREATING @common.ViewBuilder.versioned_method("2.54") def add_progress_field(self, context, share_dict, share): share_dict['progress'] = share.get('progress') manila-10.0.0/manila/api/views/versions.py0000664000175000017500000000426213656750227020465 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack LLC. # Copyright 2015 Clinton Knight # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import re from six.moves import urllib def get_view_builder(req): return ViewBuilder(req.application_url) _URL_SUFFIX = {'v1.0': 'v1', 'v2.0': 'v2'} class ViewBuilder(object): def __init__(self, base_url): """Initialize ViewBuilder. 
:param base_url: url of the root wsgi application """ self.base_url = base_url def build_versions(self, versions): views = [self._build_version(versions[key]) for key in sorted(list(versions.keys()))] return dict(versions=views) def _build_version(self, version): view = copy.deepcopy(version) view['links'] = self._build_links(version) return view def _build_links(self, version_data): """Generate a container of links that refer to the provided version.""" links = copy.deepcopy(version_data.get('links', {})) version = _URL_SUFFIX.get(version_data['id']) links.append({'rel': 'self', 'href': self._generate_href(version=version)}) return links def _generate_href(self, version='v1', path=None): """Create a URL that refers to a specific version_number.""" base_url = self._get_base_url_without_version() href = urllib.parse.urljoin(base_url, version).rstrip('/') + '/' if path: href += path.lstrip('/') return href def _get_base_url_without_version(self): """Get the base URL with out the /v1 suffix.""" return re.sub('v[1-9]+/?$', '', self.base_url) manila-10.0.0/manila/api/views/quota_class_sets.py0000664000175000017500000000407313656750227022171 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api import common class ViewBuilder(common.ViewBuilder): _collection_name = "quota_class_set" _detail_version_modifiers = [ "add_share_group_quotas", "add_share_replica_quotas", ] def detail_list(self, request, quota_class_set, quota_class=None): """Detailed view of quota class set.""" keys = ( 'shares', 'gigabytes', 'snapshots', 'snapshot_gigabytes', 'share_networks', ) view = {key: quota_class_set.get(key) for key in keys} if quota_class: view['id'] = quota_class self.update_versioned_resource_dict(request, view, quota_class_set) return {self._collection_name: view} @common.ViewBuilder.versioned_method("2.40") def add_share_group_quotas(self, context, view, quota_class_set): share_groups = quota_class_set.get('share_groups') share_group_snapshots = quota_class_set.get('share_group_snapshots') if share_groups is not None: view['share_groups'] = share_groups if share_group_snapshots is not None: view['share_group_snapshots'] = share_group_snapshots @common.ViewBuilder.versioned_method("2.53") def add_share_replica_quotas(self, context, view, quota_class_set): view['share_replicas'] = quota_class_set.get('share_replicas') view['replica_gigabytes'] = quota_class_set.get('replica_gigabytes') manila-10.0.0/manila/api/views/share_migration.py0000664000175000017500000000225613656750227021771 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api import common class ViewBuilder(common.ViewBuilder): """Model share migration view data response as a python dictionary.""" _collection_name = 'share_migration' _detail_version_modifiers = [] def get_progress(self, request, share, progress): """View of share migration job progress.""" result = { 'total_progress': progress['total_progress'], 'task_state': share['task_state'], } self.update_versioned_resource_dict(request, result, progress) return result manila-10.0.0/manila/api/views/scheduler_stats.py0000664000175000017500000000343213656750227022007 0ustar zuulzuul00000000000000# Copyright (c) 2014 eBay Inc. # Copyright (c) 2015 Rushil Chugh # Copyright (c) 2015 Clinton Knight # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api import common class ViewBuilder(common.ViewBuilder): """Model scheduler-stats API responses as a python dictionary.""" _collection_name = "scheduler-stats" def pool_summary(self, pool): """Summary view of a single pool.""" return { 'pool': { 'name': pool.get('name'), 'host': pool.get('host'), 'backend': pool.get('backend'), 'pool': pool.get('pool'), } } def pool_detail(self, pool): """Detailed view of a single pool.""" return { 'pool': { 'name': pool.get('name'), 'host': pool.get('host'), 'backend': pool.get('backend'), 'pool': pool.get('pool'), 'capabilities': pool.get('capabilities'), } } def pools(self, pools, detail=False): """View of a list of pools seen by scheduler.""" view_method = self.pool_detail if detail else self.pool_summary return {"pools": [view_method(pool)['pool'] for pool in pools]} manila-10.0.0/manila/api/views/share_groups.py0000664000175000017500000000721413656750227021316 0ustar zuulzuul00000000000000# Copyright 2015 Alex Meade # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from manila.api import common class ShareGroupViewBuilder(common.ViewBuilder): """Model a share group API response as a python dictionary.""" _collection_name = 'share_groups' _detail_version_modifiers = [ "add_consistent_snapshot_support_and_az_id_fields_to_sg", ] def summary_list(self, request, share_groups): """Show a list of share groups without many details.""" return self._list_view(self.summary, request, share_groups) def detail_list(self, request, share_groups): """Detailed view of a list of share groups.""" return self._list_view(self.detail, request, share_groups) def summary(self, request, share_group): """Generic, non-detailed view of a share group.""" return { 'share_group': { 'id': share_group.get('id'), 'name': share_group.get('name'), 'links': self._get_links(request, share_group['id']) } } def detail(self, request, share_group): """Detailed view of a single share group.""" context = request.environ['manila.context'] share_group_dict = { 'id': share_group.get('id'), 'name': share_group.get('name'), 'created_at': share_group.get('created_at'), 'status': share_group.get('status'), 'description': share_group.get('description'), 'project_id': share_group.get('project_id'), 'host': share_group.get('host'), 'share_group_type_id': share_group.get('share_group_type_id'), 'source_share_group_snapshot_id': share_group.get( 'source_share_group_snapshot_id'), 'share_network_id': share_group.get('share_network_id'), 'share_types': [st['share_type_id'] for st in share_group.get( 'share_types')], 'links': self._get_links(request, share_group['id']), } self.update_versioned_resource_dict( request, share_group_dict, share_group) if context.is_admin: share_group_dict['share_server_id'] = share_group.get( 'share_server_id') return {'share_group': share_group_dict} @common.ViewBuilder.versioned_method("2.34") def add_consistent_snapshot_support_and_az_id_fields_to_sg( self, context, sg_dict, sg): sg_dict['availability_zone'] = sg.get('availability_zone') sg_dict['consistent_snapshot_support'] = sg.get( 'consistent_snapshot_support') def _list_view(self, func, request, shares): """Provide a view for a list of share groups.""" share_group_list = [ func(request, share)['share_group'] for share in shares ] share_groups_links = self._get_collection_links( request, shares, self._collection_name) share_groups_dict = {"share_groups": share_group_list} if share_groups_links: share_groups_dict['share_groups_links'] = share_groups_links return share_groups_dict manila-10.0.0/manila/api/views/availability_zones.py0000664000175000017500000000224513656750227022504 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from manila.api import common class ViewBuilder(common.ViewBuilder): _collection_name = "availability_zones" def _detail(self, availability_zone): """Detailed view of a single availability zone.""" keys = ('id', 'name', 'created_at', 'updated_at') return {key: availability_zone.get(key) for key in keys} def detail_list(self, availability_zones): """Detailed view of a list of availability zones.""" azs = [self._detail(az) for az in availability_zones] return {self._collection_name: azs} manila-10.0.0/manila/api/views/quota_sets.py0000664000175000017500000000453013656750227021002 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api import common class ViewBuilder(common.ViewBuilder): _collection_name = "quota_set" _detail_version_modifiers = [ "add_share_group_quotas", "add_share_replica_quotas", ] def detail_list(self, request, quota_set, project_id=None, share_type=None): """Detailed view of quota set.""" keys = ( 'shares', 'gigabytes', 'snapshots', 'snapshot_gigabytes', ) view = {key: quota_set.get(key) for key in keys} if project_id: view['id'] = project_id if share_type: # NOTE(vponomaryov): remove share groups related data for quotas # that are share-type based. quota_set.pop('share_groups', None) quota_set.pop('share_group_snapshots', None) else: view['share_networks'] = quota_set.get('share_networks') self.update_versioned_resource_dict(request, view, quota_set) return {self._collection_name: view} @common.ViewBuilder.versioned_method("2.40") def add_share_group_quotas(self, context, view, quota_set): share_groups = quota_set.get('share_groups') share_group_snapshots = quota_set.get('share_group_snapshots') if share_groups is not None: view['share_groups'] = share_groups if share_group_snapshots is not None: view['share_group_snapshots'] = share_group_snapshots @common.ViewBuilder.versioned_method("2.53") def add_share_replica_quotas(self, context, view, quota_class_set): view['share_replicas'] = quota_class_set.get('share_replicas') view['replica_gigabytes'] = quota_class_set.get('replica_gigabytes') manila-10.0.0/manila/api/views/types.py0000664000175000017500000001147713656750227017767 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
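#
# Illustrative sketch, not part of the upstream module: the share type
# builder below uses ViewBuilder.versioned_method to pick response keys
# by request microversion.  With made-up values, the 'share_type' body of
# show() carries the new-style key for API 2.7 and later, and the legacy
# extension-style key for 1.0 through 2.6:
#
#   # request at microversion 2.7 or later
#   {'id': '...', 'name': 'default', 'extra_specs': {...},
#    'share_type_access:is_public': True, ...}
#   # request at microversion 1.0 - 2.6
#   {'id': '...', 'name': 'default', 'extra_specs': {...},
#    'os-share-type-access:is_public': True, ...}
#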
from oslo_config import cfg from manila.api import common from manila.common import constants from manila.share import share_types CONF = cfg.CONF class ViewBuilder(common.ViewBuilder): _collection_name = 'types' _detail_version_modifiers = [ "add_is_public_attr_core_api_like", "add_is_public_attr_extension_like", "add_inferred_optional_extra_specs", "add_description_attr", "add_is_default_attr" ] def show(self, request, share_type, brief=False): """Trim away extraneous share type attributes.""" extra_specs = share_type.get('extra_specs', {}) required_extra_specs = share_type.get('required_extra_specs', {}) # Remove non-tenant-visible extra specs in a non-admin context if not request.environ['manila.context'].is_admin: extra_spec_names = share_types.get_tenant_visible_extra_specs() extra_specs = self._filter_extra_specs(extra_specs, extra_spec_names) required_extra_specs = self._filter_extra_specs( required_extra_specs, extra_spec_names) trimmed = { 'id': share_type.get('id'), 'name': share_type.get('name'), 'extra_specs': extra_specs, 'required_extra_specs': required_extra_specs, } self.update_versioned_resource_dict(request, trimmed, share_type) if brief: return trimmed else: return dict(volume_type=trimmed, share_type=trimmed) @common.ViewBuilder.versioned_method("2.7") def add_is_public_attr_core_api_like(self, context, share_type_dict, share_type): share_type_dict['share_type_access:is_public'] = share_type.get( 'is_public', True) @common.ViewBuilder.versioned_method("1.0", "2.6") def add_is_public_attr_extension_like(self, context, share_type_dict, share_type): share_type_dict['os-share-type-access:is_public'] = share_type.get( 'is_public', True) @common.ViewBuilder.versioned_method("2.24") def add_inferred_optional_extra_specs(self, context, share_type_dict, share_type): # NOTE(cknight): The admin sees exactly which extra specs have been set # on the type, but in order to know how shares of a type will behave, # the user must also see the default values of any public extra specs # that aren't explicitly set on the type. 
if not context.is_admin: for extra_spec in constants.ExtraSpecs.INFERRED_OPTIONAL_MAP: if extra_spec not in share_type_dict['extra_specs']: share_type_dict['extra_specs'][extra_spec] = ( constants.ExtraSpecs.INFERRED_OPTIONAL_MAP[extra_spec]) def index(self, request, share_types): """Index over trimmed share types.""" share_types_list = [self.show(request, share_type, True) for share_type in share_types] return dict(volume_types=share_types_list, share_types=share_types_list) def share_type_access(self, request, share_type): """Return a dictionary view of the projects with access to type.""" projects = [ {'share_type_id': share_type['id'], 'project_id': project_id} for project_id in share_type['projects'] ] return {'share_type_access': projects} def _filter_extra_specs(self, extra_specs, valid_keys): return {key: value for key, value in extra_specs.items() if key in valid_keys} @common.ViewBuilder.versioned_method("2.41") def add_description_attr(self, context, share_type_dict, share_type): share_type_dict['description'] = share_type.get('description') @common.ViewBuilder.versioned_method("2.46") def add_is_default_attr(self, context, share_type_dict, share_type): is_default = False type_name = share_type.get('name') default_name = CONF.default_share_type if default_name is not None: is_default = default_name == type_name share_type_dict['is_default'] = is_default manila-10.0.0/manila/api/views/share_network_subnets.py0000664000175000017500000000464613656750227023241 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
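#
# Illustrative sketch, not part of the upstream module: the builder below
# wraps a single subnet as {'share_network_subnet': {...}} and a list as
# {'share_network_subnets': [...]}, copying fields such as
# neutron_net_id, cidr, ip_version and gateway straight from the DB
# record.  A single-subnet response therefore looks roughly like this
# (all values are made up):
#
#   {'share_network_subnet': {'id': 'fake-subnet-id',
#                             'share_network_id': 'fake-network-id',
#                             'neutron_net_id': 'fake-neutron-net',
#                             'cidr': '10.0.0.0/24',
#                             'ip_version': 4,
#                             'gateway': '10.0.0.1',
#                             ...}}
#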
from manila.api import common class ViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = 'share_network_subnets' def build_share_network_subnet(self, request, share_network_subnet): return { 'share_network_subnet': self._build_share_network_subnet_view( request, share_network_subnet)} def build_share_network_subnets(self, request, share_network_subnets): return {'share_network_subnets': [self._build_share_network_subnet_view( request, share_network_subnet) for share_network_subnet in share_network_subnets]} def _build_share_network_subnet_view(self, request, share_network_subnet): sns = { 'id': share_network_subnet.get('id'), 'availability_zone': share_network_subnet.get('availability_zone'), 'share_network_id': share_network_subnet.get('share_network_id'), 'share_network_name': share_network_subnet['share_network_name'], 'created_at': share_network_subnet.get('created_at'), 'segmentation_id': share_network_subnet.get('segmentation_id'), 'neutron_subnet_id': share_network_subnet.get('neutron_subnet_id'), 'updated_at': share_network_subnet.get('updated_at'), 'neutron_net_id': share_network_subnet.get('neutron_net_id'), 'ip_version': share_network_subnet.get('ip_version'), 'cidr': share_network_subnet.get('cidr'), 'network_type': share_network_subnet.get('network_type'), 'mtu': share_network_subnet.get('mtu'), 'gateway': share_network_subnet.get('gateway') } self.update_versioned_resource_dict(request, sns, share_network_subnet) return sns manila-10.0.0/manila/api/views/services.py0000664000175000017500000000226313656750227020437 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api import common class ViewBuilder(common.ViewBuilder): _collection_name = "services" def summary(self, service): """Summary view of a single service.""" keys = 'host', 'binary', 'disabled' return {key: service.get(key) for key in keys} def detail_list(self, services): """Detailed view of a list of services.""" keys = 'id', 'binary', 'host', 'zone', 'status', 'state', 'updated_at' views = [{key: s.get(key) for key in keys} for s in services] return {self._collection_name: views} manila-10.0.0/manila/api/views/share_instance.py0000664000175000017500000001115613656750227021603 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
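#
# Illustrative sketch, not part of the upstream module: several entries in
# _detail_version_modifiers below remove or rewrite fields rather than add
# them.  remove_export_locations is registered for microversion 2.9, so a
# detail() response built for a 2.9+ request no longer carries
# 'export_location' or 'export_locations', and
# translate_creating_from_snapshot_status (registered for 1.0 - 2.53) maps
# STATUS_CREATING_FROM_SNAPSHOT back to STATUS_CREATING for older clients.
# A 2.9+ detail body therefore looks roughly like (made-up values):
#
#   {'share_instance': {'id': 'fake-instance-id',
#                       'share_id': 'fake-share-id',
#                       'status': 'available',
#                       'host': 'host@backend#pool',
#                       ...}}   # no export_location / export_locations
#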
from manila.api import common from manila.common import constants class ViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = 'share_instances' _collection_links = 'share_instances_links' _detail_version_modifiers = [ "remove_export_locations", "add_access_rules_status_field", "add_replication_fields", "add_share_type_field", "add_cast_rules_to_readonly_field", "add_progress_field", "translate_creating_from_snapshot_status", ] def detail_list(self, request, instances): """Detailed view of a list of share instances.""" return self._list_view(self.detail, request, instances) def detail(self, request, share_instance): """Detailed view of a single share instance.""" export_locations = [e['path'] for e in share_instance.export_locations] instance_dict = { 'id': share_instance.get('id'), 'share_id': share_instance.get('share_id'), 'availability_zone': share_instance.get('availability_zone'), 'created_at': share_instance.get('created_at'), 'host': share_instance.get('host'), 'status': share_instance.get('status'), 'share_network_id': share_instance.get('share_network_id'), 'share_server_id': share_instance.get('share_server_id'), 'export_location': share_instance.get('export_location'), 'export_locations': export_locations, } self.update_versioned_resource_dict( request, instance_dict, share_instance) return {'share_instance': instance_dict} def _list_view(self, func, request, instances): """Provide a view for a list of share instances.""" instances_list = [func(request, instance)['share_instance'] for instance in instances] instances_links = self._get_collection_links(request, instances, self._collection_name) instances_dict = {self._collection_name: instances_list} if instances_links: instances_dict[self._collection_links] = instances_links return instances_dict @common.ViewBuilder.versioned_method("2.9") def remove_export_locations(self, context, share_instance_dict, share_instance): share_instance_dict.pop('export_location') share_instance_dict.pop('export_locations') @common.ViewBuilder.versioned_method("2.10") def add_access_rules_status_field(self, context, instance_dict, share_instance): instance_dict['access_rules_status'] = ( share_instance.get('access_rules_status') ) @common.ViewBuilder.versioned_method("2.11") def add_replication_fields(self, context, instance_dict, share_instance): instance_dict['replica_state'] = share_instance.get('replica_state') @common.ViewBuilder.versioned_method("2.22") def add_share_type_field(self, context, instance_dict, share_instance): instance_dict['share_type_id'] = share_instance.get('share_type_id') @common.ViewBuilder.versioned_method("2.30") def add_cast_rules_to_readonly_field(self, context, instance_dict, share_instance): instance_dict['cast_rules_to_readonly'] = share_instance.get( 'cast_rules_to_readonly', False) @common.ViewBuilder.versioned_method("1.0", "2.53") def translate_creating_from_snapshot_status(self, context, instance_dict, share_instance): if (share_instance.get('status') == constants.STATUS_CREATING_FROM_SNAPSHOT): instance_dict['status'] = constants.STATUS_CREATING @common.ViewBuilder.versioned_method("2.54") def add_progress_field(self, context, instance_dict, share_instance): instance_dict['progress'] = share_instance.get('progress') manila-10.0.0/manila/api/views/security_service.py0000664000175000017500000000574513656750227022213 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack LLC. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.api import common from manila.common import constants class ViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = 'security_services' _detail_version_modifiers = [ 'add_ou_to_security_service', ] def summary_list(self, request, security_services): """Show a list of security services without many details.""" return self._list_view(self.summary, request, security_services) def detail_list(self, request, security_services): """Detailed view of a list of security services.""" return self._list_view(self.detail, request, security_services) def summary(self, request, security_service): """Generic, non-detailed view of a security service.""" return { 'security_service': { 'id': security_service.get('id'), 'name': security_service.get('name'), 'type': security_service.get('type'), # NOTE(vponomaryov): attr "status" was removed from model and # is left in view for compatibility purposes since it affects # user-facing API. This should be removed right after no one # uses it anymore. 'status': constants.STATUS_NEW, } } def detail(self, request, security_service): """Detailed view of a single security service.""" view = self.summary(request, security_service) keys = ( 'created_at', 'updated_at', 'description', 'dns_ip', 'server', 'domain', 'user', 'password', 'project_id') for key in keys: view['security_service'][key] = security_service.get(key) self.update_versioned_resource_dict( request, view['security_service'], security_service) return view @common.ViewBuilder.versioned_method("2.44") def add_ou_to_security_service(self, context, ss_dict, ss): ss_dict['ou'] = ss.get('ou') def _list_view(self, func, request, security_services): """Provide a view for a list of security services.""" security_services_list = [func(request, service)['security_service'] for service in security_services] security_services_dict = dict(security_services=security_services_list) return security_services_dict manila-10.0.0/manila/api/views/share_snapshots.py0000664000175000017500000001005013656750227022011 0ustar zuulzuul00000000000000# Copyright 2013 NetApp # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
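#
# Illustrative sketch, not part of the upstream module: in the builder
# below, detail() reports the snapshot 'status' from the aggregate_status
# column, and add_provider_location_field adds 'provider_location' only
# for admin contexts on requests at microversion 2.12 or later.  A
# non-admin detail response therefore looks roughly like this (made-up
# values):
#
#   {'snapshot': {'id': 'fake-snap-id',
#                 'share_id': 'fake-share-id',
#                 'status': 'available',
#                 'name': 'daily',
#                 'size': 1,
#                 ...}}   # no provider_location
#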
from manila.api import common class ViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = 'snapshots' _detail_version_modifiers = [ "add_provider_location_field", "add_project_and_user_ids", ] def summary_list(self, request, snapshots): """Show a list of share snapshots without many details.""" return self._list_view(self.summary, request, snapshots) def detail_list(self, request, snapshots): """Detailed view of a list of share snapshots.""" return self._list_view(self.detail, request, snapshots) def summary(self, request, snapshot): """Generic, non-detailed view of an share snapshot.""" return { 'snapshot': { 'id': snapshot.get('id'), 'name': snapshot.get('display_name'), 'links': self._get_links(request, snapshot['id']) } } def detail(self, request, snapshot): """Detailed view of a single share snapshot.""" snapshot_dict = { 'id': snapshot.get('id'), 'share_id': snapshot.get('share_id'), 'share_size': snapshot.get('share_size'), 'created_at': snapshot.get('created_at'), 'status': snapshot.get('aggregate_status'), 'name': snapshot.get('display_name'), 'description': snapshot.get('display_description'), 'size': snapshot.get('size'), 'share_proto': snapshot.get('share_proto'), 'links': self._get_links(request, snapshot['id']), } self.update_versioned_resource_dict(request, snapshot_dict, snapshot) return {'snapshot': snapshot_dict} @common.ViewBuilder.versioned_method("2.12") def add_provider_location_field(self, context, snapshot_dict, snapshot): # NOTE(xyang): Only retrieve provider_location for admin. if context.is_admin: snapshot_dict['provider_location'] = snapshot.get( 'provider_location') @common.ViewBuilder.versioned_method("2.17") def add_project_and_user_ids(self, context, snapshot_dict, snapshot): snapshot_dict['user_id'] = snapshot.get('user_id') snapshot_dict['project_id'] = snapshot.get('project_id') def _list_view(self, func, request, snapshots): """Provide a view for a list of share snapshots.""" snapshots_list = [func(request, snapshot)['snapshot'] for snapshot in snapshots] snapshots_links = self._get_collection_links(request, snapshots, self._collection_name) snapshots_dict = {self._collection_name: snapshots_list} if snapshots_links: snapshots_dict['share_snapshots_links'] = snapshots_links return snapshots_dict def detail_access(self, request, access): access = { 'snapshot_access': { 'id': access['id'], 'access_type': access['access_type'], 'access_to': access['access_to'], 'state': access['state'], } } return access def detail_list_access(self, request, access_list): return { 'snapshot_access_list': ([self.detail_access(request, access)['snapshot_access'] for access in access_list]) } manila-10.0.0/manila/api/openstack/0000775000175000017500000000000013656750362017071 5ustar zuulzuul00000000000000manila-10.0.0/manila/api/openstack/__init__.py0000664000175000017500000001137113656750227021205 0ustar zuulzuul00000000000000# Copyright (c) 2013 OpenStack, LLC. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
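#
# Illustrative sketch, not part of the upstream package: ProjectMapper,
# defined below, prefixes every registered resource with the project id.
# Registering a resource the way APIRouter does (the controller name here
# is made up):
#
#   mapper = ProjectMapper()
#   mapper.resource('share', 'shares', controller=fake_wsgi_resource)
#
# yields routes rooted at '{project_id}/shares', and passing
# parent_resource nests the routes under the parent collection, e.g.
# '{project_id}/shares/{share_id}/...' (paths are an approximation of
# what the routes mapper generates).
#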
""" WSGI middleware for OpenStack API controllers. """ from oslo_log import log from oslo_service import wsgi as base_wsgi import routes from manila.api.openstack import wsgi from manila.i18n import _ LOG = log.getLogger(__name__) class APIMapper(routes.Mapper): def routematch(self, url=None, environ=None): if url == "": result = self._match("", environ) return result[0], result[1] return routes.Mapper.routematch(self, url, environ) def connect(self, *args, **kwargs): # NOTE(inhye): Default the format part of a route to only accept json # and xml so it doesn't eat all characters after a '.' # in the url. kwargs.setdefault('requirements', {}) if not kwargs['requirements'].get('format'): kwargs['requirements']['format'] = 'json|xml' return routes.Mapper.connect(self, *args, **kwargs) class ProjectMapper(APIMapper): def resource(self, member_name, collection_name, **kwargs): if 'parent_resource' not in kwargs: kwargs['path_prefix'] = '{project_id}/' else: parent_resource = kwargs['parent_resource'] p_collection = parent_resource['collection_name'] p_member = parent_resource['member_name'] kwargs['path_prefix'] = '{project_id}/%s/:%s_id' % (p_collection, p_member) routes.Mapper.resource(self, member_name, collection_name, **kwargs) class APIRouter(base_wsgi.Router): """Routes requests on the API to the appropriate controller and method.""" ExtensionManager = None # override in subclasses @classmethod def factory(cls, global_config, **local_config): """Simple paste factory, :class:`manila.wsgi.Router` doesn't have.""" return cls() def __init__(self, ext_mgr=None): if ext_mgr is None: if self.ExtensionManager: # pylint: disable=not-callable ext_mgr = self.ExtensionManager() else: raise Exception(_("Must specify an ExtensionManager class")) mapper = ProjectMapper() self.resources = {} self._setup_routes(mapper) self._setup_ext_routes(mapper, ext_mgr) self._setup_extensions(ext_mgr) super(APIRouter, self).__init__(mapper) def _setup_ext_routes(self, mapper, ext_mgr): for resource in ext_mgr.get_resources(): LOG.debug('Extended resource: %s', resource.collection) wsgi_resource = wsgi.Resource(resource.controller) self.resources[resource.collection] = wsgi_resource kargs = dict( controller=wsgi_resource, collection=resource.collection_actions, member=resource.member_actions) if resource.parent: kargs['parent_resource'] = resource.parent mapper.resource(resource.collection, resource.collection, **kargs) if resource.custom_routes_fn: resource.custom_routes_fn(mapper, wsgi_resource) def _setup_extensions(self, ext_mgr): for extension in ext_mgr.get_controller_extensions(): ext_name = extension.extension.name collection = extension.collection controller = extension.controller if collection not in self.resources: LOG.warning('Extension %(ext_name)s: Cannot extend ' 'resource %(collection)s: No such resource', {'ext_name': ext_name, 'collection': collection}) continue LOG.debug('Extension %(ext_name)s extending resource: ' '%(collection)s', {'ext_name': ext_name, 'collection': collection}) resource = self.resources[collection] resource.register_actions(controller) resource.register_extensions(controller) def _setup_routes(self, mapper): raise NotImplementedError manila-10.0.0/manila/api/openstack/versioned_method.py0000664000175000017500000000326513656750227023007 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # Copyright 2015 Clinton Knight # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila import utils class VersionedMethod(utils.ComparableMixin): def __init__(self, name, start_version, end_version, experimental, func): """Versioning information for a single method. Minimum and maximums are inclusive. :param name: Name of the method :param start_version: Minimum acceptable version :param end_version: Maximum acceptable_version :param experimental: True if method is experimental :param func: Method to call """ self.name = name self.start_version = start_version self.end_version = end_version self.experimental = experimental self.func = func def __str__(self): args = { 'name': self.name, 'start': self.start_version, 'end': self.end_version } return ("Version Method %(name)s: min: %(start)s, max: %(end)s" % args) def _cmpkey(self): """Return the value used by ComparableMixin for rich comparisons.""" return self.start_version manila-10.0.0/manila/api/openstack/urlmap.py0000664000175000017500000000176213656750227020751 0ustar zuulzuul00000000000000# Copyright (c) 2013 OpenStack, LLC. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from manila.api import urlmap LOG = log.getLogger(__name__) def urlmap_factory(loader, global_conf, **local_conf): LOG.warning('manila.api.openstack.urlmap:urlmap_factory ' 'is deprecated. ' 'Please use manila.api.urlmap:urlmap_factory instead.') urlmap.urlmap_factory(loader, global_conf, **local_conf) manila-10.0.0/manila/api/openstack/wsgi.py0000664000175000017500000014262513656750227020426 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
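#
# Illustrative sketch, not part of the upstream module: the Request class
# defined below derives the effective API microversion from the URL and
# the X-OpenStack-Manila-API-Version header.  Roughly: a /v1 URL is
# pinned to 1.0, a /v2 URL honors the header (falling back to the default
# version when it is absent), and the experimental header opts a request
# in to experimental APIs.  For example (header values are illustrative):
#
#   # incoming /v2 request headers
#   #   X-OpenStack-Manila-API-Version: 2.54
#   #   X-OpenStack-Manila-API-Experimental: true
#   # -> request.api_version_request matches APIVersionRequest('2.54')
#   #    and request.api_version_request.experimental is True
#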
import functools import inspect import math import time from oslo_log import log from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import strutils import six from six.moves import http_client import webob import webob.exc from manila.api.openstack import api_version_request as api_version from manila.api.openstack import versioned_method from manila.common import constants from manila import exception from manila.i18n import _ from manila import policy from manila import utils from manila.wsgi import common as wsgi LOG = log.getLogger(__name__) SUPPORTED_CONTENT_TYPES = ( 'application/json', ) _MEDIA_TYPE_MAP = { 'application/json': 'json', } # name of attribute to keep version method information VER_METHOD_ATTR = 'versioned_methods' # Name of header used by clients to request a specific version # of the REST API API_VERSION_REQUEST_HEADER = 'X-OpenStack-Manila-API-Version' EXPERIMENTAL_API_REQUEST_HEADER = 'X-OpenStack-Manila-API-Experimental' V1_SCRIPT_NAME = '/v1' V2_SCRIPT_NAME = '/v2' class Request(webob.Request): """Add some OpenStack API-specific logic to the base webob.Request.""" def __init__(self, *args, **kwargs): super(Request, self).__init__(*args, **kwargs) self._resource_cache = {} if not hasattr(self, 'api_version_request'): self.api_version_request = api_version.APIVersionRequest() def cache_resource(self, resource_to_cache, id_attribute='id', name=None): """Cache the given resource. Allow API methods to cache objects, such as results from a DB query, to be used by API extensions within the same API request. The resource_to_cache can be a list or an individual resource, but ultimately resources are cached individually using the given id_attribute. Different resources types might need to be cached during the same request, they can be cached using the name parameter. For example: Controller 1: request.cache_resource(db_volumes, 'volumes') request.cache_resource(db_volume_types, 'types') Controller 2: db_volumes = request.cached_resource('volumes') db_type_1 = request.cached_resource_by_id('1', 'types') If no name is given, a default name will be used for the resource. An instance of this class only lives for the lifetime of a single API request, so there's no need to implement full cache management. """ if not isinstance(resource_to_cache, list): resource_to_cache = [resource_to_cache] if not name: name = self.path cached_resources = self._resource_cache.setdefault(name, {}) for resource in resource_to_cache: cached_resources[resource[id_attribute]] = resource def cached_resource(self, name=None): """Get the cached resources cached under the given resource name. Allow an API extension to get previously stored objects within the same API request. Note that the object data will be slightly stale. :returns: a dict of id_attribute to the resource from the cached resources, an empty map if an empty collection was cached, or None if nothing has been cached yet under this name """ if not name: name = self.path if name not in self._resource_cache: # Nothing has been cached for this key yet return None return self._resource_cache[name] def cached_resource_by_id(self, resource_id, name=None): """Get a resource by ID cached under the given resource name. Allow an API extension to get a previously stored object within the same API request. This is basically a convenience method to lookup by ID on the dictionary of all cached resources. Note that the object data will be slightly stale. 
:returns: the cached resource or None if the item is not in the cache """ resources = self.cached_resource(name) if not resources: # Nothing has been cached yet for this key yet return None return resources.get(resource_id) def cache_db_items(self, key, items, item_key='id'): """Cache db items. Allow API methods to store objects from a DB query to be used by API extensions within the same API request. An instance of this class only lives for the lifetime of a single API request, so there's no need to implement full cache management. """ self.cache_resource(items, item_key, key) def get_db_items(self, key): """Get db item by key. Allow an API extension to get previously stored objects within the same API request. Note that the object data will be slightly stale. """ return self.cached_resource(key) def get_db_item(self, key, item_key): """Get db item by key and item key. Allow an API extension to get a previously stored object within the same API request. Note that the object data will be slightly stale. """ return self.get_db_items(key).get(item_key) def cache_db_share_types(self, share_types): self.cache_db_items('share_types', share_types, 'id') def cache_db_share_type(self, share_type): self.cache_db_items('share_types', [share_type], 'id') def get_db_share_types(self): return self.get_db_items('share_types') def get_db_share_type(self, share_type_id): return self.get_db_item('share_types', share_type_id) def best_match_content_type(self): """Determine the requested response content-type.""" if 'manila.best_content_type' not in self.environ: # Calculate the best MIME type content_type = None # Check URL path suffix parts = self.path.rsplit('.', 1) if len(parts) > 1: possible_type = 'application/' + parts[1] if possible_type in SUPPORTED_CONTENT_TYPES: content_type = possible_type if not content_type: content_type = self.accept.best_match(SUPPORTED_CONTENT_TYPES) self.environ['manila.best_content_type'] = (content_type or 'application/json') return self.environ['manila.best_content_type'] def get_content_type(self): """Determine content type of the request body. Does not do any body introspection, only checks header. """ if "Content-Type" not in self.headers: return None allowed_types = SUPPORTED_CONTENT_TYPES content_type = self.content_type if content_type not in allowed_types: raise exception.InvalidContentType(content_type=content_type) return content_type def set_api_version_request(self): """Set API version request based on the request header information. Microversions starts with /v2, so if a client sends a /v1 URL, then ignore the headers and request 1.0 APIs. 
""" if not self.script_name or not (V1_SCRIPT_NAME in self.script_name or V2_SCRIPT_NAME in self.script_name): # The request is on the base URL without a major version specified self.api_version_request = api_version.APIVersionRequest() elif V1_SCRIPT_NAME in self.script_name: self.api_version_request = api_version.APIVersionRequest('1.0') else: if API_VERSION_REQUEST_HEADER in self.headers: hdr_string = self.headers[API_VERSION_REQUEST_HEADER] self.api_version_request = api_version.APIVersionRequest( hdr_string) # Check that the version requested is within the global # minimum/maximum of supported API versions if not self.api_version_request.matches( api_version.min_api_version(), api_version.max_api_version()): raise exception.InvalidGlobalAPIVersion( req_ver=self.api_version_request.get_string(), min_ver=api_version.min_api_version().get_string(), max_ver=api_version.max_api_version().get_string()) else: self.api_version_request = api_version.APIVersionRequest( api_version.DEFAULT_API_VERSION) # Check if experimental API was requested if EXPERIMENTAL_API_REQUEST_HEADER in self.headers: self.api_version_request.experimental = strutils.bool_from_string( self.headers[EXPERIMENTAL_API_REQUEST_HEADER]) class ActionDispatcher(object): """Maps method name to local methods through action name.""" def dispatch(self, *args, **kwargs): """Find and call local method.""" action = kwargs.pop('action', 'default') action_method = getattr(self, six.text_type(action), self.default) return action_method(*args, **kwargs) def default(self, data): raise NotImplementedError() class TextDeserializer(ActionDispatcher): """Default request body deserialization.""" def deserialize(self, datastring, action='default'): return self.dispatch(datastring, action=action) def default(self, datastring): return {} class JSONDeserializer(TextDeserializer): def _from_json(self, datastring): try: return jsonutils.loads(datastring) except ValueError: msg = _("cannot understand JSON") raise exception.MalformedRequestBody(reason=msg) def default(self, datastring): return {'body': self._from_json(datastring)} class DictSerializer(ActionDispatcher): """Default request body serialization.""" def serialize(self, data, action='default'): return self.dispatch(data, action=action) def default(self, data): return "" class JSONDictSerializer(DictSerializer): """Default JSON request body serialization.""" def default(self, data): return six.b(jsonutils.dumps(data)) def serializers(**serializers): """Attaches serializers to a method. This decorator associates a dictionary of serializers with a method. Note that the function attributes are directly manipulated; the method is not wrapped. """ def decorator(func): if not hasattr(func, 'wsgi_serializers'): func.wsgi_serializers = {} func.wsgi_serializers.update(serializers) return func return decorator def deserializers(**deserializers): """Attaches deserializers to a method. This decorator associates a dictionary of deserializers with a method. Note that the function attributes are directly manipulated; the method is not wrapped. """ def decorator(func): if not hasattr(func, 'wsgi_deserializers'): func.wsgi_deserializers = {} func.wsgi_deserializers.update(deserializers) return func return decorator def response(code): """Attaches response code to a method. This decorator associates a response code with a method. Note that the function attributes are directly manipulated; the method is not wrapped. 
""" def decorator(func): func.wsgi_code = code return func return decorator class ResponseObject(object): """Bundles a response object with appropriate serializers. Object that app methods may return in order to bind alternate serializers with a response object to be serialized. Its use is optional. """ def __init__(self, obj, code=None, headers=None, **serializers): """Binds serializers with an object. Takes keyword arguments akin to the @serializer() decorator for specifying serializers. Serializers specified will be given preference over default serializers or method-specific serializers on return. """ self.obj = obj self.serializers = serializers self._default_code = 200 self._code = code self._headers = headers or {} self.serializer = None self.media_type = None def __getitem__(self, key): """Retrieves a header with the given name.""" return self._headers[key.lower()] def __setitem__(self, key, value): """Sets a header with the given name to the given value.""" self._headers[key.lower()] = value def __delitem__(self, key): """Deletes the header with the given name.""" del self._headers[key.lower()] def _bind_method_serializers(self, meth_serializers): """Binds method serializers with the response object. Binds the method serializers with the response object. Serializers specified to the constructor will take precedence over serializers specified to this method. :param meth_serializers: A dictionary with keys mapping to response types and values containing serializer objects. """ # We can't use update because that would be the wrong # precedence for mtype, serializer in meth_serializers.items(): self.serializers.setdefault(mtype, serializer) def get_serializer(self, content_type, default_serializers=None): """Returns the serializer for the wrapped object. Returns the serializer for the wrapped object subject to the indicated content type. If no serializer matching the content type is attached, an appropriate serializer drawn from the default serializers will be used. If no appropriate serializer is available, raises InvalidContentType. """ default_serializers = default_serializers or {} try: mtype = _MEDIA_TYPE_MAP.get(content_type, content_type) if mtype in self.serializers: return mtype, self.serializers[mtype] else: return mtype, default_serializers[mtype] except (KeyError, TypeError): raise exception.InvalidContentType(content_type=content_type) def preserialize(self, content_type, default_serializers=None): """Prepares the serializer that will be used to serialize. Determines the serializer that will be used and prepares an instance of it for later call. This allows the serializer to be accessed by extensions for, e.g., template extension. """ mtype, serializer = self.get_serializer(content_type, default_serializers) self.media_type = mtype self.serializer = serializer() def attach(self, **kwargs): """Attach slave templates to serializers.""" if self.media_type in kwargs: self.serializer.attach(kwargs[self.media_type]) def serialize(self, request, content_type, default_serializers=None): """Serializes the wrapped object. Utility method for serializing the wrapped object. Returns a webob.Response object. 
""" if self.serializer: serializer = self.serializer else: _mtype, _serializer = self.get_serializer(content_type, default_serializers) serializer = _serializer() response = webob.Response() response.status_int = self.code for hdr, value in self._headers.items(): response.headers[hdr] = six.text_type(value) response.headers['Content-Type'] = six.text_type(content_type) if self.obj is not None: response.body = serializer.serialize(self.obj) return response @property def code(self): """Retrieve the response status.""" return self._code or self._default_code @property def headers(self): """Retrieve the headers.""" return self._headers.copy() def action_peek_json(body): """Determine action to invoke.""" try: decoded = jsonutils.loads(body) except ValueError: msg = _("cannot understand JSON") raise exception.MalformedRequestBody(reason=msg) # Make sure there's exactly one key... if len(decoded) != 1: msg = _("too many body keys") raise exception.MalformedRequestBody(reason=msg) # Return the action and the decoded body... return list(decoded.keys())[0] class ResourceExceptionHandler(object): """Context manager to handle Resource exceptions. Used when processing exceptions generated by API implementation methods (or their extensions). Converts most exceptions to Fault exceptions, with the appropriate logging. """ def __enter__(self): return None def __exit__(self, ex_type, ex_value, ex_traceback): if not ex_value: return True if isinstance(ex_value, exception.NotAuthorized): msg = six.text_type(ex_value) raise Fault(webob.exc.HTTPForbidden(explanation=msg)) elif isinstance(ex_value, exception.VersionNotFoundForAPIMethod): raise elif isinstance(ex_value, exception.Invalid): raise Fault(exception.ConvertedException( code=ex_value.code, explanation=six.text_type(ex_value))) elif isinstance(ex_value, TypeError): exc_info = (ex_type, ex_value, ex_traceback) LOG.error('Exception handling resource: %s', ex_value, exc_info=exc_info) raise Fault(webob.exc.HTTPBadRequest()) elif isinstance(ex_value, Fault): LOG.info("Fault thrown: %s", ex_value) raise ex_value elif isinstance(ex_value, webob.exc.HTTPException): LOG.info("HTTP exception thrown: %s", ex_value) raise Fault(ex_value) # We didn't handle the exception return False class Resource(wsgi.Application): """WSGI app that handles (de)serialization and controller dispatch. WSGI app that reads routing information supplied by RoutesMiddleware and calls the requested action method upon its controller. All controller action methods must accept a 'req' argument, which is the incoming wsgi.Request. If the operation is a PUT or POST, the controller method must also accept a 'body' argument (the deserialized request body). They may raise a webob.exc exception or return a dict, which will be serialized by requested content type. Exceptions derived from webob.exc.HTTPException will be automatically wrapped in Fault() to provide API friendly error responses. """ support_api_request_version = True def __init__(self, controller, action_peek=None, **deserializers): """init method of Resource. 
:param controller: object that implement methods created by routes lib :param action_peek: dictionary of routines for peeking into an action request body to determine the desired action """ self.controller = controller default_deserializers = dict(json=JSONDeserializer) default_deserializers.update(deserializers) self.default_deserializers = default_deserializers self.default_serializers = dict(json=JSONDictSerializer) self.action_peek = dict(json=action_peek_json) self.action_peek.update(action_peek or {}) # Copy over the actions dictionary self.wsgi_actions = {} if controller: self.register_actions(controller) # Save a mapping of extensions self.wsgi_extensions = {} self.wsgi_action_extensions = {} def register_actions(self, controller): """Registers controller actions with this resource.""" actions = getattr(controller, 'wsgi_actions', {}) for key, method_name in actions.items(): self.wsgi_actions[key] = getattr(controller, method_name) def register_extensions(self, controller): """Registers controller extensions with this resource.""" extensions = getattr(controller, 'wsgi_extensions', []) for method_name, action_name in extensions: # Look up the extending method extension = getattr(controller, method_name) if action_name: # Extending an action... if action_name not in self.wsgi_action_extensions: self.wsgi_action_extensions[action_name] = [] self.wsgi_action_extensions[action_name].append(extension) else: # Extending a regular method if method_name not in self.wsgi_extensions: self.wsgi_extensions[method_name] = [] self.wsgi_extensions[method_name].append(extension) def get_action_args(self, request_environment): """Parse dictionary created by routes library.""" # NOTE(Vek): Check for get_action_args() override in the # controller if hasattr(self.controller, 'get_action_args'): return self.controller.get_action_args(request_environment) try: args = request_environment['wsgiorg.routing_args'][1].copy() except (KeyError, IndexError, AttributeError): return {} try: del args['controller'] except KeyError: pass try: del args['format'] except KeyError: pass return args def get_body(self, request): try: content_type = request.get_content_type() except exception.InvalidContentType: LOG.debug("Unrecognized Content-Type provided in request") return None, '' if not content_type: LOG.debug("No Content-Type provided in request") return None, '' if len(request.body) <= 0: LOG.debug("Empty body provided in request") return None, '' return content_type, request.body def deserialize(self, meth, content_type, body): meth_deserializers = getattr(meth, 'wsgi_deserializers', {}) try: mtype = _MEDIA_TYPE_MAP.get(content_type, content_type) if mtype in meth_deserializers: deserializer = meth_deserializers[mtype] else: deserializer = self.default_deserializers[mtype] except (KeyError, TypeError): raise exception.InvalidContentType(content_type=content_type) return deserializer().deserialize(body) def pre_process_extensions(self, extensions, request, action_args): # List of callables for post-processing extensions post = [] for ext in extensions: if inspect.isgeneratorfunction(ext): response = None # If it's a generator function, the part before the # yield is the preprocessing stage try: with ResourceExceptionHandler(): gen = ext(req=request, **action_args) response = next(gen) except Fault as ex: response = ex # We had a response... 
if response: return response, [] # No response, queue up generator for post-processing post.append(gen) else: # Regular functions only perform post-processing post.append(ext) # Run post-processing in the reverse order return None, reversed(post) def post_process_extensions(self, extensions, resp_obj, request, action_args): for ext in extensions: response = None if inspect.isgenerator(ext): # If it's a generator, run the second half of # processing try: with ResourceExceptionHandler(): response = ext.send(resp_obj) except StopIteration: # Normal exit of generator continue except Fault as ex: response = ex else: # Regular functions get post-processing... try: with ResourceExceptionHandler(): response = ext(req=request, resp_obj=resp_obj, **action_args) except exception.VersionNotFoundForAPIMethod: # If an attached extension (@wsgi.extends) for the # method has no version match its not an error. We # just don't run the extends code continue except Fault as ex: response = ex # We had a response... if response: return response return None @webob.dec.wsgify(RequestClass=Request) def __call__(self, request): """WSGI method that controls (de)serialization and method dispatch.""" LOG.info("%(method)s %(url)s", {"method": request.method, "url": request.url}) if self.support_api_request_version: # Set the version of the API requested based on the header try: request.set_api_version_request() except exception.InvalidAPIVersionString as e: return Fault(webob.exc.HTTPBadRequest( explanation=six.text_type(e))) except exception.InvalidGlobalAPIVersion as e: return Fault(webob.exc.HTTPNotAcceptable( explanation=six.text_type(e))) # Identify the action, its arguments, and the requested # content type action_args = self.get_action_args(request.environ) action = action_args.pop('action', None) content_type, body = self.get_body(request) accept = request.best_match_content_type() # NOTE(Vek): Splitting the function up this way allows for # auditing by external tools that wrap the existing # function. If we try to audit __call__(), we can # run into troubles due to the @webob.dec.wsgify() # decorator. return self._process_stack(request, action, action_args, content_type, body, accept) def _process_stack(self, request, action, action_args, content_type, body, accept): """Implement the processing stack.""" # Get the implementing method try: meth, extensions = self.get_method(request, action, content_type, body) except (AttributeError, TypeError): return Fault(webob.exc.HTTPNotFound()) except KeyError as ex: msg = _("There is no such action: %s") % ex.args[0] return Fault(webob.exc.HTTPBadRequest(explanation=msg)) except exception.MalformedRequestBody: msg = _("Malformed request body") return Fault(webob.exc.HTTPBadRequest(explanation=msg)) try: method_name = meth.__qualname__ except AttributeError: method_name = 'Controller: %s Method: %s' % ( six.text_type(self.controller), meth.__name__) if body: decoded_body = encodeutils.safe_decode(body, errors='ignore') msg = ("Action: '%(action)s', calling method: %(meth)s, body: " "%(body)s") % {'action': action, 'body': decoded_body, 'meth': method_name} LOG.debug(strutils.mask_password(msg)) else: LOG.debug("Calling method '%(meth)s'", {'meth': method_name}) # Now, deserialize the request body... 
try: if content_type: contents = self.deserialize(meth, content_type, body) else: contents = {} except exception.InvalidContentType: msg = _("Unsupported Content-Type") return Fault(webob.exc.HTTPBadRequest(explanation=msg)) except exception.MalformedRequestBody: msg = _("Malformed request body") return Fault(webob.exc.HTTPBadRequest(explanation=msg)) # Update the action args action_args.update(contents) project_id = action_args.pop("project_id", None) context = request.environ.get('manila.context') if (context and project_id and (project_id != context.project_id)): msg = _("Malformed request url") return Fault(webob.exc.HTTPBadRequest(explanation=msg)) # Run pre-processing extensions response, post = self.pre_process_extensions(extensions, request, action_args) if not response: try: with ResourceExceptionHandler(): action_result = self.dispatch(meth, request, action_args) except Fault as ex: response = ex if not response: # No exceptions; convert action_result into a # ResponseObject resp_obj = None if type(action_result) is dict or action_result is None: resp_obj = ResponseObject(action_result) elif isinstance(action_result, ResponseObject): resp_obj = action_result else: response = action_result # Run post-processing extensions if resp_obj: _set_request_id_header(request, resp_obj) # Do a preserialize to set up the response object serializers = getattr(meth, 'wsgi_serializers', {}) resp_obj._bind_method_serializers(serializers) if hasattr(meth, 'wsgi_code'): resp_obj._default_code = meth.wsgi_code resp_obj.preserialize(accept, self.default_serializers) # Process post-processing extensions response = self.post_process_extensions(post, resp_obj, request, action_args) if resp_obj and not response: response = resp_obj.serialize(request, accept, self.default_serializers) try: msg_dict = dict(url=request.url, status=response.status_int) msg = _("%(url)s returned with HTTP %(status)s") % msg_dict except AttributeError as e: msg_dict = dict(url=request.url, e=e) msg = _("%(url)s returned a fault: %(e)s") % msg_dict LOG.info(msg) if hasattr(response, 'headers'): for hdr, val in response.headers.items(): val = utils.convert_str(val) response.headers[hdr] = val _set_request_id_header(request, response.headers) if not request.api_version_request.is_null(): response.headers[API_VERSION_REQUEST_HEADER] = ( request.api_version_request.get_string()) if request.api_version_request.experimental: # NOTE(vponomaryov): Translate our boolean header # to string explicitly to avoid 'TypeError' failure # running manila API under Apache + mod-wsgi. # It is safe to do so, because all headers are returned as # strings anyway. response.headers[EXPERIMENTAL_API_REQUEST_HEADER] = ( '%s' % request.api_version_request.experimental) response.headers['Vary'] = API_VERSION_REQUEST_HEADER return response def get_method(self, request, action, content_type, body): """Look up the action-specific method and its extensions.""" # Look up the method try: if not self.controller: meth = getattr(self, action) else: meth = getattr(self.controller, action) except AttributeError: if (not self.wsgi_actions or action not in ['action', 'create', 'delete']): # Propagate the error raise else: return meth, self.wsgi_extensions.get(action, []) if action == 'action': # OK, it's an action; figure out which action... 
mtype = _MEDIA_TYPE_MAP.get(content_type) action_name = self.action_peek[mtype](body) LOG.debug("Action body: %s", body) else: action_name = action # Look up the action method return (self.wsgi_actions[action_name], self.wsgi_action_extensions.get(action_name, [])) def dispatch(self, method, request, action_args): """Dispatch a call to the action-specific method.""" try: return method(req=request, **action_args) except exception.VersionNotFoundForAPIMethod: # We deliberately don't return any message information # about the exception to the user so it looks as if # the method is simply not implemented. return Fault(webob.exc.HTTPNotFound()) def action(name): """Mark a function as an action. The given name will be taken as the action key in the body. This is also overloaded to allow extensions to provide non-extending definitions of create and delete operations. """ def decorator(func): func.wsgi_action = name return func return decorator def extends(*args, **kwargs): """Indicate a function extends an operation. Can be used as either:: @extends def index(...): pass or as:: @extends(action='resize') def _action_resize(...): pass """ def decorator(func): # Store enough information to find what we're extending func.wsgi_extends = (func.__name__, kwargs.get('action')) return func # If we have positional arguments, call the decorator if args: return decorator(*args) # OK, return the decorator instead return decorator class ControllerMetaclass(type): """Controller metaclass. This metaclass automates the task of assembling a dictionary mapping action keys to method names. """ def __new__(mcs, name, bases, cls_dict): """Adds the wsgi_actions dictionary to the class.""" # Find all actions actions = {} extensions = [] versioned_methods = None # start with wsgi actions from base classes for base in bases: actions.update(getattr(base, 'wsgi_actions', {})) if base.__name__ == "Controller": # NOTE(cyeoh): This resets the VER_METHOD_ATTR attribute # between API controller class creations. This allows us # to use a class decorator on the API methods that doesn't # require naming explicitly what method is being versioned as # it can be implicit based on the method decorated. It is a bit # ugly. if VER_METHOD_ATTR in base.__dict__: versioned_methods = getattr(base, VER_METHOD_ATTR) delattr(base, VER_METHOD_ATTR) for key, value in cls_dict.items(): if not callable(value): continue if getattr(value, 'wsgi_action', None): actions[value.wsgi_action] = key elif getattr(value, 'wsgi_extends', None): extensions.append(value.wsgi_extends) # Add the actions and extensions to the class dict cls_dict['wsgi_actions'] = actions cls_dict['wsgi_extensions'] = extensions if versioned_methods: cls_dict[VER_METHOD_ATTR] = versioned_methods return super(ControllerMetaclass, mcs).__new__(mcs, name, bases, cls_dict) @six.add_metaclass(ControllerMetaclass) class Controller(object): """Default controller.""" _view_builder_class = None def __init__(self, view_builder=None): """Initialize controller with a view builder instance.""" if view_builder: self._view_builder = view_builder elif self._view_builder_class: # pylint: disable=not-callable self._view_builder = self._view_builder_class() else: self._view_builder = None def __getattribute__(self, key): def version_select(*args, **kwargs): """Select and call the matching version of the specified method. Look for the method which matches the name supplied and version constraints and calls it with the supplied arguments. 
:returns: Returns the result of the method called :raises: VersionNotFoundForAPIMethod if there is no method which matches the name and version constraints """ # The first arg to all versioned methods is always the request # object. The version for the request is attached to the # request object if len(args) == 0: version_request = kwargs['req'].api_version_request else: version_request = args[0].api_version_request func_list = self.versioned_methods[key] for func in func_list: if version_request.matches_versioned_method(func): # Update the version_select wrapper function so # other decorator attributes like wsgi.response # are still respected. functools.update_wrapper(version_select, func.func) return func.func(self, *args, **kwargs) # No version match raise exception.VersionNotFoundForAPIMethod( version=version_request) try: version_meth_dict = object.__getattribute__(self, VER_METHOD_ATTR) except AttributeError: # No versioning on this class return object.__getattribute__(self, key) if (version_meth_dict and key in object.__getattribute__(self, VER_METHOD_ATTR)): return version_select return object.__getattribute__(self, key) # NOTE(cyeoh): This decorator MUST appear first (the outermost # decorator) on an API method for it to work correctly @classmethod def api_version(cls, min_ver, max_ver=None, experimental=False): """Decorator for versioning API methods. Add the decorator to any method which takes a request object as the first parameter and belongs to a class which inherits from wsgi.Controller. :param min_ver: string representing minimum version :param max_ver: optional string representing maximum version :param experimental: flag indicating an API is experimental and is subject to change or removal at any time """ def decorator(f): obj_min_ver = api_version.APIVersionRequest(min_ver) if max_ver: obj_max_ver = api_version.APIVersionRequest(max_ver) else: obj_max_ver = api_version.APIVersionRequest() # Add to list of versioned methods registered func_name = f.__name__ new_func = versioned_method.VersionedMethod( func_name, obj_min_ver, obj_max_ver, experimental, f) func_dict = getattr(cls, VER_METHOD_ATTR, {}) if not func_dict: setattr(cls, VER_METHOD_ATTR, func_dict) func_list = func_dict.get(func_name, []) if not func_list: func_dict[func_name] = func_list func_list.append(new_func) # Ensure the list is sorted by minimum version (reversed) # so later when we work through the list in order we find # the method which has the latest version which supports # the version requested. # TODO(cyeoh): Add check to ensure that there are no overlapping # ranges of valid versions as that is ambiguous func_list.sort(reverse=True) return f return decorator @staticmethod def authorize(arg): """Decorator for checking the policy on API methods. Add this decorator to any API method which takes a request object as the first parameter and belongs to a class which inherits from wsgi.Controller. The class must also have a class member called 'resource_name' which specifies the resource for the policy check. Can be used in any of the following forms @authorize @authorize('my_action_name') :param arg: Can either be the function being decorated or a str containing the 'action' for the policy check. If no action name is provided, the function name is assumed to be the action name. 
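A minimal sketch of a controller relying on this decorator (controller
and action names are hypothetical; note the required ``resource_name``
class attribute)::

    class ShareController(wsgi.Controller):
        resource_name = 'share'

        @wsgi.Controller.authorize('index')
        def index(self, req):
            ...
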
""" action_name = None def decorator(f): @functools.wraps(f) def wrapper(self, req, *args, **kwargs): action = action_name or f.__name__ context = req.environ['manila.context'] try: policy.check_policy(context, self.resource_name, action) except exception.PolicyNotAuthorized: raise webob.exc.HTTPForbidden() return f(self, req, *args, **kwargs) return wrapper if callable(arg): return decorator(arg) else: action_name = arg return decorator @staticmethod def is_valid_body(body, entity_name): if not (body and entity_name in body): return False def is_dict(d): try: d.get(None) return True except AttributeError: return False if not is_dict(body[entity_name]): return False return True class AdminActionsMixin(object): """Mixin class for API controllers with admin actions.""" body_attributes = { 'status': 'reset_status', 'replica_state': 'reset_replica_state', 'task_state': 'reset_task_state', } valid_statuses = { 'status': set([ constants.STATUS_CREATING, constants.STATUS_AVAILABLE, constants.STATUS_DELETING, constants.STATUS_ERROR, constants.STATUS_ERROR_DELETING, constants.STATUS_MIGRATING, constants.STATUS_MIGRATING_TO, ]), 'replica_state': set([ constants.REPLICA_STATE_ACTIVE, constants.REPLICA_STATE_IN_SYNC, constants.REPLICA_STATE_OUT_OF_SYNC, constants.STATUS_ERROR, ]), 'task_state': set(constants.TASK_STATE_STATUSES), } def _update(self, *args, **kwargs): raise NotImplementedError() def _get(self, *args, **kwargs): raise NotImplementedError() def _delete(self, *args, **kwargs): raise NotImplementedError() def validate_update(self, body, status_attr='status'): update = {} try: update[status_attr] = body[status_attr] except (TypeError, KeyError): msg = _("Must specify '%s'") % status_attr raise webob.exc.HTTPBadRequest(explanation=msg) if update[status_attr] not in self.valid_statuses[status_attr]: expl = (_("Invalid state. 
Valid states: %s.") % ", ".join(six.text_type(i) for i in self.valid_statuses[status_attr])) raise webob.exc.HTTPBadRequest(explanation=expl) return update @Controller.authorize('reset_status') def _reset_status(self, req, id, body, status_attr='status'): """Reset the status_attr specified on the resource.""" context = req.environ['manila.context'] body_attr = self.body_attributes[status_attr] update = self.validate_update( body.get(body_attr, body.get('-'.join(('os', body_attr)))), status_attr=status_attr) msg = "Updating %(resource)s '%(id)s' with '%(update)r'" LOG.debug(msg, {'resource': self.resource_name, 'id': id, 'update': update}) try: self._update(context, id, update) except exception.NotFound as e: raise webob.exc.HTTPNotFound(six.text_type(e)) return webob.Response(status_int=http_client.ACCEPTED) @Controller.authorize('force_delete') def _force_delete(self, req, id, body): """Delete a resource, bypassing the check for status.""" context = req.environ['manila.context'] try: resource = self._get(context, id) except exception.NotFound as e: raise webob.exc.HTTPNotFound(six.text_type(e)) self._delete(context, resource, force=True) return webob.Response(status_int=http_client.ACCEPTED) class Fault(webob.exc.HTTPException): """Wrap webob.exc.HTTPException to provide API friendly response.""" _fault_names = {400: "badRequest", 401: "unauthorized", 403: "forbidden", 404: "itemNotFound", 405: "badMethod", 409: "conflictingRequest", 413: "overLimit", 415: "badMediaType", 501: "notImplemented", 503: "serviceUnavailable"} def __init__(self, exception): """Create a Fault for the given webob.exc.exception.""" self.wrapped_exc = exception self.status_int = exception.status_int @webob.dec.wsgify(RequestClass=Request) def __call__(self, req): """Generate a WSGI response based on the exception passed to ctor.""" # Replace the body with fault details. code = self.wrapped_exc.status_int fault_name = self._fault_names.get(code, "computeFault") fault_data = { fault_name: { 'code': code, 'message': self.wrapped_exc.explanation}} if code == 413: retry = self.wrapped_exc.headers['Retry-After'] fault_data[fault_name]['retryAfter'] = '%s' % retry if not req.api_version_request.is_null(): self.wrapped_exc.headers[API_VERSION_REQUEST_HEADER] = ( req.api_version_request.get_string()) if req.api_version_request.experimental: # NOTE(vponomaryov): Translate our boolean header # to string explicitly to avoid 'TypeError' failure # running manila API under Apache + mod-wsgi. # It is safe to do so, because all headers are returned as # strings anyway. 
self.wrapped_exc.headers[EXPERIMENTAL_API_REQUEST_HEADER] = ( '%s' % req.api_version_request.experimental) self.wrapped_exc.headers['Vary'] = API_VERSION_REQUEST_HEADER content_type = req.best_match_content_type() serializer = { 'application/json': JSONDictSerializer(), }[content_type] self.wrapped_exc.body = serializer.serialize(fault_data) self.wrapped_exc.content_type = content_type _set_request_id_header(req, self.wrapped_exc.headers) return self.wrapped_exc def __str__(self): return self.wrapped_exc.__str__() def _set_request_id_header(req, headers): context = req.environ.get('manila.context') if context: headers['x-compute-request-id'] = context.request_id class OverLimitFault(webob.exc.HTTPException): """Rate-limited request response.""" def __init__(self, message, details, retry_time): """Initialize new `OverLimitFault` with relevant information.""" hdrs = OverLimitFault._retry_after(retry_time) self.wrapped_exc = webob.exc.HTTPRequestEntityTooLarge(headers=hdrs) self.content = { "overLimitFault": { "code": self.wrapped_exc.status_int, "message": message, "details": details, }, } @staticmethod def _retry_after(retry_time): delay = int(math.ceil(retry_time - time.time())) retry_after = delay if delay > 0 else 0 headers = {'Retry-After': '%s' % retry_after} return headers @webob.dec.wsgify(RequestClass=Request) def __call__(self, request): """Wrap the exception. Wrap the exception with a serialized body conforming to our error format. """ content_type = request.best_match_content_type() serializer = { 'application/json': JSONDictSerializer(), }[content_type] content = serializer.serialize(self.content) self.wrapped_exc.body = content return self.wrapped_exc manila-10.0.0/manila/api/openstack/rest_api_version_history.rst0000664000175000017500000002044513656750227024764 0ustar zuulzuul00000000000000REST API Version History ======================== This documents the changes made to the REST API with every microversion change. The description for each version should be a verbose one which has enough information to be suitable for use in user documentation. 1.0 (Maximum in Kilo) --------------------- The 1.0 Manila API includes all v1 core APIs existing prior to the introduction of microversions. The /v1 URL is used to call 1.0 APIs, and microversions headers sent to this endpoint are ignored. 2.0 --- This is the initial version of the Manila API which supports microversions. The /v2 URL is used to call 2.x APIs. A user can specify a header in the API request:: X-OpenStack-Manila-API-Version: where ```` is any valid api version for this API. If no version is specified then the API will behave as if version 2.0 was requested. The only API change in version 2.0 is versions, i.e. GET http://localhost:8786/, which now returns information about both 1.0 and 2.x versions and their respective /v1 and /v2 endpoints. All other 2.0 APIs are functionally identical to version 1.0. 2.1 --- Share create() method doesn't ignore availability_zone field of provided share. 2.2 --- Snapshots become optional and share payload now has boolean attr 'snapshot_support'. 2.3 --- Share instances admin API and update of Admin Actions extension. 2.4 --- Consistency groups support. /consistency-groups and /cgsnapshots are implemented. AdminActions 'os-force_delete and' 'os-reset_status' have been updated for both new resources. 2.5 --- Share Migration admin API. 2.6 (Maximum in Liberty) ------------------------ Return share_type UUID instead of name in Share API and add share_type_name field. 
2.7 --- Rename old extension-like API URLs to core-API-like. 2.8 --- Allow to set share visibility explicitly using "manage" API. 2.9 --- Add export locations API. Remove export locations from "shares" and "share instances" APIs. 2.10 ---- Field 'access_rules_status' was added to shares and share instances. 2.11 ---- Share Replication support added. All Share replication APIs are tagged 'Experimental'. Share APIs return two new attributes: 'has_replicas' and 'replication_type'. Share instance APIs return a new attribute, 'replica_state'. 2.12 ---- Share snapshot manage and unmanage API. 2.13 ---- Add 'cephx' authentication type for the CephFS Native driver. 2.14 ---- Added attribute 'preferred' to export locations. Drivers may use this field to identify which export locations are most efficient and should be used preferentially by clients. Also, change 'uuid' field to 'id', move timestamps to detail view, and return all non-admin fields to users. 2.15 (Maximum in Mitaka) ------------------------ Added Share migration 'migration_cancel', 'migration_get_progress', 'migration_complete' APIs, renamed 'migrate_share' to 'migration_start' and added notify parameter to 'migration_start'. 2.16 ---- Add user_id in share show/create/manage API. 2.17 ---- Added user_id and project_id in snapshot show/create/manage APIs. 2.18 ---- Add gateway in share network show API. 2.19 ---- Add admin APIs(list/show/detail/reset-status) of snapshot instances. 2.20 ---- Add MTU in share network show API. 2.21 ---- Add access_key in access_list API. 2.22 (Maximum in Newton) ------------------------ Updated migration_start API with 'preserve_metadata', 'writable', 'nondisruptive' and 'new_share_network_id' parameters, renamed 'force_host_copy' to 'force_host_assisted_migration', removed 'notify' parameter and removed previous migrate_share API support. Updated reset_task_state API to accept 'None' value. 2.23 ---- Added share_type to filter results of scheduler-stats/pools API. 2.24 ---- Added optional create_share_from_snapshot_support extra spec. Made snapshot_support extra spec optional. 2.25 ---- Added quota-show detail API. 2.26 ---- Removed nova-net plugin support and removed 'nova_net_id' parameter from share_network API. 2.27 ---- Added share revert to snapshot. This API reverts a share to the specified snapshot. The share is reverted in place, and the snapshot must be the most recent one known to manila. The feature is controlled by a new standard optional extra spec, revert_to_snapshot_support. 2.28 ---- Added transitional states ('queued_to_apply' - was previously 'new', 'queued_to_deny', 'applying' and 'denying') to access rules. 'updating', 'updating_multiple' and 'out_of_sync' are no longer valid values for the 'access_rules_status' field of shares, they have been collapsed into the transitional state 'syncing'. Access rule changes can be made independent of a share's 'access_rules_status'. 2.29 ---- Updated migration_start API adding mandatory parameter 'preserve_snapshots' and changed 'preserve_metadata', 'writable', 'nondisruptive' to be mandatory as well. All previous migration_start APIs prior to this microversion are now unsupported. 2.30 ---- Added cast_rules_to_readonly field to share_instances. 2.31 ---- Convert consistency groups to share groups. 2.32 (Maximum in Ocata) ----------------------- Added mountable snapshots APIs. 2.33 ---- Added created_at and updated_at in access_list API. 2.34 ---- Added 'availability_zone_id' and 'consistent_snapshot_support' fields to 'share_group' object. 
2.35 ---- Added support to retrieve shares filtered by export_location_id and export_location_path. 2.36 ---- Added like filter support in ``shares``, ``snapshots``, ``share-networks``, ``share-groups`` list APIs. 2.37 ---- Added /messages APIs. 2.38 ---- Support IPv6 format validation in allow_access API to enable IPv6. 2.39 ---- Added share-type quotas. 2.40 (Maximum in Pike) ---------------------- Added share group and share group snapshot quotas. 2.41 ---- Added 'description' in share type create/list APIs. 2.42 (Maximum in Queens) ------------------------ Added ``with_count`` in share list API to get total count info. 2.43 ---- Added filter search by extra spec for share type list. 2.44 ---- Added 'ou' field to 'security_service' object. 2.45 ---- Added access metadata for share access and also introduced the GET /share-access-rules API. The prior API to retrieve access rules will not work with API version >=2.45. 2.46 (Maximum in Rocky) ----------------------- Added 'is_default' field to 'share_type' and 'share_group_type' objects. 2.47 ---- Export locations for non-active share replicas are no longer retrievable through the export locations APIs: ``GET /v2/{tenant_id}/shares/{share_id}/export_locations`` and ``GET /v2/{tenant_id}/shares/{share_id}/export_locations/{export_location_id}``. A new API is introduced at this version: ``GET /v2/{tenant_id}/share-replicas/{replica_id}/export-locations`` to allow retrieving export locations of share replicas if available. 2.48 ---- Administrators can now use the common, user-visible extra-spec 'availability_zones' within share types to allow provisioning of shares only within specific availability zones. The extra-spec allows using comma separated names of one or more availability zones. 2.49 (Maximum in Stein) ----------------------- Added Manage/Unmanage Share Server APIs. Updated Manage/Unmanage Shares and Snapshots APIs to work in ``driver_handles_shares_servers`` enabled mode. 2.50 ---- Added update share type API to Share Type APIs. We can update the ``name``, ``description`` and/or ``share_type_access:is_public`` fields of the share type by the update share type API. 2.51 (Maximum in Train) ----------------------- Added to the service the possibility to have multiple subnets per share network, each of them associated to a different AZ. It is also possible to configure a default subnet that spans all availability zones. 2.52 ---- Added 'created_before' and 'created_since' field to list messages api, support querying user messages within the specified time period. 2.53 ---- Added quota control for share replicas and replica gigabytes. 2.54 ---- Share and share instance objects include a new field called "progress" which indicates the completion of a share creation operation as a percentage. 2.55 (Maximum in Ussuri) ------------------------ Share groups feature is no longer considered experimental. manila-10.0.0/manila/api/openstack/api_version_request.py0000664000175000017500000003127613656750227023542 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # Copyright 2015 Clinton Knight # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re import six from manila.api.openstack import versioned_method from manila import exception from manila.i18n import _ from manila import utils # Define the minimum and maximum version of the API across all of the # REST API. The format of the version is: # X.Y where: # # - X will only be changed if a significant backwards incompatible API # change is made which affects the API as whole. That is, something # that is only very very rarely incremented. # # - Y when you make any change to the API. Note that this includes # semantic changes which may not affect the input or output formats or # even originate in the API code layer. We are not distinguishing # between backwards compatible and backwards incompatible changes in # the versioning system. It must be made clear in the documentation as # to what is a backwards compatible change and what is a backwards # incompatible one. # # You must update the API version history string below with a one or # two line description as well as update rest_api_version_history.rst REST_API_VERSION_HISTORY = """ REST API Version History: * 1.0 - Initial version. Includes all V1 APIs and extensions in Kilo. * 2.0 - Versions API updated to reflect beginning of microversions epoch. * 2.1 - Share create() doesn't ignore availability_zone field of share. * 2.2 - Snapshots become optional feature. * 2.3 - Share instances admin API * 2.4 - Consistency Group support * 2.5 - Share Migration admin API * 2.6 - Return share_type UUID instead of name in Share API * 2.7 - Rename old extension-like API URLs to core-API-like * 2.8 - Attr "is_public" can be set for share using API "manage" * 2.9 - Add export locations API * 2.10 - Field 'access_rules_status' was added to shares and share instances. * 2.11 - Share Replication support * 2.12 - Manage/unmanage snapshot API. * 2.13 - Add "cephx" auth type to allow_access * 2.14 - 'Preferred' attribute in export location metadata * 2.15 - Added Share migration 'migration_cancel', 'migration_get_progress', 'migration_complete' APIs, renamed 'migrate_share' to 'migration_start' and added notify parameter to 'migration_start'. * 2.16 - Add user_id in share show/create/manage API. * 2.17 - Added project_id and user_id fields to the JSON response of snapshot show/create/manage API. * 2.18 - Add gateway to the JSON response of share network show API. * 2.19 - Share snapshot instances admin APIs (list/show/detail/reset-status). * 2.20 - Add MTU to the JSON response of share network show API. * 2.21 - Add access_key to the response of access_list API. * 2.22 - Updated migration_start API with 'preserve-metadata', 'writable', 'nondisruptive' and 'new_share_network_id' parameters, renamed 'force_host_copy' to 'force_host_assisted_migration', removed 'notify' parameter and removed previous migrate_share API support. Updated reset_task_state API to accept 'None' value. * 2.23 - Added share_type to filter results of scheduler-stats/pools API. * 2.24 - Added optional create_share_from_snapshot_support extra spec, which was previously inferred from the 'snapshot_support' extra spec. Also made the 'snapshot_support' extra spec optional. 
* 2.25 - Added quota-show detail API. * 2.26 - Removed 'nova_net_id' parameter from share_network API. * 2.27 - Added share revert to snapshot API. * 2.28 - Added transitional states to access rules and replaced all transitional access_rules_status values of shares (share_instances) with 'syncing'. Share action API 'access_allow' now accepts rules even when a share or any of its instances may have an access_rules_status set to 'error'. * 2.29 - Updated migration_start API adding mandatory parameter 'preserve_snapshots' and changed 'preserve_metadata', 'writable', 'nondisruptive' to be mandatory as well. All previous migration_start APIs prior to this microversion are now unsupported. * 2.30 - Added cast_rules_to_readonly field to share_instances. * 2.31 - Convert consistency groups to share groups. * 2.32 - Added mountable snapshots APIs. * 2.33 - Added 'created_at' and 'updated_at' to the response of access_list API. * 2.34 - Added 'availability_zone_id' and 'consistent_snapshot_support' fields to 'share_group' object. * 2.35 - Added support to retrieve shares filtered by export_location_id and export_location_path. * 2.36 - Added like filter support in ``shares``, ``snapshots``, ``share-networks``, ``share-groups`` list APIs. * 2.37 - Added /messages APIs. * 2.38 - Support IPv6 validation in allow_access API to enable IPv6 in manila. * 2.39 - Added share-type quotas. * 2.40 - Added share group and share group snapshot quotas. * 2.41 - Added 'description' in share type create/list APIs. * 2.42 - Added ``with_count`` in share list API to get total count info. * 2.43 - Added filter search by extra spec for share type list. * 2.44 - Added 'ou' field to 'security_service' object. * 2.45 - Added access metadata for share access and also introduced the GET /share-access-rules API. The prior API to retrieve access rules will not work with API version >=2.45. * 2.46 - Added 'is_default' field to 'share_type' and 'share_group_type' objects. * 2.47 - Export locations for non-active share replicas are no longer retrievable through the export locations APIs: GET /v2/{tenant_id}/shares/{share_id}/export_locations and GET /v2/{tenant_id}/shares/{share_id}/export_locations/{ export_location_id}. A new API is introduced at this version: GET /v2/{tenant_id}/share-replicas/{ replica_id}/export-locations to allow retrieving individual replica export locations if available. * 2.48 - Added support for extra-spec "availability_zones" within Share types along with validation in the API. * 2.49 - Added Manage/Unmanage Share Server APIs. Updated Manage/Unmanage Shares and Snapshots APIs to work in ``driver_handles_shares_servers`` enabled mode. * 2.50 - Added update share type API to Share Type APIs. Through this API we can update the ``name``, ``description`` and/or ``share_type_access:is_public`` fields of the share type. * 2.51 - Added Share Network with multiple Subnets. Updated Share Networks to handle with one or more subnets in different availability zones. * 2.52 - Added 'created_before' and 'created_since' field to list messages filters, support querying user messages within the specified time period. * 2.53 - Added quota control to share replicas. * 2.54 - Share and share instance objects include a new field called "progress" which indicates the completion of a share creation operation as a percentage. * 2.55 - Share groups feature is no longer considered experimental. 
""" # The minimum and maximum versions of the API supported # The default api version request is defined to be the # minimum version of the API supported. _MIN_API_VERSION = "2.0" _MAX_API_VERSION = "2.55" DEFAULT_API_VERSION = _MIN_API_VERSION # NOTE(cyeoh): min and max versions declared as functions so we can # mock them for unittests. Do not use the constants directly anywhere # else. def min_api_version(): return APIVersionRequest(_MIN_API_VERSION) def max_api_version(): return APIVersionRequest(_MAX_API_VERSION) class APIVersionRequest(utils.ComparableMixin): """This class represents an API Version Request. This class includes convenience methods for manipulation and comparison of version numbers as needed to implement API microversions. """ def __init__(self, version_string=None, experimental=False): """Create an API version request object.""" self._ver_major = None self._ver_minor = None self._experimental = experimental if version_string is not None: match = re.match(r"^([1-9]\d*)\.([1-9]\d*|0)$", version_string) if match: self._ver_major = int(match.group(1)) self._ver_minor = int(match.group(2)) else: raise exception.InvalidAPIVersionString(version=version_string) def __str__(self): """Debug/Logging representation of object.""" params = { 'major': self._ver_major, 'minor': self._ver_minor, 'experimental': self._experimental, } return ("API Version Request Major: %(major)s, Minor: %(minor)s, " "Experimental: %(experimental)s" % params) def is_null(self): return self._ver_major is None and self._ver_minor is None def _cmpkey(self): """Return the value used by ComparableMixin for rich comparisons.""" return self._ver_major, self._ver_minor @property def experimental(self): return self._experimental @experimental.setter def experimental(self, value): if type(value) != bool: msg = _('The experimental property must be a bool value.') raise exception.InvalidParameterValue(err=msg) self._experimental = value def matches_versioned_method(self, method): """Compares this version to that of a versioned method.""" if type(method) != versioned_method.VersionedMethod: msg = _('An API version request must be compared ' 'to a VersionedMethod object.') raise exception.InvalidParameterValue(err=msg) return self.matches(method.start_version, method.end_version, method.experimental) def matches(self, min_version, max_version, experimental=False): """Compares this version to the specified min/max range. Returns whether the version object represents a version greater than or equal to the minimum version and less than or equal to the maximum version. If min_version is null then there is no minimum limit. If max_version is null then there is no maximum limit. If self is null then raise ValueError. :param min_version: Minimum acceptable version. :param max_version: Maximum acceptable version. :param experimental: Whether to match experimental APIs. :returns: boolean """ if self.is_null(): raise ValueError # NOTE(cknight): An experimental request should still match a # non-experimental API, so the experimental check isn't just # looking for equality. 
if not self.experimental and experimental: return False if isinstance(min_version, six.string_types): min_version = APIVersionRequest(version_string=min_version) if isinstance(max_version, six.string_types): max_version = APIVersionRequest(version_string=max_version) if not (min_version or max_version): return True elif (min_version and max_version and max_version.is_null() and min_version.is_null()): return True elif not max_version or max_version.is_null(): return min_version <= self elif not min_version or min_version.is_null(): return self <= max_version else: return min_version <= self <= max_version def get_string(self): """Returns a string representation of this object. If this method is used to create an APIVersionRequest, the resulting object will be an equivalent request. """ if self.is_null(): raise ValueError return ("%(major)s.%(minor)s" % {'major': self._ver_major, 'minor': self._ver_minor}) manila-10.0.0/manila/compute/0000775000175000017500000000000013656750362016005 5ustar zuulzuul00000000000000manila-10.0.0/manila/compute/__init__.py0000664000175000017500000000217613656750227020124 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import oslo_config.cfg import oslo_utils.importutils _compute_opts = [ oslo_config.cfg.StrOpt('compute_api_class', default='manila.compute.nova.API', help='The full class name of the ' 'Compute API class to use.'), ] oslo_config.cfg.CONF.register_opts(_compute_opts) def API(): importutils = oslo_utils.importutils compute_api_class = oslo_config.cfg.CONF.compute_api_class cls = importutils.import_class(compute_api_class) return cls() manila-10.0.0/manila/compute/nova.py0000664000175000017500000002462213656750227017330 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Handles all requests to Nova. 
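A minimal usage sketch (the admin context and instance id are assumed to
be supplied by the caller; most code obtains a client through the
``manila.compute.API()`` factory shown above rather than importing this
module directly)::

    from manila import compute

    compute_api = compute.API()
    server = compute_api.server_get(admin_context, instance_id)
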
""" from keystoneauth1 import loading as ks_loading from novaclient import client as nova_client from novaclient import exceptions as nova_exception from novaclient import utils from oslo_config import cfg import six from manila.common import client_auth from manila.common.config import core_opts from manila.db import base from manila import exception from manila.i18n import _ NOVA_GROUP = 'nova' AUTH_OBJ = None nova_opts = [ cfg.StrOpt('api_microversion', default='2.10', deprecated_group="DEFAULT", deprecated_name="nova_api_microversion", help='Version of Nova API to be used.'), cfg.StrOpt('endpoint_type', default='publicURL', help='Endpoint type to be used with nova client calls.'), cfg.StrOpt('region_name', help='Region name for connecting to nova.'), ] # These fallback options can be removed in/after 9.0.0 (Train) deprecated_opts = { 'cafile': [ cfg.DeprecatedOpt('ca_certificates_file', group="DEFAULT"), cfg.DeprecatedOpt('ca_certificates_file', group=NOVA_GROUP), cfg.DeprecatedOpt('nova_ca_certificates_file', group="DEFAULT"), cfg.DeprecatedOpt('nova_ca_certificates_file', group=NOVA_GROUP), ], 'insecure': [ cfg.DeprecatedOpt('api_insecure', group="DEFAULT"), cfg.DeprecatedOpt('api_insecure', group=NOVA_GROUP), cfg.DeprecatedOpt('nova_api_insecure', group="DEFAULT"), cfg.DeprecatedOpt('nova_api_insecure', group=NOVA_GROUP), ], } CONF = cfg.CONF CONF.register_opts(core_opts) CONF.register_opts(nova_opts, NOVA_GROUP) ks_loading.register_session_conf_options(CONF, NOVA_GROUP, deprecated_opts=deprecated_opts) ks_loading.register_auth_conf_options(CONF, NOVA_GROUP) def list_opts(): return client_auth.AuthClientLoader.list_opts(NOVA_GROUP) def novaclient(context): global AUTH_OBJ if not AUTH_OBJ: AUTH_OBJ = client_auth.AuthClientLoader( client_class=nova_client.Client, exception_module=nova_exception, cfg_group=NOVA_GROUP) return AUTH_OBJ.get_client(context, version=CONF[NOVA_GROUP].api_microversion, endpoint_type=CONF[NOVA_GROUP].endpoint_type, region_name=CONF[NOVA_GROUP].region_name) def _untranslate_server_summary_view(server): """Maps keys for servers summary view.""" d = {} d['id'] = server.id d['status'] = server.status d['flavor'] = server.flavor['id'] d['name'] = server.name d['image'] = server.image['id'] d['created'] = server.created d['addresses'] = server.addresses d['networks'] = server.networks d['tenant_id'] = server.tenant_id d['user_id'] = server.user_id d['security_groups'] = getattr(server, 'security_groups', []) return d def _to_dict(obj): if isinstance(obj, dict): return obj elif hasattr(obj, 'to_dict'): return obj.to_dict() else: return obj.__dict__ def translate_server_exception(method): """Transforms the exception for the instance. Note: keeps its traceback intact. 
""" @six.wraps(method) def wrapper(self, ctx, instance_id, *args, **kwargs): try: res = method(self, ctx, instance_id, *args, **kwargs) return res except nova_exception.ClientException as e: if isinstance(e, nova_exception.NotFound): raise exception.InstanceNotFound(instance_id=instance_id) elif isinstance(e, nova_exception.BadRequest): raise exception.InvalidInput(reason=six.text_type(e)) else: raise exception.ManilaException(e) return wrapper class API(base.Base): """API for interacting with novaclient.""" def server_create(self, context, name, image, flavor, key_name=None, user_data=None, security_groups=None, block_device_mapping=None, block_device_mapping_v2=None, nics=None, availability_zone=None, instance_count=1, admin_pass=None, meta=None): return _untranslate_server_summary_view( novaclient(context).servers.create( name, image, flavor, userdata=user_data, security_groups=security_groups, key_name=key_name, block_device_mapping=block_device_mapping, block_device_mapping_v2=block_device_mapping_v2, nics=nics, availability_zone=availability_zone, min_count=instance_count, admin_pass=admin_pass, meta=meta) ) def server_delete(self, context, instance): novaclient(context).servers.delete(instance) @translate_server_exception def server_get(self, context, instance_id): return _untranslate_server_summary_view( novaclient(context).servers.get(instance_id) ) def server_get_by_name_or_id(self, context, instance_name_or_id): try: server = utils.find_resource( novaclient(context).servers, instance_name_or_id) except nova_exception.CommandError: # we did not find the server in the current tenant, # and proceed searching in all tenants try: server = utils.find_resource( novaclient(context).servers, instance_name_or_id, all_tenants=True) except nova_exception.CommandError as e: msg = _("Failed to get Nova VM. 
%s") % e raise exception.ManilaException(msg) return _untranslate_server_summary_view(server) @translate_server_exception def server_pause(self, context, instance_id): novaclient(context).servers.pause(instance_id) @translate_server_exception def server_unpause(self, context, instance_id): novaclient(context).servers.unpause(instance_id) @translate_server_exception def server_suspend(self, context, instance_id): novaclient(context).servers.suspend(instance_id) @translate_server_exception def server_resume(self, context, instance_id): novaclient(context).servers.resume(instance_id) @translate_server_exception def server_reboot(self, context, instance_id, soft_reboot=False): hardness = 'SOFT' if soft_reboot else 'HARD' novaclient(context).servers.reboot(instance_id, hardness) @translate_server_exception def server_rebuild(self, context, instance_id, image_id, password=None): return _untranslate_server_summary_view( novaclient(context).servers.rebuild(instance_id, image_id, password) ) @translate_server_exception def instance_volume_attach(self, context, instance_id, volume_id, device=None): if device == 'auto': device = None return novaclient(context).volumes.create_server_volume(instance_id, volume_id, device) @translate_server_exception def instance_volume_detach(self, context, instance_id, att_id): return novaclient(context).volumes.delete_server_volume(instance_id, att_id) @translate_server_exception def instance_volumes_list(self, context, instance_id): from manila.volume import cinder volumes = novaclient(context).volumes.get_server_volumes(instance_id) for volume in volumes: volume_data = cinder.cinderclient(context).volumes.get(volume.id) volume.name = volume_data.name return volumes @translate_server_exception def server_update(self, context, instance_id, name): return _untranslate_server_summary_view( novaclient(context).servers.update(instance_id, name=name) ) def update_server_volume(self, context, instance_id, volume_id, new_volume_id): novaclient(context).volumes.update_server_volume(instance_id, volume_id, new_volume_id) def keypair_create(self, context, name): return novaclient(context).keypairs.create(name) def keypair_import(self, context, name, public_key): return novaclient(context).keypairs.create(name, public_key) def keypair_delete(self, context, keypair_id): novaclient(context).keypairs.delete(keypair_id) def keypair_list(self, context): return novaclient(context).keypairs.list() def image_list(self, context): client = novaclient(context) if hasattr(client, 'images'): # Old novaclient with 'images' API proxy return client.images.list() # New novaclient without 'images' API proxy return client.glance.list() def image_get(self, context, name): client = novaclient(context) try: # New novaclient without 'images' API proxy return client.glance.find_image(name) except nova_exception.NotFound: raise exception.ServiceInstanceException( _("Image with name '%s' was not found.") % name) except nova_exception.NoUniqueMatch: raise exception.ServiceInstanceException( _("Found more than one image by name '%s'.") % name) def add_security_group_to_server(self, context, server, security_group): return novaclient(context).servers.add_security_group(server, security_group) manila-10.0.0/manila/i18n.py0000664000175000017500000000202113656750227015455 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """oslo.i18n integration module. See https://docs.openstack.org/oslo.i18n/latest/user/usage.html . """ import oslo_i18n DOMAIN = 'manila' _translators = oslo_i18n.TranslatorFactory(domain=DOMAIN) # The primary translation function using the well-known name "_" _ = _translators.primary def translate(value, user_locale): return oslo_i18n.translate(value, user_locale) def get_available_languages(): return oslo_i18n.get_available_languages(DOMAIN) manila-10.0.0/manila/service.py0000664000175000017500000003423613656750227016353 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Generic Node base class for all workers that run on hosts.""" import inspect import os import random from oslo_config import cfg from oslo_log import log import oslo_messaging as messaging from oslo_service import loopingcall from oslo_service import service from oslo_service import wsgi from oslo_utils import importutils from manila import context from manila import coordination from manila import db from manila import exception from manila import rpc from manila import version LOG = log.getLogger(__name__) service_opts = [ cfg.IntOpt('report_interval', default=10, help='Seconds between nodes reporting state to datastore.'), cfg.IntOpt('periodic_interval', default=60, help='Seconds between running periodic tasks.'), cfg.IntOpt('periodic_fuzzy_delay', default=60, help='Range of seconds to randomly delay when starting the ' 'periodic task scheduler to reduce stampeding. ' '(Disable by setting to 0)'), cfg.HostAddressOpt('osapi_share_listen', default="::", help='IP address for OpenStack Share API to listen ' 'on.'), cfg.PortOpt('osapi_share_listen_port', default=8786, help='Port for OpenStack Share API to listen on.'), cfg.IntOpt('osapi_share_workers', default=1, help='Number of workers for OpenStack Share API service.'), cfg.BoolOpt('osapi_share_use_ssl', default=False, help='Wraps the socket in a SSL context if True is set. ' 'A certificate file and key file must be specified.'), ] CONF = cfg.CONF CONF.register_opts(service_opts) class Service(service.Service): """Service object for binaries running on hosts. A service takes a manager and enables rpc by listening to queues based on topic. It also periodically runs tasks on the manager and reports it state to the database services table. 
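A minimal launch sketch, as it might appear in a console script (binary
name hypothetical; the matching ``*_manager`` class is resolved from
configuration by ``create()``)::

    server = service.Service.create(binary='manila-share',
                                    coordination=True)
    service.serve(server)
    service.wait()
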
""" def __init__(self, host, binary, topic, manager, report_interval=None, periodic_interval=None, periodic_fuzzy_delay=None, service_name=None, coordination=False, *args, **kwargs): super(Service, self).__init__() if not rpc.initialized(): rpc.init(CONF) self.host = host self.binary = binary self.topic = topic self.manager_class_name = manager manager_class = importutils.import_class(self.manager_class_name) self.manager = manager_class(host=self.host, service_name=service_name, *args, **kwargs) self.availability_zone = self.manager.availability_zone self.report_interval = report_interval self.periodic_interval = periodic_interval self.periodic_fuzzy_delay = periodic_fuzzy_delay self.saved_args, self.saved_kwargs = args, kwargs self.timers = [] self.coordinator = coordination def start(self): version_string = version.version_string() LOG.info('Starting %(topic)s node (version %(version_string)s)', {'topic': self.topic, 'version_string': version_string}) self.model_disconnected = False ctxt = context.get_admin_context() if self.coordinator: coordination.LOCK_COORDINATOR.start() try: service_ref = db.service_get_by_args(ctxt, self.host, self.binary) self.service_id = service_ref['id'] except exception.NotFound: self._create_service_ref(ctxt) LOG.debug("Creating RPC server for service %s.", self.topic) target = messaging.Target(topic=self.topic, server=self.host) endpoints = [self.manager] endpoints.extend(self.manager.additional_endpoints) self.rpcserver = rpc.get_server(target, endpoints) self.rpcserver.start() self.manager.init_host() if self.report_interval: pulse = loopingcall.FixedIntervalLoopingCall(self.report_state) pulse.start(interval=self.report_interval, initial_delay=self.report_interval) self.timers.append(pulse) if self.periodic_interval: if self.periodic_fuzzy_delay: initial_delay = random.randint(0, self.periodic_fuzzy_delay) else: initial_delay = None periodic = loopingcall.FixedIntervalLoopingCall( self.periodic_tasks) periodic.start(interval=self.periodic_interval, initial_delay=initial_delay) self.timers.append(periodic) def _create_service_ref(self, context): service_args = { 'host': self.host, 'binary': self.binary, 'topic': self.topic, 'report_count': 0, 'availability_zone': self.availability_zone } service_ref = db.service_create(context, service_args) self.service_id = service_ref['id'] def __getattr__(self, key): manager = self.__dict__.get('manager', None) return getattr(manager, key) @classmethod def create(cls, host=None, binary=None, topic=None, manager=None, report_interval=None, periodic_interval=None, periodic_fuzzy_delay=None, service_name=None, coordination=False): """Instantiates class and passes back application object. 
:param host: defaults to CONF.host :param binary: defaults to basename of executable :param topic: defaults to bin_name - 'manila-' part :param manager: defaults to CONF._manager :param report_interval: defaults to CONF.report_interval :param periodic_interval: defaults to CONF.periodic_interval :param periodic_fuzzy_delay: defaults to CONF.periodic_fuzzy_delay """ if not host: host = CONF.host if not binary: binary = os.path.basename(inspect.stack()[-1][1]) if not topic: topic = binary if not manager: subtopic = topic.rpartition('manila-')[2] manager = CONF.get('%s_manager' % subtopic, None) if report_interval is None: report_interval = CONF.report_interval if periodic_interval is None: periodic_interval = CONF.periodic_interval if periodic_fuzzy_delay is None: periodic_fuzzy_delay = CONF.periodic_fuzzy_delay service_obj = cls(host, binary, topic, manager, report_interval=report_interval, periodic_interval=periodic_interval, periodic_fuzzy_delay=periodic_fuzzy_delay, service_name=service_name, coordination=coordination) return service_obj def kill(self): """Destroy the service object in the datastore.""" self.stop() try: db.service_destroy(context.get_admin_context(), self.service_id) except exception.NotFound: LOG.warning('Service killed that has no database entry.') def stop(self): # Try to shut the connection down, but if we get any sort of # errors, go ahead and ignore them.. as we're shutting down anyway try: self.rpcserver.stop() except Exception: pass for x in self.timers: try: x.stop() except Exception: pass if self.coordinator: try: coordination.LOCK_COORDINATOR.stop() except Exception: LOG.exception("Unable to stop the Tooz Locking " "Coordinator.") self.timers = [] super(Service, self).stop() def wait(self): for x in self.timers: try: x.wait() except Exception: pass def periodic_tasks(self, raise_on_error=False): """Tasks to be run at a periodic interval.""" ctxt = context.get_admin_context() self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error) def report_state(self): """Update the state of this service in the datastore.""" if not self.manager.is_service_ready(): # NOTE(haixin): If the service is still initializing or failed to # intialize. LOG.error('Manager for service %s is not ready yet, skipping state' ' update routine. Service will appear "down".', self.binary) return ctxt = context.get_admin_context() state_catalog = {} try: try: service_ref = db.service_get(ctxt, self.service_id) except exception.NotFound: LOG.debug('The service database object disappeared, ' 'Recreating it.') self._create_service_ref(ctxt) service_ref = db.service_get(ctxt, self.service_id) state_catalog['report_count'] = service_ref['report_count'] + 1 if (self.availability_zone != service_ref['availability_zone']['name']): state_catalog['availability_zone'] = self.availability_zone db.service_update(ctxt, self.service_id, state_catalog) # TODO(termie): make this pattern be more elegant. if getattr(self, 'model_disconnected', False): self.model_disconnected = False LOG.error('Recovered model server connection!') # TODO(vish): this should probably only catch connection errors except Exception: # pylint: disable=W0702 if not getattr(self, 'model_disconnected', False): self.model_disconnected = True LOG.exception('model server went away') class WSGIService(service.ServiceBase): """Provides ability to launch API from a 'paste' configuration.""" def __init__(self, name, loader=None): """Initialize, but do not start the WSGI server. :param name: The name of the WSGI server given to the loader. 
:param loader: Loads the WSGI application using the given name. :returns: None """ self.name = name self.manager = self._get_manager() self.loader = loader or wsgi.Loader(CONF) if not rpc.initialized(): rpc.init(CONF) self.app = self.loader.load_app(name) self.host = getattr(CONF, '%s_listen' % name, "0.0.0.0") self.port = getattr(CONF, '%s_listen_port' % name, 0) self.workers = getattr(CONF, '%s_workers' % name, None) self.use_ssl = getattr(CONF, '%s_use_ssl' % name, False) if self.workers is not None and self.workers < 1: LOG.warning( "Value of config option %(name)s_workers must be integer " "greater than 1. Input value ignored.", {'name': name}) # Reset workers to default self.workers = None self.server = wsgi.Server( CONF, name, self.app, host=self.host, port=self.port, use_ssl=self.use_ssl ) def _get_manager(self): """Initialize a Manager object appropriate for this service. Use the service name to look up a Manager subclass from the configuration and initialize an instance. If no class name is configured, just return None. :returns: a Manager instance, or None. """ fl = '%s_manager' % self.name if fl not in CONF: return None manager_class_name = CONF.get(fl, None) if not manager_class_name: return None manager_class = importutils.import_class(manager_class_name) return manager_class() def start(self): """Start serving this service using loaded configuration. Also, retrieve updated port number in case '0' was passed in, which indicates a random port should be used. :returns: None """ if self.manager: self.manager.init_host() self.server.start() self.port = self.server.port def stop(self): """Stop serving this API. :returns: None """ self.server.stop() def wait(self): """Wait for the service to stop serving this API. :returns: None """ self.server.wait() def reset(self): """Reset server greenpool size to default. :returns: None """ self.server.reset() def process_launcher(): return service.ProcessLauncher(CONF, restart_method='mutate') # NOTE(vish): the global launcher is to maintain the existing # functionality of calling service.serve + # service.wait _launcher = None def serve(server, workers=None): global _launcher if _launcher: raise RuntimeError('serve() can only be called once') _launcher = service.launch(CONF, server, workers=workers, restart_method='mutate') def wait(): CONF.log_opt_values(LOG, log.DEBUG) try: _launcher.wait() except KeyboardInterrupt: _launcher.stop() rpc.cleanup() manila-10.0.0/manila/volume/0000775000175000017500000000000013656750362015640 5ustar zuulzuul00000000000000manila-10.0.0/manila/volume/__init__.py0000664000175000017500000000221513656750227017751 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
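# Illustrative use of this plug-point (assumes an admin RequestContext and
# an existing Cinder volume id; the configured ``volume_api_class`` is
# loaded lazily by the API() factory below):
#
#     from manila import volume
#
#     volume_api = volume.API()
#     vol = volume_api.get(admin_context, volume_id)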
import oslo_config.cfg import oslo_utils.importutils _volume_opts = [ oslo_config.cfg.StrOpt('volume_api_class', default='manila.volume.cinder.API', help='The full class name of the ' 'Volume API class to use.'), ] oslo_config.cfg.CONF.register_opts(_volume_opts) def API(): importutils = oslo_utils.importutils volume_api_class = oslo_config.cfg.CONF.volume_api_class cls = importutils.import_class(volume_api_class) return cls() manila-10.0.0/manila/volume/cinder.py0000664000175000017500000003117313656750227017463 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Handles all requests relating to volumes + cinder. """ import copy from cinderclient import exceptions as cinder_exception from cinderclient.v3 import client as cinder_client from keystoneauth1 import loading as ks_loading from oslo_config import cfg import six from manila.common import client_auth from manila.common.config import core_opts import manila.context as ctxt from manila.db import base from manila import exception from manila.i18n import _ CINDER_GROUP = 'cinder' AUTH_OBJ = None cinder_opts = [ cfg.BoolOpt('cross_az_attach', default=True, deprecated_group="DEFAULT", deprecated_name="cinder_cross_az_attach", help='Allow attaching between instances and volumes in ' 'different availability zones.'), cfg.IntOpt('http_retries', default=3, help='Number of cinderclient retries on failed HTTP calls.', deprecated_group='DEFAULT', deprecated_name="cinder_http_retries"), cfg.StrOpt('endpoint_type', default='publicURL', help='Endpoint type to be used with cinder client calls.'), cfg.StrOpt('region_name', help='Region name for connecting to cinder.'), ] # These fallback options can be removed in/after 9.0.0 (Train) deprecated_opts = { 'cafile': [ cfg.DeprecatedOpt('ca_certificates_file', group="DEFAULT"), cfg.DeprecatedOpt('ca_certificates_file', group=CINDER_GROUP), cfg.DeprecatedOpt('cinder_ca_certificates_file', group="DEFAULT"), cfg.DeprecatedOpt('cinder_ca_certificates_file', group=CINDER_GROUP), ], 'insecure': [ cfg.DeprecatedOpt('api_insecure', group="DEFAULT"), cfg.DeprecatedOpt('api_insecure', group=CINDER_GROUP), cfg.DeprecatedOpt('cinder_api_insecure', group="DEFAULT"), cfg.DeprecatedOpt('cinder_api_insecure', group=CINDER_GROUP), ], } CONF = cfg.CONF CONF.register_opts(core_opts) CONF.register_opts(cinder_opts, CINDER_GROUP) ks_loading.register_session_conf_options(CONF, CINDER_GROUP, deprecated_opts=deprecated_opts) ks_loading.register_auth_conf_options(CONF, CINDER_GROUP) def list_opts(): return client_auth.AuthClientLoader.list_opts(CINDER_GROUP) def cinderclient(context): global AUTH_OBJ if not AUTH_OBJ: AUTH_OBJ = client_auth.AuthClientLoader( client_class=cinder_client.Client, exception_module=cinder_exception, cfg_group=CINDER_GROUP) return AUTH_OBJ.get_client(context, retries=CONF[CINDER_GROUP].http_retries, endpoint_type=CONF[CINDER_GROUP].endpoint_type, region_name=CONF[CINDER_GROUP].region_name) def _untranslate_volume_summary_view(context, 
vol): """Maps keys for volumes summary view.""" d = {} d['id'] = vol.id d['status'] = vol.status d['size'] = vol.size d['availability_zone'] = vol.availability_zone d['created_at'] = vol.created_at d['attach_time'] = "" d['mountpoint'] = "" if vol.attachments: att = vol.attachments[0] d['attach_status'] = 'attached' d['instance_uuid'] = att['server_id'] d['mountpoint'] = att['device'] else: d['attach_status'] = 'detached' d['name'] = vol.name d['description'] = vol.description d['volume_type_id'] = vol.volume_type d['snapshot_id'] = vol.snapshot_id d['volume_metadata'] = {} for key, value in vol.metadata.items(): d['volume_metadata'][key] = value if hasattr(vol, 'volume_image_metadata'): d['volume_image_metadata'] = copy.deepcopy(vol.volume_image_metadata) return d def _untranslate_snapshot_summary_view(context, snapshot): """Maps keys for snapshots summary view.""" d = {} d['id'] = snapshot.id d['status'] = snapshot.status d['progress'] = snapshot.progress d['size'] = snapshot.size d['created_at'] = snapshot.created_at d['name'] = snapshot.name d['description'] = snapshot.description d['volume_id'] = snapshot.volume_id d['project_id'] = snapshot.project_id d['volume_size'] = snapshot.size return d def translate_volume_exception(method): """Transforms the exception for the volume, keeps its traceback intact.""" def wrapper(self, ctx, volume_id, *args, **kwargs): try: res = method(self, ctx, volume_id, *args, **kwargs) except cinder_exception.ClientException as e: if isinstance(e, cinder_exception.NotFound): raise exception.VolumeNotFound(volume_id=volume_id) elif isinstance(e, cinder_exception.BadRequest): raise exception.InvalidInput(reason=six.text_type(e)) return res return wrapper def translate_snapshot_exception(method): """Transforms the exception for the snapshot. Note: Keeps its traceback intact. 
""" def wrapper(self, ctx, snapshot_id, *args, **kwargs): try: res = method(self, ctx, snapshot_id, *args, **kwargs) except cinder_exception.ClientException as e: if isinstance(e, cinder_exception.NotFound): raise exception.VolumeSnapshotNotFound(snapshot_id=snapshot_id) return res return wrapper class API(base.Base): """API for interacting with the volume manager.""" @translate_volume_exception def get(self, context, volume_id): item = cinderclient(context).volumes.get(volume_id) return _untranslate_volume_summary_view(context, item) def get_all(self, context, search_opts={}): items = cinderclient(context).volumes.list(detailed=True, search_opts=search_opts) rval = [] for item in items: rval.append(_untranslate_volume_summary_view(context, item)) return rval def check_attached(self, context, volume): """Raise exception if volume in use.""" if volume['status'] != "in-use": msg = _("status must be 'in-use'") raise exception.InvalidVolume(msg) def check_attach(self, context, volume, instance=None): if volume['status'] != "available": msg = _("status must be 'available'") raise exception.InvalidVolume(msg) if volume['attach_status'] == "attached": msg = _("already attached") raise exception.InvalidVolume(msg) if instance and not CONF[CINDER_GROUP].cross_az_attach: if instance['availability_zone'] != volume['availability_zone']: msg = _("Instance and volume not in same availability_zone") raise exception.InvalidVolume(msg) def check_detach(self, context, volume): if volume['status'] == "available": msg = _("already detached") raise exception.InvalidVolume(msg) @translate_volume_exception def reserve_volume(self, context, volume_id): cinderclient(context).volumes.reserve(volume_id) @translate_volume_exception def unreserve_volume(self, context, volume_id): cinderclient(context).volumes.unreserve(volume_id) @translate_volume_exception def begin_detaching(self, context, volume_id): cinderclient(context).volumes.begin_detaching(volume_id) @translate_volume_exception def roll_detaching(self, context, volume_id): cinderclient(context).volumes.roll_detaching(volume_id) @translate_volume_exception def attach(self, context, volume_id, instance_uuid, mountpoint): cinderclient(context).volumes.attach(volume_id, instance_uuid, mountpoint) @translate_volume_exception def detach(self, context, volume_id): cinderclient(context).volumes.detach(volume_id) @translate_volume_exception def initialize_connection(self, context, volume_id, connector): return cinderclient(context).volumes.initialize_connection(volume_id, connector) @translate_volume_exception def terminate_connection(self, context, volume_id, connector): return cinderclient(context).volumes.terminate_connection(volume_id, connector) def create(self, context, size, name, description, snapshot=None, image_id=None, volume_type=None, metadata=None, availability_zone=None): if snapshot is not None: snapshot_id = snapshot['id'] else: snapshot_id = None kwargs = dict(snapshot_id=snapshot_id, name=name, description=description, volume_type=volume_type, user_id=context.user_id, project_id=context.project_id, availability_zone=availability_zone, metadata=metadata, imageRef=image_id) try: item = cinderclient(context).volumes.create(size, **kwargs) return _untranslate_volume_summary_view(context, item) except cinder_exception.BadRequest as e: raise exception.InvalidInput(reason=six.text_type(e)) except cinder_exception.NotFound: raise exception.NotFound( _("Error in creating cinder " "volume. Cinder volume type %s not exist. 
Check parameter " "cinder_volume_type in configuration file.") % volume_type) except Exception as e: raise exception.ManilaException(e) @translate_volume_exception def extend(self, context, volume_id, new_size): cinderclient(context).volumes.extend(volume_id, new_size) @translate_volume_exception def delete(self, context, volume_id): cinderclient(context).volumes.delete(volume_id) @translate_volume_exception def update(self, context, volume_id, fields): # Use Manila's context as far as Cinder's is restricted to update # volumes. manila_admin_context = ctxt.get_admin_context() client = cinderclient(manila_admin_context) item = client.volumes.get(volume_id) client.volumes.update(item, **fields) def get_volume_encryption_metadata(self, context, volume_id): return cinderclient(context).volumes.get_encryption_metadata(volume_id) @translate_snapshot_exception def get_snapshot(self, context, snapshot_id): item = cinderclient(context).volume_snapshots.get(snapshot_id) return _untranslate_snapshot_summary_view(context, item) def get_all_snapshots(self, context, search_opts=None): items = cinderclient(context).volume_snapshots.list( detailed=True, search_opts=search_opts) rvals = [] for item in items: rvals.append(_untranslate_snapshot_summary_view(context, item)) return rvals @translate_volume_exception def create_snapshot(self, context, volume_id, name, description): item = cinderclient(context).volume_snapshots.create(volume_id, False, name, description) return _untranslate_snapshot_summary_view(context, item) @translate_volume_exception def create_snapshot_force(self, context, volume_id, name, description): item = cinderclient(context).volume_snapshots.create(volume_id, True, name, description) return _untranslate_snapshot_summary_view(context, item) @translate_snapshot_exception def delete_snapshot(self, context, snapshot_id): cinderclient(context).volume_snapshots.delete(snapshot_id) manila-10.0.0/manila/test.py0000664000175000017500000003352713656750227015674 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base classes for our unit tests. Allows overriding of flags for use of fakes, and some black magic for inline callbacks. 
""" import os import shutil from unittest import mock import fixtures from oslo_concurrency import lockutils from oslo_config import cfg from oslo_config import fixture as config_fixture import oslo_messaging from oslo_messaging import conffixture as messaging_conffixture from oslo_utils import uuidutils import oslotest.base as base_test from manila.api.openstack import api_version_request as api_version from manila import coordination from manila.db import migration from manila.db.sqlalchemy import api as db_api from manila.db.sqlalchemy import models as db_models from manila import rpc from manila import service from manila.tests import conf_fixture from manila.tests import fake_notifier test_opts = [ cfg.StrOpt('sqlite_clean_db', default='clean.sqlite', help='File name of clean sqlite database.'), cfg.StrOpt('sqlite_db', default='manila.sqlite', help='The filename to use with sqlite.'), ] CONF = cfg.CONF CONF.register_opts(test_opts) _DB_CACHE = None class Database(fixtures.Fixture): def __init__(self, db_session, db_migrate, sql_connection, sqlite_db, sqlite_clean_db): self.sql_connection = sql_connection self.sqlite_db = sqlite_db self.sqlite_clean_db = sqlite_clean_db self.engine = db_session.get_engine() self.engine.dispose() conn = self.engine.connect() if sql_connection == "sqlite://": self.setup_sqlite(db_migrate) else: testdb = os.path.join(CONF.state_path, sqlite_db) db_migrate.upgrade('head') if os.path.exists(testdb): return if sql_connection == "sqlite://": conn = self.engine.connect() self._DB = "".join(line for line in conn.connection.iterdump()) self.engine.dispose() else: cleandb = os.path.join(CONF.state_path, sqlite_clean_db) shutil.copyfile(testdb, cleandb) def setUp(self): super(Database, self).setUp() if self.sql_connection == "sqlite://": conn = self.engine.connect() conn.connection.executescript(self._DB) self.addCleanup(self.engine.dispose) # pylint: disable=no-member else: shutil.copyfile( os.path.join(CONF.state_path, self.sqlite_clean_db), os.path.join(CONF.state_path, self.sqlite_db), ) def setup_sqlite(self, db_migrate): if db_migrate.version(): return db_models.BASE.metadata.create_all(self.engine) db_migrate.stamp('head') class TestCase(base_test.BaseTestCase): """Test case base class for all unit tests.""" def setUp(self): """Run before each test method to initialize test environment.""" super(TestCase, self).setUp() conf_fixture.set_defaults(CONF) CONF([], default_config_files=[]) global _DB_CACHE if not _DB_CACHE: _DB_CACHE = Database( db_api, migration, sql_connection=CONF.database.connection, sqlite_db=CONF.sqlite_db, sqlite_clean_db=CONF.sqlite_clean_db, ) self.useFixture(_DB_CACHE) self.injected = [] self._services = [] self.flags(fatal_exception_format_errors=True) # This will be cleaned up by the NestedTempfile fixture lock_path = self.useFixture(fixtures.TempDir()).path self.fixture = self.useFixture(config_fixture.Config(lockutils.CONF)) self.fixture.config(lock_path=lock_path, group='oslo_concurrency') self.fixture.config( disable_process_locking=True, group='oslo_concurrency') rpc.add_extra_exmods('manila.tests') self.addCleanup(rpc.clear_extra_exmods) self.addCleanup(rpc.cleanup) self.messaging_conf = messaging_conffixture.ConfFixture(CONF) self.messaging_conf.transport_url = 'fake:/' self.messaging_conf.response_timeout = 15 self.useFixture(self.messaging_conf) oslo_messaging.get_notification_transport(CONF) self.override_config('driver', ['test'], group='oslo_messaging_notifications') rpc.init(CONF) 
mock.patch('keystoneauth1.loading.load_auth_from_conf_options').start() fake_notifier.stub_notifier(self) # Locks must be cleaned up after tests CONF.set_override('backend_url', 'file://' + lock_path, group='coordination') coordination.LOCK_COORDINATOR.start() self.addCleanup(coordination.LOCK_COORDINATOR.stop) def tearDown(self): """Runs after each test method to tear down test environment.""" super(TestCase, self).tearDown() # Reset any overridden flags CONF.reset() # Stop any timers for x in self.injected: try: x.stop() except AssertionError: pass # Kill any services for x in self._services: try: x.kill() except Exception: pass # Delete attributes that don't start with _ so they don't pin # memory around unnecessarily for the duration of the test # suite for key in [k for k in self.__dict__.keys() if k[0] != '_']: del self.__dict__[key] def flags(self, **kw): """Override flag variables for a test.""" for k, v in kw.items(): CONF.set_override(k, v) def start_service(self, name, host=None, **kwargs): host = host and host or uuidutils.generate_uuid() kwargs.setdefault('host', host) kwargs.setdefault('binary', 'manila-%s' % name) svc = service.Service.create(**kwargs) svc.start() self._services.append(svc) return svc def mock_object(self, obj, attr_name, new_attr=None, **kwargs): """Use python mock to mock an object attribute Mocks the specified objects attribute with the given value. Automatically performs 'addCleanup' for the mock. """ if not new_attr: new_attr = mock.Mock() patcher = mock.patch.object(obj, attr_name, new_attr, **kwargs) patcher.start() self.addCleanup(patcher.stop) return new_attr def mock_class(self, class_name, new_val=None, **kwargs): """Use python mock to mock a class Mocks the specified objects attribute with the given value. Automatically performs 'addCleanup' for the mock. """ if not new_val: new_val = mock.Mock() patcher = mock.patch(class_name, new_val, **kwargs) patcher.start() self.addCleanup(patcher.stop) return new_val # Useful assertions def assertDictMatch(self, d1, d2, approx_equal=False, tolerance=0.001): """Assert two dicts are equivalent. This is a 'deep' match in the sense that it handles nested dictionaries appropriately. NOTE: If you don't care (or don't know) a given value, you can specify the string DONTCARE as the value. This will cause that dict-item to be skipped. """ def raise_assertion(msg): d1str = str(d1) d2str = str(d2) base_msg = ('Dictionaries do not match. %(msg)s d1: %(d1str)s ' 'd2: %(d2str)s' % {"msg": msg, "d1str": d1str, "d2str": d2str}) raise AssertionError(base_msg) d1keys = set(d1.keys()) d2keys = set(d2.keys()) if d1keys != d2keys: d1only = d1keys - d2keys d2only = d2keys - d1keys raise_assertion('Keys in d1 and not d2: %(d1only)s. 
' 'Keys in d2 and not d1: %(d2only)s' % {"d1only": d1only, "d2only": d2only}) for key in d1keys: d1value = d1[key] d2value = d2[key] try: error = abs(float(d1value) - float(d2value)) within_tolerance = error <= tolerance except (ValueError, TypeError): # If both values aren't convertible to float, just ignore # ValueError if arg is a str, TypeError if it's something else # (like None) within_tolerance = False if hasattr(d1value, 'keys') and hasattr(d2value, 'keys'): self.assertDictMatch(d1value, d2value) elif 'DONTCARE' in (d1value, d2value): continue elif approx_equal and within_tolerance: continue elif d1value != d2value: raise_assertion("d1['%(key)s']=%(d1value)s != " "d2['%(key)s']=%(d2value)s" % { "key": key, "d1value": d1value, "d2value": d2value }) def assertDictListMatch(self, L1, L2, approx_equal=False, tolerance=0.001): """Assert a list of dicts are equivalent.""" def raise_assertion(msg): L1str = str(L1) L2str = str(L2) base_msg = ('List of dictionaries do not match: %(msg)s ' 'L1: %(L1str)s L2: %(L2str)s' % {"msg": msg, "L1str": L1str, "L2str": L2str}) raise AssertionError(base_msg) L1count = len(L1) L2count = len(L2) if L1count != L2count: raise_assertion('Length mismatch: len(L1)=%(L1count)d != ' 'len(L2)=%(L2count)d' % {"L1count": L1count, "L2count": L2count}) for d1, d2 in zip(L1, L2): self.assertDictMatch(d1, d2, approx_equal=approx_equal, tolerance=tolerance) def assertSubDictMatch(self, sub_dict, super_dict): """Assert a sub_dict is subset of super_dict.""" self.assertTrue(set(sub_dict.keys()).issubset(set(super_dict.keys()))) for k, sub_value in sub_dict.items(): super_value = super_dict[k] if isinstance(sub_value, dict): self.assertSubDictMatch(sub_value, super_value) elif 'DONTCARE' in (sub_value, super_value): continue else: self.assertEqual(sub_value, super_value) def assertIn(self, a, b, *args, **kwargs): """Python < v2.7 compatibility. Assert 'a' in 'b'.""" try: f = super(TestCase, self).assertIn except AttributeError: self.assertTrue(a in b, *args, **kwargs) else: f(a, b, *args, **kwargs) def assertNotIn(self, a, b, *args, **kwargs): """Python < v2.7 compatibility. 
Assert 'a' NOT in 'b'.""" try: f = super(TestCase, self).assertNotIn except AttributeError: self.assertFalse(a in b, *args, **kwargs) else: f(a, b, *args, **kwargs) def assertIsInstance(self, a, b, *args, **kwargs): """Python < v2.7 compatibility.""" try: f = super(TestCase, self).assertIsInstance except AttributeError: self.assertIsInstance(a, b) else: f(a, b, *args, **kwargs) def assertIsNone(self, a, *args, **kwargs): """Python < v2.7 compatibility.""" try: f = super(TestCase, self).assertIsNone except AttributeError: self.assertTrue(a is None) else: f(a, *args, **kwargs) def _dict_from_object(self, obj, ignored_keys): if ignored_keys is None: ignored_keys = [] return {k: v for k, v in obj.items() if k not in ignored_keys} def _assertEqualListsOfObjects(self, objs1, objs2, ignored_keys=None): obj_to_dict = lambda o: ( # noqa: E731 self._dict_from_object(o, ignored_keys)) sort_key = lambda d: [d[k] for k in sorted(d)] # noqa: E731 conv_and_sort = lambda obj: ( # noqa: E731 sorted(map(obj_to_dict, obj), key=sort_key)) self.assertEqual(conv_and_sort(objs1), conv_and_sort(objs2)) def is_microversion_ge(self, left, right): return (api_version.APIVersionRequest(left) >= api_version.APIVersionRequest(right)) def is_microversion_lt(self, left, right): return (api_version.APIVersionRequest(left) < api_version.APIVersionRequest(right)) def assert_notify_called(self, mock_notify, calls): for i in range(0, len(calls)): mock_call = mock_notify.call_args_list[i] call = calls[i] posargs = mock_call[0] self.assertEqual(call[0], posargs[0]) self.assertEqual(call[1], posargs[2]) def override_config(self, name, override, group=None): """Cleanly override CONF variables.""" CONF.set_override(name, override, group) self.addCleanup(CONF.clear_override, name, group) manila-10.0.0/manila/share/0000775000175000017500000000000013656750362015433 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/utils.py0000664000175000017500000001221213656750227017143 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # Copyright (c) 2015 Rushil Chugh # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Share-related Utilities and helpers.""" from oslo_config import cfg from manila.common import constants from manila.db import migration from manila import rpc from manila import utils DEFAULT_POOL_NAME = '_pool0' CONF = cfg.CONF def extract_host(host, level='backend', use_default_pool_name=False): """Extract Host, Backend or Pool information from host string. :param host: String for host, which could include host@backend#pool info :param level: Indicate which level of information should be extracted from host string. Level can be 'host', 'backend', 'pool', or 'backend_name', default value is 'backend' :param use_default_pool_name: This flag specifies what to do if level == 'pool' and there is no 'pool' info encoded in host string. default_pool_name=True will return DEFAULT_POOL_NAME, otherwise it will return None. Default value of this parameter is False. 
:return: expected level of information For example: host = 'HostA@BackendB#PoolC' ret = extract_host(host, 'host') # ret is 'HostA' ret = extract_host(host, 'backend') # ret is 'HostA@BackendB' ret = extract_host(host, 'pool') # ret is 'PoolC' ret = extract_host(host, 'backend_name') # ret is 'BackendB' host = 'HostX@BackendY' ret = extract_host(host, 'pool') # ret is None ret = extract_host(host, 'pool', True) # ret is '_pool0' """ if level == 'host': # Make sure pool is not included hst = host.split('#')[0] return hst.split('@')[0] if level == 'backend_name': hst = host.split('#')[0] return hst.split('@')[1] elif level == 'backend': return host.split('#')[0] elif level == 'pool': lst = host.split('#') if len(lst) == 2: return lst[1] elif use_default_pool_name is True: return DEFAULT_POOL_NAME else: return None def append_host(host, pool): """Encode pool into host info.""" if not host or not pool: return host new_host = "#".join([host, pool]) return new_host def get_active_replica(replica_list): """Returns the first 'active' replica in the list of replicas provided.""" for replica in replica_list: if replica['replica_state'] == constants.REPLICA_STATE_ACTIVE: return replica def change_rules_to_readonly(access_rules, add_rules, delete_rules): dict_access_rules = cast_access_object_to_dict_in_readonly(access_rules) dict_add_rules = cast_access_object_to_dict_in_readonly(add_rules) dict_delete_rules = cast_access_object_to_dict_in_readonly(delete_rules) return dict_access_rules, dict_add_rules, dict_delete_rules def cast_access_object_to_dict_in_readonly(rules): dict_rules = [] for rule in rules: dict_rules.append({ 'access_level': constants.ACCESS_LEVEL_RO, 'access_type': rule['access_type'], 'access_to': rule['access_to'] }) return dict_rules @utils.if_notifications_enabled def notify_about_share_usage(context, share, share_instance, event_suffix, extra_usage_info=None, host=None): if not host: host = CONF.host if not extra_usage_info: extra_usage_info = {} usage_info = _usage_from_share(share, share_instance, **extra_usage_info) rpc.get_notifier("share", host).info(context, 'share.%s' % event_suffix, usage_info) def _usage_from_share(share_ref, share_instance_ref, **extra_usage_info): usage_info = { 'share_id': share_ref['id'], 'user_id': share_ref['user_id'], 'project_id': share_ref['project_id'], 'snapshot_id': share_ref['snapshot_id'], 'share_group_id': share_ref['share_group_id'], 'size': share_ref['size'], 'name': share_ref['display_name'], 'description': share_ref['display_description'], 'proto': share_ref['share_proto'], 'is_public': share_ref['is_public'], 'availability_zone': share_instance_ref['availability_zone'], 'host': share_instance_ref['host'], 'status': share_instance_ref['status'], } usage_info.update(extra_usage_info) return usage_info def get_recent_db_migration_id(): return migration.version() manila-10.0.0/manila/share/__init__.py0000664000175000017500000000200313656750227017537 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Importing full names to not pollute the namespace and cause possible # collisions with use of 'from manila.share import ' elsewhere. import oslo_utils.importutils as import_utils from manila.common import config CONF = config.CONF API = import_utils.import_class(CONF.share_api_class) manila-10.0.0/manila/share/api.py0000664000175000017500000030215313656750227016562 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright (c) 2015 Tom Barron. All rights reserved. # Copyright (c) 2015 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Handles all requests relating to shares. """ from oslo_config import cfg from oslo_log import log from oslo_utils import excutils from oslo_utils import strutils from oslo_utils import timeutils import six from manila.common import constants from manila.data import rpcapi as data_rpcapi from manila.db import base from manila import exception from manila.i18n import _ from manila import policy from manila import quota from manila.scheduler import rpcapi as scheduler_rpcapi from manila.share import access from manila.share import rpcapi as share_rpcapi from manila.share import share_types from manila.share import utils as share_utils from manila import utils share_api_opts = [ cfg.BoolOpt('use_scheduler_creating_share_from_snapshot', default=False, help='If set to False, then share creation from snapshot will ' 'be performed on the same host. ' 'If set to True, then scheduler will be used.' 'When enabling this option make sure that filter ' 'CreateShareFromSnapshot is enabled and to have hosts ' 'reporting replication_domain option.' 
) ] CONF = cfg.CONF CONF.register_opts(share_api_opts) LOG = log.getLogger(__name__) GB = 1048576 * 1024 QUOTAS = quota.QUOTAS class API(base.Base): """API for interacting with the share manager.""" def __init__(self, db_driver=None): super(API, self).__init__(db_driver) self.scheduler_rpcapi = scheduler_rpcapi.SchedulerAPI() self.share_rpcapi = share_rpcapi.ShareAPI() self.access_helper = access.ShareInstanceAccess(self.db, None) def _get_all_availability_zones_with_subnets(self, context, share_network_id): compatible_azs = [] for az in self.db.availability_zone_get_all(context): if self.db.share_network_subnet_get_by_availability_zone_id( context, share_network_id=share_network_id, availability_zone_id=az['id']): compatible_azs.append(az['name']) return compatible_azs def _check_if_share_quotas_exceeded(self, context, quota_exception, share_size, operation='create'): overs = quota_exception.kwargs['overs'] usages = quota_exception.kwargs['usages'] quotas = quota_exception.kwargs['quotas'] def _consumed(name): return (usages[name]['reserved'] + usages[name]['in_use']) if 'gigabytes' in overs: LOG.warning("Quota exceeded for %(s_pid)s, " "tried to %(operation)s " "%(s_size)sG share (%(d_consumed)dG of " "%(d_quota)dG already consumed).", { 's_pid': context.project_id, 's_size': share_size, 'd_consumed': _consumed('gigabytes'), 'd_quota': quotas['gigabytes'], 'operation': operation}) raise exception.ShareSizeExceedsAvailableQuota() elif 'shares' in overs: LOG.warning("Quota exceeded for %(s_pid)s, " "tried to %(operation)s " "share (%(d_consumed)d shares " "already consumed).", { 's_pid': context.project_id, 'd_consumed': _consumed('shares'), 'operation': operation}) raise exception.ShareLimitExceeded(allowed=quotas['shares']) def _check_if_replica_quotas_exceeded(self, context, quota_exception, replica_size, resource_type='share_replica'): overs = quota_exception.kwargs['overs'] usages = quota_exception.kwargs['usages'] quotas = quota_exception.kwargs['quotas'] def _consumed(name): return (usages[name]['reserved'] + usages[name]['in_use']) if 'share_replicas' in overs: LOG.warning("Quota exceeded for %(s_pid)s, " "unable to create share-replica (%(d_consumed)d " "of %(d_quota)d already consumed).", { 's_pid': context.project_id, 'd_consumed': _consumed('share_replicas'), 'd_quota': quotas['share_replicas']}) exception_kwargs = {} if resource_type != 'share_replica': msg = _("Failed while creating a share with replication " "support. Maximum number of allowed share-replicas " "is exceeded.") exception_kwargs['message'] = msg raise exception.ShareReplicasLimitExceeded(**exception_kwargs) elif 'replica_gigabytes' in overs: LOG.warning("Quota exceeded for %(s_pid)s, " "unable to create a share replica size of " "%(s_size)sG (%(d_consumed)dG of " "%(d_quota)dG already consumed).", { 's_pid': context.project_id, 's_size': replica_size, 'd_consumed': _consumed('replica_gigabytes'), 'd_quota': quotas['replica_gigabytes']}) exception_kwargs = {} if resource_type != 'share_replica': msg = _("Failed while creating a share with replication " "support. 
Requested share replica exceeds allowed " "project/user or share type gigabytes quota.") exception_kwargs['message'] = msg raise exception.ShareReplicaSizeExceedsAvailableQuota( **exception_kwargs) def create(self, context, share_proto, size, name, description, snapshot_id=None, availability_zone=None, metadata=None, share_network_id=None, share_type=None, is_public=False, share_group_id=None, share_group_snapshot_member=None, availability_zones=None): """Create new share.""" self._check_metadata_properties(metadata) if snapshot_id is not None: snapshot = self.get_snapshot(context, snapshot_id) if snapshot['aggregate_status'] != constants.STATUS_AVAILABLE: msg = _("status must be '%s'") % constants.STATUS_AVAILABLE raise exception.InvalidShareSnapshot(reason=msg) if not size: size = snapshot['size'] else: snapshot = None def as_int(s): try: return int(s) except (ValueError, TypeError): return s # tolerate size as stringified int size = as_int(size) if not isinstance(size, int) or size <= 0: msg = (_("Share size '%s' must be an integer and greater than 0") % size) raise exception.InvalidInput(reason=msg) if snapshot and size < snapshot['size']: msg = (_("Share size '%s' must be equal or greater " "than snapshot size") % size) raise exception.InvalidInput(reason=msg) if snapshot is None: share_type_id = share_type['id'] if share_type else None else: source_share = self.db.share_get(context, snapshot['share_id']) source_share_az = source_share['instance']['availability_zone'] if availability_zone is None: availability_zone = source_share_az elif (availability_zone != source_share_az and not CONF.use_scheduler_creating_share_from_snapshot): LOG.error("The specified availability zone must be the same " "as parent share when you have the configuration " "option 'use_scheduler_creating_share_from_snapshot'" " set to False.") msg = _("The specified availability zone must be the same " "as the parent share when creating from snapshot.") raise exception.InvalidInput(reason=msg) if share_type is None: # Grab the source share's share_type if no new share type # has been provided. share_type_id = source_share['instance']['share_type_id'] share_type = share_types.get_share_type(context, share_type_id) else: share_type_id = share_type['id'] if share_type_id != source_share['instance']['share_type_id']: msg = _("Invalid share type specified: the requested " "share type must match the type of the source " "share. If a share type is not specified when " "requesting a new share from a snapshot, the " "share type of the source share will be applied " "to the new share.") raise exception.InvalidInput(reason=msg) supported_share_protocols = ( proto.upper() for proto in CONF.enabled_share_protocols) if not (share_proto and share_proto.upper() in supported_share_protocols): msg = (_("Invalid share protocol provided: %(provided)s. " "It is either disabled or unsupported. 
Available " "protocols: %(supported)s") % dict( provided=share_proto, supported=CONF.enabled_share_protocols)) raise exception.InvalidInput(reason=msg) deltas = {'shares': 1, 'gigabytes': size} share_type_attributes = self.get_share_attributes_from_share_type( share_type) share_type_supports_replication = share_type_attributes.get( 'replication_type', None) if share_type_supports_replication: deltas.update( {'share_replicas': 1, 'replica_gigabytes': size}) try: reservations = QUOTAS.reserve( context, share_type_id=share_type_id, **deltas) except exception.OverQuota as e: self._check_if_share_quotas_exceeded(context, e, size) if share_type_supports_replication: self._check_if_replica_quotas_exceeded(context, e, size, resource_type='share') share_group = None if share_group_id: try: share_group = self.db.share_group_get(context, share_group_id) except exception.NotFound as e: raise exception.InvalidParameterValue(six.text_type(e)) if (not share_group_snapshot_member and not (share_group['status'] == constants.STATUS_AVAILABLE)): params = { 'avail': constants.STATUS_AVAILABLE, 'status': share_group['status'], } msg = _("Share group status must be %(avail)s, got " "%(status)s.") % params raise exception.InvalidShareGroup(message=msg) if share_type_id: share_group_st_ids = [ st['share_type_id'] for st in share_group.get('share_types', [])] if share_type_id not in share_group_st_ids: params = { 'type': share_type_id, 'group': share_group_id, } msg = _("The specified share type (%(type)s) is not " "supported by the specified share group " "(%(group)s).") % params raise exception.InvalidParameterValue(msg) if not share_group.get('share_network_id') == share_network_id: params = { 'net': share_network_id, 'group': share_group_id } msg = _("The specified share network (%(net)s) is not " "supported by the specified share group " "(%(group)s).") % params raise exception.InvalidParameterValue(msg) options = { 'size': size, 'user_id': context.user_id, 'project_id': context.project_id, 'snapshot_id': snapshot_id, 'metadata': metadata, 'display_name': name, 'display_description': description, 'share_proto': share_proto, 'is_public': is_public, 'share_group_id': share_group_id, } options.update(share_type_attributes) if share_group_snapshot_member: options['source_share_group_snapshot_member_id'] = ( share_group_snapshot_member['id']) # NOTE(dviroel): If a target availability zone was not provided, the # scheduler will receive a list with all availability zones that # contains a subnet within the selected share network. if share_network_id and not availability_zone: azs_with_subnet = self._get_all_availability_zones_with_subnets( context, share_network_id) if not availability_zones: availability_zones = azs_with_subnet else: availability_zones = ( [az for az in availability_zones if az in azs_with_subnet]) if not availability_zones: msg = _( "The share network is not supported within any requested " "availability zone. 
Check the share type's " "'availability_zones' extra-spec and the availability " "zones of the share network subnets") raise exception.InvalidInput(message=msg) try: share = self.db.share_create(context, options, create_share_instance=False) QUOTAS.commit(context, reservations, share_type_id=share_type_id) except Exception: with excutils.save_and_reraise_exception(): try: self.db.share_delete(context, share['id']) finally: QUOTAS.rollback( context, reservations, share_type_id=share_type_id) host = None snapshot_host = None if snapshot: snapshot_host = snapshot['share']['instance']['host'] if not CONF.use_scheduler_creating_share_from_snapshot: # Shares from snapshots with restriction - source host only. # It is common situation for different types of backends. host = snapshot['share']['instance']['host'] if share_group and host is None: host = share_group['host'] self.create_instance( context, share, share_network_id=share_network_id, host=host, availability_zone=availability_zone, share_group=share_group, share_group_snapshot_member=share_group_snapshot_member, share_type_id=share_type_id, availability_zones=availability_zones, snapshot_host=snapshot_host) # Retrieve the share with instance details share = self.db.share_get(context, share['id']) return share def get_share_attributes_from_share_type(self, share_type): """Determine share attributes from the share type. The share type can change any time after shares of that type are created, so we copy some share type attributes to the share to consistently govern the behavior of that share over its lifespan. """ inferred_map = constants.ExtraSpecs.INFERRED_OPTIONAL_MAP snapshot_support_key = constants.ExtraSpecs.SNAPSHOT_SUPPORT create_share_from_snapshot_key = ( constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT) revert_to_snapshot_key = ( constants.ExtraSpecs.REVERT_TO_SNAPSHOT_SUPPORT) mount_snapshot_support_key = ( constants.ExtraSpecs.MOUNT_SNAPSHOT_SUPPORT) snapshot_support_default = inferred_map.get(snapshot_support_key) create_share_from_snapshot_support_default = inferred_map.get( create_share_from_snapshot_key) revert_to_snapshot_support_default = inferred_map.get( revert_to_snapshot_key) mount_snapshot_support_default = inferred_map.get( constants.ExtraSpecs.MOUNT_SNAPSHOT_SUPPORT) if share_type: snapshot_support = share_types.parse_boolean_extra_spec( snapshot_support_key, share_type.get('extra_specs', {}).get( snapshot_support_key, snapshot_support_default)) create_share_from_snapshot_support = ( share_types.parse_boolean_extra_spec( create_share_from_snapshot_key, share_type.get('extra_specs', {}).get( create_share_from_snapshot_key, create_share_from_snapshot_support_default))) revert_to_snapshot_support = ( share_types.parse_boolean_extra_spec( revert_to_snapshot_key, share_type.get('extra_specs', {}).get( revert_to_snapshot_key, revert_to_snapshot_support_default))) mount_snapshot_support = share_types.parse_boolean_extra_spec( mount_snapshot_support_key, share_type.get( 'extra_specs', {}).get( mount_snapshot_support_key, mount_snapshot_support_default)) replication_type = share_type.get('extra_specs', {}).get( 'replication_type') else: snapshot_support = snapshot_support_default create_share_from_snapshot_support = ( create_share_from_snapshot_support_default) revert_to_snapshot_support = revert_to_snapshot_support_default mount_snapshot_support = mount_snapshot_support_default replication_type = None return { 'snapshot_support': snapshot_support, 'create_share_from_snapshot_support': 
create_share_from_snapshot_support, 'revert_to_snapshot_support': revert_to_snapshot_support, 'replication_type': replication_type, 'mount_snapshot_support': mount_snapshot_support, } def create_instance(self, context, share, share_network_id=None, host=None, availability_zone=None, share_group=None, share_group_snapshot_member=None, share_type_id=None, availability_zones=None, snapshot_host=None): request_spec, share_instance = ( self.create_share_instance_and_get_request_spec( context, share, availability_zone=availability_zone, share_group=share_group, host=host, share_network_id=share_network_id, share_type_id=share_type_id, availability_zones=availability_zones, snapshot_host=snapshot_host)) if share_group_snapshot_member: # Inherit properties from the share_group_snapshot_member member_share_instance = share_group_snapshot_member[ 'share_instance'] updates = { 'host': member_share_instance['host'], 'share_network_id': member_share_instance['share_network_id'], 'share_server_id': member_share_instance['share_server_id'], } share = self.db.share_instance_update(context, share_instance['id'], updates) # NOTE(ameade): Do not cast to driver if creating from share group # snapshot return if host: self.share_rpcapi.create_share_instance( context, share_instance, host, request_spec=request_spec, filter_properties={}, snapshot_id=share['snapshot_id'], ) else: # Create share instance from scratch or from snapshot could happen # on hosts other than the source host. self.scheduler_rpcapi.create_share_instance( context, request_spec=request_spec, filter_properties={}) return share_instance def create_share_instance_and_get_request_spec( self, context, share, availability_zone=None, share_group=None, host=None, share_network_id=None, share_type_id=None, cast_rules_to_readonly=False, availability_zones=None, snapshot_host=None): availability_zone_id = None if availability_zone: availability_zone_id = self.db.availability_zone_get( context, availability_zone).id # TODO(u_glide): Add here validation that provided share network # doesn't conflict with provided availability_zone when Neutron # will have AZ support. 
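        # The new instance starts out in the 'creating' state; the request
        # spec assembled below bundles share, share instance and share type
        # properties so the scheduler (or a directly targeted host) has
        # everything it needs to place the instance.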
share_instance = self.db.share_instance_create( context, share['id'], { 'share_network_id': share_network_id, 'status': constants.STATUS_CREATING, 'scheduled_at': timeutils.utcnow(), 'host': host if host else '', 'availability_zone_id': availability_zone_id, 'share_type_id': share_type_id, 'cast_rules_to_readonly': cast_rules_to_readonly, } ) share_properties = { 'id': share['id'], 'size': share['size'], 'user_id': share['user_id'], 'project_id': share['project_id'], 'metadata': self.db.share_metadata_get(context, share['id']), 'share_server_id': share_instance['share_server_id'], 'snapshot_support': share['snapshot_support'], 'create_share_from_snapshot_support': share['create_share_from_snapshot_support'], 'revert_to_snapshot_support': share['revert_to_snapshot_support'], 'mount_snapshot_support': share['mount_snapshot_support'], 'share_proto': share['share_proto'], 'share_type_id': share_type_id, 'is_public': share['is_public'], 'share_group_id': share['share_group_id'], 'source_share_group_snapshot_member_id': share[ 'source_share_group_snapshot_member_id'], 'snapshot_id': share['snapshot_id'], 'replication_type': share['replication_type'], } share_instance_properties = { 'id': share_instance['id'], 'availability_zone_id': share_instance['availability_zone_id'], 'share_network_id': share_instance['share_network_id'], 'share_server_id': share_instance['share_server_id'], 'share_id': share_instance['share_id'], 'host': share_instance['host'], 'status': share_instance['status'], 'replica_state': share_instance['replica_state'], 'share_type_id': share_instance['share_type_id'], } share_type = None if share_instance['share_type_id']: share_type = self.db.share_type_get( context, share_instance['share_type_id']) request_spec = { 'share_properties': share_properties, 'share_instance_properties': share_instance_properties, 'share_proto': share['share_proto'], 'share_id': share['id'], 'snapshot_id': share['snapshot_id'], 'snapshot_host': snapshot_host, 'share_type': share_type, 'share_group': share_group, 'availability_zone_id': availability_zone_id, 'availability_zones': availability_zones, } return request_spec, share_instance def create_share_replica(self, context, share, availability_zone=None, share_network_id=None): if not share.get('replication_type'): msg = _("Replication not supported for share %s.") raise exception.InvalidShare(message=msg % share['id']) if share.get('share_group_id'): msg = _("Replication not supported for shares in a group.") raise exception.InvalidShare(message=msg) self._check_is_share_busy(share) active_replica = self.db.share_replicas_get_available_active_replica( context, share['id']) if not active_replica: msg = _("Share %s does not have any active replica in available " "state.") raise exception.ReplicationException(reason=msg % share['id']) share_type = share_types.get_share_type( context, share.instance['share_type_id']) type_azs = share_type['extra_specs'].get('availability_zones', '') type_azs = [t for t in type_azs.split(',') if type_azs] if (availability_zone and type_azs and availability_zone not in type_azs): msg = _("Share replica cannot be created since the share type " "%(type)s is not supported within the availability zone " "chosen %(az)s.") type_name = '%s' % (share_type['name'] or '') type_id = '(ID: %s)' % share_type['id'] payload = {'type': '%s%s' % (type_name, type_id), 'az': availability_zone} raise exception.InvalidShare(message=msg % payload) try: reservations = QUOTAS.reserve( context, share_replicas=1, replica_gigabytes=share['size'], 
share_type_id=share_type['id'] ) except exception.OverQuota as e: self._check_if_replica_quotas_exceeded(context, e, share['size']) if share_network_id: if availability_zone: try: az = self.db.availability_zone_get(context, availability_zone) except exception.AvailabilityZoneNotFound: msg = _("Share replica cannot be created because the " "specified availability zone does not exist.") raise exception.InvalidInput(message=msg) if self.db.share_network_subnet_get_by_availability_zone_id( context, share_network_id, az.get('id')) is None: msg = _("Share replica cannot be created because the " "share network is not available within the " "specified availability zone.") raise exception.InvalidShare(message=msg) else: # NOTE(dviroel): If a target availability zone was not # provided, the scheduler will receive a list with all # availability zones that contains subnets within the # selected share network. azs_subnet = self._get_all_availability_zones_with_subnets( context, share_network_id) if not type_azs: type_azs = azs_subnet else: type_azs = ( [az for az in type_azs if az in azs_subnet]) if not type_azs: msg = _( "The share network is not supported within any " "requested availability zone. Check the share type's " "'availability_zones' extra-spec and the availability " "zones of the share network subnets") raise exception.InvalidInput(message=msg) if share['replication_type'] == constants.REPLICATION_TYPE_READABLE: cast_rules_to_readonly = True else: cast_rules_to_readonly = False try: request_spec, share_replica = ( self.create_share_instance_and_get_request_spec( context, share, availability_zone=availability_zone, share_network_id=share_network_id, share_type_id=share['instance']['share_type_id'], cast_rules_to_readonly=cast_rules_to_readonly, availability_zones=type_azs) ) QUOTAS.commit( context, reservations, project_id=share['project_id'], share_type_id=share_type['id'], ) except Exception: with excutils.save_and_reraise_exception(): try: self.db.share_replica_delete( context, share_replica['id'], need_to_update_usages=False) finally: QUOTAS.rollback( context, reservations, share_type_id=share_type['id']) all_replicas = self.db.share_replicas_get_all_by_share( context, share['id']) all_hosts = [r['host'] for r in all_replicas] request_spec['active_replica_host'] = active_replica['host'] request_spec['all_replica_hosts'] = ','.join(all_hosts) self.db.share_replica_update( context, share_replica['id'], {'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC}) existing_snapshots = ( self.db.share_snapshot_get_all_for_share( context, share_replica['share_id']) ) snapshot_instance = { 'status': constants.STATUS_CREATING, 'progress': '0%', 'share_instance_id': share_replica['id'], } for snapshot in existing_snapshots: self.db.share_snapshot_instance_create( context, snapshot['id'], snapshot_instance) self.scheduler_rpcapi.create_share_replica( context, request_spec=request_spec, filter_properties={}) return share_replica def delete_share_replica(self, context, share_replica, force=False): # Disallow deletion of ONLY active replica, *even* when this # operation is forced. 
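        # The 'active' replica is the primary copy of the share; if it is
        # the only replica left in that state, removing it would leave the
        # share without a usable source, so the check below rejects the
        # request even when force=True.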
replicas = self.db.share_replicas_get_all_by_share( context, share_replica['share_id']) active_replicas = list(filter( lambda x: x['replica_state'] == constants.REPLICA_STATE_ACTIVE, replicas)) if (share_replica.get('replica_state') == constants.REPLICA_STATE_ACTIVE and len(active_replicas) == 1): msg = _("Cannot delete last active replica.") raise exception.ReplicationException(reason=msg) LOG.info("Deleting replica %s.", share_replica['id']) self.db.share_replica_update( context, share_replica['id'], { 'status': constants.STATUS_DELETING, 'terminated_at': timeutils.utcnow(), } ) if not share_replica['host']: # Delete any snapshot instances created on the database replica_snapshots = ( self.db.share_snapshot_instance_get_all_with_filters( context, {'share_instance_ids': share_replica['id']}) ) for snapshot in replica_snapshots: self.db.share_snapshot_instance_delete(context, snapshot['id']) # Delete the replica from the database self.db.share_replica_delete(context, share_replica['id']) else: self.share_rpcapi.delete_share_replica(context, share_replica, force=force) def promote_share_replica(self, context, share_replica): if share_replica.get('status') != constants.STATUS_AVAILABLE: msg = _("Replica %(replica_id)s must be in %(status)s state to be " "promoted.") raise exception.ReplicationException( reason=msg % {'replica_id': share_replica['id'], 'status': constants.STATUS_AVAILABLE}) replica_state = share_replica['replica_state'] if (replica_state in (constants.REPLICA_STATE_OUT_OF_SYNC, constants.STATUS_ERROR) and not context.is_admin): msg = _("Promoting a replica with 'replica_state': %s requires " "administrator privileges.") raise exception.AdminRequired( message=msg % replica_state) self.db.share_replica_update( context, share_replica['id'], {'status': constants.STATUS_REPLICATION_CHANGE}) self.share_rpcapi.promote_share_replica(context, share_replica) return self.db.share_replica_get(context, share_replica['id']) def update_share_replica(self, context, share_replica): if not share_replica['host']: msg = _("Share replica does not have a valid host.") raise exception.InvalidHost(reason=msg) self.share_rpcapi.update_share_replica(context, share_replica) def manage(self, context, share_data, driver_options): shares = self.get_all(context, { 'host': share_data['host'], 'export_location': share_data['export_location'], 'share_proto': share_data['share_proto'], 'share_type_id': share_data['share_type_id'] }) share_type_id = share_data['share_type_id'] share_type = share_types.get_share_type(context, share_type_id) share_server_id = share_data.get('share_server_id') dhss = share_types.parse_boolean_extra_spec( 'driver_handles_share_servers', share_type['extra_specs']['driver_handles_share_servers']) if dhss and not share_server_id: msg = _("Share Server ID parameter is required when managing a " "share using a share type with " "driver_handles_share_servers extra-spec set to True.") raise exception.InvalidInput(reason=msg) if not dhss and share_server_id: msg = _("Share Server ID parameter is not expected when managing a" " share using a share type with " "driver_handles_share_servers extra-spec set to False.") raise exception.InvalidInput(reason=msg) if share_server_id: try: share_server = self.db.share_server_get( context, share_data['share_server_id']) except exception.ShareServerNotFound: msg = _("Share Server specified was not found.") raise exception.InvalidInput(reason=msg) if share_server['status'] != constants.STATUS_ACTIVE: msg = _("Share Server specified is not active.") 
raise exception.InvalidShareServer(message=msg) subnet = self.db.share_network_subnet_get( context, share_server['share_network_subnet_id']) share_data['share_network_id'] = subnet['share_network_id'] share_data.update({ 'user_id': context.user_id, 'project_id': context.project_id, 'status': constants.STATUS_MANAGING, 'scheduled_at': timeutils.utcnow(), }) share_data.update( self.get_share_attributes_from_share_type(share_type)) LOG.debug("Manage: Found shares %s.", len(shares)) export_location = share_data.pop('export_location') if len(shares) == 0: share = self.db.share_create(context, share_data) else: msg = _("Share already exists.") raise exception.InvalidShare(reason=msg) self.db.share_export_locations_update(context, share.instance['id'], export_location) request_spec = self._get_request_spec_dict( share, share_type, size=0, share_proto=share_data['share_proto'], host=share_data['host']) # NOTE(ganso): Scheduler is called to validate if share type # provided can fit in host provided. It will invoke manage upon # successful validation. self.scheduler_rpcapi.manage_share(context, share['id'], driver_options, request_spec) return self.db.share_get(context, share['id']) def _get_request_spec_dict(self, share, share_type, **kwargs): if share is None: share = {'instance': {}} share_instance = share['instance'] share_properties = { 'size': kwargs.get('size', share.get('size')), 'user_id': kwargs.get('user_id', share.get('user_id')), 'project_id': kwargs.get('project_id', share.get('project_id')), 'snapshot_support': kwargs.get( 'snapshot_support', share_type.get('extra_specs', {}).get('snapshot_support') ), 'create_share_from_snapshot_support': kwargs.get( 'create_share_from_snapshot_support', share_type.get('extra_specs', {}).get( 'create_share_from_snapshot_support') ), 'revert_to_snapshot_support': kwargs.get( 'revert_to_snapshot_support', share_type.get('extra_specs', {}).get( 'revert_to_snapshot_support') ), 'mount_snapshot_support': kwargs.get( 'mount_snapshot_support', share_type.get('extra_specs', {}).get( 'mount_snapshot_support') ), 'share_proto': kwargs.get('share_proto', share.get('share_proto')), 'share_type_id': share_type['id'], 'is_public': kwargs.get('is_public', share.get('is_public')), 'share_group_id': kwargs.get( 'share_group_id', share.get('share_group_id')), 'source_share_group_snapshot_member_id': kwargs.get( 'source_share_group_snapshot_member_id', share.get('source_share_group_snapshot_member_id')), 'snapshot_id': kwargs.get('snapshot_id', share.get('snapshot_id')), } share_instance_properties = { 'availability_zone_id': kwargs.get( 'availability_zone_id', share_instance.get('availability_zone_id')), 'share_network_id': kwargs.get( 'share_network_id', share_instance.get('share_network_id')), 'share_server_id': kwargs.get( 'share_server_id', share_instance.get('share_server_id')), 'share_id': kwargs.get('share_id', share_instance.get('share_id')), 'host': kwargs.get('host', share_instance.get('host')), 'status': kwargs.get('status', share_instance.get('status')), } request_spec = { 'share_properties': share_properties, 'share_instance_properties': share_instance_properties, 'share_type': share_type, 'share_id': share.get('id'), } return request_spec def unmanage(self, context, share): policy.check_policy(context, 'share', 'unmanage') self._check_is_share_busy(share) update_data = {'status': constants.STATUS_UNMANAGING, 'terminated_at': timeutils.utcnow()} share_ref = self.db.share_update(context, share['id'], update_data) self.share_rpcapi.unmanage_share(context, 
share_ref) # NOTE(u_glide): We should update 'updated_at' timestamp of # share server here, when manage/unmanage operations will be supported # for driver_handles_share_servers=True mode def manage_snapshot(self, context, snapshot_data, driver_options): try: share = self.db.share_get(context, snapshot_data['share_id']) except exception.NotFound: raise exception.ShareNotFound(share_id=snapshot_data['share_id']) if share['has_replicas']: msg = (_("Share %s has replicas. Snapshots of this share cannot " "currently be managed until all replicas are removed.") % share['id']) raise exception.InvalidShare(reason=msg) existing_snapshots = self.db.share_snapshot_get_all_for_share( context, snapshot_data['share_id']) for existing_snap in existing_snapshots: for inst in existing_snap.get('instances'): if (snapshot_data['provider_location'] == inst['provider_location']): msg = _("A share snapshot %(share_snapshot_id)s is " "already managed for provider location " "%(provider_location)s.") % { 'share_snapshot_id': existing_snap['id'], 'provider_location': snapshot_data['provider_location'], } raise exception.ManageInvalidShareSnapshot( reason=msg) snapshot_data.update({ 'user_id': context.user_id, 'project_id': context.project_id, 'status': constants.STATUS_MANAGING, 'share_size': share['size'], 'progress': '0%', 'share_proto': share['share_proto'] }) snapshot = self.db.share_snapshot_create(context, snapshot_data) self.share_rpcapi.manage_snapshot(context, snapshot, share['host'], driver_options) return snapshot def unmanage_snapshot(self, context, snapshot, host): update_data = {'status': constants.STATUS_UNMANAGING, 'terminated_at': timeutils.utcnow()} snapshot_ref = self.db.share_snapshot_update(context, snapshot['id'], update_data) self.share_rpcapi.unmanage_snapshot(context, snapshot_ref, host) def revert_to_snapshot(self, context, share, snapshot): """Revert a share to a snapshot.""" reservations = self._handle_revert_to_snapshot_quotas( context, share, snapshot) try: if share.get('has_replicas'): self._revert_to_replicated_snapshot( context, share, snapshot, reservations) else: self._revert_to_snapshot( context, share, snapshot, reservations) except Exception: with excutils.save_and_reraise_exception(): if reservations: QUOTAS.rollback( context, reservations, share_type_id=share['instance']['share_type_id']) def _handle_revert_to_snapshot_quotas(self, context, share, snapshot): """Reserve extra quota if a revert will result in a larger share.""" # Note(cknight): This value may be positive or negative. size_increase = snapshot['size'] - share['size'] if not size_increase: return None try: return QUOTAS.reserve( context, project_id=share['project_id'], gigabytes=size_increase, user_id=share['user_id'], share_type_id=share['instance']['share_type_id']) except exception.OverQuota as exc: usages = exc.kwargs['usages'] quotas = exc.kwargs['quotas'] consumed_gb = (usages['gigabytes']['reserved'] + usages['gigabytes']['in_use']) msg = _("Quota exceeded for %(s_pid)s. 
Reverting share " "%(s_sid)s to snapshot %(s_ssid)s will increase the " "share's size by %(s_size)sG, " "(%(d_consumed)dG of %(d_quota)dG already consumed).") msg_args = { 's_pid': context.project_id, 's_sid': share['id'], 's_ssid': snapshot['id'], 's_size': size_increase, 'd_consumed': consumed_gb, 'd_quota': quotas['gigabytes'], } message = msg % msg_args LOG.error(message) raise exception.ShareSizeExceedsAvailableQuota(message=message) def _revert_to_snapshot(self, context, share, snapshot, reservations): """Revert a non-replicated share to a snapshot.""" # Set status of share to 'reverting' self.db.share_update( context, snapshot['share_id'], {'status': constants.STATUS_REVERTING}) # Set status of snapshot to 'restoring' self.db.share_snapshot_update( context, snapshot['id'], {'status': constants.STATUS_RESTORING}) # Send revert API to share host self.share_rpcapi.revert_to_snapshot( context, share, snapshot, share['instance']['host'], reservations) def _revert_to_replicated_snapshot(self, context, share, snapshot, reservations): """Revert a replicated share to a snapshot.""" # Get active replica active_replica = self.db.share_replicas_get_available_active_replica( context, share['id']) if not active_replica: msg = _('Share %s has no active replica in available state.') raise exception.ReplicationException(reason=msg % share['id']) # Get snapshot instance on active replica snapshot_instance_filters = { 'share_instance_ids': active_replica['id'], 'snapshot_ids': snapshot['id'], } snapshot_instances = ( self.db.share_snapshot_instance_get_all_with_filters( context, snapshot_instance_filters)) active_snapshot_instance = ( snapshot_instances[0] if snapshot_instances else None) if not active_snapshot_instance: msg = _('Share %(share)s has no snapshot %(snap)s associated with ' 'its active replica.') msg_args = {'share': share['id'], 'snap': snapshot['id']} raise exception.ReplicationException(reason=msg % msg_args) # Set active replica to 'reverting' self.db.share_replica_update( context, active_replica['id'], {'status': constants.STATUS_REVERTING}) # Set snapshot instance on active replica to 'restoring' self.db.share_snapshot_instance_update( context, active_snapshot_instance['id'], {'status': constants.STATUS_RESTORING}) # Send revert API to active replica host self.share_rpcapi.revert_to_snapshot( context, share, snapshot, active_replica['host'], reservations) @policy.wrap_check_policy('share') def delete(self, context, share, force=False): """Delete share.""" share = self.db.share_get(context, share['id']) share_id = share['id'] statuses = (constants.STATUS_AVAILABLE, constants.STATUS_ERROR, constants.STATUS_INACTIVE) if not (force or share['status'] in statuses): msg = _("Share status must be one of %(statuses)s") % { "statuses": statuses} raise exception.InvalidShare(reason=msg) # NOTE(gouthamr): If the share has more than one replica, # it can't be deleted until the additional replicas are removed. if share.has_replicas: msg = _("Share %s has replicas. 
Remove the replicas before " "deleting the share.") % share_id raise exception.Conflict(err=msg) snapshots = self.db.share_snapshot_get_all_for_share(context, share_id) if len(snapshots): msg = _("Share still has %d dependent snapshots.") % len(snapshots) raise exception.InvalidShare(reason=msg) share_group_snapshot_members_count = ( self.db.count_share_group_snapshot_members_in_share( context, share_id)) if share_group_snapshot_members_count: msg = ( _("Share still has %d dependent share group snapshot " "members.") % share_group_snapshot_members_count) raise exception.InvalidShare(reason=msg) self._check_is_share_busy(share) for share_instance in share.instances: if share_instance['host']: self.delete_instance(context, share_instance, force=force) else: self.db.share_instance_delete( context, share_instance['id'], need_to_update_usages=True) def delete_instance(self, context, share_instance, force=False): policy.check_policy(context, 'share', 'delete') statuses = (constants.STATUS_AVAILABLE, constants.STATUS_ERROR, constants.STATUS_INACTIVE) if not (force or share_instance['status'] in statuses): msg = _("Share instance status must be one of %(statuses)s") % { "statuses": statuses} raise exception.InvalidShareInstance(reason=msg) share_instance = self.db.share_instance_update( context, share_instance['id'], {'status': constants.STATUS_DELETING, 'terminated_at': timeutils.utcnow()} ) self.share_rpcapi.delete_share_instance(context, share_instance, force=force) # NOTE(u_glide): 'updated_at' timestamp is used to track last usage of # share server. This is required for automatic share servers cleanup # because we should track somehow period of time when share server # doesn't have shares (unused). We do this update only on share # deletion because share server with shares cannot be deleted, so no # need to do this update on share creation or any other share operation if share_instance['share_server_id']: self.db.share_server_update( context, share_instance['share_server_id'], {'updated_at': timeutils.utcnow()}) def delete_share_server(self, context, server): """Delete share server.""" policy.check_policy(context, 'share_server', 'delete', server) shares = self.db.share_instances_get_all_by_share_server(context, server['id']) if shares: raise exception.ShareServerInUse(share_server_id=server['id']) share_groups = self.db.share_group_get_all_by_share_server( context, server['id']) if share_groups: LOG.error("share server '%(ssid)s' in use by share groups.", {'ssid': server['id']}) raise exception.ShareServerInUse(share_server_id=server['id']) # NOTE(vponomaryov): There is no share_server status update here, # it is intentional. # Status will be changed in manila.share.manager after verification # for race condition between share creation on server # and server deletion. 
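# Illustrative sketch (not part of the upstream code): the 'updated_at'
# timestamp maintained in delete_instance() above is what lets unused share
# servers be found and cleaned up later. The helper below, its name and the
# 10-minute idle window are assumptions made only for this example; the real
# cleanup decision is made in the share manager.
from oslo_utils import timeutils

def _example_share_server_is_idle(share_server, idle_minutes=10):
    """Return True if the server has had no shares for `idle_minutes`."""
    if share_server['updated_at'] is None:
        return False
    return timeutils.is_older_than(share_server['updated_at'],
                                   idle_minutes * 60)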
self.share_rpcapi.delete_share_server(context, server) def manage_share_server( self, context, identifier, host, share_net_subnet, driver_opts): """Manage a share server.""" try: matched_servers = self.db.share_server_search_by_identifier( context, identifier) except exception.ShareServerNotFound: pass else: msg = _("Identifier %(identifier)s specified matches existing " "share servers: %(servers)s.") % { 'identifier': identifier, 'servers': ', '.join(s['identifier'] for s in matched_servers) } raise exception.InvalidInput(reason=msg) values = { 'host': host, 'share_network_subnet_id': share_net_subnet['id'], 'status': constants.STATUS_MANAGING, 'is_auto_deletable': False, 'identifier': identifier, } server = self.db.share_server_create(context, values) self.share_rpcapi.manage_share_server( context, server, identifier, driver_opts) return self.db.share_server_get(context, server['id']) def unmanage_share_server(self, context, share_server, force=False): """Unmanage a share server.""" shares = self.db.share_instances_get_all_by_share_server( context, share_server['id']) if shares: raise exception.ShareServerInUse( share_server_id=share_server['id']) share_groups = self.db.share_group_get_all_by_share_server( context, share_server['id']) if share_groups: LOG.error("share server '%(ssid)s' in use by share groups.", {'ssid': share_server['id']}) raise exception.ShareServerInUse( share_server_id=share_server['id']) update_data = {'status': constants.STATUS_UNMANAGING, 'terminated_at': timeutils.utcnow()} share_server = self.db.share_server_update( context, share_server['id'], update_data) self.share_rpcapi.unmanage_share_server( context, share_server, force=force) def create_snapshot(self, context, share, name, description, force=False): policy.check_policy(context, 'share', 'create_snapshot', share) if ((not force) and (share['status'] != constants.STATUS_AVAILABLE)): msg = _("Source share status must be " "%s") % constants.STATUS_AVAILABLE raise exception.InvalidShare(reason=msg) size = share['size'] self._check_is_share_busy(share) try: reservations = QUOTAS.reserve( context, snapshots=1, snapshot_gigabytes=size, share_type_id=share['instance']['share_type_id']) except exception.OverQuota as e: overs = e.kwargs['overs'] usages = e.kwargs['usages'] quotas = e.kwargs['quotas'] def _consumed(name): return (usages[name]['reserved'] + usages[name]['in_use']) if 'snapshot_gigabytes' in overs: msg = ("Quota exceeded for %(s_pid)s, tried to create " "%(s_size)sG snapshot (%(d_consumed)dG of " "%(d_quota)dG already consumed).") LOG.warning(msg, { 's_pid': context.project_id, 's_size': size, 'd_consumed': _consumed('snapshot_gigabytes'), 'd_quota': quotas['snapshot_gigabytes']}) raise exception.SnapshotSizeExceedsAvailableQuota() elif 'snapshots' in overs: msg = ("Quota exceeded for %(s_pid)s, tried to create " "snapshot (%(d_consumed)d snapshots " "already consumed).") LOG.warning(msg, {'s_pid': context.project_id, 'd_consumed': _consumed('snapshots')}) raise exception.SnapshotLimitExceeded( allowed=quotas['snapshots']) options = {'share_id': share['id'], 'size': share['size'], 'user_id': context.user_id, 'project_id': context.project_id, 'status': constants.STATUS_CREATING, 'progress': '0%', 'share_size': share['size'], 'display_name': name, 'display_description': description, 'share_proto': share['share_proto']} try: snapshot = self.db.share_snapshot_create(context, options) QUOTAS.commit( context, reservations, share_type_id=share['instance']['share_type_id']) except Exception: with 
excutils.save_and_reraise_exception(): try: self.db.snapshot_delete(context, share['id']) finally: QUOTAS.rollback( context, reservations, share_type_id=share['instance']['share_type_id']) # If replicated share, create snapshot instances for each replica if share.get('has_replicas'): snapshot = self.db.share_snapshot_get(context, snapshot['id']) share_instance_id = snapshot['instance']['share_instance_id'] replicas = self.db.share_replicas_get_all_by_share( context, share['id']) replicas = [r for r in replicas if r['id'] != share_instance_id] snapshot_instance = { 'status': constants.STATUS_CREATING, 'progress': '0%', } for replica in replicas: snapshot_instance.update({'share_instance_id': replica['id']}) self.db.share_snapshot_instance_create( context, snapshot['id'], snapshot_instance) self.share_rpcapi.create_replicated_snapshot( context, share, snapshot) else: self.share_rpcapi.create_snapshot(context, share, snapshot) return snapshot def migration_start( self, context, share, dest_host, force_host_assisted_migration, preserve_metadata, writable, nondisruptive, preserve_snapshots, new_share_network=None, new_share_type=None): """Migrates share to a new host.""" if force_host_assisted_migration and ( preserve_metadata or writable or nondisruptive or preserve_snapshots): msg = _('Invalid parameter combination. Cannot set parameters ' '"nondisruptive", "writable", "preserve_snapshots" or ' '"preserve_metadata" to True when enabling the ' '"force_host_assisted_migration" option.') LOG.error(msg) raise exception.InvalidInput(reason=msg) share_instance = share.instance # NOTE(gouthamr): Ensure share does not have replicas. # Currently share migrations are disallowed for replicated shares. if share.has_replicas: msg = _('Share %s has replicas. Remove the replicas before ' 'attempting to migrate the share.') % share['id'] LOG.error(msg) raise exception.Conflict(err=msg) # TODO(ganso): We do not support migrating shares in or out of groups # for now. if share.get('share_group_id'): msg = _('Share %s is a member of a group. 
This operation is not ' 'currently supported for shares that are members of ' 'groups.') % share['id'] LOG.error(msg) raise exception.InvalidShare(reason=msg) # We only handle "available" share for now if share_instance['status'] != constants.STATUS_AVAILABLE: msg = _('Share instance %(instance_id)s status must be available, ' 'but current status is: %(instance_status)s.') % { 'instance_id': share_instance['id'], 'instance_status': share_instance['status']} raise exception.InvalidShare(reason=msg) # Access rules status must not be error if share_instance['access_rules_status'] == constants.STATUS_ERROR: msg = _('Share instance %(instance_id)s access rules status must ' 'not be in %(error)s when attempting to start a ' 'migration.') % { 'instance_id': share_instance['id'], 'error': constants.STATUS_ERROR} raise exception.InvalidShare(reason=msg) self._check_is_share_busy(share) if force_host_assisted_migration: # We only handle shares without snapshots for # host-assisted migration snaps = self.db.share_snapshot_get_all_for_share(context, share['id']) if snaps: msg = _("Share %s must not have snapshots when using " "host-assisted migration.") % share['id'] raise exception.Conflict(err=msg) dest_host_host = share_utils.extract_host(dest_host) # Make sure the host is in the list of available hosts utils.validate_service_host(context, dest_host_host) if new_share_type: share_type = new_share_type new_share_type_id = new_share_type['id'] dhss = share_type['extra_specs']['driver_handles_share_servers'] dhss = strutils.bool_from_string(dhss, strict=True) if (dhss and not new_share_network and not share_instance['share_network_id']): msg = _( "New share network must be provided when share type of" " given share %s has extra_spec " "'driver_handles_share_servers' as True.") % share['id'] raise exception.InvalidInput(reason=msg) else: share_type = {} share_type_id = share_instance['share_type_id'] if share_type_id: share_type = share_types.get_share_type(context, share_type_id) new_share_type_id = share_instance['share_type_id'] dhss = share_type['extra_specs']['driver_handles_share_servers'] dhss = strutils.bool_from_string(dhss, strict=True) if dhss: if new_share_network: new_share_network_id = new_share_network['id'] else: new_share_network_id = share_instance['share_network_id'] else: if new_share_network: msg = _( "New share network must not be provided when share type of" " given share %s has extra_spec " "'driver_handles_share_servers' as False.") % share['id'] raise exception.InvalidInput(reason=msg) new_share_network_id = None # Make sure the destination is different than the source if (new_share_network_id == share_instance['share_network_id'] and new_share_type_id == share_instance['share_type_id'] and dest_host == share_instance['host']): msg = ("Destination host (%(dest_host)s), share network " "(%(dest_sn)s) or share type (%(dest_st)s) are the same " "as the current host's '%(src_host)s', '%(src_sn)s' and " "'%(src_st)s' respectively. 
Nothing to be done.") % { 'dest_host': dest_host, 'dest_sn': new_share_network_id, 'dest_st': new_share_type_id, 'src_host': share_instance['host'], 'src_sn': share_instance['share_network_id'], 'src_st': share_instance['share_type_id'], } LOG.info(msg) self.db.share_update( context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_SUCCESS}) return 200 service = self.db.service_get_by_args( context, dest_host_host, 'manila-share') type_azs = share_type['extra_specs'].get('availability_zones', '') type_azs = [t for t in type_azs.split(',') if type_azs] if type_azs and service['availability_zone']['name'] not in type_azs: msg = _("Share %(shr)s cannot be migrated to host %(dest)s " "because share type %(type)s is not supported within the " "availability zone (%(az)s) that the host is in.") type_name = '%s' % (share_type['name'] or '') type_id = '(ID: %s)' % share_type['id'] payload = {'type': '%s%s' % (type_name, type_id), 'az': service['availability_zone']['name'], 'shr': share['id'], 'dest': dest_host} raise exception.InvalidShare(reason=msg % payload) request_spec = self._get_request_spec_dict( share, share_type, availability_zone_id=service['availability_zone_id'], share_network_id=new_share_network_id) self.db.share_update( context, share['id'], {'task_state': constants.TASK_STATE_MIGRATION_STARTING}) self.db.share_instance_update(context, share_instance['id'], {'status': constants.STATUS_MIGRATING}) self.scheduler_rpcapi.migrate_share_to_host( context, share['id'], dest_host, force_host_assisted_migration, preserve_metadata, writable, nondisruptive, preserve_snapshots, new_share_network_id, new_share_type_id, request_spec) return 202 def migration_complete(self, context, share): if share['task_state'] not in ( constants.TASK_STATE_DATA_COPYING_COMPLETED, constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE): msg = self._migration_validate_error_message(share) if msg is None: msg = _("First migration phase of share %s not completed" " yet.") % share['id'] LOG.error(msg) raise exception.InvalidShare(reason=msg) share_instance_id, new_share_instance_id = ( self.get_migrating_instances(share)) share_instance_ref = self.db.share_instance_get( context, share_instance_id, with_share_data=True) self.share_rpcapi.migration_complete(context, share_instance_ref, new_share_instance_id) def get_migrating_instances(self, share): share_instance_id = None new_share_instance_id = None for instance in share.instances: if instance['status'] == constants.STATUS_MIGRATING: share_instance_id = instance['id'] if instance['status'] == constants.STATUS_MIGRATING_TO: new_share_instance_id = instance['id'] if None in (share_instance_id, new_share_instance_id): msg = _("Share instances %(instance_id)s and " "%(new_instance_id)s in inconsistent states, cannot" " continue share migration for share %(share_id)s" ".") % {'instance_id': share_instance_id, 'new_instance_id': new_share_instance_id, 'share_id': share['id']} raise exception.ShareMigrationFailed(reason=msg) return share_instance_id, new_share_instance_id def migration_get_progress(self, context, share): if share['task_state'] == ( constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS): share_instance_id, migrating_instance_id = ( self.get_migrating_instances(share)) share_instance_ref = self.db.share_instance_get( context, share_instance_id, with_share_data=True) service_host = share_utils.extract_host(share_instance_ref['host']) service = self.db.service_get_by_args( context, service_host, 'manila-share') if utils.service_is_up(service): try: result = 
self.share_rpcapi.migration_get_progress( context, share_instance_ref, migrating_instance_id) except Exception: msg = _("Failed to obtain migration progress of share " "%s.") % share['id'] LOG.exception(msg) raise exception.ShareMigrationError(reason=msg) else: result = None elif share['task_state'] == ( constants.TASK_STATE_DATA_COPYING_IN_PROGRESS): data_rpc = data_rpcapi.DataAPI() LOG.info("Sending request to get share migration information" " of share %s.", share['id']) services = self.db.service_get_all_by_topic(context, 'manila-data') if len(services) > 0 and utils.service_is_up(services[0]): try: result = data_rpc.data_copy_get_progress( context, share['id']) except Exception: msg = _("Failed to obtain migration progress of share " "%s.") % share['id'] LOG.exception(msg) raise exception.ShareMigrationError(reason=msg) else: result = None else: result = self._migration_get_progress_state(share) if not (result and result.get('total_progress') is not None): msg = self._migration_validate_error_message(share) if msg is None: msg = _("Migration progress of share %s cannot be obtained at " "this moment.") % share['id'] LOG.error(msg) raise exception.InvalidShare(reason=msg) return result def _migration_get_progress_state(self, share): task_state = share['task_state'] if task_state in (constants.TASK_STATE_MIGRATION_SUCCESS, constants.TASK_STATE_DATA_COPYING_ERROR, constants.TASK_STATE_MIGRATION_CANCELLED, constants.TASK_STATE_MIGRATION_COMPLETING, constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, constants.TASK_STATE_DATA_COPYING_COMPLETED, constants.TASK_STATE_DATA_COPYING_COMPLETING, constants.TASK_STATE_DATA_COPYING_CANCELLED, constants.TASK_STATE_MIGRATION_ERROR): return {'total_progress': 100} elif task_state in (constants.TASK_STATE_MIGRATION_STARTING, constants.TASK_STATE_MIGRATION_DRIVER_STARTING, constants.TASK_STATE_DATA_COPYING_STARTING, constants.TASK_STATE_MIGRATION_IN_PROGRESS): return {'total_progress': 0} else: return None def _migration_validate_error_message(self, share): task_state = share['task_state'] if task_state == constants.TASK_STATE_MIGRATION_SUCCESS: msg = _("Migration of share %s has already " "completed.") % share['id'] elif task_state in (None, constants.TASK_STATE_MIGRATION_ERROR): msg = _("There is no migration being performed for share %s " "at this moment.") % share['id'] elif task_state == constants.TASK_STATE_MIGRATION_CANCELLED: msg = _("Migration of share %s was already " "cancelled.") % share['id'] elif task_state in (constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, constants.TASK_STATE_DATA_COPYING_COMPLETED): msg = _("Migration of share %s has already completed first " "phase.") % share['id'] else: return None return msg def migration_cancel(self, context, share): migrating = True if share['task_state'] in ( constants.TASK_STATE_DATA_COPYING_COMPLETED, constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS): share_instance_id, migrating_instance_id = ( self.get_migrating_instances(share)) share_instance_ref = self.db.share_instance_get( context, share_instance_id, with_share_data=True) service_host = share_utils.extract_host(share_instance_ref['host']) service = self.db.service_get_by_args( context, service_host, 'manila-share') if utils.service_is_up(service): self.share_rpcapi.migration_cancel( context, share_instance_ref, migrating_instance_id) else: migrating = False elif share['task_state'] == ( constants.TASK_STATE_DATA_COPYING_IN_PROGRESS): data_rpc = data_rpcapi.DataAPI() LOG.info("Sending 
request to cancel migration of " "share %s.", share['id']) services = self.db.service_get_all_by_topic(context, 'manila-data') if len(services) > 0 and utils.service_is_up(services[0]): try: data_rpc.data_copy_cancel(context, share['id']) except Exception: msg = _("Failed to cancel migration of share " "%s.") % share['id'] LOG.exception(msg) raise exception.ShareMigrationError(reason=msg) else: migrating = False else: migrating = False if not migrating: msg = self._migration_validate_error_message(share) if msg is None: msg = _("Migration of share %s cannot be cancelled at this " "moment.") % share['id'] LOG.error(msg) raise exception.InvalidShare(reason=msg) @policy.wrap_check_policy('share') def delete_snapshot(self, context, snapshot, force=False): statuses = (constants.STATUS_AVAILABLE, constants.STATUS_ERROR) if not (force or snapshot['aggregate_status'] in statuses): msg = _("Share Snapshot status must be one of %(statuses)s.") % { "statuses": statuses} raise exception.InvalidShareSnapshot(reason=msg) share = self.db.share_get(context, snapshot['share_id']) snapshot_instances = ( self.db.share_snapshot_instance_get_all_with_filters( context, {'snapshot_ids': snapshot['id']}) ) for snapshot_instance in snapshot_instances: self.db.share_snapshot_instance_update( context, snapshot_instance['id'], {'status': constants.STATUS_DELETING}) if share['has_replicas']: self.share_rpcapi.delete_replicated_snapshot( context, snapshot, share['instance']['host'], share_id=share['id'], force=force) else: self.share_rpcapi.delete_snapshot( context, snapshot, share['instance']['host'], force=force) @policy.wrap_check_policy('share') def update(self, context, share, fields): return self.db.share_update(context, share['id'], fields) @policy.wrap_check_policy('share') def snapshot_update(self, context, snapshot, fields): return self.db.share_snapshot_update(context, snapshot['id'], fields) def get(self, context, share_id): rv = self.db.share_get(context, share_id) if not rv['is_public']: policy.check_policy(context, 'share', 'get', rv) return rv def get_all(self, context, search_opts=None, sort_key='created_at', sort_dir='desc'): policy.check_policy(context, 'share', 'get_all') if search_opts is None: search_opts = {} LOG.debug("Searching for shares by: %s", search_opts) # Prepare filters filters = {} if 'export_location_id' in search_opts: filters['export_location_id'] = search_opts.pop( 'export_location_id') if 'export_location_path' in search_opts: filters['export_location_path'] = search_opts.pop( 'export_location_path') if 'metadata' in search_opts: filters['metadata'] = search_opts.pop('metadata') if not isinstance(filters['metadata'], dict): msg = _("Wrong metadata filter provided: " "%s.") % six.text_type(filters['metadata']) raise exception.InvalidInput(reason=msg) if 'extra_specs' in search_opts: # Verify policy for extra-specs access policy.check_policy(context, 'share_types_extra_spec', 'index') filters['extra_specs'] = search_opts.pop('extra_specs') if not isinstance(filters['extra_specs'], dict): msg = _("Wrong extra specs filter provided: " "%s.") % six.text_type(filters['extra_specs']) raise exception.InvalidInput(reason=msg) if 'limit' in search_opts: filters['limit'] = search_opts.pop('limit') if 'offset' in search_opts: filters['offset'] = search_opts.pop('offset') if not (isinstance(sort_key, six.string_types) and sort_key): msg = _("Wrong sort_key filter provided: " "'%s'.") % six.text_type(sort_key) raise exception.InvalidInput(reason=msg) if not (isinstance(sort_dir, 
six.string_types) and sort_dir): msg = _("Wrong sort_dir filter provided: " "'%s'.") % six.text_type(sort_dir) raise exception.InvalidInput(reason=msg) is_public = search_opts.pop('is_public', False) is_public = strutils.bool_from_string(is_public, strict=True) # Get filtered list of shares if 'host' in search_opts: policy.check_policy(context, 'share', 'list_by_host') if 'share_server_id' in search_opts: # NOTE(vponomaryov): this is project_id independent policy.check_policy(context, 'share', 'list_by_share_server_id') shares = self.db.share_get_all_by_share_server( context, search_opts.pop('share_server_id'), filters=filters, sort_key=sort_key, sort_dir=sort_dir) elif (context.is_admin and utils.is_all_tenants(search_opts)): shares = self.db.share_get_all( context, filters=filters, sort_key=sort_key, sort_dir=sort_dir) else: shares = self.db.share_get_all_by_project( context, project_id=context.project_id, filters=filters, is_public=is_public, sort_key=sort_key, sort_dir=sort_dir) # NOTE(vponomaryov): we do not need 'all_tenants' opt anymore search_opts.pop('all_tenants', None) if search_opts: results = [] for s in shares: # values in search_opts can be only strings if (all(s.get(k, None) == v or (v in (s.get(k.rstrip('~')) if k.endswith('~') and s.get(k.rstrip('~')) else ())) for k, v in search_opts.items())): results.append(s) shares = results return shares def get_snapshot(self, context, snapshot_id): policy.check_policy(context, 'share_snapshot', 'get_snapshot') return self.db.share_snapshot_get(context, snapshot_id) def get_all_snapshots(self, context, search_opts=None, sort_key='share_id', sort_dir='desc'): policy.check_policy(context, 'share_snapshot', 'get_all_snapshots') search_opts = search_opts or {} LOG.debug("Searching for snapshots by: %s", search_opts) # Read and remove key 'all_tenants' if was provided all_tenants = search_opts.pop('all_tenants', None) string_args = {'sort_key': sort_key, 'sort_dir': sort_dir} string_args.update(search_opts) for k, v in string_args.items(): if not (isinstance(v, six.string_types) and v): msg = _("Wrong '%(k)s' filter provided: " "'%(v)s'.") % {'k': k, 'v': string_args[k]} raise exception.InvalidInput(reason=msg) if (context.is_admin and all_tenants): snapshots = self.db.share_snapshot_get_all( context, filters=search_opts, sort_key=sort_key, sort_dir=sort_dir) else: snapshots = self.db.share_snapshot_get_all_by_project( context, context.project_id, filters=search_opts, sort_key=sort_key, sort_dir=sort_dir) # Remove key 'usage' if provided search_opts.pop('usage', None) if search_opts: results = [] not_found = object() for snapshot in snapshots: if (all(snapshot.get(k, not_found) == v or (v in snapshot.get(k.rstrip('~')) if k.endswith('~') and snapshot.get(k.rstrip('~')) else ()) for k, v in search_opts.items())): results.append(snapshot) snapshots = results return snapshots def get_latest_snapshot_for_share(self, context, share_id): """Get the newest snapshot of a share.""" return self.db.share_snapshot_get_latest_for_share(context, share_id) @staticmethod def _is_invalid_share_instance(instance): return (instance['host'] is None or instance['status'] in constants. 
INVALID_SHARE_INSTANCE_STATUSES_FOR_ACCESS_RULE_UPDATES) def allow_access(self, ctx, share, access_type, access_to, access_level=None, metadata=None): """Allow access to share.""" # Access rule validation: if access_level not in constants.ACCESS_LEVELS + (None, ): msg = _("Invalid share access level: %s.") % access_level raise exception.InvalidShareAccess(reason=msg) self._check_metadata_properties(metadata) access_exists = self.db.share_access_check_for_existing_access( ctx, share['id'], access_type, access_to) if access_exists: raise exception.ShareAccessExists(access_type=access_type, access=access_to) # Share instance validation if any(instance for instance in share.instances if self._is_invalid_share_instance(instance)): msg = _("New access rules cannot be applied while the share or " "any of its replicas or migration copies lacks a valid " "host or is in an invalid state.") raise exception.InvalidShare(message=msg) values = { 'share_id': share['id'], 'access_type': access_type, 'access_to': access_to, 'access_level': access_level, 'metadata': metadata, } access = self.db.share_access_create(ctx, values) for share_instance in share.instances: self.allow_access_to_instance(ctx, share_instance) return access def allow_access_to_instance(self, context, share_instance): self._conditionally_transition_share_instance_access_rules_status( context, share_instance) self.share_rpcapi.update_access(context, share_instance) def _conditionally_transition_share_instance_access_rules_status( self, context, share_instance): conditionally_change = { constants.STATUS_ACTIVE: constants.SHARE_INSTANCE_RULES_SYNCING, } self.access_helper.get_and_update_share_instance_access_rules_status( context, conditionally_change=conditionally_change, share_instance_id=share_instance['id']) def deny_access(self, ctx, share, access): """Deny access to share.""" if any(instance for instance in share.instances if self._is_invalid_share_instance(instance)): msg = _("Access rules cannot be denied while the share, " "any of its replicas or migration copies lacks a valid " "host or is in an invalid state.") raise exception.InvalidShare(message=msg) for share_instance in share.instances: self.deny_access_to_instance(ctx, share_instance, access) def deny_access_to_instance(self, context, share_instance, access): self._conditionally_transition_share_instance_access_rules_status( context, share_instance) updates = {'state': constants.ACCESS_STATE_QUEUED_TO_DENY} self.access_helper.get_and_update_share_instance_access_rule( context, access['id'], updates=updates, share_instance_id=share_instance['id']) self.share_rpcapi.update_access(context, share_instance) def access_get_all(self, context, share, filters=None): """Returns all access rules for share.""" policy.check_policy(context, 'share', 'access_get_all') rules = self.db.share_access_get_all_for_share( context, share['id'], filters=filters) return rules def access_get(self, context, access_id): """Returns access rule with the id.""" policy.check_policy(context, 'share', 'access_get') rule = self.db.share_access_get(context, access_id) return rule @policy.wrap_check_policy('share') def get_share_metadata(self, context, share): """Get all metadata associated with a share.""" rv = self.db.share_metadata_get(context, share['id']) return dict(rv.items()) @policy.wrap_check_policy('share') def delete_share_metadata(self, context, share, key): """Delete the given metadata item from a share.""" self.db.share_metadata_delete(context, share['id'], key) def _check_is_share_busy(self, 
share): """Raises an exception if share is busy with an active task.""" if share.is_busy: msg = _("Share %(share_id)s is busy as part of an active " "task: %(task)s.") % { 'share_id': share['id'], 'task': share['task_state'] } raise exception.ShareBusyException(reason=msg) def _check_metadata_properties(self, metadata=None): if not metadata: metadata = {} for k, v in metadata.items(): if not k: msg = _("Metadata property key is blank.") LOG.warning(msg) raise exception.InvalidMetadata(message=msg) if len(k) > 255: msg = _("Metadata property key is " "greater than 255 characters.") LOG.warning(msg) raise exception.InvalidMetadataSize(message=msg) if not v: msg = _("Metadata property value is blank.") LOG.warning(msg) raise exception.InvalidMetadata(message=msg) if len(v) > 1023: msg = _("Metadata property value is " "greater than 1023 characters.") LOG.warning(msg) raise exception.InvalidMetadataSize(message=msg) def update_share_access_metadata(self, context, access_id, metadata): """Updates share access metadata.""" self._check_metadata_properties(metadata) return self.db.share_access_metadata_update( context, access_id, metadata) @policy.wrap_check_policy('share') def update_share_metadata(self, context, share, metadata, delete=False): """Updates or creates share metadata. If delete is True, metadata items that are not specified in the `metadata` argument will be deleted. """ orig_meta = self.get_share_metadata(context, share) if delete: _metadata = metadata else: _metadata = orig_meta.copy() _metadata.update(metadata) self._check_metadata_properties(_metadata) self.db.share_metadata_update(context, share['id'], _metadata, delete) return _metadata def get_share_network(self, context, share_net_id): return self.db.share_network_get(context, share_net_id) def extend(self, context, share, new_size): policy.check_policy(context, 'share', 'extend') if share['status'] != constants.STATUS_AVAILABLE: msg_params = { 'valid_status': constants.STATUS_AVAILABLE, 'share_id': share['id'], 'status': share['status'], } msg = _("Share %(share_id)s status must be '%(valid_status)s' " "to extend, but current status is: " "%(status)s.") % msg_params raise exception.InvalidShare(reason=msg) self._check_is_share_busy(share) size_increase = int(new_size) - share['size'] if size_increase <= 0: msg = (_("New size for extend must be greater " "than current size. (current: %(size)s, " "extended: %(new_size)s).") % {'new_size': new_size, 'size': share['size']}) raise exception.InvalidInput(reason=msg) replicas = self.db.share_replicas_get_all_by_share( context, share['id']) supports_replication = len(replicas) > 0 deltas = { 'project_id': share['project_id'], 'gigabytes': size_increase, 'user_id': share['user_id'], 'share_type_id': share['instance']['share_type_id'] } # NOTE(carloss): If the share type supports replication, we must get # all the replicas that pertain to the share and calculate the final # size (size to increase * amount of replicas), since all the replicas # are going to be extended when the driver sync them. 
if supports_replication: replica_gigs_to_increase = len(replicas) * size_increase deltas.update({'replica_gigabytes': replica_gigs_to_increase}) try: # we give the user_id of the share, to update the quota usage # for the user, who created the share, because on share delete # only this quota will be decreased reservations = QUOTAS.reserve(context, **deltas) except exception.OverQuota as exc: # Check if the exceeded quota was 'gigabytes' self._check_if_share_quotas_exceeded(context, exc, share['size'], operation='extend') # NOTE(carloss): Check if the exceeded quota is # 'replica_gigabytes'. If so the failure could be caused due to # lack of quotas to extend the share's replicas, then the # '_check_if_replica_quotas_exceeded' method can't be reused here # since the error message must be different from the default one. if supports_replication: overs = exc.kwargs['overs'] usages = exc.kwargs['usages'] quotas = exc.kwargs['quotas'] def _consumed(name): return (usages[name]['reserved'] + usages[name]['in_use']) if 'replica_gigabytes' in overs: LOG.warning("Replica gigabytes quota exceeded " "for %(s_pid)s, tried to extend " "%(s_size)sG share (%(d_consumed)dG of " "%(d_quota)dG already consumed).", { 's_pid': context.project_id, 's_size': share['size'], 'd_consumed': _consumed( 'replica_gigabytes'), 'd_quota': quotas['replica_gigabytes']}) msg = _("Failed while extending a share with replication " "support. There is no available quota to extend " "the share and its %(count)d replicas. Maximum " "number of allowed replica_gigabytes is " "exceeded.") % {'count': len(replicas)} raise exception.ShareReplicaSizeExceedsAvailableQuota( message=msg) self.update(context, share, {'status': constants.STATUS_EXTENDING}) self.share_rpcapi.extend_share(context, share, new_size, reservations) LOG.info("Extend share request issued successfully.", resource=share) def shrink(self, context, share, new_size): policy.check_policy(context, 'share', 'shrink') status = six.text_type(share['status']).lower() valid_statuses = (constants.STATUS_AVAILABLE, constants.STATUS_SHRINKING_POSSIBLE_DATA_LOSS_ERROR) if status not in valid_statuses: msg_params = { 'valid_status': ", ".join(valid_statuses), 'share_id': share['id'], 'status': status, } msg = _("Share %(share_id)s status must in (%(valid_status)s) " "to shrink, but current status is: " "%(status)s.") % msg_params raise exception.InvalidShare(reason=msg) self._check_is_share_busy(share) size_decrease = int(share['size']) - int(new_size) if size_decrease <= 0 or new_size <= 0: msg = (_("New size for shrink must be less " "than current size and greater than 0 (current: %(size)s," " new: %(new_size)s)") % {'new_size': new_size, 'size': share['size']}) raise exception.InvalidInput(reason=msg) self.update(context, share, {'status': constants.STATUS_SHRINKING}) self.share_rpcapi.shrink_share(context, share, new_size) LOG.info("Shrink share (id=%(id)s) request issued successfully." 
" New size: %(size)s", {'id': share['id'], 'size': new_size}) def snapshot_allow_access(self, context, snapshot, access_type, access_to): """Allow access to a share snapshot.""" access_exists = self.db.share_snapshot_check_for_existing_access( context, snapshot['id'], access_type, access_to) if access_exists: raise exception.ShareSnapshotAccessExists(access_type=access_type, access=access_to) values = { 'share_snapshot_id': snapshot['id'], 'access_type': access_type, 'access_to': access_to, } if any((instance['status'] != constants.STATUS_AVAILABLE) or (instance['share_instance']['host'] is None) for instance in snapshot.instances): msg = _("New access rules cannot be applied while the snapshot or " "any of its replicas or migration copies lacks a valid " "host or is not in %s state.") % constants.STATUS_AVAILABLE raise exception.InvalidShareSnapshotInstance(reason=msg) access = self.db.share_snapshot_access_create(context, values) for snapshot_instance in snapshot.instances: self.share_rpcapi.snapshot_update_access( context, snapshot_instance) return access def snapshot_deny_access(self, context, snapshot, access): """Deny access to a share snapshot.""" if any((instance['status'] != constants.STATUS_AVAILABLE) or (instance['share_instance']['host'] is None) for instance in snapshot.instances): msg = _("Access rules cannot be denied while the snapshot or " "any of its replicas or migration copies lacks a valid " "host or is not in %s state.") % constants.STATUS_AVAILABLE raise exception.InvalidShareSnapshotInstance(reason=msg) for snapshot_instance in snapshot.instances: rule = self.db.share_snapshot_instance_access_get( context, access['id'], snapshot_instance['id']) self.db.share_snapshot_instance_access_update( context, rule['access_id'], snapshot_instance['id'], {'state': constants.ACCESS_STATE_QUEUED_TO_DENY}) self.share_rpcapi.snapshot_update_access( context, snapshot_instance) def snapshot_access_get_all(self, context, snapshot): """Returns all access rules for share snapshot.""" rules = self.db.share_snapshot_access_get_all_for_share_snapshot( context, snapshot['id'], {}) return rules def snapshot_access_get(self, context, access_id): """Returns snapshot access rule with the id.""" rule = self.db.share_snapshot_access_get(context, access_id) return rule def snapshot_export_locations_get(self, context, snapshot): return self.db.share_snapshot_export_locations_get(context, snapshot) def snapshot_export_location_get(self, context, el_id): return self.db.share_snapshot_instance_export_location_get(context, el_id) manila-10.0.0/manila/share/access.py0000664000175000017500000006011313656750227017247 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import ipaddress from oslo_log import log from manila.common import constants from manila.i18n import _ from manila import utils import six LOG = log.getLogger(__name__) def locked_access_rules_operation(operation): """Lock decorator for access rules operations. 
Takes a named lock prior to executing the operation. The lock is named with the ID of the share instance to which the access rule belongs. Intended use: If an database operation to retrieve or update access rules uses this decorator, it will block actions on all access rules of the share instance until the named lock is free. This is used to avoid race conditions while performing access rules updates on a given share instance. """ def wrapped(*args, **kwargs): instance_id = kwargs.get('share_instance_id') @utils.synchronized( "locked_access_rules_operation_by_share_instance_%s" % instance_id, external=True) def locked_operation(*_args, **_kwargs): return operation(*_args, **_kwargs) return locked_operation(*args, **kwargs) return wrapped class ShareInstanceAccessDatabaseMixin(object): @locked_access_rules_operation def get_and_update_share_instance_access_rules_status( self, context, status=None, conditionally_change=None, share_instance_id=None): """Get and update the access_rules_status of a share instance. :param status: Set this parameter only if you want to omit the conditionally_change parameter; i.e, if you want to force a state change on the share instance regardless of the prior state. :param conditionally_change: Set this parameter to a dictionary of rule state transitions to be made. The key is the expected access_rules_status and the value is the state to transition the access_rules_status to. If the state is not as expected, no transition is performed. Default is {}, which means no state transitions will be made. :returns share_instance: if an update was made. """ if status is not None: updates = {'access_rules_status': status} elif conditionally_change: share_instance = self.db.share_instance_get( context, share_instance_id) access_rules_status = share_instance['access_rules_status'] try: updates = { 'access_rules_status': conditionally_change[access_rules_status], } except KeyError: updates = {} else: updates = {} if updates: share_instance = self.db.share_instance_update( context, share_instance_id, updates, with_share_data=True) return share_instance @locked_access_rules_operation def get_and_update_share_instance_access_rules(self, context, filters=None, updates=None, conditionally_change=None, share_instance_id=None): """Get and conditionally update all access rules of a share instance. :param updates: Set this parameter to a dictionary of key:value pairs corresponding to the keys in the ShareInstanceAccessMapping model. Include 'state' in this dictionary only if you want to omit the conditionally_change parameter; i.e, if you want to force a state change on all filtered rules regardless of the prior state. This parameter is always honored, regardless of whether conditionally_change allows for a state transition as desired. Example:: { 'access_key': 'bob007680048318f4239dfc1c192d5', 'access_level': 'ro', } :param conditionally_change: Set this parameter to a dictionary of rule state transitions to be made. The key is the expected state of the access rule the value is the state to transition the access rule to. If the state is not as expected, no transition is performed. Default is {}, which means no state transitions will be made. 
Example:: { 'queued_to_apply': 'applying', 'queued_to_deny': 'denying', } """ instance_rules = self.db.share_access_get_all_for_instance( context, share_instance_id, filters=filters) if instance_rules and (updates or conditionally_change): if not updates: updates = {} if not conditionally_change: conditionally_change = {} for rule in instance_rules: mapping_state = rule['state'] rule_updates = copy.deepcopy(updates) try: rule_updates['state'] = conditionally_change[mapping_state] except KeyError: pass if rule_updates: self.db.share_instance_access_update( context, rule['access_id'], share_instance_id, rule_updates) # Refresh the rules after the updates rules_to_get = { 'access_id': tuple([i['access_id'] for i in instance_rules]), } instance_rules = self.db.share_access_get_all_for_instance( context, share_instance_id, filters=rules_to_get) return instance_rules def get_share_instance_access_rules(self, context, filters=None, share_instance_id=None): return self.get_and_update_share_instance_access_rules( context, filters, None, None, share_instance_id) @locked_access_rules_operation def get_and_update_share_instance_access_rule(self, context, rule_id, updates=None, share_instance_id=None, conditionally_change=None): """Get and conditionally update a given share instance access rule. :param updates: Set this parameter to a dictionary of key:value pairs corresponding to the keys in the ShareInstanceAccessMapping model. Include 'state' in this dictionary only if you want to omit the conditionally_change parameter; i.e, if you want to force a state change regardless of the prior state. :param conditionally_change: Set this parameter to a dictionary of rule state transitions to be made. The key is the expected state of the access rule the value is the state to transition the access rule to. If the state is not as expected, no transition is performed. Default is {}, which means no state transitions will be made. Example:: { 'queued_to_apply': 'applying', 'queued_to_deny': 'denying', } """ instance_rule_mapping = self.db.share_instance_access_get( context, rule_id, share_instance_id) if not updates: updates = {} if conditionally_change: mapping_state = instance_rule_mapping['state'] try: updated_state = conditionally_change[mapping_state] updates.update({'state': updated_state}) except KeyError: msg = ("The state of the access rule %(rule_id)s (allowing " "access to share instance %(si)s) was not updated " "because its state was modified by another operation.") msg_payload = { 'si': share_instance_id, 'rule_id': rule_id, } LOG.debug(msg, msg_payload) if updates: self.db.share_instance_access_update( context, rule_id, share_instance_id, updates) # Refresh the rule after update instance_rule_mapping = self.db.share_instance_access_get( context, rule_id, share_instance_id) return instance_rule_mapping @locked_access_rules_operation def delete_share_instance_access_rules(self, context, access_rules, share_instance_id=None): for rule in access_rules: self.db.share_instance_access_delete(context, rule['id']) class ShareInstanceAccess(ShareInstanceAccessDatabaseMixin): def __init__(self, db, driver): self.db = db self.driver = driver def update_access_rules(self, context, share_instance_id, delete_all_rules=False, share_server=None): """Update access rules for a given share instance. 
:param context: request context :param share_instance_id: ID of the share instance :param delete_all_rules: set this parameter to True if all existing access rules must be denied for a given share instance :param share_server: Share server model or None """ share_instance = self.db.share_instance_get( context, share_instance_id, with_share_data=True) msg_payload = { 'si': share_instance_id, 'shr': share_instance['share_id'], } if delete_all_rules: updates = { 'state': constants.ACCESS_STATE_QUEUED_TO_DENY, } self.get_and_update_share_instance_access_rules( context, updates=updates, share_instance_id=share_instance_id) # Is there a sync in progress? If yes, ignore the incoming request. rule_filter = { 'state': (constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_DENYING), } syncing_rules = self.get_and_update_share_instance_access_rules( context, filters=rule_filter, share_instance_id=share_instance_id) if syncing_rules: msg = ("Access rules are being synced for share instance " "%(si)s belonging to share %(shr)s, any rule changes will " "be applied shortly.") LOG.debug(msg, msg_payload) else: rules_to_apply_or_deny = ( self._update_and_get_unsynced_access_rules_from_db( context, share_instance_id) ) if rules_to_apply_or_deny: msg = ("Updating access rules for share instance %(si)s " "belonging to share %(shr)s.") LOG.debug(msg, msg_payload) self._update_access_rules(context, share_instance_id, share_server=share_server) else: msg = ("All access rules have been synced for share instance " "%(si)s belonging to share %(shr)s.") LOG.debug(msg, msg_payload) def _update_access_rules(self, context, share_instance_id, share_server=None): # Refresh the share instance model share_instance = self.db.share_instance_get( context, share_instance_id, with_share_data=True) conditionally_change = { constants.STATUS_ACTIVE: constants.SHARE_INSTANCE_RULES_SYNCING, } share_instance = ( self.get_and_update_share_instance_access_rules_status( context, conditionally_change=conditionally_change, share_instance_id=share_instance_id) or share_instance ) rules_to_be_removed_from_db = [] # Populate rules to send to the driver (access_rules_to_be_on_share, add_rules, delete_rules) = ( self._get_rules_to_send_to_driver(context, share_instance) ) if share_instance['cast_rules_to_readonly']: # Ensure read/only semantics for a migrating instances access_rules_to_be_on_share = self._set_rules_to_readonly( access_rules_to_be_on_share, share_instance) add_rules = [] rules_to_be_removed_from_db = delete_rules delete_rules = [] try: driver_rule_updates = self._update_rules_through_share_driver( context, share_instance, access_rules_to_be_on_share, add_rules, delete_rules, rules_to_be_removed_from_db, share_server) self._process_driver_rule_updates( context, driver_rule_updates, share_instance_id) # Update access rules that are still in 'applying' state conditionally_change = { constants.ACCESS_STATE_APPLYING: constants.ACCESS_STATE_ACTIVE, } self.get_and_update_share_instance_access_rules( context, share_instance_id=share_instance_id, conditionally_change=conditionally_change) except Exception: conditionally_change_rule_state = { constants.ACCESS_STATE_APPLYING: constants.ACCESS_STATE_ERROR, constants.ACCESS_STATE_DENYING: constants.ACCESS_STATE_ERROR, } self.get_and_update_share_instance_access_rules( context, share_instance_id=share_instance_id, conditionally_change=conditionally_change_rule_state) conditionally_change_access_rules_status = { constants.ACCESS_STATE_ACTIVE: constants.STATUS_ERROR, 
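# Illustrative sketch of the compare-and-set pattern referenced in the comment
# above (and used by the get_and_update_* helpers in this module): a state
# transition is applied only if the rule is still in an expected state, so
# updates raced by another operation are silently dropped. The helper name is
# hypothetical.
def _example_conditional_transition(current_state, conditionally_change):
    """Return the new state, or None when no transition should be made."""
    # e.g. conditionally_change = {'applying': 'active', 'denying': 'active'}
    return conditionally_change.get(current_state)

# _example_conditional_transition('applying', {'applying': 'active'})  # 'active'
# _example_conditional_transition('error', {'applying': 'active'})     # None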
constants.SHARE_INSTANCE_RULES_SYNCING: constants.STATUS_ERROR, } self.get_and_update_share_instance_access_rules_status( context, share_instance_id=share_instance_id, conditionally_change=conditionally_change_access_rules_status) raise if rules_to_be_removed_from_db: delete_rules = rules_to_be_removed_from_db self.delete_share_instance_access_rules( context, delete_rules, share_instance_id=share_instance['id']) self._loop_for_refresh_else_update_access_rules_status( context, share_instance_id, share_server) msg = _("Access rules were successfully modified for share instance " "%(si)s belonging to share %(shr)s.") msg_payload = { 'si': share_instance['id'], 'shr': share_instance['share_id'], } LOG.info(msg, msg_payload) def _update_rules_through_share_driver(self, context, share_instance, access_rules_to_be_on_share, add_rules, delete_rules, rules_to_be_removed_from_db, share_server): driver_rule_updates = {} share_protocol = share_instance['share_proto'].lower() if (not self.driver.ipv6_implemented and share_protocol == 'nfs'): add_rules = self._filter_ipv6_rules(add_rules) delete_rules = self._filter_ipv6_rules(delete_rules) access_rules_to_be_on_share = self._filter_ipv6_rules( access_rules_to_be_on_share) try: driver_rule_updates = self.driver.update_access( context, share_instance, access_rules_to_be_on_share, add_rules=add_rules, delete_rules=delete_rules, share_server=share_server ) or {} except NotImplementedError: # NOTE(u_glide): Fallback to legacy allow_access/deny_access # for drivers without update_access() method support self._update_access_fallback(context, add_rules, delete_rules, rules_to_be_removed_from_db, share_instance, share_server) return driver_rule_updates def _loop_for_refresh_else_update_access_rules_status(self, context, share_instance_id, share_server): # Do we need to re-sync or apply any new changes? if self._check_needs_refresh(context, share_instance_id): self._update_access_rules(context, share_instance_id, share_server=share_server) else: # Switch the share instance's access_rules_status to 'active' # if there are no more rules in 'error' state, else, ensure # 'error' state. rule_filter = {'state': constants.STATUS_ERROR} rules_in_error_state = ( self.get_and_update_share_instance_access_rules( context, filters=rule_filter, share_instance_id=share_instance_id) ) if not rules_in_error_state: conditionally_change = { constants.SHARE_INSTANCE_RULES_SYNCING: constants.STATUS_ACTIVE, constants.SHARE_INSTANCE_RULES_ERROR: constants.STATUS_ACTIVE, } self.get_and_update_share_instance_access_rules_status( context, conditionally_change=conditionally_change, share_instance_id=share_instance_id) else: conditionally_change = { constants.SHARE_INSTANCE_RULES_SYNCING: constants.SHARE_INSTANCE_RULES_ERROR, } self.get_and_update_share_instance_access_rules_status( context, conditionally_change=conditionally_change, share_instance_id=share_instance_id) def _process_driver_rule_updates(self, context, driver_rule_updates, share_instance_id): for rule_id, rule_updates in driver_rule_updates.items(): if 'state' in rule_updates: # We allow updates *only* if the state is unchanged from # the time this update was initiated. It is possible # that the access rule was denied at the API prior to # the driver reporting that the access rule was added # successfully. 
state = rule_updates.pop('state') conditional_state_updates = { constants.ACCESS_STATE_APPLYING: state, constants.ACCESS_STATE_DENYING: state, constants.ACCESS_STATE_ACTIVE: state, } else: conditional_state_updates = {} self.get_and_update_share_instance_access_rule( context, rule_id, updates=rule_updates, share_instance_id=share_instance_id, conditionally_change=conditional_state_updates) @staticmethod def _set_rules_to_readonly(access_rules_to_be_on_share, share_instance): LOG.debug("All access rules of share instance %s are being " "cast to read-only for a migration or because the " "instance is a readable replica.", share_instance['id']) for rule in access_rules_to_be_on_share: rule['access_level'] = constants.ACCESS_LEVEL_RO return access_rules_to_be_on_share @staticmethod def _filter_ipv6_rules(rules): filtered = [] for rule in rules: if rule['access_type'] == 'ip': ip_version = ipaddress.ip_network( six.text_type(rule['access_to'])).version if 6 == ip_version: continue filtered.append(rule) return filtered def _get_rules_to_send_to_driver(self, context, share_instance): add_rules = [] delete_rules = [] access_filters = { 'state': (constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_ACTIVE, constants.ACCESS_STATE_DENYING), } existing_rules_in_db = self.get_and_update_share_instance_access_rules( context, filters=access_filters, share_instance_id=share_instance['id']) # Update queued rules to transitional states for rule in existing_rules_in_db: if rule['state'] == constants.ACCESS_STATE_APPLYING: add_rules.append(rule) elif rule['state'] == constants.ACCESS_STATE_DENYING: delete_rules.append(rule) delete_rule_ids = [r['id'] for r in delete_rules] access_rules_to_be_on_share = [ r for r in existing_rules_in_db if r['id'] not in delete_rule_ids ] return access_rules_to_be_on_share, add_rules, delete_rules def _check_needs_refresh(self, context, share_instance_id): rules_to_apply_or_deny = ( self._update_and_get_unsynced_access_rules_from_db( context, share_instance_id) ) return any(rules_to_apply_or_deny) def _update_access_fallback(self, context, add_rules, delete_rules, remove_rules, share_instance, share_server): for rule in add_rules: LOG.info( "Applying access rule '%(rule)s' for share " "instance '%(instance)s'", {'rule': rule['id'], 'instance': share_instance['id']} ) self.driver.allow_access( context, share_instance, rule, share_server=share_server ) # NOTE(ganso): Fallback mode temporary compatibility workaround if remove_rules: delete_rules.extend(remove_rules) for rule in delete_rules: LOG.info( "Denying access rule '%(rule)s' from share " "instance '%(instance)s'", {'rule': rule['id'], 'instance': share_instance['id']} ) self.driver.deny_access( context, share_instance, rule, share_server=share_server ) def _update_and_get_unsynced_access_rules_from_db(self, context, share_instance_id): rule_filter = { 'state': (constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_QUEUED_TO_DENY), } conditionally_change = { constants.ACCESS_STATE_QUEUED_TO_APPLY: constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_QUEUED_TO_DENY: constants.ACCESS_STATE_DENYING, } rules_to_apply_or_deny = ( self.get_and_update_share_instance_access_rules( context, filters=rule_filter, share_instance_id=share_instance_id, conditionally_change=conditionally_change) ) return rules_to_apply_or_deny def reset_applying_rules(self, context, share_instance_id): conditional_updates = { constants.ACCESS_STATE_APPLYING: constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_DENYING: 
constants.ACCESS_STATE_QUEUED_TO_DENY, } self.get_and_update_share_instance_access_rules( context, share_instance_id=share_instance_id, conditionally_change=conditional_updates) manila-10.0.0/manila/share/drivers_private_data.py0000664000175000017500000001377713656750227022225 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Module provides possibility for share drivers to store private information related to common Manila models like Share or Snapshot. """ import abc from oslo_config import cfg from oslo_utils import importutils from oslo_utils import uuidutils import six from manila.db import api as db_api from manila.i18n import _ private_data_opts = [ cfg.StrOpt( 'drivers_private_storage_class', default='manila.share.drivers_private_data.SqlStorageDriver', help='The full class name of the Private Data Driver class to use.'), ] CONF = cfg.CONF @six.add_metaclass(abc.ABCMeta) class StorageDriver(object): def __init__(self, context, backend_host): # Backend shouldn't access data stored by another backend self.backend_host = backend_host self.context = context @abc.abstractmethod def get(self, entity_id, key, default): """Backend implementation for DriverPrivateData.get() method. Should return all keys for given 'entity_id' if 'key' is None. Otherwise should return value for provided 'key'. If values for provided 'entity_id' or 'key' not found, should return 'default'. See DriverPrivateData.get() method for more details. """ @abc.abstractmethod def update(self, entity_id, details, delete_existing): """Backend implementation for DriverPrivateData.update() method. Should update details for given 'entity_id' with behaviour defined by 'delete_existing' boolean flag. See DriverPrivateData.update() method for more details. """ @abc.abstractmethod def delete(self, entity_id, key): """Backend implementation for DriverPrivateData.delete() method. Should return delete all keys if 'key' is None. Otherwise should delete value for provided 'key'. See DriverPrivateData.update() method for more details. """ class SqlStorageDriver(StorageDriver): def update(self, entity_id, details, delete_existing): return db_api.driver_private_data_update( self.context, entity_id, details, delete_existing ) def get(self, entity_id, key, default): return db_api.driver_private_data_get( self.context, entity_id, key, default ) def delete(self, entity_id, key): return db_api.driver_private_data_delete( self.context, entity_id, key ) class DriverPrivateData(object): def __init__(self, storage=None, *args, **kwargs): """Init method. 
:param storage: None or inheritor of StorageDriver abstract class :param config_group: Optional -- Config group used for loading settings :param context: Optional -- Current context :param backend_host: Optional -- Driver host """ config_group_name = kwargs.get('config_group') CONF.register_opts(private_data_opts, group=config_group_name) if storage is not None: self._storage = storage elif 'context' in kwargs and 'backend_host' in kwargs: if config_group_name: conf = getattr(CONF, config_group_name) else: conf = CONF storage_class = conf.drivers_private_storage_class cls = importutils.import_class(storage_class) self._storage = cls(kwargs.get('context'), kwargs.get('backend_host')) else: msg = _("You should provide 'storage' parameter or" " 'context' and 'backend_host' parameters.") raise ValueError(msg) def get(self, entity_id, key=None, default=None): """Get one, list or all key-value pairs. :param entity_id: Model UUID :param key: Key string or list of keys :param default: Default value for case when key(s) not found :returns: string or dict """ self._validate_entity_id(entity_id) return self._storage.get(entity_id, key, default) def update(self, entity_id, details, delete_existing=False): """Update or create specified key-value pairs. :param entity_id: Model UUID :param details: dict with key-value pairs data. Keys and values should be strings. :param delete_existing: boolean flag which determines behaviour for existing key-value pairs: True - remove all existing key-value pairs False (default) - leave as is """ self._validate_entity_id(entity_id) if not isinstance(details, dict): msg = (_("Provided details %s is not valid dict.") % six.text_type(details)) raise ValueError(msg) return self._storage.update( entity_id, details, delete_existing) def delete(self, entity_id, key=None): """Delete one, list or all key-value pairs. :param entity_id: Model UUID :param key: Key string or list of keys """ self._validate_entity_id(entity_id) return self._storage.delete(entity_id, key) @staticmethod def _validate_entity_id(entity_id): if not uuidutils.is_uuid_like(entity_id): msg = (_("Provided entity_id %s is not valid UUID.") % six.text_type(entity_id)) raise ValueError(msg) manila-10.0.0/manila/share/drivers/0000775000175000017500000000000013656750362017111 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/0000775000175000017500000000000013656750362020400 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/utils.py0000664000175000017500000001747113656750227022124 0ustar zuulzuul00000000000000# Copyright (c) 2015 Bob Callaway. All rights reserved. # Copyright (c) 2015 Tom Barron. All rights reserved. # Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
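# NOTE(editor): illustrative sketch only, not called anywhere in Manila. It
# shows how a backend might exercise the DriverPrivateData wrapper documented
# in drivers_private_data.py above; the backend host, key names and values
# below are hypothetical.
def _example_driver_private_data_usage(context):
    from oslo_utils import uuidutils

    from manila.share import drivers_private_data

    private_storage = drivers_private_data.DriverPrivateData(
        context=context, backend_host='backend1@pool1')

    # Any UUID-like model id is accepted; a real driver would use share['id'].
    share_id = uuidutils.generate_uuid()

    # Create or update driver-specific details for the model.
    private_storage.update(share_id, {'export_path': '/vol/share_1'})

    # Read a single key back; 'default' is returned when the key is absent.
    export_path = private_storage.get(share_id, 'export_path', default=None)

    # Remove the key once it is no longer needed.
    private_storage.delete(share_id, 'export_path')

    return export_path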
"""Utilities for NetApp drivers.""" import collections import decimal import platform import re from oslo_concurrency import processutils as putils from oslo_log import log import six from manila import exception from manila.i18n import _ from manila import version LOG = log.getLogger(__name__) VALID_TRACE_FLAGS = ['method', 'api'] TRACE_METHOD = False TRACE_API = False API_TRACE_PATTERN = '(.*)' def validate_driver_instantiation(**kwargs): """Checks if a driver is instantiated other than by the unified driver. Helps check direct instantiation of netapp drivers. Call this function in every netapp block driver constructor. """ if kwargs and kwargs.get('netapp_mode') == 'proxy': return LOG.warning('Please use NetAppDriver in the configuration file ' 'to load the driver instead of directly specifying ' 'the driver module name.') def check_flags(required_flags, configuration): """Ensure that the flags we care about are set.""" for flag in required_flags: if getattr(configuration, flag, None) is None: msg = _('Configuration value %s is not set.') % flag raise exception.InvalidInput(reason=msg) def round_down(value, precision='0.00'): """Round a number downward using a specified level of precision. Example: round_down(float(total_space_in_bytes) / units.Gi, '0.01') """ return float(decimal.Decimal(six.text_type(value)).quantize( decimal.Decimal(precision), rounding=decimal.ROUND_DOWN)) def setup_tracing(trace_flags_string, api_trace_pattern=API_TRACE_PATTERN): global TRACE_METHOD global TRACE_API global API_TRACE_PATTERN TRACE_METHOD = False TRACE_API = False API_TRACE_PATTERN = api_trace_pattern if trace_flags_string: flags = trace_flags_string.split(',') flags = [flag.strip() for flag in flags] for invalid_flag in list(set(flags) - set(VALID_TRACE_FLAGS)): LOG.warning('Invalid trace flag: %s', invalid_flag) try: re.compile(api_trace_pattern) except re.error: msg = _('Cannot parse the API trace pattern. %s is not a ' 'valid python regular expression.') % api_trace_pattern raise exception.BadConfigurationException(reason=msg) TRACE_METHOD = 'method' in flags TRACE_API = 'api' in flags def trace(f): def trace_wrapper(self, *args, **kwargs): if TRACE_METHOD: LOG.debug('Entering method %s', f.__name__) result = f(self, *args, **kwargs) if TRACE_METHOD: LOG.debug('Leaving method %s', f.__name__) return result return trace_wrapper def convert_to_list(value): if value is None: return [] elif isinstance(value, six.string_types): return [value] elif isinstance(value, collections.Iterable): return list(value) else: return [value] class OpenStackInfo(object): """OS/distribution, release, and version. NetApp uses these fields as content for EMS log entry. 
""" PACKAGE_NAME = 'python-manila' def __init__(self): self._version = 'unknown version' self._release = 'unknown release' self._vendor = 'unknown vendor' self._platform = 'unknown platform' def _update_version_from_version_string(self): try: self._version = version.version_info.version_string() except Exception: pass def _update_release_from_release_string(self): try: self._release = version.version_info.release_string() except Exception: pass def _update_platform(self): try: self._platform = platform.platform() except Exception: pass @staticmethod def _get_version_info_version(): return version.version_info.version @staticmethod def _get_version_info_release(): return version.version_info.release_string() def _update_info_from_version_info(self): try: ver = self._get_version_info_version() if ver: self._version = ver except Exception: pass try: rel = self._get_version_info_release() if rel: self._release = rel except Exception: pass # RDO, RHEL-OSP, Mirantis on Redhat, SUSE. def _update_info_from_rpm(self): LOG.debug('Trying rpm command.') try: out, err = putils.execute("rpm", "-q", "--queryformat", "'%{version}\t%{release}\t%{vendor}'", self.PACKAGE_NAME) if not out: LOG.info('No rpm info found for %(pkg)s package.', { 'pkg': self.PACKAGE_NAME}) return False parts = out.split() self._version = parts[0] self._release = parts[1] self._vendor = ' '.join(parts[2::]) return True except Exception as e: LOG.info('Could not run rpm command: %(msg)s.', { 'msg': e}) return False # Ubuntu, Mirantis on Ubuntu. def _update_info_from_dpkg(self): LOG.debug('Trying dpkg-query command.') try: _vendor = None out, err = putils.execute("dpkg-query", "-W", "-f='${Version}'", self.PACKAGE_NAME) if not out: LOG.info( 'No dpkg-query info found for %(pkg)s package.', { 'pkg': self.PACKAGE_NAME}) return False # Debian format: [epoch:]upstream_version[-debian_revision] deb_version = out # In case epoch or revision is missing, copy entire string. _release = deb_version if ':' in deb_version: deb_epoch, upstream_version = deb_version.split(':') _release = upstream_version if '-' in deb_version: deb_revision = deb_version.split('-')[1] _vendor = deb_revision self._release = _release if _vendor: self._vendor = _vendor return True except Exception as e: LOG.info('Could not run dpkg-query command: %(msg)s.', { 'msg': e}) return False def _update_openstack_info(self): self._update_version_from_version_string() self._update_release_from_release_string() self._update_platform() # Some distributions override with more meaningful information. self._update_info_from_version_info() # See if we have still more targeted info from rpm or apt. found_package = self._update_info_from_rpm() if not found_package: self._update_info_from_dpkg() def info(self): self._update_openstack_info() return '%(version)s|%(release)s|%(vendor)s|%(platform)s' % { 'version': self._version, 'release': self._release, 'vendor': self._vendor, 'platform': self._platform} manila-10.0.0/manila/share/drivers/netapp/__init__.py0000664000175000017500000000000013656750227022477 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/common.py0000664000175000017500000001117513656750227022247 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unified driver for NetApp storage systems. Supports multiple storage systems of different families and driver modes. """ from oslo_log import log from oslo_utils import importutils from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.netapp import options from manila.share.drivers.netapp import utils as na_utils LOG = log.getLogger(__name__) MULTI_SVM = 'multi_svm' SINGLE_SVM = 'single_svm' DATAONTAP_CMODE_PATH = 'manila.share.drivers.netapp.dataontap.cluster_mode' # Add new drivers here, no other code changes required. NETAPP_UNIFIED_DRIVER_REGISTRY = { 'ontap_cluster': { MULTI_SVM: DATAONTAP_CMODE_PATH + '.drv_multi_svm.NetAppCmodeMultiSvmShareDriver', SINGLE_SVM: DATAONTAP_CMODE_PATH + '.drv_single_svm.NetAppCmodeSingleSvmShareDriver', }, } NETAPP_UNIFIED_DRIVER_DEFAULT_MODE = { 'ontap_cluster': MULTI_SVM, } class NetAppDriver(object): """"NetApp unified share storage driver. Acts as a factory to create NetApp storage drivers based on the storage family and driver mode configured. """ REQUIRED_FLAGS = ['netapp_storage_family', 'driver_handles_share_servers'] def __new__(cls, *args, **kwargs): config = kwargs.get('configuration', None) if not config: raise exception.InvalidInput( reason=_('Required configuration not found.')) config.append_config_values(driver.share_opts) config.append_config_values(options.netapp_proxy_opts) na_utils.check_flags(NetAppDriver.REQUIRED_FLAGS, config) app_version = na_utils.OpenStackInfo().info() LOG.info('OpenStack OS Version Info: %s', app_version) kwargs['app_version'] = app_version driver_mode = NetAppDriver._get_driver_mode( config.netapp_storage_family, config.driver_handles_share_servers) return NetAppDriver._create_driver(config.netapp_storage_family, driver_mode, *args, **kwargs) @staticmethod def _get_driver_mode(storage_family, driver_handles_share_servers): if driver_handles_share_servers is None: driver_mode = NETAPP_UNIFIED_DRIVER_DEFAULT_MODE.get( storage_family.lower()) if driver_mode: LOG.debug('Default driver mode %s selected.', driver_mode) else: raise exception.InvalidInput( reason=_('Driver mode was not specified and a default ' 'value could not be determined from the ' 'specified storage family.')) elif driver_handles_share_servers: driver_mode = MULTI_SVM else: driver_mode = SINGLE_SVM return driver_mode @staticmethod def _create_driver(storage_family, driver_mode, *args, **kwargs): """"Creates an appropriate driver based on family and mode.""" storage_family = storage_family.lower() fmt = {'storage_family': storage_family, 'driver_mode': driver_mode} LOG.info('Requested unified config: %(storage_family)s and ' '%(driver_mode)s.', fmt) family_meta = NETAPP_UNIFIED_DRIVER_REGISTRY.get(storage_family) if family_meta is None: raise exception.InvalidInput( reason=_('Storage family %s is not supported.') % storage_family) driver_loc = family_meta.get(driver_mode) if driver_loc is None: raise exception.InvalidInput( reason=_('Driver mode %(driver_mode)s is not supported ' 'for storage family %(storage_family)s.') % fmt) kwargs['netapp_mode'] = 'proxy' driver = 
importutils.import_object(driver_loc, *args, **kwargs) LOG.info('NetApp driver of family %(storage_family)s and mode ' '%(driver_mode)s loaded.', fmt) driver.ipv6_implemented = True return driver manila-10.0.0/manila/share/drivers/netapp/dataontap/0000775000175000017500000000000013656750362022353 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/dataontap/__init__.py0000664000175000017500000000000013656750227024452 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/dataontap/client/0000775000175000017500000000000013656750362023631 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/dataontap/client/__init__.py0000664000175000017500000000000013656750227025730 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/dataontap/client/api.py0000664000175000017500000005627013656750227024766 0ustar zuulzuul00000000000000# Copyright (c) 2014 Navneet Singh. All rights reserved. # Copyright (c) 2014 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ NetApp API for Data ONTAP and OnCommand DFM. Contains classes required to issue API calls to Data ONTAP and OnCommand DFM. """ import copy import re from lxml import etree from oslo_log import log import six from six.moves import urllib from manila import exception from manila.i18n import _ from manila.share.drivers.netapp import utils LOG = log.getLogger(__name__) EONTAPI_EINVAL = '22' EVOLOPNOTSUPP = '160' EAPIERROR = '13001' EAPINOTFOUND = '13005' ESNAPSHOTNOTALLOWED = '13023' EVOLUMEOFFLINE = '13042' EINTERNALERROR = '13114' EDUPLICATEENTRY = '13130' EVOLNOTCLONE = '13170' EVOLMOVE_CANNOT_MOVE_TO_CFO = '13633' EAGGRDOESNOTEXIST = '14420' EVOL_NOT_MOUNTED = '14716' ESIS_CLONE_NOT_LICENSED = '14956' EOBJECTNOTFOUND = '15661' E_VIFMGR_PORT_ALREADY_ASSIGNED_TO_BROADCAST_DOMAIN = '18605' ERELATION_EXISTS = '17122' ENOTRANSFER_IN_PROGRESS = '17130' ETRANSFER_IN_PROGRESS = '17137' EANOTHER_OP_ACTIVE = '17131' ERELATION_NOT_QUIESCED = '17127' ESOURCE_IS_DIFFERENT = '17105' EVOL_CLONE_BEING_SPLIT = '17151' class NaServer(object): """Encapsulates server connection logic.""" TRANSPORT_TYPE_HTTP = 'http' TRANSPORT_TYPE_HTTPS = 'https' SERVER_TYPE_FILER = 'filer' SERVER_TYPE_DFM = 'dfm' URL_FILER = 'servlets/netapp.servlets.admin.XMLrequest_filer' URL_DFM = 'apis/XMLrequest' NETAPP_NS = 'http://www.netapp.com/filer/admin' STYLE_LOGIN_PASSWORD = 'basic_auth' STYLE_CERTIFICATE = 'certificate_auth' def __init__(self, host, server_type=SERVER_TYPE_FILER, transport_type=TRANSPORT_TYPE_HTTP, style=STYLE_LOGIN_PASSWORD, username=None, password=None, port=None, trace=False, api_trace_pattern=utils.API_TRACE_PATTERN): self._host = host self.set_server_type(server_type) self.set_transport_type(transport_type) self.set_style(style) if port: self.set_port(port) self._username = username self._password = password self._trace = trace self._api_trace_pattern = api_trace_pattern self._refresh_conn = True LOG.debug('Using NetApp controller: %s', self._host) def 
get_transport_type(self): """Get the transport type protocol.""" return self._protocol def set_transport_type(self, transport_type): """Set the transport type protocol for API. Supports http and https transport types. """ if transport_type.lower() not in ( NaServer.TRANSPORT_TYPE_HTTP, NaServer.TRANSPORT_TYPE_HTTPS): raise ValueError('Unsupported transport type') self._protocol = transport_type.lower() if self._protocol == NaServer.TRANSPORT_TYPE_HTTP: if self._server_type == NaServer.SERVER_TYPE_FILER: self.set_port(80) else: self.set_port(8088) else: if self._server_type == NaServer.SERVER_TYPE_FILER: self.set_port(443) else: self.set_port(8488) self._refresh_conn = True def get_style(self): """Get the authorization style for communicating with the server.""" return self._auth_style def set_style(self, style): """Set the authorization style for communicating with the server. Supports basic_auth for now. Certificate_auth mode to be done. """ if style.lower() not in (NaServer.STYLE_LOGIN_PASSWORD, NaServer.STYLE_CERTIFICATE): raise ValueError('Unsupported authentication style') self._auth_style = style.lower() def get_server_type(self): """Get the target server type.""" return self._server_type def set_server_type(self, server_type): """Set the target server type. Supports filer and dfm server types. """ if server_type.lower() not in (NaServer.SERVER_TYPE_FILER, NaServer.SERVER_TYPE_DFM): raise ValueError('Unsupported server type') self._server_type = server_type.lower() if self._server_type == NaServer.SERVER_TYPE_FILER: self._url = NaServer.URL_FILER else: self._url = NaServer.URL_DFM self._ns = NaServer.NETAPP_NS self._refresh_conn = True def set_api_version(self, major, minor): """Set the API version.""" try: self._api_major_version = int(major) self._api_minor_version = int(minor) self._api_version = (six.text_type(major) + "." 
+ six.text_type(minor)) except ValueError: raise ValueError('Major and minor versions must be integers') self._refresh_conn = True def set_system_version(self, system_version): """Set the ONTAP system version.""" self._system_version = system_version self._refresh_conn = True def get_api_version(self): """Gets the API version tuple.""" if hasattr(self, '_api_version'): return (self._api_major_version, self._api_minor_version) return None def get_system_version(self): """Gets the ONTAP system version.""" if hasattr(self, '_system_version'): return self._system_version return None def set_port(self, port): """Set the server communication port.""" try: int(port) except ValueError: raise ValueError('Port must be integer') self._port = six.text_type(port) self._refresh_conn = True def get_port(self): """Get the server communication port.""" return self._port def set_timeout(self, seconds): """Sets the timeout in seconds.""" try: self._timeout = int(seconds) except ValueError: raise ValueError('timeout in seconds must be integer') def get_timeout(self): """Gets the timeout in seconds if set.""" if hasattr(self, '_timeout'): return self._timeout return None def get_vfiler(self): """Get the vfiler to use in tunneling.""" return self._vfiler def set_vfiler(self, vfiler): """Set the vfiler to use if tunneling gets enabled.""" self._vfiler = vfiler def get_vserver(self): """Get the vserver to use in tunneling.""" return self._vserver def set_vserver(self, vserver): """Set the vserver to use if tunneling gets enabled.""" self._vserver = vserver def set_username(self, username): """Set the user name for authentication.""" self._username = username self._refresh_conn = True def set_password(self, password): """Set the password for authentication.""" self._password = password self._refresh_conn = True def invoke_elem(self, na_element, enable_tunneling=False): """Invoke the API on the server.""" if na_element and not isinstance(na_element, NaElement): ValueError('NaElement must be supplied to invoke API') request, request_element = self._create_request(na_element, enable_tunneling) api_name = na_element.get_name() api_name_matches_regex = (re.match(self._api_trace_pattern, api_name) is not None) if self._trace and api_name_matches_regex: LOG.debug("Request: %s", request_element.to_string(pretty=True)) if (not hasattr(self, '_opener') or not self._opener or self._refresh_conn): self._build_opener() try: if hasattr(self, '_timeout'): response = self._opener.open(request, timeout=self._timeout) else: response = self._opener.open(request) except urllib.error.HTTPError as e: raise NaApiError(e.code, e.msg) except urllib.error.URLError as e: raise exception.StorageCommunicationException(six.text_type(e)) except Exception as e: raise NaApiError(message=e) response_xml = response.read() response_element = self._get_result(response_xml) if self._trace and api_name_matches_regex: LOG.debug("Response: %s", response_element.to_string(pretty=True)) return response_element def invoke_successfully(self, na_element, enable_tunneling=False): """Invokes API and checks execution status as success. Need to set enable_tunneling to True explicitly to achieve it. This helps to use same connection instance to enable or disable tunneling. The vserver or vfiler should be set before this call otherwise tunneling remains disabled. 
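        Raises NaApiError when the returned status is not 'passed', for
        example (the API name here is only illustrative):

            result = server.invoke_successfully(
                NaElement('system-get-version'), enable_tunneling=True)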
""" result = self.invoke_elem(na_element, enable_tunneling) if result.has_attr('status') and result.get_attr('status') == 'passed': return result code = (result.get_attr('errno') or result.get_child_content('errorno') or 'ESTATUSFAILED') if code == ESIS_CLONE_NOT_LICENSED: msg = 'Clone operation failed: FlexClone not licensed.' else: msg = (result.get_attr('reason') or result.get_child_content('reason') or 'Execution status is failed due to unknown reason') raise NaApiError(code, msg) def _create_request(self, na_element, enable_tunneling=False): """Creates request in the desired format.""" netapp_elem = NaElement('netapp') netapp_elem.add_attr('xmlns', self._ns) if hasattr(self, '_api_version'): netapp_elem.add_attr('version', self._api_version) if enable_tunneling: self._enable_tunnel_request(netapp_elem) netapp_elem.add_child_elem(na_element) request_d = netapp_elem.to_string() request = urllib.request.Request( self._get_url(), data=request_d, headers={'Content-Type': 'text/xml', 'charset': 'utf-8'}) return request, netapp_elem def _enable_tunnel_request(self, netapp_elem): """Enables vserver or vfiler tunneling.""" if hasattr(self, '_vfiler') and self._vfiler: if (hasattr(self, '_api_major_version') and hasattr(self, '_api_minor_version') and self._api_major_version >= 1 and self._api_minor_version >= 7): netapp_elem.add_attr('vfiler', self._vfiler) else: raise ValueError('ontapi version has to be atleast 1.7' ' to send request to vfiler') if hasattr(self, '_vserver') and self._vserver: if (hasattr(self, '_api_major_version') and hasattr(self, '_api_minor_version') and self._api_major_version >= 1 and self._api_minor_version >= 15): netapp_elem.add_attr('vfiler', self._vserver) else: raise ValueError('ontapi version has to be atleast 1.15' ' to send request to vserver') def _parse_response(self, response): """Get the NaElement for the response.""" if not response: raise NaApiError('No response received') xml = etree.XML(response) return NaElement(xml) def _get_result(self, response): """Gets the call result.""" processed_response = self._parse_response(response) return processed_response.get_child_by_name('results') def _get_url(self): host = self._host if ':' in host: host = '[%s]' % host return '%s://%s:%s/%s' % (self._protocol, host, self._port, self._url) def _build_opener(self): if self._auth_style == NaServer.STYLE_LOGIN_PASSWORD: auth_handler = self._create_basic_auth_handler() else: auth_handler = self._create_certificate_auth_handler() opener = urllib.request.build_opener(auth_handler) self._opener = opener def _create_basic_auth_handler(self): password_man = urllib.request.HTTPPasswordMgrWithDefaultRealm() password_man.add_password(None, self._get_url(), self._username, self._password) auth_handler = urllib.request.HTTPBasicAuthHandler(password_man) return auth_handler def _create_certificate_auth_handler(self): raise NotImplementedError() def __str__(self): return "server: %s" % (self._host) class NaElement(object): """Class wraps basic building block for NetApp API request.""" def __init__(self, name): """Name of the element or etree.Element.""" if isinstance(name, etree._Element): self._element = name else: self._element = etree.Element(name) def get_name(self): """Returns the tag name of the element.""" return self._element.tag def set_content(self, text): """Set the text string for the element.""" self._element.text = text def get_content(self): """Get the text for the element.""" return self._element.text def add_attr(self, name, value): """Add the attribute to the 
element.""" self._element.set(name, value) def add_attrs(self, **attrs): """Add multiple attributes to the element.""" for attr in attrs.keys(): self._element.set(attr, attrs.get(attr)) def add_child_elem(self, na_element): """Add the child element to the element.""" if isinstance(na_element, NaElement): self._element.append(na_element._element) return raise ValueError(_("Can only add elements of type NaElement.")) def get_child_by_name(self, name): """Get the child element by the tag name.""" for child in self._element.iterchildren(): if child.tag == name or etree.QName(child.tag).localname == name: return NaElement(child) return None def get_child_content(self, name): """Get the content of the child.""" for child in self._element.iterchildren(): if child.tag == name or etree.QName(child.tag).localname == name: return child.text return None def get_children(self): """Get the children for the element.""" return [NaElement(el) for el in self._element.iterchildren()] def has_attr(self, name): """Checks whether element has attribute.""" attributes = self._element.attrib or {} return name in attributes.keys() def get_attr(self, name): """Get the attribute with the given name.""" attributes = self._element.attrib or {} return attributes.get(name) def get_attr_names(self): """Returns the list of attribute names.""" attributes = self._element.attrib or {} return attributes.keys() def add_new_child(self, name, content, convert=False): """Add child with tag name and context. Convert replaces entity refs to chars. """ child = NaElement(name) if convert: content = NaElement._convert_entity_refs(content) child.set_content(content) self.add_child_elem(child) @staticmethod def _convert_entity_refs(text): """Converts entity refs to chars to handle etree auto conversions.""" text = text.replace("<", "<") text = text.replace(">", ">") return text @staticmethod def create_node_with_children(node, **children): """Creates and returns named node with children.""" parent = NaElement(node) for child in children.keys(): parent.add_new_child(child, children.get(child, None)) return parent def add_node_with_children(self, node, **children): """Creates named node with children.""" parent = NaElement.create_node_with_children(node, **children) self.add_child_elem(parent) def to_string(self, pretty=False, method='xml', encoding='UTF-8'): """Prints the element to string.""" return etree.tostring(self._element, method=method, encoding=encoding, pretty_print=pretty) def __getitem__(self, key): """Dict getter method for NaElement. Returns NaElement list if present, text value in case no NaElement node children or attribute value if present. """ child = self.get_child_by_name(key) if child: if child.get_children(): return child else: return child.get_content() elif self.has_attr(key): return self.get_attr(key) raise KeyError(_('No element by given name %s.') % (key)) def __setitem__(self, key, value): """Dict setter method for NaElement. Accepts dict, list, tuple, str, int, float and long as valid value. 
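        Example: elem['size'] = 100 appends a <size>100</size> child, while
        elem['attributes'] = {'name': 'vol1'} appends a nested structure
        built via translate_struct().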
""" if key: if value: if isinstance(value, NaElement): child = NaElement(key) child.add_child_elem(value) self.add_child_elem(child) elif isinstance( value, six.string_types + six.integer_types + (float, )): self.add_new_child(key, six.text_type(value)) elif isinstance(value, (list, tuple, dict)): child = NaElement(key) child.translate_struct(value) self.add_child_elem(child) else: raise TypeError(_('Not a valid value for NaElement.')) else: self.add_child_elem(NaElement(key)) else: raise KeyError(_('NaElement name cannot be null.')) def translate_struct(self, data_struct): """Convert list, tuple, dict to NaElement and appends. Example usage: 1. vl1 vl2 vl3 The above can be achieved by doing root = NaElement('root') root.translate_struct({'elem1': 'vl1', 'elem2': 'vl2', 'elem3': 'vl3'}) 2. vl1 vl2 vl3 The above can be achieved by doing root = NaElement('root') root.translate_struct([{'elem1': 'vl1', 'elem2': 'vl2'}, {'elem1': 'vl3'}]) """ if isinstance(data_struct, (list, tuple)): for el in data_struct: if isinstance(el, (list, tuple, dict)): self.translate_struct(el) else: self.add_child_elem(NaElement(el)) elif isinstance(data_struct, dict): for k in data_struct.keys(): child = NaElement(k) if isinstance(data_struct[k], (dict, list, tuple)): child.translate_struct(data_struct[k]) else: if data_struct[k]: child.set_content(six.text_type(data_struct[k])) self.add_child_elem(child) else: raise ValueError(_('Type cannot be converted into NaElement.')) class NaApiError(Exception): """Base exception class for NetApp API errors.""" def __init__(self, code='unknown', message='unknown'): self.code = code self.message = message def __str__(self, *args, **kwargs): return 'NetApp API failed. Reason - %s:%s' % (self.code, self.message) def invoke_api(na_server, api_name, api_family='cm', query=None, des_result=None, additional_elems=None, is_iter=False, records=0, tag=None, timeout=0, tunnel=None): """Invokes any given API call to a NetApp server. :param na_server: na_server instance :param api_name: API name string :param api_family: cm or 7m :param query: API query as dict :param des_result: desired result as dict :param additional_elems: dict other than query and des_result :param is_iter: is iterator API :param records: limit for records, 0 for infinite :param timeout: timeout seconds :param tunnel: tunnel entity, vserver or vfiler name """ record_step = 50 if not (na_server or isinstance(na_server, NaServer)): msg = _("Requires an NaServer instance.") raise exception.InvalidInput(reason=msg) server = copy.copy(na_server) if api_family == 'cm': server.set_vserver(tunnel) else: server.set_vfiler(tunnel) if timeout > 0: server.set_timeout(timeout) iter_records = 0 cond = True while cond: na_element = create_api_request( api_name, query, des_result, additional_elems, is_iter, record_step, tag) result = server.invoke_successfully(na_element, True) if is_iter: if records > 0: iter_records = iter_records + record_step if iter_records >= records: cond = False tag_el = result.get_child_by_name('next-tag') tag = tag_el.get_content() if tag_el else None if not tag: cond = False else: cond = False yield result def create_api_request(api_name, query=None, des_result=None, additional_elems=None, is_iter=False, record_step=50, tag=None): """Creates a NetApp API request. 
:param api_name: API name string :param query: API query as dict :param des_result: desired result as dict :param additional_elems: dict other than query and des_result :param is_iter: is iterator API :param record_step: records at a time for iter API :param tag: next tag for iter API """ api_el = NaElement(api_name) if query: query_el = NaElement('query') query_el.translate_struct(query) api_el.add_child_elem(query_el) if des_result: res_el = NaElement('desired-attributes') res_el.translate_struct(des_result) api_el.add_child_elem(res_el) if additional_elems: api_el.translate_struct(additional_elems) if is_iter: api_el.add_new_child('max-records', six.text_type(record_step)) if tag: api_el.add_new_child('tag', tag, True) return api_el manila-10.0.0/manila/share/drivers/netapp/dataontap/client/client_base.py0000664000175000017500000001035113656750227026453 0ustar zuulzuul00000000000000# Copyright (c) 2014 Alex Meade. All rights reserved. # Copyright (c) 2014 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from oslo_utils import excutils from manila.share.drivers.netapp.dataontap.client import api as netapp_api from manila.share.drivers.netapp import utils as na_utils LOG = log.getLogger(__name__) class NetAppBaseClient(object): def __init__(self, **kwargs): self.connection = netapp_api.NaServer( host=kwargs['hostname'], transport_type=kwargs['transport_type'], port=kwargs['port'], username=kwargs['username'], password=kwargs['password'], trace=kwargs.get('trace', False), api_trace_pattern=kwargs.get('api_trace_pattern', na_utils.API_TRACE_PATTERN)) def get_ontapi_version(self, cached=True): """Gets the supported ontapi version.""" if cached: return self.connection.get_api_version() result = self.send_request('system-get-ontapi-version', enable_tunneling=False) major = result.get_child_content('major-version') minor = result.get_child_content('minor-version') return major, minor @na_utils.trace def get_system_version(self, cached=True): """Gets the current Data ONTAP version.""" if cached: return self.connection.get_system_version() result = self.send_request('system-get-version') version_tuple = result.get_child_by_name( 'version-tuple') or netapp_api.NaElement('none') system_version_tuple = version_tuple.get_child_by_name( 'system-version-tuple') or netapp_api.NaElement('none') version = {} version['version'] = result.get_child_content('version') version['version-tuple'] = ( int(system_version_tuple.get_child_content('generation')), int(system_version_tuple.get_child_content('major')), int(system_version_tuple.get_child_content('minor'))) return version def _init_features(self): """Set up the repository of available Data ONTAP features.""" self.features = Features() def _strip_xml_namespace(self, string): if string.startswith('{') and '}' in string: return string.split('}', 1)[1] return string def send_request(self, api_name, api_args=None, enable_tunneling=True): """Sends request to Ontapi.""" request = netapp_api.NaElement(api_name) if 
api_args: request.translate_struct(api_args) return self.connection.invoke_successfully(request, enable_tunneling) @na_utils.trace def get_licenses(self): try: result = self.send_request('license-v2-list-info') except netapp_api.NaApiError: with excutils.save_and_reraise_exception(): LOG.exception("Could not get licenses list.") return sorted( [l.get_child_content('package').lower() for l in result.get_child_by_name('licenses').get_children()]) def send_ems_log_message(self, message_dict): """Sends a message to the Data ONTAP EMS log.""" raise NotImplementedError() class Features(object): def __init__(self): self.defined_features = set() def add_feature(self, name, supported=True): if not isinstance(supported, bool): raise TypeError("Feature value must be a bool type.") self.defined_features.add(name) setattr(self, name, supported) def __getattr__(self, name): # NOTE(cknight): Needed to keep pylint happy. raise AttributeError manila-10.0.0/manila/share/drivers/netapp/dataontap/client/client_cmode.py0000664000175000017500000045045313656750227026643 0ustar zuulzuul00000000000000# Copyright (c) 2014 Alex Meade. All rights reserved. # Copyright (c) 2015 Clinton Knight. All rights reserved. # Copyright (c) 2015 Tom Barron. All rights reserved. # Copyright (c) 2018 Jose Porrua. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import hashlib import re import time from oslo_log import log from oslo_utils import strutils from oslo_utils import units import six from manila import exception from manila.i18n import _ from manila.share.drivers.netapp.dataontap.client import api as netapp_api from manila.share.drivers.netapp.dataontap.client import client_base from manila.share.drivers.netapp import utils as na_utils LOG = log.getLogger(__name__) DELETED_PREFIX = 'deleted_manila_' DEFAULT_IPSPACE = 'Default' DEFAULT_MAX_PAGE_LENGTH = 50 CUTOVER_ACTION_MAP = { 'defer': 'defer_on_failure', 'abort': 'abort_on_failure', 'force': 'force', 'wait': 'wait', } class NetAppCmodeClient(client_base.NetAppBaseClient): def __init__(self, **kwargs): super(NetAppCmodeClient, self).__init__(**kwargs) self.vserver = kwargs.get('vserver') self.connection.set_vserver(self.vserver) # Default values to run first api. 
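        # ONTAPI 1.15 is just a bootstrap value so that the very first
        # request can be issued; the real ONTAPI and system versions are
        # discovered right below and then set on the connection.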
self.connection.set_api_version(1, 15) (major, minor) = self.get_ontapi_version(cached=False) self.connection.set_api_version(major, minor) system_version = self.get_system_version(cached=False) self.connection.set_system_version(system_version) self._init_features() def _init_features(self): """Initialize cDOT feature support map.""" super(NetAppCmodeClient, self)._init_features() ontapi_version = self.get_ontapi_version(cached=True) ontapi_1_20 = ontapi_version >= (1, 20) ontapi_1_2x = (1, 20) <= ontapi_version < (1, 30) ontapi_1_30 = ontapi_version >= (1, 30) ontapi_1_110 = ontapi_version >= (1, 110) self.features.add_feature('SNAPMIRROR_V2', supported=ontapi_1_20) self.features.add_feature('SYSTEM_METRICS', supported=ontapi_1_2x) self.features.add_feature('SYSTEM_CONSTITUENT_METRICS', supported=ontapi_1_30) self.features.add_feature('BROADCAST_DOMAINS', supported=ontapi_1_30) self.features.add_feature('IPSPACES', supported=ontapi_1_30) self.features.add_feature('SUBNETS', supported=ontapi_1_30) self.features.add_feature('CLUSTER_PEER_POLICY', supported=ontapi_1_30) self.features.add_feature('ADVANCED_DISK_PARTITIONING', supported=ontapi_1_30) self.features.add_feature('FLEXVOL_ENCRYPTION', supported=ontapi_1_110) def _invoke_vserver_api(self, na_element, vserver): server = copy.copy(self.connection) server.set_vserver(vserver) result = server.invoke_successfully(na_element, True) return result def _has_records(self, api_result_element): if (not api_result_element.get_child_content('num-records') or api_result_element.get_child_content('num-records') == '0'): return False else: return True def _get_record_count(self, api_result_element): try: return int(api_result_element.get_child_content('num-records')) except TypeError: msg = _('Missing record count for NetApp iterator API invocation.') raise exception.NetAppException(msg) def set_vserver(self, vserver): self.vserver = vserver self.connection.set_vserver(vserver) def send_iter_request(self, api_name, api_args=None, max_page_length=DEFAULT_MAX_PAGE_LENGTH): """Invoke an iterator-style getter API.""" if not api_args: api_args = {} api_args['max-records'] = max_page_length # Get first page result = self.send_request(api_name, api_args) # Most commonly, we can just return here if there is no more data next_tag = result.get_child_content('next-tag') if not next_tag: return result # Ensure pagination data is valid and prepare to store remaining pages num_records = self._get_record_count(result) attributes_list = result.get_child_by_name('attributes-list') if not attributes_list: msg = _('Missing attributes list for API %s.') % api_name raise exception.NetAppException(msg) # Get remaining pages, saving data into first page while next_tag is not None: next_api_args = copy.deepcopy(api_args) next_api_args['tag'] = next_tag next_result = self.send_request(api_name, next_api_args) next_attributes_list = next_result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') for record in next_attributes_list.get_children(): attributes_list.add_child_elem(record) num_records += self._get_record_count(next_result) next_tag = next_result.get_child_content('next-tag') result.get_child_by_name('num-records').set_content( six.text_type(num_records)) result.get_child_by_name('next-tag').set_content('') return result @na_utils.trace def create_vserver(self, vserver_name, root_volume_aggregate_name, root_volume_name, aggregate_names, ipspace_name): """Creates new vserver and assigns aggregates.""" create_args = { 'vserver-name': vserver_name, 
'root-volume-security-style': 'unix', 'root-volume-aggregate': root_volume_aggregate_name, 'root-volume': root_volume_name, 'name-server-switch': { 'nsswitch': 'file', }, } if ipspace_name: if not self.features.IPSPACES: msg = 'IPSpaces are not supported on this backend.' raise exception.NetAppException(msg) else: create_args['ipspace'] = ipspace_name self.send_request('vserver-create', create_args) aggr_list = [{'aggr-name': aggr_name} for aggr_name in aggregate_names] modify_args = { 'aggr-list': aggr_list, 'vserver-name': vserver_name, } self.send_request('vserver-modify', modify_args) @na_utils.trace def vserver_exists(self, vserver_name): """Checks if Vserver exists.""" LOG.debug('Checking if Vserver %s exists', vserver_name) api_args = { 'query': { 'vserver-info': { 'vserver-name': vserver_name, }, }, 'desired-attributes': { 'vserver-info': { 'vserver-name': None, }, }, } result = self.send_iter_request('vserver-get-iter', api_args) return self._has_records(result) @na_utils.trace def get_vserver_root_volume_name(self, vserver_name): """Get the root volume name of the vserver.""" api_args = { 'query': { 'vserver-info': { 'vserver-name': vserver_name, }, }, 'desired-attributes': { 'vserver-info': { 'root-volume': None, }, }, } vserver_info = self.send_iter_request('vserver-get-iter', api_args) try: root_volume_name = vserver_info.get_child_by_name( 'attributes-list').get_child_by_name( 'vserver-info').get_child_content('root-volume') except AttributeError: msg = _('Could not determine root volume name ' 'for Vserver %s.') % vserver_name raise exception.NetAppException(msg) return root_volume_name @na_utils.trace def get_vserver_ipspace(self, vserver_name): """Get the IPspace of the vserver, or None if not supported.""" if not self.features.IPSPACES: return None api_args = { 'query': { 'vserver-info': { 'vserver-name': vserver_name, }, }, 'desired-attributes': { 'vserver-info': { 'ipspace': None, }, }, } vserver_info = self.send_iter_request('vserver-get-iter', api_args) try: ipspace = vserver_info.get_child_by_name( 'attributes-list').get_child_by_name( 'vserver-info').get_child_content('ipspace') except AttributeError: msg = _('Could not determine IPspace for Vserver %s.') raise exception.NetAppException(msg % vserver_name) return ipspace @na_utils.trace def ipspace_has_data_vservers(self, ipspace_name): """Check whether an IPspace has any data Vservers assigned to it.""" if not self.features.IPSPACES: return False api_args = { 'query': { 'vserver-info': { 'ipspace': ipspace_name, 'vserver-type': 'data' }, }, 'desired-attributes': { 'vserver-info': { 'vserver-name': None, }, }, } result = self.send_iter_request('vserver-get-iter', api_args) return self._has_records(result) @na_utils.trace def list_vservers(self, vserver_type='data'): """Get the names of vservers present, optionally filtered by type.""" query = { 'vserver-info': { 'vserver-type': vserver_type, } } if vserver_type else None api_args = { 'desired-attributes': { 'vserver-info': { 'vserver-name': None, }, }, } if query: api_args['query'] = query result = self.send_iter_request('vserver-get-iter', api_args) vserver_info_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') return [vserver_info.get_child_content('vserver-name') for vserver_info in vserver_info_list.get_children()] @na_utils.trace def get_vserver_volume_count(self): """Get the number of volumes present on a cluster or vserver. Call this on a vserver client to see how many volumes exist on that vserver. 
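        The count includes the Vserver root volume; delete_vserver() relies
        on this when it treats a count of one as "only the root volume is
        left".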
""" api_args = { 'desired-attributes': { 'volume-attributes': { 'volume-id-attributes': { 'name': None, }, }, }, } volumes_data = self.send_iter_request('volume-get-iter', api_args) return self._get_record_count(volumes_data) @na_utils.trace def delete_vserver(self, vserver_name, vserver_client, security_services=None): """Delete Vserver. Checks if Vserver exists and does not have active shares. Offlines and destroys root volumes. Deletes Vserver. """ if not self.vserver_exists(vserver_name): LOG.error("Vserver %s does not exist.", vserver_name) return root_volume_name = self.get_vserver_root_volume_name(vserver_name) volumes_count = vserver_client.get_vserver_volume_count() if volumes_count == 1: try: vserver_client.offline_volume(root_volume_name) except netapp_api.NaApiError as e: if e.code == netapp_api.EVOLUMEOFFLINE: LOG.error("Volume %s is already offline.", root_volume_name) else: raise vserver_client.delete_volume(root_volume_name) elif volumes_count > 1: msg = _("Cannot delete Vserver. Vserver %s has shares.") raise exception.NetAppException(msg % vserver_name) if security_services: self._terminate_vserver_services(vserver_name, vserver_client, security_services) self.send_request('vserver-destroy', {'vserver-name': vserver_name}) @na_utils.trace def _terminate_vserver_services(self, vserver_name, vserver_client, security_services): for service in security_services: if service['type'] == 'active_directory': api_args = { 'admin-password': service['password'], 'admin-username': service['user'], } try: vserver_client.send_request('cifs-server-delete', api_args) except netapp_api.NaApiError as e: if e.code == netapp_api.EOBJECTNOTFOUND: LOG.error('CIFS server does not exist for ' 'Vserver %s.', vserver_name) else: vserver_client.send_request('cifs-server-delete') @na_utils.trace def is_nve_supported(self): """Determine whether NVE is supported on this platform and version.""" nodes = self.list_cluster_nodes() system_version = self.get_system_version() version = system_version.get('version') version_tuple = system_version.get('version-tuple') # NVE requires an ONTAP version >= 9.1. Also, not all platforms # support this feature. NVE is not supported if the version # includes the substring '<1no-DARE>' (no Data At Rest Encryption). if version_tuple >= (9, 1, 0) and "<1no-DARE>" not in version: if nodes is not None: return self.get_security_key_manager_nve_support(nodes[0]) else: LOG.debug('Cluster credentials are required in order to ' 'determine whether NetApp Volume Encryption is ' 'supported or not on this platform.') return False else: LOG.debug('NetApp Volume Encryption is not supported on this ' 'ONTAP version: %(version)s, %(version_tuple)s. 
', {'version': version, 'version_tuple': version_tuple}) return False @na_utils.trace def list_cluster_nodes(self): """Get all available cluster nodes.""" api_args = { 'desired-attributes': { 'node-details-info': { 'node': None, }, }, } result = self.send_iter_request('system-node-get-iter', api_args) nodes_info_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') return [node_info.get_child_content('node') for node_info in nodes_info_list.get_children()] @na_utils.trace def get_security_key_manager_nve_support(self, node): """Determine whether the cluster platform supports Volume Encryption""" api_args = {'node': node} try: result = self.send_request( 'security-key-manager-volume-encryption-supported', api_args) vol_encryption_supported = result.get_child_content( 'vol-encryption-supported') or 'false' except netapp_api.NaApiError as e: LOG.debug("NVE disabled due to error code: %s - %s", e.code, e.message) return False return strutils.bool_from_string(vol_encryption_supported) @na_utils.trace def list_node_data_ports(self, node): ports = self.get_node_data_ports(node) return [port.get('port') for port in ports] @na_utils.trace def get_node_data_ports(self, node): """Get applicable data ports on the node.""" api_args = { 'query': { 'net-port-info': { 'node': node, 'link-status': 'up', 'port-type': 'physical|if_group', 'role': 'data', }, }, 'desired-attributes': { 'net-port-info': { 'port': None, 'node': None, 'operational-speed': None, 'ifgrp-port': None, }, }, } result = self.send_iter_request('net-port-get-iter', api_args) net_port_info_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') ports = [] for port_info in net_port_info_list.get_children(): # Skip physical ports that are part of interface groups. 
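            # The query above returns both physical ports and interface
            # groups; member ports of an if_group are dropped here so that
            # only the group port itself is offered as a data port.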
if port_info.get_child_content('ifgrp-port'): continue port = { 'node': port_info.get_child_content('node'), 'port': port_info.get_child_content('port'), 'speed': port_info.get_child_content('operational-speed'), } ports.append(port) return self._sort_data_ports_by_speed(ports) @na_utils.trace def _sort_data_ports_by_speed(self, ports): def sort_key(port): value = port.get('speed') if not (value and isinstance(value, six.string_types)): return 0 elif value.isdigit(): return int(value) elif value == 'auto': return 3 elif value == 'undef': return 2 else: return 1 return sorted(ports, key=sort_key, reverse=True) @na_utils.trace def list_root_aggregates(self): """Get names of all aggregates that contain node root volumes.""" desired_attributes = { 'aggr-attributes': { 'aggregate-name': None, 'aggr-raid-attributes': { 'has-local-root': None, 'has-partner-root': None, }, }, } aggrs = self._get_aggregates(desired_attributes=desired_attributes) root_aggregates = [] for aggr in aggrs: aggr_name = aggr.get_child_content('aggregate-name') aggr_raid_attrs = aggr.get_child_by_name('aggr-raid-attributes') local_root = strutils.bool_from_string( aggr_raid_attrs.get_child_content('has-local-root')) partner_root = strutils.bool_from_string( aggr_raid_attrs.get_child_content('has-partner-root')) if local_root or partner_root: root_aggregates.append(aggr_name) return root_aggregates @na_utils.trace def list_non_root_aggregates(self): """Get names of all aggregates that don't contain node root volumes.""" query = { 'aggr-attributes': { 'aggr-raid-attributes': { 'has-local-root': 'false', 'has-partner-root': 'false', } }, } return self._list_aggregates(query=query) @na_utils.trace def _list_aggregates(self, query=None): """Get names of all aggregates.""" try: api_args = { 'desired-attributes': { 'aggr-attributes': { 'aggregate-name': None, }, }, } if query: api_args['query'] = query result = self.send_iter_request('aggr-get-iter', api_args) aggr_list = result.get_child_by_name( 'attributes-list').get_children() except AttributeError: msg = _("Could not list aggregates.") raise exception.NetAppException(msg) return [aggr.get_child_content('aggregate-name') for aggr in aggr_list] @na_utils.trace def list_vserver_aggregates(self): """Returns a list of aggregates available to a vserver. This must be called against a Vserver LIF. 
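        The result is derived from get_vserver_aggregate_capacities(), i.e.
        only aggregates visible to the Vserver itself are returned.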
""" return list(self.get_vserver_aggregate_capacities().keys()) @na_utils.trace def create_network_interface(self, ip, netmask, vlan, node, port, vserver_name, lif_name, ipspace_name, mtu): """Creates LIF on VLAN port.""" home_port_name = port if vlan: self._create_vlan(node, port, vlan) home_port_name = '%(port)s-%(tag)s' % {'port': port, 'tag': vlan} if self.features.BROADCAST_DOMAINS: self._ensure_broadcast_domain_for_port( node, home_port_name, mtu, ipspace=ipspace_name) LOG.debug('Creating LIF %(lif)s for Vserver %(vserver)s ', {'lif': lif_name, 'vserver': vserver_name}) api_args = { 'address': ip, 'administrative-status': 'up', 'data-protocols': [ {'data-protocol': 'nfs'}, {'data-protocol': 'cifs'}, ], 'home-node': node, 'home-port': home_port_name, 'netmask': netmask, 'interface-name': lif_name, 'role': 'data', 'vserver': vserver_name, } self.send_request('net-interface-create', api_args) @na_utils.trace def _create_vlan(self, node, port, vlan): try: api_args = { 'vlan-info': { 'parent-interface': port, 'node': node, 'vlanid': vlan, }, } self.send_request('net-vlan-create', api_args) except netapp_api.NaApiError as e: if e.code == netapp_api.EDUPLICATEENTRY: LOG.debug('VLAN %(vlan)s already exists on port %(port)s', {'vlan': vlan, 'port': port}) else: msg = _('Failed to create VLAN %(vlan)s on ' 'port %(port)s. %(err_msg)s') msg_args = {'vlan': vlan, 'port': port, 'err_msg': e.message} raise exception.NetAppException(msg % msg_args) @na_utils.trace def delete_vlan(self, node, port, vlan): try: api_args = { 'vlan-info': { 'parent-interface': port, 'node': node, 'vlanid': vlan, }, } self.send_request('net-vlan-delete', api_args) except netapp_api.NaApiError as e: p = re.compile('port already has a lif bound.*', re.IGNORECASE) if (e.code == netapp_api.EAPIERROR and re.match(p, e.message)): LOG.debug('VLAN %(vlan)s on port %(port)s node %(node)s ' 'still used by LIF and cannot be deleted.', {'vlan': vlan, 'port': port, 'node': node}) else: msg = _('Failed to delete VLAN %(vlan)s on ' 'port %(port)s node %(node)s: %(err_msg)s') msg_args = { 'vlan': vlan, 'port': port, 'node': node, 'err_msg': e.message } raise exception.NetAppException(msg % msg_args) @na_utils.trace def create_route(self, gateway, destination=None): if not gateway: return if not destination: if ':' in gateway: destination = '::/0' else: destination = '0.0.0.0/0' try: api_args = { 'destination': destination, 'gateway': gateway, 'return-record': 'true', } self.send_request('net-routes-create', api_args) except netapp_api.NaApiError as e: p = re.compile('.*Duplicate route exists.*', re.IGNORECASE) if (e.code == netapp_api.EAPIERROR and re.match(p, e.message)): LOG.debug('Route to %(destination)s via gateway %(gateway)s ' 'exists.', {'destination': destination, 'gateway': gateway}) else: msg = _('Failed to create a route to %(destination)s via ' 'gateway %(gateway)s: %(err_msg)s') msg_args = { 'destination': destination, 'gateway': gateway, 'err_msg': e.message, } raise exception.NetAppException(msg % msg_args) @na_utils.trace def _ensure_broadcast_domain_for_port(self, node, port, mtu, ipspace=DEFAULT_IPSPACE): """Ensure a port is in a broadcast domain. Create one if necessary. If the IPspace:domain pair match for the given port, which commonly happens in multi-node clusters, then there isn't anything to do. 
Otherwise, we can assume the IPspace is correct and extant by this point, so the remaining task is to remove the port from any domain it is already in, create the domain for the IPspace if it doesn't exist, and add the port to this domain. """ # Derive the broadcast domain name from the IPspace name since they # need to be 1-1 and the default for both is the same name, 'Default'. domain = re.sub(r'ipspace', 'domain', ipspace) port_info = self._get_broadcast_domain_for_port(node, port) # Port already in desired ipspace and broadcast domain. if (port_info['ipspace'] == ipspace and port_info['broadcast-domain'] == domain): self._modify_broadcast_domain(domain, ipspace, mtu) return # If in another broadcast domain, remove port from it. if port_info['broadcast-domain']: self._remove_port_from_broadcast_domain( node, port, port_info['broadcast-domain'], port_info['ipspace']) # If desired broadcast domain doesn't exist, create it. if not self._broadcast_domain_exists(domain, ipspace): self._create_broadcast_domain(domain, ipspace, mtu) else: self._modify_broadcast_domain(domain, ipspace, mtu) # Move the port into the broadcast domain where it is needed. self._add_port_to_broadcast_domain(node, port, domain, ipspace) @na_utils.trace def _get_broadcast_domain_for_port(self, node, port): """Get broadcast domain for a specific port.""" api_args = { 'query': { 'net-port-info': { 'node': node, 'port': port, }, }, 'desired-attributes': { 'net-port-info': { 'broadcast-domain': None, 'ipspace': None, }, }, } result = self.send_iter_request('net-port-get-iter', api_args) net_port_info_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') port_info = net_port_info_list.get_children() if not port_info: msg = _('Could not find port %(port)s on node %(node)s.') msg_args = {'port': port, 'node': node} raise exception.NetAppException(msg % msg_args) port = { 'broadcast-domain': port_info[0].get_child_content('broadcast-domain'), 'ipspace': port_info[0].get_child_content('ipspace') } return port @na_utils.trace def _broadcast_domain_exists(self, domain, ipspace): """Check if a broadcast domain exists.""" api_args = { 'query': { 'net-port-broadcast-domain-info': { 'ipspace': ipspace, 'broadcast-domain': domain, }, }, 'desired-attributes': { 'net-port-broadcast-domain-info': None, }, } result = self.send_iter_request('net-port-broadcast-domain-get-iter', api_args) return self._has_records(result) @na_utils.trace def _create_broadcast_domain(self, domain, ipspace, mtu): """Create a broadcast domain.""" api_args = { 'ipspace': ipspace, 'broadcast-domain': domain, 'mtu': mtu, } self.send_request('net-port-broadcast-domain-create', api_args) @na_utils.trace def _modify_broadcast_domain(self, domain, ipspace, mtu): """Modify a broadcast domain.""" api_args = { 'ipspace': ipspace, 'broadcast-domain': domain, 'mtu': mtu, } self.send_request('net-port-broadcast-domain-modify', api_args) @na_utils.trace def _delete_broadcast_domain(self, domain, ipspace): """Delete a broadcast domain.""" api_args = { 'ipspace': ipspace, 'broadcast-domain': domain, } self.send_request('net-port-broadcast-domain-destroy', api_args) @na_utils.trace def _delete_broadcast_domains_for_ipspace(self, ipspace_name): """Deletes all broadcast domains in an IPspace.""" ipspaces = self.get_ipspaces(ipspace_name=ipspace_name) if not ipspaces: return ipspace = ipspaces[0] for broadcast_domain_name in ipspace['broadcast-domains']: self._delete_broadcast_domain(broadcast_domain_name, ipspace_name) @na_utils.trace def 
_add_port_to_broadcast_domain(self, node, port, domain, ipspace): qualified_port_name = ':'.join([node, port]) try: api_args = { 'ipspace': ipspace, 'broadcast-domain': domain, 'ports': { 'net-qualified-port-name': qualified_port_name, } } self.send_request('net-port-broadcast-domain-add-ports', api_args) except netapp_api.NaApiError as e: if e.code == (netapp_api. E_VIFMGR_PORT_ALREADY_ASSIGNED_TO_BROADCAST_DOMAIN): LOG.debug('Port %(port)s already exists in broadcast domain ' '%(domain)s', {'port': port, 'domain': domain}) else: msg = _('Failed to add port %(port)s to broadcast domain ' '%(domain)s. %(err_msg)s') msg_args = { 'port': qualified_port_name, 'domain': domain, 'err_msg': e.message, } raise exception.NetAppException(msg % msg_args) @na_utils.trace def _remove_port_from_broadcast_domain(self, node, port, domain, ipspace): qualified_port_name = ':'.join([node, port]) api_args = { 'ipspace': ipspace, 'broadcast-domain': domain, 'ports': { 'net-qualified-port-name': qualified_port_name, } } self.send_request('net-port-broadcast-domain-remove-ports', api_args) @na_utils.trace def network_interface_exists(self, vserver_name, node, port, ip, netmask, vlan): """Checks if LIF exists.""" home_port_name = (port if not vlan else '%(port)s-%(tag)s' % {'port': port, 'tag': vlan}) api_args = { 'query': { 'net-interface-info': { 'address': ip, 'home-node': node, 'home-port': home_port_name, 'netmask': netmask, 'vserver': vserver_name, }, }, 'desired-attributes': { 'net-interface-info': { 'interface-name': None, }, }, } result = self.send_iter_request('net-interface-get-iter', api_args) return self._has_records(result) @na_utils.trace def list_network_interfaces(self): """Get the names of available LIFs.""" api_args = { 'desired-attributes': { 'net-interface-info': { 'interface-name': None, }, }, } result = self.send_iter_request('net-interface-get-iter', api_args) lif_info_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') return [lif_info.get_child_content('interface-name') for lif_info in lif_info_list.get_children()] @na_utils.trace def get_network_interfaces(self, protocols=None): """Get available LIFs.""" protocols = na_utils.convert_to_list(protocols) protocols = [protocol.lower() for protocol in protocols] api_args = { 'query': { 'net-interface-info': { 'data-protocols': { 'data-protocol': '|'.join(protocols), } } } } if protocols else None result = self.send_iter_request('net-interface-get-iter', api_args) lif_info_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') interfaces = [] for lif_info in lif_info_list.get_children(): lif = { 'address': lif_info.get_child_content('address'), 'home-node': lif_info.get_child_content('home-node'), 'home-port': lif_info.get_child_content('home-port'), 'interface-name': lif_info.get_child_content('interface-name'), 'netmask': lif_info.get_child_content('netmask'), 'role': lif_info.get_child_content('role'), 'vserver': lif_info.get_child_content('vserver'), } interfaces.append(lif) return interfaces @na_utils.trace def get_ipspace_name_for_vlan_port(self, vlan_node, vlan_port, vlan_id): """Gets IPSpace name for specified VLAN""" if not self.features.IPSPACES: return None port = vlan_port if not vlan_id else '%(port)s-%(id)s' % { 'port': vlan_port, 'id': vlan_id, } api_args = {'node': vlan_node, 'port': port} try: result = self.send_request('net-port-get', api_args) except netapp_api.NaApiError as e: if e.code == netapp_api.EOBJECTNOTFOUND: msg = _('No pre-existing port or ipspace was 
found for ' '%(port)s, will attempt to create one.') msg_args = {'port': port} LOG.debug(msg, msg_args) return None else: raise attributes = result.get_child_by_name('attributes') net_port_info = attributes.get_child_by_name('net-port-info') ipspace_name = net_port_info.get_child_content('ipspace') return ipspace_name @na_utils.trace def get_ipspaces(self, ipspace_name=None): """Gets one or more IPSpaces.""" if not self.features.IPSPACES: return [] api_args = {} if ipspace_name: api_args['query'] = { 'net-ipspaces-info': { 'ipspace': ipspace_name, } } result = self.send_iter_request('net-ipspaces-get-iter', api_args) if not self._has_records(result): return [] ipspaces = [] for net_ipspaces_info in result.get_child_by_name( 'attributes-list').get_children(): ipspace = { 'ports': [], 'vservers': [], 'broadcast-domains': [], } ports = net_ipspaces_info.get_child_by_name( 'ports') or netapp_api.NaElement('none') for port in ports.get_children(): ipspace['ports'].append(port.get_content()) vservers = net_ipspaces_info.get_child_by_name( 'vservers') or netapp_api.NaElement('none') for vserver in vservers.get_children(): ipspace['vservers'].append(vserver.get_content()) broadcast_domains = net_ipspaces_info.get_child_by_name( 'broadcast-domains') or netapp_api.NaElement('none') for broadcast_domain in broadcast_domains.get_children(): ipspace['broadcast-domains'].append( broadcast_domain.get_content()) ipspace['ipspace'] = net_ipspaces_info.get_child_content('ipspace') ipspace['id'] = net_ipspaces_info.get_child_content('id') ipspace['uuid'] = net_ipspaces_info.get_child_content('uuid') ipspaces.append(ipspace) return ipspaces @na_utils.trace def ipspace_exists(self, ipspace_name): """Checks if IPspace exists.""" if not self.features.IPSPACES: return False api_args = { 'query': { 'net-ipspaces-info': { 'ipspace': ipspace_name, }, }, 'desired-attributes': { 'net-ipspaces-info': { 'ipspace': None, }, }, } result = self.send_iter_request('net-ipspaces-get-iter', api_args) return self._has_records(result) @na_utils.trace def create_ipspace(self, ipspace_name): """Creates an IPspace.""" api_args = {'ipspace': ipspace_name} self.send_request('net-ipspaces-create', api_args) @na_utils.trace def delete_ipspace(self, ipspace_name): """Deletes an IPspace.""" self._delete_broadcast_domains_for_ipspace(ipspace_name) api_args = {'ipspace': ipspace_name} self.send_request('net-ipspaces-destroy', api_args) @na_utils.trace def add_vserver_to_ipspace(self, ipspace_name, vserver_name): """Assigns a vserver to an IPspace.""" api_args = {'ipspace': ipspace_name, 'vserver': vserver_name} self.send_request('net-ipspaces-assign-vserver', api_args) @na_utils.trace def get_node_for_aggregate(self, aggregate_name): """Get home node for the specified aggregate. This API could return None, most notably if it was sent to a Vserver LIF, so the caller must be able to handle that case. 
""" if not aggregate_name: return None desired_attributes = { 'aggr-attributes': { 'aggregate-name': None, 'aggr-ownership-attributes': { 'home-name': None, }, }, } try: aggrs = self._get_aggregates(aggregate_names=[aggregate_name], desired_attributes=desired_attributes) except netapp_api.NaApiError as e: if e.code == netapp_api.EAPINOTFOUND: return None else: raise if len(aggrs) < 1: return None aggr_ownership_attrs = aggrs[0].get_child_by_name( 'aggr-ownership-attributes') or netapp_api.NaElement('none') return aggr_ownership_attrs.get_child_content('home-name') @na_utils.trace def get_cluster_aggregate_capacities(self, aggregate_names): """Calculates capacity of one or more aggregates. Returns dictionary of aggregate capacity metrics. 'size-used' is the actual space consumed on the aggregate. 'size-available' is the actual space remaining. 'size-total' is the defined total aggregate size, such that used + available = total. """ if aggregate_names is not None and len(aggregate_names) == 0: return {} desired_attributes = { 'aggr-attributes': { 'aggregate-name': None, 'aggr-space-attributes': { 'size-available': None, 'size-total': None, 'size-used': None, }, }, } aggrs = self._get_aggregates(aggregate_names=aggregate_names, desired_attributes=desired_attributes) aggr_space_dict = dict() for aggr in aggrs: aggr_name = aggr.get_child_content('aggregate-name') aggr_space_attrs = aggr.get_child_by_name('aggr-space-attributes') aggr_space_dict[aggr_name] = { 'available': int(aggr_space_attrs.get_child_content('size-available')), 'total': int(aggr_space_attrs.get_child_content('size-total')), 'used': int(aggr_space_attrs.get_child_content('size-used')), } return aggr_space_dict @na_utils.trace def get_vserver_aggregate_capacities(self, aggregate_names=None): """Calculates capacity of one or more aggregates for a vserver. Returns dictionary of aggregate capacity metrics. This must be called against a Vserver LIF. """ if aggregate_names is not None and len(aggregate_names) == 0: return {} api_args = { 'desired-attributes': { 'vserver-info': { 'vserver-name': None, 'vserver-aggr-info-list': { 'vserver-aggr-info': { 'aggr-name': None, 'aggr-availsize': None, }, }, }, }, } result = self.send_request('vserver-get', api_args) attributes = result.get_child_by_name('attributes') if not attributes: raise exception.NetAppException('Failed to read Vserver info') vserver_info = attributes.get_child_by_name('vserver-info') vserver_name = vserver_info.get_child_content('vserver-name') vserver_aggr_info_element = vserver_info.get_child_by_name( 'vserver-aggr-info-list') or netapp_api.NaElement('none') vserver_aggr_info_list = vserver_aggr_info_element.get_children() if not vserver_aggr_info_list: LOG.warning('No aggregates assigned to Vserver %s.', vserver_name) # Return dict of key-value pair of aggr_name:aggr_size_available. 
aggr_space_dict = {} for aggr_info in vserver_aggr_info_list: aggr_name = aggr_info.get_child_content('aggr-name') if aggregate_names is None or aggr_name in aggregate_names: aggr_size = int(aggr_info.get_child_content('aggr-availsize')) aggr_space_dict[aggr_name] = {'available': aggr_size} LOG.debug('Found available Vserver aggregates: %s', aggr_space_dict) return aggr_space_dict @na_utils.trace def _get_aggregates(self, aggregate_names=None, desired_attributes=None): query = { 'aggr-attributes': { 'aggregate-name': '|'.join(aggregate_names), } } if aggregate_names else None api_args = {} if query: api_args['query'] = query if desired_attributes: api_args['desired-attributes'] = desired_attributes result = self.send_iter_request('aggr-get-iter', api_args) if not self._has_records(result): return [] else: return result.get_child_by_name('attributes-list').get_children() def get_performance_instance_uuids(self, object_name, node_name): """Get UUIDs of performance instances for a cluster node.""" api_args = { 'objectname': object_name, 'query': { 'instance-info': { 'uuid': node_name + ':*', } } } result = self.send_request('perf-object-instance-list-info-iter', api_args) uuids = [] instances = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('None') for instance_info in instances.get_children(): uuids.append(instance_info.get_child_content('uuid')) return uuids def get_performance_counter_info(self, object_name, counter_name): """Gets info about one or more Data ONTAP performance counters.""" api_args = {'objectname': object_name} result = self.send_request('perf-object-counter-list-info', api_args) counters = result.get_child_by_name( 'counters') or netapp_api.NaElement('None') for counter in counters.get_children(): if counter.get_child_content('name') == counter_name: labels = [] label_list = counter.get_child_by_name( 'labels') or netapp_api.NaElement('None') for label in label_list.get_children(): labels.extend(label.get_content().split(',')) base_counter = counter.get_child_content('base-counter') return { 'name': counter_name, 'labels': labels, 'base-counter': base_counter, } else: raise exception.NotFound(_('Counter %s not found') % counter_name) def get_performance_counters(self, object_name, instance_uuids, counter_names): """Gets one or more cDOT performance counters.""" api_args = { 'objectname': object_name, 'instance-uuids': [ {'instance-uuid': instance_uuid} for instance_uuid in instance_uuids ], 'counters': [ {'counter': counter} for counter in counter_names ], } result = self.send_request('perf-object-get-instances', api_args) counter_data = [] timestamp = result.get_child_content('timestamp') instances = result.get_child_by_name( 'instances') or netapp_api.NaElement('None') for instance in instances.get_children(): instance_name = instance.get_child_content('name') instance_uuid = instance.get_child_content('uuid') node_name = instance_uuid.split(':')[0] counters = instance.get_child_by_name( 'counters') or netapp_api.NaElement('None') for counter in counters.get_children(): counter_name = counter.get_child_content('name') counter_value = counter.get_child_content('value') counter_data.append({ 'instance-name': instance_name, 'instance-uuid': instance_uuid, 'node-name': node_name, 'timestamp': timestamp, counter_name: counter_value, }) return counter_data @na_utils.trace def setup_security_services(self, security_services, vserver_client, vserver_name): api_args = { 'name-mapping-switch': [ {'nmswitch': 'ldap'}, {'nmswitch': 'file'} ], 
'name-server-switch': [ {'nsswitch': 'ldap'}, {'nsswitch': 'file'} ], 'vserver-name': vserver_name, } self.send_request('vserver-modify', api_args) for security_service in security_services: if security_service['type'].lower() == 'ldap': vserver_client.configure_ldap(security_service) elif security_service['type'].lower() == 'active_directory': vserver_client.configure_active_directory(security_service, vserver_name) elif security_service['type'].lower() == 'kerberos': self.create_kerberos_realm(security_service) vserver_client.configure_kerberos(security_service, vserver_name) else: msg = _('Unsupported security service type %s for ' 'Data ONTAP driver') raise exception.NetAppException(msg % security_service['type']) @na_utils.trace def enable_nfs(self, versions): """Enables NFS on Vserver.""" self.send_request('nfs-enable') self._enable_nfs_protocols(versions) self._create_default_nfs_export_rules() @na_utils.trace def _enable_nfs_protocols(self, versions): """Set the enabled NFS protocol versions.""" nfs3 = 'true' if 'nfs3' in versions else 'false' nfs40 = 'true' if 'nfs4.0' in versions else 'false' nfs41 = 'true' if 'nfs4.1' in versions else 'false' nfs_service_modify_args = { 'is-nfsv3-enabled': nfs3, 'is-nfsv40-enabled': nfs40, 'is-nfsv41-enabled': nfs41, } self.send_request('nfs-service-modify', nfs_service_modify_args) @na_utils.trace def _create_default_nfs_export_rules(self): """Create the default export rule for the NFS service.""" export_rule_create_args = { 'client-match': '0.0.0.0/0', 'policy-name': 'default', 'ro-rule': { 'security-flavor': 'any', }, 'rw-rule': { 'security-flavor': 'never', }, } self.send_request('export-rule-create', export_rule_create_args) export_rule_create_args['client-match'] = '::/0' self.send_request('export-rule-create', export_rule_create_args) @na_utils.trace def configure_ldap(self, security_service): """Configures LDAP on Vserver.""" config_name = hashlib.md5(six.b(security_service['id'])).hexdigest() api_args = { 'ldap-client-config': config_name, 'servers': { 'ip-address': security_service['server'], }, 'tcp-port': '389', 'schema': 'RFC-2307', 'bind-password': security_service['password'], } self.send_request('ldap-client-create', api_args) api_args = {'client-config': config_name, 'client-enabled': 'true'} self.send_request('ldap-config-create', api_args) @na_utils.trace def configure_active_directory(self, security_service, vserver_name): """Configures AD on Vserver.""" self.configure_dns(security_service) self.set_preferred_dc(security_service) # 'cifs-server' is CIFS Server NetBIOS Name, max length is 15. # Should be unique within each domain (data['domain']). # Cut to 15 char with begin and end, attempt to make valid DNS hostname cifs_server = (vserver_name[0:8] + '-' + vserver_name[-6:]).replace('_', '-').upper() api_args = { 'admin-username': security_service['user'], 'admin-password': security_service['password'], 'force-account-overwrite': 'true', 'cifs-server': cifs_server, 'domain': security_service['domain'], } if security_service['ou'] is not None: api_args['organizational-unit'] = security_service['ou'] try: LOG.debug("Trying to setup CIFS server with data: %s", api_args) self.send_request('cifs-server-create', api_args) except netapp_api.NaApiError as e: msg = _("Failed to create CIFS server entry. 
%s") raise exception.NetAppException(msg % e.message) @na_utils.trace def create_kerberos_realm(self, security_service): """Creates Kerberos realm on cluster.""" api_args = { 'admin-server-ip': security_service['server'], 'admin-server-port': '749', 'clock-skew': '5', 'comment': '', 'config-name': security_service['id'], 'kdc-ip': security_service['server'], 'kdc-port': '88', 'kdc-vendor': 'other', 'password-server-ip': security_service['server'], 'password-server-port': '464', 'realm': security_service['domain'].upper(), } try: self.send_request('kerberos-realm-create', api_args) except netapp_api.NaApiError as e: if e.code == netapp_api.EDUPLICATEENTRY: LOG.debug('Kerberos realm config already exists.') else: msg = _('Failed to create Kerberos realm. %s') raise exception.NetAppException(msg % e.message) @na_utils.trace def configure_kerberos(self, security_service, vserver_name): """Configures Kerberos for NFS on Vserver.""" self.configure_dns(security_service) spn = self._get_kerberos_service_principal_name( security_service, vserver_name) lifs = self.list_network_interfaces() if not lifs: msg = _("Cannot set up Kerberos. There are no LIFs configured.") raise exception.NetAppException(msg) for lif_name in lifs: api_args = { 'admin-password': security_service['password'], 'admin-user-name': security_service['user'], 'interface-name': lif_name, 'is-kerberos-enabled': 'true', 'service-principal-name': spn, } self.send_request('kerberos-config-modify', api_args) @na_utils.trace def _get_kerberos_service_principal_name(self, security_service, vserver_name): return ('nfs/' + vserver_name.replace('_', '-') + '.' + security_service['domain'] + '@' + security_service['domain'].upper()) @na_utils.trace def configure_dns(self, security_service): api_args = { 'domains': { 'string': security_service['domain'], }, 'name-servers': [], 'dns-state': 'enabled', } for dns_ip in security_service['dns_ip'].split(','): api_args['name-servers'].append({'ip-address': dns_ip.strip()}) try: self.send_request('net-dns-create', api_args) except netapp_api.NaApiError as e: if e.code == netapp_api.EDUPLICATEENTRY: LOG.error("DNS exists for Vserver.") else: msg = _("Failed to configure DNS. %s") raise exception.NetAppException(msg % e.message) @na_utils.trace def set_preferred_dc(self, security_service): # server is optional if not security_service['server']: return api_args = { 'preferred-dc': [], 'domain': security_service['domain'], } for dc_ip in security_service['server'].split(','): api_args['preferred-dc'].append({'string': dc_ip.strip()}) try: self.send_request('cifs-domain-preferred-dc-add', api_args) except netapp_api.NaApiError as e: msg = _("Failed to set preferred DC. 
%s") raise exception.NetAppException(msg % e.message) @na_utils.trace def create_volume(self, aggregate_name, volume_name, size_gb, thin_provisioned=False, snapshot_policy=None, language=None, dedup_enabled=False, compression_enabled=False, max_files=None, snapshot_reserve=None, volume_type='rw', qos_policy_group=None, encrypt=False, **options): """Creates a volume.""" api_args = { 'containing-aggr-name': aggregate_name, 'size': six.text_type(size_gb) + 'g', 'volume': volume_name, 'volume-type': volume_type, } if volume_type != 'dp': api_args['junction-path'] = '/%s' % volume_name if thin_provisioned: api_args['space-reserve'] = 'none' if snapshot_policy is not None: api_args['snapshot-policy'] = snapshot_policy if language is not None: api_args['language-code'] = language if snapshot_reserve is not None: api_args['percentage-snapshot-reserve'] = six.text_type( snapshot_reserve) if qos_policy_group is not None: api_args['qos-policy-group-name'] = qos_policy_group if encrypt is True: if not self.features.FLEXVOL_ENCRYPTION: msg = 'Flexvol encryption is not supported on this backend.' raise exception.NetAppException(msg) else: api_args['encrypt'] = 'true' self.send_request('volume-create', api_args) self.update_volume_efficiency_attributes(volume_name, dedup_enabled, compression_enabled) if max_files is not None: self.set_volume_max_files(volume_name, max_files) @na_utils.trace def enable_dedup(self, volume_name): """Enable deduplication on volume.""" api_args = {'path': '/vol/%s' % volume_name} self.send_request('sis-enable', api_args) @na_utils.trace def disable_dedup(self, volume_name): """Disable deduplication on volume.""" api_args = {'path': '/vol/%s' % volume_name} self.send_request('sis-disable', api_args) @na_utils.trace def enable_compression(self, volume_name): """Enable compression on volume.""" api_args = { 'path': '/vol/%s' % volume_name, 'enable-compression': 'true' } self.send_request('sis-set-config', api_args) @na_utils.trace def disable_compression(self, volume_name): """Disable compression on volume.""" api_args = { 'path': '/vol/%s' % volume_name, 'enable-compression': 'false' } self.send_request('sis-set-config', api_args) @na_utils.trace def get_volume_efficiency_status(self, volume_name): """Get dedupe & compression status for a volume.""" api_args = { 'query': { 'sis-status-info': { 'path': '/vol/%s' % volume_name, }, }, 'desired-attributes': { 'sis-status-info': { 'state': None, 'is-compression-enabled': None, }, }, } try: result = self.send_iter_request('sis-get-iter', api_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') sis_status_info = attributes_list.get_child_by_name( 'sis-status-info') or netapp_api.NaElement('none') except exception.NetAppException: msg = _('Failed to get volume efficiency status for %s.') LOG.error(msg, volume_name) sis_status_info = netapp_api.NaElement('none') return { 'dedupe': True if 'enabled' == sis_status_info.get_child_content( 'state') else False, 'compression': True if 'true' == sis_status_info.get_child_content( 'is-compression-enabled') else False, } @na_utils.trace def set_volume_max_files(self, volume_name, max_files): """Set flexvol file limit.""" api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'attributes': { 'volume-attributes': { 'volume-inode-attributes': { 'files-total': max_files, }, }, }, } self.send_request('volume-modify-iter', api_args) @na_utils.trace def set_volume_size(self, volume_name, size_gb): """Set 
volume size.""" api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'attributes': { 'volume-attributes': { 'volume-space-attributes': { 'size': int(size_gb) * units.Gi, }, }, }, } result = self.send_request('volume-modify-iter', api_args) failures = result.get_child_content('num-failed') if failures and int(failures) > 0: failure_list = result.get_child_by_name( 'failure-list') or netapp_api.NaElement('none') errors = failure_list.get_children() if errors: raise netapp_api.NaApiError( errors[0].get_child_content('error-code'), errors[0].get_child_content('error-message')) @na_utils.trace def set_volume_snapdir_access(self, volume_name, hide_snapdir): """Set volume snapshot directory visibility.""" api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'attributes': { 'volume-attributes': { 'volume-snapshot-attributes': { 'snapdir-access-enabled': six.text_type( not hide_snapdir).lower(), }, }, }, } result = self.send_request('volume-modify-iter', api_args) failures = result.get_child_content('num-failed') if failures and int(failures) > 0: failure_list = result.get_child_by_name( 'failure-list') or netapp_api.NaElement('none') errors = failure_list.get_children() if errors: raise netapp_api.NaApiError( errors[0].get_child_content('error-code'), errors[0].get_child_content('error-message')) @na_utils.trace def set_volume_filesys_size_fixed(self, volume_name, filesys_size_fixed=False): """Set volume file system size fixed to true/false.""" api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'attributes': { 'volume-attributes': { 'volume-space-attributes': { 'is-filesys-size-fixed': six.text_type( filesys_size_fixed).lower(), }, }, }, } result = self.send_request('volume-modify-iter', api_args) failures = result.get_child_content('num-failed') if failures and int(failures) > 0: failure_list = result.get_child_by_name( 'failure-list') or netapp_api.NaElement('none') errors = failure_list.get_children() if errors: raise netapp_api.NaApiError( errors[0].get_child_content('error-code'), errors[0].get_child_content('error-message')) @na_utils.trace def set_volume_security_style(self, volume_name, security_style='unix'): """Set volume security style""" api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'attributes': { 'volume-attributes': { 'volume-security-attributes': { 'style': security_style, }, }, }, } result = self.send_request('volume-modify-iter', api_args) failures = result.get_child_content('num-failed') if failures and int(failures) > 0: failure_list = result.get_child_by_name( 'failure-list') or netapp_api.NaElement('none') errors = failure_list.get_children() if errors: raise netapp_api.NaApiError( errors[0].get_child_content('error-code'), errors[0].get_child_content('error-message')) @na_utils.trace def set_volume_name(self, volume_name, new_volume_name): """Set flexvol name.""" api_args = { 'volume': volume_name, 'new-volume-name': new_volume_name, } self.send_request('volume-rename', api_args) @na_utils.trace def rename_vserver(self, vserver_name, new_vserver_name): """Rename a vserver.""" api_args = { 'vserver-name': vserver_name, 'new-name': new_vserver_name, } self.send_request('vserver-rename', api_args) @na_utils.trace def modify_volume(self, aggregate_name, volume_name, thin_provisioned=False, snapshot_policy=None, language=None, dedup_enabled=False, compression_enabled=False, 
max_files=None, qos_policy_group=None, hide_snapdir=None, **options): """Update backend volume for a share as necessary.""" api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'containing-aggregate-name': aggregate_name, 'name': volume_name, }, }, }, 'attributes': { 'volume-attributes': { 'volume-inode-attributes': {}, 'volume-language-attributes': {}, 'volume-snapshot-attributes': {}, 'volume-space-attributes': { 'space-guarantee': ('none' if thin_provisioned else 'volume'), }, }, }, } if language: api_args['attributes']['volume-attributes'][ 'volume-language-attributes']['language'] = language if max_files: api_args['attributes']['volume-attributes'][ 'volume-inode-attributes']['files-total'] = max_files if snapshot_policy: api_args['attributes']['volume-attributes'][ 'volume-snapshot-attributes'][ 'snapshot-policy'] = snapshot_policy if qos_policy_group: api_args['attributes']['volume-attributes'][ 'volume-qos-attributes'] = { 'policy-group-name': qos_policy_group, } if hide_snapdir in (True, False): # Value of hide_snapdir needs to be inverted for ZAPI parameter api_args['attributes']['volume-attributes'][ 'volume-snapshot-attributes'][ 'snapdir-access-enabled'] = six.text_type( not hide_snapdir).lower() self.send_request('volume-modify-iter', api_args) # Efficiency options must be handled separately self.update_volume_efficiency_attributes(volume_name, dedup_enabled, compression_enabled) @na_utils.trace def update_volume_efficiency_attributes(self, volume_name, dedup_enabled, compression_enabled): """Update dedupe & compression attributes to match desired values.""" efficiency_status = self.get_volume_efficiency_status(volume_name) # cDOT compression requires dedup to be enabled dedup_enabled = dedup_enabled or compression_enabled # enable/disable dedup if needed if dedup_enabled and not efficiency_status['dedupe']: self.enable_dedup(volume_name) elif not dedup_enabled and efficiency_status['dedupe']: self.disable_dedup(volume_name) # enable/disable compression if needed if compression_enabled and not efficiency_status['compression']: self.enable_compression(volume_name) elif not compression_enabled and efficiency_status['compression']: self.disable_compression(volume_name) @na_utils.trace def volume_exists(self, volume_name): """Checks if volume exists.""" LOG.debug('Checking if volume %s exists', volume_name) api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'desired-attributes': { 'volume-attributes': { 'volume-id-attributes': { 'name': None, }, }, }, } result = self.send_iter_request('volume-get-iter', api_args) return self._has_records(result) @na_utils.trace def is_flexvol_encrypted(self, volume_name, vserver_name): """Checks whether the volume is encrypted or not.""" if not self.features.FLEXVOL_ENCRYPTION: return False api_args = { 'query': { 'volume-attributes': { 'encrypt': 'true', 'volume-id-attributes': { 'name': volume_name, 'owning-vserver-name': vserver_name, }, }, }, 'desired-attributes': { 'volume-attributes': { 'encrypt': None, }, }, } result = self.send_iter_request('volume-get-iter', api_args) if self._has_records(result): attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') volume_attributes = attributes_list.get_child_by_name( 'volume-attributes') or netapp_api.NaElement('none') encrypt = volume_attributes.get_child_content('encrypt') if encrypt: return True return False @na_utils.trace def get_aggregate_for_volume(self, volume_name): """Get the 
name of the aggregate containing a volume.""" api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'desired-attributes': { 'volume-attributes': { 'volume-id-attributes': { 'containing-aggregate-name': None, 'name': None, }, }, }, } result = self.send_iter_request('volume-get-iter', api_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') volume_attributes = attributes_list.get_child_by_name( 'volume-attributes') or netapp_api.NaElement('none') volume_id_attributes = volume_attributes.get_child_by_name( 'volume-id-attributes') or netapp_api.NaElement('none') aggregate = volume_id_attributes.get_child_content( 'containing-aggregate-name') if not aggregate: msg = _('Could not find aggregate for volume %s.') raise exception.NetAppException(msg % volume_name) return aggregate @na_utils.trace def volume_has_luns(self, volume_name): """Checks if volume has LUNs.""" LOG.debug('Checking if volume %s has LUNs', volume_name) api_args = { 'query': { 'lun-info': { 'volume': volume_name, }, }, 'desired-attributes': { 'lun-info': { 'path': None, }, }, } result = self.send_iter_request('lun-get-iter', api_args) return self._has_records(result) @na_utils.trace def volume_has_junctioned_volumes(self, volume_name): """Checks if volume has volumes mounted beneath its junction path.""" junction_path = self.get_volume_junction_path(volume_name) if not junction_path: return False api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'junction-path': junction_path + '/*', }, }, }, 'desired-attributes': { 'volume-attributes': { 'volume-id-attributes': { 'name': None, }, }, }, } result = self.send_iter_request('volume-get-iter', api_args) return self._has_records(result) @na_utils.trace def get_volume(self, volume_name): """Returns the volume with the specified name, if present.""" api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'desired-attributes': { 'volume-attributes': { 'volume-id-attributes': { 'containing-aggregate-name': None, 'junction-path': None, 'name': None, 'owning-vserver-name': None, 'type': None, 'style': None, }, 'volume-qos-attributes': { 'policy-group-name': None, }, 'volume-space-attributes': { 'size': None, }, }, }, } result = self.send_request('volume-get-iter', api_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') volume_attributes_list = attributes_list.get_children() if not self._has_records(result): raise exception.StorageResourceNotFound(name=volume_name) elif len(volume_attributes_list) > 1: msg = _('Could not find unique volume %(vol)s.') msg_args = {'vol': volume_name} raise exception.NetAppException(msg % msg_args) volume_attributes = volume_attributes_list[0] volume_id_attributes = volume_attributes.get_child_by_name( 'volume-id-attributes') or netapp_api.NaElement('none') volume_qos_attributes = volume_attributes.get_child_by_name( 'volume-qos-attributes') or netapp_api.NaElement('none') volume_space_attributes = volume_attributes.get_child_by_name( 'volume-space-attributes') or netapp_api.NaElement('none') volume = { 'aggregate': volume_id_attributes.get_child_content( 'containing-aggregate-name'), 'junction-path': volume_id_attributes.get_child_content( 'junction-path'), 'name': volume_id_attributes.get_child_content('name'), 'owning-vserver-name': volume_id_attributes.get_child_content( 'owning-vserver-name'), 'type': 
volume_id_attributes.get_child_content('type'), 'style': volume_id_attributes.get_child_content('style'), 'size': volume_space_attributes.get_child_content('size'), 'qos-policy-group-name': volume_qos_attributes.get_child_content( 'policy-group-name') } return volume @na_utils.trace def get_volume_at_junction_path(self, junction_path): """Returns the volume with the specified junction path, if present.""" if not junction_path: return None api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'junction-path': junction_path, }, }, }, 'desired-attributes': { 'volume-attributes': { 'volume-id-attributes': { 'containing-aggregate-name': None, 'junction-path': None, 'name': None, 'type': None, 'style': None, }, 'volume-space-attributes': { 'size': None, } }, }, } result = self.send_iter_request('volume-get-iter', api_args) if not self._has_records(result): return None attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') volume_attributes = attributes_list.get_child_by_name( 'volume-attributes') or netapp_api.NaElement('none') volume_id_attributes = volume_attributes.get_child_by_name( 'volume-id-attributes') or netapp_api.NaElement('none') volume_space_attributes = volume_attributes.get_child_by_name( 'volume-space-attributes') or netapp_api.NaElement('none') volume = { 'aggregate': volume_id_attributes.get_child_content( 'containing-aggregate-name'), 'junction-path': volume_id_attributes.get_child_content( 'junction-path'), 'name': volume_id_attributes.get_child_content('name'), 'type': volume_id_attributes.get_child_content('type'), 'style': volume_id_attributes.get_child_content('style'), 'size': volume_space_attributes.get_child_content('size'), } return volume @na_utils.trace def get_volume_to_manage(self, aggregate_name, volume_name): """Get flexvol to be managed by Manila.""" api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'containing-aggregate-name': aggregate_name, 'name': volume_name, }, }, }, 'desired-attributes': { 'volume-attributes': { 'volume-id-attributes': { 'containing-aggregate-name': None, 'junction-path': None, 'name': None, 'type': None, 'style': None, 'owning-vserver-name': None, }, 'volume-qos-attributes': { 'policy-group-name': None, }, 'volume-space-attributes': { 'size': None, }, }, }, } result = self.send_iter_request('volume-get-iter', api_args) if not self._has_records(result): return None attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') volume_attributes = attributes_list.get_child_by_name( 'volume-attributes') or netapp_api.NaElement('none') volume_id_attributes = volume_attributes.get_child_by_name( 'volume-id-attributes') or netapp_api.NaElement('none') volume_qos_attributes = volume_attributes.get_child_by_name( 'volume-qos-attributes') or netapp_api.NaElement('none') volume_space_attributes = volume_attributes.get_child_by_name( 'volume-space-attributes') or netapp_api.NaElement('none') volume = { 'aggregate': volume_id_attributes.get_child_content( 'containing-aggregate-name'), 'junction-path': volume_id_attributes.get_child_content( 'junction-path'), 'name': volume_id_attributes.get_child_content('name'), 'type': volume_id_attributes.get_child_content('type'), 'style': volume_id_attributes.get_child_content('style'), 'owning-vserver-name': volume_id_attributes.get_child_content( 'owning-vserver-name'), 'size': volume_space_attributes.get_child_content('size'), 'qos-policy-group-name': volume_qos_attributes.get_child_content( 
'policy-group-name') } return volume @na_utils.trace def create_volume_clone(self, volume_name, parent_volume_name, parent_snapshot_name=None, split=False, qos_policy_group=None, **options): """Clones a volume.""" api_args = { 'volume': volume_name, 'parent-volume': parent_volume_name, 'parent-snapshot': parent_snapshot_name, 'junction-path': '/%s' % volume_name, } if qos_policy_group is not None: api_args['qos-policy-group-name'] = qos_policy_group self.send_request('volume-clone-create', api_args) if split: self.split_volume_clone(volume_name) @na_utils.trace def split_volume_clone(self, volume_name): """Begins splitting a clone from its parent.""" try: api_args = {'volume': volume_name} self.send_request('volume-clone-split-start', api_args) except netapp_api.NaApiError as e: if e.code == netapp_api.EVOL_CLONE_BEING_SPLIT: return raise @na_utils.trace def check_volume_clone_split_completed(self, volume_name): """Check if volume clone split operation already finished""" return self.get_volume_clone_parent_snaphot(volume_name) is None @na_utils.trace def get_volume_clone_parent_snaphot(self, volume_name): """Gets volume's clone parent. Return the snapshot name of a volume's clone parent, or None if it doesn't exist. """ api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name } } }, 'desired-attributes': { 'volume-attributes': { 'volume-clone-attributes': { 'volume-clone-parent-attributes': { 'snapshot-name': '' } } } } } result = self.send_iter_request('volume-get-iter', api_args) if not self._has_records(result): return None attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') volume_attributes = attributes_list.get_child_by_name( 'volume-attributes') or netapp_api.NaElement('none') vol_clone_attrs = volume_attributes.get_child_by_name( 'volume-clone-attributes') or netapp_api.NaElement('none') vol_clone_parent_atts = vol_clone_attrs.get_child_by_name( 'volume-clone-parent-attributes') or netapp_api.NaElement( 'none') snapshot_name = vol_clone_parent_atts.get_child_content( 'snapshot-name') return snapshot_name @na_utils.trace def get_clone_children_for_snapshot(self, volume_name, snapshot_name): """Returns volumes that are keeping a snapshot locked.""" api_args = { 'query': { 'volume-attributes': { 'volume-clone-attributes': { 'volume-clone-parent-attributes': { 'name': volume_name, 'snapshot-name': snapshot_name, }, }, }, }, 'desired-attributes': { 'volume-attributes': { 'volume-id-attributes': { 'name': None, }, }, }, } result = self.send_iter_request('volume-get-iter', api_args) if not self._has_records(result): return [] volume_list = [] attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') for volume_attributes in attributes_list.get_children(): volume_id_attributes = volume_attributes.get_child_by_name( 'volume-id-attributes') or netapp_api.NaElement('none') volume_list.append({ 'name': volume_id_attributes.get_child_content('name'), }) return volume_list @na_utils.trace def get_volume_junction_path(self, volume_name, is_style_cifs=False): """Gets a volume junction path.""" api_args = { 'volume': volume_name, 'is-style-cifs': six.text_type(is_style_cifs).lower(), } result = self.send_request('volume-get-volume-path', api_args) return result.get_child_content('junction') @na_utils.trace def mount_volume(self, volume_name, junction_path=None): """Mounts a volume on a junction path.""" api_args = { 'volume-name': volume_name, 'junction-path': (junction_path if 
                              junction_path else '/%s' % volume_name)
        }
        self.send_request('volume-mount', api_args)

    @na_utils.trace
    def offline_volume(self, volume_name):
        """Offlines a volume."""
        try:
            self.send_request('volume-offline', {'name': volume_name})
        except netapp_api.NaApiError as e:
            if e.code == netapp_api.EVOLUMEOFFLINE:
                return
            raise

    @na_utils.trace
    def _unmount_volume(self, volume_name, force=False):
        """Unmounts a volume."""
        api_args = {
            'volume-name': volume_name,
            'force': six.text_type(force).lower(),
        }
        try:
            self.send_request('volume-unmount', api_args)
        except netapp_api.NaApiError as e:
            if e.code == netapp_api.EVOL_NOT_MOUNTED:
                return
            raise

    @na_utils.trace
    def unmount_volume(self, volume_name, force=False, wait_seconds=30):
        """Unmounts a volume, retrying if a clone split is ongoing.

        NOTE(cknight): While unlikely to happen in normal operation, any
        client that tries to delete volumes immediately after creating volume
        clones is likely to experience failures if cDOT isn't quite ready for
        the delete. The volume unmount is the first operation in the delete
        path that fails in this case, and there is no proactive check we can
        use to reliably predict the failure. And there isn't a specific error
        code from volume-unmount, so we have to check for a generic error
        code plus certain language in the error message. It's ugly, but it
        works, and it's better than hard-coding a fixed delay.
        """
        # Do the unmount, handling split-related errors with retries.
        retry_interval = 3  # seconds
        for retry in range(int(wait_seconds / retry_interval)):
            try:
                self._unmount_volume(volume_name, force=force)
                LOG.debug('Volume %s unmounted.', volume_name)
                return
            except netapp_api.NaApiError as e:
                if e.code == netapp_api.EAPIERROR and 'job ID' in e.message:
                    msg = ('Could not unmount volume %(volume)s due to '
                           'ongoing volume operation: %(exception)s')
                    msg_args = {'volume': volume_name, 'exception': e}
                    LOG.warning(msg, msg_args)
                    time.sleep(retry_interval)
                    continue
                raise

        msg = _('Failed to unmount volume %(volume)s after '
                'waiting for %(wait_seconds)s seconds.')
        msg_args = {'volume': volume_name, 'wait_seconds': wait_seconds}
        LOG.error(msg, msg_args)
        raise exception.NetAppException(msg % msg_args)

    @na_utils.trace
    def delete_volume(self, volume_name):
        """Deletes a volume."""
        self.send_request('volume-destroy', {'name': volume_name})

    @na_utils.trace
    def create_snapshot(self, volume_name, snapshot_name):
        """Creates a volume snapshot."""
        api_args = {'volume': volume_name, 'snapshot': snapshot_name}
        self.send_request('snapshot-create', api_args)

    @na_utils.trace
    def snapshot_exists(self, snapshot_name, volume_name):
        """Checks if Snapshot exists for a specified volume."""
        LOG.debug('Checking if snapshot %(snapshot)s exists for '
                  'volume %(volume)s',
                  {'snapshot': snapshot_name, 'volume': volume_name})
        api_args = {
            'query': {
                'snapshot-info': {
                    'name': snapshot_name,
                    'volume': volume_name,
                },
            },
            'desired-attributes': {
                'snapshot-info': {
                    'name': None,
                    'volume': None,
                    'busy': None,
                    'snapshot-owners-list': {
                        'snapshot-owner': None,
                    }
                },
            },
        }
        result = self.send_request('snapshot-get-iter', api_args)

        error_record_list = result.get_child_by_name(
            'volume-errors') or netapp_api.NaElement('none')
        errors = error_record_list.get_children()

        if errors:
            error = errors[0]
            error_code = error.get_child_content('errno')
            error_reason = error.get_child_content('reason')
            msg = _('Could not read information for snapshot %(name)s. '
                    'Code: %(code)s. 
Reason: %(reason)s') msg_args = { 'name': snapshot_name, 'code': error_code, 'reason': error_reason } if error_code == netapp_api.ESNAPSHOTNOTALLOWED: raise exception.SnapshotUnavailable(msg % msg_args) else: raise exception.NetAppException(msg % msg_args) return self._has_records(result) @na_utils.trace def get_snapshot(self, volume_name, snapshot_name): """Gets a single snapshot.""" api_args = { 'query': { 'snapshot-info': { 'name': snapshot_name, 'volume': volume_name, }, }, 'desired-attributes': { 'snapshot-info': { 'access-time': None, 'name': None, 'volume': None, 'busy': None, 'snapshot-owners-list': { 'snapshot-owner': None, } }, }, } result = self.send_request('snapshot-get-iter', api_args) error_record_list = result.get_child_by_name( 'volume-errors') or netapp_api.NaElement('none') errors = error_record_list.get_children() if errors: error = errors[0] error_code = error.get_child_content('errno') error_reason = error.get_child_content('reason') msg = _('Could not read information for snapshot %(name)s. ' 'Code: %(code)s. Reason: %(reason)s') msg_args = { 'name': snapshot_name, 'code': error_code, 'reason': error_reason } if error_code == netapp_api.ESNAPSHOTNOTALLOWED: raise exception.SnapshotUnavailable(msg % msg_args) else: raise exception.NetAppException(msg % msg_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') snapshot_info_list = attributes_list.get_children() if not self._has_records(result): raise exception.SnapshotResourceNotFound(name=snapshot_name) elif len(snapshot_info_list) > 1: msg = _('Could not find unique snapshot %(snap)s on ' 'volume %(vol)s.') msg_args = {'snap': snapshot_name, 'vol': volume_name} raise exception.NetAppException(msg % msg_args) snapshot_info = snapshot_info_list[0] snapshot = { 'access-time': snapshot_info.get_child_content('access-time'), 'name': snapshot_info.get_child_content('name'), 'volume': snapshot_info.get_child_content('volume'), 'busy': strutils.bool_from_string( snapshot_info.get_child_content('busy')), } snapshot_owners_list = snapshot_info.get_child_by_name( 'snapshot-owners-list') or netapp_api.NaElement('none') snapshot_owners = set([ snapshot_owner.get_child_content('owner') for snapshot_owner in snapshot_owners_list.get_children()]) snapshot['owners'] = snapshot_owners return snapshot @na_utils.trace def rename_snapshot(self, volume_name, snapshot_name, new_snapshot_name): api_args = { 'volume': volume_name, 'current-name': snapshot_name, 'new-name': new_snapshot_name } self.send_request('snapshot-rename', api_args) @na_utils.trace def restore_snapshot(self, volume_name, snapshot_name): """Reverts a volume to the specified snapshot.""" api_args = { 'volume': volume_name, 'snapshot': snapshot_name, } self.send_request('snapshot-restore-volume', api_args) @na_utils.trace def delete_snapshot(self, volume_name, snapshot_name, ignore_owners=False): """Deletes a volume snapshot.""" ignore_owners = ('true' if strutils.bool_from_string(ignore_owners) else 'false') api_args = { 'volume': volume_name, 'snapshot': snapshot_name, 'ignore-owners': ignore_owners, } self.send_request('snapshot-delete', api_args) @na_utils.trace def soft_delete_snapshot(self, volume_name, snapshot_name): """Deletes a volume snapshot, or renames it if delete fails.""" try: self.delete_snapshot(volume_name, snapshot_name) except netapp_api.NaApiError: self.rename_snapshot(volume_name, snapshot_name, DELETED_PREFIX + snapshot_name) msg = _('Soft-deleted snapshot %(snapshot)s on volume %(volume)s.') msg_args 
= {'snapshot': snapshot_name, 'volume': volume_name} LOG.info(msg, msg_args) @na_utils.trace def prune_deleted_snapshots(self): """Deletes non-busy snapshots that were previously soft-deleted.""" deleted_snapshots_map = self._get_deleted_snapshots() for vserver in deleted_snapshots_map: client = copy.deepcopy(self) client.set_vserver(vserver) for snapshot in deleted_snapshots_map[vserver]: try: client.delete_snapshot(snapshot['volume'], snapshot['name']) except netapp_api.NaApiError: msg = _('Could not delete snapshot %(snap)s on ' 'volume %(volume)s.') msg_args = { 'snap': snapshot['name'], 'volume': snapshot['volume'], } LOG.exception(msg, msg_args) @na_utils.trace def _get_deleted_snapshots(self): """Returns non-busy, soft-deleted snapshots suitable for reaping.""" api_args = { 'query': { 'snapshot-info': { 'name': DELETED_PREFIX + '*', 'busy': 'false', }, }, 'desired-attributes': { 'snapshot-info': { 'name': None, 'vserver': None, 'volume': None, }, }, } result = self.send_iter_request('snapshot-get-iter', api_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') # Build a map of snapshots, one list of snapshots per vserver snapshot_map = {} for snapshot_info in attributes_list.get_children(): vserver = snapshot_info.get_child_content('vserver') snapshot_list = snapshot_map.get(vserver, []) snapshot_list.append({ 'name': snapshot_info.get_child_content('name'), 'volume': snapshot_info.get_child_content('volume'), 'vserver': vserver, }) snapshot_map[vserver] = snapshot_list return snapshot_map @na_utils.trace def create_cg_snapshot(self, volume_names, snapshot_name): """Creates a consistency group snapshot of one or more flexvols.""" cg_id = self._start_cg_snapshot(volume_names, snapshot_name) if not cg_id: msg = _('Could not start consistency group snapshot %s.') raise exception.NetAppException(msg % snapshot_name) self._commit_cg_snapshot(cg_id) @na_utils.trace def _start_cg_snapshot(self, volume_names, snapshot_name): api_args = { 'snapshot': snapshot_name, 'timeout': 'relaxed', 'volumes': [ {'volume-name': volume_name} for volume_name in volume_names ], } result = self.send_request('cg-start', api_args) return result.get_child_content('cg-id') @na_utils.trace def _commit_cg_snapshot(self, cg_id): api_args = {'cg-id': cg_id} self.send_request('cg-commit', api_args) @na_utils.trace def create_cifs_share(self, share_name): share_path = '/%s' % share_name api_args = {'path': share_path, 'share-name': share_name} self.send_request('cifs-share-create', api_args) @na_utils.trace def get_cifs_share_access(self, share_name): api_args = { 'query': { 'cifs-share-access-control': { 'share': share_name, }, }, 'desired-attributes': { 'cifs-share-access-control': { 'user-or-group': None, 'permission': None, }, }, } result = self.send_iter_request('cifs-share-access-control-get-iter', api_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') rules = {} for rule in attributes_list.get_children(): user_or_group = rule.get_child_content('user-or-group') permission = rule.get_child_content('permission') rules[user_or_group] = permission return rules @na_utils.trace def add_cifs_share_access(self, share_name, user_name, readonly): api_args = { 'permission': 'read' if readonly else 'full_control', 'share': share_name, 'user-or-group': user_name, } self.send_request('cifs-share-access-control-create', api_args) @na_utils.trace def modify_cifs_share_access(self, share_name, user_name, readonly): api_args = { 
'permission': 'read' if readonly else 'full_control', 'share': share_name, 'user-or-group': user_name, } self.send_request('cifs-share-access-control-modify', api_args) @na_utils.trace def remove_cifs_share_access(self, share_name, user_name): api_args = {'user-or-group': user_name, 'share': share_name} self.send_request('cifs-share-access-control-delete', api_args) @na_utils.trace def remove_cifs_share(self, share_name): self.send_request('cifs-share-delete', {'share-name': share_name}) @na_utils.trace def add_nfs_export_rule(self, policy_name, client_match, readonly): rule_indices = self._get_nfs_export_rule_indices(policy_name, client_match) if not rule_indices: self._add_nfs_export_rule(policy_name, client_match, readonly) else: # Update first rule and delete the rest self._update_nfs_export_rule( policy_name, client_match, readonly, rule_indices.pop(0)) self._remove_nfs_export_rules(policy_name, rule_indices) @na_utils.trace def _add_nfs_export_rule(self, policy_name, client_match, readonly): api_args = { 'policy-name': policy_name, 'client-match': client_match, 'ro-rule': { 'security-flavor': 'sys', }, 'rw-rule': { 'security-flavor': 'sys' if not readonly else 'never', }, 'super-user-security': { 'security-flavor': 'sys', }, } self.send_request('export-rule-create', api_args) @na_utils.trace def _update_nfs_export_rule(self, policy_name, client_match, readonly, rule_index): api_args = { 'policy-name': policy_name, 'rule-index': rule_index, 'client-match': client_match, 'ro-rule': { 'security-flavor': 'sys' }, 'rw-rule': { 'security-flavor': 'sys' if not readonly else 'never' }, 'super-user-security': { 'security-flavor': 'sys' }, } self.send_request('export-rule-modify', api_args) @na_utils.trace def _get_nfs_export_rule_indices(self, policy_name, client_match): api_args = { 'query': { 'export-rule-info': { 'policy-name': policy_name, 'client-match': client_match, }, }, 'desired-attributes': { 'export-rule-info': { 'vserver-name': None, 'policy-name': None, 'client-match': None, 'rule-index': None, }, }, } result = self.send_iter_request('export-rule-get-iter', api_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') export_rule_info_list = attributes_list.get_children() rule_indices = [int(export_rule_info.get_child_content('rule-index')) for export_rule_info in export_rule_info_list] rule_indices.sort() return [six.text_type(rule_index) for rule_index in rule_indices] @na_utils.trace def remove_nfs_export_rule(self, policy_name, client_match): rule_indices = self._get_nfs_export_rule_indices(policy_name, client_match) self._remove_nfs_export_rules(policy_name, rule_indices) @na_utils.trace def _remove_nfs_export_rules(self, policy_name, rule_indices): for rule_index in rule_indices: api_args = { 'policy-name': policy_name, 'rule-index': rule_index } try: self.send_request('export-rule-destroy', api_args) except netapp_api.NaApiError as e: if e.code != netapp_api.EOBJECTNOTFOUND: raise @na_utils.trace def clear_nfs_export_policy_for_volume(self, volume_name): self.set_nfs_export_policy_for_volume(volume_name, 'default') @na_utils.trace def set_nfs_export_policy_for_volume(self, volume_name, policy_name): api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'attributes': { 'volume-attributes': { 'volume-export-attributes': { 'policy': policy_name, }, }, }, } self.send_request('volume-modify-iter', api_args) @na_utils.trace def set_qos_policy_group_for_volume(self, volume_name, 
qos_policy_group_name): api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'attributes': { 'volume-attributes': { 'volume-qos-attributes': { 'policy-group-name': qos_policy_group_name, }, }, }, } self.send_request('volume-modify-iter', api_args) @na_utils.trace def get_nfs_export_policy_for_volume(self, volume_name): """Get the name of the export policy for a volume.""" api_args = { 'query': { 'volume-attributes': { 'volume-id-attributes': { 'name': volume_name, }, }, }, 'desired-attributes': { 'volume-attributes': { 'volume-export-attributes': { 'policy': None, }, }, }, } result = self.send_iter_request('volume-get-iter', api_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') volume_attributes = attributes_list.get_child_by_name( 'volume-attributes') or netapp_api.NaElement('none') volume_export_attributes = volume_attributes.get_child_by_name( 'volume-export-attributes') or netapp_api.NaElement('none') export_policy = volume_export_attributes.get_child_content('policy') if not export_policy: msg = _('Could not find export policy for volume %s.') raise exception.NetAppException(msg % volume_name) return export_policy @na_utils.trace def create_nfs_export_policy(self, policy_name): api_args = {'policy-name': policy_name} try: self.send_request('export-policy-create', api_args) except netapp_api.NaApiError as e: if e.code != netapp_api.EDUPLICATEENTRY: raise @na_utils.trace def soft_delete_nfs_export_policy(self, policy_name): try: self.delete_nfs_export_policy(policy_name) except netapp_api.NaApiError: # NOTE(cknight): Policy deletion can fail if called too soon after # removing from a flexvol. So rename for later harvesting. self.rename_nfs_export_policy(policy_name, DELETED_PREFIX + policy_name) @na_utils.trace def delete_nfs_export_policy(self, policy_name): api_args = {'policy-name': policy_name} try: self.send_request('export-policy-destroy', api_args) except netapp_api.NaApiError as e: if e.code == netapp_api.EOBJECTNOTFOUND: return raise @na_utils.trace def rename_nfs_export_policy(self, policy_name, new_policy_name): api_args = { 'policy-name': policy_name, 'new-policy-name': new_policy_name } self.send_request('export-policy-rename', api_args) @na_utils.trace def prune_deleted_nfs_export_policies(self): deleted_policy_map = self._get_deleted_nfs_export_policies() for vserver in deleted_policy_map: client = copy.deepcopy(self) client.set_vserver(vserver) for policy in deleted_policy_map[vserver]: try: client.delete_nfs_export_policy(policy) except netapp_api.NaApiError: LOG.debug('Could not delete export policy %s.', policy) @na_utils.trace def _get_deleted_nfs_export_policies(self): api_args = { 'query': { 'export-policy-info': { 'policy-name': DELETED_PREFIX + '*', }, }, 'desired-attributes': { 'export-policy-info': { 'policy-name': None, 'vserver': None, }, }, } result = self.send_iter_request('export-policy-get-iter', api_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') policy_map = {} for export_info in attributes_list.get_children(): vserver = export_info.get_child_content('vserver') policies = policy_map.get(vserver, []) policies.append(export_info.get_child_content('policy-name')) policy_map[vserver] = policies return policy_map @na_utils.trace def _get_ems_log_destination_vserver(self): """Returns the best vserver destination for EMS messages.""" major, minor = self.get_ontapi_version(cached=True) if (major > 1) or (major 
== 1 and minor > 15): # Prefer admin Vserver (requires cluster credentials). admin_vservers = self.list_vservers(vserver_type='admin') if admin_vservers: return admin_vservers[0] # Fall back to data Vserver. data_vservers = self.list_vservers(vserver_type='data') if data_vservers: return data_vservers[0] # If older API version, or no other Vservers found, use node Vserver. node_vservers = self.list_vservers(vserver_type='node') if node_vservers: return node_vservers[0] raise exception.NotFound("No Vserver found to receive EMS messages.") @na_utils.trace def send_ems_log_message(self, message_dict): """Sends a message to the Data ONTAP EMS log.""" # NOTE(cknight): Cannot use deepcopy on the connection context node_client = copy.copy(self) node_client.connection = copy.copy(self.connection) node_client.connection.set_timeout(25) try: node_client.set_vserver(self._get_ems_log_destination_vserver()) node_client.send_request('ems-autosupport-log', message_dict) LOG.debug('EMS executed successfully.') except netapp_api.NaApiError as e: LOG.warning('Failed to invoke EMS. %s', e) @na_utils.trace def get_aggregate(self, aggregate_name): """Get aggregate attributes needed for the storage service catalog.""" if not aggregate_name: return {} desired_attributes = { 'aggr-attributes': { 'aggregate-name': None, 'aggr-raid-attributes': { 'raid-type': None, 'is-hybrid': None, }, }, } try: aggrs = self._get_aggregates(aggregate_names=[aggregate_name], desired_attributes=desired_attributes) except netapp_api.NaApiError: msg = _('Failed to get info for aggregate %s.') LOG.exception(msg, aggregate_name) return {} if len(aggrs) < 1: return {} aggr_attributes = aggrs[0] aggr_raid_attrs = aggr_attributes.get_child_by_name( 'aggr-raid-attributes') or netapp_api.NaElement('none') aggregate = { 'name': aggr_attributes.get_child_content('aggregate-name'), 'raid-type': aggr_raid_attrs.get_child_content('raid-type'), 'is-hybrid': strutils.bool_from_string( aggr_raid_attrs.get_child_content('is-hybrid')), } return aggregate @na_utils.trace def get_aggregate_disk_types(self, aggregate_name): """Get the disk type(s) of an aggregate.""" disk_types = set() disk_types.update(self._get_aggregate_disk_types(aggregate_name)) if self.features.ADVANCED_DISK_PARTITIONING: disk_types.update(self._get_aggregate_disk_types(aggregate_name, shared=True)) return list(disk_types) if disk_types else None @na_utils.trace def _get_aggregate_disk_types(self, aggregate_name, shared=False): """Get the disk type(s) of an aggregate.""" disk_types = set() if shared: disk_raid_info = { 'disk-shared-info': { 'aggregate-list': { 'shared-aggregate-info': { 'aggregate-name': aggregate_name, }, }, }, } else: disk_raid_info = { 'disk-aggregate-info': { 'aggregate-name': aggregate_name, }, } api_args = { 'query': { 'storage-disk-info': { 'disk-raid-info': disk_raid_info, }, }, 'desired-attributes': { 'storage-disk-info': { 'disk-raid-info': { 'effective-disk-type': None, }, }, }, } try: result = self.send_iter_request('storage-disk-get-iter', api_args) except netapp_api.NaApiError: msg = _('Failed to get disk info for aggregate %s.') LOG.exception(msg, aggregate_name) return disk_types attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') for storage_disk_info in attributes_list.get_children(): disk_raid_info = storage_disk_info.get_child_by_name( 'disk-raid-info') or netapp_api.NaElement('none') disk_type = disk_raid_info.get_child_content( 'effective-disk-type') if disk_type: disk_types.add(disk_type) return 
disk_types @na_utils.trace def check_for_cluster_credentials(self): try: self.list_cluster_nodes() # API succeeded, so definitely a cluster management LIF return True except netapp_api.NaApiError as e: if e.code == netapp_api.EAPINOTFOUND: LOG.debug('Not connected to cluster management LIF.') return False else: raise @na_utils.trace def get_cluster_name(self): """Gets cluster name.""" api_args = { 'desired-attributes': { 'cluster-identity-info': { 'cluster-name': None, } } } result = self.send_request('cluster-identity-get', api_args, enable_tunneling=False) attributes = result.get_child_by_name('attributes') cluster_identity = attributes.get_child_by_name( 'cluster-identity-info') return cluster_identity.get_child_content('cluster-name') @na_utils.trace def create_cluster_peer(self, addresses, username=None, password=None, passphrase=None): """Creates a cluster peer relationship.""" api_args = { 'peer-addresses': [ {'remote-inet-address': address} for address in addresses ], } if username: api_args['user-name'] = username if password: api_args['password'] = password if passphrase: api_args['passphrase'] = passphrase self.send_request('cluster-peer-create', api_args, enable_tunneling=False) @na_utils.trace def get_cluster_peers(self, remote_cluster_name=None): """Gets one or more cluster peer relationships.""" api_args = {} if remote_cluster_name: api_args['query'] = { 'cluster-peer-info': { 'remote-cluster-name': remote_cluster_name, } } result = self.send_iter_request('cluster-peer-get-iter', api_args) if not self._has_records(result): return [] cluster_peers = [] for cluster_peer_info in result.get_child_by_name( 'attributes-list').get_children(): cluster_peer = { 'active-addresses': [], 'peer-addresses': [] } active_addresses = cluster_peer_info.get_child_by_name( 'active-addresses') or netapp_api.NaElement('none') for address in active_addresses.get_children(): cluster_peer['active-addresses'].append(address.get_content()) peer_addresses = cluster_peer_info.get_child_by_name( 'peer-addresses') or netapp_api.NaElement('none') for address in peer_addresses.get_children(): cluster_peer['peer-addresses'].append(address.get_content()) cluster_peer['availability'] = cluster_peer_info.get_child_content( 'availability') cluster_peer['cluster-name'] = cluster_peer_info.get_child_content( 'cluster-name') cluster_peer['cluster-uuid'] = cluster_peer_info.get_child_content( 'cluster-uuid') cluster_peer['remote-cluster-name'] = ( cluster_peer_info.get_child_content('remote-cluster-name')) cluster_peer['serial-number'] = ( cluster_peer_info.get_child_content('serial-number')) cluster_peer['timeout'] = cluster_peer_info.get_child_content( 'timeout') cluster_peers.append(cluster_peer) return cluster_peers @na_utils.trace def delete_cluster_peer(self, cluster_name): """Deletes a cluster peer relationship.""" api_args = {'cluster-name': cluster_name} self.send_request('cluster-peer-delete', api_args, enable_tunneling=False) @na_utils.trace def get_cluster_peer_policy(self): """Gets the cluster peering policy configuration.""" if not self.features.CLUSTER_PEER_POLICY: return {} result = self.send_request('cluster-peer-policy-get') attributes = result.get_child_by_name( 'attributes') or netapp_api.NaElement('none') cluster_peer_policy = attributes.get_child_by_name( 'cluster-peer-policy') or netapp_api.NaElement('none') policy = { 'is-unauthenticated-access-permitted': cluster_peer_policy.get_child_content( 'is-unauthenticated-access-permitted'), 'passphrase-minimum-length': 
cluster_peer_policy.get_child_content( 'passphrase-minimum-length'), } if policy['is-unauthenticated-access-permitted'] is not None: policy['is-unauthenticated-access-permitted'] = ( strutils.bool_from_string( policy['is-unauthenticated-access-permitted'])) if policy['passphrase-minimum-length'] is not None: policy['passphrase-minimum-length'] = int( policy['passphrase-minimum-length']) return policy @na_utils.trace def set_cluster_peer_policy(self, is_unauthenticated_access_permitted=None, passphrase_minimum_length=None): """Modifies the cluster peering policy configuration.""" if not self.features.CLUSTER_PEER_POLICY: return if (is_unauthenticated_access_permitted is None and passphrase_minimum_length is None): return api_args = {} if is_unauthenticated_access_permitted is not None: api_args['is-unauthenticated-access-permitted'] = ( 'true' if strutils.bool_from_string( is_unauthenticated_access_permitted) else 'false') if passphrase_minimum_length is not None: api_args['passphrase-minlength'] = six.text_type( passphrase_minimum_length) self.send_request('cluster-peer-policy-modify', api_args) @na_utils.trace def create_vserver_peer(self, vserver_name, peer_vserver_name, peer_cluster_name=None): """Creates a Vserver peer relationship for SnapMirrors.""" api_args = { 'vserver': vserver_name, 'peer-vserver': peer_vserver_name, 'applications': [ {'vserver-peer-application': 'snapmirror'}, ], } if peer_cluster_name: api_args['peer-cluster'] = peer_cluster_name self.send_request('vserver-peer-create', api_args, enable_tunneling=False) @na_utils.trace def delete_vserver_peer(self, vserver_name, peer_vserver_name): """Deletes a Vserver peer relationship.""" api_args = {'vserver': vserver_name, 'peer-vserver': peer_vserver_name} self.send_request('vserver-peer-delete', api_args, enable_tunneling=False) @na_utils.trace def accept_vserver_peer(self, vserver_name, peer_vserver_name): """Accepts a pending Vserver peer relationship.""" api_args = {'vserver': vserver_name, 'peer-vserver': peer_vserver_name} self.send_request('vserver-peer-accept', api_args, enable_tunneling=False) @na_utils.trace def get_vserver_peers(self, vserver_name=None, peer_vserver_name=None): """Gets one or more Vserver peer relationships.""" api_args = None if vserver_name or peer_vserver_name: api_args = {'query': {'vserver-peer-info': {}}} if vserver_name: api_args['query']['vserver-peer-info']['vserver'] = ( vserver_name) if peer_vserver_name: api_args['query']['vserver-peer-info']['peer-vserver'] = ( peer_vserver_name) result = self.send_iter_request('vserver-peer-get-iter', api_args) if not self._has_records(result): return [] vserver_peers = [] for vserver_peer_info in result.get_child_by_name( 'attributes-list').get_children(): vserver_peer = { 'vserver': vserver_peer_info.get_child_content('vserver'), 'peer-vserver': vserver_peer_info.get_child_content('peer-vserver'), 'peer-state': vserver_peer_info.get_child_content('peer-state'), 'peer-cluster': vserver_peer_info.get_child_content('peer-cluster'), } vserver_peers.append(vserver_peer) return vserver_peers def _ensure_snapmirror_v2(self): """Verify support for SnapMirror control plane v2.""" if not self.features.SNAPMIRROR_V2: msg = _('SnapMirror features require Data ONTAP 8.2 or later.') raise exception.NetAppException(msg) @na_utils.trace def create_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume, schedule=None, policy=None, relationship_type='data_protection'): """Creates a SnapMirror relationship (cDOT 8.2 or later 
only).""" self._ensure_snapmirror_v2() api_args = { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, 'relationship-type': relationship_type, } if schedule: api_args['schedule'] = schedule if policy: api_args['policy'] = policy try: self.send_request('snapmirror-create', api_args) except netapp_api.NaApiError as e: if e.code != netapp_api.ERELATION_EXISTS: raise @na_utils.trace def initialize_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume, source_snapshot=None, transfer_priority=None): """Initializes a SnapMirror relationship (cDOT 8.2 or later only).""" self._ensure_snapmirror_v2() api_args = { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, } if source_snapshot: api_args['source-snapshot'] = source_snapshot if transfer_priority: api_args['transfer-priority'] = transfer_priority result = self.send_request('snapmirror-initialize', api_args) result_info = {} result_info['operation-id'] = result.get_child_content( 'result-operation-id') result_info['status'] = result.get_child_content('result-status') result_info['jobid'] = result.get_child_content('result-jobid') result_info['error-code'] = result.get_child_content( 'result-error-code') result_info['error-message'] = result.get_child_content( 'result-error-message') return result_info @na_utils.trace def release_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume, relationship_info_only=False): """Removes a SnapMirror relationship on the source endpoint.""" self._ensure_snapmirror_v2() api_args = { 'query': { 'snapmirror-destination-info': { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, 'relationship-info-only': ('true' if relationship_info_only else 'false'), } } } self.send_request('snapmirror-release-iter', api_args) @na_utils.trace def quiesce_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume): """Disables future transfers to a SnapMirror destination.""" self._ensure_snapmirror_v2() api_args = { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, } self.send_request('snapmirror-quiesce', api_args) @na_utils.trace def abort_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume, clear_checkpoint=False): """Stops ongoing transfers for a SnapMirror relationship.""" self._ensure_snapmirror_v2() api_args = { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, 'clear-checkpoint': 'true' if clear_checkpoint else 'false', } try: self.send_request('snapmirror-abort', api_args) except netapp_api.NaApiError as e: if e.code != netapp_api.ENOTRANSFER_IN_PROGRESS: raise @na_utils.trace def break_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume): """Breaks a data protection SnapMirror relationship.""" self._ensure_snapmirror_v2() api_args = { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, } self.send_request('snapmirror-break', api_args) @na_utils.trace def 
modify_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume, schedule=None, policy=None, tries=None, max_transfer_rate=None): """Modifies a SnapMirror relationship.""" self._ensure_snapmirror_v2() api_args = { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, } if schedule: api_args['schedule'] = schedule if policy: api_args['policy'] = policy if tries is not None: api_args['tries'] = tries if max_transfer_rate is not None: api_args['max-transfer-rate'] = max_transfer_rate self.send_request('snapmirror-modify', api_args) @na_utils.trace def delete_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume): """Destroys a SnapMirror relationship.""" self._ensure_snapmirror_v2() api_args = { 'query': { 'snapmirror-info': { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, } } } self.send_request('snapmirror-destroy-iter', api_args) @na_utils.trace def update_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume): """Schedules a snapmirror update.""" self._ensure_snapmirror_v2() api_args = { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, } try: self.send_request('snapmirror-update', api_args) except netapp_api.NaApiError as e: if (e.code != netapp_api.ETRANSFER_IN_PROGRESS and e.code != netapp_api.EANOTHER_OP_ACTIVE): raise @na_utils.trace def resume_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume): """Resume a SnapMirror relationship if it is quiesced.""" self._ensure_snapmirror_v2() api_args = { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, } try: self.send_request('snapmirror-resume', api_args) except netapp_api.NaApiError as e: if e.code != netapp_api.ERELATION_NOT_QUIESCED: raise @na_utils.trace def resync_snapmirror(self, source_vserver, source_volume, destination_vserver, destination_volume): """Resync a SnapMirror relationship.""" self._ensure_snapmirror_v2() api_args = { 'source-volume': source_volume, 'source-vserver': source_vserver, 'destination-volume': destination_volume, 'destination-vserver': destination_vserver, } self.send_request('snapmirror-resync', api_args) @na_utils.trace def _get_snapmirrors(self, source_vserver=None, source_volume=None, destination_vserver=None, destination_volume=None, desired_attributes=None): query = None if (source_vserver or source_volume or destination_vserver or destination_volume): query = {'snapmirror-info': {}} if source_volume: query['snapmirror-info']['source-volume'] = source_volume if destination_volume: query['snapmirror-info']['destination-volume'] = ( destination_volume) if source_vserver: query['snapmirror-info']['source-vserver'] = source_vserver if destination_vserver: query['snapmirror-info']['destination-vserver'] = ( destination_vserver) api_args = {} if query: api_args['query'] = query if desired_attributes: api_args['desired-attributes'] = desired_attributes result = self.send_iter_request('snapmirror-get-iter', api_args) if not self._has_records(result): return [] else: return result.get_child_by_name('attributes-list').get_children() @na_utils.trace def get_snapmirrors(self, 
source_vserver, source_volume, destination_vserver, destination_volume, desired_attributes=None): """Gets one or more SnapMirror relationships. Either the source or destination info may be omitted. Desired attributes should be a flat list of attribute names. """ self._ensure_snapmirror_v2() if desired_attributes is not None: desired_attributes = { 'snapmirror-info': {attr: None for attr in desired_attributes}, } result = self._get_snapmirrors( source_vserver=source_vserver, source_volume=source_volume, destination_vserver=destination_vserver, destination_volume=destination_volume, desired_attributes=desired_attributes) snapmirrors = [] for snapmirror_info in result: snapmirror = {} for child in snapmirror_info.get_children(): name = self._strip_xml_namespace(child.get_name()) snapmirror[name] = child.get_content() snapmirrors.append(snapmirror) return snapmirrors def volume_has_snapmirror_relationships(self, volume): """Return True if snapmirror relationships exist for a given volume. If we have snapmirror control plane license, we can verify whether the given volume is part of any snapmirror relationships. """ try: # Check if volume is a source snapmirror volume snapmirrors = self.get_snapmirrors( volume['owning-vserver-name'], volume['name'], None, None) # Check if volume is a destination snapmirror volume if not snapmirrors: snapmirrors = self.get_snapmirrors( None, None, volume['owning-vserver-name'], volume['name']) has_snapmirrors = len(snapmirrors) > 0 except netapp_api.NaApiError: msg = ("Could not determine if volume %s is part of " "existing snapmirror relationships.") LOG.exception(msg, volume['name']) has_snapmirrors = False return has_snapmirrors def list_snapmirror_snapshots(self, volume_name, newer_than=None): """Gets SnapMirror snapshots on a volume.""" api_args = { 'query': { 'snapshot-info': { 'dependency': 'snapmirror', 'volume': volume_name, }, }, } if newer_than: api_args['query']['snapshot-info'][ 'access-time'] = '>' + newer_than result = self.send_iter_request('snapshot-get-iter', api_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') return [snapshot_info.get_child_content('name') for snapshot_info in attributes_list.get_children()] @na_utils.trace def start_volume_move(self, volume_name, vserver, destination_aggregate, cutover_action='wait', encrypt_destination=None): """Moves a FlexVol across Vserver aggregates. Requires cluster-scoped credentials. """ self._send_volume_move_request( volume_name, vserver, destination_aggregate, cutover_action=cutover_action, encrypt_destination=encrypt_destination) @na_utils.trace def check_volume_move(self, volume_name, vserver, destination_aggregate, encrypt_destination=None): """Moves a FlexVol across Vserver aggregates. Requires cluster-scoped credentials. """ self._send_volume_move_request( volume_name, vserver, destination_aggregate, validation_only=True, encrypt_destination=encrypt_destination) @na_utils.trace def _send_volume_move_request(self, volume_name, vserver, destination_aggregate, cutover_action='wait', validation_only=False, encrypt_destination=None): """Send request to check if vol move is possible, or start it. :param volume_name: Name of the FlexVol to be moved. :param destination_aggregate: Name of the destination aggregate :param cutover_action: can have one of ['force', 'defer', 'abort', 'wait']. 'force' will force a cutover despite errors (causing possible client disruptions), 'wait' will wait for cutover to be triggered manually. 
'abort' will rollback move on errors on cutover, 'defer' will attempt a cutover, but wait for manual intervention in case of errors. :param validation_only: If set to True, only validates if the volume move is possible, does not trigger data copy. :param encrypt_destination: If set to True, it encrypts the Flexvol after the volume move is complete. """ api_args = { 'source-volume': volume_name, 'vserver': vserver, 'dest-aggr': destination_aggregate, 'cutover-action': CUTOVER_ACTION_MAP[cutover_action], } if self.features.FLEXVOL_ENCRYPTION and encrypt_destination: api_args['encrypt-destination'] = 'true' elif encrypt_destination: msg = 'Flexvol encryption is not supported on this backend.' raise exception.NetAppException(msg) else: api_args['encrypt-destination'] = 'false' if validation_only: api_args['perform-validation-only'] = 'true' self.send_request('volume-move-start', api_args) @na_utils.trace def abort_volume_move(self, volume_name, vserver): """Aborts an existing volume move operation.""" api_args = { 'source-volume': volume_name, 'vserver': vserver, } self.send_request('volume-move-trigger-abort', api_args) @na_utils.trace def trigger_volume_move_cutover(self, volume_name, vserver, force=True): """Triggers the cut-over for a volume in data motion.""" api_args = { 'source-volume': volume_name, 'vserver': vserver, 'force': 'true' if force else 'false', } self.send_request('volume-move-trigger-cutover', api_args) @na_utils.trace def get_volume_move_status(self, volume_name, vserver): """Gets the current state of a volume move operation.""" api_args = { 'query': { 'volume-move-info': { 'volume': volume_name, 'vserver': vserver, }, }, 'desired-attributes': { 'volume-move-info': { 'percent-complete': None, 'estimated-completion-time': None, 'state': None, 'details': None, 'cutover-action': None, 'phase': None, }, }, } result = self.send_iter_request('volume-move-get-iter', api_args) if not self._has_records(result): msg = _("Volume %(vol)s in Vserver %(server)s is not part of any " "data motion operations.") msg_args = {'vol': volume_name, 'server': vserver} raise exception.NetAppException(msg % msg_args) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') volume_move_info = attributes_list.get_child_by_name( 'volume-move-info') or netapp_api.NaElement('none') status_info = { 'percent-complete': volume_move_info.get_child_content( 'percent-complete'), 'estimated-completion-time': volume_move_info.get_child_content( 'estimated-completion-time'), 'state': volume_move_info.get_child_content('state'), 'details': volume_move_info.get_child_content('details'), 'cutover-action': volume_move_info.get_child_content( 'cutover-action'), 'phase': volume_move_info.get_child_content('phase'), } return status_info @na_utils.trace def qos_policy_group_exists(self, qos_policy_group_name): """Checks if a QoS policy group exists.""" try: self.qos_policy_group_get(qos_policy_group_name) except exception.NetAppException: return False return True @na_utils.trace def qos_policy_group_get(self, qos_policy_group_name): """Checks if a QoS policy group exists.""" api_args = { 'query': { 'qos-policy-group-info': { 'policy-group': qos_policy_group_name, }, }, 'desired-attributes': { 'qos-policy-group-info': { 'policy-group': None, 'vserver': None, 'max-throughput': None, 'num-workloads': None }, }, } try: result = self.send_request('qos-policy-group-get-iter', api_args, False) except netapp_api.NaApiError as e: if e.code == netapp_api.EAPINOTFOUND: msg = _("Configured 
ONTAP login user cannot retrieve " "QoS policies.") LOG.error(msg) raise exception.NetAppException(msg) else: raise if not self._has_records(result): msg = _("No QoS policy group found with name %s.") raise exception.NetAppException(msg % qos_policy_group_name) attributes_list = result.get_child_by_name( 'attributes-list') or netapp_api.NaElement('none') qos_policy_group_info = attributes_list.get_child_by_name( 'qos-policy-group-info') or netapp_api.NaElement('none') policy_info = { 'policy-group': qos_policy_group_info.get_child_content( 'policy-group'), 'vserver': qos_policy_group_info.get_child_content('vserver'), 'max-throughput': qos_policy_group_info.get_child_content( 'max-throughput'), 'num-workloads': int(qos_policy_group_info.get_child_content( 'num-workloads')), } return policy_info @na_utils.trace def qos_policy_group_create(self, qos_policy_group_name, vserver, max_throughput=None): """Creates a QoS policy group.""" api_args = { 'policy-group': qos_policy_group_name, 'vserver': vserver, } if max_throughput: api_args['max-throughput'] = max_throughput return self.send_request('qos-policy-group-create', api_args, False) @na_utils.trace def qos_policy_group_modify(self, qos_policy_group_name, max_throughput): """Modifies a QoS policy group.""" api_args = { 'policy-group': qos_policy_group_name, 'max-throughput': max_throughput, } return self.send_request('qos-policy-group-modify', api_args, False) @na_utils.trace def qos_policy_group_delete(self, qos_policy_group_name): """Attempts to delete a QoS policy group.""" api_args = {'policy-group': qos_policy_group_name} return self.send_request('qos-policy-group-delete', api_args, False) @na_utils.trace def qos_policy_group_rename(self, qos_policy_group_name, new_name): """Renames a QoS policy group.""" if qos_policy_group_name == new_name: return api_args = { 'policy-group-name': qos_policy_group_name, 'new-name': new_name, } return self.send_request('qos-policy-group-rename', api_args, False) @na_utils.trace def mark_qos_policy_group_for_deletion(self, qos_policy_group_name): """Soft delete backing QoS policy group for a manila share.""" # NOTE(gouthamr): ONTAP deletes storage objects asynchronously. As # long as garbage collection hasn't occurred, assigned QoS policy may # still be tagged "in use". So, we rename the QoS policy group using a # specific pattern and later attempt on a best effort basis to # delete any QoS policy groups matching that pattern. if self.qos_policy_group_exists(qos_policy_group_name): new_name = DELETED_PREFIX + qos_policy_group_name try: self.qos_policy_group_rename(qos_policy_group_name, new_name) except netapp_api.NaApiError as ex: msg = ('Rename failure in cleanup of cDOT QoS policy ' 'group %(name)s: %(ex)s') msg_args = {'name': qos_policy_group_name, 'ex': ex} LOG.warning(msg, msg_args) # Attempt to delete any QoS policies named "deleted_manila-*". self.remove_unused_qos_policy_groups() @na_utils.trace def remove_unused_qos_policy_groups(self): """Deletes all QoS policy groups that are marked for deletion.""" api_args = { 'query': { 'qos-policy-group-info': { 'policy-group': '%s*' % DELETED_PREFIX, } }, 'max-records': 3500, 'continue-on-failure': 'true', 'return-success-list': 'false', 'return-failure-list': 'false', } try: self.send_request('qos-policy-group-delete-iter', api_args, False) except netapp_api.NaApiError as ex: msg = 'Could not delete QoS policy groups. 
Details: %(ex)s' msg_args = {'ex': ex} LOG.debug(msg, msg_args) @na_utils.trace def get_net_options(self): result = self.send_request('net-options-get', None, False) options = result.get_child_by_name('net-options') ipv6_enabled = False ipv6_info = options.get_child_by_name('ipv6-options-info') if ipv6_info: ipv6_enabled = ipv6_info.get_child_content('enabled') == 'true' return { 'ipv6-enabled': ipv6_enabled, } @na_utils.trace def rehost_volume(self, volume_name, vserver, destination_vserver): """Rehosts a volume from one Vserver into another Vserver. :param volume_name: Name of the FlexVol to be rehosted. :param vserver: Source Vserver name to which target volume belongs. :param destination_vserver: Destination Vserver name where target volume must reside after successful volume rehost operation. """ api_args = { 'volume': volume_name, 'vserver': vserver, 'destination-vserver': destination_vserver, } self.send_request('volume-rehost', api_args) manila-10.0.0/manila/share/drivers/netapp/dataontap/protocols/0000775000175000017500000000000013656750362024377 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/dataontap/protocols/__init__.py0000664000175000017500000000000013656750227026476 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/dataontap/protocols/cifs_cmode.py0000664000175000017500000001514513656750227027052 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ NetApp cDOT CIFS protocol helper class. """ import re from manila.common import constants from manila import exception from manila.i18n import _ from manila.share.drivers.netapp.dataontap.protocols import base from manila.share.drivers.netapp import utils as na_utils class NetAppCmodeCIFSHelper(base.NetAppBaseHelper): """NetApp cDOT CIFS protocol helper class.""" @na_utils.trace def create_share(self, share, share_name, clear_current_export_policy=True): """Creates CIFS share on Data ONTAP Vserver.""" self._client.create_cifs_share(share_name) if clear_current_export_policy: self._client.remove_cifs_share_access(share_name, 'Everyone') # Ensure 'ntfs' security style self._client.set_volume_security_style(share_name, security_style='ntfs') # Return a callback that may be used for generating export paths # for this share. 
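# Illustrative note (added commentary; values are hypothetical): because share_name is bound as a default argument, a callback built for a share named 'share_abc' would turn an export address of 192.0.2.5 into the UNC path \\192.0.2.5\share_abc.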
return (lambda export_address, share_name=share_name: r'\\%s\%s' % (export_address, share_name)) @na_utils.trace def delete_share(self, share, share_name): """Deletes CIFS share on Data ONTAP Vserver.""" host_ip, share_name = self._get_export_location(share) self._client.remove_cifs_share(share_name) @na_utils.trace @base.access_rules_synchronized def update_access(self, share, share_name, rules): """Replaces the list of access rules known to the backend storage.""" # Ensure rules are valid for rule in rules: self._validate_access_rule(rule) new_rules = {rule['access_to']: rule['access_level'] for rule in rules} # Get rules from share existing_rules = self._get_access_rules(share, share_name) # Update rules in an order that will prevent transient disruptions self._handle_added_rules(share_name, existing_rules, new_rules) self._handle_ro_to_rw_rules(share_name, existing_rules, new_rules) self._handle_rw_to_ro_rules(share_name, existing_rules, new_rules) self._handle_deleted_rules(share_name, existing_rules, new_rules) @na_utils.trace def _validate_access_rule(self, rule): """Checks whether access rule type and level are valid.""" if rule['access_type'] != 'user': msg = _("Clustered Data ONTAP supports only 'user' type for " "share access rules with CIFS protocol.") raise exception.InvalidShareAccess(reason=msg) if rule['access_level'] not in constants.ACCESS_LEVELS: raise exception.InvalidShareAccessLevel(level=rule['access_level']) @na_utils.trace def _handle_added_rules(self, share_name, existing_rules, new_rules): """Updates access rules added between two rule sets.""" added_rules = { user_or_group: permission for user_or_group, permission in new_rules.items() if user_or_group not in existing_rules } for user_or_group, permission in added_rules.items(): self._client.add_cifs_share_access( share_name, user_or_group, self._is_readonly(permission)) @na_utils.trace def _handle_ro_to_rw_rules(self, share_name, existing_rules, new_rules): """Updates access rules modified (RO-->RW) between two rule sets.""" modified_rules = { user_or_group: permission for user_or_group, permission in new_rules.items() if (user_or_group in existing_rules and permission == constants.ACCESS_LEVEL_RW and existing_rules[user_or_group] != 'full_control') } for user_or_group, permission in modified_rules.items(): self._client.modify_cifs_share_access( share_name, user_or_group, self._is_readonly(permission)) @na_utils.trace def _handle_rw_to_ro_rules(self, share_name, existing_rules, new_rules): """Returns access rules modified (RW-->RO) between two rule sets.""" modified_rules = { user_or_group: permission for user_or_group, permission in new_rules.items() if (user_or_group in existing_rules and permission == constants.ACCESS_LEVEL_RO and existing_rules[user_or_group] != 'read') } for user_or_group, permission in modified_rules.items(): self._client.modify_cifs_share_access( share_name, user_or_group, self._is_readonly(permission)) @na_utils.trace def _handle_deleted_rules(self, share_name, existing_rules, new_rules): """Returns access rules deleted between two rule sets.""" deleted_rules = { user_or_group: permission for user_or_group, permission in existing_rules.items() if user_or_group not in new_rules } for user_or_group, permission in deleted_rules.items(): self._client.remove_cifs_share_access(share_name, user_or_group) @na_utils.trace def _get_access_rules(self, share, share_name): """Returns the list of access rules known to the backend storage.""" return self._client.get_cifs_share_access(share_name) 
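# Worked example (added commentary; names are hypothetical): if the backend currently grants {'alice': 'full_control', 'bob': 'read'} and update_access() receives rules granting 'alice' read-only and 'carol' read-write, then 'carol' is handled as an added rule, 'alice' as an RW-to-RO modification, and 'bob' as a deleted rule, following the ordering above that prevents transient disruptions for unchanged entries.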
@na_utils.trace def get_target(self, share): """Returns OnTap target IP based on share export location.""" return self._get_export_location(share)[0] @na_utils.trace def get_share_name_for_share(self, share): """Returns the flexvol name that hosts a share.""" _, share_name = self._get_export_location(share) return share_name @staticmethod def _get_export_location(share): """Returns host ip and share name for a given CIFS share.""" export_location = share['export_location'] or '\\\\\\' regex = r'^(?:\\\\|//)(?P<host_ip>.*)(?:\\|/)(?P<share_name>.*)$' match = re.match(regex, export_location) if match: return match.group('host_ip'), match.group('share_name') else: return '', '' manila-10.0.0/manila/share/drivers/netapp/dataontap/protocols/nfs_cmode.py0000664000175000017500000001630313656750227026711 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ NetApp cDOT NFS protocol helper class. """ import uuid from oslo_log import log import six from manila.common import constants from manila import exception from manila.share.drivers.netapp.dataontap.protocols import base from manila.share.drivers.netapp import utils as na_utils LOG = log.getLogger(__name__) class NetAppCmodeNFSHelper(base.NetAppBaseHelper): """NetApp cDOT NFS protocol helper class.""" @staticmethod def _escaped_address(address): if ':' in address: return '[%s]' % address else: return address @na_utils.trace def create_share(self, share, share_name, clear_current_export_policy=True): """Creates NFS share.""" if clear_current_export_policy: self._client.clear_nfs_export_policy_for_volume(share_name) self._ensure_export_policy(share, share_name) export_path = self._client.get_volume_junction_path(share_name) # Return a callback that may be used for generating export paths # for this share. return (lambda export_address, export_path=export_path: ':'.join([self._escaped_address(export_address), export_path])) @na_utils.trace @base.access_rules_synchronized def delete_share(self, share, share_name): """Deletes NFS share.""" LOG.debug('Deleting NFS export policy for share %s', share['id']) export_policy_name = self._get_export_policy_name(share) self._client.clear_nfs_export_policy_for_volume(share_name) self._client.soft_delete_nfs_export_policy(export_policy_name) @na_utils.trace @base.access_rules_synchronized def update_access(self, share, share_name, rules): """Replaces the list of access rules known to the backend storage.""" # Ensure rules are valid for rule in rules: self._validate_access_rule(rule) # Sort rules by ascending network size new_rules = {rule['access_to']: rule['access_level'] for rule in rules} addresses = sorted(new_rules, reverse=True) # Ensure current export policy has the name we expect self._ensure_export_policy(share, share_name) export_policy_name = self._get_export_policy_name(share) # Make temp policy names so this non-atomic workflow remains resilient # across process interruptions.
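# The sequence below keeps a usable policy in force at every step: create a temporary new policy, add the new rules to it, rename the in-force policy aside, point the volume at the new policy, soft-delete the old one, and finally rename the new policy to its canonical name.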
temp_new_export_policy_name = self._get_temp_export_policy_name() temp_old_export_policy_name = self._get_temp_export_policy_name() # Create new export policy self._client.create_nfs_export_policy(temp_new_export_policy_name) # Add new rules to new policy for address in addresses: self._client.add_nfs_export_rule( temp_new_export_policy_name, address, self._is_readonly(new_rules[address])) # Rename policy currently in force LOG.info('Renaming NFS export policy for share %(share)s to ' '%(policy)s.', {'share': share_name, 'policy': temp_old_export_policy_name}) self._client.rename_nfs_export_policy(export_policy_name, temp_old_export_policy_name) # Switch share to the new policy LOG.info('Setting NFS export policy for share %(share)s to ' '%(policy)s.', {'share': share_name, 'policy': temp_new_export_policy_name}) self._client.set_nfs_export_policy_for_volume( share_name, temp_new_export_policy_name) # Delete old policy self._client.soft_delete_nfs_export_policy(temp_old_export_policy_name) # Rename new policy to its final name LOG.info('Renaming NFS export policy for share %(share)s to ' '%(policy)s.', {'share': share_name, 'policy': export_policy_name}) self._client.rename_nfs_export_policy(temp_new_export_policy_name, export_policy_name) @na_utils.trace def _validate_access_rule(self, rule): """Checks whether access rule type and level are valid.""" if rule['access_type'] != 'ip': msg = ("Clustered Data ONTAP supports only 'ip' type for share " "access rules with NFS protocol.") raise exception.InvalidShareAccess(reason=msg) if rule['access_level'] not in constants.ACCESS_LEVELS: raise exception.InvalidShareAccessLevel(level=rule['access_level']) @na_utils.trace def get_target(self, share): """Returns ID of target OnTap device based on export location.""" return self._get_export_location(share)[0] @na_utils.trace def get_share_name_for_share(self, share): """Returns the flexvol name that hosts a share.""" _, volume_junction_path = self._get_export_location(share) volume = self._client.get_volume_at_junction_path(volume_junction_path) return volume.get('name') if volume else None @staticmethod def _get_export_location(share): """Returns IP address and export location of an NFS share.""" export_location = share['export_location'] or ':' result = export_location.rsplit(':', 1) if len(result) != 2: return ['', ''] return result @staticmethod def _get_temp_export_policy_name(): """Builds export policy name for an NFS share.""" return 'temp_' + six.text_type(uuid.uuid1()).replace('-', '_') @staticmethod def _get_export_policy_name(share): """Builds export policy name for an NFS share.""" return 'policy_' + share['id'].replace('-', '_') @na_utils.trace def _ensure_export_policy(self, share, share_name): """Ensures a flexvol/share has an export policy. This method ensures a flexvol has an export policy with a name containing the share ID. For legacy reasons, this may not always be the case. 
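For example, a share with ID 'abcd-1234' (a hypothetical value) is expected to use the export policy named 'policy_abcd_1234'.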
""" expected_export_policy = self._get_export_policy_name(share) actual_export_policy = self._client.get_nfs_export_policy_for_volume( share_name) if actual_export_policy == expected_export_policy: return elif actual_export_policy == 'default': self._client.create_nfs_export_policy(expected_export_policy) self._client.set_nfs_export_policy_for_volume( share_name, expected_export_policy) else: self._client.rename_nfs_export_policy(actual_export_policy, expected_export_policy) manila-10.0.0/manila/share/drivers/netapp/dataontap/protocols/base.py0000664000175000017500000000425613656750227025672 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Abstract base class for NetApp NAS protocol helper classes. """ import abc import six from manila.common import constants from manila import utils def access_rules_synchronized(f): """Decorator for synchronizing share access rule modification methods.""" def wrapped_func(self, *args, **kwargs): # The first argument is always a share, which has an ID key = "share-access-%s" % args[0]['id'] @utils.synchronized(key) def source_func(self, *args, **kwargs): return f(self, *args, **kwargs) return source_func(self, *args, **kwargs) return wrapped_func @six.add_metaclass(abc.ABCMeta) class NetAppBaseHelper(object): """Interface for protocol-specific NAS drivers.""" def __init__(self): self._client = None def set_client(self, client): self._client = client def _is_readonly(self, access_level): """Returns whether an access rule specifies read-only access.""" return access_level == constants.ACCESS_LEVEL_RO @abc.abstractmethod def create_share(self, share, share_name): """Creates NAS share.""" @abc.abstractmethod def delete_share(self, share, share_name): """Deletes NAS share.""" @abc.abstractmethod def update_access(self, share, share_name, rules): """Replaces the list of access rules known to the backend storage.""" @abc.abstractmethod def get_target(self, share): """Returns host where the share located.""" @abc.abstractmethod def get_share_name_for_share(self, share): """Returns the flexvol name that hosts a share.""" manila-10.0.0/manila/share/drivers/netapp/dataontap/cluster_mode/0000775000175000017500000000000013656750362025040 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/dataontap/cluster_mode/__init__.py0000664000175000017500000000000013656750227027137 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/netapp/dataontap/cluster_mode/drv_multi_svm.py0000664000175000017500000003224313656750227030310 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ NetApp Data ONTAP cDOT multi-SVM storage driver. This driver requires a Data ONTAP (Cluster-mode) storage system with installed CIFS and/or NFS licenses, as well as a FlexClone license. This driver manages share servers, meaning it creates Data ONTAP storage virtual machines (i.e. 'vservers') for each share network for provisioning shares. This driver supports NFS & CIFS protocols. """ from manila.share import driver from manila.share.drivers.netapp.dataontap.cluster_mode import lib_multi_svm class NetAppCmodeMultiSvmShareDriver(driver.ShareDriver): """NetApp Cluster-mode multi-SVM share driver.""" DRIVER_NAME = 'NetApp_Cluster_MultiSVM' def __init__(self, *args, **kwargs): super(NetAppCmodeMultiSvmShareDriver, self).__init__( True, *args, **kwargs) self.library = lib_multi_svm.NetAppCmodeMultiSVMFileStorageLibrary( self.DRIVER_NAME, **kwargs) def do_setup(self, context): self.library.do_setup(context) def check_for_setup_error(self): self.library.check_for_setup_error() def get_pool(self, share): return self.library.get_pool(share) def create_share(self, context, share, **kwargs): return self.library.create_share(context, share, **kwargs) def create_share_from_snapshot(self, context, share, snapshot, **kwargs): return self.library.create_share_from_snapshot(context, share, snapshot, **kwargs) def create_snapshot(self, context, snapshot, **kwargs): return self.library.create_snapshot(context, snapshot, **kwargs) def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, **kwargs): return self.library.revert_to_snapshot(context, snapshot, **kwargs) def delete_share(self, context, share, **kwargs): self.library.delete_share(context, share, **kwargs) def delete_snapshot(self, context, snapshot, **kwargs): self.library.delete_snapshot(context, snapshot, **kwargs) def extend_share(self, share, new_size, **kwargs): self.library.extend_share(share, new_size, **kwargs) def shrink_share(self, share, new_size, **kwargs): self.library.shrink_share(share, new_size, **kwargs) def manage_existing(self, share, driver_options): raise NotImplementedError def unmanage(self, share): raise NotImplementedError def manage_existing_snapshot(self, snapshot, driver_options): raise NotImplementedError def unmanage_snapshot(self, snapshot): raise NotImplementedError def manage_existing_with_server( self, share, driver_options, share_server=None): return self.library.manage_existing( share, driver_options, share_server=share_server) def unmanage_with_server(self, share, share_server=None): self.library.unmanage(share, share_server=share_server) def manage_existing_snapshot_with_server( self, snapshot, driver_options, share_server=None): return self.library.manage_existing_snapshot( snapshot, driver_options, share_server=share_server) def unmanage_snapshot_with_server(self, snapshot, share_server=None): self.library.unmanage_snapshot(snapshot, share_server=share_server) def update_access(self, context, share, access_rules, add_rules, delete_rules, **kwargs): self.library.update_access(context, share, access_rules, add_rules, delete_rules, **kwargs) def _update_share_stats(self, 
data=None): data = self.library.get_share_stats( filter_function=self.get_filter_function(), goodness_function=self.get_goodness_function()) super(NetAppCmodeMultiSvmShareDriver, self)._update_share_stats( data=data) def get_default_filter_function(self): return self.library.get_default_filter_function() def get_default_goodness_function(self): return self.library.get_default_goodness_function() def get_share_server_pools(self, share_server): return self.library.get_share_server_pools(share_server) def get_network_allocations_number(self): return self.library.get_network_allocations_number() def get_admin_network_allocations_number(self): return self.library.get_admin_network_allocations_number( self.admin_network_api) def _setup_server(self, network_info, metadata=None): return self.library.setup_server(network_info, metadata) def _teardown_server(self, server_details, **kwargs): self.library.teardown_server(server_details, **kwargs) def create_replica(self, context, replica_list, new_replica, access_rules, replica_snapshots, **kwargs): return self.library.create_replica(context, replica_list, new_replica, access_rules, replica_snapshots) def delete_replica(self, context, replica_list, replica_snapshots, replica, **kwargs): self.library.delete_replica(context, replica_list, replica, replica_snapshots) def promote_replica(self, context, replica_list, replica, access_rules, share_server=None): return self.library.promote_replica(context, replica_list, replica, access_rules, share_server=share_server) def update_replica_state(self, context, replica_list, replica, access_rules, replica_snapshots, share_server=None): return self.library.update_replica_state(context, replica_list, replica, access_rules, replica_snapshots, share_server) def create_replicated_snapshot(self, context, replica_list, replica_snapshots, share_server=None): return self.library.create_replicated_snapshot( context, replica_list, replica_snapshots, share_server=share_server) def delete_replicated_snapshot(self, context, replica_list, replica_snapshots, share_server=None): return self.library.delete_replicated_snapshot( context, replica_list, replica_snapshots, share_server=share_server) def update_replicated_snapshot(self, context, replica_list, share_replica, replica_snapshots, replica_snapshot, share_server=None): return self.library.update_replicated_snapshot( replica_list, share_replica, replica_snapshots, replica_snapshot, share_server=share_server) def revert_to_replicated_snapshot(self, context, active_replica, replica_list, active_replica_snapshot, replica_snapshots, share_access_rules, snapshot_access_rules, **kwargs): return self.library.revert_to_replicated_snapshot( context, active_replica, replica_list, active_replica_snapshot, replica_snapshots, **kwargs) def migration_check_compatibility(self, context, source_share, destination_share, share_server=None, destination_share_server=None): return self.library.migration_check_compatibility( context, source_share, destination_share, share_server=share_server, destination_share_server=destination_share_server) def migration_start(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): return self.library.migration_start( context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=share_server, destination_share_server=destination_share_server) def migration_continue(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, 
share_server=None, destination_share_server=None): return self.library.migration_continue( context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=share_server, destination_share_server=destination_share_server) def migration_get_progress(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): return self.library.migration_get_progress( context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=share_server, destination_share_server=destination_share_server) def migration_cancel(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): return self.library.migration_cancel( context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=share_server, destination_share_server=destination_share_server) def migration_complete(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): return self.library.migration_complete( context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=share_server, destination_share_server=destination_share_server) def create_share_group_snapshot(self, context, snap_dict, share_server=None): fallback_create = super(NetAppCmodeMultiSvmShareDriver, self).create_share_group_snapshot return self.library.create_group_snapshot(context, snap_dict, fallback_create, share_server) def delete_share_group_snapshot(self, context, snap_dict, share_server=None): fallback_delete = super(NetAppCmodeMultiSvmShareDriver, self).delete_share_group_snapshot return self.library.delete_group_snapshot(context, snap_dict, fallback_delete, share_server) def create_share_group_from_share_group_snapshot( self, context, share_group_dict, snapshot_dict, share_server=None): fallback_create = super( NetAppCmodeMultiSvmShareDriver, self).create_share_group_from_share_group_snapshot return self.library.create_group_from_snapshot(context, share_group_dict, snapshot_dict, fallback_create, share_server) def get_configured_ip_versions(self): return self.library.get_configured_ip_versions() def get_backend_info(self, context): return self.library.get_backend_info(context) def ensure_shares(self, context, shares): return self.library.ensure_shares(context, shares) def get_share_server_network_info( self, context, share_server, identifier, driver_options): return self.library.get_share_server_network_info( context, share_server, identifier, driver_options) def manage_server(self, context, share_server, identifier, driver_options): return self.library.manage_server( context, share_server, identifier, driver_options) def unmanage_server(self, server_details, security_services=None): return self.library.unmanage_server(server_details, security_services) def get_share_status(self, share_instance, share_server=None): return self.library.get_share_status(share_instance, share_server) manila-10.0.0/manila/share/drivers/netapp/dataontap/cluster_mode/lib_multi_svm.py0000664000175000017500000006327113656750227030270 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ NetApp Data ONTAP cDOT multi-SVM storage driver library. This library extends the abstract base library and completes the multi-SVM functionality needed by the cDOT multi-SVM Manila driver. This library variant creates Data ONTAP storage virtual machines (i.e. 'vservers') as needed to provision shares. """ import copy import re from oslo_log import log from oslo_serialization import jsonutils from oslo_utils import excutils from manila import exception from manila.i18n import _ from manila.share.drivers.netapp.dataontap.client import client_cmode from manila.share.drivers.netapp.dataontap.cluster_mode import data_motion from manila.share.drivers.netapp.dataontap.cluster_mode import lib_base from manila.share.drivers.netapp import utils as na_utils from manila.share import utils as share_utils from manila import utils LOG = log.getLogger(__name__) SUPPORTED_NETWORK_TYPES = (None, 'flat', 'vlan') SEGMENTED_NETWORK_TYPES = ('vlan',) DEFAULT_MTU = 1500 class NetAppCmodeMultiSVMFileStorageLibrary( lib_base.NetAppCmodeFileStorageLibrary): @na_utils.trace def check_for_setup_error(self): if self._have_cluster_creds: if self.configuration.netapp_vserver: msg = ('Vserver is specified in the configuration. This is ' 'ignored when the driver is managing share servers.') LOG.warning(msg) else: # only have vserver creds, which is an error in multi_svm mode msg = _('Cluster credentials must be specified in the ' 'configuration when the driver is managing share servers.') raise exception.InvalidInput(reason=msg) # Ensure one or more aggregates are available. if not self._find_matching_aggregates(): msg = _('No aggregates are available for provisioning shares. ' 'Ensure that the configuration option ' 'netapp_aggregate_name_search_pattern is set correctly.') raise exception.NetAppException(msg) (super(NetAppCmodeMultiSVMFileStorageLibrary, self). check_for_setup_error()) @na_utils.trace def _get_vserver(self, share_server=None, vserver_name=None): if share_server: backend_details = share_server.get('backend_details') vserver = backend_details.get( 'vserver_name') if backend_details else None if not vserver: msg = _('Vserver name is absent in backend details. Please ' 'check whether Vserver was created properly.') raise exception.VserverNotSpecified(msg) elif vserver_name: vserver = vserver_name else: msg = _('Share server not provided') raise exception.InvalidInput(reason=msg) if not self._client.vserver_exists(vserver): raise exception.VserverNotFound(vserver=vserver) vserver_client = self._get_api_client(vserver) return vserver, vserver_client def _get_ems_pool_info(self): return { 'pools': { 'vserver': None, 'aggregates': self._find_matching_aggregates(), }, } @na_utils.trace def _handle_housekeeping_tasks(self): """Handle various cleanup activities.""" self._client.prune_deleted_nfs_export_policies() self._client.prune_deleted_snapshots() self._client.remove_unused_qos_policy_groups() (super(NetAppCmodeMultiSVMFileStorageLibrary, self). 
_handle_housekeeping_tasks()) @na_utils.trace def _find_matching_aggregates(self): """Find all aggregates match pattern.""" aggregate_names = self._client.list_non_root_aggregates() pattern = self.configuration.netapp_aggregate_name_search_pattern return [aggr_name for aggr_name in aggregate_names if re.match(pattern, aggr_name)] @na_utils.trace def setup_server(self, network_info, metadata=None): """Creates and configures new Vserver.""" vlan = network_info['segmentation_id'] ports = {} for network_allocation in network_info['network_allocations']: ports[network_allocation['id']] = network_allocation['ip_address'] @utils.synchronized('netapp-VLAN-%s' % vlan, external=True) def setup_server_with_lock(): LOG.debug('Creating server %s', network_info['server_id']) self._validate_network_type(network_info) vserver_name = self._get_vserver_name(network_info['server_id']) server_details = { 'vserver_name': vserver_name, 'ports': jsonutils.dumps(ports) } try: self._create_vserver(vserver_name, network_info) except Exception as e: e.detail_data = {'server_details': server_details} raise return server_details return setup_server_with_lock() @na_utils.trace def _validate_network_type(self, network_info): """Raises exception if the segmentation type is incorrect.""" if network_info['network_type'] not in SUPPORTED_NETWORK_TYPES: msg = _('The specified network type %s is unsupported by the ' 'NetApp clustered Data ONTAP driver') raise exception.NetworkBadConfigurationException( reason=msg % network_info['network_type']) @na_utils.trace def _get_vserver_name(self, server_id): return self.configuration.netapp_vserver_name_template % server_id @na_utils.trace def _create_vserver(self, vserver_name, network_info): """Creates Vserver with given parameters if it doesn't exist.""" if self._client.vserver_exists(vserver_name): msg = _('Vserver %s already exists.') raise exception.NetAppException(msg % vserver_name) # NOTE(lseki): If there's already an ipspace created for the same VLAN # port, reuse it. It will be named after the previously created share # server's neutron subnet id. node_name = self._client.list_cluster_nodes()[0] port = self._get_node_data_port(node_name) vlan = network_info['segmentation_id'] ipspace_name = self._client.get_ipspace_name_for_vlan_port( node_name, port, vlan) or self._create_ipspace(network_info) LOG.debug('Vserver %s does not exist, creating.', vserver_name) self._client.create_vserver( vserver_name, self.configuration.netapp_root_volume_aggregate, self.configuration.netapp_root_volume, self._find_matching_aggregates(), ipspace_name) vserver_client = self._get_api_client(vserver=vserver_name) security_services = None try: self._create_vserver_lifs(vserver_name, vserver_client, network_info, ipspace_name) self._create_vserver_admin_lif(vserver_name, vserver_client, network_info, ipspace_name) self._create_vserver_routes(vserver_client, network_info) vserver_client.enable_nfs( self.configuration.netapp_enabled_share_protocols) security_services = network_info.get('security_services') if security_services: self._client.setup_security_services(security_services, vserver_client, vserver_name) except Exception: with excutils.save_and_reraise_exception(): LOG.error("Failed to configure Vserver.") # NOTE(dviroel): At this point, the lock was already acquired # by the caller of _create_vserver. 
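# Illustrative sketch (not part of the driver): how setup_server() above derives
# the Vserver name and the per-VLAN lock key used by @utils.synchronized.
# The template value 'os_%s' is an assumption for demonstration; the real value
# comes from the netapp_vserver_name_template configuration option.
def _example_setup_server_names(server_id, segmentation_id,
                                vserver_name_template='os_%s'):
    vserver_name = vserver_name_template % server_id
    lock_key = 'netapp-VLAN-%s' % segmentation_id
    return vserver_name, lock_key

# Example usage (values are made up):
# _example_setup_server_names('fake-server-id', 1001)
# -> ('os_fake-server-id', 'netapp-VLAN-1001')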
self._delete_vserver(vserver_name, security_services=security_services, needs_lock=False) def _get_valid_ipspace_name(self, network_id): """Get IPspace name according to network id.""" return 'ipspace_' + network_id.replace('-', '_') @na_utils.trace def _create_ipspace(self, network_info): """If supported, create an IPspace for a new Vserver.""" if not self._client.features.IPSPACES: return None if (network_info['network_allocations'][0]['network_type'] not in SEGMENTED_NETWORK_TYPES): return client_cmode.DEFAULT_IPSPACE # NOTE(cknight): Neutron needs cDOT IP spaces because it can provide # overlapping IP address ranges for different subnets. That is not # believed to be an issue for any of Manila's other network plugins. ipspace_id = network_info.get('neutron_subnet_id') if not ipspace_id: return client_cmode.DEFAULT_IPSPACE ipspace_name = self._get_valid_ipspace_name(ipspace_id) self._client.create_ipspace(ipspace_name) return ipspace_name @na_utils.trace def _create_vserver_lifs(self, vserver_name, vserver_client, network_info, ipspace_name): """Create Vserver data logical interfaces (LIFs).""" nodes = self._client.list_cluster_nodes() node_network_info = zip(nodes, network_info['network_allocations']) for node_name, network_allocation in node_network_info: lif_name = self._get_lif_name(node_name, network_allocation) self._create_lif(vserver_client, vserver_name, ipspace_name, node_name, lif_name, network_allocation) @na_utils.trace def _create_vserver_admin_lif(self, vserver_name, vserver_client, network_info, ipspace_name): """Create Vserver admin LIF, if defined.""" network_allocations = network_info.get('admin_network_allocations') if not network_allocations: LOG.info('No admin network defined for Vserver %s.', vserver_name) return node_name = self._client.list_cluster_nodes()[0] network_allocation = network_allocations[0] lif_name = self._get_lif_name(node_name, network_allocation) self._create_lif(vserver_client, vserver_name, ipspace_name, node_name, lif_name, network_allocation) @na_utils.trace def _create_vserver_routes(self, vserver_client, network_info): """Create Vserver route and set gateways.""" route_gateways = [] # NOTE(gouthamr): Use the gateway from the tenant subnet/s # for the static routes. Do not configure a route for the admin # subnet because fast path routing will work for incoming # connections and there are no requirements for outgoing # connections on the admin network yet. 
for net_allocation in (network_info['network_allocations']): if net_allocation['gateway'] not in route_gateways: vserver_client.create_route(net_allocation['gateway']) route_gateways.append(net_allocation['gateway']) @na_utils.trace def _get_node_data_port(self, node): port_names = self._client.list_node_data_ports(node) pattern = self.configuration.netapp_port_name_search_pattern matched_port_names = [port_name for port_name in port_names if re.match(pattern, port_name)] if not matched_port_names: raise exception.NetAppException( _('Could not find eligible network ports on node %s on which ' 'to create Vserver LIFs.') % node) return matched_port_names[0] def _get_lif_name(self, node_name, network_allocation): """Get LIF name based on template from manila.conf file.""" lif_name_args = { 'node': node_name, 'net_allocation_id': network_allocation['id'], } return self.configuration.netapp_lif_name_template % lif_name_args @na_utils.trace def _create_lif(self, vserver_client, vserver_name, ipspace_name, node_name, lif_name, network_allocation): """Creates LIF for Vserver.""" port = self._get_node_data_port(node_name) ip_address = network_allocation['ip_address'] netmask = utils.cidr_to_netmask(network_allocation['cidr']) vlan = network_allocation['segmentation_id'] network_mtu = network_allocation.get('mtu') mtu = network_mtu or DEFAULT_MTU if not vserver_client.network_interface_exists( vserver_name, node_name, port, ip_address, netmask, vlan): self._client.create_network_interface( ip_address, netmask, vlan, node_name, port, vserver_name, lif_name, ipspace_name, mtu) @na_utils.trace def get_network_allocations_number(self): """Get number of network interfaces to be created.""" return len(self._client.list_cluster_nodes()) @na_utils.trace def get_admin_network_allocations_number(self, admin_network_api): """Get number of network allocations for creating admin LIFs.""" return 1 if admin_network_api else 0 @na_utils.trace def teardown_server(self, server_details, security_services=None): """Teardown share server.""" vserver = server_details.get( 'vserver_name') if server_details else None if not vserver: LOG.warning("Vserver not specified for share server being " "deleted. Deletion of share server record will " "proceed anyway.") return elif not self._client.vserver_exists(vserver): LOG.warning("Could not find Vserver for share server being " "deleted: %s. 
Deletion of share server " "record will proceed anyway.", vserver) return self._delete_vserver(vserver, security_services=security_services) @na_utils.trace def _delete_vserver(self, vserver, security_services=None, needs_lock=True): """Delete a Vserver plus IPspace and security services as needed.""" ipspace_name = self._client.get_vserver_ipspace(vserver) vserver_client = self._get_api_client(vserver=vserver) network_interfaces = vserver_client.get_network_interfaces() interfaces_on_vlans = [] vlans = [] for interface in network_interfaces: if '-' in interface['home-port']: interfaces_on_vlans.append(interface) vlans.append(interface['home-port']) if vlans: vlans = '-'.join(sorted(set(vlans))) if vlans else None vlan_id = vlans.split('-')[-1] else: vlan_id = None def _delete_vserver_without_lock(): # NOTE(dviroel): Attempt to delete all vserver peering # created by replication self._delete_vserver_peers(vserver) self._client.delete_vserver(vserver, vserver_client, security_services=security_services) if ipspace_name and not self._client.ipspace_has_data_vservers( ipspace_name): self._client.delete_ipspace(ipspace_name) self._delete_vserver_vlans(interfaces_on_vlans) @utils.synchronized('netapp-VLAN-%s' % vlan_id, external=True) def _delete_vserver_with_lock(): _delete_vserver_without_lock() if needs_lock: return _delete_vserver_with_lock() else: return _delete_vserver_without_lock() @na_utils.trace def _delete_vserver_vlans(self, network_interfaces_on_vlans): """Delete Vserver's VLAN configuration from ports""" for interface in network_interfaces_on_vlans: try: home_port = interface['home-port'] port, vlan = home_port.split('-') node = interface['home-node'] self._client.delete_vlan(node, port, vlan) except exception.NetAppException: LOG.exception("Deleting Vserver VLAN failed.") @na_utils.trace def _delete_vserver_peers(self, vserver): vserver_peers = self._get_vserver_peers(vserver=vserver) for peer in vserver_peers: self._delete_vserver_peer(peer.get('vserver'), peer.get('peer-vserver')) def get_configured_ip_versions(self): versions = [4] options = self._client.get_net_options() if options['ipv6-enabled']: versions.append(6) return versions @na_utils.trace def create_replica(self, context, replica_list, new_replica, access_rules, share_snapshots, share_server=None): """Creates the new replica on this backend and sets up SnapMirror. It creates the peering between the associated vservers before creating the share replica and setting up the SnapMirror. """ # 1. Retrieve source and destination vservers from both replicas, # active and and new_replica src_vserver, dst_vserver = self._get_vservers_from_replicas( context, replica_list, new_replica) # 2. Retrieve the active replica host's client and cluster name src_replica = self.find_active_replica(replica_list) src_replica_host = share_utils.extract_host( src_replica['host'], level='backend_name') src_replica_client = data_motion.get_client_for_backend( src_replica_host, vserver_name=src_vserver) # Cluster name is needed for setting up the vserver peering src_replica_cluster_name = src_replica_client.get_cluster_name() # 3. Retrieve new replica host's client new_replica_host = share_utils.extract_host( new_replica['host'], level='backend_name') new_replica_client = data_motion.get_client_for_backend( new_replica_host, vserver_name=dst_vserver) new_replica_cluster_name = new_replica_client.get_cluster_name() if (dst_vserver != src_vserver and not self._get_vserver_peers(dst_vserver, src_vserver)): # 3.1. 
Request vserver peer creation from new_replica's host # to active replica's host new_replica_client.create_vserver_peer( dst_vserver, src_vserver, peer_cluster_name=src_replica_cluster_name) # 3.2. Accepts the vserver peering using active replica host's # client (inter-cluster only) if new_replica_cluster_name != src_replica_cluster_name: src_replica_client.accept_vserver_peer(src_vserver, dst_vserver) return (super(NetAppCmodeMultiSVMFileStorageLibrary, self). create_replica(context, replica_list, new_replica, access_rules, share_snapshots)) def delete_replica(self, context, replica_list, replica, share_snapshots, share_server=None): """Removes the replica on this backend and destroys SnapMirror. Removes the replica, destroys the SnapMirror and delete the vserver peering if needed. """ vserver, peer_vserver = self._get_vservers_from_replicas( context, replica_list, replica) super(NetAppCmodeMultiSVMFileStorageLibrary, self).delete_replica( context, replica_list, replica, share_snapshots) # Check if there are no remaining SnapMirror connections and if a # vserver peering exists and delete it. snapmirrors = self._get_snapmirrors(vserver, peer_vserver) snapmirrors_from_peer = self._get_snapmirrors(peer_vserver, vserver) peers = self._get_vserver_peers(peer_vserver, vserver) if not (snapmirrors or snapmirrors_from_peer) and peers: self._delete_vserver_peer(peer_vserver, vserver) def manage_server(self, context, share_server, identifier, driver_options): """Manages a vserver by renaming it and returning backend_details.""" new_vserver_name = self._get_vserver_name(share_server['id']) old_vserver_name = self._get_correct_vserver_old_name(identifier) if new_vserver_name != old_vserver_name: self._client.rename_vserver(old_vserver_name, new_vserver_name) backend_details = {'vserver_name': new_vserver_name} return new_vserver_name, backend_details def unmanage_server(self, server_details, security_services=None): pass def get_share_server_network_info( self, context, share_server, identifier, driver_options): """Returns a list of IPs for each vserver network interface.""" vserver_name = self._get_correct_vserver_old_name(identifier) vserver, vserver_client = self._get_vserver(vserver_name=vserver_name) interfaces = vserver_client.get_network_interfaces() allocations = [] for lif in interfaces: allocations.append(lif['address']) return allocations def _get_correct_vserver_old_name(self, identifier): # In case vserver_name includes the template, we check and add it here if not self._client.vserver_exists(identifier): return self._get_vserver_name(identifier) return identifier def _get_snapmirrors(self, vserver, peer_vserver): return self._client.get_snapmirrors( source_vserver=vserver, source_volume=None, destination_vserver=peer_vserver, destination_volume=None) def _get_vservers_from_replicas(self, context, replica_list, new_replica): active_replica = self.find_active_replica(replica_list) dm_session = data_motion.DataMotionSession() vserver = dm_session.get_vserver_from_share(active_replica) peer_vserver = dm_session.get_vserver_from_share(new_replica) return vserver, peer_vserver def _get_vserver_peers(self, vserver=None, peer_vserver=None): return self._client.get_vserver_peers(vserver, peer_vserver) def _create_vserver_peer(self, context, vserver, peer_vserver): self._client.create_vserver_peer(vserver, peer_vserver) def _delete_vserver_peer(self, vserver, peer_vserver): self._client.delete_vserver_peer(vserver, peer_vserver) def create_share_from_snapshot(self, context, share, snapshot, 
share_server=None, parent_share=None): # NOTE(dviroel): If both parent and child shares are in the same host, # they belong to the same cluster, and we can skip all the processing # below. if parent_share['host'] != share['host']: # 1. Retrieve source and destination vservers from source and # destination shares new_share = copy.deepcopy(share.to_dict()) new_share['share_server'] = share_server.to_dict() dm_session = data_motion.DataMotionSession() src_vserver = dm_session.get_vserver_from_share(parent_share) dest_vserver = dm_session.get_vserver_from_share(new_share) # 2. Retrieve the source share host's client and cluster name src_share_host = share_utils.extract_host( parent_share['host'], level='backend_name') src_share_client = data_motion.get_client_for_backend( src_share_host, vserver_name=src_vserver) # Cluster name is needed for setting up the vserver peering src_share_cluster_name = src_share_client.get_cluster_name() # 3. Retrieve new share host's client dest_share_host = share_utils.extract_host( new_share['host'], level='backend_name') dest_share_client = data_motion.get_client_for_backend( dest_share_host, vserver_name=dest_vserver) dest_share_cluster_name = dest_share_client.get_cluster_name() # If source and destination shares are placed in a different # clusters, we'll need the both vserver peered. if src_share_cluster_name != dest_share_cluster_name: if not self._get_vserver_peers(dest_vserver, src_vserver): # 3.1. Request vserver peer creation from new_replica's # host to active replica's host dest_share_client.create_vserver_peer( dest_vserver, src_vserver, peer_cluster_name=src_share_cluster_name) # 3.2. Accepts the vserver peering using active replica # host's client src_share_client.accept_vserver_peer(src_vserver, dest_vserver) return (super(NetAppCmodeMultiSVMFileStorageLibrary, self) .create_share_from_snapshot( context, share, snapshot, share_server=share_server, parent_share=parent_share)) manila-10.0.0/manila/share/drivers/netapp/dataontap/cluster_mode/lib_base.py0000664000175000017500000036100213656750227027154 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # Copyright (c) 2015 Tom Barron. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ NetApp Data ONTAP cDOT base storage driver library. This library is the abstract base for subclasses that complete the single-SVM or multi-SVM functionality needed by the cDOT Manila drivers. 
""" import copy import datetime import json import math import socket from oslo_config import cfg from oslo_log import log from oslo_service import loopingcall from oslo_utils import timeutils from oslo_utils import units import six from manila.common import constants from manila import exception from manila.i18n import _ from manila.share.drivers.netapp.dataontap.client import api as netapp_api from manila.share.drivers.netapp.dataontap.client import client_cmode from manila.share.drivers.netapp.dataontap.cluster_mode import data_motion from manila.share.drivers.netapp.dataontap.cluster_mode import performance from manila.share.drivers.netapp.dataontap.protocols import cifs_cmode from manila.share.drivers.netapp.dataontap.protocols import nfs_cmode from manila.share.drivers.netapp import options as na_opts from manila.share.drivers.netapp import utils as na_utils from manila.share import share_types from manila.share import utils as share_utils from manila import utils as manila_utils LOG = log.getLogger(__name__) CONF = cfg.CONF class NetAppCmodeFileStorageLibrary(object): AUTOSUPPORT_INTERVAL_SECONDS = 3600 # hourly SSC_UPDATE_INTERVAL_SECONDS = 3600 # hourly HOUSEKEEPING_INTERVAL_SECONDS = 600 # ten minutes SUPPORTED_PROTOCOLS = ('nfs', 'cifs') DEFAULT_FILTER_FUNCTION = 'capabilities.utilization < 70' DEFAULT_GOODNESS_FUNCTION = '100 - capabilities.utilization' # Internal states when dealing with data motion STATE_SPLITTING_VOLUME_CLONE = 'splitting_volume_clone' STATE_MOVING_VOLUME = 'moving_volume' STATE_SNAPMIRROR_DATA_COPYING = 'snapmirror_data_copying' # Maps NetApp qualified extra specs keys to corresponding backend API # client library argument keywords. When we expose more backend # capabilities here, we will add them to this map. BOOLEAN_QUALIFIED_EXTRA_SPECS_MAP = { 'netapp:thin_provisioned': 'thin_provisioned', 'netapp:dedup': 'dedup_enabled', 'netapp:compression': 'compression_enabled', 'netapp:split_clone_on_create': 'split', 'netapp:hide_snapdir': 'hide_snapdir', } STRING_QUALIFIED_EXTRA_SPECS_MAP = { 'netapp:snapshot_policy': 'snapshot_policy', 'netapp:language': 'language', 'netapp:max_files': 'max_files', } # Maps standard extra spec keys to legacy NetApp keys STANDARD_BOOLEAN_EXTRA_SPECS_MAP = { 'thin_provisioning': 'netapp:thin_provisioned', 'dedupe': 'netapp:dedup', 'compression': 'netapp:compression', } QOS_SPECS = { 'netapp:maxiops': 'maxiops', 'netapp:maxiopspergib': 'maxiopspergib', 'netapp:maxbps': 'maxbps', 'netapp:maxbpspergib': 'maxbpspergib', } HIDE_SNAPDIR_CFG_MAP = { 'visible': False, 'hidden': True, 'default': None, } SIZE_DEPENDENT_QOS_SPECS = {'maxiopspergib', 'maxbpspergib'} def __init__(self, driver_name, **kwargs): na_utils.validate_driver_instantiation(**kwargs) self.driver_name = driver_name self.private_storage = kwargs['private_storage'] self.configuration = kwargs['configuration'] self.configuration.append_config_values(na_opts.netapp_connection_opts) self.configuration.append_config_values(na_opts.netapp_basicauth_opts) self.configuration.append_config_values(na_opts.netapp_transport_opts) self.configuration.append_config_values(na_opts.netapp_support_opts) self.configuration.append_config_values(na_opts.netapp_cluster_opts) self.configuration.append_config_values( na_opts.netapp_provisioning_opts) self.configuration.append_config_values( na_opts.netapp_data_motion_opts) self._licenses = [] self._client = None self._clients = {} self._ssc_stats = {} self._have_cluster_creds = None self._cluster_info = {} self._app_version = 
kwargs.get('app_version', 'unknown') na_utils.setup_tracing(self.configuration.netapp_trace_flags, self.configuration.netapp_api_trace_pattern) self._backend_name = self.configuration.safe_get( 'share_backend_name') or driver_name @na_utils.trace def do_setup(self, context): self._client = self._get_api_client() self._have_cluster_creds = self._client.check_for_cluster_credentials() if self._have_cluster_creds is True: self._set_cluster_info() # Performance monitoring library self._perf_library = performance.PerformanceLibrary(self._client) @na_utils.trace def _set_cluster_info(self): self._cluster_info['nve_support'] = ( self._client.is_nve_supported() and self._client.features.FLEXVOL_ENCRYPTION) @na_utils.trace def check_for_setup_error(self): self._licenses = self._get_licenses() self._start_periodic_tasks() def _get_vserver(self, share_server=None): raise NotImplementedError() @na_utils.trace def _get_api_client(self, vserver=None): # Use cached value to prevent calls to system-get-ontapi-version. client = self._clients.get(vserver) if not client: client = client_cmode.NetAppCmodeClient( transport_type=self.configuration.netapp_transport_type, username=self.configuration.netapp_login, password=self.configuration.netapp_password, hostname=self.configuration.netapp_server_hostname, port=self.configuration.netapp_server_port, vserver=vserver, trace=na_utils.TRACE_API, api_trace_pattern=na_utils.API_TRACE_PATTERN) self._clients[vserver] = client return client @na_utils.trace def _get_licenses(self): if not self._have_cluster_creds: LOG.debug('License info not available without cluster credentials') return [] self._licenses = self._client.get_licenses() log_data = { 'backend': self._backend_name, 'licenses': ', '.join(self._licenses), } LOG.info('Available licenses on %(backend)s ' 'are %(licenses)s.', log_data) if 'nfs' not in self._licenses and 'cifs' not in self._licenses: msg = 'Neither NFS nor CIFS is licensed on %(backend)s' msg_args = {'backend': self._backend_name} LOG.error(msg, msg_args) return self._licenses @na_utils.trace def _start_periodic_tasks(self): # Run the task once in the current thread so prevent a race with # the first invocation of get_share_stats. self._update_ssc_info() # Start the task that updates the slow-changing storage service catalog ssc_periodic_task = loopingcall.FixedIntervalLoopingCall( self._update_ssc_info) ssc_periodic_task.start(interval=self.SSC_UPDATE_INTERVAL_SECONDS, initial_delay=self.SSC_UPDATE_INTERVAL_SECONDS) # Start the task that logs autosupport (EMS) data to the controller ems_periodic_task = loopingcall.FixedIntervalLoopingCall( self._handle_ems_logging) ems_periodic_task.start(interval=self.AUTOSUPPORT_INTERVAL_SECONDS, initial_delay=0) # Start the task that runs other housekeeping tasks, such as deletion # of previously soft-deleted storage artifacts. 
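# Illustrative sketch (not part of the driver): the SSC, EMS and housekeeping
# jobs started in this method all use the same oslo.service pattern. The
# callback and interval here are placeholders.
from oslo_service import loopingcall

def _example_start_periodic_task(callback, interval_seconds=3600,
                                 initial_delay=0):
    # Runs callback() every interval_seconds; initial_delay=0 triggers the
    # first run immediately.
    task = loopingcall.FixedIntervalLoopingCall(callback)
    task.start(interval=interval_seconds, initial_delay=initial_delay)
    return task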
housekeeping_periodic_task = loopingcall.FixedIntervalLoopingCall( self._handle_housekeeping_tasks) housekeeping_periodic_task.start( interval=self.HOUSEKEEPING_INTERVAL_SECONDS, initial_delay=0) def _get_backend_share_name(self, share_id): """Get share name according to share name template.""" return self.configuration.netapp_volume_name_template % { 'share_id': share_id.replace('-', '_')} def _get_backend_snapshot_name(self, snapshot_id): """Get snapshot name according to snapshot name template.""" return 'share_snapshot_' + snapshot_id.replace('-', '_') def _get_backend_cg_snapshot_name(self, snapshot_id): """Get snapshot name according to snapshot name template.""" return 'share_cg_snapshot_' + snapshot_id.replace('-', '_') def _get_backend_qos_policy_group_name(self, share_id): """Get QoS policy name according to QoS policy group name template.""" return self.configuration.netapp_qos_policy_group_name_template % { 'share_id': share_id.replace('-', '_')} @na_utils.trace def _get_aggregate_space(self): aggregates = self._find_matching_aggregates() if self._have_cluster_creds: return self._client.get_cluster_aggregate_capacities(aggregates) else: return self._client.get_vserver_aggregate_capacities(aggregates) @na_utils.trace def _check_snaprestore_license(self): """Check if snaprestore license is enabled.""" if not self._licenses: self._licenses = self._client.get_licenses() return 'snaprestore' in self._licenses @na_utils.trace def _get_aggregate_node(self, aggregate_name): """Get home node for the specified aggregate, or None.""" if self._have_cluster_creds: return self._client.get_node_for_aggregate(aggregate_name) else: return None def get_default_filter_function(self): """Get the default filter_function string.""" return self.DEFAULT_FILTER_FUNCTION def get_default_goodness_function(self): """Get the default goodness_function string.""" return self.DEFAULT_GOODNESS_FUNCTION @na_utils.trace def get_share_stats(self, filter_function=None, goodness_function=None): """Retrieve stats info from Data ONTAP backend.""" data = { 'share_backend_name': self._backend_name, 'driver_name': self.driver_name, 'vendor_name': 'NetApp', 'driver_version': '1.0', 'netapp_storage_family': 'ontap_cluster', 'storage_protocol': 'NFS_CIFS', 'pools': self._get_pools(filter_function=filter_function, goodness_function=goodness_function), 'share_group_stats': { 'consistent_snapshot_support': 'host', }, } if self.configuration.replication_domain: data['replication_type'] = 'dr' data['replication_domain'] = self.configuration.replication_domain return data @na_utils.trace def get_share_server_pools(self, share_server): """Return list of pools related to a particular share server. Note that the multi-SVM cDOT driver assigns all available pools to each Vserver, so there is no need to filter the pools any further by share_server. :param share_server: ShareServer class instance. """ return self._get_pools() @na_utils.trace def _get_pools(self, filter_function=None, goodness_function=None): """Retrieve list of pools available to this backend.""" pools = [] aggr_space = self._get_aggregate_space() aggregates = aggr_space.keys() if self._have_cluster_creds: # Get up-to-date node utilization metrics just once. 
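# Illustrative sketch (not part of the driver): how the backend volume and
# snapshot names above are derived from OpenStack ids. The volume template
# 'share_%(share_id)s' is an assumption for demonstration; the real value comes
# from the netapp_volume_name_template configuration option.
def _example_backend_names(share_id, snapshot_id,
                           volume_name_template='share_%(share_id)s'):
    volume_name = volume_name_template % {
        'share_id': share_id.replace('-', '_')}
    snapshot_name = 'share_snapshot_' + snapshot_id.replace('-', '_')
    return volume_name, snapshot_name

# Example usage (ids are made up):
# _example_backend_names('aaaa-bbbb', 'cccc-dddd')
# -> ('share_aaaa_bbbb', 'share_snapshot_cccc_dddd')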
self._perf_library.update_performance_cache({}, self._ssc_stats) qos_support = True else: qos_support = False netapp_flexvol_encryption = self._cluster_info.get( 'nve_support', False) revert_to_snapshot_support = self._check_snaprestore_license() for aggr_name in sorted(aggregates): reserved_percentage = self.configuration.reserved_share_percentage total_capacity_gb = na_utils.round_down(float( aggr_space[aggr_name].get('total', 0)) / units.Gi) free_capacity_gb = na_utils.round_down(float( aggr_space[aggr_name].get('available', 0)) / units.Gi) allocated_capacity_gb = na_utils.round_down(float( aggr_space[aggr_name].get('used', 0)) / units.Gi) if total_capacity_gb == 0.0: total_capacity_gb = 'unknown' pool = { 'pool_name': aggr_name, 'filter_function': filter_function, 'goodness_function': goodness_function, 'total_capacity_gb': total_capacity_gb, 'free_capacity_gb': free_capacity_gb, 'allocated_capacity_gb': allocated_capacity_gb, 'qos': qos_support, 'reserved_percentage': reserved_percentage, 'dedupe': [True, False], 'compression': [True, False], 'netapp_flexvol_encryption': netapp_flexvol_encryption, 'thin_provisioning': [True, False], 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': revert_to_snapshot_support, } # Add storage service catalog data. pool_ssc_stats = self._ssc_stats.get(aggr_name) if pool_ssc_stats: pool.update(pool_ssc_stats) # Add utilization info, or nominal value if not available. utilization = self._perf_library.get_node_utilization_for_pool( aggr_name) pool['utilization'] = na_utils.round_down(utilization) pools.append(pool) return pools @na_utils.trace def _handle_ems_logging(self): """Build and send an EMS log message.""" self._client.send_ems_log_message(self._build_ems_log_message_0()) self._client.send_ems_log_message(self._build_ems_log_message_1()) def _build_base_ems_log_message(self): """Construct EMS Autosupport log message common to all events.""" ems_log = { 'computer-name': socket.gethostname() or 'Manila_node', 'event-source': 'Manila driver %s' % self.driver_name, 'app-version': self._app_version, 'category': 'provisioning', 'log-level': '5', 'auto-support': 'false', } return ems_log @na_utils.trace def _build_ems_log_message_0(self): """Construct EMS Autosupport log message with deployment info.""" ems_log = self._build_base_ems_log_message() ems_log.update({ 'event-id': '0', 'event-description': 'OpenStack Manila connected to cluster node', }) return ems_log @na_utils.trace def _build_ems_log_message_1(self): """Construct EMS Autosupport log message with storage pool info.""" message = self._get_ems_pool_info() ems_log = self._build_base_ems_log_message() ems_log.update({ 'event-id': '1', 'event-description': json.dumps(message), }) return ems_log def _get_ems_pool_info(self): raise NotImplementedError() @na_utils.trace def _handle_housekeeping_tasks(self): """Handle various cleanup activities.""" def _find_matching_aggregates(self): """Find all aggregates match pattern.""" raise NotImplementedError() @na_utils.trace def _get_helper(self, share): """Returns driver which implements share protocol.""" share_protocol = share['share_proto'].lower() if share_protocol not in self.SUPPORTED_PROTOCOLS: err_msg = _("Invalid NAS protocol supplied: %s.") % share_protocol raise exception.NetAppException(err_msg) self._check_license_for_protocol(share_protocol) if share_protocol == 'nfs': return nfs_cmode.NetAppCmodeNFSHelper() elif share_protocol == 'cifs': return cifs_cmode.NetAppCmodeCIFSHelper() 
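# Illustrative sketch (not part of the driver): the pool capacity figures
# reported by _get_pools() above are aggregate byte counts converted to GiB.
# na_utils.round_down truncates the result (a two-decimal truncation is assumed
# here and implemented with math.floor as a stand-in).
import math

from oslo_utils import units

def _example_capacity_gb(total_bytes, available_bytes):
    def _round_down(value):
        # Stand-in for na_utils.round_down(): truncate to two decimals.
        return math.floor(value * 100) / 100.0
    total_gb = _round_down(float(total_bytes) / units.Gi)
    free_gb = _round_down(float(available_bytes) / units.Gi)
    return total_gb, free_gb

# Example usage (values are made up): 1.5 TiB total, 512 GiB available
# _example_capacity_gb(int(1.5 * units.Ti), 512 * units.Gi) -> (1536.0, 512.0)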
@na_utils.trace def _check_license_for_protocol(self, share_protocol): """Validates protocol license if cluster APIs are accessible.""" if not self._have_cluster_creds: return if share_protocol.lower() not in self._licenses: current_licenses = self._get_licenses() if share_protocol.lower() not in current_licenses: msg_args = { 'protocol': share_protocol, 'host': self.configuration.netapp_server_hostname } msg = _('The protocol %(protocol)s is not licensed on ' 'controller %(host)s') % msg_args LOG.error(msg) raise exception.NetAppException(msg) @na_utils.trace def get_pool(self, share): pool = share_utils.extract_host(share['host'], level='pool') if pool: return pool share_name = self._get_backend_share_name(share['id']) return self._client.get_aggregate_for_volume(share_name) @na_utils.trace def create_share(self, context, share, share_server): """Creates new share.""" vserver, vserver_client = self._get_vserver(share_server=share_server) self._allocate_container(share, vserver, vserver_client) return self._create_export(share, share_server, vserver, vserver_client) @na_utils.trace def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Creates new share from snapshot.""" # TODO(dviroel) return progress info in asynchronous answers if parent_share['host'] == share['host']: src_vserver, src_vserver_client = self._get_vserver( share_server=share_server) # Creating a new share from snapshot in the source share's pool self._allocate_container_from_snapshot( share, snapshot, src_vserver, src_vserver_client) return self._create_export(share, share_server, src_vserver, src_vserver_client) parent_share_server = {} if parent_share['share_server'] is not None: # Get only the information needed by Data Motion ss_keys = ['id', 'identifier', 'backend_details', 'host'] for key in ss_keys: parent_share_server[key] = ( parent_share['share_server'].get(key)) # Information to be saved in the private_storage that will need to be # retrieved later, in order to continue with the share creation flow src_share_instance = { 'id': share['id'], 'host': parent_share.get('host'), 'share_server': parent_share_server or None } # NOTE(dviroel): Data Motion functions access share's 'share_server' # attribute to get vserser information. dest_share = copy.deepcopy(share.to_dict()) dest_share['share_server'] = (share_server.to_dict() if share_server else None) dm_session = data_motion.DataMotionSession() # Source host info __, src_vserver, src_backend = ( dm_session.get_backend_info_for_share(parent_share)) src_vserver_client = data_motion.get_client_for_backend( src_backend, vserver_name=src_vserver) src_cluster_name = src_vserver_client.get_cluster_name() # Destination host info dest_vserver, dest_vserver_client = self._get_vserver(share_server) dest_cluster_name = dest_vserver_client.get_cluster_name() try: if (src_cluster_name != dest_cluster_name or not self._have_cluster_creds): # 1. Create a clone on source. We don't need to split from # clone in order to replicate data self._allocate_container_from_snapshot( dest_share, snapshot, src_vserver, src_vserver_client, split=False) # 2. Create a replica in destination host self._allocate_container( dest_share, dest_vserver, dest_vserver_client, replica=True) # 3. Initialize snapmirror relationship with cloned share. 
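# Illustrative sketch (not part of the driver): the asynchronous state picked
# by create_share_from_snapshot() when parent and child shares live on
# different hosts. The string values mirror the STATE_* constants defined on
# this class.
def _example_initial_copy_state(src_cluster_name, dest_cluster_name,
                                have_cluster_creds=True):
    if src_cluster_name != dest_cluster_name or not have_cluster_creds:
        # Different clusters (or no cluster credentials): clone on the source,
        # create a DP volume on the destination and let SnapMirror copy data.
        return 'snapmirror_data_copying'
    # Same cluster: split the clone from its parent so the new volume can be
    # rehosted and/or moved to the destination aggregate.
    return 'splitting_volume_clone'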
src_share_instance['replica_state'] = ( constants.REPLICA_STATE_ACTIVE) dm_session.create_snapmirror(src_share_instance, dest_share) # The snapmirror data copy can take some time to be concluded, # we'll answer this call asynchronously state = self.STATE_SNAPMIRROR_DATA_COPYING else: # NOTE(dviroel): there's a need to split the cloned share from # its parent in order to move it to a different aggregate or # vserver self._allocate_container_from_snapshot( dest_share, snapshot, src_vserver, src_vserver_client, split=True) # The split volume clone operation can take some time to be # concluded and we'll answer the call asynchronously state = self.STATE_SPLITTING_VOLUME_CLONE except Exception: # If the share exists on the source vserser, we need to # delete it since it's a temporary share, not managed by the system dm_session.delete_snapmirror(src_share_instance, dest_share) self._delete_share(src_share_instance, src_vserver_client, remove_export=False) msg = _('Could not create share %(share_id)s from snapshot ' '%(snapshot_id)s in the destination host %(dest_host)s.') msg_args = {'share_id': dest_share['id'], 'snapshot_id': snapshot['id'], 'dest_host': dest_share['host']} raise exception.NetAppException(msg % msg_args) # Store source share info on private storage using destination share id src_share_instance['internal_state'] = state src_share_instance['status'] = constants.STATUS_ACTIVE self.private_storage.update(dest_share['id'], { 'source_share': json.dumps(src_share_instance) }) return { 'status': constants.STATUS_CREATING_FROM_SNAPSHOT, } def _update_create_from_snapshot_status(self, share, share_server=None): # TODO(dviroel) return progress info in asynchronous answers # If the share is creating from snapshot and copying data in background # we'd verify if the operation has finished and trigger new operations # if necessary. source_share_str = self.private_storage.get(share['id'], 'source_share') if source_share_str is None: msg = _('Could not update share %(share_id)s status due to invalid' ' internal state. Aborting share creation.') msg_args = {'share_id': share['id']} LOG.error(msg, msg_args) return {'status': constants.STATUS_ERROR} try: # Check if current operation had finished and continue to move the # source share towards its destination return self._create_from_snapshot_continue(share, share_server) except Exception: # Delete everything associated to the temporary clone created on # the source host. 
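# Illustrative sketch (not part of the driver): the temporary source-share
# record is serialized to JSON under the 'source_share' key of the driver's
# private storage and read back on every status update. Keys mirror the ones
# used above; the values shown are made up.
import json

def _example_source_share_roundtrip():
    record = {
        'id': 'fake-share-id',
        'host': 'hostname@backend#pool',
        'share_server': None,
        'internal_state': 'snapmirror_data_copying',
        'status': 'active',
    }
    serialized = json.dumps(record)   # stored via private_storage.update()
    return json.loads(serialized)     # read back via private_storage.get()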
source_share = json.loads(source_share_str) dm_session = data_motion.DataMotionSession() dm_session.delete_snapmirror(source_share, share) __, src_vserver, src_backend = ( dm_session.get_backend_info_for_share(source_share)) src_vserver_client = data_motion.get_client_for_backend( src_backend, vserver_name=src_vserver) self._delete_share(source_share, src_vserver_client, remove_export=False) # Delete private storage info self.private_storage.delete(share['id']) msg = _('Could not complete share %(share_id)s creation due to an ' 'internal error.') msg_args = {'share_id': share['id']} LOG.error(msg, msg_args) return {'status': constants.STATUS_ERROR} def _create_from_snapshot_continue(self, share, share_server=None): return_values = { 'status': constants.STATUS_CREATING_FROM_SNAPSHOT } apply_qos_on_dest = False # Data motion session used to extract host info and manage snapmirrors dm_session = data_motion.DataMotionSession() # Get info from private storage src_share_str = self.private_storage.get(share['id'], 'source_share') src_share = json.loads(src_share_str) current_state = src_share['internal_state'] share['share_server'] = share_server # Source host info __, src_vserver, src_backend = ( dm_session.get_backend_info_for_share(src_share)) src_aggr = share_utils.extract_host(src_share['host'], level='pool') src_vserver_client = data_motion.get_client_for_backend( src_backend, vserver_name=src_vserver) # Destination host info dest_vserver, dest_vserver_client = self._get_vserver(share_server) dest_aggr = share_utils.extract_host(share['host'], level='pool') if current_state == self.STATE_SPLITTING_VOLUME_CLONE: if self._check_volume_clone_split_completed( src_share, src_vserver_client): # Rehost volume if source and destination are hosted in # different vservers if src_vserver != dest_vserver: # NOTE(dviroel): some volume policies, policy rules and # configurations are lost from the source volume after # rehost operation. qos_policy_for_share = ( self._get_backend_qos_policy_group_name(share['id'])) src_vserver_client.mark_qos_policy_group_for_deletion( qos_policy_for_share) # Apply QoS on destination share apply_qos_on_dest = True self._rehost_and_mount_volume( share, src_vserver, src_vserver_client, dest_vserver, dest_vserver_client) # Move the share to the expected aggregate if src_aggr != dest_aggr: # Move volume and 'defer' the cutover. If it fails, the # share will be deleted afterwards self._move_volume_after_splitting( src_share, share, share_server, cutover_action='defer') # Move a volume can take longer, we'll answer # asynchronously current_state = self.STATE_MOVING_VOLUME else: return_values['status'] = constants.STATUS_AVAILABLE elif current_state == self.STATE_MOVING_VOLUME: if self._check_volume_move_completed(share, share_server): if src_vserver != dest_vserver: # NOTE(dviroel): at this point we already rehosted the # share, but we missed applying the qos since it was moving # the share between aggregates apply_qos_on_dest = True return_values['status'] = constants.STATUS_AVAILABLE elif current_state == self.STATE_SNAPMIRROR_DATA_COPYING: replica_state = self.update_replica_state( None, # no context is needed [src_share], share, [], # access_rules [], # snapshot list share_server) if replica_state in [None, constants.STATUS_ERROR]: msg = _("Destination share has failed on replicating data " "from source share.") LOG.exception(msg) raise exception.NetAppException(msg) elif replica_state == constants.REPLICA_STATE_IN_SYNC: try: # 1. 
Start an update to try to get a last minute # transfer before we quiesce and break dm_session.update_snapmirror(src_share, share) except exception.StorageCommunicationException: # Ignore any errors since the current source replica # may be unreachable pass # 2. Break SnapMirror # NOTE(dviroel): if it fails on break/delete a snapmirror # relationship, we won't be able to delete the share. dm_session.break_snapmirror(src_share, share) dm_session.delete_snapmirror(src_share, share) # 3. Delete the source volume self._delete_share(src_share, src_vserver_client, remove_export=False) share_name = self._get_backend_share_name(src_share['id']) # 4. Set File system size fixed to false dest_vserver_client.set_volume_filesys_size_fixed( share_name, filesys_size_fixed=False) apply_qos_on_dest = True return_values['status'] = constants.STATUS_AVAILABLE else: # Delete this share from private storage since we'll abort this # operation. self.private_storage.delete(share['id']) msg_args = { 'state': current_state, 'id': share['id'], } msg = _("Caught an unexpected internal state '%(state)s' for " "share %(id)s. Aborting operation.") % msg_args LOG.exception(msg) raise exception.NetAppException(msg) if return_values['status'] == constants.STATUS_AVAILABLE: if apply_qos_on_dest: extra_specs = share_types.get_extra_specs_from_share(share) provisioning_options = self._get_provisioning_options( extra_specs) qos_policy_group_name = ( self._modify_or_create_qos_for_existing_share( share, extra_specs, dest_vserver, dest_vserver_client)) if qos_policy_group_name: provisioning_options['qos_policy_group'] = ( qos_policy_group_name) share_name = self._get_backend_share_name(share['id']) # Modify volume to match extra specs dest_vserver_client.modify_volume( dest_aggr, share_name, **provisioning_options) self.private_storage.delete(share['id']) return_values['export_locations'] = self._create_export( share, share_server, dest_vserver, dest_vserver_client, clear_current_export_policy=False) else: new_src_share = copy.deepcopy(src_share) new_src_share['internal_state'] = current_state self.private_storage.update(share['id'], { 'source_share': json.dumps(new_src_share) }) return return_values @na_utils.trace def _allocate_container(self, share, vserver, vserver_client, replica=False): """Create new share on aggregate.""" share_name = self._get_backend_share_name(share['id']) # Get Data ONTAP aggregate name as pool name. pool_name = share_utils.extract_host(share['host'], level='pool') if pool_name is None: msg = _("Pool is not available in the share host field.") raise exception.InvalidHost(reason=msg) provisioning_options = self._get_provisioning_options_for_share( share, vserver, vserver_client=vserver_client, replica=replica) if replica: # If this volume is intended to be a replication destination, # create it as the 'data-protection' type provisioning_options['volume_type'] = 'dp' hide_snapdir = provisioning_options.pop('hide_snapdir') LOG.debug('Creating share %(share)s on pool %(pool)s with ' 'provisioning options %(options)s', {'share': share_name, 'pool': pool_name, 'options': provisioning_options}) vserver_client.create_volume( pool_name, share_name, share['size'], snapshot_reserve=self.configuration. 
netapp_volume_snapshot_reserve_percent, **provisioning_options) if hide_snapdir: self._apply_snapdir_visibility( hide_snapdir, share_name, vserver_client) def _apply_snapdir_visibility( self, hide_snapdir, share_name, vserver_client): LOG.debug('Applying snapshot visibility according to hide_snapdir ' 'value of %(hide_snapdir)s on share %(share)s.', {'hide_snapdir': hide_snapdir, 'share': share_name}) vserver_client.set_volume_snapdir_access(share_name, hide_snapdir) @na_utils.trace def _remap_standard_boolean_extra_specs(self, extra_specs): """Replace standard boolean extra specs with NetApp-specific ones.""" specs = copy.deepcopy(extra_specs) for (key, netapp_key) in self.STANDARD_BOOLEAN_EXTRA_SPECS_MAP.items(): if key in specs: bool_value = share_types.parse_boolean_extra_spec(key, specs[key]) specs[netapp_key] = 'true' if bool_value else 'false' del specs[key] return specs @na_utils.trace def _check_extra_specs_validity(self, share, extra_specs): """Check if the extra_specs have valid values.""" self._check_boolean_extra_specs_validity( share, extra_specs, list(self.BOOLEAN_QUALIFIED_EXTRA_SPECS_MAP)) self._check_string_extra_specs_validity(share, extra_specs) @na_utils.trace def _check_string_extra_specs_validity(self, share, extra_specs): """Check if the string_extra_specs have valid values.""" if 'netapp:max_files' in extra_specs: self._check_if_max_files_is_valid(share, extra_specs['netapp:max_files']) @na_utils.trace def _check_if_max_files_is_valid(self, share, value): """Check if max_files has a valid value.""" if int(value) < 0: args = {'value': value, 'key': 'netapp:max_files', 'type_id': share['share_type_id'], 'share_id': share['id']} msg = _('Invalid value "%(value)s" for extra_spec "%(key)s" ' 'in share_type %(type_id)s for share %(share_id)s.') raise exception.NetAppException(msg % args) @na_utils.trace def _check_boolean_extra_specs_validity(self, share, specs, keys_of_interest): # cDOT compression requires deduplication. dedup = specs.get('netapp:dedup', None) compression = specs.get('netapp:compression', None) if dedup is not None and compression is not None: if dedup.lower() == 'false' and compression.lower() == 'true': spec = {'netapp:dedup': dedup, 'netapp:compression': compression} type_id = share['share_type_id'] share_id = share['id'] args = {'type_id': type_id, 'share_id': share_id, 'spec': spec} msg = _('Invalid combination of extra_specs in share_type ' '%(type_id)s for share %(share_id)s: %(spec)s: ' 'deduplication must be enabled in order for ' 'compression to be enabled.') raise exception.Invalid(msg % args) """Check if the boolean_extra_specs have valid values.""" # Extra spec values must be (ignoring case) 'true' or 'false'. for key in keys_of_interest: value = specs.get(key) if value is not None and value.lower() not in ['true', 'false']: type_id = share['share_type_id'] share_id = share['id'] arg_map = {'value': value, 'key': key, 'type_id': type_id, 'share_id': share_id} msg = _('Invalid value "%(value)s" for extra_spec "%(key)s" ' 'in share_type %(type_id)s for share %(share_id)s.') raise exception.Invalid(msg % arg_map) @na_utils.trace def _get_boolean_provisioning_options(self, specs, boolean_specs_map): """Given extra specs, return corresponding client library kwargs. Build a full set of client library provisioning kwargs, filling in a default value if an explicit value has not been supplied via a corresponding extra spec. Boolean extra spec values are "true" or "false", with missing specs treated as "false". 
Provisioning kwarg values are True or False. """ # Extract the extra spec keys of concern and their corresponding # kwarg keys as lists. keys_of_interest = list(boolean_specs_map) provisioning_args = [boolean_specs_map[key] for key in keys_of_interest] # Set missing spec values to 'false' for key in keys_of_interest: if key not in specs: specs[key] = 'false' # Build a list of Boolean provisioning arguments from the string # equivalents in the spec values. provisioning_values = [specs[key].lower() == 'true' for key in keys_of_interest] # Combine the list of provisioning args and the list of provisioning # values into a dictionary suitable for use as kwargs when invoking # provisioning methods from the client API library. return dict(zip(provisioning_args, provisioning_values)) @na_utils.trace def _get_string_provisioning_options(self, specs, string_specs_map): """Given extra specs, return corresponding client library kwargs. Build a full set of client library provisioning kwargs, filling in a default value if an explicit value has not been supplied via a corresponding extra spec. """ # Extract the extra spec keys of concern and their corresponding # kwarg keys as lists. keys_of_interest = list(string_specs_map) provisioning_args = [string_specs_map[key] for key in keys_of_interest] # Set missing spec values to 'false' for key in keys_of_interest: if key not in specs: specs[key] = None provisioning_values = [specs[key] for key in keys_of_interest] # Combine the list of provisioning args and the list of provisioning # values into a dictionary suitable for use as kwargs when invoking # provisioning methods from the client API library. return dict(zip(provisioning_args, provisioning_values)) def _get_normalized_qos_specs(self, extra_specs): if not extra_specs.get('qos'): return {} normalized_qos_specs = { self.QOS_SPECS[key.lower()]: value for key, value in extra_specs.items() if self.QOS_SPECS.get(key.lower()) } if not normalized_qos_specs: msg = _("The extra-spec 'qos' is set to True, but no netapp " "supported qos-specs have been specified in the share " "type. Cannot provision a QoS policy. Specify any of the " "following extra-specs and try again: %s") raise exception.NetAppException(msg % list(self.QOS_SPECS)) # TODO(gouthamr): Modify check when throughput floors are allowed if len(normalized_qos_specs) > 1: msg = _('Only one NetApp QoS spec can be set at a time. ' 'Specified QoS limits: %s') raise exception.NetAppException(msg % normalized_qos_specs) return normalized_qos_specs def _get_max_throughput(self, share_size, qos_specs): # QoS limits are exclusive of one another. 
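# Illustrative sketch (not part of the driver): the mapping implemented by
# _get_max_throughput() just below, shown as a standalone function. A 100 GiB
# share with {'maxiopspergib': '10'} yields '1000iops'.
def _example_max_throughput(share_size_gb, qos_specs):
    if 'maxiops' in qos_specs:
        return '%siops' % qos_specs['maxiops']
    if 'maxiopspergib' in qos_specs:
        return '%siops' % (int(qos_specs['maxiopspergib']) *
                           int(share_size_gb))
    if 'maxbps' in qos_specs:
        return '%sB/s' % qos_specs['maxbps']
    if 'maxbpspergib' in qos_specs:
        return '%sB/s' % (int(qos_specs['maxbpspergib']) *
                          int(share_size_gb))
    return None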
if 'maxiops' in qos_specs: return '%siops' % qos_specs['maxiops'] elif 'maxiopspergib' in qos_specs: return '%siops' % six.text_type( int(qos_specs['maxiopspergib']) * int(share_size)) elif 'maxbps' in qos_specs: return '%sB/s' % qos_specs['maxbps'] elif 'maxbpspergib' in qos_specs: return '%sB/s' % six.text_type( int(qos_specs['maxbpspergib']) * int(share_size)) @na_utils.trace def _create_qos_policy_group(self, share, vserver, qos_specs, vserver_client=None): max_throughput = self._get_max_throughput(share['size'], qos_specs) qos_policy_group_name = self._get_backend_qos_policy_group_name( share['id']) client = vserver_client or self._client client.qos_policy_group_create(qos_policy_group_name, vserver, max_throughput=max_throughput) return qos_policy_group_name @na_utils.trace def _get_provisioning_options_for_share( self, share, vserver, vserver_client=None, replica=False): """Return provisioning options from a share. Starting with a share, this method gets the extra specs, rationalizes NetApp vs. standard extra spec values, ensures their validity, and returns them in a form suitable for passing to various API client methods. """ extra_specs = share_types.get_extra_specs_from_share(share) extra_specs = self._remap_standard_boolean_extra_specs(extra_specs) self._check_extra_specs_validity(share, extra_specs) provisioning_options = self._get_provisioning_options(extra_specs) qos_specs = self._get_normalized_qos_specs(extra_specs) if qos_specs and not replica: qos_policy_group = self._create_qos_policy_group( share, vserver, qos_specs, vserver_client) provisioning_options['qos_policy_group'] = qos_policy_group return provisioning_options @na_utils.trace def _get_provisioning_options(self, specs): """Return a merged result of string and binary provisioning options.""" boolean_args = self._get_boolean_provisioning_options( specs, self.BOOLEAN_QUALIFIED_EXTRA_SPECS_MAP) string_args = self._get_string_provisioning_options( specs, self.STRING_QUALIFIED_EXTRA_SPECS_MAP) result = boolean_args.copy() result.update(string_args) result['encrypt'] = self._get_nve_option(specs) return result def _get_nve_option(self, specs): if 'netapp_flexvol_encryption' in specs: nve = specs['netapp_flexvol_encryption'].lower() == 'true' else: nve = False return nve @na_utils.trace def _check_aggregate_extra_specs_validity(self, aggregate_name, specs): for specs_key in ('netapp_disk_type', 'netapp_raid_type'): aggr_value = self._ssc_stats.get(aggregate_name, {}).get(specs_key) specs_value = specs.get(specs_key) if aggr_value and specs_value and aggr_value != specs_value: msg = _('Invalid value "%(value)s" for extra_spec "%(key)s" ' 'in aggregate %(aggr)s.') msg_args = { 'value': specs_value, 'key': specs_key, 'aggr': aggregate_name } raise exception.NetAppException(msg % msg_args) @na_utils.trace def _allocate_container_from_snapshot( self, share, snapshot, vserver, vserver_client, snapshot_name_func=_get_backend_snapshot_name, split=None): """Clones existing share.""" share_name = self._get_backend_share_name(share['id']) parent_share_name = self._get_backend_share_name(snapshot['share_id']) if snapshot.get('provider_location') is None: parent_snapshot_name = snapshot_name_func(self, snapshot['id']) else: parent_snapshot_name = snapshot['provider_location'] provisioning_options = self._get_provisioning_options_for_share( share, vserver, vserver_client=vserver_client) hide_snapdir = provisioning_options.pop('hide_snapdir') if split is not None: provisioning_options['split'] = split LOG.debug('Creating share from 
snapshot %s', snapshot['id']) vserver_client.create_volume_clone( share_name, parent_share_name, parent_snapshot_name, **provisioning_options) if share['size'] > snapshot['size']: vserver_client.set_volume_size(share_name, share['size']) if hide_snapdir: self._apply_snapdir_visibility( hide_snapdir, share_name, vserver_client) @na_utils.trace def _share_exists(self, share_name, vserver_client): return vserver_client.volume_exists(share_name) @na_utils.trace def _delete_share(self, share, vserver_client, remove_export=True): share_name = self._get_backend_share_name(share['id']) if self._share_exists(share_name, vserver_client): if remove_export: self._remove_export(share, vserver_client) self._deallocate_container(share_name, vserver_client) qos_policy_for_share = self._get_backend_qos_policy_group_name( share['id']) vserver_client.mark_qos_policy_group_for_deletion( qos_policy_for_share) else: LOG.info("Share %s does not exist.", share['id']) @na_utils.trace def delete_share(self, context, share, share_server=None): """Deletes share.""" try: vserver, vserver_client = self._get_vserver( share_server=share_server) except (exception.InvalidInput, exception.VserverNotSpecified, exception.VserverNotFound) as error: LOG.warning("Could not determine share server for share being " "deleted: %(share)s. Deletion of share record " "will proceed anyway. Error: %(error)s", {'share': share['id'], 'error': error}) return self._delete_share(share, vserver_client) @na_utils.trace def _deallocate_container(self, share_name, vserver_client): """Free share space.""" vserver_client.unmount_volume(share_name, force=True) vserver_client.offline_volume(share_name) vserver_client.delete_volume(share_name) @na_utils.trace def _create_export(self, share, share_server, vserver, vserver_client, clear_current_export_policy=True): """Creates NAS storage.""" helper = self._get_helper(share) helper.set_client(vserver_client) share_name = self._get_backend_share_name(share['id']) interfaces = vserver_client.get_network_interfaces( protocols=[share['share_proto']]) if not interfaces: msg = _('Cannot find network interfaces for Vserver %(vserver)s ' 'and protocol %(proto)s.') msg_args = {'vserver': vserver, 'proto': share['share_proto']} raise exception.NetAppException(msg % msg_args) # Get LIF addresses with metadata export_addresses = self._get_export_addresses_with_metadata( share, share_server, interfaces) # Create the share and get a callback for generating export locations callback = helper.create_share( share, share_name, clear_current_export_policy=clear_current_export_policy) # Generate export locations using addresses, metadata and callback export_locations = [ { 'path': callback(export_address), 'is_admin_only': metadata.pop('is_admin_only', False), 'metadata': metadata, } for export_address, metadata in copy.deepcopy(export_addresses).items() ] # Sort the export locations to report preferred paths first export_locations = self._sort_export_locations_by_preferred_paths( export_locations) return export_locations @na_utils.trace def _get_export_addresses_with_metadata(self, share, share_server, interfaces): """Return interface addresses with locality and other metadata.""" # Get home node so we can identify preferred paths aggregate_name = share_utils.extract_host(share['host'], level='pool') home_node = self._get_aggregate_node(aggregate_name) # Get admin LIF addresses so we can identify admin export locations admin_addresses = self._get_admin_addresses_for_share_server( share_server) addresses = {} for interface 
in interfaces: address = interface['address'] is_admin_only = address in admin_addresses if home_node: preferred = interface.get('home-node') == home_node else: preferred = False addresses[address] = { 'is_admin_only': is_admin_only, 'preferred': preferred, } return addresses @na_utils.trace def _get_admin_addresses_for_share_server(self, share_server): if not share_server: return [] admin_addresses = [] for network_allocation in share_server.get('network_allocations'): if network_allocation['label'] == 'admin': admin_addresses.append(network_allocation['ip_address']) return admin_addresses @na_utils.trace def _sort_export_locations_by_preferred_paths(self, export_locations): """Sort the export locations to report preferred paths first.""" sort_key = lambda location: location.get( # noqa: E731 'metadata', {}).get('preferred') is not True return sorted(export_locations, key=sort_key) @na_utils.trace def _remove_export(self, share, vserver_client): """Deletes NAS storage.""" helper = self._get_helper(share) helper.set_client(vserver_client) share_name = self._get_backend_share_name(share['id']) target = helper.get_target(share) # Share may be in error state, so there's no share and target. if target: helper.delete_share(share, share_name) @na_utils.trace def create_snapshot(self, context, snapshot, share_server=None): """Creates a snapshot of a share.""" vserver, vserver_client = self._get_vserver(share_server=share_server) share_name = self._get_backend_share_name(snapshot['share_id']) snapshot_name = self._get_backend_snapshot_name(snapshot['id']) LOG.debug('Creating snapshot %s', snapshot_name) vserver_client.create_snapshot(share_name, snapshot_name) return {'provider_location': snapshot_name} def revert_to_snapshot(self, context, snapshot, share_server=None): """Reverts a share (in place) to the specified snapshot.""" vserver, vserver_client = self._get_vserver(share_server=share_server) share_name = self._get_backend_share_name(snapshot['share_id']) snapshot_name = (snapshot.get('provider_location') or self._get_backend_snapshot_name(snapshot['id'])) LOG.debug('Restoring snapshot %s', snapshot_name) vserver_client.restore_snapshot(share_name, snapshot_name) @na_utils.trace def delete_snapshot(self, context, snapshot, share_server=None, snapshot_name=None): """Deletes a snapshot of a share.""" try: vserver, vserver_client = self._get_vserver( share_server=share_server) except (exception.InvalidInput, exception.VserverNotSpecified, exception.VserverNotFound) as error: LOG.warning("Could not determine share server for snapshot " "being deleted: %(snap)s. Deletion of snapshot " "record will proceed anyway. 
Error: %(error)s", {'snap': snapshot['id'], 'error': error}) return share_name = self._get_backend_share_name(snapshot['share_id']) snapshot_name = (snapshot.get('provider_location') or snapshot_name or self._get_backend_snapshot_name(snapshot['id'])) try: self._delete_snapshot(vserver_client, share_name, snapshot_name) except exception.SnapshotResourceNotFound: msg = ("Snapshot %(snap)s does not exist on share %(share)s.") msg_args = {'snap': snapshot_name, 'share': share_name} LOG.info(msg, msg_args) def _delete_snapshot(self, vserver_client, share_name, snapshot_name): """Deletes a backend snapshot, handling busy snapshots as needed.""" backend_snapshot = vserver_client.get_snapshot(share_name, snapshot_name) LOG.debug('Deleting snapshot %(snap)s for share %(share)s.', {'snap': snapshot_name, 'share': share_name}) if not backend_snapshot['busy']: vserver_client.delete_snapshot(share_name, snapshot_name) elif backend_snapshot['owners'] == {'volume clone'}: # Snapshots are locked by clone(s), so split clone and soft delete snapshot_children = vserver_client.get_clone_children_for_snapshot( share_name, snapshot_name) for snapshot_child in snapshot_children: vserver_client.split_volume_clone(snapshot_child['name']) vserver_client.soft_delete_snapshot(share_name, snapshot_name) else: raise exception.ShareSnapshotIsBusy(snapshot_name=snapshot_name) @na_utils.trace def manage_existing(self, share, driver_options, share_server=None): vserver, vserver_client = self._get_vserver(share_server=share_server) share_size = self._manage_container(share, vserver, vserver_client) export_locations = self._create_export(share, share_server, vserver, vserver_client) return {'size': share_size, 'export_locations': export_locations} @na_utils.trace def unmanage(self, share, share_server=None): pass @na_utils.trace def _manage_container(self, share, vserver, vserver_client): """Bring existing volume under management as a share.""" protocol_helper = self._get_helper(share) protocol_helper.set_client(vserver_client) volume_name = protocol_helper.get_share_name_for_share(share) if not volume_name: msg = _('Volume could not be determined from export location ' '%(export)s.') msg_args = {'export': share['export_location']} raise exception.ManageInvalidShare(reason=msg % msg_args) share_name = self._get_backend_share_name(share['id']) aggregate_name = share_utils.extract_host(share['host'], level='pool') # Get existing volume info volume = vserver_client.get_volume_to_manage(aggregate_name, volume_name) if not volume: msg = _('Volume %(volume)s not found on aggregate %(aggr)s.') msg_args = {'volume': volume_name, 'aggr': aggregate_name} raise exception.ManageInvalidShare(reason=msg % msg_args) # When calculating the size, round up to the next GB. 
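        # NOTE: illustrative sketch of the rounding below, using a
        # hypothetical backend-reported size in bytes:
        #
        #     size_bytes = 1610612736                       # 1.5 GiB
        #     int(math.ceil(float(size_bytes) / units.Gi))  # -> 2
        #
        # Rounding up keeps the managed share from being reported as
        # smaller than the space it occupies on the aggregate.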
volume_size = int(math.ceil(float(volume['size']) / units.Gi)) # Validate extra specs extra_specs = share_types.get_extra_specs_from_share(share) extra_specs = self._remap_standard_boolean_extra_specs(extra_specs) try: self._check_extra_specs_validity(share, extra_specs) self._check_aggregate_extra_specs_validity(aggregate_name, extra_specs) except exception.ManilaException as ex: raise exception.ManageExistingShareTypeMismatch( reason=six.text_type(ex)) # Ensure volume is manageable self._validate_volume_for_manage(volume, vserver_client) provisioning_options = self._get_provisioning_options(extra_specs) debug_args = { 'share': share_name, 'aggr': aggregate_name, 'options': provisioning_options } LOG.debug('Managing share %(share)s on aggregate %(aggr)s with ' 'provisioning options %(options)s', debug_args) # Rename & remount volume on new path vserver_client.unmount_volume(volume_name) vserver_client.set_volume_name(volume_name, share_name) vserver_client.mount_volume(share_name) qos_policy_group_name = self._modify_or_create_qos_for_existing_share( share, extra_specs, vserver, vserver_client) if qos_policy_group_name: provisioning_options['qos_policy_group'] = qos_policy_group_name # Modify volume to match extra specs vserver_client.modify_volume(aggregate_name, share_name, **provisioning_options) # Save original volume info to private storage original_data = { 'original_name': volume['name'], 'original_junction_path': volume['junction-path'] } self.private_storage.update(share['id'], original_data) return volume_size @na_utils.trace def _validate_volume_for_manage(self, volume, vserver_client): """Ensure volume is a candidate for becoming a share.""" # Check volume info, extra specs validity if volume['type'] != 'rw' or volume['style'] != 'flex': msg = _('Volume %(volume)s must be a read-write flexible volume.') msg_args = {'volume': volume['name']} raise exception.ManageInvalidShare(reason=msg % msg_args) if vserver_client.volume_has_luns(volume['name']): msg = _('Volume %(volume)s must not contain LUNs.') msg_args = {'volume': volume['name']} raise exception.ManageInvalidShare(reason=msg % msg_args) if vserver_client.volume_has_junctioned_volumes(volume['name']): msg = _('Volume %(volume)s must not have junctioned volumes.') msg_args = {'volume': volume['name']} raise exception.ManageInvalidShare(reason=msg % msg_args) if vserver_client.volume_has_snapmirror_relationships(volume): msg = _('Volume %(volume)s must not be in any snapmirror ' 'relationships.') msg_args = {'volume': volume['name']} raise exception.ManageInvalidShare(reason=msg % msg_args) @na_utils.trace def manage_existing_snapshot( self, snapshot, driver_options, share_server=None): """Brings an existing snapshot under Manila management.""" vserver, vserver_client = self._get_vserver(share_server=share_server) share_name = self._get_backend_share_name(snapshot['share_id']) existing_snapshot_name = snapshot.get('provider_location') new_snapshot_name = self._get_backend_snapshot_name(snapshot['id']) if not existing_snapshot_name: msg = _('provider_location not specified.') raise exception.ManageInvalidShareSnapshot(reason=msg) # Get the volume containing the snapshot so we can report its size try: volume = vserver_client.get_volume(share_name) except (netapp_api.NaApiError, exception.StorageResourceNotFound, exception.NetAppException): msg = _('Could not determine snapshot %(snap)s size from ' 'volume %(vol)s.') msg_args = {'snap': existing_snapshot_name, 'vol': share_name} LOG.exception(msg, msg_args) raise 
exception.ShareNotFound(share_id=snapshot['share_id']) # Ensure there aren't any mirrors on this volume if vserver_client.volume_has_snapmirror_relationships(volume): msg = _('Share %s has SnapMirror relationships.') msg_args = {'vol': share_name} raise exception.ManageInvalidShareSnapshot(reason=msg % msg_args) # Rename snapshot try: vserver_client.rename_snapshot(share_name, existing_snapshot_name, new_snapshot_name) except netapp_api.NaApiError: msg = _('Could not rename snapshot %(snap)s in share %(vol)s.') msg_args = {'snap': existing_snapshot_name, 'vol': share_name} raise exception.ManageInvalidShareSnapshot(reason=msg % msg_args) # Save original snapshot info to private storage original_data = {'original_name': existing_snapshot_name} self.private_storage.update(snapshot['id'], original_data) # When calculating the size, round up to the next GB. size = int(math.ceil(float(volume['size']) / units.Gi)) return {'size': size, 'provider_location': new_snapshot_name} @na_utils.trace def unmanage_snapshot(self, snapshot, share_server=None): """Removes the specified snapshot from Manila management.""" @na_utils.trace def create_consistency_group_from_cgsnapshot( self, context, cg_dict, cgsnapshot_dict, share_server=None): """Creates a consistency group from an existing CG snapshot.""" vserver, vserver_client = self._get_vserver(share_server=share_server) # Ensure there is something to do if not cgsnapshot_dict['share_group_snapshot_members']: return None, None clone_list = self._collate_cg_snapshot_info(cg_dict, cgsnapshot_dict) share_update_list = [] LOG.debug('Creating consistency group from CG snapshot %s.', cgsnapshot_dict['id']) for clone in clone_list: self._allocate_container_from_snapshot( clone['share'], clone['snapshot'], vserver, vserver_client, NetAppCmodeFileStorageLibrary._get_backend_cg_snapshot_name) export_locations = self._create_export(clone['share'], share_server, vserver, vserver_client) share_update_list.append({ 'id': clone['share']['id'], 'export_locations': export_locations, }) return None, share_update_list def _collate_cg_snapshot_info(self, cg_dict, cgsnapshot_dict): """Collate the data for a clone of a CG snapshot. Given two data structures, a CG snapshot (cgsnapshot_dict) and a new CG to be cloned from the snapshot (cg_dict), match up both structures into a list of dicts (share & snapshot) suitable for use by existing driver methods that clone individual share snapshots. 
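        For example, with hypothetical ids, each entry of the returned
        clone list looks like:

            {'share': {'id': 'new-share-id', ...},
             'snapshot': {'share_id': 'member-share-id',
                          'id': 'cgsnapshot-id',
                          'size': 1}}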
""" clone_list = list() for share in cg_dict['shares']: clone_info = {'share': share} for cgsnapshot_member in ( cgsnapshot_dict['share_group_snapshot_members']): if (share['source_share_group_snapshot_member_id'] == cgsnapshot_member['id']): clone_info['snapshot'] = { 'share_id': cgsnapshot_member['share_id'], 'id': cgsnapshot_dict['id'], 'size': cgsnapshot_member['size'], } break else: msg = _("Invalid data supplied for creating consistency group " "from CG snapshot %s.") % cgsnapshot_dict['id'] raise exception.InvalidShareGroup(reason=msg) clone_list.append(clone_info) return clone_list @na_utils.trace def create_cgsnapshot(self, context, snap_dict, share_server=None): """Creates a consistency group snapshot.""" vserver, vserver_client = self._get_vserver(share_server=share_server) share_names = [self._get_backend_share_name(member['share_id']) for member in snap_dict.get('share_group_snapshot_members', [])] snapshot_name = self._get_backend_cg_snapshot_name(snap_dict['id']) if share_names: LOG.debug('Creating CG snapshot %s.', snapshot_name) vserver_client.create_cg_snapshot(share_names, snapshot_name) return None, None @na_utils.trace def delete_cgsnapshot(self, context, snap_dict, share_server=None): """Deletes a consistency group snapshot.""" try: vserver, vserver_client = self._get_vserver( share_server=share_server) except (exception.InvalidInput, exception.VserverNotSpecified, exception.VserverNotFound) as error: LOG.warning("Could not determine share server for CG snapshot " "being deleted: %(snap)s. Deletion of CG snapshot " "record will proceed anyway. Error: %(error)s", {'snap': snap_dict['id'], 'error': error}) return None, None share_names = [self._get_backend_share_name(member['share_id']) for member in ( snap_dict.get('share_group_snapshot_members', []))] snapshot_name = self._get_backend_cg_snapshot_name(snap_dict['id']) for share_name in share_names: try: self._delete_snapshot( vserver_client, share_name, snapshot_name) except exception.SnapshotResourceNotFound: msg = ("Snapshot %(snap)s does not exist on share " "%(share)s.") msg_args = {'snap': snapshot_name, 'share': share_name} LOG.info(msg, msg_args) continue return None, None @staticmethod def _is_group_cg(context, share_group): return 'host' == share_group.consistent_snapshot_support @na_utils.trace def create_group_snapshot(self, context, snap_dict, fallback_create, share_server=None): share_group = snap_dict['share_group'] if self._is_group_cg(context, share_group): return self.create_cgsnapshot(context, snap_dict, share_server=share_server) else: return fallback_create(context, snap_dict, share_server=share_server) @na_utils.trace def delete_group_snapshot(self, context, snap_dict, fallback_delete, share_server=None): share_group = snap_dict['share_group'] if self._is_group_cg(context, share_group): return self.delete_cgsnapshot(context, snap_dict, share_server=share_server) else: return fallback_delete(context, snap_dict, share_server=share_server) @na_utils.trace def create_group_from_snapshot(self, context, share_group, snapshot_dict, fallback_create, share_server=None): share_group2 = snapshot_dict['share_group'] if self._is_group_cg(context, share_group2): return self.create_consistency_group_from_cgsnapshot( context, share_group, snapshot_dict, share_server=share_server) else: return fallback_create(context, share_group, snapshot_dict, share_server=share_server) @na_utils.trace def _adjust_qos_policy_with_volume_resize(self, share, new_size, vserver_client): # Adjust QoS policy on a share if any if 
self._have_cluster_creds: share_name = self._get_backend_share_name(share['id']) share_on_the_backend = vserver_client.get_volume(share_name) qos_policy_on_share = share_on_the_backend['qos-policy-group-name'] if qos_policy_on_share is None: return extra_specs = share_types.get_extra_specs_from_share(share) qos_specs = self._get_normalized_qos_specs(extra_specs) size_dependent_specs = {k: v for k, v in qos_specs.items() if k in self.SIZE_DEPENDENT_QOS_SPECS} if size_dependent_specs: max_throughput = self._get_max_throughput( new_size, size_dependent_specs) self._client.qos_policy_group_modify( qos_policy_on_share, max_throughput) @na_utils.trace def extend_share(self, share, new_size, share_server=None): """Extends size of existing share.""" vserver, vserver_client = self._get_vserver(share_server=share_server) share_name = self._get_backend_share_name(share['id']) vserver_client.set_volume_filesys_size_fixed(share_name, filesys_size_fixed=False) LOG.debug('Extending share %(name)s to %(size)s GB.', {'name': share_name, 'size': new_size}) vserver_client.set_volume_size(share_name, new_size) self._adjust_qos_policy_with_volume_resize(share, new_size, vserver_client) @na_utils.trace def shrink_share(self, share, new_size, share_server=None): """Shrinks size of existing share.""" vserver, vserver_client = self._get_vserver(share_server=share_server) share_name = self._get_backend_share_name(share['id']) vserver_client.set_volume_filesys_size_fixed(share_name, filesys_size_fixed=False) LOG.debug('Shrinking share %(name)s to %(size)s GB.', {'name': share_name, 'size': new_size}) try: vserver_client.set_volume_size(share_name, new_size) except netapp_api.NaApiError as e: if e.code == netapp_api.EVOLOPNOTSUPP: msg = _('Failed to shrink share %(share_id)s. ' 'The current used space is larger than the the size' ' requested.') msg_args = {'share_id': share['id']} LOG.error(msg, msg_args) raise exception.ShareShrinkingPossibleDataLoss( share_id=share['id']) self._adjust_qos_policy_with_volume_resize( share, new_size, vserver_client) @na_utils.trace def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Updates access rules for a share.""" # NOTE(ameade): We do not need to add export rules to a non-active # replica as it will fail. replica_state = share.get('replica_state') if (replica_state is not None and replica_state != constants.REPLICA_STATE_ACTIVE): return try: vserver, vserver_client = self._get_vserver( share_server=share_server) except (exception.InvalidInput, exception.VserverNotSpecified, exception.VserverNotFound) as error: LOG.warning("Could not determine share server for share " "%(share)s during access rules update. " "Error: %(error)s", {'share': share['id'], 'error': error}) return share_name = self._get_backend_share_name(share['id']) if self._share_exists(share_name, vserver_client): helper = self._get_helper(share) helper.set_client(vserver_client) helper.update_access(share, share_name, access_rules) else: raise exception.ShareResourceNotFound(share_id=share['id']) def setup_server(self, network_info, metadata=None): raise NotImplementedError() def teardown_server(self, server_details, security_services=None): raise NotImplementedError() def get_network_allocations_number(self): """Get number of network interfaces to be created.""" raise NotImplementedError() @na_utils.trace def _update_ssc_info(self): """Periodically runs to update Storage Service Catalog data. The self._ssc_stats attribute is updated with the following format. 
{ : {: }} """ LOG.info("Updating storage service catalog information for " "backend '%s'", self._backend_name) # Work on a copy and update the ssc data atomically before returning. ssc_stats = copy.deepcopy(self._ssc_stats) aggregate_names = self._find_matching_aggregates() # Initialize entries for each aggregate. for aggregate_name in aggregate_names: if aggregate_name not in ssc_stats: ssc_stats[aggregate_name] = { 'netapp_aggregate': aggregate_name, } if aggregate_names: self._update_ssc_aggr_info(aggregate_names, ssc_stats) self._ssc_stats = ssc_stats @na_utils.trace def _update_ssc_aggr_info(self, aggregate_names, ssc_stats): """Updates the given SSC dictionary with new disk type information. :param aggregate_names: The aggregates this driver cares about :param ssc_stats: The dictionary to update """ if not self._have_cluster_creds: return for aggregate_name in aggregate_names: aggregate = self._client.get_aggregate(aggregate_name) hybrid = (six.text_type(aggregate.get('is-hybrid')).lower() if 'is-hybrid' in aggregate else None) disk_types = self._client.get_aggregate_disk_types(aggregate_name) ssc_stats[aggregate_name].update({ 'netapp_raid_type': aggregate.get('raid-type'), 'netapp_hybrid_aggregate': hybrid, 'netapp_disk_type': disk_types, }) def find_active_replica(self, replica_list): # NOTE(ameade): Find current active replica. There can only be one # active replica (SnapMirror source volume) at a time in cDOT. for r in replica_list: if r['replica_state'] == constants.REPLICA_STATE_ACTIVE: return r def _find_nonactive_replicas(self, replica_list): """Returns a list of all except the active replica.""" return [replica for replica in replica_list if replica['replica_state'] != constants.REPLICA_STATE_ACTIVE] def create_replica(self, context, replica_list, new_replica, access_rules, share_snapshots, share_server=None): """Creates the new replica on this backend and sets up SnapMirror.""" active_replica = self.find_active_replica(replica_list) dm_session = data_motion.DataMotionSession() # 1. Create the destination share dest_backend = share_utils.extract_host(new_replica['host'], level='backend_name') vserver = (dm_session.get_vserver_from_share(new_replica) or self.configuration.netapp_vserver) vserver_client = data_motion.get_client_for_backend( dest_backend, vserver_name=vserver) self._allocate_container(new_replica, vserver, vserver_client, replica=True) # 2. Setup SnapMirror dm_session.create_snapmirror(active_replica, new_replica) model_update = { 'export_locations': [], 'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC, 'access_rules_status': constants.STATUS_ACTIVE, } return model_update def delete_replica(self, context, replica_list, replica, share_snapshots, share_server=None): """Removes the replica on this backend and destroys SnapMirror.""" dm_session = data_motion.DataMotionSession() # 1. Remove SnapMirror dest_backend = share_utils.extract_host(replica['host'], level='backend_name') vserver = (dm_session.get_vserver_from_share(replica) or self.configuration.netapp_vserver) # Ensure that all potential snapmirror relationships and their metadata # involving the replica are destroyed. for other_replica in replica_list: if other_replica['id'] != replica['id']: dm_session.delete_snapmirror(other_replica, replica) dm_session.delete_snapmirror(replica, other_replica) # 2. 
Delete share vserver_client = data_motion.get_client_for_backend( dest_backend, vserver_name=vserver) share_name = self._get_backend_share_name(replica['id']) if self._share_exists(share_name, vserver_client): self._deallocate_container(share_name, vserver_client) def update_replica_state(self, context, replica_list, replica, access_rules, share_snapshots, share_server=None): """Returns the status of the given replica on this backend.""" active_replica = self.find_active_replica(replica_list) share_name = self._get_backend_share_name(replica['id']) vserver, vserver_client = self._get_vserver(share_server=share_server) if not vserver_client.volume_exists(share_name): msg = _("Volume %(share_name)s does not exist on vserver " "%(vserver)s.") msg_args = {'share_name': share_name, 'vserver': vserver} raise exception.ShareResourceNotFound(msg % msg_args) # NOTE(cknight): The SnapMirror may have been intentionally broken by # a revert-to-snapshot operation, in which case this method should not # attempt to change anything. if active_replica['status'] == constants.STATUS_REVERTING: return None dm_session = data_motion.DataMotionSession() try: snapmirrors = dm_session.get_snapmirrors(active_replica, replica) except netapp_api.NaApiError: LOG.exception("Could not get snapmirrors for replica %s.", replica['id']) return constants.STATUS_ERROR if not snapmirrors: if replica['status'] != constants.STATUS_CREATING: try: dm_session.create_snapmirror(active_replica, replica) except netapp_api.NaApiError: LOG.exception("Could not create snapmirror for " "replica %s.", replica['id']) return constants.STATUS_ERROR return constants.REPLICA_STATE_OUT_OF_SYNC snapmirror = snapmirrors[0] # NOTE(dviroel): Don't try to resume or resync a SnapMirror that has # one of the in progress transfer states, because the storage will # answer with an error. in_progress_status = ['preparing', 'transferring', 'finalizing'] if (snapmirror.get('mirror-state') != 'snapmirrored' and snapmirror.get('relationship-status') in in_progress_status): return constants.REPLICA_STATE_OUT_OF_SYNC if snapmirror.get('mirror-state') != 'snapmirrored': try: vserver_client.resume_snapmirror(snapmirror['source-vserver'], snapmirror['source-volume'], vserver, share_name) vserver_client.resync_snapmirror(snapmirror['source-vserver'], snapmirror['source-volume'], vserver, share_name) return constants.REPLICA_STATE_OUT_OF_SYNC except netapp_api.NaApiError: LOG.exception("Could not resync snapmirror.") return constants.STATUS_ERROR last_update_timestamp = float( snapmirror.get('last-transfer-end-timestamp', 0)) # TODO(ameade): Have a configurable RPO for replicas, for now it is # one hour. if (last_update_timestamp and (timeutils.is_older_than( datetime.datetime.utcfromtimestamp(last_update_timestamp) .isoformat(), 3600))): return constants.REPLICA_STATE_OUT_OF_SYNC # Check all snapshots exist snapshots = [snap['share_replica_snapshot'] for snap in share_snapshots] for snap in snapshots: snapshot_name = snap.get('provider_location') if not vserver_client.snapshot_exists(snapshot_name, share_name): return constants.REPLICA_STATE_OUT_OF_SYNC return constants.REPLICA_STATE_IN_SYNC def promote_replica(self, context, replica_list, replica, access_rules, share_server=None): """Switch SnapMirror relationships and allow r/w ops on replica. Creates a DataMotion session and switches the direction of the SnapMirror relationship between the currently 'active' instance ( SnapMirror source volume) and the replica. 
Also attempts setting up SnapMirror relationships between the other replicas and the new SnapMirror source volume ('active' instance). :param context: Request Context :param replica_list: List of replicas, including the 'active' instance :param replica: Replica to promote to SnapMirror source :param access_rules: Access rules to apply to the replica :param share_server: ShareServer class instance of replica :return: Updated replica_list """ orig_active_replica = self.find_active_replica(replica_list) dm_session = data_motion.DataMotionSession() new_replica_list = [] # Setup the new active replica try: new_active_replica = ( self._convert_destination_replica_to_independent( context, dm_session, orig_active_replica, replica, access_rules, share_server=share_server)) except exception.StorageCommunicationException: LOG.exception("Could not communicate with the backend " "for replica %s during promotion.", replica['id']) new_active_replica = replica.copy() new_active_replica['replica_state'] = ( constants.STATUS_ERROR) new_active_replica['status'] = constants.STATUS_ERROR return [new_active_replica] new_replica_list.append(new_active_replica) # Change the source replica for all destinations to the new # active replica. for r in replica_list: if r['id'] != replica['id']: r = self._safe_change_replica_source(dm_session, r, orig_active_replica, replica, replica_list) new_replica_list.append(r) # Unmount the original active replica. orig_active_vserver = dm_session.get_vserver_from_share( orig_active_replica) self._unmount_orig_active_replica(orig_active_replica, orig_active_vserver) self._handle_qos_on_replication_change(dm_session, new_active_replica, orig_active_replica, share_server=share_server) return new_replica_list def _unmount_orig_active_replica(self, orig_active_replica, orig_active_vserver=None): orig_active_replica_backend = ( share_utils.extract_host(orig_active_replica['host'], level='backend_name')) orig_active_vserver_client = data_motion.get_client_for_backend( orig_active_replica_backend, vserver_name=orig_active_vserver) share_name = self._get_backend_share_name( orig_active_replica['id']) try: orig_active_vserver_client.unmount_volume(share_name, force=True) LOG.info("Unmount of the original active replica %s successful.", orig_active_replica['id']) except exception.StorageCommunicationException: LOG.exception("Could not unmount the original active replica %s.", orig_active_replica['id']) def _handle_qos_on_replication_change(self, dm_session, new_active_replica, orig_active_replica, share_server=None): # QoS operations: Remove and purge QoS policy on old active replica # if any and create a new policy on the destination if necessary. 
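        # NOTE: rough outline of the steps below: (1) read the original
        # active replica's share-type extra specs and normalize them into
        # QoS specs; (2) if QoS specs exist and cluster credentials are
        # available, remove the QoS policy from the old active replica;
        # (3) on the promoted replica, create its QoS policy group if it
        # does not exist yet, otherwise adjust the existing group's max
        # throughput for the replica's size, then attach the policy to
        # the backing volume.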
extra_specs = share_types.get_extra_specs_from_share( orig_active_replica) qos_specs = self._get_normalized_qos_specs(extra_specs) if qos_specs and self._have_cluster_creds: dm_session.remove_qos_on_old_active_replica(orig_active_replica) # Check if a QoS policy already exists for the promoted replica, # if it does, modify it as necessary, else create it: try: new_active_replica_qos_policy = ( self._get_backend_qos_policy_group_name( new_active_replica['id'])) vserver, vserver_client = self._get_vserver( share_server=share_server) volume_name_on_backend = self._get_backend_share_name( new_active_replica['id']) if not self._client.qos_policy_group_exists( new_active_replica_qos_policy): self._create_qos_policy_group( new_active_replica, vserver, qos_specs) else: max_throughput = self._get_max_throughput( new_active_replica['size'], qos_specs) self._client.qos_policy_group_modify( new_active_replica_qos_policy, max_throughput) vserver_client.set_qos_policy_group_for_volume( volume_name_on_backend, new_active_replica_qos_policy) LOG.info("QoS policy applied successfully for promoted " "replica: %s", new_active_replica['id']) except Exception: LOG.exception("Could not apply QoS to the promoted replica.") def _convert_destination_replica_to_independent( self, context, dm_session, orig_active_replica, replica, access_rules, share_server=None): """Breaks SnapMirror and allows r/w ops on the destination replica. For promotion, the existing SnapMirror relationship must be broken and access rules have to be granted to the broken off replica to use it as an independent share. :param context: Request Context :param dm_session: Data motion object for SnapMirror operations :param orig_active_replica: Original SnapMirror source :param replica: Replica to promote to SnapMirror source :param access_rules: Access rules to apply to the replica :param share_server: ShareServer class instance of replica :return: Updated replica """ vserver, vserver_client = self._get_vserver(share_server=share_server) share_name = self._get_backend_share_name(replica['id']) try: # 1. Start an update to try to get a last minute transfer before we # quiesce and break dm_session.update_snapmirror(orig_active_replica, replica) except exception.StorageCommunicationException: # Ignore any errors since the current source replica may be # unreachable pass # 2. Break SnapMirror dm_session.break_snapmirror(orig_active_replica, replica) # 3. Setup access rules new_active_replica = replica.copy() helper = self._get_helper(replica) helper.set_client(vserver_client) try: helper.update_access(replica, share_name, access_rules) except Exception: new_active_replica['access_rules_status'] = ( constants.SHARE_INSTANCE_RULES_SYNCING) else: new_active_replica['access_rules_status'] = constants.STATUS_ACTIVE new_active_replica['export_locations'] = self._create_export( new_active_replica, share_server, vserver, vserver_client) new_active_replica['replica_state'] = constants.REPLICA_STATE_ACTIVE # 4. Set File system size fixed to false vserver_client.set_volume_filesys_size_fixed(share_name, filesys_size_fixed=False) return new_active_replica def _safe_change_replica_source(self, dm_session, replica, orig_source_replica, new_source_replica, replica_list): """Attempts to change the SnapMirror source to new source. If the attempt fails, 'replica_state' is set to 'error'. 
:param dm_session: Data motion object for SnapMirror operations :param replica: Replica that requires a change of source :param orig_source_replica: Original SnapMirror source volume :param new_source_replica: New SnapMirror source volume :return: Updated replica """ try: dm_session.change_snapmirror_source(replica, orig_source_replica, new_source_replica, replica_list) except exception.StorageCommunicationException: replica['status'] = constants.STATUS_ERROR replica['replica_state'] = constants.STATUS_ERROR replica['export_locations'] = [] msg = ("Failed to change replica (%s) to a SnapMirror " "destination. Replica backend is unreachable.") LOG.exception(msg, replica['id']) return replica except netapp_api.NaApiError: replica['replica_state'] = constants.STATUS_ERROR replica['export_locations'] = [] msg = ("Failed to change replica (%s) to a SnapMirror " "destination.") LOG.exception(msg, replica['id']) return replica replica['replica_state'] = constants.REPLICA_STATE_OUT_OF_SYNC replica['export_locations'] = [] return replica def create_replicated_snapshot(self, context, replica_list, snapshot_instances, share_server=None): active_replica = self.find_active_replica(replica_list) active_snapshot = [x for x in snapshot_instances if x['share_id'] == active_replica['id']][0] snapshot_name = self._get_backend_snapshot_name(active_snapshot['id']) self.create_snapshot(context, active_snapshot, share_server=share_server) active_snapshot['status'] = constants.STATUS_AVAILABLE active_snapshot['provider_location'] = snapshot_name snapshots = [active_snapshot] instances = zip(sorted(replica_list, key=lambda x: x['id']), sorted(snapshot_instances, key=lambda x: x['share_id'])) for replica, snapshot in instances: if snapshot['id'] != active_snapshot['id']: snapshot['provider_location'] = snapshot_name snapshots.append(snapshot) dm_session = data_motion.DataMotionSession() if replica.get('host'): try: dm_session.update_snapmirror(active_replica, replica) except netapp_api.NaApiError as e: if e.code != netapp_api.EOBJECTNOTFOUND: raise return snapshots def delete_replicated_snapshot(self, context, replica_list, snapshot_instances, share_server=None): active_replica = self.find_active_replica(replica_list) active_snapshot = [x for x in snapshot_instances if x['share_id'] == active_replica['id']][0] self.delete_snapshot(context, active_snapshot, share_server=share_server, snapshot_name=active_snapshot['provider_location'] ) active_snapshot['status'] = constants.STATUS_DELETED instances = zip(sorted(replica_list, key=lambda x: x['id']), sorted(snapshot_instances, key=lambda x: x['share_id'])) for replica, snapshot in instances: if snapshot['id'] != active_snapshot['id']: dm_session = data_motion.DataMotionSession() if replica.get('host'): try: dm_session.update_snapmirror(active_replica, replica) except netapp_api.NaApiError as e: if e.code != netapp_api.EOBJECTNOTFOUND: raise return [active_snapshot] def update_replicated_snapshot(self, replica_list, share_replica, snapshot_instances, snapshot_instance, share_server=None): active_replica = self.find_active_replica(replica_list) vserver, vserver_client = self._get_vserver(share_server=share_server) share_name = self._get_backend_share_name( snapshot_instance['share_id']) snapshot_name = snapshot_instance.get('provider_location') # NOTE(ameade): If there is no provider location, # then grab from active snapshot instance if snapshot_name is None: active_snapshot = [x for x in snapshot_instances if x['share_id'] == active_replica['id']][0] snapshot_name = 
active_snapshot.get('provider_location') if not snapshot_name: return try: snapshot_exists = vserver_client.snapshot_exists(snapshot_name, share_name) except exception.SnapshotUnavailable: # The volume must still be offline return if (snapshot_exists and snapshot_instance['status'] == constants.STATUS_CREATING): return { 'status': constants.STATUS_AVAILABLE, 'provider_location': snapshot_name, } elif (not snapshot_exists and snapshot_instance['status'] == constants.STATUS_DELETING): raise exception.SnapshotResourceNotFound( name=snapshot_instance.get('provider_location')) dm_session = data_motion.DataMotionSession() try: dm_session.update_snapmirror(active_replica, share_replica) except netapp_api.NaApiError as e: if e.code != netapp_api.EOBJECTNOTFOUND: raise def revert_to_replicated_snapshot(self, context, active_replica, replica_list, active_replica_snapshot, replica_snapshots, share_server=None): """Reverts a replicated share (in place) to the specified snapshot.""" vserver, vserver_client = self._get_vserver(share_server=share_server) share_name = self._get_backend_share_name( active_replica_snapshot['share_id']) snapshot_name = ( active_replica_snapshot.get('provider_location') or self._get_backend_snapshot_name(active_replica_snapshot['id'])) LOG.debug('Restoring snapshot %s', snapshot_name) dm_session = data_motion.DataMotionSession() non_active_replica_list = self._find_nonactive_replicas(replica_list) # Ensure source snapshot exists vserver_client.get_snapshot(share_name, snapshot_name) # Break all mirrors for replica in non_active_replica_list: try: dm_session.break_snapmirror( active_replica, replica, mount=False) except netapp_api.NaApiError as e: if e.code != netapp_api.EOBJECTNOTFOUND: raise # Delete source SnapMirror snapshots that will prevent a snap restore snapmirror_snapshot_names = vserver_client.list_snapmirror_snapshots( share_name) for snapmirror_snapshot_name in snapmirror_snapshot_names: vserver_client.delete_snapshot( share_name, snapmirror_snapshot_name, ignore_owners=True) # Restore source snapshot of interest vserver_client.restore_snapshot(share_name, snapshot_name) # Reestablish mirrors for replica in non_active_replica_list: try: dm_session.resync_snapmirror(active_replica, replica) except netapp_api.NaApiError as e: if e.code != netapp_api.EOBJECTNOTFOUND: raise def _check_destination_vserver_for_vol_move(self, source_share, source_vserver, dest_share_server): try: destination_vserver, __ = self._get_vserver( share_server=dest_share_server) except exception.InvalidParameterValue: destination_vserver = None if source_vserver != destination_vserver: msg = _("Cannot migrate %(shr)s efficiently from source " "VServer %(src)s to destination VServer %(dest)s.") msg_args = { 'shr': source_share['id'], 'src': source_vserver, 'dest': destination_vserver, } raise exception.NetAppException(msg % msg_args) def migration_check_compatibility(self, context, source_share, destination_share, share_server=None, destination_share_server=None): """Checks compatibility between self.host and destination host.""" # We need cluster creds to perform an intra-cluster data motion compatible = False destination_host = destination_share['host'] if self._have_cluster_creds: try: backend = share_utils.extract_host( destination_host, level='backend_name') destination_aggregate = share_utils.extract_host( destination_host, level='pool') # Validate new extra-specs are valid on the destination extra_specs = share_types.get_extra_specs_from_share( destination_share) 
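                # NOTE: the destination share type's extra specs are
                # validated up front so that an incompatible spec (for
                # example an unknown netapp provisioning option, or an
                # aggregate capability the destination pool lacks) fails
                # the compatibility check here rather than midway through
                # a volume move.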
self._check_extra_specs_validity( destination_share, extra_specs) # TODO(gouthamr): Check whether QoS min-throughputs can be # honored on the destination aggregate when supported. self._check_aggregate_extra_specs_validity( destination_aggregate, extra_specs) data_motion.get_backend_configuration(backend) source_vserver, __ = self._get_vserver( share_server=share_server) share_volume = self._get_backend_share_name( source_share['id']) self._check_destination_vserver_for_vol_move( source_share, source_vserver, destination_share_server) encrypt_dest = self._get_dest_flexvol_encryption_value( destination_share) self._client.check_volume_move( share_volume, source_vserver, destination_aggregate, encrypt_destination=encrypt_dest) except Exception: msg = ("Cannot migrate share %(shr)s efficiently between " "%(src)s and %(dest)s.") msg_args = { 'shr': source_share['id'], 'src': source_share['host'], 'dest': destination_host, } LOG.exception(msg, msg_args) else: compatible = True else: msg = ("Cluster credentials have not been configured " "with this share driver. Cannot perform volume move " "operations.") LOG.warning(msg) compatibility = { 'compatible': compatible, 'writable': compatible, 'nondisruptive': compatible, 'preserve_metadata': compatible, 'preserve_snapshots': compatible, } return compatibility def _move_volume_after_splitting(self, source_share, destination_share, share_server=None, cutover_action='wait'): retries = (self.configuration.netapp_start_volume_move_timeout / 5 or 1) @manila_utils.retry(exception.ShareBusyException, interval=5, retries=retries, backoff_rate=1) def try_move_volume(): try: self._move_volume(source_share, destination_share, share_server, cutover_action) except netapp_api.NaApiError as e: undergoing_split = 'undergoing a clone split' msg_args = {'id': source_share['id']} if (e.code == netapp_api.EAPIERROR and undergoing_split in e.message): msg = _('The volume %(id)s is undergoing a clone split ' 'operation. Will retry the operation.') % msg_args LOG.warning(msg) raise exception.ShareBusyException(reason=msg) else: msg = _("Unable to perform move operation for the volume " "%(id)s. Caught an unexpected error. Not " "retrying.") % msg_args raise exception.NetAppException(message=msg) try: try_move_volume() except exception.ShareBusyException: msg_args = {'id': source_share['id']} msg = _("Unable to perform move operation for the volume %(id)s " "because a clone split operation is still in progress. " "Retries exhausted. Not retrying.") % msg_args raise exception.NetAppException(message=msg) def _move_volume(self, source_share, destination_share, share_server=None, cutover_action='wait'): # Intra-cluster migration vserver, vserver_client = self._get_vserver(share_server=share_server) share_volume = self._get_backend_share_name(source_share['id']) destination_aggregate = share_utils.extract_host( destination_share['host'], level='pool') # If the destination's share type extra-spec for Flexvol encryption # is different than the source's, then specify the volume-move # operation to set the correct 'encrypt' attribute on the destination # volume. 
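        # NOTE: illustrative sketch of how the destination encryption
        # flag is derived by _get_dest_flexvol_encryption_value() below
        # (the extra-spec value shown is hypothetical):
        #
        #     spec = share_types.get_share_type_extra_specs(
        #         destination_share['share_type_id'],
        #         'netapp_flexvol_encryption')        # e.g. '<is> True'
        #     share_types.parse_boolean_extra_spec(
        #         'netapp_flexvol_encryption', spec)  # -> True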
encrypt_dest = self._get_dest_flexvol_encryption_value( destination_share) self._client.start_volume_move( share_volume, vserver, destination_aggregate, cutover_action=cutover_action, encrypt_destination=encrypt_dest) msg = ("Began volume move operation of share %(shr)s from %(src)s " "to %(dest)s.") msg_args = { 'shr': source_share['id'], 'src': source_share['host'], 'dest': destination_share['host'], } LOG.info(msg, msg_args) def migration_start(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Begins data motion from source_share to destination_share.""" self._move_volume(source_share, destination_share, share_server) def _get_volume_move_status(self, source_share, share_server): vserver, vserver_client = self._get_vserver(share_server=share_server) share_volume = self._get_backend_share_name(source_share['id']) status = self._client.get_volume_move_status(share_volume, vserver) return status def _check_volume_clone_split_completed(self, share, vserver_client): share_volume = self._get_backend_share_name(share['id']) return vserver_client.check_volume_clone_split_completed(share_volume) def _get_dest_flexvol_encryption_value(self, destination_share): dest_share_type_encrypted_val = share_types.get_share_type_extra_specs( destination_share['share_type_id'], 'netapp_flexvol_encryption') encrypt_destination = share_types.parse_boolean_extra_spec( 'netapp_flexvol_encryption', dest_share_type_encrypted_val) return encrypt_destination def _check_volume_move_completed(self, source_share, share_server): """Check progress of volume move operation.""" status = self._get_volume_move_status(source_share, share_server) completed_phases = ( 'cutover_hard_deferred', 'cutover_soft_deferred', 'completed') move_phase = status['phase'].lower() if move_phase == 'failed': msg_args = { 'shr': source_share['id'], 'reason': status['details'], } msg = _("Volume move operation for share %(shr)s failed. Reason: " "%(reason)s") % msg_args LOG.exception(msg) raise exception.NetAppException(msg) elif move_phase in completed_phases: return True return False def migration_continue(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Check progress of migration, try to repair data motion errors.""" return self._check_volume_move_completed(source_share, share_server) def _get_volume_move_progress(self, source_share, share_server): status = self._get_volume_move_status(source_share, share_server) # NOTE (gouthamr): If the volume move is waiting for a manual # intervention to cut-over, the copy is done with respect to the # user. Volume move copies the rest of the data before cut-over anyway. if status['phase'] in ('cutover_hard_deferred', 'cutover_soft_deferred'): status['percent-complete'] = 100 msg = ("Volume move status for share %(share)s: (State) %(state)s. " "(Phase) %(phase)s. 
Details: %(details)s") msg_args = { 'state': status['state'], 'details': status['details'], 'share': source_share['id'], 'phase': status['phase'], } LOG.info(msg, msg_args) return { 'total_progress': status['percent-complete'] or 0, 'state': status['state'], 'estimated_completion_time': status['estimated-completion-time'], 'phase': status['phase'], 'details': status['details'], } def migration_get_progress(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Return detailed progress of the migration in progress.""" return self._get_volume_move_progress(source_share, share_server) def migration_cancel(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Abort an ongoing migration.""" vserver, vserver_client = self._get_vserver(share_server=share_server) share_volume = self._get_backend_share_name(source_share['id']) try: self._get_volume_move_status(source_share, share_server) except exception.NetAppException: LOG.exception("Could not get volume move status.") return self._client.abort_volume_move(share_volume, vserver) msg = ("Share volume move operation for share %(shr)s from host " "%(src)s to %(dest)s was successfully aborted.") msg_args = { 'shr': source_share['id'], 'src': source_share['host'], 'dest': destination_share['host'], } LOG.info(msg, msg_args) def migration_complete(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Initiate the cutover to destination share after move is complete.""" vserver, vserver_client = self._get_vserver(share_server=share_server) share_volume = self._get_backend_share_name(source_share['id']) status = self._get_volume_move_status(source_share, share_server) move_phase = status['phase'].lower() if move_phase == 'completed': LOG.debug("Volume move operation was already successfully " "completed for share %(shr)s.", {'shr': source_share['id']}) elif move_phase in ('cutover_hard_deferred', 'cutover_soft_deferred'): self._client.trigger_volume_move_cutover(share_volume, vserver) self._wait_for_cutover_completion( source_share, share_server) else: msg_args = { 'shr': source_share['id'], 'status': status['state'], 'phase': status['phase'], 'details': status['details'], } msg = _("Cannot complete volume move operation for share %(shr)s. " "Current volume move status: %(status)s, phase: " "%(phase)s. Details: %(details)s") % msg_args LOG.exception(msg) raise exception.NetAppException(msg) new_share_volume_name = self._get_backend_share_name( destination_share['id']) vserver_client.set_volume_name(share_volume, new_share_volume_name) # Modify volume properties per share type extra-specs extra_specs = share_types.get_extra_specs_from_share( destination_share) extra_specs = self._remap_standard_boolean_extra_specs(extra_specs) self._check_extra_specs_validity(destination_share, extra_specs) provisioning_options = self._get_provisioning_options(extra_specs) qos_policy_group_name = self._modify_or_create_qos_for_existing_share( destination_share, extra_specs, vserver, vserver_client) if qos_policy_group_name: provisioning_options['qos_policy_group'] = qos_policy_group_name else: # Removing the QOS Policy on the migrated share as the # new extra-spec for which this share is being migrated to # does not specify any QOS settings. 
provisioning_options['qos_policy_group'] = "none" qos_policy_of_src_share = self._get_backend_qos_policy_group_name( source_share['id']) self._client.mark_qos_policy_group_for_deletion( qos_policy_of_src_share) destination_aggregate = share_utils.extract_host( destination_share['host'], level='pool') # Modify volume to match extra specs vserver_client.modify_volume(destination_aggregate, new_share_volume_name, **provisioning_options) msg = ("Volume move operation for share %(shr)s has completed " "successfully. Share has been moved from %(src)s to " "%(dest)s.") msg_args = { 'shr': source_share['id'], 'src': source_share['host'], 'dest': destination_share['host'], } LOG.info(msg, msg_args) # NOTE(gouthamr): For nondisruptive migration, current export # policy will not be cleared, the export policy will be renamed to # match the name of the share. export_locations = self._create_export( destination_share, share_server, vserver, vserver_client, clear_current_export_policy=False) src_snaps_dict = {s['id']: s for s in source_snapshots} snapshot_updates = {} for source_snap_id, destination_snap in snapshot_mappings.items(): p_location = src_snaps_dict[source_snap_id]['provider_location'] snapshot_updates.update( {destination_snap['id']: {'provider_location': p_location}}) return { 'export_locations': export_locations, 'snapshot_updates': snapshot_updates, } @na_utils.trace def _modify_or_create_qos_for_existing_share(self, share, extra_specs, vserver, vserver_client): """Gets/Creates QoS policy for an existing FlexVol. The share's assigned QoS policy is renamed and adjusted if the policy is exclusive to the FlexVol. If the policy includes other workloads besides the FlexVol, a new policy is created with the specs necessary. """ qos_specs = self._get_normalized_qos_specs(extra_specs) if not qos_specs: return backend_share_name = self._get_backend_share_name(share['id']) qos_policy_group_name = self._get_backend_qos_policy_group_name( share['id']) create_new_qos_policy_group = True backend_volume = vserver_client.get_volume( backend_share_name) backend_volume_size = int( math.ceil(float(backend_volume['size']) / units.Gi)) LOG.debug("Checking for a pre-existing QoS policy group that " "is exclusive to the volume %s.", backend_share_name) # Does the volume have an exclusive QoS policy that we can rename? if backend_volume['qos-policy-group-name'] is not None: existing_qos_policy_group = self._client.qos_policy_group_get( backend_volume['qos-policy-group-name']) if existing_qos_policy_group['num-workloads'] == 1: # Yay, can set max-throughput and rename msg = ("Found pre-existing QoS policy %(policy)s and it is " "exclusive to the volume %(volume)s. Modifying and " "renaming this policy to %(new_policy)s.") msg_args = { 'policy': backend_volume['qos-policy-group-name'], 'volume': backend_share_name, 'new_policy': qos_policy_group_name, } LOG.debug(msg, msg_args) max_throughput = self._get_max_throughput( backend_volume_size, qos_specs) self._client.qos_policy_group_modify( backend_volume['qos-policy-group-name'], max_throughput) self._client.qos_policy_group_rename( backend_volume['qos-policy-group-name'], qos_policy_group_name) create_new_qos_policy_group = False if create_new_qos_policy_group: share_obj = { 'size': backend_volume_size, 'id': share['id'], } LOG.debug("No existing QoS policy group found for " "volume. 
Creating a new one with name %s.", qos_policy_group_name) self._create_qos_policy_group(share_obj, vserver, qos_specs, vserver_client=vserver_client) return qos_policy_group_name def _wait_for_cutover_completion(self, source_share, share_server): retries = (self.configuration.netapp_volume_move_cutover_timeout / 5 or 1) @manila_utils.retry(exception.ShareBusyException, interval=5, retries=retries, backoff_rate=1) def check_move_completion(): status = self._get_volume_move_status(source_share, share_server) if status['phase'].lower() != 'completed': msg_args = { 'shr': source_share['id'], 'phs': status['phase'], } msg = _('Volume move operation for share %(shr)s is not ' 'complete. Current Phase: %(phs)s. ' 'Retrying.') % msg_args LOG.warning(msg) raise exception.ShareBusyException(reason=msg) try: check_move_completion() except exception.ShareBusyException: msg = _("Volume move operation did not complete after cut-over " "was triggered. Retries exhausted. Not retrying.") raise exception.NetAppException(message=msg) def get_backend_info(self, context): snapdir_visibility = self.configuration.netapp_reset_snapdir_visibility return { 'snapdir_visibility': snapdir_visibility, } def ensure_shares(self, context, shares): cfg_snapdir = self.configuration.netapp_reset_snapdir_visibility hide_snapdir = self.HIDE_SNAPDIR_CFG_MAP[cfg_snapdir.lower()] if hide_snapdir is not None: for share in shares: share_server = share.get('share_server') vserver, vserver_client = self._get_vserver( share_server=share_server) share_name = self._get_backend_share_name(share['id']) self._apply_snapdir_visibility( hide_snapdir, share_name, vserver_client) def get_share_status(self, share, share_server=None): if share['status'] == constants.STATUS_CREATING_FROM_SNAPSHOT: return self._update_create_from_snapshot_status(share, share_server) else: LOG.warning("Caught an unexpected share status '%s' during share " "status update routine. Skipping.", share['status']) def volume_rehost(self, share, src_vserver, dest_vserver): volume_name = self._get_backend_share_name(share['id']) msg = ("Rehosting volume of share %(shr)s from vserver %(src)s " "to vserver %(dest)s.") msg_args = { 'shr': share['id'], 'src': src_vserver, 'dest': dest_vserver, } LOG.info(msg, msg_args) self._client.rehost_volume(volume_name, src_vserver, dest_vserver) def _rehost_and_mount_volume(self, share, src_vserver, src_vserver_client, dest_vserver, dest_vserver_client): volume_name = self._get_backend_share_name(share['id']) # Unmount volume in the source vserver: src_vserver_client.unmount_volume(volume_name) # Rehost the volume self.volume_rehost(share, src_vserver, dest_vserver) # Mount the volume on the destination vserver dest_vserver_client.mount_volume(volume_name) manila-10.0.0/manila/share/drivers/netapp/dataontap/cluster_mode/data_motion.py0000664000175000017500000004624513656750227027723 0ustar zuulzuul00000000000000# Copyright (c) 2016 Alex Meade. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ NetApp Data ONTAP data motion library. 
This library handles transferring data from a source to a destination. Its responsibility is to handle this as efficiently as possible given the location of the data's source and destination. This includes cloning, SnapMirror, and copy-offload as improvements to brute force data transfer. """ from oslo_config import cfg from oslo_log import log from oslo_utils import excutils from manila import exception from manila.i18n import _ from manila.share import configuration from manila.share import driver from manila.share.drivers.netapp.dataontap.client import api as netapp_api from manila.share.drivers.netapp.dataontap.client import client_cmode from manila.share.drivers.netapp import options as na_opts from manila.share.drivers.netapp import utils as na_utils from manila.share import utils as share_utils from manila import utils LOG = log.getLogger(__name__) CONF = cfg.CONF def get_backend_configuration(backend_name): config_stanzas = CONF.list_all_sections() if backend_name not in config_stanzas: msg = _("Could not find backend stanza %(backend_name)s in " "configuration which is required for replication or migration " "workflows with the source backend. Available stanzas are " "%(stanzas)s") params = { "stanzas": config_stanzas, "backend_name": backend_name, } raise exception.BadConfigurationException(reason=msg % params) config = configuration.Configuration(driver.share_opts, config_group=backend_name) config.append_config_values(na_opts.netapp_cluster_opts) config.append_config_values(na_opts.netapp_connection_opts) config.append_config_values(na_opts.netapp_basicauth_opts) config.append_config_values(na_opts.netapp_transport_opts) config.append_config_values(na_opts.netapp_support_opts) config.append_config_values(na_opts.netapp_provisioning_opts) config.append_config_values(na_opts.netapp_data_motion_opts) return config def get_client_for_backend(backend_name, vserver_name=None): config = get_backend_configuration(backend_name) client = client_cmode.NetAppCmodeClient( transport_type=config.netapp_transport_type, username=config.netapp_login, password=config.netapp_password, hostname=config.netapp_server_hostname, port=config.netapp_server_port, vserver=vserver_name or config.netapp_vserver, trace=na_utils.TRACE_API) return client class DataMotionSession(object): def _get_backend_volume_name(self, config, share_obj): """Return the calculated backend name of the share. Uses the netapp_volume_name_template configuration value for the backend to calculate the volume name on the array for the share. 
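        For example, with a template such as 'share_%(share_id)s', a
        share with id 'f5a3-1b2c' (hypothetical) would map to a backend
        volume named 'share_f5a3_1b2c', since dashes in the id are
        replaced with underscores.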
""" volume_name = config.netapp_volume_name_template % { 'share_id': share_obj['id'].replace('-', '_')} return volume_name def _get_backend_qos_policy_group_name(self, share): """Get QoS policy name according to QoS policy group name template.""" __, config = self._get_backend_config_obj(share) return config.netapp_qos_policy_group_name_template % { 'share_id': share['id'].replace('-', '_')} def get_vserver_from_share(self, share_obj): share_server = share_obj.get('share_server') if share_server: backend_details = share_server.get('backend_details') if backend_details: return backend_details.get('vserver_name') def _get_backend_config_obj(self, share_obj): backend_name = share_utils.extract_host( share_obj['host'], level='backend_name') config = get_backend_configuration(backend_name) return backend_name, config def get_backend_info_for_share(self, share_obj): backend_name, config = self._get_backend_config_obj(share_obj) vserver = (self.get_vserver_from_share(share_obj) or config.netapp_vserver) volume_name = self._get_backend_volume_name( config, share_obj) return volume_name, vserver, backend_name def get_snapmirrors(self, source_share_obj, dest_share_obj): dest_volume_name, dest_vserver, dest_backend = ( self.get_backend_info_for_share(dest_share_obj)) dest_client = get_client_for_backend(dest_backend, vserver_name=dest_vserver) src_volume_name, src_vserver, __ = self.get_backend_info_for_share( source_share_obj) snapmirrors = dest_client.get_snapmirrors( src_vserver, src_volume_name, dest_vserver, dest_volume_name, desired_attributes=['relationship-status', 'mirror-state', 'source-vserver', 'source-volume', 'last-transfer-end-timestamp']) return snapmirrors def create_snapmirror(self, source_share_obj, dest_share_obj): """Sets up a SnapMirror relationship between two volumes. 1. Create SnapMirror relationship 2. Initialize data transfer asynchronously """ dest_volume_name, dest_vserver, dest_backend = ( self.get_backend_info_for_share(dest_share_obj)) dest_client = get_client_for_backend(dest_backend, vserver_name=dest_vserver) src_volume_name, src_vserver, __ = self.get_backend_info_for_share( source_share_obj) # 1. Create SnapMirror relationship # TODO(ameade): Change the schedule from hourly to a config value dest_client.create_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name, schedule='hourly') # 2. Initialize async transfer of the initial data dest_client.initialize_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name) def delete_snapmirror(self, source_share_obj, dest_share_obj, release=True): """Ensures all information about a SnapMirror relationship is removed. 1. Abort snapmirror 2. Delete the snapmirror 3. Release snapmirror to cleanup snapmirror metadata and snapshots """ dest_volume_name, dest_vserver, dest_backend = ( self.get_backend_info_for_share(dest_share_obj)) dest_client = get_client_for_backend(dest_backend, vserver_name=dest_vserver) src_volume_name, src_vserver, src_backend = ( self.get_backend_info_for_share(source_share_obj)) # 1. Abort any ongoing transfers try: dest_client.abort_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name, clear_checkpoint=False) except netapp_api.NaApiError: # Snapmirror is already deleted pass # 2. 
Delete SnapMirror Relationship and cleanup destination snapshots try: dest_client.delete_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name) except netapp_api.NaApiError as e: with excutils.save_and_reraise_exception() as exc_context: if (e.code == netapp_api.EOBJECTNOTFOUND or e.code == netapp_api.ESOURCE_IS_DIFFERENT or "(entry doesn't exist)" in e.message): LOG.info('No snapmirror relationship to delete') exc_context.reraise = False if release: # If the source is unreachable, do not perform the release try: src_client = get_client_for_backend(src_backend, vserver_name=src_vserver) except Exception: src_client = None # 3. Cleanup SnapMirror relationship on source try: if src_client: src_client.release_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name) except netapp_api.NaApiError as e: with excutils.save_and_reraise_exception() as exc_context: if (e.code == netapp_api.EOBJECTNOTFOUND or e.code == netapp_api.ESOURCE_IS_DIFFERENT or "(entry doesn't exist)" in e.message): # Handle the case where the snapmirror is already # cleaned up exc_context.reraise = False def update_snapmirror(self, source_share_obj, dest_share_obj): """Schedule a snapmirror update to happen on the backend.""" dest_volume_name, dest_vserver, dest_backend = ( self.get_backend_info_for_share(dest_share_obj)) dest_client = get_client_for_backend(dest_backend, vserver_name=dest_vserver) src_volume_name, src_vserver, __ = self.get_backend_info_for_share( source_share_obj) # Update SnapMirror dest_client.update_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name) def quiesce_then_abort(self, source_share_obj, dest_share_obj): dest_volume_name, dest_vserver, dest_backend = ( self.get_backend_info_for_share(dest_share_obj)) dest_client = get_client_for_backend(dest_backend, vserver_name=dest_vserver) src_volume_name, src_vserver, __ = self.get_backend_info_for_share( source_share_obj) # 1. Attempt to quiesce, then abort dest_client.quiesce_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name) config = get_backend_configuration(share_utils.extract_host( source_share_obj['host'], level='backend_name')) retries = config.netapp_snapmirror_quiesce_timeout / 5 @utils.retry(exception.ReplicationException, interval=5, retries=retries, backoff_rate=1) def wait_for_quiesced(): snapmirror = dest_client.get_snapmirrors( src_vserver, src_volume_name, dest_vserver, dest_volume_name, desired_attributes=['relationship-status', 'mirror-state'] )[0] if snapmirror.get('relationship-status') != 'quiesced': raise exception.ReplicationException( reason=("Snapmirror relationship is not quiesced.")) try: wait_for_quiesced() except exception.ReplicationException: dest_client.abort_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name, clear_checkpoint=False) def break_snapmirror(self, source_share_obj, dest_share_obj, mount=True): """Breaks SnapMirror relationship. 1. Quiesce any ongoing snapmirror transfers 2. Wait until snapmirror finishes transfers and enters quiesced state 3. Break snapmirror 4. Mount the destination volume so it is exported as a share """ dest_volume_name, dest_vserver, dest_backend = ( self.get_backend_info_for_share(dest_share_obj)) dest_client = get_client_for_backend(dest_backend, vserver_name=dest_vserver) src_volume_name, src_vserver, __ = self.get_backend_info_for_share( source_share_obj) # 1. Attempt to quiesce, then abort self.quiesce_then_abort(source_share_obj, dest_share_obj) # 2. 
Break SnapMirror dest_client.break_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name) # 3. Mount the destination volume and create a junction path if mount: dest_client.mount_volume(dest_volume_name) def resync_snapmirror(self, source_share_obj, dest_share_obj): """Resync SnapMirror relationship. """ dest_volume_name, dest_vserver, dest_backend = ( self.get_backend_info_for_share(dest_share_obj)) dest_client = get_client_for_backend(dest_backend, vserver_name=dest_vserver) src_volume_name, src_vserver, __ = self.get_backend_info_for_share( source_share_obj) dest_client.resync_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name) def resume_snapmirror(self, source_share_obj, dest_share_obj): """Resume SnapMirror relationship from a quiesced state.""" dest_volume_name, dest_vserver, dest_backend = ( self.get_backend_info_for_share(dest_share_obj)) dest_client = get_client_for_backend(dest_backend, vserver_name=dest_vserver) src_volume_name, src_vserver, __ = self.get_backend_info_for_share( source_share_obj) dest_client.resume_snapmirror(src_vserver, src_volume_name, dest_vserver, dest_volume_name) def change_snapmirror_source(self, replica, orig_source_replica, new_source_replica, replica_list): """Creates SnapMirror relationship from the new source to destination. 1. Delete all snapmirrors involving the replica, but maintain snapmirror metadata and snapshots for efficiency 2. For DHSS=True scenarios, creates a new vserver peer relationship if it does not exists 3. Ensure a new source -> replica snapmirror exists 4. Resync new source -> replica snapmirror relationship """ replica_volume_name, replica_vserver, replica_backend = ( self.get_backend_info_for_share(replica)) replica_client = get_client_for_backend(replica_backend, vserver_name=replica_vserver) new_src_volume_name, new_src_vserver, new_src_backend = ( self.get_backend_info_for_share(new_source_replica)) # 1. delete for other_replica in replica_list: if other_replica['id'] == replica['id']: continue # We need to delete ALL snapmirror relationships # involving this replica but do not remove snapmirror metadata # so that the new snapmirror relationship is efficient. self.delete_snapmirror(other_replica, replica, release=False) self.delete_snapmirror(replica, other_replica, release=False) # 2. vserver operations when driver handles share servers replica_config = get_backend_configuration(replica_backend) if (replica_config.driver_handles_share_servers and replica_vserver != new_src_vserver): # create vserver peering if does not exists if not replica_client.get_vserver_peers(replica_vserver, new_src_vserver): new_src_client = get_client_for_backend( new_src_backend, vserver_name=new_src_vserver) # Cluster name is needed for setting up the vserver peering new_src_cluster_name = new_src_client.get_cluster_name() replica_cluster_name = replica_client.get_cluster_name() replica_client.create_vserver_peer( replica_vserver, new_src_vserver, peer_cluster_name=new_src_cluster_name) if new_src_cluster_name != replica_cluster_name: new_src_client.accept_vserver_peer(new_src_vserver, replica_vserver) # 3. create # TODO(ameade): Update the schedule if needed. replica_client.create_snapmirror(new_src_vserver, new_src_volume_name, replica_vserver, replica_volume_name, schedule='hourly') # 4. 
resync replica_client.resync_snapmirror(new_src_vserver, new_src_volume_name, replica_vserver, replica_volume_name) @na_utils.trace def remove_qos_on_old_active_replica(self, orig_active_replica): old_active_replica_qos_policy = ( self._get_backend_qos_policy_group_name(orig_active_replica) ) replica_volume_name, replica_vserver, replica_backend = ( self.get_backend_info_for_share(orig_active_replica)) replica_client = get_client_for_backend( replica_backend, vserver_name=replica_vserver) try: replica_client.set_qos_policy_group_for_volume( replica_volume_name, 'none') replica_client.mark_qos_policy_group_for_deletion( old_active_replica_qos_policy) except exception.StorageCommunicationException: LOG.exception("Could not communicate with the backend " "for replica %s to unset QoS policy and mark " "the QoS policy group for deletion.", orig_active_replica['id']) manila-10.0.0/manila/share/drivers/netapp/dataontap/cluster_mode/performance.py0000664000175000017500000004164113656750227027721 0ustar zuulzuul00000000000000# Copyright (c) 2016 Clinton Knight # All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Performance metrics functions and cache for NetApp systems. """ import copy from oslo_log import log as logging from manila import exception from manila.i18n import _ from manila.share.drivers.netapp.dataontap.client import api as netapp_api LOG = logging.getLogger(__name__) DEFAULT_UTILIZATION = 50 class PerformanceLibrary(object): def __init__(self, zapi_client): self.zapi_client = zapi_client self.performance_counters = {} self.pool_utilization = {} self._init_counter_info() def _init_counter_info(self): """Set a few counter names based on Data ONTAP version.""" self.system_object_name = None self.avg_processor_busy_base_counter_name = None try: if self.zapi_client.features.SYSTEM_CONSTITUENT_METRICS: self.system_object_name = 'system:constituent' self.avg_processor_busy_base_counter_name = ( self._get_base_counter_name('system:constituent', 'avg_processor_busy')) elif self.zapi_client.features.SYSTEM_METRICS: self.system_object_name = 'system' self.avg_processor_busy_base_counter_name = ( self._get_base_counter_name('system', 'avg_processor_busy')) except netapp_api.NaApiError: if self.zapi_client.features.SYSTEM_CONSTITUENT_METRICS: self.avg_processor_busy_base_counter_name = 'cpu_elapsed_time' else: self.avg_processor_busy_base_counter_name = 'cpu_elapsed_time1' LOG.exception('Could not get performance base counter ' 'name. 
Performance-based scheduler ' 'functions may not be available.') def update_performance_cache(self, flexvol_pools, aggregate_pools): """Called periodically to update per-pool node utilization metrics.""" # Nothing to do on older systems if not (self.zapi_client.features.SYSTEM_METRICS or self.zapi_client.features.SYSTEM_CONSTITUENT_METRICS): return # Get aggregates and nodes for all known pools aggr_names = self._get_aggregates_for_pools(flexvol_pools, aggregate_pools) node_names, aggr_node_map = self._get_nodes_for_aggregates(aggr_names) # Update performance counter cache for each node node_utilization = {} for node_name in node_names: if node_name not in self.performance_counters: self.performance_counters[node_name] = [] # Get new performance counters and save only the last 10 counters = self._get_node_utilization_counters(node_name) if not counters: continue self.performance_counters[node_name].append(counters) self.performance_counters[node_name] = ( self.performance_counters[node_name][-10:]) # Update utilization for each node using newest & oldest sample counters = self.performance_counters[node_name] if len(counters) < 2: node_utilization[node_name] = DEFAULT_UTILIZATION else: node_utilization[node_name] = self._get_node_utilization( counters[0], counters[-1], node_name) # Update pool utilization map atomically pool_utilization = {} all_pools = copy.deepcopy(flexvol_pools) all_pools.update(aggregate_pools) for pool_name, pool_info in all_pools.items(): aggr_name = pool_info.get('netapp_aggregate', 'unknown') node_name = aggr_node_map.get(aggr_name) if node_name: pool_utilization[pool_name] = node_utilization.get( node_name, DEFAULT_UTILIZATION) else: pool_utilization[pool_name] = DEFAULT_UTILIZATION self.pool_utilization = pool_utilization def get_node_utilization_for_pool(self, pool_name): """Get the node utilization for the specified pool, if available.""" return self.pool_utilization.get(pool_name, DEFAULT_UTILIZATION) def update_for_failover(self, zapi_client, flexvol_pools, aggregate_pools): """Change API client after a whole-backend failover event.""" self.zapi_client = zapi_client self.update_performance_cache(flexvol_pools, aggregate_pools) def _get_aggregates_for_pools(self, flexvol_pools, aggregate_pools): """Get the set of aggregates that contain the specified pools.""" aggr_names = set() for pool_name, pool_info in aggregate_pools.items(): aggr_names.add(pool_info.get('netapp_aggregate')) for pool_name, pool_info in flexvol_pools.items(): aggr_names.add(pool_info.get('netapp_aggregate')) return list(aggr_names) def _get_nodes_for_aggregates(self, aggr_names): """Get the cluster nodes that own the specified aggregates.""" node_names = set() aggr_node_map = {} for aggr_name in aggr_names: node_name = self.zapi_client.get_node_for_aggregate(aggr_name) if node_name: node_names.add(node_name) aggr_node_map[aggr_name] = node_name return list(node_names), aggr_node_map def _get_node_utilization(self, counters_t1, counters_t2, node_name): """Get node utilization from two sets of performance counters.""" try: # Time spent in the single-threaded Kahuna domain kahuna_percent = self._get_kahuna_utilization(counters_t1, counters_t2) # If Kahuna is using >60% of the CPU, the controller is fully busy if kahuna_percent > 60: return 100.0 # Average CPU busyness across all processors avg_cpu_percent = 100.0 * self._get_average_cpu_utilization( counters_t1, counters_t2) # Total Consistency Point (CP) time total_cp_time_msec = self._get_total_consistency_point_time( counters_t1, 
counters_t2) # Time spent in CP Phase 2 (buffer flush) p2_flush_time_msec = self._get_consistency_point_p2_flush_time( counters_t1, counters_t2) # Wall-clock time between the two counter sets poll_time_msec = self._get_total_time(counters_t1, counters_t2, 'total_cp_msecs') # If two polls happened in quick succession, use CPU utilization if total_cp_time_msec == 0 or poll_time_msec == 0: return max(min(100.0, avg_cpu_percent), 0) # Adjusted Consistency Point time adjusted_cp_time_msec = self._get_adjusted_consistency_point_time( total_cp_time_msec, p2_flush_time_msec) adjusted_cp_percent = (100.0 * adjusted_cp_time_msec / poll_time_msec) # Utilization is the greater of CPU busyness & CP time node_utilization = max(avg_cpu_percent, adjusted_cp_percent) return max(min(100.0, node_utilization), 0) except Exception: LOG.exception('Could not calculate node utilization for ' 'node %s.', node_name) return DEFAULT_UTILIZATION def _get_kahuna_utilization(self, counters_t1, counters_t2): """Get time spent in the single-threaded Kahuna domain.""" # Note(cknight): Because Kahuna is single-threaded, running only on # one CPU at a time, we can safely sum the Kahuna CPU usage # percentages across all processors in a node. return sum(self._get_performance_counter_average_multi_instance( counters_t1, counters_t2, 'domain_busy:kahuna', 'processor_elapsed_time')) * 100.0 def _get_average_cpu_utilization(self, counters_t1, counters_t2): """Get average CPU busyness across all processors.""" return self._get_performance_counter_average( counters_t1, counters_t2, 'avg_processor_busy', self.avg_processor_busy_base_counter_name) def _get_total_consistency_point_time(self, counters_t1, counters_t2): """Get time spent in Consistency Points in msecs.""" return float(self._get_performance_counter_delta( counters_t1, counters_t2, 'total_cp_msecs')) def _get_consistency_point_p2_flush_time(self, counters_t1, counters_t2): """Get time spent in CP Phase 2 (buffer flush) in msecs.""" return float(self._get_performance_counter_delta( counters_t1, counters_t2, 'cp_phase_times:p2_flush')) def _get_total_time(self, counters_t1, counters_t2, counter_name): """Get wall clock time between two successive counters in msecs.""" timestamp_t1 = float(self._find_performance_counter_timestamp( counters_t1, counter_name)) timestamp_t2 = float(self._find_performance_counter_timestamp( counters_t2, counter_name)) return (timestamp_t2 - timestamp_t1) * 1000.0 def _get_adjusted_consistency_point_time(self, total_cp_time, p2_flush_time): """Get adjusted CP time by limiting CP phase 2 flush time to 20%.""" return (total_cp_time - p2_flush_time) * 1.20 def _get_performance_counter_delta(self, counters_t1, counters_t2, counter_name): """Calculate a delta value from two performance counters.""" counter_t1 = int( self._find_performance_counter_value(counters_t1, counter_name)) counter_t2 = int( self._find_performance_counter_value(counters_t2, counter_name)) return counter_t2 - counter_t1 def _get_performance_counter_average(self, counters_t1, counters_t2, counter_name, base_counter_name, instance_name=None): """Calculate an average value from two performance counters.""" counter_t1 = float(self._find_performance_counter_value( counters_t1, counter_name, instance_name)) counter_t2 = float(self._find_performance_counter_value( counters_t2, counter_name, instance_name)) base_counter_t1 = float(self._find_performance_counter_value( counters_t1, base_counter_name, instance_name)) base_counter_t2 = float(self._find_performance_counter_value( 
counters_t2, base_counter_name, instance_name)) return (counter_t2 - counter_t1) / (base_counter_t2 - base_counter_t1) def _get_performance_counter_average_multi_instance(self, counters_t1, counters_t2, counter_name, base_counter_name): """Calculate an average value from multiple counter instances.""" averages = [] instance_names = [] for counter in counters_t1: if counter_name in counter: instance_names.append(counter['instance-name']) for instance_name in instance_names: average = self._get_performance_counter_average( counters_t1, counters_t2, counter_name, base_counter_name, instance_name) averages.append(average) return averages def _find_performance_counter_value(self, counters, counter_name, instance_name=None): """Given a counter set, return the value of a named instance.""" for counter in counters: if counter_name in counter: if (instance_name is None or counter['instance-name'] == instance_name): return counter[counter_name] else: raise exception.NotFound(_('Counter %s not found') % counter_name) def _find_performance_counter_timestamp(self, counters, counter_name, instance_name=None): """Given a counter set, return the timestamp of a named instance.""" for counter in counters: if counter_name in counter: if (instance_name is None or counter['instance-name'] == instance_name): return counter['timestamp'] else: raise exception.NotFound(_('Counter %s not found') % counter_name) def _expand_performance_array(self, object_name, counter_name, counter): """Get array labels and expand counter data array.""" # Get array labels for counter value counter_info = self.zapi_client.get_performance_counter_info( object_name, counter_name) array_labels = [counter_name + ':' + label.lower() for label in counter_info['labels']] array_values = counter[counter_name].split(',') # Combine labels and values, and then mix into existing counter array_data = dict(zip(array_labels, array_values)) counter.update(array_data) def _get_base_counter_name(self, object_name, counter_name): """Get the name of the base counter for the specified counter.""" counter_info = self.zapi_client.get_performance_counter_info( object_name, counter_name) return counter_info['base-counter'] def _get_node_utilization_counters(self, node_name): """Get all performance counters for calculating node utilization.""" try: return (self._get_node_utilization_system_counters(node_name) + self._get_node_utilization_wafl_counters(node_name) + self._get_node_utilization_processor_counters(node_name)) except netapp_api.NaApiError: LOG.exception('Could not get utilization counters from node ' '%s', node_name) return None def _get_node_utilization_system_counters(self, node_name): """Get the system counters for calculating node utilization.""" system_instance_uuids = ( self.zapi_client.get_performance_instance_uuids( self.system_object_name, node_name)) system_counter_names = [ 'avg_processor_busy', self.avg_processor_busy_base_counter_name, ] if 'cpu_elapsed_time1' in system_counter_names: system_counter_names.append('cpu_elapsed_time') system_counters = self.zapi_client.get_performance_counters( self.system_object_name, system_instance_uuids, system_counter_names) return system_counters def _get_node_utilization_wafl_counters(self, node_name): """Get the WAFL counters for calculating node utilization.""" wafl_instance_uuids = self.zapi_client.get_performance_instance_uuids( 'wafl', node_name) wafl_counter_names = ['total_cp_msecs', 'cp_phase_times'] wafl_counters = self.zapi_client.get_performance_counters( 'wafl', wafl_instance_uuids, 
wafl_counter_names) # Expand array data so we can use wafl:cp_phase_times[P2_FLUSH] for counter in wafl_counters: if 'cp_phase_times' in counter: self._expand_performance_array( 'wafl', 'cp_phase_times', counter) return wafl_counters def _get_node_utilization_processor_counters(self, node_name): """Get the processor counters for calculating node utilization.""" processor_instance_uuids = ( self.zapi_client.get_performance_instance_uuids('processor', node_name)) processor_counter_names = ['domain_busy', 'processor_elapsed_time'] processor_counters = self.zapi_client.get_performance_counters( 'processor', processor_instance_uuids, processor_counter_names) # Expand array data so we can use processor:domain_busy[kahuna] for counter in processor_counters: if 'domain_busy' in counter: self._expand_performance_array( 'processor', 'domain_busy', counter) return processor_counters manila-10.0.0/manila/share/drivers/netapp/dataontap/cluster_mode/lib_single_svm.py0000664000175000017500000001433013656750227030407 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ NetApp Data ONTAP cDOT single-SVM storage driver library. This library extends the abstract base library and completes the single-SVM functionality needed by the cDOT single-SVM Manila driver. This library variant uses a single Data ONTAP storage virtual machine (i.e. 'vserver') as defined in manila.conf to provision shares. """ import re from oslo_log import log from manila import exception from manila.i18n import _ from manila.share.drivers.netapp.dataontap.cluster_mode import lib_base from manila.share.drivers.netapp import utils as na_utils LOG = log.getLogger(__name__) class NetAppCmodeSingleSVMFileStorageLibrary( lib_base.NetAppCmodeFileStorageLibrary): def __init__(self, driver_name, **kwargs): super(NetAppCmodeSingleSVMFileStorageLibrary, self).__init__( driver_name, **kwargs) self._vserver = self.configuration.netapp_vserver @na_utils.trace def check_for_setup_error(self): # Ensure vserver is specified in configuration. if not self._vserver: msg = _('Vserver must be specified in the configuration ' 'when the driver is not managing share servers.') raise exception.InvalidInput(reason=msg) # Ensure vserver exists. if not self._client.vserver_exists(self._vserver): raise exception.VserverNotFound(vserver=self._vserver) # If we have vserver credentials, ensure the vserver they connect # to matches the vserver specified in the configuration. if not self._have_cluster_creds: if self._vserver not in self._client.list_vservers(): msg = _('Vserver specified in the configuration does not ' 'match supplied credentials.') raise exception.InvalidInput(reason=msg) # Ensure one or more aggregates are available to the vserver. if not self._find_matching_aggregates(): msg = _('No aggregates are available to Vserver %s for ' 'provisioning shares. 
Ensure that one or more aggregates ' 'are assigned to the Vserver and that the configuration ' 'option netapp_aggregate_name_search_pattern is set ' 'correctly.') % self._vserver raise exception.NetAppException(msg) msg = ('Using Vserver %(vserver)s for backend %(backend)s with ' '%(creds)s credentials.') msg_args = {'vserver': self._vserver, 'backend': self._backend_name} msg_args['creds'] = ('cluster' if self._have_cluster_creds else 'Vserver') LOG.info(msg, msg_args) (super(NetAppCmodeSingleSVMFileStorageLibrary, self). check_for_setup_error()) @na_utils.trace def _get_vserver(self, share_server=None): if share_server is not None: msg = _('Share server must not be passed to the driver ' 'when the driver is not managing share servers.') raise exception.InvalidParameterValue(err=msg) if not self._vserver: msg = _('Vserver not specified in configuration.') raise exception.InvalidInput(reason=msg) if not self._client.vserver_exists(self._vserver): raise exception.VserverNotFound(vserver=self._vserver) vserver_client = self._get_api_client(self._vserver) return self._vserver, vserver_client def _get_ems_pool_info(self): return { 'pools': { 'vserver': self._vserver, 'aggregates': self._find_matching_aggregates(), }, } @na_utils.trace def _handle_housekeeping_tasks(self): """Handle various cleanup activities.""" vserver_client = self._get_api_client(vserver=self._vserver) vserver_client.prune_deleted_nfs_export_policies() vserver_client.prune_deleted_snapshots() if self._have_cluster_creds: # Harvest soft-deleted QoS policy groups vserver_client.remove_unused_qos_policy_groups() (super(NetAppCmodeSingleSVMFileStorageLibrary, self). _handle_housekeeping_tasks()) @na_utils.trace def _find_matching_aggregates(self): """Find all aggregates match pattern.""" vserver_client = self._get_api_client(vserver=self._vserver) aggregate_names = vserver_client.list_vserver_aggregates() root_aggregate_names = [] if self._have_cluster_creds: root_aggregate_names = self._client.list_root_aggregates() pattern = self.configuration.netapp_aggregate_name_search_pattern return [aggr_name for aggr_name in aggregate_names if re.match(pattern, aggr_name) and aggr_name not in root_aggregate_names] @na_utils.trace def get_network_allocations_number(self): """Get number of network interfaces to be created.""" return 0 @na_utils.trace def get_admin_network_allocations_number(self): """Get number of network allocations for creating admin LIFs.""" return 0 @na_utils.trace def get_configured_ip_versions(self): ipv4 = False ipv6 = False vserver_client = self._get_api_client(vserver=self._vserver) interfaces = vserver_client.get_network_interfaces() for interface in interfaces: address = interface['address'] if ':' in address: ipv6 = True else: ipv4 = True versions = [] if ipv4: versions.append(4) if ipv6: versions.append(6) return versions manila-10.0.0/manila/share/drivers/netapp/dataontap/cluster_mode/drv_single_svm.py0000664000175000017500000003174013656750227030440 0ustar zuulzuul00000000000000# Copyright (c) 2015 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ NetApp Data ONTAP cDOT single-SVM storage driver. This driver requires a Data ONTAP (Cluster-mode) storage system with installed CIFS and/or NFS licenses, as well as a FlexClone license. This driver does not manage share servers, meaning it uses a single Data ONTAP storage virtual machine (i.e. 'vserver') as defined in manila.conf to provision shares. This driver supports NFS & CIFS protocols. """ from manila.share import driver from manila.share.drivers.netapp.dataontap.cluster_mode import lib_single_svm class NetAppCmodeSingleSvmShareDriver(driver.ShareDriver): """NetApp Cluster-mode single-SVM share driver.""" DRIVER_NAME = 'NetApp_Cluster_SingleSVM' def __init__(self, *args, **kwargs): super(NetAppCmodeSingleSvmShareDriver, self).__init__( False, *args, **kwargs) self.library = lib_single_svm.NetAppCmodeSingleSVMFileStorageLibrary( self.DRIVER_NAME, **kwargs) def do_setup(self, context): self.library.do_setup(context) def check_for_setup_error(self): self.library.check_for_setup_error() def get_pool(self, share): return self.library.get_pool(share) def create_share(self, context, share, **kwargs): return self.library.create_share(context, share, **kwargs) def create_share_from_snapshot(self, context, share, snapshot, **kwargs): return self.library.create_share_from_snapshot(context, share, snapshot, **kwargs) def create_snapshot(self, context, snapshot, **kwargs): return self.library.create_snapshot(context, snapshot, **kwargs) def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, **kwargs): return self.library.revert_to_snapshot(context, snapshot, **kwargs) def delete_share(self, context, share, **kwargs): self.library.delete_share(context, share, **kwargs) def delete_snapshot(self, context, snapshot, **kwargs): self.library.delete_snapshot(context, snapshot, **kwargs) def extend_share(self, share, new_size, **kwargs): self.library.extend_share(share, new_size, **kwargs) def shrink_share(self, share, new_size, **kwargs): self.library.shrink_share(share, new_size, **kwargs) def manage_existing(self, share, driver_options): return self.library.manage_existing(share, driver_options) def unmanage(self, share): self.library.unmanage(share) def manage_existing_snapshot(self, snapshot, driver_options): return self.library.manage_existing_snapshot(snapshot, driver_options) def unmanage_snapshot(self, snapshot): self.library.unmanage_snapshot(snapshot) def manage_existing_with_server( self, share, driver_options, share_server=None): raise NotImplementedError def unmanage_with_server(self, share, share_server=None): raise NotImplementedError def manage_existing_snapshot_with_server( self, snapshot, driver_options, share_server=None): raise NotImplementedError def unmanage_snapshot_with_server(self, snapshot, share_server=None): raise NotImplementedError def update_access(self, context, share, access_rules, add_rules, delete_rules, **kwargs): self.library.update_access(context, share, access_rules, add_rules, delete_rules, **kwargs) def _update_share_stats(self, data=None): data = self.library.get_share_stats( filter_function=self.get_filter_function(), goodness_function=self.get_goodness_function()) super(NetAppCmodeSingleSvmShareDriver, self)._update_share_stats( data=data) def get_default_filter_function(self): return self.library.get_default_filter_function() def get_default_goodness_function(self): return self.library.get_default_goodness_function() 
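    # Illustrative configuration sketch (not part of the shipped driver):
    # the module docstring above notes that this variant provisions shares
    # on a single pre-configured vserver named in manila.conf. A minimal
    # backend stanza might look roughly like the following; the stanza name,
    # the share_driver entry point, and the values shown are assumptions for
    # illustration rather than defaults taken from this file:
    #
    #   [netapp_single_svm]
    #   share_driver = manila.share.drivers.netapp.common.NetAppDriver
    #   driver_handles_share_servers = False
    #   netapp_storage_family = ontap_cluster
    #   netapp_server_hostname = cluster-mgmt.example.com
    #   netapp_transport_type = https
    #   netapp_login = admin
    #   netapp_password = <secret>
    #   netapp_vserver = manila_svm_01
    #   netapp_aggregate_name_search_pattern = (.*)
    #
    # The netapp_* option names themselves are defined in
    # manila/share/drivers/netapp/options.py later in this tree.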
def get_share_server_pools(self, share_server): return self.library.get_share_server_pools(share_server) def get_network_allocations_number(self): return self.library.get_network_allocations_number() def get_admin_network_allocations_number(self): return self.library.get_admin_network_allocations_number() def _setup_server(self, network_info, metadata=None): return self.library.setup_server(network_info, metadata) def _teardown_server(self, server_details, **kwargs): self.library.teardown_server(server_details, **kwargs) def create_replica(self, context, replica_list, replica, access_rules, replica_snapshots, **kwargs): return self.library.create_replica(context, replica_list, replica, access_rules, replica_snapshots, **kwargs) def delete_replica(self, context, replica_list, replica_snapshots, replica, **kwargs): self.library.delete_replica(context, replica_list, replica, replica_snapshots, **kwargs) def promote_replica(self, context, replica_list, replica, access_rules, share_server=None): return self.library.promote_replica(context, replica_list, replica, access_rules, share_server=share_server) def update_replica_state(self, context, replica_list, replica, access_rules, replica_snapshots, share_server=None): return self.library.update_replica_state(context, replica_list, replica, access_rules, replica_snapshots, share_server=share_server) def create_replicated_snapshot(self, context, replica_list, replica_snapshots, share_server=None): return self.library.create_replicated_snapshot( context, replica_list, replica_snapshots, share_server=share_server) def delete_replicated_snapshot(self, context, replica_list, replica_snapshots, share_server=None): return self.library.delete_replicated_snapshot( context, replica_list, replica_snapshots, share_server=share_server) def update_replicated_snapshot(self, context, replica_list, share_replica, replica_snapshots, replica_snapshot, share_server=None): return self.library.update_replicated_snapshot( replica_list, share_replica, replica_snapshots, replica_snapshot, share_server=share_server) def revert_to_replicated_snapshot(self, context, active_replica, replica_list, active_replica_snapshot, replica_snapshots, share_access_rules, snapshot_access_rules, **kwargs): return self.library.revert_to_replicated_snapshot( context, active_replica, replica_list, active_replica_snapshot, replica_snapshots, **kwargs) def migration_check_compatibility(self, context, source_share, destination_share, share_server=None, destination_share_server=None): return self.library.migration_check_compatibility( context, source_share, destination_share, share_server=share_server, destination_share_server=destination_share_server) def migration_start(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): return self.library.migration_start( context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=share_server, destination_share_server=destination_share_server) def migration_continue(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): return self.library.migration_continue( context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=share_server, destination_share_server=destination_share_server) def migration_get_progress(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, 
destination_share_server=None): return self.library.migration_get_progress( context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=share_server, destination_share_server=destination_share_server) def migration_cancel(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): return self.library.migration_cancel( context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=share_server, destination_share_server=destination_share_server) def migration_complete(self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): return self.library.migration_complete( context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=share_server, destination_share_server=destination_share_server) def create_share_group_snapshot(self, context, snap_dict, share_server=None): fallback_create = super(NetAppCmodeSingleSvmShareDriver, self).create_share_group_snapshot return self.library.create_group_snapshot(context, snap_dict, fallback_create, share_server) def delete_share_group_snapshot(self, context, snap_dict, share_server=None): fallback_delete = super(NetAppCmodeSingleSvmShareDriver, self).delete_share_group_snapshot return self.library.delete_group_snapshot(context, snap_dict, fallback_delete, share_server) def create_share_group_from_share_group_snapshot( self, context, share_group_dict, snapshot_dict, share_server=None): fallback_create = super( NetAppCmodeSingleSvmShareDriver, self).create_share_group_from_share_group_snapshot return self.library.create_group_from_snapshot(context, share_group_dict, snapshot_dict, fallback_create, share_server) def get_configured_ip_versions(self): return self.library.get_configured_ip_versions() def get_backend_info(self, context): return self.library.get_backend_info(context) def ensure_shares(self, context, shares): return self.library.ensure_shares(context, shares) def get_share_server_network_info( self, context, share_server, identifier, driver_options): raise NotImplementedError def manage_server(self, context, share_server, identifier, driver_options): raise NotImplementedError def unmanage_server(self, server_details, security_services=None): raise NotImplementedError def get_share_status(self, share_instance, share_server=None): return self.library.get_share_status(share_instance, share_server) manila-10.0.0/manila/share/drivers/netapp/options.py0000664000175000017500000002016113656750227022445 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Contains configuration options for NetApp drivers. Common place to hold configuration options for all NetApp drivers. Options need to be grouped into granular units to be able to be reused by different modules and classes. This does not restrict declaring options in individual modules. If options are not re usable then can be declared in individual modules. 
It is recommended to Keep options at a single place to ensure re usability and better management of configuration options. """ from oslo_config import cfg netapp_proxy_opts = [ cfg.StrOpt('netapp_storage_family', default='ontap_cluster', help=('The storage family type used on the storage system; ' 'valid values include ontap_cluster for using ' 'clustered Data ONTAP.')), ] netapp_connection_opts = [ cfg.HostAddressOpt('netapp_server_hostname', deprecated_name='netapp_nas_server_hostname', help='The hostname (or IP address) for the storage ' 'system.'), cfg.PortOpt('netapp_server_port', help=('The TCP port to use for communication with the storage ' 'system or proxy server. If not specified, Data ONTAP ' 'drivers will use 80 for HTTP and 443 for HTTPS.')), ] netapp_transport_opts = [ cfg.StrOpt('netapp_transport_type', deprecated_name='netapp_nas_transport_type', default='http', help=('The transport protocol used when communicating with ' 'the storage system or proxy server. Valid values are ' 'http or https.')), ] netapp_basicauth_opts = [ cfg.StrOpt('netapp_login', deprecated_name='netapp_nas_login', help=('Administrative user account name used to access the ' 'storage system.')), cfg.StrOpt('netapp_password', deprecated_name='netapp_nas_password', help=('Password for the administrative user account ' 'specified in the netapp_login option.'), secret=True), ] netapp_provisioning_opts = [ cfg.ListOpt('netapp_enabled_share_protocols', default=['nfs3', 'nfs4.0'], help='The NFS protocol versions that will be enabled. ' 'Supported values include nfs3, nfs4.0, nfs4.1. This ' 'option only applies when the option ' 'driver_handles_share_servers is set to True. '), cfg.StrOpt('netapp_volume_name_template', deprecated_name='netapp_nas_volume_name_template', help='NetApp volume name template.', default='share_%(share_id)s'), cfg.StrOpt('netapp_vserver_name_template', default='os_%s', help='Name template to use for new Vserver. ' 'When using CIFS protocol make sure to not ' 'configure characters illegal in DNS hostnames.'), cfg.StrOpt('netapp_qos_policy_group_name_template', help='NetApp QoS policy group name template.', default='qos_share_%(share_id)s'), cfg.StrOpt('netapp_port_name_search_pattern', default='(.*)', help='Pattern for overriding the selection of network ports ' 'on which to create Vserver LIFs.'), cfg.StrOpt('netapp_lif_name_template', default='os_%(net_allocation_id)s', help='Logical interface (LIF) name template'), cfg.StrOpt('netapp_aggregate_name_search_pattern', default='(.*)', help='Pattern for searching available aggregates ' 'for provisioning.'), cfg.StrOpt('netapp_root_volume_aggregate', help='Name of aggregate to create Vserver root volumes on. ' 'This option only applies when the option ' 'driver_handles_share_servers is set to True.'), cfg.StrOpt('netapp_root_volume', deprecated_name='netapp_root_volume_name', default='root', help='Root volume name.'), cfg.IntOpt('netapp_volume_snapshot_reserve_percent', min=0, max=90, default=5, help='The percentage of share space set aside as reserve for ' 'snapshot usage; valid values range from 0 to 90.'), cfg.StrOpt('netapp_reset_snapdir_visibility', choices=['visible', 'hidden', 'default'], default="default", help="This option forces all existing shares to have their " "snapshot directory visibility set to either 'visible' or " "'hidden' during driver startup. If set to 'default', " "nothing will be changed during startup. 
This will not " "affect new shares, which will have their snapshot " "directory always visible, unless toggled by the share " "type extra spec 'netapp:hide_snapdir'."), ] netapp_cluster_opts = [ cfg.StrOpt('netapp_vserver', help=('This option specifies the Storage Virtual Machine ' '(i.e. Vserver) name on the storage cluster on which ' 'provisioning of file storage shares should occur. This ' 'option should only be specified when the option ' 'driver_handles_share_servers is set to False (i.e. the ' 'driver is managing shares on a single pre-configured ' 'Vserver).')), ] netapp_support_opts = [ cfg.StrOpt('netapp_trace_flags', help=('Comma-separated list of options that control which ' 'trace info is written to the debug logs. Values ' 'include method and api. API logging can further be ' 'filtered with the ' '``netapp_api_trace_pattern option``.')), cfg.StrOpt('netapp_api_trace_pattern', default='(.*)', help=('A regular expression to limit the API tracing. This ' 'option is honored only if enabling ``api`` tracing ' 'with the ``netapp_trace_flags`` option. By default, ' 'all APIs will be traced.')), ] netapp_data_motion_opts = [ cfg.IntOpt('netapp_snapmirror_quiesce_timeout', min=0, default=3600, # One Hour help='The maximum time in seconds to wait for existing ' 'snapmirror transfers to complete before aborting when ' 'promoting a replica.'), cfg.IntOpt('netapp_volume_move_cutover_timeout', min=0, default=3600, # One Hour, help='The maximum time in seconds to wait for the completion ' 'of a volume move operation after the cutover ' 'was triggered.'), cfg.IntOpt('netapp_start_volume_move_timeout', min=0, default=3600, # One Hour, help='The maximum time in seconds to wait for the completion ' 'of a volume clone split operation in order to start a ' 'volume move.'), ] CONF = cfg.CONF CONF.register_opts(netapp_proxy_opts) CONF.register_opts(netapp_connection_opts) CONF.register_opts(netapp_transport_opts) CONF.register_opts(netapp_basicauth_opts) CONF.register_opts(netapp_provisioning_opts) CONF.register_opts(netapp_support_opts) CONF.register_opts(netapp_data_motion_opts) manila-10.0.0/manila/share/drivers/cephfs/0000775000175000017500000000000013656750362020361 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/cephfs/__init__.py0000664000175000017500000000000013656750227022460 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/cephfs/driver.py0000664000175000017500000005732213656750227022237 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
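# Illustrative configuration sketch (an assumption, not taken from this
# module): with the native CEPHFS protocol helper defined below, a manila
# backend stanza for this driver might look roughly like the following.
# The stanza name and all values are examples only:
#
#   [cephfsnative]
#   share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
#   driver_handles_share_servers = False
#   cephfs_conf_path = /etc/ceph/ceph.conf
#   cephfs_auth_id = manila
#   cephfs_protocol_helper_type = CEPHFS
#
# The cephfs_* option names are registered from the cephfs_opts list in
# this module.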
import ipaddress import socket import sys from oslo_config import cfg from oslo_config import types from oslo_log import log from oslo_utils import units import six from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers import ganesha from manila.share.drivers.ganesha import utils as ganesha_utils from manila.share.drivers import helpers as driver_helpers from manila.share import share_types try: import ceph_volume_client ceph_module_found = True except ImportError: ceph_volume_client = None ceph_module_found = False CEPHX_ACCESS_TYPE = "cephx" # The default Ceph administrative identity CEPH_DEFAULT_AUTH_ID = "admin" DEFAULT_VOLUME_MODE = '755' LOG = log.getLogger(__name__) cephfs_opts = [ cfg.StrOpt('cephfs_conf_path', default="", help="Fully qualified path to the ceph.conf file."), cfg.StrOpt('cephfs_cluster_name', help="The name of the cluster in use, if it is not " "the default ('ceph')." ), cfg.StrOpt('cephfs_auth_id', default="manila", help="The name of the ceph auth identity to use." ), cfg.StrOpt('cephfs_volume_path_prefix', default="/volumes", help="The prefix of the cephfs volume path." ), cfg.BoolOpt('cephfs_enable_snapshots', default=False, help="Whether to enable snapshots in this driver." ), cfg.StrOpt('cephfs_protocol_helper_type', default="CEPHFS", choices=['CEPHFS', 'NFS'], ignore_case=True, help="The type of protocol helper to use. Default is " "CEPHFS." ), cfg.BoolOpt('cephfs_ganesha_server_is_remote', default=False, help="Whether the NFS-Ganesha server is remote to the driver." ), cfg.HostAddressOpt('cephfs_ganesha_server_ip', help="The IP address of the NFS-Ganesha server."), cfg.StrOpt('cephfs_ganesha_server_username', default='root', help="The username to authenticate as in the remote " "NFS-Ganesha server host."), cfg.StrOpt('cephfs_ganesha_path_to_private_key', help="The path of the driver host's private SSH key file."), cfg.StrOpt('cephfs_ganesha_server_password', secret=True, help="The password to authenticate as the user in the remote " "Ganesha server host. This is not required if " "'cephfs_ganesha_path_to_private_key' is configured."), cfg.ListOpt('cephfs_ganesha_export_ips', default='', help="List of IPs to export shares. 
If not supplied, " "then the value of 'cephfs_ganesha_server_ip' " "will be used to construct share export locations."), cfg.StrOpt('cephfs_volume_mode', default=DEFAULT_VOLUME_MODE, help="The read/write/execute permissions mode for CephFS " "volumes, snapshots, and snapshot groups expressed in " "Octal as with linux 'chmod' or 'umask' commands."), ] CONF = cfg.CONF CONF.register_opts(cephfs_opts) def cephfs_share_path(share): """Get VolumePath from Share.""" return ceph_volume_client.VolumePath( share['share_group_id'], share['id']) class CephFSDriver(driver.ExecuteMixin, driver.GaneshaMixin, driver.ShareDriver): """Driver for the Ceph Filesystem.""" def __init__(self, *args, **kwargs): super(CephFSDriver, self).__init__(False, *args, **kwargs) self.backend_name = self.configuration.safe_get( 'share_backend_name') or 'CephFS' self._volume_client = None self.configuration.append_config_values(cephfs_opts) try: self._cephfs_volume_mode = int( self.configuration.cephfs_volume_mode, 8) except ValueError: msg = _("Invalid CephFS volume mode %s") raise exception.BadConfigurationException( msg % self.configuration.cephfs_volume_mode) self.ipv6_implemented = True def do_setup(self, context): if self.configuration.cephfs_protocol_helper_type.upper() == "CEPHFS": protocol_helper_class = getattr( sys.modules[__name__], 'NativeProtocolHelper') else: protocol_helper_class = getattr( sys.modules[__name__], 'NFSProtocolHelper') self.protocol_helper = protocol_helper_class( self._execute, self.configuration, ceph_vol_client=self.volume_client) self.protocol_helper.init_helper() def check_for_setup_error(self): """Returns an error if prerequisites aren't met.""" self.protocol_helper.check_for_setup_error() def _update_share_stats(self): stats = self.volume_client.rados.get_cluster_stats() total_capacity_gb = stats['kb'] * units.Mi free_capacity_gb = stats['kb_avail'] * units.Mi data = { 'vendor_name': 'Ceph', 'driver_version': '1.0', 'share_backend_name': self.backend_name, 'storage_protocol': self.configuration.safe_get( 'cephfs_protocol_helper_type'), 'pools': [ { 'pool_name': 'cephfs', 'total_capacity_gb': total_capacity_gb, 'free_capacity_gb': free_capacity_gb, 'qos': 'False', 'reserved_percentage': 0, 'dedupe': [False], 'compression': [False], 'thin_provisioning': [False] } ], 'total_capacity_gb': total_capacity_gb, 'free_capacity_gb': free_capacity_gb, 'snapshot_support': self.configuration.safe_get( 'cephfs_enable_snapshots'), } super( # pylint: disable=no-member CephFSDriver, self)._update_share_stats(data) def _to_bytes(self, gigs): """Convert a Manila size into bytes. Manila uses gibibytes everywhere. :param gigs: integer number of gibibytes. :return: integer number of bytes. """ return gigs * units.Gi @property def volume_client(self): if self._volume_client: return self._volume_client if not ceph_module_found: raise exception.ManilaException( _("Ceph client libraries not found.") ) conf_path = self.configuration.safe_get('cephfs_conf_path') cluster_name = self.configuration.safe_get('cephfs_cluster_name') auth_id = self.configuration.safe_get('cephfs_auth_id') volume_prefix = self.configuration.safe_get( 'cephfs_volume_path_prefix') self._volume_client = ceph_volume_client.CephFSVolumeClient( auth_id, conf_path, cluster_name, volume_prefix=volume_prefix) LOG.info("[%(be)s}] Ceph client found, connecting...", {"be": self.backend_name}) if auth_id != CEPH_DEFAULT_AUTH_ID: # Evict any other manila sessions. 
Only do this if we're # using a client ID that isn't the default admin ID, to avoid # rudely disrupting anyone else. premount_evict = auth_id else: premount_evict = None try: self._volume_client.connect(premount_evict=premount_evict) except Exception: self._volume_client = None raise else: LOG.info("[%(be)s] Ceph client connection complete.", {"be": self.backend_name}) return self._volume_client def create_share(self, context, share, share_server=None): """Create a CephFS volume. :param context: A RequestContext. :param share: A Share. :param share_server: Always None for CephFS native. :return: The export locations dictionary. """ requested_proto = share['share_proto'].upper() supported_proto = ( self.configuration.cephfs_protocol_helper_type.upper()) if (requested_proto != supported_proto): msg = _("Share protocol %s is not supported.") % requested_proto raise exception.ShareBackendException(msg=msg) # `share` is a Share msg = _("create_share {be} name={id} size={size}" " share_group_id={group}") LOG.debug(msg.format( be=self.backend_name, id=share['id'], size=share['size'], group=share['share_group_id'])) extra_specs = share_types.get_extra_specs_from_share(share) data_isolated = extra_specs.get("cephfs:data_isolated", False) size = self._to_bytes(share['size']) # Create the CephFS volume cephfs_volume = self.volume_client.create_volume( cephfs_share_path(share), size=size, data_isolated=data_isolated, mode=self._cephfs_volume_mode) return self.protocol_helper.get_export_locations(share, cephfs_volume) def delete_share(self, context, share, share_server=None): extra_specs = share_types.get_extra_specs_from_share(share) data_isolated = extra_specs.get("cephfs:data_isolated", False) self.volume_client.delete_volume(cephfs_share_path(share), data_isolated=data_isolated) self.volume_client.purge_volume(cephfs_share_path(share), data_isolated=data_isolated) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): return self.protocol_helper.update_access( context, share, access_rules, add_rules, delete_rules, share_server=share_server) def ensure_share(self, context, share, share_server=None): # Creation is idempotent return self.create_share(context, share, share_server) def extend_share(self, share, new_size, share_server=None): LOG.debug("extend_share {id} {size}".format( id=share['id'], size=new_size)) self.volume_client.set_max_bytes(cephfs_share_path(share), self._to_bytes(new_size)) def shrink_share(self, share, new_size, share_server=None): LOG.debug("shrink_share {id} {size}".format( id=share['id'], size=new_size)) new_bytes = self._to_bytes(new_size) used = self.volume_client.get_used_bytes(cephfs_share_path(share)) if used > new_bytes: # While in fact we can "shrink" our volumes to less than their # used bytes (it's just a quota), raise error anyway to avoid # confusing API consumers that might depend on typical shrink # behaviour. 
raise exception.ShareShrinkingPossibleDataLoss( share_id=share['id']) self.volume_client.set_max_bytes(cephfs_share_path(share), new_bytes) def create_snapshot(self, context, snapshot, share_server=None): self.volume_client.create_snapshot_volume( cephfs_share_path(snapshot['share']), '_'.join([snapshot['snapshot_id'], snapshot['id']]), mode=self._cephfs_volume_mode) def delete_snapshot(self, context, snapshot, share_server=None): self.volume_client.destroy_snapshot_volume( cephfs_share_path(snapshot['share']), '_'.join([snapshot['snapshot_id'], snapshot['id']])) def create_share_group(self, context, sg_dict, share_server=None): self.volume_client.create_group(sg_dict['id'], mode=self._cephfs_volume_mode) def delete_share_group(self, context, sg_dict, share_server=None): self.volume_client.destroy_group(sg_dict['id']) def delete_share_group_snapshot(self, context, snap_dict, share_server=None): self.volume_client.destroy_snapshot_group( snap_dict['share_group_id'], snap_dict['id']) return None, [] def create_share_group_snapshot(self, context, snap_dict, share_server=None): self.volume_client.create_snapshot_group( snap_dict['share_group_id'], snap_dict['id'], mode=self._cephfs_volume_mode) return None, [] def __del__(self): if self._volume_client: self._volume_client.disconnect() self._volume_client = None def get_configured_ip_versions(self): return self.protocol_helper.get_configured_ip_versions() class NativeProtocolHelper(ganesha.NASHelperBase): """Helper class for native CephFS protocol""" supported_access_types = (CEPHX_ACCESS_TYPE, ) supported_access_levels = (constants.ACCESS_LEVEL_RW, constants.ACCESS_LEVEL_RO) def __init__(self, execute, config, **kwargs): self.volume_client = kwargs.pop('ceph_vol_client') super(NativeProtocolHelper, self).__init__(execute, config, **kwargs) def _init_helper(self): pass def check_for_setup_error(self): """Returns an error if prerequisites aren't met.""" return def get_export_locations(self, share, cephfs_volume): # To mount this you need to know the mon IPs and the path to the volume mon_addrs = self.volume_client.get_mon_addrs() export_location = "{addrs}:{path}".format( addrs=",".join(mon_addrs), path=cephfs_volume['mount_path']) LOG.info("Calculated export location for share %(id)s: %(loc)s", {"id": share['id'], "loc": export_location}) return { 'path': export_location, 'is_admin_only': False, 'metadata': {}, } def _allow_access(self, context, share, access, share_server=None): if access['access_type'] != CEPHX_ACCESS_TYPE: raise exception.InvalidShareAccess( reason=_("Only 'cephx' access type allowed.")) ceph_auth_id = access['access_to'] # We need to check here rather than the API or Manila Client to see # if the ceph_auth_id is the same as the one specified for Manila's # usage. This is due to the fact that the API and the Manila client # cannot read the contents of the Manila configuration file. If it # is the same, we need to error out. 
if ceph_auth_id == CONF.cephfs_auth_id: error_message = (_('Ceph authentication ID %s must be different ' 'than the one the Manila service uses.') % ceph_auth_id) raise exception.InvalidInput(message=error_message) if not getattr(self.volume_client, 'version', None): if access['access_level'] == constants.ACCESS_LEVEL_RO: LOG.error("Need python-cephfs package version 10.2.3 or " "greater to enable read-only access.") raise exception.InvalidShareAccessLevel( level=constants.ACCESS_LEVEL_RO) auth_result = self.volume_client.authorize( cephfs_share_path(share), ceph_auth_id) else: readonly = access['access_level'] == constants.ACCESS_LEVEL_RO auth_result = self.volume_client.authorize( cephfs_share_path(share), ceph_auth_id, readonly=readonly, tenant_id=share['project_id']) return auth_result['auth_key'] def _deny_access(self, context, share, access, share_server=None): if access['access_type'] != CEPHX_ACCESS_TYPE: LOG.warning("Invalid access type '%(type)s', " "ignoring in deny.", {"type": access['access_type']}) return self.volume_client.deauthorize(cephfs_share_path(share), access['access_to']) self.volume_client.evict( access['access_to'], volume_path=cephfs_share_path(share)) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): access_keys = {} if not (add_rules or delete_rules): # recovery/maintenance mode add_rules = access_rules existing_auths = None # The unversioned volume client cannot fetch from the Ceph backend, # the list of auth IDs that have share access. if getattr(self.volume_client, 'version', None): existing_auths = self.volume_client.get_authorized_ids( cephfs_share_path(share)) if existing_auths: existing_auth_ids = set( [auth[0] for auth in existing_auths]) want_auth_ids = set( [rule['access_to'] for rule in add_rules]) delete_auth_ids = existing_auth_ids.difference( want_auth_ids) for delete_auth_id in delete_auth_ids: delete_rules.append( { 'access_to': delete_auth_id, 'access_type': CEPHX_ACCESS_TYPE, }) # During recovery mode, re-authorize share access for auth IDs that # were already granted access by the backend. Do this to fetch their # access keys and ensure that after recovery, manila and the Ceph # backend are in sync. 
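# The reconciliation above reduces to a set difference; a sketch with
# invented auth IDs (the real IDs come from the Ceph backend and from the
# rules passed in by the share manager):
existing_auth_ids = {"alice", "bob", "carol"}   # already known to Ceph
want_auth_ids = {"alice", "dave"}               # from the rules to apply
stale_auth_ids = existing_auth_ids - want_auth_ids
# stale_auth_ids == {"bob", "carol"}: these become delete_rules, while every
# wanted rule is re-authorized so its access key can be reported back.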
for rule in add_rules: access_key = self._allow_access(context, share, rule) access_keys.update({rule['access_id']: {'access_key': access_key}}) for rule in delete_rules: self._deny_access(context, share, rule) return access_keys def get_configured_ip_versions(self): return [4] class NFSProtocolHelper(ganesha.GaneshaNASHelper2): shared_data = {} supported_protocols = ('NFS',) def __init__(self, execute, config_object, **kwargs): if config_object.cephfs_ganesha_server_is_remote: execute = ganesha_utils.SSHExecutor( config_object.cephfs_ganesha_server_ip, 22, None, config_object.cephfs_ganesha_server_username, password=config_object.cephfs_ganesha_server_password, privatekey=config_object.cephfs_ganesha_path_to_private_key) else: execute = ganesha_utils.RootExecutor(execute) self.ganesha_host = config_object.cephfs_ganesha_server_ip if not self.ganesha_host: self.ganesha_host = socket.gethostname() LOG.info("NFS-Ganesha server's location defaulted to driver's " "hostname: %s", self.ganesha_host) super(NFSProtocolHelper, self).__init__(execute, config_object, **kwargs) if not hasattr(self, 'ceph_vol_client'): self.ceph_vol_client = kwargs.pop('ceph_vol_client') self.export_ips = config_object.cephfs_ganesha_export_ips if not self.export_ips: self.export_ips = [self.ganesha_host] self.configured_ip_versions = set() self.config = config_object def check_for_setup_error(self): """Returns an error if prerequisites aren't met.""" host_address_obj = types.HostAddress() for export_ip in self.config.cephfs_ganesha_export_ips: try: host_address_obj(export_ip) except ValueError: msg = (_("Invalid list member of 'cephfs_ganesha_export_ips' " "option supplied %s -- not a valid IP address or " "hostname.") % export_ip) raise exception.InvalidParameterValue(err=msg) def get_export_locations(self, share, cephfs_volume): export_locations = [] for export_ip in self.export_ips: export_path = "{server_address}:{mount_path}".format( server_address=driver_helpers.escaped_address(export_ip), mount_path=cephfs_volume['mount_path']) LOG.info("Calculated export path for share %(id)s: %(epath)s", {"id": share['id'], "epath": export_path}) export_location = { 'path': export_path, 'is_admin_only': False, 'metadata': {}, } export_locations.append(export_location) return export_locations def _default_config_hook(self): """Callback to provide default export block.""" dconf = super(NFSProtocolHelper, self)._default_config_hook() conf_dir = ganesha_utils.path_from(__file__, "conf") ganesha_utils.patch(dconf, self._load_conf_dir(conf_dir)) return dconf def _fsal_hook(self, base, share, access): """Callback to create FSAL subblock.""" ceph_auth_id = ''.join(['ganesha-', share['id']]) auth_result = self.ceph_vol_client.authorize( cephfs_share_path(share), ceph_auth_id, readonly=False, tenant_id=share['project_id']) # Restrict Ganesha server's access to only the CephFS subtree or path, # corresponding to the manila share, that is to be exported by making # Ganesha use Ceph auth IDs with path restricted capabilities to # communicate with CephFS. 
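# For a hypothetical share with ID 'share-1234', the subblock returned below
# renders into the Ganesha export roughly as (secret key invented):
#
#   FSAL {
#       Name = "Ceph";
#       User_Id = "ganesha-share-1234";
#       Secret_Access_Key = "AQDexampleonlykey==";
#   }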
return { 'Name': 'Ceph', 'User_Id': ceph_auth_id, 'Secret_Access_Key': auth_result['auth_key'] } def _cleanup_fsal_hook(self, base, share, access): """Callback for FSAL specific cleanup after removing an export.""" ceph_auth_id = ''.join(['ganesha-', share['id']]) self.ceph_vol_client.deauthorize(cephfs_share_path(share), ceph_auth_id) def _get_export_path(self, share): """Callback to provide export path.""" volume_path = cephfs_share_path(share) return self.ceph_vol_client._get_path(volume_path) def _get_export_pseudo_path(self, share): """Callback to provide pseudo path.""" volume_path = cephfs_share_path(share) return self.ceph_vol_client._get_path(volume_path) def get_configured_ip_versions(self): if not self.configured_ip_versions: try: for export_ip in self.export_ips: self.configured_ip_versions.add( ipaddress.ip_address(six.text_type(export_ip)).version) except Exception: # export_ips contained a hostname, safest thing is to # claim support for IPv4 and IPv6 address families LOG.warning("Setting configured IP versions to [4, 6] since " "a hostname (rather than IP address) was supplied " "in 'cephfs_ganesha_server_ip' or " "in 'cephfs_ganesha_export_ips'.") return [4, 6] return list(self.configured_ip_versions) manila-10.0.0/manila/share/drivers/cephfs/conf/0000775000175000017500000000000013656750362021306 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/cephfs/conf/cephfs-export-template.conf0000664000175000017500000000006313656750227026554 0ustar zuulzuul00000000000000EXPORT { FSAL { Name = "CEPH"; } } manila-10.0.0/manila/share/drivers/__init__.py0000664000175000017500000000000013656750227021210 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/veritas/0000775000175000017500000000000013656750362020566 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/veritas/__init__.py0000664000175000017500000000000013656750227022665 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/veritas/veritas_isa.py0000664000175000017500000006045313656750227023461 0ustar zuulzuul00000000000000# Copyright 2017 Veritas Technologies LLC. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Veritas Access Driver for manila shares. 
Limitation: 1) single tenant """ import hashlib import json from oslo_config import cfg from oslo_log import log as logging from oslo_utils import units from random import shuffle import requests import requests.auth import six from six.moves import http_client from manila.common import constants as const from manila import exception from manila.share import driver LOG = logging.getLogger(__name__) va_share_opts = [ cfg.StrOpt('va_server_ip', help='Console IP of Veritas Access server.'), cfg.IntOpt('va_port', default=14161, help='Veritas Access server REST port.'), cfg.StrOpt('va_user', help='Veritas Access server REST login name.'), cfg.StrOpt('va_pwd', secret=True, help='Veritas Access server REST password.'), cfg.StrOpt('va_pool', help='Veritas Access storage pool from which ' 'shares are served.'), cfg.StrOpt('va_fstype', default='simple', help='Type of VA file system to be created.') ] CONF = cfg.CONF CONF.register_opts(va_share_opts) class NoAuth(requests.auth.AuthBase): """This is a 'authentication' handler. It exists for use with custom authentication systems, such as the one for the Access API, it simply passes the Authorization header as-is. The default authentication handler for requests will clobber the Authorization header. """ def __call__(self, r): return r class ACCESSShareDriver(driver.ExecuteMixin, driver.ShareDriver): """ACCESS Share Driver. Executes commands relating to Manila Shares. Supports creation of shares on ACCESS. API version history: 1.0 - Initial version. """ VA_SHARE_PATH_STR = '/vx/' def __init__(self, *args, **kwargs): """Do initialization.""" super(ACCESSShareDriver, self).__init__(False, *args, **kwargs) self.configuration.append_config_values(va_share_opts) self.backend_name = self.configuration.safe_get( 'share_backend_name') or "VeritasACCESS" self._va_ip = None self._va_url = None self._pool = None self._fstype = None self._port = None self._user = None self._pwd = None self._cred = None self._connect_resp = None self._verify_ssl_cert = None self._fs_create_str = '/fs/create' self._fs_list_str = '/fs' self._fs_delete_str = '/fs/destroy' self._fs_extend_str = '/fs/grow' self._fs_shrink_str = '/fs/shrink' self._snap_create_str = '/snapshot/create' self._snap_delete_str = '/snapshot/delete' self._snap_list_str = '/snapshot/getSnapShotList' self._nfs_add_str = '/share/create' self._nfs_delete_str = '/share/delete' self._nfs_share_list_str = '/share/all_shares_details_by_path/?path=' self._ip_addr_show_str = '/common/get_all_ips' self._pool_free_str = '/storage/pool' self._update_object = '/objecttags' self.session = None self.host = None LOG.debug("ACCESSShareDriver called") def do_setup(self, context): """Any initialization the share driver does while starting.""" super(ACCESSShareDriver, self).do_setup(context) self._va_ip = self.configuration.va_server_ip self._pool = self.configuration.va_pool self._user = self.configuration.va_user self._pwd = self.configuration.va_pwd self._port = self.configuration.va_port self._fstype = self.configuration.va_fstype self.session = self._authenticate_access(self._va_ip, self._user, self._pwd) def _get_va_share_name(self, name): length = len(name) index = int(length / 2) name1 = name[:index] name2 = name[index:] crc1 = hashlib.md5(name1.encode('utf-8')).hexdigest()[:8] crc2 = hashlib.md5(name2.encode('utf-8')).hexdigest()[:8] return crc1 + '-' + crc2 def _get_va_snap_name(self, name): return self._get_va_share_name(name) def _get_va_share_path(self, name): return self.VA_SHARE_PATH_STR + name def 
_does_item_exist_at_va_backend(self, item_name, path_given): """Check given share is exists on backend""" path = path_given provider = '%s:%s' % (self.host, self._port) data = {} item_list = self._access_api(self.session, provider, path, json.dumps(data), 'GET') for item in item_list: if item['name'] == item_name: return True return False def _return_access_lists_difference(self, list_a, list_b): """Returns a list of elements in list_a that are not in list_b""" sub_list = [{"access_to": s.get('access_to'), "access_type": s.get('access_type'), "access_level": s.get('access_level')} for s in list_b] return [r for r in list_a if ( {"access_to": r.get("access_to"), "access_type": r.get("access_type"), "access_level": r.get("access_level")} not in sub_list)] def _fetch_existing_rule(self, share_name): """Return list of access rules on given share""" share_path = self._get_va_share_path(share_name) path = self._nfs_share_list_str + share_path provider = '%s:%s' % (self.host, self._port) data = {} share_list = self._access_api(self.session, provider, path, json.dumps(data), 'GET') va_access_list = [] for share in share_list: if share['shareType'] == 'NFS': for share_info in share['shares']: if share_info['name'] == share_path: access_to = share_info['host_name'] a_level = const.ACCESS_LEVEL_RO if const.ACCESS_LEVEL_RW in share_info['privilege']: a_level = const.ACCESS_LEVEL_RW va_access_list.append({ 'access_to': access_to, 'access_level': a_level, 'access_type': 'ip' }) return va_access_list def create_share(self, ctx, share, share_server=None): """Create an ACCESS file system that will be represented as share.""" sharename = share['name'] sizestr = '%sg' % share['size'] LOG.debug("ACCESSShareDriver create_share sharename %s sizestr %r", sharename, sizestr) va_sharename = self._get_va_share_name(sharename) va_sharepath = self._get_va_share_path(va_sharename) va_fs_type = self._fstype path = self._fs_create_str provider = '%s:%s' % (self.host, self._port) data1 = { "largefs": "no", "blkSize": "blksize=8192", "pdirEnable": "pdir_enable=yes" } data1["layout"] = va_fs_type data1["fs_name"] = va_sharename data1["fs_size"] = sizestr data1["pool_disks"] = self._pool result = self._access_api(self.session, provider, path, json.dumps(data1), 'POST') if not result: message = (('ACCESSShareDriver create share failed %s'), sharename) LOG.error(message) raise exception.ShareBackendException(msg=message) data2 = {"type": "FS", "key": "manila"} data2["id"] = va_sharename data2["value"] = 'manila_fs' path = self._update_object result = self._access_api(self.session, provider, path, json.dumps(data2), 'POST') vip = self._get_vip() location = vip + ':' + va_sharepath LOG.debug("ACCESSShareDriver create_share location %s", location) return location def _get_vip(self): """Get a virtual IP from ACCESS.""" ip_list = self._get_access_ips(self.session, self.host) vip = [] for ips in ip_list: if ips['isconsoleip'] == 1: continue if ips['type'] == 'Virtual' and ips['status'] == 'ONLINE': vip.append(ips['ip']) shuffle(vip) return six.text_type(vip[0]) def delete_share(self, context, share, share_server=None): """Delete a share from ACCESS.""" sharename = share['name'] va_sharename = self._get_va_share_name(sharename) LOG.debug("ACCESSShareDriver delete_share %s called", sharename) if share['snapshot_id']: message = (('ACCESSShareDriver delete share %s' ' early return'), sharename) LOG.debug(message) return ret_val = self._does_item_exist_at_va_backend(va_sharename, self._fs_list_str) if not ret_val: return path = 
self._fs_delete_str provider = '%s:%s' % (self.host, self._port) data = {} data["fs_name"] = va_sharename result = self._access_api(self.session, provider, path, json.dumps(data), 'POST') if not result: message = (('ACCESSShareDriver delete share failed %s'), sharename) LOG.error(message) raise exception.ShareBackendException(msg=message) data2 = {"type": "FS", "key": "manila"} data2["id"] = va_sharename path = self._update_object result = self._access_api(self.session, provider, path, json.dumps(data2), 'DELETE') def extend_share(self, share, new_size, share_server=None): """Extend existing share to new size.""" sharename = share['name'] size = '%s%s' % (six.text_type(new_size), 'g') va_sharename = self._get_va_share_name(sharename) path = self._fs_extend_str provider = '%s:%s' % (self.host, self._port) data1 = {"operationOption": "growto", "tier": "primary"} data1["fs_name"] = va_sharename data1["fs_size"] = size result = self._access_api(self.session, provider, path, json.dumps(data1), 'POST') if not result: message = (('ACCESSShareDriver extend share failed %s'), sharename) LOG.error(message) raise exception.ShareBackendException(msg=message) LOG.debug('ACCESSShareDriver extended share' ' successfully %s', sharename) def shrink_share(self, share, new_size, share_server=None): """Shrink existing share to new size.""" sharename = share['name'] va_sharename = self._get_va_share_name(sharename) size = '%s%s' % (six.text_type(new_size), 'g') path = self._fs_extend_str provider = '%s:%s' % (self.host, self._port) data1 = {"operationOption": "shrinkto", "tier": "primary"} data1["fs_name"] = va_sharename data1["fs_size"] = size result = self._access_api(self.session, provider, path, json.dumps(data1), 'POST') if not result: message = (('ACCESSShareDriver shrink share failed %s'), sharename) LOG.error(message) raise exception.ShareBackendException(msg=message) LOG.debug('ACCESSShareDriver shrunk share successfully %s', sharename) def _allow_access(self, context, share, access, share_server=None): """Give access of a share to an IP.""" access_type = access['access_type'] server = access['access_to'] if access_type != 'ip': raise exception.InvalidShareAccess('Only ip access type ' 'supported.') access_level = access['access_level'] if access_level not in (const.ACCESS_LEVEL_RW, const.ACCESS_LEVEL_RO): raise exception.InvalidShareAccessLevel(level=access_level) export_path = share['export_locations'][0]['path'].split(':', 1) va_sharepath = six.text_type(export_path[1]) access_level = '%s,%s' % (six.text_type(access_level), 'sync,no_root_squash') path = self._nfs_add_str provider = '%s:%s' % (self.host, self._port) data = {} va_share_info = ("{\"share\":[{\"fileSystemPath\":\"" + va_sharepath + "\",\"shareType\":\"NFS\",\"shareDetails\":" + "[{\"client\":\"" + server + "\",\"exportOptions\":\"" + access_level + "\"}]}]}") data["shareDetails"] = va_share_info result = self._access_api(self.session, provider, path, json.dumps(data), 'POST') if not result: message = (('ACCESSShareDriver access failed sharepath %s ' 'server %s'), va_sharepath, server) LOG.error(message) raise exception.ShareBackendException(msg=message) LOG.debug("ACCESSShareDriver allow_access sharepath %s server %s", va_sharepath, server) data2 = {"type": "SHARE", "key": "manila"} data2["id"] = va_sharepath data2["value"] = 'manila_share' path = self._update_object result = self._access_api(self.session, provider, path, json.dumps(data2), 'POST') def _deny_access(self, context, share, access, share_server=None): """Deny access to the 
share.""" server = access['access_to'] access_type = access['access_type'] if access_type != 'ip': return export_path = share['export_locations'][0]['path'].split(':', 1) va_sharepath = six.text_type(export_path[1]) LOG.debug("ACCESSShareDriver deny_access sharepath %s server %s", va_sharepath, server) path = self._nfs_delete_str provider = '%s:%s' % (self.host, self._port) data = {} va_share_info = ("{\"share\":[{\"fileSystemPath\":\"" + va_sharepath + "\",\"shareType\":\"NFS\",\"shareDetails\":" + "[{\"client\":\"" + server + "\"}]}]}") data["shareDetails"] = va_share_info result = self._access_api(self.session, provider, path, json.dumps(data), 'DELETE') if not result: message = (('ACCESSShareDriver deny failed' ' sharepath %s server %s'), va_sharepath, server) LOG.error(message) raise exception.ShareBackendException(msg=message) LOG.debug("ACCESSShareDriver deny_access sharepath %s server %s", va_sharepath, server) data2 = {"type": "SHARE", "key": "manila"} data2["id"] = va_sharepath path = self._update_object result = self._access_api(self.session, provider, path, json.dumps(data2), 'DELETE') def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access to the share.""" if (add_rules or delete_rules): # deleting rules for rule in delete_rules: self._deny_access(context, share, rule, share_server) # adding rules for rule in add_rules: self._allow_access(context, share, rule, share_server) else: if not access_rules: LOG.warning("No access rules provided in update_access.") else: sharename = self._get_va_share_name(share['name']) existing_a_rules = self._fetch_existing_rule(sharename) d_rule = self._return_access_lists_difference(existing_a_rules, access_rules) for rule in d_rule: LOG.debug("Removing rule %s in recovery.", six.text_type(rule)) self._deny_access(context, share, rule, share_server) a_rule = self._return_access_lists_difference(access_rules, existing_a_rules) for rule in a_rule: LOG.debug("Adding rule %s in recovery.", six.text_type(rule)) self._allow_access(context, share, rule, share_server) def create_snapshot(self, context, snapshot, share_server=None): """create snapshot of a share.""" LOG.debug('ACCESSShareDriver create_snapshot called ' 'for snapshot ID %s.', snapshot['snapshot_id']) sharename = snapshot['share_name'] va_sharename = self._get_va_share_name(sharename) snapname = snapshot['name'] va_snapname = self._get_va_snap_name(snapname) path = self._snap_create_str provider = '%s:%s' % (self.host, self._port) data = {} data["snapShotname"] = va_snapname data["fileSystem"] = va_sharename data["removable"] = 'yes' result = self._access_api(self.session, provider, path, json.dumps(data), 'PUT') if not result: message = (('ACCESSShareDriver create snapshot failed snapname %s' ' sharename %s'), snapname, va_sharename) LOG.error(message) raise exception.ShareBackendException(msg=message) data2 = {"type": "SNAPSHOT", "key": "manila"} data2["id"] = va_snapname data2["value"] = 'manila_snapshot' path = self._update_object result = self._access_api(self.session, provider, path, json.dumps(data2), 'POST') def delete_snapshot(self, context, snapshot, share_server=None): """Deletes a snapshot.""" sharename = snapshot['share_name'] va_sharename = self._get_va_share_name(sharename) snapname = snapshot['name'] va_snapname = self._get_va_snap_name(snapname) ret_val = self._does_item_exist_at_va_backend(va_snapname, self._snap_list_str) if not ret_val: return path = self._snap_delete_str provider = '%s:%s' % (self.host, 
self._port) data = {} data["name"] = va_snapname data["fsName"] = va_sharename data_to_send = {"snapShotDetails": {"snapshot": [data]}} result = self._access_api(self.session, provider, path, json.dumps(data_to_send), 'DELETE') if not result: message = (('ACCESSShareDriver delete snapshot failed snapname %s' ' sharename %s'), snapname, va_sharename) LOG.error(message) raise exception.ShareBackendException(msg=message) data2 = {"type": "SNAPSHOT", "key": "manila"} data2["id"] = va_snapname path = self._update_object result = self._access_api(self.session, provider, path, json.dumps(data2), 'DELETE') def create_share_from_snapshot(self, ctx, share, snapshot, share_server=None, parent_share=None): """create share from a snapshot.""" sharename = snapshot['share_name'] va_sharename = self._get_va_share_name(sharename) snapname = snapshot['name'] va_snapname = self._get_va_snap_name(snapname) va_sharepath = self._get_va_share_path(va_sharename) LOG.debug(('ACCESSShareDriver create_share_from_snapshot snapname %s' ' sharename %s'), va_snapname, va_sharename) vip = self._get_vip() location = vip + ':' + va_sharepath + ':' + va_snapname LOG.debug("ACCESSShareDriver create_share location %s", location) return location def _get_api(self, provider, tail): api_root = 'https://%s/api' % (provider) return api_root + tail def _access_api(self, session, provider, path, input_data, method): """Returns False if failure occurs.""" kwargs = {'data': input_data} if not isinstance(input_data, dict): kwargs['headers'] = {'Content-Type': 'application/json'} full_url = self._get_api(provider, path) response = session.request(method, full_url, **kwargs) if response.status_code != http_client.OK: LOG.debug('Access API operation Failed.') return False if path == self._update_object: return True result = response.json() return result def _get_access_ips(self, session, host): path = self._ip_addr_show_str provider = '%s:%s' % (host, self._port) data = {} ip_list = self._access_api(session, provider, path, json.dumps(data), 'GET') return ip_list def _authenticate_access(self, address, username, password): session = requests.session() session.verify = False session.auth = NoAuth() response = session.post('https://%s:%s/api/rest/authenticate' % (address, self._port), data={'username': username, 'password': password}) if response.status_code != http_client.OK: LOG.debug(('failed to authenticate to remote cluster at %s as %s'), address, username) raise exception.NotAuthorized('Authentication failure.') result = response.json() session.headers.update({'Authorization': 'Bearer {}' .format(result['token'])}) session.headers.update({'Content-Type': 'application/json'}) return session def _get_access_pool_details(self): """Get access pool details.""" path = self._pool_free_str provider = '%s:%s' % (self.host, self._port) data = {} pool_details = self._access_api(self.session, provider, path, json.dumps(data), 'GET') for pool in pool_details: if pool['device_group_name'] == six.text_type(self._pool): total_capacity = (int(pool['capacity']) / units.Gi) used_size = (int(pool['used_size']) / units.Gi) return (total_capacity, (total_capacity - used_size)) message = 'Fetching pool details operation failed.' 
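# The capacity math in _get_access_pool_details() above, shown with an
# invented pool entry ('capacity' and 'used_size' are byte counts in the
# REST response); the import mirrors the one at the top of this module.
from oslo_utils import units

example_pool = {"device_group_name": "pool1",
                "capacity": 100 * units.Gi, "used_size": 40 * units.Gi}
total_gb = int(example_pool["capacity"]) / units.Gi               # 100 GiB
free_gb = total_gb - int(example_pool["used_size"]) / units.Gi    # 60 GiB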
LOG.error(message) raise exception.ShareBackendException(msg=message) def _update_share_stats(self): """Retrieve status info from share volume group.""" LOG.debug("VRTSISA Updating share status.") self.host = six.text_type(self._va_ip) self.session = self._authenticate_access(self._va_ip, self._user, self._pwd) total_capacity, free_capacity = self._get_access_pool_details() data = { 'share_backend_name': self.backend_name, 'vendor_name': 'Veritas', 'driver_version': '1.0', 'storage_protocol': 'NFS', 'total_capacity_gb': total_capacity, 'free_capacity_gb': free_capacity, 'reserved_percentage': 0, 'QoS_support': False, 'snapshot_support': True, 'create_share_from_snapshot_support': True } super(ACCESSShareDriver, self)._update_share_stats(data) manila-10.0.0/manila/share/drivers/ganesha/0000775000175000017500000000000013656750362020517 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/ganesha/utils.py0000664000175000017500000001132313656750227022231 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import pipes from oslo_concurrency import processutils from oslo_log import log from manila import exception from manila.i18n import _ from manila import utils LOG = log.getLogger(__name__) def patch(base, *overlays): """Recursive dictionary patching.""" for ovl in overlays: for k, v in ovl.items(): if isinstance(v, dict) and isinstance(base.get(k), dict): patch(base[k], v) else: base[k] = v return base def walk(dct): """Recursive iteration over dictionary.""" for k, v in dct.items(): if isinstance(v, dict): for w in walk(v): yield w else: yield k, v class RootExecutor(object): """Execute wrapper defaulting to root execution.""" def __init__(self, execute=utils.execute): self.execute = execute def __call__(self, *args, **kwargs): exkwargs = {"run_as_root": True} exkwargs.update(kwargs) return self.execute(*args, **exkwargs) class SSHExecutor(object): """Callable encapsulating exec through ssh.""" def __init__(self, *args, **kwargs): self.pool = utils.SSHPool(*args, **kwargs) def __call__(self, *args, **kwargs): # argument with identifier 'run_as_root=' is not accepted by # processutils's ssh_execute() method unlike processutils's execute() # method. So implement workaround to enable or disable 'run as root' # behavior. run_as_root = kwargs.pop('run_as_root', False) cmd = ' '.join(pipes.quote(a) for a in args) if run_as_root: cmd = ' '.join(['sudo', cmd]) ssh = self.pool.get() try: ret = processutils.ssh_execute(ssh, cmd, **kwargs) finally: self.pool.put(ssh) return ret def path_from(fpath, *rpath): """Return the join of the dir of fpath and rpath in absolute form.""" return os.path.join(os.path.abspath(os.path.dirname(fpath)), *rpath) def validate_access_rule(supported_access_types, supported_access_levels, access_rule, abort=False): """Validate an access rule. :param access_rule: Access rules to be validated. :param supported_access_types: List of access types that are regarded valid. 
:param supported_access_levels: List of access levels that are regarded valid. :param abort: a boolean value that indicates if an exception should be raised whether the rule is invalid. :return: Boolean. """ errmsg = _("Unsupported access rule of 'type' %(access_type)s, " "'level' %(access_level)s, 'to' %(access_to)s: " "%(field)s should be one of %(supported)s.") access_param = access_rule.to_dict() def validate(field, supported_tokens, excinfo): if access_rule['access_%s' % field] in supported_tokens: return True access_param['field'] = field access_param['supported'] = ', '.join( "'%s'" % x for x in supported_tokens) if abort: LOG.error(errmsg, access_param) raise excinfo['type']( **{excinfo['about']: excinfo['details'] % access_param}) else: LOG.warning(errmsg, access_param) return False valid = True valid &= validate( 'type', supported_access_types, {'type': exception.InvalidShareAccess, 'about': "reason", 'details': _( "%(access_type)s; only %(supported)s access type is allowed")}) valid &= validate( 'level', supported_access_levels, {'type': exception.InvalidShareAccessLevel, 'about': "level", 'details': "%(access_level)s"}) return valid def fixup_access_rule(access_rule): """Adjust access rule as required for ganesha to handle it properly. :param access_rule: Access rules to be fixed up. :return: access_rule """ if access_rule['access_to'] == '0.0.0.0/0': access_rule['access_to'] = '0.0.0.0' LOG.debug("Set access_to field to '0.0.0.0' in ganesha back end.") return access_rule manila-10.0.0/manila/share/drivers/ganesha/__init__.py0000664000175000017500000002767113656750227022645 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc import errno import os import re from oslo_config import cfg from oslo_log import log import six from manila.common import constants from manila import exception from manila.i18n import _ from manila.share.drivers.ganesha import manager as ganesha_manager from manila.share.drivers.ganesha import utils as ganesha_utils CONF = cfg.CONF LOG = log.getLogger(__name__) @six.add_metaclass(abc.ABCMeta) class NASHelperBase(object): """Interface to work with share.""" # drivers that use a helper derived from this class # should pass the following attributes to # ganesha_utils.validate_access_rule in their # update_access implementation. 
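# How a concrete helper is expected to feed these attributes to
# ganesha_utils.validate_access_rule(); the rule below is a minimal
# stand-in (real rules are Manila DB objects exposing to_dict() and item
# access), and all values are invented.
class _FakeRule(dict):
    def to_dict(self):
        return dict(self)

rule = _FakeRule(access_type='ip', access_level='rw',
                 access_to='10.0.0.0/24')
ganesha_utils.validate_access_rule(('ip',), ('rw', 'ro'), rule, abort=True)
# Returns True here; an unsupported type or level would raise
# InvalidShareAccess / InvalidShareAccessLevel because abort=True.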
supported_access_types = () supported_access_levels = () def __init__(self, execute, config, **kwargs): self.configuration = config self._execute = execute def init_helper(self): """Initializes protocol-specific NAS drivers.""" @abc.abstractmethod def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules of share.""" class GaneshaNASHelper(NASHelperBase): """Perform share access changes using Ganesha version < 2.4.""" supported_access_types = ('ip', ) supported_access_levels = (constants.ACCESS_LEVEL_RW, constants.ACCESS_LEVEL_RO) def __init__(self, execute, config, tag='', **kwargs): super(GaneshaNASHelper, self).__init__(execute, config, **kwargs) self.tag = tag _confrx = re.compile(r'\.(conf|json)\Z') def _load_conf_dir(self, dirpath, must_exist=True): """Load Ganesha config files in dirpath in alphabetic order.""" try: dirlist = os.listdir(dirpath) except OSError as e: if e.errno != errno.ENOENT or must_exist: raise dirlist = [] LOG.info('Loading Ganesha config from %s.', dirpath) conf_files = list(filter(self._confrx.search, dirlist)) conf_files.sort() export_template = {} for conf_file in conf_files: with open(os.path.join(dirpath, conf_file)) as f: ganesha_utils.patch( export_template, ganesha_manager.parseconf(f.read())) return export_template def init_helper(self): """Initializes protocol-specific NAS drivers.""" self.ganesha = ganesha_manager.GaneshaManager( self._execute, self.tag, ganesha_config_path=self.configuration.ganesha_config_path, ganesha_export_dir=self.configuration.ganesha_export_dir, ganesha_db_path=self.configuration.ganesha_db_path, ganesha_service_name=self.configuration.ganesha_service_name) system_export_template = self._load_conf_dir( self.configuration.ganesha_export_template_dir, must_exist=False) if system_export_template: self.export_template = system_export_template else: self.export_template = self._default_config_hook() def _default_config_hook(self): """The default export block. Subclass this to add FSAL specific defaults. Suggested approach: take the return value of superclass' method, patch with dict containing your defaults, and return the result. However, you can also provide your defaults from scratch with no regard to superclass. 
""" return self._load_conf_dir(ganesha_utils.path_from(__file__, "conf")) def _fsal_hook(self, base_path, share, access): """Subclass this to create FSAL block.""" return {} def _cleanup_fsal_hook(self, base_path, share, access): """Callback for FSAL specific cleanup after removing an export.""" pass def _allow_access(self, base_path, share, access): """Allow access to the share.""" ganesha_utils.validate_access_rule( self.supported_access_types, self.supported_access_levels, access, abort=True) access = ganesha_utils.fixup_access_rule(access) cf = {} accid = access['id'] name = share['name'] export_name = "%s--%s" % (name, accid) ganesha_utils.patch(cf, self.export_template, { 'EXPORT': { 'Export_Id': self.ganesha.get_export_id(), 'Path': os.path.join(base_path, name), 'Pseudo': os.path.join(base_path, export_name), 'Tag': accid, 'CLIENT': { 'Clients': access['access_to'] }, 'FSAL': self._fsal_hook(base_path, share, access) } }) self.ganesha.add_export(export_name, cf) def _deny_access(self, base_path, share, access): """Deny access to the share.""" self.ganesha.remove_export("%s--%s" % (share['name'], access['id'])) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules of share.""" rule_state_map = {} if not (add_rules or delete_rules): add_rules = access_rules self.ganesha.reset_exports() self.ganesha.restart_service() for rule in add_rules: try: self._allow_access('/', share, rule) except (exception.InvalidShareAccess, exception.InvalidShareAccessLevel): rule_state_map[rule['id']] = {'state': 'error'} continue for rule in delete_rules: self._deny_access('/', share, rule) return rule_state_map class GaneshaNASHelper2(GaneshaNASHelper): """Perform share access changes using Ganesha version >= 2.4.""" def __init__(self, execute, config, tag='', **kwargs): super(GaneshaNASHelper2, self).__init__(execute, config, **kwargs) if self.configuration.ganesha_rados_store_enable: self.ceph_vol_client = kwargs.pop('ceph_vol_client') def init_helper(self): """Initializes protocol-specific NAS drivers.""" kwargs = { 'ganesha_config_path': self.configuration.ganesha_config_path, 'ganesha_export_dir': self.configuration.ganesha_export_dir, 'ganesha_service_name': self.configuration.ganesha_service_name } if self.configuration.ganesha_rados_store_enable: kwargs['ganesha_rados_store_enable'] = ( self.configuration.ganesha_rados_store_enable) if not self.configuration.ganesha_rados_store_pool_name: raise exception.GaneshaException( _('"ganesha_rados_store_pool_name" config option is not ' 'set in the driver section.')) kwargs['ganesha_rados_store_pool_name'] = ( self.configuration.ganesha_rados_store_pool_name) kwargs['ganesha_rados_export_index'] = ( self.configuration.ganesha_rados_export_index) kwargs['ganesha_rados_export_counter'] = ( self.configuration.ganesha_rados_export_counter) kwargs['ceph_vol_client'] = ( self.ceph_vol_client) else: kwargs['ganesha_db_path'] = self.configuration.ganesha_db_path self.ganesha = ganesha_manager.GaneshaManager( self._execute, self.tag, **kwargs) system_export_template = self._load_conf_dir( self.configuration.ganesha_export_template_dir, must_exist=False) if system_export_template: self.export_template = system_export_template else: self.export_template = self._default_config_hook() def _get_export_path(self, share): """Subclass this to return export path.""" raise NotImplementedError() def _get_export_pseudo_path(self, share): """Subclass this to return export pseudo path.""" raise 
NotImplementedError() def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules of share. Creates an export per share. Modifies access rules of shares by dynamically updating exports via DBUS. """ confdict = {} existing_access_rules = [] rule_state_map = {} if self.ganesha.check_export_exists(share['name']): confdict = self.ganesha._read_export(share['name']) existing_access_rules = confdict["EXPORT"]["CLIENT"] if not isinstance(existing_access_rules, list): existing_access_rules = [existing_access_rules] else: if not access_rules: LOG.warning("Trying to remove export file '%s' but it's " "already gone", self.ganesha._getpath(share['name'])) return wanted_rw_clients, wanted_ro_clients = [], [] for rule in access_rules: try: ganesha_utils.validate_access_rule( self.supported_access_types, self.supported_access_levels, rule, True) except (exception.InvalidShareAccess, exception.InvalidShareAccessLevel): rule_state_map[rule['id']] = {'state': 'error'} continue rule = ganesha_utils.fixup_access_rule(rule) if rule['access_level'] == 'rw': wanted_rw_clients.append(rule['access_to']) elif rule['access_level'] == 'ro': wanted_ro_clients.append(rule['access_to']) if access_rules: # Add or Update export. clients = [] if wanted_ro_clients: clients.append({ 'Access_Type': 'ro', 'Clients': ','.join(wanted_ro_clients) }) if wanted_rw_clients: clients.append({ 'Access_Type': 'rw', 'Clients': ','.join(wanted_rw_clients) }) if clients: # Empty list if no rules passed validation if existing_access_rules: # Update existing export. ganesha_utils.patch(confdict, { 'EXPORT': { 'CLIENT': clients } }) self.ganesha.update_export(share['name'], confdict) else: # Add new export. ganesha_utils.patch(confdict, self.export_template, { 'EXPORT': { 'Export_Id': self.ganesha.get_export_id(), 'Path': self._get_export_path(share), 'Pseudo': self._get_export_pseudo_path(share), 'Tag': share['name'], 'CLIENT': clients, 'FSAL': self._fsal_hook(None, share, None) } }) self.ganesha.add_export(share['name'], confdict) else: # No clients have access to the share. Remove export. self.ganesha.remove_export(share['name']) self._cleanup_fsal_hook(None, share, None) return rule_state_map manila-10.0.0/manila/share/drivers/ganesha/manager.py0000664000175000017500000005435313656750227022515 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
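# Sketch of the CLIENT blocks that GaneshaNASHelper2.update_access() above
# assembles before handing the export dict to add_export()/update_export();
# the addresses are invented.
wanted_rw_clients = ['10.0.0.5', '10.0.0.6']
wanted_ro_clients = ['10.0.1.0/24']
clients = []
if wanted_ro_clients:
    clients.append({'Access_Type': 'ro',
                    'Clients': ','.join(wanted_ro_clients)})
if wanted_rw_clients:
    clients.append({'Access_Type': 'rw',
                    'Clients': ','.join(wanted_rw_clients)})
# clients == [{'Access_Type': 'ro', 'Clients': '10.0.1.0/24'},
#             {'Access_Type': 'rw', 'Clients': '10.0.0.5,10.0.0.6'}]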
import os import pipes import re import sys from oslo_log import log from oslo_serialization import jsonutils from oslo_utils import importutils import six from manila import exception from manila.i18n import _ from manila.share.drivers.ganesha import utils as ganesha_utils from manila import utils LOG = log.getLogger(__name__) IWIDTH = 4 def _conf2json(conf): """Convert Ganesha config to JSON.""" # tokenize config string token_list = [six.StringIO()] state = { 'in_quote': False, 'in_comment': False, 'escape': False, } cbk = [] for char in conf: if state['in_quote']: if not state['escape']: if char == '"': state['in_quote'] = False cbk.append(lambda: token_list.append(six.StringIO())) elif char == '\\': cbk.append(lambda: state.update({'escape': True})) else: if char == "#": state['in_comment'] = True if state['in_comment']: if char == "\n": state['in_comment'] = False else: if char == '"': token_list.append(six.StringIO()) state['in_quote'] = True state['escape'] = False if not state['in_comment']: token_list[-1].write(char) while cbk: cbk.pop(0)() if state['in_quote']: raise RuntimeError("Unterminated quoted string") # jsonify tokens js_token_list = ["{"] for tok in token_list: tok = tok.getvalue() if tok[0] == '"': js_token_list.append(tok) continue for pat, s in [ # add omitted "=" signs to block openings (r'([^=\s])\s*{', '\\1={'), # delete trailing semicolons in blocks (r';\s*}', '}'), # add omitted semicolons after blocks (r'}\s*([^}\s])', '};\\1'), # separate syntactically significant characters (r'([;{}=])', ' \\1 ')]: tok = re.sub(pat, s, tok) # map tokens to JSON equivalents for word in tok.split(): if word == "=": word = ":" elif word == ";": word = ',' elif (word in ['{', '}'] or re.search(r'\A-?[1-9]\d*(\.\d+)?\Z', word)): pass else: word = jsonutils.dumps(word) js_token_list.append(word) js_token_list.append("}") # group quoted strings token_grp_list = [] for tok in js_token_list: if tok[0] == '"': if not (token_grp_list and isinstance(token_grp_list[-1], list)): token_grp_list.append([]) token_grp_list[-1].append(tok) else: token_grp_list.append(tok) # process quoted string groups by joining them js_token_list2 = [] for x in token_grp_list: if isinstance(x, list): x = ''.join(['"'] + [tok[1:-1] for tok in x] + ['"']) js_token_list2.append(x) return ''.join(js_token_list2) def _dump_to_conf(confdict, out=sys.stdout, indent=0): """Output confdict in Ganesha config format.""" if isinstance(confdict, dict): for k, v in confdict.items(): if v is None: continue if isinstance(v, dict): out.write(' ' * (indent * IWIDTH) + k + ' ') out.write("{\n") _dump_to_conf(v, out, indent + 1) out.write(' ' * (indent * IWIDTH) + '}') elif isinstance(v, list): for item in v: out.write(' ' * (indent * IWIDTH) + k + ' ') out.write("{\n") _dump_to_conf(item, out, indent + 1) out.write(' ' * (indent * IWIDTH) + '}\n') # The 'CLIENTS' Ganesha string option is an exception in that it's # string value can't be enclosed within quotes as can be done for # other string options in a valid Ganesha conf file. elif k.upper() == 'CLIENTS': out.write(' ' * (indent * IWIDTH) + k + ' = ' + v + ';') else: out.write(' ' * (indent * IWIDTH) + k + ' ') out.write('= ') _dump_to_conf(v, out, indent) out.write(';') out.write('\n') else: dj = jsonutils.dumps(confdict) out.write(dj) def parseconf(conf): """Parse Ganesha config. Both native format and JSON are supported. Convert config to a (nested) dictionary. """ def list_to_dict(l): # Convert a list of key-value pairs stored as tuples to a dict. 
# For tuples with identical keys, preserve all the values in a # list. e.g., argument [('k', 'v1'), ('k', 'v2')] to function # returns {'k': ['v1', 'v2']}. d = {} for i in l: if isinstance(i, tuple): k, v = i if isinstance(v, list): v = list_to_dict(v) if k in d: d[k] = [d[k]] d[k].append(v) else: d[k] = v return d try: # allow config to be specified in JSON -- # for sake of people who might feel Ganesha config foreign. d = jsonutils.loads(conf) except ValueError: # Customize JSON decoder to convert Ganesha config to a list # of key-value pairs stored as tuples. This allows multiple # occurrences of a config block to be later converted to a # dict key-value pair, with block name being the key and a # list of block contents being the value. li = jsonutils.loads(_conf2json(conf), object_pairs_hook=lambda x: x) d = list_to_dict(li) return d def mkconf(confdict): """Create Ganesha config string from confdict.""" s = six.StringIO() _dump_to_conf(confdict, s) return s.getvalue() rados = None def setup_rados(): global rados if not rados: try: rados = importutils.import_module('rados') except ImportError: raise exception.ShareBackendException( _("python-rados is not installed")) class GaneshaManager(object): """Ganesha instrumentation class.""" def __init__(self, execute, tag, **kwargs): self.confrx = re.compile(r'\.conf\Z') self.ganesha_config_path = kwargs['ganesha_config_path'] self.tag = tag def _execute(*args, **kwargs): msg = kwargs.pop('message', args[0]) makelog = kwargs.pop('makelog', True) try: return execute(*args, **kwargs) except exception.ProcessExecutionError as e: if makelog: LOG.error( ("Error while executing management command on " "Ganesha node %(tag)s: %(msg)s."), {'tag': tag, 'msg': msg}) raise exception.GaneshaCommandFailure( stdout=e.stdout, stderr=e.stderr, exit_code=e.exit_code, cmd=e.cmd) self.execute = _execute self.ganesha_service = kwargs['ganesha_service_name'] self.ganesha_export_dir = kwargs['ganesha_export_dir'] self.execute('mkdir', '-p', self.ganesha_export_dir) self.ganesha_rados_store_enable = kwargs.get( 'ganesha_rados_store_enable') if self.ganesha_rados_store_enable: setup_rados() self.ganesha_rados_store_pool_name = ( kwargs['ganesha_rados_store_pool_name']) self.ganesha_rados_export_counter = ( kwargs['ganesha_rados_export_counter']) self.ganesha_rados_export_index = ( kwargs['ganesha_rados_export_index']) self.ceph_vol_client = ( kwargs['ceph_vol_client']) try: self._get_rados_object(self.ganesha_rados_export_counter) except rados.ObjectNotFound: self._put_rados_object(self.ganesha_rados_export_counter, six.text_type(1000)) else: self.ganesha_db_path = kwargs['ganesha_db_path'] self.execute('mkdir', '-p', os.path.dirname(self.ganesha_db_path)) # Here we are to make sure that an SQLite database of the # required scheme exists at self.ganesha_db_path. # The following command gets us there -- provided the file # does not yet exist (otherwise it just fails). However, # we don't care about this condition, we just execute the # command unconditionally (ignoring failure). Instead we # directly query the db right after, to check its validity. 
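# Round-trip sketch for parseconf()/mkconf() above, using a tiny invented
# export block: the native Ganesha syntax is parsed into a nested dict and
# can be rendered back.
example_conf = '''
EXPORT {
    Export_Id = 101;
    Path = "/vol1";
    FSAL {
        Name = "CEPH";
    }
}
'''
parsed = parseconf(example_conf)
# parsed == {'EXPORT': {'Export_Id': 101, 'Path': '/vol1',
#                       'FSAL': {'Name': 'CEPH'}}}
rendered = mkconf(parsed)   # back to a Ganesha-style config string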
self.execute( "sqlite3", self.ganesha_db_path, 'create table ganesha(key varchar(20) primary key, ' 'value int); insert into ganesha values("exportid", ' '100);', run_as_root=False, check_exit_code=False) self.get_export_id(bump=False) def _getpath(self, name): """Get the path of config file for name.""" return os.path.join(self.ganesha_export_dir, name + ".conf") @staticmethod def _get_export_rados_object_name(name): return 'ganesha-export-' + name def _write_tmp_conf_file(self, path, data): """Write data to tmp conf file.""" dirpath, fname = (getattr(os.path, q + "name")(path) for q in ("dir", "base")) tmpf = self.execute('mktemp', '-p', dirpath, "-t", fname + ".XXXXXX")[0][:-1] self.execute( 'sh', '-c', 'echo %s > %s' % (pipes.quote(data), pipes.quote(tmpf)), message='writing ' + tmpf) return tmpf def _write_conf_file(self, name, data): """Write data to config file for name atomically.""" path = self._getpath(name) tmpf = self._write_tmp_conf_file(path, data) try: self.execute('mv', tmpf, path) except exception.ProcessExecutionError as e: LOG.error('mv temp file ({0}) to {1} failed.'.format(tmpf, path)) self.execute('rm', tmpf) raise exception.GaneshaCommandFailure( stdout=e.stdout, stderr=e.stderr, exit_code=e.exit_code, cmd=e.cmd) return path def _mkindex(self): """Generate the index file for current exports.""" @utils.synchronized("ganesha-index-" + self.tag, external=True) def _mkindex(): files = filter(lambda f: self.confrx.search(f) and f != "INDEX.conf", self.execute('ls', self.ganesha_export_dir, run_as_root=False)[0].split("\n")) index = "".join(map(lambda f: "%include " + os.path.join( self.ganesha_export_dir, f) + "\n", files)) self._write_conf_file("INDEX", index) _mkindex() def _read_export_rados_object(self, name): return parseconf(self._get_rados_object( self._get_export_rados_object_name(name))) def _read_export_file(self, name): return parseconf(self.execute("cat", self._getpath(name), message='reading export ' + name)[0]) def _read_export(self, name): """Return the dict of the export identified by name.""" if self.ganesha_rados_store_enable: return self._read_export_rados_object(name) else: return self._read_export_file(name) def _check_export_rados_object_exists(self, name): try: self._get_rados_object( self._get_export_rados_object_name(name)) return True except rados.ObjectNotFound: return False def _check_file_exists(self, path): try: self.execute('test', '-f', path, makelog=False, run_as_root=False) return True except exception.GaneshaCommandFailure as e: if e.exit_code == 1: return False else: raise exception.GaneshaCommandFailure( stdout=e.stdout, stderr=e.stderr, exit_code=e.exit_code, cmd=e.cmd) def _check_export_file_exists(self, name): return self._check_file_exists(self._getpath(name)) def check_export_exists(self, name): """Check whether export exists.""" if self.ganesha_rados_store_enable: return self._check_export_rados_object_exists(name) else: return self._check_export_file_exists(name) def _write_export_rados_object(self, name, data): """Write confdict to the export RADOS object of name.""" self._put_rados_object(self._get_export_rados_object_name(name), data) # temp export config file required for DBus calls return self._write_tmp_conf_file(self._getpath(name), data) def _write_export(self, name, confdict): """Write confdict to the export file or RADOS object of name.""" for k, v in ganesha_utils.walk(confdict): # values in the export block template that need to be # filled in by Manila are pre-fixed by '@' if isinstance(v, six.string_types) and v[0] 
== '@': msg = _("Incomplete export block: value %(val)s of attribute " "%(key)s is a stub.") % {'key': k, 'val': v} raise exception.InvalidParameterValue(err=msg) if self.ganesha_rados_store_enable: return self._write_export_rados_object(name, mkconf(confdict)) else: return self._write_conf_file(name, mkconf(confdict)) def _rm_file(self, path): self.execute("rm", "-f", path) def _rm_export_file(self, name): """Remove export file of name.""" self._rm_file(self._getpath(name)) def _rm_export_rados_object(self, name): """Remove export object of name.""" self._delete_rados_object(self._get_export_rados_object_name(name)) def _dbus_send_ganesha(self, method, *args, **kwargs): """Send a message to Ganesha via dbus.""" service = kwargs.pop("service", "exportmgr") self.execute("dbus-send", "--print-reply", "--system", "--dest=org.ganesha.nfsd", "/org/ganesha/nfsd/ExportMgr", "org.ganesha.nfsd.%s.%s" % (service, method), *args, message='dbus call %s.%s' % (service, method), **kwargs) def _remove_export_dbus(self, xid): """Remove an export from Ganesha runtime with given export id.""" self._dbus_send_ganesha("RemoveExport", "uint16:%d" % xid) def _add_rados_object_url_to_index(self, name): """Add an export RADOS object's URL to the RADOS URL index.""" # TODO(rraja): Ensure that the export index object's update is atomic, # e.g., retry object update until the object version between the 'get' # and 'put' operations remains the same. index_data = self._get_rados_object(self.ganesha_rados_export_index) want_url = "%url rados://{0}/{1}".format( self.ganesha_rados_store_pool_name, self._get_export_rados_object_name(name)) if index_data: self._put_rados_object( self.ganesha_rados_export_index, '\n'.join([index_data, want_url]) ) else: self._put_rados_object(self.ganesha_rados_export_index, want_url) def _remove_rados_object_url_from_index(self, name): """Remove an export RADOS object's URL from the RADOS URL index.""" # TODO(rraja): Ensure that the export index object's update is atomic, # e.g., retry object update until the object version between the 'get' # and 'put' operations remains the same. 
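# The RADOS URL index handled here is just a newline-separated list of
# Ganesha '%url' directives; for an invented pool and share the entry looks
# like this:
pool_name = 'ganesha-pool'
export_object = 'ganesha-export-share-1234'
url_entry = "%url rados://{0}/{1}".format(pool_name, export_object)
# url_entry == '%url rados://ganesha-pool/ganesha-export-share-1234'
# Adding an export appends this line to the index object; removal filters
# it back out.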
index_data = self._get_rados_object(self.ganesha_rados_export_index) if not index_data: return unwanted_url = "%url rados://{0}/{1}".format( self.ganesha_rados_store_pool_name, self._get_export_rados_object_name(name)) rados_urls = index_data.split('\n') new_rados_urls = [url for url in rados_urls if url != unwanted_url] self._put_rados_object(self.ganesha_rados_export_index, '\n'.join(new_rados_urls)) def add_export(self, name, confdict): """Add an export to Ganesha specified by confdict.""" xid = confdict["EXPORT"]["Export_Id"] undos = [] _mkindex_called = False try: path = self._write_export(name, confdict) if self.ganesha_rados_store_enable: undos.append(lambda: self._rm_export_rados_object(name)) undos.append(lambda: self._rm_file(path)) else: undos.append(lambda: self._rm_export_file(name)) self._dbus_send_ganesha("AddExport", "string:" + path, "string:EXPORT(Export_Id=%d)" % xid) undos.append(lambda: self._remove_export_dbus(xid)) if self.ganesha_rados_store_enable: # Clean up temp export file used for the DBus call self._rm_file(path) self._add_rados_object_url_to_index(name) else: _mkindex_called = True self._mkindex() except exception.ProcessExecutionError as e: for u in undos: u() if not self.ganesha_rados_store_enable and not _mkindex_called: self._mkindex() raise exception.GaneshaCommandFailure( stdout=e.stdout, stderr=e.stderr, exit_code=e.exit_code, cmd=e.cmd) def update_export(self, name, confdict): """Update an export to Ganesha specified by confdict.""" xid = confdict["EXPORT"]["Export_Id"] old_confdict = self._read_export(name) path = self._write_export(name, confdict) try: self._dbus_send_ganesha("UpdateExport", "string:" + path, "string:EXPORT(Export_Id=%d)" % xid) except exception.ProcessExecutionError as e: # Revert the export update. self._write_export(name, old_confdict) raise exception.GaneshaCommandFailure( stdout=e.stdout, stderr=e.stderr, exit_code=e.exit_code, cmd=e.cmd) finally: if self.ganesha_rados_store_enable: # Clean up temp export file used for the DBus update call self._rm_file(path) def remove_export(self, name): """Remove an export from Ganesha.""" try: confdict = self._read_export(name) self._remove_export_dbus(confdict["EXPORT"]["Export_Id"]) finally: if self.ganesha_rados_store_enable: self._delete_rados_object( self._get_export_rados_object_name(name)) self._remove_rados_object_url_from_index(name) else: self._rm_export_file(name) self._mkindex() def _get_rados_object(self, obj_name): """Get data stored in Ceph RADOS object as a text string.""" return self.ceph_vol_client.get_object( self.ganesha_rados_store_pool_name, obj_name).decode('utf-8') def _put_rados_object(self, obj_name, data): """Put data as a byte string in a Ceph RADOS object.""" return self.ceph_vol_client.put_object( self.ganesha_rados_store_pool_name, obj_name, data.encode('utf-8')) def _delete_rados_object(self, obj_name): return self.ceph_vol_client.delete_object( self.ganesha_rados_store_pool_name, obj_name) def get_export_id(self, bump=True): """Get a new export id.""" # XXX overflowing the export id (16 bit unsigned integer) # is not handled if self.ganesha_rados_store_enable: # TODO(rraja): Ensure that the export counter object's update is # atomic, e.g., retry object update until the object version # between the 'get' and 'put' operations remains the same. 
export_id = int( self._get_rados_object(self.ganesha_rados_export_counter)) if not bump: return export_id export_id += 1 self._put_rados_object(self.ganesha_rados_export_counter, str(export_id)) return export_id else: if bump: bumpcode = 'update ganesha set value = value + 1;' else: bumpcode = '' out = self.execute( "sqlite3", self.ganesha_db_path, bumpcode + 'select * from ganesha where key = "exportid";', run_as_root=False)[0] match = re.search(r'\Aexportid\|(\d+)$', out) if not match: LOG.error("Invalid export database on " "Ganesha node %(tag)s: %(db)s.", {'tag': self.tag, 'db': self.ganesha_db_path}) raise exception.InvalidSqliteDB() return int(match.groups()[0]) def restart_service(self): """Restart the Ganesha service.""" self.execute("service", self.ganesha_service, "restart") def reset_exports(self): """Delete all export files.""" self.execute('sh', '-c', 'rm -f %s/*.conf' % pipes.quote(self.ganesha_export_dir)) self._mkindex() manila-10.0.0/manila/share/drivers/ganesha/conf/0000775000175000017500000000000013656750362021444 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/ganesha/conf/00-base-export-template.conf0000664000175000017500000000223413656750227026573 0ustar zuulzuul00000000000000# This is a Ganesha config template. # Syntactically, a valid Ganesha config # file, but some values in it are stubs. # Fields that have stub values are managed # by Manila; the stubs are of two kinds: # - @config: # value will be taken from Manila config # - @runtime: # value will be determined at runtime # User is free to set Ganesha parameters # which are not reserved to Manila by # stubbing. EXPORT { # Each EXPORT must have a unique Export_Id. Export_Id = @runtime; # The directory in the exported file system this export # is rooted on. Path = @runtime; # FSAL, Ganesha's module component FSAL { # FSAL name Name = @config; } # Path of export in the NFSv4 pseudo filesystem Pseudo = @runtime; # RPC security flavor, one of none, sys, krb5{,i,p} SecType = sys; # Alternative export identifier for NFSv3 Tag = @runtime; # Client specification CLIENT { # Comma separated list of clients Clients = @runtime; # Access type, one of RW, RO, MDONLY, MDONLY_RO, NONE Access_Type = RW; } # User id squashing, one of None, Root, All Squash = None; } manila-10.0.0/manila/share/drivers/service_instance.py0000664000175000017500000013513713656750227023021 0ustar zuulzuul00000000000000# Copyright (c) 2014 NetApp, Inc. # Copyright (c) 2015 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Module for managing nova instances for share drivers.""" import abc import os import socket import time import netaddr from oslo_config import cfg from oslo_log import log from oslo_utils import importutils from oslo_utils import netutils import six from manila.common import constants as const from manila import compute from manila import context from manila import exception from manila.i18n import _ from manila.network.linux import ip_lib from manila.network.neutron import api as neutron from manila import utils LOG = log.getLogger(__name__) NEUTRON_NAME = "neutron" share_servers_handling_mode_opts = [ cfg.StrOpt( "service_image_name", default="manila-service-image", help="Name of image in Glance, that will be used for service instance " "creation. Only used if driver_handles_share_servers=True."), cfg.StrOpt( "service_instance_name_template", default="manila_service_instance_%s", help="Name of service instance. " "Only used if driver_handles_share_servers=True."), cfg.StrOpt( "manila_service_keypair_name", default="manila-service", help="Keypair name that will be created and used for service " "instances. Only used if driver_handles_share_servers=True."), cfg.StrOpt( "path_to_public_key", default="~/.ssh/id_rsa.pub", help="Path to hosts public key. " "Only used if driver_handles_share_servers=True."), cfg.StrOpt( "service_instance_security_group", default="manila-service", help="Security group name, that will be used for " "service instance creation. " "Only used if driver_handles_share_servers=True."), cfg.StrOpt( "service_instance_flavor_id", default="100", help="ID of flavor, that will be used for service instance " "creation. Only used if driver_handles_share_servers=True."), cfg.StrOpt( "service_network_name", default="manila_service_network", help="Name of manila service network. Used only with Neutron. " "Only used if driver_handles_share_servers=True."), cfg.StrOpt( "service_network_cidr", default="10.254.0.0/16", help="CIDR of manila service network. Used only with Neutron and " "if driver_handles_share_servers=True."), cfg.IntOpt( "service_network_division_mask", default=28, help="This mask is used for dividing service network into " "subnets, IP capacity of subnet with this mask directly " "defines possible amount of created service VMs " "per tenant's subnet. Used only with Neutron " "and if driver_handles_share_servers=True."), cfg.StrOpt( "interface_driver", default="manila.network.linux.interface.OVSInterfaceDriver", help="Module path to the Virtual Interface (VIF) driver class. This " "option is used only by drivers operating in " "`driver_handles_share_servers=True` mode that provision " "OpenStack compute instances as share servers. This option is " "only supported with Neutron networking. " "Drivers provided in tree work with Linux Bridge " "(manila.network.linux.interface.BridgeInterfaceDriver) and OVS " "(manila.network.linux.interface.OVSInterfaceDriver). If the " "manila-share service is running on a host that is connected to " "the administrator network, a no-op driver " "(manila.network.linux.interface.NoopInterfaceDriver) may " "be used."), cfg.BoolOpt( "connect_share_server_to_tenant_network", default=False, help="Attach share server directly to share network. 
" "Used only with Neutron and " "if driver_handles_share_servers=True."), cfg.StrOpt( "admin_network_id", help="ID of neutron network used to communicate with admin network," " to create additional admin export locations on."), cfg.StrOpt( "admin_subnet_id", help="ID of neutron subnet used to communicate with admin network," " to create additional admin export locations on. " "Related to 'admin_network_id'."), ] no_share_servers_handling_mode_opts = [ cfg.StrOpt( "service_instance_name_or_id", help="Name or ID of service instance in Nova to use for share " "exports. Used only when share servers handling is disabled."), cfg.HostAddressOpt( "service_net_name_or_ip", help="Can be either name of network that is used by service " "instance within Nova to get IP address or IP address itself " "(either IPv4 or IPv6) for managing shares there. " "Used only when share servers handling is disabled."), cfg.HostAddressOpt( "tenant_net_name_or_ip", help="Can be either name of network that is used by service " "instance within Nova to get IP address or IP address itself " "(either IPv4 or IPv6) for exporting shares. " "Used only when share servers handling is disabled."), ] common_opts = [ cfg.StrOpt( "service_instance_user", help="User in service instance that will be used for authentication."), cfg.StrOpt( "service_instance_password", secret=True, help="Password for service instance user."), cfg.StrOpt( "path_to_private_key", help="Path to host's private key."), cfg.IntOpt( "max_time_to_build_instance", default=300, help="Maximum time in seconds to wait for creating service instance."), cfg.BoolOpt( "limit_ssh_access", default=False, help="Block SSH connection to the service instance from other " "networks than service network."), ] CONF = cfg.CONF class ServiceInstanceManager(object): """Manages nova instances for various share drivers. This class provides following external methods: 1. set_up_service_instance: creates instance and sets up share infrastructure. 2. ensure_service_instance: ensure service instance is available. 3. delete_service_instance: removes service instance and network infrastructure. """ _INSTANCE_CONNECTION_PROTO = "SSH" def get_config_option(self, key): """Returns value of config option. :param key: key of config' option. :returns: str -- value of config's option. first priority is driver's config, second priority is global config. """ if self.driver_config: return self.driver_config.safe_get(key) return CONF.get(key) def _get_network_helper(self): # Historically, there were multiple types of network helper, # but currently the only network helper type is Neutron. 
return NeutronNetworkHelper(self) def __init__(self, driver_config=None): super(ServiceInstanceManager, self).__init__() self.driver_config = driver_config if self.driver_config: self.driver_config.append_config_values(common_opts) if self.get_config_option("driver_handles_share_servers"): self.driver_config.append_config_values( share_servers_handling_mode_opts) else: self.driver_config.append_config_values( no_share_servers_handling_mode_opts) else: CONF.register_opts(common_opts) if self.get_config_option("driver_handles_share_servers"): CONF.register_opts(share_servers_handling_mode_opts) else: CONF.register_opts(no_share_servers_handling_mode_opts) if not self.get_config_option("service_instance_user"): raise exception.ServiceInstanceException( _('Service instance user is not specified.')) self.admin_context = context.get_admin_context() self._execute = utils.execute self.compute_api = compute.API() self.path_to_private_key = self.get_config_option( "path_to_private_key") self.max_time_to_build_instance = self.get_config_option( "max_time_to_build_instance") self.availability_zone = self.get_config_option( 'backend_availability_zone') or CONF.storage_availability_zone if self.get_config_option("driver_handles_share_servers"): self.path_to_public_key = self.get_config_option( "path_to_public_key") self._network_helper = None @property @utils.synchronized("instantiate_network_helper") def network_helper(self): if not self._network_helper: self._network_helper = self._get_network_helper() self._network_helper.setup_connectivity_with_service_instances() return self._network_helper def get_common_server(self): data = { 'public_address': None, 'private_address': None, 'service_net_name_or_ip': self.get_config_option( 'service_net_name_or_ip'), 'tenant_net_name_or_ip': self.get_config_option( 'tenant_net_name_or_ip'), } data['instance'] = self.compute_api.server_get_by_name_or_id( self.admin_context, self.get_config_option('service_instance_name_or_id')) if netutils.is_valid_ip(data['service_net_name_or_ip']): data['private_address'] = [data['service_net_name_or_ip']] else: data['private_address'] = self._get_addresses_by_network_name( data['service_net_name_or_ip'], data['instance']) if netutils.is_valid_ip(data['tenant_net_name_or_ip']): data['public_address'] = [data['tenant_net_name_or_ip']] else: data['public_address'] = self._get_addresses_by_network_name( data['tenant_net_name_or_ip'], data['instance']) if not (data['public_address'] and data['private_address']): raise exception.ManilaException( "Can not find one of net addresses for service instance. " "Instance: %(instance)s, " "private_address: %(private_address)s, " "public_address: %(public_address)s." 
% data) share_server = { 'username': self.get_config_option('service_instance_user'), 'password': self.get_config_option('service_instance_password'), 'pk_path': self.path_to_private_key, 'instance_id': data['instance']['id'], } for key in ('private_address', 'public_address'): data[key + '_first'] = None for address in data[key]: if netutils.is_valid_ip(address): data[key + '_first'] = address break share_server['ip'] = data['private_address_first'] share_server['public_address'] = data['public_address_first'] return {'backend_details': share_server} def _get_addresses_by_network_name(self, net_name, server): net_ips = [] if 'networks' in server and net_name in server['networks']: net_ips = server['networks'][net_name] elif 'addresses' in server and net_name in server['addresses']: net_ips = [addr['addr'] for addr in server['addresses'][net_name]] return net_ips def _get_service_instance_name(self, share_server_id): """Returns service vms name.""" if self.driver_config: # Make service instance name unique for multibackend installation name = "%s_%s" % (self.driver_config.config_group, share_server_id) else: name = share_server_id return self.get_config_option("service_instance_name_template") % name def _get_server_ip(self, server, net_name): """Returns service IP address of service instance.""" net_ips = self._get_addresses_by_network_name(net_name, server) if not net_ips: msg = _("Failed to get service instance IP address. " "Service network name is '%(net_name)s' " "and provided data are '%(data)s'.") msg = msg % {'net_name': net_name, 'data': six.text_type(server)} raise exception.ServiceInstanceException(msg) return net_ips[0] def _get_or_create_security_groups(self, context, name=None, description=None, allow_ssh_subnet=False): """Get or create security group for service_instance. :param context: context, that should be used :param name: this is used for selection/creation of sec.group :param description: this is used on sec.group creation step only :param allow_ssh_subnet: subnet details to allow ssh connection from, if not supplied ssh will be allowed from any host :returns: SecurityGroup -- security group instance from Nova :raises: exception.ServiceInstanceException. """ sgs = [] # Common security group name = name or self.get_config_option( "service_instance_security_group") if not name: LOG.warning("Name for service instance security group is not " "provided. 
Skipping security group step.") return None if not description: description = ("This security group is intended " "to be used by share service.") sec_group_data = const.SERVICE_INSTANCE_SECGROUP_DATA if not allow_ssh_subnet: sec_group_data += const.SSH_PORTS sgs.append(self._get_or_create_security_group(name, description, sec_group_data)) if allow_ssh_subnet: if "cidr" not in allow_ssh_subnet or 'id' not in allow_ssh_subnet: raise exception.ManilaException( "Unable to limit SSH access") ssh_sg_name = "manila-service-subnet-{}".format( allow_ssh_subnet["id"]) sgs.append(self._get_or_create_security_group( ssh_sg_name, description, const.SSH_PORTS, allow_ssh_subnet["cidr"])) return sgs @utils.synchronized( "service_instance_get_or_create_security_group", external=True) def _get_or_create_security_group(self, name, description, sec_group_data, cidr="0.0.0.0/0"): s_groups = self.network_helper.neutron_api.security_group_list({ "name": name, })['security_groups'] s_groups = [s for s in s_groups if s['name'] == name] if not s_groups: LOG.debug("Creating security group with name '%s'.", name) sg = self.network_helper.neutron_api.security_group_create( name, description)['security_group'] for protocol, ports in sec_group_data: self.network_helper.neutron_api.security_group_rule_create( parent_group_id=sg['id'], ip_protocol=protocol, from_port=ports[0], to_port=ports[1], cidr=cidr, ) elif len(s_groups) > 1: msg = _("Ambiguous security_groups.") raise exception.ServiceInstanceException(msg) else: sg = s_groups[0] return sg def ensure_service_instance(self, context, server): """Ensures that server exists and active.""" if 'instance_id' not in server: LOG.warning("Unable to check server existence since " "'instance_id' key is not set in share server " "backend details.") return False try: inst = self.compute_api.server_get(self.admin_context, server['instance_id']) except exception.InstanceNotFound: LOG.warning("Service instance %s does not exist.", server['instance_id']) return False if inst['status'] == 'ACTIVE': return self._check_server_availability(server) return False def _delete_server(self, context, server_id): """Deletes the server.""" try: self.compute_api.server_get(context, server_id) except exception.InstanceNotFound: LOG.debug("Service instance '%s' was not found. " "Nothing to delete, skipping.", server_id) return self.compute_api.server_delete(context, server_id) t = time.time() while time.time() - t < self.max_time_to_build_instance: try: inst = self.compute_api.server_get(context, server_id) if inst.get("status").lower() == "soft_deleted": LOG.debug("Service instance '%s' was soft-deleted " "successfully.", server_id) break except exception.InstanceNotFound: LOG.debug("Service instance '%s' was deleted " "successfully.", server_id) break time.sleep(2) else: raise exception.ServiceInstanceException( _("Instance '%(id)s' has not been deleted in %(s)ss. " "Giving up.") % { 'id': server_id, 's': self.max_time_to_build_instance}) def set_up_service_instance(self, context, network_info): """Finds or creates and sets up service vm. 
:param context: defines context, that should be used :param network_info: network info for getting allocations :returns: dict with service instance details :raises: exception.ServiceInstanceException """ instance_name = network_info['server_id'] server = self._create_service_instance( context, instance_name, network_info) instance_details = self._get_new_instance_details(server) if not self._check_server_availability(instance_details): e = exception.ServiceInstanceException( _('%(conn_proto)s connection has not been ' 'established to %(server)s in %(time)ss. Giving up.') % { 'conn_proto': self._INSTANCE_CONNECTION_PROTO, 'server': server['ip'], 'time': self.max_time_to_build_instance}) e.detail_data = {'server_details': instance_details} raise e return instance_details def _get_new_instance_details(self, server): instance_details = { 'instance_id': server['id'], 'ip': server['ip'], 'pk_path': server.get('pk_path'), 'subnet_id': server.get('subnet_id'), 'password': self.get_config_option('service_instance_password'), 'username': self.get_config_option('service_instance_user'), 'public_address': server['public_address'], } if server.get('admin_ip'): instance_details['admin_ip'] = server['admin_ip'] if server.get('router_id'): instance_details['router_id'] = server['router_id'] if server.get('service_port_id'): instance_details['service_port_id'] = server['service_port_id'] if server.get('public_port_id'): instance_details['public_port_id'] = server['public_port_id'] if server.get('admin_port_id'): instance_details['admin_port_id'] = server['admin_port_id'] for key in ('password', 'pk_path', 'subnet_id'): if not instance_details[key]: instance_details.pop(key) return instance_details @utils.synchronized("service_instance_get_key", external=True) def _get_key(self, context): """Get ssh key. :param context: defines context, that should be used :returns: tuple with keypair name and path to private key. """ if not (self.path_to_public_key and self.path_to_private_key): return (None, None) path_to_public_key = os.path.expanduser(self.path_to_public_key) path_to_private_key = os.path.expanduser(self.path_to_private_key) if (not os.path.exists(path_to_public_key) or not os.path.exists(path_to_private_key)): return (None, None) keypair_name = self.get_config_option("manila_service_keypair_name") keypairs = [k for k in self.compute_api.keypair_list(context) if k.name == keypair_name] if len(keypairs) > 1: raise exception.ServiceInstanceException(_('Ambiguous keypairs.')) public_key, __ = self._execute('cat', path_to_public_key) if not keypairs: keypair = self.compute_api.keypair_import( context, keypair_name, public_key) else: keypair = keypairs[0] if keypair.public_key != public_key: LOG.debug('Public key differs from existing keypair. 
' 'Creating new keypair.') self.compute_api.keypair_delete(context, keypair.id) keypair = self.compute_api.keypair_import( context, keypair_name, public_key) return keypair.name, path_to_private_key def _get_service_image(self, context): """Returns ID of service image for service vm creating.""" service_image_name = self.get_config_option("service_image_name") image = self.compute_api.image_get(context, service_image_name) if image.status != 'active': raise exception.ServiceInstanceException( _("Image with name '%s' is not in 'active' state.") % service_image_name) return image def _create_service_instance(self, context, instance_name, network_info): """Creates service vm and sets up networking for it.""" service_image_id = self._get_service_image(context) key_name, key_path = self._get_key(context) if not (self.get_config_option("service_instance_password") or key_name): raise exception.ServiceInstanceException( _('Neither service instance password nor key are available.')) if not key_path: LOG.warning( 'No key path is available. May be non-existent key path is ' 'provided. Check path_to_private_key (current value ' '%(private_path)s) and path_to_public_key (current value ' '%(public_path)s) in manila configuration file.', dict( private_path=self.path_to_private_key, public_path=self.path_to_public_key)) network_data = self.network_helper.setup_network(network_info) fail_safe_data = dict( router_id=network_data.get('router_id'), subnet_id=network_data.get('subnet_id')) if network_data.get('service_port'): fail_safe_data['service_port_id'] = ( network_data['service_port']['id']) if network_data.get('public_port'): fail_safe_data['public_port_id'] = ( network_data['public_port']['id']) if network_data.get('admin_port'): fail_safe_data['admin_port_id'] = ( network_data['admin_port']['id']) try: create_kwargs = self._get_service_instance_create_kwargs() service_instance = self.compute_api.server_create( context, name=instance_name, image=service_image_id, flavor=self.get_config_option("service_instance_flavor_id"), key_name=key_name, nics=network_data['nics'], availability_zone=self.availability_zone, **create_kwargs) fail_safe_data['instance_id'] = service_instance['id'] service_instance = self.wait_for_instance_to_be_active( service_instance['id'], self.max_time_to_build_instance) if self.get_config_option("limit_ssh_access"): try: service_subnet = network_data['service_subnet'] except KeyError: LOG.error( "Unable to limit ssh access to instance id: '%s'!", fail_safe_data['instance_id']) raise exception.ManilaException( "Unable to limit SSH access - " "invalid service subnet details provided") else: service_subnet = False sec_groups = self._get_or_create_security_groups( context, allow_ssh_subnet=service_subnet) for sg in sec_groups: sg_id = sg['id'] LOG.debug( "Adding security group '%(sg)s' to server '%(si)s'.", dict(sg=sg_id, si=service_instance["id"])) self.compute_api.add_security_group_to_server( context, service_instance["id"], sg_id) ip = (network_data.get('service_port', network_data.get( 'admin_port'))['fixed_ips']) service_instance['ip'] = ip[0]['ip_address'] public_ip = (network_data.get('public_port', network_data.get( 'service_port'))['fixed_ips']) service_instance['public_address'] = public_ip[0]['ip_address'] except Exception as e: e.detail_data = {'server_details': fail_safe_data} raise service_instance.update(fail_safe_data) service_instance['pk_path'] = key_path for pair in [('router', 'router_id'), ('service_subnet', 'subnet_id')]: if pair[0] in network_data and 'id' in 
network_data[pair[0]]: service_instance[pair[1]] = network_data[pair[0]]['id'] admin_port = network_data.get('admin_port') if admin_port: try: service_instance['admin_ip'] = ( admin_port['fixed_ips'][0]['ip_address']) except Exception: msg = _("Admin port is being used but Admin IP was not found.") LOG.exception(msg) raise exception.AdminIPNotFound(reason=msg) return service_instance def _get_service_instance_create_kwargs(self): """Specify extra arguments used when creating the service instance. Classes inheriting the service instance manager can use this to easily pass extra arguments such as user data or metadata. """ return {} def _check_server_availability(self, instance_details): t = time.time() while time.time() - t < self.max_time_to_build_instance: LOG.debug('Checking server availability.') if not self._test_server_connection(instance_details): time.sleep(5) else: return True return False def _test_server_connection(self, server): try: socket.socket().connect((server['ip'], 22)) LOG.debug('Server %s is available via SSH.', server['ip']) return True except socket.error as e: LOG.debug(e) LOG.debug("Server %s is not available via SSH. Waiting...", server['ip']) return False def delete_service_instance(self, context, server_details): """Removes share infrastructure. Deletes service vm and subnet, associated to share network. """ instance_id = server_details.get("instance_id") self._delete_server(context, instance_id) self.network_helper.teardown_network(server_details) def wait_for_instance_to_be_active(self, instance_id, timeout): t = time.time() while time.time() - t < timeout: try: service_instance = self.compute_api.server_get( self.admin_context, instance_id) except exception.InstanceNotFound as e: LOG.debug(e) time.sleep(1) continue instance_status = service_instance['status'] # NOTE(vponomaryov): emptiness of 'networks' field checked as # workaround for nova/neutron bug #1210483. if (instance_status == 'ACTIVE' and service_instance.get('networks', {})): return service_instance elif service_instance['status'] == 'ERROR': break LOG.debug("Waiting for instance %(instance_id)s to be active. " "Current status: %(instance_status)s.", dict(instance_id=instance_id, instance_status=instance_status)) time.sleep(1) raise exception.ServiceInstanceException( _("Instance %(instance_id)s failed to reach active state " "in %(timeout)s seconds. 
" "Current status: %(instance_status)s.") % dict(instance_id=instance_id, timeout=timeout, instance_status=instance_status)) def reboot_server(self, server, soft_reboot=False): self.compute_api.server_reboot(self.admin_context, server['instance_id'], soft_reboot) @six.add_metaclass(abc.ABCMeta) class BaseNetworkhelper(object): @abc.abstractproperty def NAME(self): """Returns code name of network helper.""" @abc.abstractmethod def __init__(self, service_instance_manager): """Instantiates class and its attrs.""" @abc.abstractmethod def get_network_name(self, network_info): """Returns name of network for service instance.""" @abc.abstractmethod def setup_connectivity_with_service_instances(self): """Sets up connectivity between Manila host and service instances.""" @abc.abstractmethod def setup_network(self, network_info): """Sets up network for service instance.""" @abc.abstractmethod def teardown_network(self, server_details): """Teardowns network resources provided for service instance.""" class NeutronNetworkHelper(BaseNetworkhelper): def __init__(self, service_instance_manager): self.get_config_option = service_instance_manager.get_config_option self.vif_driver = importutils.import_class( self.get_config_option("interface_driver"))() if service_instance_manager.driver_config: self._network_config_group = ( service_instance_manager.driver_config.network_config_group or service_instance_manager.driver_config.config_group) else: self._network_config_group = None self.use_admin_port = False self.use_service_network = True self._neutron_api = None self._service_network_id = None self.connect_share_server_to_tenant_network = ( self.get_config_option('connect_share_server_to_tenant_network')) self.admin_network_id = self.get_config_option('admin_network_id') self.admin_subnet_id = self.get_config_option('admin_subnet_id') if self.admin_network_id and self.admin_subnet_id: self.use_admin_port = True if self.use_admin_port and self.connect_share_server_to_tenant_network: self.use_service_network = False @property def NAME(self): return NEUTRON_NAME @property def admin_project_id(self): return self.neutron_api.admin_project_id @property @utils.synchronized("instantiate_neutron_api_neutron_net_helper") def neutron_api(self): if not self._neutron_api: self._neutron_api = neutron.API( config_group_name=self._network_config_group) return self._neutron_api @property @utils.synchronized("service_network_id_neutron_net_helper") def service_network_id(self): if not self._service_network_id: self._service_network_id = self._get_service_network_id() return self._service_network_id def get_network_name(self, network_info): """Returns name of network for service instance.""" net = self.neutron_api.get_network(network_info['neutron_net_id']) return net['name'] @utils.synchronized("service_instance_get_service_network", external=True) def _get_service_network_id(self): """Finds existing or creates new service network.""" service_network_name = self.get_config_option("service_network_name") networks = [] for network in self.neutron_api.get_all_admin_project_networks(): if network['name'] == service_network_name: networks.append(network) if len(networks) > 1: raise exception.ServiceInstanceException( _('Ambiguous service networks.')) elif not networks: return self.neutron_api.network_create( self.admin_project_id, service_network_name)['id'] else: return networks[0]['id'] @utils.synchronized( "service_instance_setup_and_teardown_network_for_instance", external=True) def teardown_network(self, server_details): 
subnet_id = server_details.get("subnet_id") router_id = server_details.get("router_id") service_port_id = server_details.get("service_port_id") public_port_id = server_details.get("public_port_id") admin_port_id = server_details.get("admin_port_id") for port_id in (service_port_id, public_port_id, admin_port_id): if port_id: try: self.neutron_api.delete_port(port_id) except exception.NetworkException as e: if e.kwargs.get('code') != 404: raise LOG.debug("Failed to delete port %(port_id)s with error: " "\n %(exc)s", {"port_id": port_id, "exc": e}) if router_id and subnet_id: ports = self.neutron_api.list_ports( fields=['fixed_ips', 'device_id', 'device_owner']) # NOTE(vponomaryov): iterate ports to get to know whether current # subnet is used or not. We will not remove it from router if it # is used. for port in ports: # NOTE(vponomaryov): if device_id is present, then we know that # this port is used. Also, if device owner is 'compute:*', then # we know that it is VM. We continue only if both are 'True'. if (port['device_id'] and port['device_owner'].startswith('compute:')): for fixed_ip in port['fixed_ips']: if fixed_ip['subnet_id'] == subnet_id: # NOTE(vponomaryov): There are other share servers # exist that use this subnet. So, do not remove it # from router. return try: # NOTE(vponomaryov): there is no other share servers or # some VMs that use this subnet. So, remove it from router. self.neutron_api.router_remove_interface( router_id, subnet_id) except exception.NetworkException as e: if e.kwargs['code'] != 404: raise LOG.debug('Subnet %(subnet_id)s is not attached to the ' 'router %(router_id)s.', {'subnet_id': subnet_id, 'router_id': router_id}) self.neutron_api.update_subnet(subnet_id, '') @utils.synchronized( "service_instance_setup_and_teardown_network_for_instance", external=True) def setup_network(self, network_info): neutron_net_id = network_info['neutron_net_id'] neutron_subnet_id = network_info['neutron_subnet_id'] network_data = dict() subnet_name = ('service_subnet_for_handling_of_share_server_for_' 'tenant_subnet_%s' % neutron_subnet_id) if self.use_service_network: network_data['service_subnet'] = self._get_service_subnet( subnet_name) if not network_data['service_subnet']: network_data['service_subnet'] = ( self.neutron_api.subnet_create( self.admin_project_id, self.service_network_id, subnet_name, self._get_cidr_for_subnet())) network_data['ports'] = [] if not self.connect_share_server_to_tenant_network: network_data['router'] = self._get_private_router( neutron_net_id, neutron_subnet_id) try: self.neutron_api.router_add_interface( network_data['router']['id'], network_data['service_subnet']['id']) except exception.NetworkException as e: if e.kwargs['code'] != 400: raise LOG.debug('Subnet %(subnet_id)s is already attached to the ' 'router %(router_id)s.', {'subnet_id': network_data['service_subnet']['id'], 'router_id': network_data['router']['id']}) else: network_data['public_port'] = self.neutron_api.create_port( self.admin_project_id, neutron_net_id, subnet_id=neutron_subnet_id, device_owner='manila') network_data['ports'].append(network_data['public_port']) if self.use_service_network: network_data['service_port'] = self.neutron_api.create_port( self.admin_project_id, self.service_network_id, subnet_id=network_data['service_subnet']['id'], device_owner='manila') network_data['ports'].append(network_data['service_port']) if self.use_admin_port: network_data['admin_port'] = self.neutron_api.create_port( self.admin_project_id, self.admin_network_id, 
subnet_id=self.admin_subnet_id, device_owner='manila') network_data['ports'].append(network_data['admin_port']) try: self.setup_connectivity_with_service_instances() except Exception: for port in network_data['ports']: self.neutron_api.delete_port(port['id']) raise network_data['nics'] = [ {'port-id': port['id']} for port in network_data['ports']] public_ip = network_data.get( 'public_port', network_data.get('service_port')) network_data['ip_address'] = public_ip['fixed_ips'][0]['ip_address'] return network_data def _get_cidr_for_subnet(self): """Returns not used cidr for service subnet creating.""" subnets = self._get_all_service_subnets() used_cidrs = set(subnet['cidr'] for subnet in subnets) serv_cidr = netaddr.IPNetwork( self.get_config_option("service_network_cidr")) division_mask = self.get_config_option("service_network_division_mask") for subnet in serv_cidr.subnet(division_mask): cidr = six.text_type(subnet.cidr) if cidr not in used_cidrs: return cidr else: raise exception.ServiceInstanceException(_('No available cidrs.')) def setup_connectivity_with_service_instances(self): """Sets up connectivity with service instances. Creates host port in service network and/or admin network, creating and setting up required network devices. """ if self.use_service_network: LOG.debug("Plugging service instance into service network %s.", self.service_network_id) port = self._get_service_port( self.service_network_id, None, 'manila-share') port = self._add_fixed_ips_to_service_port(port) interface_name = self.vif_driver.get_device_name(port) device = ip_lib.IPDevice(interface_name) self._plug_interface_in_host(interface_name, device, port) if self.use_admin_port: LOG.debug("Plugging service instance into admin network %s.", self.admin_network_id) port = self._get_service_port( self.admin_network_id, self.admin_subnet_id, 'manila-admin-share') interface_name = self.vif_driver.get_device_name(port) device = ip_lib.IPDevice(interface_name) self._plug_interface_in_host(interface_name, device, port, clear_outdated_routes=True) @utils.synchronized("service_instance_plug_interface_in_host", external=True) def _plug_interface_in_host(self, interface_name, device, port, clear_outdated_routes=False): LOG.debug("Plug interface into host - interface_name: %s, " "device: %s, port: %s", interface_name, device, port) self.vif_driver.plug(interface_name, port['id'], port['mac_address']) cidrs_to_clear = [] ip_cidrs = [] for fixed_ip in port['fixed_ips']: subnet = self.neutron_api.get_subnet(fixed_ip['subnet_id']) if clear_outdated_routes: cidrs_to_clear.append(subnet['cidr']) net = netaddr.IPNetwork(subnet['cidr']) ip_cidr = '%s/%s' % (fixed_ip['ip_address'], net.prefixlen) ip_cidrs.append(ip_cidr) self.vif_driver.init_l3(interface_name, ip_cidrs, clear_cidrs=cidrs_to_clear) @utils.synchronized("service_instance_get_service_port", external=True) def _get_service_port(self, network_id, subnet_id, device_id): """Find or creates service neutron port. This port will be used for connectivity with service instances. """ host = socket.gethostname() search_opts = {'device_id': device_id, 'binding:host_id': host} ports = [port for port in self.neutron_api. list_ports(**search_opts)] if len(ports) > 1: raise exception.ServiceInstanceException( _('Error. 
Ambiguous service ports.')) elif not ports: port = self.neutron_api.create_port( self.admin_project_id, network_id, subnet_id=subnet_id, device_id=device_id, device_owner='manila:share', host_id=host, port_security_enabled=False) else: port = ports[0] return port @utils.synchronized( "service_instance_add_fixed_ips_to_service_port", external=True) def _add_fixed_ips_to_service_port(self, port): network = self.neutron_api.get_network(self.service_network_id) subnets = set(network['subnets']) port_fixed_ips = [] for fixed_ip in port['fixed_ips']: port_fixed_ips.append({'subnet_id': fixed_ip['subnet_id'], 'ip_address': fixed_ip['ip_address']}) if fixed_ip['subnet_id'] in subnets: subnets.remove(fixed_ip['subnet_id']) # If there are subnets here that means that # we need to add those to the port and call update. if subnets: port_fixed_ips.extend([dict(subnet_id=s) for s in subnets]) port = self.neutron_api.update_port_fixed_ips( port['id'], {'fixed_ips': port_fixed_ips}) return port @utils.synchronized("service_instance_get_private_router", external=True) def _get_private_router(self, neutron_net_id, neutron_subnet_id): """Returns router attached to private subnet gateway.""" private_subnet = self.neutron_api.get_subnet(neutron_subnet_id) if not private_subnet['gateway_ip']: raise exception.ServiceInstanceException( _('Subnet must have gateway.')) private_network_ports = [p for p in self.neutron_api.list_ports( network_id=neutron_net_id)] for p in private_network_ports: fixed_ip = p['fixed_ips'][0] if (fixed_ip['subnet_id'] == private_subnet['id'] and fixed_ip['ip_address'] == private_subnet['gateway_ip']): private_subnet_gateway_port = p break else: raise exception.ServiceInstanceException( _('Subnet gateway is not attached to the router.')) private_subnet_router = self.neutron_api.show_router( private_subnet_gateway_port['device_id']) return private_subnet_router @utils.synchronized("service_instance_get_service_subnet", external=True) def _get_service_subnet(self, subnet_name): all_service_subnets = self._get_all_service_subnets() service_subnets = [subnet for subnet in all_service_subnets if subnet['name'] == subnet_name] if len(service_subnets) == 1: return service_subnets[0] elif not service_subnets: unused_service_subnets = [subnet for subnet in all_service_subnets if subnet['name'] == ''] if unused_service_subnets: service_subnet = unused_service_subnets[0] self.neutron_api.update_subnet( service_subnet['id'], subnet_name) return service_subnet return None else: raise exception.ServiceInstanceException( _('Ambiguous service subnets.')) @utils.synchronized( "service_instance_get_all_service_subnets", external=True) def _get_all_service_subnets(self): service_network = self.neutron_api.get_network(self.service_network_id) subnets = [] for subnet_id in service_network['subnets']: subnets.append(self.neutron_api.get_subnet(subnet_id)) return subnets manila-10.0.0/manila/share/drivers/windows/0000775000175000017500000000000013656750362020603 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/windows/__init__.py0000664000175000017500000000000013656750227022702 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/windows/service_instance.py0000664000175000017500000003102213656750227024477 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import re from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log from manila import exception from manila.i18n import _ from manila.share.drivers import service_instance from manila.share.drivers.windows import windows_utils from manila.share.drivers.windows import winrm_helper CONF = cfg.CONF LOG = log.getLogger(__name__) windows_share_server_opts = [ cfg.StrOpt( "winrm_cert_pem_path", default="~/.ssl/cert.pem", help="Path to the x509 certificate used for accessing the service " "instance."), cfg.StrOpt( "winrm_cert_key_pem_path", default="~/.ssl/key.pem", help="Path to the x509 certificate key."), cfg.BoolOpt( "winrm_use_cert_based_auth", default=False, help="Use x509 certificates in order to authenticate to the " "service instance.") ] CONF = cfg.CONF CONF.register_opts(windows_share_server_opts) class WindowsServiceInstanceManager(service_instance.ServiceInstanceManager): """"Manages Windows Nova instances.""" _INSTANCE_CONNECTION_PROTO = "WinRM" _CBS_INIT_RUN_PLUGIN_AFTER_REBOOT = 2 _CBS_INIT_WINRM_PLUGIN = "ConfigWinRMListenerPlugin" _DEFAULT_MINIMUM_PASS_LENGTH = 6 def __init__(self, driver_config=None, remote_execute=None): super(WindowsServiceInstanceManager, self).__init__( driver_config=driver_config) driver_config.append_config_values(windows_share_server_opts) self._use_cert_auth = self.get_config_option( "winrm_use_cert_based_auth") self._cert_pem_path = self.get_config_option( "winrm_cert_pem_path") self._cert_key_pem_path = self.get_config_option( "winrm_cert_key_pem_path") self._check_auth_mode() self._remote_execute = (remote_execute or winrm_helper.WinRMHelper( configuration=driver_config).execute) self._windows_utils = windows_utils.WindowsUtils( remote_execute=self._remote_execute) def _check_auth_mode(self): if self._use_cert_auth: if not (os.path.exists(self._cert_pem_path) and os.path.exists(self._cert_key_pem_path)): msg = _("Certificate based authentication was configured " "but one or more certificates are missing.") raise exception.ServiceInstanceException(msg) LOG.debug("Using certificate based authentication for " "service instances.") else: instance_password = self.get_config_option( "service_instance_password") if not self._check_password_complexity(instance_password): msg = _("The configured service instance password does not " "match the minimum complexity requirements. " "The password must contain at least %s characters. 
" "Also, it must contain at least one digit, " "one lower case and one upper case character.") raise exception.ServiceInstanceException( msg % self._DEFAULT_MINIMUM_PASS_LENGTH) LOG.debug("Using password based authentication for " "service instances.") def _get_auth_info(self): auth_info = {'use_cert_auth': self._use_cert_auth} if self._use_cert_auth: auth_info.update(cert_pem_path=self._cert_pem_path, cert_key_pem_path=self._cert_key_pem_path) return auth_info def get_common_server(self): data = super(WindowsServiceInstanceManager, self).get_common_server() data['backend_details'].update(self._get_auth_info()) return data def _get_new_instance_details(self, server): instance_details = super(WindowsServiceInstanceManager, self)._get_new_instance_details(server) instance_details.update(self._get_auth_info()) return instance_details def _check_password_complexity(self, password): # Make sure that the Windows complexity requirements are met: # http://technet.microsoft.com/en-us/library/cc786468(v=ws.10).aspx if len(password) < self._DEFAULT_MINIMUM_PASS_LENGTH: return False for r in ("[a-z]", "[A-Z]", "[0-9]"): if not re.search(r, password): return False return True def _test_server_connection(self, server): try: self._remote_execute(server, "whoami", retry=False) LOG.debug("Service VM %s is available via WinRM", server['ip']) return True except Exception as ex: LOG.debug("Server %(ip)s is not available via WinRM. " "Exception: %(ex)s ", dict(ip=server['ip'], ex=ex)) return False def _get_service_instance_create_kwargs(self): create_kwargs = {} if self._use_cert_auth: # At the moment, we pass the x509 certificate via user data. # We'll use keypairs instead as soon as the nova client will # support x509 certificates. with open(self._cert_pem_path, 'r') as f: cert_pem_data = f.read() create_kwargs['user_data'] = cert_pem_data else: # The admin password has to be specified via instance metadata in # order to be passed to the instance via the metadata service or # configdrive. admin_pass = self.get_config_option("service_instance_password") create_kwargs['meta'] = {'admin_pass': admin_pass} return create_kwargs def set_up_service_instance(self, context, network_info): instance_details = super(WindowsServiceInstanceManager, self).set_up_service_instance(context, network_info) security_services = network_info['security_services'] security_service = self.get_valid_security_service(security_services) if security_service: self._setup_security_service(instance_details, security_service) instance_details['joined_domain'] = bool(security_service) return instance_details def _setup_security_service(self, server, security_service): domain = security_service['domain'] admin_username = security_service['user'] admin_password = security_service['password'] dns_ip = security_service['dns_ip'] self._windows_utils.set_dns_client_search_list(server, [domain]) if_index = self._windows_utils.get_interface_index_by_ip(server, server['ip']) self._windows_utils.set_dns_client_server_addresses(server, if_index, [dns_ip]) # Joining an AD domain will alter the WinRM Listener configuration. # Cloudbase-init is required to be running on the Windows service # instance, so we re-enable the plugin configuring the WinRM listener. # # TODO(lpetrut): add a config option so that we may rely on the AD # group policies taking care of the WinRM configuration. 
self._run_cloudbase_init_plugin_after_reboot( server, plugin_name=self._CBS_INIT_WINRM_PLUGIN) self._join_domain(server, domain, admin_username, admin_password) def _join_domain(self, server, domain, admin_username, admin_password): # As the WinRM configuration may be altered and existing connections # closed, we may not be able to retrieve the result of this operation. # Instead, we'll ensure that the instance actually joined the domain # after the reboot. try: self._windows_utils.join_domain(server, domain, admin_username, admin_password) except processutils.ProcessExecutionError: raise except Exception as exc: LOG.debug("Unexpected error while attempting to join domain " "%(domain)s. Verifying the result of the operation " "after instance reboot. Exception: %(exc)s", dict(domain=domain, exc=exc)) # We reboot the service instance using the Compute API so that # we can wait for it to become active. self.reboot_server(server, soft_reboot=True) self.wait_for_instance_to_be_active( server['instance_id'], timeout=self.max_time_to_build_instance) if not self._check_server_availability(server): raise exception.ServiceInstanceException( _('%(conn_proto)s connection has not been ' 'established to %(server)s in %(time)ss. Giving up.') % { 'conn_proto': self._INSTANCE_CONNECTION_PROTO, 'server': server['ip'], 'time': self.max_time_to_build_instance}) current_domain = self._windows_utils.get_current_domain(server) if current_domain != domain: err_msg = _("Failed to join domain %(requested_domain)s. " "Current domain: %(current_domain)s") raise exception.ServiceInstanceException( err_msg % dict(requested_domain=domain, current_domain=current_domain)) def get_valid_security_service(self, security_services): if not security_services: LOG.info("No security services provided.") elif len(security_services) > 1: LOG.warning("Multiple security services provided. Only one " "security service of type 'active_directory' " "is supported.") else: security_service = security_services[0] security_service_type = security_service['type'] if security_service_type == 'active_directory': return security_service else: LOG.warning("Only security services of type " "'active_directory' are supported. " "Retrieved security " "service type: %(sec_type)s.", {'sec_type': security_service_type}) return None def _run_cloudbase_init_plugin_after_reboot(self, server, plugin_name): cbs_init_reg_section = self._get_cbs_init_reg_section(server) plugin_key_path = "%(cbs_init_section)s\\%(instance_id)s\\Plugins" % { 'cbs_init_section': cbs_init_reg_section, 'instance_id': server['instance_id'] } self._windows_utils.set_win_reg_value( server, path=plugin_key_path, key=plugin_name, value=self._CBS_INIT_RUN_PLUGIN_AFTER_REBOOT) def _get_cbs_init_reg_section(self, server): base_path = 'hklm:\\SOFTWARE' cbs_section = 'Cloudbase Solutions\\Cloudbase-Init' for upper_section in ('', 'Wow6432Node'): cbs_init_section = self._windows_utils.normalize_path( os.path.join(base_path, upper_section, cbs_section)) try: self._windows_utils.get_win_reg_value( server, path=cbs_init_section) return cbs_init_section except processutils.ProcessExecutionError as ex: # The exit code will always be '1' in case of errors, so the # only way to determine the error type is checking stderr. 
if 'Cannot find path' in ex.stderr: continue else: raise raise exception.ServiceInstanceException( _("Could not retrieve Cloudbase Init registry section")) manila-10.0.0/manila/share/drivers/windows/windows_utils.py0000664000175000017500000002224613656750227024075 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re from oslo_log import log LOG = log.getLogger(__name__) class WindowsUtils(object): def __init__(self, remote_execute): self._remote_exec = remote_execute self._fsutil_total_space_regex = re.compile('of bytes *: ([0-9]*)') self._fsutil_free_space_regex = re.compile( 'of avail free bytes *: ([0-9]*)') def initialize_disk(self, server, disk_number): cmd = ["Initialize-Disk", "-Number", disk_number] self._remote_exec(server, cmd) def create_partition(self, server, disk_number): cmd = ["New-Partition", "-DiskNumber", disk_number, "-UseMaximumSize"] self._remote_exec(server, cmd) def format_partition(self, server, disk_number, partition_number): cmd = ("Get-Partition -DiskNumber %(disk_number)s " "-PartitionNumber %(partition_number)s | " "Format-Volume -FileSystem NTFS -Force -Confirm:$false" % { 'disk_number': disk_number, 'partition_number': partition_number, }) self._remote_exec(server, cmd) def add_access_path(self, server, mount_path, disk_number, partition_number): cmd = ["Add-PartitionAccessPath", "-DiskNumber", disk_number, "-PartitionNumber", partition_number, "-AccessPath", self.quote_string(mount_path)] self._remote_exec(server, cmd) def resize_partition(self, server, size_bytes, disk_number, partition_number): cmd = ['Resize-Partition', '-DiskNumber', disk_number, '-PartitionNumber', partition_number, '-Size', size_bytes] self._remote_exec(server, cmd) def get_disk_number_by_serial_number(self, server, serial_number): pattern = "%s*" % serial_number[:15] cmd = ("Get-Disk | " "Where-Object {$_.SerialNumber -like '%s'} | " "Select-Object -ExpandProperty Number" % pattern) (out, err) = self._remote_exec(server, cmd) return int(out) if (len(out) > 0) else None def get_disk_number_by_mount_path(self, server, mount_path): cmd = ('Get-Partition | ' 'Where-Object {$_.AccessPaths -contains "%s"} | ' 'Select-Object -ExpandProperty DiskNumber' % (mount_path + "\\")) (out, err) = self._remote_exec(server, cmd) return int(out) if (len(out) > 0) else None def get_volume_path_by_mount_path(self, server, mount_path): cmd = ('Get-Partition | ' 'Where-Object {$_.AccessPaths -contains "%s"} | ' 'Get-Volume | ' 'Select-Object -ExpandProperty Path' % (mount_path + "\\")) (out, err) = self._remote_exec(server, cmd) return out.strip() def get_disk_space_by_path(self, server, mount_path): cmd = ["fsutil", "volume", "diskfree", self.quote_string(mount_path)] (out, err) = self._remote_exec(server, cmd) total_bytes = int(self._fsutil_total_space_regex.findall(out)[0]) free_bytes = int(self._fsutil_free_space_regex.findall(out)[0]) return total_bytes, free_bytes def get_partition_maximum_size(self, server, 
disk_number, partition_number): cmd = ('Get-PartitionSupportedSize -DiskNumber %(disk_number)s ' '-PartitionNumber %(partition_number)s | ' 'Select-Object -ExpandProperty SizeMax' % dict(disk_number=disk_number, partition_number=partition_number)) (out, err) = self._remote_exec(server, cmd) max_bytes = int(out) return max_bytes def set_disk_online_status(self, server, disk_number, online=True): is_offline = int(not online) cmd = ["Set-Disk", "-Number", disk_number, "-IsOffline", is_offline] self._remote_exec(server, cmd) def set_disk_readonly_status(self, server, disk_number, readonly=False): cmd = ["Set-Disk", "-Number", disk_number, "-IsReadOnly", int(readonly)] self._remote_exec(server, cmd) def update_disk(self, server, disk_number): """Updates cached disk information.""" cmd = ["Update-Disk", disk_number] self._remote_exec(server, cmd) def join_domain(self, server, domain, admin_username, admin_password): # NOTE(lpetrut): An instance reboot is needed but this will be # performed using Nova so that the instance state can be # retrieved easier. LOG.info("Joining server %(ip)s to Active Directory " "domain %(domain)s", dict(ip=server['ip'], domain=domain)) cmds = [ ('$password = "%s" | ' 'ConvertTo-SecureString -asPlainText -Force' % admin_password), ('$credential = ' 'New-Object System.Management.Automation.PSCredential(' '"%s", $password)' % admin_username), ('Add-Computer -DomainName "%s" -Credential $credential' % domain)] cmd = ";".join(cmds) self._remote_exec(server, cmd) def unjoin_domain(self, server, admin_username, admin_password, reboot=False): cmds = [ ('$password = "%s" | ' 'ConvertTo-SecureString -asPlainText -Force' % admin_password), ('$credential = ' 'New-Object System.Management.Automation.PSCredential(' '"%s", $password)' % admin_username), ('Remove-Computer -UnjoinDomaincredential $credential ' '-Passthru -Verbose -Force')] cmd = ";".join(cmds) self._remote_exec(server, cmd) def get_current_domain(self, server): cmd = "(Get-WmiObject Win32_ComputerSystem).Domain" (out, err) = self._remote_exec(server, cmd) return out.strip() def ensure_directory_exists(self, server, path): cmd = ["New-Item", "-ItemType", "Directory", "-Force", "-Path", self.quote_string(path)] self._remote_exec(server, cmd) def remove(self, server, path, force=True, recurse=False, is_junction=False): if self.path_exists(server, path): if is_junction: cmd = ('[System.IO.Directory]::Delete(' '%(path)s, %(recurse)d)' % dict(path=self.quote_string(path), recurse=recurse)) else: cmd = ["Remove-Item", "-Confirm:$false", "-Path", self.quote_string(path)] if force: cmd += ['-Force'] if recurse: cmd += ['-Recurse'] self._remote_exec(server, cmd) else: LOG.debug("Skipping deleting path %s as it does " "not exist.", path) def path_exists(self, server, path): cmd = ["Test-Path", path] (out, _) = self._remote_exec(server, cmd) return out.strip() == "True" def normalize_path(self, path): return path.replace('/', '\\') def get_interface_index_by_ip(self, server, ip): cmd = ('Get-NetIPAddress | ' 'Where-Object {$_.IPAddress -eq "%(ip)s"} | ' 'Select-Object -ExpandProperty InterfaceIndex' % dict(ip=ip)) (out, err) = self._remote_exec(server, cmd) if_index = int(out) return if_index def set_dns_client_search_list(self, server, search_list): src_list = ",".join(["'%s'" % domain for domain in search_list]) cmd = ["Set-DnsClientGlobalSetting", "-SuffixSearchList", "@(%s)" % src_list] self._remote_exec(server, cmd) def set_dns_client_server_addresses(self, server, if_index, dns_servers): dns_sv_list = ",".join(["'%s'" % 
dns_sv for dns_sv in dns_servers]) cmd = ["Set-DnsClientServerAddress", "-InterfaceIndex", if_index, "-ServerAddresses", "(%s)" % dns_sv_list] self._remote_exec(server, cmd) def set_win_reg_value(self, server, path, key, value): cmd = ['Set-ItemProperty', '-Path', self.quote_string(path), '-Name', key, '-Value', value] self._remote_exec(server, cmd) def get_win_reg_value(self, server, path, name=None): cmd = "Get-ItemProperty -Path %s" % self.quote_string(path) if name: cmd += " | Select-Object -ExpandProperty %s" % name return self._remote_exec(server, cmd, retry=False)[0] def quote_string(self, string): return '"%s"' % string manila-10.0.0/manila/share/drivers/windows/windows_smb_helper.py0000664000175000017500000002563613656750227025063 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import os from oslo_log import log from manila.common import constants from manila import exception from manila.share.drivers import helpers from manila.share.drivers.windows import windows_utils LOG = log.getLogger(__name__) class WindowsSMBHelper(helpers.CIFSHelperBase): _SHARE_ACCESS_RIGHT_MAP = { constants.ACCESS_LEVEL_RW: "Change", constants.ACCESS_LEVEL_RO: "Read"} _NULL_SID = "S-1-0-0" _WIN_ACL_ALLOW = 0 _WIN_ACL_DENY = 1 _WIN_ACCESS_RIGHT_FULL = 0 _WIN_ACCESS_RIGHT_CHANGE = 1 _WIN_ACCESS_RIGHT_READ = 2 _WIN_ACCESS_RIGHT_CUSTOM = 3 _ACCESS_LEVEL_CUSTOM = 'custom' _WIN_ACL_MAP = { _WIN_ACCESS_RIGHT_CHANGE: constants.ACCESS_LEVEL_RW, _WIN_ACCESS_RIGHT_FULL: constants.ACCESS_LEVEL_RW, _WIN_ACCESS_RIGHT_READ: constants.ACCESS_LEVEL_RO, _WIN_ACCESS_RIGHT_CUSTOM: _ACCESS_LEVEL_CUSTOM, } _SUPPORTED_ACCESS_LEVELS = (constants.ACCESS_LEVEL_RO, constants.ACCESS_LEVEL_RW) _SUPPORTED_ACCESS_TYPES = ('user', ) def __init__(self, remote_execute, configuration): self._remote_exec = remote_execute self.configuration = configuration self._windows_utils = windows_utils.WindowsUtils( remote_execute=remote_execute) def init_helper(self, server): self._remote_exec(server, "Get-SmbShare") def create_exports(self, server, share_name, recreate=False): export_location = '\\\\%s\\%s' % (server['public_address'], share_name) if not self._share_exists(server, share_name): share_path = self._windows_utils.normalize_path( os.path.join(self.configuration.share_mount_path, share_name)) # If no access rules are requested, 'Everyone' will have read # access, by default. We set read access for the 'NULL SID' in # order to avoid this. 
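            # Roughly, the remote PowerShell invocation built below looks
            # like (share name and mount path are illustrative):
            #
            #     New-SmbShare -Name share-42 -Path C:\shares\share-42
            #                  -ReadAccess *S-1-0-0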
cmd = ['New-SmbShare', '-Name', share_name, '-Path', share_path, '-ReadAccess', "*%s" % self._NULL_SID] self._remote_exec(server, cmd) else: LOG.info("Skipping creating export %s as it already exists.", share_name) return self.get_exports_for_share(server, export_location) def remove_exports(self, server, share_name): if self._share_exists(server, share_name): cmd = ['Remove-SmbShare', '-Name', share_name, "-Force"] self._remote_exec(server, cmd) else: LOG.debug("Skipping removing export %s as it does not exist.", share_name) def _get_volume_path_by_share_name(self, server, share_name): share_path = self._get_share_path_by_name(server, share_name) volume_path = self._windows_utils.get_volume_path_by_mount_path( server, share_path) return volume_path def _get_acls(self, server, share_name): cmd = ('Get-SmbShareAccess -Name %(share_name)s | ' 'Select-Object @("Name", "AccountName", ' '"AccessControlType", "AccessRight") | ' 'ConvertTo-JSON -Compress' % {'share_name': share_name}) (out, err) = self._remote_exec(server, cmd) if not out.strip(): return [] raw_acls = json.loads(out) if isinstance(raw_acls, dict): return [raw_acls] return raw_acls def get_access_rules(self, server, share_name): raw_acls = self._get_acls(server, share_name) acls = [] for raw_acl in raw_acls: access_to = raw_acl['AccountName'] access_right = raw_acl['AccessRight'] access_level = self._WIN_ACL_MAP[access_right] access_allow = raw_acl["AccessControlType"] == self._WIN_ACL_ALLOW if not access_allow: if access_to.lower() == 'everyone' and len(raw_acls) == 1: LOG.debug("No access rules are set yet for share %s", share_name) else: LOG.warning( "Found explicit deny ACE rule that was not " "created by Manila and will be ignored: %s", raw_acl) continue if access_level == self._ACCESS_LEVEL_CUSTOM: LOG.warning( "Found 'custom' ACE rule that will be ignored: %s", raw_acl) continue elif access_right == self._WIN_ACCESS_RIGHT_FULL: LOG.warning( "Account '%(access_to)s' was given full access " "right on share %(share_name)s. 
Manila only " "grants 'change' access.", {'access_to': access_to, 'share_name': share_name}) acl = { 'access_to': access_to, 'access_level': access_level, 'access_type': 'user', } acls.append(acl) return acls def _grant_share_access(self, server, share_name, access_level, access_to): access_right = self._SHARE_ACCESS_RIGHT_MAP[access_level] cmd = ["Grant-SmbShareAccess", "-Name", share_name, "-AccessRight", access_right, "-AccountName", "'%s'" % access_to, "-Force"] self._remote_exec(server, cmd) self._refresh_acl(server, share_name) LOG.info("Granted %(access_level)s access to '%(access_to)s' " "on share %(share_name)s", {'access_level': access_level, 'access_to': access_to, 'share_name': share_name}) def _refresh_acl(self, server, share_name): cmd = ['Set-SmbPathAcl', '-ShareName', share_name] self._remote_exec(server, cmd) def _revoke_share_access(self, server, share_name, access_to): cmd = ['Revoke-SmbShareAccess', '-Name', share_name, '-AccountName', '"%s"' % access_to, '-Force'] self._remote_exec(server, cmd) self._refresh_acl(server, share_name) LOG.info("Revoked access to '%(access_to)s' " "on share %(share_name)s", {'access_to': access_to, 'share_name': share_name}) def update_access(self, server, share_name, access_rules, add_rules, delete_rules): self.validate_access_rules( access_rules + add_rules, self._SUPPORTED_ACCESS_TYPES, self._SUPPORTED_ACCESS_LEVELS) if not (add_rules or delete_rules): existing_rules = self.get_access_rules(server, share_name) add_rules, delete_rules = self._get_rule_updates( existing_rules=existing_rules, requested_rules=access_rules) LOG.debug(("Missing rules: %(add_rules)s, " "superfluous rules: %(delete_rules)s"), {'add_rules': add_rules, 'delete_rules': delete_rules}) # Some rules may have changed, so we'll # treat the deleted rules first. for deleted_rule in delete_rules: try: self.validate_access_rules( [deleted_rule], self._SUPPORTED_ACCESS_TYPES, self._SUPPORTED_ACCESS_LEVELS) except (exception.InvalidShareAccess, exception.InvalidShareAccessLevel): # This check will allow invalid rules to be deleted. LOG.warning( "Unsupported access level %(level)s or access type " "%(type)s, skipping removal of access rule to " "%(to)s.", {'level': deleted_rule['access_level'], 'type': deleted_rule['access_type'], 'to': deleted_rule['access_to']}) continue self._revoke_share_access(server, share_name, deleted_rule['access_to']) for added_rule in add_rules: self._grant_share_access(server, share_name, added_rule['access_level'], added_rule['access_to']) def _subtract_access_rules(self, access_rules, subtracted_rules): # Account names are case insensitive on Windows. 
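# For that reason, both rule lists are normalized below to lowercase
# 'access_to' values before being compared, so that, for example,
# 'Administrator' and 'administrator' are treated as the same account.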
filter_rules = lambda rules: [ # noqa: E731 {'access_to': access_rule['access_to'].lower(), 'access_level': access_rule['access_level'], 'access_type': access_rule['access_type']} for access_rule in rules] return [rule for rule in filter_rules(access_rules) if rule not in filter_rules(subtracted_rules)] def _get_rule_updates(self, existing_rules, requested_rules): added_rules = self._subtract_access_rules(requested_rules, existing_rules) deleted_rules = self._subtract_access_rules(existing_rules, requested_rules) return added_rules, deleted_rules def _get_share_name(self, export_location): return self._windows_utils.normalize_path( export_location).split('\\')[-1] def _get_export_location_template(self, old_export_location): share_name = self._get_share_name(old_export_location) return '\\\\%s' + ('\\%s' % share_name) def _get_share_path_by_name(self, server, share_name, ignore_missing=False): cmd = ('Get-SmbShare -Name %s | ' 'Select-Object -ExpandProperty Path' % share_name) check_exit_code = not ignore_missing (share_path, err) = self._remote_exec(server, cmd, check_exit_code=check_exit_code) return share_path.strip() if share_path else None def get_share_path_by_export_location(self, server, export_location): share_name = self._get_share_name(export_location) return self._get_share_path_by_name(server, share_name) def _share_exists(self, server, share_name): share_path = self._get_share_path_by_name(server, share_name, ignore_missing=True) return bool(share_path) manila-10.0.0/manila/share/drivers/windows/windows_smb_driver.py0000664000175000017500000001664213656750227025074 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from oslo_log import log from oslo_utils import units from manila.share import driver as base_driver from manila.share.drivers import generic from manila.share.drivers.windows import service_instance from manila.share.drivers.windows import windows_smb_helper from manila.share.drivers.windows import windows_utils from manila.share.drivers.windows import winrm_helper LOG = log.getLogger(__name__) class WindowsSMBDriver(generic.GenericShareDriver): # NOTE(lpetrut): The first partition will be reserved by the OS. 
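# Because of that, the driver places share data on the second partition of
# the attached volume, which is what the constant below refers to.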
_DEFAULT_SHARE_PARTITION = 2 def __init__(self, *args, **kwargs): super(WindowsSMBDriver, self).__init__(*args, **kwargs) self._remote_execute = winrm_helper.WinRMHelper( configuration=self.configuration).execute self._windows_utils = windows_utils.WindowsUtils( remote_execute=self._remote_execute) self._smb_helper = windows_smb_helper.WindowsSMBHelper( remote_execute=self._remote_execute, configuration=self.configuration) def _update_share_stats(self, data=None): base_driver.ShareDriver._update_share_stats( self, data=dict(storage_protocol="CIFS")) def _setup_service_instance_manager(self): self.service_instance_manager = ( service_instance.WindowsServiceInstanceManager( driver_config=self.configuration)) def _setup_helpers(self): self._helpers = {key: self._smb_helper for key in ("SMB", "CIFS")} def _teardown_server(self, server_details, security_services=None): security_service = ( self.service_instance_manager.get_valid_security_service( security_services)) if server_details.get('joined_domain') and security_service: try: self._windows_utils.unjoin_domain(server_details, security_service['user'], security_service['password']) except Exception as exc: LOG.warning("Failed to remove service instance " "%(instance_id)s from domain %(domain)s. " "Exception: %(exc)s.", dict(instance_id=server_details['instance_id'], domain=security_service['domain'], exc=exc)) super(WindowsSMBDriver, self)._teardown_server(server_details, security_services) def _format_device(self, server_details, volume): disk_number = self._get_disk_number(server_details, volume) self._windows_utils.initialize_disk(server_details, disk_number) self._windows_utils.create_partition(server_details, disk_number) self._windows_utils.format_partition( server_details, disk_number, self._DEFAULT_SHARE_PARTITION) def _mount_device(self, share, server_details, volume): mount_path = self._get_mount_path(share) if not self._is_device_mounted(mount_path, server_details, volume): disk_number = self._get_disk_number(server_details, volume) self._windows_utils.ensure_directory_exists(server_details, mount_path) self._ensure_disk_online_and_writable(server_details, disk_number) self._windows_utils.add_access_path(server_details, mount_path, disk_number, self._DEFAULT_SHARE_PARTITION) def _unmount_device(self, share, server_details): mount_path = self._get_mount_path(share) disk_number = self._windows_utils.get_disk_number_by_mount_path( server_details, mount_path) self._windows_utils.remove(server_details, mount_path, is_junction=True) if disk_number: self._windows_utils.set_disk_online_status( server_details, disk_number, online=False) def _resize_filesystem(self, server_details, volume, new_size=None): disk_number = self._get_disk_number(server_details, volume) self._ensure_disk_online_and_writable(server_details, disk_number) if not new_size: new_size_bytes = self._windows_utils.get_partition_maximum_size( server_details, disk_number, self._DEFAULT_SHARE_PARTITION) else: new_size_bytes = new_size * units.Gi self._windows_utils.resize_partition(server_details, new_size_bytes, disk_number, self._DEFAULT_SHARE_PARTITION) def _ensure_disk_online_and_writable(self, server_details, disk_number): self._windows_utils.update_disk(server_details, disk_number) self._windows_utils.set_disk_readonly_status( server_details, disk_number, readonly=False) self._windows_utils.set_disk_online_status( server_details, disk_number, online=True) def _get_mounted_share_size(self, mount_path, server_details): total_bytes = 
self._windows_utils.get_disk_space_by_path( server_details, mount_path)[0] return float(total_bytes) / units.Gi def _get_consumed_space(self, mount_path, server_details): total_bytes, free_bytes = self._windows_utils.get_disk_space_by_path( server_details, mount_path) return float(total_bytes - free_bytes) / units.Gi def _get_mount_path(self, share): mount_path = os.path.join(self.configuration.share_mount_path, share['name']) return self._windows_utils.normalize_path(mount_path) def _get_disk_number(self, server_details, volume): disk_number = self._windows_utils.get_disk_number_by_serial_number( server_details, volume['id']) if disk_number is None: LOG.debug("Could not identify the mounted disk by serial number " "using the volume id %(volume_id)s. Attempting to " "retrieve it by the volume mount point %(mountpoint)s.", dict(volume_id=volume['id'], mountpoint=volume['mountpoint'])) # Assumes the mount_point will be something like /dev/hdX mount_point = volume['mountpoint'] disk_number = ord(mount_point[-1]) - ord('a') return disk_number def _is_device_mounted(self, mount_path, server_details, volume=None): disk_number = self._windows_utils.get_disk_number_by_mount_path( server_details, mount_path) return disk_number is not None manila-10.0.0/manila/share/drivers/windows/winrm_helper.py0000664000175000017500000001455313656750227023660 0ustar zuulzuul00000000000000# Copyright (c) 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import base64 from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log from oslo_utils import importutils from oslo_utils import strutils import six from manila import exception from manila.i18n import _ from manila import utils LOG = log.getLogger(__name__) CONF = cfg.CONF winrm_opts = [ cfg.IntOpt( 'winrm_conn_timeout', default=60, help='WinRM connection timeout.'), cfg.IntOpt( 'winrm_operation_timeout', default=60, help='WinRM operation timeout.'), cfg.IntOpt( 'winrm_retry_count', default=3, help='WinRM retry count.'), cfg.IntOpt( 'winrm_retry_interval', default=5, help='WinRM retry interval in seconds'), ] CONF.register_opts(winrm_opts) DEFAULT_PORT_HTTP = 5985 DEFAULT_PORT_HTTPS = 5986 TRANSPORT_PLAINTEXT = 'plaintext' TRANSPORT_SSL = 'ssl' winrm = None def setup_winrm(): global winrm if not winrm: try: winrm = importutils.import_module('winrm') except ImportError: raise exception.ShareBackendException( _("PyWinrm is not installed")) class WinRMHelper(object): def __init__(self, configuration=None): if configuration: configuration.append_config_values(winrm_opts) self._config = configuration else: self._config = CONF setup_winrm() def _get_conn(self, server): auth = self._get_auth(server) conn = WinRMConnection( ip=server['ip'], conn_timeout=self._config.winrm_conn_timeout, operation_timeout=self._config.winrm_operation_timeout, **auth) return conn def execute(self, server, command, check_exit_code=True, retry=True): retries = self._config.winrm_retry_count if retry else 1 conn = self._get_conn(server) @utils.retry(exception=Exception, interval=self._config.winrm_retry_interval, retries=retries) def _execute(): parsed_cmd, sanitized_cmd = self._parse_command(command) LOG.debug("Executing command: %s", sanitized_cmd) (stdout, stderr, exit_code) = conn.execute(parsed_cmd) sanitized_stdout = strutils.mask_password(stdout) sanitized_stderr = strutils.mask_password(stderr) LOG.debug("Executed command: %(cmd)s. Stdout: %(stdout)s. " "Stderr: %(stderr)s. 
Exit code %(exit_code)s", dict(cmd=sanitized_cmd, stdout=sanitized_stdout, stderr=sanitized_stderr, exit_code=exit_code)) if check_exit_code and exit_code != 0: raise processutils.ProcessExecutionError( stdout=sanitized_stdout, stderr=sanitized_stderr, exit_code=exit_code, cmd=sanitized_cmd) return (stdout, stderr) return _execute() def _parse_command(self, command): if isinstance(command, list) or isinstance(command, tuple): command = " ".join([six.text_type(c) for c in command]) sanitized_cmd = strutils.mask_password(command) b64_command = base64.b64encode(command.encode("utf_16_le")) command = ("powershell.exe -ExecutionPolicy RemoteSigned " "-NonInteractive -EncodedCommand %s" % b64_command) return command, sanitized_cmd def _get_auth(self, server): auth = {'username': server['username']} if server['use_cert_auth']: auth['cert_pem_path'] = server['cert_pem_path'] auth['cert_key_pem_path'] = server['cert_key_pem_path'] else: auth['password'] = server['password'] return auth class WinRMConnection(object): _URL_TEMPLATE = '%(protocol)s://%(ip)s:%(port)s/wsman' def __init__(self, ip=None, port=None, use_ssl=False, transport=None, username=None, password=None, cert_pem_path=None, cert_key_pem_path=None, operation_timeout=None, conn_timeout=None): setup_winrm() use_cert = bool(cert_pem_path and cert_key_pem_path) transport = (TRANSPORT_SSL if use_cert else TRANSPORT_PLAINTEXT) _port = port or self._get_default_port(use_cert) _url = self._get_url(ip, _port, use_cert) self._conn = winrm.protocol.Protocol( endpoint=_url, transport=transport, username=username, password=password, cert_pem=cert_pem_path, cert_key_pem=cert_key_pem_path) self._conn.transport.timeout = conn_timeout self._conn.set_timeout(operation_timeout) def _get_default_port(self, use_ssl): port = (DEFAULT_PORT_HTTPS if use_ssl else DEFAULT_PORT_HTTP) return port def _get_url(self, ip, port, use_ssl): if not ip: err_msg = _("No IP provided.") raise exception.ShareBackendException(msg=err_msg) protocol = 'https' if use_ssl else 'http' return self._URL_TEMPLATE % {'protocol': protocol, 'ip': ip, 'port': port} def execute(self, cmd): shell_id = None cmd_id = None try: shell_id = self._conn.open_shell() cmd_id = self._conn.run_command(shell_id, cmd) (stdout, stderr, exit_code) = self._conn.get_command_output(shell_id, cmd_id) finally: if cmd_id: self._conn.cleanup_command(shell_id, cmd_id) if shell_id: self._conn.close_shell(shell_id) return (stdout, stderr, exit_code) manila-10.0.0/manila/share/drivers/dell_emc/0000775000175000017500000000000013656750362020655 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/__init__.py0000664000175000017500000000000013656750227022754 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/common/0000775000175000017500000000000013656750362022145 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/common/__init__.py0000664000175000017500000000000013656750227024244 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/common/enas/0000775000175000017500000000000013656750362023073 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/common/enas/xml_api_parser.py0000664000175000017500000002337313656750227026462 0ustar zuulzuul00000000000000# Copyright (c) 2016 Dell Inc. or its subsidiaries. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re from lxml import etree import six class XMLAPIParser(object): def __init__(self): # The following Boolean acts as the flag for the common sub-element. # For instance: # #
#     <CifsServers>
#         <li> server_1 </li>
#     </CifsServers>
#
#     <Interfaces>
#         <li> interface_1 </li>
#     </Interfaces>
    self.is_QueryStatus = False self.is_CifsServers = False self.is_Aliases = False self.is_MoverStatus = False self.is_TaskResponse = False self.is_Vdm = False self.is_Interfaces = False self.elt = {} def _remove_ns(self, tag): i = tag.find('}') if i >= 0: tag = tag[i + 1:] return tag def parse(self, xml): result = { 'type': None, 'taskId': None, 'maxSeverity': None, 'objects': [], 'problems': [], } events = ("start", "end") context = etree.iterparse(six.BytesIO(xml), events=events) for action, elem in context: self.tag = self._remove_ns(elem.tag) func = self._get_func(action, self.tag) if func in vars(XMLAPIParser): if action == 'start': eval('self.' + func)(elem, result) elif action == 'end': eval('self.' + func)() return result def _get_func(self, action, tag): if tag == 'W2KServerData': return action + '_' + 'w2k_server_data' temp_list = re.sub(r"([A-Z])", r" \1", tag).split() if temp_list: func_name = action + '_' + '_'.join(temp_list) else: func_name = action + '_' + tag return func_name.lower() def _copy_property(self, source, target, property, list_property=None): for key in property: if key in source: target[key] = source[key] if list_property: for key in list_property: if key in source: target[key] = source[key].split() def _append_elm_property(self, elm, result, property, identifier): for obj in result['objects']: if (identifier in obj and identifier in elm.attrib and elm.attrib[identifier] == obj[identifier]): for key, value in elm.attrib.items(): if key in property: obj[key] = value def _append_element(self, elm, result, property, list_property, identifier): sub_elm = {} self._copy_property(elm.attrib, sub_elm, property, list_property) for obj in result['objects']: if (identifier in obj and identifier in elm.attrib and elm.attrib[identifier] == obj[identifier]): if self.tag in obj: obj[self.tag].append(sub_elm) else: obj[self.tag] = [sub_elm] def start_task_response(self, elm, result): self.is_TaskResponse = True result['type'] = 'TaskResponse' self._copy_property(elm.attrib, result, ['taskId']) def end_task_response(self): self.is_TaskResponse = False def start_fault(self, elm, result): result['type'] = 'Fault' def start_status(self, elm, result): if self.is_TaskResponse: result['maxSeverity'] = elm.attrib['maxSeverity'] elif self.is_MoverStatus or self.is_Vdm: self.elt['maxSeverity'] = elm.attrib['maxSeverity'] def start_query_status(self, elm, result): self.is_QueryStatus = True result['type'] = 'QueryStatus' self._copy_property(elm.attrib, result, ['maxSeverity']) def end_query_status(self): self.is_QueryStatus = False def start_problem(self, elm, result): self.elt = {} properties = ('message', 'messageCode') self._copy_property(elm.attrib, self.elt, properties) result['problems'].append(self.elt) def start_description(self, elm, result): self.elt['Description'] = elm.text def start_action(self, elm, result): self.elt['Action'] = elm.text def start_diagnostics(self, elm, result): self.elt['Diagnostics'] = elm.text def start_file_system(self, elm, result): self.elt = {} property = ( 'fileSystem', 'name', 'type', 'storages', 'volume', 'dataServicePolicies', 'internalUse', ) list_property = ('storagePools',) self._copy_property(elm.attrib, self.elt, property, list_property) result['objects'].append(self.elt) def start_file_system_capacity_info(self, elm, result): property = ('volumeSize',) identifier = 'fileSystem' self._append_elm_property(elm, result, property, identifier) def start_storage_pool(self, elm, result): self.elt = {} property = ('name', 'autoSize', 'usedSize', 
'diskType', 'pool', 'dataServicePolicies', 'virtualProvisioning') list_property = ('movers',) self._copy_property(elm.attrib, self.elt, property, list_property) result['objects'].append(self.elt) def start_system_storage_pool_data(self, elm, result): property = ('greedy', 'isBackendPool') self._copy_property(elm.attrib, self.elt, property) def start_mover(self, elm, result): self.elt = {} property = ('name', 'host', 'mover', 'role') list_property = ('ntpServers', 'standbyFors', 'standbys') self._copy_property(elm.attrib, self.elt, property, list_property) result['objects'].append(self.elt) def start_mover_status(self, elm, result): self.is_MoverStatus = True property = ('version', 'csTime', 'clock', 'timezone', 'uptime') identifier = 'mover' self._append_elm_property(elm, result, property, identifier) def end_mover_status(self): self.is_MoverStatus = False def start_mover_dns_domain(self, elm, result): property = ('name', 'protocol') list_property = ('servers',) identifier = 'mover' self._append_element(elm, result, property, list_property, identifier) def start_mover_interface(self, elm, result): property = ( 'name', 'device', 'up', 'ipVersion', 'netMask', 'ipAddress', 'vlanid', ) identifier = 'mover' self._append_element(elm, result, property, None, identifier) def start_logical_network_device(self, elm, result): property = ('name', 'type', 'speed') list_property = ('interfaces',) identifier = 'mover' self._append_element(elm, result, property, list_property, identifier) def start_vdm(self, elm, result): self.is_Vdm = True self.elt = {} property = ('name', 'state', 'mover', 'vdm') self._copy_property(elm.attrib, self.elt, property) result['objects'].append(self.elt) def end_vdm(self): self.is_Vdm = False def start_interfaces(self, elm, result): self.is_Interfaces = True self.elt['Interfaces'] = [] def end_interfaces(self): self.is_Interfaces = False def start_li(self, elm, result): if self.is_CifsServers: self.elt['CifsServers'].append(elm.text) elif self.is_Aliases: self.elt['Aliases'].append(elm.text) elif self.is_Interfaces: self.elt['Interfaces'].append(elm.text) def start_cifs_server(self, elm, result): self.elt = {} property = ('type', 'localUsers', 'name', 'mover', 'moverIdIsVdm') list_property = ('interfaces',) self._copy_property(elm.attrib, self.elt, property, list_property) result['objects'].append(self.elt) def start_aliases(self, elm, result): self.is_Aliases = True self.elt['Aliases'] = [] def end_aliases(self): self.is_Aliases = False def start_w2k_server_data(self, elm, result): property = ('domain', 'compName', 'domainJoined') self._copy_property(elm.attrib, self.elt, property) def start_cifs_share(self, elm, result): self.elt = {} property = ('path', 'fileSystem', 'name', 'mover', 'moverIdIsVdm') self._copy_property(elm.attrib, self.elt, property) result['objects'].append(self.elt) def start_cifs_servers(self, elm, result): self.is_CifsServers = True self.elt['CifsServers'] = [] def end_cifs_servers(self): self.is_CifsServers = False def start_checkpoint(self, elm, result): self.elt = {} property = ('checkpointOf', 'name', 'checkpoint', 'state') self._copy_property(elm.attrib, self.elt, property) result['objects'].append(self.elt) def start_mount(self, elm, result): self.elt = {} property = ('fileSystem', 'path', 'mover', 'moverIdIsVdm') self._copy_property(elm.attrib, self.elt, property) result['objects'].append(self.elt) manila-10.0.0/manila/share/drivers/dell_emc/common/enas/utils.py0000664000175000017500000001303613656750227024610 0ustar zuulzuul00000000000000# 
Copyright (c) 2014 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import types from oslo_config import cfg from oslo_log import log from oslo_utils import fnmatch from oslo_utils import netutils from oslo_utils import timeutils import ssl CONF = cfg.CONF LOG = log.getLogger(__name__) def decorate_all_methods(decorator, debug_only=False): if debug_only and not CONF.debug: return lambda cls: cls def _decorate_all_methods(cls): for attr_name, attr_val in cls.__dict__.items(): if (isinstance(attr_val, types.FunctionType) and not attr_name.startswith("_")): setattr(cls, attr_name, decorator(attr_val)) return cls return _decorate_all_methods def log_enter_exit(func): if not CONF.debug: return func def inner(self, *args, **kwargs): LOG.debug("Entering %(cls)s.%(method)s.", {'cls': self.__class__.__name__, 'method': func.__name__}) start = timeutils.utcnow() ret = func(self, *args, **kwargs) end = timeutils.utcnow() LOG.debug("Exiting %(cls)s.%(method)s. " "Spent %(duration)s sec. " "Return %(return)s.", {'cls': self.__class__.__name__, 'duration': timeutils.delta_seconds(start, end), 'method': func.__name__, 'return': ret}) return ret return inner def do_match_any(full, matcher_list): """Finds items that match any of the matchers. :param full: Full item list :param matcher_list: The list of matchers. Each matcher supports Unix shell-style wildcards :return: The matched items set and the unmatched items set """ matched = set() not_matched = set() full = set([item.strip() for item in full]) matcher_list = set([item.strip() for item in matcher_list]) for matcher in matcher_list: for item in full: if fnmatch.fnmatchcase(item, matcher): matched.add(item) not_matched = full - matched return matched, not_matched def create_ssl_context(configuration): """Create context for ssl verification. .. note:: starting from python 2.7.9 ssl adds create_default_context. We need to keep compatibility with previous python as well. """ try: if configuration.emc_ssl_cert_verify: context = ssl.create_default_context( capath=configuration.emc_ssl_cert_path) else: context = ssl.create_default_context() context.check_hostname = False context.verify_mode = ssl.CERT_NONE except AttributeError: LOG.warning('Creating ssl context is not supported on this ' 'version of Python, ssl verification is disabled.') context = None return context def parse_ipaddr(text): """Parse the output of VNX server_export command, get IPv4/IPv6 addresses. Example: input: 192.168.100.102:[fdf8:f53b:82e4::57]:[fdf8:f53b:82e4::54] output: ['192.168.100.102', '[fdf8:f53b:82e4::57]', '[fdf8:f53b:82e4::54]'] :param text: The output of VNX server_export command. :return: The list of IPv4/IPv6 addresses. The IPv6 address enclosed by []. 
""" rst = [] stk = [] ipaddr = '' it = iter(text) try: while True: i = next(it) if i == ':' and not stk and ipaddr: rst.append(ipaddr) ipaddr = '' elif i == ':' and not ipaddr: continue elif i == '[': stk.append(i) elif i == ']': rst.append('[%s]' % ipaddr) stk.pop() ipaddr = '' else: ipaddr += i except StopIteration: if ipaddr: rst.append(ipaddr) return rst def convert_ipv6_format_if_needed(ip_addr): """Convert IPv6 address format if needed. The IPv6 address enclosed by []. For the invalid IPv6 cidr, its format will not be changed. :param ip_addr: IPv6 address. :return: Converted IPv6 address. """ if netutils.is_valid_ipv6_cidr(ip_addr): ip_addr = '[%s]' % ip_addr return ip_addr def export_unc_path(ip_addr): """Convert IPv6 address to valid UNC path. In Microsoft Windows OS, UNC (Uniform Naming Convention) specifies a common syntax to describe the location of a network resource. The colon which used by IPv6 is an illegal character in a UNC path name. So the IPv6 address need to be converted to valid UNC path. References: - https://en.wikipedia.org/wiki/IPv6_address #Literal_IPv6_addresses_in_UNC_path_names - https://en.wikipedia.org/wiki/Path_(computing)#Uniform_Naming_Convention :param ip_addr: IPv6 address. :return: UNC path. """ unc_suffix = '.ipv6-literal.net' if netutils.is_valid_ipv6(ip_addr): ip_addr = ip_addr.replace(':', '-') + unc_suffix return ip_addr manila-10.0.0/manila/share/drivers/dell_emc/common/enas/__init__.py0000664000175000017500000000000013656750227025172 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/common/enas/connector.py0000664000175000017500000001456613656750227025453 0ustar zuulzuul00000000000000# Copyright (c) 2016 Dell Inc. or its subsidiaries. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import pipes from oslo_concurrency import processutils from oslo_log import log from oslo_utils import excutils import six from six.moves import http_cookiejar from six.moves.urllib import error as url_error from six.moves.urllib import request as url_request from manila import exception from manila.i18n import _ from manila.share.drivers.dell_emc.common.enas import constants from manila.share.drivers.dell_emc.common.enas import utils as enas_utils from manila import utils LOG = log.getLogger(__name__) class XMLAPIConnector(object): def __init__(self, configuration, debug=True): super(XMLAPIConnector, self).__init__() self.storage_ip = enas_utils.convert_ipv6_format_if_needed( configuration.emc_nas_server) self.username = configuration.emc_nas_login self.password = configuration.emc_nas_password self.debug = debug self.auth_url = 'https://' + self.storage_ip + '/Login' self._url = 'https://{}/servlets/CelerraManagementServices'.format( self.storage_ip) context = enas_utils.create_ssl_context(configuration) if context: https_handler = url_request.HTTPSHandler(context=context) else: https_handler = url_request.HTTPSHandler() cookie_handler = url_request.HTTPCookieProcessor( http_cookiejar.CookieJar()) self.url_opener = url_request.build_opener(https_handler, cookie_handler) self._do_setup() def _do_setup(self): credential = ('user=' + self.username + '&password=' + self.password + '&Login=Login') req = url_request.Request(self.auth_url, credential.encode(), constants.CONTENT_TYPE_URLENCODE) resp = self.url_opener.open(req) resp_body = resp.read() self._http_log_resp(resp, resp_body) def _http_log_req(self, req): if not self.debug: return string_parts = ['curl -i'] string_parts.append(' -X %s' % req.get_method()) for k in req.headers: header = ' -H "%s: %s"' % (k, req.headers[k]) string_parts.append(header) if req.data: string_parts.append(" -d '%s'" % req.data) string_parts.append(' ' + req.get_full_url()) LOG.debug("\nREQ: %s.\n", "".join(string_parts)) def _http_log_resp(self, resp, body): if not self.debug: return headers = six.text_type(resp.headers).replace('\n', '\\n') LOG.debug( 'RESP: [%(code)s] %(resp_hdrs)s\n' 'RESP BODY: %(resp_b)s.\n', { 'code': resp.getcode(), 'resp_hdrs': headers, 'resp_b': body, } ) def _request(self, req_body=None, method=None, header=constants.CONTENT_TYPE_URLENCODE): req = url_request.Request(self._url, req_body.encode(), header) if method not in (None, 'GET', 'POST'): req.get_method = lambda: method self._http_log_req(req) try: resp = self.url_opener.open(req) resp_body = resp.read() self._http_log_resp(resp, resp_body) except url_error.HTTPError as http_err: if '403' == six.text_type(http_err.code): raise exception.NotAuthorized() else: err = {'errorCode': -1, 'httpStatusCode': http_err.code, 'messages': six.text_type(http_err), 'request': req_body} msg = (_("The request is invalid. 
Reason: %(reason)s") % {'reason': err}) raise exception.ManilaException(message=msg) return resp_body def request(self, req_body=None, method=None, header=constants.CONTENT_TYPE_URLENCODE): try: resp_body = self._request(req_body, method, header) except exception.NotAuthorized: LOG.debug("Login again because client certification " "may be expired.") self._do_setup() resp_body = self._request(req_body, method, header) return resp_body class SSHConnector(object): def __init__(self, configuration, debug=True): super(SSHConnector, self).__init__() self.storage_ip = configuration.emc_nas_server self.username = configuration.emc_nas_login self.password = configuration.emc_nas_password self.debug = debug self.sshpool = utils.SSHPool(ip=self.storage_ip, port=22, conn_timeout=None, login=self.username, password=self.password) def run_ssh(self, cmd_list, check_exit_code=False): command = ' '.join(pipes.quote(cmd_arg) for cmd_arg in cmd_list) with self.sshpool.item() as ssh: try: out, err = processutils.ssh_execute( ssh, command, check_exit_code=check_exit_code) self.log_request(command, out, err) return out, err except processutils.ProcessExecutionError as e: with excutils.save_and_reraise_exception(): LOG.error('Error running SSH command: %(cmd)s. ' 'Error: %(excmsg)s.', {'cmd': command, 'excmsg': e}) def log_request(self, cmd, out, err): if not self.debug: return LOG.debug("\nSSH command: %s.\n", cmd) LOG.debug("SSH command output: out=%(out)s, err=%(err)s.\n", {'out': out, 'err': err}) manila-10.0.0/manila/share/drivers/dell_emc/common/enas/constants.py0000664000175000017500000000333113656750227025461 0ustar zuulzuul00000000000000# Copyright (c) 2016 Dell Inc. or its subsidiaries. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. STATUS_OK = 'ok' STATUS_INFO = 'info' STATUS_DEBUG = 'debug' STATUS_WARNING = 'warning' STATUS_ERROR = 'error' STATUS_NOT_FOUND = 'not_found' MSG_GENERAL_ERROR = '13690601492' MSG_INVALID_VDM_ID = '14227341325' MSG_INVALID_MOVER_ID = '14227341323' MSG_FILESYSTEM_NOT_FOUND = "18522112101" MSG_FILESYSTEM_EXIST = '13691191325' MSG_VDM_EXIST = '13421840550' MSG_SNAP_EXIST = '13690535947' MSG_INTERFACE_NAME_EXIST = '13421840550' MSG_INTERFACE_EXIST = '13691781136' MSG_INTERFACE_INVALID_VLAN_ID = '13421850371' MSG_INTERFACE_NON_EXISTENT = '13691781134' MSG_JOIN_DOMAIN = '13157007726' MSG_UNJOIN_DOMAIN = '13157007723' # Necessary to retry when ENAS database is locked for provisioning operation MSG_CODE_RETRY = '13421840537' IP_ALLOCATIONS = 2 CONTENT_TYPE_URLENCODE = {'Content-Type': 'application/x-www-form-urlencoded'} XML_HEADER = '' XML_NAMESPACE = 'http://www.emc.com/schemas/celerra/xml_api' CIFS_ACL_FULLCONTROL = 'fullcontrol' CIFS_ACL_READ = 'read' SSH_DEFAULT_RETRY_PATTERN = r'Error 2201:.*: unable to acquire lock\(s\)' manila-10.0.0/manila/share/drivers/dell_emc/plugin_manager.py0000664000175000017500000000216213656750227024220 0ustar zuulzuul00000000000000# Copyright (c) 2014 EMC Corporation. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """EMC Share Driver Plugin Framework.""" from stevedore import extension class EMCPluginManager(object): def __init__(self, namespace): self.namespace = namespace self.extension_manager = extension.ExtensionManager(namespace) def load_plugin(self, name, *args, **kwargs): for ext in self.extension_manager.extensions: if ext.name == name: storage_conn = ext.plugin(*args, **kwargs) return storage_conn return None manila-10.0.0/manila/share/drivers/dell_emc/plugins/0000775000175000017500000000000013656750362022336 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/plugins/__init__.py0000664000175000017500000000000013656750227024435 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/plugins/unity/0000775000175000017500000000000013656750362023506 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/plugins/unity/utils.py0000664000175000017500000000760613656750227025231 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Utility module for EMC Unity Manila Driver """ from oslo_log import log from oslo_utils import fnmatch from oslo_utils import units from manila import exception from manila.i18n import _ LOG = log.getLogger(__name__) def do_match(full, matcher_list): matched = set() full = set([item.strip() for item in full]) if matcher_list is None: # default to all matcher_list = set('*') else: matcher_list = set([item.strip() for item in matcher_list]) for item in full: for matcher in matcher_list: if fnmatch.fnmatchcase(item, matcher): matched.add(item) return matched, full - matched def match_ports(ports_list, port_ids_conf): """Filters the port in `ports_list` with the port id in `port_ids_conf`. A tuple of (`sp_ports_map`, `unmanaged_port_ids`) is returned, in which `sp_ports_map` is a dict whose key is SPA or SPB, value is the matched port id set, `unmanaged_port_ids` is the un-matched port id set. 
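    A hypothetical example, assuming the backend exposes ports 'spa_eth1',
    'spa_eth2' and 'spb_eth1' (names and SP ids here are illustrative, not
    taken from a real array)::

        match_ports(ports, ['spa_eth*'])
        # -> ({'spa': {'spa_eth1', 'spa_eth2'}}, {'spb_eth1'})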
""" patterns = (set('*') if port_ids_conf is None else set(item.strip() for item in port_ids_conf if item.strip())) if not patterns: patterns = set('*') sp_ports_map = {} unmanaged_port_ids = set() for port in ports_list: port_id = port.get_id() for pattern in patterns: if fnmatch.fnmatchcase(port_id, pattern): sp_id = port.parent_storage_processor.get_id() ports_set = sp_ports_map.setdefault(sp_id, set()) ports_set.add(port_id) break else: unmanaged_port_ids.add(port_id) return sp_ports_map, unmanaged_port_ids def find_ports_by_mtu(all_ports, port_ids_conf, mtu): valid_ports = list(filter(lambda p: p.mtu == mtu, all_ports)) managed_port_map, unmatched = match_ports(valid_ports, port_ids_conf) if not managed_port_map: msg = (_('None of the configured port %(conf)s matches the mtu ' '%(mtu)s.') % {'conf': port_ids_conf, 'mtu': mtu}) raise exception.ShareBackendException(msg=msg) return managed_port_map def gib_to_byte(size_gib): return size_gib * units.Gi def get_share_backend_id(share): """Get backend share id. Try to get backend share id from path in case this is managed share, use share['id'] when path is empty. """ backend_share_id = None try: export_locations = share['export_locations'][0] path = export_locations['path'] if share['share_proto'].lower() == 'nfs': # 10.0.0.1:/example_share_name backend_share_id = path.split(':/')[-1] if share['share_proto'].lower() == 'cifs': # \\10.0.0.1\example_share_name backend_share_id = path.split('\\')[-1] except Exception as e: LOG.warning('Cannot get share name from path, make sure the path ' 'is right. Error details: %s', e) if backend_share_id and (backend_share_id != share['id']): return backend_share_id else: return share['id'] def get_snapshot_id(snapshot): """Get backend snapshot id. Take the id from provider_location in case this is managed snapshot. """ return snapshot['provider_location'] or snapshot['id'] manila-10.0.0/manila/share/drivers/dell_emc/plugins/unity/__init__.py0000664000175000017500000000000013656750227025605 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/plugins/unity/client.py0000664000175000017500000003330613656750227025343 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import six from oslo_log import log from oslo_utils import excutils from oslo_utils import importutils storops = importutils.try_import('storops') if storops: # pylint: disable=import-error from storops import exception as storops_ex from storops.unity import enums from manila.common import constants as const from manila import exception from manila.i18n import _ from manila.share.drivers.dell_emc.common.enas import utils as enas_utils from manila.share.drivers.dell_emc.plugins.unity import utils LOG = log.getLogger(__name__) class UnityClient(object): def __init__(self, host, username, password): if storops is None: LOG.error('StorOps is required to run EMC Unity driver.') self.system = storops.UnitySystem(host, username, password) def create_cifs_share(self, resource, share_name): """Create CIFS share from the resource. :param resource: either UnityFilesystem or UnitySnap object :param share_name: CIFS share name :return: UnityCifsShare object """ try: share = resource.create_cifs_share(share_name) try: # bug on unity: the enable ace API has bug for snap # based share. Log the internal error if it happens. share.enable_ace() except storops_ex.UnityException: msg = ('Failed to enabled ACE for share: {}.') LOG.exception(msg.format(share_name)) return share except storops_ex.UnitySmbShareNameExistedError: return self.get_share(share_name, 'CIFS') def create_nfs_share(self, resource, share_name): """Create NFS share from the resource. :param resource: either UnityFilesystem or UnitySnap object :param share_name: NFS share name :return: UnityNfsShare object """ try: return resource.create_nfs_share(share_name) except storops_ex.UnityNfsShareNameExistedError: return self.get_share(share_name, 'NFS') def create_nfs_filesystem_and_share(self, pool, nas_server, share_name, size_gb): """Create filesystem and share from pool/NAS server. 
:param pool: pool for file system creation :param nas_server: nas server for file system creation :param share_name: file system and share name :param size_gb: file system size """ size = utils.gib_to_byte(size_gb) pool.create_nfs_share( nas_server, share_name, size, user_cap=True) def get_share(self, name, share_proto): # Validate the share protocol proto = share_proto.upper() if proto == 'CIFS': return self.system.get_cifs_share(name=name) elif proto == 'NFS': return self.system.get_nfs_share(name=name) else: raise exception.BadConfigurationException( reason=_('Invalid NAS protocol supplied: %s.') % share_proto) @staticmethod def delete_share(share): share.delete() def create_filesystem(self, pool, nas_server, share_name, size_gb, proto): try: size = utils.gib_to_byte(size_gb) return pool.create_filesystem(nas_server, share_name, size, proto=proto, user_cap=True) except storops_ex.UnityFileSystemNameAlreadyExisted: LOG.debug('Filesystem %s already exists, ' 'ignoring filesystem creation.', share_name) return self.system.get_filesystem(name=share_name) @staticmethod def delete_filesystem(filesystem): try: filesystem.delete() except storops_ex.UnityResourceNotFoundError: LOG.info('Filesystem %s is already removed.', filesystem.name) def create_nas_server(self, name, sp, pool, tenant=None): try: return self.system.create_nas_server(name, sp, pool, tenant=tenant) except storops_ex.UnityNasServerNameUsedError: LOG.info('Share server %s already exists, ignoring share ' 'server creation.', name) return self.get_nas_server(name) def get_nas_server(self, name): try: return self.system.get_nas_server(name=name) except storops_ex.UnityResourceNotFoundError: LOG.info('NAS server %s not found.', name) raise def delete_nas_server(self, name, username=None, password=None): tenant = None try: nas_server = self.get_nas_server(name=name) tenant = nas_server.tenant nas_server.delete(username=username, password=password) except storops_ex.UnityResourceNotFoundError: LOG.info('NAS server %s not found.', name) if tenant is not None: self._delete_tenant(tenant) @staticmethod def _delete_tenant(tenant): if tenant.nas_servers: LOG.debug('There are NAS servers belonging to the tenant %s. ' 'Do not delete it.', tenant.get_id()) return try: tenant.delete(delete_hosts=True) except storops_ex.UnityException as ex: LOG.warning('Delete tenant %(tenant)s failed with error: ' '%(ex)s. 
Leave the tenant on the system.', {'tenant': tenant.get_id(), 'ex': ex}) @staticmethod def create_dns_server(nas_server, domain, dns_ip): try: nas_server.create_dns_server(domain, dns_ip) except storops_ex.UnityOneDnsPerNasServerError: LOG.info('DNS server %s already exists, ' 'ignoring DNS server creation.', domain) @staticmethod def create_interface(nas_server, ip_addr, netmask, gateway, port_id, vlan_id=None, prefix_length=None): try: nas_server.create_file_interface(port_id, ip_addr, netmask=netmask, v6_prefix_length=prefix_length, gateway=gateway, vlan_id=vlan_id) except storops_ex.UnityIpAddressUsedError: raise exception.IPAddressInUse(ip=ip_addr) @staticmethod def enable_cifs_service(nas_server, domain, username, password): try: nas_server.enable_cifs_service( nas_server.file_interface, domain=domain, domain_username=username, domain_password=password) except storops_ex.UnitySmbNameInUseError: LOG.info('CIFS service on NAS server %s is ' 'already enabled.', nas_server.name) @staticmethod def enable_nfs_service(nas_server): try: nas_server.enable_nfs_service() except storops_ex.UnityNfsAlreadyEnabledError: LOG.info('NFS service on NAS server %s is ' 'already enabled.', nas_server.name) @staticmethod def create_snapshot(filesystem, name): access_type = enums.FilesystemSnapAccessTypeEnum.CHECKPOINT try: return filesystem.create_snap(name, fs_access_type=access_type) except storops_ex.UnitySnapNameInUseError: LOG.info('Snapshot %(snap)s on Filesystem %(fs)s already ' 'exists.', {'snap': name, 'fs': filesystem.name}) def create_snap_of_snap(self, src_snap, dst_snap_name): if isinstance(src_snap, six.string_types): snap = self.get_snapshot(name=src_snap) else: snap = src_snap try: return snap.create_snap(dst_snap_name) except storops_ex.UnitySnapNameInUseError: return self.get_snapshot(dst_snap_name) def get_snapshot(self, name): return self.system.get_snap(name=name) @staticmethod def delete_snapshot(snap): try: snap.delete() except storops_ex.UnityResourceNotFoundError: LOG.info('Snapshot %s is already removed.', snap.name) def get_pool(self, name=None): return self.system.get_pool(name=name) def get_storage_processor(self, sp_id=None): sp = self.system.get_sp(sp_id) if sp_id is None: # `sp` is a list of SPA and SPB. 
return [s for s in sp if s is not None and s.existed] else: return sp if sp.existed else None def cifs_clear_access(self, share_name, white_list=None): share = self.system.get_cifs_share(name=share_name) share.clear_access(white_list) def nfs_clear_access(self, share_name, white_list=None): share = self.system.get_nfs_share(name=share_name) share.clear_access(white_list, force_create_host=True) def cifs_allow_access(self, share_name, user_name, access_level): share = self.system.get_cifs_share(name=share_name) if access_level == const.ACCESS_LEVEL_RW: cifs_access = enums.ACEAccessLevelEnum.WRITE else: cifs_access = enums.ACEAccessLevelEnum.READ share.add_ace(user=user_name, access_level=cifs_access) def nfs_allow_access(self, share_name, host_ip, access_level): share = self.system.get_nfs_share(name=share_name) host_ip = enas_utils.convert_ipv6_format_if_needed(host_ip) if access_level == const.ACCESS_LEVEL_RW: share.allow_read_write_access(host_ip, force_create_host=True) share.allow_root_access(host_ip, force_create_host=True) else: share.allow_read_only_access(host_ip, force_create_host=True) def cifs_deny_access(self, share_name, user_name): share = self.system.get_cifs_share(name=share_name) try: share.delete_ace(user=user_name) except storops_ex.UnityAclUserNotFoundError: LOG.debug('ACL User "%(user)s" does not exist.', {'user': user_name}) def nfs_deny_access(self, share_name, host_ip): share = self.system.get_nfs_share(name=share_name) try: share.delete_access(host_ip) except storops_ex.UnityHostNotFoundException: LOG.info('%(host)s access to %(share)s is already removed.', {'host': host_ip, 'share': share_name}) def get_file_ports(self): ports = self.system.get_file_port() link_up_ports = [] for port in ports: if port.is_link_up and self._is_external_port(port.id): link_up_ports.append(port) return link_up_ports def extend_filesystem(self, fs, new_size_gb): size = utils.gib_to_byte(new_size_gb) try: fs.extend(size, user_cap=True) except storops_ex.UnityNothingToModifyError: LOG.debug('The size of the file system %(id)s is %(size)s ' 'bytes.', {'id': fs.get_id(), 'size': size}) return size def shrink_filesystem(self, share_id, fs, new_size_gb): size = utils.gib_to_byte(new_size_gb) try: fs.shrink(size, user_cap=True) except storops_ex.UnityNothingToModifyError: LOG.debug('The size of the file system %(id)s is %(size)s ' 'bytes.', {'id': fs.get_id(), 'size': size}) except storops_ex.UnityShareShrinkSizeTooSmallError: LOG.error('The used size of the file system %(id)s is ' 'bigger than input shrink size,' 'it may cause date loss.', {'id': fs.get_id()}) raise exception.ShareShrinkingPossibleDataLoss(share_id=share_id) return size @staticmethod def _is_external_port(port_id): return 'eth' in port_id or '_la' in port_id def get_tenant(self, name, vlan_id): if not vlan_id: # Do not create vlan for flat network return None tenant = None try: tenant_name = "vlan_%(vlan_id)s_%(name)s" % {'vlan_id': vlan_id, 'name': name} tenant = self.system.create_tenant(tenant_name, vlans=[vlan_id]) except (storops_ex.UnityVLANUsedByOtherTenantError, storops_ex.UnityTenantNameInUseError, storops_ex.UnityVLANAlreadyHasInterfaceError): with excutils.save_and_reraise_exception() as exc: tenant = self.system.get_tenant_use_vlan(vlan_id) if tenant is not None: LOG.debug("The VLAN %s is already added into a tenant. 
" "Use the existing VLAN tenant.", vlan_id) exc.reraise = False except storops_ex.SystemAPINotSupported: LOG.info("This system doesn't support tenant.") return tenant def restore_snapshot(self, snap_name): snap = self.get_snapshot(snap_name) return snap.restore(delete_backup=True) manila-10.0.0/manila/share/drivers/dell_emc/plugins/unity/connection.py0000664000175000017500000011215413656750227026223 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unity backend for the EMC Manila driver.""" import random from oslo_config import cfg from oslo_log import log from oslo_utils import excutils from oslo_utils import importutils from oslo_utils import netutils storops = importutils.try_import('storops') if storops: # pylint: disable=import-error from storops import exception as storops_ex from storops.unity import enums from manila.common import constants as const from manila import exception from manila.i18n import _ from manila.share.drivers.dell_emc.common.enas import utils as enas_utils from manila.share.drivers.dell_emc.plugins import base as driver from manila.share.drivers.dell_emc.plugins.unity import client from manila.share.drivers.dell_emc.plugins.unity import utils as unity_utils from manila.share import utils as share_utils from manila import utils """Version history: 7.0.0 - Supports DHSS=False mode 7.0.1 - Fix parsing management IPv6 address 7.0.2 - Bugfix: failed to delete CIFS share if wrong access was set 8.0.0 - Supports manage/unmanage share server/share/snapshot """ VERSION = "8.0.0" LOG = log.getLogger(__name__) SUPPORTED_NETWORK_TYPES = (None, 'flat', 'vlan') UNITY_OPTS = [ cfg.StrOpt('unity_server_meta_pool', required=True, deprecated_name='emc_nas_server_pool', help='Pool to persist the meta-data of NAS server.'), cfg.ListOpt('unity_share_data_pools', deprecated_name='emc_nas_pool_names', help='Comma separated list of pools that can be used to ' 'persist share data.'), cfg.ListOpt('unity_ethernet_ports', deprecated_name='emc_interface_ports', help='Comma separated list of ports that can be used for ' 'share server interfaces. Members of the list ' 'can be Unix-style glob expressions.'), cfg.StrOpt('emc_nas_server_container', deprecated_for_removal=True, deprecated_reason='Unity driver supports nas server auto load ' 'balance.', help='Storage processor to host the NAS server. Obsolete.'), cfg.StrOpt('unity_share_server', help='NAS server used for creating share when driver ' 'is in DHSS=False mode. 
It is required when ' 'driver_handles_share_servers=False in manila.conf.'), ] CONF = cfg.CONF CONF.register_opts(UNITY_OPTS) @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class UnityStorageConnection(driver.StorageConnection): """Implements Unity specific functionality for EMC Manila driver.""" IP_ALLOCATIONS = 1 @enas_utils.log_enter_exit def __init__(self, *args, **kwargs): super(UnityStorageConnection, self).__init__(*args, **kwargs) if 'configuration' in kwargs: kwargs['configuration'].append_config_values(UNITY_OPTS) self.client = None self.pool_set = None self.nas_server_pool = None self.reserved_percentage = None self.max_over_subscription_ratio = None self.port_ids_conf = None self.unity_share_server = None self.ipv6_implemented = True self.revert_to_snap_support = True self.shrink_share_support = True self.manage_existing_support = True self.manage_existing_with_server_support = True self.manage_existing_snapshot_support = True self.manage_snapshot_with_server_support = True self.manage_server_support = True self.get_share_server_network_info_support = True # props from super class. self.driver_handles_share_servers = (True, False) def connect(self, emc_share_driver, context): """Connect to Unity storage.""" config = emc_share_driver.configuration storage_ip = enas_utils.convert_ipv6_format_if_needed( config.emc_nas_server) username = config.emc_nas_login password = config.emc_nas_password self.client = client.UnityClient(storage_ip, username, password) pool_conf = config.safe_get('unity_share_data_pools') self.pool_set = self._get_managed_pools(pool_conf) self.reserved_percentage = config.safe_get( 'reserved_share_percentage') if self.reserved_percentage is None: self.reserved_percentage = 0 self.max_over_subscription_ratio = config.safe_get( 'max_over_subscription_ratio') self.port_ids_conf = config.safe_get('unity_ethernet_ports') self.unity_share_server = config.safe_get('unity_share_server') self.driver_handles_share_servers = config.safe_get( 'driver_handles_share_servers') if (not self.driver_handles_share_servers) and ( not self.unity_share_server): msg = ("Make sure there is NAS server name " "configured for share creation when driver " "is in DHSS=False mode.") raise exception.BadConfigurationException(reason=msg) self.validate_port_configuration(self.port_ids_conf) pool_name = config.unity_server_meta_pool self._config_pool(pool_name) def get_server_name(self, share_server=None): if not self.driver_handles_share_servers: return self.unity_share_server else: return self._get_server_name(share_server) def validate_port_configuration(self, port_ids_conf): """Initializes the SP and ports based on the port option.""" ports = self.client.get_file_ports() sp_ports_map, unmanaged_port_ids = unity_utils.match_ports( ports, port_ids_conf) if not sp_ports_map: msg = (_("All the specified storage ports to be managed " "do not exist. Please check your configuration " "unity_ethernet_ports in manila.conf. " "The available ports in the backend are %s.") % ",".join([port.get_id() for port in ports])) raise exception.BadConfigurationException(reason=msg) if unmanaged_port_ids: LOG.info("The following specified ports are not managed by " "the backend: %(unmanaged)s. 
This host will only " "manage the storage ports: %(exist)s", {'unmanaged': ",".join(unmanaged_port_ids), 'exist': ",".join(map(",".join, sp_ports_map.values()))}) else: LOG.debug("Ports: %s will be managed.", ",".join(map(",".join, sp_ports_map.values()))) if len(sp_ports_map) == 1: LOG.info("Only ports of %s are configured. Configure ports " "of both SPA and SPB to use both of the SPs.", list(sp_ports_map)[0]) return sp_ports_map def check_for_setup_error(self): """Check for setup error.""" def manage_existing(self, share, driver_options, share_server=None): """Manages a share that exists on backend. :param share: Share that will be managed. :param driver_options: Driver-specific options provided by admin. :param share_server: Share server name provided by admin in DHSS=True. :returns: Returns a dict with share size and export location. """ export_locations = share['export_locations'] if not export_locations: message = ("Failed to manage existing share: %s, missing " "export locations." % share['id']) raise exception.ManageInvalidShare(reason=message) try: share_size = int(driver_options.get("size", 0)) except (ValueError, TypeError): msg = _("The driver options' size to manage the share " "%(share_id)s, should be an integer, in format " "driver-options size=. Value specified: " "%(size)s.") % {'share_id': share['id'], 'size': driver_options.get("size")} raise exception.ManageInvalidShare(reason=msg) if not share_size: msg = _("Share %(share_id)s has no specified size. " "Using default value 1, set size in driver options if you " "want.") % {'share_id': share['id']} LOG.warning(msg) share_size = 1 share_id = unity_utils.get_share_backend_id(share) backend_share = self.client.get_share(share_id, share['share_proto']) if not backend_share: message = ("Could not find the share in backend, please make sure " "the export location is right.") raise exception.ManageInvalidShare(reason=message) # Check the share server when in DHSS=true mode if share_server: backend_share_server = self._get_server_name(share_server) if not backend_share_server: message = ("Could not find the backend share server: %s, " "please make sure that share server with the " "specified name exists in the backend.", share_server) raise exception.BadConfigurationException(message) LOG.info("Share %(shr_path)s is being managed with ID " "%(shr_id)s.", {'shr_path': share['export_locations'][0]['path'], 'shr_id': share['id']}) # export_locations was not changed, return original value return {"size": share_size, 'export_locations': { 'path': share['export_locations'][0]['path']}} def manage_existing_with_server(self, share, driver_options, share_server): return self.manage_existing(share, driver_options, share_server) def manage_existing_snapshot(self, snapshot, driver_options, share_server=None): """Brings an existing snapshot under Manila management.""" try: snapshot_size = int(driver_options.get("size", 0)) except (ValueError, TypeError): msg = _("The size in driver options to manage snapshot " "%(snap_id)s should be an integer, in format " "driver-options size=. Value passed: " "%(size)s.") % {'snap_id': snapshot['id'], 'size': driver_options.get("size")} raise exception.ManageInvalidShareSnapshot(reason=msg) if not snapshot_size: msg = _("Snapshot %(snap_id)s has no specified size. 
" "Use default value 1, set size in driver options if you " "want.") % {'snap_id': snapshot['id']} LOG.info(msg) snapshot_size = 1 provider_location = snapshot.get('provider_location') snap = self.client.get_snapshot(provider_location) if not snap: message = ("Could not find a snapshot in the backend with " "provider_location: %s, please make sure " "the snapshot exists in the backend." % provider_location) raise exception.ManageInvalidShareSnapshot(reason=message) LOG.info("Snapshot %(provider_location)s in Unity will be managed " "with ID %(snapshot_id)s.", {'provider_location': snapshot.get('provider_location'), 'snapshot_id': snapshot['id']}) return {"size": snapshot_size, "provider_location": provider_location} def manage_existing_snapshot_with_server(self, snapshot, driver_options, share_server): return self.manage_existing_snapshot(snapshot, driver_options, share_server) def manage_server(self, context, share_server, identifier, driver_options): """Manage the share server and return compiled back end details. :param context: Current context. :param share_server: Share server model. :param identifier: A driver-specific share server identifier :param driver_options: Dictionary of driver options to assist managing the share server :return: Identifier and dictionary with back end details to be saved in the database. Example:: 'my_new_server_identifier',{'server_name': 'my_old_server'} """ nas_server = self.client.get_nas_server(identifier) if not nas_server: message = ("Could not find the backend share server by server " "name: %s, please make sure the share server is " "existing in the backend." % identifier) raise exception.ManageInvalidShare(reason=message) return identifier, driver_options def get_share_server_network_info( self, context, share_server, identifier, driver_options): """Obtain network allocations used by share server. :param context: Current context. :param share_server: Share server model. :param identifier: A driver-specific share server identifier :param driver_options: Dictionary of driver options to assist managing the share server :return: The containing IP address allocated in the backend, Unity only supports single IP address Example:: ['10.10.10.10'] or ['fd11::2000'] """ containing_ips = [] nas_server = self.client.get_nas_server(identifier) if nas_server: for file_interface in nas_server.file_interface: containing_ips.append(file_interface.ip_address) return containing_ips def create_share(self, context, share, share_server=None): """Create a share and export it based on protocol used.""" share_name = share['id'] size = share['size'] # Check share's protocol. # Throw an exception immediately if it is an invalid protocol. share_proto = share['share_proto'].upper() proto_enum = self._get_proto_enum(share_proto) # Get pool name from share host field pool_name = self._get_pool_name_from_host(share['host']) # Get share server name from share server or manila.conf. 
server_name = self.get_server_name(share_server) pool = self.client.get_pool(pool_name) try: nas_server = self.client.get_nas_server(server_name) except storops_ex.UnityResourceNotFoundError: message = (_("Failed to get NAS server %(server)s when " "creating the share %(share)s.") % {'server': server_name, 'share': share_name}) LOG.exception(message) raise exception.EMCUnityError(err=message) locations = None if share_proto == 'CIFS': filesystem = self.client.create_filesystem( pool, nas_server, share_name, size, proto=proto_enum) self.client.create_cifs_share(filesystem, share_name) locations = self._get_cifs_location( nas_server.file_interface, share_name) elif share_proto == 'NFS': self.client.create_nfs_filesystem_and_share( pool, nas_server, share_name, size) locations = self._get_nfs_location( nas_server.file_interface, share_name) return locations def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Create a share from a snapshot - clone a snapshot.""" share_name = share['id'] # Check share's protocol. # Throw an exception immediately if it is an invalid protocol. share_proto = share['share_proto'].upper() self._validate_share_protocol(share_proto) # Get share server name from share server server_name = self.get_server_name(share_server) try: nas_server = self.client.get_nas_server(server_name) except storops_ex.UnityResourceNotFoundError: message = (_("Failed to get NAS server %(server)s when " "creating the share %(share)s.") % {'server': server_name, 'share': share_name}) LOG.exception(message) raise exception.EMCUnityError(err=message) snapshot_id = unity_utils.get_snapshot_id(snapshot) backend_snap = self.client.create_snap_of_snap(snapshot_id, share_name) locations = None if share_proto == 'CIFS': self.client.create_cifs_share(backend_snap, share_name) locations = self._get_cifs_location( nas_server.file_interface, share_name) elif share_proto == 'NFS': self.client.create_nfs_share(backend_snap, share_name) locations = self._get_nfs_location( nas_server.file_interface, share_name) return locations def delete_share(self, context, share, share_server=None): """Delete a share.""" share_name = unity_utils.get_share_backend_id(share) try: backend_share = self.client.get_share(share_name, share['share_proto']) except storops_ex.UnityResourceNotFoundError: LOG.warning("Share %s is not found when deleting the share", share_name) return # Share created by the API create_share_from_snapshot() if self._is_share_from_snapshot(backend_share): filesystem = backend_share.snap.filesystem self.client.delete_snapshot(backend_share.snap) else: filesystem = backend_share.filesystem self.client.delete_share(backend_share) if self._is_isolated_filesystem(filesystem): self.client.delete_filesystem(filesystem) def extend_share(self, share, new_size, share_server=None): share_id = unity_utils.get_share_backend_id(share) backend_share = self.client.get_share(share_id, share['share_proto']) if not self._is_share_from_snapshot(backend_share): self.client.extend_filesystem(backend_share.filesystem, new_size) else: share_id = share['id'] reason = ("Driver does not support extending a " "snapshot based share.") raise exception.ShareExtendingError(share_id=share_id, reason=reason) def shrink_share(self, share, new_size, share_server=None): """Shrinks a share to new size. :param share: Share that will be shrunk. :param new_size: New size of share. :param share_server: Data structure with share server information. Not used by this driver. 
""" share_id = unity_utils.get_share_backend_id(share) backend_share = self.client.get_share(share_id, share['share_proto']) if self._is_share_from_snapshot(backend_share): reason = ("Driver does not support shrinking a " "snapshot based share.") raise exception.ShareShrinkingError(share_id=share_id, reason=reason) self.client.shrink_filesystem(share_id, backend_share.filesystem, new_size) LOG.info("Share %(shr_id)s successfully shrunk to " "%(shr_size)sG.", {'shr_id': share_id, 'shr_size': new_size}) def create_snapshot(self, context, snapshot, share_server=None): """Create snapshot from share.""" share = snapshot['share'] share_name = unity_utils.get_share_backend_id( share) if share else snapshot['share_id'] share_proto = snapshot['share']['share_proto'] backend_share = self.client.get_share(share_name, share_proto) snapshot_name = snapshot['id'] if self._is_share_from_snapshot(backend_share): self.client.create_snap_of_snap(backend_share.snap, snapshot_name) else: self.client.create_snapshot(backend_share.filesystem, snapshot_name) return {'provider_location': snapshot_name} def delete_snapshot(self, context, snapshot, share_server=None): """Delete a snapshot.""" snapshot_id = unity_utils.get_snapshot_id(snapshot) snap = self.client.get_snapshot(snapshot_id) self.client.delete_snapshot(snap) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): # adding rules if add_rules: for rule in add_rules: self.allow_access(context, share, rule, share_server) # deleting rules if delete_rules: for rule in delete_rules: self.deny_access(context, share, rule, share_server) # recovery mode if not (add_rules or delete_rules): white_list = [] for rule in access_rules: self.allow_access(context, share, rule, share_server) white_list.append(rule['access_to']) self.clear_access(share, white_list) def clear_access(self, share, white_list=None): share_proto = share['share_proto'].upper() share_name = unity_utils.get_share_backend_id(share) if share_proto == 'CIFS': self.client.cifs_clear_access(share_name, white_list) elif share_proto == 'NFS': self.client.nfs_clear_access(share_name, white_list) def allow_access(self, context, share, access, share_server=None): """Allow access to a share.""" access_level = access['access_level'] if access_level not in const.ACCESS_LEVELS: raise exception.InvalidShareAccessLevel(level=access_level) share_proto = share['share_proto'].upper() self._validate_share_protocol(share_proto) self._validate_share_access_type(share, access) if share_proto == 'CIFS': self._cifs_allow_access(share, access) elif share_proto == 'NFS': self._nfs_allow_access(share, access) def deny_access(self, context, share, access, share_server): """Deny access to a share.""" share_proto = share['share_proto'].upper() self._validate_share_protocol(share_proto) self._validate_share_access_type(share, access) if share_proto == 'CIFS': self._cifs_deny_access(share, access) elif share_proto == 'NFS': self._nfs_deny_access(share, access) def ensure_share(self, context, share, share_server): """Ensure that the share is exported.""" share_name = unity_utils.get_share_backend_id(share) share_proto = share['share_proto'] backend_share = self.client.get_share(share_name, share_proto) if not backend_share.existed: raise exception.ShareNotFound(share_id=share_name) def update_share_stats(self, stats_dict): """Communicate with EMCNASClient to get the stats.""" stats_dict['driver_version'] = VERSION stats_dict['pools'] = [] for pool in self.client.get_pool(): if pool.name 
in self.pool_set: # the unit of following numbers are GB total_size = float(pool.size_total) used_size = float(pool.size_used) pool_stat = { 'pool_name': pool.name, 'thin_provisioning': True, 'total_capacity_gb': total_size, 'free_capacity_gb': total_size - used_size, 'allocated_capacity_gb': used_size, 'provisioned_capacity_gb': float(pool.size_subscribed), 'qos': False, 'reserved_percentage': self.reserved_percentage, 'max_over_subscription_ratio': self.max_over_subscription_ratio, } stats_dict['pools'].append(pool_stat) if not stats_dict.get('pools'): message = _("Failed to update storage pool.") LOG.error(message) raise exception.EMCUnityError(err=message) def get_pool(self, share): """Get the pool name of the share.""" backend_share = self.client.get_share( share['id'], share['share_proto']) return backend_share.filesystem.pool.name def get_network_allocations_number(self): """Returns number of network allocations for creating VIFs.""" return self.IP_ALLOCATIONS def setup_server(self, network_info, metadata=None): """Set up and configures share server with given network parameters.""" server_name = network_info['server_id'] segmentation_id = network_info['segmentation_id'] network = self.validate_network(network_info) mtu = network['mtu'] tenant = self.client.get_tenant(network_info['server_id'], segmentation_id) sp_ports_map = unity_utils.find_ports_by_mtu( self.client.get_file_ports(), self.port_ids_conf, mtu) sp = self._choose_sp(sp_ports_map) nas_server = self.client.create_nas_server(server_name, sp, self.nas_server_pool, tenant=tenant) sp = nas_server.home_sp port_id = self._choose_port(sp_ports_map, sp) try: self._create_network_interface(nas_server, network, port_id) self._handle_security_services( nas_server, network_info['security_services']) return {'share_server_name': server_name} except Exception: with excutils.save_and_reraise_exception(): LOG.exception('Could not setup server.') server_details = {'share_server_name': server_name} self.teardown_server( server_details, network_info['security_services']) def teardown_server(self, server_details, security_services=None): """Teardown share server.""" if not server_details: LOG.debug('Server details are empty.') return server_name = server_details.get('share_server_name') if not server_name: LOG.debug('No share server found for server %s.', server_details.get('instance_id')) return username = None password = None for security_service in security_services: if security_service['type'] == 'active_directory': username = security_service['user'] password = security_service['password'] break self.client.delete_nas_server(server_name, username, password) def _cifs_allow_access(self, share, access): """Allow access to CIFS share.""" self.client.cifs_allow_access( share['id'], access['access_to'], access['access_level']) def _cifs_deny_access(self, share, access): """Deny access to CIFS share.""" self.client.cifs_deny_access(share['id'], access['access_to']) def _config_pool(self, pool_name): try: self.nas_server_pool = self.client.get_pool(pool_name) except storops_ex.UnityResourceNotFoundError: message = (_("The storage pools %s to store NAS server " "configuration do not exist.") % pool_name) LOG.exception(message) raise exception.BadConfigurationException(reason=message) @staticmethod def validate_network(network_info): network = network_info['network_allocations'][0] if network['network_type'] not in SUPPORTED_NETWORK_TYPES: msg = _('The specified network type %s is unsupported by ' 'the EMC Unity driver') raise 
exception.NetworkBadConfigurationException( reason=msg % network['network_type']) return network def _create_network_interface(self, nas_server, network, port_id): kargs = {'ip_addr': network['ip_address'], 'gateway': network['gateway'], 'vlan_id': network['segmentation_id'], 'port_id': port_id} if netutils.is_valid_ipv6_cidr(kargs['ip_addr']): kargs['netmask'] = None kargs['prefix_length'] = str(utils.cidr_to_prefixlen( network['cidr'])) else: kargs['netmask'] = utils.cidr_to_netmask(network['cidr']) # Create the interfaces on NAS server self.client.create_interface(nas_server, **kargs) def _choose_sp(self, sp_ports_map): sp = None if len(sp_ports_map.keys()) == 1: # Only one storage processor has usable ports, # create NAS server on that SP. sp = self.client.get_storage_processor( sp_id=list(sp_ports_map.keys())[0]) LOG.debug('All the usable ports belong to %s. ' 'Creating NAS server on this SP without ' 'load balance.', sp.get_id()) return sp @staticmethod def _choose_port(sp_ports_map, sp): ports = sp_ports_map[sp.get_id()] return random.choice(list(ports)) @staticmethod def _get_cifs_location(file_interfaces, share_name): return [ {'path': r'\\%(interface)s\%(share_name)s' % { 'interface': enas_utils.export_unc_path(interface.ip_address), 'share_name': share_name} } for interface in file_interfaces ] def _get_managed_pools(self, pool_conf): # Get the real pools from the backend storage real_pools = set(pool.name for pool in self.client.get_pool()) if not pool_conf: LOG.debug("No storage pool is specified, so all pools in storage " "system will be managed.") return real_pools matched_pools, unmanaged_pools = unity_utils.do_match(real_pools, pool_conf) if not matched_pools: msg = (_("All the specified storage pools to be managed " "do not exist. Please check your configuration " "emc_nas_pool_names in manila.conf. " "The available pools in the backend are %s") % ",".join(real_pools)) raise exception.BadConfigurationException(reason=msg) if unmanaged_pools: LOG.info("The following specified storage pools " "are not managed by the backend: " "%(un_managed)s. This host will only manage " "the storage pools: %(exist)s", {'un_managed': ",".join(unmanaged_pools), 'exist': ",".join(matched_pools)}) else: LOG.debug("Storage pools: %s will be managed.", ",".join(matched_pools)) return matched_pools @staticmethod def _get_nfs_location(file_interfaces, share_name): return [ {'path': '%(interface)s:/%(share_name)s' % { 'interface': enas_utils.convert_ipv6_format_if_needed( interface.ip_address), 'share_name': share_name} } for interface in file_interfaces ] @staticmethod def _get_pool_name_from_host(host): pool_name = share_utils.extract_host(host, level='pool') if not pool_name: message = (_("Pool is not available in the share host %s.") % host) raise exception.InvalidHost(reason=message) return pool_name @staticmethod def _get_proto_enum(share_proto): share_proto = share_proto.upper() UnityStorageConnection._validate_share_protocol(share_proto) if share_proto == 'CIFS': return enums.FSSupportedProtocolEnum.CIFS elif share_proto == 'NFS': return enums.FSSupportedProtocolEnum.NFS @staticmethod def _get_server_name(share_server): if not share_server: msg = _('Share server not provided.') raise exception.InvalidInput(reason=msg) # Try to get share server name from property 'identifier' first in # case this is managed share server. 
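        # --- Illustrative note (not part of the upstream driver) ---
        # The two fields checked below come from different code paths in this
        # plugin: 'identifier' is set when an existing NAS server is adopted
        # through manage_server(), while 'backend_details' carries the
        # {'share_server_name': ...} dict that setup_server() returns for
        # servers the driver created itself.  A purely hypothetical managed
        # share_server model might look like:
        #
        #     share_server = {
        #         'id': '<share-server-uuid>',
        #         'identifier': 'my_old_server',
        #         'backend_details': {},
        #     }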
server_name = share_server.get('identifier') or share_server.get( 'backend_details', {}).get('share_server_name') if server_name is None: msg = (_("Name of the share server %s not found.") % share_server['id']) LOG.error(msg) raise exception.InvalidInput(reason=msg) return server_name def _handle_security_services(self, nas_server, security_services): kerberos_enabled = False # Support 'active_directory' and 'kerberos' for security_service in security_services: service_type = security_service['type'] if service_type == 'active_directory': # Create DNS server for NAS server domain = security_service['domain'] dns_ip = security_service['dns_ip'] self.client.create_dns_server(nas_server, domain, dns_ip) # Enable CIFS service username = security_service['user'] password = security_service['password'] self.client.enable_cifs_service(nas_server, domain=domain, username=username, password=password) elif service_type == 'kerberos': # Enable NFS service with kerberos kerberos_enabled = True # TODO(jay.xu): enable nfs service with kerberos LOG.warning('Kerberos is not supported by ' 'EMC Unity manila driver plugin.') elif service_type == 'ldap': LOG.warning('LDAP is not supported by ' 'EMC Unity manila driver plugin.') else: LOG.warning('Unknown security service type: %s.', service_type) if not kerberos_enabled: # Enable NFS service without kerberos self.client.enable_nfs_service(nas_server) def _nfs_allow_access(self, share, access): """Allow access to NFS share.""" self.client.nfs_allow_access( share['id'], access['access_to'], access['access_level']) def _nfs_deny_access(self, share, access): """Deny access to NFS share.""" self.client.nfs_deny_access(share['id'], access['access_to']) @staticmethod def _is_isolated_filesystem(filesystem): filesystem.update() return ( not filesystem.has_snap() and not (filesystem.cifs_share or filesystem.nfs_share) ) @staticmethod def _is_share_from_snapshot(share): return True if share.snap else False @staticmethod def _validate_share_access_type(share, access): reason = None share_proto = share['share_proto'].upper() if share_proto == 'CIFS' and access['access_type'] != 'user': reason = _('Only user access type allowed for CIFS share.') elif share_proto == 'NFS' and access['access_type'] != 'ip': reason = _('Only IP access type allowed for NFS share.') if reason: raise exception.InvalidShareAccess(reason=reason) @staticmethod def _validate_share_protocol(share_proto): if share_proto not in ('NFS', 'CIFS'): raise exception.InvalidShare( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server=None): """Reverts a share (in place) to the specified snapshot.""" snapshot_id = unity_utils.get_snapshot_id(snapshot) return self.client.restore_snapshot(snapshot_id) manila-10.0.0/manila/share/drivers/dell_emc/plugins/vnx/0000775000175000017500000000000013656750362023151 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/plugins/vnx/__init__.py0000664000175000017500000000000013656750227025250 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/plugins/vnx/object_manager.py0000664000175000017500000023053213656750227026470 0ustar zuulzuul00000000000000# Copyright (c) 2015 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import re from lxml import builder from lxml import etree as ET from oslo_concurrency import processutils from oslo_log import log import six from manila.common import constants as const from manila import exception from manila.i18n import _ from manila.share.drivers.dell_emc.common.enas import connector from manila.share.drivers.dell_emc.common.enas import constants from manila.share.drivers.dell_emc.common.enas import utils as enas_utils from manila.share.drivers.dell_emc.common.enas import xml_api_parser as parser from manila import utils LOG = log.getLogger(__name__) @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class StorageObjectManager(object): def __init__(self, configuration): self.context = dict() self.connectors = dict() self.connectors['XML'] = connector.XMLAPIConnector(configuration) self.connectors['SSH'] = connector.SSHConnector(configuration) elt_maker = builder.ElementMaker(nsmap={None: constants.XML_NAMESPACE}) xml_parser = parser.XMLAPIParser() obj_types = StorageObject.__subclasses__() # pylint: disable=no-member for item in obj_types: key = item.__name__ self.context[key] = eval(key)(self.connectors, elt_maker, xml_parser, self) def getStorageContext(self, type): if type in self.context: return self.context[type] else: message = (_("Invalid storage object type %s.") % type) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) class StorageObject(object): def __init__(self, conn, elt_maker, xml_parser, manager): self.conn = conn self.elt_maker = elt_maker self.xml_parser = xml_parser self.manager = manager self.xml_retry = False self.ssh_retry_patterns = [ ( constants.SSH_DEFAULT_RETRY_PATTERN, exception.EMCVnxLockRequiredException() ), ] def _translate_response(self, response): """Translate different status to ok/error status.""" if (constants.STATUS_OK == response['maxSeverity'] or constants.STATUS_ERROR == response['maxSeverity']): return old_Severity = response['maxSeverity'] if response['maxSeverity'] in (constants.STATUS_DEBUG, constants.STATUS_INFO): response['maxSeverity'] = constants.STATUS_OK LOG.warning("Translated status from %(old)s to %(new)s. 
" "Message: %(info)s.", {'old': old_Severity, 'new': response['maxSeverity'], 'info': response}) def _response_validation(self, response, error_code): """Validates whether a response includes a certain error code.""" msg_codes = self._get_problem_message_codes(response['problems']) for code in msg_codes: if code == error_code: return True return False def _get_problem_message_codes(self, problems): message_codes = [] for problem in problems: if 'messageCode' in problem: message_codes.append(problem['messageCode']) return message_codes def _get_problem_messages(self, problems): messages = [] for problem in problems: if 'message' in problem: messages.append(problem['message']) return messages def _get_problem_diags(self, problems): diags = [] for problem in problems: if 'Diagnostics' in problem: diags.append(problem['Diagnostics']) return diags def _build_query_package(self, body): return self.elt_maker.RequestPacket( self.elt_maker.Request( self.elt_maker.Query(body) ) ) def _build_task_package(self, body): return self.elt_maker.RequestPacket( self.elt_maker.Request( self.elt_maker.StartTask(body, timeout='300') ) ) @utils.retry(exception.EMCVnxLockRequiredException) def _send_request(self, req): req_xml = constants.XML_HEADER + ET.tostring(req).decode('utf-8') rsp_xml = self.conn['XML'].request(str(req_xml)) response = self.xml_parser.parse(rsp_xml) self._translate_response(response) if (response['maxSeverity'] != constants.STATUS_OK and self._response_validation(response, constants.MSG_CODE_RETRY)): raise exception.EMCVnxLockRequiredException return response @utils.retry(exception.EMCVnxLockRequiredException) def _execute_cmd(self, cmd, retry_patterns=None, check_exit_code=False): """Execute NAS command via SSH. :param retry_patterns: list of tuples,where each tuple contains a reg expression and an exception. :param check_exit_code: Boolean. Raise processutils.ProcessExecutionError if the command failed to execute and this parameter is set to True. 
""" if retry_patterns is None: retry_patterns = self.ssh_retry_patterns try: out, err = self.conn['SSH'].run_ssh(cmd, check_exit_code) except processutils.ProcessExecutionError as e: for pattern in retry_patterns: if re.search(pattern[0], e.stdout): raise pattern[1] raise return out, err def _copy_properties(self, source, target, property_map, deep_copy=True): for property in property_map: if isinstance(property, tuple): target_key, src_key = property else: target_key = src_key = property if src_key in source: if deep_copy and isinstance(source[src_key], list): target[target_key] = copy.deepcopy(source[src_key]) else: target[target_key] = source[src_key] else: target[target_key] = None def _get_mover_id(self, mover_name, is_vdm): if is_vdm: return self.get_context('VDM').get_id(mover_name) else: return self.get_context('Mover').get_id(mover_name, self.xml_retry) def get_context(self, type): return self.manager.getStorageContext(type) @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class FileSystem(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(FileSystem, self).__init__(conn, elt_maker, xml_parser, manager) self.filesystem_map = dict() @utils.retry(exception.EMCVnxInvalidMoverID) def create(self, name, size, pool_name, mover_name, is_vdm=True): pool_id = self.get_context('StoragePool').get_id(pool_name) mover_id = self._get_mover_id(mover_name, is_vdm) if is_vdm: mover = self.elt_maker.Vdm(vdm=mover_id) else: mover = self.elt_maker.Mover(mover=mover_id) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.NewFileSystem( mover, self.elt_maker.StoragePool( pool=pool_id, size=six.text_type(size), mayContainSlices='true' ), name=name ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif self._response_validation( response, constants.MSG_FILESYSTEM_EXIST): LOG.warning("File system %s already exists. " "Skip the creation.", name) return elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create file system %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def get(self, name): if name not in self.filesystem_map: request = self._build_query_package( self.elt_maker.FileSystemQueryParams( self.elt_maker.AspectSelection( fileSystems='true', fileSystemCapacityInfos='true' ), self.elt_maker.Alias(name=name) ) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: if self._is_filesystem_nonexistent(response): return constants.STATUS_NOT_FOUND, response['problems'] else: return response['maxSeverity'], response['problems'] if not response['objects']: return constants.STATUS_NOT_FOUND, response['problems'] src = response['objects'][0] filesystem = {} property_map = ( 'name', ('pools_id', 'storagePools'), ('volume_id', 'volume'), ('size', 'volumeSize'), ('id', 'fileSystem'), 'type', 'dataServicePolicies', ) self._copy_properties(src, filesystem, property_map) self.filesystem_map[name] = filesystem return constants.STATUS_OK, self.filesystem_map[name] def delete(self, name): status, out = self.get(name) if constants.STATUS_NOT_FOUND == status: LOG.warning("File system %s not found. 
Skip the deletion.", name) return elif constants.STATUS_OK != status: message = (_("Failed to get file system by name %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) id = self.filesystem_map[name]['id'] request = self._build_task_package( self.elt_maker.DeleteFileSystem(fileSystem=id) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete file system %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) self.filesystem_map.pop(name) def extend(self, name, pool_name, new_size): status, out = self.get(name) if constants.STATUS_OK != status: message = (_("Failed to get file system by name %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) id = out['id'] size = int(out['size']) if new_size < size: message = (_("Failed to extend file system %(name)s because new " "size %(new_size)d is smaller than old size " "%(size)d.") % {'name': name, 'new_size': new_size, 'size': size}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) elif new_size == size: return pool_id = self.get_context('StoragePool').get_id(pool_name) request = self._build_task_package( self.elt_maker.ExtendFileSystem( self.elt_maker.StoragePool( pool=pool_id, size=six.text_type(new_size - size) ), fileSystem=id, ) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to extend file system %(name)s to new size " "%(new_size)d. Reason: %(err)s.") % {'name': name, 'new_size': new_size, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def get_id(self, name): status, out = self.get(name) if constants.STATUS_OK != status: message = (_("Failed to get file system by name %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) return self.filesystem_map[name]['id'] def _is_filesystem_nonexistent(self, response): """Translate different status to ok/error status.""" msg_codes = self._get_problem_message_codes(response['problems']) diags = self._get_problem_diags(response['problems']) for code, diagnose in zip(msg_codes, diags): if (code == constants.MSG_FILESYSTEM_NOT_FOUND and diagnose.find('File system not found.') != -1): return True return False def create_from_snapshot(self, name, snap_name, source_fs_name, pool_name, mover_name, connect_id): create_fs_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_fs', '-name', name, '-type', 'uxfs', '-create', 'samesize=' + source_fs_name, 'pool=%s' % pool_name, 'storage=SINGLE', 'worm=off', '-thin', 'no', '-option', 'slice=y', ] self._execute_cmd(create_fs_cmd) ro_mount_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_mount', mover_name, '-option', 'ro', name, '/%s' % name, ] self._execute_cmd(ro_mount_cmd) session_name = name + ':' + snap_name copy_ckpt_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_copy', '-name', session_name[0:63], '-source', '-ckpt', snap_name, '-destination', '-fs', name, '-interconnect', 'id=%s' % connect_id, '-overwrite_destination', '-full_copy', ] try: self._execute_cmd(copy_ckpt_cmd, check_exit_code=True) except processutils.ProcessExecutionError: LOG.exception("Failed to copy content from snapshot %(snap)s " "to file system %(filesystem)s.", {'snap': snap_name, 'filesystem': name}) # When an error happens during nas_copy, we need to continue # deleting the checkpoint of the target file system if it exists. query_fs_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_fs', '-info', name, ] out, err = self._execute_cmd(query_fs_cmd) re_ckpts = r'ckpts\s*=\s*(.*)\s*' m = re.search(re_ckpts, out) if m is not None: ckpts = m.group(1) for ckpt in re.split(',', ckpts): umount_ckpt_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_umount', mover_name, '-perm', ckpt, ] self._execute_cmd(umount_ckpt_cmd) delete_ckpt_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_fs', '-delete', ckpt, '-Force', ] self._execute_cmd(delete_ckpt_cmd) rw_mount_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_mount', mover_name, '-option', 'rw', name, '/%s' % name, ] self._execute_cmd(rw_mount_cmd) @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class StoragePool(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(StoragePool, self).__init__(conn, elt_maker, xml_parser, manager) self.pool_map = dict() def get(self, name, force=False): if name not in self.pool_map or force: status, out = self.get_all() if constants.STATUS_OK != status: return status, out if name not in self.pool_map: return constants.STATUS_NOT_FOUND, None return constants.STATUS_OK, self.pool_map[name] def get_all(self): self.pool_map.clear() request = self._build_query_package( self.elt_maker.StoragePoolQueryParams() ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['problems'] if not response['objects']: return constants.STATUS_NOT_FOUND, response['problems'] for item in response['objects']: pool = {} property_map = ( 'name', ('movers_id', 'movers'), ('total_size', 'autoSize'), ('used_size', 'usedSize'), 'diskType', 'dataServicePolicies', ('id', 'pool'), ) self._copy_properties(item, pool, property_map) self.pool_map[item['name']] = pool return constants.STATUS_OK, self.pool_map 
def get_id(self, name): status, out = self.get(name) if constants.STATUS_OK != status: message = (_("Failed to get storage pool by name %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) return out['id'] @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class MountPoint(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(MountPoint, self).__init__(conn, elt_maker, xml_parser, manager) @utils.retry(exception.EMCVnxInvalidMoverID) def create(self, mount_path, fs_name, mover_name, is_vdm=True): fs_id = self.get_context('FileSystem').get_id(fs_name) mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.NewMount( self.elt_maker.MoverOrVdm( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false', ), fileSystem=fs_id, path=mount_path ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif self._is_mount_point_already_existent(response): LOG.warning("Mount Point %(mount)s already exists. " "Skip the creation.", {'mount': mount_path}) return elif constants.STATUS_OK != response['maxSeverity']: message = (_('Failed to create Mount Point %(mount)s for ' 'file system %(fs_name)s. Reason: %(err)s.') % {'mount': mount_path, 'fs_name': fs_name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) @utils.retry(exception.EMCVnxInvalidMoverID) def get(self, mover_name, is_vdm=True): mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False request = self._build_query_package( self.elt_maker.MountQueryParams( self.elt_maker.MoverOrVdm( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false' ) ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['objects'] if not response['objects']: return constants.STATUS_NOT_FOUND, None else: return constants.STATUS_OK, response['objects'] @utils.retry(exception.EMCVnxInvalidMoverID) def delete(self, mount_path, mover_name, is_vdm=True): mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.DeleteMount( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false', path=mount_path ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif self._is_mount_point_nonexistent(response): LOG.warning('Mount point %(mount)s on mover %(mover_name)s ' 'not found.', {'mount': mount_path, 'mover_name': mover_name}) return elif constants.STATUS_OK != response['maxSeverity']: message = (_('Failed to delete mount point %(mount)s on mover ' '%(mover_name)s. 
Reason: %(err)s.') % {'mount': mount_path, 'mover_name': mover_name, 'err': response}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def _is_mount_point_nonexistent(self, response): """Translate different status to ok/error status.""" msg_codes = self._get_problem_message_codes(response['problems']) message = self._get_problem_messages(response['problems']) for code, msg in zip(msg_codes, message): if ((code == constants.MSG_GENERAL_ERROR and msg.find( 'No such path or invalid operation') != -1) or code == constants.MSG_INVALID_VDM_ID or code == constants.MSG_INVALID_MOVER_ID): return True return False def _is_mount_point_already_existent(self, response): """Translate different status to ok/error status.""" msg_codes = self._get_problem_message_codes(response['problems']) message = self._get_problem_messages(response['problems']) for code, msg in zip(msg_codes, message): if ((code == constants.MSG_GENERAL_ERROR and msg.find( 'Mount already exists') != -1)): return True return False @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class Mover(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(Mover, self).__init__(conn, elt_maker, xml_parser, manager) self.mover_map = dict() self.mover_ref_map = dict() def get_ref(self, name, force=False): if name not in self.mover_ref_map or force: self.mover_ref_map.clear() request = self._build_query_package( self.elt_maker.MoverQueryParams( self.elt_maker.AspectSelection(movers='true') ) ) response = self._send_request(request) if constants.STATUS_ERROR == response['maxSeverity']: return response['maxSeverity'], response['problems'] for item in response['objects']: mover = {} property_map = ('name', ('id', 'mover')) self._copy_properties(item, mover, property_map) if mover: self.mover_ref_map[mover['name']] = mover if (name not in self.mover_ref_map or self.mover_ref_map[name]['id'] == ''): return constants.STATUS_NOT_FOUND, None return constants.STATUS_OK, self.mover_ref_map[name] def get(self, name, force=False): if name not in self.mover_map or force: if name in self.mover_ref_map and not force: mover_id = self.mover_ref_map[name]['id'] else: mover_id = self.get_id(name, force) if name in self.mover_map: self.mover_map.pop(name) request = self._build_query_package( self.elt_maker.MoverQueryParams( self.elt_maker.AspectSelection( moverDeduplicationSettings='true', moverDnsDomains='true', moverInterfaces='true', moverNetworkDevices='true', moverNisDomains='true', moverRoutes='true', movers='true', moverStatuses='true' ), mover=mover_id ) ) response = self._send_request(request) if constants.STATUS_ERROR == response['maxSeverity']: return response['maxSeverity'], response['problems'] if not response['objects']: return constants.STATUS_NOT_FOUND, response['problems'] mover = {} src = response['objects'][0] property_map = ( 'name', ('id', 'mover'), ('Status', 'maxSeverity'), 'version', 'uptime', 'role', ('interfaces', 'MoverInterface'), ('devices', 'LogicalNetworkDevice'), ('dns_domain', 'MoverDnsDomain'), ) self._copy_properties(src, mover, property_map) internal_devices = [] if mover['interfaces']: for interface in mover['interfaces']: if self._is_internal_device(interface['device']): internal_devices.append(interface) mover['interfaces'] = [var for var in mover['interfaces'] if var not in internal_devices] self.mover_map[name] = mover return constants.STATUS_OK, self.mover_map[name] def get_id(self, name, force=False): status, mover_ref = self.get_ref(name, force) if 
constants.STATUS_OK != status: message = (_("Failed to get mover by name %(name)s.") % {'name': name}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) return mover_ref['id'] def _is_internal_device(self, device): for device_type in ('mge', 'fxg', 'tks', 'fsn'): if device.find(device_type) == 0: return True return False def get_interconnect_id(self, source, destination): header = [ 'id', 'name', 'source_server', 'destination_system', 'destination_server', ] conn_id = None command_nas_cel = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_cel', '-interconnect', '-l', ] out, err = self._execute_cmd(command_nas_cel) lines = out.strip().split('\n') for line in lines: if line.strip().split() == header: LOG.info('Found the header of the command ' '/nas/bin/nas_cel -interconnect -l.') else: interconn = line.strip().split() if interconn[2] == source and interconn[4] == destination: conn_id = interconn[0] return conn_id def get_physical_devices(self, mover_name): physical_network_devices = [] cmd_sysconfig = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_sysconfig', mover_name, '-pci' ] out, err = self._execute_cmd(cmd_sysconfig) re_pattern = (r'0:\s*(?P\S+)\s*IRQ:\s*(?P\d+)\n' r'.*\n' r'\s*Link:\s*(?P[A-Za-z]+)') for device in re.finditer(re_pattern, out): if 'Up' in device.group('link'): physical_network_devices.append(device.group('name')) return physical_network_devices @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class VDM(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(VDM, self).__init__(conn, elt_maker, xml_parser, manager) self.vdm_map = dict() @utils.retry(exception.EMCVnxInvalidMoverID) def create(self, name, mover_name): mover_id = self._get_mover_id(mover_name, False) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.NewVdm(mover=mover_id, name=name) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif self._response_validation(response, constants.MSG_VDM_EXIST): LOG.warning("VDM %(name)s already exists. Skip the creation.", {'name': name}) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create VDM %(name)s on mover " "%(mover_name)s. Reason: %(err)s.") % {'name': name, 'mover_name': mover_name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def get(self, name): if name not in self.vdm_map: request = self._build_query_package( self.elt_maker.VdmQueryParams() ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['problems'] elif not response['objects']: return constants.STATUS_NOT_FOUND, response['problems'] for item in response['objects']: vdm = {} property_map = ( 'name', ('id', 'vdm'), 'state', ('host_mover_id', 'mover'), ('interfaces', 'Interfaces'), ) self._copy_properties(item, vdm, property_map) self.vdm_map[item['name']] = vdm if name not in self.vdm_map: return constants.STATUS_NOT_FOUND, None return constants.STATUS_OK, self.vdm_map[name] def delete(self, name): status, out = self.get(name) if constants.STATUS_NOT_FOUND == status: LOG.warning("VDM %s not found. Skip the deletion.", name) return elif constants.STATUS_OK != status: message = (_("Failed to get VDM by name %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) vdm_id = self.vdm_map[name]['id'] request = self._build_task_package( self.elt_maker.DeleteVdm(vdm=vdm_id) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete VDM %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) self.vdm_map.pop(name) def get_id(self, name): status, vdm = self.get(name) if constants.STATUS_OK != status: message = (_("Failed to get VDM by name %(name)s.") % {'name': name}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) return vdm['id'] def attach_nfs_interface(self, vdm_name, if_name): command_attach_nfs_interface = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_server', '-vdm', vdm_name, '-attach', if_name, ] self._execute_cmd(command_attach_nfs_interface) def detach_nfs_interface(self, vdm_name, if_name): command_detach_nfs_interface = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_server', '-vdm', vdm_name, '-detach', if_name, ] try: self._execute_cmd(command_detach_nfs_interface, check_exit_code=True) except processutils.ProcessExecutionError: interfaces = self.get_interfaces(vdm_name) if if_name not in interfaces['nfs']: LOG.debug("Failed to detach interface %(interface)s " "from mover %(mover_name)s.", {'interface': if_name, 'mover_name': vdm_name}) else: message = (_("Failed to detach interface %(interface)s " "from mover %(mover_name)s.") % {'interface': if_name, 'mover_name': vdm_name}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def get_interfaces(self, vdm_name): interfaces = { 'cifs': [], 'nfs': [], } re_pattern = (r'Interfaces to services mapping:' r'\s*(?P(\s*interface=.*)*)') command_get_interfaces = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_server', '-i', '-vdm', vdm_name, ] out, err = self._execute_cmd(command_get_interfaces) m = re.search(re_pattern, out) if m: if_list = m.group('interfaces').split('\n') for i in if_list: m_if = re.search(r'\s*interface=(?P.*)\s*:' r'\s*(?P.*)\s*', i) if m_if: if_name = m_if.group('if').strip() if 'cifs' == m_if.group('type') and if_name != '': interfaces['cifs'].append(if_name) elif (m_if.group('type') in ('vdm', 'nfs') and if_name != ''): interfaces['nfs'].append(if_name) return interfaces @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class Snapshot(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(Snapshot, self).__init__(conn, elt_maker, xml_parser, manager) self.snap_map = dict() def create(self, name, fs_name, pool_id, ckpt_size=None): fs_id = self.get_context('FileSystem').get_id(fs_name) if ckpt_size: elt_pool = self.elt_maker.StoragePool( pool=pool_id, size=six.text_type(ckpt_size) ) else: elt_pool = self.elt_maker.StoragePool(pool=pool_id) new_ckpt = self.elt_maker.NewCheckpoint( self.elt_maker.SpaceAllocationMethod( elt_pool ), checkpointOf=fs_id, name=name ) request = self._build_task_package(new_ckpt) response = self._send_request(request) if self._response_validation(response, constants.MSG_SNAP_EXIST): LOG.warning("Snapshot %(name)s already exists. " "Skip the creation.", {'name': name}) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create snapshot %(name)s on " "filesystem %(fs_name)s. 
Reason: %(err)s.") % {'name': name, 'fs_name': fs_name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def get(self, name): if name not in self.snap_map: request = self._build_query_package( self.elt_maker.CheckpointQueryParams( self.elt_maker.Alias(name=name) ) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['problems'] if not response['objects']: return constants.STATUS_NOT_FOUND, response['problems'] src = response['objects'][0] snap = {} property_map = ( 'name', ('id', 'checkpoint'), 'checkpointOf', 'state', ) self._copy_properties(src, snap, property_map) self.snap_map[name] = snap return constants.STATUS_OK, self.snap_map[name] def delete(self, name): status, out = self.get(name) if constants.STATUS_NOT_FOUND == status: LOG.warning("Snapshot %s not found. Skip the deletion.", name) return elif constants.STATUS_OK != status: message = (_("Failed to get snapshot by name %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) chpt_id = self.snap_map[name]['id'] request = self._build_task_package( self.elt_maker.DeleteCheckpoint(checkpoint=chpt_id) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete snapshot %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) self.snap_map.pop(name) def get_id(self, name): status, out = self.get(name) if constants.STATUS_OK != status: message = (_("Failed to get snapshot by %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) return self.snap_map[name]['id'] @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class MoverInterface(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(MoverInterface, self).__init__(conn, elt_maker, xml_parser, manager) @utils.retry(exception.EMCVnxInvalidMoverID) def create(self, interface): # Maximum of 32 characters for mover interface name name = interface['name'] if len(name) > 32: name = name[0:31] device_name = interface['device_name'] ip_addr = interface['ip'] mover_name = interface['mover_name'] net_mask = interface['net_mask'] vlan_id = interface['vlan_id'] if interface['vlan_id'] else -1 mover_id = self._get_mover_id(mover_name, False) params = dict(device=device_name, ipAddress=six.text_type(ip_addr), mover=mover_id, name=name, netMask=net_mask, vlanid=six.text_type(vlan_id)) if interface.get('ip_version') == 6: params['ipVersion'] = 'IPv6' if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.NewMoverInterface(**params) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif self._response_validation( response, constants.MSG_INTERFACE_NAME_EXIST): LOG.warning("Mover interface name %s already exists. " "Skip the creation.", name) return elif self._response_validation( response, constants.MSG_INTERFACE_EXIST): LOG.warning("Mover interface IP %s already exists. 
" "Skip the creation.", ip_addr) return elif self._response_validation( response, constants.MSG_INTERFACE_INVALID_VLAN_ID): # When fail to create a mover interface with the specified # vlan id, VNX will leave an interface with vlan id 0 in the # backend. So we should explicitly remove the interface. try: self.delete(six.text_type(ip_addr), mover_name) except exception.EMCVnxXMLAPIError: pass message = (_("Invalid vlan id %s. Other interfaces on this " "subnet are in a different vlan.") % vlan_id) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create mover interface %(interface)s. " "Reason: %(err)s.") % {'interface': interface, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def get(self, name, mover_name): # Maximum of 32 characters for mover interface name if len(name) > 32: name = name[0:31] status, mover = self.manager.getStorageContext('Mover').get( mover_name, True) if constants.STATUS_OK == status: for interface in mover['interfaces']: if name == interface['name']: return constants.STATUS_OK, interface return constants.STATUS_NOT_FOUND, None @utils.retry(exception.EMCVnxInvalidMoverID) def delete(self, ip_addr, mover_name): mover_id = self._get_mover_id(mover_name, False) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.DeleteMoverInterface( ipAddress=six.text_type(ip_addr), mover=mover_id ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif self._response_validation( response, constants.MSG_INTERFACE_NON_EXISTENT): LOG.warning("Mover interface %s not found. " "Skip the deletion.", ip_addr) return elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete mover interface %(ip)s on mover " "%(mover)s. Reason: %(err)s.") % {'ip': ip_addr, 'mover': mover_name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class DNSDomain(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(DNSDomain, self).__init__(conn, elt_maker, xml_parser, manager) @utils.retry(exception.EMCVnxInvalidMoverID) def create(self, mover_name, name, servers, protocol='udp'): mover_id = self._get_mover_id(mover_name, False) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.NewMoverDnsDomain( mover=mover_id, name=name, servers=servers, protocol=protocol ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create DNS domain %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) @utils.retry(exception.EMCVnxInvalidMoverID) def delete(self, mover_name, name): mover_id = self._get_mover_id(mover_name, False) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.DeleteMoverDnsDomain( mover=mover_id, name=name ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: LOG.warning("Failed to delete DNS domain %(name)s. " "Reason: %(err)s.", {'name': name, 'err': response['problems']}) @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class CIFSServer(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(CIFSServer, self).__init__(conn, elt_maker, xml_parser, manager) self.cifs_server_map = dict() @utils.retry(exception.EMCVnxInvalidMoverID) def create(self, server_args): compName = server_args['name'] # Maximum of 14 characters for netBIOS name name = server_args['name'][-14:] # Maximum of 12 characters for alias name alias_name = server_args['name'][-12:] interfaces = server_args['interface_ip'] domain_name = server_args['domain_name'] user_name = server_args['user_name'] password = server_args['password'] mover_name = server_args['mover_name'] is_vdm = server_args['is_vdm'] mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False alias_name_list = [self.elt_maker.li(alias_name)] request = self._build_task_package( self.elt_maker.NewW2KCifsServer( self.elt_maker.MoverOrVdm( mover=mover_id, moverIdIsVdm='true' if server_args['is_vdm'] else 'false' ), self.elt_maker.Aliases(*alias_name_list), self.elt_maker.JoinDomain(userName=user_name, password=password), compName=compName, domain=domain_name, interfaces=interfaces, name=name ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) if constants.STATUS_OK != response['maxSeverity']: status, out = self.get(compName, mover_name, is_vdm) if constants.STATUS_OK == status and out['domainJoined'] == 'true': return else: message = (_("Failed to create CIFS server %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) @utils.retry(exception.EMCVnxInvalidMoverID) def get_all(self, mover_name, is_vdm=True): mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False request = self._build_query_package( self.elt_maker.CifsServerQueryParams( self.elt_maker.MoverOrVdm( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false' ) ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['objects'] if mover_name in self.cifs_server_map: self.cifs_server_map.pop(mover_name) self.cifs_server_map[mover_name] = dict() for item in response['objects']: self.cifs_server_map[mover_name][item['compName'].lower()] = item return constants.STATUS_OK, self.cifs_server_map[mover_name] def get(self, name, mover_name, is_vdm=True, force=False): # name is compName name = name.lower() if (mover_name in self.cifs_server_map and name in self.cifs_server_map[mover_name]) and not force: return constants.STATUS_OK, self.cifs_server_map[mover_name][name] self.get_all(mover_name, is_vdm) if mover_name in self.cifs_server_map: for compName, server in self.cifs_server_map[mover_name].items(): if name == compName: return constants.STATUS_OK, server return constants.STATUS_NOT_FOUND, None @utils.retry(exception.EMCVnxInvalidMoverID) def modify(self, server_args): """Make CIFS server join or un-join the domain. :param server_args: Dictionary for CIFS server modification name: CIFS server name instead of compName join_domain: True for joining the domain, false for un-joining user_name: User name under which the domain is joined password: Password associated with the user name mover_name: mover or VDM name is_vdm: Boolean to indicate mover or VDM :raises exception.EMCVnxXMLAPIError: if modification fails. """ name = server_args['name'] join_domain = server_args['join_domain'] user_name = server_args['user_name'] password = server_args['password'] mover_name = server_args['mover_name'] if 'is_vdm' in server_args.keys(): is_vdm = server_args['is_vdm'] else: is_vdm = True mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.ModifyW2KCifsServer( self.elt_maker.DomainSetting( joinDomain='true' if join_domain else 'false', password=password, userName=user_name, ), mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false', name=name ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif self._ignore_modification_error(response, join_domain): return elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to modify CIFS server %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def _ignore_modification_error(self, response, join_domain): if self._response_validation(response, constants.MSG_JOIN_DOMAIN): return join_domain elif self._response_validation(response, constants.MSG_UNJOIN_DOMAIN): return not join_domain return False def delete(self, computer_name, mover_name, is_vdm=True): try: status, out = self.get( computer_name.lower(), mover_name, is_vdm, self.xml_retry) if constants.STATUS_NOT_FOUND == status: LOG.warning("CIFS server %(name)s on mover %(mover_name)s " "not found. Skip the deletion.", {'name': computer_name, 'mover_name': mover_name}) return except exception.EMCVnxXMLAPIError: LOG.warning("CIFS server %(name)s on mover %(mover_name)s " "not found. Skip the deletion.", {'name': computer_name, 'mover_name': mover_name}) return server_name = out['name'] mover_id = self._get_mover_id(mover_name, is_vdm) request = self._build_task_package( self.elt_maker.DeleteCifsServer( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false', name=server_name ) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete CIFS server %(name)s. " "Reason: %(err)s.") % {'name': computer_name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) self.cifs_server_map[mover_name].pop(computer_name) @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class CIFSShare(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(CIFSShare, self).__init__(conn, elt_maker, xml_parser, manager) self.cifs_share_map = dict() @utils.retry(exception.EMCVnxInvalidMoverID) def create(self, name, server_name, mover_name, is_vdm=True): mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False share_path = '/' + name request = self._build_task_package( self.elt_maker.NewCifsShare( self.elt_maker.MoverOrVdm( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false' ), self.elt_maker.CifsServers(self.elt_maker.li(server_name)), name=name, path=share_path ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create file share %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def get(self, name): if name not in self.cifs_share_map: request = self._build_query_package( self.elt_maker.CifsShareQueryParams(name=name) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['problems'] if not response['objects']: return constants.STATUS_NOT_FOUND, None self.cifs_share_map[name] = response['objects'][0] return constants.STATUS_OK, self.cifs_share_map[name] @utils.retry(exception.EMCVnxInvalidMoverID) def delete(self, name, mover_name, is_vdm=True): status, out = self.get(name) if constants.STATUS_NOT_FOUND == status: LOG.warning("CIFS share %s not found. Skip the deletion.", name) return elif constants.STATUS_OK != status: message = (_("Failed to get CIFS share by name %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False netbios_names = self.cifs_share_map[name]['CifsServers'] request = self._build_task_package( self.elt_maker.DeleteCifsShare( self.elt_maker.CifsServers(*map(lambda a: self.elt_maker.li(a), netbios_names)), mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false', name=name ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCVnxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete file system %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) self.cifs_share_map.pop(name) def disable_share_access(self, share_name, mover_name): cmd_str = 'sharesd %s set noaccess' % share_name disable_access = [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', mover_name, '-v', "%s" % cmd_str, ] try: self._execute_cmd(disable_access, check_exit_code=True) except processutils.ProcessExecutionError as expt: message = (_('Failed to disable the access to CIFS share ' '%(name)s. Reason: %(err)s.') % {'name': share_name, 'err': expt}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def allow_share_access(self, mover_name, share_name, user_name, domain, access=constants.CIFS_ACL_FULLCONTROL): account = user_name + "@" + domain allow_str = ('sharesd %(share_name)s grant %(account)s=%(access)s' % {'share_name': share_name, 'account': account, 'access': access}) allow_access = [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', mover_name, '-v', "%s" % allow_str, ] try: self._execute_cmd(allow_access, check_exit_code=True) except processutils.ProcessExecutionError as expt: dup_msg = re.compile(r'ACE for %(domain)s\\%(user)s unchanged' % {'domain': domain, 'user': user_name}, re.I) if re.search(dup_msg, expt.stdout): LOG.warning("Duplicate access control entry, " "skipping allow...") else: message = (_('Failed to allow the access %(access)s to ' 'CIFS share %(name)s. Reason: %(err)s.') % {'access': access, 'name': share_name, 'err': expt}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def deny_share_access(self, mover_name, share_name, user_name, domain, access=constants.CIFS_ACL_FULLCONTROL): account = user_name + "@" + domain revoke_str = ('sharesd %(share_name)s revoke %(account)s=%(access)s' % {'share_name': share_name, 'account': account, 'access': access}) allow_access = [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', mover_name, '-v', "%s" % revoke_str, ] try: self._execute_cmd(allow_access, check_exit_code=True) except processutils.ProcessExecutionError as expt: not_found_msg = re.compile( r'No ACE found for %(domain)s\\%(user)s' % {'domain': domain, 'user': user_name}, re.I) user_err_msg = re.compile( r'Cannot get mapping for %(domain)s\\%(user)s' % {'domain': domain, 'user': user_name}, re.I) if re.search(not_found_msg, expt.stdout): LOG.warning("No access control entry found, " "skipping deny...") elif re.search(user_err_msg, expt.stdout): LOG.warning("User not found on domain, skipping deny...") else: message = (_('Failed to deny the access %(access)s to ' 'CIFS share %(name)s. 
Reason: %(err)s.') % {'access': access, 'name': share_name, 'err': expt}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def get_share_access(self, mover_name, share_name): get_str = 'sharesd %s dump' % share_name get_access = [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', mover_name, '-v', "%s" % get_str, ] try: out, err = self._execute_cmd(get_access, check_exit_code=True) except processutils.ProcessExecutionError: msg = _('Failed to get access list of CIFS share %s.') % share_name LOG.exception(msg) raise exception.EMCVnxXMLAPIError(err=msg) ret = {} name_pattern = re.compile(r"Unix user '(.+?)'") access_pattern = re.compile(r"ALLOWED:(.+?):") name = None for line in out.splitlines(): if name is None: names = name_pattern.findall(line) if names: name = names[0].lower() else: accesses = access_pattern.findall(line) if accesses: ret[name] = accesses[0].lower() name = None return ret def clear_share_access(self, mover_name, share_name, domain, white_list_users): existing_users = self.get_share_access(mover_name, share_name) white_list_users_set = set(user.lower() for user in white_list_users) users_to_remove = set(existing_users.keys()) - white_list_users_set for user in users_to_remove: self.deny_share_access(mover_name, share_name, user, domain, existing_users[user]) return users_to_remove @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class NFSShare(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(NFSShare, self).__init__(conn, elt_maker, xml_parser, manager) self.nfs_share_map = {} def create(self, name, mover_name): share_path = '/' + name create_nfs_share_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', mover_name, '-option', 'access=-0.0.0.0/0.0.0.0', share_path, ] try: self._execute_cmd(create_nfs_share_cmd, check_exit_code=True) except processutils.ProcessExecutionError as expt: message = (_('Failed to create NFS share %(name)s on mover ' '%(mover_name)s. Reason: %(err)s.') % {'name': name, 'mover_name': mover_name, 'err': expt}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def delete(self, name, mover_name): path = '/' + name status, out = self.get(name, mover_name) if constants.STATUS_NOT_FOUND == status: LOG.warning("NFS share %s not found. Skip the deletion.", path) return delete_nfs_share_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', mover_name, '-unexport', '-perm', path, ] try: self._execute_cmd(delete_nfs_share_cmd, check_exit_code=True) except processutils.ProcessExecutionError as expt: message = (_('Failed to delete NFS share %(name)s on ' '%(mover_name)s. 
Reason: %(err)s.') % {'name': name, 'mover_name': mover_name, 'err': expt}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) self.nfs_share_map.pop(name) def get(self, name, mover_name, force=False, check_exit_code=False): if name in self.nfs_share_map and not force: return constants.STATUS_OK, self.nfs_share_map[name] path = '/' + name nfs_share = { "mover_name": '', "path": '', 'AccessHosts': [], 'RwHosts': [], 'RoHosts': [], 'RootHosts': [], 'readOnly': '', } nfs_query_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', mover_name, '-P', 'nfs', '-list', path, ] try: out, err = self._execute_cmd(nfs_query_cmd, check_exit_code=check_exit_code) except processutils.ProcessExecutionError as expt: dup_msg = (r'%(mover_name)s : No such file or directory' % {'mover_name': mover_name}) if re.search(dup_msg, expt.stdout): LOG.warning("NFS share %s not found.", name) return constants.STATUS_NOT_FOUND, None else: message = (_('Failed to list NFS share %(name)s on ' '%(mover_name)s. Reason: %(err)s.') % {'name': name, 'mover_name': mover_name, 'err': expt}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) re_exports = r'%s\s*:\s*\nexport\s*(.*)\n' % mover_name m = re.search(re_exports, out) if m is not None: nfs_share['path'] = path nfs_share['mover_name'] = mover_name export = m.group(1) fields = export.split(" ") for field in fields: field = field.strip() if field.startswith('rw='): nfs_share['RwHosts'] = enas_utils.parse_ipaddr(field[3:]) elif field.startswith('access='): nfs_share['AccessHosts'] = enas_utils.parse_ipaddr( field[7:]) elif field.startswith('root='): nfs_share['RootHosts'] = enas_utils.parse_ipaddr(field[5:]) elif field.startswith('ro='): nfs_share['RoHosts'] = enas_utils.parse_ipaddr(field[3:]) self.nfs_share_map[name] = nfs_share else: return constants.STATUS_NOT_FOUND, None return constants.STATUS_OK, self.nfs_share_map[name] def allow_share_access(self, share_name, host_ip, mover_name, access_level=const.ACCESS_LEVEL_RW): @utils.synchronized('emc-shareaccess-' + share_name) def do_allow_access(share_name, host_ip, mover_name, access_level): status, share = self.get(share_name, mover_name) if constants.STATUS_NOT_FOUND == status: message = (_('NFS share %s not found.') % share_name) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) changed = False rwhosts = share['RwHosts'] rohosts = share['RoHosts'] host_ip = enas_utils.convert_ipv6_format_if_needed(host_ip) if access_level == const.ACCESS_LEVEL_RW: if host_ip not in rwhosts: rwhosts.append(host_ip) changed = True if host_ip in rohosts: rohosts.remove(host_ip) changed = True if access_level == const.ACCESS_LEVEL_RO: if host_ip not in rohosts: rohosts.append(host_ip) changed = True if host_ip in rwhosts: rwhosts.remove(host_ip) changed = True roothosts = share['RootHosts'] if host_ip not in roothosts: roothosts.append(host_ip) changed = True accesshosts = share['AccessHosts'] if host_ip not in accesshosts: accesshosts.append(host_ip) changed = True if not changed: LOG.debug("%(host)s is already in access list of share " "%(name)s.", {'host': host_ip, 'name': share_name}) else: path = '/' + share_name self._set_share_access(path, mover_name, rwhosts, rohosts, roothosts, accesshosts) # Update self.nfs_share_map self.get(share_name, mover_name, force=True, check_exit_code=True) do_allow_access(share_name, host_ip, mover_name, access_level) def deny_share_access(self, share_name, host_ip, mover_name): @utils.synchronized('emc-shareaccess-' + share_name) def 
do_deny_access(share_name, host_ip, mover_name): status, share = self.get(share_name, mover_name) if constants.STATUS_OK != status: message = (_('Query nfs share %(path)s failed. ' 'Reason %(err)s.') % {'path': share_name, 'err': share}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) changed = False rwhosts = set(share['RwHosts']) if host_ip in rwhosts: rwhosts.remove(host_ip) changed = True roothosts = set(share['RootHosts']) if host_ip in roothosts: roothosts.remove(host_ip) changed = True accesshosts = set(share['AccessHosts']) if host_ip in accesshosts: accesshosts.remove(host_ip) changed = True rohosts = set(share['RoHosts']) if host_ip in rohosts: rohosts.remove(host_ip) changed = True if not changed: LOG.debug("%(host)s is already in access list of share " "%(name)s.", {'host': host_ip, 'name': share_name}) else: path = '/' + share_name self._set_share_access(path, mover_name, rwhosts, rohosts, roothosts, accesshosts) # Update self.nfs_share_map self.get(share_name, mover_name, force=True, check_exit_code=True) do_deny_access(share_name, host_ip, mover_name) def clear_share_access(self, share_name, mover_name, white_list_hosts): @utils.synchronized('emc-shareaccess-' + share_name) def do_clear_access(share_name, mover_name, white_list_hosts): def hosts_to_remove(orig_list): if white_list_hosts is None: ret = set() else: ret = set(white_list_hosts).intersection(set(orig_list)) return ret status, share = self.get(share_name, mover_name) if constants.STATUS_OK != status: message = (_('Query nfs share %(path)s failed. ' 'Reason %(err)s.') % {'path': share_name, 'err': status}) raise exception.EMCVnxXMLAPIError(err=message) self._set_share_access('/' + share_name, mover_name, hosts_to_remove(share['RwHosts']), hosts_to_remove(share['RoHosts']), hosts_to_remove(share['RootHosts']), hosts_to_remove(share['AccessHosts'])) # Update self.nfs_share_map self.get(share_name, mover_name, force=True, check_exit_code=True) do_clear_access(share_name, mover_name, white_list_hosts) def _set_share_access(self, path, mover_name, rw_hosts, ro_hosts, root_hosts, access_hosts): if access_hosts is None: access_hosts = set() try: access_hosts.remove('-0.0.0.0/0.0.0.0') except (ValueError, KeyError): pass access_str = ('access=%(access)s' % {'access': ':'.join( list(access_hosts) + ['-0.0.0.0/0.0.0.0'])}) if root_hosts: access_str += (',root=%(root)s' % {'root': ':'.join(root_hosts)}) if rw_hosts: access_str += ',rw=%(rw)s' % {'rw': ':'.join(rw_hosts)} if ro_hosts: access_str += ',ro=%(ro)s' % {'ro': ':'.join(ro_hosts)} set_nfs_share_access_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', mover_name, '-ignore', '-option', access_str, path, ] try: self._execute_cmd(set_nfs_share_access_cmd, check_exit_code=True) except processutils.ProcessExecutionError as expt: message = (_('Failed to set NFS share %(name)s access on ' '%(mover_name)s. Reason: %(err)s.') % {'name': path[1:], 'mover_name': mover_name, 'err': expt}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) manila-10.0.0/manila/share/drivers/dell_emc/plugins/vnx/connection.py0000664000175000017500000010546113656750227025671 0ustar zuulzuul00000000000000# Copyright (c) 2014 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """VNX backend for the EMC Manila driver.""" import copy import random import six from oslo_config import cfg from oslo_log import log from oslo_utils import excutils from oslo_utils import units from manila.common import constants as const from manila import exception from manila.i18n import _ from manila.share.drivers.dell_emc.common.enas import constants from manila.share.drivers.dell_emc.common.enas import utils as enas_utils from manila.share.drivers.dell_emc.plugins import base as driver from manila.share.drivers.dell_emc.plugins.vnx import object_manager as manager from manila.share import utils as share_utils from manila import utils """Version history: 1.0.0 - Initial version (Liberty) 2.0.0 - Bumped the version for Mitaka 3.0.0 - Bumped the version for Ocata 4.0.0 - Bumped the version for Pike 5.0.0 - Bumped the version for Queens 9.0.0 - Bumped the version for Ussuri 9.0.1 - Fixes bug 1871999: wrong format of export locations """ VERSION = "9.0.1" LOG = log.getLogger(__name__) VNX_OPTS = [ cfg.StrOpt('vnx_server_container', deprecated_name='emc_nas_server_container', help='Data mover to host the NAS server.'), cfg.ListOpt('vnx_share_data_pools', deprecated_name='emc_nas_pool_names', help='Comma separated list of pools that can be used to ' 'persist share data.'), cfg.ListOpt('vnx_ethernet_ports', deprecated_name='emc_interface_ports', help='Comma separated list of ports that can be used for ' 'share server interfaces. Members of the list ' 'can be Unix-style glob expressions.') ] CONF = cfg.CONF CONF.register_opts(VNX_OPTS) @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class VNXStorageConnection(driver.StorageConnection): """Implements VNX specific functionality for EMC Manila driver.""" @enas_utils.log_enter_exit def __init__(self, *args, **kwargs): super(VNXStorageConnection, self).__init__(*args, **kwargs) if 'configuration' in kwargs: kwargs['configuration'].append_config_values(VNX_OPTS) self.mover_name = None self.pools = None self.manager = None self.pool_conf = None self.reserved_percentage = None self.driver_handles_share_servers = True self.port_conf = None self.ipv6_implemented = True def create_share(self, context, share, share_server=None): """Create a share and export it based on protocol used.""" share_name = share['id'] size = share['size'] * units.Ki share_proto = share['share_proto'] # Validate the share protocol if share_proto.upper() not in ('NFS', 'CIFS'): raise exception.InvalidShare( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) # Get the pool name from share host field pool_name = share_utils.extract_host(share['host'], level='pool') if not pool_name: message = (_("Pool is not available in the share host %s.") % share['host']) raise exception.InvalidHost(reason=message) # Validate share server self._share_server_validation(share_server) if share_proto == 'CIFS': vdm_name = self._get_share_server_name(share_server) server_name = vdm_name # Check if CIFS server exists. 
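            # The CIFS server is provisioned per VDM by setup_server() when an
            # 'active_directory' security service is supplied; failing fast here
            # avoids allocating a file system that could not be exported over CIFS.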
status, server = self._get_context('CIFSServer').get(server_name, vdm_name) if status != constants.STATUS_OK: message = (_("CIFS server %s not found.") % server_name) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) self._allocate_container(share_name, size, share_server, pool_name) if share_proto == 'NFS': location = self._create_nfs_share(share_name, share_server) elif share_proto == 'CIFS': location = self._create_cifs_share(share_name, share_server) return [ {'path': location} ] def _share_server_validation(self, share_server): """Validate the share server.""" if not share_server: msg = _('Share server not provided') raise exception.InvalidInput(reason=msg) backend_details = share_server.get('backend_details') vdm = backend_details.get( 'share_server_name') if backend_details else None if vdm is None: message = _("No share server found.") LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def _allocate_container(self, share_name, size, share_server, pool_name): """Allocate file system for share.""" vdm_name = self._get_share_server_name(share_server) self._get_context('FileSystem').create( share_name, size, pool_name, vdm_name) def _allocate_container_from_snapshot(self, share, snapshot, share_server, pool_name): """Allocate file system from snapshot.""" vdm_name = self._get_share_server_name(share_server) interconn_id = self._get_context('Mover').get_interconnect_id( self.mover_name, self.mover_name) self._get_context('FileSystem').create_from_snapshot( share['id'], snapshot['id'], snapshot['share_id'], pool_name, vdm_name, interconn_id) nwe_size = share['size'] * units.Ki self._get_context('FileSystem').extend(share['id'], pool_name, nwe_size) @enas_utils.log_enter_exit def _create_cifs_share(self, share_name, share_server): """Create CIFS share.""" vdm_name = self._get_share_server_name(share_server) server_name = vdm_name # Get available CIFS Server and interface (one CIFS server per VDM) status, server = self._get_context('CIFSServer').get(server_name, vdm_name) if 'interfaces' not in server or len(server['interfaces']) == 0: message = (_("CIFS server %s doesn't have interface, " "so the share is inaccessible.") % server['compName']) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) interface = enas_utils.export_unc_path(server['interfaces'][0]) self._get_context('CIFSShare').create(share_name, server['name'], vdm_name) self._get_context('CIFSShare').disable_share_access(share_name, vdm_name) location = (r'\\%(interface)s\%(name)s' % {'interface': interface, 'name': share_name}) return location @enas_utils.log_enter_exit def _create_nfs_share(self, share_name, share_server): """Create NFS share.""" vdm_name = self._get_share_server_name(share_server) self._get_context('NFSShare').create(share_name, vdm_name) nfs_if = enas_utils.convert_ipv6_format_if_needed( share_server['backend_details']['nfs_if']) return ('%(nfs_if)s:/%(share_name)s' % {'nfs_if': nfs_if, 'share_name': share_name}) def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Create a share from a snapshot - clone a snapshot.""" share_name = share['id'] share_proto = share['share_proto'] # Validate the share protocol if share_proto.upper() not in ('NFS', 'CIFS'): raise exception.InvalidShare( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) # Get the pool name from share host field pool_name = share_utils.extract_host(share['host'], level='pool') if not pool_name: message = (_("Pool is not available in the 
share host %s.") % share['host']) raise exception.InvalidHost(reason=message) self._share_server_validation(share_server) self._allocate_container_from_snapshot( share, snapshot, share_server, pool_name) nfs_if = enas_utils.convert_ipv6_format_if_needed( share_server['backend_details']['nfs_if']) if share_proto == 'NFS': self._create_nfs_share(share_name, share_server) location = ('%(nfs_if)s:/%(share_name)s' % {'nfs_if': nfs_if, 'share_name': share_name}) elif share_proto == 'CIFS': location = self._create_cifs_share(share_name, share_server) return [ {'path': location} ] def create_snapshot(self, context, snapshot, share_server=None): """Create snapshot from share.""" share_name = snapshot['share_id'] status, filesystem = self._get_context('FileSystem').get(share_name) if status != constants.STATUS_OK: message = (_("File System %s not found.") % share_name) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) pool_id = filesystem['pools_id'][0] self._get_context('Snapshot').create(snapshot['id'], snapshot['share_id'], pool_id) def delete_share(self, context, share, share_server=None): """Delete a share.""" if share_server is None: LOG.warning("Driver does not support share deletion without " "share network specified. Return directly because " "there is nothing to clean.") return share_proto = share['share_proto'] if share_proto == 'NFS': self._delete_nfs_share(share, share_server) elif share_proto == 'CIFS': self._delete_cifs_share(share, share_server) else: raise exception.InvalidShare( reason='Unsupported share type') @enas_utils.log_enter_exit def _delete_cifs_share(self, share, share_server): """Delete CIFS share.""" vdm_name = self._get_share_server_name(share_server) name = share['id'] self._get_context('CIFSShare').delete(name, vdm_name) self._deallocate_container(name, vdm_name) @enas_utils.log_enter_exit def _delete_nfs_share(self, share, share_server): """Delete NFS share.""" vdm_name = self._get_share_server_name(share_server) name = share['id'] self._get_context('NFSShare').delete(name, vdm_name) self._deallocate_container(name, vdm_name) @enas_utils.log_enter_exit def _deallocate_container(self, share_name, vdm_name): """Delete underneath objects of the share.""" path = '/' + share_name try: # Delete mount point self._get_context('MountPoint').delete(path, vdm_name) except Exception: LOG.debug("Skip the failure of mount point %s deletion.", path) try: # Delete file system self._get_context('FileSystem').delete(share_name) except Exception: LOG.debug("Skip the failure of file system %s deletion.", share_name) def delete_snapshot(self, context, snapshot, share_server=None): """Delete a snapshot.""" self._get_context('Snapshot').delete(snapshot['id']) def ensure_share(self, context, share, share_server=None): """Ensure that the share is exported.""" def extend_share(self, share, new_size, share_server=None): # Get the pool name from share host field pool_name = share_utils.extract_host(share['host'], level='pool') if not pool_name: message = (_("Pool is not available in the share host %s.") % share['host']) raise exception.InvalidHost(reason=message) share_name = share['id'] self._get_context('FileSystem').extend( share_name, pool_name, new_size * units.Ki) def allow_access(self, context, share, access, share_server=None): """Allow access to a share.""" access_level = access['access_level'] if access_level not in const.ACCESS_LEVELS: raise exception.InvalidShareAccessLevel(level=access_level) share_proto = share['share_proto'] if share_proto == 'NFS': 
self._nfs_allow_access(context, share, access, share_server) elif share_proto == 'CIFS': self._cifs_allow_access(context, share, access, share_server) else: raise exception.InvalidShare( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) @enas_utils.log_enter_exit def _cifs_allow_access(self, context, share, access, share_server): """Allow access to CIFS share.""" vdm_name = self._get_share_server_name(share_server) share_name = share['id'] if access['access_type'] != 'user': reason = _('Only user access type allowed for CIFS share') raise exception.InvalidShareAccess(reason=reason) user_name = access['access_to'] access_level = access['access_level'] if access_level == const.ACCESS_LEVEL_RW: cifs_access = constants.CIFS_ACL_FULLCONTROL else: cifs_access = constants.CIFS_ACL_READ # Check if CIFS server exists. server_name = vdm_name status, server = self._get_context('CIFSServer').get(server_name, vdm_name) if status != constants.STATUS_OK: message = (_("CIFS server %s not found.") % server_name) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) self._get_context('CIFSShare').allow_share_access( vdm_name, share_name, user_name, server['domain'], access=cifs_access) @enas_utils.log_enter_exit def _nfs_allow_access(self, context, share, access, share_server): """Allow access to NFS share.""" vdm_name = self._get_share_server_name(share_server) access_type = access['access_type'] if access_type != 'ip': reason = _('Only ip access type allowed.') raise exception.InvalidShareAccess(reason=reason) host_ip = access['access_to'] access_level = access['access_level'] self._get_context('NFSShare').allow_share_access( share['id'], host_ip, vdm_name, access_level) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): # deleting rules for rule in delete_rules: self.deny_access(context, share, rule, share_server) # adding rules for rule in add_rules: self.allow_access(context, share, rule, share_server) # recovery mode if not (add_rules or delete_rules): white_list = [] for rule in access_rules: self.allow_access(context, share, rule, share_server) white_list.append( enas_utils.convert_ipv6_format_if_needed( rule['access_to'])) self.clear_access(share, share_server, white_list) def clear_access(self, share, share_server, white_list): share_proto = share['share_proto'].upper() share_name = share['id'] if share_proto == 'CIFS': self._cifs_clear_access(share_name, share_server, white_list) elif share_proto == 'NFS': self._nfs_clear_access(share_name, share_server, white_list) @enas_utils.log_enter_exit def _cifs_clear_access(self, share_name, share_server, white_list): """Clear access for CIFS share except hosts in the white list.""" vdm_name = self._get_share_server_name(share_server) # Check if CIFS server exists. server_name = vdm_name status, server = self._get_context('CIFSServer').get(server_name, vdm_name) if status != constants.STATUS_OK: message = (_("CIFS server %(server_name)s has issue. 
" "Detail: %(status)s") % {'server_name': server_name, 'status': status}) raise exception.EMCVnxXMLAPIError(err=message) self._get_context('CIFSShare').clear_share_access( share_name=share_name, mover_name=vdm_name, domain=server['domain'], white_list_users=white_list) @enas_utils.log_enter_exit def _nfs_clear_access(self, share_name, share_server, white_list): """Clear access for NFS share except hosts in the white list.""" self._get_context('NFSShare').clear_share_access( share_name=share_name, mover_name=self._get_share_server_name(share_server), white_list_hosts=white_list) def deny_access(self, context, share, access, share_server=None): """Deny access to a share.""" share_proto = share['share_proto'] if share_proto == 'NFS': self._nfs_deny_access(share, access, share_server) elif share_proto == 'CIFS': self._cifs_deny_access(share, access, share_server) else: raise exception.InvalidShare( reason=_('Unsupported share type')) @enas_utils.log_enter_exit def _cifs_deny_access(self, share, access, share_server): """Deny access to CIFS share.""" vdm_name = self._get_share_server_name(share_server) share_name = share['id'] if access['access_type'] != 'user': reason = _('Only user access type allowed for CIFS share') raise exception.InvalidShareAccess(reason=reason) user_name = access['access_to'] access_level = access['access_level'] if access_level == const.ACCESS_LEVEL_RW: cifs_access = constants.CIFS_ACL_FULLCONTROL else: cifs_access = constants.CIFS_ACL_READ # Check if CIFS server exists. server_name = vdm_name status, server = self._get_context('CIFSServer').get(server_name, vdm_name) if status != constants.STATUS_OK: message = (_("CIFS server %s not found.") % server_name) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) self._get_context('CIFSShare').deny_share_access( vdm_name, share_name, user_name, server['domain'], access=cifs_access) @enas_utils.log_enter_exit def _nfs_deny_access(self, share, access, share_server): """Deny access to NFS share.""" vdm_name = self._get_share_server_name(share_server) access_type = access['access_type'] if access_type != 'ip': reason = _('Only ip access type allowed.') raise exception.InvalidShareAccess(reason=reason) host_ip = enas_utils.convert_ipv6_format_if_needed(access['access_to']) self._get_context('NFSShare').deny_share_access(share['id'], host_ip, vdm_name) def check_for_setup_error(self): """Check for setup error.""" # To verify the input from Manila configuration status, out = self._get_context('Mover').get_ref(self.mover_name, True) if constants.STATUS_ERROR == status: message = (_("Could not find Data Mover by name: %s.") % self.mover_name) LOG.error(message) raise exception.InvalidParameterValue(err=message) self.pools = self._get_managed_storage_pools(self.pool_conf) def _get_managed_storage_pools(self, pools): matched_pools = set() if pools: # Get the real pools from the backend storage status, backend_pools = self._get_context('StoragePool').get_all() if status != constants.STATUS_OK: message = (_("Failed to get storage pool information. " "Reason: %s") % backend_pools) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) real_pools = set([item for item in backend_pools]) conf_pools = set([item.strip() for item in pools]) matched_pools, unmatched_pools = enas_utils.do_match_any( real_pools, conf_pools) if not matched_pools: msg = (_("None of the specified storage pools to be managed " "exist. Please check your configuration " "vnx_share_data_pools in manila.conf. 
" "The available pools in the backend are %s.") % ",".join(real_pools)) raise exception.InvalidParameterValue(err=msg) LOG.info("Storage pools: %s will be managed.", ",".join(matched_pools)) else: LOG.debug("No storage pool is specified, so all pools " "in storage system will be managed.") return matched_pools def connect(self, emc_share_driver, context): """Connect to VNX NAS server.""" config = emc_share_driver.configuration config.append_config_values(VNX_OPTS) self.mover_name = config.vnx_server_container self.pool_conf = config.safe_get('vnx_share_data_pools') self.reserved_percentage = config.safe_get('reserved_share_percentage') if self.reserved_percentage is None: self.reserved_percentage = 0 self.manager = manager.StorageObjectManager(config) self.port_conf = config.safe_get('vnx_ethernet_ports') def get_managed_ports(self): # Get the real ports(devices) list from the backend storage real_ports = self._get_physical_devices(self.mover_name) if not self.port_conf: LOG.debug("No ports are specified, so any of the ports on the " "Data Mover can be used.") return real_ports matched_ports, unmanaged_ports = enas_utils.do_match_any( real_ports, self.port_conf) if not matched_ports: msg = (_("None of the specified network ports exist. " "Please check your configuration vnx_ethernet_ports " "in manila.conf. The available ports on the Data Mover " "are %s.") % ",".join(real_ports)) raise exception.BadConfigurationException(reason=msg) LOG.debug("Ports: %s can be used.", ",".join(matched_ports)) return list(matched_ports) def update_share_stats(self, stats_dict): """Communicate with EMCNASClient to get the stats.""" stats_dict['driver_version'] = VERSION self._get_context('Mover').get_ref(self.mover_name, True) stats_dict['pools'] = [] status, pools = self._get_context('StoragePool').get_all() for name, pool in pools.items(): if not self.pools or pool['name'] in self.pools: total_size = float(pool['total_size']) used_size = float(pool['used_size']) pool_stat = dict( pool_name=pool['name'], total_capacity_gb=total_size, free_capacity_gb=total_size - used_size, qos=False, reserved_percentage=self.reserved_percentage, ) stats_dict['pools'].append(pool_stat) if not stats_dict['pools']: message = _("Failed to update storage pool.") LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) def get_pool(self, share): """Get the pool name of the share.""" share_name = share['id'] status, filesystem = self._get_context('FileSystem').get(share_name) if status != constants.STATUS_OK: message = (_("File System %(name)s not found. " "Reason: %(err)s") % {'name': share_name, 'err': filesystem}) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) pool_id = filesystem['pools_id'][0] # Get the real pools from the backend storage status, backend_pools = self._get_context('StoragePool').get_all() if status != constants.STATUS_OK: message = (_("Failed to get storage pool information. " "Reason: %s") % backend_pools) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) for name, pool_info in backend_pools.items(): if pool_info['id'] == pool_id: return name available_pools = [item for item in backend_pools] message = (_("No matched pool name for share: %(share)s. 
" "Available pools: %(pools)s") % {'share': share_name, 'pools': available_pools}) raise exception.EMCVnxXMLAPIError(err=message) def get_network_allocations_number(self): """Returns number of network allocations for creating VIFs.""" return constants.IP_ALLOCATIONS def setup_server(self, network_info, metadata=None): """Set up and configures share server with given network parameters.""" # Only support single security service with type 'active_directory' vdm_name = network_info['server_id'] vlan_id = network_info['segmentation_id'] active_directory = None allocated_interfaces = [] if network_info.get('security_services'): is_valid, active_directory = self._get_valid_security_service( network_info['security_services']) if not is_valid: raise exception.EMCVnxXMLAPIError(err=active_directory) try: if not self._vdm_exist(vdm_name): LOG.debug('Share server %s not found, creating ' 'share server...', vdm_name) self._get_context('VDM').create(vdm_name, self.mover_name) devices = self.get_managed_ports() for net_info in network_info['network_allocations']: random.shuffle(devices) ip_version = net_info['ip_version'] interface = { 'name': net_info['id'][-12:], 'device_name': devices[0], 'ip': net_info['ip_address'], 'mover_name': self.mover_name, 'vlan_id': vlan_id if vlan_id else -1, } if ip_version == 6: interface['ip_version'] = ip_version interface['net_mask'] = six.text_type( utils.cidr_to_prefixlen(network_info['cidr'])) else: interface['net_mask'] = utils.cidr_to_netmask( network_info['cidr']) self._get_context('MoverInterface').create(interface) allocated_interfaces.append(interface) cifs_interface = allocated_interfaces[0] nfs_interface = allocated_interfaces[1] if active_directory: self._configure_active_directory( active_directory, vdm_name, cifs_interface) self._get_context('VDM').attach_nfs_interface( vdm_name, nfs_interface['name']) return { 'share_server_name': vdm_name, 'cifs_if': cifs_interface['ip'], 'nfs_if': nfs_interface['ip'], } except Exception: with excutils.save_and_reraise_exception(): LOG.exception('Could not setup server.') server_details = self._construct_backend_details( vdm_name, allocated_interfaces) self.teardown_server( server_details, network_info['security_services']) def _construct_backend_details(self, vdm_name, interfaces): if_number = len(interfaces) cifs_if = interfaces[0]['ip'] if if_number > 0 else None nfs_if = interfaces[1]['ip'] if if_number > 1 else None return { 'share_server_name': vdm_name, 'cifs_if': cifs_if, 'nfs_if': nfs_if, } @enas_utils.log_enter_exit def _vdm_exist(self, name): status, out = self._get_context('VDM').get(name) if constants.STATUS_OK != status: return False return True def _get_physical_devices(self, mover_name): """Get a proper network device to create interface.""" devices = self._get_context('Mover').get_physical_devices(mover_name) if not devices: message = (_("Could not get physical device port on mover %s.") % self.mover_name) LOG.error(message) raise exception.EMCVnxXMLAPIError(err=message) return devices def _configure_active_directory( self, security_service, vdm_name, interface): domain = security_service['domain'] server = security_service['dns_ip'] self._get_context('DNSDomain').create(self.mover_name, domain, server) cifs_server_args = { 'name': vdm_name, 'interface_ip': interface['ip'], 'domain_name': security_service['domain'], 'user_name': security_service['user'], 'password': security_service['password'], 'mover_name': vdm_name, 'is_vdm': True, } self._get_context('CIFSServer').create(cifs_server_args) def 
teardown_server(self, server_details, security_services=None): """Teardown share server.""" if not server_details: LOG.debug('Server details are empty.') return vdm_name = server_details.get('share_server_name') if not vdm_name: LOG.debug('No share server found in server details.') return cifs_if = server_details.get('cifs_if') nfs_if = server_details.get('nfs_if') status, vdm = self._get_context('VDM').get(vdm_name) if constants.STATUS_OK != status: LOG.debug('Share server %s not found.', vdm_name) return interfaces = self._get_context('VDM').get_interfaces(vdm_name) for if_name in interfaces['nfs']: self._get_context('VDM').detach_nfs_interface(vdm_name, if_name) if security_services: # Only support single security service with type 'active_directory' is_valid, active_directory = self._get_valid_security_service( security_services) if is_valid: status, servers = self._get_context('CIFSServer').get_all( vdm_name) if constants.STATUS_OK != status: LOG.error('Could not find CIFS server by name: %s.', vdm_name) else: cifs_servers = copy.deepcopy(servers) for name, server in cifs_servers.items(): # Unjoin CIFS Server from domain cifs_server_args = { 'name': server['name'], 'join_domain': False, 'user_name': active_directory['user'], 'password': active_directory['password'], 'mover_name': vdm_name, 'is_vdm': True, } try: self._get_context('CIFSServer').modify( cifs_server_args) except exception.EMCVnxXMLAPIError as expt: LOG.debug("Failed to modify CIFS server " "%(server)s. Reason: %(err)s.", {'server': server, 'err': expt}) self._get_context('CIFSServer').delete(name, vdm_name) # Delete interface from Data Mover if cifs_if: self._get_context('MoverInterface').delete(cifs_if, self.mover_name) if nfs_if: self._get_context('MoverInterface').delete(nfs_if, self.mover_name) # Delete Virtual Data Mover self._get_context('VDM').delete(vdm_name) def _get_valid_security_service(self, security_services): """Validate security services and return a supported security service. :param security_services: :returns: (, ) -- is true to indicate security_services includes zero or single security service for active directory. Otherwise, it would return false. return error message when is false. Otherwise, it will return zero or single security service for active directory. """ # Only support single security service with type 'active_directory' service_number = len(security_services) if (service_number > 1 or security_services[0]['type'] != 'active_directory'): return False, _("Unsupported security services. " "Only support single security service and " "only support type 'active_directory'") return True, security_services[0] def _get_share_server_name(self, share_server): try: return share_server['backend_details']['share_server_name'] except Exception: LOG.debug("Didn't get share server name from share_server %s.", share_server) return share_server['id'] def _get_context(self, type): return self.manager.getStorageContext(type) manila-10.0.0/manila/share/drivers/dell_emc/plugins/powermax/0000775000175000017500000000000013656750362024200 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/plugins/powermax/__init__.py0000664000175000017500000000000013656750227026277 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/plugins/powermax/object_manager.py0000664000175000017500000023165413656750227027525 0ustar zuulzuul00000000000000# Copyright (c) 2016 Dell Inc. or its subsidiaries. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import re from lxml import builder from lxml import etree as ET from oslo_concurrency import processutils from oslo_log import log import six from manila.common import constants as const from manila import exception from manila.i18n import _ from manila.share.drivers.dell_emc.common.enas import connector from manila.share.drivers.dell_emc.common.enas import constants from manila.share.drivers.dell_emc.common.enas import utils as powermax_utils from manila.share.drivers.dell_emc.common.enas import xml_api_parser as parser from manila import utils LOG = log.getLogger(__name__) @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class StorageObjectManager(object): def __init__(self, configuration): self.context = {} self.connectors = {} self.connectors['XML'] = connector.XMLAPIConnector(configuration) self.connectors['SSH'] = connector.SSHConnector(configuration) elt_maker = builder.ElementMaker(nsmap={None: constants.XML_NAMESPACE}) xml_parser = parser.XMLAPIParser() obj_types = StorageObject.__subclasses__() # pylint: disable=no-member for item in obj_types: key = item.__name__ self.context[key] = eval(key)(self.connectors, elt_maker, xml_parser, self) def getStorageContext(self, type): if type in self.context: return self.context[type] else: message = (_("Invalid storage object type %s.") % type) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) class StorageObject(object): def __init__(self, conn, elt_maker, xml_parser, manager): self.conn = conn self.elt_maker = elt_maker self.xml_parser = xml_parser self.manager = manager self.xml_retry = False self.ssh_retry_patterns = [ ( constants.SSH_DEFAULT_RETRY_PATTERN, exception.EMCPowerMaxLockRequiredException() ), ] def _translate_response(self, response): """Translate different status to ok/error status.""" if (constants.STATUS_OK == response['maxSeverity'] or constants.STATUS_ERROR == response['maxSeverity']): return old_Severity = response['maxSeverity'] if response['maxSeverity'] in (constants.STATUS_DEBUG, constants.STATUS_INFO): response['maxSeverity'] = constants.STATUS_OK LOG.warning("Translated status from %(old)s to %(new)s. 
" "Message: %(info)s.", {'old': old_Severity, 'new': response['maxSeverity'], 'info': response}) def _response_validation(self, response, error_code): """Validates whether a response includes a certain error code.""" msg_codes = self._get_problem_message_codes(response['problems']) for code in msg_codes: if code == error_code: return True return False def _get_problem_message_codes(self, problems): message_codes = [] for problem in problems: if 'messageCode' in problem: message_codes.append(problem['messageCode']) return message_codes def _get_problem_messages(self, problems): messages = [] for problem in problems: if 'message' in problem: messages.append(problem['message']) return messages def _get_problem_diags(self, problems): diags = [] for problem in problems: if 'Diagnostics' in problem: diags.append(problem['Diagnostics']) return diags def _build_query_package(self, body): return self.elt_maker.RequestPacket( self.elt_maker.Request( self.elt_maker.Query(body) ) ) def _build_task_package(self, body): return self.elt_maker.RequestPacket( self.elt_maker.Request( self.elt_maker.StartTask(body, timeout='300') ) ) @utils.retry(exception.EMCPowerMaxLockRequiredException) def _send_request(self, req): req_xml = constants.XML_HEADER + ET.tostring(req).decode('utf-8') rsp_xml = self.conn['XML'].request(str(req_xml)) response = self.xml_parser.parse(rsp_xml) self._translate_response(response) if (response['maxSeverity'] != constants.STATUS_OK and self._response_validation(response, constants.MSG_CODE_RETRY)): raise exception.EMCPowerMaxLockRequiredException return response @utils.retry(exception.EMCPowerMaxLockRequiredException) def _execute_cmd(self, cmd, retry_patterns=None, check_exit_code=False): """Execute NAS command via SSH. :param retry_patterns: list of tuples,where each tuple contains a reg expression and an exception. :param check_exit_code: Boolean. Raise processutils.ProcessExecutionError if the command failed to execute and this parameter is set to True. 
""" if retry_patterns is None: retry_patterns = self.ssh_retry_patterns try: out, err = self.conn['SSH'].run_ssh(cmd, check_exit_code) except processutils.ProcessExecutionError as e: for pattern in retry_patterns: if re.search(pattern[0], e.stdout): raise pattern[1] raise return out, err def _copy_properties(self, source, target, property_map, deep_copy=True): for prop in property_map: if isinstance(prop, tuple): target_key, src_key = prop else: target_key = src_key = prop if src_key in source: if deep_copy and isinstance(source[src_key], list): target[target_key] = copy.deepcopy(source[src_key]) else: target[target_key] = source[src_key] else: target[target_key] = None def _get_mover_id(self, mover_name, is_vdm): if is_vdm: return self.get_context('VDM').get_id(mover_name) else: return self.get_context('Mover').get_id(mover_name, self.xml_retry) def get_context(self, type): return self.manager.getStorageContext(type) @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class FileSystem(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(FileSystem, self).__init__(conn, elt_maker, xml_parser, manager) self.filesystem_map = {} @utils.retry(exception.EMCPowerMaxInvalidMoverID) def create(self, name, size, pool_name, mover_name, is_vdm=True): pool_id = self.get_context('StoragePool').get_id(pool_name) mover_id = self._get_mover_id(mover_name, is_vdm) if is_vdm: mover = self.elt_maker.Vdm(vdm=mover_id) else: mover = self.elt_maker.Mover(mover=mover_id) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.NewFileSystem( mover, self.elt_maker.StoragePool( pool=pool_id, size=six.text_type(size), mayContainSlices='true' ), name=name ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif self._response_validation( response, constants.MSG_FILESYSTEM_EXIST): LOG.warning("File system %s already exists. " "Skip the creation.", name) return elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create file system %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def get(self, name): if name not in self.filesystem_map: request = self._build_query_package( self.elt_maker.FileSystemQueryParams( self.elt_maker.AspectSelection( fileSystems='true', fileSystemCapacityInfos='true' ), self.elt_maker.Alias(name=name) ) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: if self._is_filesystem_nonexistent(response): return constants.STATUS_NOT_FOUND, response['problems'] else: return response['maxSeverity'], response['problems'] if not response['objects']: return constants.STATUS_NOT_FOUND, response['problems'] src = response['objects'][0] filesystem = {} property_map = ( 'name', ('pools_id', 'storagePools'), ('volume_id', 'volume'), ('size', 'volumeSize'), ('id', 'fileSystem'), 'type', 'dataServicePolicies', ) self._copy_properties(src, filesystem, property_map) self.filesystem_map[name] = filesystem return constants.STATUS_OK, self.filesystem_map[name] def delete(self, name): status, out = self.get(name) if constants.STATUS_NOT_FOUND == status: LOG.warning("File system %s not found. 
Skip the deletion.", name) return elif constants.STATUS_OK != status: message = (_("Failed to get file system by name %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) enas_id = self.filesystem_map[name]['id'] request = self._build_task_package( self.elt_maker.DeleteFileSystem(fileSystem=enas_id) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete file system %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) self.filesystem_map.pop(name) def extend(self, name, pool_name, new_size): status, out = self.get(name) if constants.STATUS_OK != status: message = (_("Failed to get file system by name %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) enas_id = out['id'] size = int(out['size']) if new_size < size: message = (_("Failed to extend file system %(name)s because new " "size %(new_size)d is smaller than old size " "%(size)d.") % {'name': name, 'new_size': new_size, 'size': size}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) elif new_size == size: return pool_id = self.get_context('StoragePool').get_id(pool_name) request = self._build_task_package( self.elt_maker.ExtendFileSystem( self.elt_maker.StoragePool( pool=pool_id, size=six.text_type(new_size - size) ), fileSystem=enas_id, ) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to extend file system %(name)s to new size " "%(new_size)d. Reason: %(err)s.") % {'name': name, 'new_size': new_size, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def get_id(self, name): status, out = self.get(name) if constants.STATUS_OK != status: message = (_("Failed to get file system by name %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) return self.filesystem_map[name]['id'] def _is_filesystem_nonexistent(self, response): """Translate different status to ok/error status.""" msg_codes = self._get_problem_message_codes(response['problems']) diags = self._get_problem_diags(response['problems']) for code, diagnose in zip(msg_codes, diags): if (code == constants.MSG_FILESYSTEM_NOT_FOUND and diagnose.find('File system not found.') != -1): return True return False def create_from_snapshot(self, name, snap_name, source_fs_name, pool_name, mover_name, connect_id): create_fs_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_fs', '-name', name, '-type', 'uxfs', '-create', 'samesize=' + source_fs_name, 'pool=%s' % pool_name, 'storage=SINGLE', 'worm=off', '-thin', 'no', '-option', 'slice=y', ] self._execute_cmd(create_fs_cmd) ro_mount_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_mount', mover_name, '-option', 'ro', name, '/%s' % name, ] self._execute_cmd(ro_mount_cmd) session_name = name + ':' + snap_name copy_ckpt_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_copy', '-name', session_name[0:63], '-source', '-ckpt', snap_name, '-destination', '-fs', name, '-interconnect', 'id=%s' % connect_id, '-overwrite_destination', '-full_copy', ] try: self._execute_cmd(copy_ckpt_cmd, check_exit_code=True) except processutils.ProcessExecutionError as expt: LOG.error("Failed to copy content from snapshot %(snap)s to " "file system %(filesystem)s. Reason: %(err)s.", {'snap': snap_name, 'filesystem': name, 'err': expt}) # When an error happens during nas_copy, we need to continue # deleting the checkpoint of the target file system if it exists. query_fs_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_fs', '-info', name, ] out, err = self._execute_cmd(query_fs_cmd) re_ckpts = r'ckpts\s*=\s*(.*)\s*' m = re.search(re_ckpts, out) if m is not None: ckpts = m.group(1) for ckpt in re.split(',', ckpts): umount_ckpt_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_umount', mover_name, '-perm', ckpt, ] self._execute_cmd(umount_ckpt_cmd) delete_ckpt_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_fs', '-delete', ckpt, '-Force', ] self._execute_cmd(delete_ckpt_cmd) rw_mount_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_mount', mover_name, '-option', 'rw', name, '/%s' % name, ] self._execute_cmd(rw_mount_cmd) @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class StoragePool(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(StoragePool, self).__init__(conn, elt_maker, xml_parser, manager) self.pool_map = {} def get(self, name, force=False): if name not in self.pool_map or force: status, out = self.get_all() if constants.STATUS_OK != status: return status, out if name not in self.pool_map: return constants.STATUS_NOT_FOUND, None return constants.STATUS_OK, self.pool_map[name] def get_all(self): self.pool_map.clear() request = self._build_query_package( self.elt_maker.StoragePoolQueryParams() ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['problems'] if not response['objects']: return constants.STATUS_NOT_FOUND, response['problems'] for item in response['objects']: pool = {} property_map = ( 'name', ('movers_id', 'movers'), ('total_size', 'autoSize'), ('used_size', 'usedSize'), 'diskType', 'dataServicePolicies', ('id', 'pool'), ) self._copy_properties(item, pool, property_map) self.pool_map[item['name']] = pool 
return constants.STATUS_OK, self.pool_map def get_id(self, name): status, out = self.get(name) if constants.STATUS_OK != status: message = (_("Failed to get storage pool by name %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) return out['id'] @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class MountPoint(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(MountPoint, self).__init__(conn, elt_maker, xml_parser, manager) @utils.retry(exception.EMCPowerMaxInvalidMoverID) def create(self, mount_path, fs_name, mover_name, is_vdm=True): fs_id = self.get_context('FileSystem').get_id(fs_name) mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.NewMount( self.elt_maker.MoverOrVdm( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false', ), fileSystem=fs_id, path=mount_path ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif self._is_mount_point_already_existent(response): LOG.warning("Mount Point %(mount)s already exists. " "Skip the creation.", {'mount': mount_path}) return elif constants.STATUS_OK != response['maxSeverity']: message = (_('Failed to create Mount Point %(mount)s for ' 'file system %(fs_name)s. Reason: %(err)s.') % {'mount': mount_path, 'fs_name': fs_name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) @utils.retry(exception.EMCPowerMaxInvalidMoverID) def get(self, mover_name, is_vdm=True): mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False request = self._build_query_package( self.elt_maker.MountQueryParams( self.elt_maker.MoverOrVdm( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false' ) ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['objects'] if not response['objects']: return constants.STATUS_NOT_FOUND, None else: return constants.STATUS_OK, response['objects'] @utils.retry(exception.EMCPowerMaxInvalidMoverID) def delete(self, mount_path, mover_name, is_vdm=True): mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.DeleteMount( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false', path=mount_path ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif self._is_mount_point_nonexistent(response): LOG.warning('Mount point %(mount)s on mover %(mover_name)s ' 'not found.', {'mount': mount_path, 'mover_name': mover_name}) return elif constants.STATUS_OK != response['maxSeverity']: message = (_('Failed to delete mount point %(mount)s on mover ' '%(mover_name)s. 
Reason: %(err)s.') % {'mount': mount_path, 'mover_name': mover_name, 'err': response}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def _is_mount_point_nonexistent(self, response): """Translate different status to ok/error status.""" msg_codes = self._get_problem_message_codes(response['problems']) message = self._get_problem_messages(response['problems']) for code, msg in zip(msg_codes, message): if ((code == constants.MSG_GENERAL_ERROR and msg.find( 'No such path or invalid operation') != -1) or code == constants.MSG_INVALID_VDM_ID or code == constants.MSG_INVALID_MOVER_ID): return True return False def _is_mount_point_already_existent(self, response): """Translate different status to ok/error status.""" msg_codes = self._get_problem_message_codes(response['problems']) message = self._get_problem_messages(response['problems']) for code, msg in zip(msg_codes, message): if ((code == constants.MSG_GENERAL_ERROR and msg.find( 'Mount already exists') != -1)): return True return False @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class Mover(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(Mover, self).__init__(conn, elt_maker, xml_parser, manager) self.mover_map = {} self.mover_ref_map = {} def get_ref(self, name, force=False): if name not in self.mover_ref_map or force: self.mover_ref_map.clear() request = self._build_query_package( self.elt_maker.MoverQueryParams( self.elt_maker.AspectSelection(movers='true') ) ) response = self._send_request(request) if constants.STATUS_ERROR == response['maxSeverity']: return response['maxSeverity'], response['problems'] for item in response['objects']: mover = {} property_map = ('name', ('id', 'mover')) self._copy_properties(item, mover, property_map) if mover: self.mover_ref_map[mover['name']] = mover if (name not in self.mover_ref_map or self.mover_ref_map[name]['id'] == ''): return constants.STATUS_NOT_FOUND, None return constants.STATUS_OK, self.mover_ref_map[name] def get(self, name, force=False): if name not in self.mover_map or force: if name in self.mover_ref_map and not force: mover_id = self.mover_ref_map[name]['id'] else: mover_id = self.get_id(name, force) if name in self.mover_map: self.mover_map.pop(name) request = self._build_query_package( self.elt_maker.MoverQueryParams( self.elt_maker.AspectSelection( moverDeduplicationSettings='true', moverDnsDomains='true', moverInterfaces='true', moverNetworkDevices='true', moverNisDomains='true', moverRoutes='true', movers='true', moverStatuses='true' ), mover=mover_id ) ) response = self._send_request(request) if constants.STATUS_ERROR == response['maxSeverity']: return response['maxSeverity'], response['problems'] if not response['objects']: return constants.STATUS_NOT_FOUND, response['problems'] mover = {} src = response['objects'][0] property_map = ( 'name', ('id', 'mover'), ('Status', 'maxSeverity'), 'version', 'uptime', 'role', ('interfaces', 'MoverInterface'), ('devices', 'LogicalNetworkDevice'), ('dns_domain', 'MoverDnsDomain'), ) self._copy_properties(src, mover, property_map) internal_devices = [] if mover['interfaces']: for interface in mover['interfaces']: if self._is_internal_device(interface['device']): internal_devices.append(interface) mover['interfaces'] = [var for var in mover['interfaces'] if var not in internal_devices] self.mover_map[name] = mover return constants.STATUS_OK, self.mover_map[name] def get_id(self, name, force=False): status, mover_ref = self.get_ref(name, force) if 
constants.STATUS_OK != status: message = (_("Failed to get mover by name %(name)s.") % {'name': name}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) return mover_ref['id'] def _is_internal_device(self, device): for device_type in ('mge', 'fxg', 'tks', 'fsn'): if device.find(device_type) == 0: return True return False def get_interconnect_id(self, source, destination): header = [ 'id', 'name', 'source_server', 'destination_system', 'destination_server', ] conn_id = None command_nas_cel = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_cel', '-interconnect', '-l', ] out, err = self._execute_cmd(command_nas_cel) lines = out.strip().split('\n') for line in lines: if line.strip().split() == header: LOG.info('Found the header of the command ' '/nas/bin/nas_cel -interconnect -l.') else: interconn = line.strip().split() if interconn[2] == source and interconn[4] == destination: conn_id = interconn[0] return conn_id def get_physical_devices(self, mover_name): physical_network_devices = [] cmd_sysconfig = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_sysconfig', mover_name, '-pci' ] out, err = self._execute_cmd(cmd_sysconfig) re_pattern = (r'0:\s*(?P\S+)\s*IRQ:\s*(?P\d+)\n' r'.*\n' r'\s*Link:\s*(?P[A-Za-z]+)') for device in re.finditer(re_pattern, out): if 'Up' in device.group('link'): physical_network_devices.append(device.group('name')) return physical_network_devices @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class VDM(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(VDM, self).__init__(conn, elt_maker, xml_parser, manager) self.vdm_map = {} @utils.retry(exception.EMCPowerMaxInvalidMoverID) def create(self, name, mover_name): mover_id = self._get_mover_id(mover_name, False) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.NewVdm(mover=mover_id, name=name) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif self._response_validation(response, constants.MSG_VDM_EXIST): LOG.warning("VDM %(name)s already exists. Skip the creation.", {'name': name}) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create VDM %(name)s on mover " "%(mover_name)s. Reason: %(err)s.") % {'name': name, 'mover_name': mover_name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def get(self, name): if name not in self.vdm_map: request = self._build_query_package( self.elt_maker.VdmQueryParams() ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['problems'] elif not response['objects']: return constants.STATUS_NOT_FOUND, response['problems'] for item in response['objects']: vdm = {} property_map = ( 'name', ('id', 'vdm'), 'state', ('host_mover_id', 'mover'), ('interfaces', 'Interfaces'), ) self._copy_properties(item, vdm, property_map) self.vdm_map[item['name']] = vdm if name not in self.vdm_map: return constants.STATUS_NOT_FOUND, None return constants.STATUS_OK, self.vdm_map[name] def delete(self, name): status, out = self.get(name) if constants.STATUS_NOT_FOUND == status: LOG.warning("VDM %s not found. Skip the deletion.", name) return elif constants.STATUS_OK != status: message = (_("Failed to get VDM by name %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) vdm_id = self.vdm_map[name]['id'] request = self._build_task_package( self.elt_maker.DeleteVdm(vdm=vdm_id) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete VDM %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) self.vdm_map.pop(name) def get_id(self, name): status, vdm = self.get(name) if constants.STATUS_OK != status: message = (_("Failed to get VDM by name %(name)s.") % {'name': name}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) return vdm['id'] def attach_nfs_interface(self, vdm_name, if_name): command_attach_nfs_interface = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_server', '-vdm', vdm_name, '-attach', if_name, ] self._execute_cmd(command_attach_nfs_interface) def detach_nfs_interface(self, vdm_name, if_name): command_detach_nfs_interface = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_server', '-vdm', vdm_name, '-detach', if_name, ] try: self._execute_cmd(command_detach_nfs_interface, check_exit_code=True) except processutils.ProcessExecutionError: interfaces = self.get_interfaces(vdm_name) if if_name not in interfaces['nfs']: LOG.debug("Failed to detach interface %(interface)s " "from mover %(mover_name)s.", {'interface': if_name, 'mover_name': vdm_name}) else: message = (_("Failed to detach interface %(interface)s " "from mover %(mover_name)s.") % {'interface': if_name, 'mover_name': vdm_name}) LOG.exception(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def get_interfaces(self, vdm_name): interfaces = { 'cifs': [], 'nfs': [], } re_pattern = (r'Interfaces to services mapping:' r'\s*(?P(\s*interface=.*)*)') command_get_interfaces = [ 'env', 'NAS_DB=/nas', '/nas/bin/nas_server', '-i', '-vdm', vdm_name, ] out, err = self._execute_cmd(command_get_interfaces) m = re.search(re_pattern, out) if m: if_list = m.group('interfaces').split('\n') for i in if_list: m_if = re.search(r'\s*interface=(?P.*)\s*:' r'\s*(?P.*)\s*', i) if m_if: if_name = m_if.group('if').strip() if 'cifs' == m_if.group('type') and if_name != '': interfaces['cifs'].append(if_name) elif (m_if.group('type') in ('vdm', 'nfs') and if_name != ''): interfaces['nfs'].append(if_name) return interfaces @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class Snapshot(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(Snapshot, self).__init__(conn, elt_maker, xml_parser, manager) self.snap_map = {} def create(self, name, fs_name, pool_id, ckpt_size=None): fs_id = self.get_context('FileSystem').get_id(fs_name) if ckpt_size: elt_pool = self.elt_maker.StoragePool( pool=pool_id, size=six.text_type(ckpt_size) ) else: elt_pool = self.elt_maker.StoragePool(pool=pool_id) new_ckpt = self.elt_maker.NewCheckpoint( self.elt_maker.SpaceAllocationMethod( elt_pool ), checkpointOf=fs_id, name=name ) request = self._build_task_package(new_ckpt) response = self._send_request(request) if self._response_validation(response, constants.MSG_SNAP_EXIST): LOG.warning("Snapshot %(name)s already exists. " "Skip the creation.", {'name': name}) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create snapshot %(name)s on " "filesystem %(fs_name)s. 
Reason: %(err)s.") % {'name': name, 'fs_name': fs_name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def get(self, name): if name not in self.snap_map: request = self._build_query_package( self.elt_maker.CheckpointQueryParams( self.elt_maker.Alias(name=name) ) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['problems'] if not response['objects']: return constants.STATUS_NOT_FOUND, response['problems'] src = response['objects'][0] snap = {} property_map = ( 'name', ('id', 'checkpoint'), 'checkpointOf', 'state', ) self._copy_properties(src, snap, property_map) self.snap_map[name] = snap return constants.STATUS_OK, self.snap_map[name] def delete(self, name): status, out = self.get(name) if constants.STATUS_NOT_FOUND == status: LOG.warning("Snapshot %s not found. Skip the deletion.", name) return elif constants.STATUS_OK != status: message = (_("Failed to get snapshot by name %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) chpt_id = self.snap_map[name]['id'] request = self._build_task_package( self.elt_maker.DeleteCheckpoint(checkpoint=chpt_id) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete snapshot %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) self.snap_map.pop(name) def get_id(self, name): status, out = self.get(name) if constants.STATUS_OK != status: message = (_("Failed to get snapshot by %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) return self.snap_map[name]['id'] @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class MoverInterface(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(MoverInterface, self).__init__(conn, elt_maker, xml_parser, manager) @utils.retry(exception.EMCPowerMaxInvalidMoverID) def create(self, interface): # Maximum of 32 characters for mover interface name name = interface['name'] if len(name) > 32: name = name[0:31] device_name = interface['device_name'] ip_addr = interface['ip'] mover_name = interface['mover_name'] net_mask = interface['net_mask'] vlan_id = interface['vlan_id'] if interface['vlan_id'] else -1 mover_id = self._get_mover_id(mover_name, False) params = dict(device=device_name, ipAddress=six.text_type(ip_addr), mover=mover_id, name=name, netMask=net_mask, vlanid=six.text_type(vlan_id)) if interface.get('ip_version') == 6: params['ipVersion'] = 'IPv6' if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.NewMoverInterface(**params)) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif self._response_validation( response, constants.MSG_INTERFACE_NAME_EXIST): LOG.warning("Mover interface name %s already exists. " "Skip the creation.", name) elif self._response_validation( response, constants.MSG_INTERFACE_EXIST): LOG.warning("Mover interface IP %s already exists. 
" "Skip the creation.", ip_addr) elif self._response_validation( response, constants.MSG_INTERFACE_INVALID_VLAN_ID): # When fail to create a mover interface with the specified # vlan id, PowerMax will leave an interface with vlan id 0 in the # backend. So we should explicitly remove the interface. try: self.delete(six.text_type(ip_addr), mover_name) except exception.EMCPowerMaxXMLAPIError: pass message = (_("Invalid vlan id %s. Other interfaces on this " "subnet are in a different vlan.") % vlan_id) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create mover interface %(interface)s. " "Reason: %(err)s.") % {'interface': interface, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def get(self, name, mover_name): # Maximum of 32 characters for mover interface name if len(name) > 32: name = name[0:31] status, mover = self.manager.getStorageContext('Mover').get( mover_name, True) if constants.STATUS_OK == status: for interface in mover['interfaces']: if name == interface['name']: return constants.STATUS_OK, interface return constants.STATUS_NOT_FOUND, None @utils.retry(exception.EMCPowerMaxInvalidMoverID) def delete(self, ip_addr, mover_name): mover_id = self._get_mover_id(mover_name, False) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.DeleteMoverInterface( ipAddress=six.text_type(ip_addr), mover=mover_id ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif self._response_validation( response, constants.MSG_INTERFACE_NON_EXISTENT): LOG.warning("Mover interface %s not found. " "Skip the deletion.", ip_addr) return elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete mover interface %(ip)s on mover " "%(mover)s. Reason: %(err)s.") % {'ip': ip_addr, 'mover': mover_name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class DNSDomain(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(DNSDomain, self).__init__(conn, elt_maker, xml_parser, manager) @utils.retry(exception.EMCPowerMaxInvalidMoverID) def create(self, mover_name, name, servers, protocol='udp'): mover_id = self._get_mover_id(mover_name, False) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.NewMoverDnsDomain( mover=mover_id, name=name, servers=servers, protocol=protocol ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create DNS domain %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) @utils.retry(exception.EMCPowerMaxInvalidMoverID) def delete(self, mover_name, name): mover_id = self._get_mover_id(mover_name, False) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.DeleteMoverDnsDomain( mover=mover_id, name=name ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: LOG.warning("Failed to delete DNS domain %(name)s. " "Reason: %(err)s.", {'name': name, 'err': response['problems']}) @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class CIFSServer(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(CIFSServer, self).__init__(conn, elt_maker, xml_parser, manager) self.cifs_server_map = {} @utils.retry(exception.EMCPowerMaxInvalidMoverID) def create(self, server_args): compName = server_args['name'] # Maximum of 14 characters for netBIOS name name = server_args['name'][-14:] # Maximum of 12 characters for alias name alias_name = server_args['name'][-12:] interfaces = server_args['interface_ip'] domain_name = server_args['domain_name'] user_name = server_args['user_name'] password = server_args['password'] mover_name = server_args['mover_name'] is_vdm = server_args['is_vdm'] mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False alias_name_list = [self.elt_maker.li(alias_name)] request = self._build_task_package( self.elt_maker.NewW2KCifsServer( self.elt_maker.MoverOrVdm( mover=mover_id, moverIdIsVdm='true' if server_args['is_vdm'] else 'false' ), self.elt_maker.Aliases(*alias_name_list), self.elt_maker.JoinDomain(userName=user_name, password=password), compName=compName, domain=domain_name, interfaces=interfaces, name=name ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) if constants.STATUS_OK != response['maxSeverity']: status, out = self.get(compName, mover_name, is_vdm) if constants.STATUS_OK == status and out['domainJoined'] == 'true': return else: message = (_("Failed to create CIFS server %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) @utils.retry(exception.EMCPowerMaxInvalidMoverID) def get_all(self, mover_name, is_vdm=True): mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False request = self._build_query_package( self.elt_maker.CifsServerQueryParams( self.elt_maker.MoverOrVdm( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false' ) ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['objects'] if mover_name in self.cifs_server_map: self.cifs_server_map.pop(mover_name) self.cifs_server_map[mover_name] = {} for item in response['objects']: self.cifs_server_map[mover_name][item['compName'].lower()] = item return constants.STATUS_OK, self.cifs_server_map[mover_name] def get(self, name, mover_name, is_vdm=True, force=False): # name is compName name = name.lower() if (mover_name in self.cifs_server_map and name in self.cifs_server_map[mover_name]) and not force: return constants.STATUS_OK, self.cifs_server_map[mover_name][name] self.get_all(mover_name, is_vdm) if mover_name in self.cifs_server_map: for compName, server in self.cifs_server_map[mover_name].items(): if name == compName: return constants.STATUS_OK, server return constants.STATUS_NOT_FOUND, None @utils.retry(exception.EMCPowerMaxInvalidMoverID) def modify(self, server_args): """Make CIFS server join or un-join the domain. :param server_args: Dictionary for CIFS server modification name: CIFS server name instead of compName join_domain: True for joining the domain, false for un-joining user_name: User name under which the domain is joined password: Password associated with the user name mover_name: mover or VDM name is_vdm: Boolean to indicate mover or VDM :raises exception.EMCPowerMaxXMLAPIError: if modification fails. """ name = server_args['name'] join_domain = server_args['join_domain'] user_name = server_args['user_name'] password = server_args['password'] mover_name = server_args['mover_name'] if 'is_vdm' in server_args.keys(): is_vdm = server_args['is_vdm'] else: is_vdm = True mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False request = self._build_task_package( self.elt_maker.ModifyW2KCifsServer( self.elt_maker.DomainSetting( joinDomain='true' if join_domain else 'false', password=password, userName=user_name, ), mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false', name=name ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif self._ignore_modification_error(response, join_domain): return elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to modify CIFS server %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def _ignore_modification_error(self, response, join_domain): if self._response_validation(response, constants.MSG_JOIN_DOMAIN): return join_domain elif self._response_validation(response, constants.MSG_UNJOIN_DOMAIN): return not join_domain return False def delete(self, computer_name, mover_name, is_vdm=True): try: status, out = self.get( computer_name.lower(), mover_name, is_vdm, self.xml_retry) if constants.STATUS_NOT_FOUND == status: LOG.warning("CIFS server %(name)s on mover %(mover_name)s " "not found. Skip the deletion.", {'name': computer_name, 'mover_name': mover_name}) return except exception.EMCPowerMaxXMLAPIError: LOG.warning("CIFS server %(name)s on mover %(mover_name)s " "not found. Skip the deletion.", {'name': computer_name, 'mover_name': mover_name}) return server_name = out['name'] mover_id = self._get_mover_id(mover_name, is_vdm) request = self._build_task_package( self.elt_maker.DeleteCifsServer( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false', name=server_name ) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete CIFS server %(name)s. " "Reason: %(err)s.") % {'name': computer_name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) self.cifs_server_map[mover_name].pop(computer_name) @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class CIFSShare(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(CIFSShare, self).__init__(conn, elt_maker, xml_parser, manager) self.cifs_share_map = {} @utils.retry(exception.EMCPowerMaxInvalidMoverID) def create(self, name, server_name, mover_name, is_vdm=True): mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False share_path = '/' + name request = self._build_task_package( self.elt_maker.NewCifsShare( self.elt_maker.MoverOrVdm( mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false' ), self.elt_maker.CifsServers(self.elt_maker.li(server_name)), name=name, path=share_path ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to create file share %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def get(self, name): if name not in self.cifs_share_map: request = self._build_query_package( self.elt_maker.CifsShareQueryParams(name=name) ) response = self._send_request(request) if constants.STATUS_OK != response['maxSeverity']: return response['maxSeverity'], response['problems'] if not response['objects']: return constants.STATUS_NOT_FOUND, None self.cifs_share_map[name] = response['objects'][0] return constants.STATUS_OK, self.cifs_share_map[name] @utils.retry(exception.EMCPowerMaxInvalidMoverID) def delete(self, name, mover_name, is_vdm=True): status, out = self.get(name) if constants.STATUS_NOT_FOUND == status: LOG.warning("CIFS share %s not found. Skip the deletion.", name) return elif constants.STATUS_OK != status: message = (_("Failed to get CIFS share by name %(name)s. 
" "Reason: %(err)s.") % {'name': name, 'err': out}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) mover_id = self._get_mover_id(mover_name, is_vdm) if self.xml_retry: self.xml_retry = False netbios_names = self.cifs_share_map[name]['CifsServers'] request = self._build_task_package( self.elt_maker.DeleteCifsShare( self.elt_maker.CifsServers(*map(lambda a: self.elt_maker.li(a), netbios_names)), mover=mover_id, moverIdIsVdm='true' if is_vdm else 'false', name=name ) ) response = self._send_request(request) if (self._response_validation(response, constants.MSG_INVALID_MOVER_ID) and not self.xml_retry): self.xml_retry = True raise exception.EMCPowerMaxInvalidMoverID(id=mover_id) elif constants.STATUS_OK != response['maxSeverity']: message = (_("Failed to delete file system %(name)s. " "Reason: %(err)s.") % {'name': name, 'err': response['problems']}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) self.cifs_share_map.pop(name) def disable_share_access(self, share_name, mover_name): cmd_str = 'sharesd %s set noaccess' % share_name disable_access = [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', mover_name, '-v', "%s" % cmd_str, ] try: self._execute_cmd(disable_access, check_exit_code=True) except processutils.ProcessExecutionError: message = (_('Failed to disable the access to CIFS share ' '%(name)s.') % {'name': share_name}) LOG.exception(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def allow_share_access(self, mover_name, share_name, user_name, domain, access=constants.CIFS_ACL_FULLCONTROL): account = user_name + "@" + domain allow_str = ('sharesd %(share_name)s grant %(account)s=%(access)s' % {'share_name': share_name, 'account': account, 'access': access}) allow_access = [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', mover_name, '-v', "%s" % allow_str, ] try: self._execute_cmd(allow_access, check_exit_code=True) except processutils.ProcessExecutionError as expt: dup_msg = re.compile(r'ACE for %(domain)s\\%(user)s unchanged' % {'domain': domain, 'user': user_name}, re.I) if re.search(dup_msg, expt.stdout): LOG.warning("Duplicate access control entry, " "skipping allow...") else: message = (_('Failed to allow the access %(access)s to ' 'CIFS share %(name)s. Reason: %(err)s.') % {'access': access, 'name': share_name, 'err': expt}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def deny_share_access(self, mover_name, share_name, user_name, domain, access=constants.CIFS_ACL_FULLCONTROL): account = user_name + "@" + domain revoke_str = ('sharesd %(share_name)s revoke %(account)s=%(access)s' % {'share_name': share_name, 'account': account, 'access': access}) allow_access = [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', mover_name, '-v', "%s" % revoke_str, ] try: self._execute_cmd(allow_access, check_exit_code=True) except processutils.ProcessExecutionError as expt: not_found_msg = re.compile( r'No ACE found for %(domain)s\\%(user)s' % {'domain': domain, 'user': user_name}, re.I) user_err_msg = re.compile( r'Cannot get mapping for %(domain)s\\%(user)s' % {'domain': domain, 'user': user_name}, re.I) if re.search(not_found_msg, expt.stdout): LOG.warning("No access control entry found, " "skipping deny...") elif re.search(user_err_msg, expt.stdout): LOG.warning("User not found on domain, skipping deny...") else: message = (_('Failed to deny the access %(access)s to ' 'CIFS share %(name)s. 
Reason: %(err)s.') % {'access': access, 'name': share_name, 'err': expt}) LOG.exception(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def get_share_access(self, mover_name, share_name): get_str = 'sharesd %s dump' % share_name get_access = [ 'env', 'NAS_DB=/nas', '/nas/bin/.server_config', mover_name, '-v', "%s" % get_str, ] try: out, err = self._execute_cmd(get_access, check_exit_code=True) except processutils.ProcessExecutionError: msg = _('Failed to get access list of CIFS share %s.') % share_name LOG.exception(msg) raise exception.EMCPowerMaxXMLAPIError(err=msg) ret = {} name_pattern = re.compile(r"Unix user '(.+?)'") access_pattern = re.compile(r"ALLOWED:(.+?):") name = None for line in out.splitlines(): if name is None: names = name_pattern.findall(line) if names: name = names[0].lower() else: accesses = access_pattern.findall(line) if accesses: ret[name] = accesses[0].lower() name = None return ret def clear_share_access(self, mover_name, share_name, domain, white_list_users): existing_users = self.get_share_access(mover_name, share_name) white_list_users_set = set(user.lower() for user in white_list_users) users_to_remove = set(existing_users.keys()) - white_list_users_set for user in users_to_remove: self.deny_share_access(mover_name, share_name, user, domain, existing_users[user]) return users_to_remove @powermax_utils.decorate_all_methods(powermax_utils.log_enter_exit, debug_only=True) class NFSShare(StorageObject): def __init__(self, conn, elt_maker, xml_parser, manager): super(NFSShare, self).__init__(conn, elt_maker, xml_parser, manager) self.nfs_share_map = {} def create(self, name, mover_name): share_path = '/' + name create_nfs_share_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', mover_name, '-option', 'access=-0.0.0.0/0.0.0.0', share_path, ] try: self._execute_cmd(create_nfs_share_cmd, check_exit_code=True) except processutils.ProcessExecutionError as expt: message = (_('Failed to create NFS share %(name)s on mover ' '%(mover_name)s. Reason: %(err)s.') % {'name': name, 'mover_name': mover_name, 'err': expt}) LOG.exception(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def delete(self, name, mover_name): path = '/' + name status, out = self.get(name, mover_name) if constants.STATUS_NOT_FOUND == status: LOG.warning("NFS share %s not found. Skip the deletion.", path) return delete_nfs_share_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', mover_name, '-unexport', '-perm', path, ] try: self._execute_cmd(delete_nfs_share_cmd, check_exit_code=True) except processutils.ProcessExecutionError as expt: message = (_('Failed to delete NFS share %(name)s on ' '%(mover_name)s. 
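# --- Illustrative sketch (not part of this module) --------------------------
# NFSShare.get() further below turns one "export ..." line from server_export
# into per-option host lists by matching the rw=/ro=/root=/access= prefixes
# and splitting on ':'.  A standalone rendition (host parsing simplified; the
# real driver also normalizes addresses via powermax_utils.parse_ipaddr):
def parse_export_options(export_line):
    share = {'RwHosts': [], 'RoHosts': [], 'RootHosts': [], 'AccessHosts': []}
    prefixes = (('rw=', 'RwHosts'), ('ro=', 'RoHosts'),
                ('root=', 'RootHosts'), ('access=', 'AccessHosts'))
    for field in export_line.split(' '):
        field = field.strip()
        for prefix, key in prefixes:
            if field.startswith(prefix):
                share[key] = field[len(prefix):].split(':')
    return share


# parse_export_options('"/share_1" rw=10.0.0.2 root=10.0.0.2 '
#                      'access=-0.0.0.0/0.0.0.0:10.0.0.2')
# -> RwHosts=['10.0.0.2'], RootHosts=['10.0.0.2'],
#    AccessHosts=['-0.0.0.0/0.0.0.0', '10.0.0.2'], RoHosts=[]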
Reason: %(err)s.') % {'name': name, 'mover_name': mover_name, 'err': expt}) LOG.exception(message) raise exception.EMCPowerMaxXMLAPIError(err=message) self.nfs_share_map.pop(name) def get(self, name, mover_name, force=False, check_exit_code=False): if name in self.nfs_share_map and not force: return constants.STATUS_OK, self.nfs_share_map[name] path = '/' + name nfs_share = { "mover_name": '', "path": '', 'AccessHosts': [], 'RwHosts': [], 'RoHosts': [], 'RootHosts': [], 'readOnly': '', } nfs_query_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', mover_name, '-P', 'nfs', '-list', path, ] try: out, err = self._execute_cmd(nfs_query_cmd, check_exit_code=check_exit_code) except processutils.ProcessExecutionError as expt: dup_msg = (r'%(mover_name)s : No such file or directory' % {'mover_name': mover_name}) if re.search(dup_msg, expt.stdout): LOG.warning("NFS share %s not found.", name) return constants.STATUS_NOT_FOUND, None else: message = (_('Failed to list NFS share %(name)s on ' '%(mover_name)s. Reason: %(err)s.') % {'name': name, 'mover_name': mover_name, 'err': expt}) LOG.exception(message) raise exception.EMCPowerMaxXMLAPIError(err=message) re_exports = r'%s\s*:\s*\nexport\s*(.*)\n' % mover_name m = re.search(re_exports, out) if m is not None: nfs_share['path'] = path nfs_share['mover_name'] = mover_name export = m.group(1) fields = export.split(" ") for field in fields: field = field.strip() if field.startswith('rw='): nfs_share['RwHosts'] = powermax_utils.parse_ipaddr( field[3:]) elif field.startswith('access='): nfs_share['AccessHosts'] = powermax_utils.parse_ipaddr( field[7:]) elif field.startswith('root='): nfs_share['RootHosts'] = powermax_utils.parse_ipaddr( field[5:]) elif field.startswith('ro='): nfs_share['RoHosts'] = powermax_utils.parse_ipaddr( field[3:]) self.nfs_share_map[name] = nfs_share else: return constants.STATUS_NOT_FOUND, None return constants.STATUS_OK, self.nfs_share_map[name] def allow_share_access(self, share_name, host_ip, mover_name, access_level=const.ACCESS_LEVEL_RW): @utils.synchronized('emc-shareaccess-' + share_name) def do_allow_access(share_name, host_ip, mover_name, access_level): status, share = self.get(share_name, mover_name) if constants.STATUS_NOT_FOUND == status: message = (_('NFS share %s not found.') % share_name) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) changed = False rwhosts = share['RwHosts'] rohosts = share['RoHosts'] host_ip = powermax_utils.convert_ipv6_format_if_needed(host_ip) if access_level == const.ACCESS_LEVEL_RW: if host_ip not in rwhosts: rwhosts.append(host_ip) changed = True if host_ip in rohosts: rohosts.remove(host_ip) changed = True if access_level == const.ACCESS_LEVEL_RO: if host_ip not in rohosts: rohosts.append(host_ip) changed = True if host_ip in rwhosts: rwhosts.remove(host_ip) changed = True roothosts = share['RootHosts'] if host_ip not in roothosts: roothosts.append(host_ip) changed = True accesshosts = share['AccessHosts'] if host_ip not in accesshosts: accesshosts.append(host_ip) changed = True if not changed: LOG.debug("%(host)s is already in access list of share " "%(name)s.", {'host': host_ip, 'name': share_name}) else: path = '/' + share_name self._set_share_access(path, mover_name, rwhosts, rohosts, roothosts, accesshosts) # Update self.nfs_share_map self.get(share_name, mover_name, force=True, check_exit_code=True) do_allow_access(share_name, host_ip, mover_name, access_level) def deny_share_access(self, share_name, host_ip, mover_name): 
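# --- Illustrative sketch (not part of this module) --------------------------
# The nested do_deny_access() defined just below removes the client from every
# host list of the export and only rewrites the export when something actually
# changed, all under a per-share lock so concurrent rule updates do not
# clobber each other.  A standalone version of the "remove everywhere, report
# whether anything changed" step:
def remove_host(share, host_ip):
    changed = False
    for key in ('RwHosts', 'RoHosts', 'RootHosts', 'AccessHosts'):
        hosts = set(share.get(key, []))
        if host_ip in hosts:
            hosts.remove(host_ip)
            changed = True
        share[key] = hosts
    return changed


# share = {'RwHosts': ['10.0.0.2'], 'RoHosts': [],
#          'RootHosts': ['10.0.0.2'], 'AccessHosts': ['10.0.0.2']}
# remove_host(share, '10.0.0.2') -> True; a second call returns False.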
@utils.synchronized('emc-shareaccess-' + share_name) def do_deny_access(share_name, host_ip, mover_name): status, share = self.get(share_name, mover_name) if constants.STATUS_OK != status: message = (_('Query nfs share %(path)s failed. ' 'Reason %(err)s.') % {'path': share_name, 'err': share}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) changed = False rwhosts = set(share['RwHosts']) if host_ip in rwhosts: rwhosts.remove(host_ip) changed = True roothosts = set(share['RootHosts']) if host_ip in roothosts: roothosts.remove(host_ip) changed = True accesshosts = set(share['AccessHosts']) if host_ip in accesshosts: accesshosts.remove(host_ip) changed = True rohosts = set(share['RoHosts']) if host_ip in rohosts: rohosts.remove(host_ip) changed = True if not changed: LOG.debug("%(host)s is already in access list of share " "%(name)s.", {'host': host_ip, 'name': share_name}) else: path = '/' + share_name self._set_share_access(path, mover_name, rwhosts, rohosts, roothosts, accesshosts) # Update self.nfs_share_map self.get(share_name, mover_name, force=True, check_exit_code=True) do_deny_access(share_name, host_ip, mover_name) def clear_share_access(self, share_name, mover_name, white_list_hosts): @utils.synchronized('emc-shareaccess-' + share_name) def do_clear_access(share_name, mover_name, white_list_hosts): def hosts_to_remove(orig_list): if white_list_hosts is None: ret = set() else: ret = set(white_list_hosts).intersection(set(orig_list)) return ret status, share = self.get(share_name, mover_name) if constants.STATUS_OK != status: message = (_('Query nfs share %(path)s failed. ' 'Reason %(err)s.') % {'path': share_name, 'err': status}) raise exception.EMCPowerMaxXMLAPIError(err=message) self._set_share_access('/' + share_name, mover_name, hosts_to_remove(share['RwHosts']), hosts_to_remove(share['RoHosts']), hosts_to_remove(share['RootHosts']), hosts_to_remove(share['AccessHosts'])) # Update self.nfs_share_map self.get(share_name, mover_name, force=True, check_exit_code=True) do_clear_access(share_name, mover_name, white_list_hosts) def _set_share_access(self, path, mover_name, rw_hosts, ro_hosts, root_hosts, access_hosts): if access_hosts is None: access_hosts = set() try: access_hosts.remove('-0.0.0.0/0.0.0.0') except(ValueError, KeyError): pass access_str = ('access=%(access)s' % {'access': ':'.join( list(access_hosts) + ['-0.0.0.0/0.0.0.0'])}) if root_hosts: access_str += (',root=%(root)s' % {'root': ':'.join(root_hosts)}) if rw_hosts: access_str += ',rw=%(rw)s' % {'rw': ':'.join(rw_hosts)} if ro_hosts: access_str += ',ro=%(ro)s' % {'ro': ':'.join(ro_hosts)} set_nfs_share_access_cmd = [ 'env', 'NAS_DB=/nas', '/nas/bin/server_export', mover_name, '-ignore', '-option', access_str, path, ] try: self._execute_cmd(set_nfs_share_access_cmd, check_exit_code=True) except processutils.ProcessExecutionError as expt: message = (_('Failed to set NFS share %(name)s access on ' '%(mover_name)s. Reason: %(err)s.') % {'name': path[1:], 'mover_name': mover_name, 'err': expt}) LOG.exception(message) raise exception.EMCPowerMaxXMLAPIError(err=message) manila-10.0.0/manila/share/drivers/dell_emc/plugins/powermax/connection.py0000664000175000017500000010676613656750227026731 0ustar zuulzuul00000000000000# Copyright (c) 2019 Dell Inc. or its subsidiaries. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """PowerMax backend for the Dell EMC Manila driver.""" import copy import random import six from oslo_config import cfg from oslo_log import log from oslo_utils import excutils from oslo_utils import units from manila.common import constants as const from manila import exception from manila.i18n import _ from manila.share.drivers.dell_emc.common.enas import constants from manila.share.drivers.dell_emc.common.enas import utils as enas_utils from manila.share.drivers.dell_emc.plugins import base as driver from manila.share.drivers.dell_emc.plugins.powermax import ( object_manager as manager) from manila.share import utils as share_utils from manila import utils """Version history: 1.0.0 - Initial version 2.0.0 - Implement IPv6 support 3.0.0 - Rebranding to PowerMax 3.1.0 - Access Host details prevents a read-only share mounts (bug #1845147) 3.2.0 - Wrong format of export locations (bug #1871999) """ VERSION = "3.2.0" LOG = log.getLogger(__name__) POWERMAX_OPTS = [ cfg.StrOpt('powermax_server_container', deprecated_name='vmax_server_container', help='Data mover to host the NAS server.'), cfg.ListOpt('powermax_share_data_pools', deprecated_name='vmax_share_data_pools', help='Comma separated list of pools that can be used to ' 'persist share data.'), cfg.ListOpt('powermax_ethernet_ports', deprecated_name='vmax_ethernet_ports', help='Comma separated list of ports that can be used for ' 'share server interfaces. Members of the list ' 'can be Unix-style glob expressions.') ] CONF = cfg.CONF CONF.register_opts(POWERMAX_OPTS) @enas_utils.decorate_all_methods(enas_utils.log_enter_exit, debug_only=True) class PowerMaxStorageConnection(driver.StorageConnection): """Implements powermax specific functionality for Dell EMC Manila driver. """ @enas_utils.log_enter_exit def __init__(self, *args, **kwargs): super(PowerMaxStorageConnection, self).__init__(*args, **kwargs) if 'configuration' in kwargs: kwargs['configuration'].append_config_values(POWERMAX_OPTS) self.mover_name = None self.pools = None self.manager = None self.pool_conf = None self.reserved_percentage = None self.driver_handles_share_servers = True self.port_conf = None self.ipv6_implemented = True def create_share(self, context, share, share_server=None): """Create a share and export it based on protocol used.""" share_name = share['id'] size = share['size'] * units.Ki share_proto = share['share_proto'].upper() # Validate the share protocol if share_proto not in ('NFS', 'CIFS'): raise exception.InvalidShare( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) # Get the pool name from share host field pool_name = share_utils.extract_host(share['host'], level='pool') if not pool_name: message = (_("Pool is not available in the share host %s.") % share['host']) raise exception.InvalidHost(reason=message) # Validate share server self._share_server_validation(share_server) if share_proto == 'CIFS': vdm_name = self._get_share_server_name(share_server) server_name = vdm_name # Check if CIFS server exists. 
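# --- Illustrative sketch (not part of this module) --------------------------
# The POWERMAX_OPTS defined above are read from the backend section of
# manila.conf.  A hypothetical backend stanza; the section name, data mover,
# pool and port values are examples only, and the share_driver /
# emc_share_backend lines reflect how this plugin is typically paired with the
# common Dell EMC driver wrapper rather than anything defined in this file:
#
#   [powermax_backend]
#   share_driver = manila.share.drivers.dell_emc.driver.EMCShareDriver
#   emc_share_backend = powermax
#   driver_handles_share_servers = True
#   powermax_server_container = server_2
#   powermax_share_data_pools = pool_1, pool_2
#   powermax_ethernet_ports = cge-1-*, cge-2-0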
status, server = self._get_context('CIFSServer').get(server_name, vdm_name) if status != constants.STATUS_OK: message = (_("CIFS server %s not found.") % server_name) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) self._allocate_container(share_name, size, share_server, pool_name) if share_proto == 'NFS': location = self._create_nfs_share(share_name, share_server) elif share_proto == 'CIFS': location = self._create_cifs_share(share_name, share_server) return [ {'path': location} ] def _share_server_validation(self, share_server): """Validate the share server.""" if not share_server: msg = _('Share server not provided') raise exception.InvalidInput(reason=msg) backend_details = share_server.get('backend_details') vdm = backend_details.get( 'share_server_name') if backend_details else None if vdm is None: message = _("No share server found.") LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def _allocate_container(self, share_name, size, share_server, pool_name): """Allocate file system for share.""" vdm_name = self._get_share_server_name(share_server) self._get_context('FileSystem').create( share_name, size, pool_name, vdm_name) def _allocate_container_from_snapshot(self, share, snapshot, share_server, pool_name): """Allocate file system from snapshot.""" vdm_name = self._get_share_server_name(share_server) interconn_id = self._get_context('Mover').get_interconnect_id( self.mover_name, self.mover_name) self._get_context('FileSystem').create_from_snapshot( share['id'], snapshot['id'], snapshot['share_id'], pool_name, vdm_name, interconn_id) nwe_size = share['size'] * units.Ki self._get_context('FileSystem').extend(share['id'], pool_name, nwe_size) @enas_utils.log_enter_exit def _create_cifs_share(self, share_name, share_server): """Create CIFS share.""" vdm_name = self._get_share_server_name(share_server) server_name = vdm_name # Get available CIFS Server and interface (one CIFS server per VDM) status, server = self._get_context('CIFSServer').get(server_name, vdm_name) if 'interfaces' not in server or len(server['interfaces']) == 0: message = (_("CIFS server %s doesn't have interface, " "so the share is inaccessible.") % server['compName']) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) interface = enas_utils.export_unc_path(server['interfaces'][0]) self._get_context('CIFSShare').create(share_name, server['name'], vdm_name) self._get_context('CIFSShare').disable_share_access(share_name, vdm_name) location = (r'\\%(interface)s\%(name)s' % {'interface': interface, 'name': share_name}) return location @enas_utils.log_enter_exit def _create_nfs_share(self, share_name, share_server): """Create NFS share.""" vdm_name = self._get_share_server_name(share_server) self._get_context('NFSShare').create(share_name, vdm_name) nfs_if = enas_utils.convert_ipv6_format_if_needed( share_server['backend_details']['nfs_if']) return ('%(nfs_if)s:/%(share_name)s' % {'nfs_if': nfs_if, 'share_name': share_name}) def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Create a share from a snapshot - clone a snapshot.""" share_name = share['id'] share_proto = share['share_proto'].upper() # Validate the share protocol if share_proto not in ('NFS', 'CIFS'): raise exception.InvalidShare( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) # Get the pool name from share host field pool_name = share_utils.extract_host(share['host'], level='pool') if not pool_name: message = (_("Pool is not 
available in the share host %s.") % share['host']) raise exception.InvalidHost(reason=message) self._share_server_validation(share_server) self._allocate_container_from_snapshot( share, snapshot, share_server, pool_name) nfs_if = enas_utils.convert_ipv6_format_if_needed( share_server['backend_details']['nfs_if']) if share_proto == 'NFS': self._create_nfs_share(share_name, share_server) location = ('%(nfs_if)s:/%(share_name)s' % {'nfs_if': nfs_if, 'share_name': share_name}) elif share_proto == 'CIFS': location = self._create_cifs_share(share_name, share_server) return [ {'path': location} ] def create_snapshot(self, context, snapshot, share_server=None): """Create snapshot from share.""" share_name = snapshot['share_id'] status, filesystem = self._get_context('FileSystem').get(share_name) if status != constants.STATUS_OK: message = (_("File System %s not found.") % share_name) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) pool_id = filesystem['pools_id'][0] self._get_context('Snapshot').create(snapshot['id'], snapshot['share_id'], pool_id) def delete_share(self, context, share, share_server=None): """Delete a share.""" if share_server is None: LOG.warning("Share network should be specified for " "share deletion.") return share_proto = share['share_proto'].upper() if share_proto == 'NFS': self._delete_nfs_share(share, share_server) elif share_proto == 'CIFS': self._delete_cifs_share(share, share_server) else: raise exception.InvalidShare( reason=_('Unsupported share protocol')) @enas_utils.log_enter_exit def _delete_cifs_share(self, share, share_server): """Delete CIFS share.""" vdm_name = self._get_share_server_name(share_server) name = share['id'] self._get_context('CIFSShare').delete(name, vdm_name) self._deallocate_container(name, vdm_name) @enas_utils.log_enter_exit def _delete_nfs_share(self, share, share_server): """Delete NFS share.""" vdm_name = self._get_share_server_name(share_server) name = share['id'] self._get_context('NFSShare').delete(name, vdm_name) self._deallocate_container(name, vdm_name) @enas_utils.log_enter_exit def _deallocate_container(self, share_name, vdm_name): """Delete underneath objects of the share.""" path = '/' + share_name try: # Delete mount point self._get_context('MountPoint').delete(path, vdm_name) except exception.EMCPowerMaxXMLAPIError as e: LOG.exception("CIFS server %(name)s on mover %(mover_name)s " "not found due to error %(err)s. Skip the " "deletion.", {'name': path, 'mover_name': vdm_name, 'err': e.message}) try: # Delete file system self._get_context('FileSystem').delete(share_name) except exception.EMCPowerMaxXMLAPIError as e: LOG.exception("File system %(share_name)s not found due to " "error %(err)s. 
Skip the deletion.", {'share_name': share_name, 'err': e.message}) def delete_snapshot(self, context, snapshot, share_server=None): """Delete a snapshot.""" self._get_context('Snapshot').delete(snapshot['id']) def ensure_share(self, context, share, share_server=None): """Ensure that the share is exported.""" def extend_share(self, share, new_size, share_server=None): # Get the pool name from share host field pool_name = share_utils.extract_host(share['host'], level='pool') if not pool_name: message = (_("Pool is not available in the share host %s.") % share['host']) raise exception.InvalidHost(reason=message) share_name = share['id'] self._get_context('FileSystem').extend( share_name, pool_name, new_size * units.Ki) def allow_access(self, context, share, access, share_server=None): """Allow access to a share.""" access_level = access['access_level'] if access_level not in const.ACCESS_LEVELS: raise exception.InvalidShareAccessLevel(level=access_level) share_proto = share['share_proto'] if share_proto == 'NFS': self._nfs_allow_access(context, share, access, share_server) elif share_proto == 'CIFS': self._cifs_allow_access(context, share, access, share_server) else: raise exception.InvalidShare( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) @enas_utils.log_enter_exit def _cifs_allow_access(self, context, share, access, share_server): """Allow access to CIFS share.""" vdm_name = self._get_share_server_name(share_server) share_name = share['id'] if access['access_type'] != 'user': reason = _('Only user access type allowed for CIFS share') raise exception.InvalidShareAccess(reason=reason) user_name = access['access_to'] access_level = access['access_level'] if access_level == const.ACCESS_LEVEL_RW: cifs_access = constants.CIFS_ACL_FULLCONTROL else: cifs_access = constants.CIFS_ACL_READ # Check if CIFS server exists. 
server_name = vdm_name status, server = self._get_context('CIFSServer').get(server_name, vdm_name) if status != constants.STATUS_OK: message = (_("CIFS server %s not found.") % server_name) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) self._get_context('CIFSShare').allow_share_access( vdm_name, share_name, user_name, server['domain'], access=cifs_access) @enas_utils.log_enter_exit def _nfs_allow_access(self, context, share, access, share_server): """Allow access to NFS share.""" vdm_name = self._get_share_server_name(share_server) access_type = access['access_type'] if access_type != 'ip': reason = _('Only ip access type allowed.') raise exception.InvalidShareAccess(reason=reason) host_ip = access['access_to'] access_level = access['access_level'] self._get_context('NFSShare').allow_share_access( share['id'], host_ip, vdm_name, access_level) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): # deleting rules for rule in delete_rules: self.deny_access(context, share, rule, share_server) # adding rules for rule in add_rules: self.allow_access(context, share, rule, share_server) # recovery mode if not (add_rules or delete_rules): white_list = [] for rule in access_rules: self.allow_access(context, share, rule, share_server) white_list.append( enas_utils.convert_ipv6_format_if_needed( rule['access_to'])) self.clear_access(share, share_server, white_list) def clear_access(self, share, share_server, white_list): share_proto = share['share_proto'].upper() share_name = share['id'] if share_proto == 'CIFS': self._cifs_clear_access(share_name, share_server, white_list) elif share_proto == 'NFS': self._nfs_clear_access(share_name, share_server, white_list) @enas_utils.log_enter_exit def _cifs_clear_access(self, share_name, share_server, white_list): """Clear access for CIFS share except hosts in the white list.""" vdm_name = self._get_share_server_name(share_server) # Check if CIFS server exists. server_name = vdm_name status, server = self._get_context('CIFSServer').get(server_name, vdm_name) if status != constants.STATUS_OK: message = (_("CIFS server %(server_name)s has issue. 
" "Detail: %(status)s") % {'server_name': server_name, 'status': status}) raise exception.EMCPowerMaxXMLAPIError(err=message) self._get_context('CIFSShare').clear_share_access( share_name=share_name, mover_name=vdm_name, domain=server['domain'], white_list_users=white_list) @enas_utils.log_enter_exit def _nfs_clear_access(self, share_name, share_server, white_list): """Clear access for NFS share except hosts in the white list.""" self._get_context('NFSShare').clear_share_access( share_name=share_name, mover_name=self._get_share_server_name(share_server), white_list_hosts=white_list) def deny_access(self, context, share, access, share_server=None): """Deny access to a share.""" share_proto = share['share_proto'] if share_proto == 'NFS': self._nfs_deny_access(share, access, share_server) elif share_proto == 'CIFS': self._cifs_deny_access(share, access, share_server) else: raise exception.InvalidShare( reason=_('Unsupported share protocol')) @enas_utils.log_enter_exit def _cifs_deny_access(self, share, access, share_server): """Deny access to CIFS share.""" vdm_name = self._get_share_server_name(share_server) share_name = share['id'] if access['access_type'] != 'user': LOG.warning("Only user access type allowed for CIFS share.") return user_name = access['access_to'] access_level = access['access_level'] if access_level == const.ACCESS_LEVEL_RW: cifs_access = constants.CIFS_ACL_FULLCONTROL else: cifs_access = constants.CIFS_ACL_READ # Check if CIFS server exists. server_name = vdm_name status, server = self._get_context('CIFSServer').get(server_name, vdm_name) if status != constants.STATUS_OK: message = (_("CIFS server %s not found.") % server_name) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) self._get_context('CIFSShare').deny_share_access( vdm_name, share_name, user_name, server['domain'], access=cifs_access) @enas_utils.log_enter_exit def _nfs_deny_access(self, share, access, share_server): """Deny access to NFS share.""" vdm_name = self._get_share_server_name(share_server) access_type = access['access_type'] if access_type != 'ip': LOG.warning("Only ip access type allowed.") return host_ip = enas_utils.convert_ipv6_format_if_needed(access['access_to']) self._get_context('NFSShare').deny_share_access(share['id'], host_ip, vdm_name) def check_for_setup_error(self): """Check for setup error.""" # To verify the input from Manila configuration status, out = self._get_context('Mover').get_ref(self.mover_name, True) if constants.STATUS_ERROR == status: message = (_("Could not find Data Mover by name: %s.") % self.mover_name) LOG.error(message) raise exception.InvalidParameterValue(err=message) self.pools = self._get_managed_storage_pools(self.pool_conf) def _get_managed_storage_pools(self, pools): matched_pools = set() if pools: # Get the real pools from the backend storage status, backend_pools = self._get_context('StoragePool').get_all() if status != constants.STATUS_OK: message = (_("Failed to get storage pool information. " "Reason: %s") % backend_pools) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) real_pools = set([item for item in backend_pools]) conf_pools = set([item.strip() for item in pools]) matched_pools, unmatched_pools = enas_utils.do_match_any( real_pools, conf_pools) if not matched_pools: msg = (_("None of the specified storage pools to be managed " "exist. Please check your configuration " "emc_nas_pool_names in manila.conf. 
" "The available pools in the backend are %s.") % ",".join(real_pools)) raise exception.InvalidParameterValue(err=msg) LOG.info("Storage pools: %s will be managed.", ",".join(matched_pools)) else: LOG.debug("No storage pool is specified, so all pools " "in storage system will be managed.") return matched_pools def connect(self, emc_share_driver, context): """Connect to PowerMax NAS server.""" config = emc_share_driver.configuration config.append_config_values(POWERMAX_OPTS) self.mover_name = config.safe_get('powermax_server_container') self.pool_conf = config.safe_get('powermax_share_data_pools') self.reserved_percentage = config.safe_get('reserved_share_percentage') if self.reserved_percentage is None: self.reserved_percentage = 0 self.manager = manager.StorageObjectManager(config) self.port_conf = config.safe_get('powermax_ethernet_ports') def get_managed_ports(self): # Get the real ports(devices) list from the backend storage real_ports = self._get_physical_devices(self.mover_name) if not self.port_conf: LOG.debug("No ports are specified, so any of the ports on the " "Data Mover can be used.") return real_ports matched_ports, unmanaged_ports = enas_utils.do_match_any( real_ports, self.port_conf) if not matched_ports: msg = (_("None of the specified network ports exist. " "Please check your configuration powermax_ethernet_ports " "in manila.conf. The available ports on the Data Mover " "are %s.") % ",".join(real_ports)) raise exception.BadConfigurationException(reason=msg) LOG.debug("Ports: %s can be used.", ",".join(matched_ports)) return list(matched_ports) def update_share_stats(self, stats_dict): """Communicate with EMCNASClient to get the stats.""" stats_dict['driver_version'] = VERSION self._get_context('Mover').get_ref(self.mover_name, True) stats_dict['pools'] = [] status, pools = self._get_context('StoragePool').get_all() for name, pool in pools.items(): if not self.pools or pool['name'] in self.pools: total_size = float(pool['total_size']) used_size = float(pool['used_size']) pool_stat = { 'pool_name': pool['name'], 'total_capacity_gb': total_size, 'free_capacity_gb': total_size - used_size, 'qos': False, 'reserved_percentage': self.reserved_percentage, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'ipv6_support': True } stats_dict['pools'].append(pool_stat) if not stats_dict['pools']: message = _("Failed to update storage pool.") LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) def get_pool(self, share): """Get the pool name of the share.""" share_name = share['id'] status, filesystem = self._get_context('FileSystem').get(share_name) if status != constants.STATUS_OK: message = (_("File System %(name)s not found. " "Reason: %(err)s") % {'name': share_name, 'err': filesystem}) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) pool_id = filesystem['pools_id'][0] # Get the real pools from the backend storage status, backend_pools = self._get_context('StoragePool').get_all() if status != constants.STATUS_OK: message = (_("Failed to get storage pool information. " "Reason: %s") % backend_pools) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) for name, pool_info in backend_pools.items(): if pool_info['id'] == pool_id: return name available_pools = [item for item in backend_pools] message = (_("No matched pool name for share: %(share)s. 
" "Available pools: %(pools)s") % {'share': share_name, 'pools': available_pools}) raise exception.EMCPowerMaxXMLAPIError(err=message) def get_network_allocations_number(self): """Returns number of network allocations for creating VIFs.""" return constants.IP_ALLOCATIONS def setup_server(self, network_info, metadata=None): """Set up and configure share server. Sets up and configures share server with given network parameters. """ # Only support single security service with type 'active_directory' vdm_name = network_info['server_id'] vlan_id = network_info['segmentation_id'] active_directory = None allocated_interfaces = [] if network_info.get('security_services'): is_valid, active_directory = self._get_valid_security_service( network_info['security_services']) if not is_valid: raise exception.EMCPowerMaxXMLAPIError(err=active_directory) try: if not self._vdm_exist(vdm_name): LOG.debug('Share server %s not found, creating ' 'share server...', vdm_name) self._get_context('VDM').create(vdm_name, self.mover_name) devices = self.get_managed_ports() for net_info in network_info['network_allocations']: random.shuffle(devices) ip_version = net_info['ip_version'] interface = { 'name': net_info['id'][-12:], 'device_name': devices[0], 'ip': net_info['ip_address'], 'mover_name': self.mover_name, 'vlan_id': vlan_id if vlan_id else -1, } if ip_version == 6: interface['ip_version'] = ip_version interface['net_mask'] = six.text_type( utils.cidr_to_prefixlen( network_info['cidr'])) else: interface['net_mask'] = utils.cidr_to_netmask( network_info['cidr']) self._get_context('MoverInterface').create(interface) allocated_interfaces.append(interface) cifs_interface = allocated_interfaces[0] nfs_interface = allocated_interfaces[1] if active_directory: self._configure_active_directory( active_directory, vdm_name, cifs_interface) self._get_context('VDM').attach_nfs_interface( vdm_name, nfs_interface['name']) return { 'share_server_name': vdm_name, 'cifs_if': cifs_interface['ip'], 'nfs_if': nfs_interface['ip'], } except Exception: with excutils.save_and_reraise_exception(): LOG.exception('Could not setup server') server_details = self._construct_backend_details( vdm_name, allocated_interfaces) self.teardown_server( server_details, network_info['security_services']) def _construct_backend_details(self, vdm_name, interfaces): if_number = len(interfaces) cifs_if = interfaces[0]['ip'] if if_number > 0 else None nfs_if = interfaces[1]['ip'] if if_number > 1 else None return { 'share_server_name': vdm_name, 'cifs_if': cifs_if, 'nfs_if': nfs_if, } @enas_utils.log_enter_exit def _vdm_exist(self, name): status, out = self._get_context('VDM').get(name) if constants.STATUS_OK != status: return False return True def _get_physical_devices(self, mover_name): """Get a proper network device to create interface.""" devices = self._get_context('Mover').get_physical_devices(mover_name) if not devices: message = (_("Could not get physical device port on mover %s.") % self.mover_name) LOG.error(message) raise exception.EMCPowerMaxXMLAPIError(err=message) return devices def _configure_active_directory( self, security_service, vdm_name, interface): domain = security_service['domain'] server = security_service['dns_ip'] self._get_context('DNSDomain').create(self.mover_name, domain, server) cifs_server_args = { 'name': vdm_name, 'interface_ip': interface['ip'], 'domain_name': security_service['domain'], 'user_name': security_service['user'], 'password': security_service['password'], 'mover_name': vdm_name, 'is_vdm': True, } 
self._get_context('CIFSServer').create(cifs_server_args) def teardown_server(self, server_details, security_services=None): """Teardown share server.""" if not server_details: LOG.debug('Server details are empty.') return vdm_name = server_details.get('share_server_name') if not vdm_name: LOG.debug('No share server found in server details.') return cifs_if = server_details.get('cifs_if') nfs_if = server_details.get('nfs_if') status, vdm = self._get_context('VDM').get(vdm_name) if constants.STATUS_OK != status: LOG.debug('Share server %s not found.', vdm_name) return interfaces = self._get_context('VDM').get_interfaces(vdm_name) for if_name in interfaces['nfs']: self._get_context('VDM').detach_nfs_interface(vdm_name, if_name) if security_services: # Only support single security service with type 'active_directory' is_valid, active_directory = self._get_valid_security_service( security_services) if is_valid: status, servers = self._get_context('CIFSServer').get_all( vdm_name) if constants.STATUS_OK != status: LOG.error('Could not find CIFS server by name: %s.', vdm_name) else: cifs_servers = copy.deepcopy(servers) for name, server in cifs_servers.items(): # Unjoin CIFS Server from domain cifs_server_args = { 'name': server['name'], 'join_domain': False, 'user_name': active_directory['user'], 'password': active_directory['password'], 'mover_name': vdm_name, 'is_vdm': True, } try: self._get_context('CIFSServer').modify( cifs_server_args) except exception.EMCPowerMaxXMLAPIError as expt: LOG.debug("Failed to modify CIFS server " "%(server)s. Reason: %(err)s.", {'server': server, 'err': expt}) self._get_context('CIFSServer').delete(name, vdm_name) # Delete interface from Data Mover if cifs_if: self._get_context('MoverInterface').delete(cifs_if, self.mover_name) if nfs_if: self._get_context('MoverInterface').delete(nfs_if, self.mover_name) # Delete Virtual Data Mover self._get_context('VDM').delete(vdm_name) def _get_valid_security_service(self, security_services): """Validate security services and return a supported security service. :param security_services: :returns: (, ) -- is true to indicate security_services includes zero or single security service for active directory. Otherwise, it would return false. return error message when is false. Otherwise, it will return zero or single security service for active directory. """ # Only support single security service with type 'active_directory' if (len(security_services) > 1 or (security_services and security_services[0]['type'] != 'active_directory')): return False, _("Unsupported security services. " "Only support single security service and " "only support type 'active_directory'") return True, security_services[0] def _get_share_server_name(self, share_server): try: return share_server['backend_details']['share_server_name'] except Exception: LOG.debug("Didn't get share server name from share_server %s.", share_server) return share_server['id'] def _get_context(self, context_type): return self.manager.getStorageContext(context_type) manila-10.0.0/manila/share/drivers/dell_emc/plugins/base.py0000664000175000017500000000575513656750227023636 0ustar zuulzuul00000000000000# Copyright (c) 2014 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """EMC Share Driver Base Plugin API """ import abc import six @six.add_metaclass(abc.ABCMeta) class StorageConnection(object): """Subclasses should implement storage backend specific functionality.""" def __init__(self, *args, **kwargs): # NOTE(vponomaryov): redefine 'driver_handles_share_servers' within # plugin. self.driver_handles_share_servers = None @abc.abstractmethod def create_share(self, context, share, share_server): """Is called to create share.""" @abc.abstractmethod def create_snapshot(self, context, snapshot, share_server): """Is called to create snapshot.""" @abc.abstractmethod def delete_share(self, context, share, share_server): """Is called to remove share.""" @abc.abstractmethod def delete_snapshot(self, context, snapshot, share_server): """Is called to remove snapshot.""" @abc.abstractmethod def ensure_share(self, context, share, share_server): """Invoked to ensure that share is exported.""" @abc.abstractmethod def extend_share(self, share, new_size, share_server): """Invoked to extend share.""" @abc.abstractmethod def allow_access(self, context, share, access, share_server): """Allow access to the share.""" @abc.abstractmethod def deny_access(self, context, share, access, share_server): """Deny access to the share.""" def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share.""" raise NotImplementedError() def raise_connect_error(self): """Check for setup error.""" pass def connect(self, emc_share_driver, context): """Any initialization the share driver does while starting.""" pass def update_share_stats(self, stats_dict): """Add key/values to stats_dict.""" pass def get_network_allocations_number(self): """Returns number of network allocations for creating VIFs.""" return 0 @abc.abstractmethod def setup_server(self, network_info, metadata=None): """Set up and configure share server with given network parameters.""" @abc.abstractmethod def teardown_server(self, server_details, security_services=None): """Teardown share server.""" manila-10.0.0/manila/share/drivers/dell_emc/plugins/isilon/0000775000175000017500000000000013656750362023633 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/plugins/isilon/__init__.py0000664000175000017500000000000013656750227025732 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/dell_emc/plugins/isilon/isilon_api.py0000664000175000017500000003132013656750227026332 0ustar zuulzuul00000000000000# Copyright (c) 2015 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
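# Illustrative usage sketch of the client defined below (the host URL,
# credentials and export path are placeholder assumptions, not values defined
# in this module):
#
#     api = IsilonApi('https://isilon.example.com:8080',
#                     auth=('admin', 'password'),
#                     verify_ssl_cert=False)
#     if api.create_nfs_export('/ifs/manila/share-1'):
#         export_id = api.lookup_nfs_export('/ifs/manila/share-1')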
from enum import Enum from oslo_serialization import jsonutils import requests import six from manila import exception from manila.i18n import _ class IsilonApi(object): def __init__(self, api_url, auth, verify_ssl_cert=True): self.host_url = api_url self.session = requests.session() self.session.auth = auth self.verify_ssl_cert = verify_ssl_cert def create_directory(self, container_path, recursive=False): """Create a directory.""" headers = {"x-isi-ifs-target-type": "container"} url = (self.host_url + "/namespace" + container_path + '?recursive=' + six.text_type(recursive)) r = self.request('PUT', url, headers=headers) return r.status_code == 200 def clone_snapshot(self, snapshot_name, fq_target_dir): self.create_directory(fq_target_dir) snapshot = self.get_snapshot(snapshot_name) snapshot_path = snapshot['path'] # remove /ifs from start of path relative_snapshot_path = snapshot_path[4:] fq_snapshot_path = ('/ifs/.snapshot/' + snapshot_name + relative_snapshot_path) self._clone_directory_contents(fq_snapshot_path, fq_target_dir, snapshot_name, relative_snapshot_path) def _clone_directory_contents(self, fq_source_dir, fq_target_dir, snapshot_name, relative_path): dir_listing = self.get_directory_listing(fq_source_dir) for item in dir_listing['children']: name = item['name'] source_item_path = fq_source_dir + '/' + name new_relative_path = relative_path + '/' + name dest_item_path = fq_target_dir + '/' + name if item['type'] == 'container': # create the container name in the target dir & clone dir self.create_directory(dest_item_path) self._clone_directory_contents(source_item_path, dest_item_path, snapshot_name, new_relative_path) elif item['type'] == 'object': self.clone_file_from_snapshot('/ifs' + new_relative_path, dest_item_path, snapshot_name) def clone_file_from_snapshot(self, fq_file_path, fq_dest_path, snapshot_name): headers = {'x-isi-ifs-copy-source': '/namespace' + fq_file_path} snapshot_suffix = '&snapshot=' + snapshot_name url = (self.host_url + '/namespace' + fq_dest_path + '?clone=true' + snapshot_suffix) self.request('PUT', url, headers=headers) def get_directory_listing(self, fq_dir_path): url = self.host_url + '/namespace' + fq_dir_path + '?detail=default' r = self.request('GET', url) r.raise_for_status() return r.json() def is_path_existent(self, resource_path): url = self.host_url + '/namespace' + resource_path r = self.request('HEAD', url) if r.status_code == 200: return True elif r.status_code == 404: return False else: r.raise_for_status() def get_snapshot(self, snapshot_name): r = self.request('GET', self.host_url + '/platform/1/snapshot/snapshots/' + snapshot_name) snapshot_json = r.json() if r.status_code == 200: return snapshot_json['snapshots'][0] elif r.status_code == 404: return None else: r.raise_for_status() def get_snapshots(self): r = self.request('GET', self.host_url + '/platform/1/snapshot/snapshots') if r.status_code == 200: return r.json() else: r.raise_for_status() def lookup_nfs_export(self, share_path): response = self.session.get( self.host_url + '/platform/1/protocols/nfs/exports', verify=self.verify_ssl_cert) nfs_exports_json = response.json() for export in nfs_exports_json['exports']: for path in export['paths']: if path == share_path: return export['id'] return None def get_nfs_export(self, export_id): response = self.request('GET', self.host_url + '/platform/1/protocols/nfs/exports/' + six.text_type(export_id)) if response.status_code == 200: return response.json()['exports'][0] else: return None def lookup_smb_share(self, share_name): 
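# Returns the SMB share's JSON representation, or None if no share with that
# name exists.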
response = self.session.get( self.host_url + '/platform/1/protocols/smb/shares/' + share_name) if response.status_code == 200: return response.json()['shares'][0] else: return None def create_nfs_export(self, export_path): """Creates an NFS export using the Platform API. :param export_path: a string specifying the desired export path :return: "True" if created successfully; "False" otherwise """ data = {'paths': [export_path]} url = self.host_url + '/platform/1/protocols/nfs/exports' response = self.request('POST', url, data=data) return response.status_code == 201 def create_smb_share(self, share_name, share_path): """Creates an SMB/CIFS share. :param share_name: the name of the CIFS share :param share_path: the path associated with the CIFS share :return: "True" if the share created successfully; returns "False" otherwise """ data = {'permissions': []} data['name'] = share_name data['path'] = share_path url = self.host_url + '/platform/1/protocols/smb/shares' response = self.request('POST', url, data=data) return response.status_code == 201 def create_snapshot(self, snapshot_name, snapshot_path): """Creates a snapshot.""" data = {'name': snapshot_name, 'path': snapshot_path} r = self.request('POST', self.host_url + '/platform/1/snapshot/snapshots', data=data) if r.status_code == 201: return True else: r.raise_for_status() def delete(self, fq_resource_path, recursive=False): """Deletes a file or folder.""" r = self.request('DELETE', self.host_url + '/namespace' + fq_resource_path + '?recursive=' + six.text_type(recursive)) r.raise_for_status() def delete_nfs_share(self, share_number): response = self.session.delete( self.host_url + '/platform/1/protocols/nfs/exports' + '/' + six.text_type(share_number)) return response.status_code == 204 def delete_smb_share(self, share_name): url = self.host_url + '/platform/1/protocols/smb/shares/' + share_name response = self.request('DELETE', url) return response.status_code == 204 def delete_snapshot(self, snapshot_name): response = self.request( 'DELETE', '{0}/platform/1/snapshot/snapshots/{1}' .format(self.host_url, snapshot_name)) response.raise_for_status() def quota_create(self, path, quota_type, size): thresholds = {'hard': size} data = { 'path': path, 'type': quota_type, 'include_snapshots': False, 'thresholds_include_overhead': False, 'enforced': True, 'thresholds': thresholds, } response = self.request( 'POST', '{0}/platform/1/quota/quotas'.format(self.host_url), data=data) response.raise_for_status() def quota_get(self, path, quota_type): response = self.request( 'GET', '{0}/platform/1/quota/quotas?path={1}'.format(self.host_url, path), ) if response.status_code == 404: return None elif response.status_code != 200: response.raise_for_status() json = response.json() len_returned_quotas = len(json['quotas']) if len_returned_quotas == 0: return None elif len_returned_quotas == 1: return json['quotas'][0] else: message = (_('Greater than one quota returned when querying ' 'quotas associated with share path: %(path)s .') % {'path': path}) raise exception.ShareBackendException(msg=message) def quota_modify_size(self, quota_id, new_size): data = {'thresholds': {'hard': new_size}} response = self.request( 'PUT', '{0}/platform/1/quota/quotas/{1}'.format(self.host_url, quota_id), data=data ) response.raise_for_status() def quota_set(self, path, quota_type, size): """Sets a quota of the given type and size on the given path.""" quota_json = self.quota_get(path, quota_type) if quota_json is None: self.quota_create(path, quota_type, size) else: # quota 
already exists, modify it's size quota_id = quota_json['id'] self.quota_modify_size(quota_id, size) def smb_permissions_add(self, share_name, user, smb_permission): smb_share = self.lookup_smb_share(share_name) permissions = smb_share['permissions'] # lookup given user string user_json = self.auth_lookup_user(user) auth_mappings = user_json['mapping'] if len(auth_mappings) > 1: message = (_('More than one mapping found for user "%(user)s".') % {'user': user}) raise exception.ShareBackendException(msg=message) user_sid = auth_mappings[0]['user']['sid'] new_permission = { 'permission': smb_permission.value, 'permission_type': 'allow', 'trustee': user_sid } url = '{0}/platform/1/protocols/smb/shares/{1}'.format( self.host_url, share_name) new_permissions = list(permissions) new_permissions.append(new_permission) data = {'permissions': new_permissions} r = self.request('PUT', url, data=data) r.raise_for_status() def smb_permissions_remove(self, share_name, user): smb_share = self.lookup_smb_share(share_name) permissions = smb_share['permissions'] # find the perm to remove perm_to_remove = None for perm in list(permissions): if perm['trustee']['name'] == user: perm_to_remove = perm if perm_to_remove is not None: permissions.remove(perm) else: message = _('Attempting to remove permission for user "%(user)s", ' 'but this user was not found in the share\'s ' '(%(share)s) permissions list.') % {'user': user, 'share': smb_share} raise exception.ShareBackendException(msg=message) self.request('PUT', '{0}/platform/1/protocols/smb/shares/{1}'.format( self.host_url, share_name), data={'permissions': permissions}) def auth_lookup_user(self, user_string): url = '{0}/platform/1/auth/mapping/users/lookup'.format(self.host_url) r = self.request('GET', url, params={"user": user_string}) if r.status_code == 404: raise exception.ShareBackendException(msg='user not found') elif r.status_code != 200: r.raise_for_status() return r.json() def request(self, method, url, headers=None, data=None, params=None): if data is not None: data = jsonutils.dumps(data) r = self.session.request(method, url, headers=headers, data=data, verify=self.verify_ssl_cert, params=params) return r class SmbPermission(Enum): full = 'full' rw = 'change' ro = 'read' manila-10.0.0/manila/share/drivers/dell_emc/plugins/isilon/isilon.py0000664000175000017500000005076213656750227025514 0ustar zuulzuul00000000000000# Copyright 2015 EMC Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Isilon specific NAS backend plugin. 
""" import os from oslo_config import cfg from oslo_log import log from oslo_utils import units from requests.exceptions import HTTPError import six from manila.common import constants as const from manila import exception from manila.i18n import _ from manila.share.drivers.dell_emc.plugins import base from manila.share.drivers.dell_emc.plugins.isilon import isilon_api CONF = cfg.CONF VERSION = "0.1.0" LOG = log.getLogger(__name__) class IsilonStorageConnection(base.StorageConnection): """Implements Isilon specific functionality for EMC Manila driver.""" def __init__(self, *args, **kwargs): super(IsilonStorageConnection, self).__init__(*args, **kwargs) self._server = None self._port = None self._username = None self._password = None self._server_url = None self._connect_resp = None self._root_dir = None self._verify_ssl_cert = None self._containers = {} self._shares = {} self._snapshots = {} self._isilon_api = None self._isilon_api_class = isilon_api.IsilonApi self.driver_handles_share_servers = False def _get_container_path(self, share): """Return path to a container.""" return os.path.join(self._root_dir, share['name']) def create_share(self, context, share, share_server): """Is called to create share.""" if share['share_proto'] == 'NFS': location = self._create_nfs_share(share) elif share['share_proto'] == 'CIFS': location = self._create_cifs_share(share) else: message = (_('Unsupported share protocol: %(proto)s.') % {'proto': share['share_proto']}) LOG.error(message) raise exception.InvalidShare(reason=message) # apply directory quota based on share size max_share_size = share['size'] * units.Gi self._isilon_api.quota_create( self._get_container_path(share), 'directory', max_share_size) return location def create_share_from_snapshot(self, context, share, snapshot, share_server): """Creates a share from the snapshot.""" # Create share at new location location = self.create_share(context, share, share_server) # Clone snapshot to new location fq_target_dir = self._get_container_path(share) self._isilon_api.clone_snapshot(snapshot['name'], fq_target_dir) return location def _create_nfs_share(self, share): """Is called to create nfs share.""" container_path = self._get_container_path(share) self._isilon_api.create_directory(container_path) share_created = self._isilon_api.create_nfs_export(container_path) if not share_created: message = ( _('The requested NFS share "%(share)s" was not created.') % {'share': share['name']}) LOG.error(message) raise exception.ShareBackendException(msg=message) location = '{0}:{1}'.format(self._server, container_path) return location def _create_cifs_share(self, share): """Is called to create cifs share.""" # Create the directory container_path = self._get_container_path(share) self._isilon_api.create_directory(container_path) self._isilon_api.create_smb_share(share['name'], container_path) share_path = '\\\\{0}\\{1}'.format(self._server, share['name']) return share_path def create_snapshot(self, context, snapshot, share_server): """Is called to create snapshot.""" snapshot_path = os.path.join(self._root_dir, snapshot['share_name']) self._isilon_api.create_snapshot(snapshot['name'], snapshot_path) def delete_share(self, context, share, share_server): """Is called to remove share.""" if share['share_proto'] == 'NFS': self._delete_nfs_share(share) elif share['share_proto'] == 'CIFS': self._delete_cifs_share(share) else: message = (_('Unsupported share type: %(type)s.') % {'type': share['share_proto']}) LOG.error(message) raise 
exception.InvalidShare(reason=message) def _delete_nfs_share(self, share): """Is called to remove nfs share.""" share_id = self._isilon_api.lookup_nfs_export( self._root_dir + '/' + share['name']) if share_id is None: lw = ('Attempted to delete NFS Share "%s", but the share does ' 'not appear to exist.') LOG.warning(lw, share['name']) else: # attempt to delete the share export_deleted = self._isilon_api.delete_nfs_share(share_id) if not export_deleted: message = _('Error deleting NFS share: %s') % share['name'] LOG.error(message) raise exception.ShareBackendException(msg=message) def _delete_cifs_share(self, share): """Is called to remove CIFS share.""" smb_share = self._isilon_api.lookup_smb_share(share['name']) if smb_share is None: lw = ('Attempted to delete CIFS Share "%s", but the share does ' 'not appear to exist.') LOG.warning(lw, share['name']) else: share_deleted = self._isilon_api.delete_smb_share(share['name']) if not share_deleted: message = _('Error deleting CIFS share: %s') % share['name'] LOG.error(message) raise exception.ShareBackendException(msg=message) def delete_snapshot(self, context, snapshot, share_server): """Is called to remove snapshot.""" self._isilon_api.delete_snapshot(snapshot['name']) def ensure_share(self, context, share, share_server): """Invoked to ensure that share is exported.""" def extend_share(self, share, new_size, share_server=None): """Extends a share.""" new_quota_size = new_size * units.Gi self._isilon_api.quota_set( self._get_container_path(share), 'directory', new_quota_size) def allow_access(self, context, share, access, share_server): """Allow access to the share.""" if share['share_proto'] == 'NFS': self._nfs_allow_access(share, access) elif share['share_proto'] == 'CIFS': self._cifs_allow_access(share, access) else: message = _( 'Unsupported share protocol: %s. 
Only "NFS" and ' '"CIFS" are currently supported share protocols.') % share[ 'share_proto'] LOG.error(message) raise exception.InvalidShare(reason=message) def _nfs_allow_access(self, share, access): """Allow access to nfs share.""" access_type = access['access_type'] if access_type != 'ip': message = _('Only "ip" access type allowed for the NFS ' 'protocol.') LOG.error(message) raise exception.InvalidShareAccess(reason=message) export_path = self._get_container_path(share) access_ip = access['access_to'] access_level = access['access_level'] share_id = self._isilon_api.lookup_nfs_export(export_path) share_access_group = 'clients' if access_level == const.ACCESS_LEVEL_RO: share_access_group = 'read_only_clients' # Get current allowed clients export = self._get_existing_nfs_export(share_id) current_clients = export[share_access_group] # Format of ips could be '10.0.0.2', or '10.0.0.2, 10.0.0.0/24' ips = list() ips.append(access_ip) ips.extend(current_clients) export_params = {share_access_group: ips} url = '{0}/platform/1/protocols/nfs/exports/{1}'.format( self._server_url, share_id) resp = self._isilon_api.request('PUT', url, data=export_params) resp.raise_for_status() def _cifs_allow_access(self, share, access): access_type = access['access_type'] access_to = access['access_to'] access_level = access['access_level'] if access_type == 'ip': access_ip = access['access_to'] self._cifs_allow_access_ip(access_ip, share, access_level) elif access_type == 'user': self._cifs_allow_access_user(access_to, share, access_level) else: message = _('Only "ip" and "user" access types allowed for ' 'CIFS protocol.') LOG.error(message) raise exception.InvalidShareAccess(reason=message) def _cifs_allow_access_ip(self, ip, share, access_level): if access_level == const.ACCESS_LEVEL_RO: message = _('Only RW Access allowed for CIFS Protocol when using ' 'the "ip" access type.') LOG.error(message) raise exception.InvalidShareAccess(reason=message) allowed_ip = 'allow:' + ip smb_share = self._isilon_api.lookup_smb_share(share['name']) host_acl = smb_share['host_acl'] if allowed_ip not in host_acl: host_acl.append(allowed_ip) data = {'host_acl': host_acl} url = ('{0}/platform/1/protocols/smb/shares/{1}' .format(self._server_url, share['name'])) r = self._isilon_api.request('PUT', url, data=data) r.raise_for_status() def _cifs_allow_access_user(self, user, share, access_level): if access_level == const.ACCESS_LEVEL_RW: smb_permission = isilon_api.SmbPermission.rw elif access_level == const.ACCESS_LEVEL_RO: smb_permission = isilon_api.SmbPermission.ro else: message = _('Only "RW" and "RO" access levels are supported.') LOG.error(message) raise exception.InvalidShareAccess(reason=message) self._isilon_api.smb_permissions_add(share['name'], user, smb_permission) def deny_access(self, context, share, access, share_server): """Deny access to the share.""" if share['share_proto'] == 'NFS': self._nfs_deny_access(share, access) elif share['share_proto'] == 'CIFS': self._cifs_deny_access(share, access) def _nfs_deny_access(self, share, access): """Deny access to nfs share.""" if access['access_type'] != 'ip': return denied_ip = access['access_to'] access_level = access['access_level'] share_access_group = 'clients' if access_level == const.ACCESS_LEVEL_RO: share_access_group = 'read_only_clients' # Get list of currently allowed client ips export_id = self._isilon_api.lookup_nfs_export( self._get_container_path(share)) if export_id is None: message = _('Share %s should have been created, but was not ' 'found.') % 
share['name'] LOG.error(message) raise exception.ShareBackendException(msg=message) export = self._get_existing_nfs_export(export_id) try: clients = export[share_access_group] except KeyError: message = (_('Export %(export_name)s should have contained the ' 'JSON key %(json_key)s, but this key was not found.') % {'export_name': share['name'], 'json_key': share_access_group}) LOG.error(message) raise exception.ShareBackendException(msg=message) allowed_ips = set(clients) if allowed_ips.__contains__(denied_ip): allowed_ips.remove(denied_ip) data = {share_access_group: list(allowed_ips)} url = ('{0}/platform/1/protocols/nfs/exports/{1}' .format(self._server_url, six.text_type(export_id))) r = self._isilon_api.request('PUT', url, data=data) r.raise_for_status() def _get_existing_nfs_export(self, export_id): export = self._isilon_api.get_nfs_export(export_id) if export is None: message = _('NFS share with export id %d should have been ' 'created, but was not found.') % export_id LOG.error(message) raise exception.ShareBackendException(msg=message) return export def _cifs_deny_access(self, share, access): access_type = access['access_type'] if access_type == 'ip': self._cifs_deny_access_ip(access['access_to'], share) elif access_type == 'user': self._cifs_deny_access_user(share, access) else: message = _('Access type for CIFS deny access request was ' '"%(access_type)s". Only "user" and "ip" access types ' 'are supported for CIFS protocol access.') % { 'access_type': access_type} LOG.warning(message) def _cifs_deny_access_ip(self, denied_ip, share): """Deny access to cifs share.""" share_json = self._isilon_api.lookup_smb_share(share['name']) host_acl_list = share_json['host_acl'] allow_ip = 'allow:' + denied_ip if allow_ip in host_acl_list: host_acl_list.remove(allow_ip) share_params = {"host_acl": host_acl_list} url = ('{0}/platform/1/protocols/smb/shares/{1}' .format(self._server_url, share['name'])) resp = self._isilon_api.request('PUT', url, data=share_params) resp.raise_for_status() def _cifs_deny_access_user(self, share, access): self._isilon_api.smb_permissions_remove(share['name'], access[ 'access_to']) def check_for_setup_error(self): """Check for setup error.""" def connect(self, emc_share_driver, context): """Connect to an Isilon cluster.""" self._server = emc_share_driver.configuration.safe_get( "emc_nas_server") self._port = ( int(emc_share_driver.configuration.safe_get("emc_nas_server_port")) ) self._server_url = ('https://' + self._server + ':' + six.text_type(self._port)) self._username = emc_share_driver.configuration.safe_get( "emc_nas_login") self._password = emc_share_driver.configuration.safe_get( "emc_nas_password") self._root_dir = emc_share_driver.configuration.safe_get( "emc_nas_root_dir") # TODO(Shaun Edwards): make verify ssl a config variable? self._verify_ssl_cert = False self._isilon_api = self._isilon_api_class(self._server_url, auth=( self._username, self._password), verify_ssl_cert=self._verify_ssl_cert) if not self._isilon_api.is_path_existent(self._root_dir): self._isilon_api.create_directory(self._root_dir, recursive=True) def update_share_stats(self, stats_dict): """TODO.""" # TODO(Shaun Edwards): query capacity, set storage_protocol, # QoS support? 
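# Until then only the plugin's driver version is reported; backend name,
# storage protocol and snapshot capabilities are filled in by the wrapping
# EMC share driver.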
stats_dict['driver_version'] = VERSION def get_network_allocations_number(self): """Returns number of network allocations for creating VIFs.""" # TODO(Shaun Edwards) return 0 def setup_server(self, network_info, metadata=None): """Set up and configures share server with given network parameters.""" # TODO(Shaun Edwards): Look into supporting share servers def teardown_server(self, server_details, security_services=None): """Teardown share server.""" # TODO(Shaun Edwards): Look into supporting share servers def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update share access.""" if share['share_proto'] == 'NFS': state_map = self._update_access_nfs(share, access_rules) if share['share_proto'] == 'CIFS': state_map = self._update_access_cifs(share, access_rules) return state_map def _update_access_nfs(self, share, access_rules): """Updates access on a NFS share.""" nfs_rw_ips = set() nfs_ro_ips = set() rule_state_map = {} for rule in access_rules: rule_state_map[rule['access_id']] = { 'state': 'error' } for rule in access_rules: if rule['access_level'] == const.ACCESS_LEVEL_RW: nfs_rw_ips.add(rule['access_to']) elif rule['access_level'] == const.ACCESS_LEVEL_RO: nfs_ro_ips.add(rule['access_to']) export_id = self._isilon_api.lookup_nfs_export( self._get_container_path(share)) if export_id is None: # share does not exist on backend (set all rules to error state) return rule_state_map data = { 'clients': list(nfs_rw_ips), 'read_only_clients': list(nfs_ro_ips) } url = ('{0}/platform/1/protocols/nfs/exports/{1}' .format(self._server_url, six.text_type(export_id))) r = self._isilon_api.request('PUT', url, data=data) try: r.raise_for_status() except HTTPError: return rule_state_map # if we finish the bulk rule update with no error set rules to active for rule in access_rules: rule_state_map[rule['access_id']]['state'] = 'active' return rule_state_map def _update_access_cifs(self, share, access_rules): """Clear access on a CIFS share.""" cifs_ip_set = set() users = set() for rule in access_rules: if rule['access_type'] == 'ip': cifs_ip_set.add('allow:' + rule['access_to']) elif rule['access_type'] == 'user': users.add(rule['access_to']) smb_share = self._isilon_api.lookup_smb_share(share['name']) backend_smb_user_permissions = smb_share['permissions'] perms_to_remove = [] for perm in backend_smb_user_permissions: if perm['trustee']['name'] not in users: perms_to_remove.append(perm) for perm in perms_to_remove: backend_smb_user_permissions.remove(perm) data = { 'host_acl': list(cifs_ip_set), 'permissions': backend_smb_user_permissions, } url = ('{0}/platform/1/protocols/smb/shares/{1}' .format(self._server_url, share['name'])) r = self._isilon_api.request('PUT', url, data=data) try: r.raise_for_status() except HTTPError: # clear access rules failed so set all access rules to error state rule_state_map = {} for rule in access_rules: rule_state_map[rule['access_id']] = { 'state': 'error' } return rule_state_map # add access rules that don't exist on backend rule_state_map = {} for rule in access_rules: rule_state_map[rule['access_id']] = { 'state': 'error' } try: if rule['access_type'] == 'ip': self._cifs_allow_access_ip(rule['access_to'], share, rule['access_level']) rule_state_map[rule['access_id']]['state'] = 'active' elif rule['access_type'] == 'user': backend_users = set() for perm in backend_smb_user_permissions: backend_users.add(perm['trustee']['name']) if rule['access_to'] not in backend_users: self._cifs_allow_access_user( rule['access_to'], 
share, rule['access_level']) rule_state_map[rule['access_id']]['state'] = 'active' else: continue except exception.ManilaException: pass return rule_state_map manila-10.0.0/manila/share/drivers/dell_emc/driver.py0000664000175000017500000003157213656750227022532 0ustar zuulzuul00000000000000# Copyright (c) 2019 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ EMC specific NAS storage driver. This driver is a pluggable driver that allows specific EMC NAS devices to be plugged-in as the underlying backend. Use the Manila configuration variable "share_backend_name" to specify, which backend plugins to use. """ from oslo_config import cfg from oslo_log import log from manila.share import driver from manila.share.drivers.dell_emc import plugin_manager as manager EMC_NAS_OPTS = [ cfg.StrOpt('emc_nas_login', help='User name for the EMC server.'), cfg.StrOpt('emc_nas_password', help='Password for the EMC server.'), cfg.HostAddressOpt('emc_nas_server', help='EMC server hostname or IP address.'), cfg.PortOpt('emc_nas_server_port', default=8080, help='Port number for the EMC server.'), cfg.BoolOpt('emc_nas_server_secure', default=True, help='Use secure connection to server.'), cfg.StrOpt('emc_share_backend', ignore_case=True, choices=['isilon', 'vnx', 'unity', 'vmax', 'powermax'], help='Share backend.'), cfg.StrOpt('emc_nas_root_dir', help='The root directory where shares will be located.'), cfg.BoolOpt('emc_ssl_cert_verify', default=True, help='If set to False the https client will not validate the ' 'SSL certificate of the backend endpoint.'), cfg.StrOpt('emc_ssl_cert_path', help='Can be used to specify a non default path to a ' 'CA_BUNDLE file or directory with certificates of trusted ' 'CAs, which will be used to validate the backend.') ] LOG = log.getLogger(__name__) CONF = cfg.CONF CONF.register_opts(EMC_NAS_OPTS) class EMCShareDriver(driver.ShareDriver): """EMC specific NAS driver. Allows for NFS and CIFS NAS storage usage.""" def __init__(self, *args, **kwargs): self.configuration = kwargs.get('configuration', None) if self.configuration: self.configuration.append_config_values(EMC_NAS_OPTS) self.backend_name = self.configuration.safe_get( 'emc_share_backend') else: self.backend_name = CONF.emc_share_backend self.backend_name = self.backend_name or 'EMC_NAS_Storage' self.plugin_manager = manager.EMCPluginManager( namespace='manila.share.drivers.dell_emc.plugins') if self.backend_name == 'vmax': LOG.warning("Configuration option 'emc_share_backend=vmax' will " "remain a valid option until the V release of " "OpenStack. 
After that, only " "'emc_share_backend=powermax' will be excepted.") self.backend_name = 'powermax' self.plugin = self.plugin_manager.load_plugin( self.backend_name, configuration=self.configuration) super(EMCShareDriver, self).__init__( self.plugin.driver_handles_share_servers, *args, **kwargs) if hasattr(self.plugin, 'ipv6_implemented'): self.ipv6_implemented = self.plugin.ipv6_implemented if hasattr(self.plugin, 'revert_to_snap_support'): self.revert_to_snap_support = self.plugin.revert_to_snap_support else: self.revert_to_snap_support = False if hasattr(self.plugin, 'shrink_share_support'): self.shrink_share_support = self.plugin.shrink_share_support else: self.shrink_share_support = False if hasattr(self.plugin, 'manage_existing_support'): self.manage_existing_support = self.plugin.manage_existing_support else: self.manage_existing_support = False if hasattr(self.plugin, 'manage_existing_with_server_support'): self.manage_existing_with_server_support = ( self.plugin.manage_existing_with_server_support) else: self.manage_existing_with_server_support = False if hasattr(self.plugin, 'manage_existing_snapshot_support'): self.manage_existing_snapshot_support = ( self.plugin.manage_existing_snapshot_support) else: self.manage_existing_snapshot_support = False if hasattr(self.plugin, 'manage_snapshot_with_server_support'): self.manage_snapshot_with_server_support = ( self.plugin.manage_snapshot_with_server_support) else: self.manage_snapshot_with_server_support = False if hasattr(self.plugin, 'manage_server_support'): self.manage_server_support = self.plugin.manage_server_support else: self.manage_server_support = False if hasattr(self.plugin, 'get_share_server_network_info_support'): self.get_share_server_network_info_support = ( self.plugin.get_share_server_network_info_support) else: self.get_share_server_network_info_support = False def manage_existing(self, share, driver_options): """manage an existing share""" if self.manage_existing_support: return self.plugin.manage_existing(share, driver_options) else: return NotImplementedError() def manage_existing_with_server(self, share, driver_options, share_server=None): """manage an existing share""" if self.manage_existing_with_server_support: return self.plugin.manage_existing_with_server( share, driver_options, share_server) else: return NotImplementedError() def manage_existing_snapshot(self, snapshot, driver_options): """manage an existing share snapshot""" if self.manage_existing_snapshot_support: return self.plugin.manage_existing_snapshot(snapshot, driver_options) else: return NotImplementedError() def manage_existing_snapshot_with_server(self, snapshot, driver_options, share_server=None): """manage an existing share snapshot""" if self.manage_snapshot_with_server_support: return self.plugin.manage_existing_snapshot_with_server( snapshot, driver_options, share_server=None) else: return NotImplementedError() def manage_server(self, context, share_server, identifier, driver_options): if self.manage_server_support: return self.plugin.manage_server(context, share_server, identifier, driver_options) else: return NotImplementedError() def get_share_server_network_info( self, context, share_server, identifier, driver_options): if self.get_share_server_network_info_support: return self.plugin.get_share_server_network_info( context, share_server, identifier, driver_options) else: return NotImplementedError() def unmanage_server(self, server_details, security_services=None): LOG.info('Dell EMC driver will unmanage share server: %s out of ' 
'OpenStack.', server_details.get('server_id')) def unmanage(self, share): LOG.info('Dell EMC driver will unmanage share: %s out of ' 'OpenStack.', share.get('id')) def unmanage_with_server(self, share, share_server=None): LOG.info('Dell EMC driver will unmanage share: %s out of ' 'OpenStack.', share.get('id')) def unmanage_snapshot(self, snapshot): LOG.info('Dell EMC driver will unmanage snapshot: %s out of ' 'OpenStack.', snapshot.get('id')) def unmanage_snapshot_with_server(self, snapshot, share_server=None): LOG.info('Dell EMC driver will unmanage snapshot: %s out of ' 'OpenStack.', snapshot.get('id')) def create_share(self, context, share, share_server=None): """Is called to create share.""" location = self.plugin.create_share(context, share, share_server) return location def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create share from snapshot.""" location = self.plugin.create_share_from_snapshot( context, share, snapshot, share_server) return location def extend_share(self, share, new_size, share_server=None): """Is called to extend share.""" self.plugin.extend_share(share, new_size, share_server) def shrink_share(self, share, new_size, share_server=None): """Is called to shrink share.""" if self.shrink_share_support: self.plugin.shrink_share(share, new_size, share_server) else: raise NotImplementedError() def create_snapshot(self, context, snapshot, share_server=None): """Is called to create snapshot.""" return self.plugin.create_snapshot(context, snapshot, share_server) def delete_share(self, context, share, share_server=None): """Is called to remove share.""" self.plugin.delete_share(context, share, share_server) def delete_snapshot(self, context, snapshot, share_server=None): """Is called to remove snapshot.""" self.plugin.delete_snapshot(context, snapshot, share_server) def ensure_share(self, context, share, share_server=None): """Invoked to sure that share is exported.""" self.plugin.ensure_share(context, share, share_server) def allow_access(self, context, share, access, share_server=None): """Allow access to the share.""" self.plugin.allow_access(context, share, access, share_server) def deny_access(self, context, share, access, share_server=None): """Deny access to the share.""" self.plugin.deny_access(context, share, access, share_server) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access to the share.""" self.plugin.update_access(context, share, access_rules, add_rules, delete_rules, share_server) def check_for_setup_error(self): """Check for setup error.""" self.plugin.check_for_setup_error() def do_setup(self, context): """Any initialization the share driver does while starting.""" self.plugin.connect(self, context) def _update_share_stats(self): """Retrieve stats info from share.""" backend_name = self.configuration.safe_get( 'share_backend_name') or "EMC_NAS_Storage" data = dict( share_backend_name=backend_name, vendor_name='Dell EMC', storage_protocol='NFS_CIFS', snapshot_support=True, create_share_from_snapshot_support=True, revert_to_snapshot_support=self.revert_to_snap_support) self.plugin.update_share_stats(data) super(EMCShareDriver, self)._update_share_stats(data) def get_network_allocations_number(self): """Returns number of network allocations for creating VIFs.""" return self.plugin.get_network_allocations_number() def _setup_server(self, network_info, metadata=None): """Set up and configures share server with given network 
parameters.""" return self.plugin.setup_server(network_info, metadata) def _teardown_server(self, server_details, security_services=None): """Teardown share server.""" return self.plugin.teardown_server(server_details, security_services) def get_configured_ip_versions(self): if self.ipv6_implemented: return [4, 6] else: return [4] def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server=None): if self.revert_to_snap_support: return self.plugin.revert_to_snapshot(context, snapshot, share_access_rules, snapshot_access_rules, share_server) else: raise NotImplementedError() manila-10.0.0/manila/share/drivers/maprfs/0000775000175000017500000000000013656750362020401 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/maprfs/__init__.py0000664000175000017500000000000013656750227022500 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/maprfs/driver_util.py0000664000175000017500000003251313656750227023307 0ustar zuulzuul00000000000000# Copyright (c) 2016, MapR Technologies # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Utility for processing MapR cluster operations """ import json import pipes import socket from oslo_concurrency import processutils from oslo_log import log import six from manila.common import constants from manila import exception from manila.i18n import _ from manila import utils LOG = log.getLogger(__name__) def get_version_handler(configuration): # here can be choosing DriverUtils depend on cluster version return BaseDriverUtil(configuration) class BaseDriverUtil(object): """Utility class for MapR-FS specific operations.""" NOT_FOUND_MSG = 'No such' ERROR_MSG = 'ERROR' def __init__(self, configuration): self.configuration = configuration self.ssh_connections = {} self.hosts = self.configuration.maprfs_clinode_ip self.local_hosts = socket.gethostbyname_ex(socket.gethostname())[2] self.maprcli_bin = '/usr/bin/maprcli' self.hadoop_bin = '/usr/bin/hadoop' def _execute(self, *cmd, **kwargs): for x in range(0, len(self.hosts)): try: check_exit_code = kwargs.pop('check_exit_code', True) host = self.hosts[x] if host in self.local_hosts: cmd = self._as_user(cmd, self.configuration.maprfs_ssh_name) out, err = utils.execute(*cmd, check_exit_code=check_exit_code) else: out, err = self._run_ssh(host, cmd, check_exit_code) # move available cldb host to the beginning if x > 0: self.hosts[0], self.hosts[x] = self.hosts[x], self.hosts[0] return out, err except exception.ProcessExecutionError as e: if self._check_error(e): raise elif x < len(self.hosts) - 1: msg = ('Error running SSH command. Trying another host') LOG.error(msg) else: raise except Exception as e: if x < len(self.hosts) - 1: msg = ('Error running SSH command. 
Trying another host') LOG.error(msg) else: raise exception.ProcessExecutionError(six.text_type(e)) def _run_ssh(self, host, cmd_list, check_exit_code=False): command = ' '.join(pipes.quote(cmd_arg) for cmd_arg in cmd_list) connection = self.ssh_connections.get(host) if connection is None: ssh_name = self.configuration.maprfs_ssh_name password = self.configuration.maprfs_ssh_pw private_key = self.configuration.maprfs_ssh_private_key remote_ssh_port = self.configuration.maprfs_ssh_port ssh_conn_timeout = self.configuration.ssh_conn_timeout min_size = self.configuration.ssh_min_pool_conn max_size = self.configuration.ssh_max_pool_conn ssh_pool = utils.SSHPool(host, remote_ssh_port, ssh_conn_timeout, ssh_name, password=password, privatekey=private_key, min_size=min_size, max_size=max_size) ssh = ssh_pool.create() self.ssh_connections[host] = (ssh_pool, ssh) else: ssh_pool, ssh = connection if not ssh.get_transport().is_active(): ssh_pool.remove(ssh) ssh = ssh_pool.create() self.ssh_connections[host] = (ssh_pool, ssh) return processutils.ssh_execute( ssh, command, check_exit_code=check_exit_code) @staticmethod def _check_error(error): # check if error was native return BaseDriverUtil.ERROR_MSG in error.stdout @staticmethod def _as_user(cmd, user): return ['sudo', 'su', '-', user, '-c', ' '.join(pipes.quote(cmd_arg) for cmd_arg in cmd)] @staticmethod def _add_params(cmd, **kwargs): params = [] for x in kwargs.keys(): params.append('-' + x) params.append(kwargs[x]) return cmd + params def create_volume(self, name, path, size, **kwargs): # delete size param as it is set separately if kwargs.get('quota'): del kwargs['quota'] sizestr = six.text_type(size) + 'G' cmd = [self.maprcli_bin, 'volume', 'create', '-name', name, '-path', path, '-quota', sizestr, '-readAce', '', '-writeAce', ''] cmd = self._add_params(cmd, **kwargs) self._execute(*cmd) def volume_exists(self, volume_name): cmd = [self.maprcli_bin, 'volume', 'info', '-name', volume_name] out, __ = self._execute(*cmd, check_exit_code=False) return self.NOT_FOUND_MSG not in out def delete_volume(self, name): cmd = [self.maprcli_bin, 'volume', 'remove', '-name', name, '-force', 'true'] out, __ = self._execute(*cmd, check_exit_code=False) # if volume does not exist do not raise exception.ProcessExecutionError if self.ERROR_MSG in out and self.NOT_FOUND_MSG not in out: raise exception.ProcessExecutionError(out) def set_volume_size(self, name, size): sizestr = six.text_type(size) + 'G' cmd = [self.maprcli_bin, 'volume', 'modify', '-name', name, '-quota', sizestr] self._execute(*cmd) def create_snapshot(self, name, volume_name): cmd = [self.maprcli_bin, 'volume', 'snapshot', 'create', '-snapshotname', name, '-volume', volume_name] self._execute(*cmd) def delete_snapshot(self, name, volume_name): cmd = [self.maprcli_bin, 'volume', 'snapshot', 'remove', '-snapshotname', name, '-volume', volume_name] out, __ = self._execute(*cmd, check_exit_code=False) # if snapshot does not exist do not raise ProcessExecutionError if self.ERROR_MSG in out and self.NOT_FOUND_MSG not in out: raise exception.ProcessExecutionError(out) def get_volume_info(self, volume_name, columns=None): cmd = [self.maprcli_bin, 'volume', 'info', '-name', volume_name, '-json'] if columns: cmd += ['-columns', ','.join(columns)] out, __ = self._execute(*cmd) return json.loads(out)['data'][0] def get_volume_info_by_path(self, volume_path, columns=None, check_if_exists=False): cmd = [self.maprcli_bin, 'volume', 'info', '-path', volume_path, '-json'] if columns: cmd += ['-columns', 
','.join(columns)] out, __ = self._execute(*cmd, check_exit_code=not check_if_exists) if check_if_exists and self.NOT_FOUND_MSG in out: return None return json.loads(out)['data'][0] def get_snapshot_list(self, volume_name=None, volume_path=None): params = {} if volume_name: params['volume'] = volume_name if volume_path: params['path'] = volume_name cmd = [self.maprcli_bin, 'volume', 'snapshot', 'list', '-volume', '-columns', 'snapshotname', '-json'] cmd = self._add_params(cmd, **params) out, __ = self._execute(*cmd) return [x['snapshotname'] for x in json.loads(out)['data']] def rename_volume(self, name, new_name): cmd = [self.maprcli_bin, 'volume', 'rename', '-name', name, '-newname', new_name] self._execute(*cmd) def fs_capacity(self): cmd = [self.hadoop_bin, 'fs', '-df'] out, err = self._execute(*cmd) lines = out.splitlines() try: fields = lines[1].split() total = int(fields[1]) free = int(fields[3]) except (IndexError, ValueError): msg = _('Failed to get MapR-FS capacity info.') LOG.exception(msg) raise exception.ProcessExecutionError(msg) return total, free def maprfs_ls(self, path): cmd = [self.hadoop_bin, 'fs', '-ls', path] out, __ = self._execute(*cmd) return out def maprfs_cp(self, source, dest): cmd = [self.hadoop_bin, 'fs', '-cp', '-p', source, dest] self._execute(*cmd) def maprfs_chmod(self, dest, mod): cmd = [self.hadoop_bin, 'fs', '-chmod', mod, dest] self._execute(*cmd) def maprfs_du(self, path): cmd = [self.hadoop_bin, 'fs', '-du', '-s', path] out, __ = self._execute(*cmd) return int(out.split(' ')[0]) def check_state(self): cmd = [self.hadoop_bin, 'fs', '-ls', '/'] out, __ = self._execute(*cmd, check_exit_code=False) return 'Found' in out def dir_not_empty(self, path): cmd = [self.hadoop_bin, 'fs', '-ls', path] out, __ = self._execute(*cmd, check_exit_code=False) return 'Found' in out def set_volume_ace(self, volume_name, access_rules): read_accesses = [] write_accesses = [] for access_rule in access_rules: if access_rule['access_level'] == constants.ACCESS_LEVEL_RO: read_accesses.append(access_rule['access_to']) elif access_rule['access_level'] == constants.ACCESS_LEVEL_RW: read_accesses.append(access_rule['access_to']) write_accesses.append(access_rule['access_to']) def rule_type(access_to): if self.group_exists(access_to): return 'g' elif self.user_exists(access_to): return 'u' else: # if nor user nor group exits, it should try add group rule return 'g' read_accesses_string = '|'.join( map(lambda x: rule_type(x) + ':' + x, read_accesses)) write_accesses_string = '|'.join( map(lambda x: rule_type(x) + ':' + x, write_accesses)) cmd = [self.maprcli_bin, 'volume', 'modify', '-name', volume_name, '-readAce', read_accesses_string, '-writeAce', write_accesses_string] self._execute(*cmd) def add_volume_ace_rules(self, volume_name, access_rules): if not access_rules: return access_rules_map = self.get_access_rules(volume_name) for access_rule in access_rules: access_rules_map[access_rule['access_to']] = access_rule self.set_volume_ace(volume_name, access_rules_map.values()) def remove_volume_ace_rules(self, volume_name, access_rules): if not access_rules: return access_rules_map = self.get_access_rules(volume_name) for access_rule in access_rules: if access_rules_map.get(access_rule['access_to']): del access_rules_map[access_rule['access_to']] self.set_volume_ace(volume_name, access_rules_map.values()) def get_access_rules(self, volume_name): info = self.get_volume_info(volume_name) aces = info['volumeAces'] read_ace = aces['readAce'] write_ace = aces['writeAce'] 
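        # Illustrative note (added commentary, not original driver code): MapR
        # ACE strings are pipe-separated "u:<user>" / "g:<group>" entries, e.g.
        # readAce='u:alice|g:qa' and writeAce='u:alice' (names made up). The
        # read ACE is parsed first into read-only rules and the write ACE
        # second into read-write rules, so a principal present in both ends up
        # with 'rw' because the write pass overwrites the earlier entry; ACE
        # values of 'p' (public) or '' are skipped entirely.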
access_rules_map = {} self._retrieve_access_rules_from_ace(read_ace, 'r', access_rules_map) self._retrieve_access_rules_from_ace(write_ace, 'w', access_rules_map) return access_rules_map def _retrieve_access_rules_from_ace(self, ace, ace_type, access_rules_map): access = constants.ACCESS_LEVEL_RW if ace_type == 'w' else ( constants.ACCESS_LEVEL_RO) if ace not in ['p', '']: write_rules = [x.strip() for x in ace.split('|')] for user in write_rules: rule_type, username = user.split(':') if rule_type not in ['u', 'g']: continue access_rules_map[username] = { 'access_level': access, 'access_to': username, 'access_type': 'user', } def user_exists(self, user): cmd = ['getent', 'passwd', user] out, __ = self._execute(*cmd, check_exit_code=False) return out != '' def group_exists(self, group): cmd = ['getent', 'group', group] out, __ = self._execute(*cmd, check_exit_code=False) return out != '' def get_cluster_name(self): cmd = [self.maprcli_bin, 'dashboard', 'info', '-json'] out, __ = self._execute(*cmd) try: return json.loads(out)['data'][0]['cluster']['name'] except (IndexError, ValueError) as e: msg = (_("Failed to parse cluster name. Error: %s") % e) raise exception.ProcessExecutionError(msg) manila-10.0.0/manila/share/drivers/maprfs/maprfs_native.py0000664000175000017500000004634013656750227023620 0ustar zuulzuul00000000000000# Copyright (c) 2016, MapR Technologies # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Share driver for MapR-FS distributed file system. """ import math import os from oslo_config import cfg from oslo_log import log from oslo_utils import strutils from oslo_utils import units from manila import context from manila import exception from manila.i18n import _ from manila.share import api from manila.share import driver from manila.share.drivers.maprfs import driver_util as mapru LOG = log.getLogger(__name__) maprfs_native_share_opts = [ cfg.ListOpt('maprfs_clinode_ip', help='The list of IPs or hostnames of nodes where mapr-core ' 'is installed.'), cfg.PortOpt('maprfs_ssh_port', default=22, help='CLDB node SSH port.'), cfg.StrOpt('maprfs_ssh_name', default="mapr", help='Cluster admin user ssh login name.'), cfg.StrOpt('maprfs_ssh_pw', help='Cluster node SSH login password, ' 'This parameter is not necessary, if ' '\'maprfs_ssh_private_key\' is configured.'), cfg.StrOpt('maprfs_ssh_private_key', help='Path to SSH private ' 'key for login.'), cfg.StrOpt('maprfs_base_volume_dir', default='/', help='Path in MapRFS where share volumes must be created.'), cfg.ListOpt('maprfs_zookeeper_ip', help='The list of IPs or hostnames of ZooKeeper nodes.'), cfg.ListOpt('maprfs_cldb_ip', help='The list of IPs or hostnames of CLDB nodes.'), cfg.BoolOpt('maprfs_rename_managed_volume', default=True, help='Specify whether existing volume should be renamed when' ' start managing.'), ] CONF = cfg.CONF CONF.register_opts(maprfs_native_share_opts) class MapRFSNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver): """MapR-FS Share Driver. Executes commands relating to shares. 
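    A minimal backend configuration sketch assembled only from the options
    registered in this module (the section name and all values are
    placeholders, not a tested configuration)::

        [maprfs1]
        share_driver = manila.share.drivers.maprfs.maprfs_native.MapRFSNativeShareDriver
        driver_handles_share_servers = False
        maprfs_clinode_ip = <cluster-node-ip>[,<cluster-node-ip>...]
        maprfs_ssh_name = mapr
        maprfs_ssh_private_key = <path-to-ssh-private-key>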
driver_handles_share_servers must be False because this driver does not support creating or managing virtual storage servers (share servers) API version history: 1.0 - Initial Version """ def __init__(self, *args, **kwargs): super(MapRFSNativeShareDriver, self).__init__(False, *args, **kwargs) self.configuration.append_config_values(maprfs_native_share_opts) self.backend_name = self.configuration.safe_get( 'share_backend_name') or 'MapR-FS-Native' self._base_volume_dir = self.configuration.safe_get( 'maprfs_base_volume_dir') or '/' self._maprfs_util = None self._maprfs_base_path = "maprfs://" self.cldb_ip = self.configuration.maprfs_cldb_ip or [] self.zookeeper_ip = self.configuration.maprfs_zookeeper_ip or [] self.rename_volume = self.configuration.maprfs_rename_managed_volume self.api = api.API() def do_setup(self, context): """Do initialization while the share driver starts.""" super(MapRFSNativeShareDriver, self).do_setup(context) self._maprfs_util = mapru.get_version_handler(self.configuration) def _share_dir(self, share_name): return os.path.join(self._base_volume_dir, share_name) def _volume_name(self, share_name): return share_name def _get_share_path(self, share): return share['export_location'] def _get_snapshot_path(self, snapshot): share_dir = snapshot['share_instance']['export_location'].split( ' ')[0][len(self._maprfs_base_path):] return os.path.join(share_dir, '.snapshot', snapshot['provider_location'] or snapshot['name']) def _get_volume_name(self, context, share): metadata = self.api.get_share_metadata(context, {'id': share['share_id']}) return metadata.get('_name', self._volume_name(share['name'])) def _get_share_export_locations(self, share, path=None): """Return share path on storage provider.""" cluster_name = self._maprfs_util.get_cluster_name() path = '%(path)s -C %(cldb)s -Z %(zookeeper)s -N %(name)s' % { 'path': self._maprfs_base_path + ( path or self._share_dir(share['name'])), 'cldb': ' '.join(self.cldb_ip), 'zookeeper': ' '.join(self.zookeeper_ip), 'name': cluster_name } export_list = [{ "path": path, "is_admin_only": False, "metadata": { "cldb": ','.join(self.cldb_ip), "zookeeper": ','.join(self.zookeeper_ip), "cluster-name": cluster_name, }, }] return export_list def _create_share(self, share, metadata, context): """Creates a share.""" if share['share_proto'].lower() != 'maprfs': msg = _('Only MapRFS protocol supported!') LOG.error(msg) raise exception.MapRFSException(msg=msg) options = {k[1:]: v for k, v in metadata.items() if k[0] == '_'} share_dir = options.pop('path', self._share_dir(share['name'])) volume_name = options.pop('name', self._volume_name(share['name'])) try: self._maprfs_util.create_volume(volume_name, share_dir, share['size'], **options) # posix permissions should be 777, ACEs are used as a restriction self._maprfs_util.maprfs_chmod(share_dir, '777') except exception.ProcessExecutionError: self.api.update_share_metadata(context, {'id': share['share_id']}, {'_name': 'error'}) msg = (_('Failed to create volume in MapR-FS for the ' 'share %(share_name)s.') % {'share_name': share['name']}) LOG.exception(msg) raise exception.MapRFSException(msg=msg) def _set_share_size(self, share, size): volume_name = self._get_volume_name(context.get_admin_context(), share) try: if share['size'] > size: info = self._maprfs_util.get_volume_info(volume_name) used = info['totalused'] if int(used) >= int(size) * units.Ki: raise exception.ShareShrinkingPossibleDataLoss( share_id=share['id']) self._maprfs_util.set_volume_size(volume_name, size) except 
exception.ProcessExecutionError: msg = (_('Failed to set space quota for the share %(share_name)s.') % {'share_name': share['name']}) LOG.exception(msg) raise exception.MapRFSException(msg=msg) def get_network_allocations_number(self): return 0 def create_share(self, context, share, share_server=None): """Create a MapRFS volume which acts as a share.""" metadata = self.api.get_share_metadata(context, {'id': share['share_id']}) self._create_share(share, metadata, context) return self._get_share_export_locations(share, path=metadata.get('_path')) def ensure_share(self, context, share, share_server=None): """Updates export location if it is changes.""" volume_name = self._get_volume_name(context, share) if self._maprfs_util.volume_exists(volume_name): info = self._maprfs_util.get_volume_info(volume_name) path = info['mountdir'] old_location = share['export_locations'][0] new_location = self._get_share_export_locations( share, path=path) if new_location[0]['path'] != old_location['path']: return new_location else: raise exception.ShareResourceNotFound(share_id=share['share_id']) def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Creates a share from snapshot.""" metadata = self.api.get_share_metadata(context, {'id': share['share_id']}) sn_share_tenant = self.api.get_share_metadata(context, { 'id': snapshot['share_instance']['share_id']}).get('_tenantuser') if sn_share_tenant and sn_share_tenant != metadata.get('_tenantuser'): msg = ( _('Cannot create share from snapshot %(snapshot_name)s ' 'with name %(share_name)s. Error: Tenant user should not ' 'differ from tenant of the source snapshot.') % {'snapshot_name': snapshot['name'], 'share_name': share['name']}) LOG.error(msg) raise exception.MapRFSException(msg=msg) share_dir = metadata.get('_path', self._share_dir(share['name'])) snapshot_path = self._get_snapshot_path(snapshot) self._create_share(share, metadata, context) try: if self._maprfs_util.dir_not_empty(snapshot_path): self._maprfs_util.maprfs_cp(snapshot_path + '/*', share_dir) except exception.ProcessExecutionError: msg = ( _('Failed to create share from snapshot %(snapshot_name)s ' 'with name %(share_name)s.') % { 'snapshot_name': snapshot['name'], 'share_name': share['name']}) LOG.exception(msg) raise exception.MapRFSException(msg=msg) return self._get_share_export_locations(share, path=metadata.get('_path')) def create_snapshot(self, context, snapshot, share_server=None): """Creates a snapshot.""" volume_name = self._get_volume_name(context, snapshot['share']) snapshot_name = snapshot['name'] try: self._maprfs_util.create_snapshot(snapshot_name, volume_name) return {'provider_location': snapshot_name} except exception.ProcessExecutionError: msg = ( _('Failed to create snapshot %(snapshot_name)s for the share ' '%(share_name)s.') % {'snapshot_name': snapshot_name, 'share_name': snapshot['share_name']}) LOG.exception(msg) raise exception.MapRFSException(msg=msg) def delete_share(self, context, share, share_server=None): """Deletes share storage.""" volume_name = self._get_volume_name(context, share) if volume_name == "error": LOG.info("Skipping deleting share with name %s, as it does not" " exist on the backend", share['name']) return try: self._maprfs_util.delete_volume(volume_name) except exception.ProcessExecutionError: msg = (_('Failed to delete share %(share_name)s.') % {'share_name': share['name']}) LOG.exception(msg) raise exception.MapRFSException(msg=msg) def delete_snapshot(self, context, snapshot, share_server=None): 
"""Deletes a snapshot.""" snapshot_name = snapshot['provider_location'] or snapshot['name'] volume_name = self._get_volume_name(context, snapshot['share']) try: self._maprfs_util.delete_snapshot(snapshot_name, volume_name) except exception.ProcessExecutionError: msg = (_('Failed to delete snapshot %(snapshot_name)s.') % {'snapshot_name': snapshot['name']}) LOG.exception(msg) raise exception.MapRFSException(msg=msg) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share.""" for access in access_rules: if access['access_type'].lower() != 'user': msg = _("Only 'user' access type allowed!") LOG.error(msg) raise exception.InvalidShareAccess(reason=msg) volume_name = self._get_volume_name(context, share) try: # 'update_access' is called before share is removed, so this # method shouldn`t raise exception if share does # not exist actually if not self._maprfs_util.volume_exists(volume_name): LOG.warning('Can not get share %s.', share['name']) return # check update if add_rules or delete_rules: self._maprfs_util.remove_volume_ace_rules(volume_name, delete_rules) self._maprfs_util.add_volume_ace_rules(volume_name, add_rules) else: self._maprfs_util.set_volume_ace(volume_name, access_rules) except exception.ProcessExecutionError: msg = (_('Failed to update access for share %(name)s.') % {'name': share['name']}) LOG.exception(msg) raise exception.MapRFSException(msg=msg) def extend_share(self, share, new_size, share_server=None): """Extend share storage.""" self._set_share_size(share, new_size) def shrink_share(self, share, new_size, share_server=None): """Shrink share storage.""" self._set_share_size(share, new_size) def _check_maprfs_state(self): try: return self._maprfs_util.check_state() except exception.ProcessExecutionError: msg = _('Failed to check MapRFS state.') LOG.exception(msg) raise exception.MapRFSException(msg=msg) def check_for_setup_error(self): """Return an error if the prerequisites are not met.""" if not self.configuration.maprfs_clinode_ip: msg = _( 'MapR cluster has not been specified in the configuration. ' 'Add the ip or list of ip of nodes with mapr-core installed ' 'in the "maprfs_clinode_ip" configuration parameter.') LOG.error(msg) raise exception.MapRFSException(msg=msg) if not self.configuration.maprfs_cldb_ip: LOG.warning('CLDB nodes are not specified!') if not self.configuration.maprfs_zookeeper_ip: LOG.warning('Zookeeper nodes are not specified!') if not self._check_maprfs_state(): msg = _('MapR-FS is not in healthy state.') LOG.error(msg) raise exception.MapRFSException(msg=msg) try: self._maprfs_util.maprfs_ls( os.path.join(self._base_volume_dir, '')) except exception.ProcessExecutionError: msg = _('Invalid "maprfs_base_volume_name". 
No such directory.') LOG.exception(msg) raise exception.MapRFSException(msg=msg) def manage_existing(self, share, driver_options): try: # retrieve share path from export location, maprfs:// prefix and # metadata (-C -Z -N) should be casted away share_path = share['export_location'].split( )[0][len(self._maprfs_base_path):] info = self._maprfs_util.get_volume_info_by_path( share_path, check_if_exists=True) if not info: msg = _("Share %s not found") % share[ 'export_location'] LOG.error(msg) raise exception.ManageInvalidShare(reason=msg) size = math.ceil(float(info['quota']) / units.Ki) used = math.ceil(float(info['totalused']) / units.Ki) volume_name = info['volumename'] should_rename = self.rename_volume rename_option = driver_options.get('rename') if rename_option: should_rename = strutils.bool_from_string(rename_option) if should_rename: self._maprfs_util.rename_volume(volume_name, share['name']) else: self.api.update_share_metadata(context.get_admin_context(), {'id': share['share_id']}, {'_name': volume_name}) location = self._get_share_export_locations(share, path=share_path) if size == 0: size = used msg = ( 'Share %s has no size quota. Total used value will be' ' used as share size') LOG.warning(msg, share['name']) return {'size': size, 'export_locations': location} except (ValueError, KeyError, exception.ProcessExecutionError): msg = _('Failed to manage share.') LOG.exception(msg) raise exception.MapRFSException(msg=msg) def manage_existing_snapshot(self, snapshot, driver_options): volume_name = self._get_volume_name(context.get_admin_context(), snapshot['share']) snapshot_path = self._get_snapshot_path(snapshot) try: snapshot_list = self._maprfs_util.get_snapshot_list( volume_name=volume_name) snapshot_name = snapshot['provider_location'] if snapshot_name not in snapshot_list: msg = _("Snapshot %s not found") % snapshot_name LOG.error(msg) raise exception.ManageInvalidShareSnapshot(reason=msg) size = math.ceil(float(self._maprfs_util.maprfs_du( snapshot_path)) / units.Gi) return {'size': size} except exception.ProcessExecutionError: msg = _("Manage existing share snapshot failed.") LOG.exception(msg) raise exception.MapRFSException(msg=msg) def _update_share_stats(self): """Retrieves stats info of share directories group.""" try: total, free = self._maprfs_util.fs_capacity() except exception.ProcessExecutionError: msg = _('Failed to check MapRFS capacity info.') LOG.exception(msg) raise exception.MapRFSException(msg=msg) total_capacity_gb = int(math.ceil(float(total) / units.Gi)) free_capacity_gb = int(math.floor(float(free) / units.Gi)) data = { 'share_backend_name': self.backend_name, 'storage_protocol': 'MAPRFS', 'driver_handles_share_servers': self.driver_handles_share_servers, 'vendor_name': 'MapR Technologies', 'driver_version': '1.0', 'total_capacity_gb': total_capacity_gb, 'free_capacity_gb': free_capacity_gb, 'snapshot_support': True, 'create_share_from_snapshot_support': True, } super(MapRFSNativeShareDriver, self)._update_share_stats(data) manila-10.0.0/manila/share/drivers/infortrend/0000775000175000017500000000000013656750362021263 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/infortrend/__init__.py0000664000175000017500000000000013656750227023362 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/infortrend/infortrend_nas.py0000664000175000017500000006301513656750227024655 0ustar zuulzuul00000000000000# Copyright (c) 2019 Infortrend Technology, Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import re from oslo_concurrency import processutils from oslo_log import log from oslo_utils import units from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import utils as share_utils from manila import utils as manila_utils LOG = log.getLogger(__name__) def _bi_to_gi(bi_size): return bi_size / units.Gi class InfortrendNAS(object): _SSH_PORT = 22 def __init__(self, nas_ip, username, password, ssh_key, timeout, pool_dict, channel_dict): self.nas_ip = nas_ip self.port = self._SSH_PORT self.username = username self.password = password self.ssh_key = ssh_key self.ssh_timeout = timeout self.pool_dict = pool_dict self.channel_dict = channel_dict self.command = "" self.ssh = None self.sshpool = None self.location = 'a@0' def _execute(self, command_line): command_line.extend(['-z', self.location]) commands = ' '.join(command_line) manila_utils.check_ssh_injection(commands) LOG.debug('Executing: %(command)s', {'command': commands}) cli_out = self._ssh_execute(commands) return self._parser(cli_out) def _ssh_execute(self, commands): try: out, err = processutils.ssh_execute( self.ssh, commands, timeout=self.ssh_timeout, check_exit_code=True) except processutils.ProcessExecutionError as pe: rc = pe.exit_code out = pe.stdout out = out.replace('\n', '\\n') msg = _('Error on execute ssh command. 
' 'Exit code: %(rc)d, msg: %(out)s') % { 'rc': rc, 'out': out} raise exception.InfortrendNASException(err=msg) return out def _parser(self, content=None): LOG.debug('parsing data:\n%s', content) content = content.replace("\r", "") content = content.strip() json_string = content.replace("'", "\"") cli_data = json_string.splitlines()[2] if cli_data: try: data_dict = json.loads(cli_data) except Exception: msg = _('Failed to parse data: ' '%(cli_data)s to dictionary.') % { 'cli_data': cli_data} LOG.error(msg) raise exception.InfortrendNASException(err=msg) rc = int(data_dict['cliCode'][0]['Return'], 16) if rc == 0: result = data_dict['data'] else: result = data_dict['cliCode'][0]['CLI'] else: msg = _('No data is returned from NAS.') LOG.error(msg) raise exception.InfortrendNASException(err=msg) if rc != 0: msg = _('NASCLI error, returned: %(result)s.') % { 'result': result} LOG.error(msg) raise exception.InfortrendCLIException( err=msg, rc=rc, out=result) return rc, result def do_setup(self): self._init_connect() self._ensure_service_on('nfs') self._ensure_service_on('cifs') def _init_connect(self): if not (self.sshpool and self.ssh): self.sshpool = manila_utils.SSHPool(ip=self.nas_ip, port=self.port, conn_timeout=None, login=self.username, password=self.password, privatekey=self.ssh_key) self.ssh = self.sshpool.create() if not self.ssh.get_transport().is_active(): self.sshpool = manila_utils.SSHPool(ip=self.nas_ip, port=self.port, conn_timeout=None, login=self.username, password=self.password, privatekey=self.ssh_key) self.ssh = self.sshpool.create() LOG.debug('NAScmd [%s@%s] start!', self.username, self.nas_ip) def check_for_setup_error(self): self._check_pools_setup() self._check_channels_status() def _ensure_service_on(self, proto, slot='A'): command_line = ['service', 'status', proto] rc, service_status = self._execute(command_line) if not service_status[0][slot][proto.upper()]['enabled']: command_line = ['service', 'restart', proto] self._execute(command_line) def _check_channels_status(self): channel_list = list(self.channel_dict.keys()) command_line = ['ifconfig', 'inet', 'show'] rc, channels_status = self._execute(command_line) for channel in channels_status: if 'CH' in channel['datalink']: ch = channel['datalink'].strip('CH') if ch in self.channel_dict.keys(): self.channel_dict[ch] = channel['IP'] channel_list.remove(ch) if channel['status'] == 'DOWN': LOG.warning('Channel [%(ch)s] status ' 'is down, please check.', { 'ch': ch}) if len(channel_list) != 0: msg = _('Channel setting %(channel_list)s is invalid!') % { 'channel_list': channel_list} LOG.error(msg) raise exception.InfortrendNASException(message=msg) def _check_pools_setup(self): pool_list = list(self.pool_dict.keys()) command_line = ['folder', 'status'] rc, pool_data = self._execute(command_line) for pool in pool_data: pool_name = self._extract_pool_name(pool) if pool_name in self.pool_dict.keys(): pool_list.remove(pool_name) self.pool_dict[pool_name]['id'] = pool['volumeId'] self.pool_dict[pool_name]['path'] = pool['directory'] + '/' if len(pool_list) == 0: break if len(pool_list) != 0: msg = _('Please create %(pool_list)s pool/s in advance!') % { 'pool_list': pool_list} LOG.error(msg) raise exception.InfortrendNASException(message=msg) def _extract_pool_name(self, pool_info): return pool_info['directory'].split('/')[1] def _extract_lv_name(self, pool_info): return pool_info['path'].split('/')[2] def update_pools_stats(self): pools = [] command_line = ['folder', 'status'] rc, pools_data = self._execute(command_line) for 
pool_info in pools_data: pool_name = self._extract_pool_name(pool_info) if pool_name in self.pool_dict.keys(): total_space = float(pool_info['size']) pool_quota_used = self._get_pool_quota_used(pool_name) available_space = total_space - pool_quota_used total_capacity_gb = round(_bi_to_gi(total_space), 2) free_capacity_gb = round(_bi_to_gi(available_space), 2) pool = { 'pool_name': pool_name, 'total_capacity_gb': total_capacity_gb, 'free_capacity_gb': free_capacity_gb, 'reserved_percentage': 0, 'qos': False, 'dedupe': False, 'compression': False, 'snapshot_support': False, 'thin_provisioning': False, 'thick_provisioning': True, 'replication_type': None, } pools.append(pool) return pools def _get_pool_quota_used(self, pool_name): pool_quota_used = 0.0 pool_data = self._get_share_pool_data(pool_name) folder_name = self._extract_lv_name(pool_data) command_line = ['fquota', 'status', pool_data['id'], folder_name, '-t', 'folder'] rc, quota_status = self._execute(command_line) for share_quota in quota_status: pool_quota_used += int(share_quota['quota']) return pool_quota_used def _get_share_pool_data(self, pool_name): if not pool_name: msg = _("Pool is not available in the share host.") raise exception.InvalidHost(reason=msg) if pool_name in self.pool_dict.keys(): return self.pool_dict[pool_name] else: msg = _('Pool [%(pool_name)s] not set in conf.') % { 'pool_name': pool_name} LOG.error(msg) raise exception.InfortrendNASException(err=msg) def create_share(self, share, share_server=None): pool_name = share_utils.extract_host(share['host'], level='pool') pool_data = self._get_share_pool_data(pool_name) folder_name = self._extract_lv_name(pool_data) share_proto = share['share_proto'].lower() share_name = share['id'].replace('-', '') share_path = pool_data['path'] + share_name command_line = ['folder', 'options', pool_data['id'], folder_name, '-c', share_name] self._execute(command_line) self._set_share_size( pool_data['id'], pool_name, share_name, share['size']) self._ensure_protocol_on(share_path, share_proto, share_name) LOG.info('Create Share [%(share)s] completed.', { 'share': share['id']}) return self._export_location( share_name, share_proto, pool_data['path']) def _export_location(self, share_name, share_proto, pool_path=None): location = [] location_data = { 'pool_path': pool_path, 'share_name': share_name, } self._check_channels_status() for ch in sorted(self.channel_dict.keys()): ip = self.channel_dict[ch] if share_proto == 'nfs': location.append( ip + ':%(pool_path)s%(share_name)s' % location_data) elif share_proto == 'cifs': location.append( '\\\\' + ip + '\\%(share_name)s' % location_data) else: msg = _('Unsupported protocol: [%s].') % share_proto raise exception.InvalidInput(msg) return location def _set_share_size(self, pool_id, pool_name, share_name, share_size): pool_data = self._get_share_pool_data(pool_name) folder_name = self._extract_lv_name(pool_data) command_line = ['fquota', 'create', pool_id, folder_name, share_name, str(share_size) + 'G', '-t', 'folder'] self._execute(command_line) LOG.debug('Set Share [%(share_name)s] ' 'Size [%(share_size)s G] completed.', { 'share_name': share_name, 'share_size': share_size}) return def _get_share_size(self, pool_id, pool_name, share_name): share_size = None command_line = ['fquota', 'status', pool_id, share_name, '-t', 'folder'] rc, quota_status = self._execute(command_line) for share_quota in quota_status: if share_quota['name'] == share_name: share_size = round(_bi_to_gi(float(share_quota['quota'])), 2) break return share_size def 
delete_share(self, share, share_server=None): pool_name = share_utils.extract_host(share['host'], level='pool') pool_data = self._get_share_pool_data(pool_name) folder_name = self._extract_lv_name(pool_data) share_name = share['id'].replace('-', '') if self._check_share_exist(pool_name, share_name): command_line = ['folder', 'options', pool_data['id'], folder_name, '-d', share_name] self._execute(command_line) else: LOG.warning('Share [%(share_name)s] is already deleted.', { 'share_name': share_name}) LOG.info('Delete Share [%(share)s] completed.', { 'share': share['id']}) def _check_share_exist(self, pool_name, share_name): path = self.pool_dict[pool_name]['path'] command_line = ['pagelist', 'folder', path] rc, subfolders = self._execute(command_line) return any(subfolder['name'] == share_name for subfolder in subfolders) def update_access(self, share, access_rules, add_rules, delete_rules, share_server=None): self._evict_unauthorized_clients(share, access_rules, share_server) access_dict = {} for access in access_rules: try: self._allow_access(share, access, share_server) except (exception.InfortrendNASException) as e: msg = _('Failed to allow access to client %(access)s, ' 'reason %(e)s.') % { 'access': access['access_to'], 'e': e} LOG.error(msg) access_dict[access['id']] = 'error' return access_dict def _evict_unauthorized_clients(self, share, access_rules, share_server=None): pool_name = share_utils.extract_host(share['host'], level='pool') pool_data = self._get_share_pool_data(pool_name) share_proto = share['share_proto'].lower() share_name = share['id'].replace('-', '') share_path = pool_data['path'] + share_name access_list = [] for access in access_rules: access_list.append(access['access_to']) if share_proto == 'nfs': host_ip_list = [] command_line = ['share', 'status', '-f', share_path] rc, nfs_status = self._execute(command_line) host_list = nfs_status[0]['nfs_detail']['hostList'] for host in host_list: if host['host'] != '*': host_ip_list.append(host['host']) for ip in host_ip_list: if ip not in access_list: command_line = ['share', 'options', share_path, 'nfs', '-c', ip] try: self._execute(command_line) except exception.InfortrendNASException: msg = _("Failed to remove share access rule %s") % (ip) LOG.exception(msg) pass elif share_proto == 'cifs': host_user_list = [] command_line = ['acl', 'get', share_path] rc, cifs_status = self._execute(command_line) for cifs_rule in cifs_status: if cifs_rule['name']: host_user_list.append(cifs_rule['name']) for user in host_user_list: if user not in access_list: command_line = ['acl', 'delete', share_path, '-u', user] try: self._execute(command_line) except exception.InfortrendNASException: msg = _("Failed to remove share access rule %s") % ( user) LOG.exception(msg) pass def _allow_access(self, share, access, share_server=None): pool_name = share_utils.extract_host(share['host'], level='pool') pool_data = self._get_share_pool_data(pool_name) share_name = share['id'].replace('-', '') share_path = pool_data['path'] + share_name share_proto = share['share_proto'].lower() access_type = access['access_type'] access_level = access['access_level'] or constants.ACCESS_LEVEL_RW access_to = access['access_to'] ACCESS_LEVEL_MAP = {access_level: access_level} msg = self._check_access_legal(share_proto, access_type) if msg: raise exception.InvalidShareAccess(reason=msg) if share_proto == 'nfs': command_line = ['share', 'options', share_path, 'nfs', '-h', access_to, '-p', access_level] self._execute(command_line) elif share_proto == 'cifs': if not 
self._check_user_exist(access_to): msg = _('Please create user [%(user)s] in advance.') % { 'user': access_to} LOG.error(msg) raise exception.InfortrendNASException(err=msg) if access_level == constants.ACCESS_LEVEL_RW: cifs_access = 'f' elif access_level == constants.ACCESS_LEVEL_RO: cifs_access = 'r' try: access_level = ACCESS_LEVEL_MAP[access_level] except KeyError: msg = _('Unsupported access_level: [%s].') % access_level raise exception.InvalidInput(msg) command_line = ['acl', 'set', share_path, '-u', access_to, '-a', cifs_access] self._execute(command_line) LOG.info('Share [%(share)s] access to [%(access_to)s] ' 'level [%(level)s] protocol [%(share_proto)s] completed.', { 'share': share['id'], 'access_to': access_to, 'level': access_level, 'share_proto': share_proto}) def _ensure_protocol_on(self, share_path, share_proto, cifs_name): if not self._check_proto_enabled(share_path, share_proto): command_line = ['share', share_path, share_proto, 'on'] if share_proto == 'cifs': command_line.extend(['-n', cifs_name]) self._execute(command_line) def _check_proto_enabled(self, share_path, share_proto): command_line = ['share', 'status', '-f', share_path] rc, share_status = self._execute(command_line) if share_status: check_enabled = share_status[0][share_proto] if check_enabled: return True return False def _check_user_exist(self, user_name): command_line = ['useradmin', 'user', 'list'] rc, user_list = self._execute(command_line) for user in user_list: if user['Name'] == user_name: return True return False def _check_access_legal(self, share_proto, access_type): msg = None if share_proto == 'cifs' and access_type != 'user': msg = _('Infortrend CIFS share only supports USER access type.') elif share_proto == 'nfs' and access_type != 'ip': msg = _('Infortrend NFS share only supports IP access type.') elif share_proto not in ('nfs', 'cifs'): msg = _('Unsupported share protocol [%s].') % share_proto return msg def get_pool(self, share): pool_name = share_utils.extract_host(share['host'], level='pool') if not pool_name: share_name = share['id'].replace('-', '') for pool in self.pool_dict.keys(): if self._check_share_exist(pool, share_name): pool_name = pool break return pool_name def ensure_share(self, share, share_server=None): share_proto = share['share_proto'].lower() pool_name = share_utils.extract_host(share['host'], level='pool') pool_data = self._get_share_pool_data(pool_name) share_name = share['id'].replace('-', '') return self._export_location( share_name, share_proto, pool_data['path']) def extend_share(self, share, new_size, share_server=None): pool_name = share_utils.extract_host(share['host'], level='pool') pool_data = self._get_share_pool_data(pool_name) share_name = share['id'].replace('-', '') self._set_share_size(pool_data['id'], pool_name, share_name, new_size) LOG.info('Successfully Extend Share [%(share)s] ' 'to size [%(new_size)s G].', { 'share': share['id'], 'new_size': new_size}) def shrink_share(self, share, new_size, share_server=None): pool_name = share_utils.extract_host(share['host'], level='pool') pool_data = self._get_share_pool_data(pool_name) share_name = share['id'].replace('-', '') folder_name = self._extract_lv_name(pool_data) command_line = ['fquota', 'status', pool_data['id'], folder_name, '-t', 'folder'] rc, quota_status = self._execute(command_line) for share_quota in quota_status: if share_quota['name'] == share_name: used_space = round(_bi_to_gi(float(share_quota['used'])), 2) if new_size < used_space: raise exception.ShareShrinkingPossibleDataLoss( 
share_id=share['id']) self._set_share_size(pool_data['id'], pool_name, share_name, new_size) LOG.info('Successfully Shrink Share [%(share)s] ' 'to size [%(new_size)s G].', { 'share': share['id'], 'new_size': new_size}) def manage_existing(self, share, driver_options): share_proto = share['share_proto'].lower() pool_name = share_utils.extract_host(share['host'], level='pool') pool_data = self._get_share_pool_data(pool_name) volume_name = self._extract_lv_name(pool_data) input_location = share['export_locations'][0]['path'] share_name = share['id'].replace('-', '') ch_ip, folder_name = self._parse_location(input_location, share_proto) if not self._check_channel_ip(ch_ip): msg = _('Export location ip: [%(ch_ip)s] ' 'is incorrect, please use data port ip.') % { 'ch_ip': ch_ip} LOG.error(msg) raise exception.InfortrendNASException(err=msg) if not self._check_share_exist(pool_name, folder_name): msg = _('Can not find folder [%(folder_name)s] ' 'in pool [%(pool_name)s].') % { 'folder_name': folder_name, 'pool_name': pool_name} LOG.error(msg) raise exception.InfortrendNASException(err=msg) share_path = pool_data['path'] + folder_name self._ensure_protocol_on(share_path, share_proto, share_name) share_size = self._get_share_size( pool_data['id'], pool_name, folder_name) if not share_size: msg = _('Folder [%(folder_name)s] has no size limitation, ' 'please set it first for Openstack management.') % { 'folder_name': folder_name} LOG.error(msg) raise exception.InfortrendNASException(err=msg) # rename folder name command_line = ['folder', 'options', pool_data['id'], volume_name, '-k', folder_name, share_name] self._execute(command_line) location = self._export_location( share_name, share_proto, pool_data['path']) LOG.info('Successfully Manage Infortrend Share [%(folder_name)s], ' 'Size: [%(size)s G], Protocol: [%(share_proto)s], ' 'new name: [%(share_name)s].', { 'folder_name': folder_name, 'size': share_size, 'share_proto': share_proto, 'share_name': share_name}) return {'size': share_size, 'export_locations': location} def _parse_location(self, input_location, share_proto): ip = None folder_name = None pattern_ip = r'[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' if share_proto == 'nfs': pattern_folder = r'[^\/]+$' ip = "".join(re.findall(pattern_ip, input_location)) folder_name = "".join(re.findall(pattern_folder, input_location)) elif share_proto == 'cifs': pattern_folder = r'[^\\]+$' ip = "".join(re.findall(pattern_ip, input_location)) folder_name = "".join(re.findall(pattern_folder, input_location)) if not (ip and folder_name): msg = _('Export location error, please check ' 'ip: [%(ip)s], folder_name: [%(folder_name)s].') % { 'ip': ip, 'folder_name': folder_name} LOG.error(msg) raise exception.InfortrendNASException(err=msg) return ip, folder_name def _check_channel_ip(self, channel_ip): return any(ip == channel_ip for ip in self.channel_dict.values()) def unmanage(self, share): pool_name = share_utils.extract_host(share['host'], level='pool') share_name = share['id'].replace('-', '') if not self._check_share_exist(pool_name, share_name): LOG.warning('Share [%(share_name)s] does not exist.', { 'share_name': share_name}) return LOG.info('Successfully Unmanaged Share [%(share)s].', { 'share': share['id']}) manila-10.0.0/manila/share/drivers/infortrend/driver.py0000664000175000017500000002404613656750227023136 0ustar zuulzuul00000000000000# Copyright (c) 2019 Infortrend Technology, Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_log import log from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.infortrend import infortrend_nas LOG = log.getLogger(__name__) infortrend_nas_opts = [ cfg.HostAddressOpt('infortrend_nas_ip', required=True, help='Infortrend NAS IP for management.'), cfg.StrOpt('infortrend_nas_user', default='manila', help='User for the Infortrend NAS server.'), cfg.StrOpt('infortrend_nas_password', default=None, secret=True, help='Password for the Infortrend NAS server. ' 'This is not necessary ' 'if infortrend_nas_ssh_key is set.'), cfg.StrOpt('infortrend_nas_ssh_key', default=None, help='SSH key for the Infortrend NAS server. ' 'This is not necessary ' 'if infortrend_nas_password is set.'), cfg.ListOpt('infortrend_share_pools', required=True, help='Comma separated list of Infortrend NAS pools.'), cfg.ListOpt('infortrend_share_channels', required=True, help='Comma separated list of Infortrend channels.'), cfg.IntOpt('infortrend_ssh_timeout', default=30, help='SSH timeout in seconds.'), ] CONF = cfg.CONF CONF.register_opts(infortrend_nas_opts) class InfortrendNASDriver(driver.ShareDriver): """Infortrend Share Driver for GS/GSe Family using NASCLI. 
Version history: 1.0.0 - Initial driver """ VERSION = "1.0.0" PROTOCOL = "NFS_CIFS" def __init__(self, *args, **kwargs): super(InfortrendNASDriver, self).__init__(False, *args, **kwargs) self.configuration.append_config_values(infortrend_nas_opts) nas_ip = self.configuration.safe_get('infortrend_nas_ip') username = self.configuration.safe_get('infortrend_nas_user') password = self.configuration.safe_get('infortrend_nas_password') ssh_key = self.configuration.safe_get('infortrend_nas_ssh_key') timeout = self.configuration.safe_get('infortrend_ssh_timeout') self.backend_name = self.configuration.safe_get('share_backend_name') if not (password or ssh_key): msg = _('Either infortrend_nas_password or infortrend_nas_ssh_key ' 'should be set.') raise exception.InvalidParameterValue(err=msg) pool_dict = self._init_pool_dict() channel_dict = self._init_channel_dict() self.ift_nas = infortrend_nas.InfortrendNAS(nas_ip, username, password, ssh_key, timeout, pool_dict, channel_dict) def _init_pool_dict(self): pools_names = self.configuration.safe_get('infortrend_share_pools') return {el: {} for el in pools_names} def _init_channel_dict(self): channels = self.configuration.safe_get('infortrend_share_channels') return {el: '' for el in channels} def do_setup(self, context): """Any initialization the share driver does while starting.""" LOG.debug('Infortrend NAS do_setup start.') self.ift_nas.do_setup() def check_for_setup_error(self): """Check for setup error.""" LOG.debug('Infortrend NAS check_for_setup_error start.') self.ift_nas.check_for_setup_error() def _update_share_stats(self): """Retrieve stats info from share group.""" LOG.debug('Updating Infortrend backend [%s].', self.backend_name) data = dict( share_backend_name=self.backend_name, vendor_name='Infortrend', driver_version=self.VERSION, storage_protocol=self.PROTOCOL, reserved_percentage=self.configuration.reserved_share_percentage, pools=self.ift_nas.update_pools_stats()) LOG.debug('Infortrend pools status: %s', data['pools']) super(InfortrendNASDriver, self)._update_share_stats(data) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share. :param context: Current context :param share: Share model with share data. :param access_rules: All access rules for given share :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. :param share_server: Not used by this driver. :returns: None, or a dictionary of ``access_id``, ``access_key`` as key: value pairs for the rules added, where, ``access_id`` is the UUID (string) of the access rule, and ``access_key`` is the credential (string) of the entity granted access. During recovery after error, the returned dictionary must contain ``access_id``, ``access_key`` for all the rules that the driver is ordered to resync, i.e. rules in the ``access_rules`` parameter. 
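        Illustrative shape of such a return value (the rule ID and access
        key below are placeholders, not values produced by this driver)::

            {'<access-rule-uuid>': '<access-key-for-that-rule>'}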
""" return self.ift_nas.update_access(share, access_rules, add_rules, delete_rules, share_server) def create_share(self, context, share, share_server=None): """Create a share.""" LOG.debug('Creating share: %s.', share['id']) return self.ift_nas.create_share(share, share_server) def delete_share(self, context, share, share_server=None): """Remove a share.""" LOG.debug('Deleting share: %s.', share['id']) return self.ift_nas.delete_share(share, share_server) def get_pool(self, share): """Return pool name where the share resides on. :param share: The share hosted by the driver. """ return self.ift_nas.get_pool(share) def ensure_share(self, context, share, share_server=None): """Invoked to ensure that share is exported. Driver can use this method to update the list of export locations of the share if it changes. To do that, you should return list with export locations. :return None or list with export locations """ return self.ift_nas.ensure_share(share, share_server) def manage_existing(self, share, driver_options): """Brings an existing share under Manila management. If the provided share is not valid, then raise a ManageInvalidShare exception, specifying a reason for the failure. If the provided share is not in a state that can be managed, such as being replicated on the backend, the driver *MUST* raise ManageInvalidShare exception with an appropriate message. The share has a share_type, and the driver can inspect that and compare against the properties of the referenced backend share. If they are incompatible, raise a ManageExistingShareTypeMismatch, specifying a reason for the failure. :param share: Share model :param driver_options: Driver-specific options provided by admin. :return: share_update dictionary with required key 'size', which should contain size of the share. """ LOG.debug( 'Manage existing for share: %(share)s,', { 'share': share['share_id'], }) return self.ift_nas.manage_existing(share, driver_options) def unmanage(self, share): """Removes the specified share from Manila management. Does not delete the underlying backend share. For most drivers, this will not need to do anything. However, some drivers might use this call as an opportunity to clean up any Manila-specific configuration that they have associated with the backend share. If provided share cannot be unmanaged, then raise an UnmanageInvalidShare exception, specifying a reason for the failure. This method is invoked when the share is being unmanaged with a share type that has ``driver_handles_share_servers`` extra-spec set to False. """ LOG.debug( 'Unmanage share: %(share)s', { 'share': share['share_id'], }) return self.ift_nas.unmanage(share) def extend_share(self, share, new_size, share_server=None): """Extends size of existing share. :param share: Share model :param new_size: New size of share (new_size > share['size']) :param share_server: Optional -- Share server model """ return self.ift_nas.extend_share(share, new_size, share_server) def shrink_share(self, share, new_size, share_server=None): """Shrinks size of existing share. 
If consumed space on share larger than new_size driver should raise ShareShrinkingPossibleDataLoss exception: raise ShareShrinkingPossibleDataLoss(share_id=share['id']) :param share: Share model :param new_size: New size of share (new_size < share['size']) :param share_server: Optional -- Share server model :raises ShareShrinkingPossibleDataLoss, NotImplementedError """ return self.ift_nas.shrink_share(share, new_size, share_server) manila-10.0.0/manila/share/drivers/generic.py0000664000175000017500000013405313656750227021105 0ustar zuulzuul00000000000000# Copyright (c) 2014 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Generic Driver for shares.""" import os import time from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log from oslo_utils import importutils from oslo_utils import units import retrying import six from manila.common import constants as const from manila import compute from manila import context from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers import service_instance from manila import utils from manila import volume LOG = log.getLogger(__name__) share_opts = [ cfg.StrOpt('smb_template_config_path', default='$state_path/smb.conf', help="Path to smb config."), cfg.StrOpt('volume_name_template', default='manila-share-%s', help="Volume name template."), cfg.StrOpt('volume_snapshot_name_template', default='manila-snapshot-%s', help="Volume snapshot name template."), cfg.StrOpt('share_mount_path', default='/shares', help="Parent path in service instance where shares " "will be mounted."), cfg.IntOpt('max_time_to_create_volume', default=180, help="Maximum time to wait for creating cinder volume."), cfg.IntOpt('max_time_to_extend_volume', default=180, help="Maximum time to wait for extending cinder volume."), cfg.IntOpt('max_time_to_attach', default=120, help="Maximum time to wait for attaching cinder volume."), cfg.StrOpt('service_instance_smb_config_path', default='$share_mount_path/smb.conf', help="Path to SMB config in service instance."), cfg.ListOpt('share_helpers', default=[ 'CIFS=manila.share.drivers.helpers.CIFSHelperIPAccess', 'NFS=manila.share.drivers.helpers.NFSHelper', ], help='Specify list of share export helpers.'), cfg.StrOpt('share_volume_fstype', default='ext4', choices=['ext4', 'ext3'], help='Filesystem type of the share volume.'), cfg.StrOpt('cinder_volume_type', help='Name or id of cinder volume type which will be used ' 'for all volumes created by driver.'), ] CONF = cfg.CONF CONF.register_opts(share_opts) # NOTE(u_glide): These constants refer to the column number in the "df" output BLOCK_DEVICE_SIZE_INDEX = 1 USED_SPACE_INDEX = 2 def ensure_server(f): def wrap(self, context, *args, **kwargs): server = kwargs.get('share_server') if not self.driver_handles_share_servers: if not server: server = self.service_instance_manager.get_common_server() kwargs['share_server'] = server else: raise exception.ManilaException( _("Share server 
handling is not available. " "But 'share_server' was provided. '%s'. " "Share network should not be used.") % server.get('id')) elif not server: raise exception.ManilaException( _("Share server handling is enabled. But 'share_server' " "is not provided. Make sure you used 'share_network'.")) if not server.get('backend_details'): raise exception.ManilaException( _("Share server '%s' does not have backend details.") % server['id']) if not self.service_instance_manager.ensure_service_instance( context, server['backend_details']): raise exception.ServiceInstanceUnavailable() return f(self, context, *args, **kwargs) return wrap class GenericShareDriver(driver.ExecuteMixin, driver.ShareDriver): """Executes commands relating to Shares.""" def __init__(self, *args, **kwargs): """Do initialization.""" super(GenericShareDriver, self).__init__( [False, True], *args, **kwargs) self.admin_context = context.get_admin_context() self.configuration.append_config_values(share_opts) self._helpers = {} self.backend_name = self.configuration.safe_get( 'share_backend_name') or "Cinder_Volumes" self.ssh_connections = {} self._setup_service_instance_manager() self.private_storage = kwargs.get('private_storage') def _setup_service_instance_manager(self): self.service_instance_manager = ( service_instance.ServiceInstanceManager( driver_config=self.configuration)) def _ssh_exec(self, server, command, check_exit_code=True): LOG.debug("_ssh_exec - server: %s, command: %s, check_exit_code: %s", server, command, check_exit_code) connection = self.ssh_connections.get(server['instance_id']) ssh_conn_timeout = self.configuration.ssh_conn_timeout if not connection: ssh_pool = utils.SSHPool(server['ip'], 22, ssh_conn_timeout, server['username'], server.get('password'), server.get('pk_path'), max_size=1) ssh = ssh_pool.create() self.ssh_connections[server['instance_id']] = (ssh_pool, ssh) else: ssh_pool, ssh = connection if not ssh.get_transport().is_active(): ssh_pool.remove(ssh) ssh = ssh_pool.create() self.ssh_connections[server['instance_id']] = (ssh_pool, ssh) # (aovchinnikov): ssh_execute does not behave well when passed # parameters with spaces. wrap = lambda token: "\"" + token + "\"" # noqa: E731 command = [wrap(tkn) if tkn.count(' ') else tkn for tkn in command] return processutils.ssh_execute(ssh, ' '.join(command), check_exit_code=check_exit_code) def check_for_setup_error(self): """Returns an error if prerequisites aren't met.""" def do_setup(self, context): """Any initialization the generic driver does while starting.""" super(GenericShareDriver, self).do_setup(context) self.compute_api = compute.API() self.volume_api = volume.API() self._setup_helpers() common_sv_available = False share_server = None sv_fetch_retry_interval = 5 while not (common_sv_available or self.driver_handles_share_servers): try: # Verify availability of common server share_server = ( self.service_instance_manager.get_common_server()) common_sv_available = self._is_share_server_active( context, share_server) except Exception as ex: LOG.error(ex) if not common_sv_available: time.sleep(sv_fetch_retry_interval) LOG.warning("Waiting for the common service VM to become " "available. " "Driver is currently uninitialized. 
" "Share server: %(share_server)s " "Retry interval: %(retry_interval)s", dict(share_server=share_server, retry_interval=sv_fetch_retry_interval)) def _setup_helpers(self): """Initializes protocol-specific NAS drivers.""" helpers = self.configuration.share_helpers if helpers: for helper_str in helpers: share_proto, __, import_str = helper_str.partition('=') helper = importutils.import_class(import_str) self._helpers[share_proto.upper()] = helper( self._execute, self._ssh_exec, self.configuration) else: raise exception.ManilaException( "No protocol helpers selected for Generic Driver. " "Please specify using config option 'share_helpers'.") @ensure_server def create_share(self, context, share, share_server=None): """Creates share.""" return self._create_share( context, share, snapshot=None, share_server=share_server, ) def _create_share(self, context, share, snapshot, share_server=None): helper = self._get_helper(share) server_details = share_server['backend_details'] volume = self._allocate_container( self.admin_context, share, snapshot=snapshot) volume = self._attach_volume( self.admin_context, share, server_details['instance_id'], volume) if not snapshot: self._format_device(server_details, volume) self._mount_device(share, server_details, volume) export_locations = helper.create_exports( server_details, share['name']) return export_locations @utils.retry(exception.ProcessExecutionError, backoff_rate=1) def _is_device_file_available(self, server_details, volume): """Checks whether the device file is available""" command = ['sudo', 'test', '-b', volume['mountpoint']] self._ssh_exec(server_details, command) def _format_device(self, server_details, volume): """Formats device attached to the service vm.""" self._is_device_file_available(server_details, volume) command = ['sudo', 'mkfs.%s' % self.configuration.share_volume_fstype, volume['mountpoint']] self._ssh_exec(server_details, command) def _is_device_mounted(self, mount_path, server_details, volume=None): """Checks whether volume already mounted or not.""" log_data = { 'mount_path': mount_path, 'server_id': server_details['instance_id'], } if volume and volume.get('mountpoint', ''): log_data['volume_id'] = volume['id'] log_data['dev_mount_path'] = volume['mountpoint'] msg = ("Checking whether volume '%(volume_id)s' with mountpoint " "'%(dev_mount_path)s' is mounted on mount path '%(mount_p" "ath)s' on server '%(server_id)s' or not." % log_data) else: msg = ("Checking whether mount path '%(mount_path)s' exists on " "server '%(server_id)s' or not." 
% log_data) LOG.debug(msg) mounts_list_cmd = ['sudo', 'mount'] output, __ = self._ssh_exec(server_details, mounts_list_cmd) mounts = output.split('\n') for mount in mounts: mount_elements = mount.split(' ') if (len(mount_elements) > 2 and mount_path == mount_elements[2]): if volume: # Mount goes with device path and mount path if (volume.get('mountpoint', '') == mount_elements[0]): return True else: # Unmount goes only by mount path return True return False def _add_mount_permanently(self, share_id, server_details): """Add mount permanently for mounted filesystems.""" try: self._ssh_exec( server_details, ['grep', share_id, const.MOUNT_FILE_TEMP, '|', 'sudo', 'tee', '-a', const.MOUNT_FILE], ) except exception.ProcessExecutionError as e: LOG.error("Failed to add 'Share-%(share_id)s' mount " "permanently on server '%(instance_id)s'.", {"share_id": share_id, "instance_id": server_details['instance_id']}) raise exception.ShareBackendException(msg=six.text_type(e)) try: # Remount it to avoid postponed point of failure self._ssh_exec(server_details, ['sudo', 'mount', '-a']) except exception.ProcessExecutionError: LOG.error("Failed to mount all shares on server '%s'.", server_details['instance_id']) def _remove_mount_permanently(self, share_id, server_details): """Remove mount permanently from mounted filesystems.""" try: self._ssh_exec( server_details, ['sudo', 'sed', '-i', '\'/%s/d\'' % share_id, const.MOUNT_FILE], ) except exception.ProcessExecutionError as e: LOG.error("Failed to remove 'Share-%(share_id)s' mount " "permanently on server '%(instance_id)s'.", {"share_id": share_id, "instance_id": server_details['instance_id']}) raise exception.ShareBackendException(msg=six.text_type(e)) def _mount_device(self, share, server_details, volume): """Mounts block device to the directory on service vm. Mounts attached and formatted block device to the directory if not mounted yet. """ @utils.synchronized('generic_driver_mounts_' '%s' % server_details['instance_id']) def _mount_device_with_lock(): mount_path = self._get_mount_path(share) device_path = volume['mountpoint'] log_data = { 'dev': device_path, 'path': mount_path, 'server': server_details['instance_id'], } try: if not self._is_device_mounted(mount_path, server_details, volume): LOG.debug("Mounting '%(dev)s' to path '%(path)s' on " "server '%(server)s'.", log_data) mount_cmd = ( 'sudo', 'mkdir', '-p', mount_path, '&&', 'sudo', 'mount', device_path, mount_path, '&&', 'sudo', 'chmod', '777', mount_path, '&&', 'sudo', 'umount', mount_path, # NOTE(vponomaryov): 'tune2fs' is required to make # filesystem of share created from snapshot have # unique ID, in case of LVM volumes, by default, # it will have the same UUID as source volume one. # 'tune2fs' command can be executed only when device # is not mounted and also, in current case, it takes # effect only after it was mounted. Closes #1645751 # NOTE(gouthamr): Executing tune2fs -U only works on # a recently checked filesystem. 
See debian bug 857336 '&&', 'sudo', 'e2fsck', '-y', '-f', device_path, '&&', 'sudo', 'tune2fs', '-U', 'random', device_path, '&&', 'sudo', 'mount', device_path, mount_path, ) self._ssh_exec(server_details, mount_cmd) self._add_mount_permanently(share.id, server_details) else: LOG.warning("Mount point '%(path)s' already exists on " "server '%(server)s'.", log_data) except exception.ProcessExecutionError as e: raise exception.ShareBackendException(msg=six.text_type(e)) return _mount_device_with_lock() @utils.retry(exception.ProcessExecutionError) def _unmount_device(self, share, server_details): """Unmounts block device from directory on service vm.""" @utils.synchronized('generic_driver_mounts_' '%s' % server_details['instance_id']) def _unmount_device_with_lock(): mount_path = self._get_mount_path(share) log_data = { 'path': mount_path, 'server': server_details['instance_id'], } if self._is_device_mounted(mount_path, server_details): LOG.debug("Unmounting path '%(path)s' on server " "'%(server)s'.", log_data) unmount_cmd = ['sudo', 'umount', mount_path, '&&', 'sudo', 'rmdir', mount_path] self._ssh_exec(server_details, unmount_cmd) self._remove_mount_permanently(share.id, server_details) else: LOG.warning("Mount point '%(path)s' does not exist on " "server '%(server)s'.", log_data) return _unmount_device_with_lock() def _get_mount_path(self, share): """Returns the path to use for mount device in service vm.""" return os.path.join(self.configuration.share_mount_path, share['name']) def _attach_volume(self, context, share, instance_id, volume): """Attaches cinder volume to service vm.""" @utils.synchronized( "generic_driver_attach_detach_%s" % instance_id, external=True) def do_attach(volume): if volume['status'] == 'in-use': attached_volumes = [vol.id for vol in self.compute_api.instance_volumes_list( self.admin_context, instance_id)] if volume['id'] in attached_volumes: return volume else: raise exception.ManilaException( _('Volume %s is already attached to another instance') % volume['id']) @retrying.retry(stop_max_attempt_number=3, wait_fixed=2000, retry_on_exception=lambda exc: True) def attach_volume(): self.compute_api.instance_volume_attach( self.admin_context, instance_id, volume['id']) attach_volume() t = time.time() while time.time() - t < self.configuration.max_time_to_attach: volume = self.volume_api.get(context, volume['id']) if volume['status'] == 'in-use': return volume elif volume['status'] not in ('attaching', 'reserved'): raise exception.ManilaException( _('Failed to attach volume %s') % volume['id']) time.sleep(1) else: err_msg = { 'volume_id': volume['id'], 'max_time': self.configuration.max_time_to_attach } raise exception.ManilaException( _('Volume %(volume_id)s has not been attached in ' '%(max_time)ss. 
Giving up.') % err_msg) return do_attach(volume) def _get_volume_name(self, share_id): return self.configuration.volume_name_template % share_id def _get_volume(self, context, share_id): """Finds volume, associated to the specific share.""" volume_id = self.private_storage.get(share_id, 'volume_id') if volume_id is not None: return self.volume_api.get(context, volume_id) else: # Fallback to legacy method return self._get_volume_legacy(context, share_id) def _get_volume_legacy(self, context, share_id): # NOTE(u_glide): this method is deprecated and will be removed in # future versions volume_name = self._get_volume_name(share_id) search_opts = {'name': volume_name} if context.is_admin: search_opts['all_tenants'] = True volumes_list = self.volume_api.get_all(context, search_opts) if len(volumes_list) == 1: return volumes_list[0] elif len(volumes_list) > 1: LOG.error( "Expected only one volume in volume list with name " "'%(name)s', but got more than one in a result - " "'%(result)s'.", { 'name': volume_name, 'result': volumes_list}) raise exception.ManilaException( _("Error. Ambiguous volumes for name '%s'") % volume_name) return None def _get_volume_snapshot(self, context, snapshot_id): """Find volume snapshot associated to the specific share snapshot.""" volume_snapshot_id = self.private_storage.get( snapshot_id, 'volume_snapshot_id') if volume_snapshot_id is not None: return self.volume_api.get_snapshot(context, volume_snapshot_id) else: # Fallback to legacy method return self._get_volume_snapshot_legacy(context, snapshot_id) def _get_volume_snapshot_legacy(self, context, snapshot_id): # NOTE(u_glide): this method is deprecated and will be removed in # future versions volume_snapshot_name = ( self.configuration.volume_snapshot_name_template % snapshot_id) volume_snapshot_list = self.volume_api.get_all_snapshots( context, {'name': volume_snapshot_name}) volume_snapshot = None if len(volume_snapshot_list) == 1: volume_snapshot = volume_snapshot_list[0] elif len(volume_snapshot_list) > 1: LOG.error( "Expected only one volume snapshot in list with name " "'%(name)s', but got more than one in a result - " "'%(result)s'.", { 'name': volume_snapshot_name, 'result': volume_snapshot_list}) raise exception.ManilaException( _('Error. Ambiguous volume snaphots')) return volume_snapshot def _detach_volume(self, context, share, server_details): """Detaches cinder volume from service vm.""" instance_id = server_details['instance_id'] @utils.synchronized( "generic_driver_attach_detach_%s" % instance_id, external=True) def do_detach(): attached_volumes = [vol.id for vol in self.compute_api.instance_volumes_list( self.admin_context, instance_id)] try: volume = self._get_volume(context, share['id']) except exception.VolumeNotFound: LOG.warning("Volume not found for share %s. " "Possibly already deleted.", share['id']) volume = None if volume and volume['id'] in attached_volumes: self.compute_api.instance_volume_detach( self.admin_context, instance_id, volume['id'] ) t = time.time() while time.time() - t < self.configuration.max_time_to_attach: volume = self.volume_api.get(context, volume['id']) if volume['status'] in (const.STATUS_AVAILABLE, const.STATUS_ERROR): break time.sleep(1) else: err_msg = { 'volume_id': volume['id'], 'max_time': self.configuration.max_time_to_attach } raise exception.ManilaException( _('Volume %(volume_id)s has not been detached in ' '%(max_time)ss. 
Giving up.') % err_msg) do_detach() def _allocate_container(self, context, share, snapshot=None): """Creates cinder volume, associated to share by name.""" volume_snapshot = None if snapshot: volume_snapshot = self._get_volume_snapshot(context, snapshot['id']) volume = self.volume_api.create( context, share['size'], self.configuration.volume_name_template % share['id'], '', snapshot=volume_snapshot, volume_type=self.configuration.cinder_volume_type, availability_zone=share['availability_zone']) self.private_storage.update( share['id'], {'volume_id': volume['id']}) msg_error = _('Failed to create volume') msg_timeout = ( _('Volume has not been created in %ss. Giving up') % self.configuration.max_time_to_create_volume ) return self._wait_for_available_volume( volume, self.configuration.max_time_to_create_volume, msg_error=msg_error, msg_timeout=msg_timeout ) def _wait_for_available_volume(self, volume, timeout, msg_error, msg_timeout, expected_size=None): t = time.time() while time.time() - t < timeout: if volume['status'] == const.STATUS_AVAILABLE: if expected_size and volume['size'] != expected_size: LOG.debug("The volume %(vol_id)s is available but the " "volume size does not match the expected size. " "A volume resize operation may be pending. " "Expected size: %(expected_size)s, " "Actual size: %(volume_size)s.", dict(vol_id=volume['id'], expected_size=expected_size, volume_size=volume['size'])) else: break elif 'error' in volume['status'].lower(): raise exception.ManilaException(msg_error) time.sleep(1) volume = self.volume_api.get(self.admin_context, volume['id']) else: raise exception.ManilaException(msg_timeout) return volume def _deallocate_container(self, context, share): """Deletes cinder volume.""" try: volume = self._get_volume(context, share['id']) except exception.VolumeNotFound: LOG.info("Volume not found. Already deleted?") volume = None if volume: if volume['status'] == 'in-use': raise exception.ManilaException( _('Volume is still in use and ' 'cannot be deleted now.')) self.volume_api.delete(context, volume['id']) t = time.time() while (time.time() - t < self.configuration.max_time_to_create_volume): try: volume = self.volume_api.get(context, volume['id']) except exception.VolumeNotFound: LOG.debug('Volume was deleted successfully') break time.sleep(1) else: raise exception.ManilaException( _('Volume have not been ' 'deleted in %ss. 
Giving up') % self.configuration.max_time_to_create_volume) def _update_share_stats(self): """Retrieve stats info from share volume group.""" data = dict( share_backend_name=self.backend_name, storage_protocol='NFS_CIFS', reserved_percentage=self.configuration.reserved_share_percentage, ) super(GenericShareDriver, self)._update_share_stats(data) @ensure_server def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create share from snapshot.""" return self._create_share( context, share, snapshot=snapshot, share_server=share_server, ) @ensure_server def extend_share(self, share, new_size, share_server=None): server_details = share_server['backend_details'] helper = self._get_helper(share) helper.disable_access_for_maintenance(server_details, share['name']) self._unmount_device(share, server_details) volume = self._get_volume(self.admin_context, share['id']) if int(new_size) > volume['size']: self._detach_volume(self.admin_context, share, server_details) volume = self._extend_volume(self.admin_context, volume, new_size) volume = self._attach_volume( self.admin_context, share, server_details['instance_id'], volume) self._resize_filesystem(server_details, volume) self._mount_device(share, server_details, volume) helper.restore_access_after_maintenance(server_details, share['name']) def _extend_volume(self, context, volume, new_size): self.volume_api.extend(context, volume['id'], new_size) msg_error = _('Failed to extend volume %s') % volume['id'] msg_timeout = ( _('Volume has not been extended in %ss. Giving up') % self.configuration.max_time_to_extend_volume ) return self._wait_for_available_volume( volume, self.configuration.max_time_to_extend_volume, msg_error=msg_error, msg_timeout=msg_timeout, expected_size=new_size ) @ensure_server def shrink_share(self, share, new_size, share_server=None): server_details = share_server['backend_details'] helper = self._get_helper(share) export_location = share['export_locations'][0]['path'] mount_path = helper.get_share_path_by_export_location( server_details, export_location) consumed_space = self._get_consumed_space(mount_path, server_details) LOG.debug("Consumed space on share: %s", consumed_space) if consumed_space >= new_size: raise exception.ShareShrinkingPossibleDataLoss( share_id=share['id']) volume = self._get_volume(self.admin_context, share['id']) helper.disable_access_for_maintenance(server_details, share['name']) self._unmount_device(share, server_details) try: self._resize_filesystem(server_details, volume, new_size=new_size) except exception.Invalid: raise exception.ShareShrinkingPossibleDataLoss( share_id=share['id']) except Exception as e: msg = _("Cannot shrink share: %s") % six.text_type(e) raise exception.Invalid(msg) finally: self._mount_device(share, server_details, volume) helper.restore_access_after_maintenance(server_details, share['name']) def _resize_filesystem(self, server_details, volume, new_size=None): """Resize filesystem of provided volume.""" check_command = ['sudo', 'fsck', '-pf', volume['mountpoint']] self._ssh_exec(server_details, check_command) command = ['sudo', 'resize2fs', volume['mountpoint']] if new_size: command.append("%sG" % six.text_type(new_size)) try: self._ssh_exec(server_details, command) except processutils.ProcessExecutionError as e: if e.stderr.find('New size smaller than minimum') != -1: msg = (_("Invalid 'new_size' provided: %s") % six.text_type(new_size)) raise exception.Invalid(msg) else: msg = _("Cannot resize file-system: %s") % 
six.text_type(e) raise exception.ManilaException(msg) def _is_share_server_active(self, context, share_server): """Check if the share server is active.""" has_active_share_server = ( share_server and share_server.get('backend_details') and self.service_instance_manager.ensure_service_instance( context, share_server['backend_details'])) return has_active_share_server def delete_share(self, context, share, share_server=None): """Deletes share.""" helper = self._get_helper(share) if not self.driver_handles_share_servers: share_server = self.service_instance_manager.get_common_server() if self._is_share_server_active(context, share_server): helper.remove_exports( share_server['backend_details'], share['name']) self._unmount_device(share, share_server['backend_details']) self._detach_volume(self.admin_context, share, share_server['backend_details']) # Note(jun): It is an intended breakage to deal with the cases # with any reason that caused absence of Nova instances. self._deallocate_container(self.admin_context, share) self.private_storage.delete(share['id']) def create_snapshot(self, context, snapshot, share_server=None): """Creates a snapshot.""" model_update = {} volume = self._get_volume( self.admin_context, snapshot['share_instance_id']) volume_snapshot_name = (self.configuration. volume_snapshot_name_template % snapshot['id']) volume_snapshot = self.volume_api.create_snapshot_force( self.admin_context, volume['id'], volume_snapshot_name, '') t = time.time() while time.time() - t < self.configuration.max_time_to_create_volume: if volume_snapshot['status'] == const.STATUS_AVAILABLE: break if volume_snapshot['status'] == const.STATUS_ERROR: raise exception.ManilaException(_('Failed to create volume ' 'snapshot')) time.sleep(1) volume_snapshot = self.volume_api.get_snapshot( self.admin_context, volume_snapshot['id']) # NOTE(xyang): We should look at whether we still need to save # volume_snapshot_id in private_storage later, now that is saved # in provider_location. self.private_storage.update( snapshot['id'], {'volume_snapshot_id': volume_snapshot['id']}) # NOTE(xyang): Need to update provider_location in the db so # that it can be used in manage/unmanage snapshot tempest tests. model_update['provider_location'] = volume_snapshot['id'] else: raise exception.ManilaException( _('Volume snapshot have not been ' 'created in %ss. Giving up') % self.configuration.max_time_to_create_volume) return model_update def delete_snapshot(self, context, snapshot, share_server=None): """Deletes a snapshot.""" volume_snapshot = self._get_volume_snapshot(self.admin_context, snapshot['id']) if volume_snapshot is None: return self.volume_api.delete_snapshot(self.admin_context, volume_snapshot['id']) t = time.time() while time.time() - t < self.configuration.max_time_to_create_volume: try: snapshot = self.volume_api.get_snapshot(self.admin_context, volume_snapshot['id']) except exception.VolumeSnapshotNotFound: LOG.debug('Volume snapshot was deleted successfully') self.private_storage.delete(snapshot['id']) break time.sleep(1) else: raise exception.ManilaException( _('Volume snapshot have not been ' 'deleted in %ss. 
Giving up') % self.configuration.max_time_to_create_volume) @ensure_server def ensure_share(self, context, share, share_server=None): """Ensure that storage are mounted and exported.""" helper = self._get_helper(share) volume = self._get_volume(context, share['id']) # NOTE(vponomaryov): volume can be None for managed shares if volume: volume = self._attach_volume( context, share, share_server['backend_details']['instance_id'], volume) self._mount_device(share, share_server['backend_details'], volume) helper.create_exports( share_server['backend_details'], share['name'], recreate=True) @ensure_server def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share. This driver has two different behaviors according to parameters: 1. Recovery after error - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' shall be empty. Previously existing access rules are cleared and then added back according to 'access_rules'. 2. Adding/Deleting of several access rules - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' contain rules which should be added/deleted. Rules in 'access_rules' are ignored and only rules from 'add_rules' and 'delete_rules' are applied. :param context: Current context :param share: Share model with share data. :param access_rules: All access rules for given share :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. :param share_server: None or Share server model """ self._get_helper(share).update_access(share_server['backend_details'], share['name'], access_rules, add_rules=add_rules, delete_rules=delete_rules) def _get_helper(self, share): helper = self._helpers.get(share['share_proto']) if helper: return helper else: raise exception.InvalidShare( reason="Wrong, unsupported or disabled protocol") def get_network_allocations_number(self): """Get number of network interfaces to be created.""" # NOTE(vponomaryov): Generic driver does not need allocations, because # Nova will handle it. It is valid for all multitenant drivers, that # use service instance provided by Nova. return 0 def _setup_server(self, network_info, metadata=None): msg = "Creating share server '%s'." LOG.debug(msg, network_info['server_id']) server = self.service_instance_manager.set_up_service_instance( self.admin_context, network_info) for helper in self._helpers.values(): helper.init_helper(server) return server def _teardown_server(self, server_details, security_services=None): instance_id = server_details.get("instance_id") LOG.debug("Removing share infrastructure for service instance '%s'.", instance_id) self.service_instance_manager.delete_service_instance( self.admin_context, server_details) def manage_existing(self, share, driver_options): """Manage existing share to manila. Generic driver accepts only one driver_option 'volume_id'. If an administrator provides this option, then appropriate Cinder volume will be managed by Manila as well. :param share: share data :param driver_options: Empty dict or dict with 'volume_id' option. 
:return: dict with share size, example: {'size': 1} """ helper = self._get_helper(share) share_server = self.service_instance_manager.get_common_server() server_details = share_server['backend_details'] old_export_location = share['export_locations'][0]['path'] mount_path = helper.get_share_path_by_export_location( share_server['backend_details'], old_export_location) LOG.debug("Manage: mount path = %s", mount_path) mounted = self._is_device_mounted(mount_path, server_details) LOG.debug("Manage: is share mounted = %s", mounted) if not mounted: msg = _("Provided share %s is not mounted.") % share['id'] raise exception.ManageInvalidShare(reason=msg) def get_volume(): if 'volume_id' in driver_options: try: return self.volume_api.get( self.admin_context, driver_options['volume_id']) except exception.VolumeNotFound as e: raise exception.ManageInvalidShare(reason=six.text_type(e)) # NOTE(vponomaryov): Manila can only combine volume name by itself, # nowhere to get volume ID from. Return None since Cinder volume # names are not unique or fixed, hence, they can not be used for # sure. return None share_volume = get_volume() if share_volume: instance_volumes = self.compute_api.instance_volumes_list( self.admin_context, server_details['instance_id']) attached_volumes = [vol.id for vol in instance_volumes] LOG.debug('Manage: attached volumes = %s', six.text_type(attached_volumes)) if share_volume['id'] not in attached_volumes: msg = _("Provided volume %s is not attached " "to service instance.") % share_volume['id'] raise exception.ManageInvalidShare(reason=msg) linked_volume_name = self._get_volume_name(share['id']) if share_volume['name'] != linked_volume_name: LOG.debug('Manage: volume_id = %s', share_volume['id']) self.volume_api.update(self.admin_context, share_volume['id'], {'name': linked_volume_name}) self.private_storage.update( share['id'], {'volume_id': share_volume['id']}) share_size = share_volume['size'] else: share_size = self._get_mounted_share_size( mount_path, share_server['backend_details']) export_locations = helper.get_exports_for_share( server_details, old_export_location) return {'size': share_size, 'export_locations': export_locations} def manage_existing_snapshot(self, snapshot, driver_options): """Manage existing share snapshot with manila. :param snapshot: Snapshot data :param driver_options: Not used by the Generic driver currently :return: dict with share snapshot size, example: {'size': 1} """ model_update = {} volume_snapshot = None snapshot_size = snapshot.get('share_size', 0) provider_location = snapshot.get('provider_location') try: volume_snapshot = self.volume_api.get_snapshot( self.admin_context, provider_location) except exception.VolumeSnapshotNotFound as e: raise exception.ManageInvalidShareSnapshot( reason=six.text_type(e)) if volume_snapshot: snapshot_size = volume_snapshot['size'] # NOTE(xyang): volume_snapshot_id is saved in private_storage # in create_snapshot, so saving it here too for consistency. # We should look at whether we still need to save it in # private_storage later. self.private_storage.update( snapshot['id'], {'volume_snapshot_id': volume_snapshot['id']}) # NOTE(xyang): provider_location is used to map a Manila snapshot # to its name on the storage backend and prevent managing of the # same snapshot twice. 
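            # Illustrative sketch (IDs and sizes below are placeholders, not
            # taken from a real deployment): managing a snapshot backed by a
            # 2 GiB Cinder snapshot with backend ID '<cinder-snapshot-id>'
            # ultimately returns
            #     {'provider_location': '<cinder-snapshot-id>', 'size': 2}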
model_update['provider_location'] = volume_snapshot['id'] model_update['size'] = snapshot_size return model_update def unmanage_snapshot(self, snapshot): """Unmanage share snapshot with manila.""" self.private_storage.delete(snapshot['id']) def _get_mount_stats_by_index(self, mount_path, server_details, index, block_size='G'): """Get mount stats using df shell command. :param mount_path: Share path on share server :param server_details: Share server connection details :param index: Data index in df command output: BLOCK_DEVICE_SIZE_INDEX - Size of block device USED_SPACE_INDEX - Used space :param block_size: size of block (example: G, M, Mib, etc) :returns: value of provided index """ share_size_cmd = ['df', '-PB%s' % block_size, mount_path] output, __ = self._ssh_exec(server_details, share_size_cmd) lines = output.split('\n') return int(lines[1].split()[index][:-1]) def _get_mounted_share_size(self, mount_path, server_details): try: size = self._get_mount_stats_by_index( mount_path, server_details, BLOCK_DEVICE_SIZE_INDEX) except Exception as e: msg = _("Cannot calculate size of share %(path)s : %(error)s") % { 'path': mount_path, 'error': six.text_type(e) } raise exception.ManageInvalidShare(reason=msg) return size def _get_consumed_space(self, mount_path, server_details): try: size = self._get_mount_stats_by_index( mount_path, server_details, USED_SPACE_INDEX, block_size='M') size /= float(units.Ki) except Exception as e: msg = _("Cannot calculate consumed space on share " "%(path)s : %(error)s") % { 'path': mount_path, 'error': six.text_type(e) } raise exception.InvalidShare(reason=msg) return size manila-10.0.0/manila/share/drivers/zfssa/0000775000175000017500000000000013656750362020237 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/zfssa/__init__.py0000664000175000017500000000000013656750227022336 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/zfssa/restclient.py0000664000175000017500000003105013656750227022764 0ustar zuulzuul00000000000000# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ ZFS Storage Appliance REST API Client Programmatic Interface TODO(diemtran): this module needs to be placed in a library common to OpenStack services. When this happens, the file should be removed from Manila code base and imported from the relevant library. 
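A minimal usage sketch (illustrative only; the appliance URL, the
credentials and the REST path are placeholders, and the snippet is not
exercised by Manila itself -- drivers go through zfssarest.ZFSSAApi):

    client = RestClientURL('https://192.0.2.10:215', timeout=60)
    client.login(auth_str)  # base64-encoded 'user:password' credentials
    result = client.get('/api/storage/v1/pools')
    if result.status == Status.OK:
        pools = result.data
    client.logout()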
""" import time from oslo_serialization import jsonutils import six from six.moves import http_client from six.moves.urllib import error as urlerror from six.moves.urllib import request as urlrequest def log_debug_msg(obj, message): if obj.log_function: obj.log_function(message) class Status(object): """Result HTTP Status.""" #: Request return OK OK = http_client.OK # pylint: disable=invalid-name #: New resource created successfully CREATED = http_client.CREATED #: Command accepted ACCEPTED = http_client.ACCEPTED #: Command returned OK but no data will be returned NO_CONTENT = http_client.NO_CONTENT #: Bad Request BAD_REQUEST = http_client.BAD_REQUEST #: User is not authorized UNAUTHORIZED = http_client.UNAUTHORIZED #: The request is not allowed FORBIDDEN = http_client.FORBIDDEN #: The requested resource was not found NOT_FOUND = http_client.NOT_FOUND #: The request is not allowed NOT_ALLOWED = http_client.METHOD_NOT_ALLOWED #: Request timed out TIMEOUT = http_client.REQUEST_TIMEOUT #: Invalid request CONFLICT = http_client.CONFLICT #: Service Unavailable BUSY = http_client.SERVICE_UNAVAILABLE class RestResult(object): """Result from a REST API operation.""" def __init__(self, logfunc=None, response=None, err=None): """Initialize a RestResult containing the results from a REST call. :param logfunc: debug log function. :param response: HTTP response. :param err: HTTP error. """ self.response = response self.log_function = logfunc self.error = err self.data = "" self.status = 0 if self.response: self.status = self.response.getcode() result = self.response.read() while result: self.data += result result = self.response.read() if self.error: self.status = self.error.code self.data = http_client.responses[self.status] log_debug_msg(self, 'Response code: %s' % self.status) log_debug_msg(self, 'Response data: %s' % self.data) def get_header(self, name): """Get an HTTP header with the given name from the results. :param name: HTTP header name. :return: The header value or None if no value is found. """ if self.response is None: return None info = self.response.info() return info.getheader(name) class RestClientError(Exception): """Exception for ZFS REST API client errors.""" def __init__(self, status, name="ERR_INTERNAL", message=None): """Create a REST Response exception. :param status: HTTP response status. :param name: The name of the REST API error type. :param message: Descriptive error message returned from REST call. """ super(RestClientError, self).__init__(message) self.code = status self.name = name self.msg = message if status in http_client.responses: self.msg = http_client.responses[status] def __str__(self): return "%d %s %s" % (self.code, self.name, self.msg) class RestClientURL(object): # pylint: disable=R0902 """ZFSSA urllib client.""" def __init__(self, url, logfunc=None, **kwargs): """Initialize a REST client. :param url: The ZFSSA REST API URL. :key session: HTTP Cookie value of x-auth-session obtained from a normal BUI login. :key timeout: Time in seconds to wait for command to complete. (Default is 60 seconds). 
""" self.url = url self.log_function = logfunc self.local = kwargs.get("local", False) self.base_path = kwargs.get("base_path", "/api") self.timeout = kwargs.get("timeout", 60) self.headers = None if kwargs.get('session'): self.headers['x-auth-session'] = kwargs.get('session') self.headers = {"content-type": "application/json"} self.do_logout = False self.auth_str = None def _path(self, path, base_path=None): """Build rest url path.""" if path.startswith("http://") or path.startswith("https://"): return path if base_path is None: base_path = self.base_path if not path.startswith(base_path) and not ( self.local and ("/api" + path).startswith(base_path)): path = "%s%s" % (base_path, path) if self.local and path.startswith("/api"): path = path[4:] return self.url + path def _authorize(self): """Performs authorization setting x-auth-session.""" self.headers['authorization'] = 'Basic %s' % self.auth_str if 'x-auth-session' in self.headers: del self.headers['x-auth-session'] try: result = self.post("/access/v1") del self.headers['authorization'] if result.status == http_client.CREATED: self.headers['x-auth-session'] = ( result.get_header('x-auth-session')) self.do_logout = True log_debug_msg(self, ('ZFSSA version: %s') % result.get_header('x-zfssa-version')) elif result.status == http_client.NOT_FOUND: raise RestClientError(result.status, name="ERR_RESTError", message=("REST Not Available:" "Please Upgrade")) except RestClientError: del self.headers['authorization'] raise def login(self, auth_str): """Login to an appliance using a user name and password. Start a session like what is done logging into the BUI. This is not a requirement to run REST commands, since the protocol is stateless. What is does is set up a cookie session so that some server side caching can be done. If login is used remember to call logout when finished. :param auth_str: Authorization string (base64). """ self.auth_str = auth_str self._authorize() def logout(self): """Logout of an appliance.""" result = None try: result = self.delete("/access/v1", base_path="/api") except RestClientError: pass self.headers.clear() self.do_logout = False return result def islogin(self): """return if client is login.""" return self.do_logout @staticmethod def mkpath(*args, **kwargs): """Make a path?query string for making a REST request. :cmd_params args: The path part. :cmd_params kwargs: The query part. """ buf = six.StringIO() query = "?" for arg in args: buf.write("/") buf.write(arg) for k in kwargs: buf.write(query) if query == "?": query = "&" buf.write(k) buf.write("=") buf.write(kwargs[k]) return buf.getvalue() # pylint: disable=R0912 def request(self, path, request, body=None, **kwargs): """Make an HTTP request and return the results. :param path: Path used with the initialized URL to make a request. :param request: HTTP request type (GET, POST, PUT, DELETE). :param body: HTTP body of request. :key accept: Set HTTP 'Accept' header with this value. :key base_path: Override the base_path for this request. :key content: Set HTTP 'Content-Type' header with this value. 
""" out_hdrs = dict.copy(self.headers) if kwargs.get("accept"): out_hdrs['accept'] = kwargs.get("accept") if body: if isinstance(body, dict): body = six.text_type(jsonutils.dumps(body)) if body and len(body): out_hdrs['content-length'] = len(body) zfssaurl = self._path(path, kwargs.get("base_path")) req = urlrequest.Request(zfssaurl, body, out_hdrs) req.get_method = lambda: request maxreqretries = kwargs.get("maxreqretries", 10) retry = 0 response = None log_debug_msg(self, 'Request: %s %s' % (request, zfssaurl)) log_debug_msg(self, 'Out headers: %s' % out_hdrs) if body and body != '': log_debug_msg(self, 'Body: %s' % body) while retry < maxreqretries: try: response = urlrequest.urlopen(req, timeout=self.timeout) except urlerror.HTTPError as err: if err.code == http_client.NOT_FOUND: log_debug_msg(self, 'REST Not Found: %s' % err.code) else: log_debug_msg(self, ('REST Not Available: %s') % err.code) if (err.code == http_client.SERVICE_UNAVAILABLE and retry < maxreqretries): retry += 1 time.sleep(1) log_debug_msg(self, ('Server Busy retry request: %s') % retry) continue if ((err.code == http_client.UNAUTHORIZED or err.code == http_client.INTERNAL_SERVER_ERROR) and '/access/v1' not in zfssaurl): try: log_debug_msg(self, ('Authorizing request: ' '%(zfssaurl)s ' 'retry: %(retry)d .') % {'zfssaurl': zfssaurl, 'retry': retry}) self._authorize() req.add_header('x-auth-session', self.headers['x-auth-session']) except RestClientError: log_debug_msg(self, ('Cannot authorize.')) retry += 1 time.sleep(1) continue return RestResult(self.log_function, err=err) except urlerror.URLError as err: log_debug_msg(self, ('URLError: %s') % err.reason) raise RestClientError(-1, name="ERR_URLError", message=err.reason) break if ((response and response.getcode() == http_client.SERVICE_UNAVAILABLE) and retry >= maxreqretries): raise RestClientError(response.getcode(), name="ERR_HTTPError", message="REST Not Available: Disabled") return RestResult(self.log_function, response=response) def get(self, path, **kwargs): """Make an HTTP GET request. :param path: Path to resource. """ return self.request(path, "GET", **kwargs) def post(self, path, body="", **kwargs): """Make an HTTP POST request. :param path: Path to resource. :param body: Post data content. """ return self.request(path, "POST", body, **kwargs) def put(self, path, body="", **kwargs): """Make an HTTP PUT request. :param path: Path to resource. :param body: Put data content. """ return self.request(path, "PUT", body, **kwargs) def delete(self, path, **kwargs): """Make an HTTP DELETE request. :param path: Path to resource that will be deleted. """ return self.request(path, "DELETE", **kwargs) def head(self, path, **kwargs): """Make an HTTP HEAD request. :param path: Path to resource. """ return self.request(path, "HEAD", **kwargs) manila-10.0.0/manila/share/drivers/zfssa/zfssarest.py0000664000175000017500000004331213656750227022640 0ustar zuulzuul00000000000000# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ ZFS Storage Appliance Proxy """ from oslo_log import log from oslo_serialization import jsonutils from manila import exception from manila.i18n import _ from manila.share.drivers.zfssa import restclient LOG = log.getLogger(__name__) def factory_restclient(url, logfunc, **kwargs): return restclient.RestClientURL(url, logfunc, **kwargs) class ZFSSAApi(object): """ZFSSA API proxy class.""" pools_path = '/api/storage/v1/pools' pool_path = pools_path + '/%s' projects_path = pool_path + '/projects' project_path = projects_path + '/%s' shares_path = project_path + '/filesystems' share_path = shares_path + '/%s' snapshots_path = share_path + '/snapshots' snapshot_path = snapshots_path + '/%s' clone_path = snapshot_path + '/clone' service_path = '/api/service/v1/services/%s/enable' def __init__(self): self.host = None self.url = None self.rclient = None def __del__(self): if self.rclient: del self.rclient def rest_get(self, path, expected): ret = self.rclient.get(path) if ret.status != expected: exception_msg = (_('Rest call to %(host)s %(path)s failed.' 'Status: %(status)d Message: %(data)s') % {'host': self.host, 'path': path, 'status': ret.status, 'data': ret.data}) LOG.error(exception_msg) raise exception.ShareBackendException(msg=exception_msg) return ret def _is_pool_owned(self, pdata): """returns True if the pool's owner is the same as the host.""" svc = '/api/system/v1/version' ret = self.rest_get(svc, restclient.Status.OK) vdata = jsonutils.loads(ret.data) return (vdata['version']['asn'] == pdata['pool']['asn'] and vdata['version']['nodename'] == pdata['pool']['owner']) def set_host(self, host, timeout=None): self.host = host self.url = "https://%s:215" % self.host self.rclient = factory_restclient(self.url, LOG.debug, timeout=timeout) def login(self, auth_str): """Login to the appliance.""" if self.rclient and not self.rclient.islogin(): self.rclient.login(auth_str) def enable_service(self, service): """Enable the specified service.""" svc = self.service_path % service ret = self.rclient.put(svc) if ret.status != restclient.Status.ACCEPTED: exception_msg = (_("Cannot enable %s service.") % service) raise exception.ShareBackendException(msg=exception_msg) def verify_avail_space(self, pool, project, share, size): """Check if there is enough space available to a new share.""" self.verify_project(pool, project) avail = self.get_project_stats(pool, project) if avail < size: exception_msg = (_('Error creating ' 'share: %(share)s on ' 'pool: %(pool)s. ' 'Not enough space.') % {'share': share, 'pool': pool}) raise exception.ShareBackendException(msg=exception_msg) def get_pool_stats(self, pool): """Get space_available and used properties of a pool. returns (avail, used). 
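        Both values are read from the 'usage' section of the pool record
        ('available' and 'used') returned by the
        /api/storage/v1/pools/<pool> endpoint.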
""" svc = self.pool_path % pool ret = self.rclient.get(svc) if ret.status != restclient.Status.OK: exception_msg = (_('Error getting pool stats: ' 'pool: %(pool)s ' 'return code: %(ret.status)d ' 'message: %(ret.data)s.') % {'pool': pool, 'ret.status': ret.status, 'ret.data': ret.data}) raise exception.InvalidInput(reason=exception_msg) val = jsonutils.loads(ret.data) if not self._is_pool_owned(val): exception_msg = (_('Error pool ownership: ' 'pool %(pool)s is not owned ' 'by %(host)s.') % {'pool': pool, 'host': self.host}) raise exception.InvalidInput(reason=pool) avail = val['pool']['usage']['available'] used = val['pool']['usage']['used'] return avail, used def get_project_stats(self, pool, project): """Get space_available of a project. Used to check whether a project has enough space (after reservation) or not. """ svc = self.project_path % (pool, project) ret = self.rclient.get(svc) if ret.status != restclient.Status.OK: exception_msg = (_('Error getting project stats: ' 'pool: %(pool)s ' 'project: %(project)s ' 'return code: %(ret.status)d ' 'message: %(ret.data)s.') % {'pool': pool, 'project': project, 'ret.status': ret.status, 'ret.data': ret.data}) raise exception.InvalidInput(reason=exception_msg) val = jsonutils.loads(ret.data) avail = val['project']['space_available'] return avail def create_project(self, pool, project, arg): """Create a project on a pool. Check first whether the pool exists.""" self.verify_pool(pool) svc = self.project_path % (pool, project) ret = self.rclient.get(svc) if ret.status != restclient.Status.OK: svc = self.projects_path % pool ret = self.rclient.post(svc, arg) if ret.status != restclient.Status.CREATED: exception_msg = (_('Error creating project: ' '%(project)s on ' 'pool: %(pool)s ' 'return code: %(ret.status)d ' 'message: %(ret.data)s.') % {'project': project, 'pool': pool, 'ret.status': ret.status, 'ret.data': ret.data}) raise exception.ShareBackendException(msg=exception_msg) def verify_pool(self, pool): """Checks whether pool exists.""" svc = self.pool_path % pool self.rest_get(svc, restclient.Status.OK) def verify_project(self, pool, project): """Checks whether project exists.""" svc = self.project_path % (pool, project) ret = self.rest_get(svc, restclient.Status.OK) return ret def create_share(self, pool, project, share): """Create a share in the specified pool and project.""" self.verify_avail_space(pool, project, share, share['quota']) svc = self.share_path % (pool, project, share['name']) ret = self.rclient.get(svc) if ret.status != restclient.Status.OK: svc = self.shares_path % (pool, project) ret = self.rclient.post(svc, share) if ret.status != restclient.Status.CREATED: exception_msg = (_('Error creating ' 'share: %(name)s ' 'return code: %(ret.status)d ' 'message: %(ret.data)s.') % {'name': share['name'], 'ret.status': ret.status, 'ret.data': ret.data}) raise exception.ShareBackendException(msg=exception_msg) else: exception_msg = (_('Share with name %s already exists.') % share['name']) raise exception.ShareBackendException(msg=exception_msg) def get_share(self, pool, project, share): """Return share properties.""" svc = self.share_path % (pool, project, share) ret = self.rest_get(svc, restclient.Status.OK) val = jsonutils.loads(ret.data) return val['filesystem'] def modify_share(self, pool, project, share, arg): """Modify a set of properties of a share.""" svc = self.share_path % (pool, project, share) ret = self.rclient.put(svc, arg) if ret.status != restclient.Status.ACCEPTED: exception_msg = (_('Error modifying %(arg)s ' ' of 
share %(id)s.') % {'arg': arg, 'id': share}) raise exception.ShareBackendException(msg=exception_msg) def delete_share(self, pool, project, share): """Delete a share. The function assumes the share has no clone or snapshot. """ svc = self.share_path % (pool, project, share) ret = self.rclient.delete(svc) if ret.status != restclient.Status.NO_CONTENT: exception_msg = (('Error deleting ' 'share: %(share)s to ' 'pool: %(pool)s ' 'project: %(project)s ' 'return code: %(ret.status)d ' 'message: %(ret.data)s.'), {'share': share, 'pool': pool, 'project': project, 'ret.status': ret.status, 'ret.data': ret.data}) LOG.error(exception_msg) def create_snapshot(self, pool, project, share, snapshot): """Create a snapshot of the given share.""" svc = self.snapshots_path % (pool, project, share) arg = {'name': snapshot} ret = self.rclient.post(svc, arg) if ret.status != restclient.Status.CREATED: exception_msg = (_('Error creating ' 'snapshot: %(snapshot)s on ' 'share: %(share)s to ' 'pool: %(pool)s ' 'project: %(project)s ' 'return code: %(ret.status)d ' 'message: %(ret.data)s.') % {'snapshot': snapshot, 'share': share, 'pool': pool, 'project': project, 'ret.status': ret.status, 'ret.data': ret.data}) raise exception.ShareBackendException(msg=exception_msg) def delete_snapshot(self, pool, project, share, snapshot): """Delete a snapshot that has no clone.""" svc = self.snapshot_path % (pool, project, share, snapshot) ret = self.rclient.delete(svc) if ret.status != restclient.Status.NO_CONTENT: exception_msg = (_('Error deleting ' 'snapshot: %(snapshot)s on ' 'share: %(share)s to ' 'pool: %(pool)s ' 'project: %(project)s ' 'return code: %(ret.status)d ' 'message: %(ret.data)s.') % {'snapshot': snapshot, 'share': share, 'pool': pool, 'project': project, 'ret.status': ret.status, 'ret.data': ret.data}) LOG.error(exception_msg) raise exception.ShareBackendException(msg=exception_msg) def clone_snapshot(self, pool, project, snapshot, clone, arg): """Create a new share from the given snapshot.""" self.verify_avail_space(pool, project, clone['id'], clone['size']) svc = self.clone_path % (pool, project, snapshot['share_id'], snapshot['id']) ret = self.rclient.put(svc, arg) if ret.status != restclient.Status.CREATED: exception_msg = (_('Error cloning ' 'snapshot: %(snapshot)s on ' 'share: %(share)s of ' 'Pool: %(pool)s ' 'project: %(project)s ' 'return code: %(ret.status)d ' 'message: %(ret.data)s.') % {'snapshot': snapshot['id'], 'share': snapshot['share_id'], 'pool': pool, 'project': project, 'ret.status': ret.status, 'ret.data': ret.data}) LOG.error(exception_msg) raise exception.ShareBackendException(msg=exception_msg) def has_clones(self, pool, project, share, snapshot): """Check whether snapshot has existing clones.""" svc = self.snapshot_path % (pool, project, share, snapshot) ret = self.rest_get(svc, restclient.Status.OK) val = jsonutils.loads(ret.data) return val['snapshot']['numclones'] != 0 def allow_access_nfs(self, pool, project, share, access): """Allow an IP access to a share through NFS.""" if access['access_type'] != 'ip': reason = _('Only ip access type allowed.') raise exception.InvalidShareAccess(reason) ip = access['access_to'] details = self.get_share(pool, project, share) sharenfs = details['sharenfs'] if sharenfs == 'on' or sharenfs == 'rw': LOG.debug('Share %s has read/write permission ' 'open to all.', share) return if sharenfs == 'off': sharenfs = 'sec=sys' if ip in sharenfs: LOG.debug('Access to share %(share)s via NFS ' 'already granted to %(ip)s.', {'share': share, 'ip': ip}) return 
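        # Illustrative example (the address is a placeholder): granting
        # access to 10.0.0.5 on a share whose current sharenfs value is
        # 'sec=sys' appends ',rw=@10.0.0.5/32', so 'sec=sys,rw=@10.0.0.5/32'
        # is pushed back to the appliance via modify_share() below.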
entry = (',rw=@%s' % ip) if '/' not in ip: entry = "%s/32" % entry arg = {'sharenfs': sharenfs + entry} self.modify_share(pool, project, share, arg) def deny_access_nfs(self, pool, project, share, access): """Denies access of an IP to a share through NFS. Since sharenfs property allows a combination of mutiple syntaxes: sharenfs="sec=sys,rw=@first_ip,rw=@second_ip" sharenfs="sec=sys,rw=@first_ip:@second_ip" sharenfs="sec=sys,rw=@first_ip:@second_ip,rw=@third_ip" The function checks what syntax is used and remove the IP accordingly. """ if access['access_type'] != 'ip': reason = _('Only ip access type allowed.') raise exception.InvalidShareAccess(reason) ip = access['access_to'] entry = ('@%s' % ip) if '/' not in ip: entry = "%s/32" % entry details = self.get_share(pool, project, share) if entry not in details['sharenfs']: LOG.debug('IP %(ip)s does not have access ' 'to Share %(share)s via NFS.', {'ip': ip, 'share': share}) return sharenfs = str(details['sharenfs']) argval = '' if sharenfs.find((',rw=%s:' % entry)) >= 0: argval = sharenfs.replace(('%s:' % entry), '') elif sharenfs.find((',rw=%s' % entry)) >= 0: argval = sharenfs.replace((',rw=%s' % entry), '') elif sharenfs.find((':%s' % entry)) >= 0: argval = sharenfs.replace((':%s' % entry), '') arg = {'sharenfs': argval} LOG.debug('deny_access: %s', argval) self.modify_share(pool, project, share, arg) def create_schema(self, schema): """Create a custom ZFSSA schema.""" base = '/api/storage/v1/schema' svc = "%(base)s/%(prop)s" % {'base': base, 'prop': schema['property']} ret = self.rclient.get(svc) if ret.status == restclient.Status.OK: LOG.warning('Property %s already exists.', schema['property']) return ret = self.rclient.post(base, schema) if ret.status != restclient.Status.CREATED: exception_msg = (_('Error Creating ' 'Property: %(property)s ' 'Type: %(type)s ' 'Description: %(description)s ' 'Return code: %(ret.status)d ' 'Message: %(ret.data)s.') % {'property': schema['property'], 'type': schema['type'], 'description': schema['description'], 'ret.status': ret.status, 'ret.data': ret.data}) LOG.error(exception_msg) raise exception.ShareBackendException(msg=exception_msg) manila-10.0.0/manila/share/drivers/zfssa/zfssashare.py0000664000175000017500000005100413656750227022762 0ustar zuulzuul00000000000000# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
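# An illustrative manila.conf backend stanza for this driver (the section
# name and all values are placeholders; the zfssa_* option names correspond
# to ZFSSA_OPTS defined below, and share_driver is Manila's standard
# driver-selection option):
#
#   [zfssabackend1]
#   share_driver = manila.share.drivers.zfssa.zfssashare.ZFSSAShareDriver
#   zfssa_host = 192.0.2.10
#   zfssa_data_ip = 192.0.2.20
#   zfssa_auth_user = admin
#   zfssa_auth_password = <password>
#   zfssa_pool = pool-0
#   zfssa_project = manila_project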
""" ZFS Storage Appliance Manila Share Driver """ import base64 import math from oslo_config import cfg from oslo_log import log from oslo_utils import units import six from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.zfssa import zfssarest ZFSSA_OPTS = [ cfg.HostAddressOpt('zfssa_host', help='ZFSSA management IP address.'), cfg.HostAddressOpt('zfssa_data_ip', help='IP address for data.'), cfg.StrOpt('zfssa_auth_user', help='ZFSSA management authorized username.'), cfg.StrOpt('zfssa_auth_password', help='ZFSSA management authorized user\'s password.'), cfg.StrOpt('zfssa_pool', help='ZFSSA storage pool name.'), cfg.StrOpt('zfssa_project', help='ZFSSA project name.'), cfg.StrOpt('zfssa_nas_checksum', default='fletcher4', help='Controls checksum used for data blocks.'), cfg.StrOpt('zfssa_nas_compression', default='off', help='Data compression-off, lzjb, gzip-2, gzip, gzip-9.'), cfg.StrOpt('zfssa_nas_logbias', default='latency', help='Controls behavior when servicing synchronous writes.'), cfg.StrOpt('zfssa_nas_mountpoint', default='', help='Location of project in ZFS/SA.'), cfg.StrOpt('zfssa_nas_quota_snap', default='true', help='Controls whether a share quota includes snapshot.'), cfg.StrOpt('zfssa_nas_rstchown', default='true', help='Controls whether file ownership can be changed.'), cfg.StrOpt('zfssa_nas_vscan', default='false', help='Controls whether the share is scanned for viruses.'), cfg.StrOpt('zfssa_rest_timeout', help='REST connection timeout (in seconds).'), cfg.StrOpt('zfssa_manage_policy', default='loose', choices=['loose', 'strict'], help='Driver policy for share manage. A strict policy checks ' 'for a schema named manila_managed, and makes sure its ' 'value is true. A loose policy does not check for the ' 'schema.') ] cfg.CONF.register_opts(ZFSSA_OPTS) LOG = log.getLogger(__name__) def factory_zfssa(): return zfssarest.ZFSSAApi() class ZFSSAShareDriver(driver.ShareDriver): """ZFSSA share driver: Supports NFS and CIFS protocols. Uses ZFSSA RESTful API to create shares and snapshots on backend. API version history: 1.0 - Initial version. 1.0.1 - Add share shrink/extend feature. 1.0.2 - Add share manage/unmanage feature. 
""" VERSION = '1.0.2' PROTOCOL = 'NFS_CIFS' def __init__(self, *args, **kwargs): super(ZFSSAShareDriver, self).__init__(False, *args, **kwargs) self.configuration.append_config_values(ZFSSA_OPTS) self.zfssa = None self._stats = None self.mountpoint = '/export/' lcfg = self.configuration required = [ 'zfssa_host', 'zfssa_data_ip', 'zfssa_auth_user', 'zfssa_auth_password', 'zfssa_pool', 'zfssa_project' ] for prop in required: if not getattr(lcfg, prop, None): exception_msg = _('%s is required in manila.conf') % prop LOG.error(exception_msg) raise exception.InvalidParameterValue(exception_msg) self.default_args = { 'compression': lcfg.zfssa_nas_compression, 'logbias': lcfg.zfssa_nas_logbias, 'checksum': lcfg.zfssa_nas_checksum, 'vscan': lcfg.zfssa_nas_vscan, 'rstchown': lcfg.zfssa_nas_rstchown, } self.share_args = { 'sharedav': 'off', 'shareftp': 'off', 'sharesftp': 'off', 'sharetftp': 'off', 'root_permissions': '777', 'sharenfs': 'sec=sys', 'sharesmb': 'off', 'quota_snap': self.configuration.zfssa_nas_quota_snap, 'reservation_snap': self.configuration.zfssa_nas_quota_snap, 'custom:manila_managed': True, } def do_setup(self, context): """Login, create project, no sharing option enabled.""" lcfg = self.configuration LOG.debug("Connecting to host: %s.", lcfg.zfssa_host) self.zfssa = factory_zfssa() self.zfssa.set_host(lcfg.zfssa_host, timeout=lcfg.zfssa_rest_timeout) creds = '%s:%s' % (lcfg.zfssa_auth_user, lcfg.zfssa_auth_password) auth_str = base64.encodestring(six.b(creds))[:-1] self.zfssa.login(auth_str) if lcfg.zfssa_nas_mountpoint == '': self.mountpoint += lcfg.zfssa_project else: self.mountpoint += lcfg.zfssa_nas_mountpoint arg = { 'name': lcfg.zfssa_project, 'sharesmb': 'off', 'sharenfs': 'off', 'mountpoint': self.mountpoint, } arg.update(self.default_args) self.zfssa.create_project(lcfg.zfssa_pool, lcfg.zfssa_project, arg) self.zfssa.enable_service('nfs') self.zfssa.enable_service('smb') schema = { 'property': 'manila_managed', 'description': 'Managed by Manila', 'type': 'Boolean', } self.zfssa.create_schema(schema) def check_for_setup_error(self): """Check for properly configured pool, project.""" lcfg = self.configuration LOG.debug("Verifying pool %s.", lcfg.zfssa_pool) self.zfssa.verify_pool(lcfg.zfssa_pool) LOG.debug("Verifying project %s.", lcfg.zfssa_project) self.zfssa.verify_project(lcfg.zfssa_pool, lcfg.zfssa_project) def _export_location(self, share): """Export share's location based on protocol used.""" lcfg = self.configuration arg = { 'host': lcfg.zfssa_data_ip, 'mountpoint': self.mountpoint, 'name': share['id'], } location = '' proto = share['share_proto'] if proto == 'NFS': location = ("%(host)s:%(mountpoint)s/%(name)s" % arg) elif proto == 'CIFS': location = ("\\\\%(host)s\\%(name)s" % arg) else: exception_msg = _('Protocol %s is not supported.') % proto LOG.error(exception_msg) raise exception.InvalidParameterValue(exception_msg) LOG.debug("Export location: %s.", location) return location def create_arg(self, size): size = units.Gi * int(size) arg = { 'quota': size, 'reservation': size, } arg.update(self.share_args) return arg def create_share(self, context, share, share_server=None): """Create a share and export it based on protocol used. The created share inherits properties from its project. 
""" lcfg = self.configuration arg = self.create_arg(share['size']) arg.update(self.default_args) arg.update({'name': share['id']}) if share['share_proto'] == 'CIFS': arg.update({'sharesmb': 'on'}) LOG.debug("ZFSSAShareDriver.create_share: id=%(name)s, size=%(quota)s", {'name': arg['name'], 'quota': arg['quota']}) self.zfssa.create_share(lcfg.zfssa_pool, lcfg.zfssa_project, arg) return self._export_location(share) def delete_share(self, context, share, share_server=None): """Delete a share. Shares with existing snapshots can't be deleted. """ LOG.debug("ZFSSAShareDriver.delete_share: id=%s", share['id']) lcfg = self.configuration self.zfssa.delete_share(lcfg.zfssa_pool, lcfg.zfssa_project, share['id']) def create_snapshot(self, context, snapshot, share_server=None): """Creates a snapshot of the snapshot['share_id'].""" LOG.debug("ZFSSAShareDriver.create_snapshot: " "id=%(snap)s share=%(share)s", {'snap': snapshot['id'], 'share': snapshot['share_id']}) lcfg = self.configuration self.zfssa.create_snapshot(lcfg.zfssa_pool, lcfg.zfssa_project, snapshot['share_id'], snapshot['id']) def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Create a share from a snapshot - clone a snapshot.""" lcfg = self.configuration LOG.debug("ZFSSAShareDriver.create_share_from_snapshot: clone=%s", share['id']) LOG.debug("ZFSSAShareDriver.create_share_from_snapshot: snapshot=%s", snapshot['id']) arg = self.create_arg(share['size']) details = { 'share': share['id'], 'project': lcfg.zfssa_project, } arg.update(details) if share['share_proto'] == 'CIFS': arg.update({'sharesmb': 'on'}) self.zfssa.clone_snapshot(lcfg.zfssa_pool, lcfg.zfssa_project, snapshot, share, arg) return self._export_location(share) def delete_snapshot(self, context, snapshot, share_server=None): """Delete a snapshot. Snapshots with existing clones cannot be deleted. """ LOG.debug("ZFSSAShareDriver.delete_snapshot: id=%s", snapshot['id']) lcfg = self.configuration has_clones = self.zfssa.has_clones(lcfg.zfssa_pool, lcfg.zfssa_project, snapshot['share_id'], snapshot['id']) if has_clones: LOG.error("snapshot %s: has clones", snapshot['id']) raise exception.ShareSnapshotIsBusy(snapshot_name=snapshot['id']) self.zfssa.delete_snapshot(lcfg.zfssa_pool, lcfg.zfssa_project, snapshot['share_id'], snapshot['id']) def manage_existing(self, share, driver_options): """Manage an existing ZFSSA share. This feature requires an option 'zfssa_name', which specifies the name of the share as appeared in ZFSSA. The driver automatically retrieves information from the ZFSSA backend and returns the correct share size and export location. 
""" if 'zfssa_name' not in driver_options: msg = _('Name of the share in ZFSSA share has to be ' 'specified in option zfssa_name.') LOG.error(msg) raise exception.ShareBackendException(msg=msg) name = driver_options['zfssa_name'] try: details = self._get_share_details(name) except Exception: LOG.error('Cannot manage share %s', name) raise lcfg = self.configuration input_export_loc = share['export_locations'][0]['path'] proto = share['share_proto'] self._verify_share_to_manage(name, details) # Get and verify share size: size_byte = details['quota'] size_gb = int(math.ceil(size_byte / float(units.Gi))) if size_byte % units.Gi != 0: # Round up the size: new_size_byte = size_gb * units.Gi free_space = self.zfssa.get_project_stats(lcfg.zfssa_pool, lcfg.zfssa_project) diff_space = int(new_size_byte - size_byte) if diff_space > free_space: msg = (_('Quota and reservation of share %(name)s need to be ' 'rounded up to %(size)d. But there is not enough ' 'space in the backend.') % {'name': name, 'size': size_gb}) LOG.error(msg) raise exception.ManageInvalidShare(reason=msg) size_byte = new_size_byte # Get and verify share export location, also update share properties. arg = { 'host': lcfg.zfssa_data_ip, 'mountpoint': input_export_loc, 'name': share['id'], } manage_args = self.default_args.copy() manage_args.update(self.share_args) # The ZFSSA share name has to be updated, as Manila generates a new # share id for each share to be managed. manage_args.update({'name': share['id'], 'quota': size_byte, 'reservation': size_byte}) if proto == 'NFS': export_loc = ("%(host)s:%(mountpoint)s/%(name)s" % arg) manage_args.update({'sharenfs': 'sec=sys', 'sharesmb': 'off'}) elif proto == 'CIFS': export_loc = ("\\\\%(host)s\\%(name)s" % arg) manage_args.update({'sharesmb': 'on', 'sharenfs': 'off'}) else: msg = _('Protocol %s is not supported.') % proto LOG.error(msg) raise exception.ManageInvalidShare(reason=msg) self.zfssa.modify_share(lcfg.zfssa_pool, lcfg.zfssa_project, name, manage_args) return {'size': size_gb, 'export_locations': export_loc} def _verify_share_to_manage(self, name, details): lcfg = self.configuration if lcfg.zfssa_manage_policy == 'loose': return if 'custom:manila_managed' not in details: msg = (_("Unknown if the share: %s to be managed is " "already being managed by Manila. Aborting manage " "share. Please add 'manila_managed' custom schema " "property to the share and set its value to False." "Alternatively, set Manila config property " "'zfssa_manage_policy' to 'loose' to remove this " "restriction.") % name) LOG.error(msg) raise exception.ManageInvalidShare(reason=msg) if details['custom:manila_managed'] is True: msg = (_("Share %s is already being managed by Manila.") % name) LOG.error(msg) raise exception.ManageInvalidShare(reason=msg) def unmanage(self, share): """Removes the specified share from Manila management. This task involves only changing the custom:manila_managed property to False. Current accesses to the share will be removed in ZFSSA, as these accesses are removed in Manila. 
""" name = share['id'] lcfg = self.configuration managed = 'custom:manila_managed' details = self._get_share_details(name) if (managed not in details) or (details[managed] is not True): msg = (_("Share %s is not being managed by the current Manila " "instance.") % name) LOG.error(msg) raise exception.UnmanageInvalidShare(reason=msg) arg = {'custom:manila_managed': False} if share['share_proto'] == 'NFS': arg.update({'sharenfs': 'off'}) elif share['share_proto'] == 'CIFS': arg.update({'sharesmb': 'off'}) else: msg = (_("ZFSSA does not support %s protocol.") % share['share_proto']) LOG.error(msg) raise exception.UnmanageInvalidShare(reason=msg) self.zfssa.modify_share(lcfg.zfssa_pool, lcfg.zfssa_project, name, arg) def _get_share_details(self, name): lcfg = self.configuration details = self.zfssa.get_share(lcfg.zfssa_pool, lcfg.zfssa_project, name) if not details: msg = (_("Share %s doesn't exist in ZFSSA.") % name) LOG.error(msg) raise exception.ShareResourceNotFound(share_id=name) return details def ensure_share(self, context, share, share_server=None): self._get_share_details(share['id']) def shrink_share(self, share, new_size, share_server=None): """Shrink a share to new_size.""" lcfg = self.configuration details = self.zfssa.get_share(lcfg.zfssa_pool, lcfg.zfssa_project, share['id']) used_space = details['space_data'] new_size_byte = int(new_size) * units.Gi if used_space > new_size_byte: LOG.error('%(used).1fGB of share %(id)s is already used. ' 'Cannot shrink to %(newsize)dGB.', {'used': float(used_space) / units.Gi, 'id': share['id'], 'newsize': new_size}) raise exception.ShareShrinkingPossibleDataLoss( share_id=share['id']) arg = self.create_arg(new_size) self.zfssa.modify_share(lcfg.zfssa_pool, lcfg.zfssa_project, share['id'], arg) def extend_share(self, share, new_size, share_server=None): """Extend a share to new_size.""" lcfg = self.configuration free_space = self.zfssa.get_project_stats(lcfg.zfssa_pool, lcfg.zfssa_project) diff_space = int(new_size - share['size']) * units.Gi if diff_space > free_space: msg = (_('There is not enough free space in project %s') % (lcfg.zfssa_project)) LOG.error(msg) raise exception.ShareExtendingError(share_id=share['id'], reason=msg) arg = self.create_arg(new_size) self.zfssa.modify_share(lcfg.zfssa_pool, lcfg.zfssa_project, share['id'], arg) def allow_access(self, context, share, access, share_server=None): """Allows access to an NFS share for the specified IP.""" LOG.debug("ZFSSAShareDriver.allow_access: share=%s", share['id']) lcfg = self.configuration if share['share_proto'] == 'NFS': self.zfssa.allow_access_nfs(lcfg.zfssa_pool, lcfg.zfssa_project, share['id'], access) def deny_access(self, context, share, access, share_server=None): """Deny access to an NFS share for the specified IP.""" LOG.debug("ZFSSAShareDriver.deny_access: share=%s", share['id']) lcfg = self.configuration if share['share_proto'] == 'NFS': self.zfssa.deny_access_nfs(lcfg.zfssa_pool, lcfg.zfssa_project, share['id'], access) elif share['share_proto'] == 'CIFS': return def _update_share_stats(self): """Retrieve stats info from a share.""" backend_name = self.configuration.safe_get('share_backend_name') data = dict( share_backend_name=backend_name or self.__class__.__name__, vendor_name='Oracle', driver_version=self.VERSION, storage_protocol=self.PROTOCOL) lcfg = self.configuration (avail, used) = self.zfssa.get_pool_stats(lcfg.zfssa_pool) if avail: data['free_capacity_gb'] = int(avail) / units.Gi if used: total = int(avail) + int(used) data['total_capacity_gb'] = total / 
units.Gi else: data['total_capacity_gb'] = 0 else: data['free_capacity_gb'] = 0 data['total_capacity_gb'] = 0 super(ZFSSAShareDriver, self)._update_share_stats(data) def get_network_allocations_number(self): """Returns number of network allocations for creating VIFs.""" return 0 manila-10.0.0/manila/share/drivers/helpers.py0000664000175000017500000005663313656750227021142 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import ipaddress import os import re import six from oslo_log import log from manila.common import constants as const from manila import exception from manila.i18n import _ from manila import utils LOG = log.getLogger(__name__) class NASHelperBase(object): """Interface to work with share.""" def __init__(self, execute, ssh_execute, config_object): self.configuration = config_object self._execute = execute self._ssh_exec = ssh_execute def init_helper(self, server): pass def create_exports(self, server, share_name, recreate=False): """Create new exports, delete old ones if exist.""" raise NotImplementedError() def remove_exports(self, server, share_name): """Remove exports.""" raise NotImplementedError() def configure_access(self, server, share_name): """Configure server before allowing access.""" pass def update_access(self, server, share_name, access_rules, add_rules, delete_rules): """Update access rules for given share. This driver has two different behaviors according to parameters: 1. Recovery after error - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' shall be empty. Previously existing access rules are cleared and then added back according to 'access_rules'. 2. Adding/Deleting of several access rules - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' contain rules which should be added/deleted. Rules in 'access_rules' are ignored and only rules from 'add_rules' and 'delete_rules' are applied. :param server: None or Share server's backend details :param share_name: Share's path according to id. :param access_rules: All access rules for given share :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. """ raise NotImplementedError() @staticmethod def _verify_server_has_public_address(server): if 'public_address' in server: pass elif 'public_addresses' in server: if not isinstance(server['public_addresses'], list): raise exception.ManilaException(_("public_addresses must be " "a list")) else: raise exception.ManilaException( _("Can not get public_address(es) for generation of export.")) def _get_export_location_template(self, export_location_or_path): """Returns template of export location. 
Example for NFS: %s:/path/to/share Example for CIFS: \\\\%s\\cifs_share_name """ raise NotImplementedError() def get_exports_for_share(self, server, export_location_or_path): """Returns list of exports based on server info.""" self._verify_server_has_public_address(server) export_location_template = self._get_export_location_template( export_location_or_path) export_locations = [] if 'public_addresses' in server: pairs = list(map(lambda addr: (addr, False), server['public_addresses'])) else: pairs = [(server['public_address'], False)] # NOTE(vponomaryov): # Generic driver case: 'admin_ip' exists only in case of DHSS=True # mode and 'ip' exists in case of DHSS=False mode. # Use one of these for creation of export location for service needs. service_address = server.get("admin_ip", server.get("ip")) if service_address: pairs.append((service_address, True)) for ip, is_admin in pairs: export_locations.append({ "path": export_location_template % ip, "is_admin_only": is_admin, "metadata": { # TODO(vponomaryov): remove this fake metadata when # proper appears. "export_location_metadata_example": "example", }, }) return export_locations def get_share_path_by_export_location(self, server, export_location): """Returns share path by its export location.""" raise NotImplementedError() def disable_access_for_maintenance(self, server, share_name): """Disables access to share to perform maintenance operations.""" def restore_access_after_maintenance(self, server, share_name): """Enables access to share after maintenance operations were done.""" @staticmethod def validate_access_rules(access_rules, allowed_types, allowed_levels): """Validates access rules according to access_type and access_level. :param access_rules: List of access rules to be validated. :param allowed_types: tuple of allowed type values. :param allowed_levels: tuple of allowed level values. """ for access in (access_rules or []): access_type = access['access_type'] access_level = access['access_level'] if access_type not in allowed_types: reason = _("Only %s access type allowed.") % ( ', '.join(tuple(["'%s'" % x for x in allowed_types]))) raise exception.InvalidShareAccess(reason=reason) if access_level not in allowed_levels: raise exception.InvalidShareAccessLevel(level=access_level) def _get_maintenance_file_path(self, share_name): return os.path.join(self.configuration.share_mount_path, "%s.maintenance" % share_name) def nfs_synchronized(f): def wrapped_func(self, *args, **kwargs): key = "nfs-%s" % args[0].get("lock_name", args[0]["instance_id"]) # NOTE(vponomaryov): 'external' lock is required for DHSS=False # mode of LVM and Generic drivers, that may have lots of # driver instances on single host. 
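# Illustrative, standalone sketch of the address normalisation performed by
# NFSHelper._get_parsed_address_or_cidr() further below before the helper
# calls exportfs: /32 (or /128) hosts lose their prefix, a /0 network becomes
# '*', and IPv6 addresses are wrapped in brackets. Sample addresses are
# placeholders.
import ipaddress

def parsed(access_to):
    network = ipaddress.ip_network(str(access_to))
    address = str(network.network_address)
    if network.prefixlen == 0:
        # Linux exports do not understand /0 netmasks.
        return '*'
    if network.version == 4:
        if network.prefixlen == 32:
            return address
        return '%s/%s' % (address, network.prefixlen)
    if network.prefixlen == 128:
        return '[%s]' % address
    return '[%s]/%s' % (address, network.prefixlen)

assert parsed('10.0.0.5') == '10.0.0.5'
assert parsed('10.0.0.0/24') == '10.0.0.0/24'
assert parsed('0.0.0.0/0') == '*'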
@utils.synchronized(key, external=True) def source_func(self, *args, **kwargs): return f(self, *args, **kwargs) return source_func(self, *args, **kwargs) return wrapped_func def escaped_address(address): addr = ipaddress.ip_address(six.text_type(address)) if addr.version == 4: return six.text_type(addr) else: return '[%s]' % six.text_type(addr) class NFSHelper(NASHelperBase): """Interface to work with share.""" def create_exports(self, server, share_name, recreate=False): path = os.path.join(self.configuration.share_mount_path, share_name) server_copy = copy.copy(server) public_addresses = [] if 'public_addresses' in server_copy: for address in server_copy['public_addresses']: public_addresses.append( escaped_address(address)) server_copy['public_addresses'] = public_addresses for t in ['public_address', 'admin_ip', 'ip']: address = server_copy.get(t) if address is not None: server_copy[t] = escaped_address(address) return self.get_exports_for_share(server_copy, path) def init_helper(self, server): try: self._ssh_exec(server, ['sudo', 'exportfs']) except exception.ProcessExecutionError as e: if 'command not found' in e.stderr: raise exception.ManilaException( _('NFS server is not installed on %s') % server['instance_id']) LOG.error(e.stderr) def remove_exports(self, server, share_name): """Remove exports.""" @nfs_synchronized def update_access(self, server, share_name, access_rules, add_rules, delete_rules): """Update access rules for given share. Please refer to base class for a more in-depth description. """ local_path = os.path.join(self.configuration.share_mount_path, share_name) out, err = self._ssh_exec(server, ['sudo', 'exportfs']) # Recovery mode if not (add_rules or delete_rules): self.validate_access_rules( access_rules, ('ip',), (const.ACCESS_LEVEL_RO, const.ACCESS_LEVEL_RW)) hosts = self.get_host_list(out, local_path) for host in hosts: parsed_host = self._get_parsed_address_or_cidr(host) self._ssh_exec(server, ['sudo', 'exportfs', '-u', ':'.join((parsed_host, local_path))]) self._sync_nfs_temp_and_perm_files(server) for access in access_rules: rules_options = '%s,no_subtree_check,no_root_squash' access_to = self._get_parsed_address_or_cidr( access['access_to']) self._ssh_exec( server, ['sudo', 'exportfs', '-o', rules_options % access['access_level'], ':'.join((access_to, local_path))]) self._sync_nfs_temp_and_perm_files(server) # Adding/Deleting specific rules else: self.validate_access_rules( add_rules, ('ip',), (const.ACCESS_LEVEL_RO, const.ACCESS_LEVEL_RW)) for access in delete_rules: try: self.validate_access_rules( [access], ('ip',), (const.ACCESS_LEVEL_RO, const.ACCESS_LEVEL_RW)) except (exception.InvalidShareAccess, exception.InvalidShareAccessLevel): LOG.warning( "Unsupported access level %(level)s or access type " "%(type)s, skipping removal of access rule to " "%(to)s.", {'level': access['access_level'], 'type': access['access_type'], 'to': access['access_to']}) continue access_to = self._get_parsed_address_or_cidr( access['access_to']) self._ssh_exec(server, ['sudo', 'exportfs', '-u', ':'.join((access_to, local_path))]) if delete_rules: self._sync_nfs_temp_and_perm_files(server) for access in add_rules: access_to = self._get_parsed_address_or_cidr( access['access_to']) found_item = re.search( re.escape(local_path) + r'[\s\n]*' + re.escape(access_to), out) if found_item is not None: LOG.warning("Access rule %(type)s:%(to)s already " "exists for share %(name)s", { 'to': access['access_to'], 'type': access['access_type'], 'name': share_name }) else: rules_options = 
'%s,no_subtree_check,no_root_squash' self._ssh_exec( server, ['sudo', 'exportfs', '-o', rules_options % access['access_level'], ':'.join((access_to, local_path))]) if add_rules: self._sync_nfs_temp_and_perm_files(server) @staticmethod def _get_parsed_address_or_cidr(access_to): network = ipaddress.ip_network(six.text_type(access_to)) mask_length = network.prefixlen address = six.text_type(network.network_address) if mask_length == 0: # Special case because Linux exports don't support /0 netmasks return '*' if network.version == 4: if mask_length == 32: return address return '%s/%s' % (address, mask_length) if mask_length == 128: return "[%s]" % address return "[%s]/%s" % (address, mask_length) @staticmethod def get_host_list(output, local_path): entries = [] output = output.replace('\n\t\t', ' ') lines = output.split('\n') for line in lines: items = line.split(' ') if local_path == items[0]: entries.append(items[1]) return entries def _sync_nfs_temp_and_perm_files(self, server): """Sync changes of exports with permanent NFS config file. This is required to ensure, that after share server reboot, exports still exist. """ sync_cmd = [ 'sudo', 'cp', const.NFS_EXPORTS_FILE_TEMP, const.NFS_EXPORTS_FILE ] self._ssh_exec(server, sync_cmd) self._ssh_exec(server, ['sudo', 'exportfs', '-a']) out, _ = self._ssh_exec( server, ['sudo', 'service', 'nfs-kernel-server', 'status'], check_exit_code=False) if "not" in out: self._ssh_exec( server, ['sudo', 'service', 'nfs-kernel-server', 'restart']) def _get_export_location_template(self, export_location_or_path): path = export_location_or_path.split(':')[-1] return '%s:' + path def get_share_path_by_export_location(self, server, export_location): return export_location.split(':')[-1] @nfs_synchronized def disable_access_for_maintenance(self, server, share_name): maintenance_file = self._get_maintenance_file_path(share_name) backup_exports = [ 'cat', const.NFS_EXPORTS_FILE, '|', 'grep', share_name, '|', 'sudo', 'tee', maintenance_file ] self._ssh_exec(server, backup_exports) local_path = os.path.join(self.configuration.share_mount_path, share_name) out, err = self._ssh_exec(server, ['sudo', 'exportfs']) hosts = self.get_host_list(out, local_path) for host in hosts: self._ssh_exec(server, ['sudo', 'exportfs', '-u', ':'.join((host, local_path))]) self._sync_nfs_temp_and_perm_files(server) @nfs_synchronized def restore_access_after_maintenance(self, server, share_name): maintenance_file = self._get_maintenance_file_path(share_name) restore_exports = [ 'cat', maintenance_file, '|', 'sudo', 'tee', '-a', const.NFS_EXPORTS_FILE, '&&', 'sudo', 'exportfs', '-r', '&&', 'sudo', 'rm', '-f', maintenance_file ] self._ssh_exec(server, restore_exports) class CIFSHelperBase(NASHelperBase): @staticmethod def _get_share_group_name_from_export_location(export_location): if '/' in export_location and '\\' in export_location: pass elif export_location.startswith('\\\\'): return export_location.split('\\')[-1] elif export_location.startswith('//'): return export_location.split('/')[-1] msg = _("Got incorrect CIFS export location '%s'.") % export_location raise exception.InvalidShare(reason=msg) def _get_export_location_template(self, export_location_or_path): group_name = self._get_share_group_name_from_export_location( export_location_or_path) return ('\\\\%s' + ('\\%s' % group_name)) class CIFSHelperIPAccess(CIFSHelperBase): """Manage shares in samba server by net conf tool. Class provides functionality to operate with CIFS shares. 
Samba server should be configured to use registry as configuration backend to allow dynamically share managements. This class allows to define access to shares by IPs with RW access level. """ def __init__(self, *args): super(CIFSHelperIPAccess, self).__init__(*args) self.parameters = { 'browseable': 'yes', 'create mask': '0755', 'hosts deny': '0.0.0.0/0', # deny all by default 'hosts allow': '127.0.0.1', 'read only': 'no', } def init_helper(self, server): # This is smoke check that we have required dependency self._ssh_exec(server, ['sudo', 'net', 'conf', 'list']) def create_exports(self, server, share_name, recreate=False): """Create share at samba server.""" share_path = os.path.join(self.configuration.share_mount_path, share_name) create_cmd = [ 'sudo', 'net', 'conf', 'addshare', share_name, share_path, 'writeable=y', 'guest_ok=y', ] try: self._ssh_exec( server, ['sudo', 'net', 'conf', 'showshare', share_name, ]) except exception.ProcessExecutionError: # Share does not exist, create it try: self._ssh_exec(server, create_cmd) except Exception: msg = _("Could not create CIFS export %s.") % share_name LOG.exception(msg) raise exception.ManilaException(msg) else: # Share exists if recreate: self._ssh_exec( server, ['sudo', 'net', 'conf', 'delshare', share_name, ]) try: self._ssh_exec(server, create_cmd) except Exception: msg = _("Could not create CIFS export %s.") % share_name LOG.exception(msg) raise exception.ManilaException(msg) else: msg = _('Share section %s already defined.') % share_name raise exception.ShareBackendException(msg=msg) for param, value in self.parameters.items(): self._ssh_exec(server, ['sudo', 'net', 'conf', 'setparm', share_name, param, value]) return self.get_exports_for_share(server, '\\\\%s\\' + share_name) def remove_exports(self, server, share_name): """Remove share definition from samba server.""" try: self._ssh_exec( server, ['sudo', 'net', 'conf', 'delshare', share_name]) except exception.ProcessExecutionError as e: LOG.warning("Caught error trying delete share: %(error)s, try" "ing delete it forcibly.", {'error': e.stderr}) self._ssh_exec(server, ['sudo', 'smbcontrol', 'all', 'close-share', share_name]) def update_access(self, server, share_name, access_rules, add_rules, delete_rules): """Update access rules for given share. Please refer to base class for a more in-depth description. For this specific implementation, add_rules and delete_rules parameters are not used. 
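# Illustrative, simplified sketch (not driver code) of what the IP-based CIFS
# helper's update_access() below boils down to: every RW ip rule ends up in
# the samba 'hosts allow' parameter for the share, and an empty rule set
# leaves a single space so the parameter is cleared. The IPs are placeholders.
def hosts_allow_value(access_rules):
    hosts = [r['access_to'] for r in access_rules
             if r['access_type'] == 'ip' and r['access_level'] == 'rw']
    return ' '.join(hosts) or ' '

rules = [{'access_type': 'ip', 'access_level': 'rw', 'access_to': '10.0.0.2'},
         {'access_type': 'ip', 'access_level': 'rw', 'access_to': '10.0.0.3'}]
assert hosts_allow_value(rules) == '10.0.0.2 10.0.0.3'
assert hosts_allow_value([]) == ' '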
""" hosts = [] self.validate_access_rules( access_rules, ('ip',), (const.ACCESS_LEVEL_RW,)) for access in access_rules: hosts.append(access['access_to']) self._set_allow_hosts(server, hosts, share_name) def _get_allow_hosts(self, server, share_name): (out, _) = self._ssh_exec(server, ['sudo', 'net', 'conf', 'getparm', share_name, 'hosts allow']) return out.split() def _set_allow_hosts(self, server, hosts, share_name): value = ' '.join(hosts) or ' ' self._ssh_exec(server, ['sudo', 'net', 'conf', 'setparm', share_name, 'hosts allow', value]) def get_share_path_by_export_location(self, server, export_location): # Get name of group that contains share data on CIFS server group_name = self._get_share_group_name_from_export_location( export_location) # Get parameter 'path' from group that belongs to current share (out, __) = self._ssh_exec( server, ['sudo', 'net', 'conf', 'getparm', group_name, 'path']) # Remove special symbols from response and return path return out.strip() def disable_access_for_maintenance(self, server, share_name): maintenance_file = self._get_maintenance_file_path(share_name) allowed_hosts = " ".join(self._get_allow_hosts(server, share_name)) backup_exports = [ 'echo', "'%s'" % allowed_hosts, '|', 'sudo', 'tee', maintenance_file ] self._ssh_exec(server, backup_exports) self._set_allow_hosts(server, [], share_name) def restore_access_after_maintenance(self, server, share_name): maintenance_file = self._get_maintenance_file_path(share_name) (exports, __) = self._ssh_exec(server, ['cat', maintenance_file]) self._set_allow_hosts(server, exports.split(), share_name) self._ssh_exec(server, ['sudo', 'rm', '-f', maintenance_file]) class CIFSHelperUserAccess(CIFSHelperIPAccess): """Manage shares in samba server by net conf tool. Class provides functionality to operate with CIFS shares. Samba server should be configured to use registry as configuration backend to allow dynamically share managements. This class allows to define access to shares by usernames with either RW or RO access levels. """ def __init__(self, *args): super(CIFSHelperUserAccess, self).__init__(*args) self.parameters = { 'browseable': 'yes', 'create mask': '0755', 'hosts allow': '0.0.0.0/0', 'read only': 'no', } def update_access(self, server, share_name, access_rules, add_rules, delete_rules): """Update access rules for given share. Please refer to base class for a more in-depth description. For this specific implementation, add_rules and delete_rules parameters are not used. 
""" all_users_rw = [] all_users_ro = [] self.validate_access_rules( access_rules, ('user',), (const.ACCESS_LEVEL_RO, const.ACCESS_LEVEL_RW)) for access in access_rules: if access['access_level'] == const.ACCESS_LEVEL_RW: all_users_rw.append(access['access_to']) else: all_users_ro.append(access['access_to']) self._set_valid_users( server, all_users_rw, share_name, const.ACCESS_LEVEL_RW) self._set_valid_users( server, all_users_ro, share_name, const.ACCESS_LEVEL_RO) def _get_conf_param(self, access_level): if access_level == const.ACCESS_LEVEL_RW: return 'valid users' else: return 'read list' def _set_valid_users(self, server, users, share_name, access_level): value = ' '.join(users) param = self._get_conf_param(access_level) self._ssh_exec(server, ['sudo', 'net', 'conf', 'setparm', share_name, param, value]) manila-10.0.0/manila/share/drivers/tegile/0000775000175000017500000000000013656750362020362 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/tegile/__init__.py0000664000175000017500000000000013656750227022461 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/tegile/tegile.py0000664000175000017500000004631013656750227022211 0ustar zuulzuul00000000000000# Copyright (c) 2016 by Tegile Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Share driver for Tegile storage. """ import json import requests import six from oslo_config import cfg from oslo_log import log from manila import exception from manila.i18n import _ from manila.share import driver from manila.share import utils as share_utils from manila import utils tegile_opts = [ cfg.HostAddressOpt('tegile_nas_server', help='Tegile NAS server hostname or IP address.'), cfg.StrOpt('tegile_nas_login', help='User name for the Tegile NAS server.'), cfg.StrOpt('tegile_nas_password', help='Password for the Tegile NAS server.'), cfg.StrOpt('tegile_default_project', help='Create shares in this project')] CONF = cfg.CONF CONF.register_opts(tegile_opts) LOG = log.getLogger(__name__) DEFAULT_API_SERVICE = 'openstack' TEGILE_API_PATH = 'zebi/api' TEGILE_LOCAL_CONTAINER_NAME = 'Local' TEGILE_SNAPSHOT_PREFIX = 'Manual-S-' VENDOR = 'Tegile Systems Inc.' DEFAULT_BACKEND_NAME = 'Tegile' VERSION = '1.0.0' DEBUG_LOGGING = False # For debugging purposes def debugger(func): """Returns a wrapper that wraps func. The wrapper will log the entry and exit points of the function. 
""" def wrapper(*args, **kwds): if DEBUG_LOGGING: LOG.debug('Entering %(classname)s.%(funcname)s', { 'classname': args[0].__class__.__name__, 'funcname': func.__name__, }) LOG.debug('Arguments: %(args)s, %(kwds)s', { 'args': args[1:], 'kwds': kwds, }) f_result = func(*args, **kwds) if DEBUG_LOGGING: LOG.debug('Exiting %(classname)s.%(funcname)s', { 'classname': args[0].__class__.__name__, 'funcname': func.__name__, }) LOG.debug('Results: %(result)s', {'result': f_result}) return f_result return wrapper class TegileAPIExecutor(object): def __init__(self, classname, hostname, username, password): self._classname = classname self._hostname = hostname self._username = username self._password = password def __call__(self, *args, **kwargs): return self._send_api_request(*args, **kwargs) @debugger @utils.retry(exception=(requests.ConnectionError, requests.Timeout), interval=30, retries=3, backoff_rate=1) def _send_api_request(self, method, params=None, request_type='post', api_service=DEFAULT_API_SERVICE, fine_logging=DEBUG_LOGGING): if params is not None: params = json.dumps(params) url = 'https://%s/%s/%s/%s' % (self._hostname, TEGILE_API_PATH, api_service, method) if fine_logging: LOG.debug('TegileAPIExecutor(%(classname)s) method: %(method)s, ' 'url: %(url)s', { 'classname': self._classname, 'method': method, 'url': url, }) if request_type == 'post': if fine_logging: LOG.debug('TegileAPIExecutor(%(classname)s) ' 'method: %(method)s, payload: %(payload)s', { 'classname': self._classname, 'method': method, 'payload': params, }) req = requests.post(url, data=params, auth=(self._username, self._password), verify=False) else: req = requests.get(url, auth=(self._username, self._password), verify=False) if fine_logging: LOG.debug('TegileAPIExecutor(%(classname)s) method: %(method)s, ' 'return code: %(retcode)s', { 'classname': self._classname, 'method': method, 'retcode': req, }) try: response = req.json() if fine_logging: LOG.debug('TegileAPIExecutor(%(classname)s) ' 'method: %(method)s, response: %(response)s', { 'classname': self._classname, 'method': method, 'response': response, }) except ValueError: # Some APIs don't return output and that's fine response = '' req.close() if req.status_code != 200: raise exception.TegileAPIException(response=req.text) return response class TegileShareDriver(driver.ShareDriver): """Tegile NAS driver. Allows for NFS and CIFS NAS storage usage.""" def __init__(self, *args, **kwargs): super(TegileShareDriver, self).__init__(False, *args, **kwargs) self.configuration.append_config_values(tegile_opts) self._default_project = (self.configuration.safe_get( "tegile_default_project") or 'openstack') self._backend_name = (self.configuration.safe_get('share_backend_name') or CONF.share_backend_name or DEFAULT_BACKEND_NAME) self._hostname = self.configuration.safe_get('tegile_nas_server') username = self.configuration.safe_get('tegile_nas_login') password = self.configuration.safe_get('tegile_nas_password') self._api = TegileAPIExecutor(self.__class__.__name__, self._hostname, username, password) @debugger def create_share(self, context, share, share_server=None): """Is called to create share.""" share_name = share['name'] share_proto = share['share_proto'] pool_name = share_utils.extract_host(share['host'], level='pool') params = (pool_name, self._default_project, share_name, share_proto) # Share name coming from the backend is the most reliable. Sometimes # a few options in Tegile array could cause sharename to be different # from the one passed to it. Eg. 
'projectname-sharename' instead # of 'sharename' if inherited share properties are selected. ip, real_share_name = self._api('createShare', params).split() LOG.info("Created share %(sharename)s, share id %(shid)s.", {'sharename': share_name, 'shid': share['id']}) return self._get_location_path(real_share_name, share_proto, ip) @debugger def extend_share(self, share, new_size, share_server=None): """Is called to extend share. There is no resize for Tegile shares. We just adjust the quotas. The API is still called 'resizeShare'. """ self._adjust_size(share, new_size, share_server) @debugger def shrink_share(self, shrink_share, shrink_size, share_server=None): """Uses resize_share to shrink a share. There is no shrink for Tegile shares. We just adjust the quotas. The API is still called 'resizeShare'. """ self._adjust_size(shrink_share, shrink_size, share_server) @debugger def _adjust_size(self, share, new_size, share_server=None): pool, project, share_name = self._get_pool_project_share_name(share) params = ('%s/%s/%s/%s' % (pool, TEGILE_LOCAL_CONTAINER_NAME, project, share_name), six.text_type(new_size), 'GB') self._api('resizeShare', params) @debugger def delete_share(self, context, share, share_server=None): """Is called to remove share.""" pool, project, share_name = self._get_pool_project_share_name(share) params = ('%s/%s/%s/%s' % (pool, TEGILE_LOCAL_CONTAINER_NAME, project, share_name), True, False) self._api('deleteShare', params) @debugger def create_snapshot(self, context, snapshot, share_server=None): """Is called to create snapshot.""" snap_name = snapshot['name'] pool, project, share_name = self._get_pool_project_share_name( snapshot['share']) share = { 'poolName': '%s' % pool, 'projectName': '%s' % project, 'name': share_name, 'availableSize': 0, 'totalSize': 0, 'datasetPath': '%s/%s/%s' % (pool, TEGILE_LOCAL_CONTAINER_NAME, project), 'mountpoint': share_name, 'local': 'true', } params = (share, snap_name, False) LOG.info('Creating snapshot for share_name=%(shr)s' ' snap_name=%(name)s', {'shr': share_name, 'name': snap_name}) self._api('createShareSnapshot', params) @debugger def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Create a share from a snapshot - clone a snapshot.""" pool, project, share_name = self._get_pool_project_share_name(share) params = ('%s/%s/%s/%s@%s%s' % (pool, TEGILE_LOCAL_CONTAINER_NAME, project, snapshot['share_name'], TEGILE_SNAPSHOT_PREFIX, snapshot['name'], ), share_name, True, ) ip, real_share_name = self._api('cloneShareSnapshot', params).split() share_proto = share['share_proto'] return self._get_location_path(real_share_name, share_proto, ip) @debugger def delete_snapshot(self, context, snapshot, share_server=None): """Is called to remove snapshot.""" pool, project, share_name = self._get_pool_project_share_name( snapshot['share']) params = ('%s/%s/%s/%s@%s%s' % (pool, TEGILE_LOCAL_CONTAINER_NAME, project, share_name, TEGILE_SNAPSHOT_PREFIX, snapshot['name']), False) self._api('deleteShareSnapshot', params) @debugger def ensure_share(self, context, share, share_server=None): """Invoked to sure that share is exported.""" # Fetching share name from server, because some configuration # options can cause sharename different from the OpenStack share name pool, project, share_name = self._get_pool_project_share_name(share) params = [ '%s/%s/%s/%s' % (pool, TEGILE_LOCAL_CONTAINER_NAME, project, share_name), ] ip, real_share_name = self._api('getShareIPAndMountPoint', params).split() share_proto = 
share['share_proto'] location = self._get_location_path(real_share_name, share_proto, ip) return [location] @debugger def _allow_access(self, context, share, access, share_server=None): """Allow access to the share.""" share_proto = share['share_proto'] access_type = access['access_type'] access_level = access['access_level'] access_to = access['access_to'] self._check_share_access(share_proto, access_type) pool, project, share_name = self._get_pool_project_share_name(share) params = ('%s/%s/%s/%s' % (pool, TEGILE_LOCAL_CONTAINER_NAME, project, share_name), share_proto, access_type, access_to, access_level) self._api('shareAllowAccess', params) @debugger def _deny_access(self, context, share, access, share_server=None): """Deny access to the share.""" share_proto = share['share_proto'] access_type = access['access_type'] access_level = access['access_level'] access_to = access['access_to'] self._check_share_access(share_proto, access_type) pool, project, share_name = self._get_pool_project_share_name(share) params = ('%s/%s/%s/%s' % (pool, TEGILE_LOCAL_CONTAINER_NAME, project, share_name), share_proto, access_type, access_to, access_level) self._api('shareDenyAccess', params) def _check_share_access(self, share_proto, access_type): if share_proto == 'CIFS' and access_type != 'user': reason = ('Only USER access type is allowed for ' 'CIFS shares.') LOG.warning(reason) raise exception.InvalidShareAccess(reason=reason) elif share_proto == 'NFS' and access_type not in ('ip', 'user'): reason = ('Only IP or USER access types are allowed for ' 'NFS shares.') LOG.warning(reason) raise exception.InvalidShareAccess(reason=reason) elif share_proto not in ('NFS', 'CIFS'): reason = ('Unsupported protocol \"%s\" specified for ' 'access rule.') % share_proto raise exception.InvalidShareAccess(reason=reason) @debugger def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): if not (add_rules or delete_rules): # Recovery mode pool, project, share_name = ( self._get_pool_project_share_name(share)) share_proto = share['share_proto'] params = ('%s/%s/%s/%s' % (pool, TEGILE_LOCAL_CONTAINER_NAME, project, share_name), share_proto) # Clears all current ACLs # Remove ip and user ACLs if share_proto is NFS # Remove user ACLs if share_proto is CIFS self._api('clearAccessRules', params) # Looping through all rules. # Will have one API call per rule. 
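# Illustrative, standalone sketch of the update_access() contract used by the
# drivers in this tree, including the Tegile implementation here: empty
# add_rules/delete_rules means "recovery" mode and the full access_rules list
# is re-applied, otherwise only the deltas are processed. The apply/remove
# callables are stand-ins for the backend calls.
def apply_access_changes(access_rules, add_rules, delete_rules,
                         apply_rule, remove_rule):
    if not (add_rules or delete_rules):
        # Recovery mode: re-sync every rule from scratch.
        for rule in access_rules:
            apply_rule(rule)
    else:
        for rule in delete_rules:
            remove_rule(rule)
        for rule in add_rules:
            apply_rule(rule)

applied, removed = [], []
apply_access_changes(['r1', 'r2'], [], [], applied.append, removed.append)
assert applied == ['r1', 'r2'] and removed == []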
for access in access_rules: self._allow_access(context, share, access, share_server) else: # Adding/Deleting specific rules for access in delete_rules: self._deny_access(context, share, access, share_server) for access in add_rules: self._allow_access(context, share, access, share_server) @debugger def _update_share_stats(self, **kwargs): """Retrieve stats info.""" try: data = self._api(method='getArrayStats', request_type='get', fine_logging=False) # fixing values coming back here as String to float for pool in data.get('pools', []): pool['total_capacity_gb'] = float( pool.get('total_capacity_gb', 0)) pool['free_capacity_gb'] = float( pool.get('free_capacity_gb', 0)) pool['allocated_capacity_gb'] = float( pool.get('allocated_capacity_gb', 0)) pool['qos'] = pool.pop('QoS_support', False) pool['reserved_percentage'] = ( self.configuration.reserved_share_percentage) pool['dedupe'] = True pool['compression'] = True pool['thin_provisioning'] = True pool['max_over_subscription_ratio'] = ( self.configuration.max_over_subscription_ratio) data['share_backend_name'] = self._backend_name data['vendor_name'] = VENDOR data['driver_version'] = VERSION data['storage_protocol'] = 'NFS_CIFS' data['snapshot_support'] = True data['create_share_from_snapshot_support'] = True data['qos'] = False super(TegileShareDriver, self)._update_share_stats(data) except Exception: msg = _('Unexpected error while trying to get the ' 'usage stats from array.') LOG.exception(msg) raise @debugger def get_pool(self, share): """Returns pool name where share resides. :param share: The share hosted by the driver. :return: Name of the pool where given share is hosted. """ pool = share_utils.extract_host(share['host'], level='pool') return pool @debugger def get_network_allocations_number(self): """Get number of network interfaces to be created.""" return 0 @debugger def _get_location_path(self, share_name, share_proto, ip=None): if ip is None: ip = self._hostname if share_proto == 'NFS': location = '%s:%s' % (ip, share_name) elif share_proto == 'CIFS': location = r'\\%s\%s' % (ip, share_name) else: message = _('Invalid NAS protocol supplied: %s.') % share_proto raise exception.InvalidInput(message) export_location = { 'path': location, 'is_admin_only': False, 'metadata': { 'preferred': True, }, } return export_location @debugger def _get_pool_project_share_name(self, share): pool = share_utils.extract_host(share['host'], level='pool') project = self._default_project share_name = share['name'] return pool, project, share_name manila-10.0.0/manila/share/drivers/zfsonlinux/0000775000175000017500000000000013656750362021330 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/zfsonlinux/utils.py0000664000175000017500000002734313656750227023053 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
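# Illustrative, standalone sketch of the parse_zfs_answer() helper defined
# below in this module: tabular `zfs`/`zpool` output is turned into a list of
# dicts keyed by the header row. The sample command output is made up.
def parse_zfs_answer(string):
    lines = string.split('\n')
    if len(lines) < 2:
        return []
    keys = [k for k in lines[0].split(' ') if k]
    data = []
    for line in lines[1:]:
        values = [v for v in line.split(' ') if v]
        if values:
            data.append(dict(zip(keys, values)))
    return data

sample = ("NAME            USED  AVAIL\n"
          "pool1/share_a   1.2G  98.8G\n"
          "pool1/share_b   500M  98.8G\n")
assert parse_zfs_answer(sample)[0]['NAME'] == 'pool1/share_a'
assert parse_zfs_answer(sample)[1]['AVAIL'] == '98.8G'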
""" Module for storing ZFSonLinux driver utility stuff such as: - Common ZFS code - Share helpers """ # TODO(vponomaryov): add support of SaMBa import abc from oslo_log import log import six from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.ganesha import utils as ganesha_utils from manila import utils LOG = log.getLogger(__name__) def zfs_dataset_synchronized(f): def wrapped_func(self, *args, **kwargs): key = "zfs-dataset-%s" % args[0] @utils.synchronized(key, external=True) def source_func(self, *args, **kwargs): return f(self, *args, **kwargs) return source_func(self, *args, **kwargs) return wrapped_func def get_remote_shell_executor( ip, port, conn_timeout, login=None, password=None, privatekey=None, max_size=10): return ganesha_utils.SSHExecutor( ip=ip, port=port, conn_timeout=conn_timeout, login=login, password=password, privatekey=privatekey, max_size=max_size, ) class ExecuteMixin(driver.ExecuteMixin): def init_execute_mixin(self, *args, **kwargs): """Init method for mixin called in the end of driver's __init__().""" super(ExecuteMixin, self).init_execute_mixin(*args, **kwargs) if self.configuration.zfs_use_ssh: self.ssh_executor = get_remote_shell_executor( ip=self.configuration.zfs_service_ip, port=22, conn_timeout=self.configuration.ssh_conn_timeout, login=self.configuration.zfs_ssh_username, password=self.configuration.zfs_ssh_user_password, privatekey=self.configuration.zfs_ssh_private_key_path, max_size=10, ) else: self.ssh_executor = None def execute(self, *cmd, **kwargs): """Common interface for running shell commands.""" if kwargs.get('executor'): executor = kwargs.get('executor') elif self.ssh_executor: executor = self.ssh_executor else: executor = self._execute kwargs.pop('executor', None) if cmd[0] == 'sudo': kwargs['run_as_root'] = True cmd = cmd[1:] return executor(*cmd, **kwargs) @utils.retry(exception.ProcessExecutionError, interval=5, retries=36, backoff_rate=1) def execute_with_retry(self, *cmd, **kwargs): """Retry wrapper over common shell interface.""" try: return self.execute(*cmd, **kwargs) except exception.ProcessExecutionError as e: LOG.warning("Failed to run command, got error: %s", e) raise def _get_option(self, resource_name, option_name, pool_level=False, **kwargs): """Returns value of requested zpool or zfs dataset option.""" app = 'zpool' if pool_level else 'zfs' out, err = self.execute( 'sudo', app, 'get', option_name, resource_name, **kwargs) data = self.parse_zfs_answer(out) option = data[0]['VALUE'] msg_payload = {'option': option_name, 'value': option} LOG.debug("ZFS option %(option)s's value is %(value)s.", msg_payload) return option def parse_zfs_answer(self, string): """Returns list of dicts with data returned by ZFS shell commands.""" lines = string.split('\n') if len(lines) < 2: return [] keys = list(filter(None, lines[0].split(' '))) data = [] for line in lines[1:]: values = list(filter(None, line.split(' '))) if not values: continue data.append(dict(zip(keys, values))) return data def get_zpool_option(self, zpool_name, option_name, **kwargs): """Returns value of requested zpool option.""" return self._get_option(zpool_name, option_name, True, **kwargs) def get_zfs_option(self, dataset_name, option_name, **kwargs): """Returns value of requested zfs dataset option.""" return self._get_option(dataset_name, option_name, False, **kwargs) def zfs(self, *cmd, **kwargs): """ZFS shell commands executor.""" return self.execute('sudo', 'zfs', *cmd, 
**kwargs) def zfs_with_retry(self, *cmd, **kwargs): """ZFS shell commands executor.""" return self.execute_with_retry('sudo', 'zfs', *cmd, **kwargs) @six.add_metaclass(abc.ABCMeta) class NASHelperBase(object): """Base class for share helpers of 'ZFS on Linux' driver.""" def __init__(self, configuration): """Init share helper. :param configuration: share driver 'configuration' instance :return: share helper instance. """ self.configuration = configuration self.init_execute_mixin() # pylint: disable=no-member self.verify_setup() @abc.abstractmethod def verify_setup(self): """Performs checks for required stuff.""" @abc.abstractmethod def create_exports(self, dataset_name, executor): """Creates share exports.""" @abc.abstractmethod def get_exports(self, dataset_name, service, executor): """Gets/reads share exports.""" @abc.abstractmethod def remove_exports(self, dataset_name, executor): """Removes share exports.""" @abc.abstractmethod def update_access(self, dataset_name, access_rules, add_rules, delete_rules, executor): """Update access rules for specified ZFS dataset.""" class NFSviaZFSHelper(ExecuteMixin, NASHelperBase): """Helper class for handling ZFS datasets as NFS shares. Kernel and Fuse versions of ZFS have different syntax for setting up access rules, and this Helper designed to satisfy both making autodetection. """ @property def is_kernel_version(self): """Says whether Kernel version of ZFS is used or not.""" if not hasattr(self, '_is_kernel_version'): try: self.execute('modinfo', 'zfs') self._is_kernel_version = True except exception.ProcessExecutionError as e: LOG.info( "Looks like ZFS kernel module is absent. " "Assuming FUSE version is installed. Error: %s", e) self._is_kernel_version = False return self._is_kernel_version def verify_setup(self): """Performs checks for required stuff.""" out, err = self.execute('which', 'exportfs') if not out: raise exception.ZFSonLinuxException( msg=_("Utility 'exportfs' is not installed.")) try: self.execute('sudo', 'exportfs') except exception.ProcessExecutionError: LOG.exception("Call of 'exportfs' utility returned error.") raise # Init that class instance attribute on start of manila-share service self.is_kernel_version def create_exports(self, dataset_name, executor=None): """Creates NFS share exports for given ZFS dataset.""" return self.get_exports(dataset_name, executor=executor) def get_exports(self, dataset_name, executor=None): """Gets/reads NFS share export for given ZFS dataset.""" mountpoint = self.get_zfs_option( dataset_name, 'mountpoint', executor=executor) return [ { "path": "%(ip)s:%(mp)s" % {"ip": ip, "mp": mountpoint}, "metadata": { }, "is_admin_only": is_admin_only, } for ip, is_admin_only in ( (self.configuration.zfs_share_export_ip, False), (self.configuration.zfs_service_ip, True)) ] @zfs_dataset_synchronized def remove_exports(self, dataset_name, executor=None): """Removes NFS share exports for given ZFS dataset.""" sharenfs = self.get_zfs_option( dataset_name, 'sharenfs', executor=executor) if sharenfs == 'off': return self.zfs("set", "sharenfs=off", dataset_name, executor=executor) def _get_parsed_access_to(self, access_to): netmask = utils.cidr_to_netmask(access_to) if netmask == '255.255.255.255': return access_to.split('/')[0] return access_to.split('/')[0] + '/' + netmask @zfs_dataset_synchronized def update_access(self, dataset_name, access_rules, add_rules, delete_rules, make_all_ro=False, executor=None): """Update access rules for given ZFS dataset exported as NFS share.""" rw_rules = [] ro_rules = [] for 
rule in access_rules: if rule['access_type'].lower() != 'ip': msg = _("Only IP access type allowed for NFS protocol.") raise exception.InvalidShareAccess(reason=msg) if (rule['access_level'] == constants.ACCESS_LEVEL_RW and not make_all_ro): rw_rules.append(self._get_parsed_access_to(rule['access_to'])) elif (rule['access_level'] in (constants.ACCESS_LEVEL_RW, constants.ACCESS_LEVEL_RO)): ro_rules.append(self._get_parsed_access_to(rule['access_to'])) else: msg = _("Unsupported access level provided - " "%s.") % rule['access_level'] raise exception.InvalidShareAccess(reason=msg) rules = [] if self.is_kernel_version: if rw_rules: rules.append( "rw=%s,no_root_squash" % ":".join(rw_rules)) if ro_rules: rules.append("ro=%s,no_root_squash" % ":".join(ro_rules)) rules_str = "sharenfs=" + (','.join(rules) or 'off') else: for rule in rw_rules: rules.append("%s:rw,no_root_squash" % rule) for rule in ro_rules: rules.append("%s:ro,no_root_squash" % rule) rules_str = "sharenfs=" + (' '.join(rules) or 'off') out, err = self.zfs( 'list', '-r', dataset_name.split('/')[0], executor=executor) data = self.parse_zfs_answer(out) for datum in data: if datum['NAME'] == dataset_name: self.zfs("set", rules_str, dataset_name) break else: LOG.warning( "Dataset with '%(name)s' NAME is absent on backend. " "Access rules were not applied.", {'name': dataset_name}) # NOTE(vponomaryov): Setting of ZFS share options does not remove rules # that were added and then removed. So, remove them explicitly. if delete_rules and access_rules: mountpoint = self.get_zfs_option(dataset_name, 'mountpoint') for rule in delete_rules: if rule['access_type'].lower() != 'ip': continue access_to = self._get_parsed_access_to(rule['access_to']) export_location = access_to + ':' + mountpoint self.execute( 'sudo', 'exportfs', '-u', export_location, executor=executor, ) manila-10.0.0/manila/share/drivers/zfsonlinux/__init__.py0000664000175000017500000000000013656750227023427 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/zfsonlinux/driver.py0000664000175000017500000020524013656750227023200 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Module with ZFSonLinux share driver that utilizes ZFS filesystem resources and exports them as shares. """ import math import os import time from oslo_config import cfg from oslo_log import log from oslo_utils import importutils from oslo_utils import strutils from oslo_utils import timeutils from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import configuration from manila.share import driver from manila.share.drivers.zfsonlinux import utils as zfs_utils from manila.share.manager import share_manager_opts # noqa from manila.share import share_types from manila.share import utils as share_utils from manila import utils zfsonlinux_opts = [ cfg.HostAddressOpt( "zfs_share_export_ip", required=True, help="IP to be added to user-facing export location. 
Required."), cfg.HostAddressOpt( "zfs_service_ip", required=True, help="IP to be added to admin-facing export location. Required."), cfg.ListOpt( "zfs_zpool_list", required=True, help="Specify list of zpools that are allowed to be used by backend. " "Can contain nested datasets. Examples: " "Without nested dataset: 'zpool_name'. " "With nested dataset: 'zpool_name/nested_dataset_name'. " "Required."), cfg.ListOpt( "zfs_dataset_creation_options", help="Define here list of options that should be applied " "for each dataset creation if needed. Example: " "compression=gzip,dedup=off. " "Note that, for secondary replicas option 'readonly' will be set " "to 'on' and for active replicas to 'off' in any way. " "Also, 'quota' will be equal to share size. Optional."), cfg.StrOpt( "zfs_dataset_name_prefix", default='manila_share_', help="Prefix to be used in each dataset name. Optional."), cfg.StrOpt( "zfs_dataset_snapshot_name_prefix", default='manila_share_snapshot_', help="Prefix to be used in each dataset snapshot name. Optional."), cfg.BoolOpt( "zfs_use_ssh", default=False, help="Remote ZFS storage hostname that should be used for SSH'ing. " "Optional."), cfg.StrOpt( "zfs_ssh_username", help="SSH user that will be used in 2 cases: " "1) By manila-share service in case it is located on different " "host than its ZFS storage. " "2) By manila-share services with other ZFS backends that " "perform replication. " "It is expected that SSH'ing will be key-based, passwordless. " "This user should be passwordless sudoer. Optional."), cfg.StrOpt( "zfs_ssh_user_password", secret=True, help="Password for user that is used for SSH'ing ZFS storage host. " "Not used for replication operations. They require " "passwordless SSH access. Optional."), cfg.StrOpt( "zfs_ssh_private_key_path", help="Path to SSH private key that should be used for SSH'ing ZFS " "storage host. Not used for replication operations. Optional."), cfg.ListOpt( "zfs_share_helpers", required=True, default=[ "NFS=manila.share.drivers.zfsonlinux.utils.NFSviaZFSHelper", ], help="Specify list of share export helpers for ZFS storage. " "It should look like following: " "'FOO_protocol=foo.FooClass,BAR_protocol=bar.BarClass'. " "Required."), cfg.StrOpt( "zfs_replica_snapshot_prefix", required=True, default="tmp_snapshot_for_replication_", help="Set snapshot prefix for usage in ZFS replication. Required."), cfg.StrOpt( "zfs_migration_snapshot_prefix", required=True, default="tmp_snapshot_for_share_migration_", help="Set snapshot prefix for usage in ZFS migration. Required."), ] CONF = cfg.CONF CONF.register_opts(zfsonlinux_opts) LOG = log.getLogger(__name__) def ensure_share_server_not_provided(f): def wrap(self, context, *args, **kwargs): server = kwargs.get( "share_server", kwargs.get("destination_share_server")) if server: raise exception.InvalidInput( reason=_("Share server handling is not available. " "But 'share_server' was provided. '%s'. " "Share network should not be used.") % server.get( "id", server)) return f(self, context, *args, **kwargs) return wrap def get_backend_configuration(backend_name): config_stanzas = CONF.list_all_sections() if backend_name not in config_stanzas: msg = _("Could not find backend stanza %(backend_name)s in " "configuration which is required for share replication and " "migration. 
Available stanzas are %(stanzas)s") params = { "stanzas": config_stanzas, "backend_name": backend_name, } raise exception.BadConfigurationException(reason=msg % params) config = configuration.Configuration( driver.share_opts, config_group=backend_name) config.append_config_values(zfsonlinux_opts) config.append_config_values(share_manager_opts) config.append_config_values(driver.ssh_opts) return config class ZFSonLinuxShareDriver(zfs_utils.ExecuteMixin, driver.ShareDriver): def __init__(self, *args, **kwargs): super(ZFSonLinuxShareDriver, self).__init__( [False], *args, config_opts=[zfsonlinux_opts], **kwargs) self.replica_snapshot_prefix = ( self.configuration.zfs_replica_snapshot_prefix) self.migration_snapshot_prefix = ( self.configuration.zfs_migration_snapshot_prefix) self.backend_name = self.configuration.safe_get( 'share_backend_name') or 'ZFSonLinux' self.zpool_list = self._get_zpool_list() self.dataset_creation_options = ( self.configuration.zfs_dataset_creation_options) self.share_export_ip = self.configuration.zfs_share_export_ip self.service_ip = self.configuration.zfs_service_ip self.private_storage = kwargs.get('private_storage') self._helpers = {} # Set config based capabilities self._init_common_capabilities() self._shell_executors = {} def _get_shell_executor_by_host(self, host): backend_name = share_utils.extract_host(host, level='backend_name') if backend_name in CONF.enabled_share_backends: # Return executor of this host return self.execute elif backend_name not in self._shell_executors: config = get_backend_configuration(backend_name) self._shell_executors[backend_name] = ( zfs_utils.get_remote_shell_executor( ip=config.zfs_service_ip, port=22, conn_timeout=config.ssh_conn_timeout, login=config.zfs_ssh_username, password=config.zfs_ssh_user_password, privatekey=config.zfs_ssh_private_key_path, max_size=10, ) ) # Return executor of remote host return self._shell_executors[backend_name] def _init_common_capabilities(self): self.common_capabilities = {} if 'dedup=on' in self.dataset_creation_options: self.common_capabilities['dedupe'] = [True] elif 'dedup=off' in self.dataset_creation_options: self.common_capabilities['dedupe'] = [False] else: self.common_capabilities['dedupe'] = [True, False] if 'compression=off' in self.dataset_creation_options: self.common_capabilities['compression'] = [False] elif any('compression=' in option for option in self.dataset_creation_options): self.common_capabilities['compression'] = [True] else: self.common_capabilities['compression'] = [True, False] # NOTE(vponomaryov): Driver uses 'quota' approach for # ZFS dataset. So, we can consider it as # 'always thin provisioned' because this driver never reserves # space for dataset. self.common_capabilities['thin_provisioning'] = [True] self.common_capabilities['max_over_subscription_ratio'] = ( self.configuration.max_over_subscription_ratio) self.common_capabilities['qos'] = [False] def _get_zpool_list(self): zpools = [] for zpool in self.configuration.zfs_zpool_list: zpool_name = zpool.split('/')[0] if zpool_name in zpools: raise exception.BadConfigurationException( reason=_("Using the same zpool twice is prohibited. " "Duplicate is '%(zpool)s'. 
List of zpools: " "%(zpool_list)s.") % { 'zpool': zpool, 'zpool_list': ', '.join( self.configuration.zfs_zpool_list)}) zpools.append(zpool_name) return zpools @zfs_utils.zfs_dataset_synchronized def _delete_dataset_or_snapshot_with_retry(self, name): """Attempts to destroy some dataset or snapshot with retries.""" # NOTE(vponomaryov): it is possible to see 'dataset is busy' error # under the load. So, we are ok to perform retry in this case. mountpoint = self.get_zfs_option(name, 'mountpoint') if '@' not in name: # NOTE(vponomaryov): check that dataset has no open files. start_point = time.time() while time.time() - start_point < 60: try: out, err = self.execute('lsof', '-w', mountpoint) except exception.ProcessExecutionError: # NOTE(vponomaryov): lsof returns code 1 if search # didn't give results. break LOG.debug("Cannot destroy dataset '%(name)s', it has " "opened files. Will wait 2 more seconds. " "Out: \n%(out)s", { 'name': name, 'out': out}) time.sleep(2) else: raise exception.ZFSonLinuxException( msg=_("Could not destroy '%s' dataset, " "because it had opened files.") % name) # NOTE(vponomaryov): Now, when no file usages and mounts of dataset # exist, destroy dataset. try: self.zfs('destroy', '-f', name) return except exception.ProcessExecutionError: LOG.info("Failed to destroy ZFS dataset, retrying one time") # NOTE(bswartz): There appears to be a bug in ZFS when creating and # destroying datasets concurrently where the filesystem remains mounted # even though ZFS thinks it's unmounted. The most reliable workaround # I've found is to force the unmount, then retry the destroy, with # short pauses around the unmount. time.sleep(1) try: self.execute('sudo', 'umount', mountpoint) except exception.ProcessExecutionError: # Ignore failed umount, it's normal pass time.sleep(1) # This time the destroy is expected to succeed. self.zfs('destroy', '-f', name) def _setup_helpers(self): """Setups share helper for ZFS backend.""" self._helpers = {} helpers = self.configuration.zfs_share_helpers if helpers: for helper_str in helpers: share_proto, __, import_str = helper_str.partition('=') helper = importutils.import_class(import_str) self._helpers[share_proto.upper()] = helper( self.configuration) else: raise exception.BadConfigurationException( reason=_( "No share helpers selected for ZFSonLinux Driver. 
" "Please specify using config option 'zfs_share_helpers'.")) def _get_share_helper(self, share_proto): """Returns share helper specific for used share protocol.""" helper = self._helpers.get(share_proto) if helper: return helper else: raise exception.InvalidShare( reason=_("Wrong, unsupported or disabled protocol - " "'%s'.") % share_proto) def do_setup(self, context): """Perform basic setup and checks.""" super(ZFSonLinuxShareDriver, self).do_setup(context) self._setup_helpers() for ip in (self.share_export_ip, self.service_ip): if not utils.is_valid_ip_address(ip, 4): raise exception.BadConfigurationException( reason=_("Wrong IP address provided: " "%s") % self.share_export_ip) if not self.zpool_list: raise exception.BadConfigurationException( reason=_("No zpools specified for usage: " "%s") % self.zpool_list) # Make pool mounts shared so that cloned namespaces receive unmounts # and don't prevent us from unmounting datasets for zpool in self.configuration.zfs_zpool_list: self.execute('sudo', 'mount', '--make-rshared', ('/%s' % zpool)) if self.configuration.zfs_use_ssh: # Check workability of SSH executor self.ssh_executor('whoami') def _get_pools_info(self): """Returns info about all pools used by backend.""" pools = [] for zpool in self.zpool_list: free_size = self.get_zpool_option(zpool, 'free') free_size = utils.translate_string_size_to_float(free_size) total_size = self.get_zpool_option(zpool, 'size') total_size = utils.translate_string_size_to_float(total_size) pool = { 'pool_name': zpool, 'total_capacity_gb': float(total_size), 'free_capacity_gb': float(free_size), 'reserved_percentage': self.configuration.reserved_share_percentage, } pool.update(self.common_capabilities) if self.configuration.replication_domain: pool['replication_type'] = 'readable' pools.append(pool) return pools def _update_share_stats(self): """Retrieves share stats info.""" data = { 'share_backend_name': self.backend_name, 'storage_protocol': 'NFS', 'reserved_percentage': self.configuration.reserved_share_percentage, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'driver_name': 'ZFS', 'pools': self._get_pools_info(), } if self.configuration.replication_domain: data['replication_type'] = 'readable' super(ZFSonLinuxShareDriver, self)._update_share_stats(data) def _get_share_name(self, share_id): """Returns name of dataset used for given share.""" prefix = self.configuration.zfs_dataset_name_prefix or '' return prefix + share_id.replace('-', '_') def _get_snapshot_name(self, snapshot_id): """Returns name of dataset snapshot used for given share snapshot.""" prefix = self.configuration.zfs_dataset_snapshot_name_prefix or '' return prefix + snapshot_id.replace('-', '_') def _get_dataset_creation_options(self, share, is_readonly=False): """Returns list of options to be used for dataset creation.""" options = ['quota=%sG' % share['size']] extra_specs = share_types.get_extra_specs_from_share(share) dedupe_set = False dedupe = extra_specs.get('dedupe') if dedupe: dedupe = strutils.bool_from_string( dedupe.lower().split(' ')[-1], default=dedupe) if (dedupe in self.common_capabilities['dedupe']): options.append('dedup=%s' % ('on' if dedupe else 'off')) dedupe_set = True else: raise exception.ZFSonLinuxException(msg=_( "Cannot use requested '%(requested)s' value of 'dedupe' " "extra spec. 
It does not fit allowed value '%(allowed)s' " "that is configured for backend.") % { 'requested': dedupe, 'allowed': self.common_capabilities['dedupe']}) compression_set = False compression_type = extra_specs.get('zfsonlinux:compression') if compression_type: if (compression_type == 'off' and False in self.common_capabilities['compression']): options.append('compression=off') compression_set = True elif (compression_type != 'off' and True in self.common_capabilities['compression']): options.append('compression=%s' % compression_type) compression_set = True else: raise exception.ZFSonLinuxException(msg=_( "Cannot use value '%s' of extra spec " "'zfsonlinux:compression' because compression is disabled " "for this backend. Set extra spec 'compression=True' to " "make scheduler pick up appropriate backend." ) % compression_type) for option in self.dataset_creation_options or []: if any(v in option for v in ( 'readonly', 'sharenfs', 'sharesmb', 'quota')): continue if 'dedup' in option and dedupe_set is True: continue if 'compression' in option and compression_set is True: continue options.append(option) if is_readonly: options.append('readonly=on') else: options.append('readonly=off') return options def _get_dataset_name(self, share): """Returns name of dataset used for given share.""" pool_name = share_utils.extract_host(share['host'], level='pool') # Pick pool with nested dataset name if set up for pool in self.configuration.zfs_zpool_list: pool_data = pool.split('/') if (pool_name == pool_data[0] and len(pool_data) > 1): pool_name = pool if pool_name[-1] == '/': pool_name = pool_name[0:-1] break dataset_name = self._get_share_name(share['id']) full_dataset_name = '%(pool)s/%(dataset)s' % { 'pool': pool_name, 'dataset': dataset_name} return full_dataset_name @ensure_share_server_not_provided def create_share(self, context, share, share_server=None): """Is called to create a share.""" options = self._get_dataset_creation_options(share, is_readonly=False) cmd = ['create'] for option in options: cmd.extend(['-o', option]) dataset_name = self._get_dataset_name(share) cmd.append(dataset_name) ssh_cmd = '%(username)s@%(host)s' % { 'username': self.configuration.zfs_ssh_username, 'host': self.service_ip, } pool_name = share_utils.extract_host(share['host'], level='pool') self.private_storage.update( share['id'], { 'entity_type': 'share', 'dataset_name': dataset_name, 'ssh_cmd': ssh_cmd, # used with replication and migration 'pool_name': pool_name, # used in replication 'used_options': ' '.join(options), } ) self.zfs(*cmd) return self._get_share_helper( share['share_proto']).create_exports(dataset_name) @ensure_share_server_not_provided def delete_share(self, context, share, share_server=None): """Is called to remove a share.""" pool_name = self.private_storage.get(share['id'], 'pool_name') pool_name = pool_name or share_utils.extract_host( share["host"], level="pool") dataset_name = self.private_storage.get(share['id'], 'dataset_name') if not dataset_name: dataset_name = self._get_dataset_name(share) out, err = self.zfs('list', '-r', pool_name) data = self.parse_zfs_answer(out) for datum in data: if datum['NAME'] != dataset_name: continue # Delete dataset's snapshots first out, err = self.zfs('list', '-r', '-t', 'snapshot', pool_name) snapshots = self.parse_zfs_answer(out) full_snapshot_prefix = ( dataset_name + '@') for snap in snapshots: if full_snapshot_prefix in snap['NAME']: self._delete_dataset_or_snapshot_with_retry(snap['NAME']) self._get_share_helper( 
share['share_proto']).remove_exports(dataset_name) self._delete_dataset_or_snapshot_with_retry(dataset_name) break else: LOG.warning( "Share with '%(id)s' ID and '%(name)s' NAME is " "absent on backend. Nothing has been deleted.", {'id': share['id'], 'name': dataset_name}) self.private_storage.delete(share['id']) @ensure_share_server_not_provided def create_snapshot(self, context, snapshot, share_server=None): """Is called to create a snapshot.""" dataset_name = self.private_storage.get( snapshot['share_instance_id'], 'dataset_name') snapshot_tag = self._get_snapshot_name(snapshot['id']) snapshot_name = dataset_name + '@' + snapshot_tag self.private_storage.update( snapshot['snapshot_id'], { 'entity_type': 'snapshot', 'snapshot_tag': snapshot_tag, } ) self.zfs('snapshot', snapshot_name) return {"provider_location": snapshot_name} @ensure_share_server_not_provided def delete_snapshot(self, context, snapshot, share_server=None): """Is called to remove a snapshot.""" self._delete_snapshot(context, snapshot) self.private_storage.delete(snapshot['snapshot_id']) def _get_saved_snapshot_name(self, snapshot_instance): snapshot_tag = self.private_storage.get( snapshot_instance['snapshot_id'], 'snapshot_tag') dataset_name = self.private_storage.get( snapshot_instance['share_instance_id'], 'dataset_name') snapshot_name = dataset_name + '@' + snapshot_tag return snapshot_name def _delete_snapshot(self, context, snapshot): snapshot_name = self._get_saved_snapshot_name(snapshot) out, err = self.zfs('list', '-r', '-t', 'snapshot', snapshot_name) data = self.parse_zfs_answer(out) for datum in data: if datum['NAME'] == snapshot_name: self._delete_dataset_or_snapshot_with_retry(snapshot_name) break else: LOG.warning( "Snapshot with '%(id)s' ID and '%(name)s' NAME is " "absent on backend. Nothing has been deleted.", {'id': snapshot['id'], 'name': snapshot_name}) @ensure_share_server_not_provided def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create a share from snapshot.""" src_backend_name = share_utils.extract_host( snapshot.share_instance['host'], level='backend_name' ) src_snapshot_name = self._get_saved_snapshot_name(snapshot) dataset_name = self._get_dataset_name(share) dst_backend_ssh_cmd = '%(username)s@%(host)s' % { 'username': self.configuration.zfs_ssh_username, 'host': self.service_ip, } dst_backend_pool_name = share_utils.extract_host(share['host'], level='pool') options = self._get_dataset_creation_options(share, is_readonly=False) self.private_storage.update( share['id'], { 'entity_type': 'share', 'dataset_name': dataset_name, 'ssh_cmd': dst_backend_ssh_cmd, # used in replication 'pool_name': dst_backend_pool_name, # used in replication 'used_options': options, } ) # NOTE(andrebeltrami): Implementing the support for create share # from snapshot in different backends in different hosts src_config = get_backend_configuration(src_backend_name) src_backend_ssh_cmd = '%(username)s@%(host)s' % { 'username': src_config.zfs_ssh_username, 'host': src_config.zfs_service_ip, } self.execute( # NOTE(vponomaryov): SSH is used as workaround for 'execute' # implementation restriction that does not support usage # of '|'. 'ssh', src_backend_ssh_cmd, 'sudo', 'zfs', 'send', '-vD', src_snapshot_name, '|', 'ssh', dst_backend_ssh_cmd, 'sudo', 'zfs', 'receive', '-v', dataset_name, ) # Apply options based on used share type that may differ from # one used for original share. 
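# 'options' reflects the destination share type, e.g. something like
# ['quota=5G', 'compression=gzip', 'dedup=off', 'readonly=off']; each entry
# is applied to the received dataset below.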
for option in options: self.zfs('set', option, dataset_name) # Delete with retry as right after creation it may be temporary busy. self.execute_with_retry( 'sudo', 'zfs', 'destroy', dataset_name + '@' + src_snapshot_name.split('@')[-1]) return self._get_share_helper( share['share_proto']).create_exports(dataset_name) def get_pool(self, share): """Return pool name where the share resides on. :param share: The share hosted by the driver. """ pool_name = share_utils.extract_host(share['host'], level='pool') return pool_name @ensure_share_server_not_provided def ensure_share(self, context, share, share_server=None): """Invoked to ensure that given share is exported.""" dataset_name = self.private_storage.get(share['id'], 'dataset_name') if not dataset_name: dataset_name = self._get_dataset_name(share) pool_name = share_utils.extract_host(share['host'], level='pool') out, err = self.zfs('list', '-r', pool_name) data = self.parse_zfs_answer(out) for datum in data: if datum['NAME'] == dataset_name: ssh_cmd = '%(username)s@%(host)s' % { 'username': self.configuration.zfs_ssh_username, 'host': self.service_ip, } self.private_storage.update( share['id'], {'ssh_cmd': ssh_cmd}) sharenfs = self.get_zfs_option(dataset_name, 'sharenfs') if sharenfs != 'off': self.zfs('share', dataset_name) export_locations = self._get_share_helper( share['share_proto']).get_exports(dataset_name) return export_locations else: raise exception.ShareResourceNotFound(share_id=share['id']) def get_network_allocations_number(self): """ZFS does not handle networking. Return 0.""" return 0 @ensure_share_server_not_provided def extend_share(self, share, new_size, share_server=None): """Extends size of existing share.""" dataset_name = self._get_dataset_name(share) self.zfs('set', 'quota=%sG' % new_size, dataset_name) @ensure_share_server_not_provided def shrink_share(self, share, new_size, share_server=None): """Shrinks size of existing share.""" dataset_name = self._get_dataset_name(share) consumed_space = self.get_zfs_option(dataset_name, 'used') consumed_space = utils.translate_string_size_to_float(consumed_space) if consumed_space >= new_size: raise exception.ShareShrinkingPossibleDataLoss( share_id=share['id']) self.zfs('set', 'quota=%sG' % new_size, dataset_name) @ensure_share_server_not_provided def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Updates access rules for given share.""" dataset_name = self._get_dataset_name(share) executor = self._get_shell_executor_by_host(share['host']) return self._get_share_helper(share['share_proto']).update_access( dataset_name, access_rules, add_rules, delete_rules, executor=executor) def manage_existing(self, share, driver_options): """Manage existing ZFS dataset as manila share. ZFSonLinux driver accepts only one driver_option 'size'. If an administrator provides this option, then such quota will be set to dataset and used as share size. Otherwise, driver will set quota equal to nearest bigger rounded integer of usage size. Driver does not expect mountpoint to be changed (should be equal to default that is "/%(dataset_name)s"). :param share: share data :param driver_options: Empty dict or dict with 'size' option. :return: dict with share size and its export locations. 
""" old_export_location = share["export_locations"][0]["path"] old_dataset_name = old_export_location.split(":/")[-1] scheduled_pool_name = share_utils.extract_host( share["host"], level="pool") actual_pool_name = old_dataset_name.split("/")[0] new_dataset_name = self._get_dataset_name(share) # Calculate quota for managed dataset quota = driver_options.get("size") if not quota: consumed_space = self.get_zfs_option(old_dataset_name, "used") consumed_space = utils.translate_string_size_to_float( consumed_space) quota = int(consumed_space) + 1 share["size"] = int(quota) # Save dataset-specific data in private storage options = self._get_dataset_creation_options(share, is_readonly=False) ssh_cmd = "%(username)s@%(host)s" % { "username": self.configuration.zfs_ssh_username, "host": self.service_ip, } # Perform checks on requested dataset if actual_pool_name != scheduled_pool_name: raise exception.ZFSonLinuxException( _("Cannot manage share '%(share_id)s' " "(share_instance '%(si_id)s'), because scheduled " "pool '%(sch)s' and actual '%(actual)s' differ.") % { "share_id": share["share_id"], "si_id": share["id"], "sch": scheduled_pool_name, "actual": actual_pool_name}) out, err = self.zfs("list", "-r", actual_pool_name) data = self.parse_zfs_answer(out) for datum in data: if datum["NAME"] == old_dataset_name: break else: raise exception.ZFSonLinuxException( _("Cannot manage share '%(share_id)s' " "(share_instance '%(si_id)s'), because dataset " "'%(dataset)s' not found in zpool '%(zpool)s'.") % { "share_id": share["share_id"], "si_id": share["id"], "dataset": old_dataset_name, "zpool": actual_pool_name}) # Unmount the dataset before attempting to rename and mount try: self._unmount_share_with_retry(old_dataset_name) except exception.ZFSonLinuxException: msg = _("Unable to unmount share before renaming and re-mounting.") raise exception.ZFSonLinuxException(message=msg) # Rename the dataset and mount with new name self.zfs_with_retry("rename", old_dataset_name, new_dataset_name) try: self.zfs("mount", new_dataset_name) except exception.ProcessExecutionError: # Workaround for bug/1785180 out, err = self.zfs("mount") mounted = any([new_dataset_name in mountedfs for mountedfs in out.splitlines()]) if not mounted: raise # Apply options to dataset for option in options: self.zfs("set", option, new_dataset_name) # Get new export locations of renamed dataset export_locations = self._get_share_helper( share["share_proto"]).get_exports(new_dataset_name) self.private_storage.update( share["id"], { "entity_type": "share", "dataset_name": new_dataset_name, "ssh_cmd": ssh_cmd, # used in replication "pool_name": actual_pool_name, # used in replication "used_options": " ".join(options), } ) return {"size": share["size"], "export_locations": export_locations} def unmanage(self, share): """Removes the specified share from Manila management.""" self.private_storage.delete(share['id']) def manage_existing_snapshot(self, snapshot_instance, driver_options): """Manage existing share snapshot with manila. :param snapshot_instance: SnapshotInstance data :param driver_options: expects only one optional key 'size'. 
:return: dict with share snapshot instance fields for update, example:: { 'size': 1, 'provider_location': 'path/to/some/dataset@some_snapshot_tag', } """ snapshot_size = int(driver_options.get("size", 0)) old_provider_location = snapshot_instance.get("provider_location") old_snapshot_tag = old_provider_location.split("@")[-1] new_snapshot_tag = self._get_snapshot_name(snapshot_instance["id"]) self.private_storage.update( snapshot_instance["snapshot_id"], { "entity_type": "snapshot", "old_snapshot_tag": old_snapshot_tag, "snapshot_tag": new_snapshot_tag, } ) try: self.zfs("list", "-r", "-t", "snapshot", old_provider_location) except exception.ProcessExecutionError as e: raise exception.ManageInvalidShareSnapshot(reason=e.stderr) if not snapshot_size: consumed_space = self.get_zfs_option(old_provider_location, "used") consumed_space = utils.translate_string_size_to_float( consumed_space) snapshot_size = int(math.ceil(consumed_space)) dataset_name = self.private_storage.get( snapshot_instance["share_instance_id"], "dataset_name") new_provider_location = dataset_name + "@" + new_snapshot_tag self.zfs("rename", old_provider_location, new_provider_location) return { "size": snapshot_size, "provider_location": new_provider_location, } def unmanage_snapshot(self, snapshot_instance): """Unmanage dataset snapshot.""" self.private_storage.delete(snapshot_instance["snapshot_id"]) @utils.retry(exception.ZFSonLinuxException) def _unmount_share_with_retry(self, share_name): out, err = self.execute("sudo", "mount") if "%s " % share_name not in out: return self.zfs_with_retry("umount", "-f", share_name) out, err = self.execute("sudo", "mount") if "%s " % share_name in out: raise exception.ZFSonLinuxException( _("Unable to unmount dataset %s"), share_name) def _get_replication_snapshot_prefix(self, replica): """Returns replica-based snapshot prefix.""" replication_snapshot_prefix = "%s_%s" % ( self.replica_snapshot_prefix, replica['id'].replace('-', '_')) return replication_snapshot_prefix def _get_replication_snapshot_tag(self, replica): """Returns replica- and time-based snapshot tag.""" current_time = timeutils.utcnow().isoformat() snapshot_tag = "%s_time_%s" % ( self._get_replication_snapshot_prefix(replica), current_time) return snapshot_tag def _get_active_replica(self, replica_list): for replica in replica_list: if replica['replica_state'] == constants.REPLICA_STATE_ACTIVE: return replica msg = _("Active replica not found.") raise exception.ReplicationException(reason=msg) def _get_migration_snapshot_prefix(self, share_instance): """Returns migration-based snapshot prefix.""" migration_snapshot_prefix = "%s_%s" % ( self.migration_snapshot_prefix, share_instance['id'].replace('-', '_')) return migration_snapshot_prefix def _get_migration_snapshot_tag(self, share_instance): """Returns migration- and time-based snapshot tag.""" current_time = timeutils.utcnow().isoformat() snapshot_tag = "%s_time_%s" % ( self._get_migration_snapshot_prefix(share_instance), current_time) snapshot_tag = ( snapshot_tag.replace('-', '_').replace('.', '_').replace(':', '_')) return snapshot_tag @ensure_share_server_not_provided def create_replica(self, context, replica_list, new_replica, access_rules, replica_snapshots, share_server=None): """Replicates the active replica to a new replica on this backend.""" active_replica = self._get_active_replica(replica_list) src_dataset_name = self.private_storage.get( active_replica['id'], 'dataset_name') ssh_to_src_cmd = self.private_storage.get( active_replica['id'], 'ssh_cmd') 
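# Destination side of the replication: dataset name on this backend's pool
# and the SSH endpoint the source host will 'zfs send' the snapshot to.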
dst_dataset_name = self._get_dataset_name(new_replica) ssh_cmd = '%(username)s@%(host)s' % { 'username': self.configuration.zfs_ssh_username, 'host': self.service_ip, } snapshot_tag = self._get_replication_snapshot_tag(new_replica) src_snapshot_name = ( '%(dataset_name)s@%(snapshot_tag)s' % { 'snapshot_tag': snapshot_tag, 'dataset_name': src_dataset_name, } ) # Save valuable data to DB self.private_storage.update(active_replica['id'], { 'repl_snapshot_tag': snapshot_tag, }) self.private_storage.update(new_replica['id'], { 'entity_type': 'replica', 'replica_type': 'readable', 'dataset_name': dst_dataset_name, 'ssh_cmd': ssh_cmd, 'pool_name': share_utils.extract_host( new_replica['host'], level='pool'), 'repl_snapshot_tag': snapshot_tag, }) # Create temporary snapshot. It will exist until following replica sync # After it - new one will appear and so in loop. self.execute( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'snapshot', src_snapshot_name, ) # Send/receive temporary snapshot out, err = self.execute( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'send', '-vDR', src_snapshot_name, '|', 'ssh', ssh_cmd, 'sudo', 'zfs', 'receive', '-v', dst_dataset_name, ) msg = ("Info about replica '%(replica_id)s' creation is following: " "\n%(out)s") LOG.debug(msg, {'replica_id': new_replica['id'], 'out': out}) # Make replica readonly self.zfs('set', 'readonly=on', dst_dataset_name) # Set original share size as quota to new replica self.zfs('set', 'quota=%sG' % active_replica['size'], dst_dataset_name) # Apply access rules from original share self._get_share_helper(new_replica['share_proto']).update_access( dst_dataset_name, access_rules, add_rules=[], delete_rules=[], make_all_ro=True) return { 'export_locations': self._get_share_helper( new_replica['share_proto']).create_exports(dst_dataset_name), 'replica_state': constants.REPLICA_STATE_IN_SYNC, 'access_rules_status': constants.STATUS_ACTIVE, } @ensure_share_server_not_provided def delete_replica(self, context, replica_list, replica_snapshots, replica, share_server=None): """Deletes a replica. This is called on the destination backend.""" pool_name = self.private_storage.get(replica['id'], 'pool_name') dataset_name = self.private_storage.get(replica['id'], 'dataset_name') if not dataset_name: dataset_name = self._get_dataset_name(replica) # Delete dataset's snapshots first out, err = self.zfs('list', '-r', '-t', 'snapshot', pool_name) data = self.parse_zfs_answer(out) for datum in data: if dataset_name in datum['NAME']: self._delete_dataset_or_snapshot_with_retry(datum['NAME']) # Now we delete dataset itself out, err = self.zfs('list', '-r', pool_name) data = self.parse_zfs_answer(out) for datum in data: if datum['NAME'] == dataset_name: self._get_share_helper( replica['share_proto']).remove_exports(dataset_name) self._delete_dataset_or_snapshot_with_retry(dataset_name) break else: LOG.warning( "Share replica with '%(id)s' ID and '%(name)s' NAME is " "absent on backend. 
Nothing has been deleted.", {'id': replica['id'], 'name': dataset_name}) self.private_storage.delete(replica['id']) @ensure_share_server_not_provided def update_replica_state(self, context, replica_list, replica, access_rules, replica_snapshots, share_server=None): """Syncs replica and updates its 'replica_state'.""" return self._update_replica_state( context, replica_list, replica, replica_snapshots, access_rules) def _update_replica_state(self, context, replica_list, replica, replica_snapshots=None, access_rules=None): active_replica = self._get_active_replica(replica_list) src_dataset_name = self.private_storage.get( active_replica['id'], 'dataset_name') ssh_to_src_cmd = self.private_storage.get( active_replica['id'], 'ssh_cmd') ssh_to_dst_cmd = self.private_storage.get( replica['id'], 'ssh_cmd') dst_dataset_name = self.private_storage.get( replica['id'], 'dataset_name') # Create temporary snapshot previous_snapshot_tag = self.private_storage.get( replica['id'], 'repl_snapshot_tag') snapshot_tag = self._get_replication_snapshot_tag(replica) src_snapshot_name = src_dataset_name + '@' + snapshot_tag self.execute( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'snapshot', src_snapshot_name, ) # Make sure it is readonly self.zfs('set', 'readonly=on', dst_dataset_name) # Send/receive diff between previous snapshot and last one out, err = self.execute( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'send', '-vDRI', previous_snapshot_tag, src_snapshot_name, '|', 'ssh', ssh_to_dst_cmd, 'sudo', 'zfs', 'receive', '-vF', dst_dataset_name, ) msg = ("Info about last replica '%(replica_id)s' sync is following: " "\n%(out)s") LOG.debug(msg, {'replica_id': replica['id'], 'out': out}) # Update DB data that will be used on following replica sync self.private_storage.update(active_replica['id'], { 'repl_snapshot_tag': snapshot_tag, }) self.private_storage.update( replica['id'], {'repl_snapshot_tag': snapshot_tag}) # Destroy all snapshots on dst filesystem except referenced ones. snap_references = set() for repl in replica_list: snap_references.add( self.private_storage.get(repl['id'], 'repl_snapshot_tag')) dst_pool_name = dst_dataset_name.split('/')[0] out, err = self.zfs('list', '-r', '-t', 'snapshot', dst_pool_name) data = self.parse_zfs_answer(out) for datum in data: if (dst_dataset_name in datum['NAME'] and '@' + self.replica_snapshot_prefix in datum['NAME'] and datum['NAME'].split('@')[-1] not in snap_references): self._delete_dataset_or_snapshot_with_retry(datum['NAME']) # Destroy all snapshots on src filesystem except referenced ones. src_pool_name = src_snapshot_name.split('/')[0] out, err = self.execute( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'list', '-r', '-t', 'snapshot', src_pool_name, ) data = self.parse_zfs_answer(out) full_src_snapshot_prefix = ( src_dataset_name + '@' + self._get_replication_snapshot_prefix(replica)) for datum in data: if (full_src_snapshot_prefix in datum['NAME'] and datum['NAME'].split('@')[-1] not in snap_references): self.execute_with_retry( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'destroy', '-f', datum['NAME'], ) if access_rules: # Apply access rules from original share # TODO(vponomaryov): we should remove somehow rules that were # deleted on active replica after creation of secondary replica. # For the moment there will be difference and it can be considered # as a bug. 
self._get_share_helper(replica['share_proto']).update_access( dst_dataset_name, access_rules, add_rules=[], delete_rules=[], make_all_ro=True) # Return results return constants.REPLICA_STATE_IN_SYNC @ensure_share_server_not_provided def promote_replica(self, context, replica_list, replica, access_rules, share_server=None): """Promotes secondary replica to active and active to secondary.""" active_replica = self._get_active_replica(replica_list) src_dataset_name = self.private_storage.get( active_replica['id'], 'dataset_name') ssh_to_src_cmd = self.private_storage.get( active_replica['id'], 'ssh_cmd') dst_dataset_name = self.private_storage.get( replica['id'], 'dataset_name') replica_dict = { r['id']: { 'id': r['id'], # NOTE(vponomaryov): access rules will be updated in next # 'sync' operation. 'access_rules_status': constants.SHARE_INSTANCE_RULES_SYNCING, } for r in replica_list } try: # Mark currently active replica as readonly self.execute( 'ssh', ssh_to_src_cmd, 'set', 'readonly=on', src_dataset_name, ) # Create temporary snapshot of currently active replica snapshot_tag = self._get_replication_snapshot_tag(active_replica) src_snapshot_name = src_dataset_name + '@' + snapshot_tag self.execute( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'snapshot', src_snapshot_name, ) # Apply temporary snapshot to all replicas for repl in replica_list: if repl['replica_state'] == constants.REPLICA_STATE_ACTIVE: continue previous_snapshot_tag = self.private_storage.get( repl['id'], 'repl_snapshot_tag') dataset_name = self.private_storage.get( repl['id'], 'dataset_name') ssh_to_dst_cmd = self.private_storage.get( repl['id'], 'ssh_cmd') try: # Send/receive diff between previous snapshot and last one out, err = self.execute( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'send', '-vDRI', previous_snapshot_tag, src_snapshot_name, '|', 'ssh', ssh_to_dst_cmd, 'sudo', 'zfs', 'receive', '-vF', dataset_name, ) except exception.ProcessExecutionError as e: LOG.warning("Failed to sync replica %(id)s. %(e)s", {'id': repl['id'], 'e': e}) replica_dict[repl['id']]['replica_state'] = ( constants.REPLICA_STATE_OUT_OF_SYNC) continue msg = ("Info about last replica '%(replica_id)s' " "sync is following: \n%(out)s") LOG.debug(msg, {'replica_id': repl['id'], 'out': out}) # Update latest replication snapshot for replica self.private_storage.update( repl['id'], {'repl_snapshot_tag': snapshot_tag}) # Update latest replication snapshot for currently active replica self.private_storage.update( active_replica['id'], {'repl_snapshot_tag': snapshot_tag}) replica_dict[active_replica['id']]['replica_state'] = ( constants.REPLICA_STATE_IN_SYNC) except Exception as e: LOG.warning( "Failed to update currently active replica. \n%s", e) replica_dict[active_replica['id']]['replica_state'] = ( constants.REPLICA_STATE_OUT_OF_SYNC) # Create temporary snapshot of new replica and sync it with other # secondary replicas. 
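# From this point the promoted replica acts as the 'zfs send' source; its
# own SSH command and dataset are used for the syncs below.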
snapshot_tag = self._get_replication_snapshot_tag(replica) src_snapshot_name = dst_dataset_name + '@' + snapshot_tag ssh_to_src_cmd = self.private_storage.get(replica['id'], 'ssh_cmd') self.zfs('snapshot', src_snapshot_name) for repl in replica_list: if (repl['replica_state'] == constants.REPLICA_STATE_ACTIVE or repl['id'] == replica['id']): continue previous_snapshot_tag = self.private_storage.get( repl['id'], 'repl_snapshot_tag') dataset_name = self.private_storage.get( repl['id'], 'dataset_name') ssh_to_dst_cmd = self.private_storage.get( repl['id'], 'ssh_cmd') try: # Send/receive diff between previous snapshot and last one out, err = self.execute( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'send', '-vDRI', previous_snapshot_tag, src_snapshot_name, '|', 'ssh', ssh_to_dst_cmd, 'sudo', 'zfs', 'receive', '-vF', dataset_name, ) except exception.ProcessExecutionError as e: LOG.warning("Failed to sync replica %(id)s. %(e)s", {'id': repl['id'], 'e': e}) replica_dict[repl['id']]['replica_state'] = ( constants.REPLICA_STATE_OUT_OF_SYNC) continue msg = ("Info about last replica '%(replica_id)s' " "sync is following: \n%(out)s") LOG.debug(msg, {'replica_id': repl['id'], 'out': out}) # Update latest replication snapshot for replica self.private_storage.update( repl['id'], {'repl_snapshot_tag': snapshot_tag}) # Update latest replication snapshot for new active replica self.private_storage.update( replica['id'], {'repl_snapshot_tag': snapshot_tag}) replica_dict[replica['id']]['replica_state'] = ( constants.REPLICA_STATE_ACTIVE) self._get_share_helper(replica['share_proto']).update_access( dst_dataset_name, access_rules, add_rules=[], delete_rules=[]) replica_dict[replica['id']]['access_rules_status'] = ( constants.STATUS_ACTIVE) self.zfs('set', 'readonly=off', dst_dataset_name) return list(replica_dict.values()) @ensure_share_server_not_provided def create_replicated_snapshot(self, context, replica_list, replica_snapshots, share_server=None): """Create a snapshot and update across the replicas.""" active_replica = self._get_active_replica(replica_list) src_dataset_name = self.private_storage.get( active_replica['id'], 'dataset_name') ssh_to_src_cmd = self.private_storage.get( active_replica['id'], 'ssh_cmd') replica_snapshots_dict = { si['id']: {'id': si['id']} for si in replica_snapshots} active_snapshot_instance_id = [ si['id'] for si in replica_snapshots if si['share_instance_id'] == active_replica['id']][0] snapshot_tag = self._get_snapshot_name(active_snapshot_instance_id) # Replication should not be dependent on manually created snapshots # so, create additional one, newer, that will be used for replication # synchronizations. 
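# Two snapshots are taken on the active replica below: the user-visible one
# (snapshot_tag) and a newer, replication-only one (repl_snapshot_tag).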
repl_snapshot_tag = self._get_replication_snapshot_tag(active_replica) src_snapshot_name = src_dataset_name + '@' + repl_snapshot_tag self.private_storage.update( replica_snapshots[0]['snapshot_id'], { 'entity_type': 'snapshot', 'snapshot_tag': snapshot_tag, } ) for tag in (snapshot_tag, repl_snapshot_tag): self.execute( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'snapshot', src_dataset_name + '@' + tag, ) # Populate snapshot to all replicas for replica_snapshot in replica_snapshots: replica_id = replica_snapshot['share_instance_id'] if replica_id == active_replica['id']: replica_snapshots_dict[replica_snapshot['id']]['status'] = ( constants.STATUS_AVAILABLE) continue previous_snapshot_tag = self.private_storage.get( replica_id, 'repl_snapshot_tag') dst_dataset_name = self.private_storage.get( replica_id, 'dataset_name') ssh_to_dst_cmd = self.private_storage.get(replica_id, 'ssh_cmd') try: # Send/receive diff between previous snapshot and last one out, err = self.execute( 'ssh', ssh_to_src_cmd, 'sudo', 'zfs', 'send', '-vDRI', previous_snapshot_tag, src_snapshot_name, '|', 'ssh', ssh_to_dst_cmd, 'sudo', 'zfs', 'receive', '-vF', dst_dataset_name, ) except exception.ProcessExecutionError as e: LOG.warning( "Failed to sync snapshot instance %(id)s. %(e)s", {'id': replica_snapshot['id'], 'e': e}) replica_snapshots_dict[replica_snapshot['id']]['status'] = ( constants.STATUS_ERROR) continue replica_snapshots_dict[replica_snapshot['id']]['status'] = ( constants.STATUS_AVAILABLE) msg = ("Info about last replica '%(replica_id)s' " "sync is following: \n%(out)s") LOG.debug(msg, {'replica_id': replica_id, 'out': out}) # Update latest replication snapshot for replica self.private_storage.update( replica_id, {'repl_snapshot_tag': repl_snapshot_tag}) # Update latest replication snapshot for currently active replica self.private_storage.update( active_replica['id'], {'repl_snapshot_tag': repl_snapshot_tag}) return list(replica_snapshots_dict.values()) @ensure_share_server_not_provided def delete_replicated_snapshot(self, context, replica_list, replica_snapshots, share_server=None): """Delete a snapshot by deleting its instances across the replicas.""" active_replica = self._get_active_replica(replica_list) replica_snapshots_dict = { si['id']: {'id': si['id']} for si in replica_snapshots} for replica_snapshot in replica_snapshots: replica_id = replica_snapshot['share_instance_id'] snapshot_name = self._get_saved_snapshot_name(replica_snapshot) if active_replica['id'] == replica_id: self._delete_snapshot(context, replica_snapshot) replica_snapshots_dict[replica_snapshot['id']]['status'] = ( constants.STATUS_DELETED) continue ssh_cmd = self.private_storage.get(replica_id, 'ssh_cmd') out, err = self.execute( 'ssh', ssh_cmd, 'sudo', 'zfs', 'list', '-r', '-t', 'snapshot', snapshot_name, ) data = self.parse_zfs_answer(out) for datum in data: if datum['NAME'] != snapshot_name: continue self.execute_with_retry( 'ssh', ssh_cmd, 'sudo', 'zfs', 'destroy', '-f', datum['NAME'], ) self.private_storage.delete(replica_snapshot['id']) replica_snapshots_dict[replica_snapshot['id']]['status'] = ( constants.STATUS_DELETED) self.private_storage.delete(replica_snapshot['snapshot_id']) return list(replica_snapshots_dict.values()) @ensure_share_server_not_provided def update_replicated_snapshot(self, context, replica_list, share_replica, replica_snapshots, replica_snapshot, share_server=None): """Update the status of a snapshot instance that lives on a replica.""" self._update_replica_state(context, replica_list, share_replica) 
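# With the replica synced, check whether the snapshot instance now exists
# on it and report 'available' or 'error' accordingly.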
snapshot_name = self._get_saved_snapshot_name(replica_snapshot) out, err = self.zfs('list', '-r', '-t', 'snapshot', snapshot_name) data = self.parse_zfs_answer(out) snapshot_found = False for datum in data: if datum['NAME'] == snapshot_name: snapshot_found = True break return_dict = {'id': replica_snapshot['id']} if snapshot_found: return_dict.update({'status': constants.STATUS_AVAILABLE}) else: return_dict.update({'status': constants.STATUS_ERROR}) return return_dict @ensure_share_server_not_provided def migration_check_compatibility( self, context, source_share, destination_share, share_server=None, destination_share_server=None): """Is called to test compatibility with destination backend.""" backend_name = share_utils.extract_host( destination_share['host'], level='backend_name') config = get_backend_configuration(backend_name) compatible = self.configuration.share_driver == config.share_driver return { 'compatible': compatible, 'writable': False, 'preserve_metadata': True, 'nondisruptive': True, } @ensure_share_server_not_provided def migration_start( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Is called to start share migration.""" src_dataset_name = self.private_storage.get( source_share['id'], 'dataset_name') dst_dataset_name = self._get_dataset_name(destination_share) backend_name = share_utils.extract_host( destination_share['host'], level='backend_name') ssh_cmd = '%(username)s@%(host)s' % { 'username': self.configuration.zfs_ssh_username, 'host': self.configuration.zfs_service_ip, } config = get_backend_configuration(backend_name) remote_ssh_cmd = '%(username)s@%(host)s' % { 'username': config.zfs_ssh_username, 'host': config.zfs_service_ip, } snapshot_tag = self._get_migration_snapshot_tag(destination_share) src_snapshot_name = ( '%(dataset_name)s@%(snapshot_tag)s' % { 'snapshot_tag': snapshot_tag, 'dataset_name': src_dataset_name, } ) # Save valuable data to DB self.private_storage.update(source_share['id'], { 'migr_snapshot_tag': snapshot_tag, }) self.private_storage.update(destination_share['id'], { 'entity_type': 'share', 'dataset_name': dst_dataset_name, 'ssh_cmd': remote_ssh_cmd, 'pool_name': share_utils.extract_host( destination_share['host'], level='pool'), 'migr_snapshot_tag': snapshot_tag, }) # Create temporary snapshot on src host. 
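# The temporary snapshot name is '<src_dataset>@<snapshot_tag>', e.g. with
# the default prefixes something like
# 'pool/manila_share_<uuid>@tmp_snapshot_for_share_migration_<uuid>_time_<timestamp>'.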
self.execute('sudo', 'zfs', 'snapshot', src_snapshot_name) # Send/receive temporary snapshot cmd = ( 'ssh ' + ssh_cmd + ' ' 'sudo zfs send -vDR ' + src_snapshot_name + ' ' '| ssh ' + remote_ssh_cmd + ' ' 'sudo zfs receive -v ' + dst_dataset_name ) filename = dst_dataset_name.replace('/', '_') with utils.tempdir() as tmpdir: tmpfilename = os.path.join(tmpdir, '%s.sh' % filename) with open(tmpfilename, "w") as migr_script: migr_script.write(cmd) self.execute('sudo', 'chmod', '755', tmpfilename) self.execute('nohup', tmpfilename, '&') @ensure_share_server_not_provided def migration_continue( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Is called in source share's backend to continue migration.""" snapshot_tag = self.private_storage.get( destination_share['id'], 'migr_snapshot_tag') out, err = self.execute('ps', 'aux') if not '@%s' % snapshot_tag in out: dst_dataset_name = self.private_storage.get( destination_share['id'], 'dataset_name') try: self.execute( 'sudo', 'zfs', 'get', 'quota', dst_dataset_name, executor=self._get_shell_executor_by_host( destination_share['host']), ) return True except exception.ProcessExecutionError as e: raise exception.ZFSonLinuxException(msg=_( 'Migration process is absent and dst dataset ' 'returned following error: %s') % e) @ensure_share_server_not_provided def migration_complete( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Is called to perform 2nd phase of driver migration of a given share. """ dst_dataset_name = self.private_storage.get( destination_share['id'], 'dataset_name') snapshot_tag = self.private_storage.get( destination_share['id'], 'migr_snapshot_tag') dst_snapshot_name = ( '%(dataset_name)s@%(snapshot_tag)s' % { 'snapshot_tag': snapshot_tag, 'dataset_name': dst_dataset_name, } ) dst_executor = self._get_shell_executor_by_host( destination_share['host']) # Destroy temporary migration snapshot on dst host self.execute( 'sudo', 'zfs', 'destroy', dst_snapshot_name, executor=dst_executor, ) # Get export locations of new share instance export_locations = self._get_share_helper( destination_share['share_proto']).create_exports( dst_dataset_name, executor=dst_executor) # Destroy src share and temporary migration snapshot on src (this) host self.delete_share(context, source_share) return {'export_locations': export_locations} @ensure_share_server_not_provided def migration_cancel( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Is called to cancel driver migration.""" src_dataset_name = self.private_storage.get( source_share['id'], 'dataset_name') dst_dataset_name = self.private_storage.get( destination_share['id'], 'dataset_name') ssh_cmd = self.private_storage.get( destination_share['id'], 'ssh_cmd') snapshot_tag = self.private_storage.get( destination_share['id'], 'migr_snapshot_tag') # Kill migration process if exists try: out, err = self.execute('ps', 'aux') lines = out.split('\n') for line in lines: if '@%s' % snapshot_tag in line: migr_pid = [ x for x in line.strip().split(' ') if x != ''][1] self.execute('sudo', 'kill', '-9', migr_pid) except exception.ProcessExecutionError as e: LOG.warning( "Caught following error trying to kill migration process: %s", e) # Sleep couple of seconds before destroying updated objects time.sleep(2) # Destroy snapshot on source host 
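# Clean-up order: the temporary snapshot on the source host first, then the
# destination dataset, which is destroyed with '-r' so the received copy of
# the migration snapshot is removed along with it.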
self._delete_dataset_or_snapshot_with_retry( src_dataset_name + '@' + snapshot_tag) # Destroy dataset and its migration snapshot on destination host try: self.execute( 'ssh', ssh_cmd, 'sudo', 'zfs', 'destroy', '-r', dst_dataset_name, ) except exception.ProcessExecutionError as e: LOG.warning( "Failed to destroy destination dataset with following error: " "%s", e) LOG.debug( "Migration of share with ID '%s' has been canceled.", source_share["id"]) manila-10.0.0/manila/share/drivers/container/0000775000175000017500000000000013656750362021073 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/container/protocol_helper.py0000664000175000017500000001303013656750227024642 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from manila.common import constants as const from manila import exception from manila.i18n import _ LOG = log.getLogger(__name__) class DockerCIFSHelper(object): def __init__(self, container_helper, *args, **kwargs): super(DockerCIFSHelper, self).__init__() self.share = kwargs.get("share") self.conf = kwargs.get("config") self.container = container_helper def create_share(self, server_id): share_name = self.share.share_id cmd = ["net", "conf", "addshare", share_name, "/shares/%s" % share_name, "writeable=y"] if self.conf.container_cifs_guest_ok: cmd.append("guest_ok=y") else: cmd.append("guest_ok=n") self.container.execute(server_id, cmd) parameters = { "browseable": "yes", "create mask": "0755", "read only": "no", } for param, value in parameters.items(): self.container.execute( server_id, ["net", "conf", "setparm", share_name, param, value] ) # TODO(tbarron): pass configured address family when we support IPv6 address = self.container.fetch_container_address( server_id, address_family='inet') return r"//%(addr)s/%(name)s" % {"addr": address, "name": share_name} def delete_share(self, server_id, share_name, ignore_errors=False): self.container.execute( server_id, ["net", "conf", "delshare", share_name], ignore_errors=ignore_errors ) def _get_access_group(self, access_level): if access_level == const.ACCESS_LEVEL_RO: access = "read list" elif access_level == const.ACCESS_LEVEL_RW: access = "valid users" else: raise exception.InvalidShareAccessLevel(level=access_level) return access def _get_existing_users(self, server_id, share_name, access): result = self.container.execute( server_id, ["net", "conf", "getparm", share_name, access], ignore_errors=True ) if result: return result[0].rstrip('\n') else: return "" def _set_users(self, server_id, share_name, access, users_to_set): self.container.execute( server_id, ["net", "conf", "setparm", share_name, access, users_to_set] ) def _allow_access(self, share_name, server_id, user_to_allow, access_level): access = self._get_access_group(access_level) try: existing_users = self._get_existing_users(server_id, share_name, access) except TypeError: users_to_allow = user_to_allow else: users_to_allow = " ".join([existing_users, 
user_to_allow]) self._set_users(server_id, share_name, access, users_to_allow) def _deny_access(self, share_name, server_id, user_to_deny, access_level): access = self._get_access_group(access_level) try: existing_users = self._get_existing_users(server_id, share_name, access) except TypeError: LOG.warning("Can't access smbd at share %s.", share_name) return else: allowed_users = " ".join(sorted(set(existing_users.split()) - set([user_to_deny]))) if allowed_users != existing_users: self._set_users(server_id, share_name, access, allowed_users) def update_access(self, server_id, share_name, access_rules, add_rules=None, delete_rules=None): def _rule_updater(rules, action, override_type_check=False): for rule in rules: access_level = rule['access_level'] access_type = rule['access_type'] # (aovchinnikov): override_type_check is used to ensure # broken rules deletion. if access_type == 'user' or override_type_check: action(share_name, server_id, rule['access_to'], access_level) else: msg = _("Access type '%s' is not supported by the " "driver.") % access_type raise exception.InvalidShareAccess(reason=msg) if not (add_rules or delete_rules): # clean all users first. self.container.execute( server_id, ["net", "conf", "setparm", share_name, "valid users", ""] ) _rule_updater(access_rules or [], self._allow_access) return _rule_updater(add_rules or [], self._allow_access) _rule_updater(delete_rules or [], self._deny_access, override_type_check=True) manila-10.0.0/manila/share/drivers/container/__init__.py0000664000175000017500000000000013656750227023172 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/container/storage_helper.py0000664000175000017500000001347713656750227024464 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import re from oslo_config import cfg from oslo_log import log from manila import exception from manila.i18n import _ from manila.share import driver CONF = cfg.CONF lv_opts = [ cfg.StrOpt("container_volume_group", default="manila_docker_volumes", help="LVM volume group to use for volumes. This volume group " "must be created by the cloud administrator independently " "from manila operations."), ] CONF.register_opts(lv_opts) LOG = log.getLogger(__name__) class LVMHelper(driver.ExecuteMixin): def __init__(self, *args, **kwargs): self.configuration = kwargs.pop("configuration", None) if self.configuration is None: raise exception.ManilaException(_("LVMHelper called without " "supplying configuration.")) self.configuration.append_config_values(lv_opts) super(LVMHelper, self).__init__(*args, **kwargs) self.init_execute_mixin() def get_share_server_pools(self, share_server=None): out, err = self._execute('vgs', self.configuration.container_volume_group, '--options', 'vg_size,vg_free', '--noheadings', '--units', 'g', run_as_root=True) if err: msg = _("Unable to gather size of the volume group %(vg)s to be " "used by the driver. 
Error: %(err)s") raise exception.ShareBackendException( msg % {'vg': self.configuration.container_volume_group, 'err': err}) (free_size, total_size) = sorted(re.findall(r"\d+\.\d+|\d+", out), reverse=False) return [{ 'pool_name': self.configuration.container_volume_group, 'total_capacity_gb': float(total_size), 'free_capacity_gb': float(free_size), 'reserved_percentage': 0, }, ] def _get_lv_device(self, share_name): return os.path.join("/dev", self.configuration.container_volume_group, share_name) def _get_lv_folder(self, share_name): # Provides folder name in hosts /tmp to which logical volume is # mounted prior to providing access to it from a container. return os.path.join("/tmp/shares", share_name) def provide_storage(self, share_name, size): self._execute("lvcreate", "-p", "rw", "-L", str(size) + "G", "-n", share_name, self.configuration.container_volume_group, run_as_root=True) self._execute("mkfs.ext4", self._get_lv_device(share_name), run_as_root=True) def _try_to_unmount_device(self, device): # NOTE(ganso): We invoke this method to be sure volume was unmounted, # and we swallow the exception in case it fails to. try: self._execute("umount", device, run_as_root=True) except exception.ProcessExecutionError as e: LOG.warning("Failed to umount helper directory %(device)s due to " "%(reason)s.", {'device': device, 'reason': e}) def remove_storage(self, share_name): device = self._get_lv_device(share_name) self._try_to_unmount_device(device) # (aovchinnikov): bug 1621784 manifests itself in jamming logical # volumes, so try removing once and issue warning until it is fixed. try: self._execute("lvremove", "-f", "--autobackup", "n", device, run_as_root=True) except exception.ProcessExecutionError as e: LOG.warning("Failed to remove logical volume %(device)s due to " "%(reason)s.", {'device': device, 'reason': e}) def rename_storage(self, share_name, new_share_name): old_device = self._get_lv_device(share_name) new_device = self._get_lv_device(new_share_name) self._try_to_unmount_device(old_device) try: self._execute("lvrename", "--autobackup", "n", old_device, new_device, run_as_root=True) except exception.ProcessExecutionError as e: msg = ("Failed to rename logical volume %(device)s due to " "%(reason)s." % {'device': old_device, 'reason': e}) LOG.exception(msg) raise def extend_share(self, share_name, new_size, share_server=None): lv_device = self._get_lv_device(share_name) cmd = ('lvextend', '-L', '%sG' % new_size, '-n', lv_device) self._execute(*cmd, run_as_root=True) self._execute("e2fsck", "-f", "-y", lv_device, run_as_root=True) self._execute('resize2fs', lv_device, run_as_root=True) def get_size(self, share_name): device = self._get_lv_device(share_name) size = self._execute( "lvs", "-o", "lv_size", "--noheadings", "--nosuffix", "--units", "g", device, run_as_root=True) LOG.debug("Found size %(size)s for LVM device " "%(lvm)s.", {'size': size[0], 'lvm': share_name}) return size[0] manila-10.0.0/manila/share/drivers/container/driver.py0000664000175000017500000003773513656750227022757 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Container Driver for shares. This driver uses a container as a share server. Current implementation suggests that a container when started by Docker will be plugged into a Linux bridge. Also it is suggested that all interfaces willing to talk to each other reside in an OVS bridge.""" import math import re from oslo_config import cfg from oslo_log import log from oslo_utils import importutils from manila import exception from manila.i18n import _ from manila.share import driver from manila import utils CONF = cfg.CONF LOG = log.getLogger(__name__) container_opts = [ cfg.StrOpt("container_linux_bridge_name", default="docker0", required=True, help="Linux bridge used by container hypervisor to plug " "host-side veth to. It will be unplugged from here " "by the driver."), cfg.StrOpt("container_ovs_bridge_name", default="br-int", required=True, help="OVS bridge to use to plug a container to."), cfg.BoolOpt("container_cifs_guest_ok", default=True, help="Determines whether to allow guest access to CIFS share " "or not."), cfg.StrOpt("container_image_name", default="manila-docker-container", help="Image to be used for a container-based share server."), cfg.StrOpt("container_helper", default="manila.share.drivers.container.container_helper." "DockerExecHelper", help="Container helper which provides container-related " "operations to the driver."), cfg.StrOpt("container_protocol_helper", default="manila.share.drivers.container.protocol_helper." "DockerCIFSHelper", help="Helper which facilitates interaction with share server."), cfg.StrOpt("container_storage_helper", default="manila.share.drivers.container.storage_helper." "LVMHelper", help="Helper which facilitates interaction with storage " "solution used to actually store data. 
By default LVM " "is used to provide storage for a share."), ] class ContainerShareDriver(driver.ShareDriver, driver.ExecuteMixin): def __init__(self, *args, **kwargs): super(ContainerShareDriver, self).__init__([True], *args, **kwargs) self.configuration.append_config_values(container_opts) self.backend_name = self.configuration.safe_get( "share_backend_name") or "Docker" self.container = importutils.import_class( self.configuration.container_helper)( configuration=self.configuration) self.storage = importutils.import_class( self.configuration.container_storage_helper)( configuration=self.configuration) self._helpers = {} def _get_helper(self, share): if share["share_proto"].upper() == "CIFS": helper = self._helpers.get("CIFS") if helper is not None: return helper(self.container, share=share, config=self.configuration) self._helpers["CIFS"] = importutils.import_class( self.configuration.container_protocol_helper) return self._helpers["CIFS"](self.container, share=share, config=self.configuration) else: raise exception.InvalidShare( reason=_("Wrong, unsupported or disabled protocol.")) def _update_share_stats(self): data = { 'share_backend_name': self.backend_name, 'storage_protocol': 'CIFS', 'reserved_percentage': self.configuration.reserved_share_percentage, 'consistency_group_support': None, 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'driver_name': 'ContainerShareDriver', 'pools': self.storage.get_share_server_pools() } super(ContainerShareDriver, self)._update_share_stats(data) def create_share(self, context, share, share_server=None): LOG.debug("Create share on server '%s'.", share_server["id"]) server_id = self._get_container_name(share_server["id"]) share_name = share.share_id self.storage.provide_storage(share_name, share['size']) location = self._create_export_and_mount_storage( share, server_id, share_name) return location @utils.synchronized('container_driver_delete_share_lock', external=True) def delete_share(self, context, share, share_server=None): LOG.debug("Deleting share %(share)s on server '%(server)s'.", {"server": share_server["id"], "share": self._get_share_name(share)}) server_id = self._get_container_name(share_server["id"]) share_name = self._get_share_name(share) self._delete_export_and_umount_storage(share, server_id, share_name, ignore_errors=True) self.storage.remove_storage(share_name) LOG.debug("Deleted share %s successfully.", share_name) def _get_share_name(self, share): if share.get('export_location'): return share['export_location'].split('/')[-1] else: return share.share_id def extend_share(self, share, new_size, share_server=None): server_id = self._get_container_name(share_server["id"]) share_name = self._get_share_name(share) self.container.execute( server_id, ["umount", "/shares/%s" % share_name] ) self.storage.extend_share(share_name, new_size, share_server) lv_device = self.storage._get_lv_device(share_name) self.container.execute( server_id, ["mount", lv_device, "/shares/%s" % share_name] ) def ensure_share(self, context, share, share_server=None): pass def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): server_id = self._get_container_name(share_server["id"]) share_name = self._get_share_name(share) LOG.debug("Updating access to share %(share)s at " "share server %(share_server)s.", {"share_server": share_server["id"], "share": share_name}) self._get_helper(share).update_access(server_id, share_name, access_rules, add_rules, delete_rules) def 
get_network_allocations_number(self): return 1 def _get_container_name(self, server_id): return "manila_%s" % server_id.replace("-", "_") def do_setup(self, *args, **kwargs): pass def check_for_setup_error(self, *args, **kwargs): host_id = self.configuration.safe_get("neutron_host_id") neutron_class = importutils.import_class( 'manila.network.neutron.neutron_network_plugin.' 'NeutronNetworkPlugin' ) actual_class = importutils.import_class( self.configuration.safe_get("network_api_class")) if host_id is None and issubclass(actual_class, neutron_class): msg = _("%s requires neutron_host_id to be " "specified.") % neutron_class raise exception.ManilaException(msg) elif host_id is None: LOG.warning("neutron_host_id is not specified. This driver " "might not work as expected without it.") def _connect_to_network(self, server_id, network_info, host_veth): LOG.debug("Attempting to connect container to neutron network.") network_allocation = network_info['network_allocations'][0] port_address = network_allocation.ip_address port_mac = network_allocation.mac_address port_id = network_allocation.id self.container.execute( server_id, ["ifconfig", "eth0", port_address, "up"] ) self.container.execute( server_id, ["ip", "link", "set", "dev", "eth0", "address", port_mac] ) msg_helper = { 'id': server_id, 'veth': host_veth, 'lb': self.configuration.container_linux_bridge_name, 'ovsb': self.configuration.container_ovs_bridge_name, 'ip': port_address, 'network': network_info['neutron_net_id'], 'subnet': network_info['neutron_subnet_id'], } LOG.debug("Container %(id)s veth is %(veth)s.", msg_helper) LOG.debug("Removing %(veth)s from %(lb)s.", msg_helper) self._execute("brctl", "delif", self.configuration.container_linux_bridge_name, host_veth, run_as_root=True) LOG.debug("Plugging %(veth)s into %(ovsb)s.", msg_helper) set_if = ['--', 'set', 'interface', host_veth] e_mac = set_if + ['external-ids:attached-mac="%s"' % port_mac] e_id = set_if + ['external-ids:iface-id="%s"' % port_id] e_status = set_if + ['external-ids:iface-status=active'] e_mcid = set_if + ['external-ids:manila-container=%s' % server_id] self._execute("ovs-vsctl", "--", "add-port", self.configuration.container_ovs_bridge_name, host_veth, *(e_mac + e_id + e_status + e_mcid), run_as_root=True) LOG.debug("Now container %(id)s should be accessible from network " "%(network)s and subnet %(subnet)s by address %(ip)s.", msg_helper) @utils.synchronized("container_driver_teardown_lock", external=True) def _teardown_server(self, *args, **kwargs): server_id = self._get_container_name(kwargs["server_details"]["id"]) self.container.stop_container(server_id) veth = self.container.find_container_veth(server_id) if veth: LOG.debug("Deleting veth %s.", veth) try: self._execute("ovs-vsctl", "--", "del-port", self.configuration.container_ovs_bridge_name, veth, run_as_root=True) except exception.ProcessExecutionError as e: LOG.warning("Failed to delete port %s: port " "vanished.", veth) LOG.error(e) def _get_veth_state(self): result = self._execute("brctl", "show", self.configuration.container_linux_bridge_name, run_as_root=True) veths = re.findall("veth.*\\n", result[0]) veths = [x.rstrip('\n') for x in veths] msg = ("The following veth interfaces are plugged into %s now: " % self.configuration.container_linux_bridge_name) LOG.debug(msg + ", ".join(veths)) return veths def _get_corresponding_veth(self, before, after): result = list(set(after) ^ set(before)) if len(result) != 1: raise exception.ManilaException(_("Multiple veths for container.")) return result[0] 
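# NOTE: illustrative sketch only; the interface names below are hypothetical
# and not produced by this driver. _setup_server() below finds the host-side
# veth that Docker plugged into the Linux bridge by diffing two
# _get_veth_state() snapshots with _get_corresponding_veth():
#
#     before = ['veth1a2b3c4']
#     after = ['veth1a2b3c4', 'veth9f8e7d6']
#     list(set(after) ^ set(before))   # -> ['veth9f8e7d6']
#
# Exactly one new interface is expected; any other count raises
# ManilaException, because the container could not then be unambiguously
# re-plugged into the OVS bridge by _connect_to_network().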
@utils.synchronized("veth-lock", external=True) def _setup_server(self, network_info, metadata=None): msg = "Creating share server '%s'." server_id = self._get_container_name(network_info["server_id"]) LOG.debug(msg, server_id) veths_before = self._get_veth_state() try: self.container.start_container(server_id) except Exception as e: raise exception.ManilaException(_("Cannot create container: %s") % e) veths_after = self._get_veth_state() veth = self._get_corresponding_veth(veths_before, veths_after) self._connect_to_network(server_id, network_info, veth) LOG.info("Container %s was created.", server_id) return {"id": network_info["server_id"]} def _delete_export_and_umount_storage( self, share, server_id, share_name, ignore_errors=False): self._get_helper(share).delete_share(server_id, share_name, ignore_errors=ignore_errors) self.container.execute( server_id, ["umount", "/shares/%s" % share_name], ignore_errors=ignore_errors ) # (aovchinnikov): bug 1621784 manifests itself here as well as in # storage helper. There is a chance that we won't be able to remove # this directory, despite the fact that it is not shared anymore and # already contains nothing. In such case the driver should not fail # share deletion, but issue a warning. self.container.execute( server_id, ["rm", "-fR", "/shares/%s" % share_name], ignore_errors=True ) def _create_export_and_mount_storage(self, share, server_id, share_name): self.container.execute( server_id, ["mkdir", "-m", "750", "/shares/%s" % share_name] ) lv_device = self.storage._get_lv_device(share_name) self.container.execute( server_id, ["mount", lv_device, "/shares/%s" % share_name] ) location = self._get_helper(share).create_share(server_id) return location def manage_existing_with_server( self, share, driver_options, share_server=None): if not share_server and self.driver_handles_share_servers: raise exception.ShareBackendException( "A share server object is needed to manage a share in this " "driver mode of operation.") server_id = self._get_container_name(share_server["id"]) share_name = self._get_share_name(share) size = int(math.ceil(float(self.storage.get_size(share_name)))) self._delete_export_and_umount_storage(share, server_id, share_name) new_share_name = share.share_id self.storage.rename_storage(share_name, new_share_name) location = self._create_export_and_mount_storage( share, server_id, new_share_name) result = {'size': size, 'export_locations': [location]} LOG.info("Successfully managed share %(share)s, returning %(data)s", {'share': share.id, 'data': result}) return result def unmanage_with_server(self, share, share_server=None): pass def get_share_server_network_info( self, context, share_server, identifier, driver_options): name = self._get_correct_container_old_name(identifier) return [self.container.fetch_container_address(name, "inet")] def manage_server(self, context, share_server, identifier, driver_options): new_name = self._get_container_name(share_server['id']) old_name = self._get_correct_container_old_name(identifier) self.container.rename_container(old_name, new_name) return new_name, {'id': share_server['id']} def unmanage_server(self, server_details, security_services=None): pass def _get_correct_container_old_name(self, name): # Check if the container with the given name exists, else return # the name based on the driver template if not self.container.container_exists(name): return self._get_container_name(name) return name 
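# Illustrative note, not part of the upstream module: container names are
# derived from share server UUIDs by _get_container_name(). A minimal sketch
# with a hypothetical UUID:
#
#     server_id = '0fb90996-5b4b-4e78-a036-1f34ba1fc1b9'
#     'manila_%s' % server_id.replace('-', '_')
#     # -> 'manila_0fb90996_5b4b_4e78_a036_1f34ba1fc1b9'
#
# When managing a pre-existing server, _get_correct_container_old_name()
# keeps the given identifier if a container with that literal name exists,
# and otherwise falls back to this template.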
manila-10.0.0/manila/share/drivers/container/container_helper.py0000664000175000017500000001716513656750227025000 0ustar zuulzuul00000000000000# Copyright (c) 2016 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re import uuid from oslo_log import log from oslo_utils import excutils from manila import exception from manila.i18n import _ from manila.share import driver LOG = log.getLogger(__name__) class DockerExecHelper(driver.ExecuteMixin): def __init__(self, *args, **kwargs): self.configuration = kwargs.pop("configuration", None) super(DockerExecHelper, self).__init__(*args, **kwargs) self.init_execute_mixin() def start_container(self, name=None): name = name or "".join(["manila_cifs_docker_container", str(uuid.uuid1()).replace("-", "_")]) image_name = self.configuration.container_image_name LOG.debug("Starting container from image %s.", image_name) # (aovchinnikov): --privileged is required for both samba and # nfs-ganesha to actually allow access to shared folders. # # (aovchinnikov): To actually make docker container mount a # logical volume created after container start-up to some location # inside it, we must share entire /dev with it. While seemingly # dangerous it is not and moreover this is apparently the only sane # way to do it. The reason is when a logical volume gets created # several new things appear in /dev: a new /dev/dm-X and a symlink # in /dev/volume_group_name pointing to /dev/dm-X. But to be able # to interact with /dev/dm-X, it must be already present inside # the container's /dev i.e. it must have been -v shared during # container start-up. So we should either precreate an unknown # number of /dev/dm-Xs (one per LV), share them all and hope # for the best or share the entire /dev and hope for the best. # # The risk of allowing a container having access to entire host's # /dev is not as big as it seems: as long as actual share providers # are invulnerable this does not pose any extra risks. If, however, # share providers contain vulnerabilities then the driver does not # provide any more possibilities for an exploitation than other # first-party drivers. cmd = ["docker", "run", "-d", "-i", "-t", "--privileged", "-v", "/dev:/dev", "--name=%s" % name, "-v", "/tmp/shares:/shares", image_name] try: result = self._inner_execute(cmd) except (exception.ProcessExecutionError, OSError): raise exception.ShareBackendException( msg="Container %s has failed to start." % name) LOG.info("A container has been successfully started! Its id is " "%s.", result[0].rstrip('\n')) def stop_container(self, name): LOG.debug("Stopping container %s.", name) try: self._inner_execute(["docker", "stop", name]) except (exception.ProcessExecutionError, OSError): raise exception.ShareBackendException( msg="Container %s has failed to stop properly." 
% name) LOG.info("Container %s is successfully stopped.", name) def execute(self, name=None, cmd=None, ignore_errors=False): if name is None: raise exception.ManilaException(_("Container name not specified.")) if cmd is None or (type(cmd) is not list): raise exception.ManilaException(_("Missing or malformed command.")) LOG.debug("Executing inside a container %s.", name) cmd = ["docker", "exec", "-i", name] + cmd result = self._inner_execute(cmd, ignore_errors=ignore_errors) return result def _inner_execute(self, cmd, ignore_errors=False): LOG.debug("Executing command: %s.", " ".join(cmd)) try: result = self._execute(*cmd, run_as_root=True) except (exception.ProcessExecutionError, OSError) as e: with excutils.save_and_reraise_exception( reraise=not ignore_errors): LOG.warning("Failed to run command %(cmd)s due to " "%(reason)s.", {'cmd': cmd, 'reason': e}) else: LOG.debug("Execution result: %s.", result) return result def fetch_container_address(self, name, address_family="inet6"): result = self.execute( name, ["ip", "-oneline", "-family", address_family, "address", "show", "scope", "global", "dev", "eth0"], ) address_w_prefix = result[0].split()[3] address = address_w_prefix.split('/')[0] return address def rename_container(self, name, new_name): veth_name = self.find_container_veth(name) if not veth_name: raise exception.ManilaException( _("Could not find OVS information related to " "container %s.") % name) try: self._inner_execute(["docker", "rename", name, new_name]) except (exception.ProcessExecutionError, OSError): raise exception.ShareBackendException( msg="Could not rename container %s." % name) cmd = ["ovs-vsctl", "set", "interface", veth_name, "external-ids:manila-container=%s" % new_name] try: self._inner_execute(cmd) except (exception.ProcessExecutionError, OSError): try: self._inner_execute(["docker", "rename", new_name, name]) except (exception.ProcessExecutionError, OSError): msg = _("Could not rename back container %s.") % name LOG.exception(msg) raise exception.ShareBackendException( msg="Could not update OVS information %s." % name) LOG.info("Container %s has been successfully renamed.", name) def find_container_veth(self, name): interfaces = self._execute("ovs-vsctl", "list", "interface", run_as_root=True)[0] veths = set(re.findall("veth[0-9a-zA-Z]{7}", interfaces)) manila_re = "manila-container=\".*\"" for veth in veths: try: iface_data = self._execute("ovs-vsctl", "list", "interface", veth, run_as_root=True)[0] except (exception.ProcessExecutionError, OSError) as e: LOG.debug("Error listing interface %(veth)s. " "Reason: %(reason)s", {'veth': veth, 'reason': e}) continue container_id = re.findall(manila_re, iface_data) if container_id == []: continue elif container_id[0].split("manila-container=")[-1].split( "manila_")[-1].strip('"') == name.split("manila_")[-1]: return veth def container_exists(self, name): result = self._execute("docker", "ps", "--no-trunc", "--format='{{.Names}}'", run_as_root=True)[0] for line in result.split('\n'): if name == line.strip("'"): return True return False manila-10.0.0/manila/share/drivers/glusterfs/0000775000175000017500000000000013656750362021127 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/glusterfs/layout_volume.py0000664000175000017500000006277313656750227024424 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """GlusterFS volume mapped share layout.""" import os import random import re import shutil import string import tempfile import xml.etree.cElementTree as etree from oslo_config import cfg from oslo_log import log import six from manila import exception from manila.i18n import _ from manila.share.drivers.glusterfs import common from manila.share.drivers.glusterfs import layout from manila import utils LOG = log.getLogger(__name__) glusterfs_volume_mapped_opts = [ cfg.ListOpt('glusterfs_servers', default=[], deprecated_name='glusterfs_targets', help='List of GlusterFS servers that can be used to create ' 'shares. Each GlusterFS server should be of the form ' '[remoteuser@], and they are assumed to ' 'belong to distinct Gluster clusters.'), cfg.StrOpt('glusterfs_volume_pattern', help='Regular expression template used to filter ' 'GlusterFS volumes for share creation. ' 'The regex template can optionally (ie. with support ' 'of the GlusterFS backend) contain the #{size} ' 'parameter which matches an integer (sequence of ' 'digits) in which case the value shall be interpreted as ' 'size of the volume in GB. Examples: ' r'"manila-share-volume-\d+$", ' r'"manila-share-volume-#{size}G-\d+$"; ' 'with matching volume names, respectively: ' '"manila-share-volume-12", "manila-share-volume-3G-13". ' 'In latter example, the number that matches "#{size}", ' 'that is, 3, is an indication that the size of volume ' 'is 3G.'), ] CONF = cfg.CONF CONF.register_opts(glusterfs_volume_mapped_opts) # The dict specifying named parameters # that can be used with glusterfs_volume_pattern # in #{} format. # For each of them we give regex pattern it matches # and a transformer function ('trans') for the matched # string value. # Currently we handle only #{size}. PATTERN_DICT = {'size': {'pattern': r'(?P\d+)', 'trans': int}} USER_MANILA_SHARE = 'user.manila-share' USER_CLONED_FROM = 'user.manila-cloned-from' UUID_RE = re.compile(r'\A[\da-f]{8}-([\da-f]{4}-){3}[\da-f]{12}\Z', re.I) class GlusterfsVolumeMappedLayout(layout.GlusterfsShareLayoutBase): _snapshots_are_supported = True def __init__(self, driver, *args, **kwargs): super(GlusterfsVolumeMappedLayout, self).__init__( driver, *args, **kwargs) self.gluster_used_vols = set() self.configuration.append_config_values( common.glusterfs_common_opts) self.configuration.append_config_values( glusterfs_volume_mapped_opts) self.gluster_nosnap_vols_dict = {} self.volume_pattern = self._compile_volume_pattern() self.volume_pattern_keys = self.volume_pattern.groupindex.keys() for srvaddr in self.configuration.glusterfs_servers: # format check for srvaddr self._glustermanager(srvaddr, False) self.glusterfs_versions = {} self.private_storage = kwargs.get('private_storage') def _compile_volume_pattern(self): """Compile a RegexObject from the config specified regex template. 
(cfg.glusterfs_volume_pattern) """ subdict = {} for key, val in PATTERN_DICT.items(): subdict[key] = val['pattern'] # Using templates with placeholder syntax #{} class CustomTemplate(string.Template): delimiter = '#' volume_pattern = CustomTemplate( self.configuration.glusterfs_volume_pattern).substitute( subdict) return re.compile(volume_pattern) def do_setup(self, context): """Setup the GlusterFS volumes.""" glusterfs_versions, exceptions = {}, {} for srvaddr in self.configuration.glusterfs_servers: try: glusterfs_versions[srvaddr] = self._glustermanager( srvaddr, False).get_gluster_version() except exception.GlusterfsException as exc: exceptions[srvaddr] = six.text_type(exc) if exceptions: for srvaddr, excmsg in exceptions.items(): LOG.error("'gluster version' failed on server " "%(server)s with: %(message)s", {'server': srvaddr, 'message': excmsg}) raise exception.GlusterfsException(_( "'gluster version' failed on servers %s") % ( ','.join(exceptions.keys()))) notsupp_servers = [] for srvaddr, vers in glusterfs_versions.items(): if common.numreduct(vers) < self.driver.GLUSTERFS_VERSION_MIN: notsupp_servers.append(srvaddr) if notsupp_servers: gluster_version_min_str = '.'.join( six.text_type(c) for c in self.driver.GLUSTERFS_VERSION_MIN) for srvaddr in notsupp_servers: LOG.error("GlusterFS version %(version)s on server " "%(server)s is not supported, " "minimum requirement: %(minvers)s", {'server': srvaddr, 'version': '.'.join(glusterfs_versions[srvaddr]), 'minvers': gluster_version_min_str}) raise exception.GlusterfsException(_( "Unsupported GlusterFS version on servers %(servers)s, " "minimum requirement: %(minvers)s") % { 'servers': ','.join(notsupp_servers), 'minvers': gluster_version_min_str}) self.glusterfs_versions = glusterfs_versions gluster_volumes_initial = set( self._fetch_gluster_volumes(filter_used=False)) if not gluster_volumes_initial: # No suitable volumes are found on the Gluster end. # Raise exception. msg = (_("Gluster backend does not provide any volume " "matching pattern %s" ) % self.configuration.glusterfs_volume_pattern) LOG.error(msg) raise exception.GlusterfsException(msg) LOG.info("Found %d Gluster volumes allocated for Manila.", len(gluster_volumes_initial)) self._check_mount_glusterfs() def _glustermanager(self, gluster_address, req_volume=True): """Create GlusterManager object for gluster_address.""" return common.GlusterManager( gluster_address, self.driver._execute, self.configuration.glusterfs_path_to_private_key, self.configuration.glusterfs_server_password, requires={'volume': req_volume}) def _share_manager(self, share): """Return GlusterManager object representing share's backend.""" gluster_address = self.private_storage.get(share['id'], 'volume') if gluster_address is None: return return self._glustermanager(gluster_address) def _fetch_gluster_volumes(self, filter_used=True): """Do a 'gluster volume list | grep '. Aggregate the results from all servers. Extract the named groups from the matching volume names using the specs given in PATTERN_DICT. Return a dict with keys of the form :/ and values being dicts that map names of named groups to their extracted value. 
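For illustration only (hypothetical server address and volume name):
with glusterfs_volume_pattern set to
r"manila-share-volume-#{size}G-\d+$", a volume named
"manila-share-volume-3G-13" hosted on 10.0.0.2 would be reported as

    {'10.0.0.2:/manila-share-volume-3G-13': {'size': 3}}

With filter_used=True, volumes whose user.manila-share option already
holds a share UUID are skipped.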
""" volumes_dict = {} for srvaddr in self.configuration.glusterfs_servers: gluster_mgr = self._glustermanager(srvaddr, False) if gluster_mgr.user: logmsg = ("Retrieving volume list " "on host %s") % gluster_mgr.host else: logmsg = ("Retrieving volume list") out, err = gluster_mgr.gluster_call('volume', 'list', log=logmsg) for volname in out.split("\n"): patmatch = self.volume_pattern.match(volname) if not patmatch: continue comp_vol = gluster_mgr.components.copy() comp_vol.update({'volume': volname}) gluster_mgr_vol = self._glustermanager(comp_vol) if filter_used: vshr = gluster_mgr_vol.get_vol_option( USER_MANILA_SHARE) or '' if UUID_RE.search(vshr): continue pattern_dict = {} for key in self.volume_pattern_keys: keymatch = patmatch.group(key) if keymatch is None: pattern_dict[key] = None else: trans = PATTERN_DICT[key].get('trans', lambda x: x) pattern_dict[key] = trans(keymatch) volumes_dict[gluster_mgr_vol.qualified] = pattern_dict return volumes_dict @utils.synchronized("glusterfs_native", external=False) def _pop_gluster_vol(self, size=None): """Pick an unbound volume. Do a _fetch_gluster_volumes() first to get the complete list of usable volumes. Keep only the unbound ones (ones that are not yet used to back a share). If size is given, try to pick one which has a size specification (according to the 'size' named group of the volume pattern), and its size is greater-than-or-equal to the given size. Return the volume chosen (in :/ format). """ voldict = self._fetch_gluster_volumes() # calculate the set of unused volumes unused_vols = set(voldict) - self.gluster_used_vols if not unused_vols: # No volumes available for use as share. Warn user. LOG.warning("No unused gluster volumes available for use as " "share! Create share won't be supported unless " "existing shares are deleted or some gluster " "volumes are created with names matching " "'glusterfs_volume_pattern'.") else: LOG.info("Number of gluster volumes in use: " "%(inuse-numvols)s. Number of gluster volumes " "available for use as share: %(unused-numvols)s", {'inuse-numvols': len(self.gluster_used_vols), 'unused-numvols': len(unused_vols)}) # volmap is the data structure used to categorize and sort # the unused volumes. It's a nested dictionary of structure # {: } # where is either an integer or None, # is a dictionary of structure {: } # where is a host name (IP address), is a list # of volumes (gluster addresses). volmap = {None: {}} # if both caller has specified size and 'size' occurs as # a parameter in the volume pattern... if size and 'size' in self.volume_pattern_keys: # then this function is used to extract the # size value for a given volume from the voldict... get_volsize = lambda vol: voldict[vol]['size'] # noqa: E731 else: # else just use a stub. get_volsize = lambda vol: None # noqa: E731 for vol in unused_vols: # For each unused volume, we extract the # and values with which it can be inserted # into the volmap, and conditionally perform # the insertion (with the condition being: once # caller specified size and a size indication was # found in the volume name, we require that the # indicated size adheres to caller's spec). volsize = get_volsize(vol) if not volsize or volsize >= size: hostmap = volmap.get(volsize) if not hostmap: hostmap = {} volmap[volsize] = hostmap host = self._glustermanager(vol).host hostvols = hostmap.get(host) if not hostvols: hostvols = [] hostmap[host] = hostvols hostvols.append(vol) if len(volmap) > 1: # volmap has keys apart from the default None, # ie. 
volumes with sensible and adherent size # indication have been found. Then pick the smallest # of the size values. chosen_size = sorted(n for n in volmap.keys() if n)[0] else: chosen_size = None chosen_hostmap = volmap[chosen_size] if not chosen_hostmap: msg = (_("Couldn't find a free gluster volume to use.")) LOG.error(msg) raise exception.GlusterfsException(msg) # From the hosts we choose randomly to tend towards # even distribution of share backing volumes among # Gluster clusters. chosen_host = random.choice(list(chosen_hostmap.keys())) # Within a host's volumes, choose alphabetically first, # to make it predictable. vol = sorted(chosen_hostmap[chosen_host])[0] self.gluster_used_vols.add(vol) return vol @utils.synchronized("glusterfs_native", external=False) def _push_gluster_vol(self, exp_locn): try: self.gluster_used_vols.remove(exp_locn) except KeyError: msg = (_("Couldn't find the share in used list.")) LOG.error(msg) raise exception.GlusterfsException(msg) def _wipe_gluster_vol(self, gluster_mgr): # Create a temporary mount. gluster_export = gluster_mgr.export tmpdir = tempfile.mkdtemp() try: common._mount_gluster_vol(self.driver._execute, gluster_export, tmpdir) except exception.GlusterfsException: shutil.rmtree(tmpdir, ignore_errors=True) raise # Delete the contents of a GlusterFS volume that is temporarily # mounted. # From GlusterFS version 3.7, two directories, '.trashcan' at the root # of the GlusterFS volume and 'internal_op' within the '.trashcan' # directory, are internally created when a GlusterFS volume is started. # GlusterFS does not allow unlink(2) of the two directories. So do not # delete the paths of the two directories, but delete their contents # along with the rest of the contents of the volume. srvaddr = gluster_mgr.host_access if common.numreduct(self.glusterfs_versions[srvaddr]) < (3, 7): cmd = ['find', tmpdir, '-mindepth', '1', '-delete'] else: ignored_dirs = map(lambda x: os.path.join(tmpdir, *x), [('.trashcan', ), ('.trashcan', 'internal_op')]) ignored_dirs = list(ignored_dirs) cmd = ['find', tmpdir, '-mindepth', '1', '!', '-path', ignored_dirs[0], '!', '-path', ignored_dirs[1], '-delete'] try: self.driver._execute(*cmd, run_as_root=True) except exception.ProcessExecutionError as exc: msg = (_("Error trying to wipe gluster volume. " "gluster_export: %(export)s, Error: %(error)s") % {'export': gluster_export, 'error': exc.stderr}) LOG.error(msg) raise exception.GlusterfsException(msg) finally: # Unmount. common._umount_gluster_vol(self.driver._execute, tmpdir) shutil.rmtree(tmpdir, ignore_errors=True) def create_share(self, context, share, share_server=None): """Create a share using GlusterFS volume. 1 Manila share = 1 GlusterFS volume. Pick an unused GlusterFS volume for use as a share. """ try: vol = self._pop_gluster_vol(share['size']) except exception.GlusterfsException: msg = ("Error creating share %(share_id)s", {'share_id': share['id']}) LOG.error(msg) raise gmgr = self._glustermanager(vol) export = self.driver._setup_via_manager( {'share': share, 'manager': gmgr}) gmgr.set_vol_option(USER_MANILA_SHARE, share['id']) self.private_storage.update(share['id'], {'volume': vol}) # TODO(deepakcs): Enable quota and set it to the share size. # For native protocol, the export_location should be of the form: # server:/volname LOG.info("export_location sent back from create_share: %s", export) return export def delete_share(self, context, share, share_server=None): """Delete a share on the GlusterFS volume. 1 Manila share = 1 GlusterFS volume. 
Put the gluster volume back in the available list. """ gmgr = self._share_manager(share) if not gmgr: # Share does not have a record in private storage. # It means create_share{,_from_snapshot} did not # succeed(*). In that case we should not obstruct # share deletion, so we just return doing nothing. # # (*) or we have a database corruption but then # basically does not matter what we do here return clone_of = gmgr.get_vol_option(USER_CLONED_FROM) or '' try: if UUID_RE.search(clone_of): # We take responsibility for the lifecycle # management of those volumes which were # created by us (as snapshot clones) ... gmgr.gluster_call('volume', 'delete', gmgr.volume) else: # ... for volumes that come from the pool, we return # them to the pool (after some purification rituals) self._wipe_gluster_vol(gmgr) gmgr.set_vol_option(USER_MANILA_SHARE, 'NONE') self._push_gluster_vol(gmgr.qualified) except exception.GlusterfsException: msg = ("Error during delete_share request for " "share %(share_id)s", {'share_id': share['id']}) LOG.error(msg) raise self.private_storage.delete(share['id']) # TODO(deepakcs): Disable quota. @staticmethod def _find_actual_backend_snapshot_name(gluster_mgr, snapshot): args = ('snapshot', 'list', gluster_mgr.volume, '--mode=script') out, err = gluster_mgr.gluster_call( *args, log=("Retrieving snapshot list")) snapgrep = list(filter(lambda x: snapshot['id'] in x, out.split("\n"))) if len(snapgrep) != 1: msg = (_("Failed to identify backing GlusterFS object " "for snapshot %(snap_id)s of share %(share_id)s: " "a single candidate was expected, %(found)d was found.") % {'snap_id': snapshot['id'], 'share_id': snapshot['share_id'], 'found': len(snapgrep)}) raise exception.GlusterfsException(msg) backend_snapshot_name = snapgrep[0] return backend_snapshot_name def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): old_gmgr = self._share_manager(snapshot['share_instance']) # Snapshot clone feature in GlusterFS server essential to support this # API is available in GlusterFS server versions 3.7 and higher. So do # a version check. vers = self.glusterfs_versions[old_gmgr.host_access] minvers = (3, 7) if common.numreduct(vers) < minvers: minvers_str = '.'.join(six.text_type(c) for c in minvers) vers_str = '.'.join(vers) msg = (_("GlusterFS version %(version)s on server %(server)s does " "not support creation of shares from snapshot. " "minimum requirement: %(minversion)s") % {'version': vers_str, 'server': old_gmgr.host, 'minversion': minvers_str}) LOG.error(msg) raise exception.GlusterfsException(msg) # Clone the snapshot. The snapshot clone, a new GlusterFS volume # would serve as a share. backend_snapshot_name = self._find_actual_backend_snapshot_name( old_gmgr, snapshot) volume = ''.join(['manila-', share['id']]) args_tuple = (('snapshot', 'activate', backend_snapshot_name, 'force', '--mode=script'), ('snapshot', 'clone', volume, backend_snapshot_name)) for args in args_tuple: out, err = old_gmgr.gluster_call( *args, log=("Creating share from snapshot")) # Get a manager for the new volume/share. 
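# (Illustrative only, with hypothetical identifiers: if the snapshotted
# share was backed by root@10.0.0.1:/manila-share-volume-12, the manager
# built below reuses the same user/host components but points at the
# snapshot clone volume named 'manila-' plus the new share id, for example
# root@10.0.0.1:/manila-4ba38ae2-..., which is then tagged and started by
# the argseq calls that follow.)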
comp_vol = old_gmgr.components.copy() comp_vol.update({'volume': volume}) gmgr = self._glustermanager(comp_vol) export = self.driver._setup_via_manager( {'share': share, 'manager': gmgr}, {'share': snapshot['share_instance'], 'manager': old_gmgr}) argseq = (('set', [USER_CLONED_FROM, snapshot['share_id']]), ('set', [USER_MANILA_SHARE, share['id']]), ('start', [])) for op, opargs in argseq: args = ['volume', op, gmgr.volume] + opargs gmgr.gluster_call(*args, log=("Creating share from snapshot")) self.gluster_used_vols.add(gmgr.qualified) self.private_storage.update(share['id'], {'volume': gmgr.qualified}) return export def create_snapshot(self, context, snapshot, share_server=None): """Creates a snapshot.""" gluster_mgr = self._share_manager(snapshot['share']) if gluster_mgr.qualified in self.gluster_nosnap_vols_dict: opret, operrno = -1, 0 operrstr = self.gluster_nosnap_vols_dict[gluster_mgr.qualified] else: args = ('--xml', 'snapshot', 'create', 'manila-' + snapshot['id'], gluster_mgr.volume) out, err = gluster_mgr.gluster_call( *args, log=("Retrieving volume info")) if not out: raise exception.GlusterfsException( 'gluster volume info %s: no data received' % gluster_mgr.volume ) outxml = etree.fromstring(out) opret = int(common.volxml_get(outxml, 'opRet')) operrno = int(common.volxml_get(outxml, 'opErrno')) operrstr = common.volxml_get(outxml, 'opErrstr', default=None) if opret == -1: vers = self.glusterfs_versions[gluster_mgr.host_access] if common.numreduct(vers) > (3, 6): # This logic has not yet been implemented in GlusterFS 3.6 if operrno == 0: self.gluster_nosnap_vols_dict[ gluster_mgr.qualified] = operrstr msg = _("Share %(share_id)s does not support snapshots: " "%(errstr)s.") % {'share_id': snapshot['share_id'], 'errstr': operrstr} LOG.error(msg) raise exception.ShareSnapshotNotSupported(msg) raise exception.GlusterfsException( _("Creating snapshot for share %(share_id)s failed " "with %(errno)d: %(errstr)s") % { 'share_id': snapshot['share_id'], 'errno': operrno, 'errstr': operrstr}) def delete_snapshot(self, context, snapshot, share_server=None): """Deletes a snapshot.""" gluster_mgr = self._share_manager(snapshot['share']) backend_snapshot_name = self._find_actual_backend_snapshot_name( gluster_mgr, snapshot) args = ('--xml', 'snapshot', 'delete', backend_snapshot_name, '--mode=script') out, err = gluster_mgr.gluster_call( *args, log=("Error deleting snapshot")) if not out: raise exception.GlusterfsException( _('gluster snapshot delete %s: no data received') % gluster_mgr.volume ) outxml = etree.fromstring(out) gluster_mgr.xml_response_check(outxml, args[1:]) def ensure_share(self, context, share, share_server=None): """Invoked to ensure that share is exported.""" gmgr = self._share_manager(share) self.gluster_used_vols.add(gmgr.qualified) gmgr.set_vol_option(USER_MANILA_SHARE, share['id']) # Debt... def manage_existing(self, share, driver_options): raise NotImplementedError() def unmanage(self, share): raise NotImplementedError() def extend_share(self, share, new_size, share_server=None): raise NotImplementedError() def shrink_share(self, share, new_size, share_server=None): raise NotImplementedError() manila-10.0.0/manila/share/drivers/glusterfs/__init__.py0000664000175000017500000002766313656750227023256 0ustar zuulzuul00000000000000# Copyright (c) 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Flat network GlusterFS Driver. Manila shares are subdirectories within a GlusterFS volume. The backend, a GlusterFS cluster, uses one of the two NFS servers, Gluster-NFS or NFS-Ganesha, based on a configuration option, to mediate access to the shares. NFS-Ganesha server supports NFSv3 and v4 protocols, while Gluster-NFS server supports only NFSv3 protocol. TODO(rraja): support SMB protocol. """ import re import socket import sys from oslo_config import cfg from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers import ganesha from manila.share.drivers.ganesha import utils as ganesha_utils from manila.share.drivers.glusterfs import layout from manila import utils GlusterfsManilaShare_opts = [ cfg.StrOpt('glusterfs_nfs_server_type', default='Gluster', help='Type of NFS server that mediate access to the Gluster ' 'volumes (Gluster or Ganesha).'), cfg.HostAddressOpt('glusterfs_ganesha_server_ip', help="Remote Ganesha server node's IP address."), cfg.StrOpt('glusterfs_ganesha_server_username', default='root', help="Remote Ganesha server node's username."), cfg.StrOpt('glusterfs_ganesha_server_password', secret=True, help="Remote Ganesha server node's login password. " "This is not required if 'glusterfs_path_to_private_key'" ' is configured.'), ] CONF = cfg.CONF CONF.register_opts(GlusterfsManilaShare_opts) NFS_EXPORT_DIR = 'nfs.export-dir' NFS_EXPORT_VOL = 'nfs.export-volumes' NFS_RPC_AUTH_ALLOW = 'nfs.rpc-auth-allow' NFS_RPC_AUTH_REJECT = 'nfs.rpc-auth-reject' class GlusterfsShareDriver(driver.ExecuteMixin, driver.GaneshaMixin, layout.GlusterfsShareDriverBase): """Execute commands relating to Shares.""" GLUSTERFS_VERSION_MIN = (3, 5) supported_layouts = ('layout_directory.GlusterfsDirectoryMappedLayout', 'layout_volume.GlusterfsVolumeMappedLayout') supported_protocols = ('NFS',) def __init__(self, *args, **kwargs): super(GlusterfsShareDriver, self).__init__(False, *args, **kwargs) self._helpers = {} self.configuration.append_config_values(GlusterfsManilaShare_opts) self.backend_name = self.configuration.safe_get( 'share_backend_name') or 'GlusterFS' self.nfs_helper = getattr( sys.modules[__name__], self.configuration.glusterfs_nfs_server_type + 'NFSHelper') def do_setup(self, context): # in order to do an initial instantialization of the helper self._get_helper() super(GlusterfsShareDriver, self).do_setup(context) def _setup_via_manager(self, share_manager, share_manager_parent=None): gluster_manager = share_manager['manager'] # TODO(csaba): This should be refactored into proper dispatch to helper if self.nfs_helper == GlusterNFSHelper and not gluster_manager.path: # default behavior of NFS_EXPORT_VOL is as if it were 'on' export_vol = gluster_manager.get_vol_option( NFS_EXPORT_VOL, boolean=True) if export_vol is False: raise exception.GlusterfsException( _("Gluster-NFS with volume layout should be used " "with `nfs.export-volumes = on`")) setting = [NFS_RPC_AUTH_REJECT, '*'] else: # gluster-nfs export of the whole volume must be prohibited # to not to defeat access control setting = 
[NFS_EXPORT_VOL, False] gluster_manager.set_vol_option(*setting) return self.nfs_helper(self._execute, self.configuration, gluster_manager=gluster_manager).get_export( share_manager['share']) def check_for_setup_error(self): pass def _update_share_stats(self): """Retrieve stats info from the GlusterFS volume.""" data = dict( storage_protocol='NFS', vendor_name='Red Hat', share_backend_name=self.backend_name, reserved_percentage=self.configuration.reserved_share_percentage) super(GlusterfsShareDriver, self)._update_share_stats(data) def get_network_allocations_number(self): return 0 def _get_helper(self, gluster_mgr=None): """Choose a protocol specific helper class.""" helper_class = self.nfs_helper if (self.nfs_helper == GlusterNFSHelper and gluster_mgr and not gluster_mgr.path): helper_class = GlusterNFSVolHelper helper = helper_class(self._execute, self.configuration, gluster_manager=gluster_mgr) helper.init_helper() return helper @property def supported_access_types(self): return self.nfs_helper.supported_access_types @property def supported_access_levels(self): return self.nfs_helper.supported_access_levels def _update_access_via_manager(self, gluster_mgr, context, share, add_rules, delete_rules, recovery=False, share_server=None): """Update access to the share.""" self._get_helper(gluster_mgr).update_access( '/', share, add_rules, delete_rules, recovery=recovery) class GlusterNFSHelper(ganesha.NASHelperBase): """Manage shares with Gluster-NFS server.""" supported_access_types = ('ip', ) supported_access_levels = (constants.ACCESS_LEVEL_RW, ) def __init__(self, execute, config_object, **kwargs): self.gluster_manager = kwargs.pop('gluster_manager') super(GlusterNFSHelper, self).__init__(execute, config_object, **kwargs) def get_export(self, share): return self.gluster_manager.export def _get_export_dir_dict(self): """Get the export entries of shares in the GlusterFS volume.""" export_dir = self.gluster_manager.get_vol_option( NFS_EXPORT_DIR) edh = {} if export_dir: # see # https://github.com/gluster/glusterfs # /blob/aa19909/xlators/nfs/server/src/nfs.c#L1582 # regarding the format of nfs.export-dir edl = export_dir.split(',') # parsing export_dir into a dict of {dir: [hostpec,..]..} # format r = re.compile(r'\A/(.*)\((.*)\)\Z') for ed in edl: d, e = r.match(ed).groups() edh[d] = e.split('|') return edh def update_access(self, base_path, share, add_rules, delete_rules, recovery=False): """Update access rules.""" existing_rules_set = set() # The name of the directory, which is exported as the share. export_dir = self.gluster_manager.path[1:] # Fetch the existing export entries as an export dictionary with the # exported directories and the list of client IP addresses authorized # to access them as key-value pairs. export_dir_dict = self._get_export_dir_dict() if export_dir in export_dir_dict: existing_rules_set = set(export_dir_dict[export_dir]) add_rules_set = {rule['access_to'] for rule in add_rules} delete_rules_set = {rule['access_to'] for rule in delete_rules} new_rules_set = ( (existing_rules_set | add_rules_set) - delete_rules_set) if new_rules_set: export_dir_dict[export_dir] = new_rules_set elif export_dir not in export_dir_dict: return else: export_dir_dict.pop(export_dir) # Reconstruct the export entries. 
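# For illustration (hypothetical share directories and client IPs): an
# export_dir_dict of
#     {'share-a': {'10.0.0.2', '10.0.0.1'}, 'share-b': {'10.0.0.3'}}
# is serialized below to
#     '/share-a(10.0.0.1|10.0.0.2),/share-b(10.0.0.3)'
# which is the same nfs.export-dir format that _get_export_dir_dict()
# parses back into a dict.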
if export_dir_dict: export_dirs_new = (",".join("/%s(%s)" % (d, "|".join(sorted(v))) for d, v in sorted(export_dir_dict.items()))) else: export_dirs_new = None self.gluster_manager.set_vol_option(NFS_EXPORT_DIR, export_dirs_new) class GlusterNFSVolHelper(GlusterNFSHelper): """Manage shares with Gluster-NFS server, volume mapped variant.""" def _get_vol_exports(self): export_vol = self.gluster_manager.get_vol_option( NFS_RPC_AUTH_ALLOW) return export_vol.split(',') if export_vol else [] def update_access(self, base_path, share, add_rules, delete_rules, recovery=False): """Update access rules.""" existing_rules_set = set(self._get_vol_exports()) add_rules_set = {rule['access_to'] for rule in add_rules} delete_rules_set = {rule['access_to'] for rule in delete_rules} new_rules_set = ( (existing_rules_set | add_rules_set) - delete_rules_set) if new_rules_set: argseq = ((NFS_RPC_AUTH_ALLOW, ','.join(sorted(new_rules_set))), (NFS_RPC_AUTH_REJECT, None)) else: argseq = ((NFS_RPC_AUTH_ALLOW, None), (NFS_RPC_AUTH_REJECT, '*')) for args in argseq: self.gluster_manager.set_vol_option(*args) class GaneshaNFSHelper(ganesha.GaneshaNASHelper): shared_data = {} def __init__(self, execute, config_object, **kwargs): self.gluster_manager = kwargs.pop('gluster_manager') if config_object.glusterfs_ganesha_server_ip: execute = ganesha_utils.SSHExecutor( config_object.glusterfs_ganesha_server_ip, 22, None, config_object.glusterfs_ganesha_server_username, password=config_object.glusterfs_ganesha_server_password, privatekey=config_object.glusterfs_path_to_private_key) else: execute = ganesha_utils.RootExecutor(execute) self.ganesha_host = config_object.glusterfs_ganesha_server_ip if not self.ganesha_host: self.ganesha_host = socket.gethostname() kwargs['tag'] = '-'.join(('GLUSTER', 'Ganesha', self.ganesha_host)) super(GaneshaNFSHelper, self).__init__(execute, config_object, **kwargs) def get_export(self, share): return ':/'.join((self.ganesha_host, share['name'] + "--")) def init_helper(self): @utils.synchronized(self.tag) def _init_helper(): if self.tag in self.shared_data: return True super(GaneshaNFSHelper, self).init_helper() self.shared_data[self.tag] = { 'ganesha': self.ganesha, 'export_template': self.export_template} return False if _init_helper(): tagdata = self.shared_data[self.tag] self.ganesha = tagdata['ganesha'] self.export_template = tagdata['export_template'] def _default_config_hook(self): """Callback to provide default export block.""" dconf = super(GaneshaNFSHelper, self)._default_config_hook() conf_dir = ganesha_utils.path_from(__file__, "conf") ganesha_utils.patch(dconf, self._load_conf_dir(conf_dir)) return dconf def _fsal_hook(self, base, share, access): """Callback to create FSAL subblock.""" return {"Hostname": self.gluster_manager.host, "Volume": self.gluster_manager.volume, "Volpath": self.gluster_manager.path} manila-10.0.0/manila/share/drivers/glusterfs/common.py0000664000175000017500000004141413656750227022775 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Common GlussterFS routines.""" import re import xml.etree.cElementTree as etree from oslo_config import cfg from oslo_log import log import six from manila import exception from manila.i18n import _ from manila.share.drivers.ganesha import utils as ganesha_utils LOG = log.getLogger(__name__) glusterfs_common_opts = [ cfg.StrOpt('glusterfs_server_password', secret=True, deprecated_name='glusterfs_native_server_password', help='Remote GlusterFS server node\'s login password. ' 'This is not required if ' '\'glusterfs_path_to_private_key\' is ' 'configured.'), cfg.StrOpt('glusterfs_path_to_private_key', deprecated_name='glusterfs_native_path_to_private_key', help='Path of Manila host\'s private SSH key file.'), ] CONF = cfg.CONF CONF.register_opts(glusterfs_common_opts) def _check_volume_presence(f): def wrapper(self, *args, **kwargs): if not self.components.get('volume'): raise exception.GlusterfsException( _("Gluster address does not have a volume component.")) return f(self, *args, **kwargs) return wrapper def volxml_get(xmlout, *paths, **kwargs): """Attempt to extract a value by a set of Xpaths from XML.""" for path in paths: value = xmlout.find(path) if value is not None: break if value is None: if 'default' in kwargs: return kwargs['default'] raise exception.InvalidShare( _("Volume query response XML has no value for any of " "the following Xpaths: %s") % ", ".join(paths)) return value.text class GlusterManager(object): """Interface with a GlusterFS volume.""" scheme = re.compile(r'\A(?:(?P[^:@/]+)@)?' r'(?P[^:@/]+)' r'(?::/(?P[^/]+)(?P/.*)?)?\Z') # See this about GlusterFS' convention for Boolean interpretation # of strings: # https://github.com/gluster/glusterfs/blob/v3.7.8/ # libglusterfs/src/common-utils.c#L1680-L1708 GLUSTERFS_TRUE_VALUES = ('ON', 'YES', 'TRUE', 'ENABLE', '1') GLUSTERFS_FALSE_VALUES = ('OFF', 'NO', 'FALSE', 'DISABLE', '0') @classmethod def parse(cls, address): """Parse address string into component dict.""" m = cls.scheme.search(address) if not m: raise exception.GlusterfsException( _('Invalid gluster address %s.') % address) return m.groupdict() def __getattr__(self, attr): if attr in self.components: return self.components[attr] raise AttributeError("'%(typ)s' object has no attribute '%(attr)s'" % {'typ': type(self).__name__, 'attr': attr}) def __init__(self, address, execf=None, path_to_private_key=None, remote_server_password=None, requires={}): """Initialize a GlusterManager instance. :param address: the Gluster URI (either string of [@][:/[/]] format or component dict with "user", "host", "volume", "path" keys). :param execf: executor function for management commands. :param path_to_private_key: path to private ssh key of remote server. :param remote_server_password: ssh password for remote server. :param requires: a dict mapping some of the component names to either True or False; having it specified, respectively, the presence or absence of the given component in the uri will be enforced. """ if isinstance(address, dict): tmp_addr = "" if address.get('user') is not None: tmp_addr = address.get('user') + '@' if address.get('host') is not None: tmp_addr += address.get('host') if address.get('volume') is not None: tmp_addr += ':/' + address.get('volume') if address.get('path') is not None: tmp_addr += address.get('path') self.components = self.parse(tmp_addr) # Verify that the original dictionary matches the parsed # dictionary. 
This will flag typos such as {'volume': 'vol/err'} # in the original dictionary as errors. Additionally, # extra keys will need to be flagged as an error. sanitized_address = {key: None for key in self.scheme.groupindex} sanitized_address.update(address) if sanitized_address != self.components: raise exception.GlusterfsException( _('Invalid gluster address %s.') % address) else: self.components = self.parse(address) for k, v in requires.items(): if v is None: continue if (self.components.get(k) is not None) != v: raise exception.GlusterfsException( _('Invalid gluster address %s.') % address) self.path_to_private_key = path_to_private_key self.remote_server_password = remote_server_password if execf: self.gluster_call = self.make_gluster_call(execf) @property def host_access(self): return '@'.join(filter(None, (self.user, self.host))) def _build_uri(self, base): u = base for sep, comp in ((':/', 'volume'), ('', 'path')): if self.components[comp] is None: break u = sep.join((u, self.components[comp])) return u @property def qualified(self): return self._build_uri(self.host_access) @property def export(self): if self.volume: return self._build_uri(self.host) def make_gluster_call(self, execf): """Execute a Gluster command locally or remotely.""" if self.user: gluster_execf = ganesha_utils.SSHExecutor( self.host, 22, None, self.user, password=self.remote_server_password, privatekey=self.path_to_private_key) else: gluster_execf = ganesha_utils.RootExecutor(execf) def _gluster_call(*args, **kwargs): logmsg = kwargs.pop('log', None) error_policy = kwargs.pop('error_policy', 'coerce') if (error_policy not in ('raw', 'coerce', 'suppress') and not isinstance(error_policy[0], int)): raise TypeError(_("undefined error_policy %s") % repr(error_policy)) try: return gluster_execf(*(('gluster',) + args), **kwargs) except exception.ProcessExecutionError as exc: if error_policy == 'raw': raise elif error_policy == 'coerce': pass elif (error_policy == 'suppress' or exc.exit_code in error_policy): return if logmsg: LOG.error("%s: GlusterFS instrumentation failed.", logmsg) raise exception.GlusterfsException( _("GlusterFS management command '%(cmd)s' failed " "with details as follows:\n%(details)s.") % { 'cmd': ' '.join(args), 'details': exc}) return _gluster_call def xml_response_check(self, xmlout, command, countpath=None): """Sanity check for GlusterFS XML response.""" commandstr = ' '.join(command) ret = {} for e in 'opRet', 'opErrno': ret[e] = int(volxml_get(xmlout, e)) if ret == {'opRet': -1, 'opErrno': 0}: raise exception.GlusterfsException(_( 'GlusterFS command %(command)s on volume %(volume)s failed' ) % {'volume': self.volume, 'command': command}) if list(ret.values()) != [0, 0]: errdct = {'volume': self.volume, 'command': commandstr, 'opErrstr': volxml_get(xmlout, 'opErrstr', default=None)} errdct.update(ret) raise exception.InvalidShare(_( 'GlusterFS command %(command)s on volume %(volume)s got ' 'unexpected response: ' 'opRet=%(opRet)s, opErrno=%(opErrno)s, opErrstr=%(opErrstr)s' ) % errdct) if not countpath: return count = volxml_get(xmlout, countpath) if count != '1': raise exception.InvalidShare( _('GlusterFS command %(command)s on volume %(volume)s got ' 'ambiguous response: ' '%(count)s records') % { 'volume': self.volume, 'command': commandstr, 'count': count}) def _get_vol_option_via_info(self, option): """Get the value of an option set on a GlusterFS volume via volinfo.""" args = ('--xml', 'volume', 'info', self.volume) out, err = self.gluster_call(*args, log=("retrieving volume info")) if 
not out: raise exception.GlusterfsException( 'gluster volume info %s: no data received' % self.volume ) volxml = etree.fromstring(out) self.xml_response_check(volxml, args[1:], './volInfo/volumes/count') for e in volxml.findall(".//option"): o, v = (volxml_get(e, a) for a in ('name', 'value')) if o == option: return v @_check_volume_presence def _get_vol_user_option(self, useropt): """Get the value of an user option set on a GlusterFS volume.""" option = '.'.join(('user', useropt)) return self._get_vol_option_via_info(option) @_check_volume_presence def _get_vol_regular_option(self, option): """Get the value of a regular option set on a GlusterFS volume.""" args = ('--xml', 'volume', 'get', self.volume, option) out, err = self.gluster_call(*args, check_exit_code=False) if not out: # all input is valid, but the option has not been set # (nb. some options do come by a null value, but some # don't even have that, see eg. cluster.nufa) return try: optxml = etree.fromstring(out) except Exception: # non-xml output indicates that GlusterFS backend does not support # 'vol get', we fall back to 'vol info' based retrieval (glusterfs # < 3.7). return self._get_vol_option_via_info(option) self.xml_response_check(optxml, args[1:], './volGetopts/count') # the Xpath has changed from first to second as of GlusterFS # 3.7.14 (see http://review.gluster.org/14931). return volxml_get(optxml, './volGetopts/Value', './volGetopts/Opt/Value') def get_vol_option(self, option, boolean=False): """Get the value of an option set on a GlusterFS volume.""" useropt = re.sub(r'\Auser\.', '', option) if option == useropt: value = self._get_vol_regular_option(option) else: value = self._get_vol_user_option(useropt) if not boolean or value is None: return value if value.upper() in self.GLUSTERFS_TRUE_VALUES: return True if value.upper() in self.GLUSTERFS_FALSE_VALUES: return False raise exception.GlusterfsException(_( "GlusterFS volume option on volume %(volume)s: " "%(option)s=%(value)s cannot be interpreted as Boolean") % { 'volume': self.volume, 'option': option, 'value': value}) @_check_volume_presence def set_vol_option(self, option, value, ignore_failure=False): value = {True: self.GLUSTERFS_TRUE_VALUES[0], False: self.GLUSTERFS_FALSE_VALUES[0]}.get(value, value) if value is None: args = ('reset', (option,)) else: args = ('set', (option, value)) policy = (1,) if ignore_failure else 'coerce' self.gluster_call( 'volume', args[0], self.volume, *args[1], error_policy=policy) def get_gluster_version(self): """Retrieve GlusterFS version. :returns: version (as tuple of strings, example: ('3', '6', '0beta2')) """ out, err = self.gluster_call('--version', log=("GlusterFS version query")) try: owords = out.split() if owords[0] != 'glusterfs': raise RuntimeError vers = owords[1].split('.') # provoke an exception if vers does not start with two numerals int(vers[0]) int(vers[1]) except Exception: raise exception.GlusterfsException( _("Cannot parse version info obtained from server " "%(server)s, version info: %(info)s") % {'server': self.host, 'info': out}) return vers def check_gluster_version(self, minvers): """Retrieve and check GlusterFS version. 
:param minvers: minimum version to require (given as tuple of integers, example: (3, 6)) """ vers = self.get_gluster_version() if numreduct(vers) < minvers: raise exception.GlusterfsException(_( "Unsupported GlusterFS version %(version)s on server " "%(server)s, minimum requirement: %(minvers)s") % { 'server': self.host, 'version': '.'.join(vers), 'minvers': '.'.join(six.text_type(c) for c in minvers)}) def numreduct(vers): """The numeric reduct of a tuple of strings. That is, applying an integer conversion map on the longest initial segment of vers which consists of numerals. """ numvers = [] for c in vers: try: numvers.append(int(c)) except ValueError: break return tuple(numvers) def _mount_gluster_vol(execute, gluster_export, mount_path, ensure=False): """Mount a GlusterFS volume at the specified mount path. :param execute: command execution function :param gluster_export: GlusterFS export to mount :param mount_path: path to mount at :param ensure: boolean to allow remounting a volume with a warning """ execute('mkdir', '-p', mount_path) command = ['mount', '-t', 'glusterfs', gluster_export, mount_path] try: execute(*command, run_as_root=True) except exception.ProcessExecutionError as exc: if ensure and 'already mounted' in exc.stderr: LOG.warning("%s is already mounted.", gluster_export) else: raise exception.GlusterfsException( 'Unable to mount Gluster volume' ) def _umount_gluster_vol(execute, mount_path): """Unmount a GlusterFS volume at the specified mount path. :param execute: command execution function :param mount_path: path where volume is mounted """ try: execute('umount', mount_path, run_as_root=True) except exception.ProcessExecutionError as exc: msg = (_("Unable to unmount gluster volume. " "mount_dir: %(mount_path)s, Error: %(error)s") % {'mount_path': mount_path, 'error': exc.stderr}) LOG.error(msg) raise exception.GlusterfsException(msg) def _restart_gluster_vol(gluster_mgr): """Restart a GlusterFS volume through its manager. :param gluster_mgr: GlusterManager instance """ # TODO(csaba): '--mode=script' ensures that the Gluster CLI runs in # script mode. This seems unnecessary as the Gluster CLI is # expected to run in non-interactive mode when the stdin is not # a terminal, as is the case below. But on testing, found the # behaviour of Gluster-CLI to be the contrary. Need to investigate # this odd-behaviour of Gluster-CLI. gluster_mgr.gluster_call( 'volume', 'stop', gluster_mgr.volume, '--mode=script', log=("stopping GlusterFS volume %s") % gluster_mgr.volume) gluster_mgr.gluster_call( 'volume', 'start', gluster_mgr.volume, log=("starting GlusterFS volume %s") % gluster_mgr.volume) manila-10.0.0/manila/share/drivers/glusterfs/glusterfs_native.py0000664000175000017500000002160713656750227025073 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ GlusterFS native protocol (glusterfs) driver for shares. Manila share is a GlusterFS volume. Unlike the generic driver, this does not use service VM approach. 
Instances directly talk with the GlusterFS backend storage pool. Instance use the 'glusterfs' protocol to mount the GlusterFS share. Access to the share is allowed via SSL Certificates. Only the instance which has the SSL trust established with the GlusterFS backend can mount and hence use the share. Supports working with multiple glusterfs volumes. """ import re from oslo_log import log from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.glusterfs import common from manila.share.drivers.glusterfs import layout from manila import utils LOG = log.getLogger(__name__) ACCESS_TYPE_CERT = 'cert' AUTH_SSL_ALLOW = 'auth.ssl-allow' CLIENT_SSL = 'client.ssl' NFS_EXPORT_VOL = 'nfs.export-volumes' SERVER_SSL = 'server.ssl' DYNAMIC_AUTH = 'server.dynamic-auth' class GlusterfsNativeShareDriver(driver.ExecuteMixin, layout.GlusterfsShareDriverBase): """GlusterFS native protocol (glusterfs) share driver. Executes commands relating to Shares. Supports working with multiple glusterfs volumes. API version history: 1.0 - Initial version. 1.1 - Support for working with multiple gluster volumes. """ GLUSTERFS_VERSION_MIN = (3, 6) _supported_access_levels = (constants.ACCESS_LEVEL_RW, ) _supported_access_types = (ACCESS_TYPE_CERT, ) supported_layouts = ('layout_volume.GlusterfsVolumeMappedLayout',) supported_protocols = ('GLUSTERFS',) def __init__(self, *args, **kwargs): super(GlusterfsNativeShareDriver, self).__init__( False, *args, **kwargs) self._helpers = None self.backend_name = self.configuration.safe_get( 'share_backend_name') or 'GlusterFS-Native' def _setup_via_manager(self, share_mgr, share_mgr_parent=None): # Enable gluster volumes for SSL access only. gluster_mgr = share_mgr['manager'] gluster_mgr_parent = (share_mgr_parent or {}).get('manager', None) ssl_allow_opt = (gluster_mgr_parent if gluster_mgr_parent else gluster_mgr).get_vol_option( AUTH_SSL_ALLOW) if not ssl_allow_opt: # Not having AUTH_SSL_ALLOW set is a problematic edge case. # - In GlusterFS 3.6, it implies that access is allowed to # none, including intra-service access, which causes # problems internally in GlusterFS # - In GlusterFS 3.7, it implies that access control is # disabled, which defeats the purpose of this driver -- # so to avoid these possibilities, we throw an error in this case. msg = (_("Option %(option)s is not defined on gluster volume. " "Volume: %(volname)s") % {'volname': gluster_mgr.volume, 'option': AUTH_SSL_ALLOW}) LOG.error(msg) raise exception.GlusterfsException(msg) gluster_actions = [] if gluster_mgr_parent: # The clone of the snapshot, a new volume, retains the authorized # access list of the snapshotted volume/share, which includes TLS # identities of the backend servers, Manila hosts and clients. # Retain the identities of the GlusterFS servers and Manila host, # and exclude those of the clients in the authorized access list of # the new volume. The TLS identities of GlusterFS servers are # determined as those that are prefixed by 'glusterfs-server'. # And the TLS identity of the Manila host is identified as the # one that has 'manila-host' as the prefix. # Wrt. 
GlusterFS' parsing of auth.ssl-allow, please see code from # https://github.com/gluster/glusterfs/blob/v3.6.2/ # xlators/protocol/auth/login/src/login.c#L80 # until end of gf_auth() function old_access_list = re.split('[ ,]', ssl_allow_opt) glusterfs_server_CN_pattern = r'\Aglusterfs-server' manila_host_CN_pattern = r'\Amanila-host' regex = re.compile( r'%(pattern1)s|%(pattern2)s' % { 'pattern1': glusterfs_server_CN_pattern, 'pattern2': manila_host_CN_pattern}) access_to = ','.join(filter(regex.match, old_access_list)) gluster_actions.append((AUTH_SSL_ALLOW, access_to)) for option, value in ( (NFS_EXPORT_VOL, False), (CLIENT_SSL, True), (SERVER_SSL, True) ): gluster_actions.append((option, value)) for action in gluster_actions: gluster_mgr.set_vol_option(*action) gluster_mgr.set_vol_option(DYNAMIC_AUTH, True, ignore_failure=True) # SSL enablement requires a fresh volume start # to take effect if gluster_mgr_parent: # in this case the volume is not started # yet (will only be started after this func # returns), so we have nothing to do here pass else: common._restart_gluster_vol(gluster_mgr) return gluster_mgr.export @utils.synchronized("glusterfs_native_access", external=False) def _update_access_via_manager(self, gluster_mgr, context, share, add_rules, delete_rules, recovery=False, share_server=None): """Update access rules, authorize SSL CNs (Common Names).""" # Fetch existing authorized CNs, the value of Gluster option # 'auth.ssl-allow' that is available as a comma separated string. # wrt. GlusterFS' parsing of auth.ssl-allow, please see code from # https://github.com/gluster/glusterfs/blob/v3.6.2/ # xlators/protocol/auth/login/src/login.c#L80 # until end of gf_auth() function ssl_allow_opt = gluster_mgr.get_vol_option(AUTH_SSL_ALLOW) existing_rules_set = set(re.split('[ ,]', ssl_allow_opt)) add_rules_set = {rule['access_to'] for rule in add_rules} for rule in add_rules_set: if re.search('[ ,]', rule): raise exception.GlusterfsException( _("Invalid 'access_to' '%s': common names used for " "GlusterFS authentication should not contain comma " "or whitespace.") % rule) delete_rules_set = {rule['access_to'] for rule in delete_rules} new_rules_set = ( (existing_rules_set | add_rules_set) - delete_rules_set) # Addition or removal of CNs in the authorized list through the # Gluster CLI, used by 'GlusterManager' objects, can only be done by # replacing the existing list with the newly modified list. ssl_allow_opt = ','.join(sorted(new_rules_set)) gluster_mgr.set_vol_option(AUTH_SSL_ALLOW, ssl_allow_opt) # When the Gluster option, DYNAMIC_AUTH is not enabled for the gluster # volume/manila share, the removal of CN of a client does not affect # the client's existing connection to the volume until the volume is # restarted. if delete_rules: dynauth = gluster_mgr.get_vol_option(DYNAMIC_AUTH, boolean=True) if not dynauth: common._restart_gluster_vol(gluster_mgr) def _update_share_stats(self): """Send stats info for the GlusterFS volume.""" data = dict( share_backend_name=self.backend_name, vendor_name='Red Hat', driver_version='1.1', storage_protocol='glusterfs', reserved_percentage=self.configuration.reserved_share_percentage) # We don't use a service mount to get stats data. # Instead we use glusterfs quota feature and use that to limit # the share to its expected share['size']. # TODO(deepakcs): Change below once glusterfs supports volume # specific stats via the gluster cli. 
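        # Until then, total and free capacity are reported as the literal
        # string 'unknown' rather than numeric values.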
data['total_capacity_gb'] = 'unknown' data['free_capacity_gb'] = 'unknown' super(GlusterfsNativeShareDriver, self)._update_share_stats(data) def get_network_allocations_number(self): return 0 manila-10.0.0/manila/share/drivers/glusterfs/layout.py0000664000175000017500000002361413656750227023024 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """GlusterFS share layouts. A share layout encapsulates a particular way of mapping GlusterFS entities to a share and utilizing them to back the share. """ import abc import errno from oslo_config import cfg from oslo_utils import importutils import six from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.ganesha import utils as ganesha_utils glusterfs_share_layout_opts = [ cfg.StrOpt( 'glusterfs_share_layout', help="Specifies GlusterFS share layout, that is, " "the method of associating backing GlusterFS " "resources to shares."), ] CONF = cfg.CONF CONF.register_opts(glusterfs_share_layout_opts) class GlusterfsShareDriverBase(driver.ShareDriver): LAYOUT_PREFIX = 'manila.share.drivers.glusterfs' supported_layouts = () supported_protocols = () _supported_access_types = () _supported_access_levels = () GLUSTERFS_VERSION_MIN = (0, 0) def __init__(self, *args, **kwargs): super(GlusterfsShareDriverBase, self).__init__(*args, **kwargs) self.configuration.append_config_values( glusterfs_share_layout_opts) layout_name = self.configuration.glusterfs_share_layout if not layout_name: layout_name = self.supported_layouts[0] if layout_name not in self.supported_layouts: raise exception.GlusterfsException( _('driver %(driver)s does not support %(layout)s layout') % {'driver': type(self).__name__, 'layout': layout_name}) self.layout = importutils.import_object( '.'.join((self.LAYOUT_PREFIX, layout_name)), self, **kwargs) # we determine snapshot support in our own scope, as # 1) the calculation based on parent method # redefinition does not work for us, as actual # glusterfs driver classes are subclassed from # *this* class, not from driver.ShareDriver # and they don't need to redefine snapshot # methods for themselves; # 2) snapshot support depends on choice of layout. self._snapshots_are_supported = getattr(self.layout, '_snapshots_are_supported', False) def _setup_via_manager(self, share_mgr, share_mgr_parent=None): """Callback for layout's `create_share` and `create_share_from_snapshot` :param share_mgr: a {'share': , 'manager': } dict where is the share created in `create_share` or `create_share_from_snapshot` and is a GlusterManager instance representing the GlusterFS resource allocated for it. :param gluster_mgr_parent: a {'share': , 'manager': } dict where is the original share of the snapshot used in `create_share_from_snapshot` and is a GlusterManager instance representing the GlusterFS resource allocated for it. :returns: export location for share_mgr['share']. 
""" @property def supported_access_levels(self): return self._supported_access_levels @property def supported_access_types(self): return self._supported_access_types def _access_rule_validator(self, abort): def validator(rule): return ganesha_utils.validate_access_rule( self.supported_access_types, self.supported_access_levels, rule, abort) return validator def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share. Driver supports 2 different cases in this method: 1. Recovery after error - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' are []. Driver should clear any existent access rules and apply all access rules for given share. This recovery is made at driver start up. 2. Adding/Deleting of several access rules - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' contain rules which should be added/deleted. Driver can ignore rules in 'access_rules' and apply only rules from 'add_rules' and 'delete_rules'. """ gluster_mgr = self.layout._share_manager(share) access_rules, add_rules, delete_rules = ( list(filter(self._access_rule_validator(abort), rules)) for ( rules, abort) in ((access_rules, True), (add_rules, True), (delete_rules, False))) # Recovery mode. if not (add_rules or delete_rules): ruleop, recovery = (access_rules, []), True else: ruleop, recovery = (add_rules, delete_rules), False self._update_access_via_manager(gluster_mgr, context, share, *ruleop, recovery=recovery) def _update_access_via_manager(self, gluster_mgr, context, share, add_rules, delete_rules, recovery=False, share_server=None): raise NotImplementedError() def do_setup(self, *a, **kw): return self.layout.do_setup(*a, **kw) @classmethod def _check_proto(cls, share): proto = share['share_proto'].upper() if proto not in cls.supported_protocols: msg = _("Share protocol %s is not supported.") % proto raise exception.ShareBackendException(msg=msg) def create_share(self, context, share, *a, **kw): self._check_proto(share) return self.layout.create_share(context, share, *a, **kw) def create_share_from_snapshot(self, context, share, *a, **kw): self._check_proto(share) return self.layout.create_share_from_snapshot(context, share, *a, **kw) def create_snapshot(self, *a, **kw): return self.layout.create_snapshot(*a, **kw) def delete_share(self, *a, **kw): return self.layout.delete_share(*a, **kw) def delete_snapshot(self, *a, **kw): return self.layout.delete_snapshot(*a, **kw) def ensure_share(self, *a, **kw): return self.layout.ensure_share(*a, **kw) def manage_existing(self, *a, **kw): return self.layout.manage_existing(*a, **kw) def unmanage(self, *a, **kw): return self.layout.unmanage(*a, **kw) def extend_share(self, *a, **kw): return self.layout.extend_share(*a, **kw) def shrink_share(self, *a, **kw): return self.layout.shrink_share(*a, **kw) def _update_share_stats(self, data={}): try: data.update(self.layout._update_share_stats()) except NotImplementedError: pass super(GlusterfsShareDriverBase, self)._update_share_stats(data) @six.add_metaclass(abc.ABCMeta) class GlusterfsShareLayoutBase(object): """Base class for share layouts.""" def __init__(self, driver, *args, **kwargs): self.driver = driver self.configuration = kwargs.get('configuration') def _check_mount_glusterfs(self): """Checks if mount.glusterfs(8) is available.""" try: self.driver._execute('mount.glusterfs', check_exit_code=False) except OSError as exc: if exc.errno == errno.ENOENT: raise exception.GlusterfsException( 
_('mount.glusterfs is not installed.')) else: raise @abc.abstractmethod def _share_manager(self, share): """Return GlusterManager object representing share's backend.""" @abc.abstractmethod def do_setup(self, context): """Any initialization the share driver does while starting.""" @abc.abstractmethod def create_share(self, context, share, share_server=None): """Is called to create share.""" @abc.abstractmethod def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create share from snapshot.""" @abc.abstractmethod def create_snapshot(self, context, snapshot, share_server=None): """Is called to create snapshot.""" @abc.abstractmethod def delete_share(self, context, share, share_server=None): """Is called to remove share.""" @abc.abstractmethod def delete_snapshot(self, context, snapshot, share_server=None): """Is called to remove snapshot.""" @abc.abstractmethod def ensure_share(self, context, share, share_server=None): """Invoked to ensure that share is exported.""" @abc.abstractmethod def manage_existing(self, share, driver_options): """Brings an existing share under Manila management.""" @abc.abstractmethod def unmanage(self, share): """Removes the specified share from Manila management.""" @abc.abstractmethod def extend_share(self, share, new_size, share_server=None): """Extends size of existing share.""" @abc.abstractmethod def shrink_share(self, share, new_size, share_server=None): """Shrinks size of existing share.""" def _update_share_stats(self): raise NotImplementedError() manila-10.0.0/manila/share/drivers/glusterfs/layout_directory.py0000664000175000017500000002357713656750227025120 0ustar zuulzuul00000000000000# Copyright (c) 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """GlusterFS directory mapped share layout.""" import math import os from oslo_config import cfg from oslo_log import log import six import xml.etree.cElementTree as etree from manila import exception from manila.i18n import _ from manila.share.drivers.glusterfs import common from manila.share.drivers.glusterfs import layout from manila import utils LOG = log.getLogger(__name__) glusterfs_directory_mapped_opts = [ cfg.StrOpt('glusterfs_target', help='Specifies the GlusterFS volume to be mounted on the ' 'Manila host. 
It is of the form ' '[remoteuser@]:.'), cfg.StrOpt('glusterfs_mount_point_base', default='$state_path/mnt', help='Base directory containing mount points for Gluster ' 'volumes.'), ] CONF = cfg.CONF CONF.register_opts(glusterfs_directory_mapped_opts) class GlusterfsDirectoryMappedLayout(layout.GlusterfsShareLayoutBase): def __init__(self, driver, *args, **kwargs): super(GlusterfsDirectoryMappedLayout, self).__init__( driver, *args, **kwargs) self.configuration.append_config_values( common.glusterfs_common_opts) self.configuration.append_config_values( glusterfs_directory_mapped_opts) def _glustermanager(self, gluster_address): """Create GlusterManager object for gluster_address.""" return common.GlusterManager( gluster_address, self.driver._execute, self.configuration.glusterfs_path_to_private_key, self.configuration.glusterfs_server_password, requires={'volume': True}) def do_setup(self, context): """Prepares the backend and appropriate NAS helpers.""" if not self.configuration.glusterfs_target: raise exception.GlusterfsException( _('glusterfs_target configuration that specifies the GlusterFS' ' volume to be mounted on the Manila host is not set.')) self.gluster_manager = self._glustermanager( self.configuration.glusterfs_target) self.gluster_manager.check_gluster_version( self.driver.GLUSTERFS_VERSION_MIN) self._check_mount_glusterfs() # enable quota options of a GlusteFS volume to allow # creation of shares of specific size args = ('volume', 'quota', self.gluster_manager.volume, 'enable') try: self.gluster_manager.gluster_call(*args) except exception.GlusterfsException: if (self.gluster_manager. get_vol_option('features.quota')) != 'on': LOG.exception("Error in tuning GlusterFS volume to enable " "creation of shares of specific size.") raise self._ensure_gluster_vol_mounted() def _share_manager(self, share): comp_path = self.gluster_manager.components.copy() comp_path.update({'path': '/' + share['name']}) return self._glustermanager(comp_path) def _get_mount_point_for_gluster_vol(self): """Return mount point for the GlusterFS volume.""" return os.path.join(self.configuration.glusterfs_mount_point_base, self.gluster_manager.volume) def _ensure_gluster_vol_mounted(self): """Ensure GlusterFS volume is native-mounted on Manila host.""" mount_path = self._get_mount_point_for_gluster_vol() try: common._mount_gluster_vol(self.driver._execute, self.gluster_manager.export, mount_path, ensure=True) except exception.GlusterfsException: LOG.exception('Could not mount the Gluster volume %s', self.gluster_manager.volume) raise def _get_local_share_path(self, share): """Determine mount path of the GlusterFS volume in the Manila host.""" local_vol_path = self._get_mount_point_for_gluster_vol() if not os.access(local_vol_path, os.R_OK): raise exception.GlusterfsException('share path %s does not exist' % local_vol_path) return os.path.join(local_vol_path, share['name']) def _update_share_stats(self): """Retrieve stats info from the GlusterFS volume.""" # sanity check for gluster ctl mount smpb = os.stat(self.configuration.glusterfs_mount_point_base) smp = os.stat(self._get_mount_point_for_gluster_vol()) if smpb.st_dev == smp.st_dev: raise exception.GlusterfsException( _("GlusterFS control mount is not available") ) smpv = os.statvfs(self._get_mount_point_for_gluster_vol()) return {'total_capacity_gb': (smpv.f_blocks * smpv.f_frsize) >> 30, 'free_capacity_gb': (smpv.f_bavail * smpv.f_frsize) >> 30} def create_share(self, ctx, share, share_server=None): """Create a sub-directory/share in the GlusterFS 
volume.""" # probe into getting a NAS protocol helper for the share in order # to facilitate early detection of unsupported protocol type local_share_path = self._get_local_share_path(share) cmd = ['mkdir', local_share_path] try: self.driver._execute(*cmd, run_as_root=True) self._set_directory_quota(share, share['size']) except Exception as exc: if isinstance(exc, exception.ProcessExecutionError): exc = exception.GlusterfsException(exc) if isinstance(exc, exception.GlusterfsException): self._cleanup_create_share(local_share_path, share['name']) LOG.error('Unable to create share %s', share['name']) raise exc comp_share = self.gluster_manager.components.copy() comp_share['path'] = '/' + share['name'] export_location = self.driver._setup_via_manager( {'share': share, 'manager': self._glustermanager(comp_share)}) return export_location def _cleanup_create_share(self, share_path, share_name): """Cleanup share that errored out during its creation.""" if os.path.exists(share_path): cmd = ['rm', '-rf', share_path] try: self.driver._execute(*cmd, run_as_root=True) except exception.ProcessExecutionError as exc: LOG.error('Cannot cleanup share, %s, that errored out ' 'during its creation, but exists in GlusterFS ' 'volume.', share_name) raise exception.GlusterfsException(exc) def delete_share(self, context, share, share_server=None): """Remove a sub-directory/share from the GlusterFS volume.""" local_share_path = self._get_local_share_path(share) cmd = ['rm', '-rf', local_share_path] try: self.driver._execute(*cmd, run_as_root=True) except exception.ProcessExecutionError: LOG.exception('Unable to delete share %s', share['name']) raise def ensure_share(self, context, share, share_server=None): pass def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): raise NotImplementedError def create_snapshot(self, context, snapshot, share_server=None): raise NotImplementedError def delete_snapshot(self, context, snapshot, share_server=None): raise NotImplementedError def manage_existing(self, share, driver_options): raise NotImplementedError def unmanage(self, share): raise NotImplementedError def extend_share(self, share, new_size, share_server=None): """Extend a sub-directory/share in the GlusterFS volume.""" self._set_directory_quota(share, new_size) def shrink_share(self, share, new_size, share_server=None): """Shrink a sub-directory/share in the GlusterFS volume.""" usage = self._get_directory_usage(share) consumed_limit = int(math.ceil(usage)) if consumed_limit > new_size: raise exception.ShareShrinkingPossibleDataLoss( share_id=share['id']) self._set_directory_quota(share, new_size) def _set_directory_quota(self, share, new_size): sizestr = six.text_type(new_size) + 'GB' share_dir = '/' + share['name'] args = ('volume', 'quota', self.gluster_manager.volume, 'limit-usage', share_dir, sizestr) try: self.gluster_manager.gluster_call(*args) except exception.GlusterfsException: LOG.error('Unable to set quota share %s', share['name']) raise def _get_directory_usage(self, share): share_dir = '/' + share['name'] args = ('--xml', 'volume', 'quota', self.gluster_manager.volume, 'list', share_dir) try: out, err = self.gluster_manager.gluster_call(*args) except exception.GlusterfsException: LOG.error('Unable to get quota share %s', share['name']) raise volxml = etree.fromstring(out) usage_byte = volxml.find('./volQuota/limit/used_space').text usage = utils.translate_string_size_to_float(usage_byte) return usage 
manila-10.0.0/manila/share/drivers/glusterfs/conf/0000775000175000017500000000000013656750362022054 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/glusterfs/conf/10-glusterfs-export-template.conf0000664000175000017500000000020613656750227030305 0ustar zuulzuul00000000000000EXPORT { FSAL { Name = GLUSTER; Hostname = @config; Volume = @config; Volpath = @runtime; } } manila-10.0.0/manila/share/drivers/huawei/0000775000175000017500000000000013656750362020373 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/huawei/__init__.py0000664000175000017500000000000013656750227022472 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/huawei/huawei_nas.py0000664000175000017500000002646213656750227023102 0ustar zuulzuul00000000000000# Copyright (c) 2014 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Huawei Nas Driver for Huawei storage arrays.""" from xml.etree import ElementTree as ET from oslo_config import cfg from oslo_log import log from oslo_utils import importutils from manila import exception from manila.i18n import _ from manila.share import driver HUAWEI_UNIFIED_DRIVER_REGISTRY = { 'V3': 'manila.share.drivers.huawei.v3.connection.V3StorageConnection', } huawei_opts = [ cfg.StrOpt('manila_huawei_conf_file', default='/etc/manila/manila_huawei_conf.xml', help='The configuration file for the Manila Huawei driver.')] CONF = cfg.CONF CONF.register_opts(huawei_opts) LOG = log.getLogger(__name__) class HuaweiNasDriver(driver.ShareDriver): """Huawei Share Driver. Executes commands relating to Shares. Driver version history:: 1.0 - Initial version. 1.1 - Add shrink share. Add extend share. Add manage share. Add share level(ro). Add smartx capabilities. Support multi pools in one backend. 1.2 - Add share server support. Add ensure share. Add QoS support. Add create share from snapshot. 1.3 - Add manage snapshot. Support reporting disk type of pool. Add replication support. 
""" def __init__(self, *args, **kwargs): """Do initialization.""" LOG.debug("Enter into init function of Huawei Driver.") super(HuaweiNasDriver, self).__init__((True, False), *args, **kwargs) if not self.configuration: raise exception.InvalidInput(reason=_( "Huawei driver configuration missing.")) self.configuration.append_config_values(huawei_opts) kwargs.pop('configuration') self.plugin = importutils.import_object(self.get_backend_driver(), self.configuration, **kwargs) def check_for_setup_error(self): """Returns an error if prerequisites aren't met.""" self.plugin.check_conf_file() self.plugin.check_service() def get_backend_driver(self): filename = self.configuration.manila_huawei_conf_file try: tree = ET.parse(filename) root = tree.getroot() except Exception as err: message = (_('Read Huawei config file(%(filename)s)' ' for Manila error: %(err)s') % {'filename': filename, 'err': err}) LOG.error(message) raise exception.InvalidInput(reason=message) product = root.findtext('Storage/Product') backend_driver = HUAWEI_UNIFIED_DRIVER_REGISTRY.get(product) if backend_driver is None: raise exception.InvalidInput( reason=_('Product %s is not supported. Product ' 'must be set to V3.') % product) return backend_driver def do_setup(self, context): """Any initialization the huawei nas driver does while starting.""" LOG.debug("Do setup the plugin.") self.plugin.connect() def create_share(self, context, share, share_server=None): """Create a share.""" LOG.debug("Create a share.") location = self.plugin.create_share(share, share_server) return location def extend_share(self, share, new_size, share_server=None): LOG.debug("Extend a share.") self.plugin.extend_share(share, new_size, share_server) def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Create a share from snapshot.""" LOG.debug("Create a share from snapshot %s.", snapshot['snapshot_id']) location = self.plugin.create_share_from_snapshot(share, snapshot) return location def shrink_share(self, share, new_size, share_server=None): """Shrinks size of existing share.""" LOG.debug("Shrink a share.") self.plugin.shrink_share(share, new_size, share_server) def delete_share(self, context, share, share_server=None): """Delete a share.""" LOG.debug("Delete a share.") self.plugin.delete_share(share, share_server) def create_snapshot(self, context, snapshot, share_server=None): """Create a snapshot.""" LOG.debug("Create a snapshot.") snapshot_name = self.plugin.create_snapshot(snapshot, share_server) return {'provider_location': snapshot_name} def delete_snapshot(self, context, snapshot, share_server=None): """Delete a snapshot.""" LOG.debug("Delete a snapshot.") self.plugin.delete_snapshot(snapshot, share_server) def ensure_share(self, context, share, share_server=None): """Ensure that share is exported.""" LOG.debug("Ensure share.") location = self.plugin.ensure_share(share, share_server) return location def allow_access(self, context, share, access, share_server=None): """Allow access to the share.""" LOG.debug("Allow access.") self.plugin.allow_access(share, access, share_server) def deny_access(self, context, share, access, share_server=None): """Deny access to the share.""" LOG.debug("Deny access.") self.plugin.deny_access(share, access, share_server) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules list.""" LOG.debug("Update access.") self.plugin.update_access(share, access_rules, add_rules, delete_rules, share_server) def 
get_pool(self, share): """Return pool name where the share resides on.""" LOG.debug("Get pool.") return self.plugin.get_pool(share) def get_network_allocations_number(self): """Get number of network interfaces to be created.""" LOG.debug("Get network allocations number.") return self.plugin.get_network_allocations_number() def manage_existing(self, share, driver_options): """Manage existing share.""" LOG.debug("Manage existing share to manila.") share_size, location = self.plugin.manage_existing(share, driver_options) return {'size': share_size, 'export_locations': location} def manage_existing_snapshot(self, snapshot, driver_options): """Manage existing snapshot.""" LOG.debug("Manage existing snapshot to manila.") snapshot_name = self.plugin.manage_existing_snapshot(snapshot, driver_options) return {'provider_location': snapshot_name} def _update_share_stats(self): """Retrieve status info from share group.""" backend_name = self.configuration.safe_get('share_backend_name') data = dict( share_backend_name=backend_name or 'HUAWEI_NAS_Driver', vendor_name='Huawei', driver_version='1.3', storage_protocol='NFS_CIFS', qos=True, total_capacity_gb=0.0, free_capacity_gb=0.0, snapshot_support=self.plugin.snapshot_support, create_share_from_snapshot_support=self.plugin.snapshot_support, revert_to_snapshot_support=self.plugin.snapshot_support, ) # huawei array doesn't support snapshot replication, so driver can't # create replicated snapshot, this's not fit the requirement of # replication feature. # to avoid this problem, we specify huawei driver can't support # snapshot and replication both, as a workaround. if not data['snapshot_support'] and self.plugin.replication_support: data['replication_type'] = 'dr' self.plugin.update_share_stats(data) super(HuaweiNasDriver, self)._update_share_stats(data) def _setup_server(self, network_info, metadata=None): """Set up share server with given network parameters.""" return self.plugin.setup_server(network_info, metadata) def _teardown_server(self, server_details, security_services=None): """Teardown share server.""" return self.plugin.teardown_server(server_details, security_services) def create_replica(self, context, replica_list, new_replica, access_rules, replica_snapshots, share_server=None): """Replicate the active replica to a new replica on this backend.""" return self.plugin.create_replica(context, replica_list, new_replica, access_rules, replica_snapshots, share_server) def update_replica_state(self, context, replica_list, replica, access_rules, replica_snapshots, share_server=None): """Update the replica_state of a replica.""" return self.plugin.update_replica_state(context, replica_list, replica, access_rules, replica_snapshots, share_server) def promote_replica(self, context, replica_list, replica, access_rules, share_server=None): """Promote a replica to 'active' replica state..""" return self.plugin.promote_replica(context, replica_list, replica, access_rules, share_server) def delete_replica(self, context, replica_list, replica_snapshots, replica, share_server=None): """Delete a replica.""" self.plugin.delete_replica(context, replica_list, replica_snapshots, replica, share_server) def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server=None): self.plugin.revert_to_snapshot(context, snapshot, share_access_rules, snapshot_access_rules, share_server) manila-10.0.0/manila/share/drivers/huawei/huawei_utils.py0000664000175000017500000000463013656750227023452 0ustar zuulzuul00000000000000# Copyright (c) 
2015 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_log import log from manila.share.drivers.huawei import constants from manila.share import share_types LOG = log.getLogger(__name__) def get_share_extra_specs_params(type_id): """Return the parameters for creating the share.""" opts = None if type_id is not None: specs = share_types.get_share_type_extra_specs(type_id) opts = _get_opts_from_specs(specs) LOG.debug('Get share type extra specs: %s', opts) return opts def _get_opts_from_specs(specs): opts = copy.deepcopy(constants.OPTS_CAPABILITIES) opts.update(constants.OPTS_VALUE) for key, value in specs.items(): # Get the scope, if using scope format scope = None key_split = key.split(':') if len(key_split) not in (1, 2): continue if len(key_split) == 1: key = key_split[0] else: scope = key_split[0] key = key_split[1] if scope: scope = scope.lower() if key: key = key.lower() # We want both the scheduler and the driver to act on the value. if ((not scope or scope == 'capabilities') and key in constants.OPTS_CAPABILITIES): words = value.split() if not (words and len(words) == 2 and words[0] == ''): LOG.error("Extra specs must be specified as " "capabilities:%s=' True'.", key) else: opts[key] = words[1].lower() if ((scope in constants.OPTS_CAPABILITIES) and (key in constants.OPTS_VALUE)): if ((scope in constants.OPTS_ASSOCIATE) and (key in constants.OPTS_ASSOCIATE[scope])): opts[key] = value return opts manila-10.0.0/manila/share/drivers/huawei/constants.py0000664000175000017500000000763513656750227022774 0ustar zuulzuul00000000000000# Copyright (c) 2014 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
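# Illustrative-only note, not part of the upstream module: the OPTS_* maps
# defined below are consumed by huawei_utils._get_opts_from_specs() above,
# which expects share-type extra specs in the scoped-key form used for Manila
# capabilities filtering, for example (hypothetical values):
#
#     {'capabilities:dedupe': '<is> True',
#      'huawei_smartcache:cachename': 'cache01'}
#
# Boolean capabilities are given as "'<is> True'"-style strings, while value
# options listed in OPTS_ASSOCIATE (such as 'cachename') are copied through
# unchanged.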
STATUS_ETH_RUNNING = "10" STATUS_FS_HEALTH = "1" STATUS_FS_RUNNING = "27" STATUS_FSSNAPSHOT_HEALTH = '1' STATUS_JOIN_DOMAIN = '1' STATUS_EXIT_DOMAIN = '0' STATUS_SERVICE_RUNNING = "2" QOS_STATUSES = (STATUS_QOS_ACTIVE, STATUS_QOS_INACTIVATED, STATUS_QOS_IDLE) = ('2', '45', '46') DEFAULT_WAIT_INTERVAL = 3 DEFAULT_TIMEOUT = 60 MAX_FS_NUM_IN_QOS = 64 MSG_SNAPSHOT_NOT_FOUND = 1073754118 IP_ALLOCATIONS_DHSS_FALSE = 0 IP_ALLOCATIONS_DHSS_TRUE = 1 SOCKET_TIMEOUT = 52 LOGIN_SOCKET_TIMEOUT = 4 QOS_NAME_PREFIX = 'OpenStack_' SYSTEM_NAME_PREFIX = "Array-" MIN_ARRAY_VERSION_FOR_QOS = 'V300R003C00' TMP_PATH_SRC_PREFIX = "huawei_manila_tmp_path_src_" TMP_PATH_DST_PREFIX = "huawei_manila_tmp_path_dst_" ACCESS_NFS_RW = "1" ACCESS_NFS_RO = "0" ACCESS_CIFS_FULLCONTROL = "1" ACCESS_CIFS_RO = "0" ERROR_CONNECT_TO_SERVER = -403 ERROR_UNAUTHORIZED_TO_SERVER = -401 ERROR_LOGICAL_PORT_EXIST = 1073813505 ERROR_USER_OR_GROUP_NOT_EXIST = 1077939723 ERROR_REPLICATION_PAIR_NOT_EXIST = 1077937923 PORT_TYPE_ETH = '1' PORT_TYPE_BOND = '7' PORT_TYPE_VLAN = '8' SORT_BY_VLAN = 1 SORT_BY_LOGICAL = 2 ALLOC_TYPE_THIN_FLAG = "1" ALLOC_TYPE_THICK_FLAG = "0" ALLOC_TYPE_THIN = "Thin" ALLOC_TYPE_THICK = "Thick" THIN_PROVISIONING = "true" THICK_PROVISIONING = "false" OPTS_QOS_VALUE = { 'maxiops': None, 'miniops': None, 'minbandwidth': None, 'maxbandwidth': None, 'latency': None, 'iotype': None } QOS_LOWER_LIMIT = ['MINIOPS', 'LATENCY', 'MINBANDWIDTH'] QOS_UPPER_LIMIT = ['MAXIOPS', 'MAXBANDWIDTH'] OPTS_CAPABILITIES = { 'dedupe': False, 'compression': False, 'huawei_smartcache': False, 'huawei_smartpartition': False, 'thin_provisioning': None, 'qos': False, 'huawei_sectorsize': None, } OPTS_VALUE = { 'cachename': None, 'partitionname': None, 'sectorsize': None, } OPTS_VALUE.update(OPTS_QOS_VALUE) OPTS_ASSOCIATE = { 'huawei_smartcache': 'cachename', 'huawei_smartpartition': 'partitionname', 'huawei_sectorsize': 'sectorsize', 'qos': OPTS_QOS_VALUE, } VALID_SECTOR_SIZES = ('4', '8', '16', '32', '64') LOCAL_RES_TYPES = (FILE_SYSTEM_TYPE,) = ('40',) REPLICA_MODELS = (REPLICA_SYNC_MODEL, REPLICA_ASYNC_MODEL) = ('1', '2') REPLICA_SPEED_MODELS = (REPLICA_SPEED_LOW, REPLICA_SPEED_MEDIUM, REPLICA_SPEED_HIGH, REPLICA_SPEED_HIGHEST) = ('1', '2', '3', '4') REPLICA_HEALTH_STATUSES = (REPLICA_HEALTH_STATUS_NORMAL, REPLICA_HEALTH_STATUS_FAULT, REPLICA_HEALTH_STATUS_INVALID) = ('1', '2', '14') REPLICA_DATA_STATUSES = ( REPLICA_DATA_STATUS_SYNCHRONIZED, REPLICA_DATA_STATUS_COMPLETE, REPLICA_DATA_STATUS_INCOMPLETE) = ('1', '2', '5') REPLICA_DATA_STATUS_IN_SYNC = ( REPLICA_DATA_STATUS_SYNCHRONIZED, REPLICA_DATA_STATUS_COMPLETE) REPLICA_RUNNING_STATUSES = ( REPLICA_RUNNING_STATUS_NORMAL, REPLICA_RUNNING_STATUS_SYNCING, REPLICA_RUNNING_STATUS_SPLITTED, REPLICA_RUNNING_STATUS_TO_RECOVER, REPLICA_RUNNING_STATUS_INTERRUPTED, REPLICA_RUNNING_STATUS_INVALID) = ( '1', '23', '26', '33', '34', '35') REPLICA_SECONDARY_ACCESS_RIGHTS = ( REPLICA_SECONDARY_ACCESS_DENIED, REPLICA_SECONDARY_RO, REPLICA_SECONDARY_RW) = ('1', '2', '3') manila-10.0.0/manila/share/drivers/huawei/base.py0000664000175000017500000001042413656750227021660 0ustar zuulzuul00000000000000# Copyright (c) 2015 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Abstract base class to work with share.""" import abc import six @six.add_metaclass(abc.ABCMeta) class HuaweiBase(object): """Interface to work with share.""" def __init__(self, configuration): """Do initialization.""" self.configuration = configuration @abc.abstractmethod def create_share(self, share, share_server): """Is called to create share.""" @abc.abstractmethod def create_snapshot(self, snapshot, share_server): """Is called to create snapshot.""" @abc.abstractmethod def delete_share(self, share, share_server): """Is called to remove share.""" @abc.abstractmethod def delete_snapshot(self, snapshot, share_server): """Is called to remove snapshot.""" @abc.abstractmethod def allow_access(self, share, access, share_server): """Allow access to the share.""" @abc.abstractmethod def deny_access(self, share, access, share_server): """Deny access to the share.""" @abc.abstractmethod def ensure_share(self, share, share_server=None): """Ensure that share is exported.""" @abc.abstractmethod def update_access(self, share, access_rules, add_rules, delete_rules, share_server): """Update access rules list.""" @abc.abstractmethod def extend_share(self, share, new_size, share_server): """Extends size of existing share.""" @abc.abstractmethod def create_share_from_snapshot(self, share, snapshot, share_server=None, parent_share=None): """Create share from snapshot.""" @abc.abstractmethod def shrink_share(self, share, new_size, share_server): """Shrinks size of existing share.""" @abc.abstractmethod def manage_existing(self, share, driver_options): """Manage existing share.""" @abc.abstractmethod def manage_existing_snapshot(self, snapshot, driver_options): """Manage existing snapshot.""" @abc.abstractmethod def get_network_allocations_number(self): """Get number of network interfaces to be created.""" @abc.abstractmethod def get_pool(self, share): """Return pool name where the share resides on.""" def update_share_stats(self, stats_dict): """Retrieve stats info from share group.""" @abc.abstractmethod def setup_server(self, network_info, metadata=None): """Set up share server with given network parameters.""" @abc.abstractmethod def teardown_server(self, server_details, security_services=None): """Teardown share server.""" @abc.abstractmethod def create_replica(self, context, replica_list, new_replica, access_rules, replica_snapshots, share_server=None): """Replicate the active replica to a new replica on this backend.""" @abc.abstractmethod def update_replica_state(self, context, replica_list, replica, access_rules, replica_snapshots, share_server=None): """Update the replica_state of a replica.""" @abc.abstractmethod def promote_replica(self, context, replica_list, replica, access_rules, share_server=None): """Promote a replica to 'active' replica state.""" @abc.abstractmethod def delete_replica(self, context, replica_list, replica_snapshots, replica, share_server=None): """Delete a replica.""" @abc.abstractmethod def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server=None): """Revert a snapshot.""" 
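# Illustrative-only sketch, not part of the upstream module: HuaweiBase uses
# abc.ABCMeta, so a backend plugin (for example the V3 connection class named
# in huawei_nas.HUAWEI_UNIFIED_DRIVER_REGISTRY) must implement every abstract
# method above before it can be instantiated. The deliberately incomplete
# class below is hypothetical.
if __name__ == '__main__':
    class _IncompletePlugin(HuaweiBase):
        def create_share(self, share, share_server):
            return 'nfs://192.0.2.1:/example_share'

    try:
        _IncompletePlugin(configuration=None)
    except TypeError as exc:
        # Python refuses to instantiate the class while abstract methods
        # such as delete_share() remain unimplemented.
        print(exc)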
manila-10.0.0/manila/share/drivers/huawei/v3/0000775000175000017500000000000013656750362020723 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/huawei/v3/__init__.py0000664000175000017500000000000013656750227023022 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/huawei/v3/helper.py0000664000175000017500000014562413656750227022570 0ustar zuulzuul00000000000000# Copyright (c) 2014 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import base64 import copy import requests import time from xml.etree import ElementTree as ET from oslo_log import log from oslo_serialization import jsonutils import six from manila import exception from manila.i18n import _ from manila.share.drivers.huawei import constants from manila import utils LOG = log.getLogger(__name__) class RestHelper(object): """Helper class for Huawei OceanStor V3 storage system.""" def __init__(self, configuration): self.configuration = configuration self.url = None self.session = None # pylint: disable=no-member requests.packages.urllib3.disable_warnings( requests.packages.urllib3.exceptions.InsecureRequestWarning) requests.packages.urllib3.disable_warnings( requests.packages.urllib3.exceptions.InsecurePlatformWarning) # pylint: enable=no-member def init_http_head(self): self.url = None self.session = requests.Session() self.session.headers.update({ "Connection": "keep-alive", "Content-Type": "application/json"}) self.session.verify = False def do_call(self, url, data, method, calltimeout=constants.SOCKET_TIMEOUT): """Send requests to server. Send HTTPS call, get response in JSON. Convert response into Python Object and return it. """ if self.url: url = self.url + url LOG.debug('Request URL: %(url)s\n' 'Call Method: %(method)s\n' 'Request Data: %(data)s\n', {'url': url, 'method': method, 'data': data}) kwargs = {'timeout': calltimeout} if data: kwargs['data'] = data if method in ('POST', 'PUT', 'GET', 'DELETE'): func = getattr(self.session, method.lower()) else: msg = _("Request method %s is invalid.") % method LOG.error(msg) raise exception.ShareBackendException(msg=msg) try: res = func(url, **kwargs) except Exception as err: LOG.error('\nBad response from server: %(url)s.' 
' Error: %(err)s', {'url': url, 'err': err}) return {"error": {"code": constants.ERROR_CONNECT_TO_SERVER, "description": "Connect server error"}} try: res.raise_for_status() except requests.HTTPError as exc: return {"error": {"code": exc.response.status_code, "description": six.text_type(exc)}} result = res.json() LOG.debug('Response Data: %s', result) return result def login(self): """Login huawei array.""" login_info = self._get_login_info() urlstr = login_info['RestURL'] url_list = urlstr.split(";") deviceid = None for item_url in url_list: url = item_url.strip('').strip('\n') + "xx/sessions" data = jsonutils.dumps({"username": login_info['UserName'], "password": login_info['UserPassword'], "scope": "0"}) self.init_http_head() result = self.do_call(url, data, 'POST', calltimeout=constants.LOGIN_SOCKET_TIMEOUT) if((result['error']['code'] != 0) or ("data" not in result) or (result['data']['deviceid'] is None)): LOG.error("Login to %s failed, try another.", item_url) continue LOG.debug('Login success: %(url)s\n', {'url': item_url}) deviceid = result['data']['deviceid'] self.url = item_url + deviceid self.session.headers['iBaseToken'] = result['data']['iBaseToken'] break if deviceid is None: err_msg = _("All url login fail.") LOG.error(err_msg) raise exception.InvalidShare(reason=err_msg) return deviceid @utils.synchronized('huawei_manila') def call(self, url, data, method): """Send requests to server. If fail, try another RestURL. """ deviceid = None old_url = self.url result = self.do_call(url, data, method) error_code = result['error']['code'] if(error_code == constants.ERROR_CONNECT_TO_SERVER or error_code == constants.ERROR_UNAUTHORIZED_TO_SERVER): LOG.error("Can't open the recent url, re-login.") deviceid = self.login() if deviceid is not None: LOG.debug('Replace URL: \n' 'Old URL: %(old_url)s\n' 'New URL: %(new_url)s\n', {'old_url': old_url, 'new_url': self.url}) result = self.do_call(url, data, method) return result def _create_filesystem(self, fs_param): """Create file system.""" url = "/filesystem" data = jsonutils.dumps(fs_param) result = self.call(url, data, 'POST') msg = 'Create filesystem error.' self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) return result['data']['ID'] def _assert_rest_result(self, result, err_str): if result['error']['code'] != 0: err_msg = (_('%(err)s\nresult: %(res)s.') % {'err': err_str, 'res': result}) LOG.error(err_msg) raise exception.InvalidShare(reason=err_msg) def _assert_data_in_result(self, result, msg): if "data" not in result: err_msg = (_('%s "data" was not in result.') % msg) LOG.error(err_msg) raise exception.InvalidShare(reason=err_msg) def _get_login_info(self): """Get login IP, username and password from config file.""" logininfo = {} filename = self.configuration.manila_huawei_conf_file tree = ET.parse(filename) root = tree.getroot() RestURL = root.findtext('Storage/RestURL') logininfo['RestURL'] = RestURL.strip() # Prefix !$$$ means encoded already. 
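        # Values already carrying the prefix are base64-decoded after the
        # 4-character prefix is stripped; plaintext values are used as-is and
        # then written back to the XML base64-encoded with the prefix, so the
        # file is only rewritten when plaintext values are found.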
prefix_name = '!$$$' need_encode = False for key in ['UserName', 'UserPassword']: node = root.find('Storage/%s' % key) if node.text.startswith(prefix_name): logininfo[key] = base64.b64decode( six.b(node.text[4:])).decode() else: logininfo[key] = node.text node.text = prefix_name + base64.b64encode( six.b(node.text)).decode() need_encode = True if need_encode: self._change_file_mode(filename) try: tree.write(filename, 'UTF-8') except Exception as err: err_msg = (_('File write error %s.') % err) LOG.error(err_msg) raise exception.InvalidShare(reason=err_msg) return logininfo def _change_file_mode(self, filepath): try: utils.execute('chmod', '666', filepath, run_as_root=True) except Exception as err: LOG.error('Bad response from change file: %s.', err) raise def create_share(self, share_name, fs_id, share_proto): """Create a share.""" share_url_type = self._get_share_url_type(share_proto) share_path = self._get_share_path(share_name) filepath = {} if share_proto == 'NFS': filepath = { "DESCRIPTION": "", "FSID": fs_id, "SHAREPATH": share_path, } elif share_proto == 'CIFS': filepath = { "SHAREPATH": share_path, "DESCRIPTION": "", "ABEENABLE": "false", "ENABLENOTIFY": "true", "ENABLEOPLOCK": "true", "NAME": share_name.replace("-", "_"), "FSID": fs_id, "TENANCYID": "0", } else: raise exception.InvalidShare( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) url = "/" + share_url_type data = jsonutils.dumps(filepath) result = self.call(url, data, "POST") msg = 'Create share error.' self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) return result['data']['ID'] def _delete_share_by_id(self, share_id, share_url_type): """Delete share by share id.""" url = "/" + share_url_type + "/" + share_id result = self.call(url, None, "DELETE") self._assert_rest_result(result, 'Delete share error.') def _delete_fs(self, fs_id): """Delete file system.""" # Get available file system url = "/filesystem/" + fs_id result = self.call(url, None, "DELETE") self._assert_rest_result(result, 'Delete file system error.') def _get_cifs_service_status(self): url = "/CIFSSERVICE" result = self.call(url, None, "GET") msg = 'Get CIFS service status error.' self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) return result['data']['RUNNINGSTATUS'] def _get_nfs_service_status(self): url = "/NFSSERVICE" result = self.call(url, None, "GET") msg = 'Get NFS service status error.' 
self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) service = {} service['RUNNINGSTATUS'] = result['data']['RUNNINGSTATUS'] service['SUPPORTV3'] = result['data']['SUPPORTV3'] service['SUPPORTV4'] = result['data']['SUPPORTV4'] return service def _start_nfs_service_status(self): url = "/NFSSERVICE" nfsserviceinfo = { "NFSV4DOMAIN": "localdomain", "RUNNINGSTATUS": "2", "SUPPORTV3": 'true', "SUPPORTV4": 'true', "TYPE": "16452", } data = jsonutils.dumps(nfsserviceinfo) result = self.call(url, data, "PUT") self._assert_rest_result(result, 'Start NFS service error.') def _start_cifs_service_status(self): url = "/CIFSSERVICE" cifsserviceinfo = { "ENABLENOTIFY": "true", "ENABLEOPLOCK": "true", "ENABLEOPLOCKLEASE": "false", "GUESTENABLE": "false", "OPLOCKTIMEOUT": "35", "RUNNINGSTATUS": "2", "SECURITYMODEL": "3", "SIGNINGENABLE": "false", "SIGNINGREQUIRED": "false", "TYPE": "16453", } data = jsonutils.dumps(cifsserviceinfo) result = self.call(url, data, "PUT") self._assert_rest_result(result, 'Start CIFS service error.') def _find_pool_info(self, pool_name, result): if pool_name is None: return poolinfo = {} pool_name = pool_name.strip() for item in result.get('data', []): if pool_name == item['NAME'] and '2' == item['USAGETYPE']: poolinfo['name'] = pool_name poolinfo['ID'] = item['ID'] poolinfo['CAPACITY'] = item['USERFREECAPACITY'] poolinfo['TOTALCAPACITY'] = item['USERTOTALCAPACITY'] poolinfo['CONSUMEDCAPACITY'] = item['USERCONSUMEDCAPACITY'] poolinfo['TIER0CAPACITY'] = item['TIER0CAPACITY'] poolinfo['TIER1CAPACITY'] = item['TIER1CAPACITY'] poolinfo['TIER2CAPACITY'] = item['TIER2CAPACITY'] break return poolinfo def _find_all_pool_info(self): url = "/storagepool" result = self.call(url, None, "GET") msg = "Query resource pool error." self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) return result def _read_xml(self): """Open xml file and parse the content.""" filename = self.configuration.manila_huawei_conf_file try: tree = ET.parse(filename) root = tree.getroot() except Exception as err: message = (_('Read Huawei config file(%(filename)s)' ' for Manila error: %(err)s') % {'filename': filename, 'err': err}) LOG.error(message) raise exception.InvalidInput(reason=message) return root def _remove_access_from_share(self, access_id, share_proto): access_type = self._get_share_client_type(share_proto) url = "/" + access_type + "/" + access_id result = self.call(url, None, "DELETE") self._assert_rest_result(result, 'delete access from share error!') def _get_access_count(self, share_id, share_client_type): url_subfix = ("/" + share_client_type + "/count?" + "filter=PARENTID::" + share_id) url = url_subfix result = self.call(url, None, "GET") msg = "Get access count by share error!" 
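# NOTE: the array paginates access clients; the helpers that follow
# (_get_all_access_from_share / _get_access_from_share) walk the listing in
# windows of 100 entries and use the total count returned here to decide when
# to stop iterating.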
self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) return int(result['data']['COUNT']) def _get_all_access_from_share(self, share_id, share_proto): """Return a list of all the access IDs of the share""" share_client_type = self._get_share_client_type(share_proto) count = self._get_access_count(share_id, share_client_type) access_ids = [] range_begin = 0 while count > 0: access_range = self._get_access_from_share_range(share_id, range_begin, share_client_type) for item in access_range: access_ids.append(item['ID']) range_begin += 100 count -= 100 return access_ids def _get_access_from_share(self, share_id, access_to, share_proto): """Segments to find access for a period of 100.""" share_client_type = self._get_share_client_type(share_proto) count = self._get_access_count(share_id, share_client_type) access_id = None range_begin = 0 while count > 0: if access_id: break access_range = self._get_access_from_share_range(share_id, range_begin, share_client_type) for item in access_range: if item['NAME'] in (access_to, '@' + access_to): access_id = item['ID'] range_begin += 100 count -= 100 return access_id def _get_access_from_share_range(self, share_id, range_begin, share_client_type): range_end = range_begin + 100 url = ("/" + share_client_type + "?filter=PARENTID::" + share_id + "&range=[" + six.text_type(range_begin) + "-" + six.text_type(range_end) + "]") result = self.call(url, None, "GET") self._assert_rest_result(result, 'Get access id by share error!') return result.get('data', []) def _get_level_by_access_id(self, access_id, share_proto): share_client_type = self._get_share_client_type(share_proto) url = "/" + share_client_type + "/" + access_id result = self.call(url, None, "GET") self._assert_rest_result(result, 'Get access information error!') access_info = result.get('data', []) access_level = access_info.get('ACCESSVAL') if not access_level: access_level = access_info.get('PERMISSION') return access_level def _change_access_rest(self, access_id, share_proto, access_level): """Change access level of the share.""" if share_proto == 'NFS': self._change_nfs_access_rest(access_id, access_level) elif share_proto == 'CIFS': self._change_cifs_access_rest(access_id, access_level) else: raise exception.InvalidInput( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) def _change_nfs_access_rest(self, access_id, access_level): url = "/NFS_SHARE_AUTH_CLIENT/" + access_id access = { "ACCESSVAL": access_level, "SYNC": "0", "ALLSQUASH": "1", "ROOTSQUASH": "0", } data = jsonutils.dumps(access) result = self.call(url, data, "PUT") msg = 'Change access error.' self._assert_rest_result(result, msg) def _change_cifs_access_rest(self, access_id, access_level): url = "/CIFS_SHARE_AUTH_CLIENT/" + access_id access = { "PERMISSION": access_level, } data = jsonutils.dumps(access) result = self.call(url, data, "PUT") msg = 'Change access error.' 
self._assert_rest_result(result, msg) def _allow_access_rest(self, share_id, access_to, share_proto, access_level): """Allow access to the share.""" if share_proto == 'NFS': self._allow_nfs_access_rest(share_id, access_to, access_level) elif share_proto == 'CIFS': self._allow_cifs_access_rest(share_id, access_to, access_level) else: raise exception.InvalidInput( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) def _allow_nfs_access_rest(self, share_id, access_to, access_level): url = "/NFS_SHARE_AUTH_CLIENT" access = { "TYPE": "16409", "NAME": access_to, "PARENTID": share_id, "ACCESSVAL": access_level, "SYNC": "0", "ALLSQUASH": "1", "ROOTSQUASH": "0", } data = jsonutils.dumps(access) result = self.call(url, data, "POST") msg = 'Allow access error.' self._assert_rest_result(result, msg) def _allow_cifs_access_rest(self, share_id, access_to, access_level): url = "/CIFS_SHARE_AUTH_CLIENT" domain_type = { 'local': '2', 'ad': '0' } error_msg = 'Allow access error.' access_info = ('Access info (access_to: %(access_to)s, ' 'access_level: %(access_level)s, share_id: %(id)s)' % {'access_to': access_to, 'access_level': access_level, 'id': share_id}) def send_rest(access_to, domain_type): access = { "NAME": access_to, "PARENTID": share_id, "PERMISSION": access_level, "DOMAINTYPE": domain_type, } data = jsonutils.dumps(access) result = self.call(url, data, "POST") error_code = result['error']['code'] if error_code == 0: return True elif error_code != constants.ERROR_USER_OR_GROUP_NOT_EXIST: self._assert_rest_result(result, error_msg) return False if '\\' not in access_to: # First, try to add user access. LOG.debug('Try to add user access. %s.', access_info) if send_rest(access_to, domain_type['local']): return # Second, if add user access failed, # try to add group access. LOG.debug('Failed with add user access, ' 'try to add group access. %s.', access_info) # Group name starts with @. if send_rest('@' + access_to, domain_type['local']): return else: LOG.debug('Try to add domain user access. %s.', access_info) if send_rest(access_to, domain_type['ad']): return # If add domain user access failed, # try to add domain group access. LOG.debug('Failed with add domain user access, ' 'try to add domain group access. %s.', access_info) # Group name starts with @. if send_rest('@' + access_to, domain_type['ad']): return raise exception.InvalidShare(reason=error_msg) def _get_share_client_type(self, share_proto): share_client_type = None if share_proto == 'NFS': share_client_type = "NFS_SHARE_AUTH_CLIENT" elif share_proto == 'CIFS': share_client_type = "CIFS_SHARE_AUTH_CLIENT" else: raise exception.InvalidInput( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) return share_client_type def _check_snapshot_id_exist(self, snapshot_info): """Check the snapshot id exists.""" if snapshot_info['error']['code'] == constants.MSG_SNAPSHOT_NOT_FOUND: return False elif snapshot_info['error']['code'] == 0: return True else: err_str = "Check the snapshot id exists error!" 
err_msg = (_('%(err)s\nresult: %(res)s.') % {'err': err_str, 'res': snapshot_info}) raise exception.InvalidShareSnapshot(reason=err_msg) def _get_snapshot_by_id(self, snap_id): """Get snapshot by id""" url = "/FSSNAPSHOT/" + snap_id result = self.call(url, None, "GET") return result def _delete_snapshot(self, snap_id): """Deletes snapshot.""" url = "/FSSNAPSHOT/%s" % snap_id data = jsonutils.dumps({"TYPE": "48", "ID": snap_id}) result = self.call(url, data, "DELETE") self._assert_rest_result(result, 'Delete snapshot error.') def _create_snapshot(self, sharefsid, snapshot_name): """Create a snapshot.""" filepath = { "PARENTTYPE": "40", "TYPE": "48", "PARENTID": sharefsid, "NAME": snapshot_name.replace("-", "_"), "DESCRIPTION": "", } url = "/FSSNAPSHOT" data = jsonutils.dumps(filepath) result = self.call(url, data, "POST") msg = 'Create a snapshot error.' self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) return result['data']['ID'] def _get_share_by_name(self, share_name, share_url_type): """Segments to find share for a period of 100.""" count = self._get_share_count(share_url_type) share = {} range_begin = 0 while True: if count < 0 or share: break share = self._get_share_by_name_range(share_name, range_begin, share_url_type) range_begin += 100 count -= 100 return share def _get_share_count(self, share_url_type): """Get share count.""" url = "/" + share_url_type + "/count" result = self.call(url, None, "GET") self._assert_rest_result(result, 'Get share count error!') return int(result['data']['COUNT']) def _get_share_by_name_range(self, share_name, range_begin, share_url_type): """Get share by share name.""" range_end = range_begin + 100 url = ("/" + share_url_type + "?range=[" + six.text_type(range_begin) + "-" + six.text_type(range_end) + "]") result = self.call(url, None, "GET") self._assert_rest_result(result, 'Get share by name error!') share_path = self._get_share_path(share_name) share = {} for item in result.get('data', []): if share_path == item['SHAREPATH']: share['ID'] = item['ID'] share['FSID'] = item['FSID'] break return share def _get_share_url_type(self, share_proto): share_url_type = None if share_proto == 'NFS': share_url_type = "NFSHARE" elif share_proto == 'CIFS': share_url_type = "CIFSHARE" else: raise exception.InvalidInput( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) return share_url_type def get_fsid_by_name(self, share_name): share_name = share_name.replace("-", "_") url = "/FILESYSTEM?filter=NAME::%s&range=[0-8191]" % share_name result = self.call(url, None, "GET") self._assert_rest_result(result, 'Get filesystem by name error!') for item in result.get('data', []): if share_name == item['NAME']: return item['ID'] def _get_fs_info_by_id(self, fsid): url = "/filesystem/%s" % fsid result = self.call(url, None, "GET") msg = "Get filesystem info by id error!" 
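# NOTE: the dict assembled below renames several array attributes to the
# driver's internal keys, e.g. PARENTNAME -> POOLNAME, ENABLECOMPRESSION ->
# COMPRESSION, ENABLEDEDUP -> DEDUP, CACHEPARTITIONID -> SMARTPARTITIONID and
# SMARTCACHEPARTITIONID -> SMARTCACHEID.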
self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) fs = {} fs['HEALTHSTATUS'] = result['data']['HEALTHSTATUS'] fs['RUNNINGSTATUS'] = result['data']['RUNNINGSTATUS'] fs['CAPACITY'] = result['data']['CAPACITY'] fs['ALLOCTYPE'] = result['data']['ALLOCTYPE'] fs['POOLNAME'] = result['data']['PARENTNAME'] fs['COMPRESSION'] = result['data']['ENABLECOMPRESSION'] fs['DEDUP'] = result['data']['ENABLEDEDUP'] fs['SMARTPARTITIONID'] = result['data']['CACHEPARTITIONID'] fs['SMARTCACHEID'] = result['data']['SMARTCACHEPARTITIONID'] return fs def _get_share_path(self, share_name): share_path = "/" + share_name.replace("-", "_") + "/" return share_path def get_share_name_by_id(self, share_id): share_name = "share_" + share_id return share_name def _get_share_name_by_export_location(self, export_location, share_proto): export_location_split = None share_name = None share_ip = None if export_location: if share_proto == 'NFS': export_location_split = export_location.split(':/') if len(export_location_split) == 2: share_name = export_location_split[1] share_ip = export_location_split[0] elif share_proto == 'CIFS': export_location_split = export_location.split('\\') if (len(export_location_split) == 4 and export_location_split[0] == "" and export_location_split[1] == ""): share_ip = export_location_split[2] share_name = export_location_split[3] if share_name is None: raise exception.InvalidInput( reason=(_('No share with export location %s could be found.') % export_location)) root = self._read_xml() target_ip = root.findtext('Storage/LogicalPortIP') if target_ip: if share_ip != target_ip.strip(): raise exception.InvalidInput( reason=(_('The share IP %s is not configured.') % share_ip)) else: raise exception.InvalidInput( reason=(_('The config parameter LogicalPortIP is not set.'))) return share_name def _get_snapshot_id(self, fs_id, snap_name): snapshot_id = (fs_id + "@" + "share_snapshot_" + snap_name.replace("-", "_")) return snapshot_id def _change_share_size(self, fsid, new_size): url = "/filesystem/%s" % fsid capacityinfo = { "CAPACITY": new_size, } data = jsonutils.dumps(capacityinfo) result = self.call(url, data, "PUT") msg = "Change a share size error!" 
self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) def _change_fs_name(self, fsid, name): url = "/filesystem/%s" % fsid fs_param = { "NAME": name.replace("-", "_"), } data = jsonutils.dumps(fs_param) result = self.call(url, data, "PUT") msg = _("Change filesystem name error.") self._assert_rest_result(result, msg) def _change_extra_specs(self, fsid, extra_specs): url = "/filesystem/%s" % fsid fs_param = { "ENABLEDEDUP": extra_specs['dedupe'], "ENABLECOMPRESSION": extra_specs['compression'] } data = jsonutils.dumps(fs_param) result = self.call(url, data, "PUT") msg = _("Change extra_specs error.") self._assert_rest_result(result, msg) def _get_partition_id_by_name(self, name): url = "/cachepartition" result = self.call(url, None, "GET") self._assert_rest_result(result, _('Get partition by name error.')) if "data" in result: for item in result['data']: if name == item['NAME']: return item['ID'] return None def get_partition_info_by_id(self, partitionid): url = '/cachepartition/' + partitionid result = self.call(url, None, "GET") self._assert_rest_result(result, _('Get partition by partition id error.')) return result['data'] def _add_fs_to_partition(self, fs_id, partition_id): url = "/filesystem/associate/cachepartition" data = jsonutils.dumps({"ID": partition_id, "ASSOCIATEOBJTYPE": 40, "ASSOCIATEOBJID": fs_id, "TYPE": 268}) result = self.call(url, data, "POST") self._assert_rest_result(result, _('Add filesystem to partition error.')) def _remove_fs_from_partition(self, fs_id, partition_id): url = "/smartPartition/removeFs" data = jsonutils.dumps({"ID": partition_id, "ASSOCIATEOBJTYPE": 40, "ASSOCIATEOBJID": fs_id, "TYPE": 268}) result = self.call(url, data, "PUT") self._assert_rest_result(result, _('Remove filesystem from partition error.')) def _rename_share_snapshot(self, snapshot_id, new_name): url = "/FSSNAPSHOT/" + snapshot_id data = jsonutils.dumps({"NAME": new_name}) result = self.call(url, data, "PUT") msg = _('Rename share snapshot on array error.') self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) def _get_cache_id_by_name(self, name): url = "/SMARTCACHEPARTITION" result = self.call(url, None, "GET") self._assert_rest_result(result, _('Get cache by name error.')) if "data" in result: for item in result['data']: if name == item['NAME']: return item['ID'] return None def get_cache_info_by_id(self, cacheid): url = "/SMARTCACHEPARTITION/" + cacheid data = jsonutils.dumps({"TYPE": "273", "ID": cacheid}) result = self.call(url, data, "GET") self._assert_rest_result( result, _('Get smartcache by cache id error.')) return result['data'] def _add_fs_to_cache(self, fs_id, cache_id): url = "/SMARTCACHEPARTITION/CREATE_ASSOCIATE" data = jsonutils.dumps({"ID": cache_id, "ASSOCIATEOBJTYPE": 40, "ASSOCIATEOBJID": fs_id, "TYPE": 273}) result = self.call(url, data, "PUT") self._assert_rest_result(result, _('Add filesystem to cache error.')) def get_qos(self): url = "/ioclass" result = self.call(url, None, "GET") self._assert_rest_result(result, _('Get QoS information error.')) return result def find_available_qos(self, qos): """"Find available QoS on the array.""" qos_id = None fs_list = [] temp_qos = copy.deepcopy(qos) result = self.get_qos() if 'data' in result: if 'LATENCY' not in temp_qos: temp_qos['LATENCY'] = '0' for item in result['data']: for key in constants.OPTS_QOS_VALUE: if temp_qos.get(key.upper()) != item.get(key.upper()): break else: fs_num = len(item['FSLIST'].split(",")) # We use this QoS only if the filesystems in it is 
less # than 64, else we cannot add filesystem to this QoS # any more. if (item['RUNNINGSTATUS'] == constants.STATUS_QOS_ACTIVE and fs_num < constants.MAX_FS_NUM_IN_QOS and item['NAME'].startswith( constants.QOS_NAME_PREFIX) and item['LUNLIST'] == '[""]'): qos_id = item['ID'] fs_list = item['FSLIST'] break return (qos_id, fs_list) def add_share_to_qos(self, qos_id, fs_id, fs_list): """Add filesystem to QoS.""" url = "/ioclass/" + qos_id new_fs_list = [] fs_list_string = fs_list[1:-1] for fs_string in fs_list_string.split(","): tmp_fs_id = fs_string[1:-1] if '' != tmp_fs_id and tmp_fs_id != fs_id: new_fs_list.append(tmp_fs_id) new_fs_list.append(fs_id) data = jsonutils.dumps({"FSLIST": new_fs_list, "TYPE": 230, "ID": qos_id}) result = self.call(url, data, "PUT") msg = _('Associate filesystem to Qos error.') self._assert_rest_result(result, msg) def create_qos_policy(self, qos, fs_id): # Get local time. localtime = time.strftime('%Y%m%d%H%M%S', time.localtime(time.time())) # Package QoS name. qos_name = constants.QOS_NAME_PREFIX + fs_id + '_' + localtime mergedata = { "TYPE": "230", "NAME": qos_name, "FSLIST": ["%s" % fs_id], "CLASSTYPE": "1", "SCHEDULEPOLICY": "2", "SCHEDULESTARTTIME": "1410969600", "STARTTIME": "08:00", "DURATION": "86400", "CYCLESET": "[1,2,3,4,5,6,0]", } mergedata.update(qos) data = jsonutils.dumps(mergedata) url = "/ioclass" result = self.call(url, data, 'POST') self._assert_rest_result(result, _('Create QoS policy error.')) return result['data']['ID'] def activate_deactivate_qos(self, qos_id, enablestatus): """Activate or deactivate QoS. enablestatus: true (activate) enablestatus: false (deactivate) """ url = "/ioclass/active/" + qos_id data = jsonutils.dumps({ "TYPE": 230, "ID": qos_id, "ENABLESTATUS": enablestatus}) result = self.call(url, data, "PUT") self._assert_rest_result( result, _('Activate or deactivate QoS error.')) def change_fs_priority_high(self, fs_id): """Change fs priority to high.""" url = "/filesystem/" + fs_id data = jsonutils.dumps({"IOPRIORITY": "3"}) result = self.call(url, data, "PUT") self._assert_rest_result( result, _('Change filesystem priority error.')) def delete_qos_policy(self, qos_id): """Delete a QoS policy.""" url = "/ioclass/" + qos_id data = jsonutils.dumps({"TYPE": "230", "ID": qos_id}) result = self.call(url, data, 'DELETE') self._assert_rest_result(result, _('Delete QoS policy error.')) def get_qosid_by_fsid(self, fs_id): """Get QoS id by fs id.""" url = "/filesystem/" + fs_id result = self.call(url, None, "GET") self._assert_rest_result( result, _('Get QoS id by filesystem id error.')) return result['data'].get('IOCLASSID') def get_fs_list_in_qos(self, qos_id): """Get the filesystem list in QoS.""" qos_info = self.get_qos_info(qos_id) fs_list = [] fs_string = qos_info['FSLIST'][1:-1] for fs in fs_string.split(","): fs_id = fs[1:-1] fs_list.append(fs_id) return fs_list def get_qos_info(self, qos_id): """Get QoS information.""" url = "/ioclass/" + qos_id result = self.call(url, None, "GET") self._assert_rest_result(result, _('Get QoS information error.')) return result['data'] def remove_fs_from_qos(self, fs_id, fs_list, qos_id): """Remove filesystem from QoS.""" fs_list = [i for i in fs_list if i != fs_id] url = "/ioclass/" + qos_id data = jsonutils.dumps({"FSLIST": fs_list, "TYPE": 230, "ID": qos_id}) result = self.call(url, data, "PUT") msg = _('Remove filesystem from QoS error.') self._assert_rest_result(result, msg) def _remove_fs_from_cache(self, fs_id, cache_id): url = "/SMARTCACHEPARTITION/REMOVE_ASSOCIATE" data = 
jsonutils.dumps({"ID": cache_id, "ASSOCIATEOBJTYPE": 40, "ASSOCIATEOBJID": fs_id, "TYPE": 273}) result = self.call(url, data, "PUT") self._assert_rest_result(result, _('Remove filesystem from cache error.')) def get_all_eth_port(self): url = "/ETH_PORT" result = self.call(url, None, 'GET') self._assert_rest_result(result, _('Get all eth port error.')) all_eth = {} if "data" in result: all_eth = result['data'] return all_eth def get_eth_port_by_id(self, port_id): url = "/ETH_PORT/" + port_id result = self.call(url, None, 'GET') self._assert_rest_result(result, _('Get eth port by id error.')) if "data" in result: return result['data'] return None def get_all_bond_port(self): url = "/BOND_PORT" result = self.call(url, None, 'GET') self._assert_rest_result(result, _('Get all bond port error.')) all_bond = {} if "data" in result: all_bond = result['data'] return all_bond def get_port_id(self, port_name, port_type): if port_type == constants.PORT_TYPE_ETH: all_eth = self.get_all_eth_port() for item in all_eth: if port_name == item['LOCATION']: return item['ID'] elif port_type == constants.PORT_TYPE_BOND: all_bond = self.get_all_bond_port() for item in all_bond: if port_name == item['NAME']: return item['ID'] return None def get_all_vlan(self): url = "/vlan" result = self.call(url, None, 'GET') self._assert_rest_result(result, _('Get all vlan error.')) all_vlan = {} if "data" in result: all_vlan = result['data'] return all_vlan def get_vlan(self, port_id, vlan_tag): url = "/vlan" result = self.call(url, None, 'GET') self._assert_rest_result(result, _('Get vlan error.')) vlan_tag = six.text_type(vlan_tag) if "data" in result: for item in result['data']: if port_id == item['PORTID'] and vlan_tag == item['TAG']: return True, item['ID'] return False, None def create_vlan(self, port_id, port_type, vlan_tag): url = "/vlan" data = jsonutils.dumps({"PORTID": port_id, "PORTTYPE": port_type, "TAG": six.text_type(vlan_tag), "TYPE": "280"}) result = self.call(url, data, "POST") self._assert_rest_result(result, _('Create vlan error.')) return result['data']['ID'] def check_vlan_exists_by_id(self, vlan_id): all_vlan = self.get_all_vlan() return any(vlan['ID'] == vlan_id for vlan in all_vlan) def delete_vlan(self, vlan_id): url = "/vlan/" + vlan_id result = self.call(url, None, 'DELETE') if result['error']['code'] == constants.ERROR_LOGICAL_PORT_EXIST: LOG.warning('Cannot delete vlan because there is ' 'a logical port on vlan.') return self._assert_rest_result(result, _('Delete vlan error.')) def get_logical_port(self, home_port_id, ip, subnet): url = "/LIF" result = self.call(url, None, 'GET') self._assert_rest_result(result, _('Get logical port error.')) if "data" not in result: return False, None for item in result['data']: if (home_port_id == item['HOMEPORTID'] and ip == item['IPV4ADDR'] and subnet == item['IPV4MASK']): if item['OPERATIONALSTATUS'] != 'true': self._activate_logical_port(item['ID']) return True, item['ID'] return False, None def _activate_logical_port(self, logical_port_id): url = "/LIF/" + logical_port_id data = jsonutils.dumps({"OPERATIONALSTATUS": "true"}) result = self.call(url, data, 'PUT') self._assert_rest_result(result, _('Activate logical port error.')) def create_logical_port(self, home_port_id, home_port_type, ip, subnet): url = "/LIF" info = { "ADDRESSFAMILY": 0, "CANFAILOVER": "true", "HOMEPORTID": home_port_id, "HOMEPORTTYPE": home_port_type, "IPV4ADDR": ip, "IPV4GATEWAY": "", "IPV4MASK": subnet, "NAME": ip, "OPERATIONALSTATUS": "true", "ROLE": 2, "SUPPORTPROTOCOL": 3, "TYPE": 
"279", } data = jsonutils.dumps(info) result = self.call(url, data, 'POST') self._assert_rest_result(result, _('Create logical port error.')) return result['data']['ID'] def check_logical_port_exists_by_id(self, logical_port_id): all_logical_port = self.get_all_logical_port() return any(port['ID'] == logical_port_id for port in all_logical_port) def get_all_logical_port(self): url = "/LIF" result = self.call(url, None, 'GET') self._assert_rest_result(result, _('Get all logical port error.')) all_logical_port = {} if "data" in result: all_logical_port = result['data'] return all_logical_port def delete_logical_port(self, logical_port_id): url = "/LIF/" + logical_port_id result = self.call(url, None, 'DELETE') self._assert_rest_result(result, _('Delete logical port error.')) def set_DNS_ip_address(self, dns_ip_list): if len(dns_ip_list) > 3: message = _('Most three ips can be set to DNS.') LOG.error(message) raise exception.InvalidInput(reason=message) url = "/DNS_Server" dns_info = { "ADDRESS": jsonutils.dumps(dns_ip_list), "TYPE": "260", } data = jsonutils.dumps(dns_info) result = self.call(url, data, 'PUT') self._assert_rest_result(result, _('Set DNS ip address error.')) if "data" in result: return result['data'] return None def get_DNS_ip_address(self): url = "/DNS_Server" result = self.call(url, None, 'GET') self._assert_rest_result(result, _('Get DNS ip address error.')) ip_address = {} if "data" in result: ip_address = jsonutils.loads(result['data']['ADDRESS']) return ip_address def add_AD_config(self, user, password, domain, system_name): url = "/AD_CONFIG" info = { "ADMINNAME": user, "ADMINPWD": password, "DOMAINSTATUS": 1, "FULLDOMAINNAME": domain, "OU": "", "SYSTEMNAME": system_name, "TYPE": "16414", } data = jsonutils.dumps(info) result = self.call(url, data, 'PUT') self._assert_rest_result(result, _('Add AD config error.')) def delete_AD_config(self, user, password): url = "/AD_CONFIG" info = { "ADMINNAME": user, "ADMINPWD": password, "DOMAINSTATUS": 0, "TYPE": "16414", } data = jsonutils.dumps(info) result = self.call(url, data, 'PUT') self._assert_rest_result(result, _('Delete AD config error.')) def get_AD_config(self): url = "/AD_CONFIG" result = self.call(url, None, 'GET') self._assert_rest_result(result, _('Get AD config error.')) if "data" in result: return result['data'] return None def get_AD_domain_name(self): result = self.get_AD_config() if result and result['DOMAINSTATUS'] == '1': return True, result['FULLDOMAINNAME'] return False, None def add_LDAP_config(self, server, domain): url = "/LDAP_CONFIG" info = { "BASEDN": domain, "LDAPSERVER": server, "PORTNUM": 389, "TRANSFERTYPE": "1", "TYPE": "16413", "USERNAME": "", } data = jsonutils.dumps(info) result = self.call(url, data, 'PUT') self._assert_rest_result(result, _('Add LDAP config error.')) def delete_LDAP_config(self): url = "/LDAP_CONFIG" result = self.call(url, None, 'DELETE') self._assert_rest_result(result, _('Delete LDAP config error.')) def get_LDAP_config(self): url = "/LDAP_CONFIG" result = self.call(url, None, 'GET') self._assert_rest_result(result, _('Get LDAP config error.')) if "data" in result: return result['data'] return None def get_LDAP_domain_server(self): result = self.get_LDAP_config() if result and result['LDAPSERVER']: return True, result['LDAPSERVER'] return False, None def _get_array_info(self): url = "/system/" result = self.call(url, None, "GET") msg = _('Get array info error.') self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) return result.get('data') def 
find_array_version(self): info = self._get_array_info() return info.get('PRODUCTVERSION') def get_array_wwn(self): info = self._get_array_info() return info.get('wwn') def _get_all_remote_devices(self): url = "/remote_device" result = self.call(url, None, "GET") self._assert_rest_result(result, _('Get all remote devices error.')) return result.get('data', []) def get_remote_device_by_wwn(self, wwn): devices = self._get_all_remote_devices() for device in devices: if device.get('WWN') == wwn: return device return {} def create_replication_pair(self, pair_params): url = "/REPLICATIONPAIR" data = jsonutils.dumps(pair_params) result = self.call(url, data, "POST") msg = _('Failed to create replication pair for ' '(LOCALRESID: %(lres)s, REMOTEDEVICEID: %(rdev)s, ' 'REMOTERESID: %(rres)s).') % { 'lres': pair_params['LOCALRESID'], 'rdev': pair_params['REMOTEDEVICEID'], 'rres': pair_params['REMOTERESID']} self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) return result['data'] def split_replication_pair(self, pair_id): url = '/REPLICATIONPAIR/split' data = jsonutils.dumps({"ID": pair_id, "TYPE": "263"}) result = self.call(url, data, "PUT") msg = _('Failed to split replication pair %s.') % pair_id self._assert_rest_result(result, msg) def switch_replication_pair(self, pair_id): url = '/REPLICATIONPAIR/switch' data = jsonutils.dumps({"ID": pair_id, "TYPE": "263"}) result = self.call(url, data, "PUT") msg = _('Failed to switch replication pair %s.') % pair_id self._assert_rest_result(result, msg) def delete_replication_pair(self, pair_id): url = "/REPLICATIONPAIR/" + pair_id data = None result = self.call(url, data, "DELETE") if (result['error']['code'] == constants.ERROR_REPLICATION_PAIR_NOT_EXIST): LOG.warning('Replication pair %s was not found.', pair_id) return msg = _('Failed to delete replication pair %s.') % pair_id self._assert_rest_result(result, msg) def sync_replication_pair(self, pair_id): url = "/REPLICATIONPAIR/sync" data = jsonutils.dumps({"ID": pair_id, "TYPE": "263"}) result = self.call(url, data, "PUT") msg = _('Failed to sync replication pair %s.') % pair_id self._assert_rest_result(result, msg) def cancel_pair_secondary_write_lock(self, pair_id): url = "/REPLICATIONPAIR/CANCEL_SECODARY_WRITE_LOCK" data = jsonutils.dumps({"ID": pair_id, "TYPE": "263"}) result = self.call(url, data, "PUT") msg = _('Failed to cancel replication pair %s ' 'secondary write lock.') % pair_id self._assert_rest_result(result, msg) def set_pair_secondary_write_lock(self, pair_id): url = "/REPLICATIONPAIR/SET_SECODARY_WRITE_LOCK" data = jsonutils.dumps({"ID": pair_id, "TYPE": "263"}) result = self.call(url, data, "PUT") msg = _('Failed to set replication pair %s ' 'secondary write lock.') % pair_id self._assert_rest_result(result, msg) def get_replication_pair_by_id(self, pair_id): url = "/REPLICATIONPAIR/" + pair_id result = self.call(url, None, "GET") msg = _('Failed to get replication pair %s.') % pair_id self._assert_rest_result(result, msg) self._assert_data_in_result(result, msg) return result.get('data') def rollback_snapshot(self, snap_id): url = "/FSSNAPSHOT/ROLLBACK_FSSNAPSHOT" data = jsonutils.dumps({"ID": snap_id}) result = self.call(url, data, "PUT") msg = _('Failed to rollback snapshot %s.') % snap_id self._assert_rest_result(result, msg) manila-10.0.0/manila/share/drivers/huawei/v3/replication.py0000664000175000017500000002344513656750227023616 0ustar zuulzuul00000000000000# Copyright (c) 2016 Huawei Technologies Co., Ltd. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from oslo_utils import strutils from manila.common import constants as common_constants from manila import exception from manila.i18n import _ from manila.share.drivers.huawei import constants LOG = log.getLogger(__name__) class ReplicaPairManager(object): def __init__(self, helper): self.helper = helper def create(self, local_share_info, remote_device_wwn, remote_fs_id): local_share_name = local_share_info.get('name') try: local_fs_id = self.helper.get_fsid_by_name(local_share_name) if not local_fs_id: msg = _("Local fs was not found by name %s.") LOG.error(msg, local_share_name) raise exception.ReplicationException( reason=msg % local_share_name) remote_device = self.helper.get_remote_device_by_wwn( remote_device_wwn) pair_params = { "LOCALRESID": local_fs_id, "LOCALRESTYPE": constants.FILE_SYSTEM_TYPE, "REMOTEDEVICEID": remote_device.get('ID'), "REMOTEDEVICENAME": remote_device.get('NAME'), "REMOTERESID": remote_fs_id, "REPLICATIONMODEL": constants.REPLICA_ASYNC_MODEL, "RECOVERYPOLICY": '2', "SYNCHRONIZETYPE": '1', "SPEED": constants.REPLICA_SPEED_MEDIUM, } pair_info = self.helper.create_replication_pair(pair_params) except Exception: msg = ("Failed to create replication pair for share %s.") LOG.exception(msg, local_share_name) raise self._sync_replication_pair(pair_info['ID']) return pair_info['ID'] def _get_replication_pair_info(self, replica_pair_id): try: pair_info = self.helper.get_replication_pair_by_id( replica_pair_id) except Exception: LOG.exception('Failed to get replication pair info for ' '%s.', replica_pair_id) raise return pair_info def _check_replication_health(self, pair_info): if (pair_info['HEALTHSTATUS'] != constants.REPLICA_HEALTH_STATUS_NORMAL): return common_constants.STATUS_ERROR def _check_replication_running_status(self, pair_info): if (pair_info['RUNNINGSTATUS'] in ( constants.REPLICA_RUNNING_STATUS_SPLITTED, constants.REPLICA_RUNNING_STATUS_TO_RECOVER)): return common_constants.REPLICA_STATE_OUT_OF_SYNC if (pair_info['RUNNINGSTATUS'] in ( constants.REPLICA_RUNNING_STATUS_INTERRUPTED, constants.REPLICA_RUNNING_STATUS_INVALID)): return common_constants.STATUS_ERROR def _check_replication_secondary_data_status(self, pair_info): if (pair_info['SECRESDATASTATUS'] in constants.REPLICA_DATA_STATUS_IN_SYNC): return common_constants.REPLICA_STATE_IN_SYNC else: return common_constants.REPLICA_STATE_OUT_OF_SYNC def _check_replica_state(self, pair_info): result = self._check_replication_health(pair_info) if result is not None: return result result = self._check_replication_running_status(pair_info) if result is not None: return result return self._check_replication_secondary_data_status(pair_info) def get_replica_state(self, replica_pair_id): try: pair_info = self._get_replication_pair_info(replica_pair_id) except Exception: # if cannot communicate to backend, return error LOG.error('Cannot get replica state, return %s', common_constants.STATUS_ERROR) return common_constants.STATUS_ERROR return 
self._check_replica_state(pair_info) def _sync_replication_pair(self, pair_id): try: self.helper.sync_replication_pair(pair_id) except Exception as err: LOG.warning('Failed to sync replication pair %(id)s. ' 'Reason: %(err)s', {'id': pair_id, 'err': err}) def update_replication_pair_state(self, replica_pair_id): pair_info = self._get_replication_pair_info(replica_pair_id) health = self._check_replication_health(pair_info) if health is not None: LOG.warning("Cannot update the replication %s " "because it's not in normal status.", replica_pair_id) return if strutils.bool_from_string(pair_info['ISPRIMARY']): # current replica is primary, not consistent with manila. # the reason for this circumstance is the last switch over # didn't succeed completely. continue the switch over progress.. try: self.helper.switch_replication_pair(replica_pair_id) except Exception: msg = ('Replication pair %s primary/secondary ' 'relationship is not right, try to switch over ' 'again but still failed.') LOG.exception(msg, replica_pair_id) return # refresh the replication pair info pair_info = self._get_replication_pair_info(replica_pair_id) if pair_info['SECRESACCESS'] == constants.REPLICA_SECONDARY_RW: try: self.helper.set_pair_secondary_write_lock(replica_pair_id) except Exception: msg = ('Replication pair %s secondary access is R/W, ' 'try to set write lock but still failed.') LOG.exception(msg, replica_pair_id) return if pair_info['RUNNINGSTATUS'] in ( constants.REPLICA_RUNNING_STATUS_NORMAL, constants.REPLICA_RUNNING_STATUS_SPLITTED, constants.REPLICA_RUNNING_STATUS_TO_RECOVER): self._sync_replication_pair(replica_pair_id) def switch_over(self, replica_pair_id): pair_info = self._get_replication_pair_info(replica_pair_id) if strutils.bool_from_string(pair_info['ISPRIMARY']): LOG.warning('The replica to promote is already primary, ' 'no need to switch over.') return replica_state = self._check_replica_state(pair_info) if replica_state != common_constants.REPLICA_STATE_IN_SYNC: # replica is not in SYNC state, can't be promoted msg = _('Data of replica %s is not synchronized, ' 'can not promote.') raise exception.ReplicationException( reason=msg % replica_pair_id) try: self.helper.split_replication_pair(replica_pair_id) except Exception: # split failed # means replication pair is in an abnormal status, # ignore this exception, continue to cancel secondary write lock, # let secondary share accessible for disaster recovery. LOG.exception('Failed to split replication pair %s while ' 'switching over.', replica_pair_id) try: self.helper.cancel_pair_secondary_write_lock(replica_pair_id) except Exception: LOG.exception('Failed to cancel replication pair %s ' 'secondary write lock.', replica_pair_id) raise try: self.helper.switch_replication_pair(replica_pair_id) self.helper.set_pair_secondary_write_lock(replica_pair_id) self.helper.sync_replication_pair(replica_pair_id) except Exception: LOG.exception('Failed to completely switch over ' 'replication pair %s.', replica_pair_id) # for all the rest steps, # because secondary share is accessible now, # the upper business may access the secondary share, # return success to tell replica is primary. return def delete_replication_pair(self, replica_pair_id): try: self.helper.split_replication_pair(replica_pair_id) except Exception: # Ignore this exception because replication pair may at some # abnormal status that supports deleting. LOG.warning('Failed to split replication pair %s ' 'before deleting it. 
Ignore this exception, ' 'and try to delete anyway.', replica_pair_id) try: self.helper.delete_replication_pair(replica_pair_id) except Exception: LOG.exception('Failed to delete replication pair %s.', replica_pair_id) raise def create_replica_pair(self, ctx, local_share_info, remote_device_wwn, remote_fs_id): """Create replication pair for RPC call. This is for remote call, because replica pair can only be created by master node. """ return self.create(local_share_info, remote_device_wwn, remote_fs_id) manila-10.0.0/manila/share/drivers/huawei/v3/smartx.py0000664000175000017500000001745013656750227022622 0ustar zuulzuul00000000000000# Copyright (c) 2015 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import excutils from oslo_utils import strutils from manila import exception from manila.i18n import _ from manila.share.drivers.huawei import constants class SmartPartition(object): def __init__(self, helper): self.helper = helper def add(self, opts, fsid): if not strutils.bool_from_string(opts['huawei_smartpartition']): return if not opts['partitionname']: raise exception.InvalidInput( reason=_('Partition name is None, please set ' 'huawei_smartpartition:partitionname in key.')) partition_id = self.helper._get_partition_id_by_name( opts['partitionname']) if not partition_id: raise exception.InvalidInput( reason=_('Can not find partition id.')) self.helper._add_fs_to_partition(fsid, partition_id) class SmartCache(object): def __init__(self, helper): self.helper = helper def add(self, opts, fsid): if not strutils.bool_from_string(opts['huawei_smartcache']): return if not opts['cachename']: raise exception.InvalidInput( reason=_('Illegal value specified for cache.')) cache_id = self.helper._get_cache_id_by_name(opts['cachename']) if not cache_id: raise exception.InvalidInput( reason=(_('Can not find cache id by cache name %(name)s.') % {'name': opts['cachename']})) self.helper._add_fs_to_cache(fsid, cache_id) class SmartQos(object): def __init__(self, helper): self.helper = helper def create_qos(self, qos, fs_id): policy_id = None try: # Check QoS priority. if self._check_qos_high_priority(qos): self.helper.change_fs_priority_high(fs_id) # Create QoS policy and activate it. 
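# NOTE: find_available_qos() first looks for an existing manila-created QoS
# policy on the array whose settings match and which still has room for
# another filesystem; if one is found the filesystem is simply associated
# with it, otherwise a new policy is created for this filesystem and then
# activated. A rough usage sketch, assuming a configured RestHelper instance
# `helper` and an already validated qos dict (key/values illustrative only):
#   qos_mgr = SmartQos(helper)
#   qos_mgr.create_qos({'IOTYPE': '2', 'MAXIOPS': '1000'}, fs_id)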
(qos_id, fs_list) = self.helper.find_available_qos(qos) if qos_id is not None: self.helper.add_share_to_qos(qos_id, fs_id, fs_list) else: policy_id = self.helper.create_qos_policy(qos, fs_id) self.helper.activate_deactivate_qos(policy_id, True) except exception.InvalidInput: with excutils.save_and_reraise_exception(): if policy_id is not None: self.helper.delete_qos_policy(policy_id) def _check_qos_high_priority(self, qos): """Check QoS priority.""" for key, value in qos.items(): if (key.find('MIN') == 0) or (key.find('LATENCY') == 0): return True return False def delete_qos(self, qos_id): qos_info = self.helper.get_qos_info(qos_id) qos_status = qos_info['RUNNINGSTATUS'] if qos_status != constants.STATUS_QOS_INACTIVATED: self.helper.activate_deactivate_qos(qos_id, False) self.helper.delete_qos_policy(qos_id) class SmartX(object): def __init__(self, helper): self.helper = helper def get_smartx_extra_specs_opts(self, opts): opts = self.get_capabilities_opts(opts, 'dedupe') opts = self.get_capabilities_opts(opts, 'compression') opts = self.get_smartprovisioning_opts(opts) opts = self.get_smartcache_opts(opts) opts = self.get_smartpartition_opts(opts) opts = self.get_sectorsize_opts(opts) qos = self.get_qos_opts(opts) return opts, qos def get_capabilities_opts(self, opts, key): if strutils.bool_from_string(opts[key]): opts[key] = True else: opts[key] = False return opts def get_smartprovisioning_opts(self, opts): thin_provision = opts.get('thin_provisioning') if (thin_provision is None or strutils.bool_from_string(thin_provision)): opts['LUNType'] = constants.ALLOC_TYPE_THIN_FLAG else: opts['LUNType'] = constants.ALLOC_TYPE_THICK_FLAG return opts def get_smartcache_opts(self, opts): if strutils.bool_from_string(opts['huawei_smartcache']): if not opts['cachename']: raise exception.InvalidInput( reason=_('Cache name is None, please set ' 'huawei_smartcache:cachename in key.')) else: opts['cachename'] = None return opts def get_smartpartition_opts(self, opts): if strutils.bool_from_string(opts['huawei_smartpartition']): if not opts['partitionname']: raise exception.InvalidInput( reason=_('Partition name is None, please set ' 'huawei_smartpartition:partitionname in key.')) else: opts['partitionname'] = None return opts def get_sectorsize_opts(self, opts): value = None if strutils.bool_from_string(opts.get('huawei_sectorsize')): value = opts.get('sectorsize') if not value: root = self.helper._read_xml() sectorsize = root.findtext('Filesystem/SectorSize') if sectorsize: sectorsize = sectorsize.strip() value = sectorsize if value: if value not in constants.VALID_SECTOR_SIZES: raise exception.InvalidInput( reason=(_('Illegal value(%s) specified for sectorsize: ' 'set to either 4, 8, 16, 32 or 64.') % value)) else: opts['sectorsize'] = int(value) return opts def get_qos_opts(self, opts): qos = {} if not strutils.bool_from_string(opts.get('qos')): return for key, value in opts.items(): if (key in constants.OPTS_QOS_VALUE) and value is not None: if (key.upper() != 'IOTYPE') and (int(value) <= 0): err_msg = (_('QoS config is wrong. %(key)s' ' must be set greater than 0.') % {'key': key}) raise exception.InvalidInput(reason=err_msg) elif ((key.upper() == 'IOTYPE') and (value not in ['0', '1', '2'])): raise exception.InvalidInput( reason=(_('Illegal value specified for IOTYPE: ' 'set to either 0, 1, or 2.'))) else: qos[key.upper()] = value if len(qos) <= 1 or 'IOTYPE' not in qos: msg = (_('QoS config is incomplete. Please set more. 
' 'QoS policy: %(qos_policy)s.') % {'qos_policy': qos}) raise exception.InvalidInput(reason=msg) lowerlimit = constants.QOS_LOWER_LIMIT upperlimit = constants.QOS_UPPER_LIMIT if (set(lowerlimit).intersection(set(qos)) and set(upperlimit).intersection(set(qos))): msg = (_('QoS policy conflict, both protection policy and ' 'restriction policy are set. ' 'QoS policy: %(qos_policy)s ') % {'qos_policy': qos}) raise exception.InvalidInput(reason=msg) return qos manila-10.0.0/manila/share/drivers/huawei/v3/connection.py0000664000175000017500000022733713656750227023452 0ustar zuulzuul00000000000000# Copyright (c) 2015 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import random import string import tempfile import time from oslo_config import cfg from oslo_log import log import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_utils import excutils from oslo_utils import strutils from oslo_utils import units import six from manila.common import constants as common_constants from manila.data import utils as data_utils from manila import exception from manila.i18n import _ from manila import rpc from manila.share.drivers.huawei import base as driver from manila.share.drivers.huawei import constants from manila.share.drivers.huawei import huawei_utils from manila.share.drivers.huawei.v3 import helper from manila.share.drivers.huawei.v3 import replication from manila.share.drivers.huawei.v3 import rpcapi as v3_rpcapi from manila.share.drivers.huawei.v3 import smartx from manila.share import share_types from manila.share import utils as share_utils from manila import utils CONF = cfg.CONF LOG = log.getLogger(__name__) class V3StorageConnection(driver.HuaweiBase): """Helper class for Huawei OceanStor V3 storage system.""" def __init__(self, configuration, **kwargs): super(V3StorageConnection, self).__init__(configuration) self.helper = helper.RestHelper(self.configuration) self.replica_mgr = replication.ReplicaPairManager(self.helper) self.rpc_client = v3_rpcapi.HuaweiV3API() self.private_storage = kwargs.get('private_storage') self.qos_support = False self.snapshot_support = False self.replication_support = False def _setup_rpc_server(self, endpoints): host = "%s@%s" % (CONF.host, self.configuration.config_group) target = messaging.Target(topic=self.rpc_client.topic, server=host) self.rpc_server = rpc.get_server(target, endpoints) self.rpc_server.start() def connect(self): """Try to connect to V3 server.""" self.helper.login() self._setup_rpc_server([self.replica_mgr]) self._setup_conf() def _setup_conf(self): root = self.helper._read_xml() snapshot_support = root.findtext('Storage/SnapshotSupport') if snapshot_support: self.snapshot_support = strutils.bool_from_string( snapshot_support, strict=True) replication_support = root.findtext('Storage/ReplicationSupport') if replication_support: self.replication_support = strutils.bool_from_string( replication_support, strict=True) def create_share(self, share, share_server=None): 
"""Create a share.""" share_name = share['name'] share_proto = share['share_proto'] pool_name = share_utils.extract_host(share['host'], level='pool') if not pool_name: msg = _("Pool is not available in the share host field.") raise exception.InvalidHost(reason=msg) result = self.helper._find_all_pool_info() poolinfo = self.helper._find_pool_info(pool_name, result) if not poolinfo: msg = (_("Can not find pool info by pool name: %s.") % pool_name) raise exception.InvalidHost(reason=msg) fs_id = None # We sleep here to ensure the newly created filesystem can be read. wait_interval = self._get_wait_interval() timeout = self._get_timeout() try: fs_id = self.allocate_container(share, poolinfo) fs = self.helper._get_fs_info_by_id(fs_id) end_time = time.time() + timeout while not (self.check_fs_status(fs['HEALTHSTATUS'], fs['RUNNINGSTATUS']) or time.time() > end_time): time.sleep(wait_interval) fs = self.helper._get_fs_info_by_id(fs_id) if not self.check_fs_status(fs['HEALTHSTATUS'], fs['RUNNINGSTATUS']): raise exception.InvalidShare( reason=(_('Invalid status of filesystem: ' 'HEALTHSTATUS=%(health)s ' 'RUNNINGSTATUS=%(running)s.') % {'health': fs['HEALTHSTATUS'], 'running': fs['RUNNINGSTATUS']})) except Exception as err: if fs_id is not None: qos_id = self.helper.get_qosid_by_fsid(fs_id) if qos_id: self.remove_qos_fs(fs_id, qos_id) self.helper._delete_fs(fs_id) message = (_('Failed to create share %(name)s. ' 'Reason: %(err)s.') % {'name': share_name, 'err': err}) raise exception.InvalidShare(reason=message) try: self.helper.create_share(share_name, fs_id, share_proto) except Exception as err: if fs_id is not None: qos_id = self.helper.get_qosid_by_fsid(fs_id) if qos_id: self.remove_qos_fs(fs_id, qos_id) self.helper._delete_fs(fs_id) raise exception.InvalidShare( reason=(_('Failed to create share %(name)s. Reason: %(err)s.') % {'name': share_name, 'err': err})) ip = self._get_share_ip(share_server) location = self._get_location_path(share_name, share_proto, ip) return location def _get_share_ip(self, share_server): """"Get share logical ip.""" if share_server: ip = share_server['backend_details'].get('ip') else: root = self.helper._read_xml() ip = root.findtext('Storage/LogicalPortIP').strip() return ip def extend_share(self, share, new_size, share_server): share_proto = share['share_proto'] share_name = share['name'] # The unit is in sectors. size = int(new_size) * units.Mi * 2 share_url_type = self.helper._get_share_url_type(share_proto) share = self.helper._get_share_by_name(share_name, share_url_type) if not share: err_msg = (_("Can not get share ID by share %s.") % share_name) LOG.error(err_msg) raise exception.InvalidShareAccess(reason=err_msg) fsid = share['FSID'] fs_info = self.helper._get_fs_info_by_id(fsid) current_size = int(fs_info['CAPACITY']) / units.Mi / 2 if current_size >= new_size: err_msg = (_("New size for extend must be bigger than " "current size on array. (current: %(size)s, " "new: %(new_size)s).") % {'size': current_size, 'new_size': new_size}) LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) self.helper._change_share_size(fsid, size) def shrink_share(self, share, new_size, share_server): """Shrinks size of existing share.""" share_proto = share['share_proto'] share_name = share['name'] # The unit is in sectors. 
size = int(new_size) * units.Mi * 2 share_url_type = self.helper._get_share_url_type(share_proto) share = self.helper._get_share_by_name(share_name, share_url_type) if not share: err_msg = (_("Can not get share ID by share %s.") % share_name) LOG.error(err_msg) raise exception.InvalidShare(reason=err_msg) fsid = share['FSID'] fs_info = self.helper._get_fs_info_by_id(fsid) if not fs_info: err_msg = (_("Can not get filesystem info by filesystem ID: %s.") % fsid) LOG.error(err_msg) raise exception.InvalidShare(reason=err_msg) current_size = int(fs_info['CAPACITY']) / units.Mi / 2 if current_size <= new_size: err_msg = (_("New size for shrink must be less than current " "size on array. (current: %(size)s, " "new: %(new_size)s).") % {'size': current_size, 'new_size': new_size}) LOG.error(err_msg) raise exception.InvalidShare(reason=err_msg) if fs_info['ALLOCTYPE'] != constants.ALLOC_TYPE_THIN_FLAG: err_msg = (_("Share (%s) can not be shrunk. only 'Thin' shares " "support shrink.") % share_name) LOG.error(err_msg) raise exception.InvalidShare(reason=err_msg) self.helper._change_share_size(fsid, size) def check_fs_status(self, health_status, running_status): if (health_status == constants.STATUS_FS_HEALTH and running_status == constants.STATUS_FS_RUNNING): return True else: return False def assert_filesystem(self, fsid): fs = self.helper._get_fs_info_by_id(fsid) if not self.check_fs_status(fs['HEALTHSTATUS'], fs['RUNNINGSTATUS']): err_msg = (_('Invalid status of filesystem: ' 'HEALTHSTATUS=%(health)s ' 'RUNNINGSTATUS=%(running)s.') % {'health': fs['HEALTHSTATUS'], 'running': fs['RUNNINGSTATUS']}) raise exception.StorageResourceException(err_msg) def create_snapshot(self, snapshot, share_server=None): """Create a snapshot.""" snap_name = snapshot['id'] share_proto = snapshot['share']['share_proto'] share_url_type = self.helper._get_share_url_type(share_proto) share = self.helper._get_share_by_name(snapshot['share_name'], share_url_type) if not share: err_msg = _('Can not create snapshot,' ' because share id is not provided.') LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) sharefsid = share['FSID'] snapshot_name = "share_snapshot_" + snap_name snap_id = self.helper._create_snapshot(sharefsid, snapshot_name) LOG.info('Creating snapshot id %s.', snap_id) return snapshot_name.replace("-", "_") def delete_snapshot(self, snapshot, share_server=None): """Delete a snapshot.""" LOG.debug("Delete a snapshot.") snap_name = snapshot['id'] sharefsid = self.helper.get_fsid_by_name(snapshot['share_name']) if sharefsid is None: LOG.warning('Delete snapshot share id %s fs has been ' 'deleted.', snap_name) return snapshot_id = self.helper._get_snapshot_id(sharefsid, snap_name) snapshot_info = self.helper._get_snapshot_by_id(snapshot_id) snapshot_flag = self.helper._check_snapshot_id_exist(snapshot_info) if snapshot_flag: self.helper._delete_snapshot(snapshot_id) else: LOG.warning("Can not find snapshot %s on array.", snap_name) def update_share_stats(self, stats_dict): """Retrieve status info from share group.""" root = self.helper._read_xml() all_pool_info = self.helper._find_all_pool_info() stats_dict["pools"] = [] pool_name_list = root.findtext('Filesystem/StoragePool') pool_name_list = pool_name_list.split(";") for pool_name in pool_name_list: pool_name = pool_name.strip().strip('\n') capacity = self._get_capacity(pool_name, all_pool_info) disk_type = self._get_disk_type(pool_name, all_pool_info) if capacity: pool = dict( pool_name=pool_name, total_capacity_gb=capacity['TOTALCAPACITY'], 
free_capacity_gb=capacity['CAPACITY'], provisioned_capacity_gb=( capacity['PROVISIONEDCAPACITYGB']), max_over_subscription_ratio=( self.configuration.safe_get( 'max_over_subscription_ratio')), allocated_capacity_gb=capacity['CONSUMEDCAPACITY'], qos=self._get_qos_capability(), reserved_percentage=0, thin_provisioning=[True, False], dedupe=[True, False], compression=[True, False], huawei_smartcache=[True, False], huawei_smartpartition=[True, False], huawei_sectorsize=[True, False], ) if disk_type: pool['huawei_disk_type'] = disk_type stats_dict["pools"].append(pool) if not stats_dict["pools"]: err_msg = _("The StoragePool is None.") LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) def _get_qos_capability(self): version = self.helper.find_array_version() if version.upper() >= constants.MIN_ARRAY_VERSION_FOR_QOS: self.qos_support = True else: self.qos_support = False return self.qos_support def delete_share(self, share, share_server=None): """Delete share.""" share_name = share['name'] share_url_type = self.helper._get_share_url_type(share['share_proto']) share = self.helper._get_share_by_name(share_name, share_url_type) if not share: LOG.warning('The share was not found. Share name:%s', share_name) fsid = self.helper.get_fsid_by_name(share_name) if fsid: self.helper._delete_fs(fsid) return LOG.warning('The filesystem was not found.') return share_id = share['ID'] share_fs_id = share['FSID'] if share_id: self.helper._delete_share_by_id(share_id, share_url_type) if share_fs_id: if self.qos_support: qos_id = self.helper.get_qosid_by_fsid(share_fs_id) if qos_id: self.remove_qos_fs(share_fs_id, qos_id) self.helper._delete_fs(share_fs_id) return share def create_share_from_snapshot(self, share, snapshot, share_server=None, parent_share=None): """Create a share from snapshot.""" share_fs_id = self.helper.get_fsid_by_name(snapshot['share_name']) if not share_fs_id: err_msg = (_("The source filesystem of snapshot %s " "does not exist.") % snapshot['snapshot_id']) LOG.error(err_msg) raise exception.StorageResourceNotFound( name=snapshot['share_name']) snapshot_id = self.helper._get_snapshot_id(share_fs_id, snapshot['id']) snapshot_info = self.helper._get_snapshot_by_id(snapshot_id) snapshot_flag = self.helper._check_snapshot_id_exist(snapshot_info) if not snapshot_flag: err_msg = (_("Cannot find snapshot %s on array.") % snapshot['snapshot_id']) LOG.error(err_msg) raise exception.ShareSnapshotNotFound( snapshot_id=snapshot['snapshot_id']) self.assert_filesystem(share_fs_id) old_share_name = self.helper.get_share_name_by_id( snapshot['share_id']) old_share_proto = self._get_share_proto(old_share_name) if not old_share_proto: err_msg = (_("Cannot find source share %(share)s of " "snapshot %(snapshot)s on array.") % {'share': snapshot['share_id'], 'snapshot': snapshot['snapshot_id']}) LOG.error(err_msg) raise exception.ShareResourceNotFound( share_id=snapshot['share_id']) new_share_path = self.create_share(share) new_share = { "share_proto": share['share_proto'], "size": share['size'], "name": share['name'], "mount_path": new_share_path.replace("\\", "/"), "mount_src": tempfile.mkdtemp(prefix=constants.TMP_PATH_DST_PREFIX), "id": snapshot['share_id'], } old_share_path = self._get_location_path(old_share_name, old_share_proto) old_share = { "share_proto": old_share_proto, "name": old_share_name, "mount_path": old_share_path.replace("\\", "/"), "mount_src": tempfile.mkdtemp(prefix=constants.TMP_PATH_SRC_PREFIX), "snapshot_name": ("share_snapshot_" + snapshot['id'].replace("-", "_")), "id": 
snapshot['share_id'], } try: self.copy_data_from_parent_share(old_share, new_share) except Exception: with excutils.save_and_reraise_exception(): self.delete_share(new_share) finally: for item in (new_share, old_share): try: os.rmdir(item['mount_src']) except Exception as err: LOG.warning('Failed to remove temp file. File path:' '%(file_path)s. Reason: %(err)s.', {'file_path': item['mount_src'], 'err': err}) return new_share_path def copy_data_from_parent_share(self, old_share, new_share): old_access = self.get_access(old_share) old_access_id = self._get_access_id(old_share, old_access) if not old_access_id: try: self.allow_access(old_share, old_access) except exception.ManilaException as err: with excutils.save_and_reraise_exception(): LOG.error('Failed to add access to share %(name)s. ' 'Reason: %(err)s.', {'name': old_share['name'], 'err': err}) new_access = self.get_access(new_share) try: try: self.mount_share_to_host(old_share, old_access) except exception.ShareMountException as err: with excutils.save_and_reraise_exception(): LOG.error('Failed to mount old share %(name)s. ' 'Reason: %(err)s.', {'name': old_share['name'], 'err': err}) try: self.allow_access(new_share, new_access) self.mount_share_to_host(new_share, new_access) except Exception as err: with excutils.save_and_reraise_exception(): self.umount_share_from_host(old_share) LOG.error('Failed to mount new share %(name)s. ' 'Reason: %(err)s.', {'name': new_share['name'], 'err': err}) copied = self.copy_snapshot_data(old_share, new_share) for item in (new_share, old_share): try: self.umount_share_from_host(item) except exception.ShareUmountException as err: LOG.warning('Failed to unmount share %(name)s. ' 'Reason: %(err)s.', {'name': item['name'], 'err': err}) self.deny_access(new_share, new_access) if copied: LOG.debug("Created share from snapshot successfully, " "new_share: %s, old_share: %s.", new_share, old_share) else: message = (_('Failed to copy data from share %(old_share)s ' 'to share %(new_share)s.') % {'old_share': old_share['name'], 'new_share': new_share['name']}) raise exception.ShareCopyDataException(reason=message) finally: if not old_access_id: self.deny_access(old_share, old_access) def get_access(self, share): share_proto = share['share_proto'] access = {} root = self.helper._read_xml() if share_proto == 'NFS': access['access_to'] = root.findtext('Filesystem/NFSClient/IP') access['access_level'] = common_constants.ACCESS_LEVEL_RW access['access_type'] = 'ip' elif share_proto == 'CIFS': access['access_to'] = root.findtext( 'Filesystem/CIFSClient/UserName') access['access_password'] = root.findtext( 'Filesystem/CIFSClient/UserPassword') access['access_level'] = common_constants.ACCESS_LEVEL_RW access['access_type'] = 'user' LOG.debug("Get access for share: %s, access_type: %s, access_to: %s, " "access_level: %s", share['name'], access['access_type'], access['access_to'], access['access_level']) return access def _get_access_id(self, share, access): """Get access id of the share.""" access_id = None share_name = share['name'] share_proto = share['share_proto'] share_url_type = self.helper._get_share_url_type(share_proto) access_to = access['access_to'] share = self.helper._get_share_by_name(share_name, share_url_type) access_id = self.helper._get_access_from_share(share['ID'], access_to, share_proto) if access_id is None: LOG.debug('Cannot get access ID from share. 
' 'share_name: %s', share_name) return access_id def copy_snapshot_data(self, old_share, new_share): src_path = '/'.join((old_share['mount_src'], '.snapshot', old_share['snapshot_name'])) dst_path = new_share['mount_src'] copy_finish = False LOG.debug("Copy data from src_path: %s to dst_path: %s.", src_path, dst_path) try: ignore_list = '' copy = data_utils.Copy(src_path, dst_path, ignore_list) copy.run() if copy.get_progress()['total_progress'] == 100: copy_finish = True except Exception as err: LOG.error("Failed to copy data, reason: %s.", err) return copy_finish def umount_share_from_host(self, share): try: utils.execute('umount', share['mount_path'], run_as_root=True) except Exception as err: message = (_("Failed to unmount share %(share)s. " "Reason: %(reason)s.") % {'share': share['name'], 'reason': six.text_type(err)}) raise exception.ShareUmountException(reason=message) def mount_share_to_host(self, share, access): LOG.debug("Mounting share: %s to host, mount_src: %s", share['name'], share['mount_src']) try: if share['share_proto'] == 'NFS': utils.execute('mount', '-t', 'nfs', share['mount_path'], share['mount_src'], run_as_root=True) LOG.debug("Execute mount. mount_src: %s", share['mount_src']) elif share['share_proto'] == 'CIFS': user = ('username=' + access['access_to'] + ',' + 'password=' + access['access_password']) utils.execute('mount', '-t', 'cifs', share['mount_path'], share['mount_src'], '-o', user, run_as_root=True) except Exception as err: message = (_('Bad response from mount share: %(share)s. ' 'Reason: %(reason)s.') % {'share': share['name'], 'reason': six.text_type(err)}) raise exception.ShareMountException(reason=message) def get_network_allocations_number(self): """Get number of network interfaces to be created.""" if self.configuration.driver_handles_share_servers: return constants.IP_ALLOCATIONS_DHSS_TRUE else: return constants.IP_ALLOCATIONS_DHSS_FALSE def _get_capacity(self, pool_name, result): """Get free capacity and total capacity of the pools.""" poolinfo = self.helper._find_pool_info(pool_name, result) if poolinfo: total = float(poolinfo['TOTALCAPACITY']) / units.Mi / 2 free = float(poolinfo['CAPACITY']) / units.Mi / 2 consumed = float(poolinfo['CONSUMEDCAPACITY']) / units.Mi / 2 poolinfo['TOTALCAPACITY'] = total poolinfo['CAPACITY'] = free poolinfo['CONSUMEDCAPACITY'] = consumed poolinfo['PROVISIONEDCAPACITYGB'] = round( float(total) - float(free), 2) return poolinfo def _get_disk_type(self, pool_name, result): """Get disk type of the pool.""" pool_info = self.helper._find_pool_info(pool_name, result) if not pool_info: return None pool_disk = [] for i, x in enumerate(['ssd', 'sas', 'nl_sas']): if pool_info['TIER%dCAPACITY' % i] != '0': pool_disk.append(x) if len(pool_disk) > 1: pool_disk = ['mix'] return pool_disk[0] if pool_disk else None def _init_filesys_para(self, share, poolinfo, extra_specs): """Init basic filesystem parameters.""" name = share['name'] size = int(share['size']) * units.Mi * 2 fileparam = { "NAME": name.replace("-", "_"), "DESCRIPTION": "", "ALLOCTYPE": extra_specs['LUNType'], "CAPACITY": size, "PARENTID": poolinfo['ID'], "INITIALALLOCCAPACITY": units.Ki * 20, "PARENTTYPE": 216, "SNAPSHOTRESERVEPER": 20, "INITIALDISTRIBUTEPOLICY": 0, "ISSHOWSNAPDIR": True, "RECYCLESWITCH": 0, "RECYCLEHOLDTIME": 15, "RECYCLETHRESHOLD": 0, "RECYCLEAUTOCLEANSWITCH": 0, "ENABLEDEDUP": extra_specs['dedupe'], "ENABLECOMPRESSION": extra_specs['compression'], } if fileparam['ALLOCTYPE'] == constants.ALLOC_TYPE_THICK_FLAG: if (extra_specs['dedupe'] or 
extra_specs['compression']): err_msg = _( 'The filesystem type is "Thick",' ' so dedupe or compression cannot be set.') LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) if extra_specs['sectorsize']: fileparam['SECTORSIZE'] = extra_specs['sectorsize'] * units.Ki return fileparam def deny_access(self, share, access, share_server=None): """Deny access to share.""" share_proto = share['share_proto'] share_name = share['name'] share_url_type = self.helper._get_share_url_type(share_proto) access_type = access['access_type'] if share_proto == 'NFS' and access_type not in ('ip', 'user'): LOG.warning('Only IP or USER access types are allowed for ' 'NFS shares.') return elif share_proto == 'CIFS' and access_type != 'user': LOG.warning('Only USER access type is allowed for' ' CIFS shares.') return access_to = access['access_to'] # Huawei array uses * to represent IP addresses of all clients if (share_proto == 'NFS' and access_type == 'ip' and access_to == '0.0.0.0/0'): access_to = '*' share = self.helper._get_share_by_name(share_name, share_url_type) if not share: LOG.warning('Can not get share %s.', share_name) return access_id = self.helper._get_access_from_share(share['ID'], access_to, share_proto) if not access_id: LOG.warning('Can not get access id from share. ' 'share_name: %s', share_name) return self.helper._remove_access_from_share(access_id, share_proto) def allow_access(self, share, access, share_server=None): """Allow access to the share.""" share_proto = share['share_proto'] share_name = share['name'] share_url_type = self.helper._get_share_url_type(share_proto) access_type = access['access_type'] access_level = access['access_level'] access_to = access['access_to'] if access_level not in common_constants.ACCESS_LEVELS: raise exception.InvalidShareAccess( reason=(_('Unsupported level of access was provided - %s') % access_level)) if share_proto == 'NFS': if access_type == 'user': # Use 'user' as 'netgroup' for NFS. # A group name starts with @. 
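                # Illustrative note (not in the original source): with
                # access_type 'user', an NFS rule whose access_to is e.g.
                # 'netgroup1' is rewritten to '@netgroup1' below, which is
                # how the array identifies NFS netgroups. The netgroup name
                # here is a hypothetical example.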
access_to = '@' + access_to elif access_type != 'ip': message = _('Only IP or USER access types ' 'are allowed for NFS shares.') raise exception.InvalidShareAccess(reason=message) if access_level == common_constants.ACCESS_LEVEL_RW: access_level = constants.ACCESS_NFS_RW else: access_level = constants.ACCESS_NFS_RO # Huawei array uses * to represent IP addresses of all clients if access_to == '0.0.0.0/0': access_to = '*' elif share_proto == 'CIFS': if access_type == 'user': if access_level == common_constants.ACCESS_LEVEL_RW: access_level = constants.ACCESS_CIFS_FULLCONTROL else: access_level = constants.ACCESS_CIFS_RO else: message = _('Only USER access type is allowed' ' for CIFS shares.') raise exception.InvalidShareAccess(reason=message) share_stor = self.helper._get_share_by_name(share_name, share_url_type) if not share_stor: err_msg = (_("Share %s does not exist on the backend.") % share_name) LOG.error(err_msg) raise exception.ShareResourceNotFound(share_id=share['id']) share_id = share_stor['ID'] # Check if access already exists access_id = self.helper._get_access_from_share(share_id, access_to, share_proto) if access_id: # Check if the access level equal level_exist = self.helper._get_level_by_access_id(access_id, share_proto) if level_exist != access_level: # Change the access level self.helper._change_access_rest(access_id, share_proto, access_level) else: # Add this access to share self.helper._allow_access_rest(share_id, access_to, share_proto, access_level) def clear_access(self, share, share_server=None): """Remove all access rules of the share""" share_proto = share['share_proto'] share_name = share['name'] share_url_type = self.helper._get_share_url_type(share_proto) share_stor = self.helper._get_share_by_name(share_name, share_url_type) if not share_stor: LOG.warning('Cannot get share %s.', share_name) return share_id = share_stor['ID'] all_accesses = self.helper._get_all_access_from_share(share_id, share_proto) for access_id in all_accesses: self.helper._remove_access_from_share(access_id, share_proto) def update_access(self, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules list.""" if not (add_rules or delete_rules): self.clear_access(share, share_server) for access in access_rules: self.allow_access(share, access, share_server) else: for access in delete_rules: self.deny_access(share, access, share_server) for access in add_rules: self.allow_access(share, access, share_server) def get_pool(self, share): pool_name = share_utils.extract_host(share['host'], level='pool') if pool_name: return pool_name share_name = share['name'] share_url_type = self.helper._get_share_url_type(share['share_proto']) share = self.helper._get_share_by_name(share_name, share_url_type) pool_name = None if share: pool = self.helper._get_fs_info_by_id(share['FSID']) pool_name = pool['POOLNAME'] return pool_name def allocate_container(self, share, poolinfo): """Creates filesystem associated to share by name.""" opts = huawei_utils.get_share_extra_specs_params( share['share_type_id']) if opts is None: opts = constants.OPTS_CAPABILITIES smart = smartx.SmartX(self.helper) smartx_opts, qos = smart.get_smartx_extra_specs_opts(opts) fileParam = self._init_filesys_para(share, poolinfo, smartx_opts) fsid = self.helper._create_filesystem(fileParam) try: if qos: smart_qos = smartx.SmartQos(self.helper) smart_qos.create_qos(qos, fsid) smartpartition = smartx.SmartPartition(self.helper) smartpartition.add(opts, fsid) smartcache = smartx.SmartCache(self.helper) 
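            # Illustrative note (not part of the original source): 'opts' is
            # built from the share type's extra specs, so the QoS, SmartCache
            # and SmartPartition steps around this point expect keys such as
            # 'huawei_smartcache' / 'huawei_smartcache:cachename' and
            # 'huawei_smartpartition' / 'huawei_smartpartition:partitionname'
            # (key names taken from this driver; actual values are
            # deployment specific).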
smartcache.add(opts, fsid) except Exception as err: if fsid is not None: qos_id = self.helper.get_qosid_by_fsid(fsid) if qos_id: self.remove_qos_fs(fsid, qos_id) self.helper._delete_fs(fsid) message = (_('Failed to add smartx. Reason: %(err)s.') % {'err': err}) raise exception.InvalidShare(reason=message) return fsid def manage_existing(self, share, driver_options): """Manage existing share.""" share_proto = share['share_proto'] share_name = share['name'] old_export_location = share['export_locations'][0]['path'] pool_name = share_utils.extract_host(share['host'], level='pool') share_url_type = self.helper._get_share_url_type(share_proto) old_share_name = self.helper._get_share_name_by_export_location( old_export_location, share_proto) share_storage = self.helper._get_share_by_name(old_share_name, share_url_type) if not share_storage: err_msg = (_("Can not get share ID by share %s.") % old_export_location) LOG.error(err_msg) raise exception.InvalidShare(reason=err_msg) fs_id = share_storage['FSID'] fs = self.helper._get_fs_info_by_id(fs_id) if not self.check_fs_status(fs['HEALTHSTATUS'], fs['RUNNINGSTATUS']): raise exception.InvalidShare( reason=(_('Invalid status of filesystem: ' 'HEALTHSTATUS=%(health)s ' 'RUNNINGSTATUS=%(running)s.') % {'health': fs['HEALTHSTATUS'], 'running': fs['RUNNINGSTATUS']})) if pool_name and pool_name != fs['POOLNAME']: raise exception.InvalidHost( reason=(_('The current pool(%(fs_pool)s) of filesystem ' 'does not match the input pool(%(host_pool)s).') % {'fs_pool': fs['POOLNAME'], 'host_pool': pool_name})) result = self.helper._find_all_pool_info() poolinfo = self.helper._find_pool_info(pool_name, result) opts = huawei_utils.get_share_extra_specs_params( share['share_type_id']) specs = share_types.get_share_type_extra_specs(share['share_type_id']) if ('capabilities:thin_provisioning' not in specs.keys() and 'thin_provisioning' not in specs.keys()): if fs['ALLOCTYPE'] == constants.ALLOC_TYPE_THIN_FLAG: opts['thin_provisioning'] = constants.THIN_PROVISIONING else: opts['thin_provisioning'] = constants.THICK_PROVISIONING change_opts = self.check_retype_change_opts(opts, poolinfo, fs) LOG.info('Retyping share (%(share)s), changed options are : ' '(%(change_opts)s).', {'share': old_share_name, 'change_opts': change_opts}) try: self.retype_share(change_opts, fs_id) except Exception as err: message = (_("Retype share error. Share: %(share)s. " "Reason: %(reason)s.") % {'share': old_share_name, 'reason': err}) raise exception.InvalidShare(reason=message) share_size = int(fs['CAPACITY']) / units.Mi / 2 self.helper._change_fs_name(fs_id, share_name) location = self._get_location_path(share_name, share_proto) return (share_size, [location]) def _check_snapshot_valid_for_manage(self, snapshot_info): snapshot_name = snapshot_info['data']['NAME'] # Check whether the snapshot is normal. if (snapshot_info['data']['HEALTHSTATUS'] != constants.STATUS_FSSNAPSHOT_HEALTH): msg = (_("Can't import snapshot %(snapshot)s to Manila. 
" "Snapshot status is not normal, snapshot status: " "%(status)s.") % {'snapshot': snapshot_name, 'status': snapshot_info['data']['HEALTHSTATUS']}) raise exception.ManageInvalidShareSnapshot( reason=msg) def manage_existing_snapshot(self, snapshot, driver_options): """Manage existing snapshot.""" share_proto = snapshot['share']['share_proto'] share_url_type = self.helper._get_share_url_type(share_proto) share_storage = self.helper._get_share_by_name(snapshot['share_name'], share_url_type) if not share_storage: err_msg = (_("Failed to import snapshot %(snapshot)s to Manila. " "Snapshot source share %(share)s doesn't exist " "on array.") % {'snapshot': snapshot['provider_location'], 'share': snapshot['share_name']}) raise exception.InvalidShare(reason=err_msg) sharefsid = share_storage['FSID'] provider_location = snapshot.get('provider_location') snapshot_id = sharefsid + "@" + provider_location snapshot_info = self.helper._get_snapshot_by_id(snapshot_id) snapshot_flag = self.helper._check_snapshot_id_exist(snapshot_info) if not snapshot_flag: err_msg = (_("Cannot find snapshot %s on array.") % snapshot['provider_location']) raise exception.ManageInvalidShareSnapshot(reason=err_msg) else: self._check_snapshot_valid_for_manage(snapshot_info) snapshot_name = ("share_snapshot_" + snapshot['id'].replace("-", "_")) self.helper._rename_share_snapshot(snapshot_id, snapshot_name) return snapshot_name def check_retype_change_opts(self, opts, poolinfo, fs): change_opts = { "partitionid": None, "cacheid": None, "dedupe&compression": None, } # SmartPartition old_partition_id = fs['SMARTPARTITIONID'] old_partition_name = None new_partition_id = None new_partition_name = None if strutils.bool_from_string(opts['huawei_smartpartition']): if not opts['partitionname']: raise exception.InvalidInput( reason=_('Partition name is None, please set ' 'huawei_smartpartition:partitionname in key.')) new_partition_name = opts['partitionname'] new_partition_id = self.helper._get_partition_id_by_name( new_partition_name) if new_partition_id is None: raise exception.InvalidInput( reason=(_("Can't find partition name on the array, " "partition name is: %(name)s.") % {"name": new_partition_name})) if old_partition_id != new_partition_id: if old_partition_id: partition_info = self.helper.get_partition_info_by_id( old_partition_id) old_partition_name = partition_info['NAME'] change_opts["partitionid"] = ([old_partition_id, old_partition_name], [new_partition_id, new_partition_name]) # SmartCache old_cache_id = fs['SMARTCACHEID'] old_cache_name = None new_cache_id = None new_cache_name = None if strutils.bool_from_string(opts['huawei_smartcache']): if not opts['cachename']: raise exception.InvalidInput( reason=_('Cache name is None, please set ' 'huawei_smartcache:cachename in key.')) new_cache_name = opts['cachename'] new_cache_id = self.helper._get_cache_id_by_name( new_cache_name) if new_cache_id is None: raise exception.InvalidInput( reason=(_("Can't find cache name on the array, " "cache name is: %(name)s.") % {"name": new_cache_name})) if old_cache_id != new_cache_id: if old_cache_id: cache_info = self.helper.get_cache_info_by_id( old_cache_id) old_cache_name = cache_info['NAME'] change_opts["cacheid"] = ([old_cache_id, old_cache_name], [new_cache_id, new_cache_name]) # SmartDedupe&SmartCompression smartx_opts = constants.OPTS_CAPABILITIES if opts is not None: smart = smartx.SmartX(self.helper) smartx_opts, qos = smart.get_smartx_extra_specs_opts(opts) old_compression = fs['COMPRESSION'] new_compression = 
smartx_opts['compression'] old_dedupe = fs['DEDUP'] new_dedupe = smartx_opts['dedupe'] if fs['ALLOCTYPE'] == constants.ALLOC_TYPE_THIN_FLAG: fs['ALLOCTYPE'] = constants.ALLOC_TYPE_THIN else: fs['ALLOCTYPE'] = constants.ALLOC_TYPE_THICK if strutils.bool_from_string(opts['thin_provisioning']): opts['thin_provisioning'] = constants.ALLOC_TYPE_THIN else: opts['thin_provisioning'] = constants.ALLOC_TYPE_THICK if fs['ALLOCTYPE'] != opts['thin_provisioning']: msg = (_("Manage existing share " "fs type and new_share_type mismatch. " "fs type is: %(fs_type)s, " "new_share_type is: %(new_share_type)s") % {"fs_type": fs['ALLOCTYPE'], "new_share_type": opts['thin_provisioning']}) raise exception.InvalidHost(reason=msg) else: if fs['ALLOCTYPE'] == constants.ALLOC_TYPE_THICK: if new_compression or new_dedupe: raise exception.InvalidInput( reason=_("Dedupe or compression cannot be set for " "thick filesystem.")) else: if (old_dedupe != new_dedupe or old_compression != new_compression): change_opts["dedupe&compression"] = ([old_dedupe, old_compression], [new_dedupe, new_compression]) return change_opts def retype_share(self, change_opts, fs_id): if change_opts.get('partitionid'): old, new = change_opts['partitionid'] old_id = old[0] old_name = old[1] new_id = new[0] new_name = new[1] if old_id: self.helper._remove_fs_from_partition(fs_id, old_id) if new_id: self.helper._add_fs_to_partition(fs_id, new_id) msg = (_("Retype FS(id: %(fs_id)s) smartpartition from " "(name: %(old_name)s, id: %(old_id)s) to " "(name: %(new_name)s, id: %(new_id)s) " "performed successfully.") % {"fs_id": fs_id, "old_id": old_id, "old_name": old_name, "new_id": new_id, "new_name": new_name}) LOG.info(msg) if change_opts.get('cacheid'): old, new = change_opts['cacheid'] old_id = old[0] old_name = old[1] new_id = new[0] new_name = new[1] if old_id: self.helper._remove_fs_from_cache(fs_id, old_id) if new_id: self.helper._add_fs_to_cache(fs_id, new_id) msg = (_("Retype FS(id: %(fs_id)s) smartcache from " "(name: %(old_name)s, id: %(old_id)s) to " "(name: %(new_name)s, id: %(new_id)s) " "performed successfully.") % {"fs_id": fs_id, "old_id": old_id, "old_name": old_name, "new_id": new_id, "new_name": new_name}) LOG.info(msg) if change_opts.get('dedupe&compression'): old, new = change_opts['dedupe&compression'] old_dedupe = old[0] old_compression = old[1] new_dedupe = new[0] new_compression = new[1] if ((old_dedupe != new_dedupe) or (old_compression != new_compression)): new_smartx_opts = {"dedupe": new_dedupe, "compression": new_compression} self.helper._change_extra_specs(fs_id, new_smartx_opts) msg = (_("Retype FS(id: %(fs_id)s) dedupe from %(old_dedupe)s " "to %(new_dedupe)s performed successfully, " "compression from " "%(old_compression)s to %(new_compression)s " "performed successfully.") % {"fs_id": fs_id, "old_dedupe": old_dedupe, "new_dedupe": new_dedupe, "old_compression": old_compression, "new_compression": new_compression}) LOG.info(msg) def remove_qos_fs(self, fs_id, qos_id): fs_list = self.helper.get_fs_list_in_qos(qos_id) fs_count = len(fs_list) if fs_count <= 1: qos = smartx.SmartQos(self.helper) qos.delete_qos(qos_id) else: self.helper.remove_fs_from_qos(fs_id, fs_list, qos_id) def _get_location_path(self, share_name, share_proto, ip=None): location = None if ip is None: root = self.helper._read_xml() ip = root.findtext('Storage/LogicalPortIP').strip() if share_proto == 'NFS': location = '%s:/%s' % (ip, share_name.replace("-", "_")) elif share_proto == 'CIFS': location = '\\\\%s\\%s' % (ip, share_name.replace("-", 
"_")) else: raise exception.InvalidShareAccess( reason=(_('Invalid NAS protocol supplied: %s.') % share_proto)) return location def _get_share_proto(self, share_name): share_proto = None for proto in ('NFS', 'CIFS'): share_url_type = self.helper._get_share_url_type(proto) share = self.helper._get_share_by_name(share_name, share_url_type) if share: share_proto = proto break return share_proto def _get_wait_interval(self): """Get wait interval from huawei conf file.""" root = self.helper._read_xml() wait_interval = root.findtext('Filesystem/WaitInterval') if wait_interval: return int(wait_interval) else: LOG.info( "Wait interval is not configured in huawei " "conf file. Use default: %(default_wait_interval)d.", {"default_wait_interval": constants.DEFAULT_WAIT_INTERVAL}) return constants.DEFAULT_WAIT_INTERVAL def _get_timeout(self): """Get timeout from huawei conf file.""" root = self.helper._read_xml() timeout = root.findtext('Filesystem/Timeout') if timeout: return int(timeout) else: LOG.info( "Timeout is not configured in huawei conf file. " "Use default: %(default_timeout)d.", {"default_timeout": constants.DEFAULT_TIMEOUT}) return constants.DEFAULT_TIMEOUT def check_conf_file(self): """Check the config file, make sure the essential items are set.""" root = self.helper._read_xml() resturl = root.findtext('Storage/RestURL') username = root.findtext('Storage/UserName') pwd = root.findtext('Storage/UserPassword') product = root.findtext('Storage/Product') pool_node = root.findtext('Filesystem/StoragePool') logical_port_ip = root.findtext('Storage/LogicalPortIP') if product != "V3": err_msg = (_( 'check_conf_file: Config file invalid. ' 'Product must be set to V3.')) LOG.error(err_msg) raise exception.InvalidInput(err_msg) if not (resturl and username and pwd): err_msg = (_( 'check_conf_file: Config file invalid. RestURL,' ' UserName and UserPassword must be set.')) LOG.error(err_msg) raise exception.InvalidInput(err_msg) if not pool_node: err_msg = (_( 'check_conf_file: Config file invalid. ' 'StoragePool must be set.')) LOG.error(err_msg) raise exception.InvalidInput(err_msg) if not (self.configuration.driver_handles_share_servers or logical_port_ip): err_msg = (_( 'check_conf_file: Config file invalid. LogicalPortIP ' 'must be set when driver_handles_share_servers is False.')) LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) if self.snapshot_support and self.replication_support: err_msg = _('Config file invalid. SnapshotSupport and ' 'ReplicationSupport can not both be set to True.') LOG.error(err_msg) raise exception.BadConfigurationException(reason=err_msg) def check_service(self): running_status = self.helper._get_cifs_service_status() if running_status != constants.STATUS_SERVICE_RUNNING: self.helper._start_cifs_service_status() service = self.helper._get_nfs_service_status() if ((service['RUNNINGSTATUS'] != constants.STATUS_SERVICE_RUNNING) or (service['SUPPORTV3'] == 'false') or (service['SUPPORTV4'] == 'false')): self.helper._start_nfs_service_status() def setup_server(self, network_info, metadata=None): """Set up share server with given network parameters.""" self._check_network_type_validate(network_info['network_type']) vlan_tag = network_info['segmentation_id'] or 0 ip = network_info['network_allocations'][0]['ip_address'] subnet = utils.cidr_to_netmask(network_info['cidr']) if not utils.is_valid_ip_address(ip, '4'): err_msg = (_( "IP (%s) is invalid. 
Only IPv4 addresses are supported.") % ip) LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) ad_created = False ldap_created = False try: if network_info.get('security_services'): active_directory, ldap = self._get_valid_security_service( network_info.get('security_services')) # Configure AD or LDAP Domain. if active_directory: self._configure_AD_domain(active_directory) ad_created = True if ldap: self._configure_LDAP_domain(ldap) ldap_created = True # Create vlan and logical_port. vlan_id, logical_port_id = ( self._create_vlan_and_logical_port(vlan_tag, ip, subnet)) except exception.ManilaException: if ad_created: dns_ip_list = [] user = active_directory['user'] password = active_directory['password'] self.helper.set_DNS_ip_address(dns_ip_list) self.helper.delete_AD_config(user, password) self._check_AD_expected_status(constants.STATUS_EXIT_DOMAIN) if ldap_created: self.helper.delete_LDAP_config() raise return { 'share_server_name': network_info['server_id'], 'share_server_id': network_info['server_id'], 'vlan_id': vlan_id, 'logical_port_id': logical_port_id, 'ip': ip, 'subnet': subnet, 'vlan_tag': vlan_tag, 'ad_created': ad_created, 'ldap_created': ldap_created, } def _check_network_type_validate(self, network_type): if network_type not in ('flat', 'vlan', None): err_msg = (_( 'Invalid network type. Network type must be flat or vlan.')) raise exception.NetworkBadConfigurationException(reason=err_msg) def _get_valid_security_service(self, security_services): """Validate security services and return AD/LDAP config.""" service_number = len(security_services) err_msg = _("Unsupported security services. " "Only AD and LDAP are supported.") if service_number > 2: LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) active_directory = None ldap = None for ss in security_services: if ss['type'] == 'active_directory': active_directory = ss elif ss['type'] == 'ldap': ldap = ss else: LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) return active_directory, ldap def _configure_AD_domain(self, active_directory): dns_ip = active_directory['dns_ip'] user = active_directory['user'] password = active_directory['password'] domain = active_directory['domain'] if not (dns_ip and user and password and domain): raise exception.InvalidInput( reason=_("dns_ip or user or password or domain " "in security_services is None.")) # Check DNS server exists or not. ip_address = self.helper.get_DNS_ip_address() if ip_address and ip_address[0]: err_msg = (_("DNS server (%s) has already been configured.") % ip_address[0]) LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) # Check AD config exists or not. ad_exists, AD_domain = self.helper.get_AD_domain_name() if ad_exists: err_msg = (_("AD domain (%s) has already been configured.") % AD_domain) LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) # Set DNS server ip. dns_ip_list = dns_ip.split(",") DNS_config = self.helper.set_DNS_ip_address(dns_ip_list) # Set AD config. digits = string.digits random_id = ''.join([random.choice(digits) for i in range(9)]) system_name = constants.SYSTEM_NAME_PREFIX + random_id try: self.helper.add_AD_config(user, password, domain, system_name) self._check_AD_expected_status(constants.STATUS_JOIN_DOMAIN) except exception.ManilaException as err: if DNS_config: dns_ip_list = [] self.helper.set_DNS_ip_address(dns_ip_list) raise exception.InvalidShare( reason=(_('Failed to add AD config. 
' 'Reason: %s.') % err)) def _check_AD_expected_status(self, expected_status): wait_interval = self._get_wait_interval() timeout = self._get_timeout() retries = timeout / wait_interval interval = wait_interval backoff_rate = 1 @utils.retry(exception.InvalidShare, interval, retries, backoff_rate) def _check_AD_status(): ad = self.helper.get_AD_config() if ad['DOMAINSTATUS'] != expected_status: raise exception.InvalidShare( reason=(_('AD domain (%s) status is not expected.') % ad['FULLDOMAINNAME'])) _check_AD_status() def _configure_LDAP_domain(self, ldap): server = ldap['server'] domain = ldap['domain'] if not server or not domain: raise exception.InvalidInput(reason=_("Server or domain is None.")) # Check LDAP config exists or not. ldap_exists, LDAP_domain = self.helper.get_LDAP_domain_server() if ldap_exists: err_msg = (_("LDAP domain (%s) has already been configured.") % LDAP_domain) LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) # Set LDAP config. server_number = len(server.split(',')) if server_number == 1: server = server + ",," elif server_number == 2: server = server + "," elif server_number > 3: raise exception.InvalidInput( reason=_("Cannot support more than three LDAP servers.")) self.helper.add_LDAP_config(server, domain) def _create_vlan_and_logical_port(self, vlan_tag, ip, subnet): optimal_port, port_type = self._get_optimal_port(vlan_tag) port_id = self.helper.get_port_id(optimal_port, port_type) home_port_id = port_id home_port_type = port_type vlan_id = 0 vlan_exists = True if port_type is None or port_id is None: err_msg = _("No appropriate port found to create logical port.") LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) if vlan_tag: vlan_exists, vlan_id = self.helper.get_vlan(port_id, vlan_tag) if not vlan_exists: # Create vlan. vlan_id = self.helper.create_vlan( port_id, port_type, vlan_tag) home_port_id = vlan_id home_port_type = constants.PORT_TYPE_VLAN logical_port_exists, logical_port_id = ( self.helper.get_logical_port(home_port_id, ip, subnet)) if not logical_port_exists: try: # Create logical port. logical_port_id = ( self.helper.create_logical_port( home_port_id, home_port_type, ip, subnet)) except exception.ManilaException as err: if not vlan_exists: self.helper.delete_vlan(vlan_id) raise exception.InvalidShare( reason=(_('Failed to create logical port. ' 'Reason: %s.') % err)) return vlan_id, logical_port_id def _get_optimal_port(self, vlan_tag): """Get an optimal physical port or bond port.""" root = self.helper._read_xml() port_info = [] port_list = root.findtext('Storage/Port') if port_list: port_list = port_list.split(";") for port in port_list: port = port.strip().strip('\n') if port: port_info.append(port) eth_port, bond_port = self._get_online_port(port_info) if vlan_tag: optimal_port, port_type = ( self._get_least_port(eth_port, bond_port, sort_type=constants.SORT_BY_VLAN)) else: optimal_port, port_type = ( self._get_least_port(eth_port, bond_port, sort_type=constants.SORT_BY_LOGICAL)) if not optimal_port: err_msg = (_("Cannot find optimal port. 
port_info: %s.") % port_info) LOG.error(err_msg) raise exception.InvalidInput(reason=err_msg) return optimal_port, port_type def _get_online_port(self, all_port_list): eth_port = self.helper.get_all_eth_port() bond_port = self.helper.get_all_bond_port() eth_status = constants.STATUS_ETH_RUNNING online_eth_port = [] for eth in eth_port: if (eth_status == eth['RUNNINGSTATUS'] and not eth['IPV4ADDR'] and not eth['BONDNAME']): online_eth_port.append(eth['LOCATION']) online_bond_port = [] for bond in bond_port: if eth_status == bond['RUNNINGSTATUS']: port_id = jsonutils.loads(bond['PORTIDLIST']) bond_eth_port = self.helper.get_eth_port_by_id(port_id[0]) if bond_eth_port and not bond_eth_port['IPV4ADDR']: online_bond_port.append(bond['NAME']) filtered_eth_port = [] filtered_bond_port = [] if len(all_port_list) == 0: filtered_eth_port = online_eth_port filtered_bond_port = online_bond_port else: all_port_list = list(set(all_port_list)) for port in all_port_list: is_eth_port = False for eth in online_eth_port: if port == eth: filtered_eth_port.append(port) is_eth_port = True break if is_eth_port: continue for bond in online_bond_port: if port == bond: filtered_bond_port.append(port) break return filtered_eth_port, filtered_bond_port def _get_least_port(self, eth_port, bond_port, sort_type): sorted_eth = [] sorted_bond = [] if sort_type == constants.SORT_BY_VLAN: _get_sorted_least_port = self._get_sorted_least_port_by_vlan else: _get_sorted_least_port = self._get_sorted_least_port_by_logical if eth_port: sorted_eth = _get_sorted_least_port(eth_port) if bond_port: sorted_bond = _get_sorted_least_port(bond_port) if sorted_eth and sorted_bond: if sorted_eth[1] >= sorted_bond[1]: return sorted_bond[0], constants.PORT_TYPE_BOND else: return sorted_eth[0], constants.PORT_TYPE_ETH elif sorted_eth: return sorted_eth[0], constants.PORT_TYPE_ETH elif sorted_bond: return sorted_bond[0], constants.PORT_TYPE_BOND else: return None, None def _get_sorted_least_port_by_vlan(self, port_list): if not port_list: return None vlan_list = self.helper.get_all_vlan() count = {} for item in port_list: count[item] = 0 for item in port_list: for vlan in vlan_list: pos = vlan['NAME'].rfind('.') if vlan['NAME'][:pos] == item: count[item] += 1 sort_port = sorted(count.items(), key=lambda count: count[1]) return sort_port[0] def _get_sorted_least_port_by_logical(self, port_list): if not port_list: return None logical_list = self.helper.get_all_logical_port() count = {} for item in port_list: count[item] = 0 for logical in logical_list: if logical['HOMEPORTTYPE'] == constants.PORT_TYPE_VLAN: pos = logical['HOMEPORTNAME'].rfind('.') if logical['HOMEPORTNAME'][:pos] == item: count[item] += 1 else: if logical['HOMEPORTNAME'] == item: count[item] += 1 sort_port = sorted(count.items(), key=lambda count: count[1]) return sort_port[0] def teardown_server(self, server_details, security_services=None): if not server_details: LOG.debug('Server details are empty.') return logical_port_id = server_details.get('logical_port_id') vlan_id = server_details.get('vlan_id') ad_created = server_details.get('ad_created') ldap_created = server_details.get('ldap_created') # Delete logical_port. if logical_port_id: logical_port_exists = ( self.helper.check_logical_port_exists_by_id(logical_port_id)) if logical_port_exists: self.helper.delete_logical_port(logical_port_id) # Delete vlan. 
if vlan_id and vlan_id != '0': vlan_exists = self.helper.check_vlan_exists_by_id(vlan_id) if vlan_exists: self.helper.delete_vlan(vlan_id) if security_services: active_directory, ldap = ( self._get_valid_security_service(security_services)) if ad_created and ad_created == '1' and active_directory: dns_ip = active_directory['dns_ip'] user = active_directory['user'] password = active_directory['password'] domain = active_directory['domain'] # Check DNS server exists or not. ip_address = self.helper.get_DNS_ip_address() if ip_address and ip_address[0] == dns_ip: dns_ip_list = [] self.helper.set_DNS_ip_address(dns_ip_list) # Check AD config exists or not. ad_exists, AD_domain = self.helper.get_AD_domain_name() if ad_exists and AD_domain == domain: self.helper.delete_AD_config(user, password) self._check_AD_expected_status( constants.STATUS_EXIT_DOMAIN) if ldap_created and ldap_created == '1' and ldap: server = ldap['server'] domain = ldap['domain'] # Check LDAP config exists or not. ldap_exists, LDAP_domain = ( self.helper.get_LDAP_domain_server()) if ldap_exists: LDAP_config = self.helper.get_LDAP_config() if (LDAP_config['LDAPSERVER'] == server and LDAP_config['BASEDN'] == domain): self.helper.delete_LDAP_config() def ensure_share(self, share, share_server=None): """Ensure that share is exported.""" share_proto = share['share_proto'] share_name = share['name'] share_id = share['id'] share_url_type = self.helper._get_share_url_type(share_proto) share_storage = self.helper._get_share_by_name(share_name, share_url_type) if not share_storage: raise exception.ShareResourceNotFound(share_id=share_id) fs_id = share_storage['FSID'] self.assert_filesystem(fs_id) ip = self._get_share_ip(share_server) location = self._get_location_path(share_name, share_proto, ip) return [location] def create_replica(self, context, replica_list, new_replica, access_rules, replica_snapshots, share_server=None): """Create a new share, and create a remote replication pair.""" active_replica = share_utils.get_active_replica(replica_list) if (self.private_storage.get(active_replica['share_id'], 'replica_pair_id')): # for huawei array, only one replication can be created for # each active replica, so if a replica pair id is recorded for # this share, it means active replica already has a replication, # can not create anymore. msg = _('Cannot create more than one replica for share %s.') LOG.error(msg, active_replica['share_id']) raise exception.ReplicationException( reason=msg % active_replica['share_id']) # Create a new share new_share_name = new_replica['name'] location = self.create_share(new_replica, share_server) # create a replication pair. # replication pair only can be created by master node, # so here is a remote call to trigger master node to # start the creating progress. 
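        # Added note (illustrative): the call below goes through HuaweiV3API
        # (see rpcapi.py further down), which prepares an RPC client on the
        # 'huawei_v3' topic for the host that owns the active replica and
        # issues a blocking 'create_replica_pair' call; the returned pair ID
        # is then recorded in private storage for this share.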
try: replica_pair_id = self.rpc_client.create_replica_pair( context, active_replica['host'], local_share_info=active_replica, remote_device_wwn=self.helper.get_array_wwn(), remote_fs_id=self.helper.get_fsid_by_name(new_share_name) ) except Exception: LOG.exception('Failed to create a replication pair ' 'with host %s.', active_replica['host']) raise self.private_storage.update(new_replica['share_id'], {'replica_pair_id': replica_pair_id}) # Get the state of the new created replica replica_state = self.replica_mgr.get_replica_state(replica_pair_id) replica_ref = { 'export_locations': [location], 'replica_state': replica_state, 'access_rules_status': common_constants.STATUS_ACTIVE, } return replica_ref def update_replica_state(self, context, replica_list, replica, access_rules, replica_snapshots, share_server=None): replica_pair_id = self.private_storage.get(replica['share_id'], 'replica_pair_id') if replica_pair_id is None: msg = ("No replication pair ID recorded for share %s.") LOG.error(msg, replica['share_id']) return common_constants.STATUS_ERROR self.replica_mgr.update_replication_pair_state(replica_pair_id) return self.replica_mgr.get_replica_state(replica_pair_id) def promote_replica(self, context, replica_list, replica, access_rules, share_server=None): replica_pair_id = self.private_storage.get(replica['share_id'], 'replica_pair_id') if replica_pair_id is None: msg = _("No replication pair ID recorded for share %s.") LOG.error(msg, replica['share_id']) raise exception.ReplicationException( reason=msg % replica['share_id']) try: self.replica_mgr.switch_over(replica_pair_id) except Exception: LOG.exception('Failed to promote replica %s.', replica['id']) raise updated_new_active_access = True cleared_old_active_access = True try: self.update_access(replica, access_rules, [], [], share_server) except Exception: LOG.warning('Failed to set access rules to ' 'new active replica %s.', replica['id']) updated_new_active_access = False old_active_replica = share_utils.get_active_replica(replica_list) try: self.clear_access(old_active_replica, share_server) except Exception: LOG.warning("Failed to clear access rules from " "old active replica %s.", old_active_replica['id']) cleared_old_active_access = False new_active_update = { 'id': replica['id'], 'replica_state': common_constants.REPLICA_STATE_ACTIVE, } new_active_update['access_rules_status'] = ( common_constants.STATUS_ACTIVE if updated_new_active_access else common_constants.SHARE_INSTANCE_RULES_SYNCING) # get replica state for new secondary after switch over replica_state = self.replica_mgr.get_replica_state(replica_pair_id) old_active_update = { 'id': old_active_replica['id'], 'replica_state': replica_state, } old_active_update['access_rules_status'] = ( common_constants.SHARE_INSTANCE_RULES_SYNCING if cleared_old_active_access else common_constants.STATUS_ACTIVE) return [new_active_update, old_active_update] def delete_replica(self, context, replica_list, replica_snapshots, replica, share_server=None): replica_pair_id = self.private_storage.get(replica['share_id'], 'replica_pair_id') if replica_pair_id is None: msg = ("No replication pair ID recorded for share %(share)s. 
" "Continue to delete replica %(replica)s.") LOG.warning(msg, {'share': replica['share_id'], 'replica': replica['id']}) else: self.replica_mgr.delete_replication_pair(replica_pair_id) self.private_storage.delete(replica['share_id']) try: self.delete_share(replica, share_server) except Exception: LOG.exception('Failed to delete replica %s.', replica['id']) raise def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server): fs_id = self.helper.get_fsid_by_name(snapshot['share_name']) if not fs_id: msg = _("The source filesystem of snapshot %s " "not exist.") % snapshot['id'] LOG.error(msg) raise exception.ShareResourceNotFound( share_id=snapshot['share_id']) snapshot_id = self.helper._get_snapshot_id(fs_id, snapshot['id']) self.helper.rollback_snapshot(snapshot_id) manila-10.0.0/manila/share/drivers/huawei/v3/rpcapi.py0000664000175000017500000000312313656750227022552 0ustar zuulzuul00000000000000# Copyright (c) 2016 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import oslo_messaging as messaging from manila import rpc from manila.share import utils class HuaweiV3API(object): """Client side of the huawei V3 rpc API. API version history: 1.0 - Initial version. """ BASE_RPC_API_VERSION = '1.0' def __init__(self): self.topic = 'huawei_v3' target = messaging.Target(topic=self.topic, version=self.BASE_RPC_API_VERSION) self.client = rpc.get_client(target, version_cap='1.0') def create_replica_pair(self, context, host, local_share_info, remote_device_wwn, remote_fs_id): new_host = utils.extract_host(host) call_context = self.client.prepare(server=new_host, version='1.0') return call_context.call( context, 'create_replica_pair', local_share_info=local_share_info, remote_device_wwn=remote_device_wwn, remote_fs_id=remote_fs_id) manila-10.0.0/manila/share/drivers/infinidat/0000775000175000017500000000000013656750362021056 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/infinidat/__init__.py0000664000175000017500000000000013656750227023155 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/infinidat/infinibox.py0000664000175000017500000005043613656750227023425 0ustar zuulzuul00000000000000# Copyright 2017 Infinidat Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" INFINIDAT InfiniBox Share Driver """ import functools import ipaddress from oslo_config import cfg from oslo_log import log as logging from oslo_utils import units import six from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import driver from manila.share import utils from manila import version try: import capacity import infinisdk except ImportError: capacity = None infinisdk = None LOG = logging.getLogger(__name__) infinidat_connection_opts = [ cfg.HostAddressOpt('infinibox_hostname', help='The name (or IP address) for the INFINIDAT ' 'Infinibox storage system.'), ] infinidat_auth_opts = [ cfg.StrOpt('infinibox_login', help=('Administrative user account name used to access the ' 'INFINIDAT Infinibox storage system.')), cfg.StrOpt('infinibox_password', help=('Password for the administrative user account ' 'specified in the infinibox_login option.'), secret=True), ] infinidat_general_opts = [ cfg.StrOpt('infinidat_pool_name', help='Name of the pool from which volumes are allocated.'), cfg.StrOpt('infinidat_nas_network_space_name', help='Name of the NAS network space on the INFINIDAT ' 'InfiniBox.'), cfg.BoolOpt('infinidat_thin_provision', help='Use thin provisioning.', default=True)] CONF = cfg.CONF CONF.register_opts(infinidat_connection_opts) CONF.register_opts(infinidat_auth_opts) CONF.register_opts(infinidat_general_opts) _MANILA_TO_INFINIDAT_ACCESS_LEVEL = { constants.ACCESS_LEVEL_RW: 'RW', constants.ACCESS_LEVEL_RO: 'RO', } # Max retries for the REST API client in case of a failure: _API_MAX_RETRIES = 5 # Identifier used as the REST API User-Agent string: _INFINIDAT_MANILA_IDENTIFIER = ( "manila/%s" % version.version_info.release_string()) def infinisdk_to_manila_exceptions(func): @functools.wraps(func) def wrapper(*args, **kwargs): try: return func(*args, **kwargs) except infinisdk.core.exceptions.InfiniSDKException as ex: # string formatting of 'ex' includes http code and url msg = _('Caught exception from infinisdk: %s') % ex LOG.exception(msg) raise exception.ShareBackendException(msg=msg) return wrapper class InfiniboxShareDriver(driver.ShareDriver): VERSION = '1.0' # driver version def __init__(self, *args, **kwargs): super(InfiniboxShareDriver, self).__init__(False, *args, **kwargs) self.configuration.append_config_values(infinidat_connection_opts) self.configuration.append_config_values(infinidat_auth_opts) self.configuration.append_config_values(infinidat_general_opts) def _setup_and_get_system_object(self, management_address, auth): system = infinisdk.InfiniBox(management_address, auth=auth) system.api.add_auto_retry( lambda e: isinstance( e, infinisdk.core.exceptions.APITransportFailure) and "Interrupted system call" in e.error_desc, _API_MAX_RETRIES) system.api.set_source_identifier(_INFINIDAT_MANILA_IDENTIFIER) system.login() return system def do_setup(self, context): """Driver initialization""" if infinisdk is None: msg = _("Missing 'infinisdk' python module, ensure the library" " is installed and available.") raise exception.ManilaException(message=msg) infinibox_login = self._safe_get_from_config_or_fail('infinibox_login') infinibox_password = ( self._safe_get_from_config_or_fail('infinibox_password')) auth = (infinibox_login, infinibox_password) management_address = ( self._safe_get_from_config_or_fail('infinibox_hostname')) self._pool_name = ( self._safe_get_from_config_or_fail('infinidat_pool_name')) self._network_space_name = ( self._safe_get_from_config_or_fail( 'infinidat_nas_network_space_name')) 
self._system = ( self._setup_and_get_system_object(management_address, auth)) backend_name = self.configuration.safe_get('share_backend_name') self._backend_name = backend_name or self.__class__.__name__ thin_provisioning = self.configuration.infinidat_thin_provision self._provtype = "THIN" if thin_provisioning else "THICK" LOG.debug('setup complete') def _update_share_stats(self): """Retrieve stats info from share group.""" (free_capacity_bytes, physical_capacity_bytes, provisioned_capacity_gb) = self._get_available_capacity() max_over_subscription_ratio = ( self.configuration.max_over_subscription_ratio) data = dict( share_backend_name=self._backend_name, vendor_name='INFINIDAT', driver_version=self.VERSION, storage_protocol='NFS', total_capacity_gb=float(physical_capacity_bytes) / units.Gi, free_capacity_gb=float(free_capacity_bytes) / units.Gi, reserved_percentage=self.configuration.reserved_share_percentage, thin_provisioning=self.configuration.infinidat_thin_provision, max_over_subscription_ratio=max_over_subscription_ratio, provisioned_capacity_gb=provisioned_capacity_gb, snapshot_support=True, create_share_from_snapshot_support=True, mount_snapshot_support=True, revert_to_snapshot_support=True) super(InfiniboxShareDriver, self)._update_share_stats(data) def _get_available_capacity(self): # pylint: disable=no-member pool = self._get_infinidat_pool() free_capacity_bytes = (pool.get_free_physical_capacity() / capacity.byte) physical_capacity_bytes = (pool.get_physical_capacity() / capacity.byte) provisioned_capacity_gb = ( (pool.get_virtual_capacity() - pool.get_free_virtual_capacity()) / capacity.GB) # pylint: enable=no-member return (free_capacity_bytes, physical_capacity_bytes, provisioned_capacity_gb) def _safe_get_from_config_or_fail(self, config_parameter): config_value = self.configuration.safe_get(config_parameter) if not config_value: # None or empty string reason = (_("%(config_parameter)s configuration parameter " "must be specified") % {'config_parameter': config_parameter}) LOG.error(reason) raise exception.BadConfigurationException(reason=reason) return config_value def _verify_share_protocol(self, share): if share['share_proto'] != 'NFS': reason = (_('Unsupported share protocol: %(proto)s.') % {'proto': share['share_proto']}) LOG.error(reason) raise exception.InvalidShare(reason=reason) def _verify_access_type(self, access): if access['access_type'] != 'ip': reason = _('Only "ip" access type allowed for the NFS protocol.') LOG.error(reason) raise exception.InvalidShareAccess(reason=reason) return True def _make_share_name(self, manila_share): return 'openstack-shr-%s' % manila_share['id'] def _make_snapshot_name(self, manila_snapshot): return 'openstack-snap-%s' % manila_snapshot['id'] def _set_manila_object_metadata(self, infinidat_object, manila_object): data = {"system": "openstack", "openstack_version": version.version_info.release_string(), "manila_id": manila_object['id'], "manila_name": manila_object['name'], "host.created_by": _INFINIDAT_MANILA_IDENTIFIER} infinidat_object.set_metadata_from_dict(data) @infinisdk_to_manila_exceptions def _get_infinidat_pool(self): pool = self._system.pools.safe_get(name=self._pool_name) if pool is None: msg = _('Pool "%s" not found') % self._pool_name LOG.error(msg) raise exception.ShareBackendException(msg=msg) return pool @infinisdk_to_manila_exceptions def _get_infinidat_nas_network_space_ips(self): network_space = self._system.network_spaces.safe_get( name=self._network_space_name) if network_space is None: msg = _('INFINIDAT 
InfiniBox NAS network space "%s" ' 'not found') % self._network_space_name LOG.error(msg) raise exception.ShareBackendException(msg=msg) network_space_ips = network_space.get_ips() if not network_space_ips: msg = _('INFINIDAT InfiniBox NAS network space "%s" has no IP ' 'addresses defined') % self._network_space_name LOG.error(msg) raise exception.ShareBackendException(msg=msg) ip_addresses = ( [ip_munch.ip_address for ip_munch in network_space_ips if ip_munch.enabled]) if not ip_addresses: msg = _('INFINIDAT InfiniBox NAS network space "%s" has no ' 'enabled IP addresses') % self._network_space_name LOG.error(msg) raise exception.ShareBackendException(msg=msg) return ip_addresses def _get_full_nfs_export_paths(self, export_path): network_space_ips = self._get_infinidat_nas_network_space_ips() return ['{network_space_ip}:{export_path}'.format( network_space_ip=network_space_ip, export_path=export_path) for network_space_ip in network_space_ips] @infinisdk_to_manila_exceptions def _get_infinidat_filesystem_by_name(self, name): filesystem = self._system.filesystems.safe_get(name=name) if filesystem is None: msg = (_('Filesystem not found on the Infinibox by its name: %s') % name) LOG.error(msg) raise exception.ShareResourceNotFound(share_id=name) return filesystem def _get_infinidat_filesystem(self, manila_share): filesystem_name = self._make_share_name(manila_share) return self._get_infinidat_filesystem_by_name(filesystem_name) def _get_infinidat_snapshot_by_name(self, name): snapshot = self._system.filesystems.safe_get(name=name) if snapshot is None: msg = (_('Snapshot not found on the Infinibox by its name: %s') % name) LOG.error(msg) raise exception.ShareSnapshotNotFound(snapshot_id=name) return snapshot def _get_infinidat_snapshot(self, manila_snapshot): snapshot_name = self._make_snapshot_name(manila_snapshot) return self._get_infinidat_snapshot_by_name(snapshot_name) def _get_infinidat_dataset(self, manila_object, is_snapshot): return (self._get_infinidat_snapshot(manila_object) if is_snapshot else self._get_infinidat_filesystem(manila_object)) @infinisdk_to_manila_exceptions def _get_export(self, infinidat_filesystem): infinidat_exports = infinidat_filesystem.get_exports() if len(infinidat_exports) == 0: msg = _("Could not find share export") raise exception.ShareBackendException(msg=msg) elif len(infinidat_exports) > 1: msg = _("INFINIDAT filesystem has more than one active export; " "possibly not a Manila share") LOG.error(msg) raise exception.ShareBackendException(msg=msg) return infinidat_exports[0] def _get_infinidat_access_level(self, access): """Translates between Manila access levels to INFINIDAT API ones""" access_level = access['access_level'] try: return _MANILA_TO_INFINIDAT_ACCESS_LEVEL[access_level] except KeyError: raise exception.InvalidShareAccessLevel(level=access_level) def _get_ip_address_range(self, ip_address): """Parse single IP address or subnet into a range. If the IP address string is in subnet mask format, returns a - string. If the IP address contains a single IP address, returns only that IP address. 
""" ip_address = six.text_type(ip_address) # try treating the ip_address parameter as a range of IP addresses: ip_network = ipaddress.ip_network(ip_address, strict=False) ip_network_hosts = list(ip_network.hosts()) if len(ip_network_hosts) == 0: # /32, single IP address return ip_address.split('/')[0] return "{}-{}".format(ip_network_hosts[0], ip_network_hosts[-1]) @infinisdk_to_manila_exceptions def _create_filesystem_export(self, infinidat_filesystem): infinidat_export = infinidat_filesystem.add_export(permissions=[]) export_paths = self._get_full_nfs_export_paths( infinidat_export.get_export_path()) export_locations = [{ 'path': export_path, 'is_admin_only': False, 'metadata': {}, } for export_path in export_paths] return export_locations @infinisdk_to_manila_exceptions def _delete_share(self, share, is_snapshot): if is_snapshot: dataset_name = self._make_snapshot_name(share) else: dataset_name = self._make_share_name(share) try: infinidat_filesystem = ( self._get_infinidat_filesystem_by_name(dataset_name)) except exception.ShareResourceNotFound: message = ("share %(share)s not found on Infinibox, skipping " "delete") LOG.warning(message, {"share": share}) return # filesystem not found try: infinidat_export = self._get_export(infinidat_filesystem) infinidat_export.safe_delete() except exception.ShareBackendException: # it is possible that the export has been deleted pass infinidat_filesystem.safe_delete() @infinisdk_to_manila_exceptions def _extend_share(self, infinidat_filesystem, share, new_size): # pylint: disable=no-member new_size_capacity_units = new_size * capacity.GiB # pylint: enable=no-member old_size = infinidat_filesystem.get_size() infinidat_filesystem.resize(new_size_capacity_units - old_size) @infinisdk_to_manila_exceptions def _update_access(self, manila_object, access_rules, is_snapshot): infinidat_filesystem = self._get_infinidat_dataset( manila_object, is_snapshot=is_snapshot) infinidat_export = self._get_export(infinidat_filesystem) permissions = [ {'access': self._get_infinidat_access_level(access_rule), 'client': self._get_ip_address_range(access_rule['access_to']), 'no_root_squash': True} for access_rule in access_rules if self._verify_access_type(access_rule)] infinidat_export.update_permissions(permissions) @infinisdk_to_manila_exceptions def create_share(self, context, share, share_server=None): self._verify_share_protocol(share) pool = self._get_infinidat_pool() size = share['size'] * capacity.GiB # pylint: disable=no-member share_name = self._make_share_name(share) infinidat_filesystem = self._system.filesystems.create( pool=pool, name=share_name, size=size, provtype=self._provtype) self._set_manila_object_metadata(infinidat_filesystem, share) return self._create_filesystem_export(infinidat_filesystem) @infinisdk_to_manila_exceptions def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): name = self._make_share_name(share) infinidat_snapshot = self._get_infinidat_snapshot(snapshot) infinidat_new_share = infinidat_snapshot.create_snapshot( name=name, write_protected=False) self._extend_share(infinidat_new_share, share, share['size']) return self._create_filesystem_export(infinidat_new_share) @infinisdk_to_manila_exceptions def create_snapshot(self, context, snapshot, share_server=None): """Creates a snapshot.""" share = snapshot['share'] infinidat_filesystem = self._get_infinidat_filesystem(share) name = self._make_snapshot_name(snapshot) infinidat_snapshot = infinidat_filesystem.create_snapshot(name=name) # 
snapshot is created in the same size as the original share, so no # extending is needed self._set_manila_object_metadata(infinidat_snapshot, snapshot) return {'export_locations': self._create_filesystem_export(infinidat_snapshot)} def delete_share(self, context, share, share_server=None): try: self._verify_share_protocol(share) except exception.InvalidShare: # cleanup shouldn't fail on wrong protocol or missing share: message = ("failed to delete share %(share)s; unsupported share " "protocol %(share_proto)s, only NFS is supported") LOG.warning(message, {"share": share, "share_proto": share['share_proto']}) return self._delete_share(share, is_snapshot=False) def delete_snapshot(self, context, snapshot, share_server=None): self._delete_share(snapshot, is_snapshot=True) def ensure_share(self, context, share, share_server=None): # will raise ShareResourceNotFound if the share was not found: infinidat_filesystem = self._get_infinidat_filesystem(share) try: infinidat_export = self._get_export(infinidat_filesystem) return self._get_full_nfs_export_paths( infinidat_export.get_export_path()) except exception.ShareBackendException: # export not found, need to re-export message = ("missing export for share %(share)s, trying to " "re-export") LOG.info(message, {"share": share}) return self._create_filesystem_export(infinidat_filesystem) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): # As the Infinibox API can bulk update export access rules, we will try # to use the access_rules list self._verify_share_protocol(share) self._update_access(share, access_rules, is_snapshot=False) def get_network_allocations_number(self): return 0 @infinisdk_to_manila_exceptions def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server=None): infinidat_snapshot = self._get_infinidat_snapshot(snapshot) infinidat_parent_share = self._get_infinidat_filesystem( snapshot['share']) infinidat_parent_share.restore(infinidat_snapshot) def extend_share(self, share, new_size, share_server=None): infinidat_filesystem = self._get_infinidat_filesystem(share) self._extend_share(infinidat_filesystem, share, new_size) def snapshot_update_access(self, context, snapshot, access_rules, add_rules, delete_rules, share_server=None): # snapshots are to be mounted in read-only mode, see: # "Add mountable snapshots" on openstack specs. access_rules, _, _ = utils.change_rules_to_readonly( access_rules, [], []) try: self._update_access(snapshot, access_rules, is_snapshot=True) except exception.InvalidShareAccess as e: raise exception.InvalidSnapshotAccess(e) manila-10.0.0/manila/share/drivers/ibm/0000775000175000017500000000000013656750362017660 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/ibm/__init__.py0000664000175000017500000000000013656750227021757 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/ibm/gpfs.py0000664000175000017500000014402713656750227021201 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ GPFS Driver for shares. Config Requirements: GPFS file system must have quotas enabled (`mmchfs -Q yes`). Notes: GPFS independent fileset is used for each share. TODO(nileshb): add support for share server creation/deletion/handling. Limitation: While using remote GPFS node, with Ganesha NFS, 'gpfs_ssh_private_key' for remote login to the GPFS node must be specified and there must be a passwordless authentication already setup between the Manila share service and the remote GPFS node. """ import abc import math import os import re import socket from oslo_config import cfg from oslo_log import log from oslo_utils import excutils from oslo_utils import importutils from oslo_utils import strutils from oslo_utils import units import six from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.helpers import NFSHelper from manila.share import share_types from manila import utils LOG = log.getLogger(__name__) # matches multiple comma separated avpairs on a line. values with an embedded # comma must be wrapped in quotation marks AVPATTERN = re.compile(r'\s*(?P\w+)\s*=\s*(?P' r'(["][a-zA-Z0-9_, ]+["])|(\w+))\s*[,]?') ERR_FILE_NOT_FOUND = 2 gpfs_share_opts = [ cfg.HostAddressOpt('gpfs_share_export_ip', help='IP to be added to GPFS export string.'), cfg.StrOpt('gpfs_mount_point_base', default='$state_path/mnt', help='Base folder where exported shares are located.'), cfg.StrOpt('gpfs_nfs_server_type', default='CES', help=('NFS Server type. Valid choices are "CES" (Ganesha NFS) ' 'or "KNFS" (Kernel NFS).')), cfg.ListOpt('gpfs_nfs_server_list', help=('A list of the fully qualified NFS server names that ' 'make up the OpenStack Manila configuration.')), cfg.BoolOpt('is_gpfs_node', default=False, help=('True:when Manila services are running on one of the ' 'Spectrum Scale node. ' 'False:when Manila services are not running on any of ' 'the Spectrum Scale node.')), cfg.PortOpt('gpfs_ssh_port', default=22, help='GPFS server SSH port.'), cfg.StrOpt('gpfs_ssh_login', help='GPFS server SSH login name.'), cfg.StrOpt('gpfs_ssh_password', secret=True, help='GPFS server SSH login password. ' 'The password is not needed, if \'gpfs_ssh_private_key\' ' 'is configured.'), cfg.StrOpt('gpfs_ssh_private_key', help='Path to GPFS server SSH private key for login.'), cfg.ListOpt('gpfs_share_helpers', default=[ 'KNFS=manila.share.drivers.ibm.gpfs.KNFSHelper', 'CES=manila.share.drivers.ibm.gpfs.CESHelper', ], help='Specify list of share export helpers.'), cfg.StrOpt('knfs_export_options', default=('rw,sync,no_root_squash,insecure,no_wdelay,' 'no_subtree_check'), help=('Options to use when exporting a share using kernel ' 'NFS server. Note that these defaults can be overridden ' 'when a share is created by passing metadata with key ' 'name export_options.'), deprecated_for_removal=True, deprecated_reason="This option isn't used any longer. Please " "use share-type extra specs for export " "options."), ] CONF = cfg.CONF CONF.register_opts(gpfs_share_opts) class GPFSShareDriver(driver.ExecuteMixin, driver.GaneshaMixin, driver.ShareDriver): """GPFS Share Driver. Executes commands relating to Shares. Supports creation of shares on a GPFS cluster. API version history: 1.0 - Initial version. 
1.1 - Added extend_share functionality 2.0 - Added CES support for NFS Ganesha """ def __init__(self, *args, **kwargs): """Do initialization.""" super(GPFSShareDriver, self).__init__(False, *args, **kwargs) self._helpers = {} self.configuration.append_config_values(gpfs_share_opts) self.backend_name = self.configuration.safe_get( 'share_backend_name') or "IBM Storage System" self.sshpool = None self.ssh_connections = {} self._gpfs_execute = None if self.configuration.is_gpfs_node: self.GPFS_PATH = '' else: self.GPFS_PATH = '/usr/lpp/mmfs/bin/' def do_setup(self, context): """Any initialization the share driver does while starting.""" super(GPFSShareDriver, self).do_setup(context) if self.configuration.is_gpfs_node: self._gpfs_execute = self._gpfs_local_execute else: self._gpfs_execute = self._gpfs_remote_execute self._setup_helpers() def _gpfs_local_execute(self, *cmd, **kwargs): if 'run_as_root' not in kwargs: kwargs.update({'run_as_root': True}) if 'ignore_exit_code' in kwargs: check_exit_code = kwargs.pop('ignore_exit_code') check_exit_code.append(0) kwargs.update({'check_exit_code': check_exit_code}) return utils.execute(*cmd, **kwargs) def _gpfs_remote_execute(self, *cmd, **kwargs): host = self.configuration.gpfs_share_export_ip check_exit_code = kwargs.pop('check_exit_code', True) ignore_exit_code = kwargs.pop('ignore_exit_code', None) return self._run_ssh(host, cmd, ignore_exit_code, check_exit_code) def _sanitize_command(self, cmd_list): # pylint: disable=too-many-function-args return ' '.join(six.moves.shlex_quote(cmd_arg) for cmd_arg in cmd_list) def _run_ssh(self, host, cmd_list, ignore_exit_code=None, check_exit_code=True): command = self._sanitize_command(cmd_list) if not self.sshpool: gpfs_ssh_login = self.configuration.gpfs_ssh_login password = self.configuration.gpfs_ssh_password privatekey = self.configuration.gpfs_ssh_private_key gpfs_ssh_port = self.configuration.gpfs_ssh_port ssh_conn_timeout = self.configuration.ssh_conn_timeout min_size = self.configuration.ssh_min_pool_conn max_size = self.configuration.ssh_max_pool_conn self.sshpool = utils.SSHPool(host, gpfs_ssh_port, ssh_conn_timeout, gpfs_ssh_login, password=password, privatekey=privatekey, min_size=min_size, max_size=max_size) try: with self.sshpool.item() as ssh: return self._gpfs_ssh_execute( ssh, command, ignore_exit_code=ignore_exit_code, check_exit_code=check_exit_code) except Exception as e: with excutils.save_and_reraise_exception(): msg = (_('Error running SSH command: %(cmd)s. 
' 'Error: %(excmsg)s.') % {'cmd': command, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) def _gpfs_ssh_execute(self, ssh, cmd, ignore_exit_code=None, check_exit_code=True): sanitized_cmd = strutils.mask_password(cmd) LOG.debug('Running cmd (SSH): %s', sanitized_cmd) stdin_stream, stdout_stream, stderr_stream = ssh.exec_command(cmd) channel = stdout_stream.channel stdout = stdout_stream.read() sanitized_stdout = strutils.mask_password(stdout) stderr = stderr_stream.read() sanitized_stderr = strutils.mask_password(stderr) stdin_stream.close() exit_status = channel.recv_exit_status() # exit_status == -1 if no exit code was returned if exit_status != -1: LOG.debug('Result was %s', exit_status) if ((check_exit_code and exit_status != 0) and (ignore_exit_code is None or exit_status not in ignore_exit_code)): raise exception.ProcessExecutionError(exit_code=exit_status, stdout=sanitized_stdout, stderr=sanitized_stderr, cmd=sanitized_cmd) return (sanitized_stdout, sanitized_stderr) def _check_gpfs_state(self): try: out, __ = self._gpfs_execute(self.GPFS_PATH + 'mmgetstate', '-Y') except exception.ProcessExecutionError as e: msg = (_('Failed to check GPFS state. Error: %(excmsg)s.') % {'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) lines = out.splitlines() try: state_token = lines[0].split(':').index('state') gpfs_state = lines[1].split(':')[state_token] except (IndexError, ValueError) as e: msg = (_('Failed to check GPFS state. Error: %(excmsg)s.') % {'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) if gpfs_state != 'active': return False return True def _is_dir(self, path): try: output, __ = self._gpfs_execute('stat', '--format=%F', path, run_as_root=False) except exception.ProcessExecutionError as e: msg = (_('%(path)s is not a directory. Error: %(excmsg)s') % {'path': path, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) return output.strip() == 'directory' def _is_gpfs_path(self, directory): try: self._gpfs_execute(self.GPFS_PATH + 'mmlsattr', directory) except exception.ProcessExecutionError as e: msg = (_('%(dir)s is not on GPFS filesystem. Error: %(excmsg)s.') % {'dir': directory, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) return True def _setup_helpers(self): """Initializes protocol-specific NAS drivers.""" self._helpers = {} for helper_str in self.configuration.gpfs_share_helpers: share_proto, _, import_str = helper_str.partition('=') helper = importutils.import_class(import_str) self._helpers[share_proto.upper()] = helper(self._gpfs_execute, self.configuration) def _local_path(self, sharename): """Get local path for a share or share snapshot by name.""" return os.path.join(self.configuration.gpfs_mount_point_base, sharename) def _get_gpfs_device(self): fspath = self.configuration.gpfs_mount_point_base try: (out, __) = self._gpfs_execute('df', fspath) except exception.ProcessExecutionError as e: msg = (_('Failed to get GPFS device for %(fspath)s.' 'Error: %(excmsg)s') % {'fspath': fspath, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) lines = out.splitlines() fs = lines[1].split()[0] return fs def _create_share(self, shareobj): """Create a linked fileset file in GPFS. Note: GPFS file system must have quotas enabled (mmchfs -Q yes). 
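        The fileset is created with 'mmcrfileset --inode-space new', linked
        under gpfs_mount_point_base with 'mmlinkfileset -J <share path>', and
        its size is capped with 'mmsetquota <device>:<fileset> --block
        0:<size>G' before the share path is made world-writable.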
""" sharename = shareobj['name'] sizestr = '%sG' % shareobj['size'] sharepath = self._local_path(sharename) fsdev = self._get_gpfs_device() # create fileset for the share, link it to root path and set max size try: self._gpfs_execute(self.GPFS_PATH + 'mmcrfileset', fsdev, sharename, '--inode-space', 'new') except exception.ProcessExecutionError as e: msg = (_('Failed to create fileset on %(fsdev)s for ' 'the share %(sharename)s. Error: %(excmsg)s.') % {'fsdev': fsdev, 'sharename': sharename, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) try: self._gpfs_execute(self.GPFS_PATH + 'mmlinkfileset', fsdev, sharename, '-J', sharepath) except exception.ProcessExecutionError as e: msg = (_('Failed to link fileset for the share %(sharename)s. ' 'Error: %(excmsg)s.') % {'sharename': sharename, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) try: self._gpfs_execute(self.GPFS_PATH + 'mmsetquota', fsdev + ':' + sharename, '--block', '0:' + sizestr) except exception.ProcessExecutionError as e: msg = (_('Failed to set quota for the share %(sharename)s. ' 'Error: %(excmsg)s.') % {'sharename': sharename, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) try: self._gpfs_execute('chmod', '777', sharepath) except exception.ProcessExecutionError as e: msg = (_('Failed to set permissions for share %(sharename)s. ' 'Error: %(excmsg)s.') % {'sharename': sharename, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) def _delete_share(self, shareobj): """Remove container by removing GPFS fileset.""" sharename = shareobj['name'] fsdev = self._get_gpfs_device() # ignore error, when the fileset does not exist # it may happen, when the share creation failed, the share is in # 'error' state, and the fileset was never created # we want to ignore that error condition while deleting the fileset, # i.e. 'Fileset name share-xyz not found', with error code '2' # and mark the deletion successful ignore_exit_code = [ERR_FILE_NOT_FOUND] # unlink and delete the share's fileset try: self._gpfs_execute(self.GPFS_PATH + 'mmunlinkfileset', fsdev, sharename, '-f', ignore_exit_code=ignore_exit_code) except exception.ProcessExecutionError as e: msg = (_('Failed unlink fileset for share %(sharename)s. ' 'Error: %(excmsg)s.') % {'sharename': sharename, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) try: self._gpfs_execute(self.GPFS_PATH + 'mmdelfileset', fsdev, sharename, '-f', ignore_exit_code=ignore_exit_code) except exception.ProcessExecutionError as e: msg = (_('Failed delete fileset for share %(sharename)s. ' 'Error: %(excmsg)s.') % {'sharename': sharename, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) def _get_available_capacity(self, path): """Calculate available space on path.""" try: out, __ = self._gpfs_execute('df', '-P', '-B', '1', path) except exception.ProcessExecutionError as e: msg = (_('Failed to check available capacity for %(path)s.' 
'Error: %(excmsg)s.') % {'path': path, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) out = out.splitlines()[1] size = int(out.split()[1]) available = int(out.split()[3]) return available, size def _create_share_snapshot(self, snapshot): """Create a snapshot of the share.""" sharename = snapshot['share_name'] snapshotname = snapshot['name'] fsdev = self._get_gpfs_device() LOG.debug( 'Attempting to create a snapshot %(snap)s from share %(share)s ' 'on device %(dev)s.', {'share': sharename, 'snap': snapshotname, 'dev': fsdev} ) try: self._gpfs_execute(self.GPFS_PATH + 'mmcrsnapshot', fsdev, snapshot['name'], '-j', sharename) except exception.ProcessExecutionError as e: msg = (_('Failed to create snapshot %(snapshot)s. ' 'Error: %(excmsg)s.') % {'snapshot': snapshot['name'], 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) def _delete_share_snapshot(self, snapshot): """Delete a snapshot of the share.""" sharename = snapshot['share_name'] fsdev = self._get_gpfs_device() try: self._gpfs_execute(self.GPFS_PATH + 'mmdelsnapshot', fsdev, snapshot['name'], '-j', sharename) except exception.ProcessExecutionError as e: msg = (_('Failed to delete snapshot %(snapshot)s. ' 'Error: %(excmsg)s.') % {'snapshot': snapshot['name'], 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) def _create_share_from_snapshot(self, share, snapshot, share_path): """Create share from a share snapshot.""" self._create_share(share) snapshot_path = self._get_snapshot_path(snapshot) snapshot_path = snapshot_path + "/" try: self._gpfs_execute('rsync', '-rp', snapshot_path, share_path) except exception.ProcessExecutionError as e: msg = (_('Failed to create share %(share)s from ' 'snapshot %(snapshot)s. Error: %(excmsg)s.') % {'share': share['name'], 'snapshot': snapshot['name'], 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) def _extend_share(self, shareobj, new_size): sharename = shareobj['name'] sizestr = '%sG' % new_size fsdev = self._get_gpfs_device() try: self._gpfs_execute(self.GPFS_PATH + 'mmsetquota', fsdev + ':' + sharename, '--block', '0:' + sizestr) except exception.ProcessExecutionError as e: msg = (_('Failed to set quota for the share %(sharename)s. 
' 'Error: %(excmsg)s.') % {'sharename': sharename, 'excmsg': e}) LOG.error(msg) raise exception.GPFSException(msg) def get_network_allocations_number(self): return 0 def create_share(self, ctx, share, share_server=None): """Create GPFS directory that will be represented as share.""" self._create_share(share) share_path = self._get_share_path(share) location = self._get_helper(share).create_export(share_path) return location def create_share_from_snapshot(self, ctx, share, snapshot, share_server=None, parent_share=None): """Is called to create share from a snapshot.""" share_path = self._get_share_path(share) self._create_share_from_snapshot(share, snapshot, share_path) location = self._get_helper(share).create_export(share_path) return location def create_snapshot(self, context, snapshot, share_server=None): """Creates a snapshot.""" self._create_share_snapshot(snapshot) def delete_share(self, ctx, share, share_server=None): """Remove and cleanup share storage.""" location = self._get_share_path(share) self._get_helper(share).remove_export(location, share) self._delete_share(share) def delete_snapshot(self, context, snapshot, share_server=None): """Deletes a snapshot.""" self._delete_share_snapshot(snapshot) def extend_share(self, share, new_size, share_server=None): """Extends the quota on the share fileset.""" self._extend_share(share, new_size) def ensure_share(self, ctx, share, share_server=None): """Ensure that storage are mounted and exported.""" def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share.""" helper = self._get_helper(share) location = self._get_share_path(share) for access in delete_rules: helper.deny_access(location, share, access) for access in add_rules: helper.allow_access(location, share, access) if not (add_rules or delete_rules): helper.resync_access(location, share, access_rules) def check_for_setup_error(self): """Returns an error if prerequisites aren't met.""" if not self._check_gpfs_state(): msg = (_('GPFS is not active.')) LOG.error(msg) raise exception.GPFSException(msg) if not self.configuration.gpfs_share_export_ip: msg = (_('gpfs_share_export_ip must be specified.')) LOG.error(msg) raise exception.InvalidParameterValue(err=msg) gpfs_base_dir = self.configuration.gpfs_mount_point_base if not gpfs_base_dir.startswith('/'): msg = (_('%s must be an absolute path.') % gpfs_base_dir) LOG.error(msg) raise exception.GPFSException(msg) if not self._is_dir(gpfs_base_dir): msg = (_('%s is not a directory.') % gpfs_base_dir) LOG.error(msg) raise exception.GPFSException(msg) if not self._is_gpfs_path(gpfs_base_dir): msg = (_('%s is not on GPFS. Perhaps GPFS not mounted.') % gpfs_base_dir) LOG.error(msg) raise exception.GPFSException(msg) if self.configuration.gpfs_nfs_server_type not in ("KNFS", "CES"): msg = (_('Invalid gpfs_nfs_server_type value: %s. 
' 'Valid values are: "KNFS", "CES".') % self.configuration.gpfs_nfs_server_type) LOG.error(msg) raise exception.InvalidParameterValue(err=msg) if ((not self.configuration.gpfs_nfs_server_list) and (self.configuration.gpfs_nfs_server_type != 'CES')): msg = (_('Missing value for gpfs_nfs_server_list.')) LOG.error(msg) raise exception.InvalidParameterValue(err=msg) def _is_share_valid(self, fsdev, location): try: out, __ = self._gpfs_execute(self.GPFS_PATH + 'mmlsfileset', fsdev, '-J', location, '-L', '-Y') except exception.ProcessExecutionError: msg = (_('Given share path %(share_path)s does not exist at ' 'mount point %(mount_point)s.') % {'share_path': location, 'mount_point': fsdev}) LOG.exception(msg) raise exception.ManageInvalidShare(reason=msg) lines = out.splitlines() try: validation_token = lines[0].split(':').index('allocInodes') alloc_inodes = lines[1].split(':')[validation_token] except (IndexError, ValueError): msg = (_('Failed to check share at %s.') % location) LOG.exception(msg) raise exception.GPFSException(msg) return alloc_inodes != '0' def _get_share_name(self, fsdev, location): try: out, __ = self._gpfs_execute(self.GPFS_PATH + 'mmlsfileset', fsdev, '-J', location, '-L', '-Y') except exception.ProcessExecutionError: msg = (_('Given share path %(share_path)s does not exist at ' 'mount point %(mount_point)s.') % {'share_path': location, 'mount_point': fsdev}) LOG.exception(msg) raise exception.ManageInvalidShare(reason=msg) lines = out.splitlines() try: validation_token = lines[0].split(':').index('filesetName') share_name = lines[1].split(':')[validation_token] except (IndexError, ValueError): msg = (_('Failed to check share at %s.') % location) LOG.exception(msg) raise exception.GPFSException(msg) return share_name def _manage_existing(self, fsdev, share, old_share_name): new_share_name = share['name'] new_export_location = self._local_path(new_share_name) try: self._gpfs_execute(self.GPFS_PATH + 'mmunlinkfileset', fsdev, old_share_name, '-f') except exception.ProcessExecutionError: msg = _('Failed to unlink fileset for share %s.') % new_share_name LOG.exception(msg) raise exception.GPFSException(msg) LOG.debug('Unlinked the fileset of share %s.', old_share_name) try: self._gpfs_execute(self.GPFS_PATH + 'mmchfileset', fsdev, old_share_name, '-j', new_share_name) except exception.ProcessExecutionError: msg = _('Failed to rename fileset for share %s.') % new_share_name LOG.exception(msg) raise exception.GPFSException(msg) LOG.debug('Renamed the fileset from %(old_share)s to %(new_share)s.', {'old_share': old_share_name, 'new_share': new_share_name}) try: self._gpfs_execute(self.GPFS_PATH + 'mmlinkfileset', fsdev, new_share_name, '-J', new_export_location) except exception.ProcessExecutionError: msg = _('Failed to link fileset for the share %s.' 
) % new_share_name LOG.exception(msg) raise exception.GPFSException(msg) LOG.debug('Linked the fileset of share %(share_name)s at location ' '%(export_location)s.', {'share_name': new_share_name, 'export_location': new_export_location}) try: self._gpfs_execute('chmod', '777', new_export_location) except exception.ProcessExecutionError: msg = _('Failed to set permissions for share %s.') % new_share_name LOG.exception(msg) raise exception.GPFSException(msg) LOG.debug('Changed the permission of share %s.', new_share_name) try: out, __ = self._gpfs_execute(self.GPFS_PATH + 'mmlsquota', '-j', new_share_name, '-Y', fsdev) except exception.ProcessExecutionError: msg = _('Failed to check size for share %s.') % new_share_name LOG.exception(msg) raise exception.GPFSException(msg) lines = out.splitlines() try: quota_limit = lines[0].split(':').index('blockLimit') quota_status = lines[1].split(':')[quota_limit] except (IndexError, ValueError): msg = _('Failed to check quota for share %s.') % new_share_name LOG.exception(msg) raise exception.GPFSException(msg) share_size = int(quota_status) # Note: since share_size returns integer value in KB, # we are checking whether share is less than 1GiB. # (units.Mi * KB = 1GB) if share_size < units.Mi: try: self._gpfs_execute(self.GPFS_PATH + 'mmsetquota', fsdev + ':' + new_share_name, '--block', '0:1G') except exception.ProcessExecutionError: msg = _('Failed to set quota for share %s.') % new_share_name LOG.exception(msg) raise exception.GPFSException(msg) LOG.info('Existing share %(shr)s has size %(size)s KB ' 'which is below 1GiB, so extended it to 1GiB.', {'shr': new_share_name, 'size': share_size}) share_size = 1 else: orig_share_size = share_size share_size = int(math.ceil(float(share_size) / units.Mi)) if orig_share_size != share_size * units.Mi: try: self._gpfs_execute(self.GPFS_PATH + 'mmsetquota', fsdev + ':' + new_share_name, '--block', '0:' + str(share_size) + 'G') except exception.ProcessExecutionError: msg = _('Failed to set quota for share %s.' ) % new_share_name LOG.exception(msg) raise exception.GPFSException(msg) new_export_location = self._get_helper(share).create_export( new_export_location) return share_size, new_export_location def manage_existing(self, share, driver_options): old_export = share['export_location'].split(':') try: ces_ip = old_export[0] old_export_location = old_export[1] except IndexError: msg = _('Incorrect export path. Expected format: ' 'IP:/gpfs_mount_point_base/share_id.') LOG.exception(msg) raise exception.ShareBackendException(msg=msg) if ces_ip not in self.configuration.gpfs_nfs_server_list: msg = _('The CES IP %s is not present in the ' 'configuration option "gpfs_nfs_server_list".') % ces_ip raise exception.ShareBackendException(msg=msg) fsdev = self._get_gpfs_device() if not self._is_share_valid(fsdev, old_export_location): err_msg = _('Given share path %s does not have a valid ' 'share.') % old_export_location raise exception.ManageInvalidShare(reason=err_msg) share_name = self._get_share_name(fsdev, old_export_location) out = self._get_helper(share)._has_client_access(old_export_location) if out: err_msg = _('Clients have access to %s share currently. 
Evict any ' 'clients before trying again.') % share_name raise exception.ManageInvalidShare(reason=err_msg) share_size, new_export_location = self._manage_existing( fsdev, share, share_name) return {"size": share_size, "export_locations": new_export_location} def _update_share_stats(self): """Retrieve stats info from share volume group.""" data = dict( share_backend_name=self.backend_name, vendor_name='IBM', storage_protocol='NFS', reserved_percentage=self.configuration.reserved_share_percentage) free, capacity = self._get_available_capacity( self.configuration.gpfs_mount_point_base) data['total_capacity_gb'] = math.ceil(capacity / units.Gi) data['free_capacity_gb'] = math.ceil(free / units.Gi) super(GPFSShareDriver, self)._update_share_stats(data) def _get_helper(self, share): if share['share_proto'] == 'NFS': return self._helpers[self.configuration.gpfs_nfs_server_type] else: msg = (_('Share protocol %s not supported by GPFS driver.') % share['share_proto']) LOG.error(msg) raise exception.InvalidShare(reason=msg) def _get_share_path(self, share): """Returns share path on storage provider.""" return os.path.join(self.configuration.gpfs_mount_point_base, share['name']) def _get_snapshot_path(self, snapshot): """Returns share path on storage provider.""" snapshot_dir = ".snapshots" return os.path.join(self.configuration.gpfs_mount_point_base, snapshot["share_name"], snapshot_dir, snapshot["name"]) @six.add_metaclass(abc.ABCMeta) class NASHelperBase(object): """Interface to work with share.""" def __init__(self, execute, config_object): self.configuration = config_object self._execute = execute def create_export(self, local_path): """Construct location of new export.""" return ':'.join([self.configuration.gpfs_share_export_ip, local_path]) def get_export_options(self, share, access, helper): """Get the export options.""" extra_specs = share_types.get_extra_specs_from_share(share) if helper == 'KNFS': export_options = extra_specs.get('knfs:export_options') elif helper == 'CES': export_options = extra_specs.get('ces:export_options') else: export_options = None options = self._get_validated_opt_list(export_options) options.append(self.get_access_option(access)) return ','.join(options) def _validate_export_options(self, options): """Validate the export options.""" options_not_allowed = self._get_options_not_allowed() invalid_options = [ option for option in options if option in options_not_allowed ] if invalid_options: raise exception.InvalidInput(reason='Invalid export_option %s as ' 'it is set by access_type.' 
% invalid_options) def _get_validated_opt_list(self, export_options): """Validate the export options and return an option list.""" if export_options: options = export_options.lower().split(',') self._validate_export_options(options) else: options = [] return options @abc.abstractmethod def get_access_option(self, access): """Get access option string based on access level.""" @abc.abstractmethod def _get_options_not_allowed(self): """Get access options that are not allowed in extra-specs.""" @abc.abstractmethod def remove_export(self, local_path, share): """Remove export.""" @abc.abstractmethod def allow_access(self, local_path, share, access): """Allow access to the host.""" @abc.abstractmethod def deny_access(self, local_path, share, access): """Deny access to the host.""" @abc.abstractmethod def resync_access(self, local_path, share, access_rules): """Re-sync all access rules for given share.""" class KNFSHelper(NASHelperBase): """Wrapper for Kernel NFS Commands.""" def __init__(self, execute, config_object): super(KNFSHelper, self).__init__(execute, config_object) self._execute = execute try: self._execute('exportfs', check_exit_code=True, run_as_root=True) except exception.ProcessExecutionError as e: msg = (_('NFS server not found. Error: %s.') % e) LOG.error(msg) raise exception.GPFSException(msg) def _has_client_access(self, local_path, access_to=None): try: out, __ = self._execute('exportfs', run_as_root=True) except exception.ProcessExecutionError: msg = _('Failed to check exports on the systems.') LOG.exception(msg) raise exception.GPFSException(msg) if access_to: if (re.search(re.escape(local_path) + r'[\s\n]*' + re.escape(access_to), out)): return True else: if re.findall(local_path + '\\b', ''.join(out)): return True return False def _publish_access(self, *cmd, **kwargs): check_exit_code = kwargs.get('check_exit_code', True) outs = [] localserver_iplist = socket.gethostbyname_ex(socket.gethostname())[2] for server in self.configuration.gpfs_nfs_server_list: if server in localserver_iplist: run_command = cmd run_local = True else: sshlogin = self.configuration.gpfs_ssh_login remote_login = sshlogin + '@' + server run_command = ['ssh', remote_login] + list(cmd) run_local = False try: out = utils.execute(*run_command, run_as_root=run_local, check_exit_code=check_exit_code) except exception.ProcessExecutionError: raise outs.append(out) return outs def _verify_denied_access(self, local_path, share, ip): try: cmd = ['exportfs'] outs = self._publish_access(*cmd) except exception.ProcessExecutionError: msg = _('Failed to verify denied access for ' 'share %s.') % share['name'] LOG.exception(msg) raise exception.GPFSException(msg) for stdout, stderr in outs: if stderr and stderr.strip(): msg = ('Log/ignore stderr during _validate_denied_access for ' 'share %(sharename)s. Return code OK. ' 'Stderr: %(stderr)s' % {'sharename': share['name'], 'stderr': stderr}) LOG.debug(msg) gpfs_ips = NFSHelper.get_host_list(stdout, local_path) if ip in gpfs_ips: msg = (_('Failed to deny access for share %(sharename)s. 
' 'IP %(ip)s still has access.') % {'sharename': share['name'], 'ip': ip}) LOG.error(msg) raise exception.GPFSException(msg) def remove_export(self, local_path, share): """Remove export.""" def get_access_option(self, access): """Get access option string based on access level.""" return access['access_level'] def _get_options_not_allowed(self): """Get access options that are not allowed in extra-specs.""" return list(constants.ACCESS_LEVELS) def _get_exports(self): """Get exportfs output.""" try: out, __ = self._execute('exportfs', run_as_root=True) except exception.ProcessExecutionError as e: msg = (_('Failed to check exports on the systems. ' ' Error: %s.') % e) LOG.error(msg) raise exception.GPFSException(msg) return out def allow_access(self, local_path, share, access, error_on_exists=True): """Allow access to one or more vm instances.""" if access['access_type'] != 'ip': raise exception.InvalidShareAccess(reason='Only ip access type ' 'supported.') if error_on_exists: # check if present in export out = re.search( re.escape(local_path) + r'[\s\n]*' + re.escape(access['access_to']), self._get_exports()) if out is not None: access_type = access['access_type'] access_to = access['access_to'] raise exception.ShareAccessExists(access_type=access_type, access=access_to) export_opts = self.get_export_options(share, access, 'KNFS') cmd = ['exportfs', '-o', export_opts, ':'.join([access['access_to'], local_path])] try: self._publish_access(*cmd) except exception.ProcessExecutionError: msg = _('Failed to allow access for share %s.') % share['name'] LOG.exception(msg) raise exception.GPFSException(msg) def _deny_ip(self, local_path, share, ip): """Remove access for one or more vm instances.""" cmd = ['exportfs', '-u', ':'.join([ip, local_path])] try: # Can get exit code 0 for success or 1 for already gone (also # potentially get 1 due to exportfs bug). So allow # _publish_access to continue with [0, 1] and then verify after # it is done. self._publish_access(*cmd, check_exit_code=[0, 1]) except exception.ProcessExecutionError: msg = _('Failed to deny access for share %s.') % share['name'] LOG.exception(msg) raise exception.GPFSException(msg) # Error code (0 or 1) makes deny IP success indeterminate. # So, verify that the IP access was completely removed. 
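        # The verification below re-runs 'exportfs' on every host in
        # gpfs_nfs_server_list and parses the client list reported for
        # local_path, raising GPFSException if the IP is still present.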
self._verify_denied_access(local_path, share, ip) def deny_access(self, local_path, share, access): """Remove access for one or more vm instances.""" self._deny_ip(local_path, share, access['access_to']) def _remove_other_access(self, local_path, share, access_rules): """Remove any client access that is not in access_rules.""" exports = self._get_exports() gpfs_ips = set(NFSHelper.get_host_list(exports, local_path)) manila_ips = set([x['access_to'] for x in access_rules]) remove_ips = gpfs_ips - manila_ips for ip in remove_ips: self._deny_ip(local_path, share, ip) def resync_access(self, local_path, share, access_rules): """Re-sync all access rules for given share.""" for access in access_rules: self.allow_access(local_path, share, access, error_on_exists=False) self._remove_other_access(local_path, share, access_rules) class CESHelper(NASHelperBase): """Wrapper for NFS by Spectrum Scale CES""" def __init__(self, execute, config_object): super(CESHelper, self).__init__(execute, config_object) self._execute = execute if self.configuration.is_gpfs_node: self.GPFS_PATH = '' else: self.GPFS_PATH = '/usr/lpp/mmfs/bin/' def _execute_mmnfs_command(self, cmd, err_msg): try: out, __ = self._execute(self.GPFS_PATH + 'mmnfs', 'export', *cmd) except exception.ProcessExecutionError as e: msg = (_('%(err_msg)s Error: %(e)s.') % {'err_msg': err_msg, 'e': e}) LOG.error(msg) raise exception.GPFSException(msg) return out @staticmethod def _fix_export_data(data, headers): """Export data split by ':' may need fixing if client had colons.""" # If an IPv6 client shows up then ':' delimiters don't work. # So use header positions to get data before/after Clients. # Then what is left in between can be joined back into a client IP. client_index = headers.index('Clients') # reverse_client_index is distance from end. reverse_client_index = len(headers) - (client_index + 1) after_client_index = len(data) - reverse_client_index before_client = data[:client_index] client = data[client_index: after_client_index] after_client = data[after_client_index:] result_data = before_client result_data.append(':'.join(client)) # Fixes colons in client IP result_data.extend(after_client) return result_data def _get_nfs_client_exports(self, local_path): """Get the current NFS client export details from GPFS.""" out = self._execute_mmnfs_command( ('list', '-n', local_path, '-Y'), 'Failed to get exports from the system.') # Remove the header line and use the headers to describe the data lines = out.splitlines() for line in lines: data = line.split(':') if "HEADER" in data: headers = data lines.remove(line) break else: msg = _('Failed to parse exports for path %s. ' 'No HEADER found.') % local_path LOG.error(msg) raise exception.GPFSException(msg) exports = [] for line in lines: data = line.split(':') if len(data) < 3: continue # Skip empty lines (and anything less than minimal). result_data = self._fix_export_data(data, headers) exports.append(dict(zip(headers, result_data))) return exports def _has_client_access(self, local_path, access_to=None): """Check path for any export or for one with a specific IP address.""" gpfs_clients = self._get_nfs_client_exports(local_path) return gpfs_clients and (access_to is None or access_to in [ x['Clients'] for x in gpfs_clients]) def remove_export(self, local_path, share): """Remove export.""" if self._has_client_access(local_path): err_msg = ('Failed to remove export for share %s.' 
% share['name']) self._execute_mmnfs_command(('remove', local_path), err_msg) def _get_options_not_allowed(self): """Get access options that are not allowed in extra-specs.""" return ['access_type=ro', 'access_type=rw'] def get_access_option(self, access): """Get access option string based on access level.""" if access['access_level'] == constants.ACCESS_LEVEL_RO: return 'access_type=ro' else: return 'access_type=rw' def allow_access(self, local_path, share, access): """Allow access to the host.""" if access['access_type'] != 'ip': raise exception.InvalidShareAccess(reason='Only ip access type ' 'supported.') has_exports = self._has_client_access(local_path) export_opts = self.get_export_options(share, access, 'CES') if not has_exports: cmd = ['add', local_path, '-c', access['access_to'] + '(' + export_opts + ')'] else: cmd = ['change', local_path, '--nfsadd', access['access_to'] + '(' + export_opts + ')'] err_msg = ('Failed to allow access for share %s.' % share['name']) self._execute_mmnfs_command(cmd, err_msg) def deny_access(self, local_path, share, access, force=False): """Deny access to the host.""" has_export = self._has_client_access(local_path, access['access_to']) if has_export: err_msg = ('Failed to remove access for share %s.' % share['name']) self._execute_mmnfs_command(('change', local_path, '--nfsremove', access['access_to']), err_msg) def _get_client_opts(self, access, opts_list): """Get client options string for access rule and NFS options.""" nfs_opts = ','.join([self.get_access_option(access)] + opts_list) return '%(ip)s(%(nfs_opts)s)' % {'ip': access['access_to'], 'nfs_opts': nfs_opts} def _get_share_opts(self, share): """Get a list of NFS options from the share's share type.""" extra_specs = share_types.get_extra_specs_from_share(share) opts_list = self._get_validated_opt_list( extra_specs.get('ces:export_options')) return opts_list def _nfs_change(self, local_path, share, access_rules, gpfs_clients): """Bulk add/update/remove of access rules for share.""" opts_list = self._get_share_opts(share) # Create a map of existing client access rules from GPFS. # Key from 'Clients' is an IP address or # Value from 'Access_Type' is RW|RO (case varies) gpfs_map = { x['Clients']: x['Access_Type'].lower() for x in gpfs_clients} gpfs_ips = set(gpfs_map.keys()) manila_ips = set([x['access_to'] for x in access_rules]) add_ips = manila_ips - gpfs_ips update_ips = gpfs_ips.intersection(manila_ips) remove_ips = gpfs_ips - manila_ips adds = [] updates = [] if add_ips or update_ips: for access in access_rules: ip = access['access_to'] if ip in add_ips: adds.append(self._get_client_opts(access, opts_list)) elif (ip in update_ips and access['access_level'] != gpfs_map[ip]): updates.append(self._get_client_opts(access, opts_list)) if remove_ips or adds or updates: cmd = ['change', local_path] if remove_ips: cmd.append('--nfsremove') cmd.append(','.join(remove_ips)) if adds: cmd.append('--nfsadd') cmd.append(';'.join(adds)) if updates: cmd.append('--nfschange') cmd.append(';'.join(updates)) err_msg = ('Failed to resync access for share %s.' % share['name']) self._execute_mmnfs_command(cmd, err_msg) def _nfs_add(self, access_rules, local_path, share): """Bulk add of access rules to share.""" if not access_rules: return opts_list = self._get_share_opts(share) client_options = [] for access in access_rules: client_options.append(self._get_client_opts(access, opts_list)) cmd = ['add', local_path, '-c', ';'.join(client_options)] err_msg = ('Failed to resync access for share %s.' 
% share['name']) self._execute_mmnfs_command(cmd, err_msg) def resync_access(self, local_path, share, access_rules): """Re-sync all access rules for given share.""" gpfs_clients = self._get_nfs_client_exports(local_path) if not gpfs_clients: self._nfs_add(access_rules, local_path, share) else: self._nfs_change(local_path, share, access_rules, gpfs_clients) manila-10.0.0/manila/share/drivers/hdfs/0000775000175000017500000000000013656750362020035 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/hdfs/__init__.py0000664000175000017500000000000013656750227022134 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/hdfs/hdfs_native.py0000664000175000017500000004106613656750227022710 0ustar zuulzuul00000000000000# Copyright (c) 2015 Intel, Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """HDFS native protocol (hdfs) driver for manila shares. Manila share is a directory in HDFS. And this share does not use service VM instance (share server). The instance directly talks to the HDFS cluster. The initial version only supports single namenode and flat network. Configuration Requirements: To enable access control, HDFS file system must have ACLs enabled. """ import math import os import pipes import socket from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log from oslo_utils import units import six from manila import exception from manila.i18n import _ from manila.share import driver from manila import utils LOG = log.getLogger(__name__) hdfs_native_share_opts = [ cfg.HostAddressOpt('hdfs_namenode_ip', help='The IP of the HDFS namenode.'), cfg.PortOpt('hdfs_namenode_port', default=9000, help='The port of HDFS namenode service.'), cfg.PortOpt('hdfs_ssh_port', default=22, help='HDFS namenode SSH port.'), cfg.StrOpt('hdfs_ssh_name', help='HDFS namenode ssh login name.'), cfg.StrOpt('hdfs_ssh_pw', help='HDFS namenode SSH login password, ' 'This parameter is not necessary, if ' '\'hdfs_ssh_private_key\' is configured.'), cfg.StrOpt('hdfs_ssh_private_key', help='Path to HDFS namenode SSH private ' 'key for login.'), ] CONF = cfg.CONF CONF.register_opts(hdfs_native_share_opts) class HDFSNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver): """HDFS Share Driver. Executes commands relating to shares. 
API version history: 1.0 - Initial Version """ def __init__(self, *args, **kwargs): super(HDFSNativeShareDriver, self).__init__(False, *args, **kwargs) self.configuration.append_config_values(hdfs_native_share_opts) self.backend_name = self.configuration.safe_get( 'share_backend_name') or 'HDFS-Native' self.ssh_connections = {} self._hdfs_execute = None self._hdfs_bin = None self._hdfs_base_path = None def do_setup(self, context): """Do initialization while the share driver starts.""" super(HDFSNativeShareDriver, self).do_setup(context) host = self.configuration.hdfs_namenode_ip local_hosts = socket.gethostbyname_ex(socket.gethostname())[2] if host in local_hosts: self._hdfs_execute = self._hdfs_local_execute else: self._hdfs_execute = self._hdfs_remote_execute self._hdfs_bin = 'hdfs' self._hdfs_base_path = ( 'hdfs://' + self.configuration.hdfs_namenode_ip + ':' + six.text_type(self.configuration.hdfs_namenode_port)) def _hdfs_local_execute(self, *cmd, **kwargs): if 'run_as_root' not in kwargs: kwargs.update({'run_as_root': False}) return utils.execute(*cmd, **kwargs) def _hdfs_remote_execute(self, *cmd, **kwargs): host = self.configuration.hdfs_namenode_ip check_exit_code = kwargs.pop('check_exit_code', False) return self._run_ssh(host, cmd, check_exit_code) def _run_ssh(self, host, cmd_list, check_exit_code=False): command = ' '.join(pipes.quote(cmd_arg) for cmd_arg in cmd_list) connection = self.ssh_connections.get(host) if not connection: hdfs_ssh_name = self.configuration.hdfs_ssh_name password = self.configuration.hdfs_ssh_pw privatekey = self.configuration.hdfs_ssh_private_key hdfs_ssh_port = self.configuration.hdfs_ssh_port ssh_conn_timeout = self.configuration.ssh_conn_timeout min_size = self.configuration.ssh_min_pool_conn max_size = self.configuration.ssh_max_pool_conn ssh_pool = utils.SSHPool(host, hdfs_ssh_port, ssh_conn_timeout, hdfs_ssh_name, password=password, privatekey=privatekey, min_size=min_size, max_size=max_size) ssh = ssh_pool.create() self.ssh_connections[host] = (ssh_pool, ssh) else: ssh_pool, ssh = connection if not ssh.get_transport().is_active(): ssh_pool.remove(ssh) ssh = ssh_pool.create() self.ssh_connections[host] = (ssh_pool, ssh) try: return processutils.ssh_execute( ssh, command, check_exit_code=check_exit_code) except Exception as e: msg = (_('Error running SSH command: %(cmd)s. ' 'Error: %(excmsg)s.') % {'cmd': command, 'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) def _set_share_size(self, share, size=None): share_dir = '/' + share['name'] if not size: sizestr = six.text_type(share['size']) + 'g' else: sizestr = six.text_type(size) + 'g' try: self._hdfs_execute(self._hdfs_bin, 'dfsadmin', '-setSpaceQuota', sizestr, share_dir) except exception.ProcessExecutionError as e: msg = (_('Failed to set space quota for the ' 'share %(sharename)s. Error: %(excmsg)s.') % {'sharename': share['name'], 'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) def _create_share(self, share): """Creates a share.""" if share['share_proto'].lower() != 'hdfs': msg = _('Only HDFS protocol supported!') LOG.error(msg) raise exception.HDFSException(msg) share_dir = '/' + share['name'] try: self._hdfs_execute(self._hdfs_bin, 'dfs', '-mkdir', share_dir) except exception.ProcessExecutionError as e: msg = (_('Failed to create directory in hdfs for the ' 'share %(sharename)s. 
Error: %(excmsg)s.') % {'sharename': share['name'], 'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) # set share size self._set_share_size(share) try: self._hdfs_execute(self._hdfs_bin, 'dfsadmin', '-allowSnapshot', share_dir) except exception.ProcessExecutionError as e: msg = (_('Failed to allow snapshot for the ' 'share %(sharename)s. Error: %(excmsg)s.') % {'sharename': share['name'], 'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) def _get_share_path(self, share): """Return share path on storage provider.""" return os.path.join(self._hdfs_base_path, share['name']) def _get_snapshot_path(self, snapshot): """Return snapshot path on storage provider.""" snapshot_dir = '.snapshot' return os.path.join('/', snapshot['share_name'], snapshot_dir, snapshot['name']) def get_network_allocations_number(self): return 0 def create_share(self, context, share, share_server=None): """Create a HDFS directory which acted as a share.""" self._create_share(share) return self._get_share_path(share) def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Creates a snapshot.""" self._create_share(share) share_path = '/' + share['name'] snapshot_path = self._get_snapshot_path(snapshot) try: # check if the directory is empty (out, __) = self._hdfs_execute( self._hdfs_bin, 'dfs', '-ls', snapshot_path) # only copy files when the snapshot directory is not empty if out: copy_path = snapshot_path + "/*" cmd = [self._hdfs_bin, 'dfs', '-cp', copy_path, share_path] self._hdfs_execute(*cmd) except exception.ProcessExecutionError as e: msg = (_('Failed to create share %(sharename)s from ' 'snapshot %(snapshotname)s. Error: %(excmsg)s.') % {'sharename': share['name'], 'snapshotname': snapshot['name'], 'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) return self._get_share_path(share) def create_snapshot(self, context, snapshot, share_server=None): """Creates a snapshot.""" share_dir = '/' + snapshot['share_name'] snapshot_name = snapshot['name'] cmd = [self._hdfs_bin, 'dfs', '-createSnapshot', share_dir, snapshot_name] try: self._hdfs_execute(*cmd) except exception.ProcessExecutionError as e: msg = (_('Failed to create snapshot %(snapshotname)s for ' 'the share %(sharename)s. Error: %(excmsg)s.') % {'snapshotname': snapshot_name, 'sharename': snapshot['share_name'], 'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) def delete_share(self, context, share, share_server=None): """Deletes share storage.""" share_dir = '/' + share['name'] cmd = [self._hdfs_bin, 'dfs', '-rm', '-r', share_dir] try: self._hdfs_execute(*cmd) except exception.ProcessExecutionError as e: msg = (_('Failed to delete share %(sharename)s. ' 'Error: %(excmsg)s.') % {'sharename': share['name'], 'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) def delete_snapshot(self, context, snapshot, share_server=None): """Deletes a snapshot.""" share_dir = '/' + snapshot['share_name'] cmd = [self._hdfs_bin, 'dfs', '-deleteSnapshot', share_dir, snapshot['name']] try: self._hdfs_execute(*cmd) except exception.ProcessExecutionError as e: msg = (_('Failed to delete snapshot %(snapshotname)s. 
' 'Error: %(excmsg)s.') % {'snapshotname': snapshot['name'], 'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) def ensure_share(self, context, share, share_server=None): """Ensure the storage are exported.""" def allow_access(self, context, share, access, share_server=None): """Allows access to the share for a given user.""" if access['access_type'] != 'user': msg = _("Only 'user' access type allowed!") LOG.error(msg) raise exception.InvalidShareAccess(msg) # Note(jun): For directories in HDFS, the x permission is # required to access a child of the directory. if access['access_level'] == 'rw': access_level = 'rwx' elif access['access_level'] == 'ro': access_level = 'r-x' else: msg = (_('The access level %(accesslevel)s was unsupported.') % {'accesslevel': access['access_level']}) LOG.error(msg) raise exception.InvalidShareAccess(msg) share_dir = '/' + share['name'] user_access = ':'.join([access['access_type'], access['access_to'], access_level]) cmd = [self._hdfs_bin, 'dfs', '-setfacl', '-m', '-R', user_access, share_dir] try: (__, out) = self._hdfs_execute(*cmd, check_exit_code=True) except exception.ProcessExecutionError as e: msg = (_('Failed to set ACL of share %(sharename)s for ' 'user: %(username)s' 'Error: %(excmsg)s.') % {'sharename': share['name'], 'username': access['access_to'], 'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) def deny_access(self, context, share, access, share_server=None): """Denies the access to the share for a given user.""" share_dir = '/' + share['name'] access_name = ':'.join([access['access_type'], access['access_to']]) cmd = [self._hdfs_bin, 'dfs', '-setfacl', '-x', '-R', access_name, share_dir] try: (__, out) = self._hdfs_execute(*cmd, check_exit_code=True) except exception.ProcessExecutionError as e: msg = (_('Failed to deny ACL of share %(sharename)s for ' 'user: %(username)s' 'Error: %(excmsg)s.') % {'sharename': share['name'], 'username': access['access_to'], 'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) def extend_share(self, share, new_size, share_server=None): """Extend share storage.""" self._set_share_size(share, new_size) def _check_hdfs_state(self): try: (out, __) = self._hdfs_execute(self._hdfs_bin, 'fsck', '/') except exception.ProcessExecutionError as e: msg = (_('Failed to check hdfs state. Error: %(excmsg)s.') % {'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) if 'HEALTHY' in out: return True else: return False def check_for_setup_error(self): """Return an error if the prerequisites are met.""" if not self.configuration.hdfs_namenode_ip: msg = _('Not specify the hdfs cluster yet! ' 'Add the ip of hdfs namenode in the ' 'hdfs_namenode_ip configuration parameter.') LOG.error(msg) raise exception.HDFSException(msg) if not self._check_hdfs_state(): msg = _('HDFS is not in healthy state.') LOG.error(msg) raise exception.HDFSException(msg) def _get_available_capacity(self): """Calculate available space on path.""" try: (out, __) = self._hdfs_execute(self._hdfs_bin, 'dfsadmin', '-report') except exception.ProcessExecutionError as e: msg = (_('Failed to check available capacity for hdfs.' 'Error: %(excmsg)s.') % {'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) lines = out.splitlines() try: total = int(lines[1].split()[2]) free = int(lines[2].split()[2]) except (IndexError, ValueError) as e: msg = (_('Failed to get hdfs capacity info. 
' 'Error: %(excmsg)s.') % {'excmsg': six.text_type(e)}) LOG.error(msg) raise exception.HDFSException(msg) return total, free def _update_share_stats(self): """Retrieves stats info of share directories group.""" data = dict(share_backend_name=self.backend_name, storage_protocol='HDFS', reserved_percentage=self.configuration. reserved_share_percentage) total, free = self._get_available_capacity() data['total_capacity_gb'] = math.ceil(total / units.Gi) data['free_capacity_gb'] = math.ceil(free / units.Gi) super(HDFSNativeShareDriver, self)._update_share_stats(data) manila-10.0.0/manila/share/drivers/nexenta/0000775000175000017500000000000013656750362020553 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/nexenta/utils.py0000664000175000017500000000305513656750227022270 0ustar zuulzuul00000000000000# Copyright 2019 Nexenta by DDN, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re import six from oslo_utils import units def str2size(s, scale=1024): """Convert size-string. String format: [:space:] to bytes. :param s: size-string :param scale: base size """ if not s: return 0 if isinstance(s, six.integer_types): return s match = re.match(r'^([\.\d]+)\s*([BbKkMmGgTtPpEeZzYy]?)', s) if match is None: raise ValueError('Invalid value: %s' % s) groups = match.groups() value = float(groups[0]) suffix = len(groups) > 1 and groups[1].upper() or 'B' types = ('B', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y') for i, t in enumerate(types): if suffix == t: return float(value * pow(scale, i)) def str2gib_size(s): """Covert size-string to size in gigabytes.""" size_in_bytes = str2size(s) return size_in_bytes // units.Gi def bytes_to_gb(size): return float(size) / units.Gi manila-10.0.0/manila/share/drivers/nexenta/__init__.py0000664000175000017500000000000013656750227022652 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/nexenta/ns5/0000775000175000017500000000000013656750362021260 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/nexenta/ns5/__init__.py0000664000175000017500000000000013656750227023357 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/nexenta/ns5/jsonrpc.py0000664000175000017500000005336313656750227023322 0ustar zuulzuul00000000000000# Copyright 2019 Nexenta by DDN, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
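# The NefRequest helper below wraps calls to the NexentaStor 5 REST API (NEF):
# it re-authenticates and resends the request on HTTP 401, falls back to its
# failover handling on connection errors and HTTP 404 responses, retries
# HTTP 500 'EBUSY' errors after a delay, follows the 'monitor' link returned
# with HTTP 202 responses (asynchronous jobs), and follows 'next' links to
# accumulate paginated 'data' items into a single result list.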
import hashlib import json import posixpath from eventlet import greenthread from oslo_log import log as logging import requests import six from manila import exception from manila.i18n import _ LOG = logging.getLogger(__name__) class NefException(exception.ManilaException): def __init__(self, data=None, **kwargs): defaults = { 'name': 'NexentaError', 'code': 'EBADMSG', 'source': 'ManilaDriver', 'message': 'Unknown error' } if isinstance(data, dict): for key in defaults: if key in kwargs: continue if key in data: kwargs[key] = data[key] else: kwargs[key] = defaults[key] elif isinstance(data, six.string_types): if 'message' not in kwargs: kwargs['message'] = data for key in defaults: if key not in kwargs: kwargs[key] = defaults[key] message = (_('%(message)s (source: %(source)s, ' 'name: %(name)s, code: %(code)s)') % kwargs) self.code = kwargs['code'] del kwargs['message'] super(NefException, self).__init__(message, **kwargs) class NefRequest(object): def __init__(self, proxy, method): self.proxy = proxy self.method = method self.path = None self.lock = False self.time = 0 self.data = [] self.payload = {} self.stat = {} self.hooks = { 'response': self.hook } self.kwargs = { 'hooks': self.hooks, 'timeout': self.proxy.timeout } def __call__(self, path, payload=None): LOG.debug('NEF request start: %(method)s %(path)s %(payload)s', {'method': self.method, 'path': path, 'payload': payload}) if self.method not in ['get', 'delete', 'put', 'post']: message = (_('NEF API does not support %(method)s method'), {'method': self.method}) raise NefException(code='EINVAL', message=message) if not path: message = (_('NEF API call requires collection path')) raise NefException(code='EINVAL', message=message) self.path = path if payload: if not isinstance(payload, dict): message = (_('NEF API call payload must be a dictionary')) raise NefException(code='EINVAL', message=message) if self.method in ['get', 'delete']: self.payload = {'params': payload} elif self.method in ['put', 'post']: self.payload = {'data': json.dumps(payload)} try: response = self.request(self.method, self.path, **self.payload) except (requests.exceptions.ConnectionError, requests.exceptions.Timeout) as error: LOG.debug('Failed to %(method)s %(path)s %(payload)s: %(error)s', {'method': self.method, 'path': self.path, 'payload': self.payload, 'error': error}) if not self.failover(): raise error LOG.debug('Retry initial request after failover: ' '%(method)s %(path)s %(payload)s', {'method': self.method, 'path': self.path, 'payload': self.payload}) response = self.request(self.method, self.path, **self.payload) LOG.debug('NEF request done: %(method)s %(path)s %(payload)s, ' 'total response time: %(time)s seconds, ' 'total requests count: %(count)s, ' 'requests statistics: %(stat)s', {'method': self.method, 'path': self.path, 'payload': self.payload, 'time': self.time, 'count': sum(self.stat.values()), 'stat': self.stat}) if response.ok and not response.content: return None content = json.loads(response.content) if not response.ok: raise NefException(content) if isinstance(content, dict) and 'data' in content: return self.data return content def request(self, method, path, **kwargs): url = self.proxy.url(path) LOG.debug('Perform session request: %(method)s %(url)s %(body)s', {'method': method, 'url': url, 'body': kwargs}) kwargs.update(self.kwargs) return self.proxy.session.request(method, url, **kwargs) def hook(self, response, **kwargs): initial_text = (_('initial request %(method)s %(path)s %(body)s') % {'method': self.method, 'path': 
self.path, 'body': self.payload}) request_text = (_('session request %(method)s %(url)s %(body)s') % {'method': response.request.method, 'url': response.request.url, 'body': response.request.body}) response_text = (_('session response %(code)s %(content)s') % {'code': response.status_code, 'content': response.content}) text = (_('%(request_text)s and %(response_text)s') % {'request_text': request_text, 'response_text': response_text}) LOG.debug('Hook start on %(text)s', {'text': text}) if response.status_code not in self.stat: self.stat[response.status_code] = 0 self.stat[response.status_code] += 1 self.time += response.elapsed.total_seconds() if response.ok and not response.content: LOG.debug('Hook done on %(text)s: ' 'empty response content', {'text': text}) return response if not response.content: message = (_('There is no response content ' 'is available for %(text)s') % {'text': text}) raise NefException(code='ENODATA', message=message) try: content = json.loads(response.content) except (TypeError, ValueError) as error: message = (_('Failed to decode JSON for %(text)s: %(error)s') % {'text': text, 'error': error}) raise NefException(code='ENOMSG', message=message) method = 'get' # pylint: disable=no-member if response.status_code == requests.codes.unauthorized: if self.stat[response.status_code] > self.proxy.retries: raise NefException(content) self.auth() request = response.request.copy() request.headers.update(self.proxy.session.headers) LOG.debug('Retry last %(text)s after authentication', {'text': request_text}) return self.proxy.session.send(request, **kwargs) elif response.status_code == requests.codes.not_found: if self.lock: LOG.debug('Hook done on %(text)s: ' 'nested failover is detected', {'text': text}) return response if self.stat[response.status_code] > self.proxy.retries: raise NefException(content) if not self.failover(): LOG.debug('Hook done on %(text)s: ' 'no valid hosts found', {'text': text}) return response LOG.debug('Retry %(text)s after failover', {'text': initial_text}) return self.request(self.method, self.path, **self.payload) elif response.status_code == requests.codes.server_error: if not (isinstance(content, dict) and 'code' in content and content['code'] == 'EBUSY'): raise NefException(content) if self.stat[response.status_code] > self.proxy.retries: raise NefException(content) self.proxy.delay(self.stat[response.status_code]) LOG.debug('Retry %(text)s after delay', {'text': initial_text}) return self.request(self.method, self.path, **self.payload) elif response.status_code == requests.codes.accepted: path = self.getpath(content, 'monitor') if not path: message = (_('There is no monitor path ' 'available for %(text)s') % {'text': text}) raise NefException(code='ENOMSG', message=message) self.proxy.delay(self.stat[response.status_code]) return self.request(method, path) elif response.status_code == requests.codes.ok: if not (isinstance(content, dict) and 'data' in content): LOG.debug('Hook done on %(text)s: there ' 'is no JSON data available', {'text': text}) return response LOG.debug('Append %(count)s data items to response', {'count': len(content['data'])}) self.data += content['data'] path = self.getpath(content, 'next') if not path: LOG.debug('Hook done on %(text)s: there ' 'is no next path available', {'text': text}) return response LOG.debug('Perform next session request %(method)s %(path)s', {'method': method, 'path': path}) return self.request(method, path) LOG.debug('Hook done on %(text)s and ' 'returned original response', {'text': text}) return 
response def auth(self): method = 'post' path = 'auth/login' payload = {'username': self.proxy.username, 'password': self.proxy.password} data = json.dumps(payload) kwargs = {'data': data} self.proxy.delete_bearer() response = self.request(method, path, **kwargs) content = json.loads(response.content) if not (isinstance(content, dict) and 'token' in content): message = (_('There is no authentication token available ' 'for authentication request %(method)s %(url)s ' '%(body)s and response %(code)s %(content)s') % {'method': response.request.method, 'url': response.request.url, 'body': response.request.body, 'code': response.status_code, 'content': response.content}) raise NefException(code='ENODATA', message=message) token = content['token'] self.proxy.update_token(token) def failover(self): result = False self.lock = True method = 'get' host = self.proxy.host root = self.proxy.root for item in self.proxy.hosts: if item == host: continue self.proxy.update_host(item) LOG.debug('Try to failover path ' '%(root)s to host %(host)s', {'root': root, 'host': item}) try: response = self.request(method, root) except (requests.exceptions.ConnectionError, requests.exceptions.Timeout) as error: LOG.debug('Skip unavailable host %(host)s ' 'due to error: %(error)s', {'host': item, 'error': error}) continue LOG.debug('Failover result: %(code)s %(content)s', {'code': response.status_code, 'content': response.content}) # pylint: disable=no-member if response.status_code == requests.codes.ok: LOG.debug('Successful failover path ' '%(root)s to host %(host)s', {'root': root, 'host': item}) self.proxy.update_lock() result = True break else: LOG.debug('Skip unsuitable host %(host)s: ' 'there is no %(root)s path found', {'host': item, 'root': root}) self.lock = False return result @staticmethod def getpath(content, name): if isinstance(content, dict) and 'links' in content: for link in content['links']: if not isinstance(link, dict): continue if 'rel' in link and 'href' in link: if link['rel'] == name: return link['href'] return None class NefCollections(object): subj = 'collection' root = '/collections' def __init__(self, proxy): self.proxy = proxy def path(self, name): quoted_name = six.moves.urllib.parse.quote_plus(name) return posixpath.join(self.root, quoted_name) def get(self, name, payload=None): LOG.debug('Get properties of %(subj)s %(name)s: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = self.path(name) return self.proxy.get(path, payload) def set(self, name, payload=None): LOG.debug('Modify properties of %(subj)s %(name)s: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = self.path(name) return self.proxy.put(path, payload) def list(self, payload=None): LOG.debug('List of %(subj)ss: %(payload)s', {'subj': self.subj, 'payload': payload}) return self.proxy.get(self.root, payload) def create(self, payload=None): LOG.debug('Create %(subj)s: %(payload)s', {'subj': self.subj, 'payload': payload}) try: return self.proxy.post(self.root, payload) except NefException as error: if error.code != 'EEXIST': raise error def delete(self, name, payload=None): LOG.debug('Delete %(subj)s %(name)s: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = self.path(name) try: return self.proxy.delete(path, payload) except NefException as error: if error.code != 'ENOENT': raise error class NefSettings(NefCollections): subj = 'setting' root = '/settings/properties' def create(self, payload=None): return NotImplemented def delete(self, name, 
payload=None): return NotImplemented class NefDatasets(NefCollections): subj = 'dataset' root = '/storage/datasets' def rename(self, name, payload=None): LOG.debug('Rename %(subj)s %(name)s: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = posixpath.join(self.path(name), 'rename') return self.proxy.post(path, payload) class NefSnapshots(NefDatasets, NefCollections): subj = 'snapshot' root = '/storage/snapshots' def clone(self, name, payload=None): LOG.debug('Clone %(subj)s %(name)s: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = posixpath.join(self.path(name), 'clone') return self.proxy.post(path, payload) class NefFilesystems(NefDatasets, NefCollections): subj = 'filesystem' root = '/storage/filesystems' def rollback(self, name, payload=None): LOG.debug('Rollback %(subj)s %(name)s: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = posixpath.join(self.path(name), 'rollback') return self.proxy.post(path, payload) def mount(self, name, payload=None): LOG.debug('Mount %(subj)s %(name)s: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = posixpath.join(self.path(name), 'mount') return self.proxy.post(path, payload) def unmount(self, name, payload=None): LOG.debug('Unmount %(subj)s %(name)s: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = posixpath.join(self.path(name), 'unmount') return self.proxy.post(path, payload) def acl(self, name, payload=None): LOG.debug('Set %(subj)s %(name)s ACL: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = posixpath.join(self.path(name), 'acl') return self.proxy.post(path, payload) def promote(self, name, payload=None): LOG.debug('Promote %(subj)s %(name)s: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = posixpath.join(self.path(name), 'promote') return self.proxy.post(path, payload) class NefHpr(NefCollections): subj = 'HPR service' root = '/hpr' def activate(self, payload=None): LOG.debug('Activate %(payload)s', {'payload': payload}) path = posixpath.join(self.root, 'activate') return self.proxy.post(path, payload) def start(self, name, payload=None): LOG.debug('Start %(subj)s %(name)s: %(payload)s', {'subj': self.subj, 'name': name, 'payload': payload}) path = posixpath.join(self.path(name), 'start') return self.proxy.post(path, payload) class NefServices(NefCollections): subj = 'service' root = '/services' class NefNfs(NefCollections): subj = 'NFS' root = '/nas/nfs' class NefNetAddresses(NefCollections): subj = 'network address' root = '/network/addresses' class NefProxy(object): def __init__(self, proto, path, conf): self.session = requests.Session() self.settings = NefSettings(self) self.filesystems = NefFilesystems(self) self.snapshots = NefSnapshots(self) self.services = NefServices(self) self.hpr = NefHpr(self) self.nfs = NefNfs(self) self.netaddrs = NefNetAddresses(self) self.proto = proto self.path = path self.lock = None self.tokens = {} self.headers = { 'Content-Type': 'application/json', 'X-XSS-Protection': '1' } if conf.nexenta_use_https: self.scheme = 'https' else: self.scheme = 'http' self.username = conf.nexenta_user self.password = conf.nexenta_password self.hosts = [] if conf.nexenta_rest_addresses: for host in conf.nexenta_rest_addresses: self.hosts.append(host.strip()) self.root = self.filesystems.path(path) if not self.hosts: self.hosts.append(conf.nexenta_nas_host) self.host = self.hosts[0] if conf.nexenta_rest_port: self.port = conf.nexenta_rest_port 
else: if conf.nexenta_use_https: self.port = 8443 else: self.port = 8080 self.backoff_factor = conf.nexenta_rest_backoff_factor self.retries = len(self.hosts) * conf.nexenta_rest_retry_count self.timeout = ( conf.nexenta_rest_connect_timeout, conf.nexenta_rest_read_timeout) # pylint: disable=no-member max_retries = requests.packages.urllib3.util.retry.Retry( total=conf.nexenta_rest_retry_count, backoff_factor=conf.nexenta_rest_backoff_factor) adapter = requests.adapters.HTTPAdapter(max_retries=max_retries) self.session.verify = conf.nexenta_ssl_cert_verify self.session.headers.update(self.headers) self.session.mount('%s://' % self.scheme, adapter) if not conf.nexenta_ssl_cert_verify: requests.packages.urllib3.disable_warnings() self.update_lock() def __getattr__(self, name): return NefRequest(self, name) def delete_bearer(self): if 'Authorization' in self.session.headers: del self.session.headers['Authorization'] def update_bearer(self, token): bearer = 'Bearer %s' % token self.session.headers['Authorization'] = bearer def update_token(self, token): self.tokens[self.host] = token self.update_bearer(token) def update_host(self, host): self.host = host if host in self.tokens: token = self.tokens[host] self.update_bearer(token) def update_lock(self): prop = self.settings.get('system.guid') guid = prop.get('value') path = '%s:%s' % (guid, self.path) if isinstance(path, six.text_type): path = path.encode('utf-8') self.lock = hashlib.md5(path).hexdigest() def url(self, path): netloc = '%s:%d' % (self.host, int(self.port)) components = (self.scheme, netloc, str(path), None, None) url = six.moves.urllib.parse.urlunsplit(components) return url def delay(self, attempt): interval = int(self.backoff_factor * (2 ** (attempt - 1))) LOG.debug('Waiting for %(interval)s seconds', {'interval': interval}) greenthread.sleep(interval) manila-10.0.0/manila/share/drivers/nexenta/ns5/nexenta_nas.py0000664000175000017500000005553313656750227024150 0ustar zuulzuul00000000000000# Copyright 2019 Nexenta by DDN, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import posixpath from oslo_log import log from oslo_utils import units from manila.common import constants as common from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.nexenta.ns5 import jsonrpc from manila.share.drivers.nexenta import options from manila.share.drivers.nexenta import utils VERSION = '1.1' LOG = log.getLogger(__name__) ZFS_MULTIPLIER = 1.1 # ZFS quotas do not take metadata into account. class NexentaNasDriver(driver.ShareDriver): """Nexenta Share Driver. Executes commands relating to Shares. API version history: 1.0 - Initial version. 1.1 - Failover support. - Unshare filesystem completely after last securityContext is removed. - Moved all http/url code to jsonrpc. - Manage existing support. - Revert to snapshot support. 
""" driver_prefix = 'nexenta' def __init__(self, *args, **kwargs): """Do initialization.""" LOG.debug('Initializing Nexenta driver.') super(NexentaNasDriver, self).__init__(False, *args, **kwargs) self.configuration = kwargs.get('configuration') if self.configuration: self.configuration.append_config_values( options.nexenta_connection_opts) self.configuration.append_config_values( options.nexenta_nfs_opts) self.configuration.append_config_values( options.nexenta_dataset_opts) else: raise exception.BadConfigurationException( reason=_('Nexenta configuration missing.')) self.nef = None self.verify_ssl = self.configuration.nexenta_ssl_cert_verify self.nas_host = self.configuration.nexenta_nas_host self.nef_port = self.configuration.nexenta_rest_port self.nef_user = self.configuration.nexenta_user self.nef_password = self.configuration.nexenta_password self.pool_name = self.configuration.nexenta_pool self.parent_fs = self.configuration.nexenta_folder self.nfs_mount_point_base = self.configuration.nexenta_mount_point_base self.dataset_compression = ( self.configuration.nexenta_dataset_compression) self.provisioned_capacity = 0 @property def storage_protocol(self): protocol = '' if self.configuration.nexenta_nfs: protocol = 'NFS' else: msg = _('At least 1 storage protocol must be enabled.') raise exception.NexentaException(msg) return protocol @property def root_path(self): return posixpath.join(self.pool_name, self.parent_fs) @property def share_backend_name(self): if not hasattr(self, '_share_backend_name'): self._share_backend_name = None if self.configuration: self._share_backend_name = self.configuration.safe_get( 'share_backend_name') if not self._share_backend_name: self._share_backend_name = 'NexentaStor5' return self._share_backend_name def do_setup(self, context): self.nef = jsonrpc.NefProxy(self.storage_protocol, self.root_path, self.configuration) def check_for_setup_error(self): """Check root filesystem, NFS service and NFS share.""" filesystem = self.nef.filesystems.get(self.root_path) if filesystem['mountPoint'] == 'none': message = (_('NFS root filesystem %(path)s is not writable') % {'path': filesystem['mountPoint']}) raise jsonrpc.NefException(code='ENOENT', message=message) if not filesystem['isMounted']: message = (_('NFS root filesystem %(path)s is not mounted') % {'path': filesystem['mountPoint']}) raise jsonrpc.NefException(code='ENOTDIR', message=message) payload = {} if filesystem['nonBlockingMandatoryMode']: payload['nonBlockingMandatoryMode'] = False if filesystem['smartCompression']: payload['smartCompression'] = False if payload: self.nef.filesystems.set(self.root_path, payload) service = self.nef.services.get('nfs') if service['state'] != 'online': message = (_('NFS server service is not online: %(state)s') % {'state': service['state']}) raise jsonrpc.NefException(code='ESRCH', message=message) self._get_provisioned_capacity() def _get_provisioned_capacity(self): payload = {'fields': 'referencedQuotaSize'} self.provisioned_capacity += self.nef.filesystems.get( self.root_path, payload)['referencedQuotaSize'] def ensure_share(self, context, share, share_server=None): pass def create_share(self, context, share, share_server=None): """Create a share.""" LOG.debug('Creating share: %s.', self._get_share_name(share)) dataset_path = self._get_dataset_path(share) size = int(share['size'] * units.Gi * ZFS_MULTIPLIER) payload = { 'recordSize': self.configuration.nexenta_dataset_record_size, 'compressionMode': self.dataset_compression, 'path': dataset_path, 
'referencedQuotaSize': size, 'nonBlockingMandatoryMode': False } if not self.configuration.nexenta_thin_provisioning: payload['referencedReservationSize'] = size self.nef.filesystems.create(payload) try: mount_path = self._mount_filesystem(share) except jsonrpc.NefException as create_error: try: payload = {'force': True} self.nef.filesystems.delete(dataset_path, payload) except jsonrpc.NefException as delete_error: LOG.debug('Failed to delete share %(path)s: %(error)s', {'path': dataset_path, 'error': delete_error}) raise create_error self.provisioned_capacity += share['size'] location = { 'path': mount_path, 'id': self._get_share_name(share) } return [location] def _mount_filesystem(self, share): """Ensure that filesystem is activated and mounted on the host.""" dataset_path = self._get_dataset_path(share) payload = {'fields': 'mountPoint,isMounted'} filesystem = self.nef.filesystems.get(dataset_path, payload) if filesystem['mountPoint'] == 'none': payload = {'datasetName': dataset_path} self.nef.hpr.activate(payload) filesystem = self.nef.filesystems.get(dataset_path, payload) elif not filesystem['isMounted']: self.nef.filesystems.mount(dataset_path) return '%s:%s' % (self.nas_host, filesystem['mountPoint']) def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create share from snapshot.""" snapshot_path = self._get_snapshot_path(snapshot) LOG.debug('Creating share from snapshot %s.', snapshot_path) clone_path = self._get_dataset_path(share) size = int(share['size'] * units.Gi * ZFS_MULTIPLIER) payload = { 'targetPath': clone_path, 'referencedQuotaSize': size, 'recordSize': self.configuration.nexenta_dataset_record_size, 'compressionMode': self.dataset_compression, 'nonBlockingMandatoryMode': False } if not self.configuration.nexenta_thin_provisioning: payload['referencedReservationSize'] = size self.nef.snapshots.clone(snapshot_path, payload) self._remount_filesystem(clone_path) self.provisioned_capacity += share['size'] try: mount_path = self._mount_filesystem(share) except jsonrpc.NefException as create_error: try: payload = {'force': True} self.nef.filesystems.delete(clone_path, payload) except jsonrpc.NefException as delete_error: LOG.debug('Failed to delete share %(path)s: %(error)s', {'path': clone_path, 'error': delete_error}) raise create_error location = { 'path': mount_path, 'id': self._get_share_name(share) } return [location] def _remount_filesystem(self, clone_path): """Workaround for NEF bug: cloned share has offline NFS status""" self.nef.filesystems.unmount(clone_path) self.nef.filesystems.mount(clone_path) def _get_dataset_path(self, share): share_name = self._get_share_name(share) return posixpath.join(self.root_path, share_name) def _get_share_name(self, share): """Get share name with share name prefix.""" return ('%(prefix)s%(share_id)s' % { 'prefix': self.configuration.nexenta_share_name_prefix, 'share_id': share['share_id']}) def _get_snapshot_path(self, snapshot): """Return ZFS snapshot path for the snapshot.""" snapshot_id = ( snapshot['snapshot_id'] or snapshot['share_group_snapshot_id']) share = snapshot.get('share') or snapshot.get('share_instance') fs_path = self._get_dataset_path(share) return '%s@snapshot-%s' % (fs_path, snapshot_id) def delete_share(self, context, share, share_server=None): """Delete a share.""" LOG.debug('Deleting share: %s.', self._get_share_name(share)) share_path = self._get_dataset_path(share) delete_payload = {'force': True, 'snapshots': True} try: 
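            # NOTE: if the filesystem still has clones (shares created from
            # its snapshots), NEF reports EEXIST; the code below then promotes
            # the most recent clone and retries the delete.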
self.nef.filesystems.delete(share_path, delete_payload) except jsonrpc.NefException as error: if error.code != 'EEXIST': raise error snapshots_tree = {} snapshots_payload = {'parent': share_path, 'fields': 'path'} snapshots = self.nef.snapshots.list(snapshots_payload) for snapshot in snapshots: clones_payload = {'fields': 'clones,creationTxg'} data = self.nef.snapshots.get(snapshot['path'], clones_payload) if data['clones']: snapshots_tree[data['creationTxg']] = data['clones'][0] if snapshots_tree: clone_path = snapshots_tree[max(snapshots_tree)] self.nef.filesystems.promote(clone_path) self.nef.filesystems.delete(share_path, delete_payload) self.provisioned_capacity -= share['size'] def extend_share(self, share, new_size, share_server=None): """Extends a share.""" LOG.debug( 'Extending share: %(name)s to %(size)sG.', ( {'name': self._get_share_name(share), 'size': new_size})) self._set_quota(share, new_size) if not self.configuration.nexenta_thin_provisioning: self._set_reservation(share, new_size) self.provisioned_capacity += (new_size - share['size']) def shrink_share(self, share, new_size, share_server=None): """Shrinks size of existing share.""" LOG.debug( 'Shrinking share: %(name)s to %(size)sG.', { 'name': self._get_share_name(share), 'size': new_size}) share_path = self._get_dataset_path(share) share_data = self.nef.filesystems.get(share_path) used = share_data['bytesUsedBySelf'] / units.Gi if used > new_size: raise exception.ShareShrinkingPossibleDataLoss( share_id=self._get_share_name(share)) if not self.configuration.nexenta_thin_provisioning: self._set_reservation(share, new_size) self._set_quota(share, new_size) self.provisioned_capacity += (share['size'] - new_size) def create_snapshot(self, context, snapshot, share_server=None): """Create a snapshot.""" snapshot_path = self._get_snapshot_path(snapshot) LOG.debug('Creating snapshot: %s.', snapshot_path) payload = {'path': snapshot_path} self.nef.snapshots.create(payload) def delete_snapshot(self, context, snapshot, share_server=None): """Deletes a snapshot. :param snapshot: snapshot reference """ snapshot_path = self._get_snapshot_path(snapshot) LOG.debug('Deleting snapshot: %s.', snapshot_path) payload = {'defer': True} self.nef.snapshots.delete(snapshot_path, payload) def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server=None): """Reverts a share (in place) to the specified snapshot. Does not delete the share snapshot. The share and snapshot must both be 'available' for the restore to be attempted. The snapshot must be the most recent one taken by Manila; the API layer performs this check so the driver doesn't have to. The share must be reverted in place to the contents of the snapshot. Application admins should quiesce or otherwise prepare the application for the shared file system contents to change suddenly. :param context: Current context :param snapshot: The snapshot to be restored :param share_access_rules: List of all access rules for the affected share :param snapshot_access_rules: List of all access rules for the affected snapshot :param share_server: Optional -- Share server model or None """ snapshot_path = self._get_snapshot_path(snapshot).split('@')[1] LOG.debug('Reverting to snapshot: %s.', snapshot_path) share_path = self._get_dataset_path(snapshot['share']) payload = {'snapshot': snapshot_path} self.nef.filesystems.rollback(share_path, payload) def manage_existing(self, share, driver_options): """Brings an existing share under Manila management. 
If the provided share is not valid, then raise a ManageInvalidShare exception, specifying a reason for the failure. If the provided share is not in a state that can be managed, such as being replicated on the backend, the driver *MUST* raise ManageInvalidShare exception with an appropriate message. The share has a share_type, and the driver can inspect that and compare against the properties of the referenced backend share. If they are incompatible, raise a ManageExistingShareTypeMismatch, specifying a reason for the failure. :param share: Share model :param driver_options: Driver-specific options provided by admin. :return: share_update dictionary with required key 'size', which should contain size of the share. """ LOG.debug('Manage share %s.', self._get_share_name(share)) export_path = share['export_locations'][0]['path'] # check that filesystem with provided export exists. fs_path = export_path.split(':/')[1] fs_data = self.nef.filesystems.get(fs_path) if not fs_data: # wrong export path, raise exception. msg = _('Share %s does not exist on Nexenta Store appliance, ' 'cannot manage.') % export_path raise exception.NexentaException(msg) # get dataset properties. if fs_data['referencedQuotaSize']: size = (fs_data['referencedQuotaSize'] / units.Gi) + 1 else: size = fs_data['bytesReferenced'] / units.Gi + 1 # rename filesystem on appliance to correlate with manila ID. new_path = '%s/%s' % (self.root_path, self._get_share_name(share)) self.nef.filesystems.rename(fs_path, {'newPath': new_path}) # make sure quotas and reservations are correct. if not self.configuration.nexenta_thin_provisioning: self._set_reservation(share, size) self._set_quota(share, size) return {'size': size, 'export_locations': [{ 'path': '%s:/%s' % (self.nas_host, new_path) }]} def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share. Using access_rules list for both adding and deleting rules. :param context: The `context.RequestContext` object for the request :param share: Share that will have its access rules updated. :param access_rules: All access rules for given share. This list is enough to update the access rules for given share. :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. Not used by this driver. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. Not used by this driver. :param share_server: Data structure with share server information. Not used by this driver. 
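        Illustrative return value (an example, not from the original
        docstring): for an NFS share with a single rw IP rule the method
        returns {'<access_id>': {'state': 'active'}} and pushes the address
        to the backend via _update_nfs_access(); rules with a non-IP access
        type are reported as {'state': 'error'}.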
""" LOG.debug('Updating access to share %(id)s with following access ' 'rules: %(rules)s', { 'id': self._get_share_name(share), 'rules': [( rule.get('access_type'), rule.get('access_level'), rule.get('access_to')) for rule in access_rules]}) rw_list = [] ro_list = [] update_dict = {} if share['share_proto'] == 'NFS': for rule in access_rules: if rule['access_type'].lower() != 'ip': msg = _( 'Only IP access control type is supported for NFS.') LOG.warning(msg) update_dict[rule['access_id']] = { 'state': 'error', } else: update_dict[rule['access_id']] = { 'state': 'active', } if rule['access_level'] == common.ACCESS_LEVEL_RW: rw_list.append(rule['access_to']) else: ro_list.append(rule['access_to']) self._update_nfs_access(share, rw_list, ro_list) return update_dict def _update_nfs_access(self, share, rw_list, ro_list): # Define allowed security context types to be able to tell whether # the 'security_contexts' dict contains any rules at all context_types = {'none', 'root', 'readOnlyList', 'readWriteList'} security_contexts = {'securityModes': ['sys']} def add_sc(addr_list, sc_type): if sc_type not in context_types: return rule_list = [] for addr in addr_list: address_mask = addr.strip().split('/', 1) address = address_mask[0] ls = {"allow": True, "etype": "fqdn", "entity": address} if len(address_mask) == 2: mask = int(address_mask[1]) if 0 <= mask < 31: ls['mask'] = mask ls['etype'] = 'network' rule_list.append(ls) # Context type with no addresses will result in an API error if rule_list: security_contexts[sc_type] = rule_list add_sc(rw_list, 'readWriteList') add_sc(ro_list, 'readOnlyList') payload = {'securityContexts': [security_contexts]} share_path = self._get_dataset_path(share) if self.nef.nfs.list({'filesystem': share_path}): if not set(security_contexts.keys()) & context_types: self.nef.nfs.delete(share_path) else: self.nef.nfs.set(share_path, payload) else: payload['filesystem'] = share_path self.nef.nfs.create(payload) payload = { 'flags': ['file_inherit', 'dir_inherit'], 'permissions': ['full_set'], 'principal': 'everyone@', 'type': 'allow' } self.nef.filesystems.acl(share_path, payload) def _set_quota(self, share, new_size): quota = int(new_size * units.Gi * ZFS_MULTIPLIER) share_path = self._get_dataset_path(share) payload = {'referencedQuotaSize': quota} LOG.debug('Setting quota for dataset %s.', share_path) self.nef.filesystems.set(share_path, payload) def _set_reservation(self, share, new_size): res_size = int(new_size * units.Gi * ZFS_MULTIPLIER) share_path = self._get_dataset_path(share) payload = {'referencedReservationSize': res_size} self.nef.filesystems.set(share_path, payload) def _update_share_stats(self, data=None): super(NexentaNasDriver, self)._update_share_stats() total, free, allocated = self._get_capacity_info() compression = not self.dataset_compression == 'off' data = { 'vendor_name': 'Nexenta', 'storage_protocol': self.storage_protocol, 'share_backend_name': self.share_backend_name, 'nfs_mount_point_base': self.nfs_mount_point_base, 'driver_version': VERSION, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': True, 'pools': [{ 'pool_name': self.pool_name, 'compression': compression, 'total_capacity_gb': total, 'free_capacity_gb': free, 'reserved_percentage': ( self.configuration.reserved_share_percentage), 'max_over_subscription_ratio': ( self.configuration.safe_get( 'max_over_subscription_ratio')), 'thin_provisioning': self.configuration.nexenta_thin_provisioning, 'provisioned_capacity_gb': 
self.provisioned_capacity, }], } self._stats.update(data) def _get_capacity_info(self): """Calculate available space on the NFS share.""" data = self.nef.filesystems.get(self.root_path) free = int(utils.bytes_to_gb(data['bytesAvailable'])) allocated = int(utils.bytes_to_gb(data['bytesUsed'])) total = free + allocated return total, free, allocated manila-10.0.0/manila/share/drivers/nexenta/options.py0000664000175000017500000001257413656750227022631 0ustar zuulzuul00000000000000# Copyright 2019 Nexenta by DDN, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`nexenta.options` -- Contains configuration options for Nexenta drivers. ============================================================================= .. automodule:: nexenta.options """ from oslo_config import cfg nexenta_connection_opts = [ cfg.ListOpt('nexenta_rest_addresses', help='One or more comma delimited IP addresses for management ' 'communication with NexentaStor appliance.'), cfg.IntOpt('nexenta_rest_port', default=8443, help='Port to connect to Nexenta REST API server.'), cfg.StrOpt('nexenta_rest_protocol', default='auto', choices=['http', 'https', 'auto'], help='Use http or https for REST connection (default auto).'), cfg.BoolOpt('nexenta_use_https', default=True, help='Use HTTP secure protocol for NexentaStor ' 'management REST API connections'), cfg.StrOpt('nexenta_user', default='admin', help='User name to connect to Nexenta SA.', required=True), cfg.StrOpt('nexenta_password', help='Password to connect to Nexenta SA.', required=True, secret=True), cfg.StrOpt('nexenta_volume', default='volume1', help='Volume name on NexentaStor.'), cfg.StrOpt('nexenta_pool', default='pool1', required=True, help='Pool name on NexentaStor.'), cfg.BoolOpt('nexenta_nfs', default=True, help='Defines whether share over NFS is enabled.'), cfg.BoolOpt('nexenta_ssl_cert_verify', default=False, help='Defines whether the driver should check ssl cert.'), cfg.FloatOpt('nexenta_rest_connect_timeout', default=30, help='Specifies the time limit (in seconds), within ' 'which the connection to NexentaStor management ' 'REST API server must be established'), cfg.FloatOpt('nexenta_rest_read_timeout', default=300, help='Specifies the time limit (in seconds), ' 'within which NexentaStor management ' 'REST API server must send a response'), cfg.FloatOpt('nexenta_rest_backoff_factor', default=1, help='Specifies the backoff factor to apply ' 'between connection attempts to NexentaStor ' 'management REST API server'), cfg.IntOpt('nexenta_rest_retry_count', default=5, help='Specifies the number of times to repeat NexentaStor ' 'management REST API call in case of connection errors ' 'and NexentaStor appliance EBUSY or ENOENT errors'), ] nexenta_nfs_opts = [ cfg.HostAddressOpt('nexenta_nas_host', deprecated_name='nexenta_host', help='Data IP address of Nexenta storage appliance.', required=True), cfg.StrOpt('nexenta_mount_point_base', default='$state_path/mnt', help='Base directory that contains NFS share mount points.'), ] nexenta_dataset_opts = 
[ cfg.StrOpt('nexenta_nfs_share', default='nfs_share', help='Parent filesystem where all the shares will be created. ' 'This parameter is only used by NexentaStor4 driver.'), cfg.StrOpt('nexenta_share_name_prefix', help='Nexenta share name prefix.', default='share-'), cfg.StrOpt('nexenta_folder', default='folder', required=True, help='Parent folder on NexentaStor.'), cfg.StrOpt('nexenta_dataset_compression', default='on', choices=['on', 'off', 'gzip', 'gzip-1', 'gzip-2', 'gzip-3', 'gzip-4', 'gzip-5', 'gzip-6', 'gzip-7', 'gzip-8', 'gzip-9', 'lzjb', 'zle', 'lz4'], help='Compression value for new ZFS folders.'), cfg.StrOpt('nexenta_dataset_dedupe', default='off', choices=['on', 'off', 'sha256', 'verify', 'sha256, verify'], help='Deduplication value for new ZFS folders. ' 'Only used by NexentaStor4 driver.'), cfg.BoolOpt('nexenta_thin_provisioning', default=True, help=('If True shares will not be space guaranteed and ' 'overprovisioning will be enabled.')), cfg.IntOpt('nexenta_dataset_record_size', default=131072, help='Specifies a suggested block size in for files in a file ' 'system. (bytes)'), ] manila-10.0.0/manila/share/drivers/nexenta/ns4/0000775000175000017500000000000013656750362021257 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/nexenta/ns4/__init__.py0000664000175000017500000000000013656750227023356 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/nexenta/ns4/nexenta_nfs_helper.py0000664000175000017500000002210013656750227025473 0ustar zuulzuul00000000000000# Copyright 2016 Nexenta Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
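# Illustrative configuration sketch (assumed values, not part of the original
# tree): a manila.conf backend section combining the options registered in
# manila/share/drivers/nexenta/options.py, shown here for the NS5 driver,
# might look like
#
#     [nexentastor5]
#     share_driver = manila.share.drivers.nexenta.ns5.nexenta_nas.NexentaNasDriver
#     nexenta_nas_host = 10.0.0.5
#     nexenta_rest_addresses = 10.0.0.5,10.0.0.6
#     nexenta_user = admin
#     nexenta_password = secret
#     nexenta_pool = pool1
#     nexenta_folder = manila
#
# All addresses and credentials above are placeholders; option names and
# defaults are defined in options.py.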
from oslo_log import log from oslo_utils import excutils from manila.common import constants as common from manila import exception from manila.i18n import _ from manila.share.drivers.nexenta.ns4 import jsonrpc from manila.share.drivers.nexenta import utils LOG = log.getLogger(__name__) NOT_EXIST = 'does not exist' DEP_CLONES = 'has dependent clones' class NFSHelper(object): def __init__(self, configuration): self.configuration = configuration self.nfs_mount_point_base = ( self.configuration.nexenta_mount_point_base) self.dataset_compression = ( self.configuration.nexenta_dataset_compression) self.dataset_dedupe = self.configuration.nexenta_dataset_dedupe self.nms = None self.nms_protocol = self.configuration.nexenta_rest_protocol self.nms_host = self.configuration.nexenta_host self.volume = self.configuration.nexenta_volume self.share = self.configuration.nexenta_nfs_share self.nms_port = self.configuration.nexenta_rest_port self.nms_user = self.configuration.nexenta_user self.nfs = self.configuration.nexenta_nfs self.nms_password = self.configuration.nexenta_password self.storage_protocol = 'NFS' def do_setup(self): if self.nms_protocol == 'auto': protocol, auto = 'http', True else: protocol, auto = self.nms_protocol, False path = '/rest/nms/' self.nms = jsonrpc.NexentaJSONProxy( protocol, self.nms_host, self.nms_port, path, self.nms_user, self.nms_password, auto=auto) def check_for_setup_error(self): if not self.nms.volume.object_exists(self.volume): raise exception.NexentaException(reason=_( "Volume %s does not exist in NexentaStor appliance.") % self.volume) folder = '%s/%s' % (self.volume, self.share) create_folder_props = { 'recordsize': '4K', 'quota': 'none', 'compression': self.dataset_compression, } if not self.nms.folder.object_exists(folder): self.nms.folder.create_with_props( self.volume, self.share, create_folder_props) def create_filesystem(self, share): """Create file system.""" create_folder_props = { 'recordsize': '4K', 'quota': '%sG' % share['size'], 'compression': self.dataset_compression, } if not self.configuration.nexenta_thin_provisioning: create_folder_props['reservation'] = '%sG' % share['size'] parent_path = '%s/%s' % (self.volume, self.share) self.nms.folder.create_with_props( parent_path, share['name'], create_folder_props) path = self._get_share_path(share['name']) return [self._get_location_path(path, share['share_proto'])] def set_quota(self, share_name, new_size): if self.configuration.nexenta_thin_provisioning: quota = '%sG' % new_size self.nms.folder.set_child_prop( self._get_share_path(share_name), 'quota', quota) def _get_location_path(self, path, protocol): location = None if protocol == 'NFS': location = {'path': '%s:/volumes/%s' % (self.nms_host, path)} else: raise exception.InvalidShare( reason=(_('Only NFS protocol is currently supported.'))) return location def delete_share(self, share_name): """Delete share.""" folder = self._get_share_path(share_name) try: self.nms.folder.destroy(folder.strip(), '-r') except exception.NexentaException as e: with excutils.save_and_reraise_exception() as exc: if NOT_EXIST in e.args[0]: LOG.info('Folder %s does not exist, it was ' 'already deleted.', folder) exc.reraise = False def _get_share_path(self, share_name): return '%s/%s/%s' % (self.volume, self.share, share_name) def _get_snapshot_name(self, snapshot_name): return 'snapshot-%s' % snapshot_name def create_snapshot(self, share_name, snapshot_name): """Create a snapshot.""" folder = self._get_share_path(share_name) self.nms.folder.create_snapshot(folder, 
snapshot_name, '-r') model_update = {'provider_location': '%s@%s' % (folder, snapshot_name)} return model_update def delete_snapshot(self, share_name, snapshot_name): """Deletes snapshot.""" try: self.nms.snapshot.destroy('%s@%s' % ( self._get_share_path(share_name), snapshot_name), '') except exception.NexentaException as e: with excutils.save_and_reraise_exception() as exc: if NOT_EXIST in e.args[0]: LOG.info('Snapshot %(folder)s@%(snapshot)s does not ' 'exist, it was already deleted.', { 'folder': share_name, 'snapshot': snapshot_name, }) exc.reraise = False elif DEP_CLONES in e.args[0]: LOG.info( 'Snapshot %(folder)s@%(snapshot)s has dependent ' 'clones, it will be deleted later.', { 'folder': share_name, 'snapshot': snapshot_name }) exc.reraise = False def create_share_from_snapshot(self, share, snapshot): snapshot_name = '%s/%s/%s@%s' % ( self.volume, self.share, snapshot['share_name'], snapshot['name']) self.nms.folder.clone( snapshot_name, '%s/%s/%s' % (self.volume, self.share, share['name'])) path = self._get_share_path(share['name']) return [self._get_location_path(path, share['share_proto'])] def update_access(self, share_name, access_rules): """Update access to the share.""" rw_list = [] ro_list = [] for rule in access_rules: if rule['access_type'].lower() != 'ip': msg = _('Only IP access type is supported.') raise exception.InvalidShareAccess(reason=msg) else: if rule['access_level'] == common.ACCESS_LEVEL_RW: rw_list.append(rule['access_to']) else: ro_list.append(rule['access_to']) share_opts = { 'auth_type': 'none', 'read_write': ':'.join(rw_list), 'read_only': ':'.join(ro_list), 'recursive': 'true', 'anonymous_rw': 'true', 'anonymous': 'true', 'extra_options': 'anon=0', } self.nms.netstorsvc.share_folder( 'svc:/network/nfs/server:default', self._get_share_path(share_name), share_opts) def _get_capacity_info(self): """Calculate available space on the NFS share.""" folder_props = self.nms.folder.get_child_props( '%s/%s' % (self.volume, self.share), 'used|available') free = utils.str2gib_size(folder_props['available']) allocated = utils.str2gib_size(folder_props['used']) return free + allocated, free, allocated def update_share_stats(self): """Update driver capabilities. No way of tracking provisioned capacity on this appliance, not returning any to let the scheduler estimate it. """ total, free, allocated = self._get_capacity_info() compression = not self.dataset_compression == 'off' dedupe = not self.dataset_dedupe == 'off' return { 'vendor_name': 'Nexenta', 'storage_protocol': self.storage_protocol, 'nfs_mount_point_base': self.nfs_mount_point_base, 'pools': [{ 'pool_name': self.volume, 'total_capacity_gb': total, 'free_capacity_gb': free, 'reserved_percentage': self.configuration.reserved_share_percentage, 'compression': compression, 'dedupe': dedupe, 'max_over_subscription_ratio': ( self.configuration.safe_get( 'max_over_subscription_ratio')), 'thin_provisioning': self.configuration.nexenta_thin_provisioning, }], } manila-10.0.0/manila/share/drivers/nexenta/ns4/jsonrpc.py0000664000175000017500000000570713656750227023320 0ustar zuulzuul00000000000000# Copyright 2016 Nexenta Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`nexenta.jsonrpc` -- Nexenta-specific JSON RPC client ===================================================================== .. automodule:: nexenta.jsonrpc """ import base64 import json import requests from oslo_log import log from oslo_serialization import jsonutils from manila import exception from manila import utils LOG = log.getLogger(__name__) class NexentaJSONProxy(object): retry_exc_tuple = (requests.exceptions.ConnectionError,) def __init__(self, scheme, host, port, path, user, password, auto=False, obj=None, method=None): self.scheme = scheme.lower() self.host = host self.port = port self.path = path self.user = user self.password = password self.auto = auto self.obj = obj self.method = method def __getattr__(self, name): if not self.obj: obj, method = name, None elif not self.method: obj, method = self.obj, name else: obj, method = '%s.%s' % (self.obj, self.method), name return NexentaJSONProxy(self.scheme, self.host, self.port, self.path, self.user, self.password, self.auto, obj, method) @property def url(self): return '%s://%s:%s%s' % (self.scheme, self.host, self.port, self.path) def __hash__(self): return self.url.__hash__() def __repr__(self): return 'NMS proxy: %s' % self.url @utils.retry(retry_exc_tuple, retries=6) def __call__(self, *args): data = jsonutils.dumps({ 'object': self.obj, 'method': self.method, 'params': args, }) auth = base64.b64encode( ('%s:%s' % (self.user, self.password)).encode('utf-8')) headers = { 'Content-Type': 'application/json', 'Authorization': 'Basic %s' % auth, } LOG.debug('Sending JSON data: %s', data) r = requests.post(self.url, data=data, headers=headers) response = json.loads(r.content) if r.content else None LOG.debug('Got response: %s', response) if response.get('error') is not None: message = response['error'].get('message', '') raise exception.NexentaException(reason=message) return response.get('result') manila-10.0.0/manila/share/drivers/nexenta/ns4/nexenta_nas.py0000664000175000017500000001275213656750227024143 0ustar zuulzuul00000000000000# Copyright 2016 Nexenta Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.nexenta.ns4 import nexenta_nfs_helper from manila.share.drivers.nexenta import options VERSION = '1.0' LOG = log.getLogger(__name__) class NexentaNasDriver(driver.ShareDriver): """Nexenta Share Driver. Executes commands relating to Shares. API version history: 1.0 - Initial version. 
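    For orientation (an illustrative summary, not in the original docstring):
    all appliance operations are delegated to nexenta_nfs_helper.NFSHelper,
    which talks to the NMS JSON-RPC API through NexentaJSONProxy, e.g.
    nms.folder.create_with_props(vol, share, props) is serialized to
    {"object": "folder", "method": "create_with_props", "params": [...]}.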
""" def __init__(self, *args, **kwargs): """Do initialization.""" LOG.debug('Initializing Nexenta driver.') super(NexentaNasDriver, self).__init__(False, *args, **kwargs) self.configuration = kwargs.get('configuration') if self.configuration: self.configuration.append_config_values( options.nexenta_connection_opts) self.configuration.append_config_values( options.nexenta_nfs_opts) self.configuration.append_config_values( options.nexenta_dataset_opts) self.helper = nexenta_nfs_helper.NFSHelper(self.configuration) else: raise exception.BadConfigurationException( reason=_('Nexenta configuration missing.')) @property def share_backend_name(self): if not hasattr(self, '_share_backend_name'): self._share_backend_name = None if self.configuration: self._share_backend_name = self.configuration.safe_get( 'share_backend_name') if not self._share_backend_name: self._share_backend_name = 'NexentaStor4' return self._share_backend_name def do_setup(self, context): """Any initialization the Nexenta NAS driver does while starting.""" LOG.debug('Setting up the NexentaStor4 plugin.') return self.helper.do_setup() def check_for_setup_error(self): """Returns an error if prerequisites aren't met.""" self.helper.check_for_setup_error() def create_share(self, context, share, share_server=None): """Create a share.""" LOG.debug('Creating share %s.', share['name']) return self.helper.create_filesystem(share) def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create share from snapshot.""" LOG.debug('Creating share from snapshot %s.', snapshot['name']) return self.helper.create_share_from_snapshot(share, snapshot) def delete_share(self, context, share, share_server=None): """Delete a share.""" LOG.debug('Deleting share %s.', share['name']) self.helper.delete_share(share['name']) def extend_share(self, share, new_size, share_server=None): """Extends a share.""" LOG.debug('Extending share %(name)s to %(size)sG.', { 'name': share['name'], 'size': new_size}) self.helper.set_quota(share['name'], new_size) def create_snapshot(self, context, snapshot, share_server=None): """Create a snapshot.""" LOG.debug('Creating a snapshot of share %s.', snapshot['share_name']) snap_id = self.helper.create_snapshot( snapshot['share_name'], snapshot['name']) LOG.info('Created snapshot %s.', snap_id) def delete_snapshot(self, context, snapshot, share_server=None): """Delete a snapshot.""" LOG.debug('Deleting snapshot %(shr_name)s@%(snap_name)s.', { 'shr_name': snapshot['share_name'], 'snap_name': snapshot['name']}) self.helper.delete_snapshot(snapshot['share_name'], snapshot['name']) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share. :param context: The `context.RequestContext` object for the request :param share: Share that will have its access rules updated. :param access_rules: All access rules for given share. This list is enough to update the access rules for given share. :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. Not used by this driver. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. Not used by this driver. :param share_server: Data structure with share server information. Not used by this driver. 
""" self.helper.update_access(share['name'], access_rules) def _update_share_stats(self, data=None): super(NexentaNasDriver, self)._update_share_stats() data = self.helper.update_share_stats() data['driver_version'] = VERSION data['share_backend_name'] = self.share_backend_name self._stats.update(data) manila-10.0.0/manila/share/drivers/lvm.py0000664000175000017500000005424613656750227020274 0ustar zuulzuul00000000000000# Copyright 2012 NetApp # Copyright 2016 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ LVM Driver for shares. """ import ipaddress import math import os import re from oslo_config import cfg from oslo_log import log from oslo_utils import importutils from oslo_utils import timeutils import six from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers import generic from manila.share import utils LOG = log.getLogger(__name__) share_opts = [ cfg.StrOpt('lvm_share_export_root', default='$state_path/mnt', help='Base folder where exported shares are located.'), cfg.ListOpt('lvm_share_export_ips', help='List of IPs to export shares belonging to the LVM ' 'storage driver.'), cfg.IntOpt('lvm_share_mirrors', default=0, help='If set, create LVMs with multiple mirrors. Note that ' 'this requires lvm_mirrors + 2 PVs with available space.'), cfg.StrOpt('lvm_share_volume_group', default='lvm-shares', help='Name for the VG that will contain exported shares.'), cfg.ListOpt('lvm_share_helpers', default=[ 'CIFS=manila.share.drivers.helpers.CIFSHelperUserAccess', 'NFS=manila.share.drivers.helpers.NFSHelper', ], help='Specify list of share export helpers.'), ] CONF = cfg.CONF CONF.register_opts(share_opts) CONF.register_opts(generic.share_opts) class LVMMixin(driver.ExecuteMixin): def check_for_setup_error(self): """Returns an error if prerequisites aren't met.""" out, err = self._execute('vgs', '--noheadings', '-o', 'name', run_as_root=True) volume_groups = out.split() if self.configuration.lvm_share_volume_group not in volume_groups: msg = (_("Share volume group %s doesn't exist.") % self.configuration.lvm_share_volume_group) raise exception.InvalidParameterValue(err=msg) if not self.configuration.lvm_share_export_ips: msg = _("The option lvm_share_export_ips must be specified.") raise exception.InvalidParameterValue(err=msg) def _allocate_container(self, share): sizestr = '%sG' % share['size'] cmd = ['lvcreate', '-L', sizestr, '-n', share['name'], self.configuration.lvm_share_volume_group] if self.configuration.lvm_share_mirrors: cmd += ['-m', self.configuration.lvm_share_mirrors, '--nosync'] terras = int(sizestr[:-1]) / 1024.0 if terras >= 1.5: rsize = int(2 ** math.ceil(math.log(terras) / math.log(2))) # NOTE(vish): Next power of two for region size. 
See: # http://red.ht/U2BPOD cmd += ['-R', six.text_type(rsize)] self._try_execute(*cmd, run_as_root=True) device_name = self._get_local_path(share) self._execute('mkfs.%s' % self.configuration.share_volume_fstype, device_name, run_as_root=True) def _extend_container(self, share, device_name, size): cmd = ['lvextend', '-L', '%sG' % size, '-n', device_name] self._try_execute(*cmd, run_as_root=True) def _deallocate_container(self, share_name): """Deletes a logical volume for share.""" try: self._try_execute('lvremove', '-f', "%s/%s" % (self.configuration.lvm_share_volume_group, share_name), run_as_root=True) except exception.ProcessExecutionError as exc: if "not found" not in exc.stderr: LOG.exception("Error deleting volume") raise LOG.warning("Volume not found: %s", exc.stderr) def _create_snapshot(self, context, snapshot): """Creates a snapshot.""" orig_lv_name = "%s/%s" % (self.configuration.lvm_share_volume_group, snapshot['share_name']) self._try_execute( 'lvcreate', '-L', '%sG' % snapshot['share']['size'], '--name', snapshot['name'], '--snapshot', orig_lv_name, run_as_root=True) self._set_random_uuid_to_device(snapshot) def _set_random_uuid_to_device(self, share_or_snapshot): # NOTE(vponomaryov): 'tune2fs' is required to make # filesystem of share created from snapshot have # unique ID, in case of LVM volumes, by default, # it will have the same UUID as source volume. Closes #1645751 # NOTE(gouthamr): Executing tune2fs -U only works on # a recently checked filesystem. # See: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=857336 device_path = self._get_local_path(share_or_snapshot) self._execute('e2fsck', '-y', '-f', device_path, run_as_root=True) self._execute( 'tune2fs', '-U', 'random', device_path, run_as_root=True, ) def create_snapshot(self, context, snapshot, share_server=None): self._create_snapshot(context, snapshot) def delete_snapshot(self, context, snapshot, share_server=None): """Deletes a snapshot.""" self._deallocate_container(snapshot['name']) class LVMShareDriver(LVMMixin, driver.ShareDriver): """Executes commands relating to Shares.""" def __init__(self, *args, **kwargs): """Do initialization.""" super(LVMShareDriver, self).__init__([False], *args, **kwargs) self.configuration.append_config_values(share_opts) self.configuration.append_config_values(generic.share_opts) self.configuration.share_mount_path = ( self.configuration.lvm_share_export_root) self._helpers = None self.configured_ip_version = None self.backend_name = self.configuration.safe_get( 'share_backend_name') or 'LVM' # Set of parameters used for compatibility with # Generic driver's helpers. 
self.share_server = { 'instance_id': self.backend_name, 'lock_name': 'manila_lvm', } self.share_server['public_addresses'] = ( self.configuration.lvm_share_export_ips ) self.ipv6_implemented = True def _ssh_exec_as_root(self, server, command, check_exit_code=True): kwargs = {} if 'sudo' in command: kwargs['run_as_root'] = True command.remove('sudo') kwargs['check_exit_code'] = check_exit_code return self._execute(*command, **kwargs) def do_setup(self, context): """Any initialization the volume driver does while starting.""" super(LVMShareDriver, self).do_setup(context) self._setup_helpers() def _setup_helpers(self): """Initializes protocol-specific NAS drivers.""" self._helpers = {} for helper_str in self.configuration.lvm_share_helpers: share_proto, _, import_str = helper_str.partition('=') helper = importutils.import_class(import_str) # TODO(rushiagr): better way to handle configuration # instead of just passing to the helper self._helpers[share_proto.upper()] = helper( self._execute, self._ssh_exec_as_root, self.configuration) def _get_local_path(self, share): # The escape characters are expected by the device mapper. escaped_group = ( self.configuration.lvm_share_volume_group.replace('-', '--')) escaped_name = share['name'].replace('-', '--') return "/dev/mapper/%s-%s" % (escaped_group, escaped_name) def _update_share_stats(self): """Retrieve stats info from share volume group.""" data = { 'share_backend_name': self.backend_name, 'storage_protocol': 'NFS_CIFS', 'reserved_percentage': self.configuration.reserved_share_percentage, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': True, 'mount_snapshot_support': True, 'driver_name': 'LVMShareDriver', 'pools': self.get_share_server_pools(), } super(LVMShareDriver, self)._update_share_stats(data) def get_share_server_pools(self, share_server=None): out, err = self._execute('vgs', self.configuration.lvm_share_volume_group, '--rows', '--units', 'g', run_as_root=True) total_size = re.findall(r"VSize\s[0-9.]+g", out)[0][6:-1] free_size = re.findall(r"VFree\s[0-9.]+g", out)[0][6:-1] return [{ 'pool_name': 'lvm-single-pool', 'total_capacity_gb': float(total_size), 'free_capacity_gb': float(free_size), 'reserved_percentage': 0, }, ] def create_share(self, context, share, share_server=None): self._allocate_container(share) # create file system device_name = self._get_local_path(share) location = self._get_helper(share).create_exports( self.share_server, share['name']) self._mount_device(share, device_name) return location def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create share from snapshot.""" self._allocate_container(share) snapshot_device_name = self._get_local_path(snapshot) share_device_name = self._get_local_path(share) self._set_random_uuid_to_device(share) self._copy_volume( snapshot_device_name, share_device_name, share['size']) location = self._get_helper(share).create_exports( self.share_server, share['name']) self._mount_device(share, share_device_name) return location def delete_share(self, context, share, share_server=None): self._unmount_device(share) self._delete_share(context, share) self._deallocate_container(share['name']) def _unmount_device(self, share_or_snapshot): """Unmount the filesystem of a share or snapshot LV.""" mount_path = self._get_mount_path(share_or_snapshot) if os.path.exists(mount_path): # umount, may be busy try: self._execute('umount', '-f', mount_path, run_as_root=True) except 
exception.ProcessExecutionError as exc: if 'device is busy' in six.text_type(exc): raise exception.ShareBusyException( reason=share_or_snapshot['name']) else: LOG.error('Unable to umount: %s', exc) raise # remove dir self._execute('rmdir', mount_path, run_as_root=True) def ensure_shares(self, context, shares): updates = {} for share in shares: updates[share['id']] = { 'export_locations': self.ensure_share(context, share)} return updates def ensure_share(self, ctx, share, share_server=None): """Ensure that storage are mounted and exported.""" device_name = self._get_local_path(share) self._mount_device(share, device_name) return self._get_helper(share).create_exports( self.share_server, share['name'], recreate=True) def _delete_share(self, ctx, share): """Delete a share.""" try: self._get_helper(share).remove_exports( self.share_server, share['name']) except exception.ProcessExecutionError: LOG.warning("Can't remove share %r", share['id']) except exception.InvalidShare as exc: LOG.warning(exc) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share. This driver has two different behaviors according to parameters: 1. Recovery after error - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' shall be empty. Previously existing access rules are cleared and then added back according to 'access_rules'. 2. Adding/Deleting of several access rules - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' contain rules which should be added/deleted. Rules in 'access_rules' are ignored and only rules from 'add_rules' and 'delete_rules' are applied. :param context: Current context :param share: Share model with share data. :param access_rules: All access rules for given share :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. 
:param share_server: None or Share server model """ self._get_helper(share).update_access(self.share_server, share['name'], access_rules, add_rules=add_rules, delete_rules=delete_rules) def _get_helper(self, share): if share['share_proto'].lower().startswith('nfs'): return self._helpers['NFS'] elif share['share_proto'].lower().startswith('cifs'): return self._helpers['CIFS'] else: raise exception.InvalidShare(reason='Wrong share protocol') def _mount_device(self, share_or_snapshot, device_name): """Mount LV for share or snapshot and ignore if already mounted.""" mount_path = self._get_mount_path(share_or_snapshot) self._execute('mkdir', '-p', mount_path) try: self._execute('mount', device_name, mount_path, run_as_root=True, check_exit_code=True) self._execute('chmod', '777', mount_path, run_as_root=True, check_exit_code=True) except exception.ProcessExecutionError: out, err = self._execute('mount', '-l', run_as_root=True) if device_name in out: LOG.warning("%s is already mounted", device_name) else: raise return mount_path def _get_mount_path(self, share_or_snapshot): """Returns path where share or snapshot is mounted.""" return os.path.join(self.configuration.share_mount_path, share_or_snapshot['name']) def _copy_volume(self, srcstr, deststr, size_in_g): # Use O_DIRECT to avoid thrashing the system buffer cache extra_flags = ['iflag=direct', 'oflag=direct'] # Check whether O_DIRECT is supported try: self._execute('dd', 'count=0', 'if=%s' % srcstr, 'of=%s' % deststr, *extra_flags, run_as_root=True) except exception.ProcessExecutionError: extra_flags = [] # Perform the copy self._execute('dd', 'if=%s' % srcstr, 'of=%s' % deststr, 'count=%d' % (size_in_g * 1024), 'bs=1M', *extra_flags, run_as_root=True) def extend_share(self, share, new_size, share_server=None): device_name = self._get_local_path(share) self._extend_container(share, device_name, new_size) self._execute('resize2fs', device_name, run_as_root=True) def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server=None): share = snapshot['share'] # Temporarily remove all access rules self._get_helper(share).update_access(self.share_server, snapshot['name'], [], [], []) self._get_helper(share).update_access(self.share_server, share['name'], [], [], []) # Unmount the snapshot filesystem self._unmount_device(snapshot) # Unmount the share filesystem self._unmount_device(share) # Merge the snapshot LV back into the share, reverting it snap_lv_name = "%s/%s" % (self.configuration.lvm_share_volume_group, snapshot['name']) self._execute('lvconvert', '--merge', snap_lv_name, run_as_root=True) # Now recreate the snapshot that was destroyed by the merge self._create_snapshot(context, snapshot) # At this point we can mount the share again device_name = self._get_local_path(share) self._mount_device(share, device_name) # Also remount the snapshot device_name = self._get_local_path(snapshot) self._mount_device(snapshot, device_name) # Lastly we add all the access rules back self._get_helper(share).update_access(self.share_server, share['name'], share_access_rules, [], []) snapshot_access_rules, __, __ = utils.change_rules_to_readonly( snapshot_access_rules, [], []) self._get_helper(share).update_access(self.share_server, snapshot['name'], snapshot_access_rules, [], []) def create_snapshot(self, context, snapshot, share_server=None): self._create_snapshot(context, snapshot) device_name = self._get_local_path(snapshot) self._mount_device(snapshot, device_name) helper = self._get_helper(snapshot['share']) 
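        # Illustrative note (not from the original source): the protocol
        # helper's create_exports() below is expected to return export
        # location mappings -- for NFS, entries whose 'path' looks like
        # '<lvm_share_export_ip>:<mount path of the snapshot>' -- the same
        # shape this driver returns from create_share().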
exports = helper.create_exports(self.share_server, snapshot['name']) return {'export_locations': exports} def delete_snapshot(self, context, snapshot, share_server=None): self._unmount_device(snapshot) super(LVMShareDriver, self).delete_snapshot(context, snapshot, share_server) def get_configured_ip_versions(self): if self.configured_ip_version is None: try: self.configured_ip_version = [] for ip in self.configuration.lvm_share_export_ips: self.configured_ip_version.append( ipaddress.ip_address(six.text_type(ip)).version) except Exception: message = (_("Invalid 'lvm_share_export_ips' option supplied " "%s.") % self.configuration.lvm_share_export_ips) raise exception.InvalidInput(reason=message) return self.configured_ip_version def snapshot_update_access(self, context, snapshot, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given snapshot. This driver has two different behaviors according to parameters: 1. Recovery after error - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' shall be empty. Previously existing access rules are cleared and then added back according to 'access_rules'. 2. Adding/Deleting of several access rules - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' contain rules which should be added/deleted. Rules in 'access_rules' are ignored and only rules from 'add_rules' and 'delete_rules' are applied. :param context: Current context :param snapshot: Snapshot model with snapshot data. :param access_rules: All access rules for given snapshot :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. :param share_server: None or Share server model """ helper = self._get_helper(snapshot['share']) access_rules, add_rules, delete_rules = utils.change_rules_to_readonly( access_rules, add_rules, delete_rules) helper.update_access(self.share_server, snapshot['name'], access_rules, add_rules=add_rules, delete_rules=delete_rules) def update_share_usage_size(self, context, shares): updated_shares = [] out, err = self._execute( 'df', '-l', '--output=target,used', '--block-size=g') gathered_at = timeutils.utcnow() for share in shares: try: mount_path = self._get_mount_path(share) if os.path.exists(mount_path): used_size = (re.findall( mount_path + r"\s*[0-9.]+G", out)[0]. split(' ')[-1][:-1]) updated_shares.append({'id': share['id'], 'used_size': used_size, 'gathered_at': gathered_at}) else: raise exception.NotFound( _("Share mount path %s could not be " "found.") % mount_path) except Exception: LOG.exception("Failed to gather 'used_size' for share %s.", share['id']) return updated_shares def get_backend_info(self, context): return { 'export_ips': ','.join(self.share_server['public_addresses']), 'db_version': utils.get_recent_db_migration_id(), } manila-10.0.0/manila/share/drivers/qnap/0000775000175000017500000000000013656750362020050 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/qnap/__init__.py0000664000175000017500000000000013656750227022147 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/qnap/api.py0000664000175000017500000006461713656750227021211 0ustar zuulzuul00000000000000# Copyright (c) 2016 QNAP Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ API for QNAP Storage. """ import base64 import functools import re import ssl try: import xml.etree.cElementTree as ET except ImportError: import xml.etree.ElementTree as ET from oslo_log import log as logging import six from six.moves import http_client from six.moves import urllib from manila import exception from manila.i18n import _ from manila import utils LOG = logging.getLogger(__name__) MSG_SESSION_EXPIRED = _("Session ID expired") MSG_UNEXPECT_RESP = _("Unexpected response from QNAP API") def _connection_checker(func): """Decorator to check session has expired or not.""" @utils.retry(exception=exception.ShareBackendException, retries=5) @functools.wraps(func) def inner_connection_checker(self, *args, **kwargs): LOG.debug('in _connection_checker') pattern = re.compile(r".*Session ID expired.$") try: return func(self, *args, **kwargs) except exception.ShareBackendException as e: matches = pattern.match(six.text_type(e)) if matches: LOG.debug('Session might have expired.' ' Trying to relogin') self._login() raise return inner_connection_checker class QnapAPIExecutor(object): """Makes QNAP API calls for ES NAS.""" def __init__(self, *args, **kwargs): self.sid = None self.username = kwargs['username'] self.password = kwargs['password'] self.ip, self.port, self.ssl = ( self._parse_management_url(kwargs['management_url'])) self._login() def _parse_management_url(self, management_url): pattern = re.compile(r"(http|https)\:\/\/(\S+)\:(\d+)") matches = pattern.match(management_url) if matches.group(1) == 'http': management_ssl = False else: management_ssl = True management_ip = matches.group(2) management_port = matches.group(3) return management_ip, management_port, management_ssl def _prepare_connection(self, isSSL, ip, port): if isSSL: if hasattr(ssl, '_create_unverified_context'): context = ssl._create_unverified_context() connection = http_client.HTTPSConnection(ip, port=port, context=context) else: connection = http_client.HTTPSConnection(ip, port=port) else: connection = http_client.HTTPConnection(ip, port) return connection def get_basic_info(self, management_url): """Get the basic information of NAS.""" LOG.debug('in get_basic_info') management_ip, management_port, management_ssl = ( self._parse_management_url(management_url)) connection = self._prepare_connection(management_ssl, management_ip, management_port) connection.request('GET', '/cgi-bin/authLogin.cgi') response = connection.getresponse() data = response.read() LOG.debug('response data: %s', data) root = ET.fromstring(data) display_model_name = root.find('model/displayModelName').text internal_model_name = root.find('model/internalModelName').text fw_version = root.find('firmware/version').text connection.close() return display_model_name, internal_model_name, fw_version def _execute_and_get_response_details(self, nas_ip, url): """Will prepare response after executing a http request.""" LOG.debug('port: %(port)s, ssl: %(ssl)s', {'port': self.port, 'ssl': self.ssl}) res_details = {} # Prepare the connection connection = self._prepare_connection(self.ssl, nas_ip, self.port) # Make the connection LOG.debug('url : %s', url) 
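        # Clarifying note: callers build 'url' as a CGI path of the form
        # '/cgi-bin/<area>/<script>.cgi?<urlencoded params>' with the
        # session 'sid' included in the query string; the NAS replies with
        # an XML document that the calling method parses with ElementTree.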
connection.request('GET', url) # Extract the response as the connection was successful response = connection.getresponse() # Read the response data = response.read() LOG.debug('response data: %s', data) res_details['data'] = data res_details['error'] = None res_details['http_status'] = response.status connection.close() return res_details def execute_login(self): """Login and return sid.""" params = { 'user': self.username, 'pwd': base64.b64encode(self.password.encode("utf-8")), 'serviceKey': '1', } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/authLogin.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) session_id = root.find('authSid').text return session_id def _login(self): """Execute Https Login API.""" self.sid = self.execute_login() LOG.debug('sid: %s', self.sid) def _sanitize_params(self, params): sanitized_params = {} for key in params: value = params[key] if value is not None: if isinstance(value, list): sanitized_params[key] = [six.text_type(v) for v in value] else: sanitized_params[key] = six.text_type(value) return sanitized_params @_connection_checker def create_share(self, share, pool_name, create_share_name, share_proto, **kwargs): """Create share.""" LOG.debug('create_share_name: %s', create_share_name) params = { 'wiz_func': 'share_create', 'action': 'add_share', 'vol_name': create_share_name, 'vol_size': six.text_type(share['size']) + 'GB', 'threshold': '80', 'dedup': ('sha512' if kwargs['qnap_deduplication'] is True else 'off'), 'compression': '1' if kwargs['qnap_compression'] is True else '0', 'thin_pro': '1' if kwargs['qnap_thin_provision'] is True else '0', 'cache': '1' if kwargs['qnap_ssd_cache'] is True else '0', 'cifs_enable': '0' if share_proto == 'NFS' else '1', 'nfs_enable': '0' if share_proto == 'CIFS' else '1', 'afp_enable': '0', 'ftp_enable': '0', 'encryption': '0', 'hidden': '0', 'oplocks': '1', 'sync': 'always', 'userrw0': 'admin', 'userrd_len': '0', 'userrw_len': '1', 'userno_len': '0', 'access_r': 'setup_users', 'path_type': 'auto', 'recycle_bin': '1', 'recycle_bin_administrators_only': '0', 'pool_name': pool_name, 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/wizReq.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('ES_RET_CODE').text < '0': msg = _("Fail to create share %s on NAS.") % create_share_name LOG.error(msg) raise exception.ShareBackendException(msg=msg) vol_list = root.find('func').find('ownContent').find('volumeList') vol_info_tree = vol_list.findall('volume') for vol in vol_info_tree: LOG.debug('Iterating vol name: %(name)s, index: %(id)s', {'name': vol.find('volumeLabel').text, 'id': vol.find('volumeValue').text}) if (create_share_name == vol.find('volumeLabel').text): LOG.debug('volumeLabel:%s', vol.find('volumeLabel').text) return vol.find('volumeValue').text return res_details['data'] @_connection_checker def delete_share(self, vol_id, *args, **kwargs): """Execute delete share API.""" params = { 'func': 'volume_mgmt', 'vol_remove': '1', 'volumeID': vol_id, 'stop_service': 
'no', 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('result').text < '0': msg = _('Delete share id: %s failed') % vol_id raise exception.ShareBackendException(msg=msg) @_connection_checker def get_specific_poolinfo(self, pool_id): """Execute get_specific_poolinfo API.""" params = { 'store': 'poolInfo', 'func': 'extra_get', 'poolID': pool_id, 'Pool_Info': '1', 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('result').text < '0': msg = _('get_specific_poolinfo failed') raise exception.ShareBackendException(msg=msg) pool_list = root.find('Pool_Index') pool_info_tree = pool_list.findall('row') for pool in pool_info_tree: if pool_id == pool.find('poolID').text: LOG.debug('poolID: %s', pool.find('poolID').text) return pool @_connection_checker def get_share_info(self, pool_id, **kwargs): """Execute get_share_info API.""" for key, value in kwargs.items(): LOG.debug('%(key)s = %(val)s', {'key': key, 'val': value}) params = { 'store': 'poolVolumeList', 'poolID': pool_id, 'func': 'extra_get', 'Pool_Vol_Info': '1', 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) vol_list = root.find('Volume_Info') vol_info_tree = vol_list.findall('row') for vol in vol_info_tree: LOG.debug('Iterating vol name: %(name)s, index: %(id)s', {'name': vol.find('vol_label').text, 'id': vol.find('vol_no').text}) if 'vol_no' in kwargs: if kwargs['vol_no'] == vol.find('vol_no').text: LOG.debug('vol_no:%s', vol.find('vol_no').text) return vol elif 'vol_label' in kwargs: if kwargs['vol_label'] == vol.find('vol_label').text: LOG.debug('vol_label:%s', vol.find('vol_label').text) return vol return None @_connection_checker def get_specific_volinfo(self, vol_id, **kwargs): """Execute get_specific_volinfo API.""" params = { 'store': 'volumeInfo', 'volumeID': vol_id, 'func': 'extra_get', 'Volume_Info': '1', 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/disk/disk_manage.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) vol_list = root.find('Volume_Info') vol_info_tree = vol_list.findall('row') for vol in vol_info_tree: if vol_id == vol.find('vol_no').text: LOG.debug('vol_no: %s', vol.find('vol_no').text) return vol @_connection_checker def get_snapshot_info(self, **kwargs): """Execute 
get_snapshot_info API.""" params = { 'func': 'extra_get', 'volumeID': kwargs['volID'], 'snapshot_list': '1', 'snap_start': '0', 'snap_count': '100', 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('result').text < '0': raise exception.ShareBackendException(msg=MSG_UNEXPECT_RESP) snapshot_list = root.find('SnapshotList') # if snapshot_list is None: if not snapshot_list: return None if ('snapshot_name' in kwargs): snapshot_tree = snapshot_list.findall('row') for snapshot in snapshot_tree: if (kwargs['snapshot_name'] == snapshot.find('snapshot_name').text): LOG.debug('snapshot_name:%s', kwargs['snapshot_name']) return snapshot if (snapshot is snapshot_tree[-1]): return None return res_details['data'] @_connection_checker def create_snapshot_api(self, volumeID, snapshot_name): """Execute CGI to create snapshot from source share.""" LOG.debug('volumeID: %s', volumeID) LOG.debug('snapshot_name: %s', snapshot_name) params = { 'func': 'create_snapshot', 'volumeID': volumeID, 'snapshot_name': snapshot_name, 'expire_min': '0', 'vital': '1', 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('ES_RET_CODE').text < '0': msg = _('Create snapshot failed') raise exception.ShareBackendException(msg=msg) @_connection_checker def delete_snapshot_api(self, snapshot_id): """Execute CGI to delete snapshot from snapshot_id.""" params = { 'func': 'del_snapshots', 'snapshotID': snapshot_id, 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) # snapshot not exist if root.find('result').text == '-206021': LOG.warning('Snapshot id %s does not exist', snapshot_id) return # share not exist if root.find('result').text == '-200005': LOG.warning('Share of snapshot id %s does not exist', snapshot_id) return if root.find('result').text < '0': msg = _('Failed to delete snapshot.') raise exception.ShareBackendException(msg=msg) @_connection_checker def clone_snapshot(self, snapshot_id, new_sharename, clone_size): """Execute CGI to clone snapshot as share.""" params = { 'func': 'clone_qsnapshot', 'by_vol': '1', 'snapshotID': snapshot_id, 'new_name': new_sharename, 'clone_size': '{}g'.format(clone_size), 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise 
exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('result').text < '0': msg = _('Failed to clone snapshot.') raise exception.ShareBackendException(msg=msg) @_connection_checker def edit_share(self, share_dict): """Edit share properties.""" LOG.debug('share_dict[sharename]: %s', share_dict['sharename']) params = { 'wiz_func': 'share_property', 'action': 'share_property', 'sharename': share_dict['sharename'], 'old_sharename': share_dict['old_sharename'], 'dedup': 'sha512' if share_dict['deduplication'] else 'off', 'compression': '1' if share_dict['compression'] else '0', 'thin_pro': '1' if share_dict['thin_provision'] else '0', 'cache': '1' if share_dict['ssd_cache'] else '0', 'cifs_enable': '1' if share_dict['share_proto'] == 'CIFS' else '0', 'nfs_enable': '1' if share_dict['share_proto'] == 'NFS' else '0', 'afp_enable': '0', 'ftp_enable': '0', 'hidden': '0', 'oplocks': '1', 'sync': 'always', 'recycle_bin': '1', 'recycle_bin_administrators_only': '0', 'sid': self.sid, } if share_dict.get('new_size'): params['vol_size'] = six.text_type(share_dict['new_size']) + 'GB' sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/priv/privWizard.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('ES_RET_CODE').text < '0': msg = _('Edit sharename %s failed') % share_dict['sharename'] raise exception.ShareBackendException(msg=msg) @_connection_checker def get_host_list(self, **kwargs): """Execute get_host_list API.""" params = { 'module': 'hosts', 'func': 'get_hostlist', 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/accessrights/accessrightsRequest.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('result').text < '0': raise exception.ShareBackendException(msg=MSG_UNEXPECT_RESP) host_list = root.find('content').find('host_list') # if host_list is None: if not host_list: return None return_hosts = [] host_tree = host_list.findall('host') for host in host_tree: LOG.debug('host:%s', host) return_hosts.append(host) return return_hosts @_connection_checker def add_host(self, hostname, ipv4): """Execute add_host API.""" params = { 'module': 'hosts', 'func': 'apply_addhost', 'name': hostname, 'ipaddr_v4': ipv4, 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/accessrights/accessrightsRequest.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('result').text < '0': raise exception.ShareBackendException(msg=MSG_UNEXPECT_RESP) @_connection_checker def edit_host(self, hostname, ipv4_list): """Execute edit_host API.""" params = { 'module': 'hosts', 'func': 'apply_sethost', 'name': hostname, 'ipaddr_v4': ipv4_list, 'sid': self.sid, } sanitized_params = self._sanitize_params(params) # urlencode with True parameter to parse ipv4_list 
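        # (doseq=True: a list value such as ['10.0.0.1', '10.0.0.2'] is
        # encoded as repeated 'ipaddr_v4=...' query parameters rather than
        # as the string representation of the Python list.)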
sanitized_params = urllib.parse.urlencode(sanitized_params, True) url = ('/cgi-bin/accessrights/accessrightsRequest.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('result').text < '0': raise exception.ShareBackendException(msg=MSG_UNEXPECT_RESP) @_connection_checker def delete_host(self, hostname): """Execute delete_host API.""" params = { 'module': 'hosts', 'func': 'apply_delhost', 'host_name': hostname, 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/accessrights/accessrightsRequest.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('result').text < '0': raise exception.ShareBackendException(msg=MSG_UNEXPECT_RESP) @_connection_checker def set_nfs_access(self, sharename, access, host_name): """Execute set_nfs_access API.""" params = { 'wiz_func': 'share_nfs_control', 'action': 'share_nfs_control', 'sharename': sharename, 'access': access, 'host_name': host_name, 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/priv/privWizard.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('result').text < '0': raise exception.ShareBackendException(msg=MSG_UNEXPECT_RESP) class QnapAPIExecutorTS(QnapAPIExecutor): """Makes QNAP API calls for TS NAS.""" @_connection_checker def get_snapshot_info(self, **kwargs): """Execute get_snapshot_info API.""" for key, value in kwargs.items(): LOG.debug('%(key)s = %(val)s', {'key': key, 'val': value}) params = { 'func': 'extra_get', 'LUNIndex': kwargs['lun_index'], 'smb_snapshot_list': '1', 'smb_snapshot': '1', 'snapshot_list': '1', 'sid': self.sid, } sanitized_params = self._sanitize_params(params) sanitized_params = urllib.parse.urlencode(sanitized_params) url = ('/cgi-bin/disk/snapshot.cgi?%s' % sanitized_params) res_details = self._execute_and_get_response_details(self.ip, url) root = ET.fromstring(res_details['data']) if root.find('authPassed').text == '0': raise exception.ShareBackendException(msg=MSG_SESSION_EXPIRED) if root.find('result').text < '0': raise exception.ShareBackendException(msg=MSG_UNEXPECT_RESP) snapshot_list = root.find('SnapshotList') if snapshot_list is None: return None snapshot_tree = snapshot_list.findall('row') for snapshot in snapshot_tree: if (kwargs['snapshot_name'] == snapshot.find('snapshot_name').text): LOG.debug('snapshot_name:%s', kwargs['snapshot_name']) return snapshot return None manila-10.0.0/manila/share/drivers/qnap/qnap.py0000664000175000017500000011776213656750227021377 0ustar zuulzuul00000000000000# Copyright (c) 2016 QNAP Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Share driver for QNAP Storage. This driver supports QNAP Storage for NFS. """ import datetime import math import re import time from oslo_config import cfg from oslo_log import log as logging from oslo_utils import timeutils from oslo_utils import units from manila.common import constants from manila import exception from manila.i18n import _ from manila import share from manila.share import driver from manila.share.drivers.qnap import api from manila.share import share_types from manila import utils LOG = logging.getLogger(__name__) qnap_manila_opts = [ cfg.StrOpt('qnap_management_url', required=True, help='The URL to manage QNAP Storage.'), cfg.HostAddressOpt('qnap_share_ip', required=True, help='NAS share IP for mounting shares.'), cfg.StrOpt('qnap_nas_login', required=True, help='Username for QNAP storage.'), cfg.StrOpt('qnap_nas_password', required=True, secret=True, help='Password for QNAP storage.'), cfg.StrOpt('qnap_poolname', required=True, help='Pool within which QNAP shares must be created.'), ] CONF = cfg.CONF CONF.register_opts(qnap_manila_opts) class QnapShareDriver(driver.ShareDriver): """OpenStack driver to enable QNAP Storage. Version history: 1.0.0 - Initial driver (Only NFS) 1.0.1 - Add support for QES fw 1.1.4. 1.0.2 - Fix bug #1736370, QNAP Manila driver: Access rule setting is override by the another access rule. 1.0.3 - Add supports for Thin Provisioning, SSD Cache, Deduplication and Compression. 1.0.4 - Add support for QES fw 2.0.0. 1.0.5 - Fix bug #1773761, when user tries to manage share, the size of managed share should not be changed. 1.0.6 - Add support for QES fw 2.1.0. 1.0.7 - Add support for QES fw on TDS series NAS model. 1.0.8 - Fix bug, driver should not manage snapshot which does not exist in NAS. Fix bug, driver should create share from snapshot with specified size. """ DRIVER_VERSION = '1.0.8' def __init__(self, *args, **kwargs): """Initialize QnapShareDriver.""" super(QnapShareDriver, self).__init__(False, *args, **kwargs) self.private_storage = kwargs.get('private_storage') self.api_executor = None self.group_stats = {} self.configuration.append_config_values(qnap_manila_opts) self.share_api = share.API() def do_setup(self, context): """Setup the QNAP Manila share driver.""" self.ctxt = context LOG.debug('context: %s', context) # Setup API Executor try: self.api_executor = self._create_api_executor() except Exception: LOG.exception('Failed to create HTTP client. 
Check IP ' 'address, port, username, password and make ' 'sure the array version is compatible.') raise def check_for_setup_error(self): """Check the status of setup.""" if self.api_executor is None: msg = _("Failed to instantiate API client to communicate with " "QNAP storage systems.") raise exception.ShareBackendException(msg=msg) def _create_api_executor(self): """Create API executor by NAS model.""" """LOG.debug('CONF.qnap_nas_login=%(conf)s', {'conf': CONF.qnap_nas_login}) LOG.debug('self.configuration.qnap_nas_login=%(conf)s', {'conf': self.configuration.qnap_nas_login})""" self.api_executor = api.QnapAPIExecutor( username=self.configuration.qnap_nas_login, password=self.configuration.qnap_nas_password, management_url=self.configuration.qnap_management_url) display_model_name, internal_model_name, fw_version = ( self.api_executor.get_basic_info( self.configuration.qnap_management_url)) pattern = re.compile(r"^([A-Z]+)-?[A-Z]{0,2}(\d+)\d{2}(U|[a-z]*)") matches = pattern.match(display_model_name) if not matches: return None model_type = matches.group(1) ts_model_types = ( "TS", "SS", "IS", "TVS", "TBS" ) tes_model_types = ( "TES", "TDS" ) es_model_types = ( "ES", ) if model_type in ts_model_types: if (fw_version.startswith("4.2") or fw_version.startswith("4.3")): LOG.debug('Create TS API Executor') # modify the pool name to pool index self.configuration.qnap_poolname = ( self._get_ts_model_pool_id( self.configuration.qnap_poolname)) return api.QnapAPIExecutorTS( username=self.configuration.qnap_nas_login, password=self.configuration.qnap_nas_password, management_url=self.configuration.qnap_management_url) elif model_type in tes_model_types: if 'TS' in internal_model_name: if (fw_version.startswith("4.2") or fw_version.startswith("4.3")): LOG.debug('Create TS API Executor') # modify the pool name to pool index self.configuration.qnap_poolname = ( self._get_ts_model_pool_id( self.configuration.qnap_poolname)) return api.QnapAPIExecutorTS( username=self.configuration.qnap_nas_login, password=self.configuration.qnap_nas_password, management_url=self.configuration.qnap_management_url) elif "1.1.2" <= fw_version <= "2.1.9999": LOG.debug('Create ES API Executor') return api.QnapAPIExecutor( username=self.configuration.qnap_nas_login, password=self.configuration.qnap_nas_password, management_url=self.configuration.qnap_management_url) elif model_type in es_model_types: if "1.1.2" <= fw_version <= "2.1.9999": LOG.debug('Create ES API Executor') return api.QnapAPIExecutor( username=self.configuration.qnap_nas_login, password=self.configuration.qnap_nas_password, management_url=self.configuration.qnap_management_url) msg = _('QNAP Storage model is not supported by this driver.') raise exception.ShareBackendException(msg=msg) def _get_ts_model_pool_id(self, pool_name): """Modify the pool name to pool index.""" pattern = re.compile(r"^(\d+)+|^Storage Pool (\d+)+") matches = pattern.match(pool_name) if matches.group(1): return matches.group(1) else: return matches.group(2) @utils.synchronized('qnap-gen_name') def _gen_random_name(self, type): if type == 'share': infix = "shr-" elif type == 'snapshot': infix = "snp-" elif type == 'host': infix = "hst-" else: infix = "" return ("manila-%(ifx)s%(time)s" % {'ifx': infix, 'time': timeutils.utcnow().strftime('%Y%m%d%H%M%S%f')}) def _gen_host_name(self, vol_name_timestamp, access_level): # host_name will be manila-{vol_name_timestamp}-ro or # manila-{vol_name_timestamp}-rw return 'manila-{}-{}'.format(vol_name_timestamp, access_level) def 
_get_timestamp_from_vol_name(self, vol_name): vol_name_split = vol_name.split('-') dt = datetime.datetime.strptime(vol_name_split[2], '%Y%m%d%H%M%S%f') return int(time.mktime(dt.timetuple())) def _get_location_path(self, share_name, share_proto, ip, vol_id): if share_proto == 'NFS': vol = self.api_executor.get_specific_volinfo(vol_id) vol_mount_path = vol.find('vol_mount_path').text location = '%s:%s' % (ip, vol_mount_path) else: msg = _('Invalid NAS protocol: %s') % share_proto raise exception.InvalidInput(reason=msg) export_location = { 'path': location, 'is_admin_only': False, } return export_location def _update_share_stats(self): """Get latest share stats.""" backend_name = (self.configuration.safe_get( 'share_backend_name') or self.__class__.__name__) LOG.debug('backend_name=%(backend_name)s', {'backend_name': backend_name}) selected_pool = self.api_executor.get_specific_poolinfo( self.configuration.qnap_poolname) total_capacity_gb = (int(selected_pool.find('capacity_bytes').text) / units.Gi) LOG.debug('total_capacity_gb: %s GB', total_capacity_gb) free_capacity_gb = (int(selected_pool.find('freesize_bytes').text) / units.Gi) LOG.debug('free_capacity_gb: %s GB', free_capacity_gb) alloc_capacity_gb = (int(selected_pool.find('allocated_bytes').text) / units.Gi) LOG.debug('allocated_capacity_gb: %s GB', alloc_capacity_gb) reserved_percentage = self.configuration.safe_get( 'reserved_share_percentage') # single pool now, need support multiple pools in the future single_pool = { "pool_name": self.configuration.qnap_poolname, "total_capacity_gb": total_capacity_gb, "free_capacity_gb": free_capacity_gb, "allocated_capacity_gb": alloc_capacity_gb, "reserved_percentage": reserved_percentage, "qos": False, "dedupe": [True, False], "compression": [True, False], "thin_provisioning": [True, False], "qnap_ssd_cache": [True, False] } data = { "share_backend_name": backend_name, "vendor_name": "QNAP", "driver_version": self.DRIVER_VERSION, "storage_protocol": "NFS", "snapshot_support": True, "create_share_from_snapshot_support": True, "driver_handles_share_servers": self.configuration.safe_get( 'driver_handles_share_servers'), 'pools': [single_pool], } super(QnapShareDriver, self)._update_share_stats(data) @utils.retry(exception=exception.ShareBackendException, interval=3, retries=5) @utils.synchronized('qnap-create_share') def create_share(self, context, share, share_server=None): """Create a new share.""" LOG.debug('share: %s', share.__dict__) extra_specs = share_types.get_extra_specs_from_share(share) LOG.debug('extra_specs: %s', extra_specs) qnap_thin_provision = share_types.parse_boolean_extra_spec( 'thin_provisioning', extra_specs.get("thin_provisioning") or extra_specs.get('capabilities:thin_provisioning') or 'true') qnap_compression = share_types.parse_boolean_extra_spec( 'compression', extra_specs.get("compression") or extra_specs.get('capabilities:compression') or 'true') qnap_deduplication = share_types.parse_boolean_extra_spec( 'dedupe', extra_specs.get("dedupe") or extra_specs.get('capabilities:dedupe') or 'false') qnap_ssd_cache = share_types.parse_boolean_extra_spec( 'qnap_ssd_cache', extra_specs.get("qnap_ssd_cache") or extra_specs.get("capabilities:qnap_ssd_cache") or 'false') LOG.debug('qnap_thin_provision: %(qnap_thin_provision)s ' 'qnap_compression: %(qnap_compression)s ' 'qnap_deduplication: %(qnap_deduplication)s ' 'qnap_ssd_cache: %(qnap_ssd_cache)s', {'qnap_thin_provision': qnap_thin_provision, 'qnap_compression': qnap_compression, 'qnap_deduplication': qnap_deduplication, 
'qnap_ssd_cache': qnap_ssd_cache}) share_proto = share['share_proto'] # User could create two shares with the same name on horizon. # Therefore, we should not use displayname to create shares on NAS. create_share_name = self._gen_random_name("share") # If share name exists, need to change to another name. created_share = self.api_executor.get_share_info( self.configuration.qnap_poolname, vol_label=create_share_name) LOG.debug('created_share: %s', created_share) if created_share is not None: msg = (_("The share name %s is used by other share on NAS.") % create_share_name) LOG.error(msg) raise exception.ShareBackendException(msg=msg) if (qnap_deduplication and not qnap_thin_provision): msg = _("Dedupe cannot be enabled without thin_provisioning.") LOG.debug('Dedupe cannot be enabled without thin_provisioning.') raise exception.InvalidExtraSpec(reason=msg) self.api_executor.create_share( share, self.configuration.qnap_poolname, create_share_name, share_proto, qnap_thin_provision=qnap_thin_provision, qnap_compression=qnap_compression, qnap_deduplication=qnap_deduplication, qnap_ssd_cache=qnap_ssd_cache) created_share = self._get_share_info(create_share_name) volID = created_share.find('vol_no').text # Use private_storage to record volume ID and Name created in the NAS. LOG.debug('volID: %(volID)s ' 'volName: %(create_share_name)s', {'volID': volID, 'create_share_name': create_share_name}) _metadata = {'volID': volID, 'volName': create_share_name, 'thin_provision': qnap_thin_provision, 'compression': qnap_compression, 'deduplication': qnap_deduplication, 'ssd_cache': qnap_ssd_cache} self.private_storage.update(share['id'], _metadata) return self._get_location_path(create_share_name, share['share_proto'], self.configuration.qnap_share_ip, volID) @utils.retry(exception=exception.ShareBackendException, interval=5, retries=5, backoff_rate=1) def _get_share_info(self, share_name): share = self.api_executor.get_share_info( self.configuration.qnap_poolname, vol_label=share_name) if share is None: msg = _("Fail to get share info of %s on NAS.") % share_name LOG.error(msg) raise exception.ShareBackendException(msg=msg) else: return share @utils.synchronized('qnap-delete_share') def delete_share(self, context, share, share_server=None): """Delete the specified share.""" # Use private_storage to retrieve volume ID created in the NAS. volID = self.private_storage.get(share['id'], 'volID') if not volID: LOG.warning('volID for Share %s does not exist', share['id']) return LOG.debug('volID: %s', volID) del_share = self.api_executor.get_share_info( self.configuration.qnap_poolname, vol_no=volID) if del_share is None: LOG.warning('Share %s does not exist', share['id']) return vol_no = del_share.find('vol_no').text self.api_executor.delete_share(vol_no) self.private_storage.delete(share['id']) @utils.synchronized('qnap-extend_share') def extend_share(self, share, new_size, share_server=None): """Extend an existing share.""" LOG.debug('Entering extend_share share_name=%(share_name)s ' 'share_id=%(share_id)s ' 'new_size=%(size)s', {'share_name': share['display_name'], 'share_id': share['id'], 'size': new_size}) # Use private_storage to retrieve volume Name created in the NAS. 
volName = self.private_storage.get(share['id'], 'volName') if not volName: LOG.debug('Share %s does not exist', share['id']) raise exception.ShareResourceNotFound(share_id=share['id']) LOG.debug('volName: %s', volName) thin_provision = self.private_storage.get( share['id'], 'thin_provision') compression = self.private_storage.get(share['id'], 'compression') deduplication = self.private_storage.get(share['id'], 'deduplication') ssd_cache = self.private_storage.get(share['id'], 'ssd_cache') LOG.debug('thin_provision: %(thin_provision)s ' 'compression: %(compression)s ' 'deduplication: %(deduplication)s ' 'ssd_cache: %(ssd_cache)s', {'thin_provision': thin_provision, 'compression': compression, 'deduplication': deduplication, 'ssd_cache': ssd_cache}) share_dict = { 'sharename': volName, 'old_sharename': volName, 'new_size': new_size, 'thin_provision': thin_provision == 'True', 'compression': compression == 'True', 'deduplication': deduplication == 'True', 'ssd_cache': ssd_cache == 'True', 'share_proto': share['share_proto'] } self.api_executor.edit_share(share_dict) @utils.retry(exception=exception.ShareBackendException, interval=3, retries=5) @utils.synchronized('qnap-create_snapshot') def create_snapshot(self, context, snapshot, share_server=None): """Create a snapshot.""" LOG.debug('snapshot[share][share_id]: %s', snapshot['share']['share_id']) LOG.debug('snapshot id: %s', snapshot['id']) # Use private_storage to retrieve volume ID created in the NAS. volID = self.private_storage.get(snapshot['share']['id'], 'volID') if not volID: LOG.warning( 'volID for Share %s does not exist', snapshot['share']['id']) raise exception.ShareResourceNotFound( share_id=snapshot['share']['id']) LOG.debug('volID: %s', volID) # User could create two snapshot with the same name on horizon. # Therefore, we should not use displayname to create snapshot on NAS. # if snapshot exist, need to change another create_snapshot_name = self._gen_random_name("snapshot") LOG.debug('create_snapshot_name: %s', create_snapshot_name) check_snapshot = self.api_executor.get_snapshot_info( volID=volID, snapshot_name=create_snapshot_name) if check_snapshot is not None: msg = _("Failed to create an unused snapshot name.") raise exception.ShareBackendException(msg=msg) LOG.debug('create_snapshot_name: %s', create_snapshot_name) self.api_executor.create_snapshot_api(volID, create_snapshot_name) snapshot_id = "" created_snapshot = self.api_executor.get_snapshot_info( volID=volID, snapshot_name=create_snapshot_name) if created_snapshot is not None: snapshot_id = created_snapshot.find('snapshot_id').text else: msg = _("Failed to get snapshot information.") raise exception.ShareBackendException(msg=msg) LOG.debug('created_snapshot: %s', created_snapshot) LOG.debug('snapshot_id: %s', snapshot_id) # Use private_storage to record data instead of metadata. _metadata = {'snapshot_id': snapshot_id} self.private_storage.update(snapshot['id'], _metadata) # Test to get value from private_storage. snapshot_id = self.private_storage.get(snapshot['id'], 'snapshot_id') LOG.debug('snapshot_id: %s', snapshot_id) return {'provider_location': snapshot_id} @utils.synchronized('qnap-delete_snapshot') def delete_snapshot(self, context, snapshot, share_server=None): """Delete a snapshot.""" LOG.debug('Entering delete_snapshot. 
The deleted snapshot=%(snap)s', {'snap': snapshot['id']}) snapshot_id = (snapshot.get('provider_location') or self.private_storage.get(snapshot['id'], 'snapshot_id')) if not snapshot_id: LOG.warning('Snapshot %s does not exist', snapshot['id']) return LOG.debug('snapshot_id: %s', snapshot_id) self.api_executor.delete_snapshot_api(snapshot_id) self.private_storage.delete(snapshot['id']) @utils.retry(exception=exception.ShareBackendException, interval=3, retries=5) @utils.synchronized('qnap-create_share_from_snapshot') def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Create a share from a snapshot.""" LOG.debug('Entering create_share_from_snapshot. The source ' 'snapshot=%(snap)s. The created share=%(share)s', {'snap': snapshot['id'], 'share': share['id']}) snapshot_id = (snapshot.get('provider_location') or self.private_storage.get(snapshot['id'], 'snapshot_id')) if not snapshot_id: LOG.warning('Snapshot %s does not exist', snapshot['id']) raise exception.SnapshotResourceNotFound(name=snapshot['id']) LOG.debug('snapshot_id: %s', snapshot_id) create_share_name = self._gen_random_name("share") # if sharename exist, need to change another created_share = self.api_executor.get_share_info( self.configuration.qnap_poolname, vol_label=create_share_name) if created_share is not None: msg = _("Failed to create an unused share name.") raise exception.ShareBackendException(msg=msg) self.api_executor.clone_snapshot(snapshot_id, create_share_name, share['size']) create_volID = "" created_share = self.api_executor.get_share_info( self.configuration.qnap_poolname, vol_label=create_share_name) if created_share is not None: create_volID = created_share.find('vol_no').text LOG.debug('create_volID: %s', create_volID) else: msg = _("Failed to clone a snapshot in time.") raise exception.ShareBackendException(msg=msg) thin_provision = self.private_storage.get( snapshot['share_instance_id'], 'thin_provision') compression = self.private_storage.get( snapshot['share_instance_id'], 'compression') deduplication = self.private_storage.get( snapshot['share_instance_id'], 'deduplication') ssd_cache = self.private_storage.get( snapshot['share_instance_id'], 'ssd_cache') LOG.debug('thin_provision: %(thin_provision)s ' 'compression: %(compression)s ' 'deduplication: %(deduplication)s ' 'ssd_cache: %(ssd_cache)s', {'thin_provision': thin_provision, 'compression': compression, 'deduplication': deduplication, 'ssd_cache': ssd_cache}) # Use private_storage to record volume ID and Name created in the NAS. _metadata = { 'volID': create_volID, 'volName': create_share_name, 'thin_provision': thin_provision, 'compression': compression, 'deduplication': deduplication, 'ssd_cache': ssd_cache } self.private_storage.update(share['id'], _metadata) # Test to get value from private_storage. volName = self.private_storage.get(share['id'], 'volName') LOG.debug('volName: %s', volName) return self._get_location_path(create_share_name, share['share_proto'], self.configuration.qnap_share_ip, create_volID) def _get_vol_host(self, host_list, vol_name_timestamp): vol_host_list = [] if host_list is None: return vol_host_list for host in host_list: # Check host alias name with prefix "manila-{vol_name_timestamp}" # to find the host of this manila share. LOG.debug('_get_vol_host name:%s', host.find('name').text) # Because driver supports only IPv4 now, check "netaddrs" # have "ipv4" tag to get address. 
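            # For illustration: hosts created by this driver are named
            # 'manila-<vol name timestamp>-rw' / '-ro' (see _gen_host_name),
            # so the prefix match below picks up both entries belonging to
            # this share.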
if re.match("^manila-{}".format(vol_name_timestamp), host.find('name').text): host_dict = { 'index': host.find('index').text, 'hostid': host.find('hostid').text, 'name': host.find('name').text, 'ipv4': [], } for ipv4 in host.findall('netaddrs/ipv4'): host_dict['ipv4'].append(ipv4.text) vol_host_list.append(host_dict) LOG.debug('_get_vol_host vol_host_list:%s', vol_host_list) return vol_host_list @utils.synchronized('qnap-update_access') def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): if not (add_rules or delete_rules): volName = self.private_storage.get(share['id'], 'volName') LOG.debug('volName: %s', volName) if volName is None: LOG.debug('Share %s does not exist', share['id']) raise exception.ShareResourceNotFound(share_id=share['id']) # Clear all current ACLs self.api_executor.set_nfs_access(volName, 2, "all") vol_name_timestamp = self._get_timestamp_from_vol_name(volName) host_list = self.api_executor.get_host_list() LOG.debug('host_list:%s', host_list) vol_host_list = self._get_vol_host(host_list, vol_name_timestamp) # If host already exist, delete the host if len(vol_host_list) > 0: for vol_host in vol_host_list: self.api_executor.delete_host(vol_host['name']) # Add each one through all rules. for access in access_rules: self._allow_access(context, share, access, share_server) else: # Adding/Deleting specific rules for access in delete_rules: self._deny_access(context, share, access, share_server) for access in add_rules: self._allow_access(context, share, access, share_server) def _allow_access(self, context, share, access, share_server=None): """Allow access to the share.""" share_proto = share['share_proto'] access_type = access['access_type'] access_level = access['access_level'] access_to = access['access_to'] LOG.debug('share_proto: %(share_proto)s ' 'access_type: %(access_type)s ' 'access_level: %(access_level)s ' 'access_to: %(access_to)s', {'share_proto': share_proto, 'access_type': access_type, 'access_level': access_level, 'access_to': access_to}) self._check_share_access(share_proto, access_type) vol_name = self.private_storage.get(share['id'], 'volName') vol_name_timestamp = self._get_timestamp_from_vol_name(vol_name) host_name = self._gen_host_name(vol_name_timestamp, access_level) host_list = self.api_executor.get_host_list() LOG.debug('vol_name: %(vol_name)s ' 'access_level: %(access_level)s ' 'host_name: %(host_name)s ' 'host_list: %(host_list)s ', {'vol_name': vol_name, 'access_level': access_level, 'host_name': host_name, 'host_list': host_list}) filter_host_list = self._get_vol_host(host_list, vol_name_timestamp) if len(filter_host_list) == 0: # if host does not exist, create a host for the share self.api_executor.add_host(host_name, access_to) elif (len(filter_host_list) == 1 and filter_host_list[0]['name'] == host_name): # if the host exist, and this host is for the same access right, # add ip to the host. ipv4_list = filter_host_list[0]['ipv4'] if access_to not in ipv4_list: ipv4_list.append(access_to) LOG.debug('vol_host["ipv4"]: %s', filter_host_list[0]['ipv4']) LOG.debug('ipv4_list: %s', ipv4_list) self.api_executor.edit_host(host_name, ipv4_list) else: # Until now, share of QNAP NAS can only apply one access level for # all ips. "rw" for some ips and "ro" for else is not allowed. 
support_level = (constants.ACCESS_LEVEL_RW if access_level == constants.ACCESS_LEVEL_RO else constants.ACCESS_LEVEL_RO) reason = _('Share only supports one access ' 'level: %s') % support_level LOG.error(reason) raise exception.InvalidShareAccess(reason=reason) access = 1 if access_level == constants.ACCESS_LEVEL_RO else 0 self.api_executor.set_nfs_access(vol_name, access, host_name) def _deny_access(self, context, share, access, share_server=None): """Deny access to the share.""" share_proto = share['share_proto'] access_type = access['access_type'] access_level = access['access_level'] access_to = access['access_to'] LOG.debug('share_proto: %(share_proto)s ' 'access_type: %(access_type)s ' 'access_level: %(access_level)s ' 'access_to: %(access_to)s', {'share_proto': share_proto, 'access_type': access_type, 'access_level': access_level, 'access_to': access_to}) try: self._check_share_access(share_proto, access_type) except exception.InvalidShareAccess: LOG.warning('The denied rule is invalid and does not exist.') return vol_name = self.private_storage.get(share['id'], 'volName') vol_name_timestamp = self._get_timestamp_from_vol_name(vol_name) host_name = self._gen_host_name(vol_name_timestamp, access_level) host_list = self.api_executor.get_host_list() LOG.debug('vol_name: %(vol_name)s ' 'access_level: %(access_level)s ' 'host_name: %(host_name)s ' 'host_list: %(host_list)s ', {'vol_name': vol_name, 'access_level': access_level, 'host_name': host_name, 'host_list': host_list}) filter_host_list = self._get_vol_host(host_list, vol_name_timestamp) # if share already have host, remove ip from host for vol_host in filter_host_list: if vol_host['name'] == host_name: ipv4_list = vol_host['ipv4'] if access_to in ipv4_list: ipv4_list.remove(access_to) LOG.debug('vol_host["ipv4"]: %s', vol_host['ipv4']) LOG.debug('ipv4_list: %s', ipv4_list) if len(ipv4_list) == 0: # if list empty, remove the host self.api_executor.set_nfs_access( vol_name, 2, host_name) self.api_executor.delete_host(host_name) else: self.api_executor.edit_host(host_name, ipv4_list) break def _check_share_access(self, share_proto, access_type): if share_proto == 'NFS' and access_type != 'ip': reason = _('Only "ip" access type is allowed for ' 'NFS shares.') LOG.warning(reason) raise exception.InvalidShareAccess(reason=reason) elif share_proto != 'NFS': reason = _('Invalid NAS protocol: %s') % share_proto raise exception.InvalidShareAccess(reason=reason) def manage_existing(self, share, driver_options): """Manages a share that exists on backend.""" if share['share_proto'].lower() == 'nfs': # 10.0.0.1:/share/example LOG.info("Share %(shr_path)s will be managed with ID " "%(shr_id)s.", {'shr_path': share['export_locations'][0]['path'], 'shr_id': share['id']}) old_path_info = share['export_locations'][0]['path'].split( ':/share/') if len(old_path_info) == 2: ip = old_path_info[0] share_name = old_path_info[1] else: msg = _("Incorrect path. 
It should have the following format: " "IP:/share/share_name.") raise exception.ShareBackendException(msg=msg) else: msg = _('Invalid NAS protocol: %s') % share['share_proto'] raise exception.InvalidInput(reason=msg) if ip != self.configuration.qnap_share_ip: msg = _("The NAS IP %(ip)s is not configured.") % {'ip': ip} raise exception.ShareBackendException(msg=msg) existing_share = self.api_executor.get_share_info( self.configuration.qnap_poolname, vol_label=share_name) if existing_share is None: msg = _("The share %s trying to be managed was not found on " "backend.") % share['id'] raise exception.ManageInvalidShare(reason=msg) extra_specs = share_types.get_extra_specs_from_share(share) qnap_thin_provision = share_types.parse_boolean_extra_spec( 'thin_provisioning', extra_specs.get("thin_provisioning") or extra_specs.get('capabilities:thin_provisioning') or 'true') qnap_compression = share_types.parse_boolean_extra_spec( 'compression', extra_specs.get("compression") or extra_specs.get('capabilities:compression') or 'true') qnap_deduplication = share_types.parse_boolean_extra_spec( 'dedupe', extra_specs.get("dedupe") or extra_specs.get('capabilities:dedupe') or 'false') qnap_ssd_cache = share_types.parse_boolean_extra_spec( 'qnap_ssd_cache', extra_specs.get("qnap_ssd_cache") or extra_specs.get("capabilities:qnap_ssd_cache") or 'false') LOG.debug('qnap_thin_provision: %(qnap_thin_provision)s ' 'qnap_compression: %(qnap_compression)s ' 'qnap_deduplication: %(qnap_deduplication)s ' 'qnap_ssd_cache: %(qnap_ssd_cache)s', {'qnap_thin_provision': qnap_thin_provision, 'qnap_compression': qnap_compression, 'qnap_deduplication': qnap_deduplication, 'qnap_ssd_cache': qnap_ssd_cache}) if (qnap_deduplication and not qnap_thin_provision): msg = _("Dedupe cannot be enabled without thin_provisioning.") LOG.debug('Dedupe cannot be enabled without thin_provisioning.') raise exception.InvalidExtraSpec(reason=msg) vol_no = existing_share.find('vol_no').text vol = self.api_executor.get_specific_volinfo(vol_no) vol_size_gb = math.ceil(float(vol.find('size').text) / units.Gi) share_dict = { 'sharename': share_name, 'old_sharename': share_name, 'thin_provision': qnap_thin_provision, 'compression': qnap_compression, 'deduplication': qnap_deduplication, 'ssd_cache': qnap_ssd_cache, 'share_proto': share['share_proto'] } self.api_executor.edit_share(share_dict) _metadata = {} _metadata['volID'] = vol_no _metadata['volName'] = share_name _metadata['thin_provision'] = qnap_thin_provision _metadata['compression'] = qnap_compression _metadata['deduplication'] = qnap_deduplication _metadata['ssd_cache'] = qnap_ssd_cache self.private_storage.update(share['id'], _metadata) LOG.info("Share %(shr_path)s was successfully managed with ID " "%(shr_id)s.", {'shr_path': share['export_locations'][0]['path'], 'shr_id': share['id']}) export_locations = self._get_location_path( share_name, share['share_proto'], self.configuration.qnap_share_ip, vol_no) return {'size': vol_size_gb, 'export_locations': export_locations} def unmanage(self, share): """Remove the specified share from Manila management.""" self.private_storage.delete(share['id']) def manage_existing_snapshot(self, snapshot, driver_options): """Manage existing share snapshot with manila.""" volID = self.private_storage.get(snapshot['share']['id'], 'volID') LOG.debug('volID: %s', volID) existing_share = self.api_executor.get_share_info( self.configuration.qnap_poolname, vol_no=volID) if existing_share is None: msg = _("The share id %s was not found on backend.") % volID 
LOG.error(msg) raise exception.ShareNotFound(msg) snapshot_id = snapshot.get('provider_location') snapshot_id_info = snapshot_id.split('@') if len(snapshot_id_info) == 2: share_name = snapshot_id_info[0] snapshot_name = snapshot_id_info[1] else: msg = _("Incorrect provider_location format. It should have the " "following format: share_name@snapshot_name.") LOG.error(msg) raise exception.InvalidParameterValue(msg) if share_name != existing_share.find('vol_label').text: msg = (_("The assigned share %(share_name)s was not matched " "%(vol_label)s on backend.") % {'share_name': share_name, 'vol_label': existing_share.find('vol_label').text}) LOG.error(msg) raise exception.ShareNotFound(msg) check_snapshot = self.api_executor.get_snapshot_info( volID=volID, snapshot_name=snapshot_name) if check_snapshot is None: msg = (_("The snapshot %(snapshot_name)s was not " "found on backend.") % {'snapshot_name': snapshot_name}) LOG.error(msg) raise exception.InvalidParameterValue(err=msg) _metadata = { 'snapshot_id': snapshot_id, } self.private_storage.update(snapshot['id'], _metadata) parent_size = check_snapshot.find('parent_size') snap_size_gb = None if parent_size is not None: snap_size_gb = math.ceil(float(parent_size.text) / units.Gi) return {'size': snap_size_gb} def unmanage_snapshot(self, snapshot): """Remove the specified snapshot from Manila management.""" self.private_storage.delete(snapshot['id']) manila-10.0.0/manila/share/drivers/inspur/0000775000175000017500000000000013656750362020431 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/inspur/__init__.py0000664000175000017500000000000013656750227022530 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/inspur/instorage/0000775000175000017500000000000013656750362022424 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/inspur/instorage/__init__.py0000664000175000017500000000000013656750227024523 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/inspur/instorage/cli_helper.py0000664000175000017500000003755213656750227025120 0ustar zuulzuul00000000000000# Copyright 2019 Inspur Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
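# Illustrative sketch (not part of the driver): how the QNAP manage_existing
# and manage_existing_snapshot flows above interpret the admin-supplied
# identifiers. The export location is expected as "IP:/share/share_name" and
# the snapshot provider_location as "share_name@snapshot_name"; both are
# split and cross-checked before the backend is queried. The helper name and
# values below are assumptions for the example only.
def parse_managed_identifiers(export_path, provider_location):
    """Return (ip, share_name, snapshot_name) or raise ValueError."""
    path_info = export_path.split(':/share/')
    if len(path_info) != 2:
        raise ValueError('Expected "IP:/share/share_name", got %r'
                         % export_path)
    ip, share_name = path_info
    snap_info = provider_location.split('@')
    if len(snap_info) != 2:
        raise ValueError('Expected "share_name@snapshot_name", got %r'
                         % provider_location)
    if snap_info[0] != share_name:
        raise ValueError('provider_location does not match the share name')
    return ip, share_name, snap_info[1]

# Example:
#   parse_managed_identifiers('10.0.0.1:/share/example', 'example@snap01')
#   -> ('10.0.0.1', 'example', 'snap01')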
""" CLI helpers for Inspur InStorage """ import paramiko import re import six from eventlet import greenthread from oslo_concurrency import processutils from oslo_log import log from oslo_utils import excutils from manila import exception from manila.i18n import _ from manila import utils as manila_utils LOG = log.getLogger(__name__) class SSHRunner(object): """SSH runner is used to run ssh command on inspur instorage system.""" def __init__(self, host, port, login, password, privatekey=None): self.host = host self.port = port self.login = login self.password = password self.privatekey = privatekey self.ssh_conn_timeout = 60 self.ssh_min_pool_size = 1 self.ssh_max_pool_size = 10 self.sshpool = None def __call__(self, cmd_list, check_exit_code=True, attempts=1): """SSH tool""" manila_utils.check_ssh_injection(cmd_list) command = ' '.join(cmd_list) if not self.sshpool: try: self.sshpool = manila_utils.SSHPool( self.host, self.port, self.ssh_conn_timeout, self.login, password=self.password, privatekey=self.privatekey, min_size=self.ssh_min_pool_size, max_size=self.ssh_max_pool_size ) except paramiko.SSHException: LOG.error("Unable to create SSHPool") raise try: return self._ssh_execute(self.sshpool, command, check_exit_code, attempts) except Exception: LOG.error("Error running SSH command: %s", command) raise def _ssh_execute(self, sshpool, command, check_exit_code=True, attempts=1): try: with sshpool.item() as ssh: last_exception = None while attempts > 0: attempts -= 1 try: return processutils.ssh_execute( ssh, command, check_exit_code=check_exit_code) except Exception as e: LOG.exception('Error has occurred') last_exception = e greenthread.sleep(1) try: raise processutils.ProcessExecutionError( exit_code=last_exception.exit_code, stdout=last_exception.stdout, stderr=last_exception.stderr, cmd=last_exception.cmd) except AttributeError: raise processutils.ProcessExecutionError( exit_code=-1, stdout="", stderr="Error running SSH command", cmd=command) except Exception: with excutils.save_and_reraise_exception(): LOG.error("Error running SSH command: %s", command) class CLIParser(object): """Parse MCS CLI output and generate iterable.""" def __init__(self, raw, ssh_cmd=None, delim='!', with_header=True): super(CLIParser, self).__init__() if ssh_cmd: self.ssh_cmd = ' '.join(ssh_cmd) else: self.ssh_cmd = 'None' self.raw = raw self.delim = delim self.with_header = with_header self.result = self._parse() def __getitem__(self, key): try: return self.result[key] except KeyError: msg = (_('Did not find the expected key %(key)s in %(fun)s: ' '%(raw)s.') % {'key': key, 'fun': self.ssh_cmd, 'raw': self.raw}) raise exception.ShareBackendException(msg=msg) def __iter__(self): for a in self.result: yield a def __len__(self): return len(self.result) def _parse(self): def get_reader(content, delim): for line in content.lstrip().splitlines(): line = line.strip() if line: yield line.split(delim) else: yield [] if isinstance(self.raw, six.string_types): stdout, stderr = self.raw, '' else: stdout, stderr = self.raw reader = get_reader(stdout, self.delim) result = [] if self.with_header: hds = tuple() for row in reader: hds = row break for row in reader: cur = dict() if len(hds) != len(row): msg = (_('Unexpected CLI response: header/row mismatch. 
' 'header: %(header)s, row: %(row)s.') % {'header': hds, 'row': row}) raise exception.ShareBackendException(msg=msg) for k, v in zip(hds, row): CLIParser.append_dict(cur, k, v) result.append(cur) else: cur = dict() for row in reader: if row: CLIParser.append_dict(cur, row[0], ' '.join(row[1:])) elif cur: # start new section result.append(cur) cur = dict() if cur: result.append(cur) return result @staticmethod def append_dict(dict_, key, value): key, value = key.strip(), value.strip() obj = dict_.get(key, None) if obj is None: dict_[key] = value elif isinstance(obj, list): obj.append(value) dict_[key] = obj else: dict_[key] = [obj, value] return dict_ class InStorageSSH(object): """SSH interface to Inspur InStorage systems.""" def __init__(self, ssh_runner): self._ssh = ssh_runner def _run_ssh(self, ssh_cmd): try: return self._ssh(ssh_cmd) except processutils.ProcessExecutionError as e: msg = (_('CLI Exception output:\n command: %(cmd)s\n ' 'stdout: %(out)s\n stderr: %(err)s.') % {'cmd': ssh_cmd, 'out': e.stdout, 'err': e.stderr}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) def run_ssh_inq(self, ssh_cmd, delim='!', with_header=False): """Run an SSH command and return parsed output.""" raw = self._run_ssh(ssh_cmd) LOG.debug('Response for cmd %s is %s', ssh_cmd, raw) return CLIParser(raw, ssh_cmd=ssh_cmd, delim=delim, with_header=with_header) def run_ssh_assert_no_output(self, ssh_cmd): """Run an SSH command and assert no output returned.""" out, err = self._run_ssh(ssh_cmd) if len(out.strip()) != 0: msg = (_('Expected no output from CLI command %(cmd)s, ' 'got %(out)s.') % {'cmd': ' '.join(ssh_cmd), 'out': out}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) def run_ssh_check_created(self, ssh_cmd): """Run an SSH command and return the ID of the created object.""" out, err = self._run_ssh(ssh_cmd) try: match_obj = re.search(r'\[([0-9]+)\],? successfully created', out) return match_obj.group(1) except (AttributeError, IndexError): msg = (_('Failed to parse CLI output:\n command: %(cmd)s\n ' 'stdout: %(out)s\n stderr: %(err)s.') % {'cmd': ssh_cmd, 'out': out, 'err': err}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) def lsnode(self, node_id=None): with_header = True ssh_cmd = ['mcsinq', 'lsnode', '-delim', '!'] if node_id: with_header = False ssh_cmd.append(node_id) return self.run_ssh_inq(ssh_cmd, with_header=with_header) def lsnaspool(self, pool_id=None): ssh_cmd = ['mcsinq', 'lsnaspool', '-delim', '!'] if pool_id: ssh_cmd.append(pool_id) return self.run_ssh_inq(ssh_cmd, with_header=True) def lsfs(self, node_name=None, fsname=None): if fsname and not node_name: msg = _('Node name should be set when file system name is set.') LOG.error(msg) raise exception.InvalidParameterValue(msg) ssh_cmd = ['mcsinq', 'lsfs', '-delim', '!'] to_append = [] if node_name: to_append += ['-node', '"%s"' % node_name] if fsname: to_append += ['-name', '"%s"' % fsname] if not to_append: to_append += ['-all'] ssh_cmd += to_append return self.run_ssh_inq(ssh_cmd, with_header=True) def addfs(self, fsname, pool_name, size, node_name): """Create a file system on the storage. 
:param fsname: file system name :param pool_name: pool in which to create the file system :param size: file system size in GB :param node_name: the primary node name :return: """ ssh_cmd = ['mcsop', 'addfs', '-name', '"%s"' % fsname, '-pool', '"%s"' % pool_name, '-size', '%dg' % size, '-node', '"%s"' % node_name] self.run_ssh_assert_no_output(ssh_cmd) def rmfs(self, fsname): """Remove the specific file system. :param fsname: file system name to be removed :return: """ ssh_cmd = ['mcsop', 'rmfs', '-name', '"%s"' % fsname] self.run_ssh_assert_no_output(ssh_cmd) def expandfs(self, fsname, size): """Expand the space of the specific file system. :param fsname: file system name :param size: the size(GB) to be expanded, origin + size = result :return: """ ssh_cmd = ['mcsop', 'expandfs', '-name', '"%s"' % fsname, '-size', '%dg' % size] self.run_ssh_assert_no_output(ssh_cmd) # NAS directory operation def lsnasdir(self, dirpath): """List the child directory under dirpath. :param dirpath: the parent directory to list with :return: """ ssh_cmd = ['mcsinq', 'lsnasdir', '-delim', '!', '"%s"' % dirpath] return self.run_ssh_inq(ssh_cmd, with_header=True) def addnasdir(self, dirpath): """Create a new NAS directory indicated by dirpath.""" ssh_cmd = ['mcsop', 'addnasdir', '"%s"' % dirpath] self.run_ssh_assert_no_output(ssh_cmd) def chnasdir(self, old_path, new_path): """Rename the NAS directory name.""" ssh_cmd = ['mcsop', 'chnasdir', '-oldpath', '"%s"' % old_path, '-newpath', '"%s"' % new_path] self.run_ssh_assert_no_output(ssh_cmd) def rmnasdir(self, dirpath): """Remove the specific dirpath.""" ssh_cmd = ['mcsop', 'rmnasdir', '"%s"' % dirpath] self.run_ssh_assert_no_output(ssh_cmd) # NFS operation def rmnfs(self, share_path): """Remove the NFS indicated by path.""" ssh_cmd = ['mcsop', 'rmnfs', '"%s"' % share_path] self.run_ssh_assert_no_output(ssh_cmd) def lsnfslist(self, prefix=None): """List NFS shares on a system.""" ssh_cmd = ['mcsinq', 'lsnfslist', '-delim', '!'] if prefix: ssh_cmd.append('"%s"' % prefix) return self.run_ssh_inq(ssh_cmd, with_header=True) def lsnfsinfo(self, share_path): """List a specific NFS share's information.""" ssh_cmd = ['mcsinq', 'lsnfsinfo', '-delim', '!', '"%s"' % share_path] return self.run_ssh_inq(ssh_cmd, with_header=True) def addnfsclient(self, share_path, client_spec): """Add a client access rule to NFS share. :param share_path: the NFS share path. :param client_spec: IP/MASK:RIGHTS:ALL_SQUASH:ROOT_SQUASH. 
:return: """ ssh_cmd = ['mcsop', 'addnfsclient', '-path', '"%s"' % share_path, '-client', client_spec] self.run_ssh_assert_no_output(ssh_cmd) def chnfsclient(self, share_path, client_spec): """Change a NFS share's client info.""" ssh_cmd = ['mcsop', 'chnfsclient', '-path', '"%s"' % share_path, '-client', client_spec] self.run_ssh_assert_no_output(ssh_cmd) def rmnfsclient(self, share_path, client_spec): """Remove a client info from the NFS share.""" # client_spec parameter for rmnfsclient is IP/MASK, # so we need remove the right part client_spec = client_spec.split(':')[0] ssh_cmd = ['mcsop', 'rmnfsclient', '-path', '"%s"' % share_path, '-client', client_spec] self.run_ssh_assert_no_output(ssh_cmd) # CIFS operation def lscifslist(self, filter=None): """List CIFS shares on the system.""" ssh_cmd = ['mcsinq', 'lscifslist', '-delim', '!'] if filter: ssh_cmd.append('"%s"' % filter) return self.run_ssh_inq(ssh_cmd, with_header=True) def lscifsinfo(self, share_name): """List a specific CIFS share's information.""" ssh_cmd = ['mcsinq', 'lscifsinfo', '-delim', '!', '"%s"' % share_name] return self.run_ssh_inq(ssh_cmd, with_header=True) def addcifs(self, share_name, dirpath, oplocks='off'): """Create a CIFS share with given path.""" ssh_cmd = ['mcsop', 'addcifs', '-name', share_name, '-path', dirpath, '-oplocks', oplocks] self.run_ssh_assert_no_output(ssh_cmd) def rmcifs(self, share_name): """Remove a CIFS share.""" ssh_cmd = ['mcsop', 'rmcifs', share_name] self.run_ssh_assert_no_output(ssh_cmd) def chcifs(self, share_name, oplocks='off'): """Change a CIFS share's attribute. :param share_name: share's name :param oplocks: 'off' or 'on' :return: """ ssh_cmd = ['mcsop', 'chcifs', '-name', share_name, '-oplocks', oplocks] self.run_ssh_assert_no_output(ssh_cmd) def addcifsuser(self, share_name, rights): """Add a user access rule to CIFS share. :param share_name: share's name :param rights: [LU|LG]:xxx:[rw|ro] :return: """ ssh_cmd = ['mcsop', 'addcifsuser', '-name', share_name, '-rights', rights] self.run_ssh_assert_no_output(ssh_cmd) def chcifsuser(self, share_name, rights): """Change a user access rule.""" ssh_cmd = ['mcsop', 'chcifsuser', '-name', share_name, '-rights', rights] self.run_ssh_assert_no_output(ssh_cmd) def rmcifsuser(self, share_name, rights): """Remove CIFS user from a CIFS share.""" # the rights parameter for rmcifsuser is LU:NAME rights = ':'.join(rights.split(':')[0:-1]) ssh_cmd = ['mcsop', 'rmcifsuser', '-name', share_name, '-rights', rights] self.run_ssh_assert_no_output(ssh_cmd) # NAS port ip def lsnasportip(self): """List NAS service port ip address.""" ssh_cmd = ['mcsinq', 'lsnasportip', '-delim', '!'] return self.run_ssh_inq(ssh_cmd, with_header=True) manila-10.0.0/manila/share/drivers/inspur/instorage/instorage.py0000664000175000017500000005206513656750227025001 0ustar zuulzuul00000000000000# Copyright 2019 Inspur Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Driver for Inspur InStorage """ import ipaddress import itertools import six from oslo_config import cfg from oslo_log import log from oslo_utils import units from manila import coordination from manila import exception from manila.i18n import _ from manila.share import driver from manila.share import utils as share_utils from manila.share.drivers.inspur.instorage.cli_helper import InStorageSSH from manila.share.drivers.inspur.instorage.cli_helper import SSHRunner instorage_opts = [ cfg.HostAddressOpt( 'instorage_nas_ip', required=True, help='IP address for the InStorage.' ), cfg.PortOpt( 'instorage_nas_port', default=22, help='Port number for the InStorage.' ), cfg.StrOpt( 'instorage_nas_login', required=True, help='Username for the InStorage.' ), cfg.StrOpt( 'instorage_nas_password', required=True, secret=True, help='Password for the InStorage.' ), cfg.ListOpt( 'instorage_nas_pools', required=True, help='The Storage Pools Manila should use, a comma separated list.' ) ] CONF = cfg.CONF CONF.register_opts(instorage_opts) LOG = log.getLogger(__name__) class InStorageShareDriver(driver.ShareDriver): """Inspur InStorage NAS driver. Allows for NFS and CIFS NAS. .. code::none Version history: 1.0.0 - Initial driver. Driver support: share create/delete extend size update_access protocol: NFS/CIFS """ VENDOR = 'INSPUR' VERSION = '1.0.0' PROTOCOL = 'NFS_CIFS' def __init__(self, *args, **kwargs): super(InStorageShareDriver, self).__init__(False, *args, **kwargs) self.configuration.append_config_values(instorage_opts) self.backend_name = self.configuration.safe_get('share_backend_name') self.backend_pools = self.configuration.instorage_nas_pools self.ssh_runner = SSHRunner(**{ 'host': self.configuration.instorage_nas_ip, 'port': 22, 'login': self.configuration.instorage_nas_login, 'password': self.configuration.instorage_nas_password }) self.assistant = InStorageAssistant(self.ssh_runner) def check_for_setup_error(self): nodes = self.assistant.get_nodes_info() if len(nodes) == 0: msg = _('No valid node, be sure the NAS Port IP is configured') raise exception.ShareBackendException(msg=msg) pools = self.assistant.get_available_pools() not_exist = set(self.backend_pools).difference(set(pools)) if not_exist: msg = _('Pool %s not exist on the storage system') % not_exist raise exception.InvalidParameterValue(msg) def _update_share_stats(self, **kwargs): """Retrieve share stats information.""" try: stats = { 'share_backend_name': self.backend_name, 'vendor_name': self.VENDOR, 'driver_version': self.VERSION, 'storage_protocol': 'NFS_CIFS', 'reserved_percentage': self.configuration.reserved_share_percentage, 'max_over_subscription_ratio': self.configuration.max_over_subscription_ratio, 'snapshot_support': False, 'create_share_from_snapshot_support': False, 'revert_to_snapshot_support': False, 'qos': False, 'total_capacity_gb': 0.0, 'free_capacity_gb': 0.0, 'pools': [] } pools = self.assistant.get_pools_attr(self.backend_pools) total_capacity_gb = 0 free_capacity_gb = 0 for pool in pools.values(): total_capacity_gb += pool['total_capacity_gb'] free_capacity_gb += pool['free_capacity_gb'] stats['pools'].append(pool) stats['total_capacity_gb'] = total_capacity_gb stats['free_capacity_gb'] = free_capacity_gb LOG.debug('share status %s', stats) super(InStorageShareDriver, self)._update_share_stats(stats) except Exception: msg = _('Unexpected error while trying to get the ' 'usage stats from array.') LOG.exception(msg) raise @staticmethod def generate_share_name(share): # Generate a name with id of the share 
as base, and do follows: # 1. Remove the '-' in the id string. # 2. Transform all alpha to lower case. # 3. If the first char of the id is a num, # transform it to an Upper case alpha start from 'A', # such as '0' -> 'A', '1' -> 'B'. # e.g. # generate_share_name({ # 'id': '46CF5E85-D618-4023-8727-6A1EA9292954', # ... # }) # returns 'E6cf5e85d618402387276a1ea9292954' name = share['id'].replace('-', '').lower() if name[0] in '0123456789': name = chr(ord('A') + ord(name[0]) - ord('0')) + name[1:] return name def get_network_allocations_number(self): """Get the number of network interfaces to be created.""" return 0 def create_share(self, context, share, share_server=None): """Create a new share instance.""" share_name = self.generate_share_name(share) share_size = share['size'] share_proto = share['share_proto'] pool_name = share_utils.extract_host(share['host'], level='pool') self.assistant.create_share( share_name, pool_name, share_size, share_proto ) return self.assistant.get_export_locations(share_name, share_proto) def delete_share(self, context, share, share_server=None): """Delete the given share instance.""" share_name = self.generate_share_name(share) share_proto = share['share_proto'] self.assistant.delete_share(share_name, share_proto) def extend_share(self, share, new_size, share_server=None): """Extend the share instance's size to new size.""" share_name = self.generate_share_name(share) self.assistant.extend_share(share_name, new_size) def ensure_share(self, context, share, share_server=None): """Ensure that the share instance is exported.""" share_name = self.generate_share_name(share) share_proto = share['share_proto'] return self.assistant.get_export_locations(share_name, share_proto) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update the share instance's access rule.""" share_name = self.generate_share_name(share) share_proto = share['share_proto'] @coordination.synchronized('inspur-instorage-access-' + share_name) def _update_access(name, proto, rules, add_rules, delete_rules): self.assistant.update_access( name, proto, rules, add_rules, delete_rules ) _update_access( share_name, share_proto, access_rules, add_rules, delete_rules ) class InStorageAssistant(object): NFS_CLIENT_SPEC_PATTERN = ( '%(ip)s/%(mask)s:%(rights)s:%(all_squash)s:%(root_squash)s' ) CIFS_CLIENT_RIGHT_PATTERN = ( '%(type)s:%(name)s:%(rights)s' ) def __init__(self, ssh_runner): self.ssh = InStorageSSH(ssh_runner) @staticmethod def handle_keyerror(cmd, out): msg = (_('Could not find key in output of command %(cmd)s: %(out)s.') % {'out': out, 'cmd': cmd}) raise exception.ShareBackendException(msg=msg) def size_to_gb(self, size): new_size = 0 if 'P' in size: new_size = int(float(size.rstrip('PB')) * units.Mi) elif 'T' in size: new_size = int(float(size.rstrip('TB')) * units.Ki) elif 'G' in size: new_size = int(float(size.rstrip('GB')) * 1) elif 'M' in size: mb_size = float(size.rstrip('MB')) new_size = int((mb_size + units.Ki - 1) / units.Ki) return new_size def get_available_pools(self): nas_pools = self.ssh.lsnaspool() return [pool['pool_name'] for pool in nas_pools] def get_pools_attr(self, backend_pools): pools = {} fs_attr = self.ssh.lsfs() nas_pools = self.ssh.lsnaspool() for pool_attr in nas_pools: pool_name = pool_attr['pool_name'] if pool_name not in backend_pools: continue total_used_capacity = 0 total_allocated_capacity = 0 for fs in fs_attr: if fs['pool_name'] != pool_name: continue allocated = self.size_to_gb(fs['total_capacity']) used = 
self.size_to_gb(fs['used_capacity']) total_allocated_capacity += allocated total_used_capacity += used available = self.size_to_gb(pool_attr['available_capacity']) pool = { 'pool_name': pool_name, 'total_capacity_gb': total_allocated_capacity + available, 'free_capacity_gb': available, 'allocated_capacity_gb': total_allocated_capacity, 'reserved_percentage': 0, 'qos': False, 'dedupe': False, 'compression': False, 'thin_provisioning': False, 'max_over_subscription_ratio': 0 } pools[pool_name] = pool return pools def get_nodes_info(self): """Return a dictionary containing information of system's nodes.""" nodes = {} resp = self.ssh.lsnasportip() for port in resp: try: # Port is invalid if it has no IP configured. if port['ip'] == '': continue node_name = port['node_name'] if node_name not in nodes: nodes[node_name] = {} node = nodes[node_name] node[port['id']] = port except KeyError: self.handle_keyerror('lsnasportip', port) return nodes @staticmethod def get_fsname_by_name(name): return ('%(fsname)s' % {'fsname': name})[0:32] @staticmethod def get_dirname_by_name(name): return ('%(dirname)s' % {'dirname': name})[0:32] def get_dirpath_by_name(self, name): fsname = self.get_fsname_by_name(name) dirname = self.get_dirname_by_name(name) return '/fs/%(fsname)s/%(dirname)s' % { 'fsname': fsname, 'dirname': dirname } def create_share(self, name, pool, size, proto): """Create a share with given info.""" # use one available node as the primary node nodes = self.get_nodes_info() if len(nodes) == 0: msg = _('No valid node, be sure the NAS Port IP is configured') raise exception.ShareBackendException(msg=msg) node_name = [key for key in nodes.keys()][0] # first create the file system on which share will be created fsname = self.get_fsname_by_name(name) self.ssh.addfs(fsname, pool, size, node_name) # then create the directory used for the share dirpath = self.get_dirpath_by_name(name) self.ssh.addnasdir(dirpath) # For CIFS, we need to create a CIFS share. # For NAS, the share is automatically added when the first # 'access spec' is added on it. if proto == 'CIFS': self.ssh.addcifs(name, dirpath) def check_share_exist(self, name): """Check whether the specified share exist on backend.""" fsname = self.get_fsname_by_name(name) for fs in self.ssh.lsfs(): if fs['fs_name'] == fsname: return True return False def delete_share(self, name, proto): """Delete the given share.""" if not self.check_share_exist(name): LOG.warning('Share %s does not exist on the backend.', name) return # For CIFS, we have to delete the share first. # For NAS, when the last client access spec is removed from # it, the share is automatically deleted. if proto == 'CIFS': self.ssh.rmcifs(name) # then delete the directory dirpath = self.get_dirpath_by_name(name) self.ssh.rmnasdir(dirpath) # at last delete the file system fsname = self.get_fsname_by_name(name) self.ssh.rmfs(fsname) def extend_share(self, name, new_size): """Extend a given share to a new size. :param name: the name of the share. :param new_size: the new size the share should be. 
:return: """ # first get the original capacity old_size = None fsname = self.get_fsname_by_name(name) for fs in self.ssh.lsfs(): if fs['fs_name'] == fsname: old_size = self.size_to_gb(fs['total_capacity']) break if old_size is None: msg = _('share %s is not available') % name raise exception.ShareBackendException(msg=msg) LOG.debug('Extend fs %s from %dGB to %dGB', fsname, old_size, new_size) self.ssh.expandfs(fsname, new_size - old_size) def get_export_locations(self, name, share_proto): """Get the export locations of a given share. :param name: the name of the share. :param share_proto: the protocol of the share. :return: a list of export locations. """ if share_proto == 'NFS': dirpath = self.get_dirpath_by_name(name) pattern = '%(ip)s:' + dirpath elif share_proto == 'CIFS': pattern = '\\\\%(ip)s\\' + name else: msg = _('share protocol %s is not supported') % share_proto raise exception.ShareBackendException(msg=msg) # we need get the node so that we know which port ip we can use node_name = None fsname = self.get_fsname_by_name(name) for node in self.ssh.lsnode(): for fs in self.ssh.lsfs(node['name']): if fs['fs_name'] == fsname: node_name = node['name'] break if node_name: break if node_name is None: msg = _('share %s is not available') % name raise exception.ShareBackendException(msg=msg) locations = [] ports = self.ssh.lsnasportip() for port in ports: if port['node_name'] == node_name and port['ip'] != '': location = pattern % {'ip': port['ip']} locations.append({ 'path': location, 'is_admin_only': False, 'metadata': {} }) return locations def classify_nfs_client_spec(self, client_spec, dirpath): nfslist = self.ssh.lsnfslist(dirpath) if len(nfslist): nfsinfo = self.ssh.lsnfsinfo(dirpath) spec_set = set([ self.NFS_CLIENT_SPEC_PATTERN % i for i in nfsinfo ]) else: spec_set = set() client_spec_set = set(client_spec) del_spec = spec_set.difference(client_spec_set) add_spec = client_spec_set.difference(spec_set) return list(add_spec), list(del_spec) def access_rule_to_client_spec(self, access_rule): if access_rule['access_type'] != 'ip': msg = _('only ip access type is supported when using NFS protocol') raise exception.ShareBackendException(msg=msg) network = ipaddress.ip_network(six.text_type(access_rule['access_to'])) if network.version != 4: msg = _('only IPV4 is accepted when using NFS protocol') raise exception.ShareBackendException(msg=msg) client_spec = self.NFS_CLIENT_SPEC_PATTERN % { 'ip': six.text_type(network.network_address), 'mask': six.text_type(network.netmask), 'rights': access_rule['access_level'], 'all_squash': 'all_squash', 'root_squash': 'root_squash' } return client_spec def update_nfs_access(self, share_name, access_rules, add_rules, delete_rules): """Update a NFS share's access rule.""" dirpath = self.get_dirpath_by_name(share_name) if add_rules or delete_rules: add_spec = [ self.access_rule_to_client_spec(r) for r in add_rules ] del_spec = [ self.access_rule_to_client_spec(r) for r in delete_rules ] _, can_del_spec = self.classify_nfs_client_spec( [], dirpath ) to_del_set = set(del_spec) can_del_set = set(can_del_spec) will_del_set = to_del_set.intersection(can_del_set) del_spec = list(will_del_set) else: access_spec = [ self.access_rule_to_client_spec(r) for r in access_rules ] add_spec, del_spec = self.classify_nfs_client_spec( access_spec, dirpath ) for spec in del_spec: self.ssh.rmnfsclient(dirpath, spec) for spec in add_spec: self.ssh.addnfsclient(dirpath, spec) def classify_cifs_rights(self, access_rights, share_name): cifsinfo = 
self.ssh.lscifsinfo(share_name) rights_set = set([ self.CIFS_CLIENT_RIGHT_PATTERN % i for i in cifsinfo ]) access_rights_set = set(access_rights) del_rights = rights_set.difference(access_rights_set) add_rights = access_rights_set.difference(rights_set) return list(add_rights), list(del_rights) def access_rule_to_rights(self, access_rule): if access_rule['access_type'] != 'user': msg = _('only user access type is supported' ' when using CIFS protocol') raise exception.ShareBackendException(msg=msg) rights = self.CIFS_CLIENT_RIGHT_PATTERN % { 'type': 'LU', 'name': access_rule['access_to'], 'rights': access_rule['access_level'] } return rights def update_cifs_access(self, share_name, access_rules, add_rules, delete_rules): """Update a CIFS share's access rule.""" if add_rules or delete_rules: add_rights = [ self.access_rule_to_rights(r) for r in add_rules ] del_rights = [ self.access_rule_to_rights(r) for r in delete_rules ] else: access_rights = [ self.access_rule_to_rights(r) for r in access_rules ] add_rights, del_rights = self.classify_cifs_rights( access_rights, share_name ) for rights in del_rights: self.ssh.rmcifsuser(share_name, rights) for rights in add_rights: self.ssh.addcifsuser(share_name, rights) @staticmethod def check_access_type(access_type, *rules): rule_chain = itertools.chain(*rules) if all([r['access_type'] == access_type for r in rule_chain]): return True else: return False def update_access(self, share_name, share_proto, access_rules, add_rules, delete_rules): if share_proto == 'CIFS': if self.check_access_type('user', access_rules, add_rules, delete_rules): self.update_cifs_access(share_name, access_rules, add_rules, delete_rules) else: msg = _("Only %s access type allowed.") % "user" raise exception.InvalidShareAccess(reason=msg) elif share_proto == 'NFS': if self.check_access_type('ip', access_rules, add_rules, delete_rules): self.update_nfs_access(share_name, access_rules, add_rules, delete_rules) else: msg = _("Only %s access type allowed.") % "ip" raise exception.InvalidShareAccess(reason=msg) else: msg = _('share protocol %s is not supported') % share_proto raise exception.ShareBackendException(msg=msg) manila-10.0.0/manila/share/drivers/inspur/as13000/0000775000175000017500000000000013656750362021420 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/inspur/as13000/__init__.py0000664000175000017500000000000013656750227023517 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/inspur/as13000/as13000_nas.py0000664000175000017500000010176713656750227023636 0ustar zuulzuul00000000000000# Copyright 2018 Inspur Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
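# Illustrative sketch (not part of the driver): the reconciliation that
# classify_nfs_client_spec()/classify_cifs_rights() above perform when a full
# rule set (rather than add/delete deltas) is supplied. Desired and existing
# rules are rendered to their backend string form and compared as sets; the
# function name below is an assumption for the example only.
def classify_specs(desired_specs, existing_specs):
    desired, existing = set(desired_specs), set(existing_specs)
    to_add = desired - existing      # in the request, missing on the backend
    to_delete = existing - desired   # on the backend, absent from the request
    return sorted(to_add), sorted(to_delete)

# Example:
#   classify_specs(
#       ['10.0.0.1/255.255.255.255:rw:all_squash:root_squash'],
#       ['10.0.0.2/255.255.255.255:rw:all_squash:root_squash'])
#   -> adds the 10.0.0.1 spec and removes the 10.0.0.2 spec.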
""" Share driver for Inspur AS13000 """ import eventlet import functools import json import re import requests import six import time from oslo_config import cfg from oslo_log import log as logging from oslo_utils import units from manila import exception from manila.i18n import _ from manila.share import driver from manila.share import utils as share_utils inspur_as13000_opts = [ cfg.HostAddressOpt( 'as13000_nas_ip', required=True, help='IP address for the AS13000 storage.'), cfg.PortOpt( 'as13000_nas_port', default=8088, help='Port number for the AS13000 storage.'), cfg.StrOpt( 'as13000_nas_login', required=True, help='Username for the AS13000 storage'), cfg.StrOpt( 'as13000_nas_password', required=True, secret=True, help='Password for the AS13000 storage'), cfg.ListOpt( 'as13000_share_pools', required=True, help='The Storage Pools Manila should use, a comma separated list'), cfg.IntOpt( 'as13000_token_available_time', default=3600, help='The effective time of token validity in seconds.') ] CONF = cfg.CONF CONF.register_opts(inspur_as13000_opts) LOG = logging.getLogger(__name__) def inspur_driver_debug_trace(f): """Log the method entrance and exit including active backend name. This should only be used on Share_Driver class methods. It depends on having a 'self' argument that is a AS13000_Driver. """ @functools.wraps(f) def wrapper(*args, **kwargs): driver = args[0] cls_name = driver.__class__.__name__ method_name = "%(cls_name)s.%(method)s" % {"cls_name": cls_name, "method": f.__name__} backend_name = driver.configuration.share_backend_name LOG.debug("[%(backend_name)s] Enter %(method_name)s", {"method_name": method_name, "backend_name": backend_name}) result = f(*args, **kwargs) LOG.debug("[%(backend_name)s] Leave %(method_name)s", {"method_name": method_name, "backend_name": backend_name}) return result return wrapper class RestAPIExecutor(object): def __init__(self, hostname, port, username, password): self._hostname = hostname self._port = port self._username = username self._password = password self._token_pool = [] self._token_size = 1 def logins(self): """login the AS13000 and store the token in token_pool""" times = self._token_size while times > 0: token = self.login() self._token_pool.append(token) times = times - 1 LOG.debug('Logged into the AS13000.') def login(self): """login in the AS13000 and return the token""" method = 'security/token' params = {'name': self._username, 'password': self._password} token = self.send_rest_api(method=method, params=params, request_type='post').get('token') return token def logout(self): method = 'security/token' self.send_rest_api(method=method, request_type='delete') def refresh_token(self, force=False): if force is True: for i in range(self._token_size): self._token_pool = [] token = self.login() self._token_pool.append(token) else: for i in range(self._token_size): self.logout() token = self.login() self._token_pool.append(token) LOG.debug('Tokens have been refreshed.') def send_rest_api(self, method, params=None, request_type='post'): attempts = 3 msge = '' while attempts > 0: attempts -= 1 try: return self.send_api(method, params, request_type) except exception.NetworkException as e: msge = six.text_type(e) LOG.error(msge) self.refresh_token(force=True) eventlet.sleep(1) except exception.ShareBackendException as e: msge = six.text_type(e) break msg = (_('Access RestAPI /rest/%(method)s by %(type)s failed,' ' error: %(msge)s') % {'method': method, 'msge': msge, 'type': request_type}) LOG.error(msg) raise 
exception.ShareBackendException(msg) @staticmethod def do_request(cmd, url, header, data): LOG.debug('CMD: %(cmd)s, URL: %(url)s, DATA: %(data)s', {'cmd': cmd, 'url': url, 'data': data}) if cmd == 'post': req = requests.post(url, data=data, headers=header) elif cmd == 'get': req = requests.get(url, data=data, headers=header) elif cmd == 'put': req = requests.put(url, data=data, headers=header) elif cmd == 'delete': req = requests.delete(url, data=data, headers=header) else: msg = (_('Unsupported cmd: %s') % cmd) raise exception.ShareBackendException(msg) response = req.json() code = req.status_code LOG.debug('CODE: %(code)s, RESPONSE: %(response)s', {'code': code, 'response': response}) if code != 200: msg = (_('Code: %(code)s, URL: %(url)s, Message: %(msg)s') % {'code': req.status_code, 'url': req.url, 'msg': req.text}) LOG.error(msg) raise exception.NetworkException(msg) return response def send_api(self, method, params=None, request_type='post'): if params: params = json.dumps(params) url = ('http://%(hostname)s:%(port)s/%(rest)s/%(method)s' % {'hostname': self._hostname, 'port': self._port, 'rest': 'rest', 'method': method}) # header is not needed when the driver login the backend if method == 'security/token': # token won't be return to the token_pool if request_type == 'delete': header = {'X-Auth-Token': self._token_pool.pop(0)} else: header = None else: if len(self._token_pool) == 0: self.logins() token = self._token_pool.pop(0) header = {'X-Auth-Token': token} self._token_pool.append(token) response = self.do_request(request_type, url, header, params) try: code = response.get('code') if code == 0: if request_type == 'get': data = response.get('data') else: if method == 'security/token': data = response.get('data') else: data = response.get('message') data = str(data).lower() if hasattr(data, 'success'): return elif code == 301: msg = _('Token is expired') LOG.error(msg) raise exception.NetworkException(msg) else: message = response.get('message') msg = (_('Unexpected RestAPI response: %(code)d %(msg)s') % { 'code': code, 'msg': message}) LOG.error(msg) raise exception.ShareBackendException(msg) except ValueError: msg = _("Deal with response failed") raise exception.ShareBackendException(msg) return data class AS13000ShareDriver(driver.ShareDriver): """AS13000 Share Driver Version history: V1.0.0: Initial version Driver support: share create/delete, snapshot create/delete, extend size, create_share_from_snapshot, update_access. protocol: NFS/CIFS """ VENDOR = 'INSPUR' VERSION = '1.0.0' PROTOCOL = 'NFS_CIFS' def __init__(self, *args, **kwargs): super(AS13000ShareDriver, self).__init__(False, *args, **kwargs) self.configuration.append_config_values(inspur_as13000_opts) self.hostname = self.configuration.as13000_nas_ip self.port = self.configuration.as13000_nas_port self.username = self.configuration.as13000_nas_login self.password = self.configuration.as13000_nas_password self.token_available_time = (self.configuration. 
as13000_token_available_time) self.pools = self.configuration.as13000_share_pools # base dir detail contain the information which we will use # when we create subdirectorys self.base_dir_detail = None self._token_time = 0 self.ips = [] self._rest = RestAPIExecutor(self.hostname, self.port, self.username, self.password) @inspur_driver_debug_trace def do_setup(self, context): # get access tokens self._rest.logins() self._token_time = time.time() # Check the pool in conf exist in the backend self._validate_pools_exist() # get the base directory detail self.base_dir_detail = self._get_directory_detail(self.pools[0]) # get all backend node ip self.ips = self._get_nodes_ips() @inspur_driver_debug_trace def check_for_setup_error(self): if self.base_dir_detail is None: msg = _('The pool status is not right') raise exception.ShareBackendException(msg) if len(self.ips) == 0: msg = _('All backend nodes are down') raise exception.ShareBackendException(msg) @inspur_driver_debug_trace def create_share(self, context, share, share_server=None): """Create a share.""" pool, name, size, proto = self._get_share_instance_pnsp(share) # create directory first share_path = self._create_directory(share_name=name, pool_name=pool) # then create nfs or cifs share if proto == 'nfs': self._create_nfs_share(share_path=share_path) else: self._create_cifs_share(share_name=name, share_path=share_path) # finally we set the quota of directory self._set_directory_quota(share_path, size) locations = self._get_location_path(name, share_path, proto) LOG.debug('Create share: name:%(name)s' ' protocol:%(proto)s,location: %(loc)s', {'name': name, 'proto': proto, 'loc': locations}) return locations @inspur_driver_debug_trace def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Create a share from snapshot.""" pool, name, size, proto = self._get_share_instance_pnsp(share) # create directory first share_path = self._create_directory(share_name=name, pool_name=pool) # as quota must be set when directory is empty # then we set the quota of directory self._set_directory_quota(share_path, size) # and next clone snapshot to dest_path self._clone_directory_to_dest(snapshot=snapshot, dest_path=share_path) # finally create share if proto == 'nfs': self._create_nfs_share(share_path=share_path) else: self._create_cifs_share(share_name=name, share_path=share_path) locations = self._get_location_path(name, share_path, proto) LOG.debug('Create share from snapshot:' ' name:%(name)s protocol:%(proto)s,location: %(loc)s', {'name': name, 'proto': proto, 'loc': locations}) return locations @inspur_driver_debug_trace def delete_share(self, context, share, share_server=None): """Delete share.""" pool, name, _, proto = self._get_share_instance_pnsp(share) share_path = self._generate_share_path(pool, name) if proto == 'nfs': share_backend = self._get_nfs_share(share_path) if len(share_backend) == 0: return else: self._delete_nfs_share(share_path) else: share_backend = self._get_cifs_share(name) if len(share_backend) == 0: return else: self._delete_cifs_share(name) self._delete_directory(share_path) LOG.debug('Delete share: %s', name) @inspur_driver_debug_trace def extend_share(self, share, new_size, share_server=None): """extend share to new size""" pool, name, size, proto = self._get_share_instance_pnsp(share) share_path = self._generate_share_path(pool, name) self._set_directory_quota(share_path, new_size) LOG.debug('extend share %(name)s to new size %(size)s GB', {'name': name, 'size': new_size}) 
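    # Illustrative note (not part of the driver, values assumed): the export
    # locations returned by create_share()/ensure_share() above contain one
    # entry per backend node IP, formatted by _get_location_path() further
    # below as "ip:/pool/share_name" for NFS and "\\ip\share_name" for CIFS,
    # e.g. [{'path': '10.0.0.11:/pool1/share_foo'},
    #       {'path': '10.0.0.12:/pool1/share_foo'}].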
@inspur_driver_debug_trace def ensure_share(self, context, share, share_server=None): """Ensure that share is exported.""" pool, name, size, proto = self._get_share_instance_pnsp(share) share_path = self._generate_share_path(pool, name) if proto == 'nfs': share_backend = self._get_nfs_share(share_path) elif proto == 'cifs': share_backend = self._get_cifs_share(name) else: msg = (_('Invalid NAS protocol supplied: %s.') % proto) LOG.error(msg) raise exception.InvalidInput(msg) if len(share_backend) == 0: raise exception.ShareResourceNotFound(share_id=share['share_id']) return self._get_location_path(name, share_path, proto) @inspur_driver_debug_trace def create_snapshot(self, context, snapshot, share_server=None): """create snapshot of share""" # !!! Attention the share property is a ShareInstance share = snapshot['share'] pool, share_name, _, _ = self._get_share_instance_pnsp(share) share_path = self._generate_share_path(pool, share_name) snap_name = self._generate_snapshot_name(snapshot) method = 'snapshot/directory' request_type = 'post' params = {'path': share_path, 'snapName': snap_name} self._rest.send_rest_api(method=method, params=params, request_type=request_type) LOG.debug('Create snapshot %(snap)s for share %(share)s', {'snap': snap_name, 'share': share_name}) @inspur_driver_debug_trace def delete_snapshot(self, context, snapshot, share_server=None): """delete snapshot of share""" # !!! Attention the share property is a ShareInstance share = snapshot['share'] pool, share_name, _, _ = self._get_share_instance_pnsp(share) share_path = self._generate_share_path(pool, share_name) # if there are no snapshot exist, driver will return directly snaps_backend = self._get_snapshots_from_share(share_path) if len(snaps_backend) == 0: return snap_name = self._generate_snapshot_name(snapshot) method = ('snapshot/directory?path=%s&snapName=%s' % (share_path, snap_name)) request_type = 'delete' self._rest.send_rest_api(method=method, request_type=request_type) LOG.debug('Delete snapshot %(snap)s of share %(share)s', {'snap': snap_name, 'share': share_name}) @staticmethod def transfer_rule_to_client(proto, rule): """transfer manila access rule to backend client""" access_level = rule['access_level'] if proto == 'cifs' and access_level == 'rw': access_level = 'rwx' return dict(name=rule['access_to'], type=(0 if proto == 'nfs' else 1), authority=access_level) @inspur_driver_debug_trace def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """update access of share""" pool, share_name, _, proto = self._get_share_instance_pnsp(share) share_path = self._generate_share_path(pool, share_name) method = 'file/share/%s' % proto request_type = 'put' params = { 'path': share_path, 'addedClientList': [], 'deletedClientList': [], 'editedClientList': [] } if proto == 'nfs': share_backend = self._get_nfs_share(share_path) params['pathAuthority'] = share_backend['pathAuthority'] else: params['name'] = share_name if add_rules or delete_rules: to_add_clients = [self.transfer_rule_to_client(proto, rule) for rule in add_rules] params['addedClientList'] = to_add_clients to_del_clients = [self.transfer_rule_to_client(proto, rule) for rule in delete_rules] params['deletedClientList'] = to_del_clients else: access_clients = [self.transfer_rule_to_client(proto, rule) for rule in access_rules] params['addedClientList'] = access_clients self._clear_access(share) self._rest.send_rest_api(method=method, params=params, request_type=request_type) LOG.debug('complete the update access 
work for share %s', share_name) @inspur_driver_debug_trace def _update_share_stats(self, data=None): """update the backend stats including driver info and pools info""" # Do a check of the token validity each time we update share stats, # do a refresh if token already expires time_difference = time.time() - self._token_time if time_difference > self.token_available_time: self._rest.refresh_token() self._token_time = time.time() LOG.debug('Token of Driver has been refreshed') data = { 'vendor_name': self.VENDOR, 'driver_version': self.VERSION, 'storage_protocol': self.PROTOCOL, 'share_backend_name': self.configuration.safe_get('share_backend_name'), 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'pools': [self._get_pool_stats(pool) for pool in self.pools] } super(AS13000ShareDriver, self)._update_share_stats(data) @inspur_driver_debug_trace def _clear_access(self, share): """clear all access of share""" pool, share_name, size, proto = self._get_share_instance_pnsp(share) share_path = self._generate_share_path(pool, share_name) method = 'file/share/%s' % proto request_type = 'put' params = { 'path': share_path, 'addedClientList': [], 'deletedClientList': [], 'editedClientList': [] } if proto == 'nfs': share_backend = self._get_nfs_share(share_path) params['deletedClientList'] = share_backend['clientList'] params['pathAuthority'] = share_backend['pathAuthority'] else: share_backend = self._get_cifs_share(share_name) params['deletedClientList'] = share_backend['userList'] params['name'] = share_name self._rest.send_rest_api(method=method, params=params, request_type=request_type) LOG.debug('Clear all the access of share %s', share_name) @inspur_driver_debug_trace def _validate_pools_exist(self): """Check the pool in conf exist in the backend""" available_pools = self._get_directory_list('/') for pool in self.pools: if pool not in available_pools: msg = (_('Pool %s is not exist in backend storage.') % pool) LOG.error(msg) raise exception.InvalidInput(reason=msg) @inspur_driver_debug_trace def _get_directory_quota(self, path): """get the quota of directory""" method = 'file/quota/directory?path=/%s' % path request_type = 'get' data = self._rest.send_rest_api(method=method, request_type=request_type) quota = data.get('hardthreshold') if quota is None: # the method of '_update_share_stats' will check quota of pools. 
# To avoid return NONE for pool info, so raise this exception msg = (_(r'Quota of pool: /%s is not set, ' r'please set it in GUI of AS13000') % path) LOG.error(msg) raise exception.ShareBackendException(msg=msg) hardunit = data.get('hardunit') used_capacity = data.get('capacity') used_capacity = (str(used_capacity)).upper() used_capacity = self._unit_convert(used_capacity) if hardunit == 1: quota = quota * 1024 total_capacity = int(quota) used_capacity = int(used_capacity) return total_capacity, used_capacity def _get_pool_stats(self, path): """Get the stats of pools, such as capacity and other information.""" total_capacity, used_capacity = self._get_directory_quota(path) free_capacity = total_capacity - used_capacity pool = { 'pool_name': path, 'reserved_percentage': self.configuration.reserved_share_percentage, 'max_over_subscription_ratio': self.configuration.max_over_subscription_ratio, 'dedupe': False, 'compression': False, 'qos': False, 'thin_provisioning': True, 'total_capacity_gb': total_capacity, 'free_capacity_gb': free_capacity, 'allocated_capacity_gb': used_capacity, 'snapshot_support': True, 'create_share_from_snapshot_support': True } return pool @inspur_driver_debug_trace def _get_directory_list(self, path): """Get all the directory list of target path""" method = 'file/directory?path=%s' % path request_type = 'get' directory_list = self._rest.send_rest_api(method=method, request_type=request_type) dir_list = [] for directory in directory_list: dir_list.append(directory['name']) return dir_list @inspur_driver_debug_trace def _create_directory(self, share_name, pool_name): """create a directory for share""" method = 'file/directory' request_type = 'post' params = {'name': share_name, 'parentPath': self.base_dir_detail['path'], 'authorityInfo': self.base_dir_detail['authorityInfo'], 'dataProtection': self.base_dir_detail['dataProtection'], 'poolName': self.base_dir_detail['poolName']} self._rest.send_rest_api(method=method, params=params, request_type=request_type) return self._generate_share_path(pool_name, share_name) @inspur_driver_debug_trace def _delete_directory(self, share_path): """delete the directory when delete share""" method = 'file/directory?path=%s' % share_path request_type = 'delete' self._rest.send_rest_api(method=method, request_type=request_type) @inspur_driver_debug_trace def _set_directory_quota(self, share_path, quota): """set directory quota for share""" method = 'file/quota/directory' request_type = 'put' params = {'path': share_path, 'hardthreshold': quota, 'hardunit': 2} self._rest.send_rest_api(method=method, params=params, request_type=request_type) @inspur_driver_debug_trace def _create_nfs_share(self, share_path): """create a NFS share""" method = 'file/share/nfs' request_type = 'post' params = {'path': share_path, 'pathAuthority': 'rw', 'client': []} self._rest.send_rest_api(method=method, params=params, request_type=request_type) @inspur_driver_debug_trace def _delete_nfs_share(self, share_path): """Delete the NFS share""" method = 'file/share/nfs?path=%s' % share_path request_type = 'delete' self._rest.send_rest_api(method=method, request_type=request_type) @inspur_driver_debug_trace def _get_nfs_share(self, share_path): """Get the nfs share in backend""" method = 'file/share/nfs?path=%s' % share_path request_type = 'get' share_backend = self._rest.send_rest_api(method=method, request_type=request_type) return share_backend @inspur_driver_debug_trace def _create_cifs_share(self, share_name, share_path): """Create a CIFS share.""" method = 
'file/share/cifs' request_type = 'post' params = {'path': share_path, 'name': share_name, 'userlist': []} self._rest.send_rest_api(method=method, params=params, request_type=request_type) @inspur_driver_debug_trace def _delete_cifs_share(self, share_name): """Delete the CIFS share.""" method = 'file/share/cifs?name=%s' % share_name request_type = 'delete' self._rest.send_rest_api(method=method, request_type=request_type) @inspur_driver_debug_trace def _get_cifs_share(self, share_name): """Get the CIFS share in backend""" method = 'file/share/cifs?name=%s' % share_name request_type = 'get' share_backend = self._rest.send_rest_api(method=method, request_type=request_type) return share_backend @inspur_driver_debug_trace def _clone_directory_to_dest(self, snapshot, dest_path): """Clone the directory to the new directory""" # get the origin share name of the snapshot share_instance = snapshot['share_instance'] pool, name, _, _ = self._get_share_instance_pnsp(share_instance) share_path = self._generate_share_path(pool, name) # get the snapshot instance name snap_name = self._generate_snapshot_name(snapshot) method = 'snapshot/directory/clone' request_type = 'post' params = {'path': share_path, 'snapName': snap_name, 'destPath': dest_path} self._rest.send_rest_api(method=method, params=params, request_type=request_type) LOG.debug('Clone Path: %(path)s Snapshot: %(snap)s to Path %(dest)s', {'path': share_path, 'snap': snap_name, 'dest': dest_path}) @inspur_driver_debug_trace def _get_snapshots_from_share(self, path): """get all the snapshot of share""" method = 'snapshot/directory?path=%s' % path request_type = 'get' snaps = self._rest.send_rest_api(method=method, request_type=request_type) return snaps @inspur_driver_debug_trace def _get_location_path(self, share_name, share_path, share_proto): """return all the location of all nodes""" if share_proto == 'nfs': location = [ {'path': r'%(ip)s:%(share_path)s' % {'ip': ip, 'share_path': share_path}} for ip in self.ips] else: location = [ {'path': r'\\%(ip)s\%(share_name)s' % {'ip': ip, 'share_name': share_name}} for ip in self.ips] return location def _get_nodes_virtual_ips(self): """Get the virtual ip list of the node""" method = 'ctdb/set' request_type = 'get' ctdb_set = self._rest.send_rest_api(method=method, request_type=request_type) virtual_ips = [] for vip in ctdb_set['virtualIpList']: ip = vip['ip'].split('/')[0] virtual_ips.append(ip) return virtual_ips def _get_nodes_physical_ips(self): """Get the physical ip of all the backend nodes""" method = 'cluster/node/cache' request_type = 'get' cached_nodes = self._rest.send_rest_api(method=method, request_type=request_type) node_ips = [] for node in cached_nodes: if node['runningStatus'] == 1 and node['healthStatus'] == 1: node_ips.append(node['nodeIp']) return node_ips def _get_nodes_ips(self): """Return both the physical ip and virtual ip""" virtual_ips = self._get_nodes_virtual_ips() physical_ips = self._get_nodes_physical_ips() return virtual_ips + physical_ips def _get_share_instance_pnsp(self, share_instance): """Get pool, name, size, proto information of a share instance. AS13000 require all the names can only consist of letters,numbers, and undercores,and must begin with a letter. Also the length of name must less than 32 character. 
The driver will use the ID as the name in backend, add 'share_' to the beginning,and convert '-' to '_' """ pool = share_utils.extract_host(share_instance['host'], level='pool') name = self._generate_share_name(share_instance) # a share instance may not contain size attr. try: size = share_instance['size'] except AttributeError: size = None # a share instance may not contain proto attr. try: proto = share_instance['share_proto'].lower() except AttributeError: proto = None LOG.debug("Pool %s, Name: %s, Size: %s, Protocol: %s", pool, name, size, proto) return pool, name, size, proto def _unit_convert(self, capacity): """Convert all units to GB""" capacity = str(capacity) capacity = capacity.upper() try: unit_of_used = re.findall(r'[A-Z]', capacity) unit_of_used = ''.join(unit_of_used) except BaseException: unit_of_used = '' capacity = capacity.replace(unit_of_used, '') capacity = float(capacity.replace(unit_of_used, '')) if unit_of_used in ['B', '']: capacity = capacity / units.Gi elif unit_of_used in ['K', 'KB']: capacity = capacity / units.Mi elif unit_of_used in ['M', 'MB']: capacity = capacity / units.Ki elif unit_of_used in ['G', 'GB']: capacity = capacity elif unit_of_used in ['T', 'TB']: capacity = capacity * units.Ki elif unit_of_used in ['E', 'EB']: capacity = capacity * units.Mi capacity = '%.0f' % capacity return float(capacity) def _format_name(self, name): """format name to meet the backend requirements""" name = name[0:32] name = name.replace('-', '_') return name def _generate_share_name(self, share_instance): share_name = 'share_%s' % share_instance['id'] return self._format_name(share_name) def _generate_snapshot_name(self, snapshot_instance): snap_name = 'snap_%s' % snapshot_instance['id'] return self._format_name(snap_name) @staticmethod def _generate_share_path(pool, share_name): return r'/%s/%s' % (pool, share_name) def _get_directory_detail(self, directory): method = 'file/directory/detail?path=/%s' % directory request_type = 'get' details = self._rest.send_rest_api(method=method, request_type=request_type) return details[0] manila-10.0.0/manila/share/drivers/quobyte/0000775000175000017500000000000013656750362020601 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/quobyte/__init__.py0000664000175000017500000000000013656750227022700 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/quobyte/quobyte.py0000664000175000017500000004206213656750227022647 0ustar zuulzuul00000000000000# Copyright (c) 2015 Quobyte Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Quobyte driver. Manila shares are directly mapped to Quobyte volumes. The access to the shares is provided by the Quobyte NFS proxy (a Ganesha NFS server). 
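Only NFS shares with IP-based access rules are supported; the driver talks
to the backend through the Quobyte JSON RPC API (see jsonrpc.py) and
enforces share sizes with logical disk space quotas.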
""" import math from oslo_config import cfg from oslo_log import log from oslo_utils import units import six from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.quobyte import jsonrpc LOG = log.getLogger(__name__) quobyte_manila_share_opts = [ cfg.StrOpt('quobyte_api_url', help='URL of the Quobyte API server (http or https)'), cfg.StrOpt('quobyte_api_ca', help='The X.509 CA file to verify the server cert.'), cfg.BoolOpt('quobyte_delete_shares', default=False, help='Actually deletes shares (vs. unexport)'), cfg.StrOpt('quobyte_api_username', default='admin', help='Username for Quobyte API server.'), cfg.StrOpt('quobyte_api_password', default='quobyte', secret=True, help='Password for Quobyte API server'), cfg.StrOpt('quobyte_volume_configuration', default='BASE', help='Name of volume configuration used for new shares.'), cfg.StrOpt('quobyte_default_volume_user', default='root', help='Default owning user for new volumes.'), cfg.StrOpt('quobyte_default_volume_group', default='root', help='Default owning group for new volumes.'), cfg.StrOpt('quobyte_export_path', default='/quobyte', help='Export path for shares of this bacckend. This needs ' 'to match the quobyte-nfs services "Pseudo" option.'), ] CONF = cfg.CONF CONF.register_opts(quobyte_manila_share_opts) class QuobyteShareDriver(driver.ExecuteMixin, driver.ShareDriver,): """Map share commands to Quobyte volumes. Version history: 1.0 - Initial driver. 1.0.1 - Adds ensure_share() implementation. 1.1 - Adds extend_share() and shrink_share() implementation. 1.2 - Adds update_access() implementation and related methods 1.2.1 - Improved capacity calculation 1.2.2 - Minor optimizations 1.2.3 - Updated RPC layer for improved stability 1.2.4 - Fixed handling updated QB API error codes 1.2.5 - Fixed two quota handling bugs 1.2.6 - Fixed volume resize and jsonrpc code style bugs 1.2.7 - Add quobyte_export_path option """ DRIVER_VERSION = '1.2.7' def __init__(self, *args, **kwargs): super(QuobyteShareDriver, self).__init__(False, *args, **kwargs) self.configuration.append_config_values(quobyte_manila_share_opts) self.backend_name = (self.configuration.safe_get('share_backend_name') or CONF.share_backend_name or 'Quobyte') def _fetch_existing_access(self, context, share): volume_uuid = self._resolve_volume_name(share['name'], share['project_id']) result = self.rpc.call('getConfiguration', {}) if result is None: raise exception.QBException( "Could not retrieve Quobyte configuration data!") tenant_configs = result['tenant_configuration'] qb_access_list = [] for tc in tenant_configs: for va in tc['volume_access']: if va['volume_uuid'] == volume_uuid: a_level = constants.ACCESS_LEVEL_RW if va['read_only']: a_level = constants.ACCESS_LEVEL_RO qb_access_list.append({ 'access_to': va['restrict_to_network'], 'access_level': a_level, 'access_type': 'ip' }) return qb_access_list def do_setup(self, context): """Prepares the backend.""" self.rpc = jsonrpc.JsonRpc( url=self.configuration.quobyte_api_url, ca_file=self.configuration.quobyte_api_ca, user_credentials=( self.configuration.quobyte_api_username, self.configuration.quobyte_api_password)) try: self.rpc.call('getInformation', {}) except Exception as exc: LOG.error("Could not connect to API: %s", exc) raise exception.QBException( _('Could not connect to API: %s') % exc) def _update_share_stats(self): total_gb, free_gb = self._get_capacities() data = dict( storage_protocol='NFS', vendor_name='Quobyte', 
share_backend_name=self.backend_name, driver_version=self.DRIVER_VERSION, total_capacity_gb=total_gb, free_capacity_gb=free_gb, reserved_percentage=self.configuration.reserved_share_percentage) super(QuobyteShareDriver, self)._update_share_stats(data) def _get_capacities(self): result = self.rpc.call('getSystemStatistics', {}) total = float(result['total_physical_capacity']) used = float(result['total_physical_usage']) LOG.info('Read capacity of %(cap)s bytes and ' 'usage of %(use)s bytes from backend. ', {'cap': total, 'use': used}) free = total - used if free < 0: free = 0 # no space available free_replicated = free / self._get_qb_replication_factor() # floor numbers to nine digits (bytes) total = math.floor((total / units.Gi) * units.G) / units.G free = math.floor((free_replicated / units.Gi) * units.G) / units.G return total, free def _get_qb_replication_factor(self): result = self.rpc.call('getEffectiveVolumeConfiguration', {'configuration_name': self. configuration.quobyte_volume_configuration}) return int(result['configuration']['volume_metadata_configuration'] ['replication_factor']) def check_for_setup_error(self): pass def get_network_allocations_number(self): return 0 def _get_project_name(self, context, project_id): """Retrieve the project name. TODO (kaisers): retrieve the project name in order to store and use in the backend for better usability. """ return project_id def _resize_share(self, share, new_size): newsize_bytes = new_size * units.Gi self.rpc.call('setQuota', {"quotas": [ {"consumer": [{"type": "VOLUME", "identifier": self._resolve_volume_name(share["name"], share['project_id']), "tenant_id": share["project_id"]}], "limits": [{"type": "LOGICAL_DISK_SPACE", "value": newsize_bytes}]} ]}) def _resolve_volume_name(self, volume_name, tenant_domain): """Resolve a volume name to the global volume uuid.""" result = self.rpc.call('resolveVolumeName', dict( volume_name=volume_name, tenant_domain=tenant_domain), [jsonrpc.ERROR_ENOENT, jsonrpc.ERROR_ENTITY_NOT_FOUND]) if result: return result['volume_uuid'] return None # not found def _subtract_access_lists(self, list_a, list_b): """Returns a list of elements in list_a that are not in list_b :param list_a: Base list of access rules :param list_b: List of access rules not to be returned :return: List of elements of list_a not present in list_b """ sub_tuples_list = [{"to": s.get('access_to'), "type": s.get('access_type'), "level": s.get('access_level')} for s in list_b] return [r for r in list_a if ( {"to": r.get("access_to"), "type": r.get("access_type"), "level": r.get("access_level")} not in sub_tuples_list)] def create_share(self, context, share, share_server=None): """Create or export a volume that is usable as a Manila share.""" if share['share_proto'] != 'NFS': raise exception.QBException( _('Quobyte driver only supports NFS shares')) volume_uuid = self._resolve_volume_name(share['name'], share['project_id']) if not volume_uuid: # create tenant, expect ERROR_GARBAGE_ARGS if it already exists self.rpc.call('setTenant', dict(tenant=dict(tenant_id=share['project_id'])), expected_errors=[jsonrpc.ERROR_GARBAGE_ARGS]) result = self.rpc.call('createVolume', dict( name=share['name'], tenant_domain=share['project_id'], root_user_id=self.configuration.quobyte_default_volume_user, root_group_id=self.configuration.quobyte_default_volume_group, configuration_name=(self.configuration. 
quobyte_volume_configuration))) volume_uuid = result['volume_uuid'] result = self.rpc.call('exportVolume', dict( volume_uuid=volume_uuid, protocol='NFS')) self._resize_share(share, share['size']) return self._build_share_export_string(result) def delete_share(self, context, share, share_server=None): """Delete the corresponding Quobyte volume.""" volume_uuid = self._resolve_volume_name(share['name'], share['project_id']) if not volume_uuid: LOG.warning("No volume found for " "share %(project_id)s/%(name)s", {"project_id": share['project_id'], "name": share['name']}) return if self.configuration.quobyte_delete_shares: self.rpc.call('deleteVolume', {'volume_uuid': volume_uuid}) else: self.rpc.call('exportVolume', {"volume_uuid": volume_uuid, "remove_export": True, }) def ensure_share(self, context, share, share_server=None): """Invoked to ensure that share is exported. :param context: The `context.RequestContext` object for the request :param share: Share instance that will be checked. :param share_server: Data structure with share server information. Not used by this driver. :returns: IP: of share :raises: :ShareResourceNotFound: If the share instance cannot be found in the backend """ volume_uuid = self._resolve_volume_name(share['name'], share['project_id']) LOG.debug("Ensuring Quobyte share %s", share['name']) if not volume_uuid: raise (exception.ShareResourceNotFound( share_id=share['id'])) result = self.rpc.call('exportVolume', dict( volume_uuid=volume_uuid, protocol='NFS')) return self._build_share_export_string(result) def _allow_access(self, context, share, access, share_server=None): """Allow access to a share.""" if access['access_type'] != 'ip': raise exception.InvalidShareAccess( _('Quobyte driver only supports ip access control')) volume_uuid = self._resolve_volume_name(share['name'], share['project_id']) ro = access['access_level'] == (constants.ACCESS_LEVEL_RO) call_params = { "volume_uuid": volume_uuid, "read_only": ro, "add_allow_ip": access['access_to']} self.rpc.call('exportVolume', call_params) def _build_share_export_string(self, rpc_result): return '%(nfs_server_ip)s:%(qb_exp_path)s%(nfs_export_path)s' % { "nfs_server_ip": rpc_result["nfs_server_ip"], "qb_exp_path": self.configuration.quobyte_export_path, "nfs_export_path": rpc_result["nfs_export_path"]} def _deny_access(self, context, share, access, share_server=None): """Remove white-list ip from a share.""" if access['access_type'] != 'ip': LOG.debug('Quobyte driver only supports ip access control. ' 'Ignoring deny access call for %s , %s', share['name'], self._get_project_name(context, share['project_id'])) return volume_uuid = self._resolve_volume_name(share['name'], share['project_id']) call_params = { "volume_uuid": volume_uuid, "remove_allow_ip": access['access_to']} self.rpc.call('exportVolume', call_params) def extend_share(self, ext_share, ext_size, share_server=None): """Uses _resize_share to extend a share. :param ext_share: Share model. :param ext_size: New size of share (new_size > share['size']). :param share_server: Currently not used. """ self._resize_share(share=ext_share, new_size=ext_size) def shrink_share(self, shrink_share, shrink_size, share_server=None): """Uses _resize_share to shrink a share. Quobyte uses soft quotas. If a shares current size is bigger than the new shrunken size no data is lost. Data can be continuously read from the share but new writes receive out of disk space replies. :param shrink_share: Share model. :param shrink_size: New size of share (new_size < share['size']). 
:param share_server: Currently not used. """ self._resize_share(share=shrink_share, new_size=shrink_size) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share. Two different cases are supported in here: 1. Recovery after error - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' are empty. Driver should apply all access rules for given share. 2. Adding/Deleting of several access rules - 'access_rules' contains all access_rules, 'add_rules' and 'delete_rules' contain rules which should be added/deleted. Driver can ignore rules in 'access_rules' and apply only rules from 'add_rules' and 'delete_rules'. :param context: Current context :param share: Share model with share data. :param access_rules: All access rules for given share :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. :param share_server: None or Share server model :raises If all of the *_rules params are None the method raises an InvalidShareAccess exception """ if (add_rules or delete_rules): # Handling access rule update for d_rule in delete_rules: self._deny_access(context, share, d_rule) for a_rule in add_rules: self._allow_access(context, share, a_rule) else: if not access_rules: LOG.warning("No access rules provided in update_access.") else: # Handling access rule recovery existing_rules = self._fetch_existing_access(context, share) missing_rules = self._subtract_access_lists(access_rules, existing_rules) for a_rule in missing_rules: LOG.debug("Adding rule %s in recovery.", six.text_type(a_rule)) self._allow_access(context, share, a_rule) superfluous_rules = self._subtract_access_lists(existing_rules, access_rules) for d_rule in superfluous_rules: LOG.debug("Removing rule %s in recovery.", six.text_type(d_rule)) self._deny_access(context, share, d_rule) manila-10.0.0/manila/share/drivers/quobyte/jsonrpc.py0000664000175000017500000001076213656750227022637 0ustar zuulzuul00000000000000# Copyright (c) 2015 Quobyte Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Quobyte driver helper. Control Quobyte over its JSON RPC API. 
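Each request is a JSON RPC 2.0 call sent over HTTP(S) with basic
authentication, for example (illustrative payload only):

    {'jsonrpc': '2.0', 'method': 'getInformation',
     'params': {'retry': 'INFINITELY'}, 'id': '1'}

Callers may pass a list of expected error codes; a response carrying one of
those codes is treated as an empty result (None) instead of raising an
exception.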
""" import requests from requests import auth from requests import codes from oslo_log import log from oslo_serialization import jsonutils import six import six.moves.urllib.parse as urlparse from manila import exception from manila import utils LOG = log.getLogger(__name__) ERROR_ENOENT = 2 ERROR_ENTITY_NOT_FOUND = -24 ERROR_GARBAGE_ARGS = -3 class JsonRpc(object): def __init__(self, url, user_credentials, ca_file=None, key_file=None, cert_file=None): parsedurl = urlparse.urlparse(url) self._url = parsedurl.geturl() self._netloc = parsedurl.netloc self._ca_file = ca_file self._url_scheme = parsedurl.scheme if self._url_scheme == 'https': if not self._ca_file: self._ca_file = False LOG.warning( "Will not verify the server certificate of the API service" " because the CA certificate is not available.") self._id = 0 self._credentials = auth.HTTPBasicAuth( user_credentials[0], user_credentials[1]) self._key_file = key_file self._cert_file = cert_file @utils.synchronized('quobyte-request') def call(self, method_name, user_parameters, expected_errors=None): if expected_errors is None: expected_errors = [] # prepare request self._id += 1 parameters = {'retry': 'INFINITELY'} # Backend specific setting if user_parameters: parameters.update(user_parameters) post_data = { 'jsonrpc': '2.0', 'method': method_name, 'params': parameters, 'id': six.text_type(self._id), } LOG.debug("Request payload to be send is: %s", jsonutils.dumps(post_data)) # send request if self._url_scheme == 'https': if self._cert_file: result = requests.post(url=self._url, json=post_data, auth=self._credentials, verify=self._ca_file, cert=(self._cert_file, self._key_file)) else: result = requests.post(url=self._url, json=post_data, auth=self._credentials, verify=self._ca_file) else: result = requests.post(url=self._url, json=post_data, auth=self._credentials) # eval request response if result.status_code == codes['OK']: LOG.debug("Retrieved data from Quobyte backend: %s", result.text) response = result.json() return self._checked_for_application_error(response, expected_errors) # If things did not work out provide error info LOG.debug("Backend request resulted in error: %s", result.text) result.raise_for_status() def _checked_for_application_error(self, result, expected_errors=None): if expected_errors is None: expected_errors = [] if 'error' in result and result['error']: if 'message' in result['error'] and 'code' in result['error']: if result["error"]["code"] in expected_errors: # hit an expected error, return empty result return None else: raise exception.QBRpcException( result=result["error"]["message"], qbcode=result["error"]["code"]) else: raise exception.QBException(six.text_type(result["error"])) return result["result"] manila-10.0.0/manila/share/drivers/hpe/0000775000175000017500000000000013656750362017665 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/hpe/__init__.py0000664000175000017500000000000013656750227021764 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/hpe/hpe_3par_mediator.py0000664000175000017500000021074213656750227023632 0ustar zuulzuul00000000000000# Copyright 2015 Hewlett Packard Enterprise Development LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """HPE 3PAR Mediator for OpenStack Manila. This 'mediator' de-couples the 3PAR focused client from the OpenStack focused driver. """ from oslo_log import log from oslo_utils import importutils from oslo_utils import units import six from manila.data import utils as data_utils from manila import exception from manila.i18n import _ from manila import utils hpe3parclient = importutils.try_import("hpe3parclient") if hpe3parclient: from hpe3parclient import file_client # pylint: disable=import-error LOG = log.getLogger(__name__) MIN_CLIENT_VERSION = (4, 0, 0) DENY = '-' ALLOW = '+' OPEN_STACK_MANILA = 'OpenStack Manila' FULL = 1 THIN = 2 DEDUPE = 6 ENABLED = 1 DISABLED = 2 CACHE = 'cache' CONTINUOUS_AVAIL = 'continuous_avail' ACCESS_BASED_ENUM = 'access_based_enum' SMB_EXTRA_SPECS_MAP = { CACHE: CACHE, CONTINUOUS_AVAIL: 'ca', ACCESS_BASED_ENUM: 'abe', } IP_ALREADY_EXISTS = 'IP address %s already exists' USER_ALREADY_EXISTS = '"allow" permission already exists for "%s"' DOES_NOT_EXIST = 'does not exist, cannot' LOCAL_IP = '127.0.0.1' LOCAL_IP_RO = '127.0.0.2' SUPER_SHARE = 'OPENSTACK_SUPER_SHARE' TMP_RO_SNAP_EXPORT = "Temp RO snapshot export as source for creating RW share." class HPE3ParMediator(object): """3PAR client-facing code for the 3PAR driver. Version history: 1.0.0 - Begin Liberty development (post-Kilo) 1.0.1 - Report thin/dedup/hp_flash_cache capabilities 1.0.2 - Add share server/share network support 1.0.3 - Use hp3par prefix for share types and capabilities 2.0.0 - Rebranded HP to HPE 2.0.1 - Add access_level (e.g. 
read-only support) 2.0.2 - Add extend/shrink 2.0.3 - Fix SMB read-only access (added in 2.0.1) 2.0.4 - Remove file tree on delete when using nested shares #1538800 2.0.5 - Reduce the fsquota by share size when a share is deleted #1582931 2.0.6 - Read-write share from snapshot (using driver mount and copy) 2.0.7 - Add update_access support 2.0.8 - Multi pools support per backend 2.0.9 - Fix get_vfs() to correctly validate conf IP addresses at boot up #1621016 """ VERSION = "2.0.9" def __init__(self, **kwargs): self.hpe3par_username = kwargs.get('hpe3par_username') self.hpe3par_password = kwargs.get('hpe3par_password') self.hpe3par_api_url = kwargs.get('hpe3par_api_url') self.hpe3par_debug = kwargs.get('hpe3par_debug') self.hpe3par_san_ip = kwargs.get('hpe3par_san_ip') self.hpe3par_san_login = kwargs.get('hpe3par_san_login') self.hpe3par_san_password = kwargs.get('hpe3par_san_password') self.hpe3par_san_ssh_port = kwargs.get('hpe3par_san_ssh_port') self.hpe3par_san_private_key = kwargs.get('hpe3par_san_private_key') self.hpe3par_fstore_per_share = kwargs.get('hpe3par_fstore_per_share') self.hpe3par_require_cifs_ip = kwargs.get('hpe3par_require_cifs_ip') self.hpe3par_cifs_admin_access_username = ( kwargs.get('hpe3par_cifs_admin_access_username')) self.hpe3par_cifs_admin_access_password = ( kwargs.get('hpe3par_cifs_admin_access_password')) self.hpe3par_cifs_admin_access_domain = ( kwargs.get('hpe3par_cifs_admin_access_domain')) self.hpe3par_share_mount_path = kwargs.get('hpe3par_share_mount_path') self.my_ip = kwargs.get('my_ip') self.ssh_conn_timeout = kwargs.get('ssh_conn_timeout') self._client = None self.client_version = None @staticmethod def no_client(): return hpe3parclient is None def do_setup(self): if self.no_client(): msg = _('You must install hpe3parclient before using the 3PAR ' 'driver. Run "pip install --upgrade python-3parclient" ' 'to upgrade the hpe3parclient.') LOG.error(msg) raise exception.HPE3ParInvalidClient(message=msg) self.client_version = hpe3parclient.version_tuple if self.client_version < MIN_CLIENT_VERSION: msg = (_('Invalid hpe3parclient version found (%(found)s). ' 'Version %(minimum)s or greater required. 
Run "pip' ' install --upgrade python-3parclient" to upgrade' ' the hpe3parclient.') % {'found': '.'.join(map(six.text_type, self.client_version)), 'minimum': '.'.join(map(six.text_type, MIN_CLIENT_VERSION))}) LOG.error(msg) raise exception.HPE3ParInvalidClient(message=msg) try: self._client = file_client.HPE3ParFilePersonaClient( self.hpe3par_api_url) except Exception as e: msg = (_('Failed to connect to HPE 3PAR File Persona Client: %s') % six.text_type(e)) LOG.exception(msg) raise exception.ShareBackendException(message=msg) try: ssh_kwargs = {} if self.hpe3par_san_ssh_port: ssh_kwargs['port'] = self.hpe3par_san_ssh_port if self.ssh_conn_timeout: ssh_kwargs['conn_timeout'] = self.ssh_conn_timeout if self.hpe3par_san_private_key: ssh_kwargs['privatekey'] = self.hpe3par_san_private_key self._client.setSSHOptions( self.hpe3par_san_ip, self.hpe3par_san_login, self.hpe3par_san_password, **ssh_kwargs ) except Exception as e: msg = (_('Failed to set SSH options for HPE 3PAR File Persona ' 'Client: %s') % six.text_type(e)) LOG.exception(msg) raise exception.ShareBackendException(message=msg) LOG.info("HPE3ParMediator %(version)s, " "hpe3parclient %(client_version)s", {"version": self.VERSION, "client_version": hpe3parclient.get_version_string()}) try: wsapi_version = self._client.getWsApiVersion()['build'] LOG.info("3PAR WSAPI %s", wsapi_version) except Exception as e: msg = (_('Failed to get 3PAR WSAPI version: %s') % six.text_type(e)) LOG.exception(msg) raise exception.ShareBackendException(message=msg) if self.hpe3par_debug: self._client.debug_rest(True) # Includes SSH debug (setSSH above) def _wsapi_login(self): try: self._client.login(self.hpe3par_username, self.hpe3par_password) except Exception as e: msg = (_("Failed to Login to 3PAR (%(url)s) as %(user)s " "because: %(err)s") % {'url': self.hpe3par_api_url, 'user': self.hpe3par_username, 'err': six.text_type(e)}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) def _wsapi_logout(self): try: self._client.http.unauthenticate() except Exception as e: msg = ("Failed to Logout from 3PAR (%(url)s) because %(err)s") LOG.warning(msg, {'url': self.hpe3par_api_url, 'err': six.text_type(e)}) # don't raise exception on logout() @staticmethod def build_export_locations(protocol, ips, path): if not ips: message = _('Failed to build export location due to missing IP.') raise exception.InvalidInput(reason=message) if not path: message = _('Failed to build export location due to missing path.') raise exception.InvalidInput(reason=message) share_proto = HPE3ParMediator.ensure_supported_protocol(protocol) if share_proto == 'nfs': return ['%s:%s' % (ip, path) for ip in ips] else: return [r'\\%s\%s' % (ip, path) for ip in ips] def get_provisioned_gb(self, fpg): total_mb = 0 try: result = self._client.getfsquota(fpg=fpg) except Exception as e: result = {'message': six.text_type(e)} error_msg = result.get('message') if error_msg: message = (_('Error while getting fsquotas for FPG ' '%(fpg)s: %(msg)s') % {'fpg': fpg, 'msg': error_msg}) LOG.error(message) raise exception.ShareBackendException(msg=message) for fsquota in result['members']: total_mb += float(fsquota['hardBlock']) return total_mb / units.Ki def get_fpg_status(self, fpg): """Get capacity and capabilities for FPG.""" try: result = self._client.getfpg(fpg) except Exception as e: msg = (_('Failed to get capacity for fpg %(fpg)s: %(e)s') % {'fpg': fpg, 'e': six.text_type(e)}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) if result['total'] != 1: msg = (_('Failed to get 
capacity for fpg %s.') % fpg) LOG.error(msg) raise exception.ShareBackendException(msg=msg) member = result['members'][0] total_capacity_gb = float(member['capacityKiB']) / units.Mi free_capacity_gb = float(member['availCapacityKiB']) / units.Mi volumes = member['vvs'] if isinstance(volumes, list): volume = volumes[0] # Use first name from list else: volume = volumes # There is just a name self._wsapi_login() try: volume_info = self._client.getVolume(volume) volume_set = self._client.getVolumeSet(fpg) finally: self._wsapi_logout() provisioning_type = volume_info['provisioningType'] if provisioning_type not in (THIN, FULL, DEDUPE): msg = (_('Unexpected provisioning type for FPG %(fpg)s: ' '%(ptype)s.') % {'fpg': fpg, 'ptype': provisioning_type}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) dedupe = provisioning_type == DEDUPE thin_provisioning = provisioning_type in (THIN, DEDUPE) flash_cache_policy = volume_set.get('flashCachePolicy', DISABLED) hpe3par_flash_cache = flash_cache_policy == ENABLED status = { 'pool_name': fpg, 'total_capacity_gb': total_capacity_gb, 'free_capacity_gb': free_capacity_gb, 'thin_provisioning': thin_provisioning, 'dedupe': dedupe, 'hpe3par_flash_cache': hpe3par_flash_cache, 'hp3par_flash_cache': hpe3par_flash_cache, } if thin_provisioning: status['provisioned_capacity_gb'] = self.get_provisioned_gb(fpg) return status @staticmethod def ensure_supported_protocol(share_proto): protocol = share_proto.lower() if protocol == 'cifs': protocol = 'smb' if protocol not in ['smb', 'nfs']: message = (_('Invalid protocol. Expected nfs or smb. Got %s.') % protocol) LOG.error(message) raise exception.InvalidShareAccess(reason=message) return protocol @staticmethod def other_protocol(share_proto): """Given 'nfs' or 'smb' (or equivalent) return the other one.""" protocol = HPE3ParMediator.ensure_supported_protocol(share_proto) return 'nfs' if protocol == 'smb' else 'smb' @staticmethod def ensure_prefix(uid, protocol=None, readonly=False): if uid.startswith('osf-'): return uid if protocol: proto = '-%s' % HPE3ParMediator.ensure_supported_protocol(protocol) else: proto = '' if readonly: ro = '-ro' else: ro = '' # Format is osf[-ro]-{nfs|smb}-uid return 'osf%s%s-%s' % (proto, ro, uid) @staticmethod def _get_nfs_options(extra_specs, readonly): """Validate the NFS extra_specs and return the options to use.""" nfs_options = extra_specs.get('hpe3par:nfs_options') if nfs_options is None: nfs_options = extra_specs.get('hp3par:nfs_options') if nfs_options: msg = ("hp3par:nfs_options is deprecated. Use " "hpe3par:nfs_options instead.") LOG.warning(msg) if nfs_options: options = nfs_options.split(',') else: options = [] # rw, ro, and (no)root_squash (in)secure options are not allowed in # extra_specs because they will be forcibly set below. # no_subtree_check and fsid are not allowed per 3PAR support. # Other strings will be allowed to be sent to the 3PAR which will do # further validation. options_not_allowed = ['ro', 'rw', 'no_root_squash', 'root_squash', 'secure', 'insecure', 'no_subtree_check', 'fsid'] invalid_options = [ option for option in options if option in options_not_allowed ] if invalid_options: raise exception.InvalidInput(_('Invalid hp3par:nfs_options or ' 'hpe3par:nfs_options in ' 'extra-specs. 
The following ' 'options are not allowed: %s') % invalid_options) options.append('ro' if readonly else 'rw') options.append('no_root_squash') options.append('insecure') return ','.join(options) def _build_createfshare_kwargs(self, protocol, fpg, fstore, readonly, sharedir, extra_specs, comment, client_ip=None): createfshare_kwargs = dict(fpg=fpg, fstore=fstore, sharedir=sharedir, comment=comment) if 'hp3par_flash_cache' in extra_specs: msg = ("hp3par_flash_cache is deprecated. Use " "hpe3par_flash_cache instead.") LOG.warning(msg) if protocol == 'nfs': if client_ip: createfshare_kwargs['clientip'] = client_ip else: # New NFS shares needs seed IP to prevent "all" access. # Readonly and readwrite NFS shares client IPs cannot overlap. if readonly: createfshare_kwargs['clientip'] = LOCAL_IP_RO else: createfshare_kwargs['clientip'] = LOCAL_IP options = self._get_nfs_options(extra_specs, readonly) createfshare_kwargs['options'] = options else: # To keep the original (Kilo, Liberty) behavior where CIFS IP # access rules were required in addition to user rules enable # this to use a seed IP instead of the default (all allowed). if self.hpe3par_require_cifs_ip: if client_ip: createfshare_kwargs['allowip'] = client_ip else: createfshare_kwargs['allowip'] = LOCAL_IP smb_opts = (ACCESS_BASED_ENUM, CONTINUOUS_AVAIL, CACHE) for smb_opt in smb_opts: opt_value = extra_specs.get('hpe3par:smb_%s' % smb_opt) if opt_value is None: opt_value = extra_specs.get('hp3par:smb_%s' % smb_opt) if opt_value: msg = ("hp3par:smb_* is deprecated. Use " "hpe3par:smb_* instead.") LOG.warning(msg) if opt_value: opt_key = SMB_EXTRA_SPECS_MAP[smb_opt] createfshare_kwargs[opt_key] = opt_value return createfshare_kwargs def _update_capacity_quotas(self, fstore, new_size, old_size, fpg, vfs): @utils.synchronized('hpe3par-update-quota-' + fstore) def _sync_update_capacity_quotas(fstore, new_size, old_size, fpg, vfs): """Update 3PAR quotas and return setfsquota output.""" if self.hpe3par_fstore_per_share: hcapacity = six.text_type(new_size * units.Ki) scapacity = hcapacity else: hard_size_mb = (new_size - old_size) * units.Ki soft_size_mb = hard_size_mb result = self._client.getfsquota( fpg=fpg, vfs=vfs, fstore=fstore) LOG.debug("getfsquota result=%s", result) quotas = result['members'] if len(quotas) == 1: hard_size_mb += int(quotas[0].get('hardBlock', '0')) soft_size_mb += int(quotas[0].get('softBlock', '0')) hcapacity = six.text_type(hard_size_mb) scapacity = six.text_type(soft_size_mb) return self._client.setfsquota(vfs, fpg=fpg, fstore=fstore, scapacity=scapacity, hcapacity=hcapacity) try: result = _sync_update_capacity_quotas( fstore, new_size, old_size, fpg, vfs) LOG.debug("setfsquota result=%s", result) except Exception as e: msg = (_('Failed to update capacity quota ' '%(size)s on %(fstore)s with exception: %(e)s') % {'size': new_size - old_size, 'fstore': fstore, 'e': six.text_type(e)}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) # Non-empty result is an error message returned from the 3PAR if result: msg = (_('Failed to update capacity quota ' '%(size)s on %(fstore)s with error: %(error)s') % {'size': new_size - old_size, 'fstore': fstore, 'error': result}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) def _create_share(self, project_id, share_id, protocol, extra_specs, fpg, vfs, fstore, sharedir, readonly, size, comment, client_ip=None): share_name = self.ensure_prefix(share_id, readonly=readonly) if not (sharedir or self.hpe3par_fstore_per_share): sharedir = share_name if fstore: 
use_existing_fstore = True else: use_existing_fstore = False if self.hpe3par_fstore_per_share: # Do not use -ro in the fstore name. fstore = self.ensure_prefix(share_id, readonly=False) else: fstore = self.ensure_prefix(project_id, protocol) createfshare_kwargs = self._build_createfshare_kwargs( protocol, fpg, fstore, readonly, sharedir, extra_specs, comment, client_ip=client_ip) if not use_existing_fstore: try: result = self._client.createfstore( vfs, fstore, fpg=fpg, comment=comment) LOG.debug("createfstore result=%s", result) except Exception as e: msg = (_('Failed to create fstore %(fstore)s: %(e)s') % {'fstore': fstore, 'e': six.text_type(e)}) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) if size: self._update_capacity_quotas(fstore, size, 0, fpg, vfs) try: if readonly and protocol == 'nfs': # For NFS, RO is a 2nd 3PAR share pointing to same sharedir share_name = self.ensure_prefix(share_id, readonly=readonly) result = self._client.createfshare(protocol, vfs, share_name, **createfshare_kwargs) LOG.debug("createfshare result=%s", result) except Exception as e: msg = (_('Failed to create share %(share_name)s: %(e)s') % {'share_name': share_name, 'e': six.text_type(e)}) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) try: result = self._client.getfshare( protocol, share_name, fpg=fpg, vfs=vfs, fstore=fstore) LOG.debug("getfshare result=%s", result) except Exception as e: msg = (_('Failed to get fshare %(share_name)s after creating it: ' '%(e)s') % {'share_name': share_name, 'e': six.text_type(e)}) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) if result['total'] != 1: msg = (_('Failed to get fshare %(share_name)s after creating it. ' 'Expected to get 1 fshare. Got %(total)s.') % {'share_name': share_name, 'total': result['total']}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) return result['members'][0] def create_share(self, project_id, share_id, share_proto, extra_specs, fpg, vfs, fstore=None, sharedir=None, readonly=False, size=None, comment=OPEN_STACK_MANILA, client_ip=None): """Create the share and return its path. This method can create a share when called by the driver or when called locally from create_share_from_snapshot(). The optional parameters allow re-use. :param project_id: The tenant ID. :param share_id: The share-id with or without osf- prefix. :param share_proto: The protocol (to map to smb or nfs) :param extra_specs: The share type extra-specs :param fpg: The file provisioning group :param vfs: The virtual file system :param fstore: (optional) The file store. When provided, an existing file store is used. Otherwise one is created. :param sharedir: (optional) Share directory. :param readonly: (optional) Create share as read-only. :param size: (optional) Size limit for file store if creating one. :param comment: (optional) Comment to set on the share. :param client_ip: (optional) IP address to give access to. 
:return: share path string """ protocol = self.ensure_supported_protocol(share_proto) share = self._create_share(project_id, share_id, protocol, extra_specs, fpg, vfs, fstore, sharedir, readonly, size, comment, client_ip=client_ip) if protocol == 'nfs': return share['sharePath'] else: return share['shareName'] def create_share_from_snapshot(self, share_id, share_proto, extra_specs, orig_project_id, orig_share_id, snapshot_id, fpg, vfs, ips, size=None, comment=OPEN_STACK_MANILA): protocol = self.ensure_supported_protocol(share_proto) snapshot_tag = self.ensure_prefix(snapshot_id) orig_share_name = self.ensure_prefix(orig_share_id) snapshot = self._find_fsnap(orig_project_id, orig_share_name, protocol, snapshot_tag, fpg, vfs) if not snapshot: msg = (_('Failed to create share from snapshot for ' 'FPG/VFS/tag %(fpg)s/%(vfs)s/%(tag)s. ' 'Snapshot not found.') % { 'fpg': fpg, 'vfs': vfs, 'tag': snapshot_tag}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) fstore = snapshot['fstoreName'] if fstore == orig_share_name: # No subdir for original share created with fstore_per_share sharedir = '.snapshot/%s' % snapshot['snapName'] else: sharedir = '.snapshot/%s/%s' % (snapshot['snapName'], orig_share_name) if protocol == "smb" and (not self.hpe3par_cifs_admin_access_username or not self.hpe3par_cifs_admin_access_password): LOG.warning("hpe3par_cifs_admin_access_username and " "hpe3par_cifs_admin_access_password must be " "provided in order for CIFS shares created from " "snapshots to be writable.") return self.create_share( orig_project_id, share_id, protocol, extra_specs, fpg, vfs, fstore=fstore, sharedir=sharedir, readonly=True, comment=comment, ) # Export the snapshot as read-only to copy from. temp = ' '.join((comment, TMP_RO_SNAP_EXPORT)) source_path = self.create_share( orig_project_id, share_id, protocol, extra_specs, fpg, vfs, fstore=fstore, sharedir=sharedir, readonly=True, comment=temp, client_ip=self.my_ip ) try: share_name = self.ensure_prefix(share_id) dest_path = self.create_share( orig_project_id, share_id, protocol, extra_specs, fpg, vfs, fstore=fstore, readonly=False, size=size, comment=comment, client_ip=','.join((self.my_ip, LOCAL_IP)) ) try: if protocol == 'smb': self._grant_admin_smb_access( protocol, fpg, vfs, fstore, comment, share=share_name) ro_share_name = self.ensure_prefix(share_id, readonly=True) self._grant_admin_smb_access( protocol, fpg, vfs, fstore, temp, share=ro_share_name) source_locations = self.build_export_locations( protocol, ips, source_path) dest_locations = self.build_export_locations( protocol, ips, dest_path) self._copy_share_data( share_id, source_locations[0], dest_locations[0], protocol) # Revoke the admin access that was needed to copy to the dest. 
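# For NFS the temporary access was granted as an IP rule for this
# host (client_ip included self.my_ip above), so it is removed with a
# deny IP rule; for SMB the temporary 'fullcontrol' permission given
# to the configured admin user is revoked instead.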
if protocol == 'nfs': self._change_access(DENY, orig_project_id, share_id, protocol, 'ip', self.my_ip, 'rw', fpg, vfs) else: self._revoke_admin_smb_access( protocol, fpg, vfs, fstore, comment) except Exception as e: msg = ('Exception during mount and copy from RO snapshot ' 'to RW share: %s') LOG.error(msg, e) self._delete_share(share_name, protocol, fpg, vfs, fstore) raise finally: self._delete_ro_share( orig_project_id, share_id, protocol, fpg, vfs, fstore) return dest_path def _copy_share_data(self, dest_id, source_location, dest_location, protocol): mount_location = "%s%s" % (self.hpe3par_share_mount_path, dest_id) source_share_dir = '/'.join((mount_location, "source_snap")) dest_share_dir = '/'.join((mount_location, "dest_share")) dirs_to_remove = [] dirs_to_unmount = [] try: utils.execute('mkdir', '-p', source_share_dir, run_as_root=True) dirs_to_remove.append(source_share_dir) self._mount_share(protocol, source_location, source_share_dir) dirs_to_unmount.append(source_share_dir) utils.execute('mkdir', dest_share_dir, run_as_root=True) dirs_to_remove.append(dest_share_dir) self._mount_share(protocol, dest_location, dest_share_dir) dirs_to_unmount.append(dest_share_dir) self._copy_data(source_share_dir, dest_share_dir) finally: for d in dirs_to_unmount: self._unmount_share(d) if dirs_to_remove: dirs_to_remove.append(mount_location) utils.execute('rmdir', *dirs_to_remove, run_as_root=True) def _copy_data(self, source_share_dir, dest_share_dir): err_msg = None err_data = None try: copy = data_utils.Copy(source_share_dir, dest_share_dir, '') copy.run() progress = copy.get_progress()['total_progress'] if progress != 100: err_msg = _("Failed to copy data, reason: " "Total progress %d != 100.") err_data = progress except Exception as err: err_msg = _("Failed to copy data, reason: %s.") err_data = six.text_type(err) if err_msg: raise exception.ShareBackendException(msg=err_msg % err_data) def _delete_share(self, share_name, protocol, fpg, vfs, fstore): try: self._client.removefshare( protocol, vfs, share_name, fpg=fpg, fstore=fstore) except Exception as e: msg = (_('Failed to remove share %(share_name)s: %(e)s') % {'share_name': share_name, 'e': six.text_type(e)}) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) def _delete_ro_share(self, project_id, share_id, protocol, fpg, vfs, fstore): share_name_ro = self.ensure_prefix(share_id, readonly=True) if not fstore: fstore = self._find_fstore(project_id, share_name_ro, protocol, fpg, vfs, allow_cross_protocol=True) if fstore: self._delete_share(share_name_ro, protocol, fpg, vfs, fstore) return fstore def delete_share(self, project_id, share_id, share_size, share_proto, fpg, vfs, share_ip): protocol = self.ensure_supported_protocol(share_proto) share_name = self.ensure_prefix(share_id) fstore = self._find_fstore(project_id, share_name, protocol, fpg, vfs, allow_cross_protocol=True) removed_writable = False if fstore: self._delete_share(share_name, protocol, fpg, vfs, fstore) removed_writable = True # Try to delete the read-only twin share, too. fstore = self._delete_ro_share( project_id, share_id, protocol, fpg, vfs, fstore) if fstore == share_name: try: self._client.removefstore(vfs, fstore, fpg=fpg) except Exception as e: msg = (_('Failed to remove fstore %(fstore)s: %(e)s') % {'fstore': fstore, 'e': six.text_type(e)}) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) elif removed_writable: try: # Attempt to remove file tree on delete when using nested # shares. 
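# (Nested shares occur when hpe3par_fstore_per_share is False: several
# shares then live as sub-directories of one per-project file store, so
# removing the 3PAR share definition alone leaves the directory tree
# behind.)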
If the file tree cannot be removed for whatever # reason, we will not treat this as an error_deleting # issue. We will allow the delete to continue as requested. self._delete_file_tree( share_name, protocol, fpg, vfs, fstore, share_ip) # reduce the fsquota by share size when a tree is deleted. self._update_capacity_quotas( fstore, 0, share_size, fpg, vfs) except Exception as e: msg = ('Exception during cleanup of deleted ' 'share %(share)s in filestore %(fstore)s: %(e)s') data = { 'fstore': fstore, 'share': share_name, 'e': six.text_type(e), } LOG.warning(msg, data) def _delete_file_tree(self, share_name, protocol, fpg, vfs, fstore, share_ip): # If the share protocol is CIFS, we need to make sure the admin # provided the proper config values. If they have not, we can simply # return out and log a warning. if protocol == "smb" and (not self.hpe3par_cifs_admin_access_username or not self.hpe3par_cifs_admin_access_password): LOG.warning("hpe3par_cifs_admin_access_username and " "hpe3par_cifs_admin_access_password must be " "provided in order for the file tree to be " "properly deleted.") return mount_location = "%s%s" % (self.hpe3par_share_mount_path, share_name) share_dir = mount_location + "/%s" % share_name # Create the super share. self._create_super_share(protocol, fpg, vfs, fstore) # Create the mount directory. self._create_mount_directory(mount_location) # Mount the super share. self._mount_super_share(protocol, mount_location, fpg, vfs, fstore, share_ip) # Delete the share from the super share. self._delete_share_directory(share_dir) # Unmount the super share. self._unmount_share(mount_location) # Delete the mount directory. self._delete_share_directory(mount_location) def _grant_admin_smb_access(self, protocol, fpg, vfs, fstore, comment, share=SUPER_SHARE): user = '+%s:fullcontrol' % self.hpe3par_cifs_admin_access_username setfshare_kwargs = { 'fpg': fpg, 'fstore': fstore, 'comment': comment, 'allowperm': user, } try: self._client.setfshare( protocol, vfs, share, **setfshare_kwargs) except Exception as err: raise exception.ShareBackendException( msg=_("There was an error adding permissions: %s") % err) def _revoke_admin_smb_access(self, protocol, fpg, vfs, fstore, comment, share=SUPER_SHARE): user = '-%s:fullcontrol' % self.hpe3par_cifs_admin_access_username setfshare_kwargs = { 'fpg': fpg, 'fstore': fstore, 'comment': comment, 'allowperm': user, } try: self._client.setfshare( protocol, vfs, share, **setfshare_kwargs) except Exception as err: raise exception.ShareBackendException( msg=_("There was an error revoking permissions: %s") % err) def _create_super_share(self, protocol, fpg, vfs, fstore, readonly=False): sharedir = '' extra_specs = {} comment = 'OpenStack super share used to delete nested shares.' createfshare_kwargs = self._build_createfshare_kwargs(protocol, fpg, fstore, readonly, sharedir, extra_specs, comment) # If the share is NFS, we need to give the host access to the share in # order to properly mount it. 
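# The super share is later mounted from the host running manila-share
# (see _mount_super_share, used by _delete_file_tree), so access is
# granted to this host's own IP (self.my_ip) rather than to a tenant
# address.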
if protocol == 'nfs': createfshare_kwargs['clientip'] = self.my_ip else: createfshare_kwargs['allowip'] = self.my_ip try: result = self._client.createfshare(protocol, vfs, SUPER_SHARE, **createfshare_kwargs) LOG.debug("createfshare for %(name)s, result=%(result)s", {'name': SUPER_SHARE, 'result': result}) except Exception as e: msg = (_('Failed to create share %(share_name)s: %(e)s'), {'share_name': SUPER_SHARE, 'e': six.text_type(e)}) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) # If the share is CIFS, we need to grant access to the specified admin. if protocol == 'smb': self._grant_admin_smb_access(protocol, fpg, vfs, fstore, comment) def _create_mount_directory(self, mount_location): try: utils.execute('mkdir', mount_location, run_as_root=True) except Exception as err: message = ("There was an error creating mount directory: " "%s. The nested file tree will not be deleted.", six.text_type(err)) LOG.warning(message) def _mount_share(self, protocol, export_location, mount_dir): if protocol == 'nfs': cmd = ('mount', '-t', 'nfs', export_location, mount_dir) utils.execute(*cmd, run_as_root=True) else: export_location = export_location.replace('\\', '/') cred = ('username=' + self.hpe3par_cifs_admin_access_username + ',password=' + self.hpe3par_cifs_admin_access_password + ',domain=' + self.hpe3par_cifs_admin_access_domain) cmd = ('mount', '-t', 'cifs', export_location, mount_dir, '-o', cred) utils.execute(*cmd, run_as_root=True) def _mount_super_share(self, protocol, mount_dir, fpg, vfs, fstore, share_ip): try: mount_location = self._generate_mount_path( protocol, fpg, vfs, fstore, share_ip) self._mount_share(protocol, mount_location, mount_dir) except Exception as err: message = ("There was an error mounting the super share: " "%s. The nested file tree will not be deleted.", six.text_type(err)) LOG.warning(message) def _unmount_share(self, mount_location): try: utils.execute('umount', mount_location, run_as_root=True) except Exception as err: message = ("There was an error unmounting the share at " "%(mount_location)s: %(error)s") msg_data = { 'mount_location': mount_location, 'error': six.text_type(err), } LOG.warning(message, msg_data) def _delete_share_directory(self, directory): try: utils.execute('rm', '-rf', directory, run_as_root=True) except Exception as err: message = ("There was an error removing the share: " "%s. 
The nested file tree will not be deleted.", six.text_type(err)) LOG.warning(message) def _generate_mount_path(self, protocol, fpg, vfs, fstore, share_ip): path = None if protocol == 'nfs': path = (("%(share_ip)s:/%(fpg)s/%(vfs)s/%(fstore)s/") % {'share_ip': share_ip, 'fpg': fpg, 'vfs': vfs, 'fstore': fstore}) else: path = (("//%(share_ip)s/%(share_name)s/") % {'share_ip': share_ip, 'share_name': SUPER_SHARE}) return path def get_vfs(self, fpg, vfs=None): """Get the VFS or raise an exception.""" try: result = self._client.getvfs(fpg=fpg, vfs=vfs) except Exception as e: msg = (_('Exception during getvfs %(vfs)s: %(e)s') % {'vfs': vfs, 'e': six.text_type(e)}) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) if result['total'] != 1: error_msg = result.get('message') if error_msg: message = (_('Error while validating FPG/VFS ' '(%(fpg)s/%(vfs)s): %(msg)s') % {'fpg': fpg, 'vfs': vfs, 'msg': error_msg}) LOG.error(message) raise exception.ShareBackendException(msg=message) else: message = (_('Error while validating FPG/VFS ' '(%(fpg)s/%(vfs)s): Expected 1, ' 'got %(total)s.') % {'fpg': fpg, 'vfs': vfs, 'total': result['total']}) LOG.error(message) raise exception.ShareBackendException(msg=message) value = result['members'][0] if isinstance(value['vfsip'], dict): # This is for 3parclient returning only one VFS entry LOG.debug("3parclient version up to 4.2.1 is in use. Client " "upgrade may be needed if using a VFS with multiple " "IP addresses.") value['vfsip']['address'] = [value['vfsip']['address']] else: # This is for 3parclient returning list of VFS entries # Format get_vfs ret value to combine all IP addresses discovered_vfs_ips = [] for vfs_entry in value['vfsip']: if vfs_entry['address']: discovered_vfs_ips.append(vfs_entry['address']) value['vfsip'] = value['vfsip'][0] value['vfsip']['address'] = discovered_vfs_ips return value @staticmethod def _is_share_from_snapshot(fshare): path = fshare.get('shareDir') if path: return '.snapshot' in path.split('/') path = fshare.get('sharePath') return path and '.snapshot' in path.split('/') def create_snapshot(self, orig_project_id, orig_share_id, orig_share_proto, snapshot_id, fpg, vfs): """Creates a snapshot of a share.""" fshare = self._find_fshare(orig_project_id, orig_share_id, orig_share_proto, fpg, vfs) if not fshare: msg = (_('Failed to create snapshot for FPG/VFS/fshare ' '%(fpg)s/%(vfs)s/%(fshare)s: Failed to find fshare.') % {'fpg': fpg, 'vfs': vfs, 'fshare': orig_share_id}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) if self._is_share_from_snapshot(fshare): msg = (_('Failed to create snapshot for FPG/VFS/fshare ' '%(fpg)s/%(vfs)s/%(fshare)s: Share is a read-only ' 'share of an existing snapshot.') % {'fpg': fpg, 'vfs': vfs, 'fshare': orig_share_id}) LOG.error(msg) raise exception.ShareBackendException(msg=msg) fstore = fshare.get('fstoreName') snapshot_tag = self.ensure_prefix(snapshot_id) try: result = self._client.createfsnap( vfs, fstore, snapshot_tag, fpg=fpg) LOG.debug("createfsnap result=%s", result) except Exception as e: msg = (_('Failed to create snapshot for FPG/VFS/fstore ' '%(fpg)s/%(vfs)s/%(fstore)s: %(e)s') % {'fpg': fpg, 'vfs': vfs, 'fstore': fstore, 'e': six.text_type(e)}) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) def delete_snapshot(self, orig_project_id, orig_share_id, orig_proto, snapshot_id, fpg, vfs): """Deletes a snapshot of a share.""" snapshot_tag = self.ensure_prefix(snapshot_id) snapshot = self._find_fsnap(orig_project_id, orig_share_id, orig_proto, 
snapshot_tag, fpg, vfs) if not snapshot: return fstore = snapshot.get('fstoreName') for protocol in ('nfs', 'smb'): try: shares = self._client.getfshare(protocol, fpg=fpg, vfs=vfs, fstore=fstore) except Exception as e: msg = (_('Unexpected exception while getting share list. ' 'Cannot delete snapshot without checking for ' 'dependent shares first: %s') % six.text_type(e)) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) for share in shares['members']: if protocol == 'nfs': path = share['sharePath'][1:].split('/') dot_snapshot_index = 3 else: if share['shareDir']: path = share['shareDir'].split('/') else: path = None dot_snapshot_index = 0 snapshot_index = dot_snapshot_index + 1 if path and len(path) > snapshot_index: if (path[dot_snapshot_index] == '.snapshot' and path[snapshot_index].endswith(snapshot_tag)): msg = (_('Cannot delete snapshot because it has a ' 'dependent share.')) raise exception.Invalid(msg) snapname = snapshot['snapName'] try: result = self._client.removefsnap( vfs, fstore, snapname=snapname, fpg=fpg) LOG.debug("removefsnap result=%s", result) except Exception as e: msg = (_('Failed to delete snapshot for FPG/VFS/fstore/snapshot ' '%(fpg)s/%(vfs)s/%(fstore)s/%(snapname)s: %(e)s') % { 'fpg': fpg, 'vfs': vfs, 'fstore': fstore, 'snapname': snapname, 'e': six.text_type(e)}) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) # Try to reclaim the space try: self._client.startfsnapclean(fpg, reclaimStrategy='maxspeed') except Exception: # Remove already happened so only log this. LOG.exception('Unexpected exception calling startfsnapclean ' 'for FPG %(fpg)s.', {'fpg': fpg}) @staticmethod def _validate_access_type(protocol, access_type): if access_type not in ('ip', 'user'): msg = (_("Invalid access type. Expected 'ip' or 'user'. " "Actual '%s'.") % access_type) LOG.error(msg) raise exception.InvalidInput(reason=msg) if protocol == 'nfs' and access_type != 'ip': msg = (_("Invalid NFS access type. HPE 3PAR NFS supports 'ip'. " "Actual '%s'.") % access_type) LOG.error(msg) raise exception.HPE3ParInvalid(err=msg) return protocol @staticmethod def _validate_access_level(protocol, access_type, access_level, fshare): readonly = access_level == 'ro' snapshot = HPE3ParMediator._is_share_from_snapshot(fshare) if snapshot and not readonly: reason = _('3PAR shares from snapshots require read-only access') LOG.error(reason) raise exception.InvalidShareAccess(reason=reason) if protocol == 'smb' and access_type == 'ip' and snapshot != readonly: msg = (_("Invalid CIFS access rule. HPE 3PAR optionally supports " "IP access rules for CIFS shares, but they must be " "read-only for shares from snapshots and read-write for " "other shares. Use the required CIFS 'user' access rules " "to refine access.")) LOG.error(msg) raise exception.InvalidShareAccess(reason=msg) @staticmethod def ignore_benign_access_results(plus_or_minus, access_type, access_to, result): # TODO(markstur): Remove the next line when hpe3parclient is fixed. result = [x for x in result if x != '\r'] if result: if plus_or_minus == DENY: if DOES_NOT_EXIST in result[0]: return None else: if access_type == 'user': if USER_ALREADY_EXISTS % access_to in result[0]: return None elif IP_ALREADY_EXISTS % access_to in result[0]: return None return result def _change_access(self, plus_or_minus, project_id, share_id, share_proto, access_type, access_to, access_level, fpg, vfs, extra_specs=None): """Allow or deny access to a share. 
Plus_or_minus character indicates add to allow list (+) or remove from allow list (-). """ readonly = access_level == 'ro' protocol = self.ensure_supported_protocol(share_proto) try: self._validate_access_type(protocol, access_type) except Exception: if plus_or_minus == DENY: # Catch invalid rules for deny. Allow them to be deleted. return else: raise fshare = self._find_fshare(project_id, share_id, protocol, fpg, vfs, readonly=readonly) if not fshare: # Change access might apply to the share with the name that # does not match the access_level prefix. other_fshare = self._find_fshare(project_id, share_id, protocol, fpg, vfs, readonly=not readonly) if other_fshare: if plus_or_minus == DENY: # Try to deny rule from 'other' share for SMB or legacy. fshare = other_fshare elif self._is_share_from_snapshot(other_fshare): # Found a share-from-snapshot from before # "-ro" was added to the name. Use it. fshare = other_fshare elif protocol == 'nfs': # We don't have the RO|RW share we need, but the # opposite one already exists. It is OK to create # the one we need for ALLOW with NFS (not from snapshot). fstore = other_fshare.get('fstoreName') sharedir = other_fshare.get('shareDir') comment = other_fshare.get('comment') fshare = self._create_share(project_id, share_id, protocol, extra_specs, fpg, vfs, fstore=fstore, sharedir=sharedir, readonly=readonly, size=None, comment=comment) else: # SMB only has one share for RO and RW. Try to use it. fshare = other_fshare if not fshare: msg = _('Failed to change (%(change)s) access ' 'to FPG/share %(fpg)s/%(share)s ' 'for %(type)s %(to)s %(level)s): ' 'Share does not exist on 3PAR.') msg_data = { 'change': plus_or_minus, 'fpg': fpg, 'share': share_id, 'type': access_type, 'to': access_to, 'level': access_level, } if plus_or_minus == DENY: LOG.warning(msg, msg_data) return else: raise exception.HPE3ParInvalid(err=msg % msg_data) try: self._validate_access_level( protocol, access_type, access_level, fshare) except exception.InvalidShareAccess as e: if plus_or_minus == DENY: # Allow invalid access rules to be deleted. 
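# Log and return instead of raising so that deny requests for rules that could never have been applied still complete cleanly.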
msg = _('Ignoring deny invalid access rule ' 'for FPG/share %(fpg)s/%(share)s ' 'for %(type)s %(to)s %(level)s): %(e)s') msg_data = { 'change': plus_or_minus, 'fpg': fpg, 'share': share_id, 'type': access_type, 'to': access_to, 'level': access_level, 'e': six.text_type(e), } LOG.info(msg, msg_data) return else: raise share_name = fshare.get('shareName') setfshare_kwargs = { 'fpg': fpg, 'fstore': fshare.get('fstoreName'), 'comment': fshare.get('comment'), } if protocol == 'nfs': access_change = '%s%s' % (plus_or_minus, access_to) setfshare_kwargs['clientip'] = access_change elif protocol == 'smb': if access_type == 'ip': access_change = '%s%s' % (plus_or_minus, access_to) setfshare_kwargs['allowip'] = access_change else: access_str = 'read' if readonly else 'fullcontrol' perm = '%s%s:%s' % (plus_or_minus, access_to, access_str) setfshare_kwargs['allowperm'] = perm try: result = self._client.setfshare( protocol, vfs, share_name, **setfshare_kwargs) result = self.ignore_benign_access_results( plus_or_minus, access_type, access_to, result) except Exception as e: result = six.text_type(e) LOG.debug("setfshare result=%s", result) if result: msg = (_('Failed to change (%(change)s) access to FPG/share ' '%(fpg)s/%(share)s for %(type)s %(to)s %(level)s: ' '%(error)s') % {'change': plus_or_minus, 'fpg': fpg, 'share': share_id, 'type': access_type, 'to': access_to, 'level': access_level, 'error': result}) raise exception.ShareBackendException(msg=msg) def _find_fstore(self, project_id, share_id, share_proto, fpg, vfs, allow_cross_protocol=False): share = self._find_fshare(project_id, share_id, share_proto, fpg, vfs, allow_cross_protocol=allow_cross_protocol) return share.get('fstoreName') if share else None def _find_fshare(self, project_id, share_id, share_proto, fpg, vfs, allow_cross_protocol=False, readonly=False): share = self._find_fshare_with_proto(project_id, share_id, share_proto, fpg, vfs, readonly=readonly) if not share and allow_cross_protocol: other_proto = self.other_protocol(share_proto) share = self._find_fshare_with_proto(project_id, share_id, other_proto, fpg, vfs, readonly=readonly) return share def _find_fshare_with_proto(self, project_id, share_id, share_proto, fpg, vfs, readonly=False): protocol = self.ensure_supported_protocol(share_proto) share_name = self.ensure_prefix(share_id, readonly=readonly) project_fstore = self.ensure_prefix(project_id, share_proto) search_order = [ {'fpg': fpg, 'vfs': vfs, 'fstore': project_fstore}, {'fpg': fpg, 'vfs': vfs, 'fstore': share_name}, {'fpg': fpg}, {} ] try: for search_params in search_order: result = self._client.getfshare(protocol, share_name, **search_params) shares = result.get('members', []) if len(shares) == 1: return shares[0] except Exception as e: msg = (_('Unexpected exception while getting share list: %s') % six.text_type(e)) raise exception.ShareBackendException(msg=msg) def _find_fsnap(self, project_id, share_id, orig_proto, snapshot_tag, fpg, vfs): share_name = self.ensure_prefix(share_id) osf_project_id = self.ensure_prefix(project_id, orig_proto) pattern = '*_%s' % self.ensure_prefix(snapshot_tag) search_order = [ {'pat': True, 'fpg': fpg, 'vfs': vfs, 'fstore': osf_project_id}, {'pat': True, 'fpg': fpg, 'vfs': vfs, 'fstore': share_name}, {'pat': True, 'fpg': fpg}, {'pat': True}, ] try: for search_params in search_order: result = self._client.getfsnap(pattern, **search_params) snapshots = result.get('members', []) if len(snapshots) == 1: return snapshots[0] except Exception as e: msg = (_('Unexpected exception while getting 
snapshots: %s') % six.text_type(e)) raise exception.ShareBackendException(msg=msg) def update_access(self, project_id, share_id, share_proto, extra_specs, access_rules, add_rules, delete_rules, fpg, vfs): """Update access to a share.""" protocol = self.ensure_supported_protocol(share_proto) if not (delete_rules or add_rules): # We need to re add all the rules. Check with 3PAR on it's current # list and only add the deltas. share = self._find_fshare(project_id, share_id, share_proto, fpg, vfs) ref_users = [] ro_ref_rules = [] if protocol == 'nfs': ref_rules = share['clients'] # Check for RO rules. ro_share = self._find_fshare(project_id, share_id, share_proto, fpg, vfs, readonly=True) if ro_share: ro_ref_rules = ro_share['clients'] else: ref_rules = [x[0] for x in share['allowPerm']] ref_users = ref_rules[:] # Get IP access as well ips = share['allowIP'] if not isinstance(ips, list): # If there is only one IP, the API returns a string # rather than a list. We need to account for that. ips = [ips] ref_rules += ips # Retrieve base rules. base_rules = [] for rule in access_rules: base_rules.append(rule['access_to']) # Check if we need to remove any rules from 3PAR. for rule in ref_rules: if rule in ref_users: rule_type = 'user' else: rule_type = 'ip' if rule not in base_rules + [LOCAL_IP, LOCAL_IP_RO]: self._change_access(DENY, project_id, share_id, share_proto, rule_type, rule, None, fpg, vfs) # Check to see if there are any RO rules to remove. for rule in ro_ref_rules: if rule not in base_rules + [LOCAL_IP, LOCAL_IP_RO]: self._change_access(DENY, project_id, share_id, share_proto, rule_type, rule, 'ro', fpg, vfs) # Check the rules we need to add. for rule in access_rules: if rule['access_to'] not in ref_rules and ( rule['access_to'] not in ro_ref_rules): # Rule does not exist, we need to add it self._change_access(ALLOW, project_id, share_id, share_proto, rule['access_type'], rule['access_to'], rule['access_level'], fpg, vfs, extra_specs=extra_specs) else: # We have deltas of the rules that need to be added and deleted. for rule in delete_rules: self._change_access(DENY, project_id, share_id, share_proto, rule['access_type'], rule['access_to'], rule['access_level'], fpg, vfs) for rule in add_rules: self._change_access(ALLOW, project_id, share_id, share_proto, rule['access_type'], rule['access_to'], rule['access_level'], fpg, vfs, extra_specs=extra_specs) def resize_share(self, project_id, share_id, share_proto, new_size, old_size, fpg, vfs): """Extends or shrinks size of existing share.""" share_name = self.ensure_prefix(share_id) fstore = self._find_fstore(project_id, share_name, share_proto, fpg, vfs, allow_cross_protocol=False) if not fstore: msg = (_('Cannot resize share because it was not found.')) raise exception.InvalidShare(reason=msg) self._update_capacity_quotas(fstore, new_size, old_size, fpg, vfs) def fsip_exists(self, fsip): """Try to get FSIP. Return True if it exists.""" vfs = fsip['vfs'] fpg = fsip['fspool'] try: result = self._client.getfsip(vfs, fpg=fpg) LOG.debug("getfsip result: %s", result) except Exception: msg = (_('Failed to get FSIPs for FPG/VFS %(fspool)s/%(vfs)s.') % fsip) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) for member in result['members']: if all(item in member.items() for item in fsip.items()): return True return False def create_fsip(self, ip, subnet, vlantag, fpg, vfs): vlantag_str = six.text_type(vlantag) if vlantag else '0' # Try to create it. It's OK if it already exists. 
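# The createfsip result is only logged for debugging; fsip_exists() below serves as the actual success check, because an already-existing FSIP is not treated as an error at this point.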
try: result = self._client.createfsip(ip, subnet, vfs, fpg=fpg, vlantag=vlantag_str) LOG.debug("createfsip result: %s", result) except Exception: msg = (_('Failed to create FSIP for %s') % ip) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) # Verify that it really exists. fsip = { 'fspool': fpg, 'vfs': vfs, 'address': ip, 'prefixLen': subnet, 'vlanTag': vlantag_str, } if not self.fsip_exists(fsip): msg = (_('Failed to get FSIP after creating it for ' 'FPG/VFS/IP/subnet/VLAN ' '%(fspool)s/%(vfs)s/' '%(address)s/%(prefixLen)s/%(vlanTag)s.') % fsip) LOG.error(msg) raise exception.ShareBackendException(msg=msg) def remove_fsip(self, ip, fpg, vfs): if not (vfs and ip): # If there is no VFS and/or IP, then there is no FSIP to remove. return try: result = self._client.removefsip(vfs, ip, fpg=fpg) LOG.debug("removefsip result: %s", result) except Exception: msg = (_('Failed to remove FSIP %s') % ip) LOG.exception(msg) raise exception.ShareBackendException(msg=msg) # Verify that it really no longer exists. fsip = { 'fspool': fpg, 'vfs': vfs, 'address': ip, } if self.fsip_exists(fsip): msg = (_('Failed to remove FSIP for FPG/VFS/IP ' '%(fspool)s/%(vfs)s/%(address)s.') % fsip) LOG.error(msg) raise exception.ShareBackendException(msg=msg) manila-10.0.0/manila/share/drivers/hpe/hpe_3par_driver.py0000664000175000017500000006325513656750227023326 0ustar zuulzuul00000000000000# Copyright 2015 Hewlett Packard Enterprise Development LP # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """HPE 3PAR Driver for OpenStack Manila.""" import datetime import hashlib import inspect import os import re from oslo_config import cfg from oslo_config import types from oslo_log import log import six from manila.common import config from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.hpe import hpe_3par_mediator from manila.share import share_types from manila.share import utils as share_utils from manila import utils LOG = log.getLogger(__name__) class FPG(types.String, types.IPAddress): """FPG type. Used to represent multiple pools per backend values. Converts configuration value to an FPGs value. FPGs value format:: FPG name, IP address 1, IP address 2, ..., IP address 4 where FPG name is a string value, IP address is of type types.IPAddress Optionally doing range checking. If value is whitespace or empty string will raise error :param min_ip: Optional check that number of min IP address of VFS. :param max_ip: Optional check that number of max IP address of VFS. :param type_name: Type name to be used in the sample config file. 
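Example of a single hpe3par_fpg entry (illustrative pool name and IP addresses only)::

    samplepool,10.10.10.10,10.10.10.11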
""" MAX_SUPPORTED_IP_PER_VFS = 4 def __init__(self, min_ip=0, max_ip=MAX_SUPPORTED_IP_PER_VFS, type_name='FPG'): types.String.__init__(self, type_name=type_name) types.IPAddress.__init__(self, type_name=type_name) if max_ip < min_ip: msg = _("Pool's max acceptable IP cannot be less than min.") raise exception.HPE3ParInvalid(err=msg) if min_ip < 0: msg = _("Pools must be configured with zero or more IPs.") raise exception.HPE3ParInvalid(err=msg) if max_ip > FPG.MAX_SUPPORTED_IP_PER_VFS: msg = (_("Pool's max acceptable IP cannot be greater than " "supported value=%s.") % FPG.MAX_SUPPORTED_IP_PER_VFS) raise exception.HPE3ParInvalid(err=msg) self.min_ip = min_ip self.max_ip = max_ip def __call__(self, value): if value is None or value.strip(' ') == '': message = _("Invalid configuration. hpe3par_fpg must be set.") LOG.error(message) raise exception.HPE3ParInvalid(err=message) ips = [] values = value.split(",") # Extract pool name pool_name = values.pop(0).strip() # values will now be ['ip1', ...] if len(values) < self.min_ip: msg = (_("Require at least %s IPs configured per " "pool") % self.min_ip) raise exception.HPE3ParInvalid(err=msg) if len(values) > self.max_ip: msg = (_("Cannot configure IPs more than max supported " "%s IPs per pool") % self.max_ip) raise exception.HPE3ParInvalid(err=msg) for ip_addr in values: ip_addr = types.String.__call__(self, ip_addr.strip()) try: ips.append(types.IPAddress.__call__(self, ip_addr)) except ValueError as verror: raise exception.HPE3ParInvalid(err=verror) fpg = {pool_name: ips} return fpg def __repr__(self): return 'FPG' def _formatter(self, value): return six.text_type(value) HPE3PAR_OPTS = [ cfg.StrOpt('hpe3par_api_url', default='', help="3PAR WSAPI Server Url like " "https://<3par ip>:8080/api/v1", deprecated_name='hp3par_api_url'), cfg.StrOpt('hpe3par_username', default='', help="3PAR username with the 'edit' role", deprecated_name='hp3par_username'), cfg.StrOpt('hpe3par_password', default='', help="3PAR password for the user specified in hpe3par_username", secret=True, deprecated_name='hp3par_password'), cfg.HostAddressOpt('hpe3par_san_ip', help="IP address of SAN controller", deprecated_name='hp3par_san_ip'), cfg.StrOpt('hpe3par_san_login', default='', help="Username for SAN controller", deprecated_name='hp3par_san_login'), cfg.StrOpt('hpe3par_san_password', default='', help="Password for SAN controller", secret=True, deprecated_name='hp3par_san_password'), cfg.PortOpt('hpe3par_san_ssh_port', default=22, help='SSH port to use with SAN', deprecated_name='hp3par_san_ssh_port'), cfg.MultiOpt('hpe3par_fpg', item_type=FPG(min_ip=0, max_ip=FPG.MAX_SUPPORTED_IP_PER_VFS), help="The File Provisioning Group (FPG) to use", deprecated_name='hp3par_fpg'), cfg.BoolOpt('hpe3par_fstore_per_share', default=False, help="Use one filestore per share", deprecated_name='hp3par_fstore_per_share'), cfg.BoolOpt('hpe3par_require_cifs_ip', default=False, help="Require IP access rules for CIFS (in addition to user)"), cfg.BoolOpt('hpe3par_debug', default=False, help="Enable HTTP debugging to 3PAR", deprecated_name='hp3par_debug'), cfg.StrOpt('hpe3par_cifs_admin_access_username', default='', help="File system admin user name for CIFS.", deprecated_name='hp3par_cifs_admin_access_username'), cfg.StrOpt('hpe3par_cifs_admin_access_password', default='', help="File system admin password for CIFS.", secret=True, deprecated_name='hp3par_cifs_admin_access_password'), cfg.StrOpt('hpe3par_cifs_admin_access_domain', default='LOCAL_CLUSTER', help="File system domain for the CIFS admin 
user.", deprecated_name='hp3par_cifs_admin_access_domain'), cfg.StrOpt('hpe3par_share_mount_path', default='/mnt/', help="The path where shares will be mounted when deleting " "nested file trees.", deprecated_name='hpe3par_share_mount_path'), ] CONF = cfg.CONF CONF.register_opts(HPE3PAR_OPTS) def to_list(var): """Convert var to list type if not""" if isinstance(var, six.string_types): return [var] else: return var class HPE3ParShareDriver(driver.ShareDriver): """HPE 3PAR driver for Manila. Supports NFS and CIFS protocols on arrays with File Persona. Version history:: 1.0.0 - Begin Liberty development (post-Kilo) 1.0.1 - Report thin/dedup/hp_flash_cache capabilities 1.0.2 - Add share server/share network support 2.0.0 - Rebranded HP to HPE 2.0.1 - Add access_level (e.g. read-only support) 2.0.2 - Add extend/shrink 2.0.3 - Remove file tree on delete when using nested shares #1538800 2.0.4 - Reduce the fsquota by share size when a share is deleted #1582931 2.0.5 - Add update_access support 2.0.6 - Multi pool support per backend 2.0.7 - Fix get_vfs() to correctly validate conf IP addresses at boot up #1621016 2.0.8 - Replace ConsistencyGroup with ShareGroup """ VERSION = "2.0.8" def __init__(self, *args, **kwargs): super(HPE3ParShareDriver, self).__init__((True, False), *args, **kwargs) self.configuration = kwargs.get('configuration', None) self.configuration.append_config_values(HPE3PAR_OPTS) self.configuration.append_config_values(driver.ssh_opts) self.configuration.append_config_values(config.global_opts) self.fpgs = {} self._hpe3par = None # mediator between driver and client def do_setup(self, context): """Any initialization the share driver does while starting.""" LOG.info("Starting share driver %(driver_name)s (%(version)s)", {'driver_name': self.__class__.__name__, 'version': self.VERSION}) mediator = hpe_3par_mediator.HPE3ParMediator( hpe3par_username=self.configuration.hpe3par_username, hpe3par_password=self.configuration.hpe3par_password, hpe3par_api_url=self.configuration.hpe3par_api_url, hpe3par_debug=self.configuration.hpe3par_debug, hpe3par_san_ip=self.configuration.hpe3par_san_ip, hpe3par_san_login=self.configuration.hpe3par_san_login, hpe3par_san_password=self.configuration.hpe3par_san_password, hpe3par_san_ssh_port=self.configuration.hpe3par_san_ssh_port, hpe3par_fstore_per_share=(self.configuration .hpe3par_fstore_per_share), hpe3par_require_cifs_ip=self.configuration.hpe3par_require_cifs_ip, hpe3par_cifs_admin_access_username=( self.configuration.hpe3par_cifs_admin_access_username), hpe3par_cifs_admin_access_password=( self.configuration.hpe3par_cifs_admin_access_password), hpe3par_cifs_admin_access_domain=( self.configuration.hpe3par_cifs_admin_access_domain), hpe3par_share_mount_path=( self.configuration.hpe3par_share_mount_path), my_ip=self.configuration.my_ip, ssh_conn_timeout=self.configuration.ssh_conn_timeout, ) mediator.do_setup() def _validate_pool_ips(addresses, conf_pool_ips): # Pool configured IP addresses should be subset of IP addresses # retured from vfs if not set(conf_pool_ips) <= set(addresses): msg = _("Incorrect configuration. " "Configuration pool IP address did not match with " "IP addresses at 3par array") raise exception.HPE3ParInvalid(err=msg) def _construct_fpg(): # FPG must be configured and must exist. # self.configuration.safe_get('hpe3par_fpg') will have value in # following format: # [ {'pool_name':['ip_addr', 'ip_addr', ...]}, ... 
] for fpg in self.configuration.safe_get('hpe3par_fpg'): pool_name = list(fpg)[0] conf_pool_ips = fpg[pool_name] # Validate the FPG and discover the VFS # This also validates the client, connection, firmware, WSAPI, # FPG... vfs_info = mediator.get_vfs(pool_name) if self.driver_handles_share_servers: # Use discovered IP(s) from array self.fpgs[pool_name] = { vfs_info['vfsname']: vfs_info['vfsip']['address']} elif conf_pool_ips == []: # not DHSS and IPs not configured in manila.conf. if not vfs_info['vfsip']['address']: msg = _("Unsupported configuration. " "hpe3par_fpg must have IP address " "or be discoverable at 3PAR") LOG.error(msg) raise exception.HPE3ParInvalid(err=msg) else: # Use discovered pool ips self.fpgs[pool_name] = { vfs_info['vfsname']: vfs_info['vfsip']['address']} else: # not DHSS and IPs configured in manila.conf _validate_pool_ips(vfs_info['vfsip']['address'], conf_pool_ips) self.fpgs[pool_name] = { vfs_info['vfsname']: conf_pool_ips} _construct_fpg() # Don't set _hpe3par until it is ready. Otherwise _update_stats fails. self._hpe3par = mediator def _get_pool_location_from_share_host(self, share_instance_host): # Return pool name, vfs, IPs for a pool from share instance host pool_name = share_utils.extract_host(share_instance_host, level='pool') if not pool_name: message = (_("Pool is not available in the share host %s.") % share_instance_host) raise exception.InvalidHost(reason=message) if pool_name not in self.fpgs: message = (_("Pool location lookup failed. " "Could not find pool %s") % pool_name) raise exception.InvalidHost(reason=message) vfs = list(self.fpgs[pool_name])[0] ips = self.fpgs[pool_name][vfs] return (pool_name, vfs, ips) def _get_pool_location(self, share, share_server=None): # Return pool name, vfs, IPs for a pool from share host field # Use share_server if provided, instead of self.fpgs if share_server is not None: # When DHSS ips = share_server['backend_details'].get('ip') ips = to_list(ips) vfs = share_server['backend_details'].get('vfs') pool_name = share_server['backend_details'].get('fpg') return (pool_name, vfs, ips) else: # When DHSS = false return self._get_pool_location_from_share_host(share['host']) def check_for_setup_error(self): try: # Log the source SHA for support. Only do this with DEBUG. if LOG.isEnabledFor(log.DEBUG): LOG.debug('HPE3ParShareDriver SHA1: %s', self.sha1_hash(HPE3ParShareDriver)) LOG.debug('HPE3ParMediator SHA1: %s', self.sha1_hash(hpe_3par_mediator.HPE3ParMediator)) except Exception as e: # Don't let any exceptions during the SHA1 logging interfere # with startup. This is just debug info to identify the source # code. If it doesn't work, just log a debug message. LOG.debug('Source code SHA1 not logged due to: %s', six.text_type(e)) @staticmethod def sha1_hash(clazz): """Get the SHA1 hash for the source of a class.""" source_file = inspect.getsourcefile(clazz) file_size = os.path.getsize(source_file) sha1 = hashlib.sha1() sha1.update(("blob %u\0" % file_size).encode('utf-8')) with open(source_file, 'rb') as f: sha1.update(f.read()) return sha1.hexdigest() def get_network_allocations_number(self): return 1 def choose_share_server_compatible_with_share(self, context, share_servers, share, snapshot=None, share_group=None): """Method that allows driver to choose share server for provided share. If compatible share-server is not found, method should return None. 
:param context: Current context :param share_servers: list with share-server models :param share: share model :param snapshot: snapshot model :param share_group: ShareGroup model with shares :returns: share-server or None """ # If creating in a share group, raise exception if share_group: msg = _("HPE 3PAR driver does not support share group") raise exception.InvalidRequest(message=msg) pool_name = share_utils.extract_host(share['host'], level='pool') for share_server in share_servers: if share_server['backend_details'].get('fpg') == pool_name: return share_server return None @staticmethod def _validate_network_type(network_type): if network_type not in ('flat', 'vlan', None): reason = _('Invalid network type. %s is not supported by the ' '3PAR driver.') raise exception.NetworkBadConfigurationException( reason=reason % network_type) def _create_share_server(self, network_info, request_host=None): """Is called to create/setup share server""" # Return pool name, vfs, IPs for a pool pool_name, vfs, ips = self._get_pool_location_from_share_host( request_host) ip = network_info['network_allocations'][0]['ip_address'] if ip not in ips: # Besides DHSS, admin could have setup IP to VFS directly on array if len(ips) > (FPG.MAX_SUPPORTED_IP_PER_VFS - 1): message = (_("Pool %s has exceeded 3PAR's " "max supported VFS IP address") % pool_name) LOG.error(message) raise exception.Invalid(message) subnet = utils.cidr_to_netmask(network_info['cidr']) vlantag = network_info['segmentation_id'] self._hpe3par.create_fsip(ip, subnet, vlantag, pool_name, vfs) # Update in global saved config, self.fpgs[pool_name] ips.append(ip) return {'share_server_name': network_info['server_id'], 'share_server_id': network_info['server_id'], 'ip': ip, 'subnet': subnet, 'vlantag': vlantag if vlantag else 0, 'fpg': pool_name, 'vfs': vfs} def _setup_server(self, network_info, metadata=None): LOG.debug("begin _setup_server with %s", network_info) self._validate_network_type(network_info['network_type']) if metadata is not None and metadata['request_host'] is not None: return self._create_share_server(network_info, metadata['request_host']) def _teardown_server(self, server_details, security_services=None): LOG.debug("begin _teardown_server with %s", server_details) fpg = server_details.get('fpg') vfs = server_details.get('vfs') ip = server_details.get('ip') self._hpe3par.remove_fsip(ip, fpg, vfs) if ip in self.fpgs[fpg][vfs]: self.fpgs[fpg][vfs].remove(ip) @staticmethod def build_share_comment(share): """Create an informational only comment to help admins and testers.""" info = { 'name': share['display_name'], 'host': share['host'], 'now': datetime.datetime.now().strftime('%H%M%S'), } acceptable = re.compile(r'[^a-zA-Z0-9_=:@# \-]+', re.UNICODE) comment = ("OpenStack Manila - host=%(host)s orig_name=%(name)s " "created=%(now)s" % info) return acceptable.sub('_', comment)[:254] # clean and truncate def create_share(self, context, share, share_server=None): """Is called to create share.""" fpg, vfs, ips = self._get_pool_location(share, share_server) protocol = share['share_proto'] extra_specs = share_types.get_extra_specs_from_share(share) path = self._hpe3par.create_share( share['project_id'], share['id'], protocol, extra_specs, fpg, vfs, size=share['size'], comment=self.build_share_comment(share) ) return self._hpe3par.build_export_locations(protocol, ips, path) def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create share from snapshot.""" fpg, vfs, ips = 
self._get_pool_location(share, share_server) protocol = share['share_proto'] extra_specs = share_types.get_extra_specs_from_share(share) path = self._hpe3par.create_share_from_snapshot( share['id'], protocol, extra_specs, share['project_id'], snapshot['share_id'], snapshot['id'], fpg, vfs, ips, size=share['size'], comment=self.build_share_comment(share) ) return self._hpe3par.build_export_locations(protocol, ips, path) def delete_share(self, context, share, share_server=None): """Deletes share and its fstore.""" fpg, vfs, ips = self._get_pool_location(share, share_server) self._hpe3par.delete_share(share['project_id'], share['id'], share['size'], share['share_proto'], fpg, vfs, ips[0]) def create_snapshot(self, context, snapshot, share_server=None): """Creates a snapshot of a share.""" fpg, vfs, ips = self._get_pool_location(snapshot['share'], share_server) self._hpe3par.create_snapshot(snapshot['share']['project_id'], snapshot['share']['id'], snapshot['share']['share_proto'], snapshot['id'], fpg, vfs) def delete_snapshot(self, context, snapshot, share_server=None): """Deletes a snapshot of a share.""" fpg, vfs, ips = self._get_pool_location(snapshot['share'], share_server) self._hpe3par.delete_snapshot(snapshot['share']['project_id'], snapshot['share']['id'], snapshot['share']['share_proto'], snapshot['id'], fpg, vfs) def ensure_share(self, context, share, share_server=None): pass def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access to the share.""" extra_specs = None if 'NFS' == share['share_proto']: # Avoiding DB call otherwise extra_specs = share_types.get_extra_specs_from_share(share) fpg, vfs, ips = self._get_pool_location(share, share_server) self._hpe3par.update_access(share['project_id'], share['id'], share['share_proto'], extra_specs, access_rules, add_rules, delete_rules, fpg, vfs) def extend_share(self, share, new_size, share_server=None): """Extends size of existing share.""" fpg, vfs, ips = self._get_pool_location(share, share_server) self._hpe3par.resize_share(share['project_id'], share['id'], share['share_proto'], new_size, share['size'], fpg, vfs) def shrink_share(self, share, new_size, share_server=None): """Shrinks size of existing share.""" fpg, vfs, ips = self._get_pool_location(share, share_server) self._hpe3par.resize_share(share['project_id'], share['id'], share['share_proto'], new_size, share['size'], fpg, vfs) def _update_share_stats(self): """Retrieve stats info from share group.""" backend_name = self.configuration.safe_get( 'share_backend_name') or "HPE_3PAR" max_over_subscription_ratio = self.configuration.safe_get( 'max_over_subscription_ratio') reserved_share_percentage = self.configuration.safe_get( 'reserved_share_percentage') if reserved_share_percentage is None: reserved_share_percentage = 0 stats = { 'share_backend_name': backend_name, 'driver_handles_share_servers': self.driver_handles_share_servers, 'vendor_name': 'HPE', 'driver_version': self.VERSION, 'storage_protocol': 'NFS_CIFS', 'total_capacity_gb': 0, 'free_capacity_gb': 0, 'provisioned_capacity_gb': 0, 'reserved_percentage': reserved_share_percentage, 'max_over_subscription_ratio': max_over_subscription_ratio, 'qos': False, 'thin_provisioning': True, # 3PAR default is thin } if not self._hpe3par: LOG.info( "Skipping capacity and capabilities update. 
Setup has not " "completed.") else: for fpg in self.fpgs: fpg_status = self._hpe3par.get_fpg_status(fpg) fpg_status['reserved_percentage'] = reserved_share_percentage LOG.debug("FPG status = %s.", fpg_status) stats.setdefault('pools', []).append(fpg_status) super(HPE3ParShareDriver, self)._update_share_stats(stats) manila-10.0.0/manila/share/drivers/hitachi/0000775000175000017500000000000013656750362020522 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/hitachi/__init__.py0000664000175000017500000000000013656750227022621 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/hitachi/hsp/0000775000175000017500000000000013656750362021314 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/hitachi/hsp/__init__.py0000664000175000017500000000000013656750227023413 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/hitachi/hsp/rest.py0000664000175000017500000001652413656750227022653 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import requests from manila import exception from manila.i18n import _ from manila import utils # Suppress the Insecure request warnings requests.packages.urllib3.disable_warnings() # pylint: disable=no-member class HSPRestBackend(object): def __init__(self, hsp_host, hsp_username, hsp_password): self.host = hsp_host self.username = hsp_username self.password = hsp_password def _send_post(self, url, payload=None): resp = requests.post(url, auth=(self.username, self.password), data=payload, verify=False) if resp.status_code == 202: self._wait_job_status(resp.headers['location'], 'COMPLETE') else: msg = (_("HSP API post failed: %s.") % resp.json()['messages'][0]['message']) raise exception.HSPBackendException(msg=msg) def _send_get(self, url, payload=None): resp = requests.get(url, auth=(self.username, self.password), data=payload, verify=False) if resp.status_code == 200: if resp.content == 'null': return None else: return resp.json() else: msg = (_("HSP API get failed: %s.") % resp.json()['messages'][0]['message']) raise exception.HSPBackendException(msg=msg) def _send_delete(self, url, payload=None): resp = requests.delete(url, auth=(self.username, self.password), data=payload, verify=False) if resp.status_code == 202: self._wait_job_status(resp.headers['location'], 'COMPLETE') else: msg = (_("HSP API delete failed: %s.") % resp.json()['messages'][0]['message']) raise exception.HSPBackendException(msg=msg) def add_file_system(self, name, quota): url = "https://%s/hspapi/file-systems/" % self.host payload = { 'quota': quota, 'auto-access': False, 'enabled': True, 'description': '', 'record-access-time': True, 'tags': '', # Usage percentage in which a warning will be shown 'space-hwm': 90, # Usage percentage in which the warning will be cleared 'space-lwm': 70, 'name': name, } self._send_post(url, payload=json.dumps(payload)) def get_file_system(self, name): url = ("https://%s/hspapi/file-systems/list?name=%s" % (self.host, 
name)) filesystems = self._send_get(url) try: return filesystems['list'][0] except (TypeError, KeyError, IndexError): msg = _("Filesystem does not exist or is not available.") raise exception.HSPItemNotFoundException(msg=msg) def delete_file_system(self, filesystem_id): url = "https://%s/hspapi/file-systems/%s" % (self.host, filesystem_id) self._send_delete(url) def resize_file_system(self, filesystem_id, new_size): url = "https://%s/hspapi/file-systems/%s" % (self.host, filesystem_id) payload = {'quota': new_size} self._send_post(url, payload=json.dumps(payload)) def rename_file_system(self, filesystem_id, new_name): url = "https://%s/hspapi/file-systems/%s" % (self.host, filesystem_id) payload = {'name': new_name} self._send_post(url, payload=json.dumps(payload)) def add_share(self, name, filesystem_id): url = "https://%s/hspapi/shares/" % self.host payload = { 'description': '', 'type': 'NFS', 'enabled': True, 'tags': '', 'name': name, 'file-system-id': filesystem_id, } self._send_post(url, payload=json.dumps(payload)) def get_share(self, fs_id=None, name=None): if fs_id is not None: url = ('https://%s/hspapi/shares/list?file-system-id=%s' % (self.host, fs_id)) elif name is not None: url = ('https://%s/hspapi/shares/list?name=%s' % (self.host, name)) share = self._send_get(url) try: return share['list'][0] except (TypeError, KeyError, IndexError): msg = _("Share %s does not exist or is not available.") if fs_id is not None: args = "for filesystem %s" % fs_id else: args = name raise exception.HSPItemNotFoundException(msg=msg % args) def delete_share(self, share_id): url = "https://%s/hspapi/shares/%s" % (self.host, share_id) self._send_delete(url) def add_access_rule(self, share_id, host_to, read_write): url = "https://%s/hspapi/shares/%s/" % (self.host, share_id) payload = { "action": "add-access-rule", "name": share_id + host_to, "host-specification": host_to, "read-write": read_write, } self._send_post(url, payload=json.dumps(payload)) def delete_access_rule(self, share_id, rule_name): url = "https://%s/hspapi/shares/%s/" % (self.host, share_id) payload = { "action": "delete-access-rule", "name": rule_name, } self._send_post(url, payload=json.dumps(payload)) def get_access_rules(self, share_id): url = ("https://%s/hspapi/shares/%s/access-rules" % (self.host, share_id)) rules = self._send_get(url) try: rules = rules['list'] except (TypeError, KeyError, IndexError): rules = [] return rules def get_cluster(self): url = "https://%s/hspapi/clusters/list" % self.host clusters = self._send_get(url) try: return clusters['list'][0] except (TypeError, KeyError, IndexError): msg = _("No cluster was found on HSP.") raise exception.HSPBackendException(msg=msg) @utils.retry(exception.HSPTimeoutException, retries=10, wait_random=True) def _wait_job_status(self, job_url, target_status): resp_json = self._send_get(job_url) status = resp_json['properties']['completion-status'] if status == 'ERROR': msg = _("HSP job %(id)s failed. %(reason)s") job_id = resp_json['id'] reason = resp_json['properties']['completion-details'] raise exception.HSPBackendException(msg=msg % {'id': job_id, 'reason': reason}) elif status != target_status: msg = _("Timeout while waiting for job %s to complete.") args = resp_json['id'] raise exception.HSPTimeoutException(msg=msg % args) manila-10.0.0/manila/share/drivers/hitachi/hsp/driver.py0000664000175000017500000003414213656750227023165 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems, Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_log import log from oslo_utils import excutils from oslo_utils import units from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import driver from manila.share.drivers.hitachi.hsp import rest LOG = log.getLogger(__name__) hitachi_hsp_opts = [ cfg.HostAddressOpt('hitachi_hsp_host', required=True, help="HSP management host for communication between " "Manila controller and HSP."), cfg.StrOpt('hitachi_hsp_username', required=True, help="HSP username to perform tasks such as create filesystems" " and shares."), cfg.StrOpt('hitachi_hsp_password', required=True, secret=True, help="HSP password for the username provided."), ] class HitachiHSPDriver(driver.ShareDriver): """Manila HSP Driver implementation. 1.0.0 - Initial Version. """ def __init__(self, *args, **kwargs): super(HitachiHSPDriver, self).__init__( [False], *args, config_opts=[hitachi_hsp_opts], **kwargs) self.private_storage = kwargs.get('private_storage') self.backend_name = self.configuration.safe_get('share_backend_name') self.hsp_host = self.configuration.safe_get('hitachi_hsp_host') self.hsp = rest.HSPRestBackend( self.hsp_host, self.configuration.safe_get('hitachi_hsp_username'), self.configuration.safe_get('hitachi_hsp_password') ) def _update_share_stats(self, data=None): LOG.debug("Updating Backend Capability Information - Hitachi HSP.") reserved = self.configuration.safe_get('reserved_share_percentage') max_over_subscription_ratio = self.configuration.safe_get( 'max_over_subscription_ratio') hsp_cluster = self.hsp.get_cluster() total_space = hsp_cluster['properties']['total-storage-capacity'] free_space = hsp_cluster['properties']['total-storage-available'] data = { 'share_backend_name': self.backend_name, 'vendor_name': 'Hitachi', 'driver_version': '1.0.0', 'storage_protocol': 'NFS', 'pools': [{ 'reserved_percentage': reserved, 'pool_name': 'HSP', 'thin_provisioning': True, 'total_capacity_gb': total_space / units.Gi, 'free_capacity_gb': free_space / units.Gi, 'max_over_subscription_ratio': max_over_subscription_ratio, 'qos': False, 'dedupe': False, 'compression': False, }], } LOG.info("Hitachi HSP Capabilities: %(data)s.", {'data': data}) super(HitachiHSPDriver, self)._update_share_stats(data) def create_share(self, context, share, share_server=None): LOG.debug("Creating share in HSP: %(shr)s", {'shr': share['id']}) if share['share_proto'].lower() != 'nfs': msg = _("Only NFS protocol is currently supported.") raise exception.InvalidShare(reason=msg) self.hsp.add_file_system(share['id'], share['size'] * units.Gi) filesystem_id = self.hsp.get_file_system(share['id'])['id'] try: self.hsp.add_share(share['id'], filesystem_id) except exception.HSPBackendException: with excutils.save_and_reraise_exception(): self.hsp.delete_file_system(filesystem_id) msg = ("Could not create share %s on HSP.") LOG.exception(msg, share['id']) uri = self.hsp_host + ':/' + share['id'] LOG.debug("Share created successfully 
on path: %(uri)s.", {'uri': uri}) return [{ "path": uri, "metadata": {}, "is_admin_only": False, }] def delete_share(self, context, share, share_server=None): LOG.debug("Deleting share in HSP: %(shr)s.", {'shr': share['id']}) filesystem_id = hsp_share_id = None try: filesystem_id = self.hsp.get_file_system(share['id'])['id'] hsp_share_id = self.hsp.get_share(filesystem_id)['id'] except exception.HSPItemNotFoundException: LOG.info("Share %(shr)s already removed from backend.", {'shr': share['id']}) if hsp_share_id: # Clean all rules from share before deleting it current_rules = self.hsp.get_access_rules(hsp_share_id) for rule in current_rules: try: self.hsp.delete_access_rule(hsp_share_id, rule['name']) except exception.HSPBackendException as e: if 'No matching access rule found.' in e.msg: LOG.debug("Rule %(rule)s already deleted in " "backend.", {'rule': rule['name']}) else: raise self.hsp.delete_share(hsp_share_id) if filesystem_id: self.hsp.delete_file_system(filesystem_id) LOG.debug("Export and share successfully deleted: %(shr)s.", {'shr': share['id']}) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): LOG.debug("Updating access rules for share: %(shr)s.", {'shr': share['id']}) try: filesystem_id = self.hsp.get_file_system(share['id'])['id'] hsp_share_id = self.hsp.get_share(filesystem_id)['id'] except exception.HSPItemNotFoundException: raise exception.ShareResourceNotFound(share_id=share['id']) if not (add_rules or delete_rules): # Recovery mode current_rules = self.hsp.get_access_rules(hsp_share_id) # Indexing the rules for faster searching hsp_rules_dict = { rule['host-specification']: rule['read-write'] for rule in current_rules } manila_rules_dict = {} for rule in access_rules: if rule['access_type'].lower() != 'ip': msg = _("Only IP access type currently supported.") raise exception.InvalidShareAccess(reason=msg) access_to = rule['access_to'] is_rw = rule['access_level'] == constants.ACCESS_LEVEL_RW manila_rules_dict[access_to] = is_rw # Remove the rules that exist on HSP but not on manila remove_rules = self._get_complement(hsp_rules_dict, manila_rules_dict) # Add the rules that exist on manila but not on HSP add_rules = self._get_complement(manila_rules_dict, hsp_rules_dict) for rule in remove_rules: rule_name = self._get_hsp_rule_name(hsp_share_id, rule[0]) self.hsp.delete_access_rule(hsp_share_id, rule_name) for rule in add_rules: self.hsp.add_access_rule(hsp_share_id, rule[0], rule[1]) else: for rule in delete_rules: if rule['access_type'].lower() != 'ip': continue # get the real rule name in HSP rule_name = self._get_hsp_rule_name(hsp_share_id, rule['access_to']) try: self.hsp.delete_access_rule(hsp_share_id, rule_name) except exception.HSPBackendException as e: if 'No matching access rule found.' 
in e.msg: LOG.debug("Rule %(rule)s already deleted in " "backend.", {'rule': rule['access_to']}) else: raise for rule in add_rules: if rule['access_type'].lower() != 'ip': msg = _("Only IP access type currently supported.") raise exception.InvalidShareAccess(reason=msg) try: self.hsp.add_access_rule( hsp_share_id, rule['access_to'], (rule['access_level'] == constants.ACCESS_LEVEL_RW)) except exception.HSPBackendException as e: if 'Duplicate NFS access rule exists' in e.msg: LOG.debug("Rule %(rule)s already exists in " "backend.", {'rule': rule['access_to']}) else: raise LOG.debug("Successfully updated share %(shr)s rules.", {'shr': share['id']}) def _get_hsp_rule_name(self, share_id, host_to): rule_name = share_id + host_to all_rules = self.hsp.get_access_rules(share_id) for rule in all_rules: # check if this rule has other name in HSP if rule['host-specification'] == host_to: rule_name = rule['name'] break return rule_name def _get_complement(self, rules_a, rules_b): """Returns the rules of list A that are not on list B""" complement = [] for rule, is_rw in rules_a.items(): if rule not in rules_b or rules_b[rule] != is_rw: complement.append((rule, is_rw)) return complement def extend_share(self, share, new_size, share_server=None): LOG.debug("Extending share in HSP: %(shr_id)s.", {'shr_id': share['id']}) old_size = share['size'] hsp_cluster = self.hsp.get_cluster() free_space = hsp_cluster['properties']['total-storage-available'] free_space = free_space / units.Gi if (new_size - old_size) < free_space: filesystem_id = self.hsp.get_file_system(share['id'])['id'] self.hsp.resize_file_system(filesystem_id, new_size * units.Gi) else: msg = (_("Share %s cannot be extended due to insufficient space.") % share['id']) raise exception.HSPBackendException(msg=msg) LOG.info("Share %(shr_id)s successfully extended to " "%(shr_size)sG.", {'shr_id': share['id'], 'shr_size': new_size}) def shrink_share(self, share, new_size, share_server=None): LOG.debug("Shrinking share in HSP: %(shr_id)s.", {'shr_id': share['id']}) file_system = self.hsp.get_file_system(share['id']) usage = file_system['properties']['used-capacity'] / units.Gi LOG.debug("Usage for share %(shr_id)s in HSP: %(usage)sG.", {'shr_id': share['id'], 'usage': usage}) if new_size > usage: self.hsp.resize_file_system(file_system['id'], new_size * units.Gi) else: raise exception.ShareShrinkingPossibleDataLoss( share_id=share['id']) LOG.info("Share %(shr_id)s successfully shrunk to " "%(shr_size)sG.", {'shr_id': share['id'], 'shr_size': new_size}) def manage_existing(self, share, driver_options): LOG.debug("Managing share in HSP: %(shr_id)s.", {'shr_id': share['id']}) ip, share_name = share['export_locations'][0]['path'].split(':') try: hsp_share = self.hsp.get_share(name=share_name.strip('/')) except exception.HSPItemNotFoundException: msg = _("The share %s trying to be managed was not found on " "backend.") % share['id'] raise exception.ManageInvalidShare(reason=msg) self.hsp.rename_file_system(hsp_share['properties']['file-system-id'], share['id']) original_name = hsp_share['properties']['file-system-name'] private_storage_content = { 'old_name': original_name, 'new_name': share['id'], } self.private_storage.update(share['id'], private_storage_content) LOG.debug("Filesystem %(original_name)s was renamed to %(name)s.", {'original_name': original_name, 'name': share['id']}) file_system = self.hsp.get_file_system(share['id']) LOG.info("Share %(shr_path)s was successfully managed with ID " "%(shr_id)s.", {'shr_path': 
share['export_locations'][0]['path'], 'shr_id': share['id']}) export_locations = [{ "path": share['export_locations'][0]['path'], "metadata": {}, "is_admin_only": False, }] return {'size': file_system['properties']['quota'] / units.Gi, 'export_locations': export_locations} def unmanage(self, share): original_name = self.private_storage.get(share['id'], 'old_name') LOG.debug("Filesystem %(name)s that was originally named " "%(original_name)s will no longer be managed.", {'original_name': original_name, 'name': share['id']}) self.private_storage.delete(share['id']) LOG.info("The share with current path %(shr_path)s and ID " "%(shr_id)s is no longer being managed.", {'shr_path': share['export_locations'][0]['path'], 'shr_id': share['id']}) def get_default_filter_function(self): return "share.size >= 128" manila-10.0.0/manila/share/drivers/hitachi/hnas/0000775000175000017500000000000013656750362021453 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/hitachi/hnas/__init__.py0000664000175000017500000000000013656750227023552 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/drivers/hitachi/hnas/driver.py0000664000175000017500000015440713656750227023333 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hitachi Data Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from oslo_config import cfg from oslo_log import log from oslo_utils import excutils from oslo_utils import importutils import six from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import driver from manila.share import utils LOG = log.getLogger(__name__) hitachi_hnas_opts = [ cfg.HostAddressOpt('hitachi_hnas_ip', deprecated_name='hds_hnas_ip', help="HNAS management interface IP for communication " "between Manila controller and HNAS."), cfg.StrOpt('hitachi_hnas_user', deprecated_name='hds_hnas_user', help="HNAS username Base64 String in order to perform tasks " "such as create file-systems and network interfaces."), cfg.StrOpt('hitachi_hnas_password', deprecated_name='hds_hnas_password', secret=True, help="HNAS user password. Required only if private key is not " "provided."), cfg.IntOpt('hitachi_hnas_evs_id', deprecated_name='hds_hnas_evs_id', help="Specify which EVS this backend is assigned to."), cfg.HostAddressOpt('hitachi_hnas_evs_ip', deprecated_name='hds_hnas_evs_ip', help="Specify IP for mounting shares."), cfg.HostAddressOpt('hitachi_hnas_admin_network_ip', help="Specify IP for mounting shares in the Admin " "network."), cfg.StrOpt('hitachi_hnas_file_system_name', deprecated_name='hds_hnas_file_system_name', help="Specify file-system name for creating shares."), cfg.StrOpt('hitachi_hnas_ssh_private_key', deprecated_name='hds_hnas_ssh_private_key', secret=True, help="RSA/DSA private key value used to connect into HNAS. " "Required only if password is not provided."), cfg.HostAddressOpt('hitachi_hnas_cluster_admin_ip0', deprecated_name='hds_hnas_cluster_admin_ip0', help="The IP of the clusters admin node. 
Only set in " "HNAS multinode clusters."), cfg.IntOpt('hitachi_hnas_stalled_job_timeout', deprecated_name='hds_hnas_stalled_job_timeout', default=30, help="The time (in seconds) to wait for stalled HNAS jobs " "before aborting."), cfg.StrOpt('hitachi_hnas_driver_helper', deprecated_name='hds_hnas_driver_helper', default='manila.share.drivers.hitachi.hnas.ssh.HNASSSHBackend', help="Python class to be used for driver helper."), cfg.BoolOpt('hitachi_hnas_allow_cifs_snapshot_while_mounted', deprecated_name='hds_hnas_allow_cifs_snapshot_while_mounted', default=False, help="By default, CIFS snapshots are not allowed to be taken " "when the share has clients connected because consistent " "point-in-time replica cannot be guaranteed for all " "files. Enabling this might cause inconsistent snapshots " "on CIFS shares."), ] CONF = cfg.CONF CONF.register_opts(hitachi_hnas_opts) class HitachiHNASDriver(driver.ShareDriver): """Manila HNAS Driver implementation. Driver versions:: 1.0.0 - Initial Version. 2.0.0 - Refactoring, bugfixes, implemented Share Shrink and Update Access. 3.0.0 - New driver location, implemented support for CIFS protocol. 3.1.0 - Added admin network export location support. 4.0.0 - Added mountable snapshots, revert-to-snapshot and manage snapshots features support. """ def __init__(self, *args, **kwargs): """Do initialization.""" LOG.debug("Invoking base constructor for Manila Hitachi HNAS Driver.") super(HitachiHNASDriver, self).__init__(False, *args, **kwargs) LOG.debug("Setting up attributes for Manila Hitachi HNAS Driver.") self.configuration.append_config_values(hitachi_hnas_opts) LOG.debug("Reading config parameters for Manila Hitachi HNAS Driver.") self.backend_name = self.configuration.safe_get('share_backend_name') hnas_helper = self.configuration.safe_get('hitachi_hnas_driver_helper') hnas_ip = self.configuration.safe_get('hitachi_hnas_ip') hnas_username = self.configuration.safe_get('hitachi_hnas_user') hnas_password = self.configuration.safe_get('hitachi_hnas_password') hnas_evs_id = self.configuration.safe_get('hitachi_hnas_evs_id') self.hnas_evs_ip = self.configuration.safe_get('hitachi_hnas_evs_ip') self.hnas_admin_network_ip = self.configuration.safe_get( 'hitachi_hnas_admin_network_ip') self.fs_name = self.configuration.safe_get( 'hitachi_hnas_file_system_name') self.cifs_snapshot = self.configuration.safe_get( 'hitachi_hnas_allow_cifs_snapshot_while_mounted') ssh_private_key = self.configuration.safe_get( 'hitachi_hnas_ssh_private_key') cluster_admin_ip0 = self.configuration.safe_get( 'hitachi_hnas_cluster_admin_ip0') self.private_storage = kwargs.get('private_storage') job_timeout = self.configuration.safe_get( 'hitachi_hnas_stalled_job_timeout') if hnas_helper is None: msg = _("The config parameter hitachi_hnas_driver_helper is not " "set.") raise exception.InvalidParameterValue(err=msg) if hnas_evs_id is None: msg = _("The config parameter hitachi_hnas_evs_id is not set.") raise exception.InvalidParameterValue(err=msg) if self.hnas_evs_ip is None: msg = _("The config parameter hitachi_hnas_evs_ip is not set.") raise exception.InvalidParameterValue(err=msg) if hnas_ip is None: msg = _("The config parameter hitachi_hnas_ip is not set.") raise exception.InvalidParameterValue(err=msg) if hnas_username is None: msg = _("The config parameter hitachi_hnas_user is not set.") raise exception.InvalidParameterValue(err=msg) if hnas_password is None and ssh_private_key is None: msg = _("Credentials configuration parameters missing: " "you need to set hitachi_hnas_password 
or " "hitachi_hnas_ssh_private_key.") raise exception.InvalidParameterValue(err=msg) LOG.debug("Initializing HNAS Layer.") helper = importutils.import_class(hnas_helper) self.hnas = helper(hnas_ip, hnas_username, hnas_password, ssh_private_key, cluster_admin_ip0, hnas_evs_id, self.hnas_evs_ip, self.fs_name, job_timeout) def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share. :param context: The `context.RequestContext` object for the request :param share: Share that will have its access rules updated. :param access_rules: All access rules for given share. :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. :param share_server: Data structure with share server information. Not used by this driver. """ hnas_share_id = self._get_hnas_share_id(share['id']) try: self._ensure_share(share, hnas_share_id) except exception.HNASItemNotFoundException: raise exception.ShareResourceNotFound(share_id=share['id']) self._check_protocol(share['id'], share['share_proto']) if share['share_proto'].lower() == 'nfs': self._nfs_update_access(share, hnas_share_id, access_rules) else: if not (add_rules or delete_rules): # recovery mode self._clean_cifs_access_list(hnas_share_id) self._cifs_allow_access(share, hnas_share_id, access_rules) else: self._cifs_deny_access(share, hnas_share_id, delete_rules) self._cifs_allow_access(share, hnas_share_id, add_rules) def _nfs_update_access(self, share, hnas_share_id, access_rules): host_list = [] for rule in access_rules: if rule['access_type'].lower() != 'ip': msg = _("Only IP access type currently supported for NFS. " "Share provided %(share)s with rule type " "%(type)s.") % {'share': share['id'], 'type': rule['access_type']} raise exception.InvalidShareAccess(reason=msg) if rule['access_level'] == constants.ACCESS_LEVEL_RW: host_list.append(rule['access_to'] + '(' + rule['access_level'] + ',norootsquash)') else: host_list.append(rule['access_to'] + '(' + rule['access_level'] + ')') self.hnas.update_nfs_access_rule(host_list, share_id=hnas_share_id) if host_list: LOG.debug("Share %(share)s has the rules: %(rules)s", {'share': share['id'], 'rules': ', '.join(host_list)}) else: LOG.debug("Share %(share)s has no rules.", {'share': share['id']}) def _cifs_allow_access(self, share_or_snapshot, hnas_id, add_rules, is_snapshot=False): entity_type = "share" if is_snapshot: entity_type = "snapshot" for rule in add_rules: if rule['access_type'].lower() != 'user': msg = _("Only USER access type currently supported for CIFS. 
" "%(entity_type)s provided %(share)s with " "rule %(r_id)s type %(type)s allowing permission " "to %(to)s.") % { 'entity_type': entity_type.capitalize(), 'share': share_or_snapshot['id'], 'type': rule['access_type'], 'r_id': rule['id'], 'to': rule['access_to'], } raise exception.InvalidShareAccess(reason=msg) if rule['access_level'] == constants.ACCESS_LEVEL_RW: # Adding permission acr = Allow Change&Read permission = 'acr' else: # Adding permission ar = Allow Read permission = 'ar' formatted_user = rule['access_to'].replace('\\', '\\\\') self.hnas.cifs_allow_access(hnas_id, formatted_user, permission, is_snapshot=is_snapshot) LOG.debug("Added %(rule)s rule for user/group %(user)s " "to %(entity_type)s %(share)s.", {'rule': rule['access_level'], 'user': rule['access_to'], 'entity_type': entity_type, 'share': share_or_snapshot['id']}) def _cifs_deny_access(self, share_or_snapshot, hnas_id, delete_rules, is_snapshot=False): if is_snapshot: entity_type = "snapshot" share_proto = share_or_snapshot['share']['share_proto'] else: entity_type = "share" share_proto = share_or_snapshot['share_proto'] for rule in delete_rules: if rule['access_type'].lower() != 'user': LOG.warning('Only USER access type is allowed for ' 'CIFS. %(entity_type)s ' 'provided %(share)s with ' 'protocol %(proto)s.', {'entity_type': entity_type.capitalize(), 'share': share_or_snapshot['id'], 'proto': share_proto}) continue formatted_user = rule['access_to'].replace('\\', '\\\\') self.hnas.cifs_deny_access(hnas_id, formatted_user, is_snapshot=is_snapshot) LOG.debug("Access denied for user/group %(user)s " "to %(entity_type)s %(share)s.", {'user': rule['access_to'], 'entity_type': entity_type, 'share': share_or_snapshot['id']}) def _clean_cifs_access_list(self, hnas_id, is_snapshot=False): permission_list = self.hnas.list_cifs_permissions(hnas_id) for permission in permission_list: formatted_user = r'"\{1}{0}\{1}"'.format(permission[0], '"') self.hnas.cifs_deny_access(hnas_id, formatted_user, is_snapshot=is_snapshot) def create_share(self, context, share, share_server=None): r"""Creates share. :param context: The `context.RequestContext` object for the request :param share: Share that will be created. :param share_server: Data structure with share server information. Not used by this driver. :returns: Returns a list of dicts containing the EVS IP concatenated with the path of share in the filesystem. Example for NFS:: [ { 'path': '172.24.44.10:/shares/id', 'metadata': {}, 'is_admin_only': False }, { 'path': '192.168.0.10:/shares/id', 'metadata': {}, 'is_admin_only': True } ] Example for CIFS:: [ { 'path': '\\172.24.44.10\id', 'metadata': {}, 'is_admin_only': False }, { 'path': '\\192.168.0.10\id', 'metadata': {}, 'is_admin_only': True } ] """ LOG.debug("Creating share in HNAS: %(shr)s.", {'shr': share['id']}) self._check_protocol(share['id'], share['share_proto']) export_list = self._create_share(share['id'], share['size'], share['share_proto']) LOG.debug("Share %(share)s created successfully on path(s): " "%(paths)s.", {'paths': ', '.join([x['path'] for x in export_list]), 'share': share['id']}) return export_list def delete_share(self, context, share, share_server=None): """Deletes share. :param context: The `context.RequestContext` object for the request :param share: Share that will be deleted. :param share_server: Data structure with share server information. Not used by this driver. 
""" hnas_share_id = self._get_hnas_share_id(share['id']) LOG.debug("Deleting share in HNAS: %(shr)s.", {'shr': share['id']}) self._delete_share(hnas_share_id, share['share_proto']) LOG.debug("Export and share successfully deleted: %(shr)s.", {'shr': share['id']}) def create_snapshot(self, context, snapshot, share_server=None): """Creates snapshot. :param context: The `context.RequestContext` object for the request :param snapshot: Snapshot that will be created. :param share_server: Data structure with share server information. Not used by this driver. """ hnas_share_id = self._get_hnas_share_id(snapshot['share_id']) LOG.debug("The snapshot of share %(snap_share_id)s will be created " "with id %(snap_id)s.", {'snap_share_id': snapshot['share_id'], 'snap_id': snapshot['id']}) export_locations = self._create_snapshot(hnas_share_id, snapshot) LOG.info("Snapshot %(id)s successfully created.", {'id': snapshot['id']}) output = { 'provider_location': os.path.join( '/snapshots', hnas_share_id, snapshot['id']) } if export_locations: output['export_locations'] = export_locations return output def delete_snapshot(self, context, snapshot, share_server=None): """Deletes snapshot. :param context: The `context.RequestContext` object for the request :param snapshot: Snapshot that will be deleted. :param share_server: Data structure with share server information. Not used by this driver. """ hnas_share_id = self._get_hnas_share_id(snapshot['share_id']) hnas_snapshot_id = self._get_hnas_snapshot_id(snapshot) LOG.debug("The snapshot %(snap_id)s will be deleted. The related " "share ID is %(snap_share_id)s.", {'snap_id': snapshot['id'], 'snap_share_id': snapshot['share_id']}) self._delete_snapshot(snapshot['share'], hnas_share_id, hnas_snapshot_id) LOG.info("Snapshot %(id)s successfully deleted.", {'id': snapshot['id']}) def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): r"""Creates a new share from snapshot. :param context: The `context.RequestContext` object for the request :param share: Information about the new share. :param snapshot: Information about the snapshot that will be copied to new share. :param share_server: Data structure with share server information. Not used by this driver. :returns: Returns a list of dicts containing the EVS IP concatenated with the path of share in the filesystem. Example for NFS:: [ { 'path': '172.24.44.10:/shares/id', 'metadata': {}, 'is_admin_only': False }, { 'path': '192.168.0.10:/shares/id', 'metadata': {}, 'is_admin_only': True } ] Example for CIFS:: [ { 'path': '\\172.24.44.10\id', 'metadata': {}, 'is_admin_only': False }, { 'path': '\\192.168.0.10\id', 'metadata': {}, 'is_admin_only': True } ] """ LOG.debug("Creating a new share from snapshot: %(ss_id)s.", {'ss_id': snapshot['id']}) hnas_src_share_id = self._get_hnas_share_id(snapshot['share_id']) hnas_src_snap_id = self._get_hnas_snapshot_id(snapshot) export_list = self._create_share_from_snapshot( share, hnas_src_share_id, hnas_src_snap_id) LOG.debug("Share %(share)s created successfully on path(s): " "%(paths)s.", {'paths': ', '.join([x['path'] for x in export_list]), 'share': share['id']}) return export_list def ensure_share(self, context, share, share_server=None): r"""Ensure that share is exported. :param context: The `context.RequestContext` object for the request :param share: Share that will be checked. :param share_server: Data structure with share server information. Not used by this driver. 
:returns: Returns a list of dicts containing the EVS IP concatenated with the path of share in the filesystem. Example for NFS:: [ { 'path': '172.24.44.10:/shares/id', 'metadata': {}, 'is_admin_only': False }, { 'path': '192.168.0.10:/shares/id', 'metadata': {}, 'is_admin_only': True } ] Example for CIFS:: [ { 'path': '\\172.24.44.10\id', 'metadata': {}, 'is_admin_only': False }, { 'path': '\\192.168.0.10\id', 'metadata': {}, 'is_admin_only': True } ] """ LOG.debug("Ensuring share in HNAS: %(shr)s.", {'shr': share['id']}) hnas_share_id = self._get_hnas_share_id(share['id']) export_list = self._ensure_share(share, hnas_share_id) LOG.debug("Share ensured in HNAS: %(shr)s, protocol %(proto)s.", {'shr': share['id'], 'proto': share['share_proto']}) return export_list def extend_share(self, share, new_size, share_server=None): """Extends a share to new size. :param share: Share that will be extended. :param new_size: New size of share. :param share_server: Data structure with share server information. Not used by this driver. """ hnas_share_id = self._get_hnas_share_id(share['id']) LOG.debug("Expanding share in HNAS: %(shr_id)s.", {'shr_id': share['id']}) self._extend_share(hnas_share_id, share, new_size) LOG.info("Share %(shr_id)s successfully extended to " "%(shr_size)s.", {'shr_id': share['id'], 'shr_size': six.text_type(new_size)}) # TODO(alyson): Implement in DHSS = true mode def get_network_allocations_number(self): """Track allocations_number in DHSS = true. When using the setting driver_handles_share_server = false does not require to track allocations_number because we do not handle network stuff. """ return 0 def _update_share_stats(self, data=None): """Updates the Capability of Backend.""" LOG.debug("Updating Backend Capability Information - Hitachi HNAS.") self._check_fs_mounted() total_space, free_space, dedupe = self.hnas.get_stats() reserved = self.configuration.safe_get('reserved_share_percentage') data = { 'share_backend_name': self.backend_name, 'driver_handles_share_servers': self.driver_handles_share_servers, 'vendor_name': 'Hitachi', 'driver_version': '4.0.0', 'storage_protocol': 'NFS_CIFS', 'total_capacity_gb': total_space, 'free_capacity_gb': free_space, 'reserved_percentage': reserved, 'qos': False, 'thin_provisioning': True, 'dedupe': dedupe, 'revert_to_snapshot_support': True, 'mount_snapshot_support': True, } LOG.info("HNAS Capabilities: %(data)s.", {'data': six.text_type(data)}) super(HitachiHNASDriver, self)._update_share_stats(data) def manage_existing(self, share, driver_options): r"""Manages a share that exists on backend. :param share: Share that will be managed. :param driver_options: Empty dict or dict with 'volume_id' option. :returns: Returns a dict with size of the share managed and a list of dicts containing its export locations. Example for NFS:: { 'size': 10, 'export_locations': [ { 'path': '172.24.44.10:/shares/id', 'metadata': {}, 'is_admin_only': False }, { 'path': '192.168.0.10:/shares/id', 'metadata': {}, 'is_admin_only': True } ] } Example for CIFS:: { 'size': 10, 'export_locations': [ { 'path': '\\172.24.44.10\id', 'metadata': {}, 'is_admin_only': False }, { 'path': '\\192.168.0.10\id', 'metadata': {}, 'is_admin_only': True } ] } """ hnas_share_id = self._get_hnas_share_id(share['id']) # Make sure returned value is the same as provided, # confirming it does not exist. 
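# Note (descriptive comment, inferred from _get_hnas_share_id() below):
# that helper falls back to the manila share ID when no mapping exists in
# private storage, so a returned value that differs from share['id'] means
# this share is already tracked (i.e. already managed) and the request is
# rejected by the check that follows.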
if hnas_share_id != share['id']: msg = _("Share ID %s already exists, cannot manage.") % share['id'] raise exception.HNASBackendException(msg=msg) self._check_protocol(share['id'], share['share_proto']) if share['share_proto'].lower() == 'nfs': # 10.0.0.1:/shares/example LOG.info("Share %(shr_path)s will be managed with ID " "%(shr_id)s.", {'shr_path': share['export_locations'][0]['path'], 'shr_id': share['id']}) old_path_info = share['export_locations'][0]['path'].split( ':/shares/') if len(old_path_info) == 2: evs_ip = old_path_info[0] hnas_share_id = old_path_info[1] else: msg = _("Incorrect path. It should have the following format: " "IP:/shares/share_id.") raise exception.ShareBackendException(msg=msg) else: # then its CIFS # \\10.0.0.1\example old_path = share['export_locations'][0]['path'].split('\\') if len(old_path) == 4: evs_ip = old_path[2] hnas_share_id = old_path[3] else: msg = _("Incorrect path. It should have the following format: " "\\\\IP\\share_id.") raise exception.ShareBackendException(msg=msg) if evs_ip != self.hnas_evs_ip: msg = _("The EVS IP %(evs)s is not " "configured.") % {'evs': evs_ip} raise exception.ShareBackendException(msg=msg) if self.backend_name not in share['host']: msg = _("The backend passed in the host parameter (%(shr)s) is " "not configured.") % {'shr': share['host']} raise exception.ShareBackendException(msg=msg) output = self._manage_existing(share, hnas_share_id) self.private_storage.update( share['id'], {'hnas_id': hnas_share_id}) LOG.debug("HNAS ID %(hnas_id)s has been saved to private storage for " "Share ID %(share_id)s", {'hnas_id': hnas_share_id, 'share_id': share['id']}) LOG.info("Share %(shr_path)s was successfully managed with ID " "%(shr_id)s.", {'shr_path': share['export_locations'][0]['path'], 'shr_id': share['id']}) return output def unmanage(self, share): """Unmanages a share. :param share: Share that will be unmanaged. """ self.private_storage.delete(share['id']) if len(share['export_locations']) == 0: LOG.info("The share with ID %(shr_id)s is no longer being " "managed.", {'shr_id': share['id']}) else: LOG.info("The share with current path %(shr_path)s and ID " "%(shr_id)s is no longer being managed.", {'shr_path': share['export_locations'][0]['path'], 'shr_id': share['id']}) def shrink_share(self, share, new_size, share_server=None): """Shrinks a share to new size. :param share: Share that will be shrunk. :param new_size: New size of share. :param share_server: Data structure with share server information. Not used by this driver. """ hnas_share_id = self._get_hnas_share_id(share['id']) LOG.debug("Shrinking share in HNAS: %(shr_id)s.", {'shr_id': share['id']}) self._shrink_share(hnas_share_id, share, new_size) LOG.info("Share %(shr_id)s successfully shrunk to " "%(shr_size)sG.", {'shr_id': share['id'], 'shr_size': six.text_type(new_size)}) def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server=None): """Reverts a share to a given snapshot. :param context: The `context.RequestContext` object for the request :param snapshot: The snapshot to which the share is to be reverted to. :param share_access_rules: List of all access rules for the affected share. Not used by this driver. :param snapshot_access_rules: List of all access rules for the affected snapshot. Not used by this driver. :param share_server: Data structure with share server information. Not used by this driver. 
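Implementation note: the driver performs the revert by deleting the
share's directory tree, recreating the virtual-volume and its quota, and
then cloning the snapshot tree back into the share path (see the code
below); an empty snapshot simply leaves the recreated share directory
empty.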
""" hnas_share_id = self._get_hnas_share_id(snapshot['share_id']) hnas_snapshot_id = self._get_hnas_snapshot_id(snapshot) self._ensure_snapshot(snapshot, hnas_snapshot_id) dest_path = os.path.join('/shares', hnas_share_id) src_path = os.path.join('/snapshots', hnas_share_id, hnas_snapshot_id) self.hnas.tree_delete(dest_path) self.hnas.vvol_create(hnas_share_id) self.hnas.quota_add(hnas_share_id, snapshot['size']) try: self.hnas.tree_clone(src_path, dest_path) except exception.HNASNothingToCloneException: LOG.warning("Source directory is empty, creating an empty " "directory.") LOG.info("Share %(share)s successfully reverted to snapshot " "%(snapshot)s.", {'share': snapshot['share_id'], 'snapshot': snapshot['id']}) def _get_hnas_share_id(self, share_id): hnas_id = self.private_storage.get(share_id, 'hnas_id') if hnas_id is None: hnas_id = share_id LOG.debug("Share ID is %(shr_id)s and respective HNAS ID " "is %(hnas_id)s.", {'shr_id': share_id, 'hnas_id': hnas_id}) return hnas_id def _get_hnas_snapshot_id(self, snapshot): hnas_snapshot_id = snapshot['id'] if snapshot['provider_location']: LOG.debug("Snapshot %(snap_id)s with provider_location: " "%(p_loc)s.", {'snap_id': hnas_snapshot_id, 'p_loc': snapshot['provider_location']}) hnas_snapshot_id = snapshot['provider_location'].split('/')[-1] return hnas_snapshot_id def _create_share(self, share_id, share_size, share_proto): """Creates share. Creates a virtual-volume, adds a quota limit and exports it. :param share_id: manila's database ID of share that will be created. :param share_size: Size limit of share. :param share_proto: Protocol of share that will be created (NFS or CIFS) :returns: Returns a list of dicts containing the new share's export locations. """ self._check_fs_mounted() self.hnas.vvol_create(share_id) self.hnas.quota_add(share_id, share_size) LOG.debug("Share created with id %(shr)s, size %(size)sG.", {'shr': share_id, 'size': share_size}) self._create_export(share_id, share_proto) export_list = self._get_export_locations(share_proto, share_id) return export_list def _create_export(self, share_id, share_proto, snapshot_id=None): try: if share_proto.lower() == 'nfs': # Create NFS export self.hnas.nfs_export_add(share_id, snapshot_id=snapshot_id) LOG.debug("NFS Export created to %(shr)s.", {'shr': share_id}) else: # Create CIFS share with vvol path self.hnas.cifs_share_add(share_id, snapshot_id=snapshot_id) LOG.debug("CIFS share created to %(shr)s.", {'shr': share_id}) except exception.HNASBackendException: with excutils.save_and_reraise_exception(): if snapshot_id is None: self.hnas.vvol_delete(share_id) def _check_fs_mounted(self): mounted = self.hnas.check_fs_mounted() if not mounted: msg = _("Filesystem %s is not mounted.") % self.fs_name raise exception.HNASBackendException(msg=msg) def _ensure_share(self, share, hnas_share_id): """Ensure that share is exported. :param share: Share that will be checked. :param hnas_share_id: HNAS ID of share that will be checked. :returns: Returns a list of dicts containing the share's export locations. """ self._check_protocol(share['id'], share['share_proto']) self._check_fs_mounted() self.hnas.check_vvol(hnas_share_id) self.hnas.check_quota(hnas_share_id) if share['share_proto'].lower() == 'nfs': self.hnas.check_export(hnas_share_id) else: self.hnas.check_cifs(hnas_share_id) export_list = self._get_export_locations( share['share_proto'], hnas_share_id) return export_list def _shrink_share(self, hnas_share_id, share, new_size): """Shrinks a share to new size. 
:param hnas_share_id: HNAS ID of share that will be shrunk. :param share: model of share that will be shrunk. :param new_size: New size of share after shrink operation. """ self._ensure_share(share, hnas_share_id) usage = self.hnas.get_share_usage(hnas_share_id) LOG.debug("Usage space in share %(share)s: %(usage)sG", {'share': share['id'], 'usage': usage}) if new_size > usage: self.hnas.modify_quota(hnas_share_id, new_size) else: raise exception.ShareShrinkingPossibleDataLoss( share_id=share['id']) def _extend_share(self, hnas_share_id, share, new_size): """Extends a share to new size. :param hnas_share_id: HNAS ID of share that will be extended. :param share: model of share that will be extended. :param new_size: New size of share after extend operation. """ self._ensure_share(share, hnas_share_id) old_size = share['size'] available_space = self.hnas.get_stats()[1] LOG.debug("Available space in filesystem: %(space)sG.", {'space': available_space}) if (new_size - old_size) < available_space: self.hnas.modify_quota(hnas_share_id, new_size) else: msg = (_("Share %s cannot be extended due to insufficient space.") % share['id']) raise exception.HNASBackendException(msg=msg) def _delete_share(self, hnas_share_id, share_proto): """Deletes share. It uses tree-delete-job-submit to format and delete virtual-volumes. Quota is deleted with virtual-volume. :param hnas_share_id: HNAS ID of share that will be deleted. :param share_proto: Protocol of share that will be deleted. """ self._check_fs_mounted() if share_proto.lower() == 'nfs': self.hnas.nfs_export_del(hnas_share_id) elif share_proto.lower() == 'cifs': self.hnas.cifs_share_del(hnas_share_id) self.hnas.vvol_delete(hnas_share_id) def _manage_existing(self, share, hnas_share_id): """Manages a share that exists on backend. :param share: share that will be managed. :param hnas_share_id: HNAS ID of share that will be managed. :returns: Returns a dict with size of the share managed and a list of dicts containing its export locations. """ self._ensure_share(share, hnas_share_id) share_size = self.hnas.get_share_quota(hnas_share_id) if share_size is None: msg = (_("The share %s trying to be managed does not have a " "quota limit, please set it before manage.") % share['id']) raise exception.ManageInvalidShare(reason=msg) export_list = self._get_export_locations( share['share_proto'], hnas_share_id) return {'size': share_size, 'export_locations': export_list} def _create_snapshot(self, hnas_share_id, snapshot): """Creates a snapshot of share. It copies the directory and all files to a new directory inside /snapshots/share_id/. :param hnas_share_id: HNAS ID of share for snapshot. :param snapshot: Snapshot that will be created. """ self._ensure_share(snapshot['share'], hnas_share_id) saved_list = [] share_proto = snapshot['share']['share_proto'] self._check_protocol(snapshot['share_id'], share_proto) if share_proto.lower() == 'nfs': saved_list = self.hnas.get_nfs_host_list(hnas_share_id) new_list = [] for access in saved_list: for rw in ('read_write', 'readwrite', 'rw'): access = access.replace(rw, 'ro') new_list.append(access) self.hnas.update_nfs_access_rule(new_list, share_id=hnas_share_id) else: # CIFS if (self.hnas.is_cifs_in_use(hnas_share_id) and not self.cifs_snapshot): msg = _("CIFS snapshot when share is mounted is disabled. 
" "Set hitachi_hnas_allow_cifs_snapshot_while_mounted to" " True or unmount the share to take a snapshot.") raise exception.ShareBackendException(msg=msg) src_path = os.path.join('/shares', hnas_share_id) dest_path = os.path.join('/snapshots', hnas_share_id, snapshot['id']) try: self.hnas.tree_clone(src_path, dest_path) except exception.HNASNothingToCloneException: LOG.warning("Source directory is empty, creating an empty " "directory.") self.hnas.create_directory(dest_path) finally: if share_proto.lower() == 'nfs': self.hnas.update_nfs_access_rule(saved_list, share_id=hnas_share_id) export_locations = [] if snapshot['share'].get('mount_snapshot_support'): self._create_export(hnas_share_id, share_proto, snapshot_id=snapshot['id']) export_locations = self._get_export_locations( share_proto, snapshot['id'], is_snapshot=True) return export_locations def _delete_snapshot(self, share, hnas_share_id, snapshot_id): """Deletes snapshot. It receives the hnas_share_id only to join the path for snapshot. :param hnas_share_id: HNAS ID of share from which snapshot was taken. :param snapshot_id: ID of snapshot. """ self._check_fs_mounted() share_proto = share['share_proto'] if share.get('mount_snapshot_support'): if share_proto.lower() == 'nfs': self.hnas.nfs_export_del(snapshot_id=snapshot_id) elif share_proto.lower() == 'cifs': self.hnas.cifs_share_del(snapshot_id) path = os.path.join('/snapshots', hnas_share_id, snapshot_id) self.hnas.tree_delete(path) path = os.path.join('/snapshots', hnas_share_id) self.hnas.delete_directory(path) def _create_share_from_snapshot(self, share, src_hnas_share_id, hnas_snapshot_id): """Creates a new share from snapshot. It copies everything from snapshot directory to a new vvol, set a quota limit for it and export. :param share: a dict from new share. :param src_hnas_share_id: HNAS ID of share from which snapshot was taken. :param hnas_snapshot_id: HNAS ID from snapshot that will be copied to new share. :returns: Returns a list of dicts containing the new share's export locations. """ dest_path = os.path.join('/shares', share['id']) src_path = os.path.join('/snapshots', src_hnas_share_id, hnas_snapshot_id) # Before copying everything to new vvol, we need to create it, # because we only can transform an empty directory into a vvol. self._check_fs_mounted() self.hnas.vvol_create(share['id']) self.hnas.quota_add(share['id'], share['size']) try: self.hnas.tree_clone(src_path, dest_path) except exception.HNASNothingToCloneException: LOG.warning("Source directory is empty, exporting " "directory.") self._check_protocol(share['id'], share['share_proto']) try: if share['share_proto'].lower() == 'nfs': self.hnas.nfs_export_add(share['id']) else: self.hnas.cifs_share_add(share['id']) except exception.HNASBackendException: with excutils.save_and_reraise_exception(): self.hnas.vvol_delete(share['id']) return self._get_export_locations( share['share_proto'], share['id']) def _check_protocol(self, share_id, protocol): if protocol.lower() not in ('nfs', 'cifs'): msg = _("Only NFS or CIFS protocol are currently supported. 
" "Share provided %(share)s with protocol " "%(proto)s.") % {'share': share_id, 'proto': protocol} raise exception.ShareBackendException(msg=msg) def _get_export_locations(self, share_proto, hnas_id, is_snapshot=False): export_list = [] for ip in (self.hnas_evs_ip, self.hnas_admin_network_ip): if ip: path = self._get_export_path(ip, share_proto, hnas_id, is_snapshot) export_list.append({ "path": path, "is_admin_only": ip == self.hnas_admin_network_ip, "metadata": {}, }) return export_list def _get_export_path(self, ip, share_proto, hnas_id, is_snapshot): r"""Gets and returns export path. :param ip: IP from HNAS EVS configured. :param share_proto: Share or snapshot protocol (NFS or CIFS). :param hnas_id: Entity ID in HNAS, it can be the ID from a share or a snapshot. :param is_snapshot: Boolean to determine if export is related to a share or a snapshot. :return: Complete export path, for example: - In NFS: SHARE: 172.24.44.10:/shares/id SNAPSHOT: 172.24.44.10:/snapshots/id - In CIFS: SHARE and SNAPSHOT: \\172.24.44.10\id """ if share_proto.lower() == 'nfs': if is_snapshot: path = os.path.join('/snapshots', hnas_id) else: path = os.path.join('/shares', hnas_id) export = ':'.join((ip, path)) else: export = r'\\%s\%s' % (ip, hnas_id) return export def _ensure_snapshot(self, snapshot, hnas_snapshot_id): """Ensure that snapshot is exported. :param snapshot: Snapshot that will be checked. :param hnas_snapshot_id: HNAS ID of snapshot that will be checked. :returns: Returns a list of dicts containing the snapshot's export locations or None if mount_snapshot_support is False. """ self._check_protocol(snapshot['share_id'], snapshot['share']['share_proto']) self._check_fs_mounted() self.hnas.check_directory(snapshot['provider_location']) export_list = None if snapshot['share'].get('mount_snapshot_support'): if snapshot['share']['share_proto'].lower() == 'nfs': self.hnas.check_export(hnas_snapshot_id, is_snapshot=True) else: self.hnas.check_cifs(hnas_snapshot_id) export_list = self._get_export_locations( snapshot['share']['share_proto'], hnas_snapshot_id, is_snapshot=True) return export_list def ensure_snapshot(self, context, snapshot, share_server=None): r"""Ensure that snapshot is exported. :param context: The `context.RequestContext` object for the request. :param snapshot: Snapshot that will be checked. :param share_server: Data structure with share server information. Not used by this driver. :returns: Returns a list of dicts containing the EVS IP concatenated with the path of snapshot in the filesystem or None if mount_snapshot_support is False. Example for NFS:: [ { 'path': '172.24.44.10:/snapshots/id', 'metadata': {}, 'is_admin_only': False }, { 'path': '192.168.0.10:/snapshots/id', 'metadata': {}, 'is_admin_only': True } ] Example for CIFS:: [ { 'path': '\\172.24.44.10\id', 'metadata': {}, 'is_admin_only': False }, { 'path': '\\192.168.0.10\id', 'metadata': {}, 'is_admin_only': True } ] """ LOG.debug("Ensuring snapshot in HNAS: %(snap)s.", {'snap': snapshot['id']}) hnas_snapshot_id = self._get_hnas_snapshot_id(snapshot) export_list = self._ensure_snapshot(snapshot, hnas_snapshot_id) LOG.debug("Snapshot ensured in HNAS: %(snap)s, protocol %(proto)s.", {'snap': snapshot['id'], 'proto': snapshot['share']['share_proto']}) return export_list def manage_existing_snapshot(self, snapshot, driver_options): """Manages a snapshot that exists only in HNAS. The snapshot to be managed should be in the path /snapshots/SHARE_ID/SNAPSHOT_ID. 
Also, the size of snapshot should be provided as --driver_options size=. :param snapshot: snapshot that will be managed. :param driver_options: expects only one key 'size'. It must be provided in order to manage a snapshot. :returns: Returns a dict with size of snapshot managed """ try: snapshot_size = int(driver_options.get("size", 0)) except (ValueError, TypeError): msg = _("The size in driver options to manage snapshot " "%(snap_id)s should be an integer, in format " "driver-options size=. Value passed: " "%(size)s.") % {'snap_id': snapshot['id'], 'size': driver_options.get("size")} raise exception.ManageInvalidShareSnapshot(reason=msg) if snapshot_size == 0: msg = _("Snapshot %(snap_id)s has no size specified for manage. " "Please, provide the size with parameter driver-options " "size=.") % {'snap_id': snapshot['id']} raise exception.ManageInvalidShareSnapshot(reason=msg) hnas_share_id = self._get_hnas_share_id(snapshot['share_id']) LOG.debug("Path provided to manage snapshot: %(path)s.", {'path': snapshot['provider_location']}) path_info = snapshot['provider_location'].split('/') if len(path_info) == 4 and path_info[1] == 'snapshots': path_share_id = path_info[2] hnas_snapshot_id = path_info[3] else: msg = (_("Incorrect path %(path)s for manage snapshot " "%(snap_id)s. It should have the following format: " "/snapshots/SHARE_ID/SNAPSHOT_ID.") % {'path': snapshot['provider_location'], 'snap_id': snapshot['id']}) raise exception.ManageInvalidShareSnapshot(reason=msg) if hnas_share_id != path_share_id: msg = _("The snapshot %(snap_id)s does not belong to share " "%(share_id)s.") % {'snap_id': snapshot['id'], 'share_id': snapshot['share_id']} raise exception.ManageInvalidShareSnapshot(reason=msg) if not self.hnas.check_directory(snapshot['provider_location']): msg = _("Snapshot %(snap_id)s does not exist in " "HNAS.") % {'snap_id': hnas_snapshot_id} raise exception.ManageInvalidShareSnapshot(reason=msg) try: self._ensure_snapshot(snapshot, hnas_snapshot_id) except exception.HNASItemNotFoundException: LOG.warning("Export does not exist for snapshot %s, " "creating a new one.", snapshot['id']) self._create_export(hnas_share_id, snapshot['share']['share_proto'], snapshot_id=hnas_snapshot_id) output = {'size': snapshot_size} if snapshot['share'].get('mount_snapshot_support'): export_locations = self._get_export_locations( snapshot['share']['share_proto'], hnas_snapshot_id, is_snapshot=True) output['export_locations'] = export_locations LOG.info("Snapshot %(snap_path)s for share %(shr_id)s was " "successfully managed with ID %(snap_id)s.", {'snap_path': snapshot['provider_location'], 'shr_id': snapshot['share_id'], 'snap_id': snapshot['id']}) return output def unmanage_snapshot(self, snapshot): """Unmanage a share snapshot :param snapshot: Snapshot that will be unmanaged. """ LOG.info("The snapshot with ID %(snap_id)s from share " "%(share_id)s is no longer being managed by Manila. " "However, it is not deleted and can be found in HNAS.", {'snap_id': snapshot['id'], 'share_id': snapshot['share_id']}) def snapshot_update_access(self, context, snapshot, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given snapshot. Drivers should support 2 different cases in this method: 1. Recovery after error - 'access_rules' contains all access rules, 'add_rules' and 'delete_rules' shall be empty. Driver should clear any existent access rules and apply all access rules for given snapshot. This recovery is made at driver start up. 2. 
Adding/Deleting of several access rules - 'access_rules' contains all access rules, 'add_rules' and 'delete_rules' contain rules which should be added/deleted. Driver can ignore rules in 'access_rules' and apply only rules from 'add_rules' and 'delete_rules'. All snapshots rules should be read only. :param context: Current context :param snapshot: Snapshot model with snapshot data. :param access_rules: All access rules for given snapshot :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. :param share_server: None or Share server model """ hnas_snapshot_id = self._get_hnas_snapshot_id(snapshot) self._ensure_snapshot(snapshot, hnas_snapshot_id) access_rules, add_rules, delete_rules = utils.change_rules_to_readonly( access_rules, add_rules, delete_rules) if snapshot['share']['share_proto'].lower() == 'nfs': host_list = [] for rule in access_rules: if rule['access_type'].lower() != 'ip': msg = _("Only IP access type currently supported for NFS. " "Snapshot provided %(snapshot)s with rule type " "%(type)s.") % {'snapshot': snapshot['id'], 'type': rule['access_type']} raise exception.InvalidSnapshotAccess(reason=msg) host_list.append(rule['access_to'] + '(ro)') self.hnas.update_nfs_access_rule(host_list, snapshot_id=hnas_snapshot_id) if host_list: LOG.debug("Snapshot %(snapshot)s has the rules: %(rules)s", {'snapshot': snapshot['id'], 'rules': ', '.join(host_list)}) else: LOG.debug("Snapshot %(snapshot)s has no rules.", {'snapshot': snapshot['id']}) else: if not (add_rules or delete_rules): # cifs recovery mode self._clean_cifs_access_list(hnas_snapshot_id, is_snapshot=True) self._cifs_allow_access(snapshot, hnas_snapshot_id, access_rules, is_snapshot=True) else: self._cifs_deny_access(snapshot, hnas_snapshot_id, delete_rules, is_snapshot=True) self._cifs_allow_access(snapshot, hnas_snapshot_id, add_rules, is_snapshot=True) manila-10.0.0/manila/share/drivers/hitachi/hnas/ssh.py0000664000175000017500000010523113656750227022624 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hitachi Data Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
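# Descriptive note: this module provides the SSH layer used by the HNAS
# driver. The HNASSSHBackend class runs HNAS 'ssc' console commands over an
# SSH connection, and the helper classes defined at the end of the module
# (Export, JobStatus, JobSubmit, Filesystem, Quota, CIFSPermissions and
# CIFSShare) parse the plain-text output of those commands.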
from oslo_concurrency import processutils from oslo_log import log from oslo_utils import strutils from oslo_utils import units import paramiko import six import os import time from manila import exception from manila.i18n import _ from manila import utils as mutils LOG = log.getLogger(__name__) class HNASSSHBackend(object): def __init__(self, hnas_ip, hnas_username, hnas_password, ssh_private_key, cluster_admin_ip0, evs_id, evs_ip, fs_name, job_timeout): self.ip = hnas_ip self.port = 22 self.user = hnas_username self.password = hnas_password self.priv_key = ssh_private_key self.admin_ip0 = cluster_admin_ip0 self.evs_id = six.text_type(evs_id) self.fs_name = fs_name self.evs_ip = evs_ip self.sshpool = None self.job_timeout = job_timeout LOG.debug("Hitachi HNAS Driver using SSH backend.") def get_stats(self): """Get the stats from file-system. :returns: fs_capacity.size = Total size from filesystem. available_space = Free space currently on filesystem. dedupe = True if dedupe is enabled on filesystem. """ command = ['df', '-a', '-f', self.fs_name] try: output, err = self._execute(command) except processutils.ProcessExecutionError: msg = _("Could not get HNAS backend stats.") LOG.exception(msg) raise exception.HNASBackendException(msg=msg) line = output.split('\n') fs = Filesystem(line[3]) available_space = fs.size - fs.used return fs.size, available_space, fs.dedupe def nfs_export_add(self, share_id, snapshot_id=None): if snapshot_id is not None: path = os.path.join('/snapshots', share_id, snapshot_id) name = os.path.join('/snapshots', snapshot_id) else: path = name = os.path.join('/shares', share_id) command = ['nfs-export', 'add', '-S', 'disable', '-c', '127.0.0.1', name, self.fs_name, path] try: self._execute(command) except processutils.ProcessExecutionError: msg = _("Could not create NFS export %s.") % name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def nfs_export_del(self, share_id=None, snapshot_id=None): if share_id is not None: name = os.path.join('/shares', share_id) elif snapshot_id is not None: name = os.path.join('/snapshots', snapshot_id) else: msg = _("NFS export not specified to delete.") raise exception.HNASBackendException(msg=msg) command = ['nfs-export', 'del', name] try: self._execute(command) except processutils.ProcessExecutionError as e: if 'does not exist' in e.stderr: LOG.warning("Export %s does not exist on " "backend anymore.", name) else: msg = _("Could not delete NFS export %s.") % name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def cifs_share_add(self, share_id, snapshot_id=None): if snapshot_id is not None: path = r'\\snapshots\\' + share_id + r'\\' + snapshot_id name = snapshot_id else: path = r'\\shares\\' + share_id name = share_id command = ['cifs-share', 'add', '-S', 'disable', '--enable-abe', '--nodefaultsaa', name, self.fs_name, path] try: self._execute(command) except processutils.ProcessExecutionError: msg = _("Could not create CIFS share %s.") % name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def cifs_share_del(self, name): command = ['cifs-share', 'del', '--target-label', self.fs_name, name] try: self._execute(command) except processutils.ProcessExecutionError as e: if e.exit_code == 1: LOG.warning("CIFS share %s does not exist on " "backend anymore.", name) else: msg = _("Could not delete CIFS share %s.") % name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def get_nfs_host_list(self, share_id): export = self._get_export(share_id) return 
export[0].export_configuration def update_nfs_access_rule(self, host_list, share_id=None, snapshot_id=None): if share_id is not None: name = os.path.join('/shares', share_id) elif snapshot_id is not None: name = os.path.join('/snapshots', snapshot_id) else: msg = _("No share/snapshot provided to update NFS rules.") raise exception.HNASBackendException(msg=msg) command = ['nfs-export', 'mod', '-c'] if len(host_list) == 0: command.append('127.0.0.1') else: string_command = '"' + six.text_type(host_list[0]) for i in range(1, len(host_list)): string_command += ',' + (six.text_type(host_list[i])) string_command += '"' command.append(string_command) command.append(name) try: self._execute(command) except processutils.ProcessExecutionError: msg = _("Could not update access rules for NFS export %s.") % name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def cifs_allow_access(self, name, user, permission, is_snapshot=False): command = ['cifs-saa', 'add', '--target-label', self.fs_name, name, user, permission] try: self._execute(command) except processutils.ProcessExecutionError as e: if 'already listed as a user' in e.stderr: if is_snapshot: LOG.debug('User %(user)s already allowed to access ' 'snapshot %(snapshot)s.', { 'user': user, 'snapshot': name, }) else: self._update_cifs_rule(name, user, permission) else: entity_type = "share" if is_snapshot: entity_type = "snapshot" msg = _("Could not add access of user %(user)s to " "%(entity_type)s %(name)s.") % { 'user': user, 'name': name, 'entity_type': entity_type, } LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def _update_cifs_rule(self, name, user, permission): LOG.debug('User %(user)s already allowed to access ' 'share %(share)s. Updating access level...', { 'user': user, 'share': name, }) command = ['cifs-saa', 'change', '--target-label', self.fs_name, name, user, permission] try: self._execute(command) except processutils.ProcessExecutionError: msg = _("Could not update access of user %(user)s to " "share %(share)s.") % { 'user': user, 'share': name, } LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def cifs_deny_access(self, name, user, is_snapshot=False): command = ['cifs-saa', 'delete', '--target-label', self.fs_name, name, user] entity_type = "share" if is_snapshot: entity_type = "snapshot" try: self._execute(command) except processutils.ProcessExecutionError as e: if ('not listed as a user' in e.stderr or 'Could not delete user/group' in e.stderr): LOG.warning('User %(user)s already not allowed to access ' '%(entity_type)s %(name)s.', { 'entity_type': entity_type, 'user': user, 'name': name }) else: msg = _("Could not delete access of user %(user)s to " "%(entity_type)s %(name)s.") % { 'user': user, 'name': name, 'entity_type': entity_type, } LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def list_cifs_permissions(self, hnas_share_id): command = ['cifs-saa', 'list', '--target-label', self.fs_name, hnas_share_id] try: output, err = self._execute(command) except processutils.ProcessExecutionError as e: if 'No entries for this share' in e.stderr: LOG.debug('Share %(share)s does not have any permission ' 'added.', {'share': hnas_share_id}) return [] else: msg = _("Could not list access of share %s.") % hnas_share_id LOG.exception(msg) raise exception.HNASBackendException(msg=msg) permissions = CIFSPermissions(output) return permissions.permission_list def tree_clone(self, src_path, dest_path): command = ['tree-clone-job-submit', '-e', '-f', self.fs_name, src_path, 
dest_path] try: output, err = self._execute(command) except processutils.ProcessExecutionError as e: if ('Cannot find any clonable files in the source directory' in e.stderr): msg = _("Source path %s is empty.") % src_path LOG.debug(msg) raise exception.HNASNothingToCloneException(msg=msg) else: msg = _("Could not submit tree clone job to clone from %(src)s" " to %(dest)s.") % {'src': src_path, 'dest': dest_path} LOG.exception(msg) raise exception.HNASBackendException(msg=msg) job_submit = JobSubmit(output) if job_submit.request_status == 'Request submitted successfully': job_id = job_submit.job_id job_status = None progress = '' job_rechecks = 0 starttime = time.time() deadline = starttime + self.job_timeout while (not job_status or job_status.job_state != "Job was completed"): command = ['tree-clone-job-status', job_id] output, err = self._execute(command) job_status = JobStatus(output) if job_status.job_state == 'Job failed': break old_progress = progress progress = job_status.data_bytes_processed if old_progress == progress: job_rechecks += 1 now = time.time() if now > deadline: command = ['tree-clone-job-abort', job_id] self._execute(command) LOG.error("Timeout in snapshot creation from " "source path %s.", src_path) msg = _("Share snapshot of source path %s " "was not created.") % src_path raise exception.HNASBackendException(msg=msg) else: time.sleep(job_rechecks ** 2) else: job_rechecks = 0 if (job_status.job_state, job_status.job_status, job_status.directories_missing, job_status.files_missing) == ("Job was completed", "Success", '0', '0'): LOG.debug("Snapshot of source path %(src)s to destination " "path %(dest)s created successfully.", {'src': src_path, 'dest': dest_path}) else: LOG.error('Error creating snapshot of source path %s.', src_path) msg = _('Snapshot of source path %s was not ' 'created.') % src_path raise exception.HNASBackendException(msg=msg) def tree_delete(self, path): command = ['tree-delete-job-submit', '--confirm', '-f', self.fs_name, path] try: self._execute(command) except processutils.ProcessExecutionError as e: if 'Source path: Cannot access' in e.stderr: LOG.warning("Attempted to delete path %s " "but it does not exist.", path) else: msg = _("Could not submit tree delete job to delete path " "%s.") % path LOG.exception(msg) raise exception.HNASBackendException(msg=msg) @mutils.retry(exception=exception.HNASSSCContextChange, wait_random=True, retries=5) def create_directory(self, dest_path): self._locked_selectfs('create', dest_path) if not self.check_directory(dest_path): msg = _("Command to create directory %(path)s was run in another " "filesystem instead of %(fs)s.") % { 'path': dest_path, 'fs': self.fs_name, } LOG.warning(msg) raise exception.HNASSSCContextChange(msg=msg) @mutils.retry(exception=exception.HNASSSCContextChange, wait_random=True, retries=5) def delete_directory(self, path): try: self._locked_selectfs('delete', path) except exception.HNASDirectoryNotEmpty: pass else: if self.check_directory(path): msg = _("Command to delete empty directory %(path)s was run in" " another filesystem instead of %(fs)s.") % { 'path': path, 'fs': self.fs_name, } LOG.debug(msg) raise exception.HNASSSCContextChange(msg=msg) @mutils.retry(exception=exception.HNASSSCIsBusy, wait_random=True, retries=5) def check_directory(self, path): command = ['path-to-object-number', '-f', self.fs_name, path] try: self._execute(command) except processutils.ProcessExecutionError as e: if 'path-to-object-number is currently running' in e.stdout: msg = (_("SSC command 
path-to-object-number for path %s " "is currently busy.") % path) raise exception.HNASSSCIsBusy(msg=msg) if 'Unable to locate component:' in e.stdout: LOG.debug("Cannot find %(path)s: %(out)s", {'path': path, 'out': e.stdout}) return False else: msg = _("Could not check if path %s exists.") % path LOG.exception(msg) raise exception.HNASBackendException(msg=msg) return True def check_fs_mounted(self): command = ['df', '-a', '-f', self.fs_name] output, err = self._execute(command) if "not found" in output: msg = _("Filesystem %s does not exist or it is not available " "in the current EVS context.") % self.fs_name LOG.error(msg) raise exception.HNASItemNotFoundException(msg=msg) else: line = output.split('\n') fs = Filesystem(line[3]) return fs.mounted def mount(self): command = ['mount', self.fs_name] try: self._execute(command) except processutils.ProcessExecutionError as e: if 'file system is already mounted' not in e.stderr: msg = _("Failed to mount filesystem %s.") % self.fs_name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def vvol_create(self, vvol_name): # create a virtual-volume inside directory path = '/shares/' + vvol_name command = ['virtual-volume', 'add', '--ensure', self.fs_name, vvol_name, path] try: self._execute(command) except processutils.ProcessExecutionError: msg = _("Failed to create vvol %s.") % vvol_name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def vvol_delete(self, vvol_name): path = '/shares/' + vvol_name # Virtual-volume and quota are deleted together command = ['tree-delete-job-submit', '--confirm', '-f', self.fs_name, path] try: self._execute(command) except processutils.ProcessExecutionError as e: if 'Source path: Cannot access' in e.stderr: LOG.warning("Share %s does not exist.", vvol_name) else: msg = _("Failed to delete vvol %s.") % vvol_name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def quota_add(self, vvol_name, vvol_quota): str_quota = six.text_type(vvol_quota) + 'G' command = ['quota', 'add', '--usage-limit', str_quota, '--usage-hard-limit', 'yes', self.fs_name, vvol_name] try: self._execute(command) except processutils.ProcessExecutionError: msg = _("Failed to add %(quota)s quota to vvol " "%(vvol)s.") % {'quota': str_quota, 'vvol': vvol_name} LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def modify_quota(self, vvol_name, new_size): str_quota = six.text_type(new_size) + 'G' command = ['quota', 'mod', '--usage-limit', str_quota, self.fs_name, vvol_name] try: self._execute(command) except processutils.ProcessExecutionError: msg = _("Failed to update quota of vvol %(vvol)s to " "%(quota)s.") % {'quota': str_quota, 'vvol': vvol_name} LOG.exception(msg) raise exception.HNASBackendException(msg=msg) def check_vvol(self, vvol_name): command = ['virtual-volume', 'list', '--verbose', self.fs_name, vvol_name] try: self._execute(command) except processutils.ProcessExecutionError: msg = _("Virtual volume %s does not exist.") % vvol_name LOG.exception(msg) raise exception.HNASItemNotFoundException(msg=msg) def check_quota(self, vvol_name): command = ['quota', 'list', '--verbose', self.fs_name, vvol_name] try: output, err = self._execute(command) except processutils.ProcessExecutionError: msg = _("Could not check quota of vvol %s.") % vvol_name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) if 'No quotas matching specified filter criteria' in output: msg = _("Virtual volume %s does not have any" " quota.") % vvol_name LOG.error(msg) raise 
exception.HNASItemNotFoundException(msg=msg) def check_export(self, vvol_name, is_snapshot=False): export = self._get_export(vvol_name, is_snapshot=is_snapshot) if (vvol_name in export[0].export_name and self.fs_name in export[0].file_system_label): return else: msg = _("Export %s does not exist.") % export[0].export_name LOG.error(msg) raise exception.HNASItemNotFoundException(msg=msg) def check_cifs(self, vvol_name): output = self._cifs_list(vvol_name) cifs_share = CIFSShare(output) if self.fs_name != cifs_share.fs: msg = _("CIFS share %(share)s is not located in " "configured filesystem " "%(fs)s.") % {'share': vvol_name, 'fs': self.fs_name} LOG.error(msg) raise exception.HNASItemNotFoundException(msg=msg) def is_cifs_in_use(self, vvol_name): output = self._cifs_list(vvol_name) cifs_share = CIFSShare(output) return cifs_share.is_mounted def _cifs_list(self, vvol_name): command = ['cifs-share', 'list', vvol_name] try: output, err = self._execute(command) except processutils.ProcessExecutionError as e: if 'does not exist' in e.stderr: msg = _("CIFS share %(share)s was not found in EVS " "%(evs_id)s") % {'share': vvol_name, 'evs_id': self.evs_id} LOG.exception(msg) raise exception.HNASItemNotFoundException(msg=msg) else: msg = _("Could not list CIFS shares by vvol name " "%s.") % vvol_name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) return output def get_share_quota(self, share_id): command = ['quota', 'list', self.fs_name, share_id] output, err = self._execute(command) quota = Quota(output) if quota.limit is None: return None if quota.limit_unit == 'TB': return quota.limit * units.Ki elif quota.limit_unit == 'GB': return quota.limit else: msg = _("Share %s does not support quota values " "below 1G.") % share_id LOG.error(msg) raise exception.HNASBackendException(msg=msg) def get_share_usage(self, share_id): command = ['quota', 'list', self.fs_name, share_id] output, err = self._execute(command) quota = Quota(output) if quota.usage is None: msg = _("Virtual volume %s does not have any quota.") % share_id LOG.error(msg) raise exception.HNASItemNotFoundException(msg=msg) else: bytes_usage = strutils.string_to_bytes(six.text_type(quota.usage) + quota.usage_unit) return bytes_usage / units.Gi def _get_export(self, name, is_snapshot=False): if is_snapshot: name = '/snapshots/' + name else: name = '/shares/' + name command = ['nfs-export', 'list ', name] export_list = [] try: output, err = self._execute(command) except processutils.ProcessExecutionError as e: if 'does not exist' in e.stderr: msg = _("Export %(name)s was not found in EVS " "%(evs_id)s.") % { 'name': name, 'evs_id': self.evs_id, } LOG.exception(msg) raise exception.HNASItemNotFoundException(msg=msg) else: msg = _("Could not list NFS exports by name %s.") % name LOG.exception(msg) raise exception.HNASBackendException(msg=msg) items = output.split('Export name') if items[0][0] == '\n': items.pop(0) for i in range(0, len(items)): export_list.append(Export(items[i])) return export_list @mutils.retry(exception=exception.HNASConnException, wait_random=True) def _execute(self, commands): command = ['ssc', '127.0.0.1'] if self.admin_ip0 is not None: command = ['ssc', '--smuauth', self.admin_ip0] command += ['console-context', '--evs', self.evs_id] commands = command + commands mutils.check_ssh_injection(commands) commands = ' '.join(commands) if not self.sshpool: self.sshpool = mutils.SSHPool(ip=self.ip, port=self.port, conn_timeout=None, login=self.user, password=self.password, privatekey=self.priv_key) with 
self.sshpool.item() as ssh: ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) try: out, err = processutils.ssh_execute(ssh, commands, check_exit_code=True) LOG.debug("Command %(cmd)s result: out = %(out)s - err = " "%(err)s.", { 'cmd': commands, 'out': out, 'err': err, }) return out, err except processutils.ProcessExecutionError as e: if 'Failed to establish SSC connection' in e.stderr: msg = _("Failed to establish SSC connection.") LOG.debug(msg) raise exception.HNASConnException(msg=msg) else: LOG.debug("Error running SSH command. " "Command %(cmd)s result: out = %(out)s - err = " "%(err)s - exit = %(exit)s.", { 'cmd': e.cmd, 'out': e.stdout, 'err': e.stderr, 'exit': e.exit_code, }) raise @mutils.synchronized("hitachi_hnas_select_fs", external=True) def _locked_selectfs(self, op, path): if op == 'create': command = ['selectfs', self.fs_name, '\n', 'ssc', '127.0.0.1', 'console-context', '--evs', self.evs_id, 'mkdir', '-p', path] try: self._execute(command) except processutils.ProcessExecutionError as e: if "Current file system invalid: VolumeNotFound" in e.stderr: msg = _("Command to create directory %s failed due to " "context change.") % path LOG.debug(msg) raise exception.HNASSSCContextChange(msg=msg) else: msg = _("Failed to create directory %s.") % path LOG.exception(msg) raise exception.HNASBackendException(msg=msg) if op == 'delete': command = ['selectfs', self.fs_name, '\n', 'ssc', '127.0.0.1', 'console-context', '--evs', self.evs_id, 'rmdir', path] try: self._execute(command) except processutils.ProcessExecutionError as e: if 'DirectoryNotEmpty' in e.stderr: msg = _("Share %s has more snapshots.") % path LOG.debug(msg) raise exception.HNASDirectoryNotEmpty(msg=msg) elif 'cannot remove' in e.stderr and 'NotFound' in e.stderr: LOG.warning("Attempted to delete path %s but it does " "not exist.", path) elif 'Current file system invalid: VolumeNotFound' in e.stderr: msg = _("Command to delete empty directory %s failed due " "to context change.") % path LOG.debug(msg) raise exception.HNASSSCContextChange(msg=msg) else: msg = _("Failed to delete directory %s.") % path LOG.exception(msg) raise exception.HNASBackendException(msg=msg) class Export(object): def __init__(self, data): if data: split_data = data.split('Export configuration:\n') items = split_data[0].split('\n') self.export_name = items[0].split(':')[1].strip() self.export_path = items[1].split(':')[1].strip() if '*** not available ***' in items[2]: self.file_system_info = items[2].split(':')[1].strip() index = 0 else: self.file_system_label = items[2].split(':')[1].strip() self.file_system_size = items[3].split(':')[1].strip() self.file_system_free_space = items[4].split(':')[1].strip() self.file_system_state = items[5].split(':')[1] self.formatted = items[6].split('=')[1].strip() self.mounted = items[7].split('=')[1].strip() self.failed = items[8].split('=')[1].strip() self.thin_provisioned = items[9].split('=')[1].strip() index = 7 self.access_snapshots = items[3 + index].split(':')[1].strip() self.display_snapshots = items[4 + index].split(':')[1].strip() self.read_caching = items[5 + index].split(':')[1].strip() self.disaster_recovery_setting = items[6 + index].split(':')[1] self.recovered = items[7 + index].split('=')[1].strip() self.transfer_setting = items[8 + index].split('=')[1].strip() self.export_configuration = [] export_config = split_data[1].split('\n') for i in range(0, len(export_config)): if any(j.isdigit() or j.isalpha() for j in export_config[i]): self.export_configuration.append(export_config[i]) 
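# Parser for the output of 'tree-clone-job-status'. Note that, like the other
# parser classes in this module, attributes are extracted from fixed line and
# token positions, so it assumes the exact report layout emitted by the HNAS
# console.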
class JobStatus(object): def __init__(self, data): if data: lines = data.split("\n") self.job_id = lines[0].split()[3] self.physical_node = lines[2].split()[3] self.evs = lines[3].split()[2] self.volume_number = lines[4].split()[3] self.fs_id = lines[5].split()[4] self.fs_name = lines[6].split()[4] self.source_path = lines[7].split()[3] self.creation_time = " ".join(lines[8].split()[3:5]) self.destination_path = lines[9].split()[3] self.ensure_path_exists = lines[10].split()[5] self.job_state = " ".join(lines[12].split()[3:]) self.job_started = " ".join(lines[14].split()[2:4]) self.job_ended = " ".join(lines[15].split()[2:4]) self.job_status = lines[16].split()[2] error_details_line = lines[17].split() if len(error_details_line) > 3: self.error_details = " ".join(error_details_line[3:]) else: self.error_details = None self.directories_processed = lines[18].split()[3] self.files_processed = lines[19].split()[3] self.data_bytes_processed = lines[20].split()[4] self.directories_missing = lines[21].split()[4] self.files_missing = lines[22].split()[4] self.files_skipped = lines[23].split()[4] skipping_details_line = lines[24].split() if len(skipping_details_line) > 3: self.skipping_details = " ".join(skipping_details_line[3:]) else: self.skipping_details = None class JobSubmit(object): def __init__(self, data): if data: split_data = data.replace(".", "").split() self.request_status = " ".join(split_data[1:4]) self.job_id = split_data[8] class Filesystem(object): def __init__(self, data): if data: items = data.split() self.id = items[0] self.label = items[1] self.evs = items[2] self.size = float(items[3]) self.size_measure = items[4] if self.size_measure == 'TB': self.size = self.size * units.Ki if items[5:7] == ["Not", "mounted"]: self.mounted = False else: self.mounted = True self.used = float(items[5]) self.used_measure = items[6] if self.used_measure == 'TB': self.used = self.used * units.Ki self.dedupe = 'dedupe enabled' in data class Quota(object): def __init__(self, data): if data: if 'No quotas matching' in data: self.type = None self.target = None self.usage = None self.usage_unit = None self.limit = None self.limit_unit = None else: items = data.split() self.type = items[2] self.target = items[6] self.usage = items[9] self.usage_unit = items[10] if items[13] == 'Unset': self.limit = None else: self.limit = float(items[13]) self.limit_unit = items[14] class CIFSPermissions(object): def __init__(self, data): self.permission_list = [] hnas_cifs_permissions = [('Allow Read', 'ar'), ('Allow Change & Read', 'acr'), ('Allow Full Control', 'af'), ('Deny Read', 'dr'), ('Deny Change & Read', 'dcr'), ('Deny Full Control', 'df')] lines = data.split('\n') for line in lines: filtered = list(filter(lambda x: x[0] in line, hnas_cifs_permissions)) if len(filtered) == 1: token, permission = filtered[0] user = line.split(token)[1:][0].strip() self.permission_list.append((user, permission)) class CIFSShare(object): def __init__(self, data): lines = data.split('\n') for line in lines: if 'File system label' in line: self.fs = line.split(': ')[1] elif 'Share users' in line: users = line.split(': ') self.is_mounted = users[1] != '0' manila-10.0.0/manila/share/share_types.py0000664000175000017500000003334113656750227020337 0ustar zuulzuul00000000000000# Copyright (c) 2014 OpenStack Foundation. # Copyright (c) 2015 Tom Barron. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Built-in share type properties.""" import re from oslo_config import cfg from oslo_db import exception as db_exception from oslo_log import log from oslo_utils import strutils from oslo_utils import uuidutils import six from manila.common import constants from manila import context from manila import db from manila import exception from manila.i18n import _ CONF = cfg.CONF LOG = log.getLogger(__name__) def create(context, name, extra_specs=None, is_public=True, projects=None, description=None): """Creates share types.""" extra_specs = extra_specs or {} projects = projects or [] try: get_valid_required_extra_specs(extra_specs) get_valid_optional_extra_specs(extra_specs) except exception.InvalidExtraSpec as e: raise exception.InvalidShareType(reason=six.text_type(e)) extra_specs = sanitize_extra_specs(extra_specs) try: type_ref = db.share_type_create(context, dict(name=name, description=description, extra_specs=extra_specs, is_public=is_public), projects=projects) except db_exception.DBError: LOG.exception('DB error.') raise exception.ShareTypeCreateFailed(name=name, extra_specs=extra_specs) return type_ref def sanitize_extra_specs(extra_specs): """Post process extra specs here if necessary""" az_spec = constants.ExtraSpecs.AVAILABILITY_ZONES if az_spec in extra_specs: extra_specs[az_spec] = sanitize_csv(extra_specs[az_spec]) return extra_specs def update(context, id, name, description, is_public=None): """Update share type by id.""" values = {} if name: values.update({'name': name}) if description == "": values.update({'description': None}) elif description: values.update({'description': description}) if is_public is not None: values.update({'is_public': is_public}) try: db.share_type_update(context, id, values) except db_exception.DBError: LOG.exception('DB error.') raise exception.ShareTypeUpdateFailed(id=id) def destroy(context, id): """Marks share types as deleted.""" if id is None: msg = _("id cannot be None") raise exception.InvalidShareType(reason=msg) else: db.share_type_destroy(context, id) def get_all_types(context, inactive=0, search_opts=None): """Get all non-deleted share_types. 
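Optionally filters the result: search_opts may contain 'is_public' and
'extra_specs' (inside which 'availability_zones' is treated as a
comma-separated list); any other search option yields an empty dict.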
""" search_opts = search_opts or {} filters = {} if 'is_public' in search_opts: filters['is_public'] = search_opts.pop('is_public') share_types = db.share_type_get_all(context, inactive, filters=filters) for type_name, type_args in share_types.items(): required_extra_specs = {} try: required_extra_specs = get_valid_required_extra_specs( type_args['extra_specs']) except exception.InvalidExtraSpec: LOG.exception('Share type %(share_type)s has invalid required' ' extra specs.', {'share_type': type_name}) type_args['required_extra_specs'] = required_extra_specs search_vars = {} availability_zones = search_opts.get('extra_specs', {}).pop( 'availability_zones', None) extra_specs = search_opts.pop('extra_specs', {}) if extra_specs: search_vars['extra_specs'] = extra_specs if availability_zones: search_vars['availability_zones'] = availability_zones.split(',') if search_opts: # No other search options are currently supported return {} elif not search_vars: return share_types LOG.debug("Searching by: %s", search_vars) def _check_extra_specs_match(share_type, searchdict): for k, v in searchdict.items(): if (k not in share_type['extra_specs'].keys() or share_type['extra_specs'][k] != v): return False return True def _check_availability_zones_match(share_type, availability_zones): type_azs = share_type['extra_specs'].get('availability_zones') if type_azs: type_azs = type_azs.split(',') return set(availability_zones).issubset(set(type_azs)) return True # search_option to filter_name mapping. filter_mapping = { 'extra_specs': _check_extra_specs_match, 'availability_zones': _check_availability_zones_match, } result = {} for type_name, type_args in share_types.items(): # go over all filters in the list (*AND* operation) type_matches = True for opt, value in search_vars.items(): try: filter_func = filter_mapping[opt] except KeyError: # no such filter - ignore it, go to next filter continue else: if not filter_func(type_args, value): type_matches = False break if type_matches: result[type_name] = type_args return result def get_share_type(ctxt, id, expected_fields=None): """Retrieves single share type by id.""" if id is None: msg = _("id cannot be None") raise exception.InvalidShareType(reason=msg) if ctxt is None: ctxt = context.get_admin_context() return db.share_type_get(ctxt, id, expected_fields=expected_fields) def get_share_type_by_name(context, name): """Retrieves single share type by name.""" if name is None: msg = _("name cannot be None") raise exception.InvalidShareType(reason=msg) return db.share_type_get_by_name(context, name) def get_share_type_by_name_or_id(context, share_type=None): if not share_type: share_type_ref = get_default_share_type(context) if not share_type_ref: msg = _("Default share type not found.") raise exception.ShareTypeNotFound(message=msg) return share_type_ref if uuidutils.is_uuid_like(share_type): return get_share_type(context, share_type) else: return get_share_type_by_name(context, share_type) def get_default_share_type(ctxt=None): """Get the default share type.""" name = CONF.default_share_type if name is None: return {} if ctxt is None: ctxt = context.get_admin_context() share_type = {} try: share_type = get_share_type_by_name(ctxt, name) required_extra_specs = get_valid_required_extra_specs( share_type['extra_specs']) share_type['required_extra_specs'] = required_extra_specs return share_type except exception.ShareTypeNotFoundByName as e: # Couldn't find share type with the name in default_share_type # flag, record this issue and move on # TODO(zhiteng) consider add 
notification to warn admin LOG.exception('Default share type is not found, ' 'please check default_share_type config: %s', e) except exception.InvalidExtraSpec as ex: LOG.exception('Default share type has invalid required extra' ' specs: %s', ex) def get_share_type_extra_specs(share_type_id, key=False): share_type = get_share_type(context.get_admin_context(), share_type_id) extra_specs = share_type['extra_specs'] if key: if extra_specs.get(key): return extra_specs.get(key) else: return False else: return extra_specs def get_required_extra_specs(): return constants.ExtraSpecs.REQUIRED def get_optional_extra_specs(): return constants.ExtraSpecs.OPTIONAL def get_tenant_visible_extra_specs(): return constants.ExtraSpecs.TENANT_VISIBLE def get_boolean_extra_specs(): return constants.ExtraSpecs.BOOLEAN def is_valid_required_extra_spec(key, value): """Validates required extra_spec value. :param key: extra_spec name :param value: extra_spec value :return: None if provided extra_spec is not required True/False if extra_spec is required and valid or not. """ if key not in get_required_extra_specs(): return if key == constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS: return strutils.bool_from_string(value, default=None) is not None return False def get_valid_required_extra_specs(extra_specs): """Validates and returns required extra specs from dict. Raises InvalidExtraSpec if extra specs are not valid, or if any required extra specs are missing. """ extra_specs = extra_specs or {} missed_extra_specs = set(get_required_extra_specs()) - set(extra_specs) if missed_extra_specs: specs = ",".join(missed_extra_specs) msg = _("Required extra specs '%s' not specified.") % specs raise exception.InvalidExtraSpec(reason=msg) required_extra_specs = {} for k in get_required_extra_specs(): value = extra_specs.get(k, '') if not is_valid_required_extra_spec(k, value): msg = _("Value of required extra_spec %s is not valid") % k raise exception.InvalidExtraSpec(reason=msg) required_extra_specs[k] = value return required_extra_specs def is_valid_csv(extra_spec_value): if not isinstance(extra_spec_value, six.string_types): extra_spec_value = six.text_type(extra_spec_value) values = extra_spec_value.split(',') return all([v.strip() for v in values]) def sanitize_csv(csv_string): return ','.join(value.strip() for value in csv_string.split(',') if (csv_string and value)) def is_valid_optional_extra_spec(key, value): """Validates optional but standardized extra_spec value. :param key: extra_spec name :param value: extra_spec value :return: None if provided extra_spec is not required True/False if extra_spec is required and valid or not. """ if key not in get_optional_extra_specs(): return if key == constants.ExtraSpecs.SNAPSHOT_SUPPORT: return parse_boolean_extra_spec(key, value) is not None elif key == constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT: return parse_boolean_extra_spec(key, value) is not None elif key == constants.ExtraSpecs.REVERT_TO_SNAPSHOT_SUPPORT: return parse_boolean_extra_spec(key, value) is not None elif key == constants.ExtraSpecs.REPLICATION_TYPE_SPEC: return value in constants.ExtraSpecs.REPLICATION_TYPES elif key == constants.ExtraSpecs.MOUNT_SNAPSHOT_SUPPORT: return parse_boolean_extra_spec(key, value) is not None elif key == constants.ExtraSpecs.AVAILABILITY_ZONES: return is_valid_csv(value) return False def get_valid_optional_extra_specs(extra_specs): """Validates and returns optional/standard extra specs from dict. Raises InvalidExtraSpec if extra specs are not valid. 
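    A minimal illustrative call (the extra spec values are made up)::

        get_valid_optional_extra_specs({
            'snapshot_support': 'True',
            'driver_handles_share_servers': 'False',
            'vendor:persona': 'gold',
        })
        # returns {'snapshot_support': 'True'}; required and
        # vendor-specific extra specs are simply ignored by this helper.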
""" extra_specs = extra_specs or {} present_optional_extra_spec_keys = set(extra_specs).intersection( set(get_optional_extra_specs())) optional_extra_specs = {} for key in present_optional_extra_spec_keys: value = extra_specs.get(key, '') if not is_valid_optional_extra_spec(key, value): msg = _("Value of optional extra_spec %s is not valid.") % key raise exception.InvalidExtraSpec(reason=msg) optional_extra_specs[key] = value return optional_extra_specs def add_share_type_access(context, share_type_id, project_id): """Add access to share type for project_id.""" if share_type_id is None: msg = _("share_type_id cannot be None") raise exception.InvalidShareType(reason=msg) return db.share_type_access_add(context, share_type_id, project_id) def remove_share_type_access(context, share_type_id, project_id): """Remove access to share type for project_id.""" if share_type_id is None: msg = _("share_type_id cannot be None") raise exception.InvalidShareType(reason=msg) return db.share_type_access_remove(context, share_type_id, project_id) def get_extra_specs_from_share(share): type_id = share.get('share_type_id', None) return get_share_type_extra_specs(type_id) def parse_boolean_extra_spec(extra_spec_key, extra_spec_value): """Parse extra spec values of the form ' True' or ' False' This method returns the boolean value of an extra spec value. If the value does not conform to the standard boolean pattern, it raises an InvalidExtraSpec exception. """ if not isinstance(extra_spec_value, six.string_types): extra_spec_value = six.text_type(extra_spec_value) match = re.match(r'^\s*(?PTrue|False)$', extra_spec_value.strip(), re.IGNORECASE) if match: extra_spec_value = match.group('value') try: return strutils.bool_from_string(extra_spec_value, strict=True) except ValueError: msg = (_('Invalid boolean extra spec %(key)s : %(value)s') % {'key': extra_spec_key, 'value': extra_spec_value}) raise exception.InvalidExtraSpec(reason=msg) manila-10.0.0/manila/share/hooks/0000775000175000017500000000000013656750362016556 5ustar zuulzuul00000000000000manila-10.0.0/manila/share/hooks/__init__.py0000664000175000017500000000000013656750227020655 0ustar zuulzuul00000000000000manila-10.0.0/manila/share/driver.py0000664000175000017500000035502113656750227017306 0ustar zuulzuul00000000000000# Copyright 2012 NetApp # Copyright 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Drivers for shares. """ import six import time from oslo_config import cfg from oslo_log import log from manila.common import constants from manila import exception from manila.i18n import _ from manila import network from manila import utils LOG = log.getLogger(__name__) share_opts = [ # NOTE(rushiagr): Reasonable to define this option at only one place. 
cfg.IntOpt( 'num_shell_tries', default=3, help='Number of times to attempt to run flakey shell commands.'), cfg.IntOpt( 'reserved_share_percentage', default=0, help='The percentage of backend capacity reserved.'), cfg.StrOpt( 'share_backend_name', help='The backend name for a given driver implementation.'), cfg.StrOpt( 'network_config_group', help="Name of the configuration group in the Manila conf file " "to look for network config options." "If not set, the share backend's config group will be used." "If an option is not found within provided group, then " "'DEFAULT' group will be used for search of option."), cfg.BoolOpt( 'driver_handles_share_servers', help="There are two possible approaches for share drivers in Manila. " "First is when share driver is able to handle share-servers and " "second when not. Drivers can support either both or only one " "of these approaches. So, set this opt to True if share driver " "is able to handle share servers and it is desired mode else set " "False. It is set to None by default to make this choice " "intentional."), cfg.FloatOpt( 'max_over_subscription_ratio', default=20.0, help='Float representation of the over subscription ratio ' 'when thin provisioning is involved. Default ratio is ' '20.0, meaning provisioned capacity can be 20 times ' 'the total physical capacity. If the ratio is 10.5, it ' 'means provisioned capacity can be 10.5 times the ' 'total physical capacity. A ratio of 1.0 means ' 'provisioned capacity cannot exceed the total physical ' 'capacity. A ratio lower than 1.0 is invalid.'), cfg.ListOpt( 'migration_ignore_files', default=['lost+found'], help="List of files and folders to be ignored when migrating shares. " "Items should be names (not including any path)."), cfg.StrOpt( 'share_mount_template', default='mount -vt %(proto)s %(options)s %(export)s %(path)s', help="The template for mounting shares for this backend. Must specify " "the executable with all necessary parameters for the protocol " "supported. 'proto' template element may not be required if " "included in the command. 'export' and 'path' template elements " "are required. It is advisable to separate different commands " "per backend."), cfg.StrOpt( 'share_unmount_template', default='umount -v %(path)s', help="The template for unmounting shares for this backend. Must " "specify the executable with all necessary parameters for the " "protocol supported. 'path' template element is required. It is " "advisable to separate different commands per backend."), cfg.DictOpt( 'protocol_access_mapping', default={ 'ip': ['nfs'], 'user': ['cifs'], }, help="Protocol access mapping for this backend. Should be a " "dictionary comprised of " "{'access_type1': ['share_proto1', 'share_proto2']," " 'access_type2': ['share_proto2', 'share_proto3']}."), cfg.BoolOpt( 'migration_readonly_rules_support', default=True, deprecated_for_removal=True, deprecated_reason="All drivers are now required to support read-only " "access rules.", deprecated_name='migration_readonly_support', help="Specify whether read only access rule mode is supported in this " "backend. Obsolete."), cfg.StrOpt( "admin_network_config_group", help="If share driver requires to setup admin network for share, then " "define network plugin config options in some separate config " "group and set its name here. Used only with another " "option 'driver_handles_share_servers' set to 'True'."), # Replication option/s cfg.StrOpt( "replication_domain", help="A string specifying the replication domain that the backend " "belongs to. 
This option needs to be specified the same in the " "configuration sections of all backends that support " "replication between each other. If this option is not " "specified in the group, it means that replication is not " "enabled on the backend."), cfg.StrOpt('backend_availability_zone', default=None, help='Availability zone for this share backend. If not set, ' 'the ``storage_availability_zone`` option from the ' '``[DEFAULT]`` section is used.'), cfg.StrOpt('filter_function', help='String representation for an equation that will be ' 'used to filter hosts.'), cfg.StrOpt('goodness_function', help='String representation for an equation that will be ' 'used to determine the goodness of a host.'), ] ssh_opts = [ cfg.IntOpt( 'ssh_conn_timeout', default=60, help='Backend server SSH connection timeout.'), cfg.IntOpt( 'ssh_min_pool_conn', default=1, help='Minimum number of connections in the SSH pool.'), cfg.IntOpt( 'ssh_max_pool_conn', default=10, help='Maximum number of connections in the SSH pool.'), ] ganesha_opts = [ cfg.StrOpt('ganesha_config_dir', default='/etc/ganesha', help='Directory where Ganesha config files are stored.'), cfg.StrOpt('ganesha_config_path', default='$ganesha_config_dir/ganesha.conf', help='Path to main Ganesha config file.'), cfg.StrOpt('ganesha_service_name', default='ganesha.nfsd', help='Name of the ganesha nfs service.'), cfg.StrOpt('ganesha_db_path', default='$state_path/manila-ganesha.db', help='Location of Ganesha database file. ' '(Ganesha module only.)'), cfg.StrOpt('ganesha_export_dir', default='$ganesha_config_dir/export.d', help='Path to directory containing Ganesha export ' 'configuration. (Ganesha module only.)'), cfg.StrOpt('ganesha_export_template_dir', default='/etc/manila/ganesha-export-templ.d', help='Path to directory containing Ganesha export ' 'block templates. (Ganesha module only.)'), cfg.BoolOpt('ganesha_rados_store_enable', default=False, help='Persist Ganesha exports and export counter ' 'in Ceph RADOS objects, highly available storage.'), cfg.StrOpt('ganesha_rados_store_pool_name', help='Name of the Ceph RADOS pool to store Ganesha exports ' 'and export counter.'), cfg.StrOpt('ganesha_rados_export_counter', default='ganesha-export-counter', help='Name of the Ceph RADOS object used as the Ganesha ' 'export counter.'), cfg.StrOpt('ganesha_rados_export_index', default='ganesha-export-index', help='Name of the Ceph RADOS object used to store a list ' 'of the export RADOS object URLS.'), ] CONF = cfg.CONF CONF.register_opts(share_opts) CONF.register_opts(ssh_opts) CONF.register_opts(ganesha_opts) class ExecuteMixin(object): """Provides an executable functionality to a driver class.""" def init_execute_mixin(self, *args, **kwargs): if self.configuration: self.configuration.append_config_values(ssh_opts) self.set_execute(kwargs.pop('execute', utils.execute)) def set_execute(self, execute): self._execute = execute def _try_execute(self, *command, **kwargs): # NOTE(vish): Volume commands can partially fail due to timing, but # running them a second time on failure will usually # recover nicely. tries = 0 while True: try: self._execute(*command, **kwargs) return True except exception.ProcessExecutionError: tries += 1 if tries >= self.configuration.num_shell_tries: raise LOG.exception("Recovering from a failed execute. 
" "Try number %s", tries) time.sleep(tries ** 2) class GaneshaMixin(object): """Augment derived classes with Ganesha configuration.""" def init_ganesha_mixin(self, *args, **kwargs): if self.configuration: self.configuration.append_config_values(ganesha_opts) class ShareDriver(object): """Class defines interface of NAS driver.""" def __init__(self, driver_handles_share_servers, *args, **kwargs): """Implements base functionality for share drivers. :param driver_handles_share_servers: expected boolean value or tuple/list/set of boolean values. There are two possible approaches for share drivers in Manila. First is when share driver is able to handle share-servers and second when not. Drivers can support either both (indicated by a tuple/set/list with (True, False)) or only one of these approaches. So, it is allowed to be 'True' when share driver does support handling of share servers and allowed to be 'False' when it does support usage of unhandled share-servers that are not tracked by Manila. Share drivers are allowed to work only in one of two possible driver modes, that is why only one should be chosen. :param config_opts: tuple, list or set of config option lists that should be registered in driver's configuration right after this attribute is created. Useful for usage with mixin classes. """ super(ShareDriver, self).__init__() self.configuration = kwargs.get('configuration', None) self.initialized = False self._stats = {} self.ip_versions = None self.ipv6_implemented = False self.pools = [] if self.configuration: self.configuration.append_config_values(share_opts) network_config_group = (self.configuration.network_config_group or self.configuration.config_group) admin_network_config_group = ( self.configuration.admin_network_config_group) else: network_config_group = None admin_network_config_group = ( CONF.admin_network_config_group) self._verify_share_server_handling(driver_handles_share_servers) if self.driver_handles_share_servers: # Enable common network self.network_api = network.API( config_group_name=network_config_group) # Enable admin network if admin_network_config_group: self._admin_network_api = network.API( config_group_name=admin_network_config_group, label='admin') for config_opt_set in kwargs.get('config_opts', []): self.configuration.append_config_values(config_opt_set) if hasattr(self, 'init_execute_mixin'): # Instance with 'ExecuteMixin' # pylint: disable=no-member self.init_execute_mixin(*args, **kwargs) if hasattr(self, 'init_ganesha_mixin'): # Instance with 'GaneshaMixin' # pylint: disable=no-member self.init_ganesha_mixin(*args, **kwargs) @property def admin_network_api(self): if hasattr(self, '_admin_network_api'): return self._admin_network_api @property def driver_handles_share_servers(self): if self.configuration: return self.configuration.safe_get('driver_handles_share_servers') return CONF.driver_handles_share_servers @property def replication_domain(self): if self.configuration: return self.configuration.safe_get('replication_domain') return CONF.replication_domain def _verify_share_server_handling(self, driver_handles_share_servers): """Verifies driver_handles_share_servers and given configuration.""" if not isinstance(self.driver_handles_share_servers, bool): raise exception.ManilaException( "Config opt 'driver_handles_share_servers' has improper " "value - '%s'. Please define it as boolean." 
% self.driver_handles_share_servers) elif isinstance(driver_handles_share_servers, bool): driver_handles_share_servers = [driver_handles_share_servers] elif not isinstance(driver_handles_share_servers, (tuple, list, set)): raise exception.ManilaException( "Improper data provided for 'driver_handles_share_servers' - " "%s" % driver_handles_share_servers) if any(not isinstance(v, bool) for v in driver_handles_share_servers): raise exception.ManilaException( "Provided wrong data: %s" % driver_handles_share_servers) if (self.driver_handles_share_servers not in driver_handles_share_servers): raise exception.ManilaException( "Driver does not support mode 'driver_handles_share_servers=" "%(actual)s'. It can be used only with value '%(allowed)s'." % {'actual': self.driver_handles_share_servers, 'allowed': driver_handles_share_servers}) def migration_check_compatibility( self, context, source_share, destination_share, share_server=None, destination_share_server=None): """Checks destination compatibility for migration of a given share. .. note:: Is called to test compatibility with destination backend. Driver should check if it is compatible with destination backend so driver-assisted migration can proceed. :param context: The 'context.RequestContext' object for the request. :param source_share: Reference to the share to be migrated. :param destination_share: Reference to the share model to be used by migrated share. :param share_server: Share server model or None. :param destination_share_server: Destination Share server model or None. :return: A dictionary containing values indicating if destination backend is compatible, if share can remain writable during migration, if it can preserve all file metadata and if it can perform migration of given share non-disruptively. Example:: { 'compatible': True, 'writable': True, 'preserve_metadata': True, 'nondisruptive': True, 'preserve_snapshots': True, } """ return { 'compatible': False, 'writable': False, 'preserve_metadata': False, 'nondisruptive': False, 'preserve_snapshots': False, } def migration_start( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Starts migration of a given share to another host. .. note:: Is called in source share's backend to start migration. Driver should implement this method if willing to perform migration in a driver-assisted way, useful for when source share's backend driver is compatible with destination backend driver. This method should start the migration procedure in the backend and end. Following steps should be done in 'migration_continue'. :param context: The 'context.RequestContext' object for the request. :param source_share: Reference to the original share model. :param destination_share: Reference to the share model to be used by migrated share. :param source_snapshots: List of snapshots owned by the source share. :param snapshot_mappings: Mapping of source snapshot IDs to destination snapshot models. :param share_server: Share server model or None. :param destination_share_server: Destination Share server model or None. """ raise NotImplementedError() def migration_continue( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Continues migration of a given share to another host. .. note:: Is called in source share's backend to continue migration. 
Driver should implement this method to continue monitor the migration progress in storage and perform following steps until 1st phase is completed. :param context: The 'context.RequestContext' object for the request. :param source_share: Reference to the original share model. :param destination_share: Reference to the share model to be used by migrated share. :param source_snapshots: List of snapshots owned by the source share. :param snapshot_mappings: Mapping of source snapshot IDs to destination snapshot models. :param share_server: Share server model or None. :param destination_share_server: Destination Share server model or None. :return: Boolean value to indicate if 1st phase is finished. """ raise NotImplementedError() def migration_complete( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Completes migration of a given share to another host. .. note:: Is called in source share's backend to complete migration. If driver is implementing 2-phase migration, this method should perform the disruptive tasks related to the 2nd phase of migration, thus completing it. Driver should also delete all original share data from source backend. :param context: The 'context.RequestContext' object for the request. :param source_share: Reference to the original share model. :param destination_share: Reference to the share model to be used by migrated share. :param source_snapshots: List of snapshots owned by the source share. :param snapshot_mappings: Mapping of source snapshot IDs to destination snapshot models. :param share_server: Share server model or None. :param destination_share_server: Destination Share server model or None. :return: If the migration changes the share export locations, snapshot provider locations or snapshot export locations, this method should return a dictionary with the relevant info. In such case, a dictionary containing a list of export locations and a list of model updates for each snapshot indexed by their IDs. Example:: { 'export_locations': [ { 'path': '1.2.3.4:/foo', 'metadata': {}, 'is_admin_only': False }, { 'path': '5.6.7.8:/foo', 'metadata': {}, 'is_admin_only': True }, ], 'snapshot_updates': { 'bc4e3b28-0832-4168-b688-67fdc3e9d408': { 'provider_location': '/snapshots/foo/bar_1', 'export_locations': [ { 'path': '1.2.3.4:/snapshots/foo/bar_1', 'is_admin_only': False, }, { 'path': '5.6.7.8:/snapshots/foo/bar_1', 'is_admin_only': True, }, ], }, '2e62b7ea-4e30-445f-bc05-fd523ca62941': { 'provider_location': '/snapshots/foo/bar_2', 'export_locations': [ { 'path': '1.2.3.4:/snapshots/foo/bar_2', 'is_admin_only': False, }, { 'path': '5.6.7.8:/snapshots/foo/bar_2', 'is_admin_only': True, }, ], }, }, } """ raise NotImplementedError() def migration_cancel( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Cancels migration of a given share to another host. .. note:: Is called in source share's backend to cancel migration. If possible, driver can implement a way to cancel an in-progress migration. :param context: The 'context.RequestContext' object for the request. :param source_share: Reference to the original share model. :param destination_share: Reference to the share model to be used by migrated share. :param source_snapshots: List of snapshots owned by the source share. :param snapshot_mappings: Mapping of source snapshot IDs to destination snapshot models. 
:param share_server: Share server model or None. :param destination_share_server: Destination Share server model or None. """ raise NotImplementedError() def migration_get_progress( self, context, source_share, destination_share, source_snapshots, snapshot_mappings, share_server=None, destination_share_server=None): """Obtains progress of migration of a given share to another host. .. note:: Is called in source share's backend to obtain migration progress. If possible, driver can implement a way to return migration progress information. :param context: The 'context.RequestContext' object for the request. :param source_share: Reference to the original share model. :param destination_share: Reference to the share model to be used by migrated share. :param source_snapshots: List of snapshots owned by the source share. :param snapshot_mappings: Mapping of source snapshot IDs to destination snapshot models. :param share_server: Share server model or None. :param destination_share_server: Destination Share server model or None. :return: A dictionary with at least 'total_progress' field containing the percentage value. """ raise NotImplementedError() def connection_get_info(self, context, share, share_server=None): """Is called to provide necessary generic migration logic. :param context: The 'context.RequestContext' object for the request. :param share: Reference to the share being migrated. :param share_server: Share server model or None. :return: A dictionary with migration information. """ mount_template = self._get_mount_command(context, share, share_server) unmount_template = self._get_unmount_command(context, share, share_server) access_mapping = self._get_access_mapping(context, share, share_server) info = { 'mount': mount_template, 'unmount': unmount_template, 'access_mapping': access_mapping, } LOG.debug("Migration info obtained for share %(share_id)s: %(info)s.", {'share_id': share['id'], 'info': six.text_type(info)}) return info def _get_access_mapping(self, context, share, share_server): mapping = self.configuration.safe_get('protocol_access_mapping') or {} result = {} share_proto = share['share_proto'].lower() for access_type, protocols in mapping.items(): if share_proto in [y.lower() for y in protocols]: result[access_type] = result.get(access_type, []) result[access_type].append(share_proto) return result def _get_mount_command(self, context, share_instance, share_server=None): """Is called to delegate mounting share logic.""" mount_template = self.configuration.safe_get('share_mount_template') mount_export = self._get_mount_export(share_instance, share_server) format_template = { 'proto': share_instance['share_proto'].lower(), 'export': mount_export, 'path': '%(path)s', 'options': '%(options)s', } return mount_template % format_template def _get_mount_export(self, share_instance, share_server=None): # NOTE(ganso): If drivers want to override the export_location IP, # they can do so using this configuration. This method can also be # overridden if necessary. 
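        # An admin-only export location, when one exists, is preferred below;
        # otherwise the share instance's first export location is used.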
path = next((x['path'] for x in share_instance['export_locations'] if x['is_admin_only']), None) if not path: path = share_instance['export_locations'][0]['path'] return path def _get_unmount_command(self, context, share_instance, share_server=None): return self.configuration.safe_get('share_unmount_template') def create_share(self, context, share, share_server=None): """Is called to create share.""" raise NotImplementedError() def create_share_from_snapshot(self, context, share, snapshot, share_server=None, parent_share=None): """Is called to create share from snapshot. Creating a share from snapshot can take longer than a simple clone operation if data copy is required from one host to another. For this reason driver will be able complete this creation asynchronously, by providing a 'creating_from_snapshot' status in the model update. When answering asynchronously, drivers must implement the call 'get_share_status' in order to provide updates for shares with 'creating_from_snapshot' status. It is expected that the driver returns a model update to the share manager that contains: share status and a list of export_locations. A list of 'export_locations' is mandatory only for share in 'available' status. The current supported status are 'available' and 'creating_from_snapshot'. :param context: Current context :param share: Share instance model with share data. :param snapshot: Snapshot instance model . :param share_server: Share server model or None. :param parent_share: Share model from parent snapshot with share data and share server model. :returns: a dictionary of updates containing current share status and its export_location (if available). Example:: { 'status': 'available', 'export_locations': [{...}, {...}], } :raises: ShareBackendException. A ShareBackendException in this method will set the instance to 'error' and the operation will end. """ raise NotImplementedError() def create_snapshot(self, context, snapshot, share_server=None): """Is called to create snapshot. :param context: Current context :param snapshot: Snapshot model. Share model could be retrieved through snapshot['share']. :param share_server: Share server model or None. :return: None or a dictionary with key 'export_locations' containing a list of export locations, if snapshots can be mounted. """ raise NotImplementedError() def delete_share(self, context, share, share_server=None): """Is called to remove share.""" raise NotImplementedError() def delete_snapshot(self, context, snapshot, share_server=None): """Is called to remove snapshot. :param context: Current context :param snapshot: Snapshot model. Share model could be retrieved through snapshot['share']. :param share_server: Share server model or None. """ raise NotImplementedError() def get_pool(self, share): """Return pool name where the share resides on. :param share: The share hosted by the driver. """ def ensure_share(self, context, share, share_server=None): """Invoked to ensure that share is exported. Driver can use this method to update the list of export locations of the share if it changes. To do that, you should return list with export locations. 
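        For example, a driver could return something like the following
        (the path is illustrative only, using the same export location
        format seen elsewhere in this module)::

            [
                {'path': '10.0.0.10:/shares/share-fake-id',
                 'is_admin_only': False,
                 'metadata': {}},
            ]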
:return: None or list with export locations """ raise NotImplementedError() def allow_access(self, context, share, access, share_server=None): """Allow access to the share.""" raise NotImplementedError() def deny_access(self, context, share, access, share_server=None): """Deny access to the share.""" raise NotImplementedError() def update_access(self, context, share, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given share. ``access_rules`` contains all access_rules that need to be on the share. If the driver can make bulk access rule updates, it can safely ignore the ``add_rules`` and ``delete_rules`` parameters. If the driver cannot make bulk access rule changes, it can rely on new rules to be present in ``add_rules`` and rules that need to be removed to be present in ``delete_rules``. When a rule in ``delete_rules`` was never applied, drivers must not raise an exception, or attempt to set the rule to ``error`` state. ``add_rules`` and ``delete_rules`` can be empty lists, in this situation, drivers should ensure that the rules present in ``access_rules`` are the same as those on the back end. One scenario where this situation is forced is when the access_level is changed for all existing rules (share migration and for readable replicas). Drivers must be mindful of this call for share replicas. When 'update_access' is called on one of the replicas, the call is likely propagated to all replicas belonging to the share, especially when individual rules are added or removed. If a particular access rule does not make sense to the driver in the context of a given replica, the driver should be careful to report a correct behavior, and take meaningful action. For example, if R/W access is requested on a replica that is part of a "readable" type replication; R/O access may be added by the driver instead of R/W. Note that raising an exception *will* result in the access_rules_status on the replica, and the share itself being "out_of_sync". Drivers can sync on the valid access rules that are provided on the ``create_replica`` and ``promote_replica`` calls. :param context: Current context :param share: Share model with share data. :param access_rules: A list of access rules for given share :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. :param share_server: None or Share server model :returns: None, or a dictionary of updates in the format:: { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'access_key': 'alice31493e5441b8171d2310d80e37e', 'state': 'error', }, '28f6eabb-4342-486a-a7f4-45688f0c0295': { 'access_key': 'bob0078aa042d5a7325480fd13228b', 'state': 'active', }, } The top level keys are 'access_id' fields of the access rules that need to be updated. ``access_key``s are credentials (str) of the entities granted access. Any rule in the ``access_rules`` parameter can be updated. .. important:: Raising an exception in this method will force *all* rules in 'applying' and 'denying' states to 'error'. An access rule can be set to 'error' state, either explicitly via this return parameter or because of an exception raised in this method. Such an access rule will no longer be sent to the driver on subsequent access rule updates. When users deny that rule however, the driver will be asked to deny access to the client/s represented by the rule. 
We expect that a rule that was error-ed at the driver should never exist on the back end. So, do not fail the deletion request. Also, it is possible that the driver may receive a request to add a rule that is already present on the back end. This can happen if the share manager service goes down while the driver is committing access rule changes. Since we cannot determine if the rule was applied successfully by the driver before the disruption, we will treat all 'applying' transitional rules as new rules and repeat the request. """ raise NotImplementedError() def check_for_setup_error(self): """Check for setup error.""" max_ratio = self.configuration.safe_get('max_over_subscription_ratio') if not max_ratio or float(max_ratio) < 1.0: msg = (_("Invalid max_over_subscription_ratio '%s'. " "Valid value should be >= 1.0.") % max_ratio) raise exception.InvalidParameterValue(err=msg) def do_setup(self, context): """Any initialization the share driver does while starting.""" def get_share_stats(self, refresh=False): """Get share status. If 'refresh' is True, run update the stats first. """ if refresh: self._update_share_stats() return self._stats def get_network_allocations_number(self): """Returns number of network allocations for creating VIFs. Drivers that use Nova for share servers should return zero (0) here same as Generic driver does. Because Nova will handle network resources allocation. Drivers that handle networking itself should calculate it according to their own requirements. It can have 1+ network interfaces. """ raise NotImplementedError() def get_admin_network_allocations_number(self): return 0 def update_network_allocation(self, context, share_server): """Update network allocation after share server creation.""" self.network_api.update_network_allocation(context, share_server) def update_admin_network_allocation(self, context, share_server): """Update admin network allocation after share server creation.""" if (self.get_admin_network_allocations_number() and self.admin_network_api): self.admin_network_api.update_network_allocation(context, share_server) def allocate_network(self, context, share_server, share_network, share_network_subnet, count=None, **kwargs): """Allocate network resources using given network information.""" if count is None: count = self.get_network_allocations_number() if count: kwargs.update(count=count) self.network_api.allocate_network( context, share_server, share_network=share_network, share_network_subnet=share_network_subnet, **kwargs) def allocate_admin_network(self, context, share_server, count=None, **kwargs): """Allocate admin network resources using given network information.""" if count is None: count = self.get_admin_network_allocations_number() if count and not self.admin_network_api: msg = _("Admin network plugin is not set up.") raise exception.NetworkBadConfigurationException(reason=msg) elif count: kwargs.update(count=count) self.admin_network_api.allocate_network( context, share_server, **kwargs) def deallocate_network(self, context, share_server_id): """Deallocate network resources for the given share server.""" if self.get_network_allocations_number(): self.network_api.deallocate_network(context, share_server_id) def choose_share_server_compatible_with_share(self, context, share_servers, share, snapshot=None, share_group=None): """Method that allows driver to choose share server for provided share. If compatible share-server is not found, method should return None. 
:param context: Current context :param share_servers: list with share-server models :param share: share model :param snapshot: snapshot model :param share_group: ShareGroup model with shares :returns: share-server or None """ # If creating in a share group, use its share server if share_group: for share_server in share_servers: if (share_group.get('share_server_id') == share_server['id']): return share_server return None return share_servers[0] if share_servers else None def choose_share_server_compatible_with_share_group( self, context, share_servers, share_group_ref, share_group_snapshot=None): return share_servers[0] if share_servers else None def setup_server(self, *args, **kwargs): if self.driver_handles_share_servers: return self._setup_server(*args, **kwargs) else: LOG.debug( "Skipping step 'setup share server', because driver is " "enabled with mode when Manila does not handle share servers.") def _setup_server(self, network_info, metadata=None): """Sets up and configures share server with given network parameters. Redefine it within share driver when it is going to handle share servers. :param metadata: a dictionary, for now containing a key 'request_host' """ raise NotImplementedError() def manage_existing(self, share, driver_options): """Brings an existing share under Manila management. If the provided share is not valid, then raise a ManageInvalidShare exception, specifying a reason for the failure. If the provided share is not in a state that can be managed, such as being replicated on the backend, the driver *MUST* raise ManageInvalidShare exception with an appropriate message. The share has a share_type, and the driver can inspect that and compare against the properties of the referenced backend share. If they are incompatible, raise a ManageExistingShareTypeMismatch, specifying a reason for the failure. This method is invoked when the share is being managed with a share type that has ``driver_handles_share_servers`` extra-spec set to False. :param share: Share model :param driver_options: Driver-specific options provided by admin. :return: share_update dictionary with required key 'size', which should contain size of the share. """ raise NotImplementedError() def manage_existing_with_server( self, share, driver_options, share_server=None): """Brings an existing share under Manila management. If the provided share is not valid, then raise a ManageInvalidShare exception, specifying a reason for the failure. If the provided share is not in a state that can be managed, such as being replicated on the backend, the driver *MUST* raise ManageInvalidShare exception with an appropriate message. The share has a share_type, and the driver can inspect that and compare against the properties of the referenced backend share. If they are incompatible, raise a ManageExistingShareTypeMismatch, specifying a reason for the failure. This method is invoked when the share is being managed with a share type that has ``driver_handles_share_servers`` extra-spec set to True. :param share: Share model :param driver_options: Driver-specific options provided by admin. :param share_server: Share server model or None. :return: share_update dictionary with required key 'size', which should contain size of the share. """ raise NotImplementedError() def unmanage(self, share): """Removes the specified share from Manila management. Does not delete the underlying backend share. For most drivers, this will not need to do anything. 
However, some drivers might use this call as an opportunity to clean up any Manila-specific configuration that they have associated with the backend share. If provided share cannot be unmanaged, then raise an UnmanageInvalidShare exception, specifying a reason for the failure. This method is invoked when the share is being unmanaged with a share type that has ``driver_handles_share_servers`` extra-spec set to False. """ def unmanage_with_server(self, share, share_server=None): """Removes the specified share from Manila management. Does not delete the underlying backend share. For most drivers, this will not need to do anything. However, some drivers might use this call as an opportunity to clean up any Manila-specific configuration that they have associated with the backend share. If provided share cannot be unmanaged, then raise an UnmanageInvalidShare exception, specifying a reason for the failure. This method is invoked when the share is being unmanaged with a share type that has ``driver_handles_share_servers`` extra-spec set to True. """ def manage_existing_snapshot(self, snapshot, driver_options): """Brings an existing snapshot under Manila management. If provided snapshot is not valid, then raise a ManageInvalidShareSnapshot exception, specifying a reason for the failure. This method is invoked when the snapshot that is being managed belongs to a share that has its share type with ``driver_handles_share_servers`` extra-spec set to False. :param snapshot: ShareSnapshotInstance model with ShareSnapshot data. Example:: { 'id': , 'snapshot_id': < snapshot id>, 'provider_location': , ... } :param driver_options: Optional driver-specific options provided by admin. Example:: { 'key': 'value', ... } :return: model_update dictionary with required key 'size', which should contain size of the share snapshot, and key 'export_locations' containing a list of export locations, if snapshots can be mounted. """ raise NotImplementedError() def manage_existing_snapshot_with_server(self, snapshot, driver_options, share_server=None): """Brings an existing snapshot under Manila management. If provided snapshot is not valid, then raise a ManageInvalidShareSnapshot exception, specifying a reason for the failure. This method is invoked when the snapshot that is being managed belongs to a share that has its share type with ``driver_handles_share_servers`` extra-spec set to True. :param snapshot: ShareSnapshotInstance model with ShareSnapshot data. Example:: { 'id': , 'snapshot_id': < snapshot id>, 'provider_location': , ... } :param driver_options: Optional driver-specific options provided by admin. Example:: { 'key': 'value', ... } :param share_server: Share server model or None. :return: model_update dictionary with required key 'size', which should contain size of the share snapshot, and key 'export_locations' containing a list of export locations, if snapshots can be mounted. """ raise NotImplementedError() def unmanage_snapshot(self, snapshot): """Removes the specified snapshot from Manila management. Does not delete the underlying backend share snapshot. For most drivers, this will not need to do anything. However, some drivers might use this call as an opportunity to clean up any Manila-specific configuration that they have associated with the backend share snapshot. If provided share snapshot cannot be unmanaged, then raise an UnmanageInvalidShareSnapshot exception, specifying a reason for the failure. 
This method is invoked when the snapshot that is being unmanaged belongs to a share that has its share type with ``driver_handles_share_servers`` extra-spec set to False. """ def unmanage_snapshot_with_server(self, snapshot, share_server=None): """Removes the specified snapshot from Manila management. Does not delete the underlying backend share snapshot. For most drivers, this will not need to do anything. However, some drivers might use this call as an opportunity to clean up any Manila-specific configuration that they have associated with the backend share snapshot. If provided share snapshot cannot be unmanaged, then raise an UnmanageInvalidShareSnapshot exception, specifying a reason for the failure. This method is invoked when the snapshot that is being unmanaged belongs to a share that has its share type with ``driver_handles_share_servers`` extra-spec set to True. """ def revert_to_snapshot(self, context, snapshot, share_access_rules, snapshot_access_rules, share_server=None): """Reverts a share (in place) to the specified snapshot. Does not delete the share snapshot. The share and snapshot must both be 'available' for the restore to be attempted. The snapshot must be the most recent one taken by Manila; the API layer performs this check so the driver doesn't have to. The share must be reverted in place to the contents of the snapshot. Application admins should quiesce or otherwise prepare the application for the shared file system contents to change suddenly. :param context: Current context :param snapshot: The snapshot to be restored :param share_access_rules: List of all access rules for the affected share :param snapshot_access_rules: List of all access rules for the affected snapshot :param share_server: Optional -- Share server model or None """ raise NotImplementedError() def extend_share(self, share, new_size, share_server=None): """Extends size of existing share. :param share: Share model :param new_size: New size of share (new_size > share['size']) :param share_server: Optional -- Share server model """ raise NotImplementedError() def shrink_share(self, share, new_size, share_server=None): """Shrinks size of existing share. If consumed space on share larger than new_size driver should raise ShareShrinkingPossibleDataLoss exception: raise ShareShrinkingPossibleDataLoss(share_id=share['id']) :param share: Share model :param new_size: New size of share (new_size < share['size']) :param share_server: Optional -- Share server model :raises ShareShrinkingPossibleDataLoss, NotImplementedError """ raise NotImplementedError() def teardown_server(self, *args, **kwargs): if self.driver_handles_share_servers: return self._teardown_server(*args, **kwargs) else: LOG.debug( "Skipping step 'teardown share server', because driver is " "enabled with mode when Manila does not handle share servers.") def _teardown_server(self, server_details, security_services=None): """Tears down share server. Redefine it within share driver when it is going to handle share servers. 
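        A driver that creates one service VM per share server might tear
        it down using the details it stored at setup time. A purely
        hypothetical sketch -- ``self._client`` and the
        ``backend_server_id`` key are made-up names that such a driver
        would have created itself in ``_setup_server``::

            def _teardown_server(self, server_details,
                                 security_services=None):
                # Remove the service VM this driver recorded in the share
                # server's backend details during setup.
                self._client.delete_server(
                    server_details['backend_server_id'])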
""" raise NotImplementedError() def _has_redefined_driver_methods(self, methods): """Returns boolean as a result of methods presence and redefinition.""" if not isinstance(methods, (set, list, tuple)): methods = (methods, ) for method_name in methods: method = getattr(type(self), method_name, None) if (not method or method == getattr(ShareDriver, method_name)): return False return True @property def snapshots_are_supported(self): if not hasattr(self, '_snapshots_are_supported'): methods = ('create_snapshot', 'delete_snapshot') # NOTE(vponomaryov): calculate default value for # stat 'snapshot_support' based on implementation of # appropriate methods of this base driver class. self._snapshots_are_supported = self._has_redefined_driver_methods( methods) return self._snapshots_are_supported @property def creating_shares_from_snapshots_is_supported(self): """Calculate default value for create_share_from_snapshot_support.""" if not hasattr(self, '_creating_shares_from_snapshots_is_supported'): methods = ('create_share_from_snapshot', ) self._creating_shares_from_snapshots_is_supported = ( self._has_redefined_driver_methods(methods)) return ( self._creating_shares_from_snapshots_is_supported and self.snapshots_are_supported ) def _update_share_stats(self, data=None): """Retrieve stats info from share group. :param data: dict -- dict with key-value pairs to redefine common ones. """ LOG.debug("Updating share stats.") backend_name = (self.configuration.safe_get('share_backend_name') or CONF.share_backend_name) # Note(zhiteng): These information are driver/backend specific, # each driver may define these values in its own config options # or fetch from driver specific configuration file. common = dict( share_backend_name=backend_name or 'Generic_NFS', driver_handles_share_servers=self.driver_handles_share_servers, vendor_name='Open Source', driver_version='1.0', storage_protocol=None, total_capacity_gb='unknown', free_capacity_gb='unknown', reserved_percentage=0, qos=False, pools=self.pools or None, snapshot_support=self.snapshots_are_supported, create_share_from_snapshot_support=( self.creating_shares_from_snapshots_is_supported), revert_to_snapshot_support=False, mount_snapshot_support=False, replication_domain=self.replication_domain, filter_function=self.get_filter_function(), goodness_function=self.get_goodness_function(), ) if isinstance(data, dict): common.update(data) sg_stats = data.get('share_group_stats', {}) if data else {} common['share_group_stats'] = { 'consistent_snapshot_support': sg_stats.get( 'consistent_snapshot_support'), } self.add_ip_version_capability(common) self._stats = common def get_share_server_pools(self, share_server): """Return list of pools related to a particular share server. :param share_server: ShareServer class instance. """ return [] def create_share_group(self, context, share_group_dict, share_server=None): """Create a share group. 
:param context: :param share_group_dict: The share group details EXAMPLE: { 'status': 'creating', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': 'False', 'created_at': datetime.datetime(2015, 8, 10, 15, 14, 6), 'updated_at': None, 'source_share_group_snapshot_id': 'some_fake_uuid', 'share_group_type_id': 'some_fake_uuid', 'host': 'hostname@backend_name', 'share_network_id': None, 'share_server_id': None, 'deleted_at': None, 'share_types': [], 'id': 'some_fake_uuid', 'name': None } :returns: (share_group_model_update, share_update_list) share_group_model_update - a dict containing any values to be updated for the SG in the database. This value may be None. """ LOG.debug('Created a Share Group with ID: %s.', share_group_dict['id']) def create_share_group_from_share_group_snapshot( self, context, share_group_dict, share_group_snapshot_dict, share_server=None): """Create a share group from a share group snapshot. When creating a share from snapshot operation takes longer than a simple clone operation, drivers will be able to complete this creation asynchronously, by providing a 'creating_from_snapshot' status in the returned model update. The current supported status are 'available' and 'creating_from_snapshot'. In order to provide updates for shares with 'creating_from_snapshot' status, drivers must implement the call 'get_share_status'. :param context: :param share_group_dict: The share group details EXAMPLE: .. code:: { 'status': 'creating', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': 'False', 'created_at': datetime.datetime(2015, 8, 10, 15, 14, 6), 'updated_at': None, 'source_share_group_snapshot_id': 'f6aa3b59-57eb-421e-965c-4e182538e36a', 'host': 'hostname@backend_name', 'deleted_at': None, 'shares': [], # The new shares being created 'share_types': [], 'id': 'some_fake_uuid', 'name': None } :param share_group_snapshot_dict: The share group snapshot details EXAMPLE: .. code:: { 'status': 'available', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': '0', 'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'share_group_id': 'some_fake_uuid', 'share_share_group_snapshot_members': [ { 'status': 'available', 'user_id': 'a0314a441ca842019b0952224aa39192', 'deleted': 'False', 'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'share': , 'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'share_proto': 'NFS', 'project_id': '13c0be6290934bd98596cfa004650049', 'share_group_snapshot_id': 'some_fake_uuid', 'deleted_at': None, 'id': 'some_fake_uuid', 'size': 1 } ], 'deleted_at': None, 'id': 'f6aa3b59-57eb-421e-965c-4e182538e36a', 'name': None } :return: (share_group_model_update, share_update_list) share_group_model_update - a dict containing any values to be updated for the share group in the database. This value may be None share_update_list - a list of dictionaries containing dicts for every share created in the share group. Any share dicts should at a minimum contain the 'id' key and, for synchronous creation, the 'export_locations'. For asynchronous share creation this dict must also contain the key 'status' with the value set to 'creating_from_snapshot'. The current supported status are 'available' and 'creating_from_snapshot'. 
Export locations should be in the same format as returned by a share_create. This list may be empty or None. EXAMPLE: .. code:: [ { 'id': 'uuid', 'export_locations': [{...}, {...}], }, { 'id': 'uuid', 'export_locations': [], 'status': 'creating_from_snapshot', }, ] """ # Ensure that the share group snapshot has members if not share_group_snapshot_dict['share_group_snapshot_members']: return None, None clone_list = self._collate_share_group_snapshot_info( share_group_dict, share_group_snapshot_dict) share_update_list = [] LOG.debug('Creating share group from group snapshot %s.', share_group_snapshot_dict['id']) for clone in clone_list: kwargs = {} share_update_info = {} if self.driver_handles_share_servers: kwargs['share_server'] = share_server model_update = ( self.create_share_from_snapshot( context, clone['share'], clone['snapshot'], **kwargs)) if isinstance(model_update, dict): status = model_update.get('status') # NOTE(dviroel): share status is mandatory when answering # a model update. If not provided, won't be possible to # determine if was successfully created. if status is None: msg = _("Driver didn't provide a share status.") raise exception.InvalidShareInstance(reason=msg) if status not in [constants.STATUS_AVAILABLE, constants.STATUS_CREATING_FROM_SNAPSHOT]: msg = _('Driver returned an invalid status: %s') % status raise exception.InvalidShareInstance(reason=msg) share_update_info.update({'status': status}) export_locations = model_update.get('export_locations', []) else: # NOTE(dviroel): the driver that doesn't implement the new # model_update will return only the export locations export_locations = model_update share_update_info.update({ 'id': clone['share']['id'], 'export_locations': export_locations, }) share_update_list.append(share_update_info) return None, share_update_list def delete_share_group(self, context, share_group_dict, share_server=None): """Delete a share group :param context: The request context :param share_group_dict: The share group details EXAMPLE: .. code:: { 'status': 'creating', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': 'False', 'created_at': datetime.datetime(2015, 8, 10, 15, 14, 6), 'updated_at': None, 'source_share_group_snapshot_id': 'some_fake_uuid', 'share_share_group_type_id': 'some_fake_uuid', 'host': 'hostname@backend_name', 'deleted_at': None, 'shares': [], # The new shares being created 'share_types': [], 'id': 'some_fake_uuid', 'name': None } :return: share_group_model_update share_group_model_update - a dict containing any values to be updated for the group in the database. This value may be None. """ def _cleanup_group_share_snapshot(self, context, share_snapshot, share_server): """Deletes the snapshot of a share belonging to a group.""" try: self.delete_snapshot( context, share_snapshot, share_server=share_server) except exception.ManilaException: msg = ('Could not delete share group snapshot member %(snap)s ' 'for share %(share)s.') LOG.error(msg, { 'snap': share_snapshot['id'], 'share': share_snapshot['share_id'], }) raise def create_share_group_snapshot(self, context, snap_dict, share_server=None): """Create a share group snapshot. :param context: :param snap_dict: The share group snapshot details EXAMPLE: .. 
code:: { 'status': 'available', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': '0', 'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'share_group_id': 'some_fake_uuid', 'share_group_snapshot_members': [ { 'status': 'available', 'share_type_id': 'some_fake_uuid', 'user_id': 'a0314a441ca842019b0952224aa39192', 'deleted': 'False', 'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'share': , 'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'share_proto': 'NFS', 'project_id': '13c0be6290934bd98596cfa004650049', 'share_group_snapshot_id': 'some_fake_uuid', 'deleted_at': None, 'share_id': 'some_fake_uuid', 'id': 'some_fake_uuid', 'size': 1, 'provider_location': None, } ], 'deleted_at': None, 'id': 'some_fake_uuid', 'name': None } :return: (share_group_snapshot_update, member_update_list) share_group_snapshot_update - a dict containing any values to be updated for the CGSnapshot in the database. This value may be None. member_update_list - a list of dictionaries containing for every member of the share group snapshot. Each dict should contains values to be updated for the ShareGroupSnapshotMember in the database. This list may be empty or None. """ LOG.debug('Attempting to create a share group snapshot %s.', snap_dict['id']) snapshot_members = snap_dict.get('share_group_snapshot_members', []) if not self._stats.get('snapshot_support'): raise exception.ShareGroupSnapshotNotSupported( share_group=snap_dict['share_group_id']) elif not snapshot_members: LOG.warning('No shares in share group to create snapshot.') return None, None else: share_snapshots = [] snapshot_members_updates = [] for member in snapshot_members: share_snapshot = { 'snapshot_id': member['share_group_snapshot_id'], 'share_id': member['share_id'], 'share_instance_id': member['share']['id'], 'id': member['id'], 'share': member['share'], 'size': member['share']['size'], 'share_size': member['share']['size'], 'share_proto': member['share']['share_proto'], 'provider_location': None, } try: member_update = self.create_snapshot( context, share_snapshot, share_server=share_server) if member_update: member_update['id'] = member['id'] snapshot_members_updates.append(member_update) share_snapshots.append(share_snapshot) except exception.ManilaException as e: msg = ('Could not create share group snapshot. Failed ' 'to create share snapshot %(snap)s for ' 'share %(share)s.') LOG.exception(msg, { 'snap': share_snapshot['id'], 'share': share_snapshot['share_id'] }) # clean up any share snapshots previously created LOG.debug( 'Attempting to clean up snapshots due to failure.') for share_snapshot in share_snapshots: self._cleanup_group_share_snapshot( context, share_snapshot, share_server) raise e LOG.debug('Successfully created share group snapshot %s.', snap_dict['id']) return None, snapshot_members_updates def delete_share_group_snapshot(self, context, snap_dict, share_server=None): """Delete a share group snapshot :param context: :param snap_dict: The share group snapshot details EXAMPLE: .. 
code:: { 'status': 'available', 'project_id': '13c0be6290934bd98596cfa004650049', 'user_id': 'a0314a441ca842019b0952224aa39192', 'description': None, 'deleted': '0', 'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'share_group_id': 'some_fake_uuid', 'share_group_snapshot_members': [ { 'status': 'available', 'share_type_id': 'some_fake_uuid', 'share_id': 'some_fake_uuid', 'user_id': 'a0314a441ca842019b0952224aa39192', 'deleted': 'False', 'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'share': , 'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'share_proto': 'NFS', 'project_id': '13c0be6290934bd98596cfa004650049', 'share_group_snapshot_id': 'some_fake_uuid', 'deleted_at': None, 'id': 'some_fake_uuid', 'size': 1, 'provider_location': 'fake_provider_location_value', } ], 'deleted_at': None, 'id': 'f6aa3b59-57eb-421e-965c-4e182538e36a', 'name': None } :return: (share_group_snapshot_update, member_update_list) share_group_snapshot_update - a dict containing any values to be updated for the ShareGroupSnapshot in the database. This value may be None. """ snapshot_members = snap_dict.get('share_group_snapshot_members', []) LOG.debug('Deleting share group snapshot %s.', snap_dict['id']) for member in snapshot_members: share_snapshot = { 'snapshot_id': member['share_group_snapshot_id'], 'share_id': member['share_id'], 'share_instance_id': member['share']['id'], 'id': member['id'], 'share': member['share'], 'size': member['share']['size'], 'share_size': member['share']['size'], 'share_proto': member['share']['share_proto'], 'provider_location': member['provider_location'], } self.delete_snapshot( context, share_snapshot, share_server=share_server) LOG.debug('Deleted share group snapshot %s.', snap_dict['id']) return None, None def _collate_share_group_snapshot_info(self, share_group_dict, share_group_snapshot_dict): """Collate the data for a clone of the SG snapshot. Given two data structures, a share group snapshot ( share_group_snapshot_dict) and a new share to be cloned from the snapshot (share_group_dict), match up both structures into a list of dicts (share & snapshot) suitable for use by existing method that clones individual share snapshots. """ clone_list = [] for share in share_group_dict['shares']: clone_info = {'share': share} for share_group_snapshot_member in share_group_snapshot_dict[ 'share_group_snapshot_members']: if (share['source_share_group_snapshot_member_id'] == share_group_snapshot_member['id']): clone_info['snapshot'] = share_group_snapshot_member break if len(clone_info) != 2: msg = _( "Invalid data supplied for creating share group from " "share group snapshot " "%s.") % share_group_snapshot_dict['id'] raise exception.InvalidShareGroup(reason=msg) clone_list.append(clone_info) return clone_list def get_periodic_hook_data(self, context, share_instances): """Dedicated for update/extend of data for existing share instances. Redefine this method in share driver to be able to update/change/extend share instances data that will be used by periodic hook action. One of possible updates is add-on of "automount" CLI commands for each share instance for case of notification is enabled using 'hook' approach. :param context: Current context :param share_instances: share instances list provided by share manager :return: list of share instances. 
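        A minimal override sketch (illustrative only; the ``mount_command``
        key is an arbitrary example used by a custom hook and is not consumed
        by Manila itself)::

            def get_periodic_hook_data(self, context, share_instances):
                # Attach a ready-to-use mount hint to every share instance so
                # a periodic hook can act on it, e.g. for "automount" logic.
                for instance in share_instances:
                    paths = [loc['path']
                             for loc in instance.get('export_locations', [])]
                    instance['mount_command'] = (
                        'mount -t nfs %s /mnt/%s'
                        % (paths[0], instance['id']) if paths else None)
                return share_instances
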
""" return share_instances def create_replica(self, context, replica_list, new_replica, access_rules, replica_snapshots, share_server=None): """Replicate the active replica to a new replica on this backend. .. note:: This call is made on the host that the new replica is being created upon. :param context: Current context :param replica_list: List of all replicas for a particular share. This list also contains the replica to be created. The 'active' replica will have its 'replica_state' attr set to 'active'. Example:: [ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': or None, }, { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87', 'share_server': or None, }, ... ] :param new_replica: The share replica dictionary. Example:: { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack2@cmodeSSVMNFS2', 'status': 'creating', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': 'out_of_sync', 'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f', 'export_locations': [ models.ShareInstanceExportLocations, ], 'access_rules_status': 'out_of_sync', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': 'e6155221-ea00-49ef-abf9-9f89b7dd900a', 'share_server': or None, } :param access_rules: A list of access rules. These are rules that other instances of the share already obey. Drivers are expected to apply access rules to the new replica or disregard access rules that don't apply. Example:: [ { 'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676', 'deleted' = False, 'share_id' = 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'access_type' = 'ip', 'access_to' = '172.16.20.1', 'access_level' = 'rw', } ] :param replica_snapshots: List of dictionaries of snapshot instances. This includes snapshot instances of every snapshot of the share whose 'aggregate_status' property was reported to be 'available' when the share manager initiated this request. Each list member will have two sub dictionaries: 'active_replica_snapshot' and 'share_replica_snapshot'. The 'active' replica snapshot corresponds to the instance of the snapshot on any of the 'active' replicas of the share while share_replica_snapshot corresponds to the snapshot instance for the specific replica that will need to exist on the new share replica that is being created. The driver needs to ensure that this snapshot instance is truly available before transitioning the replica from 'out_of_sync' to 'in_sync'. Snapshots instances for snapshots that have an 'aggregate_status' of 'creating' or 'deleting' will be polled for in the ``update_replicated_snapshot`` method. Example:: [ { 'active_replica_snapshot': { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'share_instance_id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'status': 'available', 'provider_location': '/newton/share-snapshot-10e49c3e-aca9', ... 
}, 'share_replica_snapshot': { 'id': '', 'share_instance_id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'status': 'available', 'provider_location': None, ... }, } ] :param share_server: or None Share server of the replica being created. :return: None or a dictionary. The dictionary can contain export_locations replica_state and access_rules_status. export_locations is a list of paths and replica_state is one of 'active', 'in_sync', 'out_of_sync' or 'error'. .. important:: A backend supporting 'writable' type replication should return 'active' as the replica_state. Export locations should be in the same format as returned during the ``create_share`` call. Example:: { 'export_locations': [ { 'path': '172.16.20.22/sample/export/path', 'is_admin_only': False, 'metadata': {'some_key': 'some_value'}, }, ], 'replica_state': 'in_sync', 'access_rules_status': 'in_sync', } """ raise NotImplementedError() def delete_replica(self, context, replica_list, replica_snapshots, replica, share_server=None): """Delete a replica. .. note:: This call is made on the host that hosts the replica being deleted. :param context: Current context :param replica_list: List of all replicas for a particular share This list also contains the replica to be deleted. The 'active' replica will have its 'replica_state' attr set to 'active'. Example:: [ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': or None, }, { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87', 'share_server': or None, }, ... ] :param replica: Dictionary of the share replica being deleted. Example:: { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack2@cmodeSSVMNFS2', 'status': 'available', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': 'in_sync', 'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f', 'export_locations': [ models.ShareInstanceExportLocations ], 'access_rules_status': 'out_of_sync', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': '53099868-65f1-11e5-9d70-feff819cdc9f', 'share_server': or None, } :param replica_snapshots: List of dictionaries of snapshot instances. The dict contains snapshot instances that are associated with the share replica being deleted. No model updates to snapshot instances are possible in this method. The driver should return when the cleanup is completed on the backend for both, the snapshots and the replica itself. Drivers must handle situations where the snapshot may not yet have finished 'creating' on this replica. Example:: [ { 'id': '89dafd00-0999-4d23-8614-13eaa6b02a3b', 'snapshot_id': '3ce1caf7-0945-45fd-a320-714973e949d3', 'status: 'available', 'share_instance_id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f' ... 
}, { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'status: 'creating', 'share_instance_id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f' ... }, ... ] :param share_server: or None Share server of the replica to be deleted. :return: None. :raises: Exception. Any exception raised will set the share replica's 'status' and 'replica_state' attributes to 'error_deleting'. It will not affect snapshots belonging to this replica. """ raise NotImplementedError() def promote_replica(self, context, replica_list, replica, access_rules, share_server=None): """Promote a replica to 'active' replica state. .. note:: This call is made on the host that hosts the replica being promoted. :param context: Current context :param replica_list: List of all replicas for a particular share This list also contains the replica to be promoted. The 'active' replica will have its 'replica_state' attr set to 'active'. Example:: [ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': or None, }, { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87', 'share_server': or None, }, ... ] :param replica: Dictionary of the replica to be promoted. Example:: { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack2@cmodeSSVMNFS2', 'status': 'available', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': 'in_sync', 'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f', 'export_locations': [ models.ShareInstanceExportLocations ], 'access_rules_status': 'in_sync', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87', 'share_server': or None, } :param access_rules: A list of access rules These access rules are obeyed by other instances of the share Example:: [ { 'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676', 'deleted' = False, 'share_id' = 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'access_type' = 'ip', 'access_to' = '172.16.20.1', 'access_level' = 'rw', } ] :param share_server: or None Share server of the replica to be promoted. :return: updated_replica_list or None. The driver can return the updated list as in the request parameter. Changes that will be updated to the Database are: 'export_locations', 'access_rules_status' and 'replica_state'. :raises: Exception. This can be any exception derived from BaseException. This is re-raised by the manager after some necessary cleanup. If the driver raises an exception during promotion, it is assumed that all of the replicas of the share are in an inconsistent state. Recovery is only possible through the periodic update call and/or administrator intervention to correct the 'status' of the affected replicas if they become healthy again. 
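        A minimal override sketch (illustrative only; ``self._client`` and
        ``break_replication_relationship`` are hypothetical backend calls, and
        only the fields this sketch wants updated are included in the returned
        list)::

            def promote_replica(self, context, replica_list, replica,
                                access_rules, share_server=None):
                # Make the chosen replica writable on the backend.
                self._client.break_replication_relationship(replica['id'])
                updated_replica_list = []
                for r in replica_list:
                    if r['id'] == replica['id']:
                        updated_replica_list.append(
                            {'id': r['id'], 'replica_state': 'active'})
                    elif r['replica_state'] == 'active':
                        # Demote the replica that was 'active' until now.
                        updated_replica_list.append(
                            {'id': r['id'], 'replica_state': 'out_of_sync'})
                return updated_replica_list
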
""" raise NotImplementedError() def update_replica_state(self, context, replica_list, replica, access_rules, replica_snapshots, share_server=None): """Update the replica_state of a replica. .. note:: This call is made on the host which hosts the replica being updated. Drivers should fix replication relationships that were broken if possible inside this method. This method is called periodically by the share manager; and whenever requested by the administrator through the 'resync' API. :param context: Current context :param replica_list: List of all replicas for a particular share This list also contains the replica to be updated. The 'active' replica will have its 'replica_state' attr set to 'active'. Example:: [ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': or None, }, { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87', 'share_server': or None, }, ... ] :param replica: Dictionary of the replica being updated Replica state will always be 'in_sync', 'out_of_sync', or 'error'. Replicas in 'active' state will not be passed via this parameter. Example:: { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack2@cmodeSSVMNFS1', 'status': 'available', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': 'in_sync', 'availability_zone_id': 'e2c2db5c-cb2f-4697-9966-c06fb200cb80', 'export_locations': [ models.ShareInstanceExportLocations, ], 'access_rules_status': 'in_sync', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', } :param access_rules: A list of access rules These access rules are obeyed by other instances of the share. The driver could attempt to sync on any un-applied access_rules. Example:: [ { 'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676', 'deleted' = False, 'share_id' = 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'access_type' = 'ip', 'access_to' = '172.16.20.1', 'access_level' = 'rw', } ] :param replica_snapshots: List of dictionaries of snapshot instances. This includes snapshot instances of every snapshot of the share whose 'aggregate_status' property was reported to be 'available' when the share manager initiated this request. Each list member will have two sub dictionaries: 'active_replica_snapshot' and 'share_replica_snapshot'. The 'active' replica snapshot corresponds to the instance of the snapshot on any of the 'active' replicas of the share while share_replica_snapshot corresponds to the snapshot instance for the specific replica being updated. The driver needs to ensure that this snapshot instance is truly available before transitioning from 'out_of_sync' to 'in_sync'. Snapshots instances for snapshots that have an 'aggregate_status' of 'creating' or 'deleting' will be polled for in the update_replicated_snapshot method. 
Example:: [ { 'active_replica_snapshot': { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'share_instance_id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'status': 'available', 'provider_location': '/newton/share-snapshot-10e49c3e-aca9', ... }, 'share_replica_snapshot': { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_instance_id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'status': 'creating', 'provider_location': None, ... }, } ] :param share_server: or None :return: replica_state: a str value denoting the replica_state. Valid values are 'in_sync' and 'out_of_sync' or None (to leave the current replica_state unchanged). """ raise NotImplementedError() def create_replicated_snapshot(self, context, replica_list, replica_snapshots, share_server=None): """Create a snapshot on active instance and update across the replicas. .. note:: This call is made on the 'active' replica's host. Drivers are expected to transfer the snapshot created to the respective replicas. The driver is expected to return model updates to the share manager. If it was able to confirm the creation of any number of the snapshot instances passed in this interface, it can set their status to 'available' as a cue for the share manager to set the progress attr to '100%'. :param context: Current context :param replica_list: List of all replicas for a particular share The 'active' replica will have its 'replica_state' attr set to 'active'. Example:: [ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': or None, }, ... ] :param replica_snapshots: List of dictionaries of snapshot instances. These snapshot instances track the snapshot across the replicas. All the instances will have their status attribute set to 'creating'. Example:: [ { 'id': 'd3931a93-3984-421e-a9e7-d9f71895450a', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'status: 'creating', 'progress': '0%', ... }, { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'status: 'creating', 'progress': '0%', ... }, ... ] :param share_server: or None :return: List of dictionaries of snapshot instances. The dictionaries can contain values that need to be updated on the database for the snapshot instances being created. :raises: Exception. Any exception in this method will set all instances to 'error'. """ raise NotImplementedError() def revert_to_replicated_snapshot(self, context, active_replica, replica_list, active_replica_snapshot, replica_snapshots, share_access_rules, snapshot_access_rules, share_server=None): """Reverts a replicated share (in place) to the specified snapshot. .. note:: This call is made on the 'active' replica's host, since drivers may not be able to revert snapshots on individual replicas. Does not delete the share snapshot. The share and snapshot must both be 'available' for the restore to be attempted. The snapshot must be the most recent one taken by Manila; the API layer performs this check so the driver doesn't have to. The share must be reverted in place to the contents of the snapshot. Application admins should quiesce or otherwise prepare the application for the shared file system contents to change suddenly. 
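        A minimal override sketch (illustrative only; ``self._client`` and its
        ``restore_snapshot`` call are hypothetical backend calls)::

            def revert_to_replicated_snapshot(self, context, active_replica,
                                              replica_list,
                                              active_replica_snapshot,
                                              replica_snapshots,
                                              share_access_rules,
                                              snapshot_access_rules,
                                              share_server=None):
                # Roll the active replica back in place to the snapshot
                # contents; the backend is expected to resynchronize the
                # remaining replicas afterwards.
                self._client.restore_snapshot(
                    share_id=active_replica['id'],
                    snapshot_id=active_replica_snapshot['id'])
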
:param context: Current context :param active_replica: The current active replica :param replica_list: List of all replicas for a particular share The 'active' replica will have its 'replica_state' attr set to 'active' and its 'status' set to 'reverting'. :param active_replica_snapshot: snapshot to be restored :param replica_snapshots: List of dictionaries of snapshot instances. These snapshot instances track the snapshot across the replicas. The snapshot of the active replica to be restored with have its status attribute set to 'restoring'. :param share_access_rules: List of access rules for the affected share. :param snapshot_access_rules: List of access rules for the affected snapshot. :param share_server: Optional -- Share server model """ raise NotImplementedError() def delete_replicated_snapshot(self, context, replica_list, replica_snapshots, share_server=None): """Delete a snapshot by deleting its instances across the replicas. .. note:: This call is made on the 'active' replica's host, since drivers may not be able to delete the snapshot from an individual replica. The driver is expected to return model updates to the share manager. If it was able to confirm the removal of any number of the snapshot instances passed in this interface, it can set their status to 'deleted' as a cue for the share manager to clean up that instance from the database. :param context: Current context :param replica_list: List of all replicas for a particular share The 'active' replica will have its 'replica_state' attr set to 'active'. Example:: [ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': or None, }, ... ] :param replica_snapshots: List of dictionaries of snapshot instances. These snapshot instances track the snapshot across the replicas. All the instances will have their status attribute set to 'deleting'. Example:: [ { 'id': 'd3931a93-3984-421e-a9e7-d9f71895450a', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'status': 'deleting', 'progress': '100%', ... }, { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'status: 'deleting', 'progress': '100%', ... }, ... ] :param share_server: or None :return: List of dictionaries of snapshot instances. The dictionaries can contain values that need to be updated on the database for the snapshot instances being deleted. To confirm the deletion of the snapshot instance, set the 'status' attribute of the instance to 'deleted' (constants.STATUS_DELETED) :raises: Exception. Any exception in this method will set the status attribute of all snapshot instances to 'error_deleting'. """ raise NotImplementedError() def update_replicated_snapshot(self, context, replica_list, share_replica, replica_snapshots, replica_snapshot, share_server=None): """Update the status of a snapshot instance that lives on a replica. .. note:: For DR and Readable styles of replication, this call is made on the replica's host and not the 'active' replica's host. This method is called periodically by the share manager. It will query for snapshot instances that track the parent snapshot across non-'active' replicas. 
Drivers can expect the status of the instance to be 'creating' or 'deleting'. If the driver sees that a snapshot instance has been removed from the replica's backend and the instance status was set to 'deleting', it is expected to raise a SnapshotResourceNotFound exception. All other exceptions will set the snapshot instance status to 'error'. If the instance was not in 'deleting' state, raising a SnapshotResourceNotFound will set the instance status to 'error'. :param context: Current context :param replica_list: List of all replicas for a particular share The 'active' replica will have its 'replica_state' attr set to 'active'. Example:: [ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': or None, }, ... ] :param share_replica: Share replica dictionary. This replica is associated with the snapshot instance whose status is being updated. Replicas in 'active' replica_state will not be passed via this parameter. Example:: { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack2@cmodeSSVMNFS1', 'status': 'available', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': 'in_sync', 'availability_zone_id': 'e2c2db5c-cb2f-4697-9966-c06fb200cb80', 'export_locations': [ models.ShareInstanceExportLocations, ], 'access_rules_status': 'in_sync', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', } :param replica_snapshots: List of dictionaries of snapshot instances. These snapshot instances track the snapshot across the replicas. This will include the snapshot instance being updated as well. Example:: [ { 'id': 'd3931a93-3984-421e-a9e7-d9f71895450a', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', ... }, { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', ... }, ... ] :param replica_snapshot: Dictionary of the snapshot instance. This is the instance to be updated. It will be in 'creating' or 'deleting' state when sent via this parameter. Example:: { 'name': 'share-snapshot-18825630-574f-4912-93bb-af4611ef35a2', 'share_id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_name': 'share-d487b88d-e428-4230-a465-a800c2cce5f8', 'status': 'creating', 'id': '18825630-574f-4912-93bb-af4611ef35a2', 'deleted': False, 'created_at': datetime.datetime(2016, 8, 3, 0, 5, 58), 'share': , 'updated_at': datetime.datetime(2016, 8, 3, 0, 5, 58), 'share_instance_id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'progress': '0%', 'deleted_at': None, 'provider_location': None, } :param share_server: or None :return: replica_snapshot_model_update: a dictionary. The dictionary must contain values that need to be updated on the database for the snapshot instance that represents the snapshot on the replica. :raises: exception.SnapshotResourceNotFound Raise this exception for snapshots that are not found on the backend and their status was 'deleting'. 
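        A minimal override sketch (illustrative only; ``self._client`` and
        ``snapshot_exists`` are hypothetical backend calls, and the exception
        keyword shown assumes the standard message format)::

            def update_replicated_snapshot(self, context, replica_list,
                                           share_replica, replica_snapshots,
                                           replica_snapshot,
                                           share_server=None):
                if not self._client.snapshot_exists(replica_snapshot['id']):
                    # Missing on the backend; the share manager interprets
                    # this based on whether the instance was 'deleting'.
                    raise exception.SnapshotResourceNotFound(
                        name=replica_snapshot['id'])
                # Present and usable on this replica.
                return {'status': 'available'}
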
""" raise NotImplementedError() def get_filter_function(self): """Get filter_function string. Returns either the string from the driver instance or global section in manila.conf. If nothing is specified in manila.conf, then try to find the default filter_function. When None is returned the scheduler will always pass the driver instance. :return: a filter_function string or None """ ret_function = self.configuration.filter_function if not ret_function: ret_function = CONF.filter_function if not ret_function: # pylint: disable=assignment-from-none ret_function = self.get_default_filter_function() # pylint: enable=assignment-from-none return ret_function def get_goodness_function(self): """Get good_function string. Returns either the string from the driver instance or global section in manila.conf. If nothing is specified in manila.conf, then try to find the default goodness_function. When None is returned the scheduler will give the lowest score to the driver instance. :return: a goodness_function string or None """ ret_function = self.configuration.goodness_function if not ret_function: ret_function = CONF.goodness_function if not ret_function: # pylint: disable=assignment-from-none ret_function = self.get_default_goodness_function() # pylint: enable=assignment-from-none return ret_function def get_default_filter_function(self): """Get the default filter_function string. Each driver could overwrite the method to return a well-known default string if it is available. :return: None """ return None def get_default_goodness_function(self): """Get the default goodness_function string. Each driver could overwrite the method to return a well-known default string if it is available. :return: None """ return None def snapshot_update_access(self, context, snapshot, access_rules, add_rules, delete_rules, share_server=None): """Update access rules for given snapshot. ``access_rules`` contains all access_rules that need to be on the share. If the driver can make bulk access rule updates, it can safely ignore the ``add_rules`` and ``delete_rules`` parameters. If the driver cannot make bulk access rule changes, it can rely on new rules to be present in ``add_rules`` and rules that need to be removed to be present in ``delete_rules``. When a rule in ``add_rules`` already exists in the back end, drivers must not raise an exception. When a rule in ``delete_rules`` was never applied, drivers must not raise an exception, or attempt to set the rule to ``error`` state. ``add_rules`` and ``delete_rules`` can be empty lists, in this situation, drivers should ensure that the rules present in ``access_rules`` are the same as those on the back end. :param context: Current context :param snapshot: Snapshot model with snapshot data. :param access_rules: All access rules for given snapshot :param add_rules: Empty List or List of access rules which should be added. access_rules already contains these rules. :param delete_rules: Empty List or List of access rules which should be removed. access_rules doesn't contain these rules. :param share_server: None or Share server model """ raise NotImplementedError() def update_share_usage_size(self, context, shares): """Invoked to get the usage size of given shares. Driver can use this method to update the share usage size of the shares. To do that, a dictionary of shares should be returned. :param shares: None or a list of all shares for updates. :returns: An empty list or a list of dictionary of updates in the following format. 
The value of "used_size" can be specified in GiB units, as a floating point number:: [ { 'id': '09960614-8574-4e03-89cf-7cf267b0bd08', 'used_size': '200', 'gathered_at': datetime.datetime(2017, 8, 10, 15, 14, 6), }, ] """ LOG.debug("This backend does not support gathering 'used_size' of " "shares created on it.") return [] def get_configured_ip_versions(self): """"Get allowed IP versions. The supported versions are returned with list, possible values are: [4], [6], or [4, 6] Drivers that assert ipv6_implemented = True must override this method. If the returned list includes 4, then shares created by this driver must have an IPv4 export location. If the list includes 6, then shares created by the driver must have an IPv6 export location. Drivers should check that their storage controller actually has IPv4/IPv6 enabled and configured properly. """ # For drivers that haven't implemented IPv6, assume legacy behavior if not self.ipv6_implemented: return [4] raise NotImplementedError() def add_ip_version_capability(self, data): """Add IP version support capabilities. When DHSS is true, the capabilities are determined by driver and configured network plugin. When DHSS is false, the capabilities are determined by driver only. :param data: the capability dictionary :returns: capability data """ self.ip_versions = self.get_configured_ip_versions() if isinstance(self.ip_versions, list): self.ip_versions = set(self.ip_versions) else: self.ip_versions = set(list(self.ip_versions)) if not self.ip_versions: LOG.error("Backend %s supports neither IPv4 nor IPv6.", data['share_backend_name']) if self.driver_handles_share_servers: network_versions = self.network_api.enabled_ip_versions self.ip_versions = self.ip_versions & network_versions if not self.ip_versions: LOG.error("The enabled IP version of the network plugin is " "not compatible with the version supported by " "backend %s.", data['share_backend_name']) data['ipv4_support'] = (4 in self.ip_versions) data['ipv6_support'] = (6 in self.ip_versions) return data def get_backend_info(self, context): """Get driver and array configuration parameters. Driver can use this method to get the special configuration info and return for assessment. :returns: A dictionary containing driver-specific info. Example:: { 'version': '2.23' 'port': '80', 'logicalportip': '1.1.1.1', ... } """ raise NotImplementedError() def ensure_shares(self, context, shares): """Invoked to ensure that shares are exported. Driver can use this method to update the list of export locations of the shares if it changes. To do that, a dictionary of shares should be returned. :shares: A list of all shares for updates. :returns: None or a dictionary of updates in the format. Example:: { '09960614-8574-4e03-89cf-7cf267b0bd08': { 'export_locations': [{...}, {...}], 'status': 'error', }, '28f6eabb-4342-486a-a7f4-45688f0c0295': { 'export_locations': [{...}, {...}], 'status': 'available', }, } """ raise NotImplementedError() def get_share_server_network_info( self, context, share_server, identifier, driver_options): """Obtain network allocations used by share server. :param context: Current context. :param share_server: Share server model. :param identifier: A driver-specific share server identifier :param driver_options: Dictionary of driver options to assist managing the share server :return: A list containing IP addresses allocated in the backend. 
Example:: ['10.10.10.10', 'fd11::2000', '192.168.10.10'] """ raise NotImplementedError() def manage_server(self, context, share_server, identifier, driver_options): """Manage the share server and return compiled back end details. :param context: Current context. :param share_server: Share server model. :param identifier: A driver-specific share server identifier :param driver_options: Dictionary of driver options to assist managing the share server :return: Identifier and dictionary with back end details to be saved in the database. Example:: 'my_new_server_identifier',{'server_name': 'my_old_server'} """ raise NotImplementedError() def unmanage_server(self, server_details, security_services=None): """Unmanages the share server. If a driver supports unmanaging of share servers, the driver must override this method and return successfully. :param server_details: share server backend details. :param security_services: list of security services configured with this share server. """ raise NotImplementedError() def get_share_status(self, share, share_server=None): """Invoked periodically to get the current status of a given share. Driver can use this method to update the status of a share that is still pending from other operations. This method is expected to be called in a periodic interval set by the 'periodic_interval' configuration in seconds. :param share: share to get updated status from. :param share_server: share server model or None. :returns: a dictionary of updates with the current share status, that must be 'available', 'creating_from_snapshot' or 'error', a list of export locations, if available, and a progress field which indicates the completion of the share creation operation. EXAMPLE:: { 'status': 'available', 'export_locations': [{...}, {...}], 'progress': '50%' } :raises: ShareBackendException. A ShareBackendException in this method will set the instance status to 'error'. """ raise NotImplementedError() manila-10.0.0/manila/share/snapshot_access.py0000664000175000017500000001621713656750227021174 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from manila.common import constants from manila import utils LOG = log.getLogger(__name__) class ShareSnapshotInstanceAccess(object): def __init__(self, db, driver): self.db = db self.driver = driver def update_access_rules(self, context, snapshot_instance_id, delete_all_rules=False, share_server=None): """Update driver and database access rules for given snapshot instance. :param context: current context :param snapshot_instance_id: Id of the snapshot instance model :param delete_all_rules: Whether all rules should be deleted. 
:param share_server: Share server model or None """ snapshot_instance = self.db.share_snapshot_instance_get( context, snapshot_instance_id, with_share_data=True) snapshot_id = snapshot_instance['snapshot_id'] @utils.synchronized( "update_access_rules_for_snapshot_%s" % snapshot_id, external=True) def _update_access_rules_locked(*args, **kwargs): return self._update_access_rules(*args, **kwargs) _update_access_rules_locked( context=context, snapshot_instance=snapshot_instance, delete_all_rules=delete_all_rules, share_server=share_server, ) def _update_access_rules(self, context, snapshot_instance, delete_all_rules=None, share_server=None): # NOTE(ganso): First let's get all the rules and the mappings. rules = self.db.share_snapshot_access_get_all_for_snapshot_instance( context, snapshot_instance['id']) add_rules = [] delete_rules = [] if delete_all_rules: # NOTE(ganso): We want to delete all rules. delete_rules = rules rules_to_be_on_snapshot = [] # NOTE(ganso): We select all deletable mappings. for rule in rules: # NOTE(ganso): No need to update the state if already set. if rule['state'] != constants.ACCESS_STATE_DENYING: self.db.share_snapshot_instance_access_update( context, rule['access_id'], snapshot_instance['id'], {'state': constants.ACCESS_STATE_DENYING}) else: # NOTE(ganso): error'ed rules are to be left alone until # reset back to "queued_to_deny" by API. rules_to_be_on_snapshot = [ r for r in rules if r['state'] not in ( constants.ACCESS_STATE_QUEUED_TO_DENY, # NOTE(ganso): We select denying rules as a recovery # mechanism for invalid rules during a restart. constants.ACCESS_STATE_DENYING, # NOTE(ganso): We do not re-send error-ed access rules to # drivers. constants.ACCESS_STATE_ERROR ) ] # NOTE(ganso): Process queued rules for rule in rules: # NOTE(ganso): We are barely handling recovery, so if any rule # exists in 'applying' or 'denying' state, we add them again. if rule['state'] in (constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_APPLYING): if rule['state'] == ( constants.ACCESS_STATE_QUEUED_TO_APPLY): self.db.share_snapshot_instance_access_update( context, rule['access_id'], snapshot_instance['id'], {'state': constants.ACCESS_STATE_APPLYING}) add_rules.append(rule) elif rule['state'] in ( constants.ACCESS_STATE_QUEUED_TO_DENY, constants.ACCESS_STATE_DENYING): if rule['state'] == ( constants.ACCESS_STATE_QUEUED_TO_DENY): self.db.share_snapshot_instance_access_update( context, rule['access_id'], snapshot_instance['id'], {'state': constants.ACCESS_STATE_DENYING}) delete_rules.append(rule) try: self.driver.snapshot_update_access( context, snapshot_instance, rules_to_be_on_snapshot, add_rules=add_rules, delete_rules=delete_rules, share_server=share_server) # NOTE(ganso): successfully added rules transition to "active". for rule in add_rules: self.db.share_snapshot_instance_access_update( context, rule['access_id'], snapshot_instance['id'], {'state': constants.STATUS_ACTIVE}) except Exception: # NOTE(ganso): if we failed, we set all the transitional rules # to ERROR. 
for rule in add_rules + delete_rules: self.db.share_snapshot_instance_access_update( context, rule['access_id'], snapshot_instance['id'], {'state': constants.STATUS_ERROR}) raise self._remove_access_rules( context, delete_rules, snapshot_instance['id']) if self._check_needs_refresh(context, snapshot_instance['id']): self._update_access_rules(context, snapshot_instance, share_server=share_server) else: LOG.info("Access rules were successfully applied for " "snapshot instance: %s", snapshot_instance['id']) def _check_needs_refresh(self, context, snapshot_instance_id): rules = self.db.share_snapshot_access_get_all_for_snapshot_instance( context, snapshot_instance_id) return (any(rule['state'] in ( constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_QUEUED_TO_DENY) for rule in rules)) def _remove_access_rules(self, context, rules, snapshot_instance_id): if not rules: return for rule in rules: self.db.share_snapshot_instance_access_delete( context, rule['access_id'], snapshot_instance_id) def get_snapshot_instance_access_rules(self, context, snapshot_instance_id): return self.db.share_snapshot_access_get_all_for_snapshot_instance( context, snapshot_instance_id) manila-10.0.0/manila/share/manager.py0000664000175000017500000061676013656750227017437 0ustar zuulzuul00000000000000# Copyright (c) 2014 NetApp Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """NAS share manager managers creating shares and access rights. **Related Flags** :share_driver: Used by :class:`ShareManager`. """ import copy import datetime import functools import hashlib from operator import xor from oslo_config import cfg from oslo_log import log from oslo_serialization import jsonutils from oslo_service import periodic_task from oslo_utils import excutils from oslo_utils import importutils from oslo_utils import timeutils import six from manila.common import constants from manila import context from manila import coordination from manila.data import rpcapi as data_rpcapi from manila import exception from manila.i18n import _ from manila import manager from manila.message import api as message_api from manila.message import message_field from manila import quota from manila.share import access from manila.share import api from manila.share import configuration from manila.share import drivers_private_data from manila.share import migration from manila.share import rpcapi as share_rpcapi from manila.share import share_types from manila.share import snapshot_access from manila.share import utils as share_utils from manila import utils LOG = log.getLogger(__name__) share_manager_opts = [ cfg.StrOpt('share_driver', default='manila.share.drivers.generic.GenericShareDriver', help='Driver to use for share creation.'), cfg.ListOpt('hook_drivers', default=[], help='Driver(s) to perform some additional actions before and ' 'after share driver actions and on a periodic basis. 
' 'Default is [].', deprecated_group='DEFAULT'), cfg.BoolOpt('delete_share_server_with_last_share', default=False, help='Whether share servers will ' 'be deleted on deletion of the last share.'), cfg.BoolOpt('unmanage_remove_access_rules', default=False, help='If set to True, then manila will deny access and remove ' 'all access rules on share unmanage.' 'If set to False - nothing will be changed.'), cfg.BoolOpt('automatic_share_server_cleanup', default=True, help='If set to True, then Manila will delete all share ' 'servers which were unused more than specified time .' 'If set to False - automatic deletion of share servers ' 'will be disabled.', deprecated_group='DEFAULT'), cfg.IntOpt('unused_share_server_cleanup_interval', default=10, help='Unallocated share servers reclamation time interval ' '(minutes). Minimum value is 10 minutes, maximum is 60 ' 'minutes. The reclamation function is run every ' '10 minutes and delete share servers which were unused ' 'more than unused_share_server_cleanup_interval option ' 'defines. This value reflects the shortest time Manila ' 'will wait for a share server to go unutilized before ' 'deleting it.', deprecated_group='DEFAULT', min=10, max=60), cfg.IntOpt('replica_state_update_interval', default=300, help='This value, specified in seconds, determines how often ' 'the share manager will poll for the health ' '(replica_state) of each replica instance.'), cfg.IntOpt('migration_driver_continue_update_interval', default=60, help='This value, specified in seconds, determines how often ' 'the share manager will poll the driver to perform the ' 'next step of migration in the storage backend, for a ' 'migrating share.'), cfg.IntOpt('share_usage_size_update_interval', default=300, help='This value, specified in seconds, determines how often ' 'the share manager will poll the driver to update the ' 'share usage size in the storage backend, for shares in ' 'that backend.'), cfg.BoolOpt('enable_gathering_share_usage_size', default=False, help='If set to True, share usage size will be polled for in ' 'the interval specified with ' '"share_usage_size_update_interval". Usage data can be ' 'consumed by telemetry integration. If telemetry is not ' 'configured, this option must be set to False. ' 'If set to False - gathering share usage size will be' ' disabled.'), ] CONF = cfg.CONF CONF.register_opts(share_manager_opts) CONF.import_opt('periodic_hooks_interval', 'manila.share.hook') CONF.import_opt('periodic_interval', 'manila.service') # Drivers that need to change module paths or class names can add their # old/new path here to maintain backward compatibility. MAPPING = { 'manila.share.drivers.netapp.cluster_mode.NetAppClusteredShareDriver': 'manila.share.drivers.netapp.common.NetAppDriver', 'manila.share.drivers.hp.hp_3par_driver.HP3ParShareDriver': 'manila.share.drivers.hpe.hpe_3par_driver.HPE3ParShareDriver', 'manila.share.drivers.hitachi.hds_hnas.HDSHNASDriver': 'manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver', 'manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver': 'manila.share.drivers.glusterfs.glusterfs_native.' 'GlusterfsNativeShareDriver', 'manila.share.drivers.emc.driver.EMCShareDriver': 'manila.share.drivers.dell_emc.driver.EMCShareDriver', 'manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver': 'manila.share.drivers.cephfs.driver.CephFSDriver', } QUOTAS = quota.QUOTAS def locked_share_replica_operation(operation): """Lock decorator for share replica operations. Takes a named lock prior to executing the operation. 
The lock is named with the id of the share to which the replica belongs. Intended use: If a replica operation uses this decorator, it will block actions on all share replicas of the share until the named lock is free. This is used to protect concurrent operations on replicas of the same share e.g. promote ReplicaA while deleting ReplicaB, both belonging to the same share. """ def wrapped(*args, **kwargs): share_id = kwargs.get('share_id') @coordination.synchronized( 'locked-share-replica-operation-for-share-%s' % share_id) def locked_replica_operation(*_args, **_kwargs): return operation(*_args, **_kwargs) return locked_replica_operation(*args, **kwargs) return wrapped def add_hooks(f): """Hook decorator to perform action before and after a share method call The hook decorator can perform actions before some share driver methods calls and after a call with results of driver call and preceding hook call. """ @functools.wraps(f) def wrapped(self, *args, **kwargs): if not self.hooks: return f(self, *args, **kwargs) pre_hook_results = [] for hook in self.hooks: pre_hook_results.append( hook.execute_pre_hook( func_name=f.__name__, *args, **kwargs)) wrapped_func_results = f(self, *args, **kwargs) for i, hook in enumerate(self.hooks): hook.execute_post_hook( func_name=f.__name__, driver_action_results=wrapped_func_results, pre_hook_data=pre_hook_results[i], *args, **kwargs) return wrapped_func_results return wrapped class ShareManager(manager.SchedulerDependentManager): """Manages NAS storages.""" RPC_API_VERSION = '1.19' def __init__(self, share_driver=None, service_name=None, *args, **kwargs): """Load the driver from args, or from flags.""" self.configuration = configuration.Configuration( share_manager_opts, config_group=service_name) super(ShareManager, self).__init__(service_name='share', *args, **kwargs) if not share_driver: share_driver = self.configuration.share_driver if share_driver in MAPPING: msg_args = {'old': share_driver, 'new': MAPPING[share_driver]} LOG.warning("Driver path %(old)s is deprecated, update your " "configuration to the new path %(new)s", msg_args) share_driver = MAPPING[share_driver] ctxt = context.get_admin_context() private_storage = drivers_private_data.DriverPrivateData( context=ctxt, backend_host=self.host, config_group=self.configuration.config_group ) self.driver = importutils.import_object( share_driver, private_storage=private_storage, configuration=self.configuration, ) backend_availability_zone = self.driver.configuration.safe_get( 'backend_availability_zone') self.availability_zone = ( backend_availability_zone or CONF.storage_availability_zone ) self.access_helper = access.ShareInstanceAccess(self.db, self.driver) self.snapshot_access_helper = ( snapshot_access.ShareSnapshotInstanceAccess(self.db, self.driver)) self.migration_wait_access_rules_timeout = ( CONF.migration_wait_access_rules_timeout) self.message_api = message_api.API() self.hooks = [] self._init_hook_drivers() def _init_hook_drivers(self): # Try to initialize hook driver(s). hook_drivers = self.configuration.safe_get("hook_drivers") for hook_driver in hook_drivers: self.hooks.append( importutils.import_object( hook_driver, configuration=self.configuration, host=self.host, ) ) def _ensure_share_instance_has_pool(self, ctxt, share_instance): pool = share_utils.extract_host(share_instance['host'], 'pool') if pool is None: # No pool name encoded in host, so this is a legacy # share created before pool is introduced, ask # driver to provide pool info if it has such # knowledge and update the DB. 
try: pool = self.driver.get_pool(share_instance) except Exception: LOG.exception("Failed to fetch pool name for share: " "%(share)s.", {'share': share_instance['id']}) return if pool: new_host = share_utils.append_host( share_instance['host'], pool) self.db.share_instance_update( ctxt, share_instance['id'], {'host': new_host}) return pool @add_hooks def init_host(self): """Initialization for a standalone service.""" ctxt = context.get_admin_context() driver_host_pair = "{}@{}".format( self.driver.__class__.__name__, self.host) # we want to retry to setup the driver. In case of a multi-backend # scenario, working backends are usable and the non-working ones (where # do_setup() or check_for_setup_error() fail) retry. @utils.retry(Exception, interval=2, backoff_rate=2, backoff_sleep_max=600, retries=0) def _driver_setup(): self.driver.initialized = False LOG.debug("Start initialization of driver: '%s'", driver_host_pair) try: self.driver.do_setup(ctxt) self.driver.check_for_setup_error() except Exception: LOG.exception("Error encountered during initialization of " "driver %s", driver_host_pair) raise else: self.driver.initialized = True _driver_setup() if (self.driver.driver_handles_share_servers and hasattr(self.driver, 'service_instance_manager')): (self.driver.service_instance_manager.network_helper. setup_connectivity_with_service_instances()) self.ensure_driver_resources(ctxt) self.publish_service_capabilities(ctxt) LOG.info("Finished initialization of driver: '%(driver)s" "@%(host)s'", {"driver": self.driver.__class__.__name__, "host": self.host}) def is_service_ready(self): """Return if Manager is ready to accept requests. This is to inform Service class that in case of manila driver initialization failure the manager is actually down and not ready to accept any requests. 
""" return self.driver.initialized def ensure_driver_resources(self, ctxt): old_backend_info = self.db.backend_info_get(ctxt, self.host) old_backend_info_hash = (old_backend_info.get('info_hash') if old_backend_info is not None else None) new_backend_info = None new_backend_info_hash = None backend_info_implemented = True update_share_instances = [] try: new_backend_info = self.driver.get_backend_info(ctxt) except Exception as e: if not isinstance(e, NotImplementedError): LOG.exception( "The backend %(host)s could not get backend info.", {'host': self.host}) raise else: backend_info_implemented = False LOG.debug( ("The backend %(host)s does not support get backend" " info method."), {'host': self.host}) if new_backend_info: new_backend_info_hash = hashlib.sha1(six.text_type( sorted(new_backend_info.items())).encode('utf-8')).hexdigest() if (old_backend_info_hash == new_backend_info_hash and backend_info_implemented): LOG.debug( ("Ensure shares is being skipped because the %(host)s's old " "backend info is the same as its new backend info."), {'host': self.host}) return share_instances = self.db.share_instances_get_all_by_host( ctxt, self.host) LOG.debug("Re-exporting %s shares", len(share_instances)) for share_instance in share_instances: share_ref = self.db.share_get(ctxt, share_instance['share_id']) if share_ref.is_busy: LOG.info( "Share instance %(id)s: skipping export, " "because it is busy with an active task: %(task)s.", {'id': share_instance['id'], 'task': share_ref['task_state']}, ) continue if share_instance['status'] != constants.STATUS_AVAILABLE: LOG.info( "Share instance %(id)s: skipping export, " "because it has '%(status)s' status.", {'id': share_instance['id'], 'status': share_instance['status']}, ) continue self._ensure_share_instance_has_pool(ctxt, share_instance) share_instance = self.db.share_instance_get( ctxt, share_instance['id'], with_share_data=True) share_instance_dict = self._get_share_instance_dict( ctxt, share_instance) update_share_instances.append(share_instance_dict) if update_share_instances: try: update_share_instances = self.driver.ensure_shares( ctxt, update_share_instances) or {} except Exception as e: if not isinstance(e, NotImplementedError): LOG.exception("Caught exception trying ensure " "share instances.") else: self._ensure_share(ctxt, update_share_instances) if new_backend_info: self.db.backend_info_update( ctxt, self.host, new_backend_info_hash) for share_instance in share_instances: if share_instance['id'] not in update_share_instances: continue if update_share_instances[share_instance['id']].get('status'): self.db.share_instance_update( ctxt, share_instance['id'], {'status': ( update_share_instances[share_instance['id']]. 
get('status')), 'host': share_instance['host']} ) update_export_location = ( update_share_instances[share_instance['id']] .get('export_locations')) if update_export_location: self.db.share_export_locations_update( ctxt, share_instance['id'], update_export_location) share_server = self._get_share_server(ctxt, share_instance) if share_instance['access_rules_status'] != ( constants.STATUS_ACTIVE): try: # Cast any existing 'applying' rules to 'new' self.access_helper.reset_applying_rules( ctxt, share_instance['id']) self.access_helper.update_access_rules( ctxt, share_instance['id'], share_server=share_server) except Exception: LOG.exception( ("Unexpected error occurred while updating access " "rules for share instance %(s_id)s."), {'s_id': share_instance['id']}, ) snapshot_instances = ( self.db.share_snapshot_instance_get_all_with_filters( ctxt, {'share_instance_ids': share_instance['id']}, with_share_data=True)) for snap_instance in snapshot_instances: rules = ( self.db. share_snapshot_access_get_all_for_snapshot_instance( ctxt, snap_instance['id'])) # NOTE(ganso): We don't invoke update_access for snapshots if # we don't have invalid rules or pending updates if any(r['state'] in (constants.ACCESS_STATE_DENYING, constants.ACCESS_STATE_QUEUED_TO_DENY, constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_QUEUED_TO_APPLY) for r in rules): try: self.snapshot_access_helper.update_access_rules( ctxt, snap_instance['id'], share_server) except Exception: LOG.exception( "Unexpected error occurred while updating " "access rules for snapshot instance %s.", snap_instance['id']) def _ensure_share(self, ctxt, share_instances): for share_instance in share_instances: try: export_locations = self.driver.ensure_share( ctxt, share_instance, share_server=share_instance['share_server']) except Exception: LOG.exception("Caught exception trying ensure " "share '%(s_id)s'.", {'s_id': share_instance['id']}) continue if export_locations: self.db.share_export_locations_update( ctxt, share_instance['id'], export_locations) def _provide_share_server_for_share(self, context, share_network_id, share_instance, snapshot=None, share_group=None, create_on_backend=True): """Gets or creates share_server and updates share with its id. Active share_server can be deleted if there are no dependent shares on it. So we need avoid possibility to delete share_server in time gap between reaching active state for share_server and setting up share_server_id for share. It is possible, for example, with first share creation, which starts share_server creation. For this purpose used shared lock between this method and the one with deletion of share_server. :param context: Current context :param share_network_id: Share network where existing share server should be found or created. If share_network_id is None method use share_network_id from provided snapshot. :param share_instance: Share Instance model :param snapshot: Optional -- Snapshot model :param create_on_backend: Boolean. If True, driver will be asked to create the share server if no share server is available. :returns: dict, dict -- first value is share_server, that has been chosen for share schedule. Second value is share updated with share_server_id. """ if not (share_network_id or snapshot): msg = _("'share_network_id' parameter or 'snapshot'" " should be provided. 
") raise ValueError(msg) def error(msg, *args): LOG.error(msg, *args) self.db.share_instance_update(context, share_instance['id'], {'status': constants.STATUS_ERROR}) parent_share_server = None parent_share_same_dest = False if snapshot: parent_share_server_id = ( snapshot['share']['instance']['share_server_id']) try: parent_share_server = self.db.share_server_get( context, parent_share_server_id) except exception.ShareServerNotFound: with excutils.save_and_reraise_exception(): error("Parent share server %s does not exist.", parent_share_server_id) if parent_share_server['status'] != constants.STATUS_ACTIVE: error_params = { 'id': parent_share_server_id, 'status': parent_share_server['status'], } error("Parent share server %(id)s has invalid status " "'%(status)s'.", error_params) raise exception.InvalidShareServer( share_server_id=parent_share_server ) parent_share_same_dest = (snapshot['share']['instance']['host'] == share_instance['host']) share_network_subnet_id = None if share_network_id: share_network_subnet = ( self.db.share_network_subnet_get_by_availability_zone_id( context, share_network_id, availability_zone_id=share_instance.get( 'availability_zone_id'))) if not share_network_subnet: raise exception.ShareNetworkSubnetNotFound( share_network_subnet_id=None) share_network_subnet_id = share_network_subnet['id'] elif parent_share_server: share_network_subnet_id = ( parent_share_server['share_network_subnet_id']) def get_available_share_servers(): if parent_share_server and parent_share_same_dest: return [parent_share_server] else: return ( self.db .share_server_get_all_by_host_and_share_subnet_valid( context, self.host, share_network_subnet_id) ) @utils.synchronized("share_manager_%s" % share_network_subnet_id, external=True) def _wrapped_provide_share_server_for_share(): try: available_share_servers = get_available_share_servers() except exception.ShareServerNotFound: available_share_servers = None compatible_share_server = None if available_share_servers: try: compatible_share_server = ( self.driver.choose_share_server_compatible_with_share( context, available_share_servers, share_instance, snapshot=snapshot.instance if snapshot else None, share_group=share_group ) ) except Exception as e: with excutils.save_and_reraise_exception(): error("Cannot choose compatible share server: %s", e) if not compatible_share_server: compatible_share_server = self.db.share_server_create( context, { 'host': self.host, 'share_network_subnet_id': share_network_subnet_id, 'status': constants.STATUS_CREATING, } ) msg = ("Using share_server %(share_server)s for share instance" " %(share_instance_id)s") LOG.debug(msg, { 'share_server': compatible_share_server['id'], 'share_instance_id': share_instance['id'] }) share_instance_ref = self.db.share_instance_update( context, share_instance['id'], {'share_server_id': compatible_share_server['id']}, with_share_data=True ) if create_on_backend: metadata = {'request_host': share_instance['host']} compatible_share_server = ( self._create_share_server_in_backend( context, compatible_share_server, metadata=metadata)) return compatible_share_server, share_instance_ref return _wrapped_provide_share_server_for_share() def _create_share_server_in_backend(self, context, share_server, metadata=None): """Perform setup_server on backend :param metadata: A dictionary, to be passed to driver's setup_server() """ if share_server['status'] == constants.STATUS_CREATING: # Create share server on backend with data from db. 
share_server = self._setup_server(context, share_server, metadata=metadata) LOG.info("Share server created successfully.") else: LOG.info("Using preexisting share server: " "'%(share_server_id)s'", {'share_server_id': share_server['id']}) return share_server def create_share_server(self, context, share_server_id): """Invoked to create a share server in this backend. This method is invoked to create the share server defined in the model obtained by the supplied id. :param context: The 'context.RequestContext' object for the request. :param share_server_id: The id of the server to be created. """ share_server = self.db.share_server_get(context, share_server_id) self._create_share_server_in_backend(context, share_server) def provide_share_server(self, context, share_instance_id, share_network_id, snapshot_id=None): """Invoked to provide a compatible share server. This method is invoked to find a compatible share server among the existing ones or create a share server database instance with the share server properties that will be used to create the share server later. :param context: The 'context.RequestContext' object for the request. :param share_instance_id: The id of the share instance whose model attributes will be used to provide the share server. :param share_network_id: The id of the share network the share server to be provided has to be related to. :param snapshot_id: The id of the snapshot to be used to obtain the share server if applicable. :return: The id of the share server that is being provided. """ share_instance = self.db.share_instance_get(context, share_instance_id, with_share_data=True) snapshot_ref = None if snapshot_id: snapshot_ref = self.db.share_snapshot_get(context, snapshot_id) share_group_ref = None if share_instance.get('share_group_id'): share_group_ref = self.db.share_group_get( context, share_instance['share_group_id']) share_server, share_instance = self._provide_share_server_for_share( context, share_network_id, share_instance, snapshot_ref, share_group_ref, create_on_backend=False) return share_server['id'] def _provide_share_server_for_share_group(self, context, share_network_id, share_network_subnet_id, share_group_ref, share_group_snapshot=None): """Gets or creates share_server and updates share group with its id. Active share_server can be deleted if there are no shares or share groups dependent on it. So we need avoid possibility to delete share_server in time gap between reaching active state for share_server and setting up share_server_id for share group. It is possible, for example, with first share group creation, which starts share_server creation. For this purpose used shared lock between this method and the one with deletion of share_server. :param context: Current context :param share_network_id: Share network where existing share server should be found or created. :param share_network_subnet_id: Share network subnet where existing share server should be found or created. If not specified, the default subnet will be used. :param share_group_ref: Share Group model :param share_group_snapshot: Optional -- ShareGroupSnapshot model. If supplied, driver will use it to choose the appropriate share server. :returns: dict, dict -- first value is share_server, that has been chosen for share group schedule. Second value is share group updated with share_server_id. """ if not share_network_id: msg = _("'share_network_id' parameter should be provided. 
") raise exception.InvalidInput(reason=msg) def error(msg, *args): LOG.error(msg, *args) self.db.share_group_update( context, share_group_ref['id'], {'status': constants.STATUS_ERROR}) @utils.synchronized("share_manager_%s" % share_network_id, external=True) def _wrapped_provide_share_server_for_share_group(): try: available_share_servers = ( self.db .share_server_get_all_by_host_and_share_subnet_valid( context, self.host, share_network_subnet_id)) except exception.ShareServerNotFound: available_share_servers = None compatible_share_server = None choose_share_server = ( self.driver.choose_share_server_compatible_with_share_group) if available_share_servers: try: compatible_share_server = choose_share_server( context, available_share_servers, share_group_ref, share_group_snapshot=share_group_snapshot, ) except Exception as e: with excutils.save_and_reraise_exception(): error("Cannot choose compatible share-server: %s", e) if not compatible_share_server: compatible_share_server = self.db.share_server_create( context, { 'host': self.host, 'share_network_subnet_id': share_network_subnet_id, 'status': constants.STATUS_CREATING } ) msg = ("Using share_server %(share_server)s for share " "group %(group_id)s") LOG.debug(msg, { 'share_server': compatible_share_server['id'], 'group_id': share_group_ref['id'] }) updated_share_group = self.db.share_group_update( context, share_group_ref['id'], {'share_server_id': compatible_share_server['id']}, ) if compatible_share_server['status'] == constants.STATUS_CREATING: # Create share server on backend with data from db. compatible_share_server = self._setup_server( context, compatible_share_server) LOG.info("Share server created successfully.") else: LOG.info("Used preexisting share server " "'%(share_server_id)s'", {'share_server_id': compatible_share_server['id']}) return compatible_share_server, updated_share_group return _wrapped_provide_share_server_for_share_group() def _get_share_server(self, context, share_instance): if share_instance['share_server_id']: return self.db.share_server_get( context, share_instance['share_server_id']) else: return None @utils.require_driver_initialized def connection_get_info(self, context, share_instance_id): share_instance = self.db.share_instance_get( context, share_instance_id, with_share_data=True) share_server = None if share_instance.get('share_server_id'): share_server = self.db.share_server_get( context, share_instance['share_server_id']) return self.driver.connection_get_info(context, share_instance, share_server) def _migration_start_driver( self, context, share_ref, src_share_instance, dest_host, writable, preserve_metadata, nondisruptive, preserve_snapshots, new_share_network_id, new_az_id, new_share_type_id): share_server = self._get_share_server(context, src_share_instance) share_api = api.API() request_spec, dest_share_instance = ( share_api.create_share_instance_and_get_request_spec( context, share_ref, new_az_id, None, dest_host, new_share_network_id, new_share_type_id)) self.db.share_instance_update( context, dest_share_instance['id'], {'status': constants.STATUS_MIGRATING_TO}) # refresh and obtain proxified properties dest_share_instance = self.db.share_instance_get( context, dest_share_instance['id'], with_share_data=True) helper = migration.ShareMigrationHelper( context, self.db, share_ref, self.access_helper) try: if dest_share_instance['share_network_id']: rpcapi = share_rpcapi.ShareAPI() # NOTE(ganso): Obtaining the share_server_id asynchronously so # we can wait for it to be ready. 
dest_share_server_id = rpcapi.provide_share_server( context, dest_share_instance, dest_share_instance['share_network_id']) rpcapi.create_share_server( context, dest_share_instance, dest_share_server_id) dest_share_server = helper.wait_for_share_server( dest_share_server_id) else: dest_share_server = None compatibility = self.driver.migration_check_compatibility( context, src_share_instance, dest_share_instance, share_server, dest_share_server) if not compatibility.get('compatible'): msg = _("Destination host %(host)s is not compatible with " "share %(share)s's source backend for driver-assisted " "migration.") % { 'host': dest_host, 'share': share_ref['id'], } raise exception.ShareMigrationFailed(reason=msg) if (not compatibility.get('nondisruptive') and nondisruptive): msg = _("Driver cannot perform a non-disruptive migration of " "share %s.") % share_ref['id'] raise exception.ShareMigrationFailed(reason=msg) if (not compatibility.get('preserve_metadata') and preserve_metadata): msg = _("Driver cannot perform migration of share %s while " "preserving all metadata.") % share_ref['id'] raise exception.ShareMigrationFailed(reason=msg) if not compatibility.get('writable') and writable: msg = _("Driver cannot perform migration of share %s while " "remaining writable.") % share_ref['id'] raise exception.ShareMigrationFailed(reason=msg) if (not compatibility.get('preserve_snapshots') and preserve_snapshots): msg = _("Driver cannot perform migration of share %s while " "preserving snapshots.") % share_ref['id'] raise exception.ShareMigrationFailed(reason=msg) snapshot_mapping = {} src_snap_instances = [] src_snapshots = self.db.share_snapshot_get_all_for_share( context, share_ref['id']) if compatibility.get('preserve_snapshots'): # Make sure all snapshots are 'available' if any(x['status'] != constants.STATUS_AVAILABLE for x in src_snapshots): msg = _( "All snapshots must have '%(status)s' status to be " "migrated by the driver along with share " "%(share)s.") % { 'share': share_ref['id'], 'status': constants.STATUS_AVAILABLE } raise exception.ShareMigrationFailed(reason=msg) src_snap_instances = [x.instance for x in src_snapshots] dest_snap_instance_data = { 'status': constants.STATUS_MIGRATING_TO, 'progress': '0%', 'share_instance_id': dest_share_instance['id'], } for snap_instance in src_snap_instances: snapshot_mapping[snap_instance['id']] = ( self.db.share_snapshot_instance_create( context, snap_instance['snapshot_id'], dest_snap_instance_data)) self.db.share_snapshot_instance_update( context, snap_instance['id'], {'status': constants.STATUS_MIGRATING}) else: if src_snapshots: msg = _("Driver does not support preserving snapshots, " "driver-assisted migration cannot proceed while " "share %s has snapshots.") % share_ref['id'] raise exception.ShareMigrationFailed(reason=msg) if not compatibility.get('writable'): self._cast_access_rules_to_readonly( context, src_share_instance, share_server) LOG.debug("Initiating driver migration for share %s.", share_ref['id']) self.db.share_update( context, share_ref['id'], {'task_state': ( constants.TASK_STATE_MIGRATION_DRIVER_STARTING)}) self.driver.migration_start( context, src_share_instance, dest_share_instance, src_snap_instances, snapshot_mapping, share_server, dest_share_server) self.db.share_update( context, share_ref['id'], {'task_state': ( constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS)}) except Exception: # NOTE(ganso): Cleaning up error'ed destination share instance from # database. 
It is assumed that driver cleans up leftovers in # backend when migration fails. self._migration_delete_instance(context, dest_share_instance['id']) self._restore_migrating_snapshots_status( context, src_share_instance['id']) # NOTE(ganso): Read only access rules and share instance status # will be restored in migration_start's except block. # NOTE(ganso): For now source share instance should remain in # migrating status for host-assisted migration. msg = _("Driver-assisted migration of share %s " "failed.") % share_ref['id'] LOG.exception(msg) raise exception.ShareMigrationFailed(reason=msg) return True def _cast_access_rules_to_readonly(self, context, src_share_instance, share_server): self.db.share_instance_update( context, src_share_instance['id'], {'cast_rules_to_readonly': True}) # Set all 'applying' or 'active' rules to 'queued_to_apply'. Since the # share instance has its cast_rules_to_readonly attribute set to True, # existing rules will be cast to read/only. acceptable_past_states = (constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_ACTIVE) new_state = constants.ACCESS_STATE_QUEUED_TO_APPLY conditionally_change = {k: new_state for k in acceptable_past_states} self.access_helper.get_and_update_share_instance_access_rules( context, share_instance_id=src_share_instance['id'], conditionally_change=conditionally_change) self.access_helper.update_access_rules( context, src_share_instance['id'], share_server=share_server) utils.wait_for_access_update( context, self.db, src_share_instance, self.migration_wait_access_rules_timeout) def _reset_read_only_access_rules( self, context, share, share_instance_id, supress_errors=True, helper=None): instance = self.db.share_instance_get(context, share_instance_id, with_share_data=True) if instance['cast_rules_to_readonly']: update = {'cast_rules_to_readonly': False} self.db.share_instance_update( context, share_instance_id, update) share_server = self._get_share_server(context, instance) if helper is None: helper = migration.ShareMigrationHelper( context, self.db, share, self.access_helper) if supress_errors: helper.cleanup_access_rules(instance, share_server) else: helper.revert_access_rules(instance, share_server) @periodic_task.periodic_task( spacing=CONF.migration_driver_continue_update_interval) @utils.require_driver_initialized def migration_driver_continue(self, context): """Invokes driver to continue migration of shares.""" instances = self.db.share_instances_get_all_by_host(context, self.host) for instance in instances: if instance['status'] != constants.STATUS_MIGRATING: continue share = self.db.share_get(context, instance['share_id']) if share['task_state'] == ( constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS): share_api = api.API() src_share_instance_id, dest_share_instance_id = ( share_api.get_migrating_instances(share)) src_share_instance = instance dest_share_instance = self.db.share_instance_get( context, dest_share_instance_id, with_share_data=True) src_share_server = self._get_share_server( context, src_share_instance) dest_share_server = self._get_share_server( context, dest_share_instance) src_snap_instances, snapshot_mappings = ( self._get_migrating_snapshots(context, src_share_instance, dest_share_instance)) try: finished = self.driver.migration_continue( context, src_share_instance, dest_share_instance, src_snap_instances, snapshot_mappings, src_share_server, dest_share_server) if finished: self.db.share_update( context, instance['share_id'], {'task_state': (constants. 
TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE)}) LOG.info("Share Migration for share %s completed " "first phase successfully.", share['id']) else: share = self.db.share_get( context, instance['share_id']) if (share['task_state'] == constants.TASK_STATE_MIGRATION_CANCELLED): LOG.warning( "Share Migration for share %s was cancelled.", share['id']) except Exception: # NOTE(ganso): Cleaning up error'ed destination share # instance from database. It is assumed that driver cleans # up leftovers in backend when migration fails. self._migration_delete_instance( context, dest_share_instance['id']) self._restore_migrating_snapshots_status( context, src_share_instance['id']) self._reset_read_only_access_rules( context, share, src_share_instance_id) self.db.share_instance_update( context, src_share_instance_id, {'status': constants.STATUS_AVAILABLE}) self.db.share_update( context, instance['share_id'], {'task_state': constants.TASK_STATE_MIGRATION_ERROR}) msg = _("Driver-assisted migration of share %s " "failed.") % share['id'] LOG.exception(msg) def _get_migrating_snapshots( self, context, src_share_instance, dest_share_instance): dest_snap_instances = ( self.db.share_snapshot_instance_get_all_with_filters( context, {'share_instance_ids': [dest_share_instance['id']]})) snapshot_mappings = {} src_snap_instances = [] if len(dest_snap_instances) > 0: src_snap_instances = ( self.db.share_snapshot_instance_get_all_with_filters( context, {'share_instance_ids': [src_share_instance['id']]})) for snap in src_snap_instances: dest_snap_instance = next( x for x in dest_snap_instances if snap['snapshot_id'] == x['snapshot_id']) snapshot_mappings[snap['id']] = dest_snap_instance return src_snap_instances, snapshot_mappings def _restore_migrating_snapshots_status( self, context, src_share_instance_id, errored_dest_instance_id=None): filters = {'share_instance_ids': [src_share_instance_id]} status = constants.STATUS_AVAILABLE if errored_dest_instance_id: filters['share_instance_ids'].append(errored_dest_instance_id) status = constants.STATUS_ERROR snap_instances = ( self.db.share_snapshot_instance_get_all_with_filters( context, filters) ) for instance in snap_instances: if instance['status'] == constants.STATUS_MIGRATING: self.db.share_snapshot_instance_update( context, instance['id'], {'status': status}) elif (errored_dest_instance_id and instance['status'] == constants.STATUS_MIGRATING_TO): self.db.share_snapshot_instance_update( context, instance['id'], {'status': status}) @utils.require_driver_initialized def migration_start( self, context, share_id, dest_host, force_host_assisted_migration, preserve_metadata, writable, nondisruptive, preserve_snapshots, new_share_network_id=None, new_share_type_id=None): """Migrates a share from current host to another host.""" LOG.debug("Entered migration_start method for share %s.", share_id) self.db.share_update( context, share_id, {'task_state': constants.TASK_STATE_MIGRATION_IN_PROGRESS}) share_ref = self.db.share_get(context, share_id) share_instance = self._get_share_instance(context, share_ref) success = False host_value = share_utils.extract_host(dest_host) service = self.db.service_get_by_args( context, host_value, 'manila-share') new_az_id = service['availability_zone_id'] if not force_host_assisted_migration: try: success = self._migration_start_driver( context, share_ref, share_instance, dest_host, writable, preserve_metadata, nondisruptive, preserve_snapshots, new_share_network_id, new_az_id, new_share_type_id) except Exception as e: if not isinstance(e, 
NotImplementedError): LOG.exception( ("The driver could not migrate the share %(shr)s"), {'shr': share_id}) try: if not success: if (writable or preserve_metadata or nondisruptive or preserve_snapshots): msg = _("Migration for share %s could not be " "performed because host-assisted migration is not " "allowed when share must remain writable, " "preserve snapshots and/or file metadata or be " "performed nondisruptively.") % share_id raise exception.ShareMigrationFailed(reason=msg) # We only handle shares without snapshots for now snaps = self.db.share_snapshot_get_all_for_share( context, share_id) if snaps: msg = _("Share %s must not have snapshots in order to " "perform a host-assisted migration.") % share_id raise exception.ShareMigrationFailed(reason=msg) LOG.debug("Starting host-assisted migration " "for share %s.", share_id) self.db.share_update( context, share_id, {'task_state': constants.TASK_STATE_MIGRATION_IN_PROGRESS}) self._migration_start_host_assisted( context, share_ref, share_instance, dest_host, new_share_network_id, new_az_id, new_share_type_id) except Exception: msg = _("Host-assisted migration failed for share %s.") % share_id LOG.exception(msg) self.db.share_update( context, share_id, {'task_state': constants.TASK_STATE_MIGRATION_ERROR}) self._reset_read_only_access_rules( context, share_ref, share_instance['id']) self.db.share_instance_update( context, share_instance['id'], {'status': constants.STATUS_AVAILABLE}) raise exception.ShareMigrationFailed(reason=msg) def _migration_start_host_assisted( self, context, share, src_share_instance, dest_host, new_share_network_id, new_az_id, new_share_type_id): rpcapi = share_rpcapi.ShareAPI() helper = migration.ShareMigrationHelper( context, self.db, share, self.access_helper) share_server = self._get_share_server(context.elevated(), src_share_instance) self._cast_access_rules_to_readonly( context, src_share_instance, share_server) try: dest_share_instance = helper.create_instance_and_wait( share, dest_host, new_share_network_id, new_az_id, new_share_type_id) self.db.share_instance_update( context, dest_share_instance['id'], {'status': constants.STATUS_MIGRATING_TO}) except Exception: msg = _("Failed to create instance on destination " "backend during migration of share %s.") % share['id'] LOG.exception(msg) raise exception.ShareMigrationFailed(reason=msg) ignore_list = self.driver.configuration.safe_get( 'migration_ignore_files') data_rpc = data_rpcapi.DataAPI() try: src_connection_info = self.driver.connection_get_info( context, src_share_instance, share_server) dest_connection_info = rpcapi.connection_get_info( context, dest_share_instance) LOG.debug("Time to start copying in migration" " for share %s.", share['id']) data_rpc.migration_start( context, share['id'], ignore_list, src_share_instance['id'], dest_share_instance['id'], src_connection_info, dest_connection_info) except Exception: msg = _("Failed to obtain migration info from backends or" " invoking Data Service for migration of " "share %s.") % share['id'] LOG.exception(msg) helper.cleanup_new_instance(dest_share_instance) raise exception.ShareMigrationFailed(reason=msg) def _migration_complete_driver( self, context, share_ref, src_share_instance, dest_share_instance): share_server = self._get_share_server(context, src_share_instance) dest_share_server = self._get_share_server( context, dest_share_instance) self.db.share_update( context, share_ref['id'], {'task_state': constants.TASK_STATE_MIGRATION_COMPLETING}) src_snap_instances, snapshot_mappings = ( 
self._get_migrating_snapshots(context, src_share_instance, dest_share_instance)) data_updates = self.driver.migration_complete( context, src_share_instance, dest_share_instance, src_snap_instances, snapshot_mappings, share_server, dest_share_server) or {} if data_updates.get('export_locations'): self.db.share_export_locations_update( context, dest_share_instance['id'], data_updates['export_locations']) snapshot_updates = data_updates.get('snapshot_updates') or {} dest_extra_specs = self._get_extra_specs_from_share_type( context, dest_share_instance['share_type_id']) for src_snap_ins, dest_snap_ins in snapshot_mappings.items(): model_update = snapshot_updates.get(dest_snap_ins['id']) or {} snapshot_export_locations = model_update.pop( 'export_locations', []) model_update['status'] = constants.STATUS_AVAILABLE model_update['progress'] = '100%' self.db.share_snapshot_instance_update( context, dest_snap_ins['id'], model_update) if dest_extra_specs['mount_snapshot_support']: for el in snapshot_export_locations: values = { 'share_snapshot_instance_id': dest_snap_ins['id'], 'path': el['path'], 'is_admin_only': el['is_admin_only'], } self.db.share_snapshot_instance_export_location_create( context, values) helper = migration.ShareMigrationHelper( context, self.db, share_ref, self.access_helper) helper.apply_new_access_rules(dest_share_instance) self.db.share_instance_update( context, dest_share_instance['id'], {'status': constants.STATUS_AVAILABLE, 'progress': '100%'}) self.db.share_instance_update(context, src_share_instance['id'], {'status': constants.STATUS_INACTIVE}) self._migration_delete_instance(context, src_share_instance['id']) def _migration_delete_instance(self, context, instance_id): # refresh the share instance model share_instance = self.db.share_instance_get( context, instance_id, with_share_data=True) rules = self.access_helper.get_and_update_share_instance_access_rules( context, share_instance_id=instance_id) self.access_helper.delete_share_instance_access_rules( context, rules, instance_id) snap_instances = self.db.share_snapshot_instance_get_all_with_filters( context, {'share_instance_ids': [instance_id]}) for instance in snap_instances: self.db.share_snapshot_instance_delete(context, instance['id']) self.db.share_instance_delete(context, instance_id) LOG.info("Share instance %s: deleted successfully.", instance_id) self._check_delete_share_server(context, share_instance) @utils.require_driver_initialized def migration_complete(self, context, src_instance_id, dest_instance_id): src_share_instance = self.db.share_instance_get( context, src_instance_id, with_share_data=True) dest_share_instance = self.db.share_instance_get( context, dest_instance_id, with_share_data=True) share_ref = self.db.share_get(context, src_share_instance['share_id']) LOG.info("Received request to finish Share Migration for " "share %s.", share_ref['id']) if share_ref['task_state'] == ( constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE): try: self._migration_complete_driver( context, share_ref, src_share_instance, dest_share_instance) except Exception: msg = _("Driver migration completion failed for" " share %s.") % share_ref['id'] LOG.exception(msg) # NOTE(ganso): If driver fails during migration-complete, # all instances are set to error and it is up to the admin # to fix the problem to either complete migration # manually or clean it up. At this moment, data # preservation at the source backend cannot be # guaranteed. 
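                # The recovery below therefore only records the failure:
                # snapshot instances are moved out of the 'migrating' states,
                # both share instances are flagged with an error status and
                # the share's task_state is set to migration_error, leaving
                # further cleanup to the administrator.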
self._restore_migrating_snapshots_status( context, src_share_instance['id'], errored_dest_instance_id=dest_share_instance['id']) self.db.share_instance_update( context, src_instance_id, {'status': constants.STATUS_ERROR}) self.db.share_instance_update( context, dest_instance_id, {'status': constants.STATUS_ERROR}) self.db.share_update( context, share_ref['id'], {'task_state': constants.TASK_STATE_MIGRATION_ERROR}) raise exception.ShareMigrationFailed(reason=msg) else: try: self._migration_complete_host_assisted( context, share_ref, src_instance_id, dest_instance_id) except Exception: msg = _("Host-assisted migration completion failed for" " share %s.") % share_ref['id'] LOG.exception(msg) self.db.share_update( context, share_ref['id'], {'task_state': constants.TASK_STATE_MIGRATION_ERROR}) self.db.share_instance_update( context, src_instance_id, {'status': constants.STATUS_AVAILABLE}) raise exception.ShareMigrationFailed(reason=msg) model_update = self._get_extra_specs_from_share_type( context, dest_share_instance['share_type_id']) model_update['task_state'] = constants.TASK_STATE_MIGRATION_SUCCESS self.db.share_update( context, dest_share_instance['share_id'], model_update) LOG.info("Share Migration for share %s" " completed successfully.", share_ref['id']) def _get_extra_specs_from_share_type(self, context, share_type_id): share_type = share_types.get_share_type(context, share_type_id) share_api = api.API() return share_api.get_share_attributes_from_share_type(share_type) def _migration_complete_host_assisted(self, context, share_ref, src_instance_id, dest_instance_id): src_share_instance = self.db.share_instance_get( context, src_instance_id, with_share_data=True) dest_share_instance = self.db.share_instance_get( context, dest_instance_id, with_share_data=True) helper = migration.ShareMigrationHelper( context, self.db, share_ref, self.access_helper) task_state = share_ref['task_state'] if task_state in (constants.TASK_STATE_DATA_COPYING_ERROR, constants.TASK_STATE_DATA_COPYING_CANCELLED): msg = _("Data copy of host assisted migration for share %s has not" " completed successfully.") % share_ref['id'] LOG.warning(msg) helper.cleanup_new_instance(dest_share_instance) cancelled = ( task_state == constants.TASK_STATE_DATA_COPYING_CANCELLED) suppress_errors = True if cancelled: suppress_errors = False self._reset_read_only_access_rules( context, share_ref, src_instance_id, supress_errors=suppress_errors, helper=helper) self.db.share_instance_update( context, src_instance_id, {'status': constants.STATUS_AVAILABLE}) if cancelled: self.db.share_update( context, share_ref['id'], {'task_state': constants.TASK_STATE_MIGRATION_CANCELLED}) LOG.info("Share Migration for share %s" " was cancelled.", share_ref['id']) return else: raise exception.ShareMigrationFailed(reason=msg) elif task_state != constants.TASK_STATE_DATA_COPYING_COMPLETED: msg = _("Data copy for migration of share %s has not completed" " yet.") % share_ref['id'] LOG.error(msg) raise exception.ShareMigrationFailed(reason=msg) self.db.share_update( context, share_ref['id'], {'task_state': constants.TASK_STATE_MIGRATION_COMPLETING}) try: helper.apply_new_access_rules(dest_share_instance) except Exception: msg = _("Failed to apply new access rules during migration " "of share %s.") % share_ref['id'] LOG.exception(msg) helper.cleanup_new_instance(dest_share_instance) self._reset_read_only_access_rules( context, share_ref, src_instance_id, helper=helper, supress_errors=True) self.db.share_instance_update( context, src_instance_id, {'status': 
constants.STATUS_AVAILABLE}) raise exception.ShareMigrationFailed(reason=msg) self.db.share_instance_update( context, dest_share_instance['id'], {'status': constants.STATUS_AVAILABLE, 'progress': '100%'}) self.db.share_instance_update(context, src_instance_id, {'status': constants.STATUS_INACTIVE}) helper.delete_instance_and_wait(src_share_instance) @utils.require_driver_initialized def migration_cancel(self, context, src_instance_id, dest_instance_id): src_share_instance = self.db.share_instance_get( context, src_instance_id, with_share_data=True) dest_share_instance = self.db.share_instance_get( context, dest_instance_id, with_share_data=True) share_ref = self.db.share_get(context, src_share_instance['share_id']) if share_ref['task_state'] not in ( constants.TASK_STATE_DATA_COPYING_COMPLETED, constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE, constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS): msg = _("Migration of share %s cannot be cancelled at this " "moment.") % share_ref['id'] raise exception.InvalidShare(reason=msg) share_server = self._get_share_server(context, src_share_instance) dest_share_server = self._get_share_server( context, dest_share_instance) helper = migration.ShareMigrationHelper( context, self.db, share_ref, self.access_helper) if share_ref['task_state'] == ( constants.TASK_STATE_DATA_COPYING_COMPLETED): self.db.share_instance_update( context, dest_share_instance['id'], {'status': constants.STATUS_INACTIVE}) helper.cleanup_new_instance(dest_share_instance) else: src_snap_instances, snapshot_mappings = ( self._get_migrating_snapshots(context, src_share_instance, dest_share_instance)) self.driver.migration_cancel( context, src_share_instance, dest_share_instance, src_snap_instances, snapshot_mappings, share_server, dest_share_server) self._migration_delete_instance(context, dest_share_instance['id']) self._restore_migrating_snapshots_status( context, src_share_instance['id']) self._reset_read_only_access_rules( context, share_ref, src_instance_id, supress_errors=False, helper=helper) self.db.share_instance_update( context, src_instance_id, {'status': constants.STATUS_AVAILABLE}) self.db.share_update( context, share_ref['id'], {'task_state': constants.TASK_STATE_MIGRATION_CANCELLED}) LOG.info("Share Migration for share %s" " was cancelled.", share_ref['id']) @utils.require_driver_initialized def migration_get_progress(self, context, src_instance_id, dest_instance_id): src_share_instance = self.db.share_instance_get( context, src_instance_id, with_share_data=True) dest_share_instance = self.db.share_instance_get( context, dest_instance_id, with_share_data=True) share_ref = self.db.share_get(context, src_share_instance['share_id']) # Confirm that it is driver migration scenario if share_ref['task_state'] != ( constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS): msg = _("Driver is not performing migration for" " share %s at this moment.") % share_ref['id'] raise exception.InvalidShare(reason=msg) share_server = None if share_ref.instance.get('share_server_id'): share_server = self.db.share_server_get( context, src_share_instance['share_server_id']) dest_share_server = None if dest_share_instance.get('share_server_id'): dest_share_server = self.db.share_server_get( context, dest_share_instance['share_server_id']) src_snap_instances, snapshot_mappings = ( self._get_migrating_snapshots(context, src_share_instance, dest_share_instance)) return self.driver.migration_get_progress( context, src_share_instance, dest_share_instance, src_snap_instances, snapshot_mappings, 
share_server, dest_share_server) def _get_share_instance(self, context, share): if isinstance(share, six.string_types): id = share else: id = share.instance['id'] return self.db.share_instance_get(context, id, with_share_data=True) @add_hooks @utils.require_driver_initialized def create_share_instance(self, context, share_instance_id, request_spec=None, filter_properties=None, snapshot_id=None): """Creates a share instance.""" context = context.elevated() share_instance = self._get_share_instance(context, share_instance_id) share_id = share_instance.get('share_id') share_network_id = share_instance.get('share_network_id') share = self.db.share_get(context, share_id) self._notify_about_share_usage(context, share, share_instance, "create.start") if not share_instance['availability_zone']: share_instance = self.db.share_instance_update( context, share_instance_id, {'availability_zone': self.availability_zone}, with_share_data=True ) if share_network_id and not self.driver.driver_handles_share_servers: self.db.share_instance_update( context, share_instance_id, {'status': constants.STATUS_ERROR}) self.message_api.create( context, message_field.Action.CREATE, share['project_id'], resource_type=message_field.Resource.SHARE, resource_id=share_id, detail=message_field.Detail.UNEXPECTED_NETWORK) raise exception.ManilaException(_( "Creation of share instance %s failed: driver does not expect " "share-network to be provided with current " "configuration.") % share_instance_id) if snapshot_id is not None: snapshot_ref = self.db.share_snapshot_get(context, snapshot_id) parent_share_server_id = ( snapshot_ref['share']['instance']['share_server_id']) else: snapshot_ref = None parent_share_server_id = None share_group_ref = None if share_instance.get('share_group_id'): share_group_ref = self.db.share_group_get( context, share_instance['share_group_id']) if share_network_id or parent_share_server_id: try: share_server, share_instance = ( self._provide_share_server_for_share( context, share_network_id, share_instance, snapshot=snapshot_ref, share_group=share_group_ref, ) ) except Exception: with excutils.save_and_reraise_exception(): error = ("Creation of share instance %s failed: " "failed to get share server.") LOG.error(error, share_instance_id) self.db.share_instance_update( context, share_instance_id, {'status': constants.STATUS_ERROR} ) self.message_api.create( context, message_field.Action.CREATE, share['project_id'], resource_type=message_field.Resource.SHARE, resource_id=share_id, detail=message_field.Detail.NO_SHARE_SERVER) else: share_server = None status = constants.STATUS_AVAILABLE try: if snapshot_ref: # NOTE(dviroel): we need to provide the parent share info to # assist drivers that create shares from snapshot in different # pools or back ends parent_share_instance = self.db.share_instance_get( context, snapshot_ref['share']['instance']['id'], with_share_data=True) parent_share_dict = self._get_share_instance_dict( context, parent_share_instance) model_update = self.driver.create_share_from_snapshot( context, share_instance, snapshot_ref.instance, share_server=share_server, parent_share=parent_share_dict) if isinstance(model_update, list): # NOTE(dviroel): the driver that doesn't implement the new # model_update will return only the export locations export_locations = model_update else: # NOTE(dviroel): share status is mandatory when answering # a model update. If not provided, won't be possible to # determine if was successfully created. 
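                    # As an illustration (not an exhaustive contract), a
                    # driver returning the dict form might answer with
                    # something like:
                    #     {'status': 'creating_from_snapshot',
                    #      'export_locations': [...]}
                    # whereas older drivers return only the export locations
                    # list.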
status = model_update.get('status') if status is None: msg = _("Driver didn't provide a share status.") raise exception.InvalidShareInstance(reason=msg) export_locations = model_update.get('export_locations') else: export_locations = self.driver.create_share( context, share_instance, share_server=share_server) if status not in [constants.STATUS_AVAILABLE, constants.STATUS_CREATING_FROM_SNAPSHOT]: msg = _('Driver returned an invalid status: %s') % status raise exception.InvalidShareInstance(reason=msg) if export_locations: self.db.share_export_locations_update( context, share_instance['id'], export_locations) except Exception as e: with excutils.save_and_reraise_exception(): LOG.error("Share instance %s failed on creation.", share_instance_id) detail_data = getattr(e, 'detail_data', {}) def get_export_location(details): if not isinstance(details, dict): return None return details.get('export_locations', details.get('export_location')) export_locations = get_export_location(detail_data) if export_locations: self.db.share_export_locations_update( context, share_instance['id'], export_locations) else: LOG.warning('Share instance information in exception ' 'can not be written to db because it ' 'contains %s and it is not a dictionary.', detail_data) self.db.share_instance_update( context, share_instance_id, {'status': constants.STATUS_ERROR} ) self.message_api.create( context, message_field.Action.CREATE, share['project_id'], resource_type=message_field.Resource.SHARE, resource_id=share_id, exception=e) else: LOG.info("Share instance %s created successfully.", share_instance_id) progress = '100%' if status == constants.STATUS_AVAILABLE else '0%' updates = { 'status': status, 'launched_at': timeutils.utcnow(), 'progress': progress } if share.get('replication_type'): updates['replica_state'] = constants.REPLICA_STATE_ACTIVE self.db.share_instance_update(context, share_instance_id, updates) self._notify_about_share_usage(context, share, share_instance, "create.end") def _update_share_replica_access_rules_state(self, context, share_replica_id, state): """Update the access_rules_status for the share replica.""" self.access_helper.get_and_update_share_instance_access_rules_status( context, status=state, share_instance_id=share_replica_id) def _get_replica_snapshots_for_snapshot(self, context, snapshot_id, active_replica_id, share_replica_id, with_share_data=True): """Return dict of snapshot instances of active and replica instances. This method returns a dict of snapshot instances for snapshot referred to by snapshot_id. The dict contains the snapshot instance pertaining to the 'active' replica and the snapshot instance pertaining to the replica referred to by share_replica_id. 
""" filters = { 'snapshot_ids': snapshot_id, 'share_instance_ids': (share_replica_id, active_replica_id), } instance_list = self.db.share_snapshot_instance_get_all_with_filters( context, filters, with_share_data=with_share_data) snapshots = { 'active_replica_snapshot': self._get_snapshot_instance_dict( context, list(filter(lambda x: x['share_instance_id'] == active_replica_id, instance_list))[0]), 'share_replica_snapshot': self._get_snapshot_instance_dict( context, list(filter(lambda x: x['share_instance_id'] == share_replica_id, instance_list))[0]), } return snapshots @add_hooks @utils.require_driver_initialized @locked_share_replica_operation def create_share_replica(self, context, share_replica_id, share_id=None, request_spec=None, filter_properties=None): """Create a share replica.""" context = context.elevated() share_replica = self.db.share_replica_get( context, share_replica_id, with_share_data=True, with_share_server=True) if not share_replica['availability_zone']: share_replica = self.db.share_replica_update( context, share_replica['id'], {'availability_zone': self.availability_zone}, with_share_data=True ) _active_replica = ( self.db.share_replicas_get_available_active_replica( context, share_replica['share_id'], with_share_data=True, with_share_server=True)) if not _active_replica: self.db.share_replica_update( context, share_replica['id'], {'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR}) self.message_api.create( context, message_field.Action.CREATE, share_replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=share_replica['id'], detail=message_field.Detail.NO_ACTIVE_REPLICA) msg = _("An 'active' replica must exist in 'available' " "state to create a new replica for share %s.") raise exception.ReplicationException( reason=msg % share_replica['share_id']) # We need the share_network_id in case of # driver_handles_share_server=True share_network_id = share_replica.get('share_network_id', None) if xor(bool(share_network_id), self.driver.driver_handles_share_servers): self.db.share_replica_update( context, share_replica['id'], {'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR}) self.message_api.create( context, message_field.Action.CREATE, share_replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=share_replica['id'], detail=message_field.Detail.UNEXPECTED_NETWORK) raise exception.InvalidDriverMode( "The share-network value provided does not match with the " "current driver configuration.") if share_network_id: try: share_server, share_replica = ( self._provide_share_server_for_share( context, share_network_id, share_replica) ) except Exception: with excutils.save_and_reraise_exception(): LOG.error("Failed to get share server " "for share replica creation.") self.db.share_replica_update( context, share_replica['id'], {'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR}) self.message_api.create( context, message_field.Action.CREATE, share_replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=share_replica['id'], detail=message_field.Detail.NO_SHARE_SERVER) else: share_server = None # Map the existing access rules for the share to # the replica in the DB. share_access_rules = self.db.share_instance_access_copy( context, share_replica['share_id'], share_replica['id']) # Get snapshots for the share. 
share_snapshots = self.db.share_snapshot_get_all_for_share( context, share_id) # Get the required data for snapshots that have 'aggregate_status' # set to 'available'. available_share_snapshots = [ self._get_replica_snapshots_for_snapshot( context, x['id'], _active_replica['id'], share_replica_id) for x in share_snapshots if x['aggregate_status'] == constants.STATUS_AVAILABLE] replica_list = ( self.db.share_replicas_get_all_by_share( context, share_replica['share_id'], with_share_data=True, with_share_server=True) ) replica_list = [self._get_share_instance_dict(context, r) for r in replica_list] share_replica = self._get_share_instance_dict(context, share_replica) try: replica_ref = self.driver.create_replica( context, replica_list, share_replica, share_access_rules, available_share_snapshots, share_server=share_server) or {} except Exception as excep: with excutils.save_and_reraise_exception(): LOG.error("Share replica %s failed on creation.", share_replica['id']) self.db.share_replica_update( context, share_replica['id'], {'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR}) self._update_share_replica_access_rules_state( context, share_replica['id'], constants.STATUS_ERROR) self.message_api.create( context, message_field.Action.CREATE, share_replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=share_replica['id'], exception=excep) if replica_ref.get('export_locations'): if isinstance(replica_ref.get('export_locations'), list): self.db.share_export_locations_update( context, share_replica['id'], replica_ref.get('export_locations')) else: msg = ('Invalid export locations passed to the share ' 'manager.') LOG.warning(msg) if replica_ref.get('replica_state'): self.db.share_replica_update( context, share_replica['id'], {'status': constants.STATUS_AVAILABLE, 'replica_state': replica_ref.get('replica_state'), 'progress': '100%'}) if replica_ref.get('access_rules_status'): self._update_share_replica_access_rules_state( context, share_replica['id'], replica_ref.get('access_rules_status')) else: self._update_share_replica_access_rules_state( context, share_replica['id'], constants.STATUS_ACTIVE) LOG.info("Share replica %s created successfully.", share_replica['id']) @add_hooks @utils.require_driver_initialized @locked_share_replica_operation def delete_share_replica(self, context, share_replica_id, share_id=None, force=False): """Delete a share replica.""" context = context.elevated() share_replica = self.db.share_replica_get( context, share_replica_id, with_share_data=True, with_share_server=True) # Grab all the snapshot instances that belong to this replica. 
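        # These snapshot instances are passed to the driver's
        # delete_replica() call and, once the replica is gone, their
        # database entries are removed as well further below.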
replica_snapshots = ( self.db.share_snapshot_instance_get_all_with_filters( context, {'share_instance_ids': share_replica_id}, with_share_data=True) ) replica_list = ( self.db.share_replicas_get_all_by_share( context, share_replica['share_id'], with_share_data=True, with_share_server=True) ) replica_list = [self._get_share_instance_dict(context, r) for r in replica_list] replica_snapshots = [self._get_snapshot_instance_dict(context, s) for s in replica_snapshots] share_server = self._get_share_server(context, share_replica) share_replica = self._get_share_instance_dict(context, share_replica) try: self.access_helper.update_access_rules( context, share_replica_id, delete_all_rules=True, share_server=share_server ) except Exception as excep: with excutils.save_and_reraise_exception() as exc_context: # Set status to 'error' from 'deleting' since # access_rules_status has been set to 'error'. self.db.share_replica_update( context, share_replica['id'], {'status': constants.STATUS_ERROR}) self.message_api.create( context, message_field.Action.DELETE_ACCESS_RULES, share_replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=share_replica['id'], exception=excep) if force: msg = _("The driver was unable to delete access rules " "for the replica: %s. Will attempt to delete " "the replica anyway.") LOG.exception(msg, share_replica['id']) exc_context.reraise = False try: self.driver.delete_replica( context, replica_list, replica_snapshots, share_replica, share_server=share_server) except Exception as excep: with excutils.save_and_reraise_exception() as exc_context: if force: msg = _("The driver was unable to delete the share " "replica: %s on the backend. Since " "this operation is forced, the replica will be " "deleted from Manila's database. 
A cleanup on " "the backend may be necessary.") LOG.exception(msg, share_replica['id']) exc_context.reraise = False else: self.db.share_replica_update( context, share_replica['id'], {'status': constants.STATUS_ERROR_DELETING, 'replica_state': constants.STATUS_ERROR}) self.message_api.create( context, message_field.Action.DELETE, share_replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=share_replica['id'], exception=excep) for replica_snapshot in replica_snapshots: self.db.share_snapshot_instance_delete( context, replica_snapshot['id']) self.db.share_replica_delete(context, share_replica['id']) LOG.info("Share replica %s deleted successfully.", share_replica['id']) @add_hooks @utils.require_driver_initialized @locked_share_replica_operation def promote_share_replica(self, context, share_replica_id, share_id=None): """Promote a share replica to active state.""" context = context.elevated() share_replica = self.db.share_replica_get( context, share_replica_id, with_share_data=True, with_share_server=True) replication_type = share_replica['replication_type'] if replication_type == constants.REPLICATION_TYPE_READABLE: ensure_old_active_replica_to_readonly = True else: ensure_old_active_replica_to_readonly = False share_server = self._get_share_server(context, share_replica) # Get list of all replicas for share replica_list = ( self.db.share_replicas_get_all_by_share( context, share_replica['share_id'], with_share_data=True, with_share_server=True) ) try: old_active_replica = list(filter( lambda r: ( r['replica_state'] == constants.REPLICA_STATE_ACTIVE), replica_list))[0] except IndexError: self.db.share_replica_update( context, share_replica['id'], {'status': constants.STATUS_AVAILABLE}) msg = _("Share %(share)s has no replica with 'replica_state' " "set to %(state)s. Promoting %(replica)s is not " "possible.") self.message_api.create( context, message_field.Action.PROMOTE, share_replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=share_replica['id'], detail=message_field.Detail.NO_ACTIVE_REPLICA) raise exception.ReplicationException( reason=msg % {'share': share_replica['share_id'], 'state': constants.REPLICA_STATE_ACTIVE, 'replica': share_replica['id']}) access_rules = self.db.share_access_get_all_for_share( context, share_replica['share_id']) replica_list = [self._get_share_instance_dict(context, r) for r in replica_list] share_replica = self._get_share_instance_dict(context, share_replica) try: updated_replica_list = ( self.driver.promote_replica( context, replica_list, share_replica, access_rules, share_server=share_server) ) except Exception as excep: with excutils.save_and_reraise_exception(): # (NOTE) gouthamr: If the driver throws an exception at # this stage, there is a good chance that the replicas are # somehow altered on the backend. We loop through the # replicas and set their 'status's to 'error' and # leave the 'replica_state' unchanged. This also changes the # 'status' of the replica that failed to promote to 'error' as # before this operation. The backend may choose to update # the actual replica_state during the replica_monitoring # stage. 
updates = {'status': constants.STATUS_ERROR} for replica_ref in replica_list: self.db.share_replica_update( context, replica_ref['id'], updates) self.message_api.create( context, message_field.Action.PROMOTE, replica_ref['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=replica_ref['id'], exception=excep) # Set any 'creating' snapshots on the currently active replica to # 'error' since we cannot guarantee they will finish 'creating'. active_replica_snapshot_instances = ( self.db.share_snapshot_instance_get_all_with_filters( context, {'share_instance_ids': share_replica['id']}) ) for instance in active_replica_snapshot_instances: if instance['status'] in (constants.STATUS_CREATING, constants.STATUS_DELETING): msg = ("The replica snapshot instance %(instance)s was " "in %(state)s. Since it was not in %(available)s " "state when the replica was promoted, it will be " "set to %(error)s.") payload = { 'instance': instance['id'], 'state': instance['status'], 'available': constants.STATUS_AVAILABLE, 'error': constants.STATUS_ERROR, } LOG.info(msg, payload) self.db.share_snapshot_instance_update( context, instance['id'], {'status': constants.STATUS_ERROR}) if not updated_replica_list: self.db.share_replica_update( context, old_active_replica['id'], {'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC, 'cast_rules_to_readonly': ensure_old_active_replica_to_readonly}) self.db.share_replica_update( context, share_replica['id'], {'status': constants.STATUS_AVAILABLE, 'replica_state': constants.REPLICA_STATE_ACTIVE, 'cast_rules_to_readonly': False}) else: while updated_replica_list: # NOTE(vponomaryov): update 'active' replica last. for updated_replica in updated_replica_list: if (updated_replica['id'] == share_replica['id'] and len(updated_replica_list) > 1): continue updated_replica_list.remove(updated_replica) break updated_export_locs = updated_replica.get( 'export_locations') if(updated_export_locs is not None and isinstance(updated_export_locs, list)): self.db.share_export_locations_update( context, updated_replica['id'], updated_export_locs) updated_replica_state = updated_replica.get( 'replica_state') updates = {} # Change the promoted replica's status from 'available' to # 'replication_change' and unset cast_rules_to_readonly if updated_replica['id'] == share_replica['id']: updates['cast_rules_to_readonly'] = False updates['status'] = constants.STATUS_AVAILABLE elif updated_replica['id'] == old_active_replica['id']: updates['cast_rules_to_readonly'] = ( ensure_old_active_replica_to_readonly) if updated_replica_state == constants.STATUS_ERROR: updates['status'] = constants.STATUS_ERROR if updated_replica_state is not None: updates['replica_state'] = updated_replica_state if updates: self.db.share_replica_update( context, updated_replica['id'], updates) if updated_replica.get('access_rules_status'): self._update_share_replica_access_rules_state( context, share_replica['id'], updated_replica.get('access_rules_status')) LOG.info("Share replica %s: promoted to active state " "successfully.", share_replica['id']) @periodic_task.periodic_task(spacing=CONF.replica_state_update_interval) @utils.require_driver_initialized def periodic_share_replica_update(self, context): LOG.debug("Updating status of share replica instances.") replicas = self.db.share_replicas_get_all(context, with_share_data=True) # Filter only non-active replicas belonging to this backend def qualified_replica(r): return (share_utils.extract_host(r['host']) == share_utils.extract_host(self.host)) replicas 
= list(filter(lambda x: qualified_replica(x), replicas)) for replica in replicas: self._share_replica_update( context, replica, share_id=replica['share_id']) @add_hooks @utils.require_driver_initialized def update_share_replica(self, context, share_replica_id, share_id=None): """Initiated by the force_update API.""" share_replica = self.db.share_replica_get( context, share_replica_id, with_share_data=True, with_share_server=True) self._share_replica_update(context, share_replica, share_id=share_id) @locked_share_replica_operation def _share_replica_update(self, context, share_replica, share_id=None): share_server = self._get_share_server(context, share_replica) # Re-grab the replica: try: share_replica = self.db.share_replica_get( context, share_replica['id'], with_share_data=True, with_share_server=True) except exception.ShareReplicaNotFound: # Replica may have been deleted, nothing to do here return # We don't poll for replicas that are busy in some operation, # or if they are the 'active' instance. if (share_replica['status'] in constants.TRANSITIONAL_STATUSES or share_replica['replica_state'] == constants.REPLICA_STATE_ACTIVE): return access_rules = self.db.share_access_get_all_for_share( context, share_replica['share_id']) LOG.debug("Updating status of share share_replica %s: ", share_replica['id']) replica_list = ( self.db.share_replicas_get_all_by_share( context, share_replica['share_id'], with_share_data=True, with_share_server=True) ) _active_replica = [x for x in replica_list if x['replica_state'] == constants.REPLICA_STATE_ACTIVE][0] # Get snapshots for the share. share_snapshots = self.db.share_snapshot_get_all_for_share( context, share_id) # Get the required data for snapshots that have 'aggregate_status' # set to 'available'. available_share_snapshots = [ self._get_replica_snapshots_for_snapshot( context, x['id'], _active_replica['id'], share_replica['id']) for x in share_snapshots if x['aggregate_status'] == constants.STATUS_AVAILABLE] replica_list = [self._get_share_instance_dict(context, r) for r in replica_list] share_replica = self._get_share_instance_dict(context, share_replica) try: replica_state = self.driver.update_replica_state( context, replica_list, share_replica, access_rules, available_share_snapshots, share_server=share_server) except Exception as excep: msg = ("Driver error when updating replica " "state for replica %s.") LOG.exception(msg, share_replica['id']) self.db.share_replica_update( context, share_replica['id'], {'replica_state': constants.STATUS_ERROR, 'status': constants.STATUS_ERROR}) self.message_api.create( context, message_field.Action.UPDATE, share_replica['project_id'], resource_type=message_field.Resource.SHARE_REPLICA, resource_id=share_replica['id'], exception=excep) return if replica_state in (constants.REPLICA_STATE_IN_SYNC, constants.REPLICA_STATE_OUT_OF_SYNC, constants.STATUS_ERROR): self.db.share_replica_update(context, share_replica['id'], {'replica_state': replica_state}) elif replica_state: msg = (("Replica %(id)s cannot be set to %(state)s " "through update call.") % {'id': share_replica['id'], 'state': replica_state}) LOG.warning(msg) def _validate_share_and_driver_mode(self, share_instance): driver_dhss = self.driver.driver_handles_share_servers share_dhss = share_types.parse_boolean_extra_spec( 'driver_handles_share_servers', share_types.get_share_type_extra_specs( share_instance['share_type_id'], constants.ExtraSpecs.DRIVER_HANDLES_SHARE_SERVERS)) if driver_dhss != share_dhss: msg = _("Driver mode of share %(share)s being 
managed is " "incompatible with mode DHSS=%(dhss)s configured for" " this backend.") % {'share': share_instance['share_id'], 'dhss': driver_dhss} raise exception.InvalidShare(reason=msg) return driver_dhss @add_hooks @utils.require_driver_initialized def manage_share(self, context, share_id, driver_options): context = context.elevated() share_ref = self.db.share_get(context, share_id) share_instance = self._get_share_instance(context, share_ref) share_type_extra_specs = self._get_extra_specs_from_share_type( context, share_instance['share_type_id']) share_type_supports_replication = share_type_extra_specs.get( 'replication_type', None) project_id = share_ref['project_id'] try: driver_dhss = self._validate_share_and_driver_mode(share_instance) if driver_dhss is True: share_server = self._get_share_server(context, share_instance) share_update = ( self.driver.manage_existing_with_server( share_instance, driver_options, share_server) or {} ) else: share_update = ( self.driver.manage_existing( share_instance, driver_options) or {} ) if not share_update.get('size'): msg = _("Driver cannot calculate share size.") raise exception.InvalidShare(reason=msg) deltas = { 'project_id': project_id, 'user_id': context.user_id, 'shares': 1, 'gigabytes': share_update['size'], 'share_type_id': share_instance['share_type_id'], } if share_type_supports_replication: deltas.update({'share_replicas': 1, 'replica_gigabytes': share_update['size']}) reservations = QUOTAS.reserve(context, **deltas) QUOTAS.commit( context, reservations, project_id=project_id, share_type_id=share_instance['share_type_id'], ) share_update.update({ 'status': constants.STATUS_AVAILABLE, 'launched_at': timeutils.utcnow(), 'availability_zone': self.availability_zone, }) # If the share was managed with `replication_type` extra-spec, the # instance becomes an `active` replica. if share_ref.get('replication_type'): share_update['replica_state'] = constants.REPLICA_STATE_ACTIVE # NOTE(vponomaryov): we should keep only those export locations # that driver has calculated to avoid incompatibilities with one # provided by user. if 'export_locations' in share_update: self.db.share_export_locations_update( context, share_instance['id'], share_update.pop('export_locations'), delete=True) self.db.share_update(context, share_id, share_update) except Exception: # NOTE(vponomaryov): set size as 1 because design expects size # to be set, it also will allow us to handle delete/unmanage # operations properly with this errored share according to quotas. self.db.share_update( context, share_id, {'status': constants.STATUS_MANAGE_ERROR, 'size': 1}) raise @add_hooks @utils.require_driver_initialized def manage_snapshot(self, context, snapshot_id, driver_options): context = context.elevated() snapshot_ref = self.db.share_snapshot_get(context, snapshot_id) snapshot_instance = self.db.share_snapshot_instance_get( context, snapshot_ref.instance['id'], with_share_data=True ) project_id = snapshot_ref['project_id'] driver_dhss = self.driver.driver_handles_share_servers try: if driver_dhss is True: share_server = self._get_share_server(context, snapshot_ref['share']) snapshot_update = ( self.driver.manage_existing_snapshot_with_server( snapshot_instance, driver_options, share_server) or {} ) else: snapshot_update = ( self.driver.manage_existing_snapshot( snapshot_instance, driver_options) or {} ) if not snapshot_update.get('size'): snapshot_update['size'] = snapshot_ref['share']['size'] LOG.warning("Cannot get the size of the snapshot " "%(snapshot_id)s. 
Using the size of " "the share instead.", {'snapshot_id': snapshot_id}) self._update_quota_usages(context, project_id, { "snapshots": 1, "snapshot_gigabytes": snapshot_update['size'], }) snapshot_export_locations = snapshot_update.pop( 'export_locations', []) if snapshot_instance['share']['mount_snapshot_support']: for el in snapshot_export_locations: values = { 'share_snapshot_instance_id': snapshot_instance['id'], 'path': el['path'], 'is_admin_only': el['is_admin_only'], } self.db.share_snapshot_instance_export_location_create( context, values) snapshot_update.update({ 'status': constants.STATUS_AVAILABLE, 'progress': '100%', }) snapshot_update.pop('id', None) self.db.share_snapshot_update(context, snapshot_id, snapshot_update) except Exception: # NOTE(vponomaryov): set size as 1 because design expects size # to be set, it also will allow us to handle delete/unmanage # operations properly with this errored snapshot according to # quotas. self.db.share_snapshot_update( context, snapshot_id, {'status': constants.STATUS_MANAGE_ERROR, 'size': 1}) raise def _update_quota_usages(self, context, project_id, usages): user_id = context.user_id for resource, usage in usages.items(): try: current_usage = self.db.quota_usage_get( context, project_id, resource, user_id) self.db.quota_usage_update( context, project_id, user_id, resource, in_use=current_usage['in_use'] + usage) except exception.QuotaUsageNotFound: self.db.quota_usage_create(context, project_id, user_id, resource, usage) @add_hooks @utils.require_driver_initialized def unmanage_share(self, context, share_id): context = context.elevated() share_ref = self.db.share_get(context, share_id) share_instance = self._get_share_instance(context, share_ref) share_server = None project_id = share_ref['project_id'] replicas = self.db.share_replicas_get_all_by_share( context, share_id) supports_replication = len(replicas) > 0 def share_manage_set_error_status(msg, exception): status = {'status': constants.STATUS_UNMANAGE_ERROR} self.db.share_update(context, share_id, status) LOG.error(msg, exception) dhss = self.driver.driver_handles_share_servers try: if dhss is True: share_server = self._get_share_server(context, share_instance) self.driver.unmanage_with_server(share_instance, share_server) else: self.driver.unmanage(share_instance) except exception.InvalidShare as e: share_manage_set_error_status( ("Share can not be unmanaged: %s."), e) return deltas = { 'project_id': project_id, 'shares': -1, 'gigabytes': -share_ref['size'], 'share_type_id': share_instance['share_type_id'], } # NOTE(carloss): while unmanaging a share, a share will not contain # replicas other than the active one. So there is no need to # recalculate the amount of share replicas to be deallocated. if supports_replication: deltas.update({'share_replicas': -1, 'replica_gigabytes': -share_ref['size']}) try: reservations = QUOTAS.reserve(context, **deltas) QUOTAS.commit( context, reservations, project_id=project_id, share_type_id=share_instance['share_type_id'], ) except Exception as e: # Note(imalinovskiy): # Quota reservation errors here are not fatal, because # unmanage is administrator API and he/she could update user # quota usages later if it's required. 
LOG.warning("Failed to update quota usages: %s.", e) if self.configuration.safe_get('unmanage_remove_access_rules'): try: self.access_helper.update_access_rules( context, share_instance['id'], delete_all_rules=True, share_server=share_server ) except Exception as e: share_manage_set_error_status( ("Can not remove access rules of share: %s."), e) return self.db.share_instance_delete(context, share_instance['id']) # NOTE(ganso): Since we are unmanaging a share that is still within a # share server, we need to prevent the share server from being # auto-deleted. if share_server and share_server['is_auto_deletable']: self.db.share_server_update(context, share_server['id'], {'is_auto_deletable': False}) msg = ("Since share %(share)s has been un-managed from share " "server %(server)s. This share server must be removed " "manually, either by un-managing or by deleting it. The " "share network subnet %(subnet)s and share network " "%(network)s cannot be deleted unless this share server " "has been removed.") msg_args = { 'share': share_id, 'server': share_server['id'], 'subnet': share_server['share_network_subnet_id'], 'network': share_instance['share_network_id'] } LOG.warning(msg, msg_args) LOG.info("Share %s: unmanaged successfully.", share_id) @add_hooks @utils.require_driver_initialized def unmanage_snapshot(self, context, snapshot_id): status = {'status': constants.STATUS_UNMANAGE_ERROR} context = context.elevated() snapshot_ref = self.db.share_snapshot_get(context, snapshot_id) share_server = self._get_share_server(context, snapshot_ref['share']) snapshot_instance = self.db.share_snapshot_instance_get( context, snapshot_ref.instance['id'], with_share_data=True ) project_id = snapshot_ref['project_id'] if self.configuration.safe_get('unmanage_remove_access_rules'): try: self.snapshot_access_helper.update_access_rules( context, snapshot_instance['id'], delete_all_rules=True, share_server=share_server) except Exception: LOG.exception( ("Cannot remove access rules of snapshot %s."), snapshot_id) self.db.share_snapshot_update(context, snapshot_id, status) return dhss = self.driver.driver_handles_share_servers try: if dhss: self.driver.unmanage_snapshot_with_server( snapshot_instance, share_server) else: self.driver.unmanage_snapshot(snapshot_instance) except exception.UnmanageInvalidShareSnapshot as e: self.db.share_snapshot_update(context, snapshot_id, status) LOG.error("Share snapshot cannot be unmanaged: %s.", e) return try: share_type_id = snapshot_ref['share']['instance']['share_type_id'] reservations = QUOTAS.reserve( context, project_id=project_id, snapshots=-1, snapshot_gigabytes=-snapshot_ref['size'], share_type_id=share_type_id, ) QUOTAS.commit( context, reservations, project_id=project_id, share_type_id=share_type_id, ) except Exception as e: # Note(imalinovskiy): # Quota reservation errors here are not fatal, because # unmanage is administrator API and he/she could update user # quota usages later if it's required. 
LOG.warning("Failed to update quota usages: %s.", e) self.db.share_snapshot_instance_delete( context, snapshot_instance['id']) @add_hooks @utils.require_driver_initialized def manage_share_server(self, context, share_server_id, identifier, driver_opts): if self.driver.driver_handles_share_servers is False: msg = _("Cannot manage share server %s in a " "backend configured with driver_handles_share_servers" " set to False.") % share_server_id raise exception.ManageShareServerError(reason=msg) server = self.db.share_server_get(context, share_server_id) try: share_network_subnet = self.db.share_network_subnet_get( context, server['share_network_subnet_id']) share_network = self.db.share_network_get( context, share_network_subnet['share_network_id']) number_allocations = ( self.driver.get_network_allocations_number()) if self.driver.admin_network_api: number_allocations += ( self.driver.get_admin_network_allocations_number()) if number_allocations > 0: # allocations obtained from the driver that still need to # be validated remaining_allocations = ( self.driver.get_share_server_network_info( context, server, identifier, driver_opts)) if len(remaining_allocations) > 0: if self.driver.admin_network_api: remaining_allocations = ( self.driver.admin_network_api. manage_network_allocations( context, remaining_allocations, server)) # allocations that are managed are removed from # remaining_allocations remaining_allocations = ( self.driver.network_api. manage_network_allocations( context, remaining_allocations, server, share_network, share_network_subnet)) # We require that all allocations are managed, else we # may have problems deleting this share server if len(remaining_allocations) > 0: msg = ("Failed to manage all allocations. " "Allocations %s were not " "managed." % six.text_type( remaining_allocations)) raise exception.ManageShareServerError(reason=msg) else: # if there should be allocations, but the driver # doesn't return any something is wrong msg = ("Driver did not return required network " "allocations to be managed. Required number " "of allocations is %s." 
% number_allocations) raise exception.ManageShareServerError(reason=msg) new_identifier, backend_details = self.driver.manage_server( context, server, identifier, driver_opts) if not new_identifier: new_identifier = server['id'] if backend_details is None or not isinstance( backend_details, dict): backend_details = {} for security_service in share_network['security_services']: ss_type = security_service['type'] data = { 'name': security_service['name'], 'ou': security_service['ou'], 'domain': security_service['domain'], 'server': security_service['server'], 'dns_ip': security_service['dns_ip'], 'user': security_service['user'], 'type': ss_type, 'password': security_service['password'], } backend_details.update({ 'security_service_' + ss_type: jsonutils.dumps(data) }) if backend_details: self.db.share_server_backend_details_set( context, server['id'], backend_details) self.db.share_server_update( context, share_server_id, {'status': constants.STATUS_ACTIVE, 'identifier': new_identifier}) except Exception: msg = "Error managing share server %s" LOG.exception(msg, share_server_id) self.db.share_server_update( context, share_server_id, {'status': constants.STATUS_MANAGE_ERROR}) raise LOG.info("Share server %s managed successfully.", share_server_id) @add_hooks @utils.require_driver_initialized def unmanage_share_server(self, context, share_server_id, force=False): server = self.db.share_server_get( context, share_server_id) server_details = server['backend_details'] security_services = [] for ss_name in constants.SECURITY_SERVICES_ALLOWED_TYPES: ss = server_details.get('security_service_' + ss_name) if ss: security_services.append(jsonutils.loads(ss)) try: self.driver.unmanage_server(server_details, security_services) except NotImplementedError: if not force: LOG.error("Did not unmanage share server %s since the driver " "does not support managing share servers and no " "``force`` option was supplied.", share_server_id) self.db.share_server_update( context, share_server_id, {'status': constants.STATUS_UNMANAGE_ERROR}) return try: if self.driver.get_network_allocations_number() > 0: # NOTE(ganso): This will already remove admin allocations. self.driver.network_api.unmanage_network_allocations( context, share_server_id) elif (self.driver.get_admin_network_allocations_number() > 0 and self.driver.admin_network_api): # NOTE(ganso): This is here in case there are only admin # allocations. 
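                # This branch only runs when the driver reports no
                # tenant-network allocations but does report admin-network
                # allocations, e.g. a backend that is reachable only over the
                # admin network (an assumed example).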
self.driver.admin_network_api.unmanage_network_allocations( context, share_server_id) self.db.share_server_delete(context, share_server_id) except Exception: msg = "Error unmanaging share server %s" LOG.exception(msg, share_server_id) self.db.share_server_update( context, share_server_id, {'status': constants.STATUS_UNMANAGE_ERROR}) raise LOG.info("Share server %s unmanaged successfully.", share_server_id) @add_hooks @utils.require_driver_initialized def revert_to_snapshot(self, context, snapshot_id, reservations): context = context.elevated() snapshot = self.db.share_snapshot_get(context, snapshot_id) share = snapshot['share'] share_id = share['id'] share_instance_id = snapshot.instance.share_instance_id share_access_rules = ( self.access_helper.get_share_instance_access_rules( context, filters={'state': constants.STATUS_ACTIVE}, share_instance_id=share_instance_id)) snapshot_access_rules = ( self.snapshot_access_helper.get_snapshot_instance_access_rules( context, snapshot.instance['id'])) if share.get('has_replicas'): self._revert_to_replicated_snapshot( context, share, snapshot, reservations, share_access_rules, snapshot_access_rules, share_id=share_id) else: self._revert_to_snapshot(context, share, snapshot, reservations, share_access_rules, snapshot_access_rules) def _revert_to_snapshot(self, context, share, snapshot, reservations, share_access_rules, snapshot_access_rules): share_server = self._get_share_server(context, share) share_id = share['id'] snapshot_id = snapshot['id'] project_id = share['project_id'] user_id = share['user_id'] snapshot_instance = self.db.share_snapshot_instance_get( context, snapshot.instance['id'], with_share_data=True) share_type_id = snapshot_instance["share_instance"]["share_type_id"] # Make primitive to pass the information to the driver snapshot_instance_dict = self._get_snapshot_instance_dict( context, snapshot_instance, snapshot=snapshot) try: self.driver.revert_to_snapshot(context, snapshot_instance_dict, share_access_rules, snapshot_access_rules, share_server=share_server) except Exception as excep: with excutils.save_and_reraise_exception(): msg = ('Share %(share)s could not be reverted ' 'to snapshot %(snap)s.') msg_args = {'share': share_id, 'snap': snapshot_id} LOG.exception(msg, msg_args) if reservations: QUOTAS.rollback( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_type_id, ) self.db.share_update( context, share_id, {'status': constants.STATUS_REVERTING_ERROR}) self.db.share_snapshot_update( context, snapshot_id, {'status': constants.STATUS_AVAILABLE}) self.message_api.create( context, message_field.Action.REVERT_TO_SNAPSHOT, share['project_id'], resource_type=message_field.Resource.SHARE, resource_id=share_id, exception=excep) if reservations: QUOTAS.commit( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_type_id, ) self.db.share_update( context, share_id, {'status': constants.STATUS_AVAILABLE, 'size': snapshot['size']}) self.db.share_snapshot_update( context, snapshot_id, {'status': constants.STATUS_AVAILABLE}) msg = ('Share %(share)s reverted to snapshot %(snap)s ' 'successfully.') msg_args = {'share': share_id, 'snap': snapshot_id} LOG.info(msg, msg_args) @add_hooks @utils.require_driver_initialized def delete_share_instance(self, context, share_instance_id, force=False): """Delete a share instance.""" context = context.elevated() share_instance = self._get_share_instance(context, share_instance_id) share_id = share_instance.get('share_id') share_server = 
self._get_share_server(context, share_instance) share = self.db.share_get(context, share_id) self._notify_about_share_usage(context, share, share_instance, "delete.start") try: self.access_helper.update_access_rules( context, share_instance_id, delete_all_rules=True, share_server=share_server ) except exception.ShareResourceNotFound: LOG.warning("Share instance %s does not exist in the " "backend.", share_instance_id) except Exception as excep: with excutils.save_and_reraise_exception() as exc_context: if force: msg = ("The driver was unable to delete access rules " "for the instance: %s. Will attempt to delete " "the instance anyway.") LOG.error(msg, share_instance_id) exc_context.reraise = False else: self.db.share_instance_update( context, share_instance_id, {'status': constants.STATUS_ERROR_DELETING}) self.message_api.create( context, message_field.Action.DELETE_ACCESS_RULES, share_instance['project_id'], resource_type=message_field.Resource.SHARE, resource_id=share_instance_id, exception=excep) try: self.driver.delete_share(context, share_instance, share_server=share_server) except exception.ShareResourceNotFound: LOG.warning("Share instance %s does not exist in the " "backend.", share_instance_id) except Exception as excep: with excutils.save_and_reraise_exception() as exc_context: if force: msg = ("The driver was unable to delete the share " "instance: %s on the backend. Since this " "operation is forced, the instance will be " "deleted from Manila's database. A cleanup on " "the backend may be necessary.") LOG.error(msg, share_instance_id) exc_context.reraise = False else: self.db.share_instance_update( context, share_instance_id, {'status': constants.STATUS_ERROR_DELETING}) self.message_api.create( context, message_field.Action.DELETE, share_instance['project_id'], resource_type=message_field.Resource.SHARE, resource_id=share_instance_id, exception=excep) self.db.share_instance_delete( context, share_instance_id, need_to_update_usages=True) LOG.info("Share instance %s: deleted successfully.", share_instance_id) self._check_delete_share_server(context, share_instance) self._notify_about_share_usage(context, share, share_instance, "delete.end") def _check_delete_share_server(self, context, share_instance): if CONF.delete_share_server_with_last_share: share_server = self._get_share_server(context, share_instance) if (share_server and len(share_server.share_instances) == 0 and share_server.is_auto_deletable is True): LOG.debug("Scheduled deletion of share-server " "with id '%s' automatically by " "deletion of last share.", share_server['id']) self.delete_share_server(context, share_server) @periodic_task.periodic_task(spacing=600) @utils.require_driver_initialized def delete_free_share_servers(self, ctxt): if not (self.driver.driver_handles_share_servers and self.configuration.automatic_share_server_cleanup): return LOG.info("Check for unused share servers to delete.") updated_before = timeutils.utcnow() - datetime.timedelta( minutes=self.configuration.unused_share_server_cleanup_interval) servers = self.db.share_server_get_all_unused_deletable(ctxt, self.host, updated_before) for server in servers: self.delete_share_server(ctxt, server) @add_hooks @utils.require_driver_initialized def create_snapshot(self, context, share_id, snapshot_id): """Create snapshot for share.""" snapshot_ref = self.db.share_snapshot_get(context, snapshot_id) share_server = self._get_share_server( context, snapshot_ref['share']['instance']) snapshot_instance = self.db.share_snapshot_instance_get( context, 
snapshot_ref.instance['id'], with_share_data=True ) snapshot_instance_id = snapshot_instance['id'] snapshot_instance = self._get_snapshot_instance_dict( context, snapshot_instance) try: model_update = self.driver.create_snapshot( context, snapshot_instance, share_server=share_server) or {} except Exception as excep: with excutils.save_and_reraise_exception(): self.db.share_snapshot_instance_update( context, snapshot_instance_id, {'status': constants.STATUS_ERROR}) self.message_api.create( context, message_field.Action.CREATE, snapshot_ref['project_id'], resource_type=message_field.Resource.SHARE_SNAPSHOT, resource_id=snapshot_instance_id, exception=excep) snapshot_export_locations = model_update.pop('export_locations', []) if snapshot_instance['share']['mount_snapshot_support']: for el in snapshot_export_locations: values = { 'share_snapshot_instance_id': snapshot_instance_id, 'path': el['path'], 'is_admin_only': el['is_admin_only'], } self.db.share_snapshot_instance_export_location_create(context, values) if model_update.get('status') in (None, constants.STATUS_AVAILABLE): model_update['status'] = constants.STATUS_AVAILABLE model_update['progress'] = '100%' self.db.share_snapshot_instance_update( context, snapshot_instance_id, model_update) @add_hooks @utils.require_driver_initialized def delete_snapshot(self, context, snapshot_id, force=False): """Delete share snapshot.""" context = context.elevated() snapshot_ref = self.db.share_snapshot_get(context, snapshot_id) share_server = self._get_share_server( context, snapshot_ref['share']['instance']) snapshot_instance = self.db.share_snapshot_instance_get( context, snapshot_ref.instance['id'], with_share_data=True) snapshot_instance_id = snapshot_instance['id'] if context.project_id != snapshot_ref['project_id']: project_id = snapshot_ref['project_id'] else: project_id = context.project_id snapshot_instance = self._get_snapshot_instance_dict( context, snapshot_instance) share_ref = self.db.share_get(context, snapshot_ref['share_id']) if share_ref['mount_snapshot_support']: try: self.snapshot_access_helper.update_access_rules( context, snapshot_instance['id'], delete_all_rules=True, share_server=share_server) except Exception: LOG.exception( ("Failed to remove access rules for snapshot %s."), snapshot_instance['id']) LOG.warning("The driver was unable to remove access rules " "for snapshot %s. Moving on.", snapshot_instance['snapshot_id']) try: self.driver.delete_snapshot(context, snapshot_instance, share_server=share_server) except Exception as excep: with excutils.save_and_reraise_exception() as exc: if force: msg = _("The driver was unable to delete the " "snapshot %s on the backend. Since this " "operation is forced, the snapshot will " "be deleted from Manila's database. 
A cleanup on " "the backend may be necessary.") LOG.exception(msg, snapshot_id) exc.reraise = False else: self.db.share_snapshot_instance_update( context, snapshot_instance_id, {'status': constants.STATUS_ERROR_DELETING}) self.message_api.create( context, message_field.Action.DELETE, snapshot_ref['project_id'], resource_type=message_field.Resource.SHARE_SNAPSHOT, resource_id=snapshot_instance_id, exception=excep) self.db.share_snapshot_instance_delete(context, snapshot_instance_id) share_type_id = snapshot_ref['share']['instance']['share_type_id'] try: reservations = QUOTAS.reserve( context, project_id=project_id, snapshots=-1, snapshot_gigabytes=-snapshot_ref['size'], user_id=snapshot_ref['user_id'], share_type_id=share_type_id, ) except Exception: reservations = None LOG.exception("Failed to update quota usages while deleting " "snapshot %s.", snapshot_id) if reservations: QUOTAS.commit( context, reservations, project_id=project_id, user_id=snapshot_ref['user_id'], share_type_id=share_type_id, ) @add_hooks @utils.require_driver_initialized @locked_share_replica_operation def create_replicated_snapshot(self, context, snapshot_id, share_id=None): """Create a snapshot for a replicated share.""" # Grab the snapshot and replica information from the DB. snapshot = self.db.share_snapshot_get(context, snapshot_id) share_server = self._get_share_server(context, snapshot['share']) replica_snapshots = ( self.db.share_snapshot_instance_get_all_with_filters( context, {'snapshot_ids': snapshot['id']}, with_share_data=True) ) replica_list = ( self.db.share_replicas_get_all_by_share( context, share_id, with_share_data=True, with_share_server=True) ) # Make primitives to pass the information to the driver. replica_list = [self._get_share_instance_dict(context, r) for r in replica_list] replica_snapshots = [self._get_snapshot_instance_dict(context, s) for s in replica_snapshots] updated_instances = [] try: updated_instances = self.driver.create_replicated_snapshot( context, replica_list, replica_snapshots, share_server=share_server) or [] except Exception: with excutils.save_and_reraise_exception(): for instance in replica_snapshots: self.db.share_snapshot_instance_update( context, instance['id'], {'status': constants.STATUS_ERROR}) for instance in updated_instances: if instance['status'] == constants.STATUS_AVAILABLE: instance.update({'progress': '100%'}) self.db.share_snapshot_instance_update( context, instance['id'], instance) def _find_active_replica_on_host(self, replica_list): """Find the active replica matching this manager's host.""" for replica in replica_list: if (replica['replica_state'] == constants.REPLICA_STATE_ACTIVE and share_utils.extract_host(replica['host']) == self.host): return replica @locked_share_replica_operation def _revert_to_replicated_snapshot(self, context, share, snapshot, reservations, share_access_rules, snapshot_access_rules, share_id=None): share_server = self._get_share_server(context, share) snapshot_id = snapshot['id'] project_id = share['project_id'] user_id = share['user_id'] # Get replicas, including an active replica replica_list = self.db.share_replicas_get_all_by_share( context, share_id, with_share_data=True, with_share_server=True) active_replica = self._find_active_replica_on_host(replica_list) # Get snapshot instances, including one on an active replica replica_snapshots = ( self.db.share_snapshot_instance_get_all_with_filters( context, {'snapshot_ids': snapshot_id}, with_share_data=True)) snapshot_instance_filters = { 'share_instance_ids': 
active_replica['id'], 'snapshot_ids': snapshot_id, } active_replica_snapshot = ( self.db.share_snapshot_instance_get_all_with_filters( context, snapshot_instance_filters))[0] # Make primitives to pass the information to the driver replica_list = [self._get_share_instance_dict(context, replica) for replica in replica_list] active_replica = self._get_share_instance_dict(context, active_replica) replica_snapshots = [self._get_snapshot_instance_dict(context, s) for s in replica_snapshots] active_replica_snapshot = self._get_snapshot_instance_dict( context, active_replica_snapshot, snapshot=snapshot) try: self.driver.revert_to_replicated_snapshot( context, active_replica, replica_list, active_replica_snapshot, replica_snapshots, share_access_rules, snapshot_access_rules, share_server=share_server) except Exception: with excutils.save_and_reraise_exception(): msg = ('Share %(share)s could not be reverted ' 'to snapshot %(snap)s.') msg_args = {'share': share_id, 'snap': snapshot_id} LOG.exception(msg, msg_args) if reservations: QUOTAS.rollback( context, reservations, project_id=project_id, user_id=user_id, share_type_id=active_replica['share_type_id'], ) self.db.share_replica_update( context, active_replica['id'], {'status': constants.STATUS_REVERTING_ERROR}) self.db.share_snapshot_instance_update( context, active_replica_snapshot['id'], {'status': constants.STATUS_AVAILABLE}) if reservations: QUOTAS.commit( context, reservations, project_id=project_id, user_id=user_id, share_type_id=active_replica['share_type_id'], ) self.db.share_update(context, share_id, {'size': snapshot['size']}) self.db.share_replica_update( context, active_replica['id'], {'status': constants.STATUS_AVAILABLE}) self.db.share_snapshot_instance_update( context, active_replica_snapshot['id'], {'status': constants.STATUS_AVAILABLE}) msg = ('Share %(share)s reverted to snapshot %(snap)s ' 'successfully.') msg_args = {'share': share_id, 'snap': snapshot_id} LOG.info(msg, msg_args) @add_hooks @utils.require_driver_initialized @locked_share_replica_operation def delete_replicated_snapshot(self, context, snapshot_id, share_id=None, force=False): """Delete a snapshot from a replicated share.""" # Grab the replica and snapshot information from the DB. snapshot = self.db.share_snapshot_get(context, snapshot_id) share_server = self._get_share_server(context, snapshot['share']) replica_snapshots = ( self.db.share_snapshot_instance_get_all_with_filters( context, {'snapshot_ids': snapshot['id']}, with_share_data=True) ) replica_list = ( self.db.share_replicas_get_all_by_share( context, share_id, with_share_data=True, with_share_server=True) ) replica_list = [self._get_share_instance_dict(context, r) for r in replica_list] replica_snapshots = [self._get_snapshot_instance_dict(context, s) for s in replica_snapshots] deleted_instances = [] updated_instances = [] db_force_delete_msg = _('The driver was unable to delete some or all ' 'of the share replica snapshots on the ' 'backend/s. Since this operation is forced, ' 'the replica snapshots will be deleted from ' 'Manila.') try: updated_instances = self.driver.delete_replicated_snapshot( context, replica_list, replica_snapshots, share_server=share_server) or [] except Exception: with excutils.save_and_reraise_exception() as e: if force: # Can delete all instances if forced. 
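                    # Setting e.reraise to False below suppresses the
                    # re-raise performed by save_and_reraise_exception(), so
                    # a forced delete can fall through to the database
                    # cleanup that follows the except block.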
                    deleted_instances = replica_snapshots
                    LOG.exception(db_force_delete_msg)
                    e.reraise = False
                else:
                    for instance in replica_snapshots:
                        self.db.share_snapshot_instance_update(
                            context, instance['id'],
                            {'status': constants.STATUS_ERROR_DELETING})

        if not deleted_instances:
            if force:
                # Ignore model updates on 'force' delete.
                LOG.warning(db_force_delete_msg)
                deleted_instances = replica_snapshots
            else:
                deleted_instances = list(filter(
                    lambda x: x['status'] == constants.STATUS_DELETED,
                    updated_instances))
                updated_instances = list(filter(
                    lambda x: x['status'] != constants.STATUS_DELETED,
                    updated_instances))

        for instance in deleted_instances:
            self.db.share_snapshot_instance_delete(context, instance['id'])

        for instance in updated_instances:
            self.db.share_snapshot_instance_update(
                context, instance['id'], instance)

    @periodic_task.periodic_task(spacing=CONF.replica_state_update_interval)
    @utils.require_driver_initialized
    def periodic_share_replica_snapshot_update(self, context):
        LOG.debug("Updating status of share replica snapshots.")
        transitional_statuses = (constants.STATUS_CREATING,
                                 constants.STATUS_DELETING)
        replicas = self.db.share_replicas_get_all(context,
                                                  with_share_data=True)

        def qualified_replica(r):
            # Filter non-active replicas belonging to this backend
            return (share_utils.extract_host(r['host']) ==
                    share_utils.extract_host(self.host) and
                    r['replica_state'] != constants.REPLICA_STATE_ACTIVE)

        host_replicas = list(filter(
            lambda x: qualified_replica(x), replicas))
        transitional_replica_snapshots = []

        # Get snapshot instances for each replica that are in 'creating' or
        # 'deleting' states.
        for replica in host_replicas:
            filters = {
                'share_instance_ids': replica['id'],
                'statuses': transitional_statuses,
            }
            replica_snapshots = (
                self.db.share_snapshot_instance_get_all_with_filters(
                    context, filters, with_share_data=True)
            )
            transitional_replica_snapshots.extend(replica_snapshots)

        for replica_snapshot in transitional_replica_snapshots:
            replica_snapshots = (
                self.db.share_snapshot_instance_get_all_with_filters(
                    context,
                    {'snapshot_ids': replica_snapshot['snapshot_id']})
            )
            share_id = replica_snapshot['share']['share_id']
            self._update_replica_snapshot(
                context, replica_snapshot,
                replica_snapshots=replica_snapshots, share_id=share_id)

    @locked_share_replica_operation
    def _update_replica_snapshot(self, context, replica_snapshot,
                                 replica_snapshots=None, share_id=None):
        # Re-grab the replica:
        try:
            share_replica = self.db.share_replica_get(
                context, replica_snapshot['share_instance_id'],
                with_share_data=True, with_share_server=True)
            replica_snapshot = self.db.share_snapshot_instance_get(
                context, replica_snapshot['id'], with_share_data=True)
        except exception.NotFound:
            # Replica may have been deleted, try to cleanup the snapshot
            # instance
            try:
                self.db.share_snapshot_instance_delete(
                    context, replica_snapshot['id'])
            except exception.ShareSnapshotInstanceNotFound:
                # snapshot instance has been deleted, nothing to do here
                pass
            return

        msg_payload = {
            'snapshot_instance': replica_snapshot['id'],
            'replica': share_replica['id'],
        }

        LOG.debug("Updating status of replica snapshot %(snapshot_instance)s: "
                  "on replica: %(replica)s", msg_payload)

        # Grab all the replica and snapshot information.
        replica_list = (
            self.db.share_replicas_get_all_by_share(
                context, share_replica['share_id'],
                with_share_data=True, with_share_server=True)
        )

        replica_list = [self._get_share_instance_dict(context, r)
                        for r in replica_list]
        replica_snapshots = replica_snapshots or []

        # Convert data to primitives to send to the driver.
replica_snapshots = [self._get_snapshot_instance_dict(context, s) for s in replica_snapshots] replica_snapshot = self._get_snapshot_instance_dict( context, replica_snapshot) share_replica = self._get_share_instance_dict(context, share_replica) share_server = share_replica['share_server'] snapshot_update = None try: snapshot_update = self.driver.update_replicated_snapshot( context, replica_list, share_replica, replica_snapshots, replica_snapshot, share_server=share_server) or {} except exception.SnapshotResourceNotFound: if replica_snapshot['status'] == constants.STATUS_DELETING: LOG.info('Snapshot %(snapshot_instance)s on replica ' '%(replica)s has been deleted.', msg_payload) self.db.share_snapshot_instance_delete( context, replica_snapshot['id']) else: LOG.exception("Replica snapshot %s was not found on " "the backend.", replica_snapshot['id']) self.db.share_snapshot_instance_update( context, replica_snapshot['id'], {'status': constants.STATUS_ERROR}) except Exception: LOG.exception("Driver error while updating replica snapshot: " "%s", replica_snapshot['id']) self.db.share_snapshot_instance_update( context, replica_snapshot['id'], {'status': constants.STATUS_ERROR}) if snapshot_update: snapshot_status = snapshot_update.get('status') if snapshot_status == constants.STATUS_AVAILABLE: snapshot_update['progress'] = '100%' self.db.share_snapshot_instance_update( context, replica_snapshot['id'], snapshot_update) @add_hooks @utils.require_driver_initialized def update_access(self, context, share_instance_id): """Allow/Deny access to some share.""" share_instance = self._get_share_instance(context, share_instance_id) share_server = self._get_share_server(context, share_instance) LOG.debug("Received request to update access for share instance" " %s.", share_instance_id) self.access_helper.update_access_rules( context, share_instance_id, share_server=share_server) @periodic_task.periodic_task(spacing=CONF.periodic_interval) @utils.require_driver_initialized def _report_driver_status(self, context): LOG.info('Updating share status') share_stats = self.driver.get_share_stats(refresh=True) if not share_stats: return if self.driver.driver_handles_share_servers: share_stats['server_pools_mapping'] = ( self._get_servers_pool_mapping(context) ) self.update_service_capabilities(share_stats) @periodic_task.periodic_task(spacing=CONF.periodic_hooks_interval) @utils.require_driver_initialized def _execute_periodic_hook(self, context): """Executes periodic-based hooks.""" # TODO(vponomaryov): add also access rules and share servers share_instances = ( self.db.share_instances_get_all_by_host( context=context, host=self.host)) periodic_hook_data = self.driver.get_periodic_hook_data( context=context, share_instances=share_instances) for hook in self.hooks: hook.execute_periodic_hook( context=context, periodic_hook_data=periodic_hook_data) def _get_servers_pool_mapping(self, context): """Get info about relationships between pools and share_servers.""" share_servers = self.db.share_server_get_all_by_host(context, self.host) return {server['id']: self.driver.get_share_server_pools(server) for server in share_servers} @add_hooks @utils.require_driver_initialized def publish_service_capabilities(self, context): """Collect driver status and then publish it.""" self._report_driver_status(context) self._publish_service_capabilities(context) def _form_server_setup_info(self, context, share_server, share_network, share_network_subnet): # Network info is used by driver for setting up share server # and getting server info 
on share creation. network_allocations = self.db.network_allocations_get_for_share_server( context, share_server['id'], label='user') admin_network_allocations = ( self.db.network_allocations_get_for_share_server( context, share_server['id'], label='admin')) # NOTE(vponomaryov): following network_info fields are deprecated: # 'segmentation_id', 'cidr' and 'network_type'. # And they should be used from network allocations directly. # They should be removed right after no one uses them. network_info = { 'server_id': share_server['id'], 'segmentation_id': share_network_subnet['segmentation_id'], 'cidr': share_network_subnet['cidr'], 'neutron_net_id': share_network_subnet['neutron_net_id'], 'neutron_subnet_id': share_network_subnet['neutron_subnet_id'], 'security_services': share_network['security_services'], 'network_allocations': network_allocations, 'admin_network_allocations': admin_network_allocations, 'backend_details': share_server.get('backend_details'), 'network_type': share_network_subnet['network_type'], } return network_info def _setup_server(self, context, share_server, metadata=None): try: share_network_subnet = share_server['share_network_subnet'] share_network_subnet_id = share_network_subnet['id'] share_network_id = share_network_subnet['share_network_id'] share_network = self.db.share_network_get( context, share_network_id) self.driver.allocate_network(context, share_server, share_network, share_network_subnet) self.driver.allocate_admin_network(context, share_server) # Get share_network_subnet in case it was updated. share_network_subnet = self.db.share_network_subnet_get( context, share_network_subnet_id) network_info = self._form_server_setup_info( context, share_server, share_network, share_network_subnet) self._validate_segmentation_id(network_info) # NOTE(vponomaryov): Save security services data to share server # details table to remove dependency from share network after # creation operation. It will allow us to delete share server and # share network separately without dependency on each other. 
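            # For illustration only (values assumed), each service ends up
            # stored under a key derived from its type, e.g.:
            #   'security_service_ldap': '{"name": "ldap1", "ou": null,
            #    "domain": "example.com", "server": "10.0.0.5",
            #    "dns_ip": "10.0.0.5", "user": "admin", "type": "ldap",
            #    "password": "***"}'
            # mirroring the dict built in the loop below.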
for security_service in network_info['security_services']: ss_type = security_service['type'] data = { 'name': security_service['name'], 'ou': security_service['ou'], 'domain': security_service['domain'], 'server': security_service['server'], 'dns_ip': security_service['dns_ip'], 'user': security_service['user'], 'type': ss_type, 'password': security_service['password'], } self.db.share_server_backend_details_set( context, share_server['id'], {'security_service_' + ss_type: jsonutils.dumps(data)}) server_info = self.driver.setup_server( network_info, metadata=metadata) self.driver.update_network_allocation(context, share_server) self.driver.update_admin_network_allocation(context, share_server) if server_info and isinstance(server_info, dict): self.db.share_server_backend_details_set( context, share_server['id'], server_info) return self.db.share_server_update( context, share_server['id'], {'status': constants.STATUS_ACTIVE, 'identifier': server_info.get( 'identifier', share_server['id'])}) except Exception as e: with excutils.save_and_reraise_exception(): details = getattr(e, "detail_data", {}) if isinstance(details, dict): server_details = details.get("server_details", {}) if not isinstance(server_details, dict): LOG.debug( ("Cannot save non-dict data (%(data)s) " "provided as 'server details' of " "failed share server '%(server)s'."), {"server": share_server["id"], "data": server_details}) else: invalid_details = [] for key, value in server_details.items(): try: self.db.share_server_backend_details_set( context, share_server['id'], {key: value}) except Exception: invalid_details.append("%(key)s: %(value)s" % { 'key': six.text_type(key), 'value': six.text_type(value) }) if invalid_details: LOG.debug( ("Following server details " "cannot be written to db : %s"), six.text_type("\n".join(invalid_details))) else: LOG.debug( ("Cannot save non-dict data (%(data)s) provided as " "'detail data' of failed share server '%(server)s'."), {"server": share_server["id"], "data": details}) self.db.share_server_update( context, share_server['id'], {'status': constants.STATUS_ERROR}) self.driver.deallocate_network(context, share_server['id']) def _validate_segmentation_id(self, network_info): """Raises exception if the segmentation type is incorrect.""" if (network_info['network_type'] in (None, 'flat') and network_info['segmentation_id']): msg = _('A segmentation ID %(vlan_id)s was specified but can not ' 'be used with a network of type %(seg_type)s; the ' 'segmentation ID option must be omitted or set to 0') raise exception.NetworkBadConfigurationException( reason=msg % {'vlan_id': network_info['segmentation_id'], 'seg_type': network_info['network_type']}) elif (network_info['network_type'] == 'vlan' and (network_info['segmentation_id'] is None or int(network_info['segmentation_id']) > 4094 or int(network_info['segmentation_id']) < 1)): msg = _('A segmentation ID %s was specified but is not valid for ' 'a VLAN network type; the segmentation ID must be an ' 'integer value in the range of [1,4094]') raise exception.NetworkBadConfigurationException( reason=msg % network_info['segmentation_id']) elif (network_info['network_type'] == 'vxlan' and (network_info['segmentation_id'] is None or int(network_info['segmentation_id']) > 16777215 or int(network_info['segmentation_id']) < 1)): msg = _('A segmentation ID %s was specified but is not valid for ' 'a VXLAN network type; the segmentation ID must be an ' 'integer value in the range of [1,16777215]') raise exception.NetworkBadConfigurationException( reason=msg 
% network_info['segmentation_id']) elif (network_info['network_type'] == 'gre' and (network_info['segmentation_id'] is None or int(network_info['segmentation_id']) > 4294967295 or int(network_info['segmentation_id']) < 1)): msg = _('A segmentation ID %s was specified but is not valid for ' 'a GRE network type; the segmentation ID must be an ' 'integer value in the range of [1, 4294967295]') raise exception.NetworkBadConfigurationException( reason=msg % network_info['segmentation_id']) @add_hooks @utils.require_driver_initialized def delete_share_server(self, context, share_server): @utils.synchronized( "share_manager_%s" % share_server['share_network_subnet_id']) def _wrapped_delete_share_server(): # NOTE(vponomaryov): Verify that there are no dependent shares. # Without this verification we can get here exception in next case: # share-server-delete API was called after share creation scheduled # and share_server reached ACTIVE status, but before update # of share_server_id field for share. If so, after lock realese # this method starts executing when amount of dependent shares # has been changed. server_id = share_server['id'] shares = self.db.share_instances_get_all_by_share_server( context, server_id) if shares: raise exception.ShareServerInUse(share_server_id=server_id) server_details = share_server['backend_details'] self.db.share_server_update(context, server_id, {'status': constants.STATUS_DELETING}) try: LOG.debug("Deleting share server '%s'", server_id) security_services = [] for ss_name in constants.SECURITY_SERVICES_ALLOWED_TYPES: ss = server_details.get('security_service_' + ss_name) if ss: security_services.append(jsonutils.loads(ss)) self.driver.teardown_server( server_details=server_details, security_services=security_services) except Exception: with excutils.save_and_reraise_exception(): LOG.error( "Share server '%s' failed on deletion.", server_id) self.db.share_server_update( context, server_id, {'status': constants.STATUS_ERROR}) else: self.db.share_server_delete(context, share_server['id']) _wrapped_delete_share_server() LOG.info( "Share server '%s' has been deleted successfully.", share_server['id']) self.driver.deallocate_network(context, share_server['id']) @add_hooks @utils.require_driver_initialized def extend_share(self, context, share_id, new_size, reservations): context = context.elevated() share = self.db.share_get(context, share_id) share_instance = self._get_share_instance(context, share) share_server = self._get_share_server(context, share_instance) project_id = share['project_id'] user_id = share['user_id'] self._notify_about_share_usage(context, share, share_instance, "extend.start") try: self.driver.extend_share( share_instance, new_size, share_server=share_server) except Exception as e: LOG.exception("Extend share failed.", resource=share) self.message_api.create( context, message_field.Action.EXTEND, project_id, resource_type=message_field.Resource.SHARE, resource_id=share_id, detail=message_field.Detail.DRIVER_FAILED_EXTEND) try: self.db.share_update( context, share['id'], {'status': constants.STATUS_EXTENDING_ERROR} ) raise exception.ShareExtendingError( reason=six.text_type(e), share_id=share_id) finally: QUOTAS.rollback( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_instance['share_type_id'], ) # we give the user_id of the share, to update the quota usage # for the user, who created the share, because on share delete # only this quota will be decreased QUOTAS.commit( context, reservations, project_id=project_id, 
user_id=user_id, share_type_id=share_instance['share_type_id'], ) share_update = { 'size': int(new_size), # NOTE(u_glide): translation to lower case should be removed in # a row with usage of upper case of share statuses in all places 'status': constants.STATUS_AVAILABLE.lower() } share = self.db.share_update(context, share['id'], share_update) LOG.info("Extend share completed successfully.", resource=share) self._notify_about_share_usage(context, share, share_instance, "extend.end") @add_hooks @utils.require_driver_initialized def shrink_share(self, context, share_id, new_size): context = context.elevated() share = self.db.share_get(context, share_id) share_instance = self._get_share_instance(context, share) share_server = self._get_share_server(context, share_instance) project_id = share['project_id'] user_id = share['user_id'] new_size = int(new_size) replicas = self.db.share_replicas_get_all_by_share( context, share['id']) supports_replication = len(replicas) > 0 self._notify_about_share_usage(context, share, share_instance, "shrink.start") def error_occurred(exc, msg, status=constants.STATUS_SHRINKING_ERROR): LOG.exception(msg, resource=share) self.db.share_update(context, share['id'], {'status': status}) raise exception.ShareShrinkingError( reason=six.text_type(exc), share_id=share_id) reservations = None try: size_decrease = int(share['size']) - new_size # we give the user_id of the share, to update the quota usage # for the user, who created the share, because on share delete # only this quota will be decreased deltas = { 'project_id': project_id, 'user_id': user_id, 'share_type_id': share_instance['share_type_id'], 'gigabytes': -size_decrease, } # NOTE(carloss): if the share supports replication we need # to query all its replicas and calculate the final size to # deallocate (amount of replicas * size to decrease). if supports_replication: replica_gigs_to_deallocate = len(replicas) * size_decrease deltas.update( {'replica_gigabytes': -replica_gigs_to_deallocate}) reservations = QUOTAS.reserve(context, **deltas) except Exception as e: error_occurred( e, ("Failed to update quota on share shrinking.")) try: self.driver.shrink_share( share_instance, new_size, share_server=share_server) # NOTE(u_glide): Replace following except block by error notification # when Manila has such mechanism. It's possible because drivers # shouldn't shrink share when this validation error occurs. 
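# --- Illustrative sketch (not part of the upstream file) ---------------------
# The NOTE above describes the driver-side contract: a driver should refuse to
# shrink a share below its consumed size by raising
# ShareShrinkingPossibleDataLoss, which the manager handles in the except
# block below by keeping the share available and rolling back the quota
# reservation. A minimal, hypothetical driver method is sketched here;
# _get_consumed_gb() and _resize_backend_volume() are assumed helpers that a
# real driver would implement against its backend.
from manila import exception


class ExampleShrinkMixin(object):
    def shrink_share(self, share_instance, new_size, share_server=None):
        consumed_gb = self._get_consumed_gb(share_instance)  # assumed helper
        if new_size < consumed_gb:
            # Refusing to shrink keeps the share available at its old size.
            raise exception.ShareShrinkingPossibleDataLoss(
                share_id=share_instance['share_id'])
        self._resize_backend_volume(share_instance, new_size)  # assumed helper
# ------------------------------------------------------------------------------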
except Exception as e: if isinstance(e, exception.ShareShrinkingPossibleDataLoss): msg = ("Shrink share failed due to possible data loss.") status = constants.STATUS_AVAILABLE error_params = {'msg': msg, 'status': status} self.message_api.create( context, message_field.Action.SHRINK, share['project_id'], resource_type=message_field.Resource.SHARE, resource_id=share_id, detail=message_field.Detail.DRIVER_REFUSED_SHRINK) else: error_params = {'msg': ("Shrink share failed.")} try: error_occurred(e, **error_params) finally: QUOTAS.rollback( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_instance['share_type_id'], ) QUOTAS.commit( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_instance['share_type_id'], ) share_update = { 'size': new_size, 'status': constants.STATUS_AVAILABLE } share = self.db.share_update(context, share['id'], share_update) LOG.info("Shrink share completed successfully.", resource=share) self._notify_about_share_usage(context, share, share_instance, "shrink.end") @utils.require_driver_initialized def create_share_group(self, context, share_group_id): context = context.elevated() share_group_ref = self.db.share_group_get(context, share_group_id) share_group_ref['host'] = self.host shares = self.db.share_instances_get_all_by_share_group_id( context, share_group_id) source_share_group_snapshot_id = share_group_ref.get( "source_share_group_snapshot_id") snap_ref = None parent_share_server_id = None if source_share_group_snapshot_id: snap_ref = self.db.share_group_snapshot_get( context, source_share_group_snapshot_id) for member in snap_ref['share_group_snapshot_members']: member['share'] = self.db.share_instance_get( context, member['share_instance_id'], with_share_data=True) if 'share_group' in snap_ref: parent_share_server_id = snap_ref['share_group'][ 'share_server_id'] status = constants.STATUS_AVAILABLE share_network_id = share_group_ref.get('share_network_id') share_server = None if parent_share_server_id and self.driver.driver_handles_share_servers: share_server = self.db.share_server_get(context, parent_share_server_id) share_network_subnet = share_server['share_network_subnet'] share_network_id = share_network_subnet['share_network_id'] if share_network_id and not self.driver.driver_handles_share_servers: self.db.share_group_update( context, share_group_id, {'status': constants.STATUS_ERROR}) msg = _("Driver does not expect share-network to be provided " "with current configuration.") raise exception.InvalidInput(reason=msg) if not share_server and share_network_id: availability_zone_id = self._get_az_for_share_group( context, share_group_ref) subnet = self.db.share_network_subnet_get_by_availability_zone_id( context, share_network_id, availability_zone_id) try: share_server, share_group_ref = ( self._provide_share_server_for_share_group( context, share_network_id, subnet.get('id'), share_group_ref, share_group_snapshot=snap_ref, ) ) except Exception: with excutils.save_and_reraise_exception(): LOG.error("Failed to get share server" " for share group creation.") self.db.share_group_update( context, share_group_id, {'status': constants.STATUS_ERROR}) self.message_api.create( context, message_field.Action.CREATE, share_group_ref['project_id'], resource_type=message_field.Resource.SHARE_GROUP, resource_id=share_group_id, detail=message_field.Detail.NO_SHARE_SERVER) try: # TODO(ameade): Add notification for create.start LOG.info("Share group %s: creating", share_group_id) model_update, share_update_list = 
None, None share_group_ref['shares'] = shares if snap_ref: model_update, share_update_list = ( self.driver.create_share_group_from_share_group_snapshot( context, share_group_ref, snap_ref, share_server=share_server)) else: model_update = self.driver.create_share_group( context, share_group_ref, share_server=share_server) if model_update: share_group_ref = self.db.share_group_update( context, share_group_ref['id'], model_update) if share_update_list: for share in share_update_list: values = copy.deepcopy(share) # NOTE(dviroel): To keep backward compatibility we can't # keep 'status' as a mandatory parameter. We'll set its # value to 'available' as default. i_status = values.get('status', constants.STATUS_AVAILABLE) if i_status not in [ constants.STATUS_AVAILABLE, constants.STATUS_CREATING_FROM_SNAPSHOT]: msg = _( 'Driver returned an invalid status %s') % i_status raise exception.InvalidShareInstance(reason=msg) values['status'] = i_status values['progress'] = ( '100%' if i_status == constants.STATUS_AVAILABLE else '0%') values.pop('id') export_locations = values.pop('export_locations') self.db.share_instance_update(context, share['id'], values) self.db.share_export_locations_update(context, share['id'], export_locations) except Exception: with excutils.save_and_reraise_exception(): self.db.share_group_update( context, share_group_ref['id'], {'status': constants.STATUS_ERROR, 'availability_zone_id': self._get_az_for_share_group( context, share_group_ref), 'consistent_snapshot_support': self.driver._stats[ 'share_group_stats'].get( 'consistent_snapshot_support')}) for share in shares: self.db.share_instance_update( context, share['id'], {'status': constants.STATUS_ERROR}) LOG.error("Share group %s: create failed", share_group_id) now = timeutils.utcnow() for share in shares: self.db.share_instance_update( context, share['id'], {'status': constants.STATUS_AVAILABLE}) self.db.share_group_update( context, share_group_ref['id'], {'status': status, 'created_at': now, 'availability_zone_id': self._get_az_for_share_group( context, share_group_ref), 'consistent_snapshot_support': self.driver._stats[ 'share_group_stats'].get('consistent_snapshot_support')}) LOG.info("Share group %s: created successfully", share_group_id) # TODO(ameade): Add notification for create.end return share_group_ref['id'] def _get_az_for_share_group(self, context, share_group_ref): if not share_group_ref['availability_zone_id']: return self.db.availability_zone_get( context, self.availability_zone)['id'] return share_group_ref['availability_zone_id'] @utils.require_driver_initialized def delete_share_group(self, context, share_group_id): context = context.elevated() share_group_ref = self.db.share_group_get(context, share_group_id) share_group_ref['host'] = self.host share_group_ref['shares'] = ( self.db.share_instances_get_all_by_share_group_id( context, share_group_id)) # TODO(ameade): Add notification for delete.start try: LOG.info("Share group %s: deleting", share_group_id) share_server = None if share_group_ref.get('share_server_id'): share_server = self.db.share_server_get( context, share_group_ref['share_server_id']) model_update = self.driver.delete_share_group( context, share_group_ref, share_server=share_server) if model_update: share_group_ref = self.db.share_group_update( context, share_group_ref['id'], model_update) except Exception: with excutils.save_and_reraise_exception(): self.db.share_group_update( context, share_group_ref['id'], {'status': constants.STATUS_ERROR}) LOG.error("Share group %s: delete failed", 
share_group_ref['id']) self.db.share_group_destroy(context, share_group_id) LOG.info("Share group %s: deleted successfully", share_group_id) # TODO(ameade): Add notification for delete.end @utils.require_driver_initialized def create_share_group_snapshot(self, context, share_group_snapshot_id): context = context.elevated() snap_ref = self.db.share_group_snapshot_get( context, share_group_snapshot_id) for member in snap_ref['share_group_snapshot_members']: member['share'] = self.db.share_instance_get( context, member['share_instance_id'], with_share_data=True) status = constants.STATUS_AVAILABLE now = timeutils.utcnow() updated_members_ids = [] try: LOG.info("Share group snapshot %s: creating", share_group_snapshot_id) share_server = None if snap_ref['share_group'].get('share_server_id'): share_server = self.db.share_server_get( context, snap_ref['share_group']['share_server_id']) snapshot_update, member_update_list = ( self.driver.create_share_group_snapshot( context, snap_ref, share_server=share_server)) for update in (member_update_list or []): # NOTE(vponomaryov): we expect that each member is a dict # and has required 'id' key and some optional keys # to be updated such as 'provider_location'. It is planned # to have here also 'export_locations' when it is supported. member_id = update.pop('id', None) if not member_id: LOG.warning( "One of share group snapshot '%s' members does not " "have reference ID. Its update was skipped.", share_group_snapshot_id) continue # TODO(vponomaryov): remove following condition when # sgs members start supporting export locations. if 'export_locations' in update: LOG.debug( "Removing 'export_locations' data from " "share group snapshot member '%s' update because " "export locations are not supported.", member_id) update.pop('export_locations') db_update = { 'updated_at': now, 'status': update.pop('status', status) } if 'provider_location' in update: db_update['provider_location'] = ( update.pop('provider_location')) if 'size' in update: db_update['size'] = int(update.pop('size')) updated_members_ids.append(member_id) self.db.share_group_snapshot_member_update( context, member_id, db_update) if update: LOG.debug( "Share group snapshot ID='%(sgs_id)s', " "share group snapshot member ID='%(sgsm_id)s'. 
" "Following keys of sgs member were not updated " "as not allowed: %(keys)s.", {'sgs_id': share_group_snapshot_id, 'sgsm_id': member_id, 'keys': ', '.join(update)}) if snapshot_update: snap_ref = self.db.share_group_snapshot_update( context, snap_ref['id'], snapshot_update) except Exception: with excutils.save_and_reraise_exception(): self.db.share_group_snapshot_update( context, snap_ref['id'], {'status': constants.STATUS_ERROR}) LOG.error("Share group snapshot %s: create failed", share_group_snapshot_id) for member in (snap_ref.get('share_group_snapshot_members') or []): if member['id'] in updated_members_ids: continue update = {'status': status, 'updated_at': now} self.db.share_group_snapshot_member_update( context, member['id'], update) self.db.share_group_snapshot_update( context, snap_ref['id'], {'status': status, 'updated_at': now}) LOG.info("Share group snapshot %s: created successfully", share_group_snapshot_id) return snap_ref['id'] @utils.require_driver_initialized def delete_share_group_snapshot(self, context, share_group_snapshot_id): context = context.elevated() snap_ref = self.db.share_group_snapshot_get( context, share_group_snapshot_id) for member in snap_ref['share_group_snapshot_members']: member['share'] = self.db.share_instance_get( context, member['share_instance_id'], with_share_data=True) snapshot_update = False try: LOG.info("Share group snapshot %s: deleting", share_group_snapshot_id) share_server = None if snap_ref['share_group'].get('share_server_id'): share_server = self.db.share_server_get( context, snap_ref['share_group']['share_server_id']) snapshot_update, member_update_list = ( self.driver.delete_share_group_snapshot( context, snap_ref, share_server=share_server)) if member_update_list: snapshot_update = snapshot_update or {} snapshot_update['share_group_snapshot_members'] = [] for update in (member_update_list or []): snapshot_update['share_group_snapshot_members'].append(update) if snapshot_update: snap_ref = self.db.share_group_snapshot_update( context, snap_ref['id'], snapshot_update) except Exception: with excutils.save_and_reraise_exception(): self.db.share_group_snapshot_update( context, snap_ref['id'], {'status': constants.STATUS_ERROR}) LOG.error("Share group snapshot %s: delete failed", snap_ref['name']) self.db.share_group_snapshot_destroy(context, share_group_snapshot_id) LOG.info("Share group snapshot %s: deleted successfully", share_group_snapshot_id) def _get_share_server_dict(self, context, share_server): share_server_ref = { 'id': share_server.get('id'), 'project_id': share_server.get('project_id'), 'updated_at': share_server.get('updated_at'), 'status': share_server.get('status'), 'host': share_server.get('host'), 'share_network_name': share_server.get('share_network_name'), 'share_network_id': share_server.get('share_network_id'), 'created_at': share_server.get('created_at'), 'backend_details': share_server.get('backend_details'), 'share_network_subnet_id': share_server.get('share_network_subnet_id', None), 'is_auto_deletable': share_server.get('is_auto_deletable', None), 'identifier': share_server.get('identifier', None), 'network_allocations': share_server.get('network_allocations', None) } return share_server_ref def _get_export_location_dict(self, context, export_location): export_location_ref = { 'id': export_location.get('uuid'), 'path': export_location.get('path'), 'created_at': export_location.get('created_at'), 'updated_at': export_location.get('updated_at'), 'share_instance_id': export_location.get('share_instance_id', None), 
'is_admin_only': export_location.get('is_admin_only', None), } return export_location_ref def _get_share_instance_dict(self, context, share_instance): # TODO(gouthamr): remove method when the db layer returns primitives share_instance_ref = { 'id': share_instance.get('id'), 'name': share_instance.get('name'), 'share_id': share_instance.get('share_id'), 'host': share_instance.get('host'), 'status': share_instance.get('status'), 'replica_state': share_instance.get('replica_state'), 'availability_zone_id': share_instance.get('availability_zone_id'), 'share_network_id': share_instance.get('share_network_id'), 'share_server_id': share_instance.get('share_server_id'), 'deleted': share_instance.get('deleted'), 'terminated_at': share_instance.get('terminated_at'), 'launched_at': share_instance.get('launched_at'), 'scheduled_at': share_instance.get('scheduled_at'), 'updated_at': share_instance.get('updated_at'), 'deleted_at': share_instance.get('deleted_at'), 'created_at': share_instance.get('created_at'), 'share_server': self._get_share_server(context, share_instance), 'access_rules_status': share_instance.get('access_rules_status'), # Share details 'user_id': share_instance.get('user_id'), 'project_id': share_instance.get('project_id'), 'size': share_instance.get('size'), 'display_name': share_instance.get('display_name'), 'display_description': share_instance.get('display_description'), 'snapshot_id': share_instance.get('snapshot_id'), 'share_proto': share_instance.get('share_proto'), 'share_type_id': share_instance.get('share_type_id'), 'is_public': share_instance.get('is_public'), 'share_group_id': share_instance.get('share_group_id'), 'source_share_group_snapshot_member_id': share_instance.get( 'source_share_group_snapshot_member_id'), 'availability_zone': share_instance.get('availability_zone'), } if share_instance_ref['share_server']: share_instance_ref['share_server'] = self._get_share_server_dict( context, share_instance_ref['share_server'] ) share_instance_ref['export_locations'] = [ self._get_export_location_dict(context, el) for el in share_instance.get('export_locations') or [] ] return share_instance_ref def _get_snapshot_instance_dict(self, context, snapshot_instance, snapshot=None): # TODO(gouthamr): remove method when the db layer returns primitives snapshot_instance_ref = { 'name': snapshot_instance.get('name'), 'share_id': snapshot_instance.get('share_id'), 'share_name': snapshot_instance.get('share_name'), 'status': snapshot_instance.get('status'), 'id': snapshot_instance.get('id'), 'deleted': snapshot_instance.get('deleted') or False, 'created_at': snapshot_instance.get('created_at'), 'share': snapshot_instance.get('share'), 'updated_at': snapshot_instance.get('updated_at'), 'share_instance_id': snapshot_instance.get('share_instance_id'), 'snapshot_id': snapshot_instance.get('snapshot_id'), 'progress': snapshot_instance.get('progress'), 'deleted_at': snapshot_instance.get('deleted_at'), 'provider_location': snapshot_instance.get('provider_location'), } if snapshot: snapshot_instance_ref.update({ 'size': snapshot.get('size'), }) return snapshot_instance_ref def snapshot_update_access(self, context, snapshot_instance_id): snapshot_instance = self.db.share_snapshot_instance_get( context, snapshot_instance_id, with_share_data=True) share_server = self._get_share_server( context, snapshot_instance['share_instance']) self.snapshot_access_helper.update_access_rules( context, snapshot_instance['id'], share_server=share_server) def _notify_about_share_usage(self, context, share, 
share_instance, event_suffix, extra_usage_info=None): share_utils.notify_about_share_usage( context, share, share_instance, event_suffix, extra_usage_info=extra_usage_info, host=self.host) @periodic_task.periodic_task( spacing=CONF.share_usage_size_update_interval, enabled=CONF.enable_gathering_share_usage_size) @utils.require_driver_initialized def update_share_usage_size(self, context): """Invokes driver to gather usage size of shares.""" updated_share_instances = [] share_instances = self.db.share_instances_get_all_by_host( context, host=self.host, with_share_data=True) if share_instances: try: updated_share_instances = self.driver.update_share_usage_size( context, share_instances) except Exception: LOG.exception("Gather share usage size failure.") for si in updated_share_instances: share_instance = self._get_share_instance(context, si['id']) share = self.db.share_get(context, share_instance['share_id']) self._notify_about_share_usage( context, share, share_instance, "consumed.size", extra_usage_info={'used_size': si['used_size'], 'gathered_at': si['gathered_at']}) @periodic_task.periodic_task(spacing=CONF.periodic_interval) @utils.require_driver_initialized def periodic_share_status_update(self, context): """Invokes share driver to update shares status.""" LOG.debug("Updating status of share instances.") share_instances = self.db.share_instances_get_all_by_host( context, self.host, with_share_data=True, status=constants.STATUS_CREATING_FROM_SNAPSHOT) for si in share_instances: si_dict = self._get_share_instance_dict(context, si) self._update_share_status(context, si_dict) def _update_share_status(self, context, share_instance): share_server = self._get_share_server(context, share_instance) if share_server is not None: share_server = self._get_share_server_dict(context, share_server) try: data_updates = self.driver.get_share_status(share_instance, share_server) except Exception: LOG.exception( ("Unexpected driver error occurred while updating status for " "share instance %(id)s that belongs to share '%(share_id)s'"), {'id': share_instance['id'], 'share_id': share_instance['share_id']} ) data_updates = { 'status': constants.STATUS_ERROR } status = data_updates.get('status') if status == constants.STATUS_ERROR: msg = ("Status of share instance %(id)s that belongs to share " "%(share_id)s was updated to '%(status)s'." % {'id': share_instance['id'], 'share_id': share_instance['share_id'], 'status': status}) LOG.error(msg) self.db.share_instance_update( context, share_instance['id'], {'status': constants.STATUS_ERROR, 'progress': None}) self.message_api.create( context, message_field.Action.UPDATE, share_instance['project_id'], resource_type=message_field.Resource.SHARE, resource_id=share_instance['share_id'], detail=message_field.Detail.DRIVER_FAILED_CREATING_FROM_SNAP) return export_locations = data_updates.get('export_locations') progress = data_updates.get('progress') statuses_requiring_update = [ constants.STATUS_AVAILABLE, constants.STATUS_CREATING_FROM_SNAPSHOT] if status in statuses_requiring_update: si_updates = { 'status': status, } progress = ('100%' if status == constants.STATUS_AVAILABLE else progress) if progress is not None: si_updates.update({'progress': progress}) self.db.share_instance_update( context, share_instance['id'], si_updates) msg = ("Status of share instance %(id)s that belongs to share " "%(share_id)s was updated to '%(status)s'." 
% {'id': share_instance['id'], 'share_id': share_instance['share_id'], 'status': status}) LOG.debug(msg) if export_locations: self.db.share_export_locations_update( context, share_instance['id'], export_locations) manila-10.0.0/manila/share/hook.py0000664000175000017500000001334613656750227016754 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Module with hook interface for actions performed by share driver. All available hooks are placed in manila/share/hooks dir. Hooks are used by share services and can serve several use cases such as any kind of notification and performing additional backend-specific actions. """ import abc from oslo_config import cfg from oslo_log import log import six from manila import context as ctxt hook_options = [ cfg.BoolOpt( "enable_pre_hooks", default=False, help="Whether to enable pre hooks or not.", deprecated_group='DEFAULT'), cfg.BoolOpt( "enable_post_hooks", default=False, help="Whether to enable post hooks or not.", deprecated_group='DEFAULT'), cfg.BoolOpt( "enable_periodic_hooks", default=False, help="Whether to enable periodic hooks or not.", deprecated_group='DEFAULT'), cfg.BoolOpt( "suppress_pre_hooks_errors", default=False, help="Whether to suppress pre hook errors (allow driver perform " "actions) or not.", deprecated_group='DEFAULT'), cfg.BoolOpt( "suppress_post_hooks_errors", default=False, help="Whether to suppress post hook errors (allow driver's results " "to pass through) or not.", deprecated_group='DEFAULT'), cfg.FloatOpt( "periodic_hooks_interval", default=300.0, help="Interval in seconds between execution of periodic hooks. " "Used when option 'enable_periodic_hooks' is set to True. " "Default is 300.", deprecated_group='DEFAULT'), ] CONF = cfg.CONF CONF.register_opts(hook_options) LOG = log.getLogger(__name__) @six.add_metaclass(abc.ABCMeta) class HookBase(object): def get_config_option(self, key): if self.configuration: return self.configuration.safe_get(key) return CONF.get(key) def __init__(self, configuration, host): self.host = host self.configuration = configuration if self.configuration: self.configuration.append_config_values(hook_options) self.pre_hooks_enabled = self.get_config_option("enable_pre_hooks") self.post_hooks_enabled = self.get_config_option("enable_post_hooks") self.periodic_hooks_enabled = self.get_config_option( "enable_periodic_hooks") self.suppress_pre_hooks_errors = self.get_config_option( "suppress_pre_hooks_errors") self.suppress_post_hooks_errors = self.get_config_option( "suppress_post_hooks_errors") def execute_pre_hook(self, context=None, func_name=None, *args, **kwargs): """Hook called before driver's action.""" if not self.pre_hooks_enabled: return LOG.debug("Running 'pre hook'.") context = context or ctxt.get_admin_context() try: pre_data = self._execute_pre_hook( context=context, func_name=func_name, *args, **kwargs) except Exception as e: if self.suppress_pre_hooks_errors: LOG.warning("\nSuppressed exception in pre hook. 
%s\n", e) pre_data = e else: raise return pre_data def execute_post_hook(self, context=None, func_name=None, pre_hook_data=None, driver_action_results=None, *args, **kwargs): """Hook called after driver's action.""" if not self.post_hooks_enabled: return LOG.debug("Running 'post hook'.") context = context or ctxt.get_admin_context() try: post_data = self._execute_post_hook( context=context, func_name=func_name, pre_hook_data=pre_hook_data, driver_action_results=driver_action_results, *args, **kwargs) except Exception as e: if self.suppress_post_hooks_errors: LOG.warning( "\nSuppressed exception in post hook. %s\n", e) post_data = e else: raise return post_data def execute_periodic_hook(self, context, periodic_hook_data, *args, **kwargs): """Hook called on periodic basis.""" if not self.periodic_hooks_enabled: return LOG.debug("Running 'periodic hook'.") context = context or ctxt.get_admin_context() return self._execute_periodic_hook( context, periodic_hook_data, *args, **kwargs) @abc.abstractmethod def _execute_pre_hook(self, context, func_name, *args, **kwargs): """Redefine this method for pre hook action.""" @abc.abstractmethod def _execute_post_hook(self, context, func_name, pre_hook_data, driver_action_results, *args, **kwargs): """Redefine this method for post hook action.""" @abc.abstractmethod def _execute_periodic_hook(self, context, periodic_hook_data, *args, **kwargs): """Redefine this method for periodic hook action.""" manila-10.0.0/manila/share/configuration.py0000664000175000017500000000475413656750227020666 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2012 Rackspace Hosting # Copyright (c) 2013 NetApp # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Configuration support for all drivers. This module allows support for setting configurations either from default or from a particular CONF group, to be able to set multiple configurations for a given set of values. For instance, two generic configurations can be set by naming them in groups as [generic1] share_backend_name=generic-backend-1 ... [generic2] share_backend_name=generic-backend-2 ... And the configuration group name will be passed in so that all calls to configuration.volume_group within that instance will be mapped to the proper named group. This class also ensures the implementation's configuration is grafted into the option group. This is due to the way cfg works. All cfg options must be defined and registered in the group in which they are used. """ from oslo_config import cfg CONF = cfg.CONF class Configuration(object): def __init__(self, share_opts, config_group=None): """Graft config values into config group. This takes care of grafting the implementation's config values into the config group. 
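# --- Illustrative sketch (not part of the upstream file) ---------------------
# A minimal example of how a backend might use this Configuration wrapper:
# driver options are registered under the backend's own config group and read
# back through safe_get(), which returns None instead of raising for unknown
# options. The option name, default value, and group name below are made up
# for the example.
from oslo_config import cfg

from manila.share.configuration import Configuration

example_opts = [
    cfg.StrOpt('example_option', default='example-value',
               help='Hypothetical backend-specific option.'),
]

backend_config = Configuration(example_opts, config_group='generic1')
value = backend_config.safe_get('example_option')  # -> 'example-value'
# ------------------------------------------------------------------------------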
""" self.config_group = config_group # set the local conf so that __call__'s know what to use if self.config_group: self._ensure_config_values(share_opts) self.local_conf = CONF._get(self.config_group) else: self.local_conf = CONF def _ensure_config_values(self, share_opts): CONF.register_opts(share_opts, group=self.config_group) def append_config_values(self, share_opts): self._ensure_config_values(share_opts) def safe_get(self, value): try: return self.__getattr__(value) except cfg.NoSuchOptError: return None def __getattr__(self, value): return getattr(self.local_conf, value) manila-10.0.0/manila/share/migration.py0000664000175000017500000001634713656750227020011 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hitachi Data Systems. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Helper class for Share Migration.""" import time from oslo_config import cfg from oslo_log import log from manila.common import constants from manila import exception from manila.i18n import _ from manila.share import api as share_api import manila.utils as utils LOG = log.getLogger(__name__) migration_opts = [ cfg.IntOpt( 'migration_wait_access_rules_timeout', default=180, help="Time to wait for access rules to be allowed/denied on backends " "when migrating shares using generic approach (seconds)."), cfg.IntOpt( 'migration_create_delete_share_timeout', default=300, help='Timeout for creating and deleting share instances ' 'when performing share migration (seconds).'), ] CONF = cfg.CONF CONF.register_opts(migration_opts) class ShareMigrationHelper(object): def __init__(self, context, db, share, access_helper): self.db = db self.share = share self.context = context self.access_helper = access_helper self.api = share_api.API() self.access_helper = access_helper self.migration_create_delete_share_timeout = ( CONF.migration_create_delete_share_timeout) self.migration_wait_access_rules_timeout = ( CONF.migration_wait_access_rules_timeout) def delete_instance_and_wait(self, share_instance): self.api.delete_instance(self.context, share_instance, True) # Wait for deletion. 
starttime = time.time() deadline = starttime + self.migration_create_delete_share_timeout tries = 0 instance = "Something not None" while instance is not None: try: instance = self.db.share_instance_get(self.context, share_instance['id']) tries += 1 now = time.time() if now > deadline: msg = _("Timeout trying to delete instance " "%s") % share_instance['id'] raise exception.ShareMigrationFailed(reason=msg) except exception.NotFound: instance = None else: # 1.414 = square-root of 2 time.sleep(1.414 ** tries) def create_instance_and_wait(self, share, dest_host, new_share_network_id, new_az_id, new_share_type_id): new_share_instance = self.api.create_instance( self.context, share, new_share_network_id, dest_host, new_az_id, share_type_id=new_share_type_id) # Wait for new_share_instance to become ready starttime = time.time() deadline = starttime + self.migration_create_delete_share_timeout new_share_instance = self.db.share_instance_get( self.context, new_share_instance['id'], with_share_data=True) tries = 0 while new_share_instance['status'] != constants.STATUS_AVAILABLE: tries += 1 now = time.time() if new_share_instance['status'] == constants.STATUS_ERROR: msg = _("Failed to create new share instance" " (from %(share_id)s) on " "destination host %(host_name)s") % { 'share_id': share['id'], 'host_name': dest_host} self.cleanup_new_instance(new_share_instance) raise exception.ShareMigrationFailed(reason=msg) elif now > deadline: msg = _("Timeout creating new share instance " "(from %(share_id)s) on " "destination host %(host_name)s") % { 'share_id': share['id'], 'host_name': dest_host} self.cleanup_new_instance(new_share_instance) raise exception.ShareMigrationFailed(reason=msg) else: # 1.414 = square-root of 2 time.sleep(1.414 ** tries) new_share_instance = self.db.share_instance_get( self.context, new_share_instance['id'], with_share_data=True) return new_share_instance # NOTE(ganso): Cleanup methods do not throw exceptions, since the # exceptions that should be thrown are the ones that call the cleanup def cleanup_new_instance(self, new_instance): try: self.delete_instance_and_wait(new_instance) except Exception: LOG.warning("Failed to cleanup new instance during generic" " migration for share %s.", self.share['id']) def cleanup_access_rules(self, share_instance, share_server): try: self.revert_access_rules(share_instance, share_server) except Exception: LOG.warning("Failed to cleanup access rules during generic" " migration for share %s.", self.share['id']) def revert_access_rules(self, share_instance, share_server): # Cast all rules to 'queued_to_apply' so that they can be re-applied. 
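# --- Illustrative sketch (not part of the upstream file) ---------------------
# The wait loops above (delete_instance_and_wait and create_instance_and_wait)
# poll the database until a deadline and sleep 1.414 ** tries seconds between
# attempts, i.e. a square-root-of-two exponential backoff. A stand-alone,
# generic version of that pattern, with a made-up ``check`` callable, could
# look like this:
import time


def poll_until(check, timeout, backoff_base=1.414):
    """Call ``check`` until it returns True or ``timeout`` seconds elapse."""
    deadline = time.time() + timeout
    tries = 0
    while not check():
        tries += 1
        if time.time() > deadline:
            raise RuntimeError('timed out waiting for condition')
        # Exponential backoff mirrors the helper methods above.
        time.sleep(backoff_base ** tries)
# ------------------------------------------------------------------------------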
updates = {'state': constants.ACCESS_STATE_QUEUED_TO_APPLY} self.access_helper.get_and_update_share_instance_access_rules( self.context, updates=updates, share_instance_id=share_instance['id']) self.access_helper.update_access_rules( self.context, share_instance['id'], share_server=share_server) utils.wait_for_access_update( self.context, self.db, share_instance, self.migration_wait_access_rules_timeout) def apply_new_access_rules(self, new_share_instance): rules = self.db.share_instance_access_copy( self.context, self.share['id'], new_share_instance['id']) if rules: self.api.allow_access_to_instance(self.context, new_share_instance) utils.wait_for_access_update( self.context, self.db, new_share_instance, self.migration_wait_access_rules_timeout) else: LOG.debug("No access rules to sync to destination share instance.") @utils.retry(exception.ShareServerNotReady, retries=8) def wait_for_share_server(self, share_server_id): share_server = self.db.share_server_get(self.context, share_server_id) if share_server['status'] == constants.STATUS_ERROR: raise exception.ShareServerNotCreated( share_server_id=share_server_id) elif share_server['status'] == constants.STATUS_ACTIVE: return share_server else: raise exception.ShareServerNotReady( share_server_id=share_server_id, time=511, state=constants.STATUS_AVAILABLE) manila-10.0.0/manila/share/rpcapi.py0000664000175000017500000004242413656750227017271 0ustar zuulzuul00000000000000# Copyright 2012, Intel, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Client side of the share RPC API. """ from oslo_config import cfg import oslo_messaging as messaging from oslo_serialization import jsonutils from manila import rpc from manila.share import utils CONF = cfg.CONF class ShareAPI(object): """Client side of the share rpc API. API version history: 1.0 - Initial version. 
1.1 - Add manage_share() and unmanage_share() methods 1.2 - Add extend_share() method 1.3 - Add shrink_share() method 1.4 - Introduce Share Instances: create_share() -> create_share_instance() delete_share() -> delete_share_instance() Add share_instance argument to allow_access() & deny_access() 1.5 - Add create_consistency_group, delete_consistency_group create_cgsnapshot, and delete_cgsnapshot methods 1.6 - Introduce Share migration: migrate_share() get_migration_info() get_driver_migration_info() 1.7 - Update target call API in allow/deny access methods (Removed in 1.14) 1.8 - Introduce Share Replication: create_share_replica() delete_share_replica() promote_share_replica() update_share_replica() 1.9 - Add manage_snapshot() and unmanage_snapshot() methods 1.10 - Add migration_complete(), migration_cancel() and migration_get_progress(), rename migrate_share() to migration_start(), rename get_migration_info() to migration_get_info(), rename get_driver_migration_info() to migration_get_driver_info() 1.11 - Add create_replicated_snapshot() and delete_replicated_snapshot() methods 1.12 - Add provide_share_server(), create_share_server() and migration_driver_recovery(), remove migration_get_driver_info(), update migration_cancel(), migration_complete() and migration_get_progress method signature, rename migration_get_info() to connection_get_info() 1.13 - Introduce share revert to snapshot: revert_to_snapshot() 1.14 - Add update_access() and remove allow_access() and deny_access(). 1.15 - Updated migration_start() method with new parameter "preserve_snapshots" 1.16 - Convert create_consistency_group, delete_consistency_group create_cgsnapshot, and delete_cgsnapshot methods to create_share_group, delete_share_group create_share_group_snapshot, and delete_share_group_snapshot 1.17 - Add snapshot_update_access() 1.18 - Remove unused "share_id" parameter from revert_to_snapshot() 1.19 - Add manage_share_server() and unmanage_share_server() """ BASE_RPC_API_VERSION = '1.0' def __init__(self, topic=None): super(ShareAPI, self).__init__() target = messaging.Target(topic=CONF.share_topic, version=self.BASE_RPC_API_VERSION) self.client = rpc.get_client(target, version_cap='1.19') def create_share_instance(self, context, share_instance, host, request_spec, filter_properties, snapshot_id=None): new_host = utils.extract_host(host) call_context = self.client.prepare(server=new_host, version='1.4') request_spec_p = jsonutils.to_primitive(request_spec) call_context.cast(context, 'create_share_instance', share_instance_id=share_instance['id'], request_spec=request_spec_p, filter_properties=filter_properties, snapshot_id=snapshot_id) def manage_share(self, context, share, driver_options=None): host = utils.extract_host(share['instance']['host']) call_context = self.client.prepare(server=host, version='1.1') call_context.cast(context, 'manage_share', share_id=share['id'], driver_options=driver_options) def unmanage_share(self, context, share): host = utils.extract_host(share['instance']['host']) call_context = self.client.prepare(server=host, version='1.1') call_context.cast(context, 'unmanage_share', share_id=share['id']) def manage_snapshot(self, context, snapshot, host, driver_options=None): new_host = utils.extract_host(host) call_context = self.client.prepare(server=new_host, version='1.9') call_context.cast(context, 'manage_snapshot', snapshot_id=snapshot['id'], driver_options=driver_options) def unmanage_snapshot(self, context, snapshot, host): new_host = utils.extract_host(host) call_context = 
self.client.prepare(server=new_host, version='1.9') call_context.cast(context, 'unmanage_snapshot', snapshot_id=snapshot['id']) def manage_share_server( self, context, share_server, identifier, driver_opts): host = utils.extract_host(share_server['host']) call_context = self.client.prepare(server=host, version='1.19') call_context.cast(context, 'manage_share_server', share_server_id=share_server['id'], identifier=identifier, driver_opts=driver_opts) def unmanage_share_server(self, context, share_server, force=False): host = utils.extract_host(share_server['host']) call_context = self.client.prepare(server=host, version='1.19') call_context.cast(context, 'unmanage_share_server', share_server_id=share_server['id'], force=force) def revert_to_snapshot(self, context, share, snapshot, host, reservations): host = utils.extract_host(host) call_context = self.client.prepare(server=host, version='1.18') call_context.cast(context, 'revert_to_snapshot', snapshot_id=snapshot['id'], reservations=reservations) def delete_share_instance(self, context, share_instance, force=False): host = utils.extract_host(share_instance['host']) call_context = self.client.prepare(server=host, version='1.4') call_context.cast(context, 'delete_share_instance', share_instance_id=share_instance['id'], force=force) def migration_start(self, context, share, dest_host, force_host_assisted_migration, preserve_metadata, writable, nondisruptive, preserve_snapshots, new_share_network_id, new_share_type_id): new_host = utils.extract_host(share['instance']['host']) call_context = self.client.prepare(server=new_host, version='1.15') call_context.cast( context, 'migration_start', share_id=share['id'], dest_host=dest_host, force_host_assisted_migration=force_host_assisted_migration, preserve_metadata=preserve_metadata, writable=writable, nondisruptive=nondisruptive, preserve_snapshots=preserve_snapshots, new_share_network_id=new_share_network_id, new_share_type_id=new_share_type_id) def connection_get_info(self, context, share_instance): new_host = utils.extract_host(share_instance['host']) call_context = self.client.prepare(server=new_host, version='1.12') return call_context.call(context, 'connection_get_info', share_instance_id=share_instance['id']) def delete_share_server(self, context, share_server): host = utils.extract_host(share_server['host']) call_context = self.client.prepare(server=host, version='1.0') call_context.cast(context, 'delete_share_server', share_server=share_server) def create_snapshot(self, context, share, snapshot): host = utils.extract_host(share['instance']['host']) call_context = self.client.prepare(server=host) call_context.cast(context, 'create_snapshot', share_id=share['id'], snapshot_id=snapshot['id']) def delete_snapshot(self, context, snapshot, host, force=False): new_host = utils.extract_host(host) call_context = self.client.prepare(server=new_host) call_context.cast(context, 'delete_snapshot', snapshot_id=snapshot['id'], force=force) def create_replicated_snapshot(self, context, share, replicated_snapshot): host = utils.extract_host(share['instance']['host']) call_context = self.client.prepare(server=host, version='1.11') call_context.cast(context, 'create_replicated_snapshot', snapshot_id=replicated_snapshot['id'], share_id=share['id']) def delete_replicated_snapshot(self, context, replicated_snapshot, host, share_id=None, force=False): host = utils.extract_host(host) call_context = self.client.prepare(server=host, version='1.11') call_context.cast(context, 'delete_replicated_snapshot', 
snapshot_id=replicated_snapshot['id'], share_id=share_id, force=force) def update_access(self, context, share_instance): host = utils.extract_host(share_instance['host']) call_context = self.client.prepare(server=host, version='1.14') call_context.cast(context, 'update_access', share_instance_id=share_instance['id']) def publish_service_capabilities(self, context): call_context = self.client.prepare(fanout=True, version='1.0') call_context.cast(context, 'publish_service_capabilities') def extend_share(self, context, share, new_size, reservations): host = utils.extract_host(share['instance']['host']) call_context = self.client.prepare(server=host, version='1.2') call_context.cast(context, 'extend_share', share_id=share['id'], new_size=new_size, reservations=reservations) def shrink_share(self, context, share, new_size): host = utils.extract_host(share['instance']['host']) call_context = self.client.prepare(server=host, version='1.3') call_context.cast(context, 'shrink_share', share_id=share['id'], new_size=new_size) def create_share_group(self, context, share_group, host): new_host = utils.extract_host(host) call_context = self.client.prepare(server=new_host, version='1.16') call_context.cast( context, 'create_share_group', share_group_id=share_group['id']) def delete_share_group(self, context, share_group): new_host = utils.extract_host(share_group['host']) call_context = self.client.prepare(server=new_host, version='1.16') call_context.cast( context, 'delete_share_group', share_group_id=share_group['id']) def create_share_group_snapshot(self, context, share_group_snapshot, host): new_host = utils.extract_host(host) call_context = self.client.prepare(server=new_host, version='1.16') call_context.cast( context, 'create_share_group_snapshot', share_group_snapshot_id=share_group_snapshot['id']) def delete_share_group_snapshot(self, context, share_group_snapshot, host): new_host = utils.extract_host(host) call_context = self.client.prepare(server=new_host, version='1.16') call_context.cast( context, 'delete_share_group_snapshot', share_group_snapshot_id=share_group_snapshot['id']) def create_share_replica(self, context, share_replica, host, request_spec, filter_properties): new_host = utils.extract_host(host) call_context = self.client.prepare(server=new_host, version='1.8') request_spec_p = jsonutils.to_primitive(request_spec) call_context.cast(context, 'create_share_replica', share_replica_id=share_replica['id'], request_spec=request_spec_p, filter_properties=filter_properties, share_id=share_replica['share_id']) def delete_share_replica(self, context, share_replica, force=False): host = utils.extract_host(share_replica['host']) call_context = self.client.prepare(server=host, version='1.8') call_context.cast(context, 'delete_share_replica', share_replica_id=share_replica['id'], share_id=share_replica['share_id'], force=force) def promote_share_replica(self, context, share_replica): host = utils.extract_host(share_replica['host']) call_context = self.client.prepare(server=host, version='1.8') call_context.cast(context, 'promote_share_replica', share_replica_id=share_replica['id'], share_id=share_replica['share_id']) def update_share_replica(self, context, share_replica): host = utils.extract_host(share_replica['host']) call_context = self.client.prepare(server=host, version='1.8') call_context.cast(context, 'update_share_replica', share_replica_id=share_replica['id'], share_id=share_replica['share_id']) def migration_complete(self, context, src_share_instance, dest_instance_id): new_host = 
utils.extract_host(src_share_instance['host']) call_context = self.client.prepare(server=new_host, version='1.12') call_context.cast(context, 'migration_complete', src_instance_id=src_share_instance['id'], dest_instance_id=dest_instance_id) def migration_cancel(self, context, src_share_instance, dest_instance_id): new_host = utils.extract_host(src_share_instance['host']) call_context = self.client.prepare(server=new_host, version='1.12') call_context.cast(context, 'migration_cancel', src_instance_id=src_share_instance['id'], dest_instance_id=dest_instance_id) def migration_get_progress(self, context, src_share_instance, dest_instance_id): new_host = utils.extract_host(src_share_instance['host']) call_context = self.client.prepare(server=new_host, version='1.12') return call_context.call(context, 'migration_get_progress', src_instance_id=src_share_instance['id'], dest_instance_id=dest_instance_id) def provide_share_server(self, context, share_instance, share_network_id, snapshot_id=None): new_host = utils.extract_host(share_instance['host']) call_context = self.client.prepare(server=new_host, version='1.12') return call_context.call(context, 'provide_share_server', share_instance_id=share_instance['id'], share_network_id=share_network_id, snapshot_id=snapshot_id) def create_share_server(self, context, share_instance, share_server_id): new_host = utils.extract_host(share_instance['host']) call_context = self.client.prepare(server=new_host, version='1.12') call_context.cast(context, 'create_share_server', share_server_id=share_server_id) def snapshot_update_access(self, context, snapshot_instance): host = utils.extract_host(snapshot_instance['share_instance']['host']) call_context = self.client.prepare(server=host, version='1.17') call_context.cast(context, 'snapshot_update_access', snapshot_instance_id=snapshot_instance['id']) manila-10.0.0/manila/network/0000775000175000017500000000000013656750362016022 5ustar zuulzuul00000000000000manila-10.0.0/manila/network/__init__.py0000664000175000017500000001230413656750227020133 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import abc from oslo_config import cfg from oslo_utils import importutils import six from manila.db import base as db_base from manila import exception from manila.i18n import _ network_opts = [ cfg.StrOpt( 'network_api_class', default='manila.network.neutron.' 'neutron_network_plugin.NeutronNetworkPlugin', deprecated_group='DEFAULT', help='The full class name of the Networking API class to use.'), ] network_base_opts = [ cfg.BoolOpt( 'network_plugin_ipv4_enabled', default=True, help="Whether to support IPv4 network resource, Default=True."), cfg.BoolOpt( 'network_plugin_ipv6_enabled', default=False, help="Whether to support IPv6 network resource, Default=False. 
" "If this option is True, the value of " "'network_plugin_ipv4_enabled' will be ignored."), ] CONF = cfg.CONF def API(config_group_name=None, label='user'): """Selects class and config group of network plugin. :param config_group_name: name of config group to be used for registration of networking opts. :returns: instance of networking plugin class """ CONF.register_opts(network_opts, group=config_group_name) if config_group_name: network_api_class = getattr(CONF, config_group_name).network_api_class else: network_api_class = CONF.network_api_class cls = importutils.import_class(network_api_class) return cls(config_group_name=config_group_name, label=label) @six.add_metaclass(abc.ABCMeta) class NetworkBaseAPI(db_base.Base): """User network plugin for setting up main net interfaces.""" def __init__(self, config_group_name=None, db_driver=None): if config_group_name: CONF.register_opts(network_base_opts, group=config_group_name) else: CONF.register_opts(network_base_opts) self.configuration = getattr(CONF, six.text_type(config_group_name), CONF) super(NetworkBaseAPI, self).__init__(db_driver=db_driver) def _verify_share_network(self, share_server_id, share_network): if share_network is None: msg = _("'Share network' is not provided for setting up " "network interfaces for 'Share server' " "'%s'.") % share_server_id raise exception.NetworkBadConfigurationException(reason=msg) def _verify_share_network_subnet(self, share_server_id, share_network_subnet): if share_network_subnet is None: msg = _("'Share network subnet' is not provided for setting up " "network interfaces for 'Share server' " "'%s'.") % share_server_id raise exception.NetworkBadConfigurationException(reason=msg) def update_network_allocation(self, context, share_server): """Update network allocation. Optional method to be called by the manager after share server creation which can be overloaded in case the port state has to be updated. :param context: RequestContext object :param share_server: share server object :return: list of updated ports or None if nothing was updated """ @abc.abstractmethod def allocate_network(self, context, share_server, share_network=None, share_network_subnet=None, **kwargs): pass @abc.abstractmethod def deallocate_network(self, context, share_server_id): pass @abc.abstractmethod def manage_network_allocations( self, context, allocations, share_server, share_network=None, share_network_subnet=None): pass @abc.abstractmethod def unmanage_network_allocations(self, context, share_server_id): pass @property def enabled_ip_versions(self): if not hasattr(self, '_enabled_ip_versions'): self._enabled_ip_versions = set() if self.configuration.network_plugin_ipv6_enabled: self._enabled_ip_versions.add(6) if self.configuration.network_plugin_ipv4_enabled: self._enabled_ip_versions.add(4) if not self._enabled_ip_versions: msg = _("Either 'network_plugin_ipv4_enabled' or " "'network_plugin_ipv6_enabled' " "should be configured to 'True'.") raise exception.NetworkBadConfigurationException(reason=msg) return self._enabled_ip_versions manila-10.0.0/manila/network/standalone_network_plugin.py0000664000175000017500000004004713656750227023660 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import netaddr from oslo_config import cfg from oslo_log import log import six from manila.common import constants from manila import exception from manila.i18n import _ from manila import network from manila import utils standalone_network_plugin_opts = [ cfg.StrOpt( 'standalone_network_plugin_gateway', help="Gateway address that should be used. Required.", deprecated_group='DEFAULT'), cfg.StrOpt( 'standalone_network_plugin_mask', help="Network mask that will be used. Can be either decimal " "like '24' or binary like '255.255.255.0'. Required.", deprecated_group='DEFAULT'), cfg.StrOpt( 'standalone_network_plugin_network_type', help="Network type, such as 'flat', 'vlan', 'vxlan' or 'gre'. " "Empty value is alias for 'flat'. " "It will be assigned to share-network and share drivers will be " "able to use this for network interfaces within provisioned " "share servers. Optional.", choices=['flat', 'vlan', 'vxlan', 'gre'], deprecated_group='DEFAULT'), cfg.IntOpt( 'standalone_network_plugin_segmentation_id', help="Set it if network has segmentation (VLAN, VXLAN, etc...). " "It will be assigned to share-network and share drivers will be " "able to use this for network interfaces within provisioned " "share servers. Optional. Example: 1001", deprecated_group='DEFAULT'), cfg.ListOpt( 'standalone_network_plugin_allowed_ip_ranges', help="Can be IP address, range of IP addresses or list of addresses " "or ranges. Contains addresses from IP network that are allowed " "to be used. If empty, then will be assumed that all host " "addresses from network can be used. Optional. " "Examples: 10.0.0.10 or 10.0.0.10-10.0.0.20 or " "10.0.0.10-10.0.0.20,10.0.0.30-10.0.0.40,10.0.0.50", deprecated_group='DEFAULT'), cfg.IntOpt( 'standalone_network_plugin_mtu', default=1500, help="Maximum Transmission Unit (MTU) value of the network. Default " "value is 1500.", deprecated_group='DEFAULT'), ] CONF = cfg.CONF LOG = log.getLogger(__name__) class StandaloneNetworkPlugin(network.NetworkBaseAPI): """Standalone network plugin for share drivers. This network plugin can be used with any network platform. It can serve flat networks as well as segmented. It does not require some specific network services in OpenStack like the Neutron plugin. The only thing that plugin does is reservation and release of IP addresses from some network. 
""" def __init__(self, config_group_name=None, db_driver=None, label='user'): self.config_group_name = config_group_name or 'DEFAULT' super(StandaloneNetworkPlugin, self).__init__(config_group_name=self.config_group_name, db_driver=db_driver) CONF.register_opts( standalone_network_plugin_opts, group=self.config_group_name) self.configuration = getattr(CONF, self.config_group_name, CONF) self._set_persistent_network_data() self._label = label LOG.debug( "\nStandalone network plugin data for config group " "'%(config_group)s': \n" "IP version - %(ip_version)s\n" "Used network - %(net)s\n" "Used gateway - %(gateway)s\n" "Used network type - %(network_type)s\n" "Used segmentation ID - %(segmentation_id)s\n" "Allowed CIDRs - %(cidrs)s\n" "Original allowed IP ranges - %(ip_ranges)s\n" "Reserved IP addresses - %(reserved)s\n", dict( config_group=self.config_group_name, ip_version=self.ip_version, net=six.text_type(self.net), gateway=self.gateway, network_type=self.network_type, segmentation_id=self.segmentation_id, cidrs=self.allowed_cidrs, ip_ranges=self.allowed_ip_ranges, reserved=self.reserved_addresses)) @property def label(self): return self._label def _set_persistent_network_data(self): """Sets persistent data for whole plugin.""" # NOTE(tommylikehu): Standalone plugin could only support # either IPv4 or IPv6, so if both network_plugin_ipv4_enabled # and network_plugin_ipv6_enabled are configured True # we would only support IPv6. ipv4_enabled = getattr(self.configuration, 'network_plugin_ipv4_enabled', None) ipv6_enabled = getattr(self.configuration, 'network_plugin_ipv6_enabled', None) if ipv4_enabled: ip_version = 4 if ipv6_enabled: ip_version = 6 if ipv4_enabled and ipv6_enabled: LOG.warning("Only IPv6 is enabled, although both " "'network_plugin_ipv4_enabled' and " "'network_plugin_ipv6_enabled' are " "configured True.") self.network_type = ( self.configuration.standalone_network_plugin_network_type) self.segmentation_id = ( self.configuration.standalone_network_plugin_segmentation_id) self.gateway = self.configuration.standalone_network_plugin_gateway self.mask = self.configuration.standalone_network_plugin_mask self.allowed_ip_ranges = ( self.configuration.standalone_network_plugin_allowed_ip_ranges) self.ip_version = ip_version self.net = self._get_network() self.allowed_cidrs = self._get_list_of_allowed_addresses() self.reserved_addresses = ( six.text_type(self.net.network), self.gateway, six.text_type(self.net.broadcast)) self.mtu = self.configuration.standalone_network_plugin_mtu def _get_network(self): """Returns IPNetwork object calculated from gateway and netmask.""" if not isinstance(self.gateway, six.string_types): raise exception.NetworkBadConfigurationException( _("Configuration option 'standalone_network_plugin_gateway' " "is required and has improper value '%s'.") % self.gateway) if not isinstance(self.mask, six.string_types): raise exception.NetworkBadConfigurationException( _("Configuration option 'standalone_network_plugin_mask' is " "required and has improper value '%s'.") % self.mask) try: return netaddr.IPNetwork(self.gateway + '/' + self.mask) except netaddr.AddrFormatError as e: raise exception.NetworkBadConfigurationException( reason=e) def _get_list_of_allowed_addresses(self): """Returns list of CIDRs that can be used for getting IP addresses. Reads information provided via configuration, such as gateway, netmask, segmentation ID and allowed IP ranges, then performs validation of provided data. :returns: list of CIDRs as text types. 
:raises: exception.NetworkBadConfigurationException """ cidrs = [] if self.allowed_ip_ranges: for ip_range in self.allowed_ip_ranges: ip_range_start = ip_range_end = None if utils.is_valid_ip_address(ip_range, self.ip_version): ip_range_start = ip_range_end = ip_range elif '-' in ip_range: ip_range_list = ip_range.split('-') if len(ip_range_list) == 2: ip_range_start = ip_range_list[0] ip_range_end = ip_range_list[1] for ip in ip_range_list: utils.is_valid_ip_address(ip, self.ip_version) else: msg = _("Wrong value for IP range " "'%s' was provided.") % ip_range raise exception.NetworkBadConfigurationException( reason=msg) else: msg = _("Config option " "'standalone_network_plugin_allowed_ip_ranges' " "has incorrect value " "'%s'.") % self.allowed_ip_ranges raise exception.NetworkBadConfigurationException( reason=msg) range_instance = netaddr.IPRange(ip_range_start, ip_range_end) if range_instance not in self.net: data = dict( range=six.text_type(range_instance), net=six.text_type(self.net), gateway=self.gateway, netmask=self.net.netmask) msg = _("One of provided allowed IP ranges ('%(range)s') " "does not fit network '%(net)s' combined from " "gateway '%(gateway)s' and netmask " "'%(netmask)s'.") % data raise exception.NetworkBadConfigurationException( reason=msg) cidrs.extend( six.text_type(cidr) for cidr in range_instance.cidrs()) else: if self.net.version != self.ip_version: msg = _("Configured invalid IP version '%(conf_v)s', network " "has version ""'%(net_v)s'") % dict( conf_v=self.ip_version, net_v=self.net.version) raise exception.NetworkBadConfigurationException(reason=msg) cidrs.append(six.text_type(self.net)) return cidrs def _get_available_ips(self, context, amount): """Returns IP addresses from allowed IP range if there are unused IPs. :returns: IP addresses as list of text types :raises: exception.NetworkBadConfigurationException """ ips = [] if amount < 1: return ips iterator = netaddr.iter_unique_ips(*self.allowed_cidrs) for ip in iterator: ip = six.text_type(ip) if (ip in self.reserved_addresses or self.db.network_allocations_get_by_ip_address(context, ip)): continue else: ips.append(ip) if len(ips) == amount: return ips msg = _("No available IP addresses left in CIDRs %(cidrs)s. " "Requested amount of IPs to be provided '%(amount)s', " "available only '%(available)s'.") % { 'cidrs': self.allowed_cidrs, 'amount': amount, 'available': len(ips)} raise exception.NetworkBadConfigurationException(reason=msg) def _save_network_info(self, context, share_network_subnet): """Update share-network-subnet with plugin specific data.""" data = { 'network_type': self.network_type, 'segmentation_id': self.segmentation_id, 'cidr': six.text_type(self.net.cidr), 'gateway': six.text_type(self.gateway), 'ip_version': self.ip_version, 'mtu': self.mtu, } share_network_subnet.update(data) if self.label != 'admin': self.db.share_network_subnet_update( context, share_network_subnet['id'], data) @utils.synchronized( "allocate_network_for_standalone_network_plugin", external=True) def allocate_network(self, context, share_server, share_network=None, share_network_subnet=None, **kwargs): """Allocate network resources using one dedicated network. This one has interprocess lock to avoid concurrency in creation of share servers with same IP addresses using different share-networks. 
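        A hypothetical call (the variable names are placeholders for this
        sketch); the optional ``count`` kwarg requests two IP allocations
        instead of the default of one::

            allocations = plugin.allocate_network(
                context, share_server,
                share_network_subnet=share_network_subnet, count=2)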
""" allocation_count = kwargs.get('count', 1) if self.label != 'admin': self._verify_share_network(share_server['id'], share_network_subnet) else: share_network_subnet = share_network_subnet or {} self._save_network_info(context, share_network_subnet) allocations = [] ip_addresses = self._get_available_ips(context, allocation_count) for ip_address in ip_addresses: data = { 'share_server_id': share_server['id'], 'ip_address': ip_address, 'status': constants.STATUS_ACTIVE, 'label': self.label, 'network_type': share_network_subnet['network_type'], 'segmentation_id': share_network_subnet['segmentation_id'], 'cidr': share_network_subnet['cidr'], 'gateway': share_network_subnet['gateway'], 'ip_version': share_network_subnet['ip_version'], 'mtu': share_network_subnet['mtu'], } allocations.append( self.db.network_allocation_create(context, data)) return allocations def deallocate_network(self, context, share_server_id): """Deallocate network resources for share server.""" allocations = self.db.network_allocations_get_for_share_server( context, share_server_id) for allocation in allocations: self.db.network_allocation_delete(context, allocation['id']) def unmanage_network_allocations(self, context, share_server_id): self.deallocate_network(context, share_server_id) def manage_network_allocations(self, context, allocations, share_server, share_network=None, share_network_subnet=None): if self.label != 'admin': self._verify_share_network_subnet(share_server['id'], share_network_subnet) else: share_network_subnet = share_network_subnet or {} self._save_network_info(context, share_network_subnet) # We begin matching the allocations to known neutron ports and # finally return the non-consumed allocations remaining_allocations = list(allocations) ips = [netaddr.IPAddress(allocation) for allocation in remaining_allocations] cidrs = [netaddr.IPNetwork(cidr) for cidr in self.allowed_cidrs] selected_allocations = [] for ip in ips: if any(ip in cidr for cidr in cidrs): allocation = six.text_type(ip) selected_allocations.append(allocation) for allocation in selected_allocations: data = { 'share_server_id': share_server['id'], 'ip_address': allocation, 'status': constants.STATUS_ACTIVE, 'label': self.label, 'network_type': share_network_subnet['network_type'], 'segmentation_id': share_network_subnet['segmentation_id'], 'cidr': share_network_subnet['cidr'], 'gateway': share_network_subnet['gateway'], 'ip_version': share_network_subnet['ip_version'], 'mtu': share_network_subnet['mtu'], } self.db.network_allocation_create(context, data) remaining_allocations.remove(allocation) return remaining_allocations manila-10.0.0/manila/network/linux/0000775000175000017500000000000013656750362017161 5ustar zuulzuul00000000000000manila-10.0.0/manila/network/linux/__init__.py0000664000175000017500000000000013656750227021260 0ustar zuulzuul00000000000000manila-10.0.0/manila/network/linux/interface.py0000664000175000017500000002215213656750227021475 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import abc import netaddr from oslo_config import cfg from oslo_log import log import six from manila import exception from manila.i18n import _ from manila.network.linux import ip_lib from manila.network.linux import ovs_lib from manila import utils LOG = log.getLogger(__name__) OPTS = [ cfg.StrOpt('ovs_integration_bridge', default='br-int', help=_('Name of Open vSwitch bridge to use.')), ] CONF = cfg.CONF CONF.register_opts(OPTS) def device_name_synchronized(f): """Wraps methods with interprocess locks by device names.""" def wrapped_func(self, *args, **kwargs): device_name = "device_name_%s" % args[0] @utils.synchronized("linux_interface_%s" % device_name, external=True) def source_func(self, *args, **kwargs): return f(self, *args, **kwargs) return source_func(self, *args, **kwargs) return wrapped_func @six.add_metaclass(abc.ABCMeta) class LinuxInterfaceDriver(object): # from linux IF_NAMESIZE DEV_NAME_LEN = 14 DEV_NAME_PREFIX = 'tap' def __init__(self): self.conf = CONF @device_name_synchronized def init_l3(self, device_name, ip_cidrs, namespace=None, clear_cidrs=[]): """Set the L3 settings for the interface using data from the port. ip_cidrs: list of 'X.X.X.X/YY' strings """ device = ip_lib.IPDevice(device_name, namespace=namespace) for cidr in clear_cidrs: device.route.clear_outdated_routes(cidr) previous = {} for address in device.addr.list(scope='global', filters=['permanent']): previous[address['cidr']] = address['ip_version'] # add new addresses for ip_cidr in ip_cidrs: net = netaddr.IPNetwork(ip_cidr) if ip_cidr in previous: del previous[ip_cidr] continue device.addr.add(net.version, ip_cidr, str(net.broadcast)) # clean up any old addresses for ip_cidr, ip_version in previous.items(): device.addr.delete(ip_version, ip_cidr) # ensure that interface is first in the list device.route.pullup_route(device_name) # here we are checking for garbage devices from removed service port self._remove_outdated_interfaces(device) def _remove_outdated_interfaces(self, device): """Finds and removes unused network device.""" device_cidr_set = self._get_set_of_device_cidrs(device) for dev in ip_lib.IPWrapper().get_devices(): if dev.name != device.name and dev.name[:3] == device.name[:3]: cidr_set = self._get_set_of_device_cidrs(dev) if device_cidr_set & cidr_set: self.unplug(dev.name) def _get_set_of_device_cidrs(self, device): cidrs = set() addr_list = [] try: # NOTE(ganso): I could call ip_lib.device_exists here, but since # this is a concurrency problem, it would not fix the problem. 
addr_list = device.addr.list() except Exception as e: if 'does not exist' in six.text_type(e): LOG.warning( "Device %s does not exist anymore.", device.name) else: raise for addr in addr_list: if addr['ip_version'] == 4: cidrs.add(six.text_type(netaddr.IPNetwork(addr['cidr']).cidr)) return cidrs def check_bridge_exists(self, bridge): if not ip_lib.device_exists(bridge): raise exception.BridgeDoesNotExist(bridge=bridge) def get_device_name(self, port): return (self.DEV_NAME_PREFIX + port['id'])[:self.DEV_NAME_LEN] @abc.abstractmethod def plug(self, device_name, port_id, mac_address, bridge=None, namespace=None, prefix=None): """Plug in the interface.""" @abc.abstractmethod def unplug(self, device_name, bridge=None, namespace=None, prefix=None): """Unplug the interface.""" class NoopInterfaceDriver(LinuxInterfaceDriver): """Noop driver when manila-share is already connected to admin network""" def init_l3(self, device_name, ip_cidrs, namespace=None, clear_cidrs=[]): pass def plug(self, device_name, port_id, mac_address, bridge=None, namespace=None, prefix=None): pass def unplug(self, device_name, bridge=None, namespace=None, prefix=None): pass class OVSInterfaceDriver(LinuxInterfaceDriver): """Driver for creating an internal interface on an OVS bridge.""" DEV_NAME_PREFIX = 'tap' def _get_tap_name(self, dev_name): return dev_name def _ovs_add_port(self, bridge, device_name, port_id, mac_address, internal=True): cmd = ['ovs-vsctl', '--', '--may-exist', 'add-port', bridge, device_name] if internal: cmd += ['--', 'set', 'Interface', device_name, 'type=internal'] cmd += ['--', 'set', 'Interface', device_name, 'external-ids:iface-id=%s' % port_id, '--', 'set', 'Interface', device_name, 'external-ids:iface-status=active', '--', 'set', 'Interface', device_name, 'external-ids:attached-mac=%s' % mac_address] utils.execute(*cmd, run_as_root=True) @device_name_synchronized def plug(self, device_name, port_id, mac_address, bridge=None, namespace=None, prefix=None): """Plug in the interface.""" if not bridge: bridge = self.conf.ovs_integration_bridge self.check_bridge_exists(bridge) ip = ip_lib.IPWrapper() ns_dev = ip.device(device_name) if not ip_lib.device_exists(device_name, namespace=namespace): LOG.info("Device %s does not exist - creating ....", device_name) tap_name = self._get_tap_name(device_name) self._ovs_add_port(bridge, tap_name, port_id, mac_address) ns_dev.link.set_address(mac_address) # Add an interface created by ovs to the namespace. 
if namespace: namespace_obj = ip.ensure_namespace(namespace) namespace_obj.add_device_to_namespace(ns_dev) else: LOG.info("Device %s already exists.", device_name) if ns_dev.link.address != mac_address: LOG.warning("Reset mac address to %s", mac_address) ns_dev.link.set_address(mac_address) ns_dev.link.set_up() @device_name_synchronized def unplug(self, device_name, bridge=None, namespace=None, prefix=None): """Unplug the interface.""" if not bridge: bridge = self.conf.ovs_integration_bridge tap_name = self._get_tap_name(device_name) self.check_bridge_exists(bridge) ovs = ovs_lib.OVSBridge(bridge) try: ovs.delete_port(tap_name) except RuntimeError: LOG.error("Failed unplugging interface '%s'", device_name) class BridgeInterfaceDriver(LinuxInterfaceDriver): """Driver for creating bridge interfaces.""" DEV_NAME_PREFIX = 'ns-' @device_name_synchronized def plug(self, device_name, port_id, mac_address, bridge=None, namespace=None, prefix=None): """Plugin the interface.""" ip = ip_lib.IPWrapper() if prefix: tap_name = device_name.replace(prefix, 'tap') else: tap_name = device_name.replace(self.DEV_NAME_PREFIX, 'tap') if not ip_lib.device_exists(device_name, namespace=namespace): # Create ns_veth in a namespace if one is configured. root_veth, ns_veth = ip.add_veth(tap_name, device_name, namespace2=namespace) ns_veth.link.set_address(mac_address) else: ns_veth = ip.device(device_name) root_veth = ip.device(tap_name) LOG.warning("Device %s already exists.", device_name) root_veth.link.set_up() ns_veth.link.set_up() @device_name_synchronized def unplug(self, device_name, bridge=None, namespace=None, prefix=None): """Unplug the interface.""" device = ip_lib.IPDevice(device_name, namespace) try: device.link.delete() LOG.debug("Unplugged interface '%s'", device_name) except RuntimeError: LOG.error("Failed unplugging interface '%s'", device_name) manila-10.0.0/manila/network/linux/ip_lib.py0000664000175000017500000003573713656750227021010 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import netaddr import six from manila.i18n import _ from manila import utils LOOPBACK_DEVNAME = 'lo' class SubProcessBase(object): def __init__(self, namespace=None): self.namespace = namespace def _run(self, options, command, args): if self.namespace: return self._as_root(options, command, args) else: return self._execute(options, command, args) def _as_root(self, options, command, args, use_root_namespace=False): namespace = self.namespace if not use_root_namespace else None return self._execute(options, command, args, namespace, as_root=True) @classmethod def _execute(cls, options, command, args, namespace=None, as_root=False): opt_list = ['-%s' % o for o in options] if namespace: ip_cmd = ['ip', 'netns', 'exec', namespace, 'ip'] else: ip_cmd = ['ip'] total_cmd = ip_cmd + opt_list + [command] + list(args) return utils.execute(*total_cmd, run_as_root=as_root)[0] class IPWrapper(SubProcessBase): def __init__(self, namespace=None): super(IPWrapper, self).__init__(namespace=namespace) self.netns = IpNetnsCommand(self) def device(self, name): return IPDevice(name, self.namespace) def get_devices(self, exclude_loopback=False): retval = [] output = self._execute('o', 'link', ('list',), self.namespace) for line in output.split('\n'): if '<' not in line: continue tokens = line.split(':', 2) if len(tokens) >= 3: name = tokens[1].split('@', 1)[0].strip() if exclude_loopback and name == LOOPBACK_DEVNAME: continue retval.append(IPDevice(name, self.namespace)) return retval def add_tuntap(self, name, mode='tap'): self._as_root('', 'tuntap', ('add', name, 'mode', mode)) return IPDevice(name, self.namespace) def add_veth(self, name1, name2, namespace2=None): args = ['add', name1, 'type', 'veth', 'peer', 'name', name2] if namespace2 is None: namespace2 = self.namespace else: self.ensure_namespace(namespace2) args += ['netns', namespace2] self._as_root('', 'link', tuple(args)) return (IPDevice(name1, self.namespace), IPDevice(name2, namespace2)) def ensure_namespace(self, name): if not self.netns.exists(name): ip = self.netns.add(name) lo = ip.device(LOOPBACK_DEVNAME) lo.link.set_up() else: ip = IPWrapper(name) return ip def namespace_is_empty(self): return not self.get_devices(exclude_loopback=True) def garbage_collect_namespace(self): """Conditionally destroy the namespace if it is empty.""" if self.namespace and self.netns.exists(self.namespace): if self.namespace_is_empty(): self.netns.delete(self.namespace) return True return False def add_device_to_namespace(self, device): if self.namespace: device.link.set_netns(self.namespace) @classmethod def get_namespaces(cls): output = cls._execute('', 'netns', ('list',)) return [l.strip() for l in output.split('\n')] class IPDevice(SubProcessBase): def __init__(self, name, namespace=None): super(IPDevice, self).__init__(namespace=namespace) self.name = name self.link = IpLinkCommand(self) self.addr = IpAddrCommand(self) self.route = IpRouteCommand(self) def __eq__(self, other): return (other is not None and self.name == other.name and self.namespace == other.namespace) def __str__(self): return self.name class IpCommandBase(object): COMMAND = '' def __init__(self, parent): self._parent = parent def _run(self, *args, **kwargs): return self._parent._run(kwargs.get('options', []), self.COMMAND, args) def _as_root(self, *args, **kwargs): return self._parent._as_root(kwargs.get('options', []), self.COMMAND, args, kwargs.get('use_root_namespace', False)) class IpDeviceCommandBase(IpCommandBase): @property def name(self): return self._parent.name class 
IpLinkCommand(IpDeviceCommandBase): COMMAND = 'link' def set_address(self, mac_address): self._as_root('set', self.name, 'address', mac_address) def set_mtu(self, mtu_size): self._as_root('set', self.name, 'mtu', mtu_size) def set_up(self): self._as_root('set', self.name, 'up') def set_down(self): self._as_root('set', self.name, 'down') def set_netns(self, namespace): self._as_root('set', self.name, 'netns', namespace) self._parent.namespace = namespace def set_name(self, name): self._as_root('set', self.name, 'name', name) self._parent.name = name def set_alias(self, alias_name): self._as_root('set', self.name, 'alias', alias_name) def delete(self): self._as_root('delete', self.name) @property def address(self): return self.attributes.get('link/ether') @property def state(self): return self.attributes.get('state') @property def mtu(self): return self.attributes.get('mtu') @property def qdisc(self): return self.attributes.get('qdisc') @property def qlen(self): return self.attributes.get('qlen') @property def alias(self): return self.attributes.get('alias') @property def attributes(self): return self._parse_line(self._run('show', self.name, options='o')) def _parse_line(self, value): if not value: return {} device_name, settings = value.replace("\\", '').split('>', 1) tokens = settings.split() keys = tokens[::2] values = [int(v) if v.isdigit() else v for v in tokens[1::2]] retval = dict(zip(keys, values)) return retval class IpAddrCommand(IpDeviceCommandBase): COMMAND = 'addr' def add(self, ip_version, cidr, broadcast, scope='global'): self._as_root('add', cidr, 'brd', broadcast, 'scope', scope, 'dev', self.name, options=[ip_version]) def delete(self, ip_version, cidr): self._as_root('del', cidr, 'dev', self.name, options=[ip_version]) def flush(self): self._as_root('flush', self.name) def list(self, scope=None, to=None, filters=None): if filters is None: filters = [] retval = [] if scope: filters += ['scope', scope] if to: filters += ['to', to] for line in self._run('show', self.name, *filters).split('\n'): line = line.strip() if not line.startswith('inet'): continue parts = line.split() if parts[0] == 'inet6': version = 6 scope = parts[3] broadcast = '::' else: version = 4 if parts[2] == 'brd': broadcast = parts[3] scope = parts[5] else: # sometimes output of 'ip a' might look like: # inet 192.168.100.100/24 scope global eth0 # and broadcast needs to be calculated from CIDR broadcast = str(netaddr.IPNetwork(parts[1]).broadcast) scope = parts[3] retval.append(dict(cidr=parts[1], broadcast=broadcast, scope=scope, ip_version=version, dynamic=('dynamic' == parts[-1]))) return retval class IpRouteCommand(IpDeviceCommandBase): COMMAND = 'route' def add_gateway(self, gateway, metric=None): args = ['replace', 'default', 'via', gateway] if metric: args += ['metric', metric] args += ['dev', self.name] self._as_root(*args) def delete_gateway(self, gateway): self._as_root('del', 'default', 'via', gateway, 'dev', self.name) def get_gateway(self, scope=None, filters=None): if filters is None: filters = [] retval = None if scope: filters += ['scope', scope] route_list_lines = self._run('list', 'dev', self.name, *filters).split('\n') default_route_line = next((x.strip() for x in route_list_lines if x.strip().startswith('default')), None) if default_route_line: gateway_index = 2 parts = default_route_line.split() retval = dict(gateway=parts[gateway_index]) metric_index = 4 parts_has_metric = (len(parts) > metric_index) if parts_has_metric: retval.update(metric=int(parts[metric_index])) return retval def 
pullup_route(self, interface_name): """Pullup route entry. Ensures that the route entry for the interface is before all others on the same subnet. """ device_list = [] device_route_list_lines = self._run('list', 'proto', 'kernel', 'dev', interface_name).split('\n') for device_route_line in device_route_list_lines: try: subnet = device_route_line.split()[0] except Exception: continue subnet_route_list_lines = self._run( 'list', 'proto', 'kernel', 'exact', subnet).split('\n') for subnet_route_line in subnet_route_list_lines: i = iter(subnet_route_line.split()) while(next(i) != 'dev'): pass device = next(i) try: while(next(i) != 'src'): pass src = next(i) except Exception: src = '' if device != interface_name: device_list.append((device, src)) else: break for (device, src) in device_list: self._as_root('del', subnet, 'dev', device) if (src != ''): self._as_root('append', subnet, 'proto', 'kernel', 'src', src, 'dev', device) else: self._as_root('append', subnet, 'proto', 'kernel', 'dev', device) def clear_outdated_routes(self, cidr): """Removes duplicated routes for a certain network CIDR. Removes all routes related to supplied CIDR except for the one related to this interface device. :param cidr: The network CIDR to be cleared. """ routes = self.list() items = [x for x in routes if x['Destination'] == cidr and x.get('Device') and x['Device'] != self.name] for item in items: self.delete_net_route(item['Destination'], item['Device']) def list(self): """List all routes :return: A dictionary with field 'Destination' and 'Device' for each route entry. 'Gateway' field is included if route has a gateway. """ routes = [] output = self._as_root('list') lines = output.split('\n') for line in lines: items = line.split() if len(items) > 0: item = {'Destination': items[0]} if len(items) > 1: if items[1] == 'via': item['Gateway'] = items[2] if len(items) > 3 and items[3] == 'dev': item['Device'] = items[4] if items[1] == 'dev': item['Device'] = items[2] routes.append(item) return routes def delete_net_route(self, cidr, device): """Deletes a route according to supplied CIDR and interface device. :param cidr: The network CIDR to be removed. :param device: The network interface device to be removed. 
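        For example (values are illustrative), calling
        ``delete_net_route('10.0.0.0/24', 'eth1')`` runs roughly
        ``ip route delete 10.0.0.0/24 dev eth1`` as root, wrapped in
        ``ip netns exec <namespace> ip route ...`` when the device has a
        namespace set.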
""" self._as_root('delete', cidr, 'dev', device) class IpNetnsCommand(IpCommandBase): COMMAND = 'netns' def add(self, name): self._as_root('add', name, use_root_namespace=True) return IPWrapper(name) def delete(self, name): self._as_root('delete', name, use_root_namespace=True) def execute(self, cmds, addl_env=None, check_exit_code=True): if addl_env is None: addl_env = dict() if not self._parent.namespace: raise Exception(_('No namespace defined for parent')) else: env_params = [] if addl_env: env_params = (['env'] + ['%s=%s' % pair for pair in sorted(addl_env.items())]) total_cmd = (['ip', 'netns', 'exec', self._parent.namespace] + env_params + list(cmds)) return utils.execute(*total_cmd, run_as_root=True, check_exit_code=check_exit_code) def exists(self, name): output = self._as_root('list', options='o', use_root_namespace=True) for line in output.split('\n'): if name == line.strip(): return True return False def device_exists(device_name, namespace=None): try: address = IPDevice(device_name, namespace).link.address except Exception as e: if 'does not exist' in six.text_type(e): return False raise return bool(address) def iproute_arg_supported(command, arg): command += ['help'] stdout, stderr = utils.execute(command, check_exit_code=False, return_stderr=True) return any(arg in line for line in stderr.split('\n')) manila-10.0.0/manila/network/linux/ovs_lib.py0000664000175000017500000000411113656750227021165 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re from oslo_log import log from manila import utils LOG = log.getLogger(__name__) class OVSBridge(object): def __init__(self, br_name): self.br_name = br_name self.re_id = self.re_compile_id() def re_compile_id(self): external = r'external_ids\s*' mac = r'attached-mac="(?P([a-fA-F\d]{2}:){5}([a-fA-F\d]{2}))"' iface = r'iface-id="(?P[^"]+)"' name = r'name\s*:\s"(?P[^"]*)"' port = r'ofport\s*:\s(?P-?\d+)' _re = (r'%(external)s:\s{ ( %(mac)s,? | %(iface)s,? | . 
)* }' r' \s+ %(name)s \s+ %(port)s' % {'external': external, 'mac': mac, 'iface': iface, 'name': name, 'port': port}) return re.compile(_re, re.M | re.X) def run_vsctl(self, args): full_args = ["ovs-vsctl", "--timeout=2"] + args try: return utils.execute(*full_args, run_as_root=True) except Exception: LOG.exception("Unable to execute %(cmd)s.", {'cmd': full_args}) def reset_bridge(self): self.run_vsctl(["--", "--if-exists", "del-br", self.br_name]) self.run_vsctl(["add-br", self.br_name]) def delete_port(self, port_name): self.run_vsctl(["--", "--if-exists", "del-port", self.br_name, port_name]) manila-10.0.0/manila/network/neutron/0000775000175000017500000000000013656750362017514 5ustar zuulzuul00000000000000manila-10.0.0/manila/network/neutron/__init__.py0000664000175000017500000000000013656750227021613 0ustar zuulzuul00000000000000manila-10.0.0/manila/network/neutron/api.py0000664000175000017500000003571513656750227020652 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2014 Mirantis Inc. # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import loading as ks_loading from neutronclient.common import exceptions as neutron_client_exc from neutronclient.v2_0 import client as clientv20 from oslo_config import cfg from oslo_log import log from manila.common import client_auth from manila import context from manila import exception from manila.network.neutron import constants as neutron_constants NEUTRON_GROUP = 'neutron' neutron_opts = [ cfg.StrOpt( 'url', default='http://127.0.0.1:9696', deprecated_group="DEFAULT", deprecated_name="neutron_url", help='URL for connecting to neutron.'), cfg.IntOpt( 'url_timeout', default=30, deprecated_group="DEFAULT", deprecated_name="neutron_url_timeout", help='Timeout value for connecting to neutron in seconds.'), cfg.StrOpt( 'auth_strategy', default='keystone', deprecated_group="DEFAULT", help='Auth strategy for connecting to neutron in admin context.'), cfg.StrOpt( 'endpoint_type', default='publicURL', help='Endpoint type to be used with neutron client calls.'), cfg.StrOpt( 'region_name', help='Region name for connecting to neutron in admin context.'), ] # These fallback options can be removed in/after 9.0.0 (Train) deprecated_opts = { 'cafile': [ cfg.DeprecatedOpt('ca_certificates_file', group="DEFAULT"), cfg.DeprecatedOpt('ca_certificates_file', group=NEUTRON_GROUP), ], 'insecure': [ cfg.DeprecatedOpt('api_insecure', group="DEFAULT"), cfg.DeprecatedOpt('api_insecure', group=NEUTRON_GROUP), ], } CONF = cfg.CONF LOG = log.getLogger(__name__) def list_opts(): return client_auth.AuthClientLoader.list_opts(NEUTRON_GROUP) class API(object): """API for interacting with the neutron 2.x API. :param configuration: instance of config or config group. 
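    A minimal usage sketch (the config group name and the ``tenant_id``,
    ``network_id`` and ``subnet_id`` variables are assumed placeholders,
    not values defined by this module)::

        neutron = API(config_group_name='neutron_example_group')
        port = neutron.create_port(tenant_id, network_id,
                                   subnet_id=subnet_id,
                                   device_owner='manila:share')
        neutron.delete_port(port['id'])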
""" def __init__(self, config_group_name=None): self.config_group_name = config_group_name or 'DEFAULT' ks_loading.register_session_conf_options( CONF, NEUTRON_GROUP, deprecated_opts=deprecated_opts) ks_loading.register_auth_conf_options(CONF, NEUTRON_GROUP) CONF.register_opts(neutron_opts, NEUTRON_GROUP) self.configuration = getattr(CONF, self.config_group_name, CONF) self.last_neutron_extension_sync = None self.extensions = {} self.auth_obj = None @property def client(self): return self.get_client(context.get_admin_context()) def get_client(self, context): if not self.auth_obj: self.auth_obj = client_auth.AuthClientLoader( client_class=clientv20.Client, exception_module=neutron_client_exc, cfg_group=NEUTRON_GROUP) return self.auth_obj.get_client( self, context, endpoint_type=CONF[NEUTRON_GROUP].endpoint_type, region_name=CONF[NEUTRON_GROUP].region_name, ) @property def admin_project_id(self): if self.client.httpclient.auth_token is None: try: self.client.httpclient.authenticate() except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) return self.client.httpclient.get_project_id() def get_all_admin_project_networks(self): search_opts = {'tenant_id': self.admin_project_id, 'shared': False} nets = self.client.list_networks(**search_opts).get('networks', []) return nets def create_port(self, tenant_id, network_id, host_id=None, subnet_id=None, fixed_ip=None, device_owner=None, device_id=None, mac_address=None, port_security_enabled=True, security_group_ids=None, dhcp_opts=None, **kwargs): try: port_req_body = {'port': {}} port_req_body['port']['network_id'] = network_id port_req_body['port']['admin_state_up'] = True port_req_body['port']['tenant_id'] = tenant_id if not port_security_enabled: port_req_body['port']['port_security_enabled'] = ( port_security_enabled) elif security_group_ids: port_req_body['port']['security_groups'] = security_group_ids if mac_address: port_req_body['port']['mac_address'] = mac_address if host_id: if not self._has_port_binding_extension(): msg = ("host_id (%(host_id)s) specified but neutron " "doesn't support port binding. Please activate the " "extension accordingly." 
% {"host_id": host_id}) raise exception.NetworkException(message=msg) port_req_body['port']['binding:host_id'] = host_id if dhcp_opts is not None: port_req_body['port']['extra_dhcp_opts'] = dhcp_opts if subnet_id: fixed_ip_dict = {'subnet_id': subnet_id} if fixed_ip: fixed_ip_dict.update({'ip_address': fixed_ip}) port_req_body['port']['fixed_ips'] = [fixed_ip_dict] if device_owner: port_req_body['port']['device_owner'] = device_owner if device_id: port_req_body['port']['device_id'] = device_id if kwargs: port_req_body['port'].update(kwargs) port = self.client.create_port(port_req_body).get('port', {}) return port except neutron_client_exc.NeutronClientException as e: LOG.exception('Neutron error creating port on network %s', network_id) if e.status_code == 409: raise exception.PortLimitExceeded() raise exception.NetworkException(code=e.status_code, message=e.message) def delete_port(self, port_id): try: self.client.delete_port(port_id) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def delete_subnet(self, subnet_id): try: self.client.delete_subnet(subnet_id) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def list_ports(self, **search_opts): """List ports for the client based on search options.""" return self.client.list_ports(**search_opts).get('ports') def show_port(self, port_id): """Return the port for the client given the port id.""" try: return self.client.show_port(port_id).get('port') except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def get_all_networks(self): """Get all networks for client.""" return self.client.list_networks().get('networks') def get_network(self, network_uuid): """Get specific network for client.""" try: network = self.client.show_network(network_uuid).get('network', {}) return network except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def get_subnet(self, subnet_uuid): """Get specific subnet for client.""" try: return self.client.show_subnet(subnet_uuid).get('subnet', {}) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def list_extensions(self): extensions_list = self.client.list_extensions().get('extensions') return {ext['name']: ext for ext in extensions_list} def _has_port_binding_extension(self): if not self.extensions: self.extensions = self.list_extensions() return neutron_constants.PORTBINDING_EXT in self.extensions def router_create(self, tenant_id, name): router_req_body = {'router': {}} router_req_body['router']['tenant_id'] = tenant_id router_req_body['router']['name'] = name try: return self.client.create_router(router_req_body).get('router', {}) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def network_create(self, tenant_id, name): network_req_body = {'network': {}} network_req_body['network']['tenant_id'] = tenant_id network_req_body['network']['name'] = name try: return self.client.create_network( network_req_body).get('network', {}) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def subnet_create(self, tenant_id, net_id, name, cidr): subnet_req_body = {'subnet': {}} 
subnet_req_body['subnet']['tenant_id'] = tenant_id subnet_req_body['subnet']['name'] = name subnet_req_body['subnet']['network_id'] = net_id subnet_req_body['subnet']['cidr'] = cidr subnet_req_body['subnet']['ip_version'] = 4 try: return self.client.create_subnet( subnet_req_body).get('subnet', {}) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def router_add_interface(self, router_id, subnet_id, port_id=None): body = {} if subnet_id: body['subnet_id'] = subnet_id if port_id: body['port_id'] = port_id try: self.client.add_interface_router(router_id, body) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def router_remove_interface(self, router_id, subnet_id, port_id=None): body = {} if subnet_id: body['subnet_id'] = subnet_id if port_id: body['port_id'] = port_id try: self.client.remove_interface_router(router_id, body) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def router_list(self): try: return self.client.list_routers().get('routers', {}) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def update_port_fixed_ips(self, port_id, fixed_ips): try: port_req_body = {'port': fixed_ips} port = self.client.update_port( port_id, port_req_body).get('port', {}) return port except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def show_router(self, router_id): try: return self.client.show_router(router_id).get('router', {}) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def router_update_routes(self, router_id, routes): try: router_req_body = {'router': routes} port = self.client.update_router( router_id, router_req_body).get('router', {}) return port except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def update_subnet(self, subnet_uuid, name): """Update specific subnet for client.""" subnet_req_body = {'subnet': {'name': name}} try: return self.client.update_subnet( subnet_uuid, subnet_req_body).get('subnet', {}) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException(code=e.status_code, message=e.message) def security_group_list(self, search_opts=None): try: return self.client.list_security_groups(**search_opts) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException( code=e.status_code, message=e.message) def security_group_create(self, name, description=""): try: return self.client.create_security_group( {'security_group': {"name": name, "description": description}}) except neutron_client_exc.NeutronClientException as e: raise exception.NetworkException( code=e.status_code, message=e.message) def security_group_rule_create(self, parent_group_id, ip_protocol=None, from_port=None, to_port=None, cidr=None, group_id=None, direction="ingress"): request = {"security_group_id": parent_group_id, "protocol": ip_protocol, "remote_ip_prefix": cidr, "remote_group_id": group_id, "direction": direction} if ip_protocol != "icmp": request["port_range_min"] = from_port request["port_range_max"] = to_port try: return self.client.create_security_group_rule( {"security_group_rule": request}) except 
neutron_client_exc.NeutronClientException as e: raise exception.NetworkException( code=e.status_code, message=e.message) manila-10.0.0/manila/network/neutron/neutron_network_plugin.py0000664000175000017500000006732113656750227024720 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2015 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ipaddress import six import socket from oslo_config import cfg from oslo_log import log from manila.common import constants from manila import exception from manila.i18n import _ from manila import network from manila.network.neutron import api as neutron_api from manila.network.neutron import constants as neutron_constants from manila import utils LOG = log.getLogger(__name__) neutron_network_plugin_opts = [ cfg.StrOpt( 'neutron_physical_net_name', help="The name of the physical network to determine which net segment " "is used. This opt is optional and will only be used for " "networks configured with multiple segments.", deprecated_group='DEFAULT'), ] neutron_single_network_plugin_opts = [ cfg.StrOpt( 'neutron_net_id', help="Default Neutron network that will be used for share server " "creation. This opt is used only with " "class 'NeutronSingleNetworkPlugin'.", deprecated_group='DEFAULT'), cfg.StrOpt( 'neutron_subnet_id', help="Default Neutron subnet that will be used for share server " "creation. Should be assigned to network defined in opt " "'neutron_net_id'. This opt is used only with " "class 'NeutronSingleNetworkPlugin'.", deprecated_group='DEFAULT'), ] neutron_bind_network_plugin_opts = [ cfg.StrOpt( 'neutron_vnic_type', help="vNIC type used for binding.", choices=['baremetal', 'normal', 'direct', 'direct-physical', 'macvtap'], default='baremetal'), cfg.StrOpt( "neutron_host_id", help="Host ID to be used when creating neutron port. If not set " "host is set to manila-share host by default.", default=socket.gethostname()), ] neutron_binding_profile = [ cfg.ListOpt( "neutron_binding_profiles", help="A list of binding profiles to be used during port binding. This " "option can be used with the NeutronBindNetworkPlugin. The value for " "this option has to be a comma separated list of names that " "correspond to each binding profile. Each binding profile needs to be " "specified as an individual configuration section using the binding " "profile name as the section name."), ] neutron_binding_profile_opts = [ cfg.StrOpt( 'neutron_switch_id', help="Switch ID for binding profile."), cfg.StrOpt( 'neutron_port_id', help="Port ID on the given switch.",), cfg.DictOpt( 'neutron_switch_info', help="Switch label. For example: 'switch_ip: 10.4.30.5'. 
Multiple " "key-value pairs separated by commas are accepted.",), ] CONF = cfg.CONF class NeutronNetworkPlugin(network.NetworkBaseAPI): def __init__(self, *args, **kwargs): db_driver = kwargs.pop('db_driver', None) config_group_name = kwargs.get('config_group_name', 'DEFAULT') super(NeutronNetworkPlugin, self).__init__(config_group_name=config_group_name, db_driver=db_driver) self._neutron_api = None self._neutron_api_args = args self._neutron_api_kwargs = kwargs self._label = kwargs.pop('label', 'user') CONF.register_opts( neutron_network_plugin_opts, group=self.neutron_api.config_group_name) @property def label(self): return self._label @property @utils.synchronized("instantiate_neutron_api") def neutron_api(self): if not self._neutron_api: self._neutron_api = neutron_api.API(*self._neutron_api_args, **self._neutron_api_kwargs) return self._neutron_api def _store_neutron_net_info(self, context, share_network_subnet): self._save_neutron_network_data(context, share_network_subnet) self._save_neutron_subnet_data(context, share_network_subnet) def allocate_network(self, context, share_server, share_network=None, share_network_subnet=None, **kwargs): """Allocate network resources using given network information. Create neutron ports for a given neutron network and subnet, create manila db records for allocated neutron ports. :param context: RequestContext object :param share_server: share server data :param share_network: share network data :param share_network_subnet: share network subnet data :param kwargs: allocations parameters given by the back-end driver. Supported params: 'count' - how many allocations should be created 'device_owner' - set owner for network allocations :rtype: list of :class: 'dict' """ if not self._has_provider_network_extension(): msg = "%s extension required" % neutron_constants.PROVIDER_NW_EXT raise exception.NetworkBadConfigurationException(reason=msg) self._verify_share_network(share_server['id'], share_network) self._verify_share_network_subnet(share_server['id'], share_network_subnet) self._store_neutron_net_info(context, share_network_subnet) allocation_count = kwargs.get('count', 1) device_owner = kwargs.get('device_owner', 'share') ports = [] for __ in range(0, allocation_count): ports.append(self._create_port( context, share_server, share_network, share_network_subnet, device_owner)) return ports def manage_network_allocations( self, context, allocations, share_server, share_network=None, share_network_subnet=None): self._verify_share_network_subnet(share_server['id'], share_network_subnet) self._store_neutron_net_info(context, share_network_subnet) # We begin matching the allocations to known neutron ports and # finally return the non-consumed allocations remaining_allocations = list(allocations) fixed_ip_filter = ('subnet_id=' + share_network_subnet['neutron_subnet_id']) port_list = self.neutron_api.list_ports( network_id=share_network_subnet['neutron_net_id'], device_owner='manila:share', fixed_ips=fixed_ip_filter) selected_ports = self._get_ports_respective_to_ips( remaining_allocations, port_list) LOG.debug("Found matching allocations in Neutron:" " %s", six.text_type(selected_ports)) for selected_port in selected_ports: port_dict = { 'id': selected_port['port']['id'], 'share_server_id': share_server['id'], 'ip_address': selected_port['allocation'], 'gateway': share_network_subnet['gateway'], 'mac_address': selected_port['port']['mac_address'], 'status': constants.STATUS_ACTIVE, 'label': self.label, 'network_type': 
share_network_subnet.get('network_type'), 'segmentation_id': share_network_subnet.get('segmentation_id'), 'ip_version': share_network_subnet['ip_version'], 'cidr': share_network_subnet['cidr'], 'mtu': share_network_subnet['mtu'], } # There should not be existing allocations with the same port_id. try: existing_port = self.db.network_allocation_get( context, selected_port['port']['id'], read_deleted=False) except exception.NotFound: pass else: msg = _("There were existing conflicting manila network " "allocations found while trying to manage share " "server %(new_ss)s. The conflicting port belongs to " "share server %(old_ss)s.") % { 'new_ss': share_server['id'], 'old_ss': existing_port['share_server_id'], } raise exception.ManageShareServerError(reason=msg) # If there are previously deleted allocations, we undelete them try: self.db.network_allocation_get( context, selected_port['port']['id'], read_deleted=True) except exception.NotFound: self.db.network_allocation_create(context, port_dict) else: port_dict.pop('id') port_dict.update({ 'deleted_at': None, 'deleted': 'False', }) self.db.network_allocation_update( context, selected_port['port']['id'], port_dict, read_deleted=True) remaining_allocations.remove(selected_port['allocation']) return remaining_allocations def unmanage_network_allocations(self, context, share_server_id): ports = self.db.network_allocations_get_for_share_server( context, share_server_id) for port in ports: self.db.network_allocation_delete(context, port['id']) def _get_ports_respective_to_ips(self, allocations, port_list): selected_ports = [] for port in port_list: for ip in port['fixed_ips']: if ip['ip_address'] in allocations: if not any(port['id'] == p['port']['id'] for p in selected_ports): selected_ports.append( {'port': port, 'allocation': ip['ip_address']}) else: LOG.warning("Port %s has more than one IP that " "matches allocations, please use ports " "respective to only one allocation IP.", port['id']) return selected_ports def _get_matched_ip_address(self, fixed_ips, ip_version): """Get first ip address which matches the specified ip_version.""" for ip in fixed_ips: try: address = ipaddress.ip_address(six.text_type(ip['ip_address'])) if address.version == ip_version: return ip['ip_address'] except ValueError: LOG.error("%(address)s isn't a valid ip " "address, omitted.", {'address': ip['ip_address']}) msg = _("Can not find any IP address with configured IP " "version %(version)s in share-network.") % {'version': ip_version} raise exception.NetworkBadConfigurationException(reason=msg) def deallocate_network(self, context, share_server_id): """Deallocate neutron network resources for the given share server. Delete previously allocated neutron ports, delete manila db records for deleted ports. 
:param context: RequestContext object :param share_server_id: id of share server :rtype: None """ ports = self.db.network_allocations_get_for_share_server( context, share_server_id) for port in ports: self._delete_port(context, port) def _get_port_create_args(self, share_server, share_network_subnet, device_owner): return { "network_id": share_network_subnet['neutron_net_id'], "subnet_id": share_network_subnet['neutron_subnet_id'], "device_owner": 'manila:' + device_owner, "device_id": share_server.get('id'), } def _create_port(self, context, share_server, share_network, share_network_subnet, device_owner): create_args = self._get_port_create_args( share_server, share_network_subnet, device_owner) port = self.neutron_api.create_port( share_network['project_id'], **create_args) ip_address = self._get_matched_ip_address( port['fixed_ips'], share_network_subnet['ip_version']) port_dict = { 'id': port['id'], 'share_server_id': share_server['id'], 'ip_address': ip_address, 'gateway': share_network_subnet['gateway'], 'mac_address': port['mac_address'], 'status': constants.STATUS_ACTIVE, 'label': self.label, 'network_type': share_network_subnet.get('network_type'), 'segmentation_id': share_network_subnet.get('segmentation_id'), 'ip_version': share_network_subnet['ip_version'], 'cidr': share_network_subnet['cidr'], 'mtu': share_network_subnet['mtu'], } return self.db.network_allocation_create(context, port_dict) def _delete_port(self, context, port): try: self.neutron_api.delete_port(port['id']) except exception.NetworkException: self.db.network_allocation_update( context, port['id'], {'status': constants.STATUS_ERROR}) raise else: self.db.network_allocation_delete(context, port['id']) def _has_provider_network_extension(self): extensions = self.neutron_api.list_extensions() return neutron_constants.PROVIDER_NW_EXT in extensions def _is_neutron_multi_segment(self, share_network_subnet, net_info=None): if net_info is None: net_info = self.neutron_api.get_network( share_network_subnet['neutron_net_id']) return 'segments' in net_info def _save_neutron_network_data(self, context, share_network_subnet): net_info = self.neutron_api.get_network( share_network_subnet['neutron_net_id']) segmentation_id = None network_type = None if self._is_neutron_multi_segment(share_network_subnet, net_info): # we have a multi segment network and need to identify the # lowest segment used for binding phy_nets = [] phy = self.neutron_api.configuration.neutron_physical_net_name if not phy: msg = "Cannot identify segment used for binding. Please add " "neutron_physical_net_name in configuration." raise exception.NetworkBadConfigurationException(reason=msg) for segment in net_info['segments']: phy_nets.append(segment['provider:physical_network']) if segment['provider:physical_network'] == phy: segmentation_id = segment['provider:segmentation_id'] network_type = segment['provider:network_type'] if not (segmentation_id and network_type): msg = ("No matching neutron_physical_net_name found for %s " "(found: %s)." 
% (phy, phy_nets)) raise exception.NetworkBadConfigurationException(reason=msg) else: network_type = net_info['provider:network_type'] segmentation_id = net_info['provider:segmentation_id'] provider_nw_dict = { 'network_type': network_type, 'segmentation_id': segmentation_id, 'mtu': net_info['mtu'], } share_network_subnet.update(provider_nw_dict) if self.label != 'admin': self.db.share_network_subnet_update( context, share_network_subnet['id'], provider_nw_dict) def _save_neutron_subnet_data(self, context, share_network_subnet): subnet_info = self.neutron_api.get_subnet( share_network_subnet['neutron_subnet_id']) subnet_values = { 'cidr': subnet_info['cidr'], 'gateway': subnet_info['gateway_ip'], 'ip_version': subnet_info['ip_version'] } share_network_subnet.update(subnet_values) if self.label != 'admin': self.db.share_network_subnet_update( context, share_network_subnet['id'], subnet_values) class NeutronSingleNetworkPlugin(NeutronNetworkPlugin): def __init__(self, *args, **kwargs): super(NeutronSingleNetworkPlugin, self).__init__(*args, **kwargs) CONF.register_opts( neutron_single_network_plugin_opts, group=self.neutron_api.config_group_name) self.net = self.neutron_api.configuration.neutron_net_id self.subnet = self.neutron_api.configuration.neutron_subnet_id self._verify_net_and_subnet() def _select_proper_share_network_subnet(self, context, share_network_subnet): if self.label != 'admin': share_network_subnet = self._update_share_network_net_data( context, share_network_subnet) else: share_network_subnet = { 'project_id': self.neutron_api.admin_project_id, 'neutron_net_id': self.net, 'neutron_subnet_id': self.subnet, } return share_network_subnet def allocate_network(self, context, share_server, share_network=None, share_network_subnet=None, **kwargs): share_network_subnet = self._select_proper_share_network_subnet( context, share_network_subnet) # Update share network project_id info if needed if share_network_subnet.get('project_id', None) is not None: share_network['project_id'] = share_network_subnet.pop( 'project_id') return super(NeutronSingleNetworkPlugin, self).allocate_network( context, share_server, share_network, share_network_subnet, **kwargs) def manage_network_allocations( self, context, allocations, share_server, share_network=None, share_network_subnet=None): share_network_subnet = self._select_proper_share_network_subnet( context, share_network_subnet) # Update share network project_id info if needed if share_network and share_network_subnet.get('project_id', None): share_network['project_id'] = ( share_network_subnet.pop('project_id')) return super(NeutronSingleNetworkPlugin, self).manage_network_allocations( context, allocations, share_server, share_network, share_network_subnet) def _verify_net_and_subnet(self): data = dict(net=self.net, subnet=self.subnet) if self.net and self.subnet: net = self.neutron_api.get_network(self.net) if not (net.get('subnets') and data['subnet'] in net['subnets']): raise exception.NetworkBadConfigurationException( "Subnet '%(subnet)s' does not belong to " "network '%(net)s'." % data) else: raise exception.NetworkBadConfigurationException( "Neutron net and subnet are expected to be both set. " "Got: net=%(net)s and subnet=%(subnet)s." 
% data) def _update_share_network_net_data(self, context, share_network_subnet): upd = dict() if not share_network_subnet.get('neutron_net_id') == self.net: if share_network_subnet.get('neutron_net_id') is not None: raise exception.NetworkBadConfigurationException( "Using neutron net id different from None or value " "specified in the config is forbidden for " "NeutronSingleNetworkPlugin. Allowed values: (%(net)s, " "None), received value: %(err)s" % { "net": self.net, "err": share_network_subnet.get('neutron_net_id')}) upd['neutron_net_id'] = self.net if not share_network_subnet.get('neutron_subnet_id') == self.subnet: if share_network_subnet.get('neutron_subnet_id') is not None: raise exception.NetworkBadConfigurationException( "Using neutron subnet id different from None or value " "specified in the config is forbidden for " "NeutronSingleNetworkPlugin. Allowed values: (%(snet)s, " "None), received value: %(err)s" % { "snet": self.subnet, "err": share_network_subnet.get('neutron_subnet_id')}) upd['neutron_subnet_id'] = self.subnet if upd: share_network_subnet = self.db.share_network_subnet_update( context, share_network_subnet['id'], upd) return share_network_subnet class NeutronBindNetworkPlugin(NeutronNetworkPlugin): def __init__(self, *args, **kwargs): super(NeutronBindNetworkPlugin, self).__init__(*args, **kwargs) self.binding_profiles = [] CONF.register_opts( neutron_binding_profile, group=self.neutron_api.config_group_name) conf = CONF[self.neutron_api.config_group_name] if conf.neutron_binding_profiles: for profile in conf.neutron_binding_profiles: CONF.register_opts(neutron_binding_profile_opts, group=profile) self.binding_profiles.append(profile) CONF.register_opts( neutron_bind_network_plugin_opts, group=self.neutron_api.config_group_name) self.config = self.neutron_api.configuration def update_network_allocation(self, context, share_server): if self.config.neutron_vnic_type == 'normal': ports = self.db.network_allocations_get_for_share_server( context, share_server['id']) self._wait_for_ports_bind(ports, share_server) return ports @utils.retry(exception.NetworkBindException, retries=20) def _wait_for_ports_bind(self, ports, share_server): inactive_ports = [] for port in ports: port = self._neutron_api.show_port(port['id']) if (port['status'] == neutron_constants.PORT_STATUS_ERROR or ('binding:vif_type' in port and port['binding:vif_type'] == neutron_constants.VIF_TYPE_BINDING_FAILED)): msg = _("Port binding %s failed.") % port['id'] raise exception.NetworkException(msg) elif port['status'] != neutron_constants.PORT_STATUS_ACTIVE: LOG.debug("The port %(id)s is in state %(state)s. 
" "Wait for active state.", { "id": port['id'], "state": port['status']}) inactive_ports.append(port['id']) if len(inactive_ports) == 0: return msg = _("Ports are not fully bound for share server " "'%(s_id)s' (inactive ports: %(ports)s)") % { "s_id": share_server['id'], "ports": inactive_ports} raise exception.NetworkBindException(msg) def _get_port_create_args(self, share_server, share_network_subnet, device_owner): arguments = super( NeutronBindNetworkPlugin, self)._get_port_create_args( share_server, share_network_subnet, device_owner) arguments['host_id'] = self.config.neutron_host_id arguments['binding:vnic_type'] = self.config.neutron_vnic_type if self.binding_profiles: local_links = [] for profile in self.binding_profiles: local_links.append({ 'switch_id': CONF[profile]['neutron_switch_id'], 'port_id': CONF[profile]['neutron_port_id'], 'switch_info': CONF[profile]['neutron_switch_info'], }) arguments['binding:profile'] = { "local_link_information": local_links} return arguments def _save_neutron_network_data(self, context, share_network_subnet): """Store the Neutron network info. In case of dynamic multi segments the segment is determined while binding the port. Therefore this method will return for multi segments network without storing network information (apart from mtu). Instead, multi segments network will wait until ports are bound and then store network information (see allocate_network()). """ if self._is_neutron_multi_segment(share_network_subnet): # In case of dynamic multi segment the segment is determined while # binding the port, only mtu is known and already needed self._save_neutron_network_mtu(context, share_network_subnet) return super(NeutronBindNetworkPlugin, self)._save_neutron_network_data( context, share_network_subnet) def _save_neutron_network_mtu(self, context, share_network_subnet): """Store the Neutron network mtu. In case of dynamic multi segments only the mtu needs storing before binding the port. """ net_info = self.neutron_api.get_network( share_network_subnet['neutron_net_id']) mtu_dict = { 'mtu': net_info['mtu'], } share_network_subnet.update(mtu_dict) if self.label != 'admin': self.db.share_network_subnet_update( context, share_network_subnet['id'], mtu_dict) def allocate_network(self, context, share_server, share_network=None, share_network_subnet=None, **kwargs): ports = super(NeutronBindNetworkPlugin, self).allocate_network( context, share_server, share_network, share_network_subnet, **kwargs) # If vnic type is 'normal' we expect a neutron agent to bind the # ports. This action requires a vnic to be spawned by the driver. # Therefore we do not wait for the port binding here, but # return the unbound ports and expect the share manager to call # update_network_allocation after the share server was created, in # order to update the ports with the correct binding. 
if self.config.neutron_vnic_type != 'normal': self._wait_for_ports_bind(ports, share_server) if self._is_neutron_multi_segment(share_network_subnet): # update segment information after port bind super(NeutronBindNetworkPlugin, self)._save_neutron_network_data(context, share_network_subnet) for num, port in enumerate(ports): port_info = { 'network_type': share_network_subnet['network_type'], 'segmentation_id': share_network_subnet['segmentation_id'], 'cidr': share_network_subnet['cidr'], 'ip_version': share_network_subnet['ip_version'], } ports[num] = self.db.network_allocation_update( context, port['id'], port_info) return ports class NeutronBindSingleNetworkPlugin(NeutronSingleNetworkPlugin, NeutronBindNetworkPlugin): pass manila-10.0.0/manila/network/neutron/constants.py0000664000175000017500000000145013656750227022102 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. PROVIDER_NW_EXT = 'Provider Network' PORTBINDING_EXT = 'Port Binding' PORT_STATUS_ERROR = 'ERROR' PORT_STATUS_ACTIVE = 'ACTIVE' VIF_TYPE_BINDING_FAILED = 'binding_failed' manila-10.0.0/manila/message/0000775000175000017500000000000013656750362015755 5ustar zuulzuul00000000000000manila-10.0.0/manila/message/__init__.py0000664000175000017500000000000013656750227020054 0ustar zuulzuul00000000000000manila-10.0.0/manila/message/message_field.py0000664000175000017500000001455313656750227021126 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila import exception from manila.i18n import _ class Resource(object): SHARE = 'SHARE' SHARE_GROUP = 'SHARE_GROUP' SHARE_REPLICA = 'SHARE_REPLICA' SHARE_SNAPSHOT = 'SHARE_SNAPSHOT' class Action(object): ALLOCATE_HOST = ('001', _('allocate host')) CREATE = ('002', _('create')) DELETE_ACCESS_RULES = ('003', _('delete access rules')) PROMOTE = ('004', _('promote')) UPDATE = ('005', _('update')) REVERT_TO_SNAPSHOT = ('006', _('revert to snapshot')) DELETE = ('007', _('delete')) EXTEND = ('008', _('extend')) SHRINK = ('009', _('shrink')) ALL = (ALLOCATE_HOST, CREATE, DELETE_ACCESS_RULES, PROMOTE, UPDATE, REVERT_TO_SNAPSHOT, DELETE, EXTEND, SHRINK) class Detail(object): UNKNOWN_ERROR = ('001', _('An unknown error occurred.')) NO_VALID_HOST = ( '002', _("No storage could be allocated for this share request. 
" "Trying again with a different size or share type may " "succeed.")) UNEXPECTED_NETWORK = ( '003', _("Driver does not expect share-network to be provided with " "current configuration.")) NO_SHARE_SERVER = ( '004', _("Could not find an existing share server or allocate one on " "the share network provided. You may use a different share " "network, or verify the network details in the share network " "and retry your request. If this doesn't work, contact your " "administrator to troubleshoot issues with your network.")) NO_ACTIVE_AVAILABLE_REPLICA = ( '005', _("An 'active' replica must exist in 'available' state to " "create a new replica for share.")) NO_ACTIVE_REPLICA = ( '006', _("Share has no replica with 'replica_state' set to 'active'.")) FILTER_MSG = _("No storage could be allocated for this share request, " "%s filter didn't succeed.") FILTER_AVAILABILITY = ('007', FILTER_MSG % 'AvailabilityZone') FILTER_CAPABILITIES = ('008', FILTER_MSG % 'Capabilities') FILTER_CAPACITY = ('009', FILTER_MSG % 'Capacity') FILTER_DRIVER = ('010', FILTER_MSG % 'Driver') FILTER_IGNORE = ('011', FILTER_MSG % 'IgnoreAttemptedHosts') FILTER_JSON = ('012', FILTER_MSG % 'Json') FILTER_RETRY = ('013', FILTER_MSG % 'Retry') FILTER_REPLICATION = ('014', FILTER_MSG % 'ShareReplication') DRIVER_FAILED_EXTEND = ( '015', _("Share Driver failed to extend share, The share status has been " "set to extending_error. This action cannot be re-attempted until " "the status has been rectified. Contact your administrator to " "determine the cause of this failure.")) FILTER_CREATE_FROM_SNAPSHOT = ('016', FILTER_MSG % 'CreateFromSnapshot') DRIVER_FAILED_CREATING_FROM_SNAP = ( '017', _("Share Driver has failed to create the share from snapshot. This " "operation can be re-attempted by creating a new share. Contact " "your administrator to determine the cause of this failure.")) DRIVER_REFUSED_SHRINK = ( '018', _("Share Driver refused to shrink the share. The size to be shrunk is" " smaller than the current used space. The share status has been" " set to available. 
Please select a size greater than the current" " used space.")) ALL = (UNKNOWN_ERROR, NO_VALID_HOST, UNEXPECTED_NETWORK, NO_SHARE_SERVER, NO_ACTIVE_AVAILABLE_REPLICA, NO_ACTIVE_REPLICA, FILTER_AVAILABILITY, FILTER_CAPABILITIES, FILTER_CAPACITY, FILTER_DRIVER, FILTER_IGNORE, FILTER_JSON, FILTER_RETRY, FILTER_REPLICATION, DRIVER_FAILED_EXTEND, FILTER_CREATE_FROM_SNAPSHOT, DRIVER_FAILED_CREATING_FROM_SNAP, DRIVER_REFUSED_SHRINK) # Exception and detail mappings EXCEPTION_DETAIL_MAPPINGS = { NO_VALID_HOST: ['NoValidHost'], } # Use special code for each filter rather then categorize all as # NO_VALID_HOST FILTER_DETAIL_MAPPINGS = { 'AvailabilityZoneFilter': FILTER_AVAILABILITY, 'CapabilitiesFilter': FILTER_CAPABILITIES, 'CapacityFilter': FILTER_CAPACITY, 'DriverFilter': FILTER_DRIVER, 'IgnoreAttemptedHostsFilter': FILTER_IGNORE, 'JsonFilter': FILTER_JSON, 'RetryFilter': FILTER_RETRY, 'ShareReplicationFilter': FILTER_REPLICATION, 'CreateFromSnapshotFilter': FILTER_CREATE_FROM_SNAPSHOT, } def translate_action(action_id): action_message = next((action[1] for action in Action.ALL if action[0] == action_id), None) return action_message or 'unknown action' def translate_detail(detail_id): detail_message = next((action[1] for action in Detail.ALL if action[0] == detail_id), None) return detail_message or Detail.UNKNOWN_ERROR[1] def translate_detail_id(excep, detail): if excep is not None: detail = _translate_exception_to_detail(excep) if detail in Detail.ALL: return detail[0] return Detail.UNKNOWN_ERROR[0] def _translate_exception_to_detail(ex): if isinstance(ex, exception.NoValidHost): # if NoValidHost was raised because a filter failed (a filter # didn't return any hosts), use a filter-specific detail details = getattr(ex, 'detail_data', {}) last_filter = details.get('last_filter') return Detail.FILTER_DETAIL_MAPPINGS.get( last_filter, Detail.NO_VALID_HOST) else: for key, value in Detail.EXCEPTION_DETAIL_MAPPINGS.items(): if ex.__class__.__name__ in value: return key manila-10.0.0/manila/message/api.py0000664000175000017500000000700613656750227017103 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Handles all requests related to user facing messages. 
""" import datetime from oslo_config import cfg from oslo_log import log as logging from oslo_utils import timeutils import six from manila.db import base from manila.message import message_field from manila.message import message_levels messages_opts = [ cfg.IntOpt('message_ttl', default=2592000, help='Message minimum life in seconds.'), cfg.IntOpt('message_reap_interval', default=86400, help='Interval between periodic task runs to clean expired ' 'messages in seconds.'), ] CONF = cfg.CONF CONF.register_opts(messages_opts) LOG = logging.getLogger(__name__) class API(base.Base): """API for handling user messages.""" def create(self, context, action, project_id, resource_type=None, resource_id=None, exception=None, detail=None, level=message_levels.ERROR): """Create a message with the specified information.""" LOG.info("Creating message record for request_id = %s", context.request_id) # Updates expiry time for message as per message_ttl config. expires_at = (timeutils.utcnow() + datetime.timedelta( seconds=CONF.message_ttl)) detail_id = message_field.translate_detail_id(exception, detail) message_record = { 'project_id': project_id, 'request_id': context.request_id, 'resource_type': resource_type, 'resource_id': resource_id, 'action_id': action[0], 'detail_id': detail_id, 'message_level': level, 'expires_at': expires_at, } try: self.db.message_create(context, message_record) except Exception: LOG.exception(("Failed to create message record " "for request_id %s"), context.request_id) def get(self, context, id): """Return message with the specified message id.""" return self.db.message_get(context, id) def get_all(self, context, search_opts=None, limit=None, offset=None, sort_key=None, sort_dir=None): """Return messages for the given context.""" LOG.debug("Searching for messages by: %s", six.text_type(search_opts)) search_opts = search_opts or {} messages = self.db.message_get_all(context, filters=search_opts, limit=limit, offset=offset, sort_key=sort_key, sort_dir=sort_dir) return messages def delete(self, context, id): """Delete message with the specified message id.""" return self.db.message_destroy(context, id) def cleanup_expired_messages(self, context): ctx = context.elevated() count = self.db.cleanup_expired_messages(ctx) LOG.info("Deleted %s expired messages.", count) manila-10.0.0/manila/message/message_levels.py0000664000175000017500000000115513656750227021327 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Message level constants.""" ERROR = 'ERROR' manila-10.0.0/manila/db/0000775000175000017500000000000013656750362014716 5ustar zuulzuul00000000000000manila-10.0.0/manila/db/__init__.py0000664000175000017500000000144213656750227017030 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ DB abstraction for Manila """ from manila.db.api import * # noqa manila-10.0.0/manila/db/api.py0000664000175000017500000014760413656750227016055 0ustar zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Defines interface for DB access. The underlying driver is loaded as a :class:`LazyPluggable`. Functions in this module are imported into the manila.db namespace. Call these functions from manila.db namespace, not the manila.db.api namespace. All functions in this module return objects that implement a dictionary-like interface. Currently, many of these objects are sqlalchemy objects that implement a dictionary interface. However, a future goal is to have all of these objects be simple dictionaries. **Related Flags** :backend: string to lookup in the list of LazyPluggable backends. `sqlalchemy` is the only supported backend right now. :connection: string specifying the sqlalchemy connection to use, like: `sqlite:///var/lib/manila/manila.sqlite`. 
:enable_new_services: when adding a new service to the database, is it in the pool of available hardware (Default: True) """ from oslo_config import cfg from oslo_db import api as db_api db_opts = [ cfg.StrOpt('db_backend', default='sqlalchemy', help='The backend to use for database.'), cfg.BoolOpt('enable_new_services', default=True, help='Services to be added to the available pool on create.'), cfg.StrOpt('share_name_template', default='share-%s', help='Template string to be used to generate share names.'), cfg.StrOpt('share_snapshot_name_template', default='share-snapshot-%s', help='Template string to be used to generate share snapshot ' 'names.'), ] CONF = cfg.CONF CONF.register_opts(db_opts) _BACKEND_MAPPING = {'sqlalchemy': 'manila.db.sqlalchemy.api'} IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING, lazy=True) def authorize_project_context(context, project_id): """Ensures a request has permission to access the given project.""" return IMPL.authorize_project_context(context, project_id) def authorize_quota_class_context(context, class_name): """Ensures a request has permission to access the given quota class.""" return IMPL.authorize_quota_class_context(context, class_name) ################### def service_destroy(context, service_id): """Destroy the service or raise if it does not exist.""" return IMPL.service_destroy(context, service_id) def service_get(context, service_id): """Get a service or raise if it does not exist.""" return IMPL.service_get(context, service_id) def service_get_by_host_and_topic(context, host, topic): """Get a service by host it's on and topic it listens to.""" return IMPL.service_get_by_host_and_topic(context, host, topic) def service_get_all(context, disabled=None): """Get all services.""" return IMPL.service_get_all(context, disabled) def service_get_all_by_topic(context, topic): """Get all services for a given topic.""" return IMPL.service_get_all_by_topic(context, topic) def service_get_all_share_sorted(context): """Get all share services sorted by share count. :returns: a list of (Service, share_count) tuples. """ return IMPL.service_get_all_share_sorted(context) def service_get_by_args(context, host, binary): """Get the state of an service by node name and binary.""" return IMPL.service_get_by_args(context, host, binary) def service_create(context, values): """Create a service from the values dictionary.""" return IMPL.service_create(context, values) def service_update(context, service_id, values): """Set the given properties on an service and update it. Raises NotFound if service does not exist. 
""" return IMPL.service_update(context, service_id, values) #################### def quota_create(context, project_id, resource, limit, user_id=None, share_type_id=None): """Create a quota for the given project and resource.""" return IMPL.quota_create(context, project_id, resource, limit, user_id=user_id, share_type_id=share_type_id) def quota_get_all_by_project_and_user(context, project_id, user_id): """Retrieve all quotas associated with a given project and user.""" return IMPL.quota_get_all_by_project_and_user(context, project_id, user_id) def quota_get_all_by_project_and_share_type(context, project_id, share_type_id): """Retrieve all quotas associated with a given project and user.""" return IMPL.quota_get_all_by_project_and_share_type( context, project_id, share_type_id) def quota_get_all_by_project(context, project_id): """Retrieve all quotas associated with a given project.""" return IMPL.quota_get_all_by_project(context, project_id) def quota_get_all(context, project_id): """Retrieve all user quotas associated with a given project.""" return IMPL.quota_get_all(context, project_id) def quota_update(context, project_id, resource, limit, user_id=None, share_type_id=None): """Update a quota or raise if it does not exist.""" return IMPL.quota_update(context, project_id, resource, limit, user_id=user_id, share_type_id=share_type_id) ################### def quota_class_create(context, class_name, resource, limit): """Create a quota class for the given name and resource.""" return IMPL.quota_class_create(context, class_name, resource, limit) def quota_class_get(context, class_name, resource): """Retrieve a quota class or raise if it does not exist.""" return IMPL.quota_class_get(context, class_name, resource) def quota_class_get_default(context): """Retrieve all default quotas.""" return IMPL.quota_class_get_default(context) def quota_class_get_all_by_name(context, class_name): """Retrieve all quotas associated with a given quota class.""" return IMPL.quota_class_get_all_by_name(context, class_name) def quota_class_update(context, class_name, resource, limit): """Update a quota class or raise if it does not exist.""" return IMPL.quota_class_update(context, class_name, resource, limit) ################### def quota_usage_get(context, project_id, resource, user_id=None, share_type_id=None): """Retrieve a quota usage or raise if it does not exist.""" return IMPL.quota_usage_get( context, project_id, resource, user_id=user_id, share_type_id=share_type_id) def quota_usage_get_all_by_project_and_user(context, project_id, user_id): """Retrieve all usage associated with a given resource.""" return IMPL.quota_usage_get_all_by_project_and_user(context, project_id, user_id) def quota_usage_get_all_by_project_and_share_type(context, project_id, share_type_id): """Retrieve all usage associated with a given resource.""" return IMPL.quota_usage_get_all_by_project_and_share_type( context, project_id, share_type_id) def quota_usage_get_all_by_project(context, project_id): """Retrieve all usage associated with a given resource.""" return IMPL.quota_usage_get_all_by_project(context, project_id) def quota_usage_create(context, project_id, user_id, resource, in_use, reserved=0, until_refresh=None, share_type_id=None): """Create a quota usage.""" return IMPL.quota_usage_create( context, project_id, user_id, resource, in_use, reserved, until_refresh, share_type_id=share_type_id) def quota_usage_update(context, project_id, user_id, resource, share_type_id=None, **kwargs): """Update a quota usage or raise if it 
does not exist.""" return IMPL.quota_usage_update( context, project_id, user_id, resource, share_type_id=share_type_id, **kwargs) ################### def quota_reserve(context, resources, quotas, user_quotas, share_type_quotas, deltas, expire, until_refresh, max_age, project_id=None, user_id=None, share_type_id=None): """Check quotas and create appropriate reservations.""" return IMPL.quota_reserve( context, resources, quotas, user_quotas, share_type_quotas, deltas, expire, until_refresh, max_age, project_id=project_id, user_id=user_id, share_type_id=share_type_id) def reservation_commit(context, reservations, project_id=None, user_id=None, share_type_id=None): """Commit quota reservations.""" return IMPL.reservation_commit( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_type_id) def reservation_rollback(context, reservations, project_id=None, user_id=None, share_type_id=None): """Roll back quota reservations.""" return IMPL.reservation_rollback( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_type_id) def quota_destroy_all_by_project_and_user(context, project_id, user_id): """Destroy all quotas associated with a given project and user.""" return IMPL.quota_destroy_all_by_project_and_user(context, project_id, user_id) def quota_destroy_all_by_share_type(context, share_type_id, project_id=None): """Destroy all quotas associated with a given share type and project.""" return IMPL.quota_destroy_all_by_share_type( context, share_type_id, project_id=project_id) def quota_destroy_all_by_project(context, project_id): """Destroy all quotas associated with a given project.""" return IMPL.quota_destroy_all_by_project(context, project_id) def reservation_expire(context): """Roll back any expired reservations.""" return IMPL.reservation_expire(context) ################### def share_instance_get(context, instance_id, with_share_data=False): """Get share instance by id.""" return IMPL.share_instance_get(context, instance_id, with_share_data=with_share_data) def share_instance_create(context, share_id, values): """Create new share instance.""" return IMPL.share_instance_create(context, share_id, values) def share_instance_delete(context, instance_id, session=None, need_to_update_usages=False): """Delete share instance.""" return IMPL.share_instance_delete( context, instance_id, session=session, need_to_update_usages=need_to_update_usages) def share_instance_update(context, instance_id, values, with_share_data=False): """Update share instance fields.""" return IMPL.share_instance_update(context, instance_id, values, with_share_data=with_share_data) def share_instances_host_update(context, current_host, new_host): """Update the host attr of all share instances that are on current_host.""" return IMPL.share_instances_host_update(context, current_host, new_host) def share_instances_get_all(context, filters=None): """Returns all share instances.""" return IMPL.share_instances_get_all(context, filters=filters) def share_instances_get_all_by_share_server(context, share_server_id): """Returns all share instances with given share_server_id.""" return IMPL.share_instances_get_all_by_share_server(context, share_server_id) def share_instances_get_all_by_host(context, host, with_share_data=False, status=None): """Returns all share instances with given host.""" return IMPL.share_instances_get_all_by_host( context, host, with_share_data=with_share_data, status=status) def share_instances_get_all_by_share_network(context, share_network_id): 
"""Returns list of shares that belong to given share network.""" return IMPL.share_instances_get_all_by_share_network(context, share_network_id) def share_instances_get_all_by_share(context, share_id): """Returns list of shares that belong to given share.""" return IMPL.share_instances_get_all_by_share(context, share_id) def share_instances_get_all_by_share_group_id(context, share_group_id): """Returns list of share instances that belong to given share group.""" return IMPL.share_instances_get_all_by_share_group_id( context, share_group_id) ################### def share_create(context, share_values, create_share_instance=True): """Create new share.""" return IMPL.share_create(context, share_values, create_share_instance=create_share_instance) def share_update(context, share_id, values): """Update share fields.""" return IMPL.share_update(context, share_id, values) def share_get(context, share_id): """Get share by id.""" return IMPL.share_get(context, share_id) def share_get_all(context, filters=None, sort_key=None, sort_dir=None): """Get all shares.""" return IMPL.share_get_all( context, filters=filters, sort_key=sort_key, sort_dir=sort_dir, ) def share_get_all_by_project(context, project_id, filters=None, is_public=False, sort_key=None, sort_dir=None): """Returns all shares with given project ID.""" return IMPL.share_get_all_by_project( context, project_id, filters=filters, is_public=is_public, sort_key=sort_key, sort_dir=sort_dir, ) def share_get_all_by_share_group_id(context, share_group_id, filters=None, sort_key=None, sort_dir=None): """Returns all shares with given project ID and share group id.""" return IMPL.share_get_all_by_share_group_id( context, share_group_id, filters=filters, sort_key=sort_key, sort_dir=sort_dir) def share_get_all_by_share_server(context, share_server_id, filters=None, sort_key=None, sort_dir=None): """Returns all shares with given share server ID.""" return IMPL.share_get_all_by_share_server( context, share_server_id, filters=filters, sort_key=sort_key, sort_dir=sort_dir, ) def share_delete(context, share_id): """Delete share.""" return IMPL.share_delete(context, share_id) ################### def share_access_create(context, values): """Allow access to share.""" return IMPL.share_access_create(context, values) def share_access_get(context, access_id): """Get share access rule.""" return IMPL.share_access_get(context, access_id) def share_access_get_all_for_share(context, share_id, filters=None): """Get all access rules for given share.""" return IMPL.share_access_get_all_for_share(context, share_id, filters=filters) def share_access_get_all_for_instance(context, instance_id, filters=None, with_share_access_data=True): """Get all access rules related to a certain share instance.""" return IMPL.share_access_get_all_for_instance( context, instance_id, filters=filters, with_share_access_data=with_share_access_data) def share_access_get_all_by_type_and_access(context, share_id, access_type, access): """Returns share access by given type and access.""" return IMPL.share_access_get_all_by_type_and_access( context, share_id, access_type, access) def share_access_check_for_existing_access(context, share_id, access_type, access_to): """Returns True if rule corresponding to the type and client exists.""" return IMPL.share_access_check_for_existing_access( context, share_id, access_type, access_to) def share_instance_access_create(context, values, share_instance_id): """Allow access to share instance.""" return IMPL.share_instance_access_create( context, values, 
share_instance_id) def share_instance_access_copy(context, share_id, instance_id): """Maps the existing access rules for the share to the instance in the DB. Adds the instance mapping to the share's access rules and returns the share's access rules. """ return IMPL.share_instance_access_copy(context, share_id, instance_id) def share_instance_access_get(context, access_id, instance_id, with_share_access_data=True): """Get access rule mapping for share instance.""" return IMPL.share_instance_access_get( context, access_id, instance_id, with_share_access_data=with_share_access_data) def share_instance_access_update(context, access_id, instance_id, updates): """Update the access mapping row for a given share instance and access.""" return IMPL.share_instance_access_update( context, access_id, instance_id, updates) def share_instance_access_delete(context, mapping_id): """Deny access to share instance.""" return IMPL.share_instance_access_delete(context, mapping_id) def share_access_metadata_update(context, access_id, metadata): """Update metadata of share access rule.""" return IMPL.share_access_metadata_update(context, access_id, metadata) def share_access_metadata_delete(context, access_id, key): """Delete metadata of share access rule.""" return IMPL.share_access_metadata_delete(context, access_id, key) #################### def share_snapshot_instance_update(context, instance_id, values): """Set the given properties on a share snapshot instance and update it. Raises NotFound if snapshot instance does not exist. """ return IMPL.share_snapshot_instance_update(context, instance_id, values) def share_snapshot_instance_create(context, snapshot_id, values): """Create a share snapshot instance for an existing snapshot.""" return IMPL.share_snapshot_instance_create( context, snapshot_id, values) def share_snapshot_instance_get(context, instance_id, with_share_data=False): """Get a snapshot instance or raise a NotFound exception.""" return IMPL.share_snapshot_instance_get( context, instance_id, with_share_data=with_share_data) def share_snapshot_instance_get_all_with_filters(context, filters, with_share_data=False): """Get all snapshot instances satisfying provided filters.""" return IMPL.share_snapshot_instance_get_all_with_filters( context, filters, with_share_data=with_share_data) def share_snapshot_instance_delete(context, snapshot_instance_id): """Delete a share snapshot instance.""" return IMPL.share_snapshot_instance_delete(context, snapshot_instance_id) #################### def share_snapshot_create(context, values): """Create a snapshot from the values dictionary.""" return IMPL.share_snapshot_create(context, values) def share_snapshot_get(context, snapshot_id): """Get a snapshot or raise if it does not exist.""" return IMPL.share_snapshot_get(context, snapshot_id) def share_snapshot_get_all(context, filters=None, sort_key=None, sort_dir=None): """Get all snapshots.""" return IMPL.share_snapshot_get_all( context, filters=filters, sort_key=sort_key, sort_dir=sort_dir, ) def share_snapshot_get_all_by_project(context, project_id, filters=None, sort_key=None, sort_dir=None): """Get all snapshots belonging to a project.""" return IMPL.share_snapshot_get_all_by_project( context, project_id, filters=filters, sort_key=sort_key, sort_dir=sort_dir, ) def share_snapshot_get_all_for_share(context, share_id, filters=None, sort_key=None, sort_dir=None): """Get all snapshots for a share.""" return IMPL.share_snapshot_get_all_for_share( context, share_id, filters=filters, sort_key=sort_key, 
sort_dir=sort_dir, ) def share_snapshot_get_latest_for_share(context, share_id): """Get the most recent snapshot for a share.""" return IMPL.share_snapshot_get_latest_for_share(context, share_id) def share_snapshot_update(context, snapshot_id, values): """Set the given properties on an snapshot and update it. Raises NotFound if snapshot does not exist. """ return IMPL.share_snapshot_update(context, snapshot_id, values) ################### def share_snapshot_access_create(context, values): """Create a share snapshot access from the values dictionary.""" return IMPL.share_snapshot_access_create(context, values) def share_snapshot_access_get(context, access_id): """Get share snapshot access rule from given access_id.""" return IMPL.share_snapshot_access_get(context, access_id) def share_snapshot_access_get_all_for_snapshot_instance( context, snapshot_instance_id, session=None): """Get all access rules related to a certain snapshot instance.""" return IMPL.share_snapshot_access_get_all_for_snapshot_instance( context, snapshot_instance_id, session) def share_snapshot_access_get_all_for_share_snapshot(context, share_snapshot_id, filters): """Get all access rules for a given share snapshot according to filters.""" return IMPL.share_snapshot_access_get_all_for_share_snapshot( context, share_snapshot_id, filters) def share_snapshot_check_for_existing_access(context, share_snapshot_id, access_type, access_to): """Returns True if rule corresponding to the type and client exists.""" return IMPL.share_snapshot_check_for_existing_access(context, share_snapshot_id, access_type, access_to) def share_snapshot_export_locations_get(context, snapshot_id): """Get all export locations for a given share snapshot.""" return IMPL.share_snapshot_export_locations_get(context, snapshot_id) def share_snapshot_instance_access_update( context, access_id, instance_id, updates): """Update the state of the share snapshot instance access.""" return IMPL.share_snapshot_instance_access_update( context, access_id, instance_id, updates) def share_snapshot_instance_access_get(context, share_snapshot_instance_id, access_id): """Get the share snapshot instance access related to given ids.""" return IMPL.share_snapshot_instance_access_get( context, share_snapshot_instance_id, access_id) def share_snapshot_instance_access_delete(context, access_id, snapshot_instance_id): """Delete share snapshot instance access given its id.""" return IMPL.share_snapshot_instance_access_delete( context, access_id, snapshot_instance_id) def share_snapshot_instance_export_location_create(context, values): """Create a share snapshot instance export location.""" return IMPL.share_snapshot_instance_export_location_create(context, values) def share_snapshot_instance_export_locations_get_all( context, share_snapshot_instance_id): """Get the share snapshot instance export locations for given id.""" return IMPL.share_snapshot_instance_export_locations_get_all( context, share_snapshot_instance_id) def share_snapshot_instance_export_location_get(context, el_id): """Get the share snapshot instance export location for given id.""" return IMPL.share_snapshot_instance_export_location_get( context, el_id) def share_snapshot_instance_export_location_delete(context, el_id): """Delete share snapshot instance export location given its id.""" return IMPL.share_snapshot_instance_export_location_delete(context, el_id) ################### def security_service_create(context, values): """Create security service DB record.""" return IMPL.security_service_create(context, 
values) def security_service_delete(context, id): """Delete security service DB record.""" return IMPL.security_service_delete(context, id) def security_service_update(context, id, values): """Update security service DB record.""" return IMPL.security_service_update(context, id, values) def security_service_get(context, id): """Get security service DB record.""" return IMPL.security_service_get(context, id) def security_service_get_all(context): """Get all security service DB records.""" return IMPL.security_service_get_all(context) def security_service_get_all_by_project(context, project_id): """Get all security service DB records for the given project.""" return IMPL.security_service_get_all_by_project(context, project_id) #################### def share_metadata_get(context, share_id): """Get all metadata for a share.""" return IMPL.share_metadata_get(context, share_id) def share_metadata_delete(context, share_id, key): """Delete the given metadata item.""" IMPL.share_metadata_delete(context, share_id, key) def share_metadata_update(context, share, metadata, delete): """Update metadata if it exists, otherwise create it.""" IMPL.share_metadata_update(context, share, metadata, delete) ################### def share_export_location_get_by_uuid(context, export_location_uuid, ignore_secondary_replicas=False): """Get specific export location of a share.""" return IMPL.share_export_location_get_by_uuid( context, export_location_uuid, ignore_secondary_replicas=ignore_secondary_replicas) def share_export_locations_get(context, share_id): """Get all export locations of a share.""" return IMPL.share_export_locations_get(context, share_id) def share_export_locations_get_by_share_id(context, share_id, include_admin_only=True, ignore_migration_destination=False, ignore_secondary_replicas=False): """Get all export locations of a share by its ID.""" return IMPL.share_export_locations_get_by_share_id( context, share_id, include_admin_only=include_admin_only, ignore_migration_destination=ignore_migration_destination, ignore_secondary_replicas=ignore_secondary_replicas) def share_export_locations_get_by_share_instance_id(context, share_instance_id, include_admin_only=True): """Get all export locations of a share instance by its ID.""" return IMPL.share_export_locations_get_by_share_instance_id( context, share_instance_id, include_admin_only=include_admin_only) def share_export_locations_update(context, share_instance_id, export_locations, delete=True): """Update export locations of a share instance.""" return IMPL.share_export_locations_update( context, share_instance_id, export_locations, delete) #################### def export_location_metadata_get(context, export_location_uuid, session=None): """Get all metadata of an export location.""" return IMPL.export_location_metadata_get( context, export_location_uuid, session=session) def export_location_metadata_delete(context, export_location_uuid, keys, session=None): """Delete metadata of an export location.""" return IMPL.export_location_metadata_delete( context, export_location_uuid, keys, session=session) def export_location_metadata_update(context, export_location_uuid, metadata, delete, session=None): """Update metadata of an export location.""" return IMPL.export_location_metadata_update( context, export_location_uuid, metadata, delete, session=session) #################### def share_network_create(context, values): """Create a share network DB record.""" return IMPL.share_network_create(context, values) def share_network_delete(context, id): """Delete 
a share network DB record.""" return IMPL.share_network_delete(context, id) def share_network_update(context, id, values): """Update a share network DB record.""" return IMPL.share_network_update(context, id, values) def share_network_get(context, id): """Get requested share network DB record.""" return IMPL.share_network_get(context, id) def share_network_get_all(context): """Get all share network DB records.""" return IMPL.share_network_get_all(context) def share_network_get_all_by_project(context, project_id): """Get all share network DB records for the given project.""" return IMPL.share_network_get_all_by_project(context, project_id) def share_network_get_all_by_security_service(context, security_service_id): """Get all share network DB records for the given project.""" return IMPL.share_network_get_all_by_security_service( context, security_service_id) def share_network_add_security_service(context, id, security_service_id): return IMPL.share_network_add_security_service(context, id, security_service_id) def share_network_remove_security_service(context, id, security_service_id): return IMPL.share_network_remove_security_service(context, id, security_service_id) def count_share_networks(context, project_id, user_id=None, share_type_id=None, session=None): return IMPL.count_share_networks( context, project_id, user_id=user_id, share_type_id=share_type_id, session=session, ) ################## def share_network_subnet_create(context, values): """Create a share network subnet DB record.""" return IMPL.share_network_subnet_create(context, values) def share_network_subnet_delete(context, network_subnet_id): """Delete a share network subnet DB record.""" return IMPL.share_network_subnet_delete(context, network_subnet_id) def share_network_subnet_update(context, network_subnet_id, values): """Update a share network subnet DB record.""" return IMPL.share_network_subnet_update(context, network_subnet_id, values) def share_network_subnet_get(context, network_subnet_id, session=None): """Get requested share network subnet DB record.""" return IMPL.share_network_subnet_get(context, network_subnet_id, session=session) def share_network_subnet_get_all(context): """Get all share network subnet DB record.""" return IMPL.share_network_subnet_get_all(context) def share_network_subnet_get_by_availability_zone_id(context, share_network_id, availability_zone_id): """Get a share network subnet DB record. This method returns a subnet DB record for a given share network id and an availability zone. If the 'availability_zone_id' is 'None', a record may be returned and it will represent the default share network subnet. Be aware that if there is no subnet for a specific availability zone id, this method will return the default share network subnet, if it exists. 
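    Illustrative usage, not part of the original docstring (variable names
    are assumptions):

        subnet = share_network_subnet_get_by_availability_zone_id(
            ctxt, share_network_id, availability_zone_id=az_id)
        # 'subnet' is the AZ-specific subnet when one exists; otherwise it is
        # the default subnet of the share network, if that default exists.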
""" return IMPL.share_network_subnet_get_by_availability_zone_id( context, share_network_id, availability_zone_id) def share_network_subnet_get_default_subnet(context, share_network_id): """Get the default share network subnet DB record.""" return IMPL.share_network_subnet_get_default_subnet(context, share_network_id) ################## def network_allocation_create(context, values): """Create a network allocation DB record.""" return IMPL.network_allocation_create(context, values) def network_allocation_delete(context, id): """Delete a network allocation DB record.""" return IMPL.network_allocation_delete(context, id) def network_allocation_update(context, id, values, read_deleted=None): """Update a network allocation DB record.""" return IMPL.network_allocation_update(context, id, values, read_deleted=read_deleted) def network_allocation_get(context, id, session=None, read_deleted=None): """Get a network allocation DB record.""" return IMPL.network_allocation_get(context, id, session, read_deleted=read_deleted) def network_allocations_get_for_share_server(context, share_server_id, session=None, label=None): """Get network allocations for share server.""" return IMPL.network_allocations_get_for_share_server( context, share_server_id, label=label, session=session) def network_allocations_get_by_ip_address(context, ip_address): """Get network allocations by IP address.""" return IMPL.network_allocations_get_by_ip_address(context, ip_address) ################## def share_server_create(context, values): """Create share server DB record.""" return IMPL.share_server_create(context, values) def share_server_delete(context, id): """Delete share server DB record.""" return IMPL.share_server_delete(context, id) def share_server_update(context, id, values): """Update share server DB record.""" return IMPL.share_server_update(context, id, values) def share_server_get(context, id, session=None): """Get share server DB record by ID.""" return IMPL.share_server_get(context, id, session=session) def share_server_search_by_identifier(context, identifier, session=None): """Search for share servers based on given identifier.""" return IMPL.share_server_search_by_identifier( context, identifier, session=session) def share_server_get_all_by_host_and_share_subnet_valid(context, host, share_subnet_id, session=None): """Get share server DB records by host and share net not error.""" return IMPL.share_server_get_all_by_host_and_share_subnet_valid( context, host, share_subnet_id, session=session) def share_server_get_all(context): """Get all share server DB records.""" return IMPL.share_server_get_all(context) def share_server_get_all_by_host(context, host): """Get all share servers related to particular host.""" return IMPL.share_server_get_all_by_host(context, host) def share_server_get_all_unused_deletable(context, host, updated_before): """Get all free share servers DB records.""" return IMPL.share_server_get_all_unused_deletable(context, host, updated_before) def share_server_backend_details_set(context, share_server_id, server_details): """Create DB record with backend details.""" return IMPL.share_server_backend_details_set(context, share_server_id, server_details) ################## def share_type_create(context, values, projects=None): """Create a new share type.""" return IMPL.share_type_create(context, values, projects) def share_type_update(context, share_type_id, values): """Update an exist share type.""" return IMPL.share_type_update(context, share_type_id, values) def share_type_get_all(context, 
inactive=False, filters=None): """Get all share types. :param context: context to query under :param inactive: Include inactive share types to the result set :param filters: Filters for the query in the form of key/value. :is_public: Filter share types based on visibility: * **True**: List public share types only * **False**: List private share types only * **None**: List both public and private share types :returns: list of matching share types """ return IMPL.share_type_get_all(context, inactive, filters) def share_type_get(context, type_id, inactive=False, expected_fields=None): """Get share type by id. :param context: context to query under :param type_id: share type id to get. :param inactive: Consider inactive share types when searching :param expected_fields: Return those additional fields. Supported fields are: projects. :returns: share type """ return IMPL.share_type_get(context, type_id, inactive, expected_fields) def share_type_get_by_name(context, name): """Get share type by name.""" return IMPL.share_type_get_by_name(context, name) def share_type_get_by_name_or_id(context, name_or_id): """Get share type by name or ID and return None if not found.""" return IMPL.share_type_get_by_name_or_id(context, name_or_id) def share_type_access_get_all(context, type_id): """Get all share type access of a share type.""" return IMPL.share_type_access_get_all(context, type_id) def share_type_access_add(context, type_id, project_id): """Add share type access for project.""" return IMPL.share_type_access_add(context, type_id, project_id) def share_type_access_remove(context, type_id, project_id): """Remove share type access for project.""" return IMPL.share_type_access_remove(context, type_id, project_id) def share_type_destroy(context, id): """Delete a share type.""" return IMPL.share_type_destroy(context, id) #################### def share_type_extra_specs_get(context, share_type_id): """Get all extra specs for a share type.""" return IMPL.share_type_extra_specs_get(context, share_type_id) def share_type_extra_specs_delete(context, share_type_id, key): """Delete the given extra specs item.""" return IMPL.share_type_extra_specs_delete(context, share_type_id, key) def share_type_extra_specs_update_or_create(context, share_type_id, extra_specs): """Create or update share type extra specs. This adds or modifies the key/value pairs specified in the extra specs dict argument. 
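    Illustrative example, not part of the original docstring (the spec keys
    shown are common manila capability names, used here only for
    illustration):

        share_type_extra_specs_update_or_create(
            ctxt, share_type_id,
            {'driver_handles_share_servers': 'True',
             'snapshot_support': 'True'})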
""" return IMPL.share_type_extra_specs_update_or_create(context, share_type_id, extra_specs) def driver_private_data_get(context, entity_id, key=None, default=None): """Get one, list or all key-value pairs for given entity_id.""" return IMPL.driver_private_data_get(context, entity_id, key, default) def driver_private_data_update(context, entity_id, details, delete_existing=False): """Update key-value pairs for given entity_id.""" return IMPL.driver_private_data_update(context, entity_id, details, delete_existing) def driver_private_data_delete(context, entity_id, key=None): """Remove one, list or all key-value pairs for given entity_id.""" return IMPL.driver_private_data_delete(context, entity_id, key) #################### def availability_zone_get(context, id_or_name): """Get availability zone by name or id.""" return IMPL.availability_zone_get(context, id_or_name) def availability_zone_get_all(context): """Get all active availability zones.""" return IMPL.availability_zone_get_all(context) #################### def share_group_get(context, share_group_id): """Get a share group or raise if it does not exist.""" return IMPL.share_group_get(context, share_group_id) def share_group_get_all(context, detailed=True, filters=None, sort_key=None, sort_dir=None): """Get all share groups.""" return IMPL.share_group_get_all( context, detailed=detailed, filters=filters, sort_key=sort_key, sort_dir=sort_dir) def share_group_get_all_by_host(context, host, detailed=True, filters=None, sort_key=None, sort_dir=None): """Get all share groups belonging to a host.""" return IMPL.share_group_get_all_by_host( context, host, detailed=detailed, filters=filters, sort_key=sort_key, sort_dir=sort_dir) def share_group_create(context, values): """Create a share group from the values dictionary.""" return IMPL.share_group_create(context, values) def share_group_get_all_by_share_server(context, share_server_id, filters=None, sort_key=None, sort_dir=None): """Get all share groups associated with a share server.""" return IMPL.share_group_get_all_by_share_server( context, share_server_id, filters=filters, sort_key=sort_key, sort_dir=sort_dir) def share_group_get_all_by_project(context, project_id, detailed=True, filters=None, sort_key=None, sort_dir=None): """Get all share groups belonging to a project.""" return IMPL.share_group_get_all_by_project( context, project_id, detailed=detailed, filters=filters, sort_key=sort_key, sort_dir=sort_dir) def share_group_update(context, share_group_id, values): """Set the given properties on a share group and update it. Raises NotFound if share group does not exist. 
""" return IMPL.share_group_update(context, share_group_id, values) def share_group_destroy(context, share_group_id): """Destroy the share group or raise if it does not exist.""" return IMPL.share_group_destroy(context, share_group_id) def count_shares_in_share_group(context, share_group_id): """Returns the number of undeleted shares with the specified group.""" return IMPL.count_shares_in_share_group(context, share_group_id) def get_all_shares_by_share_group(context, share_group_id): return IMPL.get_all_shares_by_share_group(context, share_group_id) def count_share_group_snapshots_in_share_group(context, share_group_id): """Returns the number of sg snapshots with the specified share group.""" return IMPL.count_share_group_snapshots_in_share_group( context, share_group_id) def count_share_groups_in_share_network(context, share_network_id, session=None): """Return the number of groups with the specified share network.""" return IMPL.count_share_groups_in_share_network(context, share_network_id) def count_share_group_snapshot_members_in_share(context, share_id, session=None): """Returns the number of group snapshot members linked to the share.""" return IMPL.count_share_group_snapshot_members_in_share(context, share_id) def share_group_snapshot_get(context, share_group_snapshot_id): """Get a share group snapshot.""" return IMPL.share_group_snapshot_get(context, share_group_snapshot_id) def share_group_snapshot_get_all(context, detailed=True, filters=None, sort_key=None, sort_dir=None): """Get all share group snapshots.""" return IMPL.share_group_snapshot_get_all( context, detailed=detailed, filters=filters, sort_key=sort_key, sort_dir=sort_dir) def share_group_snapshot_get_all_by_project(context, project_id, detailed=True, filters=None, sort_key=None, sort_dir=None): """Get all share group snapshots belonging to a project.""" return IMPL.share_group_snapshot_get_all_by_project( context, project_id, detailed=detailed, filters=filters, sort_key=sort_key, sort_dir=sort_dir) def share_group_snapshot_create(context, values): """Create a share group snapshot from the values dictionary.""" return IMPL.share_group_snapshot_create(context, values) def share_group_snapshot_update(context, share_group_snapshot_id, values): """Set the given properties on a share group snapshot and update it. Raises NotFound if share group snapshot does not exist. """ return IMPL.share_group_snapshot_update( context, share_group_snapshot_id, values) def share_group_snapshot_destroy(context, share_group_snapshot_id): """Destroy the share_group_snapshot or raise if it does not exist.""" return IMPL.share_group_snapshot_destroy(context, share_group_snapshot_id) def share_group_snapshot_members_get_all(context, share_group_snapshot_id): """Return the members of a share group snapshot.""" return IMPL.share_group_snapshot_members_get_all( context, share_group_snapshot_id) def share_group_snapshot_member_create(context, values): """Create a share group snapshot member from the values dictionary.""" return IMPL.share_group_snapshot_member_create(context, values) def share_group_snapshot_member_update(context, member_id, values): """Set the given properties on a share group snapshot member and update it. Raises NotFound if share_group_snapshot member does not exist. 
""" return IMPL.share_group_snapshot_member_update(context, member_id, values) #################### def share_replicas_get_all(context, with_share_server=False, with_share_data=False): """Returns all share replicas regardless of share.""" return IMPL.share_replicas_get_all( context, with_share_server=with_share_server, with_share_data=with_share_data) def share_replicas_get_all_by_share(context, share_id, with_share_server=False, with_share_data=False): """Returns all share replicas for a given share.""" return IMPL.share_replicas_get_all_by_share( context, share_id, with_share_server=with_share_server, with_share_data=with_share_data) def share_replicas_get_available_active_replica(context, share_id, with_share_server=False, with_share_data=False): """Returns an active replica for a given share.""" return IMPL.share_replicas_get_available_active_replica( context, share_id, with_share_server=with_share_server, with_share_data=with_share_data) def share_replica_get(context, replica_id, with_share_server=False, with_share_data=False): """Get share replica by id.""" return IMPL.share_replica_get( context, replica_id, with_share_server=with_share_server, with_share_data=with_share_data) def share_replica_update(context, share_replica_id, values, with_share_data=False): """Updates a share replica with given values.""" return IMPL.share_replica_update(context, share_replica_id, values, with_share_data=with_share_data) def share_replica_delete(context, share_replica_id, need_to_update_usages=True): """Deletes a share replica.""" return IMPL.share_replica_delete( context, share_replica_id, need_to_update_usages=need_to_update_usages) def purge_deleted_records(context, age_in_days): """Purge deleted rows older than given age from all tables :raises: InvalidParameterValue if age_in_days is incorrect. """ return IMPL.purge_deleted_records(context, age_in_days=age_in_days) #################### def share_group_type_create(context, values, projects=None): """Create a new share group type.""" return IMPL.share_group_type_create(context, values, projects) def share_group_type_get_all(context, inactive=False, filters=None): """Get all share group types. :param context: context to query under :param inactive: Include inactive share group types to the result set :param filters: Filters for the query in the form of key/value. :is_public: Filter share group types based on visibility: * **True**: List public group types only * **False**: List private group types only * **None**: List both public and private group types :returns: list of matching share group types """ return IMPL.share_group_type_get_all(context, inactive, filters) def share_group_type_get(context, type_id, inactive=False, expected_fields=None): """Get share_group type by id. :param context: context to query under :param type_id: group type id to get. :param inactive: Consider inactive group types when searching :param expected_fields: Return those additional fields. Supported fields are: projects. 
:returns: share group type """ return IMPL.share_group_type_get( context, type_id, inactive, expected_fields) def share_group_type_get_by_name(context, name): """Get share group type by name.""" return IMPL.share_group_type_get_by_name(context, name) def share_group_type_access_get_all(context, type_id): """Get all share group type access of a share group type.""" return IMPL.share_group_type_access_get_all(context, type_id) def share_group_type_access_add(context, type_id, project_id): """Add share group type access for project.""" return IMPL.share_group_type_access_add(context, type_id, project_id) def share_group_type_access_remove(context, type_id, project_id): """Remove share group type access for project.""" return IMPL.share_group_type_access_remove(context, type_id, project_id) def share_group_type_destroy(context, type_id): """Delete a share group type.""" return IMPL.share_group_type_destroy(context, type_id) def share_group_type_specs_get(context, type_id): """Get all group specs for a share group type.""" return IMPL.share_group_type_specs_get(context, type_id) def share_group_type_specs_delete(context, type_id, key): """Delete the given group specs item.""" return IMPL.share_group_type_specs_delete(context, type_id, key) def share_group_type_specs_update_or_create(context, type_id, group_specs): """Create or update share group type specs. This adds or modifies the key/value pairs specified in the group specs dict argument. """ return IMPL.share_group_type_specs_update_or_create( context, type_id, group_specs) #################### def message_get(context, message_id): """Return a message with the specified ID.""" return IMPL.message_get(context, message_id) def message_get_all(context, filters=None, limit=None, offset=None, sort_key=None, sort_dir=None): """Returns all messages with the project of the specified context.""" return IMPL.message_get_all(context, filters=filters, limit=limit, offset=offset, sort_key=sort_key, sort_dir=sort_dir) def message_create(context, values): """Creates a new message with the specified values.""" return IMPL.message_create(context, values) def message_destroy(context, message_id): """Deletes message with the specified ID.""" return IMPL.message_destroy(context, message_id) def cleanup_expired_messages(context): """Soft delete expired messages""" return IMPL.cleanup_expired_messages(context) def backend_info_get(context, host): """Get hash info for given host.""" return IMPL.backend_info_get(context, host) def backend_info_update(context, host, value=None, delete_existing=False): """Update hash info for host.""" return IMPL.backend_info_update(context, host=host, value=value, delete_existing=delete_existing) manila-10.0.0/manila/db/migrations/0000775000175000017500000000000013656750362017072 5ustar zuulzuul00000000000000manila-10.0.0/manila/db/migrations/utils.py0000664000175000017500000000136713656750227020613 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
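# NOTE(editor): illustrative usage sketch, not part of the original tree.
# The db.api wrappers above are thin pass-throughs that forward every call to
# the configured backend via IMPL.  A hypothetical caller (the helper name and
# the use of get_admin_context() are assumptions) might exercise a few of them
# like this:
def _example_db_api_usage(share_id):
    """Sketch: call some of the wrappers defined in manila/db/api.py."""
    from manila import context
    from manila.db import api as db_api

    ctxt = context.get_admin_context()
    # Only public share group types, per the 'is_public' filter documented in
    # the share_group_type_get_all() docstring.
    public_group_types = db_api.share_group_type_get_all(
        ctxt, filters={'is_public': True})
    # Round-trip a driver-private key/value pair for the given entity.
    db_api.driver_private_data_update(ctxt, share_id, {'example_key': 'x'})
    value = db_api.driver_private_data_get(ctxt, share_id, key='example_key')
    return public_group_types, value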
import sqlalchemy as sa def load_table(name, connection): return sa.Table(name, sa.MetaData(), autoload=True, autoload_with=connection) manila-10.0.0/manila/db/migrations/__init__.py0000664000175000017500000000000013656750227021171 0ustar zuulzuul00000000000000manila-10.0.0/manila/db/migrations/alembic.ini0000664000175000017500000000214713656750227021173 0ustar zuulzuul00000000000000# A generic, single database configuration. [alembic] # path to migration scripts script_location = %(here)s/alembic # template used to generate migration files # file_template = %%(rev)s_%%(slug)s # max length of characters to apply to the # "slug" field #truncate_slug_length = 40 # set to 'true' to run the environment during # the 'revision' command, regardless of autogenerate # revision_environment = false # set to 'true' to allow .pyc and .pyo files without # a source .py file to be detected as revisions in the # versions/ directory # sourceless = false #sqlalchemy.url = driver://user:pass@localhost/dbname # Logging configuration [loggers] keys = root,sqlalchemy,alembic [handlers] keys = console [formatters] keys = generic [logger_root] level = WARN handlers = console qualname = [logger_sqlalchemy] level = WARN handlers = qualname = sqlalchemy.engine [logger_alembic] level = INFO handlers = qualname = alembic [handler_console] class = StreamHandler args = (sys.stderr,) level = NOTSET formatter = generic [formatter_generic] format = %(levelname)-5.5s [%(name)s] %(message)s datefmt = %H:%M:%S manila-10.0.0/manila/db/migrations/alembic/0000775000175000017500000000000013656750362020466 5ustar zuulzuul00000000000000manila-10.0.0/manila/db/migrations/alembic/__init__.py0000664000175000017500000000000013656750227022565 0ustar zuulzuul00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/0000775000175000017500000000000013656750362022336 5ustar zuulzuul00000000000000././@LongLink0000000000000000000000000000016300000000000011215 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/5237b6625330_add_availability_zone_id_field_to_share_groups.pymanila-10.0.0/manila/db/migrations/alembic/versions/5237b6625330_add_availability_zone_id_field_to_s0000664000175000017500000000222513656750227033035 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add 'availability_zone_id' field to 'share_groups' table. Revision ID: 5237b6625330 Revises: 7d142971c4ef Create Date: 2017-03-17 18:49:53.742325 """ # revision identifiers, used by Alembic. 
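# NOTE(editor): illustrative sketch, not part of the original migration.
# load_table() above is the helper that the data migrations in this directory
# lean on: reflect an existing table from the live connection, then issue
# plain SQLAlchemy Core statements against it (see 344c1ac4747f and
# e9f79621d83f below).  A minimal upgrade() following that pattern; the
# column name 'example_column' is hypothetical:
def _example_data_migration_upgrade():
    from alembic import op
    from manila.db.migrations import utils

    connection = op.get_bind()
    shares_table = utils.load_table('shares', connection)
    # Walk the non-deleted rows and rewrite a (hypothetical) column value.
    for share in connection.execute(
            shares_table.select().where(shares_table.c.deleted == 'False')):
        op.execute(
            shares_table.update().where(
                shares_table.c.id == share['id']
            ).values({'example_column': 'example'}))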
revision = '5237b6625330' down_revision = '7d142971c4ef' from alembic import op import sqlalchemy as sa SG_TABLE_NAME = 'share_groups' ATTR_NAME = 'availability_zone_id' def upgrade(): op.add_column( SG_TABLE_NAME, sa.Column( ATTR_NAME, sa.String(36), default=None, nullable=True, ), ) def downgrade(): op.drop_column(SG_TABLE_NAME, ATTR_NAME) manila-10.0.0/manila/db/migrations/alembic/versions/ef0c02b4366_add_share_type_projects.py0000664000175000017500000000472713656750227031332 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add_share_type_projects Revision ID: ef0c02b4366 Revises: 17115072e1c3 Create Date: 2015-02-20 10:49:40.744974 """ # revision identifiers, used by Alembic. revision = 'ef0c02b4366' down_revision = '59eb64046740' from alembic import op from oslo_log import log import sqlalchemy as sql LOG = log.getLogger(__name__) def upgrade(): meta = sql.MetaData() meta.bind = op.get_bind() is_public = sql.Column('is_public', sql.Boolean) try: op.add_column('share_types', is_public) share_types = sql.Table('share_types', meta, is_public.copy()) # pylint: disable=no-value-for-parameter share_types.update().values(is_public=True).execute() except Exception: LOG.error("Column |%s| not created!", repr(is_public)) raise try: op.create_table( 'share_type_projects', sql.Column('id', sql.Integer, primary_key=True, nullable=False), sql.Column('created_at', sql.DateTime), sql.Column('updated_at', sql.DateTime), sql.Column('deleted_at', sql.DateTime), sql.Column('share_type_id', sql.String(36), sql.ForeignKey('share_types.id', name="stp_id_fk")), sql.Column('project_id', sql.String(length=255)), sql.Column('deleted', sql.Integer), sql.UniqueConstraint('share_type_id', 'project_id', 'deleted', name="stp_project_id_uc"), mysql_engine='InnoDB', ) except Exception: LOG.error("Table |%s| not created!", 'share_type_projects') raise def downgrade(): try: op.drop_column('share_types', 'is_public') except Exception: LOG.error("share_types.is_public column not dropped") raise try: op.drop_table('share_type_projects') except Exception: LOG.error("share_type_projects table not dropped") raise ././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/344c1ac4747f_add_share_instance_access_rules_status.pymanila-10.0.0/manila/db/migrations/alembic/versions/344c1ac4747f_add_share_instance_access_rules_sta0000664000175000017500000001027713656750227033306 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Remove access rules status and add access_rule_status to share_instance model Revision ID: 344c1ac4747f Revises: dda6de06349 Create Date: 2015-11-18 14:58:55.806396 """ # revision identifiers, used by Alembic. revision = '344c1ac4747f' down_revision = 'dda6de06349' from alembic import op from sqlalchemy import Column, String from manila.common import constants from manila.db.migrations import utils priorities = { 'active': 0, 'new': 1, 'error': 2 } upgrade_data_mapping = { 'active': 'active', 'new': 'out_of_sync', 'error': 'error', } def upgrade(): """Transform individual access rules states to 'access_rules_status'. WARNING: This method performs lossy converting of existing data in DB. """ op.add_column( 'share_instances', Column('access_rules_status', String(length=255)) ) connection = op.get_bind() share_instances_table = utils.load_table('share_instances', connection) instance_access_table = utils.load_table('share_instance_access_map', connection) # NOTE(u_glide): Data migrations shouldn't be performed on live clouds # because it will lead to unpredictable behaviour of running operations # like migration. instances_query = ( share_instances_table.select() .where(share_instances_table.c.status == constants.STATUS_AVAILABLE) .where(share_instances_table.c.deleted == 'False') ) for instance in connection.execute(instances_query): access_mappings_query = instance_access_table.select().where( instance_access_table.c.share_instance_id == instance['id'] ).where(instance_access_table.c.deleted == 'False') status = constants.STATUS_ACTIVE for access_rule in connection.execute(access_mappings_query): if (access_rule['state'] == constants.STATUS_DELETING or access_rule['state'] not in priorities): continue if priorities[access_rule['state']] > priorities[status]: status = access_rule['state'] # pylint: disable=no-value-for-parameter op.execute( share_instances_table.update().where( share_instances_table.c.id == instance['id'] ).values({'access_rules_status': upgrade_data_mapping[status]}) ) op.drop_column('share_instance_access_map', 'state') def downgrade(): op.add_column( 'share_instance_access_map', Column('state', String(length=255)) ) connection = op.get_bind() share_instances_table = utils.load_table('share_instances', connection) instance_access_table = utils.load_table('share_instance_access_map', connection) instances_query = ( share_instances_table.select() .where(share_instances_table.c.status == constants.STATUS_AVAILABLE) .where(share_instances_table.c.deleted == 'False') ) for instance in connection.execute(instances_query): # NOTE(u_glide): We cannot determine if a rule is applied or not in # Manila, so administrator should manually handle such access rules. if instance['access_rules_status'] == 'active': state = 'active' else: state = 'error' # pylint: disable=no-value-for-parameter op.execute( instance_access_table.update().where( instance_access_table.c.share_instance_id == instance['id'] ).where(instance_access_table.c.deleted == 'False').values( {'state': state} ) ) op.drop_column('share_instances', 'access_rules_status') ././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/4ee2cf4be19a_remove_share_snapshots_export_location.pymanila-10.0.0/manila/db/migrations/alembic/versions/4ee2cf4be19a_remove_share_snapshots_export_locat0000664000175000017500000000206513656750227033666 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Remove share_snapshots.export_location Revision ID: 4ee2cf4be19a Revises: 17115072e1c3 Create Date: 2015-02-26 11:11:55.734663 """ # revision identifiers, used by Alembic. revision = '4ee2cf4be19a' down_revision = '17115072e1c3' from alembic import op import sqlalchemy as sql def upgrade(): op.drop_column('share_snapshots', 'export_location') def downgrade(): op.add_column('share_snapshots', sql.Column('export_location', sql.String(255))) manila-10.0.0/manila/db/migrations/alembic/versions/211836bf835c_add_access_level.py0000664000175000017500000000217213656750227027711 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add access level Revision ID: 211836bf835c Revises: 162a3e673105 Create Date: 2014-12-19 05:34:06.790159 """ # revision identifiers, used by Alembic. revision = '211836bf835c' down_revision = '162a3e673105' from alembic import op import sqlalchemy as sa from manila.common import constants def upgrade(): op.add_column('share_access_map', sa.Column('access_level', sa.String(2), default=constants.ACCESS_LEVEL_RW)) def downgrade(): op.drop_column('share_access_map', 'access_level') manila-10.0.0/manila/db/migrations/alembic/versions/a77e2ad5012d_add_share_snapshot_access.py0000664000175000017500000000723613656750227031760 0ustar zuulzuul00000000000000# Copyright (c) 2016 Hitachi Data Systems. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_share_snapshot_access Revision ID: a77e2ad5012d Revises: e1949a93157a Create Date: 2016-07-15 13:32:19.417771 """ # revision identifiers, used by Alembic. 
revision = 'a77e2ad5012d' down_revision = 'e1949a93157a' from manila.common import constants from manila.db.migrations import utils from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'share_snapshot_access_map', sa.Column('id', sa.String(36), primary_key=True), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.String(36), default='False'), sa.Column('share_snapshot_id', sa.String(36), sa.ForeignKey('share_snapshots.id', name='ssam_snapshot_fk')), sa.Column('access_type', sa.String(255)), sa.Column('access_to', sa.String(255)) ) op.create_table( 'share_snapshot_instance_access_map', sa.Column('id', sa.String(36), primary_key=True), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.String(36), default='False'), sa.Column('share_snapshot_instance_id', sa.String(36), sa.ForeignKey('share_snapshot_instances.id', name='ssiam_snapshot_instance_fk')), sa.Column('access_id', sa.String(36), sa.ForeignKey('share_snapshot_access_map.id', name='ssam_access_fk')), sa.Column('state', sa.String(255), default=constants.ACCESS_STATE_QUEUED_TO_APPLY) ) op.create_table( 'share_snapshot_instance_export_locations', sa.Column('id', sa.String(36), primary_key=True), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.String(36), default='False'), sa.Column('share_snapshot_instance_id', sa.String(36), sa.ForeignKey('share_snapshot_instances.id', name='ssiel_snapshot_instance_fk')), sa.Column('path', sa.String(2000)), sa.Column('is_admin_only', sa.Boolean, default=False, nullable=False) ) op.add_column('shares', sa.Column('mount_snapshot_support', sa.Boolean, default=False)) connection = op.get_bind() shares_table = utils.load_table('shares', connection) # pylint: disable=no-value-for-parameter op.execute( shares_table.update().where( shares_table.c.deleted == 'False').values({ 'mount_snapshot_support': False, }) ) def downgrade(): op.drop_table('share_snapshot_instance_export_locations') op.drop_table('share_snapshot_instance_access_map') op.drop_table('share_snapshot_access_map') op.drop_column('shares', 'mount_snapshot_support') manila-10.0.0/manila/db/migrations/alembic/versions/238720805ce1_add_messages_table.py0000664000175000017500000000375313656750227030161 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add messages table Revision ID: 238720805ce1 Revises: 31252d671ae5 Create Date: 2017-02-02 08:38:55.134095 """ # revision identifiers, used by Alembic. 
revision = '238720805ce1' down_revision = '31252d671ae5' from alembic import op from oslo_log import log from sqlalchemy import Column, DateTime from sqlalchemy import MetaData, String, Table LOG = log.getLogger(__name__) def upgrade(): meta = MetaData() meta.bind = op.get_bind() # New table messages = Table( 'messages', meta, Column('id', String(36), primary_key=True, nullable=False), Column('project_id', String(255), nullable=False), Column('request_id', String(255), nullable=True), Column('resource_type', String(255)), Column('resource_id', String(36), nullable=True), Column('action_id', String(10), nullable=False), Column('detail_id', String(10), nullable=True), Column('message_level', String(255), nullable=False), Column('created_at', DateTime(timezone=False)), Column('updated_at', DateTime(timezone=False)), Column('deleted_at', DateTime(timezone=False)), Column('deleted', String(36)), Column('expires_at', DateTime(timezone=False)), mysql_engine='InnoDB', mysql_charset='utf8' ) messages.create() def downgrade(): try: op.drop_table('messages') except Exception: LOG.error("messages table not dropped") raise manila-10.0.0/manila/db/migrations/alembic/versions/3a482171410f_add_drivers_private_data_table.py0000664000175000017500000000405213656750227032536 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_driver_private_data_table Revision ID: 3a482171410f Revises: 56cdbe267881 Create Date: 2015-04-21 14:47:38.201658 """ # revision identifiers, used by Alembic. 
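# NOTE(editor): illustrative sketch, not part of the original migration.
# The drivers_private_data table that the 3a482171410f migration creates is
# what backs the driver_private_data_get()/update()/delete() wrappers in
# manila/db/api.py: each row holds one (entity_uuid, key, value) triple.  A
# hypothetical direct read of it (debugging only; the helper name is an
# assumption) could look like:
def _example_read_driver_private_data(connection, entity_uuid):
    import sqlalchemy as sa
    table = sa.Table('drivers_private_data', sa.MetaData(),
                     autoload=True, autoload_with=connection)
    rows = connection.execute(
        table.select().where(table.c.entity_uuid == entity_uuid))
    return {row['key']: row['value'] for row in rows}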
revision = '3a482171410f' down_revision = '56cdbe267881' from alembic import op from oslo_log import log import sqlalchemy as sql LOG = log.getLogger(__name__) drivers_private_data_table_name = 'drivers_private_data' def upgrade(): try: op.create_table( drivers_private_data_table_name, sql.Column('created_at', sql.DateTime), sql.Column('updated_at', sql.DateTime), sql.Column('deleted_at', sql.DateTime), sql.Column('deleted', sql.Integer, default=0), sql.Column('host', sql.String(255), nullable=False, primary_key=True), sql.Column('entity_uuid', sql.String(36), nullable=False, primary_key=True), sql.Column('key', sql.String(255), nullable=False, primary_key=True), sql.Column('value', sql.String(1023), nullable=False), mysql_engine='InnoDB', ) except Exception: LOG.error("Table |%s| not created!", drivers_private_data_table_name) raise def downgrade(): try: op.drop_table(drivers_private_data_table_name) except Exception: LOG.error("%s table not dropped", drivers_private_data_table_name) raise ././@LongLink0000000000000000000000000000017600000000000011221 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/d5db24264f5c_add_consistent_snapshot_support_attr_to_share_group_model.pymanila-10.0.0/manila/db/migrations/alembic/versions/d5db24264f5c_add_consistent_snapshot_support_att0000664000175000017500000000324513656750227033547 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add enum 'consistent_snapshot_support' attr to 'share_groups' model. Revision ID: d5db24264f5c Revises: 927920b37453 Create Date: 2017-02-03 15:59:31.134166 """ # revision identifiers, used by Alembic. revision = 'd5db24264f5c' down_revision = '927920b37453' from alembic import op import sqlalchemy as sa SG_TABLE_NAME = 'share_groups' ATTR_NAME = 'consistent_snapshot_support' ENUM_POOL_VALUE = 'pool' ENUM_HOST_VALUE = 'host' def upgrade(): # Workaround for following alembic bug: # https://bitbucket.org/zzzeek/alembic/issue/89 context = op.get_context() if context.bind.dialect.name == 'postgresql': op.execute( "CREATE TYPE %s AS ENUM ('%s', '%s')" % ( ATTR_NAME, ENUM_POOL_VALUE, ENUM_HOST_VALUE)) op.add_column( SG_TABLE_NAME, sa.Column( ATTR_NAME, sa.Enum(ENUM_POOL_VALUE, ENUM_HOST_VALUE, name=ATTR_NAME), nullable=True, ), ) def downgrade(): op.drop_column(SG_TABLE_NAME, ATTR_NAME) context = op.get_context() if context.bind.dialect.name == 'postgresql': op.execute('DROP TYPE %s' % ATTR_NAME) manila-10.0.0/manila/db/migrations/alembic/versions/533646c7af38_remove_unused_attr_status.py0000664000175000017500000000412513656750227031775 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Remove unused attr status Revision ID: 533646c7af38 Revises: 3a482171410f Create Date: 2015-05-28 13:13:47.651353 """ # revision identifiers, used by Alembic. revision = '533646c7af38' down_revision = '3a482171410f' from alembic import op from oslo_log import log import sqlalchemy as sql from manila.common import constants LOG = log.getLogger(__name__) COLUMN_NAME = 'status' TABLE_NAMES = ('network_allocations', 'security_services') def upgrade(): for t_name in TABLE_NAMES: try: op.drop_column(t_name, COLUMN_NAME) except Exception: LOG.error("Column '%s' could not be dropped", COLUMN_NAME) raise def downgrade(): for t_name in TABLE_NAMES: try: op.add_column( t_name, sql.Column( COLUMN_NAME, # NOTE(vponomaryov): original type of attr was enum. But # alembic is buggy with enums [1], so use string type # instead. Anyway we have no reason to keep enum/constraint # on specific set of possible statuses because they have # not been used. # [1] - https://bitbucket.org/zzzeek/alembic/ # issue/89/opadd_column-and-opdrop_column-should sql.String(255), default=constants.STATUS_NEW, ), ) except Exception: LOG.error("Column '%s' could not be added", COLUMN_NAME) raise ././@LongLink0000000000000000000000000000014700000000000011217 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/097fad24d2fc_add_share_instances_share_id_index.pymanila-10.0.0/manila/db/migrations/alembic/versions/097fad24d2fc_add_share_instances_share_id_index.0000664000175000017500000000231713656750227033327 0ustar zuulzuul00000000000000# Copyright 2018 SAP SE # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_share_instances_share_id_index Revision ID: 097fad24d2fc Revises: 0274d20c560f Create Date: 2018-06-12 10:06:50.642418 """ # revision identifiers, used by Alembic. revision = '097fad24d2fc' down_revision = '0274d20c560f' from alembic import op INDEX_NAME = 'share_instances_share_id_idx' TABLE_NAME = 'share_instances' def upgrade(): op.create_index(INDEX_NAME, TABLE_NAME, ['share_id']) def downgrade(): op.drop_constraint('si_share_fk', TABLE_NAME, type_='foreignkey') op.drop_index(INDEX_NAME, TABLE_NAME) op.create_foreign_key( 'si_share_fk', TABLE_NAME, 'shares', ['share_id'], ['id']) manila-10.0.0/manila/db/migrations/alembic/versions/293fac1130ca_add_replication_attrs.py0000664000175000017500000000252013656750227031125 0ustar zuulzuul00000000000000# Copyright 2015 Goutham Pacha Ravi. # All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add replication attributes to Share and ShareInstance models. Revision ID: 293fac1130ca Revises: 344c1ac4747f Create Date: 2015-09-10 15:45:07.273043 """ # revision identifiers, used by Alembic. revision = '293fac1130ca' down_revision = '344c1ac4747f' from alembic import op import sqlalchemy as sa def upgrade(): """Add replication attributes to Shares and ShareInstances.""" op.add_column('shares', sa.Column('replication_type', sa.String(255))) op.add_column('share_instances', sa.Column('replica_state', sa.String(255))) def downgrade(): """Remove replication attributes from Shares and ShareInstances.""" op.drop_column('shares', 'replication_type') op.drop_column('share_instances', 'replica_state') manila-10.0.0/manila/db/migrations/alembic/versions/63809d875e32_add_access_key.py0000664000175000017500000000175013656750227027327 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_access_key Revision ID: 63809d875e32 Revises: 493eaffd79e1 Create Date: 2016-07-16 20:53:05.958896 """ # revision identifiers, used by Alembic. revision = '63809d875e32' down_revision = '493eaffd79e1' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'share_access_map', sa.Column('access_key', sa.String(255), nullable=True)) def downgrade(): op.drop_column('share_access_map', 'access_key') manila-10.0.0/manila/db/migrations/alembic/versions/323840a08dc4_add_shares_task_state.py0000664000175000017500000000176613656750227030770 0ustar zuulzuul00000000000000# Copyright 2015 Hitachi Data Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add shares.task_state Revision ID: 323840a08dc4 Revises: 3651e16d7c43 Create Date: 2015-04-30 07:58:45.175790 """ # revision identifiers, used by Alembic. 
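# NOTE(editor): illustrative sketch, not part of the original migration.
# In the 097fad24d2fc downgrade above, the foreign key si_share_fk is dropped
# and re-created around op.drop_index() because InnoDB typically refuses to
# remove an index that a foreign key constraint is still using.  The same
# drop/re-create sequence as a standalone sketch:
def _example_drop_index_under_fk():
    from alembic import op
    op.drop_constraint('si_share_fk', 'share_instances', type_='foreignkey')
    op.drop_index('share_instances_share_id_idx', 'share_instances')
    op.create_foreign_key(
        'si_share_fk', 'share_instances', 'shares', ['share_id'], ['id'])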
revision = '323840a08dc4' down_revision = '3651e16d7c43' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column('shares', sa.Column('task_state', sa.String(255))) def downgrade(): op.drop_column('shares', 'task_state') manila-10.0.0/manila/db/migrations/alembic/versions/56cdbe267881_add_share_export_locations_table.py0000664000175000017500000000744213656750227033304 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add share_export_locations table Revision ID: 56cdbe267881 Revises: 17115072e1c3 Create Date: 2015-02-27 14:06:30.464315 """ # revision identifiers, used by Alembic. revision = '56cdbe267881' down_revision = '30cb96d995fa' from alembic import op import sqlalchemy as sa from sqlalchemy import func from sqlalchemy.sql import table def upgrade(): export_locations_table = op.create_table( 'share_export_locations', sa.Column('id', sa.Integer, primary_key=True, nullable=False), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.Integer, default=0), sa.Column('path', sa.String(2000)), sa.Column('share_id', sa.String(36), sa.ForeignKey('shares.id', name="sel_id_fk")), mysql_engine='InnoDB', mysql_charset='utf8') shares_table = table( 'shares', sa.Column('created_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.Integer), sa.Column('export_location', sa.String(length=255)), sa.Column('id', sa.String(length=36)), sa.Column('updated_at', sa.DateTime)) export_locations = [] session = sa.orm.Session(bind=op.get_bind().connect()) for share in session.query(shares_table).all(): deleted = share.deleted if isinstance(share.deleted, int) else 0 export_locations.append({ 'created_at': share.created_at, 'updated_at': share.updated_at, 'deleted_at': share.deleted_at, 'deleted': deleted, 'share_id': share.id, 'path': share.export_location, }) op.bulk_insert(export_locations_table, export_locations) op.drop_column('shares', 'export_location') session.close_all() def downgrade(): """Remove share_export_locations table. This method can lead to data loss because only first export_location is saved in shares table. 
""" op.add_column('shares', sa.Column('export_location', sa.String(255))) export_locations_table = table( 'share_export_locations', sa.Column('share_id', sa.String(length=36)), sa.Column('path', sa.String(length=255)), sa.Column('updated_at', sa.DateTime), sa.Column('deleted', sa.Integer)) connection = op.get_bind() session = sa.orm.Session(bind=connection.connect()) export_locations = session.query( func.min(export_locations_table.c.updated_at), export_locations_table.c.share_id, export_locations_table.c.path).filter( export_locations_table.c.deleted == 0).group_by( export_locations_table.c.share_id, export_locations_table.c.path).all() shares = sa.Table('shares', sa.MetaData(), autoload=True, autoload_with=connection) for location in export_locations: # pylint: disable=no-value-for-parameter update = (shares.update().where(shares.c.id == location.share_id). values(export_location=location.path)) connection.execute(update) op.drop_table('share_export_locations') session.close_all() manila-10.0.0/manila/db/migrations/alembic/versions/7d142971c4ef_add_reservation_expire_index.py0000664000175000017500000000176413656750227032460 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_reservation_expire_index Revision ID: 7d142971c4ef Revises: d5db24264f5c Create Date: 2017-03-02 09:19:27.114719 """ # revision identifiers, used by Alembic. revision = '7d142971c4ef' down_revision = 'd5db24264f5c' from alembic import op INDEX_NAME = 'reservations_deleted_expire_idx' TABLE_NAME = 'reservations' def upgrade(): op.create_index(INDEX_NAME, TABLE_NAME, ['deleted', 'expire']) def downgrade(): op.drop_index(INDEX_NAME, TABLE_NAME) manila-10.0.0/manila/db/migrations/alembic/versions/b516de97bfee_add_quota_per_share_type_model.py0000664000175000017500000000406213656750227033174 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add ProjectShareTypeQuota model Revision ID: b516de97bfee Revises: 238720805ce1 Create Date: 2017-03-27 15:11:11.449617 """ # revision identifiers, used by Alembic. 
revision = 'b516de97bfee' down_revision = '238720805ce1' from alembic import op import sqlalchemy as sql NEW_TABLE_NAME = 'project_share_type_quotas' def upgrade(): op.create_table( NEW_TABLE_NAME, sql.Column('id', sql.Integer, primary_key=True, nullable=False), sql.Column('project_id', sql.String(length=255)), sql.Column('resource', sql.String(length=255), nullable=False), sql.Column('hard_limit', sql.Integer, nullable=True), sql.Column('created_at', sql.DateTime), sql.Column('updated_at', sql.DateTime), sql.Column('deleted_at', sql.DateTime), sql.Column('deleted', sql.Integer, default=0), sql.Column( 'share_type_id', sql.String(36), sql.ForeignKey( 'share_types.id', name='share_type_id_fk', ), nullable=False), sql.UniqueConstraint( 'share_type_id', 'resource', 'deleted', name="uc_quotas_per_share_types"), mysql_engine='InnoDB', ) for table_name in ('quota_usages', 'reservations'): op.add_column( table_name, sql.Column('share_type_id', sql.String(36), nullable=True), ) def downgrade(): op.drop_table(NEW_TABLE_NAME) for table_name in ('quota_usages', 'reservations'): op.drop_column(table_name, 'share_type_id') manila-10.0.0/manila/db/migrations/alembic/versions/30cb96d995fa_add_is_public_column_for_share.py0000664000175000017500000000247013656750227033011 0ustar zuulzuul00000000000000# Copyright 2015 mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add public column for share Revision ID: 30cb96d995fa Revises: ef0c02b4366 Create Date: 2015-01-16 03:07:15.548947 """ # revision identifiers, used by Alembic. revision = '30cb96d995fa' down_revision = 'ef0c02b4366' from alembic import op from oslo_log import log import sqlalchemy as sa LOG = log.getLogger(__name__) def upgrade(): try: op.add_column('shares', sa.Column('is_public', sa.Boolean, default=False)) except Exception: LOG.error("Column shares.is_public not created!") raise def downgrade(): try: op.drop_column('shares', 'is_public') except Exception: LOG.error("Column shares.is_public not dropped!") raise ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/e8ea58723178_remove_host_from_driver_private_data.pymanila-10.0.0/manila/db/migrations/alembic/versions/e8ea58723178_remove_host_from_driver_private_dat0000664000175000017500000000730113656750227033347 0ustar zuulzuul00000000000000# Copyright (c) 2016 EMC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Remove host from driver private data Revision ID: e8ea58723178 Revises: fdfb668d19e1 Create Date: 2016-07-11 12:59:34.579291 """ # revision identifiers, used by Alembic. revision = 'e8ea58723178' down_revision = 'fdfb668d19e1' from alembic import op from oslo_log import log from oslo_utils import uuidutils import sqlalchemy as sql from manila.db.migrations import utils LOG = log.getLogger(__name__) TABLE_NAME = 'drivers_private_data' COLUMN_HOST = 'host' DEFAULT_HOST = 'unknown' COLUMN_ENTITY = 'entity_uuid' COLUMN_KEY = 'key' MYSQL_ENGINE = 'mysql' def upgrade(): bind = op.get_bind() engine = bind.engine try: if (engine.name == MYSQL_ENGINE): op.drop_constraint('PRIMARY', TABLE_NAME, type_='primary') op.create_primary_key('DRIVERS_PRIVATE_PK', TABLE_NAME, ['entity_uuid', 'key']) op.drop_column(TABLE_NAME, COLUMN_HOST) except Exception: LOG.error("Column '%s' could not be dropped", COLUMN_HOST) raise def downgrade(): connection = op.get_bind() from_table = utils.load_table(TABLE_NAME, connection) migration_table_name = "_migrating_%(table)s_%(session)s" % { 'table': TABLE_NAME, 'session': uuidutils.generate_uuid()[:8] } LOG.info("Creating the migration table %(table)s", { 'table': migration_table_name }) migration_table = op.create_table( migration_table_name, sql.Column('created_at', sql.DateTime), sql.Column('updated_at', sql.DateTime), sql.Column('deleted_at', sql.DateTime), sql.Column('deleted', sql.Integer, default=0), sql.Column('host', sql.String(255), nullable=False, primary_key=True), sql.Column('entity_uuid', sql.String(36), nullable=False, primary_key=True), sql.Column('key', sql.String(255), nullable=False, primary_key=True), sql.Column('value', sql.String(1023), nullable=False), mysql_engine='InnoDB', ) LOG.info("Copying data from %(from_table)s to the migration " "table %(migration_table)s", { 'from_table': TABLE_NAME, 'migration_table': migration_table_name }) rows = [] for row in op.get_bind().execute(from_table.select()): rows.append({ 'created_at': row.created_at, 'updated_at': row.updated_at, 'deleted_at': row.deleted_at, 'deleted': row.deleted, 'host': DEFAULT_HOST, 'entity_uuid': row.entity_uuid, 'key': row.key, 'value': row.value }) op.bulk_insert(migration_table, rows) LOG.info("Dropping table %(from_table)s", { 'from_table': TABLE_NAME }) op.drop_table(TABLE_NAME) LOG.info("Rename the migration table %(migration_table)s to " "the original table %(from_table)s", { 'migration_table': migration_table_name, 'from_table': TABLE_NAME }) op.rename_table(migration_table_name, TABLE_NAME) manila-10.0.0/manila/db/migrations/alembic/versions/dda6de06349_add_export_locations_metadata.py0000664000175000017500000000757513656750227032607 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add DB support for share instance export locations metadata. Revision ID: dda6de06349 Revises: 323840a08dc4 Create Date: 2015-11-30 13:50:15.914232 """ # revision identifiers, used by Alembic. 
revision = 'dda6de06349' down_revision = '323840a08dc4' from alembic import op from oslo_log import log from oslo_utils import uuidutils import sqlalchemy as sa SI_TABLE_NAME = 'share_instances' EL_TABLE_NAME = 'share_instance_export_locations' ELM_TABLE_NAME = 'share_instance_export_locations_metadata' LOG = log.getLogger(__name__) def upgrade(): try: meta = sa.MetaData() meta.bind = op.get_bind() # Add new 'is_admin_only' column in export locations table that will be # used for hiding admin export locations from common users in API. op.add_column( EL_TABLE_NAME, sa.Column('is_admin_only', sa.Boolean, default=False)) # Create new 'uuid' column as String(36) in export locations table # that will be used for API. op.add_column( EL_TABLE_NAME, sa.Column('uuid', sa.String(36), unique=True), ) # Generate UUID for each existing export location. el_table = sa.Table( EL_TABLE_NAME, meta, sa.Column('id', sa.Integer), sa.Column('uuid', sa.String(36)), sa.Column('is_admin_only', sa.Boolean), ) for record in el_table.select().execute(): # pylint: disable=no-value-for-parameter el_table.update().values( is_admin_only=False, uuid=uuidutils.generate_uuid(), ).where( el_table.c.id == record.id, ).execute() # Make new 'uuid' column in export locations table not nullable. op.alter_column( EL_TABLE_NAME, 'uuid', existing_type=sa.String(length=36), nullable=False, ) except Exception: LOG.error("Failed to update '%s' table!", EL_TABLE_NAME) raise try: op.create_table( ELM_TABLE_NAME, sa.Column('id', sa.Integer, primary_key=True), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.Integer), sa.Column('export_location_id', sa.Integer, sa.ForeignKey('%s.id' % EL_TABLE_NAME, name="elm_id_fk"), nullable=False), sa.Column('key', sa.String(length=255), nullable=False), sa.Column('value', sa.String(length=1023), nullable=False), sa.UniqueConstraint('export_location_id', 'key', 'deleted', name="elm_el_id_uc"), mysql_engine='InnoDB', ) except Exception: LOG.error("Failed to create '%s' table!", ELM_TABLE_NAME) raise def downgrade(): try: op.drop_table(ELM_TABLE_NAME) except Exception: LOG.error("Failed to drop '%s' table!", ELM_TABLE_NAME) raise try: op.drop_column(EL_TABLE_NAME, 'is_admin_only') op.drop_column(EL_TABLE_NAME, 'uuid') except Exception: LOG.error("Failed to update '%s' table!", EL_TABLE_NAME) raise ././@LongLink0000000000000000000000000000016200000000000011214 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/e9f79621d83f_add_cast_rules_to_readonly_to_share_instances.pymanila-10.0.0/manila/db/migrations/alembic/versions/e9f79621d83f_add_cast_rules_to_readonly_to_share0000664000175000017500000000710713656750227033366 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_cast_rules_to_readonly_to_share_instances Revision ID: e9f79621d83f Revises: 54667b9cade7 Create Date: 2016-12-01 04:06:33.115054 """ # revision identifiers, used by Alembic. 
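# NOTE(editor): illustrative sketch, not part of the original migration.
# dda6de06349 above shows the usual way to introduce a NOT NULL column on a
# populated table: add it as nullable, backfill every existing row (here with
# a generated UUID), then tighten it with op.alter_column(nullable=False).
# Condensed (the helper name is an assumption):
def _example_add_not_null_column(connection, el_table):
    from alembic import op
    from oslo_utils import uuidutils
    import sqlalchemy as sa
    op.add_column('share_instance_export_locations',
                  sa.Column('uuid', sa.String(36), unique=True))
    for record in connection.execute(el_table.select()):
        connection.execute(
            el_table.update().where(el_table.c.id == record['id']).values(
                uuid=uuidutils.generate_uuid()))
    op.alter_column('share_instance_export_locations', 'uuid',
                    existing_type=sa.String(length=36), nullable=False)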
revision = 'e9f79621d83f' down_revision = '54667b9cade7' from alembic import op from oslo_log import log import sqlalchemy as sa from manila.common import constants from manila.db.migrations import utils LOG = log.getLogger(__name__) def upgrade(): LOG.info("Adding cast_rules_to_readonly column to share instances.") op.add_column('share_instances', sa.Column('cast_rules_to_readonly', sa.Boolean, default=False)) connection = op.get_bind() shares_table = utils.load_table('shares', connection) share_instances_table = utils.load_table('share_instances', connection) # First, set the value of ``cast_rules_to_readonly`` in every existing # share instance to False # pylint: disable=no-value-for-parameter op.execute( share_instances_table.update().values({ 'cast_rules_to_readonly': False, }) ) # Set the value of ``cast_rules_to_readonly`` to True for secondary # replicas in 'readable' replication relationships replicated_shares_query = ( shares_table.select() .where(shares_table.c.deleted == 'False') .where(shares_table.c.replication_type == constants.REPLICATION_TYPE_READABLE) ) for replicated_share in connection.execute(replicated_shares_query): # NOTE (gouthamr): Only secondary replicas that are not undergoing a # 'replication_change' (promotion to active) are considered. When the # replication change is complete, the share manager will take care # of ensuring the correct values for the replicas that were involved # in the transaction. secondary_replicas_query = ( share_instances_table.select().where( share_instances_table.c.deleted == 'False').where( share_instances_table.c.replica_state != constants.REPLICA_STATE_ACTIVE).where( share_instances_table.c.status != constants.STATUS_REPLICATION_CHANGE).where( replicated_share['id'] == share_instances_table.c.share_id ) ) for replica in connection.execute(secondary_replicas_query): # pylint: disable=no-value-for-parameter op.execute( share_instances_table.update().where( share_instances_table.c.id == replica.id ).values({ 'cast_rules_to_readonly': True, }) ) op.alter_column('share_instances', 'cast_rules_to_readonly', existing_type=sa.Boolean, existing_server_default=False, nullable=False) def downgrade(): LOG.info("Removing cast_rules_to_readonly column from share " "instances.") op.drop_column('share_instances', 'cast_rules_to_readonly') manila-10.0.0/manila/db/migrations/alembic/versions/38e632621e5a_change_volume_type_to_share_type.py0000664000175000017500000001237413656750227033257 0ustar zuulzuul00000000000000# Copyright 2015 Bob Callaway. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """change volume_type to share_type Revision ID: 38e632621e5a Revises: 162a3e673105 Create Date: 2014-10-02 09:14:03.172324 """ # revision identifiers, used by Alembic. 
revision = '38e632621e5a' down_revision = '211836bf835c' from alembic import op from oslo_log import log from oslo_utils import strutils import sqlalchemy as sa from sqlalchemy.sql import table LOG = log.getLogger(__name__) def upgrade(): LOG.info("Renaming column name shares.volume_type_id to " "shares.share_type.id") op.alter_column("shares", "volume_type_id", new_column_name="share_type_id", type_=sa.String(length=36)) LOG.info("Renaming volume_types table to share_types") op.rename_table("volume_types", "share_types") op.drop_constraint('vt_name_uc', 'share_types', type_='unique') op.create_unique_constraint('st_name_uc', 'share_types', ['name', 'deleted']) LOG.info("Creating share_type_extra_specs table") st_es = op.create_table( 'share_type_extra_specs', sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.Integer), sa.Column('id', sa.Integer, primary_key=True, nullable=False), sa.Column('share_type_id', sa.String(length=36), sa.ForeignKey('share_types.id', name="st_id_fk"), nullable=False), sa.Column('spec_key', sa.String(length=255)), sa.Column('spec_value', sa.String(length=255)), mysql_engine='InnoDB') LOG.info("Migrating volume_type_extra_specs to " "share_type_extra_specs") _copy_records(destination_table=st_es, up_migration=True) LOG.info("Dropping volume_type_extra_specs table") op.drop_table("volume_type_extra_specs") def downgrade(): LOG.info("Creating volume_type_extra_specs table") vt_es = op.create_table( 'volume_type_extra_specs', sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.Boolean), sa.Column('id', sa.Integer, primary_key=True, nullable=False), sa.Column('volume_type_id', sa.String(length=36), sa.ForeignKey('share_types.id'), nullable=False), sa.Column('key', sa.String(length=255)), sa.Column('value', sa.String(length=255)), mysql_engine='InnoDB') LOG.info("Migrating share_type_extra_specs to " "volume_type_extra_specs") _copy_records(destination_table=vt_es, up_migration=False) LOG.info("Dropping share_type_extra_specs table") op.drop_table("share_type_extra_specs") LOG.info("Renaming share_types table to volume_types") op.drop_constraint('st_name_uc', 'share_types', type_='unique') op.create_unique_constraint('vt_name_uc', 'share_types', ['name', 'deleted']) op.rename_table("share_types", "volume_types") LOG.info("Renaming column name shares.share_type_id to " "shares.volume_type.id") op.alter_column("shares", "share_type_id", new_column_name="volume_type_id", type_=sa.String(length=36)) def _copy_records(destination_table, up_migration=True): old = ('volume', '') new = ('share', 'spec_') data_from, data_to = (old, new) if up_migration else (new, old) from_table = table( data_from[0] + '_type_extra_specs', sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.Boolean if up_migration else sa.Integer), sa.Column('id', sa.Integer, primary_key=True, nullable=False), sa.Column(data_from[0] + '_type_id', sa.String(length=36)), sa.Column(data_from[1] + 'key', sa.String(length=255)), sa.Column(data_from[1] + 'value', sa.String(length=255))) extra_specs = [] for es in op.get_bind().execute(from_table.select()): if up_migration: deleted = strutils.int_from_bool_as_string(es.deleted) else: deleted = strutils.bool_from_string(es.deleted, default=True) extra_specs.append({ 'created_at': es.created_at, 'updated_at': 
es.updated_at, 'deleted_at': es.deleted_at, 'deleted': deleted, data_to[0] + '_type_id': getattr(es, data_from[0] + '_type_id'), data_to[1] + 'key': getattr(es, data_from[1] + 'key'), data_to[1] + 'value': getattr(es, data_from[1] + 'value'), }) op.bulk_insert(destination_table, extra_specs) ././@LongLink0000000000000000000000000000015200000000000011213 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/95e3cf760840_remove_nova_net_id_column_from_share_.pymanila-10.0.0/manila/db/migrations/alembic/versions/95e3cf760840_remove_nova_net_id_column_from_shar0000664000175000017500000000200413656750227033303 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """remove_nova_net_id_column_from_share_networks Revision ID: 95e3cf760840 Revises: 3e7d62517afa Create Date: 2016-12-13 16:11:05.191717 """ # revision identifiers, used by Alembic. revision = '95e3cf760840' down_revision = '3e7d62517afa' from alembic import op import sqlalchemy as sa def upgrade(): op.drop_column('share_networks', 'nova_net_id') def downgrade(): op.add_column( 'share_networks', sa.Column('nova_net_id', sa.String(36), nullable=True)) manila-10.0.0/manila/db/migrations/alembic/versions/59eb64046740_add_required_extra_spec.py0000664000175000017500000000514413656750227031244 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add required extra spec Revision ID: 59eb64046740 Revises: 162a3e673105 Create Date: 2015-01-29 15:33:25.348140 """ # revision identifiers, used by Alembic. revision = '59eb64046740' down_revision = '4ee2cf4be19a' from alembic import op from oslo_utils import timeutils import sqlalchemy as sa from sqlalchemy.sql import table def upgrade(): session = sa.orm.Session(bind=op.get_bind().connect()) es_table = table( 'share_type_extra_specs', sa.Column('created_at', sa.DateTime), sa.Column('deleted', sa.Integer), sa.Column('share_type_id', sa.String(length=36)), sa.Column('spec_key', sa.String(length=255)), sa.Column('spec_value', sa.String(length=255))) st_table = table( 'share_types', sa.Column('deleted', sa.Integer), sa.Column('id', sa.Integer)) # NOTE(vponomaryov): field 'deleted' is integer here. existing_required_extra_specs = (session.query(es_table). filter(es_table.c.spec_key == 'driver_handles_share_servers'). filter(es_table.c.deleted == 0). all()) exclude_st_ids = [es.share_type_id for es in existing_required_extra_specs] # NOTE(vponomaryov): field 'deleted' is string here. share_types = (session.query(st_table). 
filter(st_table.c.deleted.in_(('0', 'False', ))). filter(st_table.c.id.notin_(exclude_st_ids)). all()) extra_specs = [] for st in share_types: extra_specs.append({ 'spec_key': 'driver_handles_share_servers', 'spec_value': 'True', 'deleted': 0, 'created_at': timeutils.utcnow(), 'share_type_id': st.id, }) op.bulk_insert(es_table, extra_specs) session.close_all() def downgrade(): """Downgrade method. We can't determine, which extra specs should be removed after insertion, that's why do nothing here. """ ././@LongLink0000000000000000000000000000020100000000000011206 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/927920b37453_add_provider_location_for_share_group_snapshot_members_model.pymanila-10.0.0/manila/db/migrations/alembic/versions/927920b37453_add_provider_location_for_share_gro0000664000175000017500000000222313656750227033115 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add 'provider_location' attr to 'share_group_snapshot_members' model. Revision ID: 927920b37453 Revises: a77e2ad5012d Create Date: 2017-01-31 20:10:44.937763 """ # revision identifiers, used by Alembic. revision = '927920b37453' down_revision = 'a77e2ad5012d' from alembic import op import sqlalchemy as sa SGSM_TABLE_NAME = 'share_group_snapshot_members' PROVIDER_LOCATION_NAME = 'provider_location' def upgrade(): op.add_column( SGSM_TABLE_NAME, sa.Column(PROVIDER_LOCATION_NAME, sa.String(255), nullable=True), ) def downgrade(): op.drop_column(SGSM_TABLE_NAME, PROVIDER_LOCATION_NAME) ././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/54667b9cade7_restore_share_instance_access_map_state.pymanila-10.0.0/manila/db/migrations/alembic/versions/54667b9cade7_restore_share_instance_access_map_s0000664000175000017500000000713313656750227033430 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_share_instance_access_map_state Revision ID: 54667b9cade7 Revises: 87ce15c59bbe Create Date: 2016-09-02 10:18:07.290461 """ # revision identifiers, used by Alembic. 
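# NOTE: The upgrade below adds a per-rule 'state' column to
# share_instance_access_map and back-fills it for every non-deleted,
# available share instance from the instance's legacy access_rules_status,
# using the two mapping dictionaries defined after the imports; the
# instance's own access_rules_status is rewritten at the same time via
# access_rules_status_upgrade_mapping.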
revision = '54667b9cade7' down_revision = '87ce15c59bbe' from alembic import op from sqlalchemy import Column, String from manila.common import constants from manila.db.migrations import utils # Mapping for new value to be assigned as ShareInstanceAccessMapping's state access_rules_status_to_state_mapping = { constants.STATUS_ACTIVE: constants.ACCESS_STATE_ACTIVE, constants.STATUS_OUT_OF_SYNC: constants.ACCESS_STATE_QUEUED_TO_APPLY, 'updating': constants.ACCESS_STATE_QUEUED_TO_APPLY, 'updating_multiple': constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.STATUS_ERROR: constants.ACCESS_STATE_ERROR, } # Mapping for changes to Share Instance's access_rules_status access_rules_status_upgrade_mapping = { constants.STATUS_ACTIVE: constants.STATUS_ACTIVE, constants.STATUS_OUT_OF_SYNC: constants.SHARE_INSTANCE_RULES_SYNCING, 'updating': constants.SHARE_INSTANCE_RULES_SYNCING, 'updating_multiple': constants.SHARE_INSTANCE_RULES_SYNCING, constants.STATUS_ERROR: constants.STATUS_ERROR, } def upgrade(): op.add_column('share_instance_access_map', Column('state', String(length=255), default=constants.ACCESS_STATE_QUEUED_TO_APPLY)) connection = op.get_bind() share_instances_table = utils.load_table('share_instances', connection) instance_access_map_table = utils.load_table('share_instance_access_map', connection) instances_query = ( share_instances_table.select().where( share_instances_table.c.status == constants.STATUS_AVAILABLE).where( share_instances_table.c.deleted == 'False') ) for instance in connection.execute(instances_query): access_rule_status = instance['access_rules_status'] # pylint: disable=no-value-for-parameter op.execute( instance_access_map_table.update().where( instance_access_map_table.c.share_instance_id == instance['id'] ).values({ 'state': access_rules_status_to_state_mapping[ access_rule_status], }) ) op.execute( share_instances_table.update().where( share_instances_table.c.id == instance['id'] ).values({ 'access_rules_status': access_rules_status_upgrade_mapping[ access_rule_status], }) ) def downgrade(): op.drop_column('share_instance_access_map', 'state') connection = op.get_bind() share_instances_table = utils.load_table('share_instances', connection) # pylint: disable=no-value-for-parameter op.execute( share_instances_table.update().where( share_instances_table.c.access_rules_status == constants.SHARE_INSTANCE_RULES_SYNCING).values({ 'access_rules_status': constants.STATUS_OUT_OF_SYNC}) ) ././@LongLink0000000000000000000000000000015500000000000011216 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/17115072e1c3_add_nova_net_id_column_to_share_networks.pymanila-10.0.0/manila/db/migrations/alembic/versions/17115072e1c3_add_nova_net_id_column_to_share_net0000664000175000017500000000206713656750227033135 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""add_nova_net_id_column_to_share_networks Revision ID: 17115072e1c3 Revises: 38e632621e5a Create Date: 2015-02-05 18:07:19.062995 """ # revision identifiers, used by Alembic. revision = '17115072e1c3' down_revision = '38e632621e5a' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'share_networks', sa.Column('nova_net_id', sa.String(36), nullable=True)) def downgrade(): op.drop_column('share_networks', 'nova_net_id') manila-10.0.0/manila/db/migrations/alembic/versions/3651e16d7c43_add_consistency_groups.py0000664000175000017500000001727013656750227031226 0ustar zuulzuul00000000000000# Copyright (c) 2015 Alex Meade # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Create Consistency Groups Tables and Columns Revision ID: 3651e16d7c43 Revises: 55761e5f59c5 Create Date: 2015-07-29 13:17:15.940454 """ # revision identifiers, used by Alembic. revision = '3651e16d7c43' down_revision = '55761e5f59c5' SHARE_NETWORK_FK_CONSTRAINT_NAME = "fk_cg_share_network_id" SHARE_SERVER_FK_CONSTRAINT_NAME = "fk_cg_share_server_id" SHARES_CG_FK_CONSTRAINT_NAME = "fk_shares_consistency_group_id" CG_MAP_FK_CONSTRAINT_NAME = "fk_cgstm_cg_id" SHARE_TYPE_FK_CONSTRAINT_NAME = "fk_cgstm_share_type_id" CGSNAP_CG_ID_FK_CONSTRAINT_NAME = "fk_cgsnapshots_consistency_group_id" CGSNAP_MEM_SHARETYPE_FK_CONSTRAINT_NAME = "fk_cgsnapshot_members_share_type_id" CGSNAP_MEM_SNAP_ID_FK_CONSTRAINT_NAME = "fk_cgsnapshot_members_cgsnapshot_id" CGSNAP_MEM_SHARE_FK_CONSTRAINT_NAME = "fk_cgsnapshot_members_share_id" CGSNAP_MEM_INST_FK_CONSTRAINT_NAME = "fk_cgsnapshot_members_share_instance_id" from alembic import op from oslo_log import log import sqlalchemy as sa LOG = log.getLogger(__name__) def upgrade(): # New table - consistency_groups op.create_table( 'consistency_groups', sa.Column('id', sa.String(36), primary_key=True, nullable=False), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.String(36), default='False'), sa.Column('user_id', sa.String(length=255), nullable=False), sa.Column('project_id', sa.String(length=255), nullable=False), sa.Column('host', sa.String(length=255)), sa.Column('name', sa.String(length=255)), sa.Column('description', sa.String(length=255)), sa.Column('status', sa.String(length=255)), sa.Column('source_cgsnapshot_id', sa.String(length=36)), sa.Column('share_network_id', sa.String(length=36), sa.ForeignKey('share_networks.id', name=SHARE_NETWORK_FK_CONSTRAINT_NAME), nullable=True), sa.Column('share_server_id', sa.String(length=36), sa.ForeignKey('share_servers.id', name=SHARE_SERVER_FK_CONSTRAINT_NAME), nullable=True), mysql_engine='InnoDB', mysql_charset='utf8') op.add_column( 'shares', sa.Column('consistency_group_id', sa.String(36), sa.ForeignKey('consistency_groups.id', name=SHARES_CG_FK_CONSTRAINT_NAME))) op.add_column('shares', sa.Column('source_cgsnapshot_member_id', sa.String(36))) op.create_table( 'consistency_group_share_type_mappings', sa.Column('id', sa.String(36), primary_key=True, 
nullable=False), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.String(36), default='False'), sa.Column('consistency_group_id', sa.String(length=36), sa.ForeignKey('consistency_groups.id', name=CG_MAP_FK_CONSTRAINT_NAME), nullable=False), sa.Column('share_type_id', sa.String(length=36), sa.ForeignKey('share_types.id', name=SHARE_TYPE_FK_CONSTRAINT_NAME), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8') op.create_table( 'cgsnapshots', sa.Column('id', sa.String(36), primary_key=True, nullable=False), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.String(36), default='False'), sa.Column('user_id', sa.String(length=255), nullable=False), sa.Column('project_id', sa.String(length=255), nullable=False), sa.Column('consistency_group_id', sa.String(length=36), sa.ForeignKey('consistency_groups.id', name=CGSNAP_CG_ID_FK_CONSTRAINT_NAME), nullable=False), sa.Column('name', sa.String(length=255)), sa.Column('description', sa.String(length=255)), sa.Column('status', sa.String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8') op.create_table( 'cgsnapshot_members', sa.Column('id', sa.String(36), primary_key=True, nullable=False), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.String(36), default='False'), sa.Column('user_id', sa.String(length=255), nullable=False), sa.Column('project_id', sa.String(length=255), nullable=False), sa.Column('cgsnapshot_id', sa.String(length=36), sa.ForeignKey('cgsnapshots.id', name=CGSNAP_MEM_SNAP_ID_FK_CONSTRAINT_NAME), nullable=False), sa.Column('share_instance_id', sa.String(length=36), sa.ForeignKey('share_instances.id', name=CGSNAP_MEM_INST_FK_CONSTRAINT_NAME), nullable=False), sa.Column('share_id', sa.String(length=36), sa.ForeignKey('shares.id', name=CGSNAP_MEM_SHARE_FK_CONSTRAINT_NAME), nullable=False), sa.Column('share_type_id', sa.String(length=36), sa.ForeignKey('share_types.id', name=CGSNAP_MEM_SHARETYPE_FK_CONSTRAINT_NAME), nullable=False), sa.Column('size', sa.Integer), sa.Column('status', sa.String(length=255)), sa.Column('share_proto', sa.String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8') def downgrade(): try: op.drop_table('cgsnapshot_members') except Exception: LOG.exception("Error Dropping 'cgsnapshot_members' table.") try: op.drop_table('cgsnapshots') except Exception: LOG.exception("Error Dropping 'cgsnapshots' table.") try: op.drop_table('consistency_group_share_type_mappings') except Exception: LOG.exception("Error Dropping " "'consistency_group_share_type_mappings' table.") try: op.drop_column('shares', 'source_cgsnapshot_member_id') except Exception: LOG.exception("Error Dropping 'source_cgsnapshot_member_id' " "column from 'shares' table.") try: op.drop_constraint(SHARES_CG_FK_CONSTRAINT_NAME, 'shares', type_='foreignkey') except Exception: LOG.exception("Error Dropping '%s' constraint.", SHARES_CG_FK_CONSTRAINT_NAME) try: op.drop_column('shares', 'consistency_group_id') except Exception: LOG.exception("Error Dropping 'consistency_group_id' column " "from 'shares' table.") try: op.drop_table('consistency_groups') except Exception: LOG.exception("Error Dropping 'consistency_groups' table.") ././@LongLink0000000000000000000000000000016400000000000011216 Lustar 
00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/829a09b0ddd4_fix_project_share_type_quotas_unique_constraint.pymanila-10.0.0/manila/db/migrations/alembic/versions/829a09b0ddd4_fix_project_share_type_quotas_uniqu0000664000175000017500000000274713656750227033545 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Fix 'project_share_type_quotas' unique constraint Revision ID: 829a09b0ddd4 Revises: b516de97bfee Create Date: 2017-10-12 20:15:51.267488 """ # revision identifiers, used by Alembic. revision = '829a09b0ddd4' down_revision = 'b516de97bfee' from alembic import op TABLE_NAME = 'project_share_type_quotas' UNIQUE_CONSTRAINT_NAME = 'uc_quotas_per_share_types' ST_FK_NAME = 'share_type_id_fk' def upgrade(): op.drop_constraint(ST_FK_NAME, TABLE_NAME, type_='foreignkey') op.drop_constraint(UNIQUE_CONSTRAINT_NAME, TABLE_NAME, type_='unique') op.create_foreign_key( ST_FK_NAME, TABLE_NAME, 'share_types', ['share_type_id'], ['id']) op.create_unique_constraint( UNIQUE_CONSTRAINT_NAME, TABLE_NAME, ['share_type_id', 'resource', 'deleted', 'project_id']) def downgrade(): # NOTE(vponomaryov): no need to implement old behaviour as it was bug, and, # moreover, not compatible with data from upgraded version. pass manila-10.0.0/manila/db/migrations/alembic/versions/11ee96se625f3_add_metadata_for_access.py0000664000175000017500000000404613656750227031503 0ustar zuulzuul00000000000000# Copyright 2018 Huawei Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add metadata for access rule Revision ID: 11ee96se625f3 Revises: 097fad24d2fc Create Date: 2018-06-16 03:07:15.548947 """ # revision identifiers, used by Alembic. 
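# NOTE: This revision introduces the share_access_rules_metadata table:
# key/value metadata rows tied to an access rule through a foreign key on
# share_access_map.id. Both upgrade() and downgrade() log and re-raise on
# failure.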
revision = '11ee96se625f3' down_revision = '097fad24d2fc' from alembic import op from oslo_log import log import sqlalchemy as sql LOG = log.getLogger(__name__) access_metadata_table_name = 'share_access_rules_metadata' def upgrade(): try: op.create_table( access_metadata_table_name, sql.Column('created_at', sql.DateTime(timezone=False)), sql.Column('updated_at', sql.DateTime(timezone=False)), sql.Column('deleted_at', sql.DateTime(timezone=False)), sql.Column('deleted', sql.String(36), default='False'), sql.Column('access_id', sql.String(36), sql.ForeignKey('share_access_map.id'), nullable=False), sql.Column('key', sql.String(255), nullable=False), sql.Column('value', sql.String(1023), nullable=False), sql.Column('id', sql.Integer, primary_key=True, nullable=False), mysql_engine='InnoDB', mysql_charset='utf8' ) except Exception: LOG.error("Table |%s| not created!", access_metadata_table_name) raise def downgrade(): try: op.drop_table(access_metadata_table_name) except Exception: LOG.error("%s table not dropped", access_metadata_table_name) raise manila-10.0.0/manila/db/migrations/alembic/versions/1f0bd302c1a6_add_availability_zones_table.py0000664000175000017500000001136213656750227032436 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_availability_zones_table Revision ID: 1f0bd302c1a6 Revises: 579c267fbb4d Create Date: 2015-07-24 12:09:36.008570 """ # revision identifiers, used by Alembic. 
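# NOTE: This revision replaces the free-form availability_zone strings on
# the services and share_instances tables with a dedicated
# availability_zones table. upgrade() creates the table, inserts one row
# per distinct AZ name found in services, points the new
# availability_zone_id foreign keys at those rows, and finally drops the
# old string columns; downgrade() reverses the mapping.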
revision = '1f0bd302c1a6' down_revision = '579c267fbb4d' from alembic import op from oslo_utils import timeutils from oslo_utils import uuidutils from sqlalchemy import Column, DateTime, ForeignKey, String, UniqueConstraint from manila.db.migrations import utils def collect_existing_az_from_services_table(connection, services_table, az_table): az_name_to_id_mapping = dict() existing_az = [] for service in connection.execute(services_table.select()): if service.availability_zone in az_name_to_id_mapping: continue az_id = uuidutils.generate_uuid() az_name_to_id_mapping[service.availability_zone] = az_id existing_az.append({ 'created_at': timeutils.utcnow(), 'id': az_id, 'name': service.availability_zone }) op.bulk_insert(az_table, existing_az) return az_name_to_id_mapping def upgrade(): connection = op.get_bind() # Create new AZ table and columns availability_zones_table = op.create_table( 'availability_zones', Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('name', String(length=255)), UniqueConstraint('name', 'deleted', name='az_name_uc'), mysql_engine='InnoDB', mysql_charset='utf8') for table_name, fk_name in (('services', 'service_az_id_fk'), ('share_instances', 'si_az_id_fk')): op.add_column( table_name, Column('availability_zone_id', String(36), ForeignKey('availability_zones.id', name=fk_name)) ) # Collect existing AZs from services table services_table = utils.load_table('services', connection) az_name_to_id_mapping = collect_existing_az_from_services_table( connection, services_table, availability_zones_table) # Map string AZ names to ID's in target tables # pylint: disable=no-value-for-parameter set_az_id_in_table = lambda table, id, name: ( # noqa: E731 op.execute( table.update().where(table.c.availability_zone == name).values( {'availability_zone_id': id}) ) ) share_instances_table = utils.load_table('share_instances', connection) for name, id in az_name_to_id_mapping.items(): for table_name in [services_table, share_instances_table]: set_az_id_in_table(table_name, id, name) # Remove old AZ columns from tables op.drop_column('services', 'availability_zone') op.drop_column('share_instances', 'availability_zone') def downgrade(): connection = op.get_bind() # Create old AZ fields op.add_column('services', Column('availability_zone', String(length=255))) op.add_column('share_instances', Column('availability_zone', String(length=255))) # Migrate data az_table = utils.load_table('availability_zones', connection) share_instances_table = utils.load_table('share_instances', connection) services_table = utils.load_table('services', connection) for az in connection.execute(az_table.select()): # pylint: disable=no-value-for-parameter op.execute( share_instances_table.update().where( share_instances_table.c.availability_zone_id == az.id ).values({'availability_zone': az.name}) ) op.execute( services_table.update().where( services_table.c.availability_zone_id == az.id ).values({'availability_zone': az.name}) ) # Remove AZ_id columns and AZ table op.drop_constraint('service_az_id_fk', 'services', type_='foreignkey') op.drop_column('services', 'availability_zone_id') op.drop_constraint('si_az_id_fk', 'share_instances', type_='foreignkey') op.drop_column('share_instances', 'availability_zone_id') op.drop_table('availability_zones') 
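# NOTE: Several revisions in this directory share the same data-migration
# pattern: reflect an already-existing table with utils.load_table() and
# issue row-level updates inside the upgrade() that changed the schema.
# The helper below is only an illustrative sketch of that pattern -- the
# 'examples' table and its 'state' column are hypothetical, and nothing in
# this module calls it.
def _illustrative_backfill(connection):
    examples_table = utils.load_table('examples', connection)
    for row in connection.execute(examples_table.select()):
        # pylint: disable=no-value-for-parameter
        op.execute(
            examples_table.update().where(
                examples_table.c.id == row.id
            ).values({'state': 'ready'})
        )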
manila-10.0.0/manila/db/migrations/alembic/versions/221a83cfd85b_change_user_project_id_length.py0000664000175000017500000000357713656750227032646 0ustar zuulzuul00000000000000# Copyright 2016 SAP SE # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """change_user_id_length Revision ID: 221a83cfd85b Revises: eb6d5544cbbd Create Date: 2016-06-21 14:22:48.314501 """ # revision identifiers, used by Alembic. revision = '221a83cfd85b' down_revision = 'eb6d5544cbbd' from alembic import op from oslo_log import log import sqlalchemy as sa LOG = log.getLogger(__name__) def upgrade(): LOG.info("Changing user_id length for share_networks") op.alter_column("share_networks", "user_id", type_=sa.String(length=255)) LOG.info("Changing project_id length for share_networks") op.alter_column("share_networks", "project_id", type_=sa.String(length=255)) LOG.info("Changing project_id length for security_services") op.alter_column("security_services", "project_id", type_=sa.String(length=255)) def downgrade(): LOG.info("Changing back user_id length for share_networks") op.alter_column("share_networks", "user_id", type_=sa.String(length=36)) LOG.info("Changing back project_id length for share_networks") op.alter_column("share_networks", "project_id", type_=sa.String(length=36)) LOG.info("Changing back project_id length for security_services") op.alter_column("security_services", "project_id", type_=sa.String(length=36)) ././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/3e7d62517afa_add_create_share_from_snapshot_support.pymanila-10.0.0/manila/db/migrations/alembic/versions/3e7d62517afa_add_create_share_from_snapshot_supp0000664000175000017500000001332113656750227033426 0ustar zuulzuul00000000000000# Copyright (c) 2016 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add 'create_share_from_snapshot_support' extra spec to share types Revision ID: 3e7d62517afa Revises: 48a7beae3117 Create Date: 2016-08-16 10:48:11.497499 """ # revision identifiers, used by Alembic. revision = '3e7d62517afa' down_revision = '48a7beae3117' from alembic import op from oslo_utils import timeutils import sqlalchemy as sa from sqlalchemy.sql import table from manila.common import constants def upgrade(): """Performs DB upgrade to add create_share_from_snapshot_support. Prior to this migration, the 'snapshot_support' extra spec meant two different things: a snapshot may be created, and a new share may be created from a snapshot. 
With the planned addition of new snapshot semantics (revert to snapshot, mountable snapshots), it is likely a driver may be able to support one or both of the new semantics but *not* be able to create a share from a snapshot. So this migration separates the existing snapshot_support extra spec and share attribute into two values to enable logical separability of the features. Add 'create_share_from_snapshot_support' extra spec to all share types and attribute 'create_share_from_snapshot_support' to Share model. """ session = sa.orm.Session(bind=op.get_bind().connect()) extra_specs_table = table( 'share_type_extra_specs', sa.Column('created_at', sa.DateTime), sa.Column('deleted', sa.Integer), sa.Column('share_type_id', sa.String(length=36)), sa.Column('spec_key', sa.String(length=255)), sa.Column('spec_value', sa.String(length=255))) share_type_table = table( 'share_types', sa.Column('deleted', sa.Integer), sa.Column('id', sa.Integer)) # Get list of share type IDs that don't already have the new required # create_share_from_snapshot_support extra spec defined. existing_extra_specs = session.query( extra_specs_table).filter( extra_specs_table.c.spec_key == constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT).filter( extra_specs_table.c.deleted == 0).all() excluded_st_ids = [es.share_type_id for es in existing_extra_specs] # Get share types for the IDs we got in the previous query share_types = session.query(share_type_table).filter( share_type_table.c.deleted.in_(('0', 'False', ))).filter( share_type_table.c.id.notin_(excluded_st_ids)).all() extra_specs = [] now = timeutils.utcnow() for share_type in share_types: # Get the value of snapshot_support for each extant share type snapshot_support_extra_spec = session.query( extra_specs_table).filter( extra_specs_table.c.spec_key == constants.ExtraSpecs.SNAPSHOT_SUPPORT).filter( extra_specs_table.c.share_type_id == share_type.id).first() spec_value = (snapshot_support_extra_spec.spec_value if snapshot_support_extra_spec else 'False') # Copy the snapshot_support value to create_share_from_snapshot_support extra_specs.append({ 'spec_key': constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT, 'spec_value': spec_value, 'deleted': 0, 'created_at': now, 'share_type_id': share_type.id, }) if extra_specs: op.bulk_insert(extra_specs_table, extra_specs) # Add create_share_from_snapshot_support attribute to shares table op.add_column('shares', sa.Column('create_share_from_snapshot_support', sa.Boolean, default=True)) # Copy snapshot_support to create_share_from_snapshot_support on each share shares_table = sa.Table( 'shares', sa.MetaData(), sa.Column('id', sa.String(length=36)), sa.Column('deleted', sa.String(length=36)), sa.Column('snapshot_support', sa.Boolean), sa.Column('create_share_from_snapshot_support', sa.Boolean), ) # pylint: disable=no-value-for-parameter update = shares_table.update().where( shares_table.c.deleted == 'False').values( create_share_from_snapshot_support=shares_table.c.snapshot_support) session.execute(update) session.commit() session.close_all() def downgrade(): """Performs DB downgrade removing create_share_from_snapshot_support. Remove 'create_share_from_snapshot_support' extra spec from all share types and attribute 'create_share_from_snapshot_support' from Share model. 
""" connection = op.get_bind().connect() deleted_at = timeutils.utcnow() extra_specs = sa.Table( 'share_type_extra_specs', sa.MetaData(), autoload=True, autoload_with=connection) # pylint: disable=no-value-for-parameter update = extra_specs.update().where( extra_specs.c.spec_key == constants.ExtraSpecs.CREATE_SHARE_FROM_SNAPSHOT_SUPPORT).where( extra_specs.c.deleted == 0).values(deleted=extra_specs.c.id, deleted_at=deleted_at) connection.execute(update) op.drop_column('shares', 'create_share_from_snapshot_support') manila-10.0.0/manila/db/migrations/alembic/versions/03da71c0e321_convert_cgs_to_share_groups.py0000664000175000017500000002227113656750227032310 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Convert consistency groups to share groups Revision ID: 03da71c0e321 Revises: e9f79621d83f Create Date: 2016-05-19 10:25:17.899008 """ # revision identifiers, used by Alembic. revision = "03da71c0e321" down_revision = "e9f79621d83f" from alembic import op from oslo_log import log import sqlalchemy as sa from sqlalchemy import Column, String from manila.db.migrations import utils LOG = log.getLogger(__name__) def upgrade(): LOG.info("Renaming consistency group tables") # Rename tables op.rename_table("consistency_groups", "share_groups") op.rename_table("cgsnapshots", "share_group_snapshots") op.rename_table("cgsnapshot_members", "share_group_snapshot_members") op.rename_table( "consistency_group_share_type_mappings", "share_group_share_type_mappings") # Update columns and foreign keys op.drop_constraint( "fk_shares_consistency_group_id", "shares", type_="foreignkey") op.alter_column( "shares", "consistency_group_id", existing_type=String(36), existing_nullable=True, new_column_name="share_group_id") op.alter_column( "shares", "source_cgsnapshot_member_id", existing_type=String(36), existing_nullable=True, new_column_name="source_share_group_snapshot_member_id") op.create_foreign_key( "fk_shares_share_group_id", "shares", "share_groups", ["share_group_id"], ["id"]) op.drop_constraint( "fk_cg_share_network_id", "share_groups", type_="foreignkey") op.drop_constraint( "fk_cg_share_server_id", "share_groups", type_="foreignkey") op.alter_column( "share_groups", "source_cgsnapshot_id", existing_type=String(36), new_column_name="source_share_group_snapshot_id") op.create_foreign_key( "fk_share_group_share_network_id", "share_groups", "share_networks", ["share_network_id"], ["id"]) op.create_foreign_key( "fk_share_group_share_server_id", "share_groups", "share_servers", ["share_server_id"], ["id"]) op.drop_constraint( "fk_cgsnapshots_consistency_group_id", "share_group_snapshots", type_="foreignkey") op.alter_column( "share_group_snapshots", "consistency_group_id", existing_type=String(36), new_column_name="share_group_id") op.create_foreign_key( "fk_share_group_snapshots_share_group_id", "share_group_snapshots", "share_groups", ["share_group_id"], ["id"]) op.drop_constraint( "fk_cgstm_cg_id", "share_group_share_type_mappings", type_="foreignkey") op.drop_constraint( 
"fk_cgstm_share_type_id", "share_group_share_type_mappings", type_="foreignkey") op.alter_column( "share_group_share_type_mappings", "consistency_group_id", existing_type=String(36), new_column_name="share_group_id") op.create_foreign_key( "fk_sgstm_share_group_id", "share_group_share_type_mappings", "share_groups", ["share_group_id"], ["id"]) op.create_foreign_key( "fk_sgstm_share_type_id", "share_group_share_type_mappings", "share_types", ["share_type_id"], ["id"]) op.drop_constraint( "fk_cgsnapshot_members_cgsnapshot_id", "share_group_snapshot_members", type_="foreignkey") op.drop_constraint( "fk_cgsnapshot_members_share_instance_id", "share_group_snapshot_members", type_="foreignkey") op.drop_constraint( "fk_cgsnapshot_members_share_id", "share_group_snapshot_members", type_="foreignkey") op.drop_constraint( "fk_cgsnapshot_members_share_type_id", "share_group_snapshot_members", type_="foreignkey") op.alter_column( "share_group_snapshot_members", "cgsnapshot_id", existing_type=String(36), new_column_name="share_group_snapshot_id") op.create_foreign_key( "fk_gsm_group_snapshot_id", "share_group_snapshot_members", "share_group_snapshots", ["share_group_snapshot_id"], ["id"]) op.create_foreign_key( "fk_gsm_share_instance_id", "share_group_snapshot_members", "share_instances", ["share_instance_id"], ["id"]) op.create_foreign_key( "fk_gsm_share_id", "share_group_snapshot_members", "shares", ["share_id"], ["id"]) op.drop_column("share_group_snapshot_members", "share_type_id") def downgrade(): meta = sa.MetaData() meta.bind = op.get_bind() # Rename tables op.rename_table("share_groups", "consistency_groups") op.rename_table("share_group_snapshots", "cgsnapshots") op.rename_table("share_group_snapshot_members", "cgsnapshot_members") op.rename_table( "share_group_share_type_mappings", "consistency_group_share_type_mappings") # Update columns and foreign keys op.drop_constraint( "fk_shares_share_group_id", "shares", type_="foreignkey") op.alter_column( "shares", "share_group_id", existing_type=String(36), new_column_name="consistency_group_id") op.alter_column( "shares", "source_share_group_snapshot_member_id", existing_type=String(36), existing_nullable=True, new_column_name="source_cgsnapshot_member_id") op.create_foreign_key( "fk_shares_consistency_group_id", "shares", "consistency_groups", ["consistency_group_id"], ["id"]) op.drop_constraint( "fk_share_group_share_network_id", "consistency_groups", type_="foreignkey") op.drop_constraint( "fk_share_group_share_server_id", "consistency_groups", type_="foreignkey") op.alter_column( "consistency_groups", "source_share_group_snapshot_id", existing_type=String(36), new_column_name="source_cgsnapshot_id") op.create_foreign_key( "fk_cg_share_network_id", "consistency_groups", "share_networks", ["share_network_id"], ["id"]) op.create_foreign_key( "fk_cg_share_server_id", "consistency_groups", "share_servers", ["share_server_id"], ["id"]) op.drop_constraint( "fk_share_group_snapshots_share_group_id", "cgsnapshots", type_="foreignkey") op.alter_column( "cgsnapshots", "share_group_id", existing_type=String(36), new_column_name="consistency_group_id") op.create_foreign_key( "fk_cgsnapshots_consistency_group_id", "cgsnapshots", "consistency_groups", ["consistency_group_id"], ["id"]) op.drop_constraint( "fk_sgstm_share_group_id", "consistency_group_share_type_mappings", type_="foreignkey") op.drop_constraint( "fk_sgstm_share_type_id", "consistency_group_share_type_mappings", type_="foreignkey") op.alter_column( "consistency_group_share_type_mappings", 
"share_group_id", existing_type=String(36), new_column_name="consistency_group_id") op.create_foreign_key( "fk_cgstm_cg_id", "consistency_group_share_type_mappings", "consistency_groups", ["consistency_group_id"], ["id"]) op.create_foreign_key( "fk_cgstm_share_type_id", "consistency_group_share_type_mappings", "share_types", ["share_type_id"], ["id"]) op.drop_constraint( "fk_gsm_group_snapshot_id", "cgsnapshot_members", type_="foreignkey") op.drop_constraint( "fk_gsm_share_instance_id", "cgsnapshot_members", type_="foreignkey") op.drop_constraint( "fk_gsm_share_id", "cgsnapshot_members", type_="foreignkey") op.alter_column( "cgsnapshot_members", "share_group_snapshot_id", existing_type=String(36), new_column_name="cgsnapshot_id") op.create_foreign_key( "fk_cgsnapshot_members_cgsnapshot_id", "cgsnapshot_members", "cgsnapshots", ["cgsnapshot_id"], ["id"]) op.create_foreign_key( "fk_cgsnapshot_members_share_instance_id", "cgsnapshot_members", "share_instances", ["share_instance_id"], ["id"]) op.create_foreign_key( "fk_cgsnapshot_members_share_id", "cgsnapshot_members", "shares", ["share_id"], ["id"]) op.add_column( "cgsnapshot_members", Column('share_type_id', String(36), nullable=True)) connection = op.get_bind() si_table = utils.load_table('share_instances', connection) member_table = utils.load_table('cgsnapshot_members', connection) for si_record in connection.execute(si_table.select()): # pylint: disable=no-value-for-parameter connection.execute( member_table.update().where( member_table.c.share_instance_id == si_record.id, ).values({"share_type_id": si_record.share_type_id})) op.alter_column( "cgsnapshot_members", Column('share_type_id', String(36), nullable=False)) op.create_foreign_key( "fk_cgsnapshot_members_share_type_id", "cgsnapshot_members", "share_types", ["share_type_id"], ["id"]) ././@LongLink0000000000000000000000000000020300000000000011210 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/6a3fd2984bc31_add_is_auto_deletable_and_identifier_fields_for_share_servers.pymanila-10.0.0/manila/db/migrations/alembic/versions/6a3fd2984bc31_add_is_auto_deletable_and_identifi0000664000175000017500000000453113656750227033220 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add is_auto_deletable and identifier fields for share servers Revision ID: 6a3fd2984bc31 Revises: 11ee96se625f3 Create Date: 2018-10-29 11:27:44.194732 """ # revision identifiers, used by Alembic. 
revision = '6a3fd2984bc31' down_revision = '11ee96se625f3' from alembic import op from oslo_log import log import sqlalchemy as sa from manila.db.migrations import utils LOG = log.getLogger(__name__) def upgrade(): try: op.add_column('share_servers', sa.Column( 'is_auto_deletable', sa.Boolean, default=True)) op.add_column('share_servers', sa.Column( 'identifier', sa.String(length=255), default=None)) except Exception: LOG.error("Columns share_servers.is_auto_deletable " "and/or share_servers.identifier not created!") raise try: connection = op.get_bind() share_servers_table = utils.load_table('share_servers', connection) for server in connection.execute(share_servers_table.select()): # pylint: disable=no-value-for-parameter connection.execute( share_servers_table.update().where( share_servers_table.c.id == server.id, ).values({"identifier": server.id, "is_auto_deletable": True})) except Exception: LOG.error( "Could not initialize share_servers.is_auto_deletable to True" " and share_servers.identifier with the share server ID!") raise def downgrade(): try: op.drop_column('share_servers', 'is_auto_deletable') op.drop_column('share_servers', 'identifier') except Exception: LOG.error("Columns share_servers.is_auto_deletable and/or " "share_servers.identifier not dropped!") raise ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/e6d88547b381_add_progress_field_to_share_instance.pymanila-10.0.0/manila/db/migrations/alembic/versions/e6d88547b381_add_progress_field_to_share_instanc0000664000175000017500000000345013656750227033261 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add-progress-field-to-share-instance Revision ID: e6d88547b381 Revises: 805685098bd2 Create Date: 2020-01-31 14:06:15.952747 """ # revision identifiers, used by Alembic. 
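# NOTE: This revision adds a nullable 'progress' column to share_instances
# and back-fills it with '100%' for every instance that is already in the
# 'available' state; instances in other states are left at NULL.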
revision = 'e6d88547b381' down_revision = '805685098bd2' from alembic import op from manila.common import constants from manila.db.migrations import utils from oslo_log import log import sqlalchemy as sa LOG = log.getLogger(__name__) def upgrade(): try: connection = op.get_bind() op.add_column('share_instances', sa.Column('progress', sa.String(32), nullable=True, default=None)) share_instances_table = utils.load_table('share_instances', connection) updated_data = {'progress': '100%'} # pylint: disable=no-value-for-parameter op.execute( share_instances_table.update().where( share_instances_table.c.status == constants.STATUS_AVAILABLE, ).values(updated_data) ) except Exception: LOG.error("Column share_instances.progress not created.") raise def downgrade(): try: op.drop_column('share_instances', 'progress') except Exception: LOG.error("Column share_instances.progress not dropped.") raise ././@LongLink0000000000000000000000000000020200000000000011207 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/5155c7077f99_add_more_network_info_attributes_to_network_allocations_table.pymanila-10.0.0/manila/db/migrations/alembic/versions/5155c7077f99_add_more_network_info_attributes_to0000664000175000017500000000345713656750227033277 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add more network info attributes to 'network_allocations' table. Revision ID: 5155c7077f99 Revises: 293fac1130ca Create Date: 2015-12-22 12:05:24.297049 """ # revision identifiers, used by Alembic. revision = '5155c7077f99' down_revision = '293fac1130ca' from alembic import op import sqlalchemy as sa def upgrade(): default_label_value = 'user' op.add_column( 'network_allocations', sa.Column('label', sa.String(255), default=default_label_value, server_default=default_label_value, nullable=True), ) op.add_column( 'network_allocations', sa.Column('network_type', sa.String(32), nullable=True)) op.add_column( 'network_allocations', sa.Column('segmentation_id', sa.Integer, nullable=True)) op.add_column( 'network_allocations', sa.Column('ip_version', sa.Integer, nullable=True)) op.add_column( 'network_allocations', sa.Column('cidr', sa.String(64), nullable=True)) def downgrade(): for col_name in ('label', 'network_type', 'segmentation_id', 'ip_version', 'cidr'): op.drop_column('network_allocations', col_name) manila-10.0.0/manila/db/migrations/alembic/versions/0274d20c560f_add_ou_to_security_service.py0000664000175000017500000000204013656750227032036 0ustar zuulzuul00000000000000# Copyright 2018 SAP SE # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Add ou to security service Revision ID: 0274d20c560f Revises: 4a482571410f Create Date: 2017-05-19 17:27:30.274440 """ # revision identifiers, used by Alembic. revision = '0274d20c560f' down_revision = '4a482571410f' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'security_services', sa.Column('ou', sa.String(255), nullable=True)) def downgrade(): op.drop_column('security_services', 'ou') manila-10.0.0/manila/db/migrations/alembic/versions/3db9992c30f3_transform_statuses_to_lowercase.py0000664000175000017500000000335313656750227033254 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Transform statuses to lowercase Revision ID: 3db9992c30f3 Revises: 533646c7af38 Create Date: 2015-05-28 19:30:35.645773 """ # revision identifiers, used by Alembic. revision = '3db9992c30f3' down_revision = '533646c7af38' from alembic import op import sqlalchemy as sa from manila.db.migrations import utils def upgrade(): # NOTE(vponomaryov): shares has some statuses as uppercase, so # transform them in addition to statuses of share servers. for table in ('shares', 'share_servers'): _transform_case(table, make_upper=False) def downgrade(): # NOTE(vponomaryov): transform share server statuses to uppercase and # leave share statuses as is. _transform_case('share_servers', make_upper=True) def _transform_case(table_name, make_upper): connection = op.get_bind() table = utils.load_table(table_name, connection) case = sa.func.upper if make_upper else sa.func.lower for row in connection.execute(table.select()): op.execute( table.update().where( # pylint: disable=no-value-for-parameter table.c.id == row.id ).values({'status': case(row.status)}) ) ././@LongLink0000000000000000000000000000015500000000000011216 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/fdfb668d19e1_add_gateway_to_network_allocations_table.pymanila-10.0.0/manila/db/migrations/alembic/versions/fdfb668d19e1_add_gateway_to_network_allocations_0000664000175000017500000000223213656750227033524 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_gateway_to_network_allocations_table Revision ID: fdfb668d19e1 Revises: 221a83cfd85b Create Date: 2016-04-19 10:07:16.224806 """ # revision identifiers, used by Alembic. 
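# NOTE: This revision simply adds a nullable 'gateway' string column to
# both the network_allocations and share_networks tables; downgrade()
# drops the two columns again.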
revision = 'fdfb668d19e1' down_revision = '221a83cfd85b' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'network_allocations', sa.Column('gateway', sa.String(64), nullable=True)) op.add_column( 'share_networks', sa.Column('gateway', sa.String(64), nullable=True)) def downgrade(): op.drop_column('network_allocations', 'gateway') op.drop_column('share_networks', 'gateway') manila-10.0.0/manila/db/migrations/alembic/versions/87ce15c59bbe_add_revert_to_snapshot_support.py0000664000175000017500000000377013656750227033241 0ustar zuulzuul00000000000000# Copyright (c) 2016 Clinton Knight. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_revert_to_snapshot_support Revision ID: 87ce15c59bbe Revises: 95e3cf760840 Create Date: 2016-08-18 00:12:34.587018 """ # revision identifiers, used by Alembic. revision = '87ce15c59bbe' down_revision = '95e3cf760840' from alembic import op import sqlalchemy as sa def upgrade(): """Performs DB upgrade to add revert_to_snapshot_support. Add attribute 'revert_to_snapshot_support' to Share model. """ session = sa.orm.Session(bind=op.get_bind().connect()) # Add create_share_from_snapshot_support attribute to shares table op.add_column( 'shares', sa.Column('revert_to_snapshot_support', sa.Boolean, default=False)) # Set revert_to_snapshot_support on each share shares_table = sa.Table( 'shares', sa.MetaData(), sa.Column('id', sa.String(length=36)), sa.Column('deleted', sa.String(length=36)), sa.Column('revert_to_snapshot_support', sa.Boolean), ) # pylint: disable=no-value-for-parameter update = shares_table.update().where( shares_table.c.deleted == 'False').values( revert_to_snapshot_support=False) session.execute(update) session.commit() session.close_all() def downgrade(): """Performs DB downgrade removing revert_to_snapshot_support. Remove attribute 'revert_to_snapshot_support' from Share model. """ op.drop_column('shares', 'revert_to_snapshot_support') manila-10.0.0/manila/db/migrations/alembic/versions/579c267fbb4d_add_share_instances_access_map.py0000664000175000017500000001006113656750227032755 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_share_instances_access_map Revision ID: 579c267fbb4d Revises: 5077ffcc5f1c Create Date: 2015-08-19 07:51:52.928542 """ # revision identifiers, used by Alembic. 
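# NOTE: This revision moves access-rule state tracking onto individual
# share instances. upgrade() creates share_instance_access_map, copies
# each rule's existing 'state' onto one mapping row per instance of the
# rule's share, and then drops share_access_map.state. As the downgrade()
# docstring warns, reversing this is lossy because only a single state can
# be written back per rule.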
revision = '579c267fbb4d' down_revision = '5077ffcc5f1c' from alembic import op from sqlalchemy import Column, DateTime, ForeignKey, String from oslo_utils import uuidutils from manila.db.migrations import utils def upgrade(): """Create 'share_instance_access_map' table and move 'state' column.""" instance_access_table = op.create_table( 'share_instance_access_map', Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('share_instance_id', String(length=36), ForeignKey('share_instances.id', name="siam_instance_fk")), Column('access_id', String(length=36), ForeignKey('share_access_map.id', name="siam_access_fk")), Column('state', String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8') # NOTE(u_glide): Move all states from 'share_access_map' # to 'share_instance_access_map' instance_access_mappings = [] connection = op.get_bind() access_table = utils.load_table('share_access_map', connection) instances_table = utils.load_table('share_instances', connection) for access_rule in connection.execute(access_table.select()): # pylint: disable=assignment-from-no-return instances_query = instances_table.select().where( instances_table.c.share_id == access_rule.share_id ) for instance in connection.execute(instances_query): instance_access_mappings.append({ 'created_at': access_rule.created_at, 'updated_at': access_rule.updated_at, 'deleted_at': access_rule.deleted_at, 'deleted': access_rule.deleted, 'id': uuidutils.generate_uuid(), 'share_instance_id': instance.id, 'access_id': access_rule.id, 'state': access_rule.state, }) op.bulk_insert(instance_access_table, instance_access_mappings) op.drop_column('share_access_map', 'state') def downgrade(): """Remove 'share_instance_access_map' table and add 'state' column back. This method can lead to data loss because only first state is saved in share_access_map table. """ op.add_column('share_access_map', Column('state', String(length=255))) # NOTE(u_glide): Move all states from 'share_instance_access_map' # to 'share_access_map' connection = op.get_bind() access_table = utils.load_table('share_access_map', connection) instance_access_table = utils.load_table('share_instance_access_map', connection) share_access_rules = connection.execute( access_table.select().where(access_table.c.deleted == "False")) for access_rule in share_access_rules: access_mapping = connection.execute( instance_access_table.select().where( instance_access_table.c.access_id == access_rule['id']) ).first() # pylint: disable=no-value-for-parameter op.execute( access_table.update().where( access_table.c.id == access_rule['id'] ).values({'state': access_mapping['state']}) ) op.drop_table('share_instance_access_map') manila-10.0.0/manila/db/migrations/alembic/versions/4a482571410f_add_backends_info_table.py0000664000175000017500000000346213656750227031133 0ustar zuulzuul00000000000000# Copyright 2017 Huawei inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """add_backend_info_table Revision ID: 4a482571410f Revises: 27cb96d991fa Create Date: 2017-05-18 14:47:38.201658 """ # revision identifiers, used by Alembic. revision = '4a482571410f' down_revision = '27cb96d991fa' from alembic import op from oslo_log import log import sqlalchemy as sql LOG = log.getLogger(__name__) backend_info_table_name = 'backend_info' def upgrade(): try: op.create_table( backend_info_table_name, sql.Column('created_at', sql.DateTime), sql.Column('updated_at', sql.DateTime), sql.Column('deleted_at', sql.DateTime), sql.Column('deleted', sql.Integer, default=0), sql.Column('host', sql.String(255), nullable=False, primary_key=True), sql.Column('info_hash', sql.String(255), nullable=False), mysql_engine='InnoDB', ) except Exception: LOG.error("Table |%s| not created!", backend_info_table_name) raise def downgrade(): try: op.drop_table(backend_info_table_name) except Exception: LOG.error("%s table not dropped", backend_info_table_name) raise ././@LongLink0000000000000000000000000000015700000000000011220 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/493eaffd79e1_add_mtu_network_allocations_share_networks.pymanila-10.0.0/manila/db/migrations/alembic/versions/493eaffd79e1_add_mtu_network_allocations_share_n0000664000175000017500000000216713656750227033537 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_mtu_network_allocations Revision ID: 493eaffd79e1 Revises: e8ea58723178 Create Date: 2016-08-01 14:18:31.899606 """ # revision identifiers, used by Alembic. revision = '493eaffd79e1' down_revision = 'e8ea58723178' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'network_allocations', sa.Column('mtu', sa.Integer, nullable=True)) op.add_column( 'share_networks', sa.Column('mtu', sa.Integer, nullable=True)) def downgrade(): op.drop_column('network_allocations', 'mtu') op.drop_column('share_networks', 'mtu') manila-10.0.0/manila/db/migrations/alembic/versions/5077ffcc5f1c_add_share_instances.py0000664000175000017500000002705213656750227030662 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_share_instances Revision ID: 5077ffcc5f1c Revises: 3db9992c30f3 Create Date: 2015-06-26 12:54:55.630152 """ # revision identifiers, used by Alembic. 
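# The shares -> share_instances data move below is, in spirit, a single
# INSERT ... SELECT (illustrative SQL; the migration does it row by row
# via SQLAlchemy so the same code runs on MySQL, PostgreSQL and SQLite):
#
#   INSERT INTO share_instances
#       (created_at, updated_at, deleted_at, deleted, id, share_id, host,
#        status, scheduled_at, launched_at, terminated_at,
#        share_network_id, share_server_id, availability_zone)
#   SELECT created_at, updated_at, deleted_at, deleted, id, id, host,
#        status, scheduled_at, launched_at, terminated_at,
#        share_network_id, share_server_id, availability_zone
#   FROM shares;
#
# after which the moved columns are dropped from 'shares'. Snapshots and
# export locations get the same treatment further down in this module.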
revision = '5077ffcc5f1c' down_revision = '3db9992c30f3' from alembic import op from sqlalchemy import Column, DateTime, ForeignKey, String import six from manila.db.migrations import utils def create_share_instances_table(connection): # Create 'share_instances' table share_instances_table = op.create_table( 'share_instances', Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('share_id', String(length=36), ForeignKey('shares.id', name="si_share_fk")), Column('host', String(length=255)), Column('status', String(length=255)), Column('scheduled_at', DateTime), Column('launched_at', DateTime), Column('terminated_at', DateTime), Column('share_network_id', String(length=36), ForeignKey('share_networks.id', name="si_share_network_fk"), nullable=True), Column('share_server_id', String(length=36), ForeignKey('share_servers.id', name="si_share_server_fk"), nullable=True), Column('availability_zone', String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8') # Migrate data from 'shares' to 'share_instances' share_instances = [] shares_table = utils.load_table('shares', connection) for share in connection.execute(shares_table.select()): share_instances.append({ 'created_at': share.created_at, 'updated_at': share.updated_at, 'deleted_at': share.deleted_at, 'deleted': share.deleted, 'id': share.id, 'share_id': share.id, 'host': share.host, 'status': share.status, 'scheduled_at': share.scheduled_at, 'launched_at': share.launched_at, 'terminated_at': share.terminated_at, 'share_network_id': share.share_network_id, 'share_server_id': share.share_server_id, 'availability_zone': share.availability_zone, }) op.bulk_insert(share_instances_table, share_instances) # Remove columns moved to 'share_instances' table with op.batch_alter_table("shares") as batch_op: for fk in shares_table.foreign_keys: batch_op.drop_constraint(fk.name, type_='foreignkey') batch_op.drop_column('host') batch_op.drop_column('status') batch_op.drop_column('scheduled_at') batch_op.drop_column('launched_at') batch_op.drop_column('terminated_at') batch_op.drop_column('share_network_id') batch_op.drop_column('share_server_id') batch_op.drop_column('availability_zone') def remove_share_instances_table(connection): with op.batch_alter_table("shares") as batch_op: batch_op.add_column(Column('host', String(length=255))) batch_op.add_column(Column('status', String(length=255))) batch_op.add_column(Column('scheduled_at', DateTime)) batch_op.add_column(Column('launched_at', DateTime)) batch_op.add_column(Column('terminated_at', DateTime)) batch_op.add_column(Column('share_network_id', String(length=36), ForeignKey('share_networks.id'), nullable=True)) batch_op.add_column(Column('share_server_id', String(length=36), ForeignKey('share_servers.id'), nullable=True)) batch_op.add_column(Column('availability_zone', String(length=255))) shares_table = utils.load_table('shares', connection) share_inst_table = utils.load_table('share_instances', connection) for share in connection.execute(shares_table.select()): instance = connection.execute( share_inst_table.select().where( share_inst_table.c.share_id == share.id) ).first() # pylint: disable=no-value-for-parameter op.execute( shares_table.update().where( shares_table.c.id == share.id ).values( { 'host': instance['host'], 'status': instance['status'], 'scheduled_at': instance['scheduled_at'], 'launched_at': 
instance['launched_at'], 'terminated_at': instance['terminated_at'], 'share_network_id': instance['share_network_id'], 'share_server_id': instance['share_server_id'], 'availability_zone': instance['availability_zone'], } ) ) op.drop_table('share_instances') def create_snapshot_instances_table(connection): # Create 'share_snapshot_instances' table snapshot_instances_table = op.create_table( 'share_snapshot_instances', Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('snapshot_id', String(length=36), ForeignKey('share_snapshots.id', name="ssi_snapshot_fk")), Column('share_instance_id', String(length=36), ForeignKey('share_instances.id', name="ssi_share_instance_fk")), Column('status', String(length=255)), Column('progress', String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8' ) # Migrate data from share_snapshots to share_snapshot_instances snapshot_instances = [] snapshot_table = utils.load_table('share_snapshots', connection) share_instances_table = utils.load_table('share_instances', connection) for snapshot in connection.execute(snapshot_table.select()): share_instances_rows = connection.execute( share_instances_table.select().where( share_instances_table.c.share_id == snapshot.share_id ) ) snapshot_instances.append({ 'created_at': snapshot.created_at, 'updated_at': snapshot.updated_at, 'deleted_at': snapshot.deleted_at, 'deleted': snapshot.deleted, 'id': snapshot.id, 'snapshot_id': snapshot.id, 'status': snapshot.status, 'progress': snapshot.progress, 'share_instance_id': share_instances_rows.first().id, }) op.bulk_insert(snapshot_instances_table, snapshot_instances) # Remove columns moved to 'share_snapshot_instances' table with op.batch_alter_table("share_snapshots") as batch_op: batch_op.drop_column('status') batch_op.drop_column('progress') def remove_snapshot_instances_table(connection): with op.batch_alter_table("share_snapshots") as batch_op: batch_op.add_column(Column('status', String(length=255))) batch_op.add_column(Column('progress', String(length=255))) snapshots_table = utils.load_table('share_snapshots', connection) snapshots_inst_table = utils.load_table('share_snapshot_instances', connection) for snapshot_instance in connection.execute(snapshots_inst_table.select()): snapshot = connection.execute( snapshots_table.select().where( snapshots_table.c.id == snapshot_instance.snapshot_id) ).first() # pylint: disable=no-value-for-parameter op.execute( snapshots_table.update().where( snapshots_table.c.id == snapshot.id ).values( { 'status': snapshot_instance['status'], 'progress': snapshot_instance['progress'], } ) ) op.drop_table('share_snapshot_instances') def upgrade_export_locations_table(connection): # Update 'share_export_locations' table op.add_column( 'share_export_locations', Column('share_instance_id', String(36), ForeignKey('share_instances.id', name="sel_instance_id_fk")) ) # Convert share_id to share_instance_id share_el_table = utils.load_table('share_export_locations', connection) share_instances_table = utils.load_table('share_instances', connection) for export in connection.execute(share_el_table.select()): share_instance = connection.execute( share_instances_table.select().where( share_instances_table.c.share_id == export.share_id) ).first() # pylint: disable=no-value-for-parameter op.execute( share_el_table.update().where( share_el_table.c.id == export.id 
).values({'share_instance_id': six.text_type(share_instance.id)}) ) with op.batch_alter_table("share_export_locations") as batch_op: batch_op.drop_constraint('sel_id_fk', type_='foreignkey') batch_op.drop_column('share_id') batch_op.rename_table('share_export_locations', 'share_instance_export_locations') def downgrade_export_locations_table(connection): op.rename_table('share_instance_export_locations', 'share_export_locations') op.add_column( 'share_export_locations', Column('share_id', String(36), ForeignKey('shares.id', name="sel_id_fk")) ) # Convert share_instance_id to share_id share_el_table = utils.load_table('share_export_locations', connection) share_instances_table = utils.load_table('share_instances', connection) for export in connection.execute(share_el_table.select()): share_instance = connection.execute( share_instances_table.select().where( share_instances_table.c.id == export.share_instance_id) ).first() # pylint: disable=no-value-for-parameter op.execute( share_el_table.update().where( share_el_table.c.id == export.id ).values({'share_id': six.text_type(share_instance.share_id)}) ) with op.batch_alter_table("share_export_locations") as batch_op: batch_op.drop_constraint('sel_instance_id_fk', type_='foreignkey') batch_op.drop_column('share_instance_id') def upgrade(): connection = op.get_bind() create_share_instances_table(connection) create_snapshot_instances_table(connection) upgrade_export_locations_table(connection) def downgrade(): """Remove share_instances and share_snapshot_instance tables. This method can lead to data loss because only first share/snapshot instance is saved in shares/snapshot table. """ connection = op.get_bind() downgrade_export_locations_table(connection) remove_snapshot_instances_table(connection) remove_share_instances_table(connection) ././@LongLink0000000000000000000000000000020300000000000011210 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/805685098bd2_add_share_network_subnets_table_and_modify_share_servers_table.pymanila-10.0.0/manila/db/migrations/alembic/versions/805685098bd2_add_share_network_subnets_table_and0000664000175000017500000002315613656750227033177 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add_share_network_subnets_table_and_modify_share_networks_and_servers Revision ID: 805685098bd2 Revises: 6a3fd2984bc31 Create Date: 2019-05-09 16:28:41.919714 """ # revision identifiers, used by Alembic. 
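# Rough shape of the data move below (names are taken from the code; the
# SQL is illustrative only): for every existing share network one
# "default" subnet row is created, carrying all of its network attributes,
# e.g.
#
#   INSERT INTO share_network_subnets
#       (id, share_network_id, neutron_net_id, neutron_subnet_id,
#        network_type, cidr, segmentation_id, gateway, mtu, ip_version,
#        created_at, updated_at, deleted_at, deleted)
#   VALUES (<new uuid>, <share_network.id>, <values copied from the network>);
#
# Every share server that referenced that network via share_network_id is
# then re-pointed at the new subnet via the share_network_subnet_id
# column, and the copied columns are dropped from 'share_networks'.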
revision = '805685098bd2' down_revision = '6a3fd2984bc31' from alembic import op from manila.db.migrations import utils from oslo_log import log from oslo_utils import uuidutils import sqlalchemy as sa LOG = log.getLogger(__name__) def upgrade(): # New table try: share_networks_fk_name = ( "fk_share_network_subnets_share_network_id_share_networks") availability_zones_fk_name = ( "fk_share_network_subnets_availaility_zone_id_availability_zones") share_network_subnets_table = op.create_table( 'share_network_subnets', sa.Column('id', sa.String(36), primary_key=True, nullable=False), sa.Column('neutron_net_id', sa.String(36), nullable=True), sa.Column('neutron_subnet_id', sa.String(36), nullable=True), sa.Column('network_type', sa.String(32), nullable=True), sa.Column('cidr', sa.String(64), nullable=True), sa.Column('segmentation_id', sa.Integer, nullable=True), sa.Column('gateway', sa.String(64), nullable=True), sa.Column('mtu', sa.Integer, nullable=True), sa.Column('share_network_id', sa.String(36), sa.ForeignKey( 'share_networks.id', name=share_networks_fk_name)), sa.Column('ip_version', sa.Integer, nullable=True), sa.Column('availability_zone_id', sa.String(36), sa.ForeignKey('availability_zones.id', name=availability_zones_fk_name)), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.String(36), default='False'), mysql_engine='InnoDB', mysql_charset='utf8' ) except Exception: LOG.error("Table |%s| not created!", 'share_network_subnets') raise share_serves_fk_name = ( "fk_share_servers_share_network_subnet_id_share_network_subnets") op.add_column( 'share_servers', sa.Column( 'share_network_subnet_id', sa.String(36), sa.ForeignKey('share_network_subnets.id', name=share_serves_fk_name), ) ) connection = op.get_bind() share_networks_table = utils.load_table('share_networks', connection) share_servers_table = utils.load_table('share_servers', connection) share_network_subnets = [] # Get all share_networks and move all their data to share network subnet for share_network in connection.execute(share_networks_table.select()): share_network_subnet = { 'id': uuidutils.generate_uuid(), 'neutron_net_id': share_network.neutron_net_id, 'neutron_subnet_id': share_network.neutron_subnet_id, 'network_type': share_network.network_type, 'cidr': share_network.cidr, 'segmentation_id': share_network.segmentation_id, 'gateway': share_network.gateway, 'mtu': share_network.mtu, 'share_network_id': share_network.id, 'ip_version': share_network.ip_version, 'created_at': share_network.created_at, 'updated_at': share_network.updated_at, 'deleted_at': share_network.deleted_at, 'deleted': share_network.deleted, } share_network_subnets.append(share_network_subnet) # Insertions for the new share network subnets op.bulk_insert(share_network_subnets_table, share_network_subnets) # Updates the field share server table with the share network subnet id for sns in share_network_subnets: share_servers = connection.execute(share_servers_table.select().where( share_servers_table.c.share_network_id == sns['share_network_id'] )) updated_data = {'share_network_subnet_id': sns['id']} _update_share_servers(share_servers, updated_data, share_servers_table) if connection.engine.name == 'mysql': # Drops necessary constraint from share servers table. Only mysql # needs constraint handling. 
Postgresql/sqlite don't op.drop_constraint("share_servers_ibfk_1", "share_servers", type_="foreignkey") op.drop_column('share_servers', 'share_network_id') op.drop_column('share_networks', 'neutron_net_id') op.drop_column('share_networks', 'neutron_subnet_id') op.drop_column('share_networks', 'network_type') op.drop_column('share_networks', 'segmentation_id') op.drop_column('share_networks', 'gateway') op.drop_column('share_networks', 'mtu') op.drop_column('share_networks', 'cidr') op.drop_column('share_networks', 'ip_version') def _update_share_servers(share_servers, updated_data, share_servers_table): for share_server in share_servers: # pylint: disable=no-value-for-parameter op.execute( share_servers_table.update().where( share_servers_table.c.id == share_server.id, ).values(updated_data) ) def retrieve_default_subnet(subnets): # NOTE (silvacarlose): A default subnet is that one which doesn't contain # an availability zone. If all the share networks contain an az, we can # retrieve whichever share network, then we pick up the first. for subnet in subnets: if subnet.availability_zone_id is None: return subnet return subnets[0] if subnets is not None else None def downgrade(): connection = op.get_bind() # Include again the removed fields in the share network table op.add_column('share_networks', sa.Column('neutron_net_id', sa.String(36), nullable=True)) op.add_column('share_networks', sa.Column('neutron_subnet_id', sa.String(36), nullable=True)) op.add_column('share_networks', sa.Column('network_type', sa.String(32), nullable=True)) op.add_column('share_networks', sa.Column('cidr', sa.String(64), nullable=True)) op.add_column('share_networks', sa.Column('gateway', sa.String(64), nullable=True)) op.add_column('share_networks', sa.Column('mtu', sa.Integer, nullable=True)) op.add_column('share_networks', sa.Column('segmentation_id', sa.Integer, nullable=True)) op.add_column('share_networks', sa.Column('ip_version', sa.Integer, nullable=True)) # Include again the removed field in the share server table op.add_column('share_servers', sa.Column('share_network_id', sa.String(36), sa.ForeignKey('share_networks.id', name="share_servers_ibfk_1"))) share_networks_table = utils.load_table('share_networks', connection) share_servers_table = utils.load_table('share_servers', connection) subnets_table = utils.load_table('share_network_subnets', connection) for share_network in connection.execute(share_networks_table.select()): network_subnets = connection.execute(subnets_table.select().where( subnets_table.c.share_network_id == share_network.id)) default_subnet = retrieve_default_subnet(network_subnets) if default_subnet is not None: op.execute( # pylint: disable=no-value-for-parameter share_networks_table.update().where( share_networks_table.c.id == share_network.id, ).values({ 'neutron_net_id': default_subnet.neutron_net_id, 'neutron_subnet_id': default_subnet.neutron_subnet_id, 'network_type': default_subnet.network_type, 'cidr': default_subnet.cidr, 'gateway': default_subnet.gateway, 'mtu': default_subnet.mtu, 'segmentation_id': default_subnet.segmentation_id, 'ip_version': default_subnet.ip_version, }) ) for network_subnet in network_subnets: share_servers = connection.execute( share_servers_table.select().where( share_servers_table.c.share_network_subnet_id == network_subnet.id)) updated_data = {'share_network_id': share_network.id} _update_share_servers(share_servers, updated_data, share_servers_table) share_serves_fk_name = ( "fk_share_servers_share_network_subnet_id_share_network_subnets") if 
connection.engine.name == 'mysql': op.drop_constraint(share_serves_fk_name, "share_servers", type_="foreignkey") op.drop_column('share_servers', 'share_network_subnet_id') try: op.drop_table('share_network_subnets') except Exception: LOG.error("Failed to drop 'share_network_subnets' table!") raise manila-10.0.0/manila/db/migrations/alembic/versions/e1949a93157a_add_share_group_types_table.py0000664000175000017500000001206613656750227032204 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add share group types table Revision ID: e1949a93157a Revises: 03da71c0e321 Create Date: 2016-06-01 10:41:06.410945 """ # revision identifiers, used by Alembic. revision = 'e1949a93157a' down_revision = '03da71c0e321' from alembic import op from oslo_log import log import sqlalchemy as sql LOG = log.getLogger(__name__) def upgrade(): meta = sql.MetaData() meta.bind = op.get_bind() # Add share group types try: op.create_table( 'share_group_types', sql.Column( 'id', sql.String(length=36), primary_key=True, nullable=False), sql.Column('created_at', sql.DateTime), sql.Column('updated_at', sql.DateTime), sql.Column('deleted_at', sql.DateTime), sql.Column('is_public', sql.Boolean()), sql.Column('name', sql.String(length=255)), sql.Column('deleted', sql.String(length=36)), sql.UniqueConstraint( 'name', 'deleted', name="uniq_share_group_type_name"), mysql_engine='InnoDB', ) except Exception: LOG.error("Table |%s| not created!", 'share_group_types') raise # Add share group specs try: op.create_table( 'share_group_type_specs', sql.Column('id', sql.Integer, primary_key=True, nullable=False), sql.Column('created_at', sql.DateTime), sql.Column('updated_at', sql.DateTime), sql.Column('deleted_at', sql.DateTime), sql.Column('spec_key', sql.String(length=255)), sql.Column('spec_value', sql.String(length=255)), sql.Column('deleted', sql.Integer), sql.Column( 'share_group_type_id', sql.String(36), sql.ForeignKey( 'share_group_types.id', name="sgtp_id_extra_specs_fk")), mysql_engine='InnoDB', ) except Exception: LOG.error("Table |%s| not created!", 'share_group_type_specs') raise # Add share group project types try: op.create_table( 'share_group_type_projects', sql.Column('id', sql.Integer, primary_key=True, nullable=False), sql.Column('created_at', sql.DateTime), sql.Column('updated_at', sql.DateTime), sql.Column('deleted_at', sql.DateTime), sql.Column( 'share_group_type_id', sql.String(36), sql.ForeignKey('share_group_types.id', name="sgtp_id_fk")), sql.Column('project_id', sql.String(length=255)), sql.Column('deleted', sql.Integer), sql.UniqueConstraint( 'share_group_type_id', 'project_id', 'deleted', name="sgtp_project_id_uc"), mysql_engine='InnoDB', ) except Exception: LOG.error("Table |%s| not created!", 'share_group_type_projects') raise # Add mapping between group types and share types op.create_table( 'share_group_type_share_type_mappings', sql.Column('id', sql.String(36), primary_key=True, nullable=False), sql.Column('created_at', sql.DateTime), sql.Column('updated_at', sql.DateTime), 
sql.Column('deleted_at', sql.DateTime), sql.Column('deleted', sql.String(36), default='False'), sql.Column( 'share_group_type_id', sql.String(length=36), sql.ForeignKey('share_group_types.id', name="sgtp_id_sgt_id_uc"), nullable=False), sql.Column( 'share_type_id', sql.String(length=36), sql.ForeignKey('share_types.id', name="sgtp_id_st_id_uc"), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8') # Add share group type for share groups op.add_column( 'share_groups', sql.Column( 'share_group_type_id', sql.String(36), sql.ForeignKey('share_group_types.id', name="sgt_id_sg_id_uc"), ) ) # TODO(ameade): Create type for existing consistency groups def downgrade(): # Remove share group type for share groups op.drop_constraint("sgt_id_sg_id_uc", "share_groups", type_="foreignkey") op.drop_column('share_groups', 'share_group_type_id') # Drop mappings for table_name in ('share_group_type_share_type_mappings', 'share_group_type_projects', 'share_group_type_specs', 'share_group_types'): try: op.drop_table(table_name) except Exception: LOG.error("%s table not dropped", table_name) raise ././@LongLink0000000000000000000000000000021300000000000011211 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/b10fb432c042_squash_share_group_snapshot_members_and_share_snapshot_instance_models.pymanila-10.0.0/manila/db/migrations/alembic/versions/b10fb432c042_squash_share_group_snapshot_members0000664000175000017500000001456513656750227033423 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Squash 'share_group_snapshot_members' and 'share_snapshot_instances' models. Revision ID: 31252d671ae5 Revises: 5237b6625330 Create Date: 2017-02-28 15:35:27.500063 """ # revision identifiers, used by Alembic. 
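# In short: the upgrade below widens 'share_snapshot_instances' with the
# columns it needs to also represent share group snapshot members
# (user_id, project_id, size, share_proto, share_group_snapshot_id),
# copies every 'share_group_snapshot_members' row into it unchanged, and
# drops the old table. The downgrade recreates the old table and copies
# the rows back; that only works because group-snapshot members remain
# distinguishable by a non-NULL share_group_snapshot_id.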
revision = '31252d671ae5' down_revision = '5237b6625330' from alembic import op import sqlalchemy as sa from manila.db.migrations import utils SSI_TABLE_NAME = 'share_snapshot_instances' SGSM_TABLE_NAME = 'share_group_snapshot_members' def upgrade(): # Update 'share_snapshot_instance' table with new fields op.add_column(SSI_TABLE_NAME, sa.Column('user_id', sa.String(255))) op.add_column(SSI_TABLE_NAME, sa.Column('project_id', sa.String(255))) op.add_column(SSI_TABLE_NAME, sa.Column('size', sa.Integer)) op.add_column(SSI_TABLE_NAME, sa.Column('share_proto', sa.String(255))) op.add_column( SSI_TABLE_NAME, sa.Column('share_group_snapshot_id', sa.String(36))) # Drop FK for 'snapshot_id' because it will be null in case of SGS member op.drop_constraint('ssi_snapshot_fk', SSI_TABLE_NAME, type_='foreignkey') # Move existing SG snapshot members to share snapshot instance table connection = op.get_bind() ssi_table = utils.load_table(SSI_TABLE_NAME, connection) ssgm_table = utils.load_table(SGSM_TABLE_NAME, connection) ported_data = [] for ssgm_record in connection.execute(ssgm_table.select()): ported_data.append({ "id": ssgm_record.id, "share_group_snapshot_id": ssgm_record.share_group_snapshot_id, "share_instance_id": ssgm_record.share_instance_id, "size": ssgm_record.size, "status": ssgm_record.status, "share_proto": ssgm_record.share_proto, "user_id": ssgm_record.user_id, "project_id": ssgm_record.project_id, "provider_location": ssgm_record.provider_location, "created_at": ssgm_record.created_at, "updated_at": ssgm_record.updated_at, "deleted_at": ssgm_record.deleted_at, "deleted": ssgm_record.deleted, }) op.bulk_insert(ssi_table, ported_data) # Delete 'share_group_snapshot_members' table op.drop_table(SGSM_TABLE_NAME) def downgrade(): # Create 'share_group_snapshot_members' table op.create_table( SGSM_TABLE_NAME, sa.Column('id', sa.String(36), primary_key=True, nullable=False), sa.Column('created_at', sa.DateTime), sa.Column('updated_at', sa.DateTime), sa.Column('deleted_at', sa.DateTime), sa.Column('deleted', sa.String(36), default='False'), sa.Column('user_id', sa.String(length=255), nullable=False), sa.Column('project_id', sa.String(length=255), nullable=False), sa.Column( 'share_group_snapshot_id', sa.String(length=36), sa.ForeignKey( 'share_group_snapshots.id', name='fk_gsm_group_snapshot_id'), nullable=False), sa.Column( 'share_instance_id', sa.String(length=36), sa.ForeignKey( 'share_instances.id', name='fk_gsm_share_instance_id'), nullable=False), sa.Column( 'share_id', sa.String(length=36), sa.ForeignKey('shares.id', name='fk_gsm_share_id'), nullable=False), sa.Column('size', sa.Integer), sa.Column('status', sa.String(length=255)), sa.Column('share_proto', sa.String(length=255)), sa.Column('provider_location', sa.String(255), nullable=True), mysql_engine='InnoDB', mysql_charset='utf8') # Select all share snapshot instances that # have not null 'share_snapshot_group_id' to new table connection = op.get_bind() ssi_table = utils.load_table(SSI_TABLE_NAME, connection) share_instances_table = utils.load_table("share_instances", connection) ssgm_table = utils.load_table(SGSM_TABLE_NAME, connection) ported_data = [] for row in connection.execute( ssi_table.join( share_instances_table, share_instances_table.c.id == ssi_table.c.share_instance_id ).select(use_labels=True).where( ssi_table.c.share_group_snapshot_id.isnot(None), )): ported_data.append({ "id": row.share_snapshot_instances_id, "share_group_snapshot_id": ( row.share_snapshot_instances_share_group_snapshot_id), "share_id": 
row.share_instances_share_id, "share_instance_id": row.share_instances_id, "size": row.share_snapshot_instances_size, "status": row.share_snapshot_instances_status, "share_proto": row.share_snapshot_instances_share_proto, "user_id": row.share_snapshot_instances_user_id, "project_id": row.share_snapshot_instances_project_id, "provider_location": ( row.share_snapshot_instances_provider_location), "created_at": row.share_snapshot_instances_created_at, "updated_at": row.share_snapshot_instances_updated_at, "deleted_at": row.share_snapshot_instances_deleted_at, "deleted": row.share_snapshot_instances_deleted or "False", }) # Copy share group snapshot members to new table op.bulk_insert(ssgm_table, ported_data) # Remove copied records from source table connection.execute( ssi_table.delete().where( # pylint: disable=no-value-for-parameter ssi_table.c.share_group_snapshot_id.isnot(None))) # Remove redundant fields from 'share_snapshot_instance' table for column_name in ('user_id', 'project_id', 'size', 'share_proto', 'share_group_snapshot_id'): op.drop_column(SSI_TABLE_NAME, column_name) # Add back FK for 'snapshot_id' field op.create_foreign_key( 'ssi_snapshot_fk', SSI_TABLE_NAME, 'share_snapshots', ['snapshot_id'], ['id']) ././@LongLink0000000000000000000000000000016600000000000011220 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/eb6d5544cbbd_add_provider_location_to_share_snapshot_instances.pymanila-10.0.0/manila/db/migrations/alembic/versions/eb6d5544cbbd_add_provider_location_to_share_snap0000664000175000017500000000205113656750227033553 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add provider_location to share_snapshot_instances Revision ID: eb6d5544cbbd Revises: 5155c7077f99 Create Date: 2016-02-12 22:25:39.594545 """ # revision identifiers, used by Alembic. revision = 'eb6d5544cbbd' down_revision = '5155c7077f99' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'share_snapshot_instances', sa.Column('provider_location', sa.String(255), nullable=True)) def downgrade(): op.drop_column('share_snapshot_instances', 'provider_location') manila-10.0.0/manila/db/migrations/alembic/versions/27cb96d991fa_add_description_for_share_type.py0000664000175000017500000000254013656750227033047 0ustar zuulzuul00000000000000# Copyright (c) 2017 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
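# The change below is a single nullable column; on MySQL the emitted DDL
# is roughly:
#
#   ALTER TABLE share_types ADD COLUMN description VARCHAR(255) NULL;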
"""add description for share type Revision ID: 27cb96d991fa Revises: 829a09b0ddd4 Create Date: 2017-09-16 03:07:15.548947 """ # revision identifiers, used by Alembic. revision = '27cb96d991fa' down_revision = '829a09b0ddd4' from alembic import op from oslo_log import log import sqlalchemy as sa LOG = log.getLogger(__name__) def upgrade(): try: op.add_column( 'share_types', sa.Column('description', sa.String(255), nullable=True)) except Exception: LOG.error("Column share_types.description not created!") raise def downgrade(): try: op.drop_column('share_types', 'description') except Exception: LOG.error("Column share_types.description not dropped!") raise ././@LongLink0000000000000000000000000000016300000000000011215 Lustar 00000000000000manila-10.0.0/manila/db/migrations/alembic/versions/55761e5f59c5_add_snapshot_support_extra_spec_to_share_types.pymanila-10.0.0/manila/db/migrations/alembic/versions/55761e5f59c5_add_snapshot_support_extra_spec_to_0000664000175000017500000001015513656750227033361 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add 'snapshot_support' extra spec to share types Revision ID: 55761e5f59c5 Revises: 1f0bd302c1a6 Create Date: 2015-08-13 14:02:54.656864 """ # revision identifiers, used by Alembic. revision = '55761e5f59c5' down_revision = '1f0bd302c1a6' from alembic import op from oslo_utils import timeutils import sqlalchemy as sa from sqlalchemy.sql import table from manila.common import constants def upgrade(): """Performs DB upgrade to support feature of making snapshots optional. Add 'snapshot_support' extra spec to all share types and attr 'snapshot_support' to Share model. """ session = sa.orm.Session(bind=op.get_bind().connect()) es_table = table( 'share_type_extra_specs', sa.Column('created_at', sa.DateTime), sa.Column('deleted', sa.Integer), sa.Column('share_type_id', sa.String(length=36)), sa.Column('spec_key', sa.String(length=255)), sa.Column('spec_value', sa.String(length=255))) st_table = table( 'share_types', sa.Column('deleted', sa.Integer), sa.Column('id', sa.Integer)) # NOTE(vponomaryov): field 'deleted' is integer here. existing_extra_specs = (session.query(es_table). filter(es_table.c.spec_key == constants.ExtraSpecs.SNAPSHOT_SUPPORT). filter(es_table.c.deleted == 0). all()) exclude_st_ids = [es.share_type_id for es in existing_extra_specs] # NOTE(vponomaryov): field 'deleted' is string here. share_types = (session.query(st_table). filter(st_table.c.deleted.in_(('0', 'False', ))). filter(st_table.c.id.notin_(exclude_st_ids)). 
all()) session.close_all() extra_specs = [] now = timeutils.utcnow() for st in share_types: extra_specs.append({ 'spec_key': constants.ExtraSpecs.SNAPSHOT_SUPPORT, 'spec_value': 'True', 'deleted': 0, 'created_at': now, 'share_type_id': st.id, }) if extra_specs: op.bulk_insert(es_table, extra_specs) # NOTE(vponomaryov): shares that were created before applying this # migration can have incorrect value because they were created without # consideration of driver capability to create snapshots. op.add_column('shares', sa.Column('snapshot_support', sa.Boolean, default=True)) connection = op.get_bind().connect() shares = sa.Table( 'shares', sa.MetaData(), autoload=True, autoload_with=connection) # pylint: disable=no-value-for-parameter update = shares.update().where(shares.c.deleted == 'False').values( snapshot_support=True) connection.execute(update) def downgrade(): """Performs DB downgrade removing support of 'optional snapshots' feature. Remove 'snapshot_support' extra spec from all share types and attr 'snapshot_support' from Share model. """ connection = op.get_bind().connect() extra_specs = sa.Table( 'share_type_extra_specs', sa.MetaData(), autoload=True, autoload_with=connection) # pylint: disable=no-value-for-parameter update = extra_specs.update().where( extra_specs.c.spec_key == constants.ExtraSpecs.SNAPSHOT_SUPPORT).where( extra_specs.c.deleted == 0).values( deleted=extra_specs.c.id, deleted_at=timeutils.utcnow(), ) connection.execute(update) op.drop_column('shares', 'snapshot_support') manila-10.0.0/manila/db/migrations/alembic/versions/162a3e673105_manila_init.py0000664000175000017500000004103413656750227026645 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack LLC. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """manila_init Revision ID: 162a3e673105 Revises: None Create Date: 2014-07-23 17:51:57.077203 """ # revision identifiers, used by Alembic. 
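# The initial schema below is created in foreign-key dependency order.
# A minimal sketch of those dependencies, where "->" means "has a foreign
# key to", as encoded in the table definitions that follow:
#
#   share_servers, shares               -> share_networks
#   shares, network_allocations,
#   share_server_backend_details        -> share_servers
#   share_access_map, share_snapshots,
#   share_metadata                      -> shares
#   reservations                        -> quota_usages
#   volume_type_extra_specs             -> volume_types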
revision = '162a3e673105' down_revision = None from alembic import op from oslo_log import log from sqlalchemy import Boolean, Column, DateTime, ForeignKey from sqlalchemy import Integer, MetaData, String, Table, UniqueConstraint LOG = log.getLogger(__name__) def upgrade(): migrate_engine = op.get_bind().engine meta = MetaData() meta.bind = migrate_engine services = Table( 'services', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer, default=0), Column('id', Integer, primary_key=True, nullable=False), Column('host', String(length=255)), Column('binary', String(length=255)), Column('topic', String(length=255)), Column('report_count', Integer, nullable=False), Column('disabled', Boolean), Column('availability_zone', String(length=255)), mysql_engine='InnoDB', mysql_charset='utf8' ) quotas = Table( 'quotas', meta, Column('id', Integer, primary_key=True, nullable=False), Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer, default=0), Column('project_id', String(length=255)), Column('resource', String(length=255), nullable=False), Column('hard_limit', Integer), mysql_engine='InnoDB', mysql_charset='utf8' ) quota_classes = Table( 'quota_classes', meta, Column('created_at', DateTime(timezone=False)), Column('updated_at', DateTime(timezone=False)), Column('deleted_at', DateTime(timezone=False)), Column('deleted', Integer, default=0), Column('id', Integer(), primary_key=True), Column('class_name', String(length=255, convert_unicode=True, unicode_error=None, _warn_on_bytestring=False), index=True), Column('resource', String(length=255, convert_unicode=True, unicode_error=None, _warn_on_bytestring=False)), Column('hard_limit', Integer(), nullable=True), mysql_engine='InnoDB', mysql_charset='utf8', ) quota_usages = Table( 'quota_usages', meta, Column('created_at', DateTime(timezone=False)), Column('updated_at', DateTime(timezone=False)), Column('deleted_at', DateTime(timezone=False)), Column('deleted', Integer, default=0), Column('id', Integer(), primary_key=True), Column('user_id', String(length=255)), Column('project_id', String(length=255, convert_unicode=True, unicode_error=None, _warn_on_bytestring=False), index=True), Column('resource', String(length=255, convert_unicode=True, unicode_error=None, _warn_on_bytestring=False)), Column('in_use', Integer(), nullable=False), Column('reserved', Integer(), nullable=False), Column('until_refresh', Integer(), nullable=True), mysql_engine='InnoDB', mysql_charset='utf8', ) reservations = Table( 'reservations', meta, Column('created_at', DateTime(timezone=False)), Column('updated_at', DateTime(timezone=False)), Column('deleted_at', DateTime(timezone=False)), Column('deleted', Integer, default=0), Column('id', Integer(), primary_key=True), Column('user_id', String(length=255)), Column('uuid', String(length=36, convert_unicode=True, unicode_error=None, _warn_on_bytestring=False), nullable=False), Column('usage_id', Integer(), ForeignKey('quota_usages.id'), nullable=False), Column('project_id', String(length=255, convert_unicode=True, unicode_error=None, _warn_on_bytestring=False), index=True), Column('resource', String(length=255, convert_unicode=True, unicode_error=None, _warn_on_bytestring=False)), Column('delta', Integer(), nullable=False), Column('expire', DateTime(timezone=False)), mysql_engine='InnoDB', mysql_charset='utf8', ) project_user_quotas = Table( 'project_user_quotas', meta, Column('id', Integer, 
primary_key=True, nullable=False), Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer, default=0), Column('user_id', String(length=255), nullable=False), Column('project_id', String(length=255), nullable=False), Column('resource', String(length=25), nullable=False), Column('hard_limit', Integer, nullable=True), mysql_engine='InnoDB', mysql_charset='utf8', ) shares = Table( 'shares', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('user_id', String(length=255)), Column('project_id', String(length=255)), Column('host', String(length=255)), Column('size', Integer), Column('availability_zone', String(length=255)), Column('status', String(length=255)), Column('scheduled_at', DateTime), Column('launched_at', DateTime), Column('terminated_at', DateTime), Column('display_name', String(length=255)), Column('display_description', String(length=255)), Column('snapshot_id', String(length=36)), Column('share_network_id', String(length=36), ForeignKey('share_networks.id'), nullable=True), Column('share_server_id', String(length=36), ForeignKey('share_servers.id'), nullable=True), Column('share_proto', String(255)), Column('export_location', String(255)), Column('volume_type_id', String(length=36)), mysql_engine='InnoDB', mysql_charset='utf8' ) access_map = Table( 'share_access_map', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('share_id', String(36), ForeignKey('shares.id'), nullable=False), Column('access_type', String(255)), Column('access_to', String(255)), Column('state', String(255)), mysql_engine='InnoDB', mysql_charset='utf8' ) share_snapshots = Table( 'share_snapshots', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('user_id', String(length=255)), Column('project_id', String(length=255)), Column('share_id', String(36), ForeignKey('shares.id'), nullable=False), Column('size', Integer), Column('status', String(length=255)), Column('progress', String(length=255)), Column('display_name', String(length=255)), Column('display_description', String(length=255)), Column('share_size', Integer), Column('share_proto', String(length=255)), Column('export_location', String(255)), mysql_engine='InnoDB', mysql_charset='utf8' ) share_metadata = Table( 'share_metadata', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer, default=0), Column('id', Integer, primary_key=True, nullable=False), Column('share_id', String(length=36), ForeignKey('shares.id'), nullable=False), Column('key', String(length=255), nullable=False), Column('value', String(length=1023), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8' ) security_services = Table( 'security_services', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('project_id', String(length=36), 
nullable=False), Column('type', String(length=32), nullable=False), Column('dns_ip', String(length=64), nullable=True), Column('server', String(length=255), nullable=True), Column('domain', String(length=255), nullable=True), Column('user', String(length=255), nullable=True), Column('password', String(length=255), nullable=True), Column('name', String(length=255), nullable=True), Column('description', String(length=255), nullable=True), Column('status', String(length=16)), mysql_engine='InnoDB', mysql_charset='utf8', ) share_networks = Table( 'share_networks', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('project_id', String(length=36), nullable=False), Column('user_id', String(length=36)), Column('neutron_net_id', String(length=36), nullable=True), Column('neutron_subnet_id', String(length=36), nullable=True), Column('network_type', String(length=32), nullable=True), Column('segmentation_id', Integer, nullable=True), Column('cidr', String(length=64), nullable=True), Column('ip_version', Integer, nullable=True), Column('name', String(length=255), nullable=True), Column('description', String(length=255), nullable=True), mysql_engine='InnoDB', mysql_charset='utf8', ) share_servers = Table( 'share_servers', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('share_network_id', String(length=36), ForeignKey('share_networks.id'), nullable=True), Column('host', String(length=255), nullable=True), Column('status', String(length=32)), mysql_engine='InnoDB', mysql_charset='utf8', ) share_server_backend_details = Table( 'share_server_backend_details', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default=0), Column('id', Integer, primary_key=True, nullable=False), Column('share_server_id', String(length=36), ForeignKey('share_servers.id'), nullable=False), Column('key', String(length=255), nullable=False), Column('value', String(length=1023), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8' ) network_allocations = Table( 'network_allocations', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('ip_address', String(length=64), nullable=True), Column('mac_address', String(length=32), nullable=True), Column('share_server_id', String(length=36), ForeignKey('share_servers.id'), nullable=False), Column('status', String(length=32)), mysql_engine='InnoDB', mysql_charset='utf8', ) ss_nw_association = Table( 'share_network_security_service_association', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Integer, default=0), Column('id', Integer, primary_key=True, nullable=False), Column('share_network_id', String(length=36), ForeignKey('share_networks.id'), nullable=False), Column('security_service_id', String(length=36), ForeignKey('security_services.id'), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8', ) volume_types = Table( 'volume_types', meta, Column('created_at', DateTime), 
Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', String(length=36), default='False'), Column('id', String(length=36), primary_key=True, nullable=False), Column('name', String(length=255)), UniqueConstraint('name', 'deleted', name='vt_name_uc'), mysql_engine='InnoDB' ) volume_type_extra_specs = Table( 'volume_type_extra_specs', meta, Column('created_at', DateTime), Column('updated_at', DateTime), Column('deleted_at', DateTime), Column('deleted', Boolean), Column('id', Integer, primary_key=True, nullable=False), Column('volume_type_id', String(length=36), ForeignKey('volume_types.id'), nullable=False), Column('key', String(length=255)), Column('value', String(length=255)), mysql_engine='InnoDB' ) # create all tables # Take care on create order for those with FK dependencies tables = [quotas, services, quota_classes, quota_usages, reservations, project_user_quotas, security_services, share_networks, ss_nw_association, share_servers, network_allocations, shares, access_map, share_snapshots, share_server_backend_details, share_metadata, volume_types, volume_type_extra_specs] for table in tables: if not table.exists(): try: table.create() except Exception: LOG.info(repr(table)) LOG.exception('Exception while creating table.') raise if migrate_engine.name == "mysql": tables = ["quotas", "services", "quota_classes", "quota_usages", "reservations", "project_user_quotas", "share_access_map", "share_snapshots", "share_metadata", "security_services", "share_networks", "network_allocations", "shares", "share_servers", "share_network_security_service_association", "volume_types", "volume_type_extra_specs", "share_server_backend_details"] migrate_engine.execute("SET foreign_key_checks = 0") for table in tables: migrate_engine.execute( "ALTER TABLE %s CONVERT TO CHARACTER SET utf8" % table) migrate_engine.execute("SET foreign_key_checks = 1") migrate_engine.execute( "ALTER DATABASE %s DEFAULT CHARACTER SET utf8" % migrate_engine.url.database) migrate_engine.execute("ALTER TABLE %s Engine=InnoDB" % table) def downgrade(): raise NotImplementedError('Downgrade from initial Manila install is not' ' supported.') manila-10.0.0/manila/db/migrations/alembic/versions/48a7beae3117_move_share_type_id_to_instances.py0000664000175000017500000000621313656750227033224 0ustar zuulzuul00000000000000# Copyright 2016, Hitachi Data Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """move_share_type_id_to_instances Revision ID: 48a7beae3117 Revises: 63809d875e32 Create Date: 2016-07-19 13:04:50.035139 """ # revision identifiers, used by Alembic. revision = '48a7beae3117' down_revision = '63809d875e32' from alembic import op import sqlalchemy as sa from manila.db.migrations import utils def upgrade(): """Move share_type_id from Shares to Share Instances table.""" # NOTE(ganso): Adding share_type_id as a foreign key to share_instances # table. Please note that share_type_id is NOT a foreign key in shares # table prior to this migration. 
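    # Roughly equivalent DDL for the column addition (MySQL flavour shown;
    # Alembic generates the backend-specific statement):
    #
    #   ALTER TABLE share_instances
    #       ADD COLUMN share_type_id VARCHAR(36) NULL,
    #       ADD CONSTRAINT si_st_id_fk FOREIGN KEY (share_type_id)
    #           REFERENCES share_types (id);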
op.add_column( 'share_instances', sa.Column('share_type_id', sa.String(36), sa.ForeignKey('share_types.id', name='si_st_id_fk'), nullable=True)) connection = op.get_bind() shares_table = utils.load_table('shares', connection) share_instances_table = utils.load_table('share_instances', connection) for instance in connection.execute(share_instances_table.select()): share = connection.execute(shares_table.select().where( instance['share_id'] == shares_table.c.id)).first() # pylint: disable=no-value-for-parameter op.execute(share_instances_table.update().where( share_instances_table.c.id == instance['id']).values( {'share_type_id': share['share_type_id']})) op.drop_column('shares', 'share_type_id') def downgrade(): """Move share_type_id from Share Instances to Shares table. This method can lead to data loss because only the share_type_id from the first share instance is moved to the shares table. """ # NOTE(ganso): Adding back share_type_id to the shares table NOT as a # foreign key, as it was before. op.add_column( 'shares', sa.Column('share_type_id', sa.String(36), nullable=True)) connection = op.get_bind() shares_table = utils.load_table('shares', connection) share_instances_table = utils.load_table('share_instances', connection) for share in connection.execute(shares_table.select()): instance = connection.execute(share_instances_table.select().where( share['id'] == share_instances_table.c.share_id)).first() # pylint: disable=no-value-for-parameter op.execute(shares_table.update().where( shares_table.c.id == instance['share_id']).values( {'share_type_id': instance['share_type_id']})) op.drop_constraint('si_st_id_fk', 'share_instances', type_='foreignkey') op.drop_column('share_instances', 'share_type_id') manila-10.0.0/manila/db/migrations/alembic/env.py0000664000175000017500000000245513656750227021636 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import with_statement from alembic import context from manila.db.sqlalchemy import api as db_api from manila.db.sqlalchemy import models as db_models def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ engine = db_api.get_engine() connection = engine.connect() target_metadata = db_models.ManilaBase.metadata # pylint: disable=no-member context.configure(connection=connection, target_metadata=target_metadata) try: with context.begin_transaction(): context.run_migrations() finally: connection.close() run_migrations_online() manila-10.0.0/manila/db/migrations/alembic/migration.py0000664000175000017500000000451613656750227023037 0ustar zuulzuul00000000000000# Copyright 2014 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import alembic from alembic import config as alembic_config import alembic.migration as alembic_migration # pylint: disable=import-error from oslo_config import cfg from manila.db.sqlalchemy import api as db_api CONF = cfg.CONF def _alembic_config(): path = os.path.join(os.path.dirname(__file__), os.pardir, 'alembic.ini') config = alembic_config.Config(path) return config def version(): """Current database version. :returns: Database version :rtype: string """ engine = db_api.get_engine() with engine.connect() as conn: context = alembic_migration.MigrationContext.configure(conn) return context.get_current_revision() def upgrade(revision): """Upgrade database. :param version: Desired database version :type version: string """ return alembic.command.upgrade(_alembic_config(), revision or 'head') def downgrade(revision): """Downgrade database. :param version: Desired database version :type version: string """ return alembic.command.downgrade(_alembic_config(), revision or 'base') def stamp(revision): """Stamp database with provided revision. Don't run any migrations. :param revision: Should match one from repository or head - to stamp database with most recent revision :type revision: string """ return alembic.command.stamp(_alembic_config(), revision or 'head') def revision(message=None, autogenerate=False): """Create template for migration. :param message: Text that will be used for migration title :type message: string :param autogenerate: If True - generates diff based on current database state :type autogenerate: bool """ return alembic.command.revision(_alembic_config(), message, autogenerate) manila-10.0.0/manila/db/migrations/alembic/script.py.mako0000664000175000017500000000167113656750227023277 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """${message} Revision ID: ${up_revision} Revises: ${down_revision} Create Date: ${create_date} """ # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} from alembic import op import sqlalchemy as sa ${imports if imports else ""} def upgrade(): ${upgrades if upgrades else "pass"} def downgrade(): ${downgrades if downgrades else "pass"} manila-10.0.0/manila/db/migration.py0000664000175000017500000000272513656750227017267 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Database setup and migration commands.""" from manila import utils IMPL = utils.LazyPluggable( 'db_backend', sqlalchemy='manila.db.migrations.alembic.migration') def upgrade(version): """Upgrade database to 'version' or the most recent version.""" return IMPL.upgrade(version) def downgrade(version): """Downgrade database to 'version' or to initial state.""" return IMPL.downgrade(version) def version(): """Display the current database version.""" return IMPL.version() def stamp(version): """Stamp database with 'version' or the most recent version.""" return IMPL.stamp(version) def revision(message, autogenerate): """Generate new migration script.""" return IMPL.revision(message, autogenerate) manila-10.0.0/manila/db/base.py0000664000175000017500000000252613656750227016207 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base class for classes that need modular database access.""" from oslo_config import cfg from oslo_utils import importutils db_driver_opt = cfg.StrOpt('db_driver', default='manila.db', help='Driver to use for database access.') CONF = cfg.CONF CONF.register_opt(db_driver_opt) class Base(object): """DB driver is injected in the init method.""" def __init__(self, db_driver=None): super(Base, self).__init__() if not db_driver: db_driver = CONF.db_driver self.db = importutils.import_module(db_driver) # pylint: disable=C0103 manila-10.0.0/manila/db/sqlalchemy/0000775000175000017500000000000013656750362017060 5ustar zuulzuul00000000000000manila-10.0.0/manila/db/sqlalchemy/utils.py0000664000175000017500000000351013656750227020571 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Implementation of paginate query.""" from manila import exception import sqlalchemy def paginate_query(query, model, limit, sort_key='created_at', sort_dir='desc', offset=None): """Returns a query with sorting / pagination criteria added. 
:param query: the query object to which we should add paging/sorting :param model: the ORM model class :param limit: maximum number of items to return :param sort_key: attributes by which results should be sorted, default is created_at :param sort_dir: direction in which results should be sorted (asc, desc) :param offset: the number of items to skip from the marker or from the first element. :rtype: sqlalchemy.orm.query.Query :return: The query with sorting/pagination added. """ try: sort_key_attr = getattr(model, sort_key) except AttributeError: raise exception.InvalidInput(reason='Invalid sort key %s' % sort_key) if sort_dir == 'desc': query = query.order_by(sqlalchemy.desc(sort_key_attr)) else: query = query.order_by(sqlalchemy.asc(sort_key_attr)) if limit is not None: query = query.limit(limit) if offset: query = query.offset(offset) return query manila-10.0.0/manila/db/sqlalchemy/__init__.py0000664000175000017500000000000013656750227021157 0ustar zuulzuul00000000000000manila-10.0.0/manila/db/sqlalchemy/api.py0000664000175000017500000056161513656750227020221 0ustar zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright (c) 2014 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
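# NOTE(editor): a minimal, self-contained illustration of what the
# paginate_query() helper above appends to a query. The toy 'Item' model and
# the in-memory SQLite engine are illustrative only, not part of Manila.
import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Item(Base):
    __tablename__ = 'items'
    id = sa.Column(sa.Integer, primary_key=True)
    created_at = sa.Column(sa.DateTime)


engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = orm.sessionmaker(bind=engine)()

# paginate_query(query, Item, limit=10, sort_key='created_at',
#                sort_dir='desc', offset=20) boils down to:
query = session.query(Item)
query = query.order_by(sa.desc(Item.created_at)).limit(10).offset(20)
print(query.all())  # -> [] on the empty toy database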
"""Implementation of SQLAlchemy backend.""" import copy import datetime from functools import wraps import ipaddress import sys import warnings # NOTE(uglide): Required to override default oslo_db Query class import manila.db.sqlalchemy.query # noqa from oslo_config import cfg from oslo_db import api as oslo_db_api from oslo_db import exception as db_exc from oslo_db import exception as db_exception from oslo_db import options as db_options from oslo_db.sqlalchemy import session from oslo_db.sqlalchemy import utils as db_utils from oslo_log import log from oslo_utils import excutils from oslo_utils import timeutils from oslo_utils import uuidutils import six from sqlalchemy import MetaData from sqlalchemy import or_ from sqlalchemy.orm import joinedload from sqlalchemy.orm import subqueryload from sqlalchemy.sql.expression import literal from sqlalchemy.sql.expression import true from sqlalchemy.sql import func from manila.common import constants from manila.db.sqlalchemy import models from manila.db.sqlalchemy import utils from manila import exception from manila.i18n import _ from manila import quota CONF = cfg.CONF LOG = log.getLogger(__name__) QUOTAS = quota.QUOTAS _DEFAULT_QUOTA_NAME = 'default' PER_PROJECT_QUOTAS = [] _FACADE = None _DEFAULT_SQL_CONNECTION = 'sqlite://' db_options.set_defaults(cfg.CONF, connection=_DEFAULT_SQL_CONNECTION) def _create_facade_lazily(): global _FACADE if _FACADE is None: _FACADE = session.EngineFacade.from_config(cfg.CONF) return _FACADE def get_engine(): facade = _create_facade_lazily() return facade.get_engine() def get_session(**kwargs): facade = _create_facade_lazily() return facade.get_session(**kwargs) def get_backend(): """The backend is this module itself.""" return sys.modules[__name__] def is_admin_context(context): """Indicates if the request context is an administrator.""" if not context: warnings.warn(_('Use of empty request context is deprecated'), DeprecationWarning) raise Exception('die') return context.is_admin def is_user_context(context): """Indicates if the request context is a normal user.""" if not context: return False if context.is_admin: return False if not context.user_id or not context.project_id: return False return True def authorize_project_context(context, project_id): """Ensures a request has permission to access the given project.""" if is_user_context(context): if not context.project_id: raise exception.NotAuthorized() elif context.project_id != project_id: raise exception.NotAuthorized() def authorize_user_context(context, user_id): """Ensures a request has permission to access the given user.""" if is_user_context(context): if not context.user_id: raise exception.NotAuthorized() elif context.user_id != user_id: raise exception.NotAuthorized() def authorize_quota_class_context(context, class_name): """Ensures a request has permission to access the given quota class.""" if is_user_context(context): if not context.quota_class: raise exception.NotAuthorized() elif context.quota_class != class_name: raise exception.NotAuthorized() def require_admin_context(f): """Decorator to require admin request context. The first argument to the wrapped function must be the context. """ @wraps(f) def wrapper(*args, **kwargs): if not is_admin_context(args[0]): raise exception.AdminRequired() return f(*args, **kwargs) return wrapper def require_context(f): """Decorator to require *any* user or admin context. 
This does no authorization for user or project access matching, see :py:func:`authorize_project_context` and :py:func:`authorize_user_context`. The first argument to the wrapped function must be the context. """ @wraps(f) def wrapper(*args, **kwargs): if not is_admin_context(args[0]) and not is_user_context(args[0]): raise exception.NotAuthorized() return f(*args, **kwargs) return wrapper def require_share_exists(f): """Decorator to require the specified share to exist. Requires the wrapped function to use context and share_id as their first two arguments. """ @wraps(f) def wrapper(context, share_id, *args, **kwargs): share_get(context, share_id) return f(context, share_id, *args, **kwargs) wrapper.__name__ = f.__name__ return wrapper def require_share_instance_exists(f): """Decorator to require the specified share instance to exist. Requires the wrapped function to use context and share_instance_id as their first two arguments. """ @wraps(f) def wrapper(context, share_instance_id, *args, **kwargs): share_instance_get(context, share_instance_id) return f(context, share_instance_id, *args, **kwargs) wrapper.__name__ = f.__name__ return wrapper def apply_sorting(model, query, sort_key, sort_dir): if sort_dir.lower() not in ('desc', 'asc'): msg = _("Wrong sorting data provided: sort key is '%(sort_key)s' " "and sort direction is '%(sort_dir)s'.") % { "sort_key": sort_key, "sort_dir": sort_dir} raise exception.InvalidInput(reason=msg) sort_attr = getattr(model, sort_key) sort_method = getattr(sort_attr, sort_dir.lower()) return query.order_by(sort_method()) def handle_db_data_error(f): def wrapper(*args, **kwargs): try: return f(*args, **kwargs) except db_exc.DBDataError: msg = _('Error writing field to database.') LOG.exception(msg) raise exception.Invalid(msg) return wrapper def model_query(context, model, *args, **kwargs): """Query helper that accounts for context's `read_deleted` field. :param context: context to query under :param model: model to query. Must be a subclass of ModelBase. :param session: if present, the session to use :param read_deleted: if present, overrides context's read_deleted field. :param project_only: if present and context is user-type, then restrict query to match the context's project_id. """ session = kwargs.get('session') or get_session() read_deleted = kwargs.get('read_deleted') or context.read_deleted project_only = kwargs.get('project_only') kwargs = dict() if project_only and not context.is_admin: kwargs['project_id'] = context.project_id if read_deleted in ('no', 'n', False): kwargs['deleted'] = False elif read_deleted in ('yes', 'y', True): kwargs['deleted'] = True return db_utils.model_query( model=model, session=session, args=args, **kwargs) def exact_filter(query, model, filters, legal_keys, created_at_key='created_at'): """Applies exact match filtering to a query. Returns the updated query. Modifies filters argument to remove filters consumed. :param query: query to apply filters to :param model: model object the query applies to, for IN-style filtering :param filters: dictionary of filters; values that are lists, tuples, sets, or frozensets cause an 'IN' test to be performed, while exact matching ('==' operator) is used for other values :param legal_keys: list of keys to apply exact filtering to """ filter_dict = {} created_at_attr = getattr(model, created_at_key, None) # Walk through all the keys for key in legal_keys: # Skip ones we're not filtering on if key not in filters: continue # OK, filtering on this key; what value do we search for? 
value = filters.pop(key) if key == 'created_since' and created_at_attr: # This is a reserved query parameter to indicate resources created # after a particular datetime value = timeutils.normalize_time(value) query = query.filter(created_at_attr.op('>=')(value)) elif key == 'created_before' and created_at_attr: # This is a reserved query parameter to indicate resources created # before a particular datetime value = timeutils.normalize_time(value) query = query.filter(created_at_attr.op('<=')(value)) elif isinstance(value, (list, tuple, set, frozenset)): # Looking for values in a list; apply to query directly column_attr = getattr(model, key) query = query.filter(column_attr.in_(value)) else: # OK, simple exact match; save for later filter_dict[key] = value # Apply simple exact matches if filter_dict: query = query.filter_by(**filter_dict) return query def ensure_model_dict_has_id(model_dict): if not model_dict.get('id'): model_dict['id'] = uuidutils.generate_uuid() return model_dict def _sync_shares(context, project_id, user_id, session, share_type_id=None): (shares, gigs) = share_data_get_for_project( context, project_id, user_id, share_type_id=share_type_id, session=session) return {'shares': shares} def _sync_snapshots(context, project_id, user_id, session, share_type_id=None): (snapshots, gigs) = snapshot_data_get_for_project( context, project_id, user_id, share_type_id=share_type_id, session=session) return {'snapshots': snapshots} def _sync_gigabytes(context, project_id, user_id, session, share_type_id=None): _junk, share_gigs = share_data_get_for_project( context, project_id, user_id, share_type_id=share_type_id, session=session) return {"gigabytes": share_gigs} def _sync_snapshot_gigabytes(context, project_id, user_id, session, share_type_id=None): _junk, snapshot_gigs = snapshot_data_get_for_project( context, project_id, user_id, share_type_id=share_type_id, session=session) return {"snapshot_gigabytes": snapshot_gigs} def _sync_share_networks(context, project_id, user_id, session, share_type_id=None): share_networks_count = count_share_networks( context, project_id, user_id, share_type_id=share_type_id, session=session) return {'share_networks': share_networks_count} def _sync_share_groups(context, project_id, user_id, session, share_type_id=None): share_groups_count = count_share_groups( context, project_id, user_id, share_type_id=share_type_id, session=session) return {'share_groups': share_groups_count} def _sync_share_group_snapshots(context, project_id, user_id, session, share_type_id=None): share_group_snapshots_count = count_share_group_snapshots( context, project_id, user_id, share_type_id=share_type_id, session=session) return {'share_group_snapshots': share_group_snapshots_count} def _sync_share_replicas(context, project_id, user_id, session, share_type_id=None): share_replicas_count, _junk = share_replica_data_get_for_project( context, project_id, user_id, session, share_type_id=share_type_id) return {'share_replicas': share_replicas_count} def _sync_replica_gigabytes(context, project_id, user_id, session, share_type_id=None): _junk, replica_gigs = share_replica_data_get_for_project( context, project_id, user_id, session, share_type_id=share_type_id) return {'replica_gigabytes': replica_gigs} QUOTA_SYNC_FUNCTIONS = { '_sync_shares': _sync_shares, '_sync_snapshots': _sync_snapshots, '_sync_gigabytes': _sync_gigabytes, '_sync_snapshot_gigabytes': _sync_snapshot_gigabytes, '_sync_share_networks': _sync_share_networks, '_sync_share_groups': _sync_share_groups, 
'_sync_share_group_snapshots': _sync_share_group_snapshots, '_sync_share_replicas': _sync_share_replicas, '_sync_replica_gigabytes': _sync_replica_gigabytes, } ################### @require_admin_context def service_destroy(context, service_id): session = get_session() with session.begin(): service_ref = service_get(context, service_id, session=session) service_ref.soft_delete(session) @require_admin_context def service_get(context, service_id, session=None): result = (model_query( context, models.Service, session=session). filter_by(id=service_id). first()) if not result: raise exception.ServiceNotFound(service_id=service_id) return result @require_admin_context def service_get_all(context, disabled=None): query = model_query(context, models.Service) if disabled is not None: query = query.filter_by(disabled=disabled) return query.all() @require_admin_context def service_get_all_by_topic(context, topic): return (model_query( context, models.Service, read_deleted="no"). filter_by(disabled=False). filter_by(topic=topic). all()) @require_admin_context def service_get_by_host_and_topic(context, host, topic): result = (model_query( context, models.Service, read_deleted="no"). filter_by(disabled=False). filter_by(host=host). filter_by(topic=topic). first()) if not result: raise exception.ServiceNotFound(service_id=host) return result @require_admin_context def _service_get_all_topic_subquery(context, session, topic, subq, label): sort_value = getattr(subq.c, label) return (model_query(context, models.Service, func.coalesce(sort_value, 0), session=session, read_deleted="no"). filter_by(topic=topic). filter_by(disabled=False). outerjoin((subq, models.Service.host == subq.c.host)). order_by(sort_value). all()) @require_admin_context def service_get_all_share_sorted(context): session = get_session() with session.begin(): topic = CONF.share_topic label = 'share_gigabytes' subq = (model_query(context, models.Share, func.sum(models.Share.size).label(label), session=session, read_deleted="no"). join(models.ShareInstance, models.ShareInstance.share_id == models.Share.id). group_by(models.ShareInstance.host). subquery()) return _service_get_all_topic_subquery(context, session, topic, subq, label) @require_admin_context def service_get_by_args(context, host, binary): result = (model_query(context, models.Service). filter_by(host=host). filter_by(binary=binary). 
first()) if not result: raise exception.HostBinaryNotFound(host=host, binary=binary) return result @require_admin_context def service_create(context, values): session = get_session() _ensure_availability_zone_exists(context, values, session) service_ref = models.Service() service_ref.update(values) if not CONF.enable_new_services: service_ref.disabled = True with session.begin(): service_ref.save(session) return service_ref @require_admin_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def service_update(context, service_id, values): session = get_session() _ensure_availability_zone_exists(context, values, session, strict=False) with session.begin(): service_ref = service_get(context, service_id, session=session) service_ref.update(values) service_ref.save(session=session) ################### @require_context def quota_get_all_by_project_and_user(context, project_id, user_id): authorize_project_context(context, project_id) user_quotas = model_query( context, models.ProjectUserQuota, models.ProjectUserQuota.resource, models.ProjectUserQuota.hard_limit, ).filter_by( project_id=project_id, ).filter_by( user_id=user_id, ).all() result = {'project_id': project_id, 'user_id': user_id} for u_quota in user_quotas: result[u_quota.resource] = u_quota.hard_limit return result @require_context def quota_get_all_by_project_and_share_type(context, project_id, share_type_id): authorize_project_context(context, project_id) share_type_quotas = model_query( context, models.ProjectShareTypeQuota, models.ProjectShareTypeQuota.resource, models.ProjectShareTypeQuota.hard_limit, ).filter_by( project_id=project_id, ).filter_by( share_type_id=share_type_id, ).all() result = { 'project_id': project_id, 'share_type_id': share_type_id, } for st_quota in share_type_quotas: result[st_quota.resource] = st_quota.hard_limit return result @require_context def quota_get_all_by_project(context, project_id): authorize_project_context(context, project_id) project_quotas = model_query( context, models.Quota, read_deleted="no", ).filter_by( project_id=project_id, ).all() result = {'project_id': project_id} for p_quota in project_quotas: result[p_quota.resource] = p_quota.hard_limit return result @require_context def quota_get_all(context, project_id): authorize_project_context(context, project_id) result = (model_query(context, models.ProjectUserQuota). filter_by(project_id=project_id). 
all()) return result @require_admin_context def quota_create(context, project_id, resource, limit, user_id=None, share_type_id=None): per_user = user_id and resource not in PER_PROJECT_QUOTAS if per_user: check = model_query(context, models.ProjectUserQuota).filter( models.ProjectUserQuota.project_id == project_id, models.ProjectUserQuota.user_id == user_id, models.ProjectUserQuota.resource == resource, ).all() quota_ref = models.ProjectUserQuota() quota_ref.user_id = user_id elif share_type_id: check = model_query(context, models.ProjectShareTypeQuota).filter( models.ProjectShareTypeQuota.project_id == project_id, models.ProjectShareTypeQuota.share_type_id == share_type_id, models.ProjectShareTypeQuota.resource == resource, ).all() quota_ref = models.ProjectShareTypeQuota() quota_ref.share_type_id = share_type_id else: check = model_query(context, models.Quota).filter( models.Quota.project_id == project_id, models.Quota.resource == resource, ).all() quota_ref = models.Quota() if check: raise exception.QuotaExists(project_id=project_id, resource=resource) quota_ref.project_id = project_id quota_ref.resource = resource quota_ref.hard_limit = limit session = get_session() with session.begin(): quota_ref.save(session) return quota_ref @require_admin_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def quota_update(context, project_id, resource, limit, user_id=None, share_type_id=None): per_user = user_id and resource not in PER_PROJECT_QUOTAS if per_user: query = model_query(context, models.ProjectUserQuota).filter( models.ProjectUserQuota.project_id == project_id, models.ProjectUserQuota.user_id == user_id, models.ProjectUserQuota.resource == resource, ) elif share_type_id: query = model_query(context, models.ProjectShareTypeQuota).filter( models.ProjectShareTypeQuota.project_id == project_id, models.ProjectShareTypeQuota.share_type_id == share_type_id, models.ProjectShareTypeQuota.resource == resource, ) else: query = model_query(context, models.Quota).filter( models.Quota.project_id == project_id, models.Quota.resource == resource, ) result = query.update({'hard_limit': limit}) if not result: if per_user: raise exception.ProjectUserQuotaNotFound( project_id=project_id, user_id=user_id) elif share_type_id: raise exception.ProjectShareTypeQuotaNotFound( project_id=project_id, share_type=share_type_id) raise exception.ProjectQuotaNotFound(project_id=project_id) ################### @require_context def quota_class_get(context, class_name, resource, session=None): result = (model_query(context, models.QuotaClass, session=session, read_deleted="no"). filter_by(class_name=class_name). filter_by(resource=resource). first()) if not result: raise exception.QuotaClassNotFound(class_name=class_name) return result @require_context def quota_class_get_default(context): rows = (model_query(context, models.QuotaClass, read_deleted="no"). filter_by(class_name=_DEFAULT_QUOTA_NAME). all()) result = {'class_name': _DEFAULT_QUOTA_NAME} for row in rows: result[row.resource] = row.hard_limit return result @require_context def quota_class_get_all_by_name(context, class_name): authorize_quota_class_context(context, class_name) rows = (model_query(context, models.QuotaClass, read_deleted="no"). filter_by(class_name=class_name). 
all()) result = {'class_name': class_name} for row in rows: result[row.resource] = row.hard_limit return result @require_admin_context def quota_class_create(context, class_name, resource, limit): quota_class_ref = models.QuotaClass() quota_class_ref.class_name = class_name quota_class_ref.resource = resource quota_class_ref.hard_limit = limit session = get_session() with session.begin(): quota_class_ref.save(session) return quota_class_ref @require_admin_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def quota_class_update(context, class_name, resource, limit): result = (model_query(context, models.QuotaClass, read_deleted="no"). filter_by(class_name=class_name). filter_by(resource=resource). update({'hard_limit': limit})) if not result: raise exception.QuotaClassNotFound(class_name=class_name) ################### @require_context def quota_usage_get(context, project_id, resource, user_id=None, share_type_id=None): query = (model_query(context, models.QuotaUsage, read_deleted="no"). filter_by(project_id=project_id). filter_by(resource=resource)) if user_id: if resource not in PER_PROJECT_QUOTAS: result = query.filter_by(user_id=user_id).first() else: result = query.filter_by(user_id=None).first() elif share_type_id: result = query.filter_by(share_type_id=share_type_id).first() else: result = query.first() if not result: raise exception.QuotaUsageNotFound(project_id=project_id) return result def _quota_usage_get_all(context, project_id, user_id=None, share_type_id=None): authorize_project_context(context, project_id) query = (model_query(context, models.QuotaUsage, read_deleted="no"). filter_by(project_id=project_id)) result = {'project_id': project_id} if user_id: query = query.filter(or_(models.QuotaUsage.user_id == user_id, models.QuotaUsage.user_id is None)) result['user_id'] = user_id elif share_type_id: query = query.filter_by(share_type_id=share_type_id) result['share_type_id'] = share_type_id else: query = query.filter_by(share_type_id=None) rows = query.all() for row in rows: if row.resource in result: result[row.resource]['in_use'] += row.in_use result[row.resource]['reserved'] += row.reserved else: result[row.resource] = dict(in_use=row.in_use, reserved=row.reserved) return result @require_context def quota_usage_get_all_by_project(context, project_id): return _quota_usage_get_all(context, project_id) @require_context def quota_usage_get_all_by_project_and_user(context, project_id, user_id): return _quota_usage_get_all(context, project_id, user_id=user_id) @require_context def quota_usage_get_all_by_project_and_share_type(context, project_id, share_type_id): return _quota_usage_get_all( context, project_id, share_type_id=share_type_id) def _quota_usage_create(context, project_id, user_id, resource, in_use, reserved, until_refresh, share_type_id=None, session=None): quota_usage_ref = models.QuotaUsage() if share_type_id: quota_usage_ref.share_type_id = share_type_id else: quota_usage_ref.user_id = user_id quota_usage_ref.project_id = project_id quota_usage_ref.resource = resource quota_usage_ref.in_use = in_use quota_usage_ref.reserved = reserved quota_usage_ref.until_refresh = until_refresh # updated_at is needed for judgement of max_age quota_usage_ref.updated_at = timeutils.utcnow() quota_usage_ref.save(session=session) return quota_usage_ref @require_admin_context def quota_usage_create(context, project_id, user_id, resource, in_use, reserved, until_refresh, share_type_id=None): session = get_session() return _quota_usage_create( context,
project_id, user_id, resource, in_use, reserved, until_refresh, share_type_id=share_type_id, session=session) @require_admin_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def quota_usage_update(context, project_id, user_id, resource, share_type_id=None, **kwargs): updates = {} for key in ('in_use', 'reserved', 'until_refresh'): if key in kwargs: updates[key] = kwargs[key] query = model_query( context, models.QuotaUsage, read_deleted="no", ).filter_by(project_id=project_id).filter_by(resource=resource) if share_type_id: query = query.filter_by(share_type_id=share_type_id) else: query = query.filter(or_(models.QuotaUsage.user_id == user_id, models.QuotaUsage.user_id is None)) result = query.update(updates) if not result: raise exception.QuotaUsageNotFound(project_id=project_id) ################### def _reservation_create(context, uuid, usage, project_id, user_id, resource, delta, expire, share_type_id=None, session=None): reservation_ref = models.Reservation() reservation_ref.uuid = uuid reservation_ref.usage_id = usage['id'] reservation_ref.project_id = project_id if share_type_id: reservation_ref.share_type_id = share_type_id else: reservation_ref.user_id = user_id reservation_ref.resource = resource reservation_ref.delta = delta reservation_ref.expire = expire reservation_ref.save(session=session) return reservation_ref ################### # NOTE(johannes): The quota code uses SQL locking to ensure races don't # cause under or over counting of resources. To avoid deadlocks, this # code always acquires the lock on quota_usages before acquiring the lock # on reservations. def _get_share_type_quota_usages(context, session, project_id, share_type_id): rows = model_query( context, models.QuotaUsage, read_deleted="no", session=session, ).filter( models.QuotaUsage.project_id == project_id, models.QuotaUsage.share_type_id == share_type_id, ).with_lockmode('update').all() return {row.resource: row for row in rows} def _get_user_quota_usages(context, session, project_id, user_id): # Broken out for testability rows = (model_query(context, models.QuotaUsage, read_deleted="no", session=session). filter_by(project_id=project_id). filter(or_(models.QuotaUsage.user_id == user_id, models.QuotaUsage.user_id is None)). with_lockmode('update'). all()) return {row.resource: row for row in rows} def _get_project_quota_usages(context, session, project_id): rows = (model_query(context, models.QuotaUsage, read_deleted="no", session=session). filter_by(project_id=project_id). filter(models.QuotaUsage.share_type_id is None). with_lockmode('update'). 
all()) result = dict() # Get the total count of in_use,reserved for row in rows: if row.resource in result: result[row.resource]['in_use'] += row.in_use result[row.resource]['reserved'] += row.reserved result[row.resource]['total'] += (row.in_use + row.reserved) else: result[row.resource] = dict(in_use=row.in_use, reserved=row.reserved, total=row.in_use + row.reserved) return result @require_context def quota_reserve(context, resources, project_quotas, user_quotas, share_type_quotas, deltas, expire, until_refresh, max_age, project_id=None, user_id=None, share_type_id=None): user_reservations = _quota_reserve( context, resources, project_quotas, user_quotas, deltas, expire, until_refresh, max_age, project_id, user_id=user_id) if share_type_id: try: st_reservations = _quota_reserve( context, resources, project_quotas, share_type_quotas, deltas, expire, until_refresh, max_age, project_id, share_type_id=share_type_id) except exception.OverQuota: with excutils.save_and_reraise_exception(): # rollback previous reservations reservation_rollback( context, user_reservations, project_id=project_id, user_id=user_id) return user_reservations + st_reservations return user_reservations @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def _quota_reserve(context, resources, project_quotas, user_or_st_quotas, deltas, expire, until_refresh, max_age, project_id=None, user_id=None, share_type_id=None): elevated = context.elevated() session = get_session() with session.begin(): if project_id is None: project_id = context.project_id if share_type_id: user_or_st_usages = _get_share_type_quota_usages( context, session, project_id, share_type_id) else: user_id = user_id if user_id else context.user_id user_or_st_usages = _get_user_quota_usages( context, session, project_id, user_id) # Get the current usages project_usages = _get_project_quota_usages( context, session, project_id) # Handle usage refresh work = set(deltas.keys()) while work: resource = work.pop() # Do we need to refresh the usage? refresh = False if ((resource not in PER_PROJECT_QUOTAS) and (resource not in user_or_st_usages)): user_or_st_usages[resource] = _quota_usage_create( elevated, project_id, user_id, resource, 0, 0, until_refresh or None, share_type_id=share_type_id, session=session) refresh = True elif ((resource in PER_PROJECT_QUOTAS) and (resource not in user_or_st_usages)): user_or_st_usages[resource] = _quota_usage_create( elevated, project_id, None, resource, 0, 0, until_refresh or None, share_type_id=share_type_id, session=session) refresh = True elif user_or_st_usages[resource].in_use < 0: # Negative in_use count indicates a desync, so try to # heal from that... refresh = True elif user_or_st_usages[resource].until_refresh is not None: user_or_st_usages[resource].until_refresh -= 1 if user_or_st_usages[resource].until_refresh <= 0: refresh = True elif max_age and (user_or_st_usages[resource].updated_at - timeutils.utcnow()).seconds >= max_age: refresh = True # OK, refresh the usage if refresh: # Grab the sync routine sync = QUOTA_SYNC_FUNCTIONS[resources[resource].sync] updates = sync( elevated, project_id, user_id, share_type_id=share_type_id, session=session) for res, in_use in updates.items(): # Make sure we have a destination for the usage! 
if ((res not in PER_PROJECT_QUOTAS) and (res not in user_or_st_usages)): user_or_st_usages[res] = _quota_usage_create( elevated, project_id, user_id, res, 0, 0, until_refresh or None, share_type_id=share_type_id, session=session) if ((res in PER_PROJECT_QUOTAS) and (res not in user_or_st_usages)): user_or_st_usages[res] = _quota_usage_create( elevated, project_id, None, res, 0, 0, until_refresh or None, share_type_id=share_type_id, session=session) if user_or_st_usages[res].in_use != in_use: LOG.debug( 'quota_usages out of sync, updating. ' 'project_id: %(project_id)s, ' 'user_id: %(user_id)s, ' 'share_type_id: %(share_type_id)s, ' 'resource: %(res)s, ' 'tracked usage: %(tracked_use)s, ' 'actual usage: %(in_use)s', {'project_id': project_id, 'user_id': user_id, 'share_type_id': share_type_id, 'res': res, 'tracked_use': user_or_st_usages[res].in_use, 'in_use': in_use}) # Update the usage user_or_st_usages[res].in_use = in_use user_or_st_usages[res].until_refresh = ( until_refresh or None) # Because more than one resource may be refreshed # by the call to the sync routine, and we don't # want to double-sync, we make sure all refreshed # resources are dropped from the work set. work.discard(res) # NOTE(Vek): We make the assumption that the sync # routine actually refreshes the # resources that it is the sync routine # for. We don't check, because this is # a best-effort mechanism. # Check for deltas that would go negative unders = [res for res, delta in deltas.items() if delta < 0 and delta + user_or_st_usages[res].in_use < 0] # Now, let's check the quotas # NOTE(Vek): We're only concerned about positive increments. # If a project has gone over quota, we want them to # be able to reduce their usage without any # problems. for key, value in user_or_st_usages.items(): if key not in project_usages: project_usages[key] = value overs = [res for res, delta in deltas.items() if user_or_st_quotas[res] >= 0 and delta >= 0 and (0 <= project_quotas[res] < delta + project_usages[res]['total'] or user_or_st_quotas[res] < delta + user_or_st_usages[res].total)] # NOTE(Vek): The quota check needs to be in the transaction, # but the transaction doesn't fail just because # we're over quota, so the OverQuota raise is # outside the transaction. If we did the raise # here, our usage updates would be discarded, but # they're not invalidated by being over-quota. # Create the reservations if not overs: reservations = [] for res, delta in deltas.items(): reservation = _reservation_create(elevated, uuidutils.generate_uuid(), user_or_st_usages[res], project_id, user_id, res, delta, expire, share_type_id=share_type_id, session=session) reservations.append(reservation.uuid) # Also update the reserved quantity # NOTE(Vek): Again, we are only concerned here about # positive increments. Here, though, we're # worried about the following scenario: # # 1) User initiates resize down. # 2) User allocates a new instance. # 3) Resize down fails or is reverted. # 4) User is now over quota. # # To prevent this, we only update the # reserved value if the delta is positive. 
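# NOTE(editor): the over-quota test built above reduces to arithmetic over
# three mappings (hard limits, current usage totals, requested deltas). A
# tiny self-contained illustration with made-up numbers:
quotas = {'shares': 10, 'gigabytes': 100}
usages = {'shares': {'total': 9}, 'gigabytes': {'total': 40}}
deltas = {'shares': 2, 'gigabytes': 10}
overs = [res for res, delta in deltas.items()
         if delta >= 0 and 0 <= quotas[res] < delta + usages[res]['total']]
# -> ['shares']: 9 + 2 exceeds the limit of 10, while 40 + 10 gigabytes still
#    fits under 100, so only 'shares' would block the reservation.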
if delta > 0: user_or_st_usages[res].reserved += delta # Apply updates to the usages table for usage_ref in user_or_st_usages.values(): session.add(usage_ref) if unders: LOG.warning("Change will make usage less than 0 for the following " "resources: %s", unders) if overs: if project_quotas == user_or_st_quotas: usages = project_usages else: usages = user_or_st_usages usages = {k: dict(in_use=v['in_use'], reserved=v['reserved']) for k, v in usages.items()} raise exception.OverQuota( overs=sorted(overs), quotas=user_or_st_quotas, usages=usages) return reservations def _quota_reservations_query(session, context, reservations): """Return the relevant reservations.""" # Get the listed reservations return (model_query(context, models.Reservation, read_deleted="no", session=session). filter(models.Reservation.uuid.in_(reservations)). with_lockmode('update')) @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def reservation_commit(context, reservations, project_id=None, user_id=None, share_type_id=None): session = get_session() with session.begin(): if share_type_id: st_usages = _get_share_type_quota_usages( context, session, project_id, share_type_id) else: st_usages = {} user_usages = _get_user_quota_usages( context, session, project_id, user_id) reservation_query = _quota_reservations_query( session, context, reservations) for reservation in reservation_query.all(): if reservation['share_type_id']: usages = st_usages else: usages = user_usages usage = usages[reservation.resource] if reservation.delta >= 0: usage.reserved -= reservation.delta usage.in_use += reservation.delta reservation_query.soft_delete(synchronize_session=False) @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def reservation_rollback(context, reservations, project_id=None, user_id=None, share_type_id=None): session = get_session() with session.begin(): if share_type_id: st_usages = _get_share_type_quota_usages( context, session, project_id, share_type_id) else: st_usages = {} user_usages = _get_user_quota_usages( context, session, project_id, user_id) reservation_query = _quota_reservations_query( session, context, reservations) for reservation in reservation_query.all(): if reservation['share_type_id']: usages = st_usages else: usages = user_usages usage = usages[reservation.resource] if reservation.delta >= 0: usage.reserved -= reservation.delta reservation_query.soft_delete(synchronize_session=False) @require_admin_context def quota_destroy_all_by_project_and_user(context, project_id, user_id): session = get_session() with session.begin(): (model_query(context, models.ProjectUserQuota, session=session, read_deleted="no"). filter_by(project_id=project_id). filter_by(user_id=user_id).soft_delete(synchronize_session=False)) (model_query(context, models.QuotaUsage, session=session, read_deleted="no"). filter_by(project_id=project_id). filter_by(user_id=user_id).soft_delete(synchronize_session=False)) (model_query(context, models.Reservation, session=session, read_deleted="no"). filter_by(project_id=project_id). filter_by(user_id=user_id).soft_delete(synchronize_session=False)) @require_admin_context def quota_destroy_all_by_share_type(context, share_type_id, project_id=None): """Soft deletes all quotas, usages and reservations. :param context: request context for queries, updates and logging :param share_type_id: ID of the share type to filter the quotas, usages and reservations under. 
:param project_id: ID of the project to filter the quotas, usages and reservations under. If not provided, share type quotas for all projects will be acted upon. """ session = get_session() with session.begin(): share_type_quotas = model_query( context, models.ProjectShareTypeQuota, session=session, read_deleted="no", ).filter_by(share_type_id=share_type_id) share_type_quota_usages = model_query( context, models.QuotaUsage, session=session, read_deleted="no", ).filter_by(share_type_id=share_type_id) share_type_quota_reservations = model_query( context, models.Reservation, session=session, read_deleted="no", ).filter_by(share_type_id=share_type_id) if project_id is not None: share_type_quotas = share_type_quotas.filter_by( project_id=project_id) share_type_quota_usages = share_type_quota_usages.filter_by( project_id=project_id) share_type_quota_reservations = ( share_type_quota_reservations.filter_by(project_id=project_id)) share_type_quotas.soft_delete(synchronize_session=False) share_type_quota_usages.soft_delete(synchronize_session=False) share_type_quota_reservations.soft_delete(synchronize_session=False) @require_admin_context def quota_destroy_all_by_project(context, project_id): session = get_session() with session.begin(): (model_query(context, models.Quota, session=session, read_deleted="no"). filter_by(project_id=project_id). soft_delete(synchronize_session=False)) (model_query(context, models.ProjectUserQuota, session=session, read_deleted="no"). filter_by(project_id=project_id). soft_delete(synchronize_session=False)) (model_query(context, models.QuotaUsage, session=session, read_deleted="no"). filter_by(project_id=project_id). soft_delete(synchronize_session=False)) (model_query(context, models.Reservation, session=session, read_deleted="no"). filter_by(project_id=project_id). soft_delete(synchronize_session=False)) @require_admin_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def reservation_expire(context): session = get_session() with session.begin(): current_time = timeutils.utcnow() reservation_query = (model_query( context, models.Reservation, session=session, read_deleted="no"). 
filter(models.Reservation.expire < current_time)) for reservation in reservation_query.all(): if reservation.delta >= 0: quota_usage = model_query(context, models.QuotaUsage, session=session, read_deleted="no").filter( models.QuotaUsage.id == reservation.usage_id).first() quota_usage.reserved -= reservation.delta session.add(quota_usage) reservation_query.soft_delete(synchronize_session=False) ################ def _extract_subdict_by_fields(source_dict, fields): dict_to_extract_from = copy.deepcopy(source_dict) sub_dict = {} for field in fields: field_value = dict_to_extract_from.pop(field, None) if field_value: sub_dict.update({field: field_value}) return sub_dict, dict_to_extract_from def _extract_share_instance_values(values): share_instance_model_fields = [ 'status', 'host', 'scheduled_at', 'launched_at', 'terminated_at', 'share_server_id', 'share_network_id', 'availability_zone', 'replica_state', 'share_type_id', 'share_type', 'access_rules_status', ] share_instance_values, share_values = ( _extract_subdict_by_fields(values, share_instance_model_fields) ) return share_instance_values, share_values def _change_size_to_instance_size(snap_instance_values): if 'size' in snap_instance_values: snap_instance_values['instance_size'] = snap_instance_values['size'] snap_instance_values.pop('size') def _extract_snapshot_instance_values(values): fields = ['status', 'progress', 'provider_location'] snapshot_instance_values, snapshot_values = ( _extract_subdict_by_fields(values, fields) ) return snapshot_instance_values, snapshot_values ################ @require_context def share_instance_create(context, share_id, values): session = get_session() with session.begin(): return _share_instance_create(context, share_id, values, session) def _share_instance_create(context, share_id, values, session): if not values.get('id'): values['id'] = uuidutils.generate_uuid() values.update({'share_id': share_id}) share_instance_ref = models.ShareInstance() share_instance_ref.update(values) share_instance_ref.save(session=session) return share_instance_get(context, share_instance_ref['id'], session=session) @require_admin_context def share_instances_host_update(context, current_host, new_host): session = get_session() host_field = models.ShareInstance.host with session.begin(): query = model_query( context, models.ShareInstance, session=session, read_deleted="no", ).filter(host_field.like('{}%'.format(current_host))) result = query.update( {host_field: func.replace(host_field, current_host, new_host)}, synchronize_session=False) return result @require_context def share_instance_update(context, share_instance_id, values, with_share_data=False): session = get_session() _ensure_availability_zone_exists(context, values, session, strict=False) with session.begin(): instance_ref = _share_instance_update( context, share_instance_id, values, session ) if with_share_data: parent_share = share_get(context, instance_ref['share_id'], session=session) instance_ref.set_share_data(parent_share) return instance_ref def _share_instance_update(context, share_instance_id, values, session): share_instance_ref = share_instance_get(context, share_instance_id, session=session) share_instance_ref.update(values) share_instance_ref.save(session=session) return share_instance_ref @require_context def share_instance_get(context, share_instance_id, session=None, with_share_data=False): if session is None: session = get_session() result = model_query( context, models.ShareInstance, session=session, ).filter_by( id=share_instance_id, ).options( 
joinedload('export_locations'), joinedload('share_type'), ).first() if result is None: raise exception.NotFound() if with_share_data: parent_share = share_get(context, result['share_id'], session=session) result.set_share_data(parent_share) return result @require_admin_context def share_instances_get_all(context, filters=None): session = get_session() query = model_query( context, models.ShareInstance, session=session, read_deleted="no", ).options( joinedload('export_locations'), ) filters = filters or {} export_location_id = filters.get('export_location_id') export_location_path = filters.get('export_location_path') if export_location_id or export_location_path: query = query.join( models.ShareInstanceExportLocations, models.ShareInstanceExportLocations.share_instance_id == models.ShareInstance.id) if export_location_path: query = query.filter( models.ShareInstanceExportLocations.path == export_location_path) if export_location_id: query = query.filter( models.ShareInstanceExportLocations.uuid == export_location_id) # Returns list of share instances that satisfy filters. query = query.all() return query @require_context def _update_share_instance_usages(context, share, instance_ref, is_replica=False): deltas = {} no_instances_remain = len(share.instances) == 0 share_usages_to_release = {"shares": -1, "gigabytes": -share['size']} replica_usages_to_release = {"share_replicas": -1, "replica_gigabytes": -share['size']} if is_replica and no_instances_remain: # A share that had a replication_type is being deleted, so there's # need to update the share replica quotas and the share quotas deltas.update(replica_usages_to_release) deltas.update(share_usages_to_release) elif is_replica: # The user is deleting a share replica deltas.update(replica_usages_to_release) else: # A share with no replication_type is being deleted deltas.update(share_usages_to_release) reservations = None try: # we give the user_id of the share, to update # the quota usage for the user, who created the share reservations = QUOTAS.reserve( context, project_id=share['project_id'], user_id=share['user_id'], share_type_id=instance_ref['share_type_id'], **deltas) QUOTAS.commit( context, reservations, project_id=share['project_id'], user_id=share['user_id'], share_type_id=instance_ref['share_type_id']) except Exception: resource_name = ( 'share replica' if is_replica else 'share') resource_id = instance_ref['id'] if is_replica else share['id'] msg = (_("Failed to update usages deleting %(resource_name)s " "'%(id)s'.") % {'id': resource_id, "resource_name": resource_name}) LOG.exception(msg) if reservations: QUOTAS.rollback( context, reservations, share_type_id=instance_ref['share_type_id']) @require_context def share_instance_delete(context, instance_id, session=None, need_to_update_usages=False): if session is None: session = get_session() with session.begin(): share_export_locations_update(context, instance_id, [], delete=True) instance_ref = share_instance_get(context, instance_id, session=session) is_replica = instance_ref['replica_state'] is not None instance_ref.soft_delete(session=session, update_status=True) share = share_get(context, instance_ref['share_id'], session=session) if len(share.instances) == 0: share_access_delete_all_by_share(context, share['id']) session.query(models.ShareMetadata).filter_by( share_id=share['id']).soft_delete() share.soft_delete(session=session) if need_to_update_usages: _update_share_instance_usages(context, share, instance_ref, is_replica=is_replica) def _set_instances_share_data(context, 
instances, session): if instances and not isinstance(instances, list): instances = [instances] instances_with_share_data = [] for instance in instances: try: parent_share = share_get(context, instance['share_id'], session=session) except exception.NotFound: continue instance.set_share_data(parent_share) instances_with_share_data.append(instance) return instances_with_share_data @require_admin_context def share_instances_get_all_by_host(context, host, with_share_data=False, status=None, session=None): """Retrieves all share instances hosted on a host.""" session = session or get_session() instances = ( model_query(context, models.ShareInstance).filter( or_( models.ShareInstance.host == host, models.ShareInstance.host.like("{0}#%".format(host)) ) ) ) if status is not None: instances = instances.filter(models.ShareInstance.status == status) # Returns list of all instances that satisfy filters. instances = instances.all() if with_share_data: instances = _set_instances_share_data(context, instances, session) return instances @require_context def share_instances_get_all_by_share_network(context, share_network_id): """Returns list of share instances that belong to given share network.""" result = ( model_query(context, models.ShareInstance).filter( models.ShareInstance.share_network_id == share_network_id, ).all() ) return result @require_context def share_instances_get_all_by_share_server(context, share_server_id): """Returns list of share instance with given share server.""" result = ( model_query(context, models.ShareInstance).filter( models.ShareInstance.share_server_id == share_server_id, ).all() ) return result @require_context def share_instances_get_all_by_share(context, share_id): """Returns list of share instances that belong to given share.""" result = ( model_query(context, models.ShareInstance).filter( models.ShareInstance.share_id == share_id, ).all() ) return result @require_context def share_instances_get_all_by_share_group_id(context, share_group_id): """Returns list of share instances that belong to given share group.""" result = ( model_query(context, models.Share).filter( models.Share.share_group_id == share_group_id, ).all() ) instances = [] for share in result: instance = share.instance instance.set_share_data(share) instances.append(instance) return instances ################ def _share_replica_get_with_filters(context, share_id=None, replica_id=None, replica_state=None, status=None, with_share_server=True, session=None): query = model_query(context, models.ShareInstance, session=session, read_deleted="no") if share_id is not None: query = query.filter(models.ShareInstance.share_id == share_id) if replica_id is not None: query = query.filter(models.ShareInstance.id == replica_id) if replica_state is not None: query = query.filter( models.ShareInstance.replica_state == replica_state) else: query = query.filter(models.ShareInstance.replica_state.isnot(None)) if status is not None: query = query.filter(models.ShareInstance.status == status) if with_share_server: query = query.options(joinedload('share_server')) return query def _set_replica_share_data(context, replicas, session): if replicas and not isinstance(replicas, list): replicas = [replicas] for replica in replicas: parent_share = share_get(context, replica['share_id'], session=session) replica.set_share_data(parent_share) return replicas @require_context def share_replicas_get_all(context, with_share_data=False, with_share_server=True, session=None): """Returns replica instances for all available replicated shares.""" 
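# NOTE(editor): share_instances_get_all_by_host() above matches both plain
# backend hosts and pool-qualified hosts ("host@backend" vs "host@backend#pool")
# via or_(host == <host>, host.like("<host>#%")). A minimal plain-Python
# illustration of that predicate, with made-up host strings:
host = 'alpha@lvm'
candidates = ['alpha@lvm', 'alpha@lvm#pool1', 'alpha@lvm-2#pool1']
matched = [c for c in candidates
           if c == host or c.startswith('{}#'.format(host))]
# -> ['alpha@lvm', 'alpha@lvm#pool1']; 'alpha@lvm-2#pool1' is excluded, which
#    is exactly what the SQL clause host LIKE 'alpha@lvm#%' achieves.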
session = session or get_session() result = _share_replica_get_with_filters( context, with_share_server=with_share_server, session=session).all() if with_share_data: result = _set_replica_share_data(context, result, session) return result @require_context def share_replicas_get_all_by_share(context, share_id, with_share_data=False, with_share_server=False, session=None): """Returns replica instances for a given share.""" session = session or get_session() result = _share_replica_get_with_filters( context, with_share_server=with_share_server, share_id=share_id, session=session).all() if with_share_data: result = _set_replica_share_data(context, result, session) return result @require_context def share_replicas_get_available_active_replica(context, share_id, with_share_data=False, with_share_server=False, session=None): """Returns an 'active' replica instance that is 'available'.""" session = session or get_session() result = _share_replica_get_with_filters( context, with_share_server=with_share_server, share_id=share_id, replica_state=constants.REPLICA_STATE_ACTIVE, status=constants.STATUS_AVAILABLE, session=session).first() if result and with_share_data: result = _set_replica_share_data(context, result, session)[0] return result @require_context def share_replica_get(context, replica_id, with_share_data=False, with_share_server=False, session=None): """Returns summary of requested replica if available.""" session = session or get_session() result = _share_replica_get_with_filters( context, with_share_server=with_share_server, replica_id=replica_id, session=session).first() if result is None: raise exception.ShareReplicaNotFound(replica_id=replica_id) if with_share_data: result = _set_replica_share_data(context, result, session)[0] return result @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def share_replica_update(context, share_replica_id, values, with_share_data=False, session=None): """Updates a share replica with specified values.""" session = session or get_session() with session.begin(): _ensure_availability_zone_exists(context, values, session, strict=False) updated_share_replica = _share_instance_update( context, share_replica_id, values, session=session) if with_share_data: updated_share_replica = _set_replica_share_data( context, updated_share_replica, session)[0] return updated_share_replica @require_context def share_replica_delete(context, share_replica_id, session=None, need_to_update_usages=True): """Deletes a share replica.""" session = session or get_session() share_instance_delete(context, share_replica_id, session=session, need_to_update_usages=need_to_update_usages) ################ def _share_get_query(context, session=None): if session is None: session = get_session() return (model_query(context, models.Share, session=session). 
options(joinedload('share_metadata'))) def _metadata_refs(metadata_dict, meta_class): metadata_refs = [] if metadata_dict: for k, v in metadata_dict.items(): value = six.text_type(v) if isinstance(v, bool) else v metadata_ref = meta_class() metadata_ref['key'] = k metadata_ref['value'] = value metadata_refs.append(metadata_ref) return metadata_refs @require_context def share_create(context, share_values, create_share_instance=True): values = copy.deepcopy(share_values) values = ensure_model_dict_has_id(values) values['share_metadata'] = _metadata_refs(values.get('metadata'), models.ShareMetadata) session = get_session() share_ref = models.Share() share_instance_values, share_values = _extract_share_instance_values( values) _ensure_availability_zone_exists(context, share_instance_values, session, strict=False) share_ref.update(share_values) with session.begin(): share_ref.save(session=session) if create_share_instance: _share_instance_create(context, share_ref['id'], share_instance_values, session=session) # NOTE(u_glide): Do so to prevent errors with relationships return share_get(context, share_ref['id'], session=session) @require_admin_context def share_data_get_for_project(context, project_id, user_id, share_type_id=None, session=None): query = (model_query(context, models.Share, func.count(models.Share.id), func.sum(models.Share.size), read_deleted="no", session=session). filter_by(project_id=project_id)) if share_type_id: query = query.join("instances").filter_by(share_type_id=share_type_id) elif user_id: query = query.filter_by(user_id=user_id) result = query.first() return (result[0] or 0, result[1] or 0) @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def share_update(context, share_id, update_values): session = get_session() values = copy.deepcopy(update_values) share_instance_values, share_values = _extract_share_instance_values( values) _ensure_availability_zone_exists(context, share_instance_values, session, strict=False) with session.begin(): share_ref = share_get(context, share_id, session=session) _share_instance_update(context, share_ref.instance['id'], share_instance_values, session=session) share_ref.update(share_values) share_ref.save(session=session) return share_ref @require_context def share_get(context, share_id, session=None): result = _share_get_query(context, session).filter_by(id=share_id).first() if result is None: raise exception.NotFound() return result def _share_get_all_with_filters(context, project_id=None, share_server_id=None, share_group_id=None, filters=None, is_public=False, sort_key=None, sort_dir=None): """Returns sorted list of shares that satisfies filters. 
:param context: context to query under :param project_id: project id that owns shares :param share_server_id: share server that hosts shares :param filters: dict of filters to specify share selection :param is_public: public shares from other projects will be added to result if True :param sort_key: key of models.Share to be used for sorting :param sort_dir: desired direction of sorting, can be 'asc' and 'desc' :returns: list -- models.Share :raises: exception.InvalidInput """ if not sort_key: sort_key = 'created_at' if not sort_dir: sort_dir = 'desc' query = ( _share_get_query(context).join( models.ShareInstance, models.ShareInstance.share_id == models.Share.id ) ) if project_id: if is_public: query = query.filter(or_(models.Share.project_id == project_id, models.Share.is_public)) else: query = query.filter(models.Share.project_id == project_id) if share_server_id: query = query.filter( models.ShareInstance.share_server_id == share_server_id) if share_group_id: query = query.filter( models.Share.share_group_id == share_group_id) # Apply filters if not filters: filters = {} export_location_id = filters.get('export_location_id') export_location_path = filters.get('export_location_path') if export_location_id or export_location_path: query = query.join( models.ShareInstanceExportLocations, models.ShareInstanceExportLocations.share_instance_id == models.ShareInstance.id) if export_location_path: query = query.filter( models.ShareInstanceExportLocations.path == export_location_path) if export_location_id: query = query.filter( models.ShareInstanceExportLocations.uuid == export_location_id) if 'metadata' in filters: for k, v in filters['metadata'].items(): # pylint: disable=no-member query = query.filter( or_(models.Share.share_metadata.any( key=k, value=v))) if 'extra_specs' in filters: query = query.join( models.ShareTypeExtraSpecs, models.ShareTypeExtraSpecs.share_type_id == models.ShareInstance.share_type_id) for k, v in filters['extra_specs'].items(): query = query.filter(or_(models.ShareTypeExtraSpecs.key == k, models.ShareTypeExtraSpecs.value == v)) try: query = apply_sorting(models.Share, query, sort_key, sort_dir) except AttributeError: try: query = apply_sorting( models.ShareInstance, query, sort_key, sort_dir) except AttributeError: msg = _("Wrong sorting key provided - '%s'.") % sort_key raise exception.InvalidInput(reason=msg) if 'limit' in filters: offset = filters.get('offset', 0) query = query.limit(filters['limit']).offset(offset) # Returns list of shares that satisfy filters. 
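    # Example of a `filters` dict accepted by this helper (all keys are
    # optional; the values below are illustrative only):
    #     {'metadata': {'tier': 'gold'},
    #      'extra_specs': {'snapshot_support': 'True'},
    #      'export_location_path': '10.0.0.1:/shares/share-example',
    #      'limit': 20,
    #      'offset': 0}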
query = query.all() return query @require_admin_context def share_get_all(context, filters=None, sort_key=None, sort_dir=None): query = _share_get_all_with_filters( context, filters=filters, sort_key=sort_key, sort_dir=sort_dir) return query @require_context def share_get_all_by_project(context, project_id, filters=None, is_public=False, sort_key=None, sort_dir=None): """Returns list of shares with given project ID.""" query = _share_get_all_with_filters( context, project_id=project_id, filters=filters, is_public=is_public, sort_key=sort_key, sort_dir=sort_dir, ) return query @require_context def share_get_all_by_share_group_id(context, share_group_id, filters=None, sort_key=None, sort_dir=None): """Returns list of shares with given group ID.""" query = _share_get_all_with_filters( context, share_group_id=share_group_id, filters=filters, sort_key=sort_key, sort_dir=sort_dir, ) return query @require_context def share_get_all_by_share_server(context, share_server_id, filters=None, sort_key=None, sort_dir=None): """Returns list of shares with given share server.""" query = _share_get_all_with_filters( context, share_server_id=share_server_id, filters=filters, sort_key=sort_key, sort_dir=sort_dir, ) return query @require_context def share_delete(context, share_id): session = get_session() with session.begin(): share_ref = share_get(context, share_id, session) if len(share_ref.instances) > 0: msg = _("Share %(id)s has %(count)s share instances.") % { 'id': share_id, 'count': len(share_ref.instances)} raise exception.InvalidShare(msg) share_ref.soft_delete(session=session) (session.query(models.ShareMetadata). filter_by(share_id=share_id).soft_delete()) ################### def _share_access_get_query(context, session, values, read_deleted='no'): """Get access record.""" query = (model_query( context, models.ShareAccessMapping, session=session, read_deleted=read_deleted).options( joinedload('share_access_rules_metadata'))) return query.filter_by(**values) def _share_instance_access_query(context, session, access_id=None, instance_id=None): filters = {'deleted': 'False'} if access_id is not None: filters.update({'access_id': access_id}) if instance_id is not None: filters.update({'share_instance_id': instance_id}) return model_query(context, models.ShareInstanceAccessMapping, session=session).filter_by(**filters) def _share_access_metadata_get_item(context, access_id, key, session=None): result = (_share_access_metadata_get_query( context, access_id, session=session).filter_by(key=key).first()) if not result: raise exception.ShareAccessMetadataNotFound( metadata_key=key, access_id=access_id) return result def _share_access_metadata_get_query(context, access_id, session=None): return (model_query( context, models.ShareAccessRulesMetadata, session=session, read_deleted="no"). filter_by(access_id=access_id). 
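# Illustrative usage sketch (assumes a RequestContext `ctxt` supplied by the
# caller): list the newest 20 shares of the caller's project, including
# public shares owned by other projects.
shares = share_get_all_by_project(
    ctxt, ctxt.project_id, is_public=True,
    filters={'limit': 20, 'offset': 0},
    sort_key='created_at', sort_dir='desc')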
options(joinedload('access'))) @require_context def share_access_metadata_update(context, access_id, metadata): session = get_session() with session.begin(): # Now update all existing items with new values, or create new meta # objects for meta_key, meta_value in metadata.items(): # update the value whether it exists or not item = {"value": meta_value} try: meta_ref = _share_access_metadata_get_item( context, access_id, meta_key, session=session) except exception.ShareAccessMetadataNotFound: meta_ref = models.ShareAccessRulesMetadata() item.update({"key": meta_key, "access_id": access_id}) meta_ref.update(item) meta_ref.save(session=session) return metadata @require_context def share_access_metadata_delete(context, access_id, key): session = get_session() with session.begin(): metadata = _share_access_metadata_get_item( context, access_id, key, session=session) metadata.soft_delete(session) @require_context def share_access_create(context, values): values = ensure_model_dict_has_id(values) session = get_session() with session.begin(): values['share_access_rules_metadata'] = ( _metadata_refs(values.get('metadata'), models.ShareAccessRulesMetadata)) access_ref = models.ShareAccessMapping() access_ref.update(values) access_ref.save(session=session) parent_share = share_get(context, values['share_id'], session=session) for instance in parent_share.instances: vals = { 'share_instance_id': instance['id'], 'access_id': access_ref['id'], } _share_instance_access_create(vals, session) return share_access_get(context, access_ref['id']) @require_context def share_instance_access_create(context, values, share_instance_id): values = ensure_model_dict_has_id(values) session = get_session() with session.begin(): access_list = _share_access_get_query( context, session, { 'share_id': values['share_id'], 'access_type': values['access_type'], 'access_to': values['access_to'], }).all() if len(access_list) > 0: access_ref = access_list[0] else: access_ref = models.ShareAccessMapping() access_ref.update(values) access_ref.save(session=session) vals = { 'share_instance_id': share_instance_id, 'access_id': access_ref['id'], } _share_instance_access_create(vals, session) return share_access_get(context, access_ref['id']) @require_context def share_instance_access_copy(context, share_id, instance_id, session=None): """Copy access rules from share to share instance.""" session = session or get_session() share_access_rules = _share_access_get_query( context, session, {'share_id': share_id}).all() for access_rule in share_access_rules: values = { 'share_instance_id': instance_id, 'access_id': access_rule['id'], } _share_instance_access_create(values, session) return share_access_rules def _share_instance_access_create(values, session): access_ref = models.ShareInstanceAccessMapping() access_ref.update(ensure_model_dict_has_id(values)) access_ref.save(session=session) return access_ref @require_context def share_access_get(context, access_id, session=None): """Get access record.""" session = session or get_session() access = _share_access_get_query( context, session, {'id': access_id}).first() if access: return access else: raise exception.NotFound() @require_context def share_instance_access_get(context, access_id, instance_id, with_share_access_data=True): """Get access record.""" session = get_session() access = _share_instance_access_query(context, session, access_id, instance_id).first() if access is None: raise exception.NotFound() if with_share_access_data: access = _set_instances_share_access_data(context, 
access, session)[0] return access @require_context def share_access_get_all_for_share(context, share_id, filters=None, session=None): filters = filters or {} session = session or get_session() query = (_share_access_get_query( context, session, {'share_id': share_id}).filter( models.ShareAccessMapping.instance_mappings.any())) if 'metadata' in filters: for k, v in filters['metadata'].items(): query = query.filter( or_(models.ShareAccessMapping. share_access_rules_metadata.any(key=k, value=v))) return query.all() @require_context def share_access_get_all_for_instance(context, instance_id, filters=None, with_share_access_data=True, session=None): """Get all access rules related to a certain share instance.""" session = session or get_session() filters = copy.deepcopy(filters) if filters else {} filters.update({'share_instance_id': instance_id}) legal_filter_keys = ('id', 'share_instance_id', 'access_id', 'state') query = _share_instance_access_query(context, session) query = exact_filter( query, models.ShareInstanceAccessMapping, filters, legal_filter_keys) instance_accesses = query.all() if with_share_access_data: instance_accesses = _set_instances_share_access_data( context, instance_accesses, session) return instance_accesses def _set_instances_share_access_data(context, instance_accesses, session): if instance_accesses and not isinstance(instance_accesses, list): instance_accesses = [instance_accesses] for instance_access in instance_accesses: share_access = share_access_get( context, instance_access['access_id'], session=session) instance_access.set_share_access_data(share_access) return instance_accesses def _set_instances_snapshot_access_data(context, instance_accesses, session): if instance_accesses and not isinstance(instance_accesses, list): instance_accesses = [instance_accesses] for instance_access in instance_accesses: snapshot_access = share_snapshot_access_get( context, instance_access['access_id'], session=session) instance_access.set_snapshot_access_data(snapshot_access) return instance_accesses @require_context def share_access_get_all_by_type_and_access(context, share_id, access_type, access): session = get_session() return _share_access_get_query(context, session, {'share_id': share_id, 'access_type': access_type, 'access_to': access}).all() @require_context def share_access_check_for_existing_access(context, share_id, access_type, access_to): return _check_for_existing_access( context, 'share', share_id, access_type, access_to) def _check_for_existing_access(context, resource, resource_id, access_type, access_to): session = get_session() if resource == 'share': query_method = _share_access_get_query access_to_field = models.ShareAccessMapping.access_to else: query_method = _share_snapshot_access_get_query access_to_field = models.ShareSnapshotAccessMapping.access_to with session.begin(): if access_type == 'ip': rules = query_method( context, session, {'%s_id' % resource: resource_id, 'access_type': access_type}).filter( access_to_field.startswith(access_to.split('/')[0])).all() matching_rules = [ rule for rule in rules if ipaddress.ip_network(six.text_type(access_to)) == ipaddress.ip_network(six.text_type(rule['access_to'])) ] return len(matching_rules) > 0 else: return query_method( context, session, {'%s_id' % resource: resource_id, 'access_type': access_type, 'access_to': access_to}).count() > 0 @require_context def share_access_delete_all_by_share(context, share_id): session = get_session() with session.begin(): (session.query(models.ShareAccessMapping). 
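# Illustrative caller-side sketch: before creating an IP rule, use the helper
# above to check whether the same IP network is already allowed on the share.
# `ctxt` and `share` are assumed inputs; the values dict keys follow the
# fields handled elsewhere in this module.
if not share_access_check_for_existing_access(
        ctxt, share['id'], 'ip', '10.0.0.0/24'):
    share_access_create(ctxt, {
        'share_id': share['id'],
        'access_type': 'ip',
        'access_to': '10.0.0.0/24',
        'access_level': 'rw',
    })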
filter_by(share_id=share_id).soft_delete()) @require_context def share_instance_access_delete(context, mapping_id): session = get_session() with session.begin(): mapping = (session.query(models.ShareInstanceAccessMapping). filter_by(id=mapping_id).first()) if not mapping: exception.NotFound() mapping.soft_delete(session, update_status=True, status_field_name='state') other_mappings = _share_instance_access_query( context, session, mapping['access_id']).all() # NOTE(u_glide): Remove access rule if all mappings were removed. if len(other_mappings) == 0: (session.query(models.ShareAccessRulesMetadata).filter_by( access_id=mapping['access_id']).soft_delete()) (session.query(models.ShareAccessMapping).filter_by( id=mapping['access_id']).soft_delete()) @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def share_instance_access_update(context, access_id, instance_id, updates): session = get_session() share_access_fields = ('access_type', 'access_to', 'access_key', 'access_level') share_access_map_updates, share_instance_access_map_updates = ( _extract_subdict_by_fields(updates, share_access_fields) ) with session.begin(): share_access = _share_access_get_query( context, session, {'id': access_id}).first() share_access.update(share_access_map_updates) share_access.save(session=session) access = _share_instance_access_query( context, session, access_id, instance_id).first() access.update(share_instance_access_map_updates) access.save(session=session) return access ################### @require_context def share_snapshot_instance_create(context, snapshot_id, values, session=None): session = session or get_session() values = copy.deepcopy(values) _change_size_to_instance_size(values) if not values.get('id'): values['id'] = uuidutils.generate_uuid() values.update({'snapshot_id': snapshot_id}) instance_ref = models.ShareSnapshotInstance() instance_ref.update(values) instance_ref.save(session=session) return share_snapshot_instance_get(context, instance_ref['id'], session=session) @require_context def share_snapshot_instance_update(context, instance_id, values): session = get_session() instance_ref = share_snapshot_instance_get(context, instance_id, session=session) _change_size_to_instance_size(values) # NOTE(u_glide): Ignore updates to custom properties for extra_key in models.ShareSnapshotInstance._extra_keys: if extra_key in values: values.pop(extra_key) instance_ref.update(values) instance_ref.save(session=session) return instance_ref @require_context def share_snapshot_instance_delete(context, snapshot_instance_id, session=None): session = session or get_session() with session.begin(): snapshot_instance_ref = share_snapshot_instance_get( context, snapshot_instance_id, session=session) access_rules = share_snapshot_access_get_all_for_snapshot_instance( context, snapshot_instance_id, session=session) for rule in access_rules: share_snapshot_instance_access_delete( context, rule['access_id'], snapshot_instance_id) for el in snapshot_instance_ref.export_locations: share_snapshot_instance_export_location_delete(context, el['id']) snapshot_instance_ref.soft_delete( session=session, update_status=True) snapshot = share_snapshot_get( context, snapshot_instance_ref['snapshot_id'], session=session) if len(snapshot.instances) == 0: snapshot.soft_delete(session=session) @require_context def share_snapshot_instance_get(context, snapshot_instance_id, session=None, with_share_data=False): session = session or get_session() result = _share_snapshot_instance_get_with_filters( 
context, instance_ids=[snapshot_instance_id], session=session).first() if result is None: raise exception.ShareSnapshotInstanceNotFound( instance_id=snapshot_instance_id) if with_share_data: result = _set_share_snapshot_instance_data(context, result, session)[0] return result @require_context def share_snapshot_instance_get_all_with_filters(context, search_filters, with_share_data=False, session=None): """Get snapshot instances filtered by known attrs, ignore unknown attrs. All filters accept list/tuples to filter on, along with simple values. """ def listify(values): if values: if not isinstance(values, (list, tuple, set)): return values, else: return values session = session or get_session() _known_filters = ('instance_ids', 'snapshot_ids', 'share_instance_ids', 'statuses') filters = {k: listify(search_filters.get(k)) for k in _known_filters} result = _share_snapshot_instance_get_with_filters( context, session=session, **filters).all() if with_share_data: result = _set_share_snapshot_instance_data(context, result, session) return result def _share_snapshot_instance_get_with_filters(context, instance_ids=None, snapshot_ids=None, statuses=None, share_instance_ids=None, session=None): query = model_query(context, models.ShareSnapshotInstance, session=session, read_deleted="no") if instance_ids is not None: query = query.filter( models.ShareSnapshotInstance.id.in_(instance_ids)) if snapshot_ids is not None: query = query.filter( models.ShareSnapshotInstance.snapshot_id.in_(snapshot_ids)) if share_instance_ids is not None: query = query.filter(models.ShareSnapshotInstance.share_instance_id .in_(share_instance_ids)) if statuses is not None: query = query.filter(models.ShareSnapshotInstance.status.in_(statuses)) query = query.options(joinedload('share_group_snapshot')) return query def _set_share_snapshot_instance_data(context, snapshot_instances, session): if snapshot_instances and not isinstance(snapshot_instances, list): snapshot_instances = [snapshot_instances] for snapshot_instance in snapshot_instances: share_instance = share_instance_get( context, snapshot_instance['share_instance_id'], session=session, with_share_data=True) snapshot_instance['share'] = share_instance return snapshot_instances ################### @require_context def share_snapshot_create(context, create_values, create_snapshot_instance=True): values = copy.deepcopy(create_values) values = ensure_model_dict_has_id(values) snapshot_ref = models.ShareSnapshot() snapshot_instance_values, snapshot_values = ( _extract_snapshot_instance_values(values) ) share_ref = share_get(context, snapshot_values.get('share_id')) snapshot_instance_values.update( {'share_instance_id': share_ref.instance.id} ) snapshot_ref.update(snapshot_values) session = get_session() with session.begin(): snapshot_ref.save(session=session) if create_snapshot_instance: share_snapshot_instance_create( context, snapshot_ref['id'], snapshot_instance_values, session=session ) return share_snapshot_get( context, snapshot_values['id'], session=session) @require_admin_context def snapshot_data_get_for_project(context, project_id, user_id, share_type_id=None, session=None): query = (model_query(context, models.ShareSnapshot, func.count(models.ShareSnapshot.id), func.sum(models.ShareSnapshot.size), read_deleted="no", session=session). 
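# Illustrative usage sketch: fetch every instance of one snapshot that is in
# 'available' status, with the parent share data attached. `ctxt` and
# `snapshot` are assumed inputs; per the docstring above, single values and
# lists are both accepted for each filter key.
instances = share_snapshot_instance_get_all_with_filters(
    ctxt,
    {'snapshot_ids': snapshot['id'],
     'statuses': [constants.STATUS_AVAILABLE]},
    with_share_data=True)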
filter_by(project_id=project_id)) if share_type_id: query = query.join( models.ShareInstance, models.ShareInstance.share_id == models.ShareSnapshot.share_id, ).filter_by(share_type_id=share_type_id) elif user_id: query = query.filter_by(user_id=user_id) result = query.first() return result[0] or 0, result[1] or 0 @require_context def share_snapshot_get(context, snapshot_id, session=None): result = (model_query(context, models.ShareSnapshot, session=session, project_only=True). filter_by(id=snapshot_id). options(joinedload('share')). options(joinedload('instances')). first()) if not result: raise exception.ShareSnapshotNotFound(snapshot_id=snapshot_id) return result def _share_snapshot_get_all_with_filters(context, project_id=None, share_id=None, filters=None, sort_key=None, sort_dir=None): # Init data sort_key = sort_key or 'share_id' sort_dir = sort_dir or 'desc' filters = filters or {} query = model_query(context, models.ShareSnapshot) if project_id: query = query.filter_by(project_id=project_id) if share_id: query = query.filter_by(share_id=share_id) query = query.options(joinedload('share')) query = query.options(joinedload('instances')) # Apply filters if 'usage' in filters: usage_filter_keys = ['any', 'used', 'unused'] if filters['usage'] == 'any': pass elif filters['usage'] == 'used': query = query.filter(or_(models.Share.snapshot_id == ( models.ShareSnapshot.id))) elif filters['usage'] == 'unused': query = query.filter(or_(models.Share.snapshot_id != ( models.ShareSnapshot.id))) else: msg = _("Wrong 'usage' key provided - '%(key)s'. " "Expected keys are '%(ek)s'.") % { 'key': filters['usage'], 'ek': six.text_type(usage_filter_keys)} raise exception.InvalidInput(reason=msg) # Apply sorting try: attr = getattr(models.ShareSnapshot, sort_key) except AttributeError: msg = _("Wrong sorting key provided - '%s'.") % sort_key raise exception.InvalidInput(reason=msg) if sort_dir.lower() == 'desc': query = query.order_by(attr.desc()) elif sort_dir.lower() == 'asc': query = query.order_by(attr.asc()) else: msg = _("Wrong sorting data provided: sort key is '%(sort_key)s' " "and sort direction is '%(sort_dir)s'.") % { "sort_key": sort_key, "sort_dir": sort_dir} raise exception.InvalidInput(reason=msg) # Returns list of shares that satisfy filters return query.all() @require_admin_context def share_snapshot_get_all(context, filters=None, sort_key=None, sort_dir=None): return _share_snapshot_get_all_with_filters( context, filters=filters, sort_key=sort_key, sort_dir=sort_dir, ) @require_context def share_snapshot_get_all_by_project(context, project_id, filters=None, sort_key=None, sort_dir=None): authorize_project_context(context, project_id) return _share_snapshot_get_all_with_filters( context, project_id=project_id, filters=filters, sort_key=sort_key, sort_dir=sort_dir, ) @require_context def share_snapshot_get_all_for_share(context, share_id, filters=None, sort_key=None, sort_dir=None): return _share_snapshot_get_all_with_filters( context, share_id=share_id, filters=filters, sort_key=sort_key, sort_dir=sort_dir, ) @require_context def share_snapshot_get_latest_for_share(context, share_id): snapshots = _share_snapshot_get_all_with_filters( context, share_id=share_id, sort_key='created_at', sort_dir='desc') return snapshots[0] if snapshots else None @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def share_snapshot_update(context, snapshot_id, values): session = get_session() with session.begin(): snapshot_ref = share_snapshot_get(context, snapshot_id, 
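# Illustrative usage sketch: list the caller project's snapshots that no
# share has been created from (the 'unused' usage filter handled above),
# newest first. `ctxt` is an assumed RequestContext.
unused_snapshots = share_snapshot_get_all_by_project(
    ctxt, ctxt.project_id,
    filters={'usage': 'unused'},
    sort_key='created_at', sort_dir='desc')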
session=session) instance_values, snapshot_values = ( _extract_snapshot_instance_values(values) ) if snapshot_values: snapshot_ref.update(snapshot_values) snapshot_ref.save(session=session) if instance_values: snapshot_ref.instance.update(instance_values) snapshot_ref.instance.save(session=session) return snapshot_ref ################################# @require_context def share_snapshot_access_create(context, values): values = ensure_model_dict_has_id(values) session = get_session() with session.begin(): access_ref = models.ShareSnapshotAccessMapping() access_ref.update(values) access_ref.save(session=session) snapshot = share_snapshot_get(context, values['share_snapshot_id'], session=session) for instance in snapshot.instances: vals = { 'share_snapshot_instance_id': instance['id'], 'access_id': access_ref['id'], } _share_snapshot_instance_access_create(vals, session) return share_snapshot_access_get(context, access_ref['id']) def _share_snapshot_access_get_query(context, session, filters, read_deleted='no'): query = model_query(context, models.ShareSnapshotAccessMapping, session=session, read_deleted=read_deleted) return query.filter_by(**filters) def _share_snapshot_instance_access_get_query(context, session, access_id=None, share_snapshot_instance_id=None): filters = {'deleted': 'False'} if access_id is not None: filters.update({'access_id': access_id}) if share_snapshot_instance_id is not None: filters.update( {'share_snapshot_instance_id': share_snapshot_instance_id}) return model_query(context, models.ShareSnapshotInstanceAccessMapping, session=session).filter_by(**filters) @require_context def share_snapshot_instance_access_get_all(context, access_id, session): rules = _share_snapshot_instance_access_get_query( context, session, access_id=access_id).all() return rules @require_context def share_snapshot_access_get(context, access_id, session=None): session = session or get_session() access = _share_snapshot_access_get_query( context, session, {'id': access_id}).first() if access: return access else: raise exception.NotFound() def _share_snapshot_instance_access_create(values, session): access_ref = models.ShareSnapshotInstanceAccessMapping() access_ref.update(ensure_model_dict_has_id(values)) access_ref.save(session=session) return access_ref @require_context def share_snapshot_access_get_all_for_share_snapshot(context, share_snapshot_id, filters): session = get_session() filters['share_snapshot_id'] = share_snapshot_id access_list = _share_snapshot_access_get_query( context, session, filters).all() return access_list @require_context def share_snapshot_check_for_existing_access(context, share_snapshot_id, access_type, access_to): return _check_for_existing_access( context, 'share_snapshot', share_snapshot_id, access_type, access_to) @require_context def share_snapshot_access_get_all_for_snapshot_instance( context, snapshot_instance_id, filters=None, with_snapshot_access_data=True, session=None): """Get all access rules related to a certain snapshot instance.""" session = session or get_session() filters = copy.deepcopy(filters) if filters else {} filters.update({'share_snapshot_instance_id': snapshot_instance_id}) query = _share_snapshot_instance_access_get_query(context, session) legal_filter_keys = ( 'id', 'share_snapshot_instance_id', 'access_id', 'state') query = exact_filter( query, models.ShareSnapshotInstanceAccessMapping, filters, legal_filter_keys) instance_accesses = query.all() if with_snapshot_access_data: instance_accesses = _set_instances_snapshot_access_data( 
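# Illustrative sketch: allow an address to access a snapshot only when no
# identical rule exists yet; both helpers are defined above. `ctxt` and
# `snapshot` are assumed caller-provided, and the address is invented.
if not share_snapshot_check_for_existing_access(
        ctxt, snapshot['id'], 'ip', '192.0.2.10'):
    share_snapshot_access_create(ctxt, {
        'share_snapshot_id': snapshot['id'],
        'access_type': 'ip',
        'access_to': '192.0.2.10',
    })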
context, instance_accesses, session) return instance_accesses @require_context def share_snapshot_instance_access_update( context, access_id, instance_id, updates): snapshot_access_fields = ('access_type', 'access_to') snapshot_access_map_updates, share_instance_access_map_updates = ( _extract_subdict_by_fields(updates, snapshot_access_fields) ) session = get_session() with session.begin(): snapshot_access = _share_snapshot_access_get_query( context, session, {'id': access_id}).first() if not snapshot_access: raise exception.NotFound() snapshot_access.update(snapshot_access_map_updates) snapshot_access.save(session=session) access = _share_snapshot_instance_access_get_query( context, session, access_id=access_id, share_snapshot_instance_id=instance_id).first() if not access: raise exception.NotFound() access.update(share_instance_access_map_updates) access.save(session=session) return access @require_context def share_snapshot_instance_access_get( context, access_id, share_snapshot_instance_id, with_snapshot_access_data=True): session = get_session() with session.begin(): access = _share_snapshot_instance_access_get_query( context, session, access_id=access_id, share_snapshot_instance_id=share_snapshot_instance_id).first() if access is None: raise exception.NotFound() if with_snapshot_access_data: return _set_instances_snapshot_access_data( context, access, session)[0] else: return access @require_context def share_snapshot_instance_access_delete( context, access_id, snapshot_instance_id): session = get_session() with session.begin(): rule = _share_snapshot_instance_access_get_query( context, session, access_id=access_id, share_snapshot_instance_id=snapshot_instance_id).first() if not rule: exception.NotFound() rule.soft_delete(session, update_status=True, status_field_name='state') other_mappings = share_snapshot_instance_access_get_all( context, rule['access_id'], session) if len(other_mappings) == 0: ( session.query(models.ShareSnapshotAccessMapping) .filter_by(id=rule['access_id']) .soft_delete(update_status=True, status_field_name='state') ) @require_context def share_snapshot_instance_export_location_create(context, values): values = ensure_model_dict_has_id(values) session = get_session() with session.begin(): ssiel = models.ShareSnapshotInstanceExportLocation() ssiel.update(values) ssiel.save(session=session) return ssiel def _share_snapshot_instance_export_locations_get_query(context, session, values): query = model_query(context, models.ShareSnapshotInstanceExportLocation, session=session) return query.filter_by(**values) @require_context def share_snapshot_export_locations_get(context, snapshot_id): session = get_session() snapshot = share_snapshot_get(context, snapshot_id, session=session) ins_ids = [ins['id'] for ins in snapshot.instances] export_locations = _share_snapshot_instance_export_locations_get_query( context, session, {}).filter( models.ShareSnapshotInstanceExportLocation. 
share_snapshot_instance_id.in_(ins_ids)).all() return export_locations @require_context def share_snapshot_instance_export_locations_get_all( context, share_snapshot_instance_id): session = get_session() export_locations = _share_snapshot_instance_export_locations_get_query( context, session, {'share_snapshot_instance_id': share_snapshot_instance_id}).all() return export_locations @require_context def share_snapshot_instance_export_location_get(context, el_id): session = get_session() export_location = _share_snapshot_instance_export_locations_get_query( context, session, {'id': el_id}).first() if export_location: return export_location else: raise exception.NotFound() @require_context def share_snapshot_instance_export_location_delete(context, el_id): session = get_session() with session.begin(): el = _share_snapshot_instance_export_locations_get_query( context, session, {'id': el_id}).first() if not el: exception.NotFound() el.soft_delete(session=session) ################################# @require_context @require_share_exists def share_metadata_get(context, share_id): return _share_metadata_get(context, share_id) @require_context @require_share_exists def share_metadata_delete(context, share_id, key): (_share_metadata_get_query(context, share_id). filter_by(key=key).soft_delete()) @require_context @require_share_exists def share_metadata_update(context, share_id, metadata, delete): return _share_metadata_update(context, share_id, metadata, delete) def _share_metadata_get_query(context, share_id, session=None): return (model_query(context, models.ShareMetadata, session=session, read_deleted="no"). filter_by(share_id=share_id). options(joinedload('share'))) def _share_metadata_get(context, share_id, session=None): rows = _share_metadata_get_query(context, share_id, session=session).all() result = {} for row in rows: result[row['key']] = row['value'] return result @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def _share_metadata_update(context, share_id, metadata, delete, session=None): if not session: session = get_session() with session.begin(): # Set existing metadata to deleted if delete argument is True if delete: original_metadata = _share_metadata_get(context, share_id, session=session) for meta_key, meta_value in original_metadata.items(): if meta_key not in metadata: meta_ref = _share_metadata_get_item(context, share_id, meta_key, session=session) meta_ref.soft_delete(session=session) meta_ref = None # Now update all existing items with new values, or create new meta # objects for meta_key, meta_value in metadata.items(): # update the value whether it exists or not item = {"value": meta_value} try: meta_ref = _share_metadata_get_item(context, share_id, meta_key, session=session) except exception.ShareMetadataNotFound: meta_ref = models.ShareMetadata() item.update({"key": meta_key, "share_id": share_id}) meta_ref.update(item) meta_ref.save(session=session) return metadata def _share_metadata_get_item(context, share_id, key, session=None): result = (_share_metadata_get_query(context, share_id, session=session). filter_by(key=key). 
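# Illustrative usage sketch: overwrite a share's metadata (delete=True drops
# keys missing from the new dict) and read the result back as a plain dict.
# `ctxt` and `share` are assumed inputs; the key/value pairs are invented.
share_metadata_update(
    ctxt, share['id'],
    {'purpose': 'ci', 'owning_team': 'storage'},
    delete=True)
current_metadata = share_metadata_get(ctxt, share['id'])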
first()) if not result: raise exception.ShareMetadataNotFound(metadata_key=key, share_id=share_id) return result ############################ # Export locations functions ############################ def _share_export_locations_get(context, share_instance_ids, include_admin_only=True, ignore_secondary_replicas=False, session=None): session = session or get_session() if not isinstance(share_instance_ids, (set, list, tuple)): share_instance_ids = (share_instance_ids, ) query = model_query( context, models.ShareInstanceExportLocations, session=session, read_deleted="no", ).filter( models.ShareInstanceExportLocations.share_instance_id.in_( share_instance_ids), ).order_by( "updated_at", ).options( joinedload("_el_metadata_bare"), ) if not include_admin_only: query = query.filter_by(is_admin_only=False) if ignore_secondary_replicas: replica_state_attr = models.ShareInstance.replica_state query = query.join("share_instance").filter( or_(replica_state_attr == None, # noqa replica_state_attr == constants.REPLICA_STATE_ACTIVE)) return query.all() @require_context @require_share_exists def share_export_locations_get_by_share_id(context, share_id, include_admin_only=True, ignore_migration_destination=False, ignore_secondary_replicas=False): share = share_get(context, share_id) if ignore_migration_destination: ids = [instance.id for instance in share.instances if instance['status'] != constants.STATUS_MIGRATING_TO] else: ids = [instance.id for instance in share.instances] rows = _share_export_locations_get( context, ids, include_admin_only=include_admin_only, ignore_secondary_replicas=ignore_secondary_replicas) return rows @require_context @require_share_instance_exists def share_export_locations_get_by_share_instance_id(context, share_instance_id, include_admin_only=True): rows = _share_export_locations_get( context, [share_instance_id], include_admin_only=include_admin_only) return rows @require_context @require_share_exists def share_export_locations_get(context, share_id): # NOTE(vponomaryov): this method is kept for compatibility with # old approach. New one uses 'share_export_locations_get_by_share_id'. # Which returns list of dicts instead of list of strings, as this one does. 
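# Illustrative usage sketch: collect only the user-facing export paths of a
# share, skipping admin-only locations, migration destinations and
# non-active replicas. `ctxt` and `share` are assumed inputs.
locations = share_export_locations_get_by_share_id(
    ctxt, share['id'],
    include_admin_only=False,
    ignore_migration_destination=True,
    ignore_secondary_replicas=True)
paths = [location['path'] for location in locations]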
share = share_get(context, share_id) rows = _share_export_locations_get( context, share.instance.id, context.is_admin) return [location['path'] for location in rows] @require_context def share_export_location_get_by_uuid(context, export_location_uuid, ignore_secondary_replicas=False, session=None): session = session or get_session() query = model_query( context, models.ShareInstanceExportLocations, session=session, read_deleted="no", ).filter_by( uuid=export_location_uuid, ).options( joinedload("_el_metadata_bare"), ) if ignore_secondary_replicas: replica_state_attr = models.ShareInstance.replica_state query = query.join("share_instance").filter( or_(replica_state_attr == None, # noqa replica_state_attr == constants.REPLICA_STATE_ACTIVE)) result = query.first() if not result: raise exception.ExportLocationNotFound(uuid=export_location_uuid) return result @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def share_export_locations_update(context, share_instance_id, export_locations, delete): # NOTE(u_glide): # Backward compatibility code for drivers, # which return single export_location as string if not isinstance(export_locations, (list, tuple, set)): export_locations = (export_locations, ) export_locations_as_dicts = [] for el in export_locations: # NOTE(vponomaryov): transform old export locations view to new one export_location = el if isinstance(el, six.string_types): export_location = { "path": el, "is_admin_only": False, "metadata": {}, } elif isinstance(export_location, dict): if 'metadata' not in export_location: export_location['metadata'] = {} else: raise exception.ManilaException( _("Wrong export location type '%s'.") % type(export_location)) export_locations_as_dicts.append(export_location) export_locations = export_locations_as_dicts export_locations_paths = [el['path'] for el in export_locations] session = get_session() current_el_rows = _share_export_locations_get( context, share_instance_id, session=session) def get_path_list_from_rows(rows): return set([l['path'] for l in rows]) current_el_paths = get_path_list_from_rows(current_el_rows) def create_indexed_time_dict(key_list): base = timeutils.utcnow() return { # NOTE(u_glide): Incrementing timestamp by microseconds to make # timestamp order match index order. 
key: base + datetime.timedelta(microseconds=index) for index, key in enumerate(key_list) } indexed_update_time = create_indexed_time_dict(export_locations_paths) for el in current_el_rows: if delete and el['path'] not in export_locations_paths: export_location_metadata_delete(context, el['uuid']) el.soft_delete(session) else: updated_at = indexed_update_time[el['path']] el.update({ 'updated_at': updated_at, 'deleted': 0, }) el.save(session=session) if el['el_metadata']: export_location_metadata_update( context, el['uuid'], el['el_metadata'], session=session) # Now add new export locations for el in export_locations: if el['path'] in current_el_paths: # Already updated continue location_ref = models.ShareInstanceExportLocations() location_ref.update({ 'uuid': uuidutils.generate_uuid(), 'path': el['path'], 'share_instance_id': share_instance_id, 'updated_at': indexed_update_time[el['path']], 'deleted': 0, 'is_admin_only': el.get('is_admin_only', False), }) location_ref.save(session=session) if not el.get('metadata'): continue export_location_metadata_update( context, location_ref['uuid'], el.get('metadata'), session=session) return get_path_list_from_rows(_share_export_locations_get( context, share_instance_id, session=session)) ##################################### # Export locations metadata functions ##################################### def _export_location_metadata_get_query(context, export_location_uuid, session=None): session = session or get_session() export_location_id = share_export_location_get_by_uuid( context, export_location_uuid).id return model_query( context, models.ShareInstanceExportLocationsMetadata, session=session, read_deleted="no", ).filter_by( export_location_id=export_location_id, ) @require_context def export_location_metadata_get(context, export_location_uuid, session=None): rows = _export_location_metadata_get_query( context, export_location_uuid, session=session).all() result = {} for row in rows: result[row["key"]] = row["value"] return result @require_context def export_location_metadata_delete(context, export_location_uuid, keys=None): session = get_session() metadata = _export_location_metadata_get_query( context, export_location_uuid, session=session, ) # NOTE(vponomaryov): if keys is None then we delete all metadata. if keys is not None: keys = keys if isinstance(keys, (list, set, tuple)) else (keys, ) metadata = metadata.filter( models.ShareInstanceExportLocationsMetadata.key.in_(keys)) metadata = metadata.all() for meta_ref in metadata: meta_ref.soft_delete(session=session) @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def export_location_metadata_update(context, export_location_uuid, metadata, delete=False, session=None): session = session or get_session() if delete: original_metadata = export_location_metadata_get( context, export_location_uuid, session=session) keys_for_deletion = set(original_metadata).difference(metadata) if keys_for_deletion: export_location_metadata_delete( context, export_location_uuid, keys=keys_for_deletion) el = share_export_location_get_by_uuid(context, export_location_uuid) for meta_key, meta_value in metadata.items(): # NOTE(vponomaryov): we should use separate session # for each meta_ref because of autoincrement of integer primary key # that will not take effect using one session and we will rewrite, # in that case, single record - first one added with this call. 
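# Illustrative example of the export_locations formats accepted by
# share_export_locations_update() above: either a legacy bare path string or
# a list of dicts with optional metadata. The path and the 'preferred'
# metadata key shown here are invented for illustration.
share_export_locations_update(
    ctxt, share_instance['id'],
    [{'path': '10.0.0.5:/shares/share-example',
      'is_admin_only': False,
      'metadata': {'preferred': True}}],
    delete=True)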
session = get_session() if meta_value is None: LOG.warning("%s should be properly defined in the driver.", meta_key) item = {"value": meta_value, "updated_at": timeutils.utcnow()} meta_ref = _export_location_metadata_get_query( context, export_location_uuid, session=session, ).filter_by( key=meta_key, ).first() if not meta_ref: meta_ref = models.ShareInstanceExportLocationsMetadata() item.update({ "key": meta_key, "export_location_id": el.id, }) meta_ref.update(item) meta_ref.save(session=session) return metadata ################################### @require_context def security_service_create(context, values): values = ensure_model_dict_has_id(values) security_service_ref = models.SecurityService() security_service_ref.update(values) session = get_session() with session.begin(): security_service_ref.save(session=session) return security_service_ref @require_context def security_service_delete(context, id): session = get_session() with session.begin(): security_service_ref = security_service_get(context, id, session=session) security_service_ref.soft_delete(session) @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def security_service_update(context, id, values): session = get_session() with session.begin(): security_service_ref = security_service_get(context, id, session=session) security_service_ref.update(values) security_service_ref.save(session=session) return security_service_ref @require_context def security_service_get(context, id, session=None): result = (_security_service_get_query(context, session=session). filter_by(id=id).first()) if result is None: raise exception.SecurityServiceNotFound(security_service_id=id) return result @require_context def security_service_get_all(context): return _security_service_get_query(context).all() @require_context def security_service_get_all_by_project(context, project_id): return (_security_service_get_query(context). filter_by(project_id=project_id).all()) def _security_service_get_query(context, session=None): if session is None: session = get_session() return model_query(context, models.SecurityService, session=session) ################### def _network_get_query(context, session=None): if session is None: session = get_session() return (model_query(context, models.ShareNetwork, session=session, project_only=True). 
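# Illustrative sketch: register a security service for the caller's project.
# The field names are assumed to match the SecurityService model (only
# project_id is referenced elsewhere in this module); all values are invented.
sec_service = security_service_create(ctxt, {
    'project_id': ctxt.project_id,
    'type': 'active_directory',
    'domain': 'example.com',
    'dns_ip': '192.0.2.53',
    'user': 'manila-svc',
    'password': 'not-a-real-password',
})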
options(joinedload('share_instances'), joinedload('security_services'), subqueryload('share_network_subnets'))) @require_context def share_network_create(context, values): values = ensure_model_dict_has_id(values) network_ref = models.ShareNetwork() network_ref.update(values) session = get_session() with session.begin(): network_ref.save(session=session) return share_network_get(context, values['id'], session) @require_context def share_network_delete(context, id): session = get_session() with session.begin(): network_ref = share_network_get(context, id, session=session) network_ref.soft_delete(session) @require_context def share_network_update(context, id, values): session = get_session() with session.begin(): network_ref = share_network_get(context, id, session=session) network_ref.update(values) network_ref.save(session=session) return network_ref @require_context def share_network_get(context, id, session=None): result = _network_get_query(context, session).filter_by(id=id).first() if result is None: raise exception.ShareNetworkNotFound(share_network_id=id) return result @require_context def share_network_get_all(context): return _network_get_query(context).all() @require_context def share_network_get_all_by_project(context, project_id): return _network_get_query(context).filter_by(project_id=project_id).all() @require_context def share_network_get_all_by_security_service(context, security_service_id): session = get_session() return (model_query(context, models.ShareNetwork, session=session). join(models.ShareNetworkSecurityServiceAssociation, models.ShareNetwork.id == models.ShareNetworkSecurityServiceAssociation.share_network_id). filter_by(security_service_id=security_service_id, deleted=0) .all()) @require_context def share_network_add_security_service(context, id, security_service_id): session = get_session() with session.begin(): assoc_ref = (model_query( context, models.ShareNetworkSecurityServiceAssociation, session=session). filter_by(share_network_id=id). filter_by( security_service_id=security_service_id).first()) if assoc_ref: msg = "Already associated" raise exception.ShareNetworkSecurityServiceAssociationError( share_network_id=id, security_service_id=security_service_id, reason=msg) share_nw_ref = share_network_get(context, id, session=session) security_service_ref = security_service_get(context, security_service_id, session=session) share_nw_ref.security_services += [security_service_ref] share_nw_ref.save(session=session) return share_nw_ref @require_context def share_network_remove_security_service(context, id, security_service_id): session = get_session() with session.begin(): share_nw_ref = share_network_get(context, id, session=session) security_service_get(context, security_service_id, session=session) assoc_ref = (model_query( context, models.ShareNetworkSecurityServiceAssociation, session=session). filter_by(share_network_id=id). 
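# Illustrative sketch: associate a security service with a share network;
# the helper above raises ShareNetworkSecurityServiceAssociationError when
# the pair is already associated. `ctxt`, `share_network` and `sec_service`
# are assumed caller-provided.
share_network_add_security_service(
    ctxt, share_network['id'], sec_service['id'])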
filter_by(security_service_id=security_service_id).first()) if assoc_ref: assoc_ref.soft_delete(session) else: msg = "No association defined" raise exception.ShareNetworkSecurityServiceDissociationError( share_network_id=id, security_service_id=security_service_id, reason=msg) return share_nw_ref @require_context def count_share_networks(context, project_id, user_id=None, share_type_id=None, session=None): query = model_query( context, models.ShareNetwork, func.count(models.ShareNetwork.id), read_deleted="no", session=session).filter_by(project_id=project_id) if share_type_id: query = query.join("share_instances").filter_by( share_type_id=share_type_id) elif user_id is not None: query = query.filter_by(user_id=user_id) return query.first()[0] ################### @require_context def _network_subnet_get_query(context, session=None): if session is None: session = get_session() return (model_query(context, models.ShareNetworkSubnet, session=session). options(joinedload('share_servers'), joinedload('share_network'))) @require_context def share_network_subnet_create(context, values): values = ensure_model_dict_has_id(values) network_subnet_ref = models.ShareNetworkSubnet() network_subnet_ref.update(values) session = get_session() with session.begin(): network_subnet_ref.save(session=session) return share_network_subnet_get( context, network_subnet_ref['id'], session=session) @require_context def share_network_subnet_delete(context, network_subnet_id): session = get_session() with session.begin(): network_subnet_ref = share_network_subnet_get(context, network_subnet_id, session=session) network_subnet_ref.soft_delete(session=session, update_status=True) @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def share_network_subnet_update(context, network_subnet_id, values): session = get_session() with session.begin(): network_subnet_ref = share_network_subnet_get(context, network_subnet_id, session=session) network_subnet_ref.update(values) network_subnet_ref.save(session=session) return network_subnet_ref @require_context def share_network_subnet_get(context, network_subnet_id, session=None): result = (_network_subnet_get_query(context, session) .filter_by(id=network_subnet_id) .first()) if result is None: raise exception.ShareNetworkSubnetNotFound( share_network_subnet_id=network_subnet_id) return result @require_context def share_network_subnet_get_all(context): return _network_subnet_get_query(context).all() @require_context def share_network_subnet_get_all_by_share_network(context, network_id): return _network_subnet_get_query(context).filter_by( share_network_id=network_id).all() @require_context def share_network_subnet_get_by_availability_zone_id( context, share_network_id, availability_zone_id): result = (_network_subnet_get_query(context).filter_by( share_network_id=share_network_id, availability_zone_id=availability_zone_id).first()) # If a specific subnet wasn't found, try get the default one if availability_zone_id and not result: return (_network_subnet_get_query(context).filter_by( share_network_id=share_network_id, availability_zone_id=None).first()) return result @require_context def share_network_subnet_get_default_subnet(context, share_network_id): return share_network_subnet_get_by_availability_zone_id( context, share_network_id, availability_zone_id=None) ################### def _server_get_query(context, session=None): if session is None: session = get_session() return (model_query(context, models.ShareServer, session=session). 
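# Illustrative usage sketch: resolve which subnet of a share network serves a
# given availability zone; the helper above already falls back to the AZ-less
# default subnet when no AZ-specific subnet exists. `ctxt`, `share_network`
# and `az` are assumed inputs.
subnet = share_network_subnet_get_by_availability_zone_id(
    ctxt, share_network['id'], availability_zone_id=az['id'])
default_subnet = share_network_subnet_get_default_subnet(
    ctxt, share_network['id'])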
options(joinedload('share_instances'), joinedload('network_allocations'), joinedload('share_network_subnet'))) @require_context def share_server_create(context, values): values = ensure_model_dict_has_id(values) server_ref = models.ShareServer() server_ref.update(values) session = get_session() with session.begin(): server_ref.save(session=session) # NOTE(u_glide): Do so to prevent errors with relationships return share_server_get(context, server_ref['id'], session=session) @require_context def share_server_delete(context, id): session = get_session() with session.begin(): server_ref = share_server_get(context, id, session=session) share_server_backend_details_delete(context, id, session=session) server_ref.soft_delete(session=session, update_status=True) @require_context def share_server_update(context, id, values): session = get_session() with session.begin(): server_ref = share_server_get(context, id, session=session) server_ref.update(values) server_ref.save(session=session) return server_ref @require_context def share_server_get(context, server_id, session=None): result = (_server_get_query(context, session).filter_by(id=server_id) .first()) if result is None: raise exception.ShareServerNotFound(share_server_id=server_id) return result @require_context def share_server_search_by_identifier(context, identifier, session=None): identifier_field = models.ShareServer.identifier # try if given identifier is a substring of existing entry's identifier result = (_server_get_query(context, session).filter( identifier_field.like('%{}%'.format(identifier))).all()) if not result: # repeat it with underscores instead of hyphens result = (_server_get_query(context, session).filter( identifier_field.like('%{}%'.format( identifier.replace("-", "_")))).all()) if not result: # repeat it with hypens instead of underscores result = (_server_get_query(context, session).filter( identifier_field.like('%{}%'.format( identifier.replace("_", "-")))).all()) if not result: # try if an existing identifier is a substring of given identifier result = (_server_get_query(context, session).filter( literal(identifier).contains(identifier_field)).all()) if not result: # repeat it with underscores instead of hyphens result = (_server_get_query(context, session).filter( literal(identifier.replace("-", "_")).contains( identifier_field)).all()) if not result: # repeat it with hypens instead of underscores result = (_server_get_query(context, session).filter( literal(identifier.replace("_", "-")).contains( identifier_field)).all()) if not result: raise exception.ShareServerNotFound(share_server_id=identifier) return result @require_context def share_server_get_all_by_host_and_share_subnet_valid(context, host, share_subnet_id, session=None): result = (_server_get_query(context, session).filter_by(host=host) .filter_by(share_network_subnet_id=share_subnet_id) .filter(models.ShareServer.status.in_( (constants.STATUS_CREATING, constants.STATUS_ACTIVE))).all()) if not result: filters_description = ('share_network_subnet_id is "%(share_net_id)s",' ' host is "%(host)s" and status in' ' "%(status_cr)s" or "%(status_act)s"') % { 'share_net_id': share_subnet_id, 'host': host, 'status_cr': constants.STATUS_CREATING, 'status_act': constants.STATUS_ACTIVE, } raise exception.ShareServerNotFoundByFilters( filters_description=filters_description) return result @require_context def share_server_get_all(context): return _server_get_query(context).all() @require_context def share_server_get_all_by_host(context, host): return 
_server_get_query(context).filter_by(host=host).all() @require_context def share_server_get_all_unused_deletable(context, host, updated_before): valid_server_status = ( constants.STATUS_INACTIVE, constants.STATUS_ACTIVE, constants.STATUS_ERROR, ) result = (_server_get_query(context) .filter_by(is_auto_deletable=True) .filter_by(host=host) .filter(~models.ShareServer.share_groups.any()) .filter(~models.ShareServer.share_instances.any()) .filter(models.ShareServer.status.in_(valid_server_status)) .filter(models.ShareServer.updated_at < updated_before).all()) return result @require_context def share_server_backend_details_set(context, share_server_id, server_details): share_server_get(context, share_server_id) for meta_key, meta_value in server_details.items(): meta_ref = models.ShareServerBackendDetails() meta_ref.update({ 'key': meta_key, 'value': meta_value, 'share_server_id': share_server_id }) session = get_session() with session.begin(): meta_ref.save(session) return server_details @require_context def share_server_backend_details_delete(context, share_server_id, session=None): if not session: session = get_session() share_server_details = (model_query(context, models.ShareServerBackendDetails, session=session) .filter_by(share_server_id=share_server_id).all()) for item in share_server_details: item.soft_delete(session) ################### def _driver_private_data_query(session, context, entity_id, key=None, read_deleted=False): query = model_query( context, models.DriverPrivateData, session=session, read_deleted=read_deleted, ).filter_by( entity_uuid=entity_id, ) if isinstance(key, list): return query.filter(models.DriverPrivateData.key.in_(key)) elif key is not None: return query.filter_by(key=key) return query @require_context def driver_private_data_get(context, entity_id, key=None, default=None, session=None): if not session: session = get_session() query = _driver_private_data_query(session, context, entity_id, key) if key is None or isinstance(key, list): return {item.key: item.value for item in query.all()} else: result = query.first() return result["value"] if result is not None else default @require_context def driver_private_data_update(context, entity_id, details, delete_existing=False, session=None): # NOTE(u_glide): following code modifies details dict, that's why we should # copy it new_details = copy.deepcopy(details) if not session: session = get_session() with session.begin(): # Process existing data original_data = session.query(models.DriverPrivateData).filter_by( entity_uuid=entity_id).all() for data_ref in original_data: in_new_details = data_ref['key'] in new_details if in_new_details: new_value = six.text_type(new_details.pop(data_ref['key'])) data_ref.update({ "value": new_value, "deleted": 0, "deleted_at": None }) data_ref.save(session=session) elif delete_existing and data_ref['deleted'] != 1: data_ref.update({ "deleted": 1, "deleted_at": timeutils.utcnow() }) data_ref.save(session=session) # Add new data for key, value in new_details.items(): data_ref = models.DriverPrivateData() data_ref.update({ "entity_uuid": entity_id, "key": key, "value": six.text_type(value) }) data_ref.save(session=session) return details @require_context def driver_private_data_delete(context, entity_id, key=None, session=None): if not session: session = get_session() with session.begin(): query = _driver_private_data_query(session, context, entity_id, key) query.update({"deleted": 1, "deleted_at": timeutils.utcnow()}) ################### @require_context def 
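# Illustrative usage sketch: stash driver-private details for an entity (for
# example a share server) and read one key back. Keys and values here are
# invented; the helper above persists values as text.
driver_private_data_update(
    ctxt, share_server['id'],
    {'backend_volume_id': 'vol-0001', 'exported': True},
    delete_existing=False)
backend_volume_id = driver_private_data_get(
    ctxt, share_server['id'], key='backend_volume_id', default=None)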
network_allocation_create(context, values): values = ensure_model_dict_has_id(values) alloc_ref = models.NetworkAllocation() alloc_ref.update(values) session = get_session() with session.begin(): alloc_ref.save(session=session) return alloc_ref @require_context def network_allocation_delete(context, id): session = get_session() with session.begin(): alloc_ref = network_allocation_get(context, id, session=session) alloc_ref.soft_delete(session) @require_context def network_allocation_get(context, id, session=None, read_deleted="no"): if session is None: session = get_session() result = (model_query(context, models.NetworkAllocation, session=session, read_deleted=read_deleted). filter_by(id=id).first()) if result is None: raise exception.NotFound() return result @require_context def network_allocations_get_by_ip_address(context, ip_address): session = get_session() result = (model_query(context, models.NetworkAllocation, session=session). filter_by(ip_address=ip_address).all()) return result or [] @require_context def network_allocations_get_for_share_server(context, share_server_id, session=None, label=None): if session is None: session = get_session() query = model_query( context, models.NetworkAllocation, session=session, ).filter_by( share_server_id=share_server_id, ) if label: if label != 'admin': query = query.filter(or_( # NOTE(vponomaryov): we treat None as alias for 'user'. models.NetworkAllocation.label == None, # noqa models.NetworkAllocation.label == label, )) else: query = query.filter(models.NetworkAllocation.label == label) result = query.all() return result @require_context def network_allocation_update(context, id, values, read_deleted=None): session = get_session() with session.begin(): alloc_ref = network_allocation_get(context, id, session=session, read_deleted=read_deleted) alloc_ref.update(values) alloc_ref.save(session=session) return alloc_ref ################### def _dict_with_specs(inst_type_query, specs_key='extra_specs'): """Convert type query result to dict with extra_spec and rate_limit. Takes a share [group] type query returned by sqlalchemy and returns it as a dictionary, converting the extra/group specs entry from a list of dicts: 'extra_specs' : [{'key': 'k1', 'value': 'v1', ...}, ...] 'group_specs' : [{'key': 'k1', 'value': 'v1', ...}, ...] to a single dict: 'extra_specs' : {'k1': 'v1'} 'group_specs' : {'k1': 'v1'} """ inst_type_dict = dict(inst_type_query) specs = {x['key']: x['value'] for x in inst_type_query[specs_key]} inst_type_dict[specs_key] = specs return inst_type_dict @require_admin_context def share_type_create(context, values, projects=None): """Create a new share type. 
In order to pass in extra specs, the values dict should contain a 'extra_specs' key/value pair: {'extra_specs' : {'k1': 'v1', 'k2': 'v2', ...}} """ values = ensure_model_dict_has_id(values) projects = projects or [] session = get_session() with session.begin(): try: values['extra_specs'] = _metadata_refs(values.get('extra_specs'), models.ShareTypeExtraSpecs) share_type_ref = models.ShareTypes() share_type_ref.update(values) share_type_ref.save(session=session) except db_exception.DBDuplicateEntry: raise exception.ShareTypeExists(id=values['name']) except Exception as e: raise db_exception.DBError(e) for project in set(projects): access_ref = models.ShareTypeProjects() access_ref.update({"share_type_id": share_type_ref.id, "project_id": project}) access_ref.save(session=session) return share_type_ref def _share_type_get_query(context, session=None, read_deleted=None, expected_fields=None): expected_fields = expected_fields or [] query = (model_query(context, models.ShareTypes, session=session, read_deleted=read_deleted). options(joinedload('extra_specs'))) if 'projects' in expected_fields: query = query.options(joinedload('projects')) if not context.is_admin: the_filter = [models.ShareTypes.is_public == true()] projects_attr = getattr(models.ShareTypes, 'projects') the_filter.extend([ projects_attr.any(project_id=context.project_id) ]) query = query.filter(or_(*the_filter)) return query @handle_db_data_error @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def _type_update(context, type_id, values, is_group): if values.get('name') is None: values.pop('name', None) if is_group: model = models.ShareGroupTypes exists_exc = exception.ShareGroupTypeExists exists_args = {'type_id': values.get('name')} else: model = models.ShareTypes exists_exc = exception.ShareTypeExists exists_args = {'id': values.get('name')} session = get_session() with session.begin(): query = model_query(context, model, session=session) try: result = query.filter_by(id=type_id).update(values) except db_exception.DBDuplicateEntry: # This exception only occurs if there's a non-deleted # share/group type which has the same name as the name being # updated. raise exists_exc(**exists_args) if not result: if is_group: raise exception.ShareGroupTypeNotFound(type_id=type_id) else: raise exception.ShareTypeNotFound(share_type_id=type_id) def share_type_update(context, share_type_id, values): _type_update(context, share_type_id, values, is_group=False) @require_context def share_type_get_all(context, inactive=False, filters=None): """Returns a dict describing all share_types with name as key.""" filters = filters or {} read_deleted = "yes" if inactive else "no" query = _share_type_get_query(context, read_deleted=read_deleted) if 'is_public' in filters and filters['is_public'] is not None: the_filter = [models. ShareTypes.is_public == filters['is_public']] if filters['is_public'] and context.project_id is not None: projects_attr = getattr(models. ShareTypes, 'projects') the_filter.extend([ projects_attr.any( project_id=context.project_id, deleted=0) ]) if len(the_filter) > 1: query = query.filter(or_(*the_filter)) else: query = query.filter(the_filter[0]) rows = query.order_by("name").all() result = {} for row in rows: result[row['name']] = _dict_with_specs(row) return result def _share_type_get_id_from_share_type_query(context, id, session=None): return (model_query( context, models.ShareTypes, read_deleted="no", session=session). 
filter_by(id=id)) def _share_type_get_id_from_share_type(context, id, session=None): result = _share_type_get_id_from_share_type_query( context, id, session=session).first() if not result: raise exception.ShareTypeNotFound(share_type_id=id) return result['id'] def _share_type_get(context, id, session=None, inactive=False, expected_fields=None): expected_fields = expected_fields or [] read_deleted = "yes" if inactive else "no" result = (_share_type_get_query( context, session, read_deleted, expected_fields). filter_by(id=id). first()) if not result: # The only way that id could be None is if the default share type is # not configured and no other share type was specified. if id is None: raise exception.DefaultShareTypeNotConfigured() raise exception.ShareTypeNotFound(share_type_id=id) share_type = _dict_with_specs(result) if 'projects' in expected_fields: share_type['projects'] = [p['project_id'] for p in result['projects']] return share_type @require_context def share_type_get(context, id, inactive=False, expected_fields=None): """Return a dict describing specific share_type.""" return _share_type_get(context, id, session=None, inactive=inactive, expected_fields=expected_fields) def _share_type_get_by_name(context, name, session=None): result = (model_query(context, models.ShareTypes, session=session). options(joinedload('extra_specs')). filter_by(name=name). first()) if not result: raise exception.ShareTypeNotFoundByName(share_type_name=name) return _dict_with_specs(result) @require_context def share_type_get_by_name(context, name): """Return a dict describing specific share_type.""" return _share_type_get_by_name(context, name) @require_context def share_type_get_by_name_or_id(context, name_or_id): """Return a dict describing specific share_type using its name or ID. 
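
    The name_or_id value is first treated as a share type ID; if no share
    type with that ID exists, a lookup by name is attempted.
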
    :returns: ShareType object or None if not found
    """
    try:
        return _share_type_get(context, name_or_id)
    except exception.ShareTypeNotFound:
        try:
            return _share_type_get_by_name(context, name_or_id)
        except exception.ShareTypeNotFoundByName:
            return None


@require_admin_context
def share_type_destroy(context, id):
    session = get_session()
    with session.begin():
        _share_type_get(context, id, session)
        shares_count = model_query(
            context, models.ShareInstance, read_deleted="no",
            session=session,
        ).filter_by(share_type_id=id).count()
        share_group_types_count = model_query(
            context, models.ShareGroupTypeShareTypeMapping, read_deleted="no",
            session=session,
        ).filter_by(share_type_id=id).count()
        if shares_count or share_group_types_count:
            msg = ("Deletion of share type %(stype)s failed; it is in use by "
                   "%(shares)d shares and %(gtypes)d share group types")
            msg_args = {'stype': id, 'shares': shares_count,
                        'gtypes': share_group_types_count}
            LOG.error(msg, msg_args)
            raise exception.ShareTypeInUse(share_type_id=id)
        model_query(
            context, models.ShareTypeExtraSpecs, session=session
        ).filter_by(
            share_type_id=id
        ).soft_delete()
        model_query(
            context, models.ShareTypeProjects, session=session
        ).filter_by(
            share_type_id=id,
        ).soft_delete()
        model_query(
            context, models.ShareTypes, session=session
        ).filter_by(
            id=id
        ).soft_delete()
        # Destroy any quotas, usages and reservations for the share type:
        quota_destroy_all_by_share_type(context, id)


def _share_type_access_query(context, session=None):
    return model_query(context, models.ShareTypeProjects, session=session,
                       read_deleted="no")


@require_admin_context
def share_type_access_get_all(context, type_id):
    share_type_id = _share_type_get_id_from_share_type(context, type_id)
    return (_share_type_access_query(context).
            filter_by(share_type_id=share_type_id).all())


@require_admin_context
def share_type_access_add(context, type_id, project_id):
    """Add given tenant to the share type access list."""
    share_type_id = _share_type_get_id_from_share_type(context, type_id)
    access_ref = models.ShareTypeProjects()
    access_ref.update({"share_type_id": share_type_id,
                       "project_id": project_id})
    session = get_session()
    with session.begin():
        try:
            access_ref.save(session=session)
        except db_exception.DBDuplicateEntry:
            raise exception.ShareTypeAccessExists(share_type_id=type_id,
                                                  project_id=project_id)
        return access_ref


@require_admin_context
def share_type_access_remove(context, type_id, project_id):
    """Remove given tenant from the share type access list."""
    share_type_id = _share_type_get_id_from_share_type(context, type_id)
    count = (_share_type_access_query(context).
             filter_by(share_type_id=share_type_id).
             filter_by(project_id=project_id).
             soft_delete(synchronize_session=False))
    if count == 0:
        raise exception.ShareTypeAccessNotFound(
            share_type_id=type_id, project_id=project_id)


####################


def _share_type_extra_specs_query(context, share_type_id, session=None):
    return (model_query(context, models.ShareTypeExtraSpecs, session=session,
                        read_deleted="no").
            filter_by(share_type_id=share_type_id).
            options(joinedload('share_type')))


@require_context
def share_type_extra_specs_get(context, share_type_id):
    rows = (_share_type_extra_specs_query(context, share_type_id).
all()) result = {} for row in rows: result[row['key']] = row['value'] return result @require_context def share_type_extra_specs_delete(context, share_type_id, key): session = get_session() with session.begin(): _share_type_extra_specs_get_item(context, share_type_id, key, session) (_share_type_extra_specs_query(context, share_type_id, session). filter_by(key=key).soft_delete()) def _share_type_extra_specs_get_item(context, share_type_id, key, session=None): result = _share_type_extra_specs_query( context, share_type_id, session=session ).filter_by(key=key).options(joinedload('share_type')).first() if not result: raise exception.ShareTypeExtraSpecsNotFound( extra_specs_key=key, share_type_id=share_type_id) return result @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def share_type_extra_specs_update_or_create(context, share_type_id, specs): session = get_session() with session.begin(): spec_ref = None for key, value in specs.items(): try: spec_ref = _share_type_extra_specs_get_item( context, share_type_id, key, session) except exception.ShareTypeExtraSpecsNotFound: spec_ref = models.ShareTypeExtraSpecs() spec_ref.update({"key": key, "value": value, "share_type_id": share_type_id, "deleted": 0}) spec_ref.save(session=session) return specs def _ensure_availability_zone_exists(context, values, session, strict=True): az_name = values.pop('availability_zone', None) if strict and not az_name: msg = _("Values dict should have 'availability_zone' field.") raise ValueError(msg) elif not az_name: return if uuidutils.is_uuid_like(az_name): az_ref = availability_zone_get(context, az_name, session=session) else: az_ref = availability_zone_create_if_not_exist( context, az_name, session=session) values.update({'availability_zone_id': az_ref['id']}) @require_context def availability_zone_get(context, id_or_name, session=None): if session is None: session = get_session() query = model_query(context, models.AvailabilityZone, session=session) if uuidutils.is_uuid_like(id_or_name): query = query.filter_by(id=id_or_name) else: query = query.filter_by(name=id_or_name) result = query.first() if not result: raise exception.AvailabilityZoneNotFound(id=id_or_name) return result @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def availability_zone_create_if_not_exist(context, name, session=None): if session is None: session = get_session() az = models.AvailabilityZone() az.update({'id': uuidutils.generate_uuid(), 'name': name}) try: with session.begin(): az.save(session) # NOTE(u_glide): Do not catch specific exception here, because it depends # on concrete backend used by SqlAlchemy except Exception: return availability_zone_get(context, name, session=session) return az @require_context def availability_zone_get_all(context): session = get_session() enabled_services = model_query( context, models.Service, models.Service.availability_zone_id, session=session, read_deleted="no" ).filter_by(disabled=False).distinct() return model_query(context, models.AvailabilityZone, session=session, read_deleted="no").filter( models.AvailabilityZone.id.in_(enabled_services) ).all() @require_admin_context def purge_deleted_records(context, age_in_days): """Purge soft-deleted records older than(and equal) age from tables.""" if age_in_days < 0: msg = _('Must supply a non-negative value for "age_in_days".') LOG.error(msg) raise exception.InvalidParameterValue(msg) metadata = MetaData() metadata.reflect(get_engine()) session = get_session() session.begin() 
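    # The loop below walks the reflected tables (those with a 'deleted'
    # column) in reverse dependency order and hard-deletes rows whose
    # 'deleted_at' is at or before the cutoff computed from age_in_days.
    # Rows that are still referenced through foreign keys raise DBError
    # inside the nested transaction and are skipped rather than aborting
    # the whole purge.
    #
    # Illustrative call (the admin context variable is hypothetical):
    #
    #     purge_deleted_records(admin_ctxt, age_in_days=90)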
deleted_age = timeutils.utcnow() - datetime.timedelta(days=age_in_days) for table in reversed(metadata.sorted_tables): if 'deleted' in table.columns.keys(): try: mds = [m for m in models.__dict__.values() if (hasattr(m, '__tablename__') and m.__tablename__ == six.text_type(table))] if len(mds) > 0: # collect all soft-deleted records with session.begin_nested(): model = mds[0] s_deleted_records = session.query(model).filter( model.deleted_at <= deleted_age) deleted_count = 0 # delete records one by one, # skip the records which has FK constraints for record in s_deleted_records: try: with session.begin_nested(): session.delete(record) deleted_count += 1 except db_exc.DBError: LOG.warning( ("Deleting soft-deleted resource %s " "failed, skipping."), record) if deleted_count != 0: LOG.info("Deleted %(count)s records in " "table %(table)s.", {'count': deleted_count, 'table': table}) except db_exc.DBError: LOG.warning("Querying table %s's soft-deleted records " "failed, skipping.", table) session.commit() #################### def _share_group_get(context, share_group_id, session=None): session = session or get_session() result = (model_query(context, models.ShareGroup, session=session, project_only=True, read_deleted='no'). filter_by(id=share_group_id). options(joinedload('share_types')). first()) if not result: raise exception.ShareGroupNotFound(share_group_id=share_group_id) return result @require_context def share_group_get(context, share_group_id, session=None): return _share_group_get(context, share_group_id, session=session) def _share_group_get_all(context, project_id=None, share_server_id=None, host=None, detailed=True, filters=None, sort_key=None, sort_dir=None, session=None): session = session or get_session() sort_key = sort_key or 'created_at' sort_dir = sort_dir or 'desc' query = model_query( context, models.ShareGroup, session=session, read_deleted='no') # Apply filters if not filters: filters = {} no_key = 'key_is_absent' for k, v in filters.items(): temp_k = k.rstrip('~') if k in constants.LIKE_FILTER else k filter_attr = getattr(models.ShareGroup, temp_k, no_key) if filter_attr == no_key: msg = _("Share groups cannot be filtered using '%s' key.") raise exception.InvalidInput(reason=msg % k) if k in constants.LIKE_FILTER: query = query.filter(filter_attr.op('LIKE')(u'%' + v + u'%')) else: query = query.filter(filter_attr == v) if project_id: query = query.filter( models.ShareGroup.project_id == project_id) if host: query = query.filter( models.ShareGroup.host == host) if share_server_id: query = query.filter( models.ShareGroup.share_server_id == share_server_id) try: query = apply_sorting(models.ShareGroup, query, sort_key, sort_dir) except AttributeError: msg = _("Wrong sorting key provided - '%s'.") % sort_key raise exception.InvalidInput(reason=msg) if detailed: return query.options(joinedload('share_types')).all() else: query = query.with_entities( models.ShareGroup.id, models.ShareGroup.name) values = [] for sg_id, sg_name in query.all(): values.append({"id": sg_id, "name": sg_name}) return values @require_admin_context def share_group_get_all(context, detailed=True, filters=None, sort_key=None, sort_dir=None): return _share_group_get_all( context, detailed=detailed, filters=filters, sort_key=sort_key, sort_dir=sort_dir) @require_admin_context def share_group_get_all_by_host(context, host, detailed=True): return _share_group_get_all(context, host=host, detailed=detailed) @require_context def share_group_get_all_by_project(context, project_id, detailed=True, filters=None, 
sort_key=None, sort_dir=None): authorize_project_context(context, project_id) return _share_group_get_all( context, project_id=project_id, detailed=detailed, filters=filters, sort_key=sort_key, sort_dir=sort_dir) @require_context def share_group_get_all_by_share_server(context, share_server_id, filters=None, sort_key=None, sort_dir=None): return _share_group_get_all( context, share_server_id=share_server_id, filters=filters, sort_key=sort_key, sort_dir=sort_dir) @require_context def share_group_create(context, values): share_group = models.ShareGroup() if not values.get('id'): values['id'] = six.text_type(uuidutils.generate_uuid()) mappings = [] for item in values.get('share_types') or []: mapping = models.ShareGroupShareTypeMapping() mapping['id'] = six.text_type(uuidutils.generate_uuid()) mapping['share_type_id'] = item mapping['share_group_id'] = values['id'] mappings.append(mapping) values['share_types'] = mappings session = get_session() with session.begin(): share_group.update(values) session.add(share_group) return _share_group_get(context, values['id'], session=session) @require_context def share_group_update(context, share_group_id, values): session = get_session() with session.begin(): share_group_ref = _share_group_get( context, share_group_id, session=session) share_group_ref.update(values) share_group_ref.save(session=session) return share_group_ref @require_admin_context def share_group_destroy(context, share_group_id): session = get_session() with session.begin(): share_group_ref = _share_group_get( context, share_group_id, session=session) share_group_ref.soft_delete(session) session.query(models.ShareGroupShareTypeMapping).filter_by( share_group_id=share_group_ref['id']).soft_delete() @require_context def count_shares_in_share_group(context, share_group_id, session=None): session = session or get_session() return (model_query(context, models.Share, session=session, project_only=True, read_deleted="no"). filter_by(share_group_id=share_group_id). count()) @require_context def get_all_shares_by_share_group(context, share_group_id, session=None): session = session or get_session() return (model_query( context, models.Share, session=session, project_only=True, read_deleted="no"). filter_by(share_group_id=share_group_id). 
all()) @require_context def count_share_groups(context, project_id, user_id=None, share_type_id=None, session=None): query = model_query( context, models.ShareGroup, func.count(models.ShareGroup.id), read_deleted="no", session=session).filter_by(project_id=project_id) if share_type_id: query = query.join("share_group_share_type_mappings").filter_by( share_type_id=share_type_id) elif user_id is not None: query = query.filter_by(user_id=user_id) return query.first()[0] @require_context def count_share_group_snapshots(context, project_id, user_id=None, share_type_id=None, session=None): query = model_query( context, models.ShareGroupSnapshot, func.count(models.ShareGroupSnapshot.id), read_deleted="no", session=session).filter_by(project_id=project_id) if share_type_id: query = query.join( "share_group" ).join( "share_group_share_type_mappings" ).filter_by(share_type_id=share_type_id) elif user_id is not None: query = query.filter_by(user_id=user_id) return query.first()[0] @require_context def share_replica_data_get_for_project(context, project_id, user_id=None, session=None, share_type_id=None): session = session or get_session() query = model_query( context, models.ShareInstance, func.count(models.ShareInstance.id), func.sum(models.Share.size), read_deleted="no", session=session).join( models.Share, models.ShareInstance.share_id == models.Share.id).filter( models.Share.project_id == project_id).filter( models.ShareInstance.replica_state.isnot(None)) if share_type_id: query = query.filter( models.ShareInstance.share_type_id == share_type_id) elif user_id: query = query.filter(models.Share.user_id == user_id) result = query.first() return result[0] or 0, result[1] or 0 @require_context def count_share_group_snapshots_in_share_group(context, share_group_id, session=None): session = session or get_session() return model_query( context, models.ShareGroupSnapshot, session=session, project_only=True, read_deleted="no", ).filter_by( share_group_id=share_group_id, ).count() @require_context def count_share_groups_in_share_network(context, share_network_id, session=None): session = session or get_session() return (model_query( context, models.ShareGroup, session=session, project_only=True, read_deleted="no"). filter_by(share_network_id=share_network_id). 
count()) @require_context def count_share_group_snapshot_members_in_share(context, share_id, session=None): session = session or get_session() return model_query( context, models.ShareSnapshotInstance, session=session, project_only=True, read_deleted="no", ).join( models.ShareInstance, models.ShareInstance.id == ( models.ShareSnapshotInstance.share_instance_id), ).filter( models.ShareInstance.share_id == share_id, ).count() @require_context def _share_group_snapshot_get(context, share_group_snapshot_id, session=None): session = session or get_session() result = model_query( context, models.ShareGroupSnapshot, session=session, project_only=True, read_deleted='no', ).options( joinedload('share_group'), joinedload('share_group_snapshot_members'), ).filter_by( id=share_group_snapshot_id, ).first() if not result: raise exception.ShareGroupSnapshotNotFound( share_group_snapshot_id=share_group_snapshot_id) return result def _share_group_snapshot_get_all( context, project_id=None, detailed=True, filters=None, sort_key=None, sort_dir=None, session=None): session = session or get_session() if not sort_key: sort_key = 'created_at' if not sort_dir: sort_dir = 'desc' query = model_query( context, models.ShareGroupSnapshot, session=session, read_deleted='no', ).options( joinedload('share_group'), joinedload('share_group_snapshot_members'), ) # Apply filters if not filters: filters = {} no_key = 'key_is_absent' for k, v in filters.items(): filter_attr = getattr(models.ShareGroupSnapshot, k, no_key) if filter_attr == no_key: msg = _("Share group snapshots cannot be filtered using '%s' key.") raise exception.InvalidInput(reason=msg % k) query = query.filter(filter_attr == v) if project_id: query = query.filter( models.ShareGroupSnapshot.project_id == project_id) try: query = apply_sorting( models.ShareGroupSnapshot, query, sort_key, sort_dir) except AttributeError: msg = _("Wrong sorting key provided - '%s'.") % sort_key raise exception.InvalidInput(reason=msg) if detailed: return query.all() else: query = query.with_entities(models.ShareGroupSnapshot.id, models.ShareGroupSnapshot.name) values = [] for sgs_id, sgs_name in query.all(): values.append({"id": sgs_id, "name": sgs_name}) return values @require_context def share_group_snapshot_get(context, share_group_snapshot_id, session=None): session = session or get_session() return _share_group_snapshot_get( context, share_group_snapshot_id, session=session) @require_admin_context def share_group_snapshot_get_all( context, detailed=True, filters=None, sort_key=None, sort_dir=None): return _share_group_snapshot_get_all( context, filters=filters, detailed=detailed, sort_key=sort_key, sort_dir=sort_dir) @require_context def share_group_snapshot_get_all_by_project( context, project_id, detailed=True, filters=None, sort_key=None, sort_dir=None): authorize_project_context(context, project_id) return _share_group_snapshot_get_all( context, project_id=project_id, filters=filters, detailed=detailed, sort_key=sort_key, sort_dir=sort_dir, ) @require_context def share_group_snapshot_create(context, values): share_group_snapshot = models.ShareGroupSnapshot() if not values.get('id'): values['id'] = six.text_type(uuidutils.generate_uuid()) session = get_session() with session.begin(): share_group_snapshot.update(values) session.add(share_group_snapshot) return _share_group_snapshot_get( context, values['id'], session=session) @require_context def share_group_snapshot_update(context, share_group_snapshot_id, values): session = get_session() with session.begin(): 
share_group_ref = _share_group_snapshot_get( context, share_group_snapshot_id, session=session) share_group_ref.update(values) share_group_ref.save(session=session) return share_group_ref @require_admin_context def share_group_snapshot_destroy(context, share_group_snapshot_id): session = get_session() with session.begin(): share_group_snap_ref = _share_group_snapshot_get( context, share_group_snapshot_id, session=session) share_group_snap_ref.soft_delete(session) session.query(models.ShareSnapshotInstance).filter_by( share_group_snapshot_id=share_group_snapshot_id).soft_delete() @require_context def share_group_snapshot_members_get_all(context, share_group_snapshot_id, session=None): session = session or get_session() query = model_query( context, models.ShareSnapshotInstance, session=session, read_deleted='no', ).filter_by(share_group_snapshot_id=share_group_snapshot_id) return query.all() @require_context def share_group_snapshot_member_get(context, member_id, session=None): result = model_query( context, models.ShareSnapshotInstance, session=session, project_only=True, read_deleted='no', ).filter_by(id=member_id).first() if not result: raise exception.ShareGroupSnapshotMemberNotFound(member_id=member_id) return result @require_context def share_group_snapshot_member_create(context, values): member = models.ShareSnapshotInstance() if not values.get('id'): values['id'] = six.text_type(uuidutils.generate_uuid()) _change_size_to_instance_size(values) session = get_session() with session.begin(): member.update(values) session.add(member) return share_group_snapshot_member_get( context, values['id'], session=session) @require_context def share_group_snapshot_member_update(context, member_id, values): session = get_session() _change_size_to_instance_size(values) with session.begin(): member = share_group_snapshot_member_get( context, member_id, session=session) member.update(values) session.add(member) return share_group_snapshot_member_get( context, member_id, session=session) #################### @require_admin_context def share_group_type_create(context, values, projects=None): """Create a new share group type. 
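
    Share types may be associated by passing their names or IDs in a
    'share_types' list in the values dict; each entry is resolved and
    mapped to the new group type.
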
In order to pass in group specs, the values dict should contain a 'group_specs' key/value pair: {'group_specs' : {'k1': 'v1', 'k2': 'v2', ...}} """ values = ensure_model_dict_has_id(values) projects = projects or [] session = get_session() with session.begin(): try: values['group_specs'] = _metadata_refs( values.get('group_specs'), models.ShareGroupTypeSpecs) mappings = [] for item in values.get('share_types', []): share_type = share_type_get_by_name_or_id(context, item) if not share_type: raise exception.ShareTypeDoesNotExist(share_type=item) mapping = models.ShareGroupTypeShareTypeMapping() mapping['id'] = six.text_type(uuidutils.generate_uuid()) mapping['share_type_id'] = share_type['id'] mapping['share_group_type_id'] = values['id'] mappings.append(mapping) values['share_types'] = mappings share_group_type_ref = models.ShareGroupTypes() share_group_type_ref.update(values) share_group_type_ref.save(session=session) except db_exception.DBDuplicateEntry: raise exception.ShareGroupTypeExists(type_id=values['name']) except exception.ShareTypeDoesNotExist: raise except Exception as e: raise db_exception.DBError(e) for project in set(projects): access_ref = models.ShareGroupTypeProjects() access_ref.update({"share_group_type_id": share_group_type_ref.id, "project_id": project}) access_ref.save(session=session) return share_group_type_ref def _share_group_type_get_query(context, session=None, read_deleted=None, expected_fields=None): expected_fields = expected_fields or [] query = model_query( context, models.ShareGroupTypes, session=session, read_deleted=read_deleted ).options( joinedload('group_specs'), joinedload('share_types'), ) if 'projects' in expected_fields: query = query.options(joinedload('projects')) if not context.is_admin: the_filter = [models.ShareGroupTypes.is_public == true()] projects_attr = getattr(models.ShareGroupTypes, 'projects') the_filter.extend([ projects_attr.any(project_id=context.project_id) ]) query = query.filter(or_(*the_filter)) return query @require_context def share_group_type_get_all(context, inactive=False, filters=None): """Returns a dict describing all share group types with name as key.""" filters = filters or {} read_deleted = "yes" if inactive else "no" query = _share_group_type_get_query(context, read_deleted=read_deleted) if 'is_public' in filters and filters['is_public'] is not None: the_filter = [models.ShareGroupTypes.is_public == filters['is_public']] if filters['is_public'] and context.project_id is not None: projects_attr = getattr(models. 
ShareGroupTypes, 'projects') the_filter.extend([ projects_attr.any( project_id=context.project_id, deleted=0) ]) if len(the_filter) > 1: query = query.filter(or_(*the_filter)) else: query = query.filter(the_filter[0]) rows = query.order_by("name").all() result = {} for row in rows: result[row['name']] = _dict_with_specs(row, 'group_specs') return result def _share_group_type_get_id_from_share_group_type_query(context, type_id, session=None): return model_query( context, models.ShareGroupTypes, read_deleted="no", session=session, ).filter_by(id=type_id) def _share_group_type_get_id_from_share_group_type(context, type_id, session=None): result = _share_group_type_get_id_from_share_group_type_query( context, type_id, session=session).first() if not result: raise exception.ShareGroupTypeNotFound(type_id=type_id) return result['id'] @require_context def _share_group_type_get(context, type_id, session=None, inactive=False, expected_fields=None): expected_fields = expected_fields or [] read_deleted = "yes" if inactive else "no" result = _share_group_type_get_query( context, session, read_deleted, expected_fields, ).filter_by(id=type_id).first() if not result: raise exception.ShareGroupTypeNotFound(type_id=type_id) share_group_type = _dict_with_specs(result, 'group_specs') if 'projects' in expected_fields: share_group_type['projects'] = [ p['project_id'] for p in result['projects']] return share_group_type @require_context def share_group_type_get(context, type_id, inactive=False, expected_fields=None): """Return a dict describing specific share group type.""" return _share_group_type_get( context, type_id, session=None, inactive=inactive, expected_fields=expected_fields) @require_context def _share_group_type_get_by_name(context, name, session=None): result = model_query( context, models.ShareGroupTypes, session=session, ).options( joinedload('group_specs'), joinedload('share_types'), ).filter_by( name=name, ).first() if not result: raise exception.ShareGroupTypeNotFoundByName(type_name=name) return _dict_with_specs(result, 'group_specs') @require_context def share_group_type_get_by_name(context, name): """Return a dict describing specific share group type.""" return _share_group_type_get_by_name(context, name) @require_admin_context def share_group_type_destroy(context, type_id): session = get_session() with session.begin(): _share_group_type_get(context, type_id, session) results = model_query( context, models.ShareGroup, session=session, read_deleted="no", ).filter_by( share_group_type_id=type_id, ).count() if results: LOG.error('Share group type %s deletion failed, it in use.', type_id) raise exception.ShareGroupTypeInUse(type_id=type_id) model_query( context, models.ShareGroupTypeSpecs, session=session, ).filter_by( share_group_type_id=type_id, ).soft_delete() model_query( context, models.ShareGroupTypeShareTypeMapping, session=session ).filter_by( share_group_type_id=type_id, ).soft_delete() model_query( context, models.ShareGroupTypeProjects, session=session ).filter_by( share_group_type_id=type_id, ).soft_delete() model_query( context, models.ShareGroupTypes, session=session ).filter_by( id=type_id, ).soft_delete() def _share_group_type_access_query(context, session=None): return model_query(context, models.ShareGroupTypeProjects, session=session, read_deleted="no") @require_admin_context def share_group_type_access_get_all(context, type_id): share_group_type_id = _share_group_type_get_id_from_share_group_type( context, type_id) return _share_group_type_access_query(context).filter_by( 
share_group_type_id=share_group_type_id, ).all() @require_admin_context def share_group_type_access_add(context, type_id, project_id): """Add given tenant to the share group type access list.""" share_group_type_id = _share_group_type_get_id_from_share_group_type( context, type_id) access_ref = models.ShareGroupTypeProjects() access_ref.update({"share_group_type_id": share_group_type_id, "project_id": project_id}) session = get_session() with session.begin(): try: access_ref.save(session=session) except db_exception.DBDuplicateEntry: raise exception.ShareGroupTypeAccessExists( type_id=share_group_type_id, project_id=project_id) return access_ref @require_admin_context def share_group_type_access_remove(context, type_id, project_id): """Remove given tenant from the share group type access list.""" share_group_type_id = _share_group_type_get_id_from_share_group_type( context, type_id) count = _share_group_type_access_query(context).filter_by( share_group_type_id=share_group_type_id, ).filter_by( project_id=project_id, ).soft_delete( synchronize_session=False, ) if count == 0: raise exception.ShareGroupTypeAccessNotFound( type_id=share_group_type_id, project_id=project_id) def _share_group_type_specs_query(context, type_id, session=None): return model_query( context, models.ShareGroupTypeSpecs, session=session, read_deleted="no" ).filter_by( share_group_type_id=type_id, ).options( joinedload('share_group_type'), ) @require_context def share_group_type_specs_get(context, type_id): rows = _share_group_type_specs_query(context, type_id).all() result = {} for row in rows: result[row['key']] = row['value'] return result @require_context def share_group_type_specs_delete(context, type_id, key): session = get_session() with session.begin(): _share_group_type_specs_get_item(context, type_id, key, session) _share_group_type_specs_query( context, type_id, session, ).filter_by( key=key, ).soft_delete() @require_context def _share_group_type_specs_get_item(context, type_id, key, session=None): result = _share_group_type_specs_query( context, type_id, session=session, ).filter_by( key=key, ).options( joinedload('share_group_type'), ).first() if not result: raise exception.ShareGroupTypeSpecsNotFound( specs_key=key, type_id=type_id) return result @require_context @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) def share_group_type_specs_update_or_create(context, type_id, specs): session = get_session() with session.begin(): spec_ref = None for key, value in specs.items(): try: spec_ref = _share_group_type_specs_get_item( context, type_id, key, session) except exception.ShareGroupTypeSpecsNotFound: spec_ref = models.ShareGroupTypeSpecs() spec_ref.update({"key": key, "value": value, "share_group_type_id": type_id, "deleted": 0}) spec_ref.save(session=session) return specs ############################### @require_context def message_get(context, message_id): query = model_query(context, models.Message, read_deleted="no", project_only="yes") result = query.filter_by(id=message_id).first() if not result: raise exception.MessageNotFound(message_id=message_id) return result @require_context def message_get_all(context, filters=None, limit=None, offset=None, sort_key='created_at', sort_dir='desc'): """Retrieves all messages. If no sort parameters are specified then the returned messages are sorted by the 'created_at' key in descending order. 
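
    For example (illustrative values), passing
    filters={'resource_id': '<share id>'} returns only the messages that
    were recorded for that resource.
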
:param context: context to query under :param limit: maximum number of items to return :param offset: the number of items to skip from the marker or from the first element. :param sort_key: attributes by which results should be sorted. :param sort_dir: directions in which results should be sorted. :param filters: dictionary of filters; values that are in lists, tuples, or sets cause an 'IN' operation, while exact matching is used for other values, see exact_filter function for more information :returns: list of matching messages """ messages = models.Message session = get_session() with session.begin(): query = model_query(context, messages, read_deleted="no", project_only="yes") legal_filter_keys = ('request_id', 'resource_type', 'resource_id', 'action_id', 'detail_id', 'message_level', 'created_since', 'created_before') if not filters: filters = {} query = exact_filter(query, messages, filters, legal_filter_keys) query = utils.paginate_query(query, messages, limit, sort_key=sort_key, sort_dir=sort_dir, offset=offset) return query.all() @require_context def message_create(context, message_values): values = copy.deepcopy(message_values) message_ref = models.Message() if not values.get('id'): values['id'] = uuidutils.generate_uuid() message_ref.update(values) session = get_session() with session.begin(): session.add(message_ref) return message_get(context, message_ref['id']) @require_context def message_destroy(context, message): session = get_session() with session.begin(): (model_query(context, models.Message, session=session). filter_by(id=message.get('id')).soft_delete()) @require_admin_context def cleanup_expired_messages(context): session = get_session() now = timeutils.utcnow() with session.begin(): return session.query(models.Message).filter( models.Message.expires_at < now).delete() @require_context def backend_info_get(context, host): """Get hash info for given host.""" session = get_session() result = _backend_info_query(session, context, host) return result @require_context def backend_info_create(context, host, value): session = get_session() with session.begin(): info_ref = models.BackendInfo() info_ref.update({"host": host, "info_hash": value}) info_ref.save(session) return info_ref @require_context def backend_info_update(context, host, value=None, delete_existing=False): """Remove backend info for host name.""" session = get_session() with session.begin(): info_ref = _backend_info_query(session, context, host) if info_ref: if value: info_ref.update({"info_hash": value}) elif delete_existing and info_ref['deleted'] != 1: info_ref.update({"deleted": 1, "deleted_at": timeutils.utcnow()}) else: info_ref = models.BackendInfo() info_ref.update({"host": host, "info_hash": value}) info_ref.save(session) return info_ref def _backend_info_query(session, context, host, read_deleted=False): result = model_query( context, models.BackendInfo, session=session, read_deleted=read_deleted, ).filter_by( host=host, ).first() return result manila-10.0.0/manila/db/sqlalchemy/query.py0000664000175000017500000000301413656750227020575 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db.sqlalchemy import orm import sqlalchemy from manila.common import constants class Query(orm.Query): def soft_delete(self, synchronize_session='evaluate', update_status=False, status_field_name='status'): if update_status: setattr(self, status_field_name, constants.STATUS_DELETED) return super(Query, self).soft_delete(synchronize_session) def get_maker(engine, autocommit=True, expire_on_commit=False): """Return a SQLAlchemy sessionmaker using the given engine.""" return sqlalchemy.orm.sessionmaker(bind=engine, class_=orm.Session, autocommit=autocommit, expire_on_commit=expire_on_commit, query_cls=Query) # NOTE(uglide): Monkey patch oslo_db get_maker() function to use custom Query orm.get_maker = get_maker manila-10.0.0/manila/db/sqlalchemy/models.py0000664000175000017500000014261613656750227020727 0ustar zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Piston Cloud Computing, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ SQLAlchemy models for Manila data. 
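
All model classes inherit from ManilaBase, which mixes in the oslo.db
timestamp columns and the soft-delete support relied on by the DB API.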
""" from oslo_config import cfg from oslo_db.sqlalchemy import models from sqlalchemy import Column, Integer, String, schema from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import orm from sqlalchemy import ForeignKey, DateTime, Boolean, Enum from manila.common import constants CONF = cfg.CONF BASE = declarative_base() class ManilaBase(models.ModelBase, models.TimestampMixin, models.SoftDeleteMixin): """Base class for Manila Models.""" __table_args__ = {'mysql_engine': 'InnoDB'} metadata = None def to_dict(self): model_dict = {} for k, v in self.items(): if not issubclass(type(v), ManilaBase): model_dict[k] = v return model_dict def soft_delete(self, session, update_status=False, status_field_name='status'): """Mark this object as deleted.""" if update_status: setattr(self, status_field_name, constants.STATUS_DELETED) return super(ManilaBase, self).soft_delete(session) class Service(BASE, ManilaBase): """Represents a running service on a host.""" __tablename__ = 'services' id = Column(Integer, primary_key=True) host = Column(String(255)) # , ForeignKey('hosts.id')) binary = Column(String(255)) topic = Column(String(255)) report_count = Column(Integer, nullable=False, default=0) disabled = Column(Boolean, default=False) availability_zone_id = Column(String(36), ForeignKey('availability_zones.id'), nullable=True) availability_zone = orm.relationship( "AvailabilityZone", lazy='immediate', primaryjoin=( 'and_(' 'Service.availability_zone_id == ' 'AvailabilityZone.id, ' 'AvailabilityZone.deleted == \'False\')' ) ) class ManilaNode(BASE, ManilaBase): """Represents a running manila service on a host.""" __tablename__ = 'manila_nodes' id = Column(Integer, primary_key=True) service_id = Column(Integer, ForeignKey('services.id'), nullable=True) class Quota(BASE, ManilaBase): """Represents a single quota override for a project. If there is no row for a given project id and resource, then the default for the quota class is used. If there is no row for a given quota class and resource, then the default for the deployment is used. If the row is present but the hard limit is Null, then the resource is unlimited. """ __tablename__ = 'quotas' id = Column(Integer, primary_key=True) project_id = Column(String(255), index=True) resource = Column(String(255)) hard_limit = Column(Integer, nullable=True) class ProjectUserQuota(BASE, ManilaBase): """Represents a single quota override for a user with in a project.""" __tablename__ = 'project_user_quotas' id = Column(Integer, primary_key=True, nullable=False) project_id = Column(String(255), nullable=False) user_id = Column(String(255), nullable=False) resource = Column(String(255), nullable=False) hard_limit = Column(Integer) class ProjectShareTypeQuota(BASE, ManilaBase): """Represents a single quota override for a share type within a project.""" __tablename__ = 'project_share_type_quotas' id = Column(Integer, primary_key=True, nullable=False) project_id = Column(String(255), nullable=False) share_type_id = Column( String(36), ForeignKey('share_types.id'), nullable=False) resource = Column(String(255), nullable=False) hard_limit = Column(Integer) class QuotaClass(BASE, ManilaBase): """Represents a single quota override for a quota class. If there is no row for a given quota class and resource, then the default for the deployment is used. If the row is present but the hard limit is Null, then the resource is unlimited. 
""" __tablename__ = 'quota_classes' id = Column(Integer, primary_key=True) class_name = Column(String(255), index=True) resource = Column(String(255)) hard_limit = Column(Integer, nullable=True) class QuotaUsage(BASE, ManilaBase): """Represents the current usage for a given resource.""" __tablename__ = 'quota_usages' id = Column(Integer, primary_key=True) project_id = Column(String(255), index=True) user_id = Column(String(255)) share_type_id = Column(String(36)) resource = Column(String(255)) in_use = Column(Integer) reserved = Column(Integer) @property def total(self): return self.in_use + self.reserved until_refresh = Column(Integer, nullable=True) class Reservation(BASE, ManilaBase): """Represents a resource reservation for quotas.""" __tablename__ = 'reservations' id = Column(Integer, primary_key=True) uuid = Column(String(36), nullable=False) usage_id = Column(Integer, ForeignKey('quota_usages.id'), nullable=False) project_id = Column(String(255), index=True) user_id = Column(String(255)) share_type_id = Column(String(36)) resource = Column(String(255)) delta = Column(Integer) expire = Column(DateTime, nullable=False) class Share(BASE, ManilaBase): """Represents an NFS and CIFS shares.""" __tablename__ = 'shares' _extra_keys = ['name', 'export_location', 'export_locations', 'status', 'host', 'share_server_id', 'share_network_id', 'availability_zone', 'access_rules_status', 'share_type_id'] @property def name(self): return CONF.share_name_template % self.id @property def export_location(self): if len(self.instances) > 0: return self.instance.export_location @property def is_busy(self): # Make sure share is not busy, i.e., not part of a migration if self.task_state in constants.BUSY_TASK_STATES: return True return False @property def export_locations(self): # TODO(gouthamr): Return AZ specific export locations for replicated # shares. # NOTE(gouthamr): For a replicated share, export locations of the # 'active' instances are chosen, if 'available'. all_export_locations = [] select_instances = list(filter( lambda x: x['replica_state'] == constants.REPLICA_STATE_ACTIVE, self.instances)) or self.instances for instance in select_instances: if instance['status'] == constants.STATUS_AVAILABLE: for export_location in instance.export_locations: all_export_locations.append(export_location['path']) return all_export_locations def __getattr__(self, item): proxified_properties = ('status', 'host', 'share_server_id', 'share_network_id', 'availability_zone', 'share_type_id', 'share_type') if item in proxified_properties: return getattr(self.instance, item, None) raise AttributeError(item) @property def share_server_id(self): return self.__getattr__('share_server_id') @property def has_replicas(self): if len(self.instances) > 1: # NOTE(gouthamr): The 'primary' instance of a replicated share # has a 'replica_state' set to 'active'. Only the secondary replica # instances need to be regarded as true 'replicas' by users. replicas = (list(filter(lambda x: x['replica_state'] is not None, self.instances))) return len(replicas) > 1 return False @property def progress(self): if len(self.instances) > 0: return self.instance.progress @property def instance(self): # NOTE(gouthamr): The order of preference: status 'replication_change', # followed by 'available' and 'error'. If replicated share and # not undergoing a 'replication_change', only 'active' instances are # preferred. 
result = None if len(self.instances) > 0: order = (constants.STATUS_REVERTING, constants.STATUS_REPLICATION_CHANGE, constants.STATUS_MIGRATING, constants.STATUS_AVAILABLE, constants.STATUS_ERROR) other_statuses = ( [x['status'] for x in self.instances if x['status'] not in order and x['status'] not in constants.TRANSITIONAL_STATUSES] ) order = (order + tuple(other_statuses) + constants.TRANSITIONAL_STATUSES) sorted_instances = sorted( self.instances, key=lambda x: order.index(x['status'])) select_instances = sorted_instances if (select_instances[0]['status'] != constants.STATUS_REPLICATION_CHANGE): select_instances = ( list(filter(lambda x: x['replica_state'] == constants.REPLICA_STATE_ACTIVE, sorted_instances)) or sorted_instances ) result = select_instances[0] return result @property def access_rules_status(self): return get_access_rules_status(self.instances) id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') user_id = Column(String(255)) project_id = Column(String(255)) size = Column(Integer) display_name = Column(String(255)) display_description = Column(String(255)) snapshot_id = Column(String(36)) snapshot_support = Column(Boolean, default=True) create_share_from_snapshot_support = Column(Boolean, default=True) revert_to_snapshot_support = Column(Boolean, default=False) replication_type = Column(String(255), nullable=True) mount_snapshot_support = Column(Boolean, default=False) share_proto = Column(String(255)) is_public = Column(Boolean, default=False) share_group_id = Column(String(36), ForeignKey('share_groups.id'), nullable=True) source_share_group_snapshot_member_id = Column(String(36), nullable=True) task_state = Column(String(255)) instances = orm.relationship( "ShareInstance", lazy='subquery', primaryjoin=( 'and_(' 'Share.id == ShareInstance.share_id, ' 'ShareInstance.deleted == "False")' ), viewonly=True, join_depth=2, ) class ShareInstance(BASE, ManilaBase): __tablename__ = 'share_instances' _extra_keys = ['name', 'export_location', 'availability_zone', 'replica_state'] _proxified_properties = ('user_id', 'project_id', 'size', 'display_name', 'display_description', 'snapshot_id', 'share_proto', 'is_public', 'share_group_id', 'replication_type', 'source_share_group_snapshot_member_id', 'mount_snapshot_support') def set_share_data(self, share): for share_property in self._proxified_properties: setattr(self, share_property, share[share_property]) @property def name(self): return CONF.share_name_template % self.id @property def export_location(self): if len(self.export_locations) > 0: return self.export_locations[0]['path'] @property def availability_zone(self): if self._availability_zone: return self._availability_zone['name'] id = Column(String(36), primary_key=True) share_id = Column(String(36), ForeignKey('shares.id')) deleted = Column(String(36), default='False') host = Column(String(255)) status = Column(String(255)) progress = Column(String(32)) ACCESS_STATUS_PRIORITIES = { constants.STATUS_ACTIVE: 0, constants.SHARE_INSTANCE_RULES_SYNCING: 1, constants.SHARE_INSTANCE_RULES_ERROR: 2, } access_rules_status = Column(Enum(constants.STATUS_ACTIVE, constants.SHARE_INSTANCE_RULES_SYNCING, constants.SHARE_INSTANCE_RULES_ERROR), default=constants.STATUS_ACTIVE) scheduled_at = Column(DateTime) launched_at = Column(DateTime) terminated_at = Column(DateTime) replica_state = Column(String(255), nullable=True) cast_rules_to_readonly = Column(Boolean, default=False, nullable=False) share_type_id = Column(String(36), ForeignKey('share_types.id'), 
nullable=True) availability_zone_id = Column(String(36), ForeignKey('availability_zones.id'), nullable=True) _availability_zone = orm.relationship( "AvailabilityZone", lazy='subquery', foreign_keys=availability_zone_id, primaryjoin=( 'and_(' 'ShareInstance.availability_zone_id == ' 'AvailabilityZone.id, ' 'AvailabilityZone.deleted == \'False\')' ) ) export_locations = orm.relationship( "ShareInstanceExportLocations", lazy='joined', backref=orm.backref('share_instance', lazy='joined'), primaryjoin=( 'and_(' 'ShareInstance.id == ' 'ShareInstanceExportLocations.share_instance_id, ' 'ShareInstanceExportLocations.deleted == 0)' ) ) share_network_id = Column(String(36), ForeignKey('share_networks.id'), nullable=True) share_server_id = Column(String(36), ForeignKey('share_servers.id'), nullable=True) share_type = orm.relationship( "ShareTypes", lazy='subquery', foreign_keys=share_type_id, primaryjoin='and_(' 'ShareInstance.share_type_id == ShareTypes.id, ' 'ShareTypes.deleted == "False")') class ShareInstanceExportLocations(BASE, ManilaBase): """Represents export locations of share instances.""" __tablename__ = 'share_instance_export_locations' _extra_keys = ['el_metadata', ] @property def el_metadata(self): el_metadata = {} for meta in self._el_metadata_bare: # pylint: disable=no-member el_metadata[meta['key']] = meta['value'] return el_metadata @property def replica_state(self): return self.share_instance['replica_state'] id = Column(Integer, primary_key=True) uuid = Column(String(36), nullable=False, unique=True) share_instance_id = Column( String(36), ForeignKey('share_instances.id'), nullable=False) path = Column(String(2000)) is_admin_only = Column(Boolean, default=False, nullable=False) class ShareInstanceExportLocationsMetadata(BASE, ManilaBase): """Represents export location metadata of share instances.""" __tablename__ = "share_instance_export_locations_metadata" _extra_keys = ['export_location_uuid', ] id = Column(Integer, primary_key=True) export_location_id = Column( Integer, ForeignKey("share_instance_export_locations.id"), nullable=False) key = Column(String(255), nullable=False) value = Column(String(1023), nullable=False) export_location = orm.relationship( ShareInstanceExportLocations, backref="_el_metadata_bare", foreign_keys=export_location_id, lazy='immediate', primaryjoin="and_(" "%(cls_name)s.export_location_id == " "ShareInstanceExportLocations.id," "%(cls_name)s.deleted == 0)" % { "cls_name": "ShareInstanceExportLocationsMetadata"}) @property def export_location_uuid(self): return self.export_location.uuid # pylint: disable=no-member class ShareTypes(BASE, ManilaBase): """Represent possible share_types of volumes offered.""" __tablename__ = "share_types" id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') name = Column(String(255)) description = Column(String(255)) is_public = Column(Boolean, default=True) class ShareTypeProjects(BASE, ManilaBase): """Represent projects associated share_types.""" __tablename__ = "share_type_projects" __table_args__ = (schema.UniqueConstraint( "share_type_id", "project_id", "deleted", name="uniq_share_type_projects0share_type_id0project_id0deleted"), ) id = Column(Integer, primary_key=True) share_type_id = Column(Integer, ForeignKey('share_types.id'), nullable=False) project_id = Column(String(255)) share_type = orm.relationship( ShareTypes, backref="projects", foreign_keys=share_type_id, primaryjoin='and_(' 'ShareTypeProjects.share_type_id == ShareTypes.id,' 'ShareTypeProjects.deleted == 0)') class 
ShareTypeExtraSpecs(BASE, ManilaBase): """Represents additional specs as key/value pairs for a share_type.""" __tablename__ = 'share_type_extra_specs' id = Column(Integer, primary_key=True) key = Column("spec_key", String(255)) value = Column("spec_value", String(255)) share_type_id = Column(String(36), ForeignKey('share_types.id'), nullable=False) share_type = orm.relationship( ShareTypes, backref="extra_specs", foreign_keys=share_type_id, primaryjoin='and_(' 'ShareTypeExtraSpecs.share_type_id == ShareTypes.id,' 'ShareTypeExtraSpecs.deleted == 0)' ) class ShareMetadata(BASE, ManilaBase): """Represents a metadata key/value pair for a share.""" __tablename__ = 'share_metadata' id = Column(Integer, primary_key=True) key = Column(String(255), nullable=False) value = Column(String(1023), nullable=False) share_id = Column(String(36), ForeignKey('shares.id'), nullable=False) share = orm.relationship(Share, backref="share_metadata", foreign_keys=share_id, primaryjoin='and_(' 'ShareMetadata.share_id == Share.id,' 'ShareMetadata.deleted == 0)') class ShareAccessMapping(BASE, ManilaBase): """Represents access to share.""" __tablename__ = 'share_access_map' id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') share_id = Column(String(36), ForeignKey('shares.id')) access_type = Column(String(255)) access_to = Column(String(255)) access_key = Column(String(255), nullable=True) access_level = Column(Enum(*constants.ACCESS_LEVELS), default=constants.ACCESS_LEVEL_RW) @property def state(self): """Get the aggregated 'state' from all the instance mapping states. An access rule is supposed to be truly 'active' when it has been applied across all of the share instances of the parent share object. """ return get_aggregated_access_rules_state(self.instance_mappings) instance_mappings = orm.relationship( "ShareInstanceAccessMapping", lazy='immediate', primaryjoin=( 'and_(' 'ShareAccessMapping.id == ' 'ShareInstanceAccessMapping.access_id, ' 'ShareInstanceAccessMapping.deleted == "False")' ) ) class ShareAccessRulesMetadata(BASE, ManilaBase): """Represents a metadata key/value pair for a share access rule.""" __tablename__ = 'share_access_rules_metadata' id = Column(Integer, primary_key=True) deleted = Column(String(36), default='False') key = Column(String(255), nullable=False) value = Column(String(1023), nullable=False) access_id = Column(String(36), ForeignKey('share_access_map.id'), nullable=False) access = orm.relationship( ShareAccessMapping, backref="share_access_rules_metadata", foreign_keys=access_id, lazy='immediate', primaryjoin='and_(' 'ShareAccessRulesMetadata.access_id == ShareAccessMapping.id,' 'ShareAccessRulesMetadata.deleted == "False")') class ShareInstanceAccessMapping(BASE, ManilaBase): """Represents access to individual share instances.""" __tablename__ = 'share_instance_access_map' _proxified_properties = ('share_id', 'access_type', 'access_key', 'access_to', 'access_level') def set_share_access_data(self, share_access): for share_access_attr in self._proxified_properties: setattr(self, share_access_attr, share_access[share_access_attr]) id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') share_instance_id = Column(String(36), ForeignKey('share_instances.id')) access_id = Column(String(36), ForeignKey('share_access_map.id')) state = Column(String(255), default=constants.ACCESS_STATE_QUEUED_TO_APPLY) instance = orm.relationship( "ShareInstance", lazy='immediate', primaryjoin=( 'and_(' 
'ShareInstanceAccessMapping.share_instance_id == ' 'ShareInstance.id, ' 'ShareInstanceAccessMapping.deleted == "False")' ) ) class ShareSnapshot(BASE, ManilaBase): """Represents a snapshot of a share.""" __tablename__ = 'share_snapshots' _extra_keys = ['name', 'share_name', 'status', 'progress', 'provider_location', 'aggregate_status'] def __getattr__(self, item): proxified_properties = ('status', 'progress', 'provider_location') if item in proxified_properties: return getattr(self.instance, item, None) raise AttributeError(item) @property def export_locations(self): # TODO(gouthamr): Return AZ specific export locations for replicated # snapshots. # NOTE(gouthamr): For a replicated snapshot, export locations of the # 'active' instances are chosen, if 'available'. all_export_locations = [] select_instances = list(filter( lambda x: (x['share_instance']['replica_state'] == constants.REPLICA_STATE_ACTIVE), self.instances)) or self.instances for instance in select_instances: if instance['status'] == constants.STATUS_AVAILABLE: for export_location in instance.export_locations: all_export_locations.append(export_location) return all_export_locations @property def name(self): return CONF.share_snapshot_name_template % self.id @property def share_name(self): return CONF.share_name_template % self.share_id @property def instance(self): result = None if len(self.instances) > 0: def qualified_replica(x): preferred_statuses = (constants.REPLICA_STATE_ACTIVE,) return x['replica_state'] in preferred_statuses replica_snapshots = list(filter( lambda x: qualified_replica(x.share_instance), self.instances)) migrating_snapshots = list(filter( lambda x: x.share_instance['status'] == constants.STATUS_MIGRATING, self.instances)) snapshot_instances = (replica_snapshots or migrating_snapshots or self.instances) result = snapshot_instances[0] return result @property def aggregate_status(self): """Get the aggregated 'status' of all instances. A snapshot is supposed to be truly 'available' when it is available across all of the share instances of the parent share object. In case of replication, we only consider replicas (share instances) that are in 'in_sync' replica_state. 
""" def qualified_replica(x): preferred_statuses = (constants.REPLICA_STATE_ACTIVE, constants.REPLICA_STATE_IN_SYNC) return x['replica_state'] in preferred_statuses replica_snapshots = list(filter( lambda x: qualified_replica(x['share_instance']), self.instances)) if not replica_snapshots: return self.status order = (constants.STATUS_DELETING, constants.STATUS_CREATING, constants.STATUS_ERROR, constants.STATUS_MIGRATING, constants.STATUS_AVAILABLE) other_statuses = [x['status'] for x in self.instances if x['status'] not in order] order = (order + tuple(other_statuses)) sorted_instances = sorted( replica_snapshots, key=lambda x: order.index(x['status'])) return sorted_instances[0].status id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') user_id = Column(String(255)) project_id = Column(String(255)) share_id = Column(String(36)) size = Column(Integer) display_name = Column(String(255)) display_description = Column(String(255)) share_size = Column(Integer) share_proto = Column(String(255)) share = orm.relationship(Share, backref="snapshots", foreign_keys=share_id, primaryjoin='and_(' 'ShareSnapshot.share_id == Share.id,' 'ShareSnapshot.deleted == "False")') class ShareSnapshotInstance(BASE, ManilaBase): """Represents a snapshot of a share.""" __tablename__ = 'share_snapshot_instances' _extra_keys = ['name', 'share_id', 'share_name'] @property def name(self): return CONF.share_snapshot_name_template % self.id @property def share_name(self): return CONF.share_name_template % self.share_instance_id @property def share_id(self): # NOTE(u_glide): This property required for compatibility # with share drivers return self.share_instance_id @property def size(self): # NOTE(silvacarlose) for backwards compatibility if self.instance_size is None: return self.snapshot.size else: return self.instance_size id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') snapshot_id = Column(String(36), nullable=True) share_instance_id = Column( String(36), ForeignKey('share_instances.id'), nullable=False) status = Column(String(255)) progress = Column(String(255)) provider_location = Column(String(255)) share_proto = Column(String(255)) instance_size = Column('size', Integer) share_group_snapshot_id = Column(String(36), nullable=True) user_id = Column(String(255)) project_id = Column(String(255)) export_locations = orm.relationship( "ShareSnapshotInstanceExportLocation", lazy='immediate', primaryjoin=( 'and_(' 'ShareSnapshotInstance.id == ' 'ShareSnapshotInstanceExportLocation.share_snapshot_instance_id, ' 'ShareSnapshotInstanceExportLocation.deleted == "False")' ) ) share_instance = orm.relationship( ShareInstance, backref="snapshot_instances", lazy='immediate', primaryjoin=( 'and_(' 'ShareSnapshotInstance.share_instance_id == ShareInstance.id,' 'ShareSnapshotInstance.deleted == "False")') ) snapshot = orm.relationship( "ShareSnapshot", lazy="immediate", foreign_keys=snapshot_id, backref="instances", primaryjoin=( 'and_(' 'ShareSnapshot.id == ShareSnapshotInstance.snapshot_id, ' 'ShareSnapshotInstance.deleted == "False")' ), viewonly=True, join_depth=2, ) share_group_snapshot = orm.relationship( "ShareGroupSnapshot", lazy="immediate", foreign_keys=share_group_snapshot_id, backref="share_group_snapshot_members", primaryjoin=('ShareGroupSnapshot.id == ' 'ShareSnapshotInstance.share_group_snapshot_id'), viewonly=True, join_depth=2, ) class ShareSnapshotAccessMapping(BASE, ManilaBase): """Represents access to share snapshot.""" __tablename__ = 
'share_snapshot_access_map' @property def state(self): """Get the aggregated 'state' from all the instance mapping states. An access rule is supposed to be truly 'active' when it has been applied across all of the share snapshot instances of the parent share snapshot object. """ return get_aggregated_access_rules_state(self.instance_mappings) id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') share_snapshot_id = Column(String(36), ForeignKey('share_snapshots.id')) access_type = Column(String(255)) access_to = Column(String(255)) instance_mappings = orm.relationship( "ShareSnapshotInstanceAccessMapping", lazy='immediate', primaryjoin=( 'and_(' 'ShareSnapshotAccessMapping.id == ' 'ShareSnapshotInstanceAccessMapping.access_id, ' 'ShareSnapshotInstanceAccessMapping.deleted == "False")' ) ) class ShareSnapshotInstanceAccessMapping(BASE, ManilaBase): """Represents access to individual share snapshot instances.""" __tablename__ = 'share_snapshot_instance_access_map' _proxified_properties = ('share_snapshot_id', 'access_type', 'access_to') def set_snapshot_access_data(self, snapshot_access): for snapshot_access_attr in self._proxified_properties: setattr(self, snapshot_access_attr, snapshot_access[snapshot_access_attr]) id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') share_snapshot_instance_id = Column(String(36), ForeignKey( 'share_snapshot_instances.id')) access_id = Column(String(36), ForeignKey('share_snapshot_access_map.id')) state = Column(Enum(*constants.ACCESS_RULES_STATES), default=constants.ACCESS_STATE_QUEUED_TO_APPLY) instance = orm.relationship( "ShareSnapshotInstance", lazy='immediate', primaryjoin=( 'and_(' 'ShareSnapshotInstanceAccessMapping.share_snapshot_instance_id == ' 'ShareSnapshotInstance.id, ' 'ShareSnapshotInstanceAccessMapping.deleted == "False")' ) ) class ShareSnapshotInstanceExportLocation(BASE, ManilaBase): """Represents export locations of share snapshot instances.""" __tablename__ = 'share_snapshot_instance_export_locations' id = Column(String(36), primary_key=True) share_snapshot_instance_id = Column( String(36), ForeignKey('share_snapshot_instances.id'), nullable=False) path = Column(String(2000)) is_admin_only = Column(Boolean, default=False, nullable=False) deleted = Column(String(36), default='False') class SecurityService(BASE, ManilaBase): """Security service information for manila shares.""" __tablename__ = 'security_services' id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') project_id = Column(String(255), nullable=False) type = Column(String(32), nullable=False) dns_ip = Column(String(64), nullable=True) server = Column(String(255), nullable=True) domain = Column(String(255), nullable=True) user = Column(String(255), nullable=True) password = Column(String(255), nullable=True) name = Column(String(255), nullable=True) description = Column(String(255), nullable=True) ou = Column(String(255), nullable=True) class ShareNetwork(BASE, ManilaBase): """Represents network data used by share.""" __tablename__ = 'share_networks' id = Column(String(36), primary_key=True, nullable=False) deleted = Column(String(36), default='False') project_id = Column(String(255), nullable=False) user_id = Column(String(255), nullable=False) name = Column(String(255), nullable=True) description = Column(String(255), nullable=True) security_services = orm.relationship( "SecurityService", secondary="share_network_security_service_association", backref="share_networks", 
primaryjoin='and_(' 'ShareNetwork.id == ' 'ShareNetworkSecurityServiceAssociation.share_network_id,' 'ShareNetworkSecurityServiceAssociation.deleted == 0,' 'ShareNetwork.deleted == "False")', secondaryjoin='and_(' 'SecurityService.id == ' 'ShareNetworkSecurityServiceAssociation.security_service_id,' 'SecurityService.deleted == "False")') share_instances = orm.relationship( "ShareInstance", backref='share_network', primaryjoin='and_(' 'ShareNetwork.id == ShareInstance.share_network_id,' 'ShareInstance.deleted == "False")') share_network_subnets = orm.relationship( "ShareNetworkSubnet", lazy='joined', backref=orm.backref('share_network', lazy='joined'), primaryjoin='and_' '(ShareNetwork.id == ShareNetworkSubnet.share_network_id,' 'ShareNetworkSubnet.deleted == "False")') class ShareNetworkSubnet(BASE, ManilaBase): """Represents a share network subnet used by some resources.""" _extra_keys = ['availability_zone'] __tablename__ = 'share_network_subnets' id = Column(String(36), primary_key=True, nullable=False) neutron_net_id = Column(String(36), nullable=True) neutron_subnet_id = Column(String(36), nullable=True) network_type = Column(String(32), nullable=True) cidr = Column(String(64), nullable=True) segmentation_id = Column(Integer, nullable=True) gateway = Column(String(64), nullable=True) mtu = Column(Integer, nullable=True) deleted = Column(String(36), default='False') share_network_id = Column(String(36), ForeignKey('share_networks.id'), nullable=False) ip_version = Column(Integer, nullable=True) availability_zone_id = Column( String(36), ForeignKey('availability_zones.id'), nullable=True) share_servers = orm.relationship( "ShareServer", backref='share_network_subnet', lazy='immediate', primaryjoin='and_(ShareNetworkSubnet.id ' '== ShareServer.share_network_subnet_id,' 'ShareServer.deleted == "False")') _availability_zone = orm.relationship( "AvailabilityZone", lazy='immediate', foreign_keys=availability_zone_id, primaryjoin=( "and_(" "ShareNetworkSubnet.availability_zone_id == AvailabilityZone.id, " "AvailabilityZone.deleted == 'False')")) @property def availability_zone(self): if self._availability_zone: return self._availability_zone['name'] @property def is_default(self): return self.availability_zone_id is None @property def share_network_name(self): return self.share_network['name'] class ShareServer(BASE, ManilaBase): """Represents share server used by share.""" __tablename__ = 'share_servers' id = Column(String(36), primary_key=True, nullable=False) deleted = Column(String(36), default='False') share_network_subnet_id = Column( String(36), ForeignKey('share_network_subnets.id'), nullable=True) host = Column(String(255), nullable=False) is_auto_deletable = Column(Boolean, default=True) identifier = Column(String(255), nullable=True) status = Column(Enum( constants.STATUS_INACTIVE, constants.STATUS_ACTIVE, constants.STATUS_ERROR, constants.STATUS_DELETING, constants.STATUS_CREATING, constants.STATUS_DELETED, constants.STATUS_MANAGING, constants.STATUS_UNMANAGING, constants.STATUS_UNMANAGE_ERROR, constants.STATUS_MANAGE_ERROR), default=constants.STATUS_INACTIVE) network_allocations = orm.relationship( "NetworkAllocation", primaryjoin='and_(' 'ShareServer.id == NetworkAllocation.share_server_id,' 'NetworkAllocation.deleted == "False")') share_instances = orm.relationship( "ShareInstance", backref='share_server', primaryjoin='and_(' 'ShareServer.id == ShareInstance.share_server_id,' 'ShareInstance.deleted == "False")') share_groups = orm.relationship( "ShareGroup", 
backref='share_server', primaryjoin='and_(' 'ShareServer.id == ShareGroup.share_server_id,' 'ShareGroup.deleted == "False")') _backend_details = orm.relationship( "ShareServerBackendDetails", lazy='immediate', viewonly=True, primaryjoin='and_(' 'ShareServer.id == ' 'ShareServerBackendDetails.share_server_id, ' 'ShareServerBackendDetails.deleted == "False")') @property def backend_details(self): return {model['key']: model['value'] for model in self._backend_details} _extra_keys = ['backend_details'] class ShareServerBackendDetails(BASE, ManilaBase): """Represents a metadata key/value pair for a share server.""" __tablename__ = 'share_server_backend_details' deleted = Column(String(36), default='False') id = Column(Integer, primary_key=True) key = Column(String(255), nullable=False) value = Column(String(1023), nullable=False) share_server_id = Column(String(36), ForeignKey('share_servers.id'), nullable=False) class ShareNetworkSecurityServiceAssociation(BASE, ManilaBase): """Association table between share networks and security services.""" __tablename__ = 'share_network_security_service_association' id = Column(Integer, primary_key=True) share_network_id = Column(String(36), ForeignKey('share_networks.id'), nullable=False) security_service_id = Column(String(36), ForeignKey('security_services.id'), nullable=False) class NetworkAllocation(BASE, ManilaBase): """Represents network allocation data.""" __tablename__ = 'network_allocations' id = Column(String(36), primary_key=True, nullable=False) deleted = Column(String(36), default='False') label = Column(String(255), nullable=True) ip_address = Column(String(64), nullable=True) ip_version = Column(Integer, nullable=True) cidr = Column(String(64), nullable=True) gateway = Column(String(64), nullable=True) mtu = Column(Integer, nullable=True) network_type = Column(String(32), nullable=True) segmentation_id = Column(Integer, nullable=True) mac_address = Column(String(32), nullable=True) share_server_id = Column(String(36), ForeignKey('share_servers.id'), nullable=False) class DriverPrivateData(BASE, ManilaBase): """Represents private data as key-value pairs for a driver.""" __tablename__ = 'drivers_private_data' entity_uuid = Column(String(36), nullable=False, primary_key=True) key = Column(String(255), nullable=False, primary_key=True) value = Column(String(1023), nullable=False) class AvailabilityZone(BASE, ManilaBase): """Represents an availability zone.""" __tablename__ = 'availability_zones' id = Column(String(36), primary_key=True, nullable=False) deleted = Column(String(36), default='False') name = Column(String(255), nullable=False) class ShareGroupTypes(BASE, ManilaBase): """Represent possible share group types of shares offered.""" __tablename__ = "share_group_types" __table_args__ = ( schema.UniqueConstraint( "name", "deleted", name="uniq_share_group_type_name"), ) id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') name = Column(String(255)) is_public = Column(Boolean, default=True) class ShareGroup(BASE, ManilaBase): """Represents a share group.""" __tablename__ = 'share_groups' _extra_keys = [ 'availability_zone', ] id = Column(String(36), primary_key=True) user_id = Column(String(255), nullable=False) project_id = Column(String(255), nullable=False) deleted = Column(String(36), default='False') host = Column(String(255)) name = Column(String(255)) description = Column(String(255)) status = Column(String(255)) source_share_group_snapshot_id = Column(String(36))
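# NOTE(editor): illustrative aside, not part of the original model definitions.
# The relationship declarations in these models (including the ShareGroup ones
# that follow) bake the soft-delete convention into their primaryjoin: rows are
# joined only while ``deleted`` still holds its "not deleted" default, which is
# the string 'False' for String(36) columns and 0 for Integer-keyed tables.
# A minimal, hypothetical sketch of the same pattern outside manila:
#
#     children = orm.relationship(
#         "Child",
#         primaryjoin="and_(Parent.id == Child.parent_id, "
#                     "Child.deleted == 'False')")
#
# so soft-deleted children are silently excluded whenever the collection loads.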
share_network_id = Column( String(36), ForeignKey('share_networks.id'), nullable=True) share_server_id = Column( String(36), ForeignKey('share_servers.id'), nullable=True) share_group_type_id = Column( String(36), ForeignKey('share_group_types.id'), nullable=True) availability_zone_id = Column( String(36), ForeignKey('availability_zones.id'), nullable=True) consistent_snapshot_support = Column(Enum('pool', 'host'), default=None) share_group_type = orm.relationship( ShareGroupTypes, backref="share_groups", foreign_keys=share_group_type_id, primaryjoin="and_(" "ShareGroup.share_group_type_id ==" "ShareGroupTypes.id," "ShareGroup.deleted == 'False')") _availability_zone = orm.relationship( "AvailabilityZone", lazy='immediate', foreign_keys=availability_zone_id, primaryjoin=( "and_(" "ShareGroup.availability_zone_id == AvailabilityZone.id, " "AvailabilityZone.deleted == 'False')")) @property def availability_zone(self): if self._availability_zone: return self._availability_zone['name'] class ShareGroupTypeProjects(BASE, ManilaBase): """Represent projects associated share group types.""" __tablename__ = "share_group_type_projects" __table_args__ = (schema.UniqueConstraint( "share_group_type_id", "project_id", "deleted", name=("uniq_share_group_type_projects0share_group_type_id" "0project_id0deleted")), ) id = Column(Integer, primary_key=True) share_group_type_id = Column( String, ForeignKey('share_group_types.id'), nullable=False) project_id = Column(String(255)) share_group_type = orm.relationship( ShareGroupTypes, backref="projects", foreign_keys=share_group_type_id, primaryjoin='and_(' 'ShareGroupTypeProjects.share_group_type_id == ' 'ShareGroupTypes.id,' 'ShareGroupTypeProjects.deleted == 0)') class ShareGroupTypeSpecs(BASE, ManilaBase): """Represents additional specs for a share group type.""" __tablename__ = 'share_group_type_specs' id = Column(Integer, primary_key=True) key = Column("spec_key", String(255)) value = Column("spec_value", String(255)) share_group_type_id = Column( String(36), ForeignKey('share_group_types.id'), nullable=False) share_group_type = orm.relationship( ShareGroupTypes, backref="group_specs", foreign_keys=share_group_type_id, primaryjoin='and_(' 'ShareGroupTypeSpecs.share_group_type_id == ShareGroupTypes.id,' 'ShareGroupTypeSpecs.deleted == 0)' ) class ShareGroupSnapshot(BASE, ManilaBase): """Represents a share group snapshot.""" __tablename__ = 'share_group_snapshots' id = Column(String(36), primary_key=True) share_group_id = Column(String(36), ForeignKey('share_groups.id')) user_id = Column(String(255), nullable=False) project_id = Column(String(255), nullable=False) deleted = Column(String(36), default='False') name = Column(String(255)) description = Column(String(255)) status = Column(String(255)) share_group = orm.relationship( ShareGroup, backref=orm.backref("snapshots", lazy='joined'), foreign_keys=share_group_id, primaryjoin=('and_(' 'ShareGroupSnapshot.share_group_id == ShareGroup.id,' 'ShareGroupSnapshot.deleted == "False")') ) class ShareGroupTypeShareTypeMapping(BASE, ManilaBase): """Represents the share types supported by a share group type.""" __tablename__ = 'share_group_type_share_type_mappings' id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') share_group_type_id = Column( String(36), ForeignKey('share_group_types.id'), nullable=False) share_type_id = Column( String(36), ForeignKey('share_types.id'), nullable=False) share_group_type = orm.relationship( ShareGroupTypes, backref="share_types", 
foreign_keys=share_group_type_id, primaryjoin=('and_(' 'ShareGroupTypeShareTypeMapping.share_group_type_id ' '== ShareGroupTypes.id,' 'ShareGroupTypeShareTypeMapping.deleted == "False")') ) class ShareGroupShareTypeMapping(BASE, ManilaBase): """Represents the share types in a share group.""" __tablename__ = 'share_group_share_type_mappings' id = Column(String(36), primary_key=True) deleted = Column(String(36), default='False') share_group_id = Column( String(36), ForeignKey('share_groups.id'), nullable=False) share_type_id = Column( String(36), ForeignKey('share_types.id'), nullable=False) share_group = orm.relationship( ShareGroup, backref="share_types", foreign_keys=share_group_id, primaryjoin=('and_(' 'ShareGroupShareTypeMapping.share_group_id ' '== ShareGroup.id,' 'ShareGroupShareTypeMapping.deleted == "False")') ) class Message(BASE, ManilaBase): """Represents a user message. User messages show information about API operations to the API end-user. """ __tablename__ = 'messages' id = Column(String(36), primary_key=True, nullable=False) project_id = Column(String(255), nullable=False) # Info/Error/Warning. message_level = Column(String(255), nullable=False) request_id = Column(String(255), nullable=True) resource_type = Column(String(255)) # The uuid of the related resource. resource_id = Column(String(36), nullable=True) # Operation specific action ID, this ID is mapped # to a message in manila/message/message_field.py action_id = Column(String(10), nullable=False) # After this time the message may no longer exist. expires_at = Column(DateTime, nullable=True) # Message detail ID, this ID is mapped # to a message in manila/message/message_field.py detail_id = Column(String(10), nullable=True) deleted = Column(String(36), default='False') class BackendInfo(BASE, ManilaBase): """Represent Backend Info.""" __tablename__ = "backend_info" host = Column(String(255), primary_key=True) info_hash = Column(String(255)) def register_models(): """Register Models and create metadata. Called from manila.db.sqlalchemy.__init__ as part of loading the driver, it will never need to be called explicitly elsewhere unless the connection is lost and needs to be reestablished. 
""" from sqlalchemy import create_engine models = (Service, Share, ShareAccessMapping, ShareSnapshot ) engine = create_engine(CONF.database.connection, echo=False) for model in models: model.metadata.create_all(engine) def get_access_rules_status(instances): share_access_status = constants.STATUS_ACTIVE if len(instances) == 0: return share_access_status priorities = ShareInstance.ACCESS_STATUS_PRIORITIES for instance in instances: if instance['status'] != constants.STATUS_AVAILABLE: continue instance_access_status = instance['access_rules_status'] if priorities.get(instance_access_status) > priorities.get( share_access_status): share_access_status = instance_access_status if share_access_status == constants.SHARE_INSTANCE_RULES_ERROR: break return share_access_status def get_aggregated_access_rules_state(instance_mappings): state = None if len(instance_mappings) > 0: order = (constants.ACCESS_STATE_ERROR, constants.ACCESS_STATE_DENYING, constants.ACCESS_STATE_QUEUED_TO_DENY, constants.ACCESS_STATE_QUEUED_TO_APPLY, constants.ACCESS_STATE_APPLYING, constants.ACCESS_STATE_ACTIVE) sorted_instance_mappings = sorted( instance_mappings, key=lambda x: order.index(x['state'])) state = sorted_instance_mappings[0].state return state manila-10.0.0/manila/quota.py0000664000175000017500000013272413656750227016045 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Quotas for shares.""" import datetime from oslo_config import cfg from oslo_log import log from oslo_utils import importutils from oslo_utils import timeutils import six from manila import db from manila import exception LOG = log.getLogger(__name__) quota_opts = [ cfg.IntOpt('quota_shares', default=50, help='Number of shares allowed per project.'), cfg.IntOpt('quota_snapshots', default=50, help='Number of share snapshots allowed per project.'), cfg.IntOpt('quota_gigabytes', default=1000, help='Number of share gigabytes allowed per project.'), cfg.IntOpt('quota_snapshot_gigabytes', default=1000, help='Number of snapshot gigabytes allowed per project.'), cfg.IntOpt('quota_share_networks', default=10, help='Number of share-networks allowed per project.'), cfg.IntOpt('quota_share_replicas', default=100, help='Number of share-replicas allowed per project.'), cfg.IntOpt('quota_replica_gigabytes', default=1000, help='Number of replica gigabytes allowed per project.'), cfg.IntOpt('quota_share_groups', default=50, help='Number of share groups allowed.'), cfg.IntOpt('quota_share_group_snapshots', default=50, help='Number of share group snapshots allowed.'), cfg.IntOpt('reservation_expire', default=86400, help='Number of seconds until a reservation expires.'), cfg.IntOpt('until_refresh', default=0, help='Count of reservations until usage is refreshed.'), cfg.IntOpt('max_age', default=0, help='Number of seconds between subsequent usage refreshes.'), cfg.StrOpt('quota_driver', default='manila.quota.DbQuotaDriver', help='Default driver to use for quota checks.'), ] CONF = cfg.CONF CONF.register_opts(quota_opts) class DbQuotaDriver(object): """Database Quota driver. Driver to perform necessary checks to enforce quotas and obtain quota information. The default driver utilizes the local database. """ def get_by_class(self, context, quota_class, resource): """Get a specific quota by quota class.""" return db.quota_class_get(context, quota_class, resource) def get_defaults(self, context, resources): """Given a list of resources, retrieve the default quotas. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. """ quotas = {} default_quotas = db.quota_class_get_default(context) for resource in resources.values(): quotas[resource.name] = default_quotas.get(resource.name, resource.default) return quotas def get_class_quotas(self, context, resources, quota_class, defaults=True): """Retrieve quotas for a quota class. Given a list of resources, retrieve the quotas for the given quota class. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param quota_class: The name of the quota class to return quotas for. :param defaults: If True, the default value will be reported if there is no specific value for the resource. """ quotas = {} class_quotas = db.quota_class_get_all_by_name(context, quota_class) for resource in resources.values(): if defaults or resource.name in class_quotas: quotas[resource.name] = class_quotas.get(resource.name, resource.default) return quotas def _process_quotas(self, context, resources, project_id, quotas, quota_class=None, defaults=True, usages=None, remains=False): modified_quotas = {} # Get the quotas for the appropriate class. 
If the project ID # matches the one in the context, we use the quota_class from # the context, otherwise, we use the provided quota_class (if # any) if project_id == context.project_id: quota_class = context.quota_class if quota_class: class_quotas = db.quota_class_get_all_by_name(context, quota_class) else: class_quotas = {} default_quotas = self.get_defaults(context, resources) for resource in resources.values(): # Omit default/quota class values if not defaults and resource.name not in quotas: continue limit = quotas.get( resource.name, class_quotas.get(resource.name, default_quotas[resource.name])) modified_quotas[resource.name] = dict(limit=limit) # Include usages if desired. This is optional because one # internal consumer of this interface wants to access the # usages directly from inside a transaction. if usages: usage = usages.get(resource.name, {}) modified_quotas[resource.name].update( in_use=usage.get('in_use', 0), reserved=usage.get('reserved', 0), ) # Initialize remains quotas. if remains: modified_quotas[resource.name].update(remains=limit) if remains: all_quotas = db.quota_get_all(context, project_id) for quota in all_quotas: if quota.resource in modified_quotas: modified_quotas[quota.resource]['remains'] -= ( quota.hard_limit) return modified_quotas def get_project_quotas(self, context, resources, project_id, quota_class=None, defaults=True, usages=True, remains=False): """Retrieve quotas for project. Given a list of resources, retrieve the quotas for the given project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current in_use and reserved counts will also be returned. :param remains: If True, the current remains of the project will will be returned. """ project_quotas = db.quota_get_all_by_project(context, project_id) project_usages = None if usages: project_usages = db.quota_usage_get_all_by_project(context, project_id) return self._process_quotas(context, resources, project_id, project_quotas, quota_class, defaults=defaults, usages=project_usages, remains=remains) def get_user_quotas(self, context, resources, project_id, user_id, quota_class=None, defaults=True, usages=True): """Retrieve quotas for user and project. Given a list of resources, retrieve the quotas for the given user and project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current in_use and reserved counts will also be returned. 
""" user_quotas = db.quota_get_all_by_project_and_user( context, project_id, user_id) # Use the project quota for default user quota. proj_quotas = db.quota_get_all_by_project(context, project_id) for key, value in proj_quotas.items(): if key not in user_quotas.keys(): user_quotas[key] = value user_usages = None if usages: user_usages = db.quota_usage_get_all_by_project_and_user( context, project_id, user_id) return self._process_quotas(context, resources, project_id, user_quotas, quota_class, defaults=defaults, usages=user_usages) def get_share_type_quotas(self, context, resources, project_id, share_type_id, quota_class=None, defaults=True, usages=True): """Retrieve quotas for share_type and project. Given a list of resources, retrieve the quotas for the given share_type and project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The UUID of the project to return quotas for. :param share_type: UUID/name of a share type to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current in_use and reserved counts will also be returned. """ st_quotas = db.quota_get_all_by_project_and_share_type( context, project_id, share_type_id) # Use the project quota for default share_type quota. project_quotas = db.quota_get_all_by_project(context, project_id) for key, value in project_quotas.items(): if key not in st_quotas.keys(): st_quotas[key] = value st_usages = None if usages: st_usages = db.quota_usage_get_all_by_project_and_share_type( context, project_id, share_type_id) return self._process_quotas( context, resources, project_id, st_quotas, quota_class, defaults=defaults, usages=st_usages) def get_settable_quotas(self, context, resources, project_id, user_id=None, share_type_id=None): """Retrieve range of settable quotas. Given a list of resources, retrieve the range of settable quotas for the given user or project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. :param share_type_id: The UUID of the share_type to return quotas for. """ settable_quotas = {} project_quotas = self.get_project_quotas( context, resources, project_id, remains=True) if user_id or share_type_id: if user_id: subquotas = self.get_user_quotas( context, resources, project_id, user_id) else: subquotas = self.get_share_type_quotas( context, resources, project_id, share_type_id) for key, value in subquotas.items(): settable_quotas[key] = { "minimum": value['in_use'] + value['reserved'], "maximum": project_quotas[key]["limit"], } else: for key, value in project_quotas.items(): minimum = max( int(value['limit'] - value['remains']), int(value['in_use'] + value['reserved']) ) settable_quotas[key] = {"minimum": minimum, "maximum": -1} return settable_quotas def _get_quotas(self, context, resources, keys, has_sync, project_id=None, user_id=None, share_type_id=None): """Retrieve quotas for a resource. 
A helper method which retrieves the quotas for the specific resources identified by keys, and which apply to the current context. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param keys: A list of the desired quotas to retrieve. :param has_sync: If True, indicates that the resource must have a sync attribute; if False, indicates that the resource must NOT have a sync attribute. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. (Special case: user operates on resource, owned/created by different user) """ # Filter resources if has_sync: sync_filt = lambda x: hasattr(x, 'sync') # noqa: E731 else: sync_filt = lambda x: not hasattr(x, 'sync') # noqa: E731 desired = set(keys) sub_resources = {k: v for k, v in resources.items() if k in desired and sync_filt(v)} # Make sure we accounted for all of them... if len(keys) != len(sub_resources): unknown = desired - set(sub_resources.keys()) raise exception.QuotaResourceUnknown(unknown=sorted(unknown)) if user_id: # Grab and return the quotas (without usages) quotas = self.get_user_quotas(context, sub_resources, project_id, user_id, context.quota_class, usages=False) elif share_type_id: # Grab and return the quotas (without usages) quotas = self.get_share_type_quotas( context, sub_resources, project_id, share_type_id, context.quota_class, usages=False) else: # Grab and return the quotas (without usages) quotas = self.get_project_quotas(context, sub_resources, project_id, context.quota_class, usages=False) return {k: v['limit'] for k, v in quotas.items()} def reserve(self, context, resources, deltas, expire=None, project_id=None, user_id=None, share_type_id=None): """Check quotas and reserve resources. For counting quotas--those quotas for which there is a usage synchronization function--this method checks quotas against current usage and the desired deltas. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it does not have a usage synchronization function. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns a list of reservation UUIDs which were created. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param deltas: A dictionary of the proposed delta changes. :param expire: An optional parameter specifying an expiration time for the reservations. If it is a simple number, it is interpreted as a number of seconds and added to the current time; if it is a datetime.timedelta object, it will also be added to the current time. A datetime.datetime object will be interpreted as the absolute expiration time. If None is specified, the default expiration time set by --default-reservation-expire will be used (this value will be treated as a number of seconds). :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. 
(Special case: user operates on resource, owned/created by different user) """ # Set up the reservation expiration if expire is None: expire = CONF.reservation_expire if isinstance(expire, six.integer_types): expire = datetime.timedelta(seconds=expire) if isinstance(expire, datetime.timedelta): expire = timeutils.utcnow() + expire if not isinstance(expire, datetime.datetime): raise exception.InvalidReservationExpiration(expire=expire) # If project_id is None, then we use the project_id in context if project_id is None: project_id = context.project_id # If user_id is None, then we use the user_id in context if user_id is None: user_id = context.user_id # Get the applicable quotas. # NOTE(Vek): We're not worried about races at this point. # Yes, the admin may be in the process of reducing # quotas, but that's a pretty rare thing. quotas = self._get_quotas( context, resources, deltas, has_sync=True, project_id=project_id) user_quotas = self._get_quotas( context, resources, deltas, has_sync=True, project_id=project_id, user_id=user_id) if share_type_id: share_type_quotas = self._get_quotas( context, resources, deltas, has_sync=True, project_id=project_id, share_type_id=share_type_id) else: share_type_quotas = {} # NOTE(Vek): Most of the work here has to be done in the DB # API, because we have to do it in a transaction, # which means access to the session. Since the # session isn't available outside the DBAPI, we # have to do the work there. return db.quota_reserve( context, resources, quotas, user_quotas, share_type_quotas, deltas, expire, CONF.until_refresh, CONF.max_age, project_id=project_id, user_id=user_id, share_type_id=share_type_id) def commit(self, context, reservations, project_id=None, user_id=None, share_type_id=None): """Commit reservations. :param context: The request context, for access checks. :param reservations: A list of the reservation UUIDs, as returned by the reserve() method. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. (Special case: user operates on resource, owned/created by different user) """ # If project_id is None, then we use the project_id in context if project_id is None: project_id = context.project_id # If user_id is None, then we use the user_id in context if user_id is None: user_id = context.user_id db.reservation_commit( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_type_id) def rollback(self, context, reservations, project_id=None, user_id=None, share_type_id=None): """Roll back reservations. :param context: The request context, for access checks. :param reservations: A list of the reservation UUIDs, as returned by the reserve() method. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. (Special case: user operates on resource, owned/created by different user) """ # If project_id is None, then we use the project_id in context if project_id is None: project_id = context.project_id # If user_id is None, then we use the user_id in context if user_id is None: user_id = context.user_id db.reservation_rollback( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_type_id) def usage_reset(self, context, resources): """Reset usage records. 
Reset the usage records for a particular user on a list of resources. This will force that user's usage records to be refreshed the next time a reservation is made. Note: this does not affect the currently outstanding reservations the user has; those reservations must be committed or rolled back (or expired). :param context: The request context, for access checks. :param resources: A list of the resource names for which the usage must be reset. """ # We need an elevated context for the calls to # quota_usage_update() elevated = context.elevated() for resource in resources: try: # Reset the usage to -1, which will force it to be # refreshed db.quota_usage_update(elevated, context.project_id, context.user_id, resource, in_use=-1) except exception.QuotaUsageNotFound: # That means it'll be refreshed anyway pass def destroy_all_by_project(self, context, project_id): """Destroy metadata associated with a project. Destroy all quotas, usages, and reservations associated with a project. :param context: The request context, for access checks. :param project_id: The ID of the project being deleted. """ db.quota_destroy_all_by_project(context, project_id) def destroy_all_by_project_and_user(self, context, project_id, user_id): """Destroy metadata associated with a project and user. Destroy all quotas, usages, and reservations associated with a project and user. :param context: The request context, for access checks. :param project_id: The ID of the project being deleted. :param user_id: The ID of the user being deleted. """ db.quota_destroy_all_by_project_and_user(context, project_id, user_id) def destroy_all_by_project_and_share_type(self, context, project_id, share_type_id): """Destroy metadata associated with a project and share_type. Destroy all quotas, usages, and reservations associated with a project and share_type. :param context: The request context, for access checks. :param project_id: The ID of the project. :param share_type_id: The UUID of the share type. """ db.quota_destroy_all_by_share_type(context, share_type_id, project_id=project_id) def expire(self, context): """Expire reservations. Explores all currently existing reservations and rolls back any that have expired. :param context: The request context, for access checks. """ db.reservation_expire(context) class BaseResource(object): """Describe a single resource for quota checking.""" def __init__(self, name, flag=None): """Initializes a Resource. :param name: The name of the resource, i.e., "shares". :param flag: The name of the flag or configuration option which specifies the default value of the quota for this resource. """ self.name = name self.flag = flag @property def default(self): """Return the default value of the quota.""" return CONF[self.flag] if self.flag else -1 class ReservableResource(BaseResource): """Describe a reservable resource.""" def __init__(self, name, sync, flag=None): """Initializes a ReservableResource. Reservable resources are those resources which directly correspond to objects in the database, i.e., shares, gigabytes, etc. A ReservableResource must be constructed with a usage synchronization function, which will be called to determine the current counts of one or more resources. The usage synchronization function will be passed three arguments: an admin context, the project ID, and an opaque session object, which should in turn be passed to the underlying database function. 
Synchronization functions should return a dictionary mapping resource names to the current in_use count for those resources; more than one resource and resource count may be returned. Note that synchronization functions may be associated with more than one ReservableResource. :param name: The name of the resource, i.e., "shares". :param sync: A callable which returns a dictionary to resynchronize the in_use count for one or more resources, as described above. :param flag: The name of the flag or configuration option which specifies the default value of the quota for this resource. """ super(ReservableResource, self).__init__(name, flag=flag) self.sync = sync class AbsoluteResource(BaseResource): """Describe a non-reservable resource.""" pass class CountableResource(AbsoluteResource): """Describe a countable resource. Describe a resource where the counts aren't based solely on the project ID. """ def __init__(self, name, count, flag=None): """Initializes a CountableResource. Countable resources are those resources which directly correspond to objects in the database, i.e., shares, gigabytes, etc., but for which a count by project ID is inappropriate. A CountableResource must be constructed with a counting function, which will be called to determine the current counts of the resource. The counting function will be passed the context, along with the extra positional and keyword arguments that are passed to Quota.count(). It should return an integer specifying the count. Note that this counting is not performed in a transaction-safe manner. This resource class is a temporary measure to provide required functionality, until a better approach to solving this problem can be evolved. :param name: The name of the resource, i.e., "shares". :param count: A callable which returns the count of the resource. The arguments passed are as described above. :param flag: The name of the flag or configuration option which specifies the default value of the quota for this resource. """ super(CountableResource, self).__init__(name, flag=flag) self.count = count class QuotaEngine(object): """Represent the set of recognized quotas.""" def __init__(self, quota_driver_class=None): """Initialize a Quota object.""" self._resources = {} self._driver_cls = quota_driver_class self.__driver = None @property def _driver(self): if self.__driver: return self.__driver if not self._driver_cls: self._driver_cls = CONF.quota_driver if isinstance(self._driver_cls, six.string_types): self._driver_cls = importutils.import_object(self._driver_cls) self.__driver = self._driver_cls return self.__driver def __contains__(self, resource): return resource in self._resources def register_resource(self, resource): """Register a resource.""" self._resources[resource.name] = resource def register_resources(self, resources): """Register a list of resources.""" for resource in resources: self.register_resource(resource) def get_by_class(self, context, quota_class, resource): """Get a specific quota by quota class.""" return self._driver.get_by_class(context, quota_class, resource) def get_defaults(self, context): """Retrieve the default quotas. :param context: The request context, for access checks. """ return self._driver.get_defaults(context, self._resources) def get_class_quotas(self, context, quota_class, defaults=True): """Retrieve the quotas for the given quota class. :param context: The request context, for access checks. :param quota_class: The name of the quota class to return quotas for. 
:param defaults: If True, the default value will be reported if there is no specific value for the resource. """ return self._driver.get_class_quotas(context, self._resources, quota_class, defaults=defaults) def get_user_quotas(self, context, project_id, user_id, quota_class=None, defaults=True, usages=True): """Retrieve the quotas for the given user and project. :param context: The request context, for access checks. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current in_use and reserved counts will also be returned. """ return self._driver.get_user_quotas(context, self._resources, project_id, user_id, quota_class=quota_class, defaults=defaults, usages=usages) def get_share_type_quotas(self, context, project_id, share_type_id, quota_class=None, defaults=True, usages=True): """Retrieve the quotas for the given share type and project. :param context: The request context, for access checks. :param project_id: The ID of the project to return quotas for. :param share_type_id: The UUID of the share type to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current in_use and reserved counts will also be returned. """ return self._driver.get_share_type_quotas( context, self._resources, project_id, share_type_id, quota_class=quota_class, defaults=defaults, usages=usages) def get_project_quotas(self, context, project_id, quota_class=None, defaults=True, usages=True, remains=False): """Retrieve the quotas for the given project. :param context: The request context, for access checks. :param project_id: The ID of the project to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current in_use and reserved counts will also be returned. :param remains: If True, the current remains of the project will be returned. """ return self._driver.get_project_quotas(context, self._resources, project_id, quota_class=quota_class, defaults=defaults, usages=usages, remains=remains) def get_settable_quotas(self, context, project_id, user_id=None, share_type_id=None): """Get settable quotas. Given a list of resources, retrieve the range of settable quotas for the given user or project. :param context: The request context, for access checks. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. :param share_type_id: The UUID of the share_type to return quotas for.
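For example (illustrative values only): assuming a hypothetical project whose 'shares' limit is 50, with 10 shares in use, none reserved, and no per-user or per-share-type quotas set, the project-level result would include {'shares': {'minimum': 10, 'maximum': -1}}; when user_id or share_type_id is supplied, 'maximum' is instead capped at the parent project limit (50 here) rather than -1.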
""" return self._driver.get_settable_quotas( context, self._resources, project_id, user_id=user_id, share_type_id=share_type_id) def count(self, context, resource, *args, **kwargs): """Count a resource. For countable resources, invokes the count() function and returns its result. Arguments following the context and resource are passed directly to the count function declared by the resource. :param context: The request context, for access checks. :param resource: The name of the resource, as a string. """ # Get the resource res = self._resources.get(resource) if not res or not hasattr(res, 'count'): raise exception.QuotaResourceUnknown(unknown=[resource]) return res.count(context, *args, **kwargs) def reserve(self, context, expire=None, project_id=None, user_id=None, share_type_id=None, **deltas): """Check quotas and reserve resources. For counting quotas--those quotas for which there is a usage synchronization function--this method checks quotas against current usage and the desired deltas. The deltas are given as keyword arguments, and current usage and other reservations are factored into the quota check. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it does not have a usage synchronization function. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns a list of reservation UUIDs which were created. :param context: The request context, for access checks. :param expire: An optional parameter specifying an expiration time for the reservations. If it is a simple number, it is interpreted as a number of seconds and added to the current time; if it is a datetime.timedelta object, it will also be added to the current time. A datetime.datetime object will be interpreted as the absolute expiration time. If None is specified, the default expiration time set by --default-reservation-expire will be used (this value will be treated as a number of seconds). :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. """ reservations = self._driver.reserve( context, self._resources, deltas, expire=expire, project_id=project_id, user_id=user_id, share_type_id=share_type_id, ) LOG.debug("Created reservations %s", reservations) return reservations def commit(self, context, reservations, project_id=None, user_id=None, share_type_id=None): """Commit reservations. :param context: The request context, for access checks. :param reservations: A list of the reservation UUIDs, as returned by the reserve() method. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. """ try: self._driver.commit( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_type_id) except Exception: # NOTE(Vek): Ignoring exceptions here is safe, because the # usage resynchronization and the reservation expiration # mechanisms will resolve the issue. The exception is # logged, however, because this is less than optimal. LOG.exception("Failed to commit reservations %s", reservations) return LOG.debug("Committed reservations %s", reservations) def rollback(self, context, reservations, project_id=None, user_id=None, share_type_id=None): """Roll back reservations. :param context: The request context, for access checks. :param reservations: A list of the reservation UUIDs, as returned by the reserve() method. 
:param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. """ try: self._driver.rollback( context, reservations, project_id=project_id, user_id=user_id, share_type_id=share_type_id) except Exception: # NOTE(Vek): Ignoring exceptions here is safe, because the # usage resynchronization and the reservation expiration # mechanisms will resolve the issue. The exception is # logged, however, because this is less than optimal. LOG.exception("Failed to roll back reservations %s", reservations) return LOG.debug("Rolled back reservations %s", reservations) def usage_reset(self, context, resources): """Reset usage records. Reset the usage records for a particular user on a list of resources. This will force that user's usage records to be refreshed the next time a reservation is made. Note: this does not affect the currently outstanding reservations the user has; those reservations must be committed or rolled back (or expired). :param context: The request context, for access checks. :param resources: A list of the resource names for which the usage must be reset. """ self._driver.usage_reset(context, resources) def destroy_all_by_project_and_user(self, context, project_id, user_id): """Destroy metadata associated with a project and user. Destroy all quotas, usages, and reservations associated with a project and user. :param context: The request context, for access checks. :param project_id: The ID of the project being deleted. :param user_id: The ID of the user being deleted. """ self._driver.destroy_all_by_project_and_user(context, project_id, user_id) def destroy_all_by_project_and_share_type(self, context, project_id, share_type_id): """Destroy metadata associated with a project and share_type. Destroy all quotas, usages, and reservations associated with a project and share_type. :param context: The request context, for access checks. :param project_id: The ID of the project. :param share_type_id: The UUID of the share_type. """ self._driver.destroy_all_by_project_and_share_type( context, project_id, share_type_id) def destroy_all_by_project(self, context, project_id): """Destroy metadata associated with a project. Destroy all quotas, usages, and reservations associated with a project. :param context: The request context, for access checks. :param project_id: The ID of the project being deleted. """ self._driver.destroy_all_by_project(context, project_id) def expire(self, context): """Expire reservations. Explores all currently existing reservations and rolls back any that have expired. :param context: The request context, for access checks. 
""" self._driver.expire(context) @property def resources(self): return sorted(self._resources.keys()) QUOTAS = QuotaEngine() resources = [ ReservableResource('shares', '_sync_shares', 'quota_shares'), ReservableResource('snapshots', '_sync_snapshots', 'quota_snapshots'), ReservableResource('gigabytes', '_sync_gigabytes', 'quota_gigabytes'), ReservableResource('snapshot_gigabytes', '_sync_snapshot_gigabytes', 'quota_snapshot_gigabytes'), ReservableResource('share_networks', '_sync_share_networks', 'quota_share_networks'), ReservableResource('share_groups', '_sync_share_groups', 'quota_share_groups'), ReservableResource('share_group_snapshots', '_sync_share_group_snapshots', 'quota_share_group_snapshots'), ReservableResource('share_replicas', '_sync_share_replicas', 'quota_share_replicas'), ReservableResource('replica_gigabytes', '_sync_replica_gigabytes', 'quota_replica_gigabytes'), ] QUOTAS.register_resources(resources) manila-10.0.0/manila/cmd/0000775000175000017500000000000013656750362015074 5ustar zuulzuul00000000000000manila-10.0.0/manila/cmd/__init__.py0000664000175000017500000000000013656750227017173 0ustar zuulzuul00000000000000manila-10.0.0/manila/cmd/share.py0000664000175000017500000000367713656750227016565 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2013 NetApp # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Starter script for manila Share.""" import eventlet eventlet.monkey_patch() import sys from oslo_config import cfg from oslo_log import log from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts from manila.common import config # Need to register global_opts # noqa from manila import service from manila import utils from manila import version CONF = cfg.CONF def main(): log.register_options(CONF) gmr_opts.set_defaults(CONF) CONF(sys.argv[1:], project='manila', version=version.version_string()) log.setup(CONF, "manila") utils.monkey_patch() gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) launcher = service.process_launcher() if CONF.enabled_share_backends: for backend in CONF.enabled_share_backends: host = "%s@%s" % (CONF.host, backend) server = service.Service.create(host=host, service_name=backend, binary='manila-share', coordination=True) launcher.launch_service(server) else: server = service.Service.create(binary='manila-share') launcher.launch_service(server) launcher.wait() if __name__ == '__main__': main() manila-10.0.0/manila/cmd/api.py0000664000175000017500000000324113656750227016217 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Starter script for manila OS API.""" import eventlet eventlet.monkey_patch() import sys from oslo_config import cfg from oslo_log import log from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts from manila.common import config # Need to register global_opts # noqa from manila import service from manila import utils from manila import version CONF = cfg.CONF def main(): log.register_options(CONF) gmr_opts.set_defaults(CONF) CONF(sys.argv[1:], project='manila', version=version.version_string()) config.verify_share_protocols() log.setup(CONF, "manila") utils.monkey_patch() gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) launcher = service.process_launcher() server = service.WSGIService('osapi_share') launcher.launch_service(server, workers=server.workers or 1) launcher.wait() if __name__ == '__main__': main() manila-10.0.0/manila/cmd/status.py0000664000175000017500000000341213656750227016771 0ustar zuulzuul00000000000000# Copyright (c) 2018 NEC, Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys from oslo_config import cfg from oslo_upgradecheck import upgradecheck from manila.i18n import _ class Checks(upgradecheck.UpgradeCommands): """Various upgrade checks should be added as separate methods in this class and added to _upgrade_checks tuple. """ def _check_placeholder(self): # This is just a placeholder for upgrade checks, it should be # removed when the actual checks are added return upgradecheck.Result(upgradecheck.Code.SUCCESS) # The format of the check functions is to return an # oslo_upgradecheck.upgradecheck.Result # object with the appropriate # oslo_upgradecheck.upgradecheck.Code and details set. # If the check hits warnings or failures then those should be stored # in the returned Result's "details" attribute. The # summary will be rolled up at the end of the check() method. _upgrade_checks = ( # In the future there should be some real checks added here (_('Placeholder'), _check_placeholder), ) def main(): return upgradecheck.main( cfg.CONF, project='manila', upgrade_command=Checks()) if __name__ == '__main__': sys.exit(main()) manila-10.0.0/manila/cmd/manage.py0000664000175000017500000004060513656750227016703 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Interactive shell based on Django: # # Copyright (c) 2005, the Lawrence Journal-World # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # # 1. Redistributions of source code must retain the above copyright notice, # this list of conditions and the following disclaimer. # # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # 3. Neither the name of Django nor the names of its contributors may be # used to endorse or promote products derived from this software without # specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ CLI interface for manila management. """ from __future__ import print_function import os import sys from oslo_config import cfg from oslo_log import log from manila.common import config # Need to register global_opts # noqa from manila import context from manila import db from manila.db import migration from manila.i18n import _ from manila import utils from manila import version CONF = cfg.CONF HOST_UPDATE_HELP_MSG = ("A fully qualified host string is of the format " "'HostA@BackendB#PoolC'. Provide only the host name " "(ex: 'HostA') to update the hostname part of " "the host string. Provide only the " "host name and backend name (ex: 'HostA@BackendB') to " "update the host and backend names.") HOST_UPDATE_CURRENT_HOST_HELP = ("Current share host name. %s" % HOST_UPDATE_HELP_MSG) HOST_UPDATE_NEW_HOST_HELP = "New share host name. %s" % HOST_UPDATE_HELP_MSG # Decorators for actions def args(*args, **kwargs): def _decorator(func): func.__dict__.setdefault('args', []).insert(0, (args, kwargs)) return func return _decorator class ShellCommands(object): def bpython(self): """Runs a bpython shell. Falls back to Ipython/python shell if unavailable """ self.run('bpython') def ipython(self): """Runs an Ipython shell. Falls back to Python shell if unavailable """ self.run('ipython') def python(self): """Runs a python shell. 
Falls back to Python shell if unavailable """ self.run('python') @args('--shell', dest="shell", metavar='', help='Python shell') def run(self, shell=None): """Runs a Python interactive interpreter.""" if not shell: shell = 'bpython' if shell == 'bpython': try: import bpython bpython.embed() except ImportError: shell = 'ipython' if shell == 'ipython': try: from IPython import embed embed() except ImportError: # Ipython < 0.11 try: import IPython # Explicitly pass an empty list as arguments, because # otherwise IPython would use sys.argv from this script. shell = IPython.Shell.IPShell(argv=[]) shell.mainloop() except ImportError: # no IPython module shell = 'python' if shell == 'python': import code try: # Try activating rlcompleter, because it's handy. import readline except ImportError: pass else: # We don't have to wrap the following import in a 'try', # because we already know 'readline' was imported successfully. import rlcompleter # noqa readline.parse_and_bind("tab:complete") code.interact() @args('--path', required=True, help='Script path') def script(self, path): """Runs the script from the specified path with flags set properly. arguments: path """ exec(compile(open(path).read(), path, 'exec'), locals(), globals()) class HostCommands(object): """List hosts.""" @args('zone', nargs='?', default=None, help='Availability Zone (default: %(default)s)') def list(self, zone=None): """Show a list of all physical hosts. Filter by zone. args: [zone] """ print("%-25s\t%-15s" % (_('host'), _('zone'))) ctxt = context.get_admin_context() services = db.service_get_all(ctxt) if zone: services = [ s for s in services if s['availability_zone']['name'] == zone] hosts = [] for srv in services: if not [h for h in hosts if h['host'] == srv['host']]: hosts.append(srv) for h in hosts: print("%-25s\t%-15s" % (h['host'], h['availability_zone']['name'])) class DbCommands(object): """Class for managing the database.""" def __init__(self): pass @args('version', nargs='?', default=None, help='Database version') def sync(self, version=None): """Sync the database up to the most recent version.""" return migration.upgrade(version) def version(self): """Print the current database version.""" print(migration.version()) # NOTE(imalinovskiy): # Manila init migration hardcoded here, # because alembic has strange behaviour: # downgrade base = downgrade from head(162a3e673105) -> base(162a3e673105) # = downgrade from 162a3e673105 -> (empty) [ERROR] # downgrade 162a3e673105 = downgrade from head(162a3e673105)->162a3e673105 # = do nothing [OK] @args('version', nargs='?', default='162a3e673105', help='Version to downgrade') def downgrade(self, version=None): """Downgrade database to the given version.""" return migration.downgrade(version) @args('--message', help='Revision message') @args('--autogenerate', help='Autogenerate migration from schema') def revision(self, message, autogenerate): """Generate new migration.""" return migration.revision(message, autogenerate) @args('version', nargs='?', default=None, help='Version to stamp version table with') def stamp(self, version=None): """Stamp the version table with the given version.""" return migration.stamp(version) @args('age_in_days', type=int, default=0, nargs='?', help='A non-negative integer, denoting the age of soft-deleted ' 'records in number of days. 
0 can be specified to purge all ' 'soft-deleted rows, default is %(default)d.') def purge(self, age_in_days): """Purge soft-deleted records older than a given age.""" age_in_days = int(age_in_days) if age_in_days < 0: print(_("Must supply a non-negative value for age.")) exit(1) ctxt = context.get_admin_context() db.purge_deleted_records(ctxt, age_in_days) class VersionCommands(object): """Class for exposing the codebase version.""" def list(self): print(version.version_string()) def __call__(self): self.list() class ConfigCommands(object): """Class for exposing the flags defined by flag_file(s).""" def list(self): for key, value in CONF.items(): if value is not None: print('%s = %s' % (key, value)) class GetLogCommands(object): """Get logging information.""" def errors(self): """Get all of the errors from the log files.""" error_found = 0 if CONF.log_dir: logs = [x for x in os.listdir(CONF.log_dir) if x.endswith('.log')] for file in logs: log_file = os.path.join(CONF.log_dir, file) lines = [line.strip() for line in open(log_file, "r")] lines.reverse() print_name = 0 for index, line in enumerate(lines): if line.find(" ERROR ") > 0: error_found += 1 if print_name == 0: print(log_file + ":-") print_name = 1 print("Line %d : %s" % (len(lines) - index, line)) if error_found == 0: print("No errors in logfiles!") @args('num_entries', nargs='?', type=int, default=10, help='Number of entries to list (default: %(default)d)') def syslog(self, num_entries=10): """Get of the manila syslog events.""" entries = int(num_entries) count = 0 log_file = '' if os.path.exists('/var/log/syslog'): log_file = '/var/log/syslog' elif os.path.exists('/var/log/messages'): log_file = '/var/log/messages' else: print("Unable to find system log file!") sys.exit(1) lines = [line.strip() for line in open(log_file, "r")] lines.reverse() print("Last %s manila syslog entries:-" % (entries)) for line in lines: if line.find("manila") > 0: count += 1 print("%s" % (line)) if count == entries: break if count == 0: print("No manila entries in syslog!") class ServiceCommands(object): """Methods for managing services.""" def list(self): """Show a list of all manila services.""" ctxt = context.get_admin_context() services = db.service_get_all(ctxt) print_format = "%-16s %-36s %-16s %-10s %-5s %-10s" print(print_format % ( _('Binary'), _('Host'), _('Zone'), _('Status'), _('State'), _('Updated At')) ) for svc in services: alive = utils.service_is_up(svc) art = ":-)" if alive else "XXX" status = 'enabled' if svc['disabled']: status = 'disabled' print(print_format % ( svc['binary'], svc['host'].partition('.')[0], svc['availability_zone']['name'], status, art, svc['updated_at'], )) class ShareCommands(object): @staticmethod def _validate_hosts(current_host, new_host): err = None if '@' in current_host: if '#' in current_host and '#' not in new_host: err = "%(chost)s specifies a pool but %(nhost)s does not." elif '@' not in new_host: err = "%(chost)s specifies a backend but %(nhost)s does not." if err: print(err % {'chost': current_host, 'nhost': new_host}) sys.exit(1) @args('--currenthost', required=True, help=HOST_UPDATE_CURRENT_HOST_HELP) @args('--newhost', required=True, help=HOST_UPDATE_NEW_HOST_HELP) @args('--force', required=False, type=bool, default=False, help="Ignore validations.") def update_host(self, current_host, new_host, force=False): """Modify the host name associated with a share. 
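        Illustrative invocation (placeholder host and backend names)::

            manila-manage share update_host \
                --currenthost HostA@BackendB --newhost HostA@BackendC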
Particularly to recover from cases where one has moved their Manila Share node, or modified their 'host' opt or their backend section name in the manila configuration file. """ if not force: self._validate_hosts(current_host, new_host) ctxt = context.get_admin_context() updated = db.share_instances_host_update(ctxt, current_host, new_host) print("Updated host of %(count)s share instances on %(chost)s " "to %(nhost)s." % {'count': updated, 'chost': current_host, 'nhost': new_host}) CATEGORIES = { 'config': ConfigCommands, 'db': DbCommands, 'host': HostCommands, 'logs': GetLogCommands, 'service': ServiceCommands, 'share': ShareCommands, 'shell': ShellCommands, 'version': VersionCommands } def methods_of(obj): """Get all callable methods of an object that don't start with underscore. Returns a list of tuples of the form (method_name, method). """ result = [] for i in dir(obj): if callable(getattr(obj, i)) and not i.startswith('_'): result.append((i, getattr(obj, i))) return result def add_command_parsers(subparsers): for category in CATEGORIES: command_object = CATEGORIES[category]() parser = subparsers.add_parser(category) parser.set_defaults(command_object=command_object) category_subparsers = parser.add_subparsers(dest='action') for (action, action_fn) in methods_of(command_object): parser = category_subparsers.add_parser(action) action_kwargs = [] for args, kwargs in getattr(action_fn, 'args', []): parser.add_argument(*args, **kwargs) parser.set_defaults(action_fn=action_fn) parser.set_defaults(action_kwargs=action_kwargs) category_opt = cfg.SubCommandOpt('category', title='Command categories', handler=add_command_parsers) def get_arg_string(args): arg = None if args[0] == '-': # (Note)zhiteng: args starts with CONF.oparser.prefix_chars # is optional args. Notice that cfg module takes care of # actual ArgParser so prefix_chars is always '-'. if args[1] == '-': # This is long optional arg arg = args[2:] else: arg = args[1:] else: arg = args return arg def fetch_func_args(func): fn_args = [] for args, kwargs in getattr(func, 'args', []): arg = get_arg_string(args[0]) fn_args.append(getattr(CONF.category, arg)) return fn_args def main(): """Parse options and call the appropriate class/method.""" CONF.register_cli_opt(category_opt) script_name = sys.argv[0] if len(sys.argv) < 2: print(_("\nOpenStack manila version: %(version)s\n") % {'version': version.version_string()}) print(script_name + " category action []") print(_("Available categories:")) for category in CATEGORIES: print("\t%s" % category) sys.exit(2) try: log.register_options(CONF) CONF(sys.argv[1:], project='manila', version=version.version_string()) log.setup(CONF, "manila") except cfg.ConfigFilesNotFoundError as e: cfg_files = e.config_files print(_("Failed to read configuration file(s): %s") % cfg_files) sys.exit(2) fn = CONF.category.action_fn fn_args = fetch_func_args(fn) fn(*fn_args) if __name__ == '__main__': main() manila-10.0.0/manila/cmd/data.py0000664000175000017500000000271313656750227016362 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2015, Hitachi Data Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Starter script for manila data copy service.""" import eventlet eventlet.monkey_patch() import sys from oslo_config import cfg from oslo_log import log from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts from manila.common import config # Need to register global_opts # noqa from manila import service from manila import utils from manila import version CONF = cfg.CONF def main(): log.register_options(CONF) gmr_opts.set_defaults(CONF) CONF(sys.argv[1:], project='manila', version=version.version_string()) log.setup(CONF, "manila") utils.monkey_patch() gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) server = service.Service.create(binary='manila-data') service.serve(server) service.wait() if __name__ == '__main__': main() manila-10.0.0/manila/cmd/scheduler.py0000664000175000017500000000316413656750227017430 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Starter script for manila Scheduler.""" import eventlet eventlet.monkey_patch() import sys from oslo_config import cfg from oslo_log import log from oslo_reports import guru_meditation_report as gmr from oslo_reports import opts as gmr_opts from manila.common import config # Need to register global_opts # noqa from manila import service from manila import utils from manila import version CONF = cfg.CONF def main(): log.register_options(CONF) gmr_opts.set_defaults(CONF) CONF(sys.argv[1:], project='manila', version=version.version_string()) log.setup(CONF, "manila") utils.monkey_patch() gmr.TextGuruMeditation.setup_autorun(version, conf=CONF) server = service.Service.create(binary='manila-scheduler', coordination=True) service.serve(server) service.wait() if __name__ == '__main__': main() manila-10.0.0/manila/context.py0000664000175000017500000001262113656750227016371 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack LLC. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
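# NOTE: a minimal usage sketch (illustrative only, not executed here) of the
# helpers defined below:
#
#     ctxt = RequestContext('some-user-id', 'some-project-id', roles=['member'])
#     admin_ctxt = ctxt.elevated()        # same request, with is_admin set
#     service_ctxt = get_admin_context()  # admin context with no user/project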
"""RequestContext: context for requests that persist through all of manila.""" import copy from oslo_context import context from oslo_utils import timeutils import six from manila.i18n import _ from manila import policy class RequestContext(context.RequestContext): """Security context and request information. Represents the user taking a given action within the system. """ def __init__(self, user_id, project_id, is_admin=None, read_deleted="no", roles=None, remote_address=None, timestamp=None, request_id=None, auth_token=None, overwrite=True, quota_class=None, service_catalog=None, **kwargs): """Initialize RequestContext. :param read_deleted: 'no' indicates deleted records are hidden, 'yes' indicates deleted records are visible, 'only' indicates that *only* deleted records are visible. :param overwrite: Set to False to ensure that the greenthread local copy of the index is not overwritten. :param kwargs: Extra arguments that might be present, but we ignore because they possibly came in from older rpc messages. """ user = kwargs.pop('user', None) tenant = kwargs.pop('tenant', None) super(RequestContext, self).__init__( auth_token=auth_token, user=user_id or user, tenant=project_id or tenant, domain=kwargs.pop('domain', None), user_domain=kwargs.pop('user_domain', None), project_domain=kwargs.pop('project_domain', None), is_admin=is_admin, read_only=kwargs.pop('read_only', False), show_deleted=kwargs.pop('show_deleted', False), request_id=request_id, resource_uuid=kwargs.pop('resource_uuid', None), overwrite=overwrite, roles=roles) self.user_id = self.user self.project_id = self.tenant if self.is_admin is None: self.is_admin = policy.check_is_admin(self) elif self.is_admin and 'admin' not in self.roles: self.roles.append('admin') self.read_deleted = read_deleted self.remote_address = remote_address if not timestamp: timestamp = timeutils.utcnow() if isinstance(timestamp, six.string_types): timestamp = timeutils.parse_strtime(timestamp) self.timestamp = timestamp if service_catalog: self.service_catalog = [s for s in service_catalog if s.get('type') in ('compute', 'volume')] else: self.service_catalog = [] self.quota_class = quota_class def _get_read_deleted(self): return self._read_deleted def _set_read_deleted(self, read_deleted): if read_deleted not in ('no', 'yes', 'only'): raise ValueError(_("read_deleted can only be one of 'no', " "'yes' or 'only', not %r") % read_deleted) self._read_deleted = read_deleted def _del_read_deleted(self): del self._read_deleted read_deleted = property(_get_read_deleted, _set_read_deleted, _del_read_deleted) def to_dict(self): values = super(RequestContext, self).to_dict() values.update({ 'user_id': getattr(self, 'user_id', None), 'project_id': getattr(self, 'project_id', None), 'read_deleted': getattr(self, 'read_deleted', None), 'remote_address': getattr(self, 'remote_address', None), 'timestamp': self.timestamp.isoformat() if hasattr( self, 'timestamp') else None, 'quota_class': getattr(self, 'quota_class', None), 'service_catalog': getattr(self, 'service_catalog', None)}) return values @classmethod def from_dict(cls, values): return cls(**values) def elevated(self, read_deleted=None, overwrite=False): """Return a version of this context with admin flag set.""" ctx = copy.deepcopy(self) ctx.is_admin = True if 'admin' not in ctx.roles: ctx.roles.append('admin') if read_deleted is not None: ctx.read_deleted = read_deleted return ctx def to_policy_values(self): policy = super(RequestContext, self).to_policy_values() policy['is_admin'] = self.is_admin return 
policy def get_admin_context(read_deleted="no"): return RequestContext(user_id=None, project_id=None, is_admin=True, read_deleted=read_deleted, overwrite=False) manila-10.0.0/manila/scheduler/0000775000175000017500000000000013656750362016307 5ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/utils.py0000664000175000017500000001574513656750227020035 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # Copyright (c) 2016 EMC Corporation # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from oslo_utils import strutils from manila.scheduler.filters import extra_specs_ops LOG = log.getLogger(__name__) def generate_stats(host_state, properties): """Generates statistics from host and share data.""" host_stats = { 'host': host_state.host, 'share_backend_name': host_state.share_backend_name, 'vendor_name': host_state.vendor_name, 'driver_version': host_state.driver_version, 'storage_protocol': host_state.storage_protocol, 'qos': host_state.qos, 'total_capacity_gb': host_state.total_capacity_gb, 'allocated_capacity_gb': host_state.allocated_capacity_gb, 'free_capacity_gb': host_state.free_capacity_gb, 'reserved_percentage': host_state.reserved_percentage, 'driver_handles_share_servers': host_state.driver_handles_share_servers, 'thin_provisioning': host_state.thin_provisioning, 'updated': host_state.updated, 'dedupe': host_state.dedupe, 'compression': host_state.compression, 'snapshot_support': host_state.snapshot_support, 'create_share_from_snapshot_support': host_state.create_share_from_snapshot_support, 'revert_to_snapshot_support': host_state.revert_to_snapshot_support, 'mount_snapshot_support': host_state.mount_snapshot_support, 'replication_domain': host_state.replication_domain, 'replication_type': host_state.replication_type, 'provisioned_capacity_gb': host_state.provisioned_capacity_gb, 'pools': host_state.pools, 'max_over_subscription_ratio': host_state.max_over_subscription_ratio, 'sg_consistent_snapshot_support': ( host_state.sg_consistent_snapshot_support), 'ipv4_support': host_state.ipv4_support, 'ipv6_support': host_state.ipv6_support, } host_caps = host_state.capabilities share_type = properties.get('share_type', {}) extra_specs = share_type.get('extra_specs', {}) share_group_type = properties.get('share_group_type', {}) group_specs = share_group_type.get('group_specs', {}) request_spec = properties.get('request_spec', {}) share_stats = request_spec.get('resource_properties', {}) stats = { 'host_stats': host_stats, 'host_caps': host_caps, 'share_type': share_type, 'extra_specs': extra_specs, 'share_stats': share_stats, 'share_group_type': share_group_type, 'group_specs': group_specs, } return stats def use_thin_logic(share_type): # NOTE(xyang): To preserve the existing behavior, we use thin logic # to evaluate in two cases: # 1) 'thin_provisioning' is not set in extra specs (This is for # backward compatibility. If not set, the scheduler behaves # the same as before this bug fix). 
# 2) 'thin_provisioning' is set in extra specs and it is # ' True' or 'True'. # Otherwise we use the thick logic to evaluate. use_thin_logic = True thin_spec = None try: thin_spec = share_type.get('extra_specs', {}).get( 'thin_provisioning') if thin_spec is None: thin_spec = share_type.get('extra_specs', {}).get( 'capabilities:thin_provisioning') # NOTE(xyang) 'use_thin_logic' and 'thin_provisioning' are NOT # the same thing. The first purpose of "use_thin_logic" is to # preserve the existing scheduler behavior if 'thin_provisioning' # is NOT in extra_specs (if thin_spec is None, use_thin_logic # should be True). The second purpose of 'use_thin_logic' # is to honor 'thin_provisioning' if it is in extra specs (if # thin_spec is set to True, use_thin_logic should be True; if # thin_spec is set to False, use_thin_logic should be False). use_thin_logic = strutils.bool_from_string( thin_spec, strict=True) if thin_spec is not None else True except ValueError: # Check if the value of thin_spec is ' True'. if thin_spec is not None and not extra_specs_ops.match( True, thin_spec): use_thin_logic = False return use_thin_logic def thin_provisioning(host_state_thin_provisioning): # NOTE(xyang): host_state_thin_provisioning is reported by driver. # It can be either bool (True or False) or # list ([True, False], [True], [False]). thin_capability = [host_state_thin_provisioning] if not isinstance( host_state_thin_provisioning, list) else host_state_thin_provisioning return True in thin_capability def capabilities_satisfied(capabilities, extra_specs): # These extra-specs are not capabilities for matching hosts ignored_extra_specs = ( 'availability_zones', 'capabilities:availability_zones', ) for key, req in extra_specs.items(): # Ignore some extra_specs if told to if key in ignored_extra_specs: continue # Either not scoped format, or in capabilities scope scope = key.split(':') # Ignore scoped (such as vendor-specific) capabilities if len(scope) > 1 and scope[0] != "capabilities": continue # Strip off prefix if spec started with 'capabilities:' elif scope[0] == "capabilities": del scope[0] cap = capabilities for index in range(len(scope)): try: cap = cap.get(scope[index]) except AttributeError: cap = None if cap is None: LOG.debug("Host doesn't provide capability '%(cap)s' " "listed in the extra specs", {'cap': scope[index]}) return False # Make all capability values a list so we can handle lists cap_list = [cap] if not isinstance(cap, list) else cap # Loop through capability values looking for any match for cap_value in cap_list: if extra_specs_ops.match(cap_value, req): break else: # Nothing matched, so bail out LOG.debug('Share type extra spec requirement ' '"%(key)s=%(req)s" does not match reported ' 'capability "%(cap)s"', {'key': key, 'req': req, 'cap': cap}) return False return True manila-10.0.0/manila/scheduler/__init__.py0000664000175000017500000000000013656750227020406 0ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/evaluator/0000775000175000017500000000000013656750362020311 5ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/evaluator/__init__.py0000664000175000017500000000000013656750227022410 0ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/evaluator/evaluator.py0000664000175000017500000001743513656750227022677 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import operator import re import pyparsing import six from manila import exception from manila.i18n import _ def _operatorOperands(tokenList): it = iter(tokenList) while 1: try: op1 = next(it) op2 = next(it) yield(op1, op2) except StopIteration: break class EvalConstant(object): def __init__(self, toks): self.value = toks[0] def eval(self): result = self.value if (isinstance(result, six.string_types) and re.match(r"^[a-zA-Z_]+\.[a-zA-Z_]+$", result)): (which_dict, entry) = result.split('.') try: result = _vars[which_dict][entry] except KeyError as e: msg = _("KeyError: %s") % six.text_type(e) raise exception.EvaluatorParseException(reason=msg) except TypeError as e: msg = _("TypeError: %s") % six.text_type(e) raise exception.EvaluatorParseException(reason=msg) try: result = int(result) except ValueError: try: result = float(result) except ValueError as e: msg = _("ValueError: %s") % six.text_type(e) raise exception.EvaluatorParseException(reason=msg) return result class EvalSignOp(object): operations = { '+': 1, '-': -1, } def __init__(self, toks): self.sign, self.value = toks[0] def eval(self): return self.operations[self.sign] * self.value.eval() class EvalAddOp(object): def __init__(self, toks): self.value = toks[0] def eval(self): sum = self.value[0].eval() for op, val in _operatorOperands(self.value[1:]): if op == '+': sum += val.eval() elif op == '-': sum -= val.eval() return sum class EvalMultOp(object): def __init__(self, toks): self.value = toks[0] def eval(self): prod = self.value[0].eval() for op, val in _operatorOperands(self.value[1:]): try: if op == '*': prod *= val.eval() elif op == '/': prod /= float(val.eval()) except ZeroDivisionError as e: msg = _("ZeroDivisionError: %s") % six.text_type(e) raise exception.EvaluatorParseException(reason=msg) return prod class EvalPowerOp(object): def __init__(self, toks): self.value = toks[0] def eval(self): prod = self.value[0].eval() for op, val in _operatorOperands(self.value[1:]): prod = pow(prod, val.eval()) return prod class EvalNegateOp(object): def __init__(self, toks): self.negation, self.value = toks[0] def eval(self): return not self.value.eval() class EvalComparisonOp(object): operations = { "<": operator.lt, "<=": operator.le, ">": operator.gt, ">=": operator.ge, "!=": operator.ne, "==": operator.eq, "<>": operator.ne, } def __init__(self, toks): self.value = toks[0] def eval(self): val1 = self.value[0].eval() for op, val in _operatorOperands(self.value[1:]): fn = self.operations[op] val2 = val.eval() if not fn(val1, val2): break val1 = val2 else: return True return False class EvalTernaryOp(object): def __init__(self, toks): self.value = toks[0] def eval(self): condition = self.value[0].eval() if condition: return self.value[2].eval() else: return self.value[4].eval() class EvalFunction(object): functions = { "abs": abs, "max": max, "min": min, } def __init__(self, toks): self.func, self.value = toks[0] def eval(self): args = self.value.eval() if type(args) is list: return self.functions[self.func](*args) else: return self.functions[self.func](args) class EvalCommaSeperator(object): def __init__(self, toks): self.value = 
toks[0] def eval(self): val1 = self.value[0].eval() val2 = self.value[2].eval() if type(val2) is list: val_list = [] val_list.append(val1) for val in val2: val_list.append(val) return val_list return [val1, val2] class EvalBoolAndOp(object): def __init__(self, toks): self.value = toks[0] def eval(self): left = self.value[0].eval() right = self.value[2].eval() return left and right class EvalBoolOrOp(object): def __init__(self, toks): self.value = toks[0] def eval(self): left = self.value[0].eval() right = self.value[2].eval() return left or right _parser = None _vars = {} def _def_parser(): # Enabling packrat parsing greatly speeds up the parsing. pyparsing.ParserElement.enablePackrat() alphas = pyparsing.alphas Combine = pyparsing.Combine Forward = pyparsing.Forward nums = pyparsing.nums oneOf = pyparsing.oneOf opAssoc = pyparsing.opAssoc operatorPrecedence = pyparsing.operatorPrecedence Word = pyparsing.Word integer = Word(nums) real = Combine(Word(nums) + '.' + Word(nums)) variable = Word(alphas + '_' + '.') number = real | integer expr = Forward() fn = Word(alphas + '_' + '.') operand = number | variable | fn signop = oneOf('+ -') addop = oneOf('+ -') multop = oneOf('* /') comparisonop = oneOf(' '.join(EvalComparisonOp.operations.keys())) ternaryop = ('?', ':') boolandop = oneOf('AND and &&') boolorop = oneOf('OR or ||') negateop = oneOf('NOT not !') operand.setParseAction(EvalConstant) expr = operatorPrecedence(operand, [ (fn, 1, opAssoc.RIGHT, EvalFunction), ("^", 2, opAssoc.RIGHT, EvalPowerOp), (signop, 1, opAssoc.RIGHT, EvalSignOp), (multop, 2, opAssoc.LEFT, EvalMultOp), (addop, 2, opAssoc.LEFT, EvalAddOp), (negateop, 1, opAssoc.RIGHT, EvalNegateOp), (comparisonop, 2, opAssoc.LEFT, EvalComparisonOp), (ternaryop, 3, opAssoc.LEFT, EvalTernaryOp), (boolandop, 2, opAssoc.LEFT, EvalBoolAndOp), (boolorop, 2, opAssoc.LEFT, EvalBoolOrOp), (',', 2, opAssoc.RIGHT, EvalCommaSeperator), ]) return expr def evaluate(expression, **kwargs): """Evaluates an expression. Provides the facility to evaluate mathematical expressions, and to substitute variables from dictionaries into those expressions. Supports both integer and floating point values, and automatic promotion where necessary. """ global _parser if _parser is None: _parser = _def_parser() global _vars _vars = kwargs try: result = _parser.parseString(expression, parseAll=True)[0] except pyparsing.ParseException as e: msg = _("ParseException: %s") % six.text_type(e) raise exception.EvaluatorParseException(reason=msg) return result.eval() manila-10.0.0/manila/scheduler/drivers/0000775000175000017500000000000013656750362017765 5ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/drivers/__init__.py0000664000175000017500000000000013656750227022064 0ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/drivers/simple.py0000664000175000017500000000621513656750227021634 0ustar zuulzuul00000000000000# Copyright (c) 2010 OpenStack, LLC. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Simple Scheduler """ from oslo_config import cfg from manila import db from manila import exception from manila.i18n import _ from manila.scheduler.drivers import base from manila.scheduler.drivers import chance from manila import utils simple_scheduler_opts = [ cfg.IntOpt("max_gigabytes", default=10000, help="Maximum number of volume gigabytes to allow per host."), ] CONF = cfg.CONF CONF.register_opts(simple_scheduler_opts) class SimpleScheduler(chance.ChanceScheduler): """Implements Naive Scheduler that tries to find least loaded host.""" def schedule_create_share(self, context, request_spec, filter_properties): """Picks a host that is up and has the fewest shares.""" # TODO(rushiagr) - pick only hosts that run shares elevated = context.elevated() share_id = request_spec.get('share_id') snapshot_id = request_spec.get('snapshot_id') share_properties = request_spec.get('share_properties') share_size = share_properties.get('size') instance_properties = request_spec.get('share_instance_properties', {}) availability_zone_id = instance_properties.get('availability_zone_id') results = db.service_get_all_share_sorted(elevated) if availability_zone_id: results = [(service_g, gigs) for (service_g, gigs) in results if (service_g['availability_zone_id'] == availability_zone_id)] for result in results: (service, share_gigabytes) = result if share_gigabytes + share_size > CONF.max_gigabytes: msg = _("Not enough allocatable share gigabytes remaining") raise exception.NoValidHost(reason=msg) if utils.service_is_up(service) and not service['disabled']: updated_share = base.share_update_db(context, share_id, service['host']) self.share_rpcapi.create_share_instance( context, updated_share.instance, service['host'], request_spec, None, snapshot_id=snapshot_id) return None msg = _("Is the appropriate service running?") raise exception.NoValidHost(reason=msg) manila-10.0.0/manila/scheduler/drivers/filter.py0000664000175000017500000004570213656750227021634 0ustar zuulzuul00000000000000# Copyright (c) 2011 Intel Corporation # Copyright (c) 2011 OpenStack, LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ The FilterScheduler is for scheduling of share and share group creation. You can customize this scheduler by specifying your own share/share group filters and weighing functions. 
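For illustration, a deployment might select this driver and tune its filters
and weighers in manila.conf (a sketch only; confirm option names and class
names against the registered scheduler and host manager options)::

    [DEFAULT]
    scheduler_driver = manila.scheduler.drivers.filter.FilterScheduler
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
    scheduler_default_weighers = CapacityWeigher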
""" from oslo_config import cfg from oslo_log import log from manila import exception from manila.i18n import _ from manila.scheduler.drivers import base from manila.scheduler import scheduler_options from manila.share import share_types CONF = cfg.CONF LOG = log.getLogger(__name__) class FilterScheduler(base.Scheduler): """Scheduler that can be used for filtering and weighing.""" def __init__(self, *args, **kwargs): super(FilterScheduler, self).__init__(*args, **kwargs) self.cost_function_cache = None self.options = scheduler_options.SchedulerOptions() self.max_attempts = self._max_attempts() def _get_configuration_options(self): """Fetch options dictionary. Broken out for testing.""" return self.options.get_configuration() def get_pools(self, context, filters, cached): return self.host_manager.get_pools(context, filters, cached) def _post_select_populate_filter_properties(self, filter_properties, host_state): """Add additional information to filter properties. Add additional information to the filter properties after a host has been selected by the scheduling process. """ # Add a retry entry for the selected volume backend: self._add_retry_host(filter_properties, host_state.host) def _add_retry_host(self, filter_properties, host): """Add retry entry for the selected volume backend. In the event that the request gets re-scheduled, this entry will signal that the given backend has already been tried. """ retry = filter_properties.get('retry') if not retry: return hosts = retry['hosts'] hosts.append(host) def _max_attempts(self): max_attempts = CONF.scheduler_max_attempts if max_attempts < 1: msg = _("Invalid value for 'scheduler_max_attempts', " "must be >=1") raise exception.InvalidParameterValue(err=msg) return max_attempts def schedule_create_share(self, context, request_spec, filter_properties): weighed_host = self._schedule_share(context, request_spec, filter_properties) host = weighed_host.obj.host share_id = request_spec['share_id'] snapshot_id = request_spec['snapshot_id'] updated_share = base.share_update_db(context, share_id, host) self._post_select_populate_filter_properties(filter_properties, weighed_host.obj) # context is not serializable filter_properties.pop('context', None) self.share_rpcapi.create_share_instance( context, updated_share.instance, host, request_spec=request_spec, filter_properties=filter_properties, snapshot_id=snapshot_id ) def schedule_create_replica(self, context, request_spec, filter_properties): share_replica_id = request_spec['share_instance_properties'].get('id') weighed_host = self._schedule_share( context, request_spec, filter_properties) host = weighed_host.obj.host updated_share_replica = base.share_replica_update_db( context, share_replica_id, host) self._post_select_populate_filter_properties(filter_properties, weighed_host.obj) # context is not serializable filter_properties.pop('context', None) self.share_rpcapi.create_share_replica( context, updated_share_replica, host, request_spec=request_spec, filter_properties=filter_properties) def _format_filter_properties(self, context, filter_properties, request_spec): elevated = context.elevated() share_properties = request_spec['share_properties'] share_instance_properties = (request_spec.get( 'share_instance_properties', {})) # Since Manila is using mixed filters from Oslo and it's own, which # takes 'resource_XX' and 'volume_XX' as input respectively, copying # 'volume_XX' to 'resource_XX' will make both filters happy. 
resource_properties = share_properties.copy() resource_properties.update(share_instance_properties.copy()) share_type = request_spec.get("share_type", {}) if not share_type: msg = _("You must create a share type in advance," " and specify in request body or" " set default_share_type in manila.conf.") LOG.error(msg) raise exception.InvalidParameterValue(err=msg) extra_specs = share_type.get('extra_specs', {}) if extra_specs: for extra_spec_name in share_types.get_boolean_extra_specs(): extra_spec = extra_specs.get(extra_spec_name) if extra_spec is not None: if not extra_spec.startswith(""): extra_spec = " %s" % extra_spec share_type['extra_specs'][extra_spec_name] = extra_spec resource_type = request_spec.get("share_type") or {} request_spec.update({'resource_properties': resource_properties}) config_options = self._get_configuration_options() share_group = request_spec.get('share_group') # NOTE(gouthamr): If 'active_replica_host' or 'snapshot_host' is # present in the request spec, pass that host's 'replication_domain' to # the ShareReplication and CreateFromSnapshot filters. active_replica_host = request_spec.get('active_replica_host') snapshot_host = request_spec.get('snapshot_host') allowed_hosts = [] if active_replica_host: allowed_hosts.append(active_replica_host) if snapshot_host: allowed_hosts.append(snapshot_host) replication_domain = None if active_replica_host or snapshot_host: temp_hosts = self.host_manager.get_all_host_states_share(elevated) matching_host = next((host for host in temp_hosts if host.host in allowed_hosts), None) if matching_host: replication_domain = matching_host.replication_domain # NOTE(zengyingzhe): remove the 'share_backend_name' extra spec, # let scheduler choose the available host for this replica or # snapshot clone creation request. share_type.get('extra_specs', {}).pop('share_backend_name', None) if filter_properties is None: filter_properties = {} self._populate_retry_share(filter_properties, resource_properties) filter_properties.update({'context': context, 'request_spec': request_spec, 'config_options': config_options, 'share_type': share_type, 'resource_type': resource_type, 'share_group': share_group, 'replication_domain': replication_domain, }) self.populate_filter_properties_share(request_spec, filter_properties) return filter_properties, share_properties def _schedule_share(self, context, request_spec, filter_properties=None): """Returns a list of hosts that meet the required specs. The list is ordered by their fitness. """ elevated = context.elevated() filter_properties, share_properties = self._format_filter_properties( context, filter_properties, request_spec) # Find our local list of acceptable hosts by filtering and # weighing our options. we virtually consume resources on # it so subsequent selections can adjust accordingly. # Note: remember, we are using an iterator here. So only # traverse this list once. hosts = self.host_manager.get_all_host_states_share(elevated) # Filter local hosts based on requirements ... hosts, last_filter = self.host_manager.get_filtered_hosts( hosts, filter_properties) if not hosts: msg = _('Failed to find a weighted host, the last executed filter' ' was %s.') raise exception.NoValidHost( reason=msg % last_filter, detail_data={'last_filter': last_filter}) LOG.debug("Filtered share %(hosts)s", {"hosts": hosts}) # weighted_host = WeightedHost() ... the best # host for the job. 
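        # (Illustrative note) get_weighed_hosts() returns weighed-host wrappers
        # ordered best-first, so weighed_hosts[0] below is the selected
        # candidate and its .obj attribute is the underlying host state that
        # consume_from_share() later updates.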
weighed_hosts = self.host_manager.get_weighed_hosts(hosts, filter_properties) best_host = weighed_hosts[0] LOG.debug("Choosing for share: %(best_host)s", {"best_host": best_host}) # NOTE(rushiagr): updating the available space parameters at same place best_host.obj.consume_from_share(share_properties) return best_host def _populate_retry_share(self, filter_properties, properties): """Populate filter properties with retry history. Populate filter properties with history of retries for this request. If maximum retries is exceeded, raise NoValidHost. """ max_attempts = self.max_attempts retry = filter_properties.pop('retry', {}) if max_attempts == 1: # re-scheduling is disabled. return # retry is enabled, update attempt count: if retry: retry['num_attempts'] += 1 else: retry = { 'num_attempts': 1, 'hosts': [] # list of share service hosts tried } filter_properties['retry'] = retry share_id = properties.get('share_id') self._log_share_error(share_id, retry) if retry['num_attempts'] > max_attempts: msg = _("Exceeded max scheduling attempts %(max_attempts)d for " "share %(share_id)s") % { "max_attempts": max_attempts, "share_id": share_id } raise exception.NoValidHost(reason=msg) def _log_share_error(self, share_id, retry): """Log any exceptions from a previous share create operation. If the request contained an exception from a previous share create operation, log it to aid debugging. """ exc = retry.pop('exc', None) # string-ified exception from share if not exc: return # no exception info from a previous attempt, skip hosts = retry.get('hosts') if not hosts: return # no previously attempted hosts, skip last_host = hosts[-1] LOG.error("Error scheduling %(share_id)s from last share-service: " "%(last_host)s : %(exc)s", { "share_id": share_id, "last_host": last_host, "exc": "exc" }) def populate_filter_properties_share(self, request_spec, filter_properties): """Stuff things into filter_properties. Can be overridden in a subclass to add more data. """ shr = request_spec['share_properties'] inst = request_spec['share_instance_properties'] filter_properties['size'] = shr['size'] filter_properties['availability_zone_id'] = ( inst.get('availability_zone_id') ) filter_properties['user_id'] = shr.get('user_id') filter_properties['metadata'] = shr.get('metadata') def schedule_create_share_group(self, context, share_group_id, request_spec, filter_properties): LOG.info("Scheduling share group %s.", share_group_id) host = self._get_best_host_for_share_group(context, request_spec) if not host: msg = _("No hosts available for share group %s.") % share_group_id raise exception.NoValidHost(reason=msg) msg = "Chose host %(host)s for create_share_group %(group)s." LOG.info(msg, {'host': host, 'group': share_group_id}) updated_share_group = base.share_group_update_db( context, share_group_id, host) self.share_rpcapi.create_share_group( context, updated_share_group, host) def _get_weighted_hosts_for_share_type(self, context, request_spec, share_type): config_options = self._get_configuration_options() # NOTE(ameade): Find our local list of acceptable hosts by # filtering and weighing our options. We virtually consume # resources on it so subsequent selections can adjust accordingly. # NOTE(ameade): Remember, we are using an iterator here. So only # traverse this list once. 
all_hosts = self.host_manager.get_all_host_states_share(context) if not all_hosts: return [] share_type['extra_specs'] = share_type.get('extra_specs', {}) if share_type['extra_specs']: for spec_name in share_types.get_required_extra_specs(): extra_spec = share_type['extra_specs'].get(spec_name) if extra_spec is not None: share_type['extra_specs'][spec_name] = ( " %s" % extra_spec) filter_properties = { 'context': context, 'request_spec': request_spec, 'config_options': config_options, 'share_type': share_type, 'resource_type': share_type, 'size': 0, } # Filter local hosts based on requirements ... hosts, last_filter = self.host_manager.get_filtered_hosts( all_hosts, filter_properties) if not hosts: return [] LOG.debug("Filtered %s", hosts) # weighted_host = WeightedHost() ... the best host for the job. weighed_hosts = self.host_manager.get_weighed_hosts( hosts, filter_properties) if not weighed_hosts: return [] return weighed_hosts def _get_weighted_hosts_for_share_group_type(self, context, request_spec, share_group_type): config_options = self._get_configuration_options() all_hosts = self.host_manager.get_all_host_states_share(context) if not all_hosts: return [] filter_properties = { 'context': context, 'request_spec': request_spec, 'config_options': config_options, 'share_group_type': share_group_type, 'resource_type': share_group_type, } hosts, last_filter = self.host_manager.get_filtered_hosts( all_hosts, filter_properties, CONF.scheduler_default_share_group_filters) if not hosts: return [] LOG.debug("Filtered %s", hosts) weighed_hosts = self.host_manager.get_weighed_hosts( hosts, filter_properties) if not weighed_hosts: return [] return weighed_hosts def _get_weighted_candidates_share_group(self, context, request_spec): """Finds hosts that support the share group. Returns a list of hosts that meet the required specs, ordered by their fitness. 
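        For example (illustrative), if one share type in the group can be
        served by hosts {H1, H2} and another only by {H2, H3}, the
        intersection taken below leaves H2 as the sole candidate, because a
        single host must support every share type in the group.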
""" elevated = context.elevated() shr_types = request_spec.get("share_types") weighed_hosts = [] for iteration_count, share_type in enumerate(shr_types): temp_weighed_hosts = self._get_weighted_hosts_for_share_type( elevated, request_spec, share_type) # NOTE(ameade): Take the intersection of hosts so we have one that # can support all share types of the share group if iteration_count == 0: weighed_hosts = temp_weighed_hosts else: new_weighed_hosts = [] for host1 in weighed_hosts: for host2 in temp_weighed_hosts: if host1.obj.host == host2.obj.host: new_weighed_hosts.append(host1) weighed_hosts = new_weighed_hosts if not weighed_hosts: return [] # NOTE(ameade): Ensure the hosts support the share group type share_group_type = request_spec.get("resource_type", {}) temp_weighed_group_hosts = ( self._get_weighted_hosts_for_share_group_type( elevated, request_spec, share_group_type)) new_weighed_hosts = [] for host1 in weighed_hosts: for host2 in temp_weighed_group_hosts: if host1.obj.host == host2.obj.host: new_weighed_hosts.append(host1) weighed_hosts = new_weighed_hosts return weighed_hosts def _get_best_host_for_share_group(self, context, request_spec): weighed_hosts = self._get_weighted_candidates_share_group( context, request_spec) if not weighed_hosts: return None return weighed_hosts[0].obj.host def host_passes_filters(self, context, host, request_spec, filter_properties): elevated = context.elevated() filter_properties, share_properties = self._format_filter_properties( context, filter_properties, request_spec) hosts = self.host_manager.get_all_host_states_share(elevated) hosts, last_filter = self.host_manager.get_filtered_hosts( hosts, filter_properties) hosts = self.host_manager.get_weighed_hosts(hosts, filter_properties) for tgt_host in hosts: if tgt_host.obj.host == host: return tgt_host.obj msg = (_('Cannot place share %(id)s on %(host)s, the last executed' ' filter was %(last_filter)s.') % {'id': request_spec['share_id'], 'host': host, 'last_filter': last_filter}) raise exception.NoValidHost(reason=msg) manila-10.0.0/manila/scheduler/drivers/base.py0000664000175000017500000001203413656750227021251 0ustar zuulzuul00000000000000# Copyright (c) 2010 OpenStack, LLC. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Scheduler base class that all Schedulers should inherit from """ from oslo_config import cfg from oslo_utils import importutils from oslo_utils import timeutils from manila import db from manila.i18n import _ from manila.share import rpcapi as share_rpcapi from manila import utils scheduler_driver_opts = [ cfg.StrOpt('scheduler_host_manager', default='manila.scheduler.host_manager.HostManager', help='The scheduler host manager class to use.'), cfg.IntOpt('scheduler_max_attempts', default=3, help='Maximum number of attempts to schedule a share.'), ] CONF = cfg.CONF CONF.register_opts(scheduler_driver_opts) def share_update_db(context, share_id, host): '''Set the host and set the scheduled_at field of a share. :returns: A Share with the updated fields set properly. ''' now = timeutils.utcnow() values = {'host': host, 'scheduled_at': now} return db.share_update(context, share_id, values) def share_replica_update_db(context, share_replica_id, host): """Set the host and the scheduled_at field of a share replica. :returns: A Share Replica with the updated fields set. """ now = timeutils.utcnow() values = {'host': host, 'scheduled_at': now} return db.share_replica_update(context, share_replica_id, values) def share_group_update_db(context, share_group_id, host): '''Set the host and set the updated_at field of a share group. :returns: A share group with the updated fields set properly. ''' now = timeutils.utcnow() values = {'host': host, 'updated_at': now} return db.share_group_update(context, share_group_id, values) class Scheduler(object): """The base class that all Scheduler classes should inherit from.""" def __init__(self): self.host_manager = importutils.import_object( CONF.scheduler_host_manager) self.share_rpcapi = share_rpcapi.ShareAPI() def get_host_list(self): """Get a list of hosts from the HostManager.""" return self.host_manager.get_host_list() def get_service_capabilities(self): """Get the normalized set of capabilities for the services.""" return self.host_manager.get_service_capabilities() def update_service_capabilities(self, service_name, host, capabilities): """Process a capability update from a service node.""" self.host_manager.update_service_capabilities(service_name, host, capabilities) def hosts_up(self, context, topic): """Return the list of hosts that have a running service for topic.""" services = db.service_get_all_by_topic(context, topic) return [service['host'] for service in services if utils.service_is_up(service)] def schedule(self, context, topic, method, *_args, **_kwargs): """Must override schedule method for scheduler to work.""" raise NotImplementedError(_("Must implement a fallback schedule")) def schedule_create_share(self, context, request_spec, filter_properties): """Must override schedule method for scheduler to work.""" raise NotImplementedError(_("Must implement schedule_create_share")) def schedule_create_share_group(self, context, share_group_id, request_spec, filter_properties): """Must override schedule method for scheduler to work.""" raise NotImplementedError(_( "Must implement schedule_create_share_group")) def get_pools(self, context, filters): """Must override schedule method for scheduler to work.""" raise NotImplementedError(_("Must implement get_pools")) def host_passes_filters(self, context, host, request_spec, filter_properties): """Must override schedule method for migration to work.""" raise NotImplementedError(_("Must implement host_passes_filters")) def schedule_create_replica(self, context, request_spec, filter_properties): 
"""Must override schedule method for create replica to work.""" raise NotImplementedError(_("Must implement schedule_create_replica")) manila-10.0.0/manila/scheduler/drivers/chance.py0000664000175000017500000000515513656750227021566 0ustar zuulzuul00000000000000# Copyright (c) 2010 OpenStack, LLC. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Chance (Random) Scheduler implementation """ import random from oslo_config import cfg from manila import exception from manila.i18n import _ from manila.scheduler.drivers import base CONF = cfg.CONF class ChanceScheduler(base.Scheduler): """Implements Scheduler as a random node selector.""" def _filter_hosts(self, request_spec, hosts, **kwargs): """Filter a list of hosts based on request_spec.""" filter_properties = kwargs.get('filter_properties', {}) ignore_hosts = filter_properties.get('ignore_hosts', []) hosts = [host for host in hosts if host not in ignore_hosts] return hosts def _schedule(self, context, topic, request_spec, **kwargs): """Picks a host that is up at random.""" elevated = context.elevated() hosts = self.hosts_up(elevated, topic) if not hosts: msg = _("Is the appropriate service running?") raise exception.NoValidHost(reason=msg) hosts = self._filter_hosts(request_spec, hosts, **kwargs) if not hosts: msg = _("Could not find another host") raise exception.NoValidHost(reason=msg) return hosts[int(random.random() * len(hosts))] def schedule_create_share(self, context, request_spec, filter_properties): """Picks a host that is up at random.""" topic = CONF.share_topic host = self._schedule(context, topic, request_spec, filter_properties=filter_properties) share_id = request_spec['share_id'] snapshot_id = request_spec['snapshot_id'] updated_share = base.share_update_db(context, share_id, host) self.share_rpcapi.create_share_instance( context, updated_share.instance, host, request_spec, filter_properties, snapshot_id ) manila-10.0.0/manila/scheduler/base_handler.py0000664000175000017500000000342113656750227021270 0ustar zuulzuul00000000000000# Copyright (c) 2011-2013 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ A common base for handling extension classes. 
Used by BaseFilterHandler and BaseWeightHandler """ import inspect from stevedore import extension class BaseHandler(object): """Base class to handle loading filter and weight classes.""" def __init__(self, modifier_class_type, modifier_namespace): self.namespace = modifier_namespace self.modifier_class_type = modifier_class_type self.extension_manager = extension.ExtensionManager(modifier_namespace) def _is_correct_class(self, cls): """Check if an object is the correct type. Return whether an object is a class of the correct type and is not prefixed with an underscore. """ return (inspect.isclass(cls) and not cls.__name__.startswith('_') and issubclass(cls, self.modifier_class_type)) def get_all_classes(self): # We use a set, as some classes may have an entrypoint of their own, # and also be returned by a function such as 'all_filters' for example return [ext.plugin for ext in self.extension_manager if self._is_correct_class(ext.plugin)] manila-10.0.0/manila/scheduler/filters/0000775000175000017500000000000013656750362017757 5ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/filters/share_replication.py0000664000175000017500000000707113656750227024031 0ustar zuulzuul00000000000000# Copyright (c) 2016 Goutham Pacha Ravi # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from manila.scheduler.filters import base_host LOG = log.getLogger(__name__) class ShareReplicationFilter(base_host.BaseHostFilter): """ShareReplicationFilter filters hosts based on replication support.""" def host_passes(self, host_state, filter_properties): """Return True if 'active' replica's host can replicate with host. Design of this filter: - Share replication is symmetric. All backends that can replicate between each other must share the same 'replication_domain'. - For scheduling a share that can be replicated in the future, this filter checks for 'replication_domain' capability. - For scheduling a replica, it checks for the 'replication_domain' compatibility. """ active_replica_host = filter_properties.get('request_spec', {}).get( 'active_replica_host') existing_replica_hosts = filter_properties.get('request_spec', {}).get( 'all_replica_hosts', '').split(',') replication_type = filter_properties.get('resource_type', {}).get( 'extra_specs', {}).get('replication_type') active_replica_replication_domain = filter_properties.get( 'replication_domain') host_replication_domain = host_state.replication_domain if replication_type is None: # NOTE(gouthamr): You're probably not creating a replicated # share or a replica, then this host obviously passes. return True elif host_replication_domain is None: msg = "Replication is not enabled on host %s." LOG.debug(msg, host_state.host) return False elif active_replica_host is None: # 'replication_type' filtering will be handled by the # capabilities filter, since it is a share-type extra-spec. 
return True # Scheduler filtering by replication_domain for a replica if active_replica_replication_domain != host_replication_domain: msg = ("The replication domain of Host %(host)s is " "'%(host_domain)s' and it does not match the replication " "domain of the 'active' replica's host: " "%(active_replica_host)s, which is '%(arh_domain)s'. ") kwargs = { "host": host_state.host, "host_domain": host_replication_domain, "active_replica_host": active_replica_host, "arh_domain": active_replica_replication_domain, } LOG.debug(msg, kwargs) return False # Check host string for already created replicas if host_state.host in existing_replica_hosts: msg = ("Skipping host %s since it already hosts a replica for " "this share.") LOG.debug(msg, host_state.host) return False return True manila-10.0.0/manila/scheduler/filters/extra_specs_ops.py0000664000175000017500000000506513656750227023540 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import operator import six from oslo_utils import strutils # 1. The following operations are supported: # =, s==, s!=, s>=, s>, s<=, s<, , , , ==, !=, >=, <= # 2. Note that is handled in a different way below. # 3. If the first word in the extra_specs is not one of the operators, # it is ignored. _op_methods = {'=': lambda x, y: float(x) >= float(y), '': lambda x, y: y in x, '': lambda x, y: (strutils.bool_from_string(x) is strutils.bool_from_string(y)), '==': lambda x, y: float(x) == float(y), '!=': lambda x, y: float(x) != float(y), '>=': lambda x, y: float(x) >= float(y), '<=': lambda x, y: float(x) <= float(y), 's==': operator.eq, 's!=': operator.ne, 's<': operator.lt, 's<=': operator.le, 's>': operator.gt, 's>=': operator.ge} def match(value, req): # Make case-insensitive if (isinstance(value, six.string_types)): value = value.lower() req = req.lower() words = req.split() op = method = None if words: op = words.pop(0) method = _op_methods.get(op) if op != '' and not method: if type(value) is bool: return value == strutils.bool_from_string( req, strict=False, default=req) else: return value == req if value is None: return False if op == '': # Ex: v1 v2 v3 while True: if words.pop(0) == value: return True if not words: break op = words.pop(0) # remove a keyword if not words: break return False try: if words and method(value, words[0]): return True except ValueError: pass return False manila-10.0.0/manila/scheduler/filters/__init__.py0000664000175000017500000000000013656750227022056 0ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/filters/json.py0000664000175000017500000001163213656750227021305 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import operator from oslo_serialization import jsonutils import six from manila.scheduler.filters import base_host class JsonFilter(base_host.BaseHostFilter): """Host Filter to allow simple JSON-based grammar for selecting hosts.""" def _op_compare(self, args, op): """Check if operator can compare the first arg with the others. Returns True if the specified operator can successfully compare the first item in the args with all the rest. Will return False if only one item is in the list. """ if len(args) < 2: return False if op is operator.contains: bad = args[0] not in args[1:] else: bad = [arg for arg in args[1:] if not op(args[0], arg)] return not bool(bad) def _equals(self, args): """First term is == all the other terms.""" return self._op_compare(args, operator.eq) def _less_than(self, args): """First term is < all the other terms.""" return self._op_compare(args, operator.lt) def _greater_than(self, args): """First term is > all the other terms.""" return self._op_compare(args, operator.gt) def _in(self, args): """First term is in set of remaining terms.""" return self._op_compare(args, operator.contains) def _less_than_equal(self, args): """First term is <= all the other terms.""" return self._op_compare(args, operator.le) def _greater_than_equal(self, args): """First term is >= all the other terms.""" return self._op_compare(args, operator.ge) def _not(self, args): """Flip each of the arguments.""" return [not arg for arg in args] def _or(self, args): """True if any arg is True.""" return any(args) def _and(self, args): """True if all args are True.""" return all(args) commands = { '=': _equals, '<': _less_than, '>': _greater_than, 'in': _in, '<=': _less_than_equal, '>=': _greater_than_equal, 'not': _not, 'or': _or, 'and': _and, } def _parse_string(self, string, host_state): """Parse string. Strings prefixed with $ are capability lookups in the form '$variable' where 'variable' is an attribute in the HostState class. If $variable is a dictionary, you may use: $variable.dictkey """ if not string: return None if not string.startswith("$"): return string path = string[1:].split(".") obj = getattr(host_state, path[0], None) if obj is None: return None for item in path[1:]: obj = obj.get(item) if obj is None: return None return obj def _process_filter(self, query, host_state): """Recursively parse the query structure.""" if not query: return True cmd = query[0] method = self.commands[cmd] cooked_args = [] for arg in query[1:]: if isinstance(arg, list): arg = self._process_filter(arg, host_state) elif isinstance(arg, six.string_types): arg = self._parse_string(arg, host_state) if arg is not None: cooked_args.append(arg) result = method(self, cooked_args) return result def host_passes(self, host_state, filter_properties): """Filters hosts. Return a list of hosts that can fulfill the requirements specified in the query. """ # TODO(zhiteng) Add description for filter_properties structure # and scheduler_hints. 
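        # Example (illustrative, not from the original source): a client could
        # pass a scheduler hint such as
        #
        #   query = '["and", [">=", "$free_capacity_gb", 100],'
        #           ' ["in", "$storage_protocol", "NFS", "CIFS"]]'
        #
        # "$free_capacity_gb" and "$storage_protocol" are looked up on the
        # HostState object by _parse_string() above; the nested lists are
        # evaluated recursively by _process_filter() using the 'commands'
        # table, and the host passes only if the whole expression is truthy.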
try: query = filter_properties['scheduler_hints']['query'] except KeyError: query = None if not query: return True # NOTE(comstud): Not checking capabilities or service for # enabled/disabled so that a provided json filter can decide result = self._process_filter(jsonutils.loads(query), host_state) if isinstance(result, list): # If any succeeded, include the host result = any(result) if result: # Filter it out. return True return False manila-10.0.0/manila/scheduler/filters/create_from_snapshot.py0000664000175000017500000000532613656750227024544 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from manila.scheduler.filters import base_host from manila.share import utils as share_utils LOG = log.getLogger(__name__) class CreateFromSnapshotFilter(base_host.BaseHostFilter): """CreateFromSnapshotFilter filters hosts based on replication_domain.""" def host_passes(self, host_state, filter_properties): """Return True if new share's host is compatible with snapshot's host. Design of this filter: - Creating shares from snapshots in another pool or backend needs to match with one of the below conditions: - The backend of the new share must be the same as its parent snapshot. - Both new share and snapshot are in the same replication_domain """ snapshot_id = filter_properties.get('request_spec', {}).get( 'snapshot_id') snapshot_host = filter_properties.get( 'request_spec', {}).get('snapshot_host') if None in [snapshot_id, snapshot_host]: # NOTE(silvacarlose): if the request does not contain a snapshot_id # or a snapshot_host, the user is not creating a share from a # snapshot and we don't need to filter out the host. return True snapshot_backend = share_utils.extract_host(snapshot_host, 'backend') snapshot_rep_domain = filter_properties.get('replication_domain') host_backend = share_utils.extract_host(host_state.host, 'backend') host_rep_domain = host_state.replication_domain # Same backend if host_backend == snapshot_backend: return True # Same replication domain if snapshot_rep_domain and snapshot_rep_domain == host_rep_domain: return True msg = ("The parent's snapshot %(snapshot_id)s back end and " "replication domain don't match with the back end and " "replication domain of the Host %(host)s.") kwargs = { "snapshot_id": snapshot_id, "host": host_state.host } LOG.debug(msg, kwargs) return False manila-10.0.0/manila/scheduler/filters/base_host.py0000664000175000017500000000252013656750227022277 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Scheduler host filters """ from manila.scheduler.filters import base class BaseHostFilter(base.BaseFilter): """Base class for host filters.""" def _filter_one(self, obj, filter_properties): """Return True if the object passes the filter, otherwise False.""" return self.host_passes(obj, filter_properties) def host_passes(self, host_state, filter_properties): """Return True if the HostState passes the filter, otherwise False. Override this in a subclass. """ raise NotImplementedError() class HostFilterHandler(base.BaseFilterHandler): def __init__(self, namespace): super(HostFilterHandler, self).__init__(BaseHostFilter, namespace) manila-10.0.0/manila/scheduler/filters/ignore_attempted_hosts.py0000664000175000017500000000363513656750227025112 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from manila.scheduler.filters import base_host LOG = log.getLogger(__name__) class IgnoreAttemptedHostsFilter(base_host.BaseHostFilter): """Filter out previously attempted hosts A host passes this filter if it has not already been attempted for scheduling. The scheduler needs to add previously attempted hosts to the 'retry' key of filter_properties in order for this to work correctly. For example:: { 'retry': { 'hosts': ['host1', 'host2'], 'num_attempts': 3, } } """ def host_passes(self, host_state, filter_properties): """Skip nodes that have already been attempted.""" attempted = filter_properties.get('retry') if not attempted: # Re-scheduling is disabled LOG.debug("Re-scheduling is disabled.") return True hosts = attempted.get('hosts', []) host = host_state.host passes = host not in hosts pass_msg = "passes" if passes else "fails" LOG.debug("Host %(host)s %(pass_msg)s. Previously tried hosts: " "%(hosts)s", {'host': host, 'pass_msg': pass_msg, 'hosts': hosts}) return passes manila-10.0.0/manila/scheduler/filters/retry.py0000664000175000017500000000306613656750227021503 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack, LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
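# NOTE: Illustrative sketch only (not part of the original module). It shows
# the 'retry' entry in filter_properties that both IgnoreAttemptedHostsFilter
# (above) and RetryFilter (below) act on. Host names and counts are
# hypothetical.
def _example_retry_filter_properties():
    filter_properties = {
        'retry': {
            # Hosts that already failed for this request.
            'hosts': ['hostA@backend1#pool1'],
            'num_attempts': 2,
        },
    }
    candidate_host = 'hostB@backend2#pool1'
    attempted = filter_properties['retry'].get('hosts', [])
    # A candidate passes when it has not been attempted before; if the
    # 'retry' key is absent, re-scheduling is disabled and every host passes.
    return candidate_host not in attempted  # True in this sketch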
from oslo_log import log from manila.scheduler.filters import base_host LOG = log.getLogger(__name__) class RetryFilter(base_host.BaseHostFilter): """Filter out already tried nodes for scheduling purposes.""" def host_passes(self, host_state, filter_properties): """Skip nodes that have already been attempted.""" retry = filter_properties.get('retry') if not retry: # Re-scheduling is disabled LOG.debug("Re-scheduling is disabled") return True hosts = retry.get('hosts', []) host = host_state.host passes = host not in hosts pass_msg = "passes" if passes else "fails" LOG.debug("Host %(host)s %(pass_msg)s. Previously tried hosts: " "%(hosts)s", {"host": host, "pass_msg": pass_msg, "hosts": hosts}) # Host passes if it's not in the list of previously attempted hosts: return passes manila-10.0.0/manila/scheduler/filters/driver.py0000664000175000017500000000727013656750227021632 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from oslo_log import log as logging from manila.scheduler.evaluator import evaluator from manila.scheduler.filters import base_host from manila.scheduler import utils LOG = logging.getLogger(__name__) class DriverFilter(base_host.BaseHostFilter): """DriverFilter filters hosts based on a 'filter function' and metrics. DriverFilter filters based on share host's provided 'filter function' and metrics. """ def host_passes(self, host_state, filter_properties): """Determines whether a host has a passing filter_function or not.""" stats = self._generate_stats(host_state, filter_properties) LOG.debug("Driver Filter: Checking host '%s'", stats['host_stats']['host']) result = self._check_filter_function(stats) LOG.debug("Result: %s", result) LOG.debug("Done checking host '%s'", stats['host_stats']['host']) return result def _check_filter_function(self, stats): """Checks if a share passes a host's filter function. Returns a tuple in the format (filter_passing, filter_invalid). Both values are booleans. """ if stats['filter_function'] is None: LOG.debug("Filter function not set :: passing host.") return True try: filter_result = self._run_evaluator(stats['filter_function'], stats) except Exception as ex: # Warn the admin for now that there is an error in the # filter function. LOG.warning("Error in filtering function " "'%(function)s' : '%(error)s' :: failing host.", {'function': stats['filter_function'], 'error': ex, }) return False msg = "Filter function result for host %(host)s: %(result)s." 
args = {'host': stats['host_stats']['host'], 'result': six.text_type(filter_result)} LOG.info(msg, args) return filter_result def _run_evaluator(self, func, stats): """Evaluates a given function using the provided available stats.""" host_stats = stats['host_stats'] host_caps = stats['host_caps'] extra_specs = stats['extra_specs'] share_stats = stats['share_stats'] result = evaluator.evaluate( func, extra=extra_specs, stats=host_stats, capabilities=host_caps, share=share_stats) return result def _generate_stats(self, host_state, filter_properties): """Generates statistics from host and share data.""" filter_function = None if ('filter_function' in host_state.capabilities and host_state.capabilities['filter_function'] is not None): filter_function = six.text_type( host_state.capabilities['filter_function']) stats = utils.generate_stats(host_state, filter_properties) stats['filter_function'] = filter_function return stats manila-10.0.0/manila/scheduler/filters/share_group_filters/0000775000175000017500000000000013656750362024025 5ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/filters/share_group_filters/__init__.py0000664000175000017500000000000013656750227026124 0ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/filters/share_group_filters/consistent_snapshot.py0000664000175000017500000000252713656750227030515 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.scheduler.filters import base_host class ConsistentSnapshotFilter(base_host.BaseHostFilter): """Filters hosts based on possibility to create consistent SG snapshots.""" def host_passes(self, host_state, filter_properties): """Return True if host will work with desired share group.""" cs_group_spec = filter_properties['share_group_type'].get( 'group_specs', {}).get('consistent_snapshot_support') # NOTE(vpoomaryov): if 'consistent_snapshot_support' group spec # is not set, then we assume that share group owner do not care about # it, which means any host should pass this filter. if cs_group_spec is None: return True return cs_group_spec == host_state.sg_consistent_snapshot_support manila-10.0.0/manila/scheduler/filters/base.py0000664000175000017500000000721213656750227021245 0ustar zuulzuul00000000000000# Copyright (c) 2011-2012 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
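# NOTE: Illustrative sketch only (not part of the original module). It shows
# the kind of 'filter_function' expression that DriverFilter (above) hands to
# the evaluator, together with the namespaces it can reference; the concrete
# expression and values are hypothetical.
def _example_driver_filter_function():
    # A backend could advertise a filter function such as:
    filter_function = "stats.free_capacity_gb >= share.size"
    # DriverFilter evaluates it roughly as:
    #   evaluator.evaluate(filter_function,
    #                      extra=extra_specs,        # share type extra specs
    #                      stats=host_stats,         # per-host statistics
    #                      capabilities=host_caps,   # reported capabilities
    #                      share=share_stats)        # the share being placed
    # and fails the host when the result is falsy or the expression raises.
    return filter_function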
""" Filter support """ from oslo_log import log from manila.scheduler import base_handler LOG = log.getLogger(__name__) class BaseFilter(object): """Base class for all filter classes.""" def _filter_one(self, obj, filter_properties): """Check if an object passes a filter. Return True if it passes the filter, False otherwise. Override this in a subclass. """ return True def filter_all(self, filter_obj_list, filter_properties): """Yield objects that pass the filter. Can be overridden in a subclass, if you need to base filtering decisions on all objects. Otherwise, one can just override _filter_one() to filter a single object. """ for obj in filter_obj_list: if self._filter_one(obj, filter_properties): yield obj # Set to true in a subclass if a filter only needs to be run once # for each request rather than for each instance run_filter_once_per_request = False def run_filter_for_index(self, index): """Check if filter needs to be run for the "index-th" instance. Return True if the filter needs to be run for the "index-th" instance in a request. Only need to override this if a filter needs anything other than "first only" or "all" behaviour. """ return not (self.run_filter_once_per_request and index > 0) class BaseFilterHandler(base_handler.BaseHandler): """Base class to handle loading filter classes. This class should be subclassed where one needs to use filters. """ def get_filtered_objects(self, filter_classes, objs, filter_properties, index=0): """Get objects after filter :param filter_classes: filters that will be used to filter the objects :param objs: objects that will be filtered :param filter_properties: client filter properties :param index: This value needs to be increased in the caller function of get_filtered_objects when handling each resource. """ list_objs = list(objs) LOG.debug("Starting with %d host(s)", len(list_objs)) for filter_cls in filter_classes: cls_name = filter_cls.__name__ filter_class = filter_cls() if filter_class.run_filter_for_index(index): objs = filter_class.filter_all(list_objs, filter_properties) if objs is None: LOG.debug("Filter %(cls_name)s says to stop filtering", {'cls_name': cls_name}) return (None, cls_name) list_objs = list(objs) msg = ("Filter %(cls_name)s returned %(obj_len)d host(s)" % {'cls_name': cls_name, 'obj_len': len(list_objs)}) if not list_objs: LOG.info(msg) break LOG.debug(msg) return (list_objs, cls_name) manila-10.0.0/manila/scheduler/filters/capabilities.py0000664000175000017500000000372513656750227022771 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log from manila.scheduler.filters import base_host from manila.scheduler import utils LOG = log.getLogger(__name__) class CapabilitiesFilter(base_host.BaseHostFilter): """HostFilter to work with resource (instance & volume) type records.""" def _satisfies_extra_specs(self, capabilities, resource_type): """Compare capabilities against extra specs. 
Check that the capabilities provided by the services satisfy the extra specs associated with the resource type. """ extra_specs = resource_type.get('extra_specs', []) if not extra_specs: return True return utils.capabilities_satisfied(capabilities, extra_specs) def host_passes(self, host_state, filter_properties): """Return a list of hosts that can create resource_type.""" # Note(zhiteng) Currently only Cinder and Nova are using # this filter, so the resource type is either instance or # volume. resource_type = filter_properties.get('resource_type') if not self._satisfies_extra_specs(host_state.capabilities, resource_type): LOG.debug("%(host_state)s fails resource_type extra_specs " "requirements", {'host_state': host_state}) return False return True manila-10.0.0/manila/scheduler/filters/availability_zone.py0000664000175000017500000000312713656750227024041 0ustar zuulzuul00000000000000# Copyright (c) 2011-2012 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from manila.scheduler.filters import base_host class AvailabilityZoneFilter(base_host.BaseHostFilter): """Filters Hosts by availability zone.""" # Availability zones do not change within a request run_filter_once_per_request = True def host_passes(self, host_state, filter_properties): spec = filter_properties.get('request_spec', {}) props = spec.get('resource_properties', {}) request_az_id = props.get('availability_zone_id', spec.get('availability_zone_id')) request_azs = spec.get('availability_zones') host_az_id = host_state.service['availability_zone_id'] host_az = host_state.service['availability_zone']['name'] host_satisfied = True if request_az_id is not None: host_satisfied = request_az_id == host_az_id if request_azs: host_satisfied = host_satisfied and host_az in request_azs return host_satisfied manila-10.0.0/manila/scheduler/filters/capacity.py0000664000175000017500000001277413656750227022141 0ustar zuulzuul00000000000000# Copyright (c) 2012 Intel # Copyright (c) 2012 OpenStack, LLC. # Copyright (c) 2015 EMC Corporation # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
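# NOTE: Illustrative sketch only (not part of the original module). It traces
# the decision made by AvailabilityZoneFilter (above) for one hypothetical
# request: the host must match the requested AZ id (if any) and appear in the
# requested AZ name list (if any).
def _example_availability_zone_check():
    request_az_id = 'az-id-1'      # from resource_properties / request_spec
    request_azs = ['zone-east']    # from request_spec['availability_zones']
    host_az_id = 'az-id-1'         # host_state.service['availability_zone_id']
    host_az = 'zone-east'          # host_state.service['availability_zone']['name']

    host_satisfied = True
    if request_az_id is not None:
        host_satisfied = request_az_id == host_az_id
    if request_azs:
        host_satisfied = host_satisfied and host_az in request_azs
    return host_satisfied  # True in this sketch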
import math from oslo_log import log from manila.scheduler.filters import base_host from manila.scheduler import utils LOG = log.getLogger(__name__) class CapacityFilter(base_host.BaseHostFilter): """CapacityFilter filters based on share host's capacity utilization.""" def host_passes(self, host_state, filter_properties): """Return True if host has sufficient capacity.""" share_size = filter_properties.get('size', 0) if host_state.free_capacity_gb is None: # Fail Safe LOG.error("Free capacity not set: " "share node info collection broken.") return False free_space = host_state.free_capacity_gb total_space = host_state.total_capacity_gb reserved = float(host_state.reserved_percentage) / 100 if free_space == 'unknown': # NOTE(zhiteng) for those back-ends cannot report actual # available capacity, we assume it is able to serve the # request. Even if it was not, the retry mechanism is # able to handle the failure by rescheduling return True elif total_space == 'unknown': # NOTE(xyang): If total_space is 'unknown' and # reserved is 0, we assume the back-ends can serve the request. # If total_space is 'unknown' and reserved # is not 0, we cannot calculate the reserved space. # float(total_space) will throw an exception. total*reserved # also won't work. So the back-ends cannot serve the request. return reserved == 0 and share_size <= free_space total = float(total_space) if total <= 0: LOG.warning("Insufficient free space for share creation. " "Total capacity is %(total).2f on host %(host)s.", {"total": total, "host": host_state.host}) return False # NOTE(xyang): Calculate how much free space is left after taking # into account the reserved space. free = math.floor(free_space - total * reserved) msg_args = {"host": host_state.host, "requested": share_size, "available": free} LOG.debug("Space information for share creation " "on host %(host)s (requested / avail): " "%(requested)s/%(available)s", msg_args) share_type = filter_properties.get('share_type', {}) use_thin_logic = utils.use_thin_logic(share_type) thin_provisioning = utils.thin_provisioning( host_state.thin_provisioning) # NOTE(xyang): Only evaluate using max_over_subscription_ratio # if use_thin_logic and thin_provisioning are True. Check if the # ratio of provisioned capacity over total capacity would exceed # subscription ratio. # If max_over_subscription_ratio = 1, the provisioned_ratio # should still be limited by the max_over_subscription_ratio; # otherwise, it could result in infinite provisioning. if (use_thin_logic and thin_provisioning and host_state.max_over_subscription_ratio >= 1): provisioned_ratio = ((host_state.provisioned_capacity_gb + share_size) / total) if provisioned_ratio > host_state.max_over_subscription_ratio: LOG.warning( "Insufficient free space for thin provisioning. " "The ratio of provisioned capacity over total capacity " "%(provisioned_ratio).2f would exceed the maximum over " "subscription ratio %(oversub_ratio).2f on host " "%(host)s.", {"provisioned_ratio": provisioned_ratio, "oversub_ratio": host_state.max_over_subscription_ratio, "host": host_state.host}) return False else: # NOTE(xyang): Adjust free_virtual calculation based on # free and max_over_subscription_ratio. adjusted_free_virtual = ( free * host_state.max_over_subscription_ratio) return adjusted_free_virtual >= share_size elif (use_thin_logic and thin_provisioning and host_state.max_over_subscription_ratio < 1): LOG.error("Invalid max_over_subscription_ratio: %(ratio)s. 
" "Valid value should be >= 1.", {"ratio": host_state.max_over_subscription_ratio}) return False if free < share_size: LOG.warning("Insufficient free space for share creation " "on host %(host)s (requested / avail): " "%(requested)s/%(available)s", msg_args) return False return True manila-10.0.0/manila/scheduler/host_manager.py0000664000175000017500000006674213656750227021347 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack, LLC. # Copyright (c) 2015 Rushil Chugh # Copyright (c) 2015 Clinton Knight # Copyright (c) 2015 EMC Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Manage hosts in the current zone. """ import re try: from UserDict import IterableUserDict # noqa except ImportError: from collections import UserDict as IterableUserDict # noqa from oslo_config import cfg from oslo_log import log from oslo_utils import timeutils import six from manila import db from manila import exception from manila.scheduler.filters import base_host as base_host_filter from manila.scheduler import utils as scheduler_utils from manila.scheduler.weighers import base_host as base_host_weigher from manila.share import utils as share_utils from manila import utils host_manager_opts = [ cfg.ListOpt('scheduler_default_filters', default=[ 'AvailabilityZoneFilter', 'CapacityFilter', 'CapabilitiesFilter', 'DriverFilter', 'ShareReplicationFilter', 'CreateFromSnapshotFilter', ], help='Which filter class names to use for filtering hosts ' 'when not specified in the request.'), cfg.ListOpt('scheduler_default_weighers', default=[ 'CapacityWeigher', 'GoodnessWeigher', 'HostAffinityWeigher', ], help='Which weigher class names to use for weighing hosts.'), cfg.ListOpt( 'scheduler_default_share_group_filters', default=[ 'AvailabilityZoneFilter', 'ConsistentSnapshotFilter', ], help='Which filter class names to use for filtering hosts ' 'creating share group when not specified in the request.'), ] CONF = cfg.CONF CONF.register_opts(host_manager_opts) CONF.import_opt('max_over_subscription_ratio', 'manila.share.driver') LOG = log.getLogger(__name__) class ReadOnlyDict(IterableUserDict): """A read-only dict.""" def __init__(self, source=None): self.data = {} self.update(source) def __setitem__(self, key, item): raise TypeError def __delitem__(self, key): raise TypeError def clear(self): raise TypeError def pop(self, key, *args): raise TypeError def popitem(self): raise TypeError def update(self, source=None): if source is None: return elif isinstance(source, IterableUserDict): self.data = source.data elif isinstance(source, type({})): self.data = source else: raise TypeError class HostState(object): """Mutable and immutable information tracked for a host.""" def __init__(self, host, capabilities=None, service=None): self.capabilities = None self.service = None self.host = host self.update_capabilities(capabilities, service) self.share_backend_name = None self.vendor_name = None self.driver_version = 0 self.storage_protocol = None self.qos = False # Mutable available resources. 
# These will change as resources are virtually "consumed". self.total_capacity_gb = 0 self.free_capacity_gb = None self.reserved_percentage = 0 self.allocated_capacity_gb = 0 # NOTE(xyang): The apparent allocated space indicating how much # capacity has been provisioned. This could be the sum of sizes # of all shares on a backend, which could be greater than or # equal to the allocated_capacity_gb. self.provisioned_capacity_gb = 0 self.max_over_subscription_ratio = 1.0 self.thin_provisioning = False self.driver_handles_share_servers = False self.snapshot_support = True self.create_share_from_snapshot_support = True self.revert_to_snapshot_support = False self.mount_snapshot_support = False self.dedupe = False self.compression = False self.replication_type = None self.replication_domain = None self.ipv4_support = None self.ipv6_support = None # PoolState for all pools self.pools = {} self.updated = None # Share Group capabilities self.sg_consistent_snapshot_support = None def update_capabilities(self, capabilities=None, service=None): # Read-only capability dicts if capabilities is None: capabilities = {} self.capabilities = ReadOnlyDict(capabilities) if service is None: service = {} self.service = ReadOnlyDict(service) def update_from_share_capability( self, capability, service=None, context=None): """Update information about a host from its share_node info. 'capability' is the status info reported by share backend, a typical capability looks like this:: capability = { 'share_backend_name': 'Local NFS', #\ 'vendor_name': 'OpenStack', # backend level 'driver_version': '1.0', # mandatory/fixed 'storage_protocol': 'NFS', #/ stats&capabilities 'active_shares': 10, #\ 'IOPS_provisioned': 30000, # optional custom 'fancy_capability_1': 'eat', # stats & capabilities 'fancy_capability_2': 'drink', #/ 'pools':[ { 'pool_name': '1st pool', #\ 'total_capacity_gb': 500, # mandatory stats 'free_capacity_gb': 230, # for pools 'allocated_capacity_gb': 270, # | 'qos': 'False', # | 'reserved_percentage': 0, #/ 'dying_disks': 100, #\ 'super_hero_1': 'spider-man', # optional custom 'super_hero_2': 'flash', # stats & 'super_hero_3': 'neoncat', # capabilities 'super_hero_4': 'green lantern', #/ }, { 'pool_name': '2nd pool', 'total_capacity_gb': 1024, 'free_capacity_gb': 1024, 'allocated_capacity_gb': 0, 'qos': 'False', 'reserved_percentage': 0, 'dying_disks': 200, 'super_hero_1': 'superman', 'super_hero_2': 'Hulk', }] } """ self.update_capabilities(capability, service) if capability: if self.updated and self.updated > capability['timestamp']: return # Update backend level info self.update_backend(capability) # Update pool level info self.update_pools(capability, service, context=context) def update_pools(self, capability, service, context=None): """Update storage pools information from backend reported info.""" if not capability: return pools = capability.get('pools', None) active_pools = set() if pools and isinstance(pools, list): # Update all pools stats according to information from list # of pools in share capacity for pool_cap in pools: pool_name = pool_cap['pool_name'] self._append_backend_info(pool_cap) cur_pool = self.pools.get(pool_name, None) if not cur_pool: # Add new pool cur_pool = PoolState(self.host, pool_cap, pool_name) self.pools[pool_name] = cur_pool cur_pool.update_from_share_capability( pool_cap, service, context=context) active_pools.add(pool_name) elif pools is None: # To handle legacy driver that doesn't report pool # information in the capability, we have to prepare # a pool from backend 
level info, or to update the one # we created in self.pools. pool_name = self.share_backend_name if pool_name is None: # To get DEFAULT_POOL_NAME pool_name = share_utils.extract_host(self.host, 'pool', True) if len(self.pools) == 0: # No pool was there single_pool = PoolState(self.host, capability, pool_name) self._append_backend_info(capability) self.pools[pool_name] = single_pool else: # This is a update from legacy driver try: single_pool = self.pools[pool_name] except KeyError: single_pool = PoolState(self.host, capability, pool_name) self._append_backend_info(capability) self.pools[pool_name] = single_pool single_pool.update_from_share_capability( capability, service, context=context) active_pools.add(pool_name) # Remove non-active pools from self.pools nonactive_pools = set(self.pools.keys()) - active_pools for pool in nonactive_pools: LOG.debug("Removing non-active pool %(pool)s @ %(host)s " "from scheduler cache.", {'pool': pool, 'host': self.host}) del self.pools[pool] def _append_backend_info(self, pool_cap): # Fill backend level info to pool if needed. if not pool_cap.get('share_backend_name'): pool_cap['share_backend_name'] = self.share_backend_name if not pool_cap.get('storage_protocol'): pool_cap['storage_protocol'] = self.storage_protocol if not pool_cap.get('vendor_name'): pool_cap['vendor_name'] = self.vendor_name if not pool_cap.get('driver_version'): pool_cap['driver_version'] = self.driver_version if not pool_cap.get('timestamp'): pool_cap['timestamp'] = self.updated if not pool_cap.get('storage_protocol'): pool_cap['storage_protocol'] = self.storage_protocol if 'driver_handles_share_servers' not in pool_cap: pool_cap['driver_handles_share_servers'] = ( self.driver_handles_share_servers) if 'snapshot_support' not in pool_cap: pool_cap['snapshot_support'] = self.snapshot_support if 'create_share_from_snapshot_support' not in pool_cap: pool_cap['create_share_from_snapshot_support'] = ( self.create_share_from_snapshot_support) if 'revert_to_snapshot_support' not in pool_cap: pool_cap['revert_to_snapshot_support'] = ( self.revert_to_snapshot_support) if 'mount_snapshot_support' not in pool_cap: pool_cap['mount_snapshot_support'] = self.mount_snapshot_support if 'dedupe' not in pool_cap: pool_cap['dedupe'] = self.dedupe if 'compression' not in pool_cap: pool_cap['compression'] = self.compression if not pool_cap.get('replication_type'): pool_cap['replication_type'] = self.replication_type if not pool_cap.get('replication_domain'): pool_cap['replication_domain'] = self.replication_domain if 'sg_consistent_snapshot_support' not in pool_cap: pool_cap['sg_consistent_snapshot_support'] = ( self.sg_consistent_snapshot_support) if self.ipv4_support is not None: pool_cap['ipv4_support'] = self.ipv4_support if self.ipv6_support is not None: pool_cap['ipv6_support'] = self.ipv6_support def update_backend(self, capability): self.share_backend_name = capability.get('share_backend_name') self.vendor_name = capability.get('vendor_name') self.driver_version = capability.get('driver_version') self.storage_protocol = capability.get('storage_protocol') self.driver_handles_share_servers = capability.get( 'driver_handles_share_servers') self.snapshot_support = capability.get('snapshot_support') self.create_share_from_snapshot_support = capability.get( 'create_share_from_snapshot_support') self.revert_to_snapshot_support = capability.get( 'revert_to_snapshot_support', False) self.mount_snapshot_support = capability.get( 'mount_snapshot_support', False) self.updated = capability['timestamp'] 
self.replication_type = capability.get('replication_type') self.replication_domain = capability.get('replication_domain') self.sg_consistent_snapshot_support = capability.get( 'share_group_stats', {}).get('consistent_snapshot_support') if capability.get('ipv4_support') is not None: self.ipv4_support = capability['ipv4_support'] if capability.get('ipv6_support') is not None: self.ipv6_support = capability['ipv6_support'] def consume_from_share(self, share): """Incrementally update host state from an share.""" if self.provisioned_capacity_gb is not None: self.provisioned_capacity_gb += share['size'] self.allocated_capacity_gb += share['size'] if (isinstance(self.free_capacity_gb, six.string_types) and self.free_capacity_gb != 'unknown'): raise exception.InvalidCapacity( name='free_capacity_gb', value=six.text_type(self.free_capacity_gb) ) if self.free_capacity_gb != 'unknown': self.free_capacity_gb -= share['size'] self.updated = timeutils.utcnow() def __repr__(self): return ("host: '%(host)s', free_capacity_gb: %(free)s, " "pools: %(pools)s" % {'host': self.host, 'free': self.free_capacity_gb, 'pools': self.pools} ) class PoolState(HostState): def __init__(self, host, capabilities, pool_name): new_host = share_utils.append_host(host, pool_name) super(PoolState, self).__init__(new_host, capabilities) self.pool_name = pool_name # No pools in pool self.pools = None def _estimate_provisioned_capacity(self, host_name, context=None): """Estimate provisioned capacity from share sizes on backend.""" provisioned_capacity = 0 instances = db.share_instances_get_all_by_host( context, host_name, with_share_data=True) for instance in instances: # Size of share instance that's still being created, will be None. provisioned_capacity += instance['size'] or 0 return provisioned_capacity def update_from_share_capability( self, capability, service=None, context=None): """Update information about a pool from its share_node info.""" self.update_capabilities(capability, service) if capability: if self.updated and self.updated > capability['timestamp']: return self.update_backend(capability) self.total_capacity_gb = capability['total_capacity_gb'] self.free_capacity_gb = capability['free_capacity_gb'] self.allocated_capacity_gb = capability.get( 'allocated_capacity_gb', 0) self.qos = capability.get('qos', False) self.reserved_percentage = capability['reserved_percentage'] self.thin_provisioning = scheduler_utils.thin_provisioning( capability.get('thin_provisioning', False)) # NOTE(xyang): provisioned_capacity_gb is the apparent total # capacity of all the shares created on a backend, which is # greater than or equal to allocated_capacity_gb, which is the # apparent total capacity of all the shares created on a backend # in Manila. # NOTE(nidhimittalhada): If 'provisioned_capacity_gb' is not set, # then calculating 'provisioned_capacity_gb' from share sizes # on host, as per information available in manila database. 
# NOTE(jose-castro-leon): Only calculate provisioned_capacity_gb # on thin provisioned pools self.provisioned_capacity_gb = capability.get( 'provisioned_capacity_gb') if self.thin_provisioning and self.provisioned_capacity_gb is None: self.provisioned_capacity_gb = ( self._estimate_provisioned_capacity(self.host, context=context)) self.max_over_subscription_ratio = capability.get( 'max_over_subscription_ratio', CONF.max_over_subscription_ratio) self.dedupe = capability.get( 'dedupe', False) self.compression = capability.get( 'compression', False) self.replication_type = capability.get( 'replication_type', self.replication_type) self.replication_domain = capability.get( 'replication_domain') self.sg_consistent_snapshot_support = capability.get( 'sg_consistent_snapshot_support') def update_pools(self, capability): # Do nothing, since we don't have pools within pool, yet pass class HostManager(object): """Base HostManager class.""" host_state_cls = HostState def __init__(self): self.service_states = {} # { : {: {cap k : v}}} self.host_state_map = {} self.filter_handler = base_host_filter.HostFilterHandler( 'manila.scheduler.filters') self.filter_classes = self.filter_handler.get_all_classes() self.weight_handler = base_host_weigher.HostWeightHandler( 'manila.scheduler.weighers') self.weight_classes = self.weight_handler.get_all_classes() def _choose_host_filters(self, filter_cls_names): """Choose acceptable filters. Since the caller may specify which filters to use we need to have an authoritative list of what is permissible. This function checks the filter names against a predefined set of acceptable filters. """ if filter_cls_names is None: filter_cls_names = CONF.scheduler_default_filters if not isinstance(filter_cls_names, (list, tuple)): filter_cls_names = [filter_cls_names] good_filters = [] bad_filters = [] for filter_name in filter_cls_names: found_class = False for cls in self.filter_classes: if cls.__name__ == filter_name: found_class = True good_filters.append(cls) break if not found_class: bad_filters.append(filter_name) if bad_filters: msg = ", ".join(bad_filters) raise exception.SchedulerHostFilterNotFound(filter_name=msg) return good_filters def _choose_host_weighers(self, weight_cls_names): """Choose acceptable weighers. Since the caller may specify which weighers to use, we need to have an authoritative list of what is permissible. This function checks the weigher names against a predefined set of acceptable weighers. 
""" if weight_cls_names is None: weight_cls_names = CONF.scheduler_default_weighers if not isinstance(weight_cls_names, (list, tuple)): weight_cls_names = [weight_cls_names] good_weighers = [] bad_weighers = [] for weigher_name in weight_cls_names: found_class = False for cls in self.weight_classes: if cls.__name__ == weigher_name: good_weighers.append(cls) found_class = True break if not found_class: bad_weighers.append(weigher_name) if bad_weighers: msg = ", ".join(bad_weighers) raise exception.SchedulerHostWeigherNotFound(weigher_name=msg) return good_weighers def get_filtered_hosts(self, hosts, filter_properties, filter_class_names=None): """Filter hosts and return only ones passing all filters.""" filter_classes = self._choose_host_filters(filter_class_names) return self.filter_handler.get_filtered_objects(filter_classes, hosts, filter_properties) def get_weighed_hosts(self, hosts, weight_properties, weigher_class_names=None): """Weigh the hosts.""" weigher_classes = self._choose_host_weighers(weigher_class_names) weight_properties['server_pools_mapping'] = {} for backend, info in self.service_states.items(): weight_properties['server_pools_mapping'].update( info.get('server_pools_mapping', {})) return self.weight_handler.get_weighed_objects(weigher_classes, hosts, weight_properties) def update_service_capabilities(self, service_name, host, capabilities): """Update the per-service capabilities based on this notification.""" if service_name not in ('share',): LOG.debug('Ignoring %(service_name)s service update ' 'from %(host)s', {'service_name': service_name, 'host': host}) return # Copy the capabilities, so we don't modify the original dict capability_copy = dict(capabilities) capability_copy["timestamp"] = timeutils.utcnow() # Reported time self.service_states[host] = capability_copy LOG.debug("Received %(service_name)s service update from " "%(host)s: %(cap)s", {'service_name': service_name, 'host': host, 'cap': capabilities}) def _update_host_state_map(self, context): # Get resource usage across the available share nodes: topic = CONF.share_topic share_services = db.service_get_all_by_topic(context, topic) active_hosts = set() for service in share_services: host = service['host'] # Warn about down services and remove them from host_state_map if not utils.service_is_up(service) or service['disabled']: LOG.warning("Share service is down. (host: %s).", host) continue # Create and register host_state if not in host_state_map capabilities = self.service_states.get(host, None) host_state = self.host_state_map.get(host) if not host_state: host_state = self.host_state_cls( host, capabilities=capabilities, service=dict(service.items())) self.host_state_map[host] = host_state # Update capabilities and attributes in host_state host_state.update_from_share_capability( capabilities, service=dict(service.items()), context=context) active_hosts.add(host) # remove non-active hosts from host_state_map nonactive_hosts = set(self.host_state_map.keys()) - active_hosts for host in nonactive_hosts: LOG.info("Removing non-active host: %(host)s from " "scheduler cache.", {'host': host}) self.host_state_map.pop(host, None) def get_all_host_states_share(self, context): """Returns a dict of all the hosts the HostManager knows about. Each of the consumable resources in HostState are populated with capabilities scheduler received from RPC. 
For example: {'192.168.1.100': HostState(), ...} """ self._update_host_state_map(context) # Build a pool_state map and return that map instead of host_state_map all_pools = {} for host, state in self.host_state_map.items(): for key in state.pools: pool = state.pools[key] # Use host.pool_name to make sure key is unique pool_key = '.'.join([host, pool.pool_name]) all_pools[pool_key] = pool return all_pools.values() def get_pools(self, context, filters=None, cached=False): """Returns a dict of all pools on all hosts HostManager knows about.""" if not cached or not self.host_state_map: self._update_host_state_map(context) all_pools = [] for host, host_state in self.host_state_map.items(): for pool in host_state.pools.values(): fully_qualified_pool_name = share_utils.append_host( host, pool.pool_name) host_name = share_utils.extract_host( fully_qualified_pool_name, level='host') backend_name = (share_utils.extract_host( fully_qualified_pool_name, level='backend').split('@')[1] if '@' in fully_qualified_pool_name else None) pool_name = share_utils.extract_host( fully_qualified_pool_name, level='pool') new_pool = { 'name': fully_qualified_pool_name, 'host': host_name, 'backend': backend_name, 'pool': pool_name, 'capabilities': pool.capabilities, } if self._passes_filters(new_pool, filters): all_pools.append(new_pool) return all_pools def _passes_filters(self, dict_to_check, filter_dict): """Applies a set of regex filters to a dictionary. If no filter keys are supplied, the data passes unfiltered and the method returns True. Otherwise, each key in the filter (filter_dict) must be present in the data (dict_to_check) and the filter values are applied as regex expressions to the data values. If any of the filter values fail to match their corresponding data values, the method returns False. But if all filters match, the method returns True. """ if not filter_dict: return True for filter_key, filter_value in filter_dict.items(): if filter_key not in dict_to_check: return False if filter_key == 'capabilities': if not scheduler_utils.capabilities_satisfied( dict_to_check.get(filter_key), filter_value): return False elif not re.match(filter_value, dict_to_check.get(filter_key)): return False return True manila-10.0.0/manila/scheduler/weighers/0000775000175000017500000000000013656750362020124 5ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/weighers/__init__.py0000664000175000017500000000000013656750227022223 0ustar zuulzuul00000000000000manila-10.0.0/manila/scheduler/weighers/base_host.py0000664000175000017500000000241213656750227022444 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
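# --- Illustrative sketch (not part of the original manila source) ---
# _passes_filters() above treats every filter value (except the special
# 'capabilities' key) as a regular expression matched from the start of the
# corresponding pool field, so a caller can narrow get_pools() output by
# host, backend or pool name. A self-contained version of that matching,
# using made-up pool entries:
import re


def passes(pool, filters):
    for key, pattern in (filters or {}).items():
        if key not in pool or not re.match(pattern, pool[key]):
            return False
    return True


pools = [
    {'name': 'hostA@lvm#pool1', 'host': 'hostA', 'backend': 'lvm',
     'pool': 'pool1'},
    {'name': 'hostB@generic#pool2', 'host': 'hostB', 'backend': 'generic',
     'pool': 'pool2'},
]
# [p['name'] for p in pools if passes(p, {'host': 'hostA'})]
#     -> ['hostA@lvm#pool1']
# [p['name'] for p in pools if passes(p, {'pool': r'pool\d+'})]
#     -> matches both pools
# --- end of sketch ---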
""" Scheduler host weighers """ from manila.scheduler.weighers import base class WeighedHost(base.WeighedObject): def to_dict(self): return { 'weight': self.weight, 'host': self.obj.host, } def __repr__(self): return ("WeighedHost [host: %s, weight: %s]" % (self.obj.host, self.weight)) class BaseHostWeigher(base.BaseWeigher): """Base class for host weighers.""" pass class HostWeightHandler(base.BaseWeightHandler): object_class = WeighedHost def __init__(self, namespace): super(HostWeightHandler, self).__init__(BaseHostWeigher, namespace) manila-10.0.0/manila/scheduler/weighers/goodness.py0000664000175000017500000001044113656750227022317 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import six from manila.scheduler.evaluator import evaluator from manila.scheduler import utils from manila.scheduler.weighers import base_host LOG = logging.getLogger(__name__) class GoodnessWeigher(base_host.BaseHostWeigher): """Goodness Weigher. Assign weights based on a host's goodness function. Goodness rating is the following: .. code-block:: none 0 -- host is a poor choice . . 50 -- host is a good choice . . 100 -- host is a perfect choice """ def _weigh_object(self, host_state, weight_properties): """Determine host's goodness rating based on a goodness_function.""" stats = self._generate_stats(host_state, weight_properties) LOG.debug("Checking host '%s'", stats['host_stats']['host']) result = self._check_goodness_function(stats) LOG.debug("Goodness: %s", result) LOG.debug("Done checking host '%s'", stats['host_stats']['host']) return result def _check_goodness_function(self, stats): """Gets a host's goodness rating based on its goodness function.""" goodness_rating = 0 if stats['goodness_function'] is None: LOG.warning("Goodness function not set :: defaulting to " "minimal goodness rating of 0.") else: try: goodness_result = self._run_evaluator( stats['goodness_function'], stats) except Exception as ex: LOG.warning("Error in goodness_function function " "'%(function)s' : '%(error)s' :: Defaulting " "to a goodness of 0.", {'function': stats['goodness_function'], 'error': ex, }) return goodness_rating if type(goodness_result) is bool: if goodness_result: goodness_rating = 100 elif goodness_result < 0 or goodness_result > 100: LOG.warning("Invalid goodness result. Result must be " "between 0 and 100. Result generated: '%s' " ":: Defaulting to a goodness of 0.", goodness_result) else: goodness_rating = goodness_result msg = "Goodness function result for host %(host)s: %(result)s." 
args = {'host': stats['host_stats']['host'], 'result': six.text_type(goodness_rating)} LOG.info(msg, args) return goodness_rating def _run_evaluator(self, func, stats): """Evaluates a given function using the provided available stats.""" host_stats = stats['host_stats'] host_caps = stats['host_caps'] extra_specs = stats['extra_specs'] share_stats = stats['share_stats'] result = evaluator.evaluate( func, extra=extra_specs, stats=host_stats, capabilities=host_caps, share=share_stats) return result def _generate_stats(self, host_state, weight_properties): """Generates statistics from host and share data.""" goodness_function = None if ('goodness_function' in host_state.capabilities and host_state.capabilities['goodness_function'] is not None): goodness_function = six.text_type( host_state.capabilities['goodness_function']) stats = utils.generate_stats(host_state, weight_properties) stats['goodness_function'] = goodness_function return stats manila-10.0.0/manila/scheduler/weighers/pool.py0000664000175000017500000000364313656750227021455 0ustar zuulzuul00000000000000# Copyright 2015 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from manila import context from manila.db import api as db_api from manila.scheduler.weighers import base_host from manila.share import utils pool_weight_opts = [ cfg.FloatOpt('pool_weight_multiplier', default=1.0, help='Multiplier used for weighing pools which have ' 'existing share servers. Negative numbers mean to spread' ' vs stack.'), ] CONF = cfg.CONF CONF.register_opts(pool_weight_opts) class PoolWeigher(base_host.BaseHostWeigher): def weight_multiplier(self): """Override the weight multiplier.""" return CONF.pool_weight_multiplier def _weigh_object(self, host_state, weight_properties): """Pools with existing share server win.""" pool_mapping = weight_properties.get('server_pools_mapping', {}) if not pool_mapping: return 0 ctx = context.get_admin_context() host = utils.extract_host(host_state.host, 'backend') servers = db_api.share_server_get_all_by_host(ctx, host) pool = utils.extract_host(host_state.host, 'pool') for server in servers: if any(pool == p['pool_name'] for p in pool_mapping.get( server['id'], [])): return 1 return 0 manila-10.0.0/manila/scheduler/weighers/host_affinity.py0000664000175000017500000000560613656750227023353 0ustar zuulzuul00000000000000# Copyright 2019 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
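# --- Illustrative sketch (not part of the original manila source) ---
# _check_goodness_function() above reduces whatever the back end's
# goodness_function expression evaluates to into a 0-100 rating: booleans
# map to 100/0, evaluation errors and values outside [0, 100] fall back to
# 0, and any other number is used as-is. The same clamping in isolation:


def clamp_goodness(result):
    if isinstance(result, bool):
        return 100 if result else 0
    if result < 0 or result > 100:
        return 0  # invalid rating; treated as a poor choice
    return result


# clamp_goodness(True) -> 100
# clamp_goodness(75)   -> 75
# clamp_goodness(140)  -> 0
# --- end of sketch ---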
from manila import context from manila.db import api as db_api from manila.scheduler.weighers import base_host from manila.share import utils as share_utils class HostAffinityWeigher(base_host.BaseHostWeigher): def _weigh_object(self, obj, weight_properties): """Weigh hosts based on their proximity to the source's share pool. If no snapshot_id was provided will return 0, otherwise, if source and destination hosts are located on: 1. same back ends and pools: host is a perfect choice (100) 2. same back ends and different pools: host is a very good choice (75) 3. different back ends with the same AZ: host is a good choice (50) 4. different back ends and AZs: host isn't so good choice (25) """ ctx = context.get_admin_context() request_spec = weight_properties.get('request_spec') snapshot_id = request_spec.get('snapshot_id') snapshot_host = request_spec.get('snapshot_host') if None in [snapshot_id, snapshot_host]: # NOTE(silvacarlose): if the request does not contain a snapshot_id # or a snapshot_host, the user is not creating a share from a # snapshot and we don't need to weigh the host. return 0 snapshot_ref = db_api.share_snapshot_get(ctx, snapshot_id) # Source host info: pool, backend and availability zone src_pool = share_utils.extract_host(snapshot_host, 'pool') src_backend = share_utils.extract_host( request_spec.get('snapshot_host'), 'backend') src_az = snapshot_ref['share']['availability_zone'] # Destination host info: pool, backend and availability zone dst_pool = share_utils.extract_host(obj.host, 'pool') dst_backend = share_utils.extract_host(obj.host, 'backend') # NOTE(dviroel): All hosts were already filtered by the availability # zone parameter. dst_az = None if weight_properties['availability_zone_id']: dst_az = db_api.availability_zone_get( ctx, weight_properties['availability_zone_id']).name if src_backend == dst_backend: return 100 if (src_pool and src_pool == dst_pool) else 75 else: return 50 if (src_az and src_az == dst_az) else 25 manila-10.0.0/manila/scheduler/weighers/base.py0000664000175000017500000001074113656750227021413 0ustar zuulzuul00000000000000# Copyright (c) 2011-2012 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Pluggable Weighing support """ import abc import six from manila.scheduler import base_handler def normalize(weight_list, minval=None, maxval=None): """Normalize the values in a list between 0 and 1.0. The normalization is made regarding the lower and upper values present in weight_list. If the minval and/or maxval parameters are set, these values will be used instead of the minimum and maximum from the list. If all the values are equal, they are normalized to 0. 
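# --- Illustrative sketch (not part of the original manila source) ---
# HostAffinityWeigher above only takes effect when a share is created from a
# snapshot: destinations on the snapshot's back end (ideally the same pool)
# are preferred, then destinations in the same availability zone. Its
# decision table in isolation, with made-up host values:


def affinity(src_backend, src_pool, src_az, dst_backend, dst_pool, dst_az):
    if src_backend == dst_backend:
        return 100 if (src_pool and src_pool == dst_pool) else 75
    return 50 if (src_az and src_az == dst_az) else 25


# affinity('hostA@alpha', 'p1', 'az1', 'hostA@alpha', 'p1', 'az1') -> 100
# affinity('hostA@alpha', 'p1', 'az1', 'hostA@alpha', 'p2', 'az1') -> 75
# affinity('hostA@alpha', 'p1', 'az1', 'hostB@beta', 'p1', 'az1')  -> 50
# affinity('hostA@alpha', 'p1', 'az1', 'hostB@beta', 'p1', 'az2')  -> 25
# --- end of sketch ---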
""" if not weight_list: return () if maxval is None: maxval = max(weight_list) if minval is None: minval = min(weight_list) maxval = float(maxval) minval = float(minval) if minval == maxval: return [0] * len(weight_list) range_ = maxval - minval return ((i - minval) / range_ for i in weight_list) class WeighedObject(object): """Object with weight information.""" def __init__(self, obj, weight): self.obj = obj self.weight = weight def __repr__(self): return "" % (self.obj, self.weight) @six.add_metaclass(abc.ABCMeta) class BaseWeigher(object): """Base class for pluggable weighers. The attributes maxval and minval can be specified to set up the maximum and minimum values for the weighed objects. These values will then be taken into account in the normalization step, instead of taking the values from the calculated weighers. """ minval = None maxval = None def weight_multiplier(self): """How weighted this weigher should be. Override this method in a subclass, so that the returned value is read from a configuration option to permit operators specify a multiplier for the weigher. """ return 1.0 @abc.abstractmethod def _weigh_object(self, obj, weight_properties): """Override in a subclass to specify a weight for a specific object.""" def weigh_objects(self, weighed_obj_list, weight_properties): """Weigh multiple objects. Override in a subclass if you need access to all objects in order to calculate weighers. Do not modify the weight of an object here, just return a list of weighers. """ # Calculate the weighers weights = [] for obj in weighed_obj_list: weight = self._weigh_object(obj.obj, weight_properties) # Record the min and max values if they are None. If they anything # but none we assume that the weigher has set them if self.minval is None: self.minval = weight if self.maxval is None: self.maxval = weight if weight < self.minval: self.minval = weight elif weight > self.maxval: self.maxval = weight weights.append(weight) return weights class BaseWeightHandler(base_handler.BaseHandler): object_class = WeighedObject def get_weighed_objects(self, weigher_classes, obj_list, weighing_properties): """Return a sorted (descending), normalized list of WeighedObjects.""" if not obj_list: return [] weighed_objs = [self.object_class(obj, 0.0) for obj in obj_list] for weigher_cls in weigher_classes: weigher = weigher_cls() weights = weigher.weigh_objects(weighed_objs, weighing_properties) # Normalize the weighers weights = normalize(weights, minval=weigher.minval, maxval=weigher.maxval) for i, weight in enumerate(weights): obj = weighed_objs[i] obj.weight += weigher.weight_multiplier() * weight return sorted(weighed_objs, key=lambda x: x.weight, reverse=True) manila-10.0.0/manila/scheduler/weighers/capacity.py0000664000175000017500000001040113656750227022267 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack, LLC. # Copyright (c) 2015 EMC Corporation # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Capacity Weigher. Weigh hosts by their virtual or actual free capacity. 
For thin provisioning, weigh hosts by their virtual free capacity calculated by the total capacity multiplied by the max over subscription ratio and subtracting the provisioned capacity; Otherwise, weigh hosts by their actual free capacity, taking into account the reserved space. The default is to spread shares across all hosts evenly. If you prefer stacking, you can set the 'capacity_weight_multiplier' option to a negative number and the weighing has the opposite effect of the default. """ import math from oslo_config import cfg from manila.scheduler import utils from manila.scheduler.weighers import base_host capacity_weight_opts = [ cfg.FloatOpt('capacity_weight_multiplier', default=1.0, help='Multiplier used for weighing share capacity. ' 'Negative numbers mean to stack vs spread.'), ] CONF = cfg.CONF CONF.register_opts(capacity_weight_opts) class CapacityWeigher(base_host.BaseHostWeigher): def weight_multiplier(self): """Override the weight multiplier.""" return CONF.capacity_weight_multiplier def _weigh_object(self, host_state, weight_properties): """Higher weighers win. We want spreading to be the default.""" reserved = float(host_state.reserved_percentage) / 100 free_space = host_state.free_capacity_gb total_space = host_state.total_capacity_gb if 'unknown' in (total_space, free_space): # NOTE(u_glide): "unknown" capacity always sorts to the bottom if CONF.capacity_weight_multiplier > 0: free = float('-inf') else: free = float('inf') else: total = float(total_space) share_type = weight_properties.get('share_type', {}) use_thin_logic = utils.use_thin_logic(share_type) thin_provisioning = utils.thin_provisioning( host_state.thin_provisioning) if use_thin_logic and thin_provisioning: # NOTE(xyang): Calculate virtual free capacity for thin # provisioning. free = math.floor( total * host_state.max_over_subscription_ratio - host_state.provisioned_capacity_gb - total * reserved) else: # NOTE(xyang): Calculate how much free space is left after # taking into account the reserved space. free = math.floor(free_space - total * reserved) return free def weigh_objects(self, weighed_obj_list, weight_properties): weights = super(CapacityWeigher, self).weigh_objects(weighed_obj_list, weight_properties) # NOTE(u_glide): Replace -inf with (minimum - 1) and # inf with (maximum + 1) to avoid errors in # manila.scheduler.weighers.base.normalize() method if self.minval == float('-inf'): self.minval = self.maxval for val in weights: if float('-inf') < val < self.minval: self.minval = val self.minval -= 1 return [self.minval if w == float('-inf') else w for w in weights] elif self.maxval == float('inf'): self.maxval = self.minval for val in weights: if self.maxval < val < float('inf'): self.maxval = val self.maxval += 1 return [self.maxval if w == float('inf') else w for w in weights] else: return weights manila-10.0.0/manila/scheduler/manager.py0000664000175000017500000003067713656750227020310 0ustar zuulzuul00000000000000# Copyright (c) 2010 OpenStack, LLC. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
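# --- Illustrative sketch (not part of the original manila source) ---
# CapacityWeigher above rates thin-provisioned pools by *virtual* free space,
#     floor(total * max_over_subscription_ratio
#           - provisioned_capacity_gb - total * reserved)
# and everything else by actual free space minus the reserved fraction.
# A worked example with made-up numbers:
import math

total = 100.0                       # pool size in GiB
reserved = 0.10                     # reserved_percentage of 10%
max_over_subscription_ratio = 2.0
provisioned_capacity_gb = 120.0     # capacity already promised to shares
free_space = 60.0                   # space not yet physically used

thin_free = math.floor(total * max_over_subscription_ratio
                       - provisioned_capacity_gb - total * reserved)  # 70
thick_free = math.floor(free_space - total * reserved)                # 50
# The larger value wins under the default positive multiplier; a negative
# capacity_weight_multiplier inverts the preference (stacking vs spreading).
# --- end of sketch ---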
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Scheduler Service """ from oslo_config import cfg from oslo_log import log from oslo_service import periodic_task from oslo_utils import excutils from oslo_utils import importutils from manila.common import constants from manila import context from manila import coordination from manila import db from manila import exception from manila import manager from manila.message import api as message_api from manila.message import message_field from manila import quota from manila import rpc from manila.share import rpcapi as share_rpcapi LOG = log.getLogger(__name__) scheduler_driver_opt = cfg.StrOpt('scheduler_driver', default='manila.scheduler.drivers.' 'filter.FilterScheduler', help='Default scheduler driver to use.') CONF = cfg.CONF CONF.register_opt(scheduler_driver_opt) # Drivers that need to change module paths or class names can add their # old/new path here to maintain backward compatibility. MAPPING = { 'manila.scheduler.chance.ChanceScheduler': 'manila.scheduler.drivers.chance.ChanceScheduler', 'manila.scheduler.filter_scheduler.FilterScheduler': 'manila.scheduler.drivers.filter.FilterScheduler', 'manila.scheduler.simple.SimpleScheduler': 'manila.scheduler.drivers.simple.SimpleScheduler', } class SchedulerManager(manager.Manager): """Chooses a host to create shares.""" RPC_API_VERSION = '1.9' def __init__(self, scheduler_driver=None, service_name=None, *args, **kwargs): if not scheduler_driver: scheduler_driver = CONF.scheduler_driver if scheduler_driver in MAPPING: msg_args = { 'old': scheduler_driver, 'new': MAPPING[scheduler_driver], } LOG.warning("Scheduler driver path %(old)s is deprecated, " "update your configuration to the new path " "%(new)s", msg_args) scheduler_driver = MAPPING[scheduler_driver] self.driver = importutils.import_object(scheduler_driver) self.message_api = message_api.API() super(SchedulerManager, self).__init__(*args, **kwargs) def init_host(self): ctxt = context.get_admin_context() self.request_service_capabilities(ctxt) def get_host_list(self, context): """Get a list of hosts from the HostManager.""" return self.driver.get_host_list() def get_service_capabilities(self, context): """Get the normalized set of capabilities for this zone.""" return self.driver.get_service_capabilities() def update_service_capabilities(self, context, service_name=None, host=None, capabilities=None, **kwargs): """Process a capability update from a service node.""" if capabilities is None: capabilities = {} self.driver.update_service_capabilities(service_name, host, capabilities) def create_share_instance(self, context, request_spec=None, filter_properties=None): try: self.driver.schedule_create_share(context, request_spec, filter_properties) except exception.NoValidHost as ex: self._set_share_state_and_notify( 'create_share', {'status': constants.STATUS_ERROR}, context, ex, request_spec, message_field.Action.ALLOCATE_HOST) except Exception as ex: with excutils.save_and_reraise_exception(): self._set_share_state_and_notify( 'create_share', {'status': constants.STATUS_ERROR}, context, ex, request_spec) def get_pools(self, context, filters=None, cached=False): """Get active pools 
from the scheduler's cache.""" return self.driver.get_pools(context, filters, cached) def manage_share(self, context, share_id, driver_options, request_spec, filter_properties=None): """Ensure that the host exists and can accept the share.""" def _manage_share_set_error(self, context, ex, request_spec): # NOTE(nidhimittalhada): set size as 1 because design expects size # to be set, it also will allow us to handle delete/unmanage # operations properly with this errored share according to quotas. self._set_share_state_and_notify( 'manage_share', {'status': constants.STATUS_MANAGE_ERROR, 'size': 1}, context, ex, request_spec) share_ref = db.share_get(context, share_id) try: self.driver.host_passes_filters( context, share_ref['host'], request_spec, filter_properties) except Exception as ex: with excutils.save_and_reraise_exception(): _manage_share_set_error(self, context, ex, request_spec) else: share_rpcapi.ShareAPI().manage_share(context, share_ref, driver_options) def migrate_share_to_host( self, context, share_id, host, force_host_assisted_migration, preserve_metadata, writable, nondisruptive, preserve_snapshots, new_share_network_id, new_share_type_id, request_spec, filter_properties=None): """Ensure that the host exists and can accept the share.""" share_ref = db.share_get(context, share_id) def _migrate_share_set_error(self, context, ex, request_spec): instance = next((x for x in share_ref.instances if x['status'] == constants.STATUS_MIGRATING), None) if instance: db.share_instance_update( context, instance['id'], {'status': constants.STATUS_AVAILABLE}) self._set_share_state_and_notify( 'migrate_share_to_host', {'task_state': constants.TASK_STATE_MIGRATION_ERROR}, context, ex, request_spec) try: tgt_host = self.driver.host_passes_filters( context, host, request_spec, filter_properties) except Exception as ex: with excutils.save_and_reraise_exception(): _migrate_share_set_error(self, context, ex, request_spec) else: try: share_rpcapi.ShareAPI().migration_start( context, share_ref, tgt_host.host, force_host_assisted_migration, preserve_metadata, writable, nondisruptive, preserve_snapshots, new_share_network_id, new_share_type_id) except Exception as ex: with excutils.save_and_reraise_exception(): _migrate_share_set_error(self, context, ex, request_spec) def _set_share_state_and_notify(self, method, state, context, ex, request_spec, action=None): LOG.error("Failed to schedule %(method)s: %(ex)s", {"method": method, "ex": ex}) properties = request_spec.get('share_properties', {}) share_id = request_spec.get('share_id', None) if share_id: db.share_update(context, share_id, state) if action: self.message_api.create( context, action, context.project_id, resource_type=message_field.Resource.SHARE, resource_id=share_id, exception=ex) payload = dict(request_spec=request_spec, share_properties=properties, share_id=share_id, state=state, method=method, reason=ex) rpc.get_notifier("scheduler").error( context, 'scheduler.' 
+ method, payload) def request_service_capabilities(self, context): share_rpcapi.ShareAPI().publish_service_capabilities(context) def _set_share_group_error_state(self, method, context, ex, request_spec, action=None): LOG.warning("Failed to schedule_%(method)s: %(ex)s", {"method": method, "ex": ex}) share_group_state = {'status': constants.STATUS_ERROR} share_group_id = request_spec.get('share_group_id') if share_group_id: db.share_group_update(context, share_group_id, share_group_state) if action: self.message_api.create( context, action, context.project_id, resource_type=message_field.Resource.SHARE_GROUP, resource_id=share_group_id, exception=ex) @periodic_task.periodic_task(spacing=600, run_immediately=True) def _expire_reservations(self, context): quota.QUOTAS.expire(context) def create_share_group(self, context, share_group_id, request_spec=None, filter_properties=None): try: self.driver.schedule_create_share_group( context, share_group_id, request_spec, filter_properties) except exception.NoValidHost as ex: self._set_share_group_error_state( 'create_share_group', context, ex, request_spec, message_field.Action.ALLOCATE_HOST) except Exception as ex: with excutils.save_and_reraise_exception(): self._set_share_group_error_state( 'create_share_group', context, ex, request_spec) def _set_share_replica_error_state(self, context, method, exc, request_spec, action=None): LOG.warning("Failed to schedule_%(method)s: %(exc)s", {'method': method, 'exc': exc}) status_updates = { 'status': constants.STATUS_ERROR, 'replica_state': constants.STATUS_ERROR, } share_replica_id = request_spec.get( 'share_instance_properties').get('id') # Set any snapshot instances to 'error'. replica_snapshots = db.share_snapshot_instance_get_all_with_filters( context, {'share_instance_ids': share_replica_id}) for snapshot_instance in replica_snapshots: db.share_snapshot_instance_update( context, snapshot_instance['id'], {'status': constants.STATUS_ERROR}) db.share_replica_update(context, share_replica_id, status_updates) if action: self.message_api.create( context, action, context.project_id, resource_type=message_field.Resource.SHARE_REPLICA, resource_id=share_replica_id, exception=exc) def create_share_replica(self, context, request_spec=None, filter_properties=None): try: self.driver.schedule_create_replica(context, request_spec, filter_properties) except exception.NoValidHost as exc: self._set_share_replica_error_state( context, 'create_share_replica', exc, request_spec, message_field.Action.ALLOCATE_HOST) except Exception as exc: with excutils.save_and_reraise_exception(): self._set_share_replica_error_state( context, 'create_share_replica', exc, request_spec) @periodic_task.periodic_task(spacing=CONF.message_reap_interval, run_immediately=True) @coordination.synchronized('locked-clean-expired-messages') def _clean_expired_messages(self, context): self.message_api.cleanup_expired_messages(context) manila-10.0.0/manila/scheduler/scheduler_options.py0000664000175000017500000000664413656750227022424 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack, LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ SchedulerOptions monitors a local .json file for changes and loads it if needed. This file is converted to a data structure and passed into the filtering and weighing functions which can use it for dynamic configuration. """ import datetime import os from oslo_config import cfg from oslo_log import log from oslo_serialization import jsonutils from oslo_utils import timeutils scheduler_json_config_location_opt = cfg.StrOpt( 'scheduler_json_config_location', default='', help='Absolute path to scheduler configuration JSON file.') CONF = cfg.CONF CONF.register_opt(scheduler_json_config_location_opt) LOG = log.getLogger(__name__) class SchedulerOptions(object): """Monitor and load local .json file for filtering and weighing. SchedulerOptions monitors a local .json file for changes and loads it if needed. This file is converted to a data structure and passed into the filtering and weighing functions which can use it for dynamic configuration. """ def __init__(self): super(SchedulerOptions, self).__init__() self.data = {} self.last_modified = None self.last_checked = None def _get_file_handle(self, filename): """Get file handle. Broken out for testing.""" return open(filename) def _get_file_timestamp(self, filename): """Get the last modified datetime. Broken out for testing.""" try: return os.path.getmtime(filename) except os.error: LOG.exception("Could not stat scheduler options file " "%(filename)s.", {"filename": filename}) raise def _load_file(self, handle): """Decode the JSON file. Broken out for testing.""" try: return jsonutils.load(handle) except ValueError: LOG.exception("Could not decode scheduler options.") return {} def _get_time_now(self): """Get current UTC. Broken out for testing.""" return timeutils.utcnow() def get_configuration(self, filename=None): """Check the json file for changes and load it if needed.""" if not filename: filename = CONF.scheduler_json_config_location if not filename: return self.data if self.last_checked: now = self._get_time_now() if now - self.last_checked < datetime.timedelta(minutes=5): return self.data last_modified = self._get_file_timestamp(filename) if (not last_modified or not self.last_modified or last_modified > self.last_modified): self.data = self._load_file(self._get_file_handle(filename)) self.last_modified = last_modified if not self.data: self.data = {} return self.data manila-10.0.0/manila/scheduler/rpcapi.py0000664000175000017500000001340413656750227020141 0ustar zuulzuul00000000000000# Copyright 2012, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Client side of the scheduler manager RPC API. 
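# --- Illustrative sketch (not part of the original manila source) ---
# SchedulerOptions.get_configuration() above throttles disk access: once a
# check has happened, cached data is served for up to five minutes, and the
# JSON file is re-parsed only when its mtime has moved past the last load.
# The gist of that decision, condensed (the helper name is made up):
import datetime


def needs_reload(last_checked, last_loaded_mtime, now, file_mtime,
                 interval=datetime.timedelta(minutes=5)):
    if last_checked and now - last_checked < interval:
        return False  # checked recently; keep serving cached data
    return not last_loaded_mtime or file_mtime > last_loaded_mtime
# --- end of sketch ---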
""" from oslo_config import cfg import oslo_messaging as messaging from oslo_serialization import jsonutils from manila import rpc CONF = cfg.CONF class SchedulerAPI(object): """Client side of the scheduler rpc API. API version history: 1.0 - Initial version. 1.1 - Add get_pools method 1.2 - Introduce Share Instances. Replace ``create_share()`` with ``create_share_instance()`` 1.3 - Add create_consistency_group method (renamed in 1.7) 1.4 - Add migrate_share_to_host method 1.5 - Add create_share_replica 1.6 - Add manage_share 1.7 - Updated migrate_share_to_host method with new parameters 1.8 - Rename create_consistency_group -> create_share_group method 1.9 - Add cached parameter to get_pools method """ RPC_API_VERSION = '1.9' def __init__(self): super(SchedulerAPI, self).__init__() target = messaging.Target(topic=CONF.scheduler_topic, version=self.RPC_API_VERSION) self.client = rpc.get_client(target, version_cap=self.RPC_API_VERSION) def create_share_instance(self, context, request_spec=None, filter_properties=None): request_spec_p = jsonutils.to_primitive(request_spec) call_context = self.client.prepare(version='1.2') return call_context.cast(context, 'create_share_instance', request_spec=request_spec_p, filter_properties=filter_properties) def update_service_capabilities(self, context, service_name, host, capabilities): call_context = self.client.prepare(fanout=True, version='1.0') call_context.cast(context, 'update_service_capabilities', service_name=service_name, host=host, capabilities=capabilities) def get_pools(self, context, filters=None, cached=False): call_context = self.client.prepare(version='1.9') return call_context.call(context, 'get_pools', filters=filters, cached=cached) def create_share_group(self, context, share_group_id, request_spec=None, filter_properties=None): """Casts an rpc to the scheduler to create a share group. 
Example of 'request_spec' argument value:: { 'share_group_type_id': 'fake_share_group_type_id', 'share_group_id': 'some_fake_uuid', 'availability_zone_id': 'some_fake_az_uuid', 'share_types': [models.ShareType], 'resource_type': models.ShareGroup, } """ request_spec_p = jsonutils.to_primitive(request_spec) call_context = self.client.prepare(version='1.8') return call_context.cast(context, 'create_share_group', share_group_id=share_group_id, request_spec=request_spec_p, filter_properties=filter_properties) def migrate_share_to_host( self, context, share_id, host, force_host_assisted_migration, preserve_metadata, writable, nondisruptive, preserve_snapshots, new_share_network_id, new_share_type_id, request_spec=None, filter_properties=None): call_context = self.client.prepare(version='1.7') request_spec_p = jsonutils.to_primitive(request_spec) return call_context.cast( context, 'migrate_share_to_host', share_id=share_id, host=host, force_host_assisted_migration=force_host_assisted_migration, preserve_metadata=preserve_metadata, writable=writable, nondisruptive=nondisruptive, preserve_snapshots=preserve_snapshots, new_share_network_id=new_share_network_id, new_share_type_id=new_share_type_id, request_spec=request_spec_p, filter_properties=filter_properties) def create_share_replica(self, context, request_spec=None, filter_properties=None): request_spec_p = jsonutils.to_primitive(request_spec) call_context = self.client.prepare(version='1.5') return call_context.cast(context, 'create_share_replica', request_spec=request_spec_p, filter_properties=filter_properties) def manage_share(self, context, share_id, driver_options, request_spec=None, filter_properties=None): call_context = self.client.prepare(version='1.6') return call_context.cast(context, 'manage_share', share_id=share_id, driver_options=driver_options, request_spec=request_spec, filter_properties=filter_properties) manila-10.0.0/manila/policy.py0000664000175000017500000001653213656750227016211 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack, LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Policy Engine For Manila""" import functools import sys from oslo_config import cfg from oslo_log import log as logging from oslo_policy import policy from oslo_utils import excutils from manila import exception from manila import policies CONF = cfg.CONF LOG = logging.getLogger(__name__) _ENFORCER = None def reset(): global _ENFORCER if _ENFORCER: _ENFORCER.clear() _ENFORCER = None def init(rules=None, use_conf=True): """Init an Enforcer class. :param policy_file: Custom policy file to use, if none is specified, `CONF.policy_file` will be used. :param rules: Default dictionary / Rules to use. It will be considered just in the first instantiation. :param default_rule: Default rule to use, CONF.default_rule will be used if none is specified. :param use_conf: Whether to load rules from config file. 
""" global _ENFORCER if not _ENFORCER: _ENFORCER = policy.Enforcer(CONF, rules=rules, use_conf=use_conf) register_rules(_ENFORCER) def enforce(context, action, target, do_raise=True): """Verifies that the action is valid on the target in this context. :param context: manila context :param action: string representing the action to be checked, this should be colon separated for clarity. i.e. ``share:create``, :param target: dictionary representing the object of the action for object creation, this should be a dictionary representing the location of the object e.g. ``{'project_id': context.project_id}`` :param do_raise: Whether to raise an exception if check fails. :returns: When ``do_raise`` is ``False``, returns a value that evaluates as ``True`` or ``False`` depending on whether the policy allows action on the target. :raises: manila.exception.PolicyNotAuthorized if verification fails and ``do_raise`` is ``True``. """ init() if not isinstance(context, dict): context = context.to_dict() # Add the exception arguments if asked to do a raise extra = {} if do_raise: extra.update(exc=exception.PolicyNotAuthorized, action=action, do_raise=do_raise) return _ENFORCER.enforce(action, target, context, **extra) def set_rules(rules, overwrite=True, use_conf=False): """Set rules based on the provided dict of rules. :param rules: New rules to use. It should be an instance of dict. :param overwrite: Whether to overwrite current rules or update them with the new rules. :param use_conf: Whether to reload rules from config file. """ init(use_conf=False) _ENFORCER.set_rules(rules, overwrite, use_conf) def get_rules(): if _ENFORCER: return _ENFORCER.rules def register_rules(enforcer): enforcer.register_defaults(policies.list_rules()) def get_enforcer(): # This method is for use by oslopolicy CLI scripts. Those scripts need the # 'output-file' and 'namespace' options, but having those in sys.argv means # loading the Manila config options will fail as those are not expected to # be present. So we pass in an arg list with those stripped out. conf_args = [] # Start at 1 because cfg.CONF expects the equivalent of sys.argv[1:] i = 1 while i < len(sys.argv): if sys.argv[i].strip('-') in ['namespace', 'output-file']: i += 2 continue conf_args.append(sys.argv[i]) i += 1 cfg.CONF(conf_args, project='manila') init() return _ENFORCER def authorize(context, action, target, do_raise=True, exc=None): """Verifies that the action is valid on the target in this context. :param context: manila context :param action: string representing the action to be checked this should be colon separated for clarity. i.e. ``share:create``, :param target: dictionary representing the object of the action for object creation this should be a dictionary representing the location of the object e.g. ``{'project_id': context.project_id}`` :param do_raise: if True (the default), raises PolicyNotAuthorized; if False, returns False :param exc: Class of the exception to raise if the check fails. Any remaining arguments passed to :meth:`authorize` (both positional and keyword arguments) will be passed to the exception class. If not specified, :class:`PolicyNotAuthorized` will be used. :raises manila.exception.PolicyNotAuthorized: if verification fails and do_raise is True. Or if 'exc' is specified it will raise an exception of that type. :return: returns a non-False value (not necessarily "True") if authorized, and the exact value False if not authorized and do_raise is False. 
""" init() credentials = context.to_policy_values() if not exc: exc = exception.PolicyNotAuthorized try: result = _ENFORCER.authorize(action, target, credentials, do_raise=do_raise, exc=exc, action=action) except policy.PolicyNotRegistered: with excutils.save_and_reraise_exception(): LOG.exception('Policy not registered') except Exception: with excutils.save_and_reraise_exception(): LOG.debug('Policy check for %(action)s failed with credentials ' '%(credentials)s', {'action': action, 'credentials': credentials}) return result def check_is_admin(context): """Whether or not user is admin according to policy setting. """ init() credentials = context.to_policy_values() target = credentials return _ENFORCER.authorize('context_is_admin', target, credentials) def wrap_check_policy(resource): """Check policy corresponding to the wrapped methods prior to execution.""" def check_policy_wraper(func): @functools.wraps(func) def wrapped(self, context, target_obj, *args, **kwargs): check_policy(context, resource, func.__name__, target_obj) return func(self, context, target_obj, *args, **kwargs) return wrapped return check_policy_wraper def check_policy(context, resource, action, target_obj=None, do_raise=True): target = { 'project_id': context.project_id, 'user_id': context.user_id, } target.update(target_obj or {}) _action = '%s:%s' % (resource, action) return authorize(context, _action, target, do_raise=do_raise) manila-10.0.0/.stestr.conf0000664000175000017500000000005613656750227015342 0ustar zuulzuul00000000000000[DEFAULT] test_path=./manila/tests top_dir=./ manila-10.0.0/devstack/0000775000175000017500000000000013656750362014674 5ustar zuulzuul00000000000000manila-10.0.0/devstack/settings0000664000175000017500000002717313656750227016471 0ustar zuulzuul00000000000000# Setting configuration file for manila services # ---------------------------------------------- # 1) It is possible to set any custom opt to any config group using following: # $ export MANILA_OPTGROUP_foo_bar=value # where 'foo' is name of config group and 'bar' is name of option. # # 2) 'MANILA_CONFIGURE_GROUPS' contains list of config group names used to create # config groups, but 'MANILA_ENABLED_BACKENDS' is used to set config groups as # Manila share back ends. Both can be set like following: # $ export MANILA_ENABLED_BACKENDS=foo,bar # where 'foo' and 'bar' are names of config groups with opts for some share # drivers. By default they are equal. Also be attentive, if you modify both, # make sure 'MANILA_CONFIGURE_GROUPS' contains all values from # 'MANILA_ENABLED_BACKENDS'. # DEFAULT group is always defined, no need to specify it within 'MANILA_CONFIGURE_GROUPS'. # # 3) Two default backends are used for compatibility with previous approach. # They have same configuration except name of backend. Both use generic driver. # They can be enabled by adding values of following env vars: # 'MANILA_BACKEND1_CONFIG_GROUP_NAME' and 'MANILA_BACKEND2_CONFIG_GROUP_NAME' # to the env var 'MANILA_ENABLED_BACKENDS' or will be enabled # if 'MANILA_ENABLED_BACKENDS' is empty. # # 4) 'CINDER_OVERSUBSCRIPTION_RATIO' - manila devstack-plugin env var that is # useful for all share drivers that use Cinder. If it is set, then it will be # applied for two Cinder options: 'max_over_subscription_ratio' and # 'lvm_max_over_subscription_ratio'. Should be float. 
Example: # CINDER_OVERSUBSCRIPTION_RATIO=20.0 # Defaults # -------- MANILA_GIT_BASE=${MANILA_GIT_BASE:-https://opendev.org} MANILA_REPO_ROOT=${MANILA_REPO_ROOT:-openstack} MANILACLIENT_REPO=${MANILA_GIT_BASE}/${MANILA_REPO_ROOT}/python-manilaclient MANILACLIENT_BRANCH=${MANILACLIENT_BRANCH:-master} # Set up default directories MANILA_DIR=${MANILA_DIR:=$DEST/manila} MANILA_LOCK_PATH=${MANILA_LOCK_PATH:=$OSLO_LOCK_PATH} MANILA_LOCK_PATH=${MANILA_LOCK_PATH:=$MANILA_DIR/manila_locks} MANILACLIENT_DIR=${MANILACLIENT_DIR:=$DEST/python-manilaclient} MANILA_STATE_PATH=${MANILA_STATE_PATH:=$DATA_DIR/manila} MANILA_CONF_DIR=${MANILA_CONF_DIR:-/etc/manila} MANILA_CONF=$MANILA_CONF_DIR/manila.conf MANILA_API_PASTE_INI=$MANILA_CONF_DIR/api-paste.ini # Set this to False to leave "default_share_type" and # "default_share_group_type" configuration options empty. MANILA_CONFIGURE_DEFAULT_TYPES=${MANILA_CONFIGURE_DEFAULT_TYPES:-True} MANILA_DEFAULT_SHARE_TYPE=${MANILA_DEFAULT_SHARE_TYPE:-default} # MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS is expected to contain extra specs key-value pairs, # that should be assigned to default share type. Both - qualified and unqualified extra specs are supported. # Pairs are separated by spaces, value is assigned to key using sign of equality. Examples: # MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='foo=bar' # MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='foo=bar quuz=xyzzy' # MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS='foo=bar quuz=xyzzy fakeprefix:baz=waldo' MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS=${MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS:-''} MANILA_DHSS_TRUE_SHARE_TYPE_EXTRA_SPECS=${MANILA_DHSS_TRUE_SHARE_TYPE_EXTRA_SPECS:-$MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS} MANILA_DHSS_FALSE_SHARE_TYPE_EXTRA_SPECS=${MANILA_DHSS_FALSE_SHARE_TYPE_EXTRA_SPECS:-$MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS} # Share groups and their specs MANILA_DEFAULT_SHARE_GROUP_TYPE=${MANILA_DEFAULT_SHARE_GROUP_TYPE:-default} # MANILA_DEFAULT_SHARE_GROUP_TYPE_SPECS is expected to contain key-value pairs, # that should be assigned to default share group type. Both - qualified and unqualified specs are supported. # Pairs are separated by spaces, value is assigned to key using sign of equality. Examples: # MANILA_DEFAULT_SHARE_GROUP_TYPE_SPECS='foo=bar' # MANILA_DEFAULT_SHARE_GROUP_TYPE_SPECS='foo=bar quuz=xyzzy' # MANILA_DEFAULT_SHARE_GROUP_TYPE_SPECS='foo=bar quuz=xyzzy fakeprefix:baz=waldo' MANILA_DEFAULT_SHARE_GROUP_TYPE_SPECS=${MANILA_DEFAULT_SHARE_GROUP_TYPE_SPECS:-''} # Public facing bits MANILA_SERVICE_HOST=${MANILA_SERVICE_HOST:-$SERVICE_HOST} MANILA_SERVICE_PORT=${MANILA_SERVICE_PORT:-8786} MANILA_SERVICE_PORT_INT=${MANILA_SERVICE_PORT_INT:-18786} MANILA_SERVICE_PROTOCOL=${MANILA_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL} MANILA_ENDPOINT_BASE=$MANILA_SERVICE_PROTOCOL://$MANILA_SERVICE_HOST:$MANILA_SERVICE_PORT # Support entry points installation of console scripts if [[ -d $MANILA_DIR/bin ]]; then MANILA_BIN_DIR=$MANILA_DIR/bin else MANILA_BIN_DIR=$(get_python_exec_prefix) fi # Common opts SHARE_NAME_PREFIX=${SHARE_NAME_PREFIX:-share-} MANILA_ENABLED_SHARE_PROTOCOLS=${ENABLED_SHARE_PROTOCOLS:-"NFS,CIFS"} MANILA_SCHEDULER_DRIVER=${MANILA_SCHEDULER_DRIVER:-manila.scheduler.filter_scheduler.FilterScheduler} MANILA_SERVICE_SECGROUP="manila-service" # Following env var defines whether to apply downgrade migrations setting up DB or not. # If it is set to False, then only 'upgrade' migrations will be applied. # If it is set to True, then will be applied 'upgrade', 'downgrade' and 'upgrade' # migrations again. 
MANILA_USE_DOWNGRADE_MIGRATIONS=${MANILA_USE_DOWNGRADE_MIGRATIONS:-"False"} # Toggle for deploying manila-api service under Apache web server with enabled # 'mod_wsgi' plugin. MANILA_USE_MOD_WSGI=${MANILA_USE_MOD_WSGI:-False} # Toggle for deploying manila-api service with uWSGI # Set it as True, because starting with Pike it is requirement from # 'governance' project. See: # https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html#completion-criteria MANILA_USE_UWSGI=${MANILA_USE_UWSGI:-True} MANILA_WSGI=$MANILA_BIN_DIR/manila-wsgi MANILA_UWSGI_CONF=$MANILA_CONF_DIR/manila-uwsgi.ini if [ $(trueorfalse False MANILA_USE_UWSGI) == True ]; then MANILA_ENDPOINT_BASE=$MANILA_SERVICE_PROTOCOL://$MANILA_SERVICE_HOST/share fi # Common info for Generic driver(s) SHARE_DRIVER=${SHARE_DRIVER:-manila.share.drivers.generic.GenericShareDriver} eval USER_HOME=~ MANILA_PATH_TO_PUBLIC_KEY=${MANILA_PATH_TO_PUBLIC_KEY:-"$USER_HOME/.ssh/id_rsa.pub"} MANILA_PATH_TO_PRIVATE_KEY=${MANILA_PATH_TO_PRIVATE_KEY:-"$USER_HOME/.ssh/id_rsa"} MANILA_SERVICE_KEYPAIR_NAME=${MANILA_SERVICE_KEYPAIR_NAME:-"manila-service"} MANILA_SERVICE_INSTANCE_USER=${MANILA_SERVICE_INSTANCE_USER:-"manila"} MANILA_SERVICE_IMAGE_URL=${MANILA_SERVICE_IMAGE_URL:-"http://tarballs.openstack.org/manila-image-elements/images/manila-service-image-master.qcow2"} MANILA_SERVICE_IMAGE_NAME=${MANILA_SERVICE_IMAGE_NAME:-"manila-service-image-master"} MANILA_USE_SCHEDULER_CREATING_SHARE_FROM_SNAPSHOT=${MANILA_USE_SCHEDULER_CREATING_SHARE_FROM_SNAPSHOT:-"False"} # Third party CI Vendors should set this to false to skip the service image download MANILA_SERVICE_IMAGE_ENABLED=$(trueorfalse True MANILA_SERVICE_IMAGE_ENABLED) MANILA_USE_SERVICE_INSTANCE_PASSWORD=${MANILA_USE_SERVICE_INSTANCE_PASSWORD:-"False"} MANILA_SERVICE_INSTANCE_PASSWORD=${MANILA_SERVICE_INSTANCE_PASSWORD:-"manila"} MANILA_SERVICE_VM_FLAVOR_REF=${MANILA_SERVICE_VM_FLAVOR_REF:-100} MANILA_SERVICE_VM_FLAVOR_NAME=${MANILA_SERVICE_VM_FLAVOR_NAME:-"manila-service-flavor"} MANILA_SERVICE_VM_FLAVOR_RAM=${MANILA_SERVICE_VM_FLAVOR_RAM:-320} MANILA_SERVICE_VM_FLAVOR_DISK=${MANILA_SERVICE_VM_FLAVOR_DISK:-3} MANILA_SERVICE_VM_FLAVOR_VCPUS=${MANILA_SERVICE_VM_FLAVOR_VCPUS:-1} # Support for multi backend configuration (default is no support) MANILA_MULTI_BACKEND=$(trueorfalse False MANILA_MULTI_BACKEND) DEPRECATED_TEXT="$DEPRECATED_TEXT\n'MANILA_MULTI_BACKEND' is deprecated, it makes influence only when is set to True and 'MANILA_ENABLED_BACKENDS' is not set. Use 'MANILA_ENABLED_BACKENDS' instead if you want to use custom setting. 
Set there a list of back end names to be enabled.\n To configure custom back ends use (any opt in any group can be set in this way) following: MANILA_OPTGROUP_foo_bar=value where 'foo' is name of config group and 'bar' is name of option.\n" # First share backend data, that will be used in any installation MANILA_BACKEND1_CONFIG_GROUP_NAME=${MANILA_BACKEND1_CONFIG_GROUP_NAME:-generic1} # deprecated MANILA_SHARE_BACKEND1_NAME=${MANILA_SHARE_BACKEND1_NAME:-GENERIC1} # deprecated # Second share backend data, that will be used only with MANILA_MULTI_BACKEND=True MANILA_BACKEND2_CONFIG_GROUP_NAME=${MANILA_BACKEND2_CONFIG_GROUP_NAME:-generic2} # deprecated MANILA_SHARE_BACKEND2_NAME=${MANILA_SHARE_BACKEND2_NAME:-GENERIC2} # deprecated # Options for configuration of LVM share driver SHARE_BACKING_FILE_SIZE=${SHARE_BACKING_FILE_SIZE:-8400M} SHARE_GROUP=${SHARE_GROUP:-lvm-shares} MANILA_MNT_DIR=${MANILA_MNT_DIR:=$MANILA_STATE_PATH/mnt} SMB_CONF=${SMB_CONF:-/etc/samba/smb.conf} SMB_PRIVATE_DIR=${SMB_PRIVATE_DIR:-/var/lib/samba/private} CONFIGURE_BACKING_FILE=${CONFIGURE_BACKING_FILE:-"True"} MANILA_LVM_SHARE_EXPORT_IPS=${MANILA_LVM_SHARE_EXPORT_IPS:-$HOST_IP} # Options for replication MANILA_REPLICA_STATE_UPDATE_INTERVAL=${MANILA_REPLICA_STATE_UPDATE_INTERVAL:-300} # Options for configuration of ZFSonLinux driver # 'MANILA_ZFSONLINUX_ZPOOL_SIZE' defines size of each zpool. That value # will be used for creation of sparse files. MANILA_ZFSONLINUX_ZPOOL_SIZE=${MANILA_ZFSONLINUX_ZPOOL_SIZE:-"30G"} MANILA_ZFSONLINUX_BACKEND_FILES_CONTAINER_DIR=${MANILA_ZFSONLINUX_BACKEND_FILES_CONTAINER_DIR:-"/opt/stack/data/manila/zfsonlinux"} MANILA_ZFSONLINUX_SHARE_EXPORT_IP=${MANILA_ZFSONLINUX_SHARE_EXPORT_IP:-$HOST_IP} MANILA_ZFSONLINUX_SERVICE_IP=${MANILA_ZFSONLINUX_SERVICE_IP:-$HOST_IP} MANILA_ZFSONLINUX_DATASET_CREATION_OPTIONS=${MANILA_ZFSONLINUX_DATASET_CREATION_OPTIONS:-"compression=gzip"} MANILA_ZFSONLINUX_USE_SSH=${MANILA_ZFSONLINUX_USE_SSH:-"False"} MANILA_ZFSONLINUX_SSH_USERNAME=${MANILA_ZFSONLINUX_SSH_USERNAME:-$STACK_USER} # If MANILA_ZFSONLINUX_REPLICATION_DOMAIN is set to empty value then # Manila will consider replication feature as disabled for ZFSonLinux share driver. MANILA_ZFSONLINUX_REPLICATION_DOMAIN=${MANILA_ZFSONLINUX_REPLICATION_DOMAIN:-"ZFSonLinux"} # Container Driver MANILA_CONTAINER_DRIVER=${MANILA_CONTAINER_DRIVER:-"manila.share.drivers.container.driver.ContainerShareDriver"} MANILA_DOCKER_IMAGE_ALIAS=${MANILA_DOCKER_IMAGE_ALIAS:-"manila_docker_image"} MANILA_CONTAINER_VOLUME_GROUP_NAME=${MANILA_CONTAINER_VOLUME_GROUP_NAME:-"manila_docker_volumes"} # (aovchinnikov): This location is temporary and will be changed to a # permanent one as soon as possible. 
MANILA_DOCKER_IMAGE_URL=${MANILA_DOCKER_IMAGE_URL:-"https://github.com/a-ovchinnikov/manila-image-elements-lxd-images/releases/download/0.1.0/manila-docker-container.tar.gz"} # Network Plugin MANILA_NETWORK_API_CLASS=${MANILA_NETWORK_API_CLASS:-"manila.network.neutron.neutron_network_plugin.NeutronBindNetworkPlugin"} MANILA_NEUTRON_VNIC_TYPE=${MANILA_NEUTRON_VNIC_TYPE:-"normal"} # SSH TIMEOUT MANILA_SSH_TIMEOUT=${MANILA_SSH_TIMEOUT:-180} # Admin Network setup MANILA_ADMIN_NET_RANGE=${MANILA_ADMIN_NET_RANGE:=10.2.5.0/24} # Data Service IP configuration MANILA_DATA_NODE_IP=${MANILA_DATA_NODE_IP:=$MANILA_ADMIN_NET_RANGE} # Data Service copy validation MANILA_DATA_COPY_CHECK_HASH=${MANILA_DATA_COPY_CHECK_HASH:=True} # Manila IPv6 Setup flag MANILA_SETUP_IPV6=${MANILA_SETUP_IPV6:=False} MANILA_RESTORE_IPV6_DEFAULT_ROUTE=${MANILA_RESTORE_IPV6_DEFAULT_ROUTE:=True} # Install manila-tempest-plugin system-wide # This operation has been deprecated. manila-tempest-plugin has a devstack # plugin that must be preferred over this approach. MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=${MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE:=True} # Enable manila services # ---------------------- # We have to add Manila to enabled services for screen_it to work # It consists of 4 parts: m-api (API), m-shr (Share), m-sch (Scheduler) # and m-dat (Data). enable_service manila enable_service m-api enable_service m-shr enable_service m-sch enable_service m-dat manila-10.0.0/devstack/files/0000775000175000017500000000000013656750362015776 5ustar zuulzuul00000000000000manila-10.0.0/devstack/files/rpms/0000775000175000017500000000000013656750362016757 5ustar zuulzuul00000000000000manila-10.0.0/devstack/files/rpms/manila0000664000175000017500000000000513656750227020136 0ustar zuulzuul00000000000000lvm2 manila-10.0.0/devstack/files/debs/0000775000175000017500000000000013656750362016713 5ustar zuulzuul00000000000000manila-10.0.0/devstack/files/debs/manila0000664000175000017500000000000513656750227020072 0ustar zuulzuul00000000000000lvm2 manila-10.0.0/devstack/files/rpms-suse/0000775000175000017500000000000013656750362017734 5ustar zuulzuul00000000000000manila-10.0.0/devstack/files/rpms-suse/manila0000664000175000017500000000000513656750227021113 0ustar zuulzuul00000000000000lvm2 manila-10.0.0/devstack/apache-manila.template0000664000175000017500000000147513656750227021120 0ustar zuulzuul00000000000000Listen %PORT% LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %D(us)" manila_combined WSGIDaemonProcess manila-api processes=%APIWORKERS% threads=2 user=%USER% display-name=%{GROUP} WSGIProcessGroup manila-api WSGIScriptAlias / %MANILA_BIN_DIR%/manila-wsgi WSGIApplicationGroup %{GLOBAL} WSGIPassAuthorization On = 2.4> ErrorLogFormat "%{cu}t %M" ErrorLog /var/log/%APACHE_NAME%/manila_api.log CustomLog /var/log/%APACHE_NAME%/manila_api_access.log manila_combined = 2.4> Require all granted Order allow,deny Allow from all manila-10.0.0/devstack/plugin.sh0000775000175000017500000015366213656750227016546 0ustar zuulzuul00000000000000#!/bin/bash # Plugin file for enabling manila services # ---------------------------------------- # Save trace setting XTRACE=$(set +o | grep xtrace) set -o xtrace # Entry Points # ------------ function _clean_share_group { local vg=$1 local vg_prefix=$2 # Clean out existing shares for lv in `sudo lvs --noheadings -o lv_name $vg`; do # vg_prefix prefixes the LVs we want if [[ "${lv#$vg_prefix}" != "$lv" ]]; then sudo umount -f $MANILA_MNT_DIR/$lv sudo lvremove -f $vg/$lv sudo rm -rf 
$MANILA_MNT_DIR/$lv fi done } function _clean_manila_lvm_backing_file { local vg=$1 # if there is no logical volume left, it's safe to attempt a cleanup # of the backing file if [ -z "`sudo lvs --noheadings -o lv_name $vg`" ]; then # if the backing physical device is a loop device, it was probably setup by devstack VG_DEV=$(sudo losetup -j $DATA_DIR/${vg}-backing-file | awk -F':' '/backing-file/ { print $1 }') if [[ -n "$VG_DEV" ]]; then sudo losetup -d $VG_DEV rm -f $DATA_DIR/${vg}-backing-file fi fi } function _clean_zfsonlinux_data { for filename in "$MANILA_ZFSONLINUX_BACKEND_FILES_CONTAINER_DIR"/*; do if [[ $(sudo zpool list | grep $filename) ]]; then echo "Destroying zpool named $filename" sudo zpool destroy -f $filename file="$MANILA_ZFSONLINUX_BACKEND_FILES_CONTAINER_DIR$filename" echo "Destroying file named $file" rm -f $file fi done } # cleanup_manila - Remove residual data files, anything left over from previous # runs that a clean run would need to clean up function cleanup_manila { # All stuff, that are created by share drivers will be cleaned up by other services. _clean_share_group $SHARE_GROUP $SHARE_NAME_PREFIX _clean_manila_lvm_backing_file $SHARE_GROUP _clean_zfsonlinux_data if [ $(trueorfalse False MANILA_USE_UWSGI) == True ]; then remove_uwsgi_config "$MANILA_UWSGI_CONF" "$MANILA_WSGI" fi } # _config_manila_apache_wsgi() - Configure manila-api wsgi application. function _config_manila_apache_wsgi { local manila_api_apache_conf local venv_path="" manila_api_apache_conf=$(apache_site_config_for manila-api) sudo cp $MANILA_DIR/devstack/apache-manila.template $manila_api_apache_conf sudo sed -e " s|%APACHE_NAME%|$APACHE_NAME|g; s|%MANILA_BIN_DIR%|$MANILA_BIN_DIR|g; s|%PORT%|$REAL_MANILA_SERVICE_PORT|g; s|%APIWORKERS%|$API_WORKERS|g; s|%USER%|$STACK_USER|g; " -i $manila_api_apache_conf } # configure_default_backends - configures default Manila backends with generic driver. function configure_default_backends { # Configure two default backends with generic drivers onboard for group_name in $MANILA_BACKEND1_CONFIG_GROUP_NAME $MANILA_BACKEND2_CONFIG_GROUP_NAME; do iniset $MANILA_CONF $group_name share_driver $SHARE_DRIVER if [ "$MANILA_BACKEND1_CONFIG_GROUP_NAME" == "$group_name" ]; then iniset $MANILA_CONF $group_name share_backend_name $MANILA_SHARE_BACKEND1_NAME else iniset $MANILA_CONF $group_name share_backend_name $MANILA_SHARE_BACKEND2_NAME fi iniset $MANILA_CONF $group_name path_to_public_key $MANILA_PATH_TO_PUBLIC_KEY iniset $MANILA_CONF $group_name path_to_private_key $MANILA_PATH_TO_PRIVATE_KEY iniset $MANILA_CONF $group_name service_image_name $MANILA_SERVICE_IMAGE_NAME iniset $MANILA_CONF $group_name service_instance_user $MANILA_SERVICE_INSTANCE_USER iniset $MANILA_CONF $group_name driver_handles_share_servers True if [ "$SHARE_DRIVER" == $MANILA_CONTAINER_DRIVER ]; then iniset $MANILA_CONF $group_name network_api_class $MANILA_NETWORK_API_CLASS iniset $MANILA_CONF $group_name neutron_host_id $(hostname) iniset $MANILA_CONF $group_name neutron_vnic_type $MANILA_NEUTRON_VNIC_TYPE fi if [ $(trueorfalse False MANILA_USE_SERVICE_INSTANCE_PASSWORD) == True ]; then iniset $MANILA_CONF $group_name service_instance_password $MANILA_SERVICE_INSTANCE_PASSWORD fi if [ "$SHARE_DRIVER" == "manila.share.drivers.generic.GenericShareDriver" ]; then iniset $MANILA_CONF $group_name ssh_conn_timeout $MANILA_SSH_TIMEOUT fi done } # set_config_opts - this allows to set any config opt to any config group, # parsing env vars by prefix special 'MANILA_OPTGROUP_'. 
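# As a minimal sketch of that mapping (the group and option names below are
# only illustrative, reusing the default 'generic1' group from the devstack
# settings): exporting
#   MANILA_OPTGROUP_generic1_driver_handles_share_servers=True
# in local.conf makes set_config_opts run the equivalent of
#   iniset $MANILA_CONF generic1 driver_handles_share_servers True
# i.e. the variable name is read as MANILA_OPTGROUP_<config group>_<option>,
# and <config group> must be one of the group names passed to this function.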
function set_config_opts { # expects only one param - name of config group(s) as list separated by commas GROUP_NAMES=$1 if [[ -n "$GROUP_NAMES" ]]; then for be in ${GROUP_NAMES//,/ }; do # get backend_specific opt values prefix=MANILA_OPTGROUP_$be\_ ( set -o posix ; set ) | grep ^$prefix | while read -r line ; do # parse it to opt names and values opt=${line#$prefix} opt_name=${opt%%=*} opt_value=${opt##*=} iniset $MANILA_CONF $be $opt_name $opt_value done done fi } # set_cinder_quotas - Sets Cinder quotas, that is useful for generic driver, # which uses Cinder volumes and snapshots. function set_cinder_quotas { # Update Cinder configuration to make sure default quotas are enough # for Manila using Generic driver with parallel testing. if is_service_enabled cinder; then if [[ ! "$CINDER_CONF" ]]; then CINDER_CONF=/etc/cinder/cinder.conf fi iniset $CINDER_CONF DEFAULT quota_volumes 50 iniset $CINDER_CONF DEFAULT quota_snapshots 50 iniset $CINDER_CONF DEFAULT quota_gigabytes 1000 fi } function set_backend_availability_zones { ENABLED_BACKENDS=$1 echo_summary "Setting up backend_availability_zone option \ for any enabled backends that do not use the Generic driver. \ Availability zones for the Generic driver must coincide with those \ created for Nova and Cinder." local zonenum generic_driver='manila.share.drivers.generic.GenericShareDriver' for BE in ${ENABLED_BACKENDS//,/ }; do share_driver=$(iniget $MANILA_CONF $BE share_driver) if [[ $share_driver != $generic_driver ]]; then zone="manila-zone-$((zonenum++))" iniset $MANILA_CONF $BE backend_availability_zone $zone fi done } # configure_manila - Set config files, create data dirs, etc function configure_manila { if [[ ! -d $MANILA_CONF_DIR ]]; then sudo mkdir -p $MANILA_CONF_DIR fi sudo chown $STACK_USER $MANILA_CONF_DIR if [[ -f $MANILA_DIR/etc/manila/policy.json ]]; then cp -p $MANILA_DIR/etc/manila/policy.json $MANILA_CONF_DIR fi # Set the paths of certain binaries MANILA_ROOTWRAP=$(get_rootwrap_location manila) # If Manila ships the new rootwrap filters files, deploy them # (owned by root) and add a parameter to $MANILA_ROOTWRAP ROOTWRAP_MANILA_SUDOER_CMD="$MANILA_ROOTWRAP" if [[ -d $MANILA_DIR/etc/manila/rootwrap.d ]]; then # Wipe any existing rootwrap.d files first if [[ -d $MANILA_CONF_DIR/rootwrap.d ]]; then sudo rm -rf $MANILA_CONF_DIR/rootwrap.d fi # Deploy filters to /etc/manila/rootwrap.d sudo mkdir -m 755 $MANILA_CONF_DIR/rootwrap.d sudo cp $MANILA_DIR/etc/manila/rootwrap.d/*.filters $MANILA_CONF_DIR/rootwrap.d sudo chown -R root:root $MANILA_CONF_DIR/rootwrap.d sudo chmod 644 $MANILA_CONF_DIR/rootwrap.d/* # Set up rootwrap.conf, pointing to /etc/manila/rootwrap.d sudo cp $MANILA_DIR/etc/manila/rootwrap.conf $MANILA_CONF_DIR/ sudo sed -e "s:^filters_path=.*$:filters_path=$MANILA_CONF_DIR/rootwrap.d:" -i $MANILA_CONF_DIR/rootwrap.conf sudo chown root:root $MANILA_CONF_DIR/rootwrap.conf sudo chmod 0644 $MANILA_CONF_DIR/rootwrap.conf # Specify rootwrap.conf as first parameter to manila-rootwrap MANILA_ROOTWRAP="$MANILA_ROOTWRAP $MANILA_CONF_DIR/rootwrap.conf" ROOTWRAP_MANILA_SUDOER_CMD="$MANILA_ROOTWRAP *" fi TEMPFILE=`mktemp` echo "$USER ALL=(root) NOPASSWD: $ROOTWRAP_MANILA_SUDOER_CMD" >$TEMPFILE chmod 0440 $TEMPFILE sudo chown root:root $TEMPFILE sudo mv $TEMPFILE /etc/sudoers.d/manila-rootwrap cp $MANILA_DIR/etc/manila/api-paste.ini $MANILA_API_PASTE_INI # Remove old conf file if exists rm -f $MANILA_CONF configure_keystone_authtoken_middleware $MANILA_CONF manila iniset $MANILA_CONF DEFAULT auth_strategy keystone iniset 
$MANILA_CONF DEFAULT debug True iniset $MANILA_CONF DEFAULT scheduler_driver $MANILA_SCHEDULER_DRIVER iniset $MANILA_CONF DEFAULT share_name_template ${SHARE_NAME_PREFIX}%s iniset $MANILA_CONF DATABASE connection `database_connection_url manila` iniset $MANILA_CONF DATABASE max_pool_size 40 iniset $MANILA_CONF DEFAULT api_paste_config $MANILA_API_PASTE_INI iniset $MANILA_CONF DEFAULT rootwrap_config $MANILA_CONF_DIR/rootwrap.conf iniset $MANILA_CONF DEFAULT osapi_share_extension manila.api.contrib.standard_extensions iniset $MANILA_CONF DEFAULT state_path $MANILA_STATE_PATH # Note: Sample share types will still be created if the below is False if [ $(trueorfalse False MANILA_CONFIGURE_DEFAULT_TYPES) == True ]; then iniset $MANILA_CONF DEFAULT default_share_type $MANILA_DEFAULT_SHARE_TYPE iniset $MANILA_CONF DEFAULT default_share_group_type $MANILA_DEFAULT_SHARE_GROUP_TYPE fi if ! [[ -z $MANILA_SHARE_MIGRATION_PERIOD_TASK_INTERVAL ]]; then iniset $MANILA_CONF DEFAULT migration_driver_continue_update_interval $MANILA_SHARE_MIGRATION_PERIOD_TASK_INTERVAL fi if ! [[ -z $MANILA_DATA_COPY_CHECK_HASH ]]; then iniset $MANILA_CONF DEFAULT check_hash $MANILA_DATA_COPY_CHECK_HASH fi iniset $MANILA_CONF DEFAULT enabled_share_protocols $MANILA_ENABLED_SHARE_PROTOCOLS iniset $MANILA_CONF oslo_concurrency lock_path $MANILA_LOCK_PATH iniset $MANILA_CONF DEFAULT wsgi_keep_alive False iniset $MANILA_CONF DEFAULT lvm_share_volume_group $SHARE_GROUP # Set the replica_state_update_interval iniset $MANILA_CONF DEFAULT replica_state_update_interval $MANILA_REPLICA_STATE_UPDATE_INTERVAL # Set the use_scheduler_creating_share_from_snapshot iniset $MANILA_CONF DEFAULT use_scheduler_creating_share_from_snapshot $MANILA_USE_SCHEDULER_CREATING_SHARE_FROM_SNAPSHOT if is_service_enabled neutron; then configure_keystone_authtoken_middleware $MANILA_CONF neutron neutron fi if is_service_enabled nova; then configure_keystone_authtoken_middleware $MANILA_CONF nova nova fi if is_service_enabled cinder; then configure_keystone_authtoken_middleware $MANILA_CONF cinder cinder fi # Note: set up config group does not mean that this backend will be enabled. # To enable it, specify its name explicitly using "enabled_share_backends" opt. configure_default_backends default_backends=$MANILA_BACKEND1_CONFIG_GROUP_NAME if [ "$MANILA_MULTI_BACKEND" = "True" ]; then default_backends+=,$MANILA_BACKEND2_CONFIG_GROUP_NAME fi if [ ! $MANILA_ENABLED_BACKENDS ]; then # If $MANILA_ENABLED_BACKENDS is not set, use configured backends by default export MANILA_ENABLED_BACKENDS=$default_backends fi iniset $MANILA_CONF DEFAULT enabled_share_backends $MANILA_ENABLED_BACKENDS if [ ! 
-f $MANILA_PATH_TO_PRIVATE_KEY ]; then ssh-keygen -N "" -t rsa -f $MANILA_PATH_TO_PRIVATE_KEY; fi iniset $MANILA_CONF DEFAULT manila_service_keypair_name $MANILA_SERVICE_KEYPAIR_NAME REAL_MANILA_SERVICE_PORT=$MANILA_SERVICE_PORT if is_service_enabled tls-proxy; then # Set the protocol to 'https', update the endpoint base and set the default port MANILA_SERVICE_PROTOCOL="https" MANILA_ENDPOINT_BASE="${MANILA_ENDPOINT_BASE/http:/https:}" REAL_MANILA_SERVICE_PORT=$MANILA_SERVICE_PORT_INT # Set the service port for a proxy to take the original iniset $MANILA_CONF DEFAULT osapi_share_listen_port $REAL_MANILA_SERVICE_PORT iniset $MANILA_CONF oslo_middleware enable_proxy_headers_parsing True fi iniset_rpc_backend manila $MANILA_CONF DEFAULT setup_logging $MANILA_CONF MANILA_CONFIGURE_GROUPS=${MANILA_CONFIGURE_GROUPS:-"$MANILA_ENABLED_BACKENDS"} set_config_opts $MANILA_CONFIGURE_GROUPS set_config_opts DEFAULT set_backend_availability_zones $MANILA_ENABLED_BACKENDS if [ $(trueorfalse False MANILA_USE_UWSGI) == True ]; then write_uwsgi_config "$MANILA_UWSGI_CONF" "$MANILA_WSGI" "/share" fi if [ $(trueorfalse False MANILA_USE_MOD_WSGI) == True ]; then _config_manila_apache_wsgi fi } function create_manila_service_keypair { if is_service_enabled nova; then local keypair_exists=$( openstack keypair list | grep " $MANILA_SERVICE_KEYPAIR_NAME " ) if [[ -z $keypair_exists ]]; then openstack keypair create $MANILA_SERVICE_KEYPAIR_NAME --public-key $MANILA_PATH_TO_PUBLIC_KEY fi fi } function is_driver_enabled { driver_name=$1 for BE in ${MANILA_ENABLED_BACKENDS//,/ }; do share_driver=$(iniget $MANILA_CONF $BE share_driver) if [ "$share_driver" == "$driver_name" ]; then return 0 fi done return 1 } # create_service_share_servers - creates service Nova VMs, one per generic # driver, and only if it is configured to mode without handling of share servers. 
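# A rough sketch of what this ends up writing for a generic backend that runs
# with driver_handles_share_servers=False (the group name and values below are
# illustrative only, not hardcoded by this function):
#   [generic1]
#   service_instance_name_or_id = <id of the manila_service_share_server_generic1 VM>
#   service_net_name_or_ip = <floating IP attached to that VM>
#   tenant_net_name_or_ip = $PRIVATE_NETWORK_NAME
# For DHSS=True backends it instead makes sure the 'admin_net'/'admin_subnet'
# Neutron resources exist and records their IDs as admin_network_id and
# admin_subnet_id.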
function create_service_share_servers { created_admin_network=false for BE in ${MANILA_ENABLED_BACKENDS//,/ }; do driver_handles_share_servers=$(iniget $MANILA_CONF $BE driver_handles_share_servers) share_driver=$(iniget $MANILA_CONF $BE share_driver) generic_driver='manila.share.drivers.generic.GenericShareDriver' if [[ $share_driver == $generic_driver ]]; then if [[ $(trueorfalse False driver_handles_share_servers) == False ]]; then vm_name='manila_service_share_server_'$BE local vm_exists=$( openstack server list --all-projects | grep " $vm_name " ) if [[ -z $vm_exists ]]; then private_net_id=$(openstack network show $PRIVATE_NETWORK_NAME -f value -c id) vm_id=$(openstack server create $vm_name \ --flavor $MANILA_SERVICE_VM_FLAVOR_NAME \ --image $MANILA_SERVICE_IMAGE_NAME \ --nic net-id=$private_net_id \ --security-group $MANILA_SERVICE_SECGROUP \ --key-name $MANILA_SERVICE_KEYPAIR_NAME \ | grep ' id ' | get_field 2) else vm_id=$(openstack server show $vm_name -f value -c id) fi floating_ip=$(openstack floating ip create $PUBLIC_NETWORK_NAME --subnet $PUBLIC_SUBNET_NAME | grep 'floating_ip_address' | get_field 2) # TODO(rishabh-d-dave): For time being circumvent the bug - # https://bugs.launchpad.net/python-openstackclient/+bug/1747721 # Once fixed, replace the following 3 lines by - # openstack server add floating ip $vm_id $floating_ip vm_port_id=$(openstack port list --server $vm_id -c ID -f \ value) openstack floating ip set --port $vm_port_id $floating_ip iniset $MANILA_CONF $BE service_instance_name_or_id $vm_id iniset $MANILA_CONF $BE service_net_name_or_ip $floating_ip iniset $MANILA_CONF $BE tenant_net_name_or_ip $PRIVATE_NETWORK_NAME else if is_service_enabled neutron; then if ! [[ -z $MANILA_ADMIN_NET_RANGE ]]; then if [ $created_admin_network == false ]; then project_id=$(openstack project show $SERVICE_PROJECT_NAME -c id -f value) local admin_net_id=$( openstack network show admin_net -f value -c id ) if [[ -z $admin_net_id ]]; then openstack network create admin_net --project $project_id admin_net_id=$(openstack network show admin_net -f value -c id) fi local admin_subnet_id=$( openstack subnet show admin_subnet -f value -c id ) if [[ -z $admin_subnet_id ]]; then openstack subnet create admin_subnet --project $project_id --ip-version 4 --network $admin_net_id --gateway None --subnet-range $MANILA_ADMIN_NET_RANGE admin_subnet_id=$(openstack subnet show admin_subnet -f value -c id) fi created_admin_network=true fi iniset $MANILA_CONF $BE admin_network_id $admin_net_id iniset $MANILA_CONF $BE admin_subnet_id $admin_subnet_id fi fi fi fi done configure_data_service_generic_driver } function configure_data_service_generic_driver { enabled_backends=(${MANILA_ENABLED_BACKENDS//,/ }) share_driver=$(iniget $MANILA_CONF ${enabled_backends[0]} share_driver) generic_driver='manila.share.drivers.generic.GenericShareDriver' if [[ $share_driver == $generic_driver ]]; then driver_handles_share_servers=$(iniget $MANILA_CONF ${enabled_backends[0]} driver_handles_share_servers) if [[ $(trueorfalse False driver_handles_share_servers) == False ]]; then iniset $MANILA_CONF DEFAULT data_node_access_ips $PUBLIC_NETWORK_GATEWAY else if ! [[ -z $MANILA_DATA_NODE_IP ]]; then iniset $MANILA_CONF DEFAULT data_node_access_ips $MANILA_DATA_NODE_IP fi fi fi } # create_manila_service_flavor - creates flavor, that will be used by backends # with configured generic driver to boot Nova VMs with. 
function create_manila_service_flavor { if is_service_enabled nova; then local flavor_exists=$( openstack flavor list | grep " $MANILA_SERVICE_VM_FLAVOR_NAME " ) if [[ -z $flavor_exists ]]; then # Create flavor for Manila's service VM openstack flavor create \ $MANILA_SERVICE_VM_FLAVOR_NAME \ --id $MANILA_SERVICE_VM_FLAVOR_REF \ --ram $MANILA_SERVICE_VM_FLAVOR_RAM \ --disk $MANILA_SERVICE_VM_FLAVOR_DISK \ --vcpus $MANILA_SERVICE_VM_FLAVOR_VCPUS fi fi } # create_manila_service_image - creates image, that will be used by backends # with configured generic driver to boot Nova VMs from. function create_manila_service_image { if is_service_enabled nova; then TOKEN=$(openstack token issue -c id -f value) local image_exists=$( openstack image list | grep " $MANILA_SERVICE_IMAGE_NAME " ) if [[ -z $image_exists ]]; then # Download Manila's image if is_service_enabled g-reg; then upload_image $MANILA_SERVICE_IMAGE_URL $TOKEN fi fi fi } # create_manila_service_secgroup - creates security group that is used by # Nova VMs when generic driver is configured. function create_manila_service_secgroup { # Create a secgroup if ! openstack security group list | grep -q $MANILA_SERVICE_SECGROUP; then openstack security group create $MANILA_SERVICE_SECGROUP --description "$MANILA_SERVICE_SECGROUP description" if ! timeout 30 sh -c "while ! openstack security group list | grep -q $MANILA_SERVICE_SECGROUP; do sleep 1; done"; then echo "Security group not created" exit 1 fi fi # Configure Security Group Rules if ! openstack security group rule list $MANILA_SERVICE_SECGROUP | grep -q icmp; then openstack security group rule create $MANILA_SERVICE_SECGROUP --protocol icmp fi if ! openstack security group rule list $MANILA_SERVICE_SECGROUP | grep -q " tcp .* 22 "; then openstack security group rule create $MANILA_SERVICE_SECGROUP --protocol tcp --dst-port 22 fi if ! openstack security group rule list $MANILA_SERVICE_SECGROUP | grep -q " tcp .* 2049 "; then openstack security group rule create $MANILA_SERVICE_SECGROUP --protocol tcp --dst-port 2049 fi if ! openstack security group rule list $MANILA_SERVICE_SECGROUP | grep -q " udp .* 2049 "; then openstack security group rule create $MANILA_SERVICE_SECGROUP --protocol udp --dst-port 2049 fi if ! openstack security group rule list $MANILA_SERVICE_SECGROUP | grep -q " udp .* 445 "; then openstack security group rule create $MANILA_SERVICE_SECGROUP --protocol udp --dst-port 445 fi if ! openstack security group rule list $MANILA_SERVICE_SECGROUP | grep -q " tcp .* 445 "; then openstack security group rule create $MANILA_SERVICE_SECGROUP --protocol tcp --dst-port 445 fi if ! openstack security group rule list $MANILA_SERVICE_SECGROUP | grep -q " tcp .* 139 "; then openstack security group rule create $MANILA_SERVICE_SECGROUP --protocol tcp --dst-port 137:139 fi if ! 
openstack security group rule list $MANILA_SERVICE_SECGROUP | grep -q " udp .* 139 "; then openstack security group rule create $MANILA_SERVICE_SECGROUP --protocol udp --dst-port 137:139 fi # List secgroup rules openstack security group rule list $MANILA_SERVICE_SECGROUP } # create_manila_accounts - Set up common required manila accounts function create_manila_accounts { create_service_user "manila" get_or_create_service "manila" "share" "Manila Shared Filesystem Service" get_or_create_endpoint "share" "$REGION_NAME" \ "$MANILA_ENDPOINT_BASE/v1/\$(project_id)s" # Set up Manila v2 service and endpoint get_or_create_service "manilav2" "sharev2" "Manila Shared Filesystem Service V2" get_or_create_endpoint "sharev2" "$REGION_NAME" \ "$MANILA_ENDPOINT_BASE/v2/\$(project_id)s" } # create_default_share_group_type - create share group type that will be set as default. function create_default_share_group_type { local type_exists=$( manila share-group-type-list | grep " $MANILA_DEFAULT_SHARE_GROUP_TYPE " ) if [[ -z $type_exists ]]; then manila share-group-type-create $MANILA_DEFAULT_SHARE_GROUP_TYPE $MANILA_DEFAULT_SHARE_TYPE fi if [[ $MANILA_DEFAULT_SHARE_GROUP_TYPE_SPECS ]]; then manila share-group-type-key $MANILA_DEFAULT_SHARE_GROUP_TYPE set $MANILA_DEFAULT_SHARE_GROUP_TYPE_SPECS fi } # create_default_share_type - create share type that will be set as default # if $MANILA_CONFIGURE_DEFAULT_TYPES is set to True, if set to False, the share # type identified by $MANILA_DEFAULT_SHARE_TYPE is still created, but not # configured as default. function create_default_share_type { enabled_backends=(${MANILA_ENABLED_BACKENDS//,/ }) driver_handles_share_servers=$(iniget $MANILA_CONF ${enabled_backends[0]} driver_handles_share_servers) local type_exists=$( manila type-list | grep " $MANILA_DEFAULT_SHARE_TYPE " ) if [[ -z $type_exists ]]; then local command_args="$MANILA_DEFAULT_SHARE_TYPE $driver_handles_share_servers" #if is_driver_enabled $MANILA_CONTAINER_DRIVER; then # # TODO(aovchinnikov): Remove this condition when Container driver supports # # snapshots # command_args="$command_args --snapshot_support false" #fi manila type-create $command_args fi if [[ $MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS ]]; then manila type-key $MANILA_DEFAULT_SHARE_TYPE set $MANILA_DEFAULT_SHARE_TYPE_EXTRA_SPECS fi } # create_custom_share_types - create share types suitable for both possible # driver modes with names "dhss_true" and "dhss_false". function create_custom_share_types { manila type-create dhss_true True if [[ $MANILA_DHSS_TRUE_SHARE_TYPE_EXTRA_SPECS ]]; then manila type-key dhss_true set $MANILA_DHSS_TRUE_SHARE_TYPE_EXTRA_SPECS fi manila type-create dhss_false False if [[ $MANILA_DHSS_FALSE_SHARE_TYPE_EXTRA_SPECS ]]; then manila type-key dhss_false set $MANILA_DHSS_FALSE_SHARE_TYPE_EXTRA_SPECS fi } # configure_backing_file - Set up backing file for LVM function configure_backing_file { sudo vgscan if ! sudo vgs $SHARE_GROUP; then if [ "$CONFIGURE_BACKING_FILE" = "True" ]; then SHARE_BACKING_FILE=${SHARE_BACKING_FILE:-$DATA_DIR/${SHARE_GROUP}-backing-file} # Only create if the file doesn't already exists [[ -f $SHARE_BACKING_FILE ]] || truncate -s $SHARE_BACKING_FILE_SIZE $SHARE_BACKING_FILE DEV=`sudo losetup -f --show $SHARE_BACKING_FILE` else DEV=$SHARE_BACKING_FILE fi # Only create if the loopback device doesn't contain $SHARE_GROUP if ! 
sudo vgs $SHARE_GROUP; then sudo vgcreate $SHARE_GROUP $DEV; fi fi mkdir -p $MANILA_STATE_PATH/shares mkdir -p /tmp/shares } # init_manila - Initializes database and creates manila dir if absent function init_manila { if is_service_enabled $DATABASE_BACKENDS; then # (re)create manila database recreate_database manila $MANILA_BIN_DIR/manila-manage db sync if [[ $(trueorfalse False MANILA_USE_DOWNGRADE_MIGRATIONS) == True ]]; then # Use both - upgrade and downgrade migrations to verify that # downgrade migrations do not break structure of Manila database. $MANILA_BIN_DIR/manila-manage db downgrade $MANILA_BIN_DIR/manila-manage db sync fi # Display version as debug-action (see bug/1473400) $MANILA_BIN_DIR/manila-manage db version fi if [ "$SHARE_DRIVER" == "manila.share.drivers.lvm.LVMShareDriver" ]; then if is_service_enabled m-shr; then # Configure a default volume group called '`lvm-shares`' for the share # service if it does not yet exist. If you don't wish to use a file backed # volume group, create your own volume group called ``stack-volumes`` before # invoking ``stack.sh``. # # By default, the backing file is 8G in size, and is stored in ``/opt/stack/data``. configure_backing_file fi elif [ "$SHARE_DRIVER" == $MANILA_CONTAINER_DRIVER ]; then if is_service_enabled m-shr; then SHARE_GROUP=$MANILA_CONTAINER_VOLUME_GROUP_NAME configure_backing_file fi elif [ "$SHARE_DRIVER" == "manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver" ]; then if is_service_enabled m-shr; then mkdir -p $MANILA_ZFSONLINUX_BACKEND_FILES_CONTAINER_DIR file_counter=0 MANILA_ZFSONLINUX_SERVICE_IP=${MANILA_ZFSONLINUX_SERVICE_IP:-"127.0.0.1"} for BE in ${MANILA_ENABLED_BACKENDS//,/ }; do if [[ $file_counter == 0 ]]; then # NOTE(vponomaryov): create two pools for first ZFS backend # to cover different use cases that are supported by driver: # - Support of more than one zpool for share backend. # - Support of nested datasets. 
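# The net effect for this first backend (mirroring the code right below, with
# the pool names used there) is the equivalent of:
#   zfs_zpool_list = alpha,betta/subdir
# where 'betta/subdir' is the nested dataset created inside the second zpool.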
local first_file="$MANILA_ZFSONLINUX_BACKEND_FILES_CONTAINER_DIR"/alpha local second_file="$MANILA_ZFSONLINUX_BACKEND_FILES_CONTAINER_DIR"/betta truncate -s $MANILA_ZFSONLINUX_ZPOOL_SIZE $first_file truncate -s $MANILA_ZFSONLINUX_ZPOOL_SIZE $second_file sudo zpool create alpha $first_file sudo zpool create betta $second_file # Create subdir (nested dataset) for second pool sudo zfs create betta/subdir iniset $MANILA_CONF $BE zfs_zpool_list alpha,betta/subdir elif [[ $file_counter == 1 ]]; then local file="$MANILA_ZFSONLINUX_BACKEND_FILES_CONTAINER_DIR"/gamma truncate -s $MANILA_ZFSONLINUX_ZPOOL_SIZE $file sudo zpool create gamma $file iniset $MANILA_CONF $BE zfs_zpool_list gamma else local filename=file"$file_counter" local file="$MANILA_ZFSONLINUX_BACKEND_FILES_CONTAINER_DIR"/"$filename" truncate -s $MANILA_ZFSONLINUX_ZPOOL_SIZE $file sudo zpool create $filename $file iniset $MANILA_CONF $BE zfs_zpool_list $filename fi iniset $MANILA_CONF $BE zfs_share_export_ip $MANILA_ZFSONLINUX_SHARE_EXPORT_IP iniset $MANILA_CONF $BE zfs_service_ip $MANILA_ZFSONLINUX_SERVICE_IP iniset $MANILA_CONF $BE zfs_dataset_creation_options $MANILA_ZFSONLINUX_DATASET_CREATION_OPTIONS iniset $MANILA_CONF $BE zfs_use_ssh $MANILA_ZFSONLINUX_USE_SSH iniset $MANILA_CONF $BE zfs_ssh_username $MANILA_ZFSONLINUX_SSH_USERNAME iniset $MANILA_CONF $BE replication_domain $MANILA_ZFSONLINUX_REPLICATION_DOMAIN iniset $MANILA_CONF $BE driver_handles_share_servers False let "file_counter=file_counter+1" done # Install the server's SSH key in our known_hosts file eval STACK_HOME=~$STACK_USER ssh-keyscan ${MANILA_ZFSONLINUX_SERVICE_IP} >> $STACK_HOME/.ssh/known_hosts # If the server is this machine, setup trust for ourselves (otherwise you're on your own) if [ "$MANILA_ZFSONLINUX_SERVICE_IP" = "127.0.0.1" ] || [ "$MANILA_ZFSONLINUX_SERVICE_IP" = "localhost" ] ; then # Trust our own SSH keys eval SSH_USER_HOME=~$MANILA_ZFSONLINUX_SSH_USERNAME cat $STACK_HOME/.ssh/*.pub >> $SSH_USER_HOME/.ssh/authorized_keys # Give ssh user sudo access echo "$MANILA_ZFSONLINUX_SSH_USERNAME ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers > /dev/null iniset $MANILA_CONF DEFAULT data_node_access_ips $MANILA_ZFSONLINUX_SERVICE_IP fi fi fi } # check_nfs_kernel_service_state_ubuntu- Make sure nfsd is running function check_nfs_kernel_service_state_ubuntu { # (aovchinnikov): Workaround for nfs-utils bug 1052264 if [[ $(sudo service nfs-kernel-server status &> /dev/null || echo 'fail') == 'fail' ]]; then echo "Apparently nfsd is not running. Trying to fix that." sudo mkdir -p "/media/nfsdonubuntuhelper" # (aovchinnikov): shell wrapping is needed for cases when a file to be written # is owned by root. sudo sh -c "echo '/media/nfsdonubuntuhelper 127.0.0.1(ro)' >> /etc/exports" sudo service nfs-kernel-server start fi if [[ $(sudo service nfs-kernel-server status &> /dev/null || echo 'fail') == 'fail' ]]; then echo "Failed to start nfsd. Exiting." exit 1 fi } function _install_nfs_and_samba { if is_ubuntu; then install_package nfs-kernel-server nfs-common samba check_nfs_kernel_service_state_ubuntu elif is_fedora; then install_package nfs-utils samba sudo systemctl enable smb.service sudo systemctl start smb.service sudo systemctl enable nfs-server.service sudo systemctl start nfs-server.service elif is_suse; then install_package nfs-kernel-server nfs-utils samba else echo "This distro is not supported. Skipping step of NFS and Samba installation." 
fi } # install_manilaclient - Collect source and prepare # In order to install from git, add LIBS_FROM_GIT="python-manilaclient" # to local.conf function install_manilaclient { if use_library_from_git "python-manilaclient"; then git_clone $MANILACLIENT_REPO $MANILACLIENT_DIR $MANILACLIENT_BRANCH setup_develop $MANILACLIENT_DIR else pip_install python-manilaclient fi } # install_manila - Collect source and prepare function install_manila { setup_develop $MANILA_DIR if is_service_enabled m-shr; then if [[ ! $(systemctl is-active nfs-ganesha.service) == 'active' ]] ; then if [ "$SHARE_DRIVER" != "manila.share.drivers.cephfs.driver.CephFSDriver" ] ; then _install_nfs_and_samba fi fi if [ "$SHARE_DRIVER" == "manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver" ]; then if [[ $(sudo zfs list &> /dev/null && sudo zpool list &> /dev/null || echo 'absent') == 'absent' ]]; then # ZFS not found, try to install it if is_ubuntu; then if [[ $(lsb_release -s -d) == *"14.04"* ]]; then # Trusty sudo apt-get install -y software-properties-common sudo apt-add-repository --yes ppa:zfs-native/stable # Workaround for bug #1609696 sudo apt-mark hold grub* sudo apt-get -y -q update && sudo apt-get -y -q upgrade # Workaround for bug #1609696 sudo apt-mark unhold grub* sudo apt-get install -y linux-headers-generic sudo apt-get install -y build-essential sudo apt-get install -y ubuntu-zfs elif [[ $(echo $(lsb_release -rs) '>=' 16.04 | bc -l) == 1 ]]; then # Xenial and beyond sudo apt-get install -y zfsutils-linux else echo "Only 'Trusty', 'Xenial' and newer releases of Ubuntu are supported." exit 1 fi else echo "Manila Devstack plugin supports installation "\ "of ZFS packages only for 'Ubuntu' distros. "\ "Please, install it first by other means or add its support "\ "for your distro." exit 1 fi sudo modprobe zfs sudo modprobe zpool fi check_nfs_kernel_service_state_ubuntu elif [ "$SHARE_DRIVER" == $MANILA_CONTAINER_DRIVER ]; then if is_ubuntu; then echo "Installing docker...." install_docker_ubuntu echo "Importing docker image" import_docker_service_image_ubuntu elif is_fedora; then echo "Installing docker...." install_docker_fedora echo "Importing docker image" # TODO(tbarron): See if using a fedora container image # is faster/smaller because of fewer extra dependencies. import_docker_service_image_ubuntu else echo "Manila Devstack plugin does not support Container Driver on"\ " distros other than Ubuntu or Fedora." exit 1 fi fi fi } #configure_samba - Configure node as Samba server function configure_samba { if [ "$SHARE_DRIVER" == "manila.share.drivers.lvm.LVMShareDriver" ]; then # TODO(vponomaryov): add here condition for ZFSonLinux driver too # when it starts to support SAMBA samba_daemon_name=smbd if is_service_enabled m-shr; then if is_fedora; then samba_daemon_name=smb fi sudo service $samba_daemon_name restart || echo "Couldn't restart '$samba_daemon_name' service" fi if [[ -e /usr/share/samba/smb.conf ]]; then sudo cp /usr/share/samba/smb.conf $SMB_CONF fi sudo chown $STACK_USER -R /etc/samba iniset $SMB_CONF global include registry iniset $SMB_CONF global security user if [ ! 
-d "$SMB_PRIVATE_DIR" ]; then sudo mkdir $SMB_PRIVATE_DIR sudo touch $SMB_PRIVATE_DIR/secrets.tdb fi for backend_name in ${MANILA_ENABLED_BACKENDS//,/ }; do iniset $MANILA_CONF $backend_name driver_handles_share_servers False iniset $MANILA_CONF $backend_name lvm_share_export_ips $MANILA_LVM_SHARE_EXPORT_IPS done iniset $MANILA_CONF DEFAULT data_node_access_ips $HOST_IP fi } # start_manila_api - starts manila API services and checks its availability function start_manila_api { # NOTE(vkmc) If both options are set to true we are using uwsgi # as the preferred way to deploy manila. See # https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html#uwsgi-vs-mod-wsgi # for more details if [ $(trueorfalse False MANILA_USE_UWSGI) == True ] && [ $(trueorfalse False MANILA_USE_MOD_WSGI) == True ]; then MSG="Both MANILA_USE_UWSGI and MANILA_USE_MOD_WSGI are set to True. Using UWSGI as the preferred option Set MANILA_USE_UWSGI to False to deploy manila api with MOD_WSGI" warn $LINENO $MSG fi if [ $(trueorfalse False MANILA_USE_UWSGI) == True ]; then echo "Deploying with UWSGI" run_process m-api "$MANILA_BIN_DIR/uwsgi --ini $MANILA_UWSGI_CONF --procname-prefix manila-api" elif [ $(trueorfalse False MANILA_USE_MOD_WSGI) == True ]; then echo "Deploying with MOD_WSGI" install_apache_wsgi enable_apache_site manila-api restart_apache_server tail_log m-api /var/log/$APACHE_NAME/manila_api.log else echo "Deploying with built-in server" run_process m-api "$MANILA_BIN_DIR/manila-api --config-file $MANILA_CONF" fi echo "Waiting for Manila API to start..." # This is a health check against the manila-api service we just started. # We use the port ($REAL_MANILA_SERVICE_PORT) here because we want to hit # the bare service endpoint, even if the tls tunnel should be enabled. # We're making sure that the internal port is checked using unencryted # traffic at this point. local MANILA_HEALTH_CHECK_URL=$MANILA_SERVICE_PROTOCOL://$MANILA_SERVICE_HOST:$REAL_MANILA_SERVICE_PORT if [ $(trueorfalse False MANILA_USE_UWSGI) == True ]; then MANILA_HEALTH_CHECK_URL=$MANILA_ENDPOINT_BASE fi if ! wait_for_service $SERVICE_TIMEOUT $MANILA_HEALTH_CHECK_URL; then die $LINENO "Manila API did not start" fi # Start proxies if enabled # # If tls-proxy is enabled and MANILA_USE_UWSGI is set to True, a generic # http-services-tls-proxy will be set up to handle tls-termination to # manila as well as all the other https services, we don't need to # create our own. if [ $(trueorfalse False MANILA_USE_UWSGI) == False ] && is_service_enabled tls-proxy; then start_tls_proxy manila '*' $MANILA_SERVICE_PORT $MANILA_SERVICE_HOST $MANILA_SERVICE_PORT_INT fi } # start_rest_of_manila - starts non-api manila services function start_rest_of_manila { run_process m-shr "$MANILA_BIN_DIR/manila-share --config-file $MANILA_CONF" run_process m-sch "$MANILA_BIN_DIR/manila-scheduler --config-file $MANILA_CONF" run_process m-dat "$MANILA_BIN_DIR/manila-data --config-file $MANILA_CONF" } # start_manila - start all manila services. This function is kept for compatibility # reasons with old approach. 
function start_manila { start_manila_api start_rest_of_manila } # stop_manila - Stop running processes function stop_manila { # Disable manila api service if [ $(trueorfalse False MANILA_USE_MOD_WSGI) == True ]; then disable_apache_site manila-api restart_apache_server else stop_process m-api fi # Kill all other manila processes for serv in m-sch m-shr m-dat; do stop_process $serv done } function install_manila_tempest_plugin { MANILA_TEMPEST_PLUGIN_REPO=${MANILA_TEMPEST_PLUGIN_REPO:-${GIT_BASE}/openstack/manila-tempest-plugin} MANILA_TEMPEST_PLUGIN_BRANCH=${MANILA_TEMPEST_PLUGIN_BRANCH:-master} MANILA_TEMPEST_PLUGIN_DIR=$DEST/manila-tempest-plugin git_clone $MANILA_TEMPEST_PLUGIN_REPO $MANILA_TEMPEST_PLUGIN_DIR $MANILA_TEMPEST_PLUGIN_BRANCH setup_develop $MANILA_TEMPEST_PLUGIN_DIR } # update_tempest - Function used for updating Tempest config if Tempest service enabled function update_tempest { if is_service_enabled tempest; then TEMPEST_CONFIG=${TEMPEST_CONFIG:-$TEMPEST_DIR/etc/tempest.conf} ADMIN_TENANT_NAME=${ADMIN_TENANT_NAME:-"admin"} ADMIN_DOMAIN_NAME=${ADMIN_DOMAIN_NAME:-"Default"} ADMIN_PASSWORD=${ADMIN_PASSWORD:-"secretadmin"} if [ $(trueorfalse False MANILA_USE_SERVICE_INSTANCE_PASSWORD) == True ]; then iniset $TEMPEST_CONFIG share image_password $MANILA_SERVICE_INSTANCE_PASSWORD fi iniset $TEMPEST_CONFIG share image_with_share_tools $MANILA_SERVICE_IMAGE_NAME iniset $TEMPEST_CONFIG auth admin_username ${ADMIN_USERNAME:-"admin"} iniset $TEMPEST_CONFIG auth admin_password ${ADMIN_PASSWORD:-"secretadmin"} iniset $TEMPEST_CONFIG auth admin_tenant_name $ADMIN_TENANT_NAME iniset $TEMPEST_CONFIG auth admin_domain_name $ADMIN_DOMAIN_NAME iniset $TEMPEST_CONFIG identity username ${TEMPEST_USERNAME:-"demo"} iniset $TEMPEST_CONFIG identity password $ADMIN_PASSWORD iniset $TEMPEST_CONFIG identity tenant_name ${TEMPEST_TENANT_NAME:-"demo"} iniset $TEMPEST_CONFIG identity domain_name $ADMIN_DOMAIN_NAME iniset $TEMPEST_CONFIG identity alt_username ${ALT_USERNAME:-"alt_demo"} iniset $TEMPEST_CONFIG identity alt_password $ADMIN_PASSWORD iniset $TEMPEST_CONFIG identity alt_tenant_name ${ALT_TENANT_NAME:-"alt_demo"} iniset $TEMPEST_CONFIG identity alt_domain_name $ADMIN_DOMAIN_NAME fi } function install_docker_ubuntu { sudo apt-get update install_package apparmor install_package docker.io } function install_docker_fedora { sudo yum install -y docker sudo systemctl enable docker sudo systemctl start docker } function download_image { local image_url=$1 local image image_fname image_fname=`basename "$image_url"` if [[ $image_url != file* ]]; then # Downloads the image (uec ami+akistyle), then extracts it. if [[ ! -f $FILES/$image_fname || "$(stat -c "%s" $FILES/$image_fname)" = "0" ]]; then wget --progress=dot:giga -c $image_url -O $FILES/$image_fname if [[ $? -ne 0 ]]; then echo "Not found: $image_url" return fi fi image="$FILES/${image_fname}" else # File based URL (RFC 1738): ``file://host/path`` # Remote files are not considered here. # unix: ``file:///home/user/path/file`` # windows: ``file:///C:/Documents%20and%20Settings/user/path/file`` image=$(echo $image_url | sed "s/^file:\/\///g") if [[ ! 
-f $image || "$(stat -c "%s" $image)" == "0" ]]; then echo "Not found: $image_url" return fi fi } function import_docker_service_image_ubuntu { GZIPPED_IMG_NAME=`basename "$MANILA_DOCKER_IMAGE_URL"` IMG_NAME_LOAD=${GZIPPED_IMG_NAME%.*} LOCAL_IMG_NAME=${IMG_NAME_LOAD%.*} if [[ "$(sudo docker images -q $LOCAL_IMG_NAME)" == "" ]]; then download_image $MANILA_DOCKER_IMAGE_URL # Import image in Docker gzip -d $FILES/$GZIPPED_IMG_NAME sudo docker load --input $FILES/$IMG_NAME_LOAD fi } function remove_docker_service_image { sudo docker rmi $MANILA_DOCKER_IMAGE_ALIAS } function install_libraries { if [ $(trueorfalse False MANILA_MULTI_BACKEND) == True ]; then if [ $(trueorfalse True RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS) == True ]; then if is_ubuntu; then install_package nfs-common else install_package nfs-utils fi fi fi } function setup_ipv6 { # This will fail with multiple default routes and is not needed in CI # but may be useful when developing with devstack locally if [ $(trueorfalse False MANILA_RESTORE_IPV6_DEFAULT_ROUTE) == True ]; then # save IPv6 default route to add back later after enabling forwarding local default_route=$(ip -6 route | grep default | cut -d ' ' -f1,2,3,4,5) fi # make sure those system values are set sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0 sudo sysctl -w net.ipv6.conf.all.accept_ra=2 sudo sysctl -w net.ipv6.conf.all.forwarding=1 # Disable in-band as our communication is only internal sudo ovs-vsctl set Bridge $PUBLIC_BRIDGE other_config:disable-in-band=true # Create address scopes and subnet pools openstack address scope create --share --ip-version 4 scope-v4 openstack address scope create --share --ip-version 6 scope-v6 openstack subnet pool create $SUBNETPOOL_NAME_V4 --default-prefix-length $SUBNETPOOL_SIZE_V4 --pool-prefix $SUBNETPOOL_PREFIX_V4 --address-scope scope-v4 --default --share openstack subnet pool create $SUBNETPOOL_NAME_V6 --default-prefix-length $SUBNETPOOL_SIZE_V6 --pool-prefix $SUBNETPOOL_PREFIX_V6 --address-scope scope-v6 --default --share # Create example private network and router openstack router create $Q_ROUTER_NAME openstack network create $PRIVATE_NETWORK_NAME openstack subnet create --ip-version 6 --use-default-subnet-pool --ipv6-address-mode $IPV6_ADDRESS_MODE --ipv6-ra-mode $IPV6_RA_MODE --network $PRIVATE_NETWORK_NAME $IPV6_PRIVATE_SUBNET_NAME openstack subnet create --ip-version 4 --use-default-subnet-pool --network $PRIVATE_NETWORK_NAME $PRIVATE_SUBNET_NAME openstack router add subnet $Q_ROUTER_NAME $IPV6_PRIVATE_SUBNET_NAME openstack router add subnet $Q_ROUTER_NAME $PRIVATE_SUBNET_NAME # Create public network openstack network create $PUBLIC_NETWORK_NAME --external --default --provider-network-type flat --provider-physical-network $PUBLIC_PHYSICAL_NETWORK local public_gateway_ipv6=$(openstack subnet create $IPV6_PUBLIC_SUBNET_NAME --ip-version 6 --network $PUBLIC_NETWORK_NAME --subnet-pool $SUBNETPOOL_NAME_V6 --no-dhcp -c gateway_ip -f value) local public_gateway_ipv4=$(openstack subnet create $PUBLIC_SUBNET_NAME --ip-version 4 --network $PUBLIC_NETWORK_NAME --subnet-range $FLOATING_RANGE --no-dhcp -c gateway_ip -f value) # Set router to use public network openstack router set --external-gateway $PUBLIC_NETWORK_NAME $Q_ROUTER_NAME # Configure interfaces due to NEUTRON_CREATE_INITIAL_NETWORKS=False local ipv4_cidr_len=${FLOATING_RANGE#*/} sudo ip -6 addr add "$public_gateway_ipv6"/$SUBNETPOOL_SIZE_V6 dev $PUBLIC_BRIDGE sudo ip addr add $PUBLIC_NETWORK_GATEWAY/"$ipv4_cidr_len" dev $PUBLIC_BRIDGE # Enabling interface is needed 
due to NEUTRON_CREATE_INITIAL_NETWORKS=False sudo ip link set $PUBLIC_BRIDGE up if [ "$SHARE_DRIVER" == "manila.share.drivers.lvm.LVMShareDriver" ]; then for backend_name in ${MANILA_ENABLED_BACKENDS//,/ }; do iniset $MANILA_CONF $backend_name lvm_share_export_ips $public_gateway_ipv4,$public_gateway_ipv6 done iniset $MANILA_CONF DEFAULT data_node_access_ips $public_gateway_ipv4 fi if [ "$SHARE_DRIVER" == "manila.share.drivers.cephfs.driver.CephFSDriver" ]; then for backend_name in ${MANILA_ENABLED_BACKENDS//,/ }; do iniset $MANILA_CONF $backend_name cephfs_ganesha_export_ips $public_gateway_ipv4,$public_gateway_ipv6 done iniset $MANILA_CONF DEFAULT data_node_access_ips $public_gateway_ipv4 fi # install Quagga for setting up the host routes dynamically install_package quagga # set Quagga daemons ( echo "zebra=yes" echo "bgpd=yes" echo "ospfd=no" echo "ospf6d=no" echo "ripd=no" echo "ripngd=no" echo "isisd=no" echo "babeld=no" ) | sudo tee /etc/quagga/daemons > /dev/null # set Quagga zebra.conf ( echo "hostname dsvm" echo "password openstack" echo "log file /var/log/quagga/zebra.log" ) | sudo tee /etc/quagga/zebra.conf > /dev/null # set Quagga vtysh.conf ( echo "service integrated-vtysh-config" echo "username quagga nopassword" ) | sudo tee /etc/quagga/vtysh.conf > /dev/null # set Quagga bgpd.conf ( echo "log file /var/log/quagga/bgpd.log" echo "bgp multiple-instance" echo "router bgp 200" echo " bgp router-id 1.2.3.4" echo " neighbor $public_gateway_ipv6 remote-as 100" echo " neighbor $public_gateway_ipv6 passive" echo " address-family ipv6" echo " neighbor $public_gateway_ipv6 activate" echo "line vty" echo "debug bgp events" echo "debug bgp filters" echo "debug bgp fsm" echo "debug bgp keepalives" echo "debug bgp updates" ) | sudo tee /etc/quagga/bgpd.conf > /dev/null # Quagga logging sudo mkdir -p /var/log/quagga sudo touch /var/log/quagga/zebra.log sudo touch /var/log/quagga/bgpd.log sudo chown -R quagga:quagga /var/log/quagga GetOSVersion QUAGGA_SERVICES="zebra bgpd" if [[ is_ubuntu && "$os_CODENAME" == "xenial" ]]; then # In Ubuntu Xenial, the services bgpd and zebra are under # one systemd unit: quagga QUAGGA_SERVICES="quagga" elif is_fedora; then # Disable SELinux rule that conflicts with Zebra sudo setsebool -P zebra_write_config 1 fi sudo systemctl enable $QUAGGA_SERVICES sudo systemctl restart $QUAGGA_SERVICES # log the systemd status sudo systemctl status $QUAGGA_SERVICES # This will fail with mutltiple default routes and is not needed in CI # but may be useful when developing with devstack locally if [ $(trueorfalse False MANILA_RESTORE_IPV6_DEFAULT_ROUTE) == True ]; then # add default IPv6 route back if ! [[ -z $default_route ]]; then # "replace" should ignore "RTNETLINK answers: File exists" # error if the route wasn't flushed by the bgp setup we did earlier. 
sudo ip -6 route replace $default_route fi fi } # Main dispatcher if [[ "$1" == "stack" && "$2" == "install" ]]; then echo_summary "Installing Manila Client" install_manilaclient echo_summary "Installing Manila" install_manila set_cinder_quotas elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then echo_summary "Configuring Manila" configure_manila echo_summary "Initializing Manila" init_manila echo_summary "Installing extra libraries" install_libraries echo_summary "Creating Manila entities for auth service" create_manila_accounts # Cinder config update if is_service_enabled cinder && [[ -n "$CINDER_OVERSUBSCRIPTION_RATIO" ]]; then CINDER_CONF=${CINDER_CONF:-/etc/cinder/cinder.conf} CINDER_ENABLED_BACKENDS=$(iniget $CINDER_CONF DEFAULT enabled_backends) for BN in ${CINDER_ENABLED_BACKENDS//,/ }; do iniset $CINDER_CONF $BN lvm_max_over_subscription_ratio $CINDER_OVERSUBSCRIPTION_RATIO done iniset $CINDER_CONF DEFAULT max_over_subscription_ratio $CINDER_OVERSUBSCRIPTION_RATIO fi elif [[ "$1" == "stack" && "$2" == "extra" ]]; then if is_service_enabled nova; then echo_summary "Creating Manila service flavor" create_manila_service_flavor echo_summary "Creating Manila service security group" create_manila_service_secgroup fi # Skip image downloads when disabled. # This way vendor Manila driver CI tests can skip # this potentially long and unnecessary download. if [ "$MANILA_SERVICE_IMAGE_ENABLED" = "True" ]; then echo_summary "Creating Manila service image" create_manila_service_image else echo_summary "Skipping download of Manila service image" fi if is_service_enabled nova; then echo_summary "Creating Manila service keypair" create_manila_service_keypair fi echo_summary "Configure Samba server" configure_samba echo_summary "Configuring IPv6" if [ $(trueorfalse False MANILA_SETUP_IPV6) == True ]; then setup_ipv6 fi echo_summary "Starting Manila API" start_manila_api # Workaround for bug #1660304 if [ "$SHARE_DRIVER" != "manila.share.drivers.generic.GenericShareDriver" ]; then echo_summary "Starting rest of Manila services - scheduler, share and data" start_rest_of_manila fi echo_summary "Creating Manila default share type" create_default_share_type echo_summary "Creating Manila default share group type" create_default_share_group_type echo_summary "Creating Manila custom share types" create_custom_share_types echo_summary "Manila UI is no longer enabled by default. \ Add enable_plugin manila-ui https://opendev.org/openstack/manila-ui \ to your local.conf file to enable Manila UI" elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then ########################################################################### # NOTE(vponomaryov): Workaround for bug #1660304 # We are able to create Nova VMs now only when last Nova step is performed # which is registration of cell0. It is registered as last action in # "post-extra" section. if is_service_enabled nova; then echo_summary "Creating Manila service VMs for generic driver \ backends for which handlng of share servers is disabled." 
create_service_share_servers fi if [ "$SHARE_DRIVER" == "manila.share.drivers.generic.GenericShareDriver" ]; then echo_summary "Starting rest of Manila services - scheduler, share and data" start_rest_of_manila fi ########################################################################### if [ $(trueorfalse False MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE) == True ]; then echo_summary "Fetching and installing manila-tempest-plugin system-wide" install_manila_tempest_plugin export DEPRECATED_TEXT="$DEPRECATED_TEXT\nInstalling manila-tempest-plugin can be done with the help of its own DevStack plugin by adding: \n\n\t'enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin'.\n\nManila's DevStack plugin will stop installing it automatically." fi echo_summary "Update Tempest config" update_tempest fi if [[ "$1" == "unstack" ]]; then cleanup_manila fi if [[ "$1" == "clean" ]]; then cleanup_manila sudo rm -rf /etc/manila fi # Restore xtrace $XTRACE manila-10.0.0/devstack/README.rst0000664000175000017500000000127413656750227016367 0ustar zuulzuul00000000000000====================== Enabling in Devstack ====================== We can enable the manila service in DevStack. For details, please refer to `development-environment-devstack`_, the following steps can be used as a quickstart reference: 1. Download DevStack 2. Add this repo as an external repository:: > cat local.conf [[local|localrc]] # Enable manila enable_plugin manila https://opendev.org/openstack/manila # Enable manila ui in the dashboard enable_plugin manila-ui https://opendev.org/openstack/manila-ui 3. run ``stack.sh`` .. _development-environment-devstack: https://docs.openstack.org/manila/latest/contributor/development-environment-devstack.html manila-10.0.0/devstack/upgrade/0000775000175000017500000000000013656750362016323 5ustar zuulzuul00000000000000manila-10.0.0/devstack/upgrade/from-newton/0000775000175000017500000000000013656750362020576 5ustar zuulzuul00000000000000manila-10.0.0/devstack/upgrade/from-newton/upgrade-manila0000664000175000017500000000111313656750227023403 0ustar zuulzuul00000000000000#!/usr/bin/env bash # ``upgrade-manila`` function configure_manila_upgrade { XTRACE=$(set +o | grep xtrace) set -o xtrace # Copy release-specific files sudo cp -f $TARGET_RELEASE_DIR/manila/etc/manila/rootwrap.d/* $MANILA_CONF_DIR/rootwrap.d sudo cp $TARGET_RELEASE_DIR/manila/etc/manila/api-paste.ini $MANILA_CONF_DIR/api-paste.ini sudo cp $TARGET_RELEASE_DIR/manila/etc/manila/policy.json $MANILA_CONF_DIR/policy.json sudo cp $TARGET_RELEASE_DIR/manila/etc/manila/rootwrap.conf $MANILA_CONF_DIR/rootwrap.conf # reset to previous state $XTRACE } manila-10.0.0/devstack/upgrade/settings0000664000175000017500000000175613656750227020117 0ustar zuulzuul00000000000000#!/bin/bash register_project_for_upgrade manila register_db_to_save manila export BASE_RUN_SMOKE=False export TARGET_RUN_SMOKE=False export ENABLE_TEMPEST=False export PYTHON3_VERSION=${PYTHON3_VERSION} devstack_localrc base enable_service manila m-api m-shr m-sch m-dat devstack_localrc base enable_plugin manila https://opendev.org/openstack/manila stable/train devstack_localrc base MANILA_UI_ENABLED=False devstack_localrc base OSLO_SERVICE_WORKS=True # NOTE(vponomaryov): stable client is used for keeping scenarios stable # so they are not broken by changed CLI view. 
devstack_localrc base MANILACLIENT_BRANCH="stable/train" devstack_localrc target enable_service manila m-api m-shr m-sch m-dat devstack_localrc target enable_plugin manila https://opendev.org/openstack/manila devstack_localrc target MANILA_UI_ENABLED=False devstack_localrc target OSLO_SERVICE_WORKS=True devstack_localrc target MANILA_USE_DOWNGRADE_MIGRATIONS=False devstack_localrc target MANILACLIENT_BRANCH="stable/train" manila-10.0.0/devstack/upgrade/shutdown.sh0000775000175000017500000000072013656750227020534 0ustar zuulzuul00000000000000#!/bin/bash # # set -o errexit source $GRENADE_DIR/grenaderc source $GRENADE_DIR/functions source $BASE_DEVSTACK_DIR/functions source $BASE_DEVSTACK_DIR/stackrc # needed for status directory # Locate the manila plugin and get its functions MANILA_DEVSTACK_DIR=$(dirname $(dirname $0)) source $MANILA_DEVSTACK_DIR/plugin.sh set -o xtrace stop_manila # Ensure everything is stopped ensure_services_stopped manila-api manila-share manila-scheduler manila-data manila-10.0.0/devstack/upgrade/resources.sh0000775000175000017500000004506613656750227020707 0ustar zuulzuul00000000000000#!/bin/bash set -o errexit source $GRENADE_DIR/grenaderc source $GRENADE_DIR/functions source $TOP_DIR/openrc admin demo set -o xtrace ################################# Settings #################################### # Access rules data specific to first enabled backend. MANILA_GRENADE_ACCESS_TYPE=${MANILA_GRENADE_ACCESS_TYPE:-"ip"} MANILA_GRENADE_ACCESS_TO=${MANILA_GRENADE_ACCESS_TO:-"127.0.0.1"} # Network information that will be used in case DHSS=True driver is used # with non-single-network-plugin. MANILA_GRENADE_NETWORK_NAME=${MANILA_GRENADE_NETWORK_NAME:-"private"} MANILA_GRENADE_SUBNET_NAME=${MANILA_GRENADE_SUBNET_NAME:-"private-subnet"} # Timeout that will be used for share creation wait operation. MANILA_GRENADE_WAIT_STEP=${MANILA_GRENADE_WAIT_STEP:-"4"} MANILA_GRENADE_WAIT_TIMEOUT=${MANILA_GRENADE_WAIT_TIMEOUT:-"300"} MANILA_GRENADE_SHARE_NETWORK_NAME=${MANILA_GRENADE_SHARE_NETWORK_NAME:-"manila_grenade_share_network"} MANILA_GRENADE_SHARE_TYPE_NAME=${MANILA_GRENADE_SHARE_TYPE_NAME:-"manila_grenade_share_type"} MANILA_GRENADE_SHARE_NAME=${MANILA_GRENADE_SHARE_NAME:-"manila_grenade_share"} MANILA_GRENADE_SHARE_SNAPSHOT_NAME=${MANILA_GRENADE_SHARE_SNAPSHOT_NAME:-"manila_grenade_share_snapshot"} # Extra specs that will be set for newly created share type MANILA_GRENADE_SHARE_TYPE_SNAPSHOT_SUPPORT_EXTRA_SPEC=${MANILA_GRENADE_SHARE_TYPE_SNAPSHOT_SUPPORT_EXTRA_SPEC:-"True"} MANILA_GRENADE_SHARE_TYPE_CREATE_SHARE_FROM_SNAPSHOT_SUPPORT_EXTRA_SPEC=${MANILA_GRENADE_SHARE_TYPE_CREATE_SHARE_FROM_SNAPSHOT_SUPPORT_EXTRA_SPEC:-"True"} MANILA_GRENADE_SHARE_TYPE_REVERT_TO_SNAPSHOT_SUPPORT_EXTRA_SPEC=${MANILA_GRENADE_SHARE_TYPE_REVERT_TO_SNAPSHOT_SUPPORT_EXTRA_SPEC:-"True"} MANILA_GRENADE_SHARE_TYPE_MOUNT_SNAPSHOT_SUPPORT_EXTRA_SPEC=${MANILA_GRENADE_SHARE_TYPE_MOUNT_SNAPSHOT_SUPPORT_EXTRA_SPEC:-"True"} MANILA_CONF_DIR=${MANILA_CONF_DIR:-/etc/manila} MANILA_CONF=$MANILA_CONF_DIR/manila.conf ################################ Scenarios #################################### function scenario_1_do_share_with_rules_and_metadata { # NOTE(vponomaryov): nova-network with DHSS=True drivers is not supported # by this scenario. 
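# In the simplest DHSS=False case, the commands assembled below reduce to
# roughly the following CLI calls (the share protocol and option values shown
# here are illustrative defaults, not hardcoded by this function):
#   manila type-create manila_grenade_share_type False --snapshot_support True ...
#   manila create NFS 1 --share-type manila_grenade_share_type --name manila_grenade_share
#   manila metadata manila_grenade_share set gre=nade
#   manila access-allow manila_grenade_share ip 127.0.0.1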
enabled_share_backends=$(iniget $MANILA_CONF DEFAULT enabled_share_backends) backend=$( echo $enabled_share_backends | cut -d',' -f 1 ) enabled_share_protocols=$(iniget $MANILA_CONF DEFAULT enabled_share_protocols) share_protocol=$( echo $enabled_share_protocols | cut -d',' -f 1 ) driver_handles_share_servers=$(iniget $MANILA_CONF $backend driver_handles_share_servers) create_share_cmd="manila create $share_protocol 1 " create_share_cmd+="--share-type $MANILA_GRENADE_SHARE_TYPE_NAME " create_share_cmd+="--name $MANILA_GRENADE_SHARE_NAME" if [[ $(trueorfalse False driver_handles_share_servers) == True ]]; then share_driver=$(iniget $MANILA_CONF $backend share_driver) generic_driver='manila.share.drivers.generic.GenericShareDriver' windows_driver='manila.share.drivers.windows.windows_smb_driver.WindowsSMBDriver' network_plugin=$(iniget $MANILA_CONF $backend network_plugin) share_network_cmd="manila share-network-create " share_network_cmd+="--name $MANILA_GRENADE_SHARE_NETWORK_NAME" if is_service_enabled neutron; then if [[ $share_driver == $generic_driver || \ $share_driver == $windows_driver || \ ! $network_plugin =~ 'Single' || \ ! $network_plugin =~ 'Standalone' ]]; then net_id=$(openstack network show $MANILA_GRENADE_NETWORK_NAME -c id -f value) subnet_id=$(openstack subnet show $MANILA_GRENADE_SUBNET_NAME -c id -f value) share_network_cmd+=" --neutron-net $net_id --neutron-subnet $subnet_id" fi else echo 'Neutron service is disabled, creating empty share-network' fi create_share_cmd+=" --share-network $MANILA_GRENADE_SHARE_NETWORK_NAME" resource_save manila share_network $MANILA_GRENADE_SHARE_NETWORK_NAME else resource_save manila share_network 'None' fi # Create share-network eval $share_network_cmd # Create share-type manila type-create \ $MANILA_GRENADE_SHARE_TYPE_NAME \ $driver_handles_share_servers \ --snapshot_support $MANILA_GRENADE_SHARE_TYPE_SNAPSHOT_SUPPORT_EXTRA_SPEC \ --create_share_from_snapshot_support $MANILA_GRENADE_SHARE_TYPE_CREATE_SHARE_FROM_SNAPSHOT_SUPPORT_EXTRA_SPEC \ --revert_to_snapshot_support $MANILA_GRENADE_SHARE_TYPE_REVERT_TO_SNAPSHOT_SUPPORT_EXTRA_SPEC \ --mount_snapshot_support $MANILA_GRENADE_SHARE_TYPE_MOUNT_SNAPSHOT_SUPPORT_EXTRA_SPEC # Create share eval $create_share_cmd # Wait for share creation results wait_timeout=$MANILA_GRENADE_WAIT_TIMEOUT available='false' while (( wait_timeout > 0 )) ; do current_status=$( manila show $MANILA_GRENADE_SHARE_NAME | \ grep " status " | get_field 2 ) if [[ $current_status == 'available' ]]; then available='true' break elif [[ $current_status == 'creating' ]]; then ((wait_timeout-=$MANILA_GRENADE_WAIT_STEP)) sleep $MANILA_GRENADE_WAIT_STEP elif [[ $current_status == 'error' ]]; then die $LINENO "Share is in 'error' state." else die $LINENO "Should never reach this line." fi done if [[ $available == 'true' ]]; then echo "Share has been created successfully." else die $LINENO "Share timed out to reach 'available' status." 
fi # Create some metadata manila metadata $MANILA_GRENADE_SHARE_NAME set gre=nade # Add access rules manila access-allow $MANILA_GRENADE_SHARE_NAME \ $MANILA_GRENADE_ACCESS_TYPE $MANILA_GRENADE_ACCESS_TO # Wait for access rule creation results wait_timeout=$MANILA_GRENADE_WAIT_TIMEOUT active='false' while (( wait_timeout > 0 )) ; do current_state=$( manila access-list $MANILA_GRENADE_SHARE_NAME | \ grep " $MANILA_GRENADE_ACCESS_TO " | get_field 5 ) case $current_state in active) active='true' break;; creating|new|queued_to_apply|applying) ((wait_timeout-=$MANILA_GRENADE_WAIT_STEP)) sleep $MANILA_GRENADE_WAIT_STEP;; error) die $LINENO "Failed to create access rule.";; *) die $LINENO "Should never reach this line.";; esac done if [[ $active == 'true' ]]; then echo "Access rule has been created successfully." else die $LINENO "Access rule timed out to reach 'active' state." fi } function scenario_1_verify_share_with_rules_and_metadata { share_status=$(manila show $MANILA_GRENADE_SHARE_NAME | \ grep " status " | get_field 2) if [[ $share_status != "available" ]]; then die $LINENO "Share status is not 'available'. It is $share_status" fi rule_state=$(manila access-list $MANILA_GRENADE_SHARE_NAME | \ grep " $MANILA_GRENADE_ACCESS_TO " | get_field 5) if [[ $rule_state != "active" ]]; then die $LINENO "Share rule state is not 'active'. It is $rule_state" fi metadata=$(manila metadata-show $MANILA_GRENADE_SHARE_NAME | \ grep 'gre' | get_field 2) if [[ $metadata != "nade" ]]; then die $LINENO "Share metadata is not 'gre=nade'. It is gre=$metadata" fi } function scenario_1_destroy_share_with_rules_and_metadata { manila delete $MANILA_GRENADE_SHARE_NAME wait_timeout=$MANILA_GRENADE_WAIT_TIMEOUT found='true' while (( wait_timeout > 0 )) ; do share_status=$( manila list --columns id,name,status | \ grep $MANILA_GRENADE_SHARE_NAME | get_field 3) if [[ -z $share_status ]]; then found='false' break elif [[ $share_status == 'deleting' ]]; then ((wait_timeout-=$MANILA_GRENADE_WAIT_STEP)) sleep $MANILA_GRENADE_WAIT_STEP elif [[ $share_status == 'error_deleting' ]]; then die $LINENO "Share failed to be deleted." else die $LINENO "Should never reach this line." fi done if [[ $found == 'true' ]]; then die $LINENO "Share timed out to be deleted." else echo "Share has been deleted successfully." fi share_network=$(resource_get manila share_network) if [[ -n $share_network && $share_network != 'None' ]]; then manila share-network-delete $MANILA_GRENADE_SHARE_NETWORK_NAME fi manila type-delete $MANILA_GRENADE_SHARE_TYPE_NAME } ##### function scenario_2_do_attach_ss_to_sn { manila security-service-create \ ldap \ --name fake_ss_name \ --description fake_ss_description \ --dns-ip fake_dns_ip \ --server fake_server \ --domain fake_domain \ --user fake_user \ --password fake_password manila share-network-create \ --name fake_sn_name \ --description fake_sn_description \ --neutron-net-id fake_net \ --neutron-subnet-id fake_subnet manila share-network-security-service-add fake_sn_name fake_ss_name } function scenario_2_verify_attach_ss_to_sn { attached_security_service=$(\ manila share-network-security-service-list fake_sn_name | \ grep "fake_ss_name") if [[ -z $attached_security_service ]] ; then die $LINENO "Security service 'fake_ss_name' is not attached "\ "to share-network 'fake_sn_name'." fi function assert { actual=$(manila $1 $2 | grep " $3 " | get_field 2) if [[ $actual != $4 ]]; then die $LINENO "Field $3 for command $1 with arg $2 has "\ "value $actual, but $4 is expected." 
fi } assert share-network-show fake_sn_name description fake_sn_description # From API version 2.51, share-network-show command doesn't have # neutron_net_id and neutron_subnet_id, that information is in # "share-network-subnets" assert "--os-share-api-version 2.50 share-network-show" fake_sn_name neutron_net_id fake_net assert "--os-share-api-version 2.50 share-network-show" fake_sn_name neutron_subnet_id fake_subnet share_network_subnets=$(manila share-network-show fake_sn_name | grep share_network_subnets) if [[ ! -z "$share_network_subnets" ]]; then neutron_net_id=$(echo $share_network_subnets | tr ',' '\n' | grep neutron_net_id | cut -d "'" -f4) neutron_subnet_id=$(echo $share_network_subnets | tr ',' '\n' | grep neutron_subnet_id | cut -d "'" -f4) if [[ $neutron_net_id != fake_net ]]; then die $LINENO "Neutron net ID for share network isn't fake_net, it is $neutron_net_id" fi if [[ $neutron_subnet_id != fake_subnet ]]; then die $LINENO "Neutron subnet ID for share network isn't fake_subnet, it is $neutron_subnet_id" fi fi assert security-service-show fake_ss_name description fake_ss_description assert security-service-show fake_ss_name dns_ip fake_dns_ip assert security-service-show fake_ss_name server fake_server assert security-service-show fake_ss_name domain fake_domain assert security-service-show fake_ss_name user fake_user assert security-service-show fake_ss_name password fake_password } function scenario_2_destroy_attach_ss_to_sn { manila share-network-delete fake_sn_name manila security-service-delete fake_ss_name } ##### function scenario_3_do_quotas { current_shares_quota=$(manila quota-show --tenant fake | \ grep " shares " | get_field 2) ((new_shares_quota=$current_shares_quota + 5)) manila quota-update fake --shares $new_shares_quota resource_save manila quota $new_shares_quota } function scenario_3_verify_quotas { shares_quota=$(manila quota-show --tenant fake | \ grep " shares " | get_field 2) expected=$(resource_get manila quota) if [[ $shares_quota != $expected ]] ; then die $LINENO "Shares quota for 'fake' tenant is expected "\ "as $expected but it is $shares_quota." fi } function scenario_3_destroy_quotas { manila quota-delete --tenant fake } ##### function scenario_4_do_private_share_types { manila type-create ${MANILA_GRENADE_SHARE_TYPE_NAME}_scenario4 false \ --is-public false manila type-access-add ${MANILA_GRENADE_SHARE_TYPE_NAME}_scenario4 \ $(openstack project show demo -c id -f value) } function scenario_4_verify_private_share_types { share_type_visibility=$(manila type-list --all \ --columns name,visibility | \ grep ${MANILA_GRENADE_SHARE_TYPE_NAME}_scenario4 | get_field 2) if [[ $share_type_visibility != 'private' ]] ; then die $LINENO "Visibility of share type "\ "${MANILA_GRENADE_SHARE_TYPE_NAME}_scenario4 is not "\ "'private'. It is $share_type_visibility" fi project_id=$(openstack project show demo -c id -f value) access=$(manila type-access-list \ ${MANILA_GRENADE_SHARE_TYPE_NAME}_scenario4 | grep $project_id) if [[ -z $access ]]; then die $LINENO "Expected $project_id project ID is not found in list "\ "of allowed projects of "\ "${MANILA_GRENADE_SHARE_TYPE_NAME}_scenario4 share type." 
fi } function scenario_4_destroy_private_share_types { manila type-delete ${MANILA_GRENADE_SHARE_TYPE_NAME}_scenario4 } ##### function scenario_5_do_share_snapshot { if [[ $(trueorfalse True MANILA_GRENADE_SHARE_TYPE_SNAPSHOT_SUPPORT_EXTRA_SPEC) == True ]]; then # Create share snapshot manila snapshot-create $MANILA_GRENADE_SHARE_NAME \ --name $MANILA_GRENADE_SHARE_SNAPSHOT_NAME resource_save manila share_snapshot $MANILA_GRENADE_SHARE_SNAPSHOT_NAME # Wait for share snapshot creation results wait_timeout=$MANILA_GRENADE_WAIT_TIMEOUT available='false' while (( wait_timeout > 0 )) ; do current_status=$( manila snapshot-show $MANILA_GRENADE_SHARE_SNAPSHOT_NAME | \ grep " status " | get_field 2 ) if [[ $current_status == 'available' ]]; then available='true' break elif [[ $current_status == 'creating' ]]; then ((wait_timeout-=$MANILA_GRENADE_WAIT_STEP)) sleep $MANILA_GRENADE_WAIT_STEP elif [[ $current_status == 'error' ]]; then die $LINENO "Share snapshot is in 'error' state." else die $LINENO "Should never reach this line." fi done if [[ $available == 'true' ]]; then echo "Share snapshot has been created successfully." else die $LINENO "Share snapshot timed out to reach 'available' status." fi else echo "Skipping scenario '5' with creation of share snapshot." fi } function scenario_5_verify_share_snapshot { if [[ $(trueorfalse True MANILA_GRENADE_SHARE_TYPE_SNAPSHOT_SUPPORT_EXTRA_SPEC) == True ]]; then # Check that source share ID is set share_id_in_snapshot=$( manila snapshot-show \ $MANILA_GRENADE_SHARE_SNAPSHOT_NAME \ | grep "| share_id " | get_field 2 ) if [[ -z $share_id_in_snapshot ]]; then die $LINENO "Source share ID is not set." fi # Check that snapshot's source share ID is correct share_id=$( manila show $MANILA_GRENADE_SHARE_NAME \ | grep "| id " | get_field 2 ) if [[ $share_id != $share_id_in_snapshot ]]; then die $LINENO "Actual source share ID '$share_id_in_snapshot' is not "\ "equal to expected value '$share_id'." fi # Check presence of expected columns in snapshot view snapshot_output=$( manila snapshot-show $MANILA_GRENADE_SHARE_SNAPSHOT_NAME ) for snapshot_column in 'id' 'provider_location' 'name' 'size' 'export_locations'; do echo $snapshot_output | grep "| $snapshot_column " if [[ $? != 0 ]]; then die $LINENO "'$snapshot_column' column was not found in output '$snapshot_output'" fi done fi } function scenario_5_destroy_share_snapshot { if [[ $(trueorfalse True MANILA_GRENADE_SHARE_TYPE_SNAPSHOT_SUPPORT_EXTRA_SPEC) == True ]]; then manila snapshot-delete $MANILA_GRENADE_SHARE_SNAPSHOT_NAME wait_timeout=$MANILA_GRENADE_WAIT_TIMEOUT found='true' while (( wait_timeout > 0 )) ; do snapshot_status=$( manila snapshot-list --columns id,name,status | \ grep $MANILA_GRENADE_SHARE_SNAPSHOT_NAME | get_field 3) if [[ -z $snapshot_status ]]; then found='false' break elif [[ $snapshot_status == 'deleting' ]]; then ((wait_timeout-=$MANILA_GRENADE_WAIT_STEP)) sleep $MANILA_GRENADE_WAIT_STEP elif [[ $snapshot_status == 'error_deleting' ]]; then die $LINENO "Share snapshot failed to be deleted." else die $LINENO "Should never reach this line." fi done if [[ $found == 'true' ]]; then die $LINENO "Share snapshot timed out to be deleted." else echo "Share snapshot has been deleted successfully." 
fi fi } ################################# Main logic ################################## function create { scenario_1_do_share_with_rules_and_metadata scenario_2_do_attach_ss_to_sn scenario_3_do_quotas scenario_4_do_private_share_types scenario_5_do_share_snapshot echo "Manila 'create': SUCCESS" } function verify { scenario_1_verify_share_with_rules_and_metadata scenario_2_verify_attach_ss_to_sn scenario_3_verify_quotas scenario_4_verify_private_share_types scenario_5_verify_share_snapshot echo "Manila 'verify': SUCCESS" } function destroy { scenario_5_destroy_share_snapshot scenario_1_destroy_share_with_rules_and_metadata scenario_2_destroy_attach_ss_to_sn scenario_3_destroy_quotas scenario_4_destroy_private_share_types echo "Manila 'destroy': SUCCESS" } function verify_noapi { : } ################################# Dispatcher ################################## case $1 in "create") create ;; "verify_noapi") verify_noapi ;; "verify") verify ;; "destroy") destroy ;; "force_destroy") set +o errexit destroy ;; esac ############################################################################### manila-10.0.0/devstack/upgrade/upgrade.sh0000775000175000017500000000421513656750227020313 0ustar zuulzuul00000000000000#!/usr/bin/env bash # ``upgrade-manila`` echo "*********************************************************************" echo "Begin $0" echo "*********************************************************************" # Clean up any resources that may be in use cleanup() { set +o errexit echo "*********************************************************************" echo "ERROR: Abort $0" echo "*********************************************************************" # Kill ourselves to signal any calling process trap 2; kill -2 $$ } trap cleanup SIGHUP SIGINT SIGTERM # Keep track of the grenade directory RUN_DIR=$(cd $(dirname "$0") && pwd) # Source params source $GRENADE_DIR/grenaderc # Import common functions source $GRENADE_DIR/functions # This script exits on an error so that errors don't compound and you see # only the first error that occurred. set -o errexit # Upgrade Manila # ============== # Locate manila devstack plugin, the directory above the # grenade plugin. MANILA_DEVSTACK_DIR=$(dirname $(dirname $0)) # Get functions from current DevStack source $TARGET_DEVSTACK_DIR/functions source $TARGET_DEVSTACK_DIR/lib/tls source $TARGET_DEVSTACK_DIR/stackrc source $(dirname $(dirname $BASH_SOURCE))/settings source $(dirname $(dirname $BASH_SOURCE))/plugin.sh # Print the commands being run so that we can see the command that triggers # an error. It is also useful for following allowing as the install occurs. 
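# A rough manual equivalent of what the steps below automate (install the
# target code, migrate the database, restart and verify the services) would
# be, for example, assuming the manila client is available as it is in this
# job:
#   $MANILA_BIN_DIR/manila-manage db sync
#   manila service-list    # each binary should eventually report state 'up'
# The helpers used below (install_manila, start_manila,
# ensure_services_started) come from the grenade and manila devstack plugin
# files sourced above.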
set -o xtrace # Save current config files for posterity [[ -d $SAVE_DIR/etc.manila ]] || cp -pr $MANILA_CONF_DIR $SAVE_DIR/etc.manila # Install the target manila install_manila # calls upgrade-manila for specific release upgrade_project manila $RUN_DIR $BASE_DEVSTACK_BRANCH $TARGET_DEVSTACK_BRANCH # Migrate the database $MANILA_BIN_DIR/manila-manage db sync || die $LINENO "DB migration error" start_manila # Don't succeed unless the services come up ensure_services_started manila-api manila-share manila-scheduler manila-data set +o xtrace echo "*********************************************************************" echo "SUCCESS: End $0" echo "*********************************************************************" manila-10.0.0/devstack/upgrade/from-mitaka/0000775000175000017500000000000013656750362020532 5ustar zuulzuul00000000000000manila-10.0.0/devstack/upgrade/from-mitaka/upgrade-manila0000664000175000017500000000111313656750227023337 0ustar zuulzuul00000000000000#!/usr/bin/env bash # ``upgrade-manila`` function configure_manila_upgrade { XTRACE=$(set +o | grep xtrace) set -o xtrace # Copy release-specific files sudo cp -f $TARGET_RELEASE_DIR/manila/etc/manila/rootwrap.d/* $MANILA_CONF_DIR/rootwrap.d sudo cp $TARGET_RELEASE_DIR/manila/etc/manila/api-paste.ini $MANILA_CONF_DIR/api-paste.ini sudo cp $TARGET_RELEASE_DIR/manila/etc/manila/policy.json $MANILA_CONF_DIR/policy.json sudo cp $TARGET_RELEASE_DIR/manila/etc/manila/rootwrap.conf $MANILA_CONF_DIR/rootwrap.conf # reset to previous state $XTRACE } manila-10.0.0/LICENSE0000664000175000017500000002363713656750227014110 0ustar zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. manila-10.0.0/playbooks/0000775000175000017500000000000013656750362015073 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/manila-tox-genconfig/0000775000175000017500000000000013656750362021101 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/manila-tox-genconfig/post.yaml0000664000175000017500000000043713656750227022756 0ustar zuulzuul00000000000000- hosts: all roles: - role: fetch-tox-output tasks: - name: Copy generated config sample file synchronize: src: "{{ zuul.project.src_dir }}/etc/manila/manila.conf.sample" dest: "{{ zuul.executor.log_root }}" mode: pull verify_host: true manila-10.0.0/playbooks/legacy/0000775000175000017500000000000013656750362016337 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs-centos-7/0000775000175000017500000000000013656750362027341 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs-centos-7/run.yaml0000664000175000017500000001071413656750227031034 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-minimal-dsvm-cephfs-nfs-centos-7 from old job gate-manila-tempest-minimal-dsvm-cephfs-nfs-centos-7-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' # Install centos-release-openstack-* needed for rabbitmq-server - name: Add centos-release-openstack-pike support become: yes yum: name: centos-release-openstack-pike state: present - name: Check for /etc/yum/vars/contentdir stat: path: /etc/yum/vars/contentdir register: yum_contentdir - when: not yum_contentdir.stat.exists block: - name: Discover package architecture command: rpm -q --qf "%{arch}" -f /etc/redhat-release register: rpm_arch - debug: msg: Package architecture is '{{ rpm_arch.stdout }}' - name: Set contentdir to altarch set_fact: yum_contentdir: altarch when: 
rpm_arch.stdout in ['aarch64', 'ppc64le'] - name: Populate /etc/yum/vars/contentdir copy: dest: /etc/yum/vars/contentdir content: "{{ yum_contentdir|default('centos') }}" become: true - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] SKIP_EPEL_INSTALL=True enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph # Enable CephFS as the backend for Manila. ENABLE_CEPH_MANILA=True # Disable Ceph as the storage backend for Nova. ENABLE_CEPH_NOVA=False # Disable Ceph as the storage backend for Glance. ENABLE_CEPH_GLANCE=False # Disable Ceph as the storage backend for Cinder. ENABLE_CEPH_CINDER=False # Disable Ceph as the storage backend for Cinder backup. ENABLE_CEPH_C_BAK=False # Set native or NFS variant of ceph driver MANILA_CEPH_DRIVER=cephfsnfs EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export ENABLED_SERVICES=tempest export PROJECTS="openstack/devstack-plugin-ceph $PROJECTS" export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit,tempest export OVERRIDE_ENABLED_SERVICES function pre_test_hook { # Configure Manila with a CephFS Native or NFS driver backend. # Refer to job-template pre_test_hook for more details on the # arguments. source $BASE/new/devstack-plugin-ceph/manila/pre_test_hook.sh \ false cephfsnfs singlebackend } export -f pre_test_hook function post_test_hook { # Configure and run Tempest API tests on Manila with a # CephFSNative driver backend. # Refer to job-template post_test_hook for more details on the # arguments. 
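# As in the other legacy jobs in this tree, the positional arguments
# below are, in order: the back end type ('singlebackend'), the driver
# codename ('cephfsnfs') and the test set to run ('api').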
source $BASE/new/devstack-plugin-ceph/manila/post_test_hook.sh \ singlebackend cephfsnfs api } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs-centos-7/post.yaml0000664000175000017500000000136713656750227031221 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs - name: Copy ganesha config and logs synchronize: src: '{{ item.src }}' dest: '{{ zuul.executor.log_root }}/{{ item.dest }}' mode: pull copy_links: true verify_host: true with_items: - src: /etc/ganesha dest: logs/etc/ - src: /var/log/ganesha dest: logs/ manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-postgres-generic-singlebackend/0000775000175000017500000000000013656750362030273 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-postgres-generic-singlebackend/run.yaml0000664000175000017500000000676313656750227031777 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-dsvm-postgres-generic-singlebackend from old job gate-manila-tempest-dsvm-postgres-generic-singlebackend-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_GATE_POSTGRES=1 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export ENABLED_SERVICES=tempest # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False export DEVSTACK_GATE_USE_PYTHON3=True function pre_test_hook { # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. # 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. source $BASE/new/manila/contrib/ci/pre_test_hook.sh \ 1 \ generic \ singlebackend } export -f pre_test_hook function post_test_hook { # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. 
# 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. source $BASE/new/manila/contrib/ci/post_test_hook.sh \ singlebackend \ generic \ api \ 1 } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-postgres-generic-singlebackend/post.yaml0000664000175000017500000000063313656750227032146 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs/0000775000175000017500000000000013656750362025026 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs/run.yaml0000664000175000017500000000604313656750227026521 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-dsvm-glusterfs-nfs from old job gate-manila-tempest-dsvm-glusterfs-nfs-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin # Enable devstack-plugin-glusterfs plugin, to install and configure GlusterFS. enable_plugin devstack-plugin-glusterfs https://opendev.org/x/devstack-plugin-glusterfs # Configure devstack-plugin-glusterfs to enable GlusterFS as a backend for Manila. CONFIGURE_GLUSTERFS_MANILA=True # Configure devstack-plugin-glusterfs to use respective GlusterFS driver variant. 
GLUSTERFS_MANILA_DRIVER_TYPE=glusterfs-nfs EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export ENABLED_SERVICES=tempest export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export PROJECTS="x/devstack-plugin-glusterfs $PROJECTS" export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False function pre_test_hook { # Configure devstack to run manila installation without handling of share servers source $BASE/new/devstack-plugin-glusterfs/manila/pre_test_hook.sh } export -f pre_test_hook function post_test_hook { # Configure and run tempest on singlebackend manila installation source $BASE/new/devstack-plugin-glusterfs/manila/post_test_hook.sh \ singlebackend \ glusterfs-nfs \ api } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs/post.yaml0000664000175000017500000000063313656750227026701 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-generic-scenario-custom-image/0000775000175000017500000000000013656750362030031 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-generic-scenario-custom-image/run.yaml0000664000175000017500000001023413656750227031521 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job legacy-manila-tempest-dsvm-generic-scenario-custom-image from old job gate-manila-tempest-dsvm-generic-scenario-custom-image-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_GATE_POSTGRES=0 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" # Install manila-image-elements project for building custom image export PROJECTS="openstack/manila-image-elements $PROJECTS" export ENABLED_SERVICES=tempest export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" function pre_test_hook { 
current_dir=$(pwd) # Go to 'manila-image-elements' dir, build image and get its name cd /opt/stack/new/manila-image-elements ./tools/gate/build-images generic_with_custom_image image_name=$(cat ./IMAGE_NAME) export MANILA_SERVICE_IMAGE_URL="file://$(pwd)/$image_name" export MANILA_SERVICE_IMAGE_NAME=$(basename -s .tar.gz $(basename -s .qcow2 $image_name)) # Return back to execution dir cd $current_dir # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. # 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. source $BASE/new/manila/contrib/ci/pre_test_hook.sh \ 1 \ generic_with_custom_image \ multibackend } export -f pre_test_hook function post_test_hook { # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. # 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. source $BASE/new/manila/contrib/ci/post_test_hook.sh \ multibackend \ generic_with_custom_image \ scenario \ 0 } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-generic-scenario-custom-image/post.yaml0000664000175000017500000000063313656750227031704 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-native/0000775000175000017500000000000013656750362025526 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-native/run.yaml0000664000175000017500000000605713656750227027226 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-dsvm-glusterfs-native from old job gate-manila-tempest-dsvm-glusterfs-native-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin # Enable devstack-plugin-glusterfs plugin, to install and configure GlusterFS. 
enable_plugin devstack-plugin-glusterfs https://opendev.org/x/devstack-plugin-glusterfs # Configure devstack-plugin-glusterfs to enable GlusterFS as a backend for Manila. CONFIGURE_GLUSTERFS_MANILA=True # Configure devstack-plugin-glusterfs to use respective GlusterFS driver variant. GLUSTERFS_MANILA_DRIVER_TYPE=glusterfs-native EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export ENABLED_SERVICES=tempest export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export PROJECTS="x/devstack-plugin-glusterfs $PROJECTS" # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False function pre_test_hook { # Configure devstack to run manila installation without handling of share servers source $BASE/new/devstack-plugin-glusterfs/manila/pre_test_hook.sh } export -f pre_test_hook function post_test_hook { # Configure and run tempest on singlebackend manila installation source $BASE/new/devstack-plugin-glusterfs/manila/post_test_hook.sh \ singlebackend \ glusterfs-native \ api } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-native/post.yaml0000664000175000017500000000063313656750227027401 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-container-scenario-custom-image/0000775000175000017500000000000013656750362030377 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-container-scenario-custom-image/run.yaml0000664000175000017500000001024613656750227032072 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job legacy-manila-tempest-dsvm-container-scenario-custom-image from old job gate-manila-tempest-dsvm-container-scenario-custom-image-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_GATE_POSTGRES=0 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" # Install manila-image-elements project for building custom image export 
PROJECTS="openstack/manila-image-elements $PROJECTS" export ENABLED_SERVICES=tempest export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" function pre_test_hook { current_dir=$(pwd) # Go to 'manila-image-elements' dir, build image and get its name cd /opt/stack/new/manila-image-elements ./tools/gate/build-images container_with_custom_image image_name=$(cat ./IMAGE_NAME) export MANILA_SERVICE_IMAGE_URL="file://$(pwd)/$image_name" export MANILA_SERVICE_IMAGE_NAME=$(basename -s .tar.gz $(basename -s .qcow2 $image_name)) # Return back to execution dir cd $current_dir # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. # 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. source $BASE/new/manila/contrib/ci/pre_test_hook.sh \ 1 \ container_with_custom_image \ multibackend } export -f pre_test_hook function post_test_hook { # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. # 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. source $BASE/new/manila/contrib/ci/post_test_hook.sh \ multibackend \ container_with_custom_image \ scenario \ 0 } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-container-scenario-custom-image/post.yaml0000664000175000017500000000063313656750227032252 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-native-heketi/0000775000175000017500000000000013656750362026775 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-native-heketi/run.yaml0000664000175000017500000000610413656750227030466 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-dsvm-glusterfs-native-heketi from old job gate-manila-tempest-dsvm-glusterfs-native-heketi-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila 
https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin # Enable devstack-plugin-glusterfs plugin, to install and configure GlusterFS. enable_plugin devstack-plugin-glusterfs https://opendev.org/x/devstack-plugin-glusterfs # Configure devstack-plugin-glusterfs to enable GlusterFS as a backend for Manila. CONFIGURE_GLUSTERFS_MANILA=True # Configure devstack-plugin-glusterfs to use respective GlusterFS driver variant. GLUSTERFS_MANILA_DRIVER_TYPE=glusterfs-native-heketi EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export ENABLED_SERVICES=tempest export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export PROJECTS="x/devstack-plugin-glusterfs $PROJECTS" # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False function pre_test_hook { # Configure devstack to run manila installation without handling of share servers source $BASE/new/devstack-plugin-glusterfs/manila/pre_test_hook.sh } export -f pre_test_hook function post_test_hook { # Configure and run tempest on singlebackend manila installation source $BASE/new/devstack-plugin-glusterfs/manila/post_test_hook.sh \ singlebackend \ glusterfs-heketi \ api } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-native-heketi/post.yaml0000664000175000017500000000063313656750227030650 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-mysql-generic/0000775000175000017500000000000013656750362025003 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-mysql-generic/run.yaml0000664000175000017500000000671713656750227026506 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-dsvm-mysql-generic from old job gate-manila-tempest-dsvm-mysql-generic-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export 
DEVSTACK_GATE_POSTGRES=0 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export ENABLED_SERVICES=tempest # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False export DEVSTACK_GATE_USE_PYTHON3=True function pre_test_hook { # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. # 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. source $BASE/new/manila/contrib/ci/pre_test_hook.sh \ 1 \ generic \ multibackend } export -f pre_test_hook function post_test_hook { # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. # 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. source $BASE/new/manila/contrib/ci/post_test_hook.sh \ multibackend \ generic \ api \ 0 } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-mysql-generic/post.yaml0000664000175000017500000000063313656750227026656 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-lvm/0000775000175000017500000000000013656750362024446 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-lvm/run.yaml0000664000175000017500000001042313656750227026136 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-minimal-dsvm-lvm from old job gate-manila-tempest-minimal-dsvm-lvm tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] SKIP_EPEL_INSTALL=True enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin enable_plugin neutron-dynamic-routing https://opendev.org/openstack/neutron-dynamic-routing EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export 
DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export MANILA_SETUP_IPV6=True export RUN_MANILA_IPV6_TESTS=True export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False # Basic services needed for minimal job OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit,tempest # Enable glance for scenario tests OVERRIDE_ENABLED_SERVICES+=,g-api,g-reg # Enable nova for scenario tests OVERRIDE_ENABLED_SERVICES+=,n-api,n-cpu,n-cond,n-sch,n-crt,n-cauth,n-obj # Enable neutron for scenario tests OVERRIDE_ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-l3,q-agt # Enable tls-proxy OVERRIDE_ENABLED_SERVICES+=,tls-proxy # Enable mandatory placement services for nova starting with ocata if [[ "stable/newton" != $ZUUL_BRANCH ]]; then OVERRIDE_ENABLED_SERVICES+=,placement-api,placement-client fi export OVERRIDE_ENABLED_SERVICES # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export PROJECTS="openstack/neutron-dynamic-routing $PROJECTS" export DEVSTACK_GATE_USE_PYTHON3=True function pre_test_hook { # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. # 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. source $BASE/new/manila/contrib/ci/pre_test_hook.sh False lvm multibackend } export -f pre_test_hook function post_test_hook { # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. # 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. 
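# With the values used by this LVM job, the call below therefore expands
# to: back_end_type=multibackend, driver=lvm, test_type=api,
# postgres_enabled=False.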
source $BASE/new/manila/contrib/ci/post_test_hook.sh multibackend lvm api False } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-lvm/post.yaml0000664000175000017500000000063313656750227026321 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-lvm/run-ipv6.yaml0000664000175000017500000001056213656750227027024 0ustar zuulzuul00000000000000- hosts: all name: IPv6 version of manila-tempest-minimal-dsvm-lvm tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] SKIP_EPEL_INSTALL=True enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin enable_plugin neutron-dynamic-routing https://opendev.org/openstack/neutron-dynamic-routing SERVICE_IP_VERSION=6 SERVICE_HOST="" EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export MANILA_SETUP_IPV6=True export RUN_MANILA_IPV6_TESTS=True export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False # Basic services needed for minimal job OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit,tempest # Enable glance for scenario tests OVERRIDE_ENABLED_SERVICES+=,g-api,g-reg # Enable nova for scenario tests OVERRIDE_ENABLED_SERVICES+=,n-api,n-cpu,n-cond,n-sch,n-crt,n-cauth,n-obj # Enable neutron for scenario tests OVERRIDE_ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-l3,q-agt # Enable tls-proxy OVERRIDE_ENABLED_SERVICES+=,tls-proxy # Enable mandatory placement services for nova starting with ocata if [[ "stable/newton" != $ZUUL_BRANCH ]]; then OVERRIDE_ENABLED_SERVICES+=,placement-api,placement-client fi export OVERRIDE_ENABLED_SERVICES # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export PROJECTS="openstack/neutron-dynamic-routing $PROJECTS" export DEVSTACK_GATE_USE_PYTHON3=True function pre_test_hook { # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. # 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. 
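    # One of the main differences between this IPv6 variant and the base lvm
    # job is the next task: in addition to the usual plugins it writes
    # SERVICE_IP_VERSION=6 and an empty SERVICE_HOST into the generated
    # local.conf.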
source $BASE/new/manila/contrib/ci/pre_test_hook.sh False lvm multibackend } export -f pre_test_hook function post_test_hook { cd $BASE/new/tempest/tools ./verify-ipv6-only-deployments.sh # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. # 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. source $BASE/new/manila/contrib/ci/post_test_hook.sh multibackend lvm api False } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-nfs-centos-7/0000775000175000017500000000000013656750362030137 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-nfs-centos-7/run.yaml0000664000175000017500000000727313656750227031640 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-minimal-py35-dsvm-cephfs-nfs-centos-7 from old job gate-manila-tempest-minimal-py35-dsvm-cephfs-nfs-centos-7-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' # Install centos-release-openstack-* needed for rabbitmq-server - name: Add centos-release-openstack-pike support become: yes yum: name: centos-release-openstack-pike state: present - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] SKIP_EPEL_INSTALL=True # swift is not ready for python3 yet disable_service s-account disable_service s-container disable_service s-object disable_service s-proxy enable_plugin manila https://opendev.org/openstack/manila enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph # Enable CephFS as the backend for Manila. ENABLE_CEPH_MANILA=True # Disable Ceph as the storage backend for Nova. ENABLE_CEPH_NOVA=False # Disable Ceph as the storage backend for Glance. ENABLE_CEPH_GLANCE=False # Disable Ceph as the storage backend for Cinder. ENABLE_CEPH_CINDER=False # Disable Ceph as the storage backend for Cinder backup. ENABLE_CEPH_C_BAK=False # Set native or NFS variant of ceph driver MANILA_CEPH_DRIVER=cephfsnfs EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export DEVSTACK_GATE_USE_PYTHON3=True export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export ENABLED_SERVICES=tempest export PROJECTS="openstack/python-manilaclient openstack/devstack-plugin-ceph $PROJECTS" export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" function pre_test_hook { # Configure Manila with a CephFS Native or NFS driver backend. 
# Refer to job-template pre_test_hook for more details on the # arguments. source $BASE/new/devstack-plugin-ceph/manila/pre_test_hook.sh \ false cephfsnfs singlebackend } export -f pre_test_hook function post_test_hook { # Configure and run Tempest API tests on Manila with a # CephFSNative driver backend. # Refer to job-template post_test_hook for more details on the # arguments. source $BASE/new/devstack-plugin-ceph/manila/post_test_hook.sh \ singlebackend cephfsnfs api } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-nfs-centos-7/post.yaml0000664000175000017500000000136713656750227032017 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs - name: Copy ganesha config and logs synchronize: src: '{{ item.src }}' dest: '{{ zuul.executor.log_root }}/{{ item.dest }}' mode: pull copy_links: true verify_host: true with_items: - src: /etc/ganesha dest: logs/etc/ - src: /var/log/ganesha dest: logs/ manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-dummy/0000775000175000017500000000000013656750362025003 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-dummy/run.yaml0000664000175000017500000000655713656750227026510 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-minimal-dsvm-dummy from old job gate-manila-tempest-minimal-dsvm-dummy-ubuntu-xenial tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" # Basic services needed for minimal job export OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit,tempest,tls-proxy # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False export DEVSTACK_GATE_USE_PYTHON3=True function pre_test_hook { # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. # 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. 
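# Descriptive note (editorial): per the argument descriptions above, the call below passes dhss=False, driver=dummy, back_end_type=multibackend for this dummy-driver job.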
source $BASE/new/manila/contrib/ci/pre_test_hook.sh False dummy multibackend } export -f pre_test_hook function post_test_hook { # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. # 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. source $BASE/new/manila/contrib/ci/post_test_hook.sh multibackend dummy api False } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-dummy/post.yaml0000664000175000017500000000063313656750227026656 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-generic-no-share-servers/0000775000175000017500000000000013656750362027041 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-generic-no-share-servers/run.yaml0000664000175000017500000000674713656750227030547 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-dsvm-generic-no-share-servers from old job gate-manila-tempest-dsvm-generic-no-share-servers-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_GATE_POSTGRES=0 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export ENABLED_SERVICES=tempest # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export DEVSTACK_GATE_USE_PYTHON3=True export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False function pre_test_hook { # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. # 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. 
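# Descriptive note (editorial): per the argument descriptions above, the call below passes dhss=0 (share servers not handled, matching the job name), driver=generic, back_end_type=multibackend.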
source $BASE/new/manila/contrib/ci/pre_test_hook.sh \ 0 \ generic \ multibackend } export -f pre_test_hook function post_test_hook { # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. # 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. source $BASE/new/manila/contrib/ci/post_test_hook.sh \ multibackend \ generic \ api \ 0 } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-generic-no-share-servers/post.yaml0000664000175000017500000000063313656750227030714 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-hdfs/0000775000175000017500000000000013656750362023150 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-hdfs/run.yaml0000664000175000017500000000517713656750227024652 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job legacy-manila-tempest-dsvm-hdfs from old job gate-manila-tempest-dsvm-hdfs-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin # Enable devstack-plugin-hdfs plugin, to install and configure HDFS. 
enable_plugin devstack-plugin-hdfs https://opendev.org/x/devstack-plugin-hdfs EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export ENABLED_SERVICES=tempest export PROJECTS="x/devstack-plugin-hdfs $PROJECTS" export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False function pre_test_hook { # Configure devstack to run manila installation without handling of share servers source $BASE/new/devstack-plugin-hdfs/manila/pre_test_hook.sh } export -f pre_test_hook function post_test_hook { # Configure and run tempest on multi-backend manila installation source $BASE/new/devstack-plugin-hdfs/manila/post_test_hook.sh } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-hdfs/post.yaml0000664000175000017500000000063313656750227025023 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native-centos-7/0000775000175000017500000000000013656750362030041 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native-centos-7/run.yaml0000664000175000017500000001073313656750227031535 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-minimal-dsvm-cephfs-native-centos-7 from old job gate-manila-tempest-minimal-dsvm-cephfs-native-centos-7-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' # Install centos-release-openstack-* needed for rabbitmq-server - name: Add centos-release-openstack-pike support become: yes yum: name: centos-release-openstack-pike state: present - name: Check for /etc/yum/vars/contentdir stat: path: /etc/yum/vars/contentdir register: yum_contentdir - when: not yum_contentdir.stat.exists block: - name: Discover package architecture command: rpm -q --qf "%{arch}" -f /etc/redhat-release register: rpm_arch - debug: msg: Package architecture is '{{ rpm_arch.stdout }}' - name: Set contentdir to altarch set_fact: yum_contentdir: altarch when: rpm_arch.stdout in ['aarch64', 'ppc64le'] - name: Populate /etc/yum/vars/contentdir copy: dest: /etc/yum/vars/contentdir content: "{{ yum_contentdir|default('centos') }}" become: true - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] SKIP_EPEL_INSTALL=True enable_plugin manila https://opendev.org/openstack/manila enable_plugin 
manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph # Enable CephFS as the backend for Manila. ENABLE_CEPH_MANILA=True # Disable Ceph as the storage backend for Nova. ENABLE_CEPH_NOVA=False # Disable Ceph as the storage backend for Glance. ENABLE_CEPH_GLANCE=False # Disable Ceph as the storage backend for Cinder. ENABLE_CEPH_CINDER=False # Disable Ceph as the storage backend for Cinder backup. ENABLE_CEPH_C_BAK=False # Set native or NFS variant of ceph driver MANILA_CEPH_DRIVER=cephfsnative EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export ENABLED_SERVICES=tempest export PROJECTS="openstack/devstack-plugin-ceph $PROJECTS" export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit,tempest export OVERRIDE_ENABLED_SERVICES function pre_test_hook { # Configure Manila with a CephFS Native or NFS driver backend. # Refer to job-template pre_test_hook for more details on the # arguments. source $BASE/new/devstack-plugin-ceph/manila/pre_test_hook.sh \ false cephfsnative singlebackend } export -f pre_test_hook function post_test_hook { # Configure and run Tempest API tests on Manila with a # CephFSNative driver backend. # Refer to job-template post_test_hook for more details on the # arguments. source $BASE/new/devstack-plugin-ceph/manila/post_test_hook.sh \ singlebackend cephfsnative api } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native-centos-7/post.yaml0000664000175000017500000000063313656750227031714 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-postgres-container/0000775000175000017500000000000013656750362026052 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-postgres-container/run.yaml0000664000175000017500000000673513656750227027555 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-dsvm-postgres-container from old job gate-manila-tempest-dsvm-postgres-container-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin 
manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_GATE_POSTGRES=1 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export ENABLED_SERVICES=tempest # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False export DEVSTACK_GATE_USE_PYTHON3=True function pre_test_hook { # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. # 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. source $BASE/new/manila/contrib/ci/pre_test_hook.sh \ 1 \ container \ multibackend } export -f pre_test_hook function post_test_hook { # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. # 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. source $BASE/new/manila/contrib/ci/post_test_hook.sh \ multibackend \ container \ api \ 1 } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-postgres-container/post.yaml0000664000175000017500000000063313656750227027725 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs/0000775000175000017500000000000013656750362025704 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs/run.yaml0000664000175000017500000001041213656750227027372 0ustar zuulzuul00000000000000- hosts: all name: manila-tempest-minimal-dsvm-cephfs-nfs tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin enable_plugin neutron-dynamic-routing https://opendev.org/openstack/neutron-dynamic-routing enable_plugin 
devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph # Enable CephFS as the backend for Manila. ENABLE_CEPH_MANILA=True # Disable Ceph as the storage backend for Nova. ENABLE_CEPH_NOVA=False # Disable Ceph as the storage backend for Glance. ENABLE_CEPH_GLANCE=False # Disable Ceph as the storage backend for Cinder. ENABLE_CEPH_CINDER=False # Disable Ceph as the storage backend for Cinder backup. ENABLE_CEPH_C_BAK=False # Set native or NFS variant of ceph driver MANILA_CEPH_DRIVER=cephfsnfs EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False export MANILA_SETUP_IPV6=True export RUN_MANILA_IPV6_TESTS=True # Basic services needed for minimal job OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit,tempest # Enable glance for scenario tests OVERRIDE_ENABLED_SERVICES+=,g-api,g-reg # Enable nova for scenario tests OVERRIDE_ENABLED_SERVICES+=,n-api,n-cpu,n-cond,n-sch,n-crt,n-cauth,n-obj # Enable neutron for scenario tests OVERRIDE_ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-l3,q-agt # Enable tls-proxy OVERRIDE_ENABLED_SERVICES+=,tls-proxy OVERRIDE_ENABLED_SERVICES+=,placement-api,placement-client export OVERRIDE_ENABLED_SERVICES # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 PROJECTS="openstack/devstack-plugin-ceph $PROJECTS" PROJECTS="openstack/manila-tempest-plugin $PROJECTS" PROJECTS="openstack/neutron-dynamic-routing $PROJECTS" export PROJECTS export DEVSTACK_GATE_USE_PYTHON3=True function pre_test_hook { # Configure Manila with a CephFS Native or NFS driver backend. # Refer to job-template pre_test_hook for more details on the # arguments. source $BASE/new/devstack-plugin-ceph/manila/pre_test_hook.sh \ false cephfsnfs singlebackend } export -f pre_test_hook function post_test_hook { # Configure and run Tempest API tests on Manila with a # CephFSNative driver backend. # Refer to job-template post_test_hook for more details on the # arguments. 
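# Descriptive note (editorial, assumed): the positional arguments below presumably map to back_end_type=singlebackend, driver=cephfsnfs, test_type=api_with_scenario; see the devstack-plugin-ceph post_test_hook for the authoritative signature.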
source $BASE/new/devstack-plugin-ceph/manila/post_test_hook.sh \ singlebackend cephfsnfs api_with_scenario } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs/post.yaml0000664000175000017500000000136713656750227027564 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs - name: Copy ganesha config and logs synchronize: src: '{{ item.src }}' dest: '{{ zuul.executor.log_root }}/{{ item.dest }}' mode: pull copy_links: true verify_host: true with_items: - src: /etc/ganesha dest: logs/etc/ - src: /var/log/ganesha dest: logs/ manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs-heketi/0000775000175000017500000000000013656750362026275 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs-heketi/run.yaml0000664000175000017500000000610313656750227027765 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-dsvm-glusterfs-nfs-heketi from old job gate-manila-tempest-dsvm-glusterfs-nfs-heketi-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin # Enable devstack-plugin-glusterfs plugin, to install and configure GlusterFS. enable_plugin devstack-plugin-glusterfs https://opendev.org/x/devstack-plugin-glusterfs # Configure devstack-plugin-glusterfs to enable GlusterFS as a backend for Manila. CONFIGURE_GLUSTERFS_MANILA=True # Configure devstack-plugin-glusterfs to use respective GlusterFS driver variant. 
GLUSTERFS_MANILA_DRIVER_TYPE=glusterfs-nfs-heketi EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export ENABLED_SERVICES=tempest export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export PROJECTS="x/devstack-plugin-glusterfs $PROJECTS" # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False function pre_test_hook { # Configure devstack to run manila installation without handling of share servers source $BASE/new/devstack-plugin-glusterfs/manila/pre_test_hook.sh } export -f pre_test_hook function post_test_hook { # Configure and run tempest on singlebackend manila installation source $BASE/new/devstack-plugin-glusterfs/manila/post_test_hook.sh \ singlebackend \ glusterfs-nfs-heketi \ api } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs-heketi/post.yaml0000664000175000017500000000063313656750227030150 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-postgres-zfsonlinux/0000775000175000017500000000000013656750362026307 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-postgres-zfsonlinux/run.yaml0000664000175000017500000000674113656750227030007 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-dsvm-postgres-zfsonlinux from old job gate-manila-tempest-dsvm-postgres-zfsonlinux-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_GATE_POSTGRES=1 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export ENABLED_SERVICES=tempest # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False export DEVSTACK_GATE_USE_PYTHON3=True function pre_test_hook { # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. 
# 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. source $BASE/new/manila/contrib/ci/pre_test_hook.sh \ 0 \ zfsonlinux \ multibackend } export -f pre_test_hook function post_test_hook { # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. # 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. source $BASE/new/manila/contrib/ci/post_test_hook.sh \ multibackend \ zfsonlinux \ api \ 1 } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-postgres-zfsonlinux/post.yaml0000664000175000017500000000063313656750227030162 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-native-centos-7/0000775000175000017500000000000013656750362030637 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-native-centos-7/run.yaml0000664000175000017500000000731213656750227032332 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-minimal-py35-dsvm-cephfs-native-centos-7 from old job gate-manila-tempest-minimal-py35-dsvm-cephfs-native-centos-7-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' # Install centos-release-openstack-* needed for rabbitmq-server - name: Add centos-release-openstack-pike support become: yes yum: name: centos-release-openstack-pike state: present - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] SKIP_EPEL_INSTALL=True # swift is not ready for python3 yet disable_service s-account disable_service s-container disable_service s-object disable_service s-proxy enable_plugin manila https://opendev.org/openstack/manila enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph # Enable CephFS as the backend for Manila. ENABLE_CEPH_MANILA=True # Disable Ceph as the storage backend for Nova. ENABLE_CEPH_NOVA=False # Disable Ceph as the storage backend for Glance. ENABLE_CEPH_GLANCE=False # Disable Ceph as the storage backend for Cinder. ENABLE_CEPH_CINDER=False # Disable Ceph as the storage backend for Cinder backup. 
ENABLE_CEPH_C_BAK=False # Set native or NFS variant of ceph driver MANILA_CEPH_DRIVER=cephfsnative EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export DEVSTACK_GATE_USE_PYTHON3=True export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export ENABLED_SERVICES=tempest export PROJECTS="openstack/python-manilaclient openstack/devstack-plugin-ceph $PROJECTS" export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" function pre_test_hook { # Configure Manila with a CephFS Native or NFS driver backend. # Refer to job-template pre_test_hook for more details on the # arguments. source $BASE/new/devstack-plugin-ceph/manila/pre_test_hook.sh \ false cephfsnative singlebackend } export -f pre_test_hook function post_test_hook { # Configure and run Tempest API tests on Manila with a # CephFSNative driver backend. # Refer to job-template post_test_hook for more details on the # arguments. source $BASE/new/devstack-plugin-ceph/manila/post_test_hook.sh \ singlebackend cephfsnative api } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-native-centos-7/post.yaml0000664000175000017500000000063313656750227032512 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/grenade-dsvm-manila/0000775000175000017500000000000013656750362022152 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/grenade-dsvm-manila/run.yaml0000664000175000017500000000465313656750227023652 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job legacy-grenade-dsvm-manila from old job gate-grenade-dsvm-manila-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PROJECTS="openstack/grenade $PROJECTS" export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=0 export DEVSTACK_GATE_TEMPEST_NOTESTS=1 export DEVSTACK_GATE_GRENADE=pullup export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False # Basic services needed for grenade manila job using dummy driver export OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit export DEVSTACK_GATE_USE_PYTHON3=True # Enable manila grenade plugin. Provided repo should be # cloned by zuul before devstack run and below provided # link should not be used. 
export GRENADE_PLUGINRC="enable_grenade_plugin manila https://opendev.org/openstack/manila" # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 function pre_test_hook { source $BASE/new/manila/contrib/ci/pre_test_hook.sh \ True \ dummy \ multibackend } export -f pre_test_hook export BRANCH_OVERRIDE=default if [ "$BRANCH_OVERRIDE" != "default" ] ; then export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE fi cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/grenade-dsvm-manila/post.yaml0000664000175000017500000000063313656750227024025 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native/0000775000175000017500000000000013656750362026404 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native/run.yaml0000664000175000017500000000671713656750227030107 0ustar zuulzuul00000000000000- hosts: all name: manila-tempest-minimal-dsvm-cephfs-native tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph # Enable CephFS as the backend for Manila. ENABLE_CEPH_MANILA=True # Disable Ceph as the storage backend for Nova. ENABLE_CEPH_NOVA=False # Disable Ceph as the storage backend for Glance. ENABLE_CEPH_GLANCE=False # Disable Ceph as the storage backend for Cinder. ENABLE_CEPH_CINDER=False # Disable Ceph as the storage backend for Cinder backup. ENABLE_CEPH_C_BAK=False # Set native or NFS variant of ceph driver MANILA_CEPH_DRIVER=cephfsnative EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export ENABLED_SERVICES=tempest export PROJECTS="openstack/devstack-plugin-ceph $PROJECTS" export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit,tempest export OVERRIDE_ENABLED_SERVICES export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False export DEVSTACK_GATE_USE_PYTHON3=True function pre_test_hook { # Configure Manila with a CephFS Native or NFS driver backend. # Refer to job-template pre_test_hook for more details on the # arguments. 
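# Descriptive note (editorial, assumed): the positional arguments below presumably map to dhss=false, driver=cephfsnative, back_end_type=singlebackend, by analogy with manila's own pre_test_hook; see the devstack-plugin-ceph pre_test_hook for the authoritative signature.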
source $BASE/new/devstack-plugin-ceph/manila/pre_test_hook.sh \ false cephfsnative singlebackend } export -f pre_test_hook function post_test_hook { # Configure and run Tempest API tests on Manila with a # CephFSNative driver backend. # Refer to job-template post_test_hook for more details on the # arguments. source $BASE/new/devstack-plugin-ceph/manila/post_test_hook.sh \ singlebackend cephfsnative api } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native/post.yaml0000664000175000017500000000063313656750227030257 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-scenario/0000775000175000017500000000000013656750362024027 5ustar zuulzuul00000000000000manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-scenario/run.yaml0000664000175000017500000000671213656750227025525 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job manila-tempest-dsvm-scenario from old job gate-manila-tempest-dsvm-scenario-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x cat << 'EOF' >>"/tmp/dg-local.conf" [[local|localrc]] enable_plugin manila https://opendev.org/openstack/manila enable_plugin manila-tempest-plugin https://opendev.org/openstack/manila-tempest-plugin EOF executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export DEVSTACK_GATE_NEUTRON=1 export DEVSTACK_GATE_POSTGRES=0 export DEVSTACK_PROJECT_FROM_GIT="python-manilaclient" export ENABLED_SERVICES=tempest # Keep localrc to be able to set some vars in pre_test_hook export KEEP_LOCALRC=1 export PROJECTS="openstack/manila-tempest-plugin $PROJECTS" export MANILA_INSTALL_TEMPEST_PLUGIN_SYSTEMWIDE=False export DEVSTACK_GATE_USE_PYTHON3=True function pre_test_hook { # 'dhss' - acronym for 'Driver Handles Share Servers', # defines mode of a share driver. Boolean-like. # 'driver' - codename of a share driver to configure. # 'back_end_type' - defines which installation Manila should # have - either 'singlebackend' or 'multibackend'. source $BASE/new/manila/contrib/ci/pre_test_hook.sh \ 1 \ generic \ multibackend } export -f pre_test_hook function post_test_hook { # 'back_end_type' - defines which installation Manila is # configured to - either 'singlebackend' or 'multibackend'. # 'driver' - codename of a share driver that is configured in # Manila. It is used for enabling/disabling tests that are not # supported by share driver that is used. 
# 'test_type' - defines which set of test suites should be used, # can have 'api' and 'scenario' values. # 'postgres_enabled' - set of test suites depends on DB backend # in some cases, so it is provided explicitely. Boolean-like. source $BASE/new/manila/contrib/ci/post_test_hook.sh \ multibackend \ generic \ scenario \ 0 } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' manila-10.0.0/playbooks/legacy/manila-tempest-dsvm-scenario/post.yaml0000664000175000017500000000063313656750227025702 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs manila-10.0.0/.coveragerc0000664000175000017500000000014713656750227015213 0ustar zuulzuul00000000000000[run] branch = True source = manila omit = manila/test* concurrency = eventlet [report] precision = 0 manila-10.0.0/manila.egg-info/0000775000175000017500000000000013656750362016023 5ustar zuulzuul00000000000000manila-10.0.0/manila.egg-info/entry_points.txt0000664000175000017500000000542113656750362021323 0ustar zuulzuul00000000000000[console_scripts] manila-all = manila.cmd.all:main manila-api = manila.cmd.api:main manila-data = manila.cmd.data:main manila-manage = manila.cmd.manage:main manila-rootwrap = oslo_rootwrap.cmd:main manila-scheduler = manila.cmd.scheduler:main manila-share = manila.cmd.share:main manila-status = manila.cmd.status:main [manila.scheduler.filters] AvailabilityZoneFilter = manila.scheduler.filters.availability_zone:AvailabilityZoneFilter CapabilitiesFilter = manila.scheduler.filters.capabilities:CapabilitiesFilter CapacityFilter = manila.scheduler.filters.capacity:CapacityFilter ConsistentSnapshotFilter = manila.scheduler.filters.share_group_filters.consistent_snapshot:ConsistentSnapshotFilter CreateFromSnapshotFilter = manila.scheduler.filters.create_from_snapshot:CreateFromSnapshotFilter DriverFilter = manila.scheduler.filters.driver:DriverFilter IgnoreAttemptedHostsFilter = manila.scheduler.filters.ignore_attempted_hosts:IgnoreAttemptedHostsFilter JsonFilter = manila.scheduler.filters.json:JsonFilter RetryFilter = manila.scheduler.filters.retry:RetryFilter ShareReplicationFilter = manila.scheduler.filters.share_replication:ShareReplicationFilter [manila.scheduler.weighers] CapacityWeigher = manila.scheduler.weighers.capacity:CapacityWeigher GoodnessWeigher = manila.scheduler.weighers.goodness:GoodnessWeigher HostAffinityWeigher = manila.scheduler.weighers.host_affinity:HostAffinityWeigher PoolWeigher = manila.scheduler.weighers.pool:PoolWeigher [manila.share.drivers.dell_emc.plugins] isilon = manila.share.drivers.dell_emc.plugins.isilon.isilon:IsilonStorageConnection powermax = manila.share.drivers.dell_emc.plugins.powermax.connection:PowerMaxStorageConnection unity = manila.share.drivers.dell_emc.plugins.unity.connection:UnityStorageConnection vnx = manila.share.drivers.dell_emc.plugins.vnx.connection:VNXStorageConnection [manila.tests.scheduler.fakes] FakeWeigher1 = manila.tests.scheduler.fakes:FakeWeigher1 FakeWeigher2 = manila.tests.scheduler.fakes:FakeWeigher2 [oslo.config.opts] manila = manila.opts:list_opts [oslo.config.opts.defaults] manila = 
manila.common.config:set_middleware_defaults [oslo.policy.enforcer] manila = manila.policy:get_enforcer [oslo.policy.policies] manila = manila.policies:list_rules [oslo_messaging.notify.drivers] manila.openstack.common.notifier.log_notifier = oslo_messaging.notify._impl_log:LogDriver manila.openstack.common.notifier.no_op_notifier = oslo_messaging.notify._impl_noop:NoOpDriver manila.openstack.common.notifier.rpc_notifier = oslo_messaging.notify.messaging:MessagingDriver manila.openstack.common.notifier.rpc_notifier2 = oslo_messaging.notify.messaging:MessagingV2Driver manila.openstack.common.notifier.test_notifier = oslo_messaging.notify._impl_test:TestDriver [wsgi_scripts] manila-wsgi = manila.wsgi.wsgi:initialize_application manila-10.0.0/manila.egg-info/dependency_links.txt0000664000175000017500000000000113656750362022071 0ustar zuulzuul00000000000000 manila-10.0.0/manila.egg-info/requires.txt0000664000175000017500000000147713656750362020434 0ustar zuulzuul00000000000000pbr!=2.1.0,>=2.0.0 alembic>=0.8.10 Babel!=2.4.0,>=2.3.4 eventlet!=0.23.0,!=0.25.0,>=0.22.0 greenlet>=0.4.10 lxml!=3.7.0,>=3.4.1 netaddr>=0.7.18 oslo.config>=5.2.0 oslo.context>=2.19.2 oslo.db>=4.27.0 oslo.i18n>=3.15.3 oslo.log>=3.36.0 oslo.messaging>=6.4.0 oslo.middleware>=3.31.0 oslo.policy>=1.30.0 oslo.reports>=1.18.0 oslo.rootwrap>=5.8.0 oslo.serialization!=2.19.1,>=2.18.0 oslo.service>=2.1.1 oslo.upgradecheck>=0.1.0 oslo.utils>=3.40.2 oslo.concurrency>=3.26.0 paramiko>=2.0.0 Paste>=2.0.2 PasteDeploy>=1.5.0 pyparsing>=2.1.0 python-neutronclient>=6.7.0 keystoneauth1>=3.4.0 keystonemiddleware>=4.17.0 requests>=2.14.2 retrying!=1.3.0,>=1.2.3 Routes>=2.3.1 six>=1.10.0 SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 stevedore>=1.20.0 tooz>=1.58.0 python-cinderclient!=4.0.0,>=3.3.0 python-novaclient>=9.1.0 WebOb>=1.7.1 manila-10.0.0/manila.egg-info/pbr.json0000664000175000017500000000005713656750362017503 0ustar zuulzuul00000000000000{"git_version": "653092e5", "is_release": true}manila-10.0.0/manila.egg-info/PKG-INFO0000664000175000017500000000507613656750362017130 0ustar zuulzuul00000000000000Metadata-Version: 1.2 Name: manila Version: 10.0.0 Summary: Shared Storage for OpenStack Home-page: https://docs.openstack.org/manila/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/manila.svg :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on ====== MANILA ====== You have come across an OpenStack shared file system service. It has identified itself as "Manila." It was abstracted from the Cinder project. 
* Wiki: https://wiki.openstack.org/wiki/Manila * Developer docs: https://docs.openstack.org/manila/latest/ Getting Started --------------- If you'd like to run from the master branch, you can clone the git repo: git clone https://opendev.org/openstack/manila For developer information please see `HACKING.rst `_ You can raise bugs here https://bugs.launchpad.net/manila Python client ------------- https://opendev.org/openstack/python-manilaclient * Documentation for the project can be found at: https://docs.openstack.org/manila/latest/ * Release notes for the project can be found at: https://docs.openstack.org/releasenotes/manila/ * Source for the project: https://opendev.org/openstack/manila * Bugs: https://bugs.launchpad.net/manila * Blueprints: https://blueprints.launchpad.net/manila * Design specifications are tracked at: https://specs.openstack.org/openstack/manila-specs/ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Requires-Python: >=3.6 manila-10.0.0/manila.egg-info/top_level.txt0000664000175000017500000000000713656750362020552 0ustar zuulzuul00000000000000manila manila-10.0.0/manila.egg-info/not-zip-safe0000664000175000017500000000000113656750362020251 0ustar zuulzuul00000000000000 manila-10.0.0/manila.egg-info/SOURCES.txt0000664000175000017500000025437513656750362017727 0ustar zuulzuul00000000000000.coveragerc .pylintrc .stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE README.rst babel.cfg bindep.txt lower-constraints.txt requirements.txt setup.cfg setup.py test-requirements.txt tox.ini api-ref/source/availability-zones.inc api-ref/source/conf.py api-ref/source/experimental.inc api-ref/source/extensions.inc api-ref/source/index.rst api-ref/source/limits.inc api-ref/source/os-share-manage.inc api-ref/source/parameters.yaml api-ref/source/quota-classes.inc api-ref/source/quota-sets.inc api-ref/source/scheduler-stats.inc api-ref/source/security-services.inc api-ref/source/services.inc api-ref/source/share-access-rule-metadata.inc api-ref/source/share-access-rules.inc api-ref/source/share-actions.inc api-ref/source/share-export-locations.inc api-ref/source/share-group-snapshots.inc api-ref/source/share-group-types.inc api-ref/source/share-groups.inc api-ref/source/share-instance-export-locations.inc api-ref/source/share-instances.inc api-ref/source/share-metadata.inc api-ref/source/share-migration.inc api-ref/source/share-networks.inc api-ref/source/share-replica-export-locations.inc api-ref/source/share-replicas.inc api-ref/source/share-servers.inc api-ref/source/share-types.inc api-ref/source/shares.inc api-ref/source/snapshot-instances.inc api-ref/source/snapshots.inc api-ref/source/status.yaml api-ref/source/user-messages.inc api-ref/source/versions.inc api-ref/source/samples/availability-zones-list-response.json api-ref/source/samples/export-location-list-response.json api-ref/source/samples/export-location-show-response.json api-ref/source/samples/extensions-list-response.json api-ref/source/samples/limits-response.json api-ref/source/samples/pools-list-detailed-response.json api-ref/source/samples/pools-list-response.json 
api-ref/source/samples/quota-classes-show-response.json api-ref/source/samples/quota-classes-update-request.json api-ref/source/samples/quota-classes-update-response.json api-ref/source/samples/quota-show-detail-response.json api-ref/source/samples/quota-show-response.json api-ref/source/samples/quota-update-request.json api-ref/source/samples/quota-update-response.json api-ref/source/samples/security-service-create-request.json api-ref/source/samples/security-service-create-response.json api-ref/source/samples/security-service-show-response.json api-ref/source/samples/security-service-update-request.json api-ref/source/samples/security-service-update-response.json api-ref/source/samples/security-services-list-detailed-response.json api-ref/source/samples/security-services-list-for-share-network-response.json api-ref/source/samples/security-services-list-response.json api-ref/source/samples/service-disable-request.json api-ref/source/samples/service-disable-response.json api-ref/source/samples/service-enable-request.json api-ref/source/samples/service-enable-response.json api-ref/source/samples/services-list-response.json api-ref/source/samples/services-list-with-filters-response.json api-ref/source/samples/share-access-rules-list-response.json api-ref/source/samples/share-access-rules-show-response.json api-ref/source/samples/share-access-rules-update-metadata-request.json api-ref/source/samples/share-access-rules-update-metadata-response.json api-ref/source/samples/share-actions-extend-request.json api-ref/source/samples/share-actions-force-delete-request.json api-ref/source/samples/share-actions-grant-access-request.json api-ref/source/samples/share-actions-grant-access-response.json api-ref/source/samples/share-actions-list-access-rules-request.json api-ref/source/samples/share-actions-list-access-rules-response.json api-ref/source/samples/share-actions-reset-state-request.json api-ref/source/samples/share-actions-revert-to-snapshot-request.json api-ref/source/samples/share-actions-revoke-access-request.json api-ref/source/samples/share-actions-shrink-request.json api-ref/source/samples/share-actions-unmanage-request.json api-ref/source/samples/share-create-request.json api-ref/source/samples/share-create-response.json api-ref/source/samples/share-group-create-request.json api-ref/source/samples/share-group-create-response.json api-ref/source/samples/share-group-reset-state-request.json api-ref/source/samples/share-group-show-response.json api-ref/source/samples/share-group-snapshot-actions-reset-state-request.json api-ref/source/samples/share-group-snapshot-create-request.json api-ref/source/samples/share-group-snapshot-create-response.json api-ref/source/samples/share-group-snapshot-show-response.json api-ref/source/samples/share-group-snapshot-update-request.json api-ref/source/samples/share-group-snapshot-update-response.json api-ref/source/samples/share-group-snapshots-list-detailed-response.json api-ref/source/samples/share-group-snapshots-list-members-response.json api-ref/source/samples/share-group-snapshots-list-response.json api-ref/source/samples/share-group-type-create-request.json api-ref/source/samples/share-group-type-create-response.json api-ref/source/samples/share-group-type-grant-access-request.json api-ref/source/samples/share-group-type-revoke-access-request.json api-ref/source/samples/share-group-type-set-request.json api-ref/source/samples/share-group-type-set-response.json api-ref/source/samples/share-group-types-default-list-response.json 
api-ref/source/samples/share-group-types-group-specs-list-response.json api-ref/source/samples/share-group-types-list-access-response.json api-ref/source/samples/share-group-types-list-response.json api-ref/source/samples/share-group-update-request.json api-ref/source/samples/share-group-update-response.json api-ref/source/samples/share-groups-list-response.json api-ref/source/samples/share-instance-actions-force-delete-request.json api-ref/source/samples/share-instance-actions-reset-state-request.json api-ref/source/samples/share-instances-list-response.json api-ref/source/samples/share-manage-request.json api-ref/source/samples/share-manage-response.json api-ref/source/samples/share-network-add-security-service-request.json api-ref/source/samples/share-network-add-security-service-response.json api-ref/source/samples/share-network-create-request.json api-ref/source/samples/share-network-create-response.json api-ref/source/samples/share-network-remove-security-service-request.json api-ref/source/samples/share-network-remove-security-service-response.json api-ref/source/samples/share-network-show-response.json api-ref/source/samples/share-network-update-request.json api-ref/source/samples/share-network-update-response.json api-ref/source/samples/share-networks-list-detailed-response.json api-ref/source/samples/share-networks-list-response.json api-ref/source/samples/share-replica-create-request.json api-ref/source/samples/share-replica-create-response.json api-ref/source/samples/share-replica-export-location-list-response.json api-ref/source/samples/share-replica-export-location-show-response.json api-ref/source/samples/share-replicas-force-delete-request.json api-ref/source/samples/share-replicas-list-detail-response.json api-ref/source/samples/share-replicas-list-response.json api-ref/source/samples/share-replicas-reset-replica-state-request.json api-ref/source/samples/share-replicas-reset-state-request.json api-ref/source/samples/share-replicas-show-response.json api-ref/source/samples/share-server-manage-request.json api-ref/source/samples/share-server-manage-response.json api-ref/source/samples/share-server-reset-state-request.json api-ref/source/samples/share-server-show-details-response.json api-ref/source/samples/share-server-show-response.json api-ref/source/samples/share-server-unmanage-request.json api-ref/source/samples/share-servers-list-response.json api-ref/source/samples/share-set-metadata-request.json api-ref/source/samples/share-set-metadata-response.json api-ref/source/samples/share-show-instance-response.json api-ref/source/samples/share-show-metadata-item-response.json api-ref/source/samples/share-show-metadata-response.json api-ref/source/samples/share-show-response.json api-ref/source/samples/share-type-create-request.json api-ref/source/samples/share-type-create-response.json api-ref/source/samples/share-type-grant-access-request.json api-ref/source/samples/share-type-revoke-access-request.json api-ref/source/samples/share-type-set-request.json api-ref/source/samples/share-type-set-response.json api-ref/source/samples/share-type-show-response.json api-ref/source/samples/share-type-update-request.json api-ref/source/samples/share-type-update-response.json api-ref/source/samples/share-types-default-list-response.json api-ref/source/samples/share-types-extra-specs-list-response.json api-ref/source/samples/share-types-list-access-response.json api-ref/source/samples/share-types-list-response.json api-ref/source/samples/share-update-metadata-request.json 
api-ref/source/samples/share-update-metadata-response.json api-ref/source/samples/share-update-null-metadata-request.json api-ref/source/samples/share-update-null-metadata-response.json api-ref/source/samples/share-update-request.json api-ref/source/samples/share-update-response.json api-ref/source/samples/shares-list-detailed-response.json api-ref/source/samples/shares-list-response.json api-ref/source/samples/snapshot-actions-force-delete-request.json api-ref/source/samples/snapshot-actions-reset-state-request.json api-ref/source/samples/snapshot-actions-unmanage-request.json api-ref/source/samples/snapshot-create-request.json api-ref/source/samples/snapshot-create-response.json api-ref/source/samples/snapshot-instance-actions-reset-state-request.json api-ref/source/samples/snapshot-instance-show-response.json api-ref/source/samples/snapshot-instances-list-response.json api-ref/source/samples/snapshot-instances-list-with-detail-response.json api-ref/source/samples/snapshot-manage-request.json api-ref/source/samples/snapshot-manage-response.json api-ref/source/samples/snapshot-show-response.json api-ref/source/samples/snapshot-update-request.json api-ref/source/samples/snapshot-update-response.json api-ref/source/samples/snapshots-list-detailed-response.json api-ref/source/samples/snapshots-list-response.json api-ref/source/samples/user-message-show-response.json api-ref/source/samples/user-messages-list-response.json api-ref/source/samples/versions-get-version-response.json api-ref/source/samples/versions-index-response.json contrib/ci/common.sh contrib/ci/post_test_hook.sh contrib/ci/pre_test_hook.sh contrib/share_driver_hooks/README.rst contrib/share_driver_hooks/zaqar_notification.py contrib/share_driver_hooks/zaqar_notification_example_consumer.py contrib/share_driver_hooks/zaqarclientwrapper.py devstack/README.rst devstack/apache-manila.template devstack/plugin.sh devstack/settings devstack/files/debs/manila devstack/files/rpms/manila devstack/files/rpms-suse/manila devstack/upgrade/resources.sh devstack/upgrade/settings devstack/upgrade/shutdown.sh devstack/upgrade/upgrade.sh devstack/upgrade/from-mitaka/upgrade-manila devstack/upgrade/from-newton/upgrade-manila doc/README.rst doc/requirements.txt doc/ext/__init__.py doc/source/conf.py doc/source/index.rst doc/source/admin/capabilities_and_extra_specs.rst doc/source/admin/cephfs_driver.rst doc/source/admin/container_driver.rst doc/source/admin/emc_isilon_driver.rst doc/source/admin/emc_vnx_driver.rst doc/source/admin/export_location_metadata.rst doc/source/admin/generic_driver.rst doc/source/admin/glusterfs_driver.rst doc/source/admin/glusterfs_native_driver.rst doc/source/admin/gpfs_driver.rst doc/source/admin/group_capabilities_and_extra_specs.rst doc/source/admin/hdfs_native_driver.rst doc/source/admin/hitachi_hnas_driver.rst doc/source/admin/hpe_3par_driver.rst doc/source/admin/huawei_nas_driver.rst doc/source/admin/index.rst doc/source/admin/infortrend_driver.rst doc/source/admin/netapp_cluster_mode_driver.rst doc/source/admin/nexentastor5_driver.rst doc/source/admin/share_back_ends_feature_support_mapping.rst doc/source/admin/shared-file-systems-crud-share.rst doc/source/admin/shared-file-systems-key-concepts.rst doc/source/admin/shared-file-systems-manage-and-unmanage-share.rst doc/source/admin/shared-file-systems-manage-and-unmanage-snapshot.rst doc/source/admin/shared-file-systems-multi-backend.rst doc/source/admin/shared-file-systems-network-plugins.rst doc/source/admin/shared-file-systems-networking.rst 
doc/source/admin/shared-file-systems-quotas.rst doc/source/admin/shared-file-systems-scheduling.rst doc/source/admin/shared-file-systems-security-services.rst doc/source/admin/shared-file-systems-services-manage.rst doc/source/admin/shared-file-systems-share-management.rst doc/source/admin/shared-file-systems-share-migration.rst doc/source/admin/shared-file-systems-share-networks.rst doc/source/admin/shared-file-systems-share-replication.rst doc/source/admin/shared-file-systems-share-resize.rst doc/source/admin/shared-file-systems-share-server-management.rst doc/source/admin/shared-file-systems-share-types.rst doc/source/admin/shared-file-systems-snapshots.rst doc/source/admin/shared-file-systems-troubleshoot.rst doc/source/admin/tegile_driver.rst doc/source/admin/zfs_on_linux_driver.rst doc/source/cli/index.rst doc/source/cli/manila-manage.rst doc/source/cli/manila-status.rst doc/source/cli/manila.rst doc/source/configuration/index.rst doc/source/configuration/figures/hds_network.jpg doc/source/configuration/figures/hsp_network.png doc/source/configuration/figures/openstack-spectrumscale-setup.JPG doc/source/configuration/shared-file-systems/api.rst doc/source/configuration/shared-file-systems/config-options.rst doc/source/configuration/shared-file-systems/drivers.rst doc/source/configuration/shared-file-systems/log-files.rst doc/source/configuration/shared-file-systems/overview.rst doc/source/configuration/shared-file-systems/drivers/cephfs-native-driver.rst doc/source/configuration/shared-file-systems/drivers/dell-emc-powermax-driver.rst doc/source/configuration/shared-file-systems/drivers/dell-emc-unity-driver.rst doc/source/configuration/shared-file-systems/drivers/dell-emc-vnx-driver.rst doc/source/configuration/shared-file-systems/drivers/emc-isilon-driver.rst doc/source/configuration/shared-file-systems/drivers/generic-driver.rst doc/source/configuration/shared-file-systems/drivers/glusterfs-driver.rst doc/source/configuration/shared-file-systems/drivers/glusterfs-native-driver.rst doc/source/configuration/shared-file-systems/drivers/hdfs-native-driver.rst doc/source/configuration/shared-file-systems/drivers/hitachi-hnas-driver.rst doc/source/configuration/shared-file-systems/drivers/hitachi-hsp-driver.rst doc/source/configuration/shared-file-systems/drivers/hpe-3par-share-driver.rst doc/source/configuration/shared-file-systems/drivers/huawei-nas-driver.rst doc/source/configuration/shared-file-systems/drivers/ibm-spectrumscale-driver.rst doc/source/configuration/shared-file-systems/drivers/infinidat-share-driver.rst doc/source/configuration/shared-file-systems/drivers/infortrend-nas-driver.rst doc/source/configuration/shared-file-systems/drivers/lvm-driver.rst doc/source/configuration/shared-file-systems/drivers/maprfs-native-driver.rst doc/source/configuration/shared-file-systems/drivers/netapp-cluster-mode-driver.rst doc/source/configuration/shared-file-systems/drivers/nexentastor5-driver.rst doc/source/configuration/shared-file-systems/drivers/quobyte-driver.rst doc/source/configuration/shared-file-systems/drivers/windows-smb-driver.rst doc/source/configuration/shared-file-systems/drivers/zfs-on-linux-driver.rst doc/source/configuration/shared-file-systems/drivers/zfssa-manila-driver.rst doc/source/configuration/shared-file-systems/samples/api-paste.ini.rst doc/source/configuration/shared-file-systems/samples/index.rst doc/source/configuration/shared-file-systems/samples/manila.conf.rst doc/source/configuration/shared-file-systems/samples/policy.rst 
doc/source/configuration/shared-file-systems/samples/rootwrap.conf.rst doc/source/configuration/shared-file-systems/samples/sample_policy.rst doc/source/configuration/tables/manila-api.inc doc/source/configuration/tables/manila-ca.inc doc/source/configuration/tables/manila-cephfs.inc doc/source/configuration/tables/manila-common.inc doc/source/configuration/tables/manila-compute.inc doc/source/configuration/tables/manila-emc.inc doc/source/configuration/tables/manila-ganesha.inc doc/source/configuration/tables/manila-generic.inc doc/source/configuration/tables/manila-glusterfs.inc doc/source/configuration/tables/manila-hdfs.inc doc/source/configuration/tables/manila-hds_hnas.inc doc/source/configuration/tables/manila-hds_hsp.inc doc/source/configuration/tables/manila-hnas.inc doc/source/configuration/tables/manila-hpe3par.inc doc/source/configuration/tables/manila-huawei.inc doc/source/configuration/tables/manila-infinidat.inc doc/source/configuration/tables/manila-infortrend.inc doc/source/configuration/tables/manila-lvm.inc doc/source/configuration/tables/manila-maprfs.inc doc/source/configuration/tables/manila-netapp.inc doc/source/configuration/tables/manila-nexentastor5.inc doc/source/configuration/tables/manila-powermax.inc doc/source/configuration/tables/manila-quobyte.inc doc/source/configuration/tables/manila-quota.inc doc/source/configuration/tables/manila-redis.inc doc/source/configuration/tables/manila-san.inc doc/source/configuration/tables/manila-scheduler.inc doc/source/configuration/tables/manila-share.inc doc/source/configuration/tables/manila-spectrumscale_ces.inc doc/source/configuration/tables/manila-spectrumscale_knfs.inc doc/source/configuration/tables/manila-tegile.inc doc/source/configuration/tables/manila-unity.inc doc/source/configuration/tables/manila-vnx.inc doc/source/configuration/tables/manila-winrm.inc doc/source/configuration/tables/manila-zfs.inc doc/source/configuration/tables/manila-zfssa.inc doc/source/contributor/adding_release_notes.rst doc/source/contributor/addmethod.openstackapi.rst doc/source/contributor/api_microversion_dev.rst doc/source/contributor/api_microversion_history.rst doc/source/contributor/architecture.rst doc/source/contributor/auth.rst doc/source/contributor/commit_message_tags.rst doc/source/contributor/contributing.rst doc/source/contributor/database.rst doc/source/contributor/development-environment-devstack.rst doc/source/contributor/development.environment.rst doc/source/contributor/documenting_your_work.rst doc/source/contributor/driver_filter_goodness_weigher.rst doc/source/contributor/driver_requirements.rst doc/source/contributor/experimental_apis.rst doc/source/contributor/fakes.rst doc/source/contributor/ganesha.rst doc/source/contributor/gerrit.rst doc/source/contributor/guru_meditation_report.rst doc/source/contributor/i18n.rst doc/source/contributor/index.rst doc/source/contributor/intro.rst doc/source/contributor/launchpad.rst doc/source/contributor/manila-review-policy.rst doc/source/contributor/manila.rst doc/source/contributor/pool-aware-manila-scheduler.rst doc/source/contributor/project-team-lead.rst doc/source/contributor/rpc.rst doc/source/contributor/scheduler.rst doc/source/contributor/services.rst doc/source/contributor/share.rst doc/source/contributor/share_hooks.rst doc/source/contributor/share_migration.rst doc/source/contributor/share_replication.rst doc/source/contributor/tempest_tests.rst doc/source/contributor/threading.rst doc/source/contributor/unit_tests.rst doc/source/contributor/user_messages.rst 
doc/source/contributor/samples/cephfs_local.conf doc/source/contributor/samples/container_local.conf doc/source/contributor/samples/generic_local.conf doc/source/contributor/samples/lvm_local.conf doc/source/contributor/samples/zfsonlinux_local.conf doc/source/images/rpc/arch.png doc/source/images/rpc/arch.svg doc/source/images/rpc/flow1.png doc/source/images/rpc/flow1.svg doc/source/images/rpc/flow2.png doc/source/images/rpc/flow2.svg doc/source/images/rpc/hds_network.jpg doc/source/images/rpc/rabt.png doc/source/images/rpc/rabt.svg doc/source/images/rpc/state.png doc/source/install/get-started-with-shared-file-systems.rst doc/source/install/index.rst doc/source/install/install-controller-debian.rst doc/source/install/install-controller-node.rst doc/source/install/install-controller-obs.rst doc/source/install/install-controller-rdo.rst doc/source/install/install-controller-ubuntu.rst doc/source/install/install-share-debian.rst doc/source/install/install-share-node.rst doc/source/install/install-share-obs.rst doc/source/install/install-share-rdo.rst doc/source/install/install-share-ubuntu.rst doc/source/install/next-steps.rst doc/source/install/post-install.rst doc/source/install/verify.rst doc/source/install/common/controller-node-common-configuration.rst doc/source/install/common/controller-node-prerequisites.rst doc/source/install/common/dhss-false-mode-configuration.rst doc/source/install/common/dhss-false-mode-intro.rst doc/source/install/common/dhss-false-mode-using-shared-file-systems.rst doc/source/install/common/dhss-true-mode-configuration.rst doc/source/install/common/dhss-true-mode-intro.rst doc/source/install/common/dhss-true-mode-using-shared-file-systems.rst doc/source/install/common/figures doc/source/install/common/share-node-common-configuration.rst doc/source/install/common/share-node-share-server-modes.rst doc/source/install/figures/hwreqs.graffle doc/source/install/figures/hwreqs.png doc/source/install/figures/hwreqs.svg doc/source/reference/glossary.rst doc/source/reference/index.rst doc/source/user/API.rst doc/source/user/create-and-manage-shares.rst doc/source/user/index.rst doc/source/user/troubleshooting-asynchronous-failures.rst etc/manila/README.manila.conf etc/manila/api-paste.ini etc/manila/logging_sample.conf etc/manila/manila-policy-generator.conf etc/manila/rootwrap.conf etc/manila/rootwrap.d/share.filters etc/oslo-config-generator/manila.conf manila/__init__.py manila/context.py manila/coordination.py manila/exception.py manila/i18n.py manila/manager.py manila/opts.py manila/policy.py manila/quota.py manila/rpc.py manila/service.py manila/test.py manila/utils.py manila/version.py manila.egg-info/PKG-INFO manila.egg-info/SOURCES.txt manila.egg-info/dependency_links.txt manila.egg-info/entry_points.txt manila.egg-info/not-zip-safe manila.egg-info/pbr.json manila.egg-info/requires.txt manila.egg-info/top_level.txt manila/api/__init__.py manila/api/auth.py manila/api/common.py manila/api/extensions.py manila/api/urlmap.py manila/api/versions.py manila/api/contrib/__init__.py manila/api/middleware/__init__.py manila/api/middleware/auth.py manila/api/middleware/fault.py manila/api/openstack/__init__.py manila/api/openstack/api_version_request.py manila/api/openstack/rest_api_version_history.rst manila/api/openstack/urlmap.py manila/api/openstack/versioned_method.py manila/api/openstack/wsgi.py manila/api/v1/__init__.py manila/api/v1/limits.py manila/api/v1/router.py manila/api/v1/scheduler_stats.py manila/api/v1/security_service.py manila/api/v1/share_manage.py 
manila/api/v1/share_metadata.py manila/api/v1/share_servers.py manila/api/v1/share_snapshots.py manila/api/v1/share_types_extra_specs.py manila/api/v1/share_unmanage.py manila/api/v1/shares.py manila/api/v2/__init__.py manila/api/v2/availability_zones.py manila/api/v2/messages.py manila/api/v2/quota_class_sets.py manila/api/v2/quota_sets.py manila/api/v2/router.py manila/api/v2/services.py manila/api/v2/share_access_metadata.py manila/api/v2/share_accesses.py manila/api/v2/share_export_locations.py manila/api/v2/share_group_snapshots.py manila/api/v2/share_group_type_specs.py manila/api/v2/share_group_types.py manila/api/v2/share_groups.py manila/api/v2/share_instance_export_locations.py manila/api/v2/share_instances.py manila/api/v2/share_network_subnets.py manila/api/v2/share_networks.py manila/api/v2/share_replica_export_locations.py manila/api/v2/share_replicas.py manila/api/v2/share_servers.py manila/api/v2/share_snapshot_export_locations.py manila/api/v2/share_snapshot_instance_export_locations.py manila/api/v2/share_snapshot_instances.py manila/api/v2/share_snapshots.py manila/api/v2/share_types.py manila/api/v2/shares.py manila/api/views/__init__.py manila/api/views/availability_zones.py manila/api/views/export_locations.py manila/api/views/limits.py manila/api/views/messages.py manila/api/views/quota_class_sets.py manila/api/views/quota_sets.py manila/api/views/scheduler_stats.py manila/api/views/security_service.py manila/api/views/services.py manila/api/views/share_accesses.py manila/api/views/share_group_snapshots.py manila/api/views/share_group_types.py manila/api/views/share_groups.py manila/api/views/share_instance.py manila/api/views/share_migration.py manila/api/views/share_network_subnets.py manila/api/views/share_networks.py manila/api/views/share_replicas.py manila/api/views/share_servers.py manila/api/views/share_snapshot_export_locations.py manila/api/views/share_snapshot_instances.py manila/api/views/share_snapshots.py manila/api/views/shares.py manila/api/views/types.py manila/api/views/versions.py manila/cmd/__init__.py manila/cmd/api.py manila/cmd/data.py manila/cmd/manage.py manila/cmd/scheduler.py manila/cmd/share.py manila/cmd/status.py manila/common/__init__.py manila/common/client_auth.py manila/common/config.py manila/common/constants.py manila/compute/__init__.py manila/compute/nova.py manila/data/__init__.py manila/data/helper.py manila/data/manager.py manila/data/rpcapi.py manila/data/utils.py manila/db/__init__.py manila/db/api.py manila/db/base.py manila/db/migration.py manila/db/migrations/__init__.py manila/db/migrations/alembic.ini manila/db/migrations/utils.py manila/db/migrations/alembic/__init__.py manila/db/migrations/alembic/env.py manila/db/migrations/alembic/migration.py manila/db/migrations/alembic/script.py.mako manila/db/migrations/alembic/versions/0274d20c560f_add_ou_to_security_service.py manila/db/migrations/alembic/versions/03da71c0e321_convert_cgs_to_share_groups.py manila/db/migrations/alembic/versions/097fad24d2fc_add_share_instances_share_id_index.py manila/db/migrations/alembic/versions/11ee96se625f3_add_metadata_for_access.py manila/db/migrations/alembic/versions/162a3e673105_manila_init.py manila/db/migrations/alembic/versions/17115072e1c3_add_nova_net_id_column_to_share_networks.py manila/db/migrations/alembic/versions/1f0bd302c1a6_add_availability_zones_table.py manila/db/migrations/alembic/versions/211836bf835c_add_access_level.py manila/db/migrations/alembic/versions/221a83cfd85b_change_user_project_id_length.py 
manila/db/migrations/alembic/versions/238720805ce1_add_messages_table.py manila/db/migrations/alembic/versions/27cb96d991fa_add_description_for_share_type.py manila/db/migrations/alembic/versions/293fac1130ca_add_replication_attrs.py manila/db/migrations/alembic/versions/30cb96d995fa_add_is_public_column_for_share.py manila/db/migrations/alembic/versions/323840a08dc4_add_shares_task_state.py manila/db/migrations/alembic/versions/344c1ac4747f_add_share_instance_access_rules_status.py manila/db/migrations/alembic/versions/3651e16d7c43_add_consistency_groups.py manila/db/migrations/alembic/versions/38e632621e5a_change_volume_type_to_share_type.py manila/db/migrations/alembic/versions/3a482171410f_add_drivers_private_data_table.py manila/db/migrations/alembic/versions/3db9992c30f3_transform_statuses_to_lowercase.py manila/db/migrations/alembic/versions/3e7d62517afa_add_create_share_from_snapshot_support.py manila/db/migrations/alembic/versions/48a7beae3117_move_share_type_id_to_instances.py manila/db/migrations/alembic/versions/493eaffd79e1_add_mtu_network_allocations_share_networks.py manila/db/migrations/alembic/versions/4a482571410f_add_backends_info_table.py manila/db/migrations/alembic/versions/4ee2cf4be19a_remove_share_snapshots_export_location.py manila/db/migrations/alembic/versions/5077ffcc5f1c_add_share_instances.py manila/db/migrations/alembic/versions/5155c7077f99_add_more_network_info_attributes_to_network_allocations_table.py manila/db/migrations/alembic/versions/5237b6625330_add_availability_zone_id_field_to_share_groups.py manila/db/migrations/alembic/versions/533646c7af38_remove_unused_attr_status.py manila/db/migrations/alembic/versions/54667b9cade7_restore_share_instance_access_map_state.py manila/db/migrations/alembic/versions/55761e5f59c5_add_snapshot_support_extra_spec_to_share_types.py manila/db/migrations/alembic/versions/56cdbe267881_add_share_export_locations_table.py manila/db/migrations/alembic/versions/579c267fbb4d_add_share_instances_access_map.py manila/db/migrations/alembic/versions/59eb64046740_add_required_extra_spec.py manila/db/migrations/alembic/versions/63809d875e32_add_access_key.py manila/db/migrations/alembic/versions/6a3fd2984bc31_add_is_auto_deletable_and_identifier_fields_for_share_servers.py manila/db/migrations/alembic/versions/7d142971c4ef_add_reservation_expire_index.py manila/db/migrations/alembic/versions/805685098bd2_add_share_network_subnets_table_and_modify_share_servers_table.py manila/db/migrations/alembic/versions/829a09b0ddd4_fix_project_share_type_quotas_unique_constraint.py manila/db/migrations/alembic/versions/87ce15c59bbe_add_revert_to_snapshot_support.py manila/db/migrations/alembic/versions/927920b37453_add_provider_location_for_share_group_snapshot_members_model.py manila/db/migrations/alembic/versions/95e3cf760840_remove_nova_net_id_column_from_share_.py manila/db/migrations/alembic/versions/a77e2ad5012d_add_share_snapshot_access.py manila/db/migrations/alembic/versions/b10fb432c042_squash_share_group_snapshot_members_and_share_snapshot_instance_models.py manila/db/migrations/alembic/versions/b516de97bfee_add_quota_per_share_type_model.py manila/db/migrations/alembic/versions/d5db24264f5c_add_consistent_snapshot_support_attr_to_share_group_model.py manila/db/migrations/alembic/versions/dda6de06349_add_export_locations_metadata.py manila/db/migrations/alembic/versions/e1949a93157a_add_share_group_types_table.py manila/db/migrations/alembic/versions/e6d88547b381_add_progress_field_to_share_instance.py 
manila/db/migrations/alembic/versions/e8ea58723178_remove_host_from_driver_private_data.py manila/db/migrations/alembic/versions/e9f79621d83f_add_cast_rules_to_readonly_to_share_instances.py manila/db/migrations/alembic/versions/eb6d5544cbbd_add_provider_location_to_share_snapshot_instances.py manila/db/migrations/alembic/versions/ef0c02b4366_add_share_type_projects.py manila/db/migrations/alembic/versions/fdfb668d19e1_add_gateway_to_network_allocations_table.py manila/db/sqlalchemy/__init__.py manila/db/sqlalchemy/api.py manila/db/sqlalchemy/models.py manila/db/sqlalchemy/query.py manila/db/sqlalchemy/utils.py manila/hacking/__init__.py manila/hacking/checks.py manila/message/__init__.py manila/message/api.py manila/message/message_field.py manila/message/message_levels.py manila/network/__init__.py manila/network/standalone_network_plugin.py manila/network/linux/__init__.py manila/network/linux/interface.py manila/network/linux/ip_lib.py manila/network/linux/ovs_lib.py manila/network/neutron/__init__.py manila/network/neutron/api.py manila/network/neutron/constants.py manila/network/neutron/neutron_network_plugin.py manila/policies/__init__.py manila/policies/availability_zone.py manila/policies/base.py manila/policies/message.py manila/policies/quota_class_set.py manila/policies/quota_set.py manila/policies/scheduler_stats.py manila/policies/security_service.py manila/policies/service.py manila/policies/share_access.py manila/policies/share_access_metadata.py manila/policies/share_export_location.py manila/policies/share_group.py manila/policies/share_group_snapshot.py manila/policies/share_group_type.py manila/policies/share_group_types_spec.py manila/policies/share_instance.py manila/policies/share_instance_export_location.py manila/policies/share_network.py manila/policies/share_network_subnet.py manila/policies/share_replica.py manila/policies/share_replica_export_location.py manila/policies/share_server.py manila/policies/share_snapshot.py manila/policies/share_snapshot_export_location.py manila/policies/share_snapshot_instance.py manila/policies/share_snapshot_instance_export_location.py manila/policies/share_type.py manila/policies/share_types_extra_spec.py manila/policies/shares.py manila/scheduler/__init__.py manila/scheduler/base_handler.py manila/scheduler/host_manager.py manila/scheduler/manager.py manila/scheduler/rpcapi.py manila/scheduler/scheduler_options.py manila/scheduler/utils.py manila/scheduler/drivers/__init__.py manila/scheduler/drivers/base.py manila/scheduler/drivers/chance.py manila/scheduler/drivers/filter.py manila/scheduler/drivers/simple.py manila/scheduler/evaluator/__init__.py manila/scheduler/evaluator/evaluator.py manila/scheduler/filters/__init__.py manila/scheduler/filters/availability_zone.py manila/scheduler/filters/base.py manila/scheduler/filters/base_host.py manila/scheduler/filters/capabilities.py manila/scheduler/filters/capacity.py manila/scheduler/filters/create_from_snapshot.py manila/scheduler/filters/driver.py manila/scheduler/filters/extra_specs_ops.py manila/scheduler/filters/ignore_attempted_hosts.py manila/scheduler/filters/json.py manila/scheduler/filters/retry.py manila/scheduler/filters/share_replication.py manila/scheduler/filters/share_group_filters/__init__.py manila/scheduler/filters/share_group_filters/consistent_snapshot.py manila/scheduler/weighers/__init__.py manila/scheduler/weighers/base.py manila/scheduler/weighers/base_host.py manila/scheduler/weighers/capacity.py manila/scheduler/weighers/goodness.py 
manila/scheduler/weighers/host_affinity.py manila/scheduler/weighers/pool.py manila/share/__init__.py manila/share/access.py manila/share/api.py manila/share/configuration.py manila/share/driver.py manila/share/drivers_private_data.py manila/share/hook.py manila/share/manager.py manila/share/migration.py manila/share/rpcapi.py manila/share/share_types.py manila/share/snapshot_access.py manila/share/utils.py manila/share/drivers/__init__.py manila/share/drivers/generic.py manila/share/drivers/helpers.py manila/share/drivers/lvm.py manila/share/drivers/service_instance.py manila/share/drivers/cephfs/__init__.py manila/share/drivers/cephfs/driver.py manila/share/drivers/cephfs/conf/cephfs-export-template.conf manila/share/drivers/container/__init__.py manila/share/drivers/container/container_helper.py manila/share/drivers/container/driver.py manila/share/drivers/container/protocol_helper.py manila/share/drivers/container/storage_helper.py manila/share/drivers/dell_emc/__init__.py manila/share/drivers/dell_emc/driver.py manila/share/drivers/dell_emc/plugin_manager.py manila/share/drivers/dell_emc/common/__init__.py manila/share/drivers/dell_emc/common/enas/__init__.py manila/share/drivers/dell_emc/common/enas/connector.py manila/share/drivers/dell_emc/common/enas/constants.py manila/share/drivers/dell_emc/common/enas/utils.py manila/share/drivers/dell_emc/common/enas/xml_api_parser.py manila/share/drivers/dell_emc/plugins/__init__.py manila/share/drivers/dell_emc/plugins/base.py manila/share/drivers/dell_emc/plugins/isilon/__init__.py manila/share/drivers/dell_emc/plugins/isilon/isilon.py manila/share/drivers/dell_emc/plugins/isilon/isilon_api.py manila/share/drivers/dell_emc/plugins/powermax/__init__.py manila/share/drivers/dell_emc/plugins/powermax/connection.py manila/share/drivers/dell_emc/plugins/powermax/object_manager.py manila/share/drivers/dell_emc/plugins/unity/__init__.py manila/share/drivers/dell_emc/plugins/unity/client.py manila/share/drivers/dell_emc/plugins/unity/connection.py manila/share/drivers/dell_emc/plugins/unity/utils.py manila/share/drivers/dell_emc/plugins/vnx/__init__.py manila/share/drivers/dell_emc/plugins/vnx/connection.py manila/share/drivers/dell_emc/plugins/vnx/object_manager.py manila/share/drivers/ganesha/__init__.py manila/share/drivers/ganesha/manager.py manila/share/drivers/ganesha/utils.py manila/share/drivers/ganesha/conf/00-base-export-template.conf manila/share/drivers/glusterfs/__init__.py manila/share/drivers/glusterfs/common.py manila/share/drivers/glusterfs/glusterfs_native.py manila/share/drivers/glusterfs/layout.py manila/share/drivers/glusterfs/layout_directory.py manila/share/drivers/glusterfs/layout_volume.py manila/share/drivers/glusterfs/conf/10-glusterfs-export-template.conf manila/share/drivers/hdfs/__init__.py manila/share/drivers/hdfs/hdfs_native.py manila/share/drivers/hitachi/__init__.py manila/share/drivers/hitachi/hnas/__init__.py manila/share/drivers/hitachi/hnas/driver.py manila/share/drivers/hitachi/hnas/ssh.py manila/share/drivers/hitachi/hsp/__init__.py manila/share/drivers/hitachi/hsp/driver.py manila/share/drivers/hitachi/hsp/rest.py manila/share/drivers/hpe/__init__.py manila/share/drivers/hpe/hpe_3par_driver.py manila/share/drivers/hpe/hpe_3par_mediator.py manila/share/drivers/huawei/__init__.py manila/share/drivers/huawei/base.py manila/share/drivers/huawei/constants.py manila/share/drivers/huawei/huawei_nas.py manila/share/drivers/huawei/huawei_utils.py manila/share/drivers/huawei/v3/__init__.py 
manila/share/drivers/huawei/v3/connection.py manila/share/drivers/huawei/v3/helper.py manila/share/drivers/huawei/v3/replication.py manila/share/drivers/huawei/v3/rpcapi.py manila/share/drivers/huawei/v3/smartx.py manila/share/drivers/ibm/__init__.py manila/share/drivers/ibm/gpfs.py manila/share/drivers/infinidat/__init__.py manila/share/drivers/infinidat/infinibox.py manila/share/drivers/infortrend/__init__.py manila/share/drivers/infortrend/driver.py manila/share/drivers/infortrend/infortrend_nas.py manila/share/drivers/inspur/__init__.py manila/share/drivers/inspur/as13000/__init__.py manila/share/drivers/inspur/as13000/as13000_nas.py manila/share/drivers/inspur/instorage/__init__.py manila/share/drivers/inspur/instorage/cli_helper.py manila/share/drivers/inspur/instorage/instorage.py manila/share/drivers/maprfs/__init__.py manila/share/drivers/maprfs/driver_util.py manila/share/drivers/maprfs/maprfs_native.py manila/share/drivers/netapp/__init__.py manila/share/drivers/netapp/common.py manila/share/drivers/netapp/options.py manila/share/drivers/netapp/utils.py manila/share/drivers/netapp/dataontap/__init__.py manila/share/drivers/netapp/dataontap/client/__init__.py manila/share/drivers/netapp/dataontap/client/api.py manila/share/drivers/netapp/dataontap/client/client_base.py manila/share/drivers/netapp/dataontap/client/client_cmode.py manila/share/drivers/netapp/dataontap/cluster_mode/__init__.py manila/share/drivers/netapp/dataontap/cluster_mode/data_motion.py manila/share/drivers/netapp/dataontap/cluster_mode/drv_multi_svm.py manila/share/drivers/netapp/dataontap/cluster_mode/drv_single_svm.py manila/share/drivers/netapp/dataontap/cluster_mode/lib_base.py manila/share/drivers/netapp/dataontap/cluster_mode/lib_multi_svm.py manila/share/drivers/netapp/dataontap/cluster_mode/lib_single_svm.py manila/share/drivers/netapp/dataontap/cluster_mode/performance.py manila/share/drivers/netapp/dataontap/protocols/__init__.py manila/share/drivers/netapp/dataontap/protocols/base.py manila/share/drivers/netapp/dataontap/protocols/cifs_cmode.py manila/share/drivers/netapp/dataontap/protocols/nfs_cmode.py manila/share/drivers/nexenta/__init__.py manila/share/drivers/nexenta/options.py manila/share/drivers/nexenta/utils.py manila/share/drivers/nexenta/ns4/__init__.py manila/share/drivers/nexenta/ns4/jsonrpc.py manila/share/drivers/nexenta/ns4/nexenta_nas.py manila/share/drivers/nexenta/ns4/nexenta_nfs_helper.py manila/share/drivers/nexenta/ns5/__init__.py manila/share/drivers/nexenta/ns5/jsonrpc.py manila/share/drivers/nexenta/ns5/nexenta_nas.py manila/share/drivers/qnap/__init__.py manila/share/drivers/qnap/api.py manila/share/drivers/qnap/qnap.py manila/share/drivers/quobyte/__init__.py manila/share/drivers/quobyte/jsonrpc.py manila/share/drivers/quobyte/quobyte.py manila/share/drivers/tegile/__init__.py manila/share/drivers/tegile/tegile.py manila/share/drivers/veritas/__init__.py manila/share/drivers/veritas/veritas_isa.py manila/share/drivers/windows/__init__.py manila/share/drivers/windows/service_instance.py manila/share/drivers/windows/windows_smb_driver.py manila/share/drivers/windows/windows_smb_helper.py manila/share/drivers/windows/windows_utils.py manila/share/drivers/windows/winrm_helper.py manila/share/drivers/zfsonlinux/__init__.py manila/share/drivers/zfsonlinux/driver.py manila/share/drivers/zfsonlinux/utils.py manila/share/drivers/zfssa/__init__.py manila/share/drivers/zfssa/restclient.py manila/share/drivers/zfssa/zfssarest.py manila/share/drivers/zfssa/zfssashare.py 
manila/share/hooks/__init__.py manila/share_group/__init__.py manila/share_group/api.py manila/share_group/share_group_types.py manila/testing/README.rst manila/tests/__init__.py manila/tests/conf_fixture.py manila/tests/db_utils.py manila/tests/declare_conf.py manila/tests/fake_client_exception_class.py manila/tests/fake_compute.py manila/tests/fake_driver.py manila/tests/fake_network.py manila/tests/fake_notifier.py manila/tests/fake_service_instance.py manila/tests/fake_share.py manila/tests/fake_utils.py manila/tests/fake_volume.py manila/tests/fake_zfssa.py manila/tests/policy.json manila/tests/runtime_conf.py manila/tests/test_api.py manila/tests/test_conf.py manila/tests/test_context.py manila/tests/test_coordination.py manila/tests/test_exception.py manila/tests/test_hacking.py manila/tests/test_manager.py manila/tests/test_misc.py manila/tests/test_network.py manila/tests/test_policy.py manila/tests/test_quota.py manila/tests/test_rpc.py manila/tests/test_service.py manila/tests/test_test.py manila/tests/test_test_utils.py manila/tests/test_utils.py manila/tests/utils.py manila/tests/api/__init__.py manila/tests/api/common.py manila/tests/api/fakes.py manila/tests/api/test_common.py manila/tests/api/test_extensions.py manila/tests/api/test_middleware.py manila/tests/api/test_versions.py manila/tests/api/test_wsgi.py manila/tests/api/contrib/__init__.py manila/tests/api/contrib/stubs.py manila/tests/api/extensions/__init__.py manila/tests/api/extensions/foxinsocks.py manila/tests/api/middleware/__init__.py manila/tests/api/middleware/test_auth.py manila/tests/api/middleware/test_faults.py manila/tests/api/openstack/__init__.py manila/tests/api/openstack/test_api_version_request.py manila/tests/api/openstack/test_versioned_method.py manila/tests/api/openstack/test_wsgi.py manila/tests/api/v1/__init__.py manila/tests/api/v1/stubs.py manila/tests/api/v1/test_limits.py manila/tests/api/v1/test_scheduler_stats.py manila/tests/api/v1/test_security_service.py manila/tests/api/v1/test_share_manage.py manila/tests/api/v1/test_share_metadata.py manila/tests/api/v1/test_share_servers.py manila/tests/api/v1/test_share_snapshots.py manila/tests/api/v1/test_share_types_extra_specs.py manila/tests/api/v1/test_share_unmanage.py manila/tests/api/v1/test_shares.py manila/tests/api/v2/__init__.py manila/tests/api/v2/stubs.py manila/tests/api/v2/test_availability_zones.py manila/tests/api/v2/test_messages.py manila/tests/api/v2/test_quota_class_sets.py manila/tests/api/v2/test_quota_sets.py manila/tests/api/v2/test_security_services.py manila/tests/api/v2/test_services.py manila/tests/api/v2/test_share_access_metadata.py manila/tests/api/v2/test_share_accesses.py manila/tests/api/v2/test_share_export_locations.py manila/tests/api/v2/test_share_group_snapshots.py manila/tests/api/v2/test_share_group_type_specs.py manila/tests/api/v2/test_share_group_types.py manila/tests/api/v2/test_share_groups.py manila/tests/api/v2/test_share_instance_export_locations.py manila/tests/api/v2/test_share_instances.py manila/tests/api/v2/test_share_network_subnets.py manila/tests/api/v2/test_share_networks.py manila/tests/api/v2/test_share_replica_export_locations.py manila/tests/api/v2/test_share_replicas.py manila/tests/api/v2/test_share_servers.py manila/tests/api/v2/test_share_snapshot_export_locations.py manila/tests/api/v2/test_share_snapshot_instance_export_locations.py manila/tests/api/v2/test_share_snapshot_instances.py manila/tests/api/v2/test_share_snapshots.py manila/tests/api/v2/test_share_types.py 
manila/tests/api/v2/test_shares.py manila/tests/api/views/__init__.py manila/tests/api/views/test_quota_class_sets.py manila/tests/api/views/test_quota_sets.py manila/tests/api/views/test_scheduler_stats.py manila/tests/api/views/test_share_accesses.py manila/tests/api/views/test_share_network_subnets.py manila/tests/api/views/test_share_networks.py manila/tests/api/views/test_shares.py manila/tests/api/views/test_versions.py manila/tests/cmd/__init__.py manila/tests/cmd/test_api.py manila/tests/cmd/test_data.py manila/tests/cmd/test_manage.py manila/tests/cmd/test_scheduler.py manila/tests/cmd/test_share.py manila/tests/cmd/test_status.py manila/tests/common/__init__.py manila/tests/common/test_client_auth.py manila/tests/common/test_config.py manila/tests/compute/__init__.py manila/tests/compute/test_nova.py manila/tests/data/__init__.py manila/tests/data/test_helper.py manila/tests/data/test_manager.py manila/tests/data/test_rpcapi.py manila/tests/data/test_utils.py manila/tests/db/__init__.py manila/tests/db/fakes.py manila/tests/db/test_api.py manila/tests/db/test_migration.py manila/tests/db/migrations/__init__.py manila/tests/db/migrations/test_utils.py manila/tests/db/migrations/alembic/__init__.py manila/tests/db/migrations/alembic/migrations_data_checks.py manila/tests/db/migrations/alembic/test_migration.py manila/tests/db/sqlalchemy/__init__.py manila/tests/db/sqlalchemy/test_api.py manila/tests/db/sqlalchemy/test_models.py manila/tests/integrated/__init__.py manila/tests/integrated/integrated_helpers.py manila/tests/integrated/test_extensions.py manila/tests/integrated/test_login.py manila/tests/integrated/api/__init__.py manila/tests/integrated/api/client.py manila/tests/message/__init__.py manila/tests/message/test_api.py manila/tests/message/test_message_field.py manila/tests/monkey_patch_example/__init__.py manila/tests/monkey_patch_example/example_a.py manila/tests/monkey_patch_example/example_b.py manila/tests/network/__init__.py manila/tests/network/test_standalone_network_plugin.py manila/tests/network/linux/__init__.py manila/tests/network/linux/test_interface.py manila/tests/network/linux/test_ip_lib.py manila/tests/network/linux/test_ovs_lib.py manila/tests/network/neutron/__init__.py manila/tests/network/neutron/test_neutron_api.py manila/tests/network/neutron/test_neutron_plugin.py manila/tests/scheduler/__init__.py manila/tests/scheduler/fakes.py manila/tests/scheduler/test_host_manager.py manila/tests/scheduler/test_manager.py manila/tests/scheduler/test_rpcapi.py manila/tests/scheduler/test_scheduler_options.py manila/tests/scheduler/test_utils.py manila/tests/scheduler/drivers/__init__.py manila/tests/scheduler/drivers/test_base.py manila/tests/scheduler/drivers/test_filter.py manila/tests/scheduler/drivers/test_simple.py manila/tests/scheduler/evaluator/__init__.py manila/tests/scheduler/evaluator/test_evaluator.py manila/tests/scheduler/filters/__init__.py manila/tests/scheduler/filters/test_availability_zone.py manila/tests/scheduler/filters/test_base.py manila/tests/scheduler/filters/test_base_host.py manila/tests/scheduler/filters/test_capabilities.py manila/tests/scheduler/filters/test_capacity.py manila/tests/scheduler/filters/test_create_from_snapshot.py manila/tests/scheduler/filters/test_driver.py manila/tests/scheduler/filters/test_extra_specs_ops.py manila/tests/scheduler/filters/test_ignore_attempted_hosts.py manila/tests/scheduler/filters/test_json.py manila/tests/scheduler/filters/test_retry.py 
manila/tests/scheduler/filters/test_share_replication.py manila/tests/scheduler/weighers/__init__.py manila/tests/scheduler/weighers/test_base.py manila/tests/scheduler/weighers/test_capacity.py manila/tests/scheduler/weighers/test_goodness.py manila/tests/scheduler/weighers/test_host_affinity.py manila/tests/scheduler/weighers/test_pool.py manila/tests/share/__init__.py manila/tests/share/test_access.py manila/tests/share/test_api.py manila/tests/share/test_driver.py manila/tests/share/test_drivers_private_data.py manila/tests/share/test_hook.py manila/tests/share/test_manager.py manila/tests/share/test_migration.py manila/tests/share/test_rpcapi.py manila/tests/share/test_share_types.py manila/tests/share/test_share_utils.py manila/tests/share/test_snapshot_access.py manila/tests/share/drivers/__init__.py manila/tests/share/drivers/dummy.py manila/tests/share/drivers/test_ganesha.py manila/tests/share/drivers/test_generic.py manila/tests/share/drivers/test_glusterfs.py manila/tests/share/drivers/test_helpers.py manila/tests/share/drivers/test_lvm.py manila/tests/share/drivers/test_service_instance.py manila/tests/share/drivers/cephfs/__init__.py manila/tests/share/drivers/cephfs/test_driver.py manila/tests/share/drivers/container/__init__.py manila/tests/share/drivers/container/fakes.py manila/tests/share/drivers/container/test_container_helper.py manila/tests/share/drivers/container/test_driver.py manila/tests/share/drivers/container/test_protocol_helper.py manila/tests/share/drivers/container/test_storage_helper.py manila/tests/share/drivers/dell_emc/__init__.py manila/tests/share/drivers/dell_emc/test_driver.py manila/tests/share/drivers/dell_emc/common/__init__.py manila/tests/share/drivers/dell_emc/common/enas/__init__.py manila/tests/share/drivers/dell_emc/common/enas/fakes.py manila/tests/share/drivers/dell_emc/common/enas/test_connector.py manila/tests/share/drivers/dell_emc/common/enas/test_utils.py manila/tests/share/drivers/dell_emc/common/enas/utils.py manila/tests/share/drivers/dell_emc/plugins/__init__.py manila/tests/share/drivers/dell_emc/plugins/isilon/__init__.py manila/tests/share/drivers/dell_emc/plugins/isilon/test_isilon.py manila/tests/share/drivers/dell_emc/plugins/isilon/test_isilon_api.py manila/tests/share/drivers/dell_emc/plugins/powermax/__init__.py manila/tests/share/drivers/dell_emc/plugins/powermax/test_connection.py manila/tests/share/drivers/dell_emc/plugins/powermax/test_object_manager.py manila/tests/share/drivers/dell_emc/plugins/unity/__init__.py manila/tests/share/drivers/dell_emc/plugins/unity/fake_exceptions.py manila/tests/share/drivers/dell_emc/plugins/unity/mocked_manila.yaml manila/tests/share/drivers/dell_emc/plugins/unity/mocked_unity.yaml manila/tests/share/drivers/dell_emc/plugins/unity/res_mock.py manila/tests/share/drivers/dell_emc/plugins/unity/test_client.py manila/tests/share/drivers/dell_emc/plugins/unity/test_connection.py manila/tests/share/drivers/dell_emc/plugins/unity/test_utils.py manila/tests/share/drivers/dell_emc/plugins/unity/utils.py manila/tests/share/drivers/dell_emc/plugins/vnx/__init__.py manila/tests/share/drivers/dell_emc/plugins/vnx/test_connection.py manila/tests/share/drivers/dell_emc/plugins/vnx/test_object_manager.py manila/tests/share/drivers/ganesha/__init__.py manila/tests/share/drivers/ganesha/test_manager.py manila/tests/share/drivers/ganesha/test_utils.py manila/tests/share/drivers/glusterfs/__init__.py manila/tests/share/drivers/glusterfs/test_common.py 
manila/tests/share/drivers/glusterfs/test_glusterfs_native.py manila/tests/share/drivers/glusterfs/test_layout.py manila/tests/share/drivers/glusterfs/test_layout_directory.py manila/tests/share/drivers/glusterfs/test_layout_volume.py manila/tests/share/drivers/hdfs/__init__.py manila/tests/share/drivers/hdfs/test_hdfs_native.py manila/tests/share/drivers/hitachi/__init__.py manila/tests/share/drivers/hitachi/hnas/__init__.py manila/tests/share/drivers/hitachi/hnas/test_driver.py manila/tests/share/drivers/hitachi/hnas/test_ssh.py manila/tests/share/drivers/hitachi/hsp/__init__.py manila/tests/share/drivers/hitachi/hsp/fakes.py manila/tests/share/drivers/hitachi/hsp/test_driver.py manila/tests/share/drivers/hitachi/hsp/test_rest.py manila/tests/share/drivers/hpe/__init__.py manila/tests/share/drivers/hpe/test_hpe_3par_constants.py manila/tests/share/drivers/hpe/test_hpe_3par_driver.py manila/tests/share/drivers/hpe/test_hpe_3par_mediator.py manila/tests/share/drivers/huawei/__init__.py manila/tests/share/drivers/huawei/test_huawei_nas.py manila/tests/share/drivers/ibm/__init__.py manila/tests/share/drivers/ibm/test_gpfs.py manila/tests/share/drivers/infinidat/__init__.py manila/tests/share/drivers/infinidat/test_infinidat.py manila/tests/share/drivers/infortrend/__init__.py manila/tests/share/drivers/infortrend/fake_infortrend_manila_data.py manila/tests/share/drivers/infortrend/fake_infortrend_nas_data.py manila/tests/share/drivers/infortrend/test_infortrend_nas.py manila/tests/share/drivers/inspur/__init__.py manila/tests/share/drivers/inspur/as13000/__init__.py manila/tests/share/drivers/inspur/as13000/test_as13000_nas.py manila/tests/share/drivers/inspur/instorage/__init__.py manila/tests/share/drivers/inspur/instorage/test_instorage.py manila/tests/share/drivers/maprfs/__init__.py manila/tests/share/drivers/maprfs/test_maprfs.py manila/tests/share/drivers/netapp/__init__.py manila/tests/share/drivers/netapp/fakes.py manila/tests/share/drivers/netapp/test_common.py manila/tests/share/drivers/netapp/test_utils.py manila/tests/share/drivers/netapp/dataontap/__init__.py manila/tests/share/drivers/netapp/dataontap/fakes.py manila/tests/share/drivers/netapp/dataontap/client/__init__.py manila/tests/share/drivers/netapp/dataontap/client/fakes.py manila/tests/share/drivers/netapp/dataontap/client/test_api.py manila/tests/share/drivers/netapp/dataontap/client/test_client_base.py manila/tests/share/drivers/netapp/dataontap/client/test_client_cmode.py manila/tests/share/drivers/netapp/dataontap/cluster_mode/__init__.py manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_data_motion.py manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_driver_interfaces.py manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_lib_base.py manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_lib_multi_svm.py manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_lib_single_svm.py manila/tests/share/drivers/netapp/dataontap/cluster_mode/test_performance.py manila/tests/share/drivers/netapp/dataontap/protocols/__init__.py manila/tests/share/drivers/netapp/dataontap/protocols/fakes.py manila/tests/share/drivers/netapp/dataontap/protocols/test_base.py manila/tests/share/drivers/netapp/dataontap/protocols/test_cifs_cmode.py manila/tests/share/drivers/netapp/dataontap/protocols/test_nfs_cmode.py manila/tests/share/drivers/nexenta/__init__.py manila/tests/share/drivers/nexenta/test_utils.py manila/tests/share/drivers/nexenta/ns4/__init__.py 
manila/tests/share/drivers/nexenta/ns4/test_jsonrpc.py manila/tests/share/drivers/nexenta/ns4/test_nexenta_nas.py manila/tests/share/drivers/nexenta/ns5/__init__.py manila/tests/share/drivers/nexenta/ns5/test_jsonrpc.py manila/tests/share/drivers/nexenta/ns5/test_nexenta_nas.py manila/tests/share/drivers/qnap/__init__.py manila/tests/share/drivers/qnap/fakes.py manila/tests/share/drivers/qnap/test_api.py manila/tests/share/drivers/qnap/test_qnap.py manila/tests/share/drivers/quobyte/__init__.py manila/tests/share/drivers/quobyte/test_jsonrpc.py manila/tests/share/drivers/quobyte/test_quobyte.py manila/tests/share/drivers/tegile/__init__.py manila/tests/share/drivers/tegile/test_tegile.py manila/tests/share/drivers/veritas/__init__.py manila/tests/share/drivers/veritas/test_veritas_isa.py manila/tests/share/drivers/windows/__init__.py manila/tests/share/drivers/windows/test_service_instance.py manila/tests/share/drivers/windows/test_windows_smb_driver.py manila/tests/share/drivers/windows/test_windows_smb_helper.py manila/tests/share/drivers/windows/test_windows_utils.py manila/tests/share/drivers/windows/test_winrm_helper.py manila/tests/share/drivers/zfsonlinux/__init__.py manila/tests/share/drivers/zfsonlinux/test_driver.py manila/tests/share/drivers/zfsonlinux/test_utils.py manila/tests/share/drivers/zfssa/__init__.py manila/tests/share/drivers/zfssa/test_zfssarest.py manila/tests/share/drivers/zfssa/test_zfssashare.py manila/tests/share_group/__init__.py manila/tests/share_group/test_api.py manila/tests/share_group/test_share_group_types.py manila/tests/var/ca.crt manila/tests/var/certificate.crt manila/tests/var/privatekey.key manila/tests/volume/__init__.py manila/tests/volume/test_cinder.py manila/tests/wsgi/__init__.py manila/tests/wsgi/test_common.py manila/tests/wsgi/test_wsgi.py manila/tests/xenapi/__init__.py manila/volume/__init__.py manila/volume/cinder.py manila/wsgi/__init__.py manila/wsgi/common.py manila/wsgi/eventlet_server.py manila/wsgi/wsgi.py playbooks/legacy/grenade-dsvm-manila/post.yaml playbooks/legacy/grenade-dsvm-manila/run.yaml playbooks/legacy/manila-tempest-dsvm-container-scenario-custom-image/post.yaml playbooks/legacy/manila-tempest-dsvm-container-scenario-custom-image/run.yaml playbooks/legacy/manila-tempest-dsvm-generic-no-share-servers/post.yaml playbooks/legacy/manila-tempest-dsvm-generic-no-share-servers/run.yaml playbooks/legacy/manila-tempest-dsvm-generic-scenario-custom-image/post.yaml playbooks/legacy/manila-tempest-dsvm-generic-scenario-custom-image/run.yaml playbooks/legacy/manila-tempest-dsvm-glusterfs-native/post.yaml playbooks/legacy/manila-tempest-dsvm-glusterfs-native/run.yaml playbooks/legacy/manila-tempest-dsvm-glusterfs-native-heketi/post.yaml playbooks/legacy/manila-tempest-dsvm-glusterfs-native-heketi/run.yaml playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs/post.yaml playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs/run.yaml playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs-heketi/post.yaml playbooks/legacy/manila-tempest-dsvm-glusterfs-nfs-heketi/run.yaml playbooks/legacy/manila-tempest-dsvm-hdfs/post.yaml playbooks/legacy/manila-tempest-dsvm-hdfs/run.yaml playbooks/legacy/manila-tempest-dsvm-mysql-generic/post.yaml playbooks/legacy/manila-tempest-dsvm-mysql-generic/run.yaml playbooks/legacy/manila-tempest-dsvm-postgres-container/post.yaml playbooks/legacy/manila-tempest-dsvm-postgres-container/run.yaml playbooks/legacy/manila-tempest-dsvm-postgres-generic-singlebackend/post.yaml 
playbooks/legacy/manila-tempest-dsvm-postgres-generic-singlebackend/run.yaml playbooks/legacy/manila-tempest-dsvm-postgres-zfsonlinux/post.yaml playbooks/legacy/manila-tempest-dsvm-postgres-zfsonlinux/run.yaml playbooks/legacy/manila-tempest-dsvm-scenario/post.yaml playbooks/legacy/manila-tempest-dsvm-scenario/run.yaml playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native/post.yaml playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native/run.yaml playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native-centos-7/post.yaml playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-native-centos-7/run.yaml playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs/post.yaml playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs/run.yaml playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs-centos-7/post.yaml playbooks/legacy/manila-tempest-minimal-dsvm-cephfs-nfs-centos-7/run.yaml playbooks/legacy/manila-tempest-minimal-dsvm-dummy/post.yaml playbooks/legacy/manila-tempest-minimal-dsvm-dummy/run.yaml playbooks/legacy/manila-tempest-minimal-dsvm-lvm/post.yaml playbooks/legacy/manila-tempest-minimal-dsvm-lvm/run-ipv6.yaml playbooks/legacy/manila-tempest-minimal-dsvm-lvm/run.yaml playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-native-centos-7/post.yaml playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-native-centos-7/run.yaml playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-nfs-centos-7/post.yaml playbooks/legacy/manila-tempest-minimal-py35-dsvm-cephfs-nfs-centos-7/run.yaml playbooks/manila-tox-genconfig/post.yaml rally-jobs/rally-manila-no-ss.yaml rally-jobs/rally-manila.yaml releasenotes/notes/.placeholder releasenotes/notes/3par-add-update-access-68fc12ffc099f480.yaml releasenotes/notes/3par-fix-get_vfs-driver-bootup-db6b085eb6094f5f.yaml releasenotes/notes/3par-pool-support-fb43b368214c9eda.yaml releasenotes/notes/Huawei-driver-utilize-requests-lib-67f2c4e7ae0d2efa.yaml releasenotes/notes/Use-http_proxy_to_wsgi-instead-of-ssl-middleware-df533a2c2d9c3a61.yaml releasenotes/notes/add-ability-to-check-tenant-quota-usages-7fs17djahy61nsd6.yaml releasenotes/notes/add-access-key-to-share-access-map-2fda4c06a750e24e.yaml releasenotes/notes/add-cast-rules-to-readonly-field-62ead37b728db654.yaml releasenotes/notes/add-cleanup-create-from-snap-hnas-0e0431f1fc861a4e.yaml releasenotes/notes/add-count-info-in-share-21a6b36c0f4c87b9.yaml releasenotes/notes/add-create-share-from-snapshot-another-pool-or-backend-98d61fe753b85632.yaml releasenotes/notes/add-create_share_from_snapshot_support-extra-spec-9b1c3ad6796dd07d.yaml releasenotes/notes/add-export-location-filter-92ead37b728db654.yaml releasenotes/notes/add-export-locations-api-6fc6086c6a081faa.yaml releasenotes/notes/add-gathering-usage-size-8454sd45deopb14e.yaml releasenotes/notes/add-hsp-default-filter-function-0af60a819faabfec.yaml releasenotes/notes/add-ipv6-32d89161a9a1e0b4.yaml releasenotes/notes/add-is-default-e49727d276dd9bc3.yaml releasenotes/notes/add-like-filter-4c1d6dc02f40d5a5.yaml releasenotes/notes/add-manage-db-purge-b32a24ee045d8d45.yaml releasenotes/notes/add-perodic-task-7454sd45deopb13e.yaml releasenotes/notes/add-policy-in-code-c31a24ee045d8d21.yaml releasenotes/notes/add-share-access-metadata-4fda2c06e750e83c.yaml releasenotes/notes/add-share-group-quotas-4e426907eed4c000.yaml releasenotes/notes/add-share-migration-support-in-zfsonlinux-driver-88e6da5692b50810.yaml releasenotes/notes/add-share-type-filter-to-pool-list-api-267614b4d93j12de.yaml releasenotes/notes/add-share-type-quotas-33a6b36c0f4c88b1.yaml 
releasenotes/notes/add-snapshot-instances-admin-api-959a1121aa407629.yaml releasenotes/notes/add-support-filter-search-for-share-type-fdbaaa9510cc59dd.yaml-5655800975cec5d4.yaml releasenotes/notes/add-tegile-driver-1859114513edb13e.yaml releasenotes/notes/add-tenant-quota-for-share-replicas-and-replicas-size-565ffca315afb6f0.yaml releasenotes/notes/add-two-new-fields-to-share-groups-api-bc576dddd58a3086.yaml releasenotes/notes/add-update-host-command-to-manila-manage-b32ad5017b564c9e.yaml releasenotes/notes/add-user-id-echo-8f42db469b27ff14.yaml releasenotes/notes/add_gateway_into_db-1f3cd3f392ae81cf.yaml releasenotes/notes/add_mtu_info_db-3c1d6dc02f40d5a6.yaml releasenotes/notes/add_user_id_and_project_id_to_snapshot_APIs-157614b4b8d01e15.yaml releasenotes/notes/added-possibility-to-run-manila-api-with-web-servers-that-support-wsgi-apps-cfffe0b789f8670a.yaml releasenotes/notes/api-versions-mark-v1-deprecated-3540d39279fbd60e.yaml releasenotes/notes/blueprint-netapp-snapshot-visibility-4f090a20145fbf34.yaml releasenotes/notes/bp-admin-network-hnas-9b714736e521101e.yaml releasenotes/notes/bp-export-locations-az-api-changes-c8aa1a3a5bc86312.yaml releasenotes/notes/bp-netapp-ontap-storage-based-cryptograpy-bb7e28896e2a2539.yaml releasenotes/notes/bp-ocata-migration-improvements-c8c5675e266100da.yaml releasenotes/notes/bp-share-type-supported-azs-2e12ed406f181b3b.yaml releasenotes/notes/bp-support-query-user-message-by-timestamp-c0a02b3b3e337e12.yaml releasenotes/notes/bp-update-share-type-name-or-description-a39c5991b930932f.yaml releasenotes/notes/bug-1578328-fix-replica-deletion-in-cDOT-7e4502fb50b69507.yaml releasenotes/notes/bug-1591357-fix-cannot-remove-user-rule-for-NFS-8e1130e2accabd56.yaml releasenotes/notes/bug-1597940-fix-hpe3par-delete-share-0daf75193f318c41.yaml releasenotes/notes/bug-1602525-port_binding_mandatory-2aaba0fa72b82676.yaml releasenotes/notes/bug-1607029-fix-share-server-deletion-when-interfaces-dont-exist-4d00fe9dafadc252.yaml releasenotes/notes/bug-1613303-fix-config-generator-18b9f9be40d7eee6.yaml releasenotes/notes/bug-1624526-netapp-cdot-filter-root-aggregates-c30ac5064d530b86.yaml releasenotes/notes/bug-1626249-reintroduce-per-share-instance-access-rule-state-7c08a91373b21557.yaml releasenotes/notes/bug-1626523-migration-rw-access-fix-7da3365c7b5b90a1.yaml releasenotes/notes/bug-1634278-unmount-orig-active-after-promote-8e24c099ddc1e564.yaml releasenotes/notes/bug-1634734-fix-backend-extraspec-for-replication-d611d2227997ae3e.yaml releasenotes/notes/bug-1638896-missing-migration-completing-state-1e4926ed56eb268c.yaml releasenotes/notes/bug-1638994-drop-fake-cg-support-from-generic-driver-16efce98f94b1b6b.yaml releasenotes/notes/bug-1639188-fix-extend-operation-of-shrinked-share-in-generic-driver-5c7f82faefaf26ea.yaml releasenotes/notes/bug-1639662-fix-share-service-VM-restart-problem-1110f9133cc294e8.yaml releasenotes/notes/bug-1640169-check-ceph-connection-on-setup-c92bde41ced43326.yaml releasenotes/notes/bug-1645746-fix-inheritance-of-access-rules-from-parent-share-by-zfsonlinux-child-shares-4f85908c8e9871ef.yaml releasenotes/notes/bug-1645751-fixed-shares-created-from-snapshots-for-lvm-and-generic-drivers-94a1161a9e0b5a85.yaml releasenotes/notes/bug-1646603-netapp-broadcast-domains-411a626d38835177.yaml releasenotes/notes/bug-1649782-fixed-incorrect-exportfs-exportfs.yaml releasenotes/notes/bug-1650043-gpfs-access-bugs-8c10f26ff1f795f4.yaml releasenotes/notes/bug-1651578-gpfs-prepend-beb99f408cf20bb5.yaml 
releasenotes/notes/bug-1651587-deny-access-verify-563ef2f3f6b8c13b.yaml releasenotes/notes/bug-1654598-enforce-policy-checks-for-share-export-locations-a5cea1ec123b1469.yaml releasenotes/notes/bug-1657033-fix-share-metadata-error-when-deleting-share.yaml releasenotes/notes/bug-1658133-fix-lvm-revert-34a90e70c9aa7354.yaml releasenotes/notes/bug-1659023-netapp-cg-fix-56bb77b7bc61c3f5.yaml releasenotes/notes/bug-1660319-1660336-migration-share-groups-e66a1478634947ad.yaml releasenotes/notes/bug-1660321-fix-default-approach-for-share-group-snapshot-creation-3e843155c395e861.yaml releasenotes/notes/bug-1660425-snapshot-access-in-error-bce279ee310060f5.yaml releasenotes/notes/bug-1660686-snapshot-export-locations-mount-not-supported-cdc2f5a3b57a9319.yaml releasenotes/notes/bug-1660726-migration-export-locations-5670734670435015.yaml releasenotes/notes/bug-1661266-add-consistent-snapshot-support-attr-to-share-groups-DB-model-daa1d05129802796.yaml releasenotes/notes/bug-1661271-hnas-snapshot-readonly-4e50183100ed2b19.yaml releasenotes/notes/bug-1661381-migration-snapshot-export-locations-169786dcec386402.yaml releasenotes/notes/bug-1662615-hnas-snapshot-concurrency-2147159ea6b086c5.yaml releasenotes/notes/bug-1663300-554e9c78ca2ba992.yaml releasenotes/notes/bug-1664201-fix-share-replica-status-update-concurrency-in-replica-promotion-feature-63b15d96106c65da.yaml releasenotes/notes/bug-1665002-hnas-driver-version-f3a8f6bff3dbe054.yaml releasenotes/notes/bug-1665072-migration-success-fix-3da1e80fbab666de.yaml releasenotes/notes/bug-1666541-quobyte-resize-list-param-bc5b9c42bdc94c9f.yaml releasenotes/notes/bug-1667450-migration-stale-source-9c092fee267a7a0f.yaml releasenotes/notes/bug-1674908-allow-user-access-fix-495b3e42bdc985ec.yaml releasenotes/notes/bug-1678524-check-snaprestore-license-for-snapshot-revert-6d32afdc5d0b2b51.yaml releasenotes/notes/bug-1682795-share-access-list-api-5b1e86218959f796.yaml releasenotes/notes/bug-1684032-6e4502fdceb693dr7.yaml releasenotes/notes/bug-1690159-retry-backend-init-58486ea420feaf51.yaml releasenotes/notes/bug-1690785-fix-gpfs-path-91a354bc69bf6a47.yaml releasenotes/notes/bug-1694768-fix-netapp-cdot-revert-to-snapshot-5e1be65260454988.yaml releasenotes/notes/bug-1696000-netapp-fix-security-style-on-cifs-shares-cbdd557a27d11961.yaml releasenotes/notes/bug-1696669-add-ou-to-security-service-06b69615bd417d40.yaml releasenotes/notes/bug-1698250-netapp-cdot-fix-share-server-deletion-494ab3ad1c0a97c0.yaml releasenotes/notes/bug-1698258-netapp-fix-tenant-network-gateways-85935582e89a72a0.yaml releasenotes/notes/bug-1699836-disallow-share-type-deletion-with-active-share-group-types-83809532d06ef0dd.yaml releasenotes/notes/bug-1700346-new-exception-for-no-default-share-type-b1dd9bbe8c9cb3df.yaml releasenotes/notes/bug-1700871-ontap-allow-extend-of-replicated-share-2c9709180d954308.yaml releasenotes/notes/bug-1703660-fix-netapp-driver-preferred-state-0ce1a62961cded35.yaml releasenotes/notes/bug-1704622-netapp-cdot-fix-share-specs-on-migration-bfbbebec26533652.yaml releasenotes/notes/bug-1704971-fix-name-description-filter-85935582e89a72a0.yaml releasenotes/notes/bug-1705533-manage-api-error-message-fix-967b0d44c09b914a.yaml releasenotes/notes/bug-1706137-netapp-manila-set-valid-qos-during-migration-4405fff02bd6fa83.yaml releasenotes/notes/bug-1707066-deny-ipv6-access-in-error-bce379ee310060f6.yaml releasenotes/notes/bug-1707084-netapp-manila-driver-to-honour-std-extra-specs-d32fae4e9411b503.yaml 
releasenotes/notes/bug-1707943-make-lvm-revert-synchronous-0ef5baee3367fd27.yaml releasenotes/notes/bug-1707946-nfs-helper-0-netmask-224da94b82056f93.yaml releasenotes/notes/bug-1714691-decimal-separators-in-locales-392c0c794c49c1c2.yaml releasenotes/notes/bug-1716922-security-group-creation-failed-d46085d11370d918.yaml releasenotes/notes/bug-1717135-ganesha-cleanup-of-tmp-config-files-66082b2384ace0a5.yaml releasenotes/notes/bug-1717263-netapp-ontap-fix-size-for-share-from-snapshot-02385baa7e085f39.yaml releasenotes/notes/bug-1717392-fix-downgrade-share-access-map-bbd5fe9cc7002f2d.yaml releasenotes/notes/bug-172112-fix-drives-private-storage-update-deleted-entries-7516ba624da2dda7.yaml releasenotes/notes/bug-1721787-fix-getting-share-networks-and-security-services-error-7e5e7981fcbf2b53.yaml releasenotes/notes/bug-1730509-netapp-ipv6-hostname-39abc7f40d48c844.yaml releasenotes/notes/bug-1733494-allow-user-group-name-with-blank-access-fix-665b3e42bdc985ac.yaml releasenotes/notes/bug-1734127-a239d022bef4a002.yaml releasenotes/notes/bug-1735832-43e9291ddd73286d.yaml releasenotes/notes/bug-1736370-qnap-fix-access-rule-override-1b79b70ae48ad9e6.yaml releasenotes/notes/bug-1745436-78c46f8a0c96cbca.yaml releasenotes/notes/bug-1745436-remove-data-node-access-ip-config-opt-709f330c57cdb0d5.yaml releasenotes/notes/bug-1746202-fix-unicodeDecodeError-when-decode-API-input-4e4502fb50b69502.yaml releasenotes/notes/bug-1746723-8b89633062885f0b.yaml releasenotes/notes/bug-1747695-fixed-ip-version-in-neutron-bind-network-plugin-526958e2d83df072.yaml releasenotes/notes/bug-1749184-eb06929e76a14fce.yaml releasenotes/notes/bug-1750074-fix-rabbitmq-password-in-debug-mode-4e136ff86223c4ea.yaml releasenotes/notes/bug-1765420-netapp-fix-delete-share-for-vsadmins-b5dc9e0224cb3ba2.yaml releasenotes/notes/bug-1767430-access-control-raise-ip-address-conflict-on-host-routes-0c298125fee4a640.yaml releasenotes/notes/bug-1772026-nve-license-not-present-fix-e5d2e0d6c5df9227.yaml releasenotes/notes/bug-1772647-b98025c07553e35d.yaml releasenotes/notes/bug-1773761-qnap-fix-manage-share-size-override-a18acdf1a41909b0.yaml releasenotes/notes/bug-1773929-a5cb52c8417ec5fc.yaml releasenotes/notes/bug-1774159-0afe3dbc39e3c6b0.yaml releasenotes/notes/bug-1774604-qb-driver-b7e717cbc71d6189.yaml releasenotes/notes/bug-1777126-netapp-skip-route-setup-if-no-gateway-e841635dcd20fd12.yaml releasenotes/notes/bug-1777551-security-networks-api-all-tenants-fix-a061274afe15180d.yaml releasenotes/notes/bug-1777551-security-services-api-all-tenants-fix-e820ec370d7df473.yaml releasenotes/notes/bug-1785129-fix-sighup-behavior-with-scheduler-8ee803ad0e543cce.yaml releasenotes/notes/bug-1785180-zfsonlinux-retry-unmounting-during-manage-872cf46313c5a4ff.yaml releasenotes/notes/bug-1794402-fix-share-stats-container-driver-b3cb1fa2987ad4b1.yaml releasenotes/notes/bug-1795463-fix-pagination-slowness-8fcda3746aa13940.yaml releasenotes/notes/bug-1798219-fix-snapshot-creation-lvm-and-generic-driver-55e349e02e7fa370.yaml releasenotes/notes/bug-1801763-gate-public-share-creation-by-policy-a0ad84e4127a3fc3.yaml releasenotes/notes/bug-1804651-netapp-cdot-add-peferred-dc-to-cifs-ad-99072ce663762e83.yaml releasenotes/notes/bug-1804656-netapp-cdot-add-port-ids-to-share-server-backend-424ca11a1eb44826.yaml releasenotes/notes/bug-1804659-speed-up-pools-detail-18f539a96042099a.yaml releasenotes/notes/bug-1811680-destroy-quotas-usages-reservations-when-deleting-share-type-a18f2e00a65fe922.yaml 
releasenotes/notes/bug-1813054-remove-share-usage-size-audit-period-conf-opt-7331013d1cdb7b43.yaml releasenotes/notes/bug-1815038-extend-remove_version_from_href-support-ea479daaaf5c5700.yaml releasenotes/notes/bug-1815532-supply-request-id-in-all-apis-74419bc1b1feea1e.yaml releasenotes/notes/bug-1816420-validate-access-type-for-ganehas-c42ce6f859fa0c8c.yaml releasenotes/notes/bug-1818081-fix-inferred-script-name-in-case-of-proxy-urls-e33466af856708b4.yaml releasenotes/notes/bug-1822099-fix-multisegment-mtu.yaml-ac2e31c084d8bbb6.yaml releasenotes/notes/bug-1831092-netapp-fix-race-condition-524555133aaa6ca8.yaml releasenotes/notes/bug-1845135-fix-Unity-cannot-use-mgmt-ipv6-9407710a3fc7f4aa.yaml releasenotes/notes/bug-1845147-powermax-read-only-policy-585c29c5ff020007.yaml releasenotes/notes/bug-1845147-vnx-read-only-policy-75b0f414ea5ef471.yaml releasenotes/notes/bug-1845452-unity--fix-fail-to-delete-cifs-share-c502a10ae306e506.yaml releasenotes/notes/bug-1846836-fix-share-network-update-unexpected-success-eba8f40db392c467.yaml releasenotes/notes/bug-1848889-netapp-fix-share-replica-update-check-failure-90aa964417e7734c.yaml releasenotes/notes/bug-1850264-add-async-error-when-share-extend-error-a0c458204b395994.yaml releasenotes/notes/bug-1853940-not-send-heartbeat-if-driver-not-initial-9c3cee39e8c725d1.yaml releasenotes/notes/bug-1858328-netapp-fix-shrinking-error-48bcfffe694f5e81.yaml releasenotes/notes/bug-1859775-snapshot-over-quota-exception-bb6691612af03ddf.yaml releasenotes/notes/bug-1859785-share-list-speed-6b09e7717624e037.yaml releasenotes/notes/bug-1861485-fix-share-network-retrieval-31768dcda5aeeaaa.yaml releasenotes/notes/bug-1862833-fix-backref-by-eager-loading-2d897976e7598625.yaml releasenotes/notes/bug-1869148-if-only-pyc-exist-the-extension-API-cannot-be-loaded-172cb9153ebd4b56.yaml releasenotes/notes/bug-1869712-fix-increased-scheduled-time-for-non-thin-provisioned-backends-1da2cc33d365ba4f.yaml releasenotes/notes/bug-1870751-cleanup-share-type-and-group-type-project-access-when-deleted-4fcd49ba6e6c40bd.yaml releasenotes/notes/bug-1871999-dell-emc-vnx-powermax-wrong-export-locations-e9763631c621656f.yaml releasenotes/notes/bug-1872243-netapp-fix-vserver-peer-with-same-vserver-8bc65816f1764784.yaml releasenotes/notes/bug-1872872-fix-quota-checking-b06fd372be143101.yaml releasenotes/notes/bug-1872873-fix-consume-from-share-eea5941de17a5bcc.yaml releasenotes/notes/bug-1873963-netapp-fix-vserver-peer-intra-cluster-966398cf3a621edd.yaml releasenotes/notes/bug-667744-fix-c64071e6e5a098f7.yaml releasenotes/notes/bug_1564623_change-e286060a27b02f64.yaml releasenotes/notes/bug_1582931-1437eae20fa544d1.yaml releasenotes/notes/bug_1844046-fix-image-not-found-629415d50cd6042a.yaml releasenotes/notes/bugfix-1771958-1771970-bcec841e7ae6b9f6.yaml releasenotes/notes/cephfs-add-nfs-protocol-support-44764094c9d784d8.yaml releasenotes/notes/cephfs-native-add-readonly-shares-support-067ccab0217ab5f5.yaml releasenotes/notes/cephfs-native-enhance-update-access-support-e1a1258084c997ca.yaml releasenotes/notes/cephfs-native-fix-evict-c45fd2de8f520757.yaml releasenotes/notes/cephfs-nfs-ipv6-support-2ffd9c0448c2f47e.yaml releasenotes/notes/cephfs-set-mode-b7fb3ec51300c220.yaml releasenotes/notes/change_user_project_length-93cc8d1c32926e75.yaml releasenotes/notes/check-thin-provisioning-4bb702535f6b10b6.yaml releasenotes/notes/clean-expired-messages-6161094d0c108aa7.yaml releasenotes/notes/config-for-cephfs-volume-prefix-67f2513f603cb614.yaml 
releasenotes/notes/container-driver-5d972cc40e314663.yaml releasenotes/notes/container-driver-hardening-against-races-30c9f517a6392b9d.yaml releasenotes/notes/container-manage-unmanage-share-servers-880d889828ee7ce3.yaml releasenotes/notes/dedupe-support-hnas-driver-017d2f2a93a8b487.yaml releasenotes/notes/delete_vlan_on_vserver_delete-a7acd145c0b8236d.yaml releasenotes/notes/dell-emc-unity-use-user-capacity-322f8bbb7c536453.yaml releasenotes/notes/deprecate-memcached-servers-config-option-f4456382b9b4d6db.yaml releasenotes/notes/deprecate-old-ks-opts-in-nova-neutron-cinder-groups-e395015088d93fdc.yaml releasenotes/notes/deprecate-service-instance-network-helper-option-82ff62a038f2bfa3.yaml releasenotes/notes/disable-share-groups-api-by-default-0627b97ac2cda4cb.yaml releasenotes/notes/driver-filter-91e2c60c9d1a48dd.yaml releasenotes/notes/drop-python2-support-e160ff36811a5964.yaml releasenotes/notes/drop-support-for-lvm-share-export-ip-e031ef4c5f95b534.yaml releasenotes/notes/emc-unity-manila-support-d4f5a410501cfdae.yaml releasenotes/notes/emc_vnx_interface_ports_configuration-00d454b3003ef981.yaml releasenotes/notes/enhance-ensure-share-58fc14ffc099f481.yaml releasenotes/notes/error-share-set-size-ff5d4f4ac2d56755.yaml releasenotes/notes/estimate-provisioned-capacity-34f0d2d7c6c56621.yaml releasenotes/notes/extra_specs_case_insensitive-e9d4ca10d94f2307.yaml releasenotes/notes/fix-consistency-groups-api-dd9b5b99138e22eb.yaml releasenotes/notes/fix-ganesha-allow-access-for-all-ips-09773a79dc76ad44.yaml releasenotes/notes/fix-hds-hnas-unconfined-09b79f3bdb24a83c.yaml releasenotes/notes/fix-hnas-mount-on-manage-snapshot-91e094c579ddf1a3.yaml releasenotes/notes/fix-huawei-driver-cifs-mount-issue-2d7bff5a7e6e3ad6.yaml releasenotes/notes/fix-huawei-driver-qos-deletion-9ad62db3d7415980.yaml releasenotes/notes/fix-huawei-exception-a09b73234ksd94kd.yaml releasenotes/notes/fix-managing-twice-hnas-4956a7653d27e320.yaml releasenotes/notes/fix-py3-netapp-a9815186ddc865d4.yaml releasenotes/notes/fix-race-condition-netapp-5a36f6ba95a49c5e.yaml releasenotes/notes/fix-share-instance-list-with-limit-db7b5b99138e22ee.yaml releasenotes/notes/fix-share-manager-shrinking-data-loss-state-edc87ba2fd7e32d8.yaml releasenotes/notes/fix-volume-efficiency-status-2102ad630c5407a8.yaml releasenotes/notes/fix_access_level_managed_shares_hnas-c76a09beed365b46.yaml releasenotes/notes/fix_cephx_validation-cba4df77f9f45c6e.yaml releasenotes/notes/fix_limit_formating_routes-1b0e1a475de6ac44.yaml releasenotes/notes/fix_manage_snapshots_hnas-2c0e1a47b5e6ac33.yaml releasenotes/notes/fix_policy_file-4a382ac241c718c6.yaml releasenotes/notes/fixed-netapp-cdot-autosupport-3fabd8ac2e407f70.yaml releasenotes/notes/fixing-driver-filter-14022294c8c04d2d.yaml releasenotes/notes/ganesha-dynamic-update-access-be80bd1cb785e733.yaml releasenotes/notes/ganesha-store-exports-and-export-counter-in-ceph-rados-052b925f8ea460f4.yaml releasenotes/notes/generic-driver-noop-interface-driver-24abcf7af1e08ff9.yaml releasenotes/notes/generic-route-racing-adf92d212f1ab4de.yaml releasenotes/notes/glusterfs-add-directory-layout-extend-shrink-fd2a008f152edbf5.yaml releasenotes/notes/glusterfs-handle-new-volume-option-xml-schema-dad06253453c572c.yaml releasenotes/notes/gpfs-nfs-server-type-default-value-change-58890adba373737c.yaml releasenotes/notes/graduate-share-groups-feature-5f751b49ccc62969.yaml releasenotes/notes/guru-meditation-support-7872da69f529a6c2.yaml releasenotes/notes/hitachi-driver-cifs-user-support-3f1a8b894fe3e9bb.yaml 
releasenotes/notes/hnas-driver-rename-7ef74fe720f7e04b.yaml releasenotes/notes/hnas-manage-unmanage-snapshot-support-0d939e1764c9ebb9.yaml releasenotes/notes/hnas-mountable-snapshots-4fbffa05656112c4.yaml releasenotes/notes/hnas-revert-to-snapshot-a2405cd6653b1e85.yaml releasenotes/notes/hnas_allow_managed_fix-4ec7794e2035d3f2.yaml releasenotes/notes/hpe3par-rw-snapshot-shares-f7c33b4bf528bf00.yaml releasenotes/notes/hsp-driver-e00aff5bc89d4b54.yaml releasenotes/notes/huawei-driver-replication-8ed62c8d26ad5060.yaml releasenotes/notes/huawei-driver-sectorsize-config-da776132ba6da2a7.yaml releasenotes/notes/huawei-driver-support-snapshot-revert-1208c586bd8db98e.yaml releasenotes/notes/huawei-pool-disktype-support-0a52ba5d44da55f9.yaml releasenotes/notes/huawei-support-access-all-ip-4994c10ff75ac683.yaml releasenotes/notes/hybrid-aggregates-in-netapp-cdot-drivers-e7c90fb62426c281.yaml releasenotes/notes/ibm-gpfs-ces-support-3498e35d9fea1b55.yaml releasenotes/notes/ibm-gpfs-manage-support-c110120c350728e3.yaml releasenotes/notes/infinidat-add-infinibox-driver-ec652258e710d6a0.yaml releasenotes/notes/infinidat-balance-network-spaces-ips-25a9f1e587b87156.yaml releasenotes/notes/infinidat-delete-datasets-with-snapshots-4d18f8c197918606.yaml releasenotes/notes/infortrend-manila-driver-a1a2af20de6368cb.yaml releasenotes/notes/inspur-as13000-driver-41f6b7caea82e46e.yaml releasenotes/notes/inspur-instorage-driver-51d7a67f253f3ecd.yaml releasenotes/notes/inspur-support-rwx-for-cifs-permission-4279f1fe7a59fd00.yaml releasenotes/notes/introduce-tooz-library-5fed75b8caffcf42.yaml releasenotes/notes/limiting-ssh-access-from-tenant-network-6519efd6d6895076.yaml releasenotes/notes/lv-mounting-inside-containers-af8f84d1fab256d1.yaml releasenotes/notes/lvm-export-ips-5f73f30df94381d3.yaml releasenotes/notes/manage-share-in-zfsonlinux-driver-e80921081206f75b.yaml releasenotes/notes/manage-share-snapshot-in-huawei-driver-007b2c763fbdf480.yaml releasenotes/notes/manage-snapshot-in-zfsonlinux-driver-6478d8d5b3c6a97f.yaml releasenotes/notes/manage-unmanage-replicated-share-fa90ce34372b6df5.yaml releasenotes/notes/manage-unmanage-share-servers-cd4a6523d8e9fbdf.yaml releasenotes/notes/manage-unmanage-snapshot-bd92164472638f44.yaml releasenotes/notes/manage-unmanage-snapshot-in-netapp-cdot-driver-5cb4b1619c39625a.yaml releasenotes/notes/manila-status-upgrade-check-framework-aef9b5cf9d8e3bda.yaml releasenotes/notes/maprfs-manila-drivers-1541296f26cf78fd.yaml releasenotes/notes/migration-access-fix-71a0f52ea7a152a3.yaml releasenotes/notes/migration-empty-files-01d1a3caa2e9705e.yaml releasenotes/notes/migration-share-type-98e3d3c4c6f47bd9.yaml releasenotes/notes/move-emc-share-driver-to-dell-emc-dir-1ec34dee0544270d.yaml releasenotes/notes/multi-segment-support-fa171a8e3201d54e.yaml releasenotes/notes/netapp-cdot-apply-mtu-from-network-provider-d12179a2374cdda0.yaml releasenotes/notes/netapp-cdot-clone-split-control-a68b5fc80f1fc368.yaml releasenotes/notes/netapp-cdot-configure-nfs-versions-83e3f319c4592c39.yaml releasenotes/notes/netapp-cdot-optimized-migration-within-share-server-92cfa1bcf0c317fc.yaml releasenotes/notes/netapp-cdot-quality-of-service-limits-c1fe8601d00cb5a8.yaml releasenotes/notes/netapp-cdot-ss-multiple-dns-ip-df42a217977ce44d.yaml releasenotes/notes/netapp-cdot-switch-volume-efficiency-bd22733445d146f0.yaml releasenotes/notes/netapp-cdot-use-security-service-ou-4dc5835c9e00ad9d.yaml releasenotes/notes/netapp-create-share-from-snapshot-another-pool-330639b57aa5f04d.yaml 
releasenotes/notes/netapp-default-ipv6-route-13a9fd4959928524.yaml releasenotes/notes/netapp-ipv6-support-f448e99a7c112362.yaml releasenotes/notes/netapp-manage-unmanage-share-servers-635496b46e306920.yaml releasenotes/notes/netapp-replication-dhss-true-5b2887de8e9a2cb5.yaml releasenotes/notes/netapp-support-filtering-api-tracing-02d1f4271f44d24c.yaml releasenotes/notes/netapp_cdot_performance_utilization-aff1b498a159470e.yaml releasenotes/notes/neutron-binding-driver-43f01565051b031b.yaml releasenotes/notes/newton-migration-improvements-cf9d3d6e37e19c94.yaml releasenotes/notes/nexenta-manila-drivers-cbd0b376a076ec50.yaml releasenotes/notes/nexentastor5-v1.1-1ad6c8f7b5cc11b6.yaml releasenotes/notes/per-backend-az-590c68be0e2cb4bd.yaml releasenotes/notes/powermax-rebrand-manila-a46a0c2ac0aa77ed.yaml releasenotes/notes/qb-bug-1733807-581e71e6581de28e.yaml releasenotes/notes/qnap-enhance-support-53848fda525b7ea4.yaml releasenotes/notes/qnap-fix-manage-snapshot-not-exist-4b111982ddc5fdae.yaml releasenotes/notes/qnap-fix-share-and-snapshot-inconsistant-bd628c6e14eeab14.yaml releasenotes/notes/qnap-manila-driver-a30fe4011cb90801.yaml releasenotes/notes/qnap-support-qes-200-639f3ad70687023d.yaml releasenotes/notes/qnap-support-qes-210-8775e6c210f3ca9f.yaml releasenotes/notes/qnap-tds-support-qes-24704313a0881c8c.yaml releasenotes/notes/remove-AllocType-from-huawei-driver-8b279802f36efb00.yaml releasenotes/notes/remove-confusing-deprecation-warnings-a17c20d8973ef2bb.yaml releasenotes/notes/remove-deprecated-default-options-00fed1238fb6dca0.yaml releasenotes/notes/remove-deprecated-size-limiter-9d7c8ab69cf85aea.yaml releasenotes/notes/remove-host-field-from-shares-and-replicas-a087f85bc4a4ba45.yaml releasenotes/notes/remove-intree-tempest-plugin-9fcf6edbeba47cba.yaml releasenotes/notes/remove-nova-net-support-from-service-instance-module-dd7559803fa01d45.yaml releasenotes/notes/remove-nova-network-support-f5bcb8b2fcd38581.yaml releasenotes/notes/remove-os-region-name-82e3cd4c7fb05ff4.yaml releasenotes/notes/remove-root-helper-config-option-fd517b0603031afa.yaml releasenotes/notes/remove-standalone-network-plugin-ip-version-440ebcf27ffd22f8.yaml releasenotes/notes/rename-cephfs-native-driver-3d9b4e3c6c78ee98.yaml releasenotes/notes/reset_tap_device_after_node_restart-0690a6beca077b95.yaml releasenotes/notes/revert-switch-to-use-glanceclient-bc462a5477d6b8cb.yaml releasenotes/notes/rules-for-managed-share-f28a26ffc980f6fb.yaml releasenotes/notes/share-mount-snapshots-b52bf3433d1e7afb.yaml releasenotes/notes/share-network-with-multiple-subnets-a56be8b646b9e463.yaml releasenotes/notes/share-replication-81ecf4a32a5c83b6.yaml releasenotes/notes/share-revert-to-snapshot-3d028fa00620651e.yaml releasenotes/notes/share-revert-to-snapshot-in-netapp-cdot-driver-37f645ec3c14313c.yaml releasenotes/notes/share-server-delete-failure-ca29d6b286a2c790.yaml releasenotes/notes/snapshot-force-delete-4432bebfb5a0bbc9.yaml releasenotes/notes/support-ipv6-in-drivers-and-network-plugins-1833121513edb13d.yaml releasenotes/notes/support-qes-114-5881c0ff0e7da512.yaml releasenotes/notes/switch-to-use-glanceclient-dde019b0b141caf8.yaml releasenotes/notes/unexpected-data-of-share-from-snap-134189fc0f3eeedf.yaml releasenotes/notes/unity-drvier-support-1gb-share-48f032dff8a6a789.yaml releasenotes/notes/unity-manage-server-share-snapshot-support-6a0bbbed74da13c7.yaml releasenotes/notes/unity-manila-ipv6-support-dd9bcf23064baceb.yaml releasenotes/notes/unity-revert-to-snapshot-support-1cffc3914982003d.yaml 
releasenotes/notes/unity-shrink-share-support-cc748daebfe8f562.yaml releasenotes/notes/unity-un-handles-share-server-mode-support-e179c092ab148948.yaml releasenotes/notes/unity-vnx-rename-options-1656168dd4bdba70.yaml releasenotes/notes/use-oslo-logging-for-config-options-388da64bb4ce45db.yaml releasenotes/notes/use-tooz-heartbeat-c6aa7e15444e63c3.yaml releasenotes/notes/user-messages-api-589ee7d68ccba70c.yaml releasenotes/notes/veritas-access-manila-driver-d75558c01ce6d428.yaml releasenotes/notes/vlan-enhancement-in-unity-driver-0f1d972f2f6d00d9.yaml releasenotes/notes/vmax-manila-support-7c655fc094c09367.yaml releasenotes/notes/vmax-rename-options-44d8123d14a23f94.yaml releasenotes/notes/vnx-manila-ipv6-support-9ae986431549cc63.yaml releasenotes/notes/vnx-ssl-verification-2d26a24e7e73bf81.yaml releasenotes/notes/windows-smb-fix-default-access-d4b9eee899e400a0.yaml releasenotes/notes/zfsonlinux-driver-improvement-create-share-from-snapshot-another-backend-44296f572681be35.yaml releasenotes/notes/zfssa-driver-add-share-manage-unmanage-9bd6d2e25cc86c35.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/liberty.rst releasenotes/source/mitaka.rst releasenotes/source/newton.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/queens.rst releasenotes/source/rocky.rst releasenotes/source/stein.rst releasenotes/source/train.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder releasenotes/source/locale/de/LC_MESSAGES/releasenotes.po releasenotes/source/locale/es/LC_MESSAGES/releasenotes.po releasenotes/source/locale/fr/LC_MESSAGES/releasenotes.po tools/check_exec.py tools/check_logging.sh tools/coding-checks.sh tools/cover.sh tools/enable-pre-commit-hook.sh tools/fast8.sh tools/install_venv.py tools/install_venv_common.py tools/test-setup.sh tools/validate-json-files.py tools/with_venv.sh